akolobov committed on
Commit
5e928d8
1 Parent(s): f024bd1

Update README.md

Files changed (1)
  1. README.md +59 -3
README.md CHANGED

---
license: cdla-permissive-2.0
---
# MoCapAct Dataset
Control of simulated humanoid characters is a challenging benchmark for sequential decision-making methods, as it assesses a policy’s ability to drive an inherently unstable, discontinuous, and high-dimensional physical system. Motion capture (MoCap) data can be very helpful in learning sophisticated locomotion policies by teaching a humanoid agent low-level skills (e.g., standing, walking, and running) that can then be used to generate high-level behaviors. However, even with MoCap data, controlling simulated humanoids remains very hard, because this data offers only kinematic information. Finding physical control inputs to realize the MoCap-demonstrated motions has required methods like reinforcement learning that need large amounts of compute, which has effectively served as a barrier to entry for this exciting research direction.

In an effort to broaden participation and facilitate evaluation of ideas in humanoid locomotion research, we are releasing MoCapAct (Motion Capture with Actions), a library of high-quality pre-trained agents that can track over three hours of MoCap data for a simulated humanoid in the `dm_control` physics-based environment, together with rollouts from these experts containing proprioceptive observations and actions. MoCapAct allows researchers to sidestep the computationally intensive task of training low-level control policies from MoCap data and instead use MoCapAct's expert agents and demonstrations for learning advanced locomotion behaviors. It also allows researchers to improve on our low-level policies by using them and their demonstration data as a starting point.

In our work, we use MoCapAct to train a single hierarchical policy capable of tracking the entire MoCap dataset within `dm_control`. We then re-use the learned low-level component to efficiently learn other high-level tasks. Finally, we use MoCapAct to train an autoregressive GPT model and show that it can perform natural motion completion given a motion prompt. We encourage the reader to visit our [project website](https://microsoft.github.io/MoCapAct/) to see videos of our results as well as get links to our paper and code.

## File Structure

The file structure of the dataset is:
```
├── all
│   ├── large
│   │   ├── large_1.tar.gz
│   │   ├── large_2.tar.gz
│   │   ...
│   │   └── large_43.tar.gz
│   └── small
│       ├── small_1.tar.gz
│       ├── small_2.tar.gz
│       └── small_3.tar.gz
├── sample
│   ├── large.tar.gz
│   └── small.tar.gz
└── videos
    ├── full_clip_videos.tar.gz
    └── snippet_videos.tar.gz
```
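
Individual files can be fetched programmatically from the Hugging Face Hub. The sketch below is only illustrative: it assumes the `huggingface_hub` client and a placeholder repository ID (`microsoft/mocapact-data`), so substitute the ID shown on this dataset page and whichever filename from the tree above you actually need.

```python
# Minimal sketch: download one tarball from this dataset repository.
# The repo_id below is an assumed placeholder -- replace it with the ID of
# this dataset repository, and pick any filename from the tree above.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="microsoft/mocapact-data",  # assumption; replace with the actual repo ID
    repo_type="dataset",
    filename="sample/small.tar.gz",
)
print("Downloaded to:", local_path)
```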

## MoCapAct Dataset Tarball Files
The dataset tarball files have the following structure; a short sketch for unpacking them follows the list:
- `all/small/small_*.tar.gz`: Contain HDF5 files with 20 rollouts per snippet. Due to file size limitations, we split the rollouts among multiple tarball files.
- `all/large/large_*.tar.gz`: Contain HDF5 files with 200 rollouts per snippet. Due to file size limitations, we split the rollouts among multiple tarball files.
- `sample/small.tar.gz`: Contains example HDF5 files with 20 rollouts per snippet.
- `sample/large.tar.gz`: Contains example HDF5 files with 200 rollouts per snippet.
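
Once a tarball is downloaded, it can be unpacked with Python's standard `tarfile` module. The sketch below uses placeholder local paths and only assumes that each extracted HDF5 file corresponds to one MoCap snippet, as described above.

```python
# Minimal sketch: unpack a dataset tarball and list the per-snippet HDF5 files.
# The archive and output paths are placeholders.
import tarfile
from pathlib import Path

archive = Path("/path/to/sample/small.tar.gz")
out_dir = Path("/path/to/extracted")
out_dir.mkdir(parents=True, exist_ok=True)

with tarfile.open(archive, "r:gz") as tar:
    tar.extractall(out_dir)

# Each HDF5 file holds the rollouts for one MoCap snippet.
for hdf5_file in sorted(out_dir.rglob("*.hdf5")):
    print(hdf5_file.name)
```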

The HDF5 structure is detailed in Appendix A.2 of the paper as well as at https://github.com/microsoft/MoCapAct#description.

An example of loading and inspecting an HDF5 file in Python is:
```python
import h5py

# Open the HDF5 file containing the rollouts for one MoCap snippet (read-only).
dset = h5py.File("/path/to/small/CMU_083_33.hdf5", "r")

# Actions taken by the expert in the first rollout of snippet CMU_083_33-0-194.
print("Expert actions from first rollout episode:")
print(dset["CMU_083_33-0-194/0/actions"][...])
```
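
To see which snippet groups and rollouts a file contains before indexing into a specific one, the keys can be enumerated with `h5py`. The sketch below only assumes the `<snippet name>/<rollout index>/...` layout illustrated above; the file path is a placeholder.

```python
import h5py

# Minimal sketch: list the snippet groups in an HDF5 file and, for each
# rollout group inside them, the datasets it holds (e.g., "actions").
with h5py.File("/path/to/small/CMU_083_33.hdf5", "r") as dset:
    for name, item in dset.items():
        if not isinstance(item, h5py.Group):
            continue  # skip any top-level metadata datasets
        print("Snippet group:", name)
        for key, child in item.items():
            if isinstance(child, h5py.Group):
                print("  rollout", key, "->", list(child.keys()))
```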

## MoCap Videos
There are two tarball files containing videos of the MoCap clips in the dataset:
- `full_clip_videos.tar.gz` contains videos of the full MoCap clips.
- `snippet_videos.tar.gz` contains videos of the snippets that were used to train the experts.

Note that these videos are playbacks of the MoCap clips themselves, not rollouts of the corresponding experts.