This work focuses on generating realistic, physically-based human behaviors from multi-modal inputs, which may only partially specify the desired motion. For example, the input may come from a VR controller providing arm motion and body velocity, partial key-point animation, computer vision applied to videos, or even higher-level motion goals. This requires a versatile low-level humanoid controller that can handle such sparse, under-specified guidance, seamlessly switch between skills, and recover from failures.
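To make the idea of sparse, under-specified input concrete, here is a minimal sketch of how such an input might be packaged as a masked target for the controller. This is an illustrative assumption, not the actual MHC interface: the Directive layout, joint count, and wrist indices are all placeholders.

```python
# Minimal sketch (not the actual MHC interface) of packaging a sparse,
# under-specified input as a masked directive. The Directive layout, joint
# count, and wrist indices below are illustrative assumptions.
from dataclasses import dataclass, field
import numpy as np

NUM_JOINTS = 24  # assumed humanoid joint count

@dataclass
class Directive:
    # Per-joint 3D targets; entries are meaningful only where the mask is True.
    joint_targets: np.ndarray = field(
        default_factory=lambda: np.zeros((NUM_JOINTS, 3)))
    joint_mask: np.ndarray = field(
        default_factory=lambda: np.zeros(NUM_JOINTS, dtype=bool))
    root_velocity: np.ndarray = field(default_factory=lambda: np.zeros(3))
    root_mask: bool = False  # whether the root velocity is specified

def directive_from_vr(left_hand: np.ndarray, right_hand: np.ndarray,
                      body_velocity: np.ndarray) -> Directive:
    """Build a directive in which only the wrists and root velocity are
    specified; all other joints stay masked for the controller to fill in."""
    LEFT_WRIST, RIGHT_WRIST = 20, 21  # hypothetical joint indices
    d = Directive()
    d.joint_targets[LEFT_WRIST] = left_hand
    d.joint_targets[RIGHT_WRIST] = right_hand
    d.joint_mask[[LEFT_WRIST, RIGHT_WRIST]] = True
    d.root_velocity = body_velocity
    d.root_mask = True
    return d
```

Other modalities (keypoint animation, video-based pose estimates, higher-level goals) would populate different subsets of the same masked layout.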
Current approaches for learning humanoid controllers from demonstration data capture some of these characteristics, but none achieve them all. To this end, we introduce the Masked Humanoid Controller (MHC), a novel approach that applies multi-objective imitation learning on augmented and selectively masked motion demonstrations. This training methodology yields an MHC with three key capabilities: catching up to out-of-sync input commands, combining elements from multiple motion sequences, and completing unspecified parts of motions from sparse multi-modal input.
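As a rough illustration of the kind of demonstration augmentation described above, the sketch below randomly masks target channels (so the policy must complete unspecified motion) and splices two clips at a random switch point (so it must combine skills and catch up when its state falls out of sync with the new target). The function names and the (frames × channels) clip layout are assumptions for illustration, not the paper's actual pipeline.

```python
# Illustrative sketch of demonstration augmentation via selective masking
# and clip splicing. Names and the (T, C) layout are assumptions, not the
# actual MHC training code.
import numpy as np

rng = np.random.default_rng(0)

def random_mask(clip: np.ndarray, p_keep: float = 0.5):
    """Keep each target channel with probability p_keep; masked channels
    are hidden from the imitation objective and left to the controller."""
    num_frames, num_channels = clip.shape
    mask = rng.random(num_channels) < p_keep  # one mask per channel
    return clip, np.broadcast_to(mask, (num_frames, num_channels)).copy()

def splice(clip_a: np.ndarray, clip_b: np.ndarray) -> np.ndarray:
    """Cut over from one demonstration to another at a random frame,
    forcing the policy to transition between skills mid-motion."""
    t = int(rng.integers(1, min(len(clip_a), len(clip_b))))
    return np.concatenate([clip_a[:t], clip_b[t:]], axis=0)

# Example: splice a 60-frame and an 80-frame clip, then mask half the channels.
clip, mask = random_mask(splice(np.zeros((60, 72)), np.ones((80, 72))))
```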
We demonstrate these key capabilities for an MHC learned over a dataset of 87 diverse skills and showcase different multi-modal use cases, including integration with planning frameworks, to highlight the MHC’s ability to solve new user-defined tasks without any fine-tuning.
Imitation: Reallusion Motions
Imitation: High Dynamic Reallusion Motions
Imitation: ASE Rollout
Catch-up: Out-of-sync Start
Catch-up: Random Perturbations
Catch-up: Zap in the Middle
Combine: Reallusion Motions
Combine: High Dynamic Reallusion Motions
Combine: Root Command + Upper Body
Video to Motion
Text to Motion
VR Controllers
Joystick Commands
FSM: Diverse
DAC: Different Discount Factor Objective
DAC: Different Reward Objective
DAC: Safety Objective