My name is Peizhuo Li (李沛卓). I am a direct doctorate student at the Interactive Geometry Lab under the supervision of Prof. Olga Sorkine-Hornung. My research interests lie at the intersection of deep learning and computer graphics. In particular, I am interested in practical problems related to character animation. Prior to my PhD studies, I was an intern at the Visual Computing and Learning lab at Peking University, advised by Prof. Baoquan Chen.
Direct Doctorate, 2021 ~ Present
ETH Zurich
BSc in Computer Science, 2017 ~ 2021
Turing Class, Peking University
We introduce a neural motion synthesis approach that uses accessible pose data to generate plausible character motions by transferring motion from existing motion capture datasets. Our method effectively combines motion features from the source character with pose features of the target character and performs robustly even with small or noisy pose datasets. User studies indicate a preference for our retargeted motions, finding them more lifelike, more enjoyable to watch, and less prone to artifacts.
We introduce a novel approach for learning a common phase manifold from motion datasets across different characters, such as humans and dogs, using vector-quantized periodic autoencoders. This manifold clusters semantically similar motions into the same connected component and aligns them temporally without supervision. Our method enables effective motion matching and supports applications in motion retrieval, transfer, and stylization.
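To make the idea concrete, here is a minimal, hypothetical PyTorch sketch of the two ingredients the abstract names: a periodic latent that represents phase as a point on the unit circle, and a vector-quantized codebook shared across characters. The encoder, dimensions, and codebook size are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch: encode a motion window to a periodic (sin/cos) phase latent and
# snap it to a shared codebook via vector quantization. All names and sizes are
# illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VQPhaseEncoder(nn.Module):
    def __init__(self, n_joint_channels=69, n_phase_channels=8, codebook_size=128):
        super().__init__()
        # 1D convolution over time maps joint features to phase channels.
        self.conv = nn.Conv1d(n_joint_channels, 2 * n_phase_channels, kernel_size=31, padding=15)
        # Shared codebook: each entry is a 2D point matched against the phase samples.
        self.codebook = nn.Embedding(codebook_size, 2)

    def forward(self, motion):                       # motion: (batch, channels, frames)
        z = self.conv(motion)                        # (batch, 2*P, frames)
        b, _, t = z.shape
        z = z.view(b, -1, 2, t)                      # split into (sin, cos) pairs
        z = F.normalize(z, dim=2)                    # project onto the unit circle -> phase
        flat = z.permute(0, 1, 3, 2).reshape(-1, 2)  # (b*P*t, 2)
        # Nearest codebook entry per phase sample (vector quantization).
        d = torch.cdist(flat, self.codebook.weight)
        idx = d.argmin(dim=1)
        quantized = self.codebook(idx)
        # Straight-through estimator: gradients flow to the encoder as if unquantized.
        quantized = flat + (quantized - flat).detach()
        return quantized.view(b, -1, t, 2), idx.view(b, -1, t)
```

In this toy version, every motion window, regardless of the character it comes from, is snapped to the same small set of discrete codes, which is the property that makes cross-character alignment and matching straightforward.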
Data-driven and learning-based solutions for modeling dynamic garments have significantly advanced, especially in the context of digital humans. We model the dynamics of a garment by exploiting its local interactions with the underlying human body. At the core of our approach is a mesh-agnostic garment representation and a manifold-aware transformer network design, which together enable our method to generalize to unseen garment and body geometries.
We present Generative Motion Matching (GenMM), a generative model that “mines” as many diverse motions as possible from a single or few example sequences. GenMM is training-free and can synthesize a high-quality motion within a fraction of a second, even with highly complex and large skeletal structures.
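As a rough illustration of training-free, example-driven synthesis, the sketch below implements a generic nearest-neighbor patch matching loop over temporal windows in NumPy. The patch size, iteration count, and averaging scheme are assumptions for illustration; GenMM's actual formulation is multi-scale and considerably more elaborate.

```python
# Minimal sketch of patch-based motion matching: overlapping temporal patches of an
# initial guess are repeatedly replaced by their nearest neighbors from the example
# motion and blended back together. Parameters are illustrative assumptions.
import numpy as np

def nearest_patch_blend(example, init, patch=16, iters=10):
    """example, init: (frames, channels) motion features."""
    synth = init.copy()
    # Pre-extract all overlapping patches of the example motion.
    ex_patches = np.stack([example[i:i + patch] for i in range(len(example) - patch + 1)])
    ex_flat = ex_patches.reshape(len(ex_patches), -1)
    for _ in range(iters):
        acc = np.zeros_like(synth)
        cnt = np.zeros((len(synth), 1))
        for i in range(len(synth) - patch + 1):
            q = synth[i:i + patch].reshape(1, -1)
            # Nearest example patch under L2 distance.
            j = np.argmin(((ex_flat - q) ** 2).sum(axis=1))
            acc[i:i + patch] += ex_patches[j]
            cnt[i:i + patch] += 1
        synth = acc / cnt                    # average overlapping votes
    return synth

# Usage: start from a longer, noisy stand-in for the example and let the matching
# pull every window onto a plausible example patch.
example = np.random.randn(120, 72).astype(np.float32)
init = np.random.randn(240, 72).astype(np.float32)
new_motion = nearest_patch_blend(example, init)
```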
The emergence of neural networks revolutionized motion synthesis, yet synthesizing diverse motions remains challenging. We present MoDi, an unsupervised generative model trained on a diverse, unstructured, unlabeled dataset, capable of synthesizing high-quality, diverse motions. Despite the dataset's lack of structure, MoDi yields a structured latent space suited to semantic clustering, enabling applications such as semantic editing and crowd simulation. We also introduce an encoder that inverts real motions into MoDi's motion manifold, addressing ill-posed challenges such as completion from a prefix and spatial editing, and achieving state-of-the-art results that surpass recent techniques.
We present GANimator, a generative model that learns to synthesize novel motions from a single, short motion sequence. GANimator generates motions that resemble the core elements of the original motion while simultaneously synthesizing novel and diverse movements. It also enables applications including crowd simulation, key-frame editing, style transfer, and interactive control for a variety of skeletal structures, e.g., bipeds, quadrupeds, hexapeds, and more, all from a single input sequence.
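The sketch below shows, in PyTorch, one way to structure such a coarse-to-fine, single-sequence generator: the coarsest level maps noise to a short low-framerate clip, and each finer level upsamples the result in time and adds a noise-driven residual. The layer widths, level count, and residual design are illustrative assumptions, not GANimator's exact networks or training procedure.

```python
# Minimal sketch of a coarse-to-fine generator for single-sequence motion synthesis.
# Only the generator stack is shown; the adversarial training loop is omitted.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualLevel(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, 128, 5, padding=2), nn.LeakyReLU(0.2),
            nn.Conv1d(128, channels, 5, padding=2),
        )

    def forward(self, coarse, noise):
        up = F.interpolate(coarse, scale_factor=2, mode='linear', align_corners=False)
        return up + self.net(up + noise)     # refine the upsampled motion with a residual

class SingleSequenceGenerator(nn.Module):
    def __init__(self, channels=72, levels=4, base_frames=16):
        super().__init__()
        self.base_frames = base_frames
        self.channels = channels
        self.head = nn.Conv1d(channels, channels, 5, padding=2)
        self.levels = nn.ModuleList([ResidualLevel(channels) for _ in range(levels)])

    def forward(self, batch=1):
        # Coarsest level: noise in, short low-framerate clip out.
        x = self.head(torch.randn(batch, self.channels, self.base_frames))
        for level in self.levels:
            noise = torch.randn_like(F.interpolate(x, scale_factor=2, mode='linear',
                                                   align_corners=False))
            x = level(x, noise)
        return x                              # (batch, channels, base_frames * 2**levels)

motion = SingleSequenceGenerator()(batch=2)   # e.g. torch.Size([2, 72, 256])
```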
We develop a neural technique for articulating 3D characters using enveloping with a pre-defined skeletal structure, which is essential for animating characters with motion capture (mocap) data. Furthermore, we propose neural blend shapes – a set of corrective, pose-dependent shapes used to address the notorious artifacts caused by standard rigging and skinning techniques in joint regions.
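For context, the sketch below spells out the deformation model this builds on: linear blend skinning applied to rest-pose vertices that have first been displaced by pose-dependent corrective offsets. The tensor shapes and the small MLP standing in for the offset predictor are illustrative assumptions, not the paper's network.

```python
# Minimal sketch: linear blend skinning (LBS) plus additive, pose-dependent offsets.
import torch
import torch.nn as nn

def skin_with_blend_shapes(rest_verts, skin_weights, joint_transforms, corrective_offsets):
    """
    rest_verts:         (V, 3)    rest-pose vertex positions
    skin_weights:       (V, J)    per-vertex skinning weights (rows sum to 1)
    joint_transforms:   (J, 4, 4) world-space joint transforms for the current pose
    corrective_offsets: (V, 3)    pose-dependent corrections added in the rest pose
    """
    v = rest_verts + corrective_offsets                       # apply corrective shapes first
    v_h = torch.cat([v, torch.ones(len(v), 1)], dim=1)        # homogeneous coordinates (V, 4)
    # Blend the joint transforms per vertex, then transform: standard LBS.
    blended = torch.einsum('vj,jab->vab', skin_weights, joint_transforms)   # (V, 4, 4)
    posed = torch.einsum('vab,vb->va', blended, v_h)[:, :3]
    return posed

class CorrectiveShapes(nn.Module):
    """Tiny stand-in for a network mapping pose parameters to per-vertex offsets."""
    def __init__(self, n_pose_params, n_verts):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(n_pose_params, 256), nn.ReLU(),
                                 nn.Linear(256, n_verts * 3))

    def forward(self, pose):                                   # pose: (n_pose_params,)
        return self.mlp(pose).view(-1, 3)

# Usage with random stand-in data (identity joint transforms, softmax skin weights).
net = CorrectiveShapes(n_pose_params=72, n_verts=6890)
offsets = net(torch.randn(72))
posed = skin_with_blend_shapes(torch.randn(6890, 3),
                               torch.softmax(torch.randn(6890, 24), dim=1),
                               torch.eye(4).expand(24, 4, 4), offsets)
```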
We introduce a novel deep learning framework for data-driven motion retargeting between skeletons that may have different structures yet correspond to homeomorphic graphs. Importantly, our approach learns how to retarget without requiring any explicit pairing between the motions in the training set.
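A minimal sketch of a skeleton-aware building block is given below: each joint aggregates features from its kinematic neighbors with its own linear layer, so the operator respects the skeleton's graph structure. The toy topology and feature sizes are assumptions; the full framework also uses skeletal pooling to move between homeomorphic skeletons.

```python
# Minimal sketch of a skeleton-aware convolution over a kinematic tree.
import torch
import torch.nn as nn

class SkeletalConv(nn.Module):
    def __init__(self, neighbors, in_feat, out_feat):
        super().__init__()
        self.neighbors = neighbors                     # list of joint-index lists (incl. self)
        self.linears = nn.ModuleList(
            nn.Linear(in_feat * len(nb), out_feat) for nb in neighbors)

    def forward(self, x):                              # x: (batch, joints, in_feat)
        out = [lin(x[:, nb].flatten(1)) for nb, lin in zip(self.neighbors, self.linears)]
        return torch.stack(out, dim=1)                 # (batch, joints, out_feat)

# Usage on a toy 5-joint chain: each joint sees itself and its kinematic neighbors.
neighbors = [[0, 1], [0, 1, 2], [1, 2, 3], [2, 3, 4], [3, 4]]
conv = SkeletalConv(neighbors, in_feat=7, out_feat=16)     # e.g. 7 = rotation + offset per joint
features = conv(torch.randn(8, 5, 7))                      # -> (8, 5, 16)
```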