Peizhuo Li

Direct Doctorate in Computer Science

IGL | ETH Zurich

Short Bio

My name is Peizhuo Li (李沛卓). I am a direct doctorate student at the Interactive Geometry Lab, ETH Zurich, under the supervision of Prof. Olga Sorkine-Hornung. My research lies at the intersection of deep learning and computer graphics; in particular, I am interested in practical problems in character animation. Prior to my doctoral studies, I was an intern at the Visual Computing and Learning lab at Peking University, advised by Prof. Baoquan Chen.

Interests

  • Computer Graphics
  • Character Animation
  • Deep Learning

Education

  • Direct Doctorate, 2021 – Present

    ETH Zurich

  • BSc in Computer Science, 2017 – 2021

    Turing Class, Peking University

Recent Publications

Neural Garment Dynamics via Manifold-Aware Transformers

Data-driven and learning-based solutions for modeling dynamic garments have advanced significantly, especially in the context of digital humans. We model the dynamics of a garment by exploiting its local interactions with the underlying human body. At the core of our approach are a mesh-agnostic garment representation and a manifold-aware transformer network design, which together enable our method to generalize to unseen garment and body geometries.

Example-based Motion Synthesis via Generative Motion Matching

We present Generative Motion Matching (GenMM), a generative model that “mines” as many diverse motions as possible from one or a few example sequences. GenMM is training-free and can synthesize a high-quality motion within a fraction of a second, even for highly complex and large skeletal structures.
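
The paper’s actual multi-scale, skeleton-aware matching algorithm is more involved, but the training-free, nearest-neighbor flavor of the idea can be illustrated on a generic feature sequence. The sketch below is my own toy code (the names extract_patches and nn_match_synthesis are illustrative, not the paper’s API): it starts from noise and repeatedly replaces each temporal patch of the output with its nearest neighbor among the example’s patches.

```python
import numpy as np

def extract_patches(motion, patch_len):
    """Slice a motion (T x D array of per-frame features) into overlapping temporal patches."""
    T = motion.shape[0]
    return np.stack([motion[t:t + patch_len] for t in range(T - patch_len + 1)])

def nn_match_synthesis(example, length, patch_len=8, n_iters=10, seed=0):
    """Toy, training-free patch-based synthesis: initialize the output with noise,
    then repeatedly match every temporal patch of the output to its nearest
    example patch and blend the matched patches back into a sequence."""
    rng = np.random.default_rng(seed)
    ex_patches = extract_patches(example, patch_len)       # (P, L, D)
    ex_flat = ex_patches.reshape(len(ex_patches), -1)      # (P, L*D)
    out = rng.standard_normal((length, example.shape[1]))
    for _ in range(n_iters):
        out_flat = extract_patches(out, patch_len).reshape(-1, ex_flat.shape[1])
        # For each output patch, find the closest example patch (exhaustive search).
        dists = ((out_flat[:, None, :] - ex_flat[None, :, :]) ** 2).sum(-1)
        matched = ex_patches[dists.argmin(axis=1)]         # (Q, L, D)
        # Overlapping matched patches vote for each frame; average the votes.
        acc = np.zeros_like(out)
        cnt = np.zeros((length, 1))
        for i, patch in enumerate(matched):
            acc[i:i + patch_len] += patch
            cnt[i:i + patch_len] += 1
        out = acc / cnt
    return out

# Example: synthesize a 200-frame sequence from a 60-frame sinusoidal toy motion.
t = np.linspace(0, 4 * np.pi, 60)
example = np.stack([np.sin(t), np.cos(2 * t)], axis=1)     # (60, 2)
synthesized = nn_match_synthesis(example, length=200)
```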

MoDi: Unconditional Motion Synthesis from Diverse Data

The emergence of neural networks has revolutionized motion synthesis, yet synthesizing diverse motions remains challenging. We present MoDi, an unsupervised generative model trained on a diverse, unstructured, and unlabeled dataset, capable of synthesizing high-quality, diverse motions. Despite the dataset’s lack of structure, MoDi yields a well-structured latent space that supports semantic clustering, enabling applications such as semantic editing and crowd simulation. We also introduce an encoder that inverts real motions into MoDi’s motion manifold, addressing ill-posed tasks such as motion completion from a prefix and spatial editing, and achieving state-of-the-art results that surpass recent techniques.
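
As a rough illustration of the inversion component: a generic encoder-based inversion trains an encoder E against a frozen pretrained generator G so that G(E(m)) reconstructs a real motion m. The PyTorch sketch below is my own generic formulation under that assumption, not MoDi’s actual architecture or training objective.

```python
import torch
import torch.nn.functional as F

def train_inversion_encoder(G, E, motions, n_steps=1000, batch_size=16, lr=1e-4):
    """Generic encoder-based inversion: learn E so that the frozen, pretrained
    generator G reconstructs real motions from the latent codes E produces."""
    G.eval()
    for p in G.parameters():
        p.requires_grad_(False)            # keep the pretrained generator frozen
    opt = torch.optim.Adam(E.parameters(), lr=lr)
    for _ in range(n_steps):
        idx = torch.randint(len(motions), (batch_size,))
        m = motions[idx]                   # mini-batch of real motions
        loss = F.mse_loss(G(E(m)), m)      # reconstruction in motion space
        opt.zero_grad()
        loss.backward()
        opt.step()
    return E
```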

GANimator: Neural Motion Synthesis from a Single Sequence

We present GANimator, a generative model that learns to synthesize novel motions from a single, short motion sequence. GANimator generates motions that preserve the core elements of the original motion while simultaneously synthesizing novel and diverse movements. It also enables applications including crowd simulation, key-frame editing, style transfer, and interactive control for a variety of skeletal structures, e.g., bipeds, quadrupeds, hexapeds, and more, all from a single input sequence.

Learning Skeletal Articulations with Neural Blend Shapes

We develop a neural technique for articulating 3D characters using enveloping with a pre-defined skeletal structure, which is essential for animating characters with motion capture (mocap) data. Furthermore, we propose neural blend shapes – a set of corrective, pose-dependent shapes used to address the notorious artifacts that standard rigging and skinning techniques produce in joint regions.
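
Schematically, and in my own notation rather than the paper’s, such corrective shapes enter the standard linear blend skinning formula before the bone transformations are applied; roughly speaking, the skinning weights, blend shapes, and their pose-dependent coefficients are what the networks predict:

```latex
% Linear blend skinning with pose-dependent corrective blend shapes
% (schematic; \theta is the pose, w_{ij} skinning weights, T_j bone
% transformations, b_k corrective shapes with coefficients c_k(\theta)).
v_i' = \sum_j w_{ij}\, T_j(\theta) \Big( v_i + \sum_k c_k(\theta)\, b_{k,i} \Big)
```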

Skeleton-Aware Networks for Deep Motion Retargeting

We introduce a novel deep learning framework for data-driven motion retargeting between skeletons that may have different structures yet correspond to homeomorphic graphs. Importantly, our approach learns to retarget without requiring any explicit pairing between the motions in the training set.
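
The paper’s skeleton-aware operators are specialized; as a generic stand-in, a graph convolution over the skeleton’s joint adjacency captures the basic flavor of making the network depend on skeletal topology rather than on a fixed joint ordering. The sketch below is my own simplified illustration, not the paper’s actual operator.

```python
import numpy as np

def skeletal_graph_conv(features, adjacency, W_self, W_nbr):
    """Toy skeleton-aware layer: each joint combines its own features with the
    mean of its topological neighbors, so the operator is defined by the
    skeleton's graph structure (a joints x joints adjacency matrix) rather
    than by a fixed joint ordering."""
    deg = adjacency.sum(axis=1, keepdims=True).clip(min=1)
    nbr_mean = (adjacency @ features) / deg        # average neighbor features
    return np.tanh(features @ W_self + nbr_mean @ W_nbr)

# Example: a 5-joint skeleton (chain 0-1-2-3 with a branch at joint 2).
J, C_in, C_out = 5, 8, 16
adjacency = np.zeros((J, J))
for a, b in [(0, 1), (1, 2), (2, 3), (2, 4)]:
    adjacency[a, b] = adjacency[b, a] = 1.0
rng = np.random.default_rng(0)
out = skeletal_graph_conv(rng.standard_normal((J, C_in)),
                          adjacency,
                          rng.standard_normal((C_in, C_out)) * 0.1,
                          rng.standard_normal((C_in, C_out)) * 0.1)
```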