Qianli Ma

PhD student
CAB G 82.1

Basic Information

I am a PhD student at the Max Planck Institute for Intelligent Systems and ETH Zürich, co-supervised by Michael Black and Siyu Tang. I am also associated with the Max Planck ETH Center for Learning Systems. Prior to this, I received my Master's degree in Optics and Photonics from Karlsruhe Institute of Technology and my Bachelor's degree in Physics from Peking University.

My research uses machine learning to solve computer vision and graphics problems, with a current focus on 3D representations and deformable 3D shape modeling.


Authors: Siwei Zhang, Qianli Ma, Yan Zhang, Sadegh Aliakbarian, Darren Cosker, Siyu Tang

We propose a novel scene-conditioned probabilistic method to recover the human mesh in a 3D environment from an egocentric-view image, in which the body is typically truncated.

Authors: Sergey Prokudin, Qianli Ma, Maxime Raafat, Julien Valentin, Siyu Tang

We propose to model dynamic surfaces with a point-based model, where the motion of a point over time is represented by an implicit deformation field. Working directly with points (rather than SDFs) allows us to easily incorporate well-known deformation constraints, e.g. as-isometric-as-possible. We showcase the usefulness of this approach for creating animatable avatars in complex clothing.

Authors: Qianli Ma, Jinlong Yang, Michael J. Black and Siyu Tang

SkiRT further unleashes the power of point-based digital human representations: it models the dynamic shapes of 3D clothed humans, including those wearing challenging outfits such as skirts and dresses.

Authors: Siwei Zhang, Qianli Ma, Yan Zhang, Zhiyin Qian, Taein Kwon, Marc Pollefeys, Federica Bogo and Siyu Tang

A large-scale dataset of accurate 3D human body shape, pose and motion of humans interacting in 3D scenes, with multi-modal streams from third-person and egocentric views, captured by Azure Kinect cameras and a HoloLens 2.

Authors: Qianli Ma, Jinlong Yang, Siyu Tang and Michael J. Black

We introduce POP — a point-based, unified model for multiple subjects and outfits that can turn a single, static 3D scan into an animatable avatar with natural pose-dependent clothing deformations.

Authors: Shaofei Wang, Marko Mihajlovic, Qianli Ma, Andreas Geiger, Siyu Tang

MetaAvatar is a meta-learned model that represents generalizable and controllable neural signed distance fields (SDFs) for clothed humans. It can be quickly fine-tuned to represent unseen subjects given as few as 8 monocular depth images.

Authors: Qianli Ma, Shunsuke Saito, Jinlong Yang, Siyu Tang and Michael J. Black

SCALE models 3D clothed humans with hundreds of articulated surface elements, resulting in avatars with realistic clothing that deforms naturally even in the presence of topological change.

Authors: Siwei Zhang, Yan Zhang, Qianli Ma, Michael J. Black, Siyu Tang

Automated synthesis of realistic humans posed naturally in a 3D scene is essential for many applications. In this paper, we propose explicit representations for the 3D scene and the person-scene contact relation in a coherent manner.

Authors: Qianli Ma, Jinlong Yang, Anurag Ranjan, Sergi Pujades, Gerard Pons-Moll, Siyu Tang, and Michael J. Black

CAPE is a Graph-CNN-based generative model for dressing 3D meshes of the human body. It is compatible with the popular body model SMPL and generalizes to diverse body shapes and poses. The CAPE Dataset provides SMPL mesh registrations of 4D scans of people in clothing, along with registered scans of the ground-truth body shapes under clothing.