In-depth look at our work.
Conference: International Conference on Computer Vision (ICCV 2023)
Authors: Ayça Takmaz*, Jonas Schult*, Irem Kaftan, Mertcan Akçay, Bastian Leibe, Robert Sumner, Francis Engelmann, Siyu Tang
We propose the first multi-human body-part segmentation model, called Human3D 🧑🤝🧑, that directly operates on 3D scenes. In an extensive analysis, we validate the benefits of training on synthetic data across multiple baselines and tasks.
Conference: International Conference on Computer Vision (ICCV 2023)
Authors: Kaifeng Zhao, Yan Zhang, Shaofei Wang, Thabo Beeler, Siyu Tang
Interaction with environments is a core ability of virtual humans and remains a challenging problem. We propose a method capable of generating sequences of natural interaction events in real, cluttered scenes.
Conference: International Conference on Computer Vision (ICCV 2023)
Authors: Korrawe Karunratanakul, Konpat Preechakul, Supasorn Suwajanakorn, Siyu Tang
Our Guided Motion Diffusion (GMD) model can synthesize realistic human motion according to a text prompt, a reference trajectory, and key locations, while also avoiding stubbing your toe on giant X-mark circles that someone dropped on the floor. No need to retrain the diffusion model for each of these tasks!
Conference: International Conference on Computer Vision (ICCV 2023), oral presentation
Authors: Siwei Zhang, Qianli Ma, Yan Zhang, Sadegh Aliakbarian, Darren Cosker, Siyu Tang
We propose a novel scene-conditioned probabilistic method to recover the human mesh in a 3D environment from an egocentric-view image (typically with the body truncated).
Conference: International Conference on Computer Vision (ICCV 2023), oral presentation
Authors: Sergey Prokudin, Qianli Ma, Maxime Raafat, Julien Valentin, Siyu Tang
We propose to model dynamic surfaces with a point-based model, where the motion of a point over time is represented by an implicit deformation field. Working directly with points (rather than SDFs) allows us to easily incorporate various well-known deformation constraints, e.g., as-isometric-as-possible. We showcase the usefulness of this approach for creating animatable avatars in complex clothing.
Journal: SIGGRAPH 2023 Journal Track
Authors: Anpei Chen, Zexiang Xu, Xinyue Wei, Siyu Tang, Hao Su, Andreas Geiger
We present Dictionary Fields, a novel neural representation that decomposes a signal into a product of factors, each represented by a classical or neural field operating on transformed input coordinates.
Conference: International Conference on Robotics and Automation (ICRA 2023), Best Paper Nominee
Authors: Theodora Kontogianni, Ekin Celikkan, Siyu Tang, Konrad Schindler
We present interactive object segmentation directly in 3D point clouds. Users provide feedback to a deep learning model in the form of positive and negative clicks to segment a 3D object of interest.
Here's what we've been up to recently.
We have five papers accepted at ICCV 2023:
Dynamic Point Fields: Towards Efficient and Scalable Dynamic Surface Representations (oral presentation)
EgoHMR: Probabilistic Human Mesh Recovery in 3D Scenes from Egocentric Views (oral presentation)
GMD: Controllable Human Motion Synthesis via Guided...
We are excited to announce the EgoBody challenge at ECCV 2022! The EgoBody benchmark provides pseudo-ground-truth body meshes for natural human-human interaction sequences captured from the egocentric view.
Details about the dataset and the challenge: https://sanweiliti.github.io/egobody/egobody.html