In-depth look at our work.
Journal: SIGGRAPH 2023 Journal Track
Authors: Anpei Chen, Zexiang Xu, Xinyue Wei, Siyu Tang, Hao Su, Andreas Geiger
We present Dictionary Fields, a novel neural representation which decomposes a signal into a product of factors, each represented by a classical or neural field representation, operating on transformed input coordinates.

Conference: International Conference on Robotics and Automation (ICRA 2023), Best Paper Nominee
Authors: Theodora Kontogianni, Ekin Celikkan, Siyu Tang, Konrad Schindler
We present a method for interactive object segmentation directly in 3D point clouds: users provide feedback to a deep learning model in the form of positive and negative clicks to segment a 3D object of interest.

Conference: Conference on Computer Vision and Pattern Recognition (CVPR 2023)
Authors: Korrawe Karunratanakul, Sergey Prokudin, Otmar Hilliges, Siyu Tang
We present HARP (HAnd Reconstruction and Personalization), a personalized hand avatar creation approach that takes a short monocular RGB video of a human hand as input and reconstructs a faithful hand avatar with high-fidelity appearance and geometry.

Conference: International Conference on Robotics and Automation (ICRA 2023)
Authors: Jonas Schult, Francis Engelmann, Alexander Hermans, Or Litany, Siyu Tang, and Bastian Leibe
Mask3D predicts accurate 3D semantic instances, achieving state-of-the-art results on ScanNet, ScanNet200, S3DIS, and STPLS3D.

Conference: International Conference on 3D Vision (3DV 2022)
Authors: Qianli Ma, Jinlong Yang, Michael J. Black, and Siyu Tang
SkiRT further unleashes the power of point-based digital human representations: it models the dynamic shapes of 3D clothed humans, including those wearing challenging outfits such as skirts and dresses.

Conference: European Conference on Computer Vision (ECCV 2022)
Authors: Siwei Zhang, Qianli Ma, Yan Zhang, Zhiyin Qian, Taein Kwon, Marc Pollefeys, Federica Bogo, and Siyu Tang
A large-scale dataset of accurate 3D human body shape, pose, and motion of humans interacting in 3D scenes, with multi-modal streams from third-person and egocentric views, captured by Azure Kinects and a HoloLens2.

Here’s what we've been up to recently.
We are excited to announce the EgoBody challenge at ECCV 2022! The EgoBody benchmark provides pseudo-ground-truth body meshes for natural human-human interaction sequences captured in the egocentric view. Details about the dataset and the challenge: https://sanweiliti.github.io/egobody/egobody.html
We have five papers accepted at ECCV 2022:
1. ARAH: Animatable Volume Rendering of Articulated Human SDFs
2. COINS: Compositional Human-Scene Interaction Synthesis with Semantic Control
3. EgoBody: Human Body Shape and Motion of Interacting People from Head-Mounted Devices
4. SAGA: Stochastic Whole-Body...