In-depth look at our work.
Conference: Conference on Computer Vision and Pattern Recognition (CVPR 2024)
Authors: Siwei Zhang, Bharat Lal Bhatnagar, Yuanlu Xu, Alexander Winkler, Petr Kadlecek, Siyu Tang, Federica Bogo
Conditioned on noisy and occluded input data, RoHM reconstructs complete, plausible motions in consistent global coordinates.

Conference: Conference on Computer Vision and Pattern Recognition (CVPR 2024)
Authors: Gen Li, Kaifeng Zhao, Siwei Zhang, Xiaozhong Lyu, Mihai Dusmanu, Yan Zhang, Marc Pollefeys, Siyu Tang
EgoGen is a new synthetic data generator that can produce accurate and rich ground-truth training data for egocentric perception tasks.

Conference: Conference on Computer Vision and Pattern Recognition (CVPR 2024)
Authors: Shaofei Wang, Božidar Antić, Andreas Geiger, Siyu Tang
IntrinsicAvatar learns relightable and animatable avatars from monocular videos, without any data-driven priors.

Conference: Conference on Computer Vision and Pattern Recognition (CVPR 2024)
Authors: Xiyi Chen, Marko Mihajlovic, Shaofei Wang, Sergey Prokudin, Siyu Tang
We introduce a morphable diffusion model to enable consistent, controllable novel view synthesis of humans from a single image. Given a single input image and a morphable mesh with a desired facial expression, our method directly generates 3D-consistent and photo-realistic images from novel viewpoints, which can then be used to reconstruct a coarse 3D model with off-the-shelf neural surface reconstruction methods such as NeuS2.

Conference: Conference on Computer Vision and Pattern Recognition (CVPR 2024)
Authors: Zhiyin Qian, Shaofei Wang, Marko Mihajlovic, Andreas Geiger, Siyu Tang
Given a monocular video, 3DGS-Avatar learns clothed human avatars that model pose-dependent appearance and generalize to out-of-distribution poses, with short training time and interactive rendering frame rates.

Conference: International Conference on Learning Representations (ICLR 2024), spotlight presentation
Authors: Marko Mihajlovic, Sergey Prokudin, Marc Pollefeys, Siyu Tang
ResField layers incorporate time-dependent weights into MLPs to effectively represent complex temporal signals.

Here's what we've been up to recently.
We have five papers accepted at ICCV 2023:
Dynamic Point Fields: Towards Efficient and Scalable Dynamic Surface Representations (oral presentation)
EgoHMR: Probabilistic Human Mesh Recovery in 3D Scenes from Egocentric Views (oral presentation)
GMD: Controllable Human Motion Synthesis via Guided...
We are excited to announce the EgoBody challenge at ECCV 2022! The EgoBody benchmark provides pseudo-ground-truth body meshes for natural human-human interaction sequences captured in the egocentric view.

Details about the dataset and the challenge: https://sanweiliti.github.io/egobody/egobody.html