
BEDLAM2.0:

Synthetic Humans and Cameras in Motion

Joachim Tesch¹   Giorgio Becherini¹   Prerana Achar¹   Anastasios Yiannakidis¹

Muhammed Kocabas²   Priyanka Patel²   Michael J. Black¹

¹Max Planck Institute for Intelligent Systems, Tübingen, Germany
²Meshcapade GmbH

NeurIPS 2025 Datasets and Benchmarks Track


BEDLAM2.0 is a large-scale synthetic video dataset of animated bodies in simulated clothing, designed to train and test algorithms on the task of 3D human pose and shape estimation. With more than 8 million images, it is a significant expansion of the popular BEDLAM dataset that increases pose and body shape variation, and adds shoes and strand-based hair. Most importantly, it introduces a wide range of realistic cameras and camera motions.

Abstract

Inferring 3D human motion from video remains a challenging problem with many applications. While traditional methods estimate the human in image coordinates, many applications require human motion to be estimated in world coordinates. This is particularly challenging when there is both human and camera motion. Progress on this topic has been limited by the lack of rich video data with ground-truth human and camera movement. We address this with BEDLAM2.0, a new dataset that goes beyond the popular BEDLAM dataset in important ways. In addition to introducing more diverse and realistic cameras and camera motions, BEDLAM2.0 increases the diversity and realism of body shape, motions, clothing, hair, and 3D environments. Additionally, it adds shoes, which were missing in BEDLAM. BEDLAM has become a key resource for training 3D human pose and motion regressors, and we show that BEDLAM2.0 is significantly better, particularly for training methods that estimate humans in world coordinates. We compare state-of-the-art methods trained on BEDLAM and BEDLAM2.0, and find that BEDLAM2.0 significantly improves accuracy over BEDLAM. For research purposes, we provide the rendered videos, ground-truth body parameters, and camera motions. We also provide the 3D assets to which we have rights and links to those from third parties.
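To make the camera/world distinction concrete, the following minimal numpy sketch shows the coordinate transform involved. The extrinsics convention used here (a camera-to-world rotation R_wc and a camera position t_wc) is a generic assumption for illustration, not necessarily the convention of the BEDLAM2.0 ground truth.

import numpy as np

def cam_to_world(joints_cam, R_wc, t_wc):
    """Map (J, 3) joint positions from camera to world coordinates,
    given a camera-to-world rotation R_wc (3, 3) and the camera
    position t_wc (3,) in world coordinates."""
    return joints_cam @ R_wc.T + t_wc

# Toy example: camera translated 2 m along the world x-axis, no rotation.
R_wc = np.eye(3)
t_wc = np.array([2.0, 0.0, 0.0])
joints_cam = np.array([[0.0, 0.0, 5.0]])  # one joint, 5 m in front of the camera
print(cam_to_world(joints_cam, R_wc, t_wc))  # [[2. 0. 5.]]

When the camera itself moves, R_wc and t_wc change per frame, which is exactly the ground truth that BEDLAM2.0 provides alongside the body parameters.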

News

  • 2026-02-04: The dataset will become available on a Hugging Face mirror for worldwide high-speed download access. The data upload has started, with estimated completion in February 2026.
  • 2025-12-05: Code and models released
  • 2025-11-18: Paper released on arXiv
  • 2025-09-22: Dataset released
  • 2025-09-18: Paper accepted at NeurIPS 2025 as an oral (Dataset and Benchmarks Track)

Video

Paper

  • arXiv page
    • Download paper PDF (includes appendix, 12 MB)

Dataset

To access the dataset, please register on this website using the "Sign In" button at the top right. Registration requires you to read and accept the license agreement in order to get access to the BEDLAM2 dataset, and you must provide a valid email address. Shortly afterwards, you will receive an automated email to confirm the registration. If you do not receive the confirmation email within a day, please check your spam folder first and try a different email provider before contacting us. Once you are signed in, you can access the download area via the new "Download" link in the top menu. Redistribution of the downloaded data is not permitted.

  • Synthetic image data:
    • 27,480 image sequences, 30 fps, 1280×720
      • 8 million images (PNG, 11 TB)
    • movies (MP4/H.264, 160 GB)
    • camera and body ground truth for all sequences (CSV+JSON; see the loading sketch after this list)
    • depth maps (EXR, 16-bit, available for 44% of images, 15 TB)
  • Render assets:
    • body textures
    • clothing assets
    • animation files for the SMPL-X model (locked head, no head bun)
    • strand-based hair grooms
    • shoes (represented as displacement maps)
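As a starting point for working with the per-sequence ground truth, the following minimal Python sketch shows one way to load it. The file names (camera_gt.csv, body_gt.json) and layout are hypothetical placeholders, not the dataset's documented schema; consult the data-processing code linked under "Code and Models" for the actual format.

import csv
import json
from pathlib import Path

def load_sequence_gt(seq_dir):
    """Load per-frame camera and body ground truth for one sequence.

    File names and fields here are hypothetical placeholders; the
    actual schema is documented with the dataset tooling.
    """
    seq = Path(seq_dir)

    # Camera ground truth: assumed one CSV row per frame.
    with open(seq / "camera_gt.csv", newline="") as f:
        camera_frames = list(csv.DictReader(f))

    # Body ground truth: assumed SMPL-X parameters stored as JSON.
    with open(seq / "body_gt.json") as f:
        bodies = json.load(f)

    return camera_frames, bodies

camera_frames, bodies = load_sequence_gt("bedlam2/seq_000001")
print(len(camera_frames), "frames of camera ground truth loaded")

Note that reading the 16-bit EXR depth maps requires an EXR-capable image reader (for example, the OpenEXR Python bindings, or OpenCV built with EXR support).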

Known Issues: A major focus of this dataset is to help improve methods for estimating 3D humans from video. The dataset is also useful for training and evaluating methods for 3D/4D point tracking, structure-from-motion with non-rigid motion, depth estimation, optical flow, and dynamic scene reconstruction. The list of known dataset issues can help you decide whether the dataset quality is sufficient for your use case.

Statistics

The BEDLAM2 statistics web app lets you explore the dataset through interactive plots.

Code and Models

  • Machine Learning - Images
    • Image model (CameraHMR) training, evaluation and demo Python code is available here.
    • Data processing and visualization Python code is available here.
  • Machine Learning - Videos
    • GVHMR_BEDLAM2
      • Training and evaluation Python code for GVHMR on BEDLAM2 dataset
      • Checkpoints of GVHMR trained on BEDLAM1 and BEDLAM2 datasets
    • PromptHMR (Prepare data section)
      • Checkpoints of PromptHMR-video trained on BEDLAM1 and BEDLAM2 datasets
  • Retargeting code (Unreal Engine)
  • Render code/tools for BEDLAM2 dataset image generation (Unreal Engine 5.3.2)
    • Additional BEDLAM2 dependencies and asset packs released in Download area (Unreal Engine Assets for Rendering):
      • Required Unreal Core Assets
      • Custom Unreal Engine plugins
      • Sample Unreal project
      • BEDLAM2 Unreal Assets Starter Pack
        • Unreal 5.3.2 render components in ready-to-use .uasset format
          • animated bodies and textures, clothing and textures, hair, shoes, HDRI
        • Small subset of bodies/clothing used for rendering of BEDLAM2 dataset
          • 51 body shapes, 150 motions, toeless sock feet for shoe augmentation
  • Processing code/tools for BEDLAM2 assets generation
    • Pipeline for sampling and processing SMPL-X parameters, shoes, and clothing (see the sampling sketch below)
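To illustrate the kind of body-shape sampling such a pipeline performs, here is a minimal sketch using the smplx Python package (pip install smplx). Drawing the shape parameters (betas) from a standard normal is an illustrative assumption, not the dataset's documented sampling strategy, and "models" must point at locally downloaded SMPL-X model files.

import torch
import smplx

# Path to locally downloaded SMPL-X model files (not distributed here).
model = smplx.create("models", model_type="smplx",
                     gender="neutral", num_betas=10)

# Illustrative assumption: sample shape parameters from a standard
# normal; the actual BEDLAM2 sampling strategy may differ.
betas = torch.randn(1, 10)

# Forward pass with the default (neutral) pose returns the body mesh.
output = model(betas=betas, return_verts=True)
print(output.vertices.shape)  # torch.Size([1, 10475, 3])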

Acknowledgements and Disclosure

We are grateful to the following people for their valuable contributions:

  • Sophie-Therese Lupas/STUDIO LUPAS GbR (3D clothing creation)
  • WowPatterns and Publicdomainvectors.org (pattern designs)
  • Poly Haven (HDRIs)
  • Meshcapade GmbH (skin textures)
  • Tsvetelina Alexiadis, Taylor Obersat, Claudia Gallatz, Asuka Bertler, Alpar Cseke, Arina Kuznetcova, Florentin Doll, Suraj Bhor, Tithi Rakshit, Tomasz Niewiadomsky, Valerian Fourel (3D outfit texturing, quality evaluation of the clothing simulations)
  • Software Workshop (deployment of dataset statistics web app and related code improvements)
  • Benjamin Pellkofer (IT support)

Blender (SMPL-X render data preparation) and Unreal Engine 5.3 (rendering) were invaluable tools in the image dataset generation pipeline, and we deeply thank their respective developers for their contributions.

Website: Embla Carousel (video carousel)

Disclosure: While MJB is a co-founder and Chief Scientist at Meshcapade, his research in this project was performed solely at, and funded solely by, the Max Planck Society.


Citation

@inproceedings{tesch2025bedlam2,
  title     = {{BEDLAM}2.0: Synthetic humans and cameras in motion},
  author    = {Joachim Tesch and Giorgio Becherini and Prerana Achar and Anastasios Yiannakidis and Muhammed Kocabas and Priyanka Patel and Michael J. Black},
  booktitle = {The Thirty-ninth Annual Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
  year      = {2025}
}

Contact

  • Questions: bedlam@tue.mpg.de
    • Please add a [BEDLAM2] tag to your subject line if your questions are specific to BEDLAM2
    • If you would like to report an issue with the dataset, please first check whether it is already documented on the list of known dataset issues
    • If you would like to ask a question, please first check whether it is already answered in the FAQ or in the issue tracker of the relevant GitHub code repository
  • Commercial licensing: ps-licensing@tue.mpg.de

© 2025 Max-Planck-Gesellschaft