Frequently Asked Questions
Major changes in BEDLAM2 compared to BEDLAM
- Body model
- BEDLAM2: SMPL-X gender-neutral model (locked head, no head bun), 16 shape space parameters
- BEDLAM: SMPL-X v1.1 model (with head bun)
  - clothing simulation and rendering: female/male models with 10 shape components
  - ground truth: gender-neutral model with 11 shape components
- Depth data
- BEDLAM2: 16-bit float
- BEDLAM: 32-bit float
- Camera ground truth
- BEDLAM2: also available in JSON format
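The depth format change matters when post-processing: a 16-bit float has a 10-bit mantissa, so large depth values are stored at coarse resolution. A minimal sketch with hypothetical depth values, assuming the depth maps load as NumPy arrays:

```python
import numpy as np

# Hypothetical depth values in meters; BEDLAM2 stores depth as 16-bit float,
# BEDLAM as 32-bit float.
depth_b2 = np.array([1.2345, 10.5, 999.75], dtype=np.float16)

# Upcast to 32-bit before further processing so downstream computations
# do not accumulate additional float16 rounding error.
depth = depth_b2.astype(np.float32)

# At magnitudes near 1000 m, float16 can only represent values in
# steps of 0.5 m.
print(np.spacing(np.float16(999.75)))
```

Upcasting does not recover precision already lost at render time; it only prevents further loss in your own pipeline.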
How to obtain body motion and clothing simulation ground truth for a given image frame
Sample data processing code for ML researchers: BEDLAM ML-Images GitHub repository
BEDLAM2 uses the same data-driven approach as BEDLAM to automate rendering with Unreal Engine. At its core is the body sequence definition file be_seq.csv, which specifies the bodies used in each rendered sequence and how they are augmented (simulated clothing, hair, shoes, body texture). This file is included in the released BEDLAM2 ground truth for each render job; the Comment column contains the relevant meta tags.
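As a minimal sketch of working with this file, assuming be_seq.csv parses as a flat CSV with Index, Body, start_frame, and Comment columns, and that each group starts with a row whose Comment contains a sequence_name=... meta tag (as described in the steps below):

```python
import pandas as pd

def find_group(csv_path: str, sequence_name: str) -> pd.DataFrame:
    """Return the rows of be_seq.csv belonging to one rendered sequence.

    Assumes a group's header row carries a 'sequence_name=<name>' meta
    tag in its Comment column; the exact CSV layout is an assumption.
    """
    df = pd.read_csv(csv_path)
    comments = df["Comment"].fillna("")
    # Positions where a new group starts (rows with a sequence_name tag)
    starts = df.index[comments.str.contains("sequence_name=")].tolist()
    for i, start in enumerate(starts):
        if f"sequence_name={sequence_name}" in comments[start]:
            # A group runs until the next group header (or end of file)
            end = starts[i + 1] if i + 1 < len(starts) else len(df)
            return df.iloc[start:end]
    raise KeyError(f"{sequence_name} not found in {csv_path}")

# group = find_group("be_seq.csv", "seq_000042")
```

The returned slice contains the group header row followed by its Body rows, mirroring the manual lookup described next.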
You can obtain the corresponding motion index as follows:
Example image: seq_000042_0123.png from renderjob 20241204_1_2120_hdri
- Find Group section in be_seq.csv where sequence_name=seq_000042
- Example: Line 87, Index column value: 85, frames=299
- the sequence_name tag in the Comment column corresponds to the image name prefix
- frames lists the number of frames that were rendered out. This number is always smaller than the number of frames in the source motion because we render with high-quality motion blur using temporal samples, which requires valid motion data before the first and after the last rendered frame so that those frames also get correct center-frame motion blur.
- Find desired Body tag in that group
- Example: Line 88, Index column value: 86
- Body=us_2214_4XL_2400
- subject: us_2214_4XL, subject motion: 2400
- start_frame=1
- Body column entry lists the used body motion and corresponding clothing simulation
- start_frame is the 30fps motion offset for the first image frame
- for single-body sequences this value is always 1
- for multi-body sequences this value is 1 only for the shortest motion and higher for all other motions, since we temporally subsample all group body motions to match the shortest motion duration in the group
- the first rendered image (_0000.png) always uses the second motion frame as ground truth
- Calculate motion index for 30fps motion .npz source data (zero-based indexing)
- motion_index = start_frame + image_index
- Example: motion_index = 1 + 123 = 124
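Putting the steps together, the index lookup can be sketched as follows (the filename pattern and zero-based .npz frame indexing follow the example above; the helper function itself is hypothetical):

```python
import re

def motion_index_for_image(image_name: str, start_frame: int) -> int:
    """Map a rendered image name like 'seq_000042_0123.png' to the
    zero-based frame index in the 30fps source motion .npz data."""
    match = re.fullmatch(r"(seq_\d+)_(\d+)\.png", image_name)
    if match is None:
        raise ValueError(f"unexpected image name: {image_name}")
    image_index = int(match.group(2))  # e.g. 123 for seq_000042_0123.png
    # start_frame comes from the body's row in be_seq.csv
    # (always 1 for single-body sequences).
    return start_frame + image_index

print(motion_index_for_image("seq_000042_0123.png", start_frame=1))  # 124
```

Note that _0000.png maps to motion index 1 when start_frame=1, i.e. the second motion frame, consistent with the motion blur padding described above.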