# KinDER Demonstration Datasets
Human-collected robot demonstration datasets for the KinDER physical-reasoning benchmark (RSS 2026).
These datasets are used to train the imitation learning baselines (kinder-imitation-learning) and the model-based RL baseline (kinder-mbrl) shipped with KinDER.
## Dataset summary

| File | KinDER environment | Episodes | Total steps | Mean ep. length |
|---|---|---|---|---|
| `motion2d_p0.hdf5` | Motion2D-p0 | 111 | 4,634 | 41.7 |
| `stickbutton2d_b1.hdf5` | StickButton2D-b1 | 114 | 8,141 | 71.4 |
| `dynobstruction2d_o1.hdf5` | DynObstruction2D-o1 | 101 | 8,901 | 88.1 |
| `dynpushpullhook2d_o5.hdf5` | DynPushPullHook2D-o5 | 103 | 22,435 | 217.8 |
| `BaseMotion3D_110.hdf5` | BaseMotion3D | 110 | 2,785 | 25.3 |
| `shelf3d_106_new.hdf5` | Shelf3D | 106 | 46,598 | 439.6 |
| `sweep3d_100.hdf5` | SweepIntoDrawer3D | 100 | 51,741 | 517.4 |
| `transport3D_o2.hdf5` | Transport3D-o2 | 117 | 126,860 | 1,084.3 |
| **Total** | | **862** | **272,095** | |
## Data collection

- **2D environments** (motion2d, stickbutton2d, dynobstruction2d, dynpushpullhook2d): demonstrations were collected by teleoperation with a PS5 controller.
- **3D environments** (BaseMotion3D, shelf3d, sweep3d, transport3D): demonstrations were collected using an iPhone web app (XR Browser) or a VR headset, streaming 6-DoF pose commands to the TidyBot robot in simulation.
## File format

Every `.hdf5` file follows the same schema:

```
data/
  demo_0/
    actions          (T, action_dim)    # control commands applied at each step
    obs/
      robot_state    (T, robot_dim)     # proprioceptive robot state
      env_state      (T, env_dim)       # environment / object state
      image          (T, 224, 224, 3)   # top-down / scene RGB image [2D envs]
      base_image     (T, 224, 224, 3)   # robot base camera [3D envs]
      wrist_image    (T, 224, 224, 3)   # robot wrist camera [3D envs]
      overview_image (T, 224, 224, 3)   # static overview camera [3D envs]
  demo_1/
    ...
```

All images are `uint8` RGB arrays of shape `(T, 224, 224, 3)`; state and action arrays are `float32`.
## Per-dataset dimensions

### 2D environments

| File | action_dim | robot_dim | env_dim | Cameras |
|---|---|---|---|---|
| `motion2d_p0.hdf5` | 5 | 9 | 10 | image |
| `stickbutton2d_b1.hdf5` | 5 | 9 | 19 | image |
| `dynobstruction2d_o1.hdf5` | 5 | 24 | 44 | image |
| `dynpushpullhook2d_o5.hdf5` | 5 | 24 | 106 | image |

### 3D environments (TidyBot)

| File | action_dim | robot_dim | env_dim | Cameras |
|---|---|---|---|---|
| `BaseMotion3D_110.hdf5` | 11 | 19 | 3 | base, wrist, overview |
| `shelf3d_106_new.hdf5` | 11 | 22 | 23 | base, wrist, overview |
| `sweep3d_100.hdf5` | 11 | 22 | 131 | base, wrist, overview |
| `transport3D_o2.hdf5` | 11 | 19 | 48 | base, wrist, overview |

- **3D action space (11-dim):** `[base_x, base_y, base_yaw, joint_1, ..., joint_7, gripper]`, i.e. delta joint-position commands.
- **3D robot state (19-22 dim):** base pose + joint angles + joint velocities + gripper state.
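Given the action layout above, the 11-dim vector can be split into named components. The helper below is an illustrative sketch (the function name and return structure are our own, not part of the KinDER API); the index ranges follow the documented ordering:

```python
import numpy as np

def split_action(a: np.ndarray) -> dict:
    """Split an 11-dim 3D action into named components.

    Index layout (from the dataset card):
    [base_x, base_y, base_yaw, joint_1..joint_7, gripper].
    """
    return {
        "base": a[..., 0:3],      # (x, y, yaw) delta for the mobile base
        "joints": a[..., 3:10],   # delta joint-position commands for the 7-DoF arm
        "gripper": a[..., 10:11], # gripper command
    }

parts = split_action(np.zeros(11, dtype=np.float32))
print(parts["joints"].shape)  # (7,)
```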
## Loading the data

### With h5py (any baseline)

```python
import h5py

with h5py.File("sweep3d_100.hdf5", "r") as f:
    episodes = list(f["data"].keys())            # ["demo_0", "demo_1", ...]
    ep = f["data"][episodes[0]]
    actions = ep["actions"][:]                   # (T, 11)
    robot_state = ep["obs"]["robot_state"][:]    # (T, 22)
    env_state = ep["obs"]["env_state"][:]        # (T, 131)
    base_image = ep["obs"]["base_image"][:]      # (T, 224, 224, 3)
```
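The same access pattern can be used to reproduce the statistics in the summary table. A small sketch (the helper names are our own; `episode_lengths` assumes the file has been downloaded locally):

```python
def episode_lengths(path: str) -> list:
    """Read per-episode lengths (number of steps) from one KinDER demo file."""
    import h5py  # local import: only needed when actually reading a file
    with h5py.File(path, "r") as f:
        return [f["data"][demo]["actions"].shape[0] for demo in sorted(f["data"])]

def summarize(lengths: list) -> tuple:
    """Episode count, total steps, and mean episode length, as in the summary table."""
    return len(lengths), sum(lengths), round(sum(lengths) / len(lengths), 1)

# e.g. summarize(episode_lengths("sweep3d_100.hdf5")) should match the
# table row for sweep3d_100.hdf5: (100, 51741, 517.4)
```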
### With kinder-mbrl (world model training)

```shell
python experiments/train_world_model.py \
    --mode train \
    --hdf5_path sweep3d_100.hdf5 \
    --output_dir output \
    --epochs 1000
```

The training script reads `robot_state` and `env_state` from `obs/` and computes per-step (state, action, delta) transitions automatically.
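The transition construction can be sketched as follows. This is a simplified illustration, not the actual kinder-mbrl code; in particular, concatenating robot and environment state and predicting the one-step state delta are assumptions about the setup:

```python
import numpy as np

def make_transitions(robot_state, env_state, actions):
    """Build (state, action, delta) training tuples from one episode.

    Sketch: the state is the concatenation of robot and environment
    state, and the regression target is the one-step state delta.
    """
    state = np.concatenate([robot_state, env_state], axis=-1)  # (T, robot_dim + env_dim)
    s_t = state[:-1]              # input state at step t
    a_t = actions[:-1]            # action applied at step t
    delta = state[1:] - state[:-1]  # target: s_{t+1} - s_t
    return s_t, a_t, delta
```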
### With kinder-imitation-learning (Diffusion Policy)

Convert raw demos to the diffusion policy format:

```shell
cd kinder-baselines/kinder-models/scripts
python demos_to_hdf5.py \
    --teleop_data_dir $YOUR_DATA_DIR \
    --output_path $OUTPUT_HDF5_PATH \
    --render_images
```

Then train:

```shell
cd ~/kinder-diffusion-policy
mamba activate robodiff
python train.py --config-name=train_sweep3d_image
```
## Baselines trained on this data

| Baseline | Reference |
|---|---|
| Diffusion Policy (DP) | kinder-imitation-learning |
| DP + Environment States (DPES) | kinder-imitation-learning |
| Finetuned π0.5 VLA | kinder-openpi |
| MLP World Model + Random-shooting MPC | kinder-mbrl |
## Citation

If you use these datasets, please cite the KinDER benchmark:

```bibtex
@inproceedings{huang2026kinder,
  title     = {KinDER: A Physical Reasoning Benchmark for Robot Learning and Planning},
  author    = {Huang, Yixuan and Li, Bowen and Saxena, Vaibhav and Liang, Yichao and Mishra, Utkarsh and Ji, Liang and Zha, Lihan and Wu, Jimmy and Kumar, Nishanth and Scherer, Sebastian and Xu, Danfei and Silver, Tom},
  booktitle = {Robotics: Science and Systems (RSS)},
  year      = {2026}
}
```