Teleoperation Data Collection#
This workflow covers collecting demonstrations for the Unitree G1 static apple-to-plate task using a Meta Quest 3 or Pico 4 Ultra headset, supported by NVIDIA IsaacTeleop.
No teleoperation hardware?
The static task drops the locomotion / squat / turn channels but still needs bimanual end-effector control, so a keyboard or SpaceMouse is not practical. If you don’t have an XR headset, you can still smoke-test the pipeline with the Immersive Web Emulator Runtime (IWER). Open https://nvidia.github.io/IsaacTeleop/client in desktop Chrome (instead of the Quest browser); the page auto-loads IWER and emulates a Quest 3 with your mouse and keyboard, per the IsaacTeleop Quick Start. Follow Steps 1–4 below unchanged; the only difference is that Step 3 is done from a desktop browser tab. Because the static task is upper-body-only, IWER drives it noticeably better than the loco-manipulation variant — you can plausibly complete a few demos with just mouse + keyboard, though a real Quest 3 still gives much smoother demonstrations.
Step 1: Start the CloudXR Runtime#
On the host machine, configure the firewall to allow CloudXR traffic. The required ports depend on the client type. The example below uses `ufw` (Ubuntu); on other distributions use the equivalent firewall tooling (e.g. `firewalld` on Fedora/RHEL, `pf` on macOS).

```
sudo ufw allow 49100/tcp   # Signaling
sudo ufw allow 47998/udp   # Media stream
sudo ufw allow 48322/tcp   # Proxy (HTTPS mode only)
```
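Before involving the headset, it can help to confirm that the TCP ports above are actually reachable from the client's network. The sketch below is a generic TCP reachability check, not part of IsaacTeleop; the host IP in the comment is a placeholder, and the UDP media port (47998) cannot be probed with a TCP connect.

```python
import socket

def tcp_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (replace the placeholder IP with your Isaac Lab host):
#   tcp_port_open("192.168.1.50", 49100)   # CloudXR signaling port
#   tcp_port_open("192.168.1.50", 48322)   # HTTPS proxy port
```

If a port reports closed even with the firewall rules applied, the runtime from Step 1 is likely not running yet, since the rules only permit traffic while something is listening.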
Start the CloudXR runtime from the Arena Docker container:
```
./docker/run_docker.sh
python -m isaacteleop.cloudxr
```
Attention
The first run will prompt users to accept the NVIDIA CloudXR License Agreement.
To accept the EULA, reply Yes when prompted with the below message:
```
NVIDIA CloudXR EULA must be accepted to run. View: https://github.com/NVIDIA/IsaacTeleop/blob/main/deps/cloudxr/CLOUDXR_LICENSE
Accept NVIDIA CloudXR EULA? [y/N]: Yes
```
Step 2: Start Arena Teleop#
In another terminal, start the Arena Docker container and launch the teleop session to verify the pipeline:
```
./docker/run_docker.sh
```

Run the following command to activate the IsaacTeleop CloudXR environment settings:

```
source ~/.cloudxr/run/cloudxr.env
```
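Both this step and Step 4 depend on the CloudXR environment variables being present in the current shell. If a session fails to start, a quick generic check like the one below can confirm which expected variables are missing; the variable name in the example is only a placeholder, since the exact names exported by `cloudxr.env` are not listed in this tutorial (inspect the file to find them).

```python
import os

def missing_env(names):
    """Return the expected variable names that are not set in this process."""
    return [n for n in names if n not in os.environ]

# Placeholder name: list the real variables found in ~/.cloudxr/run/cloudxr.env.
print(missing_env(["EXAMPLE_CLOUDXR_VAR"]))
```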
Important
Order matters. In the terminal where you will run Arena, source `~/.cloudxr/run/cloudxr.env` after the CloudXR runtime from Step 1 is already running, and before you start the Arena app. The Arena app must inherit the IsaacTeleop CloudXR environment variables.

Run the teleop script:
```
python isaaclab_arena/scripts/imitation_learning/teleop.py \
    --viz kit \
    --device cpu \
    galileo_g1_static_pick_and_place \
    --object apple_01_objaverse_robolab \
    --destination clay_plates_hot3d_robolab \
    --teleop_device openxr
```
In the running application, start the session from the XR tab in the application window.
Arena teleop session with XR running. Stereoscopic view (left) and OpenXR settings in the XR tab (right).#
Step 3: Connect from the headset device#
For detailed instructions, refer to Connect an XR Device:
A strong wireless connection is essential for a high-quality streaming experience. Refer to the CloudXR Network Setup guide for router configuration.
Open the browser on your headset and navigate to https://nvidia.github.io/IsaacTeleop/client.
Enter the IP address of your Isaac Lab host machine in the Server IP field.
Click the "Click https://<ip>:48322/ to accept cert" link that appears on the page. Accept the certificate in the new page that opens, then navigate back to the CloudXR.js client page.
Click Connect to begin teleoperation.
Note
Once you press Connect in the web browser, you should see the following control panel. Press Play to start teleoperation. You can also reset the scene by pressing the Reset button.
If the control panel is not visible (for example, behind a solid wall in the simulated environment), you can put the headset on before clicking Start XR in the Isaac Lab Arena application, and drag the control panel to a better location.
Teleoperation Controls:
Left joystick: Move the body forward/backward/left/right.
Right joystick: Squat (down), rotate torso (left/right).
Controllers: Move end-effector (EE) targets for the arms.
Note
If the simulation runs at a low FPS and teleoperation feels laggy, try reducing the XR render resolution under XR tab / Advanced Settings / Render Resolution.
Reducing render resolution from 1 (default) to 0.2.#
Step 4: Record with the headset device#
Note
Run the following command to activate the IsaacTeleop CloudXR environment settings again if you are starting the recording app from a different terminal:

```
source ~/.cloudxr/run/cloudxr.env
```
Recording: When ready to collect data, run the recording script from the Arena container:
```
export DATASET_DIR=/datasets/isaaclab_arena/static_apple_tutorial
mkdir -p $DATASET_DIR
```
```
# Record demonstrations with OpenXR teleop
python isaaclab_arena/scripts/imitation_learning/record_demos.py \
    --viz kit \
    --device cpu \
    --enable_cameras \
    --dataset_file $DATASET_DIR/arena_g1_static_apple_dataset_recorded.hdf5 \
    --num_demos 10 \
    --num_success_steps 10 \
    galileo_g1_static_pick_and_place \
    --object apple_01_objaverse_robolab \
    --destination clay_plates_hot3d_robolab \
    --teleop_device openxr
```
In the running application, start the session from the XR tab in the application window.
Follow Step 3 to connect the headset again.
Complete the task for each demo. Reset between demos. The script saves successful runs to the HDF5 file above.
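Between demos you can check progress by counting the episodes already stored in the dataset file. The sketch below assumes the common Isaac Lab recorder layout of `demo_0`, `demo_1`, … groups under a top-level `data` group; verify the group names against your own file before relying on it.

```python
import h5py

def count_demos(path: str) -> int:
    """Count demo groups under 'data' in a recorded HDF5 dataset.

    Assumes the layout data/demo_0, data/demo_1, ... (an assumption here,
    not confirmed by this tutorial); adjust the names if your file differs.
    """
    with h5py.File(path, "r") as f:
        if "data" not in f:
            return 0
        return sum(1 for k in f["data"] if k.startswith("demo_"))

# Usage:
#   count_demos("/datasets/isaaclab_arena/static_apple_tutorial/"
#               "arena_g1_static_apple_dataset_recorded.hdf5")
```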
Hint
Suggested sequence for the task:
Reach toward the apple with one controller — the apple is small and round, so approach it from above and pinch-grasp with a fingertip grip.
Lift it 5–10 cm above the shelf to clear the plate’s rim.
Move it laterally over the plate.
Open the gripper to release the apple onto the plate.
Demos for this task should be noticeably shorter than the loco-manipulation variant (no walk / turn / squat phases), so you can collect 10 successful demos in around 5–10 minutes once the pipeline is running.
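The recorder's actual success criterion is not shown in this tutorial; purely as an illustration of the kind of placement check the hint describes, a heuristic could look like the following. The function name, plate radius, and height tolerance are all made-up values, not the ones used by the Arena task.

```python
import numpy as np

def apple_on_plate(apple_pos, plate_pos, plate_radius=0.09, height_tol=0.03):
    """Heuristic success check: apple inside the plate's rim, resting near its surface.

    Positions are (x, y, z) in meters; thresholds are illustrative only.
    """
    apple_pos = np.asarray(apple_pos, dtype=float)
    plate_pos = np.asarray(plate_pos, dtype=float)
    lateral = np.linalg.norm(apple_pos[:2] - plate_pos[:2])  # distance in the table plane
    vertical = apple_pos[2] - plate_pos[2]                   # height above the plate
    return bool(lateral < plate_radius and 0.0 <= vertical < height_tol)

print(apple_on_plate([0.50, 0.20, 0.76], [0.50, 0.22, 0.75]))  # → True
```

A check of this shape makes the hint's advice concrete: a low release height keeps `vertical` inside the tolerance, and a lateral approach over the plate keeps the apple from rolling past the rim.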
Note
Releasing a small round object onto a flat plate is noticeably harder than dropping a box into a bin. Keep the release height low and the orientation stable — these recordings are fed directly into LeRobot conversion and policy post-training (see Policy Post-Training (GR00T N1.7)), so demo quality is what the policy learns from.
Step 5: Replay Recorded Demos (Optional)#
Replay the recorded HDF5 to confirm the demos look correct end-to-end. This doubles as a no-XR sanity check on the environment: it drives the env from the recorded actions and needs no teleoperation device, so you can visually verify the scene, embodiment and asset placements without launching CloudXR.
```
# Replay from the recorded HDF5 dataset
python isaaclab_arena/scripts/imitation_learning/replay_demos.py \
    --viz kit \
    --device cpu \
    --dataset_file $DATASET_DIR/arena_g1_static_apple_dataset_recorded.hdf5 \
    galileo_g1_static_pick_and_place \
    --object apple_01_objaverse_robolab \
    --destination clay_plates_hot3d_robolab
```