Isaac Teleop#
Isaac Teleop is the unified framework for high-fidelity egocentric and robot data collection. It provides a standardized device interface and a flexible graph-based retargeting pipeline, and works seamlessly across simulated and real-world robots.
Isaac Teleop replaces the previous native XR teleop stack (isaaclab.devices.openxr) in Isaac
Lab. For migration details see Migrating to Isaac Lab 3.0.
Tip
Just want to get running? Follow the Setting up Isaac Teleop with CloudXR how-to guide for installation and first-run steps, then come back here for deeper topics.
Supported Devices#
Isaac Teleop supports multiple XR headsets and tracking peripherals. Each device provides different input modes, which determine which retargeters and control schemes are available.
| Device | Input Modes | Client / Connection | Notes |
|---|---|---|---|
| Apple Vision Pro | Hand tracking (26 joints), spatial controllers | Native visionOS app (Isaac XR Teleop Sample Client) | Build from source; see Build and Install the Client App |
| Meta Quest 3 | Motion controllers (triggers, thumbsticks, squeeze), hand tracking | CloudXR.js WebXR client (browser) | |
| Pico 4 Ultra | Motion controllers, hand tracking | CloudXR.js WebXR client (browser) | Requires Pico OS 15.4.4U+; must use HTTPS mode |
| Manus Gloves | High-fidelity finger tracking (Manus SDK) | Isaac Teleop plugin (bundled) | Migrated from the now-deprecated |
Choose a Control Scheme#
The right combination of input device and retargeters depends on your task. Use this table as a starting point, then see the detailed pipeline examples below.
| Task Type | Recommended Input | Retargeters | Action Dim | Reference Config |
|---|---|---|---|---|
| Manipulation (e.g. Franka) | Motion controllers | | 8 | |
| Bimanual dex + locomotion (e.g. G1 TriHand) | Motion controllers | Bimanual | 32 | |
| Bimanual dex, fixed base (e.g. G1) | Motion controllers | Bimanual | 28 | |
| Complex dex hand (e.g. GR1T2, G1 Inspire) | Hand tracking / Manus gloves | Bimanual | 36+ | |
Why motion controllers for manipulation? Controllers provide precise spatial control via a grip pose and a physical trigger for gripper actuation, making them ideal for pick-and-place tasks.
Why hand tracking for complex dex hands? Hand tracking captures the full 26-joint hand pose required for high-fidelity dexterous retargeting. This is essential when individual finger control matters.
How It Works#
The IsaacTeleopDevice is the main integration point between Isaac Teleop
and Isaac Lab. It composes three collaborators:
- `XrAnchorManager` – creates and synchronizes an XR anchor prim in the simulation, and computes the `world_T_anchor` transform matrix that maps XR tracking data into the simulation coordinate frame.
- `TeleopSessionLifecycle` – builds the retargeting pipeline, acquires OpenXR handles from Isaac Sim's XR bridge, creates the `TeleopSession`, and steps it each frame to produce an action tensor.
- `CommandHandler` – registers and dispatches START / STOP / RESET callbacks triggered by XR UI buttons or the message bus.
Session lifecycle details
The session uses deferred creation: if the user has not yet clicked “Start AR” in the Isaac
Sim UI, the session is not created immediately. Instead, each call to advance() retries
session creation until OpenXR handles become available. Once connected, advance() returns a
flattened action tensor (torch.Tensor) on the configured device. It returns None when
the session is not yet ready or has been torn down.
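The deferred-creation behaviour can be pictured with a small self-contained sketch. The class and method names below are illustrative stand-ins, not the real `isaacteleop` API, and a plain list stands in for the `torch.Tensor` action:

```python
# Sketch of the deferred-creation pattern described above (names are
# illustrative, not the real isaacteleop API).
class DeferredSession:
    def __init__(self):
        self._session = None
        self._handles_ready = False  # flips when OpenXR handles become available

    def _try_acquire_handles(self):
        # In Isaac Teleop this polls Isaac Sim's XR bridge for OpenXR handles.
        return self._handles_ready

    def advance(self):
        # Retry session creation each frame until handles are available.
        if self._session is None:
            if not self._try_acquire_handles():
                return None  # not ready yet: the caller simply skips this frame
            self._session = object()
        # Once connected, a flattened action vector is produced each frame.
        return [0.0] * 8  # placeholder for the real torch.Tensor action

session = DeferredSession()
assert session.advance() is None      # user has not clicked "Start AR" yet
session._handles_ready = True         # simulate the XR connection coming up
assert session.advance() is not None  # session created; actions flow from here
```

The caller therefore only has to check for `None` each frame; no explicit connection state machine is needed on the environment side.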
Retargeting Framework#
Isaac Teleop uses a graph-based retargeting pipeline. Data flows from source nodes through retargeters and is combined into a single action tensor.
Source Nodes#
- `HandsSource` – provides hand tracking data (left/right, 26 joints each).
- `ControllersSource` – provides motion controller data (grip pose, trigger, thumbstick, etc.).
Available Retargeters#
Retargeters are provided by the isaacteleop package from the
Isaac Teleop repository. The retargeters listed below
are those used by the built-in Isaac Lab environments. Isaac Teleop may offer additional
retargeters not listed here – refer to the
Isaac Teleop repository for the full set.
Se3AbsRetargeter / Se3RelRetargeter
Maps hand or controller tracking to end-effector pose. Se3AbsRetargeter outputs a 7D
absolute pose (position + quaternion). Se3RelRetargeter outputs a 6D delta.
Configurable rotation offsets (roll, pitch, yaw in degrees).
GripperRetargeter
Outputs a single float (-1.0 closed, 1.0 open). Uses controller trigger (priority) or thumb-index pinch distance from hand tracking.
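As a hedged illustration, a downstream action term might map this scalar onto a physical gripper width like so. The linear mapping and the 0.08 m maximum width are assumptions for the sketch, not part of Isaac Teleop:

```python
# Illustrative consumer of the GripperRetargeter scalar (-1.0 = closed,
# 1.0 = open). Mapping and max_width are assumptions, not library behaviour.
def gripper_width(cmd: float, max_width: float = 0.08) -> float:
    cmd = max(-1.0, min(1.0, cmd))        # clamp to the documented range
    return (cmd + 1.0) / 2.0 * max_width  # -1 -> 0.0 m, +1 -> max_width

assert gripper_width(-1.0) == 0.0   # fully closed
assert gripper_width(1.0) == 0.08   # fully open
```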
DexHandRetargeter / DexBiManualRetargeter
Retargets full hand tracking (26 joints) to robot-specific hand joint angles using the
dex-retargeting library. Requires a robot hand URDF and a YAML configuration file.
Warning
The links used for retargeting must be defined at the actual fingertips, not in the middle of the fingers, to ensure accurate optimization.
TriHandMotionControllerRetargeter
Maps VR controller buttons (trigger, squeeze) to G1 TriHand joints (7 DOF per hand). Simple mapping: trigger controls the index finger, squeeze controls the middle finger, and both together control the thumb.
LocomotionRootCmdRetargeter
Maps controller thumbsticks to a 4D locomotion command:
[vel_x, vel_y, rot_vel_z, hip_height].
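A minimal sketch of how thumbstick axes could feed this 4D command. The axis assignment follows the G1 loco-manipulation description later on this page; the signs, scale factors, and nominal hip height are illustrative assumptions, not Isaac Teleop defaults:

```python
# Sketch only: thumbstick axes -> [vel_x, vel_y, rot_vel_z, hip_height].
# Signs, scales and base_hip are assumptions, not library defaults.
def locomotion_cmd(left_xy, right_xy, max_vel=1.0, max_rot=1.0,
                   base_hip=0.7, hip_range=0.2):
    lx, ly = left_xy
    rx, ry = right_xy
    return [
        ly * max_vel,                # vel_x: forward/backward
        -lx * max_vel,               # vel_y: strafe left/right
        -rx * max_rot,               # rot_vel_z: yaw rate
        base_hip + ry * hip_range,   # hip_height around a nominal stance
    ]

cmd = locomotion_cmd((0.0, 1.0), (0.0, 0.0))  # push left stick fully forward
assert cmd[0] == 1.0 and cmd[3] == 0.7
```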
TensorReorderer
Utility that flattens and reorders outputs from multiple retargeters into a single 1D action
tensor. The output_order must match the action space expected by the environment.
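Conceptually, the reordering step works like the following pure-Python sketch. The real `TensorReorderer` operates on tensors and is configured via `input_config` / `output_order`; the function and variable names here are illustrative:

```python
# Pure-Python sketch of TensorReorderer's job: flatten several named
# retargeter outputs into one 1-D action vector in a fixed order.
def reorder(outputs: dict, element_map: dict, output_order: list) -> list:
    # element_map: retargeter name -> element labels, in that retargeter's
    # own output order.
    flat = {}
    for name, labels in element_map.items():
        for label, value in zip(labels, outputs[name]):
            flat[label] = value
    return [flat[label] for label in output_order]

outputs = {"ee_pose": [0.4, 0.0, 0.3, 0.0, 0.0, 0.0, 1.0], "gripper": [1.0]}
element_map = {
    "ee_pose": ["pos_x", "pos_y", "pos_z", "quat_x", "quat_y", "quat_z", "quat_w"],
    "gripper": ["gripper_value"],
}
order = element_map["ee_pose"] + ["gripper_value"]
action = reorder(outputs, element_map, order)
assert len(action) == 8 and action[-1] == 1.0
```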
The built-in Isaac Lab environments use these retargeters as follows:
| Environment | Retargeters Used |
|---|---|
| Franka manipulation (stack, pick-place) | |
| G1 Inspire dexterous pick-place | |
| GR1-T2 dexterous pick-place | |
| G1 upper-body (fixed base) | |
| G1 loco-manipulation | |
Teleoperation Environment Reference#
The tables below list every built-in Isaac Lab environment that supports teleoperation,
organized by input method. Environments whose Task ID ends in -Play are designed for
closed-loop policy evaluation and are not included here.
Isaac Teleop (XR Headset) Environments#
These environments use the Isaac Teleop XR pipeline with motion controllers or hand tracking.
| Task ID | Input Mode | Hands | Operator Interaction |
|---|---|---|---|
| | Controllers | Right | Arm: right controller grip pose drives end-effector. Gripper: right trigger. |
| | Hand tracking | Both | Arms: left/right hand wrist pose drives each end-effector. Hands: full 26-joint hand tracking retargeted to 11 DOF per Fourier hand via |
| | Hand tracking | Both | Same as |
| | Hand tracking | Both | Same retargeting pipeline as |
| | Hand tracking | Both | Same retargeting pipeline as |
| | Hand tracking | Both | Arms: left/right hand wrist pose drives each end-effector. Hands: full 26-joint hand tracking retargeted to 12 DOF per Inspire hand via |
| | Controllers | Both | Arms: left/right controller grip pose drives each end-effector. Hands: trigger closes index, squeeze closes middle, both together close thumb (7 DOF TriHand per hand). |
| | Controllers | Both | Arms: same as fixed-base G1 above. Hands: same TriHand mapping. Locomotion: left thumbstick = linear velocity (x/y), right thumbstick X = rotational velocity, right thumbstick Y = hip height. |
Tip
Controllers provide a grip pose plus physical buttons (trigger, squeeze, thumbstick), ideal for tasks that need a gripper or simple hand mapping. Hand tracking captures 26 wrist and finger joints per hand, required for dexterous retargeting to complex robot hands.
Keyboard and SpaceMouse Environments#
Note
Keyboard and SpaceMouse teleoperation uses the legacy native Isaac Lab teleop stack
(isaaclab.devices), not Isaac Teleop. These environments do not require an XR headset.
The device button layouts below apply to all environments in this section. Per-environment differences (gripper enabled/disabled, sensitivity) are noted in the environment table that follows.
Keyboard
| Function | Keys | Description |
|---|---|---|
| Position X | | Move end-effector forward / backward. |
| Position Y | | Move end-effector left / right. |
| Position Z | | Move end-effector up / down. |
| Roll | | Rotate about X axis. |
| Pitch | | Rotate about Y axis. |
| Yaw | | Rotate about Z axis. |
| Gripper toggle | | Open / close gripper or suction (disabled in Reach envs). |
| Reset | | Clear accumulated delta pose and gripper state. |
SpaceMouse
| Function | Control | Description |
|---|---|---|
| Translation | 6-DOF knob | Push/pull/slide the knob to move the end-effector in X/Y/Z. |
| Rotation | 6-DOF knob | Tilt/twist the knob to rotate the end-effector in roll/pitch/yaw. |
| Gripper toggle | Left button | Open / close gripper or suction (disabled in Reach envs). |
| Reset | Right button | Clear accumulated delta pose and gripper state. |
Gamepad (Reach environments only)
| Function | Control | Description |
|---|---|---|
| Position X / Y | Left stick | Move end-effector forward/backward and left/right. |
| Position Z | Right stick (up/down) | Move end-effector up / down. |
| Roll / Pitch | D-Pad | Left/right for roll, up/down for pitch. |
| Yaw | Right stick (left/right) | Rotate about Z axis. |
| Gripper toggle | X button | Open / close gripper (disabled in Reach envs). |
| Task ID | Devices | Operator Interaction |
|---|---|---|
| | Keyboard, SpaceMouse | Arm: end-effector pose via RMPFlow. Gripper: |
| | Keyboard, SpaceMouse | Arm: end-effector pose via RMPFlow. Suction: |
| | Keyboard, SpaceMouse | Same as left-arm gripper above with camera observations. |
| | Keyboard, SpaceMouse | Arm: relative IK end-effector control. Suction: |
| | Keyboard, SpaceMouse | Same as long-suction UR10 above with a shorter suction cup. |
| | Keyboard, Gamepad, SpaceMouse | Arm: absolute IK end-effector control. Gripper disabled. |
| | Keyboard, Gamepad, SpaceMouse | Arm: relative IK end-effector control. Gripper disabled. |
Switch Between Controllers and Hand Tracking#
The retargeting pipeline determines whether an environment uses motion controllers or hand
tracking. Switching input modes requires changing the pipeline_builder function in your
environment config. No other environment-level changes are needed as long as the action
space (TensorReorderer output order) stays the same.
Controller to hand tracking
The key changes are:
1. Create a `HandsSource` and apply the world-to-anchor transform to it (instead of a `ControllersSource`).
2. Point `Se3RetargeterConfig.input_device` at the appropriate `HandsSource` key.
3. Set `use_wrist_rotation=True` and `use_wrist_position=True` so that the SE3 retargeter reads from the hand wrist joint rather than the controller grip pose.
4. The `GripperRetargeter` already supports both inputs – it uses the controller trigger when connected to a `ControllersSource` or thumb-index pinch when connected to a `HandsSource`.
Here is the Franka stack environment’s controller-based pipeline alongside a hand-tracking variant for comparison.
Original (controller-based):
# SE3: tracks right controller grip pose
se3_cfg = Se3RetargeterConfig(
    input_device=ControllersSource.RIGHT,
    use_wrist_rotation=False,
    use_wrist_position=False,
    target_offset_roll=90.0,
)
se3 = Se3AbsRetargeter(se3_cfg, name="ee_pose")
connected_se3 = se3.connect({
    ControllersSource.RIGHT: transformed_controllers.output(
        ControllersSource.RIGHT
    ),
})
Modified (hand-tracking-based):
se3_cfg = Se3RetargeterConfig(
    input_device=HandsSource.RIGHT,
    use_wrist_rotation=True,
    use_wrist_position=True,
    target_offset_roll=0.0,
)
se3 = Se3AbsRetargeter(se3_cfg, name="ee_pose")
transformed_hands = hands.transformed(transform_input.output(ValueInput.VALUE))
connected_se3 = se3.connect({
    HandsSource.RIGHT: transformed_hands.output(HandsSource.RIGHT),
})
The GripperRetargeter needs no changes – it accepts both controller and hand inputs and
uses whichever source is connected.
Hand tracking to controller
Reverse the steps above: set input_device to a ControllersSource key, transform the
controllers instead of the hands, and set use_wrist_rotation=False and
use_wrist_position=False. Adjust target_offset_roll/pitch/yaw to account for the
controller grip frame orientation (typically 90 degrees roll for Franka-style grippers).
Note
When switching between input modes, you may need to tune the target_offset_roll,
target_offset_pitch, and target_offset_yaw values. Controller grip frames and hand
wrist frames have different default orientations relative to the robot end-effector.
Build a Retargeting Pipeline#
A pipeline builder is a callable that constructs the retargeting graph and returns an
OutputCombiner with a single "action" key. Here is a complete example for a Franka
manipulator (from stack_ik_abs_env_cfg.py):
def _build_franka_stack_pipeline():
    from isaacteleop.retargeting_engine.deviceio_source_nodes import ControllersSource, HandsSource
    from isaacteleop.retargeting_engine.interface import OutputCombiner, ValueInput
    from isaacteleop.retargeters import (
        GripperRetargeter, GripperRetargeterConfig,
        Se3AbsRetargeter, Se3RetargeterConfig,
        TensorReorderer,
    )
    from isaacteleop.retargeting_engine.tensor_types import TransformMatrix

    # 1. Create input sources
    controllers = ControllersSource(name="controllers")
    hands = HandsSource(name="hands")

    # 2. Apply coordinate-frame transform (world_T_anchor provided by IsaacTeleopDevice)
    transform_input = ValueInput("world_T_anchor", TransformMatrix())
    transformed_controllers = controllers.transformed(
        transform_input.output(ValueInput.VALUE)
    )

    # 3. Create and connect retargeters
    se3_cfg = Se3RetargeterConfig(
        input_device=ControllersSource.RIGHT,
        target_offset_roll=90.0,
    )
    se3 = Se3AbsRetargeter(se3_cfg, name="ee_pose")
    connected_se3 = se3.connect({
        ControllersSource.RIGHT: transformed_controllers.output(ControllersSource.RIGHT),
    })

    gripper_cfg = GripperRetargeterConfig(hand_side="right")
    gripper = GripperRetargeter(gripper_cfg, name="gripper")
    connected_gripper = gripper.connect({
        ControllersSource.RIGHT: transformed_controllers.output(ControllersSource.RIGHT),
        HandsSource.RIGHT: hands.output(HandsSource.RIGHT),
    })

    # 4. Flatten into a single action tensor with TensorReorderer
    ee_elements = ["pos_x", "pos_y", "pos_z", "quat_x", "quat_y", "quat_z", "quat_w"]
    reorderer = TensorReorderer(
        input_config={
            "ee_pose": ee_elements,
            "gripper_command": ["gripper_value"],
        },
        output_order=ee_elements + ["gripper_value"],
        name="action_reorderer",
        input_types={"ee_pose": "array", "gripper_command": "scalar"},
    )
    connected_reorderer = reorderer.connect({
        "ee_pose": connected_se3.output("ee_pose"),
        "gripper_command": connected_gripper.output("gripper_command"),
    })

    # 5. Return OutputCombiner with "action" key
    return OutputCombiner({"action": connected_reorderer.output("output")})
Tip
The output_order of the TensorReorderer must match the action space of your environment.
Mismatches will cause silent control errors.
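One cheap safeguard is to assert the pipeline's flattened length against the environment's action dimension at startup. A hedged sketch, assuming the Franka example's 8-D action space (`expected_action_dim` is an illustrative name, not an Isaac Lab API):

```python
# Startup sanity check: catch output_order / action-space mismatches early
# instead of debugging silent control errors. Names are illustrative.
ee_elements = ["pos_x", "pos_y", "pos_z", "quat_x", "quat_y", "quat_z", "quat_w"]
output_order = ee_elements + ["gripper_value"]

expected_action_dim = 8  # e.g. the Franka IK-abs action space from the table above
assert len(output_order) == expected_action_dim, (
    f"pipeline produces {len(output_order)} values, "
    f"environment expects {expected_action_dim}"
)
```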
Configure Your Environment#
Register the pipeline in your environment configuration using IsaacTeleopCfg:
from isaaclab_teleop import IsaacTeleopCfg, XrCfg

@configclass
class MyTeleopEnvCfg(ManagerBasedRLEnvCfg):
    xr: XrCfg = XrCfg(anchor_pos=(0.5, 0.0, 0.5))

    def __post_init__(self):
        super().__post_init__()
        self.isaac_teleop = IsaacTeleopCfg(
            pipeline_builder=_build_my_pipeline,
            sim_device=self.sim.device,
            xr_cfg=self.xr,
        )
Key IsaacTeleopCfg fields:
- `pipeline_builder` – callable that returns an `OutputCombiner` with an `"action"` output.
- `retargeters_to_tune` – optional callable returning retargeters to expose in the live tuning UI.
- `xr_cfg` – `XrCfg` for anchor configuration (see below).
- `plugins` – list of Isaac Teleop plugin configurations (e.g. Manus).
- `sim_device` – torch device string (default `"cuda:0"`).
Warning
pipeline_builder and retargeters_to_tune must be callables (functions or lambdas),
not pre-built objects. The @configclass decorator deep-copies mutable attributes, which
would break pre-built pipeline graphs.
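The toy example below shows why: `deepcopy` leaves a plain function untouched but clones an object graph, which would sever a pre-built pipeline's node connections. Plain dicts stand in for the real config and graph objects:

```python
import copy

# Functions survive deepcopy by reference; object graphs are duplicated.
# This is why pipeline_builder must be a callable, not a pre-built graph.
def build_pipeline():
    return {"action": "..."}  # stands in for the real OutputCombiner

prebuilt = {"node": object()}  # stands in for a pre-built pipeline graph

cfg = {"builder": build_pipeline, "graph": prebuilt}
cfg_copy = copy.deepcopy(cfg)

assert cfg_copy["builder"] is build_pipeline              # function: not copied
assert cfg_copy["graph"]["node"] is not prebuilt["node"]  # graph node: cloned
```

A cloned graph node is no longer the object the rest of the pipeline is wired to, so connections silently point at stale copies; a builder callable avoids this by constructing the graph after configuration is finalized.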
Configure the XR Anchor#
The XrCfg controls how the simulation is positioned and oriented in the
XR device’s view.
- `anchor_pos` / `anchor_rot` – Static anchor placement. The simulation point at these coordinates appears at the XR device's local origin (floor level). Set to a point on the floor beneath the robot to position it in front of the user.
- `anchor_prim_path` – Attach the anchor to a USD prim for dynamic positioning. Use this for locomotion tasks where the robot moves and the XR camera should follow.
- `anchor_rotation_mode` – Controls how anchor rotation behaves:

| Mode | Description |
|---|---|
| `FIXED` | Sets rotation once from `anchor_rot`. Best for static manipulation setups. |
| `FOLLOW_PRIM` | Rotation continuously tracks the attached prim. Best for locomotion where the user should face the robot's heading direction. |
| `FOLLOW_PRIM_SMOOTHED` | Same as `FOLLOW_PRIM` with slerp interpolation. Controlled by `anchor_rotation_smoothing_time` (seconds, default 1.0). Reduces motion sickness from abrupt rotation changes. Typical range: 0.3–1.5 s. |
| `CUSTOM` | User-provided callable `anchor_rotation_custom_func(headpose, primpose) -> quaternion` for fully custom logic. |

- `fixed_anchor_height` – When `True` (default), keeps the anchor height at its initial value. Prevents vertical bobbing during locomotion.
- `near_plane` – Closest render distance for the XR device (default 0.15 m).
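The effect of `anchor_rotation_smoothing_time` can be pictured with a first-order lag: the anchor heading chases the prim heading instead of snapping. The real `FOLLOW_PRIM_SMOOTHED` mode slerps quaternions; this yaw-only sketch only illustrates the time-constant behaviour:

```python
import math

# 1-D sketch of smoothed anchor rotation: exponential lag toward the
# target yaw, with the same "smoothing time" meaning as the config field.
def smoothed_yaw(current, target, dt, smoothing_time=1.0):
    alpha = 1.0 - math.exp(-dt / smoothing_time)
    return current + alpha * (target - current)

yaw = 0.0
for _ in range(300):                 # 300 frames at 90 Hz ~= 3.3 s
    yaw = smoothed_yaw(yaw, 1.57, 1 / 90)
assert 1.45 < yaw < 1.57             # converging toward the prim heading
```

Shorter smoothing times track the robot more tightly but reintroduce abrupt view rotations; the documented 0.3–1.5 s range trades responsiveness against comfort.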
Note
On Apple Vision Pro, the local coordinate frame can be reset to a point on the floor beneath the user by holding the digital crown.
Tip
When using XR, call remove_camera_configs() on your env config to strip
camera sensors. Additional cameras cause GPU contention and degrade XR performance.
Record Demonstrations for Imitation Learning#
Isaac Teleop integrates with Isaac Lab’s record_demos.py script for recording teleoperated
demonstrations.
When your environment configuration has an isaac_teleop attribute, the script automatically
uses create_isaac_teleop_device() – no --teleop_device flag is needed:
./isaaclab.sh -p scripts/tools/record_demos.py \
--task Isaac-PickPlace-GR1T2-WaistEnabled-Abs-v0 \
--visualizer kit \
--xr
Some environments use the legacy teleop_devices configuration instead of isaac_teleop
(e.g. the Galbot RmpFlow relative-mode tasks). For these, pass --teleop_device to select
the input device:
./isaaclab.sh -p scripts/tools/record_demos.py \
--task Isaac-Stack-Cube-Galbot-Left-Arm-Gripper-RmpFlow-v0 \
--visualizer kit \
--teleop_device keyboard
The workflow is:
1. Configure your environment with `IsaacTeleopCfg` (see Configure Your Environment), or `teleop_devices` for legacy devices (keyboard, spacemouse).
2. Run `record_demos.py` with the task name.
3. For XR tasks: start AR, connect your XR device, and teleoperate. For legacy tasks: use the configured input device directly.
4. Demonstrations are recorded to HDF5 files.
5. Use the recorded data with Isaac Lab Mimic or other imitation learning frameworks.
For the broader imitation learning pipeline (replay, augmentation, policy training), see Teleoperation and Imitation Learning with Isaac Lab Mimic.
Add a New Robot#
To add teleoperation support for a new robot in Isaac Lab:
1. Choose a control scheme. Refer to the Choose a Control Scheme table to determine which retargeters match your robot's capabilities.
2. Build the pipeline. If existing retargeters are sufficient (e.g. `Se3AbsRetargeter` + `GripperRetargeter` for a new manipulator), write a pipeline builder function following the pattern in Build a Retargeting Pipeline. Configure the `TensorReorderer` output order to match your environment's action space.
   - For dexterous hands: create a robot hand URDF and YAML config for `DexHandRetargeter`. Ensure fingertip links are positioned at the actual fingertips, not mid-finger.
   - For a custom retargeter: see Add a New Retargeter below.
3. Configure the XR anchor for your robot (static for manipulation, dynamic for locomotion). See Configure the XR Anchor.
4. Register in your env config via `IsaacTeleopCfg` (see Configure Your Environment).
Add a New Retargeter#
If the built-in retargeters do not cover your use case, you can implement a custom one in the Isaac Teleop repository:
1. Inherit from `BaseRetargeter` and implement `input_spec()`, `output_spec()`, and `compute()`.
2. Optionally add a `ParameterState` for parameters that should be live-tunable via the retargeter tuning UI.
3. Connect to existing source nodes (`HandsSource`, `ControllersSource`) or create a new `IDeviceIOSource` subclass for custom input devices.
See the Isaac Teleop repository and Contributing Guide for details.
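The steps above can be sketched as a standalone stub. The real `BaseRetargeter` base class, spec types, and method signatures live in the Isaac Teleop repository, so treat every signature here as an assumption; the stub only mirrors the three required pieces:

```python
# Schematic custom retargeter: scales an incoming 3-D position by a
# tunable factor. Signatures and spec formats are assumptions, not the
# real isaacteleop API.
class ScaleRetargeter:
    """Scales an incoming 3-D position by a tunable factor."""

    def __init__(self, scale: float = 1.0):
        self.scale = scale  # would be a ParameterState for live tuning

    def input_spec(self):
        return {"position": 3}   # declares a 3-element input

    def output_spec(self):
        return {"position": 3}   # declares a 3-element output

    def compute(self, inputs):
        return {"position": [v * self.scale for v in inputs["position"]]}

r = ScaleRetargeter(scale=2.0)
out = r.compute({"position": [1.0, 2.0, 3.0]})
assert out["position"] == [2.0, 4.0, 6.0]
```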
Add a New Device#
There are two levels of device integration:
- Isaac Teleop plugin (C++ level)
For new hardware that requires a custom driver or SDK. Plugins push data via OpenXR tensor collections. Existing plugins include Manus gloves, OAK-D camera, controller synthetic hands, and foot pedals. After creating the plugin, update the retargeting pipeline config to consume data from the new plugin’s source node.
See the Plugins directory for examples.
- Pipeline configuration only
For devices already supported by Isaac Teleop (or whose data is available as hand / controller tracking). Simply update your `pipeline_builder` to use the appropriate source nodes and retargeters for the device's data format.
Optimize XR Performance#
Configure the physics and render time step
Ensure the simulation render time step roughly matches the XR device's display rate and can be sustained in real time. The Apple Vision Pro displays at 90 Hz; we recommend a simulation dt of 1/90 s with a render interval of 2 (rendering at 45 Hz):
@configclass
class XrTeleopEnvCfg(ManagerBasedRLEnvCfg):
    def __post_init__(self):
        self.sim.dt = 1.0 / 90
        self.sim.render_interval = 2
If render times are highly variable, set NV_PACER_FIXED_TIME_STEP_MS as an environment
variable when starting the CloudXR runtime to use fixed pacing.
Try running physics on CPU
Running teleoperation scripts with --device cpu may reduce latency when only a single
environment is present, since it avoids GPU contention with rendering.
Known Issues#
- `XR_ERROR_VALIDATION_FAILURE: xrWaitFrame(frameState->type == 0)` when stopping AR Mode – Can be safely ignored. Caused by a race condition in the exit handler.
- `XR_ERROR_INSTANCE_LOST` in `xrPollEvent` – Occurs if the CloudXR runtime exits before Isaac Lab. Restart the runtime to resume.
- `[omni.usd] TF_PYTHON_EXCEPTION` when starting/stopping AR Mode – Can be safely ignored. Caused by a race condition in the enter/exit handler.
- `Invalid version string` in `_ParseVersionString` – Caused by shader assets authored with older USD versions. Typically safe to ignore.
- XR device connects but no video is displayed (viewport responds to tracking) – The GPU index may differ between host and container. Set `NV_GPU_INDEX` to `0`, `1`, or `2` in the CloudXR runtime to match the host GPU.
API Reference#
See the `isaaclab_teleop` API reference for full class and function documentation.