isaaclab_contrib.rl#
Reinforcement learning extensions contributed by the community.
Submodules#
isaaclab_contrib.rl.rlinf#
RLinf integration for IsaacLab.
This package provides the extension mechanism for integrating IsaacLab tasks with RLinf’s distributed RL training framework for VLA models like GR00T.
Extension Module#
The extension module (extension.py) is loaded by RLinf via the
RLINF_EXT_MODULE environment variable and handles:
- Registering IsaacLab tasks into RLinf’s REGISTER_ISAACLAB_ENVS registry
- Registering GR00T observation/action converters
- Patching GR00T’s get_model for custom embodiments

Usage:

export RLINF_EXT_MODULE="isaaclab_contrib.rl.rlinf.extension"
export RLINF_CONFIG_FILE="/path/to/config.yaml"
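The environment-variable mechanism above follows the usual pattern of importing a module by its dotted path and invoking a well-known hook. A minimal sketch of that pattern (the `load_extension` helper is illustrative, not part of RLinf's API):

```python
import importlib
import os


def load_extension(env_var: str = "RLINF_EXT_MODULE"):
    """Import the module named in ``env_var`` and invoke its ``register()``
    hook, if one is defined. Returns the imported module."""
    module_path = os.environ.get(env_var)
    if not module_path:
        raise RuntimeError(f"{env_var} is not set")
    module = importlib.import_module(module_path)
    register = getattr(module, "register", None)
    if callable(register):
        register()
    return module
```

Because the module is resolved at runtime from the environment, user code never imports the extension directly; the worker process does it once at startup.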
Note
The extension module requires the external rlinf package and cannot be
introspected at documentation build time. The API is described textually below.
Extension Module#
The extension module (isaaclab_contrib.rl.rlinf.extension) is loaded by RLinf’s
worker framework via the RLINF_EXT_MODULE environment variable. It is not imported
directly by user code.
Setup:
export RLINF_EXT_MODULE="isaaclab_contrib.rl.rlinf.extension"
export RLINF_CONFIG_FILE="/path/to/config.yaml"
Public entry point:
register() – Called by RLinf’s worker to perform all setup. It:

- Registers GR00T observation and action converters.
- Patches GR00T’s get_model for custom embodiment tags.
- Registers IsaacLab tasks into RLinf’s REGISTER_ISAACLAB_ENVS registry.
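Since the real rlinf package cannot be imported here, the shape of the register() entry point can only be sketched. In this hypothetical sketch, REGISTER_ISAACLAB_ENVS is modeled as a plain dict (in RLinf it is a framework-provided registry), and the converter and task-ID names are illustrative:

```python
# Hypothetical stand-ins for RLinf's registries (in RLinf these are
# framework-provided; here they are plain dicts for illustration).
REGISTER_ISAACLAB_ENVS = {}
OBS_CONVERTERS = {}
ACTION_CONVERTERS = {}


def register():
    """Single entry point called by the RLinf worker after import."""
    # 1. Register GR00T observation/action converters (placeholder converters).
    OBS_CONVERTERS["gr00t"] = lambda obs: obs
    ACTION_CONVERTERS["gr00t"] = lambda act: act

    # 2. Patching GR00T's get_model for custom embodiment tags would happen
    #    here (omitted: requires the real gr00t package).

    # 3. Register IsaacLab task IDs into the environment registry
    #    (the task ID below is illustrative).
    for task_id in ("Isaac-Example-Task-v0",):
        REGISTER_ISAACLAB_ENVS[task_id] = {"entry_point": "isaaclab"}
```

Keeping all side effects inside a single register() call (rather than at module import time) lets the worker control exactly when registration happens.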
Expected YAML configuration (under env.train.isaaclab):
env:
  train:
    isaaclab: &isaaclab_config
      task_description: "Assemble trocar with dual-arm robot"
      main_images: "front_camera"
      extra_view_images: ["left_wrist_camera", "right_wrist_camera"]
      states:
        - key: "robot_joint_state"
          slice: [15, 29]
      gr00t_mapping:
        video:
          main_images: "video.room_view"
        action_mapping:
          prefix_pad: 15
  eval:
    isaaclab: *isaaclab_config  # Reuse via YAML anchor
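The states entry above selects a named sub-vector out of the flat state tensor. A small sketch of how such a spec could be applied, assuming slice: [start, end] denotes a half-open index range (the extract_state helper is illustrative, not part of this package):

```python
def extract_state(flat_state, spec):
    """Pull one named sub-vector out of a flat state vector, assuming the
    YAML ``slice: [start, end]`` entry denotes a half-open index range."""
    start, end = spec["slice"]
    return flat_state[start:end]


# Illustrative 30-dim flat state; the spec mirrors the YAML example above.
spec = {"key": "robot_joint_state", "slice": [15, 29]}
flat = list(range(30))
joints = extract_state(flat, spec)  # elements at indices 15..28
```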
Task IDs are read automatically from env.train.init_params.id and
env.eval.init_params.id in the YAML config.
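The lookup of those task IDs amounts to walking the parsed config mapping. A sketch, using a hand-built dict standing in for the result of parsing the YAML file (in practice yaml.safe_load would produce it; the task ID is illustrative):

```python
def read_task_ids(cfg: dict) -> dict:
    """Return the train/eval task IDs from ``env.*.init_params.id``."""
    return {
        split: cfg["env"][split]["init_params"]["id"]
        for split in ("train", "eval")
    }


# Hand-built stand-in for the parsed YAML config (illustrative task ID).
config = {
    "env": {
        "train": {"init_params": {"id": "Isaac-Example-Task-v0"}},
        "eval": {"init_params": {"id": "Isaac-Example-Task-v0"}},
    }
}
```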