Hydra Configuration System

Isaac Lab supports the Hydra configuration system, which allows the task's configuration to be modified through command-line arguments. This is useful for automating experiments and performing hyperparameter tuning.

Any parameter of the environment can be modified by adding one or more elements of the form env.a.b.param1=value to the command-line input, where a.b.param1 reflects the parameter's hierarchy, for example env.actions.joint_effort.scale=10.0. Similarly, the agent's parameters can be modified by using the agent prefix, for example agent.seed=2024.

The way these command-line arguments are set follows the exact structure of the configuration files. Since the different RL frameworks use different conventions, there might be differences in how the parameters are set. For example, with rl_games the seed is set with agent.params.seed, while with rsl_rl, skrl and sb3 it is set with agent.seed.

As a result, training with hydra arguments can be run with the following syntax:

python source/standalone/workflows/rsl_rl/train.py --task=Isaac-Cartpole-v0 --headless env.actions.joint_effort.scale=10.0 agent.seed=2024
python source/standalone/workflows/rl_games/train.py --task=Isaac-Cartpole-v0 --headless env.actions.joint_effort.scale=10.0 agent.params.seed=2024
python source/standalone/workflows/skrl/train.py --task=Isaac-Cartpole-v0 --headless env.actions.joint_effort.scale=10.0 agent.seed=2024
python source/standalone/workflows/sb3/train.py --task=Isaac-Cartpole-v0 --headless env.actions.joint_effort.scale=10.0 agent.seed=2024

Each of the above commands runs the training script for the Isaac-Cartpole-v0 task in headless mode, sets the env.actions.joint_effort.scale parameter to 10.0, and sets the agent's seed to 2024.

Note

To keep backwards compatibility and to provide a more user-friendly experience, we have kept the old CLI arguments of the form --param, for example --num_envs, --seed and --max_iterations. These arguments take precedence over the Hydra arguments and will overwrite the values set by them.
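
For example, mixing both styles in a single command (a sketch based on the rsl_rl command above): since the old-style argument takes precedence, the training would use seed 42 here.

python source/standalone/workflows/rsl_rl/train.py --task=Isaac-Cartpole-v0 --headless --seed 42 agent.seed=2024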

Modifying advanced parameters

Callables

It is possible to modify functions and classes in the configuration files by using the syntax module:attribute_name. For example, in the Cartpole environment:

@configclass
class ObservationsCfg:
    """Observation specifications for the MDP."""

    @configclass
    class PolicyCfg(ObsGroup):
        """Observations for policy group."""

        # observation terms (order preserved)
        joint_pos_rel = ObsTerm(func=mdp.joint_pos_rel)
        joint_vel_rel = ObsTerm(func=mdp.joint_vel_rel)

        def __post_init__(self) -> None:
            self.enable_corruption = False
            self.concatenate_terms = True

    # observation groups
    policy: PolicyCfg = PolicyCfg()

we could modify joint_pos_rel to compute absolute positions instead of relative positions with env.observations.policy.joint_pos_rel.func=omni.isaac.lab.envs.mdp:joint_pos.
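
For example, the override could be appended to one of the training commands above (shown here with the rsl_rl workflow):

python source/standalone/workflows/rsl_rl/train.py --task=Isaac-Cartpole-v0 --headless env.observations.policy.joint_pos_rel.func=omni.isaac.lab.envs.mdp:joint_pos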

Setting parameters to None

To set parameters to None, use the null keyword, which is a special keyword in Hydra that is automatically converted to None. In the above example, we could also disable the joint_pos_rel observation by setting it to None with env.observations.policy.joint_pos_rel=null.
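
For example, appended to one of the training commands above:

python source/standalone/workflows/rsl_rl/train.py --task=Isaac-Cartpole-v0 --headless env.observations.policy.joint_pos_rel=null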

Dictionaries

Elements in dictionaries are handled as parameters in the hierarchy. For example, in the Cartpole environment:

@configclass
class EventCfg:
    """Configuration for events."""

    reset_cart_position = EventTerm(
        func=mdp.reset_joints_by_offset,
        mode="reset",
        params={
            "asset_cfg": SceneEntityCfg("robot", joint_names=["slider_to_cart"]),
            "position_range": (-1.0, 1.0),
            "velocity_range": (-0.5, 0.5),
        },
    )

    reset_pole_position = EventTerm(
        func=mdp.reset_joints_by_offset,
        mode="reset",
        params={
            "asset_cfg": SceneEntityCfg("robot", joint_names=["cart_to_pole"]),
            "position_range": (-0.25 * math.pi, 0.25 * math.pi),
            "velocity_range": (-0.25 * math.pi, 0.25 * math.pi),
        },
    )

The position_range parameter can be modified with env.events.reset_cart_position.params.position_range="[-2.0, 2.0]" (a full example command follows the list below). This example shows two noteworthy points:

  • The value we set contains a space, so it must be enclosed in quotes.

  • The value is given as a list while the parameter is a tuple in the config. This is because Hydra does not support tuples.
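
Put together, the full command could look as follows (using the rsl_rl workflow from the examples above):

python source/standalone/workflows/rsl_rl/train.py --task=Isaac-Cartpole-v0 --headless env.events.reset_cart_position.params.position_range="[-2.0, 2.0]"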

Modifying inter-dependent parameters

Particular care should be taken when modifying parameters using command-line arguments. Some of the configurations perform intermediate computations based on other parameters, and these computed values are not updated when the parameters they depend on are modified.

For example, for the configuration of the Cartpole camera depth environment:

@configclass
class CartpoleDepthCameraEnvCfg(CartpoleRGBCameraEnvCfg):
    # camera
    tiled_camera: TiledCameraCfg = TiledCameraCfg(
        prim_path="/World/envs/env_.*/Camera",
        offset=TiledCameraCfg.OffsetCfg(pos=(-5.0, 0.0, 2.0), rot=(1.0, 0.0, 0.0, 0.0), convention="world"),
        data_types=["depth"],
        spawn=sim_utils.PinholeCameraCfg(
            focal_length=24.0, focus_distance=400.0, horizontal_aperture=20.955, clipping_range=(0.1, 20.0)
        ),
        width=80,
        height=80,
    )

    # spaces
    observation_space = [tiled_camera.height, tiled_camera.width, 1]

If the user were to modify the width of the camera, e.g. env.tiled_camera.width=128, then the dependent parameter env.observation_space=[80,128,1] must be updated and given as input as well.
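
For example, both overrides could be passed together (the task name below is a placeholder for the corresponding camera-based Cartpole task):

python source/standalone/workflows/rl_games/train.py --task=<cartpole-depth-camera-task> --headless env.tiled_camera.width=128 env.observation_space=[80,128,1]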

Similarly, the computations in the __post_init__ method are not re-run with the command-line inputs. In the LocomotionVelocityRoughEnvCfg, for example, the post-init update is as follows:

@configclass
class LocomotionVelocityRoughEnvCfg(ManagerBasedRLEnvCfg):
    """Configuration for the locomotion velocity-tracking environment."""

    # Scene settings
    scene: MySceneCfg = MySceneCfg(num_envs=4096, env_spacing=2.5)
    # Basic settings
    observations: ObservationsCfg = ObservationsCfg()
    actions: ActionsCfg = ActionsCfg()
    commands: CommandsCfg = CommandsCfg()
    # MDP settings
    rewards: RewardsCfg = RewardsCfg()
    terminations: TerminationsCfg = TerminationsCfg()
    events: EventCfg = EventCfg()
    curriculum: CurriculumCfg = CurriculumCfg()

    def __post_init__(self):
        """Post initialization."""
        # general settings
        self.decimation = 4
        self.episode_length_s = 20.0
        # simulation settings
        self.sim.dt = 0.005
        self.sim.render_interval = self.decimation
        self.sim.disable_contact_processing = True
        self.sim.physics_material = self.scene.terrain.physics_material
        # update sensor update periods
        # we tick all the sensors based on the smallest update period (physics update period)
        if self.scene.height_scanner is not None:
            self.scene.height_scanner.update_period = self.decimation * self.sim.dt
        if self.scene.contact_forces is not None:
            self.scene.contact_forces.update_period = self.sim.dt

        # check if terrain levels curriculum is enabled - if so, enable curriculum for terrain generator
        # this generates terrains with increasing difficulty and is useful for training
        if getattr(self.curriculum, "terrain_levels", None) is not None:
            if self.scene.terrain.terrain_generator is not None:
                self.scene.terrain.terrain_generator.curriculum = True
        else:
            if self.scene.terrain.terrain_generator is not None:
                self.scene.terrain.terrain_generator.curriculum = False

Here, when modifying env.decimation or env.sim.dt, the user also needs to provide the updated env.sim.render_interval, env.scene.height_scanner.update_period, and env.scene.contact_forces.update_period as input, as in the example below.
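
For example, a modified simulation time step could be passed along with the derived values (the task name below is a placeholder for a velocity-tracking locomotion task; with the default decimation of 4 and dt of 0.01, the derived values are a render interval of 4, a height-scanner update period of 0.04 and a contact-forces update period of 0.01):

python source/standalone/workflows/rsl_rl/train.py --task=<locomotion-velocity-task> --headless env.sim.dt=0.01 env.sim.render_interval=4 env.scene.height_scanner.update_period=0.04 env.scene.contact_forces.update_period=0.01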