Configuring an RL Agent

In the previous tutorial, we saw how to train an RL agent to solve the cartpole balancing task using the Stable-Baselines3 library. In this tutorial, we will see how to configure the training process to use different RL libraries and different training algorithms.

In the directory scripts/reinforcement_learning, you will find the scripts for different RL libraries. These are organized into subdirectories named after each library. Each subdirectory contains the training and playing scripts for that library.
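For illustration, the layout looks roughly like this. The library names below are taken from the configuration keys used later in this tutorial; the exact contents of each subdirectory may vary between releases:

```text
scripts/reinforcement_learning/
├── rl_games/
│   ├── train.py
│   └── play.py
├── rsl_rl/
│   ├── train.py
│   └── play.py
├── sb3/
│   ├── train.py
│   └── play.py
└── skrl/
    ├── train.py
    └── play.py
```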

To configure a learning library for a specific task, you need to create a configuration file for the learning agent. This file is used to create an instance of the learning agent and to configure the training process. Similar to the environment registration shown in the Registering an Environment tutorial, you can register the learning agent's configuration with the gymnasium.register method.

The Code

As an example, we will look at the configuration included for the task Isaac-Cartpole-v0 in the isaaclab_tasks package. This is the same task that we used in the Training with an RL Agent tutorial.

gym.register(
    id="Isaac-Cartpole-v0",
    entry_point="isaaclab.envs:ManagerBasedRLEnv",
    disable_env_checker=True,
    kwargs={
        "env_cfg_entry_point": f"{__name__}.cartpole_env_cfg:CartpoleEnvCfg",
        "rl_games_cfg_entry_point": f"{agents.__name__}:rl_games_ppo_cfg.yaml",
        "rsl_rl_cfg_entry_point": f"{agents.__name__}.rsl_rl_ppo_cfg:CartpolePPORunnerCfg",
        "rsl_rl_with_symmetry_cfg_entry_point": f"{agents.__name__}.rsl_rl_ppo_cfg:CartpolePPORunnerWithSymmetryCfg",
        "skrl_cfg_entry_point": f"{agents.__name__}:skrl_ppo_cfg.yaml",
        "sb3_cfg_entry_point": f"{agents.__name__}:sb3_ppo_cfg.yaml",
    },
)

The Code Explained

Under the attribute kwargs, we can see the configurations for the different learning libraries. Each key names a library and its value specifies the configuration instance, which can be a string, a class, or an instance of a class. For example, the value of the key "rl_games_cfg_entry_point" is a string that points to the configuration YAML file for the RL-Games library, while the value of the key "rsl_rl_cfg_entry_point" points to the configuration class for the RSL-RL library.

The pattern for specifying an agent configuration entry point closely follows the one used for the environment configuration entry point. The following two registrations are equivalent:

Specifying the configuration entry point as a string
from . import agents

gym.register(
   id="Isaac-Cartpole-v0",
   entry_point="isaaclab.envs:ManagerBasedRLEnv",
   disable_env_checker=True,
   kwargs={
      "env_cfg_entry_point": f"{__name__}.cartpole_env_cfg:CartpoleEnvCfg",
      "rsl_rl_cfg_entry_point": f"{agents.__name__}.rsl_rl_ppo_cfg:CartpolePPORunnerCfg",
   },
)
Specifying the configuration entry point as a class
from . import agents

gym.register(
   id="Isaac-Cartpole-v0",
   entry_point="isaaclab.envs:ManagerBasedRLEnv",
   disable_env_checker=True,
   kwargs={
      "env_cfg_entry_point": f"{__name__}.cartpole_env_cfg:CartpoleEnvCfg",
      "rsl_rl_cfg_entry_point": agents.rsl_rl_ppo_cfg.CartpolePPORunnerCfg,
   },
)

The first code block is the preferred way to specify the configuration entry point. The second is equivalent, but it imports the configuration class at registration time, which slows down module imports. For this reason, we recommend using strings for configuration entry points.
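To see why the string form stays cheap, here is a minimal sketch of how a "module:attribute" string can be resolved lazily, only when the configuration is actually needed. The helper name load_cfg_entry_point is illustrative, not the Isaac Lab API, and the standard-library class used in the demonstration stands in for a real configuration class:

```python
import importlib


def load_cfg_entry_point(entry_point):
    """Resolve a 'module:attribute' entry-point string lazily.

    Strings are only imported when this function is called; anything else
    (a class or an instance) is assumed to be already resolved.
    """
    if not isinstance(entry_point, str):
        return entry_point  # already a class or an instance
    module_name, attr_name = entry_point.split(":")
    module = importlib.import_module(module_name)
    return getattr(module, attr_name)


# demonstration: resolve a class from the standard library by its string path
cfg_cls = load_cfg_entry_point("collections:OrderedDict")
print(cfg_cls.__name__)  # OrderedDict
```

Until load_cfg_entry_point is called, no import happens, so registering many tasks with string entry points stays fast.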

All the scripts in the scripts/reinforcement_learning directory are configured by default to read the <library_name>_cfg_entry_point from the kwargs dictionary to retrieve the configuration instance.

For instance, the following code block shows how the train.py script reads the configuration instance for the Stable-Baselines3 library:

Code for train.py with SB3
# Copyright (c) 2022-2026, The Isaac Lab Project Developers (https://github.com/isaac-sim/IsaacLab/blob/main/CONTRIBUTORS.md).
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause


"""Script to train RL agent with Stable Baselines3."""

import argparse
import contextlib
import logging
import os
import random
import signal
import sys
import time
from datetime import datetime
from pathlib import Path

import gymnasium as gym
import numpy as np
from stable_baselines3 import PPO
from stable_baselines3.common.callbacks import CheckpointCallback, LogEveryNTimesteps
from stable_baselines3.common.vec_env import VecNormalize

from isaaclab.envs import DirectMARLEnvCfg, ManagerBasedRLEnvCfg
from isaaclab.utils.dict import print_dict
from isaaclab.utils.io import dump_yaml

from isaaclab_rl.sb3 import Sb3VecEnvWrapper, process_sb3_cfg

import isaaclab_tasks  # noqa: F401
from isaaclab_tasks.utils import add_launcher_args, launch_simulation, resolve_task_config

logger = logging.getLogger(__name__)

# PLACEHOLDER: Extension template (do not remove this comment)
with contextlib.suppress(ImportError):
    import isaaclab_tasks_experimental  # noqa: F401

# -- argparse ----------------------------------------------------------------
parser = argparse.ArgumentParser(description="Train an RL agent with Stable-Baselines3.")
parser.add_argument("--video", action="store_true", default=False, help="Record videos during training.")
parser.add_argument("--video_length", type=int, default=200, help="Length of the recorded video (in steps).")
parser.add_argument("--video_interval", type=int, default=2000, help="Interval between video recordings (in steps).")
parser.add_argument("--num_envs", type=int, default=None, help="Number of environments to simulate.")
parser.add_argument("--task", type=str, default=None, help="Name of the task.")
parser.add_argument(
    "--agent", type=str, default="sb3_cfg_entry_point", help="Name of the RL agent configuration entry point."
)
parser.add_argument("--seed", type=int, default=None, help="Seed used for the environment")
parser.add_argument("--log_interval", type=int, default=100_000, help="Log data every n timesteps.")
parser.add_argument("--checkpoint", type=str, default=None, help="Continue the training from checkpoint.")
parser.add_argument("--max_iterations", type=int, default=None, help="RL Policy training iterations.")
parser.add_argument("--export_io_descriptors", action="store_true", default=False, help="Export IO descriptors.")
parser.add_argument(
    "--keep_all_info",
    action="store_true",
    default=False,
    help="Use a slower SB3 wrapper but keep all the extra training info.",
)
parser.add_argument(
    "--ray-proc-id", "-rid", type=int, default=None, help="Automatically configured by Ray integration, otherwise None."
)
add_launcher_args(parser)
args_cli, hydra_args = parser.parse_known_args()

if args_cli.video:
    args_cli.enable_cameras = True

sys.argv = [sys.argv[0]] + hydra_args


def cleanup_pbar(*args):
    """
    A small helper to stop training and
    cleanup progress bar properly on ctrl+c
    """
    import gc

    tqdm_objects = [obj for obj in gc.get_objects() if "tqdm" in type(obj).__name__]
    for tqdm_object in tqdm_objects:
        if "tqdm_rich" in type(tqdm_object).__name__:
            tqdm_object.close()
    raise KeyboardInterrupt


signal.signal(signal.SIGINT, cleanup_pbar)


def main():
    """Train with stable-baselines agent."""
    env_cfg, agent_cfg = resolve_task_config(args_cli.task, args_cli.agent)
    with launch_simulation(env_cfg, args_cli):
        # randomly sample a seed if seed = -1
        if args_cli.seed == -1:
            args_cli.seed = random.randint(0, 10000)

        # override configurations with non-hydra CLI arguments
        env_cfg.scene.num_envs = args_cli.num_envs if args_cli.num_envs is not None else env_cfg.scene.num_envs
        agent_cfg["seed"] = args_cli.seed if args_cli.seed is not None else agent_cfg["seed"]
        # max iterations for training
        if args_cli.max_iterations is not None:
            agent_cfg["n_timesteps"] = args_cli.max_iterations * agent_cfg["n_steps"] * env_cfg.scene.num_envs

        # set the environment seed
        env_cfg.seed = agent_cfg["seed"]
        env_cfg.sim.device = args_cli.device if args_cli.device is not None else env_cfg.sim.device

        # directory for logging into
        run_info = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
        log_root_path = os.path.abspath(os.path.join("logs", "sb3", args_cli.task))
        print(f"[INFO] Logging experiment in directory: {log_root_path}")
        print(f"Exact experiment name requested from command line: {run_info}")
        log_dir = os.path.join(log_root_path, run_info)
        # dump the configuration into log-directory
        dump_yaml(os.path.join(log_dir, "params", "env.yaml"), env_cfg)
        dump_yaml(os.path.join(log_dir, "params", "agent.yaml"), agent_cfg)

        # save command used to run the script
        command = " ".join(sys.orig_argv)
        (Path(log_dir) / "command.txt").write_text(command)

        # post-process agent configuration
        agent_cfg = process_sb3_cfg(agent_cfg, env_cfg.scene.num_envs)
        # read configurations about the agent-training
        policy_arch = agent_cfg.pop("policy")
        n_timesteps = agent_cfg.pop("n_timesteps")

        # set the IO descriptors export flag if requested
        if isinstance(env_cfg, ManagerBasedRLEnvCfg):
            env_cfg.export_io_descriptors = args_cli.export_io_descriptors
        else:
            logger.warning(
                "IO descriptors are only supported for manager based RL environments."
                " No IO descriptors will be exported."
            )

        # set the log directory for the environment
        env_cfg.log_dir = log_dir

        # create isaac environment
        env = gym.make(args_cli.task, cfg=env_cfg, render_mode="rgb_array" if args_cli.video else None)

        # convert to single-agent instance if required by the RL algorithm
        if isinstance(env.unwrapped.cfg, DirectMARLEnvCfg):
            from isaaclab.envs import multi_agent_to_single_agent

            env = multi_agent_to_single_agent(env)

        # wrap for video recording
        if args_cli.video:
            video_kwargs = {
                "video_folder": os.path.join(log_dir, "videos", "train"),
                "step_trigger": lambda step: step % args_cli.video_interval == 0,
                "video_length": args_cli.video_length,
                "disable_logger": True,
            }
            print("[INFO] Recording videos during training.")
            print_dict(video_kwargs, nesting=4)
            env = gym.wrappers.RecordVideo(env, **video_kwargs)

        start_time = time.time()

        # wrap around environment for stable baselines
        env = Sb3VecEnvWrapper(env, fast_variant=not args_cli.keep_all_info)

        norm_keys = {"normalize_input", "normalize_value", "clip_obs"}
        norm_args = {}
        for key in norm_keys:
            if key in agent_cfg:
                norm_args[key] = agent_cfg.pop(key)

        if norm_args and norm_args.get("normalize_input"):
            print(f"Normalizing input, {norm_args=}")
            env = VecNormalize(
                env,
                training=True,
                norm_obs=norm_args["normalize_input"],
                norm_reward=norm_args.get("normalize_value", False),
                clip_obs=norm_args.get("clip_obs", 100.0),
                gamma=agent_cfg["gamma"],
                clip_reward=np.inf,
            )

        # create agent from stable baselines
        agent = PPO(policy_arch, env, verbose=1, tensorboard_log=log_dir, **agent_cfg)
        if args_cli.checkpoint is not None:
            agent = agent.load(args_cli.checkpoint, env, print_system_info=True)

        # callbacks for agent
        checkpoint_callback = CheckpointCallback(save_freq=1000, save_path=log_dir, name_prefix="model", verbose=2)
        callbacks = [checkpoint_callback, LogEveryNTimesteps(n_steps=args_cli.log_interval)]

        # train the agent
        with contextlib.suppress(KeyboardInterrupt):
            agent.learn(
                total_timesteps=n_timesteps,
                callback=callbacks,
                progress_bar=True,
                log_interval=None,
            )
        # save the final model
        agent.save(os.path.join(log_dir, "model"))
        print("Saving to:")
        print(os.path.join(log_dir, "model.zip"))

        if isinstance(env, VecNormalize):
            print("Saving normalization")
            env.save(os.path.join(log_dir, "model_vecnormalize.pkl"))

        print(f"Training time: {round(time.time() - start_time, 2)} seconds")

        # close the simulator
        env.close()


if __name__ == "__main__":
    main()

The --agent argument specifies which learning library's configuration to use: its value is the key used to retrieve the configuration instance from the kwargs dictionary. By passing a different value to --agent, you can select an alternate configuration instance.
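Conceptually, the lookup amounts to indexing the registered kwargs with the agent key. The sketch below is simplified: the real resolve_task_config helper queries the task's registry spec and also loads and instantiates the configuration, whereas here a plain dict and an illustrative helper stand in for that machinery:

```python
# Simplified sketch of the --agent lookup. The dictionary stands in for the
# kwargs passed to gym.register for a task; entries mirror the tutorial's
# Isaac-Cartpole-v0 registration.
task_kwargs = {
    "rsl_rl_cfg_entry_point": "agents.rsl_rl_ppo_cfg:CartpolePPORunnerCfg",
    "sb3_cfg_entry_point": "agents:sb3_ppo_cfg.yaml",
}


def get_agent_cfg_entry(kwargs: dict, agent_key: str) -> str:
    """Return the configuration entry point registered under `agent_key`."""
    entry = kwargs.get(agent_key)
    if entry is None:
        raise ValueError(f"No agent configuration registered under '{agent_key}'.")
    return entry


# "sb3_cfg_entry_point" is the default value of --agent in the SB3 train.py
print(get_agent_cfg_entry(task_kwargs, "sb3_cfg_entry_point"))  # agents:sb3_ppo_cfg.yaml
```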

The Code Execution

Since the RSL-RL library offers two configuration instances for the cartpole balancing task, we can use the --agent argument to select which one to use.

  • Training with the standard PPO configuration:

    # standard PPO training
    ./isaaclab.sh -p scripts/reinforcement_learning/rsl_rl/train.py --task Isaac-Cartpole-v0 --headless \
      --run_name ppo
    
  • Training with the PPO configuration with symmetry augmentation:

    # PPO training with symmetry augmentation
    ./isaaclab.sh -p scripts/reinforcement_learning/rsl_rl/train.py --task Isaac-Cartpole-v0 --headless \
      --agent rsl_rl_with_symmetry_cfg_entry_point \
      --run_name ppo_with_symmetry_data_augmentation
    
    # you can use hydra to disable symmetry augmentation but enable mirror loss computation
    ./isaaclab.sh -p scripts/reinforcement_learning/rsl_rl/train.py --task Isaac-Cartpole-v0 --headless \
      --agent rsl_rl_with_symmetry_cfg_entry_point \
      --run_name ppo_without_symmetry_data_augmentation \
      agent.algorithm.symmetry_cfg.use_data_augmentation=false
    

The --run_name argument specifies the name of the run, which is used to create the run's directory inside the logs/rsl_rl/cartpole directory.