Training with an RL Agent#
In the previous tutorials, we covered how to define an RL task environment, register
it into the gym registry, and interact with it using a random agent. We now move
on to the next step: training an RL agent to solve the task.
Although the envs.ManagerBasedRLEnv conforms to the gymnasium.Env interface,
it is not exactly a gym environment. Its inputs and outputs are
not numpy arrays, but torch tensors whose first dimension is the
number of environment instances.
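To make the batching convention concrete, the following sketch shows the shapes involved. This is an illustration only, not Isaac Lab code: the real environments return torch tensors, while numpy stands in here to keep the example dependency-free, and the dimensions are arbitrary.

```python
import numpy as np

# Hypothetical illustration of the batched data convention: a vectorized
# environment with num_envs instances returns data whose leading dimension
# is the environment count. Isaac Lab uses torch tensors; numpy stands in.
num_envs = 4
obs_dim = 3

# one observation per environment instance, stacked along dimension 0
obs_batch = np.zeros((num_envs, obs_dim))
# one scalar reward per environment instance
rew_batch = np.zeros(num_envs)

print(obs_batch.shape)  # (4, 3)
print(rew_batch.shape)  # (4,)
```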
Additionally, most RL libraries expect their own variation of an environment interface.
For example, Stable-Baselines3 expects the environment to conform to its
VecEnv API which expects a list of numpy arrays instead of a single tensor. Similarly,
RSL-RL, RL-Games, and SKRL each expect their own interface. Since there is no one-size-fits-all
solution, we do not base envs.ManagerBasedRLEnv on any particular learning library.
Instead, we implement wrappers to convert the environment into the expected interface.
These are specified in the isaaclab_rl module.
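The kind of conversion such a wrapper performs can be sketched as follows. This is a toy, not the real Sb3VecEnvWrapper: the `split_batch` helper is hypothetical, and numpy stands in for the torch-tensor-to-numpy conversion the actual wrapper does.

```python
import numpy as np

# Hypothetical sketch: converting a single batched array of shape
# (num_envs, obs_dim) into the per-environment numpy arrays that a
# VecEnv-style API works with. The real Sb3VecEnvWrapper performs the
# analogous conversion from torch tensors.
def split_batch(obs_batch: np.ndarray) -> list:
    """Split a (num_envs, obs_dim) batch into a list of per-env arrays."""
    return [obs_batch[i] for i in range(obs_batch.shape[0])]

obs_batch = np.arange(6.0).reshape(2, 3)  # 2 envs, 3-dim observations
per_env = split_batch(obs_batch)
print(len(per_env))         # 2
print(per_env[0].tolist())  # [0.0, 1.0, 2.0]
```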
In this tutorial, we will use Stable-Baselines3 to train an RL agent to solve the cartpole balancing task.
Caution
Wrapping the environment with the respective learning framework’s wrapper should happen at the end,
i.e. after all other wrappers have been applied. This is because the learning framework’s wrapper
modifies the interpretation of the environment’s APIs, which may no longer be compatible with gymnasium.Env.
The Code#
For this tutorial, we use the training script from the Stable-Baselines3 workflow in the
scripts/reinforcement_learning/sb3 directory.
Code for train.py
# Copyright (c) 2022-2026, The Isaac Lab Project Developers (https://github.com/isaac-sim/IsaacLab/blob/main/CONTRIBUTORS.md).
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause


"""Script to train RL agent with Stable Baselines3."""

import argparse
import contextlib
import logging
import os
import random
import signal
import sys
import time
from datetime import datetime
from pathlib import Path

import gymnasium as gym
import numpy as np
from stable_baselines3 import PPO
from stable_baselines3.common.callbacks import CheckpointCallback, LogEveryNTimesteps
from stable_baselines3.common.vec_env import VecNormalize

from isaaclab.envs import DirectMARLEnvCfg, ManagerBasedRLEnvCfg
from isaaclab.utils.dict import print_dict
from isaaclab.utils.io import dump_yaml

from isaaclab_rl.sb3 import Sb3VecEnvWrapper, process_sb3_cfg

import isaaclab_tasks  # noqa: F401
from isaaclab_tasks.utils import add_launcher_args, launch_simulation, resolve_task_config

logger = logging.getLogger(__name__)

# PLACEHOLDER: Extension template (do not remove this comment)
with contextlib.suppress(ImportError):
    import isaaclab_tasks_experimental  # noqa: F401

# -- argparse ----------------------------------------------------------------
parser = argparse.ArgumentParser(description="Train an RL agent with Stable-Baselines3.")
parser.add_argument("--video", action="store_true", default=False, help="Record videos during training.")
parser.add_argument("--video_length", type=int, default=200, help="Length of the recorded video (in steps).")
parser.add_argument("--video_interval", type=int, default=2000, help="Interval between video recordings (in steps).")
parser.add_argument("--num_envs", type=int, default=None, help="Number of environments to simulate.")
parser.add_argument("--task", type=str, default=None, help="Name of the task.")
parser.add_argument(
    "--agent", type=str, default="sb3_cfg_entry_point", help="Name of the RL agent configuration entry point."
)
parser.add_argument("--seed", type=int, default=None, help="Seed used for the environment")
parser.add_argument("--log_interval", type=int, default=100_000, help="Log data every n timesteps.")
parser.add_argument("--checkpoint", type=str, default=None, help="Continue the training from checkpoint.")
parser.add_argument("--max_iterations", type=int, default=None, help="RL Policy training iterations.")
parser.add_argument("--export_io_descriptors", action="store_true", default=False, help="Export IO descriptors.")
parser.add_argument(
    "--keep_all_info",
    action="store_true",
    default=False,
    help="Use a slower SB3 wrapper but keep all the extra training info.",
)
parser.add_argument(
    "--ray-proc-id", "-rid", type=int, default=None, help="Automatically configured by Ray integration, otherwise None."
)
add_launcher_args(parser)
args_cli, hydra_args = parser.parse_known_args()

if args_cli.video:
    args_cli.enable_cameras = True

sys.argv = [sys.argv[0]] + hydra_args


def cleanup_pbar(*args):
    """
    A small helper to stop training and
    cleanup progress bar properly on ctrl+c
    """
    import gc

    tqdm_objects = [obj for obj in gc.get_objects() if "tqdm" in type(obj).__name__]
    for tqdm_object in tqdm_objects:
        if "tqdm_rich" in type(tqdm_object).__name__:
            tqdm_object.close()
    raise KeyboardInterrupt


signal.signal(signal.SIGINT, cleanup_pbar)


def main():
    """Train with stable-baselines agent."""
    env_cfg, agent_cfg = resolve_task_config(args_cli.task, args_cli.agent)
    with launch_simulation(env_cfg, args_cli):
        # randomly sample a seed if seed = -1
        if args_cli.seed == -1:
            args_cli.seed = random.randint(0, 10000)

        # override configurations with non-hydra CLI arguments
        env_cfg.scene.num_envs = args_cli.num_envs if args_cli.num_envs is not None else env_cfg.scene.num_envs
        agent_cfg["seed"] = args_cli.seed if args_cli.seed is not None else agent_cfg["seed"]
        # max iterations for training
        if args_cli.max_iterations is not None:
            agent_cfg["n_timesteps"] = args_cli.max_iterations * agent_cfg["n_steps"] * env_cfg.scene.num_envs

        # set the environment seed
        env_cfg.seed = agent_cfg["seed"]
        env_cfg.sim.device = args_cli.device if args_cli.device is not None else env_cfg.sim.device

        # directory for logging into
        run_info = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
        log_root_path = os.path.abspath(os.path.join("logs", "sb3", args_cli.task))
        print(f"[INFO] Logging experiment in directory: {log_root_path}")
        print(f"Exact experiment name requested from command line: {run_info}")
        log_dir = os.path.join(log_root_path, run_info)
        # dump the configuration into log-directory
        dump_yaml(os.path.join(log_dir, "params", "env.yaml"), env_cfg)
        dump_yaml(os.path.join(log_dir, "params", "agent.yaml"), agent_cfg)

        # save command used to run the script
        command = " ".join(sys.orig_argv)
        (Path(log_dir) / "command.txt").write_text(command)

        # post-process agent configuration
        agent_cfg = process_sb3_cfg(agent_cfg, env_cfg.scene.num_envs)
        # read configurations about the agent-training
        policy_arch = agent_cfg.pop("policy")
        n_timesteps = agent_cfg.pop("n_timesteps")

        # set the IO descriptors export flag if requested
        if isinstance(env_cfg, ManagerBasedRLEnvCfg):
            env_cfg.export_io_descriptors = args_cli.export_io_descriptors
        else:
            logger.warning(
                "IO descriptors are only supported for manager based RL environments."
                " No IO descriptors will be exported."
            )

        # set the log directory for the environment
        env_cfg.log_dir = log_dir

        # create isaac environment
        env = gym.make(args_cli.task, cfg=env_cfg, render_mode="rgb_array" if args_cli.video else None)

        # convert to single-agent instance if required by the RL algorithm
        if isinstance(env.unwrapped.cfg, DirectMARLEnvCfg):
            from isaaclab.envs import multi_agent_to_single_agent

            env = multi_agent_to_single_agent(env)

        # wrap for video recording
        if args_cli.video:
            video_kwargs = {
                "video_folder": os.path.join(log_dir, "videos", "train"),
                "step_trigger": lambda step: step % args_cli.video_interval == 0,
                "video_length": args_cli.video_length,
                "disable_logger": True,
            }
            print("[INFO] Recording videos during training.")
            print_dict(video_kwargs, nesting=4)
            env = gym.wrappers.RecordVideo(env, **video_kwargs)

        start_time = time.time()

        # wrap around environment for stable baselines
        env = Sb3VecEnvWrapper(env, fast_variant=not args_cli.keep_all_info)

        norm_keys = {"normalize_input", "normalize_value", "clip_obs"}
        norm_args = {}
        for key in norm_keys:
            if key in agent_cfg:
                norm_args[key] = agent_cfg.pop(key)

        if norm_args and norm_args.get("normalize_input"):
            print(f"Normalizing input, {norm_args=}")
            env = VecNormalize(
                env,
                training=True,
                norm_obs=norm_args["normalize_input"],
                norm_reward=norm_args.get("normalize_value", False),
                clip_obs=norm_args.get("clip_obs", 100.0),
                gamma=agent_cfg["gamma"],
                clip_reward=np.inf,
            )

        # create agent from stable baselines
        agent = PPO(policy_arch, env, verbose=1, tensorboard_log=log_dir, **agent_cfg)
        if args_cli.checkpoint is not None:
            agent = agent.load(args_cli.checkpoint, env, print_system_info=True)

        # callbacks for agent
        checkpoint_callback = CheckpointCallback(save_freq=1000, save_path=log_dir, name_prefix="model", verbose=2)
        callbacks = [checkpoint_callback, LogEveryNTimesteps(n_steps=args_cli.log_interval)]

        # train the agent
        with contextlib.suppress(KeyboardInterrupt):
            agent.learn(
                total_timesteps=n_timesteps,
                callback=callbacks,
                progress_bar=True,
                log_interval=None,
            )
        # save the final model
        agent.save(os.path.join(log_dir, "model"))
        print("Saving to:")
        print(os.path.join(log_dir, "model.zip"))

        if isinstance(env, VecNormalize):
            print("Saving normalization")
            env.save(os.path.join(log_dir, "model_vecnormalize.pkl"))

        print(f"Training time: {round(time.time() - start_time, 2)} seconds")

        # close the simulator
        env.close()


if __name__ == "__main__":
    main()
The Code Explained#
Most of the code above is boilerplate to create logging directories, save the parsed configurations, and set up different Stable-Baselines3 components. For this tutorial, the important part is creating the environment and wrapping it with the Stable-Baselines3 wrapper.
There are three wrappers used in the code above:
- gymnasium.wrappers.RecordVideo: This wrapper records a video of the environment and saves it to the specified directory. This is useful for visualizing the agent’s behavior during training.
- wrappers.sb3.Sb3VecEnvWrapper: This wrapper converts the environment into a Stable-Baselines3 compatible environment.
- stable_baselines3.common.vec_env.VecNormalize: This wrapper normalizes the environment’s observations and rewards.
Each of these wrappers wraps around the previous one by following env = wrapper(env, *args, **kwargs)
repeatedly. The final environment is then used to train the agent. For more information on how these
wrappers work, please refer to the Wrapping environments documentation.
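This chaining pattern can be sketched with toy classes. These are illustrations only, not the real RecordVideo, Sb3VecEnvWrapper, or VecNormalize: each wrapper stores the previous environment and delegates to it, so repeated env = wrapper(env) calls build an onion of layers with the last-applied wrapper on the outside.

```python
# Hypothetical sketch of the wrapper-chaining pattern: each wrapper holds a
# reference to the environment it wraps and delegates calls inward.
class BaseEnv:
    def step(self):
        return 1.0  # toy reward


class DoubleRewardWrapper:
    def __init__(self, env):
        self.env = env

    def step(self):
        # delegate to the wrapped env, then transform its result
        return 2 * self.env.step()


class AddOneWrapper:
    def __init__(self, env):
        self.env = env

    def step(self):
        return 1 + self.env.step()


env = BaseEnv()
env = DoubleRewardWrapper(env)  # applied first, sits closest to the base env
env = AddOneWrapper(env)        # applied last, so it is the outermost layer
print(env.step())  # 1 + (2 * 1.0) = 3.0
```

This is also why the Caution above matters: the outermost (last-applied) wrapper defines the interface the agent sees, so the learning framework's wrapper must be applied last.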
The Code Execution#
We train a PPO agent from Stable-Baselines3 to solve the cartpole balancing task.
Training the agent#
There are three main ways to train the agent. Each has its own advantages and disadvantages, and it is up to you to decide which one you prefer based on your use case.
Headless execution#
When no visualizer is requested, no interactive visualizer window is opened during training. This is useful when training on a remote server or when you do not need live visual feedback, since rendering it adds some compute cost. Rendering can still be active for sensor/camera data capture when enabled by the workflow.
./isaaclab.sh -p scripts/reinforcement_learning/sb3/train.py --task Isaac-Cartpole-v0 --num_envs 64
Headless execution with off-screen render#
Since the above command does not open an interactive visualizer, it is not possible to monitor behavior
live in a viewport window. To capture visual output during training, enable camera/sensor rendering
in the workflow and pass --video to record the agent behavior.
./isaaclab.sh -p scripts/reinforcement_learning/sb3/train.py --task Isaac-Cartpole-v0 --num_envs 64 --video
The videos are saved to the logs/sb3/Isaac-Cartpole-v0/<run-dir>/videos/train directory. You can open these videos
using any video player.
Interactive execution#
While the above two methods are useful for training the agent, they don’t allow you to interact with the simulation to see what is happening. In this case, run the training script as follows:
./isaaclab.sh -p scripts/reinforcement_learning/sb3/train.py --task Isaac-Cartpole-v0 --num_envs 64 --viz kit
This will open the Kit visualizer window and you can see the agent training in the environment. However, this
can slow down the training process because interactive visual feedback is enabled. As a workaround, you
can switch between different render modes in the "Isaac Lab" window that is docked on the bottom-right
corner of the screen. To learn more about these render modes, please check the
sim.SimulationContext.RenderMode class.
Viewing the logs#
On a separate terminal, you can monitor the training progress by executing the following command:
# execute from the root directory of the repository
./isaaclab.sh -p -m tensorboard.main --logdir logs/sb3/Isaac-Cartpole-v0
Playing the trained agent#
Once the training is complete, you can visualize the trained agent by executing the following command:
# execute from the root directory of the repository
./isaaclab.sh -p scripts/reinforcement_learning/sb3/play.py --task Isaac-Cartpole-v0 --num_envs 32 --use_last_checkpoint --viz kit
The above command will load the latest checkpoint from the logs/sb3/Isaac-Cartpole-v0
directory. You can also specify a specific checkpoint by passing the --checkpoint flag.