Training with an RL Agent#
In the previous tutorials, we covered how to define an RL task environment, register it into the gym registry, and interact with it using a random agent. We now move on to the next step: training an RL agent to solve the task.
Although the envs.ManagerBasedRLEnv conforms to the gymnasium.Env interface, it is not exactly a gym environment. The inputs and outputs of the environment are not numpy arrays, but torch tensors whose first dimension is the number of environment instances.
Additionally, most RL libraries expect their own variation of the environment interface. For example, Stable-Baselines3 expects the environment to conform to its VecEnv API, which expects a list of numpy arrays instead of a single tensor. Similarly, RSL-RL, RL-Games and SKRL each expect a different interface. Since there is no one-size-fits-all solution, we do not base envs.ManagerBasedRLEnv on any particular learning library. Instead, we implement wrappers that convert the environment into the interface each library expects. These are specified in the omni.isaac.lab_tasks.utils.wrappers module.
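To make the interface difference concrete, the snippet below is a minimal sketch, not a complete script: it assumes the simulation app has already been launched through AppLauncher (as done in the training script later in this tutorial) and uses the cartpole configuration from the previous tutorials. The shapes in the comments are illustrative for 64 environment instances.
import gymnasium as gym
import numpy as np
import torch

import omni.isaac.lab_tasks  # noqa: F401
from omni.isaac.lab_tasks.manager_based.classic.cartpole.cartpole_env_cfg import CartpoleEnvCfg
from omni.isaac.lab_tasks.utils.wrappers.sb3 import Sb3VecEnvWrapper

env_cfg = CartpoleEnvCfg()
env_cfg.scene.num_envs = 64

# direct interaction: observations and rewards are torch tensors batched over environment instances
env = gym.make("Isaac-Cartpole-v0", cfg=env_cfg)
obs, _ = env.reset()  # dict of torch tensors, e.g. obs["policy"] has shape (64, obs_dim)
actions = torch.zeros(env.unwrapped.num_envs, 1, device=env.unwrapped.device)
obs, rew, terminated, truncated, extras = env.step(actions)  # rew has shape (64,)

# after wrapping: the Stable-Baselines3 VecEnv API with numpy arrays
vec_env = Sb3VecEnvWrapper(env)
np_obs = vec_env.reset()  # numpy array of shape (64, obs_dim)
np_obs, np_rew, dones, infos = vec_env.step(np.zeros((vec_env.num_envs, 1), dtype=np.float32))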
In this tutorial, we will use Stable-Baselines3 to train an RL agent to solve the cartpole balancing task.
Caution
Wrapping the environment with the respective learning framework's wrapper should happen at the end, i.e. after all other wrappers have been applied. This is because the learning framework's wrapper modifies the interpretation of the environment's APIs, which may no longer be compatible with gymnasium.Env.
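For instance, when recording videos, the intended order is the one shown below (a schematic sketch; the task name and video folder are illustrative):
env = gym.make("Isaac-Cartpole-v0", cfg=env_cfg, render_mode="rgb_array")
# gymnasium-compatible wrappers first
env = gym.wrappers.RecordVideo(env, video_folder="videos/train")
# learning-framework wrapper last, since it changes the environment's API
env = Sb3VecEnvWrapper(env)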
The Code#
For this tutorial, we use the training script from the Stable-Baselines3 workflow in the source/standalone/workflows/sb3 directory.
Code for train.py
# Copyright (c) 2022-2024, The Isaac Lab Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause

"""Script to train RL agent with Stable Baselines3.

Since Stable-Baselines3 does not support buffers living on GPU directly,
we recommend using smaller number of environments. Otherwise,
there will be significant overhead in GPU->CPU transfer.
"""

"""Launch Isaac Sim Simulator first."""

import argparse
import sys

from omni.isaac.lab.app import AppLauncher

# add argparse arguments
parser = argparse.ArgumentParser(description="Train an RL agent with Stable-Baselines3.")
parser.add_argument("--video", action="store_true", default=False, help="Record videos during training.")
parser.add_argument("--video_length", type=int, default=200, help="Length of the recorded video (in steps).")
parser.add_argument("--video_interval", type=int, default=2000, help="Interval between video recordings (in steps).")
parser.add_argument("--num_envs", type=int, default=None, help="Number of environments to simulate.")
parser.add_argument("--task", type=str, default=None, help="Name of the task.")
parser.add_argument("--seed", type=int, default=None, help="Seed used for the environment")
parser.add_argument("--max_iterations", type=int, default=None, help="RL Policy training iterations.")
# append AppLauncher cli args
AppLauncher.add_app_launcher_args(parser)
# parse the arguments
args_cli, hydra_args = parser.parse_known_args()
# always enable cameras to record video
if args_cli.video:
    args_cli.enable_cameras = True

# clear out sys.argv for Hydra
sys.argv = [sys.argv[0]] + hydra_args

# launch omniverse app
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app

"""Rest everything follows."""

import gymnasium as gym
import numpy as np
import os
import random
from datetime import datetime

from stable_baselines3 import PPO
from stable_baselines3.common.callbacks import CheckpointCallback
from stable_baselines3.common.logger import configure
from stable_baselines3.common.vec_env import VecNormalize

from omni.isaac.lab.envs import (
    DirectMARLEnv,
    DirectMARLEnvCfg,
    DirectRLEnvCfg,
    ManagerBasedRLEnvCfg,
    multi_agent_to_single_agent,
)
from omni.isaac.lab.utils.dict import print_dict
from omni.isaac.lab.utils.io import dump_pickle, dump_yaml

import omni.isaac.lab_tasks  # noqa: F401
from omni.isaac.lab_tasks.utils.hydra import hydra_task_config
from omni.isaac.lab_tasks.utils.wrappers.sb3 import Sb3VecEnvWrapper, process_sb3_cfg


@hydra_task_config(args_cli.task, "sb3_cfg_entry_point")
def main(env_cfg: ManagerBasedRLEnvCfg | DirectRLEnvCfg | DirectMARLEnvCfg, agent_cfg: dict):
    """Train with stable-baselines agent."""
    # randomly sample a seed if seed = -1
    if args_cli.seed == -1:
        args_cli.seed = random.randint(0, 10000)

    # override configurations with non-hydra CLI arguments
    env_cfg.scene.num_envs = args_cli.num_envs if args_cli.num_envs is not None else env_cfg.scene.num_envs
    agent_cfg["seed"] = args_cli.seed if args_cli.seed is not None else agent_cfg["seed"]
    # max iterations for training
    if args_cli.max_iterations is not None:
        agent_cfg["n_timesteps"] = args_cli.max_iterations * agent_cfg["n_steps"] * env_cfg.scene.num_envs

    # set the environment seed
    # note: certain randomizations occur in the environment initialization so we set the seed here
    env_cfg.seed = agent_cfg["seed"]
    env_cfg.sim.device = args_cli.device if args_cli.device is not None else env_cfg.sim.device

    # directory for logging into
    log_dir = os.path.join("logs", "sb3", args_cli.task, datetime.now().strftime("%Y-%m-%d_%H-%M-%S"))
    # dump the configuration into log-directory
    dump_yaml(os.path.join(log_dir, "params", "env.yaml"), env_cfg)
    dump_yaml(os.path.join(log_dir, "params", "agent.yaml"), agent_cfg)
    dump_pickle(os.path.join(log_dir, "params", "env.pkl"), env_cfg)
    dump_pickle(os.path.join(log_dir, "params", "agent.pkl"), agent_cfg)

    # post-process agent configuration
    agent_cfg = process_sb3_cfg(agent_cfg)
    # read configurations about the agent-training
    policy_arch = agent_cfg.pop("policy")
    n_timesteps = agent_cfg.pop("n_timesteps")

    # create isaac environment
    env = gym.make(args_cli.task, cfg=env_cfg, render_mode="rgb_array" if args_cli.video else None)
    # wrap for video recording
    if args_cli.video:
        video_kwargs = {
            "video_folder": os.path.join(log_dir, "videos", "train"),
            "step_trigger": lambda step: step % args_cli.video_interval == 0,
            "video_length": args_cli.video_length,
            "disable_logger": True,
        }
        print("[INFO] Recording videos during training.")
        print_dict(video_kwargs, nesting=4)
        env = gym.wrappers.RecordVideo(env, **video_kwargs)

    # convert to single-agent instance if required by the RL algorithm
    if isinstance(env.unwrapped, DirectMARLEnv):
        env = multi_agent_to_single_agent(env)

    # wrap around environment for stable baselines
    env = Sb3VecEnvWrapper(env)

    if "normalize_input" in agent_cfg:
        env = VecNormalize(
            env,
            training=True,
            norm_obs="normalize_input" in agent_cfg and agent_cfg.pop("normalize_input"),
            norm_reward="normalize_value" in agent_cfg and agent_cfg.pop("normalize_value"),
            clip_obs="clip_obs" in agent_cfg and agent_cfg.pop("clip_obs"),
            gamma=agent_cfg["gamma"],
            clip_reward=np.inf,
        )

    # create agent from stable baselines
    agent = PPO(policy_arch, env, verbose=1, **agent_cfg)
    # configure the logger
    new_logger = configure(log_dir, ["stdout", "tensorboard"])
    agent.set_logger(new_logger)

    # callbacks for agent
    checkpoint_callback = CheckpointCallback(save_freq=1000, save_path=log_dir, name_prefix="model", verbose=2)
    # train the agent
    agent.learn(total_timesteps=n_timesteps, callback=checkpoint_callback)
    # save the final model
    agent.save(os.path.join(log_dir, "model"))

    # close the simulator
    env.close()


if __name__ == "__main__":
    # run the main function
    main()
    # close sim app
    simulation_app.close()
The Code Explained#
Most of the code above is boilerplate for creating logging directories, saving the parsed configurations, and setting up the different Stable-Baselines3 components. For this tutorial, the important part is creating the environment and wrapping it with the Stable-Baselines3 wrapper.
There are three wrappers used in the code above:
gymnasium.wrappers.RecordVideo: This wrapper records a video of the environment and saves it to the specified directory. This is useful for visualizing the agent's behavior during training.
wrappers.sb3.Sb3VecEnvWrapper: This wrapper converts the environment into a Stable-Baselines3 compatible environment.
stable_baselines3.common.vec_env.VecNormalize: This wrapper normalizes the environment's observations and rewards.
Each of these wrappers wraps around the previous one following the pattern env = wrapper(env, *args, **kwargs). The final environment is then used to train the agent. For more information on how these wrappers work, please refer to the Wrapping environments documentation.
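As a condensed sketch of this chain as it appears in the script above (the VecNormalize settings shown here are illustrative, not the values shipped with the agent configuration):
# gymnasium-level wrapper: operates on the gymnasium.Env returned by gym.make
env = gym.wrappers.RecordVideo(env, video_folder=os.path.join(log_dir, "videos", "train"))
# conversion wrapper: turns the gymnasium.Env into a Stable-Baselines3 VecEnv
env = Sb3VecEnvWrapper(env)
# VecEnv-level wrapper: operates on the VecEnv produced by the previous step
env = VecNormalize(env, training=True, norm_obs=True, norm_reward=True, gamma=0.99, clip_reward=np.inf)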
The Code Execution#
We train a PPO agent from Stable-Baselines3 to solve the cartpole balancing task.
Training the agent#
There are three main ways to train the agent. Each has its own advantages and disadvantages, and it is up to you to decide which one you prefer based on your use case.
Headless execution#
If the --headless flag is set, the simulation is not rendered during training. This is useful when training on a remote server or when you do not want to see the simulation. Typically, it speeds up the training process since only the physics simulation step is performed.
./isaaclab.sh -p source/standalone/workflows/sb3/train.py --task Isaac-Cartpole-v0 --num_envs 64 --headless
Headless execution with off-screen render#
Since the above command does not render the simulation, it is not possible to visualize the agent's behavior during training. To visualize the agent's behavior, we pass the --enable_cameras flag, which enables off-screen rendering. Additionally, we pass the --video flag, which records a video of the agent's behavior during training.
./isaaclab.sh -p source/standalone/workflows/sb3/train.py --task Isaac-Cartpole-v0 --num_envs 64 --headless --video
The videos are saved to the logs/sb3/Isaac-Cartpole-v0/<run-dir>/videos/train
directory. You can open these videos
using any video player.
Interactive execution#
While the above two methods are useful for training the agent, they don't allow you to interact with the simulation to see what is happening. In this case, you can omit the --headless flag and run the training script as follows:
./isaaclab.sh -p source/standalone/workflows/sb3/train.py --task Isaac-Cartpole-v0 --num_envs 64
This will open the Isaac Sim window and you can see the agent training in the environment. However, this will slow down the training process since the simulation is rendered on the screen. As a workaround, you can switch between different render modes in the "Isaac Lab" window that is docked at the bottom-right corner of the screen. To learn more about these render modes, please check the sim.SimulationContext.RenderMode class.
Viewing the logs#
On a separate terminal, you can monitor the training progress by executing the following command:
# execute from the root directory of the repository
./isaaclab.sh -p -m tensorboard.main --logdir logs/sb3/Isaac-Cartpole-v0
Playing the trained agent#
Once the training is complete, you can visualize the trained agent by executing the following command:
# execute from the root directory of the repository
./isaaclab.sh -p source/standalone/workflows/sb3/play.py --task Isaac-Cartpole-v0 --num_envs 32 --use_last_checkpoint
The above command will load the latest checkpoint from the logs/sb3/Isaac-Cartpole-v0 directory. You can also load a specific checkpoint by passing the --checkpoint flag.
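For example, to play a particular checkpoint saved by the checkpoint callback during training (the run directory and step count below are placeholders; checkpoint files follow the model_<steps>_steps.zip naming derived from the name_prefix set in the training script):
# execute from the root directory of the repository
./isaaclab.sh -p source/standalone/workflows/sb3/play.py --task Isaac-Cartpole-v0 --num_envs 32 --checkpoint logs/sb3/Isaac-Cartpole-v0/<run-dir>/model_<steps>_steps.zip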