# Ray Caster
The Ray Caster sensor (and the ray caster camera) is similar to RTX-based rendering in that both involve casting rays. The difference is that the rays cast by the Ray Caster sensor return only collision information along the cast, and the direction of each individual ray can be specified. The rays do not bounce, nor are they affected by things like materials or opacity. For each ray specified by the sensor, a line is traced along the path of the ray and the location of the first collision with the specified mesh is returned. This is the method used by some of our quadruped examples to measure the local height field.
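Conceptually, each ray is a first-hit query against triangle geometry. The sketch below is a plain-Python Möller–Trumbore intersection test, shown purely for illustration (Isaac Lab's actual implementation runs in Warp on the device); it demonstrates the core operation performed for every ray:

```python
# Illustration only: first-hit ray casting against a triangle mesh,
# the operation the Ray Caster sensor performs (in Warp) for each ray.

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Moeller-Trumbore intersection test; returns hit distance t or None."""
    sub = lambda a, b: [a[i] - b[i] for i in range(3)]
    dot = lambda a, b: sum(a[i] * b[i] for i in range(3))
    cross = lambda a, b: [a[1] * b[2] - a[2] * b[1],
                          a[2] * b[0] - a[0] * b[2],
                          a[0] * b[1] - a[1] * b[0]]
    e1, e2 = sub(v1, v0), sub(v2, v0)
    h = cross(direction, e2)
    a = dot(e1, h)
    if abs(a) < eps:  # ray is parallel to the triangle plane
        return None
    f = 1.0 / a
    s = sub(origin, v0)
    u = f * dot(s, h)
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = f * dot(direction, q)
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * dot(e2, q)
    return t if t > eps else None


def cast_ray(origin, direction, triangles):
    """Trace one ray and return the nearest hit point, or None on a miss."""
    best = None
    for v0, v1, v2 in triangles:
        t = ray_triangle(origin, direction, v0, v1, v2)
        if t is not None and (best is None or t < best):
            best = t
    return None if best is None else [origin[i] + best * direction[i] for i in range(3)]


# A 1x1 ground patch at z=0, split into two triangles.
GROUND = [
    ((0, 0, 0), (1, 0, 0), (1, 1, 0)),
    ((0, 0, 0), (1, 1, 0), (0, 1, 0)),
]

# Cast straight down from half a meter above the patch.
print(cast_ray((0.25, 0.25, 0.5), (0.0, 0.0, -1.0), GROUND))  # hit near [0.25, 0.25, 0.0]
```

A real sensor simply repeats this query for every ray in its pattern and reports the hit points (or infinity on a miss).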
To keep the sensor performant when there are many cloned environments, the line tracing is done directly in Warp. This is why specific meshes need to be identified to cast against: that mesh data is loaded onto the device by Warp when the sensor is initialized. As a consequence, the current iteration of this sensor only works for static meshes (meshes that are not changed from the defaults specified in their USD file). This constraint will be removed in future releases.
Using a ray caster sensor requires a pattern and a parent xform to be attached to. The pattern defines how the rays are cast, while the prim properties define the orientation and position of the sensor (additional offsets can be specified for more exact placement). Isaac Lab supports a number of ray casting pattern configurations, including generic LIDAR and grid patterns.
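For intuition, a grid pattern boils down to a set of ray start offsets on a regular grid with a shared cast direction (typically straight down, as when measuring a local height field). The sketch below is illustrative only; the function name and details are assumptions, not Isaac Lab's actual GridPatternCfg implementation:

```python
# Sketch of a grid ray pattern: start offsets on a regular grid, all cast
# straight down (-z). Illustrative, not Isaac Lab's GridPatternCfg.

def grid_pattern(size_x, size_y, resolution):
    """Return (start_offsets, direction) for a size_x by size_y grid."""
    # round() guards against float artifacts like 0.6 / 0.2 -> 2.999...
    nx = round(size_x / resolution) + 1
    ny = round(size_y / resolution) + 1
    starts = [
        (-size_x / 2 + i * resolution, -size_y / 2 + j * resolution, 0.0)
        for i in range(nx)
        for j in range(ny)
    ]
    return starts, (0.0, 0.0, -1.0)


starts, direction = grid_pattern(1.0, 0.6, 0.2)
print(len(starts))  # 6 x 4 = 24 ray origins
```

At update time, the sensor transforms these offsets by the parent prim's pose and casts one ray per start point.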
```python
from isaaclab_assets.robots.anymal import ANYMAL_C_CFG  # isort: skip


@configclass
class RaycasterSensorSceneCfg(InteractiveSceneCfg):
    """Design the scene with sensors on the robot."""

    # ground plane
    ground = AssetBaseCfg(
        prim_path="/World/Ground",
        spawn=sim_utils.UsdFileCfg(
            usd_path=f"{ISAAC_NUCLEUS_DIR}/Environments/Terrains/rough_plane.usd",
            scale=(1, 1, 1),
        ),
    )

    # lights
    dome_light = AssetBaseCfg(
        prim_path="/World/Light", spawn=sim_utils.DomeLightCfg(intensity=3000.0, color=(0.75, 0.75, 0.75))
    )

    # robot
    robot = ANYMAL_C_CFG.replace(prim_path="{ENV_REGEX_NS}/Robot")

    ray_caster = RayCasterCfg(
        prim_path="{ENV_REGEX_NS}/Robot/base/lidar_cage",
        update_period=1 / 60,
        offset=RayCasterCfg.OffsetCfg(pos=(0, 0, 0.5)),
        mesh_prim_paths=["/World/Ground"],
        ray_alignment="yaw",
        pattern_cfg=patterns.LidarPatternCfg(
            channels=100, vertical_fov_range=[-90, 90], horizontal_fov_range=[-90, 90], horizontal_res=1.0
        ),
        debug_vis=not args_cli.headless,
    )
```
Notice that the units in the pattern config are in degrees! Also, we enable visualization here to explicitly show the pattern in the rendering, but this is not required and should be disabled when tuning for performance.
Querying the sensor for data can be done at simulation run time like any other sensor.
```python
def run_simulator(sim: sim_utils.SimulationContext, scene: InteractiveScene):
    ...
    # Simulate physics
    while simulation_app.is_running():
        ...
        # print information from the sensors
        print("-------------------------------")
        print(scene["ray_caster"])
        print("Ray cast hit results: ", scene["ray_caster"].data.ray_hits_w)
```
```text
-------------------------------
Ray-caster @ '/World/envs/env_.*/Robot/base/lidar_cage':
	view type            : <class 'isaacsim.core.prims.xform_prim.XFormPrim'>
	update period (s)    : 0.016666666666666666
	number of meshes     : 1
	number of sensors    : 1
	number of rays/sensor: 18000
	total number of rays : 18000

Ray cast hit results:  tensor([[[-0.3698,  0.0357,  0.0000],
         [-0.3698,  0.0357,  0.0000],
         [-0.3698,  0.0357,  0.0000],
         ...,
         [    inf,     inf,     inf],
         [    inf,     inf,     inf],
         [    inf,     inf,     inf]]], device='cuda:0')
-------------------------------
```
Here we can see the data returned by the sensor itself. Notice first that there are three nested brackets at the beginning and the end: this is because the data is batched over the number of sensors. The ray cast pattern itself has also been flattened, so the dimensions of the array are [N, B, 3], where N is the number of sensors, B is the number of cast rays in the pattern, and 3 is the dimension of the casting space. Finally, notice that the first several values in this casting pattern are the same: this is because the lidar pattern is spherical and we have specified our FOV to be hemispherical, which includes the poles. In this configuration, the "flattening pattern" becomes apparent: the first 180 entries are identical because they all sample the bottom pole of the hemisphere, and there are 180 of them because our horizontal FOV is 180 degrees with a resolution of 1 degree.
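The flattened layout and the repeated pole entries can be reproduced in a few lines of plain Python. The enumeration below (azimuth varying fastest within each channel) is one plausible ordering, not necessarily the exact one used by LidarPatternCfg, but it yields the same ray count that the sensor prints:

```python
import math

def lidar_directions(channels, vertical_fov, horizontal_fov, horizontal_res):
    """Enumerate unit ray directions for a spherical LIDAR-style pattern.
    All angles are in degrees, matching the config. The azimuth-fastest
    ordering here is an assumption about the flattening."""
    n_az = round((horizontal_fov[1] - horizontal_fov[0]) / horizontal_res)
    dirs = []
    for c in range(channels):
        # elevation sweeps pole to pole across the channels
        el = math.radians(
            vertical_fov[0] + (vertical_fov[1] - vertical_fov[0]) * c / max(channels - 1, 1)
        )
        for k in range(n_az):
            az = math.radians(horizontal_fov[0] + k * horizontal_res)
            dirs.append((math.cos(el) * math.cos(az),
                         math.cos(el) * math.sin(az),
                         math.sin(el)))
    return dirs


dirs = lidar_directions(100, (-90, 90), (-90, 90), 1.0)
print(len(dirs))  # 100 channels x 180 azimuth steps = 18000 rays
# The first 180 directions all point straight down (the -90 degree pole),
# which is why the first entries of the hit tensor are identical.
```

Note that this sketch excludes the +90 degree azimuth endpoint; including it would give 181 steps per channel, so the endpoint convention matters when predicting ray counts.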
You can use this script to experiment with pattern configurations and build an intuition about how the data is stored by altering the `triggered` variable in `run_simulator`.
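Once saved, the hit data can be post-processed offline. The sketch below uses synthetic stand-in data with the same [N, B, 3] layout so it runs without the simulator; the key point is that rays that miss every mesh report inf and should be masked out before computing statistics:

```python
import numpy as np

# Stand-in for np.load("cast_data.npy"): one sensor, three rays, shape [N, B, 3].
hits = np.array([[[-0.37, 0.04, 0.0],
                  [-0.37, 0.04, 0.0],
                  [np.inf, np.inf, np.inf]]])

valid = np.isfinite(hits).all(axis=-1)  # [N, B] mask: True where the ray hit a mesh
print("hit fraction:", valid.mean())    # 2 of 3 rays hit
heights = hits[valid][:, 2]             # z-coordinates of the valid hit points
print("mean ground height:", heights.mean())
```

Masking before averaging matters: a single inf from a missed ray would otherwise propagate through any reduction over the data.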
Code for raycaster_sensor.py

```python
# Copyright (c) 2022-2026, The Isaac Lab Project Developers (https://github.com/isaac-sim/IsaacLab/blob/main/CONTRIBUTORS.md).
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause

import argparse

from isaaclab.app import AppLauncher

# add argparse arguments
parser = argparse.ArgumentParser(description="Example on using the raycaster sensor.")
parser.add_argument("--num_envs", type=int, default=1, help="Number of environments to spawn.")
# append AppLauncher cli args
AppLauncher.add_app_launcher_args(parser)
# demos should open Kit visualizer by default
parser.set_defaults(visualizer=["kit"])
# parse the arguments
args_cli = parser.parse_args()

# launch omniverse app
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app

"""Rest everything follows."""

import numpy as np
import torch
import warp as wp

import isaaclab.sim as sim_utils
from isaaclab.assets import AssetBaseCfg
from isaaclab.scene import InteractiveScene, InteractiveSceneCfg
from isaaclab.sensors.ray_caster import RayCasterCfg, patterns
from isaaclab.utils import configclass
from isaaclab.utils.assets import ISAAC_NUCLEUS_DIR

##
# Pre-defined configs
##
from isaaclab_assets.robots.anymal import ANYMAL_C_CFG  # isort: skip


@configclass
class RaycasterSensorSceneCfg(InteractiveSceneCfg):
    """Design the scene with sensors on the robot."""

    # ground plane
    ground = AssetBaseCfg(
        prim_path="/World/Ground",
        spawn=sim_utils.UsdFileCfg(
            usd_path=f"{ISAAC_NUCLEUS_DIR}/Environments/Terrains/rough_plane.usd",
            scale=(1, 1, 1),
        ),
    )

    # lights
    dome_light = AssetBaseCfg(
        prim_path="/World/Light", spawn=sim_utils.DomeLightCfg(intensity=3000.0, color=(0.75, 0.75, 0.75))
    )

    # robot
    robot = ANYMAL_C_CFG.replace(prim_path="{ENV_REGEX_NS}/Robot")

    ray_caster = RayCasterCfg(
        prim_path="{ENV_REGEX_NS}/Robot/base/lidar_cage",
        update_period=1 / 60,
        offset=RayCasterCfg.OffsetCfg(pos=(0, 0, 0.5)),
        mesh_prim_paths=["/World/Ground"],
        ray_alignment="yaw",
        pattern_cfg=patterns.LidarPatternCfg(
            channels=100, vertical_fov_range=[-90, 90], horizontal_fov_range=[-90, 90], horizontal_res=1.0
        ),
        debug_vis=not args_cli.headless,
    )


def run_simulator(sim: sim_utils.SimulationContext, scene: InteractiveScene):
    """Run the simulator."""
    # Define simulation stepping
    sim_dt = sim.get_physics_dt()
    sim_time = 0.0
    count = 0

    triggered = True
    countdown = 42

    # Simulate physics
    while simulation_app.is_running():
        if count % 500 == 0:
            # reset counter
            count = 0
            # reset the scene entities
            # root state
            # we offset the root state by the origin since the states are written in simulation world frame
            # if this is not done, then the robots will be spawned at the (0, 0, 0) of the simulation world
            root_pose = wp.to_torch(scene["robot"].data.default_root_pose).clone()
            root_pose[:, :3] += scene.env_origins
            scene["robot"].write_root_pose_to_sim_index(root_pose=root_pose)
            root_vel = wp.to_torch(scene["robot"].data.default_root_vel).clone()
            scene["robot"].write_root_velocity_to_sim_index(root_velocity=root_vel)
            # set joint positions with some noise
            joint_pos, joint_vel = (
                wp.to_torch(scene["robot"].data.default_joint_pos).clone(),
                wp.to_torch(scene["robot"].data.default_joint_vel).clone(),
            )
            joint_pos += torch.rand_like(joint_pos) * 0.1
            scene["robot"].write_joint_position_to_sim_index(position=joint_pos)
            scene["robot"].write_joint_velocity_to_sim_index(velocity=joint_vel)
            # clear internal buffers
            scene.reset()
            print("[INFO]: Resetting robot state...")
        # Apply default actions to the robot
        # -- generate actions/commands
        targets = wp.to_torch(scene["robot"].data.default_joint_pos)
        # -- apply action to the robot
        scene["robot"].set_joint_position_target_index(target=targets)
        # -- write data to sim
        scene.write_data_to_sim()
        # perform step
        sim.step()
        # update sim-time
        sim_time += sim_dt
        count += 1
        # update buffers
        scene.update(sim_dt)

        # print information from the sensors
        print("-------------------------------")
        print(scene["ray_caster"])
        print("Ray cast hit results: ", scene["ray_caster"].data.ray_hits_w)

        if not triggered:
            if countdown > 0:
                countdown -= 1
                continue
            data = scene["ray_caster"].data.ray_hits_w.cpu().numpy()
            np.save("cast_data.npy", data)
            triggered = True
        else:
            continue


def main():
    """Main function."""

    # Initialize the simulation context
    sim_cfg = sim_utils.SimulationCfg(dt=0.005, device=args_cli.device)
    sim = sim_utils.SimulationContext(sim_cfg)
    # Set main camera
    sim.set_camera_view(eye=[3.5, 3.5, 3.5], target=[0.0, 0.0, 0.0])
    # design scene
    scene_cfg = RaycasterSensorSceneCfg(num_envs=args_cli.num_envs, env_spacing=2.0)
    scene = InteractiveScene(scene_cfg)
    # Play the simulator
    sim.reset()
    # Now we are ready!
    print("[INFO]: Setup complete...")
    # Run the simulator
    run_simulator(sim, scene)


if __name__ == "__main__":
    # run the main function
    main()
    # close sim app
    simulation_app.close()
```