isaaclab.sensors#
Sub-package containing various sensor class implementations.
This subpackage contains the sensor classes that are compatible with Isaac Sim. We include both USD-based and custom sensors:
USD-prim sensors: Available in Omniverse and require creating a USD prim for them. For instance, RTX ray tracing camera and lidar sensors.
USD-schema sensors: Available in Omniverse and require creating a USD schema on an existing prim. For instance, contact sensors and frame transformers.
Custom sensors: Implemented in Python and do not require creating any USD prim or schema. For instance, warp-based ray-casters.
Due to the above categorization, the prim paths passed to the sensor’s configuration class are interpreted differently based on the sensor type. The following table summarizes the interpretation of the prim paths for different sensor types:
| Sensor Type | Example Prim Path | Pre-check |
|---|---|---|
| Camera | /World/robot/base/camera | Leaf is available, and it will spawn a USD camera |
| Contact Sensor | /World/robot/feet_* | Leaf is available and checks if the schema exists |
| Ray Caster | /World/robot/base | Leaf exists and is a physics body (Articulation / Rigid Body) |
| Frame Transformer | /World/robot/base | Leaf exists and is a physics body (Articulation / Rigid Body) |
| Imu | /World/robot/base | Leaf exists and is a physics body (Rigid Body) |
Submodules
- Sub-module for ray-casting patterns used by the ray-caster.
Classes
- The base class for implementing a sensor.
- Configuration parameters for a sensor.
- The camera sensor for acquiring visual data.
- Data container for the camera sensor.
- Configuration for a camera sensor.
- Configuration for a tiled rendering-based camera sensor.
- Factory for creating contact sensor instances.
- Factory for creating contact sensor data instances.
- Configuration for the contact sensor.
- Factory for creating frame transformer instances.
- Factory for creating frame transformer data instances.
- Configuration for the frame transformer sensor.
- A ray-casting sensor.
- Data container for the ray-cast sensor.
- Configuration for the ray-cast sensor.
- A ray-casting camera sensor.
- Configuration for the ray-cast camera sensor.
- A multi-mesh ray-casting sensor.
- Data container for the multi-mesh ray-cast sensor.
- Configuration for the multi-mesh ray-cast sensor.
- A multi-mesh ray-casting camera sensor.
- Configuration for the multi-mesh ray-cast camera sensor.
- Factory for creating IMU sensor instances.
- Configuration for an Inertial Measurement Unit (IMU) sensor.
Sensor Base#
- class isaaclab.sensors.SensorBase[source]#
The base class for implementing a sensor.
The implementation is based on lazy evaluation: the sensor data is only updated when the user accesses it through the `data` property or sets `force_compute=True` in the `update()` method. This avoids unnecessary computation when the sensor data is not used.
The sensor is updated at the specified update period. If the update period is zero, the sensor is updated at every simulation step.
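The lazy-evaluation pattern can be sketched in plain Python. This toy class is illustrative only, not the actual `SensorBase` implementation:

```python
class LazySensor:
    """Minimal sketch of the lazy-evaluation pattern described above."""

    def __init__(self, update_period: float = 0.0):
        self.update_period = update_period
        self._timestamp = 0.0
        self._last_update = -float("inf")
        self._data = None
        self.compute_count = 0

    def update(self, dt: float, force_compute: bool = False):
        # advance time; only compute eagerly if explicitly forced
        self._timestamp += dt
        if force_compute:
            self._compute()

    def _compute(self):
        # refresh the buffers only if the update period has elapsed
        if self._timestamp - self._last_update >= self.update_period:
            self.compute_count += 1
            self._data = f"measurement@{self._timestamp:.2f}"
            self._last_update = self._timestamp

    @property
    def data(self):
        # buffers are refreshed only on access
        self._compute()
        return self._data


sensor = LazySensor(update_period=0.1)
for _ in range(10):
    sensor.update(dt=0.01)   # stepping alone does not recompute
print(sensor.compute_count)  # 0: data was never accessed
_ = sensor.data              # first access triggers a compute
print(sensor.compute_count)  # 1
```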
Methods:
- `__init__(cfg)`: Initialize the sensor class.
- `set_debug_vis(debug_vis)`: Sets whether to visualize the sensor data.
- `reset([env_ids, env_mask])`: Resets the sensor internals.
Attributes:
Whether the sensor is initialized.
Number of instances of the sensor.
Memory device for computation.
Data from the sensor.
Whether the sensor has a debug visualization implemented.
- __init__(cfg: SensorBaseCfg)[source]#
Initialize the sensor class.
- Parameters:
cfg – The configuration parameters for the sensor.
- property is_initialized: bool#
Whether the sensor is initialized.
Returns True if the sensor is initialized, False otherwise.
- property num_instances: int#
Number of instances of the sensor.
This is equal to the number of sensors per environment multiplied by the number of environments.
- abstract property data: Any#
Data from the sensor.
This property is only updated when the user tries to access the data. This is done to avoid unnecessary computation when the sensor data is not used.
For updating the sensor when this property is accessed, you can use the following code snippet in your sensor implementation:
```python
# update sensors if needed
self._update_outdated_buffers()
# return the data (where `_data` is the data for the sensor)
return self._data
```
- property has_debug_vis_implementation: bool#
Whether the sensor has a debug visualization implemented.
- set_debug_vis(debug_vis: bool) bool[source]#
Sets whether to visualize the sensor data.
- Parameters:
debug_vis – Whether to visualize the sensor data.
- Returns:
Whether the debug visualization was successfully set. False if the sensor does not support debug visualization.
- reset(env_ids: Sequence[int] | None = None, env_mask: wp.array | None = None) None[source]#
Resets the sensor internals.
- Parameters:
env_ids – The environment indices to reset. Defaults to None, in which case all environments are reset.
env_mask – A boolean warp array indicating which environments to reset. If provided, takes priority over `env_ids`. Defaults to None.
- class isaaclab.sensors.SensorBaseCfg[source]#
Configuration parameters for a sensor.
Attributes:
Prim path (or expression) to the sensor.
Update period of the sensor buffers (in seconds).
Whether to visualize the sensor.
- prim_path: str#
Prim path (or expression) to the sensor.
Note
The expression can contain the environment namespace regex `{ENV_REGEX_NS}`, which will be replaced with the environment namespace.
Example: `{ENV_REGEX_NS}/Robot/sensor` will be replaced with `/World/envs/env_.*/Robot/sensor`.
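The substitution described in the note amounts to a simple string replacement. This is a sketch for illustration only; the real resolution happens inside the framework:

```python
# Illustrative only: the framework performs this substitution internally.
ENV_REGEX_NS = "/World/envs/env_.*"  # the environment namespace expression

prim_path = "{ENV_REGEX_NS}/Robot/sensor".replace("{ENV_REGEX_NS}", ENV_REGEX_NS)
print(prim_path)  # /World/envs/env_.*/Robot/sensor
```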
USD Camera#
- class isaaclab.sensors.Camera[source]#
Bases: `SensorBase`
The camera sensor for acquiring visual data.
This class wraps over the UsdGeom Camera for providing a consistent API for acquiring visual data. It ensures that the camera follows the ROS convention for the coordinate system.
Summarizing from the replicator extension, the following sensor types are supported:
"rgb": A 3-channel rendered color image."rgba": A 4-channel rendered color image with alpha channel."albedo": A 4-channel fast diffuse-albedo only path for color image. Note that this path will achieve the best performance when used alone or with depth only."distance_to_camera": An image containing the distance to camera optical center."distance_to_image_plane": An image containing distances of 3D points from camera plane along camera’s z-axis."depth": The same as"distance_to_image_plane"."simple_shading_constant_diffuse": Simple shading (constant diffuse) RGB approximation."simple_shading_diffuse_mdl": Simple shading (diffuse MDL) RGB approximation."simple_shading_full_mdl": Simple shading (full MDL) RGB approximation."normals": An image containing the local surface normal vectors at each pixel."motion_vectors": An image containing the motion vector data at each pixel."semantic_segmentation": The semantic segmentation data."instance_segmentation_fast": The instance segmentation data."instance_id_segmentation_fast": The instance id segmentation data.
Note
Currently the following sensor types are not supported in a “view” format:
"instance_segmentation": The instance segmentation data. Please use the fast counterparts instead."instance_id_segmentation": The instance id segmentation data. Please use the fast counterparts instead."bounding_box_2d_tight": The tight 2D bounding box data (only contains non-occluded regions)."bounding_box_2d_tight_fast": The tight 2D bounding box data (only contains non-occluded regions)."bounding_box_2d_loose": The loose 2D bounding box data (contains occluded regions)."bounding_box_2d_loose_fast": The loose 2D bounding box data (contains occluded regions)."bounding_box_3d": The 3D view space bounding box data."bounding_box_3d_fast": The 3D view space bounding box data.
Attributes:
The configuration parameters.
The set of sensor types that are not supported by the camera class.
Number of instances of the sensor.
Data from the sensor.
Frame number when the measurement took place.
The path of the render products for the cameras.
A tuple containing (height, width) of the camera sensor.
Memory device for computation.
Whether the sensor has a debug visualization implemented.
Whether the sensor is initialized.
Methods:
- `__init__(cfg)`: Initializes the camera sensor.
- `set_intrinsic_matrices(matrices[, ...])`: Set parameters of the USD camera from its intrinsic matrix.
- `set_world_poses([positions, orientations, ...])`: Set the pose of the camera w.r.t. the world frame.
- `set_world_poses_from_view(eyes, targets[, ...])`: Set the poses of the camera from the eye position and look-at target position.
- `reset([env_ids, env_mask])`: Resets the sensor internals.
- `set_debug_vis(debug_vis)`: Sets whether to visualize the sensor data.
- UNSUPPORTED_TYPES: set[str] = {'bounding_box_2d_loose', 'bounding_box_2d_loose_fast', 'bounding_box_2d_tight', 'bounding_box_2d_tight_fast', 'bounding_box_3d', 'bounding_box_3d_fast', 'instance_id_segmentation', 'instance_segmentation'}#
The set of sensor types that are not supported by the camera class.
- __init__(cfg: CameraCfg)[source]#
Initializes the camera sensor.
- Parameters:
cfg – The configuration parameters.
- Raises:
RuntimeError – If no camera prim is found at the given path.
ValueError – If the provided data types are not supported by the camera.
- property num_instances: int#
Number of instances of the sensor.
This is equal to the number of sensors per environment multiplied by the number of environments.
- property data: CameraData#
Data from the sensor.
This property is only updated when the user tries to access the data. This is done to avoid unnecessary computation when the sensor data is not used.
For updating the sensor when this property is accessed, you can use the following code snippet in your sensor implementation:
```python
# update sensors if needed
self._update_outdated_buffers()
# return the data (where `_data` is the data for the sensor)
return self._data
```
- property frame: torch.tensor#
Frame number when the measurement took place.
- property render_product_paths: list[str]#
The path of the render products for the cameras.
This can be used via replicator interfaces to attach to writes or external annotator registry.
- set_intrinsic_matrices(matrices: torch.Tensor, focal_length: float | None = None, env_ids: Sequence[int] | None = None)[source]#
Set parameters of the USD camera from its intrinsic matrix.
The intrinsic matrix is used to set the following parameters to the USD camera:
- `focal_length`: The focal length of the camera.
- `horizontal_aperture`: The horizontal aperture of the camera.
- `vertical_aperture`: The vertical aperture of the camera.
- `horizontal_aperture_offset`: The horizontal offset of the camera.
- `vertical_aperture_offset`: The vertical offset of the camera.
Warning
Due to limitations of the Omniverse camera, we need to assume that the camera has a spherical lens, i.e. square pixels, and that the optical center is at the camera eye. If this assumption does not hold for the input intrinsic matrix, the camera will not be set up correctly.
- Parameters:
matrices – The intrinsic matrices for the camera. Shape is (N, 3, 3).
focal_length – Perspective focal length (in cm) used to calculate pixel size. Defaults to None. If None, the focal length is calculated as 1 / width.
env_ids – The sensor ids to manipulate. Defaults to None, which means all sensor indices.
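The relationship between an intrinsic matrix and the USD camera parameters follows the standard pinhole model. The NumPy sketch below shows that arithmetic; the actual method may differ in details such as unit handling, and the offset formulas are an assumption:

```python
import numpy as np

# A 3x3 pinhole intrinsic matrix: fx, fy on the diagonal, principal point (cx, cy).
K = np.array([[600.0, 0.0, 160.0],
              [0.0, 600.0, 120.0],
              [0.0, 0.0, 1.0]])
width, height = 320, 240
focal_length = 1.0 / width  # the documented default when `focal_length` is None

fx, fy = K[0, 0], K[1, 1]
cx, cy = K[0, 2], K[1, 2]

# Pinhole relation: fx = focal_length * width / horizontal_aperture, and likewise for fy.
horizontal_aperture = focal_length * width / fx
vertical_aperture = focal_length * height / fy
# Aperture offsets capture a principal point away from the image center
# (zero here, since cx and cy sit exactly at the center).
horizontal_aperture_offset = (cx - width / 2) * focal_length / fx
vertical_aperture_offset = (cy - height / 2) * focal_length / fy
```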
- set_world_poses(positions: torch.Tensor | None = None, orientations: torch.Tensor | None = None, env_ids: Sequence[int] | None = None, convention: Literal['opengl', 'ros', 'world'] = 'ros')[source]#
Set the pose of the camera w.r.t. the world frame using specified convention.
Since different fields use different conventions for camera orientations, the method allows users to set the camera poses in the specified convention. Possible conventions are:
"opengl"- forward axis: -Z - up axis +Y - Offset is applied in the OpenGL (Usd.Camera) convention"ros"- forward axis: +Z - up axis -Y - Offset is applied in the ROS convention"world"- forward axis: +X - up axis +Z - Offset is applied in the World Frame convention
See `isaaclab.sensors.camera.utils.convert_camera_frame_orientation_convention()` for more details on the conventions.
- Parameters:
positions – The cartesian coordinates (in meters). Shape is (N, 3). Defaults to None, in which case the camera position is not changed.
orientations – The quaternion orientation in (x, y, z, w). Shape is (N, 4). Defaults to None, in which case the camera orientation is not changed.
env_ids – The sensor ids to manipulate. Defaults to None, which means all sensor indices.
convention – The convention in which the poses are fed. Defaults to "ros".
- Raises:
RuntimeError – If the camera prim is not set. Need to call the `initialize()` method first.
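The three conventions differ only by a fixed rotation of the camera frame. For example, a 180-degree rotation about the camera's X axis maps the OpenGL frame (-Z forward, +Y up) onto the ROS frame (+Z forward, -Y up). A minimal NumPy sketch follows; the composition order when applying this to a world pose is an assumption, and the utility referenced above is authoritative:

```python
import numpy as np

def quat_mul(q1, q2):
    """Hamilton product of quaternions stored as (x, y, z, w)."""
    x1, y1, z1, w1 = q1
    x2, y2, z2, w2 = q2
    return np.array([
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 + y1 * w2 + z1 * x2 - x1 * z2,
        w1 * z2 + z1 * w2 + x1 * y2 - y1 * x2,
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
    ])

# 180 degrees about X: (x, y, z, w) = (sin 90, 0, 0, cos 90) = (1, 0, 0, 0).
Q_OPENGL_TO_ROS = np.array([1.0, 0.0, 0.0, 0.0])

q_world_opengl = np.array([0.0, 0.0, 0.0, 1.0])  # identity orientation, OpenGL convention
q_world_ros = quat_mul(q_world_opengl, Q_OPENGL_TO_ROS)
```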
- set_world_poses_from_view(eyes: torch.Tensor, targets: torch.Tensor, env_ids: Sequence[int] | None = None)[source]#
Set the poses of the camera from the eye position and look-at target position.
- Parameters:
eyes – The positions of the camera’s eye. Shape is (N, 3).
targets – The target locations to look at. Shape is (N, 3).
env_ids – The sensor ids to manipulate. Defaults to None, which means all sensor indices.
- Raises:
RuntimeError – If the camera prim is not set. Need to call the `initialize()` method first.
NotImplementedError – If the stage up-axis is not "Y" or "Z".
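The look-at construction underneath can be sketched with basic vector algebra. This assumes the "world" convention (forward +X, up +Z) and a Z-up stage; the actual method additionally handles batching and the stage up-axis:

```python
import numpy as np

def look_at_rotation(eye: np.ndarray, target: np.ndarray,
                     up_hint: np.ndarray = np.array([0.0, 0.0, 1.0])) -> np.ndarray:
    """Rotation matrix whose columns are (forward, left, up) for a camera at
    `eye` looking at `target`, in the world convention (forward +X, up +Z)."""
    forward = target - eye
    forward = forward / np.linalg.norm(forward)
    left = np.cross(up_hint, forward)
    left = left / np.linalg.norm(left)
    up = np.cross(forward, left)
    return np.stack([forward, left, up], axis=1)

# Camera at height 1 m looking along +X: the identity rotation in this convention.
R = look_at_rotation(np.array([0.0, 0.0, 1.0]), np.array([2.0, 0.0, 1.0]))
```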
- reset(env_ids: Sequence[int] | None = None, env_mask: wp.array | None = None)[source]#
Resets the sensor internals.
- Parameters:
env_ids – The environment indices to reset. Defaults to None, in which case all environments are reset.
env_mask – A boolean warp array indicating which environments to reset. If provided, takes priority over `env_ids`. Defaults to None.
- property has_debug_vis_implementation: bool#
Whether the sensor has a debug visualization implemented.
- class isaaclab.sensors.CameraData[source]#
Data container for the camera sensor.
Attributes:
Position of the sensor origin in world frame, following ROS convention.
Quaternion orientation (x, y, z, w) of the sensor origin in world frame, following the world coordinate frame
A tuple containing (height, width) of the camera sensor.
The intrinsic matrices for the camera.
The retrieved sensor data with sensor types as key.
The retrieved sensor info with sensor types as key.
Quaternion orientation (x, y, z, w) of the sensor origin in the world frame, following ROS convention.
Quaternion orientation (x, y, z, w) of the sensor origin in the world frame, following Opengl / USD Camera convention.
- pos_w: torch.Tensor = None#
Position of the sensor origin in world frame, following ROS convention.
Shape is (N, 3) where N is the number of sensors.
- quat_w_world: torch.Tensor = None#
Quaternion orientation (x, y, z, w) of the sensor origin in world frame, following the world coordinate frame
Note
World frame convention follows the camera aligned with forward axis +X and up axis +Z.
Shape is (N, 4) where N is the number of sensors.
- intrinsic_matrices: torch.Tensor = None#
The intrinsic matrices for the camera.
Shape is (N, 3, 3) where N is the number of sensors.
- output: dict[str, torch.Tensor] = None#
The retrieved sensor data with sensor types as key.
The format of the data is available in the Replicator Documentation. For semantic-based data, this corresponds to the `"data"` key in the output of the sensor.
- info: list[dict[str, Any]] = None#
The retrieved sensor info with sensor types as key.
This contains extra information provided by the sensor, such as the semantic segmentation label mapping and prim paths. For semantic-based data, this corresponds to the `"info"` key in the output of the sensor. For other sensor types, the info is empty.
- property quat_w_ros: torch.Tensor#
Quaternion orientation (x, y, z, w) of the sensor origin in the world frame, following ROS convention.
Note
ROS convention follows the camera aligned with forward axis +Z and up axis -Y.
Shape is (N, 4) where N is the number of sensors.
- property quat_w_opengl: torch.Tensor#
Quaternion orientation (x, y, z, w) of the sensor origin in the world frame, following Opengl / USD Camera convention.
Note
OpenGL convention follows the camera aligned with forward axis -Z and up axis +Y.
Shape is (N, 4) where N is the number of sensors.
- class isaaclab.sensors.CameraCfg[source]#
Bases: `SensorBaseCfg`
Configuration for a camera sensor.
Attributes:
The offset pose of the sensor's frame from the sensor's parent frame.
Spawn configuration for the asset.
Clipping behavior for the camera for values that exceed the maximum value.
List of sensor names/types to enable for the camera.
Width of the image in pixels.
Prim path (or expression) to the sensor.
Update period of the sensor buffers (in seconds).
Whether to visualize the sensor.
Height of the image in pixels.
Whether to update the latest camera pose when fetching the camera's data.
A string or a list specifying a semantic filter predicate.
Whether to colorize the semantic segmentation images.
Whether to colorize the instance ID segmentation images.
Whether to colorize the instance segmentation images.
Dictionary mapping semantics to specific colours
Renderer configuration for camera sensor.
- offset: OffsetCfg#
The offset pose of the sensor’s frame from the sensor’s parent frame. Defaults to identity.
Note
The parent frame is the frame the sensor attaches to. For example, the parent frame of a camera at path `/World/envs/env_0/Robot/Camera` is `/World/envs/env_0/Robot`.
- spawn: PinholeCameraCfg | FisheyeCameraCfg | None#
Spawn configuration for the asset.
If None, then the prim is not spawned by the asset. Instead, it is assumed that the asset is already present in the scene.
- depth_clipping_behavior: Literal['max', 'zero', 'none']#
Clipping behavior for the camera for values that exceed the maximum value. Defaults to "none".
- `"max"`: Values are clipped to the maximum value.
- `"zero"`: Values are clipped to zero.
- `"none"`: No clipping is applied. Values will be returned as `inf`.
- data_types: list[str]#
List of sensor names/types to enable for the camera. Defaults to [“rgb”].
Please refer to the `Camera` class for a list of available data types.
- prim_path: str#
Prim path (or expression) to the sensor.
Note
The expression can contain the environment namespace regex `{ENV_REGEX_NS}`, which will be replaced with the environment namespace.
Example: `{ENV_REGEX_NS}/Robot/sensor` will be replaced with `/World/envs/env_.*/Robot/sensor`.
- update_period: float#
Update period of the sensor buffers (in seconds). Defaults to 0.0 (update every step).
- update_latest_camera_pose: bool#
Whether to update the latest camera pose when fetching the camera’s data. Defaults to False.
If True, the latest camera pose is updated in the camera's data, which will slow down performance due to the use of `XformPrimView`. If False, the pose of the camera during initialization is returned.
- semantic_filter: str | list[str]#
A string or a list specifying a semantic filter predicate. Defaults to `"*:*"`.
If a string, it should be a disjunctive normal form of (semantic type, labels). For example:
- `"typeA : labelA & !labelB | labelC , typeB: labelA ; typeC: labelE"`: All prims with semantic type "typeA" and label "labelA" but not "labelB", or with label "labelC". Also, all prims with semantic type "typeB" and label "labelA", or with semantic type "typeC" and label "labelE".
- `"typeA : * ; * : labelA"`: All prims with semantic type "typeA" or with label "labelA".
If a list of strings, each string should be a semantic type. The segmentation for prims with semantics of the specified types will be retrieved. For example, if the list is [“class”], only the segmentation for prims with semantics of type “class” will be retrieved.
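Two configuration sketches for the filter forms described above; the label names are hypothetical:

```python
# Predicate-string form: prims of type "class" labelled "robot" or "table".
semantic_filter = "class : robot | table"

# Type-list form: retrieve segmentation for all prims carrying "class" semantics.
semantic_filter = ["class"]
```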
See also
For more information on the semantics filter, see the documentation on Replicator Semantics Schema Editor.
- colorize_semantic_segmentation: bool#
Whether to colorize the semantic segmentation images. Defaults to True.
If True, semantic segmentation is converted to an image where semantic IDs are mapped to colors and returned as a `uint8` 4-channel array. If False, the output is returned as an `int32` array.
- colorize_instance_id_segmentation: bool#
Whether to colorize the instance ID segmentation images. Defaults to True.
If True, instance id segmentation is converted to an image where instance IDs are mapped to colors and returned as a `uint8` 4-channel array. If False, the output is returned as an `int32` array.
- colorize_instance_segmentation: bool#
Whether to colorize the instance segmentation images. Defaults to True.
If True, instance segmentation is converted to an image where instance IDs are mapped to colors and returned as a `uint8` 4-channel array. If False, the output is returned as an `int32` array.
- semantic_segmentation_mapping: dict#
Dictionary mapping semantics to specific colours.
Example:
```python
{
    "class:cube_1": (255, 36, 66, 255),
    "class:cube_2": (255, 184, 48, 255),
    "class:cube_3": (55, 255, 139, 255),
    "class:table": (255, 237, 218, 255),
    "class:ground": (100, 100, 100, 255),
    "class:robot": (61, 178, 255, 255),
}
```
- renderer_cfg: RendererCfg#
Renderer configuration for camera sensor.
Tile-Rendered USD Camera#
- class isaaclab.sensors.TiledCamera[source]#
Bases: `Camera`
The tiled rendering based camera sensor for acquiring the same data as the `Camera` class.
Attributes:
The configuration parameters.
The set of sensor types that are not supported by the camera class.
Data from the sensor.
Memory device for computation.
Frame number when the measurement took place.
Whether the sensor has a debug visualization implemented.
A tuple containing (height, width) of the camera sensor.
Whether the sensor is initialized.
Number of instances of the sensor.
The path of the render products for the cameras.
Methods:
- `__init__(cfg)`: Initializes the tiled camera sensor.
- `reset([env_ids, env_mask])`: Resets the sensor internals.
- `set_debug_vis(debug_vis)`: Sets whether to visualize the sensor data.
- `set_intrinsic_matrices(matrices[, ...])`: Set parameters of the USD camera from its intrinsic matrix.
- `set_world_poses([positions, orientations, ...])`: Set the pose of the camera w.r.t. the world frame.
- `set_world_poses_from_view(eyes, targets[, ...])`: Set the poses of the camera from the eye position and look-at target position.
- SIMPLE_SHADING_AOV: str = 'SimpleShadingSD'#
This class inherits from the `Camera` class but uses the tiled-rendering API to acquire the visual data. Tiled rendering concatenates the rendered images from multiple cameras into a single image. This allows for rendering multiple cameras in parallel and is useful for rendering large scenes with multiple cameras efficiently.
The following sensor types are supported:
- `"rgb"`: A 3-channel rendered color image.
- `"rgba"`: A 4-channel rendered color image with alpha channel.
- `"albedo"`: A 4-channel fast diffuse-albedo-only path for color images. Note that this path achieves the best performance when used alone or with depth only.
- `"distance_to_camera"`: An image containing the distance to the camera optical center.
- `"distance_to_image_plane"`: An image containing distances of 3D points from the camera plane along the camera's z-axis.
- `"depth"`: Alias for `"distance_to_image_plane"`.
- `"simple_shading_constant_diffuse"`: Simple shading (constant diffuse) RGB approximation.
- `"simple_shading_diffuse_mdl"`: Simple shading (diffuse MDL) RGB approximation.
- `"simple_shading_full_mdl"`: Simple shading (full MDL) RGB approximation.
- `"normals"`: An image containing the local surface normal vectors at each pixel.
- `"motion_vectors"`: An image containing the motion vector data at each pixel.
- `"semantic_segmentation"`: The semantic segmentation data.
- `"instance_segmentation_fast"`: The instance segmentation data.
- `"instance_id_segmentation_fast"`: The instance id segmentation data.
Note
Currently the following sensor types are not supported in a “view” format:
"instance_segmentation": The instance segmentation data. Please use the fast counterparts instead."instance_id_segmentation": The instance id segmentation data. Please use the fast counterparts instead."bounding_box_2d_tight": The tight 2D bounding box data (only contains non-occluded regions)."bounding_box_2d_tight_fast": The tight 2D bounding box data (only contains non-occluded regions)."bounding_box_2d_loose": The loose 2D bounding box data (contains occluded regions)."bounding_box_2d_loose_fast": The loose 2D bounding box data (contains occluded regions)."bounding_box_3d": The 3D view space bounding box data."bounding_box_3d_fast": The 3D view space bounding box data.
Added in version v1.0.0: This feature is available starting from Isaac Sim 4.2. Before this version, the tiled rendering APIs were not available.
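A configuration sketch for tiled rendering. The spawner import path and the output-layout comment are assumptions about the surrounding API:

```python
from isaaclab.sensors import TiledCamera, TiledCameraCfg
import isaaclab.sim as sim_utils  # assumed location of the spawner configs

tiled_cfg = TiledCameraCfg(
    prim_path="{ENV_REGEX_NS}/Robot/base/camera",
    height=120,
    width=160,
    data_types=["rgb", "depth"],
    spawn=sim_utils.PinholeCameraCfg(),
)
camera = TiledCamera(cfg=tiled_cfg)
# After the simulation is stepped, camera.data.output["rgb"] holds one batched
# tensor for all environments, e.g. shape (num_envs, 120, 160, 3)  # assumed layout
```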
- cfg: TiledCameraCfg#
The configuration parameters.
- __init__(cfg: TiledCameraCfg)[source]#
Initializes the tiled camera sensor.
- Parameters:
cfg – The configuration parameters.
- Raises:
RuntimeError – If no camera prim is found at the given path.
ValueError – If the provided data types are not supported by the camera.
- reset(env_ids: Sequence[int] | None = None, env_mask: wp.array | None = None)[source]#
Resets the sensor internals.
- Parameters:
env_ids – The environment indices to reset. Defaults to None, in which case all environments are reset.
env_mask – A boolean warp array indicating which environments to reset. If provided, takes priority over `env_ids`. Defaults to None.
- UNSUPPORTED_TYPES: set[str] = {'bounding_box_2d_loose', 'bounding_box_2d_loose_fast', 'bounding_box_2d_tight', 'bounding_box_2d_tight_fast', 'bounding_box_3d', 'bounding_box_3d_fast', 'instance_id_segmentation', 'instance_segmentation'}#
The set of sensor types that are not supported by the camera class.
- property data: CameraData#
Data from the sensor.
This property is only updated when the user tries to access the data. This is done to avoid unnecessary computation when the sensor data is not used.
For updating the sensor when this property is accessed, you can use the following code snippet in your sensor implementation:
# update sensors if needed self._update_outdated_buffers() # return the data (where `_data` is the data for the sensor) return self._data
- property frame: torch.tensor#
Frame number when the measurement took place.
- property has_debug_vis_implementation: bool#
Whether the sensor has a debug visualization implemented.
- property is_initialized: bool#
Whether the sensor is initialized.
Returns True if the sensor is initialized, False otherwise.
- property num_instances: int#
Number of instances of the sensor.
This is equal to the number of sensors per environment multiplied by the number of environments.
- property render_product_paths: list[str]#
The path of the render products for the cameras.
This can be used via replicator interfaces to attach to writes or external annotator registry.
- set_debug_vis(debug_vis: bool) bool#
Sets whether to visualize the sensor data.
- Parameters:
debug_vis – Whether to visualize the sensor data.
- Returns:
Whether the debug visualization was successfully set. False if the sensor does not support debug visualization.
- set_intrinsic_matrices(matrices: torch.Tensor, focal_length: float | None = None, env_ids: Sequence[int] | None = None)#
Set parameters of the USD camera from its intrinsic matrix.
The intrinsic matrix is used to set the following parameters to the USD camera:
- `focal_length`: The focal length of the camera.
- `horizontal_aperture`: The horizontal aperture of the camera.
- `vertical_aperture`: The vertical aperture of the camera.
- `horizontal_aperture_offset`: The horizontal offset of the camera.
- `vertical_aperture_offset`: The vertical offset of the camera.
Warning
Due to limitations of the Omniverse camera, we need to assume that the camera has a spherical lens, i.e. square pixels, and that the optical center is at the camera eye. If this assumption does not hold for the input intrinsic matrix, the camera will not be set up correctly.
- Parameters:
matrices – The intrinsic matrices for the camera. Shape is (N, 3, 3).
focal_length – Perspective focal length (in cm) used to calculate pixel size. Defaults to None. If None, the focal length is calculated as 1 / width.
env_ids – The sensor ids to manipulate. Defaults to None, which means all sensor indices.
- set_world_poses(positions: torch.Tensor | None = None, orientations: torch.Tensor | None = None, env_ids: Sequence[int] | None = None, convention: Literal['opengl', 'ros', 'world'] = 'ros')#
Set the pose of the camera w.r.t. the world frame using specified convention.
Since different fields use different conventions for camera orientations, the method allows users to set the camera poses in the specified convention. Possible conventions are:
"opengl"- forward axis: -Z - up axis +Y - Offset is applied in the OpenGL (Usd.Camera) convention"ros"- forward axis: +Z - up axis -Y - Offset is applied in the ROS convention"world"- forward axis: +X - up axis +Z - Offset is applied in the World Frame convention
See `isaaclab.sensors.camera.utils.convert_camera_frame_orientation_convention()` for more details on the conventions.
- Parameters:
positions – The cartesian coordinates (in meters). Shape is (N, 3). Defaults to None, in which case the camera position is not changed.
orientations – The quaternion orientation in (x, y, z, w). Shape is (N, 4). Defaults to None, in which case the camera orientation is not changed.
env_ids – The sensor ids to manipulate. Defaults to None, which means all sensor indices.
convention – The convention in which the poses are fed. Defaults to "ros".
- Raises:
RuntimeError – If the camera prim is not set. Need to call the `initialize()` method first.
- set_world_poses_from_view(eyes: torch.Tensor, targets: torch.Tensor, env_ids: Sequence[int] | None = None)#
Set the poses of the camera from the eye position and look-at target position.
- Parameters:
eyes – The positions of the camera’s eye. Shape is (N, 3).
targets – The target locations to look at. Shape is (N, 3).
env_ids – The sensor ids to manipulate. Defaults to None, which means all sensor indices.
- Raises:
RuntimeError – If the camera prim is not set. Need to call the `initialize()` method first.
NotImplementedError – If the stage up-axis is not "Y" or "Z".
- class isaaclab.sensors.TiledCameraCfg[source]#
Bases: `CameraCfg`
Configuration for a tiled rendering-based camera sensor.
Classes:
The offset pose of the sensor's frame from the sensor's parent frame.
Attributes:
Prim path (or expression) to the sensor.
Update period of the sensor buffers (in seconds).
Whether to visualize the sensor.
The offset pose of the sensor's frame from the sensor's parent frame.
Spawn configuration for the asset.
Clipping behavior for the camera for values that exceed the maximum value.
List of sensor names/types to enable for the camera.
Width of the image in pixels.
Height of the image in pixels.
Whether to update the latest camera pose when fetching the camera's data.
A string or a list specifying a semantic filter predicate.
Whether to colorize the semantic segmentation images.
Whether to colorize the instance ID segmentation images.
Whether to colorize the instance segmentation images.
Dictionary mapping semantics to specific colours
Renderer configuration for camera sensor.
- class OffsetCfg#
Bases: `object`
The offset pose of the sensor's frame from the sensor's parent frame.
Attributes:
Translation w.r.t. the parent frame.
Quaternion rotation (x, y, z, w) w.r.t. the parent frame.
The convention in which the frame offset is applied.
- rot: tuple[float, float, float, float]#
Quaternion rotation (x, y, z, w) w.r.t. the parent frame. Defaults to (0.0, 0.0, 0.0, 1.0).
- convention: Literal['opengl', 'ros', 'world']#
The convention in which the frame offset is applied. Defaults to “ros”.
"opengl"- forward axis:-Z- up axis:+Y- Offset is applied in the OpenGL (Usd.Camera) convention."ros"- forward axis:+Z- up axis:-Y- Offset is applied in the ROS convention."world"- forward axis:+X- up axis:+Z- Offset is applied in the World Frame convention.
- prim_path: str#
Prim path (or expression) to the sensor.
Note
The expression can contain the environment namespace regex {ENV_REGEX_NS}, which will be replaced with the environment namespace.
Example: {ENV_REGEX_NS}/Robot/sensor will be replaced with /World/envs/env_.*/Robot/sensor.
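As an illustration of the substitution described above, the placeholder expands into a regular expression that matches the sensor prim in every environment. The helper below is hypothetical and only sketches the behavior; the actual resolution is handled internally by Isaac Lab:

```python
import re

# Hypothetical helper sketching the placeholder substitution described
# above (not the actual Isaac Lab implementation).
ENV_REGEX_NS = "/World/envs/env_.*"

def resolve_prim_path_expr(expr: str) -> str:
    """Replace the {ENV_REGEX_NS} placeholder with the environment regex."""
    return expr.replace("{ENV_REGEX_NS}", ENV_REGEX_NS)

expr = resolve_prim_path_expr("{ENV_REGEX_NS}/Robot/sensor")
assert expr == "/World/envs/env_.*/Robot/sensor"
# The resolved expression matches the sensor prim of every environment.
assert re.fullmatch(expr, "/World/envs/env_0/Robot/sensor")
assert re.fullmatch(expr, "/World/envs/env_42/Robot/sensor")
```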
- update_period: float#
Update period of the sensor buffers (in seconds). Defaults to 0.0 (update every step).
- offset: OffsetCfg#
The offset pose of the sensor’s frame from the sensor’s parent frame. Defaults to identity.
Note
The parent frame is the frame the sensor attaches to. For example, the parent frame of a camera at path
/World/envs/env_0/Robot/Camera is /World/envs/env_0/Robot.
- spawn: PinholeCameraCfg | FisheyeCameraCfg | None#
Spawn configuration for the asset.
If None, then the prim is not spawned by the asset. Instead, it is assumed that the asset is already present in the scene.
- depth_clipping_behavior: Literal['max', 'zero', 'none']#
Clipping behavior for the camera for values that exceed the maximum value. Defaults to “none”.
"max": Values are clipped to the maximum value.
"zero": Values are clipped to zero.
"none": No clipping is applied. Values will be returned as inf.
- data_types: list[str]#
List of sensor names/types to enable for the camera. Defaults to [“rgb”].
Please refer to the Camera class for a list of available data types.
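A minimal configuration sketch tying the attributes above together. The prim path, offset, resolution, and spawn values are illustrative placeholders, and the remaining attributes keep their defaults:

```python
import isaaclab.sim as sim_utils
from isaaclab.sensors import TiledCameraCfg

# Configuration sketch (prim path, offset, resolution, and spawn values
# are illustrative placeholders).
tiled_camera_cfg = TiledCameraCfg(
    prim_path="{ENV_REGEX_NS}/Robot/base/camera",
    offset=TiledCameraCfg.OffsetCfg(pos=(0.5, 0.0, 0.2), convention="ros"),
    spawn=sim_utils.PinholeCameraCfg(),
    data_types=["rgb", "distance_to_image_plane"],
    width=160,
    height=120,
)
```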
- update_latest_camera_pose: bool#
Whether to update the latest camera pose when fetching the camera’s data. Defaults to False.
If True, the latest camera pose is updated in the camera’s data, which will slow down performance due to the use of XformPrimView. If False, the pose of the camera during initialization is returned.
- semantic_filter: str | list[str]#
A string or a list specifying a semantic filter predicate. Defaults to "*:*".
If a string, it should be a disjunctive normal form of (semantic type, labels). For example:
"typeA : labelA & !labelB | labelC , typeB: labelA ; typeC: labelE": All prims with semantic type “typeA” and label “labelA” but not “labelB”, or with label “labelC”. Also, all prims with semantic type “typeB” and label “labelA”, or with semantic type “typeC” and label “labelE”.
"typeA : * ; * : labelA": All prims with semantic type “typeA” or with label “labelA”.
If a list of strings, each string should be a semantic type. The segmentation for prims with semantics of the specified types will be retrieved. For example, if the list is [“class”], only the segmentation for prims with semantics of type “class” will be retrieved.
See also
For more information on the semantics filter, see the documentation on Replicator Semantics Schema Editor.
- colorize_semantic_segmentation: bool#
Whether to colorize the semantic segmentation images. Defaults to True.
If True, semantic segmentation is converted to an image where semantic IDs are mapped to colors and returned as a uint8 4-channel array. If False, the output is returned as an int32 array.
- colorize_instance_id_segmentation: bool#
Whether to colorize the instance ID segmentation images. Defaults to True.
If True, instance ID segmentation is converted to an image where instance IDs are mapped to colors and returned as a uint8 4-channel array. If False, the output is returned as an int32 array.
- colorize_instance_segmentation: bool#
Whether to colorize the instance segmentation images. Defaults to True.
If True, instance segmentation is converted to an image where instance IDs are mapped to colors and returned as a uint8 4-channel array. If False, the output is returned as an int32 array.
- semantic_segmentation_mapping: dict#
Dictionary mapping semantics to specific colors.
E.g.:
{ "class:cube_1": (255, 36, 66, 255), "class:cube_2": (255, 184, 48, 255), "class:cube_3": (55, 255, 139, 255), "class:table": (255, 237, 218, 255), "class:ground": (100, 100, 100, 255), "class:robot": (61, 178, 255, 255), }
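A sketch of how such a mapping is used downstream. The lookup below is illustrative (not the actual renderer code), reusing two of the example colors from the mapping above:

```python
import numpy as np

# Illustrative lookup (not the actual renderer code): paint each integer
# ID in an int32 segmentation image with the RGBA color of its class.
id_to_color = {1: (255, 36, 66, 255), 2: (255, 184, 48, 255)}  # e.g. cube_1, cube_2
raw_ids = np.array([[1, 2], [2, 1]], dtype=np.int32)            # int32 ID image
colorized = np.zeros(raw_ids.shape + (4,), dtype=np.uint8)      # uint8 4-channel image
for sem_id, rgba in id_to_color.items():
    colorized[raw_ids == sem_id] = rgba

assert tuple(int(c) for c in colorized[0, 0]) == (255, 36, 66, 255)
```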
- renderer_cfg: RendererCfg#
Renderer configuration for camera sensor.
Contact Sensor#
- class isaaclab.sensors.ContactSensor[source]#
Bases: FactoryBase, BaseContactSensor
Factory for creating contact sensor instances.
Attributes:
Data from the sensor.
Ordered names of shapes or bodies with contact sensors attached.
View for the contact forces captured.
Memory device for computation.
Whether the sensor has a debug visualization implemented.
Whether the sensor is initialized.
Deprecated property.
Number of instances of the sensor.
Number of sensors per environment.
The configuration parameters.
Methods:
__new__(cls, *args, **kwargs): Create a new instance of a contact sensor based on the backend.
__init__(cfg): Initializes the contact sensor object.
compute_first_air(dt[, abs_tol]): Checks if bodies have broken contact within the last dt seconds.
compute_first_contact(dt[, abs_tol]): Checks if bodies have established contact within the last dt seconds.
find_bodies(name_keys[, preserve_order]): Deprecated method.
find_sensors(name_keys[, preserve_order]): Find sensors in the contact sensor based on the name keys.
Returns a list of registered backend names.
register(name, sub_class): Register a new implementation class.
reset([env_ids, env_mask]): Resets the sensor.
set_debug_vis(debug_vis): Sets whether to visualize the sensor data.
- abstract property data: BaseContactSensorData#
Data from the sensor.
- static __new__(cls, *args, **kwargs) BaseContactSensor | PhysXContactSensor | NewtonContactSensor[source]#
Create a new instance of a contact sensor based on the backend.
- __init__(cfg: ContactSensorCfg)#
Initializes the contact sensor object.
- Parameters:
cfg – The configuration parameters.
- abstract property body_names: list[str] | None#
Ordered names of shapes or bodies with contact sensors attached.
- abstractmethod compute_first_air(dt: float, abs_tol: float = 1e-08) warp.array#
Checks if bodies have broken contact within the last dt seconds.
This function checks if the bodies have broken contact within the last dt seconds by comparing the current air time with the given time period. If the air time is less than the given time period, then the bodies are considered to not be in contact.
Note
It assumes that dt is a factor of the sensor update time-step. In other words, \(dt / dt_sensor = n\), where \(n\) is a natural number. This is always true if the sensor is updated by the physics or the environment stepping time-step and the sensor is read by the environment stepping time-step.
- Parameters:
dt – The time period since the contact was broken.
abs_tol – The absolute tolerance for the comparison.
- Returns:
A boolean tensor indicating the bodies that have broken contact within the last dt seconds. Shape is (N, B), where N is the number of sensors and B is the number of bodies in each sensor.
- Raises:
RuntimeError – If the sensor is not configured to track contact time.
- abstractmethod compute_first_contact(dt: float, abs_tol: float = 1e-08) warp.array#
Checks if bodies have established contact within the last dt seconds.
This function checks if the bodies have established contact within the last dt seconds by comparing the current contact time with the given time period. If the contact time is less than the given time period, then the bodies are considered to be in contact.
Note
The function assumes that dt is a factor of the sensor update time-step. In other words, \(dt / dt_sensor = n\), where \(n\) is a natural number. This is always true if the sensor is updated by the physics or the environment stepping time-step and the sensor is read by the environment stepping time-step.
- Parameters:
dt – The time period since the contact was established.
abs_tol – The absolute tolerance for the comparison.
- Returns:
A boolean tensor indicating the bodies that have established contact within the last dt seconds. Shape is (N, B), where N is the number of sensors and B is the number of bodies in each sensor.
- Raises:
RuntimeError – If the sensor is not configured to track contact time.
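The comparison described above can be sketched in plain NumPy. This is an illustration of the logic only, not the actual warp-based implementation:

```python
import numpy as np

# Illustrative sketch (not the actual warp-based implementation): a body
# registers "first contact" within the last dt seconds iff it is in
# contact and its accumulated contact time is still below dt.
def compute_first_contact(contact_time: np.ndarray, dt: float, abs_tol: float = 1e-8) -> np.ndarray:
    in_contact = contact_time > 0.0
    return in_contact & (contact_time < dt + abs_tol)

# Shape (N, B): 1 sensor, 3 bodies (never touched / just touched / long in contact).
contact_time = np.array([[0.0, 0.005, 1.2]])
assert compute_first_contact(contact_time, dt=0.01).tolist() == [[False, True, False]]
```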
- abstract property contact_view: None#
View for the contact forces captured.
Note
None if there is no view associated with the sensor.
- find_bodies(name_keys: str | Sequence[str], preserve_order: bool = False) tuple[list[int], list[str]]#
Deprecated method. Please use find_sensors instead.
- find_sensors(name_keys: str | Sequence[str], preserve_order: bool = False) tuple[list[int], list[str]]#
Find sensors in the contact sensor based on the name keys.
- Parameters:
name_keys – A regular expression or a list of regular expressions to match the body names.
preserve_order – Whether to preserve the order of the name keys in the output. Defaults to False.
- Returns:
A tuple of lists containing the sensor indices and names.
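The lookup described above can be sketched as a regex match over the ordered sensor names. This is illustrative only, not the actual implementation:

```python
import re

# Illustrative sketch (not the actual implementation): match regex keys
# against the ordered sensor names and return matching indices and names.
def find_sensors(names, name_keys):
    keys = [name_keys] if isinstance(name_keys, str) else list(name_keys)
    matched = [(i, n) for i, n in enumerate(names) if any(re.fullmatch(k, n) for k in keys)]
    return [i for i, _ in matched], [n for _, n in matched]

names = ["LF_FOOT", "RF_FOOT", "base", "LH_FOOT", "RH_FOOT"]
ids, found = find_sensors(names, ".*_FOOT")
assert ids == [0, 1, 3, 4]
assert found == ["LF_FOOT", "RF_FOOT", "LH_FOOT", "RH_FOOT"]
```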
- property has_debug_vis_implementation: bool#
Whether the sensor has a debug visualization implemented.
- property is_initialized: bool#
Whether the sensor is initialized.
Returns True if the sensor is initialized, False otherwise.
- abstractmethod reset(env_ids: Sequence[int] | None = None, env_mask: wp.array(dtype=wp.bool) | None = None)#
Resets the sensor.
- Parameters:
env_ids – The indices of the environments to reset. Defaults to None: all the environments are reset.
env_mask – The masks of the environments to reset. Defaults to None: all the environments are reset.
- set_debug_vis(debug_vis: bool) bool#
Sets whether to visualize the sensor data.
- Parameters:
debug_vis – Whether to visualize the sensor data.
- Returns:
Whether the debug visualization was successfully set. False if the sensor does not support debug visualization.
- cfg: ContactSensorCfg#
The configuration parameters.
- class isaaclab.sensors.ContactSensorData[source]#
Factory for creating contact sensor data instances.
Methods:
__new__(cls, *args, **kwargs): Create a new instance of a contact sensor data based on the backend.
Returns a list of registered backend names.
register(name, sub_class): Register a new implementation class.
Attributes:
Average position of contact points.
Time spent in air since last detach.
Time spent in contact since last contact.
Normal contact forces filtered between sensor and filtered bodies.
History of filtered contact forces.
Sum of friction forces.
Time spent in air before last contact.
Time spent in contact before last detach.
The net normal contact forces in world frame.
History of net normal contact forces.
Position of the sensor origin in world frame.
Pose of the sensor origin in world frame.
Orientation of the sensor origin in world frame.
- static __new__(cls, *args, **kwargs) BaseContactSensorData | PhysXContactSensorData | NewtonContactSensorData[source]#
Create a new instance of a contact sensor data based on the backend.
- abstract property contact_pos_w: wp.array | None#
Average position of contact points.
Shape is (num_instances, num_sensors, num_filter_shapes), dtype = wp.vec3f. In torch this resolves to (num_instances, num_sensors, num_filter_shapes, 3).
None if ContactSensorCfg.track_contact_points is False.
- abstract property current_air_time: wp.array | None#
Time spent in air since last detach.
Shape is (num_instances, num_sensors), dtype = wp.float32.
None if ContactSensorCfg.track_air_time is False.
- abstract property current_contact_time: wp.array | None#
Time spent in contact since last contact.
Shape is (num_instances, num_sensors), dtype = wp.float32.
None if ContactSensorCfg.track_air_time is False.
- abstract property force_matrix_w: wp.array | None#
Normal contact forces filtered between sensor and filtered bodies.
Shape is (num_instances, num_sensors, num_filter_shapes), dtype = wp.vec3f. In torch this resolves to (num_instances, num_sensors, num_filter_shapes, 3).
None if ContactSensorCfg.filter_prim_paths_expr is empty.
- abstract property force_matrix_w_history: wp.array | None#
History of filtered contact forces.
Shape is (num_instances, history_length, num_sensors, num_filter_shapes), dtype = wp.vec3f. In torch this resolves to (num_instances, history_length, num_sensors, num_filter_shapes, 3).
None if ContactSensorCfg.filter_prim_paths_expr is empty.
- abstract property friction_forces_w: wp.array | None#
Sum of friction forces.
Shape is (num_instances, num_sensors, num_filter_shapes), dtype = wp.vec3f. In torch this resolves to (num_instances, num_sensors, num_filter_shapes, 3).
None if ContactSensorCfg.track_friction_forces is False.
- abstract property last_air_time: wp.array | None#
Time spent in air before last contact.
Shape is (num_instances, num_sensors), dtype = wp.float32.
None if ContactSensorCfg.track_air_time is False.
- abstract property last_contact_time: wp.array | None#
Time spent in contact before last detach.
Shape is (num_instances, num_sensors), dtype = wp.float32.
None if ContactSensorCfg.track_air_time is False.
- abstract property net_forces_w: wp.array | None#
The net normal contact forces in world frame.
Shape is (num_instances, num_sensors), dtype = wp.vec3f. In torch this resolves to (num_instances, num_sensors, 3).
- abstract property net_forces_w_history: wp.array | None#
History of net normal contact forces.
Shape is (num_instances, history_length, num_sensors), dtype = wp.vec3f. In torch this resolves to (num_instances, history_length, num_sensors, 3).
- abstract property pos_w: wp.array | None#
Position of the sensor origin in world frame.
Shape is (num_instances, num_sensors), dtype = wp.vec3f. In torch this resolves to (num_instances, num_sensors, 3).
None if ContactSensorCfg.track_pose is False.
- abstract property pose_w: wp.array | None#
Pose of the sensor origin in world frame.
None if ContactSensorCfg.track_pose is False.
- abstract property quat_w: wp.array | None#
Orientation of the sensor origin in world frame.
Shape is (num_instances, num_sensors), dtype = wp.quatf. In torch this resolves to (num_instances, num_sensors, 4). The orientation is provided in (x, y, z, w) format.
None if ContactSensorCfg.track_pose is False.
- class isaaclab.sensors.ContactSensorCfg[source]#
Bases: SensorBaseCfg
Configuration for the contact sensor.
Sensing bodies are selected via SensorBaseCfg.prim_path. Filter bodies for per-partner force reporting are selected via filter_prim_paths_expr.
Only body-level sensing and filtering are supported. For shape-level granularity, see NewtonContactSensorCfg in isaaclab_newton.
Attributes:
Whether to track the pose of the sensor's origin.
Whether to track the contact point locations.
Whether to track the friction forces at the contact points.
The maximum number of contacts across all batches of the sensor to keep track of.
Prim path (or expression) to the sensor.
Update period of the sensor buffers (in seconds).
Whether to visualize the sensor.
Whether to track the air/contact time of the bodies (time between contacts).
The threshold on the norm of the contact force that determines whether two bodies are in collision or not.
Number of past frames to store in the sensor buffers.
List of body prim path expressions to filter contacts against.
The configuration object for the visualization markers.
- track_friction_forces: bool#
Whether to track the friction forces at the contact points. Defaults to False.
- max_contact_data_count_per_prim: int | None#
The maximum number of contacts across all batches of the sensor to keep track of. Default is 4, where supported.
This parameter sets the maximum contact count of the simulation across all bodies and environments. The total number of contacts allowed is max_contact_data_count_per_prim * num_envs * num_sensor_bodies.
Note
If the environment is very contact-rich, it is suggested to increase this parameter to avoid out-of-bounds memory errors and loss of contact data, which leads to inaccurate measurements.
- prim_path: str#
Prim path (or expression) to the sensor.
Note
The expression can contain the environment namespace regex {ENV_REGEX_NS}, which will be replaced with the environment namespace.
Example: {ENV_REGEX_NS}/Robot/sensor will be replaced with /World/envs/env_.*/Robot/sensor.
- update_period: float#
Update period of the sensor buffers (in seconds). Defaults to 0.0 (update every step).
- track_air_time: bool#
Whether to track the air/contact time of the bodies (time between contacts). Defaults to False.
- force_threshold: float | None#
The threshold on the norm of the contact force that determines whether two bodies are in collision or not. Defaults to None, in which case the sensor backend chooses an appropriate value.
This value is only used for tracking the mode duration (the time in contact or in air), if track_air_time is True.
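The thresholding described above can be sketched in plain NumPy. This is an illustration of the comparison only, not the actual implementation:

```python
import numpy as np

# Illustrative sketch: a body counts as "in contact" when the norm of its
# net contact force exceeds the threshold.
def in_contact(net_forces_w: np.ndarray, force_threshold: float) -> np.ndarray:
    return np.linalg.norm(net_forces_w, axis=-1) > force_threshold

# Shape (N, B, 3): zero force / weight-sized force / tiny force.
forces = np.array([[[0.0, 0.0, 0.0], [0.0, 0.0, 9.81], [0.1, 0.0, 0.0]]])
assert in_contact(forces, force_threshold=1.0).tolist() == [[False, True, False]]
```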
- history_length: int#
Number of past frames to store in the sensor buffers. Defaults to 0, which means that only the current data is stored (no history).
- filter_prim_paths_expr: list[str]#
List of body prim path expressions to filter contacts against. Defaults to empty, meaning contacts with all bodies are aggregated into the net force.
If provided, a per-partner force matrix (ContactSensorData.force_matrix_w) is reported in addition to the net force. Each expression is matched against body prim paths in the scene.
For shape-level filtering, see NewtonContactSensorCfg in isaaclab_newton.
Note
Expressions can contain the environment namespace regex {ENV_REGEX_NS}, which is replaced with the environment namespace.
Example: {ENV_REGEX_NS}/Object becomes /World/envs/env_.*/Object.
Attention
Filtered contact reporting only works when SensorBaseCfg.prim_path matches a single primitive per environment. For many-to-many filtering, see NewtonContactSensorCfg in isaaclab_newton.
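A configuration sketch combining the attributes above. The prim paths and values are illustrative; note that the sensing expression matches a single body per environment, as filtered reporting requires:

```python
from isaaclab.sensors import ContactSensorCfg

# Configuration sketch (prim paths and values are illustrative).
contact_cfg = ContactSensorCfg(
    prim_path="{ENV_REGEX_NS}/Robot/LF_FOOT",  # single prim per environment
    history_length=3,
    track_air_time=True,
    filter_prim_paths_expr=["{ENV_REGEX_NS}/Object"],
)
```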
- visualizer_cfg: VisualizationMarkersCfg#
The configuration object for the visualization markers. Defaults to CONTACT_SENSOR_MARKER_CFG.
Note
This attribute is only used when debug visualization is enabled.
Frame Transformer#
- class isaaclab.sensors.FrameTransformer[source]#
Bases: FactoryBase, BaseFrameTransformer
Factory for creating frame transformer instances.
Attributes:
Data from the sensor.
Returns the names of the target bodies being tracked.
Memory device for computation.
Whether the sensor has a debug visualization implemented.
Whether the sensor is initialized.
Returns the number of target bodies being tracked.
Number of instances of the sensor.
The configuration parameters.
Methods:
__new__(cls, *args, **kwargs): Create a new instance of a frame transformer based on the backend.
__init__(cfg): Initializes the frame transformer object.
find_bodies(name_keys[, preserve_order]): Find bodies in the articulation based on the name keys.
Returns a list of registered backend names.
register(name, sub_class): Register a new implementation class.
reset([env_ids, env_mask]): Resets the sensor internals.
set_debug_vis(debug_vis): Sets whether to visualize the sensor data.
- abstract property data: BaseFrameTransformerData#
Data from the sensor.
This property is only updated when the user tries to access the data. This is done to avoid unnecessary computation when the sensor data is not used.
For updating the sensor when this property is accessed, you can use the following code snippet in your sensor implementation:
# update sensors if needed
self._update_outdated_buffers()
# return the data (where `_data` is the data for the sensor)
return self._data
- static __new__(cls, *args, **kwargs) BaseFrameTransformer | PhysXFrameTransformer[source]#
Create a new instance of a frame transformer based on the backend.
- __init__(cfg: FrameTransformerCfg)#
Initializes the frame transformer object.
- Parameters:
cfg – The configuration parameters.
- abstract property body_names: list[str]#
Returns the names of the target bodies being tracked.
Deprecated: use data.target_frame_names instead. This property will be removed in a future release.
- find_bodies(name_keys: str | Sequence[str], preserve_order: bool = False) tuple[list[int], list[str]]#
Find bodies in the articulation based on the name keys.
- Parameters:
name_keys – A regular expression or a list of regular expressions to match the body names.
preserve_order – Whether to preserve the order of the name keys in the output. Defaults to False.
- Returns:
A tuple of lists containing the body indices and names.
- property has_debug_vis_implementation: bool#
Whether the sensor has a debug visualization implemented.
- property is_initialized: bool#
Whether the sensor is initialized.
Returns True if the sensor is initialized, False otherwise.
- abstract property num_bodies: int#
Returns the number of target bodies being tracked.
Deprecated: use len(data.target_frame_names) instead. This property will be removed in a future release.
- property num_instances: int#
Number of instances of the sensor.
This is equal to the number of sensors per environment multiplied by the number of environments.
- reset(env_ids: Sequence[int] | None = None, env_mask: wp.array | None = None) None#
Resets the sensor internals.
- Parameters:
env_ids – The environment indices to reset. Defaults to None, in which case all environments are reset.
env_mask – A boolean warp array indicating which environments to reset. If provided, takes priority over env_ids. Defaults to None.
- set_debug_vis(debug_vis: bool) bool#
Sets whether to visualize the sensor data.
- Parameters:
debug_vis – Whether to visualize the sensor data.
- Returns:
Whether the debug visualization was successfully set. False if the sensor does not support debug visualization.
- cfg: FrameTransformerCfg#
The configuration parameters.
- class isaaclab.sensors.FrameTransformerData[source]#
Factory for creating frame transformer data instances.
Methods:
__new__(cls, *args, **kwargs): Create a new instance of a frame transformer data based on the backend.
Returns a list of registered backend names.
register(name, sub_class): Register a new implementation class.
Attributes:
Position of the source frame after offset in world frame.
Pose of the source frame after offset in world frame.
Orientation of the source frame after offset in world frame.
Target frame names (order matches data ordering).
Position of the target frame(s) relative to source frame.
Position of the target frame(s) after offset in world frame.
Pose of the target frame(s) relative to source frame.
Pose of the target frame(s) after offset in world frame.
Orientation of the target frame(s) relative to source frame.
Orientation of the target frame(s) after offset in world frame.
- static __new__(cls, *args, **kwargs) BaseFrameTransformerData | PhysXFrameTransformerData[source]#
Create a new instance of a frame transformer data based on the backend.
- abstract property source_pos_w: warp.array#
Position of the source frame after offset in world frame.
Shape is (num_instances,), dtype = wp.vec3f. In torch this resolves to (num_instances, 3).
- abstract property source_pose_w: wp.array | None#
Pose of the source frame after offset in world frame.
Shape is (num_instances,), dtype = wp.transformf. In torch this resolves to (num_instances, 7). The pose is provided in (x, y, z, qx, qy, qz, qw) format.
- abstract property source_quat_w: warp.array#
Orientation of the source frame after offset in world frame.
Shape is (num_instances,), dtype = wp.quatf. In torch this resolves to (num_instances, 4). The orientation is provided in (x, y, z, w) format.
- abstract property target_frame_names: list[str]#
Target frame names (order matches data ordering).
Resolved from FrameTransformerCfg.FrameCfg.name.
- abstract property target_pos_source: warp.array#
Position of the target frame(s) relative to source frame.
Shape is (num_instances, num_target_frames), dtype = wp.vec3f. In torch this resolves to (num_instances, num_target_frames, 3).
- abstract property target_pos_w: warp.array#
Position of the target frame(s) after offset in world frame.
Shape is (num_instances, num_target_frames), dtype = wp.vec3f. In torch this resolves to (num_instances, num_target_frames, 3).
- abstract property target_pose_source: wp.array | None#
Pose of the target frame(s) relative to source frame.
Shape is (num_instances, num_target_frames), dtype = wp.transformf. In torch this resolves to (num_instances, num_target_frames, 7). The pose is provided in (x, y, z, qx, qy, qz, qw) format.
- abstract property target_pose_w: wp.array | None#
Pose of the target frame(s) after offset in world frame.
Shape is (num_instances, num_target_frames), dtype = wp.transformf. In torch this resolves to (num_instances, num_target_frames, 7). The pose is provided in (x, y, z, qx, qy, qz, qw) format.
- abstract property target_quat_source: warp.array#
Orientation of the target frame(s) relative to source frame.
Shape is (num_instances, num_target_frames), dtype = wp.quatf. In torch this resolves to (num_instances, num_target_frames, 4). The orientation is provided in (x, y, z, w) format.
- abstract property target_quat_w: warp.array#
Orientation of the target frame(s) after offset in world frame.
Shape is (num_instances, num_target_frames), dtype = wp.quatf. In torch this resolves to (num_instances, num_target_frames, 4). The orientation is provided in (x, y, z, w) format.
- class isaaclab.sensors.FrameTransformerCfg[source]#
Bases: SensorBaseCfg
Configuration for the frame transformer sensor.
Classes:
Information specific to a coordinate frame.
Attributes:
Update period of the sensor buffers (in seconds).
Whether to visualize the sensor.
The prim path of the body to transform from (source frame).
The pose offset from the source prim frame.
A list of the target frames.
The configuration object for the visualization markers.
- class FrameCfg[source]#
Bases: object
Information specific to a coordinate frame.
Attributes:
The prim path corresponding to a rigid body.
User-defined name for the new coordinate frame.
The pose offset from the parent prim frame.
- prim_path: str#
The prim path corresponding to a rigid body.
This can be a regex pattern to match multiple prims. For example, “/Robot/.*” will match all prims under “/Robot”.
This means that if the source FrameTransformerCfg.prim_path is “/Robot/base”, and the target FrameTransformerCfg.FrameCfg.prim_path is “/Robot/.*”, then the frame transformer will track the poses of all the prims under “/Robot”, including “/Robot/base” (even though this will result in an identity pose w.r.t. the source frame).
- update_period: float#
Update period of the sensor buffers (in seconds). Defaults to 0.0 (update every step).
- target_frames: list[FrameCfg]#
A list of the target frames.
This allows a single FrameTransformer to handle multiple target prims. For example, in a quadruped, we can use a single FrameTransformer to track each foot’s position and orientation in the body frame using four frame offsets.
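The quadruped setup described above can be sketched as follows. The prim paths are illustrative placeholders:

```python
from isaaclab.sensors import FrameTransformerCfg

# Configuration sketch (prim paths are illustrative): a single frame
# transformer tracking every foot of a quadruped relative to its base.
frame_tf_cfg = FrameTransformerCfg(
    prim_path="{ENV_REGEX_NS}/Robot/base",
    target_frames=[
        FrameTransformerCfg.FrameCfg(prim_path="{ENV_REGEX_NS}/Robot/.*_FOOT"),
    ],
)
```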
- visualizer_cfg: VisualizationMarkersCfg#
The configuration object for the visualization markers. Defaults to FRAME_MARKER_CFG.
Note
This attribute is only used when debug visualization is enabled.
Ray-Cast Sensor#
- class isaaclab.sensors.RayCaster[source]#
Bases: SensorBase
A ray-casting sensor.
The ray-caster uses a set of rays to detect collisions with meshes in the scene. The rays are defined in the sensor’s local coordinate frame. The sensor can be configured to ray-cast against a set of meshes with a given ray pattern.
The meshes are parsed from the list of primitive paths provided in the configuration. These are then converted to warp meshes and stored in the warp_meshes list. The ray-caster then ray-casts against these warp meshes using the ray pattern provided in the configuration.
Note
Currently, only static meshes are supported. Extending the warp mesh to support dynamic meshes is a work in progress.
Attributes:
The configuration parameters.
A dictionary to store warp meshes for raycasting, shared across all instances.
Number of instances of the sensor.
Data from the sensor.
Memory device for computation.
Whether the sensor has a debug visualization implemented.
Whether the sensor is initialized.
Methods:
__init__(cfg): Initializes the ray-caster object.
reset([env_ids, env_mask]): Resets the sensor internals.
set_debug_vis(debug_vis): Sets whether to visualize the sensor data.
- cfg: RayCasterCfg#
The configuration parameters.
- meshes: ClassVar[dict[str, wp.Mesh]] = {}#
A dictionary to store warp meshes for raycasting, shared across all instances.
The keys correspond to the prim path for the meshes, and values are the corresponding warp Mesh objects.
- __init__(cfg: RayCasterCfg)[source]#
Initializes the ray-caster object.
- Parameters:
cfg – The configuration parameters.
- property num_instances: int#
Number of instances of the sensor.
This is equal to the number of sensors per environment multiplied by the number of environments.
- property data: RayCasterData#
Data from the sensor.
This property is only updated when the user tries to access the data. This is done to avoid unnecessary computation when the sensor data is not used.
For updating the sensor when this property is accessed, you can use the following code snippet in your sensor implementation:
# update sensors if needed
self._update_outdated_buffers()
# return the data (where `_data` is the data for the sensor)
return self._data
- reset(env_ids: Sequence[int] | None = None, env_mask: wp.array | None = None)[source]#
Resets the sensor internals.
- Parameters:
env_ids – The environment indices to reset. Defaults to None, in which case all environments are reset.
env_mask – A boolean warp array indicating which environments to reset. If provided, takes priority over env_ids. Defaults to None.
- property has_debug_vis_implementation: bool#
Whether the sensor has a debug visualization implemented.
- class isaaclab.sensors.RayCasterData[source]#
Data container for the ray-cast sensor.
Attributes:
Position of the sensor origin in world frame.
Orientation of the sensor origin in quaternion (x, y, z, w) in world frame.
The ray hit positions in the world frame.
- pos_w: torch.Tensor = None#
Position of the sensor origin in world frame.
Shape is (N, 3), where N is the number of sensors.
- quat_w: torch.Tensor = None#
Orientation of the sensor origin in quaternion (x, y, z, w) in world frame.
Shape is (N, 4), where N is the number of sensors.
- ray_hits_w: torch.Tensor = None#
The ray hit positions in the world frame.
Shape is (N, B, 3), where N is the number of sensors, B is the number of rays in the scan pattern per sensor.
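A common use of ray_hits_w, sketched with NumPy for illustration (the actual data is stored in torch tensors): converting the hit points of a downward scan into heights below the sensor origin.

```python
import numpy as np

# Illustrative sketch: heights of the terrain below the sensor origin,
# computed from the world-frame hit points of a downward scan.
pos_w = np.array([[0.0, 0.0, 1.5]])                  # (N, 3) sensor origins
ray_hits_w = np.array([[[0.2, 0.0, 0.0],             # (N, B, 3) hit points
                        [0.4, 0.0, 0.3],
                        [0.6, 0.0, 0.1]]])
heights = pos_w[:, None, 2] - ray_hits_w[..., 2]     # (N, B) heights below sensor
assert np.allclose(heights, [[1.5, 1.2, 1.4]])
```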
- class isaaclab.sensors.RayCasterCfg[source]#
Bases: SensorBaseCfg
Configuration for the ray-cast sensor.
Classes:
The offset pose of the sensor's frame from the sensor's parent frame.
Attributes:
The list of mesh primitive paths to ray cast against.
The offset pose of the sensor's frame from the sensor's parent frame.
Whether the rays' starting positions and directions only track the yaw orientation.
Prim path (or expression) to the sensor.
Update period of the sensor buffers (in seconds).
Whether to visualize the sensor.
Specify in what frame the rays are projected onto the ground.
The pattern that defines the local ray starting positions and directions.
Maximum distance (in meters) from the sensor to ray cast to.
The range of drift (in meters) to add to the ray starting positions (xyz) in world frame.
The range of drift (in meters) to add to the projected ray points in local projection frame.
The configuration object for the visualization markers.
- class OffsetCfg[source]#
Bases: object
The offset pose of the sensor’s frame from the sensor’s parent frame.
Attributes:
- mesh_prim_paths: list[str]#
The list of mesh primitive paths to ray cast against.
Note
Currently, only a single static mesh is supported. We are working on supporting multiple static meshes and dynamic meshes.
- offset: OffsetCfg#
The offset pose of the sensor’s frame from the sensor’s parent frame. Defaults to identity.
- attach_yaw_only: bool | None#
Whether the rays’ starting positions and directions only track the yaw orientation. Defaults to None, which does not trigger a deprecation warning.
This is useful for ray-casting height maps, where only yaw rotation is needed.
Deprecated since version 2.1.1: This attribute is deprecated and will be removed in the future. Please use ray_alignment instead.
To get the same behavior as setting this parameter to True or False, set ray_alignment to "yaw" or "base" respectively.
- prim_path: str#
Prim path (or expression) to the sensor.
Note
The expression can contain the environment namespace regex {ENV_REGEX_NS}, which will be replaced with the environment namespace.
Example: {ENV_REGEX_NS}/Robot/sensor will be replaced with /World/envs/env_.*/Robot/sensor.
- update_period: float#
Update period of the sensor buffers (in seconds). Defaults to 0.0 (update every step).
- ray_alignment: Literal['base', 'yaw', 'world']#
Specify in what frame the rays are projected onto the ground. Default is “base”.
The options are:
- "base": the rays’ starting positions and directions track the full root position and orientation.
- "yaw": the rays’ starting positions and directions track the root position and only the yaw component of the orientation. This is useful for ray-casting height maps.
- "world": the rays’ starting positions and directions are always fixed. This is useful in combination with a mapping package on the robot and querying ray-casts in a global frame.
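The difference between yaw-only and full-pose tracking can be illustrated with plain quaternion math (a minimal sketch, not part of the Isaac Lab API; quaternions are in (x, y, z, w) order as used elsewhere in this module):

```python
import math

def yaw_quat(q):
    """Extract the yaw-only rotation from a quaternion given in (x, y, z, w) order."""
    x, y, z, w = q
    yaw = math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))
    return (0.0, 0.0, math.sin(yaw / 2.0), math.cos(yaw / 2.0))

# A pure yaw rotation is preserved unchanged under "yaw" alignment ...
q_yaw_90 = (0.0, 0.0, math.sin(math.pi / 4), math.cos(math.pi / 4))
assert all(abs(a - b) < 1e-9 for a, b in zip(yaw_quat(q_yaw_90), q_yaw_90))

# ... while a pure pitch rotation collapses to the identity, so the rays
# stay level even when the robot base tilts.
q_pitch = (0.0, math.sin(0.2), 0.0, math.cos(0.2))
assert yaw_quat(q_pitch)[3] > 0.999999
```

This is why "yaw" is the natural choice for height scanning: the scan pattern follows the robot's heading but ignores roll and pitch.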
- pattern_cfg: PatternBaseCfg#
The pattern that defines the local ray starting positions and directions.
- drift_range: tuple[float, float]#
The range of drift (in meters) to add to the ray starting positions (xyz) in world frame. Defaults to (0.0, 0.0).
For floating base robots, this is useful for simulating drift in the robot’s pose estimation.
- ray_cast_drift_range: dict[str, tuple[float, float]]#
The range of drift (in meters) to add to the projected ray points in local projection frame. Defaults to a dictionary with zero drift for each x, y and z axis.
For floating base robots, this is useful for simulating drift in the robot’s pose estimation.
- visualizer_cfg: VisualizationMarkersCfg#
The configuration object for the visualization markers. Defaults to RAY_CASTER_MARKER_CFG.
Note
This attribute is only used when debug visualization is enabled.
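Putting these attributes together, a typical height-scan configuration might look as follows (a sketch only; the patterns import path, the pattern values, and the prim paths are illustrative assumptions, not prescriptions):

```python
from isaaclab.sensors import RayCasterCfg, patterns

# Height scanner attached to the robot base, ray-casting straight down
# onto a single static ground mesh.
height_scanner = RayCasterCfg(
    prim_path="{ENV_REGEX_NS}/Robot/base",
    update_period=0.02,  # update the buffers at 50 Hz
    offset=RayCasterCfg.OffsetCfg(pos=(0.0, 0.0, 20.0)),
    ray_alignment="yaw",  # track position and yaw only (height maps)
    mesh_prim_paths=["/World/ground"],
    pattern_cfg=patterns.GridPatternCfg(resolution=0.1, size=(1.6, 1.0)),
    max_distance=100.0,
    debug_vis=True,
)
```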
Ray-Cast Camera#
- class isaaclab.sensors.RayCasterCamera[source]#
Bases: RayCaster
A ray-casting camera sensor.
The ray-caster camera uses a set of rays to get the distances to meshes in the scene. The rays are defined in the sensor’s local coordinate frame. The sensor has the same interface as isaaclab.sensors.Camera, which implements the camera class through USD camera prims. However, this class provides faster image generation. The sensor converts meshes from the list of primitive paths provided in the configuration to Warp meshes. The camera then ray-casts against these Warp meshes only.
Currently, only the following annotators are supported:
- "distance_to_camera": An image containing the distance to the camera optical center.
- "distance_to_image_plane": An image containing distances of 3D points from the camera plane along the camera’s z-axis.
- "normals": An image containing the local surface normal vectors at each pixel.
Note
Currently, only static meshes are supported. Extending the warp mesh to support dynamic meshes is a work in progress.
Attributes:
The configuration parameters.
A set of sensor types that are not supported by the ray-caster camera.
Data from the sensor.
A tuple containing (height, width) of the camera sensor.
Frame number when the measurement took place.
Memory device for computation.
Whether the sensor has a debug visualization implemented.
Whether the sensor is initialized.
A dictionary to store warp meshes for raycasting, shared across all instances.
Number of instances of the sensor.
Methods:
__init__(cfg): Initializes the camera object.
set_intrinsic_matrices(matrices[, ...]): Set the intrinsic matrix of the camera.
reset([env_ids, env_mask]): Resets the sensor internals.
set_world_poses([positions, orientations, ...]): Set the pose of the camera w.r.t. the world frame.
set_world_poses_from_view(eyes, targets[, ...]): Set the poses of the camera from the eye position and look-at target position.
set_debug_vis(debug_vis): Sets whether to visualize the sensor data.
- cfg: RayCasterCameraCfg#
The configuration parameters.
- UNSUPPORTED_TYPES: ClassVar[set[str]] = {'bounding_box_2d_loose', 'bounding_box_2d_loose_fast', 'bounding_box_2d_tight', 'bounding_box_2d_tight_fast', 'bounding_box_3d', 'bounding_box_3d_fast', 'instance_id_segmentation', 'instance_id_segmentation_fast', 'instance_segmentation', 'instance_segmentation_fast', 'motion_vectors', 'rgb', 'semantic_segmentation', 'skeleton_data'}#
A set of sensor types that are not supported by the ray-caster camera.
- __init__(cfg: RayCasterCameraCfg)[source]#
Initializes the camera object.
- Parameters:
cfg – The configuration parameters.
- Raises:
ValueError – If the provided data types are not supported by the ray-caster camera.
- property data: CameraData#
Data from the sensor.
This property is only updated when the user tries to access the data. This is done to avoid unnecessary computation when the sensor data is not used.
For updating the sensor when this property is accessed, you can use the following code snippet in your sensor implementation:
```python
# update sensors if needed
self._update_outdated_buffers()
# return the data (where `_data` is the data for the sensor)
return self._data
```
- property frame: torch.tensor#
Frame number when the measurement took place.
- set_intrinsic_matrices(matrices: torch.Tensor, focal_length: float = 1.0, env_ids: Sequence[int] | None = None)[source]#
Set the intrinsic matrix of the camera.
- Parameters:
matrices – The intrinsic matrices for the camera. Shape is (N, 3, 3).
focal_length – Focal length to use when computing aperture values (in cm). Defaults to 1.0.
env_ids – The sensor ids to manipulate. Defaults to None, which means all sensor indices.
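For reference, such an intrinsic matrix can be assembled from a focal length, aperture sizes, and image resolution using the standard pinhole model (a plain-Python sketch, independent of the Isaac Lab API; the focal and aperture values below are illustrative):

```python
def intrinsic_matrix(focal_length, h_aperture, v_aperture, width, height):
    """Build a 3x3 pinhole intrinsic matrix as row-major nested lists."""
    fx = width * focal_length / h_aperture   # focal length in horizontal pixels
    fy = height * focal_length / v_aperture  # focal length in vertical pixels
    cx, cy = width / 2.0, height / 2.0       # principal point at the image center
    return [[fx, 0.0, cx], [0.0, fy, cy], [0.0, 0.0, 1.0]]

K = intrinsic_matrix(focal_length=24.0, h_aperture=20.955, v_aperture=15.7,
                     width=640, height=480)
assert K[0][2] == 320.0 and K[1][2] == 240.0  # principal point
```

A batch of N such matrices stacked into a tensor of shape (N, 3, 3) matches the `matrices` argument described above.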
- reset(env_ids: Sequence[int] | None = None, env_mask: wp.array | None = None)[source]#
Resets the sensor internals.
- Parameters:
env_ids – The environment indices to reset. Defaults to None, in which case all environments are reset.
env_mask – A boolean warp array indicating which environments to reset. If provided, takes priority over env_ids. Defaults to None.
- set_world_poses(positions: torch.Tensor | None = None, orientations: torch.Tensor | None = None, env_ids: Sequence[int] | None = None, convention: Literal['opengl', 'ros', 'world'] = 'ros')[source]#
Set the pose of the camera w.r.t. the world frame using specified convention.
Since different fields use different conventions for camera orientations, the method allows users to set the camera poses in the specified convention. Possible conventions are:
- "opengl": forward axis -Z, up axis +Y. Offset is applied in the OpenGL (Usd.Camera) convention.
- "ros": forward axis +Z, up axis -Y. Offset is applied in the ROS convention.
- "world": forward axis +X, up axis +Z. Offset is applied in the World Frame convention.
See isaaclab.utils.maths.convert_camera_frame_orientation_convention() for more details on the conventions.
- Parameters:
positions – The cartesian coordinates (in meters). Shape is (N, 3). Defaults to None, in which case the camera position is not changed.
orientations – The quaternion orientation in (x, y, z, w). Shape is (N, 4). Defaults to None, in which case the camera orientation is not changed.
env_ids – The sensor ids to manipulate. Defaults to None, which means all sensor indices.
convention – The convention in which the poses are fed. Defaults to “ros”.
- Raises:
RuntimeError – If the camera prim is not set. Need to call initialize() method first.
- set_world_poses_from_view(eyes: torch.Tensor, targets: torch.Tensor, env_ids: Sequence[int] | None = None)[source]#
Set the poses of the camera from the eye position and look-at target position.
- Parameters:
eyes – The positions of the camera’s eye. Shape is (N, 3).
targets – The target locations to look at. Shape is (N, 3).
env_ids – The sensor ids to manipulate. Defaults to None, which means all sensor indices.
- Raises:
RuntimeError – If the camera prim is not set. Need to call initialize() method first.
NotImplementedError – If the stage up-axis is not “Y” or “Z”.
- property has_debug_vis_implementation: bool#
Whether the sensor has a debug visualization implemented.
- property is_initialized: bool#
Whether the sensor is initialized.
Returns True if the sensor is initialized, False otherwise.
- meshes: ClassVar[dict[str, wp.Mesh]] = {}#
A dictionary to store warp meshes for raycasting, shared across all instances.
The keys correspond to the prim path for the meshes, and values are the corresponding warp Mesh objects.
- class isaaclab.sensors.RayCasterCameraCfg[source]#
Bases: RayCasterCfg
Configuration for the ray-cast camera sensor.
Attributes:
The offset pose of the sensor's frame from the sensor's parent frame.
Prim path (or expression) to the sensor.
Update period of the sensor buffers (in seconds).
Whether to visualize the sensor.
The list of mesh primitive paths to ray cast against.
Whether the rays' starting positions and directions only track the yaw orientation.
Specify in what frame the rays are projected onto the ground.
Maximum distance (in meters) from the sensor to ray cast to.
The range of drift (in meters) to add to the ray starting positions (xyz) in world frame.
The range of drift (in meters) to add to the projected ray points in local projection frame.
The configuration object for the visualization markers.
List of sensor names/types to enable for the camera.
Clipping behavior for the camera for values that exceed the maximum value.
The pattern that defines the local ray starting positions and directions in a pinhole camera pattern.
- offset: OffsetCfg#
The offset pose of the sensor’s frame from the sensor’s parent frame. Defaults to identity.
- prim_path: str#
Prim path (or expression) to the sensor.
Note
The expression can contain the environment namespace regex {ENV_REGEX_NS}, which will be replaced with the environment namespace.
Example: {ENV_REGEX_NS}/Robot/sensor will be replaced with /World/envs/env_.*/Robot/sensor.
- update_period: float#
Update period of the sensor buffers (in seconds). Defaults to 0.0 (update every step).
- mesh_prim_paths: list[str]#
The list of mesh primitive paths to ray cast against.
Note
Currently, only a single static mesh is supported. We are working on supporting multiple static meshes and dynamic meshes.
- attach_yaw_only: bool | None#
Whether the rays’ starting positions and directions only track the yaw orientation. Defaults to None, in which case no deprecation warning is raised.
This is useful for ray-casting height maps, where only yaw rotation is needed.
Deprecated since version 2.1.1: This attribute is deprecated and will be removed in the future. Please use ray_alignment instead. To get the same behavior as setting this parameter to True or False, set ray_alignment to "yaw" or "base" respectively.
- ray_alignment: Literal['base', 'yaw', 'world']#
Specify in what frame the rays are projected onto the ground. Default is “base”.
The options are:
- "base": the rays’ starting positions and directions track the full root position and orientation.
- "yaw": the rays’ starting positions and directions track the root position and only the yaw component of the orientation. This is useful for ray-casting height maps.
- "world": the rays’ starting positions and directions are always fixed. This is useful in combination with a mapping package on the robot and querying ray-casts in a global frame.
- drift_range: tuple[float, float]#
The range of drift (in meters) to add to the ray starting positions (xyz) in world frame. Defaults to (0.0, 0.0).
For floating base robots, this is useful for simulating drift in the robot’s pose estimation.
- ray_cast_drift_range: dict[str, tuple[float, float]]#
The range of drift (in meters) to add to the projected ray points in local projection frame. Defaults to a dictionary with zero drift for each x, y and z axis.
For floating base robots, this is useful for simulating drift in the robot’s pose estimation.
- visualizer_cfg: VisualizationMarkersCfg#
The configuration object for the visualization markers. Defaults to RAY_CASTER_MARKER_CFG.
Note
This attribute is only used when debug visualization is enabled.
- data_types: list[str]#
List of sensor names/types to enable for the camera. Defaults to [“distance_to_image_plane”].
- depth_clipping_behavior: Literal['max', 'zero', 'none']#
Clipping behavior for the camera for values that exceed the maximum value. Defaults to “none”.
- "max": Values are clipped to the maximum value.
- "zero": Values are clipped to zero.
- "none": No clipping is applied. Values will be returned as inf for the distance_to_camera and nan for the distance_to_image_plane data type.
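In effect, the three behaviors amount to the following post-processing of out-of-range depth values (a plain-Python sketch, not the library's internal implementation):

```python
import math

def clip_depth(values, max_distance, behavior):
    """Apply the three documented clipping behaviors to out-of-range depths."""
    if behavior == "max":
        # out-of-range and infinite values saturate at the maximum distance
        return [min(v, max_distance) for v in values]
    if behavior == "zero":
        # out-of-range and infinite values are zeroed out
        return [0.0 if v > max_distance or math.isinf(v) else v for v in values]
    return values  # "none": inf / nan values pass through unchanged

depths = [1.5, float("inf"), 3.0]
assert clip_depth(depths, 2.0, "max") == [1.5, 2.0, 2.0]
assert clip_depth(depths, 2.0, "zero") == [1.5, 0.0, 0.0]
```

Saturating at the maximum ("max") keeps gradients and statistics bounded, while "zero" mimics sensors that report no return for out-of-range measurements.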
- pattern_cfg: PinholeCameraPatternCfg#
The pattern that defines the local ray starting positions and directions in a pinhole camera pattern.
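By analogy with the base ray-caster configuration, a depth-camera setup might be sketched as follows (illustrative only; the OffsetCfg convention field, the pattern parameters, and the prim paths are assumptions based on the conventions described above):

```python
from isaaclab.sensors import RayCasterCameraCfg, patterns

# Low-resolution depth camera ray-casting against a single static ground mesh.
depth_camera = RayCasterCameraCfg(
    prim_path="{ENV_REGEX_NS}/Robot/base",
    mesh_prim_paths=["/World/ground"],
    data_types=["distance_to_image_plane", "normals"],
    depth_clipping_behavior="max",  # saturate out-of-range depths
    max_distance=10.0,
    pattern_cfg=patterns.PinholeCameraPatternCfg(width=128, height=96),
    offset=RayCasterCameraCfg.OffsetCfg(pos=(0.2, 0.0, 0.1), convention="ros"),
)
```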
Multi-Mesh Ray-Cast Sensor#
- class isaaclab.sensors.MultiMeshRayCaster[source]#
Bases: RayCaster
A multi-mesh ray-casting sensor.
The ray-caster uses a set of rays to detect collisions with meshes in the scene. The rays are defined in the sensor’s local coordinate frame. The sensor can be configured to ray-cast against a set of meshes with a given ray pattern.
The meshes are parsed from the list of primitive paths provided in the configuration. These are then converted to warp meshes and stored in the meshes list. The ray-caster then ray-casts against these warp meshes using the ray pattern provided in the configuration.
Compared to the default RayCaster, the MultiMeshRayCaster provides the following enhancements:
- Raycasting against multiple target types: Supports primitive shapes (spheres, cubes, etc.) as well as arbitrary meshes.
- Dynamic mesh tracking: Keeps track of specified meshes, enabling raycasting against moving parts (e.g., robot links, articulated bodies, or dynamic obstacles).
- Memory-efficient caching: Avoids redundant memory usage by reusing mesh data across environments.
Example usage to raycast against the visual meshes of a robot (e.g. ANYmal):
```python
ray_caster_cfg = MultiMeshRayCasterCfg(
    prim_path="{ENV_REGEX_NS}/Robot",
    mesh_prim_paths=[
        "/World/Ground",
        MultiMeshRayCasterCfg.RaycastTargetCfg(prim_expr="{ENV_REGEX_NS}/Robot/LF_.*/visuals"),
        MultiMeshRayCasterCfg.RaycastTargetCfg(prim_expr="{ENV_REGEX_NS}/Robot/RF_.*/visuals"),
        MultiMeshRayCasterCfg.RaycastTargetCfg(prim_expr="{ENV_REGEX_NS}/Robot/LH_.*/visuals"),
        MultiMeshRayCasterCfg.RaycastTargetCfg(prim_expr="{ENV_REGEX_NS}/Robot/RH_.*/visuals"),
        MultiMeshRayCasterCfg.RaycastTargetCfg(prim_expr="{ENV_REGEX_NS}/Robot/base/visuals"),
    ],
    ray_alignment="world",
    pattern_cfg=patterns.GridPatternCfg(resolution=0.02, size=(2.5, 2.5), direction=(0, 0, -1)),
)
```
Attributes:
The configuration parameters.
A dictionary to store mesh views for raycasting, shared across all instances.
Data from the sensor.
Memory device for computation.
Whether the sensor has a debug visualization implemented.
Whether the sensor is initialized.
A dictionary to store warp meshes for raycasting, shared across all instances.
Number of instances of the sensor.
Methods:
__init__(cfg): Initializes the ray-caster object.
reset([env_ids, env_mask]): Resets the sensor internals.
set_debug_vis(debug_vis): Sets whether to visualize the sensor data.
- cfg: MultiMeshRayCasterCfg#
The configuration parameters.
- mesh_views: ClassVar[dict[str, XformPrimView | physx.ArticulationView | physx.RigidBodyView]] = {}#
A dictionary to store mesh views for raycasting, shared across all instances.
The keys correspond to the prim path for the mesh views, and values are the corresponding view objects.
- __init__(cfg: MultiMeshRayCasterCfg)[source]#
Initializes the ray-caster object.
- Parameters:
cfg – The configuration parameters.
- property data: MultiMeshRayCasterData#
Data from the sensor.
This property is only updated when the user tries to access the data. This is done to avoid unnecessary computation when the sensor data is not used.
For updating the sensor when this property is accessed, you can use the following code snippet in your sensor implementation:
```python
# update sensors if needed
self._update_outdated_buffers()
# return the data (where `_data` is the data for the sensor)
return self._data
```
- property has_debug_vis_implementation: bool#
Whether the sensor has a debug visualization implemented.
- property is_initialized: bool#
Whether the sensor is initialized.
Returns True if the sensor is initialized, False otherwise.
- meshes: ClassVar[dict[str, wp.Mesh]] = {}#
A dictionary to store warp meshes for raycasting, shared across all instances.
The keys correspond to the prim path for the meshes, and values are the corresponding warp Mesh objects.
- property num_instances: int#
Number of instances of the sensor.
This is equal to the number of sensors per environment multiplied by the number of environments.
- reset(env_ids: Sequence[int] | None = None, env_mask: wp.array | None = None)#
Resets the sensor internals.
- Parameters:
env_ids – The environment indices to reset. Defaults to None, in which case all environments are reset.
env_mask – A boolean warp array indicating which environments to reset. If provided, takes priority over env_ids. Defaults to None.
- class isaaclab.sensors.MultiMeshRayCasterData[source]#
Data container for the multi-mesh ray-cast sensor.
Attributes:
Position of the sensor origin in world frame.
Orientation of the sensor origin in quaternion (x, y, z, w) in world frame.
The ray hit positions in the world frame.
The mesh ids of the ray hits.
- pos_w: torch.Tensor = None#
Position of the sensor origin in world frame.
Shape is (N, 3), where N is the number of sensors.
- quat_w: torch.Tensor = None#
Orientation of the sensor origin in quaternion (x, y, z, w) in world frame.
Shape is (N, 4), where N is the number of sensors.
- ray_hits_w: torch.Tensor = None#
The ray hit positions in the world frame.
Shape is (N, B, 3), where N is the number of sensors, B is the number of rays in the scan pattern per sensor.
- ray_mesh_ids: torch.Tensor = None#
The mesh ids of the ray hits.
Shape is (N, B, 1), where N is the number of sensors, B is the number of rays in the scan pattern per sensor.
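The per-ray mesh ids make it easy to filter hits by target. For a single sensor, the selection logic reduces to the following (a plain-Python sketch; with the actual torch tensors this would be a boolean mask over ray_mesh_ids):

```python
def hits_for_mesh(ray_hits, ray_mesh_ids, mesh_id):
    """Select the hit points of one sensor that landed on a given mesh id."""
    return [hit for hit, mid in zip(ray_hits, ray_mesh_ids) if mid == mesh_id]

# Three rays: two hit mesh 0 (e.g. the ground), one hits mesh 1 (e.g. a leg).
hits = [(0.0, 0.0, 0.1), (1.0, 0.0, 0.2), (2.0, 0.0, 0.3)]
ids = [0, 1, 0]
assert hits_for_mesh(hits, ids, 0) == [(0.0, 0.0, 0.1), (2.0, 0.0, 0.3)]
```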
- class isaaclab.sensors.MultiMeshRayCasterCfg[source]#
Bases: RayCasterCfg
Configuration for the multi-mesh ray-cast sensor.
Classes:
Configuration for different ray-cast targets.
Attributes:
Prim path (or expression) to the sensor.
Update period of the sensor buffers (in seconds).
Whether to visualize the sensor.
The offset pose of the sensor's frame from the sensor's parent frame.
Whether the rays' starting positions and directions only track the yaw orientation.
Specify in what frame the rays are projected onto the ground.
The pattern that defines the local ray starting positions and directions.
Maximum distance (in meters) from the sensor to ray cast to.
The range of drift (in meters) to add to the ray starting positions (xyz) in world frame.
The range of drift (in meters) to add to the projected ray points in local projection frame.
The configuration object for the visualization markers.
The list of mesh primitive paths to ray cast against.
Whether to update the mesh ids of the ray hits in the data container.
Whether to reference duplicated meshes instead of loading each one separately into memory.
- class RaycastTargetCfg[source]#
Bases: object
Configuration for different ray-cast targets.
Attributes:
The regex to specify the target prim to ray cast against.
Whether the target prim is assumed to be the same mesh across all environments.
Whether to merge the parsed meshes for a prim that contains multiple meshes.
Whether the mesh transformations should be tracked.
Whether the target prim is assumed to be the same mesh across all environments. Defaults to False.
If True, only the first mesh is read and then reused for all environments, rather than re-parsed. This provides a startup performance boost when there are many environments that all use the same asset.
Note
If MultiMeshRayCasterCfg.reference_meshes is False, this flag has no effect.
- prim_path: str#
Prim path (or expression) to the sensor.
Note
The expression can contain the environment namespace regex {ENV_REGEX_NS}, which will be replaced with the environment namespace.
Example: {ENV_REGEX_NS}/Robot/sensor will be replaced with /World/envs/env_.*/Robot/sensor.
- update_period: float#
Update period of the sensor buffers (in seconds). Defaults to 0.0 (update every step).
- offset: OffsetCfg#
The offset pose of the sensor’s frame from the sensor’s parent frame. Defaults to identity.
- attach_yaw_only: bool | None#
Whether the rays’ starting positions and directions only track the yaw orientation. Defaults to None, in which case no deprecation warning is raised.
This is useful for ray-casting height maps, where only yaw rotation is needed.
Deprecated since version 2.1.1: This attribute is deprecated and will be removed in the future. Please use ray_alignment instead. To get the same behavior as setting this parameter to True or False, set ray_alignment to "yaw" or "base" respectively.
- ray_alignment: Literal['base', 'yaw', 'world']#
Specify in what frame the rays are projected onto the ground. Default is “base”.
The options are:
- "base": the rays’ starting positions and directions track the full root position and orientation.
- "yaw": the rays’ starting positions and directions track the root position and only the yaw component of the orientation. This is useful for ray-casting height maps.
- "world": the rays’ starting positions and directions are always fixed. This is useful in combination with a mapping package on the robot and querying ray-casts in a global frame.
- pattern_cfg: PatternBaseCfg#
The pattern that defines the local ray starting positions and directions.
- drift_range: tuple[float, float]#
The range of drift (in meters) to add to the ray starting positions (xyz) in world frame. Defaults to (0.0, 0.0).
For floating base robots, this is useful for simulating drift in the robot’s pose estimation.
- ray_cast_drift_range: dict[str, tuple[float, float]]#
The range of drift (in meters) to add to the projected ray points in local projection frame. Defaults to a dictionary with zero drift for each x, y and z axis.
For floating base robots, this is useful for simulating drift in the robot’s pose estimation.
- visualizer_cfg: VisualizationMarkersCfg#
The configuration object for the visualization markers. Defaults to RAY_CASTER_MARKER_CFG.
Note
This attribute is only used when debug visualization is enabled.
- mesh_prim_paths: list[str | RaycastTargetCfg]#
The list of mesh primitive paths to ray cast against.
If an entry is a string, it is internally converted to RaycastTargetCfg with track_mesh_transforms disabled. These settings ensure backwards compatibility with the default raycaster.
- reference_meshes: bool#
Whether to reference duplicated meshes instead of loading each one separately into memory. Defaults to True.
When enabled, the raycaster parses all meshes in all environments, but reuses references for duplicates instead of storing multiple copies. This reduces memory footprint.
Multi-Mesh Ray-Cast Camera#
- class isaaclab.sensors.MultiMeshRayCasterCamera[source]#
Bases: RayCasterCamera, MultiMeshRayCaster
A multi-mesh ray-casting camera sensor.
The ray-caster camera uses a set of rays to get the distances to meshes in the scene. The rays are defined in the sensor’s local coordinate frame. The sensor has the same interface as isaaclab.sensors.Camera, which implements the camera class through USD camera prims. However, this class provides faster image generation. The sensor converts meshes from the list of primitive paths provided in the configuration to Warp meshes. The camera then ray-casts against these Warp meshes only.
Currently, only the following annotators are supported:
- "distance_to_camera": An image containing the distance to the camera optical center.
- "distance_to_image_plane": An image containing distances of 3D points from the camera plane along the camera’s z-axis.
- "normals": An image containing the local surface normal vectors at each pixel.
Attributes:
The configuration parameters.
A set of sensor types that are not supported by the ray-caster camera.
Data from the sensor.
Memory device for computation.
Frame number when the measurement took place.
Whether the sensor has a debug visualization implemented.
A tuple containing (height, width) of the camera sensor.
Whether the sensor is initialized.
A dictionary to store mesh views for raycasting, shared across all instances.
A dictionary to store warp meshes for raycasting, shared across all instances.
Number of instances of the sensor.
Methods:
__init__(cfg): Initializes the camera object.
reset([env_ids, env_mask]): Resets the sensor internals.
set_debug_vis(debug_vis): Sets whether to visualize the sensor data.
set_intrinsic_matrices(matrices[, ...]): Set the intrinsic matrix of the camera.
set_world_poses([positions, orientations, ...]): Set the pose of the camera w.r.t. the world frame.
set_world_poses_from_view(eyes, targets[, ...]): Set the poses of the camera from the eye position and look-at target position.
- cfg: MultiMeshRayCasterCameraCfg#
The configuration parameters.
- __init__(cfg: MultiMeshRayCasterCameraCfg)[source]#
Initializes the camera object.
- Parameters:
cfg – The configuration parameters.
- Raises:
ValueError – If the provided data types are not supported by the ray-caster camera.
- UNSUPPORTED_TYPES: ClassVar[set[str]] = {'bounding_box_2d_loose', 'bounding_box_2d_loose_fast', 'bounding_box_2d_tight', 'bounding_box_2d_tight_fast', 'bounding_box_3d', 'bounding_box_3d_fast', 'instance_id_segmentation', 'instance_id_segmentation_fast', 'instance_segmentation', 'instance_segmentation_fast', 'motion_vectors', 'rgb', 'semantic_segmentation', 'skeleton_data'}#
A set of sensor types that are not supported by the ray-caster camera.
- property data: CameraData#
Data from the sensor.
This property is only updated when the user tries to access the data. This is done to avoid unnecessary computation when the sensor data is not used.
For updating the sensor when this property is accessed, you can use the following code snippet in your sensor implementation:
```python
# update sensors if needed
self._update_outdated_buffers()
# return the data (where `_data` is the data for the sensor)
return self._data
```
- property frame: torch.tensor#
Frame number when the measurement took place.
- property has_debug_vis_implementation: bool#
Whether the sensor has a debug visualization implemented.
- property is_initialized: bool#
Whether the sensor is initialized.
Returns True if the sensor is initialized, False otherwise.
- mesh_views: ClassVar[dict[str, XformPrimView | physx.ArticulationView | physx.RigidBodyView]] = {}#
A dictionary to store mesh views for raycasting, shared across all instances.
The keys correspond to the prim path for the mesh views, and values are the corresponding view objects.
- meshes: ClassVar[dict[str, wp.Mesh]] = {}#
A dictionary to store warp meshes for raycasting, shared across all instances.
The keys correspond to the prim path for the meshes, and values are the corresponding warp Mesh objects.
- property num_instances: int#
Number of instances of the sensor.
This is equal to the number of sensors per environment multiplied by the number of environments.
- reset(env_ids: Sequence[int] | None = None, env_mask: wp.array | None = None)#
Resets the sensor internals.
- Parameters:
env_ids – The environment indices to reset. Defaults to None, in which case all environments are reset.
env_mask – A boolean warp array indicating which environments to reset. If provided, takes priority over env_ids. Defaults to None.
- set_debug_vis(debug_vis: bool) → bool#
Sets whether to visualize the sensor data.
- Parameters:
debug_vis – Whether to visualize the sensor data.
- Returns:
Whether the debug visualization was successfully set. False if the sensor does not support debug visualization.
- set_intrinsic_matrices(matrices: torch.Tensor, focal_length: float = 1.0, env_ids: Sequence[int] | None = None)#
Set the intrinsic matrix of the camera.
- Parameters:
matrices – The intrinsic matrices for the camera. Shape is (N, 3, 3).
focal_length – Focal length to use when computing aperture values (in cm). Defaults to 1.0.
env_ids – The sensor ids to manipulate. Defaults to None, which means all sensor indices.
- set_world_poses(positions: torch.Tensor | None = None, orientations: torch.Tensor | None = None, env_ids: Sequence[int] | None = None, convention: Literal['opengl', 'ros', 'world'] = 'ros')#
Set the pose of the camera w.r.t. the world frame using specified convention.
Since different fields use different conventions for camera orientations, the method allows users to set the camera poses in the specified convention. Possible conventions are:
- "opengl": forward axis -Z, up axis +Y. Offset is applied in the OpenGL (Usd.Camera) convention.
- "ros": forward axis +Z, up axis -Y. Offset is applied in the ROS convention.
- "world": forward axis +X, up axis +Z. Offset is applied in the World Frame convention.
See isaaclab.utils.maths.convert_camera_frame_orientation_convention() for more details on the conventions.
- Parameters:
positions – The cartesian coordinates (in meters). Shape is (N, 3). Defaults to None, in which case the camera position is not changed.
orientations – The quaternion orientation in (x, y, z, w). Shape is (N, 4). Defaults to None, in which case the camera orientation is not changed.
env_ids – The sensor ids to manipulate. Defaults to None, which means all sensor indices.
convention – The convention in which the poses are fed. Defaults to “ros”.
- Raises:
RuntimeError – If the camera prim is not set. Need to call initialize() method first.
- set_world_poses_from_view(eyes: torch.Tensor, targets: torch.Tensor, env_ids: Sequence[int] | None = None)#
Set the poses of the camera from the eye position and look-at target position.
- Parameters:
eyes – The positions of the camera’s eye. Shape is (N, 3).
targets – The target locations to look at. Shape is (N, 3).
env_ids – The sensor ids to manipulate. Defaults to None, which means all sensor indices.
- Raises:
RuntimeError – If the camera prim is not set. Need to call initialize() method first.
NotImplementedError – If the stage up-axis is not “Y” or “Z”.
- class isaaclab.sensors.MultiMeshRayCasterCameraCfg[source]#
Bases: RayCasterCameraCfg, MultiMeshRayCasterCfg
Configuration for the multi-mesh ray-cast camera sensor.
Attributes:
Prim path (or expression) to the sensor.
Update period of the sensor buffers (in seconds).
Whether to visualize the sensor.
The list of mesh primitive paths to ray cast against.
The offset pose of the sensor's frame from the sensor's parent frame.
Whether the rays' starting positions and directions only track the yaw orientation.
Specify in what frame the rays are projected onto the ground.
The pattern that defines the local ray starting positions and directions in a pinhole camera pattern.
Maximum distance (in meters) from the sensor to ray cast to.
The range of drift (in meters) to add to the ray starting positions (xyz) in world frame.
The range of drift (in meters) to add to the projected ray points in local projection frame.
The configuration object for the visualization markers.
Whether to update the mesh ids of the ray hits in the data container.
Whether to reference duplicated meshes instead of loading each one separately into memory.
List of sensor names/types to enable for the camera.
Clipping behavior for the camera for values that exceed the maximum value.
- prim_path: str#
Prim path (or expression) to the sensor.
Note
The expression can contain the environment namespace regex {ENV_REGEX_NS}, which will be replaced with the environment namespace.
Example: {ENV_REGEX_NS}/Robot/sensor will be replaced with /World/envs/env_.*/Robot/sensor.
- update_period: float#
Update period of the sensor buffers (in seconds). Defaults to 0.0 (update every step).
- mesh_prim_paths: list[str]#
The list of mesh primitive paths to ray cast against.
If an entry is a string, it is internally converted to RaycastTargetCfg with track_mesh_transforms disabled. These settings ensure backwards compatibility with the default raycaster.
- offset: OffsetCfg#
The offset pose of the sensor’s frame from the sensor’s parent frame. Defaults to identity.
- attach_yaw_only: bool | None#
Whether the rays’ starting positions and directions only track the yaw orientation. Defaults to None, which doesn’t raise a warning of deprecated usage.
This is useful for ray-casting height maps, where only yaw rotation is needed.
Deprecated since version 2.1.1: This attribute is deprecated and will be removed in the future. Please use
ray_alignmentinstead.To get the same behavior as setting this parameter to
TrueorFalse, setray_alignmentto"yaw"or “base” respectively.
- ray_alignment: Literal['base', 'yaw', 'world']#
Specify in what frame the rays are projected onto the ground. Default is “base”.
The options are:
- "base": the rays' starting positions and directions track the full root position and orientation.
- "yaw": the rays' starting positions and directions track the root position and only the yaw component of the orientation. This is useful for ray-casting height maps.
- "world": the rays' starting positions and directions are always fixed. This is useful in combination with a mapping package on the robot and for querying ray-casts in a global frame.
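To illustrate the three modes, here is a minimal pure-Python sketch, not the library's implementation, showing how a downward ray direction would be rotated under each alignment given the sensor's roll, pitch, and yaw:

```python
import math

def _rot_x(v, a):
    x, y, z = v
    c, s = math.cos(a), math.sin(a)
    return (x, c * y - s * z, s * y + c * z)

def _rot_y(v, a):
    x, y, z = v
    c, s = math.cos(a), math.sin(a)
    return (c * x + s * z, y, -s * x + c * z)

def _rot_z(v, a):
    x, y, z = v
    c, s = math.cos(a), math.sin(a)
    return (c * x - s * y, s * x + c * y, z)

def align_ray(direction, roll, pitch, yaw, mode):
    """Illustrative sketch of the alignment modes.

    "world": the ray ignores the sensor orientation entirely.
    "yaw":   only the yaw component of the orientation is applied.
    "base":  the full orientation (roll, pitch, yaw) is applied.
    """
    if mode == "world":
        return direction
    if mode == "yaw":
        roll, pitch = 0.0, 0.0
    elif mode != "base":
        raise ValueError(f"unknown alignment mode: {mode}")
    # apply roll, then pitch, then yaw
    return _rot_z(_rot_y(_rot_x(direction, roll), pitch), yaw)

down = (0.0, 0.0, -1.0)
# with "yaw" alignment, a pitched sensor still casts straight down:
print(align_ray(down, 0.0, 0.5, 1.0, "yaw"))  # (0.0, 0.0, -1.0)
```

This is why "yaw" is the natural choice for height maps: tilting the robot does not tilt the scan pattern.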
- pattern_cfg: PinholeCameraPatternCfg#
The pattern that defines the local ray starting positions and directions in a pinhole camera pattern.
- drift_range: tuple[float, float]#
The range of drift (in meters) to add to the ray starting positions (xyz) in world frame. Defaults to (0.0, 0.0).
For floating base robots, this is useful for simulating drift in the robot’s pose estimation.
- ray_cast_drift_range: dict[str, tuple[float, float]]#
The range of drift (in meters) to add to the projected ray points in local projection frame. Defaults to a dictionary with zero drift for each x, y and z axis.
For floating base robots, this is useful for simulating drift in the robot’s pose estimation.
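A minimal sketch of how such a drift could be applied, as an illustration only and not the library's internal implementation: sample a uniform offset within the configured range and add it to a ray starting position.

```python
import random

def apply_drift(origin, drift_range):
    """Add a uniform random drift to a ray starting position.

    origin: (x, y, z) starting position of a ray, in meters.
    drift_range: (low, high) bounds in meters, e.g. (-0.05, 0.05).
    """
    low, high = drift_range
    drift = tuple(random.uniform(low, high) for _ in range(3))
    return tuple(o + d for o, d in zip(origin, drift))

# simulate pose-estimation drift of up to +/- 5 cm per axis:
noisy_origin = apply_drift((1.0, 2.0, 0.5), (-0.05, 0.05))
```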
- visualizer_cfg: VisualizationMarkersCfg#
The configuration object for the visualization markers. Defaults to RAY_CASTER_MARKER_CFG.
Note
This attribute is only used when debug visualization is enabled.
- reference_meshes: bool#
Whether to reference duplicated meshes instead of loading each one separately into memory. Defaults to True.
When enabled, the raycaster parses all meshes in all environments, but reuses references for duplicates instead of storing multiple copies. This reduces memory footprint.
- data_types: list[str]#
List of sensor names/types to enable for the camera. Defaults to [“distance_to_image_plane”].
- depth_clipping_behavior: Literal['max', 'zero', 'none']#
Clipping behavior for the camera for values that exceed the maximum value. Defaults to "none".
- "max": values are clipped to the maximum value.
- "zero": values are clipped to zero.
- "none": no clipping is applied. Values will be returned as inf for the distance_to_camera data type and nan for the distance_to_image_plane data type.
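The three behaviors can be illustrated with a small pure-Python sketch. This is an assumption about the semantics described above applied to a single depth value, not the library code, which operates on whole image tensors.

```python
import math

def clip_depth(value: float, max_value: float, behavior: str) -> float:
    """Apply the configured clipping behavior to a single depth value."""
    exceeded = math.isinf(value) or math.isnan(value) or value > max_value
    if not exceeded:
        return value
    if behavior == "max":
        return max_value
    if behavior == "zero":
        return 0.0
    if behavior == "none":
        return value  # inf / nan values are passed through unchanged
    raise ValueError(f"unknown behavior: {behavior}")

print(clip_depth(float("inf"), 10.0, "max"))   # 10.0
print(clip_depth(float("inf"), 10.0, "zero"))  # 0.0
print(clip_depth(5.0, 10.0, "none"))           # 5.0
```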
Inertia Measurement Unit#
- class isaaclab.sensors.Imu[source]#
Bases: FactoryBase, BaseImu
Factory for creating IMU sensor instances.
Attributes:
Data from the sensor.
Memory device for computation.
Whether the sensor has a debug visualization implemented.
Whether the sensor is initialized.
Number of instances of the sensor.
The configuration parameters.
Methods:
__new__(cls, *args, **kwargs): Create a new instance of an IMU sensor based on the backend.
__init__(cfg): Initializes the Imu sensor.
Returns a list of registered backend names.
register(name, sub_class): Register a new implementation class.
reset([env_ids, env_mask]): Resets the sensor internals.
set_debug_vis(debug_vis): Sets whether to visualize the sensor data.
- abstract property data: BaseImuData#
Data from the sensor.
This property is only updated when the user tries to access the data. This is done to avoid unnecessary computation when the sensor data is not used.
For updating the sensor when this property is accessed, you can use the following code snippet in your sensor implementation:
# update sensors if needed
self._update_outdated_buffers()
# return the data (where `_data` is the data for the sensor)
return self._data
- static __new__(cls, *args, **kwargs) -> BaseImu | PhysXImu[source]#
Create a new instance of an IMU sensor based on the backend.
- property has_debug_vis_implementation: bool#
Whether the sensor has a debug visualization implemented.
- property is_initialized: bool#
Whether the sensor is initialized.
Returns True if the sensor is initialized, False otherwise.
- property num_instances: int#
Number of instances of the sensor.
This is equal to the number of sensors per environment multiplied by the number of environments.
- reset(env_ids: Sequence[int] | None = None, env_mask: wp.array | None = None) -> None#
Resets the sensor internals.
- Parameters:
env_ids – The environment indices to reset. Defaults to None, in which case all environments are reset.
env_mask – A boolean warp array indicating which environments to reset. If provided, takes priority over
env_ids. Defaults to None.
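The documented priority between env_mask and env_ids can be sketched with a hypothetical helper. The actual implementation operates on warp arrays; plain lists are used here for illustration.

```python
def resolve_reset_ids(num_envs, env_ids=None, env_mask=None):
    """Resolve which environment indices to reset.

    env_mask, if provided, takes priority over env_ids; if neither
    is given, all environments are reset.
    """
    if env_mask is not None:
        return [i for i, flag in enumerate(env_mask) if flag]
    if env_ids is not None:
        return list(env_ids)
    return list(range(num_envs))

print(resolve_reset_ids(4))                  # [0, 1, 2, 3]
print(resolve_reset_ids(4, env_ids=[1, 3]))  # [1, 3]
# env_mask wins over env_ids when both are given:
print(resolve_reset_ids(4, env_ids=[1], env_mask=[True, False, False, True]))  # [0, 3]
```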
- class isaaclab.sensors.ImuCfg[source]#
Bases: SensorBaseCfg
Configuration for an Inertial Measurement Unit (IMU) sensor.
Classes:
The offset pose of the sensor's frame from the sensor's parent frame.
Attributes:
Prim path (or expression) to the sensor.
Update period of the sensor buffers (in seconds).
Whether to visualize the sensor.
The offset pose of the sensor's frame from the sensor's parent frame.
The configuration object for the visualization markers.
The linear acceleration bias applied to the linear acceleration in the world frame (x,y,z).
- class OffsetCfg[source]#
Bases: object
The offset pose of the sensor's frame from the sensor's parent frame.
Attributes:
- prim_path: str#
Prim path (or expression) to the sensor.
Note
The expression can contain the environment namespace regex {ENV_REGEX_NS}, which will be replaced with the environment namespace.
Example: {ENV_REGEX_NS}/Robot/sensor will be replaced with /World/envs/env_.*/Robot/sensor.
- update_period: float#
Update period of the sensor buffers (in seconds). Defaults to 0.0 (update every step).
- offset: OffsetCfg#
The offset pose of the sensor’s frame from the sensor’s parent frame. Defaults to identity.
- visualizer_cfg: VisualizationMarkersCfg#
The configuration object for the visualization markers. Defaults to RED_ARROW_X_MARKER_CFG.
This attribute is only used when debug visualization is enabled.
- gravity_bias: tuple[float, float, float]#
The linear acceleration bias applied to the linear acceleration in the world frame (x,y,z).
Imu sensors typically output a positive gravity acceleration in opposition to the direction of gravity. By default, this parameter is set to (0.0, 0.0, 9.81), which results in a positive acceleration reading along the world Z axis; setting it to (0.0, 0.0, 0.0) removes that bias from the reading.
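The effect of gravity_bias can be sketched in plain Python. The helper below is hypothetical and only mirrors the behavior described above: the configured bias is added to the world-frame linear acceleration reading.

```python
def apply_gravity_bias(lin_acc_world, gravity_bias=(0.0, 0.0, 9.81)):
    """Add the configured bias to a world-frame linear acceleration.

    With the default bias, a stationary IMU reports +9.81 m/s^2
    along world Z; with a bias of (0.0, 0.0, 0.0) the gravity
    offset is removed from the reading.
    """
    return tuple(a + b for a, b in zip(lin_acc_world, gravity_bias))

# stationary sensor, true linear acceleration is zero:
print(apply_gravity_bias((0.0, 0.0, 0.0)))                   # (0.0, 0.0, 9.81)
print(apply_gravity_bias((0.0, 0.0, 0.0), (0.0, 0.0, 0.0)))  # (0.0, 0.0, 0.0)
```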