isaaclab.sensors#

Sub-package containing various sensor class implementations.

This subpackage contains the sensor classes that are compatible with Isaac Sim. We include both USD-based and custom sensors:

  • USD-prim sensors: Available in Omniverse and require creating a USD prim for them. For instance, RTX ray tracing camera and lidar sensors.

  • USD-schema sensors: Available in Omniverse and require creating a USD schema on an existing prim. For instance, contact sensors and frame transformers.

  • Custom sensors: Implemented in Python and do not require creating any USD prim or schema. For instance, warp-based ray-casters.

Due to the above categorization, the prim paths passed to the sensor’s configuration class are interpreted differently based on the sensor type. The following table summarizes the interpretation of the prim paths for different sensor types:

Sensor Type       | Example Prim Path        | Pre-check
------------------|--------------------------|--------------------------------------------------------------
Camera            | /World/robot/base/camera | Leaf is available, and it will spawn a USD camera
Contact Sensor    | /World/robot/feet_*      | Leaf is available and checks if the schema exists
Ray Caster        | /World/robot/base        | Leaf exists and is a physics body (Articulation / Rigid Body)
Frame Transformer | /World/robot/base        | Leaf exists and is a physics body (Articulation / Rigid Body)
Imu               | /World/robot/base        | Leaf exists and is a physics body (Rigid Body)
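For example, the same kind of path expression is passed to each sensor's configuration class but is resolved according to the table above. A minimal sketch (robot and body names are hypothetical placeholders):

import isaaclab.sim as sim_utils
from isaaclab.sensors import CameraCfg, ContactSensorCfg, RayCasterCfg, patterns

# Camera: the leaf prim ("front_cam") need not exist; a USD camera is spawned there.
camera_cfg = CameraCfg(
    prim_path="{ENV_REGEX_NS}/Robot/base/front_cam",
    width=640,
    height=480,
    spawn=sim_utils.PinholeCameraCfg(),
)

# Contact sensor: the leaf prims (here, feet bodies) must already exist; the
# contact-reporting schema is applied to them.
contact_cfg = ContactSensorCfg(prim_path="{ENV_REGEX_NS}/Robot/.*_FOOT")

# Ray caster: the leaf prim must exist and be a physics body.
ray_cfg = RayCasterCfg(
    prim_path="{ENV_REGEX_NS}/Robot/base",
    mesh_prim_paths=["/World/ground"],
    pattern_cfg=patterns.GridPatternCfg(resolution=0.1, size=(1.0, 1.0)),
)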

Submodules

patterns

Sub-module for ray-casting patterns used by the ray-caster.

Classes

SensorBase

The base class for implementing a sensor.

SensorBaseCfg

Configuration parameters for a sensor.

Camera

The camera sensor for acquiring visual data.

CameraData

Data container for the camera sensor.

CameraCfg

Configuration for a camera sensor.

TiledCamera

The tiled rendering based camera sensor for acquiring the same data as the Camera class.

TiledCameraCfg

Configuration for a tiled rendering-based camera sensor.

ContactSensor

Factory for creating contact sensor instances.

ContactSensorData

Factory for creating contact sensor data instances.

ContactSensorCfg

Configuration for the contact sensor.

FrameTransformer

Factory for creating frame transformer instances.

FrameTransformerData

Factory for creating frame transformer data instances.

FrameTransformerCfg

Configuration for the frame transformer sensor.

RayCaster

A ray-casting sensor.

RayCasterData

Data container for the ray-cast sensor.

RayCasterCfg

Configuration for the ray-cast sensor.

RayCasterCamera

A ray-casting camera sensor.

RayCasterCameraCfg

Configuration for the ray-cast camera sensor.

MultiMeshRayCaster

A multi-mesh ray-casting sensor.

MultiMeshRayCasterData

Data container for the multi-mesh ray-cast sensor.

MultiMeshRayCasterCfg

Configuration for the multi-mesh ray-cast sensor.

MultiMeshRayCasterCamera

A multi-mesh ray-casting camera sensor.

MultiMeshRayCasterCameraCfg

Configuration for the multi-mesh ray-cast camera sensor.

Imu

Factory for creating IMU sensor instances.

ImuCfg

Configuration for an Inertial Measurement Unit (IMU) sensor.

Sensor Base#

class isaaclab.sensors.SensorBase[source]#

The base class for implementing a sensor.

The implementation is based on lazy evaluation. The sensor data is only updated when the user tries to access it through the data property or sets force_compute=True in the update() method. This avoids unnecessary computation when the sensor data is not used.

The sensor is updated at the specified update period. If the update period is zero, then the sensor is updated at every simulation step.
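As a usage sketch of this contract (assuming sensor is any concrete sensor instance, e.g. a Camera, inside a running simulation, and sim_dt is the simulation step):

# Accessing `.data` lazily refreshes the buffers only if they are outdated:
measurement = sensor.data

# Buffers can also be refreshed explicitly, following the description above:
sensor.update(sim_dt, force_compute=True)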

Methods:

__init__(cfg)

Initialize the sensor class.

set_debug_vis(debug_vis)

Sets whether to visualize the sensor data.

reset([env_ids, env_mask])

Resets the sensor internals.

Attributes:

is_initialized

Whether the sensor is initialized.

num_instances

Number of instances of the sensor.

device

Memory device for computation.

data

Data from the sensor.

has_debug_vis_implementation

Whether the sensor has a debug visualization implemented.

__init__(cfg: SensorBaseCfg)[source]#

Initialize the sensor class.

Parameters:

cfg – The configuration parameters for the sensor.

property is_initialized: bool#

Whether the sensor is initialized.

Returns True if the sensor is initialized, False otherwise.

property num_instances: int#

Number of instances of the sensor.

This is equal to the number of sensors per environment multiplied by the number of environments.

property device: str#

Memory device for computation.

abstract property data: Any#

Data from the sensor.

This property is only updated when the user tries to access the data. This is done to avoid unnecessary computation when the sensor data is not used.

For updating the sensor when this property is accessed, you can use the following code snippet in your sensor implementation:

# update sensors if needed
self._update_outdated_buffers()
# return the data (where `_data` is the data for the sensor)
return self._data
property has_debug_vis_implementation: bool#

Whether the sensor has a debug visualization implemented.

set_debug_vis(debug_vis: bool) -> bool[source]#

Sets whether to visualize the sensor data.

Parameters:

debug_vis – Whether to visualize the sensor data.

Returns:

Whether the debug visualization was successfully set. False if the sensor does not support debug visualization.

reset(env_ids: Sequence[int] | None = None, env_mask: wp.array | None = None) -> None[source]#

Resets the sensor internals.

Parameters:
  • env_ids – The environment indices to reset. Defaults to None, in which case all environments are reset.

  • env_mask – A boolean warp array indicating which environments to reset. If provided, takes priority over env_ids. Defaults to None.

class isaaclab.sensors.SensorBaseCfg[source]#

Configuration parameters for a sensor.

Attributes:

prim_path

Prim path (or expression) to the sensor.

update_period

Update period of the sensor buffers (in seconds).

debug_vis

Whether to visualize the sensor.

prim_path: str#

Prim path (or expression) to the sensor.

Note

The expression can contain the environment namespace regex {ENV_REGEX_NS} which will be replaced with the environment namespace.

Example: {ENV_REGEX_NS}/Robot/sensor will be replaced with /World/envs/env_.*/Robot/sensor.

update_period: float#

Update period of the sensor buffers (in seconds). Defaults to 0.0 (update every step).

debug_vis: bool#

Whether to visualize the sensor. Defaults to False.

USD Camera#

class isaaclab.sensors.Camera[source]#

Bases: SensorBase

The camera sensor for acquiring visual data.

This class wraps the UsdGeom Camera to provide a consistent API for acquiring visual data. It ensures that the camera follows the ROS convention for its coordinate system.

Summarizing from the replicator extension, the following sensor types are supported:

  • "rgb": A 3-channel rendered color image.

  • "rgba": A 4-channel rendered color image with alpha channel.

  • "albedo": A 4-channel fast diffuse-albedo only path for color image. Note that this path will achieve the best performance when used alone or with depth only.

  • "distance_to_camera": An image containing the distance to camera optical center.

  • "distance_to_image_plane": An image containing distances of 3D points from camera plane along camera’s z-axis.

  • "depth": The same as "distance_to_image_plane".

  • "simple_shading_constant_diffuse": Simple shading (constant diffuse) RGB approximation.

  • "simple_shading_diffuse_mdl": Simple shading (diffuse MDL) RGB approximation.

  • "simple_shading_full_mdl": Simple shading (full MDL) RGB approximation.

  • "normals": An image containing the local surface normal vectors at each pixel.

  • "motion_vectors": An image containing the motion vector data at each pixel.

  • "semantic_segmentation": The semantic segmentation data.

  • "instance_segmentation_fast": The instance segmentation data.

  • "instance_id_segmentation_fast": The instance id segmentation data.

Note

Currently the following sensor types are not supported in a “view” format:

  • "instance_segmentation": The instance segmentation data. Please use the fast counterparts instead.

  • "instance_id_segmentation": The instance id segmentation data. Please use the fast counterparts instead.

  • "bounding_box_2d_tight": The tight 2D bounding box data (only contains non-occluded regions).

  • "bounding_box_2d_tight_fast": The tight 2D bounding box data (only contains non-occluded regions).

  • "bounding_box_2d_loose": The loose 2D bounding box data (contains occluded regions).

  • "bounding_box_2d_loose_fast": The loose 2D bounding box data (contains occluded regions).

  • "bounding_box_3d": The 3D view space bounding box data.

  • "bounding_box_3d_fast": The 3D view space bounding box data.

Attributes:

cfg

The configuration parameters.

UNSUPPORTED_TYPES

The set of sensor types that are not supported by the camera class.

num_instances

Number of instances of the sensor.

data

Data from the sensor.

frame

Frame number when the measurement took place.

render_product_paths

The path of the render products for the cameras.

image_shape

A tuple containing (height, width) of the camera sensor.

device

Memory device for computation.

has_debug_vis_implementation

Whether the sensor has a debug visualization implemented.

is_initialized

Whether the sensor is initialized.

Methods:

__init__(cfg)

Initializes the camera sensor.

set_intrinsic_matrices(matrices[, ...])

Set parameters of the USD camera from its intrinsic matrix.

set_world_poses([positions, orientations, ...])

Set the pose of the camera w.r.t. the world frame using the specified convention.

set_world_poses_from_view(eyes, targets[, ...])

Set the poses of the camera from the eye position and look-at target position.

reset([env_ids, env_mask])

Resets the sensor internals.

set_debug_vis(debug_vis)

Sets whether to visualize the sensor data.

cfg: CameraCfg#

The configuration parameters.

UNSUPPORTED_TYPES: set[str] = {'bounding_box_2d_loose', 'bounding_box_2d_loose_fast', 'bounding_box_2d_tight', 'bounding_box_2d_tight_fast', 'bounding_box_3d', 'bounding_box_3d_fast', 'instance_id_segmentation', 'instance_segmentation'}#

The set of sensor types that are not supported by the camera class.

__init__(cfg: CameraCfg)[source]#

Initializes the camera sensor.

Parameters:

cfg – The configuration parameters.

Raises:
  • RuntimeError – If no camera prim is found at the given path.

  • ValueError – If the provided data types are not supported by the camera.

property num_instances: int#

Number of instances of the sensor.

This is equal to the number of sensors per environment multiplied by the number of environments.

property data: CameraData#

Data from the sensor.

This property is only updated when the user tries to access the data. This is done to avoid unnecessary computation when the sensor data is not used.

For updating the sensor when this property is accessed, you can use the following code snippet in your sensor implementation:

# update sensors if needed
self._update_outdated_buffers()
# return the data (where `_data` is the data for the sensor)
return self._data
property frame: torch.tensor#

Frame number when the measurement took place.

property render_product_paths: list[str]#

The path of the render products for the cameras.

This can be used via replicator interfaces to attach to writes or external annotator registry.

property image_shape: tuple[int, int]#

A tuple containing (height, width) of the camera sensor.

set_intrinsic_matrices(matrices: torch.Tensor, focal_length: float | None = None, env_ids: Sequence[int] | None = None)[source]#

Set parameters of the USD camera from its intrinsic matrix.

The intrinsic matrix is used to set the following parameters on the USD camera:

  • focal_length: The focal length of the camera.

  • horizontal_aperture: The horizontal aperture of the camera.

  • vertical_aperture: The vertical aperture of the camera.

  • horizontal_aperture_offset: The horizontal offset of the camera.

  • vertical_aperture_offset: The vertical offset of the camera.

Warning

Due to limitations of the Omniverse camera, we need to assume that the camera uses a spherical lens, i.e. has square pixels, and that the optical center is at the camera eye. If the input intrinsic matrix does not satisfy these assumptions, the camera will not be set up correctly.

Parameters:
  • matrices – The intrinsic matrices for the camera. Shape is (N, 3, 3).

  • focal_length – Perspective focal length (in cm) used to calculate pixel size. Defaults to None. If None, the focal length is computed as 1 / width.

  • env_ids – The sensor indices to manipulate. Defaults to None, in which case all sensor indices are used.
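For illustration, a sketch that builds a pinhole intrinsic matrix and applies it to every camera instance (the values are hypothetical and respect the square-pixel, centered-optical-center assumption above; camera is an initialized Camera):

import torch

fx = fy = 600.0        # pixel focal lengths (square pixels, so fx == fy)
cx, cy = 320.0, 240.0  # optical center at the image center of a 640x480 sensor
K = torch.tensor([[fx, 0.0, cx], [0.0, fy, cy], [0.0, 0.0, 1.0]])
camera.set_intrinsic_matrices(K.unsqueeze(0).repeat(camera.num_instances, 1, 1))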

set_world_poses(positions: torch.Tensor | None = None, orientations: torch.Tensor | None = None, env_ids: Sequence[int] | None = None, convention: Literal['opengl', 'ros', 'world'] = 'ros')[source]#

Set the pose of the camera w.r.t. the world frame using specified convention.

Since different fields use different conventions for camera orientations, the method allows users to set the camera poses in the specified convention. Possible conventions are:

  • "opengl" - forward axis: -Z - up axis +Y - Offset is applied in the OpenGL (Usd.Camera) convention

  • "ros" - forward axis: +Z - up axis -Y - Offset is applied in the ROS convention

  • "world" - forward axis: +X - up axis +Z - Offset is applied in the World Frame convention

See isaaclab.sensors.camera.utils.convert_camera_frame_orientation_convention() for more details on the conventions.

Parameters:
  • positions – The cartesian coordinates (in meters). Shape is (N, 3). Defaults to None, in which case the camera position is not changed.

  • orientations – The quaternion orientation in (x, y, z, w). Shape is (N, 4). Defaults to None, in which case the camera orientation is not changed.

  • env_ids – The sensor indices to manipulate. Defaults to None, in which case all sensor indices are used.

  • convention – The convention in which the poses are fed. Defaults to “ros”.

Raises:

RuntimeError – If the camera prim is not set. Need to call initialize() method first.

set_world_poses_from_view(eyes: torch.Tensor, targets: torch.Tensor, env_ids: Sequence[int] | None = None)[source]#

Set the poses of the camera from the eye position and look-at target position.

Parameters:
  • eyes – The positions of the camera’s eye. Shape is (N, 3).

  • targets – The target locations to look at. Shape is (N, 3).

  • env_ids – The sensor indices to manipulate. Defaults to None, in which case all sensor indices are used.

Raises:
  • RuntimeError – If the camera prim is not set. Need to call initialize() method first.

  • NotImplementedError – If the stage up-axis is not “Y” or “Z”.
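A sketch of aiming every camera at the world origin from a common eye position (assuming camera is an initialized Camera):

import torch

num = camera.num_instances
eyes = torch.tensor([[2.0, 2.0, 2.0]], device=camera.device).repeat(num, 1)
targets = torch.zeros((num, 3), device=camera.device)
camera.set_world_poses_from_view(eyes, targets)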

reset(env_ids: Sequence[int] | None = None, env_mask: wp.array | None = None)[source]#

Resets the sensor internals.

Parameters:
  • env_ids – The environment indices to reset. Defaults to None, in which case all environments are reset.

  • env_mask – A boolean warp array indicating which environments to reset. If provided, takes priority over env_ids. Defaults to None.

property device: str#

Memory device for computation.

property has_debug_vis_implementation: bool#

Whether the sensor has a debug visualization implemented.

property is_initialized: bool#

Whether the sensor is initialized.

Returns True if the sensor is initialized, False otherwise.

set_debug_vis(debug_vis: bool) -> bool#

Sets whether to visualize the sensor data.

Parameters:

debug_vis – Whether to visualize the sensor data.

Returns:

Whether the debug visualization was successfully set. False if the sensor does not support debug visualization.

class isaaclab.sensors.CameraData[source]#

Data container for the camera sensor.

Attributes:

pos_w

Position of the sensor origin in world frame, following ROS convention.

quat_w_world

Quaternion orientation (x, y, z, w) of the sensor origin in world frame, following the world coordinate frame convention.

image_shape

A tuple containing (height, width) of the camera sensor.

intrinsic_matrices

The intrinsic matrices for the camera.

output

The retrieved sensor data with sensor types as key.

info

The retrieved sensor info with sensor types as key.

quat_w_ros

Quaternion orientation (x, y, z, w) of the sensor origin in the world frame, following ROS convention.

quat_w_opengl

Quaternion orientation (x, y, z, w) of the sensor origin in the world frame, following Opengl / USD Camera convention.

pos_w: torch.Tensor = None#

Position of the sensor origin in world frame, following ROS convention.

Shape is (N, 3) where N is the number of sensors.

quat_w_world: torch.Tensor = None#

Quaternion orientation (x, y, z, w) of the sensor origin in world frame, following the world coordinate frame convention.

Note

World frame convention follows the camera aligned with forward axis +X and up axis +Z.

Shape is (N, 4) where N is the number of sensors.

image_shape: tuple[int, int] = None#

A tuple containing (height, width) of the camera sensor.

intrinsic_matrices: torch.Tensor = None#

The intrinsic matrices for the camera.

Shape is (N, 3, 3) where N is the number of sensors.

output: dict[str, torch.Tensor] = None#

The retrieved sensor data with sensor types as key.

The format of the data is available in the Replicator Documentation. For semantic-based data, this corresponds to the "data" key in the output of the sensor.

info: list[dict[str, Any]] = None#

The retrieved sensor info with sensor types as key.

This contains extra information provided by the sensor, such as the semantic segmentation label mapping and prim paths. For semantic-based data, this corresponds to the "info" key in the output of the sensor. For other sensor types, the info is empty.

property quat_w_ros: torch.Tensor#

Quaternion orientation (x, y, z, w) of the sensor origin in the world frame, following ROS convention.

Note

ROS convention follows the camera aligned with forward axis +Z and up axis -Y.

Shape is (N, 4) where N is the number of sensors.

property quat_w_opengl: torch.Tensor#

Quaternion orientation (x, y, z, w) of the sensor origin in the world frame, following Opengl / USD Camera convention.

Note

OpenGL convention follows the camera aligned with forward axis -Z and up axis +Y.

Shape is (N, 4) where N is the number of sensors.
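A typical read pattern over this container might look as follows (a sketch; which keys exist in output depends on the configured data_types):

data = camera.data                              # lazily refreshes outdated buffers
rgb = data.output["rgb"]                        # (N, H, W, 3) color image batch
depth = data.output["distance_to_image_plane"]  # per-pixel depth along the camera z-axis
height, width = data.image_shape
intrinsics = data.intrinsic_matrices            # (N, 3, 3)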

class isaaclab.sensors.CameraCfg[source]#

Bases: SensorBaseCfg

Configuration for a camera sensor.

Attributes:

offset

The offset pose of the sensor's frame from the sensor's parent frame.

spawn

Spawn configuration for the asset.

depth_clipping_behavior

Clipping behavior for the camera for values that exceed the maximum value.

data_types

List of sensor names/types to enable for the camera.

width

Width of the image in pixels.

prim_path

Prim path (or expression) to the sensor.

update_period

Update period of the sensor buffers (in seconds).

debug_vis

Whether to visualize the sensor.

height

Height of the image in pixels.

update_latest_camera_pose

Whether to update the latest camera pose when fetching the camera's data.

semantic_filter

A string or a list specifying a semantic filter predicate.

colorize_semantic_segmentation

Whether to colorize the semantic segmentation images.

colorize_instance_id_segmentation

Whether to colorize the instance ID segmentation images.

colorize_instance_segmentation

Whether to colorize the instance segmentation images.

semantic_segmentation_mapping

Dictionary mapping semantics to specific colours

renderer_cfg

Renderer configuration for camera sensor.

offset: OffsetCfg#

The offset pose of the sensor’s frame from the sensor’s parent frame. Defaults to identity.

Note

The parent frame is the frame the sensor attaches to. For example, the parent frame of a camera at path /World/envs/env_0/Robot/Camera is /World/envs/env_0/Robot.

spawn: PinholeCameraCfg | FisheyeCameraCfg | None#

Spawn configuration for the asset.

If None, then the prim is not spawned by the asset. Instead, it is assumed that the asset is already present in the scene.

depth_clipping_behavior: Literal['max', 'zero', 'none']#

Clipping behavior for the camera for values that exceed the maximum value. Defaults to “none”.

  • "max": Values are clipped to the maximum value.

  • "zero": Values are clipped to zero.

  • "none": No clipping is applied. Values will be returned as inf.

data_types: list[str]#

List of sensor names/types to enable for the camera. Defaults to [“rgb”].

Please refer to the Camera class for a list of available data types.

width: int#

Width of the image in pixels.

prim_path: str#

Prim path (or expression) to the sensor.

Note

The expression can contain the environment namespace regex {ENV_REGEX_NS} which will be replaced with the environment namespace.

Example: {ENV_REGEX_NS}/Robot/sensor will be replaced with /World/envs/env_.*/Robot/sensor.

update_period: float#

Update period of the sensor buffers (in seconds). Defaults to 0.0 (update every step).

debug_vis: bool#

Whether to visualize the sensor. Defaults to False.

height: int#

Height of the image in pixels.

update_latest_camera_pose: bool#

Whether to update the latest camera pose when fetching the camera’s data. Defaults to False.

If True, the latest camera pose is updated in the camera’s data which will slow down performance due to the use of XformPrimView. If False, the pose of the camera during initialization is returned.

semantic_filter: str | list[str]#

A string or a list specifying a semantic filter predicate. Defaults to "*:*".

If a string, it should be a disjunctive normal form of (semantic type, labels). For example:

  • "typeA : labelA & !labelB | labelC , typeB: labelA ; typeC: labelE": All prims with semantic type “typeA” and label “labelA” but not “labelB” or with label “labelC”. Also, all prims with semantic type “typeB” and label “labelA”, or with semantic type “typeC” and label “labelE”.

  • "typeA : * ; * : labelA": All prims with semantic type “typeA” or with label “labelA”

If a list of strings, each string should be a semantic type. The segmentation for prims with semantics of the specified types will be retrieved. For example, if the list is [“class”], only the segmentation for prims with semantics of type “class” will be retrieved.

See also

For more information on the semantics filter, see the documentation on Replicator Semantics Schema Editor.

colorize_semantic_segmentation: bool#

Whether to colorize the semantic segmentation images. Defaults to True.

If True, semantic segmentation is converted to an image where semantic IDs are mapped to colors and returned as a uint8 4-channel array. If False, the output is returned as an int32 array.

colorize_instance_id_segmentation: bool#

Whether to colorize the instance ID segmentation images. Defaults to True.

If True, instance ID segmentation is converted to an image where instance IDs are mapped to colors and returned as a uint8 4-channel array. If False, the output is returned as an int32 array.

colorize_instance_segmentation: bool#

Whether to colorize the instance segmentation images. Defaults to True.

If True, instance segmentation is converted to an image where instance IDs are mapped to colors and returned as a uint8 4-channel array. If False, the output is returned as an int32 array.

semantic_segmentation_mapping: dict#

Dictionary mapping semantics to specific colours.

For example:

{
    "class:cube_1": (255, 36, 66, 255),
    "class:cube_2": (255, 184, 48, 255),
    "class:cube_3": (55, 255, 139, 255),
    "class:table": (255, 237, 218, 255),
    "class:ground": (100, 100, 100, 255),
    "class:robot": (61, 178, 255, 255),
}
renderer_cfg: RendererCfg#

Renderer configuration for camera sensor.

Tile-Rendered USD Camera#

class isaaclab.sensors.TiledCamera[source]#

Bases: Camera

The tiled rendering based camera sensor for acquiring the same data as the Camera class.

This class inherits from the Camera class but uses the tiled-rendering API to acquire the visual data. Tiled rendering concatenates the rendered images from multiple cameras into a single image. This allows for rendering multiple cameras in parallel and is useful for rendering large scenes with multiple cameras efficiently.

The following sensor types are supported:

  • "rgb": A 3-channel rendered color image.

  • "rgba": A 4-channel rendered color image with alpha channel.

  • "albedo": A 4-channel color image from a fast diffuse-albedo-only render path. Note that this path achieves the best performance when used alone or with depth only.

  • "distance_to_camera": An image containing the distance to the camera’s optical center.

  • "distance_to_image_plane": An image containing the distances of 3D points from the camera plane along the camera’s z-axis.

  • "depth": Alias for "distance_to_image_plane".

  • "simple_shading_constant_diffuse": Simple shading (constant diffuse) RGB approximation.

  • "simple_shading_diffuse_mdl": Simple shading (diffuse MDL) RGB approximation.

  • "simple_shading_full_mdl": Simple shading (full MDL) RGB approximation.

  • "normals": An image containing the local surface normal vectors at each pixel.

  • "motion_vectors": An image containing the motion vector data at each pixel.

  • "semantic_segmentation": The semantic segmentation data.

  • "instance_segmentation_fast": The instance segmentation data.

  • "instance_id_segmentation_fast": The instance id segmentation data.

Note

Currently the following sensor types are not supported in a “view” format:

  • "instance_segmentation": The instance segmentation data. Please use the fast counterparts instead.

  • "instance_id_segmentation": The instance id segmentation data. Please use the fast counterparts instead.

  • "bounding_box_2d_tight": The tight 2D bounding box data (only contains non-occluded regions).

  • "bounding_box_2d_tight_fast": The tight 2D bounding box data (only contains non-occluded regions).

  • "bounding_box_2d_loose": The loose 2D bounding box data (contains occluded regions).

  • "bounding_box_2d_loose_fast": The loose 2D bounding box data (contains occluded regions).

  • "bounding_box_3d": The 3D view space bounding box data.

  • "bounding_box_3d_fast": The 3D view space bounding box data.

Added in version v1.0.0: This feature is available starting from Isaac Sim 4.2. Before this version, the tiled rendering APIs were not available.

Attributes:

SIMPLE_SHADING_AOV

cfg

The configuration parameters.

UNSUPPORTED_TYPES

The set of sensor types that are not supported by the camera class.

data

Data from the sensor.

device

Memory device for computation.

frame

Frame number when the measurement took place.

has_debug_vis_implementation

Whether the sensor has a debug visualization implemented.

image_shape

A tuple containing (height, width) of the camera sensor.

is_initialized

Whether the sensor is initialized.

num_instances

Number of instances of the sensor.

render_product_paths

The path of the render products for the cameras.

Methods:

__init__(cfg)

Initializes the tiled camera sensor.

reset([env_ids, env_mask])

Resets the sensor internals.

set_debug_vis(debug_vis)

Sets whether to visualize the sensor data.

set_intrinsic_matrices(matrices[, ...])

Set parameters of the USD camera from its intrinsic matrix.

set_world_poses([positions, orientations, ...])

Set the pose of the camera w.r.t. the world frame using the specified convention.

set_world_poses_from_view(eyes, targets[, ...])

Set the poses of the camera from the eye position and look-at target position.

SIMPLE_SHADING_AOV: str = 'SimpleShadingSD'#

cfg: TiledCameraCfg#

The configuration parameters.

__init__(cfg: TiledCameraCfg)[source]#

Initializes the tiled camera sensor.

Parameters:

cfg – The configuration parameters.

Raises:
  • RuntimeError – If no camera prim is found at the given path.

  • ValueError – If the provided data types are not supported by the camera.

reset(env_ids: Sequence[int] | None = None, env_mask: wp.array | None = None)[source]#

Resets the sensor internals.

Parameters:
  • env_ids – The environment indices to reset. Defaults to None, in which case all environments are reset.

  • env_mask – A boolean warp array indicating which environments to reset. If provided, takes priority over env_ids. Defaults to None.

UNSUPPORTED_TYPES: set[str] = {'bounding_box_2d_loose', 'bounding_box_2d_loose_fast', 'bounding_box_2d_tight', 'bounding_box_2d_tight_fast', 'bounding_box_3d', 'bounding_box_3d_fast', 'instance_id_segmentation', 'instance_segmentation'}#

The set of sensor types that are not supported by the camera class.

property data: CameraData#

Data from the sensor.

This property is only updated when the user tries to access the data. This is done to avoid unnecessary computation when the sensor data is not used.

For updating the sensor when this property is accessed, you can use the following code snippet in your sensor implementation:

# update sensors if needed
self._update_outdated_buffers()
# return the data (where `_data` is the data for the sensor)
return self._data
property device: str#

Memory device for computation.

property frame: torch.tensor#

Frame number when the measurement took place.

property has_debug_vis_implementation: bool#

Whether the sensor has a debug visualization implemented.

property image_shape: tuple[int, int]#

A tuple containing (height, width) of the camera sensor.

property is_initialized: bool#

Whether the sensor is initialized.

Returns True if the sensor is initialized, False otherwise.

property num_instances: int#

Number of instances of the sensor.

This is equal to the number of sensors per environment multiplied by the number of environments.

property render_product_paths: list[str]#

The path of the render products for the cameras.

This can be used via replicator interfaces to attach to writes or external annotator registry.

set_debug_vis(debug_vis: bool) -> bool#

Sets whether to visualize the sensor data.

Parameters:

debug_vis – Whether to visualize the sensor data.

Returns:

Whether the debug visualization was successfully set. False if the sensor does not support debug visualization.

set_intrinsic_matrices(matrices: torch.Tensor, focal_length: float | None = None, env_ids: Sequence[int] | None = None)#

Set parameters of the USD camera from its intrinsic matrix.

The intrinsic matrix is used to set the following parameters to the USD camera:

  • focal_length: The focal length of the camera.

  • horizontal_aperture: The horizontal aperture of the camera.

  • vertical_aperture: The vertical aperture of the camera.

  • horizontal_aperture_offset: The horizontal offset of the camera.

  • vertical_aperture_offset: The vertical offset of the camera.

Warning

Due to limitations of the Omniverse camera, we need to assume that the camera uses a spherical lens, i.e. has square pixels, and that the optical center is at the camera eye. If the input intrinsic matrix does not satisfy these assumptions, the camera will not be set up correctly.

Parameters:
  • matrices – The intrinsic matrices for the camera. Shape is (N, 3, 3).

  • focal_length – Perspective focal length (in cm) used to calculate pixel size. Defaults to None. If None, the focal length is computed as 1 / width.

  • env_ids – The sensor indices to manipulate. Defaults to None, in which case all sensor indices are used.

set_world_poses(positions: torch.Tensor | None = None, orientations: torch.Tensor | None = None, env_ids: Sequence[int] | None = None, convention: Literal['opengl', 'ros', 'world'] = 'ros')#

Set the pose of the camera w.r.t. the world frame using specified convention.

Since different fields use different conventions for camera orientations, the method allows users to set the camera poses in the specified convention. Possible conventions are:

  • "opengl" - forward axis: -Z - up axis +Y - Offset is applied in the OpenGL (Usd.Camera) convention

  • "ros" - forward axis: +Z - up axis -Y - Offset is applied in the ROS convention

  • "world" - forward axis: +X - up axis +Z - Offset is applied in the World Frame convention

See isaaclab.sensors.camera.utils.convert_camera_frame_orientation_convention() for more details on the conventions.

Parameters:
  • positions – The cartesian coordinates (in meters). Shape is (N, 3). Defaults to None, in which case the camera position is not changed.

  • orientations – The quaternion orientation in (x, y, z, w). Shape is (N, 4). Defaults to None, in which case the camera orientation is not changed.

  • env_ids – The sensor indices to manipulate. Defaults to None, in which case all sensor indices are used.

  • convention – The convention in which the poses are fed. Defaults to “ros”.

Raises:

RuntimeError – If the camera prim is not set. Need to call initialize() method first.

set_world_poses_from_view(eyes: torch.Tensor, targets: torch.Tensor, env_ids: Sequence[int] | None = None)#

Set the poses of the camera from the eye position and look-at target position.

Parameters:
  • eyes – The positions of the camera’s eye. Shape is (N, 3).

  • targets – The target locations to look at. Shape is (N, 3).

  • env_ids – The sensor indices to manipulate. Defaults to None, in which case all sensor indices are used.

Raises:
  • RuntimeError – If the camera prim is not set. Need to call initialize() method first.

  • NotImplementedError – If the stage up-axis is not “Y” or “Z”.

class isaaclab.sensors.TiledCameraCfg[source]#

Bases: CameraCfg

Configuration for a tiled rendering-based camera sensor.

Classes:

OffsetCfg

The offset pose of the sensor's frame from the sensor's parent frame.

Attributes:

prim_path

Prim path (or expression) to the sensor.

update_period

Update period of the sensor buffers (in seconds).

debug_vis

Whether to visualize the sensor.

offset

The offset pose of the sensor's frame from the sensor's parent frame.

spawn

Spawn configuration for the asset.

depth_clipping_behavior

Clipping behavior for the camera for values that exceed the maximum value.

data_types

List of sensor names/types to enable for the camera.

width

Width of the image in pixels.

height

Height of the image in pixels.

update_latest_camera_pose

Whether to update the latest camera pose when fetching the camera's data.

semantic_filter

A string or a list specifying a semantic filter predicate.

colorize_semantic_segmentation

Whether to colorize the semantic segmentation images.

colorize_instance_id_segmentation

Whether to colorize the instance ID segmentation images.

colorize_instance_segmentation

Whether to colorize the instance segmentation images.

semantic_segmentation_mapping

Dictionary mapping semantics to specific colours

renderer_cfg

Renderer configuration for camera sensor.

class OffsetCfg#

Bases: object

The offset pose of the sensor’s frame from the sensor’s parent frame.

Attributes:

pos

Translation w.r.t. the parent frame.

rot

Quaternion rotation (x, y, z, w) w.r.t. the parent frame.

convention

The convention in which the frame offset is applied.

pos: tuple[float, float, float]#

Translation w.r.t. the parent frame. Defaults to (0.0, 0.0, 0.0).

rot: tuple[float, float, float, float]#

Quaternion rotation (x, y, z, w) w.r.t. the parent frame. Defaults to (0.0, 0.0, 0.0, 1.0).

convention: Literal['opengl', 'ros', 'world']#

The convention in which the frame offset is applied. Defaults to “ros”.

  • "opengl" - forward axis: -Z - up axis: +Y - Offset is applied in the OpenGL (Usd.Camera) convention.

  • "ros" - forward axis: +Z - up axis: -Y - Offset is applied in the ROS convention.

  • "world" - forward axis: +X - up axis: +Z - Offset is applied in the World Frame convention.

prim_path: str#

Prim path (or expression) to the sensor.

Note

The expression can contain the environment namespace regex {ENV_REGEX_NS} which will be replaced with the environment namespace.

Example: {ENV_REGEX_NS}/Robot/sensor will be replaced with /World/envs/env_.*/Robot/sensor.

update_period: float#

Update period of the sensor buffers (in seconds). Defaults to 0.0 (update every step).

debug_vis: bool#

Whether to visualize the sensor. Defaults to False.

offset: OffsetCfg#

The offset pose of the sensor’s frame from the sensor’s parent frame. Defaults to identity.

Note

The parent frame is the frame the sensor attaches to. For example, the parent frame of a camera at path /World/envs/env_0/Robot/Camera is /World/envs/env_0/Robot.

spawn: PinholeCameraCfg | FisheyeCameraCfg | None#

Spawn configuration for the asset.

If None, then the prim is not spawned by the asset. Instead, it is assumed that the asset is already present in the scene.

depth_clipping_behavior: Literal['max', 'zero', 'none']#

Clipping behavior for the camera for values that exceed the maximum value. Defaults to “none”.

  • "max": Values are clipped to the maximum value.

  • "zero": Values are clipped to zero.

  • "none": No clipping is applied. Values will be returned as inf.

data_types: list[str]#

List of sensor names/types to enable for the camera. Defaults to [“rgb”].

Please refer to the Camera class for a list of available data types.

width: int#

Width of the image in pixels.

height: int#

Height of the image in pixels.

update_latest_camera_pose: bool#

Whether to update the latest camera pose when fetching the camera’s data. Defaults to False.

If True, the latest camera pose is updated in the camera’s data which will slow down performance due to the use of XformPrimView. If False, the pose of the camera during initialization is returned.

semantic_filter: str | list[str]#

A string or a list specifying a semantic filter predicate. Defaults to "*:*".

If a string, it should be a disjunctive normal form of (semantic type, labels). For example:

  • "typeA : labelA & !labelB | labelC , typeB: labelA ; typeC: labelE": All prims with semantic type “typeA” and label “labelA” but not “labelB” or with label “labelC”. Also, all prims with semantic type “typeB” and label “labelA”, or with semantic type “typeC” and label “labelE”.

  • "typeA : * ; * : labelA": All prims with semantic type “typeA” or with label “labelA”

If a list of strings, each string should be a semantic type. The segmentation for prims with semantics of the specified types will be retrieved. For example, if the list is [“class”], only the segmentation for prims with semantics of type “class” will be retrieved.

See also

For more information on the semantics filter, see the documentation on Replicator Semantics Schema Editor.

colorize_semantic_segmentation: bool#

Whether to colorize the semantic segmentation images. Defaults to True.

If True, semantic segmentation is converted to an image where semantic IDs are mapped to colors and returned as a uint8 4-channel array. If False, the output is returned as an int32 array.

colorize_instance_id_segmentation: bool#

Whether to colorize the instance ID segmentation images. Defaults to True.

If True, instance ID segmentation is converted to an image where instance IDs are mapped to colors and returned as a uint8 4-channel array. If False, the output is returned as an int32 array.

colorize_instance_segmentation: bool#

Whether to colorize the instance segmentation images. Defaults to True.

If True, instance segmentation is converted to an image where instance IDs are mapped to colors and returned as a uint8 4-channel array. If False, the output is returned as an int32 array.

semantic_segmentation_mapping: dict#

Dictionary mapping semantics to specific colours.

For example:

{
    "class:cube_1": (255, 36, 66, 255),
    "class:cube_2": (255, 184, 48, 255),
    "class:cube_3": (55, 255, 139, 255),
    "class:table": (255, 237, 218, 255),
    "class:ground": (100, 100, 100, 255),
    "class:robot": (61, 178, 255, 255),
}
renderer_cfg: RendererCfg#

Renderer configuration for camera sensor.

Contact Sensor#

class isaaclab.sensors.ContactSensor[source]#

Bases: FactoryBase, BaseContactSensor

Factory for creating contact sensor instances.

Attributes:

data

Data from the sensor.

body_names

Ordered names of shapes or bodies with contact sensors attached.

contact_view

View for the contact forces captured.

device

Memory device for computation.

has_debug_vis_implementation

Whether the sensor has a debug visualization implemented.

is_initialized

Whether the sensor is initialized.

num_bodies

Deprecated property.

num_instances

Number of instances of the sensor.

num_sensors

Number of sensors per environment.

cfg

The configuration parameters.

Methods:

__new__(cls, *args, **kwargs)

Create a new instance of a contact sensor based on the backend.

__init__(cfg)

Initializes the contact sensor object.

compute_first_air(dt[, abs_tol])

Checks which bodies have broken contact within the last dt seconds.

compute_first_contact(dt[, abs_tol])

Checks which bodies have established contact within the last dt seconds.

find_bodies(name_keys[, preserve_order])

Deprecated method.

find_sensors(name_keys[, preserve_order])

Find sensors in the contact sensor based on the name keys.

get_registry_keys()

Returns a list of registered backend names.

register(name, sub_class)

Register a new implementation class.

reset([env_ids, env_mask])

Resets the sensor.

set_debug_vis(debug_vis)

Sets whether to visualize the sensor data.

abstract property data: BaseContactSensorData#

Data from the sensor.

static __new__(cls, *args, **kwargs) -> BaseContactSensor | PhysXContactSensor | NewtonContactSensor[source]#

Create a new instance of a contact sensor based on the backend.

__init__(cfg: ContactSensorCfg)#

Initializes the contact sensor object.

Parameters:

cfg – The configuration parameters.

abstract property body_names: list[str] | None#

Ordered names of shapes or bodies with contact sensors attached.

abstractmethod compute_first_air(dt: float, abs_tol: float = 1e-08) -> warp.array#

Checks which bodies have broken contact within the last dt seconds.

This function checks if the bodies have broken contact within the last dt seconds by comparing the current air time with the given time period. If the air time is less than the given time period, then the bodies are considered to not be in contact.

Note

It assumes that dt is a factor of the sensor update time-step. In other words, \(dt / dt_{\text{sensor}} = n\), where \(n\) is a natural number. This is always true if the sensor is updated by the physics or the environment stepping time-step and the sensor is read by the environment stepping time-step.

Parameters:
  • dt – The time period since the contact was broken.

  • abs_tol – The absolute tolerance for the comparison.

Returns:

A boolean tensor indicating the bodies that have broken contact within the last dt seconds. Shape is (N, B), where N is the number of sensors and B is the number of bodies in each sensor.

Raises:

RuntimeError – If the sensor is not configured to track contact time.

abstractmethod compute_first_contact(dt: float, abs_tol: float = 1e-08) -> warp.array#

Checks which bodies have established contact within the last dt seconds.

This function checks if the bodies have established contact within the last dt seconds by comparing the current contact time with the given time period. If the contact time is less than the given time period, then the bodies are considered to be in contact.

Note

The function assumes that dt is a factor of the sensor update time-step. In other words, \(dt / dt_{\text{sensor}} = n\), where \(n\) is a natural number. This is always true if the sensor is updated by the physics or the environment stepping time-step and the sensor is read by the environment stepping time-step.

Parameters:
  • dt – The time period since the contact was established.

  • abs_tol – The absolute tolerance for the comparison.

Returns:

A boolean tensor indicating the bodies that have established contact within the last dt seconds. Shape is (N, B), where N is the number of sensors and B is the number of bodies in each sensor.

Raises:

RuntimeError – If the sensor is not configured to track contact time.
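For example, locomotion rewards often use these queries to detect touch-downs and lift-offs within the last environment step (a sketch, assuming contact_sensor is an initialized ContactSensor configured with track_air_time=True, and step_dt is the environment step):

# Boolean array of shape (N, B): True where a body established contact in the last step.
first_contact = contact_sensor.compute_first_contact(dt=step_dt)
# Symmetric query for bodies that broke contact in the last step.
first_air = contact_sensor.compute_first_air(dt=step_dt)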

abstract property contact_view: None#

View for the contact forces captured.

Note

None if there is no view associated with the sensor.

property device: str#

Memory device for computation.

find_bodies(name_keys: str | Sequence[str], preserve_order: bool = False) -> tuple[list[int], list[str]]#

Deprecated method. Please use find_sensors instead.

find_sensors(name_keys: str | Sequence[str], preserve_order: bool = False) -> tuple[list[int], list[str]]#

Find sensors in the contact sensor based on the name keys.

Parameters:
  • name_keys – A regular expression or a list of regular expressions to match the body names.

  • preserve_order – Whether to preserve the order of the name keys in the output. Defaults to False.

Returns:

A tuple of lists containing the sensor indices and names.

classmethod get_registry_keys() -> list[str]#

Returns a list of registered backend names.

property has_debug_vis_implementation: bool#

Whether the sensor has a debug visualization implemented.

property is_initialized: bool#

Whether the sensor is initialized.

Returns True if the sensor is initialized, False otherwise.

property num_bodies: int#

Deprecated property. Please use num_sensors instead.

abstract property num_instances: int | None#

Number of instances of the sensor.

abstract property num_sensors: int#

Number of sensors per environment.

classmethod register(name: str, sub_class) -> None#

Register a new implementation class.

abstractmethod reset(env_ids: Sequence[int] | None = None, env_mask: wp.array(dtype=wp.bool) | None = None)#

Resets the sensor.

Parameters:
  • env_ids – The indices of the environments to reset. Defaults to None: all the environments are reset.

  • env_mask – The masks of the environments to reset. Defaults to None: all the environments are reset.

set_debug_vis(debug_vis: bool) -> bool#

Sets whether to visualize the sensor data.

Parameters:

debug_vis – Whether to visualize the sensor data.

Returns:

Whether the debug visualization was successfully set. False if the sensor does not support debug visualization.

cfg: ContactSensorCfg#

The configuration parameters.

class isaaclab.sensors.ContactSensorData[source]#

Factory for creating contact sensor data instances.

Methods:

__new__(cls, *args, **kwargs)

Create a new instance of a contact sensor data based on the backend.

get_registry_keys()

Returns a list of registered backend names.

register(name, sub_class)

Register a new implementation class.

Attributes:

contact_pos_w

Average position of contact points.

current_air_time

Time spent in air since last detach.

current_contact_time

Time spent in contact since the last contact was established.

force_matrix_w

Normal contact forces filtered between sensor and filtered bodies.

force_matrix_w_history

History of filtered contact forces.

friction_forces_w

Sum of friction forces.

last_air_time

Time spent in air before last contact.

last_contact_time

Time spent in contact before last detach.

net_forces_w

The net normal contact forces in world frame.

net_forces_w_history

History of net normal contact forces.

pos_w

Position of the sensor origin in world frame.

pose_w

Pose of the sensor origin in world frame.

quat_w

Orientation of the sensor origin in world frame.

static __new__(cls, *args, **kwargs) -> BaseContactSensorData | PhysXContactSensorData | NewtonContactSensorData[source]#

Create a new instance of a contact sensor data based on the backend.

abstract property contact_pos_w: wp.array | None#

Average position of contact points.

Shape is (num_instances, num_sensors, num_filter_shapes), dtype = wp.vec3f. In torch this resolves to (num_instances, num_sensors, num_filter_shapes, 3).

None if ContactSensorCfg.track_contact_points is False.

abstract property current_air_time: wp.array | None#

Time spent in air since last detach.

Shape is (num_instances, num_sensors), dtype = wp.float32.

None if ContactSensorCfg.track_air_time is False.

abstract property current_contact_time: wp.array | None#

Time spent in contact since the last contact was established.

Shape is (num_instances, num_sensors), dtype = wp.float32.

None if ContactSensorCfg.track_air_time is False.

abstract property force_matrix_w: wp.array | None#

Normal contact forces filtered between sensor and filtered bodies.

Shape is (num_instances, num_sensors, num_filter_shapes), dtype = wp.vec3f. In torch this resolves to (num_instances, num_sensors, num_filter_shapes, 3).

None if ContactSensorCfg.filter_prim_paths_expr is empty.

abstract property force_matrix_w_history: wp.array | None#

History of filtered contact forces.

Shape is (num_instances, history_length, num_sensors, num_filter_shapes), dtype = wp.vec3f. In torch this resolves to (num_instances, history_length, num_sensors, num_filter_shapes, 3).

None if ContactSensorCfg.filter_prim_paths_expr is empty.

abstract property friction_forces_w: wp.array | None#

Sum of friction forces.

Shape is (num_instances, num_sensors, num_filter_shapes), dtype = wp.vec3f. In torch this resolves to (num_instances, num_sensors, num_filter_shapes, 3).

None if ContactSensorCfg.track_friction_forces is False.

classmethod get_registry_keys() -> list[str]#

Returns a list of registered backend names.

abstract property last_air_time: wp.array | None#

Time spent in air before last contact.

Shape is (num_instances, num_sensors), dtype = wp.float32.

None if ContactSensorCfg.track_air_time is False.

abstract property last_contact_time: wp.array | None#

Time spent in contact before last detach.

Shape is (num_instances, num_sensors), dtype = wp.float32.

None if ContactSensorCfg.track_air_time is False.

abstract property net_forces_w: wp.array | None#

The net normal contact forces in world frame.

Shape is (num_instances, num_sensors), dtype = wp.vec3f. In torch this resolves to (num_instances, num_sensors, 3).

abstract property net_forces_w_history: wp.array | None#

History of net normal contact forces.

Shape is (num_instances, history_length, num_sensors), dtype = wp.vec3f. In torch this resolves to (num_instances, history_length, num_sensors, 3).

abstract property pos_w: wp.array | None#

Position of the sensor origin in world frame.

Shape is (num_instances, num_sensors), dtype = wp.vec3f. In torch this resolves to (num_instances, num_sensors, 3).

None if ContactSensorCfg.track_pose is False.

abstract property pose_w: wp.array | None#

Pose of the sensor origin in world frame.

None if ContactSensorCfg.track_pose is False.

abstract property quat_w: wp.array | None#

Orientation of the sensor origin in world frame.

Shape is (num_instances, num_sensors), dtype = wp.quatf. In torch this resolves to (num_instances, num_sensors, 4). The orientation is provided in (x, y, z, w) format.

None if ContactSensorCfg.track_pose is False.

classmethod register(name: str, sub_class) -> None#

Register a new implementation class.
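Since these buffers are warp arrays, a common pattern is to view them as torch tensors for downstream processing (a sketch, assuming an initialized contact_sensor; warp.to_torch shares memory rather than copying):

import warp as wp

# (num_instances, num_sensors, 3) once the vec3f components are unpacked
net_forces = wp.to_torch(contact_sensor.data.net_forces_w)
in_contact = net_forces.norm(dim=-1) > 1.0  # hypothetical 1 N force threshold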

class isaaclab.sensors.ContactSensorCfg[source]#

Bases: SensorBaseCfg

Configuration for the contact sensor.

Sensing bodies are selected via SensorBaseCfg.prim_path. Filter bodies for per-partner force reporting are selected via filter_prim_paths_expr.

Only body-level sensing and filtering are supported. For shape-level granularity, see NewtonContactSensorCfg in isaaclab_newton.
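A minimal configuration sketch (the body-name pattern is a hypothetical placeholder):

from isaaclab.sensors import ContactSensorCfg

contact_cfg = ContactSensorCfg(
    prim_path="{ENV_REGEX_NS}/Robot/.*_FOOT",  # sensing bodies: all feet of the robot
    track_air_time=True,                       # required by compute_first_contact / compute_first_air
    history_length=3,                          # keep the last 3 frames of net forces
)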

Attributes:

track_pose

Whether to track the pose of the sensor's origin.

track_contact_points

Whether to track the contact point locations.

track_friction_forces

Whether to track the friction forces at the contact points.

max_contact_data_count_per_prim

The maximum number of contacts across all batches of the sensor to keep track of.

prim_path

Prim path (or expression) to the sensor.

update_period

Update period of the sensor buffers (in seconds).

debug_vis

Whether to visualize the sensor.

track_air_time

Whether to track the air/contact time of the bodies (time between contacts).

force_threshold

The threshold on the norm of the contact force that determines whether two bodies are in collision or not.

history_length

Number of past frames to store in the sensor buffers.

filter_prim_paths_expr

List of body prim path expressions to filter contacts against.

visualizer_cfg

The configuration object for the visualization markers.

track_pose: bool#

Whether to track the pose of the sensor’s origin. Defaults to False.

track_contact_points: bool#

Whether to track the contact point locations. Defaults to False.

track_friction_forces: bool#

Whether to track the friction forces at the contact points. Defaults to False.

max_contact_data_count_per_prim: int | None#

The maximum number of contacts across all batches of the sensor to keep track of. Defaults to 4, where supported.

This parameter sets the maximum number of contacts tracked by the simulation across all bodies and environments. The total number of contacts allowed is max_contact_data_count_per_prim * num_envs * num_sensor_bodies.

Note

If the environment is very contact-rich, it is suggested to increase this parameter to avoid out-of-bounds memory errors and loss of contact data, which leads to inaccurate measurements.

prim_path: str#

Prim path (or expression) to the sensor.

Note

The expression can contain the environment namespace regex {ENV_REGEX_NS} which will be replaced with the environment namespace.

Example: {ENV_REGEX_NS}/Robot/sensor will be replaced with /World/envs/env_.*/Robot/sensor.

update_period: float#

Update period of the sensor buffers (in seconds). Defaults to 0.0 (update every step).

debug_vis: bool#

Whether to visualize the sensor. Defaults to False.

track_air_time: bool#

Whether to track the air/contact time of the bodies (time between contacts). Defaults to False.

force_threshold: float | None#

The threshold on the norm of the contact force that determines whether two bodies are in collision or not. Defaults to None, in which case the sensor backend chooses an appropriate value.

This value is only used for tracking the mode duration (the time in contact or in air), if track_air_time is True.

history_length: int#

Number of past frames to store in the sensor buffers. Defaults to 0, which means that only the current data is stored (no history).

filter_prim_paths_expr: list[str]#

List of body prim path expressions to filter contacts against. Defaults to empty, meaning contacts with all bodies are aggregated into the net force.

If provided, a per-partner force matrix (ContactSensorData.force_matrix_w) is reported in addition to the net force. Each expression is matched against body prim paths in the scene.

For shape-level filtering, see NewtonContactSensorCfg in isaaclab_newton.

Note

Expressions can contain the environment namespace regex {ENV_REGEX_NS}, which is replaced with the environment namespace.

Example: {ENV_REGEX_NS}/Object becomes /World/envs/env_.*/Object.

Attention

Filtered contact reporting only works when SensorBaseCfg.prim_path matches a single primitive per environment. For many-to-many filtering, see NewtonContactSensorCfg in isaaclab_newton.

visualizer_cfg: VisualizationMarkersCfg#

The configuration object for the visualization markers. Defaults to CONTACT_SENSOR_MARKER_CFG.

Note

This attribute is only used when debug visualization is enabled.

Frame Transformer#

class isaaclab.sensors.FrameTransformer[source]#

Bases: FactoryBase, BaseFrameTransformer

Factory for creating frame transformer instances.

Attributes:

data

Data from the sensor.

body_names

Returns the names of the target bodies being tracked.

device

Memory device for computation.

has_debug_vis_implementation

Whether the sensor has a debug visualization implemented.

is_initialized

Whether the sensor is initialized.

num_bodies

Returns the number of target bodies being tracked.

num_instances

Number of instances of the sensor.

cfg

The configuration parameters.

Methods:

__new__(cls, *args, **kwargs)

Create a new instance of a frame transformer based on the backend.

__init__(cfg)

Initializes the frame transformer object.

find_bodies(name_keys[, preserve_order])

Find bodies in the articulation based on the name keys.

get_registry_keys()

Returns a list of registered backend names.

register(name, sub_class)

Register a new implementation class.

reset([env_ids, env_mask])

Resets the sensor internals.

set_debug_vis(debug_vis)

Sets whether to visualize the sensor data.

abstract property data: BaseFrameTransformerData#

Data from the sensor.

This property is only updated when the user tries to access the data. This is done to avoid unnecessary computation when the sensor data is not used.

For updating the sensor when this property is accessed, you can use the following code snippet in your sensor implementation:

# update sensors if needed
self._update_outdated_buffers()
# return the data (where `_data` is the data for the sensor)
return self._data
static __new__(cls, *args, **kwargs) -> BaseFrameTransformer | PhysXFrameTransformer[source]#

Create a new instance of a frame transformer based on the backend.

__init__(cfg: FrameTransformerCfg)#

Initializes the frame transformer object.

Parameters:

cfg – The configuration parameters.

abstract property body_names: list[str]#

Returns the names of the target bodies being tracked.

Deprecated: Use data.target_frame_names instead. This property will be removed in a future release.

property device: str#

Memory device for computation.

find_bodies(name_keys: str | Sequence[str], preserve_order: bool = False) tuple[list[int], list[str]]#

Find bodies in the articulation based on the name keys.

Parameters:
  • name_keys – A regular expression or a list of regular expressions to match the body names.

  • preserve_order – Whether to preserve the order of the name keys in the output. Defaults to False.

Returns:

A tuple of lists containing the body indices and names.
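For example, assuming an initialized frame transformer (here named frame_transformer) whose targets include foot links named *_FOOT (illustrative names):

# Match bodies by a regular expression.
indices, names = frame_transformer.find_bodies(".*_FOOT")
# With preserve_order=True, the output follows the order of the name keys.
indices, names = frame_transformer.find_bodies(["LF_FOOT", "RF_FOOT"], preserve_order=True)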

classmethod get_registry_keys() list[str]#

Returns a list of registered backend names.

property has_debug_vis_implementation: bool#

Whether the sensor has a debug visualization implemented.

property is_initialized: bool#

Whether the sensor is initialized.

Returns True if the sensor is initialized, False otherwise.

abstract property num_bodies: int#

Returns the number of target bodies being tracked.

Deprecated: Use len(data.target_frame_names) instead. This property will be removed in a future release.

property num_instances: int#

Number of instances of the sensor.

This is equal to the number of sensors per environment multiplied by the number of environments.

classmethod register(name: str, sub_class) None#

Register a new implementation class.

reset(env_ids: Sequence[int] | None = None, env_mask: wp.array | None = None) None#

Resets the sensor internals.

Parameters:
  • env_ids – The environment indices to reset. Defaults to None, in which case all environments are reset.

  • env_mask – A boolean warp array indicating which environments to reset. If provided, takes priority over env_ids. Defaults to None.

set_debug_vis(debug_vis: bool) bool#

Sets whether to visualize the sensor data.

Parameters:

debug_vis – Whether to visualize the sensor data.

Returns:

Whether the debug visualization was successfully set. False if the sensor does not support debug visualization.

cfg: FrameTransformerCfg#

The configuration parameters.

class isaaclab.sensors.FrameTransformerData[source]#

Factory for creating frame transformer data instances.

Methods:

__new__(cls, *args, **kwargs)

Create a new instance of a frame transformer data based on the backend.

get_registry_keys()

Returns a list of registered backend names.

register(name, sub_class)

Register a new implementation class.

Attributes:

source_pos_w

Position of the source frame after offset in world frame.

source_pose_w

Pose of the source frame after offset in world frame.

source_quat_w

Orientation of the source frame after offset in world frame.

target_frame_names

Target frame names (order matches data ordering).

target_pos_source

Position of the target frame(s) relative to source frame.

target_pos_w

Position of the target frame(s) after offset in world frame.

target_pose_source

Pose of the target frame(s) relative to source frame.

target_pose_w

Pose of the target frame(s) after offset in world frame.

target_quat_source

Orientation of the target frame(s) relative to source frame.

target_quat_w

Orientation of the target frame(s) after offset in world frame.

static __new__(cls, *args, **kwargs) BaseFrameTransformerData | PhysXFrameTransformerData[source]#

Create a new instance of a frame transformer data based on the backend.

classmethod get_registry_keys() list[str]#

Returns a list of registered backend names.

classmethod register(name: str, sub_class) None#

Register a new implementation class.

abstract property source_pos_w: warp.array#

Position of the source frame after offset in world frame.

Shape is (num_instances,), dtype = wp.vec3f. In torch this resolves to (num_instances, 3).

abstract property source_pose_w: wp.array | None#

Pose of the source frame after offset in world frame.

Shape is (num_instances,), dtype = wp.transformf. In torch this resolves to (num_instances, 7). The pose is provided in (x, y, z, qx, qy, qz, qw) format.

abstract property source_quat_w: warp.array#

Orientation of the source frame after offset in world frame.

Shape is (num_instances,), dtype = wp.quatf. In torch this resolves to (num_instances, 4). The orientation is provided in (x, y, z, w) format.

abstract property target_frame_names: list[str]#

Target frame names (order matches data ordering).

Resolved from FrameTransformerCfg.FrameCfg.name.

abstract property target_pos_source: warp.array#

Position of the target frame(s) relative to source frame.

Shape is (num_instances, num_target_frames), dtype = wp.vec3f. In torch this resolves to (num_instances, num_target_frames, 3).

abstract property target_pos_w: warp.array#

Position of the target frame(s) after offset in world frame.

Shape is (num_instances, num_target_frames), dtype = wp.vec3f. In torch this resolves to (num_instances, num_target_frames, 3).

abstract property target_pose_source: wp.array | None#

Pose of the target frame(s) relative to source frame.

Shape is (num_instances, num_target_frames), dtype = wp.transformf. In torch this resolves to (num_instances, num_target_frames, 7). The pose is provided in (x, y, z, qx, qy, qz, qw) format.

abstract property target_pose_w: wp.array | None#

Pose of the target frame(s) after offset in world frame.

Shape is (num_instances, num_target_frames), dtype = wp.transformf. In torch this resolves to (num_instances, num_target_frames, 7). The pose is provided in (x, y, z, qx, qy, qz, qw) format.

abstract property target_quat_source: warp.array#

Orientation of the target frame(s) relative to source frame.

Shape is (num_instances, num_target_frames), dtype = wp.quatf. In torch this resolves to (num_instances, num_target_frames, 4). The orientation is provided in (x, y, z, w) format.

abstract property target_quat_w: warp.array#

Orientation of the target frame(s) after offset in world frame.

Shape is (num_instances, num_target_frames), dtype = wp.quatf. In torch this resolves to (num_instances, num_target_frames, 4). The orientation is provided in (x, y, z, w) format.
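A minimal access sketch, assuming an initialized frame transformer (here named frame_transformer) and a torch-based workflow in which the properties resolve to the tensor shapes documented above:

data = frame_transformer.data
print(data.target_frame_names)    # order matches the data ordering
pos_b = data.target_pos_source    # (num_instances, num_target_frames, 3)
quat_b = data.target_quat_source  # (num_instances, num_target_frames, 4), in (x, y, z, w)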

class isaaclab.sensors.FrameTransformerCfg[source]#

Bases: SensorBaseCfg

Configuration for the frame transformer sensor.

Classes:

FrameCfg

Information specific to a coordinate frame.

Attributes:

update_period

Update period of the sensor buffers (in seconds).

debug_vis

Whether to visualize the sensor.

prim_path

The prim path of the body to transform from (source frame).

source_frame_offset

The pose offset from the source prim frame.

target_frames

A list of the target frames.

visualizer_cfg

The configuration object for the visualization markers.

class FrameCfg[source]#

Bases: object

Information specific to a coordinate frame.

Attributes:

prim_path

The prim path corresponding to a rigid body.

name

User-defined name for the new coordinate frame.

offset

The pose offset from the parent prim frame.

prim_path: str#

The prim path corresponding to a rigid body.

This can be a regex pattern to match multiple prims. For example, “/Robot/.*” will match all prims under “/Robot”.

This means that if the source FrameTransformerCfg.prim_path is “/Robot/base”, and the target FrameTransformerCfg.FrameCfg.prim_path is “/Robot/.*”, then the frame transformer will track the poses of all the prims under “/Robot”, including “/Robot/base” (even though this will result in an identity pose w.r.t. the source frame).

name: str | None#

User-defined name for the new coordinate frame. Defaults to None.

If None, then the name is extracted from the leaf of the prim path.

offset: OffsetCfg#

The pose offset from the parent prim frame.

update_period: float#

Update period of the sensor buffers (in seconds). Defaults to 0.0 (update every step).

debug_vis: bool#

Whether to visualize the sensor. Defaults to False.

prim_path: str#

The prim path of the body to transform from (source frame).

source_frame_offset: OffsetCfg#

The pose offset from the source prim frame.

target_frames: list[FrameCfg]#

A list of the target frames.

This allows a single FrameTransformer to handle multiple target prims. For example, in a quadruped, we can use a single FrameTransformer to track each foot’s position and orientation in the body frame using four frame offsets.

visualizer_cfg: VisualizationMarkersCfg#

The configuration object for the visualization markers. Defaults to FRAME_MARKER_CFG.

Note

This attribute is only used when debug visualization is enabled.
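For instance, the quadruped use case above can be configured as follows. This is a hedged sketch; the prim paths and offsets are illustrative:

from isaaclab.sensors import FrameTransformerCfg, OffsetCfg

frame_tf_cfg = FrameTransformerCfg(
    prim_path="{ENV_REGEX_NS}/Robot/base",  # source frame
    source_frame_offset=OffsetCfg(),        # identity offset
    target_frames=[
        FrameTransformerCfg.FrameCfg(
            prim_path="{ENV_REGEX_NS}/Robot/.*_FOOT",
            # name defaults to the leaf of the matched prim path
            offset=OffsetCfg(pos=(0.0, 0.0, -0.02)),  # e.g. shift to the sole
        ),
    ],
    debug_vis=False,
)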

class isaaclab.sensors.OffsetCfg[source]#

The offset pose of one frame relative to another frame.

Attributes:

pos

Translation w.r.t. the parent frame.

rot

Quaternion rotation (x, y, z, w) w.r.t. the parent frame.

pos: tuple[float, float, float]#

Translation w.r.t. the parent frame. Defaults to (0.0, 0.0, 0.0).

rot: tuple[float, float, float, float]#

Quaternion rotation (x, y, z, w) w.r.t. the parent frame. Defaults to (0.0, 0.0, 0.0, 1.0).

Ray-Cast Sensor#

class isaaclab.sensors.RayCaster[source]#

Bases: SensorBase

A ray-casting sensor.

The ray-caster uses a set of rays to detect collisions with meshes in the scene. The rays are defined in the sensor’s local coordinate frame. The sensor can be configured to ray-cast against a set of meshes with a given ray pattern.

The meshes are parsed from the list of primitive paths provided in the configuration. These are then converted to warp meshes and stored in the meshes dictionary. The ray-caster then ray-casts against these warp meshes using the ray pattern provided in the configuration.

Note

Currently, only static meshes are supported. Extending the warp mesh to support dynamic meshes is a work in progress.

Attributes:

cfg

The configuration parameters.

meshes

A dictionary to store warp meshes for raycasting, shared across all instances.

num_instances

Number of instances of the sensor.

data

Data from the sensor.

device

Memory device for computation.

has_debug_vis_implementation

Whether the sensor has a debug visualization implemented.

is_initialized

Whether the sensor is initialized.

Methods:

__init__(cfg)

Initializes the ray-caster object.

reset([env_ids, env_mask])

Resets the sensor internals.

set_debug_vis(debug_vis)

Sets whether to visualize the sensor data.

cfg: RayCasterCfg#

The configuration parameters.

meshes: ClassVar[dict[str, wp.Mesh]] = {}#

A dictionary to store warp meshes for raycasting, shared across all instances.

The keys correspond to the prim path for the meshes, and values are the corresponding warp Mesh objects.

__init__(cfg: RayCasterCfg)[source]#

Initializes the ray-caster object.

Parameters:

cfg – The configuration parameters.

property num_instances: int#

Number of instances of the sensor.

This is equal to the number of sensors per environment multiplied by the number of environments.

property data: RayCasterData#

Data from the sensor.

This property is only updated when the user tries to access the data. This is done to avoid unnecessary computation when the sensor data is not used.

For updating the sensor when this property is accessed, you can use the following code snippet in your sensor implementation:

# update sensors if needed
self._update_outdated_buffers()
# return the data (where `_data` is the data for the sensor)
return self._data
reset(env_ids: Sequence[int] | None = None, env_mask: wp.array | None = None)[source]#

Resets the sensor internals.

Parameters:
  • env_ids – The environment indices to reset. Defaults to None, in which case all environments are reset.

  • env_mask – A boolean warp array indicating which environments to reset. If provided, takes priority over env_ids. Defaults to None.

property device: str#

Memory device for computation.

property has_debug_vis_implementation: bool#

Whether the sensor has a debug visualization implemented.

property is_initialized: bool#

Whether the sensor is initialized.

Returns True if the sensor is initialized, False otherwise.

set_debug_vis(debug_vis: bool) bool#

Sets whether to visualize the sensor data.

Parameters:

debug_vis – Whether to visualize the sensor data.

Returns:

Whether the debug visualization was successfully set. False if the sensor does not support debug visualization.

class isaaclab.sensors.RayCasterData[source]#

Data container for the ray-cast sensor.

Attributes:

pos_w

Position of the sensor origin in world frame.

quat_w

Orientation of the sensor origin in quaternion (x, y, z, w) in world frame.

ray_hits_w

The ray hit positions in the world frame.

pos_w: torch.Tensor = None#

Position of the sensor origin in world frame.

Shape is (N, 3), where N is the number of sensors.

quat_w: torch.Tensor = None#

Orientation of the sensor origin in quaternion (x, y, z, w) in world frame.

Shape is (N, 4), where N is the number of sensors.

ray_hits_w: torch.Tensor = None#

The ray hit positions in the world frame.

Shape is (N, B, 3), where N is the number of sensors, B is the number of rays in the scan pattern per sensor.

class isaaclab.sensors.RayCasterCfg[source]#

Bases: SensorBaseCfg

Configuration for the ray-cast sensor.

Classes:

OffsetCfg

The offset pose of the sensor's frame from the sensor's parent frame.

Attributes:

mesh_prim_paths

The list of mesh primitive paths to ray cast against.

offset

The offset pose of the sensor's frame from the sensor's parent frame.

attach_yaw_only

Whether the rays' starting positions and directions only track the yaw orientation.

prim_path

Prim path (or expression) to the sensor.

update_period

Update period of the sensor buffers (in seconds).

debug_vis

Whether to visualize the sensor.

ray_alignment

Specify in what frame the rays are projected onto the ground.

pattern_cfg

The pattern that defines the local ray starting positions and directions.

max_distance

Maximum distance (in meters) from the sensor to ray cast to.

drift_range

The range of drift (in meters) to add to the ray starting positions (xyz) in world frame.

ray_cast_drift_range

The range of drift (in meters) to add to the projected ray points in local projection frame.

visualizer_cfg

The configuration object for the visualization markers.

class OffsetCfg[source]#

Bases: object

The offset pose of the sensor’s frame from the sensor’s parent frame.

Attributes:

pos

Translation w.r.t. the parent frame.

rot

Quaternion rotation (x, y, z, w) w.r.t. the parent frame.

pos: tuple[float, float, float]#

Translation w.r.t. the parent frame. Defaults to (0.0, 0.0, 0.0).

rot: tuple[float, float, float, float]#

Quaternion rotation (x, y, z, w) w.r.t. the parent frame. Defaults to (0.0, 0.0, 0.0, 1.0).

mesh_prim_paths: list[str]#

The list of mesh primitive paths to ray cast against.

Note

Currently, only a single static mesh is supported. We are working on supporting multiple static meshes and dynamic meshes.

offset: OffsetCfg#

The offset pose of the sensor’s frame from the sensor’s parent frame. Defaults to identity.

attach_yaw_only: bool | None#

Whether the rays’ starting positions and directions only track the yaw orientation. Defaults to None, in which case no deprecation warning is raised.

This is useful for ray-casting height maps, where only yaw rotation is needed.

Deprecated since version 2.1.1: This attribute is deprecated and will be removed in the future. Please use ray_alignment instead.

To get the same behavior as setting this parameter to True or False, set ray_alignment to "yaw" or "base" respectively.

prim_path: str#

Prim path (or expression) to the sensor.

Note

The expression can contain the environment namespace regex {ENV_REGEX_NS} which will be replaced with the environment namespace.

Example: {ENV_REGEX_NS}/Robot/sensor will be replaced with /World/envs/env_.*/Robot/sensor.

update_period: float#

Update period of the sensor buffers (in seconds). Defaults to 0.0 (update every step).

debug_vis: bool#

Whether to visualize the sensor. Defaults to False.

ray_alignment: Literal['base', 'yaw', 'world']#

Specify in what frame the rays are projected onto the ground. Default is “base”.

The options are:

  • base if the rays’ starting positions and directions track the full root position and orientation.

  • yaw if the rays’ starting positions and directions track root position and only yaw component of the orientation. This is useful for ray-casting height maps.

  • world if rays’ starting positions and directions are always fixed. This is useful in combination with a mapping package on the robot and querying ray-casts in a global frame.

pattern_cfg: PatternBaseCfg#

The pattern that defines the local ray starting positions and directions.

max_distance: float#

Maximum distance (in meters) from the sensor to ray cast to. Defaults to 1e6.

drift_range: tuple[float, float]#

The range of drift (in meters) to add to the ray starting positions (xyz) in world frame. Defaults to (0.0, 0.0).

For floating base robots, this is useful for simulating drift in the robot’s pose estimation.

ray_cast_drift_range: dict[str, tuple[float, float]]#

The range of drift (in meters) to add to the projected ray points in local projection frame. Defaults to a dictionary with zero drift for each x, y and z axis.

For floating base robots, this is useful for simulating drift in the robot’s pose estimation.

visualizer_cfg: VisualizationMarkersCfg#

The configuration object for the visualization markers. Defaults to RAY_CASTER_MARKER_CFG.

Note

This attribute is only used when debug visualization is enabled.
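Putting the above together, a hedged sketch of a height scanner attached to a robot base (prim paths and sizes are illustrative):

from isaaclab.sensors import RayCasterCfg, patterns

height_scanner_cfg = RayCasterCfg(
    prim_path="{ENV_REGEX_NS}/Robot/base",
    offset=RayCasterCfg.OffsetCfg(pos=(0.0, 0.0, 20.0)),  # cast from above
    ray_alignment="yaw",                # track position and yaw only
    mesh_prim_paths=["/World/ground"],  # currently a single static mesh
    pattern_cfg=patterns.GridPatternCfg(resolution=0.1, size=(1.6, 1.0)),
    max_distance=100.0,
    debug_vis=False,
)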

Ray-Cast Camera#

class isaaclab.sensors.RayCasterCamera[source]#

Bases: RayCaster

A ray-casting camera sensor.

The ray-caster camera uses a set of rays to get the distances to meshes in the scene. The rays are defined in the sensor’s local coordinate frame. The sensor has the same interface as isaaclab.sensors.Camera, which implements the camera through USD camera prims, but it provides faster image generation. The sensor converts the meshes from the list of primitive paths provided in the configuration into Warp meshes and then ray-casts against these Warp meshes only.

Currently, only the following annotators are supported:

  • "distance_to_camera": An image containing the distance to camera optical center.

  • "distance_to_image_plane": An image containing distances of 3D points from camera plane along camera’s z-axis.

  • "normals": An image containing the local surface normal vectors at each pixel.

Note

Currently, only static meshes are supported. Extending the warp mesh to support dynamic meshes is a work in progress.

Attributes:

cfg

The configuration parameters.

UNSUPPORTED_TYPES

A set of sensor types that are not supported by the ray-caster camera.

data

Data from the sensor.

image_shape

A tuple containing (height, width) of the camera sensor.

frame

Frame number when the measurement took place.

device

Memory device for computation.

has_debug_vis_implementation

Whether the sensor has a debug visualization implemented.

is_initialized

Whether the sensor is initialized.

meshes

A dictionary to store warp meshes for raycasting, shared across all instances.

num_instances

Number of instances of the sensor.

Methods:

__init__(cfg)

Initializes the camera object.

set_intrinsic_matrices(matrices[, ...])

Set the intrinsic matrix of the camera.

reset([env_ids, env_mask])

Resets the sensor internals.

set_world_poses([positions, orientations, ...])

Set the pose of the camera w.r.t. the world frame.

set_world_poses_from_view(eyes, targets[, ...])

Set the poses of the camera from the eye position and look-at target position.

set_debug_vis(debug_vis)

Sets whether to visualize the sensor data.

cfg: RayCasterCameraCfg#

The configuration parameters.

UNSUPPORTED_TYPES: ClassVar[set[str]] = {'bounding_box_2d_loose', 'bounding_box_2d_loose_fast', 'bounding_box_2d_tight', 'bounding_box_2d_tight_fast', 'bounding_box_3d', 'bounding_box_3d_fast', 'instance_id_segmentation', 'instance_id_segmentation_fast', 'instance_segmentation', 'instance_segmentation_fast', 'motion_vectors', 'rgb', 'semantic_segmentation', 'skeleton_data'}#

A set of sensor types that are not supported by the ray-caster camera.

__init__(cfg: RayCasterCameraCfg)[source]#

Initializes the camera object.

Parameters:

cfg – The configuration parameters.

Raises:

ValueError – If the provided data types are not supported by the ray-caster camera.

property data: CameraData#

Data from the sensor.

This property is only updated when the user tries to access the data. This is done to avoid unnecessary computation when the sensor data is not used.

For updating the sensor when this property is accessed, you can use the following code snippet in your sensor implementation:

# update sensors if needed
self._update_outdated_buffers()
# return the data (where `_data` is the data for the sensor)
return self._data
property image_shape: tuple[int, int]#

A tuple containing (height, width) of the camera sensor.

property frame: torch.tensor#

Frame number when the measurement took place.

set_intrinsic_matrices(matrices: torch.Tensor, focal_length: float = 1.0, env_ids: Sequence[int] | None = None)[source]#

Set the intrinsic matrix of the camera.

Parameters:
  • matrices – The intrinsic matrices for the camera. Shape is (N, 3, 3).

  • focal_length – Focal length to use when computing aperture values (in cm). Defaults to 1.0.

  • env_ids – The sensor indices to manipulate. Defaults to None, which means all sensor indices.

reset(env_ids: Sequence[int] | None = None, env_mask: wp.array | None = None)[source]#

Resets the sensor internals.

Parameters:
  • env_ids – The environment indices to reset. Defaults to None, in which case all environments are reset.

  • env_mask – A boolean warp array indicating which environments to reset. If provided, takes priority over env_ids. Defaults to None.

set_world_poses(positions: torch.Tensor | None = None, orientations: torch.Tensor | None = None, env_ids: Sequence[int] | None = None, convention: Literal['opengl', 'ros', 'world'] = 'ros')[source]#

Set the pose of the camera w.r.t. the world frame using specified convention.

Since different fields use different conventions for camera orientations, the method allows users to set the camera poses in the specified convention. Possible conventions are:

  • "opengl" - forward axis: -Z - up axis +Y - Offset is applied in the OpenGL (Usd.Camera) convention

  • "ros" - forward axis: +Z - up axis -Y - Offset is applied in the ROS convention

  • "world" - forward axis: +X - up axis +Z - Offset is applied in the World Frame convention

See isaaclab.utils.maths.convert_camera_frame_orientation_convention() for more details on the conventions.

Parameters:
  • positions – The cartesian coordinates (in meters). Shape is (N, 3). Defaults to None, in which case the camera position is not changed.

  • orientations – The quaternion orientation in (x, y, z, w). Shape is (N, 4). Defaults to None, in which case the camera orientation is not changed.

  • env_ids – The sensor indices to manipulate. Defaults to None, which means all sensor indices.

  • convention – The convention in which the poses are fed. Defaults to “ros”.

Raises:

RuntimeError – If the camera prim is not set. The initialize() method must be called first.

set_world_poses_from_view(eyes: torch.Tensor, targets: torch.Tensor, env_ids: Sequence[int] | None = None)[source]#

Set the poses of the camera from the eye position and look-at target position.

Parameters:
  • eyes – The positions of the camera’s eye. Shape is (N, 3).

  • targets – The target locations to look at. Shape is (N, 3).

  • env_ids – The sensor indices to manipulate. Defaults to None, which means all sensor indices.

Raises:
  • RuntimeError – If the camera prim is not set. The initialize() method must be called first.

  • NotImplementedError – If the stage up-axis is not “Y” or “Z”.
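A hedged usage sketch for an initialized ray-caster camera (here named camera); the eye positions and intrinsic values are illustrative:

import torch

num_envs = camera.num_instances
# Look at the environment origin from a fixed vantage point.
eyes = torch.tensor([[2.0, 2.0, 2.0]]).repeat(num_envs, 1)
targets = torch.zeros(num_envs, 3)
camera.set_world_poses_from_view(eyes, targets)

# A simple pinhole intrinsic matrix [[fx, 0, cx], [0, fy, cy], [0, 0, 1]],
# replicated once per sensor instance.
K = torch.tensor([[120.0, 0.0, 60.0], [0.0, 120.0, 60.0], [0.0, 0.0, 1.0]])
camera.set_intrinsic_matrices(K.repeat(num_envs, 1, 1))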

property device: str#

Memory device for computation.

property has_debug_vis_implementation: bool#

Whether the sensor has a debug visualization implemented.

property is_initialized: bool#

Whether the sensor is initialized.

Returns True if the sensor is initialized, False otherwise.

meshes: ClassVar[dict[str, wp.Mesh]] = {}#

A dictionary to store warp meshes for raycasting, shared across all instances.

The keys correspond to the prim path for the meshes, and values are the corresponding warp Mesh objects.

property num_instances: int#

Number of instances of the sensor.

This is equal to the number of sensors per environment multiplied by the number of environments.

set_debug_vis(debug_vis: bool) bool#

Sets whether to visualize the sensor data.

Parameters:

debug_vis – Whether to visualize the sensor data.

Returns:

Whether the debug visualization was successfully set. False if the sensor does not support debug visualization.

class isaaclab.sensors.RayCasterCameraCfg[source]#

Bases: RayCasterCfg

Configuration for the ray-cast camera sensor.

Attributes:

offset

The offset pose of the sensor's frame from the sensor's parent frame.

prim_path

Prim path (or expression) to the sensor.

update_period

Update period of the sensor buffers (in seconds).

debug_vis

Whether to visualize the sensor.

mesh_prim_paths

The list of mesh primitive paths to ray cast against.

attach_yaw_only

Whether the rays' starting positions and directions only track the yaw orientation.

ray_alignment

Specify in what frame the rays are projected onto the ground.

max_distance

Maximum distance (in meters) from the sensor to ray cast to.

drift_range

The range of drift (in meters) to add to the ray starting positions (xyz) in world frame.

ray_cast_drift_range

The range of drift (in meters) to add to the projected ray points in local projection frame.

visualizer_cfg

The configuration object for the visualization markers.

data_types

List of sensor names/types to enable for the camera.

depth_clipping_behavior

Clipping behavior for the camera for values that exceed the maximum value.

pattern_cfg

The pattern that defines the local ray starting positions and directions in a pinhole camera pattern.

offset: OffsetCfg#

The offset pose of the sensor’s frame from the sensor’s parent frame. Defaults to identity.

prim_path: str#

Prim path (or expression) to the sensor.

Note

The expression can contain the environment namespace regex {ENV_REGEX_NS} which will be replaced with the environment namespace.

Example: {ENV_REGEX_NS}/Robot/sensor will be replaced with /World/envs/env_.*/Robot/sensor.

update_period: float#

Update period of the sensor buffers (in seconds). Defaults to 0.0 (update every step).

debug_vis: bool#

Whether to visualize the sensor. Defaults to False.

mesh_prim_paths: list[str]#

The list of mesh primitive paths to ray cast against.

Note

Currently, only a single static mesh is supported. We are working on supporting multiple static meshes and dynamic meshes.

attach_yaw_only: bool | None#

Whether the rays’ starting positions and directions only track the yaw orientation. Defaults to None, in which case no deprecation warning is raised.

This is useful for ray-casting height maps, where only yaw rotation is needed.

Deprecated since version 2.1.1: This attribute is deprecated and will be removed in the future. Please use ray_alignment instead.

To get the same behavior as setting this parameter to True or False, set ray_alignment to "yaw" or "base" respectively.

ray_alignment: Literal['base', 'yaw', 'world']#

Specify in what frame the rays are projected onto the ground. Default is “base”.

The options are:

  • base if the rays’ starting positions and directions track the full root position and orientation.

  • yaw if the rays’ starting positions and directions track root position and only yaw component of the orientation. This is useful for ray-casting height maps.

  • world if rays’ starting positions and directions are always fixed. This is useful in combination with a mapping package on the robot and querying ray-casts in a global frame.

max_distance: float#

Maximum distance (in meters) from the sensor to ray cast to. Defaults to 1e6.

drift_range: tuple[float, float]#

The range of drift (in meters) to add to the ray starting positions (xyz) in world frame. Defaults to (0.0, 0.0).

For floating base robots, this is useful for simulating drift in the robot’s pose estimation.

ray_cast_drift_range: dict[str, tuple[float, float]]#

The range of drift (in meters) to add to the projected ray points in local projection frame. Defaults to a dictionary with zero drift for each x, y and z axis.

For floating base robots, this is useful for simulating drift in the robot’s pose estimation.

visualizer_cfg: VisualizationMarkersCfg#

The configuration object for the visualization markers. Defaults to RAY_CASTER_MARKER_CFG.

Note

This attribute is only used when debug visualization is enabled.

data_types: list[str]#

List of sensor names/types to enable for the camera. Defaults to [“distance_to_image_plane”].

depth_clipping_behavior: Literal['max', 'zero', 'none']#

Clipping behavior for the camera for values that exceed the maximum value. Defaults to "none".

  • "max": Values are clipped to the maximum value.

  • "zero": Values are clipped to zero.

  • "none": No clipping is applied. Values are returned as inf for the distance_to_camera data type and nan for distance_to_image_plane.

pattern_cfg: PinholeCameraPatternCfg#

The pattern that defines the local ray starting positions and directions in a pinhole camera pattern.
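A hedged configuration sketch for a ray-cast depth camera (prim paths and intrinsic values are illustrative):

from isaaclab.sensors import RayCasterCameraCfg, patterns

depth_camera_cfg = RayCasterCameraCfg(
    prim_path="{ENV_REGEX_NS}/Robot/base",
    offset=RayCasterCameraCfg.OffsetCfg(pos=(0.1, 0.0, 0.0)),
    mesh_prim_paths=["/World/ground"],  # currently a single static mesh
    data_types=["distance_to_image_plane", "normals"],
    depth_clipping_behavior="max",      # clip far values to the maximum
    max_distance=10.0,
    pattern_cfg=patterns.PinholeCameraPatternCfg(
        focal_length=24.0, width=128, height=80,
    ),
)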

Multi-Mesh Ray-Cast Sensor#

class isaaclab.sensors.MultiMeshRayCaster[source]#

Bases: RayCaster

A multi-mesh ray-casting sensor.

The ray-caster uses a set of rays to detect collisions with meshes in the scene. The rays are defined in the sensor’s local coordinate frame. The sensor can be configured to ray-cast against a set of meshes with a given ray pattern.

The meshes are parsed from the list of primitive paths provided in the configuration. These are then converted to warp meshes and stored in the meshes dictionary. The ray-caster then ray-casts against these warp meshes using the ray pattern provided in the configuration.

Compared to the default RayCaster, the MultiMeshRayCaster extends it with the following enhancements:

  • Raycasting against multiple target types: supports primitive shapes (spheres, cubes, etc.) as well as arbitrary meshes.

  • Dynamic mesh tracking: keeps track of specified meshes, enabling raycasting against moving parts (e.g., robot links, articulated bodies, or dynamic obstacles).

  • Memory-efficient caching: avoids redundant memory usage by reusing mesh data across environments.

Example usage to raycast against the visual meshes of a robot (e.g. ANYmal):

ray_caster_cfg = MultiMeshRayCasterCfg(
    prim_path="{ENV_REGEX_NS}/Robot",
    mesh_prim_paths=[
        "/World/Ground",
        MultiMeshRayCasterCfg.RaycastTargetCfg(prim_expr="{ENV_REGEX_NS}/Robot/LF_.*/visuals"),
        MultiMeshRayCasterCfg.RaycastTargetCfg(prim_expr="{ENV_REGEX_NS}/Robot/RF_.*/visuals"),
        MultiMeshRayCasterCfg.RaycastTargetCfg(prim_expr="{ENV_REGEX_NS}/Robot/LH_.*/visuals"),
        MultiMeshRayCasterCfg.RaycastTargetCfg(prim_expr="{ENV_REGEX_NS}/Robot/RH_.*/visuals"),
        MultiMeshRayCasterCfg.RaycastTargetCfg(prim_expr="{ENV_REGEX_NS}/Robot/base/visuals"),
    ],
    ray_alignment="world",
    pattern_cfg=patterns.GridPatternCfg(resolution=0.02, size=(2.5, 2.5), direction=(0, 0, -1)),
)

Attributes:

cfg

The configuration parameters.

mesh_views

A dictionary to store mesh views for raycasting, shared across all instances.

data

Data from the sensor.

device

Memory device for computation.

has_debug_vis_implementation

Whether the sensor has a debug visualization implemented.

is_initialized

Whether the sensor is initialized.

meshes

A dictionary to store warp meshes for raycasting, shared across all instances.

num_instances

Number of instances of the sensor.

Methods:

__init__(cfg)

Initializes the ray-caster object.

reset([env_ids, env_mask])

Resets the sensor internals.

set_debug_vis(debug_vis)

Sets whether to visualize the sensor data.

cfg: MultiMeshRayCasterCfg#

The configuration parameters.

mesh_views: ClassVar[dict[str, XformPrimView | physx.ArticulationView | physx.RigidBodyView]] = {}#

A dictionary to store mesh views for raycasting, shared across all instances.

The keys correspond to the prim path for the mesh views, and values are the corresponding view objects.

__init__(cfg: MultiMeshRayCasterCfg)[source]#

Initializes the ray-caster object.

Parameters:

cfg – The configuration parameters.

property data: MultiMeshRayCasterData#

Data from the sensor.

This property is only updated when the user tries to access the data. This is done to avoid unnecessary computation when the sensor data is not used.

For updating the sensor when this property is accessed, you can use the following code snippet in your sensor implementation:

# update sensors if needed
self._update_outdated_buffers()
# return the data (where `_data` is the data for the sensor)
return self._data
property device: str#

Memory device for computation.

property has_debug_vis_implementation: bool#

Whether the sensor has a debug visualization implemented.

property is_initialized: bool#

Whether the sensor is initialized.

Returns True if the sensor is initialized, False otherwise.

meshes: ClassVar[dict[str, wp.Mesh]] = {}#

A dictionary to store warp meshes for raycasting, shared across all instances.

The keys correspond to the prim path for the meshes, and values are the corresponding warp Mesh objects.

property num_instances: int#

Number of instances of the sensor.

This is equal to the number of sensors per environment multiplied by the number of environments.

reset(env_ids: Sequence[int] | None = None, env_mask: wp.array | None = None)#

Resets the sensor internals.

Parameters:
  • env_ids – The environment indices to reset. Defaults to None, in which case all environments are reset.

  • env_mask – A boolean warp array indicating which environments to reset. If provided, takes priority over env_ids. Defaults to None.

set_debug_vis(debug_vis: bool) bool#

Sets whether to visualize the sensor data.

Parameters:

debug_vis – Whether to visualize the sensor data.

Returns:

Whether the debug visualization was successfully set. False if the sensor does not support debug visualization.

class isaaclab.sensors.MultiMeshRayCasterData[source]#

Data container for the multi-mesh ray-cast sensor.

Attributes:

pos_w

Position of the sensor origin in world frame.

quat_w

Orientation of the sensor origin in quaternion (x, y, z, w) in world frame.

ray_hits_w

The ray hit positions in the world frame.

ray_mesh_ids

The mesh ids of the ray hits.

pos_w: torch.Tensor = None#

Position of the sensor origin in world frame.

Shape is (N, 3), where N is the number of sensors.

quat_w: torch.Tensor = None#

Orientation of the sensor origin in quaternion (x, y, z, w) in world frame.

Shape is (N, 4), where N is the number of sensors.

ray_hits_w: torch.Tensor = None#

The ray hit positions in the world frame.

Shape is (N, B, 3), where N is the number of sensors, B is the number of rays in the scan pattern per sensor.

ray_mesh_ids: torch.Tensor = None#

The mesh ids of the ray hits.

Shape is (N, B, 1), where N is the number of sensors, B is the number of rays in the scan pattern per sensor.

class isaaclab.sensors.MultiMeshRayCasterCfg[source]#

Bases: RayCasterCfg

Configuration for the multi-mesh ray-cast sensor.

Classes:

RaycastTargetCfg

Configuration for different ray-cast targets.

Attributes:

prim_path

Prim path (or expression) to the sensor.

update_period

Update period of the sensor buffers (in seconds).

debug_vis

Whether to visualize the sensor.

offset

The offset pose of the sensor's frame from the sensor's parent frame.

attach_yaw_only

Whether the rays' starting positions and directions only track the yaw orientation.

ray_alignment

Specify in what frame the rays are projected onto the ground.

pattern_cfg

The pattern that defines the local ray starting positions and directions.

max_distance

Maximum distance (in meters) from the sensor to ray cast to.

drift_range

The range of drift (in meters) to add to the ray starting positions (xyz) in world frame.

ray_cast_drift_range

The range of drift (in meters) to add to the projected ray points in local projection frame.

visualizer_cfg

The configuration object for the visualization markers.

mesh_prim_paths

The list of mesh primitive paths to ray cast against.

update_mesh_ids

Whether to update the mesh ids of the ray hits in the data container.

reference_meshes

Whether to reference duplicated meshes instead of loading each one separately into memory.

class RaycastTargetCfg[source]#

Bases: object

Configuration for different ray-cast targets.

Attributes:

prim_expr

The regex to specify the target prim to ray cast against.

is_shared

Whether the target prim is assumed to be the same mesh across all environments.

merge_prim_meshes

Whether to merge the parsed meshes for a prim that contains multiple meshes.

track_mesh_transforms

Whether the mesh transformations should be tracked.

prim_expr: str#

The regex to specify the target prim to ray cast against.

is_shared: bool#

Whether the target prim is assumed to be the same mesh across all environments. Defaults to False.

If True, only the first mesh is read and then reused for all environments, rather than re-parsed. This provides a startup performance boost when there are many environments that all use the same asset.

Note

If MultiMeshRayCasterCfg.reference_meshes is False, this flag has no effect.

merge_prim_meshes: bool#

Whether to merge the parsed meshes for a prim that contains multiple meshes. Defaults to True.

This will create a new mesh that combines all meshes in the parsed prim. The raycast hits mesh IDs will then refer to the single merged mesh.

track_mesh_transforms: bool#

Whether the mesh transformations should be tracked. Defaults to True.

Note

Not tracking the mesh transformations is recommended when the meshes are static to increase performance.

prim_path: str#

Prim path (or expression) to the sensor.

Note

The expression can contain the environment namespace regex {ENV_REGEX_NS} which will be replaced with the environment namespace.

Example: {ENV_REGEX_NS}/Robot/sensor will be replaced with /World/envs/env_.*/Robot/sensor.

update_period: float#

Update period of the sensor buffers (in seconds). Defaults to 0.0 (update every step).

debug_vis: bool#

Whether to visualize the sensor. Defaults to False.

offset: OffsetCfg#

The offset pose of the sensor’s frame from the sensor’s parent frame. Defaults to identity.

attach_yaw_only: bool | None#

Whether the rays’ starting positions and directions only track the yaw orientation. Defaults to None, in which case no deprecation warning is raised.

This is useful for ray-casting height maps, where only yaw rotation is needed.

Deprecated since version 2.1.1: This attribute is deprecated and will be removed in the future. Please use ray_alignment instead.

To get the same behavior as setting this parameter to True or False, set ray_alignment to "yaw" or "base" respectively.

ray_alignment: Literal['base', 'yaw', 'world']#

Specify in what frame the rays are projected onto the ground. Default is “base”.

The options are:

  • base if the rays’ starting positions and directions track the full root position and orientation.

  • yaw if the rays’ starting positions and directions track root position and only yaw component of the orientation. This is useful for ray-casting height maps.

  • world if rays’ starting positions and directions are always fixed. This is useful in combination with a mapping package on the robot and querying ray-casts in a global frame.

pattern_cfg: PatternBaseCfg#

The pattern that defines the local ray starting positions and directions.

max_distance: float#

Maximum distance (in meters) from the sensor to ray cast to. Defaults to 1e6.

drift_range: tuple[float, float]#

The range of drift (in meters) to add to the ray starting positions (xyz) in world frame. Defaults to (0.0, 0.0).

For floating base robots, this is useful for simulating drift in the robot’s pose estimation.

ray_cast_drift_range: dict[str, tuple[float, float]]#

The range of drift (in meters) to add to the projected ray points in local projection frame. Defaults to a dictionary with zero drift for each x, y and z axis.

For floating base robots, this is useful for simulating drift in the robot’s pose estimation.

visualizer_cfg: VisualizationMarkersCfg#

The configuration object for the visualization markers. Defaults to RAY_CASTER_MARKER_CFG.

Note

This attribute is only used when debug visualization is enabled.

mesh_prim_paths: list[str | RaycastTargetCfg]#

The list of mesh primitive paths to ray cast against.

If an entry is a string, it is internally converted to RaycastTargetCfg with track_mesh_transforms disabled. These settings ensure backwards compatibility with the default raycaster.

update_mesh_ids: bool#

Whether to update the mesh ids of the ray hits in the data container.

reference_meshes: bool#

Whether to reference duplicated meshes instead of loading each one separately into memory. Defaults to True.

When enabled, the raycaster parses all meshes in all environments, but reuses references for duplicates instead of storing multiple copies. This reduces memory footprint.
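A hedged sketch combining these options: a static ground target given as a plain string, plus a dynamic obstacle target whose mesh is shared across environments and whose transforms are tracked. The "/Obstacle" path is hypothetical:

from isaaclab.sensors import MultiMeshRayCasterCfg, patterns

scanner_cfg = MultiMeshRayCasterCfg(
    prim_path="{ENV_REGEX_NS}/Robot/base",
    mesh_prim_paths=[
        "/World/ground",  # plain string: transforms are not tracked
        MultiMeshRayCasterCfg.RaycastTargetCfg(
            prim_expr="{ENV_REGEX_NS}/Obstacle",
            is_shared=True,              # same asset in every environment
            track_mesh_transforms=True,  # the obstacle may move
        ),
    ],
    update_mesh_ids=True,  # populate MultiMeshRayCasterData.ray_mesh_ids
    pattern_cfg=patterns.GridPatternCfg(resolution=0.1, size=(2.0, 2.0)),
)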

Multi-Mesh Ray-Cast Camera#

class isaaclab.sensors.MultiMeshRayCasterCamera[source]#

Bases: RayCasterCamera, MultiMeshRayCaster

A multi-mesh ray-casting camera sensor.

The ray-caster camera uses a set of rays to get the distances to meshes in the scene. The rays are defined in the sensor’s local coordinate frame. The sensor has the same interface as isaaclab.sensors.Camera, which implements the camera through USD camera prims, but it provides faster image generation. The sensor converts the meshes from the list of primitive paths provided in the configuration into Warp meshes and then ray-casts against these Warp meshes only.

Currently, only the following annotators are supported:

  • "distance_to_camera": An image containing the distance to camera optical center.

  • "distance_to_image_plane": An image containing distances of 3D points from camera plane along camera’s z-axis.

  • "normals": An image containing the local surface normal vectors at each pixel.

Attributes:

cfg

The configuration parameters.

UNSUPPORTED_TYPES

A set of sensor types that are not supported by the ray-caster camera.

data

Data from the sensor.

device

Memory device for computation.

frame

Frame number when the measurement took place.

has_debug_vis_implementation

Whether the sensor has a debug visualization implemented.

image_shape

A tuple containing (height, width) of the camera sensor.

is_initialized

Whether the sensor is initialized.

mesh_views

A dictionary to store mesh views for raycasting, shared across all instances.

meshes

A dictionary to store warp meshes for raycasting, shared across all instances.

num_instances

Number of instances of the sensor.

Methods:

__init__(cfg)

Initializes the camera object.

reset([env_ids, env_mask])

Resets the sensor internals.

set_debug_vis(debug_vis)

Sets whether to visualize the sensor data.

set_intrinsic_matrices(matrices[, ...])

Set the intrinsic matrix of the camera.

set_world_poses([positions, orientations, ...])

Set the pose of the camera w.r.t. the world frame.

set_world_poses_from_view(eyes, targets[, ...])

Set the poses of the camera from the eye position and look-at target position.

cfg: MultiMeshRayCasterCameraCfg#

The configuration parameters.

__init__(cfg: MultiMeshRayCasterCameraCfg)[source]#

Initializes the camera object.

Parameters:

cfg – The configuration parameters.

Raises:

ValueError – If the provided data types are not supported by the ray-caster camera.

UNSUPPORTED_TYPES: ClassVar[set[str]] = {'bounding_box_2d_loose', 'bounding_box_2d_loose_fast', 'bounding_box_2d_tight', 'bounding_box_2d_tight_fast', 'bounding_box_3d', 'bounding_box_3d_fast', 'instance_id_segmentation', 'instance_id_segmentation_fast', 'instance_segmentation', 'instance_segmentation_fast', 'motion_vectors', 'rgb', 'semantic_segmentation', 'skeleton_data'}#

A set of sensor types that are not supported by the ray-caster camera.

property data: CameraData#

Data from the sensor.

This property is only updated when the user tries to access the data. This is done to avoid unnecessary computation when the sensor data is not used.

For updating the sensor when this property is accessed, you can use the following code snippet in your sensor implementation:

# update sensors if needed
self._update_outdated_buffers()
# return the data (where `_data` is the data for the sensor)
return self._data
property device: str#

Memory device for computation.

property frame: torch.tensor#

Frame number when the measurement took place.

property has_debug_vis_implementation: bool#

Whether the sensor has a debug visualization implemented.

property image_shape: tuple[int, int]#

A tuple containing (height, width) of the camera sensor.

property is_initialized: bool#

Whether the sensor is initialized.

Returns True if the sensor is initialized, False otherwise.

mesh_views: ClassVar[dict[str, XformPrimView | physx.ArticulationView | physx.RigidBodyView]] = {}#

A dictionary to store mesh views for raycasting, shared across all instances.

The keys correspond to the prim path for the mesh views, and values are the corresponding view objects.

meshes: ClassVar[dict[str, wp.Mesh]] = {}#

A dictionary to store warp meshes for raycasting, shared across all instances.

The keys correspond to the prim path for the meshes, and values are the corresponding warp Mesh objects.

property num_instances: int#

Number of instances of the sensor.

This is equal to the number of sensors per environment multiplied by the number of environments.

reset(env_ids: Sequence[int] | None = None, env_mask: wp.array | None = None)#

Resets the sensor internals.

Parameters:
  • env_ids – The environment indices to reset. Defaults to None, in which case all environments are reset.

  • env_mask – A boolean warp array indicating which environments to reset. If provided, takes priority over env_ids. Defaults to None.

set_debug_vis(debug_vis: bool) bool#

Sets whether to visualize the sensor data.

Parameters:

debug_vis – Whether to visualize the sensor data.

Returns:

Whether the debug visualization was successfully set. False if the sensor does not support debug visualization.

set_intrinsic_matrices(matrices: torch.Tensor, focal_length: float = 1.0, env_ids: Sequence[int] | None = None)#

Set the intrinsic matrix of the camera.

Parameters:
  • matrices – The intrinsic matrices for the camera. Shape is (N, 3, 3).

  • focal_length – Focal length to use when computing aperture values (in cm). Defaults to 1.0.

  • env_ids – The sensor indices to manipulate. Defaults to None, which means all sensor indices.

set_world_poses(positions: torch.Tensor | None = None, orientations: torch.Tensor | None = None, env_ids: Sequence[int] | None = None, convention: Literal['opengl', 'ros', 'world'] = 'ros')#

Set the pose of the camera w.r.t. the world frame using specified convention.

Since different fields use different conventions for camera orientations, the method allows users to set the camera poses in the specified convention. Possible conventions are:

  • "opengl" - forward axis: -Z - up axis +Y - Offset is applied in the OpenGL (Usd.Camera) convention

  • "ros" - forward axis: +Z - up axis -Y - Offset is applied in the ROS convention

  • "world" - forward axis: +X - up axis +Z - Offset is applied in the World Frame convention

See isaaclab.utils.maths.convert_camera_frame_orientation_convention() for more details on the conventions.

Parameters:
  • positions – The cartesian coordinates (in meters). Shape is (N, 3). Defaults to None, in which case the camera position is not changed.

  • orientations – The quaternion orientation in (x, y, z, w). Shape is (N, 4). Defaults to None, in which case the camera orientation is not changed.

  • env_ids – The sensor indices to manipulate. Defaults to None, which means all sensor indices.

  • convention – The convention in which the poses are fed. Defaults to “ros”.

Raises:

RuntimeError – If the camera prim is not set. The initialize() method must be called first.

set_world_poses_from_view(eyes: torch.Tensor, targets: torch.Tensor, env_ids: Sequence[int] | None = None)#

Set the poses of the camera from the eye position and look-at target position.

Parameters:
  • eyes – The positions of the camera’s eye. Shape is (N, 3).

  • targets – The target locations to look at. Shape is (N, 3).

  • env_ids – The sensor indices to manipulate. Defaults to None, which means all sensor indices.

Raises:
  • RuntimeError – If the camera prim is not set. The initialize() method must be called first.

  • NotImplementedError – If the stage up-axis is not “Y” or “Z”.

class isaaclab.sensors.MultiMeshRayCasterCameraCfg[source]#

Bases: RayCasterCameraCfg, MultiMeshRayCasterCfg

Configuration for the multi-mesh ray-cast camera sensor.

Attributes:

prim_path

Prim path (or expression) to the sensor.

update_period

Update period of the sensor buffers (in seconds).

debug_vis

Whether to visualize the sensor.

mesh_prim_paths

The list of mesh primitive paths to ray cast against.

offset

The offset pose of the sensor's frame from the sensor's parent frame.

attach_yaw_only

Whether the rays' starting positions and directions only track the yaw orientation.

ray_alignment

Specify in what frame the rays are projected onto the ground.

pattern_cfg

The pattern that defines the local ray starting positions and directions in a pinhole camera pattern.

max_distance

Maximum distance (in meters) from the sensor to ray cast to.

drift_range

The range of drift (in meters) to add to the ray starting positions (xyz) in world frame.

ray_cast_drift_range

The range of drift (in meters) to add to the projected ray points in local projection frame.

visualizer_cfg

The configuration object for the visualization markers.

update_mesh_ids

Whether to update the mesh ids of the ray hits in the data container.

reference_meshes

Whether to reference duplicated meshes instead of loading each one separately into memory.

data_types

List of sensor names/types to enable for the camera.

depth_clipping_behavior

Clipping behavior for the camera for values that exceed the maximum value.

prim_path: str#

Prim path (or expression) to the sensor.

Note

The expression can contain the environment namespace regex {ENV_REGEX_NS} which will be replaced with the environment namespace.

Example: {ENV_REGEX_NS}/Robot/sensor will be replaced with /World/envs/env_.*/Robot/sensor.

update_period: float#

Update period of the sensor buffers (in seconds). Defaults to 0.0 (update every step).

debug_vis: bool#

Whether to visualize the sensor. Defaults to False.

mesh_prim_paths: list[str | RaycastTargetCfg]#

The list of mesh primitive paths to ray cast against.

If an entry is a string, it is internally converted to RaycastTargetCfg with track_mesh_transforms disabled. These settings ensure backwards compatibility with the default raycaster.

offset: OffsetCfg#

The offset pose of the sensor’s frame from the sensor’s parent frame. Defaults to identity.

attach_yaw_only: bool | None#

Whether the rays’ starting positions and directions only track the yaw orientation. Defaults to None, in which case no deprecation warning is raised.

This is useful for ray-casting height maps, where only yaw rotation is needed.

Deprecated since version 2.1.1: This attribute is deprecated and will be removed in the future. Please use ray_alignment instead.

To get the same behavior as setting this parameter to True or False, set ray_alignment to "yaw" or "base" respectively.

ray_alignment: Literal['base', 'yaw', 'world']#

Specify in what frame the rays are projected onto the ground. Default is “base”.

The options are:

  • base if the rays’ starting positions and directions track the full root position and orientation.

  • yaw if the rays’ starting positions and directions track root position and only yaw component of the orientation. This is useful for ray-casting height maps.

  • world if rays’ starting positions and directions are always fixed. This is useful in combination with a mapping package on the robot and querying ray-casts in a global frame.

pattern_cfg: PinholeCameraPatternCfg#

The pattern that defines the local ray starting positions and directions in a pinhole camera pattern.

max_distance: float#

Maximum distance (in meters) from the sensor to ray cast to. Defaults to 1e6.

drift_range: tuple[float, float]#

The range of drift (in meters) to add to the ray starting positions (xyz) in world frame. Defaults to (0.0, 0.0).

For floating base robots, this is useful for simulating drift in the robot’s pose estimation.

ray_cast_drift_range: dict[str, tuple[float, float]]#

The range of drift (in meters) to add to the projected ray points in local projection frame. Defaults to a dictionary with zero drift for each x, y and z axis.

For floating base robots, this is useful for simulating drift in the robot’s pose estimation.

visualizer_cfg: VisualizationMarkersCfg#

The configuration object for the visualization markers. Defaults to RAY_CASTER_MARKER_CFG.

Note

This attribute is only used when debug visualization is enabled.

update_mesh_ids: bool#

Whether to update the mesh ids of the ray hits in the data container.

reference_meshes: bool#

Whether to reference duplicated meshes instead of loading each one separately into memory. Defaults to True.

When enabled, the raycaster parses all meshes in all environments, but reuses references for duplicates instead of storing multiple copies. This reduces memory footprint.

data_types: list[str]#

List of sensor names/types to enable for the camera. Defaults to [“distance_to_image_plane”].

depth_clipping_behavior: Literal['max', 'zero', 'none']#

Clipping behavior for the camera for values that exceed the maximum value. Defaults to "none".

  • "max": Values are clipped to the maximum value.

  • "zero": Values are clipped to zero.

  • "none": No clipping is applied. Values are returned as inf for the distance_to_camera data type and nan for distance_to_image_plane.

Inertial Measurement Unit#

class isaaclab.sensors.Imu[source]#

Bases: FactoryBase, BaseImu

Factory for creating IMU sensor instances.

Attributes:

data

Data from the sensor.

device

Memory device for computation.

has_debug_vis_implementation

Whether the sensor has a debug visualization implemented.

is_initialized

Whether the sensor is initialized.

num_instances

Number of instances of the sensor.

cfg

The configuration parameters.

Methods:

__new__(cls, *args, **kwargs)

Create a new instance of an IMU sensor based on the backend.

__init__(cfg)

Initializes the Imu sensor.

get_registry_keys()

Returns a list of registered backend names.

register(name, sub_class)

Register a new implementation class.

reset([env_ids, env_mask])

Resets the sensor internals.

set_debug_vis(debug_vis)

Sets whether to visualize the sensor data.

abstract property data: BaseImuData#

Data from the sensor.

This property is only updated when the user tries to access the data. This is done to avoid unnecessary computation when the sensor data is not used.

For updating the sensor when this property is accessed, you can use the following code snippet in your sensor implementation:

# update sensors if needed
self._update_outdated_buffers()
# return the data (where `_data` is the data for the sensor)
return self._data
static __new__(cls, *args, **kwargs) BaseImu | PhysXImu[source]#

Create a new instance of an IMU sensor based on the backend.

__init__(cfg: ImuCfg)#

Initializes the Imu sensor.

Parameters:

cfg – The configuration parameters.

property device: str#

Memory device for computation.

classmethod get_registry_keys() list[str]#

Returns a list of registered backend names.

property has_debug_vis_implementation: bool#

Whether the sensor has a debug visualization implemented.

property is_initialized: bool#

Whether the sensor is initialized.

Returns True if the sensor is initialized, False otherwise.

property num_instances: int#

Number of instances of the sensor.

This is equal to the number of sensors per environment multiplied by the number of environments.

classmethod register(name: str, sub_class) None#

Register a new implementation class.

reset(env_ids: Sequence[int] | None = None, env_mask: wp.array | None = None) None#

Resets the sensor internals.

Parameters:
  • env_ids – The environment indices to reset. Defaults to None, in which case all environments are reset.

  • env_mask – A boolean warp array indicating which environments to reset. If provided, takes priority over env_ids. Defaults to None.

set_debug_vis(debug_vis: bool) bool#

Sets whether to visualize the sensor data.

Parameters:

debug_vis – Whether to visualize the sensor data.

Returns:

Whether the debug visualization was successfully set. False if the sensor does not support debug visualization.

cfg: ImuCfg#

The configuration parameters.

class isaaclab.sensors.ImuCfg[source]#

Bases: SensorBaseCfg

Configuration for an Inertial Measurement Unit (IMU) sensor.

Classes:

OffsetCfg

The offset pose of the sensor's frame from the sensor's parent frame.

Attributes:

prim_path

Prim path (or expression) to the sensor.

update_period

Update period of the sensor buffers (in seconds).

debug_vis

Whether to visualize the sensor.

offset

The offset pose of the sensor's frame from the sensor's parent frame.

visualizer_cfg

The configuration object for the visualization markers.

gravity_bias

The linear acceleration bias applied to the linear acceleration in the world frame (x,y,z).

class OffsetCfg[source]#

Bases: object

The offset pose of the sensor’s frame from the sensor’s parent frame.

Attributes:

pos

Translation w.r.t. the parent frame.

rot

Quaternion rotation (x, y, z, w) w.r.t. the parent frame.

pos: tuple[float, float, float]#

Translation w.r.t. the parent frame. Defaults to (0.0, 0.0, 0.0).

rot: tuple[float, float, float, float]#

Quaternion rotation (x, y, z, w) w.r.t. the parent frame. Defaults to (0.0, 0.0, 0.0, 1.0).

prim_path: str#

Prim path (or expression) to the sensor.

Note

The expression can contain the environment namespace regex {ENV_REGEX_NS} which will be replaced with the environment namespace.

Example: {ENV_REGEX_NS}/Robot/sensor will be replaced with /World/envs/env_.*/Robot/sensor.

update_period: float#

Update period of the sensor buffers (in seconds). Defaults to 0.0 (update every step).

debug_vis: bool#

Whether to visualize the sensor. Defaults to False.

offset: OffsetCfg#

The offset pose of the sensor’s frame from the sensor’s parent frame. Defaults to identity.

visualizer_cfg: VisualizationMarkersCfg#

The configuration object for the visualization markers. Defaults to RED_ARROW_X_MARKER_CFG.

This attribute is only used when debug visualization is enabled.

gravity_bias: tuple[float, float, float]#

The linear acceleration bias applied to the linear acceleration in the world frame (x,y,z).

IMU sensors typically report a positive acceleration along the axis opposing gravity. This parameter adds that bias to the measured linear acceleration; set it to (0.0, 0.0, 0.0) to obtain the unbiased acceleration instead. By default, it is set to (0.0, 0.0, 9.81), which results in a positive acceleration reading along the world Z axis for a sensor at rest.
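A hedged configuration sketch for an IMU mounted on a robot base (the prim path and mounting offset are illustrative):

from isaaclab.sensors import ImuCfg

imu_cfg = ImuCfg(
    prim_path="{ENV_REGEX_NS}/Robot/base",
    offset=ImuCfg.OffsetCfg(pos=(0.0, 0.0, 0.05)),
    # Default bias: a sensor at rest reads +9.81 m/s^2 along world Z.
    # Set to (0.0, 0.0, 0.0) to obtain the unbiased linear acceleration.
    gravity_bias=(0.0, 0.0, 9.81),
)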