omni.isaac.lab.sensors#
Sub-package containing various sensor class implementations.
This subpackage contains the sensor classes that are compatible with Isaac Sim. We include both USD-based and custom sensors:
USD-prim sensors: Available in Omniverse and require creating a USD prim for them. For instance, RTX ray tracing camera and lidar sensors.
USD-schema sensors: Available in Omniverse and require creating a USD schema on an existing prim. For instance, contact sensors and frame transformers.
Custom sensors: Implemented in Python and do not require creating any USD prim or schema. For instance, warp-based ray-casters.
Due to the above categorization, the prim paths passed to the sensor’s configuration class are interpreted differently based on the sensor type. The following table summarizes the interpretation of the prim paths for different sensor types:
Sensor Type | Example Prim Path | Pre-check
---|---|---
Camera | /World/robot/base/camera | Leaf is available, and it will spawn a USD camera
Contact Sensor | /World/robot/feet_* | Leaf is available and checks if the schema exists
Ray Caster | /World/robot/base | Leaf exists and is a physics body (Articulation / Rigid Body)
Frame Transformer | /World/robot/base | Leaf exists and is a physics body (Articulation / Rigid Body)
Imu | /World/robot/base | Leaf exists and is a physics body (Rigid Body)
Submodules
patterns | Sub-module for ray-casting patterns used by the ray-caster.
Classes
SensorBase | The base class for implementing a sensor.
SensorBaseCfg | Configuration parameters for a sensor.
Camera | The camera sensor for acquiring visual data.
CameraData | Data container for the camera sensor.
CameraCfg | Configuration for a camera sensor.
TiledCamera | The tiled rendering based camera sensor for acquiring the same data as the Camera class.
TiledCameraCfg | Configuration for a tiled rendering-based camera sensor.
ContactSensor | A contact reporting sensor.
ContactSensorData | Data container for the contact reporting sensor.
ContactSensorCfg | Configuration for the contact sensor.
FrameTransformer | A sensor for reporting frame transforms.
FrameTransformerData | Data container for the frame transformer sensor.
FrameTransformerCfg | Configuration for the frame transformer sensor.
RayCaster | A ray-casting sensor.
RayCasterData | Data container for the ray-cast sensor.
RayCasterCfg | Configuration for the ray-cast sensor.
RayCasterCamera | A ray-casting camera sensor.
RayCasterCameraCfg | Configuration for the ray-cast camera sensor.
Imu | The Inertial Measurement Unit (IMU) sensor.
ImuCfg | Configuration for an Inertial Measurement Unit (IMU) sensor.
Sensor Base#
- class omni.isaac.lab.sensors.SensorBase[source]#
The base class for implementing a sensor.
The implementation is based on lazy evaluation. The sensor data is only updated when the user tries accessing the data through the `data` property or sets `force_compute=True` in the `update()` method. This is done to avoid unnecessary computation when the sensor data is not used.

The sensor is updated at the specified update period. If the update period is zero, then the sensor is updated at every simulation step.
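The lazy-update pattern can be sketched in plain Python. This is an illustrative stand-in, not the actual SensorBase implementation; only the bookkeeping (an outdated flag plus on-demand recomputation) mirrors the behavior described above.

```python
class LazySensor:
    """Hypothetical stand-in for a lazily evaluated sensor."""

    def __init__(self, update_period: float = 0.0):
        self.update_period = update_period
        self._data = None
        self._is_outdated = True
        self.compute_count = 0  # tracks how often the expensive work ran

    def update(self, dt: float, force_compute: bool = False):
        # every simulation step only marks the buffers as outdated ...
        self._is_outdated = True
        # ... unless the user explicitly forces the computation
        if force_compute:
            self._update_outdated_buffers()

    def _update_outdated_buffers(self):
        if self._is_outdated:
            self.compute_count += 1  # expensive sensor computation goes here
            self._data = f"measurement-{self.compute_count}"
            self._is_outdated = False

    @property
    def data(self):
        # update sensors if needed, then return the buffered data
        self._update_outdated_buffers()
        return self._data
```

Stepping the sensor repeatedly without reading `data` performs no computation; the first read (or `force_compute=True`) triggers exactly one update.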
Methods:
__init__(cfg) | Initialize the sensor class.
set_debug_vis(debug_vis) | Sets whether to visualize the sensor data.
reset([env_ids]) | Resets the sensor internals.
Attributes:
is_initialized | Whether the sensor is initialized.
num_instances | Number of instances of the sensor.
device | Memory device for computation.
data | Data from the sensor.
has_debug_vis_implementation | Whether the sensor has a debug visualization implemented.
- __init__(cfg: SensorBaseCfg)[source]#
Initialize the sensor class.
- Parameters:
cfg – The configuration parameters for the sensor.
- property is_initialized: bool#
Whether the sensor is initialized.
Returns True if the sensor is initialized, False otherwise.
- property num_instances: int#
Number of instances of the sensor.
This is equal to the number of sensors per environment multiplied by the number of environments.
- abstract property data: Any#
Data from the sensor.
This property is only updated when the user tries to access the data. This is done to avoid unnecessary computation when the sensor data is not used.
For updating the sensor when this property is accessed, you can use the following code snippet in your sensor implementation:
```python
# update sensors if needed
self._update_outdated_buffers()
# return the data (where `_data` is the data for the sensor)
return self._data
```
- property has_debug_vis_implementation: bool#
Whether the sensor has a debug visualization implemented.
- class omni.isaac.lab.sensors.SensorBaseCfg[source]#
Configuration parameters for a sensor.
Attributes:
prim_path | Prim path (or expression) to the sensor.
update_period | Update period of the sensor buffers (in seconds).
history_length | Number of past frames to store in the sensor buffers.
debug_vis | Whether to visualize the sensor.
- prim_path: str#
Prim path (or expression) to the sensor.
Note
The expression can contain the environment namespace regex `{ENV_REGEX_NS}`, which will be replaced with the environment namespace.

Example: `{ENV_REGEX_NS}/Robot/sensor` will be replaced with `/World/envs/env_.*/Robot/sensor`.
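As an illustration, the substitution and matching can be reproduced in a few lines of plain Python (Isaac Lab performs this internally; the `ENV_REGEX_NS` value below is the expansion documented above):

```python
import re

ENV_REGEX_NS = "/World/envs/env_.*"

# expand the placeholder in the prim-path expression
expr = "{ENV_REGEX_NS}/Robot/sensor".format(ENV_REGEX_NS=ENV_REGEX_NS)
assert expr == "/World/envs/env_.*/Robot/sensor"

# the resulting expression is a regex that matches one sensor per environment
pattern = re.compile(expr + "$")
assert pattern.match("/World/envs/env_0/Robot/sensor")
assert pattern.match("/World/envs/env_42/Robot/sensor")
assert not pattern.match("/World/other/Robot/sensor")
```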
- update_period: float#
Update period of the sensor buffers (in seconds). Defaults to 0.0 (update every step).
USD Camera#
- class omni.isaac.lab.sensors.Camera[source]#
Bases:
SensorBase
The camera sensor for acquiring visual data.
This class wraps over the UsdGeom Camera for providing a consistent API for acquiring visual data. It ensures that the camera follows the ROS convention for the coordinate system.
Summarizing from the replicator extension, the following sensor types are supported:
- `"rgb"`: A 3-channel rendered color image.
- `"rgba"`: A 4-channel rendered color image with alpha channel.
- `"distance_to_camera"`: An image containing the distance to camera optical center.
- `"distance_to_image_plane"`: An image containing distances of 3D points from camera plane along camera's z-axis.
- `"depth"`: The same as `"distance_to_image_plane"`.
- `"normals"`: An image containing the local surface normal vectors at each pixel.
- `"motion_vectors"`: An image containing the motion vector data at each pixel.
- `"semantic_segmentation"`: The semantic segmentation data.
- `"instance_segmentation_fast"`: The instance segmentation data.
- `"instance_id_segmentation_fast"`: The instance id segmentation data.
Note
Currently the following sensor types are not supported in a “view” format:
- `"instance_segmentation"`: The instance segmentation data. Please use the fast counterpart instead.
- `"instance_id_segmentation"`: The instance id segmentation data. Please use the fast counterpart instead.
- `"bounding_box_2d_tight"`: The tight 2D bounding box data (only contains non-occluded regions).
- `"bounding_box_2d_tight_fast"`: The tight 2D bounding box data (only contains non-occluded regions).
- `"bounding_box_2d_loose"`: The loose 2D bounding box data (contains occluded regions).
- `"bounding_box_2d_loose_fast"`: The loose 2D bounding box data (contains occluded regions).
- `"bounding_box_3d"`: The 3D view space bounding box data.
- `"bounding_box_3d_fast"`: The 3D view space bounding box data.
Attributes:
cfg | The configuration parameters.
UNSUPPORTED_TYPES | The set of sensor types that are not supported by the camera class.
num_instances | Number of instances of the sensor.
data | Data from the sensor.
frame | Frame number when the measurement took place.
render_product_paths | The path of the render products for the cameras.
image_shape | A tuple containing (height, width) of the camera sensor.
device | Memory device for computation.
has_debug_vis_implementation | Whether the sensor has a debug visualization implemented.
is_initialized | Whether the sensor is initialized.
Methods:
__init__(cfg) | Initializes the camera sensor.
set_intrinsic_matrices(matrices[, ...]) | Set parameters of the USD camera from its intrinsic matrix.
set_world_poses([positions, orientations, ...]) | Set the pose of the camera w.r.t. the world frame.
set_world_poses_from_view(eyes, targets[, ...]) | Set the poses of the camera from the eye position and look-at target position.
reset([env_ids]) | Resets the sensor internals.
set_debug_vis(debug_vis) | Sets whether to visualize the sensor data.
- UNSUPPORTED_TYPES: set[str] = {'bounding_box_2d_loose', 'bounding_box_2d_loose_fast', 'bounding_box_2d_tight', 'bounding_box_2d_tight_fast', 'bounding_box_3d', 'bounding_box_3d_fast', 'instance_id_segmentation', 'instance_segmentation'}#
The set of sensor types that are not supported by the camera class.
- __init__(cfg: CameraCfg)[source]#
Initializes the camera sensor.
- Parameters:
cfg – The configuration parameters.
- Raises:
RuntimeError – If no camera prim is found at the given path.
ValueError – If the provided data types are not supported by the camera.
- property num_instances: int#
Number of instances of the sensor.
This is equal to the number of sensors per environment multiplied by the number of environments.
- property data: CameraData#
Data from the sensor.
This property is only updated when the user tries to access the data. This is done to avoid unnecessary computation when the sensor data is not used.
For updating the sensor when this property is accessed, you can use the following code snippet in your sensor implementation:
```python
# update sensors if needed
self._update_outdated_buffers()
# return the data (where `_data` is the data for the sensor)
return self._data
```
- property frame: torch.tensor#
Frame number when the measurement took place.
- property render_product_paths: list[str]#
The path of the render products for the cameras.
This can be used via replicator interfaces to attach to writes or external annotator registry.
- set_intrinsic_matrices(matrices: torch.Tensor, focal_length: float = 1.0, env_ids: Sequence[int] | None = None)[source]#
Set parameters of the USD camera from its intrinsic matrix.
The intrinsic matrix and focal length are used to set the following parameters to the USD camera:
- `focal_length`: The focal length of the camera.
- `horizontal_aperture`: The horizontal aperture of the camera.
- `vertical_aperture`: The vertical aperture of the camera.
- `horizontal_aperture_offset`: The horizontal offset of the camera.
- `vertical_aperture_offset`: The vertical offset of the camera.
Warning
Due to limitations of the Omniverse camera, we need to assume that the camera is a spherical lens, i.e. it has square pixels, and that the optical center is centered at the camera eye. If this assumption does not hold for the input intrinsic matrix, the camera will not be set up correctly.
- Parameters:
matrices – The intrinsic matrices for the camera. Shape is (N, 3, 3).
focal_length – Focal length to use when computing aperture values (in cm). Defaults to 1.0.
env_ids – The sensor ids to manipulate. Defaults to None, which means all sensor indices.
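The relationship these parameters encode can be sketched from the pinhole model. The following is a plausible derivation assuming square pixels and the relation fx = focal_length * width / horizontal_aperture; the function name and exact unit conventions are illustrative, not the library's internals.

```python
def intrinsics_to_usd_params(K, width, height, focal_length=1.0):
    """Sketch: derive USD camera aperture parameters from a 3x3 pinhole
    intrinsic matrix K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]."""
    fx, fy = K[0][0], K[1][1]
    cx, cy = K[0][2], K[1][2]
    return {
        "focal_length": focal_length,
        "horizontal_aperture": focal_length * width / fx,
        "vertical_aperture": focal_length * height / fy,
        # principal-point offset from the image center, in aperture units
        "horizontal_aperture_offset": focal_length * (cx - width / 2) / fx,
        "vertical_aperture_offset": focal_length * (cy - height / 2) / fy,
    }
```

For a centered 640x480 camera with fx = fy = 600, this yields a horizontal aperture of 640/600 and zero aperture offsets.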
- set_world_poses(positions: torch.Tensor | None = None, orientations: torch.Tensor | None = None, env_ids: Sequence[int] | None = None, convention: Literal['opengl', 'ros', 'world'] = 'ros')[source]#
Set the pose of the camera w.r.t. the world frame using specified convention.
Since different fields use different conventions for camera orientations, the method allows users to set the camera poses in the specified convention. Possible conventions are:
- `"opengl"` - forward axis: -Z, up axis: +Y. Offset is applied in the OpenGL (Usd.Camera) convention.
- `"ros"` - forward axis: +Z, up axis: -Y. Offset is applied in the ROS convention.
- `"world"` - forward axis: +X, up axis: +Z. Offset is applied in the World Frame convention.

See `omni.isaac.lab.sensors.camera.utils.convert_camera_frame_orientation_convention()` for more details on the conventions.

- Parameters:
positions – The cartesian coordinates (in meters). Shape is (N, 3). Defaults to None, in which case the camera position is not changed.
orientations – The quaternion orientation in (w, x, y, z). Shape is (N, 4). Defaults to None, in which case the camera orientation is not changed.
env_ids – The sensor ids to manipulate. Defaults to None, which means all sensor indices.
convention – The convention in which the poses are fed. Defaults to "ros".
- Raises:
RuntimeError – If the camera prim is not set. Need to call the `initialize()` method first.
- set_world_poses_from_view(eyes: torch.Tensor, targets: torch.Tensor, env_ids: Sequence[int] | None = None)[source]#
Set the poses of the camera from the eye position and look-at target position.
- Parameters:
eyes – The positions of the camera’s eye. Shape is (N, 3).
targets – The target locations to look at. Shape is (N, 3).
env_ids – The sensor ids to manipulate. Defaults to None, which means all sensor indices.
- Raises:
RuntimeError – If the camera prim is not set. Need to call the `initialize()` method first.
NotImplementedError – If the stage up-axis is not "Y" or "Z".
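The look-at computation behind this method can be illustrated in the "world" convention (forward +X, up +Z). This is a plain-Python sketch for intuition, not the actual implementation, which also handles batching and the stage up-axis.

```python
import math

def _normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def look_at_axes(eye, target, up=(0.0, 0.0, 1.0)):
    """Camera axes in the "world" convention: forward (+X), left (+Y),
    up (+Z), given an eye position and a look-at target."""
    forward = _normalize(tuple(t - e for t, e in zip(target, eye)))
    left = _normalize(_cross(up, forward))
    cam_up = _cross(forward, left)
    return forward, left, cam_up
```

Looking down the world +X axis yields the identity axes: forward (1, 0, 0), left (0, 1, 0), up (0, 0, 1).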
- reset(env_ids: Sequence[int] | None = None)[source]#
Resets the sensor internals.
- Parameters:
env_ids – The sensor ids to reset. Defaults to None.
- property has_debug_vis_implementation: bool#
Whether the sensor has a debug visualization implemented.
- class omni.isaac.lab.sensors.CameraData[source]#
Data container for the camera sensor.
Attributes:
pos_w | Position of the sensor origin in world frame, following ROS convention.
quat_w_world | Quaternion orientation (w, x, y, z) of the sensor origin in world frame, following the world coordinate frame convention.
image_shape | A tuple containing (height, width) of the camera sensor.
intrinsic_matrices | The intrinsic matrices for the camera.
output | The retrieved sensor data with sensor types as key.
info | The retrieved sensor info with sensor types as key.
quat_w_ros | Quaternion orientation (w, x, y, z) of the sensor origin in the world frame, following ROS convention.
quat_w_opengl | Quaternion orientation (w, x, y, z) of the sensor origin in the world frame, following the OpenGL / USD Camera convention.
- pos_w: torch.Tensor = None#
Position of the sensor origin in world frame, following ROS convention.
Shape is (N, 3) where N is the number of sensors.
- quat_w_world: torch.Tensor = None#
Quaternion orientation (w, x, y, z) of the sensor origin in world frame, following the world coordinate frame convention.
Note
World frame convention follows the camera aligned with forward axis +X and up axis +Z.
Shape is (N, 4) where N is the number of sensors.
- intrinsic_matrices: torch.Tensor = None#
The intrinsic matrices for the camera.
Shape is (N, 3, 3) where N is the number of sensors.
- output: dict[str, torch.Tensor] = None#
The retrieved sensor data with sensor types as key.
The format of the data is available in the Replicator Documentation. For semantic-based data, this corresponds to the `"data"` key in the output of the sensor.
- info: list[dict[str, Any]] = None#
The retrieved sensor info with sensor types as key.
This contains extra information provided by the sensor, such as semantic segmentation label mapping and prim paths. For semantic-based data, this corresponds to the `"info"` key in the output of the sensor. For other sensor types, the info is empty.
- property quat_w_ros: torch.Tensor#
Quaternion orientation (w, x, y, z) of the sensor origin in the world frame, following ROS convention.
Note
ROS convention follows the camera aligned with forward axis +Z and up axis -Y.
Shape is (N, 4) where N is the number of sensors.
- property quat_w_opengl: torch.Tensor#
Quaternion orientation (w, x, y, z) of the sensor origin in the world frame, following Opengl / USD Camera convention.
Note
OpenGL convention follows the camera aligned with forward axis -Z and up axis +Y.
Shape is (N, 4) where N is the number of sensors.
- class omni.isaac.lab.sensors.CameraCfg[source]#
Bases:
SensorBaseCfg
Configuration for a camera sensor.
Classes:
OffsetCfg | The offset pose of the sensor's frame from the sensor's parent frame.
Attributes:
offset | The offset pose of the sensor's frame from the sensor's parent frame.
spawn | Spawn configuration for the asset.
depth_clipping_behavior | Clipping behavior for the camera for values that exceed the maximum value.
prim_path | Prim path (or expression) to the sensor.
update_period | Update period of the sensor buffers (in seconds).
history_length | Number of past frames to store in the sensor buffers.
debug_vis | Whether to visualize the sensor.
data_types | List of sensor names/types to enable for the camera.
width | Width of the image in pixels.
height | Height of the image in pixels.
semantic_filter | A string or a list specifying a semantic filter predicate.
colorize_semantic_segmentation | Whether to colorize the semantic segmentation images.
colorize_instance_id_segmentation | Whether to colorize the instance ID segmentation images.
colorize_instance_segmentation | Whether to colorize the instance segmentation images.
- class OffsetCfg[source]#
Bases:
object
The offset pose of the sensor’s frame from the sensor’s parent frame.
Attributes:
pos | Translation w.r.t. the parent frame.
rot | Quaternion rotation (w, x, y, z) w.r.t. the parent frame.
convention | The convention in which the frame offset is applied.
- rot: tuple[float, float, float, float]#
Quaternion rotation (w, x, y, z) w.r.t. the parent frame. Defaults to (1.0, 0.0, 0.0, 0.0).
- convention: Literal['opengl', 'ros', 'world']#
The convention in which the frame offset is applied. Defaults to “ros”.
- `"opengl"` - forward axis: `-Z`, up axis: `+Y`. Offset is applied in the OpenGL (Usd.Camera) convention.
- `"ros"` - forward axis: `+Z`, up axis: `-Y`. Offset is applied in the ROS convention.
- `"world"` - forward axis: `+X`, up axis: `+Z`. Offset is applied in the World Frame convention.
- offset: OffsetCfg#
The offset pose of the sensor’s frame from the sensor’s parent frame. Defaults to identity.
Note
The parent frame is the frame the sensor attaches to. For example, the parent frame of a camera at path `/World/envs/env_0/Robot/Camera` is `/World/envs/env_0/Robot`.
- spawn: PinholeCameraCfg | FisheyeCameraCfg | None#
Spawn configuration for the asset.
If None, then the prim is not spawned by the asset. Instead, it is assumed that the asset is already present in the scene.
- depth_clipping_behavior: Literal['max', 'zero', 'none']#
Clipping behavior for the camera for values that exceed the maximum value. Defaults to "none".
- `"max"`: Values are clipped to the maximum value.
- `"zero"`: Values are clipped to zero.
- `"none"`: No clipping is applied. Values will be returned as `inf`.
- prim_path: str#
Prim path (or expression) to the sensor.
Note
The expression can contain the environment namespace regex `{ENV_REGEX_NS}`, which will be replaced with the environment namespace.

Example: `{ENV_REGEX_NS}/Robot/sensor` will be replaced with `/World/envs/env_.*/Robot/sensor`.
- update_period: float#
Update period of the sensor buffers (in seconds). Defaults to 0.0 (update every step).
- history_length: int#
Number of past frames to store in the sensor buffers. Defaults to 0, which means that only the current data is stored (no history).
- data_types: list[str]#
List of sensor names/types to enable for the camera. Defaults to [“rgb”].
Please refer to the `Camera` class for a list of available data types.
- semantic_filter: str | list[str]#
A string or a list specifying a semantic filter predicate. Defaults to `"*:*"`.

If a string, it should be a disjunctive normal form of (semantic type, labels). For example:

- `"typeA : labelA & !labelB | labelC , typeB: labelA ; typeC: labelE"`: All prims with semantic type "typeA" and label "labelA" but not "labelB", or with label "labelC". Also, all prims with semantic type "typeB" and label "labelA", or with semantic type "typeC" and label "labelE".
- `"typeA : * ; * : labelA"`: All prims with semantic type "typeA" or with label "labelA".

If a list of strings, each string should be a semantic type. The segmentation for prims with semantics of the specified types will be retrieved. For example, if the list is ["class"], only the segmentation for prims with semantics of type "class" will be retrieved.
See also
For more information on the semantics filter, see the documentation on Replicator Semantics Schema Editor.
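To make the clause structure concrete, here is a small, hypothetical helper that splits a filter string into its (semantic type, label expression) clauses. It is not Replicator's parser; it only illustrates the grammar described above, where `;` and `,` separate clauses and `:` separates the semantic type from its label expression.

```python
import re

def split_semantic_filter(predicate: str):
    """Hypothetical helper: split a semantic-filter predicate into
    (semantic type, label expression) clauses. Illustration only."""
    clauses = []
    for clause in re.split(r"[;,]", predicate):
        sem_type, _, labels = clause.partition(":")
        clauses.append((sem_type.strip(), labels.strip()))
    return clauses
```

Applied to the example predicate above, this yields three clauses: ("typeA", "labelA & !labelB | labelC"), ("typeB", "labelA"), and ("typeC", "labelE").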
- colorize_semantic_segmentation: bool#
Whether to colorize the semantic segmentation images. Defaults to True.
If True, semantic segmentation is converted to an image where semantic IDs are mapped to colors and returned as a `uint8` 4-channel array. If False, the output is returned as an `int32` array.
- colorize_instance_id_segmentation: bool#
Whether to colorize the instance ID segmentation images. Defaults to True.
If True, instance id segmentation is converted to an image where instance IDs are mapped to colors and returned as a `uint8` 4-channel array. If False, the output is returned as an `int32` array.
- colorize_instance_segmentation: bool#
Whether to colorize the instance segmentation images. Defaults to True.
If True, instance segmentation is converted to an image where instance IDs are mapped to colors and returned as a `uint8` 4-channel array. If False, the output is returned as an `int32` array.
Tile-Rendered USD Camera#
- class omni.isaac.lab.sensors.TiledCamera[source]#
Bases:
Camera
The tiled rendering based camera sensor for acquiring the same data as the Camera class.
This class inherits from the `Camera` class but uses the tiled-rendering API to acquire the visual data. Tiled rendering concatenates the rendered images from multiple cameras into a single image. This allows for rendering multiple cameras in parallel and is useful for rendering large scenes with multiple cameras efficiently.

The following sensor types are supported:
- `"rgb"`: A 3-channel rendered color image.
- `"rgba"`: A 4-channel rendered color image with alpha channel.
- `"distance_to_camera"`: An image containing the distance to camera optical center.
- `"distance_to_image_plane"`: An image containing distances of 3D points from camera plane along camera's z-axis.
- `"depth"`: Alias for `"distance_to_image_plane"`.
- `"normals"`: An image containing the local surface normal vectors at each pixel.
- `"motion_vectors"`: An image containing the motion vector data at each pixel.
- `"semantic_segmentation"`: The semantic segmentation data.
- `"instance_segmentation_fast"`: The instance segmentation data.
- `"instance_id_segmentation_fast"`: The instance id segmentation data.
Note
Currently the following sensor types are not supported in a “view” format:
- `"instance_segmentation"`: The instance segmentation data. Please use the fast counterpart instead.
- `"instance_id_segmentation"`: The instance id segmentation data. Please use the fast counterpart instead.
- `"bounding_box_2d_tight"`: The tight 2D bounding box data (only contains non-occluded regions).
- `"bounding_box_2d_tight_fast"`: The tight 2D bounding box data (only contains non-occluded regions).
- `"bounding_box_2d_loose"`: The loose 2D bounding box data (contains occluded regions).
- `"bounding_box_2d_loose_fast"`: The loose 2D bounding box data (contains occluded regions).
- `"bounding_box_3d"`: The 3D view space bounding box data.
- `"bounding_box_3d_fast"`: The 3D view space bounding box data.
New in version v1.0.0: This feature is available starting from Isaac Sim 4.2. Before this version, the tiled rendering APIs were not available.
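The tiling idea can be illustrated independently of the rendering API: given N same-sized images, lay them out on a grid inside one large image. A minimal sketch with nested lists follows; the real API tiles render buffers on the GPU, so this only conveys the layout.

```python
def tile_images(images, cols):
    """Sketch of the tiled layout: concatenate N same-sized images
    (nested lists of pixels, shape H x W) into one large image with
    `cols` tiles per row, padding unused tiles with zeros."""
    n, h, w = len(images), len(images[0]), len(images[0][0])
    rows = (n + cols - 1) // cols
    blank = [[0] * w for _ in range(h)]
    padded = images + [blank] * (rows * cols - n)
    tiled = []
    for r in range(rows):
        row_imgs = padded[r * cols:(r + 1) * cols]
        for y in range(h):
            tiled.append([px for img in row_imgs for px in img[y]])
    return tiled
```

For example, three 2x2 images tiled two per row produce a 4x4 image whose last tile is blank padding.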
Attributes:
cfg | The configuration parameters.
UNSUPPORTED_TYPES | The set of sensor types that are not supported by the camera class.
data | Data from the sensor.
device | Memory device for computation.
frame | Frame number when the measurement took place.
has_debug_vis_implementation | Whether the sensor has a debug visualization implemented.
image_shape | A tuple containing (height, width) of the camera sensor.
is_initialized | Whether the sensor is initialized.
num_instances | Number of instances of the sensor.
render_product_paths | The path of the render products for the cameras.
Methods:
__init__(cfg) | Initializes the tiled camera sensor.
reset([env_ids]) | Resets the sensor internals.
set_debug_vis(debug_vis) | Sets whether to visualize the sensor data.
set_intrinsic_matrices(matrices[, ...]) | Set parameters of the USD camera from its intrinsic matrix.
set_world_poses([positions, orientations, ...]) | Set the pose of the camera w.r.t. the world frame.
set_world_poses_from_view(eyes, targets[, ...]) | Set the poses of the camera from the eye position and look-at target position.
- cfg: TiledCameraCfg#
The configuration parameters.
- __init__(cfg: TiledCameraCfg)[source]#
Initializes the tiled camera sensor.
- Parameters:
cfg – The configuration parameters.
- Raises:
RuntimeError – If no camera prim is found at the given path.
RuntimeError – If the Isaac Sim version is below 4.2.
ValueError – If the provided data types are not supported by the camera.
- reset(env_ids: Sequence[int] | None = None)[source]#
Resets the sensor internals.
- Parameters:
env_ids – The sensor ids to reset. Defaults to None.
- UNSUPPORTED_TYPES: set[str] = {'bounding_box_2d_loose', 'bounding_box_2d_loose_fast', 'bounding_box_2d_tight', 'bounding_box_2d_tight_fast', 'bounding_box_3d', 'bounding_box_3d_fast', 'instance_id_segmentation', 'instance_segmentation'}#
The set of sensor types that are not supported by the camera class.
- property data: CameraData#
Data from the sensor.
This property is only updated when the user tries to access the data. This is done to avoid unnecessary computation when the sensor data is not used.
For updating the sensor when this property is accessed, you can use the following code snippet in your sensor implementation:
```python
# update sensors if needed
self._update_outdated_buffers()
# return the data (where `_data` is the data for the sensor)
return self._data
```
- property frame: torch.tensor#
Frame number when the measurement took place.
- property has_debug_vis_implementation: bool#
Whether the sensor has a debug visualization implemented.
- property is_initialized: bool#
Whether the sensor is initialized.
Returns True if the sensor is initialized, False otherwise.
- property num_instances: int#
Number of instances of the sensor.
This is equal to the number of sensors per environment multiplied by the number of environments.
- property render_product_paths: list[str]#
The path of the render products for the cameras.
This can be used via replicator interfaces to attach to writes or external annotator registry.
- set_debug_vis(debug_vis: bool) → bool#
Sets whether to visualize the sensor data.
- Parameters:
debug_vis – Whether to visualize the sensor data.
- Returns:
Whether the debug visualization was successfully set. False if the sensor does not support debug visualization.
- set_intrinsic_matrices(matrices: torch.Tensor, focal_length: float = 1.0, env_ids: Sequence[int] | None = None)#
Set parameters of the USD camera from its intrinsic matrix.
The intrinsic matrix and focal length are used to set the following parameters to the USD camera:
- `focal_length`: The focal length of the camera.
- `horizontal_aperture`: The horizontal aperture of the camera.
- `vertical_aperture`: The vertical aperture of the camera.
- `horizontal_aperture_offset`: The horizontal offset of the camera.
- `vertical_aperture_offset`: The vertical offset of the camera.
Warning
Due to limitations of the Omniverse camera, we need to assume that the camera is a spherical lens, i.e. it has square pixels, and that the optical center is centered at the camera eye. If this assumption does not hold for the input intrinsic matrix, the camera will not be set up correctly.
- Parameters:
matrices – The intrinsic matrices for the camera. Shape is (N, 3, 3).
focal_length – Focal length to use when computing aperture values (in cm). Defaults to 1.0.
env_ids – The sensor ids to manipulate. Defaults to None, which means all sensor indices.
- set_world_poses(positions: torch.Tensor | None = None, orientations: torch.Tensor | None = None, env_ids: Sequence[int] | None = None, convention: Literal['opengl', 'ros', 'world'] = 'ros')#
Set the pose of the camera w.r.t. the world frame using specified convention.
Since different fields use different conventions for camera orientations, the method allows users to set the camera poses in the specified convention. Possible conventions are:
- `"opengl"` - forward axis: -Z, up axis: +Y. Offset is applied in the OpenGL (Usd.Camera) convention.
- `"ros"` - forward axis: +Z, up axis: -Y. Offset is applied in the ROS convention.
- `"world"` - forward axis: +X, up axis: +Z. Offset is applied in the World Frame convention.

See `omni.isaac.lab.sensors.camera.utils.convert_camera_frame_orientation_convention()` for more details on the conventions.

- Parameters:
positions – The cartesian coordinates (in meters). Shape is (N, 3). Defaults to None, in which case the camera position is not changed.
orientations – The quaternion orientation in (w, x, y, z). Shape is (N, 4). Defaults to None, in which case the camera orientation is not changed.
env_ids – The sensor ids to manipulate. Defaults to None, which means all sensor indices.
convention – The convention in which the poses are fed. Defaults to "ros".
- Raises:
RuntimeError – If the camera prim is not set. Need to call the `initialize()` method first.
- set_world_poses_from_view(eyes: torch.Tensor, targets: torch.Tensor, env_ids: Sequence[int] | None = None)#
Set the poses of the camera from the eye position and look-at target position.
- Parameters:
eyes – The positions of the camera’s eye. Shape is (N, 3).
targets – The target locations to look at. Shape is (N, 3).
env_ids – The sensor ids to manipulate. Defaults to None, which means all sensor indices.
- Raises:
RuntimeError – If the camera prim is not set. Need to call the `initialize()` method first.
NotImplementedError – If the stage up-axis is not "Y" or "Z".
- class omni.isaac.lab.sensors.TiledCameraCfg[source]#
Bases:
CameraCfg
Configuration for a tiled rendering-based camera sensor.
Classes:
OffsetCfg | The offset pose of the sensor's frame from the sensor's parent frame.
Attributes:
prim_path | Prim path (or expression) to the sensor.
update_period | Update period of the sensor buffers (in seconds).
history_length | Number of past frames to store in the sensor buffers.
debug_vis | Whether to visualize the sensor.
offset | The offset pose of the sensor's frame from the sensor's parent frame.
spawn | Spawn configuration for the asset.
depth_clipping_behavior | Clipping behavior for the camera for values that exceed the maximum value.
data_types | List of sensor names/types to enable for the camera.
width | Width of the image in pixels.
height | Height of the image in pixels.
semantic_filter | A string or a list specifying a semantic filter predicate.
colorize_semantic_segmentation | Whether to colorize the semantic segmentation images.
colorize_instance_id_segmentation | Whether to colorize the instance ID segmentation images.
colorize_instance_segmentation | Whether to colorize the instance segmentation images.
return_latest_camera_pose | Whether to return the latest camera pose when fetching the camera's data.
- class OffsetCfg#
Bases:
object
The offset pose of the sensor’s frame from the sensor’s parent frame.
Attributes:
pos | Translation w.r.t. the parent frame.
rot | Quaternion rotation (w, x, y, z) w.r.t. the parent frame.
convention | The convention in which the frame offset is applied.
- rot: tuple[float, float, float, float]#
Quaternion rotation (w, x, y, z) w.r.t. the parent frame. Defaults to (1.0, 0.0, 0.0, 0.0).
- convention: Literal['opengl', 'ros', 'world']#
The convention in which the frame offset is applied. Defaults to “ros”.
- `"opengl"` - forward axis: `-Z`, up axis: `+Y`. Offset is applied in the OpenGL (Usd.Camera) convention.
- `"ros"` - forward axis: `+Z`, up axis: `-Y`. Offset is applied in the ROS convention.
- `"world"` - forward axis: `+X`, up axis: `+Z`. Offset is applied in the World Frame convention.
- prim_path: str#
Prim path (or expression) to the sensor.
Note
The expression can contain the environment namespace regex `{ENV_REGEX_NS}`, which will be replaced with the environment namespace.

Example: `{ENV_REGEX_NS}/Robot/sensor` will be replaced with `/World/envs/env_.*/Robot/sensor`.
- update_period: float#
Update period of the sensor buffers (in seconds). Defaults to 0.0 (update every step).
- history_length: int#
Number of past frames to store in the sensor buffers. Defaults to 0, which means that only the current data is stored (no history).
- offset: OffsetCfg#
The offset pose of the sensor’s frame from the sensor’s parent frame. Defaults to identity.
Note
The parent frame is the frame the sensor attaches to. For example, the parent frame of a camera at path `/World/envs/env_0/Robot/Camera` is `/World/envs/env_0/Robot`.
- spawn: PinholeCameraCfg | FisheyeCameraCfg | None#
Spawn configuration for the asset.
If None, then the prim is not spawned by the asset. Instead, it is assumed that the asset is already present in the scene.
- depth_clipping_behavior: Literal['max', 'zero', 'none']#
Clipping behavior for the camera for values that exceed the maximum value. Defaults to "none".
- `"max"`: Values are clipped to the maximum value.
- `"zero"`: Values are clipped to zero.
- `"none"`: No clipping is applied. Values will be returned as `inf`.
- data_types: list[str]#
List of sensor names/types to enable for the camera. Defaults to [“rgb”].
Please refer to the `Camera` class for a list of available data types.
- semantic_filter: str | list[str]#
A string or a list specifying a semantic filter predicate. Defaults to `"*:*"`.

If a string, it should be a disjunctive normal form of (semantic type, labels). For example:

- `"typeA : labelA & !labelB | labelC , typeB: labelA ; typeC: labelE"`: All prims with semantic type "typeA" and label "labelA" but not "labelB", or with label "labelC". Also, all prims with semantic type "typeB" and label "labelA", or with semantic type "typeC" and label "labelE".
- `"typeA : * ; * : labelA"`: All prims with semantic type "typeA" or with label "labelA".

If a list of strings, each string should be a semantic type. The segmentation for prims with semantics of the specified types will be retrieved. For example, if the list is ["class"], only the segmentation for prims with semantics of type "class" will be retrieved.
See also
For more information on the semantics filter, see the documentation on Replicator Semantics Schema Editor.
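To make the semicolon-separated syntax concrete, here is a toy parser for the top-level clauses. It is purely illustrative; the actual filtering is performed by Omniverse Replicator, not by code like this:

```python
# Toy parser for the "type : predicate ; type : predicate" filter syntax.
# Purely illustrative -- real filtering is done by Omniverse Replicator.
def parse_semantic_filter(expr: str) -> dict[str, str]:
    clauses = {}
    for clause in expr.split(";"):
        sem_type, _, predicate = clause.partition(":")
        clauses[sem_type.strip()] = predicate.strip()
    return clauses

parse_semantic_filter("typeA : * ; * : labelA")
# → {"typeA": "*", "*": "labelA"}
```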
- colorize_semantic_segmentation: bool#
Whether to colorize the semantic segmentation images. Defaults to True.
If True, semantic segmentation is converted to an image where semantic IDs are mapped to colors and returned as a uint8 4-channel array. If False, the output is returned as an int32 array.
- colorize_instance_id_segmentation: bool#
Whether to colorize the instance ID segmentation images. Defaults to True.
If True, instance ID segmentation is converted to an image where instance IDs are mapped to colors and returned as a uint8 4-channel array. If False, the output is returned as an int32 array.
- colorize_instance_segmentation: bool#
Whether to colorize the instance segmentation images. Defaults to True.
If True, instance segmentation is converted to an image where instance IDs are mapped to colors and returned as a uint8 4-channel array. If False, the output is returned as an int32 array.
- return_latest_camera_pose: bool#
Whether to return the latest camera pose when fetching the camera’s data. Defaults to False.
If True, the latest camera pose is returned in the camera's data, which will slow down performance due to the use of XformPrimView. If False, the pose of the camera during initialization is returned.
Contact Sensor#
- class omni.isaac.lab.sensors.ContactSensor[source]#
Bases:
SensorBase
A contact reporting sensor.
The contact sensor reports the normal contact forces on a rigid body in the world frame. It relies on the PhysX ContactReporter API to be activated on the rigid bodies.
To enable the contact reporter on a rigid body, please make sure to enable omni.isaac.lab.sim.spawner.RigidObjectSpawnerCfg.activate_contact_sensors in your asset spawner configuration. This will enable the contact reporter on all the rigid bodies in the asset.
The sensor can be configured to report the contact forces on a set of bodies with a given filter pattern using ContactSensorCfg.filter_prim_paths_expr. This is useful when you want to report the contact forces between the sensor bodies and a specific set of bodies in the scene. The data can be accessed using ContactSensorData.force_matrix_w. Please check the documentation on RigidContactView for more details.
The reporting of the filtered contact forces is only possible as one-to-many. This means that only one sensor body in an environment can be filtered against multiple bodies in that environment. If you need to filter multiple sensor bodies against multiple bodies, you need to create separate sensors for each sensor body.
As an example, suppose you want to report the contact forces for all the feet of a robot exclusively against an object. In that case, setting ContactSensorCfg.prim_path and ContactSensorCfg.filter_prim_paths_expr to {ENV_REGEX_NS}/Robot/.*_FOOT and {ENV_REGEX_NS}/Object respectively will not work. Instead, you need to create a separate sensor for each foot and filter it against the object.
Attributes:
The configuration parameters.
Number of instances of the sensor.
Data from the sensor.
Number of bodies with contact sensors attached.
Ordered names of bodies with contact sensors attached.
View for the rigid bodies captured (PhysX).
Contact reporter view for the bodies (PhysX).
Memory device for computation.
Whether the sensor has a debug visualization implemented.
Whether the sensor is initialized.
Methods:
__init__
(cfg)Initializes the contact sensor object.
reset
([env_ids])Resets the sensor internals.
find_bodies
(name_keys[, preserve_order])Find bodies in the articulation based on the name keys.
compute_first_contact
(dt[, abs_tol])Checks if bodies have established contact within the last dt seconds.
compute_first_air
(dt[, abs_tol])Checks if bodies have broken contact within the last dt seconds.
set_debug_vis
(debug_vis)Sets whether to visualize the sensor data.
- cfg: ContactSensorCfg#
The configuration parameters.
- __init__(cfg: ContactSensorCfg)[source]#
Initializes the contact sensor object.
- Parameters:
cfg – The configuration parameters.
- property num_instances: int#
Number of instances of the sensor.
This is equal to the number of sensors per environment multiplied by the number of environments.
- property data: ContactSensorData#
Data from the sensor.
This property is only updated when the user tries to access the data. This is done to avoid unnecessary computation when the sensor data is not used.
For updating the sensor when this property is accessed, you can use the following code snippet in your sensor implementation:
# update sensors if needed
self._update_outdated_buffers()
# return the data (where `_data` is the data for the sensor)
return self._data
- property body_physx_view: omni.physics.tensors.impl.api.RigidBodyView#
View for the rigid bodies captured (PhysX).
Note
Use this view with caution. It requires handling of tensors in a specific way.
- property contact_physx_view: omni.physics.tensors.impl.api.RigidContactView#
Contact reporter view for the bodies (PhysX).
Note
Use this view with caution. It requires handling of tensors in a specific way.
- reset(env_ids: Sequence[int] | None = None)[source]#
Resets the sensor internals.
- Parameters:
env_ids – The sensor ids to reset. Defaults to None.
- find_bodies(name_keys: str | Sequence[str], preserve_order: bool = False) tuple[list[int], list[str]] [source]#
Find bodies in the articulation based on the name keys.
- Parameters:
name_keys – A regular expression or a list of regular expressions to match the body names.
preserve_order – Whether to preserve the order of the name keys in the output. Defaults to False.
- Returns:
A tuple of lists containing the body indices and names.
- compute_first_contact(dt: float, abs_tol: float = 1e-08) torch.Tensor [source]#
Checks if bodies have established contact within the last dt seconds.
This function checks if the bodies have established contact within the last dt seconds by comparing the current contact time with the given time period. If the contact time is less than the given time period, then the bodies are considered to have just established contact.
Note
The function assumes that dt is a multiple of the sensor update time-step, i.e. \(dt / dt_{sensor} = n\), where \(n\) is a natural number. This is always true if the sensor is updated by the physics or the environment stepping time-step and the sensor is read by the environment stepping time-step.
- Parameters:
dt – The time period since the contact was established.
abs_tol – The absolute tolerance for the comparison.
- Returns:
A boolean tensor indicating the bodies that have established contact within the last dt seconds. Shape is (N, B), where N is the number of sensors and B is the number of bodies in each sensor.
- Raises:
RuntimeError – If the sensor is not configured to track contact time.
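The comparison the method performs can be sketched in plain Python, with lists standing in for the (N, B) torch tensors the sensor actually uses:

```python
# Plain-Python sketch of the first-contact check: a body has "just made
# contact" if its current contact timer is positive but no larger than dt
# (plus the absolute tolerance).
def first_contact(current_contact_time: list[float], dt: float,
                  abs_tol: float = 1e-8) -> list[bool]:
    return [0.0 < t <= dt + abs_tol for t in current_contact_time]

first_contact([0.0, 0.005, 0.02], dt=0.01)
# → [False, True, False]
```

compute_first_air mirrors this logic with the current air time in place of the contact time.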
- compute_first_air(dt: float, abs_tol: float = 1e-08) torch.Tensor [source]#
Checks if bodies have broken contact within the last dt seconds.
This function checks if the bodies have broken contact within the last dt seconds by comparing the current air time with the given time period. If the air time is less than the given time period, then the bodies are considered to have just broken contact.
Note
It assumes that dt is a multiple of the sensor update time-step, i.e. \(dt / dt_{sensor} = n\), where \(n\) is a natural number. This is always true if the sensor is updated by the physics or the environment stepping time-step and the sensor is read by the environment stepping time-step.
- Parameters:
dt – The time period since the contact was broken.
abs_tol – The absolute tolerance for the comparison.
- Returns:
A boolean tensor indicating the bodies that have broken contact within the last dt seconds. Shape is (N, B), where N is the number of sensors and B is the number of bodies in each sensor.
- Raises:
RuntimeError – If the sensor is not configured to track air time.
- property has_debug_vis_implementation: bool#
Whether the sensor has a debug visualization implemented.
- class omni.isaac.lab.sensors.ContactSensorData[source]#
Data container for the contact reporting sensor.
Attributes:
Position of the sensor origin in world frame.
Orientation of the sensor origin in quaternion (w, x, y, z) in world frame.
The net normal contact forces in world frame.
The net normal contact forces in world frame.
The normal contact forces filtered between the sensor bodies and filtered bodies in world frame.
Time spent (in s) in the air before the last contact.
Time spent (in s) in the air since the last detach.
Time spent (in s) in contact before the last detach.
Time spent (in s) in contact since the last contact.
- pos_w: torch.Tensor | None = None#
Position of the sensor origin in world frame.
Shape is (N, 3), where N is the number of sensors.
Note
If the
ContactSensorCfg.track_pose
is False, then this quantity is None.
- quat_w: torch.Tensor | None = None#
Orientation of the sensor origin in quaternion (w, x, y, z) in world frame.
Shape is (N, 4), where N is the number of sensors.
Note
If the
ContactSensorCfg.track_pose
is False, then this quantity is None.
- net_forces_w: torch.Tensor | None = None#
The net normal contact forces in world frame.
Shape is (N, B, 3), where N is the number of sensors and B is the number of bodies in each sensor.
Note
This quantity is the sum of the normal contact forces acting on the sensor bodies. It must not be confused with the total contact forces acting on the sensor bodies (which also includes the tangential forces).
- net_forces_w_history: torch.Tensor | None = None#
The net normal contact forces in world frame.
Shape is (N, T, B, 3), where N is the number of sensors, T is the configured history length and B is the number of bodies in each sensor.
In the history dimension, the first index is the most recent and the last index is the oldest.
Note
This quantity is the sum of the normal contact forces acting on the sensor bodies. It must not be confused with the total contact forces acting on the sensor bodies (which also includes the tangential forces).
- force_matrix_w: torch.Tensor | None = None#
The normal contact forces filtered between the sensor bodies and filtered bodies in world frame.
Shape is (N, B, M, 3), where N is the number of sensors, B is the number of bodies in each sensor, and M is the number of filtered bodies.
Note
If the
ContactSensorCfg.filter_prim_paths_expr
is empty, then this quantity is None.
- last_air_time: torch.Tensor | None = None#
Time spent (in s) in the air before the last contact.
Shape is (N, B), where N is the number of sensors and B is the number of bodies in each sensor.
Note
If the
ContactSensorCfg.track_air_time
is False, then this quantity is None.
- current_air_time: torch.Tensor | None = None#
Time spent (in s) in the air since the last detach.
Shape is (N, B), where N is the number of sensors and B is the number of bodies in each sensor.
Note
If the
ContactSensorCfg.track_air_time
is False, then this quantity is None.
- last_contact_time: torch.Tensor | None = None#
Time spent (in s) in contact before the last detach.
Shape is (N, B), where N is the number of sensors and B is the number of bodies in each sensor.
Note
If the
ContactSensorCfg.track_air_time
is False, then this quantity is None.
- current_contact_time: torch.Tensor | None = None#
Time spent (in s) in contact since the last contact.
Shape is (N, B), where N is the number of sensors and B is the number of bodies in each sensor.
Note
If the
ContactSensorCfg.track_air_time
is False, then this quantity is None.
- class omni.isaac.lab.sensors.ContactSensorCfg[source]#
Bases:
SensorBaseCfg
Configuration for the contact sensor.
Attributes:
Whether to track the pose of the sensor's origin.
Whether to track the air/contact time of the bodies (time between contacts).
Prim path (or expression) to the sensor.
Update period of the sensor buffers (in seconds).
Number of past frames to store in the sensor buffers.
Whether to visualize the sensor.
The threshold on the norm of the contact force that determines whether two bodies are in collision or not.
The list of primitive paths (or expressions) to filter contacts with.
The configuration object for the visualization markers.
- track_air_time: bool#
Whether to track the air/contact time of the bodies (time between contacts). Defaults to False.
- prim_path: str#
Prim path (or expression) to the sensor.
Note
The expression can contain the environment namespace regex {ENV_REGEX_NS}, which will be replaced with the environment namespace.
Example: {ENV_REGEX_NS}/Robot/sensor will be replaced with /World/envs/env_.*/Robot/sensor.
- update_period: float#
Update period of the sensor buffers (in seconds). Defaults to 0.0 (update every step).
- history_length: int#
Number of past frames to store in the sensor buffers. Defaults to 0, which means that only the current data is stored (no history).
- force_threshold: float#
The threshold on the norm of the contact force that determines whether two bodies are in collision or not.
This value is only used for tracking the mode duration (the time in contact or in air), if
track_air_time
is True.
- filter_prim_paths_expr: list[str]#
The list of primitive paths (or expressions) to filter contacts with. Defaults to an empty list, in which case no filtering is applied.
The contact sensor allows reporting contacts between the primitive specified with prim_path and other primitives in the scene. For instance, in a scene containing a robot, a ground plane and an object, you can obtain individual contact reports of the base of the robot with the ground plane and the object.
Note
The expressions in the list can contain the environment namespace regex {ENV_REGEX_NS}, which will be replaced with the environment namespace.
Example: {ENV_REGEX_NS}/Object will be replaced with /World/envs/env_.*/Object.
Attention
The reporting of filtered contacts only works when the sensor primitive prim_path corresponds to a single primitive in that environment. If the sensor primitive corresponds to multiple primitives, the filtering will not work as expected. Please check ContactSensor for more details.
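Given the one-to-many restriction described above, a per-foot configuration might look like the sketch below. The foot names and the object path are illustrative placeholders, not values from this documentation:

```python
# Hedged sketch: one contact sensor per foot, each filtered against the
# object. Foot names and the object path are illustrative placeholders.
from omni.isaac.lab.sensors import ContactSensorCfg

foot_contact_cfgs = {
    foot: ContactSensorCfg(
        prim_path=f"{{ENV_REGEX_NS}}/Robot/{foot}",
        filter_prim_paths_expr=["{ENV_REGEX_NS}/Object"],
    )
    for foot in ("LF_FOOT", "RF_FOOT", "LH_FOOT", "RH_FOOT")
}
```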
- visualizer_cfg: VisualizationMarkersCfg#
The configuration object for the visualization markers. Defaults to CONTACT_SENSOR_MARKER_CFG.
Note
This attribute is only used when debug visualization is enabled.
Frame Transformer#
- class omni.isaac.lab.sensors.FrameTransformer[source]#
Bases:
SensorBase
A sensor for reporting frame transforms.
This class provides an interface for reporting the transform of one or more frames (target frames) with respect to another frame (source frame). The source frame is specified by the user as a prim path (FrameTransformerCfg.prim_path) and the target frames are specified by the user as a list of prim paths (FrameTransformerCfg.target_frames).
The source frame and target frames are assumed to be rigid bodies. The transform of the target frames with respect to the source frame is computed by first extracting the transform of the source frame and target frames from the physics engine and then computing the relative transform between the two.
Additionally, the user can specify an offset for the source frame and each target frame. This is useful for specifying the transform of the desired frame with respect to the body’s center of mass, for instance.
A common example of using this sensor is to track the position and orientation of the end effector of a robotic manipulator. In this case, the source frame would be the body corresponding to the base frame of the manipulator, and the target frame would be the body corresponding to the end effector. Since the end-effector is typically a fictitious body, the user may need to specify an offset from the end-effector to the body of the manipulator.
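Based on the end-effector example above, such a tracker might be configured as in this sketch. The prim paths, frame name, offset values, and the OffsetCfg import path are assumptions for illustration:

```python
# Hedged sketch: report the end-effector pose relative to the robot base.
# Prim paths, frame name, offset values, and the OffsetCfg import path
# are placeholders/assumptions.
from omni.isaac.lab.sensors import FrameTransformerCfg
from omni.isaac.lab.sensors.frame_transformer import OffsetCfg  # import path assumed

ee_tracker_cfg = FrameTransformerCfg(
    prim_path="{ENV_REGEX_NS}/Robot/base",          # source frame (a rigid body)
    target_frames=[
        FrameTransformerCfg.FrameCfg(
            prim_path="{ENV_REGEX_NS}/Robot/hand",  # body nearest the end-effector
            name="end_effector",
            offset=OffsetCfg(pos=(0.0, 0.0, 0.1)),  # fictitious tool-tip offset
        ),
    ],
)
```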
Attributes:
The configuration parameters.
Data from the sensor.
Memory device for computation.
Whether the sensor has a debug visualization implemented.
Whether the sensor is initialized.
Number of instances of the sensor.
Methods:
__init__
(cfg)Initializes the frame transformer object.
reset
([env_ids])Resets the sensor internals.
set_debug_vis
(debug_vis)Sets whether to visualize the sensor data.
- cfg: FrameTransformerCfg#
The configuration parameters.
- __init__(cfg: FrameTransformerCfg)[source]#
Initializes the frame transformer object.
- Parameters:
cfg – The configuration parameters.
- property data: FrameTransformerData#
Data from the sensor.
This property is only updated when the user tries to access the data. This is done to avoid unnecessary computation when the sensor data is not used.
For updating the sensor when this property is accessed, you can use the following code snippet in your sensor implementation:
# update sensors if needed
self._update_outdated_buffers()
# return the data (where `_data` is the data for the sensor)
return self._data
- reset(env_ids: Sequence[int] | None = None)[source]#
Resets the sensor internals.
- Parameters:
env_ids – The sensor ids to reset. Defaults to None.
- property has_debug_vis_implementation: bool#
Whether the sensor has a debug visualization implemented.
- property is_initialized: bool#
Whether the sensor is initialized.
Returns True if the sensor is initialized, False otherwise.
- class omni.isaac.lab.sensors.FrameTransformerData[source]#
Data container for the frame transformer sensor.
Attributes:
Target frame names (this denotes the order in which that frame data is ordered).
Position of the target frame(s) relative to source frame.
Orientation of the target frame(s) relative to source frame quaternion (w, x, y, z).
Position of the target frame(s) after offset (in world frame).
Orientation of the target frame(s) after offset (in world frame) quaternion (w, x, y, z).
Position of the source frame after offset (in world frame).
Orientation of the source frame after offset (in world frame) quaternion (w, x, y, z).
- target_frame_names: list[str] = None#
Target frame names (this denotes the order in which that frame data is ordered).
The frame names are resolved from the
FrameTransformerCfg.FrameCfg.name
field. This does not necessarily follow the order in which the frames are defined in the config due to the regex matching.
- target_pos_source: torch.Tensor = None#
Position of the target frame(s) relative to source frame.
Shape is (N, M, 3), where N is the number of environments, and M is the number of target frames.
- target_quat_source: torch.Tensor = None#
Orientation of the target frame(s) relative to source frame quaternion (w, x, y, z).
Shape is (N, M, 4), where N is the number of environments, and M is the number of target frames.
- target_pos_w: torch.Tensor = None#
Position of the target frame(s) after offset (in world frame).
Shape is (N, M, 3), where N is the number of environments, and M is the number of target frames.
- target_quat_w: torch.Tensor = None#
Orientation of the target frame(s) after offset (in world frame) quaternion (w, x, y, z).
Shape is (N, M, 4), where N is the number of environments, and M is the number of target frames.
- source_pos_w: torch.Tensor = None#
Position of the source frame after offset (in world frame).
Shape is (N, 3), where N is the number of environments.
- source_quat_w: torch.Tensor = None#
Orientation of the source frame after offset (in world frame) quaternion (w, x, y, z).
Shape is (N, 4), where N is the number of environments.
- class omni.isaac.lab.sensors.FrameTransformerCfg[source]#
Bases:
SensorBaseCfg
Configuration for the frame transformer sensor.
Classes:
Information specific to a coordinate frame.
Attributes:
Update period of the sensor buffers (in seconds).
Number of past frames to store in the sensor buffers.
Whether to visualize the sensor.
The prim path of the body to transform from (source frame).
The pose offset from the source prim frame.
A list of the target frames.
The configuration object for the visualization markers.
- class FrameCfg[source]#
Bases:
object
Information specific to a coordinate frame.
Attributes:
The prim path corresponding to a rigid body.
User-defined name for the new coordinate frame.
The pose offset from the parent prim frame.
- prim_path: str#
The prim path corresponding to a rigid body.
This can be a regex pattern to match multiple prims. For example, “/Robot/.*” will match all prims under “/Robot”.
This means that if the source FrameTransformerCfg.prim_path is "/Robot/base" and the target FrameTransformerCfg.FrameCfg.prim_path is "/Robot/.*", then the frame transformer will track the poses of all the prims under "/Robot", including "/Robot/base" (even though this will result in an identity pose w.r.t. the source frame).
- update_period: float#
Update period of the sensor buffers (in seconds). Defaults to 0.0 (update every step).
- history_length: int#
Number of past frames to store in the sensor buffers. Defaults to 0, which means that only the current data is stored (no history).
- target_frames: list[omni.isaac.lab.sensors.frame_transformer.frame_transformer_cfg.FrameTransformerCfg.FrameCfg]#
A list of the target frames.
This allows a single FrameTransformer to handle multiple target prims. For example, in a quadruped, we can use a single FrameTransformer to track each foot’s position and orientation in the body frame using four frame offsets.
- visualizer_cfg: VisualizationMarkersCfg#
The configuration object for the visualization markers. Defaults to FRAME_MARKER_CFG.
Note
This attribute is only used when debug visualization is enabled.
Ray-Cast Sensor#
- class omni.isaac.lab.sensors.RayCaster[source]#
Bases:
SensorBase
A ray-casting sensor.
The ray-caster uses a set of rays to detect collisions with meshes in the scene. The rays are defined in the sensor’s local coordinate frame. The sensor can be configured to ray-cast against a set of meshes with a given ray pattern.
The meshes are parsed from the list of primitive paths provided in the configuration. These are then converted to warp meshes and stored in the warp_meshes list. The ray-caster then ray-casts against these warp meshes using the ray pattern provided in the configuration.
Note
Currently, only static meshes are supported. Extending the warp mesh to support dynamic meshes is a work in progress.
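A typical use of this sensor is a height scanner under a floating-base robot. The sketch below assumes the GridPatternCfg class and its fields from the patterns sub-module, and uses placeholder prim paths:

```python
# Hedged sketch: a height scanner ray-casting from the robot base.
# GridPatternCfg and its fields are assumed from the patterns sub-module;
# prim paths are placeholders.
from omni.isaac.lab.sensors import RayCasterCfg, patterns

height_scanner_cfg = RayCasterCfg(
    prim_path="{ENV_REGEX_NS}/Robot/base",
    attach_yaw_only=True,  # height maps only need yaw tracking
    pattern_cfg=patterns.GridPatternCfg(resolution=0.1, size=(1.6, 1.0)),
    mesh_prim_paths=["/World/ground"],  # a single static mesh, per the note above
)
```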
Attributes:
The configuration parameters.
Number of instances of the sensor.
Data from the sensor.
Memory device for computation.
Whether the sensor has a debug visualization implemented.
Whether the sensor is initialized.
Methods:
__init__
(cfg)Initializes the ray-caster object.
reset
([env_ids])Resets the sensor internals.
set_debug_vis
(debug_vis)Sets whether to visualize the sensor data.
- cfg: RayCasterCfg#
The configuration parameters.
- __init__(cfg: RayCasterCfg)[source]#
Initializes the ray-caster object.
- Parameters:
cfg – The configuration parameters.
- property num_instances: int#
Number of instances of the sensor.
This is equal to the number of sensors per environment multiplied by the number of environments.
- property data: RayCasterData#
Data from the sensor.
This property is only updated when the user tries to access the data. This is done to avoid unnecessary computation when the sensor data is not used.
For updating the sensor when this property is accessed, you can use the following code snippet in your sensor implementation:
# update sensors if needed
self._update_outdated_buffers()
# return the data (where `_data` is the data for the sensor)
return self._data
- reset(env_ids: Sequence[int] | None = None)[source]#
Resets the sensor internals.
- Parameters:
env_ids – The sensor ids to reset. Defaults to None.
- property has_debug_vis_implementation: bool#
Whether the sensor has a debug visualization implemented.
- class omni.isaac.lab.sensors.RayCasterData[source]#
Data container for the ray-cast sensor.
Attributes:
Position of the sensor origin in world frame.
Orientation of the sensor origin in quaternion (w, x, y, z) in world frame.
The ray hit positions in the world frame.
- pos_w: torch.Tensor = None#
Position of the sensor origin in world frame.
Shape is (N, 3), where N is the number of sensors.
- quat_w: torch.Tensor = None#
Orientation of the sensor origin in quaternion (w, x, y, z) in world frame.
Shape is (N, 4), where N is the number of sensors.
- ray_hits_w: torch.Tensor = None#
The ray hit positions in the world frame.
Shape is (N, B, 3), where N is the number of sensors, B is the number of rays in the scan pattern per sensor.
- class omni.isaac.lab.sensors.RayCasterCfg[source]#
Bases:
SensorBaseCfg
Configuration for the ray-cast sensor.
Classes:
The offset pose of the sensor's frame from the sensor's parent frame.
Attributes:
The list of mesh primitive paths to ray cast against.
The offset pose of the sensor's frame from the sensor's parent frame.
Prim path (or expression) to the sensor.
Update period of the sensor buffers (in seconds).
Number of past frames to store in the sensor buffers.
Whether to visualize the sensor.
Whether the rays' starting positions and directions only track the yaw orientation.
The pattern that defines the local ray starting positions and directions.
Maximum distance (in meters) from the sensor to ray cast to.
The range of drift (in meters) to add to the ray starting positions (xyz).
The configuration object for the visualization markers.
- class OffsetCfg[source]#
Bases:
object
The offset pose of the sensor’s frame from the sensor’s parent frame.
Attributes:
- mesh_prim_paths: list[str]#
The list of mesh primitive paths to ray cast against.
Note
Currently, only a single static mesh is supported. We are working on supporting multiple static meshes and dynamic meshes.
- offset: OffsetCfg#
The offset pose of the sensor’s frame from the sensor’s parent frame. Defaults to identity.
- prim_path: str#
Prim path (or expression) to the sensor.
Note
The expression can contain the environment namespace regex {ENV_REGEX_NS}, which will be replaced with the environment namespace.
Example: {ENV_REGEX_NS}/Robot/sensor will be replaced with /World/envs/env_.*/Robot/sensor.
- update_period: float#
Update period of the sensor buffers (in seconds). Defaults to 0.0 (update every step).
- history_length: int#
Number of past frames to store in the sensor buffers. Defaults to 0, which means that only the current data is stored (no history).
- attach_yaw_only: bool#
Whether the rays’ starting positions and directions only track the yaw orientation.
This is useful for ray-casting height maps, where only yaw rotation is needed.
- pattern_cfg: PatternBaseCfg#
The pattern that defines the local ray starting positions and directions.
- drift_range: tuple[float, float]#
The range of drift (in meters) to add to the ray starting positions (xyz). Defaults to (0.0, 0.0).
For floating base robots, this is useful for simulating drift in the robot’s pose estimation.
- visualizer_cfg: VisualizationMarkersCfg#
The configuration object for the visualization markers. Defaults to RAY_CASTER_MARKER_CFG.
Note
This attribute is only used when debug visualization is enabled.
Ray-Cast Camera#
- class omni.isaac.lab.sensors.RayCasterCamera[source]#
Bases:
RayCaster
A ray-casting camera sensor.
The ray-caster camera uses a set of rays to get the distances to meshes in the scene. The rays are defined in the sensor's local coordinate frame. The sensor has the same interface as the omni.isaac.lab.sensors.Camera class, which implements the camera through USD camera prims; however, this class provides faster image generation. The sensor converts meshes from the list of primitive paths provided in the configuration to Warp meshes. The camera then ray-casts against these Warp meshes only.
Currently, only the following annotators are supported:
"distance_to_camera": An image containing the distance to the camera optical center.
"distance_to_image_plane": An image containing distances of 3D points from the camera plane along the camera's z-axis.
"normals": An image containing the local surface normal vectors at each pixel.
Note
Currently, only static meshes are supported. Extending the warp mesh to support dynamic meshes is a work in progress.
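A depth-only configuration might look like the sketch below. The PinholeCameraPatternCfg class and its fields are assumed from the patterns sub-module, and the prim paths are placeholders:

```python
# Hedged sketch: a depth-only ray-cast camera. PinholeCameraPatternCfg and
# its fields are assumed from the patterns sub-module; paths are placeholders.
from omni.isaac.lab.sensors import RayCasterCameraCfg, patterns

depth_camera_cfg = RayCasterCameraCfg(
    prim_path="{ENV_REGEX_NS}/Robot/base",
    pattern_cfg=patterns.PinholeCameraPatternCfg(width=128, height=96),
    data_types=["distance_to_image_plane"],  # one of the supported annotators
    mesh_prim_paths=["/World/ground"],
    max_distance=10.0,  # rays clipped at 10 m
)
```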
Attributes:
The configuration parameters.
A set of sensor types that are not supported by the ray-caster camera.
Data from the sensor.
A tuple containing (height, width) of the camera sensor.
Frame number when the measurement took place.
Memory device for computation.
Whether the sensor has a debug visualization implemented.
Whether the sensor is initialized.
Number of instances of the sensor.
Methods:
__init__
(cfg)Initializes the camera object.
set_intrinsic_matrices
(matrices[, ...])Set the intrinsic matrix of the camera.
reset
([env_ids])Resets the sensor internals.
set_world_poses
([positions, orientations, ...])Set the pose of the camera w.r.t.
set_world_poses_from_view
(eyes, targets[, ...])Set the poses of the camera from the eye position and look-at target position.
set_debug_vis
(debug_vis)Sets whether to visualize the sensor data.
- cfg: RayCasterCameraCfg#
The configuration parameters.
- UNSUPPORTED_TYPES: ClassVar[set[str]] = {'bounding_box_2d_loose', 'bounding_box_2d_loose_fast', 'bounding_box_2d_tight', 'bounding_box_2d_tight_fast', 'bounding_box_3d', 'bounding_box_3d_fast', 'instance_id_segmentation', 'instance_id_segmentation_fast', 'instance_segmentation', 'instance_segmentation_fast', 'motion_vectors', 'rgb', 'semantic_segmentation', 'skeleton_data'}#
A set of sensor types that are not supported by the ray-caster camera.
- __init__(cfg: RayCasterCameraCfg)[source]#
Initializes the camera object.
- Parameters:
cfg – The configuration parameters.
- Raises:
ValueError – If the provided data types are not supported by the ray-caster camera.
- property data: CameraData#
Data from the sensor.
This property is only updated when the user tries to access the data. This is done to avoid unnecessary computation when the sensor data is not used.
For updating the sensor when this property is accessed, you can use the following code snippet in your sensor implementation:
# update sensors if needed
self._update_outdated_buffers()
# return the data (where `_data` is the data for the sensor)
return self._data
- property frame: torch.tensor#
Frame number when the measurement took place.
- set_intrinsic_matrices(matrices: torch.Tensor, focal_length: float = 1.0, env_ids: Sequence[int] | None = None)[source]#
Set the intrinsic matrix of the camera.
- Parameters:
matrices – The intrinsic matrices for the camera. Shape is (N, 3, 3).
focal_length – Focal length to use when computing aperture values (in cm). Defaults to 1.0.
env_ids – The sensor ids to manipulate. Defaults to None, which means all sensor indices.
- reset(env_ids: Sequence[int] | None = None)[source]#
Resets the sensor internals.
- Parameters:
env_ids – The sensor ids to reset. Defaults to None.
- set_world_poses(positions: torch.Tensor | None = None, orientations: torch.Tensor | None = None, env_ids: Sequence[int] | None = None, convention: Literal['opengl', 'ros', 'world'] = 'ros')[source]#
Set the pose of the camera w.r.t. the world frame using specified convention.
Since different fields use different conventions for camera orientations, the method allows users to set the camera poses in the specified convention. Possible conventions are:
"opengl"
- forward axis: -Z - up axis +Y - Offset is applied in the OpenGL (Usd.Camera) convention"ros"
- forward axis: +Z - up axis -Y - Offset is applied in the ROS convention"world"
- forward axis: +X - up axis +Z - Offset is applied in the World Frame convention
See
omni.isaac.lab.utils.maths.convert_camera_frame_orientation_convention()
for more details on the conventions.- Parameters:
positions – The cartesian coordinates (in meters). Shape is (N, 3). Defaults to None, in which case the camera position is not changed.
orientations – The quaternion orientation in (w, x, y, z). Shape is (N, 4). Defaults to None, in which case the camera orientation is not changed.
env_ids – The sensor ids to manipulate. Defaults to None, which means all sensor indices.
convention – The convention in which the poses are fed. Defaults to “ros”.
- Raises:
RuntimeError – If the camera prim is not set. Need to call
initialize()
method first.
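The relationship between these conventions can be illustrated with plain quaternion math. The helper below is not part of the Isaac Lab API; it is a minimal sketch showing that an identity orientation in the "opengl" convention corresponds to a 180-degree rotation about the camera X axis in the "ros" convention:

```python
import numpy as np

def quat_rotate(q, v):
    """Rotate vector v by quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    u = np.array([x, y, z])
    return 2.0 * np.dot(u, v) * u + (w * w - np.dot(u, u)) * v + 2.0 * w * np.cross(u, v)

# OpenGL convention at identity orientation: forward is -Z, up is +Y.
forward_gl = np.array([0.0, 0.0, -1.0])
up_gl = np.array([0.0, 1.0, 0.0])

# A 180-degree rotation about the camera X axis maps the OpenGL axes
# onto the ROS optical-frame axes (forward +Z, up -Y).
q_x180 = np.array([0.0, 1.0, 0.0, 0.0])  # (w, x, y, z)

forward_ros = quat_rotate(q_x180, forward_gl)  # -> +Z forward
up_ros = quat_rotate(q_x180, up_gl)            # -> -Y up
```

This is why the same physical camera pose is represented by different quaternions depending on the chosen convention.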
- set_world_poses_from_view(eyes: torch.Tensor, targets: torch.Tensor, env_ids: Sequence[int] | None = None)[source]#
Set the poses of the camera from the eye position and look-at target position.
- Parameters:
eyes – The positions of the camera’s eye. Shape is (N, 3).
targets – The target locations to look at. Shape is (N, 3).
env_ids – The sensor ids to manipulate. Defaults to None, which means all sensor indices.
- Raises:
RuntimeError – If the camera prim is not set. Need to call
initialize()
method first.
NotImplementedError – If the stage up-axis is not “Y” or “Z”.
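The math behind an eye/target pose can be sketched in a few lines. The helper below is hypothetical (not the Isaac Lab implementation): it builds a rotation matrix whose forward axis points from the eye to the target, using the "world" convention (+X forward, +Z up):

```python
import numpy as np

def look_at_rotation(eye, target, world_up=(0.0, 0.0, 1.0)):
    """Rotation matrix whose +X column points from eye to target ("world" convention).

    Assumes the eye-to-target direction is not parallel to world_up.
    """
    eye, target = np.asarray(eye, float), np.asarray(target, float)
    forward = target - eye
    forward /= np.linalg.norm(forward)
    # Build a right-handed basis around the forward direction.
    left = np.cross(world_up, forward)
    left /= np.linalg.norm(left)
    up = np.cross(forward, left)
    return np.column_stack([forward, left, up])  # columns are the camera axes

# Camera at height 1 m looking along +X: the result is the identity rotation.
R = look_at_rotation(eye=[0.0, 0.0, 1.0], target=[1.0, 0.0, 1.0])
```

Converting such a matrix to a quaternion and into the desired convention is what a pose-from-view method has to do internally.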
- property has_debug_vis_implementation: bool#
Whether the sensor has a debug visualization implemented.
- property is_initialized: bool#
Whether the sensor is initialized.
Returns True if the sensor is initialized, False otherwise.
- class omni.isaac.lab.sensors.RayCasterCameraCfg[source]#
Bases:
RayCasterCfg
Configuration for the ray-cast camera sensor.
Classes:
The offset pose of the sensor's frame from the sensor's parent frame.
Attributes:
The offset pose of the sensor's frame from the sensor's parent frame.
Prim path (or expression) to the sensor.
Update period of the sensor buffers (in seconds).
Number of past frames to store in the sensor buffers.
Whether to visualize the sensor.
The list of mesh primitive paths to ray cast against.
Whether the rays' starting positions and directions only track the yaw orientation.
Maximum distance (in meters) from the sensor to ray cast to.
The range of drift (in meters) to add to the ray starting positions (xyz).
The configuration object for the visualization markers.
List of sensor names/types to enable for the camera.
Clipping behavior for the camera for values that exceed the maximum value.
The pattern that defines the local ray starting positions and directions in a pinhole camera pattern.
- class OffsetCfg[source]#
Bases:
object
The offset pose of the sensor’s frame from the sensor’s parent frame.
Attributes:
Translation w.r.t.
Quaternion rotation (w, x, y, z) w.r.t.
The convention in which the frame offset is applied.
- rot: tuple[float, float, float, float]#
Quaternion rotation (w, x, y, z) w.r.t. the parent frame. Defaults to (1.0, 0.0, 0.0, 0.0).
- convention: Literal['opengl', 'ros', 'world']#
The convention in which the frame offset is applied. Defaults to “ros”.
"opengl"
- forward axis:-Z
- up axis:+Y
- Offset is applied in the OpenGL (Usd.Camera) convention."ros"
- forward axis:+Z
- up axis:-Y
- Offset is applied in the ROS convention."world"
- forward axis:+X
- up axis:+Z
- Offset is applied in the World Frame convention.
- offset: OffsetCfg#
The offset pose of the sensor’s frame from the sensor’s parent frame. Defaults to identity.
- prim_path: str#
Prim path (or expression) to the sensor.
Note
The expression can contain the environment namespace regex
{ENV_REGEX_NS}
which will be replaced with the environment namespace.
Example: {ENV_REGEX_NS}/Robot/sensor will be replaced with /World/envs/env_.*/Robot/sensor.
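The substitution described in the note can be mimicked with standard Python string and regex handling. The expansion value below is taken from the example above, not queried from Isaac Lab:

```python
import re

ENV_REGEX_NS = "/World/envs/env_.*"  # expansion value from the example above

prim_path = "{ENV_REGEX_NS}/Robot/sensor".replace("{ENV_REGEX_NS}", ENV_REGEX_NS)
# -> "/World/envs/env_.*/Robot/sensor"

# The expanded expression is a regex that matches every environment instance:
pattern = re.compile(prim_path)
matched = pattern.fullmatch("/World/envs/env_7/Robot/sensor") is not None  # True
```

This is why a single configuration entry can bind one sensor per cloned environment.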
- update_period: float#
Update period of the sensor buffers (in seconds). Defaults to 0.0 (update every step).
- history_length: int#
Number of past frames to store in the sensor buffers. Defaults to 0, which means that only the current data is stored (no history).
- mesh_prim_paths: list[str]#
The list of mesh primitive paths to ray cast against.
Note
Currently, only a single static mesh is supported. We are working on supporting multiple static meshes and dynamic meshes.
- attach_yaw_only: bool#
Whether the rays’ starting positions and directions only track the yaw orientation.
This is useful for ray-casting height maps, where only yaw rotation is needed.
- drift_range: tuple[float, float]#
The range of drift (in meters) to add to the ray starting positions (xyz). Defaults to (0.0, 0.0).
For floating base robots, this is useful for simulating drift in the robot’s pose estimation.
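As described, the drift is a bounded perturbation added to the ray starting positions. A minimal sketch of sampling such per-environment xyz drift (the helper name is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def sample_drift(num_envs, drift_range):
    """Uniform xyz drift per environment, bounded by drift_range (in meters)."""
    low, high = drift_range
    return rng.uniform(low, high, size=(num_envs, 3))

drift = sample_drift(num_envs=4, drift_range=(-0.05, 0.05))
# Applied by broadcasting over the rays of each environment, e.g.:
# ray_starts = ray_starts + drift[:, None, :]
```

Sampling one offset per environment (rather than per ray) models a consistent pose-estimation error for each robot.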
- visualizer_cfg: VisualizationMarkersCfg#
The configuration object for the visualization markers. Defaults to RAY_CASTER_MARKER_CFG.
Note
This attribute is only used when debug visualization is enabled.
- data_types: list[str]#
List of sensor names/types to enable for the camera. Defaults to [“distance_to_image_plane”].
- depth_clipping_behavior: Literal['max', 'zero', 'none']#
Clipping behavior for the camera for values that exceed the maximum value. Defaults to “none”.
"max": Values are clipped to the maximum value.
"zero": Values are clipped to zero.
"none": No clipping is applied. Values will be returned as inf for the distance_to_camera data type and nan for the distance_to_image_plane data type.
- pattern_cfg: PinholeCameraPatternCfg#
The pattern that defines the local ray starting positions and directions in a pinhole camera pattern.
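Putting the attributes above together, a configuration might look like the following. This is a sketch, not a verified recipe: the offset field names follow the attributes listed above, and the pinhole-pattern fields (focal_length, horizontal_aperture, width, height) and max_distance are assumptions about the config classes.

```python
from omni.isaac.lab.sensors import RayCasterCameraCfg, patterns

# A minimal ray-cast camera configuration sketch; field values are illustrative.
camera_cfg = RayCasterCameraCfg(
    prim_path="{ENV_REGEX_NS}/Robot/base",   # leaf must be a physics body
    mesh_prim_paths=["/World/ground"],       # currently a single static mesh
    update_period=0.1,                       # 10 Hz sensor updates
    data_types=["distance_to_image_plane"],
    depth_clipping_behavior="max",
    max_distance=10.0,
    offset=RayCasterCameraCfg.OffsetCfg(
        pos=(0.1, 0.0, 0.2),
        rot=(1.0, 0.0, 0.0, 0.0),
        convention="ros",
    ),
    pattern_cfg=patterns.PinholeCameraPatternCfg(
        focal_length=24.0,
        horizontal_aperture=20.955,
        width=640,
        height=480,
    ),
)
```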
Inertial Measurement Unit#
- class omni.isaac.lab.sensors.Imu[source]#
Bases:
SensorBase
The Inertial Measurement Unit (IMU) sensor.
The sensor can be attached to any
RigidObject
or
Articulation
in the scene. The sensor provides complete state information. The sensor is primarily used to provide the linear acceleration and angular velocity of the object in the body frame. The sensor also provides the position and orientation of the object in the world frame and the angular acceleration and linear velocity in the body frame. The extra data outputs are useful for simulating with or comparing against “perfect” state estimation.
Note
We compute the accelerations using numerical differentiation of the velocities. Consequently, the IMU sensor accuracy depends on the chosen PhysX timestep. For sufficient accuracy, we recommend keeping the simulation timestep at 200 Hz or higher.
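The note above can be sketched with the underlying finite difference (this is a conceptual illustration, not the Isaac Lab implementation):

```python
import numpy as np

def finite_diff_accel(vel_prev, vel_curr, dt):
    """Backward-difference acceleration, as described in the note above."""
    return (vel_curr - vel_prev) / dt

dt = 1.0 / 200.0  # 200 Hz physics step, the recommended minimum
vel_prev = np.array([0.00, 0.0, 0.0])
vel_curr = np.array([0.05, 0.0, 0.0])  # body sped up by 0.05 m/s in one step
accel = finite_diff_accel(vel_prev, vel_curr, dt)  # ~[10, 0, 0] m/s^2
```

Because velocity noise is divided by dt, a larger timestep amplifies error in the reported acceleration, which is why a fine physics step matters here.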
Note
It is suggested to use the OffsetCfg to define an IMU frame relative to a rigid body prim defined at the root of a
RigidObject
or a prim that is defined by a non-fixed joint in an
Articulation
(except for the root of a fixed-base articulation). Using frames with fixed joints and small mass/inertia to emulate a transform relative to a body frame can result in lower performance and accuracy.
Attributes:
The configuration parameters.
Data from the sensor.
Number of instances of the sensor.
Memory device for computation.
Whether the sensor has a debug visualization implemented.
Whether the sensor is initialized.
Methods:
__init__
(cfg) Initializes the Imu sensor.
reset
([env_ids]) Resets the sensor internals.
set_debug_vis
(debug_vis) Sets whether to visualize the sensor data.
- __init__(cfg: ImuCfg)[source]#
Initializes the Imu sensor.
- Parameters:
cfg – The configuration parameters.
- property data: ImuData#
Data from the sensor.
This property is only updated when the user tries to access the data. This is done to avoid unnecessary computation when the sensor data is not used.
For updating the sensor when this property is accessed, you can use the following code snippet in your sensor implementation:
# update sensors if needed
self._update_outdated_buffers()
# return the data (where `_data` is the data for the sensor)
return self._data
- property num_instances: int#
Number of instances of the sensor.
This is equal to the number of sensors per environment multiplied by the number of environments.
- reset(env_ids: Sequence[int] | None = None)[source]#
Resets the sensor internals.
- Parameters:
env_ids – The sensor ids to reset. Defaults to None.
- property has_debug_vis_implementation: bool#
Whether the sensor has a debug visualization implemented.
- class omni.isaac.lab.sensors.ImuCfg[source]#
Bases:
SensorBaseCfg
Configuration for an Inertial Measurement Unit (IMU) sensor.
Classes:
The offset pose of the sensor's frame from the sensor's parent frame.
Attributes:
Prim path (or expression) to the sensor.
Update period of the sensor buffers (in seconds).
Number of past frames to store in the sensor buffers.
Whether to visualize the sensor.
The offset pose of the sensor's frame from the sensor's parent frame.
The configuration object for the visualization markers.
The linear acceleration bias applied to the linear acceleration in the world frame (x,y,z).
- class OffsetCfg[source]#
Bases:
object
The offset pose of the sensor’s frame from the sensor’s parent frame.
Attributes:
- prim_path: str#
Prim path (or expression) to the sensor.
Note
The expression can contain the environment namespace regex
{ENV_REGEX_NS}
which will be replaced with the environment namespace.
Example: {ENV_REGEX_NS}/Robot/sensor will be replaced with /World/envs/env_.*/Robot/sensor.
- update_period: float#
Update period of the sensor buffers (in seconds). Defaults to 0.0 (update every step).
- history_length: int#
Number of past frames to store in the sensor buffers. Defaults to 0, which means that only the current data is stored (no history).
- offset: OffsetCfg#
The offset pose of the sensor’s frame from the sensor’s parent frame. Defaults to identity.
- visualizer_cfg: VisualizationMarkersCfg#
The configuration object for the visualization markers. Defaults to RED_ARROW_X_MARKER_CFG.
This attribute is only used when debug visualization is enabled.
- gravity_bias: tuple[float, float, float]#
The linear acceleration bias applied to the linear acceleration in the world frame (x,y,z).
IMU sensors typically report a positive acceleration opposing the direction of gravity. Setting this parameter to (0.0, 0.0, 0.0) removes that bias from the output. By default, it is set to (0.0, 0.0, 9.81), which results in a positive acceleration reading along the world Z-axis for a static sensor.
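The effect of this bias can be sketched in a few lines of plain numpy (a conceptual illustration, not the Isaac Lab implementation):

```python
import numpy as np

gravity_bias = np.array([0.0, 0.0, 9.81])  # the default value

# A static, level IMU reports the reaction to gravity along world +Z.
raw_lin_acc = np.array([0.0, 0.0, 9.81])

# Removing the bias (equivalent to configuring gravity_bias=(0.0, 0.0, 0.0))
# leaves only the acceleration due to motion, which is zero for a static body.
motion_acc = raw_lin_acc - gravity_bias  # -> [0, 0, 0]
```

Keeping the default thus mimics a real accelerometer, while zeroing the bias yields a "gravity-compensated" reading.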