isaaclab.sim.views#

Views for manipulating USD prims.

Classes

XformPrimView

Optimized batched interface for reading and writing transforms of multiple USD prims.

XForm Prim View#

class isaaclab.sim.views.XformPrimView[source]#

Bases: object

Optimized batched interface for reading and writing transforms of multiple USD prims.

This class provides efficient batch operations for getting and setting poses (position and orientation) of multiple prims at once using torch tensors. It is designed for scenarios where you need to manipulate many prims simultaneously, such as in multi-agent simulations or large-scale procedural generation.

The class supports both world-space and local-space pose operations:

  • World poses: Positions and orientations in the global world frame

  • Local poses: Positions and orientations relative to each prim’s parent
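For example, a minimal usage sketch (the prim path pattern and pose values below are hypothetical, and a USD stage with matching Xformable prims is assumed to be open):

    import torch

    from isaaclab.sim.views import XformPrimView

    # Build a view over all prims matching the pattern.
    view = XformPrimView("/World/Env_.*/Marker", device="cpu")

    # Batch-write world poses: one row per matched prim.
    positions = torch.zeros(view.count, 3)
    positions[:, 2] = 1.0  # lift all prims to z = 1
    orientations = torch.zeros(view.count, 4)
    orientations[:, 0] = 1.0  # identity quaternion (w, x, y, z)
    view.set_world_poses(positions=positions, orientations=orientations)

    # Batch-read the poses back.
    positions, orientations = view.get_world_poses()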

Warning

Fabric and Physics Simulation:

This view operates directly on USD attributes. When Fabric (NVIDIA’s USD runtime optimization) is enabled, physics simulation updates are written to Fabric’s internal representation and are not propagated back to USD attributes. As a result, poses read through this view can be stale (they reflect the last USD-authored values rather than the simulated state), and poses written through this view may not be seen by the physics simulation.

Solution: For prims with physics components (rigid bodies, articulations), use isaaclab.assets classes (e.g., RigidObject, Articulation) which use PhysX tensor APIs that work correctly with Fabric.

When to use XformPrimView:

  • Non-physics prims (markers, visual elements, cameras without physics)

  • Setting initial poses before simulation starts

  • Non-Fabric workflows

For more information on Fabric, please refer to the Fabric documentation.

Note

Performance Considerations:

  • Tensor operations are performed on the specified device (CPU/CUDA)

  • USD write operations use Sdf.ChangeBlock for batched updates

  • Getting poses involves USD API calls and cannot be fully accelerated on GPU

  • For maximum performance, minimize get/set operations within tight loops

Note

Transform Requirements:

All prims in the view must be Xformable and have standardized transform operations: [translate, orient, scale]. Non-standard prims will raise a ValueError during initialization if validate_xform_ops is True. Please use the function isaaclab.sim.utils.standardize_xform_ops() to prepare prims before using this view.
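A sketch of this preparation step (assuming find_matching_prims() returns the matched prims and standardize_xform_ops() accepts a single prim; check the isaaclab.sim.utils documentation for the exact signatures):

    from isaaclab.sim import utils as sim_utils
    from isaaclab.sim.views import XformPrimView

    # Rewrite each prim's xform op stack as [translate, orient, scale]
    # before constructing the view.
    for prim in sim_utils.find_matching_prims("/World/Env_.*/Robot"):
        sim_utils.standardize_xform_ops(prim)

    view = XformPrimView("/World/Env_.*/Robot")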

Warning

This class operates at the USD default time code. Any animation or time-sampled data will not be affected by write operations. For animated transforms, you need to handle time-sampled keyframes separately.

Methods:

__init__(prim_path[, device, ...])

Initialize the view with matching prims.

set_world_poses([positions, orientations, ...])

Set world-space poses for prims in the view.

set_local_poses([translations, ...])

Set local-space poses for prims in the view.

set_scales(scales[, indices])

Set scales for prims in the view.

set_visibility(visibility[, indices])

Set visibility for prims in the view.

get_world_poses([indices])

Get world-space poses for prims in the view.

get_local_poses([indices])

Get local-space poses for prims in the view.

get_scales([indices])

Get scales for prims in the view.

get_visibility([indices])

Get visibility for prims in the view.

Attributes:

count

Number of prims in this view.

device

Device where tensors are allocated (cpu or cuda).

prims

List of USD prims being managed by this view.

prim_paths

List of prim paths (as strings) for all prims being managed by this view.

__init__(prim_path: str, device: str = 'cpu', validate_xform_ops: bool = True, stage: Usd.Stage | None = None)[source]#

Initialize the view with matching prims.

This method searches the USD stage for all prims matching the provided path pattern, validates that they are Xformable with standard transform operations, and stores references for efficient batch operations.

We generally recommend validating the xform operations, as this ensures that the prims are in a consistent state and carry the standard transform operations (translate, orient, scale, in that order). However, if you are sure that the prims are already in this state, you can set validate_xform_ops to False to improve performance; doing so can save around 45-50% of the time taken to initialize the view.

Parameters:
  • prim_path – USD prim path pattern to match prims. Supports wildcards (*) and regex patterns (e.g., "/World/Env_.*/Robot"). See isaaclab.sim.utils.find_matching_prims() for pattern syntax.

  • device – Device to place the tensors on. Can be "cpu" or CUDA devices like "cuda:0". Defaults to "cpu".

  • validate_xform_ops – Whether to validate that the prims have standard xform operations. Defaults to True.

  • stage – USD stage to search for prims. Defaults to None, in which case the current active stage from the simulation context is used.

Raises:

ValueError – If any matched prim is not Xformable or doesn’t have standardized transform operations (translate, orient, scale in that order).
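For instance, when the prims are known to already carry the standard op stack, validation can be skipped (a sketch; the path is hypothetical):

    from isaaclab.sim.views import XformPrimView

    # Prims were standardized at creation time, so skip validation to
    # cut initialization time (roughly 45-50%, per the note above).
    view = XformPrimView(
        "/World/Env_.*/Robot",
        device="cuda:0",
        validate_xform_ops=False,
    )
    print(view.count, view.device)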

property count: int#

Number of prims in this view.

Returns:

The number of prims being managed by this view.

property device: str#

Device where tensors are allocated (cpu or cuda).

property prims: list[pxr.Usd.Prim]#

List of USD prims being managed by this view.

property prim_paths: list[str]#

List of prim paths (as strings) for all prims being managed by this view.

This property converts each prim to its path string representation. The conversion is performed lazily on first access and cached for subsequent accesses.

Note

For most use cases, prefer using prims directly as it provides direct access to the USD prim objects without the conversion overhead. This property is mainly useful for logging, debugging, or when string paths are explicitly required.

Returns:

List of prim paths (as strings) in the same order as prims.

set_world_poses(positions: torch.Tensor | None = None, orientations: torch.Tensor | None = None, indices: Sequence[int] | None = None)[source]#

Set world-space poses for prims in the view.

This method sets the position and/or orientation of each prim in world space. The world pose is computed by considering the prim’s parent transforms. If a prim has a parent, this method will convert the world pose to the appropriate local pose before setting it.

Note

This operation writes to USD at the default time code. Any animation data will not be affected.

Parameters:
  • positions – World-space positions as a tensor of shape (M, 3) where M is the number of prims to set (either all prims if indices is None, or the number of indices provided). Defaults to None, in which case positions are not modified.

  • orientations – World-space orientations as quaternions (w, x, y, z) with shape (M, 4). Defaults to None, in which case orientations are not modified.

  • indices – Indices of prims to set poses for. Defaults to None, in which case poses are set for all prims in the view.

Raises:
  • ValueError – If positions shape is not (M, 3) or orientations shape is not (M, 4).

  • ValueError – If the number of poses doesn’t match the number of indices provided.
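A short sketch of a partial update via indices (view is assumed to be an XformPrimView over at least three prims; values are hypothetical):

    import torch

    # Update only prims 0 and 2: M = 2 rows must match len(indices).
    positions = torch.tensor([[0.0, 0.0, 1.0],
                              [1.0, 0.0, 1.0]])
    orientations = torch.tensor([[1.0, 0.0, 0.0, 0.0],
                                 [1.0, 0.0, 0.0, 0.0]])  # identity (w, x, y, z)
    view.set_world_poses(positions, orientations, indices=[0, 2])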

set_local_poses(translations: torch.Tensor | None = None, orientations: torch.Tensor | None = None, indices: Sequence[int] | None = None)[source]#

Set local-space poses for prims in the view.

This method sets the position and/or orientation of each prim in local space (relative to their parent prims). This is useful when you want to directly manipulate the prim’s transform attributes without considering the parent hierarchy.

Note

This operation writes to USD at the default time code. Any animation data will not be affected.

Parameters:
  • translations – Local-space translations as a tensor of shape (M, 3) where M is the number of prims to set (either all prims if indices is None, or the number of indices provided). Defaults to None, in which case translations are not modified.

  • orientations – Local-space orientations as quaternions (w, x, y, z) with shape (M, 4). Defaults to None, in which case orientations are not modified.

  • indices – Indices of prims to set poses for. Defaults to None, in which case poses are set for all prims in the view.

Raises:
  • ValueError – If translations shape is not (M, 3) or orientations shape is not (M, 4).

  • ValueError – If the number of poses doesn’t match the number of indices provided.
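Analogous to the world-space setter; for example, offsetting every prim relative to its parent while leaving orientations untouched (a sketch, assuming an existing view):

    import torch

    # Translate each prim 0.5 units along its parent's x-axis.
    offsets = torch.zeros(view.count, 3)
    offsets[:, 0] = 0.5
    view.set_local_poses(translations=offsets)  # orientations stay as-is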

set_scales(scales: torch.Tensor, indices: Sequence[int] | None = None)[source]#

Set scales for prims in the view.

This method sets the scale of each prim in the view.

Parameters:
  • scales – Scales as a tensor of shape (M, 3) where M is the number of prims to set (either all prims if indices is None, or the number of indices provided).

  • indices – Indices of prims to set scales for. Defaults to None, in which case scales are set for all prims in the view.

Raises:

ValueError – If scales shape is not (M, 3).
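For example (a sketch, assuming an existing view):

    import torch

    # Uniformly double the size of every prim in the view.
    view.set_scales(torch.full((view.count, 3), 2.0))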

set_visibility(visibility: torch.Tensor, indices: Sequence[int] | None = None)[source]#

Set visibility for prims in the view.

This method sets the visibility of each prim in the view.

Parameters:
  • visibility – Visibility as a boolean tensor of shape (M,) where M is the number of prims to set (either all prims if indices is None, or the number of indices provided).

  • indices – Indices of prims to set visibility for. Defaults to None, in which case visibility is set for all prims in the view.

Raises:

ValueError – If visibility shape is not (M,).
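For example (a sketch, assuming an existing view):

    import torch

    # Hide the first prim and show the rest.
    mask = torch.ones(view.count, dtype=torch.bool)
    mask[0] = False
    view.set_visibility(mask)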

get_world_poses(indices: Sequence[int] | None = None) tuple[torch.Tensor, torch.Tensor][source]#

Get world-space poses for prims in the view.

This method retrieves the position and orientation of each prim in world space by computing the full transform hierarchy from the prim to the world root.

Note

Scale and skew are ignored. The returned poses contain only translation and rotation.

Parameters:

indices – Indices of prims to get poses for. Defaults to None, in which case poses are retrieved for all prims in the view.

Returns:

A tuple of (positions, orientations) where:

  • positions: Torch tensor of shape (M, 3) containing world-space positions (x, y, z), where M is the number of prims queried.

  • orientations: Torch tensor of shape (M, 4) containing world-space quaternions (w, x, y, z).

Return type:

tuple[torch.Tensor, torch.Tensor]
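For example (a sketch, assuming an existing view with at least two prims):

    # Query world poses of the first two prims.
    positions, orientations = view.get_world_poses(indices=[0, 1])
    # positions: shape (2, 3); orientations: shape (2, 4), as (w, x, y, z)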

get_local_poses(indices: Sequence[int] | None = None) tuple[torch.Tensor, torch.Tensor][source]#

Get local-space poses for prims in the view.

This method retrieves the position and orientation of each prim in local space (relative to their parent prims). These are the raw transform values stored on each prim.

Note

Scale is ignored. The returned poses contain only translation and rotation.

Parameters:

indices – Indices of prims to get poses for. Defaults to None, in which case poses are retrieved for all prims in the view.

Returns:

A tuple of (translations, orientations) where:

  • translations: Torch tensor of shape (M, 3) containing local-space translations (x, y, z), where M is the number of prims queried.

  • orientations: Torch tensor of shape (M, 4) containing local-space quaternions (w, x, y, z).

Return type:

tuple[torch.Tensor, torch.Tensor]
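For example (a sketch, assuming an existing view):

    # Raw per-prim transform values, relative to each parent.
    translations, orientations = view.get_local_poses()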

get_scales(indices: Sequence[int] | None = None) torch.Tensor[source]#

Get scales for prims in the view.

This method retrieves the scale of each prim in the view.

Parameters:

indices – Indices of prims to get scales for. Defaults to None, in which case scales are retrieved for all prims in the view.

Returns:

A tensor of shape (M, 3) containing the scales of each prim, where M is the number of prims queried.
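For example (a sketch, assuming an existing view):

    # Read all scales; one (x, y, z) row per prim.
    scales = view.get_scales()
    assert scales.shape == (view.count, 3)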

get_visibility(indices: Sequence[int] | None = None) torch.Tensor[source]#

Get visibility for prims in the view.

This method retrieves the visibility of each prim in the view.

Parameters:

indices – Indices of prims to get visibility for. Defaults to None, in which case visibility is retrieved for all prims in the view.

Returns:

A tensor of shape (M,) containing the visibility of each prim, where M is the number of prims queried. The tensor is of type bool.
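For example (a sketch, assuming an existing view):

    # Count how many prims are currently visible.
    visible = view.get_visibility()  # bool tensor of shape (count,)
    print(int(visible.sum().item()), "of", view.count, "prims visible")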