API Reference

This page provides the API documentation for bard.

Top-Level Functions

Top-level free functions for the bard v0.3 unified API.

All computation goes through bard.* module-level functions that accept a Model and a Data.
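A minimal workflow sketch of this pattern (the URDF path and the frame name "ee_link" are placeholders, not part of this API):

```python
import torch
import bard

# Build the immutable model once; "robot.urdf" is a placeholder path.
model = bard.build_model_from_urdf("robot.urdf", device="cpu")
data = bard.create_data(model, max_batch_size=256)

B = 32  # batch size
q = torch.zeros(B, model.nq, dtype=torch.float64)
qd = torch.zeros(B, model.nv, dtype=torch.float64)

# One shared tree traversal per control step...
bard.update_kinematics(model, data, q, qd)

# ...then every algorithm call reuses the cached state.
fid = model.get_frame_id("ee_link")               # placeholder frame name
pose = bard.forward_kinematics(model, data, fid)  # (B, 4, 4)
J = bard.jacobian(model, data, fid)               # (B, 6, nv)
M = bard.crba(model, data)                        # (B, nv, nv)
```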

bard.api.aba(model, data, tau, *, gravity=None)

Forward dynamics (Articulated Body Algorithm) using cached state.

Computes joint accelerations qdd from applied forces tau using the O(n) Articulated Body Algorithm (Featherstone).

Requires a prior update_kinematics call with qd provided.

Parameters:
  • model (Model) – The robot model.

  • data (Data) – The computation workspace (must include velocities).

  • tau (Tensor) – Generalized forces (B, nv).

  • gravity (Optional[Tensor]) – 3-element gravity vector. Defaults to [0, 0, -9.81].

Return type:

Tensor

Returns:

Generalized accelerations (B, nv).
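Because ABA and RNEA are mutual inverses, a round trip through both makes a convenient consistency check. A sketch, assuming the functions are re-exported at the bard top level as described above (placeholder URDF path):

```python
import torch
import bard

model = bard.build_model_from_urdf("robot.urdf")  # placeholder path
data = bard.create_data(model)

q = torch.rand(1, model.nq, dtype=torch.float64)
qd = torch.rand(1, model.nv, dtype=torch.float64)
bard.update_kinematics(model, data, q, qd)  # qd required for dynamics

tau = torch.rand(1, model.nv, dtype=torch.float64)
qdd = bard.aba(model, data, tau)      # forward dynamics: tau -> qdd
tau_rt = bard.rnea(model, data, qdd)  # inverse dynamics: qdd -> tau
assert torch.allclose(tau, tau_rt, atol=1e-8)
```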

bard.api.crba(model, data)

Mass matrix (Composite Rigid Body Algorithm) using cached state.

Requires a prior update_kinematics call.

Parameters:
  • model (Model) – The robot model.

  • data (Data) – The computation workspace.

Return type:

Tensor

Returns:

Mass matrix (B, nv, nv).

bard.api.create_data(model, max_batch_size=1024)

Creates a new computation workspace for a model.

Parameters:
  • model (Model) – The robot model.

  • max_batch_size (int) – Maximum supported batch size for pre-allocated buffers.

Return type:

Data

Returns:

A new Data instance.

bard.api.forward_kinematics(model, data, frame_id, *, q=None)

World-frame pose of a frame.

Cached mode (default): O(1) lookup from a prior update_kinematics call.

Standalone mode: Pass q to perform a path-only traversal without needing update_kinematics. This is efficient for single-frame queries.

Parameters:
  • model (Model) – The robot model.

  • data (Data) – The computation workspace.

  • frame_id (int) – Target frame index (from model.get_frame_id).

  • q (Optional[Tensor]) – If provided, performs standalone FK (path-only traversal).

Return type:

Tensor

Returns:

Homogeneous transform (B, 4, 4).

bard.api.jacobian(model, data, frame_id, *, q=None, reference_frame='world', return_pose=False)

Geometric Jacobian.

Cached mode (default): O(path_length) using prior update_kinematics state.

Standalone mode: Pass q to perform a path-only traversal without needing update_kinematics. Fuses FK propagation with Jacobian column computation in a single pass, avoiding the O(N) full-tree T_world computation.

Parameters:
  • model (Model) – The robot model.

  • data (Data) – The computation workspace.

  • frame_id (int) – Target frame index.

  • q (Optional[Tensor]) – If provided, performs standalone Jacobian (path-only traversal).

  • reference_frame (str) – "world" or "local".

  • return_pose (bool) – If True, also returns the world-frame pose.

Return type:

Union[Tensor, Tuple[Tensor, Tensor]]

Returns:

Jacobian (B, 6, nv), optionally with pose (B, 4, 4).
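A sketch contrasting the two modes (placeholder URDF path and frame name):

```python
import torch
import bard

model = bard.build_model_from_urdf("robot.urdf")  # placeholder path
data = bard.create_data(model)
fid = model.get_frame_id("ee_link")               # placeholder frame name
q = torch.zeros(4, model.nq, dtype=torch.float64)

# Cached mode: full-tree update first, then O(path_length) per query.
bard.update_kinematics(model, data, q)
J = bard.jacobian(model, data, fid)  # (4, 6, nv), world frame

# Standalone mode: one fused path-only pass; also fetch the pose.
J, pose = bard.jacobian(model, data, fid, q=q,
                        reference_frame="local", return_pose=True)
assert pose.shape == (4, 4, 4)
```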

bard.api.rnea(model, data, qdd, *, gravity=None)

Inverse dynamics (Recursive Newton-Euler Algorithm) using cached state.

Requires a prior update_kinematics call with qd provided.

Parameters:
  • model (Model) – The robot model.

  • data (Data) – The computation workspace (must include velocities).

  • qdd (Tensor) – Generalized accelerations (B, nv).

  • gravity (Optional[Tensor]) – 3-element gravity vector. Defaults to [0, 0, -9.81].

Return type:

Tensor

Returns:

Generalized forces (B, nv).

bard.api.spatial_acceleration(model, data, qdd, frame_id, *, reference_frame='world')

Spatial acceleration using cached state.

Requires a prior update_kinematics call with qd provided.

Parameters:
  • model (Model) – The robot model.

  • data (Data) – The computation workspace (must include velocities).

  • qdd (Tensor) – Generalized accelerations (B, nv).

  • frame_id (int) – Target frame index.

  • reference_frame (str) – "world" or "local".

Return type:

Tensor

Returns:

Spatial acceleration (B, 6) as [linear; angular].

bard.api.update_kinematics(model, data, q, qd=None)

Compute and cache all shared kinematic quantities in a single tree traversal.

Call this once per control step, then use the cached data for all subsequent algorithm calls.

If qd is provided, velocity-level quantities (spatial velocities) are also computed. If qd is None, only position-level quantities are computed (sufficient for FK, Jacobian, CRBA).

Parameters:
  • model (Model) – The robot model.

  • data (Data) – The computation workspace (mutated in-place).

  • q (Tensor) – Generalized positions (B, nq).

  • qd (Optional[Tensor]) – Generalized velocities (B, nv), or None.

Return type:

Data

Returns:

The same data object (for convenience chaining).

Raises:

ValueError – If batch size exceeds data.max_batch_size.
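A sketch of the two update levels (placeholder URDF path):

```python
import torch
import bard

model = bard.build_model_from_urdf("robot.urdf")  # placeholder path
data = bard.create_data(model)
q = torch.zeros(1, model.nq, dtype=torch.float64)
qd = torch.zeros(1, model.nv, dtype=torch.float64)

# Position-only update: sufficient for FK, Jacobian, and CRBA.
bard.update_kinematics(model, data, q)
M = bard.crba(model, data)

# Position + velocity update: additionally enables RNEA, ABA, and
# spatial_acceleration. The data object is returned, so calls chain.
qdd = torch.zeros(1, model.nv, dtype=torch.float64)
tau = bard.rnea(model, bard.update_kinematics(model, data, q, qd), qdd)
```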

Model

Immutable robot model containing structure, topology, and algorithm kernels.

The Model class is the primary object in the v0.3 API. It absorbs the Chain (kinematic tree topology) and the static precomputed data that was previously spread across RobotDynamics.__init__. All algorithm implementations live here as private methods that accept a Data workspace.

class bard.core.model.Model(chain, compile_enabled=False, compile_kwargs=None)

Immutable robot model for batched kinematics and dynamics.

A Model holds the robot’s kinematic tree topology, pre-computed inertias, joint axes, offset matrices, and compilation settings. It is constructed once (typically via bard.build_model_from_urdf()) and then used with one or more Data workspaces to perform computations.

nq

Configuration space dimension.

Type:

int

nv

Velocity space dimension.

Type:

int

n_joints

Number of actuated joints.

Type:

int

n_frames

Total number of frames (links) in the kinematic tree.

Type:

int

has_floating_base

Whether the robot has a 6-DOF floating base.

Type:

bool

create_data(max_batch_size=1024)

Creates a new Data workspace for this model.

Parameters:

max_batch_size (int) – Maximum supported batch size for pre-allocated buffers.

Return type:

Data

Returns:

A new Data instance.

enable_compilation(enabled=True, **compile_kwargs)

Enable or disable torch.compile for kinematics methods.

Dynamics algorithms (RNEA, CRBA, ABA) always run in eager mode because their sequential tree-traversal loops with small batched matmuls do not benefit from compilation.

ensure_tensor(value)

Converts various input types to a tensor on the model’s device/dtype.

Return type:

Tensor

get_frame_id(frame_name)

Gets the integer index for a frame name.

Return type:

int

get_frame_names(exclude_fixed=True)

Returns all frame names in traversal order.

Return type:

List[str]

get_joint_names()

Returns the ordered list of actuated joint names.

Return type:

List[str]

get_joint_parameter_names(exclude_fixed=True)

Returns the ordered list of joint parameter names (fixed joints are excluded by default).

Return type:

List[str]

property joint_limits: Tuple[Tensor, Tensor]

Lower and upper position limits for all actuated joints.

property nq: int

Configuration space dimension.

property nv: int

Velocity space dimension.

pack_q(q_base, q_joints)

Combines base and joint positions into q.

Return type:

Tensor

pack_v(v_base, v_joints)

Combines base and joint velocities into v.

Return type:

Tensor

to(dtype=None, device=None)

Move all internal tensors to a specified dtype and/or device.

Return type:

Model

Returns:

Self for method chaining.

unpack_q(q)

Splits q into (q_base, q_joints).

unpack_v(v)

Splits v into (v_base, v_joints).
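For floating-base models, the pack/unpack helpers are exact inverses. A round-trip sketch (placeholder URDF path):

```python
import torch
import bard

# Placeholder path; floating_base=True adds base DOFs to q and v.
model = bard.build_model_from_urdf("robot.urdf", floating_base=True)
q = torch.zeros(8, model.nq, dtype=torch.float64)

# Split q into base and joint parts, then reassemble it.
q_base, q_joints = model.unpack_q(q)
assert torch.equal(model.pack_q(q_base, q_joints), q)

v = torch.zeros(8, model.nv, dtype=torch.float64)
v_base, v_joints = model.unpack_v(v)  # v_base: the 6-DOF base twist
assert torch.equal(model.pack_v(v_base, v_joints), v)
```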

Data

Mutable computation workspace for robot dynamics algorithms.

This module defines the Data class, which holds all pre-allocated tensor buffers used by kinematics and dynamics algorithms. It replaces the previous KinematicsState dataclass by also owning the algorithm-specific buffers (accelerations, forces, mass matrix, Jacobian), enabling a clean separation between the immutable robot model and the mutable computation state.

class bard.core.data.Data(n_nodes, nv, max_batch_size=1024, dtype=torch.float32, device='cpu')

Pre-allocated workspace for batched kinematics and dynamics computations.

A Data object is created from a Model via bard.create_data() and holds all mutable tensor buffers. Multiple Data instances can coexist for the same Model (e.g., for parallel simulation or planning vs. control).

max_batch_size

Maximum supported batch size.

batch_size

Actual batch size of the last update_kinematics call.

has_velocity

Whether velocity-level quantities were computed.

URDF Parsing

URDF parser for building a bard Model from URDF files.

This module provides build_model_from_urdf(), the primary entry point for creating a robot model. It handles parsing of links, joints, and inertial properties, and supports both fixed-base and floating-base robots.

bard.parsers.urdf.build_model_from_urdf(path, *, floating_base=False, base_frame_name='floating_base', dtype=torch.float64, device='cpu', compile_enabled=False, compile_kwargs=None)

Builds a bard Model from a URDF file.

This is the primary entry point for creating a robot model.

Parameters:
  • path (Union[str, PathLike]) – Filesystem path to the URDF file.

  • floating_base (bool) – If True, a 6-DOF floating base is prepended.

  • base_frame_name (str) – Name for the synthetic base frame.

  • dtype – PyTorch dtype for the model’s tensors.

  • device – PyTorch device for the model’s tensors.

  • compile_enabled (bool) – If True, JIT-compile internal methods.

  • compile_kwargs – Additional kwargs for torch.compile.

Return type:

Model

Returns:

A Model instance.
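Two typical constructions (both paths are placeholders):

```python
import torch
import bard

# Fixed-base arm using the float64 default dtype.
arm = bard.build_model_from_urdf("arm.urdf")

# Floating-base robot: a synthetic 6-DOF base frame is prepended.
humanoid = bard.build_model_from_urdf(
    "humanoid.urdf",
    floating_base=True,
    base_frame_name="root",
    dtype=torch.float32,
    device="cuda",
)
assert humanoid.has_floating_base
```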

Data Structures

Frame representation for kinematic tree nodes.

class bard.structures.frame.Frame(name=None, link=None, joint=None, children=None)

A node in the robot kinematic tree, containing joint and link information.

A frame represents a coordinate system attached to a part of the robot. The kinematic tree is built by connecting these frames in parent-child relationships, forming a chain from the root to the end-effectors.

name

The unique identifier for this frame.

Type:

str

link

The link object associated with this frame, containing physical properties like mass and inertia.

Type:

Link

joint

The joint that connects this frame to its parent, defining the motion between them.

Type:

Joint

children

A list of child frames attached to this frame.

Type:

List[Frame]

add_child(child)

Adds a child frame to this frame’s list of children.

Parameters:

child (Frame) – The child frame to add.

Return type:

None

count_joints(exclude_fixed=True)

Counts the number of joints in the subtree starting from this frame.

Parameters:

exclude_fixed (bool) – If True, fixed joints are not counted. Defaults to True.

Returns:

The total number of joints.

Return type:

int

find_by_name(target_name)

Finds a frame by name within this frame’s subtree.

Parameters:

target_name (str) – The name of the frame to find.

Returns:

The Frame object if found, otherwise None.

Return type:

Optional[Frame]

get_all_frames()

Returns a list of all frames in the subtree starting from this frame.

Returns:

A list of all frames in depth-first order.

Return type:

List[Frame]

get_transform(theta)

Computes the transformation for the joint associated with this frame.

This transform represents the motion of the joint at joint position theta (an angle for revolute joints, a displacement for prismatic joints). It is composed with the joint’s static offset.

Parameters:

theta (Tensor) – The joint position(s). Can be a scalar or a batched tensor of shape (B,).

Returns:

A Transform3d object representing the batched joint motion.

Return type:

Transform3d

is_end_effector()

Checks if this frame is a leaf node in the kinematic tree.

Returns:

True if the frame has no children, otherwise False.

Return type:

bool

to(dtype=None, device=None)

Moves all tensor data in the frame and its children to a specified device/dtype.

Parameters:
  • dtype (Optional[dtype]) – The target data type. Defaults to None.

  • device (Optional[device]) – The target device. Defaults to None.

Returns:

Returns self for method chaining.

Return type:

Frame

Joint representation for robot kinematic chains.

class bard.structures.joint.Joint(name=None, offset=None, joint_type='fixed', axis=(0.0, 0.0, 1.0), dtype=torch.float32, device='cpu', limits=None, velocity_limits=None, effort_limits=None)

Represents a robot joint, defining the motion between two links.

This class stores the properties of a single joint, such as its type (revolute, prismatic, fixed), axis of motion, and physical limits.

name

The unique identifier for the joint.

Type:

str

offset

The static transform from the parent link’s frame to the joint’s frame.

Type:

Transform3d

joint_type

The type of joint, one of ‘fixed’, ‘revolute’, ‘prismatic’.

Type:

str

axis

A 3D unit vector representing the axis of motion in the joint’s own frame.

Type:

torch.Tensor

limits

A tuple of (lower, upper) position limits in radians or meters. None if unbounded.

Type:

Optional[Tuple[float, float]]

velocity_limits

A tuple of (lower, upper) velocity limits. None if unbounded.

Type:

Optional[Tuple[float, float]]

effort_limits

A tuple of (lower, upper) force or torque limits. None if unbounded.

Type:

Optional[Tuple[float, float]]

clamp(joint_position)

Clamps a given joint position to the joint’s defined limits.

If the joint has no limits, the input position is returned unchanged.

Parameters:

joint_position (Tensor) – The current joint position(s) to clamp.

Returns:

The clamped joint position(s).

Return type:

Tensor
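For a joint with defined limits, clamp reduces to elementwise clamping against (lower, upper). A torch-only sketch of that behavior (the Joint object itself is omitted here):

```python
import torch

lower, upper = -1.0, 2.0              # example position limits
q = torch.tensor([-3.0, 0.5, 4.0])

# Equivalent of Joint.clamp for a joint with these limits.
q_clamped = torch.clamp(q, min=lower, max=upper)
print(q_clamped)  # tensor([-1.0000,  0.5000,  2.0000])
```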

to(dtype=None, device=None)

Moves all tensor data in the joint to a specified device/dtype.

Parameters:
  • dtype (Optional[dtype]) – The target data type.

  • device (Optional[device]) – The target device.

Returns:

Returns self for method chaining.

Return type:

Joint

Link representation for rigid bodies.

Represents a rigid body in the robot’s kinematic chain.

A link holds the physical properties of a robot segment, including its mass, inertia, and visual geometries for rendering.

name

The unique identifier for the link.

Type:

str

offset

The static transform from the parent joint frame to this link’s reference frame.

Type:

Transform3d

inertial

A tuple containing the link’s inertial properties: (Center of Mass offset, mass, 3x3 inertia tensor). None for massless links.

Type:

Optional[Tuple[object, float, torch.Tensor]]

visuals

A list of Visual objects for rendering.

Type:

List[Visual]

to(dtype=None, device=None)

Moves all tensor data in the link to a specified device/dtype.

Parameters:
  • dtype (Optional[dtype]) – The target data type.

  • device (Optional[device]) – The target device.

Returns:

Returns self for method chaining.

Return type:

Link