Merged

Commits (33):
- 5fd8dda draft (Mar 23, 2026)
- 1e15c77 update (Mar 23, 2026)
- baf731a update comment (Mar 23, 2026)
- f1f043b add viser dependence (Mar 24, 2026)
- 1887304 Merge branch 'main' into cj/add-grasp-annotator (Mar 25, 2026)
- 63ec5e6 update (Mar 25, 2026)
- 73781d8 update (Mar 26, 2026)
- e0d129d TODO: too slow (Mar 30, 2026)
- 7c55249 add collision checker (Mar 30, 2026)
- 35dcb44 update (Mar 30, 2026)
- 508a712 add comments (Mar 30, 2026)
- 2adfb8e Merge branch 'main' into cj/add-grasp-annotator (Mar 30, 2026)
- ed8d941 Merge branch 'main' into cj/add-grasp-annotator (yuecideng, Mar 31, 2026)
- bc1b03c Merge branch 'main' into cj/add-grasp-annotator (yuecideng, Apr 1, 2026)
- 9cec088 Merge branch 'cj/add-grasp-annotator' of https://github.com/DexForce/… (yuecideng, Apr 1, 2026)
- 3fb7ff5 wip (yuecideng, Apr 1, 2026)
- ab5506d style (Apr 1, 2026)
- 4b31ae8 update (Apr 1, 2026)
- c05b130 add batch convex unittest (Apr 1, 2026)
- a4c8341 add trasform points mat (Apr 1, 2026)
- 7865db1 style (Apr 1, 2026)
- 0043d06 update docs (Apr 1, 2026)
- 90568fd fix unittest (Apr 1, 2026)
- 82a0c55 wip (yuecideng, Apr 1, 2026)
- 34cf7a6 Merge branch 'cj/add-grasp-annotator' of https://github.com/DexForce/… (yuecideng, Apr 1, 2026)
- ae5622a wip (yuecideng, Apr 1, 2026)
- ac1a3f1 wip (yuecideng, Apr 2, 2026)
- 87b0095 wip (yuecideng, Apr 2, 2026)
- e2151f9 wip (yuecideng, Apr 2, 2026)
- e394a2b Merge branch 'main' into yueci/refactor-grasp-pose (yuecideng, Apr 2, 2026)
- c862be5 wip (yuecideng, Apr 2, 2026)
- 1e5cd10 Merge branch 'main' into cj/add-grasp-annotator (yuecideng, Apr 2, 2026)
- 7627756 update docs; remove deprecated cfg (Apr 2, 2026)
77 changes: 77 additions & 0 deletions docs/source/tutorial/grasp_generator.rst
@@ -0,0 +1,77 @@
Generating and Executing Robot Grasps
======================================

.. currentmodule:: embodichain.lab.sim

This tutorial demonstrates how to generate antipodal grasp poses for a target object and execute a full grasp trajectory with a robot arm. It covers scene initialization, robot and object creation, interactive grasp region annotation, grasp pose computation, and trajectory execution in the simulation loop.

The Code
~~~~~~~~

The tutorial corresponds to the ``grasp_generator.py`` script in the ``scripts/tutorials/grasp`` directory.

.. dropdown:: Code for grasp_generator.py
:icon: code

.. literalinclude:: ../../../scripts/tutorials/grasp/grasp_generator.py
:language: python
:linenos:

Copilot AI (Mar 30, 2026): The tutorial references scripts/tutorials/grasp/grasp_generator.py via multiple literalinclude directives, but there is no such script in the repo (only grasp_mug.py exists under scripts/tutorials/grasp). This will break the Sphinx build; either add the referenced script or update the doc to include the correct filename/path.


The Code Explained
~~~~~~~~~~~~~~~~~~

Configuring the simulation
--------------------------

Command-line arguments are parsed with ``argparse`` to select the number of parallel environments, the compute device, and optional rendering features such as ray tracing and headless mode.

.. literalinclude:: ../../../scripts/tutorials/grasp/grasp_generator.py
:language: python
:start-at: def parse_arguments():
:end-at: return parser.parse_args()
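Since the script body is not reproduced inline here, the following is a minimal, hypothetical sketch of what such a parser might look like; the flag names are taken from the command line shown later in this tutorial, and everything else (defaults, help strings) is assumed:

```python
import argparse


def parse_arguments(argv=None):
    """Hypothetical sketch of the tutorial's CLI.

    Flag names mirror the command line shown later in this tutorial;
    defaults and help text are illustrative assumptions.
    """
    parser = argparse.ArgumentParser(description="Generate and execute robot grasps.")
    parser.add_argument("--num_envs", type=int, default=1,
                        help="Number of parallel simulation environments.")
    parser.add_argument("--device", type=str, default="cuda",
                        help="Compute device, e.g. 'cuda' or 'cpu'.")
    parser.add_argument("--enable_rt", action="store_true",
                        help="Enable ray-traced rendering.")
    parser.add_argument("--headless", action="store_true",
                        help="Run without opening a simulation window.")
    return parser.parse_args(argv)
```

Passing ``argv=None`` makes ``parse_args`` read ``sys.argv`` as usual, while an explicit list is convenient for testing.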

The parsed arguments are passed to ``initialize_simulation``, which builds a :class:`SimulationManagerCfg` and creates the :class:`SimulationManager` instance. When ray tracing is enabled a directional :class:`cfg.LightCfg` is also added to the scene.

.. literalinclude:: ../../../scripts/tutorials/grasp/grasp_generator.py
:language: python
:start-at: def initialize_simulation(args) -> SimulationManager:
:end-at: return sim

Annotating and computing grasp poses
-------------------------------------

Grasp generation is performed by :meth:`objects.RigidObject.get_grasp_pose`, which internally runs an antipodal sampler on the object mesh. A :class:`toolkits.graspkit.pg_grasp.GraspAnnotatorCfg` controls sampler parameters (sample count, gripper jaw limits) and the interactive annotation workflow:

1. Open the visualization in a browser at the reported port (e.g. ``http://localhost:11801``).
2. Use *Rect Select Region* to highlight the area of the object that should be grasped.
3. Click *Confirm Selection* to finalize the region.

The function returns a batch of ``(N_envs, 4, 4)`` homogeneous transformation matrices representing candidate grasp frames in the world coordinate system.
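As a reminder of the convention, each ``(4, 4)`` matrix packs a rotation and a translation; the short numpy sketch below (with arbitrary illustrative values, not taken from the script) shows the layout and how such a frame maps a point into world coordinates:

```python
import numpy as np

# One hypothetical grasp frame: rotation R and translation t packed into a
# 4x4 homogeneous matrix, the same layout as each entry of the batch
# described above. The values are arbitrary, for illustration only.
R = np.eye(3)                    # identity orientation
t = np.array([0.1, 0.0, 0.3])    # grasp position in world coordinates

T = np.eye(4)
T[:3, :3] = R
T[:3, 3] = t

# Map a point expressed in the grasp frame into the world frame.
p_local = np.array([0.0, 0.0, 0.05, 1.0])  # homogeneous coordinates
p_world = T @ p_local                      # -> world-frame point
```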

For each grasp pose, a gripper approach direction in world coordinates is required to compute the antipodal grasp. In this tutorial we use a fixed approach direction (straight down in the world frame) for simplicity, but it can be customized based on the task or object geometry.

.. literalinclude:: ../../../scripts/tutorials/grasp/grasp_generator.py
:language: python
:start-at: # get mug grasp pose
:end-at: logger.log_info(f"Get grasp pose cost time: {cost_time:.2f} seconds")
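The approach direction only encodes a direction, so it is normalized to unit length before use (``get_grasp_pose`` does this with ``F.normalize``). A small numpy equivalent, using an illustrative slightly tilted "mostly straight down" vector:

```python
import numpy as np

# Normalize a fixed approach direction to unit length, analogous to the
# F.normalize(approach_direction, dim=-1) call in RigidObject.get_grasp_pose.
# The tilted vector here is an arbitrary illustration.
approach = np.array([0.1, 0.0, -1.0])
approach = approach / np.linalg.norm(approach)
```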


The Code Execution
~~~~~~~~~~~~~~~~~~

To run the script, execute the following command from the project root:

.. code-block:: bash

python scripts/tutorials/grasp/grasp_generator.py

A simulation window will open showing the robot and the mug. A browser-based visualizer will also launch (default port ``11801``) for interactive grasp region annotation.

You can customize the run with additional arguments:

.. code-block:: bash

python scripts/tutorials/grasp/grasp_generator.py --num_envs <n> --device <cuda/cpu> --enable_rt --headless

After confirming the grasp region in the browser, the script will compute a grasp pose, print the elapsed time, and then wait for you to press **Enter** before executing the full grasp trajectory in the simulation. Press **Enter** again to exit once the motion is complete.
1 change: 1 addition & 0 deletions docs/source/tutorial/index.rst
@@ -14,6 +14,7 @@ Tutorials
sensor
motion_gen
gizmo
grasp_generator
basic_env
modular_env
rl
65 changes: 65 additions & 0 deletions embodichain/lab/sim/objects/rigid_object.py
@@ -35,6 +35,11 @@
from embodichain.utils.math import convert_quat
from embodichain.utils.math import matrix_from_quat, quat_from_matrix, matrix_from_euler
from embodichain.utils import logger
from embodichain.toolkits.graspkit.pg_grasp.antipodal_annotator import (
GraspAnnotator,
GraspAnnotatorCfg,
)
Copilot AI (Mar 23, 2026): RigidObject now imports GraspAnnotator (and its dependencies like viser/trimesh/open3d) at module import time. This makes the core sim objects package depend on optional UI/geometry libraries and can break environments that don't have those extras installed, even if grasp annotation isn't used. Consider moving these imports inside get_grasp_pose() and raising a clear, actionable error if the optional deps are missing.

Suggested change:
    # removed:
    from embodichain.toolkits.graspkit.pg_grasp.antipodal_annotator import (
        GraspAnnotator,
        GraspAnnotatorCfg,
    )

    # added:
    def _load_grasp_annotator_module():
        """Lazily import the grasp annotator module.

        This avoids importing optional heavy UI/geometry dependencies (e.g., viser,
        trimesh, open3d) at module import time. The import is performed only when
        grasp annotation functionality is actually used.
        """
        try:
            from embodichain.toolkits.graspkit.pg_grasp import antipodal_annotator
        except ImportError as exc:
            raise ImportError(
                "Grasp annotator dependencies are not installed. "
                "To use grasp annotation (RigidObject.get_grasp_pose and related "
                "functionality), install the optional grasp/geometry extras, e.g.:\n\n"
                "    pip install 'embodichain[grasp]'\n\n"
                "or ensure that packages like 'viser', 'trimesh', and 'open3d' are "
                "available in your environment."
            ) from exc
        return antipodal_annotator


    class GraspAnnotator:  # type: ignore[misc]
        """Lazy proxy for the real GraspAnnotator class.

        The actual class is imported from `embodichain.toolkits.graspkit.pg_grasp`
        only when this proxy is instantiated.
        """

        def __new__(cls, *args, **kwargs):
            module = _load_grasp_annotator_module()
            real_cls = module.GraspAnnotator
            return real_cls(*args, **kwargs)


    class GraspAnnotatorCfg:  # type: ignore[misc]
        """Lazy proxy for the real GraspAnnotatorCfg class.

        The actual class is imported only when this proxy is instantiated.
        """

        def __new__(cls, *args, **kwargs):
            module = _load_grasp_annotator_module()
            real_cls = module.GraspAnnotatorCfg
            return real_cls(*args, **kwargs)

Copilot AI (Mar 25, 2026): Importing GraspAnnotator (and its dependencies like viser) at module import time makes embodichain.lab.sim.objects.rigid_object depend on the annotator stack even when users never call get_grasp_pose(). Consider moving these imports inside get_grasp_pose() (or guarding with a lazy/optional import) to reduce import-time overhead and optional-dep failures.

Suggested change:
    # removed:
    from embodichain.toolkits.graspkit.pg_grasp.antipodal_annotator import (
        GraspAnnotator,
        GraspAnnotatorCfg,
    )

    # added:
    try:
        from embodichain.toolkits.graspkit.pg_grasp.antipodal_annotator import (
            GraspAnnotator,
            GraspAnnotatorCfg,
        )
    except ImportError:
        logger.warning(
            "Optional dependency 'embodichain.toolkits.graspkit.pg_grasp.antipodal_annotator' "
            "could not be imported. Grasp-related functionality may be unavailable."
        )
        GraspAnnotator = None  # type: ignore[assignment]
        GraspAnnotatorCfg = None  # type: ignore[assignment]

import torch.nn.functional as F
Copilot AI (Mar 30, 2026): Importing GraspAnnotator at module import time pulls in heavy optional UI dependencies (e.g., viser, trimesh, open3d) whenever RigidObject is imported, even if grasp annotation is never used. To reduce baseline dependencies and import overhead (and avoid failures in minimal/headless installs), consider moving these imports inside get_grasp_pose() and raising a clear error if optional deps are missing.


@dataclass
@@ -1122,3 +1127,63 @@ def destroy(self) -> None:
        arenas = [env]
        for i, entity in enumerate(self._entities):
            arenas[i].remove_actor(entity)

    def get_grasp_pose(
        self,
        cfg: GraspAnnotatorCfg,
        approach_direction: torch.Tensor = None,
        is_visual: bool = False,
Copilot AI (Mar 25, 2026): New public method RigidObject.get_grasp_pose() adds user-facing behavior (caching + pose computation) but there is no automated test coverage for it. Since tests/sim/objects/test_rigid_object.py exists, please add at least a unit test for the deterministic math path and a smoke test for the [num_envs, 4, 4] output shape.
    ) -> torch.Tensor:
        if approach_direction is None:
            approach_direction = torch.tensor(
                [0, 0, -1], dtype=torch.float32, device=self.device
            )
        approach_direction = F.normalize(approach_direction, dim=-1)
        if hasattr(self, "_grasp_annotator") is False:
            vertices = torch.tensor(
                self._entities[0].get_vertices(),
                dtype=torch.float32,
                device=self.device,
            )
            triangles = torch.tensor(
                self._entities[0].get_triangles(), dtype=torch.int32, device=self.device
            )
            scale = torch.tensor(
                self._entities[0].get_body_scale(),
                dtype=torch.float32,
                device=self.device,
            )
            vertices = vertices * scale
            self._grasp_annotator = GraspAnnotator(
                vertices=vertices, triangles=triangles, cfg=cfg
            )

        # Annotate antipodal point pairs
        if hasattr(self, "_hit_point_pairs") is False or cfg.force_regenerate:
            self._hit_point_pairs = self._grasp_annotator.annotate()

        poses = self.get_local_pose(to_matrix=True)
        poses = torch.as_tensor(poses, dtype=torch.float32, device=self.device)
        grasp_poses: tuple[torch.Tensor] = []
        open_lengths: tuple[torch.Tensor] = []
Copilot AI (Mar 30, 2026): Type annotations are misleading here: grasp_poses: tuple[torch.Tensor] = [] and open_lengths: tuple[torch.Tensor] = [] are initialized as lists and later appended to. Prefer list[torch.Tensor] (or build tensors directly) to avoid type confusion for readers and static checkers.

Suggested change:

    grasp_poses: list[torch.Tensor] = []
    open_lengths: list[torch.Tensor] = []

Copilot AI (Apr 1, 2026): grasp_poses / open_lengths are initialized as lists but annotated as tuple[torch.Tensor]. Update the annotations to list[torch.Tensor] (or initialize tuples) to keep types accurate.
        for pose in poses:
            grasp_pose, open_length = self._grasp_annotator.get_approach_grasp_poses(
                self._hit_point_pairs, pose, approach_direction, is_visual=False
            )
            grasp_poses.append(grasp_pose)
            open_lengths.append(open_length)
Copilot AI (Mar 30, 2026): get_grasp_pose() assumes self._grasp_annotator.annotate() always returns a valid tensor and that get_approach_grasp_poses() always finds a grasp. However annotate() can return None and grasp generation can fail (empty candidates), which will currently crash inside the loop. Please add explicit validation/error handling here (e.g., raise a clear exception when annotation/grasp search fails) so callers get a deterministic failure mode.

Suggested change (replacing the pose-loop block with a validated version):

    if self._hit_point_pairs is None or (
        hasattr(self._hit_point_pairs, "__len__")
        and len(self._hit_point_pairs) == 0
    ):
        raise RuntimeError(
            "RigidObject.get_grasp_pose(): grasp annotation failed; "
            "no antipodal point pairs were generated for the object."
        )
    poses = self.get_local_pose(to_matrix=True)
    poses = torch.as_tensor(poses, dtype=torch.float32, device=self.device)
    grasp_poses: List[torch.Tensor] = []
    open_lengths: List[torch.Tensor] = []
    for idx, pose in enumerate(poses):
        grasp_pose, open_length = self._grasp_annotator.get_approach_grasp_poses(
            self._hit_point_pairs, pose, approach_direction, is_visual=False
        )
        if grasp_pose is None or open_length is None:
            raise RuntimeError(
                f"RigidObject.get_grasp_pose(): failed to compute grasp pose "
                f"for pose index {idx}; no valid grasp candidates found."
            )
        grasp_poses.append(grasp_pose)
        open_lengths.append(open_length)
    if len(grasp_poses) == 0:
        raise RuntimeError(
            "RigidObject.get_grasp_pose(): no grasp poses were generated."
        )
        grasp_poses = torch.cat(
            [grasp_pose.unsqueeze(0) for grasp_pose in grasp_poses], dim=0
        )

        if is_visual:
            vertices = self._entities[0].get_vertices()
            triangles = self._entities[0].get_triangles()
            scale = self._entities[0].get_body_scale()
            vertices = vertices * scale
Copilot AI (Mar 30, 2026): In the is_visual block, vertices, triangles, and scale are recomputed but never used (the visualizer uses self._grasp_annotator's stored mesh). This is dead code and can be removed to avoid confusion and extra work on large meshes.

Suggested change: delete the four recomputation lines (vertices, triangles, scale, and the rescale) in the is_visual block.
            self._grasp_annotator.visualize_grasp_pose(
                obj_pose=poses[0],
                grasp_pose=grasp_poses[0],
                open_length=open_lengths[0].item(),
            )

        return grasp_poses
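The final concatenation in get_grasp_pose() builds the (N, 4, 4) batch by unsqueezing each pose before torch.cat; torch.stack expresses the same batching more directly. A small self-contained check (identity matrices as illustrative stand-ins for real grasp poses):

```python
import torch

# Per-env (4, 4) grasp poses, as collected inside get_grasp_pose()
# (identity matrices used here purely for illustration).
grasp_poses = [torch.eye(4) for _ in range(3)]

# The pattern used in the diff above:
batched_cat = torch.cat([p.unsqueeze(0) for p in grasp_poses], dim=0)

# torch.stack produces the identical (N, 4, 4) batch without the unsqueeze.
batched_stack = torch.stack(grasp_poses, dim=0)
```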