Releases: DexForce/EmbodiChain

v0.1.3

06 Apr 15:34
99cbe0a

EmbodiChain v0.1.3 Release Notes

Highlights

This release introduces multi-GPU RL training support, new simulation object types (cloth, grasp annotator), enhanced physics and action functor APIs, CLI improvements, and many bug fixes. For details, see below.

Breaking Changes

  • Articulation Default Drive: Changed articulation default drive type to none by @matafela in #210
  • Action Functor Modes: Added support for pre and post mode in action functor execution by @yuecideng in #183
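For readers unfamiliar with the pre/post distinction, here is a minimal, generic sketch of how execution modes for action functors typically work. This is not EmbodiChain's actual API; the class and method names below are hypothetical illustrations only.

```python
# Generic sketch of pre/post-mode functors around an environment step.
# NOT EmbodiChain's real API; all names here are hypothetical.

class ActionFunctorPipeline:
    def __init__(self):
        self._functors = {"pre": [], "post": []}

    def register(self, fn, mode="pre"):
        if mode not in self._functors:
            raise ValueError(f"mode must be 'pre' or 'post', got {mode!r}")
        self._functors[mode].append(fn)

    def step(self, action, env_step):
        # "pre" functors transform the action before it reaches the simulator.
        for fn in self._functors["pre"]:
            action = fn(action)
        result = env_step(action)
        # "post" functors run after the physics step, e.g. for logging or clamping.
        for fn in self._functors["post"]:
            result = fn(result)
        return result
```

The key point of the change is that a functor can now declare whether it acts on the raw action before simulation ("pre") or on the result afterwards ("post").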

New Features

  • Multi-GPU RL Training: Added support for multi-GPU training for reinforcement learning by @yangchen73 in #188
  • Cloth Object: Added cloth object simulation support by @matafela in #198
  • Grasp Annotator: Added grasp annotator for annotation pipeline by @matafela in #196
  • Mass Randomization: Added articulation mass randomization functor by @yuecideng in #219
  • Extra Observations: Added support for extra observations from gym_config by @yuecideng in #205
  • Asset Preview CLI: Added standalone asset preview script by @yuecideng in #207
  • CLI Entry Points: Added top-level CLI entry points for preview_asset and run_env by @yuecideng in #214
  • Configurable Data Roots: Added configurable data roots and asset download CLI by @yuecideng in #209
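As an illustration of what a mass-randomization functor does conceptually, the sketch below samples a uniform scale factor per link mass at reset. This is a generic, self-contained example with hypothetical names, not EmbodiChain's implementation.

```python
# Generic sketch of articulation mass randomization: scale each nominal
# link mass by a factor drawn uniformly from scale_range.
# NOT EmbodiChain's real API; names are hypothetical.
import random

def make_mass_randomizer(scale_range=(0.8, 1.2), seed=None):
    rng = random.Random(seed)

    def randomize(default_masses):
        return [m * rng.uniform(*scale_range) for m in default_masses]

    return randomize
```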

Bug Fixes

  • Aligned Pose: Fixed usage bugs in the get_aligned_pose and _prepare_warpping functions by @wuxinxin27 in #217
  • Gizmo API: Updated to new Gizmo API by @yuecideng in #213
  • Camera Recording: Fixed camera recording save on episode reset by @yuecideng in #208
  • Observation Manager: Fixed KeyError when the add mode is not present in observation_manager.active_functors by @yuecideng in #200
  • Video Saving: Fixed video saving bug for the first frame in data generation by @yuecideng in #199

Improvements

  • Physics Attributes: Enhanced physics attributes APIs and related functors by @yuecideng in #193
  • Contact Sensor: Improved contact sensor data buffer and added support for saving data to LeRobot dataset by @yuecideng in #197
  • Task Imports: Moved task env imports to tasks/__init__.py by @yuecideng in #212
  • ARX5 Asset: Updated ARX5 robot assets by @matafela in #192

Documentation

  • Functor Docs: Added hyperlinks and JSON config examples to functor docs by @yuecideng in #220
  • Contribution Guide: Updated env and robot contribution guide by @yuecideng in #194
  • PR Skill: Added PR skill for creating pull requests by @yuecideng in #201
  • Contributor: Added Lizhe Chen as a contributor by @ChenlizheMe in #195

Full Changelog: v0.1.2...v0.1.3

v0.1.2

19 Mar 16:10
3d0091d

EmbodiChain v0.1.2 Release Notes

Highlights

This release introduces significant improvements to the RL training pipeline, new gripper support, USD interoperability, an Online Data Streaming implementation, and many bug fixes. For details, see below.

Breaking Changes

  • RL Rollout Refactoring: Refactored RL rollout with collector/buffer separation and shared TensorDict by @yangchen73 in #175
  • Action Manager: Implemented an action manager to process action and replaced RLEnv with EmbodiEnv by @yangchen73 in #164
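The collector/buffer separation can be illustrated with a minimal, generic sketch: the collector drives the environment and appends transitions, while the buffer only stores and samples them. This is not EmbodiChain's real design; all names below are hypothetical.

```python
# Generic sketch of a collector/buffer split for RL rollouts.
# NOT EmbodiChain's real API; names are hypothetical.
import random

class RolloutBuffer:
    """Stores transitions; knows nothing about how they were collected."""
    def __init__(self):
        self.transitions = []

    def add(self, transition):
        self.transitions.append(transition)

    def sample(self, k, rng=random):
        return rng.sample(self.transitions, k)

class Collector:
    """Steps the environment with a policy and feeds the buffer."""
    def __init__(self, env_step, policy, buffer):
        self.env_step, self.policy, self.buffer = env_step, policy, buffer

    def collect(self, obs, num_steps):
        for _ in range(num_steps):
            action = self.policy(obs)
            next_obs, reward = self.env_step(action)
            self.buffer.add((obs, action, reward, next_obs))
            obs = next_obs
        return obs
```

Separating the two responsibilities lets training code swap buffers (on-policy vs. replay) without touching rollout logic.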

New Features

  • SDF Collision: Added support for SDF collision for rigid bodies by @MahooX in #185
  • Arena Space: Added arena_space argument to environment configuration by @yuecideng in #187
  • Online Data Streaming: Added support for online data streaming engine by @yuecideng in #160
  • USD Import: Added import USD functionality by @MahooX in #127
  • USD Export: Added export USD functionality by @MahooX in #168
  • GRPO Algorithm: Implemented GRPO (Group Relative Policy Optimization) algorithm by @yangchen73 in #146
  • GRPO Camera: Added camera to record GRPO training by @yangchen73 in #146
  • Gripper Support: Added Robotiq 85/140 and DH AG95 gripper support by @matafela in #177
  • Semantic Mask: Split robot left/right index in semantic mask computation by @dexscutai in #142
  • Indirect Light: Added support for indirect light setting for environment by @yuecideng in #161
  • Planner Enhancement: Added planner grid cell sampler by @yuecideng in #147
  • Light Control: Added light keyboard control for runtime adjustment by @yuecideng in #140
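The core idea behind GRPO is to compute advantages relative to a group of rollouts for the same task, normalizing each reward against the group mean and standard deviation instead of using a learned value baseline. The sketch below is a simplified illustration of that normalization step, not EmbodiChain's implementation.

```python
# Simplified sketch of GRPO's group-relative advantage computation:
# each rollout's reward is normalized against its group's statistics.
# An illustration only, not EmbodiChain's implementation.
from statistics import mean, stdev

def group_relative_advantages(rewards, eps=1e-8):
    mu = mean(rewards)
    sigma = stdev(rewards) if len(rewards) > 1 else 0.0
    return [(r - mu) / (sigma + eps) for r in rewards]
```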

Bug Fixes

  • URDF Assembly: Fixed URDF assembly for copying mtl file by @yuecideng in #184
  • Dataset Saving: Fixed dataset saving for single episode in run_env by @yuecideng in #178
  • Contact Sensor: Fixed contact sensor visualization by @yangchen73 in #170
  • Observation Manager: Fixed observation manager issue for adding new keys by @yuecideng in #169
  • OPW Solver: Fixed OPW solver issues by @matafela in #152
  • Robot IK: Fixed [xyz, quat] format pose IK computation for robots by @yuecideng in #149
  • Trainer Config: Fixed num_envs override from trainer config by @yangchen73 in #148
  • Cartpole Training: Fixed cartpole training by @yangchen73 in #155
  • UR5 Asset: Updated UR5 assets by @yuecideng in #167

Improvements

  • Functor Tests: Added unit tests for all functors by @yuecideng in #181
  • RigidObject Velocity: Added set velocity for RigidObject by @yuecideng in #179
  • Manager Docs: Enhanced manager documentation for action, reward, and dataset by @yuecideng in #176
  • Dataset Filter: Added filter dataset saving for environment by @yuecideng in #143
  • Cached Property: Updated cached property implementation by @yhnsu in #141
  • Trajectory Interp: Added interpolated trajectory with segment numbers by @matafela in #151
  • Physics Step: Updated physics step for spatial functors by @yuecideng in #144
  • MotionGenerator: Refactored Planner and MotionGenerator by @matafela in #173

Documentation

  • URDF Assembly Tool: Added documentation for URDF Assembly Tool by @yuecideng in #174
  • New Robot: Added documentation for adding new robots by @yuecideng in #172
  • RL Module: Updated RL module API documentation by @yuecideng in #171
  • AGENTS.md: Added AGENTS.md for developer reference by @yuecideng in #165
  • CLAUDE.md: Added CLAUDE.md with AI assistant guidelines by @yuecideng in #162

Full Changelog: v0.1.1...v0.1.2


This release contains 39 commits from 6 contributors: @Chen Yang, @Chen Jian, @yuecideng, @MahooX, @yhnsu, and @dexscutai.

v0.1.1

13 Feb 14:21
c965e0a

🚀 EmbodiChain v0.1.1 Release Notes

Highlights

This patch release focuses on stability and usability: environment observation space correctness, more accurate camera pose reporting in the keyboard control tool, and cleaner environment launch workflows—plus documentation updates to streamline installation and onboarding.

What’s Changed

🔧 Improvements

  • Refactor env launcher scripts by @yuecideng in #126
  • Change visual material setter behavior for RigidObjectGroup by @yuecideng in #107
  • Support shared visual material instances by @yuecideng in #110
  • Enable get_sensor_pose_in_robot_frame for stereo camera by @dexscutai in #111
  • Save success status before reset by @yhnsu in #119
  • Support camera group IDs for batch rendering on RT backend by @yuecideng in #108
  • Perf: Modified URDF path handling to pass from urdf_cfg to solver_cfg by @chase6305 in #135

Full Changelog: v0.1.0...v0.1.1

v0.1.0

29 Jan 16:03
dfd93a4

🚀 EmbodiChain v0.1.0 Release Notes

EmbodiChain is an end-to-end, GPU-accelerated, and modular platform for building generalized Embodied Intelligence. We are excited to announce this early release, which includes the basic project structure, the simulation components, the basic gym components for robot learning, a simple RL algorithm with examples, simulation examples, and documentation.

The project website can be found here: https://dexforce.com/embodichain/index.html#/.
For more amazing features (coming in the future), please check the roadmap: https://dexforce.github.io/EmbodiChain/resources/roadmap.html


🌟 The Contributors

We are grateful to our contributors in this release. Thank you for your contributions!