Replies: 3 comments
Hi @eiceland, skrl already supports vectorized environments for both single-agent and multi-agent algorithms.
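For illustration, even without a dedicated wrapper, the core of "vectorized" multi-agent training is stepping N independent copies of a PettingZoo-style parallel environment in lockstep and batching the per-agent results along a leading environment axis. The sketch below shows that mechanic in plain Python; `ToyParallelEnv` and `VectorizedParallelEnv` are illustrative names, not skrl or PettingZoo API.

```python
# Sketch: hand-rolled vectorization of a PettingZoo-style parallel env.
# All names here are hypothetical, for illustration only.

class ToyParallelEnv:
    """Minimal two-agent parallel env: each agent observes a step counter."""
    agents = ["agent_0", "agent_1"]

    def reset(self):
        self.t = 0
        return {a: [0.0] for a in self.agents}

    def step(self, actions):
        self.t += 1
        obs = {a: [float(self.t)] for a in self.agents}
        rewards = {a: float(actions[a]) for a in self.agents}
        dones = {a: self.t >= 3 for a in self.agents}
        return obs, rewards, dones


class VectorizedParallelEnv:
    """Run num_envs copies in lockstep; batch per-agent data over a leading env axis."""

    def __init__(self, env_fn, num_envs):
        self.envs = [env_fn() for _ in range(num_envs)]
        self.agents = self.envs[0].agents

    def reset(self):
        per_env = [env.reset() for env in self.envs]
        return {a: [obs[a] for obs in per_env] for a in self.agents}

    def step(self, batched_actions):
        # batched_actions: {agent: [action_for_env_0, action_for_env_1, ...]}
        results = [
            env.step({a: batched_actions[a][i] for a in self.agents})
            for i, env in enumerate(self.envs)
        ]
        obs = {a: [r[0][a] for r in results] for a in self.agents}
        rewards = {a: [r[1][a] for r in results] for a in self.agents}
        dones = {a: [r[2][a] for r in results] for a in self.agents}
        return obs, rewards, dones


venv = VectorizedParallelEnv(ToyParallelEnv, num_envs=4)
obs = venv.reset()                       # {agent: list of 4 observations}
actions = {a: [1.0] * 4 for a in venv.agents}
obs, rewards, dones = venv.step(actions)
print(len(obs["agent_0"]))               # 4 (one observation per env copy)
```

In practice the inner lists would be stacked into tensors (one row per environment) before being fed to the agent, which is the layout batched RL algorithms such as MAPPO expect.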
Thank you for your response! I understand that Isaac Lab provides examples of multi-agent environments with vectorization, but my use case involves a custom multi-agent environment implemented and trained in PyTorch, not Isaac Lab. I've reviewed the documentation and examples, but I couldn't find a clear way to apply vectorization in this scenario. Could you please clarify whether there is a way to use skrl's wrappers or utilities for vectorized multi-agent environments outside Isaac Lab? If so, a minimal example or documentation reference would be very helpful.
@Toni-SM bumping this. I could not figure out whether training (MAPPO in my case) with vectorized multi-agent PettingZoo environments is supported, or whether it can be achieved some other way. Thank you for the amazing work on this library and for taking the time to respond.
Hi everyone,
First of all, I want to say that skrl is an amazing library! The API is very clear, and the documentation is well-structured, which made it easy for me to get started. Great work to everyone involved!
I’ve recently implemented a single-agent custom environment and successfully trained it using vectorized environments with PyTorch. Everything worked smoothly.
Now I’ve moved to a multi-agent custom environment, but I couldn’t find a way to train it using vectorized environments. When I run training with only a single environment instance, the process is quite slow.
My main question: is there a supported way to train a custom multi-agent environment with vectorized environments in skrl, and if not, is there a recommended workaround? Any guidance or best practices would be greatly appreciated!
Thanks in advance for your help.
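One detail worth handling if the environment copies are stepped by hand, as in the single-agent case described above: when one copy finishes its episode, it should be reset individually so the whole batch keeps advancing. A minimal sketch of that auto-reset pattern follows; `CountdownEnv` and `step_with_autoreset` are hypothetical names, not skrl API.

```python
# Sketch: auto-reset for manually batched multi-agent environments.
# When one copy finishes, reset just that copy so all N keep stepping.

class CountdownEnv:
    """Toy two-agent env that terminates after 'horizon' steps."""
    agents = ["agent_0", "agent_1"]

    def __init__(self, horizon):
        self.horizon = horizon

    def reset(self):
        self.t = 0
        return {a: [0.0] for a in self.agents}

    def step(self, actions):
        self.t += 1
        done = self.t >= self.horizon
        obs = {a: [float(self.t)] for a in self.agents}
        return obs, {a: 1.0 for a in self.agents}, {a: done for a in self.agents}


def step_with_autoreset(envs, batched_actions):
    """Step each copy; reset any copy whose agents are all done."""
    agents = envs[0].agents
    obs_list, rew_list, done_list = [], [], []
    for i, env in enumerate(envs):
        obs, rew, done = env.step({a: batched_actions[a][i] for a in agents})
        if all(done.values()):
            obs = env.reset()  # fresh episode for this copy only
        obs_list.append(obs)
        rew_list.append(rew)
        done_list.append(done)

    def batch(per_env):
        # re-batch per agent: {agent: [value_env_0, value_env_1, ...]}
        return {a: [x[a] for x in per_env] for a in agents}

    return batch(obs_list), batch(rew_list), batch(done_list)


envs = [CountdownEnv(horizon=h) for h in (1, 3)]
for env in envs:
    env.reset()
actions = {a: [0.0, 0.0] for a in envs[0].agents}
obs, rew, done = step_with_autoreset(envs, actions)
# copy 0 finished (horizon 1) and was auto-reset: its observation is 0.0 again,
# while its done flag still reports the terminated episode
print(obs["agent_0"])   # [[0.0], [1.0]]
print(done["agent_0"])  # [True, False]
```

Returning the reset observation together with the terminal `done` flag is the common vectorized-env convention, so the training loop can record episode boundaries without stalling the batch.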