[Question] Using wheel-based mobile robots in simulation #839
Comments
I'm not sure if this has changed in a more recent PhysX release, but previously there was no true cylinder collision shape for the GPU pipeline: cylinders defaulted to an n-gon convex approximation (18 sides by default, I believe, configurable up to 64). As a result, directly simulating wheels for RL was not physically realistic. Instead, the supported workflow was to treat the robot base as the end effector of a virtual arm moving parallel to the ground, let the policy act in that virtual arm's action space (pose deviation, velocity, acceleration, whatever you prefer), and drive the physical robot with a low-level pose/velocity controller. The rationale was that for most indoor environments, "locomotion" is not the hard task that must be learned with RL; the hard tasks are (potentially perceptive) navigation, and navigation combined with medium-to-long-horizon manipulation. I'm not sure whether an official example of this workflow exists, and if the supported workflow has changed, please correct me.
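To illustrate the low-level controller step described above, here is a minimal sketch of how a commanded base twist (the kind of velocity action the virtual arm would emit) could be mapped to wheel angular velocities for a differential-drive base. The function name, the differential-drive assumption, and the geometry parameters are all hypothetical illustrations, not part of the Isaac Lab or PhysX API:

```python
def diff_drive_wheel_speeds(v, omega, wheel_radius, track_width):
    """Map a commanded base twist to wheel angular velocities.

    v            -- forward linear velocity of the base [m/s]
    omega        -- yaw rate of the base [rad/s]
    wheel_radius -- wheel radius [m]
    track_width  -- lateral distance between the two wheels [m]

    Returns (left, right) wheel angular velocities [rad/s],
    assuming an idealized differential-drive kinematic model.
    """
    # Linear velocity of each wheel's contact point along the ground.
    v_left = v - omega * track_width / 2.0
    v_right = v + omega * track_width / 2.0
    # Convert contact-point speed to wheel spin rate.
    return v_left / wheel_radius, v_right / wheel_radius
```

For example, a pure forward command spins both wheels equally, while a pure yaw command spins them in opposite directions; a tracking controller would feed such targets to velocity-controlled joint drives instead of simulating wheel-ground contact directly.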
The PhysX team is working on proper wheel simulation. Once that's ready, we'll add some concrete examples of wheel-based mobile robots. Stay tuned :)
This issue was moved to a discussion.
You can continue the conversation there. Go to discussion →
Dear Isaac Lab Team,
I hope this message finds you well.
I am currently working on a project involving reinforcement learning for mobile robots, and I am interested in using Isaac Lab for this purpose. I have a few questions regarding the support for mobile robots:
Are there any plans to update Isaac Lab to better support reinforcement learning for mobile robots? If so, could you provide an estimated timeline for these updates?
In the meantime, are there any existing examples or resources within Isaac Lab that you would recommend for getting started with reinforcement learning on mobile robots?
Thank you very much for your time and assistance. I look forward to your response.