[Feature] Add WidowXAI Robot, specify robot ids in example scripts #904
base: main
Conversation
4.2.mp4

I was hoping for a clear video of picking up the cube to close out this PR, but despite sweeping over max_steps and seeds, all I got were some lucky solves that are really more of a push than a pick. I think the biggest issue is that this robot is small, so it starts far from the cube, and the gripper is barely big enough to grasp it. Collisions look good, but the cube needs to be smaller and nearer to the robot. Modifying the task with even more kwargs feels like it would just add verbosity and cruft. IMO this feels okay enough to ship, but I understand if y'all would rather not include something unless it passes convincingly.
It's strange that it can't learn to grasp an object. We generally do want some sign of life that manipulation works before adding a manipulation system. A couple of suggestions:
To get these joint-pos reference points, you can open the viewer, click the end-effector, then click the transform tab to drag the robot to the cube and read off where it is.
Quick update on this project. What has been done:
funny_out.mp4
output2.mp4
output.mp4
output3.mp4
What will be done: I will run a similar sweep but on the
For PickCube, another consideration is that the robot is too small anyway: its reach is not as large as the Franka/Panda arm's, and this task's randomization was designed with the Panda arm in mind. Try lowering the amount of cube pose randomization and goal site randomization (e.g. the goal should probably not be so high, since this arm can't reach it). Also, what PPO script are you using at the moment?
I am using
Oh, you definitely do not need to sweep that many control modes. It should work with `pd_joint_delta_pos`; if not, then something is modeled poorly. Also, do not use any absolute control spaces (like `pd_joint_pos` or `pd_ee_pose`): RL does not learn from those easily unless you modify the reward function to penalize large actions.
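To illustrate the point above (this is not ManiSkill code, just a hypothetical numpy sketch): a delta-joint controller interprets each action as a small, bounded offset from the current joint positions, so no single action can command a large jump, which keeps exploration local and makes RL far more stable than absolute position targets. All names below are made up for illustration.

```python
import numpy as np

def apply_delta_action(qpos, action, max_delta=0.1, joint_limits=None):
    """Interpret `action` in [-1, 1] as a bounded per-joint offset,
    mimicking a pd_joint_delta_pos-style controller (hypothetical helper).

    qpos: current joint positions (radians)
    action: normalized policy output, clipped to [-1, 1]
    max_delta: largest per-step joint movement allowed
    joint_limits: optional (lower, upper) arrays to clamp the target
    """
    delta = np.clip(action, -1.0, 1.0) * max_delta
    target = qpos + delta
    if joint_limits is not None:
        lo, hi = joint_limits
        target = np.clip(target, lo, hi)
    return target

# Even a maximal action only moves each joint by max_delta radians per step.
qpos = np.zeros(6)
target = apply_delta_action(qpos, np.ones(6))
```

With an absolute controller, a single bad action can teleport the commanded target across the workspace; here the worst case is a `max_delta` nudge, which is why the reward shaping can stay simple.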
curling.mp4
double_tap.mp4

@StoneT2000 I removed the scaffolding/test code, so this should be ready to merge. I am stepping away from this work for now: I am receiving the real robot soon and will focus on that for the next couple of weeks (e.g. testing out LeRobot's imitation-learning pipeline on the real robot). After that I will come back to this ManiSkill project and work towards the original goal of some kind of real2sim2real. We can merge this now and I can start a new PR later, or perhaps you would prefer one larger PR over several smaller ones. Up to you as the maintainer.
I will probably not merge for now, given that it can't seem to pick up the cube yet (I haven't looked in depth into the robot modeling). I did notice the gripper friction is 4.0; try setting it to 2.0 instead? Training should take no more than 5 minutes on a 4090 with ppo_fast. You should only try using the
Another thing to try is making the cube a bit smaller, just as a sanity check. It could be relatively too big for this robot.
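For reference, ManiSkill agent classes typically expose contact friction through a `urdf_config` dict on the agent; a sketch of what lowering the gripper friction to 2.0 might look like. The link names below are placeholders, and the exact schema should be checked against the existing WidowX250S agent before copying:

```python
# Hypothetical fragment of a WidowXAI agent class definition,
# lowering static/dynamic friction from 4.0 to 2.0 as suggested above.
urdf_config = dict(
    _materials=dict(
        gripper=dict(static_friction=2.0, dynamic_friction=2.0, restitution=0.0)
    ),
    link=dict(
        # Placeholder link names: use the actual finger link names from the URDF.
        left_finger_link=dict(material="gripper", patch_radius=0.1, min_patch_radius=0.1),
        right_finger_link=dict(material="gripper", patch_radius=0.1, min_patch_radius=0.1),
    ),
)
```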
Adding the new WidowXAI robot from Trossen Robotics. Implementation follows the format of the existing WidowX250S robot.
Added `-robot_uids` and `-r` args to specify the robot for environments in the example scripts. Robot assets are located here, again following the format of the widowx250s.
@StoneT2000 do you have a command for generating the png at ManiSkill/docs/source/robots/images?