Some questions on the Poppy Humanoid (sensors & control)

Hi all! I’m new here (and to robotics in general), so please excuse the sloppy terminology/concepts.

Okay, so I am taking an Applied Reinforcement Learning class this semester (graduate level, of course), and the class project involves choosing a robot (it can be a Poppy) and demonstrating a fairly non-trivial RL application with it.

Now (ambitious and/or naive as I am), I was thinking of doing something like making the Poppy Humanoid stand, or maybe even walk bipedally. But after a quick surf through this forum, it seems like that would be borderline impossible.

I have the following questions before deciding whether to use the Poppy Humanoid for my project:
1) Is it possible to have torque control on the Poppy (so far, I have only seen position control)?
2) Can we read torque off the sensors (again, THIS link says no, but I wanted to confirm whether there have been any updates since)?
3) Is there anything else I should know before considering the Poppy for an RL project (bipedal walking, or just making it stand)?

Thanks!

P.S. Sorry if these questions are a little too trivial!

Hi,

I confirm: within the scope of a class project, learning a Poppy walking gait with reinforcement learning is not feasible, and it would be very disappointing for the students.
It is possible to have torque control, but you have to hack the Dynamixels with a new firmware, at the risk of losing them. Incidentally, it is true that in robotics we usually reason in terms of torque control, while few actuators actually support it. Most of them directly run a PID loop on position or speed. The reason is that, because of gearbox backlash, torque control is very non-linear. In the future, low-speed brushless actuators may solve this problem.
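To make the distinction concrete, here is a minimal sketch of what the stock setup gives you out of the box: you send position setpoints and the servo's internal PID does the control. It uses pypot's low-level Dynamixel interface; the port detection, the motor ID range and the 20° target are just placeholders to adapt to your bus.

```python
import pypot.dynamixel

# Find a serial port with a Dynamixel bus attached (e.g. USB2AX / USB2Dynamixel).
ports = pypot.dynamixel.get_available_ports()
if not ports:
    raise IOError('No Dynamixel port found.')

with pypot.dynamixel.DxlIO(ports[0]) as dxl_io:
    ids = dxl_io.scan(range(25))  # look for the first few motor IDs
    print('Found motors:', ids)

    # Send a position setpoint in degrees; the servo's internal PID loop
    # does the actual control, not our Python code.
    dxl_io.set_goal_position({ids[0]: 20.0})

    print(dxl_io.get_present_position(ids))
```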

The torque value returned by the Dynamixels is only an image of the motor current. It is too noisy to use in a control loop, but it is sufficient to protect the motor and to know whether it is being forced clockwise or counterclockwise.
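For completeness, this is roughly how you would sample that load value and see the noise for yourself. I am assuming your pypot version exposes the auto-generated get_present_load accessor (some versions may only provide the combined position/speed/load getter), and that the returned value is a signed percentage whose sign encodes the direction the motor is being forced.

```python
import time
import pypot.dynamixel

ports = pypot.dynamixel.get_available_ports()
with pypot.dynamixel.DxlIO(ports[0]) as dxl_io:
    ids = dxl_io.scan(range(25))

    # Sample the "present load" register for a few seconds. It is only an
    # image of the motor current: good enough to tell in which direction
    # the motor is being forced, too noisy to close a torque loop on.
    for _ in range(50):
        loads = dxl_io.get_present_load(ids)  # assumed accessor, see note above
        print(['{:+.1f}'.format(l) for l in loads])
        time.sleep(0.1)
```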

For a class project, there is a lot to do with reinforcement learning on the arms, such as grasping objects… From a full-humanoid perspective, what about walking on all fours, or crawling? You would have to take self-collision into account, which is a very nice subject.
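If you start in simulation, the sense-act loop an RL agent has to plug into is quite small with pypot. Here is a rough sketch only: the import path, the simulator='vrep' launch, reset_simulation and the random policy are assumptions to adapt to your install, and the reward is left as a placeholder.

```python
import random
import time
# Depending on your install, the import may instead be `from pypot.creatures import PoppyHumanoid`.
from poppy.creatures import PoppyHumanoid

# Use the simulated Poppy in V-REP so the real robot is not at risk while learning.
poppy = PoppyHumanoid(simulator='vrep')

# A few left-arm joints as the action space (these are standard Poppy motor names).
arm = [poppy.l_shoulder_y, poppy.l_shoulder_x, poppy.l_elbow_y]

for episode in range(10):
    poppy.reset_simulation()  # assumed to be available on the V-REP-backed robot
    for step in range(100):
        # Observation: current joint angles in degrees.
        obs = [m.present_position for m in arm]

        # Action: a random policy here; your RL agent would go in its place.
        action = [random.uniform(-20, 20) for _ in arm]
        for m, a in zip(arm, action):
            m.goal_position = a

        # Reward is task-specific (e.g. distance crawled, penalty for
        # self-collision) and is deliberately left out of this sketch.
        time.sleep(0.1)

poppy.close()
```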


Thanks for the heads-up! I’ll definitely consider your project ideas if my team goes ahead with the Poppy Humanoid. But first we have to choose a robot. We are also considering something like obstacle avoidance or swarm navigation (TurtleBot or e-puck). But I’m personally more inclined towards the Poppy Humanoid because, well, humanoids are much cooler :stuck_out_tongue: Though it’s a shame that bipedal walking won’t be possible.

Thanks again!

Yes, of course. I sometimes say that the Poppy Humanoid cannot walk yet, but it can do a lot of things other humanoids cannot, thanks to its articulated spine, its size, and its “open-sourceness”. Just imagine. I confirm that humanoids are much cooler, but walking, even if it is practical, is not the main point.
And reinforcement learning can help reach nice and unique behaviours!

See my Poppy crawling :slight_smile: it is possible. Reinforcement learning on this could be cool.
