With @Clement, we are currently developing Explauto, a Python library designed to study, model, and simulate curiosity-driven learning and exploration in robotic agents. The main idea behind this framework is to provide a common interface for implementing active sensorimotor learning algorithms (details about the scientific grounding can be found here).
While designed to be easily integrated with various simulated or robotic environments, Explauto is tightly linked with Pypot through a dedicated PypotEnvironment. It thus makes it easy to quickly set up autonomous exploration experiments with Dynamixel-based robots, and Poppy in particular.
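To give a concrete flavour of what such a common environment interface looks like, here is a minimal, self-contained sketch. The class and method names below are illustrative only (not Explauto's exact API), and the 2-DoF planar arm is a toy stand-in for a real robot:

```python
import math


class Environment:
    """Illustrative sketch of a sensorimotor environment interface:
    an environment maps a motor command m to a sensory effect s."""

    def compute_sensori_effect(self, m):
        raise NotImplementedError


class TwoJointArm(Environment):
    """Toy 2-DoF planar arm: joint angles (a1, a2) -> hand position (x, y)."""

    def __init__(self, lengths=(0.5, 0.5)):
        self.lengths = lengths

    def compute_sensori_effect(self, m):
        a1, a2 = m
        l1, l2 = self.lengths
        x = l1 * math.cos(a1) + l2 * math.cos(a1 + a2)
        y = l1 * math.sin(a1) + l2 * math.sin(a1 + a2)
        return (x, y)


arm = TwoJointArm()
print(arm.compute_sensori_effect((0.0, 0.0)))  # arm fully extended: (1.0, 0.0)
```

Any exploration algorithm written against such an interface can then run unchanged on a simulated arm or, through a Pypot-backed environment, on a physical robot.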
The source code is available in our GitHub repository under the GPLv3 license. Explauto follows the same open principles as the Poppy-Project (most people involved in Explauto are also involved in the Poppy-Project) and shares its open-source and open-science philosophy. We will therefore try to post on this forum all experiments made with our library, especially those based on Poppy.
To illustrate what you can do with Explauto and Poppy, we designed a very simple experiment where Poppy learns the inverse kinematics model of its arm: i.e., it learns how to reach a specific 3D location with its hand by controlling the motors of its arm. While this is a well-known problem that can be solved relatively easily with analytical methods in such a low-dimensional space (Poppy's arm has only 4 DoF), we think it makes a good proof-of-concept demo.
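The idea behind such data-driven inverse kinematics can be sketched in a few lines: explore motor space, record which hand positions result, and invert the mapping by lookup. The sketch below uses a toy 2-DoF planar arm and a simple nearest-neighbour inverse model; it is a minimal illustration of the principle, not the actual model used in the demo:

```python
import math
import random


def forward(m, lengths=(0.5, 0.5)):
    """Forward kinematics of a toy 2-DoF planar arm: angles -> hand (x, y)."""
    a1, a2 = m
    l1, l2 = lengths
    return (l1 * math.cos(a1) + l2 * math.cos(a1 + a2),
            l1 * math.sin(a1) + l2 * math.sin(a1 + a2))


random.seed(0)

# Motor babbling: sample random joint angles and record (motor, sensor) pairs.
database = []
for _ in range(5000):
    m = (random.uniform(-math.pi, math.pi), random.uniform(-math.pi, math.pi))
    database.append((m, forward(m)))


def inverse(goal):
    """Nearest-neighbour inverse model: reuse the motor command whose
    observed hand position is closest to the goal."""
    best = min(database,
               key=lambda p: (p[1][0] - goal[0]) ** 2 + (p[1][1] - goal[1]) ** 2)
    return best[0]


goal = (0.3, 0.6)
m = inverse(goal)
reached = forward(m)
err = math.hypot(reached[0] - goal[0], reached[1] - goal[1])
print("reaching error:", err)
```

With a few thousand babbling samples, the reaching error on this toy arm becomes small; on a real robot, the same loop runs through the environment interface, with the tracking device providing the sensory feedback.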
To go a bit further, we also show that, thanks to Explauto's pool of experiments, you can easily run this experiment simultaneously on both arms under various conditions. Here, as a toy example, we compare motor babbling and goal babbling strategies.
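For readers unfamiliar with the two strategies: motor babbling samples random motor commands, while goal babbling samples random goals in sensory space and tries to reach them with the current inverse model. The self-contained sketch below contrasts the two on a toy 2-DoF planar arm; all names and the evaluation grid are our own illustration, not the experiment's actual code:

```python
import math
import random


def forward(m):
    """Toy 2-DoF planar arm forward kinematics."""
    x = 0.5 * math.cos(m[0]) + 0.5 * math.cos(m[0] + m[1])
    y = 0.5 * math.sin(m[0]) + 0.5 * math.sin(m[0] + m[1])
    return (x, y)


def nearest(db, s):
    """Return the (motor, sensor) pair whose sensor point is closest to s."""
    return min(db, key=lambda p: (p[1][0] - s[0]) ** 2 + (p[1][1] - s[1]) ** 2)


random.seed(1)
N = 2000

# Motor babbling: uniformly random motor commands.
motor_db = []
for _ in range(N):
    m = (random.uniform(-math.pi, math.pi), random.uniform(-math.pi, math.pi))
    motor_db.append((m, forward(m)))

# Goal babbling: draw a random goal in sensor space, reach for it with the
# current inverse model plus a small motor perturbation, record the outcome.
goal_db = [((0.0, 0.0), forward((0.0, 0.0)))]
for _ in range(N - 1):
    goal = (random.uniform(-1, 1), random.uniform(-1, 1))
    m_near = nearest(goal_db, goal)[0]
    m = tuple(a + random.gauss(0, 0.1) for a in m_near)
    goal_db.append((m, forward(m)))


def mean_error(db):
    """Mean reaching error over a grid of reachable test goals."""
    errs = []
    for gx in [i / 10 for i in range(-9, 10, 3)]:
        for gy in [i / 10 for i in range(-9, 10, 3)]:
            if 0.1 < math.hypot(gx, gy) < 1.0:  # keep goals inside the workspace
                s = nearest(db, (gx, gy))[1]
                errs.append(math.hypot(s[0] - gx, s[1] - gy))
    return sum(errs) / len(errs)


print("motor babbling:", mean_error(motor_db))
print("goal babbling: ", mean_error(goal_db))
```

On this easy low-dimensional arm both strategies cover the workspace; the interesting differences appear in higher-dimensional or redundant spaces, which is exactly what running the two conditions side by side on Poppy's two arms lets you observe.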
You can replicate this demo directly by adapting the code used for the experiment (note that the OptiTrack sensor we used could be replaced by any other tracking device).