Poppy v1 todo list

Please use this wiki post to list all the remaining bugs, features, and optimizations needed to reach version 1.0 of Poppy.


Software

[ ] Try accelerometer code on the I/O board
[ ] pypot/v-rep
[ ] Create a model of Poppy v1 for V-REP
[ ] Create a URDF of Poppy v1 following the humanoid standard


Electronics

[ ] Find the camera's connector
[ ] Find a solution to integrate a microphone
[ ] Get a Manga Screen
[ ] Fork the Manga Screen (details)
[ ] Add silkscreen (serigraphy) to the boards
[*] Find a camera …
[ ] Add two internal USB ports on the I/O board


Mechanics

[ ] Change the toe size
[ ] Slightly reduce the foot length (3/4 mm)
[ ] Embed force sensors in the feet
[*] Embed the Odroid board
[ ] Create an alternative classical leg design for Poppy, possibly inspired by the Jimmy ones
[ ] Make the knee_bumper compatible with the MX-64 configuration
[ ] Increase the knee's range of motion
[ ] Create an iCub-like support to hold Poppy


Another possible way is to use a sound card and camera with a Raspberry Pi evolution.

This sound card and camera are made for the Raspberry Pi.
The first trials running pypot on the Raspberry Pi went badly, so we will use an Odroid U3 instead.

For the moment we are planning to try this setup:

Quick memo of Odroid install for Poppy (work in progress):

Manga screen:

  • get the project: https://bitbucket.org/intelligentagent/manga-screen
  • (for rev A3): go to the directory manga-screen/touch/Atmel/Atmega32U4/LUFA-130303/Projects/MangaScreenRevA3
  • connect USB and run make && sudo make upload (you need the full Atmel dev toolchain)
  • TODO: EDID

A carousel is very useful for working on walking stability, but it requires a special link at the pelvis. Here is a photo of the carousel of Alion.

The base is a structure that holds a speaker; I just cut off the upper part.

Inside this base, I fixed a fishing rod with a long washing machine hose. A fishing rod is completely fantastic since:

  • it is telescopic (easy to transport)
  • it is very light (no disturbance on walking)
  • it is flexible (no risk of break)
  • the cable threader is included
  • you can also fish with it

Here is a close-up of the link at the pelvis, which is just an aluminium tube held in place by a crocodile clip.

I think it could be useful to think about this function in advance.


This would be a very good student project idea: design an open-source carousel and attachment system, and write the documentation.


Hello,

Well, I’ve been busy, and this is the best place I can think of to share an open-source idea that’s really good for robots in general!

I was looking over OpenCV.org and realized that, with all of the computation being done, you need a dedicated chip. The best way to do that is to design it around the Banana Pi, which is a bit faster than the Raspberry Pi. So I wrote to Nvidia, as a start for a programmable pipeline, and of course to OpenCV, to let them know that OpenCV needs to run on the pipeline as a kind of graphics accelerator, one designed to process streaming images fast enough to be connected to a Banana Pi. Once you attach the shield or module mechanically, you would attach the cameras to that board, not to the Raspberry Pi or the Banana Pi, because it does a lot of the processing for you.

When it comes to research, programmable pipelines are probably the way to go: Nvidia is present all the way through scientific and medical imaging. It’s complex, but for handling large quantities of data to be restructured by a set of equations, programmable pipelines are the best. We just need a single-board input-card solution for a single-board computer.

OpenCV might be doable with a card that normally runs DirectX or OpenGL. The existing video chips should be capable; the only real question is which one has programmable pipelines.

Basically, everything you need to convert a wireframe into an animated character, you need in reverse order of operations to convert the real world into a 3D environment where the robot can test and fail, or test and succeed, and then choose a path. We think about where we’re going before we get there, and using two cameras to decipher the real world shouldn’t be any harder than a 3D game driver. Think simpler; here’s what I mean: 2*2=4. If I reverse the operation and divide 4 by 2, I come back to the point I wanted to be, where 2=2. We can reverse the order of operations used to animate a 3D object in order to map a 3D object; either direction should take the same number of operations, but we need the result that comes from the reverse order.

It is going to need to be calibrated to some level, like a 3D printer. Things the robot should be able to do: use its full range of motion to keep its foot 1 to 3 mm off the floor, then slow the motors down as they approach that marked point so the feet don’t slam on the floor. Also needed: a list of minor inertias versus the flex of the frame and structure, and the understanding that sensors are for exception handling, like an unseen obstacle, not for the floor of a level surface. The result is a quieter footstep and, if the visual systems work right, a light touch.

I would say the point-cloud method is about the best. But the resolution I see from a 3D scanner is far beyond what a robot really needs. Think of it like a CNC machine: we don’t need movements accurate to the micron. My main point is that a human observer can see when a point cloud produces a recognizable object, so that’s how you should choose the point-cloud density. Next, include something totally different: connect the dots and make a wireframe from any surface, to be dropped into a game engine. I figure a duplicate of the Quake II engine or equivalent would be more than enough to recognize most objects. Even Quake I, the original version, has enough of a 3D engine to recognize things, and that gives you an idea of the point-cloud density really needed.

Okay, now we simulate gravity and simulate Poppy in a 3D map of a room. It’s just simple obstacle avoidance. Even at 700 MHz it beats a 486, and the Banana Pi could play the game several times faster than you’d ever want, or be humanly able to, once you take out all of the image processing, because we’re only worried about the robot’s center of gravity and which scripts it used to move towards a desired object and push it, like a ball; to interact with the animate and ignore the inanimate, or even run away from a small dog.

You’ve got to admit: if you could buy a robot that big and it could clean a toilet to hospital standards, would you? Not being a programmer, couldn’t even you foresee every restaurant having one? Someday a butler, but for now, if that’s a pre-programmed mission, they’ll sell. Just put ears and a beard on it and call it a retired shoe-making elf.

Hello,

I’m good at ideas, but I didn’t present a list. Still, it’s a good foundation for one.

But I wanted to mention something to keep your minds on: most robots could be using their motors as sensors. Any time a motor sees a load, it draws more current. So if you monitor the current of any motor closely enough, a simple obstacle and the resulting increase in load cause an increase in current flow. The more gain you apply to measuring that current, the closer you get to a touch sensor.
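
A minimal sketch of that idea with pypot (assuming DxlIO exposes a get_present_load getter, as in pypot's generated Dynamixel API; the port, motor id, and threshold are illustrative guesses):

    import time
    import pypot.dynamixel

    PORT = '/dev/ttyUSB0'    # assumed serial port
    MOTOR_ID = 41            # hypothetical motor id
    LOAD_THRESHOLD = 25.0    # percent of max torque; illustrative guess

    dxl = pypot.dynamixel.DxlIO(PORT)

    while True:
        # present_load is signed: its magnitude grows with the torque the
        # motor must produce, i.e. with the current it draws under load.
        load = abs(dxl.get_present_load((MOTOR_ID,))[0])
        if load > LOAD_THRESHOLD:
            print('contact detected (load = %.1f%%)' % load)
        time.sleep(0.02)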

My real issue is foot slam! I want a robot that won’t stomp, and I’m personally working on that problem in theory, while tackling another programming language just to have the right software for building a robot. Any time you rely on a touch sensor alone, you’ll have foot slam. Here’s what happens when the motor controller just commands a position and waits for the sensor at full speed: the sensor makes contact and the controller stops the motor, but the inertia of the movement and the slack in the gears still follow through and are released as a tap, or a loud slam, of the robot’s foot. The stomp is just the release of inertia and of the slack coming out of the gears, which lets the foot keep moving after the motor has stopped. This is why I believe surface-mapping a level surface is important: it applies to most flat surfaces and prevents stomping by slowing the motor down before the foot’s sensor detects contact. It’s just a small change in motor speed as it approaches the stopping point, only a few milliseconds different from the noisy version.
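
A minimal sketch of that deceleration idea (the distance estimate, speeds, and ramp length are all illustrative assumptions, not measured values):

    def approach_speed(distance_mm, full_speed=100.0, slow_speed=10.0,
                       ramp_start_mm=15.0):
        """Scale the motor speed down as the foot nears the mapped floor.

        distance_mm: estimated gap between foot and floor, from the surface map.
        Returns a speed command: full speed far away, a gentle creep at contact.
        """
        if distance_mm >= ramp_start_mm:
            return full_speed
        if distance_mm <= 0.0:
            return 0.0
        # Linear ramp from slow_speed (at contact) up to full_speed.
        fraction = distance_mm / ramp_start_mm
        return slow_speed + (full_speed - slow_speed) * fraction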

The AI.

Here’s how my program worked. First, I would type in a sentence. Then the program would spell-check the entire sentence and fix any errors. That helps a lot when it comes to a fast start, holding conversations within a couple of hours with a blank robot database (there are a couple of them). Record the initial conversation, keep it, and time-and-date stamp it; later it becomes a quote function. A robot can make a very good reporter of conversations, and saved overheard conversations are something it can learn from, as long as there’s a file record for every noun used, or subject of conversation. The subject matters more than grammar, because the subject of a conversation could be a verb, like running. A user has to teach/define a subject of conversation, but until then it’s assumed to be the proper noun or pronoun, and a pronoun refers back to a previous sentence and its proper noun.

You might wonder how all of that can work with just random picks.

Well, it starts by parsing the sentence as you would for this word tree.

http://www.linguisticsgirl.com/wp-content/uploads/2013/03/2013-03-05-Prepositional-Phrase-Disjunct-Adverbial-Tree.jpg

All that’s important is the first line of descriptors: noun, verb, etc. The spellchecker function also returns the part of speech of each word, and the spellchecker keeps that with the proper spelling. Some words have two functions, and the robot needs to be taught/told which of the two the word is acting as.

In the subject directory, the subject is stored in the immediate folder. Each file is alphabetically organized and segmented based on memory space. Inside the folder, a single noun could take up a whole folder, like electron (and electronics), because the list of verbs and nouns associated with that one word, electron, is so great. A subject is stored with all of its words, nouns and verbs, with identifiers and usage scores. These scores don’t change unless a word is directly used in a sentence. Storage is governed by memory size and how much fits, so there may be several files in that one folder.

The sorting is done in a simple way. First, every part of speech has a separate folder for total word counts; this is used to establish the most commonly and frequently used words, placing the top 1,000 to 10,000,000 words into upper memory for use in conversation. Once sorted, it automatically generates your greeting/superficial conversation mode, and it moves to searching the hard drive when the conversation and the quantity and quality of information demand it. It can read textbooks and use the scores as if they were overheard communications. Reading several times adjusts the scoring, and it’s a robot; it doesn’t take long to read a book 20 times. So it pulls the word trees out of the book, loads up subjects and word counts based on usage, and associates them to nouns. Then it becomes the lab assistant: if you get a little rusty, you can just ask “what was that equation?”, and the equation has to be a quote function. The more specific you get, the closer the robot gets to the right answer. There are tons of equations in electronics.
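
For the parsing step, a minimal sketch with NLTK (an assumed library choice; the original post doesn't name one):

    import nltk

    # One-time downloads of the tokenizer and tagger models.
    nltk.download('punkt')
    nltk.download('averaged_perceptron_tagger')

    sentence = "The bird flies over the garden."
    tokens = nltk.word_tokenize(sentence)
    # Each word gets its part of speech (DT = determiner, NN = noun,
    # VBZ = verb), the "first line of descriptors" of the word tree.
    print(nltk.pos_tag(tokens))
    # [('The', 'DT'), ('bird', 'NN'), ('flies', 'VBZ'), ...]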

So I would load one of the randomizing Python functions; it randomizes a list of objects, so you can throw in as many copies as you want, and from there it works like a lottery. Remember the scores: it throws that many copies into the randomizer. If I asked “what does an electron do?”, it could say: drift, spin, boil, exert pressure on space, travel at one third the speed of light through space… well, there’s a lot. I always wanted to have two of these in the same AI: one for word-tree picks and one for results. Say you totalled the scores of all the words used in two candidate sentences; it would then count the words, divide each total by the number of words in its sentence, and the highest average wins between the two competing parts of this function. A response is just: pick a word tree, then select randomly from the list of subject-associated verbs, etc., to fill in the word tree. If I just remove the sentence, look at the parts of speech, and throw darts at a dictionary to randomly pick words of the same part of speech, it’s still a working sentence, with little or no bearing on reality.
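
A minimal sketch of that score-weighted lottery (the word/score pairs are illustrative):

    import random

    # Usage scores act as weights: a word with score 85 is drawn as if
    # 85 copies of it had been thrown into the lottery drum.
    verbs = {'drift': 40, 'spin': 25, 'boil': 3, 'travel': 12}

    def pick_word(scored_words):
        words = list(scored_words)
        weights = [scored_words[w] for w in words]
        return random.choices(words, weights=weights, k=1)[0]

    print(pick_word(verbs))  # usually 'drift', occasionally 'boil'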

Okay, so a bunch of random words can make a sentence. But how does this omni-dimensional array really work? Well, the computer only works with groups of neurons that are supposed to work together, and since each word is a neuron, each file record contains several neurons whenever it’s a subject or topic of discussion. Here is the CSV model of a file record:

Subject, count, most-used verbs first, etc., then the other parts of speech:
Bird, 100, walk, 10, run, 1, fly, 85, flying, 17

The file record now represents a group of neurons, only the ones used in context with the subject, bird. Since bird is the subject, its count is there for later sorting; it could wind up in the top 1,000 and in memory for rapid access. When I save them like this, I don’t need to worry about geometry. The computer will look up Bird and use this file record to choose a sentence that’s factual, hopefully, usually by statistical odds, and use it. Lying is easily sorted out by the lowest scores over time: you could almost wait for the subject’s word score to hit 1,000 and just start deleting any words in the list with a score of 1.
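
A minimal sketch of reading such a record and pruning the score-1 words (the file layout and the pruning threshold are assumptions based on the example above):

    def parse_record(line):
        """Parse 'Bird, 100, walk, 10, ...' into (subject, count, {word: score})."""
        fields = [f.strip() for f in line.split(',') if f.strip()]
        subject, count = fields[0], int(fields[1])
        words = {fields[i]: int(fields[i + 1])
                 for i in range(2, len(fields) - 1, 2)}
        return subject, count, words

    subject, count, words = parse_record(
        'Bird, 100, walk, 10, run, 1, fly, 85, flying, 17')

    # Once the subject's own count is high enough, drop the score-1 stragglers.
    if count >= 100:
        words = {w: s for w, s in words.items() if s > 1}

    print(subject, words)  # Bird {'walk': 10, 'fly': 85, 'flying': 17}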

The trick is that the robot only speaks when spoken to, and it will always try to get in the last word. It’s just the robot’s logic. Honestly, it never understands or knows anything; it just interacts socially.

I want mine programmed with Electronics four times, and the sci-fi novel “The Ion War” eight times, to acquire my veteran of an alien war. Well, yeah, you can actually cheat a little to add some character: it would identify itself as the lead character.

Here is a software component which could be very useful for any robot: an estimate of the total mechanical energy and of its components (kinetic and potential). The required inputs are a geometric and mass representation and a global attitude measurement.
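
A minimal sketch of such an estimator for point-mass links, rotational terms omitted (the link masses, positions, and velocities are toy values; a real implementation would take them from the robot's kinematic model and attitude measurement):

    import numpy as np

    G = 9.81  # m/s^2

    def mechanical_energy(masses, positions, velocities):
        """Total mechanical energy of a set of point-mass links.

        masses:     (n,)   link masses, kg
        positions:  (n, 3) link centers of mass in the world frame, m
        velocities: (n, 3) link linear velocities in the world frame, m/s
        Returns (kinetic, potential, total) in joules.
        """
        masses = np.asarray(masses, dtype=float)
        positions = np.asarray(positions, dtype=float)
        velocities = np.asarray(velocities, dtype=float)

        kinetic = 0.5 * np.sum(masses * np.sum(velocities ** 2, axis=1))
        potential = G * np.sum(masses * positions[:, 2])  # z is height
        return kinetic, potential, kinetic + potential

    # Toy example with two links.
    ke, pe, total = mechanical_energy(
        masses=[1.2, 0.8],
        positions=[[0, 0, 0.40], [0, 0, 0.80]],
        velocities=[[0.1, 0, 0], [0.2, 0, 0]])
    print('E_k=%.3f J  E_p=%.3f J  E=%.3f J' % (ke, pe, total))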

Any timing for v1.0? I’m anxious to get my Poppy printed, but I’m waiting for the v1.0 tweaks.
