This was too cool not to post.
I was also reminded of this single-bladed UAV. Unfortunately, the design is pretty useless: any efficiency gain from the one large blade providing lift is overwhelmed by the tiny, multi-bladed propeller doing all the work. Still, it looks cool.
Coming up next: Sydney University Robot Wars videos!!
Friday, May 7, 2010
The title is a joke, by the way. Although I would be interested to see what the results of such an exercise would be: "Statue of David... *beep*: ART!", "Justin Bieber's music... *beep*: NOT_ART!".
I mention Bayesian classifiers because I wanted to talk about machine learning. This is the other branch of artificial intelligence, and what most people think of when the topic of Skynet is brought up.
Fear not, though. Bayes' rule, decision trees and rule-based learning are actually pretty mild. They are simply statistical methods that attempt to classify data using the results of previous observations. Mostly harmless.
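Bayes' rule itself fits in a couple of lines. Here's a minimal sketch with made-up spam-filter numbers (all the probabilities below are hypothetical, just to show the mechanics):

```cpp
// Bayes' rule: P(class | feature) = P(feature | class) * P(class) / P(feature).
// A classifier simply picks the class with the highest posterior probability.
double posterior(double likelihood, double prior, double evidence) {
    return likelihood * prior / evidence;
}
```

For example, with P(word|spam) = 0.8, P(spam) = 0.4 and P(word) = 0.5, the posterior P(spam|word) comes out at 0.64, so a message containing the word gets classified as spam. Mostly harmless, as promised.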
However, today I'm going to talk about genetic algorithms.
A genetic algorithm works on an abstract representation of a mathematical function. This representation can take many forms, such as a string of bits which might indicate the presence/absence of a set of inputs, or a literal mathematical function like "y = cos(x) - 2*z". The encoded set of variables which represents the function is called a genome.
This can get a bit hard to visualize, so I often just settle for imagining genomes as Taylor polynomials. A single genome then consists of "x = A*input1^a + B*input2^b + ...", where the values of A and a can take any real number. If some of the inputs are simply higher derivatives of other inputs, then any arbitrary function can be represented in this way. There is also a rather nice representation involving trees.
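To make the analogy concrete, here is one possible (hypothetical) encoding of such a genome in C++, as a list of coefficient/exponent pairs with one input per term:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// A genome for x = A*input1^a + B*input2^b + ...
// Each Term holds one coefficient (A, B, ...) and one exponent (a, b, ...).
struct Term { double coeff; double expo; };
using Genome = std::vector<Term>;

// Evaluate the function the genome represents for one set of inputs
// (inputs[i] feeds term i in this simplified encoding).
double evaluate(const Genome& g, const std::vector<double>& inputs) {
    double x = 0.0;
    for (std::size_t i = 0; i < g.size(); ++i)
        x += g[i].coeff * std::pow(inputs[i], g[i].expo);
    return x;
}
```

So the genome {A=2, a=2, B=-1, b=1} evaluated on inputs (3, 4) gives 2*3^2 - 4 = 14.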
If we start off with a population of individuals with randomly valued variables in their genomes, then we can evaluate each function to see how well it 'fits' a set of training data. The individuals which produce the minimum mean squared error on the training data are declared the 'fittest', and are allowed to survive into the next generation.
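The fitness measure is just the mean squared error over the training set (lower is fitter); `predictions` here would come from evaluating one individual's function on every training input:

```cpp
#include <cstddef>
#include <vector>

// Mean squared error between an individual's outputs and the training targets.
// The individual with the smallest value is the 'fittest'.
double meanSquaredError(const std::vector<double>& predictions,
                        const std::vector<double>& targets) {
    double sum = 0.0;
    for (std::size_t i = 0; i < targets.size(); ++i) {
        double err = predictions[i] - targets[i];
        sum += err * err;
    }
    return sum / targets.size();
}
```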
This is where the 'genetic' part comes in. There are many ways of 'evolving', 'mutating' and 'breeding' individuals, but the easiest to understand is the asexual method. All individuals except the best performer are killed off (i.e., deleted), and their places are taken by the offspring of the remaining individual. However, tiny random 'mutations' are introduced into each new individual's variables - such as doubling/halving the values of A or B, or incrementing/decrementing a or b.
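A sketch of that asexual step, using a coefficient/exponent encoding of the genome (the 50% mutation probabilities are arbitrary choices, not anything canonical):

```cpp
#include <cstddef>
#include <cstdlib>
#include <vector>

struct Term { double coeff; double expo; };
using Genome = std::vector<Term>;

// Randomly perturb a copy of the parent: sometimes double/halve a
// coefficient, sometimes nudge an exponent up or down.
Genome mutate(Genome child) {
    for (Term& t : child) {
        if (std::rand() % 2) t.coeff *= (std::rand() % 2) ? 2.0 : 0.5;
        if (std::rand() % 2) t.expo  += (std::rand() % 2) ? 1.0 : -1.0;
    }
    return child;
}

// Asexual reproduction: only the fittest individual survives, and the
// rest of the population is refilled with its mutated offspring.
std::vector<Genome> nextGeneration(const Genome& fittest, std::size_t popSize) {
    std::vector<Genome> population{fittest};  // survivor carries over unchanged
    while (population.size() < popSize)
        population.push_back(mutate(fittest));
    return population;
}
```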
Anyway, that's the 30-second version of genetic algorithms. They can be used to find a semi-optimal solution to many problems, provided you can throw enough generations at them. I have been working with a C++ implementation called GAlib. If you are interested, I highly recommend going through the examples.
Now, some of you may be wondering what all this has to do with robots (actually, most of you are probably already filling in the blanks and peeing your pants in terror).
I've spoken on several occasions about using Robobob as a platform to investigate dynamic balance and movement. I plan to represent the control state of the robot as a search tree, with the robot beginning at a starting node/state and attempting to plan a path of control actions to reach a goal node/state. To navigate the tree, I want to implement a greedy search heuristic which will choose which control actions are most likely to lead to the goal state.
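A toy sketch of that greedy search, with integers standing in for the real control states - the successor and heuristic functions here are placeholders, not the robot's actual dynamics:

```cpp
#include <algorithm>
#include <cstdlib>
#include <vector>

// Toy stand-ins: from each state two control actions are available, and the
// heuristic is just distance-to-goal. The real versions would encode the
// robot's control actions and an estimate of how promising each one is.
std::vector<int> successors(int state) { return {state + 1, state + 2}; }
double heuristic(int state, int goal) { return std::abs(goal - state); }

// Greedy best-first descent: at each step, commit to the single child the
// heuristic rates most promising (no backtracking).
std::vector<int> greedyPath(int start, int goal, int maxSteps) {
    std::vector<int> path{start};
    int current = start;
    for (int i = 0; i < maxSteps && current != goal; ++i) {
        std::vector<int> next = successors(current);
        if (next.empty()) break;  // stuck: no control actions available
        current = *std::min_element(next.begin(), next.end(),
            [&](int a, int b) { return heuristic(a, goal) < heuristic(b, goal); });
        path.push_back(current);
    }
    return path;
}
```

With these toy functions, greedyPath(0, 5, 10) walks 0 → 2 → 4 → 5 and reaches the goal; a bad heuristic would instead wander until maxSteps runs out, which is exactly the failure case I want to punish.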
Now here's the tricky part - I intend to implement the heuristic as an evolutionary algorithm which can then be rewarded or punished depending on the outcome of executing the control path on the real robot. I.e., if the heuristic gets stuck or can't find the goal state, it will be disfavored, whereas successfully reaching the goal will be favored. After a series of generations, I will be able to study the path-planning method which has evolved from this process.
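The reward/punishment step could be as simple as adjusting each candidate heuristic's fitness after every trial on the robot - the names and scoring scheme below are hypothetical, just to illustrate the idea:

```cpp
// One candidate heuristic in the evolving population, with a fitness
// score accumulated over trials on the real robot.
struct Candidate {
    double fitness = 0.0;
};

// Favor heuristics that reach the goal (more so for shorter control paths);
// disfavor those that get stuck or can't find the goal state.
void recordTrial(Candidate& c, bool reachedGoal, int stepsTaken) {
    if (reachedGoal)
        c.fitness += 1.0 / (1.0 + stepsTaken);
    else
        c.fitness -= 1.0;
}
```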
Cool? I hope so.