Before I continue, I would like to mention that "Cloud" is by far my least favorite word du jour. There is no such thing as the Cloud! Just because you can't see it doesn't mean it is some kind of panacea! Everything still happens SOMEWHERE on SOME COMPUTER, and that computer can still fail!
Sorry, side-rant over. Where was I?
One of the biggest hurdles is defining "Intelligence". Even Wikipedia doesn't have a simple answer. Whenever I'm asked the question, though, I usually go with:
"Intelligence is the ability to solve a wide variety of problems with a limited set of tools"
Which applies very nicely to robots.
Robots need to be more than just state machines. Generally, robots have tasks or missions, and they are given a set of tools to complete them. An intelligent robot needs the capacity to alter its behavior to complete its mission even when the environment changes. This is where artificial intelligence and machine learning play a role.
Many AI techniques can be thought of simply as optimization techniques: there is a feedback loop (which may be supervised or unsupervised) and an algorithm capable of altering itself to satisfy some condition of the feedback.
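To make that concrete, here is a minimal sketch of that feedback loop as plain hill climbing (the function and parameter names are mine, just for illustration): propose a small random tweak, and keep it only if the score (the feedback) improves.

```python
import random

def optimize(score, start, steps=2000, step_size=0.1):
    """A bare-bones feedback loop: propose a small random tweak to the
    current best solution, and keep the tweak only if the score improves."""
    best, best_score = start, score(start)
    for _ in range(steps):
        candidate = [x + random.uniform(-step_size, step_size) for x in best]
        candidate_score = score(candidate)
        if candidate_score > best_score:  # the feedback condition
            best, best_score = candidate, candidate_score
    return best

# Usage: climb towards the maximum of -(x - 3)^2, which sits at x = 3.
result = optimize(lambda p: -(p[0] - 3.0) ** 2, [0.0])
```

Swap the tweak rule and the acceptance condition and you get most of the classic optimizers; genetic algorithms are essentially this loop run over a whole population at once.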
In a previous post, I mentioned using Genetic Algorithms as a method of creating "intelligent" (per the definition above) robots. Every individual in the population is a "pilot", and the pilots take turns "driving" the robot. The pilots who are most successful at the mission get to "train" the next generation of pilots.
Conceptually, this is not a bad way to get a basic adaptable AI working on a robot. Every individual's genetic code is a decision-making algorithm that decides what action the robot should take given the input (if any) from the sensors.
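A toy version of this setup might look like the sketch below. This is not my actual encoding (which was tree-based, more on that below); here each pilot's "genetic code" is just a lookup table from a coarse sensor reading to an action, which is the simplest possible decision-making genome. All the names and constants are assumptions for the example.

```python
import random

N_ACTIONS = 4        # e.g. which servo to move, and in which direction
SENSOR_BUCKETS = 8   # coarse discretization of the distance reading

def random_pilot():
    # A pilot's genome: one action per sensor bucket.
    return [random.randrange(N_ACTIONS) for _ in range(SENSOR_BUCKETS)]

def drive(pilot, sensor_bucket):
    # The decision-making step: sensor input in, action out.
    return pilot[sensor_bucket]

def crossover(a, b):
    cut = random.randrange(1, SENSOR_BUCKETS)
    return a[:cut] + b[cut:]

def mutate(pilot, rate=0.1):
    return [random.randrange(N_ACTIONS) if random.random() < rate else g
            for g in pilot]

def next_generation(population, fitness):
    # The most successful pilots survive and "train" (breed) the rest.
    ranked = sorted(population, key=fitness, reverse=True)
    elite = ranked[: len(population) // 4]
    children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                for _ in range(len(population) - len(elite))]
    return elite + children
```

In the real robot, `fitness` would be the distance traveled during an individual's turn at the controls; here any scoring function will do.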
My somewhat clumsy implementation involved a 2DOF robot, "Inchy the Inchbot", which learned to crawl forwards using a distance sensor mounted at the back. Every individual had 100 actions in which to move the furthest possible distance. After a few hundred generations, this is what happened:
My sincere apologies for the horrible webcam video. This was a good proof of concept for me, but there were a few things which need improvement:
- All individuals get a chance to "drive", even if they suck at it. Particularly in the early generations, this meant that the robot was just as likely to go backwards as forwards. On a real mission this would be unacceptable, since a bad driver might put the whole robot in harm's way.
- The method of encoding the pilots in the GA was too prone to sudden changes. I used a heuristic function which evolved by swapping and switching operators in a tree format (more on this later), and this meant that even in later generations there was no shortage of evolutionary throwbacks.
- The combination of the two problems above meant that even though the population as a whole did get fitter, the robot would sometimes spaz out or just stall for no apparent reason.
- Another huge drawback was that each individual was very selfish. An individual had no regard for the (sometimes awkward or precarious) position it would leave the robot in for the next individual who drove it. The next individual would then have to waste some of its "actions" extricating the robot from a weird configuration, which meant that sometimes even the best individuals would underperform unnecessarily.
- In this particular application, the tendency of genetic algorithms to "cheat" was very apparent. The sensor was mounted on the back, and early tests resulted in the robot simply pointing the distance sensor at the ceiling to maximize the "distance traveled" =)
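To illustrate the encoding problem from the second point: a tree-format heuristic might look something like the sketch below (the operator set and structure here are assumptions, not my exact implementation). The heuristic maps a sensor reading to a score, and mutation works by replacing or rewriting subtrees, which is exactly why a single mutation can change behavior drastically and produce throwbacks.

```python
import random

OPS = ['+', '-', '*', 'min', 'max']  # hypothetical operator set

def random_tree(depth=3):
    # A leaf is either the sensor reading or a constant.
    if depth == 0 or random.random() < 0.3:
        return random.choice(['sensor', random.uniform(-1, 1)])
    return [random.choice(OPS), random_tree(depth - 1), random_tree(depth - 1)]

def evaluate(tree, sensor):
    if tree == 'sensor':
        return sensor
    if not isinstance(tree, list):
        return tree          # a constant leaf
    op, left, right = tree
    a, b = evaluate(left, sensor), evaluate(right, sensor)
    return {'+': a + b, '-': a - b, '*': a * b,
            'min': min(a, b), 'max': max(a, b)}[op]

def mutate(tree, depth=2):
    # Replacing a whole subtree can change the heuristic's output
    # everywhere at once -- the source of sudden "throwbacks".
    if not isinstance(tree, list) or random.random() < 0.2:
        return random_tree(depth)
    op, left, right = tree
    if random.random() < 0.5:
        return [op, mutate(left, depth), right]
    return [op, left, mutate(right, depth)]
```

Gentler operators (e.g. nudging constants rather than swapping subtrees) would trade exploration for stability, which is one direction I'd look at for fixing the throwback problem.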
This was a good proof-of-concept test, but I might need to re-examine some of the details. I don't know of many applications of GAs to this kind of robot decision-making, but if anyone has some references I would love to hear about them.