Sunday, April 18, 2010

Algorithm eats sandvitch in O(mn) time

I've been doing a lot of research on artificial intelligence recently. I'm toying with the idea of using it to help solve the inverse kinematics problem of a humanoid robot. I've already started work on extending robobob's firmware with some AI functions, which I'll share in a few updates. I don't know how well this will work out, but since future topics will probably include a fair bit of artificial intelligence talk, I thought I would put up a brief intro to AI and some common search strategies.

Many people are underwhelmed when they start learning about AI. Sci-fi has delivered promises of robotic servants, androids, and positronic brains - but even state-of-the-art AI still struggles with simple real-world problems that humans find trivial. However, AI still has a lot of potential if you are able to abstract the problem appropriately into a form that an AI strategy can handle.

One of the most basic methods of problem solving is searching. If you break the problem down into a starting state, a goal state, and a set of operations allowing traversal between states, then simply searching the resulting tree until a goal is found provides a method of solving the problem.
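To make that concrete, here's a rough sketch of the abstraction in Python. The names and the toy one-dimensional world are purely my own illustration (nothing to do with robobob's firmware), but the search examples further down will all assume a problem shaped like this:

# A search problem: a starting state, a goal test, and a successor
# function that lists the states reachable in one operation.
from collections import namedtuple

SearchProblem = namedtuple("SearchProblem", ["start", "is_goal", "successors"])

# Toy example: a robot on a number line starting at 0, trying to reach 5,
# moving one step left or right per operation.
toy_problem = SearchProblem(
    start=0,
    is_goal=lambda state: state == 5,
    successors=lambda state: [state - 1, state + 1],
)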

This is why AI excels at games but fails at life. Games are easily distilled into a set of rules with clear conditions for victory. Using a searching strategy, possible configurations can be tested until one set of moves leads to a win. However, attempting to distill real world problems into a 'game tree' is even more complicated than it sounds since there are far too many options at each planning stage.

Below are some commonly used searching strategies. This is just a quick summary of each one, but it is important to know a little about how searching works in order to appreciate the more intricate levels of AI.

Breadth First Search

In a Breadth First Search, children of the most recently expanded node are added to the end of the fringe (the list of nodes which have been discovered but not yet expanded). Since nodes are explored one by one in the order they are placed in the fringe, nodes closer to the root of the tree will always be expanded first.

Since nodes closer to the root are expanded first, the algorithm will terminate when it finds the goal which requires the fewest operations to reach. In cases such as pathfinding, where the cost of traversing from one square to the next is constant, the solution closest to the root is the optimal (shortest) solution. However, every node which is closer to the root than the goal will be expanded before the goal is found, which can lead to Breadth First Search opening an excessive number of nodes.
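Here's a minimal sketch of Breadth First Search in Python, assuming the little problem abstraction from earlier (names are illustrative only):

from collections import deque

def breadth_first_search(problem):
    # The fringe holds (state, path-so-far) pairs; new children go on the back.
    fringe = deque([(problem.start, [problem.start])])
    visited = {problem.start}
    while fringe:
        state, path = fringe.popleft()  # expand the oldest node first
        if problem.is_goal(state):
            return path                 # shallowest goal found
        for child in problem.successors(state):
            if child not in visited:
                visited.add(child)
                fringe.append((child, path + [child]))
    return None                         # no goal reachable

Running breadth_first_search(toy_problem) comes back with the shortest path [0, 1, 2, 3, 4, 5], but only after also expanding every node on the wrong side of zero down to -5 - exactly the 'excessive nodes' problem described above.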

Depth First Search

Algorithmically, this search strategy is identical to Breadth First Search, except for the order in which the children of expanded nodes are added to the fringe. When a child node is discovered, it is added to the front of the fringe instead of the back. This means that the next node to be expanded will be one of the children of the open node and hence nodes which are further away from the root are explored before closer ones.

Depth First Search will keep traversing down the tree until it finds a terminal or a goal node. If multiple goals are present (e.g. all terminal nodes are goals), then this strategy may find a solution much faster than Breadth First Search, but it may not be the best solution. However, if the tree is particularly complex or has a very high branching factor, finding any goal quickly might be a better outcome than finding the best goal too late. On the other hand, some paths may have an infinite depth, which will cause Depth First Search to never find a goal.
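As a sketch, the only real change from the BFS code above is that the fringe is used as a stack, so the newest node is expanded first. I've also added a crude depth limit, purely to sidestep the infinite-path problem in this illustration:

def depth_first_search(problem, max_depth=20):
    # Children go on the front of the fringe (here, the end of a Python list
    # used as a stack), so the most recently discovered node is expanded next.
    fringe = [(problem.start, [problem.start])]
    while fringe:
        state, path = fringe.pop()      # expand the newest node first
        if problem.is_goal(state):
            return path                 # first goal reached, not necessarily the best
        if len(path) <= max_depth:      # crude depth limit for this sketch
            for child in problem.successors(state):
                if child not in path:   # don't loop back along the current path
                    fringe.append((child, path + [child]))
    return None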

Greedy Search

Unlike BFS or DFS, where the ordering of nodes remains unchanged as the algorithm discovers new nodes, the greedy search method maintains a continuously sorted fringe of unopened nodes. The metric used to determine the order is known as the heuristic function, which is used to estimate the best node to open next. It doesn't matter how far away a node is in the tree or when it was discovered – the node opened by a greedy algorithm is governed only by the heuristic value of that node.

The effectiveness of greedy search lies in the strength of the heuristic function used. A good heuristic function should estimate the cost of reaching the goal from a given node. Every time a new node is opened, it will be the 'best choice' to find the goal according to the heuristic. Of course, if the heuristic is poor then there is no guarantee that the path will be optimal, or that a goal will even be found at all.
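A rough Python sketch, again assuming the problem abstraction from earlier plus a user-supplied heuristic(state) that estimates the cost from a state to the goal (for the toy world, something like lambda s: abs(5 - s)):

import heapq

def greedy_search(problem, heuristic):
    # The fringe is a priority queue ordered purely by the heuristic value.
    fringe = [(heuristic(problem.start), problem.start, [problem.start])]
    visited = set()
    while fringe:
        _, state, path = heapq.heappop(fringe)  # open the 'best looking' node
        if problem.is_goal(state):
            return path
        if state in visited:
            continue
        visited.add(state)
        for child in problem.successors(state):
            if child not in visited:
                heapq.heappush(fringe, (heuristic(child), child, path + [child]))
    return None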

A* Search

Like greedy, A* maintains a sorted fringe using a heuristic function to estimate the value of each node. In addition to using the heuristic to determine the order of the sorted fringe, A* also includes the cost of reaching the node from the root of the tree. Therefore, if two nodes are evaluated equally using the heuristic function, the node which is closer to the root will be opened first.

A* combines the predicted (heuristic) cost of reaching the goal from a node with the true cost of reaching that node from the root. This means that nodes are explored in order of their estimated overall path cost, unlike greedy search, which sorts the nodes only according to their estimated distance from the goal. If the heuristic used by A* never overestimates the distance from a node to the goal, then A* is guaranteed to find the optimal solution, and it does so without opening more nodes than any other optimal strategy using the same heuristic.
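Sketched in Python, the only change from the greedy code is the priority: f = g + h, where g is the true cost from the root (assumed here to be 1 per operation) and h is the heuristic estimate to the goal:

import heapq

def a_star_search(problem, heuristic):
    start = problem.start
    # Fringe entries are (f, g, state, path), sorted by f = g + h.
    fringe = [(heuristic(start), 0, start, [start])]
    best_g = {start: 0}
    while fringe:
        f, g, state, path = heapq.heappop(fringe)
        if problem.is_goal(state):
            return path
        for child in problem.successors(state):
            child_g = g + 1             # unit cost per operation assumed
            if child not in best_g or child_g < best_g[child]:
                best_g[child] = child_g
                heapq.heappush(fringe, (child_g + heuristic(child), child_g,
                                        child, path + [child]))
    return None

For the toy world, heuristic = lambda s: abs(5 - s) never overestimates, so this sketch returns the shortest path [0, 1, 2, 3, 4, 5] without wandering off down the negative side the way BFS does.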

Hill Climber

Hill Climber searching is slightly different to all the other algorithms presented here because it doesn't maintain a persistent fringe. The next node opened is simply the child of the open node with the best heuristic cost. Hill Climber differs from greedy because it 'forgets' about nodes which have already been discovered (regardless of how good they were), and focuses only on the immediate children of the open node.

Sometimes, the tree which needs to be searched is simply too large or too complex to be handled by an optimal searching strategy. In such cases, using Hill Climber to find a 'local minimum' of the tree is often the best option. The algorithm is very easy to implement, since there is no need to maintain and sort a fringe of all the nodes discovered so far. However, it may completely miss the goal node if the heuristic leads it into a 'local minimum', where every child of the current node is valued as 'worse' than the current node itself.
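A sketch of the idea in Python, using the same heuristic convention as before (lower values look better, so the 'climb' here is really a descent towards the goal):

def hill_climb(problem, heuristic, max_steps=1000):
    # No fringe: just repeatedly jump to the best-looking immediate child.
    state = problem.start
    path = [state]
    for _ in range(max_steps):
        if problem.is_goal(state):
            return path
        best_child = min(problem.successors(state), key=heuristic)
        if heuristic(best_child) >= heuristic(state):
            return None                 # stuck in a local minimum
        state = best_child
        path.append(state)
    return None

On the simple toy world this strolls straight to the goal, but on anything with dead ends it can get stuck exactly as described above.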

Saturday, April 3, 2010

Hark! An update!

About time too. I have recently been busy organizing a Robot Wars competition for Sydney University, and haven't had a lot of time to work on Robobob. However, this will ensure that there will be plenty of cool material for future updates, so I hope you are all looking forward to it as much as I am!

Things are still in the concept phase at the moment, so it will be a while before I can post something tangible. In the meantime, here are a few videos to get everyone motivated:

Also, if you would like to contribute to making this Robot Wars competition as exciting as possible, click on an Ad! All proceeds from this blog will go towards the construction costs!