Followers

Recent Posts

Wednesday, December 15, 2010

Artificial Intelligence for Fun and Profit!

AI is one of those buzzwords which gets thrown around without people really understanding what it means.

Before I continue, I would like to mention that "Cloud" is by far my least favorite word du jour. There is no such thing as the Cloud! Just because you can't see it doesn't mean it is some kind of panacea! Everything still happens SOMEWHERE on SOME COMPUTER, and that computer can still fail!

Sorry, side-rant over. Where was I?

One of the biggest hurdles is defining "Intelligence". Even Wikipedia doesn't have a simple answer. Whenever I'm asked the question though, I usually go with:

"Intelligence is the ability to solve a wide variety of problems with a limited set of tools"

Which applies very nicely to robots.

Robots need to be more than just state-machines. Generally, robots have tasks or missions, and they are given a set of tools to complete the task. An intelligent robot needs to have the capacity to alter its behavior to complete its mission even if the environment changes. This is where artificial intelligence and machine learning play a role.

Many AI techniques can be thought of simply as optimizing techniques. There is a feedback loop (which may be supervised or unsupervised) and an algorithm which is capable of altering itself to satisfy some condition of the feedback.

In a previous post, I mentioned using Genetic Algorithms as a method of creating "intelligent" (per the definition above) robots. Every individual in a population is a "pilot", which takes turns to "drive" the robot. The pilots who are most successful at the mission get to "train" the next generation of pilots.

Conceptually, this is not a bad way to get a basic adaptable AI working for a robot. Every individual's genetic code is represented as a decision making algorithm which decides what action the robot should take given the input (if any) from the sensors.

My somewhat clumsy implementation involved a 2DOF robot "inchy the inchbot", which learned to crawl forwards using a distance sensor mounted at the back. Every individual had 100 actions in which to move the furthest possible distance - after a few hundred generations, this is what happened:



My sincere apologies for the horrible webcam video. This was a good proof of concept for me, but there were a few things which need improvement:
  1. All individuals get a chance to "drive", even if they suck at "driving". Particularly in the early generations, this meant that the robot was just as likely to go backwards as it was to go forwards. On a real mission, this would be unacceptable, since a bad driver might put the whole robot in harm's way.
  2. The method of encoding the pilots in the GA was too prone to sudden changes. I used a heuristic function which evolved by swapping and switching operators in a tree format (more on this later), and this meant that even in later generations there was no shortage of evolutionary throwbacks.
  3. The combination of 1 and 2 meant that even though the population as a whole did get fitter, the robot would sometimes spaz out or just stall for apparently no reason.
  4. Another huge drawback was that each individual was very selfish. The individual had no regard for the (sometimes awkward or precarious) position it would leave the robot in for the next individual who drove the robot. The next individual would then have to waste some of its "actions" to extricate the robot from a weird configuration - this meant that sometimes even the best individuals would underperform unnecessarily.
  5. In this particular application, the tendency for genetic algorithms to "Cheat" was very apparent. The sensor was mounted on the back, and early tests resulted in the robot simply pointing the distance sensor at the ceiling to maximize the "distance traveled" =)
This was a good proof of concept test, but I might need to re-examine some of the details. I don't know of many applications of GAs to do this kind of robot decision making, but if someone has some references I would love to hear about them.
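
For the curious, the skeleton of each generation looked conceptually like the sketch below. This is a simplified reconstruction rather than the actual inchbot code - readDistanceSensor(), decideAction() and executeAction() are stand-in names for the real sensor and servo calls:

// Conceptual sketch only - not the real inchbot firmware. The three robot-facing
// functions are stubs standing in for the actual sensor and servo code.
const int ACTIONS_PER_PILOT = 100;
const int GENOME_SIZE = 8;

struct Pilot {
    double genome[GENOME_SIZE];   // encodes the pilot's decision-making heuristic
    double fitness;               // net distance travelled during its turn
};

double readDistanceSensor() { return 0.0; }                            // stub
int decideAction(const double *genome, double reading) { return 0; }  // stub: genome -> action
void executeAction(int action) {}                                     // stub: move the servos

void evaluatePilot(Pilot &p) {
    double start = readDistanceSensor();
    for (int i = 0; i < ACTIONS_PER_PILOT; ++i) {
        double reading = readDistanceSensor();
        executeAction(decideAction(p.genome, reading));
    }
    p.fitness = readDistanceSensor() - start;   // how far forwards did this pilot get us?
}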

Saturday, November 27, 2010

Where I've been

It's been a very long time since I posted anything, hasn't it? In case you were wondering if I had a good excuse or not, here is a list of things that have happened since my last update:
  • Got a part time job,
  • Got a full time job,
  • Learned lambda calculus and LISP,
  • Wrote a programming language and a compiler,
  • Learned to sing,
  • Sang on stage,
  • Held some more Robot Wars competitions,
  • Handed over the society to next year's president,
  • Put together a tender bid for work,
  • Won the contract,
  • Bought and built a new computer,
  • Set up a home file server and UPnP media server,
  • Accidentally erased my MFT and boot sector,
  • Managed to restore them,
  • Built and programmed a robot for the New Inventors TV Show,
  • Appeared on TV,
  • Studied for exams,
  • Finished university
So yeah. I've been busy. Fortunately, each of these is an excellent story which I hopefully will be able to recount here in the future.

But now that that's out of the way, we can get back to talking about robots.

I've been following the Google AI challenge (http://ai-contest.com/) and I highly suggest you check out the forums. Were it not for the aforementioned points, I would have loved to put an entry in, but as I write this there are only 6 hours left to go for submissions. Ooohhh.... now there's a challenge....

I also wanted to bring your attention to this amazing post. Give it a read while I go beat my head against the wall for even considering giving myself MORE things to do.

 

Monday, July 5, 2010

Hungry Hungry Robos!

Hungry Hungry Robos was an event for the 2010 Sydney University Robot Wars teams to get a chance to test their robots before sending them into battle. It was the first chance teams had to try out their robots and it was also heaps of fun.

The rules are very simple: The robot with the most marbles in their corner (taped off squares) when time runs out is the winner!



Below is the poster I designed to promote the event. As you can see from the dates, this actually happened ages ago, but I've been too busy to cut everything together until now. The next event will be some time in August.



Edit: Fixed the broken encoding on the video. Fun fact: This video took longer to encode than the entire event took to organize! Peas out.

Saturday, May 29, 2010

Sweet Zombie Jesus!

This was too cool not to post.



I was also reminded of this single bladed UAV. Unfortunately, this design is pretty useless, since any efficiency gain you get from the one large blade to provide lift is overwhelmed by the tiny, multibladed propeller doing all the work. Still, it looks cool.



Coming up next: Sydney University Robot Wars videos!!

Friday, May 7, 2010

Bayesian Classifier Answers the Question: "Is it art?"




The title is a joke, by the way. Although I would be interested to see what the results of such an exercise would be. "Statue of David... *beep*: ART!", "Justin Bieber's Music... *beep* NOT_ART!".

Almost as exciting as my 0-R Spam filter. Catches 100% of spam with a 97% accuracy rate! Oh mercy.

The reason that I mention Bayesian Classifiers is because I wanted to talk about machine learning. This is the other branch of artificial intelligence and what most people think of when the topic of Skynet is brought up.

Fear not though. Bayes' rule, decision trees and rule-based learning are actually pretty mild. They are simply statistical methods of attempting to classify data by using the results of previous observations. Mostly harmless.

However, today I'm going to talk about genetic algorithms.

In a genetic algorithm, each individual is an abstract representation of a mathematical function. These representations can take many forms, such as a string of bits which might indicate the presence/absence of a set of inputs, or a literal mathematical function "y = cos(x) - 2*z". The set of variables which encodes an individual's function is called its genome.

This can get a bit hard to visualize, so I often just settle for imagining genomes as Taylor polynomials. Therefore, a single genome consists of "x = A*input1^a + B*input2^b ..." where the coefficients and exponents (A, a, B, b, ...) can take any real value. If some of the inputs are simply higher derivatives of other inputs, then any arbitrary function can be represented in this way. There is also a rather nice representation involving trees.

If we start off with a population of individuals with random valued variables in their genomes, then we can evaluate each function to see how well it 'fits' a set of training data. The individuals which produce the minimum mean squared error for the training data are declared the 'fittest', and are allowed to survive into the next generation.

This is where the 'genetic' part comes in. There are many ways of 'evolving', 'mutating' and 'breeding' individuals, but the easiest to understand is the asexual method. This means that all individuals except the best performer are killed off (i.e., deleted) and then their places are taken by the offspring of the remaining individual. However, tiny random 'mutations' are introduced to each of the new individual's variables - such as doubling/halving the values of A or B, or incrementing/decrementing a or b.

Anyway, that's the 30-second version of genetic algorithms. They can be used to find a semi-optimal solution to many problems, provided you can throw enough generations at them. I have been working with a C++ implementation called GAlib. If you are interested, I highly recommend going through the examples.
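
To make that concrete, here is a minimal, self-contained sketch of the asexual scheme described above - no GAlib, just a two-variable genome of the form y = A*x^a being fitted to some training data. The population size, mutation size and training function are made-up numbers purely for illustration:

#include <cmath>
#include <cstdio>
#include <cstdlib>
#include <vector>

// One genome: y = A * x^a  (a two-variable version of the Taylor-style genome above).
struct Genome { double A, a; };

// Mean squared error against the training data - lower is fitter.
double fitness(const Genome &g, const std::vector<std::pair<double,double>> &data) {
    double err = 0.0;
    for (auto &d : data) {
        double y = g.A * std::pow(d.first, g.a);
        err += (y - d.second) * (y - d.second);
    }
    return err / data.size();
}

// Tiny random 'mutation': nudge a value by up to +/-0.5.
double jitter(double v) { return v + ((rand() / (double)RAND_MAX) - 0.5); }

int main() {
    // Training data generated from y = 3*x^2, which the population should rediscover.
    std::vector<std::pair<double,double>> data;
    for (double x = 1; x <= 5; x += 0.5) data.push_back({x, 3.0 * x * x});

    std::vector<Genome> pop(20);
    for (auto &g : pop) g = { jitter(1.0), jitter(1.0) };   // random-ish starting population

    for (int gen = 0; gen < 200; ++gen) {
        // Find the fittest individual (lowest error).
        Genome best = pop[0];
        for (auto &g : pop)
            if (fitness(g, data) < fitness(best, data)) best = g;

        // Asexual reproduction: everyone else is replaced by mutated copies of the best.
        for (auto &g : pop) g = { jitter(best.A), jitter(best.a) };
        pop[0] = best;   // keep an unmutated copy of the champion
    }

    Genome best = pop[0];
    std::printf("best guess: y = %.2f * x^%.2f\n", best.A, best.a);
    return 0;
}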

Now, some of you may be wondering what all this has to do with robots (actually, most of you are probably already filling in the blanks and peeing your pants in terror).

I've spoken on several occasions about using Robobob as a platform to investigate dynamic balance and movement. I plan to represent the control state of the robot as a search tree, with the robot beginning at a starting node/state and attempting to plan a path of control actions to reach a goal node/state. To navigate the tree, I want to implement a greedy search heuristic which will choose which control actions are most likely to lead to the goal state.



Now here's the tricky part - I intend to implement the heuristic as an evolutionary algorithm which can then be rewarded or punished depending on the outcome of executing the control path on the real robot. That is, if the heuristic gets stuck or can't find the goal state, it will be disfavored, whereas successfully reaching the goal will be favored. After a series of generations, I will be able to study the path planning method which has evolved from this process.
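
To give a flavour of what I mean, here is a sketch of the idea only - not working code from the project. The state features and the fitness numbers are invented for illustration:

#include <cstddef>

const std::size_t FEATURES = 4;

struct State { double features[FEATURES]; };   // e.g. joint angles, body tilt, distance to goal

// The evolvable heuristic: a weighted sum of state features, where the weights ARE the genome.
// Lower values mean "this state looks closer to the goal".
double heuristic(const State &s, const double weights[FEATURES]) {
    double h = 0.0;
    for (std::size_t i = 0; i < FEATURES; ++i) h += weights[i] * s.features[i];
    return h;
}

// After planning with this heuristic and executing the plan on the real robot,
// the genome is rewarded or punished according to what actually happened.
double fitness(bool reachedGoal, int nodesExpanded) {
    if (!reachedGoal) return -1000.0;       // got stuck or never found the goal: disfavored
    return 1000.0 - nodesExpanded;          // reached the goal cheaply: favored
}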

Cool? I hope so.

Terrifying? Definitely.




Sunday, April 18, 2010

Algorithm eats sandvitch in O(mn)O(mn)O(mn)O(m) time

I've been doing a lot of research on artificial intelligence recently. I'm toying with the idea of using it to help solve the inverse kinematics problem of a humanoid robot. I've already started work on extending robobob's firmware with some AI functions, which I'll share in a few updates. I don't know how well this will work out, but since future topics will probably include a fair bit of artificial intelligence talk, I thought I would put up a brief intro to AI and some common search strategies.

Many people are underwhelmed when they start learning about AI. Sci Fi has delivered promises of robotic servants, androids, and positronic brains - but even state of the art AI still struggles with simple real-world problems that humans find trivial. However, AI still has plenty of potential if you are able to appropriately abstract the problem into a form that an AI strategy can handle.

One of the most basic methods of problem solving is searching. If you break the problem down into a starting state, a goal state, and a set of operations allowing traversal between states, then simply searching the tree until a goal is found provides a method of solving the problem.

This is why AI excels at games but fails at life. Games are easily distilled into a set of rules with clear conditions for victory. Using a searching strategy, possible configurations can be tested until one set of moves leads to a win. However, attempting to distill real world problems into a 'game tree' is even more complicated than it sounds since there are far too many options at each planning stage.

Below are some commonly used searching strategies. This is just a quick summary of each one, but it is important to know a little about how searching works in order to appreciate the more intricate levels of AI.

Breadth First Search

In a Breadth First Search, children of the most recently expanded node are added to the end of the fringe. Since nodes are explored one by one in the order they are placed in the fringe, nodes closer to the root of the tree will always be expanded first.






Since nodes closer to the root are expanded first, the algorithm will terminate when it finds the goal which requires the fewest operations to reach. In cases such as pathfinding, where the cost of traversing from one square to the next is constant, the solution closest to the root is the optimal (shortest) solution. However, every node which is closer to the root than the goal will be expanded before the goal is found, and this can lead to breadth-first search opening an excessive number of nodes.
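
Here is a minimal sketch of the idea in C++ - the tree and the goal node are made up purely for illustration. The only interesting line is the push_back(): children always join the back of the fringe, so shallower nodes get expanded first.

#include <deque>
#include <iostream>
#include <map>
#include <vector>

// A toy tree: each node maps to its children. Node 5 is the goal.
std::map<int, std::vector<int>> children = { {0, {1, 2}}, {1, {3, 4}}, {2, {5, 6}} };
const int GOAL = 5;

int main() {
    std::deque<int> fringe = {0};              // start with the root in the fringe
    while (!fringe.empty()) {
        int node = fringe.front();             // expand the node at the FRONT of the fringe
        fringe.pop_front();
        std::cout << "expanding " << node << "\n";
        if (node == GOAL) { std::cout << "goal found!\n"; break; }
        for (int c : children[node])
            fringe.push_back(c);               // children go to the BACK -> breadth first
    }
    return 0;
}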

Depth First Search

Algorithmically, this search strategy is identical to Breadth First Search, except for the order in which the children of expanded nodes are added to the fringe. When a child node is discovered, it is added to the front of the fringe instead of the back. This means that the next node to be expanded will be one of the children of the open node and hence nodes which are further away from the root are explored before closer ones.






Depth First Search will keep traversing down the tree until it finds a terminal or a goal node. If multiple goals are present (e.g., all terminal nodes are goals), then this strategy may find a solution much faster than Breadth First Search, but it may not be the best solution. However, if the tree is particularly complex or has a very high branching factor, finding any goal quickly might be a better outcome than finding the best goal too late. On the other hand, some paths may have an infinite depth which will cause Depth First Search to never find a goal.
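
The same toy example with the one-line change - children now go to the front of the fringe, so the search dives as deep as it can down one branch before backing up:

#include <deque>
#include <iostream>
#include <map>
#include <vector>

// Same made-up tree as the BFS sketch; node 5 is still the goal.
std::map<int, std::vector<int>> children = { {0, {1, 2}}, {1, {3, 4}}, {2, {5, 6}} };
const int GOAL = 5;

int main() {
    std::deque<int> fringe = {0};
    while (!fringe.empty()) {
        int node = fringe.front();
        fringe.pop_front();
        std::cout << "expanding " << node << "\n";
        if (node == GOAL) { std::cout << "goal found!\n"; break; }
        for (int c : children[node])
            fringe.push_front(c);              // children go to the FRONT -> depth first
    }
    return 0;
}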

Greedy Search

Unlike BFS or DFS, where the ordering of nodes remains unchanged as the algorithm discovers new nodes, the greedy search method maintains a continuously sorted fringe of unopened nodes. The metric used to determine the order is known as the heuristic function, which is used to estimate the best node to open next. It doesn't matter how far away a node is in the tree or when it was discovered – the node opened by a greedy algorithm is governed only by the heuristic value of that node.





The effectiveness of greedy search lies in the strength of the heuristic function used. A good heuristic function should estimate the cost of reaching the goal from the node. Every time a new node is opened, it will be the 'best choice' to find the goal according to the heuristic. Of course, if the heuristic is poor then there is no guarantee that the path will be optimal or that a goal will even be found at all.
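
A sketch of the same toy tree searched greedily - the heuristic values h(n) are made-up numbers, and the fringe is a priority queue that always hands back the node with the lowest h:

#include <iostream>
#include <map>
#include <queue>
#include <vector>

// Toy tree with a made-up heuristic value h(n) for every node; node 5 is the goal.
std::map<int, std::vector<int>> children = { {0, {1, 2}}, {1, {3, 4}}, {2, {5, 6}} };
std::map<int, double> h = { {0,9}, {1,7}, {2,3}, {3,6}, {4,5}, {5,0}, {6,2} };
const int GOAL = 5;

struct ByH { bool operator()(int a, int b) const { return h[a] > h[b]; } };

int main() {
    std::priority_queue<int, std::vector<int>, ByH> fringe;  // fringe stays sorted by h(n)
    fringe.push(0);
    while (!fringe.empty()) {
        int node = fringe.top();               // best node according to the heuristic
        fringe.pop();
        std::cout << "expanding " << node << " (h = " << h[node] << ")\n";
        if (node == GOAL) { std::cout << "goal found!\n"; break; }
        for (int c : children[node]) fringe.push(c);
    }
    return 0;
}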

A* Search

Like greedy, A* maintains a sorted fringe using a heuristic function to estimate the value of each node. In addition to using the heuristic to determine the order of the sorted fringe, A* also includes the cost of reaching the node from the root of the tree. Therefore, if two nodes are evaluated equally using the heuristic function, the node which is closer to the root will be opened first.






A* combines the predicted (heuristic) cost of reaching the goal from a node with the true cost of reaching that node from the root. This means that the nodes are explored in terms of their overall path cost, unlike greedy which sorts the nodes only according to their estimated distance from the goal. If the heuristic used by A* never overestimates the distance from the node to the goal, then A* is guaranteed to find the optimal solution, and it will do so while opening no more nodes than any other optimal strategy using the same heuristic.
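
And the same tree again, this time with a cost on every edge - the only change from greedy is that the fringe is ordered by g(n) + h(n) instead of h(n) alone. All the numbers are invented for the example:

#include <iostream>
#include <map>
#include <queue>
#include <vector>

// Same toy tree, plus a step cost for every edge and a heuristic h(n); node 5 is the goal.
std::map<int, std::vector<std::pair<int,double>>> children = {
    {0, {{1, 1.0}, {2, 4.0}}}, {1, {{3, 1.0}, {4, 2.0}}}, {2, {{5, 1.0}, {6, 1.0}}}
};
std::map<int, double> h = { {0,4}, {1,5}, {2,1}, {3,9}, {4,9}, {5,0}, {6,3} };
const int GOAL = 5;

struct Entry { int node; double g; };          // g = true cost from the root so far
struct ByF {                                   // order the fringe by f(n) = g(n) + h(n)
    bool operator()(const Entry &a, const Entry &b) const {
        return a.g + h[a.node] > b.g + h[b.node];
    }
};

int main() {
    std::priority_queue<Entry, std::vector<Entry>, ByF> fringe;
    fringe.push({0, 0.0});
    while (!fringe.empty()) {
        Entry e = fringe.top();
        fringe.pop();
        std::cout << "expanding " << e.node << " (f = " << e.g + h[e.node] << ")\n";
        if (e.node == GOAL) { std::cout << "goal found, path cost " << e.g << "\n"; break; }
        for (auto &c : children[e.node])
            fringe.push({c.first, e.g + c.second});   // child's true cost so far
    }
    return 0;
}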

Hill Climber

Hill Climber searching is slightly different to all the other algorithms presented here due to the fact that it doesn't maintain a persistent fringe. The next node opened is simply the child of the open node with the best heuristic cost. Hill Climber differs from greedy because it 'forgets' about nodes which have already been discovered (regardless of how good they were), and focuses only on the immediate children of the open node.






Sometimes, the tree which needs to be searched is simply too large or too complex to be handled by an optimal searching strategy. In this case, using Hill Climber to find the 'local minimum' of the tree is often the best way to handle such situations. The algorithm is very easy to implement, since there is no need to maintain and sort a fringe of all the nodes discovered so far. However, the algorithm may completely miss the goal node if the heuristic leads it into a 'local minimum' where every child of the current node is valued to be 'worse' than the current node.
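
Once more with the same toy tree and made-up heuristic values - note there is no fringe at all, just the current node and its children:

#include <iostream>
#include <map>
#include <vector>

// Same made-up tree and heuristic values; node 5 is the goal.
std::map<int, std::vector<int>> children = { {0, {1, 2}}, {1, {3, 4}}, {2, {5, 6}} };
std::map<int, double> h = { {0,9}, {1,7}, {2,3}, {3,6}, {4,5}, {5,0}, {6,2} };
const int GOAL = 5;

int main() {
    int node = 0;
    while (node != GOAL) {
        std::cout << "at node " << node << " (h = " << h[node] << ")\n";
        int best = -1;
        for (int c : children[node])                     // look at the children only...
            if (best == -1 || h[c] < h[best]) best = c;
        if (best == -1 || h[best] >= h[node]) {          // no child looks better than here
            std::cout << "stuck in a local minimum\n";
            return 0;
        }
        node = best;                                     // ...and forget everything else
    }
    std::cout << "goal found!\n";
    return 0;
}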

Saturday, April 3, 2010

Hark! An update!

About time too. I have recently been busy organizing a Robot Wars competition for Sydney University, and haven't had a lot of time to work on Robobob. However, this will ensure that there will be plenty of cool material for future updates, so I hope you are all looking forward to it as much as I am!

Things are still in the concept phase at the moment, so it will be a while before I can post something tangible. In the meantime, here are a few videos to get everyone motivated:








Also, if you would like to contribute to making this Robot Wars competition as exciting as possible, click on an Ad! All proceeds from this blog will go towards the construction costs!

Sunday, March 7, 2010

Honour thy compiler and thy linker

Remember how when you were learning to program, your tutor/teacher/zen programming master always told you to turn on all compiler output and to make sure that you dealt with all the warnings?

It's vaguely possible that they knew what they were talking about.

In the most recent version of the code, I wrote a wrapper function to streamline the procedure for executing a set of actions. As I explained in this post, the tables for robobuilder's actions consist of Nx16 unsigned char multidimensional arrays which are in the PROGMEM area of the chip.

A multidimensional array can be expressed in two ways. Interestingly, most of the time the two forms can be accessed with exactly the same syntax - they only differ in the way they are stored in memory. However, these subtleties can make all the difference when you are performing low level operations.

The first way is as an indexed array, which stores all the data in a sequential memory block. A matrix of data declared:

int matrix[3][2];


You address the elements of the matrix by row, then by column:


matrix[2][1] = 6;

Assigns the value 6 to the last element in your matrix. Because the compiler knows that the array is supposed to be 3 rows by 2 columns, this statement is equivalent to:

((int*)matrix)[(2*COLUMNS)+1] = 6;

(Given that COLUMNS = 2. Don't forget that C arrays start at 0.)

The second way of defining a multidimensional array is as an array of pointers or an array of arrays. In this case, the blocks of memory are not necessarily arranged sequentially in memory. The following declaration:

int r0[2], r1[2], r2[2];

int* matrix[3] = {r0,r1,r2};

will create an array of three pointers, which point to the start of the row arrays. However, r0-2 may not be in sequential memory. Some compilers will make an effort to put them together, but there are no guarantees.

Elements can be referenced exactly the same way as before:

matrix[2][1] = -6;

Assigns a -6 to the last element of the array. This works because of the order of evaluation: you dereference the third pointer in the array 'matrix' first, and then address the second element of THAT array. No matter how you define your array, you can use the same syntax to manipulate it.

So far so good, but here comes the tricky part.


When you pass arrays as an argument to a function, you always do so by reference. For one dimensional arrays, it doesn't matter if you use:

void func(int * array);

or

void func(int array[]);

in either case, what is being passed to the function is a pointer to the start of the data block.

People are also introduced to the concept of a "string as an array of characters", so an array of strings has the type "pointer to a pointer of chars" (char **). Because C uses null terminated strings, this will work fine and you shouldn't receive any warnings.

All this leads to the mentality that arrays and pointers are interchangeable - but this is only sometimes true. This is exactly the trap that I fell into when I declared this function:

void runAction(unsigned char ** flashpos, ...);

Officially, this is not a problem as long as I dereference the variable flashpos correctly - after all, it is pointing to the start of an array somewhere. It points to the start of a sequential Nx16 table of chars. As long as I use the correct number of dereference operators in the right order, I should be able to access its delicious chocolatey elements.

Hence, there was no compiler error.

What I got instead was the warning: "Incompatible data types in arg 1 of blah blah blah."

Warnings are usually thrown when the compiler is letting you know that you might be doing something you didn't intend, but it's not something illegal so it figures you know what you're doing.

Foolishly, I assumed that the warning was being thrown due to a signed vs unsigned conflict and went ahead and cast the argument being passed into an (unsigned char**).

Then, I proceeded to access the data like the multidimensional array that it was:

x = pgm_read_byte(flashpos[i][j]);


Robobob went haywire. Can anyone see what went wrong?

The problem was that the compiler didn't know how many columns the table had. It's the equivalent of me giving you a book and asking for the 4th word in the 10th column - it doesn't make any sense unless you know how the page is laid out!

You see, it would have worked fine if I had just used the form:

x = pgm_read_byte((unsigned char*)flashpos + (i*16 + j));

But in the absence of the necessary metadata, the compiler had attempted to use the second form of referencing described above - i.e., the jth char pointed to by the ith pointer in the variable flashpos. Which of course sent it searching for arbitrary values all over the place.

After I realised what was going on, all I had to do was simply change the declaration to:

void runAction(unsigned char flashpos[][MOTORS], ...);

Which informed the compiler of the subtype and dimensions of the multidimensional array being referenced.
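
For anyone following along at home, roughly what the fixed version looks like end to end is below. The table name and sizes are made up for the example, but PROGMEM and pgm_read_byte() are the standard avr-libc facilities (recent versions of avr-gcc also want the table declared const):

#include <avr/pgmspace.h>

#define MOTORS 16

// A made-up 3-pose action table living in flash.
const unsigned char demoAction[3][MOTORS] PROGMEM = {{0}};

void runAction(const unsigned char flashpos[][MOTORS], unsigned char poses)
{
    unsigned char i, j, x;
    for (i = 0; i < poses; i++)
        for (j = 0; j < MOTORS; j++)
        {
            // The compiler now knows the row length, so &flashpos[i][j] is the
            // correct flash address to hand to pgm_read_byte().
            x = pgm_read_byte(&flashpos[i][j]);
            // ...send x off to servo j...
            (void)x;
        }
}

// e.g. runAction(demoAction, 3);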

Always remember that warnings are there for a reason! Maybe you do know best, but the compiler is letting you know that you've done something ambiguous or potentially dangerous.

Tuesday, March 2, 2010

Robot destroys car, house, you

I think it's supposed to be a bomb disposal robot, but I think the bomb might do less damage.

Monday, March 1, 2010

Congratulations! You're our 2000th visitor!

That's right, today marks the day that over 2000 people have visited this blog! I would like to thank you all for your interest in my work, and I hope I can look forward to many more visitors over the coming year. I've added a variety of social media links to the right sidebar, so please share this with your friends!

To commemorate this occasion, I present to you version 0.4 of the custom robobuilder software. This version includes timer-based scheduling of movement, battery recharging capabilities, microphone operation and the ability to change state based upon button presses. A few small bugs have been fixed since last time, but as usual this is still a beta release - don't expect flawless operation just yet. Compiling instructions are here.

Unfortunately, this release does not contain the FFT or leet dancing skillz. I've decided that I'm going to put a bit more work into improving them before I make them available for download. I'm sure you are all eager to get your groove on, so check back soon for updates.

In related news, this guy has given me many ideas for how to expand robobob's repertoire of dance moves.



And he's not even in time with the music! Of course, that's just the beginning...

Tuesday, February 23, 2010

Robobob: Life of the party

Dancing robots have been around for some time now, I'll give you that. Generally they are simply scripted lists of moves which are synchronized (or not) to a particular song. It took some hard work, but I've managed to take a different approach:



This dance isn't scripted! It's being generated based on the music being played, which means that Robobob will dance in time with any music you care to play to him!

Well, most music. The tempo is detected using a Fast Fourier Transform, which can separate beats and bass-lines from the rest of the music. This works great for songs with a strong bass component, but doesn't work so well on music composed of higher frequencies or vocals.
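
For the curious, the core of the idea (minus the FFT itself and all the robot plumbing) is small enough to sketch - given the magnitude spectrum of the latest chunk of audio, call it a beat whenever the energy in the lowest few bins jumps well above its recent average. The bin range and threshold below are made-up values, not the ones Robobob actually uses:

#include <cstddef>
#include <vector>

// Sketch of the beat detector idea. 'magnitudes' is the FFT magnitude spectrum of
// the latest audio frame (however you obtained it); 'runningAverage' persists
// between calls and tracks the typical bass energy.
bool isBeat(const std::vector<double> &magnitudes, double &runningAverage) {
    double bassEnergy = 0.0;
    for (std::size_t bin = 1; bin < 8 && bin < magnitudes.size(); ++bin)
        bassEnergy += magnitudes[bin] * magnitudes[bin];      // energy in the low bins only

    bool beat = bassEnergy > 1.5 * runningAverage;            // spike above the recent baseline
    runningAverage = 0.9 * runningAverage + 0.1 * bassEnergy; // slowly track the baseline
    return beat;
}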

Simply place your robobuilder on the nearest sub woofer though, and you should be in business!

Watch in amazement as your robotic buddy dances along (almost) in time to the beat!

Impress your friends with his funky dancing styles consisting of over TWO distinct moves!

Saturday, February 13, 2010

Computers 103

The gnome is back! This time, he has a very important message for you about registers. Please give him your undivided attention, or there may be consequences. 0_o


I hope you are enjoying these comics. I will be making more of them, but I am also working on upgrading Robobob. I think you will be pleasantly surprised, so check back soon!

Wednesday, February 10, 2010

Computers 102

For your amenity, the next installment of "inside your CPU" is available below. If your screen is big enough, you shouldn't even need to scroll. The previous issue can even be found here. Convenience!!



(If you don't get the joke in the last panel, it is because the "take one down" action will eventually set the zero flag in the status register, but the "pass it around" line will never set the zero flag. The branch instruction can only know about the results of the operation immediately before it, so the gnome will eventually die of alcohol poisoning. THINK OF THE GNOMES!)


In other exciting news, the source code I used to make this video is now available. You can find it here. There are a few improvements and bug fixes.

Tuesday, February 9, 2010

Computers 101

I'm taking a little hiatus from the epic how to build your own robot series for a few reasons.

Firstly, there is a LOT to cover. I'm writing this as a sort of trial run for how I might explain it to first year university students, and as we all know - firsties are only marginally more intelligent than the rats they experiment on. With so many topics to cover, I'm faced with either giving 10 second "just take my word for it" style advice, or deviating from the topic every paragraph to explain the reasoning behind the statements I've just made.

The other problem is that it isn't fun to read. It's very technically dense and wordy, and I feel that if I can't hold the reader's interest then there isn't much point in me spending all this time explaining it. After writing the how to choose servos article, I really felt that I should have introduced it with an earlier article about exactly how they worked and what they were used for.

So to remedy this, I've decided to take a different approach to the rest of the series. Before I start spouting off about software and microcontrollers and interrupts, I'm going to give you some background on the subject in general – just so that everyone is up to speed when I use terms like instruction set, word size or stack overflow.

What's more, to keep things interesting the lessons will take place in funky 8-bit comic style! Enjoy!


Build your own humanoid - Part 2

Last time, I started going over a basic outline of how you might go about designing your own humanoid robot.

After I finished writing part 1, I realized that there is a lot more to cover on each topic than I am able to explain in a single post. All I am covering here is a design methodology - one of many possible procedures. This means that if I skim over a topic or don't go into sufficient detail, you will need to do a bit of research on your own to fill in the blanks.

After deciding upon the number of degrees of articulation you intend to build into your robot, I suggested that the next step should be to draft up some approximate measurements for the arms, legs and body.

Some people find this difficult. Coming up with a number for a length or a weight almost without any input information seems pointless and arbitrary, since there are so many variables which affect and are affected by your decision. This is why I emphasize the inexact nature of these dimensions at this stage. We will come back and refine them once we have more details.

If you have the time, the tools and the expertise required to draw up an exact model of all the designable variables and how they correlate to each other, then this is a perfectly acceptable method. I personally find the iterative method is easier and less prone to mistakes.

At this stage, you would do well to open up a spreadsheet and enter all the values you have chosen so far. You can quickly sum up all the masses and lengths (warning: trigonometry), and it will help you quickly calculate your servo requirements.

Servos

Servos are not the only type of actuators you can use, but they are certainly the most commonly used. I'm going to assume most people will choose to use servo or stepper motors, but don't feel limited by this. You might want to use a combination.

There are two main alternatives to electric motors. The first is pneumatic actuators, like the ones used here:



The other option is shape-memory alloy, known as muscle-wire or nitinol:



Both of these have niche uses, but you will find that servo motors are more general purpose.

Unlike normal DC motors, servos are motors which can be commanded to move to a specific rotational angle. To achieve this, they contain a feedback loop which will keep turning the motor in the right direction until it reaches its destination.

I could go on for pages about the specific inner workings of a servo (and I probably will someday), but right now we need to concentrate on three technical specifications:


Torque. When searching for servos, they should all report a stall torque in kilogram-centimeters or ounce-inches. These numbers represent how much weight they will be able to lift (on planet earth) at a distance of one centimeter or one inch away from the axis of rotation. More is generally better, but will come at a higher price.

How much do you need? This is why it is so important to have a rough idea of the proportions of your robot. I forgot to mention it before, but now would be a good time to also estimate the weight of your batteries and other heavy components.

Let's assume you want to choose an appropriate servo for your knee. The knee servo will be required to lift the entire weight of the robot from a crouched position to standing.


To calculate the torque, you multiply the weight by the horizontal distance between the axis and the center of mass.

T = F × d

In the above example, the robot's body weight is centered over its hips, which are 5cm from the knee. His upper body is about 750g, so 0.75kg * 5.0cm = 3.75kgcm is required to get the robot to stand up.

He also has two knees which can work together, so really each servo only needs to have 1.9kgcm to stand up. However, when he walks we can assume each leg will need to work independently, so we should stick with the original value.

So use your estimated lengths and masses to come up with some numbers for the torque on each joint. You will have to imagine the most difficult movement your robot will be required to perform in order to find the necessary values.

  • If you aren't sure how much your robot will weigh, remember that we are trying to build an under 1kg robot. If you know it can move 1kg, it will be enough.
  • Similarly, the time your robot will be required to deliver the most torque is when it has fallen over. If you are unsure, use half of the robot's height as the distance, since this is representative of the single servo being used to stand up.
There are much more complex methods of calculating the required torque, but there isn't much point in going to the trouble since you mostly just estimated values anyway. If you're having trouble, here is everything you ever needed to know about everything.

Now you should have an estimate for the value of each joint. This is the minimum torque you will need to move the joint. Since your estimates were pretty sketchy to begin with, you should add a factor of safety to ensure you've got enough. I would suggest increasing all your estimates by 50%, or even more if you don't feel confident about your calculations.
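
If you'd rather keep this in code than in a spreadsheet, the whole calculation boils down to a few lines. The numbers below are just the knee example from above with the 50% margin applied:

#include <cstdio>

// Required stall torque for a joint: weight being lifted times the horizontal
// distance from the joint axis to the centre of mass, plus a safety margin.
double requiredTorque(double massKg, double horizontalDistanceCm,
                      int jointsSharingLoad, double safetyFactor) {
    double torque = massKg * horizontalDistanceCm / jointsSharingLoad;  // kgcm per servo
    return torque * safetyFactor;
}

int main() {
    // The knee example from above: 0.75 kg of upper body, 5 cm from the knee,
    // one knee doing all the work while walking, plus the 50% fudge factor.
    std::printf("knee servo needs about %.1f kgcm\n", requiredTorque(0.75, 5.0, 1, 1.5));
    return 0;
}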

Mass. Servos which deliver more torque are generally larger due to the fact that they contain more gears and heavy duty parts to withstand the extra forces.

Heavier servos will increase your requirements for torque. Sometimes, this will be unavoidable, but its effects can be alleviated by making your robot shorter.

You can also reposition heavier parts so that they are closer to the axis you are rotating around. This will reduce the power of the servos you need and make your robot lighter overall.

Speed. The price of a servo is usually determined by its torque and its speed. Fast, strong servos will be particularly expensive, so it's important to only get what you need.

Rotational speeds are measured in seconds per 60 degrees. This is the time it takes to complete one sixth of a full revolution - lower means faster.

Not all of your servos will need to be lightning fast - only walking and balancing will require a lot of agility. The proportions of your robot will multiply how fast the limb will move. If you've designed longer legs, you won't need such speedy servos to walk at a reasonable pace.

In particular, the ankle joints usually only need to move through about 10 degrees in a step, which means they can be much slower than the rest of the servos in the legs without impacting the performance.

Now that you know what properties each servo needs, you will have to search around to find one which matches.

It's pretty unlikely you will find exactly what you are looking for, but that's fine. Near enough is good enough - up until now everything has been pretty approximate, so no reason to stop here. Remember that the point of this exercise is not optimization - just getting a robot with two legs on the ground is all we want for now. Here are a few more things to consider when choosing servos:

  • Metal gears will save you a lot of trouble.
  • A lot of robots simply get one type of servo for every joint. The advantage of this is that it's easier to repair and easier to build since you only use one set of parts. However, your robot will be heavier and more expensive than it needs to be.
  • There is more than one way to control a servo. Digital and analog servos are driven by the standard 50Hz PWM signal, but some robotics servos use RS232 or RS485 protocols. Right now it doesn't matter which you choose, but make sure all of them use the same method.
  • Many servos offer extra features like adjustable PID parameters, overload protection and position feedback. These are valuable additions, and you would be wise to take advantage of them.
  • Most servos have a limited range of rotation. Often, this is around 270 degrees, but can be as little as 90. For most joints, 270 will give you a sufficient amount of freedom, but check you have enough.
Here comes the important bit. Having chosen a servo for every joint on your robot, go back and do the torque calculations again. This shouldn't be hard if you've put everything in a spreadsheet - just use the new torque and mass of every servo you've chosen, and make sure that your requirements are still met.

Chances are, you will find that most of your initial estimates were reasonable. If you find that one servo doesn't have enough power to move the robot anymore, you will either have to upgrade that one, downgrade one of your heavier ones, or adjust the dimensions of your robot. This may cause a cascade of other things to change, but that's just the catch 22 of design. Keep adjusting and recalculating until everything fits.

Wasn't that fun? No, it wasn't really. But at least now you know that the money you are spending on parts won't be wasted.

    Thursday, February 4, 2010

    Build your own humanoid - Part 1

    Not long ago, I spoke about the advantages of humanoid robo-one style combat over battlebots/robot wars matches.

    I started thinking about how difficult it might be to start up a competition at my university. Would people be interested? Would it be too expensive? Too complex? Perhaps I would be wrong about people appreciating the tactics and design over brute force destruction.

    So I thought I'd walk myself through the process of designing and building a small (under 1kg) humanoid robot, to see how it might play out. Keep in mind that while I do know a lot about humanoid robots, and I also own one, I have never built one from scratch before. I do have experience building other robots, but not walking ones.

    Design

    First of all, lets have a look at how other people have gone about it. This is often a very good first step when you are uncertain about a project.






    One of the first things I notice about these robots is that they are humanoid, not human. When it comes to designing robots, people often get very caught up in biomimicry, without stopping to think about why they are imitating nature's design. In the case of both these robots, the proportions have been modified to make it easier for the robot to balance and walk.

    As a general rule, shorter and squatter will make your life easier later on down the track. Keeping the robot's center of mass low will also make him harder to knock over.

    At this stage, you will also need to decide how many degrees of freedom you want your robot to have. To robotologists, a degree of freedom is a controllable aspect of the robot. In this case, it refers to the number of joints your robot will have and therefore how complex his actions can be.

    Bottom line is, more degrees of freedom means more versatile actions, but at the additional cost of more servos/actuators.

    Now, since we are trying to build a humanoid robot, the most attention needs to go to the legs. Robobob has 5 degrees of freedom in each leg. He has two ankle servos, one knee servo, one thigh servo and one hip servo.



    I would consider this a pretty good number, but you could get away with removing either the hip or one of the ankle servos and manage.

    So that's 10 controllable degrees of freedom just for the legs.

    The torso doesn't need to move at all, but giving it the ability to rotate will allow your robot to shift its weight quickly to maintain its balance. Adding one additional servo won't break the bank, and it will contribute a lot to the moves you can perform.

    Arms can be as simple or as complex as you like. They do contribute to balance and walking a little, but far less than the torso or the legs. If you are building a robot to fight or wrestle, you will need arms to achieve this. However, if you have a torso which can rotate, all you will need to do with the arms is to raise and lower them in a flapping motion and rotate. If you take this approach, you can get away with as little as 13 servos.

    Of course, the exact implementation is up to you. It will depend on your budget and your requirements.

    Robobob has 16 degrees of freedom. 5 for each leg and 3 on each arm. This has worked out so far for me, and 16 is actually on the low end of kits available.

    If you want to have more than this, I would definitely consider adding them to the legs. A lot of robo-one robots use the double-knee joint, which allows them to stand up and walk twice as fast. The ankles also have a very big impact on how your robot will be able to walk, so if you can get x, y and z rotation on the ankles you will appreciate the advantages.

    The next stage is figuring out the approximate dimensions of your robot. I say approximate, because there is every chance that a design decision you make a little further on down the track will require you to rethink his shape (eg, servo, battery size and weight).

    Draw up a skeletal model of your robot and come up with some trial values for the distances between joints. Here are a few things to consider:
    • Taller Robots are easier to knock down because of their higher center of gravity
    • Longer legs mean faster movement.
    • Shorter distances between joints will require less powerful servos.
    • Longer distances between joints will move the limb much more quickly.
    • Leave room for your batteries and circuitry.
    • Make it look cool! It's not all about optimizing numbers.
    At this stage, keep it simple. These dimensions will help you select appropriate servos for your needs. If you aren't sure, have a look at existing humanoids and see what works. I will go over a more detailed design in the future.

    Next time, I'll talk about choosing the right servos. As you can tell, there is a bit of a catch-22 about design, since your choice of servos depends on your dimensions, which in turn will depend on your servo choice.

    Tuesday, February 2, 2010

    Robobob

    Ladies and gentlemen, I give you: Robobob!



    You may recall that a while back I opened my inbox to email suggestions for a good name. I actually intended to draw the name randomly or put up a poll, but once I started calling him Robobob the name sort of just stuck.

    So here he is, in all his 16DOF glory standing up to a withering barrage of socks from all directions.

    Once again, he has no gyroscope or accelerometer - he is simply using kinematic feedback from his servos to keep himself stable. The algorithm is almost identical to the one I used way back in this post - but he can now withstand attacks from all directions.

    You may notice that he always moves with the force that he is being subjected to. This is the same way that humans usually react - recoiling from a blow or a source of pain. I think it gives him a bit of a personality.

    Monday, February 1, 2010

    Tremble before DISHZOR!

    People are often surprised when I tell them that my favourite robot is the dishwasher. Not Megatron or ASIMO or the roomba - just your ordinary, garden variety dishwasher.

    You see, there isn't much of a consensus regarding what defines a robot as a robot. Some people think they need to be humanoid. Others feel that robots should include sensors and actuators. Computers and software are often mentioned.

    One of the broader, but generally well received definitions is that a robot is a machine of some sort, capable of performing tasks on its own. Begrudgingly, most people will admit that this would include dishwashers. And washing machines, printers and even modern cooking appliances.

    That's right - robots have already invaded your home and you didn't even realise! Insidious!

    I think one of the reasons that these everyday robots have slipped under our radar is that we don't have such high expectations of them. We assume robots should be smart, since they are capable of performing the same tasks which humans can - but we tend to assume that tasks which are difficult for us are difficult for everyone.

    Sci-Fi has led us to bestow a level of expectation on a robot which is based on their appearance. For example, dog shaped robots tend to be about as smart as actual dogs, despite the fact that their positronic brains could make them just as smart as a human. Robotic bugs will behave exactly like real bugs would, even though they have no reason to seek dark places or put on threatening displays.

    In the case of dishwashers, we tend to assume that machines shaped like large bricks will have an equivalent IQ. That's probably why I find this so much less impressive than it actually is.



    It's no small feat, but it just seems that a multi robot system like that should be capable of so much more.

    This phenomenon also means that as soon as you give your robot a humanoid form, people suddenly have much higher expectations. I think this is why modern demonstration robots are given infant-like designs - if the robot reminds people of a five year old, then they will be that much more impressed when it walks without falling over or manages to grasp objects.



    So if you are ever trying to demonstrate how smart your robot's artificial intelligence is, remember to make it look stupid so that people will be extra impressed. It also helps to avoid the uncanny valley.

    Friday, January 29, 2010

    Come with me - if you want to live.

    I finally watched Terminator Salvation the other night.

    I couldn't work out if the machines actually knew who Kyle Reese was or why they wanted to capture him - and also why they didn't just kill him when they had the chance!

    There was also the matter of how lightly guarded skynet central was (despite apparently producing about 1 T800 per second), and how those motorcycle robots were supposed to get up if someone knocked them over.

    What I really wanted to talk about though, was the Robopocalypse. Not so much the idea that robots will eventually rise up against their human masters, but just about how robots (or any machine) can be dangerous if people are careless around them.

    I used to be vice-president of the Sydney University Mechatronics Organization, which held a yearly robot wars competition. Now, as you can imagine, robots designed to tear each other to bits would be equally (more?) effective at maiming pathetic squishy humans, so it was hard to disregard OH&S.

    Despite a strong emphasis on safety, the university was in a difficult position when it came to allowing students to use their facilities. If we were injured in one of the workshops or laboratories, then the uni was liable - even if we were being properly supervised or if it was due to our own negligence. However, if we attempted to work on the robot outside of the university, we were without the appropriate tools and safety equipment and were significantly more likely to injure ourselves. This created many headaches and administrative woes, which ultimately turned what should have been a fun and simple activity for students into a legislative nightmare.

    This is why I am so much in favor of the smaller and safer robo-one style competition. I think it is a more elegant form of robot combat, for a more civilized age.





    This video is from a new humanoid competition called RT Corp under 1kg Robot Fight. Like robo-one, it emphasizes lightweight robots which wrestle each other in a ring.

    I think that this is a better direction for schools and universities who want to encourage students to learn about robots and mechatronics while having a bit of fun.

    1. Robot wars robots are glorified remote control cars. There, I said it. Many designs, like wedges or saws don't need any additional circuitry to control their weapons. The design is primarily in the mechanical field, and students will learn very little about the electrical or software side.
    2. It's so much safer. There are no dangerous weapons, and the weight limit is also much lower (unless you're these guys). The machining process is also much simpler (use tin-snips instead of table saws, solder instead of welding, etc).
    3. You get to keep your robot at the end. One of the most annoying things about robot wars is that if you lost, your robot was toast. Nothing gets damaged or destroyed in a wrestling match, so you are free to upgrade and modify parts for next time.
    4. The emphasis is on design and technique, not raw power. Software, real-time control, electrical systems and weight distribution are just as important as how strong your motors are.
    5. It's just as exciting. Robots can take their inspiration from boxing, tae-kwon do, ju-jitsu, MMA, akido, pancrase or any other type of martial art.

      The biggest drawback is that it can be quite expensive to build these little guys, but probably not by a huge margin. Our 13kg Robot Wars robots cost $300-500 (excluding the remote controls, which were borrowed), but I can see an under-1kg robot being built for a similar amount.

      If that doesn't convince you, just look at how much fun these guys are having!



      Wednesday, January 27, 2010

      It's not a bug, it's a feature

      Mystery solved!

      Here is a quote from the avrlibc documentation (which I probably should have looked at earlier):

      In order for these functions to work as intended, compiler optimizations must be enabled, and the delay time must be an expression that is a known constant at compile-time. If these requirements are not met, the resulting delay will be much longer (and basically unpredictable), and applications that otherwise do not use floating-point calculations will experience severe code bloat by the floating-point library routines linked into the application.

      This describes exactly the symptoms I was experiencing - if the function was passed a constant, it worked flawlessly. If I was using a value fetched from program memory, then the compiler was unable to determine how many NOPs should be performed and the delay was unpredictable.

      I don't really know why the library function requires prior knowledge of how long a delay should last. All a spin loop does is keep the processor busy for a set number of cycles. If you need longer, you can simply put another loop around the first loop to multiply its effects. Sure, without careful calculations you can lose your nanosecond accuracy, but when you are working in the millisecond range it is usually unimportant if you are off by a few clock cycles.
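
      If you do need a runtime-variable delay, the usual workaround is exactly that nested loop, wrapped up in a helper so you only write it once - something along these lines (assuming F_CPU is already defined for your clock speed):

      #include <util/delay.h>

      /* _delay_ms() only accepts compile-time constants, so spin on a constant
         1ms delay and let the loop count supply the variable part. */
      static void delay_ms_var(unsigned int ms)
      {
          while (ms--)
              _delay_ms(1);
      }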

      If you want the best accuracy, you should be using a timer or counter interrupt anyway...

      ..but that's a story for another night. I don't intend to be using spin loops for future versions anyway.

      Thanks to the people who emailed me and suggested that I might be accidentally passing the value of the pointer to the function, instead of the dereferenced value. That would also explain the strange behaviour, but it would also have shown up in the compiler output.

      In other news - what's up with the horrible popup?

      If you don't live in Australia (or perhaps even if you do) you might be unaware that the Australian government is considering introducing mandatory filtering of internet sites.

      Much like draconian airport security, national identity cards and racial profiling, this filter is supposed to be being put in place for our "protection". Because apparently Australians can't be trusted with unregulated free speech and freedom of information.

      I understand that some people are willing to sacrifice their personal liberties for the comforting feeling of safety, but the problem is not with the filter as it exists now, but what it could become.

      First they came for the communists, and I did not speak out—because I was not a communist;
      Then they came for the trade unionists, and I did not speak out—because I was not a trade unionist;
      Then they came for the Jews, and I did not speak out—because I was not a Jew;
      Then they came for me—and there was no one left to speak out.

      Monday, January 25, 2010

      Byte by byte

      Here are a few things which might aid you in your detective work.

      First of all, I am still using avr-gcc and avrlibc to build the code. To achieve this, you first need to:

      compile,
      avr-gcc -mmcu=atmega128 -I. -g -Wall -Os -c *.c

      link,
      avr-gcc -o main.elf -mmcu=atmega128 *.o

      and then pack
      avr-objcopy -j .text -j .data -O ihex main.elf main.hex

      Then download the resulting .hex file using the RBCUpgradeTool. Note that you can simply switch the RBC block off and on, rather than messing around with the reset pin.

      This should help anyone who is trying to get to the bottom of the mystery I spoke about earlier. I also promised to talk a little about transferring data from the program space into working memory.

      Unlike desktop computers, most microcontrollers (the AVR included) are built according to the Harvard Architecture, which keeps the program instructions separate from the working memory. They even have separate buses, which means that an opcode and an operand can be fetched on a single clock cycle. In normal operation, there is no need to exchange data between the two areas of memory, since self modifying code has been all but banished to the history books.

      Sometimes, you find yourself running out of working memory, especially if you are storing large tables or arrays which will eat up the free addresses very quickly. Also beware of memory leaks. Unlike a desktop with an operating system, your program won't segfault or blue-screen to let you know something is wrong - it will probably just start suddenly spewing garbage.

      However, if you absolutely must use giant arrays or matrices in your program, you can keep your RAM free by storing them in the program memory and then copying them out only when you need to use them. This is done by using the LPM (Load Program Memory) instruction, which enables a special register which shifts the data from the program bus onto the data bus.

      Unfortunately, because the data bus is still only 8 bits wide, memory has to be loaded one byte at a time - so ints (which are 16 bits) need special routines to make sure that the high and low bytes are stored in adjacent locations. Also, reading a byte from flash memory (where the program is stored) takes a few extra clock cycles, during which execution needs to halt. Reading a large table can therefore add a noticeable delay.

      Most of the details and heavy lifting are handled by special hardware and instructions on the microcontroller, and core data types like floats have library functions in avrlibc. It is still the responsibility of the programmer to know when to load a table, and particularly to make sure that any memory loaded goes out of scope or is free()d when the operation is complete.

      The canned actions which are made using the robobuilder software suite can be exported as 16xN tables, where N is the number of poses (there are 16 servos for the basic robobuilder). For a complicated action, this could be many, many bytes - so the tables are necessarily stored in the flash memory. And of course, since only one action can be performed at a time, you can store dozens of actions in the 128kB of flash and simply swap them out when one needs to be performed.
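
      To make that concrete, here is a minimal sketch of stashing a table in flash and pulling one row back out when it's needed. The table contents are made up, but PROGMEM, pgm_read_byte() and memcpy_P() are the standard avr-libc tools for the job:

      #include <avr/pgmspace.h>

      /* A made-up 4x16 pose table kept in flash instead of the precious 4kB of RAM. */
      const unsigned char poses[4][16] PROGMEM = {{0}};

      void loadPose(unsigned char which, unsigned char out[16])
      {
          /* Copy one 16-byte row from program memory into working memory.
             memcpy_P() does the byte-at-a-time LPM loads for you;
             pgm_read_byte() is the equivalent for a single byte. */
          memcpy_P(out, poses[which], 16);
      }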

      Friday, January 22, 2010

      A puzzle

      Boy do I have a good one for you today!

      First off, version 0.2 pre-pre-alpha-alpha-etc is available. Please download it and try it out - report any bugs or suggestions in the comments.

      The improvements in this update focus on enabling the execution of pre-generated actions, such as those from the robobuilder site. A few are included in the Hunobasic.h header for demonstration purposes. I have also written a few useful functions to transfer data from program memory to working memory (which will probably be the subject of the next update).

      In keeping with the philosophy of not trying to rewrite the default firmware, I've broken the procedure for executing an action (which is a series of poses) into much smaller steps, which makes it much more intuitive.

      Here is the method of making the robot walk forwards:

              getPGMWordArray((int*)HunoBasic_WalkForward_TrTime, (int*)delay, HUNOBASIC_WALKFORWARD_NUM_ACTION);
      //Load transition times

              for(i = 0; i <= HUNOBASIC_WALKFORWARD_NUM_ACTION; i++)
              {
      //For every pose in the action,
                  getPGMByteArray((char*)HunoBasic_WalkForward_Position[i],(char*)position,16);
      //Load the angle for each of the servos
                  setPose(position,torque,16);
      //Tell each servo to move
                  if(i < (HUNOBASIC_WALKFORWARD_NUM_ACTION))
                  {
      //After each frame, except the last,
                      j = delay[i];
                      while(j > 0){_delay_ms(1);j--;}

      //Pause for the required transition time.... 
      //wait... 
      //..WTF??
                  }
              }

      What's up with that last part? Why would you loop ten times and each time delay for 1ms, when you could just delay for 10ms in one go?

      Well, here's why - it doesn't work!

      Substituting the line with:

      _delay_ms(delay[i]);

      will pause for a much longer time than it should. If you have an AVR, please test this out for me and confirm it. However, passing a constant value to _delay_ms() works exactly as intended. I've tried it many ways, but this is the only method which works so far.

      _delay_ms(25);
      works, but
      delay[i] = 25; 
      _delay_ms(delay[i]);
      does not.

      Mysterious.

      As I've mentioned before, this solution is hacky and unattractive. But I thought I might upload the code (the rest of which is working fine) and see if anyone has any suggestions as to why this might be.