Archives for category: Holonomic wheels

I have built a new holonomic robot. It has a smaller wheelbase than Agilis and therefore turns faster. With a 1:3 gear ratio, its maximum controllable speed is about 80 cm/second. It is also very sturdy, thanks to all the triangles in its frame. And above all, I think it is prettier than Agilis. However, that is what every father says about his newborn. Below are some pictures.

I got some feedback on my last post with “building instructions” for Agilis. My friend Kirk wondered why I didn’t make proper instructions. He was not impressed by my lame excuses about odd angles or illegal builds and showed me that things could be done. He was right; it was about time for me to master some Lego CAD tools. I chose LDraw and MLCad for the job. My fear of a steep learning curve proved wrong. The manual is quite good and I was up and running within an hour. The hardest thing to learn was the discipline to work precisely right from the start. One cannot build a sketch first and then improve the model. Well, technically you can, but you shouldn’t, as every brick you move triggers a chain of new moves that need to be made.

It was fun to create proper building instructions for Agilis. Along the way I also improved the gearbox, as suggested by another reader of my blog, Thimoty. You can freely download the instructions. I do appreciate constructive feedback. This is my first set of instructions and I am sure there is room for improvement.

By request I created building instructions for Agilis. Well, sort of. It is a series of photos made while taking one of Agilis’ legs apart, presented to you in reverse order so that it looks like a building instruction.

One warning before you start: the gear box can be assembled very easily, but it cannot be disassembled without some tools and, potentially, some damage.

The gear box displayed is for a 1:3.33 gear ratio. Here you find a picture of a 1:2 gear box. I think the 1:2 gear box is a better option.

The parts list in pictures 1 to 5 is for one leg only. Also, I did not have the parts to make a nice color scheme, so you might end up with a rainbow warrior in black and white if you follow my color scheme.

Please send me a message or picture when you build Agilis.

Today Dexter Industries launched their latest product, an all-color LED called the dLight. I was involved in the development of the dLights, so I got them early. I can also give away one set of dLights to one of my readers. More on that later. Let’s take a look at the dLights first in this video.

I mounted three dLights underneath Agilis, one under each leg pointing at the wheel. I programmed the dLights to give a color that corresponds to the wheel speed of that leg. I think the result looks cool.

If you look closely at the sensor ports in the video, you’ll notice that only one is in use. This is one of the great benefits of the dLights: you can daisy-chain them. So one free port is all you need to give a robot multiple lights. One set contains four dLights plus the cables to chain them.

As said, I can give away one set of dLights. If you want this set you just have to reply to this message before the first of April.

In case you might wonder how fast or accurate Agilis is, here are some numbers.

Setup

• The gear ratio of the motors to the wheels is 1:2, making the wheels rotate twice as fast as the motors. (It is possible to change the gear ratio to 1:3).
• Prototype orange Rotacaster wheels. This is the hard compound. There is also a medium compound (gray) and a soft compound (black) available.
• The batteries were rechargeable NiMH, 2500 mAh batteries. These were not fully charged.
• The test surface was a clean linoleum floor.

Speed

• Reliable top speed is about 60 cm/sec, equivalent to 2.16 km/h or 1.35 mph. At this speed the robot is still accurate as there is ample margin for the PID controllers of the motors.
• Unreliable top speed is about 75 cm/sec, equivalent to 2.7 km/h or 1.68 mph. At this speed the robot is not very accurate, especially in its heading.
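These numbers line up with the drivetrain. With 48 mm wheels geared up 1:2, motors running at roughly 120 RPM give just about the reliable top speed. A quick sanity check (the wheel diameter and sustained motor speed are my assumptions, not measured values from this test):

```java
public class TopSpeedCheck {
    // Assumed wheel diameter in cm (48 mm Rotacaster wheels).
    static final double WHEEL_DIAMETER_CM = 4.8;
    // Gear ratio 1:2 -> wheels turn twice as fast as the motors.
    static final double GEAR_UP = 2.0;

    /** Robot speed in cm/s for a given motor speed in RPM. */
    static double robotSpeedCmPerSec(double motorRpm) {
        double wheelRpm = motorRpm * GEAR_UP;
        double circumferenceCm = Math.PI * WHEEL_DIAMETER_CM;
        return wheelRpm / 60.0 * circumferenceCm;
    }

    public static void main(String[] args) {
        // An NXT motor sustains roughly 120 RPM under load (assumption).
        System.out.printf("Top speed: %.1f cm/s%n", robotSpeedCmPerSec(120));
    }
}
```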

Accuracy

• The test track is a square with sides of one meter. During each run the test track is traveled four times, making the total distance 16 meters.
• The robot finishes the test track on average within 10 cm of its starting position. Expressed as a percentage of the total distance the error is about 0.6%.
• The movement error is systematic. The robot always ends up above and to the right of the starting position.
• The robot is more accurate at slower speed and acceleration settings.

The image shows the results of the accuracy testing. For each test the robot was placed exactly on the origin (bottom left in the picture). It then traveled the one-meter square four times, making the total distance traveled 16 meters. The finish location of the robot was then marked on the floor. This test was repeated three times for each set of speed and acceleration settings. Three different dynamic sets were used: speed 50 cm/sec with acceleration 100 cm/sec^2, speed 50 cm/sec with acceleration 750 cm/sec^2, and speed 30 cm/sec with acceleration 60 cm/sec^2.
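The roughly 0.6% figure quoted above follows directly from these numbers, 10 cm of offset over 16 meters of travel:

```java
public class AccuracyCheck {
    /** Finish offset as a percentage of the distance traveled. */
    static double errorPercent(double errorCm, double trackCm) {
        return errorCm / trackCm * 100.0;
    }

    public static void main(String[] args) {
        double trackCm = 4 * 4 * 100.0; // four laps of a 1 m square = 1600 cm
        System.out.printf("Relative error: %.2f%%%n", errorPercent(10.0, trackCm));
    }
}
```

The exact value is 0.625%, which rounds to the 0.6% mentioned above.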

I want to repeat the tests with a 1:3 gear ratio and also with the black Rotacaster wheels.

This Christmas holiday I started working on a new robot, called Agilis. This robot should be a very agile and elegantly moving robot. The frame is based on a triangle equipped with holonomic wheels. So you might think, “What’s new, it is like your last robot?”. From the outside this is true, but it gets new and better brains on the inside. Let me tell you what I envision.

Most robots I built went from point A to point B, only then deciding what to do next. Others just drove around avoiding obstacles. This one should be able to do both at the same time. Agilis must be able to perform complex manoeuvres, like driving in a straight line while rotating around its centre, or driving an arc while staying pointed at an arbitrary spot. It should constantly use sensory input to stay on course, or to alter its course if needed. And all this must go fluently, just like a cat walking through the room.

Over the next several posts I will discuss the different aspects of Agilis. This first post deals with the drive system.

The chassis

Agilis is a three-wheeled holonomic platform. This means it can drive in any direction without turning. It can turn around any point, even around its own center. Each wheel is driven by an NXT motor via a gear train with a 1:2 ratio; the wheels rotate twice as fast as the motors. This makes Agilis a relatively fast robot. The gear train has to be very sturdy to handle the torque of the motors. It also has to be precise to allow for odometry. I used the same setup that I developed for my last robot, Koios the guard bot.

From robot speed to motor speed

Controlling a holonomic robot is not very easy; it takes some math. I created a kinematic model that does just that: it takes robot speed as input and gives motor speed as output. Robot speed is expressed as speed in the x-direction, speed in the y-direction and rotational speed. Motor speed is expressed in encoder ticks per second.

So what does this kinematic model look like? For a single wheel it is a function that takes the three robot speeds as input. For the three wheels together it is a matrix multiplication that multiplies a robot speed vector {xSpeed, ySpeed, angularSpeed} with a kinematic matrix. The resulting vector contains the speed of each of the three wheels. Let’s take a look at the single wheel function first.

To translate robot speed into motor speed one needs to know some physical aspects of the robot, the wheel and the motor. How big is the wheel, how far is it from the center of the robot, under what angle is it mounted, what is the gear ratio of the gear train, and what is the number of encoder ticks per full rotation of the motor? With all this information one can write a formula to calculate motor speed from robot speed. Here is the formula.

motorSpeed =
  xSpeed * cos(wheelAngle) * nEncoderTicks / (gearRatio * 2 * PI * wheelRadius)
- ySpeed * sin(wheelAngle) * nEncoderTicks / (gearRatio * 2 * PI * wheelRadius)
+ angularSpeed * distanceToCenter * nEncoderTicks / (gearRatio * 2 * PI * wheelRadius)

This formula might look daunting at first, but on second glance you might notice that there are a lot of constants in it. If you substitute the constants with their respective values you will end up with a much simpler formula.

motorSpeed = xSpeed * aConstantValue - ySpeed * anotherConstantValue + angularSpeed * yetAnotherConstantValue

This formula is not only much simpler, it is also very efficient to calculate: just three multiplications and two additions. An NXT can do this in no time. But remember, these constants are not the same for all motors, because each wheel has a different wheelAngle. You could also have wheels of different sizes, or differences in any of the other aspects. This means that you have a formula for each motor; each formula has the same structure but its own constants. These constants can be stored in a matrix, where each row contains the three constants belonging to a single wheel, one row per wheel. If you then multiply the robot speed vector with this matrix, all formulas are calculated at once and the result, the motor speeds, is stored in a new vector. Each row in this vector holds the speed of a single motor. In Java this matrix multiplication would look like this:

Matrix motorSpeed = kinematicModel.times(robotSpeed);

Wow, now things look simple at last! This is the beauty of matrix algebra.
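To make this concrete, here is a small self-contained sketch of the idea in plain Java. The wheel angles, wheel radius, gear ratio and wheel-to-center distance are illustrative assumptions, not Agilis’ exact dimensions, and the real robot uses the Matrix class instead of bare arrays:

```java
public class KinematicModel {
    // Robot description -- illustrative assumptions, not Agilis' real numbers.
    static final double WHEEL_RADIUS = 0.024;   // 48 mm wheels, in meters
    static final double GEAR_RATIO = 2.0;       // wheels turn twice as fast as motors
    static final double TICKS_PER_REV = 360;    // NXT motor encoder
    static final double DIST_TO_CENTER = 0.10;  // wheel to robot center, in meters
    static final double[] WHEEL_ANGLES = { 0, 2 * Math.PI / 3, 4 * Math.PI / 3 };

    /** One row of constants per wheel: {xConst, yConst, angularConst}. */
    static double[][] buildMatrix() {
        double[][] m = new double[3][3];
        double k = TICKS_PER_REV / (GEAR_RATIO * 2 * Math.PI * WHEEL_RADIUS);
        for (int w = 0; w < 3; w++) {
            m[w][0] = Math.cos(WHEEL_ANGLES[w]) * k;
            m[w][1] = -Math.sin(WHEEL_ANGLES[w]) * k;
            m[w][2] = DIST_TO_CENTER * k;
        }
        return m;
    }

    /** motorSpeeds = kinematicModel * {xSpeed, ySpeed, angularSpeed}. */
    static double[] motorSpeeds(double[][] model, double[] robotSpeed) {
        double[] out = new double[3];
        for (int w = 0; w < 3; w++)
            for (int c = 0; c < 3; c++)
                out[w] += model[w][c] * robotSpeed[c];
        return out;
    }

    public static void main(String[] args) {
        // Pure rotation: all three motors must run at the same speed.
        double[] speeds = motorSpeeds(buildMatrix(), new double[] { 0, 0, 1 });
        System.out.printf("%.1f %.1f %.1f ticks/s%n", speeds[0], speeds[1], speeds[2]);
    }
}
```

A nice property to check: for a pure rotation all motors get the same speed, and for a pure translation the three motor speeds sum to zero (the wheel angles are spread evenly around the circle).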

The same kinematic model can be used to transform robot acceleration into motor acceleration. I use this to make my robot accelerate very smoothly (the regulated motor class of Lejos supports regulated acceleration).

From tacho counter to robot position

To drive a robot this kinematic model works perfectly. But I also want to be able to do things the other way around: calculate robot position from encoder values. At first I couldn’t figure this out at all; the math was just too complex for my mind. That is, until I realized that I just needed the inverse of the kinematic model.

deltaRobotPose = inverseKinematicModel * deltaMotorPosition

Here deltaMotorPosition is a vector containing the change in encoder value of each motor since the previous measurement. The inverseKinematicModel is the kinematic model inverted, and deltaRobotPose is the change in pose (x and y position and heading) of the robot. Looks simple, doesn’t it? The problem is how to calculate the inverse of the kinematic matrix. I can’t tell you, because I don’t know. But hey, somebody else already programmed this in Java; I just used the inverse method of the Matrix class.
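For the curious: for a 3×3 kinematic model the inverse method boils down to the classic adjugate formula. Here is a sketch in plain Java; the model dimensions are illustrative assumptions, and on the robot the Matrix class does this work:

```java
public class Odometry {
    /** Inverse of a 3x3 matrix via the adjugate formula. */
    static double[][] invert3x3(double[][] m) {
        double det = m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                   - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                   + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
        double[][] inv = new double[3][3];
        for (int r = 0; r < 3; r++)
            for (int c = 0; c < 3; c++) {
                // inv[r][c] is the cofactor of m[c][r] divided by the determinant.
                int r1 = (c + 1) % 3, r2 = (c + 2) % 3;
                int c1 = (r + 1) % 3, c2 = (r + 2) % 3;
                inv[r][c] = (m[r1][c1] * m[r2][c2] - m[r1][c2] * m[r2][c1]) / det;
            }
        return inv;
    }

    static double[] multiply(double[][] m, double[] v) {
        double[] out = new double[3];
        for (int r = 0; r < 3; r++)
            for (int c = 0; c < 3; c++)
                out[r] += m[r][c] * v[c];
        return out;
    }

    public static void main(String[] args) {
        // Illustrative kinematic model: wheels at 0, 120 and 240 degrees,
        // 24 mm wheel radius, 1:2 gearing, 360 ticks per motor revolution,
        // wheels 10 cm from the center (assumptions, not Agilis' real numbers).
        double k = 360 / (2.0 * 2 * Math.PI * 0.024);
        double[] angles = { 0, 2 * Math.PI / 3, 4 * Math.PI / 3 };
        double[][] kinematicModel = new double[3][3];
        for (int w = 0; w < 3; w++) {
            kinematicModel[w][0] = Math.cos(angles[w]) * k;
            kinematicModel[w][1] = -Math.sin(angles[w]) * k;
            kinematicModel[w][2] = 0.10 * k;
        }
        double[][] inverseModel = invert3x3(kinematicModel);

        // Odometry: recover the pose change from the encoder deltas.
        double[] deltaTicks = multiply(kinematicModel, new double[] { 0.05, 0.02, 0.1 });
        double[] deltaPose = multiply(inverseModel, deltaTicks);
        System.out.printf("dx=%.3f dy=%.3f dheading=%.3f%n",
                deltaPose[0], deltaPose[1], deltaPose[2]);
    }
}
```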

From the robot's coordinates to the world's coordinates

There is just one more thing to it. The robot can have any heading, which means that the x and y coordinates of the robot are not aligned with the x and y coordinates of the world. To be able to steer the robot to a certain position in a room, one must be able to convert that location into a location as the robot sees it. The same goes for keeping track of pose. We have seen the formula to calculate the change in pose from the wheel encoders. This change, however, is a change as the robot sees it, not a change in the robot's position in the world. The translation from world coordinates to robot coordinates can also be done with a simple matrix multiplication, using a rotation matrix. The rotation matrix itself can be calculated from the heading of the robot.

$\begin{bmatrix} \cos(heading) & -\sin(heading) & 0\\ \sin(heading) & \cos(heading) & 0\\ 0 & 0 & 1 \end{bmatrix}$

Suppose you want to drive your robot to the left side of the room. The speed vector in the world frame would look like {0, speed, 0}. This can be multiplied with the rotation matrix to get a speed vector as the robot sees it.

robotSpeed = rotationMatrix * worldSpeed

If we want to go the other way around, to get the change in pose in the world frame, we multiply the change in the robot frame with (you guessed it) the inverse of the rotation matrix. For rotation matrices the inverse is the same as the transpose, and the transpose is far more efficient to calculate.
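A small sketch of this rotation step, again in plain Java. Note that the sign convention depends on how you define heading, so the demonstration simply shows that the transpose undoes the rotation:

```java
public class RotationDemo {
    /** Rotation matrix for a given heading (radians). */
    static double[][] rotation(double heading) {
        double c = Math.cos(heading), s = Math.sin(heading);
        return new double[][] { { c, -s, 0 }, { s, c, 0 }, { 0, 0, 1 } };
    }

    /** For rotation matrices the transpose equals the inverse. */
    static double[][] transpose(double[][] m) {
        double[][] t = new double[3][3];
        for (int r = 0; r < 3; r++)
            for (int c = 0; c < 3; c++)
                t[r][c] = m[c][r];
        return t;
    }

    static double[] multiply(double[][] m, double[] v) {
        double[] out = new double[3];
        for (int r = 0; r < 3; r++)
            for (int c = 0; c < 3; c++)
                out[r] += m[r][c] * v[c];
        return out;
    }

    public static void main(String[] args) {
        // World-frame speed: drive to the left side of the room at 20 cm/s.
        double[] worldSpeed = { 0, 20, 0 };
        // Convert to the robot frame for a heading of 90 degrees. I use the
        // transpose for world-to-robot here; the convention is an assumption.
        double[] robotSpeed = multiply(transpose(rotation(Math.PI / 2)), worldSpeed);
        System.out.printf("robot frame: {%.1f, %.1f, %.1f}%n",
                robotSpeed[0], robotSpeed[1], robotSpeed[2]);
    }
}
```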

Wrap up

This really is all there is to driving a robot. To sum it up: you have a kinematic model to translate robot speed into motor speed, and a rotation matrix to translate things from world coordinates to robot coordinates.
The same goes for odometry. You have the inverse of the kinematic model to translate changes in encoder values into a change in robot pose expressed in robot coordinates, and the inverse of the rotation matrix to translate a change in robot pose in robot coordinates into world coordinates.
The kinematic model is constant; it has to be calculated only once (unless your robot changes shape). The rotation matrix, on the other hand, has to be updated every time the heading of the robot changes.

The implementation

The robot uses Lejos as its brains. Lejos has some excellent classes to drive the NXT motors. The regulated motor class that I used is able to rotate a motor at any given speed; this speed is maintained no matter what the conditions are. It also supports setting an acceleration rate. This is very good for my robot, as for most movements the wheel speeds of the three motors differ. If all wheels accelerated equally, the slower moving motors would reach their target speed sooner than the faster spinning motors. This results in a robot that shivers and shakes during acceleration (or braking). All this can be avoided by setting an acceleration value for each motor. The ratio of the acceleration values must be the same as the ratio between the (differences between current and) target speeds of the motors. If done properly the robot accelerates very smoothly, without jerky movements.
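Computing those acceleration values is straightforward: scale a maximum acceleration by each motor's share of the largest speed change, so that all motors reach their targets at the same moment. A sketch (the method name and the numbers are my own, not Lejos API):

```java
public class SmoothAcceleration {
    /**
     * Per-motor acceleration values such that all motors reach their target
     * speeds at the same moment. The motor with the largest speed change
     * gets maxAcceleration; the others are scaled down proportionally.
     */
    static double[] accelerations(double[] current, double[] target, double maxAcceleration) {
        int n = current.length;
        double[] delta = new double[n];
        double largest = 0;
        for (int i = 0; i < n; i++) {
            delta[i] = Math.abs(target[i] - current[i]);
            largest = Math.max(largest, delta[i]);
        }
        double[] acc = new double[n];
        if (largest == 0) return acc; // already at target speed
        for (int i = 0; i < n; i++)
            acc[i] = maxAcceleration * delta[i] / largest;
        return acc;
    }

    public static void main(String[] args) {
        // Three motors with different target speeds (ticks/s), from standstill.
        double[] acc = accelerations(new double[] { 0, 0, 0 },
                                     new double[] { 600, 300, -300 }, 6000);
        System.out.printf("%.0f %.0f %.0f%n", acc[0], acc[1], acc[2]); // 6000 3000 3000
    }
}
```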

Lejos also has a Matrix class that helps to perform matrix algebra. I used this class to store the (inverse) kinematic models and the rotation matrix. I subclassed it to make a Pose class that can be used together with the Matrix class.

To create the kinematic model I developed a Builder class. This class has all kinds of methods to describe the robot: the wheels, the motors and the gear train. When you are done describing the robot, the Builder class delivers you a kinematic model and an inverse kinematic model.

To drive the robot I made a kind of pilot class. I plan to discuss it in a future post. This pilot accepts the kinematic model in its constructor.

For odometry I made another class, the Odometer. This class uses the inverse kinematic model.

Remember my plan to make a ball balancing robot? Last year I set myself the goal of making a ball balancing robot. I even built the robot. Since then I wandered off my original goal and made a guard bot, Koios, from this platform. Now I am having another shot at making a balancing robot.

Programming a balancing robot is easy in theory. You just need a sensor that tells you how much the robot is tilted; most often people use a gyro for this. I use my IMU, so that I do not suffer from gyro drift. The tilt angle is then fed to a PID controller that transforms tilt into motor speed. The hard part is to tune the PID controller. It has to translate tilt into just the right amount of motor speed: too little and the robot falls off the ball, too much and the robot goes over the top and falls off on the other side. Falling off the ball damages the robot. So I had a problem: how to tune the PID controller without damaging the robot?
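The controller itself is only a few lines; all the work is in finding the three gains. A minimal sketch of such a PID controller (my own illustration, not the actual robot code):

```java
public class PidController {
    private final double kp, ki, kd;
    private double integral, previousError;

    public PidController(double kp, double ki, double kd) {
        this.kp = kp;
        this.ki = ki;
        this.kd = kd;
    }

    /** Translate the current tilt error (radians) into a motor speed correction. */
    public double update(double error, double dt) {
        integral += error * dt;                          // accumulated error
        double derivative = (error - previousError) / dt; // rate of change
        previousError = error;
        return kp * error + ki * integral + kd * derivative;
    }
}
```

Tuning means finding kp, ki and kd such that the correction is strong enough to catch the robot but not so strong that it overshoots to the other side.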

To be able to tune the PID controller without damaging the robot, I made a special platform. It is a large disk with a small pole in the middle pointing down. Due to the pole, the disk will always be tilted when lying on the ground; only when it balances perfectly on the pole is it level. Therefore this disk can be used to tune the controller. The robot can ride off the disk, but it doesn't fall; it just comes down on the floor with one or two wheels.

When I tested this setup I discovered that the disk was too smooth; the wheels didn't have enough grip and slipped. To increase the friction I coated the surface of the disk with silicone rubber. It is the light blue surface you see in the picture. Now I have a very “slick” surface. I only hope it lasts under the forces the NXT motors generate. But for the moment this problem is solved.

But there are other problems. One is the fact that these holonomic wheels make the robot vibrate, and this affects the IMU filter; there is still some drift, although it stays within certain limits. I do have prototype Rotacaster wheels. The manufacturer told me that the production wheels are more round and generate less vibration. If you are ever going to buy these wheels, and they are a lot of fun, I advise you to take the black ones; they have the best grip. Anyway, I will have to tune the IMU as well.

Tuning PID controllers is difficult and very, very time consuming. There is some theory around tuning PID controllers, but in the end it is mostly trial and error. Every time I want to try a new set of parameters I have to modify the program, download it to the brick, run the program and evaluate the results by watching the robot. It is hard to understand what goes wrong when you see the robot ride off the disk and make a run for the door to the staircase.

But not anymore. Kirk, one of the developers of Lejos, made a very nice program that allows you to tune a running PID controller over Bluetooth. The tool is still under development, so you won't find it in Lejos 0.9.1 yet. This program is an add-on to the chart logger I often use to evaluate the internals of the robot on the PC. So basically, this program shows me what is going on in my robot and allows me to modify PID parameters on the fly. I think this is a great tool. Below is a screen shot of it.

So, now I have the robot, a test platform and an efficient tuning tool. That must mean immediate success! Well, to be honest I don't think so. I'm still not sure if I can get this robot to work, as there are problems with weight and inertia as well. The robot weighs 998 grams. This is quite heavy, even for three powerful NXT motors. The robot is quite stiff, but it still bends a bit under its weight. This affects the IMU sensor. And I'm working on other projects as well. So in the end I think there is a bigger chance to fail than to succeed.

To be continued.

This time I want to introduce you to Koios the guard bot. Koios guards my house, it detects intruders and scares them away.

To perform this complicated task I have split Koios’ behavior into different subtasks and put these in a subsumption architecture.

The first task of Koios is to map its surroundings using two range sensors: an ultrasonic sensor for long range (<200 cm) and a Mindsensors DIST sensor for accuracy at short range (<80 cm). To map the surroundings Koios makes a full turn in place while scanning. The resulting map is what I call a circular range map: it stores the distance to the nearest obstacle for all directions (I recognize 24 directions, each 15 degrees wide). The map looks like a radar scan when plotted. This map is not permanent; it is thrown away when Koios moves. As Koios does not try to build and maintain a more permanent map of its surroundings, I did not have to deal with uncertainties about its absolute position. Therefore the mapping algorithm could be kept simple.
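A circular range map like this takes only a few lines of Java. The sector bookkeeping below is my own illustration of the idea, not Koios' actual code:

```java
public class CircularRangeMap {
    static final int SECTORS = 24; // 15 degrees per sector

    private final float[] range = new float[SECTORS];

    /** Map any heading in degrees (also negative) onto one of the 24 sectors. */
    static int sectorOf(double headingDegrees) {
        int s = (int) Math.floor(headingDegrees / (360.0 / SECTORS));
        return ((s % SECTORS) + SECTORS) % SECTORS;
    }

    /** Record a range measurement, keeping the nearest obstacle per direction. */
    void update(double headingDegrees, float measuredRange) {
        int s = sectorOf(headingDegrees);
        if (range[s] == 0 || measuredRange < range[s])
            range[s] = measuredRange;
    }

    float rangeAt(double headingDegrees) {
        return range[sectorOf(headingDegrees)];
    }
}
```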

The second task of Koios is to find a safe spot on the map and then to move to this spot. A safe spot for Koios is a spot where Koios is surrounded by objects. A place next to a wall is good, a place in a corner is better. Koios finds a safe spot by examining the map for a place between obstacles. When the safest spot on the map is found Koios travels to this place in a straight line.
Once arrived at the new location Koios again makes a map of the surrounding. If, at second glance, the current location is safe enough then Koios will stay there. If not, it will try to find an even safer spot. This process is repeated until a location is found that is safe enough for Koios.
The video below shows Koios scanning the area and finding a safe spot. I actually had some trouble shooting the video. At first I had placed my webcam on a chair. But to my surprise Koios hid itself under the chair. This is indeed a very safe spot, but it was outside the frame of the video. In the end I placed the camera on my desk to shoot the clip.

When Koios has found a really safe spot it will start guarding the area. It will slowly turn in place while scanning the area again. If an obstacle is detected its location will be compared to the map. When the obstacle is already on the map it will be regarded as a part of the surrounding and ignored. If on the other hand it isn’t on the map then it must be something new and it will be treated as an intruder.
The task of guarding is rather complicated, as there is always some uncertainty in both the map and the range sensor data. Suppose the map tells Koios that there is an obstacle at 150 cm and the range sensor detects an object at 145 cm. Is this an intruder or part of the surroundings? The choice Koios makes is based on statistics. To support this kind of decision making, a range map stores more information than just the range to the nearest object. It also stores statistical quantities like the number of measurements taken in that direction and the variance of the measured ranges. This makes it possible to calculate the probability of an obstacle being new (an intruder) or old (part of the surroundings). If it is part of the surroundings, the measurement is used to improve the map even further.
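The per-direction statistics can be kept with an online mean/variance update (Welford's algorithm), after which the intruder test reduces to asking how many standard deviations a measurement lies from the mapped range. A sketch of the idea; the z-score threshold is my illustration, not necessarily the exact test Koios uses:

```java
public class SectorStatistics {
    private int n;
    private double mean, m2; // Welford's online mean/variance

    void add(double rangeCm) {
        n++;
        double delta = rangeCm - mean;
        mean += delta / n;
        m2 += delta * (rangeCm - mean);
    }

    double variance() {
        return n > 1 ? m2 / (n - 1) : 0; // sample variance
    }

    /** Crude intruder test: is the measurement far outside what the map predicts? */
    boolean isIntruder(double measuredCm, double zThreshold) {
        double sd = Math.sqrt(variance());
        if (sd == 0) sd = 1; // fall back to 1 cm spread (assumption)
        return Math.abs(measuredCm - mean) / sd > zThreshold;
    }
}
```

With measurements of 148, 150 and 152 cm on record, an object at 145 cm is 2.5 standard deviations out and would be flagged, while 149 cm would be absorbed into the map.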

But if the object is an intruder Koios will home in on it! Koios will quickly run to the intruder until it is very close. I haven’t written this part of Koios’ behavior yet. So everything you read from now on is just on the drawing board.
For one thing, I have not decided yet whether Koios just runs blindly to the location where the intruder was detected, or tries to maintain a lock on the intruder while homing in on it. It would be nice if Koios were able to maintain a lock, but it will also be complicated. Maybe I could use a heat sensor, like some air-to-air missiles do, to maintain a lock on the intruder.

Anyway, once close to the intruder Koios will scare it away using both light and sound. First it will mimic some kind of scary animal, with eyes that glow in the dark, while making scary noises. Then it will mimic the police, using flashing lights and a siren. Then it will map its surroundings again to find a safe spot and make a hasty retreat.

Last week I spent my evenings redesigning my holonomic platform. It had to be wider in order to get the center of gravity (relatively) lower. I also wanted to gear it up to make it more agile. And I wanted the brick to be better accessible, especially the USB port. Two other demands remained from Sidbot: the wheel angle should be adjustable, so that the robot can be used both on a flat surface and on top of a large ball, and it had to be sturdy.

As it had to be sturdy, I decided that the motors should be fixed to the frame. Sidbot had adjustable motors to make the wheel angles adjustable. The wheels have to be adjustable on the new robot as well, which meant that the hinge point had to be between motor and wheels, somewhere in the gear train. I tried different designs and gears, but I always ended up with grinding gears. At last I ended up using a 5×7 liftarm to house part of the gear train. This effectively stopped the grinding, but resulted in a very wide wheel housing as well. This is not so pretty, so I'm still trying to improve this part. However, I now have a 2:1 gear ratio. With a motor speed of 120 RPM and a wheel diameter of 48 mm this gives the robot a top speed of 30 cm per second.

The frame of the robot consists of two triangles stacked vertically. The triangles are made of three 11-stud liftarms connected at the ends with 3- and 4-stud liftarms. This makes a very rigid frame; the brick lies on top of it, making it easily accessible. The motors are mounted horizontally with the flat side down. This gives the robot width, and also the ground clearance that is needed when riding on a ball. To prevent torsion between motor and frame I made an L-shaped housing for the motors.
I used light bluish gray and dark bluish gray for the color scheme, as these are the colors the motors and brick are made of. The result is a bit dull but still rather nice looking. It resembles a Star Wars-like vehicle. Maybe I should mount some laser guns on top.

The resulting robot meets all my design specifications. But I have not been able to test it yet, as I'm one 40-tooth gear short. I hope to get it this week.

The robot still needs a name. If you have a good suggestion you can post it in the remarks. There is a nice prize for the winner: a second-hand HiTechnic gyro sensor. Submit your entry before November 2011.

As you might know from a previous post, I want to put Sidbot on a ball. It must be able to balance on top of it without falling off. Once it knows how to do that, it must also learn to wander around. There are a number of things I must accomplish to reach this ultimate goal. Here I'll describe what has been done and what must be done. In later posts I might zoom in on some of the tasks.

The platform

Sidbot has been built. You can see photos of it in previous posts. I am able to change the angle of the wheels, which enables me to adapt Sidbot to different ball sizes. I also made a spreadsheet that calculates the exact ball size needed for a given setup.

The ball

I bought a fitness ball with a diameter of 65 cm. My guess is that the bigger the ball, the easier it is to balance on it. Bigger balls will also be heavier and more difficult to set in motion, having more inertia. The ball also has a flat, sticky surface that gives grip to the wheels. It is a bit bouncy though.

The sensor

To balance on a ball Sidbot must know which way is up and when it is tilting over. A lot of NXT based balancing bots measure the distance to the ground for tilt information. This technique cannot be used for Sidbot: it is a long way from the ground, and the ball is in the way when looking at the ground. Therefore I will use gyro sensors to get tilt information.
I need at least two gyros, one to detect tilt in the y direction (rear to front) and one to detect tilt in the x direction (left to right). I also want to measure the rate of turn around the z axis (down to up). This is not needed for balancing, but for keeping a fixed heading when moving. Currently I just have the HiTechnic gyro sensor, which measures rate of turn in just one direction. Instead of buying two more of these I will build my own sensor that takes care of all three axes. It occupies less space and fewer sensor ports, has a higher sampling rate, might be more accurate and also includes a 3-axis accelerometer. It is the IMU digital combo board from Sparkfun.
By chance this sensor fits exactly into the slide on the inside of the Lego sensors. I will sacrifice my (never used) sound sensor to house it. Thus far my attempts to use this sensor with the NXT have been unsuccessful. The problem is the different voltage level of the sensor (3.3V) and the odd pull-ups required by the NXT. I also lack experience with electronics. But I won't give up that easily.

The PID controller

To balance on the ball Sidbot needs to adjust its position constantly. To do so, sensory information from the gyros has to be translated into motor commands. For this I will use a PID controller. PID controllers are very easy to program (that goes for number two and later), but they are hard to tune. And every time the controller fails, Sidbot will hit the deck quite hard. So I want a quick way to find the optimal settings for the PID controller. My idea is to use a genetic algorithm to find the best settings. The algorithm works like this:

1. Generate 10 random sets of PID parameters.
2. Try each set of parameters and measure how long Sidbot stays on the ball.
3. Throw away the 5 least successful sets.
4. Duplicate the other 5 sets, changing each copy a little bit.
5. Start over at point 2.
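The five steps above can be sketched as follows. Since I can't put a robot on a ball here, the fitness function below is a stand-in with a made-up sweet spot; on Sidbot it would be the measured time the robot stays on the ball:

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.Random;

public class PidTuner {
    static final Random RND = new Random(42);

    /** Stand-in fitness: on the robot this is time-on-ball. The sweet spot
        (kp=2, ki=0.5, kd=1) is made up for this illustration. */
    static double fitness(double[] p) {
        return -(Math.pow(p[0] - 2, 2) + Math.pow(p[1] - 0.5, 2) + Math.pow(p[2] - 1, 2));
    }

    static double[][] evolve(double[][] population, int generations) {
        for (int g = 0; g < generations; g++) {
            // Sort best first, keep the top half, refill with mutated copies.
            Arrays.sort(population, Comparator.comparingDouble(PidTuner::fitness).reversed());
            for (int i = population.length / 2; i < population.length; i++) {
                double[] child = population[i - population.length / 2].clone();
                for (int j = 0; j < child.length; j++)
                    child[j] += RND.nextGaussian() * 0.1; // small mutation
                population[i] = child;
            }
        }
        return population;
    }

    public static void main(String[] args) {
        // Step 1: ten random {kp, ki, kd} candidates.
        double[][] population = new double[10][3];
        for (double[] p : population)
            for (int j = 0; j < 3; j++)
                p[j] = RND.nextDouble() * 5;
        evolve(population, 50);
        Arrays.sort(population, Comparator.comparingDouble(PidTuner::fitness).reversed());
        System.out.println(Arrays.toString(population[0])); // best candidate so far
    }
}
```

Because the best candidate is always kept (elitism), the best fitness can never get worse from one generation to the next.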

This algorithm should run until a successful set of parameters is found. During the Christmas holidays I developed a small function library that implements the steps mentioned above. I tested this library to find PID values for another controller that Sidbot uses; this PID is used to keep and adjust the heading of Sidbot. In the end this worked surprisingly well. But I also found out that it is not easy to quantify the success of a PID controller. Does it have to be fast, can it overshoot the desired position or not, etc.?
I expect some difficulties with this in the future. However, I now have a good set of PID parameters for maintaining and adjusting the heading of Sidbot.

That’s my progress thus far.