Archives for category: Agilis

Hi,

Yesterday I published building instructions for a single leg of my holonomic robot Agilis. Here are the full instructions. These include the central frame and three sensor docks. Have fun with it.

Download the Building instructions

[Image: Agilis2]

[Image: Agilis2 with brick]

The large EV3 motor has a different form factor than the NXT motor. I had to redesign the legs of Agilis to make use of these new motors.

[Image: EV3HolonomicLeg1]

[Image: EV3HolonomicLeg2]

The new design is smaller and much prettier to the eye. The smaller size and all the triangular shapes make the leg very stable. The new leg can be fitted on the same triangular frame that was used for Agilis.

Here are the building instructions for the new leg.

If you only have two large EV3 motors, you can easily modify the leg to use the EV3 medium motor. This motor can be attached directly to the drive axle of the wheel and to the underside of the frame. You will have no gearing, but as the medium motor rotates a bit faster, you will still have plenty of speed in your robot.

I got some feedback on my last post with “building instructions” for Agilis. My friend Kirk wondered why I didn’t make proper instructions. He was not impressed by my lame excuses about odd angles or illegal builds, and showed me that it could be done. He was right; it was about time for me to master some Lego CAD tools. I chose LDraw and MLCad for the job. My fear of a steep learning curve proved unfounded: the manual is quite good and I was up and running within an hour. The hardest thing to learn was the discipline to work precisely right from the start. One cannot build a rough sketch first and then improve the model. Well, technically you can, but you shouldn’t, as every brick you move triggers a chain of new moves that need to be made.

[Image: Agilis, front view]
It was fun to create proper building instructions for Agilis. Along the way I also improved the gearbox, as was suggested by another reader of my blog, Thimoty. You can freely download the instructions. I do appreciate constructive feedback; this is my first manual and I am sure there is room for improvement.

Download the building instructions for Agilis

See this post for improved instructions!

By request I created building instructions for Agilis. Well, sort of. It is a series of photos made while taking one of Agilis’s legs apart, presented to you in reverse order so that it looks like a building instruction.

One warning before you start: the gearbox can be assembled very easily, but it cannot be disassembled without some tools and, potentially, some damage.

The gearbox displayed is for a 1:3.33 gear ratio. Here you can find a picture of a 1:2 gearbox. I think the 1:2 gearbox is the better option.

The parts list in pictures 1 to 5 is for one leg only. Also, I did not have the parts to make a nice color scheme, so if you follow mine you might end up with a rainbow warrior in black and white.

Please send me a message or a picture when you have built Agilis.

This time I introduce a new project: localization with a light sensor and light beacons. Localization, or navigation, is the art of determining position and direction. The aim of the project is to determine the position and direction of a mobile robot with the aid of a few beacons, much like a sailor determines the position of his ship with the aid of a few lighthouses or known landmarks.

The beacons I use are dLight LEDs. Each beacon blinks at a different rate, so individual beacons can be recognized by their blink rate. Lighthouses, too, have a light characteristic to identify them by. The only difference between a lighthouse and the beacons I use is in the blink rate: a lighthouse typically blinks once every couple of seconds, whereas my beacons blink at a rate of 8 to 24 times a second.

The robot I use is the omniwheeled robot Agilis that you might know from older posts. It can rotate on the spot and is equipped with a standard Lego light sensor. I placed a tube in front of the light sensor to give it a narrow beam of detection, which makes it possible to localize the beacons with some accuracy. I also experimented with a lens to make it easier to detect faint light sources and to narrow down the angle of detection even further.

The localization process has three steps: localization of the beacons, identification of the beacons, and calculation of the robot’s position. The first two steps are done while the robot spins a full circle. During the spin the robot tries to find as many beacons as possible; once a beacon is found, its location is determined and its identity established. After finishing the spin, three beacons are selected and the robot’s location is calculated from the relative angles between these beacons. I will discuss each of these steps in more detail over the next couple of posts, starting with beacon localization in this post.

Beacon localization

Beacons are located by Agilis while spinning on the spot. While spinning, the robot keeps track of its direction by means of odometry based on the motor encoders; how this is done is explained in one of my older posts about Agilis. It uses a single standard light sensor to detect the beacons. Once a beacon is found, its angle is determined and stored for later use. There are two problems in locating a beacon. The first is to recognize a beacon reliably under various circumstances. Sometimes the beacons are the brightest objects on the horizon, at other times they are not. Therefore my attempts to find beacons by light intensity proved unreliable, especially when a beacon was further away from the robot or when the sun was shining into my room. Only when I realized that the beacons were the only blinking objects in my room did I manage to locate them reliably. I wrote an algorithm that calculates the amount of variation in the light level and used this as an indicator of a beacon. The graph below shows the results of a 180-degree scan. The dark blue line shows the light level measured by the light sensor during the scan. The peak on the left side of the scan is caused by a beacon, the peak on the right by a light bulb. The light blue line below shows the amount of variation in the light level. As you can see, it peaks when scanning the beacon, but it hardly rises when scanning the light bulb. The algorithm proved very successful in other (more complex) conditions too. For those interested: the algorithm takes a running average (using a low-pass filter) of the absolute difference between two consecutive measurements.
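
For those who want to experiment with this, here is a minimal sketch of that filter. The class and parameter names are mine, not the actual Agilis code.

// A sketch of the beacon detector described above: a running average
// (low-pass filter) of the absolute difference between two consecutive
// light readings. A blinking beacon keeps this variation level high.
public class BlinkDetector {
    private final double alpha;    // filter constant, 0 < alpha < 1
    private double lastSample;     // previous raw light reading
    private double variation;      // filtered variation level

    public BlinkDetector(double alpha) {
        this.alpha = alpha;
    }

    // Feed one raw light reading; returns the updated variation level.
    public double update(double sample) {
        double diff = Math.abs(sample - lastSample);
        lastSample = sample;
        variation = alpha * diff + (1 - alpha) * variation;
        return variation;
    }

    // High variation indicates a blinking light source, i.e. a beacon.
    public boolean beaconDetected(double threshold) {
        return variation > threshold;
    }
}

The filter constant alpha trades responsiveness against noise suppression; it would need tuning to the sensor’s sample rate and the 8 to 24 Hz blink rates.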

[Image: beacon localization scan]

The second problem is to pinpoint the position of a beacon. This is both a physical problem, related to the sensor, and a logical problem, related to the algorithm used. The sensor has a rather wide beam of detection but is most sensitive to light sources right in front of it, as the graph above shows. I narrowed the beam of detection by constructing a tube in front of the sensor with the aid of Technic connectors. I also tried a lens. Below is a graph made from a 360-degree scan using a lens.

[Image: 360-degree scan using a lens]

I am not quite sure how I will narrow down the beam of detection in the final robot, as there are drawbacks to each solution.
The second issue with pinpointing the location of a beacon is in the logic. You cannot simply sort the samples and then take the three biggest values; in that case you would end up with three points from the biggest peak. Instead, you need to find the biggest value of each individual peak, as in the sketch below.
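
A minimal sketch of that logic: walk through the scan once and keep only the strongest sample of each contiguous run above a threshold, so that every peak yields exactly one angle. The names are illustrative, not the actual Agilis code.

import java.util.ArrayList;
import java.util.List;

// Keeps the index of the strongest sample of each contiguous run of
// samples above the threshold, one result per peak.
public class PeakFinder {
    public static List<Integer> findPeaks(double[] samples, double threshold) {
        List<Integer> peaks = new ArrayList<Integer>();
        int best = -1; // index of the strongest sample in the current peak
        for (int i = 0; i < samples.length; i++) {
            if (samples[i] > threshold) {
                if (best == -1 || samples[i] > samples[best]) best = i;
            } else if (best != -1) {
                peaks.add(best); // the peak ended; keep its maximum
                best = -1;
            }
        }
        if (best != -1) peaks.add(best); // a peak may end at the last sample
        return peaks;
    }
}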

In case you might wonder how fast or accurate Agilis is, here are some numbers.

Setup

  • The gear ratio of the motors to the wheels is 1:2, making the wheels rotate twice as fast as the motors. (It is possible to change the gear ratio to 1:3).
  • Prototype orange Rotacaster wheels. This is the hard compound; there are also medium compound (gray) and soft compound (black) versions available.
  • The batteries were rechargeable NiMH, 2500 mAh batteries. These were not fully charged.
  • The test surface was a clean linoleum floor.

Speed

  • Reliable top speed is about 60 cm/sec, equivalent to 2.16 km/h or 1.35 mph. At this speed the robot is still accurate as there is ample margin for the PID controllers of the motors.
  • Unreliable top speed is about 75 cm/sec, equivalent to 2.7 km/h or 1.68 mph. At this speed the robot is not very accurate, especially in its heading.

Accuracy

  • The test track is a square with sides of one meter. During each run the test track is traveled four times, making the total distance 16 meters.
  • The robot finishes the test track on average within 10 cm of its starting position. Expressed as a percentage of the total distance the error is about 0.6%.
  • The movement error is systematic. The robot always ends up above and to the right of the starting position.
  • The robot is more accurate at slower speed and acceleration settings.

The image shows the results of the accuracy testing. For each test the robot was placed exactly on the origin (bottom left in the picture). It then traveled the one-meter square four times, making the total distance traveled 16 meters. The finish location of the robot was then marked on the floor. This test was repeated three times for each set of speed and acceleration settings. Three different dynamic sets were used: speed 50 cm/sec with acceleration 100 cm/sec^2, speed 50 cm/sec with acceleration 750 cm/sec^2, and speed 30 cm/sec with acceleration 60 cm/sec^2.

[Photo: accuracy test results]

I want to repeat the tests with a 1:3 gear ratio and also with the black Rotacaster wheels.

If you have ever made an autonomous robot, differential or holonomic, you will have experienced that they most often don’t move very elegantly. Agilis, my current robot, didn’t move smoothly either. This is because its movements are composed of different actions. Each action (driving, turning and all the other fancy movements it can make) is very smooth by itself, as you could see in the previous video. But when going from one action to the next, the robot experiences violent changes in wheel speed. This leads to wheel spin, sliding and large errors in positioning. The robot tries to go from one speed (one action) to a different speed (second action) in an instant.

One way to solve this problem is to make the robot accelerate and decelerate at the beginning and end of each action. This gets rid of the positioning errors and the violent accelerations, but it also means that there is a stop between each pair of actions, and that is not what I want for a fast-moving robot like Agilis.

The way I made Agilis move smoothly is by action blending (as I call it). By this I mean that I let an action fade out while the next one fades in, just like a DJ mixes tracks or a painter mixes colors. This works very well, as you can see in this video. We see Agilis cutting corners when instructed to drive a square. The cutting of a corner is the result of blending two actions, one saying “go forward”, the other saying “go left”. The blending depends on the speed of the robot, as a faster-moving robot needs a bigger corner to turn, but also on its ability to brake and accelerate: the quicker it can do this, the smaller the turns it can make. I fiddled with this in the video.
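
To give an idea of the mechanics, here is a minimal sketch of such a blend, assuming each action is represented by a speed vector {x, y, h}. The linear fade and all names are my illustration, not the actual Agilis code.

// Blends the speed vectors {x, y, h} of two overlapping actions: the
// outgoing action fades out while the incoming action fades in.
public class ActionBlender {
    public static double[] blend(double[] fadingOut, double[] fadingIn,
                                 double t, double duration) {
        // weight of the incoming action grows linearly from 0 to 1
        double w = Math.min(1.0, Math.max(0.0, t / duration));
        double[] result = new double[3];
        for (int i = 0; i < 3; i++) {
            result[i] = (1 - w) * fadingOut[i] + w * fadingIn[i];
        }
        return result;
    }
}

The length of the blend window is where the robot’s speed and its braking and acceleration capability come in: the faster the robot, or the weaker its acceleration, the longer the window and the wider the corner.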

There were a lot of technical and logical considerations when implementing action blending in code. I think most of you can’t be bothered with these, so I won’t discuss them unless there proves to be a demand for it. To facilitate further development, and to overcome errors if needed, I made action blending optional in my code.

Two things still bother me. One is that with short actions, or with low values for acceleration, more than two actions can overlap in time, and I can only handle two. The other is that I haven’t got the angle between two actions in the equation, so the results are angle-specific, giving good results with square angles only.

I have been using Premiere Elements 11 for the last two videos and obviously still need to learn a lot. Sorry for the blurry green texts; I don’t know how to fix that. They were perfectly sharp when I wrote them on the floor.

Last post I introduced Agilis, and I discussed the calculations needed for driving the motors and for doing odometry. This time I’ll discuss how it moves. But first, let us take a look at a video.

In the video we see that Agilis can make some amazing movements. It circles around while facing in one direction, for example. Its movements are constrained in neither heading nor direction; it can make any movement. This makes Agilis a holonomic robot. Any movement a holonomic robot can possibly make can be described as a combination of two functions. One function describes the robot’s location (X and Y coordinates) over time. The second describes its heading (H) over time. The two functions together give the robot’s pose at any given moment.

For the above example the function for location is
x = cos(t) * radius;
y = sin(t) * radius;

The function for heading is
h = aConstantValue;

As I discussed in my previous post, Agilis accepts a speed vector {x, y, h} as input for its drive system. The speed vector is the derivative of pose, so we need to use the derivatives of the above functions if we want to use them to make Agilis move. This might sound complicated, but all it means is that we need functions that give a speed at a given moment in time. For our example these functions are

x = -sin(t) * radius;
y = cos(t) * radius;

The function for h is
h = 0;

There are two important remarks about the above.
First, it seems arbitrary to bundle x and y together in one function and to calculate h in a separate function. This is for a good reason: it is how we think. We think of a robot driving in circles; that is x and y together. We think of a robot facing north; that is h by itself. Also, when we have two independent functions in the program, we can make very interesting combinations of location and heading functions.
Second, you might wonder why I discussed the two forms of the formulas. Well, in most cases it is best to use the speed (or derivative) form of a function. In some cases, though, there is no (obvious) derivative of a function. This is especially the case when a function relates to the outside world. An example would be a function that makes the robot “look” at a fixed point in space, no matter where it is going. That leaves us with a problem: if there is no derivative of a function, how do you get a speed from it? The way to go is to calculate the difference between the desired position or heading and the actual position or heading, and then to translate this difference into a speed. I use a PID controller to make this translation.
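
As an illustration, here is a minimal sketch of such a translation for heading, using a simple PID controller. The gains and names are mine, not the actual Agilis code.

// Translates the difference between desired and actual heading into a
// rotational speed by means of a PID controller.
public class HeadingController {
    private final double kP, kI, kD;
    private double integral, lastError;

    public HeadingController(double kP, double kI, double kD) {
        this.kP = kP;
        this.kI = kI;
        this.kD = kD;
    }

    // dt is the time since the previous call, in seconds.
    public double speedFor(double desiredHeading, double actualHeading, double dt) {
        double error = desiredHeading - actualHeading;
        integral += error * dt;
        double derivative = (error - lastError) / dt;
        lastError = error;
        return kP * error + kI * integral + kD * derivative;
    }
}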

This brings me to the way this is implemented for Agilis. In the code I created a pilot. This pilot is responsible for performing movements; it must be instructed which movements to make so it can execute them. While executing a movement, the pilot periodically queries the two movements for the desired speed or location. If a movement returns a location, the pilot translates it into a speed using the PID controller. It then transforms this speed vector from the global coordinate system into the local coordinate system of the robot, and passes the speed vector on to the chassis. The chassis is a logical part of my program that should be envisioned just below the pilot. It accepts a speed vector, translates it to motor speeds and then drives the motors.

The pilot handles two functions in parallel, one for location and one for heading. Each of the functions can return either a speed or a location, in any combination. A function can have an end to it, like “go straight for one meter”, but this is not required, as in “keep facing north”. The sketch below shows roughly how this fits together.
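
To make the flow concrete, here is a rough sketch of one cycle of such a pilot loop. Every type and name is hypothetical (my reading of the description above); for brevity the location and heading functions are collapsed into one three-component function and the PID is reduced to a proportional term.

public class PilotSketch {

    // One of the two parallel functions. It either returns a speed
    // directly, or a desired position that must be turned into a speed.
    interface Movement {
        boolean returnsSpeed();
        double[] valueAt(double t); // speed {x, y, h} or position {x, y, h}
    }

    interface Chassis {
        void setSpeed(double[] robotFrameSpeed); // translates to motor speeds
    }

    interface Odometer {
        double[] pose(); // {x, y, heading} in the world frame
    }

    private final double kP = 2.0; // the real pilot uses a full PID controller

    // One cycle of the pilot loop, called periodically with the time t.
    void step(Movement movement, Chassis chassis, Odometer odometer, double t) {
        double[] worldSpeed;
        if (movement.returnsSpeed()) {
            worldSpeed = movement.valueAt(t); // use the speed directly
        } else {
            // position form: translate the difference between the desired
            // and the actual pose into a speed
            double[] desired = movement.valueAt(t);
            double[] actual = odometer.pose();
            worldSpeed = new double[]{kP * (desired[0] - actual[0]),
                                      kP * (desired[1] - actual[1]),
                                      kP * (desired[2] - actual[2])};
        }
        chassis.setSpeed(toRobotFrame(worldSpeed, odometer.pose()[2]));
    }

    // Rotates a world-frame vector over minus the robot's heading; the
    // rotational component is the same in both frames.
    static double[] toRobotFrame(double[] v, double heading) {
        double c = Math.cos(heading), s = Math.sin(heading);
        return new double[]{c * v[0] + s * v[1], -s * v[0] + c * v[1], v[2]};
    }
}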

I have implemented or planned these functions for location.

  • stand still
  • drive a straight line
  • drive an arc

I have implemented or planned these functions for heading.

  • rotate
  • hold still
  • look to a fixed direction
  • look around (from left to right and back and so on)
  • look where you’re going

This is basically all there is to it. This program allows Agilis to perform all the movements you see in the video. But as you may notice, the transition from one movement to another isn’t smooth yet. I plan to make it smooth with a technique I call maneuver blending. What it should do is blend two maneuvers so that the transition between them appears natural. As said, I haven’t programmed this yet. If I succeed I will dedicate a post to it. If not, it is most unlikely that I’ll write about it 🙂

This Christmas holiday I started working on a new robot, called Agilis. This robot should be a very agile and elegantly moving robot. The frame is based on a triangle equipped with holonomic wheels. So you might think, “What’s new? It is just like your last robot.” From the outside this is true, but it gets new and better brains on the inside. Let me tell you what I envision.

Most robots I built went from point A to point B, only then to decide what to do next. Others just drove around avoiding obstacles. This one should be able to do both at the same time. Agilis must be able to perform complex maneuvers, like driving in a straight line while rotating around its center, or driving an arc while staying pointed at an arbitrary spot. It should constantly use sensory input to stay on course, or to alter its course if needed. And all this must go fluently, just like a cat walking through the room.

Over the next several posts I will discuss the different aspects of Agilis. This first post deals with the drive system.

The chassis

[Image: gear train]

Agilis is a three-wheeled holonomic platform. This means it can drive in any direction without turning. It can also turn around any point, even around its own center. Each wheel is driven by an NXT motor via a gear train with a 1:2 ratio; the wheels rotate twice as fast as the motors. This makes Agilis a relatively fast robot. The gear train has to be very sturdy to handle the torque of the motors. It also has to be precise to allow for odometry. I used the same setup that I developed for my last robot, Koios the Guard bot.

From robot speed to motor speed

It is not very easy to control a holonomic robot; it takes some math. I created a kinematic model that does just that. The model takes robot speed as input and gives motor speed as output. Robot speed is expressed as speed in the x-direction, speed in the y-direction and rotational speed. Motor speed is expressed in encoder ticks per second.

So what does this kinematic model look like? For a single wheel it is a function that takes the three robot speeds as input. For the three wheels together it is a matrix multiplication that multiplies a robot speed vector {xSpeed, ySpeed, angularSpeed} with a kinematic matrix. The resulting vector contains the speed of each of the three wheels. Let’s take a look at the single wheel function first.

To translate robot speed into motor speed one needs to know some physical aspects of the robot, the wheel and the motor. How big is the wheel, how far is it from the center of the robot, under what angle is it mounted, what is the gear ratio of the gear train and what is the number of encoder ticks per full cycle of the motor? With all this information one can write a formula to calculate motor speed from robot speed. Here is the formula.

motorSpeed =
  xSpeed * cos(wheelAngle) * nEncoderTicks / (gearRatio * 2 * PI * wheelRadius)
  - ySpeed * sin(wheelAngle) * nEncoderTicks / (gearRatio * 2 * PI * wheelRadius)
  + angularSpeed * distanceToCenter * nEncoderTicks / (gearRatio * 2 * PI * wheelRadius)

This formula might look daunting at first, but on second glance you might notice that there are a lot of constants in it. If you substitute the constants with their respective values you will end up with a much simpler formula.

motorSpeed = xSpeed * aConstantValue - ySpeed * anotherConstantValue + angularSpeed * yetAnotherConstantValue

This formula is not only much simpler, it is also very efficient to calculate: just three multiplications and two additions. An NXT can do this in no time. But remember, these constants are not the same for all the motors, because each of the wheels has a different wheelAngle. You could also have wheels of different sizes, or differences in any of the other aspects. This means that you will have a formula for each of the motors; each formula has the same structure but its own constants. These constants can be stored in a matrix, where each row contains the three constants belonging to a single wheel, one row per wheel. If you then multiply the robot speed vector with this matrix, all formulas are calculated at once and the result, the motor speeds, is stored in a new vector. Each row in this vector holds the speed of a single motor. In Java this matrix multiplication would look like this:

Matrix motorSpeed = kinematicModel.times(robotSpeed);

Wow, now things look simple at last! This is the beauty of matrix algebra.
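
To make this concrete, here is a sketch of how such a kinematic matrix could be built and applied, written with plain Java arrays (the real code stores the constants in the Lejos Matrix class). All names are mine, and wheel angles are assumed to be in radians.

public class KinematicModelSketch {

    // One row per wheel, holding the three constants of the formula above,
    // so that motorSpeeds = model * {xSpeed, ySpeed, angularSpeed}.
    public static double[][] build(double[] wheelAngles, double wheelRadius,
                                   double distanceToCenter, double gearRatio,
                                   double nEncoderTicks) {
        double k = nEncoderTicks / (gearRatio * 2 * Math.PI * wheelRadius);
        double[][] model = new double[wheelAngles.length][3];
        for (int i = 0; i < wheelAngles.length; i++) {
            model[i][0] = Math.cos(wheelAngles[i]) * k;  // xSpeed constant
            model[i][1] = -Math.sin(wheelAngles[i]) * k; // ySpeed constant
            model[i][2] = distanceToCenter * k;          // angularSpeed constant
        }
        return model;
    }

    // The matrix multiplication written out: three multiplications and
    // two additions per motor, just as described above.
    public static double[] motorSpeeds(double[][] model, double[] robotSpeed) {
        double[] speeds = new double[model.length];
        for (int i = 0; i < model.length; i++) {
            speeds[i] = model[i][0] * robotSpeed[0]
                      + model[i][1] * robotSpeed[1]
                      + model[i][2] * robotSpeed[2];
        }
        return speeds;
    }
}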

The same kinematic model can be used to transform robot acceleration into motor acceleration. I use this to make my robot accelerate very smoothly. (the regulated motor class of Lejos supports regulated acceleration).

From tacho counter to robot position

For driving the robot this kinematic model works perfectly. But I also want to be able to do things the other way around: I want to calculate the robot’s position from the encoder values. At first I couldn’t figure this out at all; the math was just too complex for my mind. That is, until I realized that I just needed to use the inverse of the kinematic model.

deltaRobotPose = inverseKinematicModel * deltaMotorPosition

Here deltaMotorPosition is a vector containing the change in encoder value of each of the motors since the previous measurement. The inverseKinematicModel is the kinematic model inverted, and deltaRobotPose is the change in pose (x and y position and heading) of the robot. Looks simple, doesn’t it? The problem is how to calculate the inverse of the kinematic model. I can’t tell you how, because I don’t know. But hey, somebody else has already programmed this in Java, so I just used the inverse method of the Matrix class.
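
Here is a sketch of that odometry step, written against the Jama-style Matrix API (a constructor taking double[][], times() and inverse()) that the Lejos Matrix class follows. The class and variable names are mine, and the exact package name depends on the leJOS version.

import lejos.utility.Matrix; // Jama-style Matrix; package name varies per leJOS version

public class OdometrySketch {
    private final Matrix inverseKinematicModel;

    public OdometrySketch(Matrix kinematicModel) {
        // the 3x3 kinematic model (three wheels) is inverted once, up front
        this.inverseKinematicModel = kinematicModel.inverse();
    }

    // Change in pose {dx, dy, dHeading} in the robot frame, computed from
    // the change in encoder value of each motor since the last measurement.
    public Matrix deltaPose(double dTicksA, double dTicksB, double dTicksC) {
        Matrix deltaMotorPosition =
                new Matrix(new double[][]{{dTicksA}, {dTicksB}, {dTicksC}});
        return inverseKinematicModel.times(deltaMotorPosition);
    }
}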

From the robot’s coordinates to the world’s coordinates

There is just one more thing to it. The robot can have any heading, which means that the x and y coordinates of the robot are not aligned with the x and y coordinates of the world. To be able to steer the robot to a certain position in a room, one must be able to convert this location into a location as the robot sees it. The same goes for keeping track of pose. We have seen the formula to calculate the change in pose from the wheel encoders. This change, however, is a change as the robot sees it, not a change of the robot’s position in the world. The translation from world coordinates to robot coordinates can be done with a simple matrix multiplication, using a rotation matrix. The rotation matrix itself can be calculated from the heading of the robot.

Suppose you want to drive your robot to the left side of the room. The speed vector in the world frame would look like {0, speed, 0}. This can be multiplied with the rotation matrix to get a speed vector as the robot sees it.

robotSpeed = rotationMatrix * worldSpeed

If we want to go the other way around, to get the change in pose in the world frame, we multiply the change in the robot frame with (you guessed it) the inverse of the rotation matrix. For rotation matrices the inverse is the same as the transpose, and the transpose is far more efficient to calculate.
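
As a sketch, assuming the {x, y, h} vectors used earlier, the two conversions could look like this; note that for a rotation matrix the transpose really is the inverse. The names are mine, not the actual Agilis code.

public class FrameConversionSketch {

    // Maps a world-frame vector {x, y, h} to the robot's frame, for a
    // robot with the given heading in radians. The rotational component
    // is the same in both frames.
    public static double[][] worldToRobot(double heading) {
        double c = Math.cos(heading), s = Math.sin(heading);
        return new double[][]{
                { c, s, 0},
                {-s, c, 0},
                { 0, 0, 1}
        };
    }

    // The inverse of a rotation matrix is its transpose: swapping rows
    // and columns is far cheaper than a general matrix inversion.
    public static double[][] transpose(double[][] m) {
        double[][] t = new double[m.length][m.length];
        for (int row = 0; row < m.length; row++)
            for (int col = 0; col < m.length; col++)
                t[col][row] = m[row][col];
        return t;
    }
}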

Wrap up

This really is all there is to driving a robot. To sum it up: you have a kinematic model to translate robot speed into motor speed, and you have a rotation matrix to translate things from world coordinates to robot coordinates.
The same goes for odometry. You have the inverse of the kinematic model to translate changes in encoder values into a change in robot pose expressed in robot coordinates, and you have the inverse of the rotation matrix to translate a change in robot pose from robot coordinates into world coordinates.
The kinematic model is a constant; it has to be calculated only once (unless your robot changes shape). The rotation matrix, on the other hand, has to be updated every time the heading of the robot changes.

The implementation

The robot uses Lejos as its brains. Lejos has some excellent classes to drive the NXT motors. The regulated motor class that I used is able to rotate a motor at any given speed; this speed is maintained no matter what the conditions are. It also supports setting an acceleration rate. This is very good for my robot, as for most movements the wheel speeds of the three motors differ. If all wheels accelerated equally, the slower moving motors would reach their target speed sooner than the faster spinning motors. This results in a robot that shivers and shakes during acceleration (or braking). All this can be avoided by setting an acceleration value for each of the motors. The ratio between the acceleration values must be the same as the ratio between the differences between current speed and target speed of each of the motors. If done properly, the robot accelerates very smoothly without jerky movements.
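
Here is a sketch of that idea against the Lejos regulated motor API; the proportional scaling is my reading of the description, not the actual code, and getRotationSpeed() is assumed to return the motor’s current speed in degrees per second.

import lejos.nxt.NXTRegulatedMotor;

public class SmoothStart {

    // Gives each motor an acceleration proportional to the speed change it
    // has to make, so that all motors reach their target speed at the same
    // moment. Speeds are in degrees per second.
    public static void setSpeeds(NXTRegulatedMotor[] motors, int[] targetSpeeds,
                                 int maxAcceleration) {
        int[] delta = new int[motors.length];
        int maxDelta = 1; // avoids division by zero
        for (int i = 0; i < motors.length; i++) {
            delta[i] = Math.abs(targetSpeeds[i] - motors[i].getRotationSpeed());
            maxDelta = Math.max(maxDelta, delta[i]);
        }
        for (int i = 0; i < motors.length; i++) {
            // scale the shared maximum acceleration by this motor's share
            motors[i].setAcceleration(Math.max(1, maxAcceleration * delta[i] / maxDelta));
            motors[i].setSpeed(Math.abs(targetSpeeds[i]));
            if (targetSpeeds[i] >= 0) motors[i].forward();
            else motors[i].backward();
        }
    }
}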

Lejos also has a Matrix class that helps to perform matrix algebra. I used this class to store the (inverse) kinematic model and the rotation matrix. I subclassed it to make a Pose class that can be used together with the Matrix class.

To create the kinematic model I developed a Builder class. This class has all kinds of methods to describe the robot: the wheels, the motors and the gear train. When you are done describing the robot, the builder delivers you a kinematic model and an inverse kinematic model.

To drive the robot I made a kind of pilot class. I plan to discuss it in a future post. This pilot accepts the kinematic model in its constructor.

For odometry I made another class, the Odometer. This class uses the inverse kinematic model.