Archives for category: LEJOS

One of the most fundamental problems in mobile robotics is knowing the position of the robot. The question might seem simple, but getting an answer is very difficult. There are two fundamentally different approaches to answering it. The first approach uses dynamics to keep track of the robot's position with respect to its starting position. Most often this technique uses wheel encoders to keep track of wheel movement, which is then translated into a position of the robot. This is known as odometry. The second approach uses external references whose locations are known to calculate the position of the robot. Navigation by the stars is the oldest example of this technique; GPS is a recent one. As much as we have come to rely on GPS nowadays, it is not very useful for our small indoor robots. Indoors the GPS signal is weak, and the error of a GPS position is in most cases bigger than the range of our robots.

I created a robot that uses the same principles as GPS to localize itself. Instead of GPS satellites I use blinking LEDs (dLights from Dexter Industries) as beacons. The beacons are detected and identified using a standard NXT light sensor. Below you can see the robot in action. The robot is placed at a random location in my room, facing a random direction. It then has to find out where it is and drive home.

The robot locates beacons by evaluating the variance in light level while scanning the horizon. Whenever a spot with a large variation in light level is found, the robot stops to find out if this variation comes in a frequency that identifies one of the beacons. If so, the current heading of the robot is stored along with the identity of the beacon. If after a full scan three or more beacons have been located, the robot has enough information to estimate its position. It does so by triangulation using a Snellius construction. The Lejos software for the NXT has a class, lejos.robotics.localization.BeaconTriangle, that implements all the difficult calculations. If more than three beacons are located, the robot estimates its position by averaging the positions estimated from all unique combinations of three beacons. The result is an improved estimation.

This Christmas holiday I started working on a new robot, called Agilis. This robot should be a very agile and elegantly moving robot. The frame is based on a triangle equipped with holonomic wheels. So you might think: “What’s new? It is just like your last robot.” From the outside this is true, but it gets new and better brains on the inside. Let me tell you what I envision.

Most robots I built went from point A to point B, only then to decide what to do next. Others just drove around avoiding obstacles. This one should be able to do both at the same time. Agilis must be able to perform complex manoeuvres, like driving in a straight line while rotating around its centre, or driving an arc while staying pointed at an arbitrary spot. It should constantly use sensory input to stay on course, or to alter its course if needed. And all this must go fluently, just like a cat walking through the room.

Over the next several posts I will discuss the different aspects of Agilis. This first post deals with the drive system.

The chassis

Agilis is a three-wheeled holonomic platform. This means it can drive in any direction without turning. It can turn around any point, even around its own center. Each wheel is driven by an NXT motor via a gear train with a 1:2 ratio; the wheels rotate twice as fast as the motors. This makes Agilis a relatively fast robot. The gear train has to be very sturdy to handle the torque of the motors. It also has to be precise to allow for odometry. I used the same setup that I developed for my last robot, Koios the Guard bot.

From robot speed to motor speed

It is not very easy to control a holonomic robot; it takes some math. I created a kinematic model that does just that. The model takes robot speed as input and gives motor speed as output. Robot speed is expressed as speed in the x-direction, speed in the y-direction and rotational speed. Motor speed is expressed in encoder ticks per second.

So what does this kinematic model look like? For a single wheel it is a function that takes the three robot speeds as input. For the three wheels together it is a matrix multiplication that multiplies a robot speed vector {xSpeed, ySpeed, angularSpeed} with a kinematic matrix. The resulting vector contains the speed of each of the three wheels. Let’s take a look at the single wheel function first.

To translate robot speed into motor speed one needs to know some physical aspects of the robot, the wheel and the motor. How big is the wheel, how far is it from the center of the robot, under what angle is it mounted, what is the gear ratio of the gear train and what is the number of encoder ticks per full cycle of the motor? With all this information one can write a formula to calculate motor speed from robot speed. Here is the formula.

motorSpeed =
  xSpeed       * cos(wheelAngle)    * nEncoderTicks / (gearRatio * 2 * PI * wheelRadius)
- ySpeed       * sin(wheelAngle)    * nEncoderTicks / (gearRatio * 2 * PI * wheelRadius)
+ angularSpeed * distanceToCenter   * nEncoderTicks / (gearRatio * 2 * PI * wheelRadius)

This formula might look daunting at first, but on second glance you might notice that there are a lot of constants in it. If you substitute the constants with their respective values you will end up with a much simpler formula.

motorSpeed = xSpeed * aConstantValue - ySpeed * anotherConstantValue + angularSpeed * yetAnotherConstantValue

This formula is not only much simpler, it is also very efficient to calculate: just three multiplications and two additions. An NXT can do this in no time. But remember, these constants are not the same for all the motors, because each of the wheels has a different wheelAngle. You could also have wheels of different sizes, or differences in any of the other aspects. This means that you have a formula for each of the motors; each formula has the same structure but its own constants. These constants can be stored in a matrix, where each row contains the three constants belonging to a single wheel, so the matrix has one row per wheel. If you then multiply this matrix with the speed vector, all formulas are calculated at once and the result, the motor speeds, is stored in a new vector. Each row in this vector holds the speed of a single motor. In Java this matrix multiplication would look like this:

Matrix motorSpeed = kinematicModel.times(robotSpeed);

Wow, now things look simple at last! This is the beauty of matrix algebra.
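To make this more concrete, here is a minimal sketch of how such a kinematic matrix could be built and used with the Matrix class of Lejos (lejos.util.Matrix). The wheel angles, dimensions and speeds below are illustrative values, not the actual numbers of Agilis.

import lejos.util.Matrix;

double nEncoderTicks = 360;       // encoder ticks per full motor revolution
double gearRatio = 2;             // wheel turns per motor turn (1:2 gear train)
double wheelRadius = 24;          // mm, illustrative
double distanceToCenter = 90;     // mm, illustrative
double c = nEncoderTicks / (gearRatio * 2 * Math.PI * wheelRadius);

// one row of constants per wheel; wheels mounted at 0, 120 and 240 degrees (illustrative)
double[] wheelAngle = {0, 2 * Math.PI / 3, 4 * Math.PI / 3};
double[][] k = new double[3][3];
for (int wheel = 0; wheel < 3; wheel++) {
  k[wheel][0] = Math.cos(wheelAngle[wheel]) * c;   // constant for xSpeed
  k[wheel][1] = -Math.sin(wheelAngle[wheel]) * c;  // constant for ySpeed
  k[wheel][2] = distanceToCenter * c;              // constant for angularSpeed
}
Matrix kinematicModel = new Matrix(k);

// robot speed as a column vector {xSpeed, ySpeed, angularSpeed}
Matrix robotSpeed = new Matrix(new double[][] {{100}, {0}, {0.5}});
Matrix motorSpeed = kinematicModel.times(robotSpeed);  // encoder ticks per second, one row per motor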

The same kinematic model can be used to transform robot acceleration into motor acceleration. I use this to make my robot accelerate very smoothly. (The regulated motor class of Lejos supports regulated acceleration.)

From tacho counter to robot position

To drive the robot this kinematic model works perfectly. But I also want to be able to do things the other way around: I want to calculate the robot's position from the encoder values. At first I couldn’t figure this out at all. The math was just too complex for my mind. That is, until I realized that I just needed to use the inverse of the kinematic model.

deltaRobotPose = deltaMotorPosition * inverseKinematicModel

Here deltaMotorPosition is a vector containing the change in encoder value of each of the motors since the previous measurement. The inverseKinematicModel is the kinematic model inverted. And deltaRobotPose is the change in pose (x and y position and heading) of the robot. Looks simple, doesn’t it? The problem is how to calculate the inverse matrix of the kinematic model. I can’t tell you, because I don’t know. But hey, somebody else already programmed this in Java. I just used the inverse method of the Matrix class.
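In code, and following the column-vector convention of the kinematicModel.times() call shown earlier, this could look roughly like the sketch below. The lastTacho variables are placeholders for the encoder values stored at the previous cycle.

Matrix inverseKinematicModel = kinematicModel.inverse();  // compute once

// every cycle: turn the change in encoder values into a change in pose (robot frame)
Matrix deltaMotorPosition = new Matrix(new double[][] {
  {Motor.A.getTachoCount() - lastTachoA},
  {Motor.B.getTachoCount() - lastTachoB},
  {Motor.C.getTachoCount() - lastTachoC}});
Matrix deltaRobotPose = inverseKinematicModel.times(deltaMotorPosition);
// deltaRobotPose now holds {deltaX, deltaY, deltaHeading} as seen by the robot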

From the robot's coordinates to the world's coordinates

There is just one more thing to it. The robot can have any heading, which means that the x and y coordinates of the robot are not aligned with the x and y coordinates of the world. To be able to steer the robot to a certain position in a room, one must be able to convert this location to a location as the robot sees it. The same goes for keeping track of pose. We have seen the formula to calculate the change in pose from the wheel encoders. This change however is a change as the robot sees it, not a change of the robot's position in the world. The translation from world coordinates to robot coordinates can also be done with a simple matrix multiplication using a rotation matrix. The rotation matrix itself can be calculated from the heading of the robot.

Suppose you want to drive your robot to the left side of the room. The speed vector in the world frame would look like {0, speed, 0}. This can be multiplied with the rotation matrix to get a speed vector as the robot sees it.

robotSpeed = worldSpeed * rotationMatrix

If we want to go the other way around, to get the change in pose in the world frame, we multiply the change in the robot frame with the (you guessed it) inverse of the rotation matrix. For rotation matrices the inverse equals the transpose, and the transpose is far more efficient to calculate.
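A sketch of this rotation step, again using the Matrix class; the heading and speed values are illustrative, and deltaRobotPose is the pose change from the odometry sketch above.

double heading = Math.toRadians(30);   // current heading of the robot, illustrative
double cosH = Math.cos(heading);
double sinH = Math.sin(heading);
Matrix rotationMatrix = new Matrix(new double[][] {
  { cosH, sinH, 0},
  {-sinH, cosH, 0},
  {    0,    0, 1}});

// world frame -> robot frame, e.g. "drive to the left side of the room"
Matrix worldSpeed = new Matrix(new double[][] {{0}, {100}, {0}});
Matrix robotSpeed = rotationMatrix.times(worldSpeed);

// robot frame -> world frame: the transpose replaces the inverse
Matrix deltaPoseWorld = rotationMatrix.transpose().times(deltaRobotPose);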

Wrap up

This really is all there is to driving a robot. To sum it up: you have a kinematic model to translate robot speed into motor speed, and a rotation matrix to translate things from world coordinates to robot coordinates.
The same goes for odometry. You have the inverse of the kinematic model to translate changes in encoder values into a change in robot pose expressed in robot coordinates, and the inverse of the rotation matrix to translate that change in pose from robot coordinates into world coordinates.
The kinematic model is constant; it has to be calculated only once (unless your robot changes shape). The rotation matrix on the other hand has to be updated every time the heading of the robot changes.

The implementation

The robot uses Lejos as its brains. Lejos has some excellent classes to drive the NXT motors. The regulated motor class that I used is able to rotate a motor at any given speed. This speed is maintained no matter what the conditions are. It also supports setting an acceleration rate. This is very good for my robot, as for most movements the wheel speeds of the three motors are different. If all wheels accelerated equally, the slower moving motors would reach their target speed sooner than the faster spinning motors. This results in a robot that shivers and shakes during acceleration (or braking). All this can be avoided by setting an acceleration value for each of the motors. The ratio of the acceleration values must be the same as the ratio between the (differences between current speed and) target speed of each of the motors. If done properly the robot accelerates very smoothly without jerky movements.
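A rough sketch of how this can be done with the NXTRegulatedMotor class; the target speeds would normally come from the kinematic model, the numbers here are only illustrative.

import lejos.nxt.Motor;
import lejos.nxt.NXTRegulatedMotor;

NXTRegulatedMotor[] motors = {Motor.A, Motor.B, Motor.C};
int[] targetSpeed = {400, 150, -250};   // degrees per second per motor, illustrative
int timeToTarget = 1;                   // seconds in which all motors should reach their target

for (int i = 0; i < motors.length; i++) {
  // acceleration proportional to the required change in speed,
  // so that all motors reach their target speed at the same moment
  int change = Math.abs(targetSpeed[i] - motors[i].getRotationSpeed());
  motors[i].setAcceleration(Math.max(change / timeToTarget, 1));
  motors[i].setSpeed(Math.abs(targetSpeed[i]));
  if (targetSpeed[i] >= 0) motors[i].forward(); else motors[i].backward();
}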

Lejos also has a Matrix class that helps to perform matrix algebra. I used this class to store the (inverse) kinematic model and the rotation matrix. I subclassed it to make a Pose class that can be used together with the Matrix class.

To create the kinematic model I developed a Builder class. This class has all kinds of methods to describe the robot: the wheels, the motors and the gear train. When you are done describing the robot, the builder delivers you a kinematic model and an inverse kinematic model, as sketched below.
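Using it looks something like this. The method names in this sketch are purely hypothetical, only meant to give an idea of how describing the robot could look; they are not the real API.

// hypothetical usage sketch, method names are illustrative
AgilisBuilder builder = new AgilisBuilder();
builder.setEncoderTicksPerRotation(360);
builder.addWheel(24, 90, 0, 2);     // wheel radius, distance to center, wheel angle, gear ratio
builder.addWheel(24, 90, 120, 2);
builder.addWheel(24, 90, 240, 2);
Matrix kinematicModel = builder.getKinematicModel();
Matrix inverseKinematicModel = builder.getInverseKinematicModel();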

To drive the robot I made a kind of pilot class. I plan to discuss it in a future post. This pilot accepts the kinematic model in its constructor.

For odometry I made another class, the Odometer. This class uses the inverse kinematic model.

 

Remember my plan to make a ball balancing robot? Last year I set myself the goal of making a ball balancing robot. I even built the robot. Since then I wandered off my original goal and made a guardbot, Koios, from this platform. Now I am having another shot at making a balancing robot.

Programming a balancing robot is easy in theory. You just need a sensor that tells you how much the robot is tilted; most often people use a gyro for this. I use my IMU, so that I do not suffer from gyro drift. The tilt angle is then fed to a PID controller that transforms tilt into motor speed. The hard part is to tune the PID controller. It has to translate tilt into just the right amount of motor speed: too little and the robot falls off the ball, too much and the robot goes over the top and falls off on the other side of the ball. Falling off the ball damages the robot. So I had a problem: how to tune the PID controller without damaging the robot?
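In skeleton form the balancing loop looks something like the sketch below. This is a bare-bones PID, not my actual code; imu.getPitch() and drive() are placeholders, and the gains are exactly the values that need tuning.

double kp = 0, ki = 0, kd = 0;        // the gains that have to be tuned
double dt = 0.01;                     // control period in seconds
double integral = 0, lastError = 0;
while (true) {
  double error = imu.getPitch();      // tilt in radians, 0 means perfectly balanced
  integral += error * dt;
  double derivative = (error - lastError) / dt;
  lastError = error;
  double speed = kp * error + ki * integral + kd * derivative;
  drive(speed);                       // translate the correction into the three motor speeds
  lejos.util.Delay.msDelay((long) (dt * 1000));
}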

To be able to tune the PID controller without damaging the robot I made a special platform. It is a large disk with a small pole in the middle pointing down. Due to the pole the disk will always be tilted when lying on the ground; only when it balances perfectly on the pole is it level. Therefore this disk can be used to tune the controller. The robot can ride off the disk, but it doesn’t fall far; it just comes down on the floor with one or two wheels.

When I tested this setup I discovered that the disk was too smooth; the wheels didn’t have enough grip and slipped. To increase the friction I coated the surface of the disk with silicone rubber. It is the light blue surface you see in the picture. Now I have a very “slick” surface. I only hope it lasts under the forces the NXT motors generate. But for the moment this problem is solved.

But there are other problems. One is the fact that these holonomic wheels make the robot vibrate, and this affects the IMU filter; there is still some drift, although it stays within certain limits. I have prototype Rotacaster wheels. The manufacturer told me that the production wheels are more round and generate less vibration. If you are ever going to buy these wheels, and they are a lot of fun, I advise you to take the black ones. They have the best grip. Anyway, I will have to tune the IMU as well.

Tuning PID controllers is difficult and very, very time consuming. There is some theory around tuning PID controllers, but in the end it is mostly trial and error. Every time I want to try a new set of parameters I have to modify the program, download it to the brick, run the program and evaluate the results by watching the robot. It is hard to understand what goes wrong when you see the robot ride off the disk and make a run for the door to the staircase.

But not anymore. Kirk, one of the developers of Lejos, made a very nice program that allows you to tune a running PID controller over Bluetooth. The tool is still under development, so you won’t find it in Lejos 0.9.1 yet. This program is an add-on to the charting logger I often use to evaluate the internals of the robot on the PC. So basically, this program shows me what is going on in my robot and allows me to modify PID parameters on the fly. I think this is a great tool. Below is a screenshot of it.

[Screenshot of the PID tuning tool]

So, now I have the robot, a test platform and an efficient tuning tool. That must mean immediate success! Well, to be honest I don’t think so. I’m still not sure I can get this robot to work, as there are problems with weight and inertia as well. The robot weighs 998 grams. This is quite heavy, even for three powerful NXT motors. The robot is quite stiff, but it still bends a bit under its weight. This affects the IMU sensor. And I’m working on other projects as well. So in the end I think there is a bigger chance to fail than to succeed.

To be continued.

A few posts ago I introduced Koios the guardbot. This robot can create a map of its surroundings using range sensors. This time I’ll tell you some more about this.

The robot uses the map in the process of decision making. These are the main decisions Koios has to make: Is my current location safe? Where can I find a safer spot? Where could an intruder appear? Do I spot an intruder?

One thing to keep in mind when reading this post is that the map Koios creates is not permanent. It is used as long as the robot stays at the same location. Once it starts traveling, the map is thrown away. You might think this is a pity, that you could easily keep the map and keep adding more data to it from other locations, and this way ultimately create a complete map of the surroundings. This is true. But it also is very difficult to do. The main reason is that any movement of the robot will always lead to uncertainty about the position of the robot. This uncertainty makes it impossible to simply add a new scan of the surroundings to the old one. You would have to deal with this uncertainty using statistical methods. This technique is known as SLAM, or Simultaneous Localization And Mapping. I wanted to keep things simple this time. Hence no SLAM, no permanent map, no overall image of the surroundings, but just a temporary map giving an image of the surroundings as observed from the current location. But still, there is a lot you can do based on this information. All the decisions given above can be made based on a temporary map. Compare it to real life: even without a map you can safely move around in a strange city, as you can recognize roads, pavements, etc.; you just cannot find your way back to your hotel.

The map is constructed from data from range sensors (US sensor or NX Dist sensor) with the aid of a direction sensor (compass or gyroscope). When constructing the map the robot turns around in place. While turning it takes as many range readings as possible and stores the range for all the different directions. When using the map one can query the range for any given direction. More on using the map later. First, let me explain how exactly the range data is stored and why it is stored the way it is.

The range map is not a grid as you might expect. It doesn’t have a value (object yes or no) for each combination of x and y. Instead it stores information per direction. It is something like “in a northern direction there is an object about 50 cm away”. The number of directions the map recognizes is of course not just four (north, east, etc.); this would be a bit too little to be useful. It is not 360 either; this would take too much memory. It would also be overkill, as the range sensors do not have a one degree resolution. The number of directions, I call them segments, that the map stores can be chosen by the user. As a rule of thumb I let the width of a segment be equal to half the width of the range sensor's beam. As the US sensor has a beam of about 30 degrees, I let a segment be 15 degrees. This divides a full circle into 24 segments.
The range is stored per segment. But one can and will have multiple range measurements per segment. One cannot store all measurements; this would take too much memory. One could store only the last measurement, but then one would lose the information from the other measurements. Therefore I went for another option. I wanted to store the average of all measurements taken within a segment. But to maintain and update the average you need other information as well. As the mean is defined as the sum (of the measurements) divided by the number (of measurements), I decided to store both the sum and the number. This allows me to calculate the mean at all times. I also store the sum of squared measurements. This allows me to calculate the variance of the measurements at all times. The variance tells how big the spread in the measurements is. Later we will see why this is useful information.
To wrap this up: a map consists of segments that are like slices of a pie. For each segment the map can give the number of measurements taken, the average range that was measured and the spread in these measurements.
The resulting map uses just three integer values per segment. Given 24 segments, one needs 72 bytes of data to store a complete image of the surroundings. The NXT can easily handle this. For comparison, a grid map of 4 by 4 meters with a grid size of 10 cm is more or less equivalent to a range map like this, but it would require at least 1600 elements to store its information.

If one were to draw the resulting map it would look like a radar screen. The robot would be in the center of the screen and the objects around this center. The bigger the object, the bigger the reflection on the screen. The further away the object, the further it is from the center in the image. As a matter of fact, the range map I built has methods for displaying map data.
To transform map data to screen data one has to take the following steps:

  1. translate segment to angle
  2. translate angle to x and y fractions using x = cos(angle) and y = sin(angle)
  3. multiply x and y by the average range. This gives the distance from robot to object in the x and y directions
  4. scale the distance to fit on the screen by multiplying by 32 / max range
  5. rotate to align robot coordinates to screen coordinates. In my case the positive x-axis of the robot goes from the center of the brick to the motor ports and the positive y-axis goes from the center to the left side of the robot. With respect to the screen the x and y axes are swapped, as well as their signs. The resulting rotation is x_screen = -y_robot and y_screen = -x_robot

To use the map one has to translate range data into information that supports a decision. Sometimes this is easy; more often it is quite complicated. Let me give some examples, starting with an easy one and moving to more complex examples step by step.
Suppose Koios, whose goal in life is to find and scare off any intruder in my house, hasn’t seen any intruders for a long time. It then wants to go to another spot; maybe it has more luck finding an intruder over there. This new spot should be as far away from the current spot as possible. But when traveling to the new spot it does not want to collide with any object in the surroundings. Therefore it uses the map to find the direction in which it can travel as far as possible without hitting an object. It does so by finding the segment that has the highest range value. When this segment is found it rotates towards that direction and travels the distance corresponding to the range minus a safety margin.
Once arrived at the new location Koios has to make a harder decision: is the current location safe enough to stay and stand guard, or does Koios need to find a safer location? I decided beforehand that a safe location is a secluded location, a location where Koios has objects around him that protect him from people walking over him. To make this decision Koios creates a new map of the surroundings and calculates a safety indication from the number of objects that are close by. If this is large enough, Koios regards the spot as a safe spot and stays there. If the current location is not safe enough, Koios needs to find a safer spot. It does so by calculating the safety indication for all spots on the map and then going to the safest spot it could find.
The hardest decision to make is in the process of intruder detection: is an object detected by the range sensors an intruder or part of the surroundings? This decision is simple at first glance. Any object that is not on the map already must be an intruder. In other words, if the current range is shorter than the range for that direction on the map, then there must be an intruder in that direction. But what if the difference between these two ranges is small? It could then just as well be caused by some variance in sensor readings as by an intruder. It is here that the variance, or spread, in measurements that I mentioned earlier comes in handy. Statistics tells us that the square root of the variance is called the standard deviation. From the mean and the standard deviation one can calculate the likelihood of a range belonging to a new object (intruder) or to the surroundings. If the measured range is smaller than the average range on the map minus two times the standard deviation, then there is a 97.5% chance that this object is an intruder.
You can see that none of these decisions need a permanent map. They can all be made using just an up-to-date image of the current surroundings. It therefore was a good decision not to maintain a permanent map and not to over-complicate the program.

To conclude: mapping does not have to be very difficult if one doesn’t need a permanent image of the surroundings. The kind of decisions you want to make dictates whether you can make do with temporary maps. Also, a map does not have to be very large if the map model is chosen wisely.

For those of you who are interested in the code for the range map, I provide it below. It is for Lejos and not documented in any way, so it will be hard to use straight away. But maybe it can still help you one way or another. The methods that are involved in displaying the map use vector math, as this makes rotating and scaling easier. The vector library itself is open source and can be found on the Internet; it is called vecmath.


package nl.totan.robot;

import javax.microedition.lcdui.Graphics;

import lejos.nxt.Button;
import lejos.nxt.LCD;

import nl.totan.sensors.INS;
import nl.totan.vecmath.Matrix3f;
import nl.totan.vecmath.Vector3f;

public class RangeMap {
    int segments;      // number of pie slices the full circle is divided into
    double angle;      // width of a single segment in radians
    int depth;         // maximum range that is stored
    int[] sum;         // sum of the measurements per segment
    int[] n;           // number of measurements per segment
    int[] sumsqr;      // sum of the squared measurements per segment
    Vector3f v = new Vector3f();
    Vector3f v2 = new Vector3f();
    Vector3f shift = new Vector3f(50, 32, 0);
    INS dcm = null;
    float scale;
    Graphics plotter = new Graphics();

    private Matrix3f rotate = new Matrix3f( 0, -1, 0,
                                           -1,  0, 0,
                                            0,  0, 1);

    public RangeMap(int segments, int depth) {
        this.segments = segments;
        this.depth = depth;
        angle = 2.0 * Math.PI / segments;
        scale = 32.0f / depth;
        sum = new int[segments];
        n = new int[segments];
        sumsqr = new int[segments];
        clearMap();
    }

    public RangeMap(int segments, int depth, INS dcm) {
        this(segments, depth);
        this.dcm = dcm;
    }

    public void clearMap() {
        for (int a = 0; a < segments; a++) {
            sum[a] = 0;
            n[a] = 0;
            sumsqr[a] = 0;
        }
    }

    // NOTE: the methods below (up to and including getN) were partly lost when the code was
    // pasted into the blog; they are reconstructed here from the description in the post
    // (mean = sum / n, variance = sumsqr / n - mean^2). The name addRange is a guess.

    public void addRange(double currentAngle, double range) {
        if (range > depth) range = depth;
        update(toIndex(currentAngle), range);
    }

    protected void update(int a, double range) {
        sum[a] += (int) range;
        n[a]++;
        sumsqr[a] += (int) (range * range);
    }

    protected int toIndex(double currentAngle) {
        int a = (int) Math.floor(currentAngle / angle);
        return ((a % segments) + segments) % segments;
    }

    protected double toAngle(int a) {
        return a * angle;
    }

    public double getMean(int a) {
        if (n[a] == 0) return depth;
        return (double) sum[a] / n[a];
    }

    public double getVariance(int a) {
        if (n[a] == 0) return 0;
        double mean = getMean(a);
        return (double) sumsqr[a] / n[a] - mean * mean;
    }

    public int getN(int a) {
        return n[a];
    }

    public boolean isNew(double currentAngle, double range) {
        if (Double.isNaN(range) || range > depth) return false;
        int a = toIndex(currentAngle);
        LCD.clear();
        LCD.drawString("Range: " + range, 0, 0);
        LCD.drawString("Mean : " + getMean(a), 0, 1);
        LCD.drawString("SDDEV: " + Math.sqrt(getVariance(a)), 0, 2);
        LCD.drawString("N    : " + getN(a), 0, 4);
        LCD.drawString("DIF  : " + (range - getMean(a) - 2.0f * Math.sqrt(getVariance(a))), 0, 5);
        //Button.ENTER.waitForPressAndRelease();
        // an object is new (a possible intruder) when it is more than two standard
        // deviations closer than the mean range stored for this segment
        if (range < getMean(a) - 2.0f * Math.sqrt(getVariance(a))) return true;
        update(a, range);
        return false;
    }

    protected void drawDot(int a, int d) {
        v.x = (float) (Math.cos(toAngle(a)) * d);
        v.y = (float) (Math.sin(toAngle(a)) * d);
        v2.x = (float) (Math.cos(toAngle(a + 1)) * d);
        v2.y = (float) (Math.sin(toAngle(a + 1)) * d);
        toScreen(v);
        toScreen(v2);
        plotter.drawLine((int) v.x, (int) v.y, (int) v2.x, (int) v2.y);
    }

    public void drawMap() {
        for (int a = 0; a < segments; a++) {
            drawDot(a, (int) getMean(a));
        }
        if (dcm != null) {
            // draw north
            v.x = 0;
            v.y = 0;
            v2.y = 0;
            v2.x = depth;
            toScreen(v);
            toScreen(v2);
            plotter.drawLine((int) v.x, (int) v.y, (int) v2.x, (int) v2.y);
        }
    }

    public RangeMap getDifference(RangeMap old) {
        RangeMap dif = new RangeMap(segments, depth);
        double maxDif = 10;
        double myRange, oldRange;
        boolean isNew;

        for (int a = 0; a < segments; a++) {
            myRange = getMean(a);
            isNew = true;
            for (int aa = a - 1; aa <= a + 1; aa++) {
                oldRange = old.getMean((aa + segments) % segments);
                if (oldRange < myRange + maxDif) {
                    isNew = false;
                }
                if (isNew) {
                    dif.sum[a] = sum[a];
                    dif.n[a] = n[a];
                    dif.sumsqr[a] = sumsqr[a];
                }
            }
        }
        return dif;
    }

    public double getProtectionLevel(int a, int dist, int margin) {
        double level = 0;
        double dist2;
        double dif;
        double sin_a = Math.sin(toAngle(a)) * dist;
        double cos_a = Math.cos(toAngle(a)) * dist;
        for (int aa = 0; aa < segments; aa++) {
            // the lines that computed dist2 and dif (how close the object seen in segment aa
            // is to the spot at (cos_a, sin_a)) were lost in the blog formatting;
            // only the scoring line below survived
            dif = 0; // placeholder for the lost computation
            if (dif > 0)
                level += 0.5 - Math.atan(140 * dif / depth - 4) / 3;
        }
        return level;
    }

    public void drawMarker(int a, int d) {
        v.x = (float) (Math.cos(toAngle(a)) * d);
        v.y = (float) (Math.sin(toAngle(a)) * d);
        toScreen(v);
        plotter.drawArc((int) v.x, (int) v.y, 4, 4, 0, 360);
    }

    public void drawRadius(double angle) {
        v.x = (float) (Math.cos(angle) * depth);
        v.y = (float) (Math.sin(angle) * depth);
        v2.x = 0;
        v2.y = 0;
        toScreen(v);
        toScreen(v2);
        plotter.drawLine((int) v.x, (int) v.y, (int) v2.x, (int) v2.y);
    }

    void toScreen(Vector3f vec) {
        if (dcm != null) dcm.transformB(vec);  // correct for the current heading of the robot
        rotate.transform(vec);                 // robot frame -> screen frame
        vec.scale(scale);                      // scale to fit the NXT screen
        vec.add(shift);                        // move the origin to the center of the screen
    }

}

I have included a compass in my INS filter. It is now drift-free and accurate over all three axes. It is also very robust to magnetic disturbances; the compass itself is not. It gives me great pleasure to see the first results.

Tuning the filter is difficult though. I have used the NXJChartingLogger, which will be part of Lejos 0.9.1, a lot. The filter provides all kinds of output that is displayed in real time on the PC. Different parameters are logically grouped into different graphs. The graphs provide information on the state of the filter and thus help in tuning.
I have also included a method to introduce errors in the sensor data. This enables me to examine the performance of the filter.

As a last thing I have made the filter calculate its initial state. With this the filter settles accurately (< 0.5 degrees) within half a second.

The performance dropped to 97 Hertz with three sensors attached. But I haven't optimized the compass code yet.

In later posts I'll demonstrate the performance of the filter.

By request I will publish the Lejos software for the dIMU sensor.

The software consists of drivers for the Dexter Industries IMU sensor, programs to calibrate the sensor and the IMU filter that I wrote about on this blog before. The filter can be used for other IMU sensors as well. Also included is the sample program that utilizes this filter. It is the very same program I used in the videos published in a previous post. There is no formal support and no warranty. If you are going to use it you will have to rely on your own wit, the javadoc and this blog.

Download and install

You can download the software from this link. After downloading you will have to extract it, keeping the directory structure intact. Make sure your compiler can find the code.

Using the IMU drivers

The dIMU consists of two sensors, a gyroscope and an accelerometer; each has its own driver. The gyroscope driver is called L3G4200D and the driver for the accelerometer is MMA7455L. (I use the technical names of the sensors as driver names.) On top of these drivers I made user interfaces that allow you to examine, configure and calibrate the sensors. The user interfaces are called L3G4200D_E and MMA7455L_E respectively. You can use these programs as drivers as well; this gives your programs access to the additional functionality, but it needs the NXT screen and might result in larger programs. There is a sample program included that gives access to the user interfaces; it is called testDIMU.

This is how you enable the drivers in your code,

SensorPort.S4.i2cEnable(I2CPort.HIGH_SPEED);
MMA7455L accel = new MMA7455L(SensorPort.S4);
L3G4200D gyro = new L3G4200D(SensorPort.S4);

The first line instructs Lejos to use high speed I2C. The sensors support this.

This is how you get data (rate, acceleration and tilt) from the sensors.

float[] rate = new float[3];
gyro.fetchAllRate(rate);

float[] acceleration = new float[3];
accel.fetchAllAccel(acceleration);

float[] tilt = new float[3];
accel.fetchAllTilt(tilt);

As you can see, the drivers return data for all three axes in one call. If you just need data from one axis you can get it from the arrays. The order is X, Y, Z. The gyro returns degrees per second by default. The accelerometer returns milli-g for acceleration and degrees for tilt, also by default. If you want to use other units in your programs you can change the defaults like this.

gyro.setRateUnit(RateUnits.RPS);
accel.setAccelUnit(AccelUnits.MS2);
accel.setTiltUnit(TiltUnits.RADIANS);

This changes the default units to radians per second, meters per second squared and radians respectively. Other units are available, check the javadoc. The default unit can also be overridden per data request by specifying the desired unit as the second parameter of the fetchAll… methods.

Configuring the sensors

The gyro can be configured for dynamic range and sample rate using the setRange and setSampleRate methods. As a rule one should select the lowest value that your application allows for; this gives data of the best possible quality.

The accelerometer cannot be configured. I found this of little use.

Calibrating the sensors

The gyro of my dIMU doesn’t really need calibrating; however, there is the possibility to do so. Calibration is started by calling gyro.calculateOffset(). During calibration the sensor should remain motionless. Calibration settings of the gyro are not stored, so they are lost when your program terminates. (Storing calibration settings of gyro sensors is of no use, as the calibration values depend on environmental factors and are not stable over time.)

The accelerometer does need calibration. The user interface driver provides functionality to calibrate the sensor and to store the calibration settings. The (base) driver looks for stored calibration settings upon initialization and loads them automatically when they are available. Calibration settings of the accelerometer are stable over time, so you’ll need to do this just once. Each of the three axes has to be calibrated separately. To calibrate an axis one has to point it straight up first and hold it still while the calibration routine collects data samples. Then the axis has to be pointed straight down and held still for some time. Follow the on-screen instructions and do not forget to save the settings after calibration.

Using the IMU filter

The IMU filter can be used with any three-axis gyro and any three-axis accelerometer as long as their drivers implement the RateData and TiltData interfaces. This is how you initialise the filter:

NLCFilter attitude = new NLCFilter(gyro, accel);
attitude.start();

The two parameters of the filter constructor are the gyro driver and the accelerometer driver. One could leave out the accelerometer; the filter will still work, but values will drift over time. The second line of code starts the filter. The filter needs 2 to 5 seconds to settle at start-up, so you need to keep the sensor motionless and more or less level for a few seconds. You can find out whether the filter is still settling with the Initializing() method.

The IMU filter keeps track of the attitude, or pose, of your sensor/robot. You can query the roll, pitch and yaw angles this way

attitude.setTiltUnit(TiltUnits.RADIANS);
float roll=attitude.getRoll();
float pitch=attitude.getPitch();
float yaw=attitude.getYaw();

or

float[] tilt = new float[3];
attitude.fetchAllTilt(tilt);

By default these methods return angles in radians. You can change this by setting a different unit with the setTiltUnit() method.

You can also use the filter to transform data from the body frame to the world frame. This is useful if another sensor returns data that needs to be corrected for the robot's current attitude. The next example transforms a distance returned by an ultrasonic sensor to world coordinates. The example assumes the US and IMU sensors are aligned and that the US sensor measures along the X-axis.

Matrix usBody=new Matrix(1,3);
Matrix usWorld=null;
us = new UltrasonicSensor(SensorPort.S1);
usBody.set(0,0,us.getDistance());
usWorld=attitude.transformToWorld(usBody);

The matrix usWorld now contains the distance from sensor to the detected object along the X, Y and Z axes.

Configuring and tuning the IMU filter

By default the IMU filter updates as often as possible; its update frequency is over 100 Hertz. To free computer resources one can lower the update frequency using the setFrequency() method. The filter will try to match the specified frequency. A parameter value of 0 will run the filter at maximum speed.

The filter can be tuned to increase the quality of its output. I advise you not to tune the filter until you are familiar with it and understand its inner workings. Tuning consists of altering the P and I values of its internal PI controller. The I value takes care of gyro drift cancellation and the P value controls how fast the attitude is corrected using tilt data from the accelerometer. The values can be modified using the setKp() and setKi() methods.
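For example (the values below are only placeholders to show the calls, not recommended settings):

attitude.setFrequency(50);   // limit the update rate to free CPU time; 0 means run at full speed
attitude.setKp(0.05f);       // placeholder value: how strongly the accelerometer corrects the attitude
attitude.setKi(0.001f);      // placeholder value: how fast gyro drift is cancelled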

There are two ways the filter can assist you in tuning. It keeps track of the integral of absolute errors, or absI. This is a measure of the total error of the filter over time: the smaller the error (given a fixed period), the better the filter performs. The filter also allows you to send data over Bluetooth to the PC for examination. For this one has to use the NXTChartingLogger on the PC that is part of the Lejos distribution. You instruct the filter to send its state to the PC by supplying an NXTDataLogger object with the setDataLogger() method.

Running the demo program

The demo program is called testIMU. At startup of this program the sensor must be horizontal and motionless. The sensor is assumed to be aligned with the NXT brick, with the sensor plug facing in the same direction as the sensor ports. Once you see the wire frame you can start moving the sensor. The demo has four display modes:

  • Wire frame. Here it shows a wire frame of the sensor on the NXT screen
  • Rotation matrix. The screen shows the content of the rotation matrix. This matrix stores the current attitude. It is also used to convert body coordinates to world coordinates by matrix multiplication.
  • Roll, Pitch, Yaw. The screen shows the Roll, Pitch, Yaw angles of the sensor.
  • Update speed. The screen shows the actual update speed of the filter.

You can browse through the display modes by using the arrow keys. The enter key resets the filter.  The clip below shows the demo program in action in wire frame mode.

Here a quick post of some graphs that show the effect of the filter I use for my IMU.

You might have seen a previous post where I show that my sensor is capable of detecting a grain of sugar under the NXT. If you haven’t seen it, check the post and video here. A few months ago I changed from robotC to Lejos, and now I have rewritten the sensor drivers and the filter in Java.

The Lejos developers are currently working on a very nice logging feature that allows users to display data from the NXT in real time on the PC. I got hold of the beta code and tested it without any problems. I used the NXTDataLogger (this is the name of the Java class) to make the internals of the IMU filter visible.

The IMU sensor combines a 3-axis gyroscope and a 3-axis accelerometer. The filter reads the output from both sensors, combines their signals in a smart way and provides stable information about the orientation (yaw, pitch and roll) of the robot.

An accelerometer can provide tilt data (pitch and roll), as can be seen in the first graph. This sensor however has two serious drawbacks. First, it is quite noisy; that’s why the graph lines seem so hairy. Second, it does not distinguish tilt from acceleration, so the output of this sensor can have different causes. This is why you cannot make a balancing robot with just an accelerometer.

A gyroscope is far less noisy; that is why the second graph shows smooth lines. It is also not affected by acceleration. However, this sensor too has its particularities. First, it doesn’t measure tilt, it measures rate of turn. This can only be translated into tilt if the starting condition is known. To get the tilt at any time you have to integrate all the readings, starting from the starting condition. In the process small errors are introduced, and in the long run they add up to a large error. The green line in the graph shows this effect. At about 42 seconds the sensor rotates faster than its maximum range (> 2000 degrees/second); it then gives faulty readings, and these are integrated into the calculated tilt. As a result the line ends higher than it started, even though the sensor was turned back to its original position. The second effect that makes a gyro hard to use in long runs is called drift. Due to temperature changes or voltage drops the base level of a gyroscope changes. This means that after some time the gyro seems to be turning slowly while it is actually stationary. This effect is clearly visible in the blue line, which goes up slowly. This effect cannot be eliminated by calibration, unless you are able to calibrate the sensor during the run.

The filter is like a PI controller. It uses the data from the gyroscope as the basis of its output. That is why the lines in the third graph, which shows the output of the filter, are smooth. It uses the data from the accelerometer to correct the output for any errors that have built up, but it does so slowly in order to keep the noise out of the output signal. The P-value of the PI controller corrects for errors in the gyro signal. It also makes the filter useful when the starting conditions are unknown. The filter also sums past errors to get an I-value; this value is used to correct for drift in the gyro data. The result is a graph with smooth, drift-free lines.
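To illustrate the idea, here is a bare-bones, single-axis sketch of such a PI correction. This is not the actual filter code; the gains are just illustrative.

class TiltFilter {
  double kp = 0.5, ki = 0.01;   // illustrative gains
  double angle = 0;             // filtered tilt in radians
  double errorSum = 0;          // integral of the error, used to cancel gyro drift

  // gyroRate in rad/s, accelAngle is the tilt derived from the accelerometer in rad, dt in seconds
  double update(double gyroRate, double accelAngle, double dt) {
    double error = accelAngle - angle;   // how far the gyro based estimate has strayed
    errorSum += error * dt;              // builds up the I term that compensates drift
    angle += (gyroRate + kp * error + ki * errorSum) * dt;  // gyro is the basis, accel corrects slowly
    return angle;
  }
}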

Two final notes. First, the green line (the yaw line) is not drift-free. This is because an accelerometer cannot provide information about orientation in the XY-plane; you need a compass sensor for that. Second, the range is expressed as the cosine of the angle, so 1 corresponds to zero degrees and 0 corresponds to 90 degrees.

Here is a short update of my works.

I’m doing two projects at the same time.
First, there is Sidbot. It waits till I master Java threads.
Second, I’m building a holonomic or Killough pilot in Lejos. This project hangs on me not knowing how far the robot has travelled. This info is needed for the pilot to work with a navigator.
Actually, my main project now is learning Java. This is going well, but there is a lot to it.

A few weeks ago I ordered two more BlinkMs. These are I2C-driven full-color LEDs. I wrote a driver to address these LEDs. The LEDs are now mounted on each corner of Sidbot, so Sidbot can shine all colors to all sides of its triangular body. Each LED is driven individually, so each one can have its own color. I made a nice light show to test the driver.
I finished the drivers for my IMU too. The filter to fuse the signals is also finished, but it can’t run in a separate thread yet.

I also have been on holidays and busy with work. But these matters don’t concern this blog.

It has been some time since you saw a post from me. The main reason is that I am really busy with work. The second reason is that I decided to use the LEJOS environment for my current project. I did not have any experience with LEJOS and Java, so I had to dive deep into the matter.

LEJOS allows you to program in the Java language on the NXT. It has some advantages over robotC that I have been using the last two years. I won’t list all the differences between the two languages. For me the higher I2C speed and object orientation were important factors.

Java proved to have a steep learning curve. It took me some time to get the first results. The main reason for this was not the language itself but the IDE. I am using Netbeans to write my programs. Netbeans uses ANT to compile the programs, and ANT uses the classes provided with LEJOS. All these components have to work together, and when they don’t it is hard for a newbie like me to find out why. At one point I experienced lots of red lines in my program, indicating errors, but still the program compiled just fine. It turned out that ANT knew where to find the LEJOS classes but Netbeans didn’t.

Slowly I’m getting to grips with Java and the environment. As a first project I decided to write an improved interface to the Mindsensors accelerometer, as the existing interface does not allow you to set the sensitivity of the sensor or to calibrate it. I also want my sensors to return SI values: I’d rather have m/s^2 than milli-g for acceleration, and radians than a number between -127 and 128 for tilt. The interface is not yet finished, but I no longer feel lost when writing a new method for the interface.

After I finish this little side project I will rewrite the interface to my IMU and the component filter. Only then can I really tell whether my switch to LEJOS has paid off.