Archives for category: Gyro sensor

This time I will discuss the gyro sensor (again): what it measures, how to cope with offset and drift, and how to use it.

Gyro sensors are among the most widely used sensors in robots. They are often used in self-balancing robots for keeping upright, and in autonomous robots for keeping track of orientation. Game controllers may also use them to detect angular motion. A gyro sensor measures angular velocity; in other words, it measures how fast the sensor is rotating. This is most often expressed in degrees per second or radians per second. Some gyro sensors measure the angular velocity over one axis, others take measurements over two or three axes.

One should be aware that a rotating object always rotates over just one axis. This axis, however, can have any orientation. Compare this to speed. An object can move in just one direction at any given time. This direction, however, can be any direction in three-dimensional space. Most often we express the speed of an object over three perpendicular axes. A landing plane might fly at a speed of 600 km/h and descend at a rate of 5 m/sec. Wind might blow it off course at a rate of 0.5 m/sec. But still this plane goes in one direction only. The same is true for angular velocity: an object rotates at just one speed, but we express that speed over three separate axes.

Let us take a look at the output of a typical gyro sensor. The graph below shows the output of a digital three-axis gyro sensor (ITG-3200) over a period of 17 seconds. The X-axis is the time axis; the Y-axis shows the angular velocity expressed in degrees per second. Five seconds after the start of the measurement I rotated the gyro sensor clockwise by about 180 degrees. This took me about 3 seconds. After about 11.5 seconds I rotated the sensor back (counter-clockwise).

[Figure: gyro-1]

This graph tells us quite a lot. First, we notice that the sensor is a three-axis sensor. It returns three different signals at any given time. Second, we can see that the rotation I made took place over the third axis, the Z-axis. It means that I rotated the sensor in the XY-plane. This is true, I had the sensor flat on my desk and I rotated it while keeping it flat. Third, a clockwise rotation is expressed as a negative angular velocity and a counter-clockwise rotation as a positive angular velocity. This is a matter of convention and is called the right-handed orientation system. Fourth, we can see that when at rest the gyro signal is close to, but not exactly, zero. This is because the sensor is not perfect, it has a bit of an error. We’ll look into this error and ways to compensate for it later. And finally, we cannot read the change of 180 degrees in orientation from the graph. This is because the sensor does not measure the orientation, instead it measures the angular velocity (speed of rotation). It is however possible to calculate the (change in) orientation from the angular velocity, as I’ll explain later.

Offset and drift correction

Let us concentrate on the error in the sensor signal right now. Below is a graph taken over a period of ten minutes while the sensor was at a standstill the whole time.

[Figure: gyro-2]

A perfect sensor would output an angular velocity of zero for all three axes all the time. Obviously this sensor does not. The X-axis signal is around 2 instead of zero. This error is called the offset error. Every axis has its own offset error. For the X-axis it is around 2, for the Y-axis about 3.3 and for the Z-axis about -1.5. It is easy to correct for the offset error once you know how big it is: you just subtract the offset error from the sensor signal to get a corrected value. The offset error itself can be calculated by taking the mean of a number of samples, 100 for example.
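As an illustration, computing the offset this way could look like the plain-C sketch below. The readGyroAxis function and the sample count of 100 are just placeholders for your own sensor driver and your own choice.

#include <stdio.h>
#include <stdlib.h>

#define OFFSET_SAMPLES 100

// Stand-in for reading one gyro axis in degrees per second; replace with your own driver.
float readGyroAxis()
{
  // simulated here: an offset of about 2 deg/s plus some noise
  return 2.0 + ((rand() % 100) - 50) / 100.0;
}

// Estimate the offset as the mean of a number of samples taken at standstill.
float calibrateOffset()
{
  float sum = 0;
  for (int i = 0; i < OFFSET_SAMPLES; i++)
    sum += readGyroAxis();
  return sum / OFFSET_SAMPLES;
}

int main()
{
  float offset = calibrateOffset();
  float corrected = readGyroAxis() - offset;  // subtract the offset to get a corrected value
  printf("offset: %.2f, corrected sample: %.2f\n", offset, corrected);
  return 0;
}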

The offset itself may seem constant, but in reality it is not. The offset of a sensor is influenced by several factors and can change over time as a result. This is called sensor drift. One of the biggest factors contributing to sensor drift is temperature. You can notice this when you start using a sensor: while being used, the temperature of the sensor rises a bit. It gets hotter than the ambient temperature, and as a result the offset of the sensor changes. If you need a very good signal you should take this into account and let the sensor warm up before calculating the offset.
Some gyro sensors, like the HiTechnic gyro for example, are seriously affected by changes in input voltage. As the NXT is battery powered this is a serious problem: starting the motors of the NXT will result in a power drop and thus in a change in offset. There is a trick to avoid this if you have a sensor mux with an external power supply, like this one from Mindsensors. In general I advise you to choose another gyro.
Even when temperature and power are constant the offset of a gyro will still vary a bit over time. This variation is called random walk.

There is a very elegant technique to deal with sensor drift: constantly but slowly update the offset error of the sensor. Instead of treating the offset as a constant, you treat it as a value that can change over time. To calculate the offset you now use the moving average of the most recent samples as the offset; then you always have an up-to-date offset. Calculating a moving average, however, is CPU intensive and you also need a lot of memory to store the samples. Therefore it is better to use a low-pass filter instead of a moving average. This does not use extra memory and needs little computing power, but the effect is the same.
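A minimal sketch of that idea in plain C: the offset is nudged towards every new raw sample by a small factor alpha. The value of 0.001 is an arbitrary illustration, not a tuned number.

// Update the offset with a first-order low-pass filter instead of a moving average.
// A small alpha makes the offset follow the signal very slowly.
float updateOffset(float offset, float rawSample)
{
  float alpha = 0.001;
  return offset + alpha * (rawSample - offset);
}

Each iteration you feed the newest raw sample into updateOffset and keep using raw minus offset as the corrected rate.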
This technique will work very well when the sensor is at a standstill. But will it work well when it is rotating? Any rotation will influence the calculated offset. So if possible one should pause updating the offset while the sensor rotates. Sometimes another sensor can help to detect a rotation: a compass, an accelerometer or motor encoders can all be of help.
However, there is also another solution. This one is based on the notion that a rotation in one direction is very often compensated by a rotation in the other direction. A balancing robot, for example, stays upright, so in the long term it does not rotate. It might lean forward and backward for short moments of time, but these forward and backward rotations cancel each other out in the long run. So in the long run the average signal of the sensor equals the offset of the sensor, even when there are rotations from time to time. This means that even under dynamic circumstances one can constantly update the sensor offset. One only has to make sure to use enough samples so that there is enough time for rotations to cancel each other out. This technique is useful for balancing robots where rotations are short. It is less useful for slow-turning robots where rotations have a long duration.

Converting angular velocity to direction

There are situations where one needs to know the direction of the sensor. In navigation, for example, one needs to know in what direction a robot is heading. A gyro can provide this kind of information, although there are some limitations. So how do you transform angular velocity into direction? This is done by integration. This might seem difficult, but it really is not. If a robot rotates for 2 seconds at a speed of 90 degrees per second, it has rotated 180 degrees. So integration is nothing more than speed multiplied by time. The graph below shows both the (offset corrected) angular velocity and the direction. During the test I rotated the sensor four times 90 degrees clockwise (360 degrees in total), then 360 degrees counter-clockwise, 90 degrees counter-clockwise and finally 90 degrees clockwise. After this the sensor was back at its start direction.
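In code this integration is nothing more than accumulating rate times time every sample. A plain-C sketch, where correctedRate is the offset-corrected angular velocity in degrees per second and dt the time between two samples in seconds:

// Integrate the offset-corrected rate to get a change in direction (degrees).
float integrate(float heading, float correctedRate, float dt)
{
  return heading + correctedRate * dt;  // speed multiplied by time
}

Feeding this a rate of 90 degrees per second for 2 seconds worth of samples adds up to the 180 degrees from the example.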

[Figure: gyro-3]

The line representing the direction, labeled Rotation 2, clearly shows the steps of 90 degrees that I made. Careful examination of the data shows that according to the sensor the robot rotated -358.8 degrees at most, whereas I tried to rotate it -360 degrees. This makes the sensor, and the integration, pretty accurate. However, after turning the sensor back to its start direction the calculated direction is not zero as is to be expected. Instead, it is about 5.7 degrees. This is not so good. What makes it even worse, there is no way to correct this, at least not without the aid of another sensor or user intervention. This is the main drawback of integration: over time, small errors in the (offset corrected) signal build up to a large error in the integrated data.

But integration can be very useful nevertheless. Suppose a robot needs to make exact turns. Using integration you can do so, but you need to reset the initial direction to zero just before making the turn. This way the error in each turn will only be equal to the integration error that builds up during this turn. This is small, as making a turn does not last that long. In other words, you can make your individual turns accurate but not the overall direction of the robot.
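A sketch of how one might use this for a single accurate turn, assuming a sampling loop that provides correctedRate and dt as above:

// Make one turn accurate: reset the heading just before the turn starts,
// so only the integration error built up during this turn remains.
float turnAngle = 0;

void startTurn()
{
  turnAngle = 0;                     // reset the initial direction to zero
}

void onSample(float correctedRate, float dt)
{
  turnAngle += correctedRate * dt;   // integrate only during the turn
}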

But wait, there is one more thing. Integration only gives a change in direction. To know the real direction one should also know the direction before the changes took place. Quite often this initial direction can be assumed to be zero. But this is arbitrary and does not relate to the world, although it might be all you need. If you need to relate the orientation of your sensor to the real world, you need to align the sensor with the real world (make it point north) or you need another sensor that can do this for you. This could be a compass sensor.

Hi,

Some time ago I asked you to come up with a name for my omniwheel robot. Now it is time to announce the winner.

The name of the robot is Koios, after one of the Greek Titans. Stephen, aka PigsCanFly, suggested the name Coeus, which translated into Dutch gives Koios. So Stephen is the winner.

The prize, a second-hand HiTechnic gyro, will be sent by airpig to Stephen. Congratulations.

Thanks to everybody who participated.

If you visited this blog before you might know that I built myself an IMU sensor last year and that I wrote a filter to utilize this sensor to the max. It seems that if you want an IMU there is an easier way: you can buy one from Dexter Industries. As a matter of fact, they gave me one to evaluate, and so I did. I will present my findings in this post.

An IMU consists of a rate sensor, more often called a gyro sensor, and an accelerometer. IMUs are mainly used to determine the position and attitude of a robot. However, an IMU cannot measure these directly; instead it measures movement. The gyro measures rotational movement, the speed at which the sensor turns. The accelerometer measures acceleration, the change in speed of the sensor. These measurements can be integrated to give position and attitude. However, integration is a tricky process; small errors in the measurements build up to large errors over time. Therefore it is very important that the sensors provide data of good quality.

The Dexter Industries IMU, or dIMU, has a three-axis gyroscope (L3G4200D) that includes a temperature sensor. It also has a three-axis accelerometer (MMA7455L). The sensors are digital and addressed with the I2C protocol. They allow fast I2C, so if you have an environment that supports high I2C speeds, like RobotC, leJOS or NXC, you can query this sensor at very short time intervals (>100 Hz). There is a bright blue LED on the sensor board, Dexter Industries' trademark. Some parts of the sensor board, including the LED, get warm; this makes the temperature sensor useless. The components on the board are not covered. This gives the sensor a high-tech look, but you should take care not to damage the components. The gyro and accelerometer are not aligned: the X-axis of the gyroscope corresponds with the Y-axis of the accelerometer. One should compensate for this in the software, as sketched below.
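For illustration, the compensation could be a simple remapping like this plain-C sketch; the exact mapping and the signs are an assumption and should be checked against your own board.

// Remap accelerometer readings so that they use the same axes as the gyro.
// On the dIMU the gyro X-axis corresponds with the accelerometer Y-axis.
void alignAcclToGyro(float acclIn[3], float acclOut[3])
{
  acclOut[0] = acclIn[1];  // gyro X <- accelerometer Y
  acclOut[1] = acclIn[0];  // gyro Y <- accelerometer X (assumed)
  acclOut[2] = acclIn[2];  // Z assumed to coincide
}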

Below are the results of some tests I did with the sensor. I compared the sensor to others that I have. I will focus on the gyro first.

I compared the gyro to two other gyros: the analogue gyro from HiTechnic and the digital gyro (ITG-3200) from my custom-built IMU (cIMU). The HiTechnic gyro is an analogue one; it measures one axis only. The analogue signal from this sensor has to be translated to a digital signal by the NXT brick. This limits the resolution of this sensor to 1023 raw values, or 1 degree per second. The maximum dynamic range of this sensor is about 600 degrees per second. The Dexter Industries gyro has a resolution of 9 to 70 millidegrees per second and a dynamic range of 250 to 2000 degrees per second. Range and resolution are user selectable, where a higher dynamic range gives a lower resolution. The dynamic range of the ITG-3200 from my IMU is 2000 degrees per second with a resolution of 70 millidegrees per second, comparable to the DI gyro.
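As an example of what these resolutions mean in code: at the 2000 degrees per second range one raw digit corresponds to roughly 70 millidegrees per second (the number quoted above), so converting a raw reading is a single multiplication. A plain-C sketch:

// Convert a raw reading to degrees per second at the 2000 deg/s range,
// where one digit is about 70 millidegrees per second.
float rawToDegreesPerSecond(int raw)
{
  return raw * 0.07;
}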

Gyros have an offset: this is the value they return when being motionless. One has to correct gyro readings with the offset to get the true rate of turn. The problem with offset is that it changes over time or under different circumstances. Also, sensor signals are noisy, so it is not easy to determine a good offset value (see a previous post of mine for details). A faulty offset value, or a change in offset over time, results in faulty readings. Therefore the first test I performed was to see how stable the offset of the different gyros is. Below is a graph of the three gyros while being motionless. A perfect gyro would return 0 all the time. But perfect gyros do not exist; there is always some noise in the signal, and that is why the lines are not straight. The noise of the HiTechnic gyro looks different from that of the other two. This is because the HiTechnic gyro signal is rounded off to whole degrees per second while the other two have a finer resolution. This makes it hard to compare this signal to the other two. The dIMU is noisier than the cIMU.
After 60 seconds of continuous measurement one of the NXT motors was powered. This gives a drop in the HiTechnic gyro signal; its offset has changed as a result of powering the motor. It seems as if the gyro has started to rotate very slowly, but it hasn't. Therefore you need to calibrate this gyro under circumstances comparable to those it will experience when running. Both digital gyros do not suffer from powering the motor. This is a strong argument in favour of digital gyros.

The second test looks at the quality of the integrated gyro signal. The integrated signal gives the angle of the sensor (with respect to the starting position). This is a very important aspect of gyros, as you will quite often use this quantity in your programs. Integration takes away some noise as this is averaged out. However, errors in offset are not biased to zero and these errors will add up over time. Again the sensors were motionless; there is no change in angle, so one would expect the lines to stay at zero. This time there are four lines in the graph. The new, red line is also a signal from the HiTechnic gyro, but this time its signal is corrected for offset errors using some smart statistical methods. This functionality is part of leJOS.
The HiTechnic gyro without the dynamic offset correction performs worst in this test. The line quickly deviates from the x-axis and this goes even faster when the motor is started (and the offset changes). With the dynamic offset correction the sensor performs very well: it stays close to zero. However, the algorithm cannot cope with the change in offset after starting the motor; the line then quickly drops.
My gyro gives a smooth line but the line also drops quite fast. The smoothness of the line is due to the low noise level; the drop is caused by changes in offset. My gyro definitely could benefit from a dynamic offset correction.
The gyro from Dexter Industries stays around the x-axis. This means it has a very low change in offset or an internal mechanism for offset correction. Either way, in this test this sensor clearly performs the best.

The next graph shows the effect of dynamic offset correction on the digital sensors. My sensor clearly benefits from this: it stays closest to zero of the three sensors. The dIMU does not benefit from dynamic offset correction. As a matter of fact, its performance is worse than without it. This makes me believe that there is some internal offset correction in this sensor that interferes with the software offset correction I added. I conclude that you do not want to use dynamic offset correction with this sensor.

This is all for this time. Next time I will dive deeper into the accelerometer of the dIMU. Again I will compare this one to two other accelerometers.

 

 

 

In a previous post I showed how the filter for the IMU sensor combines both gyro and accelerometer data to get drift-free, low-noise information about the robot's orientation in space. The filter performs a second function: it translates sensor output to a world frame of reference. In this post I’ll explain what I mean by that and how the filter works in this respect.

Body frame of reference versus the world frame of reference

Suppose you are standing upright facing north when somebody next to you asks you to turn right 180 degrees. After you have turned you’ll be facing south. Clear enough. Now suppose you are lying on the ground, face down, head pointing north. If you are asked again to turn right 180 degrees by the same person, you will have to consider two options regarding the meaning of the word right. The first option is that right should be in reference to your body. If you turn like this you’ll end up face up while still pointing north. The second option is to treat right in reference to the person who asked the question. If you turn like that you’ll end up still facing down but with the head pointing south.

The same goes for robots and the sensors they carry. Your robot might have a gyro that indicates that the robot turns clockwise, but if the robot is upside down you’ll see it turning counter-clockwise. This is because the robot and the gyro attached to it have a different reference than you have.
To distinguish between the two references we call the first one the body frame of reference and the second the world frame of reference. The world frame of reference is supposed to be inertial, a global frame that everything can relate to. Words like up, down, north and south are related to the world frame of reference. The body frame of reference belongs to the robot and the sensors. When the robot moves, its body frame of reference moves along with it. Robot commands like drive forward are always with respect to the body frame of reference. Likewise, data from the sensors are also expressed in the body frame of reference.
In robotics you’ll frequently have to translate information from one frame of reference into the other, for example when you calculate the position of the robot (world frame of reference) from data from the tachometer (body frame of reference). The process of translating information from one frame to another is called transformation.

Transformations

A transformation requires the use of trigonometry. Suppose again that you want to calculate the position of the robot (world frame of reference) from data from the tachometer (body frame of reference). Let us assume that the tachometers indicate that the robot has travelled straight forward for 30 cm. In terms of the body frame it is said that it moved 30 cm over the x-axis and 0 cm over the y-axis. Suppose the robot was facing east when it travelled, and you have called the east-west axis the y-axis, as is customary. It is clear then that in the world frame of reference the position of the robot increased by 30 cm along the y-axis and maintained its position along the x-axis. In mathematical terms the transformation from body frame to world frame is:

Xw = Xb * cos(a) - Yb * sin(a)
Yw = Xb * sin(a) + Yb * cos(a)

Here the parameter a stands for the angle between the robot (or better, the body frame of reference) and the world. In the above example a=90, making cos(a)=0 and sin(a)=1. As a result the x and y values are swapped in the transformation.
Transforming sensor output to the world frame of reference is the second function of the IMU filter. However, it does so in a 3D space and not in a 2D space as in the example.
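For the 2D case, the two formulas above fit in a small plain-C function (the angle is given in degrees and converted to radians for the trigonometric functions):

#include <math.h>
#include <stdio.h>

// Transform a displacement measured in the body frame (xb, yb) to the world
// frame, given the angle a (in degrees) between the body frame and the world.
void bodyToWorld(float xb, float yb, float aDegrees, float *xw, float *yw)
{
  float a = aDegrees * 3.14159265 / 180.0;
  *xw = xb * cos(a) - yb * sin(a);
  *yw = xb * sin(a) + yb * cos(a);
}

int main()
{
  float xw, yw;
  bodyToWorld(30, 0, 90, &xw, &yw);  // the example: 30 cm forward while facing east
  printf("world displacement: x=%.1f cm, y=%.1f cm\n", xw, yw);
  return 0;
}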

The preferred way to model a transformation mathematically is by using matrices and vectors.

This is the preferred way as it offers a convenient and short way to describe both the transformation and the sensor output. Without going into too much detail: the transformation is stored in a transformation matrix of 3 by 3 elements, and the sensor output is stored in a vector of 3 elements. To transform a vector from the body frame of reference to the world frame of reference one executes a matrix multiplication on the body vector using the transformation matrix. In mathematical terms:

Vw = Vb X T

Where T stands for transformation matrix. To go from world to body the formula is:

Vb = Vw X Tinv

Where Tinv represents the inverse matrix of T.
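In code, Vw = Vb X T comes down to three dot products, treating the sensor output as a row vector. A plain-C sketch:

// Transform a body-frame vector to the world frame with a 3x3 transformation
// matrix, following Vw = Vb X T as written above.
void transformToWorld(float vb[3], float T[3][3], float vw[3])
{
  for (int col = 0; col < 3; col++)
    vw[col] = vb[0] * T[0][col] + vb[1] * T[1][col] + vb[2] * T[2][col];
}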

Internally the filter maintains such a transformation matrix. The robot's current attitude can be calculated from this transformation matrix.
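How attitude is read from such a matrix depends on the axis conventions used. A common choice, shown here only as a sketch and not necessarily the convention of this filter, is:

#include <math.h>

// Pitch, roll and yaw (in radians) from a direction cosine matrix C,
// following the widely used aerospace Z-Y-X convention.
void dcmToEuler(float C[3][3], float *pitch, float *roll, float *yaw)
{
  *pitch = asin(-C[2][0]);
  *roll  = atan2(C[2][1], C[2][2]);
  *yaw   = atan2(C[1][0], C[0][0]);
}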

Last week I spent my evenings redesigning my holonomic platform. It had to be wider in order to get the center of gravity (relatively) lower. I also wanted to gear it up to make it more agile. And I wanted the brick to be better accessible, especially the USB port. Two other demands remained from Sidbot: the wheel angle should be adjustable so that the robot can be used both on a flat surface and on top of a large ball, and it also had to be sturdy.

 

As it had to be sturdy I decided that the motors should be fixed to the frame. Sidbot had its motors adjustable to make the wheel angles adjustable. The wheels have to be adjustable on the new robot as well, so the hinge point had to be between motor and wheels, somewhere in the gear train. I tried different designs and gears but I always ended up with grinding gears. At last I ended up using a 5×7 liftarm to house part of the gear train. This effectively stopped the grinding but resulted in a very wide wheel housing as well. This is not so pretty, so I’m still trying to improve this part. However, I now have a 2:1 gear ratio. With a motor speed of 120 RPM and a wheel diameter of 48 mm this gives the robot a top speed of 30 cm per second.

The frame of the robot consists of two triangles stacked vertically. The triangles are made of three 11-stud liftarms connected at the ends with 3- and 4-stud liftarms. This makes a very rigid frame; the brick lies on top of it, making it easily accessible. The motors are mounted horizontally with the flat side down. This gives the robot width and also the ground clearance that is needed when riding on a ball. To prevent torsion between motor and frame I made a housing of L-shaped liftarms for the motors.
I used light bluish gray and dark bluish gray for the color scheme, as these are the colors the motors and brick are made of. The result is a bit dull but still rather nice looking. It resembles a Star Wars-like vehicle. Maybe I should mount some laser guns on top.

The resulting robot does meet all my design specifications. But I have not been able to test it yet as I’m one 40-tooth gear short. I hope to get it this week.

The robot still needs a name. If you have a good suggestion you can post it in the remarks. There is a nice prize for the winner: a second-hand HiTechnic gyro sensor. Submit your entry before November 2011.

Here is a quick post with some graphs that show the effect of the filter I use for my IMU.

You might have seen a previous post where I show you that my sensor is capable of detecting a grain of sugar under the NXT. If you haven’t seen it, check the post and video here. A few months ago I changed from RobotC to leJOS, and now I have rewritten the sensor drivers and the filter in Java.

The leJOS developers are currently working on a very nice logging feature that allows users to display data from the NXT in real time on the PC. I got hold of the beta code and tested it without any problems. I used the NXTDataLogger (this is the name of the Java class) to make the inner workings of the IMU filter visible.

The IMU sensor combines a 3-axis gyroscope and a 3-axis accelerometer. The filter reads the output from both sensors, combines their signals in a smart way and provides stable information about the orientation (yaw, pitch and roll) of the robot.

An accelerometer can provide tilt data (pitch and roll), as can be seen in the first graph. This sensor however has two serious drawbacks. First, it is quite noisy; that’s why the graph lines seem so hairy. Second, it does not distinguish tilt from acceleration, so the output from this sensor can have different causes. This is why you cannot make a balancing robot with just an accelerometer.
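For reference, here is a plain-C sketch of how pitch and roll can be estimated from accelerometer readings expressed in g; the sign and axis conventions are just one common choice, not necessarily those of my filter.

#include <math.h>

// Estimate tilt (pitch and roll, in degrees) from accelerometer readings in g.
void accelToTilt(float ax, float ay, float az, float *pitch, float *roll)
{
  *pitch = atan2(-ax, sqrt(ay * ay + az * az)) * 180.0 / 3.14159265;
  *roll  = atan2(ay, az) * 180.0 / 3.14159265;
}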

A gyroscope is far less noisy; that is why the second graph shows smooth lines. It is also not affected by acceleration. However, this sensor too has its particularities. First, it doesn’t measure tilt, it measures rate of turn. This can only be translated into tilt if the starting condition is known. To get the tilt at any time you have to integrate all the readings and add the starting condition. In the process small errors are introduced, and in the long run they add up to a large error. The green line in the graph shows this effect. At about 42 seconds the sensor rotates faster than its maximum range (>2000 degrees/second); it then gives faulty readings, and these are integrated into the calculated tilt. As a result the line ends higher than it started, even though the sensor was turned back to its original position. The second effect that makes a gyro hard to use in long runs is called drift. Due to temperature changes or voltage drops the base level of a gyroscope changes. This means that after some time the gyro seems to be turning slowly while it is actually stationary. This effect is well visible in the blue line, which goes up slowly. This effect cannot be eliminated by calibration, unless you are able to calibrate the sensor during the run.

The filter is like a PI-controller. It uses the data from the gyroscope as the basis of its output; the lines in the third graph, which shows the output of the filter, are smooth because of this. It uses the data from the accelerometer to correct the output for any errors that have built up, but it does so slowly in order to keep the noise out of the output signal. The P-value of the PI-controller corrects for errors in the gyro signal. It also makes the filter useful when starting conditions are unknown. The filter also sums past errors to get an I-value; this value is used to correct for drift in the gyro data. The result is a graph with smooth, drift-free lines.
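To make the idea concrete, here is a one-dimensional sketch of such a filter in plain C. The gains kp and ki are arbitrary illustration values, not the ones used in the real filter, which works on a full 3D rotation matrix.

// One-dimensional PI-style complementary filter: the gyro rate drives the
// angle, the accelerometer angle slowly corrects it (P) and the accumulated
// error corrects for gyro drift (I).
typedef struct
{
  float angle;     // filtered angle
  float errorSum;  // accumulated error
} Filter1D;

void filterUpdate(Filter1D *f, float gyroRate, float acclAngle, float dt)
{
  float kp = 0.02, ki = 0.001;
  float error = acclAngle - f->angle;                           // accelerometer disagrees with the gyro
  f->errorSum += error * dt;                                    // I: corrects slow drift
  f->angle += (gyroRate + kp * error + ki * f->errorSum) * dt;  // gyro is the basis of the output
}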

Two final notes. First, the green line (the yaw line) is not drift-free. This is because an accelerometer cannot provide information about orientation in the XY-plane; you need a compass sensor for that. Second, the range is expressed as the cosine of the angle, so 1 corresponds to zero degrees and 0 corresponds to 90 degrees.
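So to read the third graph in degrees you would take the arccosine of the plotted value, for example:

#include <math.h>

// Convert a value expressed as the cosine of an angle back to degrees.
float cosineToDegrees(float c)
{
  return acos(c) * 180.0 / 3.14159265;
}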

I totally forgot to post the video I made of the IMU sensor.

This clip shows how accurate the IMU sensor can be. This IMU sensor combines a 3-axis gyro and a 3-axis accelerometer. The signals from both sensors are combined using a nonlinear complementary filter to get an output signal that has low noise and is drift-free. (Well, actually it is only drift-free over two axes; it still drifts over the Z-axis.) This home-made sensor will show a blue light when it is horizontal within a 0.5 degree margin. Otherwise the light will turn red. The full color LED is a BlinkM.
You’ll find more about this sensor in my previous posts.

Here is the code for the filter that calculates attitude from gyro and accelerometer. The functional bits of the code are well commented in the code itself. The main structure of the code will be described in this post. A practical description of the filter can be found in this article. The theory is in this bunch of articles.

The filter loops, giving frequent updates of the estimated attitude of the robot. Therefore the code is in one endless loop. The faster the filter loops the better, but the execution speed is limited due to hardware limitations and other processes that need CPU time. To preserve time for other processes, the iteration speed of the program is controlled. At the start of the program the sensors and data structures need to be initialized. Hardware initialization is done in a separate function called init. This concentrates all hardware-dependent code into one function and keeps the main function easily reusable, even if you have different hardware.
As stated before, the loop contains the filter. Aside from the code that implements the filter there is code to supply information to the user. These parts of the code are all commented out. If they are useful to you, you can uncomment them. Even then you can switch these parts of the code on or off by changing a true to false in the if statement surrounding the code. There are three ways to supply information to the end user: data can be logged to Excel, displayed on the NXT screen, or a full color LED (BlinkM) can be triggered if certain conditions are met.

If you find the code useful or informative please press the “like” button. This might encourage me to make more posts like this.

#pragma systemfile
#pragma autostarttasks;

// this defines the iteration time of the filter in msec
#define IMU_speed 25

#include "math.c"
#include "matrixDirectives.c"
#include "I2C.c"
#include "IMU_acclMatrix.c"
#include "IMU_gyroMatrix.c"
#include "IMU_blinkM.c"
/*
// Optional code to log the filter state in an Excel file
#define NXT2excelMaxData 8
#include "nxt2excel.c"
*/

// this struct will hold attitude, position (Todo) and speed (Todo) of the robot
typedef struct
{
  matrix C;
  bool initialized;
} tPosition;

tPosition position;

void init();
void normalizeM(matrix &in);



task main()
{
  float duration;
  matrix  U, T;
  vector I, P, Rate;
  long lastTime, start, wait;
  bool home, oldhome;
  // PI controller parameters, these are used to tune the filter
  float kp=0.0005,ki=0.005, id=0.99;

  // Initialize the sensors
  init();

  // I and P are used in the first iteration before being set; start them at zero
  for (int Rcol=0;Rcol<3;Rcol++)
  {
    I[Rcol]=0;
    P[Rcol]=0;
  }

  lastTime=nPgmTime;
  eraseDisplay();

  // initialize cosine matrix, the cosine matrix holds the attitude of the robot
  position.C[0][0]=1;
  position.C[0][1]=0;
  position.C[0][2]=0;
  position.C[1][0]=0;
  position.C[1][1]=1;
  position.C[1][2]=0;
  position.C[2][0]=0;
  position.C[2][1]=0;
  position.C[2][2]=1;

  // iterate forever, but halt if one of the sensors returns an error
  while (I2C_gyro.error==NO_ERR && I2C_accl.error==NO_ERR)
  {
    start=nPgmTime;
    alive();

    // get sensor readings, 
    //   gyro readings (in rad/sec) are stored in a vector (array of 3 elements) called gyro.rate
    //   acceleration readings (in g) are stored in a vector called accl.g
    Gyro_getGyroBiased();
    Accl_getAcclBiased();

    // for better filter output the exact time for each iteration is measured
    duration=(nPgmTime-lastTime)/1000.0;
    lastTime=nPgmTime;
    // nPgmTime wraps around when reaching maxint, correct for this
    if (duration<0) duration=IMU_speed/1000.0; 
    
    /*
    // Optional code to log the filter state in an Excel file
    if (false)
    {
	    for (int Rcol=0;Rcol<3;Rcol++)
	    {
		    NXT2excelAddValue(1,gyro.rate[Rcol]*1000);
		    NXT2excelAddValue(2,accl.g[Rcol]*1000);
		    NXT2excelAddValue(3,duration*1000.0);
		    NXT2excelAddValue(4,I[Rcol]*1000);
		    NXT2excelAddValue(5,P[Rcol]*1000);
		      for (int ii=0;ii<3;ii++)
		        NXT2excelAddValue(6+ii,position.C[Rcol][ii]*1000.0);
		    NXT2excelSendMessage();
	    }
    }
    */


    // scale the turning rate (rad/sec) to a change in attitude (rad)
    VscaleVS(gyro.rate,duration,Rate);

    // correct the gyro signal for offset (I) and accumulated errors (P), but do it slowly (ki, kp)
    // at the same time, convert it to a skew symmetric matrix
    // for speed, no matrix functions are used here
    U[2][1]=gyro.rate[0]*duration-I[0]*ki-P[0]*kp;
    U[1][2]=-U[2][1];
    U[0][2]=gyro.rate[1]*duration-I[1]*ki-P[1]*kp;
    U[2][0]=-U[0][2];
    U[1][0]=gyro.rate[2]*duration-I[2]*ki-P[2]*kp;
    U[0][1]=-U[1][0];

    
    // transform the change in attitude with respect to the robot axes into a change in attitude with respect to the global axes
    // for speed, no matrix functions are used here
    // matrix equivalent: MproductMM(position.C,U,T);
    T[0][0]=position.C[0][1]*U[1][0]+position.C[0][2]*U[2][0];
    T[0][1]=position.C[0][0]*U[0][1]+position.C[0][2]*U[2][1];
    T[0][2]=position.C[0][0]*U[0][2]+position.C[0][1]*U[1][2];
    T[1][0]=position.C[1][1]*U[1][0]+position.C[1][2]*U[2][0];
    T[1][1]=position.C[1][0]*U[0][1]+position.C[1][2]*U[2][1];
    T[1][2]=position.C[1][0]*U[0][2]+position.C[1][1]*U[1][2];
    T[2][0]=position.C[2][1]*U[1][0]+position.C[2][2]*U[2][0];
    T[2][1]=position.C[2][0]*U[0][1]+position.C[2][2]*U[2][1];
    T[2][2]=position.C[2][0]*U[0][2]+position.C[2][1]*U[1][2];

    // add the change in attitude to the attitude to get an updated attitude
    MsumMM(position.C,T,position.C);


    // Renormalize matrix
    // due to small errors the matrix gets corrupted over time (its axes are no longer perpendicular)
    // this function corrects this
    normalizeM(position.C);

    // correct the attitude using tilt info from the accelerometer
    // but only if there is no or little acceleration, otherwise tilt info cannot be trusted
    if (abs(sqrt(accl.g[0]*accl.g[0]+accl.g[1]*accl.g[1]+accl.g[2]*accl.g[2])-1.0)<.2)
    {
      // Calculate the difference (error) between the calculated attitude and the accelerometer
      // and use this in a proportional feedback (P)
	    P[0] = (position.C[2][1]*accl.g[2]) - (position.C[2][2]*accl.g[1]);
	    P[1] = (position.C[2][2]*accl.g[0]) - (position.C[2][0]*accl.g[2]);
	    P[2] = (position.C[2][0]*accl.g[1]) - (position.C[2][1]*accl.g[0]);
	    // Add the error to the accumulated error and use this in an integral feedback (I)
	    I[0]=I[0]*id+P[0]*duration;
	    I[1]=I[1]*id+P[1]*duration;
	    I[2]=I[2]*id+P[2]*duration;
	    }
    else
    {
    P[0]=0;
    P[1]=0;
    P[2]=0;
    }

    /*
    // optional code to drive the LED in the sensor
    if (true)
    {
      // test if the robot is in its starting attitude
      if ( abs(position.C[2][0])<.009 && abs(position.C[2][1])<.009)
        home=true;
      else
        home=false;
      // change the color of the LED if needed
      if (home != oldhome)
      {
        if (home)
          blinkM_setColour(0 , 0, 128);
        else
          blinkM_setColour(64 , 0, 0);
      }
      oldhome=home;
    }
    */
    
    
    // optional code to display sensor and filter data
    if (true && time1[T1]>200)
    {
      eraseDisplay();
      nxtDisplayTextLine(7, "%5d",wait);
      //displayM(position.C);
      //displayV(accl.g,1);
      //displayV(gyro.rate,0);
      ClearTimer(T1);
    }
    //while (nNxtButtonPressed==-1);
    //while (nNxtButtonPressed!=-1);


    // wait for next iteration
    // the program does not assume a constant execution speed, 
    // instead it measures every iteration, and waits the exact amount of time to guarantee a constant iteration speed
    wait=IMU_speed-(nPgmTime-start);

    if (wait<0)
    {
      wait=0;
      //PlaySound(soundException);
    }
    wait1Msec(wait);
  }

}


void init()
{
    // configure port;
  SensorType[S3]=sensorI2CCustom;
  nI2CRetries=3;
  position.initialized=false;
  accl.initialized=false;
  gyro.initialized=false;

  // configure blinkM;
  I2C_blinkM.port=S3;
  I2C_blinkM.address=0x12;
  blinkM_setColour(0 , 4, 0);


  // configure accl;
  I2C_accl.port=S3;
  I2C_accl.address=0xa6;
  Accl_init();
  Accl_getBias();
  accl.initialized=true;


  // configure gyro;
  I2C_gyro.port=S3;
  I2C_gyro.address=0xD0;
  Gyro_init();
  Gyro_getBias();
  gyro.initialized=true;


  if (I2C_gyro.error || I2C_accl.error)
  {
    nxtScrollText("IMU: %3d, %3d",I2C_gyro.error , I2C_accl.error);
    blinkM_setColour(63 , 0, 0);
    PlaySound(soundException);
    wait1Msec(2000);
  }
  blinkM_setColour(0 , 0 , 0);
  position.initialized=true;

}

//normalizes a matrix (takes 2.87 msec)
void normalizeM(matrix &in)
{
  float error, renorm;
  vector vx,vy, xor, yor, zor;


  VgetRowMS(in, 0,vx);
  VgetRowMS(in, 1,vy);
  dotProductVVS(vx, vy,error); // eq.18
  if (abs(error)>0.05)
  {
    VscaleVS(vy,error/-2.0, xor);
    VsumVV(vx, xor, xor); //eq.19

    VscaleVS(vx,error/-2.0, yor);
    VsumVV(vy, yor, yor); //eq.19

    VcrossProductVV(xor,yor, zor); //eq.20

    dotProductVVS(xor, xor,error);
    renorm = .5 *(3 - error); //eq.21
    for ( int Rcol=0; Rcol<Msize;Rcol++)
      in[0][Rcol]=renorm*xor[Rcol];
    dotProductVVS(yor, yor,error);
    renorm = .5 *(3 - error); //eq.21
    for ( int Rcol=0; Rcol<Msize;Rcol++)
      in[1][Rcol]=renorm*yor[Rcol];
    dotProductVVS(zor, zor,error);
    renorm = .5 *(3 - error); //eq.21
    for ( int Rcol=0; Rcol<Msize;Rcol++)
      in[2][Rcol]=renorm*zor[Rcol];
  }
}

At last.

I finished programming the code for my IMU sensor. It corrects the gyro reading for offset and integration errors by means of accelerometer data. For this I use a nonlinear complementary filter based on a direction cosine matrix. This article explains how the filter works and, even better, it explains all the matrix math that is used to describe the filter. Basically, it explains everything that you need to know if you want to make such a filter yourself. But you must be prepared to read the article several times.

So now I know the position, speed and attitude of my robot based on gyro and accelerometer. This is called an Inertial Navigation System or INS. Thanks to the filter the attitude info does not get worse over time, but I don’t know yet how good it is. A very basic test showed that my robot knows the difference between lying flat on the ground and lying on a grain of sugar. What’s even better, it still knows this after two or five minutes. And this is the holy grail I was after all this time. So attitude information around the X- and Y-axis is quite good. I cannot correct for drift around the Z-axis using the accelerometer, so this still has some drift. If I needed to I could use a compass to correct for this. But as this is not really necessary for my robot I think I’ll leave that be. I don’t know yet how precise the position info is. I know it must still suffer from integration errors, but that doesn’t matter much to me.

I also need actual attitude data as it will be the input for a PID controller that keeps the robot balanced. Therefore it is good to know how fast the filter is. Using RobotC and fast I2C (SensorType[S3]=sensorI2CCustomFastSkipStates) I get a reading every 15 msec. But I also get an error from time to time, and the system cannot recover from it. When I use the standard I2C speed I get a reading every 24 msec and a reliable system. Of course I will have to use a lower frequency to make room for other processes. I still aim for a heartbeat of 50/33 Hz for my robot.

“I know why you’re here, Neo. I know what you’ve been doing… why you hardly sleep, why you live alone, and why night after night, you sit by your computer.” from The Matrix.

As you might know from my previous posts there is quite a bit of math involved in computing speed, position and attitude from IMU sensor data. Thus far I have dealt with translating robot-related measurements to world-related measurements and with offset correction. But there is more to come: correction for the sensor position on the robot and, most important, sensor fusion. No matter what technique I will use for this, all documentation is in matrix math. This makes sense, as matrix math is very compact. Therefore I decided to rewrite my IMU code in matrix math. I did this before when I built my Kalman filter, but I abandoned it in the end. Back then I was using 2*2 matrices; now I am working in 3 dimensions and my matrices are 3*3. The bigger the matrices, the more compact matrix math is compared to component math.

So, I’m drawn back into the matrix. Last night I converted my library of matrix functions to work with 3*3 matrices. The next few nights I will convert the functionality I have already built for the IMU. Only then can I continue with sensor fusion. It really is two steps forward, one step back.