This time I’ll show you the first results of incorporating a 3-axis magnetometer (or compass) into my IMU filter.
A quick round-up first. I have an IMU sensor that houses a 3-axis accelerometer and a 3-axis gyro sensor. Well, actually I have two: one that I built myself and one from Dexter Industries. To use this type of sensor I have written a non-linear complementary filter (NLC-filter) whose job it is to fuse the output of the two sensors into a drift-free, low-noise attitude signal. The filter tells me the tilt over the x-axis and the tilt over the y-axis. It also tells me the heading, but the heading signal is not drift-free. The reason for this is that the accelerometer can provide tilt data but it cannot provide heading data. For this you need a compass.
A compass sensor should really be called a magnetometer, because it doesn't give you a heading the way a sailor's compass does. Instead it gives you the strength of the magnetic field over three axes. From this one can calculate a heading.
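For a sensor held level, the heading follows from the two horizontal field components. A minimal sketch (mx and my are hypothetical variables holding the measured field along the x- and y-axis; the sign convention depends on how the sensor is mounted):

double heading = Math.toDegrees(Math.atan2(my, mx)); // angle of the field in the XY-plane
if (heading < 0) heading += 360; // map to the 0..360 degree compass range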
Incorporating the magnetometer data into the filter was technically not very difficult, although it took me a long time to figure out how to do it. The main thing was to tune the PI-controller for the compass. I'm not finished with this, but I can show you some first results of the filter anyway. The measurements were taken with the setup that you see in the picture. The NXT is mounted on a platform some 20 cm above the ground to minimize effects of the steel-reinforced concrete floor. It can rotate endlessly, as it is mounted on a turntable and driven by an NXT motor and a worm wheel. The worm wheel in combination with a regulated motor gives a very stable rotation speed. The sensors are mounted 10 cm from the NXT to prevent disturbances from the NXT itself. Measurements are sent over Bluetooth to the PC in real time.
This first graph was taken while the NXT was stationary. It shows how stable the filter signal is. Note that the scale of the vertical axis is 0.1 degree, the axis ranging from -4 to 0 degrees. The roll and pitch signals are very stable; during the 15 minutes of the test these signals stayed within a bandwidth of 0.4 degrees. This is all that is left of the gyro drift. The yaw signal (which is controlled by the compass) is less stable: it floats between -0.8 and -2.0 degrees. But in the yaw signal too, the gyro drift is eliminated to a large extent. I am very, very pleased with these noise levels, as the bandwidth of the noise from an accelerometer alone is around 5 degrees.
The second graph compares the output signal of the compass with that of the filter. This time the NXT was spinning around the Z-axis; the graph shows the results of one full spin. You basically see two descending lines, indicating a constant movement of the sensors. The signal from the compass, in blue, does not lie exactly over the signal from the filter; there is a slight difference in the heading that the two report. The reason for this is that the setup of the NXT is not exactly level. The compass signal suffers from this: its line is not really straight but follows an S-curve. The filter signal, on the other hand, is tilt-compensated and almost straight. The smoother red line also indicates a lower noise level.
This last image shows the effect of a magnetic disturbance on the compass sensor and on the filter. While taking this graph I slowly moved a headphone alongside the sensor.
The compass suffers from this disturbance; at the peak the signal is over 10 degrees off. The filter, on the other hand, is not affected. It detects the disturbance and ignores the compass signal while it is disturbed. During this time the filter relies solely on the gyro to keep track of heading.
I think the low noise levels of the filter are nice, but the true advantages are in the tilt compensation and the robustness to magnetic disturbances. The filter is, I think, equally robust to disturbances in acceleration; this, however, I cannot show you using this setup.
I have included a compass in my INS filter. It is now drift-free and accurate over all three axes. It is also very robust to magnetic disturbances; the compass itself is not. It gives me great pleasure to see the first results.
Tuning the filter is difficult though. I have used the NXJChartingLogger, which will be part of Lejos 0.9.1, a lot. The filter provides all kinds of output that is displayed in real time on the PC. Different parameters are logically grouped into different graphs. The graphs provide information on the state of the filter and thus help in tuning.
I have also included a method to introduce errors in the sensor data. This enables me to examine the performance of the filter.
As a last thing, I have made the filter calculate its own initial state. With this the filter settles accurately (< 0.5 degrees) within half a second.
The performance dropped to 97 Hertz with three sensors attached, but I haven't optimized the compass code yet.
In later posts I'll demonstrate the performance of the filter.
By request I will publish the Lejos software for the dIMU sensor.
The software consists of drivers for the Dexter Industries IMU sensor, programs to calibrate the sensor, and the IMU filter that I wrote about on this blog before. The filter can be used with other IMU sensors as well. Also included is the sample program that uses this filter; it is the very same program I used in the videos published in a previous post. There is no formal support and no warranty. If you are going to use it you will have to rely on your own wits, the javadoc and this blog.
Download and install
You can download the software from this link. After downloading you will have to extract it, keeping the directory structure intact. Make sure your compiler can find the code.
Using the IMU drivers
The dIMU consists of two sensors, a gyroscope and an accelerometer; each has its own driver. The gyroscope driver is called L3G4200D and the driver for the accelerometer is MMA7455L. (I use the technical names of the sensors as driver names.) On top of these drivers I made user interfaces that allow you to examine, configure and calibrate the sensors. The user interfaces are called L3G4200D_E and MMA7455L_E respectively. You can use these programs as drivers as well. This gives your programs access to the additional functionality, but it needs the NXT screen and might result in larger programs. There is a sample program included that gives access to the user interfaces; it is called testDIMU.
This is how you enable the drivers in your code:
SensorPort.S4.i2cEnable(I2CPort.HIGH_SPEED);
MMA7455L accel = new MMA7455L(SensorPort.S4);
L3G4200D gyro = new L3G4200D(SensorPort.S4);
The first line instructs Lejos to use high speed I2C. The sensors support this.
This is how you get data (rate, acceleration and tilt) from the sensors.
float[] rate = new float[3];
gyro.fetchAllRate(rate);
float[] acceleration = new float[3];
accel.fetchAllAccel(acceleration);
float[] tilt = new float[3];
accel.fetchAllTilt(tilt);
As you can see, the drivers return data from all three axes in one call. If you just need data from one axis you can get it from the arrays; the order is X, Y, Z. The gyro returns degrees per second by default. The accelerometer returns milli-g for acceleration and degrees for tilt, also by default. If you want to use other units in your programs you can change the defaults like this.
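Something along these lines; setTiltUnit() is the method mentioned later in this post, while the other method and unit names here are my guesses, so check the javadoc for the exact names:

gyro.setRateUnit(RateUnits.RPS); // radians per second (assumed name)
accel.setAccelUnit(AccelUnits.MS2); // meter per second^2 (assumed name)
accel.setTiltUnit(TiltUnits.RADIANS); // radians (assumed name)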
This changes the default units to radians per second, meters per second squared and radians, respectively. Other units are available; check the javadoc. The default unit can be overridden per data request by specifying the desired unit as the second parameter of the fetchAll... methods.
Configuring the sensors
The gyro can be configured for dynamic range and sample rate using the setRange and setSampleRate methods. As a rule you should select the lowest value that your application allows for; this gives data of the best possible quality.
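For example (the parameter encoding here is my assumption, so check the javadoc; the L3G4200D hardware itself supports ranges of 250, 500 and 2000 degrees/second and sample rates from 100 to 800 Hz):

gyro.setRange(250); // smallest dynamic range the L3G4200D offers (assumed encoding)
gyro.setSampleRate(100); // lowest sample rate (assumed encoding)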
The accelerometer cannot be configured; I found little use for that.
Calibrating the sensors
The gyro of my dIMU doesn't really need calibrating; however, there is the possibility to do so. Calibration is started by calling gyro.calculateOffset(). During calibration the sensor should remain motionless. Calibration settings of the gyro are not stored, so they are lost when your program terminates. (Storing calibration settings of gyro sensors is of no use anyway, as the calibration values depend on environmental factors and are not stable over time.)
The accelerometer does need calibration. The user interface driver provides functionality to calibrate the sensor and to store the calibration settings. The (base) driver looks for stored calibration settings upon initialization and loads them automatically when available. Calibration settings of the accelerometer are stable over time, so you'll need to do this just once. Each of the three axes has to be calibrated separately. To calibrate an axis, one has to point it straight up first and hold it still while the calibration routine collects data samples. Then the axis has to be pointed straight down and held still for some time. Follow the on-screen instructions and do not forget to save the settings after calibration.
Using the IMU filter
The IMU filter can be used with any three-axis gyro and any three-axis accelerometer, as long as their drivers implement the RateData and TiltData interfaces. This is how you initialise the filter:
NLCFilter attitude = new NLCFilter(gyro, accel);
attitude.start();
The two parameters to the filter constructor are the gyro driver and the accelerometer driver. One could leave out the accelerometer; the filter will still work, but its values will drift over time. The second line of code starts the filter. The filter needs 2 to 5 seconds to settle at start-up, so you need to keep the sensor motionless and more or less level for a few seconds. You can find out whether the filter is still settling with the Initializing() method.
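A minimal way to wait for the filter to settle after start(), assuming Initializing() returns true while settling is still in progress:

while (attitude.Initializing()) Thread.yield(); // keep the sensor still and level meanwhile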
The IMU filter keeps track of the attitude, or pose, of your sensor/robot. You can query the roll, pitch and yaw angles this way:
float[] tilt = new float[3];
attitude.fetchAllTilt(tilt);
By default these methods return angles in radians. You can change this by setting a different unit with the setTiltUnit() method.
You can also use the filter to transform data from the body frame to the world frame. This is useful if another sensor returns data that needs to be corrected for the robot's current attitude. The next example transforms a distance returned by an ultrasonic sensor to world coordinates. The example assumes the US and IMU sensors are aligned and that the US sensor measures along the X-axis.
Matrix usBody = new Matrix(1, 3);
Matrix usWorld = null;
UltrasonicSensor us = new UltrasonicSensor(SensorPort.S1);
usBody.set(0, 0, us.getDistance());
usWorld = attitude.transformToWorld(usBody);
The matrix usWorld now contains the distance from the sensor to the detected object along the X, Y and Z axes.
Configuring and tuning the IMU filter
By default the IMU filter updates as often as possible; its update frequency is over 100 Hertz. To free computer resources one can lower the update frequency with the setFrequency() method. The filter will try to match the specified frequency. A parameter value of 0 will run the filter at maximum speed.
The filter can be tuned to increase the quality of its output. I advise you not to tune the filter until you are familiar with it and understand its inner workings. Tuning consists of altering the P and I values of its internal PI controller. The I value takes care of gyro drift cancellation, and the P value controls how fast the attitude is corrected using tilt data from the accelerometer. The values can be modified with the setKp() and setKi() methods.
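As a sketch combining the two previous paragraphs (the frequency is an arbitrary choice; the Kp and Ki values are the ones used in the RobotC version of this filter published elsewhere on this blog, not necessarily the Java defaults):

attitude.setFrequency(50); // 50 Hz leaves CPU time for other processes
attitude.setKp(0.0005f); // how fast accelerometer data corrects the attitude
attitude.setKi(0.005f); // gyro drift cancellation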
There are two ways the filter can assist you in tuning. First, it keeps track of the integral of absolute errors, or absI. This is a measure of the total error of the filter over time; the smaller the error (over a fixed period), the better the filter performs. Second, the filter allows you to send data over Bluetooth to the PC for examination. For this one has to use the NXJChartingLogger on the PC that is part of the Lejos distribution. You instruct the filter to send its state to the PC by supplying an NXTDataLogger object with the setDataLogger() method.
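Hooking up the logger might look like this (NXTDataLogger, Bluetooth and NXTConnection are standard Lejos classes from lejos.util and lejos.nxt.comm; setDataLogger() is the filter method named above):

NXTDataLogger log = new NXTDataLogger();
NXTConnection conn = Bluetooth.waitForConnection(15000, NXTConnection.PACKET);
log.startRealtimeLog(conn); // throws IOException if the connection fails
attitude.setDataLogger(log);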
Running the demo program
The demo program is called testIMU. At start-up of this program the sensor must be horizontal and motionless. The sensor is assumed to be aligned with the NXT brick, with the sensor plug facing in the same direction as the sensor ports. Once you see the wire frame you can start moving the sensor. The demo has four display modes:
Wire frame: shows a wire frame of the sensor on the NXT screen.
Rotation matrix: shows the content of the rotation matrix in which the current attitude is stored. The matrix is also used to convert body coordinates to world coordinates by matrix multiplication.
Roll, pitch, yaw: shows the roll, pitch and yaw angles of the sensor.
Update speed: shows the actual update speed of the filter.
You can browse through the display modes by using the arrow keys. The enter key resets the filter. The clip below shows the demo program in action in wire frame mode.
Some time ago I wrote about transformation matrices. This time I want to show you how to use them; you'll see how much you can benefit from them.
I wrote a small class that can draw a compass rose on the NXT screen. The class contains a collection of points that are the start and end points of the lines that make up the compass rose, and it has a method to draw these lines. Before they are drawn, the lines are transformed using the current attitude of the IMU sensor. The transformation is a matrix multiplication of the compass rose definition and the attitude matrix that the IMU filter maintains. This is done in line 4 of the code example below: R is the transformation matrix, rose is the compass rose definition. The function toScreen on the same line centers the image on the screen.
public void draw(Matrix R) {
    Matrix ss;
    Graphics g = new Graphics();
    ss = toScreen(R.times(rose)); // transform the rose by the current attitude, then center it
    for (int i = 0; i < elements; i++) {
        g.drawLine((int) ss.get(0, i * 2), (int) ss.get(1, i * 2), (int) ss.get(0, i * 2 + 1), (int) ss.get(1, i * 2 + 1));
    }
}
As you can see, the transformation matrix allows you to perform very complex functionality, three rotations, using a very simple function: times().
Below is a video of the application in action. You might want to play this video in HD as it is quite hard to see the details on the NXT screen.
This application just shows a compass rose, but the same technique can be used in a wide variety of applications. One such application could be to transform data from range sensors to world coordinates; this eases the process of mapping the environment. As the matrix maintained by the IMU filter is 3-dimensional, you can even make a 3D map.
Transformation matrices are also very useful for robots with holonomic wheels; there they can be used to transform robot speed into wheel speed, as sketched below.
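A sketch of that idea for a three-wheeled holonomic robot (the geometry and the Matrix usage are my assumptions, not code from this project): each wheel sits at distance R from the center, 120 degrees apart, driving tangentially, and contributes one row of the kinematics matrix.

double R = 0.1; // distance from robot center to each wheel, in meters (assumed)
double[] th = { 0, 2 * Math.PI / 3, 4 * Math.PI / 3 }; // wheel placements, 120 degrees apart
Matrix M = new Matrix(new double[][] {
    { -Math.sin(th[0]), Math.cos(th[0]), R },
    { -Math.sin(th[1]), Math.cos(th[1]), R },
    { -Math.sin(th[2]), Math.cos(th[2]), R } });
Matrix robotSpeed = new Matrix(new double[][] { { 0.2 }, { 0 }, { 0.5 } }); // vx, vy (m/s), omega (rad/s)
Matrix wheelSpeed = M.times(robotSpeed); // tangential speed of each wheel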
In a previous post I showed how the filter for the IMU sensor combines gyro and accelerometer data to get drift-free, low-noise information about the robot's orientation in space. The filter performs a second function: it translates sensor output to a world frame of reference. In this post I'll explain what I mean by that and how the filter works in this respect.
Body frame of reference versus the world frame of reference
Suppose you are standing upright facing north when somebody next to you asks you to turn right 180 degrees. After you have turned, you'll be facing south. Clear enough. Now suppose you are lying on the ground, face down, head pointing south. If the same person asks you again to turn right 180 degrees, you will have to consider two options regarding the meaning of the word right. The first option is that right is in reference to your body. If you turn like this you'll end up face up, with your head still pointing south. The second option is to treat right in reference to the person that asked the question. If you turn like that you'll end up still facing down, but with your head pointing north.
The same goes for robots and the sensors they carry. Your robot might have a gyro that indicates that the robot turns clockwise, but if the robot is upside down you'll see it turning counter-clockwise. This is because the robot and the gyro attached to it have a different reference than you have.
To distinguish between the two references we call the first one the body frame of reference and the second the world frame of reference. The world frame of reference is supposed to be inert, a global frame that everything can relate to. Words like up, down, north and south relate to the world frame of reference. The body frame of reference belongs to the robot and its sensors. When the robot moves, its body frame of reference moves along with it. Robot commands like drive forward are always with respect to the body frame of reference. Likewise, data from the sensors are also expressed in the body frame of reference.
In robotics you'll frequently have to translate information from one frame of reference into the other, for example when you calculate the position of the robot (world frame of reference) from tachometer data (body frame of reference). The process of translating information from one frame to another is called transformation.
Transformations
A transformation requires the use of trigonometry. Suppose again that you want to calculate the position of the robot (world frame of reference) from tachometer data (body frame of reference). Let us assume that the tachometers indicate that the robot has travelled straight forward for 30 cm. In terms of the body frame it is said that it moved 30 cm over the x-axis and 0 cm over the y-axis. Suppose the robot was facing east when it travelled, and you have called the east-west axis the y-axis, as is customary. It is clear then that in the world frame of reference the position of the robot increased by 30 cm along the y-axis and was maintained along the x-axis. In mathematical terms, the transformation from body frame to world frame is:

xw = cos(a) * xb - sin(a) * yb
yw = sin(a) * xb + cos(a) * yb

where the parameter a stands for the angle between the robot (or better, the body frame of reference) and the world. In the above example a = 90 degrees, making cos(a) = 0 and sin(a) = 1; as a result the x and y values are swapped in the transformation.
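The same computation as a tiny Java check of the formula (the variable names are mine):

double a = Math.toRadians(90); // robot faces east
double xb = 30, yb = 0; // 30 cm straight ahead in the body frame
double xw = Math.cos(a) * xb - Math.sin(a) * yb; // = 0 cm (up to rounding)
double yw = Math.sin(a) * xb + Math.cos(a) * yb; // = 30 cm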
Transforming sensor output to the world frame of reference is the second function of the IMU filter. However, it does so in 3D space, not in 2D space as in the example.
The preferred way to model a transformation mathematically is by using matrices and vectors, as this offers a convenient and short way to describe both the transformation and the sensor output. Without going into too much detail: the transformation is stored in a transformation matrix of 3 by 3 elements, and the sensor output is stored in a vector of 3 elements. To transform a vector from the body frame of reference to the world frame of reference, one executes a matrix multiplication on the body vector using the transformation matrix. In mathematical terms:
Vw = Vb X T
Where T stands for the transformation matrix. To go from world to body the formula is:
Vb = Vw X Tinv
Where Tinv represents the inverse matrix of T.
Internally the filter maintains such a transformation matrix. The robot's current attitude can be calculated from it.
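For instance, with C being the 3x3 transformation (direction cosine) matrix as a double[3][3], and assuming the common aerospace (ZYX) angle convention (an assumption on my part, not code taken from the filter), the three angles follow from the matrix elements like this:

double pitch = -Math.asin(C[2][0]); // rotation about the y-axis
double roll = Math.atan2(C[2][1], C[2][2]); // rotation about the x-axis
double yaw = Math.atan2(C[1][0], C[0][0]); // rotation about the z-axis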
Here's a quick post with some graphs that show the effect of the filter I use for my IMU.
You might have seen a previous post where I show you that my sensor is capable of detecting a grain of sugar under the NXT. If you haven't seen it, check the post and video here. A few months ago I changed from robotC to Lejos, and I have now rewritten the sensor drivers and the filter in Java.
The Lejos developers are currently working on a very nice logging feature that allows users to display data from the NXT on the PC in real time. I got hold of the beta code and tested it without any problems. I used the NXTDataLogger (this is the name of the Java class) to make the intrinsics of the IMU filter visible.
The IMU sensor combines a 3-axis gyroscope and a 3-axis accelerometer. The filter reads the output from both sensors, combines their signals in a smart way and provides stable information about the orientation (yaw, pitch and roll) of the robot.
An accelerometer can provide tilt data (pitch and roll), as can be seen in the first graph. This sensor however has two serious drawbacks. First, it is quite noisy; that's why the graph lines seem so hairy. Second, it does not distinguish tilt from acceleration, so the same output can have different causes. This is why you cannot make a balancing robot with just an accelerometer.
A gyroscope is far less noisy; that is why the second graph shows smooth lines. It is also not affected by acceleration. However, this sensor too has its particularities. First, it doesn't measure tilt but rate of turn. This can only be translated into tilt if the starting condition is known: to get the tilt at any time you have to integrate all the readings onto the starting condition. In the process small errors are introduced, and in the long run they add up to a large error. The green line in the graph shows this effect. At about 42 seconds the sensor rotates faster than its maximum range (>2000 degrees/second); it then gives faulty readings, and these are integrated into the calculated tilt. As a result the line ends higher than it started, even though the sensor was turned back to its original position. The second effect that makes a gyro hard to use in long runs is called drift. Due to temperature changes or voltage drops, the base level of a gyroscope changes. This means that after some time the gyro seems to be turning slowly even when it is stationary. This effect is clearly visible in the blue line, which goes up slowly. It cannot be eliminated by calibration, unless you are able to recalibrate the sensor during the run.
The filter is like a PI-controller. It uses the data from the gyroscope as the basis of its output; the lines in the third graph, which shows the output of the filter, are smooth because of this. It uses the data from the accelerometer to correct the output for any errors that have built up, but it does so slowly, in order to keep the noise out of the output signal. The P-value of the PI controller corrects for errors in the gyro signal; it also makes the filter useful when the starting conditions are unknown. The filter also sums past errors to get an I-value; this value is used to correct for drift in the gyro data. The result is a graph with smooth, drift-free lines.
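Reduced to a single axis, one iteration of that idea looks like this (a sketch with made-up variable names, not the actual filter code):

double error = accelTilt - tilt; // discrepancy with the accelerometer (P)
integral += error * dt; // accumulated past errors cancel gyro drift (I)
tilt += (gyroRate + kp * error + ki * integral) * dt; // gyro data is the basis of the output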
Two final notes. First, the green line (the yaw line) is not drift-free. This is because an accelerometer cannot provide information about orientation in the XY-plane; you need a compass sensor for that. Second, the range is expressed as the cosine of the angle, so 1 corresponds to zero degrees and 0 corresponds to 90 degrees.
I’m doing two projects at the same time.
First, there is Sidbot. It waits till I master Java threads.
Second, I'm building a holonomic or Killough pilot in Lejos. This project hangs on me not knowing how far the robot has travelled. This info is needed for the pilot to work with a navigator.
Actually, my main project now is learning Java. This goes well, but there is a lot to it.
A few weeks ago I ordered two more BlinkMs. These are I2C-driven full-color LEDs. I wrote a driver to address these LEDs. The LEDs are now mounted on each corner of Sidbot, so Sidbot can shine all colors to all sides of its triangular body. Each LED is driven individually, so each one can have its own color. I made a nice light show to test the driver.
I finished the drivers for my IMU too. The filter to fuse the signals is also finished, but it can't run in a separate thread yet.
I also have been on holidays and busy with work. But these matters don’t concern this blog.
I totally forgot to post the video I made of the IMU sensor.
This clip shows how accurate the IMU sensor can be. The IMU sensor combines a 3-axis gyro and a 3-axis accelerometer. The signals from both sensors are combined using a nonlinear complementary filter to get an output signal that has low noise and is drift-free. (Well, actually it is only drift-free over two axes; it still drifts over the Z-axis.) This home-made sensor shows a blue light when it is horizontal within a 0.5-degree margin; otherwise the light turns red. The full-color LED is a BlinkM.
You’ll find more about this sensor in my previous posts.
Here is the code for the filter that calculates attitude from gyro and accelerometer data. The functional bits are well commented in the code itself; the main structure of the code is described in this post. A practical description of the filter can be found in this article. The theory is in this bunch of articles.
The filter loops, giving frequent updates of an estimate of the robot's attitude. Therefore the code is one endless loop. The faster the filter loops the better, but the execution speed is limited by the hardware and by other processes that need CPU time. To preserve time for other processes, the iteration speed of the program is controlled. At the start of the program the sensors and data structures are initialized. Hardware initialization is done in a separate function called init (at the end of the code below). This concentrates all hardware-dependent code in one function and keeps the main function easily reusable, even if you have different hardware.
As stated before, the loop contains the filter. Aside from the code that implements the filter, there is code to supply information to the user. These parts of the code are all commented out. If they are useful to you, you can uncomment them; even then you can switch these parts of the code on or off by changing a true to false in the if-statement surrounding them. There are three ways to supply information to the end user: data can be logged to Excel, displayed on the NXT screen, or a full-color LED (a BlinkM) can be triggered when certain conditions are met.
If you find the code useful or informative please press the “like” button. This might encourage me to make more posts like this.
#pragma systemfile
#pragma autostarttasks;
// this defines the iteration time of the filter in msec
#define IMU_speed 25
#include "math.c"
#include "matrixDirectives.c"
#include "I2C.c"
#include "IMU_acclMatrix.c"
#include "IMU_gyroMatrix.c"
#include "IMU_blinkM.c"
/*
// Optional code to log the filter state in an Excel file
#define NXT2excelMaxData 8
#include "nxt2excel.c"
*/
// this struct will hold attitude, position (Todo) and speed (Todo) of the robot
typedef struct
{
matrix C;
bool initialized;
} tPosition;
tPosition position;
void init();
void normalizeM(matrix &in);
task main()
{
float duration;
matrix U, T;
vector I, P, Rate;
long lastTime, start, wait;
bool home, oldhome;
// PI controller parameters, these are used to tune the filter
float kp=0.0005,ki=0.005, id=0.99;
// Initialize the sensors
init();
lastTime=nPgmTime;
eraseDisplay();
// initialize cosine matrix, the cosine matrix holds the attitude of the robot
position.C[0][0]=1;
position.C[0][1]=0;
position.C[0][2]=0;
position.C[1][0]=0;
position.C[1][1]=1;
position.C[1][2]=0;
position.C[2][0]=0;
position.C[2][1]=0;
position.C[2][2]=1;
// iterate forever, but halt if one of the sensors returns an error
while (I2C_gyro.error==NO_ERR && I2C_accl.error==NO_ERR)
{
start=nPgmTime;
alive();
// get sensor readings,
// gyro readings (in rad/sec) are stored in a vector (array of 3 elements) called gyro.rate
// acceleration readings (in g) are stored in a vector called accl.g
Gyro_getGyroBiased();
Accl_getAcclBiased();
// for better filter output the exact time for each iteration is measured
duration=(nPgmTime-lastTime)/1000.0;
lastTime=nPgmTime;
// nPgmTime wraps around when reaching maxint, correct for this
if (duration<0) duration=IMU_speed/1000.0;
/*
// Optional code to log the filter state in an Excel file
if (false)
{
for (int Rcol=0;Rcol<3;Rcol++)
{
NXT2excelAddValue(1,gyro.rate[Rcol]*1000);
NXT2excelAddValue(2,accl.g[Rcol]*1000);
NXT2excelAddValue(3,duration*1000.0);
NXT2excelAddValue(4,I[Rcol]*1000);
NXT2excelAddValue(5,P[Rcol]*1000);
for (int ii=0;ii<3;ii++)
NXT2excelAddValue(6+ii,position.C[Rcol][ii]*1000.0);
NXT2excelSendMessage();
}
}
*/
// scale the turning rate (rad/sec) to a change in attitude (rad)
VscaleVS(gyro.rate,duration,Rate);
// correct the gyro signal for offset (I) and accumulated errors (P), but do it slowly (ki, kp)
// at the same time, convert it to a skew symmetric matrix
// for speed, no matrix functions are used here
U[2][1]=gyro.rate[0]*duration-I[0]*ki-P[0]*kp;
U[1][2]=-U[2][1];
U[0][2]=gyro.rate[1]*duration-I[1]*ki-P[1]*kp;
U[2][0]=-U[0][2];
U[1][0]=gyro.rate[2]*duration-I[2]*ki-P[2]*kp;
U[0][1]=-U[1][0];
// transform the change in attitude with respect to the robot axes into a change in attitude with respect to the global axes
// for speed, no matrix functions are used here
// matrix equivalent: MproductMM(position.C,U,T);
T[0][0]=position.C[0][1]*U[1][0]+position.C[0][2]*U[2][0];
T[0][1]=position.C[0][0]*U[0][1]+position.C[0][2]*U[2][1];
T[0][2]=position.C[0][0]*U[0][2]+position.C[0][1]*U[1][2];
T[1][0]=position.C[1][1]*U[1][0]+position.C[1][2]*U[2][0];
T[1][1]=position.C[1][0]*U[0][1]+position.C[1][2]*U[2][1];
T[1][2]=position.C[1][0]*U[0][2]+position.C[1][1]*U[1][2];
T[2][0]=position.C[2][1]*U[1][0]+position.C[2][2]*U[2][0];
T[2][1]=position.C[2][0]*U[0][1]+position.C[2][2]*U[2][1];
T[2][2]=position.C[2][0]*U[0][2]+position.C[2][1]*U[1][2];
// add the change in attitude to the attitude to get an updated attitude
MsumMM(position.C,T,position.C);
// Renormalize matrix
// due to small errors the matrix gets corrupted over time (its axes are no longer perpendicular)
// this function corrects this
normalizeM(position.C);
// correct the attitude using tilt info from the accelerometer
// but only if there is little or no acceleration, otherwise tilt info cannot be trusted
if (abs(sqrt(accl.g[0]*accl.g[0]+accl.g[1]*accl.g[1]+accl.g[2]*accl.g[2])-1.0)<.2)
{
// Calculate the difference (error) between the calculated attitude and the accelerometer
// and use this in a proportional feedback (P)
P[0] = (position.C[2][1]*accl.g[2]) - (position.C[2][2]*accl.g[1]);
P[1] = (position.C[2][2]*accl.g[0]) - (position.C[2][0]*accl.g[2]);
P[2] = (position.C[2][0]*accl.g[1]) - (position.C[2][1]*accl.g[0]);
// Add the error to the accumulated error and use this in an integral feedback (I)
I[0]=I[0]*id+P[0]*duration;
I[1]=I[1]*id+P[1]*duration;
I[2]=I[2]*id+P[2]*duration;
}
else
{
P[0]=0;
P[1]=0;
P[2]=0;
}
/*
// optional code to drive the LED in the sensor
if (true)
{
// test if the robot is in its starting attitude
if ( abs(position.C[2][0])<.009 && abs(position.C[2][1])<.009)
home=true;
else
home=false;
// change the color of the LED if needed
if (home != oldhome)
{
if (home)
blinkM_setColour(0 , 0, 128);
else
blinkM_setColour(64 , 0, 0);
}
oldhome=home;
}
*/
// optional code to display sensor and filter data
if (true && time1[T1]>200)
{
eraseDisplay();
nxtDisplayTextLine(7, "%5d",wait);
//displayM(position.C);
//displayV(accl.g,1);
//displayV(gyro.rate,0);
ClearTimer(T1);
}
//while (nNxtButtonPressed==-1);
//while (nNxtButtonPressed!=-1);
// wait for next iteration
// the program does not assume a constant execution speed,
// instead it measures every iteration, and waits the exact amount of time to guarantee constant iteration speed
wait=IMU_speed-(nPgmTime-start);
if (wait<0)
{
wait=0;
//PlaySound(soundException);
}
wait1Msec(wait);
}
}
void init()
{
// configure port;
SensorType[S3]=sensorI2CCustom;
nI2CRetries=3;
position.initialized=false;
accl.initialized=false;
gyro.initialized=false;
// configure blinkM;
I2C_blinkM.port=S3;
I2C_blinkM.address=0x12;
blinkM_setColour(0 , 4, 0);
// configure accl;
I2C_accl.port=S3;
I2C_accl.address=0xa6;
Accl_init();
Accl_getBias();
accl.initialized=true;
// configure gyro;
I2C_gyro.port=S3;
I2C_gyro.address=0xD0;
Gyro_init();
Gyro_getBias();
gyro.initialized=true;
if (I2C_gyro.error || I2C_accl.error)
{
nxtScrollText("IMU: %3d, %3d",I2C_gyro.error , I2C_accl.error);
blinkM_setColour(63 , 0, 0);
PlaySound(soundException);
wait1Msec(2000);
}
blinkM_setColour(0 , 0 , 0);
position.initialized=true;
}
//normalizes a matrix (takes 2.87 msec)
void normalizeM(matrix &in)
{
float error, renorm;
vector vx,vy, xor, yor, zor;
VgetRowMS(in, 0,vx);
VgetRowMS(in, 1,vy);
dotProductVVS(vx, vy,error); // eq.18
if (abs(error)>0.05)
{
VscaleVS(vy,error/-2.0, xor);
VsumVV(vx, xor, xor); //eq.19
VscaleVS(vx,error/-2.0, yor);
VsumVV(vy, yor, yor); //eq.19
VcrossProductVV(xor,yor, zor); //eq.20
dotProductVVS(xor, xor,error);
renorm = .5 *(3 - error); //eq.21
for ( int Rcol=0; Rcol<Msize;Rcol++)
in[0][Rcol]=renorm*xor[Rcol];
dotProductVVS(yor, yor,error);
renorm = .5 *(3 - error); //eq.21
for ( int Rcol=0; Rcol<Msize;Rcol++)
in[1][Rcol]=renorm*yor[Rcol];
dotProductVVS(zor, zor,error);
renorm = .5 *(3 - error); //eq.21
for ( int Rcol=0; Rcol<Msize;Rcol++)
in[2][Rcol]=renorm*zor[Rcol];
}
}
I finished programming the code for my IMU sensor. It corrects the gyro reading for offset and integration errors by means of accelerometer data. For this I use a nonlinear complementary filter based on a direction cosine matrix. This article explains how the filter works and, even better, it explains all the matrix math that is used to describe the filter. Basically, it explains everything that you need to know if you want to make such a filter yourself. But you must be prepared to read the article several times.
So now I know the position, speed and attitude of my robot based on gyro and accelerometer data. This is called an Inertial Navigation System, or INS. Thanks to the filter, attitude info does not get worse over time, but I don't know yet how good it is. A very basic test showed that my robot knows the difference between lying flat on the ground and lying on a grain of sugar. What's even better, it still knows this after two or five minutes. And this is the holy grail I was after all this time. So attitude information around the X- and Y-axis is quite good. I cannot correct for drift around the Z-axis using the accelerometer, so this still has some drift. If I needed to, I could use a compass to correct for this, but as this is not really necessary for my robot I think I'll leave that be. I don't know yet how precise the position info is. I know it must still suffer from integration errors, but that doesn't matter to me.
I also need actual attitude data, as it will be the input for a PID controller that keeps the robot balanced. Therefore it is good to know how fast the filter is. Using robotC and fast I2C (SensorType[S3]=sensorI2CCustomFastSkipStates) I get a reading every 15 msec, but I also get an error from time to time and the system cannot recover from it. When I use the standard I2C speed I get a reading every 24 msec and a reliable system. Of course I will have to use a lower frequency to make room for other processes. I still aim for a heartbeat of 50/33 Hertz for my robot.