Archives for category: Kalman filter

This post presents the robotC code for the Kalman filter.

Now that the Kalman filter is working, I have made the code suitable for use in robotC programs. The code can be freely used by you. Download the code

To use the filter, include heading.c into your program. This will create a global variable called heading that contains the filtered heading. The variable is updated every 20 msec.

When you include the code into your program, a task will be started automatically once your program starts. This task will first search for both a compass and a gyro sensor. Once they are found, the filter starts calculating the value of heading. The program can find the sensors as long as they are defined in robotC's motors and sensors setup.

Check the readme and example program called testHeading.c for additional information.

The code probably will not work when the sensors are connected through a sensor multiplexer, because the function that searches for the sensors only scans the first four sensor ports. But in case you have a multiplexer it should be easy to modify the program.

Initially I used vector math to implement the filter in robotC. For this version, however, I got rid of the vector math and switched to ordinary (scalar) functions. This makes the code (and the filter) much easier to understand.

Here is the code, in case you just want to look at it.

//*!!Code automatically generated by 'ROBOTC' configuration wizard !!*//
/*
*  PROGRAM: Heading
*  VERSION: 0
*  PURPOSE: This program integrates gyro and compass readings using a Kalman filter
*           to get a more accurate value for heading
*  AUTHOR:  Aswin Bouwmeester
*  DATE:    oct 10, 2010
*/
Variance for the gyro sensor for different sample sizes (N = number of samples, T = total sampling time in msec):

N    T    var
1    5    0.222
2    10   0.144
3    15   0.123
4    20   0.119
5    25   0.114
6    30   0.088
8    40   0.093
10   50   0.078
#pragma systemFile
#pragma autoStartTasks

// MOUNTING can be set to - if the compass or the gyro is mounted upside down
#ifndef MOUNTING
#define MOUNTING
#endif

float meanSensorValue(tSensors s, int interval, int N);
tSensors findSensor(TSensorTypes type);
float heading=-1;

task getheading()
{
  tSensors _compass, _gyro;
  bool disturbed;
  long last_time;
  float
    var_compass = 0.9,   // measurement variance of the compass
    var_gyro = 0.119,    // measurement variance of the gyro (sample size 4, see table above)
    var_process = 0.01,  // process noise: how much the rate of turn can change between iterations
    var_filter_predicted,
    var_filter_updated,
    compass_measured,
    gyro_measured,
    compass_predicted,
    compass_updated,
    kalman_gain,
    offset,
    time_span;

  // find the compass and gyro sensors;
  // (the type constants must match the ones used in the motors and sensors setup)
  _compass=findSensor(sensorI2CHiTechnicCompass);
  _gyro=findSensor(sensorHiTechnicGyro);

  // Display error message if the sensors are not found
  if (_compass==S5) nxtDisplayTextLine(1,"No compass found");
  if (_gyro==S5) nxtDisplayTextLine(2,"No gyro found");
  if (_compass==S5 || _gyro==S5) return;

  // get the gyro offset
  offset=meanSensorValue(_gyro,5,50);

  // initialise the filter;
  compass_updated=SensorValue[_compass];
  var_filter_updated=var_compass;
  last_time=nPgmTime;

  // Run the filter forever;
  while (true)
  {
    // get time span;
    time_span=(nPgmTime-last_time)/1000.0;
    last_time=nPgmTime;
    if (time_span<=0) time_span=0.02; // this is to compensate for wrapping around the nPgmTime variable;

    // get measurements from sensors
    // (when changing the sample size of the gyro, one must also change the variance)
    compass_measured=SensorValue[_compass];
    gyro_measured=MOUNTING (meanSensorValue(_gyro,5,4)-offset);

    // predict;
    compass_predicted=compass_updated+gyro_measured*time_span;
    var_filter_predicted=var_filter_updated+var_gyro*time_span*time_span+var_process;

    // heading must be between 0 and 359
    if (compass_predicted<0) compass_predicted+=360;
    if (compass_predicted>=360) compass_predicted-=360;

    // Detect compass disturbance;
    disturbed=abs(compass_predicted-compass_measured)>2*sqrt(var_filter_predicted);

    // get Kalman gain (ignore the compass while it is disturbed);
    if (disturbed)
      kalman_gain=0;
    else
      kalman_gain=var_filter_predicted/(var_filter_predicted+var_compass);

    // update;
    compass_updated=compass_predicted+kalman_gain*(compass_measured-compass_predicted);
    var_filter_updated=(1-kalman_gain)*var_filter_predicted;

    // make result available globally
    heading=compass_updated;

    // display information about the filter
    nxtDisplayTextLine(0,"Heading filter");
    nxtDisplayTextLine(1,"Heading  : %3.0f",compass_updated);
    nxtDisplayTextLine(2,"Compass  : %3.0f",compass_measured);
    nxtDisplayTextLine(3,"Variance : %6f",var_filter_updated);
    nxtDisplayTextLine(4,"Disturbed: %1d  ",disturbed);

    // wait for next iteration;
    wait1Msec(20);
  }
}

// function used to automatically find out in which port a sensor is attached (returns S5 when the sensor is not found)
tSensors findSensor(TSensorTypes type)
{
  tSensors port;
  for (port=S1;port<=S4;port++)
    if (SensorType[port]==type) return port;
  return S5;
}

// get a mean sensor value from a specified number of samples
float meanSensorValue(tSensors s, int interval, int N)
{
  float som=0;
  for(int i=0;i<N;i++)
  {
    som+=SensorValue[s];
    wait1Msec(interval);
  }
  return som/N;
}

My last post was months ago. I finally got the courage to continue my work on the Kalman filter. I can report some progress and another, hopefully small, drawback.

Remember that my filter didn't work because of the compass disturbances caused by metal and magnets. The last few nights I've been working on a solution for this. In essence I want to ignore compass readings when there is a disturbance somewhere. But how do I detect disturbances? A little work I did for Xander's omniwheel robot pointed me in the right direction.
My Kalman filter not only gives predicted values for the heading, it also knows how accurate each prediction is. Knowing these two things, one can use statistics to calculate how likely it is that a compass reading fits the prediction. Let me explain this with an example. Suppose the filter predicts a heading of 90 degrees (east). At the same time it knows that the true heading is with 95% certainty between 87 and 93 degrees. In this case you can trust a compass reading of 92 degrees, but you wouldn't trust a compass reading of 97 degrees. Based on this technique I added a switch to my filter. If a compass reading is to be trusted the filter performs normally. But if it does not trust a compass reading, it doesn't take this reading into account. It then skips the update phase and continues straight to the next iteration, making the next prediction. This seems to work.

Here is the first graph of my filter in action. 

The red and purple lines represent the orientation of my robot. In red is what my compass reports, in purple is what the filter reports. The horizontal axis is the time axis (in msec). Let me explain what happens in real life and how you can see the effects in the graph.
The robot is standing on a reinforced concrete floor with its side against the wall. The iron in the concrete causes nasty disturbances to the compass. Initially the robot is pointing almost east (80 degrees). After some time I lift the robot, turn it half a circle and put it back against the wall with its other side. By using the wall I know that the robot made a turn of 180 degrees. Turning the robot shows up in the graph as a rise (or fall) of the red and purple lines. Once the robot starts turning we can see the two lines drift apart. This is because the compass is affected by the iron and the gyro is not. The filter detects this and starts ignoring the compass readings. This is shown by the green line: when it is up, the filter ignores the compass. As you can see from the red line, the filter is quite right to do so. We would expect it to rise to 80+180=260 degrees, but the compass reports around 240 degrees. This is obviously very wrong. The result of the filter on the other hand seems quite accurate; the purple line rises to the expected value of 260 degrees. Once I turn the robot back, the compass readings come back in line with the filter readings. The filter then starts taking compass readings into account again. From that moment on, the filter can use compass readings to filter out any accumulated errors in the filter heading.

Does this mean my filter is ready? No, there is still one problem to tackle. But I'll explain that problem in the next episode of my everlasting quest for better sensor readings.


It has been some time since I posted my last blog. Work kept me busy most of the time. But I also experienced some difficulties that affected my motivation. However, I'm not willing to surrender yet. I will discuss the difficulties I experienced; maybe this will help me find solutions for them.

First, I thought I had improved the position of the compass sensor when I replaced it last time. There were no more disturbances from the motors. But during testing I found out why. The brick was influencing the compass sensor in such a strong way that the influence of the motors was pushed to the background. But still the sensor didn’t give me good readings. I moved the compass sensor to the top of a pole I placed on my robot. This is very ugly but it does the trick (I hope).

Second, my son needed some of the parts back that I had borrowed from him. FreeRover was destroyed when he removed them. Luckily there is BrickLink. I put my son up for sale on BrickLink, so that problem is solved as well 😉

The third problem is compass disturbances. I anticipated this problem but I still have not solved it. What happens is this. Suppose a situation where there are no disturbances to the compass. Its reading is more or less accurate. When I start moving the robot, the combined output of the gyro and compass sensors from the Kalman filter is better than the output of either sensor on its own. But when I move the robot to a place where there is a magnetic disturbance and hold the robot there, the Kalman output is good in the beginning, but then it drifts off towards the disturbed compass reading. I do understand why: the filter knows that neither the compass nor the gyro can be trusted completely, and it also knows there is some process noise (i.e. the position can change due to movements of the robot). So when it gets a false compass reading it still takes this reading into account, because it cannot know where the error came from; it might as well be a false gyro reading or a movement of the robot. I hoped to make the filter smarter and to detect false compass readings: when there is a large change in compass reading that is not in line with the gyro reading, I ignored the compass reading. However, this proved to be more difficult than I thought. As sensor readings are frequent, disturbances do not show up as large changes; they creep in slowly and cannot be detected because they are within the range of sensor noise. I do not have a solution for this yet.

A first test of the accuracy of my compass and gyro sensors proved that sensors are heavily affected by the robot itself. Especially the motors affect the readings of the compass sensor.

With this knowledge in mind I wanted to make my robot more accurate. This is what I mean by system accuracy.

First I looked at the gyro. Both its average reading and the spread around the average seem to be affected by running motors. First I tried to move the sensor away from the motors by mounting it on top of the robot. This didn't improve the readings. The mean was still affected and the spread got worse. The latter, I concluded, must be caused by vibrations. These are worse on top of the robot, just as the tip of a mast swings further than the boat itself. I put the sensor back in its original position. The drop in mean still puzzled me, but I had already concluded that I should measure the base level of the sensor with running motors, as this is the normal state of my robot. After I changed batteries I noticed that the drop in mean due to the motors was smaller. Maybe the motors cause a drop in voltage on the sensor port? Maybe this causes the drop in mean? Maybe this is also the cause of the spike errors just after changing power levels? I don't know for sure. But this is more likely to be the cause than electromagnetic disturbances, at least for this sensor.

The compass sensor on the other hand suffered heavily from electromagnetic disturbances during my first test. I tried several alternative positions for this sensor, but I ended up mounting it on top of the brick. The brick shields the sensor from the motors this way. This place proved to give the least disturbances. The sensor is still off-balance for a short time after a change in the power sent to the motors, but it recovers within 0.1 seconds. This effect I attribute to a temporary drop in voltage on the sensor ports. I really should include voltage levels in my readings when testing again.

With the new set up I think the sensor placement is as good as it can be. I’ll have to keep in mind though that sensor readings seem to be affected by drops in power levels. I think I will overcome these effects by not feeding the Kalman filter observations when this happens. This is a valid technique for the filter. One just has to skip the update phase for a few iterations.

In the next post I will present the results of my final test and determine the variance of both sensors within my system.

In my previous post I discussed the Kalman filter for fusing compass and gyro readings. This time I’ll discuss the way I implement the filter on the NXT. I use robotC as my programming environment but most things discussed in this post are relevant to all environments. 

The formulas of the Kalman filter use matrices from linear algebra. They provide a convenient way to write down the various elements of the filter. RobotC on the other hand has no native support for matrix math. Most computer languages don't support matrices.
This leads me to the first decision I’ll have to make. I have two alternatives. The first is to translate my filter into ordinary functions, the second is to write a function library to support matrix calculus.
The first option has the advantage of speed, because it allows me to eliminate arguments and apply other optimizations. The disadvantage of this solution is that my functions can get complex and don't resemble the model. This will make debugging difficult. But more importantly, it will make it hard to change the model if I need to. It is mainly for this reason that I don't go for this option.
The other option is to implement support for matrix functions. There are only three matrix functions that I need: sum, product and inversion. That cannot be too hard, can it? Well, matrices can have any number of rows and columns, and a good function library should support this. This kind of flexibility is hard to get in robotC. My model only uses scalars (2×1 matrices) and 2×2 matrices, so I will make a function library that works for these dimensions only.

The second step is to define the data types that store matrices and scalars. Here again, I sacrifice flexibility and limit myself to 2×1 scalars and 2×2 matrices. All elements must be of type float because the filter uses real numbers, not integers. Here are the type definitions:

// Matrix size and scalar size;
const int Msize=2;

typedef float matrix[Msize][Msize];
typedef float scalar[Msize];

The third step is to decide how to pass information to my function library and back. This decision is language specific. RobotC supports "pass by reference"; this means that one can tell a function where its input is located instead of passing all the input values. It is like me pointing out an article on Wikipedia instead of explaining things myself to you. This technique saves variable space and reduces the number of parameters passed to functions. Therefore I will use this technique.
The result of each function in the library is a scalar or matrix. RobotC does not support user defined data types as function types. In other words, I cannot write functions that return a matrix. But I can pass a reference to a matrix that the function will update. Therefore I'll provide my functions with a storage location as one of the input parameters. This design choice gives the following general syntax for the functions in my function library:

void matrixFunction(inputMatrix1 <, inputMatrix2 ... inputMatrixN>, resultMatrix);

I decided that my functions won't be flexible enough to handle matrices of arbitrary dimensions. However, my filter requires me to calculate the product of 2×1 and 2×2 matrices and the product of two 2×2 matrices. As a consequence of my design decision I'll have to write two product functions, one to handle the first case and one to handle the second.

The last thing to mention is the naming convention I use in my function library. Function names start with a character indicating the return type of the function: M for functions that return a 2×2 matrix, S for functions that return a scalar. (Technically all functions return void; what I mean is the functional return type.) The return type indicator is followed by the name of the mathematical operation: sum, product, transpose or inverse. Function names are completed with characters indicating the types of the input parameters. So the function SproductMS calculates the product of a matrix and a scalar and returns a scalar.

Having explained all the design decisions I needed to make, it is time to show the code for the function library.

Warning: This code is not (always) working properly in version 2.01 due to a robotC bug using structs and floats.


// returns the product of 2 matrices in matrix r
void MproductMM(matrix &m1, matrix &m2, matrix &r)
{
  int Rrow, Rcol, i;
  for (Rrow=0;Rrow<Msize;Rrow++)
    for (Rcol=0;Rcol<Msize;Rcol++)
    {
      r[Rrow][Rcol]=0;
      for (i=0;i<Msize;i++)
        r[Rrow][Rcol]+=m1[Rrow][i]*m2[i][Rcol];
    }
}

// returns the sum of 2 matrices in matrix r
void MsumMM(matrix &m1, matrix &m2, matrix &r)
{
  int Rrow, Rcol;
  for (Rrow=0;Rrow<Msize;Rrow++)
    for (Rcol=0;Rcol<Msize;Rcol++)
      r[Rrow][Rcol]=m1[Rrow][Rcol]+m2[Rrow][Rcol];
}

// returns the difference of 2 matrices in matrix r
void MsubstractMM(matrix &m1, matrix &m2, matrix &r)
{
  int Rrow, Rcol;
  for (Rrow=0;Rrow<Msize;Rrow++)
    for (Rcol=0;Rcol<Msize;Rcol++)
      r[Rrow][Rcol]=m1[Rrow][Rcol]-m2[Rrow][Rcol];
}

// Inverts a matrix
void MinvertM(matrix &m, matrix &r)
{
  float d;
  d=m[0][0]*m[1][1]-m[0][1]*m[1][0];  // determinant
  if (d==0) return;                   // a singular matrix cannot be inverted
  r[0][0]= m[1][1]/d;
  r[0][1]=-m[0][1]/d;
  r[1][0]=-m[1][0]/d;
  r[1][1]= m[0][0]/d;
}

// Transposes a matrix
void MtransposeM(matrix &m, matrix &r)
{
  int Rrow, Rcol;
  for (Rrow=0;Rrow<Msize;Rrow++)
    for (Rcol=0;Rcol<Msize;Rcol++)
      r[Rcol][Rrow]=m[Rrow][Rcol];
}

// copies one matrix into another
void McopyM(matrix &m, matrix &r)
{
  int Rrow, Rcol;
  for (Rrow=0;Rrow<Msize;Rrow++)
    for (Rcol=0;Rcol<Msize;Rcol++)
      r[Rrow][Rcol]=m[Rrow][Rcol];
}

// sums two scalars
void SsumSS(scalar &s1, scalar &s2, scalar &r)
{
  int Rrow;
  for (Rrow=0;Rrow<Msize;Rrow++)
    r[Rrow]=s1[Rrow]+s2[Rrow];
}

// subtracts two scalars
void SsubstractSS(scalar &s1, scalar &s2, scalar &r)
{
  int Rrow;
  for (Rrow=0;Rrow<Msize;Rrow++)
    r[Rrow]=s1[Rrow]-s2[Rrow];
}

// copies one scalar into another
void ScopyS(scalar &s, scalar &r)
{
  int Rrow;
  for (Rrow=0;Rrow<Msize;Rrow++)
    r[Rrow]=s[Rrow];
}

// calculates the product of a matrix and a scalar
void SproductMS(matrix &m, scalar &s, scalar &r)
{
  int Rrow, Rcol;
  for (Rrow=0;Rrow<Msize;Rrow++)
  {
    r[Rrow]=0;
    for (Rcol=0;Rcol<Msize;Rcol++)
      r[Rrow]+=m[Rrow][Rcol]*s[Rcol];
  }
}

My ultimate goal is to make a robot that maps its environment, knows where it is and can navigate its environment on its own. Previous experiments have taught me that lots of obstacles have to be overcome to achieve this goal. One of them is combining sensory input. For example, I want to know which way my robot is facing. I have both a compass and a gyro sensor. The readings of the compass sensor are not very precise and suffer from magnetic disturbances (motors, NXT, metal objects). The gyro readings do not suffer from magnetic disturbances, but the gyro only gives me an idea of the change in direction, not the direction itself. One can integrate these changes into an overall change with respect to the start position. But then small errors in every reading add up to a forever growing error. It would be nice to combine both sensors so that one sensor corrects the error of the other. The Kalman filter promises to do exactly that. This filter however is hard to understand and implement. As a consequence there is a lot of talk about it in the NXT community, but it is seldom used. I've set myself the goal of understanding the filter and implementing it for the set-up described above. This and later posts will document my quest. Sometimes I'll take things for granted, sometimes I'll simplify things where a statistician wouldn't. Sometimes I might be wrong.

The basic ideas behind the filter are simple.

  • Sensors are never very precise. Each sensor has an error. This tells you how precise this sensor is. When the compass reads 15 degrees north it could also be 14 or 16.
  • Two sensors combined give more precise information than a single sensor. When two compasses give me different readings, the truth might be somewhere in the middle.
  • If you know how things were, then you have a better understanding of how things are. When the last observation from the compass was 13 and the current one is 15 then the true heading is more likely to be below 15 than over 15.
  • If you know how things changed from the last observation to the current one you can even better predict the current situation. If the robot is turning clockwise it is likely that the current heading should be more than the last time it was calculated.

It is not hard to find the formulas used with the Kalman filter; check Wikipedia (although I do suspect there is an error in the formulas there) or this excellent course. It was hard for me to understand those formulas, however. There were two things I had to understand: one was matrix calculus, the second was the meaning of all the scalars and matrices that are in the equations. But there is some good news. Matrix calculus is not too difficult; some of us will have had it in high school, and if not, there are just a few tricks to learn: addition (+), multiplication (*) and inversion (1/matrix). There is even more good news: not all matrices have to be understood. Some can be ignored in the simple situation I am dealing with, others can be taken for granted without understanding them.

The only formulas of importance are those under Predict and Update in the Wikipedia article. The other formulas will not make it into code. Let's examine what goes into the formulas first, and what the formulas actually do afterwards.
The first thing to understand is the state. This is the x with a hat. The state describes the important properties of your robot. By important I mean in this context: what is it we want to know about the robot? In my case it is the heading of the robot (which way is it pointing) and its rate of turn (how fast is it turning and in what direction). So the state contains two items in my case. But in other situations totally different things can go into the state. This is very important to understand: the Kalman filter is not a ready-made filter that accepts a number as input and gives another number as output. You have to design your Kalman filter yourself, fitting it to your situation and problem.
The state is a scalar (a vector with just one column) and in my case it has just two items heading and rate of turn.
With the Kalman filter the state comes in different flavours.
– There are state scalars associated with the current and with the previous observation. These are denoted Xk and Xk-1 respectively.
– The a priori state vector holds the predicted state. The prediction is based on the previous state (and the transition) only; it takes neither sensory input nor control input into account. It is denoted Xk-1|k-2 for the previous observation and Xk|k-1 for the current one.
– The a posteriori state vector describes the prediction of the filter after taking sensory input and control input into account. It is denoted Xk-1|k-1 for the previous observation and Xk|k for the current one. This is my state vector:

X = | heading      |
    | rate of turn |

(When one knows the number of rows in the state vector, one also knows the dimensions of most of the other vectors and matrices: they have the same number of rows, and the number of columns also equals the number of rows of the state vector. This probably isn't always true, but in this simple case it is.)

There is another element in the filter that looks very much like the state vector. This element is called the observation and written down as Z. It holds the input from the sensors. In my case it holds compass and gyro readings.

Z = | compass reading |
    | gyro reading    |

As my sensors return heading and rate of turn, this vector is very much the same as the state vector. This is not by definition so! If measurements are indirect or use different units than the state (think of degrees Celsius versus degrees Fahrenheit), then the two scalars will differ from each other. We then have to provide formulas to translate measurements into state using an observation model (H). My observation model doesn't have to change anything and can therefore be ignored in the formulas. But officially it is:

H = | 1 0 |
    | 0 1 |

The next element to understand is the transition matrix; Wikipedia uses F to denote this element. The transition matrix describes how the state changes from observation to observation. In my example one can assume that the rate of turn does not change from observation to observation. If the robot was turning 5 degrees a second, then we assume it is still turning at the same rate. The transition matrix should tell that the rate of turn does not change over time: rate of turn(k) = rate of turn(k-1). The heading however does change from observation to observation, because the robot is turning, and the change in heading is related to the rate of turn. So the new heading equals the old heading plus the rate of turn times the time interval: heading(k) = heading(k-1) + rate of turn(k-1) * deltaT. This is my transition matrix:

F = | 1 deltaT |
    | 0 1      |

One can argue that the rate of turn is not constant over time. If the robot changes its steering, the rate of turn also changes. This is true, and if one wishes, this can be modelled in the control model, Bu. To keep things simple I pretend not to know anything about steering or speed. By ignoring the control model I can simplify the first Kalman formula quite a bit.

I cannot totally ignore the fact that my robot does steer and that the rate of turn does change. I consider changes in rate of turn to be an unknown factor, a kind of noise in the predictions the filter makes. This is called the process noise, Q. The heading is also affected if the steering of the robot changes. But as the interval between two predictions is very small, I think this can be ignored, so I set the process noise of the heading to zero. The process noise is a covariance matrix; this means that there could also be a relation between the noise in the rate of turn and the noise in the heading. But because the noise in the heading is zero there cannot be a relation between the two, so the covariance is zero as well. I do not yet know the magnitude of the process noise. In other words, I do not know how often and how fast the rate of turn can change between two predictions. I'll find out later. My process noise matrix is:

Q = | 0 0 |
    | 0 ? |

The last matrix to understand describes how accurate the sensors are. It is called the observation noise and written down as R. The accuracy of a sensor can be measured, guessed or found in the data sheet of the sensor. In data sheets it is denoted as ±number. This number is officially called the variance or error. This matrix also holds information about the relationship between the accuracies, or covariance, of the different sensors. In my case I have two different sensors; the errors in their readings will not be related, so the covariance can be set to zero. In other cases errors between sensors can very well be related; for example, two sensors might both suffer in a similar way from vibrations in the robot. Then the covariance cannot be ignored. I do not yet know the error of my sensors. I hope to find out later. For the moment my observation noise matrix is:

R = | compass error  0          |
    | 0              gyro error |

Not only the sensors are inaccurate; the predictions of the filter are inaccurate too. The Kalman filter knows how inaccurate a prediction is (this is called the state error, or P), and knowing this gives the filter its strength. The state error is calculated by the filter, so I don't have to bother giving it values. The state error comes in four flavours, just like the state itself.

I now understand all the elements that go into the filter and have modelled them for my goal. The other elements that are in the formulas are calculated by the filter. Their meaning can be ignored for the moment. It is time to look at the formulas themselves.

Each time the filter makes a prediction it takes two steps or phases. The first step is called the predict phase. In this step the filter calculates a new prediction from the previous prediction and the transition matrix. In my case it calculates a new heading. But it does not yet take readings from my sensor into account. It also calculates how good this prediction is (the state error P) taking the process noise into account.
In the second step the filter gets the input from the sensors. These readings will not be the same as the filter predicted (the difference is called the innovation or Y). The truth will be somewhere between the prediction and the measurement. The filter then calculates where between the two values the truth is most likely to be, this is called the Kalman gain or K. Using the Kalman gain it calculates a new prediction. And this prediction is the value I am after!

In this post I discussed the basic ideas behind a Kalman filter and described the elements that go into the filter. I also modelled these elements for my purpose, although I still have to fill in some values. Then I briefly described the steps the filter takes to make a prediction. In my next post I will describe the implementation of the filter in RobotC.