
This time I will discuss the gyro sensor (again): what it measures, how to cope with offset and drift, and how to use it.

Gyro sensors are among the most widely used sensors in robots. They are often used in self-balancing robots to keep them upright, and in autonomous robots to keep track of orientation. Game controllers may also use them to detect angular motion. A gyro sensor measures angular velocity; in other words, it measures how fast the sensor is rotating. This is most often expressed in degrees per second or radians per second. Some gyro sensors measure the angular velocity about one axis, others take measurements about two or three axes.

One should be aware that a rotating object always rotates about just one axis. This axis, however, can have any orientation. Compare this to speed. An object can move in just one direction at any given time, but this direction can be any direction in three-dimensional space. Most often we express the speed of an object over three perpendicular axes. A landing plane might fly at a speed of 600 km/h and descend at a rate of 5 m/sec. Wind might blow it off course at a rate of 0.5 m/sec. But still this plane goes in one direction only. The same is true for angular velocity: an object rotates with just one speed, but we express that speed over three separate axes.

Let us take a look at the output of a typical gyro sensor. The graph below shows the output of a digital three-axis gyro sensor (ITG-3200) over a period of 17 seconds. The X-axis is the time axis, the Y-axis shows the angular velocity expressed in degrees per second. Five seconds after the start of the measurement I rotated the gyro sensor clockwise by about 180 degrees. This took me about 3 seconds. After about 11.5 seconds I rotated the sensor back (counter-clockwise).

[Figure: gyro-1]

This graph tells us quite a lot. First, we notice that the sensor is a three-axis sensor: it returns three different signals at any given time. Second, we can see that the rotation I made took place around the third axis, the Z-axis. It means that I rotated the sensor in the XY-plane. This is true; I had the sensor flat on my desk and I rotated it while keeping it flat. Third, a clockwise rotation is expressed as a negative angular velocity and a counter-clockwise rotation as a positive angular velocity. This is a matter of convention and follows the right-handed coordinate system. Fourth, we can see that when at rest the gyro signal is close to, but not exactly, zero. This is because the sensor is not perfect; it has a bit of an error. We’ll look into this error and ways to compensate for it later. And finally, we cannot read the change of 180 degrees in orientation from the graph. This is because the sensor does not measure the orientation; instead it measures the angular velocity (speed of rotation). It is however possible to calculate the (change in) orientation from the angular velocity, as I’ll explain later.

Offset and drift correction

Let us concentrate on the error in the sensor signal right now. Below is a graph taken over a period of ten minutes while the sensor was at a standstill the whole time.

[Figure: gyro-2]

A perfect sensor would output an angular velocity of zero for all three axes all the time. Obviously this sensor does not. The X-axis signal is around 2 instead of zero. This error is called the offset error. Every axis has its own offset error. For the X-axis it is around 2, for the Y-axis about 3.3 and for the Z-axis about -1.5. It is easy to correct for the offset error once you know how big it is: you just subtract the offset error from the sensor signal to get a corrected value. The offset error itself can be calculated by taking the mean of a number of samples, 100 for example.
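As a minimal sketch of this calibration (plain C; readGyroRate() is just a placeholder for whatever call reads one axis of your gyro in degrees per second), it could look like this:

```c
/* Sketch: estimate the offset by averaging samples taken while the
   sensor is known to be at rest. */
#define OFFSET_SAMPLES 100

extern float readGyroRate(void);   /* placeholder for your own sensor call */

float calibrateOffset(void)
{
    float sum = 0.0f;
    for (int i = 0; i < OFFSET_SAMPLES; i++)
        sum += readGyroRate();
    return sum / OFFSET_SAMPLES;   /* mean of the samples = offset */
}

/* A corrected reading is then simply: readGyroRate() - offset */
```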

The offset itself may seem constant, but in reality it is not. The offset of a sensor is influenced by several factors and can change over time as a result. This is called sensor drift. One of the biggest factors contributing to sensor drift is temperature. You can notice this when you start using a sensor. When being used, the temperature of a sensor rises a bit; it gets hotter than the ambient temperature. As a result the offset of the sensor changes. If you need a very good signal you should take this into account and let the sensor warm up before calculating the offset.
Some gyro sensors, like the Hitechnic gyro for example, are seriously affected by changes in input voltage. As the NXT is battery powered this is a serious problem. Starting the motors of the NXT will result in a power drop and thus in a change in offset. There is a trick to avoid this if you have a sensor mux with an external power supply, like this one from Mindsensors. In general I advise you to choose another gyro.
Even when temperature and power are constant the offset of a gyro will still vary a bit over time. This variation is called random walk.

There is a very elegant technique to deal with sensor drift: constantly but slowly update the offset error of the sensor. Instead of treating the offset as a constant, you treat it as a value that can change over time. To calculate the offset you now use the moving average of the most recent samples as the offset. Then you always have an up-to-date offset. Calculating a moving average, however, is CPU intensive and you also need a lot of memory to store the samples. Therefore it is better to use a low-pass filter instead of a moving average. This uses no extra memory and little computing power, but the effect is the same.
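A sketch of that low-pass approach, again with readGyroRate() as a placeholder for your own sensor call and with ALPHA as a gain that has to be tuned:

```c
/* Sketch: track the offset with a first-order low-pass filter instead of a
   moving average. A small ALPHA averages over roughly 1/ALPHA samples but
   needs no sample buffer. */
#define ALPHA 0.001f              /* assumed value, tune for your sample rate */

extern float readGyroRate(void);  /* placeholder: raw angular velocity, deg/s */

static float offset = 0.0f;       /* start with a value from calibration */

float correctedRate(void)
{
    float raw = readGyroRate();
    offset += ALPHA * (raw - offset);   /* slowly pull the offset towards the signal */
    return raw - offset;                /* offset-corrected angular velocity */
}
```

When another sensor indicates that the robot is rotating, the single line that updates the offset can simply be skipped for those samples.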
This technique will work very well when the sensor is at a standstill. But will it work well when it is rotating? Any rotation will influence the calculated offset. So if possible one should pause updating the offset while the sensor rotates. Sometimes another sensor can help to detect a rotation: a compass, an accelerometer or motor encoders can all be of help.
However, there is also another solution. It is based on the notion that a rotation in one direction is very often compensated by a rotation in the other direction. A balancing robot for example stays upright, so in the long term it does not rotate. It might lean forward and backward for short moments of time, but these forward and backward rotations cancel each other out in the long run. So in the long run the average signal of the sensor equals the offset of the sensor, even when there are rotations from time to time. This means that even under dynamic circumstances one can constantly update the sensor offset. One only has to make sure to use enough samples so that there is enough time for rotations to cancel each other out. This technique is useful for balancing robots where rotations are short. It is less useful for slow-turning robots where rotations have a long duration.

Converting angular velocity to direction

There are situations where one needs to know the direction of the sensor. In navigation, for example, one needs to know in what direction a robot is heading. A gyro can provide this kind of information, although there are some limitations. So how do you transform angular velocity into direction? This is done by integration. This might seem difficult, but it really is not. If a robot rotates for 2 seconds at a speed of 90 degrees per second, it has rotated 180 degrees. So integration is nothing more than time multiplied by speed. The graph below shows both the (offset-corrected) angular velocity and the direction. During the test I rotated the sensor four times 90 degrees clockwise (360 degrees in total), then 360 degrees counter-clockwise, then 90 degrees counter-clockwise and finally 90 degrees clockwise. After this the sensor was back at its start direction.

[Figure: gyro-3]

The line representing the direction, labeled Rotation 2, clearly shows the steps of 90 degrees that I made. Careful examination of the data shows that according to the sensor the robot rotated -358.8 degrees at most, whereas I tried to rotate it -360 degrees. This makes the sensor, and the integration, pretty accurate. However, after turning the sensor back to its start direction the calculated direction is not zero, as it should be. Instead, it is about 5.7 degrees. This is not so good. What makes it even worse is that there is no way to correct this, at least not without the aid of another sensor or user intervention. This is the main drawback of integration: over time, small errors in the (offset-corrected) signal build up to a large error in the integrated data.
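The integration itself is only a few lines. A minimal sketch, assuming a fixed sample interval DT and an offset-corrected reading correctedRate() like the one sketched earlier:

```c
/* Sketch: integrate offset-corrected angular velocity into a heading.
   DT is the time between samples in seconds. The heading is relative to
   whatever direction the sensor had when heading was last reset to zero. */
#define DT 0.01f                     /* assumed 100 Hz sample rate */

extern float correctedRate(void);    /* placeholder: offset-corrected deg/s */

static float heading = 0.0f;         /* degrees since the start */

void updateHeading(void)
{
    heading += correctedRate() * DT;   /* speed multiplied by time */
}
```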

But integration can be very useful nevertheless. Suppose a robot needs to make exact turns. Using integration you can do so, but you need to reset the integrated direction to zero just before making the turn. This way the error in each turn will only be equal to the integration error that built up during that turn, which is small as making a turn does not take that long. In other words, you can make your individual turns accurate, but not the overall direction of the robot.
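A sketch of such a turn; startTurning(), stopTurning() and msleep() are placeholders for your own motor and timing code:

```c
/* Sketch: make a turn of a given size using gyro integration only.
   The accumulated angle is reset to zero at the start, so only the error
   built up during this one turn affects the result. */
#include <math.h>

#define DT 0.01f                     /* assumed sample interval, seconds */

extern float correctedRate(void);    /* placeholder: offset-corrected deg/s */
extern void  startTurning(int dir);  /* placeholder motor control */
extern void  stopTurning(void);
extern void  msleep(int ms);         /* placeholder wait */

void turnDegrees(float target)       /* negative = clockwise */
{
    float turned = 0.0f;             /* reset before the turn */
    startTurning(target < 0 ? -1 : 1);
    while (fabsf(turned) < fabsf(target)) {
        turned += correctedRate() * DT;
        msleep((int)(DT * 1000.0f));
    }
    stopTurning();
}
```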

But wait, there is one more thing. Integration only gives a change in direction. To know the real direction one should also know the direction before the changes took place. Quite often this initial direction can be assumed to be zero. But this is arbitrary and does not relate to the world. Although it might be all you need. If you need to relate the orientation of your sensor to the real world you need to align the sensor with the real world (make it point north) or you need another sensor that can do this for you. This could be a compass sensor.


This time I’ll show you the first results of incorporating a 3-axis magnetometer (or compass) into my IMU filter.

A quick round-up first. I have an IMU sensor that houses a 3-axis accelerometer and a 3-axis gyro sensor. Well, actually I have two, one that I built myself and one from Dexter Industries. To use this type of sensor I have written a non-linear complementary filter (NLC-filter) whose job it is to fuse the output of the two sensors into a drift-free, low-noise attitude signal. The filter tells me the tilt over the x-axis and the tilt over the y-axis. It also tells me the heading. The heading signal is not drift-free. The reason for this is that the accelerometer can provide tilt data but it cannot provide heading data. For this you need a compass.

A compass sensor should really be called a magnetometer, because it doesn’t give you a heading like a sailor’s compass does. Instead it gives you the strength of the magnetic field over three axes. From this one can calculate a heading.
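As a sketch of that calculation, assuming the sensor lies level so that only the x and y components of the field matter (the exact signs depend on your axis conventions):

```c
/* Sketch: heading from the horizontal components of the magnetic field.
   mx and my are the field readings along the sensor's x and y axes; the
   units cancel out. Without tilt compensation this is only valid while
   the sensor is level. */
#include <math.h>

float headingFromMagnetometer(float mx, float my)
{
    float heading = atan2f(-my, mx) * (180.0f / 3.14159265f);   /* degrees */
    if (heading < 0.0f)
        heading += 360.0f;        /* map to the range 0..360 */
    return heading;
}
```

The filter does this properly, with tilt compensation, so the result stays valid when the sensor is not level.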

It was technically not very difficult to incorporate the magnetometer data into the filter, although it took me a long time to figure out how to do it. The main thing was to tune the PI-controller for the compass. I’m not finished with this, but I can show you some first results of the filter anyway. The measurements were taken with the setup that you see in the picture. The NXT is mounted on a platform some 20 cm above the ground to minimize effects of the steel-reinforced concrete floor. It can rotate endlessly as it is mounted on a turntable and driven by an NXT motor and a worm wheel. The worm wheel in combination with a regulated motor gives a very stable rotation speed. The sensors are mounted 10 cm from the NXT to prevent disturbances from the NXT. Measurements are sent over Bluetooth to the PC in real time.

This first graph was taken while the NXT was stationary. It shows how stable the filter signal is. Note that the scale of the vertical axis is 0.1 degree, the axis ranging from -4 to 0 degrees. The roll and pitch signals are very stable. During the 15 minutes of the test these signals stayed within a bandwidth of 0.4 degrees. This is all that is left of the gyro drift. The yaw signal (which is controlled by the compass) is less stable; it floats between -0.8 and -2.0 degrees. But in the yaw signal too, the gyro drift is eliminated to a large extent. I am very, very pleased with these noise levels, as the bandwidth of the noise from an accelerometer is around 5 degrees.

The second graph compares the output signal from the compass with that of the filter. This time the NXT was spinning around the Z-axis. The graph shows the results of a full spin. You basically see two descending lines, indicating a constant movement of the sensors. The signal from the compass, in blue, is not exactly on top of the signal from the filter; there is a slight difference in the heading that the two report. The reason for this is that the setup of the NXT is not exactly level. The compass signal suffers from this: its line is not really straight but more like an S-curve. The filter signal on the other hand is tilt compensated and almost straight. The smoother red line also indicates a lower noise level.

This last image shows the effect of a magnetic disturbance on the compass sensor and on the filter. While taking this graph I slowly moved a headphone alongside the sensor.
The compass suffers from this disturbance; at the peak the signal is over 10 degrees off. The filter on the other hand is not affected by the disturbance. It detects the disturbance and ignores the compass signal while it is disturbed. During this time the filter relies solely on the gyro to keep track of heading.

I think the low noise levels of the filter are nice, but the true advantages are in the tilt compensation and the robustness to magnetic disturbances. The filter is, I think, equally robust to disturbances in acceleration, but this I cannot show you using this setup.

In my last post I discussed how I improved the accuracy of my robot’s sensors. This was done by trying out different sensor locations and measuring the variance of the sensors at those locations. This time I’ll describe how I measured the variance for both the compass and the gyro sensor.

Earlier I found out that the NXT motors influence the accuracy of both sensors in two ways. First, the magnetic field generated by the NXT motors influences the compass sensor. I minimized this effect by using the brick as a shield: I placed the compass sensor at the top rear of the brick, while the NXT motors are underneath and in front of the brick. Second, changes in motor power seem to affect the voltage level on the sensor ports for a short period of time. I’m still figuring out a good method to minimize this effect. For measuring the variance of the sensors I got rid of it by pausing sensor readings for a short time after changes in motor power.

For my final test I wrote a program that set motor power to a random level between -100 and 100 every 0.5 seconds. At the end of this time period, just before applying a new value for motor power, I measured the value of both sensors. I did this 300 times, taking two and a half minutes. The motor power was applied to both driving motors of FreeRover, but not to the steering motor. FreeRover was placed on a cardboard box to get the wheels free from the ground and thus prevent FreeRover from moving. The environment was as stable as could be, whilst at the same time the system could vibrate (although not as much as it would when FreeRover really moves). Below is a graph of the measurements.

The graph shows the readings of the compass sensor in red. It is pretty stable, even with the motors running. There are only a few readings off the mean, giving it a variance of 0.03544. The gyro readings are in blue and show more variation. The variance of the gyro is 0.93311 in this test. I repeated the test several times at different headings and locations. The results of the tests were not always the same, but the overall image is consistent with this example.
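For reference, the variance reported here is the ordinary sample variance. A small sketch of the calculation, equivalent to Excel's VAR function, applied to an array of logged readings:

```c
/* Sketch: sample variance of n readings: the sum of squared deviations
   from the mean, divided by (n - 1). */
float sampleVariance(const float *x, int n)
{
    float mean = 0.0f, sumSq = 0.0f;

    for (int i = 0; i < n; i++)
        mean += x[i];
    mean /= (float)n;

    for (int i = 0; i < n; i++)
        sumSq += (x[i] - mean) * (x[i] - mean);

    return sumSq / (float)(n - 1);
}
```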

I want to finish this post with a remark. This isn’t a test that satisfies scientific standards, but hey, the NXT is a toy after all.

A first test of the accuracy of my compass and gyro sensors proved that the sensors are heavily affected by the robot itself. The motors especially affect the readings of the compass sensor.

With this knowledge in mind I wanted to make my robot more accurate. This is what I mean by system accuracy.

First I looked at the gyro. Both its average reading and the spread around the average seem to be affected by running motors. First I tried to move the sensor away from the motors by mounting it on top of the robot. This didn’t improve the readings: the mean was still affected and the spread got worse. The latter, I concluded, must be caused by vibrations. These are worse on top of the robot, just as the tip of a mast swings further than the boat itself. I put the sensor back in its original position. The drop in mean still puzzled me, but I had already concluded I should get the base level of the sensor with running motors, as this is the normal state of my robot. After I changed batteries I noticed that the drop in mean due to the motors was smaller. Maybe the motors cause a drop in voltage on the sensor port? Maybe this causes the drop in mean? Maybe this is also the cause of the spike errors just after changing power levels? I don’t know for sure. But this is more likely to be the cause than electromagnetic disturbances, at least for this sensor.

The compass sensor on the other hand suffered heavily from electromagnetic disturbances during my first test. I tried several alternative positions for this sensor but I ended up mounting it on top of the brick. This way the brick shields the sensor from the motors. This place proved to give the least disturbances. The sensor is still off-balance for a short time after a change in the power sent to the motors, but it recovers within 0.1 seconds. This effect I attribute to a temporary drop in voltage on the sensor ports. I really should include voltage levels in my readings when testing again.

With the new setup I think the sensor placement is as good as it can be. I’ll have to keep in mind, though, that sensor readings seem to be affected by drops in power levels. I think I will overcome these effects by not feeding the Kalman filter observations when this happens. This is a valid technique for the filter; one just has to skip the update phase for a few iterations.

In the next post I will present the results of my final test and determine the variance of both sensors within my system.

I am waiting for a new release of RobotC. The current release, 2.01, has a bug with structs and floats that prevents me from further implementing the Kalman filter I discussed in earlier posts. This gives me time to concentrate on another aspect of the Kalman filter: sensor accuracy.

One of the inputs for the filter is sensor accuracy. In the filter it is referred to as the covariance of the observation noise. This matrix contains the sensor errors, in my case for a compass and a gyro, and the relation between those errors. I assume there is no relation between errors in the gyro and the compass.

I do not have information about the sensors. I asked Hitechnic for the specs, but they didn’t give them to me. They did respond, however, and pointed out that sensor error is largely affected by the system it is used in (think of vibrations within the robot) and by its environment (think of iron chairs). So to get useful information about the sensor errors I will have to measure them myself.

A good measurement requires measuring just the sensor errors within my system. Other sources of variation should be excluded from the measurements. These other sources include environmental disturbances (iron) and true changes in the system (the robot turning). The measurements have to take place in a stable environment. Also, I will have to keep the robot fixed to be sure it doesn’t turn. At the same time I want the vibrations a running robot generates, and I want the robot to activate its motors so I can measure the disturbances they inflict. These requirements are difficult to combine into one setup.

For the test I put my robot on a small box that fit between the wheels. The wheels were now free-floating and the robot wouldn’t move when the motors were running. This way I lost the vibrations generated by the wheels but I still got the vibrations generated by the motors and gear train. I also made sure there was nothing close to the robot to disturb the compass. The compass and gyro were sampled every 10 msec and the measurements were sent straight to Excel; NXT2excel came in very handy. Excel has functions to calculate the variance of a series of measurements; I used the VAR function. The test included measurements under four different circumstances: motors off, motors at 50% power, motors at -50% power and motors at 100% power. The sample size for each of these circumstances was 500. Here is a graph of the measurements.

The graph shows motor power in blue, compass readings in purple and gyro readings in green. The compass gives quite stable readings when no NXT motors are turning, but once the motors are turning a wave pattern appears in the readings. Also, when the power levels of the motors change there is a spike in the compass readings. The gyro has little error; therefore its readings are magnified by 10 in this graph. But here too there is influence from the motors: the mean drops and the noise increases.

From this first measurement I conclude that I have to put the compass further away from the motors. It is now mounted on the top front of the vehicle. The gyro isn’t visible in the picture, but you can see a touch sensor just above the rear wheel; the gyro is mounted in the same place on the other side of the vehicle. And, yes, it is mounted upside down. Maybe this also influences the accuracy of this sensor.

My ultimate goal is to make a robot that maps its environment, knows where it is and can navigate its environment on its own. Previous experiments have taught me that lots of obstacles have to be overcome to achieve this goal. One of them is combining sensory input. For example, I want to know which way my robot is facing. I have both a compass and a gyro sensor. The readings of the compass sensor are not very precise and suffer from magnetic disturbances (motors, NXT, metal objects). The gyro readings do not suffer from magnetic disturbances, but the gyro only gives me an idea of the change in direction, not the direction itself. One can integrate these changes into an overall change with respect to the start position, but then small errors in every reading will add up to a forever growing error. It would be nice to combine both sensors so that one sensor corrects the error of the other. The Kalman filter promises to do so. This filter however is hard to understand and implement. As a consequence there is a lot of talk about it in the NXT community but it is seldom used. I’ve set myself the goal of understanding the filter and implementing it for the setup described above. This and later posts will document my quest. Sometimes I’ll take things for granted, sometimes I’ll simplify things where a statistician wouldn’t. Sometimes I might be wrong.

The basic ideas behind the filter are simple.

  • Sensors are never very precise. Each sensor has an error. This tells you how precise this sensor is. When the compass reads 15 degrees north it could also be 14 or 16.
  • Two sensors combined give more precise information than a single sensor. When two compasses give me different reading the truth might be somewhere in the middle.
  • If you know how things were, then you have a better understanding of how things are. When the last observation from the compass was 13 and the current one is 15 then the true heading is more likely to be below 15 than over 15.
  • If you know how things changed from the last observation to the current one you can even better predict the current situation. If the robot is turning clockwise it is likely that the current heading should be more than the last time it was calculated.

It is not hard to find the formulas used with the Kalman filter; check Wikipedia (although I do suspect there is an error in the formulas there) or this excellent course. It was hard for me to understand those formulas, however. There were two things I had to understand: one was matrix algebra, the second was the meaning of all the vectors and matrices that are in the equations. But there is some good news. Matrix algebra is not too difficult; some of us will have had it in high school, and if not, there are just a few tricks to learn: addition (+), multiplication (*) and inversion (1/matrix). There is even more good news: not all matrices have to be understood. Some can be ignored in the simple situation I am dealing with, others can be taken for granted without understanding them.

The only formulas of importance are those under Predict and Update in the Wikipedia article. The other formulas will not make it to code. Let’s examine what goes into the formulas first and what the formulas actually do afterwards.
The first thing to understand is the state. This is the x with a hat. The state describes the important properties of your robot. By important I mean: what is it we want to know about the robot? In my case it is the heading of the robot (what way is it pointing) and its rate of turn (how fast is it turning and in what direction). So the state contains two items in my case. But in other situations it can be totally different things that go into the state. This is very important to understand: the Kalman filter is not a ready-made filter that accepts a number as input and gives another number as output. You have to design your Kalman filter yourself, fitting it to your situation and problem.
The state is a column vector (a matrix with just one column) and in my case it has just two items: heading and rate of turn.
With the Kalman filter the state comes in different flavours.
– There are state vectors associated with the current and with the previous observation. These are denoted as Xk and Xk-1 respectively.
– The a priori state vector. This holds the predicted state. The prediction is based on the previous state (and the transition) only; it does not take sensory input or control input into account. It is denoted as Xk-1|k-2 for the previous observation and as Xk|k-1 for the current observation.
– The a posteriori state vector. This one describes the prediction of the filter when taking sensory input and control input into account. It is denoted as Xk-1|k-1 for the previous observation and as Xk|k for the current observation. This is my state vector:

X = | Heading      |
    | Rate of turn |

(When one knows the number of rows in the state vector, one also knows the dimensions of most of the other vectors and matrices: they all have the same number of rows, and the number of columns also equals the number of rows in the state vector. This probably isn’t always true, but in this simple case it is.)

There is another element in the filter that looks very much like the state vector. This element is called the observation and written down as Z. It holds the input from the sensors. In my case it holds compass and gyro readings.

Z = | compass reading |
    | gyro reading    |

As my sensors return heading and rate of turn, this vector is very much the same as the state vector. This is not by definition so! If measurements are indirect or use different units than the state (think of degrees Celsius versus degrees Fahrenheit) then the two vectors will differ from each other. We then have to provide an observation model (H) that maps the state onto the measurements. My observation model doesn’t have to change anything and can therefore be ignored in the formulas. But officially it is:

H = | 1  0 |
    | 0  1 |

The next element to understand is the transition matrix; Wikipedia uses F to denote this element. The transition matrix describes how the state changes from observation to observation. In my example one can assume that the rate of turn does not change from observation to observation: if the robot was turning 5 degrees a second, then we assume it is now still turning at the same rate. The transition matrix should express that the rate of turn does not change over time, Rate of turnk = Rate of turnk-1. The heading however changes from observation to observation because the robot is turning, the change in heading being related to the rate of turn. So the new heading equals the old heading plus the rate of turn times the time interval, Headingk = Headingk-1 + Rate of turnk-1 * deltaT. This is my transition matrix:

F = | 1  deltaT |
    | 0  1      |

One can argue that the rate of turn is not constant over time: if the robot changes its steering then the rate of turn also changes. This is true, and if one wishes this can be modelled in the control model, Bu. To keep things simple I pretend not to know anything about steering or speed. By ignoring the control model I can simplify the first Kalman formula quite a bit.

I cannot totally ignore the fact that my robot does steer and the rate of turn does change. I consider changes in rate of turn to be an unknown factor, a kind of noise in the predictions the filter makes. This is called the process noise, Q. The heading is also affected if the steering of the robot changes, but as the interval between two predictions is very small I think this can be ignored. So I set the process noise of the heading to zero. The process noise is a covariance matrix, which means that there could also be a relation between the noise in rate of turn and the noise in heading. But because the noise in heading is zero there cannot be a relation between the two, so the covariance is zero as well. I do not yet know the magnitude of the process noise; in other words, I do not know how often and how fast the rate of turn can change between two predictions. I’ll find out later. My process noise matrix is:

Q = | 0  0 |
    | 0  ? |

The last matrix to understand describes how accurate the sensors are. It is called the observation noise and written down as R. The accuracy of a sensor can be measured, guessed or found in the data sheet of the sensor. In data sheets it is denoted as ±number. This number is officially called the variance or error. This matrix also holds information about the relation between the errors of the different sensors, the covariance. In my case I have two different sensors, so the errors in their readings will not be related and the covariance can be set to zero. In other cases errors between sensors can very well be related; for example, both sensors might suffer in a similar way from vibrations in the robot. Then the covariance cannot be ignored. I do not yet know the error of my sensors. I hope to find out later. For the moment my observation noise matrix is:

R = | compass error  0          |
    | 0              gyro error |

Not only the sensors are inaccurate; the predictions from the filter are also inaccurate. The Kalman filter knows how inaccurate a prediction is (this is called the state error, or P), and knowing this gives the filter its strength. The state error is calculated by the filter; I don’t have to bother giving it values. The state error comes in four flavours, just like the state itself.

I now understand all the elements that go into the filter and have modelled them for my goal. The other elements in the formulas are calculated by the filter; their meaning can be ignored for the moment. It is time to look at the formulas themselves.

Each time the filter makes a prediction it takes two steps or phases. The first step is called the predict phase. In this step the filter calculates a new prediction from the previous prediction and the transition matrix. In my case it calculates a new heading. But it does not yet take readings from my sensor into account. It also calculates how good this prediction is (the state error P) taking the process noise into account.
In the second step the filter gets the input from the sensors. These readings will not be the same as the filter predicted (the difference is called the innovation or Y). The truth will be somewhere between the prediction and the measurement. The filter then calculates where between the two values the truth is most likely to be, this is called the Kalman gain or K. Using the Kalman gain it calculates a new prediction. And this prediction is the value I am after!
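To make the two phases concrete, here is a minimal sketch of one filter pass for this particular case: the state is (heading, rate of turn), the observation model is the identity, and the matrices from above are written out by hand for the 2x2 case. DT and the noise values are placeholders that still have to be filled in, and angle wrap-around is ignored to keep it short; it is a sketch of the idea, not a finished implementation.

```c
/* Sketch: one predict + update pass for a 2-state Kalman filter with
   state X = (heading, rate of turn), F = [[1, DT], [0, 1]], H = identity,
   Q = [[0, 0], [0, Q_RATE]] and R = diag(R_COMPASS, R_GYRO). */

#define DT        0.01f   /* assumed time between two filter passes, seconds */
#define Q_RATE    0.1f    /* process noise on the rate of turn (placeholder) */
#define R_COMPASS 1.0f    /* compass variance, still to be measured */
#define R_GYRO    1.0f    /* gyro variance, still to be measured */

/* state vector X and state error P (2x2) */
static float heading = 0.0f, rate = 0.0f;
static float p00 = 1, p01 = 0, p10 = 0, p11 = 1;

void kalmanStep(float zCompass, float zGyro)
{
    /* --- predict phase: X = F*X, P = F*P*F' + Q --- */
    heading += rate * DT;                       /* rate is assumed constant */
    float n00 = p00 + DT * (p01 + p10) + DT * DT * p11;
    float n01 = p01 + DT * p11;
    float n10 = p10 + DT * p11;
    float n11 = p11 + Q_RATE;
    p00 = n00; p01 = n01; p10 = n10; p11 = n11;

    /* --- update phase --- */
    /* innovation Y = Z - H*X (H is the identity here) */
    float y0 = zCompass - heading;
    float y1 = zGyro - rate;

    /* innovation covariance S = P + R, and its inverse */
    float s00 = p00 + R_COMPASS, s01 = p01;
    float s10 = p10,             s11 = p11 + R_GYRO;
    float det  = s00 * s11 - s01 * s10;
    float si00 =  s11 / det, si01 = -s01 / det;
    float si10 = -s10 / det, si11 =  s00 / det;

    /* Kalman gain K = P * inv(S) */
    float k00 = p00 * si00 + p01 * si10, k01 = p00 * si01 + p01 * si11;
    float k10 = p10 * si00 + p11 * si10, k11 = p10 * si01 + p11 * si11;

    /* new state X = X + K*Y -- this is the value we are after */
    heading += k00 * y0 + k01 * y1;
    rate    += k10 * y0 + k11 * y1;

    /* new state error P = (I - K) * P */
    n00 = (1 - k00) * p00 - k01 * p10;
    n01 = (1 - k00) * p01 - k01 * p11;
    n10 = -k10 * p00 + (1 - k11) * p10;
    n11 = -k10 * p01 + (1 - k11) * p11;
    p00 = n00; p01 = n01; p10 = n10; p11 = n11;
}
```

Skipping the update phase when a measurement is known to be unreliable, as mentioned in an earlier post, then simply means returning right after the predict phase for those iterations.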

In this post I discussed the basic ideas behind a Kalman filter and described the elements that go into the filter. I also modelled these elements for my purpose, although I still have to fill in some values. Then I briefly described the steps the filter takes to make a prediction. In my next post I will describe the implementation of the filter in RobotC.