Archives for category: Sensors

It is time for another giveaway. This time you can win an IMU sensor from Dexter Industries. It is the same sensor DI sent me for evaluation purposes. I got it for free, and now you can too! It is probably the best-tested sensor available.

dIMU

Do you have a nice application for this sensor? Submit your plan in a reply to this blog. The most original or inspiring idea will be rewarded with this advanced sensor. I will judge the submitted ideas and point out the winner. If you like a particular submission very much you can respond to it; you might be able to influence my decision.

Dexter Industries currently has a dIMU week. They too are giving some IMUs away, so you can increase your chances of winning one on their site.


Some time ago I wrote about transformation matrices. This time I want to show you how to use one, and how much you can benefit from it.

I wrote a small class that can draw a compass rose on the NXT screen. The class contains a collection of points that form the start and end points of the lines that make up the compass rose, and it has a method to draw these lines. Before drawing, the lines are transformed using the current attitude of the IMU sensor. The transformation is a matrix multiplication of the compass rose definition with the attitude matrix that the IMU filter maintains. This is done in line 4 of the code example below: R is the transformation matrix, rose is the compass definition. The function toScreen on the same line centers the image on the screen.

public void draw(Matrix R) {
    Matrix ss;
    Graphics g = new Graphics();
    ss = toScreen(R.times(rose)); // rotate the rose, then center it on the screen
    for (int i = 0; i < elements; i++) {
        // each pair of columns holds the start and end point of one line
        g.drawLine((int) ss.get(0, i * 2), (int) ss.get(1, i * 2),
                   (int) ss.get(0, i * 2 + 1), (int) ss.get(1, i * 2 + 1));
    }
}
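
The post does not show toScreen(), but a minimal sketch of what it might do could look like this, assuming the points sit in the columns of the matrix (row 0 = x, row 1 = y, row 2 = z) and knowing the NXT LCD is 100 by 64 pixels:

private Matrix toScreen(Matrix points) {
    // Hypothetical: shift x and y to the middle of the 100x64 LCD and flip
    // the y-axis, which grows downwards on the screen; z is simply ignored.
    Matrix screen = points.copy();
    for (int c = 0; c < points.getColumnDimension(); c++) {
        screen.set(0, c, 50 + points.get(0, c));
        screen.set(1, c, 32 - points.get(1, c));
    }
    return screen;
}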

As you can see, the transformation matrix allows you to perform very complex functionality, three combined rotations, using a very simple function: times().

Below is a video of the application in action. You might want to play this video in HD as it is quite hard to see the details on the NXT screen.

This application just shows a compass rose, but the same technique can be used in a wide variety of applications. One such application is transforming data from range sensors into world coordinates; this eases the process of mapping the environment. And as the matrix maintained by the IMU filter is three-dimensional, you can even make a 3D map.
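
As a hedged sketch of that idea: if R is the 3x3 attitude matrix maintained by the filter and a range sensor points along the robot's x-axis, the world-frame offset of a detected obstacle is one multiplication away (the class and method names here are mine, not from the original code):

import lejos.util.Matrix;

public class RangeMapper {
    // Transform a range reading (along the body x-axis) into a world-frame
    // offset using the body-to-world attitude matrix R.
    public static Matrix toWorld(Matrix R, double range) {
        Matrix vBody = new Matrix(3, 1);
        vBody.set(0, 0, range); // x: distance straight ahead of the robot
        vBody.set(1, 0, 0);     // y: nothing sideways
        vBody.set(2, 0, 0);     // z: nothing up or down
        return R.times(vBody);  // 3x1 vector in world coordinates
    }
}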

Transformation matrices are also very useful for robots with holonomic wheels. Here they can be used to transform robot speed into wheel speed, as the sketch below illustrates.
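
A sketch of how that could look for a robot with three omniwheels at 120-degree intervals. The mounting angles, signs and dimensions depend on the actual build, so treat the numbers as illustration only:

import lejos.util.Matrix;

public class OmniKinematics {
    static final double D = 0.1; // wheel distance from robot center in m (assumed)

    // Each row maps robot speed (vx, vy, omega) to one wheel's surface speed.
    static final Matrix K = new Matrix(new double[][] {
        { -Math.sin(0.0),             Math.cos(0.0),             D },
        { -Math.sin(2 * Math.PI / 3), Math.cos(2 * Math.PI / 3), D },
        { -Math.sin(4 * Math.PI / 3), Math.cos(4 * Math.PI / 3), D }
    });

    // One matrix multiplication turns robot speed into wheel speeds.
    public static Matrix wheelSpeeds(double vx, double vy, double omega) {
        return K.times(new Matrix(new double[][] { { vx }, { vy }, { omega } }));
    }
}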

Here is another video. This time it features the dIMU from Dexter Industries.

Last time I reviewed the gyro of Dexter Industries IMU sensor. This time I’ll take a look at the accelerometer of the IMU. I’ll compare this one to the MindSensors accelerometer and to the one on my custom IMU.

The three accelerometers in the test are all 3-axis digital sensors that support high-speed I2C. They all allow you to set the dynamic range: the MindSensors sensor supports 1.5, 2, 4 and 6G, the dIMU supports 2, 4 and 8G, and my cIMU supports 2, 4, 8 and 16G. On the NXT you would normally use a 2 or 4G dynamic range.

I tested the sensors at 6 and 8G. The resolution (the minimum difference between two samples) of the MindSensors accelerometer is 1 milli-G for all dynamic ranges; the dIMU and the cIMU both have 16 milli-G at 8G. The MindSensors sensor has the best resolution. Later, I'll explain why.

The test protocol is very simple. The sensors are tested at the same time. First they are horizontal, then they are tilted manually to an angle of 90 degrees and then they are turned back. The test tells something about the accuracy and noise levels of the sensors.

The first graph shows the unprocessed output of the Z-axis of each of the sensors. You would expect a value of 1000 milli-G when a sensor is horizontal and 0 milli-G when tilted. As you can see in the graph, the three lines are quite different. The MindSensors sensor gives smooth output with little noise. My sensor is very noisy; the line goes up and down. The dIMU has some noise too. What is more, the three lines are at different heights and none is at the expected level. Apparently the sensors need calibration to get the lines to the expected level, and the two IMUs need some smoothing as well. My IMU allows setting the internal sample rate; lowering this to 100 Hz removed some of the noise, so in later tests I used the lower sample rate.



Calibration of accelerometers is quite easy. The MindSensors sensor has an on-board calibration routine, but I advise you not to use it and to write your own instead (or use the program that MindSensors provides on their website). Calibration consists of determining an offset and a scale factor.

The offset is a value that you subtract from the raw value. It is equal to the value the sensor returns when it experiences zero G. For the MindSensors sensor used in the test this is about 125 milli-G. For the other two sensors it is a lot harder to find the offset value, as the values they return are not stable at all; one has to take the average of multiple measurements to find the offset. Offset values of accelerometers are quite stable over time, so once you have found the offset you can keep using it. But keep in mind that each of the axes has its own unique offset value.

Then there is scale. After an offset correction has been applied to the signal, one gets zero under zero-gravity conditions. This does not mean that one gets 1000 milli-G under earth's gravity; normally it is a few percent off. A scale factor corrects for this. The scale factor equals 1000 divided by the offset-corrected value the sensor returns under earth's gravity. The formula to get a calibrated acceleration signal is:

value = (raw_value - offset) * scale

That's all there is to it. Scale, too, is quite stable; you can hard-code it in your programs once you have calculated it. You'll have to calculate it for each of the axes though. The next graph shows the calibrated output of the three sensors.
Apart from the noise levels, all sensors perform very well after calibration.
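
Expressed as code, the recipe might look like this one-axis sketch. The sample counts are arbitrary and readRaw() is a placeholder for the actual driver call:

double offset; // raw value at zero G
double scale;  // maps the offset-corrected 1G reading to 1000 milli-G

// Averaging suppresses the noise that makes a single reading unreliable.
double average(int n) throws InterruptedException {
    double sum = 0;
    for (int i = 0; i < n; i++) {
        sum += readRaw();
        Thread.sleep(10);
    }
    return sum / n;
}

// Call with the axis held horizontal, so it experiences zero G.
void calibrateOffset() throws InterruptedException {
    offset = average(100);
}

// Call with the axis pointing straight down, so it experiences one G.
void calibrateScale() throws InterruptedException {
    scale = 1000.0 / (average(100) - offset);
}

// The calibrated signal in milli-G.
double read() {
    return (readRaw() - offset) * scale;
}

double readRaw() { return 0; /* replace with the sensor driver call */ }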

The MindSensors sensor has a lower noise level and a better resolution than the other two sensors. This clearly is a big advantage of this sensor. But how come this sensor is so good? I suspect it is not the sensor itself but some on-board filter that stabilizes the signal. Therefore I decided to write a filter for the other two sensors as well. At first I tried a low-pass filter, as this is very easy to program and uses little memory, but this filter added a delay in the response, so I abandoned the idea. Then I tried a technique called over-sampling. Basically this means that one takes measurements as often as the sensor allows and calculates a moving average from them. This did the trick: the noise levels of both IMU sensors dropped dramatically and the resolution rose. Both IMUs are still noisier than the MindSensors accelerometer, but the difference is acceptable.
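
A sketch of the over-sampling idea: a small background thread samples the sensor as fast as it can and keeps a moving average. The window size of 8 and the readAxis() call are assumptions:

class OverSampler extends Thread {
    private final double[] window = new double[8]; // moving-average window
    private int next = 0;
    private volatile double average;

    public void run() {
        while (true) {
            window[next] = readAxis();          // sample as often as possible
            next = (next + 1) % window.length;
            double sum = 0;
            for (int i = 0; i < window.length; i++) sum += window[i];
            average = sum / window.length;      // smoothed, finer-grained value
        }
    }

    public double getAverage() { return average; }

    private double readAxis() { return 0; /* replace with the sensor driver call */ }
}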

Of course a filter like this does come with some extra cost. One has to program it, it has to run in a separate task so that your program can still do the other useful things it is meant to do, and the filter consumes processor time.

The test I performed was limited and not scientifically sound. However, I conclude that all sensors perform well under the test conditions. The MindSensors accelerometer is the best accelerometer for most of us, as its signals are the easiest to use. But it does not have a gyro on board as the IMU sensors do. So if you also want a 3-axis gyro, you should consider an IMU sensor. The dIMU costs just $15 more than the MindSensors accelerometer, which could be a good reason to go for the IMU straight away.

Hi,

Some time ago I asked you to come up with a name for my omniwheel robot. Now it is time to announce the winner.

The name of the robot is Koios, after one of the Greek Titans. Stephen, aka PigsCanFly, suggested the name Coeus, which translated into Dutch gives Koios. So Stephen is the winner.

The prize, a second-hand HiTechnic gyro, will be sent by airpig to Stephen. Congratulations.

Thanks to everybody that participated.

If you visited this blog before you might know that I built myself an IMU sensor last year and that I wrote a filter to utilize this sensor to the max. It seems that if you want an IMU there is an easier way: you can buy one from Dexter Industries. As a matter of fact, they gave me one to evaluate. And so I did. I will present my findings in this post.

An IMU consists of a rate sensor, more often called a gyro sensor, and an accelerometer. IMUs are mainly used to determine the position and attitude of a robot. However, an IMU cannot measure these right away; instead it measures movement. The gyro measures rotational movement, the speed at which the sensor turns. The accelerometer measures acceleration, the change in speed of the sensor. These measurements can be integrated to give position and attitude. However, integration is a tricky process: small errors in the measurements build up to large errors over time. Therefore it is very important that the sensors provide data of good quality.

The Dexter Industries IMU, or dIMU, has a three-axis gyroscope (L3G4200D) that includes a temperature sensor. It also has a three-axis accelerometer (MMA7455L). Both are digital sensors addressed over the I2C protocol. They allow fast I2C, so if you have an environment that supports high I2C speeds, like RobotC, leJOS or NXC, you can query this sensor at very short time intervals (>100 Hz). There is a bright blue LED on the sensor, Dexter Industries' trademark. Some parts of the sensor board, including the LED, get warm; this makes the temperature sensor useless. The components on the board are not covered. This gives the sensor a high-tech look, but you should take care not to damage the components. The gyro and accelerometer are not aligned: the X-axis of the gyroscope corresponds to the Y-axis of the accelerometer. One should compensate for this in software, for example as sketched below.
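
A trivial sketch of such a compensation; which axes to swap, and whether any signs must flip, is an assumption you should verify against your own unit:

// Remap accelerometer axes so both sensors report in the same body frame.
// Here only X and Y are swapped; a real board may also need sign flips.
double[] alignAccelToGyro(double ax, double ay, double az) {
    return new double[] { ay, ax, az };
}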

Below are the results of some tests I did with the sensor. I compared the sensor to others that I have. I will focus on the gyro first.

I compared the gyro to two other gyros: the analogue gyro from HiTechnic and the digital gyro (ITG3200) on my custom-built IMU (cIMU). The HiTechnic gyro is an analogue one and measures one axis only. The analogue signal from this sensor has to be translated to a digital signal by the NXT brick. This limits the resolution of this sensor to 1023 raw values, or 1 degree per second. The maximum dynamic range of this sensor is about 600 degrees per second. The Dexter Industries gyro has a resolution of 9 to 70 millidegrees per second and a dynamic range of 250 to 2000 degrees per second. Range and resolution are user-selectable, where a higher dynamic range gives a lower resolution. The dynamic range of the ITG3200 on my IMU is 2000 degrees per second with a resolution of 70 millidegrees per second, comparable to the DI gyro.

Gyros have an offset: this is the value they return when motionless. One has to correct gyro readings with the offset to get the true rate of turn. The problem with the offset is that it changes over time or under different circumstances. Also, sensor signals are noisy, so it is not easy to determine a good offset value (see a previous post of mine for details). A faulty offset value, or a change in offset over time, results in faulty readings. Therefore the first test I performed was to see how stable the offset of the different gyros is. Below is a graph of the three gyros while motionless. A perfect gyro would return 0 all the time. But perfect gyros do not exist; there is always some noise in the signal, and that's why the lines are not straight. The noise of the HiTechnic gyro looks different from that of the other two. This is because the HiTechnic gyro signal is rounded off to whole degrees per second, while the other two have a finer resolution. This makes it hard to compare this signal to the other two. The dIMU is noisier than the cIMU.
After 60 seconds of continuous measurement one of the NXT motors was powered. This gives a drop in the HiTechnic gyro: its offset has changed as a result of powering the motor. It looks as if the gyro started to rotate very slowly, but it hasn't. Therefore you need to calibrate the gyro under circumstances comparable to those it will experience when running. Both digital gyros do not suffer from powering the motor. This is a strong argument in favour of digital gyros.
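
In code, such a calibration can be as simple as the sketch below; readRate() is a placeholder for the actual driver call, and the robot must be motionless, preferably with the motors in the state they will have during the run:

// Average the rate signal over about one second while the robot is motionless.
// Subtract the result from every subsequent reading to get the true rate of turn.
double calibrateOffset() throws InterruptedException {
    double sum = 0;
    final int samples = 200;
    for (int i = 0; i < samples; i++) {
        sum += readRate();
        Thread.sleep(5);
    }
    return sum / samples;
}

double readRate() { return 0; /* replace with the gyro driver call */ }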

The second test looks at the quality of the integrated gyro signal. The integrated signal gives the angle of the sensor (with respect to the starting position). This is a very important aspect of gyros, as you will quite often use this quantity in your programs. Integration takes away some of the noise, as it is averaged out. However, errors in the offset are not biased to zero, and these errors add up over time. Again the sensors were motionless; there is no change in angle, so one would expect the lines to stay at zero. This time there are four lines in the graph. The new, red line is also a signal from the HiTechnic gyro, but this time its signal is corrected for offset errors using some smart statistical methods. This functionality is part of leJOS.
The HiTechnic gyro without the dynamic offset correction performs worst in this test. The line quickly deviates from the x-axis, and this goes even faster when the motor is started (and the offset changes). With the dynamic offset correction the sensor performs very well; it stays close to zero. However, the algorithm cannot cope with the change in offset after starting the motor; the line then quickly drops.
My gyro gives a smooth line, but the line also drops quite fast. The smoothness of the line is due to the low noise level; the drop is caused by changes in offset. My gyro definitely could benefit from a dynamic offset correction.
The gyro from Dexter Industries stays around the x-axis. This means it has a very low change in offset, or an internal mechanism for offset correction. Either way, in this test this sensor clearly performs best.

The next graph shows the effect of dynamic offset correction on the digital sensors. My sensor clearly benefits from it: it stays closest to zero of the three sensors. The dIMU does not benefit from dynamic offset correction. As a matter of fact, its performance is worse than without it. This makes me believe that there is some internal offset correction in this sensor that interferes with the software offset correction I added. I conclude that you do not want to use dynamic offset correction with this sensor.

This is all for this time. Next time I will dive deeper into the accelerometer of the dIMU. Again I will compare it to two other accelerometers.

In a previous post I showed how the filter for the IMU sensor combines both gyro and accelerometer data to get drift-free, low-noise information about the robot's orientation in space. The filter performs a second function: it translates sensor output to a world frame of reference. In this post I'll explain what I mean by that and how the filter works in this respect.

Body frame of reference versus the world frame of reference

Suppose you are standing upright facing north when somebody next to you asks you to turn right by 180 degrees. After you have turned you'll be facing south. Clear enough. Now suppose you are lying on the ground, face down and head pointing north. If the same person asks you again to turn right by 180 degrees, you will have to consider two options regarding the meaning of the word right. The first option is that right is in reference to your body. If you turn like this you'll end up face up, with your head still pointing north. The second option is to treat right in reference to the person that asked the question. If you turn like that you'll end up still facing down, but with your head pointing south.

The same goes for robots and the sensors they carry. Your robot might have a gyro that indicates that the robot turns clockwise, but if the robot is upside down you'll see it turning counter-clockwise. This is because the robot and the gyro attached to it have a different reference than you have.
To distinguish between the two references we call the first one the body frame of reference and the second one the world frame of reference. The world frame of reference is supposed to be inert: a global frame that everything can relate to. Words like up, down, north and south relate to the world frame of reference. The body frame of reference belongs to the robot and its sensors. When the robot moves, its body frame of reference moves along with it. Robot commands like drive forward are always with respect to the body frame of reference. Likewise, data from the sensors is also expressed in the body frame of reference.
In robotics you'll frequently have to translate information from one frame of reference into the other, for example when you calculate the position of the robot (world frame of reference) from the data of the tachometer (body frame of reference). The process of translating information from one frame to another is called transformation.

Transformations

A transformation requires the use of trigonometry. Suppose again that you want to calculate the position of the robot (world frame of reference) from the data of the tachometer (body frame of reference). Let us assume that the tachometers indicate that the robot has travelled straight forward for 30 cm. In terms of the body frame it moved 30 cm along the x-axis and 0 cm along the y-axis. Suppose the robot was facing east when it travelled, and you have called the east-west axis the y-axis, as is customary. It is clear then that in the world frame of reference the position of the robot increased by 30 cm along the y-axis and stayed the same along the x-axis. In mathematical terms the transformation from body frame to world frame is:

Xw = Xb * cos(a) - Yb * sin(a)
Yw = Xb * sin(a) + Yb * cos(a)

Here the parameter a stands for the angle between the robot (or better, the body frame of reference) and the world. In the above example a = 90 degrees, making cos(a) = 0 and sin(a) = 1. As a result the x and y values are swapped in the transformation.
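
In code the transformation is just a few lines (with a in radians):

// Body-to-world transformation in 2D; a is the angle between the two frames.
double[] bodyToWorld(double xb, double yb, double a) {
    double xw = xb * Math.cos(a) - yb * Math.sin(a);
    double yw = xb * Math.sin(a) + yb * Math.cos(a);
    return new double[] { xw, yw };
}
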
Transforming sensor output to the world frame of reference is the second function of the IMU filter. However, it does so in 3D space, not in 2D space as in the example above.

The preferred way to model a transformation mathematically is by using matrices and vectors.

This is the preferred way as it offers a convenient and short way to describe both the transformation and the sensor output. Without going into too much detail: the transformation is stored in a transformation matrix of 3 by 3 elements, and the sensor output is stored in a vector of 3 elements. To transform a vector from the body frame of reference to the world frame of reference, one executes a matrix multiplication on the body vector using the transformation matrix. In mathematical terms:

Vw = Vb X T

Where T stands for the transformation matrix. To go from world to body the formula is:

Vb = Vw X Tinv

Where Tinv represents the inverse matrix of T.

Internally the filter maintains a transformation matrix. The robot's current attitude can be calculated from this matrix.
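
As a sketch, using the same JAMA-style Matrix class as in the compass-rose example (written with column vectors, so the matrix appears on the left of the multiplication):

import lejos.util.Matrix;

public class Frames {
    // Body -> world: Vw = T x Vb.
    public static Matrix toWorld(Matrix T, Matrix vBody) {
        return T.times(vBody);
    }

    // World -> body: Vb = Tinv x Vw. For a pure rotation matrix the
    // transpose equals the inverse and is much cheaper to compute.
    public static Matrix toBody(Matrix T, Matrix vWorld) {
        return T.inverse().times(vWorld);
    }
}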

Last week I spent my evenings redesigning my holonomic platform. It had to be wider in order to get the center of gravity (relatively) lower. I also wanted to gear it up to make it more agile. And I wanted the brick to be better accessible, especially the USB port. Two other demands remained from Sidbot: the wheel angle should be adjustable, so that the robot can be used both on a flat surface and on top of a large ball, and it had to be sturdy.

As it had to be sturdy I decided that the motors should be fixed to the frame. Sidbot had adjustable motors to make the wheel angles adjustable. The wheels have to be adjustable on the new robot as well, so the hinge point had to be somewhere in the gear train, between motor and wheels. I tried different designs and gears, but I always ended up with grinding gears. At last I ended up using a 5x7 liftarm to house part of the gear train. This effectively stopped the grinding, but resulted in a very wide wheel housing as well. This is not so pretty, so I'm still trying to improve this part. However, I now have a 2:1 gear ratio. With a motor speed of 120 RPM and a wheel diameter of 48 mm this gives the robot a top speed of 30 cm per second.

The frame of the robot consists of two triangles, stacked vertically. The triangles are made of three 11-stud liftarms connected at the ends with 3- and 4-stud liftarms. This makes a very rigid frame, and the brick lies on top of it, making it easily accessible. The motors are mounted horizontally with the flat side down. This gives the robot width and also the ground clearance that is needed when riding on a ball. To prevent torsion between motor and frame I made a housing from L-shaped liftarms for the motors.
I used light bluish gray and dark bluish gray for the color scheme, as these are the colors of the motors and the brick. The result is a bit dull but still rather nice looking. It resembles a Star Wars-like vehicle. Maybe I should mount some laser guns on top.

The resulting robot meets all my design specifications, but I have not been able to test it yet as I'm one 40-tooth gear short. I hope to get it this week.

The robot still needs a name. If you have a good suggestion you can post it in the remarks. There is a nice prize for the winner: a second-hand HiTechnic gyro sensor. Submit your entry before November 2011.

Here is a quick post with some graphs that show the effect of the filter I use for my IMU.

You might have seen a previous post where I showed that my sensor is capable of detecting a grain of sugar under the NXT. If you haven't seen it, check the post and video here. A few months ago I changed from RobotC to leJOS, and I have now rewritten the sensor drivers and the filter in Java.

The leJOS developers are currently working on a very nice logging feature that allows users to display data from the NXT in real time on the PC. I got hold of the beta code and tested it without any problems. I used the NXTDataLogger (this is the name of the Java class) to make the intrinsics of the IMU filter visible.

The IMU sensor combines a 3-axis gyroscope and a 3-axis accelerometer. The filter reads the output from both sensors, combines their signals in a smart way, and provides stable information about the orientation (yaw, pitch and roll) of the robot.

An accelerometer can provide tilt data (pitch and roll), as can be seen in the first graph. This sensor however has two serious drawbacks. First, it is quite noisy; that's why the graph lines seem so hairy. Second, it does not distinguish tilt from acceleration, so the output from this sensor can have different causes. This is why you cannot make a balancing robot with just an accelerometer.

A gyroscope is far less noisy; that is why the second graph shows smooth lines. It is also not affected by acceleration. However, this sensor too has its particularities. First, it doesn't measure tilt, it measures rate of turn. This can only be translated into tilt if the starting condition is known. To get the tilt at any time you have to integrate all the readings onto the starting condition. In the process small errors are introduced, and in the long run they add up to a large error. The green line in the graph shows this effect. At about 42 seconds the sensor rotates faster than its maximum range (>2000 degrees/second); it then gives faulty readings, and these are integrated into the calculated tilt. As a result the line ends higher than it started, even though the sensor was turned back to its original position. The second effect that makes a gyro hard to use in long runs is called drift. Due to temperature changes or voltage drops, the base level of a gyroscope changes. This means that after some time the gyro seems to be turning slowly even when it is stationary. This effect is well visible in the blue line, which goes up slowly. It cannot be eliminated by calibration, unless you are able to calibrate the sensor during the run.

The filter is like a PI-controller. It uses the data from the gyroscope as the basis of its output; the lines in the third graph, which shows the output of the filter, are smooth because of this. It uses the data from the accelerometer to correct the output for any errors that have built up, but it does so slowly in order to keep the noise out of the output signal. The P-value of the PI-controller corrects for errors in the gyro signal. It also makes the filter useful when starting conditions are unknown. The filter also sums past errors to get an I-value; this value is used to correct for drift in the gyro data. The result is a graph with smooth, drift-free lines.
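
For a single axis the idea can be sketched as follows. This is a simplification, not the actual filter code; kP and kI are assumed tuning constants, and gyroRate and accelTilt stand for calibrated sensor readings:

// One axis of a complementary filter with a PI correction term.
double angle = 0;     // the filter output, in degrees
double errorSum = 0;  // summed past errors, the I-value that corrects drift
final double kP = 0.02, kI = 0.0001; // assumed tuning constants

void update(double gyroRate, double accelTilt, double dt) {
    angle += gyroRate * dt;              // integrate the gyro: smooth but drifting
    double error = accelTilt - angle;    // the accelerometer: noisy but drift-free
    errorSum += error * dt;
    angle += kP * error + kI * errorSum; // slowly pull the output back on track
}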

Two final notes. First, the green line (the yaw line) is not drift-free. This is because an accelerometer cannot provide information about orientation in the XY-plane; you need a compass sensor for that. Second, the values are expressed as the cosine of the angle, so 1 corresponds to zero degrees and 0 corresponds to 90 degrees.

Here is a short update on my work.

I’m doing two projects at the same time.
First, there is Sidbot. It is waiting until I master Java threads.
Second, I'm building a holonomic or Killough pilot in leJOS. This project hangs on me not knowing how far the robot has travelled; this info is needed for the pilot to work with a navigator.
Actually, my main project now is learning Java. This goes well, but there is a lot to it.

A few weeks ago I ordered two more BlinkMs. These are I2C-driven full-color LEDs. I wrote a driver to address them, and they are now mounted on the corners of Sidbot, so Sidbot can shine all colors to all sides of its triangular body. Each LED is driven individually, so each one can have its own color. I made a nice light show to test the driver.
I finished the drivers for my IMU too. The filter that fuses the signals is also finished, but it can't run in a separate thread yet.

I also have been on holidays and busy with work. But these matters don’t concern this blog.

This weekend I solved some hardware problems with the NXT and my custom sensor.

The screen of my NXT was behaving strangely the last few weeks. Sometimes the display shifted down or to the left a couple of pixels, and one time it was even upside down. Others had this problem before me and found the cause: a bad connection of one of the capacitors on the secondary board in the NXT. This board is rather empty (good for me); it houses just three capacitors and the circuitry for the buttons. These same buttons are probably the cause of the bad connection. Pressing a button puts strain on the board, which in the end can break the solder between board and capacitor. Heating and melting the connection with a soldering iron can repair this. And so I did. As said, there are three capacitors; the first two I soldered successfully, but the third one I accidentally took off the board. @&$##%. I had to reattach that thing of under 2 mm in size. To make things worse I didn't know which side was positive, so I took tweezers and a magnifying glass to inspect it. No sign whatsoever. And then disaster struck. I squeezed too hard and the capacitor went airborne. Gone! After a long search on my knees I found it back on the floor. Phew. I soldered it back on and the NXT screen worked fine again. It was a nerve-wracking experience.

You might remember my IMU sensor. When I built it I included a BlinkM full-color LED in the housing. This was a design error, as it shared the sensor port with the IMU: the BlinkM took I2C bandwidth that I badly needed for the IMU. So I decided it had to go. After I removed it I tested the IMU sensor again. Guess what: the problems I experienced with the higher I2C speeds that RobotC and leJOS offer were gone! This means I can query my sensor more often, leading to better information. Using leJOS I can query the IMU in just over 2 ms. Add some processing time to that, and some time needed by other processes, and it may still be less than 10 ms. An update frequency of 100 Hertz is within reach.

Overall, this has been a successful weekend. My success encouraged me to start on some new things. I want the BlinkM to be daisy-chained with two more LEDs. Remember, my robot is triangular: I want a LED on every corner and have some neat light effects in mind. I ordered two more BlinkMs and some connectors that allow me to daisy-chain them.
I also ordered a 3-axis magnetometer. This will make it possible to calculate heading even when the robot is tilted. The idea is to feed this info into my complementary filter, thus further improving the quality of the data it outputs.
The remainder of my money I spent on a digital pocket oscilloscope. I really missed having one when building the IMU.

In the coming weeks I will convert my filter to leJOS. I'll also start working on a physical model of my robot in leJOS. The purpose of the physical model is to abstract the robot design within the code. It should make it possible to say “go north with a speed of 0.3 m/s” instead of “speed of motor A = 70, speed of motor B = -70, speed of motor C = 0”.
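
Something like this hypothetical interface is what I have in mind (all names are made up):

// World-frame motion commands; an implementation translates them into
// individual motor speeds based on the robot's geometry.
interface PhysicalModel {
    void travel(double heading, double speed); // heading in degrees (0 = north), speed in m/s
    void stop();
}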