Archives for posts with tag: NXT

I got some feedback on my last post with “building instructions” for Agilis. My friend Kirk wondered why I didn’t make proper instructions. He was not impressed by my lame excuses about odd angles or illegal builds and showed me it could be done. He was right; it was about time for me to master some Lego CAD tools. I chose to use LDraw and MLCad for the job. My fear of a steep learning curve proved unfounded. The manual is quite good and I was up and running within an hour. The hardest thing to learn was the discipline to work precisely right from the start. One cannot build a rough sketch first and then improve the model. Well, technically you can, but you shouldn’t, as every brick you move triggers a chain of new moves that need to be made.

AgilisFront
It was fun to create proper building instructions for Agilis. Along the way I also improved the gearbox, as was suggested by another reader of my blog, Thimoty. You can freely download the instructions. I do appreciate constructive feedback; these are my first instructions and I am sure there is room for improvement.

Download the building instructions for Agilis

This time I will discuss the gyro sensor (again): what it measures, how to cope with offset and drift, and how to use it.

Gyro sensors are among the most widely used sensors in robots. They are often used in self-balancing robots for keeping upright. They are also used in autonomous robots for keeping track of orientation. Game controllers may use them as well to detect angular motion. A gyro sensor measures angular velocity; in other words, it measures how fast the sensor is rotating. This is most often expressed in degrees per second or radians per second. Some gyro sensors measure the angular velocity about one axis, others take measurements about two or three axes.

One should be aware that a rotating object always rotates about just one axis. This axis, however, can have any orientation. Compare this to speed. An object can move in just one direction at any given time. This direction, however, can be any direction in three-dimensional space. Most often we express the speed of an object over three perpendicular axes. A landing plane might fly at a speed of 600 km/h and descend at a rate of 5 m/sec. Wind might blow it off course at a rate of 0.5 m/sec. But still this plane moves in one direction only. The same is true for angular velocity: an object rotates at just one speed, but we express that speed over three separate axes.

Let us take a look at the output of a typical gyro sensor. The graph below shows the output of a digital three-axis gyro sensor (ITG-3200) over a period of 17 seconds. The X-axis is the time axis, the Y-axis shows the angular velocity expressed in degrees per second. Five seconds after the start of the measurement I rotated the gyro sensor clockwise by about 180 degrees. This took me about 3 seconds. After about 11.5 seconds I rotated the sensor back (counterclockwise).

gyro-1

This graph tells us quite a lot. First, we notice that the sensor is a three-axis sensor. It returns three different signals at any given time. Second, we can see that the rotation I made took place about the third axis, the Z-axis. It means that I rotated the sensor in the XY-plane. This is true: I had the sensor flat on my desk and I rotated it while keeping it flat. Third, a clockwise rotation is expressed as a negative angular velocity and a counter-clockwise rotation as a positive angular velocity. This is a matter of convention and is called the right-handed orientation system. Fourth, we can see that when at rest the gyro signal is close to, but not exactly, zero. This is because the sensor is not perfect; it has a bit of an error. We’ll look into this error and ways to compensate for it later. And finally, we cannot read the change of 180 degrees in orientation from the graph. This is because the sensor does not measure orientation; instead it measures angular velocity (speed of rotation). It is, however, possible to calculate the (change in) orientation from the angular velocity, as I’ll explain later.

Offset and drift correction

Let us concentrate on the error in the sensor signal right now. Below is a graph taken over a period of ten minutes while the sensor was at a standstill all the time.

gyro-2

A perfect sensor would output an angular velocity of zero for all three axes all the time. Obviously this sensor does not. The X-axis signal is around 2 instead of zero. This error is called the offset error. Every axis has its own offset error. For the X-axis it is around 2, for the Y-axis it is about 3.3 and for the Z-axis it is about -1.5. It is easy to correct for the offset error once you know how big it is. You just subtract the offset error from the sensor signal to get a corrected value. The offset error itself can be calculated by taking the mean of a number of samples, say 100.
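Just to give an idea, the offset correction could look something like this (readRate() is a placeholder for whatever call returns the raw rate of one axis in degrees per second):

public class GyroOffset {
	private double offset;

	// Average n samples taken while the sensor is at a standstill.
	public void calibrate(int n) {
		double sum = 0;
		for (int i = 0; i < n; i++) {
			sum += readRate();
		}
		offset = sum / n;
	}

	// Offset-corrected angular velocity.
	public double read() {
		return readRate() - offset;
	}

	// Placeholder: replace with the real call to your gyro driver.
	private double readRate() {
		return 0;
	}
}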

The offset itself may seem constant, but in reality it is not. The offset of a sensor is influenced by several factors and can change over time as a result. This is called sensor drift. One of the biggest factors contributing to sensor drift is temperature. You can notice this when you start using a sensor. While being used, the temperature of the sensor rises a bit; it gets hotter than the ambient temperature. As a result the offset of the sensor changes. If you need a very good signal you should take this into account and let the sensor warm up before calculating the offset.
Some gyro sensors, like the Hitechnic gyro for example, are seriously affected by changes in input voltage. As the NXT is battery powered this is a serious problem. Starting the motors of the NXT results in a power drop and thus in a change in offset. There is a trick to avoid this if you have a sensor mux with an external power supply, like this one from Mindsensors. In general I advise you to choose another gyro.
Even when temperature and power are constant the offset of a gyro will still vary a bit over time. This variation is called random walk.

There is a very elegant technique to deal with sensor drift: constantly but slowly update the offset error of the sensor. Instead of treating the offset as a constant, you treat it as a value that can change over time. To calculate the offset you now use the moving average of the most recent samples. That way you always have an up-to-date offset. Calculating a moving average, however, is CPU intensive and you also need a lot of memory to store the samples. Therefore it is better to use a low-pass filter instead of a moving average. This takes no extra memory and little computing power, but the effect is much the same.
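A sketch of this drifting-offset correction with a first order low-pass filter could look like this (ALPHA is just an example smoothing factor; the closer to 1, the slower the offset is allowed to change):

public class DriftCorrection {
	private static final double ALPHA = 0.999; // tune for your sample rate
	private double offset = 0;

	// Feed every raw sample through this method; it returns the corrected rate
	// while slowly tracking the offset.
	public double correct(double rawRate) {
		offset = ALPHA * offset + (1 - ALPHA) * rawRate;
		return rawRate - offset;
	}
}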
This technique works very well when the sensor is at a standstill. But will it work well when it is rotating? Any rotation will influence the calculated offset. So, if possible, one should pause updating the offset while the sensor rotates. Sometimes another sensor can help to detect a rotation: a compass, an accelerometer or motor encoders can all be of help.
However, there is also another solution. This one is based on the notion that a rotation in one direction is very often compensated by a rotation in the other direction. A balancing robot, for example, stays upright, so in the long term it does not rotate. It might lean forward and backward for short moments of time, but these forward and backward rotations cancel each other out in the long run. So in the long run the average signal of the sensor equals the offset of the sensor, even when there are rotations from time to time. This means that even under dynamic circumstances one can constantly update the sensor offset. One only has to make sure to use enough samples so that there is enough time for rotations to cancel each other out. This technique is useful for balancing robots, where rotations are short. It is less useful for slow-turning robots, where rotations have a long duration.

Converting angular velocity to direction

There are situations where one needs to know the direction of the sensor. In navigation, for example, one needs to know in what direction a robot is heading. A gyro can provide this kind of information, with some limitations. So how do you transform angular velocity into direction? This is done by integration. That might sound difficult, but it really is not. If a robot rotates for 2 seconds at a speed of 90 degrees per second, it has rotated 180 degrees. So integration is nothing more than speed multiplied by time. The graph below shows both the (offset corrected) angular velocity and the direction. During the test I rotated the sensor four times: 90 degrees clockwise, 360 degrees counterclockwise, 90 degrees counterclockwise and 90 degrees clockwise. After this the sensor was back at its start direction.
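In code the integration is nothing more than an accumulating sum. Something like this (rate is the offset corrected angular velocity in degrees per second, dt the sample interval in seconds):

public class GyroHeading {
	private double heading = 0; // degrees, relative to the start orientation

	// Called once per sample: 90 deg/s for 2 s adds 180 degrees to the heading.
	public void update(double rate, double dt) {
		heading += rate * dt;
	}

	public double getHeading() {
		return heading;
	}
}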

gyro-3

The line representing the direction, labeled Rotation 2, clearly shows the steps of 90 degrees that I made. Careful examination of the data shows that according to the sensor the robot rotated -358.8 degrees at most, whereas I tried to rotate it -360 degrees. This makes the sensor, and the integration, pretty accurate. However, after turning the sensor back to its start direction the calculated direction is not zero, as it should be. Instead, it is about 5.7 degrees. This is not so good. What makes it even worse, there is no way to correct this, at least not without the aid of another sensor or user intervention. This is the main drawback of integration: over time, small errors in the (offset corrected) signal build up to a large error in the integrated data.

But integration can be very useful nevertheless. Suppose a robot needs to make exact turns. Using integration you can do so, but you need to reset the direction to zero just before making the turn. This way the error in each turn will only be equal to the integration error that built up during that turn, which is small as making a turn does not last that long. In other words, you can make your individual turns accurate, but not the overall direction of the robot.

But wait, there is one more thing. Integration only gives a change in direction. To know the real direction one should also know the direction before the changes took place. Quite often this initial direction can be assumed to be zero. But this is arbitrary and does not relate to the world, although it might be all you need. If you need to relate the orientation of your sensor to the real world, you need to align the sensor with the real world (make it point north) or you need another sensor that can do this for you. This could be a compass sensor.

In case you might wonder how fast or accurate Agilis is, here are some numbers.

Setup

  • The gear ratio of the motors to the wheels is 1:2, making the wheels rotate twice as fast as the motors. (It is possible to change the gear ratio to 1:3).
  • Prototype orange Rotacaster wheels. This is the hard compound. There is also a medium compound (gray) and a soft compound (black) available.
  • The batteries were rechargeable NiMH, 2500 mAh batteries. These were not fully charged.
  • The test surface was a clean linoleum floor.

Speed

  • Reliable top speed is about 60 cm/sec, equivalent to 2.16 km/h or 1.35 mph. At this speed the robot is still accurate as there is ample margin for the PID controllers of the motors.
  • Unreliable top speed is about 75 cm/sec, equivalent to 2.7 km/h or 1.68 mph. At this speed the robot is not very accurate, especially the heading.

Accuracy

  • The test track is a square with sides of one meter. During each run the test track is traveled 4 times, making the total distance 16 meters.
  • The robot finishes the test track on average within 10 cm of its starting position. Expressed as a percentage of the total distance the error is about 0.6%.
  • The movement error is systematic. The robot always ends up above and to the right of the starting position.
  • The robot is more accurate at slower speed and acceleration settings.

The image shows the result of the accuracy testing. For each test the robot was placed exactly on the origin (bottom left in the picture). It then traveled the one-meter square four times, making the total distance traveled 16 meters. The finish location of the robot was then marked on the floor. This test was repeated three times for each set of speed and acceleration settings. Three different dynamic sets were used: speed 50 cm/sec with acceleration 100 cm/sec^2, speed 50 cm/sec with acceleration 750 cm/sec^2, and speed 30 cm/sec with acceleration 60 cm/sec^2.

foto (7)

I want to repeat the tests with a 1:3 gear ratio and also with the black Rotacaster wheels.

This Christmas holiday I started working on a new robot, called Agilis. This robot should be a very agile and elegantly moving robot. The frame is based on a triangle equipped with holonomic wheels. So you might think, “What’s new, it is like your last robot?”. From the outside this is true, but it gets new and better brains on the inside. Let me tell you what I envision.

Most robots I built went from point A to point B, only then to decide what to do next. Others just drove around avoiding obstacles. This one should be able to do both at the same time. Agilis must be able to perform complex manoeuvres, like driving in a straight line while rotating around its centre, or driving an arc while staying pointed at an arbitrary spot. It should constantly use sensory input to stay on course, or to alter its course if needed. And all this must go fluently, just like a cat walking through the room.

Over the next several posts I will discuss the different aspects of Agilis. This first post deals with the drive system.

The chassis

geartrain IMG_8701

Agilis is a three wheeled holonomic platform. This means it can drive in any direction without turning. It can turn around any point, even around its own center. Each wheel is driven by an NXT motor via a gear train with a 1:2 ratio; the wheels rotate twice as fast as the motors. This makes Agilis a relatively fast robot. The gear train has to be very sturdy to handle the torque of the motors. It also has to be precise to allow for odometry. I used the same setup that I developed for my last robot, Koios the Guard bot.

From robot speed to motor speed

It is not very easy to control a holonomic robot; it takes some math. I created a kinematic model that does just that. The model takes robot speed as input and gives motor speed as output. Robot speed is expressed as speed in the x-direction, speed in the y-direction and rotational speed. Motor speed is expressed in encoder ticks per second.

So what does this kinematic model look like? For a single wheel it is a function that takes the three robot speeds as input. For the three wheels together it is a matrix multiplication that multiplies a robot speed vector {xSpeed, ySpeed, angularSpeed} with a kinematic matrix. The resulting vector contains the speed of each of the three wheels. Let’s take a look at the single wheel function first.

To translate robot speed into motor speed one needs to know some physical aspects of the robot, the wheel and the motor. How big is the wheel, how far is it from the center of the robot, under what angle is it mounted, what is the gear ratio of the gear train and what is the number of encoder ticks per full cycle of the motor? With all this information one can write a formula to calculate motor speed from robot speed. Here is the formula.

motorSpeed =
xSpeed * cos(wheelAngle) * nEncoderTicks / (gearRatio * 2 * PI * wheelRadius) -
ySpeed * sin(wheelAngle) * nEncoderTicks / (gearRatio * 2 * PI * wheelRadius) +
angularSpeed * distanceToCenter * nEncoderTicks / (gearRatio * 2 * PI * wheelRadius)

This formula might look daunting at first, but on second glance you might notice that there are a lot of constants in it. If you substitute the constants with their respective values you will end up with a much simpler formula.

motorSpeed = xSpeed * aConstantValue - ySpeed * anotherConstantValue + angularSpeed * yetAnotherConstantValue

This formula is not only much simpler, it is also very efficient to calculate: just three multiplications and two additions. An NXT can do this in no time. But remember, these constants are not the same for all the motors, because each of the wheels has a different wheelAngle. You could also have wheels of different sizes, or differences in any of the other aspects. This means that you will have a formula for each of the motors; each formula has the same structure but its own constants. These constants can be stored in a matrix, where each row contains the three constants belonging to a single wheel, so the matrix has a row for each of the wheels. If you then take the speed vector and multiply it with the matrix, all formulas are calculated at once and the result, the motor speed, is stored in a new vector. Each row in this vector holds the speed of a single motor. In Java this matrix multiplication would look like this:

Matrix motorSpeed = kinematicModel.times(robotSpeed);

Wow, now things look simple at last! This is the beauty of matrix algebra.
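Just to sketch how such a kinematic matrix could be filled, one row of constants per wheel, following the formula above (this assumes the JAMA style Matrix(double[][]) constructor of the Lejos Matrix class; the wheel parameters are examples):

import lejos.util.Matrix;

public class KinematicModelBuilder {

	// wheelAngle and distanceToCenter hold one value per wheel (radians and cm).
	static Matrix build(double[] wheelAngle, double[] distanceToCenter,
			double wheelRadius, double gearRatio, double nEncoderTicks) {
		double[][] m = new double[wheelAngle.length][3];
		double ticksPerCm = nEncoderTicks / (gearRatio * 2 * Math.PI * wheelRadius);
		for (int i = 0; i < wheelAngle.length; i++) {
			m[i][0] = Math.cos(wheelAngle[i]) * ticksPerCm;   // constant for xSpeed
			m[i][1] = -Math.sin(wheelAngle[i]) * ticksPerCm;  // constant for ySpeed
			m[i][2] = distanceToCenter[i] * ticksPerCm;       // constant for angularSpeed
		}
		return new Matrix(m);
	}
}

With the robot speed stored as a 3x1 matrix {xSpeed, ySpeed, angularSpeed}, the one-liner above then gives the three motor speeds at once.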

The same kinematic model can be used to transform robot acceleration into motor acceleration. I use this to make my robot accelerate very smoothly. (the regulated motor class of Lejos supports regulated acceleration).

From tacho counter to robot position

To drive a robot this kinematic model works perfectly. But I also want to be able to do things the other way around: I want to calculate the robot position from the encoder values. At first I couldn’t figure this out at all; the math was just too complex for my mind. That is, until I realized that I just needed to use the inverse of the kinematic model.

deltaRobotPose = deltaMotorPosition * inverseKinematicModel

Here deltaMotorPosition is a vector containing the change in encoder value of each of the motors since the previous measurement. The inverseKinematicModel is the kinematic model inverted. And deltaRobotPose is the change in pose (x and y position and heading) of the robot. Looks simple, doesn’t it? The problem is how to calculate the inverse matrix of the kinematic model. I can’t tell you, because I don’t know. But hey, somebody else already programmed this in Java. I just used the inverse method of the Matrix class.
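A minimal odometry sketch along these lines, using the same column vector convention as the motorSpeed example above (so the inverse model multiplies from the left; dA, dB and dC are just my names for the encoder changes of the three motors):

import lejos.util.Matrix;

public class OdometrySketch {
	private final Matrix inverseKinematicModel;

	public OdometrySketch(Matrix kinematicModel) {
		inverseKinematicModel = kinematicModel.inverse(); // computed once
	}

	// Change in encoder ticks of the three motors since the last call goes in,
	// the change in pose {dx, dy, dHeading} in robot coordinates comes out.
	public Matrix deltaPose(double dA, double dB, double dC) {
		Matrix deltaMotorPosition = new Matrix(new double[][] { { dA }, { dB }, { dC } });
		return inverseKinematicModel.times(deltaMotorPosition);
	}
}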

From the robot’s coordinates to the world’s coordinates

There is just one more thing to it. The robot can have any heading, which means that the x and y coordinates of the robot are not aligned with the x and y coordinates of the world. To be able to steer the robot to a certain position in a room, one must be able to convert this location to a location as the robot sees it. The same goes for keeping track of pose. We have seen the formula to calculate the change in pose from the wheel encoders. This change, however, is a change as the robot sees it, not a change in the robot’s position in the world. The translation from world coordinates to robot coordinates can also be done with a simple matrix multiplication using a rotation matrix. The rotation matrix itself can be calculated from the heading of the robot.

Suppose you want to drive your robot to the left side of the room. The speed vector in the world frame would look like {0, speed, 0}. This can be multiplied with the rotation matrix to get a speed vector as the robot sees it.

robotSpeed = worldSpeed * rotationMatrix

If we want to go the other way around, to get the change in pose in the world frame, we multiply the change in the robot frame with the (you guessed it) inverse of the rotation matrix. For rotation matrices the inverse is the same as the transpose of the matrix, and the transpose is far more efficient to calculate.
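A sketch of such a rotation matrix, built from the heading in radians (again using the Lejos Matrix class). Multiplying a world frame vector {x, y, angle} by this matrix gives the vector in robot coordinates, and the transpose goes back to world coordinates:

import lejos.util.Matrix;

public class RotationSketch {

	public static Matrix forHeading(double heading) {
		double c = Math.cos(heading);
		double s = Math.sin(heading);
		return new Matrix(new double[][] {
			{  c, s, 0 },
			{ -s, c, 0 },
			{  0, 0, 1 }  // the heading component itself is not rotated
		});
	}
}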

Wrap up

This really is all there is to driving a robot. To sum it up: you have a kinematic model to translate robot speed into motor speed, and a rotation matrix to translate things from world coordinates to robot coordinates.
The same goes for odometry. You have the inverse of the kinematic model to translate a change in encoder values into a change in robot pose expressed in robot coordinates, and the inverse of the rotation matrix to translate a change in robot pose from robot coordinates into world coordinates.
The kinematic model is a constant; it has to be calculated only once (unless your robot changes shape). The rotation matrix, on the other hand, has to be updated every time the heading of the robot changes.

The implementation

The robot uses Lejos as its brains. Lejos has some excellent classes to drive the NXT motors. The regulated motor class that I used is able to rotate a motor at any given speed. This speed is maintained no matter what the conditions are. It also supports setting an acceleration rate. This is very good for my robot, as for most movements the wheel speeds of the three motors differ. If all wheels accelerated equally, the slower moving motors would reach their target speed sooner than the faster spinning motors. This results in a robot that shivers and shakes during acceleration (or braking). All this can be avoided by setting an acceleration value for each of the motors. The ratio of the acceleration values must be the same as the ratio between the (differences between current speed and) target speeds of the motors. If done properly, the robot accelerates very smoothly without jerky movements.
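The idea in code, as a sketch (not the actual pilot code), using the setSpeed and setAcceleration methods of the regulated motor class:

import lejos.nxt.NXTRegulatedMotor;

public class SmoothDriveSketch {

	// Give each motor an acceleration proportional to how much its speed has to change,
	// so that all wheels reach their target speed at the same moment.
	static void setSpeeds(NXTRegulatedMotor[] motors, int[] currentSpeed, int[] targetSpeed, int maxAccel) {
		double maxDiff = 1;
		for (int i = 0; i < motors.length; i++) {
			maxDiff = Math.max(maxDiff, Math.abs(targetSpeed[i] - currentSpeed[i]));
		}
		for (int i = 0; i < motors.length; i++) {
			double diff = Math.abs(targetSpeed[i] - currentSpeed[i]);
			motors[i].setAcceleration((int) Math.max(1, maxAccel * diff / maxDiff));
			motors[i].setSpeed(Math.abs(targetSpeed[i]));
		}
	}
}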

Lejos also has a Matrix class that helps to perform matrix algebra. I used this class to store the (inverse) kinematic models and the rotation matrix. I subclassed it to make a Pose Class that can be used together with the matrix class.

To create the kinematic model I developed a Builder class. This class has all kinds of methods to describe the robot, the wheels, the motors and the gear train. When you are done describing the robot, the builder class delivers you a kinematic model and an inverse kinematic model.

To drive the robot I made a kind of pilot class. I plan to discuss it in a future post. This pilot accepts the kinematic model in its constructor.

For odometry I made another class, the Odometer. This class uses the inverse kinematic model.

 

This weekend I was wondering whether it is possible to use triangulation to pinpoint the position of an object using two ultrasonic sensors. So I made a simple robot and wrote a small program to find out. It turns out this can be done.

Triangulation is a technique that uses angles or distances between points to calculate the positions of these points. GPS localization, for example, uses triangulation. The technique is based on the law of cosines. This law allows you to calculate the angles of a triangle given the lengths of its sides, but also to calculate the length of one side given the lengths of the two other sides and the angle of the opposite corner. This might sound complicated, but they teach it in high school, so you might know it.
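In code the two forms look like this (a little helper of my own, with side c opposite corner C, the angle between sides a and b; the full program further down uses exactly these expressions):

public class LawOfCosines {

	// Length of side c, given sides a and b and the enclosed angle C (radians).
	static double side(double a, double b, double C) {
		return Math.sqrt(a * a + b * b - 2 * a * b * Math.cos(C));
	}

	// Angle C (radians) opposite side c, given all three sides.
	static double angle(double a, double b, double c) {
		return Math.acos((a * a + b * b - c * c) / (2 * a * b));
	}
}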

This means that a robot equipped with two US sensors can calculate the exact position of an object, provided it is in sight of both sensors. Potentially, this could be a huge benefit, because a single sensor has a wide (30 degree) detection beam, giving much uncertainty about the exact location of an object. A single sensor also deforms the apparent shape of an object. Practically this means that constructing maps using a single US sensor is impossible. Maybe, with two sensors and some triangulation, a map can be constructed, as the positions of the objects can be calculated.

Here is a picture of the setup I used to test whether this is practically possible with two Lego US sensors. The sensors can be rotated as they are mounted on the shafts of NXT motors. The sensors constantly point to the object in sight. If no object is in sight the sensors point straight ahead, waiting for an object. It is easy to see from the rotation of the sensors what they are “looking” at, which makes the behavior of the setup clear to any observer.

After some tests I am pleased with the effectiveness of this setup. It is accurate to within 2-4 cm. Whether this is enough to produce good maps I don’t know, but it is at least effective for a whole range of other tasks. I’m thinking of shooting cannons, search missions and object avoidance. The video gives you an impression.

The effective range of the setup is limited, as the picture to the right shows. I put a piece of Lego on every spot where I got a successful hit. (The eleven-hole beam is there for reference.) It is not as wide or long as you might think. I think there are two reasons for that. The main reason is the low resolution of the sensor, just one cm. This causes numerical errors whose effects are small along the forward pointing axis, but get larger towards the sides. The offset error of the sensors is the second cause that limits the sideways effectiveness. Scale errors of the sensors limit the length of the effective range.

You might wonder how well a rigid setup works, where the sensors are not allowed to rotate and home in on the object. Obviously the field of view is much smaller, and the sensors should be placed facing inward to maximize it. But I also noticed that the result is less accurate and less stable.

The program I wrote is very simple. It is just these two forms of the law of cosines embedded in some control structures.

package lejos.nxt.sensor.example;

import lejos.nxt.Button;
import lejos.nxt.LCD;
import lejos.nxt.Motor;
import lejos.nxt.NXTRegulatedMotor;
import lejos.nxt.SensorPort;
import lejos.nxt.sensor.filter.AdditionFilter;
import lejos.nxt.sensor.sensor.LcUltrasonic;
import lejos.util.Delay;

public class TwoUSSensors {

	static final int P=5;							// P component of the P-controller
	static final double MARGIN=0.8;		// Sets the margin of max distance for two measurements to assume a single object in sight
	static final float MAXRANGE=150;	// sets the maximum range of the sensor
	static final int MINANGLE=25;			// sets the minimum angle a sensor can rotate to (0 means the two sensors are facing each other)
	static final double b=20.5;				// sets the distance between the two sensors, measured from shaft to shaft
	static final float SENSOROFFSET=2.5f;

	static final LcUltrasonic sensorA=new LcUltrasonic(SensorPort.S1);
	static final LcUltrasonic sensorC=new LcUltrasonic(SensorPort.S4);
	static final AdditionFilter corrrectedA=new AdditionFilter(sensorA,SENSOROFFSET);  // to correct offset from shaft
	static final AdditionFilter corrrectedC=new AdditionFilter(sensorC,SENSOROFFSET);

	static final NXTRegulatedMotor C=Motor.C;
	static final NXTRegulatedMotor A=Motor.A;

	public static void main(String[] args) {

		Object run=new TwoUSSensors();

	}

	public TwoUSSensors() {
		double targetC=0, targetA=0;
		double  a,c;
		sensorA.setMode(LcUltrasonic.MODE_PING);
		sensorC.setMode(LcUltrasonic.MODE_PING);

		C.rotateTo(90, true);
		A.rotateTo(90, false);
		Delay.msDelay(500);

		while(!Button.ESCAPE.isDown()) {

			c=corrrectedA.fetchSample();
			a=corrrectedC.fetchSample();

			LCD.clear();
			LCD.drawString("       A     C", 0, 0);
			LCD.drawString("dist", 0, 1);
			LCD.drawString("dist'", 0, 2);
			LCD.drawString("target", 0, 3);
			LCD.drawString("error", 0, 4);
			LCD.drawString("state", 0, 5);

			LCD.drawInt((int) c, 7, 1);
			LCD.drawInt((int) a, 12, 1);

			if (a>MAXRANGE && c> MAXRANGE) {
				// it is assumed that there are no objects in sight
				 targetA = Math.PI/2;
				 targetC = Math.PI/2;
					LCD.drawString("-", 7, 5);
					LCD.drawString("-", 12, 5);
			}
			else {
				// it is assumed there are objects in sight
				if (Math.abs(a-c)>b*MARGIN) {
					// it is assumed that there are two different objects in sight
					if (a<c) {
						// it is assumed there is an object in front of sensorC
						LCD.drawString("-", 7, 5);
						LCD.drawString("+", 12, 5);
						targetC =Math.toRadians(C.getPosition());
						c=Math.sqrt(a*a+b*b-2*a*b*Math.cos(targetC)); // the distance between sensorA and object;
						targetA =	Math.acos((a*a-b*b-c*c)/(-2*c*b));
					}
					else {
						// it is assumed there is an object in front of sensorA
						LCD.drawString("+", 7, 5);
						LCD.drawString("-", 12, 5);
						targetA =Math.toRadians(A.getPosition());
						a=Math.sqrt(b*b+c*c-2*b*c*Math.cos(targetA)); // the distance between sensorC and the object;
						targetC =Math.acos((c*c-a*a-b*b)/(-2*a*b));
					}
					LCD.drawInt((int) c, 7, 2);
					LCD.drawInt((int) a, 12, 2);
				}
				else {
					LCD.drawString("+", 7, 5);
					LCD.drawString("+", 12, 5);
					// it is assumed that there is one object in sight
					targetC =Math.acos((c*c-a*a-b*b)/(-2*a*b));
					targetA =Math.acos((a*a-b*b-c*c)/(-2*c*b));
				}
			}

			LCD.drawInt((int) Math.toDegrees(targetA),7,3);
			LCD.drawInt((int) Math.toDegrees(targetC),12,3);

			// set speed
			LCD.drawInt(rotateTo(A, targetA),7,4);
			LCD.drawInt(rotateTo(C, targetC),12,4);
			Delay.msDelay(20);
		}

		// return to start position
		A.setSpeed(200);
		C.setSpeed(200);
		C.rotateTo(0, true);
		A.rotateTo(0, false);
		Delay.msDelay(500);
	}

int rotateTo(NXTRegulatedMotor motor,double target){
	int targetDeg=(int) Math.toDegrees(target);

	if (targetDeg<MINANGLE) targetDeg=MINANGLE;
	int error=targetDeg-motor.getPosition();
	if (Math.abs(error)<=1) return error;
	motor.setSpeed(error*P);
	motor.rotateTo(targetDeg, true);
	return error;
}

}


The Lego ultrasonic sensor, or US sensor for short, is a range sensor. It measures the distance between the sensor and an object in front of it. It does so by sending short beeps and measuring how much time it takes for a beep to bounce back. Knowing the speed of sound, it can then calculate the distance to the object. It works just like the sonar of a submarine.

The US sensor is one of the most widely used sensors for Mindstorms. It is included in both NXT sets. Its most common application is in object avoidance. Given the widespread use of the US sensor there is surprisingly little information available about it. This post hopes to change that.

The sensor is a digital sensor. It returns the distance to an object (or objects, as we will discuss later) as a byte value expressed in cm. The value of 255 has a special meaning: it indicates there is no object within measuring range.
So in theory its minimum range is 0 cm and its maximum range is 254 cm. In reality the minimum range is about 7 cm. The maximum range depends on the object to be detected; large and hard objects can be detected over a longer range than small and soft objects. False echoes can limit the practical range even further. In my experience a wall (large and hard) can often be detected if it is within a range of two meters, but for a reliable signal it has to be within a range of 160 centimeters.

The resolution of the sensor is 1 cm, but that does not mean the sensor is that accurate. Most often the sensor measurement is a few cm off the true distance. If accurate measurements are needed, one has to find out how big the error of the particular sensor is and compensate for it. According to the datasheet the sensor can compensate for this error itself when supplied with calibration settings. However, I have not been able to make it do so. My sensors ignore calibration commands, so I suspect this functionality doesn’t work. (By the way, the calibration address is not 0x50 as the datasheet states, it is 0x4a.) Any calibration therefore has to be done in the user program.

The sensor has five modes of operation: off, continuous, ping, capture and reset. Only the continuous and ping modes work properly. The capture mode is supposed to detect other US sensors in the vicinity, but this does not work. The sensor freezes in this mode and can only be activated again using the reset mode.
In continuous mode the sensor does continuous measurements at an interval of about 35 msec. It will only detect the nearest object.
In ping mode the sensor performs a single measurement, after which it goes into off mode. For a new measurement it has to be put in ping mode again. Ping mode has two advantages over continuous mode. In continuous mode one can only have one active sensor; adding a second sensor would disturb the echo of the first sensor, as there is no way it can distinguish between an echo from its own beep and a beep from another sensor. In ping mode one can make sure that no two pings are issued simultaneously and thus avoid disturbances. The second advantage of ping mode should be the ability of the sensor to measure the range of up to eight objects. This however does not work flawlessly. I get odd ranges for every object but the first. I guess the sensor’s ability to filter out false echoes is not so good. I have tried to filter out false echoes myself but did not succeed. I therefore only use the first range when using my sensors in ping mode.
All in all I was disappointed in the advanced capabilities of the sensor.

A sound beep does not travel in a narrow straight line as a laser beam does. Instead it spreads out like the light beam of a torch. As a result the sensor not only detects objects that are exactly in front of it, it also detects objects that are somewhat to the left or right. As a rule of thumb one can assume that objects within an angle of 15 degrees to the left or right are detected. The total width of the detection area is about 30 degrees. When using the sensor for obstacle avoidance this wide beam is an advantage. When the sensor is used for mapping it is a drawback, as objects seem wider than they really are. They also seem arc shaped, the arc having an angle of at least 30 degrees for small objects and more for wider objects. See the image below. This makes the US sensor less suitable for detailed mapping. But keeping these effects in mind one can still create useful maps with this sensor. See one of my previous posts, a range map, for a successful application.

The US sensor sometimes suffers from an error in its measurement; it then returns a value of 255. This can happen at any time but is more frequent when the object is difficult to detect. This can lead to strange behavior of a robot. I once had an obstacle avoiding robot that sometimes drove straight for a short moment while turning to avoid an obstacle. It turned out that the sensor sometimes returned the 255 value even when an obstacle was in front of the robot. My program interpreted this value as “no obstacles ahead” and went straight ahead. It corrected itself once it got a new, valid measurement, but its behavior seemed strange. Since then I always use a median or min filter on the US sensor. These filters filter out a single 255 value but pass a series of 255 values. As a result they effectively distinguish an invalid measurement from a “no obstacle ahead” condition.
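Such a filter can be as simple as a median of three: one stray 255 reading is suppressed, but three 255 readings in a row still come through as “nothing in range”. A sketch:

public class MedianOfThree {
	private final int[] window = { 255, 255, 255 };
	private int index = 0;

	// Feed every raw range through this method and use the returned value instead.
	public int filter(int newRange) {
		window[index] = newRange;
		index = (index + 1) % 3;
		int a = window[0], b = window[1], c = window[2];
		return Math.max(Math.min(a, b), Math.min(Math.max(a, b), c)); // median of three values
	}
}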

A few last, technical, remarks on the sensor.

  1. According to the datasheet one can alter the measurement interval of the sensor. I could not see any effect of this.
  2. One can write to address 104. This is not documented. I could not find out what this is for.
  3. Xander has a nice post showing the internals of the sensor.

Remember my plan to make a ball balancing robot? Last year I set myself the goal of making a ball balancing robot, and I even built the robot. Since then I wandered off from my original goal and made a guardbot, Koios, from this platform. Now I am having another shot at making a balancing robot.

Programming a balancing robot is easy in theory. You just need a sensor that tells you how much the robot is tilted; most often people use a gyro for this. I use my IMU, so that I do not suffer from gyro drift. The tilt angle is then fed to a PID controller that transforms tilt into motor speed. The hard part is tuning the PID controller. It has to translate tilt into just the right amount of motor speed: too little and the robot falls off the ball, too much and the robot goes over the top and falls off on the other side of the ball. Falling off the ball damages the robot. So I had a problem: how to tune the PID controller without damaging the robot?
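For reference, a bare bones PID controller of the kind meant here looks something like this (the gains are placeholders; finding values that keep the robot on the ball is exactly the tuning problem):

public class TiltPid {
	private final double kP, kI, kD; // to be tuned
	private double integral, previousTilt;

	public TiltPid(double kP, double kI, double kD) {
		this.kP = kP;
		this.kI = kI;
		this.kD = kD;
	}

	// Tilt (degrees) and sample interval (seconds) in, motor speed out.
	public double speed(double tilt, double dt) {
		integral += tilt * dt;
		double derivative = (tilt - previousTilt) / dt;
		previousTilt = tilt;
		return kP * tilt + kI * integral + kD * derivative;
	}
}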

To be able to tune the PID controller without damaging the robot I made a special platform. It is a large disk with a small pole in the middle pointing down. Due to the pole the disk will always be tilted when lying on the ground; only when it balances perfectly on the pole is it level. Therefore this disk can be used to tune the controller. The robot can ride off the disk, but it doesn’t fall; it just ends up on the floor with one or two wheels.

When I tested this setup I discovered that the disk was too smooth; the wheels didn’t have enough grip and slipped. To increase the friction I coated the surface of the disk with silicone rubber. It is the light blue surface you see in the picture. Now I have a very “slick” surface. I only hope it lasts under the forces the NXT motors generate, but for the moment this problem is solved.

But there are other problems. One is the fact that these holonomic wheels make the robot vibrate, and this affects the IMU filter; there is still some drift, although it stays within certain limits. I do have prototype Rotacaster wheels. The manufacturer told me that the production wheels are more round and generate less vibration. If you are ever going to buy these wheels, and they are a lot of fun, I advise you to take the black ones. They have the best grip. Anyway, I will have to tune the IMU as well.

Tuning PID controllers is difficult and very, very time consuming. There is some theory around tuning PID controllers, but in the end it is mostly trial and error. Every time I want to try a new set of parameters I have to modify the program, download it to the brick, run the program and evaluate the results by watching the robot. It is hard to understand what goes wrong when you see the robot ride off the disk and make a run for the door to the staircase.

But not anymore. Kirk, one of the developers of Lejos, made a very nice program that allows you to tune a running PID controller over Bluetooth. The tool is still under development, so you won’t find it in Lejos 0.9.1 yet. This program is an add-on to the chart logger I often use to evaluate the internals of the robot on the PC. So basically, this program shows me what is going on in my robot and allows me to modify PID parameters on the fly. I think this is a great tool. Below is a screen shot of it.

Screenshot of the PID tuning tool

So, now I have the robot, a test platform and an efficient tuning tool. That must mean immediate success! Well, to be honest I don’t think so. I’m still not sure I can get this robot to work, as there are problems with weight and inertia as well. The robot weighs 998 grams. This is quite heavy, even for three powerful NXT motors. The robot is quite stiff, but it still bends a bit under its weight. This affects the IMU sensor. And I’m working on other projects as well. So in the end I think there is a bigger chance of failure than of success.

To be continued.


A few posts ago I introduced Koios the guardbot. This robot can create a map of its surroundings using range sensors. This time I’ll tell you some more about this.

The robot uses the map in the process of decision making. These are the main decisions Koios has to make: Is my current location safe? Where can I find a safer spot? Where could an intruder appear? Do I spot an intruder?

One thing to keep in mind when reading this post is that the map Koios creates is not permanent. It is used as long as the robot stays at the same location. Once it starts traveling, the map is thrown away. You might think this is a pity, that you could easily keep the map and keep adding data to it from other locations. This way one could ultimately create a complete map of the surroundings. This is true, but it is also very difficult to do. The main reason is that any movement of the robot always leads to uncertainty in the position of the robot. This uncertainty makes it impossible to just add a new scan of the surroundings to the old one; you would have to deal with the uncertainty using statistical methods. This technique is known as SLAM, or Simultaneous Localization And Mapping. I wanted to keep things simple this time. Hence no SLAM, no permanent map, no overall image of the surroundings, but just a temporary map giving an image of the surroundings as observed from the current location. Still, there is a lot you can do based on this information. All the decisions given above can be made based on a temporary map. Compare it to real life: even without a map you can safely move around in a strange city, as you can recognize roads, pavements, etc. You just cannot find your way back to your hotel.

The map is constructed from data from range sensors (US sensor or NX Dist sensor) with the aid of a direction sensor (compass or gyroscope). When constructing the map the robot turns in place. While turning it takes as many range readings as possible and stores the range for all the different directions. When using the map one can query the range for any given direction. More on using the map later. First, let me explain how exactly the range data is stored and why it is stored the way it is.

The range map is not a grid as you might expect. It doesn’t have a value (object yes or no) for each combination of x and y. Instead it stores information per direction, something like “in a northern direction there is an object about 50 cm away”. The number of directions the map recognizes is of course not just four (north, east, etc.); that would be a bit too little to be useful. It is not 360 either, as that would take too much memory. It would also be overkill, as the range sensors do not have a one degree resolution. The number of directions, I call them segments, that the map stores can be chosen by the user. As a rule of thumb I let the width of a segment be equal to half the width of the range sensor’s beam. As the US sensor has a beam of about 30 degrees, I let a segment be 15 degrees. This divides a full circle into 24 segments.
The range is stored per segment. But one can and will have multiple range measurements per segment. One cannot store all measurements; this would take too much memory. One could store only the last measurement, but then one would lose the information from the other measurements. Therefore I went for another option. I wanted to store the average of all measurements taken within a segment. But to maintain and update the average you need other information as well. As the mean is defined as the sum (of the measurements) divided by the number (of measurements), I decided to store both the sum and the number. This allows me to calculate the mean at all times. I also store the sum of squared measurements. This allows me to calculate the variance of the measurements at all times. The variance tells how big the spread in the measurements is. Later we will see why this is useful information.
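Per segment this boils down to three running totals. A sketch of the bookkeeping (the actual RangeMap class is listed further down):

public class SegmentStats {
	private int n;       // number of measurements in this segment
	private int sum;     // sum of the measured ranges
	private int sumSqr;  // sum of the squared ranges

	public void add(int range) {
		n++;
		sum += range;
		sumSqr += range * range;
	}

	public double mean() {
		return n == 0 ? 0 : (double) sum / n;
	}

	public double variance() {
		if (n == 0) return 0;
		double m = mean();
		return (double) sumSqr / n - m * m; // mean of squares minus square of mean
	}
}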
To wrap this up. A map consists of segments that are like slices of a pie. For each segment the map can give the number of measurements taken, the average range that was measured and the spread in these measurements.
The resulting map uses just three integer values per segment. Given 24 segments, one needs only 72 integers to store a complete image of the surroundings. The NXT can easily handle this. For comparison, a grid map of 4 by 4 meters with a grid size of 10 cm is more or less equivalent to a range map like this, but it would require at least 1600 elements to store its information.

If one were to draw the resulting map it would look like a radar screen. The robot would be in the center of the screen and the objects around it. The bigger the object, the bigger the reflection on the screen; the further away the object, the further it is from the center of the image. As a matter of fact, the range map I built does have methods for displaying map data.
To transform map data to screen data one has to take the following steps:

  1. translate segment to angle
  2. translate angle to x and y fractions using x=cos(angle) and y=sin(angle)
  3. multiply x and y by average range. This gives the distance from robot to the object in x and y direction
  4. scale the distance to fit on the screen by multiplying by 32/max range
  5. rotate to align robot coordinates to screen coordinates. In my case the positive x axis of the robot goes from the center of the brick to the motor ports. The positive y axis goes from the center to the left side of the robot. With respect to the screen, the x and y axes are swapped, as are their signs. The resulting rotation is x_screen=-y_robot and
    y_screen=-x_robot

To use the map one has to translate range data into information that supports a decision. Sometimes this is easy, more often this is quite complicated. Let me give some examples, starting with an easy one and going to more complex examples step by step.
Suppose Koios, whose goal in life is to find and scare off any intruder in my house, hasn’t seen an intruder for a long time. It then wants to go to another spot; maybe it will have more luck finding an intruder there. This new spot should be as far away from the current spot as possible. But when traveling to the new spot it does not want to collide with any object in the surroundings. Therefore it uses the map to find the direction in which it can travel as far as possible without hitting an object. It does so by finding the segment with the highest range value. When this segment is found it rotates towards that direction and travels the distance corresponding to that range, minus a safety margin.
Once arrived at the new location, Koios has to make a harder decision: is the current location safe enough to stay and stand guard, or does Koios need to find a safer location? I decided beforehand that a safe location is a secluded location, a location where Koios has objects around it that protect it from people walking over it. To make this decision Koios creates a new map of the surroundings and calculates a safety indication from the number of objects that are close by. If this is large enough, Koios regards the spot as safe and stays there. If the current location is not safe enough, Koios needs to find a safer spot. It does so by calculating the safety indication of all spots on the map and then going to the safest spot it can find.
The hardest decision to make is in the process of intruder detection. Is an object detected by the range sensors an intruder or part of the surroundings? This decision seems simple at first glance: any object that is not on the map already must be an intruder. In other words, if the current range is shorter than the range for that direction on the map, then there must be an intruder in that direction. But what if the difference between these two ranges is small? It could then just as well be caused by some variance in the sensor readings as by an intruder. It is here that the variance, or spread, in measurements that I mentioned earlier comes in handy. The square root of the variance is called the standard deviation. From the mean and the standard deviation one can calculate how likely it is that a range belongs to a new object (an intruder) or to the surroundings. If the measured range is smaller than the average range on the map minus two times the standard deviation, then there is a 97.5% chance that this object is an intruder.
You can see that none of these decisions need a permanent map. They can all be made using just an actual image of the current surroundings. It therefore was a good decision not to maintain a permanent map and not to make the program overly complicated.

To conclude: mapping does not have to be very difficult if one doesn’t need a permanent image of the surroundings. The kind of decisions you want to make dictates whether you can do with temporary maps. Also, a map does not have to be very large if the map model is chosen wisely.

For those of you who are interested in the code for the range map, I provide it below. It is for Lejos and not documented in any way, so it will be hard to use straight away. But maybe it can still help you one way or another. The methods involved in displaying the map use vector math, as this makes rotating and scaling easier. The vector library itself is open source and can be found on the Internet. It is called vecmath.


package nl.totan.robot;

import javax.microedition.lcdui.Graphics;

import lejos.nxt.Button;
import lejos.nxt.LCD;

import nl.totan.sensors.INS;
import nl.totan.vecmath.Matrix3f;
import nl.totan.vecmath.Vector3f;

public class RangeMap {
int segments;
double angle;
int depth;
int[] sum;
int[] n;
int[] sumsqr;
Vector3f v=new Vector3f();
Vector3f v2=new Vector3f();
Vector3f shift=new Vector3f(50,32,0);
INS dcm = null;
float scale;
Graphics plotter=new Graphics();

private Matrix3f rotate = new Matrix3f(0,-1,0,
-1,0,0,
0,0,1);

public RangeMap(int segments, int depth){
this.segments=segments;
this.depth=depth;
angle=2.0*Math.PI/segments;
scale=32.0f/depth;
sum=new int[segments];
n=new int[segments];
sumsqr=new int[segments];
clearMap();
}

public RangeMap(int segments, int depth, INS dcm){
this(segments, depth);
this.dcm=dcm;
}

public void clearMap() {
for (int a=0;a<segments;a++) {
sum[a]=0;
n[a]=0;
sumsqr[a]=0;
}
}

// NOTE: the helper methods below were lost in the formatting of the original post;
// their bodies are a best guess, reconstructed from the fields and from the way they
// are used in the rest of the class.

int toIndex(double currentAngle) {
int index=(int)Math.floor(currentAngle/angle);
return ((index % segments)+segments) % segments;
}

double toAngle(int index) {
return index*angle;
}

public double getMean(int index) {
if (n[index]==0) return depth;
return (double)sum[index]/n[index];
}

public double getVariance(int index) {
if (n[index]==0) return 0;
double mean=getMean(index);
return (double)sumsqr[index]/n[index]-mean*mean;
}

public int getN(int index) {
return n[index];
}

protected void update(int index, double range) {
sum[index]+=(int)Math.round(range);
n[index]++;
sumsqr[index]+=(int)Math.round(range*range);
}

public void update(double currentAngle, double range) {
if (range>depth) range=depth;
update(toIndex(currentAngle),range);
}

public boolean isNew(double currentAngle, double range) {
if (Double.isNaN(range) || range>depth) return false;
int a=toIndex(currentAngle);
LCD.clear();
LCD.drawString("Range: " +range, 0, 0);
LCD.drawString("Mean : " +getMean(a), 0, 1);
LCD.drawString("SDDEV: " +Math.sqrt(getVariance(a)), 0, 2);
LCD.drawString("N : " +getN(a), 0, 4);

LCD.drawString("DIF : " +(range-getMean(a)-2.0f * Math.sqrt(getVariance(a))), 0, 5);
//Button.ENTER.waitForPressAndRelease();
if (range<getMean(a)-2.0f * Math.sqrt(getVariance(a))) return true;
update(a, range);
return false;
}

protected void drawDot( int a, int d) {
v.x=(float)(Math.cos(toAngle(a))*d);
v.y=(float)(Math.sin(toAngle(a))*d);
v2.x=(float)(Math.cos(toAngle(a+1))*d);
v2.y=(float)(Math.sin(toAngle(a+1))*d);
toScreen(v);
toScreen(v2);
plotter.drawLine((int)v.x, (int)v.y,(int)v2.x, (int)v2.y);
}

public void drawMap() {
for (int a=0;a<segments;a++) {
drawDot(a,(int)getMean(a));
}
if (dcm != null) {
//draw north
v.x=0;
v.y=0;
v2.y=0;
v2.x=depth;
toScreen(v);
toScreen(v2);
plotter.drawLine((int)v.x, (int)v.y,(int)v2.x, (int)v2.y);
}

}

public RangeMap getDifference(RangeMap old) {
RangeMap dif=new RangeMap(segments, depth);
double maxDif=10;
double myRange, oldRange;
boolean isNew;

for (int a=0;a<segments;a++) {
myRange=getMean(a);
isNew=true;
for (int aa=a-1;aa<=a+1;aa++){
oldRange=old.getMean((aa+segments) % segments);
if (oldRange< myRange+maxDif) {
isNew=false;
}
if (isNew) {
dif.sum[a]=sum[a];
dif.n[a]=n[a];
dif.sumsqr[a]=sumsqr[a];
}
}
}
return dif;
}

public double getProtectionLevel(int a, int dist, int margin) {
double level=0;
double dist2;
double dif;
double sin_a=Math.sin(toAngle(a))*dist;
double cos_a=Math.cos(toAngle(a))*dist;
// NOTE: the body of this loop was lost in the formatting of the original post;
// it is reconstructed here as a best guess: obstacles close to the examined spot
// (just beyond the given margin) add a lot of protection, obstacles further away add little.
for (int aa=0;aa<segments;aa++){
dist2=Math.sqrt(Math.pow(Math.cos(toAngle(aa))*getMean(aa)-cos_a,2)+Math.pow(Math.sin(toAngle(aa))*getMean(aa)-sin_a,2));
dif=dist2-margin;
if (dif>0)
level+= 0.5-Math.atan(140*dif/depth-4)/3;
}
return level;
}

public void drawMarker(int a, int d) {
v.x=(float)(Math.cos(toAngle(a))*d);
v.y=(float) (Math.sin(toAngle(a))*d);
toScreen(v);
plotter.drawArc((int)v.x,(int)v.y,4,4,0,360);
}

public void drawRadius(double angle) {

v.x=(float)(Math.cos(angle)*depth);
v.y=(float)(Math.sin(angle)*depth);
v2.x=0;
v2.y=0;
toScreen(v);
toScreen(v2);
plotter.drawLine((int)v.x, (int)v.y,(int)v2.x, (int)v2.y);

}

void toScreen(Vector3f vec) {
if (dcm != null) dcm.transformB(vec);
rotate.transform(vec);
vec.scale(scale);
vec.add(shift);
}

}

This time I want to introduce you to Koios the guard bot. Koios guards my house, it detects intruders and scares them away.

To perform this complicated task I have split Koios’ behavior into different subtasks and put these in a subsumption architecture.

The first task of Koios is to map its surroundings using two range sensors: an ultrasonic sensor for long range (<200 cm) and a Mindsensors dist sensor for accuracy at short range (<80 cm). To map the surroundings Koios makes a full turn in place while scanning. The resulting map is what I call a circular range map. This means that the map stores the distance to the nearest obstacle for all directions (I recognize 24 directions, each 15 degrees wide). The map looks like a radar scan when plotted. This map is not permanent; it is thrown away when Koios moves. As Koios does not try to build and maintain a more permanent map of its surroundings, I did not have to deal with uncertainties about its absolute position. Therefore the mapping algorithm could be kept simple.

The second task of Koios is to find a safe spot on the map and then to move to this spot. A safe spot for Koios is a spot where Koios is surrounded by objects. A place next to a wall is good, a place in a corner is better. Koios finds a safe spot by examining the map for a place between obstacles. When the safest spot on the map is found Koios travels to this place in a straight line.
Once arrived at the new location Koios again makes a map of the surrounding. If, at second glance, the current location is safe enough then Koios will stay there. If not, it will try to find an even safer spot. This process is repeated until a location is found that is safe enough for Koios.
The video below shows Koios scanning the area and finding a safe spot. I actually had some trouble shooting the video. At first I had placed my webcam on a chair to shoot it. But to my surprise Koios hid itself under the chair. This indeed is a very safe spot, but it was outside the frame of the video. In the end I placed the camera on my desk to shoot the clip.

When Koios has found a really safe spot it will start guarding the area. It will slowly turn in place while scanning the area again. If an obstacle is detected, its location is compared to the map. When the obstacle is already on the map it is regarded as part of the surroundings and ignored. If, on the other hand, it isn’t on the map, then it must be something new and it is treated as an intruder.
The task of guarding is rather complicated, as there is always some uncertainty in both the map and the range sensor data. Suppose the map tells Koios that there is an obstacle at 150 cm and the range sensor detects an object at 145 cm. Is this an intruder or part of the surroundings? The choice Koios makes is based on statistics. To support this kind of decision making, the range map stores more information than just the range to the nearest object. It also stores statistical quantities like the number of measurements taken in that direction and the variance in the measured ranges. This makes it possible to calculate the probability of an obstacle being new (an intruder) or old (part of the surroundings). If it is part of the surroundings, the measurement is used to improve the map even further.

But if the object is an intruder Koios will home in on it! Koios will quickly run to the intruder until it is very close. I haven’t written this part of Koios’ behavior yet. So everything you read from now on is just on the drawing board.
For one thing, I have not decided yet whether Koios just runs blindly to the location where the intruder was detected or tries to maintain a lock on the intruder while homing in on it. It would be nice if Koios were able to maintain a lock, but it will also be complicated. Maybe I could use a heat sensor, like some air-to-air missiles do, to maintain a lock on the intruder.

Anyway, once close to the intruder Koios will scare it away using both light and sound. First it will mimic some kind of scary animal, with eyes that glow in the dark, while making scary noises. Then it will mimic the police, using flashing lights and a siren. Then it will map its surroundings again to find a safe spot and make a hasty retreat.