Archives for posts with tag: ultrasonic sensor

This weekend I was wondering if it is possible to use triangulation to pinpoint the position of an object using two ultrasonic sensors. So I made a simple robot and wrote a small program to find out. It turns out this can be done.

Triangulation is a technique that uses angles or distances between points to calculate the positions of these points. GPS localization, for example, uses a closely related technique based on distances (trilateration). Triangulation rests on the law of cosines. This law allows you to calculate the angles of a triangle given the lengths of its sides, but also to calculate the length of one side given the lengths of the two other sides and the angle of the opposite corner. This might sound complicated, but it is taught in high school, so you might know it.
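
The two forms of the law can be written down in a few lines of code. A minimal sketch (class and method names are my own, not from the robot program):

```java
// Minimal sketch of the two forms of the law of cosines used in this post.
public class LawOfCosines {

    // Form 1: length of the third side, given two sides a and b
    // and the angle gamma (radians) between them.
    static double sideFromSAS(double a, double b, double gamma) {
        return Math.sqrt(a * a + b * b - 2 * a * b * Math.cos(gamma));
    }

    // Form 2: the angle (radians) opposite side c, given all three sides.
    static double angleFromSSS(double a, double b, double c) {
        return Math.acos((a * a + b * b - c * c) / (2 * a * b));
    }

    public static void main(String[] args) {
        // A 3-4-5 triangle is right angled: the angle opposite the side
        // of length 5 is 90 degrees.
        System.out.println(Math.toDegrees(angleFromSSS(3, 4, 5)));
        System.out.println(sideFromSAS(3, 4, Math.PI / 2)); // ~5.0
    }
}
```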

This means that a robot equipped with two US sensors can calculate the exact position of an object, provided it is in sight of both sensors. Potentially this is a huge benefit. A single sensor has a wide (30 degrees) detection beam, which leaves much uncertainty about the exact location of an object, and it also deforms the apparent shape of an object. In practice this means that constructing maps using a single US sensor is impossible. But with two sensors and some triangulation a map might be constructed, as the positions of the objects can be calculated.

Here is a picture of the setup I used to test whether this is practically possible with two Lego US sensors. The sensors can rotate as they are mounted on the shafts of NXT motors. They constantly point at the object in sight; if no object is in sight they point straight ahead, waiting for one. The rotation of the sensors makes it easy to see where they are “looking”.

After some tests I am pleased with the effectiveness of this setup. It is accurate up to 2-4 cm. If this is enough to produce good maps I don’t know. But it is at least effective for a whole range of other tasks. I’m thinking of shooting cannons, search missions and object avoidance. The video gives you an impression.

The effective range of the setup is limited, as the picture to the right shows. I put a piece of Lego on every spot where I got a successful hit. (The eleven hole beam is there for reference.) It is not as wide or long as you might think. I think there are two reasons for that. The main reason is the low resolution of the sensor, just one cm. This causes numerical errors whose effects are small along the forward pointing axis, but that grow larger to the sides. Offset errors of the sensors are the second cause limiting the sideways effectiveness. Scale errors of the sensors limit the length of the effective range.

You might wonder how well a rigid setup works, one where the sensors are not allowed to rotate and home in on the object. Obviously the field of view is much smaller, and the sensors should be placed facing inward to maximize it. But I also noticed that the results are less accurate and less stable.

The program I wrote is very simple. It is just these two forms of the law of cosines embedded in some control structures.

```
package lejos.nxt.sensor.example;

import lejos.nxt.Button;
import lejos.nxt.LCD;
import lejos.nxt.Motor;
import lejos.nxt.NXTRegulatedMotor;
import lejos.nxt.SensorPort;
import lejos.nxt.sensor.sensor.LcUltrasonic;
import lejos.util.Delay;

public class TwoUSSensors {

    static final int P = 5;                 // P component of the P-controller
    static final double MARGIN = 0.8;       // margin of max distance for two measurements to assume a single object in sight
    static final float MAXRANGE = 150;      // maximum range of the sensor
    static final int MINANGLE = 25;         // minimum angle a sensor can rotate to (0 means the two sensors face each other)
    static final double b = 20.5;           // distance between the two sensors, measured from shaft to shaft
    static final float SENSOROFFSET = 2.5f; // distance from the shaft to the face of the sensor

    static final LcUltrasonic sensorA = new LcUltrasonic(SensorPort.S1);
    static final LcUltrasonic sensorC = new LcUltrasonic(SensorPort.S4);

    static final NXTRegulatedMotor C = Motor.C;
    static final NXTRegulatedMotor A = Motor.A;

    public static void main(String[] args) {
        new TwoUSSensors();
    }

    public TwoUSSensors() {
        double targetC = 0, targetA = 0;
        double a, c;
        sensorA.setMode(LcUltrasonic.MODE_PING);
        sensorC.setMode(LcUltrasonic.MODE_PING);

        C.rotateTo(90, true);
        A.rotateTo(90, false);
        Delay.msDelay(500);

        while (!Button.ESCAPE.isDown()) {

            // raw distances, corrected for the offset between shaft and sensor face
            c = sensorA.fetchSample() + SENSOROFFSET;
            a = sensorC.fetchSample() + SENSOROFFSET;

            LCD.clear();
            LCD.drawString("       A     C", 0, 0);
            LCD.drawString("dist", 0, 1);
            LCD.drawString("dist'", 0, 2);
            LCD.drawString("target", 0, 3);
            LCD.drawString("error", 0, 4);
            LCD.drawString("state", 0, 5);

            LCD.drawInt((int) c, 7, 1);
            LCD.drawInt((int) a, 12, 1);

            if (a > MAXRANGE && c > MAXRANGE) {
                // it is assumed that there are no objects in sight
                targetA = Math.PI / 2;
                targetC = Math.PI / 2;
                LCD.drawString("-", 7, 5);
                LCD.drawString("-", 12, 5);
            }
            else {
                // it is assumed there are objects in sight
                if (Math.abs(a - c) > b * MARGIN) {
                    // it is assumed that there are two different objects in sight
                    if (a < c) {
                        // it is assumed there is an object in front of sensorC
                        LCD.drawString("-", 7, 5);
                        LCD.drawString("+", 12, 5);
                        c = Math.sqrt(a * a + b * b - 2 * a * b * Math.cos(targetC)); // the distance between sensorA and the object
                        targetA = Math.acos((a * a - b * b - c * c) / (-2 * c * b));
                    }
                    else {
                        // it is assumed there is an object in front of sensorA
                        LCD.drawString("+", 7, 5);
                        LCD.drawString("-", 12, 5);
                        a = Math.sqrt(b * b + c * c - 2 * b * c * Math.cos(targetA)); // the distance between sensorC and the object
                        targetC = Math.acos((c * c - a * a - b * b) / (-2 * a * b));
                    }
                    LCD.drawInt((int) c, 7, 2);
                    LCD.drawInt((int) a, 12, 2);
                }
                else {
                    // it is assumed that there is one object in sight
                    LCD.drawString("+", 7, 5);
                    LCD.drawString("+", 12, 5);
                    targetC = Math.acos((c * c - a * a - b * b) / (-2 * a * b));
                    targetA = Math.acos((a * a - b * b - c * c) / (-2 * c * b));
                }
            }

            LCD.drawInt((int) Math.toDegrees(targetA), 7, 3);
            LCD.drawInt((int) Math.toDegrees(targetC), 12, 3);

            // rotate the sensors towards the object and show the remaining error
            LCD.drawInt(rotateTo(A, targetA), 7, 4);
            LCD.drawInt(rotateTo(C, targetC), 12, 4);
            Delay.msDelay(20);
        }

        A.setSpeed(200);
        C.setSpeed(200);
        C.rotateTo(0, true);
        A.rotateTo(0, false);
        Delay.msDelay(500);
    }

    int rotateTo(NXTRegulatedMotor motor, double target) {
        int targetDeg = (int) Math.toDegrees(target);

        if (targetDeg < MINANGLE) targetDeg = MINANGLE;
        int error = targetDeg - motor.getPosition();
        if (Math.abs(error) <= 1) return error;
        motor.setSpeed(Math.abs(error) * P); // speed proportional to the error
        motor.rotateTo(targetDeg, true);
        return error;
    }

}

```
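
Once both sensor angles are known, the position of the object follows directly. A minimal sketch of that last step (the coordinate frame and names are my own: sensor A at the origin, sensor C at distance b along the x-axis, the object in front of the robot):

```java
// Sketch: from two measured distances and the baseline to an (x, y) position.
public class Triangulate {
    // c: distance from sensor A to the object, a: distance from sensor C,
    // b: baseline between the sensors. Sensor A sits at (0,0), sensor C at (b,0).
    static double[] position(double a, double b, double c) {
        // Angle at sensor A; the same form of the law of cosines as in the program above.
        double angleA = Math.acos((a * a - b * b - c * c) / (-2 * c * b));
        return new double[] { c * Math.cos(angleA), c * Math.sin(angleA) };
    }

    public static void main(String[] args) {
        // Object placed at (10, 20) with a 20 cm baseline.
        double b = 20;
        double c = Math.hypot(10, 20);      // distance from sensor A
        double a = Math.hypot(20 - 10, 20); // distance from sensor C
        double[] p = position(a, b, c);
        System.out.printf("x=%.1f y=%.1f%n", p[0], p[1]); // x=10.0 y=20.0
    }
}
```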

This time I want to introduce you to Koios the guard bot. Koios guards my house: it detects intruders and scares them away.

To perform this complicated task I have split Koios’ behavior into different subtasks and put these in a subsumption architecture.

The first task of Koios is to map its surroundings using two range sensors: an ultrasonic sensor for long range (<200 cm) and a Mindsensors DIST sensor for accuracy at short range (<80 cm). To map the surroundings Koios makes a full turn in place while scanning. I call the resulting map a circular range map: it stores the distance to the nearest obstacle for all directions (I distinguish 24 directions, each 15 degrees wide). Plotted, the map looks like a radar scan. The map is not permanent; it is thrown away when Koios moves. As Koios does not try to build and maintain a more permanent map of its surroundings, I did not have to deal with uncertainty about its absolute position, so the mapping algorithm could be kept simple.
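
Such a circular range map fits in a small array, one cell per 15 degree sector. A minimal sketch of the idea (names and details are my own, not Koios' actual code):

```java
// Sketch of a circular range map: 24 sectors of 15 degrees each,
// every sector stores the distance to the nearest obstacle seen in that direction.
public class CircularRangeMap {
    static final int SECTORS = 24;
    final float[] nearest = new float[SECTORS];

    public CircularRangeMap() {
        java.util.Arrays.fill(nearest, Float.MAX_VALUE); // nothing seen yet
    }

    // Normalize a heading to [0, 360) and bin it into a 15 degree sector.
    static int sector(double headingDegrees) {
        double h = ((headingDegrees % 360) + 360) % 360;
        return (int) (h / (360.0 / SECTORS));
    }

    // Record a range measurement taken while facing the given heading.
    void update(double headingDegrees, float range) {
        int s = sector(headingDegrees);
        if (range < nearest[s]) nearest[s] = range;
    }

    float rangeAt(double headingDegrees) {
        return nearest[sector(headingDegrees)];
    }

    public static void main(String[] args) {
        CircularRangeMap map = new CircularRangeMap();
        map.update(7, 100);
        map.update(10, 80);
        System.out.println(map.rangeAt(0)); // nearest obstacle in the 0-15 degree sector
    }
}
```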

The second task of Koios is to find a safe spot on the map and then move to it. A safe spot for Koios is a spot where it is surrounded by objects: a place next to a wall is good, a place in a corner is better. Koios finds a safe spot by examining the map for a place between obstacles. When the safest spot on the map is found, Koios travels there in a straight line.
Once arrived at the new location Koios again maps its surroundings. If, at second glance, the current location is safe enough, Koios will stay there. If not, it will try to find an even safer spot. This process is repeated until a location is found that is safe enough for Koios.
The video below shows Koios scanning the area and finding a safe spot. I actually had some trouble shooting the video. At first I had placed my webcam on a chair, but to my surprise Koios hid itself under the chair. This indeed is a very safe spot, but it was outside the frame of the video. In the end I placed the camera on my desk to shoot the clip.

When Koios has found a really safe spot it will start guarding the area. It will slowly turn in place while scanning the area again. If an obstacle is detected, its location is compared to the map. When the obstacle is already on the map it is regarded as part of the surroundings and ignored. If, on the other hand, it isn't on the map, then it must be something new and it is treated as an intruder.
The task of guarding is rather complicated, as there is always some uncertainty in both the map and the range sensor data. Suppose the map tells Koios that there is an obstacle at 150 cm and the range sensor detects an object at 145 cm. Is this an intruder or part of the surroundings? The choice Koios makes is based on statistics. To support this kind of decision making, a range map stores more information than just the range to the nearest object. It also stores statistical quantities like the number of measurements taken in that direction and the variance in the measured ranges. This makes it possible to calculate the probability of an obstacle being new (an intruder) or old (part of the surroundings). If it is part of the surroundings, the measurement is used to improve the map even further.
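
One way to sketch that kind of decision (a hypothetical illustration, not Koios' actual code): keep a running mean and variance per direction, and treat a reading as an intruder when it falls too many standard deviations below the mapped range.

```java
// Hypothetical sketch of the statistical test described above. Each direction keeps
// a running mean and variance of measured ranges (Welford's online algorithm).
public class SectorStats {
    int n;        // number of measurements in this direction
    double mean;  // running mean of measured ranges
    double m2;    // running sum of squared deviations

    void add(double range) {
        n++;
        double delta = range - mean;
        mean += delta / n;
        m2 += delta * (range - mean);
    }

    double variance() {
        return n > 1 ? m2 / (n - 1) : 0;
    }

    // True when the reading is much closer than the mapped obstacle:
    // more than k standard deviations (with a 1 cm floor for sensor resolution)
    // below the mean.
    boolean looksLikeIntruder(double range, double k) {
        double sd = Math.max(Math.sqrt(variance()), 1.0);
        return mean - range > k * sd;
    }

    public static void main(String[] args) {
        SectorStats stats = new SectorStats();
        for (double r : new double[] {150, 149, 151, 150, 150}) stats.add(r);
        System.out.println(stats.looksLikeIntruder(100, 3)); // far closer than the map: intruder
        System.out.println(stats.looksLikeIntruder(149, 3)); // within normal variation: surroundings
    }
}
```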

But if the object is an intruder, Koios will home in on it! Koios will quickly run to the intruder until it is very close. I haven't written this part of Koios' behavior yet, so everything you read from now on is just on the drawing board.
For one thing, I have not decided yet whether Koios just runs blindly to the location where the intruder was detected, or whether it tries to maintain a lock on the intruder while homing in on it. It would be nice if Koios were able to maintain a lock, but it will also be complicated. Maybe I could use a heat sensor, like some air-to-air missiles do, to maintain a lock on the intruder.

Anyway, once close to the intruder Koios will scare it away using both light and sound. First it will mimic some kind of scary animal, with eyes that glow in the dark, while making scary noises. Then it will mimic the police, using flashing lights and a siren. Then it will map its surroundings again to find a safe spot and make a hasty retreat.

After a healthy holiday I came to the conclusion that a low pass filter isn't the way to go. Low pass filters are good for removing high frequency noise, but the errors the ultrasonic sensor gives me aren't like that. They are single misreadings occurring at random intervals (far apart), and they return very high values most of the time. I'll call these errors spikes, as that is what they look like when you plot US readings graphically.

In a low pass filter these spikes are treated as valid values, whereas I just want to ignore them. Also, in a low pass filter a spike affects the output for some time after it occurred. In my case this too is an unwanted effect.

There is a statistic called the median that is not very sensitive to exceptional values. It is more or less like the mean of a couple of readings but is calculated in a different way. The mean is defined as the sum of all readings divided by the number of readings. The median is defined as the reading that separates the higher half from the lower half of all readings. Let me give an example. Suppose I got 5 readings from the US sensor: 100, 98, 255, 94, 92. The value of 255 is a misreading; the actual distance at that time was 96. The median of these readings is 98, as two of the numbers are bigger than 98 and two are smaller. The median is quite close to the actual value of 96. In this case the median gives a better estimate of the average reading than the mean (127.8).

A low pass filter is like the mean: both assume all readings are valid, although maybe not very precise on their own. The median is different in this respect; it ignores the biggest and smallest readings and assumes only that the most average readings are valid. If I want to get rid of the spikes in the US sensor readings I need a filter that behaves like the median. Luckily it is not too difficult to create such a filter: it should just return the median of the last couple of readings from the US sensor.

So how does a median filter behave? Suppose a median filter returns the median of the three most recent readings while an object is approaching the US sensor. This filter is able to completely filter out any spike that lasts only one reading (unlike the low pass filter). A spike that lasts two or more readings, however, passes through unfiltered. The filter could still suppress those cases if it used more than three readings.
Also, the filter needs time to respond to changes in the readings (just like the low pass filter). This becomes clear when an object is closing in on the US sensor: the result of the filter lags one reading behind the actual situation. A median filter based on more than three readings would suffer from even more lag. So there is a trade-off between the strength of the filter and the time it needs to adjust.

It’s good to be aware of the potential difficulties in programming a median filter. First, one needs to keep track of a history of readings; this takes memory and requires queues, arrays or lists. Second, the readings have to be sorted by value to find the median, and this has to be done again for every new reading.

The US sensor updates every 50 ms, so it is of no use polling it faster.  Spikes lasting longer than one reading are rare, so taking the median of the last three readings will most often be sufficient. In this case one can even create a median filter without queues and sorting routines.
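
For three readings the median can be computed with a handful of min/max operations, so no arrays or sorting are needed. A minimal sketch of such a filter (names are mine):

```java
// Median-of-three filter for US sensor readings, without queues or sorting.
public class Median3Filter {
    private float prev1, prev2; // the two most recent raw readings
    private int seen;           // how many readings we have so far

    // Median of three values: max(min(a,b), min(max(a,b), c)).
    static float median3(float a, float b, float c) {
        return Math.max(Math.min(a, b), Math.min(Math.max(a, b), c));
    }

    // Feed the newest raw reading, get the filtered value back.
    float filter(float raw) {
        float out = seen < 2 ? raw : median3(prev2, prev1, raw);
        prev2 = prev1;
        prev1 = raw;
        seen++;
        return out;
    }

    public static void main(String[] args) {
        Median3Filter f = new Median3Filter();
        // The 255 spike from the example above never reaches the output.
        for (float raw : new float[] {100, 98, 255, 94, 92}) {
            System.out.println(f.filter(raw));
        }
    }
}
```

Note the one-reading lag the post mentions: when the 94 arrives, the filter still reports 98.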

In my last post I did some experimenting with a low pass filter in Excel. But then there is reality. The US sensor is a slow sensor; I can't use the sample rate I experimented with. I found out that RobotC, my programming environment, reads the US sensor about every 70 ms. This is a lot slower than the 4 ms I used in my experiment. As a result the filter responds far too slowly, as you can see in the first image. FreeRover would hit a wall and still think it was over 20 cm away. If I increase alpha the filter reacts faster, but then it is not as effective as a filter. I needed something else.

The filter described in my last post and above is a first order filter, meaning it uses the current reading and the previous result. Second order filters also use the two previous readings. I started experimenting with a second order filter. Here is an image of the results.

Again, the red line represents raw readings, the green line the result of a first order filter, and the blue line the result of a second order filter. The time axis has changed scale, as I now use a sample rate of once every 70 ms. There are five scenarios. Scenarios 1 and 3 are faulty readings of zero. Scenario 4 is a faulty reading of 255. I won't go into much detail about these scenarios, but notice that the second order filter is about as responsive as the first order filter while being more effective at filtering out spike errors.
Scenario 5 is the most interesting one. Here the robot drives into a wall at a speed of 100 cm/second. That is quite fast for a robot, almost the speed of a walking man. Both filters respond with some delay, which means that the real distance to the wall is less than one might think based on the filtered values. Upon hitting the wall the first order filter gives a distance of 5.5 cm, the second order filter 6.3 cm. The size of the lag can be altered by changing alpha and beta, but it also depends on the speed of the robot.

I conclude that it is important to know the sensor, the kind of errors it gives and the sampling rate it can handle. It also is important to know the robot, how fast it will go. And it is important to tune the filter. This is best done in a spreadsheet at first.

Here is my code for a second order low pass filter for a US sensor.

```
float lowPassUS;  // filtered output, visible to other tasks

task main()
{
	int deltaT = 70;                  // sample interval in ms; RobotC reads the US sensor about every 70 ms
	float alpha = 0.2, beta = 0.18;   // first and second order filter constants
	float US0 = 0, US1 = 0, US2 = 0;  // newest raw reading and the two previous filtered values
	// Start main loop of the task
	while (true)
	{
		wait1Msec(deltaT);
		// second order low pass value of the US sensor
		US0 = (float)SensorValue[sonar];
		lowPassUS = US1 + alpha * (US0 - US1) + beta * (US0 - US2);
		US2 = US1;
		US1 = lowPassUS;
	}
}

```

Today I decided to enhance the US sensor output. Why? Sometimes I had the impression the car changed behaviour. I suspected this had something to do with faulty readings from the ultrasonic sensor. I decided to make a low pass filter for the US sensor. This should eliminate random faulty readings. But first I had to understand low pass filters. Coding a low pass filter isn't too hard; there is an explanation on Wikipedia. Basically it's just one line of code:
lowPassUS=lowPassUS+alpha*(SensorValue[sonar]-lowPassUS);
The filtered value is calculated from the raw value and the previous filtered value. There is one parameter, alpha, that controls the filter. I wanted to understand how the value of alpha changes the behaviour of the filter. That is why I did some experiments in Excel. Here is an example. The red line represents unfiltered values, the green filtered values. The horizontal axis represents time.

The alpha factor does one thing: it controls how fast the filter adjusts to a new situation. The larger the alpha, the quicker the filter reacts. In the plot this means that the green line follows the red line more closely. The downside is that less noise is filtered out.
There is one other thing that is important to realise. Alpha doesn't control the time the filter takes to adjust, it controls the number of steps (iterations) the filter needs to adjust. This means there is a relation between the sample rate, how often the output from the US sensor is read, and alpha. With a higher sample rate and a constant alpha the filter adjusts quicker in real time. But also, if the sample rate changes, one needs to consider changing alpha as well.
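
A small simulation makes the effect of alpha visible (a sketch for experimenting, not robot code): the same step input filtered with two alphas, where the larger alpha settles in fewer iterations.

```java
// Sketch: how alpha controls the number of iterations a first order
// low pass filter needs to settle on a new value.
public class AlphaDemo {
    // Apply the filter to a constant input for n iterations, starting from 0.
    static double settle(double alpha, double input, int n) {
        double lowPass = 0; // filter initialized at 0
        for (int i = 0; i < n; i++) {
            lowPass = lowPass + alpha * (input - lowPass);
        }
        return lowPass;
    }

    public static void main(String[] args) {
        // Step from 0 to 100 cm, ten iterations each.
        System.out.println(settle(0.2, 100, 10)); // 100*(1 - 0.8^10) ≈ 89.3
        System.out.println(settle(0.5, 100, 10)); // 100*(1 - 0.5^10) ≈ 99.9
    }
}
```

With a 70 ms sample interval, ten iterations is 0.7 seconds of real time; halving the interval would halve that settling time for the same alpha, which is the sample rate relation described above.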

The graph above shows four scenarios. At t=1 the sampling starts and the filter needs time to adjust. One should give the filter time to settle, or initialize the filtered value to the unfiltered value.
Around t=0.3 I tested another scenario: the reading of the sensor changes because an object comes in sight. In this example it takes about 0.1 second for the filter to adjust to the new situation. Is this quick enough? For me it is.
Around t=0.46 there is a faulty reading. The filter dampens it, but for a short time (0.06 seconds) the object seems further away than it is. In my case the damping needs to be enough that the filtered value does not exceed the threshold. I need more real life testing to find out whether the damping is enough. I do know I sometimes get faulty 255 values, but I don't know if I also get faulty zero values. Those would affect behaviour more often, as they can influence collision detection.
Around t=0.6 the last scenario is shown. Here an object at 60 cm disappears and a background object at 90 cm appears. But this time the object doesn't disappear at once; for a short time it is sometimes seen and at other times not. Here the filter really smooths out the jitter.