A few posts ago I introduced Koios the guardbot. This robot can create a map of its surroundings using range sensors. This time I’ll tell you some more about how this works.
The robot uses the map in its decision making process. These are the main decisions Koios has to make: Is my current location safe? Where can I find a safer spot? Where could an intruder appear? Do I spot an intruder?
One thing to keep in mind when reading this post is that the map Koios creates is not permanent. It is used as long as the robot stays at the same location. Once the robot starts traveling the map is thrown away. You might think this is a pity, that one could easily keep the map and keep adding data to it from other locations, and this way ultimately create a complete map of the surroundings. This is true. But it is also very difficult to do. The main reason is that any movement of the robot introduces uncertainty in its position. This uncertainty makes it impossible to simply add a new scan of the surroundings to the old one; you would have to deal with the uncertainty using statistical methods. This technique is known as SLAM, or Simultaneous Localization And Mapping. I wanted to keep things simple this time. Hence no SLAM, no permanent map, no overall image of the surroundings, but just a temporary map giving an image of the surroundings as observed from the current location. But still, there is a lot you can do based on this information. All the decisions given above can be made based on a temporary map. Compare it to real life: even without a map you can safely move around in a strange city as you can recognize roads, pavements, etc. You just cannot find your way back to your hotel.
The map is constructed from data from range sensors (US sensor or NX Dist sensor) with the aid of a direction sensor (compass or gyroscope). When constructing the map the robot turns around in place. While turning it takes as many range readings as possible and stores the range for all the different directions. When using the map one can query the range for any given direction. More on using the map later. First, let me explain how exactly the range data is stored and why it is stored the way it is.
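To give an idea, here is a minimal sketch of the scanning step. The RangeMap class is given at the end of this post; the helpers startTurningInPlace, fullTurnCompleted, readHeading, readRange and stopTurning are hypothetical placeholders for the motor and sensor code, and the map size of 24 segments and 250 cm depth are just example values.

// Build a temporary map while turning in place.
// readHeading() should return the heading in radians, readRange() the range in cm.
RangeMap map = new RangeMap(24, 250);   // 24 segments, 250 cm maximum range (example values)
startTurningInPlace();                  // hypothetical motor code
while (!fullTurnCompleted()) {          // hypothetical: true after a full 360 degree turn
    double heading = readHeading();     // hypothetical sensor wrapper (compass or gyro)
    double range = readRange();         // hypothetical sensor wrapper (US or NX Dist)
    map.update(heading, range);         // store the reading in the segment for this direction
}
stopTurning();                          // hypothetical motor code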
The range map is not a grid as you might expect. It doesn’t have a value (object yes or no) for each combination of x and y. Instead it stores information per direction, something like: in a northern direction there is an object about 50 cm away. The number of directions the map recognizes of course is not just four (north, east, etc.); this would be a bit too little to be useful. It is not 360 either, as this would take too much memory. It would also be overkill, as the range sensors do not have a one degree resolution. The number of directions, I call them segments, that the map stores can be chosen by the user. As a rule of thumb I let the width of a segment be equal to half the width of the range sensor’s beam. As the US sensor has a beam of about 30 degrees I let a segment be 15 degrees. This divides a full circle into 24 segments.
The range is stored per segment. But one can, and will, have multiple range measurements per segment. One cannot store all measurements, as this would take too much memory. One could store only the last measurement, but then one would lose the information from the other measurements. Therefore I went for another option. I wanted to store the average of all measurements taken within a segment. But to maintain and update the average you need other information as well. As the mean is defined as the sum (of the measurements) divided by the number (of measurements), I decided to store both the sum and the number. This allows me to calculate the mean at all times. I also store the sum of squared measurements. This allows me to calculate the variance in the measurements at all times. The variance tells how big the spread in the measurements is. Later we will see why this is useful information.
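To make this concrete, this is roughly what the bookkeeping per segment looks like. The field names sum, n and sumsqr mirror the ones in the RangeMap class at the end of this post; the method names here are for illustration only.

// Three integers per segment are enough to keep a running mean and variance.
void addMeasurement(int segment, int range) {  // illustration only, not the actual method name
    sum[segment] += range;                     // running sum of the measurements
    n[segment]++;                              // number of measurements
    sumsqr[segment] += range * range;          // running sum of squared measurements
}

double mean(int segment) {
    return (double) sum[segment] / n[segment];
}

double variance(int segment) {                 // spread of the measurements
    double m = mean(segment);
    return (double) sumsqr[segment] / n[segment] - m * m;
}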
To wrap this up. A map consists of segments that are like slices of a pie. For each segment the map can give the number of measurements taken, the average range that was measured and the spread in these measurements.
The resulting map uses just 3 integer values per segment. Given 24 segments one needs only 72 values to store a complete image of the surroundings. The NXT can easily handle this. For comparison, a grid map of 4 by 4 meters with a grid size of 10 cm is more or less equivalent to a range map like this, but it would require at least 1600 elements (a 40 by 40 grid) to store its information.
If one were to draw the resulting map it would look like a radar screen. The robot would be in the center of the screen and the objects would be around this center. The bigger the object, the bigger the reflection on the screen; the further away the object, the further it is from the center of the image. As a matter of fact, the range map I built does have methods for displaying map data.
To transform map data to screen data one has to take the following steps (a code sketch follows this list):
- translate segment to angle
- translate angle to x and y fractions using x = cos(angle) and y = sin(angle)
- multiply x and y by the average range. This gives the distance from the robot to the object in the x and y directions
- scale the distance to fit on the screen by multiplying by 32/max range
- rotate to align robot coordinates to screen coordinates. In my case the positive x axis of the robot goes from the center of the brick to the motor ports. The positive y axis goes from the center to the left side of the robot. With respect to the screen the x and y axes are swapped, as well as the signs of these axes. The resulting rotation is
x_screen = -y_robot
y_screen = -x_robot
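Put together, a minimal sketch of these steps could look like the method below. It assumes the segment angle, mean range and map depth are passed in; 32 is half the height of the NXT screen and (50,32) is roughly its center, matching the shift used in the RangeMap class later on.

// Transform a single segment of the map to NXT screen coordinates.
static int[] segmentToScreen(double angle, double meanRange, double maxRange) {
    double x = Math.cos(angle) * meanRange;  // distance to the object in robot x direction
    double y = Math.sin(angle) * meanRange;  // distance to the object in robot y direction
    double scale = 32.0 / maxRange;          // scale to fit the 64 pixel high screen
    x *= scale;
    y *= scale;
    // rotate robot coordinates to screen coordinates and shift to the screen center
    int xScreen = (int) (-y) + 50;
    int yScreen = (int) (-x) + 32;
    return new int[] { xScreen, yScreen };
}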
To use the map one has to translate range data into information that supports a decision. Sometimes this is easy; more often it is quite complicated. Let me give some examples, starting with an easy one and moving to more complex examples step by step.
Suppose Koios, whose goal in life is to find and scare off any intruder in my house, hasn’t seen any intruders for a long time. It then wants to go to another spot; maybe it has more luck finding an intruder over there. This new spot should be as far away from the current spot as possible. But when traveling to the new spot it does not want to collide with any object in the surroundings. Therefore it uses the map to find the direction in which it can travel as far as possible without hitting an object. It does so by finding the segment that has the highest range value. When this segment is found it rotates towards that direction and travels the distance corresponding to the range minus a safety margin.
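A minimal sketch of this lookup, using the getMean and toAngle methods of the RangeMap class given at the end of this post. Here numberOfSegments is the segment count the map was created with, and the 20 cm safety margin is just an example value.

// Find the segment in which the robot can travel the farthest.
int best = 0;
for (int segment = 1; segment < numberOfSegments; segment++) {
    if (map.getMean(segment) > map.getMean(best)) best = segment;
}
double travelHeading  = map.toAngle(best);       // direction to rotate to (radians)
double travelDistance = map.getMean(best) - 20;  // range minus a 20 cm safety margin (example)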
Once arrived at the new location Koios has to make a harder decision: is the current location safe enough to stay and stand guard, or does Koios need to find a safer location? I have decided beforehand that a safe location is a secluded location, a location where Koios has objects around it that protect it from people walking over it. To make this decision Koios creates a new map of the surroundings and calculates a safety indication from the number of objects that are close by. If this is large enough Koios regards the spot as a safe spot and stays there. If the current location is not safe enough Koios needs to find a safer spot. It does so by calculating the safety indication for all spots on the map and then going to the safest spot it can find.
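The getProtectionLevel method in the class below calculates such an indication for any spot on the map. For the current location the idea boils down to counting nearby objects, something like this sketch; the 60 cm threshold and the minimum of 6 covered segments are just example values.

// Safety indication for the current location: count segments with an object close by.
int closeObjects = 0;
for (int segment = 0; segment < numberOfSegments; segment++) {
    if (map.getN(segment) > 0 && map.getMean(segment) < 60) closeObjects++;  // 60 cm: example threshold
}
boolean safeEnough = closeObjects >= 6;  // example: at least 6 of the 24 segments give cover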
The hardest decision to make is in the process of intruder detection. Is an object detected by the range sensors an intruder or part of the surroundings? At first glance this decision is simple: any object that is not on the map already must be an intruder. In other words, if the current range is shorter than the range for that direction on the map then there must be an intruder in that direction. But what if the difference between these two ranges is small? It could then just as well be caused by some variance in the sensor readings as by an intruder. It is here that the variance, or spread, in the measurements that I mentioned earlier comes in handy. In statistics the square root of the variance is called the standard deviation. From the mean and the standard deviation one can calculate the likelihood of a range belonging to a new object (an intruder) or to the surroundings. If the measured range is smaller than the average range on the map minus two times the standard deviation then there is about a 97.5% chance that this object is an intruder, assuming the measurements are roughly normally distributed.
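This is essentially what the isNew method of the RangeMap class below does; stripped of the display code the test comes down to:

// Is a fresh range reading in segment a an intruder or just measurement noise?
double mean   = map.getMean(a);
double stddev = Math.sqrt(map.getVariance(a));
boolean intruder = range < mean - 2.0 * stddev;  // well below anything measured before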
You can see that none of these decisions needs a permanent map. They can all be made using just an up-to-date image of the current surroundings. It therefore is a good decision not to maintain a permanent map and not to make the program overly complicated.
To conclude: mapping does not have to be very difficult if one doesn’t need a permanent image of the surroundings. The kind of decisions you want to make dictates whether you can do with temporary maps. Also, a map does not have to be very large when the map model is chosen wisely.
For those of you that are interested in the code for the range map, I provide it below. It is for leJOS and not documented in any way, so it will be hard to use straight away. But maybe it can still help you one way or another. The methods that are involved in displaying the map use vector math as this makes rotating and scaling easier. The vector library itself is open source and can be found on the Internet; it is called vecmath.
package nl.totan.robot;
import javax.microedition.lcdui.Graphics;
import lejos.nxt.Button;
import lejos.nxt.LCD;
import nl.totan.sensors.INS;
import nl.totan.vecmath.Matrix3f;
import nl.totan.vecmath.Vector3f;
public class RangeMap {
int segments;
double angle;
int depth;
int[] sum;
int[] n;
int[] sumsqr;
Vector3f v=new Vector3f();
Vector3f v2=new Vector3f();
Vector3f shift=new Vector3f(50,32,0);
INS dcm = null;
float scale;
Graphics plotter=new Graphics();
private Matrix3f rotate = new Matrix3f(0,-1,0,
-1,0,0,
0,0,1);
public RangeMap(int segments, int depth){
this.segments=segments;
this.depth=depth;
angle=2.0*Math.PI/segments;
scale=32.0f/depth;
sum=new int[segments];
n=new int[segments];
sumsqr=new int[segments];
clearMap();
}
public RangeMap(int segments, int depth, INS dcm){
this(segments, depth);
this.dcm=dcm;
}
public void clearMap() {
for (int a=0;a<segments;a++) {
sum[a]=0;
n[a]=0;
sumsqr[a]=0;
}
}
// convert a direction (in radians) to a segment index and back
public int toIndex(double currentAngle) {
int a=(int)Math.floor(currentAngle/angle);
return ((a % segments) + segments) % segments;
}
public double toAngle(int a) {
return a*angle;
}
// per segment statistics: number of measurements, mean range and spread
public int getN(int a) {
return n[a];
}
public double getMean(int a) {
if (n[a]==0) return depth;
return (double)sum[a]/n[a];
}
public double getVariance(int a) {
if (n[a]==0) return 0;
double mean=getMean(a);
return (double)sumsqr[a]/n[a]-mean*mean;
}
// add a single range measurement to the segment it belongs to
protected void update(int a, double range) {
sum[a]+=(int)range;
n[a]++;
sumsqr[a]+=(int)(range*range);
}
public void update(double currentAngle, double range) {
if (Double.isNaN(range)) return;
if (range>depth) range=depth;
update(toIndex(currentAngle),range);
}
public boolean isNew(double currentAngle, double range) {
if (Double.isNaN(range) || range>depth) return false;
int a=toIndex(currentAngle);
LCD.clear();
LCD.drawString("Range: " +range, 0, 0);
LCD.drawString("Mean : " +getMean(a), 0, 1);
LCD.drawString("SDDEV: " +Math.sqrt(getVariance(a)), 0, 2);
LCD.drawString("N : " +getN(a), 0, 4);
LCD.drawString("DIF : " +(range-getMean(a)-2.0f * Math.sqrt(getVariance(a))), 0, 5);
//Button.ENTER.waitForPressAndRelease();
if (range<getMean(a)-2.0f * Math.sqrt(getVariance(a))) return true;
update(a, range);
return false;
}
protected void drawDot( int a, int d) {
v.x=(float)(Math.cos(toAngle(a))*d);
v.y=(float)(Math.sin(toAngle(a))*d);
v2.x=(float)(Math.cos(toAngle(a+1))*d);
v2.y=(float)(Math.sin(toAngle(a+1))*d);
toScreen(v);
toScreen(v2);
plotter.drawLine((int)v.x, (int)v.y,(int)v2.x, (int)v2.y);
}
public void drawMap() {
for (int a=0;a<segments;a++) {
drawDot(a,(int)getMean(a));
}
if (dcm != null) {
//draw north
v.x=0;
v.y=0;
v2.y=0;
v2.x=depth;
toScreen(v);
toScreen(v2);
plotter.drawLine((int)v.x, (int)v.y,(int)v2.x, (int)v2.y);
}
}
public RangeMap getDifference(RangeMap old) {
RangeMap dif=new RangeMap(segments, depth);
double maxDif=10;
double myRange, oldRange;
boolean isNew;
for (int a=0;a<segments;a++) {
myRange=getMean(a);
isNew=true;
// compare against this segment and its direct neighbours on the old map
for (int aa=a-1;aa<=a+1;aa++){
oldRange=old.getMean((aa+segments) % segments);
if (oldRange< myRange+maxDif) {
isNew=false;
}
}
// copy only segments that hold an object that was not on the old map
if (isNew) {
dif.sum[a]=sum[a];
dif.n[a]=n[a];
dif.sumsqr[a]=sumsqr[a];
}
}
return dif;
}
public double getProtectionLevel(int a, int dist, int margin) {
double level=0;
double dist2;
double dif;
double sin_a=Math.sin(toAngle(a))*dist;
double cos_a=Math.cos(toAngle(a))*dist;
for (int aa=0;aa<segments;aa++){
// distance between the candidate spot and the object observed in segment aa
double dx=Math.cos(toAngle(aa))*getMean(aa)-cos_a;
double dy=Math.sin(toAngle(aa))*getMean(aa)-sin_a;
dist2=Math.sqrt(dx*dx+dy*dy);
dif=dist2-margin;
if (dif>0)
level+= 0.5-Math.atan(140*dif/depth-4)/3;
}
return level;
}
public void drawMarker(int a, int d) {
v.x=(float)(Math.cos(toAngle(a))*d);
v.y=(float) (Math.sin(toAngle(a))*d);
toScreen(v);
plotter.drawArc((int)v.x,(int)v.y,4,4,0,360);
}
public void drawRadius(double angle) {
v.x=(float)(Math.cos(angle)*depth);
v.y=(float)(Math.sin(angle)*depth);
v2.x=0;
v2.y=0;
toScreen(v);
toScreen(v2);
plotter.drawLine((int)v.x, (int)v.y,(int)v2.x, (int)v2.y);
}
void toScreen(Vector3f vec) {
if (dcm != null) dcm.transformB(vec);
rotate.transform(vec);
vec.scale(scale);
vec.add(shift);
}
}