Today Dexter Industries launched their latest product, an all-color LED called the dLight. I was involved in the development of the dLights, so I got them early. I can also give away one set of dLights to one of my readers. More on that later. Let's take a look at the dLights first in this video.

I mounted three dLights underneath Agilis, one under each leg pointing at the wheel. I programmed the dLights to show a color that corresponds to the wheel speed of that particular leg. I think the result looks cool.

If you look closely at the sensor ports in the video, you'll notice that only one is in use. This is one of the great benefits of the dLights: you can daisy chain them. So one free port is all you need to give a robot multiple lights. One set contains four dLights plus the cables to chain them.

As said, I can give away one set of dLights. If you want this set you just have to reply to this message before the first of April.

You’ll find more info about the dLights on the website of Dexter Industries.

In case you wonder how fast or accurate Agilis is, here are some numbers.

Setup

  • The gear ratio of the motors to the wheels is 1:2, making the wheels rotate twice as fast as the motors. (It is possible to change the gear ratio to 1:3).
  • Prototype orange Rotacaster wheels. This is the hard compound. There is also a medium compound (gray) and a soft compound (black) available.
  • The batteries were rechargeable NiMH, 2500 mAh batteries. These were not fully charged.
  • The test surface was a clean linoleum floor.

Speed

  • Reliable top speed is about 60 cm/sec, equivalent to 2.16 km/h or 1.35 mph. At this speed the robot is still accurate as there is ample margin for the PID controllers of the motors.
  • Unreliable top speed is about 75 cm/sec, equivalent to 2.7 km/h or 1.68 mph. At this speed the robot is not very accurate, especially in its heading.

Accuracy

  • The test track is a square with sides of one meter. During each run the test track is traveled four times, making the total distance 16 meters.
  • The robot finishes the test track on average within 10 cm of its starting position. Expressed as a percentage of the total distance the error is about 0.6%.
  • The movement error is systematic. The robot always ends up above and to the right of the starting position.
  • The robot is more accurate at slower speed and acceleration settings.

The image shows the results of the accuracy testing. For each test the robot was placed exactly on the origin (bottom left in the picture). It then traveled the one-meter square four times, making the total distance traveled 16 meters. The finish location of the robot was then marked on the floor. This test was repeated three times for a single set of speed and acceleration settings. Three different dynamic sets were used: speed 50 cm/sec with acceleration 100 cm/sec^2, speed 50 cm/sec with acceleration 750 cm/sec^2, and speed 30 cm/sec with acceleration 60 cm/sec^2.

(photo: finish positions of the accuracy test runs marked on the floor)

I want to repeat the tests with a 1:3 gear ratio and also with the black Rotacaster wheels.

If you ever made an autonomous robot, differential or holonomic, you'll have experienced that it most often doesn't move very elegantly. Agilis, my current robot, didn't move smoothly either. This is because its movements are composed of different actions. Each action, driving, turning and all the other fancy movements it can make, is very smooth by itself, as you could see in the previous video. But when going from one action to the other, the robot experiences violent changes in wheel speed. This leads to wheel spin, sliding and large errors in positioning. The robot tries to go from one speed (one action) to a different speed (the second action) in an instant.

One way to solve this problem is to make the robot accelerate and decelerate at the beginning and end of each action. This gets rid of the positioning errors and the violent accelerations. But it also means that there is a stop between each action, and that is not what I want for a fast moving robot like Agilis.

The way I made Agilis move smoothly is by action blending (as I call it). By this I mean that I let an action fade out while the next one fades in, just like a DJ mixes tracks or a painter mixes colors. This works very well, as you can see in this video. We see Agilis cutting corners when instructed to drive a square. The cutting of a corner is the result of blending two actions, one saying "go forward", the other saying "go left". The blending also depends on the speed of the robot, as a faster moving robot needs a bigger corner to turn. But it also depends on its capability to brake and accelerate: the quicker it can do this, the smaller the turns it can make. I fiddled with this in the video.

There were a lot of technical and logical considerations when implementing action blending in code. I think most of you can't be bothered with this, so I won't discuss them unless there proves to be a demand for it. To facilitate further development, and to work around errors if needed, I made action blending optional in my code.
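For those who are curious, here is a minimal sketch of what I mean by blending two speed vectors. This is not my actual implementation; all names and numbers are made up for illustration.

public class ActionBlendingSketch {

    // Blend two speed vectors {xSpeed, ySpeed, angularSpeed}.
    // t runs from 0 (blend starts) to blendTime (blend complete).
    static double[] blend(double[] fadingOut, double[] fadingIn, double t, double blendTime) {
        double w = Math.min(1.0, Math.max(0.0, t / blendTime)); // weight of the incoming action
        double[] result = new double[3];
        for (int i = 0; i < 3; i++)
            result[i] = (1 - w) * fadingOut[i] + w * fadingIn[i];
        return result;
    }

    public static void main(String[] args) {
        double[] forward = {50, 0, 0}; // go forward at 50 cm/sec
        double[] left = {0, 50, 0};    // go left at 50 cm/sec
        // halfway through a one second blend the robot cuts the corner diagonally
        double[] mixed = blend(forward, left, 0.5, 1.0);
        System.out.println(mixed[0] + ", " + mixed[1] + ", " + mixed[2]);
    }
}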

Two things still bother me. One is that with short actions or low acceleration values more than two actions can overlap in time, and I can only handle two. The other is that I haven't got the angle between two actions in the equation, so the results are angle specific, giving good results with square angles only.

I have been using Premiere Elements 11 for the last two videos and obviously still need to learn a lot. Sorry for the blurry green texts, I don't know how to fix that. They were perfectly sharp when I wrote them on the floor.

Last post I introduced Agilis, and I discussed the calculations needed for driving the motors and for doing odometry. This time I’ll discuss how it moves. But first, let us take a look at a video.

In the video we see that Agilis can make some amazing movements. It circles around while facing in one direction, for example. Its movements are constrained neither in heading nor in direction; it can make any movement. This makes Agilis a holonomic robot. Any movement a holonomic robot can possibly make can be described as a combination of two functions. One function describes the robot's location (X and Y coordinates) in time. The second describes its heading (H) in time. The two functions together give the robot's pose at a given moment.

For the above example the function for location is
x=cos(t)*radius;
y=sin(t)*radius;

The function for heading is
h=aConstantValue

As I discussed in my previous post, Agilis accepts a speed vector {x,y,h} as input for its drive system. The speed vector is the derivative of pose. So we need to use the derivatives of the above functions if we want to use them to make Agilis move. This might sound complicated, but what it means is that we need functions that give a speed at a moment in time. For our example these functions are


x=-sin(t)*radius;
y=cos(t)*radius;

The function for h is
h=0
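To make this concrete, here is a small sketch of how such a pair of speed functions could be sampled at time t to build the speed vector. The class and its names are mine, just for illustration; this is not the actual Agilis code.

public class CircleSpeedFunction {

    // sample the derivative functions at time t to get {xSpeed, ySpeed, angularSpeed}
    static double[] speedAt(double t, double radius) {
        double xSpeed = -Math.sin(t) * radius; // derivative of cos(t)*radius
        double ySpeed = Math.cos(t) * radius;  // derivative of sin(t)*radius
        double hSpeed = 0;                     // the heading stays constant
        return new double[] {xSpeed, ySpeed, hSpeed};
    }

    public static void main(String[] args) {
        double[] v = speedAt(0.5, 30); // circle with a 30 cm radius, sampled at t = 0.5
        System.out.println(v[0] + ", " + v[1] + ", " + v[2]);
    }
}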

There are two important remarks to the above.
First, it seems arbitrary to bundle x and y together in one function while h is calculated in another. There is a good reason for this: it is how we think. We think of a robot driving in circles, which is x and y together. We think of a robot facing north, which is h. Also, if we have two independent functions in the program, we can make very interesting combinations of location and heading functions.
Second, you might wonder why I discussed the two forms of the formulas. Well, in most cases it is best to use the speed (or derivative) form of a function. In some cases though, there is no (obvious) derivative of a function. This is especially the case when a function relates to the outside world. An example would be a function that makes the robot "look" at a fixed point in space, no matter where it is going. That leaves us with a problem though: if there is no derivative of a function, how do you get a speed from such a function? The way to go is to calculate the difference between the desired position or heading and the actual position or heading, and then to translate this difference into a speed. I use a PID controller to make this translation.
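As an illustration, here is a bare-bones sketch of such a translation for the heading: a PID controller that turns a heading error into a rotational speed. The names and gains are invented; this is not the controller Agilis actually uses.

public class HeadingPid {
    double kP, kI, kD;
    double integral, previousError;

    HeadingPid(double kP, double kI, double kD) {
        this.kP = kP; this.kI = kI; this.kD = kD;
    }

    // error: desired heading minus actual heading (radians), dt: time since the last call (seconds)
    double toSpeed(double error, double dt) {
        integral += error * dt;
        double derivative = (error - previousError) / dt;
        previousError = error;
        return kP * error + kI * integral + kD * derivative;
    }

    public static void main(String[] args) {
        HeadingPid pid = new HeadingPid(2, 0, 0.1);
        System.out.println(pid.toSpeed(Math.toRadians(10), 0.02)); // 10 degrees off target
    }
}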

This brings me to the way this is implemented for Agilis. In the code I created a pilot. This pilot is responsible for performing movements. It must be instructed which movements to make so it can execute them. While executing a movement the pilot periodically queries the two movements for the desired speed or location. If it is a location that the movement returns, the pilot translates this into a speed using the PID controller. It then transforms this speed vector from the global coordinate system into the local coordinate system of the robot and passes the speed vector on to the chassis. The chassis is a logical part in my program that should be envisioned just below the pilot. It accepts a speed vector, translates this into speeds for the motors and then drives the motors.

The pilot handles two functions in parallel, one for location and one for heading. Each of the functions can return either a speed or a location, in any combination. A function can have an end to it, like "go straight for one meter", but this is not required, as in "keep facing north".

I have implemented or planned these functions for location.

  • stand still
  • drive a straight line
  • drive an arc

I have implemented or planned these functions for heading.

  • rotate
  • hold still
  • look to a fixed direction
  • look around (from left to right and back and so on)
  • look where you’re going

This is basically all there is to it. This program allows Agilis to perform all the movements you see in the video. But as you can notice, the transition from one movement to another isn't smooth yet. I plan to make it smooth with a technique I call maneuver blending. What it should do is blend two maneuvers so that the transition between the two appears to be natural. As said, I haven't programmed this yet. If I succeed I will dedicate a post to it. If not, it is most unlikely that I'll write about it 🙂

This Christmas holiday I started working on a new robot, called Agilis. This robot should be very agile and move elegantly. The frame is based on a triangle equipped with holonomic wheels. So you might think, "What's new? It is just like your last robot." From the outside this is true, but it gets new and better brains on the inside. Let me tell you what I envision.

Most robots I built went from point A to point B, only then to decide what to do next. Others just drove around avoiding obstacles. This one should be able to do both at the same time. Agilis must be able to perform complex maneuvers, like driving in a straight line while rotating around its center, or driving an arc while keeping pointed at an arbitrary spot. It should constantly use sensory input to stay on course, or to alter its course if needed. And all this must go fluently, just like a cat walking through the room.

Over the next several posts I will discuss the different aspects of Agilis. This first post deals with the drive system.

The chassis

Agilis is a three wheeled holonomic platform. This means it can drive in any direction without turning. It can turn around any point, even around its own center. Each wheel is driven by a NXT motor via a gear train that has a 1:2 ratio, so the wheels rotate twice as fast as the motors. This makes Agilis a relatively fast robot. The gear train has to be very sturdy to handle the torque of the motors. It also has to be precise to allow for odometry. I used the same setup that I developed for my last robot, Koios the Guard bot.

From robot speed to motor speed

It is not very easy to control a holonomic robot; it takes some math. So I created a kinematic model. The model takes robot speed as input and gives motor speed as output. Robot speed is expressed as speed in the x-direction, speed in the y-direction and rotational speed. Motor speed is expressed as encoder ticks per second.

So what does this kinematic model look like? For a single wheel it is a function that takes the three robot speeds as input. For the three wheels together it is a matrix multiplication that multiplies a robot speed vector {xSpeed,ySpeed,angularSpeed} with a kinematic matrix. The resulting vector contains the speed of each of the three wheels. Let's take a look at the single wheel function first.

To translate robot speed into motor speed one needs to know some physical aspects of the robot, the wheel and the motor. How big is the wheel, how far is it from the center of the robot, at what angle is it mounted, what is the gear ratio of the gear train and what is the number of encoder ticks per full rotation of the motor? With all this information one can write a formula to calculate motor speed from robot speed. Here is the formula.

motorSpeed =
xSpeed * cos(wheelAngle) * nEncoderTicks / (gearRatio * 2 * PI * wheelRadius) -
ySpeed * sin(wheelAngle) * nEncoderTicks / (gearRatio * 2 * PI * wheelRadius) +
angularSpeed * distanceToCenter * nEncoderTicks / (gearRatio * 2 * PI * wheelRadius)

This formula might look daunting at first, but on second glance you might notice that there are a lot of constants in it. If you substitute the constants with their respective values you will end up with a much simpler formula.

motorSpeed = xSpeed * aConstantValue - ySpeed * anotherConstantValue + angularSpeed * yetAnotherConstantValue

This formula is not only much simpler, it is also very efficient to calculate: just three multiplications and two additions. A NXT can do this in no time. But remember, these constants are not the same for all the motors, because each of the wheels has a different wheelAngle. You could also have wheels of different sizes, or differences in any of the other aspects. This means that you have a formula for each of the motors; each formula is the same in structure but has its own constants. These constants can be stored in a matrix where each row contains the three constants belonging to a single wheel, so the matrix has a row for each of the wheels. If you then multiply the speed vector with this matrix, all formulas are calculated at once and the result, the motor speeds, is stored in a new vector. Each row in this vector holds the speed of a single motor. In Java this matrix multiplication looks like this:

Matrix motorSpeed = kinematicModel.times(robotSpeed);

Wow, now things look simple at last! This is the beauty of matrix algebra.
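To show how such a kinematic matrix could be filled in, here is a sketch for three identical wheels mounted at 120 degree intervals. The dimensions are example values, not the real Agilis numbers, and I assume the lejos.util.Matrix class offers a double[][] constructor, times() and get(), like the JAMA matrix class it is modeled on.

import lejos.util.Matrix;

public class KinematicModelSketch {

    public static void main(String[] args) {
        double wheelRadius = 2.4;      // cm (example value)
        double distanceToCenter = 9.0; // cm (example value)
        double gearRatio = 2;          // the wheels turn twice as fast as the motors
        double nEncoderTicks = 360;    // encoder ticks per full motor rotation
        double k = nEncoderTicks / (gearRatio * 2 * Math.PI * wheelRadius);

        double[][] rows = new double[3][3];
        for (int wheel = 0; wheel < 3; wheel++) {
            double wheelAngle = Math.toRadians(wheel * 120);
            rows[wheel][0] = Math.cos(wheelAngle) * k;  // constant for xSpeed
            rows[wheel][1] = -Math.sin(wheelAngle) * k; // constant for ySpeed
            rows[wheel][2] = distanceToCenter * k;      // constant for angularSpeed
        }
        Matrix kinematicModel = new Matrix(rows);

        // robot speed: 20 cm/sec in x, nothing in y, no rotation
        Matrix robotSpeed = new Matrix(new double[][] {{20}, {0}, {0}});
        Matrix motorSpeed = kinematicModel.times(robotSpeed);
        System.out.println(motorSpeed.get(0, 0) + " ticks/sec for the first motor");
    }
}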

The same kinematic model can be used to transform robot acceleration into motor acceleration. I use this to make my robot accelerate very smoothly. (the regulated motor class of Lejos supports regulated acceleration).

From tacho counter to robot position

To drive a robot this kinematic model works perfectly. But I also want to be able to do things the other way around: I want to calculate the robot position from encoder values. At first I couldn't figure this out at all. The math was just too complex for my mind. That is, until I realized that I just needed to use the inverse of the kinematic model.

deltaRobotPose = deltaMotorPosition * inverseKinematicModel

Here deltaMotorPosition is a vector containing the change in encoder value of each of the motors since the previous measurement. The inverseKinematicModel is the kinematic model inverted. And deltaRobotPose is the change in pose (x and y position and heading) of the robot. Looks simple, doesn’t it? The problem is how to calculate the inverse matrix of the kinematic model. I can’t tell you, because I don’t know. But hey, somebody else already programmed this in Java. I just used the inverse method of the Matrix class.
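Continuing the sketch above (same assumed Matrix class, made-up encoder values), the odometry step would look something like this inside the same main method:

        Matrix inverseKinematicModel = kinematicModel.inverse();

        // change in encoder value of each motor since the previous measurement
        Matrix deltaMotorPosition = new Matrix(new double[][] {{12}, {-3}, {7}});

        // change in x, y and heading, still expressed in the robot's own coordinates
        Matrix deltaRobotPose = inverseKinematicModel.times(deltaMotorPosition);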

From the robot's coordinates to the world's coordinates

There is just one more thing to it. The robot can have any heading, which means that the x and y coordinates of the robot are not aligned with the x and y coordinates of the world. To be able to steer the robot to a certain position in a room, one must be able to convert this location to a location as the robot sees it. The same goes for keeping track of pose. We have seen the formula to calculate the change in pose from the wheel encoders. This change however is a change as the robot sees it, not a change in the robot's position in the world. The translation from world coordinates to robot coordinates can also be done with a simple matrix multiplication using a rotation matrix. The rotation matrix itself can be calculated from the heading of the robot.

Suppose you want to drive your robot to the left side of the room. The speed vector in the world frame would look like {0, speed, 0}. This can be multiplied with the rotation matrix to get the speed vector as the robot sees it.

robotSpeed = worldSpeed * rotationMatrix

If we want to go the other way around, to get the change in pose in the world frame, we multiply the change in the robot frame with the (you guessed it) inverse of the rotation matrix. For rotation matrices the inverse is the same as the transpose of the matrix, and the transpose is far more efficient to calculate.
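Here is a sketch of this frame conversion, again with invented values and the same assumptions about the Matrix class (including a transpose() method). I write the speeds as column vectors, so the rotation matrix comes first in the multiplication.

import lejos.util.Matrix;

public class RotationSketch {

    public static void main(String[] args) {
        double heading = Math.toRadians(30); // current heading of the robot in the world

        Matrix rotationMatrix = new Matrix(new double[][] {
            { Math.cos(heading), Math.sin(heading), 0},
            {-Math.sin(heading), Math.cos(heading), 0},
            { 0,                 0,                 1}
        });

        // drive to the left side of the room: {0, speed, 0} in world coordinates
        Matrix worldSpeed = new Matrix(new double[][] {{0}, {50}, {0}});
        Matrix robotSpeed = rotationMatrix.times(worldSpeed);            // world -> robot
        Matrix backAgain = rotationMatrix.transpose().times(robotSpeed); // robot -> world, transpose = inverse
        System.out.println(robotSpeed.get(0, 0) + ", " + robotSpeed.get(1, 0));
    }
}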

Wrap up

This really is all there is to driving a robot. To sum it up: you have a kinematic model to translate robot speed into motor speed, and a rotation matrix to translate things from world coordinates to robot coordinates.
The same goes for odometry. You have the inverse of the kinematic model to translate changes in encoder values into a change in robot pose expressed in robot coordinates, and the inverse of the rotation matrix to translate that change from robot coordinates into world coordinates.
The kinematic model is a constant; it has to be calculated only once (unless your robot changes shape). The rotation matrix on the other hand has to be updated every time the heading of the robot changes.

The implementation

The robot uses Lejos as its brains. Lejos has some excellent classes to drive the NXT motors. The regulated motor class that I used is able to rotate a motor at any given speed. This speed is maintained no matter what the conditions are. It also supports setting an acceleration rate. This is very good for my robot, as for most movements the wheel speeds of the three motors are different. If all wheels accelerated equally, the slower moving motors would reach their target speed sooner than the faster spinning motors. This results in a robot that shivers and shakes during acceleration (or braking). All this can be avoided by setting an acceleration value for each of the motors. The ratio of the acceleration values must be the same as the ratio between the differences between current speed and target speed of each of the motors. If done properly the robot accelerates very smoothly without jerky movements.
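A fragment to illustrate the idea, with invented numbers: each motor gets an acceleration proportional to the speed change it has to make, so all three finish accelerating at the same moment. It uses the setSpeed() and setAcceleration() methods of the regulated motor class.

import lejos.nxt.Motor;
import lejos.nxt.NXTRegulatedMotor;

public class ScaledAcceleration {

    public static void main(String[] args) {
        int maxAcceleration = 6000;           // deg/sec^2 for the motor with the largest speed change
        int[] targetSpeed = {720, 360, -180}; // deg/sec, one value per motor (example values)
        int[] currentSpeed = {0, 0, 0};

        int largestChange = 1;
        for (int i = 0; i < 3; i++)
            largestChange = Math.max(largestChange, Math.abs(targetSpeed[i] - currentSpeed[i]));

        NXTRegulatedMotor[] motors = {Motor.A, Motor.B, Motor.C};
        for (int i = 0; i < 3; i++) {
            int change = Math.abs(targetSpeed[i] - currentSpeed[i]);
            motors[i].setAcceleration(maxAcceleration * change / largestChange);
            motors[i].setSpeed(Math.abs(targetSpeed[i]));
            if (targetSpeed[i] >= 0) motors[i].forward(); else motors[i].backward();
        }
    }
}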

Lejos also has a Matrix class that helps to perform matrix algebra. I used this class to store the (inverse) kinematic models and the rotation matrix. I subclassed it to make a Pose class that can be used together with the Matrix class.

To create the kinematic model I developed a Builder class. This class has all kinds of methods to describe the robot, the wheels, the motors and the gear train. When you are done describing the robot, the builder delivers you a kinematic model and an inverse kinematic model.

To drive the robot I made a kind of pilot class. I plan to discuss it in a future post. This pilot accepts the kinematic model in its constructor.

For odometry I made another class, the Odometer. This class uses the inverse kinematic model.

 

This weekend I was wondering if it is possible to use triangulation to pinpoint the position of an object using two ultrasonic sensors. So I made a simple robot and wrote a small program to find out. It proved that this can be done.

Triangulation is a technique that uses angles or distances between points to calculate the position of these points. GPS localization, for example, uses triangulation. The technique is based on the law of cosines. This law allows you to calculate the angles of a triangle given the lengths of its sides, but also to calculate the length of one side given the lengths of the two other sides and the angle of the opposite corner. This might sound complicated, but they teach it in high school, so you might know it.
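As a quick illustration, here are the two forms of the law of cosines that the program below relies on, with made-up numbers:

public class LawOfCosinesExample {
    public static void main(String[] args) {
        double a = 40, b = 20.5;           // two known sides of the triangle
        double gamma = Math.toRadians(75); // the angle of the corner opposite the unknown side c

        double c = Math.sqrt(a * a + b * b - 2 * a * b * Math.cos(gamma)); // the third side
        double alpha = Math.acos((b * b + c * c - a * a) / (2 * b * c));   // the angle opposite side a
        System.out.println("c = " + c + " cm, alpha = " + Math.toDegrees(alpha) + " degrees");
    }
}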

This means that a robot equipped with two US sensors can calculate the exact position of an object, provided it is in sight of both sensors. Potentially, this could be a huge benefit, because a single sensor has a wide (30 degrees) detection beam, giving much uncertainty about the exact location of an object. A single sensor also deforms the apparent shape of an object. Practically this means that constructing maps using a single US sensor is impossible. Maybe, with two sensors and some triangulation, a map can be constructed, as the position of the objects can be calculated.

Here is a picture of the setup I used to test whether this is practically possible with two Lego US sensors. The sensors can be rotated as they are mounted on the shafts of NXT motors. The sensors constantly point to the object in sight. If no object is in sight the sensors point straight ahead, waiting for an object. The rotation of the sensors makes it easy to see where they are "looking", which also makes it easy to judge how well the setup works.

After some tests I am pleased with the effectiveness of this setup. It is accurate up to 2-4 cm. Whether this is enough to produce good maps I don't know, but it is at least effective for a whole range of other tasks. I'm thinking of shooting cannons, search missions and object avoidance. The video gives you an impression.

The effective range of the setup is limited, as the picture to the right shows. I put a piece of Lego on every spot where I got a successful hit. (The eleven hole beam is there for reference.) It is not as wide or long as you might think. I think there are two reasons for that. The main reason is the low resolution of the sensor, just one cm. This causes numerical errors whose effects are small along the forward pointing axis, but that get larger to the sides. Offset error of the sensors is the second cause that limits the sideways effectiveness. Scale errors of the sensors limit the length of the effective range.

You might wonder how well a rigid setup works, where the sensors are not allowed to rotate and home in on the object. Well, obviously the field of view is much smaller, and the sensors should be placed facing inward to maximize the field of view. But I also noticed that the result is less accurate and less stable.

The program I wrote is very simple. It is just these two forms of the law of cosines embedded in some control structures.

package lejos.nxt.sensor.example;

import lejos.nxt.Button;
import lejos.nxt.LCD;
import lejos.nxt.Motor;
import lejos.nxt.NXTRegulatedMotor;
import lejos.nxt.SensorPort;
import lejos.nxt.sensor.filter.AdditionFilter;
import lejos.nxt.sensor.sensor.LcUltrasonic;
import lejos.util.Delay;

public class TwoUSSensors {

	static final int P=5;							// P component of the P-controller
	static final double MARGIN=0.8;		// margin (as a fraction of b) used to decide whether the two sensors see the same object
	static final float MAXRANGE=150;	// sets the maximum range of the sensor
	static final int MINANGLE=25;			// sets the minimum angle a sensor can rotate to (0 means the two sensors are facing each other)
	static final double b=20.5;				// sets the distance between the two sensors, measured from shaft to shaft
	static final float SENSOROFFSET=2.5f;

	static final LcUltrasonic sensorA=new LcUltrasonic(SensorPort.S1);
	static final LcUltrasonic sensorC=new LcUltrasonic(SensorPort.S4);
	static final AdditionFilter corrrectedA=new AdditionFilter(sensorA,SENSOROFFSET);  // to correct offset from shaft
	static final AdditionFilter corrrectedC=new AdditionFilter(sensorC,SENSOROFFSET);

	static final NXTRegulatedMotor C=Motor.C;
	static final NXTRegulatedMotor A=Motor.A;

	public static void main(String[] args) {

		Object run=new TwoUSSensors();

	}

	public TwoUSSensors() {
		double targetC=0, targetA=0;
		double  a,c;
		sensorA.setMode(LcUltrasonic.MODE_PING);
		sensorC.setMode(LcUltrasonic.MODE_PING);

		C.rotateTo(90, true);
		A.rotateTo(90, false);
		Delay.msDelay(500);

		while(!Button.ESCAPE.isDown()) {

			c=corrrectedA.fetchSample();	// side c of the triangle: the distance measured by sensor A
			a=corrrectedC.fetchSample();	// side a of the triangle: the distance measured by sensor C

			LCD.clear();
			LCD.drawString("       A     C", 0, 0);
			LCD.drawString("dist", 0, 1);
			LCD.drawString("dist'", 0, 2);
			LCD.drawString("target", 0, 3);
			LCD.drawString("error", 0, 4);
			LCD.drawString("state", 0, 5);

			LCD.drawInt((int) c, 7, 1);
			LCD.drawInt((int) a, 12, 1);

			if (a>MAXRANGE && c> MAXRANGE) {
				// it is assumed that there are no objects in sight
				 targetA = Math.PI/2;
				 targetC = Math.PI/2;
					LCD.drawString("-", 7, 5);
					LCD.drawString("-", 12, 5);
			}
			else {
				// it is assumed there are objects in sight
				if (Math.abs(a-c)>b*MARGIN) {
					// it is assumed that there are two different objects in sight
					if (a<c) {
						// it is assumed there is an object in front of sensorC
						LCD.drawString("-", 7, 5);
						LCD.drawString("+", 12, 5);
						targetC =Math.toRadians(C.getPosition());
						c=Math.sqrt(a*a+b*b-2*a*b*Math.cos(targetC)); // the distance between sensorA and object;
						targetA =	Math.acos((a*a-b*b-c*c)/(-2*c*b));
					}
					else {
						// it is assumed there is an object in front of sensorA
						LCD.drawString("+", 7, 5);
						LCD.drawString("-", 12, 5);
						targetA =Math.toRadians(A.getPosition());
						a=Math.sqrt(b*b+c*c-2*b*c*Math.cos(targetA)); // the distance between sensorC and the object
						targetC =Math.acos((c*c-a*a-b*b)/(-2*a*b));
					}
					LCD.drawInt((int) c, 7, 2);
					LCD.drawInt((int) a, 12, 2);
				}
				else {
					LCD.drawString("+", 7, 5);
					LCD.drawString("+", 12, 5);
					// it is assumed that there is one object in sight
					targetC =Math.acos((c*c-a*a-b*b)/(-2*a*b));
					targetA =Math.acos((a*a-b*b-c*c)/(-2*c*b));
				}
			}

			LCD.drawInt((int) Math.toDegrees(targetA),7,3);
			LCD.drawInt((int) Math.toDegrees(targetC),12,3);

			// set speed
			LCD.drawInt(rotateTo(A, targetA),7,4);
			LCD.drawInt(rotateTo(C, targetC),12,4);
			Delay.msDelay(20);
		}

		// return to start position
		A.setSpeed(200);
		C.setSpeed(200);
		C.rotateTo(0, true);
		A.rotateTo(0, false);
		Delay.msDelay(500);
	}

int rotateTo(NXTRegulatedMotor motor,double target){
	int targetDeg=(int) Math.toDegrees(target);

	if (targetDeg<MINANGLE) targetDeg=MINANGLE;
	int error=targetDeg-motor.getPosition();
	if (Math.abs(error)<=1) return error;
	motor.setSpeed(error*P);
	motor.rotateTo(targetDeg, true);
	return error;
}

}

In this post I will explain the workings of a special NXT motorized gearbox. The gearbox is unique because it uses two NXT motors both for driving the output and for shifting gears. As such, the full power of the two motors is available for propelling a device, while there is no need for a dedicated motor for shifting.

For its inner workings the gearbox uses differentials, so it is good to understand how a differential works. A differential can be used in two ways. In most cases a differential is used to distribute the power of one motor evenly over two wheels. A setup like this is found in cars, for example. A differential can also be used to combine the power of two motors into one output shaft. When used this way the output shaft will have the combined power (torque) of the two motors and the average speed of the two motors. A setup like this is used when the exact speeds of the two motors might be slightly different.

This gearbox uses one differential to combine the output of the two motors. The output shaft of this differential is connected to a simple gearbox with two gears: the low gear has a ratio of 1:1, the high gear a ratio of 1:3. The gear shifter is controlled by the motors via a second differential, but this time the direction of one of the motors is reversed before going into the differential. This differential also outputs the mean speed of the two motors. But as one of the motors is reversed, this means that when the two motors turn at the same speed, the output shaft of this differential does not move at all, and neither does the gear shifter that is connected to it. If however the two motors rotate at different speeds, the output of this differential rotates and the gear shifter moves, changing the gearbox to another gear. So gears can be changed by creating a speed difference between the two motors.

Of course you will need to control the speed of the motors exactly for this gearbox to function properly. Luckily the NXT motors have tachometers and can be controlled precisely.
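A back-of-the-envelope sketch of the math, in my own formulation and with made-up numbers: the drive differential outputs the average of the two motor speeds, while the shifter differential outputs the average of one motor and the reversed other motor.

public class GearboxMath {
    public static void main(String[] args) {
        double motorB = 400, motorC = 400; // deg/sec

        double driveOutput = (motorB + motorC) / 2; // combined torque, average speed
        double shiftOutput = (motorB - motorC) / 2; // zero as long as the speeds are equal
        System.out.println("drive: " + driveOutput + "  shift: " + shiftOutput);

        // speeding one motor up and slowing the other down moves the shifter
        // without changing the drive output
        motorB = 500; motorC = 300;
        System.out.println("drive: " + (motorB + motorC) / 2 + "  shift: " + (motorB - motorC) / 2);
    }
}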

In this example I used speed differences to control a gearbox. The same technique can also be used to drive a steering mechanism, for example. Then one would need just two motors to both drive and steer a robot, whilst having the full power of both NXT motors available for propulsion. As the NXT only has three motor ports, this can be a good way to free one motor for other purposes, like grabbing, lifting or scanning.

This video shows how the gearbox works.

This video shows the gearbox in action.

And here are some detailed pictures:


The Lego ultrasonic sensor, or US sensor for short, is a range sensor. It measures the distance between the sensor and an object in front of it. It does so by sending short beeps and measuring how much time it takes for the beep to bounce back. Knowing the speed of sound, it can then calculate the distance to the object. It works just like the sonar of a submarine.

The US sensor is one of the most widely used sensors for Mindstorms. It is included in both NXT sets. Its most common application is object avoidance. Given the widespread use of the US sensor, there is surprisingly little information available about it. This post hopes to change that.

The sensor is a digital sensor. It returns the distance to an object (or objects, as we will discuss later) as a byte value expressed in cm. The value 255 has a special meaning: it indicates there is no object within measuring range.
So in theory its minimum range is 0 cm and its maximum range is 254 cm. In reality the minimum range is about 7 cm. The maximum range depends on the object to be detected; large and hard objects can be detected over a longer range than small and soft objects. False echoes can limit the practical range even further. In my experience a wall (large and hard) can often be detected if it is within a range of two meters, but for a reliable signal it has to be within a range of 160 centimeters.

The resolution of the sensor is 1 cm, but that does not mean the sensor is that accurate. Most often the sensor measurement is a few cm off the true distance. If accurate measurements are needed, one has to find out how big the error of the particular sensor is and compensate for it. According to the datasheet the sensor can compensate for the error itself when supplied with calibration settings. However, I have not been able to make this work. My sensors ignore calibration commands, so I suspect that this functionality doesn't work. (By the way, the calibration address is not 0x50 as the datasheet states, it is 0x4a.) Any calibration therefore has to be done in the user program.

The sensor has five modes of operation: off, continuous, ping, capture and reset. Only the continuous and ping modes work properly. The capture mode is supposed to detect other US sensors in the vicinity, but this does not work. The sensor freezes in this mode and can only be activated again using the reset mode.
In continuous mode the sensor does continuous measurements at an interval of about 35 msec. It will only detect the nearest object.
In ping mode the sensor performs a single measurement, after which it goes into off mode. For a new measurement it has to be put in ping mode again. Ping mode has two advantages over continuous mode. In continuous mode one can only have one active sensor; adding a second sensor would disturb the echo of the first sensor, as there is no way it can distinguish between an echo from its own beep and a beep from another sensor. In ping mode one can make sure that no two pings are issued simultaneously and thus avoid disturbances. The second advantage of ping mode should be the ability of the sensor to measure the range of up to eight objects. This however does not work flawlessly. I get odd ranges for every object but the first. I guess the sensor's ability to filter out false echoes is not so good. I have tried to filter out false echoes myself but I did not succeed. I therefore only use the first range when using my sensors in ping mode.
All in all, I was disappointed in the advanced capabilities of the sensor.

A sound beep does not travel in a narrow straight line as a laser beam does. Instead it spreads out like the light beam of a torch. As a result the sensor not only detects objects that are exactly in front of it, it also detects objects that are somewhat to the left or right. As a rule of thumb one can assume that objects within an angle of 15 degrees to the left or right are detected. The total width of the detection area is about 30 degrees. When using the sensor for obstacle avoidance this wide beam is an advantage. When the sensor is used for mapping it is a drawback, as objects seem wider than they really are. They also seem arc shaped, the arc having an angle of at least 30 degrees for small objects and more for wider objects. See the image below. This makes the US sensor less suitable for detailed mapping. But keeping these effects in mind one can still create useful maps with this sensor. See one of my previous posts, a range map, for a successful application.

The US sensor sometimes suffers from an error in its measurement; it then returns a value of 255. This can happen at any time but is more frequent when the object is difficult to detect. This can lead to strange behavior of a robot. I once had an obstacle avoiding robot that sometimes drove straight for a short moment while turning to avoid an obstacle. It turned out that the sensor sometimes returned the 255 value even when an obstacle was in front of the robot. My program interpreted this value as "no obstacles ahead" and went straight ahead. It corrected itself once it got a new, valid measurement, but its behavior seemed strange. Since then I always use a median or min filter on the US sensor. These filters filter out a single 255 value but pass a series of 255 values. As a result they effectively distinguish an invalid measurement from a "no obstacle ahead" condition.
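For illustration, here is a minimal three sample median filter of my own; it is not the filter class I actually use, but it shows the principle: a single stray 255 is suppressed, while a series of 255 values passes through.

public class MedianOfThree {
    private final float[] window = {255, 255, 255};
    private int index = 0;

    public float filter(float newSample) {
        window[index] = newSample;
        index = (index + 1) % 3;
        float a = window[0], b = window[1], c = window[2];
        return Math.max(Math.min(a, b), Math.min(Math.max(a, b), c)); // median of three values
    }

    public static void main(String[] args) {
        MedianOfThree filter = new MedianOfThree();
        float[] samples = {40, 41, 42, 255, 43, 44, 255, 255, 255};
        for (float s : samples)
            System.out.println(s + " -> " + filter.filter(s));
    }
}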

A few last, technical, remarks on the sensor.

  1. According to the datasheet one can alter the measurement interval of the sensor. I could not see any effect of this.
  2. One can write to address 104. This is not documented and I could not find out what it is for.
  3. Xander has a nice post showing the internals of the sensor.

Remember my plan to make a ball balancing robot? Last year I set myself the goal to make a ball balancing robot. I even built the robot. Since then I wandered off my original goal and made a guard bot, Koios, from this platform. Now I am having another shot at making a balancing robot.

Programming a balancing robot is easy in theory. You just need a sensor that tells you how much the robot is tilted; most often people use a gyro for this. I use my IMU, so that I do not suffer from gyro drift. The tilt angle is then fed to a PID controller that transforms tilt into motor speed. The hard part is to tune the PID controller. It has to translate tilt into just the right amount of motor speed: too little and the robot falls off the ball, too much and the robot goes over the top and falls off on the other side of the ball. Falling off the ball damages the robot. So I had a problem: how to tune the PID controller without damaging the robot?
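In case the PID idea is new to you, here is a bare-bones sketch of the control loop: tilt in, wheel speed out. The gains and helper methods are invented placeholders, not my actual balancing code.

public class BalancePidSketch {
    // gains that have to be found by tuning
    static final double KP = 35, KI = 0.4, KD = 8;

    public static void main(String[] args) {
        double integral = 0, previousTilt = 0;
        double dt = 0.01; // control loop period in seconds

        while (true) {
            double tilt = readTiltFromImu(); // radians, 0 means perfectly balanced
            integral += tilt * dt;
            double derivative = (tilt - previousTilt) / dt;
            previousTilt = tilt;

            double speed = KP * tilt + KI * integral + KD * derivative;
            driveWheels(speed); // hand the speed to the omniwheel chassis
            // (in a real program there would be a short delay here)
        }
    }

    static double readTiltFromImu() { return 0; } // placeholder for the IMU filter
    static void driveWheels(double speed) { }     // placeholder for the motor code
}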

To be able to tune the PID controller without damaging the robot, I made a special platform. It is a large disk with a small pole in the middle pointing down. Due to the pole the disk is always tilted when lying on the ground; it is level only when it balances perfectly on the pole. Therefore this disk can be used to tune the controller. The robot can ride off the disk, but it doesn't fall; it just ends up on the floor with one or two wheels.

When I tested this setup I discovered that the disk was too smooth; the wheels didn't have enough grip and slipped. To increase the friction I coated the surface of the disk with silicone rubber. It is the light blue surface you see in the picture. Now I have a very "slick" surface. I only hope it lasts under the forces the NXT motors generate. But for the moment this problem is solved.

But there are other problems. One is the fact that these holonomic wheels make the robot vibrate, and this affects the IMU filter; there is still some drift, although it stays within certain limits. I do have prototype Rotacaster wheels. The manufacturer told me that the production wheels are more round and generate less vibration. If you are ever going to buy these wheels, and they are a lot of fun, I advise you to take the black ones. They have the best grip. Anyway, I will have to tune the IMU as well.

Tuning PID controllers is difficult and very, very time consuming. There is some theory around tuning PID controllers, but in the end it is mostly trial and error. Every time I want to try a new set of parameters I have to modify the program, download it to the brick, run the program and evaluate the results by watching the robot. It is hard to understand what goes wrong when you see the robot ride off the disk and make a run for the door to the staircase.

But not anymore. Kirk, one of the developers of Lejos, made a very nice program that allows you to tune a running PID controller over Bluetooth. The tool is still under development, so you won't find it in Lejos 0.9.1 yet. This program is an add-on to the charting logger I often use to evaluate the internals of the robot on the PC. So basically, this program shows me what is going on in my robot and allows me to modify PID parameters on the fly. I think this is a great tool. Below is a screen shot of it.

(screenshot: the PID tuning tool)

So, now I have the robot, a test platform and an efficient tuning tool. That must mean immediate success! Well, to be honest I don't think so. I'm still not sure if I can get this robot to work, as there are problems with weight and inertia as well. The robot weighs 998 grams. This is quite heavy, even for three powerful NXT motors. The robot is quite stiff, but it still bends a bit under its weight. This affects the IMU sensor. And I'm working on other projects as well. So in the end I think there is a bigger chance to fail than to succeed.

To be continued.