Date: 10 10 2008
Duration of activity: three hours
Group members participating: all group members
1. The Goal
The goal of this lab session is to continue the work on the line-following robot with a mounted light sensor. The general goal, of course, is to make the robot go around the track as fast as possible and stop at the blue finish square.
2. The Plan
- Efficiently distinguish the color blue
- Stop at the blue goal zone
- Traverse the track faster
3.1 The problem with blue
After making many runs around the track, the need for calibration becomes obvious. As there are many windows and glass doors in the room with the car track, every change in lighting changes the readings of the sensor. We learned that the hard way. Using hardwired values instead of run-time calibration saved time, but it made things more complicated.
There were many discussions of how exactly to distinguish the color blue: how to set intervals, or how to choose thresholds. The changing ambient lighting didn't make the task easier.
We have a plot which shows exactly what is going on with the sensor while driving around the track:
The blue line in the plot indicates where the 'spectrum' of the color blue lies. The values that the sensor gives do not change instantly from 350 to 500; the transition is smooth (note the many data points between a black and a white readout), so the color blue can be 'seen' many times while driving around the track. This fact is crucial for distinguishing the right blue.
3.2 The solution for blue
Given the above experiences, it is clear that a simple I-saw-blue-so-I'll-stop solution will cause the robot to stop early. The robot must see a number of readouts in the spectrum of blue before it stops. That is not to say that the robot can't react to seeing blue immediately, but the stopping action should be postponed.
We devised a counting algorithm that counts the number of sequential blue readouts it has seen. This is the simpler of the approaches to distinguishing blue, given the hardware that Thomas is composed of.
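A minimal sketch of that sequential counter, in Python; the blue interval and the required streak length are hypothetical calibration values, not the ones used on the robot:

```python
# Sketch of the sequential-blue counter. BLUE_LOW, BLUE_HIGH and
# REQUIRED_STREAK are hypothetical calibration values.
BLUE_LOW, BLUE_HIGH = 400, 440   # sensor interval read as 'blue'
REQUIRED_STREAK = 10             # consecutive blue readouts before stopping

def should_stop(readouts):
    """Return True once REQUIRED_STREAK consecutive readouts fall
    inside the blue interval."""
    streak = 0
    for value in readouts:
        if BLUE_LOW <= value <= BLUE_HIGH:
            streak += 1
            if streak >= REQUIRED_STREAK:
                return True
        else:
            streak = 0   # a single non-blue readout resets the count
    return False
```

A brief pass through blue during a black-to-white transition resets before the streak completes, which is exactly why this postpones the stopping action.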
The other, perhaps better, method would be to keep a window of readouts and let the number of blue readouts in the last N milliseconds be the deciding factor. Such an algorithm can be as simple as: decrement numblues once for every readout (but never below zero), and increment it by three for every blue readout seen. If numblues ends up above 20, a lot of blue has been seen in the current time window, so the robot must be atop something blue.
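The decrement/increment rule above can be sketched directly (again with a hypothetical blue interval):

```python
# Sketch of the leaky counter: every readout decrements numblues
# (never below zero), every blue readout adds three, and the robot
# decides it is atop blue once numblues exceeds 20.
BLUE_LOW, BLUE_HIGH = 400, 440   # hypothetical blue interval

def on_blue(readouts, threshold=20):
    numblues = 0
    for value in readouts:
        numblues = max(0, numblues - 1)   # decay once per readout
        if BLUE_LOW <= value <= BLUE_HIGH:
            numblues += 3                 # blue readouts count triple
        if numblues > threshold:
            return True
    return False
```

Unlike the strict streak counter, an occasional non-blue readout only costs one point here, so the decision is robust against single noisy samples while sparse accidental blues still decay away.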
Note the difference between a running count of discrete blue readouts and a running average of the readouts themselves. The latter will cross from white to black with each of Thomas's twitches, and in the process will cross over the blue value.
3.3 More abstract solutions for blue
Given that the sensor reads out a greyscale value, a three-color transparent layer could be mounted in front of the sensor, and each readout would then correspond to an (R, G, B) tuple built from three readouts, each taken behind its own color of the transparent layer. This exploits the fact that blue light passes relatively unhindered through a transparent blue layer (and likewise red through red, green through green), whereas the other colors are, again relatively, absorbed by the medium.
This would make it easy to distinguish blue, by comparing the blue channel to the mean of the other two.
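That channel comparison is a one-liner; the margin factor below is a hypothetical choice:

```python
def looks_blue(r, g, b, margin=1.3):
    """Blue dominates if its channel exceeds the mean of the other
    two channels by some margin (1.3 is a hypothetical value)."""
    return b > margin * (r + g) / 2
```

The margin keeps near-grey tuples, where all three channels are similar, from being misread as blue.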
Of course, this would require a faster sensor, a motor to spin the color disc and, most importantly, that the robot stays still over a measurement point for the duration of the measurement. So it wouldn't be useful for the application at hand, which depends to a much larger extent on speed of measurement and reaction than on precision of readouts. But the method could find application in, e.g., a color scanner built from LEGO.
Distinguishing blue is easier said than done. It requires averaging a bunch of readouts, and hoping that the blue phenomenon one is trying to measure holds for that whole bunch. If that is not an option, other methods must be used; digital cameras aren't that expensive to mount, either.
We've had a lot of trouble programming the robot, work that we haven't been able to parallelize. We've also stayed the course with a case-wise bang-bang control program, though we've made preparations for converting it into another style of control. We've been hindered by latencies in the reaction to motor commands, and doing PID control with a mechanical/electrical/firmware setup that is doomed to overshoot every time simply didn't catch our interest.
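For reference, the case-wise bang-bang style reduces to a decision like the sketch below; the thresholds and the returned command names are placeholders, not the robot's actual motor interface:

```python
# Sketch of case-wise bang-bang line following. BLACK_BELOW and
# WHITE_ABOVE are hypothetical calibration values.
BLACK_BELOW, WHITE_ABOVE = 380, 460

def control_action(readout):
    """Map one light readout to a steering command."""
    if readout < BLACK_BELOW:
        return "turn_left"    # over the line: steer back toward the edge
    elif readout > WHITE_ABOVE:
        return "turn_right"   # off the line: steer back toward it
    else:
        return "forward"      # on the edge (or blue): keep going
```

Because each case slams the motors one way or the other at full correction, this style is what makes the setup prone to the overshoot and twitching mentioned above.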