Friday, 31 October 2008

NXT Programming, Lesson 7

Date: 31 10 2008
Duration of activity: three hours
Group members participating: all group members

1. The Goal

The goal of this lesson is to build and program Braitenberg vehicles.

2. The Plan

  • Build and program Braitenberg vehicles

  • Replace vehicle 2b's stimulating connections with inhibiting connections

  • Discuss what would happen if multiple robots, all reacting to light and each carrying its own light source on top, were put together in the same environment

  • Discuss how the use of multiple threads for measuring multiple sensor inputs would influence the robot's behavior

  • Rewrite the code of the vehicle to be self-calibrating with respect to the environment by only making a calibration last for N samples for some N

3. The Results

Our goals for this lab session are based on the assignments on the course homepage for lesson 7, but we changed some build/program assignments into discussion assignments.

  • Rebuilding our robot (named Thomas) to become a Braitenberg vehicle was straightforward. We took the design from our previous build and made the front be the back, and the back be the front. We replaced the light sensor on the back, which was pointing toward the ground, with one (and later on, two) on the front, pointing straight forward.

  • As suggested at the beginning of the lab description, we tried out all three different kinds of vehicles: 1, 2a, and 2b, as they appear in the following picture:

    More about the exact results of our robot's construction can be read in 3.1. Constructions of the robot.

  • There were some aspects of the lab work that we decided not to implement. After constructing the different variants of the robot, we ran some experiments and observed its behavior, which led to some reasonable discussions. These discussions are covered in 3.2. Discussions.

  • The last thing that our group decided to try out with the robot, was to make it adaptive. We focused on finding the average of the light level in order to determine whether to respond (and, if so, how to respond) or not to the measured light level. We used ideas borrowed from 'Notes on construction of Braitenberg's Vehicles', by Tom Dean. Read more about this in 3.3. Adaptive robot behavior.

  • 3.1. Constructions of the robot

    1. To construct vehicle 1, we used two light sensors mounted on the robot and facing straight forward; these light sensors are then averaged in software, so as to appear as one single sensor. The idea is that the robot moves straight towards the light source when the light level goes over a predetermined threshold. In this case, both motors react the same, as there is only one sensor input.

    2. To construct vehicle 2a we used the same two light sensors mounted on the robot, facing straight forward. The idea is that the robot tries to avoid the light in some sense: When the right sensor gets enough light, the program causes the right motor to turn on (symmetrical for left). When doing experiments with this kind of robot behavior, it seems that the robot tries to go away from the light source.

    3. To construct vehicle 2b, we again used the two light sensors mounted on the robot. But now it is the opposite case: The robot follows the light. When the right sensor gets enough light, the control program causes the left motor to turn on (and symmetrically for the left sensor). When doing experiments with this kind of robot behavior, it seems that the robot tries to go directly to the source of light---if it starts out seeing strong enough light, that is, because there's no light-finding part of the algorithm.
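The three wirings above can be sketched as simple sensor-to-motor mappings. This is a minimal, hypothetical sketch, not our actual control code: the threshold and the power scaling are made up, and plain ints stand in for the leJOS sensor and motor classes.

```java
// Sketch of the three vehicle wirings. All constants are illustrative.
public class BraitenbergWiring {
    static final int THRESHOLD = 500; // assumed normalized light threshold

    /* Vehicle 1: both sensors averaged into one value driving both motors. */
    public static int vehicle1Power(int left, int right) {
        int avg = (left + right) / 2;
        return avg > THRESHOLD ? avg / 10 : 0; // scale reading to motor power
    }

    /* Vehicle 2a (uncrossed): each sensor excites the motor on its own side,
       so the robot turns away from the light. Returns {leftPower, rightPower}. */
    public static int[] vehicle2aPowers(int left, int right) {
        return new int[] { left > THRESHOLD ? left / 10 : 0,
                           right > THRESHOLD ? right / 10 : 0 };
    }

    /* Vehicle 2b (crossed): each sensor excites the opposite motor,
       so the robot turns toward the light. */
    public static int[] vehicle2bPowers(int left, int right) {
        return new int[] { right > THRESHOLD ? right / 10 : 0,
                           left > THRESHOLD ? left / 10 : 0 };
    }
}
```

With light to the robot's left (left sensor bright), 2a powers the left motor and veers away, while 2b powers the right motor and veers toward the source.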

    The majority of our experiments were done with a LED (light-emitting diode), which gives a concentrated cone of light, which is relatively directional and narrow. When one of the robot's light sensors gets the amount of light that results from being inside the LED cone, it reacts pretty quickly and reveals the actual behavior of the robot.

    The problem with this is that the LED is too directional to be practical. It is basically only able to hit a single light sensor, when used within the range where the emitted light is still bright enough.

    Unfortunately, we weren't able to test with a less directional, high-amplitude light source (like a table lamp); we couldn't find anything usable anywhere around. Nevertheless, we could establish some thresholds that made the robot react to the LED cone from further away.

    Below is a plot of the light sensor values read when aiming the light sensor at a stationary light and the inside of a suit, respectively:

    Despite everything, we managed to pull off a very promising experiment: The robot was placed in a dark room (without windows or other light sources) facing a closed door, and then the door was opened to a very bright room. With the appropriate source code uploaded, the robot came 'alive' and gladly ran towards the door and straight through it.

    In this case, the robot was already facing the door. Generally, the next step would be to make the robot look for light (by turning to the left or right). We didn't implement this feature for lack of time.

    3.2. Discussions
    • Sound sensors.

    • For vehicle 1, with its single (averaged) sensor, the behavior of the robot would be similar: The more light (sound) there is in the environment, the more power is applied to both motors and, therefore, the faster the robot goes.

      When we have two sensors on the robot, it is a lot easier to work with two light sensors than with two sound sensors. If we were to measure the sound from the surrounding environment, both sound sensors would pick up similar values, so providing directed sound levels that differ enough to reveal the behavioral patterns of the robot is not so easy. Of course, if we placed the sensors on directly opposite sides, we might have achieved the desired results.

      All in all, it seemed less troublesome to stick with the light sensors in our experiments.

    • Inhibited connections.

    • Having inhibiting connections instead of stimulating ones will make the robot apply less power to the motors when measuring more light. As a result, the robot will accelerate quickly towards a light source and then slow down as it gets closer, until it finally stops close to the light source.

    • About the lamp on top of the robot and multiple other robots around.

    • A deterministic robot, reacting in a predefined way, will (or should, at least) always act the same in a static environment. It becomes much more interesting when the environment is dynamic, because then it isn't possible to predict the behavior of the robot.

      We can argue how the robot will act on a theoretical level, because we are in control of the robot's reactions to input, but we cannot predict the course the robot will drive when let loose over a period of time. (Albeit, from a strictly theoretical point of view, the whole system has a state and this state will deterministically result in a predeterminable outcome. But that's esoteric.)

      When trying to describe what we think a robot will do in a dynamic environment, we have a tendency to describe it like we would a human having the same goal as the robot.

      Therefore, given multiple robots of type 2b, each with a light bulb on top, put in the same environment, what would they do? They would gather in groups, minimally in pairs. Once every robot is in a group, everybody would drive a little back and forth, always correcting the distance between one another.

      This kind of behavior can most certainly be found in nature. Just think of a shoal of fish or a swarm of mosquitoes. Although this might not be 'intelligent' behavior, it is the behavior of something living.

      When putting vehicles of type 2a together, the robots will back away from one another, always driving in the direction that leads furthest away from the rest of the group. In the end, every robot will have situated itself in isolation from the others.

      All of the above implicitly assumes that the governing algorithms work as intended, and does not account for the inherent discretization in sensor values. In the real world, for instance, the 2a robots would probably not find a corner each, but would at least show willingness to do so.

    • Vehicles with two threads of control.

    • As long as we are only using one thread, and the control for both motors is in this thread, both motors will always get their commands at the same time. When using one thread per sensor and/or motor, it is the scheduler's responsibility to ensure that every module of the vehicle gets the CPU cycles it requires to react to the environment, and this makes the modules work disjointly from one another. The disjointness of modules is a very nice feature to have for more complex systems, where we would like to be sure that one module in the system can't influence another module's reactions if they don't share sensors or actuators.
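As a toy sketch of this one-thread-per-module idea: each controller thread below only touches its own sensor/motor pair, so the modules cannot interfere with each other. The AtomicIntegers are stand-ins for leJOS sensor ports and motors; all names and numbers are our own.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical threaded vehicle: one controller thread per sensor/motor pair.
public class ThreadedVehicle {
    public final AtomicInteger leftSensor = new AtomicInteger();
    public final AtomicInteger rightSensor = new AtomicInteger();
    public final AtomicInteger leftMotor = new AtomicInteger();
    public final AtomicInteger rightMotor = new AtomicInteger();

    // Each module sees only its own sensor and actuator.
    private Thread controller(final AtomicInteger sensor, final AtomicInteger motor) {
        return new Thread(() -> {
            for (int i = 0; i < 100; i++) {
                motor.set(sensor.get() / 10); // react to the environment
                Thread.yield();               // let the scheduler interleave modules
            }
        });
    }

    public void run() {
        Thread left = controller(leftSensor, leftMotor);
        Thread right = controller(rightSensor, rightMotor);
        left.start();
        right.start();
        try {
            left.join();
            right.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

The scheduler decides the interleaving, but since the modules share no state, the final motor powers depend only on their own sensors.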

    3.3. Adaptive robot behavior

    Among the proposed experiments, the one investigating how to adapt bounds dynamically was the most interesting. The other candidates, making the program threaded or trying sound sensors, are mostly interesting for the resulting discussion.

    We tried to make a ``sliding window'' of values that should be interpreted as light and dark. This sliding window is parametrized by a center point, which itself slides, and a radius around it. Values are expected to lie within centerpoint ± radius, and the LightSensor class's own normalization is used with this information.

    The centerpoint is a running average of the values collected. For this calculation, we refer once again to Tom Dean's chapter 5. The value is calculated and used roughly as:

    public static final double BETA = 0.1;

    /* ... */

    /* Use the *last* value of the average: */
    average += BETA * ((read_left_norm + read_right_norm)/2 - average);

    The average is thereby incremented by BETA times the difference between the currently read value (note the inline averaging of the two sensors) and the previous value of average, in accordance with Dean's approach.

    The value of BETA is the tricky part---the rest is only inserting the formula---because its value specifies how strongly old values count in the re-calculation. If BETA is a large number, the running average will only reflect the very most recent values; for smaller values, more of the past is considered in the present---so to speak.

    The issue is rounding. The value ``average'' is an integral type, and for each step in the calculation, the intermediate result is discretized to become an integer. This makes very small values of BETA infeasible, since the running average cannot reach the expected average value. (E.g., even after a thousand consecutive readouts of values of circa 350, the running average would stick at 180-something.)
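The rounding effect can be demonstrated without the robot. In this sketch the readout value, BETA and step count are illustrative (the exact stall point depends on BETA), but the pattern is the same: the integer version stalls well below the true mean, while a floating-point average converges.

```java
// Demonstration of the rounding issue: with an integer running average,
// Java's compound assignment truncates the result back to int each step,
// so for small BETA the increment rounds to zero and the average stalls.
public class RunningAverageRounding {
    public static int intAverage(double beta, int readout, int steps) {
        int average = 0;
        for (int i = 0; i < steps; i++)
            average += beta * (readout - average); // truncated back to int!
        return average;
    }

    public static double doubleAverage(double beta, int readout, int steps) {
        double average = 0;
        for (int i = 0; i < steps; i++)
            average += beta * (readout - average); // no truncation
        return average;
    }
}
```

With beta = 0.005 and constant readouts of 350, the integer average gets stuck as soon as the per-step increment drops below 1, while the double average approaches 350.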

    Add to this concern (which, it should be noted, can be patched up algorithmically) the wish to keep a tremendous amount of past values relevant in the current value: If the robot stands in a room where the lights are switched off, it should not willingly and profusely adjust its notion of ``dark'' to fit its surroundings and start moving towards e.g. a standby LED on some device. But it should be able to adjust its notion of ``dark'' over the span of a day or over the span of its assignment.

    A BETA of 0.1, however, was reasonable, and the robot did show signs of adapting its reaction to its sliding window: When kept blindfolded for some seconds, it would react much more to the LED cone than it would without the prior blindfolding.

    4. Conclusion

    This week we have tried to produce somewhat complex (and random) behavior with very simple connections and control programs. We have built and tested Braitenberg vehicles, which led to some interesting discussions about what these simple connections could amount to in the robot's ability to react to the environment, and whether or not intelligence is needed to produce life-like behavior. Although we didn't build and test all the robots in the lesson notes, we did discuss what the results would have been.

    We discussed the capabilities of a robot that, in the dark, would turn around to find a light source and then drive toward it. This would be a very simple system to build and program, yet it would display some complex movement, because its reactions are based on a dynamic environment. Unfortunately, there wasn't time to implement the system.

    Thursday, 9 October 2008

    NXT Programming, Lesson 5 continued

    Date: 10 10 2008
    Duration of activity: three hours
    Group members participating: all group members

    1. The Goal

    The goal of this lab session is to continue the work on the robot with mounted light sensor following the line. The general goal, of course, is to make the robot go around the track as fast as possible and to stop at the finish blue square.

    2. The Plan

    • Efficiently distinguish the color blue

    • Stop at the blue goal zone

    • Traverse the track faster

    3. The Results

    3.1 The problem with blue

    After making many runs around the track, the need for calibration became obvious. As there are many windows and glass doors in the room where the car track is, every single change in lighting changes the readings of the sensor. We learned that the hard way. It was time-saving to use hardwired values instead of run-time calibration, but that made things more complicated.

    There were many discussions of how exactly to distinguish the color blue: How to set intervals, and how to choose thresholds. The changing surrounding lighting didn't make the task easier.

    We have a plot which exactly shows what is going on with the sensor while driving around the track:

    The blue line in the plot indicates where the 'spectrum' of the color blue is. The values that the sensor gives do not change instantly from, say, 350 to 500; the transition is smooth (note the many data points between a black and a white readout). Therefore, the color blue can be 'seen' many times while driving around the track. This fact is crucial for distinguishing blue correctly.

    3.2 The solution for blue

    Given the above experiences, it's clear that a simple I-saw-blue-so-I'll-stop solution will cause the robot to stop too early. The robot must see a number of readouts in the spectrum of blue before it stops. That is not to say that the robot can't react to seeing blue immediately, but the stopping action should be postponed.

    We simply devised a counting algorithm that counts the number of sequential blue readouts it has seen. This is the simpler of the approaches that exist for distinguishing blue, given the hardware that Thomas is composed of:

    The other---perhaps better---method would be to keep a number of readouts, and let the number of blue readouts in the last N milliseconds be the limiting factor. Such an algorithm may be as simple as: Decrement numblues once for every readout (but never below zero), but increment it by three for every blue readout seen. If numblues ends up being above 20, a lot of blue has been seen in the current time window, so the robot must be atop something blue.

    Note the difference between the running average of discrete number of blue readouts and the running average of the readouts themselves. The latter will cross from white to black for each of Thomas's twitches, and in the process, will cross over the blue value.
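The leaky-counter variant can be sketched as follows. The constants 3 and 20 follow the description above; the blue band in isBlue and the readout values are made-up placeholders, not our calibrated thresholds.

```java
// Sketch of the leaky counter: decrement once per readout (never below zero),
// add three per blue readout, and declare "atop blue" once the counter
// exceeds 20 -- i.e. a lot of blue was seen in the recent window.
public class BlueCounter {
    private int numblues = 0;

    // Assumed blue band on the normalized 10-bit scale (illustrative values).
    static boolean isBlue(int readout) {
        return readout > 400 && readout < 460;
    }

    /** Feed one readout; returns true once enough recent readouts were blue. */
    public boolean sample(int readout) {
        if (numblues > 0) numblues--;       // leak
        if (isBlue(readout)) numblues += 3; // reward blue readouts
        return numblues > 20;
    }
}
```

A single blue readout during a black-to-white crossing only nudges the counter, which then leaks away again, while a sustained run of blue readouts drives it over the threshold.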

    3.3 More abstract solutions for blue

    Given that the sensor reads out a greyscale value, a three-color transparent layer could be mounted in front of the sensor, and each read-out would correspond to the (R, G, B) tuple that follows from three readouts, each taken behind its own color of the transparent layer. This exploits the fact that blue light passes relatively unhindered through a transparent blue layer, whereas the other colors are---again, relatively---absorbed by the medium (and likewise for red and green).

    This will make it easy to distinguish blue, by comparing that channel to the mean of the others.

    Of course, this would require a faster-working sensor, a motor to spin a disc and, most importantly, that the robot stays still over a point of measurement for the duration of the measurement. So it wouldn't be useful for the application at hand, which---to a very much larger extent---depends on speed of measurement and reaction rather than on precision of readouts. But the method could find application in e.g. a color scanner based on LEGO.
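The channel comparison mentioned above could be as simple as the following sketch; the dominance factor of 1.5 is an assumption, as are the example readouts.

```java
// Sketch of the filter-disc idea: given three greyscale readouts taken behind
// blue, red and green transparent layers, call the point blue when the blue
// channel clearly dominates the mean of the other two.
public class FilterDiscBlue {
    public static boolean isBlue(int behindBlue, int behindRed, int behindGreen) {
        double othersMean = (behindRed + behindGreen) / 2.0;
        return behindBlue > othersMean * 1.5; // dominance factor is an assumption
    }
}
```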

    4. Conclusion

    Distinguishing blue is easier said than done. It simply requires that one averages a bunch of readouts, and that one hopes that the blue phenomenon one is trying to measure will hold for that bunch. If that is not an option, other methods must be used. Digital cameras aren't that expensive to mount, either.

    We've had a lot of trouble with programming the robot, which we haven't been able to parallelize. Also, we've stayed the course with a case-wise-controlled bang-bang control program, but we've made preparations for converting it into another style of control program. We've however been hindered by latencies in reaction to motor commands, and doing PID control with a mechanical/electrical/firmware setup that is doomed to overshoot every time---let's just say it didn't catch our interest.

    Saturday, 4 October 2008

    NXT Programming, Lesson 5

    Date: 03 10 2008
    Duration of activity: 3 hours
    Group members participating: all group members

    1. The Goal

    The goal of this lesson is to build a line follower again, and this time, make the robot recognize three different colors: White, black and blue. The idea is that the robot has to follow a black line until it reaches a blue rectangular place (the finish area) where it has to stop.

    2. The Plan

    • Build a robot that is able to follow a line and mount it with a light sensor

    • Take the readings of the light sensor while detecting white, black and blue

    • Make the robot follow the line

    • Try calibrating thresholds before the run

    • Make the robot stop at the finish square

    3. The Results

    • Firstly, Thomas was rebuilt to be a line-follower robot. Our group decided not to follow the approach of a three-wheeled robot; instead, we used a sliding approach borrowed from an existing design. Read more in 3.1. The construction of the robot

    • Secondly, as the robot was ready, we wanted to see the exact readings we were dealing with. It was necessary to distinguish precisely between white, black and blue. Read more in 3.2. The readings of the sensor

    • Thirdly, the hardest part of our work: Code writing. To make the code work reasonably, we had to ask ourselves many questions: Is it important to calibrate or is it enough to hardwire the thresholds? How should we make the comparison with the thresholds and how precise intervals should we use? And many other such questions. Read more in 3.3. Code writing.

    3.1. The construction of the robot

    As already mentioned, the robot we constructed is based on sliding on its back skids instead of having a third wheel. This approach seemed more reasonable: When using a third wheel, at each turn some power has to be given away to the third wheel to change its heading, which also has an impact on the timeliness of corrective actions.

    The idea is to place as much weight as possible on top of the drive wheels, and just enough weight over the sliding part to keep the robot stable and avoid tipping over. If there is too much over the sliding part, the robot would struggle to turn, and if not enough, the robot would be too jumpy and might fall on its back.

    Having all that in mind, here is new sliding Thomas:

    As one might notice, our Thomas has bigger wheels than the one from the example referenced above. There is a big reason for that: We have made some experiments with this robot. On the given track in the Zuse building, the version of this robot with smaller and thicker wheels takes 1 minute to go around the track. With big wheels (the ones in the picture) it takes 13 seconds less (47 seconds). That is a big improvement. In conjunction with this experiment, we tried to push the center of mass a bit away from the sliding part, but that didn't give any improvement and made the robot a bit jumpy.

    For future improvements, Thomas was mounted with two more light sensors. These two were mounted on each side. The purpose is for the robot to be able to "see" a broader spectrum of the environment:

    We haven't made any use of these as of yet, but a seemingly effective approach is to use the sensor which last saw the line as an indication of which way to go to get back on track.

    3.2. The readings of the sensor

    Here is one of the snapshots of the light sensor readings. The values correspond to white, blue and black. As can be seen, the blue and black color ranges are actually relatively close to each other.

    3.3. Code writing

    The code for this instance took its start from the suggested code from the lesson. That code is limited by its inability to determine whether it's over the blue goal area (it uses a BlackWhiteSensor), and by its lack of precision -- it uses the readValue() method on LightSensor, rather than its readNormalizedValue() method, which emits data with the whole 10-bit spectrum as range.

    For our work, LineFollowerCal was forked into LineFollower and BlackWhiteSensor was forked into ColorSensor.

    In ColorSensor, the value read is compared with three values: A lower bound for the predicate white, an upper bound for the predicate black, and a (midpoint, radius) pair for blue. This latter requirement stems from the fact that the empirically established values for blue lie well away from the mean of black and white (confer with the plot). Blue thus needs to be established in another way than the naïve if-it's-neither-black-nor-white-it-must-be-blue approach.

    This also means that our ColorSensor does not satisfy the invariant that at least one of black, white or blue always holds.
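A minimal sketch of these predicates follows; the threshold numbers are illustrative placeholders, not our calibrated values. Note how a mid-grey value between blue and white satisfies none of the three predicates, which is exactly the broken invariant mentioned above.

```java
// Sketch of ColorSensor's thresholds: white is a lower bound, black an upper
// bound, and blue a (midpoint, radius) band. All numbers are illustrative.
public class ColorSensorSketch {
    static final int WHITE_LOWER = 550;
    static final int BLACK_UPPER = 250;
    static final int BLUE_MIDPOINT = 430, BLUE_RADIUS = 30;

    public static boolean white(int v) { return v > WHITE_LOWER; }
    public static boolean black(int v) { return v < BLACK_UPPER; }
    public static boolean blue(int v) {
        return Math.abs(v - BLUE_MIDPOINT) <= BLUE_RADIUS;
    }
}
```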

    A lot of work from thereon concentrated on getting the code to work, in one way or another. Foremost, there was a (very) annoying issue with a while(somecond);{} typo: the stray semicolon makes the loop body empty.

    Another issue -- which still stands unresolved -- is the speed at which motor speeds propagate. It is as if we make multiple corrective measures (that is, multiple runs of the main while loop), and only one of them will have effect on the motor speeds. Or the actual PWM values to the motors are only set in a deferred manner.

    3.3.1 Power-saving blinking floodlight

    In an attempt to be clever, the floodlight was turned on immediately before use, the sensor read, and the floodlight turned off again. This is a basic method of conserving battery power, and it's a method that had worked for the programmer before. But -- and this is the insight -- the lejos firmware is rumored not to do what it's supposed to do: It polls the sensor ports every 3 milliseconds (for some value of 3), and non-blockingly returns the most recently read value when a measurement is requested. This, of course, is in conflict with our blinking the floodlight.

    3.3.2 Accuracy-enhancing blinking floodlight

    Another perhaps worthwhile idea is that of blinking the floodlight in a clever way:

    1. Turn off the floodlight

    2. Read value v1

    3. Turn on the floodlight

    4. Read value v2

    5. Turn off the floodlight

    6. Return (v2-v1)

    This expresses the way that a recent value (v1) for the ambient lighting is subtracted from the ambient-and-reflective read value (v2) in an effort to isolate the reflection component.

    This approach is not possible with the current platform (without excessive waiting, or hacking of lejos -- since there's this every-3-milliseconds polling). But we didn't explore this, because we decided that the environment the robot would race in was very static and controllable. This is despite the fact that LEDs (which the floodlight is based on) are almost immediate in their switching.
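Even though the scheme isn't practical on the NXT, the arithmetic is easy to simulate. The sensor model below is entirely made up (a readout is just ambient light plus a reflection component when the floodlight is on), but it shows how the subtraction cancels the ambient component regardless of its level.

```java
// Simulation of the blink-and-subtract scheme: v1 is read with the floodlight
// off (ambient only), v2 with it on (ambient + reflection); v2 - v1 isolates
// the reflection component.
public class DifferentialRead {
    final int ambient, reflection; // hypothetical light components

    DifferentialRead(int ambient, int reflection) {
        this.ambient = ambient;
        this.reflection = reflection;
    }

    int read(boolean floodlightOn) {
        return ambient + (floodlightOn ? reflection : 0);
    }

    /** Steps 1-6 from the list above. */
    public int differentialRead() {
        int v1 = read(false); // floodlight off
        int v2 = read(true);  // floodlight on
        return v2 - v1;       // reflection only, independent of ambient light
    }
}
```

The same surface yields the same differential value in a dark room and under bright windows, which is precisely the robustness the scheme is after.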

    4. Conclusion

    We only did a minimal implementation of the exercise (and we've given up hopes of medals). Our best time yet is 43 seconds, with a bang-bang control program.

    The code writing again proved the most difficult part to surmount. It's still annoyingly sequential work -- only one group member can program the device, and the others can only watch. If one changes the physical characteristics of the robot, the programming (or at least the tuned parameters) goes down the drain.