Friday 31 October 2008

NXT Programming, Lesson 7



Date: 31 10 2008
Duration of activity: three hours
Group members participating: all group members


1. The Goal

The goal of this lesson is to build and program Braitenberg vehicles.

2. The Plan

  • Build and program Braitenberg vehicles

  • Replace vehicle 2b's stimulating connections with inhibiting connections

  • Discuss what would happen if multiple robots, all reacting to light and with their own light source on top, would be put together in the same environment

  • Discuss how using multiple threads to measure multiple sensor inputs would influence the robot's behavior

  • Rewrite the code of the vehicle to be self-calibrating with respect to the environment, by making each calibration last only N samples for some N


3. The Results

Our goals for this lab session are based on the assignments on the course homepage for lesson 7, but we changed some build/program assignments into discussion assignments.

  • Rebuilding our robot (named Thomas) to become a Braitenberg vehicle was straightforward. We took the design from our previous build and made the front be the back, and the back be the front. We replaced the light sensor on the back, which was pointing toward the ground, with one (and later on, two) on the front, pointing straight forward.


  • As suggested at the beginning of the lab description, we tried out all three kinds of vehicles: 1, 2a, and 2b, as they appear in the following picture:

    More about the exact results of our robot's construction can be read in 3.1. Constructions of the robot.

  • There were some aspects of the lab work that we decided not to implement. After constructing the different variants of the robot, we made some experiments and observed the behavior of the robot. That led to some reasonable discussions. These discussions are mentioned in 3.2. Discussions.

  • The last thing that our group decided to try out with the robot was to make it adaptive. We focused on finding the average of the light level in order to determine whether (and, if so, how) to respond to the measured light level. We used ideas borrowed from 'Notes on construction of Braitenberg's Vehicles', by Tom Dean. Read more about this in 3.3. Adaptive robot behavior.



    3.1. Constructions of the robot


    1. To construct vehicle 1, we used two light sensors mounted on the robot and facing straight forward; their readings are then averaged in software, so as to appear as one single sensor. The idea is that the robot moves straight towards the light source when the light level goes over a predetermined threshold. In this case, both motors react the same, as there is only one sensor input.


    2. To construct vehicle 2a, we used the same two light sensors mounted on the robot, facing straight forward. The idea is that the robot tries to avoid the light in some sense: When the right sensor gets enough light, the program causes the right motor to turn on (symmetrically for the left). When doing experiments with this kind of robot behavior, the robot seems to move away from the light source.


    3. To construct vehicle 2b, we again used the two light sensors mounted on the robot. But now it is the opposite case: The robot follows the light. When the right sensor gets enough light, the control program causes the left motor to turn on (and symmetrically). When doing experiments with this kind of robot behavior, the robot seems to go directly to the source of light---if it starts out seeing strong enough light, that is, because there's no light-finding part of the algorithm.
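    The three wirings can be sketched as pure functions from normalized sensor readings to motor powers. This is a minimal sketch in plain Java, not our actual NXT program: the class and method names are our own illustration, and the direct reading-to-power mapping is an assumption.

```java
public class BraitenbergWiring {
    // Vehicle 1: both readings are averaged in software to act as one
    // sensor; both motors get the same power once a threshold is passed.
    static int[] vehicle1(int left, int right, int threshold) {
        int avg = (left + right) / 2;
        int power = (avg > threshold) ? avg : 0;
        return new int[] { power, power };           // {leftMotor, rightMotor}
    }

    // Vehicle 2a: each sensor stimulates the motor on the SAME side, so
    // the stronger-lit side drives faster and the robot turns away.
    static int[] vehicle2a(int left, int right) {
        return new int[] { left, right };
    }

    // Vehicle 2b: the connections are crossed, so the motor OPPOSITE the
    // stronger-lit sensor drives faster and the robot turns towards the light.
    static int[] vehicle2b(int left, int right) {
        return new int[] { right, left };
    }

    // Inhibited variant of 2b (cf. the plan): more light means LESS power,
    // so the robot slows down as it closes in on the source.
    static int[] vehicle2bInhibited(int left, int right, int maxPower) {
        return new int[] { maxPower - right, maxPower - left };
    }
}
```

    With the light strongest on the left, vehicle2a(80, 20) yields {80, 20} (turn right, away), while vehicle2b(80, 20) yields {20, 80} (turn left, towards the light).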



    The majority of our experiments were done with a LED (light-emitting diode), which gives a concentrated cone of light, which is relatively directional and narrow. When one of the robot's light sensors gets the amount of light that results from being inside the LED cone, it reacts pretty quickly and reveals the actual behavior of the robot.

    The problem with this is that the LED is too directional to be practical. It is basically only able to hit a single light sensor, when used within the range where the emitted light is still bright enough.

    Unfortunately, we weren't able to test with a less directional, high-amplitude light source (like a table lamp). We couldn't find anything usable anywhere around. Nevertheless, we could establish some thresholds that made the robot react to the LED cone from further away.

    Below is a plot of the light sensor values read when aiming the light sensor at a stationary light and the inside of a suit, respectively:



    Despite everything, we succeeded in a very promising experiment: The robot was placed in a dark room (without windows or other light sources) facing a closed door, and then the door was opened to a very bright room. With the appropriate source code uploaded, the robot came 'alive' and gladly ran towards the door and straight through it.

    In this case, the robot was already facing the door. Generally, the next step would be to make the robot look for light (by turning left or right). We didn't implement this feature for lack of time.

    3.2. Discussions
    • Sound sensors.

    • For vehicle 1 with one sensor, the behavior of the robot would be similar: The more light (sound) there is in the environment, the more power is applied to both motors and, therefore, the faster the robot goes.

      When we have two sensors on the robot, it is a lot easier to work with two light sensors than with two sound sensors. If we were to measure the sound from the surrounding environment, both sound sensors would pick up similar values. Providing directional sound levels that differ enough to reveal the robot's behavioral patterns is therefore not easy. Of course, if we placed the sensors on directly opposite sides, we might have achieved the desired results.

      All in all, it seemed less troublesome to stick with the light sensors in our experiments.

    • Inhibited connections.

    • Having inhibited connections instead of stimulating ones will make the robot apply less power to the motors when measuring more light. As a result, the robot will accelerate quickly towards a light-source and then slow down as it gets closer, until it finally stops close to the light-source.

    • About the lamp on top of the robot and multiple other robots around.


    • A deterministic robot, reacting in a predefined way, will (or should, at least) always act the same in a static environment. It becomes much more interesting when the environment is dynamic, because then it isn't possible to predict the behavior of the robot.

      We can argue how the robot will act on a theoretical level, because we are in control of the robot's reactions to input, but we cannot predict the course the robot will drive when let loose over a period of time. (Albeit, from a strictly theoretical point of view, the whole system has a state and this state will deterministically result in a predeterminable outcome. But that's esoteric.)

      When trying to describe what we think a robot will do in a dynamic environment, we have a tendency to describe it like we would a human having the same goal as the robot.

      Therefore, given multiple robots of type 2b, each with a light bulb on top, put in the same environment, what would they do? They would gather in groups, minimally in pairs. Once every robot is in a group, each would drive a little back and forth, constantly correcting the distance to the others.

      This kind of behavior can most certainly be found in nature. Just think of a shoal of fish or a swarm of mosquitoes. Although this might not be 'intelligent' behavior, it is the behavior of something living.

      When putting vehicles of type 2a together, the robots will back away from one another, always driving in the direction that leads furthest away from the rest of the group. In the end, every robot will have situated itself in isolation from the others.

      All of the above implicitly assumes that the governing algorithms work as intended, and it ignores the inherent discretization of sensor values. In the real world, for instance, the 2a robots would probably not find a corner each, but they would at least show a willingness to do so.

    • Vehicles with two threads of control.

    • As long as we are only using one thread, and the control of both motors happens in that thread, both motors will always get their commands at the same time. When using one thread per sensor and/or motor, it becomes the scheduler's responsibility to ensure that every module of the vehicle gets the CPU cycles it needs to react to the environment, and the modules work disjointly from one another. This disjointness is a very nice feature to have in more complex systems, where we would like to be sure that one module cannot influence another module's reactions if they don't share sensors or actuators.
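    The one-thread-per-module idea can be illustrated in plain Java. This is a sketch with made-up names, not our NXT code; on the NXT, the suppliers would be sensor reads and the consumers motor commands.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.IntSupplier;

public class ThreadedVehicle {
    // Each controller couples one sensor to one motor and runs on its own
    // thread; the scheduler interleaves them, so neither can stall the other.
    static Thread controller(IntSupplier sensor, AtomicInteger motorPower, int iterations) {
        return new Thread(() -> {
            for (int i = 0; i < iterations; i++) {
                motorPower.set(sensor.getAsInt());  // react to the latest reading
            }
        });
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicInteger leftPower = new AtomicInteger();
        AtomicInteger rightPower = new AtomicInteger();
        Thread left  = controller(() -> 40, leftPower, 100);   // dim light, left sensor
        Thread right = controller(() -> 70, rightPower, 100);  // bright light, right sensor
        left.start(); right.start();
        left.join(); right.join();
        System.out.println(leftPower.get() + " " + rightPower.get());  // prints "40 70"
    }
}
```

    Note that the two controllers share no state, mirroring the disjointness argument above: each motor's power depends only on its own sensor.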


    3.3. Adaptive robot behavior

    Among the proposed experiments, the one investigating how to adopt dynamic bounds was the most interesting. The other candidates---making the program threaded or trying sound sensors---are mostly interesting for the discussions they lead to.

    We tried to make a ``sliding window'' of values that should be interpreted as light and dark. This sliding window is parametrized by a center point, which itself slides, and a radius around it. Values are expected to lie within centerpoint ± radius, and the LightSensor class's own normalization is used with this information.

    The centerpoint is a running average of the values collected. For this calculation, we refer once again to Tom Dean's chapter 5. The value is calculated and used roughly as:

    public static final double BETA = 0.1;

    /* ... */

    /* Use the *last* value of the average: */

    ls_left.setLow(average-radius);
    ls_right.setLow(average-radius);
    ls_left.setHigh(average+radius);
    ls_right.setHigh(average+radius);

    average += BETA * ((read_left_norm+read_right_norm)/2 - average);


    BETA times the difference between the currently read value (note the inline averaging) and the previous value of average is thereby added to the average, in accordance with Dean's approach.

    The value of BETA is the tricky part---the rest is only inserting the formula---because its value specifies how strongly old values count in the recalculation. If BETA is large, the running average reflects only the very most recent values; for smaller values, more of the past is considered in the present---so to speak.

    The issue is rounding. The value ``average'' is an integral type, and for each step in the calculation, the intermediate result is discretized to become an integer. This makes very small values of BETA infeasible, since the running average cannot reach the expected average value. (E.g., even after a thousand consecutive readouts of values of circa 350, the running average would stick at 180-something.)
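    The stall is easy to reproduce off the brick. The following plain-Java sketch (with illustrative numbers, not our measured values) shows the effect: the compound assignment truncates the intermediate double back to int, so any correction term below 1 becomes 0.

```java
public class RoundingStall {
    // Integral running average: average += beta * (reading - average)
    // compiles to average = (int)(average + beta * (reading - average)),
    // so increments smaller than 1 are truncated to 0 and the average stalls.
    static int run(double beta, int average, int reading, int steps) {
        for (int i = 0; i < steps; i++) {
            average += beta * (reading - average);
        }
        return average;
    }

    public static void main(String[] args) {
        // With beta = 0.01 the average stalls as soon as the correction
        // term drops below 1, i.e. once reading - average < 100.
        System.out.println(run(0.01, 0, 350, 1000));    // prints 251, not 350
        // With beta = 0.001 and a start of 180, the very first correction
        // (0.001 * 170 = 0.17) already truncates to 0.
        System.out.println(run(0.001, 180, 350, 1000)); // prints 180
    }
}
```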

    Add to this concern (which, it should be noted, can be patched up algorithmically) the wish to keep a tremendous amount of past values relevant in the current value: If the robot stands in a room where the lights are switched off, it should not willingly and profusely adjust its notion of ``dark'' to fit its surroundings and start moving towards e.g. a standby LED on some device. It should, however, be able to adjust its notion of ``dark'' over the span of a day or over the span of its assignment.

    BETA = 0.1, however, was reasonable, and the robot did show signs of adapting its reaction to its sliding window: When kept blindfolded for some seconds, it would react much more to the LED cone than it would without the prior blindfolding.
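    One algorithmic patch for the rounding problem mentioned above is to keep the average as a double and only round when computing the bounds. This is a sketch with our own class name, not our actual NXT code; the rounded bounds are what would be handed to the LightSensor's setLow()/setHigh() calls.

```java
public class SlidingWindow {
    private final double beta;
    private final int radius;
    private double average;   // double, so small corrections are not lost

    SlidingWindow(double beta, int radius, int initialAverage) {
        this.beta = beta;
        this.radius = radius;
        this.average = initialAverage;
    }

    // Dean-style exponential running average of the raw readings.
    void update(int reading) {
        average += beta * (reading - average);
    }

    // Bounds for the sensor normalization: centerpoint ± radius.
    int low()  { return (int) Math.round(average) - radius; }
    int high() { return (int) Math.round(average) + radius; }
}
```

    After a thousand readings of circa 350, this window is centered at 350, where an integral average starting from 180 would have stalled as described above.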

    4. Conclusion

    This week we have tried to create somewhat complex (and random) behavior with very simple connections and control programs. We have built and tested Braitenberg vehicles, which led to some interesting discussions about what these simple connections could amount to in the robot's ability to react to the environment, and whether or not intelligence is needed to produce life-like behavior. Although we didn't build and test all the robots in the lesson notes, we did discuss what their results would have been.

    We discussed the capabilities of a robot that, in the dark, would turn around to find a light-source and then drive toward it. This would be a very simple system to build and program, but it would display some complex movement, because its reactions are based on a dynamic environment. Unfortunately, there wasn't time to implement the system.
