Saturday 29 November 2008

End course project, Lesson 11



Date: 28 11 2008


Initial Description of End Course Project

Three discussed proposals:

1. Sound-signature differentiating robot arm

2. Multi-microphone boomerang-copycat

3. X-Y-coordinator




1. Sound-signature differentiating robot arm

1.1. Description

This proposal is about a microphone-equipped robotic arm that distinguishes sound signatures. It is a mechanically challenging project, where the main challenge is to make the robotic arm position itself correctly.

1.2. Description of platforms and architectures

The robot would obviously be built with NXTs and the NXT motors. But given that this proposal is about an articulated arm, it will probably not be possible to make the setup with only three NXT motors, and thus two NXTs are needed.

The robot would have a shoulder joint (which has two degrees of freedom), an elbow joint (one degree of freedom) and a wrist joint, which could be a two-degree-of-freedom joint, but could also be a simpler one-degree-of-freedom joint. The finger joint would be a simple squeezing joint. This makes for a five-degree-of-freedom articulated arm, which necessitates a two-NXT approach.

The robotic arm should be able to position itself rather precisely, given a 3D-coordinate tuple. This means that we would have to do the maths to compute the arm's joint angles from the coordinates (inverse kinematics).
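
To make that concrete, below is a minimal sketch of the kind of maths involved, assuming a simplified planar case with only the shoulder and elbow joints; the link lengths and all names are hypothetical, and it assumes lejos's Math class provides acos and atan2. It is an illustration, not a design decision.

// Minimal sketch: inverse kinematics for a planar two-link arm (shoulder and
// elbow only). L1 and L2 are assumed link lengths in cm; (x, y) is a target
// point in the arm's plane.
public class ArmKinematics
{
    static final double L1 = 12.0; // assumed upper-arm length
    static final double L2 = 10.0; // assumed forearm length

    // Returns {shoulderAngle, elbowAngle} in radians, or null if unreachable.
    public static double[] anglesFor(double x, double y)
    {
        double d2 = x * x + y * y;
        double cosElbow = (d2 - L1 * L1 - L2 * L2) / (2 * L1 * L2);
        if (cosElbow < -1 || cosElbow > 1)
            return null; // the target is out of reach
        double elbow = Math.acos(cosElbow);
        double shoulder = Math.atan2(y, x)
                        - Math.atan2(L2 * Math.sin(elbow), L1 + L2 * Math.cos(elbow));
        return new double[] { shoulder, elbow };
    }
}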

The robot's software would be a closed-loop control program with a layered architecture on top of it. The lower layers would generally handle the more hardware-related matters, whereas the upper layers would handle the more goal-oriented matters. What this concretely means will be left for discussion.

This robot, like the X-Y-coordinator proposal, will need some sort of computer input (and perhaps output), which would basically use the same methods as in the X-Y-coordinator case.

Overall, the robot adopts a reactive control model [2]: the idea is not to analyze the data coming from the sensors, but to immediately pursue the goal if it hasn't been reached yet. It is hard to predict in advance, though, whether that will be enough. To make things work better, a hybrid approach [2] might be the solution: one part of the robot would try to figure out sound signatures (or colors), and another part would decide whether or not to pick something up.

1.3. Problems to be solved

As mentioned, the biggest challenge would be to make the robotic arm position itself correctly. In addition, the project has to distinguish sounds. This might only amount to telling 1Hz bleeping from 2Hz bleeping, but that, too, will have its difficulties.
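
As a first idea of how simple the 1Hz-versus-2Hz case could be, here is a minimal sketch that times threshold crossings using lejos's SoundSensor. The sensor port, the loudness threshold and the polling interval are assumptions, not tested values.

import lejos.nxt.SensorPort;
import lejos.nxt.SoundSensor;

// Sketch: estimate a beep frequency by counting rising edges above a loudness
// threshold. Port, threshold and polling interval are assumptions.
public class BeepClassifier
{
    public static double estimateFrequency(int beepsToCount)
    {
        SoundSensor sound = new SoundSensor(SensorPort.S2);
        final int threshold = 60;        // assumed: reading that counts as "a beep"
        boolean inBeep = false;
        int counted = 0;
        long start = System.currentTimeMillis();
        while (counted < beepsToCount)
        {
            boolean loud = sound.readValue() > threshold;
            if (loud && !inBeep)
                counted++;               // rising edge: a new beep started
            inBeep = loud;
            try { Thread.sleep(5); } catch (InterruptedException e) {}
        }
        double seconds = (System.currentTimeMillis() - start) / 1000.0;
        return counted / seconds;        // roughly 1 for 1Hz bleeping, 2 for 2Hz
    }
}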

1.4. Expectations at the end

This project would end in a robotic arm that is able to pick up specific objects. The difference between the objects would be either their sound signature or their color.

Variations of this project include making precise movements in four or five degrees of freedom, or mounting a light sensor and having the robot pick up only the correctly-colored ball.


2. A multi-microphone Boomerang copy-cat.

2.1. Description

This project proposes a multi-microphone Boomerang [1] copy-cat, where the aim is to determine the origin of a sound, and point an arm to signify the direction[1].

2.2. Description of platforms and architectures

The hardware for this project would be a multi-microphone setup mounted on a static fixture, and a simple arm beside it.

The software for the robot, which would work in isolation (i.e., there is no PC in this setup), would be the challenging part of this proposal. It might very well be the case that it is impossible to get the degree of precision needed for this project from the NXT and its internal workings (that is, the firmware implementation).

This could be implemented by relying on reactive control [2] only. The idea is to find the direction of the sound and point at it; all the robot has to do is react to what the sensors are saying. On the other hand, if one thinks of 'pointing towards the sound source' as a goal state in which the robot has to stay over time, then it could, in some sense, be called feedback control [2].

Alternatively, this could be implemented by having the robot do a whole-world calculation and, from its model, determine the action to be taken. This model would probably contain a notion of the origin coordinates, relative to some set of axes.

2.3. Problems to be solved

The challenge is to create a precise timing setup for the NXT and to compute the direction of sound on a machine with limited processing power. Variations include making an arm point in the direction of the origin of the sound, and trying to do the trick with very few microphones (think middleware-for-a-vacuuming-robot).
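
To illustrate the kind of computation involved, here is a sketch of the direction math only, assuming the time difference of arrival between two microphones has already been measured somehow; obtaining that measurement with the NXT's timing facilities is exactly the open problem. The microphone spacing is an assumed value, and it assumes lejos's Math class provides asin.

// Sketch of the direction-of-arrival math for two microphones, assuming the
// time difference of arrival (TDOA) is already known. MIC_DISTANCE is assumed.
public class SoundDirection
{
    static final double SPEED_OF_SOUND = 343.0; // m/s at room temperature
    static final double MIC_DISTANCE = 0.20;    // assumed: 20 cm between microphones

    // Returns the bearing of the source relative to broadside, in radians,
    // using the far-field approximation.
    public static double bearing(double tdoaSeconds)
    {
        double ratio = (tdoaSeconds * SPEED_OF_SOUND) / MIC_DISTANCE;
        if (ratio > 1) ratio = 1;   // clamp measurement noise
        if (ratio < -1) ratio = -1;
        return Math.asin(ratio);
    }
}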

2.4. Expectations at the end

This project would result in the described hardware, and it would be able to find the origin of a sound rather precisely. If so, it would point towards the source.

If it turns out that the lejos firmware is inadequate for this purpose, it might be possible to write a forked firmware that can measure input with sufficient timing precision.

But this is the success case. It might not be possible to implement this proposal, in which case the project will result in a rather uninteresting presentation at the show-and-tell.


3. An X-Y-coordinator (the one we have chosen as an end course project)

3.1. Description

This project calls for a rather large physical construction that is able to move a head around on a piece of paper or a similar surface. The robot is a special case of a Cartesian robot: it has two major axes and a tertiary up-and-down motion, which calls for using at the very least three NXT motors. If we want to do (even) more interesting things, we may need a fourth motor, and at that point we will need another NXT.

3.2. Description of platforms and architectures

The head may be an ``output'' head---a pen, for example---or an input one: A light sensor or a probe (see below).

The name of the game in this robot is precision. The robot has all the physical advantages, so it's the software that's to blame if the robot winds up being inaccurate.

The software for controlling the robot would be layered, as was the case for the articulated robot arm. In this case, the concretization is easier: The lower layer takes commands from upper layers, and the separation between the layers is whether the layer controls the motors or controls the motor-controllers.
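
As a sketch of what that layer boundary could look like, the lower layer could expose a small command interface that the upper, goal-oriented layers program against. All names below are ours and purely illustrative.

// Hypothetical layer boundary: upper layers plan in coordinates, the lower
// layer turns primitive commands into motor control.
interface MotionLayer
{
    void lineTo(double x, double y);                           // straight segment
    void arcTo(double viaX, double viaY, double x, double y);  // arc through a via point
    void headDown();                                           // lower the head (e.g. a pen)
    void headUp();
}

// An upper layer only talks to the interface, never to the motors:
class FigureDrawer
{
    private final MotionLayer motion;

    FigureDrawer(MotionLayer motion) { this.motion = motion; }

    void drawSquare(double size)
    {
        motion.headDown();
        motion.lineTo(size, 0);
        motion.lineTo(size, size);
        motion.lineTo(0, size);
        motion.lineTo(0, 0);
        motion.headUp();
    }
}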

When speaking of control, some aspects are open for discussion. We are dealing with an X-Y coordinate system here, so it is up for debate exactly how applicable the idea of map building suggested by Mataric [4] is. The idea is for the robot (or for a part of the robot) to navigate itself to the goal. Some aspects of 'managing the course of the robot' [3] apply: in our case, knowing where the head is at any moment in time (localization) is very important, as is knowing where to go (mission planning) and how to get there (path-finding).

But is it important to remember where the robot has been? In connection with this, one must not forget that the robot gets precise vectors as an input.

It's analogous to having a predetermined map and following it blindly, without actually making sure the map is correct. If the robot is absolutely precise, that might be enough, but generally it is not.

Nevertheless, this kind of approach cannot be ruled out beforehand. It still might come in handy.

Another approach to discuss for this project is ``sequential control'' [2]. In some interpretation, giving vectors as input is the same as giving a series of steps that the robot must run through in order to accomplish a task. If the robot has to draw a complicated figure, for example, it may do it vector by vector, or---after analyzing the input---it might choose a more clever sequence for doing the work. This bears some resemblance to a deliberative control model [2], where the robot has to take into account all relevant inputs and, after thinking, make a plan and then follow it.


3.3. Problems to be solved

The main challenge is to calculate the trajectories -- the gradients -- of a few primitive commands, such as ARC(a_x,a_y,b_x,b_y,c_x,c_y) or LINE(a_x,a_y,b_x,b_y), and to perform closed-loop feedback to govern the motors, so that a shape isn't misdrawn even in case of unexpected resistance.
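
A minimal sketch of what a LINE governor could look like is given below: the segment is split into small steps, and each axis is nudged towards its interpolated target with a proportional correction computed from the tacho counts. The motor ports, the degrees-per-centimetre factor and the gain are assumptions, not measured values.

import lejos.nxt.Motor;

// Sketch of LINE(a_x,a_y,b_x,b_y): interpolate along the segment and correct
// each axis proportionally from its tacho count. Ports, DEG_PER_CM and KP are
// assumptions.
public class LineGovernor
{
    static final double DEG_PER_CM = 36.0; // assumed gearing: motor degrees per cm
    static final int KP = 2;               // proportional gain

    public static void line(double ax, double ay, double bx, double by)
    {
        int steps = 100;
        for (int i = 1; i <= steps; i++)
        {
            double t = (double) i / steps;
            // Target head position (in motor degrees), interpolated along the segment.
            int targetX = (int) Math.round((ax + t * (bx - ax)) * DEG_PER_CM);
            int targetY = (int) Math.round((ay + t * (by - ay)) * DEG_PER_CM);
            // Closed-loop correction of each axis towards its target count.
            int errX = targetX - Motor.A.getTachoCount();
            int errY = targetY - Motor.B.getTachoCount();
            Motor.A.setSpeed(Math.abs(KP * errX));
            Motor.B.setSpeed(Math.abs(KP * errY));
            if (errX >= 0) Motor.A.forward(); else Motor.A.backward();
            if (errY >= 0) Motor.B.forward(); else Motor.B.backward();
            try { Thread.sleep(20); } catch (InterruptedException e) {}
        }
        Motor.A.stop();
        Motor.B.stop();
    }
}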

3.4. Expectations at the end

This project will result in an X-Y-coordinator which is at least able to draw simple vector graphics on a piece of paper, given a description of them in an input file.

Variations of this project include mounting a (color?) scanner head or an industrial measuring probe, in order to generate a 3D model (concretely, a height map) of an object. A more esoteric variation would be to make the robot able to replicate the height map of some object that was seen in the work area.

A less esoteric variation on this proposal would be to add interactive control from a PC program.


4. Conclusion

We've chosen an end course project, and while all of the outlined ideas seem worthwhile to pursue in their own way, the chosen one is the most interesting: it has a mathematical-programming angle (the gradients), a software-architecture angle (the layered architecture), and an interesting build (which may be topped by the articulated arm, though).

Feasibility also differs between the proposals: the Boomerang proposal may fail miserably (or may turn out to be trivial), whereas the articulated arm has so many inherent difficulties that it may turn out to be impossible to construct satisfactorily.

5. Work plan

  • Build the robot

  • Implement a straight-line governor

  • Implement an arc governor

  • Implement governor-using algorithms



6. References

[1] http://en.wikipedia.org/wiki/Boomerang_(mobile_shooter_detection_system)

[2] Fred G. Martin, Robotic Explorations: A Hands-on Introduction to Engineering, Chapter 5

[3] Brian Bagnall, Maximum Lego NXT: Building Robots with Java Brains, Chapter 12

[4] Maja J Mataric, Integration of Representation Into Goal-Driven Behavior-Based Robots


Thursday 20 November 2008

NXT Programming, lesson 10



Date: 21 11 2008
Duration of activity: 3 hours
Group members participating: all group members



1. The Goal

The goal of this lab session is to investigate how an implementation of a behavior-based architecture (namely, lejos's) works.

2. The Plan

  • Build and experiment with the idea of BumperCar

  • Investigate functions of the Arbitrator

  • Investigate alternative choices for Arbitrator

  • Discuss motivation functions


3. The Results

  • The first thing that was necessary in this lab session was to prepare the robot for trying out the BumperCar program (which is included with lejos, in the samples/ directory). To that end, the robot was mounted with a touch sensor. Read more about that in 3.1. Robot construction.

  • Once the robot was prepared for touching the environment, we experimented with the code, and some discussions arose. Read more about that in 3.2. Investigations into the Arbitrator.

  • The idea of the Arbitrator that was discussed in 3.2. is just one way of doing things; there are alternative ways to implement an Arbitrator. These are discussed in 3.3. Alternative ideas for Arbitrator.

  • Following the trail of discussion, 3.4. Motivation functions discusses the idea of motivation functions[1].



3.1. Robot construction

The construction of the robot is fairly simple. This is because we are not observing any elaborate robot behavior; rather, the idea of this lab session is basically to figure out how and why the code works as it does. Our robot is now mounted with a touch sensor and looks like this:


3.2. Investigations into the Arbitrator

  • What happens when the touch sensor is pressed?

  • When the touch sensor is pressed, the robot stops. 'takeControl' replies whether the behavior wants to be run, and in this case the answer corresponds to whether the touch sensor is pressed. The Arbitrator iteratively asks the behaviors whether they want to take control.

  • What happens if takeControl of DriveForward is called when HitWall is active?

  • The robot does not drive forward. The Behaviors of the system are prioritized, and the topmost is the touch behavior. It is always asked whether it wants to take control, and it always does while the touch sensor is pressed. For this reason, 'drive forward' is never asked if it wants to take control.

  • Third behavior. Why is it that the exit behavior is not activated immediately? Why is it possible for HitWall to continue controlling the motors even after the suppress method is called?

  • With the source code given as it was, it seems at first that the robot stops immediately when 'exit' is pressed. To investigate this, the back-up time was set to 10 seconds, to ensure that HitWall would be active when 'escape' was pressed. This let us see that the Exit behavior is indeed not activated immediately.

    In the Arbitrator we have a for-loop which asks if HitWall wants to be active (by calling its takeControl predicate method), and the same is done for the Exit behavior. It should be noted that the first-activated behavior has to finish (i.e. return from action()), as there can only be one action method running at a time in this setup. We cannot do anything to stop the 'action' method (which again is a corollary of the fact that ordinary method invocations cannot be stopped in their tracks).

    HitWall continues controlling the motors even after the suppress method has been called from the Arbitrator. This is due to the fact that the action method has to finish (even if suppress is called) before anything else can happen, and this action method is rather long-running.

  • Fourth behavior. Explain.


  • We implemented PlaySound just like the other Behaviors. As intended, we put the PlaySound behavior under the HitWall behavior, priority-wise, so that HitWall has a behavior both over it and under it that can be activated by a button on the NXT unit. Sticking to the theory, one would think that the PlaySound behavior would not be able to suppress the HitWall behavior, but that the Exit behavior would. But when we experimented, both behaviors seemed to be suppressed by HitWall. To find out why HitWall seemed to be able to suppress both the Exit and PlaySound behaviors, we had to look really deep into the code.

    The behavior handler is implemented through a while(true)-loop that constantly checks whether or not a higher-ranking behavior wants access over a lower-ranking one.

    The decision about which behavior gets to run is made in some nested if-statements. The following code is the if-statement saying that if the currently running behavior ranks at least as high as behavior i, the Arbitrator waits (yields) for the running action to complete; afterwards, the current behavior is suppressed.


    if(currentBehavior >= i) // If higher level thread, wait to complete..
        while(!actionThread.done) {Thread.yield();}
    behavior[currentBehavior].suppress();


    According to these lines of code, Exit should be able to suppress HitWall, but it doesn't.

    The reason for this can be found further down in the class. The actual running of the behavior is done in another while(true)-loop, which takes whatever behavior is set as the highest and runs its .action() method.


    public void action() {
        // Back up:
        Motor.A.forward();
        Motor.C.forward();
        try {Thread.sleep(1000);} catch(Exception e) {}
        // Rotate by causing only one wheel to stop:
        Motor.A.stop();
        try {Thread.sleep(300);} catch(Exception e) {}
        Motor.C.stop();
    }


    And here is the problem: HitWall's action method is a blocking method that uses .sleep() while making the robot back up and stop. During this time, the robot does not have the ability to switch behavior, and as a consequence one would have to hold down the Exit button for several seconds for it to be picked up in the next behavior-deciding cycle.


3.3. Alternative ideas for Arbitrator

The main issue with the current approach to the Arbitrator is that method invocations cannot be undone. Threads, however, can usually be forcefully stopped (for example, via the deprecated Thread.stop() in mainline Java), but this is not the case for lejos' Thread. This causes frustration, as embedded programs could easily benefit from such a facility.

If Thread.destroy() had been defined, the best choice for implementing Behaviors would be to have each Behavior's action method be called in a separate thread. In this way, execution could be stopped in general, since one could do a two-step approach: first interrupting the thread, and then, if it doesn't willingly terminate, forcefully killing it. The interface for Behavior should, in this case, still require both an action and a suppress method, since the Arbitrator would be required to do the thread's cleanup (its suppress invocation) for the thread, in case the Arbitrator kills off the thread forcefully.

Falling short of that functionality, the second-best approach is to encourage interfacers to write interrupt-aware action methods, and still do the first step of the above solution. In this way, conformant methods will be interruptible---albeit conformance in this case means conformance to the encouragement. This is vastly inferior to having the possibility of forcefully killing the thread, since conformance can be destroyed by library calls: a library routine which itself does try { Thread.sleep(n) } catch (InterruptedException e) {} will swallow the InterruptedException that was intended for the interfacer.
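
To illustrate what such an ``encouraged'' interrupt-aware action method could look like, here is a sketch based on the HitWall action shown above; the helper method and its behavior on interrupt are our own invention, not part of the handed-out code.

import lejos.nxt.Motor;

// Sketch of an interrupt-aware action(): the long sleeps are wrapped so that
// the method returns (and stops the motors) as soon as its thread is interrupted.
public class HitWallInterruptAware
{
    public void action()
    {
        // Back up (same motor calls as in the sample):
        Motor.A.forward();
        Motor.C.forward();
        if (!sleepUnlessInterrupted(1000)) { Motor.A.stop(); Motor.C.stop(); return; }
        // Rotate by causing only one wheel to stop:
        Motor.A.stop();
        if (!sleepUnlessInterrupted(300)) { Motor.C.stop(); return; }
        Motor.C.stop();
    }

    // Returns false if the wait was cut short by an interrupt.
    private boolean sleepUnlessInterrupted(long ms)
    {
        try { Thread.sleep(ms); return true; }
        catch (InterruptedException e) { return false; }
    }
}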

A third---more esoteric---approach is to annotate the byte code inside the called action method with inserted if(Thread.interrupted()) return;-statements. This again could enforce the interruptibility of the action method, but at a very high price. Reflection is always horribly slow, and must be even more so on the NXT.

As a variation of this proposal, one could choose to emulate the underlying JVM within the JVM, and interpret the called action method, rather than placing it on the JVM stack for execution.

Of course, one could also write one's code in a language more natural for an embedded platform---one that had interruption enabled by default, perhaps. Or, failing that, one could---wearing one's firmware-programmer hat---implement such interruptibility support in one's already terribly native-disfigured API.

3.4. Motivation functions

We ended up discussing how to use motivation functions to implement behavior in our robot. Instead of having ``takeControl'' return a binary answer on whether or not to use a behavior, let it return a value expressing how motivated the robot is to use this behavior. This way, one could make a more fine-grained decision when choosing between behaviors---if nothing else, one could just implement it as if the answer were binary. When deciding which behavior to run, the one with the highest value wins. To actually decide what value to return, takeControl() would just have to be made more complex and, through measurements, determine how motivated the robot is for this behavior. In the ``HitWall'' example, we could test whether or not we are in the middle of a HitWall behavior, and therefore aren't that motivated to do further HitWall behavior. This could be done by setting a boolean ``turning'' to true after we have driven backward and are about to turn. When takeControl() is called, we can then return different values through an if( turning ) statement.
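
A minimal sketch of such a motivation-returning takeControl(), using the ``turning'' flag described above, could look like this; the concrete numbers are arbitrary illustrations, and an arbitrator that picks the highest value is assumed, not implemented here.

import lejos.nxt.SensorPort;
import lejos.nxt.TouchSensor;

// Sketch of a motivation-based takeControl(): it returns an integer motivation
// instead of a boolean; an (assumed) arbitrator would run the highest value.
public class HitWallMotivated
{
    private final TouchSensor touch = new TouchSensor(SensorPort.S1);
    private volatile boolean turning = false; // set while backing up / turning

    public int takeControl()
    {
        if (!touch.isPressed()) return 0;  // nothing to react to
        if (turning) return 30;            // already handling it: less motivated
        return 100;                        // fresh collision: highly motivated
    }
}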

4. Conclusion

We built and experimented with a behavior-based architecture, and made a robot that showed the effects of the architecture. It didn't do anything useful, and to some extent this part of the exercise is a recap of the previous behavior-based-architecture lab session.

This exercise has been about understanding the given architecture, and we've done so. We've pondered the questions in the lab assignment, and we've discussed issues that reach slightly outside the lab assignment's questions.

We've looked into possibilities for writing another implementation of the Arbitrator, but our efforts conclude that the situation is pretty much hopeless. And we've been discussing possibilities for making the architecture motivation-based, which, admittedly, also isn't strongly motivated by this assignment.


References

[1] Thiemo Krink, Motivation Networks - A Biological Model for Autonomous Agent Control.

Our code for this week

Thursday 13 November 2008

NXT Programming, Lesson 9



Date: 14 11 2008
Duration of activity: 3 hours
Group members participating: all group members



1. The Goal

The goal of this lab session is to build and program a robot which is able to navigate in the environment using TachoNavigator, and perhaps experiment a bit.

2. The Plan

  • Build a robot whose construction is suitable for a navigation task [1].

  • Try "Blightbot" [1], which moves between different positions in a Cartesian coordinate system.

  • Discuss how the robot should navigate while avoiding obstacles.

  • Try and evaluate upon the robot's navigational skills while it is using the HiTechnic compass.



3. The results

  • The first thing in this lab session was to rebuild the robot. The idea of the new build-up is for the robot to be able to efficiently turn around on a dime---that is, without moving forward in any direction. Read more in 3.1. Robot building.

  • Once the robot was built, we could try out how the robot navigates using TachoNavigator. Read more about the implementation and the test in 3.2. Using TachoNavigator.

  • The idea of driving from one coordinate to another seems like little challenge. It may be worth thinking about more complex situations---avoiding obstacles while driving from one point to another. Read our ideas in 3.3. Discussion about obstacles.

  • The last thing in the lab session was to try out navigation using a compass. Although it wasn't suggested in the lab description, we were curious whether our robot would be able to find the finish point more precisely this way. Read about this in 3.4. Using compass.


3.1. Robot building

Without much hard thinking, we built a robot using the instructions from B. Bagnall's book [1]. The robot now looks like the one pictured in that book, with just one difference: a long pole is mounted with a compass on top, which we used in the last part of the lab session.

This is our robot:


3.2. Using TachoNavigator

The source code for this phase was taken verbatim from B. Bagnall's book [1]. The idea behind the code is very simple: set up a TachoNavigator and (with goTo()) give the coordinates that the robot has to go to.

There are two very important values that the TachoNavigator takes as constructor parameters: the tire diameter and the track width. These two parameters determine the precision of the robot to a large degree. The tire diameter was no concern, since it is printed on the rubber of the LEGO tire. The track width, however, was a different story: the wheels mounted on the axle weren't too tight, and as a consequence we could easily move the wheels a bit both inward and outward. It might not seem significant, but this rattling hurts the precision. The track width, measured in its extremes, could vary from 15.7cm to 16.4cm, which is not preferable. With this in mind, we continuously added rigid bracing during the lab session, to make the construction more stable. (Note that this didn't affect the parameters of the system by much, since the strengthening only added weight---it specifically didn't alter the track width.)

Up next was the test, which we set up the way it is described in the book [1]. At first we tried to use coins as markers, but that proved too imprecise, since we only had a 30cm ruler to measure distances. To make things work a little better, we set up a coordinate system on the floor using paper tape. This gave us an opportunity to see exactly how precise the robot was.

We observed the behavior of the robot many times. The robot is pretty precise when going from (0,0) to (200,0) and then from (200,0) to (100,100), but later it seemed to get more and more imprecise. From theory we know that this is because all the little errors add up to a large one, which is called drift. In the results the book [1] suggests, the robot comes to the final goal within 30 centimeters. Our observations gave us very similar results.
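
For reference, the run we repeated looks roughly like the sketch below (as in the book [1]); the constructor parameters are the ones we also use later in this report, and the exact import path may differ between lejos versions.

import lejos.nxt.Motor;
import lejos.navigation.TachoNavigator; // package name may differ between lejos versions

// Sketch of the test run: drive the course leg by leg; errors accumulate as drift.
public class BlightbotRun
{
    public static void main(String[] args)
    {
        TachoNavigator robot = new TachoNavigator(5.6F, 14.1F, Motor.C, Motor.B, false);
        robot.goTo(200, 0);    // first long stretch
        robot.goTo(100, 100);  // then back across the coordinate system
        // further legs can be appended; the later they come, the larger the drift
    }
}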

This is an example that represents a couple of our test runs:


3.3. Discussion about obstacles

To avoid obstacles and still navigate, certain proposals could be used:

  • A behavior-based (confer the relevant lab session) approach comes to mind: two behaviors, one trying to take the robot to the current destination point, another to guide the robot away from any seen obstacles, with the latter overruling the former.

  • A simple check-for-obstacles back-off

  • Changing the Navigator code to take a callback object as a parameter, and making the Navigator call the callback in case an obstacle has been seen (as per some metric)

  • Complex algorithms, see below


Whatever one chooses, it must be integrated with the Navigator in such a way that the avoidance algorithm affects the coordinates the Navigator uses for its pathfinding.

3.3.1. Complex algorithms
Given that the coordinate subsystem is in play, it would make sense to use that for obstacle avoidance. This, however, is almost inevitably more intricate than simple ``go away'' algorithms---hence the heading.

This amounts to generating some model of the real world (presumably a two-dimensional one similar to the coordinate system the Navigator uses) and of the obstacles (which could simply be a fixed number of points in the Cartesian coordinate system that are known to be inaccessible). The routing algorithm would then have to avoid such trouble spots.

One can argue whether the Navigator is supposed to handle such situations. It could essentially be forced to follow a detour in order to get from A to B, and---at present---it does nothing but try its best to follow a straight line, so one could conclude that exactly that is its intended area of responsibility, and that obstacle avoidance is outside the responsibilities of Navigator implementations. But that's a software architecture discussion---ironically enough, targeted at a platform that supports very few mechanisms for implementing such architecture.

3.4. Using compass

As in Brian Bagnall [1], we also tried to build the Blightbot with a compass. Instead of a TachoNavigator, we used a CompassNavigator. The CompassNavigator still takes the wheel diameter and track width as parameters, which shows that the compass can only be used to determine the robot's heading, and not, e.g., how far it has driven. Using the compass instead of only the tacho counter should have been a drop-in replacement, but we encountered many unforeseen problems.

In Brian Bagnall's example the compass needs to be calibrated. This is done fairly manually with a CompassSensor object: first the calibration is started with startCalibration(), the robot is then programmed (by us) to turn around two times, and the calibration is stopped with stopCalibration(). According to the documentation, a CompassNavigator object should have a more high-level calibration method available: calibrateCompass() should calibrate automatically, but we couldn't get this method to work. When we tried to use it, the robot just stood still.
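
For completeness, the manual calibration we ended up doing looks roughly like this sketch; the sensor port, the motor speed and the duration of the two turns are assumptions, and the CompassSensor import path depends on the lejos version.

import lejos.nxt.Motor;
import lejos.nxt.SensorPort;
import lejos.nxt.addon.CompassSensor; // package name depends on the lejos version

// Sketch of the manual calibration: start it, turn slowly in place roughly two
// full rotations, then stop it. Port, speed and duration are assumptions.
public class CalibrateCompass
{
    public static void main(String[] args)
    {
        CompassSensor cps = new CompassSensor(SensorPort.S2);
        cps.startCalibration();
        Motor.B.setSpeed(100);
        Motor.C.setSpeed(100);
        Motor.B.forward();   // wheels in opposite directions: turn in place
        Motor.C.backward();
        try { Thread.sleep(20000); } catch (InterruptedException e) {} // ~two slow turns (assumed)
        Motor.B.stop();
        Motor.C.stop();
        cps.stopCalibration();
    }
}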

After we had calibrated the compass, we hoped we were good to go. Instead, our robot started to act weird: accelerating forward at times, but mostly turning around in place, in what looked to us like completely random patterns. We had made only minimal changes to our code, so it seemed unlikely that the problem was to be found there.

In the Blightbot tacho version, the code for making a navigator looks like this:
TachoNavigator robot = new TachoNavigator(5.6F, 14.1F, Motor.C, Motor.B, false);

and the code for the Brighbot-compass looked like this:

CompassPilot pilot = new CompassPilot(cps, 5.6F, 14.1F, Motor.C, Motor.B, false);
CompassNavigator robot = new CompassNavigator(pilot);


We had a hard time figuring out why the robot all of a sudden wouldn't drive the course. In the end we figured out that the parameters for TachoNavigator() and CompassNavigator()---although virtually the same---have a different understanding of which wheel is the left one and which is the right. As soon as we interchanged the wires of the motors, our robot was able to complete the course. This leads to the hypothesis that the robot stood in place trying to get its bearings, but the algorithm couldn't make sense of its own actions: it would try to correct towards its intended direction, but the readings from the compass only got further off.

After changing the boolean value ``reverse'' in the constructor-call our robot was back on track, with the wires connected as intended:

CompassPilot pilot = new CompassPilot(cps, 5.6F, 14.1F, Motor.C, Motor.B, true);
CompassNavigator robot = new CompassNavigator(pilot);


Now we were at a point where we could start to compare the performance of the tacho-counting and compass versions. Our hypothesis was that the compass version would perform better, because each turn the robot takes would not add to the drift. But our results turned out not to be so straightforward.

We now had the problem that our robot couldn't even follow a straight line. On the first long stretch from (0,0) to (200,0) the robot drifted to the left, but then corrected along the way, before ending up at the desired point. Then, at (200,0), where the robot is supposed to turn to the right, it turned to the left. The robot ended up completing an inverted version of the course. For some unknown reason, the compass sensor had to be mounted upside down on the robot for it not to invert its directions---but it seems likely that this has to do with the fact that our robot's setup had already had its problems with inversion before.

In the end we got the compass sensor up and running, and we were able to produce a better result with the CompassNavigator class. But it was not a straightforward conversion.
As a last remark on the compass sensor: it is also very sensitive to magnetic fields. Its manual says the sensor has to be mounted at least 15cm away from the motors; it turned out to be more like 30cm. We instantly got a much better result after extending the antenna of our robot further away from the motors (from around 10-15cm to the position in the images -- about 30cm away). But the motors aren't the only thing emitting a magnetic field: all around us in the Zuse building are possible sources of interference, such as power outlets, laptops and other electronics. At one point we let the robot drive by a power outlet, and when it came close enough, it started to turn, as if confused.

All in all, the compass sensor seems to have a lot of potential, but one needs to keep the robot's environment in mind. An environment can easily be too contaminated by other magnetic fields for the sensor to be reliable.
4. Conclusion

This was a productive lab session. We experimented with the precision of tacho-counting alone, and found it to be poor. We also played around with a compass-based approach and found it even more so (because of the flux (pun intended) of the Zuse building's environment).

A compass is a brittle instrument, and it is easily led off course by even minute magnetic interference. This is especially true when driving in a wire-full environment. Also, the robot's own magnetic sources interfere, and measures (confer the large mast on the robot) must be taken to minimize those.

We didn't get around to implementing any obstacle-avoidance algorithms, but we did discuss them---and the possibilities for implementing them.

References

[1] Brian Bagnall, Maximum Lego NXT: Building Robots with Java Brains, Chapter 12, Localization, pp. 285-302

The code we wrote/altered for this lab session


Thursday 6 November 2008

NXT Programming, Lesson 8



Date: 07 11 2008
Duration of activity: 3 hours
Group members participating: all group members



1. The Goal

The goal of this lab session is to monitor different behaviors of the robot when these behaviors are handled by different threads.

2. The Plan

3. The Results

  • As was mentioned, the idea of this lab session is to monitor different behaviors of one robot, where every behavior pattern is implemented by a different thread. To actually see what is going on with the robot's behavior, different kinds of sensors come in handy. Read more about the construction in 3.1. The construction of the robot.

  • The first thing to do is to try the code that was given for lesson 8. The idea is in the SoundCar.java program and the remaining classes needed to run that program. See 3.2. Different behaviors of the robot.

  • The most important aspect of this lab session's work is to actually see how the robot acts when it is using threads. Some threads perform more significant behavior than others; nevertheless, all threads together produce interesting results. The robot even seems to exhibit ``smart'' behavior. Read more about this in 3.3. Different thread activenesses.

  • While analyzing these interesting behavior aspects, we found that the ideas in the source code aren't always straightforward. This prompted some discussions. Read about them in 3.4. Discussions.

  • In the initial code we have three threads of behavior: random driving, avoiding obstacles, and playing a beeping sound. The last part of the lab session is to add one more behavior in a new thread---going towards the light. This is based on ideas from Tom Dean's notes. Read more about this implementation in 3.5. New thread for light following.


3.1. The construction of the robot

This lab work is basically about playing with the code, more than figuring out how the NXT works; that was done in many previous lab sessions, so the construction of the robot is not significant here. For this particular lab session we have a robot with two motors, mounted with two thick, not-so-tall wheels. The robot has to be able to go forward, go backward, turn right, and turn left. The robot is also mounted with three sensors: an ultrasonic sensor and two light sensors.


And the robot now looks like this from the side:


And the robot looks like this from the front:


3.2. Different behaviors of the robot

When inspecting the robot's behavior, it seemed to be driving around without any meaningful purpose: sometimes forward, sometimes to the left or to the right. This was done at different speeds and with significant pauses. Also, whenever an obstacle was spotted close enough, the robot ran the motors at full speed backwards for a moment. Lastly, the robot was beeping an annoying sound at a constant time interval.

One more thing to note was the LCD output. When driving randomly, "f" was written whenever the car was driving forward, to the left or to the right, and "s" was written whenever the car was in stopping mode. Nothing was written whenever the car was not affected by the random driving. When the car was handled by the obstacle-avoidance thread, besides "f" and "s" there was a "b" for the cases when the car was driving full speed backwards; what is more, the distance value was printed as well. Lastly, an "s" was printed with respect to playing sound, right before the beeping sound could be heard.

3.3. Different thread activenesses
In this model a layer can suppress every layer beneath it. It is important to notice that higher layers do not get more CPU time than lower layers; they simply suppress whatever the underlying layer might want to do (although this might give the higher level more CPU time, because the underlying layers have nothing to do). So if a higher layer is always active, the lower layer will never come into play. An observation is that the lowest level should probably be something the robot can do when there is nothing else (nothing smarter) to do, for instance random walking.

When only the lowest level is activated, the robot only drives randomly around, bumping into whatever might get in its way.

It gets much more interesting when the second layer is activated. The robot then suppresses the urge to drive randomly around every time it is about to hit something. This way of reacting is much like the human nervous system: we can let our hands run freely across a surface, but as soon as we feel something burning hot or sharp, we retract our arms to get away from the source of pain. This is the exact same behavior our robot started to show as soon as the avoid layer was activated.

3.4. Discussions
  • Daemon threads.
    The term ``daemon'' was first used at MIT, in some of the earliest work on multiuser operating systems, and is something UNIX would later inherit. A daemon process is a process which, immediately after its creation, is disowned by its parent and made a child of the init process (pid 1). This way, the initial parent process can terminate without also terminating the daemon process. As an effect, a daemon process is often used to do maintenance work.

    In Java, these semantics are somewhat inverted: A daemon thread is one that the VM is not kept running for the sake of (and thereby, its existence becomes relatively volatile). Rather, if a thread is a daemon thread, it is not waited for. See also the javadoc on Thread.setDaemon().

    In our code, the classes extending the Behavior-class need to be running until the robot stops, and by making them daemon, the threads will be terminated when the main-thread terminates.


  • Boolean suppression value.

    Every thread created from a class extending Behavior has a boolean field "suppressed". If this boolean is true, the thread will not send out motor commands. More precisely: every method in those threads that uses the motors has an if ( ! suppressed ) check as its first line of code. Therefore, if another thread has set the boolean to true, the method will not do anything (see the sketch after this list).

    This method has pros and cons:

    A good result is that a ``lesser'' behavior quite effectively gets suppressed by a higher behavior. The drawback is that one needs to implement the if ( ! suppressed ) check for every piece of code that someone or something would want to suppress. Another drawback is that we need to know at compile time how many behaviors we want to suppress. As of now, we tell every layer beneath us that we want to suppress it, and afterwards unsuppress it. It is therefore part of the programmer's job to be sure that he/she has remembered to suppress and later unsuppress all the layers, or else the entire algorithm falls apart.

    A positive thing is that this method is somewhat easy to extend (not as in Java ``extends''): we can later add a layer that will do something more important than playing a sound---and therefore suppress everything we've designed so far. But we cannot easily put layers in between already existing ones.

    This is a rather limiting factor for this programming style, because often one would put a ``panicking'' layer at the top; e.g., when close to hitting a wall, the most important thing is to avoid it. But if one uses this programming style as a way to add more fine-grained behavior, we would have to revisit the Avoid layer for each layer being added, to make it suppress that one as well.

    All in all, designing a robot as a stack of layers, wherein a higher layer can suppress everything beneath it, is a very good way to structure the different elements of the robot, but the current design does not scale as well as one would like, and pretty much demands that one has a complete overview of the robot's design beforehand. This is partly why we refactored the code slightly. See more in 4. Refactoring the code, suppressionwise.
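
To make the pattern concrete, here is a stripped-down sketch of the suppression guard described above; the class and method names do not match the handed-out code exactly and are only meant as an illustration.

import lejos.nxt.Motor;

// Sketch of the "suppressed" guard: every motor command is wrapped in the
// check, and a higher-ranking behavior flips the flag via setSuppress().
public class RandomDriveSketch extends Thread
{
    private volatile boolean suppressed = false;

    public void setSuppress(boolean s) { suppressed = s; }

    private void forward(int speed)
    {
        if (!suppressed)              // the guard every motor method needs
        {
            Motor.A.setSpeed(speed);
            Motor.C.setSpeed(speed);
            Motor.A.forward();
            Motor.C.forward();
        }
    }

    public void run()
    {
        while (true)
        {
            forward(400);
            try { Thread.sleep(1000); } catch (InterruptedException e) {}
        }
    }
}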


3.5. New thread for light following

To program a thread that implements the behavior of seeking light, we went for the Braitenberg approach: a simple type-2b robot, where each sensor linearly and positively feeds the opposite motor.
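
A minimal sketch of that 2b coupling is given below; the sensor ports and the scaling factor are assumptions, and in the real program the loop body of course lives inside the SeekLight behavior's run() method rather than in a main().

import lejos.nxt.LightSensor;
import lejos.nxt.Motor;
import lejos.nxt.SensorPort;

// Sketch of the Braitenberg 2b coupling: each light sensor drives the
// opposite motor, so the robot turns towards the brighter side.
public class SeekLightSketch
{
    public static void main(String[] args)
    {
        LightSensor left  = new LightSensor(SensorPort.S1); // assumed ports
        LightSensor right = new LightSensor(SensorPort.S3);
        while (true)
        {
            int l = left.readValue();   // 0..100
            int r = right.readValue();
            Motor.A.setSpeed(r * 8);    // left motor driven by the right sensor
            Motor.C.setSpeed(l * 8);    // right motor driven by the left sensor
            Motor.A.forward();
            Motor.C.forward();
            try { Thread.sleep(100); } catch (InterruptedException e) {}
        }
    }
}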

The interesting thing to note about this approach is that when programming with multiple ``concurrent'' (or at least interleaved) behaviors, it naturally becomes hard to distinguish the different behaviors and to tell when they're in play.

Two things aim to mend this: The generous use of explicit delays and the inhibition of other behaviors, so that at any one time, only one behavior is visible.

The use of explicit delays seems like wasting time: any goal that the robot is supposed to reach will only be reached more slowly with explicit delays than without them. However, this may not be the whole truth, since when developing the robot, the imperative objective is to gain knowledge of its behavior---why it is doing as it does, and what needs to be changed to get closer to the desired behavior.

This motivates the use of explicit delays. Otherwise, the robot's behavior becomes incomprehensible, simply because it happens too fast. Obviously, the ultimate goal is to make the robot work as fast or as precisely as possible, but until then, development can gradually refine (and reduce) the explicit delays.

4. Refactoring the code, suppressionwise
The code that was made available contained an admittedly silly implementation of suppression: the various classes had constructors with one parameter for each of the Behaviors to be suppressed by this one. Also, the given Behavior subclasses defined fields for each of those, and manually called setSuppress(true) on each.

Obviously, suppression is shared behavior amongst the various subclasses of Behavior, so in this scenario it was attractive to refactor the code so that Behavior defined suppressSupressees() and unsuppressSupressees(), which, given constructor initialization of the ``suppressees'', will suppress (or unsuppress) all of the instance's suppressees.

This, however, proved cumbersome, and as a word of warning, below are the obstacles that hit the idea:


  • Refactoring six places at one time is never easy. It is prone to errors. And in fact, a Behavior-subclass stood completely untouched right until it generated errors.

  • Programming with Java 1.5 generics doesn't work. Lejos doesn't support that. So static type safety guarantees must be enforced by the programmer.

  • There is no easily-accessible variadic method-implementation. That is, it's cumbersome to get started doing, and it may not even be supported. Confer with the point on generics.

  • Java 1.5 for loops over Iterable types doesn't work. There is no extended for loop available. (Albeit, this is a corollary to generics not working).



However, the advantage is that the resulting code is a tad nicer, from a separation-of-duties point of view. The interfacer has gotten a tougher job, though.

A snapshot of the code is available here. The interesting parts are repeated below:

The Behavior class was extended with:

public Behavior(String name, int LCDrow, ArrayList s)
{
    [...]
    suppressees = s;
    [...]
}

[...]

public void suppressSupressees()
{
    for(int i = 0; i < suppressees.size(); i++)
        ((Behavior)suppressees.get(i)).setSuppress(true);
}

public void unsuppressSupressees()
{
    for(int i = 0; i < suppressees.size(); i++)
        ((Behavior)suppressees.get(i)).setSuppress(false);
}


The run()-methods of the various Behavior-subclasses were altered to call suppressSupressees() and unsuppressSupressees().

And lastly, the main() method of SoundCar was changed to call the constructors appropriately:

ArrayList afsupressees = new ArrayList(),
rdsupressees = new ArrayList(),
pssupressees = new ArrayList(),
slsupressees = new ArrayList();

sl = new SeekLight ("Light",4,slsupressees);

rdsupressees.add(sl);
rd = new RandomDrive("Drive",1,rdsupressees);

afsupressees.add(rd);
afsupressees.add(sl);
af = new AvoidFront ("Avoid",2,afsupressees);

pssupressees.add(rd);
pssupressees.add(af);
pssupressees.add(sl);
ps = new PlaySounds ("Play ",3,pssupressees);

5. Conclusion
We've successfully played around with behavior-governed robots, and gained experience with layered architectures in robot control. Among those experiences are:

Robots designed with a layered architecture make for easy understanding of what's happening. One's mindset can attribute different steps to different modules in the code, and this is helped by using explicit delays to slow down the program's actions.

When one contemplates a layered architecture, one might want to go all-in from the start, and make a smart dependency/suppression-system, early on. We, for one, have quickly outgrown the simple constructor-parameter scheme presented to us.

6. Further work

The programs we wrote were not fully-developed, algorithm-wise, due to time constraints. The SeekLight behavior ought to get some thought in order to more consistently drive towards a light source.

Our robot might also gain something from being redesigned: it almost tips over when avoiding obstacles, and the light sensors are aimed almost in parallel.

Most importantly, some generic way of handling a DAG of Behavior dependencies ought to be researched and implemented.