Friday, 12 December 2008

End-project. PART IV

Date: 12 12 2008
Duration of activity: 2 hours
Group members participating: all group members

1. The Goal

The goal of this lab session was to agree on the main points of the architecture and to make some adjustments to the LEGO structure.

2. The Plan

  • Agree on the architecture.

  • Further upper carriage construction.

  • Adjustments of the sliding part.

3. The Results

3.1. The architecture

The first thing we did during this lab session was to agree on the architecture. As mentioned before, we will use a layered architecture in this project. It is very important to leave as much room as possible for scalability and modifiability, since it is not yet exactly clear how much of each layer we will need. What we have agreed upon so far looks like this:

4. Input handling. (tentative)
3. Input (vector) interpreter. (tentative)
2. State calculating navigation layer. Possibly feedback giving. (tentative)
1. X-Y movement calibration (mm wise). (definite)
0. Motor speed and tacho count handling. Re-zeroing. (definite)
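To make layer 1 (the definite ``X-Y movement calibration (mm wise)'' layer) concrete, here is a minimal sketch of what its job could look like in Java: converting millimetres to tacho counts and back. The millimetres-per-revolution figure and all names are assumed placeholders, not measurements or decisions from our build.

```java
// Sketch of layer 1: millimetre <-> tacho-count calibration.
// mmPerRevolution is a made-up placeholder value, not a measured one.
public class MillimetreCalibration {
    private final double mmPerRevolution; // spindle travel per motor revolution

    public MillimetreCalibration(double mmPerRevolution) {
        this.mmPerRevolution = mmPerRevolution;
    }

    /** Tacho counts (degrees) needed to travel the given distance. */
    public int mmToTacho(double mm) {
        return (int) Math.round(mm / mmPerRevolution * 360.0);
    }

    /** Inverse mapping, for reporting positions back in millimetres. */
    public double tachoToMm(int tacho) {
        return tacho / 360.0 * mmPerRevolution;
    }
}
```

With this in place, layer 0 would only ever see tacho counts, while the layers above work in millimetres.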

3.2. The carriage

The second thing to do in this lab session was to mount the upper platform (the carriage) with two touch sensors. Whenever a touch sensor on this platform is pressed, it indicates that the Y-axis is at its very beginning or its very end. We couldn't get our hands on the standard official grey LEGO touch sensors, so we had to use a couple of semi-transparent blue third-party ones instead. For now, the upper platform with such a sensor looks like this:

3.3. The sliding part

The third thing done in this lab session is a very important one. We made the support near the spindle axis smoother. We also rebuilt the part responsible for the up/down movement so that it pushes the Y-axis spindle from the outside in: the axis now gets pushed into its support structure, whereas before it was pushed out of it. Pushing the axis inwards this way gives it more support, and movement along the Y-axis becomes more stable and no longer gets stuck.

4. Conclusion

The benefit of this lab session is that we have now agreed exactly which layers of the layered architecture we plan to use. We also mounted the carriage with two touch sensors and made the sliding part travel from one side to the other more smoothly.

Tuesday, 9 December 2008

End-project. PART III

Date: 09 12 2008
Duration of activity: 4 hours
Group members participating: all group members

1. The Goal

The goal of this lab session is to put all our efforts into making the construction better.

2. The Plan

  • Improve the sliding part (the one that moves on the carriage).

  • Improve the carriage itself.

  • Mount the NXT to be in a comfortable position with regards to the whole structure.

  • Install emergency stop buttons.

3. The Results

3.1. Sliding part

The first thing done today was to improve the part of the construction responsible for handling up/down movement. It was enhanced so that it is now very easy to attach any kind of pointer or, more generally, any kind of tool; in our case this will most likely be a drawing pen. The wheel is the part that, driven by the motor, enables the movement itself. The whole part actually hangs onto the carriage, but it is clamped on tightly enough that it should be able to move left/right without significant difficulty.

The aforementioned part looks like this:

3.2. The carriage

The second thing done was improving the upper platform to be as compact as possible, leaving more space available for the X-Y coordinate system. The working area is not that big, so making the mounts as compact as possible is an important issue. The pictures show the upper platform and a close-up of its sliding area:

3.3. The NXT

The third thing that was done was to attach the NXT brick onto the working area so it will be easier to work with it. This looks like seen below:

3.4. The touch sensors

The fourth thing done today was attaching touch sensors, so that we always know where the limits of our coordinate system are. The lower platform (the main one, handling the x-axis) got four touch sensors, the upper platform (the carriage, handling the y-axis) got two, and the ``up/down'' platform will presumably get only one, if any. That is enough, as all we need there is to be sure that we have touched the paper surface. Old-style RCX touch sensors are used for this purpose, since they can be connected in parallel (and we will have more sensors than the NXT has input ports, so this is necessary).

3.5. Software

With regards to programming, work on the X-Y movement controller was started. The initial idea is to take (x, y) coordinates as parameters and make the pointer move x tacho counts along the X-axis (forwards/backwards with regard to our working area), and symmetrically for the y coordinate.
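As a sketch of that idea (the lejos motor calls are omitted, and all names here are ours, not from any final design), the controller's bookkeeping could look like this:

```java
// Sketch of the initial X-Y controller idea: given target tacho counts (x, y),
// compute how far each motor must turn from its current position. The actual
// lejos Motor calls are left out; this only shows the bookkeeping.
public class XYController {
    private int currentX = 0; // tacho-count position along X
    private int currentY = 0; // tacho-count position along Y

    /** Returns {deltaX, deltaY}: the rotations to hand to the two motors. */
    public int[] moveTo(int x, int y) {
        int[] delta = { x - currentX, y - currentY };
        currentX = x;
        currentY = y;
        return delta;
    }
}
```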

3.6. Final outcome

At the end of the work it seemed that the part responsible for up/down movement would have to be redesigned: when attached to the ``upper'' platform (the carriage), it does not move correctly along the spindle axis and gets stuck at some points. At the end of the day, the overall construction looks like this:

4. Conclusion

This lab session was all about the construction. Now we have the main platform, a quite final version of the carriage, and the improved sliding part. The sliding part is built so that it can move left/right (y-axis movement handling), although with some spindle-related troubles that will be fixed in the future. Also, a part of the sliding part can perform up/down movements. So far we have mounted six touch sensors (four on the main platform and two on the carriage top) to act as emergency stops, so that no platform overruns the limits of the working area.

Friday, 5 December 2008

End-project. PART II

Date: 05 12 2008
Duration of activity: 2 hours
Group members participating: all group members

1. The Goal

The goal of this lab session is to finish building the carriage that is able to go back/forth (that is x-axis movement) and some part on top of that carriage that is able to go left/right (that is y-axis movement).

2. The Plan

  • Finish building the carriage.

  • Try to build a part that can move left/right with regards to the carriage.

  • Mount motors and test how well platforms are able to move.

3. The Results

3.1. Movement

The biggest challenge of this lab session was to keep the building going. So far we have the ``lower'' platform, which is responsible for forwards/backwards movement handling, and on top of it sits an ``upper'' platform (the ``carriage''), the platform that will be moved forwards and backwards. This ``upper'' platform is responsible for leftwards/rightwards movement handling.

To make it easier to imagine all the directions of movement, this little schema introduces the platforms in play. Most importantly, it all corresponds to movement in an X-Y-Z coordinate system.

If the construction succeeds, in the final version we will be able to move in a three-dimensional space.

3.2. Platforms together

The idea that enables the carriage to move at all is rather simple: we take two plastic axes (actually more than that, but divided into two groups) and mount them with ``spindles'' (more properly called ``screw conveyors'', but ``spindle'' is the word we're settling on) that can take a LEGO-gear-gripping linear drive from the beginning of the axis to the very end. To accomplish this, the weight of the load rests not on the gear itself but on small flat wheels mounted for that purpose.

Here you can see those platforms (the lower one and the carriage) on top of each other:

3.3. The carriage

From a basic point of view, all that was left was to build a top sliding part, mounted on top of the ``upper'' platform (the carriage), that can go leftwards/rightwards. This part has to be able to move an attached head up and down.

So the top of the carriage itself has an axis with mounted spindles. The idea is similar to before: the sliding part (for the leftwards/rightwards movement) is mounted with a gear that grips into the spindle. This is the aforementioned ``upper'' part (the carriage) taken apart:

3.4. The part for Z-movement

The next important part to consider was the actual construction doing the up/down movements. Lifting obviously has to be handled by a motor, and the fact that a motor has to be directly connected is a bit of a burden, to no small extent because of the limited space available in this part of the construction. The problem of mounting the motor so that its rotation is turned into linear movement can be solved in different ways. Our approach was a creative one: instead of the more obvious ``gear to gear'' approach, we went with a ``rubber wheel on smooth plastic surface'' approach. This has the advantage of being simple and, in some sense, continuous in its movement, but the drawback is that it requires a lot of tension to work reliably, and therefore the motor has to deliver a lot of power to make any movement at all. This issue will be discussed later.

This is how the construction (of the part that slides left/right on the carriage) looks:

Now, the part responsible for handling up/down movement has to be connected somehow, i.e. put on top of the ``upper'' part (the carriage). For that, a sliding construction was made. Here it can be seen, mounted on the platform and by itself:

3.5. Final outcome

With the final construction, we have the main lower platform, the carriage on top of that, and the sliding part on top of the carriage itself. The carriage can move back and forth, as a motor is connected to the main axis (which drives the axes on both sides and thus enables the movement). The sliding part on the carriage cannot move yet, but it will: the plan is to mount a motor to drive the single axis on the carriage, and a third motor will be connected to the sliding part itself (to enable up/down movements).

And so the overall construction when all the parts are put together looks like this:

With regards to programming, some basic motor control was done. With a motor mounted on the ``lower'' main platform, forwards/backwards movement was tested at different speeds. We were satisfied with the performance, as the sliding happens smoothly enough. There are still problems to be solved: sometimes a ``tooth'' or ``groove'' of a spindle is skipped, which is a very undesirable property. The solution for this is postponed to later in the process.

4. Conclusion

This lab session was a productive one. We have a main ``lower'' platform on which a carriage can move (this corresponds to x-axis movement). The carriage itself is mounted with a sliding part that will be able to move from side to side (this corresponds to y-axis movement). The sliding part is made so that a part of it can be lifted up/down (this corresponds to z-axis movement). What is more, x-axis movement was tested with simple software programs.

Thursday, 4 December 2008

End-project. PART I

Date: 02 12 2008
Duration of activity: ~5 hours
Group members participating: all group members

1. The Goal

The goal of this lab session is to discuss the expectations of the project both software- and construction-wise, and to start building the LEGO construction.

2. The Plan

During this lab session our group took three very important steps:

  • We discussed the overall project,

  • we started planning the work and

  • we started building a prototype.

3. The Results

3.1. Theory

At this point we have agreed to make an X-Y-coordinator. The envisioned construction consists of some platforms/axes that are able to move independently. No concrete implementation aspects are discussed as the highest-prioritized issue is the actual building.

All group members have different ideas about how the LEGO build should look, how it should be built, and how it should behave. Discussion was the main activity during the time spent.

3.2. Practice

The idea of the project is to make a drawing robot. For that, a part of the robot (some sort of ``head'') has to be able to move backwards/forwards and left/right. The outcome of our building was two platforms that enable this kind of movement.

The process of building was to ``discuss'', then ``build'' and finally ``try out''. As there is no exact understanding of what we expect to get, structure-wise, there is no exact understanding of the best way to reach the goal either.

This way of working gave some fine results. The basic ideas have been generated and some basic structure has been built. The prototype we have so far looks like this:

4. Conclusion

During this lab session our group discussed all the expectations related to the end-project. We also did some actual building: So far we have some sort of lower platform (handling the x-axis) and a construction of a carriage that should be able to go back and forth on that platform.

Saturday, 29 November 2008

End course project, Lesson 11

Date: 28 11 2008

Initial Description of End Course Project

Three discussed proposals:

1. Sound-signature differentiating robot arm

2. Multi-microphone boomerang-copycat

3. X-Y-coordinator

1. Sound-signature differentiating robot arm

1.1. Description

This proposal is about a microphone-equipped robotic arm that distinguishes sound signatures: a mechanically challenging project where the main challenge is to make the robotic arm position itself correctly.

1.2. Description of platforms and architectures

The robot would obviously be built with NXTs and the NXT motors. But given that this proposal is about an articulated arm, it will probably not be possible to make the setup with only three NXT motors, and thus two NXTs are needed.

The robot would have a shoulder joint (which has two degrees of freedom), an elbow joint (one degree of freedom) and a wrist joint, which could be a two-degrees-of-freedom joint, but could also be a simpler one-degree-of-freedom joint. The finger joint would be a simple squeezing joint. So this makes for a five-degree-of-freedom articulated arm, which necessitates a two-NXT approach.

The robotic arm should be able to position itself rather precisely, given a 3D coordinate tuple. This means that we would have to do the maths to calculate the robot arm's joint angles from a target position.
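As an illustration of the kind of maths that would be involved, here is a sketch of inverse kinematics for a simplified two-link planar arm (shoulder and elbow only); a real five-degree-of-freedom arm needs more of the same, not something different. The names, link lengths, and the elbow-down convention are our own choices for illustration.

```java
// Sketch: inverse kinematics for a two-link planar arm (shoulder + elbow),
// elbow-down solution. Link lengths l1, l2 are hypothetical.
public class TwoLinkIK {
    /** Returns {shoulderAngle, elbowAngle} in radians reaching (x, y). */
    public static double[] solve(double x, double y, double l1, double l2) {
        double d2 = x * x + y * y; // squared distance to target
        double cosElbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2);
        if (cosElbow < -1 || cosElbow > 1) {
            throw new IllegalArgumentException("target out of reach");
        }
        double elbow = Math.acos(cosElbow);
        double shoulder = Math.atan2(y, x)
                - Math.atan2(l2 * Math.sin(elbow), l1 + l2 * Math.cos(elbow));
        return new double[] { shoulder, elbow };
    }
}
```

A solution can always be checked by running the forward kinematics (x = l1·cos θ1 + l2·cos(θ1 + θ2), and similarly for y) and comparing with the requested target.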

The robot's software would be a closed-loop control program with a layered architecture on top of it. The lower layers would generally handle the more hardware-related matters, whereas the upper layers would handle the more goal-oriented matters. What this concretely means will be left for discussion.

This robot, like the X-Y-coordinator proposal, will need some sort of computer input (and perhaps output), which basically will be the same methods as applicable for the X-Y-coordinator case.

Overall, the robot adopts a reactive control model [2]. This is because the idea is not to analyze the data coming from the sensors, but to immediately pursue the goal if it has not been reached yet. It is hard to predict in advance, though: this may not be enough. To make things work better, a hybrid approach [2] might be the solution. That would have one part of the robot trying to figure out sound signatures (or colors) and another part deciding whether or not to pick something up.

1.3. Problems to be solved

As mentioned, the biggest challenge would be to make the robotic arm position itself correctly. What is more, the project also has the problem of distinguishing sounds. The distinguishing might only be telling 1 Hz bleeping from 2 Hz bleeping, but even that will have its difficulties.

1.4. Expectations at the end

This project would end in a robotic arm, that would be able to pick up specific objects. The difference between the objects would be either a sound signature or the color of the object.

Variations of this project include making precise four- or five-degree-of-freedom movements, or mounting a light sensor and having the robot pick up only the correctly-colored ball.

2. A multi-microphone Boomerang copy-cat.

2.1. Description

This project proposes a multi-microphone Boomerang [1] copy-cat, where the aim is to determine the origin of a sound and point an arm in that direction.

2.2. Description of platforms and architectures

The hardware for this project would be a multi-microphone setup mounted on a static fixture, with a simple arm beside it.

The software for the robot, which would work in isolation (i.e., there is no PC in this setup), would be the challenging part of this proposal. It might very well be impossible to get the needed degree of precision out of the NXT and its internal workings (that is, the firmware implementation).

This could be implemented by relying on reactive control [2] only: the idea is to find the direction of the sound and point at it, so all the robot has to do is react to what the sensors are saying. On the other hand, if one thinks of 'pointing towards the sound source' as a goal state that the robot has to stay in over time, then it could in some sense be called feedback control [2].

Alternatively, this could be implemented by having the robot do a whole-world calculation and, from its model, determine the action to be taken. This model would probably contain a notion of the sound's origin coordinates, relative to some set of axes.

2.3. Problems to be solved

The challenge is to create a precise timing setup for the NXT and to compute the direction of sound, on a limited-processor machine. Variations include making an arm point in the direction of the origin of sound, and trying to do the trick with very few microphones (think middleware-for-a-vacuuming-robot).
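For a single pair of microphones, the core of the direction computation is small: if sound arrives at one microphone Δt later than at the other, and the microphones are a distance d apart, the bearing relative to the pair's perpendicular is asin(c·Δt/d). A sketch, using textbook values rather than measurements (the hard part, precise timing on the NXT, is exactly what this omits):

```java
// Sketch: bearing of a sound source from the arrival-time difference at a
// two-microphone pair. SPEED_OF_SOUND is a textbook value, not calibrated.
public class SoundBearing {
    private static final double SPEED_OF_SOUND = 343.0; // m/s at ~20 degrees C

    /**
     * @param dtSeconds     arrival-time difference between the two microphones
     * @param spacingMetres distance between the microphones
     * @return bearing in radians; 0 means straight ahead
     */
    public static double bearing(double dtSeconds, double spacingMetres) {
        double s = SPEED_OF_SOUND * dtSeconds / spacingMetres;
        if (s < -1 || s > 1) {
            throw new IllegalArgumentException("dt too large for this spacing");
        }
        return Math.asin(s);
    }
}
```

Note that one pair only gives a bearing with a front/back ambiguity; resolving that (and getting a full direction) is one reason for the multi-microphone setup.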

2.4. Expectations at the end

This project would result in the described hardware, and it would be able to find the origin of a sound rather precisely. If so, it would point towards the source.

If it turns out that the lejos firmware is inadequate for this usage, it might be possible to write a forked firmware that can measure input precisely enough.

-- But this is the success case. It might not be possible to implement this proposal, and in this case the project will result in a rather uninteresting presentation at the show-and-tell.

3. An X-Y-coordinator (the one we have chosen as an end course project)

3.1. Description

This project calls for a rather large physical construction that can move a head around on paper or a similar surface. The robot is a special case of a Cartesian robot: it has two major axes and a tertiary up-and-down motion, which calls for at least three NXT motors. If we want to do (even) more interesting things, we may need a fourth motor, and at that point we will need another NXT.

3.2. Description of platforms and architectures

The head may be an ``output'' head---a pen, for example---or an input one: A light sensor or a probe (see below).

The name of the game in this robot is precision. The robot has all the physical advantages, so it's the software that's to blame, if the robot winds up being inaccurate.

The software for controlling the robot would be layered, as was the case for the articulated robot arm. In this case, the concretization is easier: The lower layer takes commands from upper layers, and the separation between the layers is whether the layer controls the motors or controls the motor-controllers.

When speaking of control, some aspects are up for discussion. We are dealing with an X-Y coordinate system here, so it is debatable exactly how applicable the idea of map building suggested by Mataric [4] is. The idea is for the robot (or for a part of the robot) to navigate itself to the goal. Some aspects from 'managing the course of the robot' [3] apply: in our case, knowing where the head is at any moment in time (localization) is very important, as is knowing where to go (mission planning) and how to get there (path-finding).

But is it important to remember where the robot has been? In connection with this, one must not forget that the robot gets precise vectors as an input.

It's analogous to having a predetermined map and following it blindly, without actually making sure the map is correct. If the robot is absolutely precise, that might be enough, but generally it is not.

Nevertheless, this kind of approach cannot be ruled out beforehand. It still might come in handy.

Another approach to discuss for this project is what would be called ``sequential control'' [2]. In some interpretation, giving vectors as input is the same as giving a series of steps that the robot must run through in order to accomplish a task. If the robot has to draw a complicated figure, for example, it may do it vector by vector, or---after analyzing the input---it might choose a more clever sequence for doing the work. This bears some resemblance to a deliberative control model [2], where the robot has to take all relevant inputs into account and, after thinking, make a plan and then follow it.

3.3. Problems to be solved

The main challenge is to calculate the trajectories -- the gradients -- of several primitive commands, such as ARC(a_x,a_y,b_x,b_y,c_x,c_y) or LINE(a_x,a_y,b_x,b_y), and to perform closed-loop feedback to govern the motors, so that a shape isn't misdrawn even in the case of unexpected resistance.
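As a sketch of the LINE part, before any feedback enters the picture, one could sample the segment parametrically and emit (x, y) setpoints for a lower layer to chase; the class name, step count and coordinate type are arbitrary illustration choices, not part of any agreed design.

```java
// Sketch: sample LINE(a_x,a_y,b_x,b_y) into steps+1 evenly spaced setpoints.
// A feedback layer would then drive the motors through these points.
public class LineTrajectory {
    /** Returns steps+1 points, from (ax, ay) to (bx, by) inclusive. */
    public static double[][] sample(double ax, double ay,
                                    double bx, double by, int steps) {
        double[][] points = new double[steps + 1][2];
        for (int i = 0; i <= steps; i++) {
            double t = (double) i / steps; // parameter running 0..1
            points[i][0] = ax + t * (bx - ax);
            points[i][1] = ay + t * (by - ay);
        }
        return points;
    }
}
```

ARC would be sampled the same way, only with the parameter running along the arc instead of along a straight segment.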

3.4. Expectations at the end

This project will result in an X-Y-coordinator, which is at least able to draw simple vector graphics on a piece of paper, given a description of those, in an input file.

Variations of this project include mounting a (color?) scanner head or an industrial measuring probe in order to generate a 3D model (concretely, a height map) of an object. More esoteric variations would be to make the robot able to replicate a height map of some object seen in the work area.

Less esoteric variations on this proposal would be to add interactive control from a PC program.

We've chosen an end course project, and while all of the outlined ideas seem worthwhile to pursue in their own way, the chosen one is the most interesting: it has a mathematical-programming angle (the gradients), a software architecture angle (the layered architecture), and an interesting build (which may be topped by the articulated arm, though).

This also has an influence on the feasibility of the project: the Boomerang proposal may fail miserably, but may also turn out to be trivial, whereas the articulated arm has so many inherent difficulties that it may be impossible to construct satisfactorily.

5. Work plan

  • Build the robot

  • Implement a straight-line governor

  • Implement an arc governor

  • Implement governor-using algorithms

6. References


[2] Fred G. Martin, Robotic Explorations: A Hands-on Introduction to Engineering, Chapter 5

[3] Brian Bagnall, Maximum Lego NXT: Building Robots with Java Brains, Chapter 12

[4] Maja J Mataric, Integration of Representation Into Goal-Driven Behavior-Based Robots

Thursday, 20 November 2008

NXT Programming, lesson 10

Date: 21 11 2008
Duration of activity: 3 hours
Group members participating: all group members

1. The Goal

The goal of this lab session is to investigate how the implementation of a behavior-based architecture (namely, lejos's) works.

2. The Plan

  • Build and experiment with the idea of BumperCar

  • Investigate functions of the Arbitrator

  • Investigate alternative choices for Arbitrator

  • Discuss motivation functions

3. The Results

  • The first thing that was necessary to do in this lab session, was to prepare the robot for trying out the BumperCar program (which is included with lejos, in the samples/ directory). Towards that end, the robot was mounted with a touch sensor. Read more about that in 3.1. Robot construction.

  • As the robot was prepared for touching the environment, some experiments with the code were made and some discussions arose. Read more about that in 3.2. Investigations into the Arbitrator.

  • The idea of the Arbitrator that was discussed in 3.2. is just one way of doing things. There are alternative ways to implement the Arbitrator. This is discussed in 3.3. Alternative ideas for Arbitrator.

  • Following the trail of discussion, 3.4. Motivation functions discusses the idea of motivation functions[1].

3.1. Robot construction

The construction of the robot is fairly simple. This is because this lab session involves no observations of explicit robot behavior; rather, the idea is basically to figure out how and why the code works as it does. So now our robot is mounted with a touch sensor, and it looks like this:

3.2. Investigations into the Arbitrator

  • What happens when the touch sensor is pressed?

  • When the touch sensor is pressed, the robot stops. 'takeControl' replies whether the behavior wants to run, and in this case the answer corresponds to whether the touch sensor is pressed. The Arbitrator iteratively asks the behaviors whether they want to take control.

  • What happens if takeControl of DriveForward is called when HitWall is active?

  • The robot does not drive forward. The Behaviors of the system are prioritized, and the topmost is the touch behavior. It is always asked first if it wants to take control, and it always does while the touch sensor is pressed. For this reason, 'drive forward' is never asked if it wants to take control.

  • Third behavior. Why is it that the exit behavior is not activated immediately? Why is it possible for HitWall to continue controlling the motors even after the suppress method is called?

  • If 'exit' is pressed with the source code as given so far, it seems that the robot stops immediately. To investigate this, the back-up time was set to 10 seconds, ensuring that HitWall is active when 'escape' is pressed. This lets us see that the Exit behavior is indeed not activated immediately.

    In the Arbitrator we have a for-loop that asks whether HitWall wants to be active (by calling its takeControl predicate method), and the same is done for the Exit behavior. It should be noted that the first activated behavior has to finish (i.e., return from action()), as only one such method can run at a time in this setup. We cannot do anything to stop the 'action' method (which is a corollary of the fact that ordinary method invocations cannot be stopped in their tracks).

    HitWall continues controlling the motors even after the suppress method has been called from the Arbitrator. This is due to the fact that the action method has to finish (even if suppress is called) before anything else can happen, and this action method is rather long-running.

  • Fourth behavior. Explain.

  • We implemented PlaySound just like the other Behaviors. As intended, we put the PlaySound behavior under the HitWall behavior, priority-wise, so that HitWall has a behavior both above and below it that can be activated by a button on the NXT unit. Going by the theory alone, one would think that the PlaySound behavior would not be able to suppress the HitWall behavior, but that the Exit behavior would. When we experimented, however, both behaviors seemed to be suppressed by HitWall. To find out why HitWall seemed able to suppress both the Exit and PlaySound behaviors, we had to look really deep into the code.

    The behavior-handler is implemented through a while(true)-loop that constantly checks whether or not a higher-ranking behavior wants access over a lower-ranking one.

    The decision about which behavior gets to run is made in some nested if-statements. The following code is the if-statement saying that if i is a lower-ranking behavior, it will yield; otherwise it will suppress the current one.

    if (currentBehavior >= i) // If higher level thread, wait to complete..
        while (!actionThread.done) { Thread.yield(); }

    According to these lines of code, Exit should be able to suppress HitWall, but it doesn't.

    The reason for this can be found further down in the class. The actual running of the behavior is done in another while(true)-loop, which takes whatever behavior is set as the highest and runs its .action() method.

    public void action() {
    // Back up:
    try{Thread.sleep(1000);}catch(Exception e) {}
    // Rotate by causing only one wheel to stop:
    try{Thread.sleep(300);}catch(Exception e) {}

    And here is the problem: HitWall's action method is a blocking method that uses .sleep() while making the robot stop. During this time the robot cannot switch behavior, and as a consequence one would have to hold down the Exit button for several seconds for it to be picked up in the next deciding-behavior cycle.

3.3. Alternative ideas for Arbitrator

The main issue with the current approach to the Arbitrator is that method invocations cannot be undone. Threads, however, usually can be forcefully destroyed (for example, in mainline Java), but this is not the case for lejos' Thread. This is frustrating, as embedded programs could easily benefit from it.

If Thread.destroy() had been defined, the best choice for implementing Behaviors would be to call each Behavior's action method in a separate thread. That way, execution could be stopped in general, since one could take a two-step approach: first interrupt the thread, and then, if it doesn't terminate willingly, forcefully kill it. The interface for Behavior should in this case still require both an action and a suppress method, since the Arbitrator would have to do the thread's cleanup (its suppress invocation) on the thread's behalf if it kills the thread forcefully.

Falling short of that functionality, the second-best approach is to encourage interfacers to write interrupt-aware action methods, and still do the first step of the above solution. That way, conformant methods will be interruptible---albeit conformance here means conformance to the encouragement. This is vastly inferior to being able to forcefully kill the thread, since conformance can be destroyed by library calls: calling a library routine which itself does a try { Thread.sleep(n); } catch (InterruptedException e) {} will swallow the InterruptedException that was intended for the interfacer.
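As an illustration (in plain Java, not lejos) of what a conformant, interrupt-aware action method could look like: the long sleep is allowed to be cut short, and the interrupt flag is restored rather than swallowed. The class and field names are our own, for illustration only.

```java
// Illustration of an interrupt-aware action method: an interrupt makes the
// method return early, and the interrupt status is preserved for the caller.
public class InterruptAwareBehavior {
    private volatile boolean completed = false;

    /** Backs up for ~1 second, but returns promptly if interrupted. */
    public void action() {
        try {
            Thread.sleep(1000); // stands in for the backing-up motor commands
            completed = true;
        } catch (InterruptedException e) {
            // Restore the flag so the arbitrator can still see the interruption.
            Thread.currentThread().interrupt();
        }
    }

    public boolean completed() {
        return completed;
    }
}
```

An arbitrator running this action in its own thread can then call thread.interrupt() to ask the behavior to stop, instead of waiting the full second.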

A third---more esoteric---approach is to annotate the byte code inside the called action method with inserted if (Thread.interrupted()) return; statements. This would enforce the interruptibility of the action method, but at a very high price: this kind of code manipulation is always horribly slow, and must be even more so on the NXT.

As a variation of this proposal, one could choose to emulate the underlying JVM within the JVM, and interpret the called action method, rather than placing it on the JVM stack for execution.

Of course, one could also program one's code in a more natural language for an embedded platform. One that had interruptions enabled by default, perhaps. Or failing that, one---wearing one's firmware programmer hat---could implement such interruptability support into one's already terribly native-disfigured API.

3.4. Motivation functions

We ended up discussing how to use motivation functions to implement behavior in our robot. Instead of having ``takeControl'' return a binary answer on whether or not to use a behavior, let it return a value expressing how motivated the robot feels to use this behavior. This way one could make more fine-grained decisions when choosing between behaviors---and if nothing else, it can still be implemented as if the answer were binary. When deciding which behavior to run, the one with the highest value wins. To actually decide what value to return, takeControl() would have to be made more complex, determining through measurements how motivated the robot is for this behavior. In the ``HitWall'' example, we could test whether we are in the middle of a HitWall behavior and in that case be less motivated to do further HitWall behavior. This could be done by setting a boolean ``turning'' to true after we have driven backward and are about to turn; when takeControl() is called, we can then return different values through an if( turning ) statement.
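A minimal sketch of that idea, with our own names (the interface mirrors lejos's Behavior but is not it, and the numeric motivation scale is an arbitrary choice):

```java
// Sketch of a motivation-based arbitrator: takeControl returns a motivation
// value instead of a boolean, and the highest value wins.
interface MotivatedBehavior {
    int takeControl(); // 0 = no desire to run; higher = more motivated
}

public class MotivationArbitrator {
    /** Index of the most motivated behavior; ties go to the earlier entry. */
    public static int select(MotivatedBehavior[] behaviors) {
        int best = 0;
        for (int i = 1; i < behaviors.length; i++) {
            if (behaviors[i].takeControl() > behaviors[best].takeControl()) {
                best = i;
            }
        }
        return best;
    }
}
```

The ``turning'' trick from above then simply becomes a takeControl() that returns a lower value while turning is true.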

4. Conclusion

We built and experimented with a behavior-based architecture, and made a robot that showed the effects of the architecture. It didn't do anything useful, and to some extent, this part of the exercise is a recap of the previous behavior-based-architecture lab session.

This exercise has been about understanding the given architecture, and we've done so. We've pondered the questions in the lab assignment, and we've been discussing issues that reach slightly outside the lab assignment's questions.

We've looked into possibilities for writing another implementation of Arbitrator, but our efforts conclude that the situation is pretty much hopeless. And we've been discussing possibilities for making the architecture motivation-based, which also isn't strongly motivated by this assignment.


1. Thiemo Krink. Motivation Networks - A Biological Model for Autonomous Agent Control.

Our code for this week

Thursday, 13 November 2008

NXT Programming, Lesson 9

Date: 14 11 2008
Duration of activity: 3 hours
Group members participating: all group members

1. The Goal

The goal of this lab session is to build and program a robot which is able to navigate in the environment using TachoNavigator, and perhaps experiment a bit.

2. The Plan

  • Build a robot whose construction is suitable for a navigation task [1].

  • Try "Blightbot" [1], which moves between different positions in a Cartesian coordinate system.

  • Discuss how the robot should navigate while avoiding obstacles.

  • Try and evaluate upon the robot's navigational skills while it is using the HiTechnic compass.

3. The results

  • The first thing in this lab session was to rebuild the robot. The idea of the new build-up is for the robot to be able to turn on a dime---that is, without moving forward in any direction. Read more in 3.1. Robot building.

  • Once the robot was built, we could try out how the robot navigates using TachoNavigator. Read more about the implementation and the test in 3.2. Using TachoNavigator.

  • The idea of driving from one coordinate to another seems like little challenge. It may be worth thinking about more complex situations---avoiding obstacles while driving from one point to another. Read our ideas in 3.3. Discussion about obstacles.

  • The last thing in the lab session was to try out navigation using a compass. Although it wasn't suggested to do in the lab description, we were curious if our robot would be able to find the finish point more precisely. Read about this in 3.4. Using compass.

3.1. Robot building

Without much hard thought, we built a robot using the instructions from B. Bagnall's book [1], so the robot now looks like the one pictured there. There is just one difference: a long pole mounted with a compass, which we used in the last part of the lab session.

This is our robot:

3.2. Using TachoNavigator

The source code for this phase was taken verbatim from B. Bagnall's book [1]. The idea behind the code is very simple: set up a TachoNavigator and (with goTo()) give it the coordinates that the robot has to go to.
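A sketch of what such a program looks like (leJOS NXJ; the waypoints are the ones from our test course, and import paths have moved around between leJOS releases, so treat them as an assumption):

```java
// Sketch of the Blightbot idea (leJOS NXJ): drive a small course of
// Cartesian waypoints using odometry alone. goTo() blocks until the
// target position is (believed to be) reached.
import lejos.navigation.TachoNavigator;
import lejos.nxt.Motor;

class BlightbotSketch {
    public static void main(String[] args) {
        // wheel diameter 5.6cm, track width 16.0cm (our measured average)
        TachoNavigator robot =
            new TachoNavigator(5.6F, 16.0F, Motor.C, Motor.B, false);
        robot.goTo(200, 0);   // first long stretch, fairly precise
        robot.goTo(100, 100); // still close to the mark
        robot.goTo(0, 0);     // drift has accumulated by now
    }
}
```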

There are two very important variables that the TachoNavigator takes as constructor parameters: the tire diameter and the track width. These two parameters determine the precision of the robot to a large degree. The tire diameter gave us no worries, since it is printed on the rubber of the LEGO tire. The track width, however, was a different story: the wheels mounted on the axle weren't too tight, and as a consequence we could easily move the wheels a bit both inward and outward. It might not seem significant, but this rattling hurts the precision. The track width measured, at its extremes, could vary from 15.7cm to 16.4cm, which is not preferable. With this in mind, we continuously added rigid beams during the lab session to make the construction more stable. (Note that this didn't affect the parameters of the system by much, since the strengthening only added weight---it specifically didn't alter the track width.)

Up next, the test, which we set up in the way described in the book [1]. At first we tried to use coins as markers, but that proved too imprecise, since we only had a 30cm ruler to measure distances. To make things work a little better, we set up a coordinate system on the floor using paper tape. This gave us the opportunity to see exactly how precise the robot was.

We observed the behavior of the robot many times. The robot is pretty precise when going from (0,0) to (200,0) and then from (200,0) to (100,100). After that it seemed to become more and more imprecise. From theory we know that this is because all the little errors add up to one major error, called drift. In the results the book reports, the robot comes to the final goal within 30 centimeters. Our observations gave very similar results.

This is an example that represents a couple of our test runs:

3.3. Discussion about obstacles

To avoid obstacles and still navigate, certain proposals could be used:

  • A behavior-based (confer the relevant lab session) approach comes to mind: two behaviors, one trying to take the robot to the current destination point, another to guide the robot away from any seen obstacles, with the latter overruling the former.

  • A simple check-for-obstacles back-off

  • Changing the Navigator code to take a callback object as a parameter, and making the Navigator call the callback whenever an obstacle has been seen (as per some metric)

  • Complex algorithms, see below

Whatever one chooses, it must be integrated with the Navigator in such a way that the avoidance algorithm affects the coordinates the Navigator uses for its pathfinding.
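The callback proposal could look roughly like this (a hypothetical sketch: none of these names exist in leJOS, they are only our design idea, and the detour policy shown is deliberately naive):

```java
// Hypothetical sketch of the callback proposal: a Navigator would be handed
// an ObstacleCallback and consult it whenever its sensors report an
// obstacle, feeding the returned waypoint back into its own pathfinding.
interface ObstacleCallback {
    // Given the obstacle's estimated position, return a detour waypoint
    // {x, y} the Navigator should drive through before resuming its course.
    float[] onObstacle(float obstacleX, float obstacleY);
}

class SimpleBackOff implements ObstacleCallback {
    public float[] onObstacle(float obstacleX, float obstacleY) {
        // naive policy: propose a point 20cm short of the obstacle
        // in both coordinates
        return new float[] { obstacleX - 20, obstacleY - 20 };
    }
}
```

The important part, as noted above, is that the returned waypoint feeds into the coordinates the Navigator uses, rather than the avoidance happening behind the Navigator's back.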

3.3.1. Complex algorithms
Given that the coordinate subsystem is in play, it would make sense to use that for obstacle avoidance. This, however, is almost inevitably more intricate than simple ``go away'' algorithms---hence the heading.

This amounts to generating some model of the real world (presumably a two-dimensional one similar to the coordinate system the Navigator uses) and of the obstacles (which could simply be a fixed number of points in the Cartesian coordinate system known to be inaccessible). The routing algorithm would then have to avoid such trouble spots.

One can argue whether the Navigator is supposed to handle such situations. It essentially could be forced to follow a detour in order to get from A to B, and---at present---it does nothing but try its best to follow a straight line. So one could conclude that exactly that is its intended area of responsibility, and that obstacle avoidance lies outside the responsibilities of Navigator implementations. But that's a software architecture discussion---ironically enough, targeted at a platform that supports very few mechanisms for implementing such.

3.4. Using compass

As in Brian Bagnall [1], we also tried to build the Blightbot with a compass. Instead of a TachoNavigator, we used a CompassNavigator. The CompassNavigator still takes the wheel diameter and track width as parameters, which shows that the compass can only be used to determine the robot's heading, and not e.g. how far it has driven. Using the compass instead of only the tacho counter should be a drop-in replacement, but we encountered many unforeseen problems.

In Brian Bagnall's example the compass needs to be calibrated. This is done fairly manually with a CompassSensor object: first the calibration is started with startCalibration(), the robot is then programmed (by us) to turn around twice, and the calibration is stopped with stopCalibration(). According to the documentation, a CompassNavigator object should have a more high-level calibration method available: calibrateCompass() should calibrate automatically, but we couldn't get this method to work. When we tried to use it, the robot just stood still.
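The manual calibration we ended up with looks roughly like this (a sketch against the 2008-era leJOS NXJ API; import paths and the exact rotate method signature have shifted between releases, so treat both as assumptions):

```java
// Sketch of manual compass calibration (leJOS NXJ, 2008-era API).
import lejos.nxt.SensorPort;
import lejos.nxt.Motor;
import lejos.nxt.addon.CompassSensor;
import lejos.navigation.CompassPilot;

class CalibrateCompassSketch {
    public static void main(String[] args) {
        CompassSensor cps = new CompassSensor(SensorPort.S1);
        CompassPilot pilot =
            new CompassPilot(cps, 5.6F, 14.1F, Motor.C, Motor.B, false);
        cps.startCalibration();
        pilot.rotate(720);      // turn around twice, slowly
        cps.stopCalibration();
    }
}
```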

After we had calibrated the compass, we had hoped we were good to go. Instead our robot started to act weird: accelerating forward at times, but mostly turning around in place, in what looked to us like completely random patterns. We had made minimal changes to our code, so it seemed unlikely the problem was to be found there.

In the Blightbot-tacho version the code for making a navigator looks like this:
TachoNavigator robot = new TachoNavigator(5.6F, 14.1F, Motor.C, Motor.B, false);

and the code for the Blightbot-compass looked like this:

CompassPilot pilot = new CompassPilot(cps, 5.6F, 14.1F, Motor.C, Motor.B, false);
CompassNavigator robot = new CompassNavigator(pilot);

We had a hard time figuring out why the robot all of a sudden wouldn't drive the course. In the end we figured out that the parameters for a TachoNavigator() and a CompassNavigator()---although virtually the same---have a different understanding of what is the left wheel and what is the right. As soon as we interchanged the wires of the motors, our robot was able to complete the course. This leads to the hypothesis that the robot stood in place trying to get its bearings, but the algorithm couldn't make sense of its own actions: it would try to correct toward its intended direction, but the readings from the compass only got more off.

After changing the boolean value ``reverse'' in the constructor-call our robot was back on track, with the wires connected as intended:

CompassPilot pilot = new CompassPilot(cps, 5.6F, 14.1F, Motor.C, Motor.B, true);
CompassNavigator robot = new CompassNavigator(pilot);

Now we were at a point where we could start to compare the performance of the tacho-counting and compass versions. Our hypothesis was that the compass version would perform better, because the turns the robot takes wouldn't add to the drift. But our results turned out not to be so straightforward.

We now had the problem that our robot couldn't even follow a straight line. In the first long stretch from (0,0) to (200,0) the robot drifted to the left, but then corrected along the way, before it ended up at the desired point. Then, at (200,0), where the robot is supposed to turn right, it turned left. The robot ended up completing a mirrored version of the course. For some unknown reason, the compass sensor has to be mounted upside down on the robot for it not to invert its directions---but it seems likely that this is related to our robot's earlier problems with inversion.

In the end we got the compass sensor up and running, and we were able to produce a better result with the CompassNavigator-class. But it was not a straight-forward conversion.
As a last remark on the compass sensor, it is also very sensitive to magnetic fields. Its manual says the sensor has to be mounted at least 15cm away from the motors. It turned out to be more like 30cm. We instantly got a much better result after extending the antenna of our robot further away from the motors (from around 10-15cm to the position on the images---about 30cm away). But the motors aren't the only thing emitting a magnetic field. All around us in the Zuse building are possible sources of interference: power outlets, laptops, and other electronics. At one point we let the robot drive by a power outlet, and when it came close enough, it started to turn, as if confused.

All in all, the compass sensor seems to have a lot of potential, but one needs to keep the robot's environment in mind. An environment can easily be too contaminated by other magnetic fields for the sensor to be reliable.

4. Conclusion

This was a productive lab session. We experimented with the precision of tacho-counting alone, and found it to be poor. We also played around with a compass-based approach and found it even poorer (because of the flux (pun intended) of the Zuse building's environment).

A compass is a brittle instrument, and it is easily led off course by even minute magnetic interference. This is especially true when driving in a wire-filled environment. Also, the robot's own magnetic sources interfere, and measures (confer the large mast on the robot) must be taken to minimize those.

We didn't get around to implementing any obstacle-avoidance algorithms, but we did discuss them---and the possibilities for implementing them.


[1] Brian Bagnall, Maximum LEGO NXT: Building Robots with Java Brains, Chapter 12, Localization, pp. 285-302

The code we wrote/altered for this lab session

Thursday, 6 November 2008

NXT Programming, Lesson 8

Date: 07 11 2008
Duration of activity: 3 hours
Group members participating: all group members

1. The Goal

The goal of this lab session is to monitor different behaviors of the robot, when these behaviors are handled by different threads.

2. The Plan

3. The Results

  • As was mentioned, the idea of this lab session is to monitor the different behaviors of one robot, when every behavior pattern is implemented by a different thread. To actually see what is going on with the robot's behavior, different kinds of sensors come in handy. Read more about the construction in 3.1. The construction of the robot.

  • The first thing to do is to try the code that was given for lesson 8: the program and the remaining classes needed to run it. See 3.2. Different behaviors of the robot.

  • The most important aspect of this lab session's work is to actually see how the robot acts when it is using threads. Some threads perform more significant behavior than others; nevertheless, all threads together produce interesting results. The robot even seems to exhibit ``smart'' behavior. Read more about this in 3.3. Different thread activenesses.

  • As we are analyzing interesting behavior aspects, the ideas in the source code might not always be so straightforward. This brings up some discussions. Read about them in 3.4. Discussions.

  • In the initial code we have three threads of behavior: random driving, avoiding obstacles, and playing a beeping sound. The last part of the lab session is to add one more behavior in a new thread---going towards the light. This is based on ideas from Tom Dean's notes. Read more about this implementation in 3.5. New thread for light following.

3.1. The construction of the robot

This lab work basically relies on playing with the code more than figuring out how the NXT works---that was done in many previous lab sessions---so the construction of the robot is not significant now. For this particular lab session we have a robot which has two motors and is mounted with two thick, not-so-tall wheels. The robot has to be able to go forward, backward, turn right, and turn left. Also, the robot is mounted with three sensors: an ultrasonic sensor and two light sensors.

And the robot now looks like this from the side:

And the robot looks like this from the front:

3.2. Different behaviors of the robot

When inspecting the robot's behavior, it seemed to be driving around without meaningful purpose: sometimes forward, sometimes to the left or to the right, at different speeds and with significant pauses. Also, whenever an obstacle was spotted close enough, the robot turned the motors full speed backwards for a moment. Lastly, the robot beeped an annoying sound at a constant time interval.

One more thing to note was the LCD output. When driving randomly, "f" was written whenever the robot was driving forward, to the left, or to the right; "s" was written whenever the car was in stopping mode; nothing was written whenever the car was not affected by the random driving. When the car was handled by the obstacle-avoidance thread, besides "f" and "s" there was a "b" for the cases when the car was driving full speed backwards, and the distance value was printed as well. Lastly, an "s" was printed with respect to playing sound, right before the beeping sound could be heard.

3.3. Different thread activenesses
In this model a layer can suppress every layer beneath it. It is important to notice that higher layers do not get more CPU time than lower layers; they simply suppress whatever the underlying layer might want to do (though this might give the higher level more CPU time, because the underlying layers have nothing to do). So if a higher layer is always active, the lower layer will never come into play. An observation is that the lowest level probably is something the robot can do when there is nothing else (nothing smarter) to do, for instance random walking.

When only the lowest level is activated, the robot only drives randomly around, bumping into whatever might get in its way.

It gets much more interesting when the second layer is activated. The robot at this point suppresses the urge to drive randomly around every time it is about to hit something. This way of reacting is much like the human nervous system: we can let our hands run freely across a surface, but as soon as we feel something burning hot or sharp, we retract our arms to get away from the source of pain. This is the exact same behavior our robot started to show as soon as the avoid layer was activated.

3.4. Discussions
  • Daemon threads.
    The term ``daemon'' was first used at MIT, in some of the earliest work on multiuser operating systems, and was later inherited by UNIX. A daemon process is a process which, immediately after its creation, is disowned by its parent and made a child of the init process (PID 1). This way, the initial parent process can terminate without also terminating the daemon process. As a consequence, daemon processes are often used to do maintenance work.

    In Java, these semantics are somewhat inverted: A daemon thread is one that the VM is not kept running for the sake of (and thereby, its existence becomes relatively volatile). Rather, if a thread is a daemon thread, it is not waited for. See also the javadoc on Thread.setDaemon().

    In our code, the classes extending the Behavior class need to be running until the robot stops, and by making them daemon threads, they will be terminated when the main thread terminates.

  • Boolean suppression value.

    Every thread created from a class extending Behavior has a boolean field "suppressed". If this boolean is true, the thread will not send out motor commands. More precisely: every method in those threads that uses the motors has an if ( ! suppressed ) statement as the first line of code. Therefore, if another thread has set the boolean value to true, the method will not do anything.

    This method has pros and cons:

    A good result is that a ``lesser'' behavior quite effectively gets suppressed by a higher behavior. The drawback is that one needs to implement the if ( ! suppressed ) check for every piece of code that someone or something might want to suppress. Another drawback is that we need to know at compile time how many behaviors we want to suppress. As of now, we tell every layer beneath us that we want to suppress them, and afterwards unsuppress them. It is therefore part of the programmer's job to make sure that he/she has remembered to suppress and later unsuppress all the layers, or else the entire algorithm falls apart.

    A positive thing is that this method is somewhat easy to extend (not as in Java ``extends''): we can later add a layer that does something more important than playing a sound---and therefore suppresses everything we've designed so far. But we cannot easily put layers in between already existing ones.

    This is a rather limiting factor for this programming style, because often one would put a ``panicking'' layer at the top. E.g.: when close to hitting a wall, the most important thing is to avoid it. But if one uses this programming style as a way to add more fine-grained behavior, we would have to revisit the Avoid layer for each layer being added, to make it suppress that one as well.

    All in all, designing a robot as a stack of layers, wherein a higher layer can suppress everything beneath it, is a very good way to structure the different elements of the robot, but the current design does not scale as well as one would like, and pretty much demands that one have a complete overview of the robot's design beforehand. This is partly why we refactored the code slightly. See more in 4. Refactoring the code, suppressionwise
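To make the suppression pattern concrete, here is a minimal sketch of the guard (the class is our stand-in for the handed-out Behavior subclasses; the actual motor calls are replaced by a counter so the shape of the pattern is visible):

```java
// Sketch of the boolean-suppression pattern: every motor-using method
// must begin with the if (!suppressed) guard.
class RandomDriveSketch {
    volatile boolean suppressed = false;
    int commandsIssued = 0; // stand-in for actual motor commands

    void setSuppress(boolean s) { suppressed = s; }

    void forward() {
        if (!suppressed) {    // the guard every such method needs
            commandsIssued++; // would be Motor.B.forward(); Motor.C.forward();
        }
    }
}
```

The fragility discussed above follows directly from this shape: forget the guard in one method, and that method escapes suppression entirely.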

3.5. New thread for light following

To program a thread that implements the behavior of seeking light, we went for the Braitenberg approach: a simple 2b-typed robot, where each sensor linearly and positively feeds the opposite motor.

The interesting thing to note about this approach is that when programming with multiple ``concurrent'' (or interleaved, at least) behaviors, it naturally becomes hard to distinguish the different behaviors and to tell when each is in play.

Two things aim to mend this: The generous use of explicit delays and the inhibition of other behaviors, so that at any one time, only one behavior is visible.

The use of explicit delays seems like wasting time: any goal the robot is supposed to reach will only be reached more slowly with explicit delays than without. However, this may not be entirely true, since when developing the robot, the imperative objective is to gain knowledge of the behavior---why it does as it does and what needs to be changed to get closer to the desired behavior.

This motivates the use of explicit delays: otherwise, the robot's behavior becomes incomprehensible, simply because it happens too fast. Obviously, the ultimate goal is to make the robot work as fast and precisely as possible, but until then, development can keep refining its explicit delays.
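The shape of such a seek-light thread could look roughly like this (a sketch only: the sensor ports, the speed scaling, and the surrounding class are our assumptions, and the leJOS calls follow the 2008-era NXJ API):

```java
// Sketch of a SeekLight thread: read both light sensors, cross-feed the
// motors (Braitenberg 2b), and sleep an explicit delay so the behavior
// stays observable.
import lejos.nxt.LightSensor;
import lejos.nxt.Motor;
import lejos.nxt.SensorPort;

class SeekLightSketch extends Thread {
    LightSensor leftLight  = new LightSensor(SensorPort.S1);
    LightSensor rightLight = new LightSensor(SensorPort.S2);
    volatile boolean suppressed = false;

    public void run() {
        while (true) {
            if (!suppressed) {
                int left  = leftLight.readValue();
                int right = rightLight.readValue();
                Motor.B.setSpeed(right * 7); // left motor <- right sensor
                Motor.C.setSpeed(left * 7);  // right motor <- left sensor
                Motor.B.forward();
                Motor.C.forward();
            }
            try { Thread.sleep(200); }       // explicit delay, as argued above
            catch (InterruptedException e) { return; }
        }
    }
}
```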

4. Refactoring the code, suppressionwise
The code that was made available contained an admittedly silly implementation of suppression: the various classes had constructors with one parameter for each of the Behaviors to be suppressed. Also, the given Behavior subclasses defined fields for each of those and manually called setSuppress(true) on each.

Obviously, suppression is behavior shared amongst the various subclasses of Behavior, so in this scenario it was attractive to refactor the code so that Behavior defined suppressSupressees() and unsuppressSupressees(), which, given constructor initialization of the ``suppressees'', suppress and unsuppress all of a Behavior's suppressees.

This however proved cumbersome, and as a word of warning, below are the obstacles that the idea ran into:

  • Refactoring six places at one time is never easy, and it is prone to errors. In fact, one Behavior subclass stood completely untouched right up until it generated errors.

  • Programming with Java 1.5 generics doesn't work; leJOS doesn't support them. So static type-safety guarantees must be enforced by the programmer.

  • There is no easily accessible variadic method implementation. That is, it's cumbersome to get started with, and it may not even be supported. Confer the point on generics.

  • Java 1.5 for loops over Iterable types don't work; there is no extended for loop available. (Albeit this is a corollary of generics not working.)

However, the advantage is that the resulting code is a tad nicer, from a separation-of-duties point of view. The interfacer has gotten a tougher job, though.

A snapshot of the code is available here. The interesting parts are repeated below:

The Behavior class was extended with:

public Behavior(String name, int LCDrow, ArrayList s)
{
    // ... existing initialization using name and LCDrow ...
    suppressees = s;
}

public void suppressSupressees()
{
    for (int i = 0; i < suppressees.size(); i++)
        ((Behavior) suppressees.get(i)).setSuppress(true);
}

public void unsuppressSupressees()
{
    for (int i = 0; i < suppressees.size(); i++)
        ((Behavior) suppressees.get(i)).setSuppress(false);
}

The run()-methods of the various Behavior-subclasses were altered to call suppressSupressees() and unsuppressSupressees().

And lastly, the main() method of SoundCar was changed to call the constructors appropriately:

ArrayList afsupressees = new ArrayList(),
rdsupressees = new ArrayList(),
pssupressees = new ArrayList(),
slsupressees = new ArrayList();

sl = new SeekLight ("Light",4,slsupressees);

rd = new RandomDrive("Drive",1,rdsupressees);

af = new AvoidFront ("Avoid",2,afsupressees);

ps = new PlaySounds ("Play ",3,pssupressees);

5. Conclusion
We've successfully played around with behavior-governed robots, and gained experience with layered architectures in robot control. Among those experiences are:

Robots designed with a layered architecture make it easy to understand what's happening: one can attribute different steps to different modules in the code, and this is helped by using explicit delays to slow down the program's actions.

When one contemplates a layered architecture, one might want to go all-in from the start and make a smart dependency/suppression system early on. We, for one, quickly outgrew the simple constructor-parameter scheme presented to us.

6. Further work

The programs we wrote were not fully developed, algorithm-wise, due to time constraints. The SeekLight behavior ought to get some thought in order to drive towards a light source more consistently.

Our robot design might also gain from being redesigned: the robot almost tips over when avoiding obstacles, and the light sensors are aimed almost in parallel.

Most importantly, some generic way of handling a DAG of Behavior dependencies ought to be researched and implemented.

Friday, 31 October 2008

NXT Programming, Lesson 7

Date: 31 10 2008
Duration of activity: three hours
Group members participating: all group members

1. The Goal

The goal of this lesson is to build and program Braitenberg vehicles.

2. The Plan

  • Build and program Braitenberg vehicles

  • Replace vehicle 2b's stimulating connections with inhibiting connections

  • Discuss what would happen if multiple robots, all reacting to light and each with its own light source on top, were put together in the same environment

  • Discuss how the use of multiple threads for measuring multiple sensor inputs would influence the robot's behavior

  • Rewrite the vehicle's code to be self-calibrating with respect to the environment by making a calibration last only N samples, for some N

3. The Results

Our goals for this lab session are based on the assignments on the course homepage for lesson 7, but we changed some build/program assignments into discussion assignments.

  • Rebuilding our robot (named Thomas) to become a Braitenberg vehicle was straightforward. We took the design from our previous build and made the front be the back, and the back be the front. We replaced the light sensor on the back, which was pointing toward the ground, with one (and later on, two) on the front, pointing straight forward.

  • As it was suggested in the beginning of the lab description, we have tried out all three different kinds of vehicles: 1, 2a, and 2b, as they appear in the following picture:

    Read more about the exact results of our robot's construction in 3.1. Constructions of the robot

  • There were some aspects of the lab work that we decided not to implement. After constructing the different variants of the robot, we made some experiments and observed the robot's behavior, which led to some reasonable discussions. These discussions are covered in 3.2. Discussions.

  • The last thing our group decided to try out with the robot was to make it adaptive. We focused on finding the average light level in order to determine whether (and, if so, how) to respond to the measured light level. We used ideas borrowed from 'Notes on construction of Braitenberg's Vehicles' by Tom Dean. Read more about this in 3.3. Adaptive robot behavior.

    3.1. Constructions of the robot

    1. To construct vehicle 1, we used two light sensors mounted on the robot facing straight forward; these light sensors are then averaged in software, so as to appear as one single sensor. The idea is that the robot moves straight towards the light source when the light level exceeds a predetermined threshold. In this case, both motors react the same, as there is only one sensor input.

    2. To construct vehicle 2a we used the same two light sensors, facing straight forward. The idea is that the robot tries to avoid the light in some sense: when the right sensor gets enough light, the program turns on the right motor (and symmetrically for the left). In experiments with this behavior, the robot seems to move away from the light source.

    3. To construct vehicle 2b we again used the two light sensors. But now it is the opposite case: the robot follows the light. When the right sensor gets enough light, the control program turns on the left motor (and symmetrically). In experiments, the robot seems to go directly toward the light source---if it starts out seeing strong enough light, that is, because there is no light-finding part of the algorithm.

    The majority of our experiments were done with a LED (light-emitting diode), which gives a concentrated cone of light, relatively directional and narrow. When one of the robot's light sensors receives the amount of light that results from being inside the LED cone, it reacts pretty quickly and reveals the actual behavior of the robot.

    The problem with this is that the LED is too directional to be practical. It is basically only able to hit a single light sensor, when used within the range where the emitted light is still bright enough.

    Unfortunately, we weren't able to test with a less directional, high-amplitude light source (like a table lamp); we couldn't find anything usable anywhere around. Nevertheless, we could establish thresholds that made the robot react to the LED cone from further away.

    Below is a plot of the light sensor values read when aiming the light sensor at a stationary light and the inside of a suit, respectively:

    Despite everything, we managed a very promising experiment: the robot was placed in a dark room (without windows or other light sources) facing a closed door, and then the door was opened to a very bright room. With the appropriate source code uploaded, the robot came 'alive' and gladly ran towards the door and straight through it.

    In this case, the robot was already facing the door. Generally, the next step would be to make the robot look for light (by turning left or right). We didn't implement this feature for lack of time.
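The three wirings described above can be summed up as pure input-to-power mappings (the surrounding leJOS sensor and motor calls are omitted, and the class name is our own):

```java
// Sketch contrasting the three Braitenberg wirings we built:
// vehicle 1 averages both sensors into one speed for both motors;
// 2a feeds each sensor to the motor on the SAME side (turns away from light);
// 2b feeds each sensor to the OPPOSITE motor (turns toward the light).
// Each method returns {leftPower, rightPower} for the given readings.
class BraitenbergWirings {
    static int[] vehicle1(int left, int right) {
        int avg = (left + right) / 2;
        return new int[] { avg, avg };    // both motors the same
    }
    static int[] vehicle2a(int left, int right) {
        return new int[] { left, right }; // same-side: flees the light
    }
    static int[] vehicle2b(int left, int right) {
        return new int[] { right, left }; // crossed: seeks the light
    }
}
```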

    3.2. Discussions
    • Sound sensors.

    • For vehicle 1 with one sensor, the behavior of the robot would be similar: the more light (sound) there is in the environment, the more power will be applied to both motors and, therefore, the faster the robot will go.

      When we have two sensors on the robot, it is a lot easier to work with two light sensors rather than with two sound sensors. If we were to measure the sound from the surrounding environment, both sound sensors would pick up similar values, so providing directed sound levels that differ enough to reveal the robot's behavioral patterns is not so easy. Of course, if we placed the sensors on directly opposite sides, we might have achieved the desired results.

      All in all, it seemed less troublesome to stick with the light sensors in our experiments.

    • Inhibiting connections.

    • Having inhibiting connections instead of stimulating ones will make the robot apply less power to the motors when measuring more light. As a result, the robot will accelerate quickly towards a light source and then slow down as it gets closer, until it finally stops close to the light source.

    • About the lamp on top of the robot and multiple other robots around.

    • A deterministic robot, reacting in a predefined way, will (or should, at least) always act the same in a static environment. It becomes much more interesting when the environment is dynamic, because then it isn't possible to predict the robot's behavior.

      We can argue about how the robot will act on a theoretical level, because we are in control of the robot's reactions to input, but we cannot predict the course the robot will take when let loose over a period of time. (Albeit, from a strictly theoretical point of view, the whole system has a state, and this state will deterministically result in a predeterminable outcome. But that's esoteric.)

      When trying to describe what we think a robot will do in a dynamic environment, we have a tendency to describe it as we would a human with the same goal as the robot.

      So, given multiple robots of type 2b, each with a light bulb on top, put in the same environment, what would they do? They would gather in groups, minimally in pairs. Once every robot is in a group, each would drive a little back and forth, constantly correcting its distance to the others.

      This kind of behavior can most certainly be found in nature; just think of a shoal of fish or a swarm of mosquitoes. Although this might not be 'intelligent' behavior, it is the behavior of something living.

      When putting vehicles of type 2a together, the robots will back away from one another, always driving in the direction that leads farthest away from the rest of the group. In the end, every robot will have situated itself in isolation from the others.

      All of the above implicitly assumes that the governing algorithms work as intended and do not run afoul of the inherent discretization of sensor values. In the real world, for instance, the 2a robots would probably not each find a corner of their own, but would at least show a willingness to do so.

    • Vehicles with two threads of control.

    • As long as we use only one thread, with the control for both motors in that thread, both motors will always get their commands at the same time. When using one thread per sensor and/or motor, it becomes the scheduler's responsibility to ensure that every module of the vehicle gets the CPU cycles it needs to react to the environment, and the modules work disjointly from one another. This disjointness is a very nice feature for more complex systems, where we would like to be sure that one module cannot influence another module's reactions if they share no sensors or actuators.
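      The one-thread-per-module idea can be sketched roughly as follows. This is not our actual program; the Sensor/Motor interfaces and the 20 ms loop period are assumptions made for illustration:

```java
// Sketch: one control thread per sensor/motor pair, so modules run
// disjointly and the scheduler interleaves their sense-act loops.
// Sensor, Motor and the loop period are illustrative assumptions.
public class TwoThreads {
    interface Sensor { int read(); }
    interface Motor  { void setPower(int p); }

    /** One reaction step: couple a sensor reading directly to a motor. */
    static void step(Sensor s, Motor m) {
        m.setPower(s.read());
    }

    /** A thread running its own sense-act loop, independent of other modules. */
    static Thread controlLoop(final Sensor s, final Motor m) {
        return new Thread() {
            public void run() {
                while (!isInterrupted()) {
                    step(s, m);                  // react to the environment
                    try { Thread.sleep(20); }    // let the scheduler run others
                    catch (InterruptedException e) { return; }
                }
            }
        };
    }
}
```

      Each started loop then only touches its own sensor and motor, which is exactly the disjointness discussed above.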

    3.3. Adaptive robot behavior

    Among the proposed experiments, the one investigating how to adapt dynamic bounds was the most interesting. The other candidates, making the program threaded or trying sound sensors, are mostly interesting for the discussions they give rise to.

    We tried to make a ``sliding window'' of values that should be interpreted as light and dark. This sliding window is parametrized by a center point, which itself slides, and a radius around it. Values are expected to lie within centerpoint ± radius, and the LightSensor class's own normalization is used with this information.
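    The windowed normalization itself can be sketched as a clamped linear mapping. This is only a sketch of the idea; the method name and the 0..100 output range are our assumptions, not the LightSensor class's actual interface:

```java
// Sketch: map a raw readout into 0..100 relative to centerpoint ± radius.
// Name and output range are illustrative assumptions.
public class SlidingWindow {
    static int normalize(int raw, int center, int radius) {
        int pct = (raw - (center - radius)) * 100 / (2 * radius);
        return Math.max(0, Math.min(100, pct));     // clamp to 0..100
    }
}
```

    A readout at the center point maps to 50, the window edges map to 0 and 100, and anything outside the window is clamped.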

    The centerpoint is a running average of the values collected. For this calculation, we refer once again to Tom Dean's chapter 5. The value is calculated and used roughly as:

    public static final double BETA = 0.1;

    /* ... */

    /* Use the *last* value of the average: */
    average += BETA * ((read_left_norm + read_right_norm) / 2 - average);

    The average is thus incremented by BETA times the difference between the currently read value (note the inline averaging of the two sensors) and the previous average, in accordance with Dean's approach.

    The value of BETA is the tricky part---the rest is just inserting the formula---because its value specifies how much old values count in the re-calculation. If BETA is large, the running average reflects only the most recent values; for smaller values, more of the past is considered in the present---so to speak.

    The issue is rounding. The value ``average'' is of an integral type, and at each step of the calculation the intermediate result is discretized to an integer. This makes very small values of BETA infeasible, since the running average cannot reach the expected average value. (E.g., even after a thousand consecutive readouts of values around 350, the running average would stick at 180-something.)
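    One algorithmic patch is to keep the running average in fixed-point form, so small BETA values still make progress. A sketch under our own assumptions (the scale factor and this particular BETA are choices for illustration, not values from our program):

```java
// Sketch: fixed-point running average (scaled by 1000), so that even a
// small BETA produces non-zero increments despite integer storage.
// SCALE and BETA here are illustrative choices.
public class RunningAverage {
    static final int SCALE = 1000;       // fixed-point scale factor
    static final double BETA = 0.001;    // feasible now, despite integer storage
    static int scaledAvg = 0;            // average * SCALE

    static void update(int reading) {
        scaledAvg += (int) Math.round(BETA * ((long) reading * SCALE - scaledAvg));
    }

    static int average() { return scaledAvg / SCALE; }
}
```

    With plain integer storage, BETA = 0.001 times a difference of 350 truncates to zero and the average never moves; with the scaled accumulator it converges to the true value.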

    Add to this concern (which, it should be noted, can be patched up algorithmically) the wish to keep a tremendous number of past values relevant to the current one: if the robot stands in a room where the lights are switched off, it should not willingly and profusely adjust its notion of ``dark'' to fit its surroundings and start moving towards, e.g., a standby LED on some device. But it should be able to adjust its notion of ``dark'' over the span of a day, or over the span of its assignment.

    BETA == 0.1, however, was reasonable, and the robot did show signs of adapting its reaction to its sliding window: when kept blindfolded for some seconds, it would react much more to the LED cone than it would without the prior blindfolding.

    4. Conclusion

    This week we tried to produce somewhat complex (and random-looking) behavior with very simple connections and control programs. We built and tested Braitenberg vehicles, which led to some interesting discussions about what his simple connections can amount to in a robot's ability to react to its environment, and whether or not intelligence is needed to produce life-like behavior. Although we didn't build and test all the robots in the lesson notes, we did discuss what their results would have been.

    We discussed the capabilities of a robot that, in the dark, would turn around to find a light source and then drive toward it. This would be a very simple system to build and program, yet it would display some complex movement, because its reactions are based on a dynamic environment. Unfortunately, there wasn't time to implement the system.

    Thursday, 9 October 2008

    NXT Programming, Lesson 5 continued

    Date: 10 10 2008
    Duration of activity: three hours
    Group members participating: all group members

    1. The Goal

    The goal of this lab session is to continue the work on the robot with mounted light sensor following the line. The general goal, of course, is to make the robot go around the track as fast as possible and to stop at the finish blue square.

    2. The Plan

    • Efficiently distinguish the color blue

    • Stop at the blue goal zone

    • Traverse the track faster

    3. The Results

    3.1 The problem with blue

    After making many runs around the track, the need for calibration became obvious. As there are many windows and glass doors in the room where the car track is, every change in lighting changes the readings of the sensor. We learned that the hard way. Using hardwired values instead of run-time calibration saved time, but it made things more complicated.

    There were many discussions of how exactly to distinguish the color blue: how to set intervals, or how to choose thresholds. The changing ambient lighting didn't make the task easier.

    We have a plot which shows exactly what is going on with the sensor while driving around the track:

    The blue line in the plot indicates where the 'spectrum' of the color blue lies. The values that the sensor gives do not change instantly from 350 to 500; the transition is smooth (note the many data points between a black and a white readout). The color blue can therefore be 'seen' many times while driving around the track. This fact is crucial for distinguishing genuine blue.

    3.2 The solution for blue

    Given the above experiences, it is clear that a simple I-saw-blue-so-I'll-stop solution will cause the robot to stop too early. The robot must see a number of readouts in the spectrum of blue before it stops. That is not to say that the robot can't react to seeing blue immediately, but the stopping action should be postponed.

    We simply devised a counting algorithm that counts the number of sequential blue readouts it has seen. This is the simpler of the approaches to distinguishing blue, given the hardware that Thomas is composed of.

    The other---perhaps better---method would be to keep a number of readouts and let the number of blue readouts in the last N milliseconds be the deciding factor. Such an algorithm may be as simple as: decrement numblues once for every readout (but never below zero), and increment it by three for every blue readout seen. If numblues ends up above 20, a lot of blue has been seen in the current time window, so the robot must be atop something blue.
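    This leaky-counter idea can be sketched as follows. The numbers (decay of one, increment of three, threshold of 20) follow the text above; the class and method names are ours:

```java
// Sketch of the leaky-counter blue detector described above;
// class and method names are illustrative assumptions.
public class BlueDetector {
    static final int THRESHOLD = 20;
    static int numBlues = 0;

    /** Feed one readout; returns true once enough recent readouts were blue. */
    static boolean update(boolean sawBlue) {
        numBlues = Math.max(0, numBlues - 1);  // decrement on every readout
        if (sawBlue) numBlues += 3;            // blue readouts add three
        return numBlues > THRESHOLD;
    }
}
```

    Each blue readout nets +2, each non-blue readout drains the counter by 1, so only a dense run of blue readouts pushes the counter past the threshold.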

    Note the difference between counting the discrete number of blue readouts and keeping a running average of the readouts themselves. The latter will cross from white to black with each of Thomas's twitches, and in the process will cross over the blue value.

    3.3 More abstract solutions for blue

    Given that the sensor reads out a greyscale value, a three-color transparent layer could be mounted in front of the sensor, and each measurement would correspond to the R x G x B tuple obtained from three readouts, each taken behind its own color of the transparent layer. This exploits the fact that blue light passes relatively unhindered through a transparent blue layer, whereas the other colors are---again, relatively---absorbed by the medium (and likewise for red and green).

    This would make it easy to distinguish blue, by comparing that channel to the mean of the others.
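    The channel comparison itself is a one-liner. A minimal sketch, where the margin parameter is our own assumption for rejecting near-grey tuples:

```java
// Sketch: classify a reconstructed R x G x B tuple as blue when the blue
// channel clearly exceeds the mean of the other two. The margin is an
// illustrative assumption.
public class ColorCheck {
    static boolean isBlue(int r, int g, int b, int margin) {
        return b > (r + g) / 2 + margin;
    }
}
```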

    Of course, this would require a faster-working sensor, a motor to spin a disc and, most importantly, that the robot stays still over a point of measurement for the duration of the measurement. So it wouldn't be useful for the application at hand, which---to a much larger extent---depends on speed of measurement and reaction rather than on precision of readouts. But the method could find application in, e.g., a color scanner based on LEGO.

    4. Conclusion

    Distinguishing blue is easier said than done. It requires that one averages a bunch of readouts and hopes that the blue phenomenon one is trying to measure holds for that bunch. If that is not an option, other methods must be used. Digital cameras aren't that expensive to mount, either.

    We've had a lot of trouble programming the robot, work which we haven't been able to parallelize. Also, we've stayed the course with a case-wise bang-bang control program, but we've made preparations for converting it into another style of control program. We have, however, been hindered by latencies in reaction to motor commands, and doing PID control with a mechanical/electrical/firmware setup that is doomed to overshoot every time---let's just say it didn't catch our interest.