Monday 26 January 2009

Overview of the blog / End-thoughts

This post


Since this blog (or, more precisely, all its posts pertaining to the end project) is to be used for evaluation, this blog post will serve to provide an overview of the postings.

The group members are Renata Norbutaite, Anders Dannesboe (a.k.a. prox), Anders Breindahl (a.k.a. skrewz) and of course Thomas (which is the name of the robot).

General structure


We've settled upon a strictly chronological blog, allowing for hindsight updates. Thus, all of the postings pertain to a particular moment in time---the date of the meeting the blog post is for---and to no particular subject. Of course, there have been subjects of the day, and this gets reflected in the postings.

To read through this blog, start with these introduction points, read the blog in chronological order (start with Part I and work your way upwards), and end up with the conclusion in this post. Use the chronological overview in this post to correlate actions to points in time (because wading through the posts themselves will probably be too tiresome).

Focal points


This blog describes the noteworthy efforts that we've gone through. Thus, un-noteworthy efforts are left out. In every post, a goal-plan-results approach has been attempted.

We've taken a far-from-code approach to documenting this effort, because our considerations pertain to the concepts rather than the code they're implemented in, with few exceptions.

Code availability


The code is available in its entirety at the following locations:

http://daimi.au.dk/~skrewz/thomas-src
http://daimi.au.dk/~skrewz/thomas-src.tar.bz2
(you may, in the future, need to use ``cs'' instead of ``daimi''),

and (if that's unavailable!) by contacting skrewz. No specific licence has been settled upon, so the code is public but all rights reserved---contact us if that is inappropriate for you.

Brief chronological overview



This walks through the postings chronologically and explains each post briefly. Note that, since the blog is top-posted (newest first), so is this walkthrough.


  • End-project. PART XV:
    This posting describes what happened around the time of the exam.

  • 20090121: End-project. PART XIV:
    The robot was verified to be able to write its own name, and the rest of the alphabet was implemented. Meanwhile, a Bluetooth effort was attempted.

  • 20090120: End-project. PART XIII:
    During a long day, the reproducibility of drawings, the writing of the robot's name and various shapes were all tested. A PC-side driver (a replacement for the GradientInterpreter layer and a driver class for it) was created.

  • 20090119: End-project. PART XII:
    Due to the theft of our battery pack, a bit of blind coding was done. However, the effort became mostly physical-construction-oriented.

  • 20090117: End-project. PART XI:
    Refinements to the NavigationLayer motivate extensive testing of motor behaviour. It turns out to behave linearly. Also, the first really nicely-done drawings of triangles (which have a sharp corner) appear.

  • 20090116: End-project. PART X:
    First actual drawings (of circles, no less) start to appear. The NavigationLayer-implementation turns out to be too low-tech.

  • 20090114: End-project. PART IX:
    Experiments with the RCX-style touch sensors were conducted very thoroughly, and data plots were produced, all in order to defend the conditions assumed in the program (which turned out to be very easy).

  • 20090113: End-project. PART VIII:
    The GradientInterpreter layer is brought into play, and the lower layers are still being fiddled into place.

  • 20090112: End-project. PART VII:
    Further refinements on the troublesome MotorLayer, primarily related to touch sensors.

  • 20090108: End-project. PART VI:
    A group member becomes ill, and only the two bottommost layers are implemented, tested and presented. However, the coding effort is picking up speed.

  • 20090106: End-project. PART V:
    The architecture is settled upon to the level of defining Java interfaces, and programming homework is delegated.

  • 20081212: End-project. PART IV:
    Still (re-)building the robot, but architecture discussions are touched upon: The layers are named, and terminology is established.

  • 20081209: End-project. PART III:
    The touch sensors are settled upon and installed. Generally, still in the building-the-robot-phase.

  • 20081205: End-project. PART II:
    It becomes more evident to the group that an X-Y coordinator is a complex build. We manage to start building the Z-axis, start some basic motor control, and basically finish the X-axis.

  • 20081202: End-project. PART I:
    We discuss project headings and document our prototype's progress.

  • 20081128: Initial Description of End Course Project:
    In this post (which has seen approval) we discuss potential projects for the end course project.



Conclusion and final thoughts



This is the long conclusion and wrapping-up thoughts section of our project. We've allowed ourselves to be a little verbose for a conclusion, in order to pick up on all the self-evaluation that the chronological format doesn't allow for.

Reflection


At the very beginning of this project we had a chance to describe our expectations for the final result. Looking back at that description, we can say that what we did is very close to what we were expecting. At this particular moment, we have a robot which is able to navigate in an X-Y coordinate system, and it is even able to move a pen in an up/down motion.

The purpose of this robot is to be able to draw something, and indeed it does that. So far, the robot can draw whatever is given by a description of lines and arcs, but the software extends beautifully to arbitrary t → (x,y,z) parametric functions of elapsed time in the movement.
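For instance, with t running from 0 to 1, a straight line from p0 to p1 is simply t → (1−t)·p0 + t·p1, and an arc of radius r around a centre c is t → c + r·(cos θ(t), sin θ(t)), in both cases with the z-coordinate held at the pen-down height.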

Structure


For the robot to be able to do all this, we use three motors (one each for X-axis, Y-axis and Z-axis handling) and six touch sensors in total, for safety and calibration reasons. Four of them are related to the X-axis (two at the min border, two at the max border) and two to the Y-axis (one at each of the min and max borders).

These will stop the motors when a relevant part touches its min or max border. We considered putting a touch sensor on the Z-axis as well, but that eventually seemed more of a burden to the structure than an actual help.
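As a minimal sketch of that safety idea (simplified to one switch per border, assuming the leJOS NXJ TouchSensor/Motor API, and with made-up port and motor assignments; the actual lower-layer code differs in its details), the border handling amounts to roughly this:

  import lejos.nxt.Motor;
  import lejos.nxt.SensorPort;
  import lejos.nxt.TouchSensor;

  // Illustrative sketch only: halt the X-axis motor as soon as either of
  // its border switches is pressed. Ports and motor are made-up choices.
  public class XAxisGuard {
      private final TouchSensor minBorder = new TouchSensor(SensorPort.S1);
      private final TouchSensor maxBorder = new TouchSensor(SensorPort.S2);

      /** Poll the border switches and stop the X motor if either is hit. */
      public void poll() {
          if (minBorder.isPressed() || maxBorder.isPressed()) {
              Motor.A.stop();
          }
      }
  }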

Theory


When we talk about the robot from a theoretical point of view, we can think about the idea of feedback control. At every moment during drawing, we know exactly where we are and where we are supposed to be. The robot takes these two things into account and tries to bring the system to the goal state and keep it there. We actually considered whether more sophisticated control, such as a proportional-derivative controller or similar, would fit here in order to increase the accuracy, but we never got around to implementing or otherwise trying it. The current linear feedback algorithms already serve us well enough.
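As a sketch of what such linear feedback can look like (the gain and names here are illustrative, not our actual code), the correction applied to an axis is simply proportional to its positional error:

  // Illustrative sketch: a purely proportional controller. The correction
  // applied to an axis motor grows linearly with the positional error.
  public class ProportionalController {
      private static final double GAIN = 2.0; // made-up proportional gain

      /**
       * @param target where the axis is supposed to be (in tacho counts)
       * @param actual where the axis currently is (in tacho counts)
       * @return a correction proportional to the error
       */
      public int correction(int target, int actual) {
          int error = target - actual;
          return (int) Math.round(GAIN * error);
      }
  }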

From the point of view of the robot's overall behavior, we can think in terms of sequential control and reactive control. The behavior of the robot can be described as sequential control whenever it follows the drawing instructions: the drawing is all about following a predetermined sequence of lines and arcs. The robot does not think (as of writing) about which way is best to do that, it just follows the sequence. There is a sense of reactive control whenever the robot is calibrating itself. That is done by going to the min border (which is detected with the help of the touch sensors), resetting the tacho counts there, and then going to the max border. So, as said, it is not actual stimulus-response behavior, but a sense of it, because at this stage the robot depends on the touch sensors.
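A minimal sketch of that calibration idea for a single axis, again assuming the leJOS NXJ API and with made-up ports and motor assignments (the real code lives in the lower layers and differs in detail):

  import lejos.nxt.Motor;
  import lejos.nxt.SensorPort;
  import lejos.nxt.TouchSensor;

  // Illustrative sketch: drive to the min border, zero the tacho count
  // there, then drive to the max border and read off the axis length.
  public class YAxisCalibration {
      private final TouchSensor minBorder = new TouchSensor(SensorPort.S3);
      private final TouchSensor maxBorder = new TouchSensor(SensorPort.S4);

      /** @return the axis length in tacho counts, measured border to border. */
      public int calibrate() {
          Motor.B.backward();                     // head for the min border
          while (!minBorder.isPressed()) { Thread.yield(); }
          Motor.B.stop();
          Motor.B.resetTachoCount();              // zero is now the min border

          Motor.B.forward();                      // head for the max border
          while (!maxBorder.isPressed()) { Thread.yield(); }
          Motor.B.stop();
          return Motor.B.getTachoCount();         // axis length in tacho counts
      }
  }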

Achievements


We've succeeded with regard to our project description, as argued above. On top of that, we even went for some of our bells-and-whistles extensions, although we couldn't make them work (this pertains to Bluetooth).

So, from a project-to-product point of view, we've succeeded. But we achieved features beyond the description, too. We have a sound software architecture, and we keep to it: the layered architecture in our software is actually a strictly layered architecture (i.e. no layer communicates with any but its immediate neighbours in the layer stack). And we demonstrate the usefulness of a strong software architecture in our ability to cleanly exchange the GradientInterpreter layer with an emulator and run the whole software suite on a PC, outputting to GNUplot.
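To illustrate the shape of that swap (the interface and class below are simplified stand-ins of our own invention, not the project's actual GradientInterpreter types), the upper layers only ever talk to one interface, and a PC-side implementation can log points in GNUplot's plain "x y z" data format instead of driving motors:

  import java.io.FileWriter;
  import java.io.IOException;

  // Simplified stand-in for the layer boundary: upper layers call this,
  // and either the robot-side layer or a PC-side emulator sits behind it.
  interface PenPositionSink {
      void moveTo(double x, double y, double z);
  }

  // PC-side replacement: instead of driving motors, append each visited
  // point to a file in GNUplot's plain "x y z" data format.
  class GnuplotEmulator implements PenPositionSink {
      private final FileWriter out;

      GnuplotEmulator(String filename) throws IOException {
          out = new FileWriter(filename);
      }

      public void moveTo(double x, double y, double z) {
          try {
              out.write(x + " " + y + " " + z + "\n");
              out.flush();
          } catch (IOException e) {
              throw new RuntimeException(e);
          }
      }
  }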

The separation of concerns that arises from strictly defined boundaries of responsibility also makes for a really neat way of expressing movements in a high-level language. One simply has to provide a gradientlayer.GradientGiver implementation, and the robot will then be able to trace that.
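As a purely hypothetical illustration (we do not reproduce the actual gradientlayer.GradientGiver signatures here), a circle boils down to nothing more than a t → (x,y,z) supplier along these lines:

  // Hypothetical illustration only; the real gradientlayer.GradientGiver
  // interface has its own method signatures. The point is that a drawing
  // reduces to a function from elapsed time to a pen position.
  public class CircleGiver {
      private final double cx, cy, r;

      public CircleGiver(double cx, double cy, double r) {
          this.cx = cx; this.cy = cy; this.r = r;
      }

      /** Pen position on the circle at normalised time t in [0,1]. */
      public double[] positionAt(double t) {
          double angle = 2 * Math.PI * t;
          return new double[] {
              cx + r * Math.cos(angle),
              cy + r * Math.sin(angle),
              0.0 // pen kept down throughout
          };
      }
  }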

Needless to say, our achieved typical error magnitude of 0.5 millimeters (and maximal error magnitude of strictly below 2 millimeters under normal working conditions) is also something to be proud of, and our physical setup seems able to keep up with the software's precision, in being able to do almost-exact multiple-draw experiments.
