LEGO LAB blog

Monday 26 January 2009

Overview of the blog / End-thoughts

This post


Since this blog (or, more precisely, all its posts pertaining to the end project) is to be used for evaluation, this blog post will serve to provide an overview of the postings.

The group members are Renata Norbutaite, Anders Dannesboe (a.k.a. prox), Anders Breindahl (a.k.a. skrewz) and of course Thomas (which is the name of the robot).

General structure


We've settled upon a strictly chronological blog, with allowance for hindsight updates. Thus, all of the postings pertain to a particular moment in time---the date of the meeting the blog post is for---and to no particular subject. Of course, there have been subjects of the day, and this gets reflected in the postings.

To read through this blog, start with these introduction points, read the blog in its chronological order (start with Part I and work your way upwards), and end with the conclusion in this post. Use the chronological overview in this post to correlate actions to points in time (because wading through the posts themselves would probably be too tiresome).

Focal points


This blog describes the noteworthy efforts that we've gone through; un-noteworthy efforts are left out. In all posts, a goal-plan-results approach has been attempted.

We've taken a far-from-code approach to documenting this effort, because our considerations pertain to the concepts, and not to the code they're implemented in, with few exceptions.

Code availability


The code is available in its entirety at the following locations:

http://daimi.au.dk/~skrewz/thomas-src
http://daimi.au.dk/~skrewz/thomas-src.tar.bz2
(you may, in the future, need to use ``cs'' instead of ``daimi''),

and (if that's unavailable!) by contacting skrewz. No specific licence has been settled upon, which means public-but-all-rights-reserved---contact us if that is inappropriate for you.

Brief chronological overview



This walks through the postings in chronological order and explains each post briefly. Note that, since the blog is top-posted, so is this walkthrough.


  • End-project. PART XV:
    This posting describes what happened at and around the exam.

  • 20090121: End-project. PART XIV:
    It was ensured that the robot can write its own name, and the rest of the alphabet was implemented. Meanwhile, a Bluetooth effort was tried.

  • 20090120: End-project. PART XIII:
    In a long day, the reproducibility of drawings, the writing of the robot's name, and various shapes were all tested. A PC-side driver (a replacement of the GradientInterpreter layer and a driver class for it) was created.

  • 20090119: End-project. PART XII:
    Due to thievery of our battery pack, a bit of blind coding was done. However, the effort became mostly physical-construction-oriented.

  • 20090117: End-project. PART XI:
    Refinements to the NavigationLayer motivate extensive testing of motor behaviour. It turns out to behave linearly. Also, the first really nicely-done drawings of triangles (which have a sharp corner) appear.

  • 20090116: End-project. PART X:
    First actual drawings (of circles, no less) start to appear. The NavigationLayer-implementation turns out to be too low-tech.

  • 20090114: End-project. PART IX:
    Experiments were conducted very thoroughly with the RCX-style touch sensors, and data plots were produced---all in order to defend program conditions (which turned out to be very easy).

  • 20090113: End-project. PART VIII:
    The GradientInterpreter layer is brought into play, and the lower layers are still being fiddled into place.

  • 20090112: End-project. PART VII:
    Further refinements on the troublesome MotorLayer, primarily related to touch sensors.

  • 20090108: End-project. PART VI:
    A group member becomes ill, and only the two bottommost layers are implemented, tested and presented. However, the coding effort is picking up speed.

  • 20090106: End-project. PART V:
    The architecture is settled upon to the level of defining Java interfaces, and programming homework is delegated.

  • 20081212: End-project. PART IV:
    Still (re-)building the robot, but architecture discussions are touched upon: The layers are named, and terminology is established.

  • 20081209: End-project. PART III:
    The touch sensors are settled upon and installed. Generally, still in the building-the-robot-phase.

  • 20081205: End-project. PART II:
    It becomes more evident to the group that an X-Y-coordinator is a complex build. We manage to start building the Z-axis and start some basic motor controlling, and basically finish the X-axis.

  • 20081202: End-project. PART I:
    We discuss project headings and document our prototype's progress.

  • 20081128: Initial Description of End Course Project:
    In this post (which has seen approval) we discuss potential projects for the end course project.



Conclusion and final thoughts



This is the long conclusion and wrapping-up-thoughts section of our project. We've allowed ourselves to be a little verbose for a conclusion, in order to pick up on all the self-evaluation that the chronological format doesn't allow.

Reflection


At the very beginning of this project we had a chance to describe our expectations for the final result. So, when we look back at that description, we can say that what we did is close to exactly what we were expecting. At this particular moment, we have a robot which is able to navigate in an X-Y coordinate system and, moreover, is able to move a pen in an up/down motion.

The purpose of this robot is to be able to draw something, and indeed it does that. So far, the robot can draw whatever is given as a description of lines and arcs, but the software extends beautifully to arbitrary t → (x,y,z) parametric functions of elapsed time in the movement.

Structure


For the robot to be able to do all this, we use three motors (X-axis, Y-axis and Z-axis handling) and six touch sensors in total, for safety and calibration reasons. Four of them are related to the X-axis (two at the min-border, two at the max-border) and two to the Y-axis (one at each of the min and max borders).

These will stop the motors when a relevant part touches its min or max border. We considered putting a touch sensor onto the Z-axis as well, but that eventually seemed more of a burden to the structure than an actual help.

Theory


When we talk about the robot from a theoretical point of view, we can think about the idea of feedback control. At every moment during a drawing action, we are aware of where exactly we are and where we are supposed to be. The robot takes these two things into account and tries to bring the system to a goal state and keep it there. We actually considered whether more sophisticated control, such as proportional-derivative or similar, would fit here in order to increase the accuracy, but we never got around to implementing or otherwise trying it. The current linear feedback algorithms serve us well enough already.
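
In code terms, the loop boils down to something like the following sketch. This is a minimal illustration of the idea only: all helper names are stand-ins, and the real logic is spread across the NavigationLayer and GradientInterpreter implementations described in the postings below.

// Minimal sketch of the linear feedback idea; all names here are stand-ins.
void maintainMovement(long startTime) {
    while (movementInProgress()) {
        // Where the schedule says we should be, as a function of elapsed time:
        float[] supposed = scheduledPositionAt(System.currentTimeMillis() - startTime);
        // Where the tacho counters say we actually are:
        float[] actual = getCoordinates();
        float[] error = new float[3];
        for (int i = 0; i < 3; i++)
            error[i] = supposed[i] - actual[i];
        // Map the error vector linearly to (signed) motor powers:
        setMotorPowers(motorPowerFromDistance(error));
    }
}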

From the point of view of the robot's overall behavior, we can think in terms of sequential control and reactive control. The behavior of the robot can be described as sequential control whenever it follows the drawing instructions. The drawing is all about following a predetermined sequence of lines and arcs. The robot does not think (as of writing) about which way is the best to do that, it just follows the sequence. We have a sense of reactive control whenever the robot is calibrating itself. That is done by going to the min (which is determined with the help of the touch sensors), counting all tachos all over again, and going to the max. So, as was said, it is not actual stimulus-response behavior, but a sense of it, because at this stage the robot is dependent on the touch sensors.

Achievements


We've succeeded with regards to our project description, as argued above. On top of that, we even went for some of our bells-and-whistles extensions, although we couldn't make them work (pertaining to Bluetooth, here).

So, from a project-to-product point of view, we've succeeded. But we achieved features beyond the description, too. We have a sound software architecture, and we keep to it: The layered architecture in our software is actually a strictly layered architecture (i.e. no layer communicates with any but its immediate neighbours in the layer stack). And we demonstrate the usefulness of a strong software architecture in our ability to cleanly exchange the GradientInterpreter layer with an emulator, and run the whole software suite on a PC, outputting to GNUplot.

The separation of concerns that arises from strictly defined barriers of responsibility also makes for a really neat way of expressing movements in a high-level language. One simply has to provide a gradientlayer.GradientGiver implementation, and the robot will then be able to trace it.
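
For illustration, here is what tracing a circle amounts to in the spirit of a GradientGiver. We don't reproduce the real interface here; a GradientGiver essentially boils down to such a t → (x,y,z) function over the duration of the movement, and the names below are our own:

// A circle as a t -> (x,y,z) parametric function of elapsed time (ms).
// 'centre' and 'radius' are in mm; z = 0.0f is assumed to mean pen-down.
float[] circlePositionAt(long t, float[] centre, float radius, long duration) {
    double angle = 2.0 * Math.PI * (double) t / (double) duration;
    return new float[] {
        centre[0] + radius * (float) Math.cos(angle),
        centre[1] + radius * (float) Math.sin(angle),
        0.0f // pen kept down for the whole trace
    };
}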

Needless to say, our achieved typical error magnitude of 0.5 millimeters (and maximal error magnitude of strictly below 2 millimeters during normal working conditions) is also something to be proud of, and our physical setup seems to be able to keep up with the software's precision in being able to do almost-exact multiple-draw-experiments.

Wednesday 21 January 2009

End-project. PART XV



At the exam/presentation





Right before our exam we tested our robot by having it write a group member's name: ``RENATA''.


During the exam we demonstrated how the robot is able to draw a square with rounded corners and the word composition ``EIGHT GRADE'' (which is a misspelling, but it's not the first time such a glitch has happened). The reason for the string ``eight(h) grade'' is that at this point in the exam, a Danish primary school's eighth graders were coming to visit. Thomas entertained and wrote dedications---although, at Thomas's pace, only a single one was written, dedicated to ``Jonas'', and a more general one to ``Erritsø Centralskole''.



After the exam


After the exam, further pictures were taken.







Thomas, in his final looks.

Thomas is now able to introduce himself.


Some close-up shots, in case anybody is wondering how we managed to clamp it all together:






The pen holder mechanism, which both centers and resists twitching of the pen.

The carriage, as seen from the side. Extra handles have been added to ease adjustment.


It is noticeable that the letters vary in their precision. For example, the A shape is a rather smooth diagonal line, whereas the Y shape doesn't cut a sharp corner. Much of this has to do with the fact that the working area is curved and bends under pressure. This can also be seen from the way the robot lifts the pen off and puts it down again, where it---at times---leaves a small ink trace of the vertical movement in the pen liftoff.

End-project. PART XIV



Date: 21 01 2009
Duration of activity: ?
Group members participating: all group members



1. The Goal



The goal of this lab session is to make final adjustments to the structure and code, and to try to write the whole alphabet correctly.

2. The Plan




  • Correctly write the robot's name 'THOMAS'.

  • Practice writing the whole alphabet.

  • Work further on the idea of string passing via a Bluetooth connection.



3. The Results



3.1. Writing the name



Last time we (PC-side-)plotted ``THOMAS'', and the plot showed that this word is written correctly. This time we tried actually writing it. It still needs some adjustments for equal alignment of the letters, and some adjustments so the pencil is placed on the paper at the right time.



After making the robot write its name in a correct fashion, it was time to see how precise the letters are each time, which can be done using the technique of multiple overlapping drawings.



3.2. Writing the alphabet



We got the idea that we should make the robot able to draw general English words. Implementing this idea, all English alphabet letters and a 'dash' symbol were (excruciatingly!) defined.

Before trying each letter with actual pen-on-paper, an evaluation was done using the GradientInterpreterPCSideDumper, which can be used instead of the actual GradientInterpreter implementation when driven with the PCDriver. The alphabet looks like this (note that the Z character couldn't fit, and that the algorithm left it out):




Here are some results of the tests made over all these letters.


3.3. Handler layer


The Handler layer, currently filled by a BogusHandler and various other not-similarly-named classes, is the user of the layered architecture (which is also the reason there's no interface for it, nor any naming convention). It has been extended with an:

InteractiveHandler

  • write(String string) The parameter is a string to be put to print. Returns after the movement has completed.

  • move(float[] change) The parameter 'change' is the signed change in position (in mm) to move. Returns after the movement has completed.



ShapesCatalog
The definitions of shapes began to become complicated, and the raw definitions were moved into their own class, the ShapesCatalog. Most shapes are parametrized by an offset-vector, at which they'll set their origo (a guessed sketch of one such method follows the list below).


  • drawAShape (float[] at) draws a letter 'A'.

  • etc.

  • drawZShape (float[] at) draws a letter 'Z'.

  • drawDashShape (float[] at) draws '-' shape.

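As a guessed illustration of the offset-vector idea (the constants and the interpreter field are stand-ins of ours, and the pen-up travel to the starting point is omitted), a dash boils down to one linear movement:

// Guesswork sketch: draw '-' at offset 'at', using the
// enqueueLinearMovement() helper described in PART XII.
public void drawDashShape(float[] at) {
    float y = at[1] + LETTER_HEIGHT / 2.0f;          // dash sits at half letter height
    float[] from = { at[0],                y, PEN_DOWN_Z };
    float[] to   = { at[0] + LETTER_WIDTH, y, PEN_DOWN_Z };
    interpreter.enqueueLinearMovement("dash", from, to);
}
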


3.4. LetterHelper


The LetterHelper handles issues specific to letter shapes. Those of course include that of easily printing them in sequence, and to have an easy interface for entering new strings for printing.


  • printString(String what, float[] at) prints a string at a given offset, compensating for necessary line breaks and out-of-space conditions. Uses printChar() as a delegate (see the sketch after this list).

  • printChar(char what, float[] at) prints the given character as it's drawn by the appropriate draw?Shape() method of ShapesCatalog.

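A sketch of how the delegation could look, assuming fixed-width letters (LETTER_WIDTH, LETTER_HEIGHT, LINE_SPACING and workAreaWidth() are invented names of ours):

// Sketch: print a string by delegating each character to printChar(),
// wrapping to a new line when the work area runs out of width.
public void printString(String what, float[] at) {
    float[] cursor = { at[0], at[1], at[2] };
    for (int i = 0; i < what.length(); i++) {
        if (cursor[0] + LETTER_WIDTH > workAreaWidth()) {
            cursor[0] = at[0];                          // back to left margin
            cursor[1] -= LETTER_HEIGHT + LINE_SPACING;  // one line down
        }
        printChar(what.charAt(i), cursor);
        cursor[0] += LETTER_WIDTH;                      // advance to next letter slot
    }
}
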



3.5. Remote controlling



An effort was made to remote-control Thomas, and in particular, to have Thomas draw strings that are entered over a Bluetooth connection. But this turned out to be much more troublesome than expected. (Note, though, that this was declared to be a bells-and-whistles extension to our project.)

BTCmdSender

  • BTCmdSender(String nxtname, String addr, String cmd) a class constructor dealing with connecting to an NXT via Bluetooth, sending bits, and closing the connection. nxtname and addr are for communicating with the right NXT, and cmd is one String containing the command with parameters to be sent to the NXT. The class will terminate after sending cmd; use multiple instantiations of this class to send multiple commands.



BTCmdReceiver

  • startReceiving() tries to start listening; if it does not succeed, an error message is printed.

  • startListening() starts the loop of listening on the BT line, parsing a command, and listening again.

  • readLine() reads chars on the BT line until '\n', and returns a string version of the line.

  • openConnection() opens all connections needed.

  • closeConnection() closes all connections that have been opened.



3.6. Various drivers



  • BTCmdRecDriver is the receiving end of the Bluetooth communication. Thus, the lejOS-side of things.

  • PCCmdBTDriver is the host side of the Bluetooth communication, which is run on the PC side.



The lejOS source code ships with some examples, two of these being a receiver and a sender of integers via a BT line. Those two examples were the primary inspiration for our BT communication classes. But it turns out that using BT to do anything other than exactly what their examples do is extremely difficult (and time consuming).

There are basically two classes we are interested in:
java.io.DataInputStream
java.io.DataOutputStream

These are the ones containing all the methods for sending and receiving once the connection has been established (and the ones not working like one would like).

The goal for this BT communication is to be able to communicate with the NXT via a simple protocol and be able to execute commands on the NXT through the sent data.

The protocol is textual and line-based, the first line being the command itself and the next lines being the parameters to the command. From this it is clear that every command must have a fixed number of parameters, so that the receiving end knows when to stop trying to receive.
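
As a concrete example, the ``write MY-NAME-IS-THOMAS'' command used just below travels over the line as two lines of text:

write
MY-NAME-IS-THOMAS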

That was the protocol; now for the sending and receiving. According to the API, we should be able to use the method public final void writeChars(String s)[1], which writes the whole string (ending with a '\n') on the BT line, and on the receiving end use public final String readLine()[2] to read a whole line, thereby effectively sending strings between the PC and the NXT.

But as it turns out, this is not fruitful. This just doesn't work. A guess as to why is that we are actually trying to send a java.lang.String object (presumably serialized as a Java object), while what the receiving end is trying to parse is a sequence of chars forming a line.

So after a whole night of trying to get the obvious to work, this method was rejected. Instead we defined our own readLine() method:
private String readLine()
which basically puts chars from the BT line into a buffer until a '\n' is received, and returns this buffer as a String.
On the sending end, each string needing to be sent is converted into a char-array and sent one char at a time.
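
Sketched out, the two ends look roughly as follows ('dis' and 'dos' stand for the established java.io.DataInputStream and java.io.DataOutputStream; sendLine() is our name for the sending counterpart):

// Receiving end: accumulate chars until end-of-line.
private String readLine() throws java.io.IOException {
    StringBuffer buffer = new StringBuffer();
    char c;
    while ((c = dis.readChar()) != '\n')
        buffer.append(c);
    return buffer.toString();
}

// Sending end: one char at a time, terminated by '\n'.
private void sendLine(String line) throws java.io.IOException {
    char[] chars = (line + "\n").toCharArray();
    for (int i = 0; i < chars.length; i++)
        dos.writeChar(chars[i]);
    dos.flush();
}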

This approach worked, somewhat.

As a general observation, the BT functionality seems to work best if nothing else is taking up CPU cycles.

BTCmdSender's BTCmdSender(String nxtname, String addr, String cmd) is supposed to take the cmd-string (let us say cmd.equals("write MY-NAME-IS-THOMAS")), convert it into an array of Strings by splitting on whitespace,

String[] lines = cmd.split(" ");

and send each string as a separate line to the NXT. This will not work. Hard-coding an array of strings into the java file,

String[] lines = {"write","MY-NAME-IS-THOMAS"};

and using this as the command to be sent to the NXT, works.

In the end, we got a somewhat working implementation of ``remote control'', but it was really frustrating to get anything to work as it was supposed to---especially considering that this was some last-minute extras we made at the end of the project.
After some communication with other groups about BT communication, it seems like the only reliable method of communication is not using any of the fancy defined methods in the API, but falling back to public final int readInt()[3], and casting each value to char afterwards.

Update: Since we implemented the BT communication, the lejOS project has updated the API to reflect the newest version of lejOS. And what do you know, major changes have been made to the DataInputStream and DataOutputStream classes: readLine() is deprecated, and instead public final String readUTF() is available.

3.7. Other drivers




  • PCDriver is the PC-side driver for testing ShapesCatalog-calls before trying them on Thomas.

  • LetterDriver is for printing letter strings on Thomas using the LetterHelper.

  • PlotterDriver was for printing various non-letter shapes on Thomas. Currently, it draws the rounded square shape.




4. Conclusion



The best thing to come out of this lab session is that we are now able to make the robot write whatever word is made up of letters from the English alphabet. It is done in a rather precise fashion, even when the same letter is written on top of itself over and over again. What is more, the idea of passing a string via a Bluetooth connection is getting better, although not yet exactly working.

References


  1. http://java.sun.com/j2se/1.5.0/docs/api/java/io/DataOutputStream.html#writeChars(java.lang.String)

  2. http://java.sun.com/j2se/1.5.0/docs/api/java/io/DataInputStream.html#readLine()

  3. http://java.sun.com/j2se/1.5.0/docs/api/java/io/DataInputStream.html#readInt()




End-project. PART XIII



Date: 20 01 2009
Duration of activity: 7 hours + (6 + 8 hours of homework)
Group members participating: all group members (Anders B. + Anders D. of homework)



1. The Goal



The goal of this lab session is to put all our efforts into making the robot draw its name, ``THOMAS''.

2. The Plan




  • Run many tests and adjust code to improve accuracy to its maximum.

  • Try out shape drawing.

  • Define letters for the name.

  • Try out drawing letters.

  • Plot the name 'THOMAS' to see if the definition is right.



3. The Results



3.1. Rounded square



Previously, we had some trouble drawing a square with rounded corners. This may be due to the fact that we didn't define it correctly. The robot actually was trying to draw what it was given:



On the other hand---this figure shows a very different figure than the one whose printing was apparently attempted earlier in the process. It could also have been a bug in the description that snuck in.

3.2. Robot's art



Along the way of testing how to manage and balance all the motor controls and corrections, we got some drawings that can be called robot ``art''.



3.3. Multiple drawing



In order to measure how precisely our robot can draw, we tried to draw the same shapes several times on top of each other. The results we got were surprising in a good way. The arrow points to a line where at least five overlapping lines were drawn (yet one cannot distinguish that many lines, visually).



3.4. Shapes



To look at some shapes, we tested drawing both a triangle and a square with rounded corners a lot (after the definition was fixed, we got the latter shape very precisely).



3.5. Letters



After we had success with shapes, we decided to try drawing letters. This is more of a challenge, as the letters (if we want to write a word) have to be relatively small and are more complicated. These are examples of the letter ``S'' and the letters ``TH'' together. We were intending for the robot to be able to write its name, ``THOMAS''.



3.6. PC-side plotting



Since it seemed possible to write letters pretty nicely, it was time to write a whole name. To be sure that everything is done correctly, and instead of wasting time testing that fact while actually drawing, plotting the descriptions was an easier and more time-saving thing to do. The image below shows this in action.



This is a very interesting prospect: We're able to, to some degree, simulate the robot's actions, since we can have GNUplot plot the data in a three-dimensional coordinate system, and spot the errors in the descriptions (if any) already at this stage.

3.7. Calibration layer



  • reCalibrate() has been extended to give off an annoying (very error-ish) sound if any axis is touching an endpoint when reCalibrate() is run.

  • findMinAndMax() enables using previously recorded calibration data. The data is written through a FileUtil file.



3.8. Navigation layer



  • motorPowerFromFraction (float fraction) minor corrections were made: conditions determining speed sign were deleted.

  • motorPowerFromDistance (float[] distance) the way calculations are made was corrected: An acceleration vector is introduced. It used to be a single common-to-all-axes value, but now the urgency of getting back on track is calculated coordinate-wise. (For completeness, it should be noted that this vector also appeared in the previous quoting of the code).

  • getMaxVelocity() returns the max velocity values from the calibration layer.


The change to using a vector of acceleration factors was a significant one: The error magnitude (the error vector's norm, really) used to hover around 5mm with peaks of up to around 7mm, but when the urgency became adjusted per-axis, this norm started to hover around 0.5mm with peaks of strictly below 2mm. This is perhaps the greatest statement about our achieved precision---a typical error of around half a millimeter is an extreme degree of precision for this project.

Note, however, that this precision is by no means spectacular in general. All environmental effects are under control in an x-y-coordinator, and the movements are known beforehand. If this was an engineering course, we would probably be striving to be 1-5 tacho counts from the ideal value.

3.9. Gradient interpreter layer



  • GradientInterpreterPCSideDumper is the PC-side GradientInterpreter implementation, which does nothing but enable the dumping of data points for plotting by GNUplot (see the sketch after this list).

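The dumping itself is trivial---a sketch of the idea (the method name is ours): one whitespace-separated (x, y, z) triple per line, in a format GNUplot reads directly.

// Sketch: dump one data point per line for GNUplot.
void dumpPoint(java.io.PrintStream out, float[] position) {
    out.println(position[0] + " " + position[1] + " " + position[2]);
}

A dump file produced this way can then be rendered in three dimensions with e.g. splot 'thomas.dat' with lines.
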


3.10. Handler layer



The InteractiveHandler class handles doing ``interactive'' movements with the robot. Thus, it synchronously does operations on its GradientInterpreter, with the purpose of being readily-usable for Bluetooth remote control.


  • move(float[] change) the parameter ``change'' is the signed change in position (in mm) to move. Returns after the movement has completed.



ShapesCatalog.

  • drawTShape (float[] at) draws a letter 'T'.

  • drawHShape (float[] at) draws a letter 'H'.

  • drawOShape (float[] at) draws a letter 'O'.

  • drawMShape (float[] at) draws a letter 'M'.

  • drawAShape (float[] at) draws a letter 'A'.

  • drawSShape (float[] at) draws a letter 'S'.

  • drawTHOMAS () handles the drawing of the whole word (in a crude way that would be improved upon in the LetterHelper).




3.11. Reading from and writing to the flash memory



During development, much of the robot's time was spent recalibrating (i.e. inside CalibrationLayer's reCalibrate()). Every test case from the CalibrationLayer and up had to recalibrate Thomas on boot-up, to be able to distinguish between top and bottom safety breaks, the size of the work area, the maximum velocity of each axis, and the position of the pen.

Observations:

  1. The minimum tacho count is always zero.

  2. The maximum tacho seldom varies greatly, because this would imply a reconstruction of Thomas.

  3. By hitting the bottom-most touch sensor, the number of tachos the carriage must travel to the top is so significant that we can deduce which touch sensors are being pressed in the future.

  4. The size of the work area is calculated from the constant number of tachos per millimeter, and the max tacho count.

  5. The maximum velocity of Thomas does not vary as long as the gearing of the motors isn't changed and he is on a constant power supply.



With all these observations in our hands, we decided to reduce the time Thomas takes to recalibrate by storing enough information from the last calibration to only require a calibration run to make the carriage go to origo. It turns out that to make a complete calibration, Thomas only needs to know the maximum tacho count of each axis and their max velocities. The size of the work area is easily calculated. And as long as we force the carriage to drive downward until the bottom safety-breaks are pressed, we will reset the tacho count to zero and be certain that the robot can distinguish between top-most and bottom-most touch sensors. As soon as (0,0) is reached, Thomas also knows his own position, and can keep on knowing it from tachos per millimeter.

We now know what we need to store.

FileUtil

  • nextLine(FileInputStream stream) returns next line of a file.

  • parseFloatInLine(String line) parses characters in order to read floats.

  • writeString(String filename, String toBeWritten) implements actual string writing to the file.



For reading from and writing to the flash we made the utility class FileUtil.java. This class should contain all commonly used I/O interaction methods, but as of now we have only implemented the specific methods that we needed to solve our problem.

The protocol for parsing information from the flash memory is based on the notion that the flash memory is shared by all programs and browsed with nxjbrowse. Stored information should contain enough context to be human-readable when possible.

From this we implemented a solution where every recalibration run checks if a certain file exists on the flash memory: ``lastCalibration''. Here's an example of how this file looks:

maxTachoNoX = 21218
maxTachoNoY = 15228
maxVelX = 0.0075553244
maxVelY = 0.0069913477

Creation and parsing of this file is simply done by calling writeString(line) per line to be written, and parsing by calling nextLine(stream) and parseFloatInLine(line) to get the stored float values.
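
A sketch of what that amounts to, assuming the FileUtil methods are static and with exception handling omitted (the ordering of lines mirrors the example file above):

// Writing (one call per line of the file):
FileUtil.writeString("lastCalibration", "maxTachoNoX = " + maxTachoNoX);
// ...and so on for the other three values.

// Reading back:
java.io.FileInputStream stream =
    new java.io.FileInputStream(new java.io.File("lastCalibration"));
float maxTachoNoX = FileUtil.parseFloatInLine(FileUtil.nextLine(stream));
float maxTachoNoY = FileUtil.parseFloatInLine(FileUtil.nextLine(stream));
float maxVelX     = FileUtil.parseFloatInLine(FileUtil.nextLine(stream));
float maxVelY     = FileUtil.parseFloatInLine(FileUtil.nextLine(stream));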

Driver

  • testWritingToFlash() tests the ability to write something to the flash.

  • testReadingFromFlash() tests the ability to read from the flash.

  • reCalibrateByResuseSavedValues() runs the calibration with the previously saved file of calibration values.



4. Conclusion



This has been a very successful lab session. We went all the way from Thomas's art to very precise shape drawing. We also defined the letters of the name 'THOMAS', and we tried drawing 'S', 'T', and 'H'. We have also started working on making the robot get to know what to draw using a Bluetooth connection, instead of uploading a new program every time. The time required for recalibration was also significantly reduced, as soon as we were able to read from and write to the flash memory.


Monday 19 January 2009

End-project. PART XII



Date: 19 01 2009
Duration of activity: 5 hours + (2.5 + 2.5 hours of homework)
Group members participating: all group members + (Anders D. and Anders B. of homework)



1. The Goal



The goal of this lab session is to improve the stability of the structure as much as possible. Also, as always, to make further adjustments to the code.

2. The Plan




  • Make some reconstructions to the LEGO robot.

  • Find out a way to work with missing (stolen) parts of the robot.

  • Make adjustments to the code.



3. The Results



3.1. Reconstruction



During this lab session it was a challenge to work with the robot we constructed, as we found some pieces ``borrowed''. A lot of the time was spent reconstructing some parts of the robot. Later, we got the parts we needed and a longer cable to connect the motors to the NXT. We did plenty of testing with regards to Z-axis handling.

The part that handles Z-axis movement was redesigned. In this picture it can be seen that the mechanism that actually moves the part up and down was redesigned to not depend on pressure as before, the pen keeper was changed, and a rather large claw contraption that goes over the carriage was built in order to increase stability.



The overall view of the robot looks like this:



In the following picture one can see the long white cable we tried to use. The interesting thing is that this cable allows handling the motor speed, but it does not transmit tacho count readouts, which signifies that the cable isn't fully connected internally.



3.2. Motor layer



  • setSpeed(MotorPort motor, int speed) each motor gets a predetermined mode: x and y get mode 1, and z gets mode 2. This is due to our build, as there is a need to invert Z's movement direction (which could be fixed in both the X- and Y-case by simply redefining origo).

  • getTachoCount(MotorPort motor) each motor gets the aforementioned predetermined mode and the tachoCount is correlated in order to match the sign of the axis.



3.3. Calibration layer



  • reCalibrate() it seems that it is not possible to determine which border we are at if we are pushing safety buttons when starting the calibration.

  • findMinAndMax() deals with assigning values to the maximum velocity variables: Convert tachos to millimeters, and then divide by the time it took to go from min to max (see the sketch after this list).

  • getMaxVelocity(MotorPort motor) returns a maximum velocity value for a certain motor.

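In effect the computation is a one-liner; a sketch with stand-in names (velocities thus come out in mm per millisecond, which matches the magnitudes stored in PART XIII's calibration file):

// millimeters travelled from min to max, divided by the time it took:
float maxVelocity = (maxTachoCount / TACHOS_PER_MM) / (float)(endTimeMs - startTimeMs);
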


3.4. Navigation layer



  • motorPowerFromFraction (float fraction) condition for returning zero power was corrected.

  • motorPowerFromDistance (float[] distance) condition for assigning retrieval for Z-axis to be zero was corrected.



3.5. Gradient interpreter layer


GradientInterpreterStandard

  • assureMaintainerRunning() we seemingly get the desired behavior when using setDaemon(true) instead of setting it to false.

  • enqueue(GradientGiver newone) checks whether the prior operation in the queue actually ends at approximately the same point in space where newone starts. If not, it aborts the enqueue (see the sketch after this list).

  • enqueueLinearMovement(String name, float[] from, float[] to) is an introduced helper method that just enqueues a linear movement between the two points at the maximal speed accessible.

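A sketch of the continuity check in enqueue() (the queue field, the threshold constant and the start/end accessors on GradientGiver are guesses on our part; cartesianDistance() is described in PART XI):

// Refuse operations that would introduce a jump in the drawing plan.
public boolean enqueue(GradientGiver newone) {
    if (!queue.isEmpty()) {
        GradientGiver prior = queue.lastElement(); // hypothetical accessor
        if (cartesianDistance(prior.endPoint(), newone.startPoint()) > CONTINUITY_THRESHOLD)
            return false; // end and start don't (approximately) meet: abort
    }
    queue.addElement(newone);
    return true;
}
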


3.6. Bogus handler layer




  • findOrigo () was modified a little. After origo is found, it waits until the queue becomes empty (this had caused coordinate-keepup problems).



4. Conclusion



This lab session let us concentrate on making the presumably last reconstructions, aimed at increasing the accuracy of movement of the sliding part (the one that is placed on the carriage). While running different tests, we noticed that exact carriage placement can be assured by starting at a border; unfortunately, it is impossible to determine at which border (min or max) the carriage is, and the reCalibrate() method will not (and should not) allow this.

Saturday 17 January 2009

End-project. PART XI



Date: 17 01 2009
Duration of activity: 3 hours
Group members participating: all group members



1. The Goal



The goal of this lab session was to further test drawing and adjust the code in order to improve accuracy.

2. The Plan




  • Make adjustments to the structure: The pen-holder and the working area surface.

  • Test drawing.

  • Make some plots with respect to tacho count changes.

  • Make further adjustments to the code.




3. The Results



3.1. Reconstruction



The LEGO-buildup got some reconstructions along the way. First of all, the NXT holder was put into a more appropriate position.



Due to the Z-axis issue, a pen-holder was constructed that can move up and down independently of the Z-axis. It is designed with an elastics-based load in mind, but it may be used with only the help of gravity, too (pen type permitting, of course).



To keep the paper sheet stabilized, two paper holders were constructed:



So the overall look of our robot structure at this moment can be seen here:



3.2. Drawing



This lab session also included some drawing experiments. In the picture, there are two triangles. The black one is as imprecise as it is because the pen was pushed with too much force onto the paper and the motors could not compensate for that pressure.



3.3. Plotting



One more important thing that was done during this lab session was to plot how motor power relates to the gathered tacho count values. The idea is to see what kind of change in tacho count the motor gives back at all the different speeds (here, sampled over a Thread.sleep(1000)). There is a slight difference in values for the x-axis and for the y-axis. Nevertheless, it takes a motor power of more than 50 to make the motor start to move and, therefore, for tacho values to change.

This information is paramount to defining NavigationLayerImp's motorPowerFromFraction() method, which basically must map a fraction of the maximal speed to the motor power required to obtain that speed. Assuming the plot to be linear, a linear mapping can be defined and inverted, and, as of writing, the power required is calculated as power = Math.round(45.0f * fraction) + 55;. Note that the code assumes both axes to behave similarly, which, based on the graphs, is not the worst assumption.
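
In method form, this could look like the following sketch (only the quoted linear mapping is taken from the actual code; the zero-power cutoff and the sign handling are our assumptions):

// Sketch: invert the measured (linear) power-to-speed mapping.
// Below ~55 power nothing moves, so start there and scale up to 100.
public int motorPowerFromFraction(float fraction) {
    if (fraction == 0.0f)
        return 0; // no movement wanted on this axis
    int sign = (fraction < 0.0f) ? -1 : 1;
    return sign * (Math.round(45.0f * Math.abs(fraction)) + 55);
}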




3.4. Motor layer



  • MotorSpeed() now checks if the carriage starts at the border (and makes a very error-ish sound if it does)

  • getEndpointThreshold(MotorPort motor) returns the constant endpoint-threshold associated with the axis of a motor. Is private.

  • cacheSpeed(MotorPort motor, int speed) caches the speed that is given as a parameter for the axis associated with the motor.



3.5. Calibration layer



  • findMinAndMax() now clears and re-sets the values of min and max tacho count.



3.6. Navigation layer



  • motorPowerFromFraction (float fraction) returns the power to supply to a motor to obtain the desired fraction of the maximal speed; it was updated to work more properly. See the above discussion.

  • motorPowerFromDistance (float[] distance) It is not the responsibility of this method to do over-compensation in one axis, since this is implicitly handled by the fact that the distance-input will want to move at an angle that (in the out-of-place situation) does not express the scheduled angle for the given point in time, but rather the angle towards the current supposed-to-be-at position. Note that there's a difference here. All in all, this method must only implement the give-motor-powers-from-distance semantic, but this also involves an urgency factor, the accel values. Since this method is such a neat way (in the author's mind) for the NavigationLayerImp to do corrective action, it is replicated below, where all of the discussed features can be seen in use.


    public int[] motorPowerFromDistance (float[] distance)
    {
      int retval[] = {0,0,0};
      float magnitude = (float)Math.sqrt(distance[0]*distance[0]+distance[1]*distance[1]);
      float unit_vector[] = {distance[0] / magnitude, distance[1] / magnitude};
      float accel_vector[] = {0f,0f};

      /* Adjust the acceleration factor according to the absolute distance
       * from the desired point. Mostly a linear mapping, but with a cutoff at
       * the bottom and a maximal output. */
      for (int i = 0; i < 2; i++)
      {
        if (Math.abs(distance[i]) <= ACCELERATION_MINPOINT)
          accel_vector[i] = 0.0f;
        else
          accel_vector[i] = (float)Math.abs(distance[i]) / ACCELERATION_FACTOR;
      }

      retval[0] = motorPowerFromFraction(unit_vector[0] * accel_vector[0]);
      retval[1] = motorPowerFromFraction(unit_vector[1] * accel_vector[1]);

      /* Handle the Z axis separately. Basically, always use the slowest speed
       * available -- but only so when needed. */
      if (Math.abs(distance[2]) <= Z_AXIS_INSIGNIFICANCE_THRESHOLD)
        retval[2] = 0;
      else
        retval[2] = (distance[2] > 0f) ? MINIMAL_Z_POWER_DOWN : -MINIMAL_Z_POWER_UP;
      return retval;
    }



3.7. Gradient interpreter layer


GradientInterpreterStandard

  • assureMaintainerRunning() has been changed so that an operation only becomes popped after it has actually been carried out. This is required to distinguish emptiness of the queue.

  • timeNeededForMovement(float[] from, float[] to) calculates the approximate time to go from one coordinate to the other in a straight line (basically, this is the highest axis-wise time requirement, which sets the lower bound on the duration of the whole movement):

    public int timeNeededForMovement(float[] from, float[] to)
    {
      float max_speeds[] = nav_layer.getMaxVelocity();
      int timereq_x = Math.round((float)Math.abs(to[0]-from[0]) / max_speeds[0]),
          timereq_y = Math.round((float)Math.abs(to[1]-from[1]) / max_speeds[1]),
          timereq_z = Math.round((float)Math.abs(to[2]-from[2]) / max_speeds[2]);
      return Math.max(timereq_x, Math.max(timereq_y, timereq_z));
    }


  • cartesianDistance(float[] from, float[] to) calculates the cartesian distance between two coordinates. This takes into account all of the coordinates, and takes steps to conserve numerical accuracy.

  • reZero() has this sequence of work: initiates the move back to origo; waits for the queue of operations to become empty; initiates the move back to where the reZero() was called from.



4. Conclusion



This lab session showed that we are now able to draw even a triangle quite precisely. This is due to the minor structure modifications (in particular the new Z-axis-independent pen-mount) and adjustments to the code, especially to the navigation layer. What is more, plotting of the experiments showed the relationship between motor power and tacho count value-changes: It is quite linear.


Friday 16 January 2009

End-project. PART X



Date: 16 01 2009
Duration of activity: 3 hours + (2 hours of homework)
Group members participating: all group members + (Anders B. of homework)



1. The Goal



The goal of this lab session is to make the robot actually draw something decent.

2. The Plan




  • Run drawing tests and make relevant conclusions.

  • Make further adjustments to the software.



3. The Results



In this lab session we have run several trials of drawing. It was very important to establish how precisely our robot manages to follow the given instructions.

3.1. Drawing rounded square



The first attempt was to draw a square with round-shaped corners. This is the result that we got:



As can be noticed at once---neither is it a square with round-shaped corners, nor is it smooth in any sense. At this particular moment there are two problems to be solved: Resolve the way operations are given to the robot (presently, we appear to have a miscommunication) and make it draw things more precisely. The wavy lines appear because the correction of where-I-am versus where-I-am-supposed-to-be is done in a relatively aggressive bang-bang fashion.

Also, in this picture the areas circled in red indicate a mechanical problem that we have. As the spindles on the axes are not tightened enough lengthwise, they tend to open up gaps between the spindle parts, and this causes the robot to ``skip over'' once in a while... and that clearly can be seen in the process of drawing!

3.2. Drawing square



Now, in the second picture, one can see that another attempt to draw a square (with non-rounded edges) ended up being more like a square. This time the corners are at least of similar shapes. We still have problems with skipping spindles, we end up somewhat far from where we started, and the lines are somewhat shaky, but there is a definite improvement compared with the first attempt. All of these observations are backed by the way the robot behaved during plotting (the shakiness stemming from bang-bang corrections, for example), and have a reasonable explanation.



3.3. Really drawing a square



The third attempt was really a success. The robot drew a rectangle with sharp corners, and it was done with a high degree of precision. The error was (as predicted during the previous plot) that the robot was instructed to trace the shape too fast, and the rounded corners are a result of it not being able to keep up with where it was supposed to be.

As can be seen, the upper-most and the lower-most edges are both 8.0 cm long. The difference between the right and the left edges is only one millimeter, so it is rather precise given the fact that we count tachos and later correlate them to millimeter values. One more issue was that we actually tried to draw a square, and not a rectangle. This problem requires further adjustments.



3.4. Drawing a triangle



As we were able to draw lines along the x-axis and y-axis rather precisely, it was time to try out how the robot handles diagonal lines. To fulfill this idea, we tried to draw a triangle. This was not that much of a success. The lines were of different lengths (compared to what they were supposed to be) and the result did not take the shape it was supposed to. This may be due to the fact that we decided to hardwire the values that correspond to the lengths of the x-axis and y-axis. Also, it cannot be excluded that some programmer error snuck in, and the robot actually did as it was instructed to do.


3.5. Navigation layer




  • motorPowerFromFraction (float fraction) returns the power to supply to a motor in order to obtain the desired fraction of the maximal speed.

  • motorPowerFromDistance (float[] distance) returns the (signed!) motor powers that NavigationLayerImp should use to move towards a point that is ``distance'' away.

  • gotoCoordinates(float x, float y, float z) is now done in a more correct fashion. Inside, the speed and direction of the motors are set. It finds out in which direction the pointer needs to go along the various axes.

  • halt () stops the motors.

  • getCoordinates() retrieves the coordinates with the help of the calibration layer and puts them into an array.


The new thing in this layer is a general improvement of the corrective measures taken in order to go from the actual position to the desired position. This uses the motorPowerFromDistance() helper method, which undergoes a transition from being a simple guesswork stepwise function to a nice linear mapping based on empirical knowledge.

3.6. Gradient interpreter layer



GradientInterpreterStandard has had some changes from last time:

  • assureMaintainerRunning() the way operations are run is corrected.

  • interactivelyRunOperation(GradientGiver operation) is all about being certain where to go. It allows going (or trying to go) to the currently most correct coordinates.




The most important thing in the gradient interpreter layer at this point is the act of correction. At every moment we are trying to go to the most correct coordinates we are aware of, as determined from the time since the start of a movement.

3.7. Bogus handler layer




  • findOrigo () finds the starting point, interactively (i.e. it's a blocking call).

  • drawRoundedSquare () handles the way a square with rounded corners is being drawn.

  • drawSimpleSquare () handles the way a square without rounded corners is being drawn.

  • drawTriangle () handles the way a triangle is being drawn.

  • drawLinesegments (float[][] array) handles the way some array of line segments is being drawn.

  • main (String[] args) handles the sequence of actions.


This layer now contains descriptions of some more shapes. Therefore, we should now be able to draw a rounded square, a simple square, a triangle, and an ordinary line.

4. Conclusion


Drawing tests showed that we are actually able to draw lines along the axes with a lot of precision, whereas diagonal ones aren't at that level of precision yet. With regards to the drawing trials, necessary adjustments were made to the navigation layer, as well as to the gradient interpreter and bogus handler layers.