Friday, 16 January 2009

End-project. PART IX



Date: 14 01 2009
Duration of activity: 3 hours
Group members participating: all group members



1. The Goal



The main goal of this lab session is to find out how to handle the touch sensors, to run relevant tests on the software already written, and to make relevant adjustments.

2. The Plan




  • Make many plots in order to figure out what values the touch sensors produce.

  • Run test cases for the software.

  • Make relevant adjustments to the methods according to the test results.



3. The Results



3.1. Plotting the sensor readouts



The following two plots show the values returned by the two y-axis touch sensors, measured separately. What is interesting about these sensors is that they are the blue third-party ones. One can see that they return different raw values in the unpushed state (as opposed to the official LEGO sensors).



Very different values are returned when both blue sensors are connected in parallel. Since our robot will operate with these sensors connected together, the threshold for the y-axis working area and safety stop is taken to be 400.

The next plot gives measurements from one grey touch sensor. The values are significantly different from those collected from the blue sensors.



The last two plots show that the grey LEGO touch sensors behave the same regardless of whether they are measured separately or connected in parallel, and whether one or several of them are pushed (in short, ``they behave nicely''). The threshold for deciding whether a sensor is pushed can therefore be chosen unambiguously; we use 1000.
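
For the record, a minimal sketch of how these thresholds could be used in leJOS NXJ. The port assignments are made up, and the thresholds are the ones chosen above, assuming (as our plots suggest) that pressing a sensor pulls the raw value below the threshold:

import lejos.nxt.SensorPort;

public class TouchThresholds
{
    // Thresholds chosen from the plots above.
    private static final int BLUE_PARALLEL_THRESHOLD = 400;  // two blue sensors in parallel (y-axis)
    private static final int GREY_THRESHOLD = 1000;          // grey LEGO sensors

    // Hypothetical port assignment: the parallel-connected blue sensors on S2.
    public static boolean yEndpointPressed()
    {
        return SensorPort.S2.readRawValue() < BLUE_PARALLEL_THRESHOLD;
    }

    // Hypothetical port assignment: the grey sensors on S1.
    public static boolean xEndpointPressed()
    {
        return SensorPort.S1.readRawValue() < GREY_THRESHOLD;
    }
}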



3.2. Problems arising in the MotorLayer



MotorSpeed (the MotorLayer implementation) has problems. There are issues with both detecting the endpoint and allowing the right movement away from the endpoint:


  • There is only a single sensor input per axis, which makes it indistinguishable (from a hardware point of view) which endpoint is being pressed.

  • A software solution must therefore deduce, from the currently known tacho count for the axis, which endpoint was actually pressed.

  • However, as a way of optimistically/pragmatically ensuring accuracy, the tacho count is re-zero'ed whenever the low-order endpoint is pressed.

  • During the first recalibration run, it is not possible to deduce anything from the tacho count, so the first recalibration must make a pragmatic choice about where it is.



The second and third points cause a problem: if the axis deduces from a low tacho count that the lowermost endpoint is being pressed, it will be re-zero'ed---even though it may actually have been the topmost endpoint that was pressed, at a point in time where the tacho count had lost track. That could very well be the case in the fourth point: the tacho count is zero when the robot boots, so if one of the safety-break buttons is pushed at boot-up, the robot will assume it is the bottommost buttons that are being pressed, because those are the ones associated with a low tacho count. But this might not be the case.

Add to this that there is a (theoretically!) independent issue with allowing the right movements in the proximity of an endpoint: e.g. when the topmost endpoint is being pressed, don't allow (i.e. inhibit) movements that would only move even further towards the topmost endpoint. This is actually a case of inhibition at work: a lower layer inhibits a command from a higher layer.

3.3. Motor layer




  • MotorSpeed() has a new condition: a speed value is only assigned to the motor if the end-point threshold is less than or equal to the current tacho count and the motor's speed is greater than or equal to zero, or the threshold is greater than the current tacho count and the motor's speed is less than or equal to zero.

  • setSpeed(MotorPort motor, int speed) due to how the machine is constructed, the default motor mode was changed from 2 (move backwards) to 1 (move forwards).

  • getTachoCount(MotorPort motor) now checks whether the tacho count is less than zero. If so, it resets the tacho counter and returns zero (thus upholding the invariant that getTachoCount() only returns values greater than or equal to 0).

  • isAtBorder(MotorPort motor) checks whether any border was hit.



The reason for changing the mode of the motors is that we would like to register a positive tacho count. Although you can change the mode of the motors and give them both positive and negative speeds, there is only one way the tachos are measured. So, in order not to have to invert the sign of the tacho count every time, we inverted the actual physical movement of the carriage and then changed the default mode to go forwards instead of backwards.

These were all minor adjustments; they make things work better without introducing any new behaviour.
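
As a sketch only (the threshold name and value below are hypothetical, and the real code lives in MotorSpeed), the guard and the tacho-count invariant could look like this in leJOS NXJ:

import lejos.nxt.MotorPort;

public class MotorGuardSketch
{
    // Hypothetical end-point threshold, in tachos.
    private static final int ENDPOINT_THRESHOLD = 0;

    // Mirror of the condition described above: movement is allowed only when
    // the speed's sign is consistent with which side of the threshold we are on.
    public static void setSpeed(MotorPort motor, int speed)
    {
        int tachos = getTachoCount(motor);
        if ((tachos >= ENDPOINT_THRESHOLD && speed >= 0)
                || (tachos < ENDPOINT_THRESHOLD && speed <= 0))
        {
            // mode 1 = forwards, 2 = backwards, 3 = stop-and-brake
            int mode = (speed == 0) ? 3 : (speed > 0 ? 1 : 2);
            motor.controlMotor(Math.abs(speed), mode);
        }
    }

    // Invariant: never report a negative tacho count; re-zero instead.
    public static int getTachoCount(MotorPort motor)
    {
        if (motor.getTachoCount() < 0)
        {
            motor.resetTachoCount();
            return 0;
        }
        return motor.getTachoCount();
    }
}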

3.4. Calibration layer




  • getCoordinate(MotorPort motor) in this version is already aware of how many tachos there are per millimetre.

  • getTachoPerMill(MotorPort motor) returns the tachos-per-millimetre value for a given motor.

  • reZero() handles the re-zeroing, done for all axes in parallel. It is all about finding the minimum extreme and going all the way back. All intermediate values are written to the LCD screen. Don't confuse this name with the act of re-zero'ing (i.e. re-establishing one's idea of the current coordinates), which is only part of what reZero() does.


This layer is now aware of the tachos-per-millimetre values. What is more, re-zeroing is now done for the x-, y- and z-axes: the relevant parts go all the way to the minimum extreme and then all the way to the maximum extreme.
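
A sketch of the per-axis part of reZero(), in terms of the MotorLayer methods listed in section 3.3 (the helper name reZeroAxis is ours, and the real code runs the axes in parallel):

import lejos.nxt.LCD;
import lejos.nxt.MotorPort;

public class ReZeroSketch
{
    // Stand-in for the MotorLayer methods described in section 3.3.
    public interface Motors
    {
        void setSpeed(MotorPort motor, int speed);
        int getTachoCount(MotorPort motor);
        boolean isAtBorder(MotorPort motor);
    }

    // Drive one axis to its minimum extreme (where the tacho count is
    // re-zeroed), then to its maximum extreme, and report the measured range.
    public static int reZeroAxis(Motors motors, MotorPort motor)
    {
        motors.setSpeed(motor, -100);
        while (!motors.isAtBorder(motor)) { Thread.yield(); }
        motors.setSpeed(motor, 100);
        while (motors.isAtBorder(motor)) { Thread.yield(); }   // leave the lower border first
        while (!motors.isAtBorder(motor)) { Thread.yield(); }  // ...then run until the upper one
        int max = motors.getTachoCount(motor);
        motors.setSpeed(motor, 0);
        LCD.drawInt(max, 0, 0);   // intermediate values go to the LCD
        return max;               // the axis' working range, in tachos
    }
}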

3.5. Other stuff



Driver helps to test platform movement. These methods exist for development purposes and are not related to the ultimate behaviour of the robot.

  • main(String [] args) deals with interactions with NXT.

  • testFreeControl() starts control mode.

  • testTouchSensorsRawValues() gets the touch sensors' raw values.

  • testGotoCoordinates() asks navigation layer to go to some coordinates.

  • testFindMinAndMax() reports about finding min and max in the x-axis.

  • testDriveBackAndForth() drives back and forth along the x-axis.

  • testRecalibrate() calls for recalibration in recalibrate layer.

  • printCoordinate() prints the coordinates on the LCD screen.

  • controlMode() gives control of the movement via NXT.



TouchSenTester does touch sensor testing, as used for the above plots.

  • main(String [] args) deals with interaction with NXT.

  • testTouchSensorsRawValues() prints out the touch sensors' raw values and data-logs them to a file in the filesystem (see the sketch below).


The other code changes are all about running relevant test cases, such as: get the values of the touch sensors, make the robot go to some coordinate, make the robot find min and max on the x-axis, make the robot's platform go back and forth, and so on. Basically, the method names speak for themselves.
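
To illustrate the data logging in testTouchSensorsRawValues(), here is a sketch under the assumption that leJOS NXJ's java.io file classes are used; the file name and sample count are made up:

import java.io.DataOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import lejos.nxt.SensorPort;

public class TouchLogSketch
{
    public static void main(String[] args) throws Exception
    {
        // Log 500 raw samples to a file on the NXT's filesystem,
        // to be fetched and plotted on the PC afterwards.
        DataOutputStream out =
            new DataOutputStream(new FileOutputStream(new File("touch.log")));
        for (int i = 0; i < 500; i++)
        {
            out.writeInt(SensorPort.S1.readRawValue());
            Thread.sleep(10);   // roughly 100 samples per second
        }
        out.close();
    }
}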

4. Conclusion



This lab session was very informative. Plotting the touch sensors' values showed us that we do not get a binary value telling us whether a touch sensor is touched or not. We also concluded that the ``grey'' touch sensors are very different from the ``blue'' ones, and not only in the threshold they provide: the ``blue'' sensors give thresholds that differ from one another, and something different altogether when connected in parallel, whereas the ``grey'' sensors do not suffer from this differentiation.

With regard to the software, some adjustments were made to the motor and calibration layers. What is more, we have defined many relevant test cases in Driver.java, which exercise the relevant movements and report the relevant parameters along the way.


End-project. PART VIII



Date: 13 01 2009
Duration of activity: 3 hours + (4 hours of homework + 7 hours of homework)
Group members participating: all group members (Anders D. for homework + Anders B. for homework)



1. The Goal



The goal of this lab session is to make further adjustments to the motor, calibration and navigation layers, and to test the gradient interpreter and an implementation of the bogus handler layer.

2. The Plan




  • Make further necessary adjustments to the motor layer.

  • Make further necessary adjustments to the calibration layer.

  • Make further necessary adjustments to the navigation layer.

  • Run tests on the methods of the gradient interpreter layer.

  • Run tests on the bogus handler class.



3. The Results


3.1. Motor layer




  • MotorSpeed() now embodies the idea that if the lowermost border is hit, the tacho count is reset.

  • setSpeed(MotorPort motor, int speed) now explicitly checks, before setting a speed, that no border has been reached (no push sensor pressed), as opposed to before, where consistency of the data structure was relied upon.



So basically, the new thing in this motor layer is that we explicitly check that we are not at any border before starting to move (trying to move towards a border while standing at it would be the opposite of what we are trying to achieve with these touch sensors). What is more, when we know that we are at the border marking the minimum of the x-axis, we reset the tacho count to zero to increase precision in a best-effort way: when the lower endpoint is hit, we are at 0 by definition---so let's make it so, if it wasn't already.


3.2. Calibration layer


This outlines the changes in the layer, not the layer's cumulative behaviour.

  • getCoordinate(MotorPort motor) now actually returns cached values.

  • reCalibrate() assigns values to be cached.

  • reZero() handles the whole re-zeroing process, writing intermediate values to the screen.



The calibration layer caches coordinate values so that it is easier to deal with them. The re-zeroing process is actually working now: it is all about forcefully going all the way to the minimum of the x-axis and re-zeroing the tacho count. The same applies to the y-axis as well.


3.3. Navigation layer



The significant thing is that all the methods in the navigation layer that were supposed to use double, but ended up using int, were changed to float: double is not supported in lejos, and int does not give the required precision. This change to the interface also prompted changing the underlying layers' use of int to float wherever a more precise type was needed. Floats are only 32-bit, compared to 64-bit doubles, but they suffice for our needs.


3.4. Gradient interpreter layer


This newly added layer implementation does a high-level job. It handles a queue of operations (expressed through the gradientlayer.GradientGiver interface) and runs through them in order. The algorithm is based on parametric functions, where the parameter is time. Thus, a line segment (gradientlayer.LineGG) is drawn at a certain speed, and at n milliseconds from the start of the drawing, the line segment should be n / linesegment.getDuration() done. The gradient interpreter layer---which has a NavigationLayer at its disposal for the ugly work---therefore only needs to derive the relative time since the start of the movement and throw the (GradientGiver-given) vector of ideal coordinates after its NavigationLayer reference. This is a mean and clean way to use separation of concerns, and it significantly beautifies the GradientInterpreter implementation:

GradientInterpreterStandard is the class responsible for the layer's handling of operations.

  • GradientInterpreterStandard(NavigationLayer nav) is the constructor of the class.

  • assureMaintainerRunning() runs one operation at a time.

  • interactivelyRunOperation(GradientGiver operation) runs an operation from start to end. The precondition/postcondition is that the robot will have been controlled to be at the specified start/end position of the movement.

  • enqueue(GradientGiver newone) pushes a new object onto the queue.

  • wipeQueue() clears the queue.

  • reZero() is not yet implemented.




ArcGG handles arc drawing.

  • ArcGG(float[] start, float[] centerpoint, float[] end, int total_time) is a class constructor where some rules regarding drawing arcs are defined.

  • float[] coordinatesAt(int time) takes a time parameter in milliseconds and returns the array of x,y,z-coordinates to be at, following the arc segment.



LineGG handles line drawing.

  • LineGG(float start[], float end[], int total_time) is a class constructor.

  • float[] coordinatesAt(int time) takes a time parameter in milliseconds and returns the array of x,y,z-coordinates to be at, following the line.



PCDriver

  • main (String[] args) drives the PC-side dumping, as detailed later.



The gradient interpreter layer sits on top of the navigation layer. The classes gradientlayer.LineGG and gradientlayer.ArcGG contain the rules for how to draw a line and how to draw an arc. The gradient interpreter layer is responsible for dealing with operations: enqueueing them, performing them one by one, and checking that they make sense. In our case these operations basically mean going from one point in the coordinate system to another, in some fashion specified through GradientGiver (abbreviated ``GG'', as in ArcGG).
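
Since coordinatesAt() is just the parametric function evaluated at a given time, a LineGG amounts to linear interpolation. A sketch (the field names are guesses; the real class may differ):

public class LineGGSketch
{
    private final float[] start, end;
    private final int total_time;   // duration of the whole segment, in milliseconds

    public LineGGSketch(float[] start, float[] end, int total_time)
    {
        this.start = start;
        this.end = end;
        this.total_time = total_time;
    }

    // At n ms from the start, the segment is n / total_time done:
    // interpolate each of the x,y,z components accordingly.
    public float[] coordinatesAt(int time)
    {
        float done = (float) time / total_time;
        if (done > 1.0f) done = 1.0f;   // clamp at the end point
        float[] coords = new float[3];
        for (int i = 0; i < 3; i++)
        {
            coords[i] = start[i] + (end[i] - start[i]) * done;
        }
        return coords;
    }
}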

3.5. Bogus handler layer




  • main (String[] args) is assigned a test case that is supposed to draw a square with rounded corners.



This class currently contains little more than a definition of a square with rounded corners. The definition of a line is all about its starting and ending points; the definition of an arc additionally requires the centre of the circle that the arc would form if it went all the way around. This will change significantly later on.
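
For illustration, a 100x100 mm square with 20 mm rounded corners could be enqueued roughly like this (coordinates in millimetres, z fixed at 0, and all durations made up; the actual BogusHandler definition differs in its details):

// 'gi' is the GradientInterpreterStandard from section 3.4; each corner is a
// quarter arc whose centre sits 20 mm inside the corner of the square.
static void enqueueRoundedSquare(GradientInterpreterStandard gi)
{
    float z = 0f;
    gi.enqueue(new LineGG(new float[]{20, 0, z}, new float[]{80, 0, z}, 2000));
    gi.enqueue(new ArcGG(new float[]{80, 0, z}, new float[]{80, 20, z}, new float[]{100, 20, z}, 1000));
    gi.enqueue(new LineGG(new float[]{100, 20, z}, new float[]{100, 80, z}, 2000));
    gi.enqueue(new ArcGG(new float[]{100, 80, z}, new float[]{80, 80, z}, new float[]{80, 100, z}, 1000));
    gi.enqueue(new LineGG(new float[]{80, 100, z}, new float[]{20, 100, z}, 2000));
    gi.enqueue(new ArcGG(new float[]{20, 100, z}, new float[]{20, 80, z}, new float[]{0, 80, z}, 1000));
    gi.enqueue(new LineGG(new float[]{0, 80, z}, new float[]{0, 20, z}, 2000));
    gi.enqueue(new ArcGG(new float[]{0, 20, z}, new float[]{20, 20, z}, new float[]{20, 0, z}, 1000));
}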

4. Conclusion



During this lab session the three lowermost layers were adjusted. The most significant things are that the motor layer---before setting the speed value---checks if no borders were reached (no touch sensors touched) and the calibration layer caches coordinate values. The gradient interpreter layer contains rules for drawing arcs and lines, and the BogusHandler has a method with a definition of a square with rounded corners.


End-project. PART VII



Date: 12 01 2009
Duration of activity: 3 hours + (3 hours of homework)
Group members participating: all group members + (Anders D. for homework)



1. The Goal



The goal of this session is to continue making relevant adjustments to the motor and calibration layers, and to test the methods of the navigation layer.

2. The Plan




  • Make adjustments if needed to the motor layer.

  • Make adjustments if needed to the calibration layer.

  • Test the methods of the navigation layer.



3. The Results



3.1. Motor layer




  • MotorSpeed() The constructor now creates two SensorListeners which listen for sensor port events. The listener is implemented in the usual ``anonymous class'' way, filling out the method stateChanged(SensorPort aSource, int aOldValue, int aNewValue). The two value parameters denote the raw values before and after the event, but since what we would like to know is the answer to the binary question ``is the touch sensor pressed'', we keep on using isPressed().

  • The MotorLayer implementation now effectively has the safety break that stops the motors whenever a corner button is pushed, but this does not prevent the motors from starting again (see setSpeed() below).

  • setSpeed(MotorPort motor, int speed) The new thing is a condition checked before setting the speed of the motor. First it is checked whether the tacho count taken from the motor handling the x-axis is greater than or equal to some predetermined x-tacho-threshold, and whether the speed is greater than or equal to zero. In that case the movement is allowed, since it is a movement from close to the lower endpoint towards less close to the lower endpoint. A symmetric argument applies to the top point.


So far, the news in this layer is about touch sensors. To get things right, we use SensorPort listeners to be aware of when a touch sensor is pushed. Also, we decided to use two sensor ports: one for the lowermost platform (x-axis) and the other for the carriage (y-axis).
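
A sketch of the listener wiring; in leJOS NXJ the interface is SensorPortListener, while the port choice and the threshold here are assumptions:

import lejos.nxt.Motor;
import lejos.nxt.SensorPort;
import lejos.nxt.SensorPortListener;

public class ListenerSketch
{
    private static final int PRESSED_THRESHOLD = 400;   // assumed, cf. the plots in PART IX

    public static void install()
    {
        // One listener per axis; putting the x-axis sensors on S1 is an assumption.
        SensorPort.S1.addSensorPortListener(new SensorPortListener()
        {
            public void stateChanged(SensorPort aSource, int aOldValue, int aNewValue)
            {
                // aOldValue/aNewValue are the raw values before and after the event;
                // we only care about the binary question ``is the sensor pressed''.
                if (aNewValue < PRESSED_THRESHOLD)
                {
                    Motor.A.stop();   // safety break for this axis
                }
            }
        });
    }
}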


3.2. Navigation layer




  • gotoCoordinates(int x, int y, int z) defines the actual driving to the given spot in the coordinate system.

  • gotoCoordinates(int[] coords) is a wrapper around the former.

  • getCoordinates() gets an array of coordinates. The array is of length 3.

  • getXCoordinate(), ..Y.., ..Z.. get a certain coordinate by calling a method from the calibration layer.


The navigation layer sits on top of the calibration layer. It handles going to given coordinates. What is more, it can return the current coordinates as an array, or each coordinate separately. For now, this layer needs a lot of testing.
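
As a rough guess at the shape of gotoCoordinates() (this is not the project's actual code): drive each axis in turn towards its target via the calibration layer, and stop within a tolerance:

import lejos.nxt.MotorPort;

public class GotoSketch
{
    // Stand-in for the CalibrationLayer methods used here.
    public interface Calibration
    {
        int getCoordinate(MotorPort motor);
        void setSpeed(MotorPort motor, int speed);
    }

    public static void gotoCoordinates(Calibration cal, int x, int y, int z)
    {
        int[] target = {x, y, z};
        MotorPort[] motors = {MotorPort.A, MotorPort.B, MotorPort.C};   // assumed axis-to-motor mapping
        for (int i = 0; i < 3; i++)
        {
            // bang-bang control: full speed towards the target, then stop-and-brake
            while (Math.abs(target[i] - cal.getCoordinate(motors[i])) > 1)
            {
                cal.setSpeed(motors[i], target[i] > cal.getCoordinate(motors[i]) ? 100 : -100);
            }
            cal.setSpeed(motors[i], 0);
        }
    }
}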

3.3. Driver class, debugging the layers



So far, each test of the different parts of the system has been done through Driver.java. The driver class is primarily a main() method that allows us to run our code on the NXT brick without writing a main method for each layer. It has a button listener attached that allows us to exit the programs gracefully (i.e. without disconnecting the battery). Driver.java is the only class that breaks the strictly layer-based architecture.
Each test case added to this blob of a driver is enclosed in a method with a reasonable name, and then called from the main loop.
A testcase could look like:

public static void testFindMinAndMax()
{
    // Drive towards the minimum until the safety break stops the motor.
    cal.setSpeed(MotorPort.A, -100);
    while (Motor.A.getSpeed() != 0) {}   // busy-wait for the motor to be stopped
    LCD.drawString("min=" + MotorPort.A.getTachoCount(), 0, 0);

    // Same procedure towards the maximum.
    cal.setSpeed(MotorPort.A, 100);
    while (Motor.A.getSpeed() != 0) {}
    LCD.drawString("max=" + MotorPort.A.getTachoCount(), 0, 1);  // row 1, so "min" stays visible
}

So far, the class is for provoking the calibration and printing the tacho count values on the screen.

This class is nothing more than a driver class for testing the code written during development. Having one class with the canonical name Driver.nxj also makes it a bit easier to debug unwanted behaviour from the robot: the file only needs to be set as the default program once, with no menu interactions needed afterwards.

This driver should not be seen as the final driver-class for the program, showing off all the features of the robot. Later in this project, when actual final features can be shown, other driver classes should be constructed.

4. Conclusion



During this lab session, the motor layer was adjusted to better serve our needs, and the navigation layer methods were tested. The most significant change is that, in the motor layer, we use a SensorPort listener for the touch sensors, and we support two sensor ports.


Friday, 9 January 2009

End-project. PART VI



Date: 08 01 2009
Duration of activity: 3 hours + (about 2-3 hours for homework)
Group members participating: Anders D., Renata + (all group members for homework)



1. The Goal



The goal of this lab session is to start to actually write code in order to implement the layers that were identified in the last lab session.

2. The Plan




  • Test how MotorLayer methods work on the robot.

  • Test how CalibrationLayer methods work on the robot.



3. The Results



3.1. Motor layer




  • setSpeed(MotorPort motor, int speed) assigns a speed value to a particular motor. If the speed is positive, the motor goes forward (i.e. towards higher coordinates); if the speed is negative, backward. The method is not complicated and works in a straightforward way.

  • getTachoCount(MotorPort motor) gets a tacho count value of some particular motor/axis.

  • getCurTachos() gets the last seen tacho values of the three motors.

  • MotorSpeed() is a class constructor. The safety break---triggered when one of the corner buttons is pressed---turns off all motors, and setSpeed has the responsibility of only starting the motors again if they are going in the right direction.



This is the lowermost layer. Its two basic responsibilities are counting tachos and setting the speeds of the motors. Tacho values are returned either for a particular motor or for all motors at once. The idea with the safety buttons is that whenever any of them is touched, the motors have to stop. Under normal circumstances, these buttons are only touched during calibration; if the sensors are touched in any other circumstances, something must have gone wrong. Thus, the sensors delimit the working area of the robot.

3.2. Calibration layer




  • getCoordinate(MotorPort motor) returns the current coordinate (in millimetres) for the given motor.

  • setSpeed(MotorPort motor, int speed) sets the motor speed of the given motor. The speed is in the interval [-100;100]. Zero means stop-and-brake; as opposed to stop-and-float.

  • reCalibrate() starts an interactive re-calibration of the axes. While the calibration is running, the coordinate progress is hidden from the interfacing layers: getCoordinate() returns cached results, and setSpeed does nothing.

  • reZero() clears and re-sets the values of min and max tacho count.



This is a layer on top of the motor layer. So far it can ``get'' the coordinates of where the motors are, re-set tacho counts, and run calibration---at least, that is what the layer is supposed to do. These methods still lack testing, so more precise conclusions will be made along the way. The primary responsibility of the layer, as seen from the layers above, is to handle the tacho-to-millimetre conversion (and indeed, the layer's implementation was converted to floating-point arithmetic in order to preserve the precision of the millimetre values).
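
The conversion itself is a single division; a sketch with float arithmetic (24.5 is a made-up factor, the real one comes out of calibration):

public class MmConversionSketch
{
    // Hypothetical factor: how many tachos one millimetre is worth.
    private static final float TACHOS_PER_MM = 24.5f;

    // Tachos to millimetres; float rather than double, since lejos
    // does not support double (see the discussion below).
    public static float tachosToMm(int tachoCount)
    {
        return tachoCount / TACHOS_PER_MM;
    }

    // ...and back again, for turning a target coordinate into a tacho goal.
    public static int mmToTachos(float mm)
    {
        return (int) (mm * TACHOS_PER_MM);
    }
}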

3.3. Discussion



There were some issues to be resolved with the days work:

When uploading our first draft of the source code, the NXT brick kept throwing an exception, making us reach for the battery pack. We kept commenting out parts of the source, in the end deducing that the method public double getCoordinate(MotorPort motor) was the one causing problems. As it turns out, lejos doesn't support double and encourages the use of float instead [1]. We had similar problems when trying to use the method static long currentTimeMillis() of java.lang.System [2]: the return type is long, but doing arithmetic on a long would make the brick throw exceptions.


For development purposes, the interface was changed to use int, because we knew this was supported---with a note made that the return type should eventually be something with more precision.

When using a touch sensor, it makes sense to use isPressed(), because one expects a binary answer. The touch sensors (at least the RCX-style ones) do not return a binary answer, however, but a number in some interval. We know this phenomenon is controllable, but for now we do not investigate it further.

4. Conclusion



Today's work was the first attempt to make the robot do something according to the written software. It took some time to test and pick out all the mistakes so that the simple methods of the motor layer would work. For the moment, the calibration layer only does ``something'', by being able to move all three platforms at once (which, we might add, has its own sort of charm to it). To make things work properly, more adjustments will be needed in this layer.

The MotorLayer also needs to implement a way to handle touch sensor events.

5. References



  1. http://osdir.com/ml/java.lejos/2003-05/msg00000.html

  2. http://lejos.sourceforge.net/p_technologies/nxt/nxj/api/java/lang/System.html#currentTimeMillis()



Tuesday, 6 January 2009

End-project. PART V



Date: 06 01 2009
Duration of activity: 3 hours
Group members participating: all group members



1. The Goal



The goal of this lab session was to discuss and agree on the architecture of our project by deciding on what exactly a particular layer does and defining methods that have to be realized in those layers.

2. The Plan




  • Discuss the motor layer.

  • Discuss the calibration layer.

  • Discuss the navigation layer.

  • Discuss the gradient interpreter layer.

  • Discuss the bogus handler layer.



3. The Results



The illustration of the hierarchy.



The whole lab session was spent discussing the layers of our thought-up layered architecture. A summary of everything we came up with can be seen in the table below; it describes the whole structure of the architecture.


Layer: BogusHandler
This is the uppermost layer, handling inputs.

Layer: GradientInterpreter
It is all about interpreting incoming gradients and vectors. It includes queueing of pending drawing assignments and reassuring itself with respect to the 'map' by re-zeroing.

  • enqueue() keeps gradients in the queue until they are drawn.

  • wipeQueue() empties the queue.

  • reZero() goes to the zero coordinates of the system in order to be sure of 'where I am', thus keeping the precision.

Layer: NavigationLayer
This layer does corrective movement. It is given a coordinate tuple that expresses where the robot is supposed to be, and is expected to make a corrective movement to that end.

  • void gotoCoordinates(double x, double y, double z) goes to a set of pre-defined coordinates.

  • gotoCoordinates(double[] coords) goes to coordinates taken from a coordinate array.

  • double[] getCoordinates() gets the current set of coordinates.

  • double getXCoordinate(), double getYCoordinate(), double getZCoordinate() get the specific X, Y or Z coordinate.

Layer: CalibrationLayer
This layer handles mm-to-tachos calibration and re-calibration of the axes. Obviously, it is part of the state of this class to know how much a millimetre is worth, in tachos.

  • getCoordinate(String motor) returns the current coordinate (in mm) for the given motor.

  • setSpeed(String motor, int speed) sets the motor speed of the given motor. The speed is in the interval [-100;100]. Zero means stop, as opposed to float.

  • reCalibrate() starts an interactive re-calibration of the axes. While the calibration is running, the coordinate progress is hidden from the interfacing layers: getCoordinate returns cached results, and setSpeed does nothing.

Layer: MotorLayer
This interface handles low-level tacho-count-based movement. It is expected to be the lowermost layer of the layered architecture. The layer is supposed to keep count of the tachos, and it is supposed to re-zero upon hitting an endpoint.

  • int getTachoCount(String motor) returns the current position (in tachos) for the given motor.

  • setSpeed(String motor, int speed) sets the motor speed of the given motor. The speed is in the interval [-100;100]. Zero means stop, as opposed to float.


The table, however, is only half of it. At the same time, the methods were defined in Java interfaces, so that the method names are also fixed at the syntactical level. Nicely commented Java interfaces were written.
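
As an example of the shape of these interfaces, here is roughly what the MotorLayer row of the table translates to (the actual source files differ in their details):

/**
 * Lowermost layer: handles low-level tacho-count-based movement.
 * Keeps count of the tachos and re-zeroes upon hitting an endpoint.
 */
public interface MotorLayer
{
    /** Returns the current position (in tachos) for the given motor. */
    int getTachoCount(String motor);

    /**
     * Sets the motor speed of the given motor, in the interval [-100;100].
     * Zero means stop, as opposed to float.
     */
    void setSpeed(String motor, int speed);
}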

4. Conclusion



With this layered architecture we tried to separate the different aspects of robot handling (in order): motor abstraction, correlation to millimetres, navigation to coordinates, high-level description interpretation, and the actual drivers. This structure is a first sketch and not yet implemented, so it will see many changes along the way; the skeletal structure, however, should remain stable.


Friday, 12 December 2008

End-project. PART IV



Date: 12 12 2008
Duration of activity: 2 hours
Group members participating: all group members



1. The Goal



The goal of this lab session was to agree on the main points of the architecture and to make some adjustments to the LEGO structure.

2. The Plan



  • Agree on the architecture.

  • Further upper carriage construction.

  • Adjustments of the sliding part.


3. The Results



3.1. The architecture



The first thing we did during this lab session was agree on the architecture. As mentioned before, we will use a layered architecture in this project. It is very important to leave as much room as possible for scalability and modifiability, as it is not yet clear exactly how much of each layer we will need. What we have agreed upon so far looks like this:


4. Input handling. (tentative)
3. Input (vector) interpreter. (tentative)
2. State calculating navigation layer. Possibly feedback giving. (tentative)
1. X-Y movement calibration (mm wise). (definite)
0. Motor speed and tacho count handling. Re-zeroing. (definite)


3.2. The carriage



The second thing to do in this lab session was to mount two touch sensors on the upper platform (the carriage). Whenever a touch sensor on this platform is touched, it indicates that the y-axis is at its very beginning or at its very end. We couldn't get our hands on the standard official LEGO grey touch sensors, so we had to use a couple of semi-transparent blue third-party ones instead. For now, the upper platform with such a sensor looks like this:



3.3. The sliding part



The third thing done in this lab session is a very important one. We made the support near the spindle axis smoother. We also rebuilt the part responsible for the up/down movement so that it pushes the y-axis spindle from the outside in, which is to say, the spindle now gets pushed into its support structure, as opposed to being pushed out of it, as before. By pushing the axis inwards this way, the axis gets more support, and the movement along the y-coordinate axis becomes more stable and does not get stuck.



4. Conclusion



The benefit of this lab session is that we have now agreed on exactly which layers of the layered architecture we plan to use. Also, we mounted two touch sensors on the carriage and made the sliding part travel from one side to the other more smoothly.


Tuesday, 9 December 2008

End-project. PART III



Date: 09 12 2008
Duration of activity: 4 hours
Group members participating: all group members



1. The Goal



The goal of this lab session is to put all our efforts into making the construction better.

2. The Plan



  • Improve the sliding part (the one that moves on the carriage).

  • Improve the carriage itself.

  • Mount the NXT to be in a comfortable position with regards to the whole structure.

  • Install emergency stop buttons.



3. The Results



3.1. Sliding part



The first thing done today was to improve the part of the construction responsible for handling the up/down movement. It was enhanced so that it is now very easy to attach any kind of pointer or, in general, any kind of tool; in our case this will most likely be a drawing pen. The wheel is the part that, with the help of the motor, enables the movement itself. The whole part actually hangs on the carriage, but it is clamped on tightly enough that it can move left/right without significant difficulty.

The aforementioned part looks like this:







3.2. The carriage



The second thing that was done was improving the upper platform to be as compact as possible, leaving more space available for the X-Y coordinate system. The working area is not that big, so making the mounts as compact as possible is an important issue. The pictures show the upper platform and a close-up of its sliding area:






3.3. The NXT



The third thing that was done was to attach the NXT brick to the working area, so that it is easier to work with. This looks as seen below:





3.4. The touch sensors



The fourth thing done today: attaching touch sensors. This was done so that we are always aware of the limits of our coordinate system. The lower platform (the main one, handling the x-axis) got four touch sensors, the upper platform (the carriage, handling the y-axis) got two, and the ``up/down'' platform will presumably get only one sensor, if any. That is enough, as all we need there is to be sure that we have touched the paper surface. Old-style RCX touch sensors are used for this purpose, since they can be connected in parallel (and we will have more input sensors than the NXT has input ports, so this is necessary).





3.5. Software



With regard to programming, work on the X-Y movement controller was started. The initial idea is to take (x, y) coordinates as parameters and make the pointer move x tacho counts along the x-axis (forwards/backwards with regard to our working area), and symmetrically for the y coordinate.

3.6. Final outcome



At the end of the work, it seemed that the part responsible for the up/down movement will have to be redesigned: when attached to the ``upper'' platform (the carriage), it does not move correctly along the spindle axis and gets stuck at some points. At the end of the day, the overall construction looks like this:







4. Conclusion



This lab session was all about the construction. We now have the main platform, a fairly final version of the carriage, and the improved sliding part. The sliding part is built so that it can move left/right (handling the y-axis movement), although with some spindle-related troubles that will be fixed in the future. A part of the sliding part is also able to perform up/down movements. Due to the overall construction, we have mounted six touch sensors so far (four on the main platform and two on top of the carriage) to act as emergency stops, so that no platform overshoots the limits of the working area.