Friday 16 January 2009

End-project. PART IX



Date: 14 01 2009
Duration of activity: 3 hours
Group members participating: all group members



1. The Goal



The main goal of this lab session is to find out how to handle the touch sensors, to run relevant tests on the already written software, and to make the necessary adjustments.

2. The Plan




  • Make many plots in order to figure out what values the touch sensors produce.

  • Run test cases for the software.

  • Make relevant adjustments to the methods according to the test results.



3. The Results



3.1. Plotting the sensor readouts



The following two plots show the values returned by the two touch sensors of the y-axis, measured separately. What is interesting about these sensors is that they are the blue ones. One can see that, unlike the official LEGO sensors, they settle at different raw values in the unpushed state.



Very different values are returned when the two blue sensors are connected in parallel. Since the robot will run with these sensors wired together, the threshold for the y-axis working area and safety stop is taken to be 400.

The second picture gives measurements from a single grey touch sensor. It can be seen that these values are significantly different from those collected from the blue sensors.



The last two plots show that the grey LEGO touch sensors behave the same whether they are measured separately or connected in parallel, and whether one or several of them are pushed (in short, ``they behave nicely''). A threshold for deciding whether a sensor is pushed can therefore be chosen with confidence; we use 1000.
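
To make these thresholds concrete, the following is a minimal sketch of a pressed/unpressed decision based on the raw readouts. The port assignments are assumptions, and the direction of the comparison assumes (as the plots indicate) that a pressed sensor pulls the raw value below the chosen threshold.

import lejos.nxt.LCD;
import lejos.nxt.SensorPort;

public class TouchThresholds {
    // Blue sensors wired in parallel on the y-axis.
    public static final int BLUE_PARALLEL_THRESHOLD = 400;
    // Grey LEGO sensors, separately or in parallel.
    public static final int GREY_THRESHOLD = 1000;

    // True if the raw A/D reading on the port is below the given threshold.
    public static boolean isPressed(SensorPort port, int threshold) {
        return port.readRawValue() < threshold;
    }

    public static void main(String[] args) throws InterruptedException {
        while (true) {
            // Row 0: the y-axis pair (blue, in parallel); row 1: a grey sensor.
            LCD.drawInt(SensorPort.S1.readRawValue(), 4, 0, 0);
            LCD.drawInt(isPressed(SensorPort.S1, BLUE_PARALLEL_THRESHOLD) ? 1 : 0, 1, 6, 0);
            LCD.drawInt(SensorPort.S2.readRawValue(), 4, 0, 1);
            LCD.drawInt(isPressed(SensorPort.S2, GREY_THRESHOLD) ? 1 : 0, 1, 6, 1);
            Thread.sleep(100);
        }
    }
}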



3.2. Problems arising in the MotorLayer



MotorSpeed (the MotorLayer implementation) has problems. It is not straightforward to both detect which endpoint is hit and still allow the right movement away from that endpoint:


  • There is only a single sensor input per axis, which makes it impossible to distinguish (from a hardware point of view) which endpoint is being pressed.

  • A software solution must therefore deduce, from the currently known tacho count for the axis, which endpoint was actually pressed.

  • However, as an optimistic/pragmatic way of maintaining accuracy, the tacho count is re-zeroed whenever the lowermost endpoint is pressed.

  • In the first recalibration run, the tacho count cannot be used to deduce anything, so the first recalibration must make a pragmatic guess about where the carriage is.



The second and third points cause a problem: if the axis deduces (from a low tacho count) that the lowermost endpoint is being pressed, it will be re-zeroed, even though it may actually have been the topmost endpoint that was pressed at a moment when the tacho count had lost track. The fourth point is exactly such a case: the tacho count is zero when the robot boots, so if one of the safety-break buttons is pushed at boot-up, the robot will assume it is the bottommost button, because that is the one normally associated with a low tacho count. But this need not be the case.
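
A small sketch of the deduction in question (the class name and the cut-off value are hypothetical; only the logic follows the description above). The boot-time flaw is visible directly: right after boot the tacho count is 0, so a press of the topmost endpoint would also land in the ``low'' branch and wrongly trigger a re-zero.

import lejos.nxt.MotorPort;

class EndpointGuess {
    private static final int LOW_REGION = 200; // tacho counts we treat as ``near the bottom''

    // Called when the (shared) endpoint sensor of an axis fires.
    static void onEndpointPressed(MotorPort motor) {
        if (motor.getTachoCount() <= LOW_REGION) {
            // Guess: the lowermost endpoint was hit, so re-zero the axis.
            motor.resetTachoCount();
        }
        // Otherwise: guess the topmost endpoint and leave the count untouched.
    }
}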

Add to this that there is a (theoretically!) independent issue of allowing the right movements in the proximity of an endpoint. E.g. while the topmost endpoint is being pressed, movements that would push even further towards that endpoint must not be allowed (i.e. they are inhibited). This is actually a case of inhibition at work: a lower layer inhibits a command from a higher layer.

3.3. Motor layer




  • MotorSpeed() has a new condition: a speed value is only assigned to the motor if the end-point threshold is less than or equal to the current tacho count and the motor's speed is greater than or equal to zero, or the threshold is greater than the current tacho count and the motor's speed is less than or equal to zero (see the sketch at the end of this subsection).

  • setSpeed(MotorPort motor, int speed) was changed so that the default motor mode is 1 (move forwards) instead of 2 (move backwards), due to the way the carriage is constructed.

  • getTachoCount(MotorPort motor) now checks whether the tacho count is less than zero. If it is, the tacho counter is reset and zero is returned (thus maintaining the invariant that getTachoCount() only returns values greater than or equal to 0).

  • isAtBorder(MotorPort motor) checks whether any border was hit.



The reason for changing the mode of the motors is that we would like to register a positive tacho count. Although the mode of a motor can be changed and it can be given both positive and negative speeds, the tacho count is only measured in one fixed direction. So, in order not to have to invert the sign of the tacho count every time, we inverted the direction of the actual physical movement of the carriage and changed the default mode of the motors accordingly.

These were all minor adjustments; they make things work more reliably but do not introduce any new behaviour.
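
The following is a condensed, hypothetical sketch of these adjustments. The class name, the axis-to-port wiring and the endpoint constants are assumptions; the guard condition is transcribed from the first bullet above, and whether a positive speed moves towards or away from an endpoint depends on how our carriage is mounted.

import lejos.nxt.MotorPort;
import lejos.nxt.SensorPort;

class MotorLayerSketch {
    private static final int END_TACHO = 2000;      // hypothetical end-point threshold (tachos)
    private static final int TOUCH_THRESHOLD = 400; // raw value below this = border hit

    // Guard transcribed from the notes: only pass the command on when the tacho
    // count and the sign of the requested speed agree with the end-point threshold.
    static void setSpeed(MotorPort motor, int speed) {
        int tacho = getTachoCount(motor);
        if ((END_TACHO <= tacho && speed >= 0) || (END_TACHO > tacho && speed <= 0)) {
            // Mode 1 (forwards) is now the default; mode 2 (backwards) for negative speeds.
            motor.controlMotor(Math.abs(speed), speed >= 0 ? 1 : 2);
        }
    }

    // Clamped tacho count: never returns a negative value.
    static int getTachoCount(MotorPort motor) {
        if (motor.getTachoCount() < 0) {
            motor.resetTachoCount();
            return 0;
        }
        return motor.getTachoCount();
    }

    // True when the endpoint sensor of this axis is pressed (port mapping assumed).
    static boolean isAtBorder(MotorPort motor) {
        return SensorPort.S1.readRawValue() < TOUCH_THRESHOLD;
    }
}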

3.4. Calibration layer




  • getCoordinate(MotorPort motor) is now aware of how many tachos there are per millimetre.

  • getTachoPerMill(MotorPort motor) returns the tachos-per-millimetre value for a given motor.

  • reZero() handles re-zeroing for all axes in parallel. It is all about finding the minimum extreme and then going all the way back to the maximum extreme. All intermediate values are written to the LCD screen. The name should not be confused with the plain act of re-zeroing (i.e. re-establishing one's idea of the current coordinates), which is only part of what reZero() does.


This layer is now aware of the tachos-per-millimetre values. What is more, re-zeroing is now done for the x-axis, y-axis, and z-axis: each axis is driven all the way to its minimum extreme and then all the way to its maximum extreme.
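
As an illustration, here is a single-axis sketch of the calibration idea. The port wiring, the touch threshold, the motor power and the axis length are assumptions; the real reZero() drives all axes in parallel and prints intermediate values to the LCD.

import lejos.nxt.MotorPort;
import lejos.nxt.SensorPort;

class CalibrationSketch {
    private static final int TOUCH_THRESHOLD = 400;   // border hit when the raw value drops below this
    private static final double AXIS_LENGTH_MM = 180; // hypothetical usable length of the axis

    private double tachoPerMill;

    double getTachoPerMill(MotorPort motor) {
        return tachoPerMill;
    }

    // Current position in millimetres, derived from the tacho count.
    double getCoordinate(MotorPort motor) {
        return motor.getTachoCount() / tachoPerMill;
    }

    // Single-axis re-zeroing: drive to the minimum extreme, zero the tacho count,
    // then drive to the maximum extreme and derive the tachos-per-millimetre value.
    void reZero(MotorPort motor, SensorPort border) {
        motor.controlMotor(40, 2);                       // towards the minimum extreme (direction assumed)
        while (border.readRawValue() >= TOUCH_THRESHOLD) Thread.yield();
        motor.controlMotor(0, 3);                        // brake
        motor.resetTachoCount();                         // the minimum extreme is coordinate 0

        motor.controlMotor(40, 1);                       // towards the maximum extreme
        while (border.readRawValue() < TOUCH_THRESHOLD) Thread.yield();  // wait until off the button
        while (border.readRawValue() >= TOUCH_THRESHOLD) Thread.yield(); // then until the other end is hit
        motor.controlMotor(0, 3);
        tachoPerMill = motor.getTachoCount() / AXIS_LENGTH_MM;
    }
}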

3.5. Other stuff



Driver helps to test platform-movement issues. Its methods exist purely for development purposes and are not related to the ultimate behaviour of the robot.

  • main(String [] args) deals with interactions with NXT.

  • testFreeControl() starts control mode.

  • testTouchSensorsRawValues() reads the touch sensors' raw values.

  • testGotoCoordinates() asks the navigation layer to go to some given coordinates.

  • testFindMinAndMax() reports on finding the minimum and maximum of the x-axis.

  • testDriveBackAndForth() drives back and forth along the x-axis.

  • testRecalibrate() triggers recalibration in the recalibration layer.

  • printCoordinate() prints the current coordinates on the LCD screen.

  • controlMode() gives manual control of the movement via the NXT.



TouchSenTester does touch sensor testing, as used for the above plots.

  • main(String [] args) deals with interaction with NXT.

  • testTouchSensorsRawValues() prints out raw values of touch sensors and data-logs them to a file in the filesystem.


The other code changes are all about running relevant test cases: getting the touch sensors' values, making the robot go to a given coordinate, making it find the minimum and maximum on the x-axis, driving the platform back and forth, and so on. Basically, the method names speak for themselves.
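
As an illustration of what TouchSenTester does, here is a minimal sketch of such a raw-value logger. The file name, the sample period and the port are assumptions, and the file writing assumes leJOS's java.io support; the resulting file can afterwards be pulled off the brick and plotted.

import java.io.DataOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import lejos.nxt.LCD;
import lejos.nxt.SensorPort;

public class TouchSenTesterSketch {
    public static void main(String[] args) throws Exception {
        DataOutputStream log =
            new DataOutputStream(new FileOutputStream(new File("touch.log")));
        // Log a fixed number of samples (roughly 50 seconds at 100 ms per sample).
        for (int i = 0; i < 500; i++) {
            int raw = SensorPort.S1.readRawValue();
            LCD.drawInt(raw, 4, 0, 0);   // show the current reading on the LCD
            log.writeInt(raw);           // and append it to the log file
            Thread.sleep(100);
        }
        log.close();
    }
}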

4. Conclusion



This lab session was very informative. Plotting the touch sensors' values showed us that we do not get a clean binary value telling us whether a touch sensor is pressed or not. We also concluded that the ``grey'' touch sensors are very different from the ``blue'' ones, and not only in the threshold that they require. The ``blue'' sensors give different readings from one another, and something different altogether when they are connected in parallel, whereas the ``grey'' sensors do not suffer from this variation.

With regard to the software, some adjustments were made to the motor and calibration layers. What is more, we have defined a number of relevant test cases in Driver.java which exercise the relevant movements and report the relevant parameters along the way.

