Thursday 18 September 2008

NXT Programming, Lesson 3



Date: 19 09 2008
Duration of activity: 3 hours
Group members participating: all group members



1. The Goal

The goal of lesson 3 is to try out many different ideas while using the sound sensor to control the LEGO robot.

2. The Plan

  • Mount the sound sensor on the robot, try out the test program, and write down the readings.

  • Use a data logger to collect and record data in a file.

  • Try out the given program that controls the robot with different sounds and describe how the program interprets the readings.

  • Try out the idea of a ButtonListener for the escape button.

  • Carry out the suggested investigation of clapping using the parameters from Sivan Toledo.



3. The Results

During lab session 3 our group managed to try out everything suggested in the NXT Programming Lesson 3 description, and beyond that, we were inspired by the clap controlled car idea and made some experiments of our own with this type of control.

  • First of all, the LEGO robot (whose name is Thomas, as one may remember from previous lab descriptions) was mounted with one more sensor, the sound sensor. Thomas with the sound sensor looks like this:



    As can be seen from these pictures, Thomas is equipped not only with the new sound sensor (the little arrow points exactly to where this sensor is) but also with the ultrasonic sensor and the light sensor from previous assignments.

  • The next thing that we were occupied with was actually testing the sound sensor. To that end, the ideas of SonicSensorTest.java were used. Read more in 3.1. Testing sound sensor.

  • After our group was sure how the sound sensor worked, we tried to make the robot log all the readings that the sound sensor reads out. For that, DataLogger.java was used. The principle is that it writes measurements to a Sample.txt file, and this file can be retrieved using the nxjbrowse command. We changed certain things in this code to facilitate easier plotting of the measurements. Read more about this in 3.2. Data logging.

  • The next thing on the list was to try out the idea that the robot can be controlled by sound. For that we used SoundCtrCar.java and Car.java. In this program, Thomas waits for a loud sound and then goes forward. The next loud sound makes him turn right, the next turn left, and a final loud sound makes him stop. And so on, until escape is pressed in a very peculiar way. This motivated the suggested change of the code to be ButtonListener-actuated. Read more about sound controlling in 3.3. Sound controlled car.

  • And finally, the last thing on the list was to make the LEGO robot not react to anything but the sound of clapping. For that we changed the java code as for the regular sound controlled car. What's more, we added data logging to this program as well to see if the theory corresponds to the practice. Read more about the results we got in 3.4. Clap controlled car.



3.1. Testing sound sensor

For testing the microphone we made some changes to the SonicSensorTest.java program, supplied on the course-homepage:

import lejos.nxt.*;

public class SoundSensorTest {

    public static void main(String[] args) throws Exception {
        SoundSensor ss = new SoundSensor(SensorPort.S2, false);

        LCD.drawString("Sound level (%) ", 0, 0);

        while (!Button.ESCAPE.isPressed()) {
            LCD.drawInt(ss.readValue(), 3, 13, 0);
            LCD.refresh();
            Thread.sleep(300);
        }
        LCD.clear();
        LCD.drawString("Program stopped", 0, 0);
        LCD.refresh();
    }
}


Only minor changes have been made. Running this program and SonicSensorTest.java through "diff" will make this very clear.

The changes made are:

  • Instead of using an UltrasonicSensor, the program uses a SoundSensor. This is connected to a SensorPort like the UltrasonicSensor, but one also needs to specify whether the sound sensor is in "dBA" or "dB" mode. The documentation doesn't say anything about what the difference between these modes is.

    As far as we could tell, dBA mode is an adjusted mode where the sensor is adapted to the sensitivity of human ears. In dB mode the sensor detects every sound (that it is capable of) from its surroundings -- irrespective of what humans are able to hear.

  • We chose dB mode so that our data would not have any frequency ranges left out.

  • We updated the message on the LCD to write "Sound level (%) ".

  • Instead of drawing a distance reading on the LCD, the program was changed to draw the measured sound level.

  • We also made the control loop run more frequently.



  • As for the sound sensor testing results, it was interesting to see how different sounds change the readings of the sensor. The general sound environment, when things are happening far from the sound sensor, gives measurements of 3-15 dB -- the ambient sound level. When someone starts talking loudly about 0.5 m away, the measurements go up to 30 dB or more. As part of the testing, our group played some tunes from a mobile phone directly into the sensor, which gave a whole spectrum of readings. Although the NXT sound sensor is specified to capture sound pressure levels up to 90 dB, we could often see a value of 93 dB on the screen. What's more, even though a tune played directly into the sensor can drive it to its highest reading, the same tune played 1 meter away doesn't seem to have more impact than the noise of the ordinary surroundings. The same goes for the angle: sound arriving in a direct line with the front of the sound sensor gives higher measurements than sound coming in at, say, a 45-degree angle to the front of the sensor.

    3.2. Data logging

    In our first iteration of data-logging we used the code supplied on the course-homepage, where the only thing we changed was what sensorport the microphone was on.
    Later on, we changed the way observation results are recorded. The SoundSampling.java program relies on Thread.sleep(5) to decide when to pull a reading from the microphone. This method of pulling results gives a data point every 5 + "time to pull" milliseconds, not every 5 milliseconds as one might think. For the later iterations of data logging, which depended on the data logger, we rewrote DataLogger.java to depend on the system clock (System.currentTimeMillis()) rather than a simple counter for the time. For the new implementation of DataLogger, see "New idea for data recording".
    For our first test (done with the original code) we recorded the same things as in part 3.1.
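
    The drift from sleep-based timing can be illustrated with a small plain-Java sketch. The 2 ms per-pull cost below is a hypothetical figure chosen just to show the arithmetic; the real cost depends on the sensor and firmware:

```java
public class SleepDrift {
    // Wall-clock time of the n-th sample when each loop iteration
    // sleeps sleepMs and then spends pullMs reading the sensor.
    static int actualTimeMs(int n, int sleepMs, int pullMs) {
        return n * (sleepMs + pullMs);
    }

    public static void main(String[] args) {
        int n = 1000, sleepMs = 5, pullMs = 2; // pullMs is a made-up figure
        // A counter-based logger labels sample 1000 as t = 5000 ms,
        // while the system clock has actually advanced to t = 7000 ms.
        System.out.println("counter: " + n * sleepMs + " ms, "
                + "clock: " + actualTimeMs(n, sleepMs, pullMs) + " ms");
    }
}
```

    This is why timestamping each sample with System.currentTimeMillis() is more trustworthy than multiplying a loop counter by the sleep interval.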

    Having done data logging, we were able to plot graphs and explicitly see how things work. We played tunes from a mobile phone and we tried clapping and talking. The results are as described in part 3.1, except that things are much clearer when the data is plotted as a graph:



    3.3. Sound controlled car

    Controlling with sound

    The course homepage supplies the source code for SoundCtrCar.java, which makes a robot react by driving forward, backward and so forth when it detects a loud sound. We uploaded the program to Thomas to test how the behaviour described in the source code translated into reactions from the robot in the real world.

    Out of the box, the threshold for Thomas to react was 90, which is too high for this system. It was easy to generate 90 dB when we first started out testing the microphone, but in those tests we were clapping and generating sound at point-blank range. As mentioned in 3.1, the sound source has to be fairly close to the microphone to read anything above the ambient level. It was therefore nearly impossible to make Thomas change direction once he got started, because we couldn't generate 90 dB sounds for the robot while it was moving (in spite of the sensor being mounted on top of the robot).
    As a test, the threshold was reduced to 60, which made it much easier to control the robot, and was still too high for background noise to interfere.
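
    The forward/right/left/stop cycle that the car steps through on each loud sound can be sketched as a small state machine. This is a plain-Java sketch of the logic, kept free of the leJOS motor API; the names and the exact comparison are our own, while the threshold of 60 is the value we settled on:

```java
public class SoundCycle {
    // Commands the car cycles through, one step per detected loud sound.
    enum Command { FORWARD, RIGHT, LEFT, STOP }

    static final int THRESHOLD = 60; // the lowered trigger level (dB reading)

    // Advance to the next command only when the reading exceeds the threshold;
    // otherwise keep doing whatever we were doing.
    static Command step(Command current, int reading) {
        if (reading < THRESHOLD) return current;
        switch (current) {
            case FORWARD: return Command.RIGHT;
            case RIGHT:   return Command.LEFT;
            case LEFT:    return Command.STOP;
            default:      return Command.FORWARD; // from STOP, start over
        }
    }
}
```

    On the NXT, the main loop would read the sensor, call step, and translate the resulting command into motor calls on Car.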

    ESCAPE button functionality
    Instead of having to press and hold the escape button while making loud sounds in the aforementioned peculiar manner, the following listener-based solution was used. Rather than testing whether escape is held down as the loop condition, the following ButtonListener anonymous class is used:


    Button.ESCAPE.addButtonListener(new ButtonListener()
    {
        public void buttonPressed(Button b)
        {
            System.exit(0);
        }

        public void buttonReleased(Button b)
        {
            System.exit(0);
        }
    });

    while (true)
    {
    ...

    3.4. Clap controlled car

    As a way to analyze Sivan Toledo's clap detection, we started out writing code for Thomas to be able to start driving forward on a double clap and stop on a single clap. To be able to analyze our results we integrated a datalogger into the new program, which was our rewritten version of DataLogger.java.
    The following sections are about the code we ended up writing for the program, and the analysis we made during the system's construction.

    New idea for data recording
    The modified code (or, an extract of it) is shown below.


    public DataLogger(String fileName)
    {
        startTime = (int) System.currentTimeMillis();
        try
        {
            f = new File(fileName);
            if (!f.exists())
            {
                f.createNewFile();
            }
            else
            {
                f.delete();
                f.createNewFile();
            }

            fos = new FileOutputStream(f);
        }
        catch (IOException e)
        {
            LCD.drawString(e.getMessage(), 0, 0);
            System.exit(0);
        }
    }

    public void writeSample(int sample)
    {
        Integer sampleInt = new Integer(sample);
        String sampleString = sampleInt.toString();
        Integer time = new Integer((int) System.currentTimeMillis() - startTime);
        String timeString = time.toString();

        try
        {
            for (int i = 0; i < timeString.length(); i++)
            {
                fos.write((byte) timeString.charAt(i));
            }
            fos.write((byte) ' ');
            for (int i = 0; i < sampleString.length(); i++)
            {
                fos.write((byte) sampleString.charAt(i));
            }
            // Separate items with newlines
            fos.write((byte) ('\n'));
        }
        catch (IOException e)
        {
            LCD.drawString(e.getMessage(), 0, 0);
            System.exit(0);
        }
    }

    Note that long arithmetic is completely unsupported in leJOS (the firmware for it seemingly wasn't implemented), and as a consequence, java.lang.Long is undefined. The value read -- the milliseconds since NXT boot -- is immediately truncated to an int, which will -- assuming an int width of 32 bits -- cause our program to malfunction after approximately 25 days of uptime, when the currentTimeMillis value overflows and wraps around.

    The main change is that a real-time-flavoured value is output along with the sample (which of course finds use in data logging over time), and that the output is broken into lines. This matches the simple format used for plotting gnuplot graphs, which requires several columns of data values and a single tuple per line in the file. The recorded values may then be plotted using the following bash shell fragment:


    gnuplot <<< 'set terminal png; set output "Sample.png"; plot "Sample.txt" using 1:2 with linespoints'



    The clap pattern and theory

    From the theory of Sivan Toledo, who investigated how the sound sensor can be used to detect claps, we have the clap pattern to be:

    A clap is a pattern that starts with a low-amplitude sample (say below 50), followed by a very-high amplitude sample (say above 85) within 25 milliseconds, and then returns back to low (below 50) within another 250 milliseconds.
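
    This pattern can be expressed as a small state machine fed with (time, level) samples. The sketch below is plain Java with the thresholds from the quote; on the NXT the loop would feed it ss.readValue() and System.currentTimeMillis(), but it is kept free of leJOS classes here, and the class and method names are our own:

```java
public class ClapDetector {
    static final int LOW = 50, HIGH = 85;        // amplitude bounds from the quote
    static final int RISE_MS = 25, FALL_MS = 250; // timing bounds from the quote

    private long lowTime = -1;   // time of the last low-amplitude sample
    private long highTime = -1;  // time of a high sample that followed a low one

    // Feed one (timestamp in ms, level) sample; returns true when a full
    // low -> high -> low clap pattern has just completed.
    public boolean sample(long t, int level) {
        if (highTime >= 0) {
            // Waiting for the level to fall back below LOW within FALL_MS.
            if (level < LOW && t - highTime <= FALL_MS) {
                highTime = -1;
                lowTime = t;
                return true;   // clap detected
            }
            if (t - highTime > FALL_MS) highTime = -1; // fell too slowly, reset
        }
        if (level < LOW) {
            lowTime = t;
        } else if (level > HIGH && lowTime >= 0 && t - lowTime <= RISE_MS) {
            highTime = t;      // sharp rise from quiet: clap candidate
        }
        return false;
    }
}
```

    A mobile phone tune ramps up more slowly than RISE_MS, which is exactly why the pattern filters it out.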

    Controlling the car using these constraints was a success. What is more, we tried different lower bounds so that a clap would be recognized more precisely. If we leave it at 50 then, as our experiment showed, even a mobile phone tune could fool the robot. Going up 35 dB very quickly, and back down 35 dB less quickly, can stem from more than just a clap, so setting the lower bound to 30 seemed reasonable. It was partly a success: the tune could not fool the robot any more, and the car still started after a normal clap. But in this case the environment was the enemy -- namely, the wheels of the robot. When the car started to move, the wheels produced sound levels of >40 dB! A normal clap could not stop the car. We had to agree that under these circumstances 50 dB was a very reasonable limit.

    Other than that, to see whether the theory corresponds to the practice in general, we plotted a graph of a very loud (and painful!) clap by skrewz:



    It can clearly be seen from this graph that the rise from <50 to >85 happens very quickly; this change can indeed be captured within 25 ms. The falling slope is not as steep, so it takes more time, but 250 ms is more than enough to capture it.

    In the end we made our robot start going forward on two claps and stop on one. This is accomplished using a waitForClap(timeout) call that can time out, and testing whether a second clap appears within a second of the end of the first one.
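
    The decision logic can be sketched as follows (plain Java; ClapCommand and interpret are our own hypothetical names, the timestamps stand in for the times at which claps were detected, and the one-second window is the value from the text):

```java
public class ClapCommand {
    static final long DOUBLE_WINDOW_MS = 1000; // second clap must land within 1 s

    // Given the timestamp of a first clap and of a possible second clap
    // (-1 if waitForClap timed out), decide the command: a second clap
    // within the window means drive forward, a lone clap means stop.
    static String interpret(long firstClap, long secondClap) {
        if (secondClap >= 0 && secondClap - firstClap <= DOUBLE_WINDOW_MS)
            return "forward";
        return "stop";
    }
}
```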

    4. Conclusion

    In this lab session we have done some extended testing of using sound input in an embedded system. This includes tests done in both static and dynamic environments. Unlike when we were testing the ultrasonic sensor, we recorded a lot of our observations and made graphs to verify our conclusions.

    Sivan Toledo's "algorithm" for detecting a clap was very good. When we used our own thresholds, the robot was able to pick up nearly every clap we made. From this testing of sound input it is clear that it is very hard to use variations in sound as a feedback signal in an environment where the distance between the sound source and the microphone isn't static, because sound dissipates so fast that the sound signature of the same sound is unrecognizable coming from two different distances.
