Saturday, 29 November 2008

End course project, Lesson 11

Date: 28 11 2008

Initial Description of End Course Project

Three discussed proposals:

1. Sound-signature differentiating robot arm

2. Multi-microphone boomerang-copycat

3. X-Y-coordinator

1. Sound-signature differentiating robot arm

1.1. Description

This proposal is about a microphone-equipped robotic arm that distinguishes sound signatures. It is a mechanically challenging project, where the main difficulty is making the robotic arm position itself correctly.

1.2. Description of platforms and architectures

The robot would obviously be built with NXTs and the NXT motors. But given that this proposal is about an articulated arm, it will probably not be possible to make the setup with only three NXT motors, and thus two NXTs are needed.

The robot would have a shoulder joint (which has two degrees of freedom), an elbow joint (one degree of freedom) and a wrist joint, which could be a two-degrees-of-freedom joint, but could also be a simpler one-degree-of-freedom joint. The finger joint would be a simple squeezing joint. This adds up to a five-degree-of-freedom articulated arm, which necessitates a two-NXT approach.

The robotic arm should be able to position itself rather precisely, given a 3D-coordinate tuple. This means that we would have to do the maths to compute the joint angles from a target coordinate (inverse kinematics).
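For a simplified version with the shoulder and elbow moving in one plane, the maths boils down to the standard two-link inverse-kinematics solution. Below is a minimal sketch in Java (the language leJOS uses); the class and method names are our own, not part of any leJOS API:

```java
// Hypothetical sketch: two-link planar inverse kinematics.
// Given a target (x, y) and link lengths l1 (upper arm) and l2 (forearm),
// return the shoulder and elbow angles in radians.
class ArmIK {
    public static double[] solve(double x, double y, double l1, double l2) {
        double d2 = x * x + y * y; // squared distance to target
        // Law of cosines gives the elbow angle.
        double cosElbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2);
        if (cosElbow < -1 || cosElbow > 1) {
            throw new IllegalArgumentException("target out of reach");
        }
        double elbow = Math.acos(cosElbow);
        // Shoulder angle: direction to the target minus the inner
        // triangle angle formed by the two links.
        double shoulder = Math.atan2(y, x)
                - Math.atan2(l2 * Math.sin(elbow), l1 + l2 * Math.cos(elbow));
        return new double[] { shoulder, elbow };
    }

    public static void main(String[] args) {
        double[] angles = solve(10, 10, 10, 10);
        System.out.println("shoulder=" + angles[0] + " elbow=" + angles[1]);
    }
}
```

Turning those angles into motor rotations, and extending the solution to all five degrees of freedom, would be the actual work of the project.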

The robot's software would be a closed-loop control program with a layered architecture on top of it. The lower layers would generally handle the more hardware-related matters, whereas upper layers would handle more goal-oriented matters. What this means concretely is left for discussion.

This robot, like the X-Y-coordinator proposal, will need some sort of computer input (and perhaps output), using essentially the same methods as in the X-Y-coordinator case.

Overall, the robot adopts a reactive control model [2], because the idea is not to analyze the data coming from the sensors, but to immediately pursue the goal whenever it hasn't been reached yet. Although it is hard to predict in advance, this may not be enough. To make things work better, a hybrid approach [2] might be the solution: one part of the robot would try to figure out sound signatures (or colors), and another part would decide whether or not to pick something up.

1.3. Problems to be solved

As mentioned, the biggest challenge would be to make the robotic arm position itself correctly. In addition, the project involves the problem of distinguishing sounds. The distinction might only be between 1 Hz bleeping and 2 Hz bleeping, but even that will have its difficulties.
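As a sketch of how little is needed for the 1 Hz versus 2 Hz case: if we can timestamp the moments the sound level crosses a threshold, the mean inter-beep interval gives the frequency directly. The code below is a hypothetical illustration in plain Java; on the NXT the timestamps would come from polling a sound sensor:

```java
// Hypothetical sketch: estimate a beep train's frequency from the
// timestamps (in milliseconds) at which the sound level crossed a
// threshold. Pure arithmetic, so it runs anywhere.
class BeepClassifier {
    /** Returns the estimated beep frequency in Hz. */
    public static double estimateFrequency(long[] beepTimesMs) {
        if (beepTimesMs.length < 2) {
            throw new IllegalArgumentException("need at least two beeps");
        }
        // Mean period = total span divided by the number of intervals.
        long span = beepTimesMs[beepTimesMs.length - 1] - beepTimesMs[0];
        double meanPeriodMs = (double) span / (beepTimesMs.length - 1);
        return 1000.0 / meanPeriodMs;
    }

    public static void main(String[] args) {
        // Beeps every 500 ms should classify as 2 Hz.
        System.out.println(estimateFrequency(new long[] { 0, 500, 1000, 1500 }));
    }
}
```

The real difficulty on the NXT would be the threshold detection itself, given the sensor's noise and the polling rate.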

1.4. Expectations at the end

This project would end in a robotic arm that is able to pick up specific objects. The difference between the objects would be either a sound signature or the color of the object.

Variations of this project include making precise movements in four or five degrees of freedom, or mounting a light sensor and having the robot pick up only the correctly-colored ball.

2. A multi-microphone Boomerang copy-cat

2.1. Description

This project proposes a multi-microphone copy-cat of Boomerang [1], where the aim is to determine the origin of a sound and point an arm in its direction.

2.2. Description of platforms and architectures

The hardware for this project would be a multi-microphone setup mounted on a static fixture, with a simplistic arm beside it.

The software for the robot, which would work in isolation (i.e., there is no PC in this setup), would be the challenging part of this proposal. It might very well be impossible to get the degree of precision needed for this project from the NXT and its internal workings (that is, the firmware implementation).

This could be implemented by relying on reactive control [2] only: the idea is to find the direction of the sound and point at it, so all the robot has to do is react to what the sensors are saying. If, on the other hand, one thinks of 'pointing towards the sound source' as a goal state that the robot has to maintain over time, then it could, in some sense, be called feedback control [2].

Alternatively, this could be implemented by having the robot do a whole-world calculation and, from its model, determine the action to be taken. This model would probably contain a notion of the origin coordinates, relative to some set of axes.

2.3. Problems to be solved

The challenge is to create a precise timing setup for the NXT and to compute the direction of sound on a machine with limited processing power. Variations include making an arm point in the direction of the origin of the sound, and trying to do the trick with very few microphones (think middleware-for-a-vacuuming-robot).
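The core of the direction computation is small; the hard part is getting trustworthy arrival times out of the NXT. Assuming we had a reliable arrival-time difference between two microphones, a far-field bearing estimate could look like this (a sketch with assumed names and constants, not tested on real hardware):

```java
// Hypothetical sketch: bearing of a far-away sound source from the
// arrival-time difference at two microphones a fixed distance apart.
class SoundBearing {
    static final double SPEED_OF_SOUND = 343.0; // m/s, at roughly 20 degrees C

    /**
     * @param deltaTSeconds     difference in arrival time between the two mics
     * @param micSpacingMeters  distance between the microphones
     * @return bearing in radians; 0 means the source is straight ahead
     */
    public static double bearing(double deltaTSeconds, double micSpacingMeters) {
        // The extra path length to the far mic is c * deltaT; dividing by
        // the spacing gives the sine of the bearing angle.
        double s = SPEED_OF_SOUND * deltaTSeconds / micSpacingMeters;
        s = Math.max(-1.0, Math.min(1.0, s)); // clamp measurement noise
        return Math.asin(s);
    }

    public static void main(String[] args) {
        // A sound arriving simultaneously at both mics is straight ahead.
        System.out.println(bearing(0.0, 0.2));
    }
}
```

With a 20 cm spacing, the full sweep from straight ahead to fully sideways corresponds to a time difference of only about 0.6 ms, which illustrates why the NXT's timing precision is the make-or-break question here.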

2.4. Expectations at the end

This project would result in the described hardware, which would be able to find the origin of a sound rather precisely and point towards the source.

If it turns out that the leJOS firmware is inadequate for this use, it might be possible to write a forked firmware that can measure the input precisely enough.

-- But this is the success case. It might not be possible to implement this proposal, and in this case the project will result in a rather uninteresting presentation at the show-and-tell.

3. An X-Y-coordinator (the one we have chosen as an end course project)

3.1. Description

This project calls for a rather large physical construction that is able to move a head around on paper or a similar surface. The robot is a special case of a Cartesian robot: it has two major axes, plus a tertiary up-and-down motion, which calls for using at the very least three NXT motors. If we want to do (even) more interesting things, we may need a fourth motor, and at that point we will also need another NXT.

3.2. Description of platforms and architectures

The head may be an ``output'' head---a pen, for example---or an input one: A light sensor or a probe (see below).

The name of the game in this robot is precision. The robot has all the physical advantages, so it's the software that's to blame if the robot winds up being inaccurate.

The software for controlling the robot would be layered, as was the case for the articulated robot arm. In this case, the concretization is easier: The lower layer takes commands from upper layers, and the separation between the layers is whether the layer controls the motors or controls the motor-controllers.
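As a sketch of what that boundary could look like in Java (the interface and class names are our own invention, not leJOS classes): the upper layer plans in coordinates and hands primitive moves to a lower layer that owns the motors. Here the lower layer is stubbed out so the sketch runs anywhere:

```java
// Hypothetical sketch of the layer boundary. The upper layer thinks in
// coordinates; only the lower layer touches the motors.
interface MotionLayer {
    void moveTo(double x, double y); // straight-line move of the head
    void penDown();
    void penUp();
}

// A stand-in lower layer that merely records the commands it receives.
// On the NXT, this is where the motor-controlling code would live.
class LoggingMotionLayer implements MotionLayer {
    final StringBuilder log = new StringBuilder();

    public void moveTo(double x, double y) {
        log.append("moveTo(" + x + "," + y + ");");
    }

    public void penDown() { log.append("penDown;"); }

    public void penUp() { log.append("penUp;"); }

    public static void main(String[] args) {
        LoggingMotionLayer lower = new LoggingMotionLayer();
        lower.penDown();
        lower.moveTo(1, 2);
        lower.penUp();
        System.out.println(lower.log);
    }
}
```

The point of the stub is that the upper, goal-oriented layers can be written and tested against the interface before the motor-controlling layer exists.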

When speaking of control, some aspects are up for discussion. We are dealing with an X-Y coordinate system here, so it is debatable exactly how applicable the idea of map building suggested by Mataric [4] is. The idea is for the robot (or a part of it) to navigate itself to the goal. Some aspects of 'managing the course of the robot' [3] do apply: in our case, knowing where the head is at any moment in time (localization) is very important, as is knowing where to go (mission planning) and how to get there (path-finding).

But is it important to remember where the robot has been? In connection with this, one must not forget that the robot gets precise vectors as an input.

It's analogous to having a predetermined map and following it blindly, without actually checking whether the map is correct. If the robot is absolutely precise, that might be enough, but generally it is not.

Nevertheless, this kind of approach cannot be ruled out beforehand. It still might come in handy.

Another approach to discuss for this project is what would be ``sequential control'' [2]. In some interpretation, giving vectors as input is the same as giving a series of steps that the robot must run through in order to accomplish a task. If the robot has to draw a complicated figure, for example, it may do so vector by vector, or---after analyzing the input---it might choose a cleverer sequence for doing the work. This bears some resemblance to a deliberative control model [2], where the robot has to take into account all relevant inputs and, after thinking, make a plan and then follow it.

3.3. Problems to be solved

The main challenge is to calculate the trajectories -- the gradients -- of several primitive commands, such as ARC(a_x,a_y,b_x,b_y,c_x,c_y) or LINE(a_x,a_y,b_x,b_y), and to perform closed-loop feedback to govern the motors, so that a shape isn't misdrawn even in the case of unexpected resistance.
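For the ARC primitive, the first step is plain geometry: recover the circle through the three given points by intersecting perpendicular bisectors, then step an angle parameter from the start point towards the end point. A sketch of that first step in Java (the class and method names are ours, not part of any existing codebase):

```java
// Hypothetical sketch: the geometry behind an ARC(a, b, c) primitive.
class ArcGeometry {
    /**
     * Returns {centerX, centerY, radius} of the circle through
     * (ax,ay), (bx,by) and (cx,cy), via the circumcenter formula.
     */
    public static double[] circleThrough(double ax, double ay,
                                         double bx, double by,
                                         double cx, double cy) {
        double d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by));
        if (Math.abs(d) < 1e-12) {
            // Degenerate arc: the three points lie on a line.
            throw new IllegalArgumentException("points are collinear");
        }
        double a2 = ax * ax + ay * ay;
        double b2 = bx * bx + by * by;
        double c2 = cx * cx + cy * cy;
        double ux = (a2 * (by - cy) + b2 * (cy - ay) + c2 * (ay - by)) / d;
        double uy = (a2 * (cx - bx) + b2 * (ax - cx) + c2 * (bx - ax)) / d;
        double r = Math.hypot(ax - ux, ay - uy);
        return new double[] { ux, uy, r };
    }

    public static void main(String[] args) {
        double[] c = circleThrough(1, 0, 0, 1, -1, 0);
        System.out.println("center=(" + c[0] + "," + c[1] + ") r=" + c[2]);
    }
}
```

The trajectory itself would then be generated by sampling points along the circle between the start and end angles, and the closed-loop part would keep the head on those sampled points.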

3.4. Expectations at the end

This project will result in an X-Y-coordinator that is at least able to draw simple vector graphics on a piece of paper, given a description of them in an input file.

Variations of this project include mounting a (color?) scanner head or an industrial measuring probe, in order to generate a 3D model (concretely, a height map) of an object. A more esoteric variation would be to make the robot replicate the height map of an object seen in its work area.

A less esoteric variation on this proposal would be to allow interactive control from a PC program.

4. Choice of project

We've chosen an end course project, and while all of the outlined ideas seem worthwhile to pursue in their own way, the chosen one is the most interesting: it has a mathematical-programming angle (the gradients), a software-architecture angle (the layered architecture), and an interesting build (which may be topped by the articulated arm, though).

Feasibility also played a part in the choice: the Boomerang proposal may fail miserably (or turn out to be trivial), whereas the articulated arm has so many inherent difficulties that it may prove impossible to construct satisfactorily.

5. Work plan

  • Build the robot

  • Implement a straight-line governor

  • Implement an arc governor

  • Implement governor-using algorithms

6. References


[2] Fred G. Martin, Robotic Explorations: A Hands-on Introduction to Engineering, Chapter 5

[3] Brian Bagnall, Maximum Lego NXT: Building Robots with Java Brains, Chapter 12

[4] Maja J Mataric, Integration of Representation Into Goal-Driven Behavior-Based Robots
