Monday, 8 March 2010

Martian Cyborgs in Utah



Here is a good example of why Utah is way more interesting than people give it credit for.


Due to the time lag between an instruction being sent from Earth and it being carried out by a rover on Mars, the rovers need to be able to operate pretty much autonomously, receiving their orders at the beginning of each Martian day (“sol”) and carrying them out over the course of it. One problem is that the rovers can’t work out what is and what isn’t interesting, so they don’t know what to focus on.
Over at Bad Astronomy, Phil Plait has a good example of how even a high-resolution photograph can be very misleading (craters look like mounds, etc.).

The chaps over at the Mars Desert Astrobiology Research Centre have been trying to solve this problem, and have given their project the rather grandiose title of the Cyborg Astrobiologist. Rather than Motoko Kusanagi with a microscope, what this means is that they’ve decided to test out the camera recording and identification mechanisms for the rovers on a much lower-tech propulsion device, i.e. a human researcher.

The tech is pretty much “off the shelf”. I don’t see much that a dedicated amateur with the funds (the VR goggles cost about $2,000) and the time couldn’t replicate. At least one of the “cyborg’s” investigations seems to have been carried out using a cell phone camera.



The “cyborg” really consists of an investigator wearing a pair of VR goggles that receive the feed from a camera, together with software that assembles a panorama from the mosaic of images it captures. The real innovation happens in between the camera and the goggles, when the data is processed. To a computer, any image is simply a string of dots, each with a given value. Getting computers to distinguish between different shapes was difficult enough, let alone getting them to recognise something like “unusualness”. The solution is basically to have the computer mark out any region where the pixels differ from their neighbours by more than a given limit, though this sounds like something that would have involved months of fine-tuning.
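The core idea – flag pixels that stand out sharply from their surroundings – can be sketched in a few lines of NumPy. To be clear, this is a toy illustration of the general principle, not the project’s actual algorithm; the function name and the threshold value are my own inventions.

```python
import numpy as np

def flag_unusual(image, threshold=50.0):
    """Flag pixels that differ from their 4-neighbour mean by more
    than `threshold`. A toy sketch of the idea, not the real system;
    the threshold is an arbitrary illustrative value."""
    img = image.astype(float)
    # Pad edges by replication so every pixel has four neighbours.
    padded = np.pad(img, 1, mode="edge")
    neighbour_mean = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                      padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    # "Interesting" pixels are those that stand out locally.
    return np.abs(img - neighbour_mean) > threshold

# A flat grey "rock face" with one bright anomaly in the middle.
scene = np.full((5, 5), 100.0)
scene[2, 2] = 255.0
mask = flag_unusual(scene)
print(mask[2, 2], mask[0, 0])  # the anomaly is flagged; the flat background is not
```

The months of fine-tuning the paper hints at would presumably go into choosing that threshold and deciding how large a flagged region has to be before it counts as worth a closer look.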

The effect looks rather like the visionscape from the old Terminator films – bits of interest are outlined and rendered in sharper relief through increased focus, the way a target would be in the film. To test this, the cyborg was trundled around Utah, which has the advantage of looking like Mars. The software picks out bits of a rock face and decides how “interesting” various portions are. It’s quite impressive – take a look: how many of these bits would you have identified?

That said, there are a few quibbles about this. One is that this program seems to have been under development for at least five years (look at some of the earlier work referenced in the paper). That might be, though not necessarily, a sign of something that’s great in theory but just takes much too long to make a reality. The other is that, as much as this may work, it doesn’t look like it’s going to replace the time-honoured method of analysing image data: luckless Ph.D. students.
