RoombaTrap

Having finally dragged myself into the smartphone age I was at last able to get set up with the mobile version of TriggerTrap, a timelapse gizmo created by some friends of mine. I already had a starting project in mind: to capture the antics of my Roomba.

Having recently returned to Edinburgh for an MSc, I find myself studying Matlab for the first time in a decade. I always feel that the best way to familiarise yourself with a programming language is to have a goal in mind, and I remembered from a JMM talk that Matlab can be used for image processing. So whilst I’d normally reach for Python to tackle an unfamiliar task, on this occasion I took the rather circuitous route of Matlab, Processing and some video-editing tools.

Step one, then, was to get some images to process! For that I hooked a Nexus 4 to a Canon 550D with a wide-angle lens, mounted on a tripod in a corner of the room. I secured that behind a virtual wall to prevent the Roomba bumping it out of alignment. I found that the freshly installed Triggertrap Android app couldn’t actually trigger the camera until I’d power-cycled the phone, but that’s not exactly a difficult fix! Also straightforward was the capture maths: grabbing one frame per second means 15 minutes of real time will give you a minute of footage at 15fps, using 900 shots in total. As a first experiment, I was happy to take just half that for a 30-second clip. Working in manual focus I got a reference capture of the room (using TT as a fancy cable release), then brought in the Roomba, killed the lights, and set the app to work in timelapse mode.
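Just to sanity-check that arithmetic, here it is as a throwaway Matlab snippet (the numbers are simply the ones above):

```matlab
% Quick sanity check of the capture maths.
interval_s   = 1;     % seconds between triggers
playback_fps = 15;    % playback frame rate of the final video
real_minutes = 15;    % length of the capture session

total_shots  = real_minutes * 60 / interval_s;   % = 900 shots
clip_seconds = total_shots / playback_fps;       % = 60 seconds of footage
fprintf('%d shots -> %d seconds of video\n', total_shots, clip_seconds);
```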

With 450 captures I actually completed a loop on the file numbering on my camera (I think that’s 30,000 shots now), which made post-processing the files slightly more hassle than it would normally be. But after renaming into chronological order and resizing down I was able to push them first through PhotoLapse, then Sony’s improbably named Vegas Movie Studio HD Platinum 10 to get the video above.
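For anyone hitting the same wraparound, a quick Matlab one-off along these lines will sort a folder of frames back into shot order by timestamp before they go anywhere near PhotoLapse (the folder name and output pattern are just placeholders, not my actual setup):

```matlab
% Rename a folder of captures into chronological order, ignoring the
% camera's (wrapped) file numbering and sorting by timestamp instead.
folder = 'captures';                          % placeholder folder name
files  = dir(fullfile(folder, '*.JPG'));
[~, order] = sort([files.datenum]);           % sort by modification time
for k = 1:numel(files)
    src = fullfile(folder, files(order(k)).name);
    dst = fullfile(folder, sprintf('frame_%04d.jpg', k));
    movefile(src, dst);
end
```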

Next stop, Matlab! I wrote a function to search the green channel of an image file for the most intense point, returning its coordinates along with the intensity. It turned out to be pretty easy to iterate over the contents of a directory, populate an array with this position/intensity data, and write that to a text file. That was then read into Processing to recreate the positions as perfect ellipses on clean backgrounds – the original captures had quite a lot of noise. This created a slight issue: if the Roomba was obscured by a piece of furniture, its location would instead be inferred from the greenest piece of noise! So I restricted Processing to only render points for which the intensity was above a threshold of 100; for fun I also rendered them with their corresponding intensity (unsurprisingly, more distant locations tended to be less bright). After some playing around interactively, these new frames were written out as files and again merged into a video, pretty similar to the raw capture:
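For other Matlab newcomers, the detection step boils down to something like the sketch below – the file names, output format and variable names are illustrative rather than lifted from my actual script:

```matlab
% For each frame: find the brightest pixel in the green channel, record
% its coordinates and intensity, then dump everything to a text file
% for Processing to read.
folder  = 'frames';                              % placeholder folder name
files   = dir(fullfile(folder, '*.jpg'));
results = zeros(numel(files), 3);                % columns: x, y, intensity

for k = 1:numel(files)
    img   = imread(fullfile(folder, files(k).name));
    green = img(:, :, 2);                        % green channel only
    [intensity, idx] = max(green(:));            % most intense green pixel
    [row, col] = ind2sub(size(green), idx);      % linear index -> row/column
    results(k, :) = [col, row, double(intensity)];
end

dlmwrite('roomba_positions.txt', results, 'delimiter', ' ');
```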

I also experimented with rendering lines between consecutive captured locations – this required being a bit more aggressive with the thresholding, demanding that both a point and its predecessor be sufficiently intense, otherwise there were some violent jumps to the remaining noisy locations. The result is a disconnected collection of lines, but it still makes it easier to track the navigation – and you can identify the edge of one of the sofas!
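The drawing itself happens in Processing, but the pair-thresholding logic amounts to something like this Matlab sketch (placeholder file name again, and the same cutoff of 100 as for the points):

```matlab
% Only join consecutive points when both are bright enough to be the
% Roomba rather than the greenest bit of noise.
data      = dlmread('roomba_positions.txt');   % columns: x, y, intensity
threshold = 100;
axis ij; axis equal;                           % match image coordinates
for k = 2:size(data, 1)
    if data(k, 3) > threshold && data(k-1, 3) > threshold
        line(data(k-1:k, 1), data(k-1:k, 2), 'Color', 'g');
    end
end
```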

Next job was to reintroduce the background image of the room; here it is for the final set of lines:

Roomba lines

and here with a cumulative rendering of the detected locations:

The final frame being:

Roomba dots

Playing around with various parameters in Processing I was able to adjust the frame rate, toggle combinations of segments and points, and render partial data such as in the intro video. Sadly the latest version of Processing doesn’t support applet export (and earlier versions throw spectacular security warnings), so I won’t be sharing it here. Plus a lot of the fun is writing these things yourself! But I hope I’ve given enough details to help other new Triggertrappers get started: PhotoLapse is particularly handy.
