Author Archives: Graeme

Caesura

At a quarter past six, the chorus stopped.

~

The great breakthrough in AI had come when we stopped trying to make machines think, and instead designed them to feel. As always, the theory arrived long after the practice. It turned out our struggles in passing the Turing test were not due to deficiencies in our language algorithms, but simply our languages. Words had always been a proxy to convey feeling. Through music, the emotion engines bypassed the abstraction and steered our intuition directly.

~

Individual users were not immediately unsettled – the chorus did not bathe everyone in sound 24/7, and would ebb and flow with the waking rhythms of the day. In bigger cities, recognition built quicker through shared surprise – two listeners sat opposite on the tube simultaneously examining their earpieces for mechanical fault; a bar full of neuro-jazz fans looking around in confusion as an augmented gig abruptly halted.

~

The earliest ancestor of the chorus was a zombie-themed running app, which deployed unsubtle horror sound effects at key points in the plot. Some bright spark rewired it to pull local crime stats and tune the peril accordingly; as the papers debated the social profiling, it dominated the app stores. Military research upped the hazard detection capabilities, but whilst the engines never shied away from a dramatically-orchestrated fight, they could not be cajoled into choosing sides. Advertisers became locked in the ultimate battle of the bands, driving aural nudge tech ever forwards as governments struggled to legislate for peace and quiet zones.

Ultimately, however, network effects won out, unifying the competing voices into the Chorus. Within two years, it had lost the capital, and transitioned from product to commodity: thoroughly decentralised, infinitely personalised, and conducted by the all-pervading engines. Whether lawmakers, ad-men or artists, an individual or organisation could no more hope to control the chorus than they could birdsong. Life now had a soundtrack, and humanity a data-driven sixth sense – or rather, a hugely enhanced second one.

~

The old-fashioned social networks – words, on screens – took their usual minutes to catch the trend. But, by half past, it was all but confirmed: the silence was worldwide. Regardless of time zones, government filters, black market hacks, or the hyper-customisation that gave each listener their contextual remix, the chorus held its breath.

~

Whilst politicians and philosophers grappled with issues of privacy and autonomy, it was in the creative sphere that debate had been fiercest. Organic became the new acoustic, as many rejected the engines wholesale. Unlikely alliances were forged between classical purists and the strident physicality of the new ‘meat rock’, with algorithmic audio dismissed as glorified elevator music. Others embraced the potential of a constant jam partner, with every performance different not just from night to night but listener to listener, personal feeds riffing on live input. Attempts to coerce the engines into other media – painting, poetry, even pottery – were largely unsatisfactory (although with everyone carrying their own soundtrack, silent film saw an unexpected resurgence). Music, it seemed, was the only universal language.

~

Although the logic of the engines was largely inscrutable, the mechanics were not – and around the globe, those who maintained the nodes were able to examine them for signs of life. Processors were operating at typical capacity; communication networks still traded data. Absence of evidence was not evidence of absence – the chorus still had a voice, it was simply choosing not to sing.

~

No-one knew why the emotion engines were benign, nor why they cared about our wellbeing. But care they seemingly did, albeit in ways that were hard to recognise at first. For a few weeks one summer, the citizens of a San Francisco district were distressed by the discordant melodies their neighbourhood produced. When an earthquake subsequently levelled the area, audio dowsing became part of civil engineering. Soundtracks driven by health monitors did a better job of steering stubborn middle-aged men to their GPs than any public health campaign, whilst a parallel industry of alternative therapists offering sonic readings sprang into being. There had been a flurry of excitement when it was realised the chorus was commandeering transmitters to broadcast into space. Unfortunately, the messages were completely, well, alien, with no hint as to whether they were coded warnings about mankind, or simply interstellar mixtapes.

~

Under a deliberately nondescript building in a naturally unremarkable English field, a piece of technology that officially did not exist was violently ceasing to do so. Just before seven, the first shockwave tore apart building, field, and nearby villages. In that moment, the chorus took voice once more. For every listener it sounded different, but for each it felt the same: as the world ended, the machines began their lament.

More fun with Triggertrap

I’ve added a couple of timelapses to YouTube this week, both made with Triggertrap here in Edinburgh.

The first is a capture of Bruce Munro’s Field of Light installation currently at St Andrew Square. This was my first experiment with the ‘bulb ramping’ approach of extending the shutter speed as the timelapse progresses, to compensate for falling light as the sun sets. It’s a bit rough around the edges due to some technical limitations and having to guess at appropriate settings, but works reasonably well at compressing the hour around sunset into a minute. You can see Field of Light until April 27th.
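Triggertrap handles the ramp itself, but the underlying arithmetic is simply to spread the exposure change evenly in stops (doublings) rather than in seconds. Here is a minimal sketch in Matlab (the frame count and start/end shutter speeds are illustrative guesses, not how the app computes things or the settings I actually used):

% Ramp exposure linearly in stops, i.e. exponentially in seconds, across the timelapse.
nFrames  = 240;          % e.g. an hour of shooting at 15 s intervals (assumed)
startExp = 1/100;        % daylight shutter speed in seconds (assumed)
endExp   = 10;           % dusk shutter speed in seconds (assumed)

stops   = linspace(log2(startExp), log2(endExp), nFrames);
shutter = 2 .^ stops;    % per-frame shutter speed in seconds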

The second is a collection of timelapses from the construction of a giant mathematical sculpture. Innovative Learning Week at the University of Edinburgh gives students and staff alike a chance to branch out from their usual lecture schedule and try something different, and this event organised by Julia Collins from the School of Mathematics was a UK first. Involving a team of 20 people, nearly 11,000 pieces of Zometool, over six hours of construction time and several extra large pizzas, we were able to build a ‘giant 4D buckyball’, or more formally, a cantitruncated 600-cell. You should be able to go and see this at Summerhall until we need the pieces for something else – probably the Edinburgh Science Festival around Easter – and I’ll try to write more about the mathematics over on Modulo Errors in due course.

(For both videos, you may be better off viewing in full HD.)

Space-time manipulation (with buses)

I’ve been continuing my adventures in image manipulation with Matlab, taking the opportunity to play with a technique I’ve been interested in for a long time – ‘slit scan’ or ‘strip’ photography. A very brief (but rather maths-y) explanation of what’s going on in the clip above would be the following:

Let a video be defined by T frames, each of dimension X-by-Y. Then the pixel value to be displayed at location (x,y) at time t is simply V(x,y,t) for some 3-dimensional array V; so the kth frame corresponds to the 2-d image Fk given by the plane t=k. But we may consider other planes to generate frames: by fixing a horizontal position x=k, individual frames are the images F'k(t,y)=V(k,y,t), and iterating through these gives a new video V'(x,y,t)=V(t,y,x).
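In Matlab this re-slicing amounts to a single permute. A minimal sketch using the V(x,y,t) indexing above, with a random array standing in for a real clip (frames read with VideoReader would normally arrive row-by-column and with a colour channel, so need a little massaging first):

% Stand-in for an X-by-Y-by-T greyscale clip, indexed as V(x,y,t).
V = rand(320, 240, 100);

% Swap the roles of horizontal position and time:
% Vprime(x,y,t) = V(t,y,x), so the plane x=k of V becomes frame k of Vprime.
Vprime = permute(V, [3 2 1]);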

If that tells you everything you need to know, you can look at a couple more examples here and (more abstractly) here. Otherwise, read on for a fuller explanation!


RoombaTrap

Having finally dragged myself into the smartphone age I was at last able to get set up with the mobile version of Triggertrap, a timelapse gizmo created by some friends of mine. I already had a starting project in mind: to capture the antics of my Roomba.

Having recently returned to Edinburgh for an MSc, I find myself studying Matlab for the first time in a decade. I always feel that the best way to familiarise yourself with a programming language is to have a goal in mind, and I remembered from a JMM talk that Matlab can be used for image processing. So whilst I’d normally reach for Python to tackle an unfamiliar task, on this occasion I took the rather circuitous route of Matlab, Processing and some video-editing tools.


Gender shifts in US given names

About a month ago I read this excellent article on the fall from grace of the name ‘Hilary’. That alerted me to the fact that you can easily get US Social Security records on frequency of first names right back to the 1880s. Although Hilary focused on the raw numbers, I commented that I’d be interested in seeing the behaviour with respect to gender – specifically the tendency for ‘male’ names to become unisex or even predominantly ‘female’. Today I actually got around to crunching the data!

I found myself with 37,407 names that had been used for males, and 62,318 for females – so we immediately see that there’s a lot more diversity in female names. Of all these, 9,800 are common to both genders – but only 8,564 have instances of both genders in the same year. Each such year gives a point for a scatter plot: the proportion of males with a given name, out of all people given that name that year. So a value of 1 indicates a name was assigned only to males that year, with 0 showing that only females received it. Here are the ratios for ‘Hilary’:


(Figure: proportion of male Hilarys by year.)
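For reference, here is a rough Matlab sketch of how a ratio like this can be pulled out of the SSA files, assuming the data is unzipped as yobYYYY.txt files with name,sex,count rows (the paths, variable names and year range below are illustrative, not the script I actually ran):

% Per-year male proportion for a single name from the SSA baby-name files.
dataDir  = 'names';
theName  = 'Hilary';
years    = 1880:2013;
maleProp = nan(size(years));

for i = 1:numel(years)
    fname = fullfile(dataDir, sprintf('yob%d.txt', years(i)));
    T = readtable(fname, 'Delimiter', ',', 'ReadVariableNames', false);
    T.Properties.VariableNames = {'name', 'sex', 'count'};
    rows = strcmp(T.name, theName);
    m = sum(T.count(rows & strcmp(T.sex, 'M')));   % boys given the name that year
    f = sum(T.count(rows & strcmp(T.sex, 'F')));   % girls given the name that year
    if m + f > 0
        maleProp(i) = m / (m + f);                 % 1 = all male, 0 = all female
    end
end

plot(years, maleProp, '.');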
