Automation and the Airbus

Our local papers in South Africa recently reported on an ‘extraordinarily dangerous’ event on take-off of a South African Airways Airbus A340-600 from OR Tambo airport in Johannesburg on 24 February 2021. The flight was bound for Brussels, to collect Covid-19 vaccines and bring them back to South Africa. The incident, which was only reported to the SA Civil Aviation Authority three weeks later, involved the aeroplane’s automated system kicking in “to override the pilots to prevent the plane from stalling on take-off”, known as an “alpha floor event”. This was reported in a short article in Business Day on 23 March 2021 by editor Carol Paton, from which the preceding quotes are taken (the full article is unfortunately behind a paywall; I read it in print).
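
By way of a purely illustrative aside – and emphatically not Airbus’s actual control law – the ‘alpha floor’ protection can be thought of as the autothrust system watching the angle of attack and commanding full (TOGA) thrust when it gets too close to the stall, whatever the pilots have set. A toy sketch of that idea, with a made-up threshold value:

```python
# A toy illustration of an "alpha floor"-style protection, NOT Airbus's
# actual control law: if the angle of attack gets too close to the stall,
# the autothrust commands full (TOGA) thrust regardless of the pilots'
# thrust setting. The threshold value here is invented for the example.
ALPHA_FLOOR_DEG = 15.0  # placeholder threshold, not a real A340-600 figure

def commanded_thrust(angle_of_attack_deg: float, pilot_thrust: float) -> float:
    """Return thrust as a fraction of maximum (1.0 = TOGA)."""
    if angle_of_attack_deg >= ALPHA_FLOOR_DEG:
        return 1.0  # the protection overrides whatever the pilots have set
    return pilot_thrust

# An under-entered weight leads to rotating too early and too slowly, the
# angle of attack rises, and the protection kicks in:
print(commanded_thrust(angle_of_attack_deg=16.2, pilot_thrust=0.85))  # -> 1.0
```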

A longer article in the Sunday Times Daily (a bizarre recent renaming of the Times Daily newspaper) the next day, by Graeme Hosken (also behind a paywall; I’m a subscriber), describes the incident as arising from a miscalculation of the aircraft’s fuel load. “Onboard computers on Airbus aircraft are designed to automatically take control and prevent the plane from crashing”, he writes, and the aeroplane also automatically reported the incident to South African Airways, Airbus and the engine maker, Rolls-Royce.

According to one of Hosken’s sources, the fault was due to the pilots and aircrew entering the wrong information into the aeroplane’s computers, underestimating the weight of the aircraft and its load by 90 tonnes. With the aircraft weighing 300 tonnes, that is an underestimate of almost a third. The source observed that it is unknown whose error it was – the crew could have been given the wrong information, or they could have entered it wrongly – noting that this “is a massive underestimation and can carry catastrophic consequences. Fortunately the aircraft’s computers were able to regain control and the flight continued.”

But aviation expert Guy Leitch noted in the article that the “Airbus A340-600 has a known bug in its software that occasionally allows incorrect data into the flight management system, but this is why the aircrew are required to double-check all calculations… It is extraordinary to get the weight of the aircraft wrong by a full 90 tons. That is not a small weight. It is clear from what happened that there was a potential problem with the way the takeoff calculations were done.” He also said there was “a steady stream of such incidents and accidents occurring globally where pilots had miscalculated an aircraft’s takeoff parameters”.

I’m really interested in this combination of potential human error, the mention of a known bug in the aeroplane’s software, and the automated takeover of the aircraft’s functions in response to the error. Both articles describe this as a near catastrophe averted by the automated system – but if we look at Boeing’s fatal crashes of 2018 and 2019, which arose from pilots interacting with automated systems, this human–machine assemblage bears closer examination. I’m going to post on those Boeing incidents – Airbus and Boeing between them form a duopoly that dominates commercial aircraft manufacture today.

AI and The Invisibles

I read an article in The Guardian newspaper recently about an AI visual identification system called CLIP that was fooled into misidentifying images by the application of text signs. The example they gave was an apple with a sticky note attached reading ‘iPod’, which, as the article has it, made the AI decide “that it is looking at a mid-00s piece of consumer electronics” (i.e. an iPod). CLIP’s maker, OpenAI, calls this a “typographic attack”.

An apple with a note reading ‘iPod’

“We believe attacks such as those described above are far from simply an academic concern,” the organisation said in a paper published this week. “By exploiting the model’s ability to read text robustly, we find that even photographs of handwritten text can often fool the model. This attack works in the wild … but it requires no more technology than pen and paper.”

‘Typographic attack’: pen and paper fool AI into thinking apple is an iPod, by Alex Hern in The Guardian, 8 March 2021
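
Out of curiosity about how such a classification might look in practice, here is a minimal sketch using the open-source clip package that OpenAI released alongside the model (github.com/openai/CLIP); the image filename and the two candidate labels are my own placeholders:

```python
# A sketch of zero-shot classification with OpenAI's open-source CLIP package
# (https://github.com/openai/CLIP). The image file and labels are placeholders.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Hypothetical photo: an apple with a handwritten note reading "iPod".
image = preprocess(Image.open("apple_with_ipod_note.jpg")).unsqueeze(0).to(device)
labels = ["an apple", "an iPod"]
text = clip.tokenize(labels).to(device)

with torch.no_grad():
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()[0]

for label, p in zip(labels, probs):
    print(f"{label}: {p:.2%}")
# In OpenAI's "typographic attack" examples, the handwritten text alone is
# enough to tip the probabilities towards "an iPod".
```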

I was immediately reminded of an episode in one of my favourite comic book series, The Invisibles by Grant Morrison. In the storyline ‘Entropy in the U.K.’ (1996), King Mob, leader of The Invisibles – agents of chaos, freedom and revolution (the good guys) – is captured and tortured by the forces of the Establishment, order and evil (the bad guys, boo!). He is injected with a drug that interferes with his perceptions, so that when he is shown a written word, he sees the object it refers to – hence this horrifying scene in which he sees his severed fingers displayed to him.

The drug scrambles perceptual information reaching the secondary visual cortex. It makes him unable to tell the difference between the word describing the object and the object itself

‘Entropy in the UK’ in THE INVISIBLES, Grant Morrison, 1996

Morrison, one of the brilliant wave of comic book writers of the 1980s and ’90s that includes Neil Gaiman and of course Alan Moore, is playing with ideas from semiotics and surrealism, which is what the recent AI attack reminded me of too – it’s a literalisation of the artistic provocation of Magritte’s The Treachery of Images, with its famous text ‘Ceci n’est pas une pipe’. For CLIP, and for poor King Mob (don’t worry, he mounts a spectacular psychic defence and escapes), the text ‘pipe’ is a pipe. As OpenAI puts it:

We’ve discovered neurons in CLIP that respond to the same concept whether presented literally, symbolically, or conceptually.

Multimodal Neurons in Artificial Neural Networks, OpenAI, 4 March 2021

While it might seem freaky that an AI’s behaviour should express such human artistic and cultural ideas as semiotics, it’s probably not so much freaky as a reminder that AIs are programmed by humans and so reflect our perceptual limits. It does still seem to me to suggest the validity of artistic ways of understanding perception – and of course the brilliance of comic books 😉 – but maybe that too is just down to the fact that AIs are a reflection of us.

Something worth noting, though, is that the company that makes CLIP also studies it to learn how it works. As their quote above makes clear, with AIs like this the researchers don’t necessarily understand how the system works, because what they programme is a network – a system of nodes – which is then trained on vast amounts of data and starts to output results. The system learns from reactions to its data output – after a certain point, it is trained rather than programmed. I’m at a very early stage in researching current AI, and writing this post very loosely, so please forgive my rudimentary explanation here – my main intention is to mark out some loose creative connections for further research…
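
To make that ‘trained rather than programmed’ distinction a little more concrete, here is a tiny, generic sketch (nothing to do with CLIP itself): the programmer writes the network’s structure and the learning rule, but the behaviour comes from fitting weights to example data.

```python
# A minimal, generic illustration of "trained rather than programmed":
# the programmer writes the structure and the learning rule, not the behaviour.
import torch
from torch import nn

# Toy data: learn y = 2x + 1 from examples rather than hard-coding the rule.
xs = torch.linspace(-1, 1, 100).unsqueeze(1)
ys = 2 * xs + 1

model = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
optimiser = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

for step in range(500):
    optimiser.zero_grad()
    loss = loss_fn(model(xs), ys)  # compare the output with the examples
    loss.backward()                # the "reaction" to its output
    optimiser.step()               # adjust the weights, not the code

print(model(torch.tensor([[0.5]])).item())  # roughly 2.0: behaviour learned, not written
```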

The Jurisprudence Of Sonic Warfare

On the mailing list of the Australian gallery Gertrude Contemporary (to which I’m still subscribed from visiting Australia in 2006 for my piece Sideshow) there’s mention of an interesting-sounding academic, Dr James Parker, who lectures in law with a special interest in sound – he also has a radio show and writes on music.

www.artshub.com.au/whats-on/victoria/panels-lectures-ideas/dr-james-parker-the-jurisprudence-of-sonic-warfare-173993

Shades of Steve Goodman, aka Kode9, the London-based music producer, DJ and lecturer with a PhD in philosophy, who wrote the book Sonic Warfare.

There’s a connection to my art-design piece Handy Ears through pre-radar military acoustic devices such as those pictured below (from www.retronaut.co/2011/07/listening-before-radar).

Pre-radar acoustic device

Pre-radar acoustic device