It’s hard to overestimate Alan Turing’s contributions to contemporary civilization. To mathematics, he contributed one of two nearly simultaneous proofs about the limits of first-order logic. In cryptography, he devised an electromechanical device that decoded the German Enigma machine’s signals during World War II, an accomplishment that should also be counted as a contribution to twentieth-century warfare and politics. In computer science, he developed a theory of universal computation and an associated architectural design that forms the foundation of the computer on which you are reading this. His take on machine intelligence has been influential both in the philosophy of mind and as the foundation of the field of artificial intelligence. And his prosecution for homosexuality, along with his apparent resulting suicide, has offered a pertinent reminder of one of the remaining barriers to social justice and equity.
This year, the centennial of Turing’s birth, we rightly celebrate Turing’s life and accomplishments, the full impact of which is difficult to measure. But as we do so, we should also take a lesson from the major cultural figure whose centennial we marked last year: Marshall McLuhan. McLuhan teaches us to look beyond the content and application of inventions and discoveries in search of their structures, the logics that motivate them. For McLuhan, television was a collective nervous system pervading every sense, not a dead device for entertainment, education, or moral corruption.
If we look at Alan Turing’s legacy through McLuhan’s lens, a pattern emerges: that of feigning, of deception and interchangeability. If we had to summarize Turing’s diverse work and influence, both intentional and inadvertent, we might say he is an engineer of pretenses, as much as a philosopher of them.
The most obvious example of this logic can be found in the now-famous Turing Test, the name later given to the imitation game Turing proposed in the 1950 article “Computing Machinery and Intelligence,” published in the journal Mind. The paper ponders the question “Can machines think?”, meditating at length on the difficulty of answering it given the ambiguity of the terms “machine” and “think.”
Turing suggests replacing thought or intelligence with imitation. He proposes an “imitation game” in which a human would be asked to interact by teletype with two parties hidden behind closed doors. The first would be another human, the second a machine. Each tries to convince the human judge that it is in fact the human.
In proposing the imitation game as a stand-in for another definition of thought or intelligence, Turing does more than deliver a clever logical flourish that helps him creatively answer a very old question about what makes someone (or something) capable of thought. In fact, he skirts the question of intelligence entirely, replacing it with the outcomes of thought—in this case, the ability to perform “being human” as convincingly and interestingly as a real human. To be intelligent is to act like a human rather than to have a mind that operates like one. Or, even better, intelligence—whatever it is, the thing that goes on inside a human or a machine—is a less interesting and productive topic of conversation than the effects of such a process, the experience it creates in observers and interlocutors.
This is a kind of pretense most readily found on stage and on screen. An actor’s craft is best described in terms of its effect, the way he or she portrays a part, elicits emotion, and so forth. While it’s certainly also possible to talk about the method by which that outcome emerges (the Stanislavski method or the Meisner technique, for example), nobody would mistake those processes for the outcomes they produce. That is to say, an actor’s performance is not reducible to the logic by which he or she executes that performance.
Turing did not invent the term “artificial intelligence,” but his work has been enormously influential in that field. Nevertheless, artificial intelligence fails to learn Turing’s lesson on intelligence: the processes by which thought takes place are not incidental, but they are also not primary. So-called “strong AI” hopes to make computers as intelligent as people, often by attempting to create models of human cognition, or, even better, by arguing that the brain itself works like a computer. But Turing never claimed that computers could be intelligent, nor that they were artificial. He simply suggested that it would be appealing to consider how computers might perform well at the imitation game: how they might pretend to seem human in interesting ways.
As for the question of what sort of machines are the best subjects for the imitation game, it’s obvious to us now that the digital machines we call computers are the best candidates for successful imitation. This wasn’t so clear a choice in 1950, and Turing was responding to the long history of proposals for logical, mechanical, and calculating devices that could accomplish rational thought.
But the computer itself reveals another example of pretense for Turing, thanks to his own theory of abstract computation and its embodiment in the idealized device known as the Turing machine. In the form Turing proposed, this machine manipulates symbols on a strip of tape. Through simple instructions like move, erase, write, and read, such a machine can enact any algorithm; indeed, the stored-program design of the modern computer descends from this principle.
Unlike other sorts of machines, a Turing machine has no specific task to carry out, like grinding grain or stamping iron; its purpose is to simulate any other machine by carrying out its logic through programmed instructions. A computer, it turns out, is just a particular kind of machine that works by pretending to be another machine. This is precisely what today’s computers do—they pretend to be calculators, ledgers, typewriters, film splicers, telephones, vintage cameras, and so much more.
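To make the tape-and-symbols picture concrete, here is a minimal sketch of a Turing machine simulator in Python. The names and conventions are ours, not Turing’s notation: a transition table maps a (state, symbol) pair to a new state, a symbol to write, and a head movement.

```python
# A minimal Turing machine simulator (an illustrative sketch, not
# Turing's own formulation). The tape is sparse: position -> symbol.

def run_turing_machine(program, tape, state="start", blank="_", max_steps=10_000):
    """Run `program`, a dict mapping (state, symbol) to
    (next_state, symbol_to_write, head_move), over `tape`."""
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        # Look up the transition, write the new symbol, move the head.
        state, cells[head], move = program[(state, symbol)]
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# A toy program: scan right, inverting 0s and 1s, halting at the first blank.
flip_bits = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt",  "_", "R"),
}

print(run_turing_machine(flip_bits, "10110"))  # prints 01001_
```

Swap in a different transition table and the same few lines of simulator become a different machine entirely; that interchangeability is exactly the pretense described above.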
If we combine Turing’s ideas of thought and of machine, we find machines that convincingly pretend to be other machines. The Turing test doesn’t apply just to human intelligence but to what we might call “device behavior,” if we remember that intelligence is really just convincing action for Turing.
Over time, this relationship has become nested and recursive: computer hardware and software don’t just mimic existing mechanical or physical machines, but also the various extant forms of computational machinery. If Lotus 1-2-3 simulates the ledger, then Microsoft Excel simulates Lotus 1-2-3. If the iPhone simulates the notepad, then the Samsung Galaxy Nexus simulates the iPhone. As computational machinery has developed, it has also mutated, and the job of today’s software and hardware companies largely involves convincing us that the kind of machine a particular device simulates is one worthy of our attention in the first place.
Once you see pretense as an organizing principle for Turing, it’s hard not to discover it in everything he touched. Computation means one machine acting like any other. Intelligence means doing so in an interesting way. In mathematics, his solution to the Entscheidungsproblem entails making the decision problem act like the halting problem for Turing machines. Even cryptography amounted to pretense for Turing: making a British machine, the Bombe, act like the German Enigma machine.
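Even the mathematical pretense can be sketched. Suppose, hypothetically, that a function halts(program, input) could always decide whether a program finishes. The standard diagonal argument (a paraphrase, not Turing’s original construction) builds a program whose behavior contradicts any answer such a decider could give:

```python
def halts(program, argument):
    """A hypothetical decider: returns True iff program(argument) halts.
    No correct, always-terminating version of this function can exist;
    showing that impossibility is the point of the argument."""
    ...

def paradox(program):
    # Do the opposite of whatever the decider predicts about
    # running `program` on its own source.
    if halts(program, program):
        while True:  # loop forever if the decider says "halts"
            pass
    # otherwise, halt immediately

# Whether paradox(paradox) halts contradicts either answer the decider
# could give, so no such decider exists. A general procedure for the
# Entscheidungsproblem would yield one, so none can exist either.
```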
In fact, recent evidence suggests that even Alan Turing’s prosecution and death might be a kind of retroactive pretense. There’s no doubt that he was subjected to chemical castration as part of his sentence, a treatment that introduced female hormones into his male body in order to make a homosexual body act like an asexual one. But history has told us that Turing, afflicted by his unfair persecution, committed suicide shortly thereafter by ingesting a cyanide-poisoned apple, an act that itself simulates the famous scene from Snow White. While indisputably tragic, Turing’s suicide also partly facilitated his contributions to social justice—it was a machine that made a mathematician act like a martyr.
But on the occasion of his centennial, Turing expert Jack Copeland has argued that the evidence presented in the 1954 inquest into Turing’s tragic end is insufficient to conclude that his death came at his own hand. Turing apparently took an apple regularly at bedtime, and according to Copeland, absent any evidence of premeditation or planning, a suicide verdict cannot be substantiated.
As with the nested logic of computers, unlocking one pretense in Turing’s life always reveals another. In 1954, Turing’s death was sufficient to convince a coroner of suicide. Today, do we question that conclusion because we have higher evidentiary standards for legal conclusions, or because we have a different idea of what suicide looks like? Certainly a computer of the 1950s would be less likely to convince a modern user that it acts like a calculator than a computer of today—but then again, in 1950 “calculator” was a name for a profession, not for a handheld electronic device.
Such is Turing’s legacy: that of a nested chain of pretenses, each pointing not to reality, but to the caricature of another idea, device, individual, or concept. In the inquest on his death, Turing’s coroner wrote, “In a man of his type, one never knows what his mental processes are going to do next.” It’s easy to take this statement as a slight, an insult against a national hero whose culture treated him as a criminal just for being a gay man. But can’t you also see it differently, more generously? Everyone—everything—is one of his or her or its own type, its internal processes forever hidden from view, its real nature only partly depicted through its behavior. As heirs to Turing’s legacy, the best we can do is admit it. Everyone pretends. And everything is more than we can ever see of it.