A premise: all the problems of computational solutionism can be expressed in terms of two fundamental misunderstandings of Turing:
- Simulation is not equivalence. A machine that acts like another changes that other and itself; it doesn’t reproduce them.
- Machines aren’t intelligent; rather, they are persuasive. Accepting or rejecting any individual machine thus entails more than reason alone and cannot be quantified.
To be clear: the two bullets summarize Turing’s actual positions. Their opposite interpretations rule us all, so to speak.
Comments
Pete Shanahan
Hi, what the f**k is ‘computational solutionism’? Because I’m flying half blind here.
Philosophy asks: given an omnipotent god, is there something that god can create that god can’t lift?
Bearing that in mind, Turing hated Gödel because Gödel had pretty much said ‘your math is weak’.
Point 1 is meaningless without all the qualifiers (I can conceive of a simulation that is perfect by stepping outside the frame of the simulation, which is not allowed).
Point 2: it’s not that they are not intelligent; it’s more that they’re not intelligent enough to be considered. Persuasion is not intelligence; my toaster can remind me to take the toast out, and I would never consider it intelligent.
Adam Ford
Chatting with my wife about this over breakfast. This opens up a very interesting argument, but she pointed out that the second point seems to give computers an agency or emotional quality that they don’t actually have – especially in terms of the first point and what it says about AI. Computers aren’t persuasive – they’re mimetic, and it is humanity’s tendency to look for its reflection that persuades us – we are the persuader and the persuaded.
Joseph Zizys
Bullet point 2 clearly goes too far as a summary of Turing’s actual position. “Machines aren’t intelligent” is EXACTLY what Turing argues you have no legitimate epistemic grounds to say.
Greg Borenstein
On the topic of simulation, Jessica Riskin has been doing some really interesting scholarship on the history of AI. She’s written a couple of strong papers about an 18th-century trend in the fabrication of automatons toward simulation, specifically the Defecating Duck: http://www.stanford.edu/dept/HPS/DefecatingDuck.pdf This paper on “18th-Century wetware” also makes some really good points on it: http://www.stanford.edu/dept/HPS/representations1.pdf Her point is that we redefine the “essence of humanity” in contrast to new capacities of technical systems.
She draws a parallel between the present and that time. And she combines both of your points here together in an interesting way.
Jahooma
I think your assertions are wrong because you fundamentally understand the world in a different way. Saying that “machines aren’t intelligent” doesn’t make sense because, ultimately, we are machines. The world is built from interacting particles that behave in regular ways, and so everything that happens can be understood as interacting machines, or equivalently, as one giant, super-complex machine.
I don’t understand what you mean when you say machines are persuasive, but I assure you even our artificially built machines (i.e. computers, as opposed to our naturally built machines–human offspring) are or can be intelligent. Intelligence is a vaguely defined concept, but any rigorous definition must be reduced to a system that when given certain inputs, gives certain outputs that we deem “intelligent.” Put this way, it’s clear that intelligence is nothing more than the implementation of a complex algorithm. Of course we can build a computer to simulate intelligence, because the fact that intelligence exists in humans proves that there are some outputs corresponding to those inputs which are intelligent, and a computer can match those outputs.
And, of course simulation is not exactly equivalence; that’s why it’s simulation. A computer simulating a human brain won’t be a human brain, because they have different hardware. But, in terms of how they act, they would be identical, and that’s the part that actually matters.
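To make the equivalence point concrete, here’s a toy sketch (my own illustration; the factorial example is arbitrary and has nothing to do with brains): two procedures with completely different internals whose input–output behavior is identical, so no test that only looks at behavior can tell them apart.

```python
# Two "machines" with different internals that produce identical
# outputs for every input: behaviorally equivalent, internally distinct.

def factorial_recursive(n: int) -> int:
    """Computes n! by self-reference."""
    return 1 if n <= 1 else n * factorial_recursive(n - 1)

def factorial_iterative(n: int) -> int:
    """Computes n! by accumulation -- different 'hardware', so to speak."""
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

# Extensionally indistinguishable, intensionally different.
assert all(factorial_recursive(n) == factorial_iterative(n) for n in range(20))
```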
I do want to say that what Turing thought back in the 40s and 50s can be interesting, but certainly appealing to his authority as a reason why something is true is not productive. (Not that you are necessarily taking this position.)
Finally, I invite you to test the waters of computationalism. For the first time in human history, the world actually makes sense without having to invoke supernatural hypotheses to explain life and human intelligence. Give it a try, the waters are warm (and very clear!).
Cheers,
Jahooma
Divinenephron
“any rigorous definition [of intelligence] must be reduced to a system that when given certain inputs, gives certain outputs that we deem ‘intelligent.'” – Jahooma
Presuming the mind can be described using only a set of inputs and outputs is a valid way of studying it, and one that’s been pretty productive. But you haven’t given nearly enough justification for why this is the only good way of describing the mind. You’ve just said it’s “rigorous”. One argument that can be made against this presumption is that it can’t explain qualia. You might not be concerned about qualia, but others will be.
I would like Ian to clarify what computationalism is (it looks like a theory of mind), and how it’s ruling us.
Ian Bogost
On computationalism and solutionism, read David Golumbia and Evgeny Morozov, respectively.
Ian Bogost
@Joseph Zizys
Turing argues that intelligence amounts to persuasive imitation. But contemporary conceptions of AI have reified intelligence much more broadly than this.
@Adam Ford
“Persuasively mimetic,” in other words.