Historically, one of the fundamental questions in AI and Cognitive Science has been “Can Machines Think?”, at least since Alan Turing tried to find a way to answer it in “Computing Machinery and Intelligence” (Mind, Volume LIX, Issue 236, October 1950, Pages 433–460). There, Turing actually tries to bypass the question by proposing his famous Imitation Game, aka “The Turing Test”, and then asking “Could a machine pass this test?”. Why does Turing do this? He thinks that we cannot come up with workable definitions of “thinking” and “machine” (“The original question, ‘Can machines think?’ I believe to be too meaningless to deserve discussion.”). Of course, he then goes on to define precisely what kind of machine he means: digital computers (such as they were in 1950).
Yet he does not do so for “thinking”. Since then, trying to answer the original question has acquired the aura of a rather dubious affair: a purely speculative question that belongs to philosophy rather than to any exact science. And yet a scientific approach to the question of what “thinking” actually is was born more or less at the same time as the kind of devices that inspired Turing’s question in the first place: computing machinery. Both scientific psychology and universal computation are paradigms that originated in the 19th century and matured in the 20th.
“I propose to consider the question, ‘Can machines think?’ This should begin with definitions of the meaning of the terms ‘machine’ and ‘think’.” (Alan Turing, “Computing Machinery and Intelligence”)
Terry Pratchett, “Hogfather”
‘He just looks as though he’s thinking, right?’
‘Er … yes.’
‘But he’s not actually thinking?’
‘Er … no.’
‘So … he just gives the impression of thinking but really it’s just a show?’
‘Er … yes.’
‘Just like everyone else, then, really.’
Brains or machines?
In one form or another, the question about the relation between “thinking” and whatever does the thinking (brain or machine) touches virtually every scientific discipline. Moreover, even to make sense of the question you would need nearly all of the arts and sciences.
Let’s start where Turing started: What do we mean by “machine”? What do we mean by “thinking”? What are the technical requirements on the machine? Does it need to process information in the same way that humans do: massively parallel and biochemical, like a human brain? Or by the same algorithm, but implemented differently? Does the machine need to resemble a human aesthetically to properly express its thinking? Does it actually need to be some sort of robot, or can the machine be a box on a desk? Does it need sensors to perceive the actual world, or would a virtual world suffice?
… and so forth, and so on.
Questions like these involve far more than engineering and math, philosophy and programming: psychology, sociology, neurobiology, logic, and more can all contribute to refining and addressing them. In other words, “Can Machines Think?” is necessarily an interdisciplinary question, and the field of the Cognitive Sciences (plural, of course) has grown up at the intersection of philosophy, programming, psychology, and related disciplines to try to tackle it in its original form.
There is no single answer: coming at the question from different disciplines will yield different answers. A philosopher might say “No, but …”, while an expert in machine learning might say “Yes, if …”.
It is only when we allow multiple very different disciplines to collaborate, to enter into a dialogue as equals, that we can hope to do justice to the complexity of the question. As Turing prefigured in 1950, the idea of a “thinking machine” is no longer foreign to us, and we talk seriously about machines and algorithms possessing “Artificial Intelligence”.
The machines and their programming have evolved a great deal since Turing’s time, but the question still puzzles us: is all this intelligent information processing really the same as “thinking”? Psychology and philosophy, not to mention neuroscience, have also come a long way since then.
We are progressively figuring out the structures and rules underlying human thinking and implementing them in artificial systems. Some of the most promising approaches go right back to Turing himself: not to program a fully fledged human-level intelligence, but to build a learning machine capable of evolving intelligence and thought.
This blog will try to showcase some of the interdisciplinary conversations on cognition and computation, as well as the historical and philosophical underpinnings of current debates. What idea or debate, from the present or the past, do you think should definitely be addressed here?