The Chinese Room
The Chinese Room argument holds that a digital computer executing a program cannot have a "mind", "understanding", or "consciousness", regardless of how intelligently or human-like the program may make the computer behave. – Wikipedia
This is a classic Straw Man argument: if a computer does what I say, it can't be intelligent. But what about when it doesn't do what you say?
If a computer is executing a program, it cannot have a "mind".

Well, duh. Why the assumption that the computer is executing a program? Maybe it is reading text and creating a structure, and the structure begins issuing instructions based on states arising as states are transmitted through the structure. The executed instructions change the states, directions and connections in the structure, which in turn cause other instructions to be issued. Rather than remaining passive, the text is transformed into an Active Structure.
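To make that concrete, here is a minimal sketch in Python of the general idea – every name and mechanism below is an illustrative assumption, not the actual Active Structure machinery. Nodes hold state, links transmit it, and an arriving state can cause a node to issue an instruction that rewires the very structure doing the transmitting:

```python
# Minimal sketch of an "active structure": nodes hold state, links
# transmit it, and arriving states cause instructions to be issued
# that can alter the states and connections of the structure itself.
# All names here are hypothetical, for illustration only.

class Node:
    def __init__(self, name):
        self.name = name
        self.state = None        # current state held at this node
        self.links = []          # outgoing connections
        self.on_state = None     # instruction issued when a state arrives

    def receive(self, state, network):
        self.state = state
        if self.on_state:
            # The instruction may add or change connections in the network.
            self.on_state(self, network)
        for target in list(self.links):
            network.transmit(self, target, state)

class Network:
    def __init__(self):
        self.nodes = {}

    def node(self, name):
        self.nodes.setdefault(name, Node(name))
        return self.nodes[name]

    def connect(self, a, b):
        self.node(a).links.append(self.node(b))

    def transmit(self, source, target, state):
        print(f"{source.name} -> {target.name}: {state}")
        target.receive(state, self)

# "Reading text" builds structure rather than executing a fixed program.
net = Network()
for a, b in [("the", "cat"), ("cat", "sat")]:
    net.connect(a, b)

# An instruction at "cat" rewires the structure when a state arrives.
def rewire(node, network):
    network.connect("sat", "mat")   # a new connection, created in flight
net.node("cat").on_state = rewire

net.node("the").receive("activation", net)
```

Running the sketch shows the point: the transmission reaches "mat" only because an earlier state changed the connections while activity was in progress, so the structure is driving itself rather than being driven by a fixed program.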
It would be reasonable to ask: "Have you created a list of instructions for a computer to produce output that appears to match a human output – in other words, a Chinese Room?" "No, we have left such trivia to others."
Well, the first thing you would find is that it would be extremely difficult to do. A person's unconscious mind does most of the work in parsing text, and the person's Four Pieces Limit (the limit of roughly four things that can be consciously handled at once) means that they can only build very shallow structures on top. If you implement what the unconscious mind does, taking into account the vastness of language, the only way it can be made to work is to have the text build a structure which contains the meaning of the text. If you do that, you have created something sufficiently complex to "get the joke" when one emerges.
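As a toy illustration only – the system described here does vastly more, and this hypothetical triple-based representation is my assumption, not the author's – text can build a small relational structure in which the meaning lives in the connections, and an incongruity ("the joke") only becomes visible because those connections exist:

```python
# Toy illustration: text builds a structure whose connections carry
# the meaning, rather than being consumed by a fixed program.
# A hypothetical triple store, not the author's representation.

triples = [
    ("penguin", "is_a", "bird"),
    ("bird", "can", "fly"),
    ("penguin", "cannot", "fly"),   # the incongruity a reader might notice
]

structure = {}
for subj, rel, obj in triples:
    structure.setdefault(subj, []).append((rel, obj))

def follow(concept, relation):
    """Walk the structure: what does the text say about a concept?"""
    return [obj for rel, obj in structure.get(concept, []) if rel == relation]

# Inherited expectation vs. stated fact: the clash is only detectable
# because the text was turned into connected structure.
for parent in follow("penguin", "is_a"):
    if "fly" in follow(parent, "can") and "fly" in follow("penguin", "cannot"):
        print("Incongruity: a penguin is a bird, birds fly, penguins don't.")
```

A list of canned responses has nowhere to notice the clash; a structure does, because the expectation and the exception meet in the same connections.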
A goal is to break the Four Pieces Limit – we don't want the machine to "think like a human", which would be far too shallow and limited.