07 Dec '05 11:36
Originally posted by KellyJay
That will not happen.
Kelly

Suppose you had:
(a) a person with consciousness who behaved in ways suggesting he or she was conscious (including saying things like "I'm conscious! Really!")
(b) a machine without consciousness that behaved in ways suggesting it was conscious (including saying things like "I'm conscious! Really!")
How would you tell the difference between (a) and (b), over the internet say?
Now suppose you had:
(1) a person with consciousness who behaved in ways suggesting he or she was conscious (including saying things like "I'm conscious! Really!")
(2) a person without consciousness who behaved in ways suggesting he or she was conscious (including saying things like "I'm conscious! Really!")
How would you tell the difference between (1) and (2), over the internet say?
But, in reality, we never worry about the difference between (1) and (2), do we? We never think: "Gee, I wonder whether bbarr is conscious, but RBHILL isn't. Or vice versa?"
But, if we don't wonder about distinguishing (1) and (2), why should we worry about distinguishing (a) and (b)?
Doesn't this suggest that, if a machine behaves in ways that are complex enough to mimic a conscious human being, then we do, in effect, regard it as conscious, and there is nothing more to consciousness than that?
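To make the (a)-versus-(b) test concrete, here is a minimal sketch of the kind of blind, text-only comparison I have in mind, a toy imitation game. Everything in it (the respondent functions, the canned questions) is hypothetical, just to show the shape of the test: the judge sees only typed replies under anonymous labels, so behaviour is all there is to go on.

```python
import random

# Toy sketch of a blind, text-only imitation game. Both respondents are
# hypothetical stand-ins: in a real test one would be a person typing
# and the other a chat program; the judge sees only their typed replies.

def human_reply(prompt: str) -> str:
    # A person at a keyboard answers the question.
    return input(f"(hidden human, answering '{prompt}') > ")

def machine_reply(prompt: str) -> str:
    # Placeholder for a real chatbot; here it just insists, as in the post.
    return "I'm conscious! Really!"

def imitation_game(questions):
    # Shuffle so the judge cannot know which label hides the machine.
    respondents = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(respondents)
    labels = dict(zip("AB", respondents))
    for q in questions:
        print(f"Judge asks: {q}")
        for label, (_, reply) in labels.items():
            print(f"  {label}: {reply(q)}")
    guess = input("Which of A/B is the machine? ").strip().upper()
    truth = next(l for l, (kind, _) in labels.items() if kind == "machine")
    print("Correct!" if guess == truth else "Wrong!")

imitation_game(["Are you conscious?", "What is it like to see red?"])
```

The point of the sketch is just that nothing in the judge's transcript distinguishes (a) from (b) except the quality of the replies, which is the whole argument.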
And if so, isn't surmounting the technical obstacle of building a sufficiently smart machine all we have to do? Surely you can rule that out, just like people a few hundred years ago ruled out travelling to the moon, or talking to someone on the other side of the world, or developing a handheld device that can beat 99.999% of people at speed chess.