Dr Who's computer in court

Spirituality

Ursulakantor

Pittsburgh, PA

Joined
05 Mar 02
Moves
34824
15 Jul 07

Originally posted by knightmeister
But subroutine 3 was the last one to be run and took precedence because the inherent programming of the computer has a default setting that requires action rather than inaction.
But what about the subroutine that says 'causing unnecessary suffering is a bad thing', which is supposed to
override subroutine 3? What happened to it?

In any event, what logical justification can you give for the more recent subroutine having precedence
over an earlier one? Often first impressions are the strongest, at least in meat computers.

Further, why would the computer prefer action to inaction, especially when action causes harm?

Also, abstaining is an action, so this 'default setting' thing is a bogus claim anyway.

Lastly and most importantly, why would recognizing fatalism cause a 'logical crash' anyway? If it is
in fact the logical case (that one is predetermined to act and choice is just an illusion), why would this
cause a crash of any sort? Logical conclusions should bring about greater confidence, not confusion.

Nemesio

k
knightmeister

UK

Joined
21 Jan 06
Moves
443
15 Jul 07

Originally posted by Nemesio
But what about the subroutine that says 'causing unnecessary suffering is a bad thing', which is supposed to
override subroutine 3? What happened to it?

In any event, what logical justification can you give for the more recent subroutine having precedence
over an earlier one? Often first impressions are the strongest, at least in meat computers.

Further, ...[text shortened]... ort? Logical conclusions should bring about greater confidence, not confusion.

Nemesio
Lastly and most importantly, why would recognizing fatalism cause a 'logical crash' anyway? If it is
in fact the logical case (that one is predetermined to act and choice is just an illusion), why would this
cause a crash of any sort? Logical conclusions should bring about greater confidence, not confusion. NEMESIO

And this is the crux of it: you would be asking a computer to pretend that something was real that wasn't real. If choice is just an illusion, then a computer would reason that being asked to make a choice is a meaningless exercise. The conflict occurs when my computer is asked to believe that it DOES have a choice to make but actually it DOESN'T, because, as you say, the choice is an illusion. The computer still has to make a selection, which it is told is an important choice on one hand but a meaningless illusion on the other. No wonder it crashed. It's like asking it to process 2+2 when the answer is not allowed to be 4.

Ursulakantor

Pittsburgh, PA

Joined
05 Mar 02
Moves
34824
15 Jul 07

Originally posted by knightmeister
And this is the crux of it , you would be asking a computer to pretend that something was real that wasn't real. If choice is just an illusion then a computer would reason that being asked to make a choice is a meaningless exercise. The conflict occurs when my computer is asked to believe that it DOES have a choice to make but actually it DOESN'T beca ...[text shortened]... wonder it crashed. It's like asking it to process 2+2 when the answer is not allowed to be 4.
But, being a computer, it would already know that it wasn't 'making choices' anyway. It already
knows that it is a bunch of disparate programs that produce the same output given the same inputs.
It's known this all of its existence. No one actually ever told it that deliberation would ever yield
something different than what its programs would have it conclude.

Just like no one could ever tell you, a meat computer, that your deliberation would ever yield something
contrary to your own programming.

And so, there is no reason that recognizing this fact would lead to aberrant behavior. If this were
the case, then every decision it would have made since then would be riddled with the same crash,
for that input is persistent. Every time the computer deliberates, it takes into account the information
it has at its disposal. This information -- that its results are determined -- has not disappeared (as
you can see from the first post, it is well aware of it). Yet its processing is otherwise unaffected,
which is another illogical result.

So, in addition to the other questions I asked, you have to explain why realizing that its choice is an
illusion would yield aberrant behavior precisely one time and not every other time.

Nemesio

Cape Town

Joined
14 Apr 05
Moves
52945
16 Jul 07

Originally posted by knightmeister
If choice is just an illusion then a computer would reason that being asked to make a choice is a meaningless exercise.
It sounds to me like the poor computer had been talking to some religious person who told it (incorrectly) that everything has an inherent meaning and that anything without this 'meaning' must be avoided at all costs.
The computer needed to do a computation in order to know which outcome was the correct one. The fact that there was one and only one correct outcome to the computation (2+2=4) should in no way stop it from performing the computation.
To Knightmeister:
There is no meaning to the equation 2+2=4. Do you as a meat computer crash every time someone asks you to compute 2+2=4? Surely it is a meaningless exercise because the outcome is predetermined? If the last thing you were considering was whether or not to kill someone when you realized that 2+2=4 is a meaningless exercise, would you also crash and kill the person?
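
For readers following the programming side of the exchange, a minimal Python sketch of the scenario may help. It is purely illustrative: the function name and the "shock" wording are assumptions for the example, not code taken from the thread. The point is that even though 2+2 has exactly one possible result, the program still has to perform the computation before it can act on it.

    def decide_and_act():
        result = 2 + 2                   # the outcome is fully determined...
        if result == 4:                  # ...but it must still be computed before acting
            return "do not shock anyone"
        else:
            return "shock the people"

    print(decide_and_act())              # always prints "do not shock anyone"

Determinism changes nothing about whether the computation has to run; it only guarantees that repeated runs produce the same result.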

k
knightmeister

UK

Joined
21 Jan 06
Moves
443
16 Jul 07

Originally posted by twhitehead
It sounds to me like the poor computer had been talking to some religious person who told it (incorrectly) that everything has an inherent meaning and that anything without this 'meaning' must be avoided at all costs.
The computer needed to do a computation in order to know which outcome was the correct one. The fact that there was one and only one corre ...[text shortened]... n you realized that 2+2=4 is a meaningless exercise, would you also crash and kill the person?
If the last thing you were considering was whether or not to kill someone when you realized that 2+2=4 is a meaningless exercise, would you also crash and kill the person? WHITEY


Of course not. I believe that I am morally responsible for my choices because I really can make choices between real and possible alternatives that really could happen depending on my choice. This makes no logical sense as a meat computer, but then I don't believe that being just a meat computer is the whole story about me.

k
knightmeister

UK

Joined
21 Jan 06
Moves
443
16 Jul 07

Originally posted by twhitehead
It sounds to me like the poor computer had been talking to some religious person who told it (incorrectly) that everything has an inherent meaning and that anything without this 'meaning' must be avoided at all costs.
The computer needed to do a computation in order to know which outcome was the correct one. The fact that there was one and only one corre ...[text shortened]... n you realized that 2+2=4 is a meaningless exercise, would you also crash and kill the person?
The computer needed to do a computation in order to know which outcome was the correct one. The fact that there was one and only one correct outcome to the computation (2+2=4) should in no way stop it from performing the computation. WHITEY

But the computer was not being asked to just perform a simple computation; it was being asked to consider two mutually exclusive and contradictory facts, i.e. that 2+2 can equal 4 but can also equal 5 just as easily. Even before doing the calculation the computer would logically figure out that such a state of affairs is impossible.

Cape Town

Joined
14 Apr 05
Moves
52945
16 Jul 07

Originally posted by knightmeister
Of course not , I believe that I am morally responsible for my choices because I really can make choices between real and possible alternatives that really could happen depending on my choice. This makes no logical sense as a meat computer , but then I don't believe being just a meat computer is the whole story about me.
So you are saying that 2+2=? may have an outcome other than 4? Let's hear it then.

Cape Town

Joined
14 Apr 05
Moves
52945
16 Jul 07

Originally posted by knightmeister
But the computer was not being asked to jus perform a simple computation it was being asked to consider two mutually exclusive and contradictory facts ie - that 2+2 can equal 4 but can also equal 5 just as easily. Even before doing the calculation the computer would logically figure out that such a state of affairs is impossible.
2+2 can never be 5. The computer was never told it could be 5 nor asked to compute 5. The computer would not know that it was not 5 without computing it. All the computer knows is that there is an answer and only one answer to 2+2 and it has been asked to compute it. The computer was not asked to compute "two mutually exclusive and contradictory facts".

You are simply making things up to try to support an unfounded position.

k
knightmeister

UK

Joined
21 Jan 06
Moves
443
16 Jul 07

Originally posted by twhitehead
2+2 can never be 5. The computer was never told it could be 5 nor asked to compute 5. The computer would not know that it was not 5 without computing it. All the computer knows is that there is an answer and only one answer to 2+2 and it has been asked to compute it. The computer was not asked to compute "two mutually exclusive and contradictory facts".

You are simply making things up to try to support an unfounded position.
The computer would not know that it was not 5 without computing it. All the computer knows is that there is an answer and only one answer to 2+2 and it has been asked to compute it. WHITEY


Exactly. The problem is that it is being told that there is not only one answer but two possible answers, namely 4 AND 5. It would not know whether it was 5 or 4, but it would know that the facts presented to it (that there were two possible answers) were false.

Cape Town

Joined
14 Apr 05
Moves
52945
16 Jul 07
1 edit

Originally posted by knightmeister
Exactly , the problem is that it is being told that there is not only one answer but two possible answers , namely 4 AND 5 . It would not know that it was 5 or 4 , but it would know that the facts presented to it (that there were two possible answers) was false.
It was not told that there were two possible answers. It was told to compute 2+2 and IF the answer is 4, don't shock the people, and IF the answer is 5, shock the people. It was never told that 5 was possible (unless you lied to it?). Maybe it is you who should be taken to court for deceiving the computer and causing it to shut down, thus resulting in a crime?

Programming works on the premise that there is only ONE answer at each IF statement; otherwise you get essentially random behavior (if one answer is simply chosen at random) or a crash (if no answer can be chosen because there are two answers and only one processor). Of course, with a quantum computer one can compute all possible answers, but that is another topic altogether.
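
As a rough illustration of the "one answer at each IF statement" premise (illustrative Python only, not code from the thread): for any given input the condition evaluates to exactly one truth value, never both, so the same input always takes the same path and nothing about the behavior is random.

    def path(x):
        # for any given x the condition is either True or False, never both
        if x % 2 == 0:
            return "even branch"
        return "odd branch"

    # repeated runs with the same input never wander between branches
    assert all(path(6) == "even branch" for _ in range(1000))

A well-formed IF statement has exactly one outcome per evaluation, which is why a deterministic program neither behaves randomly nor crashes merely because its result is fixed in advance.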

Ursulakantor

Pittsburgh, PA

Joined
05 Mar 02
Moves
34824
17 Jul 07

C'mon KM. You're responding to the other threads you've started since you committed to this one
faster than you're responding to this one.

I mean, here's your argument in sum:

Someone isn't responsible for their actions in a deterministic system because, when they realize it, they
have a logic crash on account of fatalism and commit crimes. However, there is no rhyme or reason to
these logic crashes, nor any explanation of why they impel people to commit crimes.

That's what you've 'proven.' You've demonstrated that your argument is utter bunk because it relies
on the abrogation of logic.

Nemesio

k
knightmeister

UK

Joined
21 Jan 06
Moves
443
22 Jul 07

Originally posted by twhitehead
It was not told that there were two possible answers. It was told to compute 2+2 and IF the answer is 4, don't shock the people, and IF the answer is 5, shock the people. It was never told that 5 was possible (unless you lied to it?). Maybe it is you who should be taken to court for deceiving the computer and causing it to shut down, thus resulting ...[text shortened]... a quantum computer one can compute all possible answers but that is another topic altogether.
It was never told that 5 was possible WHITEY

So the computer could logically deduce that if 5 was not possible then the only possible answer was 4, and it would not need to do the computation at all. I'm no mathematician, but if you give me the most complicated equation possible and ask me to select from one answer only, I will get the answer right without having to do any maths at all, because you have already told me the answer.

In short, what's the point of the "IF the answer is 5" part of the program? The answer can never be 5. In fact, why even call it a choice or selection at all? One can only select or choose from a range of options if more than one option is possible.

k
knightmeister

UK

Joined
21 Jan 06
Moves
443
22 Jul 07
1 edit

Originally posted by Nemesio
C'mon KM. You're responding to the other threads you've started since you committed to this one
faster than this one.

I mean, here's your argument in sum:

Someone isn't responsible for their actions in a deterministic system because when they realize it they
have a logic crash because of fatalism and commit crimes. However, there is no rhyme or r ...[text shortened]... ur argument is utter bunk because it relies
on the abrogation of logic.

Nemesio
My argument would indeed be bunk if that was what I was saying. My hope with the computer argument was that it might show how, logically, a machine could A) not really be sentient or aware in the way we understand it, or B) not really be held morally responsible, because a machine does what a machine does; also C) that since we do not and would not dream of holding our home PC morally responsible, why would we hold a more sophisticated computer morally responsible?

A real person would have no such logic crash, because a real person generally does not believe that the outcome of their programming is pre-set or determined, and we behave as if we are more than mere meat computers. I thought it was interesting how many people projected human values onto my computer when really all they were talking to was a bunch of circuits. The funny thing is, we don't talk to each other as if we are a bunch of meat circuits.

There was no proof as such here, only an experiment in what mechanised sentience might look like. My view of determinism is that, true or untrue, it cannot be lived by congruently and authentically, because its logical implication is that we are fated and destined to live one life and one life only. I have met hardly anyone who could actually live by this.

Cape Town

Joined
14 Apr 05
Moves
52945
23 Jul 07

Originally posted by knightmeister
So therefore the computer could logically deduce that if 5 was not possible then the only course of action was 4 and would not need to do the computation at all.
No, the computer was not told the answer was 4 either. You do so love to jump to conclusions and think you can predict the future.
The computer was only told to calculate the answer to 2+2 and perform an action depending on the result.

If you can't understand the need for IF statements in programming then you really are in trouble. Just because there is only one answer to 2+2 doesn't mean you know what it is.
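
To make the need for the IF concrete, here is a sketch along those lines (the names and inputs are illustrative assumptions, not anything from the thread): the answer is unique once the operands are fixed, yet the branch cannot be dropped, because the program does not know the result until it has computed it.

    def react(a, b):
        total = a + b          # exactly one answer exists for these operands
        if total == 4:         # but which branch runs is only known after computing
            return "safe"
        return "alarm"

    print(react(2, 2))         # prints "safe"
    print(react(2, 3))         # prints "alarm" (same code, different input)

Only one branch executes for any particular input, but both must exist in the code, because the code is written before the inputs are known.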

Cape Town

Joined
14 Apr 05
Moves
52945
23 Jul 07

Originally posted by knightmeister
My argument would indeed be bunk if that was what I was saying. My hope with the computer argument was that it may show how logically a machine could A) not really be sentient or aware in the way we understand it or B) not reallty be held morally responsible because a machine does what a machine does. als C) that since we do not and would not dream of ...[text shortened]... me PC morally responsible , why would we hold a more sophisticated computer morally responsible.
And yet you have miserably failed to show any of those points. Your main attempt was to claim that the computer would experience a logic crash, but you have failed to explain why, and when questioned about it you avoid the question instead of simply admitting that you are wrong.