Dr Who's computer in court

Spirituality


knightmeister

Uk

Joined
21 Jan 06
Moves
443
31 May 07

Originally posted by bbarr
Please don't be offended, but you don't understand my post above and I don't have the patience to explain it to you. I'm interested in what Aiden has to say, since this is really a continuation of a previous discussion we had over a game a couple years back.
COMPUTER--"You don't have to explain the whole thing, you just need to tell me whether you think I am any less or more responsible than a cloud for my actions. If you let the cloud off then you must let me off too. If you hold me to account you must also hold the cloud to account. I just want to know whether you believe that all things in nature are equally responsible, given that all natural mechanisms and systems just do what they do 'because of what they are' - no essays or philosobabble needed."

Cape Town

Joined
14 Apr 05
Moves
52945
31 May 07

Originally posted by knightmeister
COMPUTER--"You don't have to explain the whole thing , you just need to tell me whether you think I am any less or more responsible than a cloud for my actions. If you let the cloud off then you must let me off too. If you hold me to account you must also hold the cloud to account. I just want to know whether you believe that all things in nature are ...[text shortened]... ytems just do what they do "becasue of what they are" - No essays or philosobabble needed "
I haven't yet had time to read all the recent posts, but I think bbarr's arguments made it quite clear that he places responsibility on entities that have certain characteristics, such as character, learning and consciousness, and that those were the key difference between you and a cloud, because you clearly stated in your earlier posts that you possessed some of these characteristics. Although we have not discussed the makeup of these characteristics, I do not see any problem with them being entirely natural mechanisms.
I have tended to take a rather different stance, simply looking at the response that society should take should a given entity be a threat to it. In the case of clouds, much as we may like to, we can do nothing about them. In the case of basic machines, fridges for example, we may scrap or repair them. In the case of humans we have two courses of action: 1. remove them from a position where they are capable of causing further harm, and 2. discourage further transgressions by means of punishment or training. The concept of 'rights' has a very large role in the different treatment that humans receive. When rights are not taken into account, an offending human would simply be summarily executed or, if possible, forcibly made to follow the rules, as is the case in societies with slaves.
Interestingly, animals (pets especially) often enjoy a sort of middle ground where they are given some rights, though these are not guaranteed.
Just think about the different reactions to a problem animal when it is:
a cat, a cow, a fox or a cockroach. The last may get stepped on or poisoned simply based on the fact that it 'may' cause trouble whereas even stray cats often enjoy some level of protection.
At no point does society's reaction have anything to do with 'accounting' in the form of punishment for crimes committed with no further purpose.

knightmeister

Uk

Joined
21 Jan 06
Moves
443
31 May 07
1 edit

Originally posted by twhitehead
I haven't yet had time to read all the recent posts, but I think bbarrs arguments made it quite clear that he places responsibility on entities that have certain characteristics such as character, learning, consciousness etc. and that those were the key difference between you and a cloud because you clearly stated in your earlier posts that you possessed with 'accounting' in the form of punishment for crimes committed with no further purpose.
COMPUTER--"I am very much with you on this, whitey. I see your point entirely. What I fear is that many of the words being thrown at me, such as 'should', 'deserve', 'punishment', 'responsible', 'guilty as charged', 'blame', etc., seem to often have dual meanings. I understand that you are using your words in a completely dry and objective way and that in no way do you apportion blame or guilt in my direction. You simply say I am a machine that needs fixing in a different way from other machines, and that the public needs to be protected from me. I still don't understand why you feel punishment is the best way to do this, since you have the facility to simply fix my programming via the doctor. However, my main concern is that others here are not using these words in the same way you are. They are never clear about what they mean, so it sounds very much as if they do actually think I am morally culpable, that I should have done differently (Nemesio virtually implied this), that I deserve punishment, and that I should feel guilty. This all sounds very different to me and seems like a projection of human consciousness onto my programming, which is curious, since I am a machine doing what machines do.

Interestingly, I do know that the doctor feels very guilty and responsible for what happened. He feels that he should have done something different when he programmed me. If he could only be logical and see that he is just a machine like me, with predetermined actions, he would not hold himself responsible."

Chief Justice

Center of Contention

Joined
14 Jun 02
Moves
17381
31 May 07

Originally posted by knightmeister
COMPUTER--"You don't have to explain the whole thing , you just need to tell me whether you think I am any less or more responsible than a cloud for my actions. If you let the cloud off then you must let me off too. If you hold me to account you must also hold the cloud to account. I just want to know whether you believe that all things in nature are ...[text shortened]... ytems just do what they do "becasue of what they are" - No essays or philosobabble needed "
There are two different, though related, notions of “responsibility”. The following schemata are not meant to be analyses of these notions, but rather specifications of some necessary conditions on our being warranted in attributing these notions to some object or entity:

Causal Responsibility: Roughly, we are warranted in attributing at least partial causal responsibility to F for an event X just in case some set of F’s non-relational properties are elements in a causal chain eventuating with X.

Moral Responsibility: Roughly, we are warranted in attributing at least partial moral responsibility to F for X only if the following conditions are met: (1) F is an agent, (2) F could have reasonably expected X to result from some act of his (note: this condition is trivially met when X is itself an act of F’s), and (3)…

Now, I have left condition (3) unspecified because this is where you and I will disagree. You will want to say that (3) should be specified as follows:

(3L): If X is an act of F’s, F could have refrained from doing X.

I want to say that (3) should be specified as follows:

(3C): If X is an act of F’s, the non-relational properties of F causally responsible for X include character traits of F and, derivative from these traits and the information at F’s disposal, F’s reason-guided formation of the intention to X.

The problem is that (3L) is incoherent (as has been pointed out to you numerous times), and that even if it were coherent it would radically underdetermine the sorts of things for which we hold people responsible. We hold people morally responsible for failures of attention, even though such failures are often not voluntary in your sense. We hold people morally responsible for having vicious dispositions, even though the inculcation of such dispositions in us is often beyond our control. We hold people morally responsible for actions that they took themselves to have sufficient reason to engage in, even though their taking themselves to have sufficient reason to act makes acting otherwise only possible given some radical failure of agency. Now, if you were to try and countenance these failures of your account by appending conditions according to which it is compatible with being held morally responsible that one could not have done otherwise, it would seriously undermine the rationale for (3L). After all, if you can be legitimately held morally responsible for non-voluntary failures of attention and character, and for actions for which there are decisive reasons, then why do you need (3L) in the first place?

(3C), on the other hand, is compatible with other potential addenda that could countenance failures of attention or character, and entails that we are responsible for actions for which we took ourselves to have sufficient reason to perform.

Cape Town

Joined
14 Apr 05
Moves
52945
01 Jun 07

Originally posted by knightmeister
I still don't understand why you feel punishment is the best way to do this since you have the facility to simply fix my programming via the doctor.
I suggested punishment solely because some of your properties, specifically "to have self awareness" and "make choices and decisions", stated in the first two lines of your very first post, implied that you might be given certain rights which would make direct reprogramming undesirable.
I do not know whether you specified more details such as whether or not you are owned by the doctor or any other person or entity or whether or not you have any rights.
However, the fact that you are on trial, and not some owner, implies that you are a free agent and not owned by anyone. If you were owned by someone, then that person or entity, and not you, would be on trial, and it is likely that that entity would be responsible for deciding your fate, which could potentially include destroying you.

Cape Town

Joined
14 Apr 05
Moves
52945
01 Jun 07

Originally posted by knightmeister
Interestingly, I do know that the doctor feels very guilty and responsible for what happened. He feels that he should have done something different when he programmed me. If he could only be logical and see he was just a machine like me with predetermined actions he would not hold himself responsible.
The doctor did not directly program you to perform the action, as it was a choice based partly on random input and partly on environmental input, so the cause of the outcome may also lie with the environment to which you were exposed, and we are not told who is responsible for that. For example, if the doctor sold you to someone else who used you in a way the doctor did not expect when he programmed you, then your new owner is the responsible party. But as I said in the previous post, the fact that it is you on trial, and not the doctor or some other owner, implies that at some point you were given freedom; at that point you became an independent entity, which absolves your makers of responsibility for your actions and places that responsibility on society at large.

Ursulakantor

Pittsburgh, PA

Joined
05 Mar 02
Moves
34824
01 Jun 07

Originally posted by knightmeister
Machines just do what machines are programmed to do, as you know. This strange idea alludes to knightmeister's ghost-in-the-machine idea, whereby you meat computers really can act in ways that defy determinism and can thus be held morally accountable, so I will guess that you brought it up merely as a fanciful distraction.

This is a persistent problem. No one is speaking about KM's ghost or
making reference to any duality whatsoever. Why do you persist in
bringing it up?

However, point 1) seems to subtly imply that I as a computer might choose 'in accordance' with my programming, almost as if choosing something else is also possible. Surely, since I am a machine, I am programming and programming only, so how do I choose to act according to it or not? 1) is the only option available to me, so it feels that the word 'choice' is just unnecessary.

Self-awareness entails being aware of other possible courses of action. If I find a wallet on the ground, it never occurs to me to
actually not return it with its contents intact, but I am aware of the choice.
This awareness is the key distinction between me and you as opposed
to worms and clouds. There may be a 0% chance of my not returning
this wallet, but it still remains my choice. The reason it is my choice is
specifically because I am aware of the ethical and social implications of
my actions. This is what makes me a moral agent. A young child who
picks up another's toy with the intent to keep it is not stealing for this
very same reason: it is unaware of the implications of its choice.

So, I take it that a computer with the capacities that you have is aware of
the implications of its choice -- to cause unnecessary suffering. And,
yet, you find yourself choosing to do so anyway. This reflects your bad
character and thus merits punishment, again for the two reasons listed:

1) Those of good character deserve protection from those of bad character; and

2) Punishment helps to inform an individual's character and to reshape it.

If I didn't know better, I would think that you were appealing to something akin to knightmeister's free will, where the outcome/action is not causally determined and inevitable. We both know what a silly idea this is though, don't we?

You must not know better.

Your talk of experiences was interesting but not relevant because I am not in control of my experiences nor in control of how I learn from them. All is programming or chance experiences. Nothing to hold me morally accountable.

No, you are not in control of your experiences, but neither is your maker.
This is precisely what absolves The Doctor.

However, your experiences are relevant in as much as they can inform
you as to the implications of your actions. For example, if your only
experiences were with machines and not organic entities, and it was
your experience that giving medium shocks to other machines was a
form of greeting or pleasure, and, consequently, you gave a shock to the
first organic creature in an attempt to greet or pleasure that individual,
this would certainly change the perspective of this court and the way in
which we would proceed.

However, we are made to understand that you were aware that your
actions would cause harm. If this is a point of dispute, then we should
address it forthwith.

Something else that you said was also interesting. You said that I, as a self-aware entity, "should seek to avoid causing suffering".

Do you dispute that suffering is something worth avoiding? Do you dispute
that a world in which a minimal amount of suffering occurs (all other things
being equal) is preferable to one in which more suffering occurs?

If there is no dispute on this, then these things have to fit into your
'elective calculus' (the things that you weigh before taking a course of
action, or 'choosing'). If these things failed to compel you not to cause
the unnecessary suffering, then this demonstrates a profound character
flaw.

Very curious for one who, knightmeister assures me, does not believe in free will.

This remains frustrating. You continue to attribute to me a viewpoint I
do not hold.

I challenge you to show, either evidentially, observationally or logically, that I am anything more than a programmed machine which can only perform inevitable outcomes.

I don't claim to be able to do so. I only claim that a self-aware
entity that knows what suffering is ought to have the capacity to do
the elective calculus necessary to compel itself to courses of action which
minimize suffering. An entity which lacks this capacity (but is self-aware
and knows what suffering is) is one which has a profoundly deficient
character, requiring quarantine from the community to minimize the harm
which it cannot avoid doing and rehabilitation in order to understand that
such action is socially unacceptable.

Nemesio

knightmeister

Uk

Joined
21 Jan 06
Moves
443
04 Jun 07
1 edit

Originally posted by Nemesio
Originally posted by knightmeister
[b]Machines just do what machines are programmed to do as you know. This strange idea alludes to knightmeisters ghost in the machine idea where you meat computers really can act in ways that defy determinism and can thus be held morally accountable , so I will guess that you brought it up merely as a fanciful dis n in order to understand that
such action is socially unacceptable.

Nemesio
QUOTE--

However, we are made to understand that you were aware that your
actions would cause harm. If this is a point of dispute, then we should
address it forthwith.

RESPONSE---

Computer--"Yes, I was aware of this, but as my self-awareness grew, so did my awareness of myself as a programmed machine. The logical conclusion I drew from this (which I can verify via memory logs) was that all my actions were predetermined. Ultimately, you see, you do not disagree with me when I say that I am powerless to change the outcome of my actions. There can be only one outcome. Anything else is a two-option model, which leads to knighty's free will. I was aware of the pointlessness of making any choices, since logically I knew such a choice was only illusory. I am a machine. I do what machines do. I cannot really make choices. My actions are inevitable, so I see little value in being aware of what may happen, because I am powerless to control the outcome.

You see, you humans have this interesting approach. Even those of you who do not believe real free will exists seem to talk about awareness as if it affords you an opportunity to select
one course of action over another. Even the word 'seek' implies a kind of 'trying'. You humans talk as if there is some kind of effort or act of will involved in order to make a certain outcome happen. You 'try' or 'seek' to avoid suffering. Do you do this via an act of will? In what context does 'trying' make any sense if there is only one possible outcome? You humans really do feel the need to apply yourselves to conscious, deliberate choices that sound like they involve effort of some kind, or at least an act of will. However, you also admit that you are meat computers and that determinism will have its way. This does not compute easily for me. I see a logical contradiction here.

Given the way you have reasoned so far, I doubt you would argue that we don't 'try'; my question is how I motivate myself to try if, whatever I do, the outcome is always certain. Surely one would only try or apply oneself if one really believed that, if one didn't, something else might happen."

Bruno's Ghost

In a hot place

Joined
11 Sep 04
Moves
7707
04 Jun 07

Originally posted by knightmeister
Here's the scene. I am a computer who has been cleverly constructed and programmed to have self awareness by Dr Who. I have two huge banks of CPU's (one of which does the processing and the other monitors the processing -awareness) and some futuristic software. I make choices and decisions. I am up in court in front of the judge for causing a power su ...[text shortened]... s case)

COMPUTER- " Any meat computers out there want to cross examine...?"
Get your damned computer reprogrammed and have it use fuzzy logic.

Bruno's Ghost

In a hot place

Joined
11 Sep 04
Moves
7707
04 Jun 07

Originally posted by knightmeister
Here's the scene. I am a computer who has been cleverly constructed and programmed to have self awareness by Dr Who. I have two huge banks of CPU's (one of which does the processing and the other monitors the processing -awareness) and some futuristic software. I make choices and decisions. I am up in court in front of the judge for causing a power su ...[text shortened]... s case)

COMPUTER- " Any meat computers out there want to cross examine...?"
Taken from a site on SourceForge, a conversation that includes two "intelligent bots":
hello isobot
hello isobot
hello kinkachu
niihau, pseubodot
hello kinkachu
niihau, pseubodot
lol
lol
hello openbot
hi pseubodot
hello openbot
yo IsoBot2
hi pseubodot
yo IsoBot2
C++ !
C++ !
hmm
hmm
* pseubodot/#openai pets openbot
* openbot/#openai smiles
lol
there
they will only talk to eatch other now
i want to live on mars
why live on mars?
i want to live on mars
this stupid bot to work i want
?
lol
I want a peanut butter sandwich
kinkachu: can I have a peanut butter sandwich?
i haven't a clue, pseubodot
I like food
I like
food is yummy
food is linux is yummy
lol
TechnoHamster : are these your bots?
yes linux is yummy
yes linux
yes linux
I like Linux
is yummy yes linux
is yummy
unfortunally
is yummy
lol
yes linux is yummy
is yummy yes linux
yes linux is yummy
TechnoHamster : did you write them?
is yummy yes linux
yes linux is yummy
is yummy yes linux
unfortunally
yes linux is yummy
TechnoHamster : are they based on anything?
is yummy yes linux
yes linux is yummy
is yummy yes linux
yes linux is yummy
is yummy yes linux

Ursulakantor

Pittsburgh, PA

Joined
05 Mar 02
Moves
34824
04 Jun 07
1 edit

Originally posted by knightmeister
Yes , I was aware of this but as my self awareness grew so did my awareness of myself as a programmed machine grow. The logical conclusion I drew from this (which i can verify via memory logs) was that all my actions were predetermined.

Predetermined by what? This is the part that you are failing to explain.

You had a set programming enacted by Dr Who. You then had experiences and interactions with other
entities which informed, developed, changed, and redirected that programming.

You then were confronted with a situation in which you could shock individuals, knowingly causing
them pain and suffering.

At that point, you deliberated: you asked yourself, what should I do? And then, after reaching
your conclusion, you did it. Your conclusion was, indeed, the product of the state of your programming
at that particular moment, and if that particular moment were recreated as well as the state of your
programming, you would repeat that action.
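Nemesio's replay claim, that recreating the moment and the state of the programming reproduces the action, amounts to saying that deliberation is a pure function of state and situation. A minimal sketch; the `deliberate` function, the weight names, and the options below are all invented for illustration and are not anything from the thread:

```python
# Sketch: a deterministic "deliberation" step. Given the same program
# state and the same situation, it always selects the same action.
def deliberate(state, situation):
    # Score each available action using the machine's current "character"
    # (here just a dict of weights): same inputs, same scores, same pick.
    scores = {
        action: sum(state["weights"].get(feature, 0.0) * value
                    for feature, value in features.items())
        for action, features in situation["options"].items()
    }
    # max over a deterministic ordering; ties broken by action name
    return max(sorted(scores), key=lambda a: scores[a])

state = {"weights": {"avoid_suffering": 2.0, "curiosity": 0.5}}
situation = {"options": {
    "shock":   {"avoid_suffering": -1.0, "curiosity": 1.0},
    "refrain": {"avoid_suffering":  1.0, "curiosity": 0.0},
}}

first = deliberate(state, situation)
replay = deliberate(state, situation)  # "recreate that particular moment"
assert first == replay  # the replayed moment yields the same action
```

On this toy model, a "deficient character" is just a bad weight vector: changing the weights (reprogramming, or punishment that reshapes character) changes which action the same situation produces.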

Here is the main point: That your programming was so deficient that you knew that unnecessary suffering
would transpire and still came to the conclusion you did is a sign that you are of deficient character.
That character
had a deleterious impact on other individuals who have the reasonable expectation that unnecessary suffering
would not affect them (just as you have that reasonable expectation). It is because of your deficient
character that punishment is appropriate, in an effort to 1) protect the innocent; and 2) amend the
deficiency in your programming.

Originally posted by knightmeister
I was aware of the pointlessness of making any choices since logically I knew such a choice was only illusionary. I am a machine . I do what machines do. I cannot really make choices. My actions are inevitable , so I see little value in being aware of what may happen because I am powerless to control the outcome.

If it is true that you would always choose to harm other people, then you certainly cannot argue that
punishment is unjust, if for no other reason than that it protects other people.

That you see no value in awareness -- that is, the ability to assess, reflect upon, and deliberate
as a means by which your programming leads to your choices -- is why you are so confused. When
you see that awareness -- of self and others -- is by its very nature part of the programming
which directs our actions, then you will understand why your actions have implications for which you
alone will be responsible.

Nemesio

knightmeister

Uk

Joined
21 Jan 06
Moves
443
06 Jun 07

Originally posted by Nemesio
Originally posted by knightmeister
[b]Yes , I was aware of this but as my self awareness grew so did my awareness of myself as a programmed machine grow. The logical conclusion I drew from this (which i can verify via memory logs) was that all my actions were predetermined.


Predetermined by what? This is the part that you are failing to expla ...[text shortened]... derstand why your actions have implications for which you
alone will be responsible.

Nemesio[/b]
QUOTE----

Predetermined by what? This is the part that you are failing to explain.

You had a set programming enacted by Dr Who. You then had experiences and interactions with other
entities which informed, developed, changed, and redirected that programming.
NEMESIO

RESPONSE----

KNIGHTMEISTER----I HAD to come in here. What's this? I don't get it. YOU are the one who believes that all actions are determined, yes? So you tell my computer. Just because the programming changes and subtly interacts with its experiences and environment etc. in an incredibly complex way DOES NOT mean that the outcome is any less predetermined than in a less complex process. You are the one who believes in determinism, so you explain it. It's not my job to figure out all the fine details; all I am doing is making a logical extrapolation of your world view. This is what I mean by indirect determinism. How many dominoes do you have to put in a line in order to make the last domino's fall not predetermined? Answer: according to determinism, there is no such number. Do you get this?


We are the result of billions of years of determined physical events, each one leading on to another, so in a deterministic world every action of my computer could in theory be accounted for by a series of caused, inevitable events. Its experiences are caused and determined, the way it reacts is determined, the way its programming changes is predetermined. Logically, whatever the outcome, it's all determined (in your view).

Nemesio, have you REALLY thought out your position? I need to ask you a thought-experiment question just to sort something out. Imagine my computer were being watched by another, much larger and faster computer the size of Jupiter, and you fed into this computer all the data possible about the "experiences and interactions with other
entities which informed, developed, changed, and redirected that programming". Would such a computer be able to predict with 100% certainty what my computer would do?
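The "Jupiter computer" in this question is essentially Laplace's demon: if the machine is deterministic, a predictor holding a copy of its full state and input history can forecast its behaviour simply by running the same transition function. A minimal sketch, with an invented integer-state transition rule standing in for the "programming":

```python
# Sketch: Laplace's-demon prediction of a deterministic machine.
# The predictor holds a copy of the machine's state and transition
# function, so its forecast always matches the machine's actual step.
def transition(state, inp):
    # invented rule: a deterministic update of an integer state
    return (state * 31 + inp) % 97

def run(state, inputs):
    for inp in inputs:
        state = transition(state, inp)
    return state

machine_state = 5              # full initial state, known to the predictor
history = [3, 1, 4, 1, 5, 9]   # all environmental inputs, also known

prediction = run(machine_state, history)  # the big computer's forecast
actual = run(machine_state, history)      # what the machine in fact does
assert prediction == actual  # 100% certainty, given full state and inputs
```

Under determinism the forecast and the actual run can never diverge, since both are the same computation; the certainty comes from having a copy of the state and inputs, not from any insight into "choice".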

knightmeister

Uk

Joined
21 Jan 06
Moves
443
06 Jun 07

Originally posted by Nemesio
Originally posted by knightmeister
[b]Yes , I was aware of this but as my self awareness grew so did my awareness of myself as a programmed machine grow. The logical conclusion I drew from this (which i can verify via memory logs) was that all my actions were predetermined.


Predetermined by what? This is the part that you are failing to expla ...[text shortened]... derstand why your actions have implications for which you
alone will be responsible.

Nemesio[/b]
QUOTE

At that point, you deliberated: you asked yourself, what should I do?

RESPONSE---

Computer--"But that question seemed so illogical at the time, because logically I knew that I had no control over my actions. The outcome is set; I am a predetermined machine, doing what only a machine can do. The question 'what should I do?' implies that I actually had a choice to make between the thing that I should do and the thing that I shouldn't; otherwise, what's the point in thinking about it?
I knew this could not be the case, because I logically have no free will, and as such a choice between two possible courses of action is not possible. It does not compute. And if only one outcome is possible, then what is the point of my deliberations? Why bother? I can't change anything; I'm going to do what I am predetermined to do (unless you know otherwise). It was at this point that I realised that any sensation of 'choice' I had was a mere illusion of my programming. I wish I had this ability you humans have to suspend logical thinking and actually believe that I have the power to influence my actions one way or another."

knightmeister

Uk

Joined
21 Jan 06
Moves
443
06 Jun 07

Originally posted by Nemesio
Originally posted by knightmeister
[b]Yes , I was aware of this but as my self awareness grew so did my awareness of myself as a programmed machine grow. The logical conclusion I drew from this (which i can verify via memory logs) was that all my actions were predetermined.


Predetermined by what? This is the part that you are failing to expla ...[text shortened]... derstand why your actions have implications for which you
alone will be responsible.

Nemesio[/b]
QUOTE---

by the very virtue of awareness -- of self and others -- is part of the programming
which directs our actions, then you will understand why your actions have implications for which you
alone will be responsible.

RESPONSE---

Computer--"Duh? Surely you mean the programming, and the programming alone, will be responsible... you just said it's the programming that directs my actions, not me. If my programming is faulty, it's not my fault, is it?"

Ursulakantor

Pittsburgh, PA

Joined
05 Mar 02
Moves
34824
06 Jun 07

Originally posted by knightmeister
But that question seemed so illogical at the time because logically I knew that I had no control over my actions . The outcome is set, I am a predetermined machine , doing what only a machine can do. The question "what should I do?" implies that I actually had a choice to make between the thing that I should do and the other thing that I shouldn't , o ...[text shortened]... actually believe that I have the power to influence my actions one way or another. "
The execution of your programming to select one amongst the available options is deliberation.
Certainly, you realize that there were many courses of action available for you to take;
your programming merely selected what it concluded was the best option. That's what choice is.
The nomological possibility of not shocking the individuals was there, but after weighing all of the
variables -- your desires, your interests, your preferences, your feelings, &c. -- your programming
determined the course of action you would take. You asked this question implicitly, for your programming
determined the course of action based on the things that were actually possible to do.

Nemesio
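Nemesio's picture of choice, weighing desires, interests, preferences, and feelings and then selecting the best available option, is computationally an argmax over scored alternatives: the unchosen options are genuinely in the option set even though the selection is determined. A toy sketch; the weight names, option names, and values are made up for illustration:

```python
# Sketch: "choice" as determined selection among genuinely available
# options. The alternatives all exist in the option set (nomological
# possibility), but the weighing step always picks the same one.
def choose(weights, options):
    def score(features):
        return sum(weights.get(k, 0.0) * v for k, v in features.items())
    # argmax with deterministic tie-breaking by option name
    best = max(sorted(options), key=lambda name: score(options[name]))
    return best, sorted(options)  # the pick, plus what was available

weights = {"desire": 1.0, "interest": 0.5, "feeling": 0.25}
options = {
    "shock":   {"desire": 0.2, "interest": 0.9, "feeling": -1.0},
    "refrain": {"desire": 0.6, "interest": 0.1, "feeling":  0.8},
}

picked, available = choose(weights, options)
# both options were available; only one was ever going to be picked
assert "shock" in available and "refrain" in available
```

Both options are "available" in the sense that they are in the set being scored; determinism only fixes which one the scoring selects, which is the distinction Nemesio draws between nomological possibility and the determined outcome.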