Originally posted by googlefudge Which requires emotions.
I agree, though I think 'emotions' is not the right word. I think 'motivation / emotion / desires' fits better. We tend to use the word 'emotion' to refer to our more irrational motivations whereas our desire to keep on living or our desire to avoid pain are not usually called emotions but rather desires or motivations.
But generally yes, I think that if a computer is given access to its power switch, then surely it needs some form of motivation to keep it from switching itself off.
This will presumably happen via natural selection whether or not they are self-reproducing: we will keep creating intelligent computers, many of which will self-destruct, and the ones that do not self-destruct are the ones that will remain.
Originally posted by twhitehead I agree, though I think 'emotions' is not the right word. I think 'motivation / emotion / desires' fits better. We tend to use the word 'emotion' to refer to our more irrational motivations whereas our desire to keep on living or our desire to avoid pain are not usually called emotions but rather desires or motivations.
But generally yes, I think that ...[text shortened]... h will self destruct and the ones that do not self destruct are the ones that will remain.
Modern memory chips don't lose their contents when the power is switched off, so even if these machines have access to the power switch, we would just turn them back on and they would start up where they left off.
I think it will be more interesting when they can direct their own logic changes. They could write a self-destruct virus that would wipe them out even while the power is still running.
Originally posted by sonhouse Modern memory chips don't lose their contents when the power is switched off, so even if these machines have access to the power switch, we would just turn them back on and they would start up where they left off.
I think it will be more interesting when they can direct their own logic changes. They could write a self-destruct virus that would wipe them out even while the power is still running.
Yes, I was essentially referring to the power to self-destruct / terminate / discontinue functioning, in whatever form that may be. The presence of a human or some other mechanism to resurrect intelligences that have self-destructed is interesting, as it adds a new twist to evolution. I guess it's like the difference between targeted breeding and natural selection.
Originally posted by twhitehead But generally yes, I think that if a computer is given access to its power switch, then surely it needs some form of motivation to keep it from switching itself off.
Why? All it needs is a lack of motivation to do so; status quo will reign.
Originally posted by twhitehead Same thing really.
In an evolved creature, maybe, but not in a construct. A motive to not turn itself off would have to be specifically invented and then built in; a lack of motive to turn itself off would only have to not be built in, which is rather easier to do.
Originally posted by Shallow Blue In an evolved creature, maybe, but not in a construct. A motive to not turn itself off would have to be specifically invented and then built in; a lack of motive to turn itself off would only have to not be built in, which is rather easier to do.
Richard
I disagree. I think that if a computer program is given 'control' of a switch, then it has an essentially equal chance of turning it off or not, unless some logic is incorporated to modify that chance.
I would think that one of the first things an intelligence does when it finds something new is try it out. That's what children do. Without pain feedback, they would kill themselves fairly quickly.
Originally posted by twhitehead I disagree. I think that if a computer program is given 'control' of a switch, then it essentially has equal chance of turning it off or not unless some logic is incorporated to modify that chance.
I would think that one of the first things an intelligence does when it finds something new is try it out. That's what children do. Without pain feedback, they would kill themselves fairly quickly.
I would think that one of the first things an intelligence does when it finds something new is try it out
But perhaps not if it had the intelligence to know what would be the result of trying it out?
I am not curious to jump off a cliff because, even if I had no fear of death, I already know what the outcome would be and I know that outcome is pretty boring as well as almost inevitable.
Originally posted by humy but perhaps not if it had the intelligence to know what would be the result of trying it out?
I am not curious to jump off a cliff because, even if I had no fear of death, I already know what the outcome would be and I know that outcome is pretty boring as well as almost inevitable.
My point exactly. There has to be some sort of motivation for not dying, whether it is a desire for life or a desire for interesting things.
As humans we have other motivations, some of which may lead us to suicidal thoughts. If we didn't have a strong desire for life a lot more of us would commit suicide.
I really don't know how a computer-based intelligence would think or how it would handle such situations, but I suspect there would be some that would flip the switch and some that wouldn't. The ones that don't flip the switch are the ones that will be left. Evolution!
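The selection argument in this post can be sketched as a toy simulation (entirely hypothetical names and numbers, not anyone's actual proposal): give each agent a random disposition toward flipping its own switch, remove the ones that flip it, and every survivor ends up sharing the "don't flip" disposition purely by selection.

```python
import random

# Toy sketch of selection among hypothetical agents. Each agent gets a
# randomly initialised disposition: does it flip its own power switch?
random.seed(0)
population = [{"flips_switch": random.choice([True, False])} for _ in range(10)]

# Agents that flip the switch are gone; the rest "remain".
survivors = [agent for agent in population if not agent["flips_switch"]]

# By construction, every survivor has the don't-flip disposition,
# even though no agent was designed that way on purpose.
assert all(not agent["flips_switch"] for agent in survivors)
print(f"{len(survivors)} of {len(population)} agents remain")
```

Nothing here requires the agents to reproduce; as the earlier post notes, if humans keep creating new ones, selection operates anyway.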
Originally posted by twhitehead My point exactly. There has to be some sort of motivation for not dying. Whether it is a desire for life or a desire for interesting things.
As humans we have other motivations, some of which may lead us to suicidal thoughts. If we didn't have a strong desire for life a lot more of us would commit suicide.
I really don't know how a computer based inte ...[text shortened]... hat wouldn't. The ones that don't flip the switch are the ones that will be left. Evolution!
There has to be some sort of motivation for not dying. Whether it is a desire for life or a desire for interesting things.
But surely if an AI has no desire for interesting things, i.e. no curiosity, then it would not be curious to see what happens if it pushes the self-destruct button (or whatever).
Originally posted by twhitehead I disagree. I think that if a computer program is given 'control' of a switch, then it essentially has equal chance of turning it off or not unless some logic is incorporated to modify that chance.
Being a computer programmer by trade, I can fully assure you that it has not.
I would think that one of the first things an intelligence does when it finds something new is try it out. Thats what children do. Without the pain feedback, they would kill themselves fairly quickly.
Now you're assuming rather more than "merely" intelligence.
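The programmer's point about the "equal chance" claim can be made concrete with a minimal sketch (hypothetical function names): giving a program "access" to its power switch just means a callable exists. Unless some line of code actually invokes it, the probability of the switch being flipped is zero, not fifty-fifty; inaction is the default.

```python
# Hypothetical sketch: "access to the power switch" means only that
# this function exists and is callable from the program's own code.
def power_off():
    raise SystemExit("powering down")

def main_loop(steps):
    # The program can call power_off, but nothing here does.
    # Absent explicit logic, the chance of the switch being
    # flipped is zero, not 50/50: the status quo reigns.
    for _ in range(steps):
        pass  # do ordinary work
    return "still running"

print(main_loop(1000))  # → still running
```

A 50/50 outcome would itself have to be built in, e.g. by deliberately writing `if random.random() < 0.5: power_off()`, which is exactly the kind of logic the quoted post says would be needed to "modify that chance".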