Suppose I make a machine that can predict your future behavior perfectly.
In the future, one minute from now, you will have to choose to take either a harmless purple pill, or not. Everything has been neatly prearranged and is going to plan, so barring very unlikely circumstances, you will be in position to make that choice, pill in hand, one minute from now.
By definition, the machine will be able to predict whether you will take the pill or not.
The machine then notifies me of its prediction.
Being wealthy, and of a singularly contrarian disposition, I offer you £1,000,000 to do the *opposite* of what the machine predicts. You can use the money for any selfish or selfless end you wish.
Questions:
1) Can you refuse to do what the machine predicts?
2) Or is such a machine impossible *in principle* to build (not just in practice)?
3) Does quantum uncertainty necessarily exist to prevent a paradox from forming?
Originally posted by DoctorScribbles:
"2) Such a machine is impossible in principle to build. The problem you present serves as a counterexample to its logical feasibility."

Suppose the machine were not perfect in general, but perfect just with regard to the specific situation described? Would the same conclusion hold?
Originally posted by Pawnokeyhole:

If such a machine were possible, it would notify you of the wrong answer, so he is correct. But it all really depends on what was in those pills... To make it a fairer test, the person should choose between two different pills, not a pill and nothing.
Originally posted by Pawnokeyhole:

My personal belief is that quantum physics seems to be showing a certain amount of 'flexibility' with regard to time and space, and this should mean the machine is theoretically possible; we just haven't reached the stage yet where we have the necessary understanding.
As for whether you can do the opposite of what the machine predicts, that depends on the scenario given to the machine to make its prediction.
If the machine was just asked if you would take a pill or not, in theory, you could do the opposite of the prediction. By offering the incentive to do the opposite, you are adding another variable to the equation, thus rendering the prediction obsolete.
If, however, the machine's prediction took into account the offer, then it would be theoretically impossible to do the opposite. If you then took the offer, and did the opposite of the prediction, by definition the machine must have been flawed.
So, if the machine works, it is impossible to refuse to do what it predicts. If you do, it just shows the machine didn't work. Thus the paradox is avoided...🙄
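The reasoning above - that a prediction, once announced to a bribed subject who always does the opposite, can never match the subject's actual behaviour - can be sketched as a fixed-point check. This is a minimal Python sketch; the function names and the use of True/False to encode "takes the pill" are illustrative assumptions, not anything from the thread:

```python
# The bribed subject's decision rule: having heard the announced
# prediction, take the £1,000,000 and do the opposite of it.
def bribed_choice(announced_prediction: bool) -> bool:
    return not announced_prediction  # True = takes the pill

# A working machine must announce some prediction p such that the
# subject's actual behaviour equals p, i.e. p must be a fixed point.
consistent = [p for p in (True, False) if bribed_choice(p) == p]

print(consistent)  # [] - no consistent prediction exists
```

Seen this way, the impossibility claim is a diagonalisation: any predictor whose output is fed back to a contrarian chooser is defeated by construction, much like the standard halting-problem argument.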
Originally posted by tojo:

I'm assuming that the machine takes into account the external bribe to defy the machine's prediction.
You claim that, even if you learned what the machine had predicted, and someone offered you a huge sum of money to do the opposite, then, if the machine really worked, you couldn't do the opposite of what the machine predicted. I agree.
However, the questions I am interested in are these:
1) Could such a machine exist?
2) If it couldn't, why not?
Dr. Scribbles opined that the machine cannot exist. I think his reason was that, if it could, a predictive paradox would arise, whereby I could always choose to defy the prediction; hence, the machine is impossible.
Dr Scribbles: If so (if that was precisely what you meant), does that mean we have free will?
Originally posted by Pawnokeyhole:

We could have free will, in a meaningful sense, even if the machine could exist and always predicted correctly.
Originally posted by Pawnokeyhole:

Of course, there is also the theory that time splits reality into an infinite number of alternate realities, where not only is everything possible, but everything can and will happen in at least one of these realities.
By this I mean: you make one decision and follow one path in time, but at the same instant you also make the opposite decision and continue down the other path in another reality - what Pratchett calls 'the trouser legs of time'.
In this scenario, when given the choice, you can refuse to do what was predicted, and in that reality the machine did not work. But in a different reality, you chose to do as predicted, and the machine did work (or at least wasn't proven to be flawed).
Obviously, to better demonstrate the machine's success, you would have to repeat the experiment a number of times. Each time, the same choice exists, and reality splits depending on the outcome of that choice.
Say ten tests were taken. That would give a total of 1024 possible realities (if my reasoning is correct), with 1023 versions in which the machine failed at least one test. But in the one remaining reality, the machine successfully predicted the outcome every time.
In that reality, the machine succeeded in predicting the outcome of every choice, and free will still existed for the test subject. Therefore, no paradox?
Or is this a bit of a cop-out theory?
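The branch counting above checks out by brute force. A small Python sketch; encoding a "reality" as a tuple of per-test outcomes (True = prediction matched the choice) is an illustrative assumption:

```python
from itertools import product

TESTS = 10

# Each branch of reality is a tuple of ten outcomes; True means the
# machine's prediction matched the subject's choice in that test.
branches = list(product([True, False], repeat=TESTS))

all_correct = [b for b in branches if all(b)]       # machine never failed
some_failure = [b for b in branches if not all(b)]  # failed at least once

print(len(branches))      # 1024
print(len(all_correct))   # 1
print(len(some_failure))  # 1023
```

Ten binary splits give 2**10 = 1024 branches, exactly one of which has the machine right every time - matching the 1023/1 split claimed above.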
Originally posted by dottewell:
"We could have free will, in a meaningful sense, even if the machine could exist and always predicted correctly."

But doesn't predictable imply caused? And perfectly predictable, wholly caused?
I'm not a compatibilist. I disagree that free will is compatible with determinism, though that's apparently not the fashion.
Originally posted by Pawnokeyhole:
"But doesn't predictable imply caused? And perfectly predictable, wholly caused? I'm not a compatibilist."

Well, yes - it depends on your idea of free will. Your machine may be a counterexample to a non-compatibilist view of free will.
Originally posted by dottewell:
"...there would be a reason, unknown to us at the time, why we decided to ditch our intention to 'defy' the machine."

Such a reason seems a priori very unlikely, based on a knowledge of (ironically, somewhat predictive and deterministic) human psychology.
It seems odd that, just to "please the machine", an extraordinary reason would emerge that would override greed or selflessness.
Would nature just produce such a reason on demand in this situation?