top-down causation

Science

D (Losing the Thread) | Quarantined World | Joined 27 Oct 04 | Moves 87415 | 29 Mar 16

Originally posted by KazetNagorra
Take any kind of normalized wave function, and compute the product of the standard deviation of the momentum and position operators. It will obey the Heisenberg uncertainty principle. No measurement is required for this to be the case. Evolve the wave function using some unitary operator - again, the Heisenberg uncertainty principle will be obeyed at al ...[text shortened]... ng or averaging over some microscopic details in the system? That's the million dollar question.
Yes, but what is uncertain? Between observations we have very little epistemological justification for even believing the particle exists. The only thing available to be uncertain about is the outcome of some experiment. So, yes, one can play this mathematical game, but to check the theory one has to do an experiment, and one can't really get around the awkward lock that epistemological considerations have over ontological ones.
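As a hedged numerical sketch of the calculation described in the quoted post (my own illustration, using a Gaussian wave packet and units with hbar = 1; none of the specific numbers come from the thread), one can discretize a normalized wave function and check that the product of the position and momentum spreads respects sigma_x * sigma_p >= hbar/2, with no measurement anywhere in the calculation:

```python
# Sketch only: check the Heisenberg bound sigma_x * sigma_p >= hbar/2 for a
# normalized Gaussian wave packet on a grid. Units with hbar = 1; the width,
# centre and mean wavenumber below are arbitrary illustrative values.
import numpy as np

hbar = 1.0
x = np.linspace(-50.0, 50.0, 4096)
dx = x[1] - x[0]

a, x0, k0 = 2.0, 0.0, 1.5                       # width, centre, mean wavenumber (hypothetical)
psi = (2 * np.pi * a**2) ** -0.25 * np.exp(-(x - x0) ** 2 / (4 * a**2) + 1j * k0 * x)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)   # renormalize on the grid

# Position spread from |psi(x)|^2.
px = np.abs(psi) ** 2
mean_x = np.sum(x * px) * dx
sigma_x = np.sqrt(np.sum((x - mean_x) ** 2 * px) * dx)

# Momentum spread from the discrete Fourier transform of psi.
k = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)    # wavenumber grid (p = hbar * k)
w = np.abs(np.fft.fft(psi)) ** 2
w /= w.sum()                                    # discrete momentum-space weights
mean_k = np.sum(k * w)
sigma_p = hbar * np.sqrt(np.sum((k - mean_k) ** 2 * w))

print(f"sigma_x * sigma_p = {sigma_x * sigma_p:.4f}  (bound hbar/2 = {hbar / 2})")
```

For this minimum-uncertainty Gaussian the product comes out at essentially hbar/2; more structured wave functions give larger products, never smaller.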

This is why the Copenhagen Interpretation is so heavily influenced by Logical Positivism and denies the reality of everything in the theory except measurements. It's also why quantum theory is so weird. The basic problem is that, unlike in classical physics, we don't really have the right to claim that a particle has a property unless we make a measurement, but in making the measurement we do some real damage to the system.

I don't think the apparent randomness can be due to neglecting aspects of experimental detail, since then the distribution would be expected to change depending on those details - beyond wave-particle duality type considerations. There could be a local hidden variable, but it looks unlikely given the Aspect experiments.

Also, when one wants to calculate a decay rate for example, then one calculates a matrix element that gives the probability amplitude for the decay from the initial to the final state. So this probabilistic interpretation is built into the theory from the start.
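To make that probabilistic content concrete, here is a small hedged sketch (my own, with a made-up decay rate rather than anything computed from a matrix element): given a rate Gamma, the theory predicts survival probabilities and the statistics of many decays, not the timing of any single decay.

```python
# Sketch with an invented decay rate Gamma (standing in for something proportional
# to |matrix element|^2): the theory predicts P(survive to t) = exp(-Gamma * t)
# and the statistics of an ensemble, not when any individual decay happens.
import numpy as np

rng = np.random.default_rng(seed=1)
gamma = 0.1                                        # hypothetical decay rate, per second
times = rng.exponential(scale=1.0 / gamma, size=100_000)

print("mean lifetime, theory  :", 1.0 / gamma)
print("mean lifetime, sample  :", round(float(times.mean()), 2))
print("P(survive 10 s), theory:", round(float(np.exp(-gamma * 10.0)), 3))
print("P(survive 10 s), sample:", round(float((times > 10.0).mean()), 3))
```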

It's worth noting that most physicists never worry about this in their work (except for the few who actually work in this area); there's just the "shut up and calculate" interpretation.

D (Losing the Thread) | Quarantined World | Joined 27 Oct 04 | Moves 87415 | 29 Mar 16 | 1 edit

Originally posted by KazetNagorra
It's a little bit more tricky than that because there are some simple interacting systems which we can solve, usually using numerical tools, to arbitrary precision (i.e. they are deterministic). In his standard work on quantum mechanics for undergrads, Griffiths suggests a "measurement" should be interpreted as the slightly less vague "interaction with a macroscopic system" (see above).
This is a really tricky point. First, I know you didn't intend it, but your first sentence makes it sound as if the fact that we can do the calculation is what makes the system deterministic. The states of a hydrogen atom are a nice simple example of an interacting system (it's a composite) where we can calculate the energy levels. We could prepare an isolated hydrogen atom in an excited state and calculate the half-life for it to decay into its ground state, using non-relativistic quantum mechanics or even the full technology of relativistic quantum field theory. We "see" the emitted photon - note that what the detector interacts with is the emitted physical photon.

The difficulty with Griffiths' "measurement as an interaction with a macroscopic system" is that, although macroscopic, the measuring system should still be governed by quantum theory if we want quantum theory to be a universal theory, which is a fairly standard requirement for physical laws. This means that the macroscopic system should really go into a linear superposition of states, so we haven't actually managed to collapse the wave function. This is a problem for the Copenhagen Interpretation, which treats the detectors in its thought experiments as classical.

The von Neumann interpretation gets around this by insisting on conscious observers, which at least invokes something qualitatively different from any old macroscopic system, but which most people who aren't hippies find problematic. The Many Worlds theory copes by having the observer go into a linear superposition, with each copy seeing a different outcome; in many ways this is the cleanest interpretation, but, as Einstein put it, "It's a bit heavy on universes." Penrose suggests that non-linear effects, in other words quantum gravity, should collapse the wave function. The Ensemble interpretation is Copenhagen lite (and suffers from the same drawback, but at least admits it); it's the one I tend to believe, as a sort of holding position. Then de Broglie-Bohm and allied interpretations have a variety of ways to make the whole thing metaphysically deterministic. Take your pick.
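As a small aside, the hydrogen example above is easy to make concrete. The sketch below is mine, not part of the post; it uses only standard constants, evaluates the non-relativistic levels E_n = -13.6 eV / n^2, and does not attempt the half-life calculation itself (that needs the perturbation-theory matrix element).

```python
# Sketch: non-relativistic hydrogen energy levels and the Lyman-alpha photon
# emitted in the 2 -> 1 decay. The decay half-life is not computed here.
RYDBERG_EV = 13.605693      # Rydberg energy in eV
HC_EV_NM = 1239.841984      # h * c in eV * nm

def energy_level(n: int) -> float:
    """Bound-state energy of hydrogen in eV (no fine structure, no QED corrections)."""
    return -RYDBERG_EV / n ** 2

photon_ev = energy_level(2) - energy_level(1)
print(f"E_1 = {energy_level(1):.2f} eV, E_2 = {energy_level(2):.2f} eV")
print(f"emitted photon: {photon_ev:.2f} eV, wavelength {HC_EV_NM / photon_ev:.1f} nm")
```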

K | Germany | Joined 27 Oct 08 | Moves 3118 | 31 Mar 16

Yes, it's tricky. The theory has no problem with doing no measurements but as you point out, we can't test it without doing measurements, which makes it very hard to measure the influence of measurements. Still, it's not excluded that there is some microscopic, deterministic origin of apparent randomness as long as the way in which this happens is universal. It's not obvious that this is the case, but neither is it obvious that it's not the case. Numerical simulations are promising in the sense that ever more complex simulations might allow us to compute the entire system including the measurement "device," which could shed light on measurements in macroscopic systems.
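A minimal sketch of the kind of simulation being gestured at here (a toy example of my own, not anyone's actual research code): a two-level "system" unitarily coupled to a two-level "pointer" standing in for the measurement device. Everything evolves unitarily, yet tracing out the pointer leaves the system in a mixed state, which is exactly the sort of behaviour such simulations are meant to probe.

```python
# Toy premeasurement: system qubit in superposition, pointer qubit in its "ready"
# state, coupled by a CNOT-style unitary. The global state stays pure, but the
# reduced state of the system alone becomes mixed.
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

system = (ket0 + ket1) / np.sqrt(2)          # system starts in a superposition
pointer = ket0                               # pointer starts "ready"
joint = np.kron(system, pointer)             # product state: system (x) pointer

# CNOT-style coupling: the pointer flips iff the system is in |1>.
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
joint = cnot @ joint                         # still a pure, unitarily evolved state

rho = np.outer(joint, joint.conj()).reshape(2, 2, 2, 2)
rho_system = np.trace(rho, axis1=1, axis2=3)             # partial trace over the pointer
print(rho_system)                                        # -> diag(0.5, 0.5): mixed
print("purity:", np.trace(rho_system @ rho_system).real) # 0.5 < 1
```

Scaling this up to genuinely macroscopic pointers is, of course, the hard part.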

h | Joined 06 Mar 12 | Moves 642 | 01 Apr 16 | 7 edits

Originally posted by KazetNagorra
Yes, it's tricky. The theory has no problem with doing no measurements but as you point out, we can't test it without doing measurements, which makes it very hard to measure the influence of measurements. Still, it's not excluded that there is some microscopic, deterministic origin of apparent randomness as long as the way in which this happens is unive ...[text shortened]... cluding the measurement "device," which could shed light on measurements in macroscopic systems.
I am currently privately researching epistemology, and I would say that a 'simplistic' application of Occam's razor would, with all the current evidence, tell us we should assume a 0.5 probability that there exists true randomness (i.e. randomness that is not merely 'random' because of our inability to predict) in the external world, and thus a 0.5 probability that there exists no true randomness, only pseudo-randomness, in the external world.

I also conclude that that 0.5 probability that there exists true randomness can never increase. That is because, even if there really does exist true randomness, there logically cannot ever be any empirical evidence to indicate this truth. The rational reassessment of that 0.5 probability can only either keep that 0.5 value the same or decrease it depending on what the new evidence (if any) is and what the truth of this matter is.
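One way to phrase this claim in Bayesian terms (my paraphrase, not humy's own formalism): if no possible observation is ever more likely under "true randomness" than under "only pseudo-randomness", then Bayes' rule can hold the 0.5 steady or push it down, but never up. A tiny sketch with made-up likelihoods:

```python
# Sketch of the "can never increase" claim: two exhaustive hypotheses, R (true
# randomness) and D (only pseudo-randomness), with a 0.5/0.5 prior. Under the
# assumption that evidence is never *more* likely under R than under D, the
# posterior for R can equal the prior but never exceed it.
def posterior_R(prior_R: float, lik_R: float, lik_D: float) -> float:
    """Bayes' rule for the two exhaustive hypotheses R and D."""
    return (lik_R * prior_R) / (lik_R * prior_R + lik_D * (1.0 - prior_R))

prior = 0.5
# Hypothetical likelihoods of some body of evidence E under each hypothesis.
for lik_R, lik_D in [(0.3, 0.3),   # E equally likely either way       -> posterior stays 0.5
                     (0.1, 0.4),   # E fits the deterministic story    -> posterior drops
                     (0.0, 0.2)]:  # E only possible if deterministic  -> posterior hits 0
    print(lik_R, lik_D, "->", posterior_R(prior, lik_R, lik_D))
```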

K | Germany | Joined 27 Oct 08 | Moves 3118 | 01 Apr 16

Originally posted by humy
I am currently privately researching epistemology and I would say that the 'simplistic' application of Occam's razor would, with all the current evidence, tells us we should assume a 0.5 probability that there exists true randomness (i.e. no merely 'random' purely because of our inability to predict) in the external world and thus a 0.5 probability that ...[text shortened]... decrease it depending on what the new evidence (if any) is and what the truth of this matter is.
I don't think there is much point in assigning probability to a non-probabilistic event.

h | Joined 06 Mar 12 | Moves 642 | 01 Apr 16 | 5 edits

Originally posted by KazetNagorra
I don't think there is much point in assigning probability to a non-probabilistic event.
I am surprised you said that.
There existing true randomness or there existing no true randomness - whichever of these two is correct - isn't an event but rather a state of affairs.

But, that aside, I would also say there is often a point in assigning a probability to a non-probabilistic event, provided that, because of insufficient data/knowledge, we don't know or are unsure whether it will take place or has taken place.
We do this often in our everyday lives: the unborn baby of my currently pregnant neighbor is going to be born either a boy or a girl, but I don't know which sex it is and, as a result, I would assign a 0.5 probability to each of those possibilities - even though my neighbor might know for a fact, via a scan, that it is a boy, and so would say the probability of it being born a boy is ~1 and that its birth is a non-probabilistic event (close enough).

I would also say there is often a point in assigning a probability to a non-probabilistic, non-event state of affairs, provided that, because of insufficient data/knowledge, we don't know or are unsure what the state of affairs is. We also do this often in our everyday lives. Going back to the unborn baby example: forget about whether it will be born a boy or a girl; is it currently a boy or a girl? That has already been determined, but I still don't know which, so I still assign a 0.5 probability to each.

Cape Town | Joined 14 Apr 05 | Moves 52945 | 01 Apr 16

Originally posted by humy
The rational reassessment of that 0.5 probability can only either keep that 0.5 value the same or decrease it depending on what the new evidence (if any) is and what the truth of this matter is.
Why do you believe the probability can go one way but not the other?

I assume that 0.5 probability really just means 'we haven't got a clue'.

Now suppose we come up with a 'theory of everything' that accurately describes the universe in a fully deterministic way. Will this rule out the existence of randomness? I don't think so, but it would strongly suggest that there is no randomness.

Now alternatively suppose our 'theory of everything' includes randomness inherent in its structure. Can we ever know if that randomness is truly random? I think not. But it would strongly suggest that there is randomness.

I really don't think that in either scenario an accurate probability can be assigned or should be assigned.

K | Germany | Joined 27 Oct 08 | Moves 3118 | 01 Apr 16

Originally posted by humy
I am surprised you said that.
There existing true randomness or there existing no true randomness, whichever of these two is correct, isn't an event but rather is the state of affairs.

But, that aside, I also would say there is often a point in assigning probability to a non-probabilistic event providing we don't know or are unsure whether it will or has ta ...[text shortened]... lready been determined but I still don't know which so I still assign a 0.5 probability to each.
Probability is something that has a meaning in relation to a measurement (or a mathematical idealization). When one says, for instance, that the probability that the Higgs boson exists is x%, it is only because we have gathered some statistics, even though the Higgs boson either exists or it doesn't (most likely, but not certainly, it does).

We have gathered no data on whether there is true randomness or not so we can't assign any meaningful probability. We do have data on the number of males and females so we can say that, without any other knowledge, the probability that any given birth will be a boy is roughly 0.5 (it is slightly higher for boys than for girls if I remember correctly).

h | Joined 06 Mar 12 | Moves 642 | 01 Apr 16 | 6 edits

Originally posted by twhitehead
Why do you believe the probability can go one way but not the other?

I assume that 0.5 probability really just means 'we haven't got a clue'.

Now suppose we come up with a 'theory of everything' that accurately describes the universe in a fully deterministic way. Will this rule out the existence of randomness? I don't think so, but it would strongly ...[text shortened]... n't think that in either scenario an accurate probability can be assigned or should be assigned.
Why do you believe the probability can go one way but not the other?

How on earth could an observation show that there does NOT exist any kind of hidden cause behind any apparent randomness? That's why.
I assume that 0.5 probability really just means 'we haven't got a clue'.

No, because that 0.5 is a real probability, which means we have no more reason to favor the hypothesis than to refute it and no more reason to refute it than to favor it.
If I toss a coin, I would say it has a 0.5 probability of coming up heads, because it is rational to assign equal probabilities to equal possibilities (I have written a mathematical proof, based on axioms of logic, that equal possibilities have equal probabilities, for my yet-to-be-published book). Saying it has a 0.5 probability doesn't mean 'we haven't got a clue', because that 0.5 was assigned with valid logic. If we have no idea whether only one side of the coin is heads, or whether no side is heads, THEN 'we haven't got a clue', because we don't know whether we have equal possibilities there; in that case we cannot rationally assign 0.5, but rather cannot assign any value, because the probability is undefined - which is just another way of saying it doesn't exist.

Now suppose we come up with a 'theory of everything' that accurately describes the universe in a fully deterministic way. Will this rule out the existence of randomness? I don't think so,

-and I never said it would. But if that theory is then backed up by real empirical evidence, THEN that would increase our most rational assignment of the probability that the universe is fully deterministic.

Now alternatively suppose our 'theory of everything' includes randomness inherent in its structure. Can we ever know if that randomness is truly random? I think not.

correct.
But it would strongly suggest that there is randomness.

NO! Because, no matter how good the theory, there is no logical contradiction in there being no randomness, and no possible observation can show that there does NOT exist any kind of hidden cause behind any apparent randomness; thus there is no valid logical reason to increase that 0.5 probability that there exists true randomness to a value greater than 0.5 - we are just back where we started.


I really don't think that in either scenario an accurate probability can be assigned or should be assigned

Why not? Excluding any probability that comes from true randomness (if such a thing exists), all assignment of probability is based on, and defined as, the current most rational level of certainty given our current limited knowledge and our current ignorance. If it weren't for true randomness (if such a thing exists) and our ignorance, there would be no such thing as probability and we would have no use for it.
Now, we have limited knowledge (and ignorance) of what would tell us whether there exists true randomness; why then should we treat that as a special case, such that we say we cannot rationally assign a probability to whether there exists true randomness, yet say we can assign a probability to something else that we also have incomplete knowledge of?

h | Joined 06 Mar 12 | Moves 642 | 01 Apr 16 | 4 edits

Originally posted by KazetNagorra
Probability is something that has a meaning in relation to a measurement (or a mathematical idealization). When one says, for instance, that the probability that the Higgs boson exists is x% it is only because we have gathered some statistics, even though the Higgs bosons either exists or it doesn't (most likely, but not certainly, it does).

We have ...[text shortened]... be a boy is roughly 0.5 (it is slightly higher for boys than for girls if I remember correctly).
Probability is something that has a meaning in relation to a measurement

Not if it is a prior probability.
it is only because we have gathered some statistics,

Now you are talking about statistical probability, which shouldn't be confused with prior probability. What if we have no measurement? We can still assign some probabilities.
see
https://en.wikipedia.org/wiki/Prior_probability
We have gathered no data on whether there is true randomness

So this is not a statistical probability; it is a prior probability.

We do have data on the number of males and females so we can say that, without any other knowledge, the probability that any given birth will be a boy is roughly 0.5

But we need data there to show that boy and girl have equal probabilities (statistical, not prior, probabilities), whereas we don't need data to show that there being true randomness and there being none are equal prior probabilities (I intend to write the formal mathematical proof of that, deduced from axioms of logic, in due course and publish it in my book); prior probabilities obey different rules from posterior probabilities and this must be taken into account.
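For what it's worth, a standard textbook way to show the prior/posterior distinction with the birth example (a hedged sketch with invented counts; nothing here depends on humy's unpublished "tie statistics") is a Beta prior, fixed before any data, updated by binomial data into a posterior:

```python
# Sketch: a prior is fixed before any data; a statistical (posterior) probability
# folds the data in. Unknown: the chance p that a given birth is a boy.
a, b = 1.0, 1.0                        # Beta(1, 1) prior, uniform on [0, 1], set with no data
boys, girls = 514, 486                 # hypothetical observed births

a_post, b_post = a + boys, b + girls   # conjugate Beta-binomial update
print("prior mean P(boy)    :", a / (a + b))                           # 0.5 before any data
print("posterior mean P(boy):", round(a_post / (a_post + b_post), 3))  # pulled toward the data
```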

h | Joined 06 Mar 12 | Moves 642 | 01 Apr 16 | 14 edits

There appears to be general confusion displayed here between prior probability and posterior probability - especially statistical probability, which is a type of posterior probability. Equivocating the two is an all-too-common error of logic.

https://en.wikipedia.org/wiki/Prior_probability

https://en.wikipedia.org/wiki/Posterior_probability

https://en.wikipedia.org/wiki/Equivocation

-and note that prior probability doesn't require data/measurement. Just because you cannot define probabilities for some hypotheses before the data/measurement/evidence, i.e. just because some hypotheses don't have a prior probability (and there are many that don't!), doesn't mean none do. By equivocating prior probability with statistical probability, just as done here, one can make the logical error of concluding that no hypothesis, or at least the hypothesis considered at the current time (such as the above hypothesis that there exists true randomness), has a prior probability! But this is a false inference, since some hypotheses DO have prior probabilities, and that means they have probabilities even in the absence of data/measurement.

It has been implied here that we cannot assign any probability to whether there exists true randomness until we have data saying one way or another; I can show that this leads to a self-contradiction via the standard definition of probability, as given by the classical equation for posterior probability;
-this is how:
The claim that we cannot assign any probability to whether there exists true randomness until we have data saying one way or another is the same as claiming there is no prior probability for that (see definition of prior: https://en.wikipedia.org/wiki/Prior_probability ).
But, if we look at the classical equation for posterior probability:
https://en.wikipedia.org/wiki/Posterior_probability
"...he posterior probability is defined as:
...
posterior probability ∝ likelihood * prior probability
..."

As you can see, we cannot assign the posterior probability until we assign a value to the prior probability (actually, that is not always true: I have discovered some significant exceptions to that rule, which I call 'priorless probabilities' and will publish in my book, but they have no relevance to exactly what I am discussing here). Therefore, if we cannot assign a prior probability to something then, using that equation, we cannot assign any posterior probability, which means we cannot assign any statistical probability. Therefore, if we cannot assign any probability before the data (thus in the form of a prior probability) to the hypothesis that there exists true randomness, we also cannot assign any probability after the data (thus in the form of a posterior probability - a statistical probability in this case) to that hypothesis; this contradicts the notion that we cannot assign any probability to whether there exists true randomness until we have data saying one way or another since, if that were true, we wouldn't be able to assign any probability even after considering new evidence!
I claim that premise is false and that we can assign and define both the priors and the posteriors in this case.
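For concreteness, here is the quoted proportionality written out as a hedged, illustrative calculation over a finite set of hypotheses; the numbers are invented, and the point is simply that the right-hand side cannot be evaluated until some prior is supplied:

```python
# Sketch: posterior ∝ likelihood × prior, with the constant fixed by normalizing
# over the competing hypotheses. With no priors, the right-hand side is undefined.
def posteriors(priors, likelihoods):
    """Return normalized posterior probabilities for a finite set of hypotheses."""
    unnormalized = [lik * pri for lik, pri in zip(likelihoods, priors)]
    total = sum(unnormalized)
    return [u / total for u in unnormalized]

# Two hypothetical hypotheses with equal priors and invented likelihoods of the evidence.
print(posteriors([0.5, 0.5], [0.8, 0.2]))   # -> [0.8, 0.2]
```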

K | Germany | Joined 27 Oct 08 | Moves 3118 | 01 Apr 16

Originally posted by humy
This is prior probability.
There is no sensible prior probability either.

h | Joined 06 Mar 12 | Moves 642 | 01 Apr 16 | 1 edit

Originally posted by KazetNagorra
There is no sensible prior probability either.
Why no 'sensible' prior probability?
Are you saying there is generally no 'sensible' prior probability, or merely that there is no 'sensible' prior probability specifically for this particular hypothesis?
Is any prior probability 'sensible' and, if so, what makes this particular hypothesis have no 'sensible' prior probability?

K | Germany | Joined 27 Oct 08 | Moves 3118 | 01 Apr 16

Originally posted by humy
why no 'sensible' prior probability?
Is any prior probability 'sensible' and, if so, what makes this particular hypotheses have no 'sensible' prior probability?
A prior probability is used in Bayesian statistics if you have some way of making a sensible estimate. In this case, we do not. Remember that guy who "calculated" using Bayesian statistics that there is an x% chance that God exists? Garbage in, garbage out.

h | Joined 06 Mar 12 | Moves 642 | 01 Apr 16 | 12 edits

Originally posted by KazetNagorra
A prior probability is used in Bayesian statistics if you have some way of making a sensible estimate. In this case, we do not. Remember that guy who "calculated" using Bayesian statistics that there is an x% chance that God exists? Garbage in, garbage out.
It is true that you cannot rationally use conventional statistics to calculate the probability that there is a God. But that is because much of conventional statistics is simply wrong, owing to a drastic misunderstanding of what probability is, on which the whole of conventional statistics is based. In the book I am writing, I will give a series of mathematical proofs, via proof by contradiction, that much of conventional statistics is simply wrong, and then I will prove what the correct complete set of axioms for probability is - almost the same set, except for a critical additional axiom, which I call the tie axiom, and which drastically changes everything. Then I explain how the resulting new statistics, which I call tie statistics, solves the problem of induction, Hempel's paradox, the asymmetric reasoning problem and many other paradoxes and problems.

But then I will show how this leads to a way of calculating the prior probabilities that there exists true randomness and that there exists a God; neither of which you can do with the deeply flawed conventional statistics. The prior probability that there exists a God depends on how you define God but, generally, tie statistics would give it a very small prior probability. This is because the hypothesis that there exists a God is not one hypothesis but a compound hypothesis that consists of attaching many attributes to the same object: God is supernatural AND God is the creator of the universe AND God is conscious with a mind, etc. - note all those 'AND's! Each addition of one of those ANDs lowers the prior probability that there is a God, since it takes only one of those attributes failing to be attached to that same object (God, in this case) for there to be no such object. This can be thought of as an application of Occam's razor, except now with a massive load of tie-logic vindication and improvement that you could never get with the deeply flawed conventional understanding of statistics.
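The arithmetic behind the "note all those ANDs" point is easy to illustrate without any of the unpublished machinery (a generic sketch with invented per-attribute probabilities; it relies only on the fact that a conjunction is never more probable than its least probable conjunct, and equals the product under independence):

```python
# Sketch: each added independent attribute multiplies in a factor <= 1, so the
# probability of the whole conjunction can only stay the same or shrink.
# In general P(A and B) <= min(P(A), P(B)); under independence it is the product.
attributes = {                        # hypothetical per-attribute probabilities
    "is supernatural": 0.1,
    "created the universe": 0.1,
    "is conscious with a mind": 0.3,
}

p_all = 1.0
for name, p in attributes.items():
    p_all *= p
    print(f"... AND {name:<26s} -> running probability {p_all:.4f}")
```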

This is an odd and rare case of conventional wisdom being flawed not because of what it has got (such as a flawed premise or inference) but because of what it hasn't got; and, because of what it hasn't got (namely the tie axiom), it runs into no end of paradoxes and problems (such as the problem of induction) and, generally, no end of total confusion and nonsense.

I cannot say here what the tie axiom actually is until my book is published, because of the obviously very high risk of plagiarism in this case - I hope you understand.