*Originally posted by @humy*

**If I have understood his philosophy correctly, One implication (which he might himself have been unaware of! ) of his philosophy if valid is it would be impossible to ever totally rationally and scientifically come to the conclusion that any theory is 'probable' but rather merely one theory is 'more probable than another' giving probability a weirdly highly su ...[text shortened]... es; I still need to do a lot more work on it before I get an equation that works for all cases).**

I think you might have typed this in a little quickly. The sentence:

So we cannot

*scientifically* say photosynthesis 'probably' results in CO2 intake and O2 released?

Should probably read:

So we cannot

*scientifically* say the theory of photosynthesis, which predicts CO2 intake and O2 released, is probably true?

I don't know enough about Popper to say authoritatively what he had in mind. Most of the above is based on conversations with a former flatmate who studied philosophy. We're getting into the realm of what I think rather than what Popper thought, so bear in mind there's some merging going on in what follows.

What I'm about to say works reasonably well for physics but less so for biology. It is my idea of what a scientific theory should look like, although I don't think it's far off any standard picture; I say this because I don't want you to think that Popper, or anyone else, is responsible for its problems.

A theory has a number of elements. There is a formal language for making deductions - maths, logic, or some such - which one can use to answer questions about the world. The theory specifies some objects and their properties, ideally in an axiomatic system, though in practice it tends to be looser than that. There is also a metalanguage - a natural language such as English with some additional jargon - which explains what the formal language means and how to translate questions about the world into it. The metalanguage is what gives the theory meaning; without it the theory would be indistinguishable from pure maths.

So as a toy model consider the following question about the world: "We know that Socrates is a man, is he mortal?"

The formal language has the single axiom:

∀x (Mx → Dx)

The metalanguage tells us that the formal language is Classical Logic and that objects with the property M are men and objects with the property D are mortal. A named object - say the constant a, naming Socrates - has the property M if it is a man and D if it is mortal.

So we know to translate "Socrates is a man" into the formal language by writing Ma, and by using Classical Logic we crank the handle (universal instantiation, then modus ponens) and get Da out, which we translate back into the metalanguage to get our answer: Socrates is indeed mortal.
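To make the "crank the handle" step concrete, here is a minimal sketch in Python. It is not meant as a serious theorem prover: the "formal language" is just one rule (anything with property M also has property D) plus a set of facts, and the deduction is simple forward chaining. All the names here (`RULES`, `forward_chain`, the constant `"a"` for Socrates) are my own illustrative choices.

```python
# Minimal sketch of the toy deduction. The single rule encodes the axiom
# "for all x, Mx -> Dx"; the fact set is the translation of the question.

RULES = [("M", "D")]   # property M (man) implies property D (mortal)
facts = {("M", "a")}   # "Socrates is a man", with the constant a naming Socrates

def forward_chain(facts, rules):
    """Repeatedly apply every rule to every matching fact until nothing new appears."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            for prop, obj in list(derived):
                if prop == antecedent and (consequent, obj) not in derived:
                    derived.add((consequent, obj))
                    changed = True
    return derived

# Ask the theory the question and translate the answer back:
print(("D", "a") in forward_chain(facts, RULES))  # True: Socrates is mortal
```

The user's only job, as in the text, is translating the question into the formal symbols; the machine does the rest.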

The basic idea is that the formal language is something one could hand to a Turing machine; the only work left to the user is to work out how to ask the right question and translate it into the formal language. So, in this picture, a scientific theory is a string of symbols together with some deductive rules: one asks it questions about the world, the theory answers, and one can then test the answers against measurements of the world.

Now we run into a problem: is the theory true? Whether it is or not depends on what one means by "true". A theory is a collection of sentences which describe the world, so it is true in the way descriptive sentences are true - and some sentences describe the world better than others. So I suspect that what Popper is saying is not that one theory is "more probable" than another, but that one theory is "a better description" than another. Quantum Theory can account for effects we see that Newtonian Mechanics cannot, so Quantum Theory is a better description. However, for most practical purposes Newtonian Mechanics is perfectly adequate, so it's not bad; Quantum Theory just catches more detail.

In terms of probability, when a theory is falsified it is proven false, so it has zero chance of being true. It is much harder to assign a probability to a theory that has not been falsified. Suppose there were two candidate theories for some phenomenon and some experiments had been done. One theory (call it A) is confounded by numerical stability problems in its deductive apparatus, so we have tighter confidence intervals for the other (call it B). Although theory B is more probable, in the sense that we are more confident it produces results that accord with experiment, we have no real basis for rejecting theory A: when a better algorithm for producing results is found, it may turn out to give answers as close to experiment as theory B's. So, given our state of knowledge of the two theories, we can only give some sort of relative risk.
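Here is a hypothetical sketch of that situation, with all numbers invented. Both theories make the same underlying prediction for some observable, but theory A's results carry extra jitter from an unstable algorithm, so its confidence interval against the (simulated) experiment is wider, even though neither theory is actually rejected.

```python
# Hypothetical illustration (invented numbers): two theories predict the same
# observable; theory A's predictions carry extra numerical noise from an
# unstable algorithm, widening its confidence interval without falsifying it.
import random
import statistics

random.seed(0)
true_value = 10.0
# Simulated experimental measurements with ordinary measurement noise:
measurements = [random.gauss(true_value, 0.5) for _ in range(50)]

def residual_ci(predict, data, z=1.96):
    """Approximate 95% confidence interval for the mean residual (prediction - data)."""
    residuals = [predict() - d for d in data]
    mean = statistics.mean(residuals)
    half = z * statistics.stdev(residuals) / len(residuals) ** 0.5
    return mean - half, mean + half

# Theory B: numerically stable, predictions land on the underlying value.
ci_b = residual_ci(lambda: 10.0, measurements)
# Theory A: same underlying prediction, plus jitter from the unstable algorithm.
ci_a = residual_ci(lambda: 10.0 + random.gauss(0, 2.0), measurements)

print("theory A CI width:", ci_a[1] - ci_a[0])
print("theory B CI width:", ci_b[1] - ci_b[0])
# A's interval is wider, so we are more confident in B - but A's interval can
# still be consistent with the data, so A is not rejected.
```

The point of the sketch is that the comparison is relative: the widths tell us which theory we currently trust more, not an absolute probability that either is true.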

However, assigning any sort of absolute probability is problematic. We might discover an effect tomorrow which falsifies both theories, so it is not really possible to say that theory B is true with probability p_B and theory A with probability p_A < p_B, since we have no good way of assigning a probability that there is an unknown effect waiting to falsify one or the other of them. But the confidence intervals do give us a relative risk based on what we do know.