03 Feb '16 17:15 (4 edits)
Originally posted by twhitehead
I stick with the textbook definition:
Given a set of possible outcomes that occur at specific frequencies, the probability is a measure of those frequencies.
Or in the words of Wikipedia:
[quote]Probability is the measure of the likelihood that an event will occur. Probability is quantified as a number between 0 and 1 (where 0 indicates impossibili ...[text shortened]... hereas a 50/50 result is fantastic under my definition it is unsatisfactory failure under yours.
I stick with the textbook definition:
What if the textbook definition isn't what someone would most naturally mean by probability? Then he would have an incentive to make his own definition. That is what I did.
Note that it is not clear in the above sentence, but I would add that 1 and 0 are excluded, i.e. a probability lies in the open interval (0, 1).
What if I disagree that the textbook definition is what I mean by probability, and I have created a perfectly self-consistent alternative definition that many people would intuitively agree with and prefer to the textbook one?
Why would that definition be any less valid than the textbook's?
I note that your definition is 'applied probability'. Your probability is to mathematical probability
I'm not sure what you mean by 'applied probability' in this context.
When I see something happens at a frequency of 90% you will only ask 'how sure are we that it will happen'?
You have completely lost me with "you will only ask 'how sure are we that it will happen'?". With my definition, why would I not give a probability of 90%, given that I know the evidence for 90%?
I would only fail to give 90% if I didn't know that evidence, but my definition covers that case: that kind of probability depends on what we know.
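To make the point concrete, here is a minimal sketch (the 90% rate and trial count are assumptions for illustration, not from the thread): an observer who has seen a record of trials can estimate the probability from the observed frequency, which is exactly the evidence for "90%" referred to above; an observer without that record has nothing to base the estimate on.

```python
import random

# Hypothetical event that occurs 90% of the time (assumed rate for illustration).
TRUE_RATE = 0.9
random.seed(42)  # fixed seed so the sketch is reproducible

# The "evidence": a record of 10,000 observed trials.
trials = [random.random() < TRUE_RATE for _ in range(10_000)]

# An observer who knows this record estimates the probability
# as the observed frequency; it comes out close to 0.9.
observed_frequency = sum(trials) / len(trials)
print(observed_frequency)
```

An observer lacking the trial record would have to fall back on some other state of knowledge, which is the sense in which this kind of probability depends on what we know.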