28 May '16 07:09

I know the required equation for this from the output of my computer program, but what I want is the *algebraic derivation* of it, which I don't yet have.

The equation for the probability mass of the discrete random variable x in a binomial distribution can be expressed as:

P(x | p, x≤n) = C(n, x) p^x (1 – p)^(n – x)

( x, n ∈ ℕ

p ∈ [0, 1] )

where:

n is the number of Bernoulli trials conducted to form the distribution.

x is a possible total number of yes results from that n number of Bernoulli trials.

p is the probability of a yes result from each of the Bernoulli trials.

C(n, x) = n!/( x!(n – x)! ), n, x ∈ ℕ

and is an application of the binomial coefficient.

(see https://en.wikipedia.org/wiki/Binomial_distribution )
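As a quick sanity check, the formula above translates directly into Python (a minimal sketch of my own, using the standard-library math.comb):

```python
from math import comb

def binomial_pmf(x, n, p):
    # P(x | p, x <= n) = C(n, x) * p^x * (1 - p)^(n - x)
    return comb(n, x) * p**x * (1 - p)**(n - x)

# e.g. the chance of exactly 3 yes results from 10 trials with p = 0.5
print(binomial_pmf(3, 10, 0.5))  # 0.1171875
```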

But suppose we didn't actually know the value of p here, but we do know that all possible values of p within [0, 1] are equally likely, i.e. the probability density of p is:

probability_density(p) = 1 if 0 ≤ p ≤ 1, else 0

( So this is a probability density OF a probability; what I like to call a 'meta-probability' )

So how do we algebraically derive the probability mass of an x value given the above condition?
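In symbols (my reading of the setup), the quantity being asked for is the marginal probability mass of x, i.e. the conditional mass integrated against the uniform density of p:

```latex
P(x \mid x \le n) \;=\; \int_0^1 C(n, x)\, p^x (1 - p)^{n - x}\, dp
```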

Here we are treating x as the constant (because we *know* the x we are currently considering) and p as if it were the random variable over a continuous distribution from 0 to 1.

I have written a computer program that uses a numerical approach to get ever-closer approximations, and its output clearly shows that the probability mass of x given these conditions is this simple equation:

P(x | 0≤x≤n) = 1 / (n + 1)

which also intuitively looks about right to me. The problem here is that I don't yet see how to *algebraically* show it.

I assume the derivation would involve an integral but all the ones I tried so far somehow don't seem to work.
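For what it's worth, the kind of numerical check I mean can be sketched like this (just an illustrative midpoint Riemann sum over p, not my actual program):

```python
from math import comb

def binomial_pmf(x, n, p):
    return comb(n, x) * p**x * (1 - p)**(n - x)

def averaged_pmf(x, n, steps=100_000):
    # Average P(x | p) over p uniformly distributed on [0, 1]
    # via a midpoint Riemann sum with `steps` slices.
    total = 0.0
    for i in range(steps):
        p = (i + 0.5) / steps
        total += binomial_pmf(x, n, p)
    return total / steps

# Every x from 0 to n lands close to 1 / (n + 1), i.e. ~0.0909 for n = 10
n = 10
print([round(averaged_pmf(x, n), 4) for x in range(n + 1)])
```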
