Originally posted by KazetNagorra
How is what done exactly?
the "intersection" graphs of SES (socioeconomic status) versus various unmean measures of deviations versus standardized (read: i do believe this is a buncha' horseyplop) measures of intelligence, plus or minus certain variables, that is. i can give page numbers. yes i know a little bit about statistics (read: i don't wanna' have to crank through some of that extraneous bee-ess again), a lot about linear algebra, and barely enough about calculus to see me through the explanations. otherwise i will just try to simulate this crapola with dice rolls and binomial distributions, which appears to be unindicated at first glance.
Originally posted by eldragonfly
Does anyone understand what he just said? I get the feeling he has smoked too much ganja.
The bell curve is the natural behavior of large numbers and random distributions. Presumably, you refer to two different categories of humans, whatever they may be, such as different races, genders, religions, socioeconomic backgrounds, or education levels, and find that they perform differently on intelligence tests.
If your question is serious, which it certainly does not appear to be, then you are asking how we explain one categorical group performing better than another on a test that measures some form of intelligence, while avoiding attributing the difference purely to the categorization itself. For instance, males performing better on a math exam than females of the same age, or perhaps Asians performing better than western European whites.
The answer is that there are confounding variables other than innate intelligence. Those variables include biased test design, cultural background differences with differing value systems, language differences, biased sampling, and many other possible factors.
Originally posted by sonhouse
Does anyone understand what he just said? I get the feeling he has smoked too much ganja.
I didn't...
To the OP: The normal distribution is very common because of the central limit theorem. Subject to certain (quite general) conditions, the distribution of the mean of a sample of random draws approaches a normal distribution as the sample size increases.
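To make that concrete (and to pick up the dice-roll simulation idea from earlier in the thread), here is a minimal Python sketch of the central limit theorem, assuming NumPy is available; the sample sizes and repeat counts are arbitrary choices, not anything taken from the book:

```python
# Minimal sketch of the central limit theorem with dice rolls.
# Assumes NumPy; the sample sizes and repeat counts below are arbitrary.
import numpy as np

rng = np.random.default_rng(seed=0)

def sample_means(n_dice, n_repeats=100_000):
    """Average of n_dice fair six-sided dice, repeated n_repeats times."""
    rolls = rng.integers(1, 7, size=(n_repeats, n_dice))  # values 1..6
    return rolls.mean(axis=1)

for n in (1, 2, 10, 50):
    means = sample_means(n)
    # Theory: one die has mean 3.5 and standard deviation sqrt(35/12) ~ 1.71,
    # so the mean of n dice has standard deviation ~ 1.71 / sqrt(n) and its
    # histogram looks more and more like a bell curve as n grows.
    print(f"n={n:2d}  empirical mean={means.mean():.3f}  "
          f"empirical sd={means.std():.3f}  "
          f"theoretical sd={np.sqrt(35 / 12 / n):.3f}")
```

Plotting a histogram of `means` for each n shows the distribution tightening around 3.5 and taking on the familiar bell shape.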
Originally posted by AThousandYoung
It sounds kinda like he wants to know how the IQ test score information was processed to get those graphs. Not sure.
thank you Palynka. very well spoken.
yes i want to know how they generate the SES versus mean scores of standard distributions, re: iq scores unfairly distributed, and flatten out those curves, to make the intersection graphs happen. also what it means... i mean it certainly can't be about money and lifestyle choices. *dances happy dance or two*
i can give page numbers, but there are a plethora of them around the middle of the book.
is it just raw statistics or do i need differential equations to make those measures appear standard? really that is my basic question here. i don't see how calculus, by itself, could cut it.
Originally posted by sonhouse
Does anyone understand what he just said? I get the feeling he has smoked too much ganja.
don't i wish. heh heh. 😉
Originally posted by coquette
If your question is serious, which it certainly does not appear to be, then you are asking how we explain one categorical group performing better than another on a test that measures some form of intelligence, while avoiding attributing the difference purely to the categorization itself.
yessir, the question is very serious. and i know enough about "beginning" statistics, the standard distribution and the Gaussian distribution to realize this. and yes i have read almost 1/3 of The Bell Curve; can't believe how stupid some people are when criticizing this work.
Originally posted by mtthw
The other point (in my opinion) is that since there's generally far more variation within these groups than between groups, even if there is a systematic difference it's pretty irrelevant for most purposes.
thank you. i also consider it uncontrived and blatant garbage.
Originally posted by mtthw
The other point (in my opinion) is that since there's generally far more variation within these groups than between groups, even if there is a systematic difference it's pretty irrelevant for most purposes.
Actually there is rather an interesting phenomenon that I read of in "A Mathematician Reads the Newspaper" (great book!). If one looks at two bell curves with the same spread but whose means differ slightly, then the difference is exaggerated at the ends of the curves.
A practical example is to consider IQs for two groups.
Let's say group A averages 99.5 and group B averages 100.5.
The number of 150+ IQers will be substantially higher for group B.
Any statisticians care to put some meat on that?
(Unfortunately I no longer have the book ....)
Originally posted by wolfgang59
The difference between the number of 150+ folks is easy to calculate provided the standard deviation is known. One of the problems here, however, is that the approximation of a normal distribution tends to break down at the tails (for example, you would get a finite chance of finding someone with a 1000+ IQ).
Originally posted by wolfgang59
If you have, say, a standard deviation of 10, then you can calculate this from the CDF of the normal distribution.
The fraction of people above 150 in group B would be:
1/2 - 1/2*erf( (150 - 100.5) / (10*sqrt(2)) ) = 3.71067... × 10^-7, or around 371 in a billion.
For group A it would be:
1/2 - 1/2*erf( (150 - 99.5) / (10*sqrt(2)) ) = 2.20905... × 10^-7, or around 221 in a billion.
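For anyone who wants to check these numbers, here is a short sketch using Python's built-in math.erf; the means (100.5 and 99.5) and the standard deviation of 10 are just the assumptions quoted above:

```python
# Reproduce the tail probabilities above: P(IQ > 150) for normal distributions
# with standard deviation 10 and means 100.5 (group B) and 99.5 (group A).
from math import erf, sqrt

def tail_above(x, mean, sd):
    """P(X > x) for X ~ Normal(mean, sd), written with the error function."""
    return 0.5 - 0.5 * erf((x - mean) / (sd * sqrt(2)))

p_b = tail_above(150, 100.5, 10)  # ~3.71e-07, about 371 per billion
p_a = tail_above(150, 99.5, 10)   # ~2.21e-07, about 221 per billion

print(f"group B: {p_b:.3e}")
print(f"group A: {p_a:.3e}")
print(f"ratio B/A: {p_b / p_a:.2f}")  # ~1.7x more 150+ scorers in group B
```

So a one-point shift in the mean leaves the bulk of the two curves nearly identical but changes the 150+ tail by roughly a factor of 1.7, which is the effect wolfgang59 described.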
Originally posted by KazetNagorra
The difference between the number of 150+ folks is easy to calculate provided the standard deviation is known. One of the problems here, however, is that the approximation of a normal distribution tends to break down at the tails (for example, you would get a finite chance of finding someone with a 1000+ IQ).
Yes, this is very true. Worse than the 1000+, or, well... also weird, would be the positive probability of someone having a negative IQ.
The question is why they are all in Debates. 😀
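As a footnote on the tail behaviour discussed just above: keeping the same toy assumptions (mean 100, standard deviation 10), here is a small sketch, assuming SciPy is available, of how the normal model assigns nonzero probability to impossible scores:

```python
# The normal model misbehaves in the far tails: it gives nonzero probability
# to impossible IQ scores. Assumes SciPy; mean 100 and sd 10 as in the toy
# example above.
from scipy.stats import norm

p_negative = norm.cdf(0, loc=100, scale=10)  # P(IQ < 0), about 7.6e-24

# P(IQ > 1000) underflows to 0.0 in double precision, so take the log instead.
log_p_huge = norm.logsf(1000, loc=100, scale=10)  # natural log of P(IQ > 1000)

print(f"P(IQ < 0)        = {p_negative:.2e}")
print(f"ln P(IQ > 1000)  = {log_p_huge:.1f}")  # roughly -4055, i.e. ~10^-1761
```

Both numbers are astronomically small, but the fact that they are not exactly zero is the point being made here: the normal curve is only an approximation, and it fails in the far tails where real scores are bounded.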