Originally posted by sh76
I figure that mathematics is a science, so I hope I have the right forum.
I need to determine what level of sample size is necessary to generate statistically reliable data for exam results. In other words, if I give an exam to X random college students, I want to be able to assert that, based on those results, I can be confident that Y% of random students wi ...[text shortened]... or; and
2) Someone can give me a layman's tip on how to make that determination.

Depending on the significance you want, X may be quite large; I doubt you are going to be able to get particularly reliable data with sub-hundred numbers, and ideally you would want a thousand plus, depending on what kind of exam you are conducting and what you want to find out from it. It might be possible to get a significant result with few ...[text shortened]... so I would endeavour to learn more about statistics; it's more useful than most people realise.
Originally posted by googlefudge
Depending on the significance you want, X may be quite large; I doubt you are going to be able to get particularly reliable data with sub-hundred numbers, and ideally you would want a thousand plus, depending on what kind of exam you are conducting and what you want to find out from it. It might be possible to get a significant result with few ...[text shortened]... so I would endeavour to learn more about statistics; it's more useful than most people realise.

You're right about statistics. I'll put it on my list.
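For a rough sense of the numbers involved: the usual back-of-the-envelope calculation asks how many students you need before the margin of error on the mean shrinks to an acceptable size. Here is a minimal Python sketch, assuming roughly normally distributed scores; the standard deviation and margin used are illustrative placeholders, not figures from this thread:

import math
from statistics import NormalDist

def required_sample_size(stdev, margin, confidence=0.95):
    # Smallest n such that a z-based confidence interval for the mean
    # has half-width <= margin, assuming approximately normal scores.
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # ~1.96 for 95%
    return math.ceil((z * stdev / margin) ** 2)

# Illustrative inputs: 12-point standard deviation, 1-point margin of error.
print(required_sample_size(12, 1))  # -> 554: "quite large", as noted above

Note how the required n grows with the square of the precision you want: halving the margin quadruples the sample size, which is why sub-hundred samples are rarely enough.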
Originally posted by amolv06
In reply to your last post, which I saw after I posted, what was your standard deviation?

According to Excel, the standard deviation of all 845 scores is 12.08%.
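As a cross-check on the Excel figure, the same statistic is easy to compute elsewhere; a minimal sketch, where the scores list is a placeholder rather than the actual 845 results:

from statistics import stdev, pstdev

scores = [71.0, 84.5, 62.0, 90.0, 77.5]  # placeholder; the real data set has 845 scores

print(stdev(scores))   # sample standard deviation (n - 1 denominator), like Excel's STDEV
print(pstdev(scores))  # population standard deviation (n denominator), like Excel's STDEVP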
Originally posted by sh76
For example, if I can say: "Based on these results, we can project with 95% confidence that random exam takers will generate a mean within 3.5 points of the mean of our exam," or something to that effect, that would be great. I just don't quite know how to determine those other numbers.

OK, I think I understand what you want. You want to calculate a confidence interval for your mean.
Originally posted by amolv06
I think what you're looking for is the standard deviation of the mean. If I recall correctly, that's simply the standard deviation divided by the root of the sample size. Assuming a standard deviation of 15 points or so with a 100-student class, you can be reasonably sure that the average of your next class will be within 5 points of the average of the previous class, all other things being equal. At least from what I recall.

Maybe I'm reading his post wrong, but I don't think that's really what he wants?
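The rule quoted above (standard error = standard deviation / sqrt(n)) is easy to sanity-check; a minimal sketch using the post's illustrative numbers:

import math

stdev = 15.0  # illustrative standard deviation from the post above
n = 100       # class size

standard_error = stdev / math.sqrt(n)  # how much the class average itself varies
print(standard_error)      # -> 1.5
print(2 * standard_error)  # -> 3.0; ~95% of class averages land within 2 standard errors

By that reckoning, "within 5 points" is a conservative claim here, since 5 points is more than three standard errors.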
Originally posted by WoodPush
OK, I think I understand what you want. You want to calculate a confidence interval for your mean.
When you calculate a mean from a sample, like:
sum(all test scores) / N
you aren't really calculating the "true mean" for the test - you are estimating what it is.
If you give the same test to a different batch of students, you will likely get a somewhat different mean. It sounds like you want an Excel macro anyway; I think the CONFIDENCE function does what you want.

Thank you; this is very helpful. It's not exactly what I'd had in mind (I was looking more for predicting the likelihood of future examinees' scoring ranges), but I definitely think I can use it.
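For readers without Excel, CONFIDENCE(alpha, standard_dev, size) just returns the half-width of a z-interval for the mean, i.e. z * stdev / sqrt(n). A minimal Python equivalent, fed with the figures that appeared earlier in the thread (12.08 and 845):

import math
from statistics import NormalDist

def confidence_half_width(alpha, stdev, n):
    # Same quantity as Excel's CONFIDENCE(alpha, standard_dev, size):
    # half-width of a (1 - alpha) z-interval for the mean.
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return z * stdev / math.sqrt(n)

h = confidence_half_width(0.05, 12.08, 845)
print(round(h, 2))  # -> 0.81, so the true mean lies within about +/-0.8 points at 95% confidence

Note that this bounds the mean, not individual scores; for "the likelihood of future examinees' scoring ranges" mentioned above, you would want a prediction interval, which uses the full standard deviation rather than the standard error.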
Originally posted by amolv06
The standard deviation of a sample set is a measure of how much the individual scores of the students vary in that sample set. But the standard deviation of the mean is a measure of how much your mean will vary, not of the variance in the individual students' scores.
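A tiny simulation makes this distinction concrete; a sketch assuming normally distributed scores, with illustrative parameters:

import random
from statistics import mean, stdev

random.seed(1)

# Draw 1,000 classes of 100 students each from the same score distribution.
class_means = [
    mean(random.gauss(70, 12) for _ in range(100))
    for _ in range(1000)
]

# Individual scores vary by about 12 points (the standard deviation),
# but the class means vary by only about 12 / sqrt(100) = 1.2 points.
print(stdev(class_means))  # close to 1.2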