Thursday, November 19, 2009

How reliable are 'statistics' when their 'confidence level' is absent?

On the GS forum there is always much dispute about the reliability of statistics cited by pro-feminist sources (pro-feminist websites and Wikipedia) and those from other sources. Even statistics from governments and government agencies are questionable. So how trustworthy are the 'statistics' which are presented to us?





I have yet to see any statistics cited here which reveal the 'confidence level' of the figures presented. Confidence level runs from 0% to 100% and is calculated from the number of samples, how typical the samples are, and so on. (We were taught an equation for doing all this.)
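The original doesn't give the equation, but a minimal sketch of the kind of calculation meant here is the standard normal-approximation confidence interval for a sample proportion. The function name `proportion_ci` and the example numbers are illustrative, not from the post:

```python
from statistics import NormalDist

def proportion_ci(successes, n, confidence=0.95):
    """Normal-approximation confidence interval for a sample proportion."""
    p = successes / n
    # z-score for a two-sided interval at the given level (~1.96 for 95%)
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    margin = z * (p * (1 - p) / n) ** 0.5
    return p - margin, p + margin

# e.g. 520 "yes" answers out of 1,000 respondents:
low, high = proportion_ci(520, 1000)   # roughly (0.489, 0.551)
```

The interval widens as the sample shrinks or the confidence level rises, which is why a bare percentage with no sample size attached tells you very little.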





Should we disregard the statistics which omit this vital information?

I've never heard of the confidence level indicator (except on opinion polls). Yes, stats should include them.





But could you guarantee those weren't tampered with?
Reply:I seriously doubt that outside of professional journals you will hear about confidence levels, goodness of fit, chi-squares, t-tests, or any of the other boring tools of statistical analysis that I must use constantly and never talk about, except professionally.
Reply:"Figures often beguile me, particularly when I have the arranging of them myself; in which case the remark attributed to Disraeli would often apply with justice and force: 'There are three kinds of lies: lies, damned lies, and statistics.'"


- Autobiography of Mark Twain
Reply:I think we should use logic and common sense. Some people WANT to believe certain things whether they make sense or not.





It's not about how you feel; it's about how much sense it makes. If it makes sense, it's probably true, and vice versa.
Reply:FYI not all statistics use confidence intervals.





On that note, you can see all the statistics in any scientific article (it's a requirement in order to get published). I myself have cited two or three on this board; they exist, you are just not looking for them.
Reply:If the stat was presented in a puff piece, you can just ignore it. If it is presented in a professional journal without a confidence level and a methodology, you can cancel your subscription.
Reply:This question is way too complicated to be answered easily.





Confidence intervals are different from statistical significance. For example, a person may have a measured IQ of 100 with a 95% confidence interval of 90-110, meaning that we can say with 95% confidence that his/her true score is within that range.
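The IQ example above can be sketched directly: given a measured score and its standard error, the interval is score ± z·SE. The function name `score_ci` and the standard error of 5.1 (chosen so the 95% interval comes out near 90-110) are assumptions for illustration:

```python
from statistics import NormalDist

def score_ci(score, standard_error, confidence=0.95):
    """Confidence interval for a measured score, given its standard error."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)  # ~1.96 for 95%
    return score - z * standard_error, score + z * standard_error

# A measured IQ of 100 with a standard error of ~5.1 points:
low, high = score_ci(100, 5.1)   # roughly (90, 110)
```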





Anything that gets published as "significant" usually has to be significant at the p < .05 level. However, with a large enough sample size, almost anything will be statistically significant. You also need to look at statistical power and effect size.
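The large-sample point can be demonstrated with a rough sketch: the same one-point gap between two proportions is nowhere near significant with 1,000 people per group, but highly significant with 100,000. This uses a pooled two-sample z-test as an illustration; the function name is hypothetical:

```python
from statistics import NormalDist

def two_prop_p_value(p1, p2, n):
    """Two-sided p-value for the difference between two sample proportions,
    equal group sizes n, pooled normal approximation."""
    pooled = (p1 + p2) / 2
    se = (2 * pooled * (1 - pooled) / n) ** 0.5
    z = abs(p1 - p2) / se
    return 2 * (1 - NormalDist().cdf(z))

# Same tiny effect (51% vs 50%), very different verdicts:
small = two_prop_p_value(0.51, 0.50, 1000)     # ~0.65, "not significant"
large = two_prop_p_value(0.51, 0.50, 100000)   # far below .05, "significant"
```

This is exactly why effect size matters: the gap itself is one percentage point either way; only the p-value changed.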





Additionally, test-retest reliability is the best way of telling whether an effect truly exists, meaning we find the same results in multiple studies, in multiple samples, in different places, across time.
Reply:The fact that the people citing them don't include all the relevant information doesn't make the stats wrong.





If it's from a credible source (for instance, Gallup uses sample sizes that give them 95% confidence or higher), then knowing when the data was gathered, how, and by whom is all essential info.





As is the margin of error.





If it's greater than the difference between groups, that essentially says there's no detectable difference.
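That rule of thumb is simple enough to state as code: if the gap between two poll numbers is within the margin of error, treat it as a statistical tie. The function name and numbers are illustrative:

```python
def statistical_tie(p_a, p_b, margin_of_error):
    """True when the gap between two poll numbers is within the margin
    of error, i.e. no detectable difference between the groups."""
    return abs(p_a - p_b) <= margin_of_error

# 48% vs 51% with a +/-4-point margin: no detectable difference.
# 44% vs 53% with the same margin: a real gap.
```

(A more careful treatment would compare the gap against the margin of error *of the difference*, which is larger than either individual margin, but the direction of the argument is the same.)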





If they're old, they may no longer be accurate.





If they were found in a goofy way -- sampling that wasn't statistically sound, or questions that weren't neutral -- they're questionable.





In general, where you don't have all the info yourself, go by the source. Credible sources will have the backing to make the claims they make.
Reply:I tend to agree with the above answers.





I would have thought that you'd be able to find confidence levels by doing a bit of digging on a government website though. Most people wouldn't have a clue what they are and what their use is, so maybe it's information that doesn't get published.





Most websites quoting stats wouldn't have that kind of info, though, guaranteed. But if you're quoting from a blog, for example, I'm going to take it with a healthy dose of skepticism anyway.
Reply:Good point, but a little on the academic side. Confidence intervals are a useful guide, but the real devil is not usually in this sort of detail. The serious problems are usually far more fundamental, e.g. sample bias, the file drawer effect, etc.


I think we can generally trust research that omits CIs if the other important factors are present and correct. Someone who draws their sample of 10 participants from their friends, doesn't like the results, and so asks 10 other friends until they find the result they wanted will produce results not to be trusted, even if they quote a 95% CI.
Reply:I concur.
Reply:I don't even know whether confidence intervals are applicable to a lot of the kinds of research quoted in this forum, and perhaps that is why they aren't quoted.





When a survey is conducted, for example, respondents will often give answers that are based on previous questions, especially if the survey "walks through" a set of questions.





There are formulas that give a confidence interval based on the ratio of sample size to population size. They are at their most valuable when applied to things like opinion polls, where there is a simple Yes/No/ABC type answer, but again there are always issues, such as demographics, socio-economic grouping, and gender distribution.
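As a rough sketch of the opinion-poll case: the familiar "+/-3 points" headline margin comes from the worst-case (p = 0.5) formula below. This version ignores the finite-population correction and all the demographic issues the reply mentions; the function name is illustrative:

```python
from statistics import NormalDist

def poll_margin_of_error(n, confidence=0.95):
    """Worst-case (p = 0.5) margin of error for a simple Yes/No poll of
    n respondents; ignores weighting and finite-population corrections."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    return z * (0.25 / n) ** 0.5

# ~1,000 respondents give roughly the familiar +/-3 points;
# ~100 respondents give roughly +/-10 points.
```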





In my experience with these things I tend to think "it is what it is" - i.e. generally a survey or a piece of research is better than no research. I will look at the sample size. I will look at the distribution of the sample if that information is available. If I can see the source questionnaire I will look at that too. If I am interested enough.





I also greatly enjoy qualitative research. It has no statistical significance at all usually, but it does illustrate patterns of thinking and behaviour. It is valuable for those reasons.





Frankly the kinds of mathematical methods used in the physical sciences could muddy the waters much more than they clarify them. You cannot scientifically quantify how well someone understands a question. You cannot clearly identify what biases are present within a survey, or how they affect respondents. You cannot neatly designate on the basis of geography or affluence.





Putting a confidence interval on a piece of research implies that you, as a scientist, know the uncertainty in the variables you are measuring, and that the experiment can be conducted against an accurate control group and repeated with a similar outcome within the parameters of experimental error.





Social sciences deal with surveys which the same respondent may fill in differently depending on whether they are in a good or bad mood.





What this leaves is qualitative and quantitative studies with some value. The bigger the sample size of a quantitative study the more accurate it is likely to be. And a review of the source documentation should provide at least some insight into how "reasonable" a piece of research it is.
Reply:New plan. Every time someone posts statistics, you will be responsible for calculating its confidence level, and then we'll decide whether or not to get rid of it.
