T & P: The Tweedledee and Tweedledum of a T-test

If you're not a statistician, looking through statistical output can sometimes make you feel a bit like Alice in Wonderland. Suddenly, you step into a fantastical world where strange and mysterious phantasms appear out of nowhere. For example, consider the T and P in your t-test results. "Curiouser and curiouser!" you might exclaim, like Alice, as you gaze at your output. Even if you've used the p-value to interpret the statistical significance of your results umpteen times, its actual origin may remain murky to you.

What are these values, really? Where do they come from? They go arm in arm, like Tweedledee and Tweedledum.

When you perform a t-test, you're usually trying to find evidence of a significant difference between population means (2-sample t) or between the population mean and a hypothesized value (1-sample t). The t-value measures the size of the difference relative to the variation in your sample data. Put another way, T is simply the calculated difference represented in units of standard error. The greater the magnitude of T, the greater the evidence against the null hypothesis; that is, the greater the evidence of a significant difference. The closer T is to 0, the more likely there isn't a significant difference.

Remember, the t-value in your output is calculated from only one sample from the entire population. If you took repeated random samples of data from the same population, you'd get slightly different t-values each time, due to random sampling error (which is really not a mistake of any kind; it's just the random variation expected in the data). How different could you expect the t-values from many random samples from the same population to be? And how does the t-value from your sample data compare to those expected t-values?
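To make "the difference in units of standard error" concrete, here is a minimal Python sketch (not from the original post) of the 1-sample t-statistic; the function name and the simulated data are illustrative assumptions:

```python
import math
import random

def one_sample_t(sample, mu0):
    """t = (sample mean - hypothesized mean mu0) / standard error of the mean."""
    n = len(sample)
    mean = sum(sample) / n
    # sample standard deviation (n - 1 in the denominator)
    s = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
    se = s / math.sqrt(n)      # standard error of the mean
    return (mean - mu0) / se

# Hypothetical example: test whether a sample is consistent with a
# population mean of 5, using data actually drawn around 5.5.
random.seed(1)
sample = [random.gauss(5.5, 1.0) for _ in range(20)]
print(round(one_sample_t(sample, 5.0), 3))
```

Running the same function on many fresh random samples from one population would give you slightly different t-values each time, which is exactly the sampling variation described above.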