# Stats Exam 2

Question | Answer |
---|---|

Point Estimate | Summary statistic: a single number used as an estimate of a population parameter. Ex: the sample mean |

Interval Estimate | Based on our sample statistic, a range we'd expect if we repeatedly sampled the population. Ex: confidence interval |

Confidence Interval | An interval estimate that includes the range around a mean when we add and subtract a margin of error. Confirms the findings of hypothesis testing in more detail. |

95% Confidence Interval | A 95% confidence level is most commonly used, indicating the middle 95% of the distribution that falls between the two tails. We construct this around the obtained SAMPLE mean. |

Calculating confidence intervals | 1. Draw a distribution around the sample mean 2. Indicate bounds on your drawing (-1.96 and 1.96) 3. Mark the middle 95% 4. Turn the z-statistic into raw scores 5. Check |

Confidence interval: How to turn the z-statistic into raw scores | Mlower = Msample - z(σM); Mupper = Msample + z(σM) *z = 1.96* |
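
The two raw-score formulas can be checked with a minimal sketch in Python (the sample mean and standard error below are made-up numbers for illustration):

```python
# Hypothetical example: sample mean M = 100, standard error sigma_M = 2
z = 1.96            # critical z marking the middle 95%
M_sample = 100
se = 2              # sigma_M, the standard error of the mean

M_lower = M_sample - z * se   # 100 - 1.96*2 = 96.08
M_upper = M_sample + z * se   # 100 + 1.96*2 = 103.92
print(M_lower, M_upper)
```

If the population mean claimed by the null hypothesis falls outside this interval, that agrees with rejecting the null at the .05 level.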

Increasing sample size… | will make us more likely to find a statistically significant effect |

Effect size | Can tell us whether a statistically significant difference might also be an important difference; it's the size of a difference that is unaffected by sample size. Tells us how much 2 populations do not overlap. The less overlap, the bigger the effect size |

Ways to increase effect size | Decrease overlap, means are further apart, and variation is smaller |

Cohen's d | Assesses the difference between means using the standard deviation instead of standard error. d = (M - μ)/σ |
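
A quick worked example of the d formula (the means and standard deviation are hypothetical):

```python
# Hypothetical: sample mean M = 105, population mean mu = 100, sigma = 10
M, mu, sigma = 105, 100, 10
d = (M - mu) / sigma   # (105 - 100)/10 = 0.5, a "medium" effect by Cohen's benchmarks
print(d)
```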

Cohen's effect sizes | Small= .2 Medium= .5 Large= .8 |

Statistical Power | The likelihood of rejecting the null hypothesis, given that the null is false. Probability that we won't make a type 2 error. |

Information needed for calculating power | Population mean, standard deviation, sample mean, sample size, standard error |

Mcrit= | z(σM) + MH0 |

ZH1= | (Mcrit - MH1)/σM… look up in z-table |
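
Putting the two formulas together, a rough power sketch in Python (all numbers hypothetical; `NormalDist` stands in for the z-table lookups):

```python
from statistics import NormalDist

# Hypothetical one-tailed setup: H0 mean 50, H1 mean 54, sigma_M = 2, alpha = .05
M_H0, M_H1, se = 50, 54, 2
z_crit = NormalDist().inv_cdf(0.95)    # one-tailed .05 cutoff, about 1.645
M_crit = z_crit * se + M_H0            # Mcrit = z(sigma_M) + MH0
z_H1 = (M_crit - M_H1) / se            # where Mcrit falls under the H1 distribution
power = 1 - NormalDist().cdf(z_H1)     # area beyond Mcrit under H1
print(round(power, 3))
```

With these made-up numbers power comes out below .8, so by the rule of thumb later in these notes the study would be underpowered.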

Ways to get more power | Increase alpha (don't do), one tailed test instead of two (don't do), increase sample size or decrease standard deviation, increase difference between the means |

If the treatment has a very small effect, then what is the likely outcome for a hypothesis test evaluating the treatment? | A type 2 error |

If given the population mean, standard error, and a d=.5, how can you find the sample mean? | Half a standard deviation from the population mean: μ ± σ/2 |

Corrected standard deviation formula | s = √(Σ(X - M)²/(N - 1)) |

Calculating estimated standard error for t-statistic | sM = s/√N |

A t-distribution is no longer… | A normal distribution. It is fatter and flatter. A larger sample size makes a more normal distribution |

t-statistic formula | t = (M - μM)/sM |
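
The whole single-sample chain (corrected s, then sM, then t) in one minimal sketch; the six scores and the population mean of 10 are made-up numbers:

```python
from math import sqrt

# Hypothetical sample of N = 6 scores, compared against a population mean of 10
scores = [8, 12, 11, 14, 9, 12]
mu = 10
N = len(scores)
M = sum(scores) / N                         # sample mean
ss = sum((x - M) ** 2 for x in scores)      # sum of squares
s = sqrt(ss / (N - 1))                      # corrected standard deviation
s_M = s / sqrt(N)                           # estimated standard error
t = (M - mu) / s_M                          # t-statistic
print(round(t, 3))
```

Compare the resulting t against the critical value for df = N - 1 = 5 in the t table.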

Critical value for t-statistic | Look up the df in the t table, in the .05 two-tailed test column |

Making a decision based off of t-statistic | If the t-score is past the critical region, reject the null. If not, fail to reject null (we want to reject null) |

p<.05 | Significant (rejected null) The probability is small that the relationship or difference happened by chance |

p>.05 | Insignificant (failed to reject null) |

Null hypothesis for a single sample t-test | μ = x |

If the mean from a sample falls in the tail of a sample distribution, we conclude… | that it likely didn't come from that population |

With a large enough sample size, even a very small difference in data… | can fall in the rejection, or critical, region (a number that falls in the confidence interval with a sample size of 6 might fall in the tail with a sample size of 49) |

Calculating power is useful | Before a study. Afterward, if you fail to reject the null, you already know you didn't have enough power; and if you reject the null, power doesn't matter because you're happy with your results. |

Wasting time if power is below | .8 (80%) |

Variance accounted for | Another measure of effect size; r² = t²/(t² + df). |

Variance accounted for effect sizes | .01=small .09=medium .25=large |
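
A quick check of the r² formula against those benchmarks (t and df are made-up numbers):

```python
# Hypothetical: t = 2.5 with df = 20
t, df = 2.5, 20
r_squared = t**2 / (t**2 + df)   # 6.25 / 26.25, which exceeds the .09 "medium" cutoff
print(round(r_squared, 3))
```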

Two samples drawn from the same population will probably have _____ t-statistics even if they're the same size and have the same mean | Different; each sample has a different standard deviation. |

if sM is larger, the ______ t will be | smaller |

Paired-samples t test | Measure performance on two different occasions and compare difference; within-groups design |

How to annotate null hypothesis for paired-samples t-test | μ1 = μ2 |

Calculating t in a paired-samples t-test | t = (Mdifference - μMdifference)/sMdifference (μMdifference is usually 0) |
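
A minimal paired-samples sketch in Python; the before/after scores for five participants are hypothetical:

```python
from math import sqrt

# Hypothetical before/after scores for 5 participants (within-groups design)
before = [10, 12, 9, 14, 11]
after_ = [12, 15, 9, 17, 13]
diffs = [a - b for a, b in zip(after_, before)]   # difference scores
N = len(diffs)
M_diff = sum(diffs) / N                           # mean of the difference scores
ss = sum((d - M_diff) ** 2 for d in diffs)
s = sqrt(ss / (N - 1))                            # corrected s of the differences
s_M = s / sqrt(N)                                 # standard error of the differences
t = (M_diff - 0) / s_M                            # null posits a mean difference of 0
print(round(t, 3))
```

Compare against the critical t for df = N - 1 = 4 to decide whether to reject the null.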

When the null hypothesis is true, critical values are _____ to occur | unlikely |

Ways to go further than hypothesis testing | 95% confidence interval, effect size, power calculation |

In a repeated-measures study, a small variance for the difference scores indicates that the treatment effect is ______. | consistent |

Paired-samples t-test are well suited to | research studies examining change over time |

Independent samples t-test | Used to compare two means in a between-groups design (each participant is only in one condition) |

Comparison distribution for paired-samples t-test | Distribution of mean difference scores |

Comparison distribution for independent-samples t-test | Distribution of differences between means |

New assumption in independent-samples t-test | Homogeneity of variance |

Homogeneity of variance | The two populations from which samples are selected must have equal variances; the assumption is considered met as long as the larger variance is no more than twice the smaller. Fmax = s²(largest)/s²(smallest) |

Independent samples: s²difference= | s²Mx + s²My |

Independent samples: standard error | sdifference = √(s²difference) |
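
The two independent-samples formulas in one short sketch (the squared standard errors for the two groups are made-up numbers):

```python
from math import sqrt

# Hypothetical squared standard errors of the mean for two independent groups
s2_Mx = 0.64   # group X
s2_My = 0.36   # group Y
s2_difference = s2_Mx + s2_My         # variance of the difference between means
s_difference = sqrt(s2_difference)    # standard error for the independent-samples t
print(s_difference)
```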

Three assumptions that underlie parametric tests | The distribution of the population of interest must be approximately normal, the participants are randomly selected, the dependent variable is scale |

Robust | A robust hypothesis test is one that produces fairly accurate results even when the data suggest that the population might not meet some of the assumptions. |

In Cohen's d, the farther apart the means of two distributions, the _____ the effect size, assuming the standard deviation is held constant. | higher |

A Cohen's d value of 0.2 indicates how much overlap between distributions? | 85% overlap |

In Cohen's d, the closer together the means of two distributions, the _____ the effect size, assuming the standard deviation is held constant. | lower |

Larger effect size= | Less overlap |

Smaller effect size= | More overlap |

Statistical power can be enhanced by decreasing the standard deviation in the sample. This can be accomplished by: | Sampling a more homogeneous group, working with a narrower, less variable distribution of scores, and using a more reliable measure. |

A study reports finding a “significant” difference between group means. What can be concluded about this report? | To fully assess the importance and impact of the results, we must also have information about the spread of the distributions |

Calculating t-statistic | t = (M - μM)/sM |

In order to conduct a single-sample t test, one needs to know: | The population mean and the properties of your sample |

The _____ indicates the distance of a sample mean from a population mean in terms of the standard error. | t-statistic |

Griffin is looking through a statistics text for a z table, but he can find only a t table in the index. What tip would best help him find the information he needs? | Use the sample size of infinity listed in the t table because it is equal to the z table. |

Critical t values _____ as the degrees of freedom _____. | decrease; increase |

The t statistic is more _____ than the z statistic because one is less likely to observe an extreme t statistic. | conservative |

A dot plot allows us to _____, while a stem-and-leaf plot does not. | include more than one sample |

An effect size of 0.53 was calculated on data after performing a hypothesis test with a single-sample t statistic in which the null hypothesis was rejected. What can be concluded about the results based on this information? | The sample data are significantly different from what was expected based on the population, with the sample mean 0.53 standard deviations greater than the population mean. |

Why is it that the mean difference of the comparison distribution is always 0? | The null hypothesis posits no difference |

A value of 0 within the interval of a confidence interval may indicate: | No difference |

In an independent-samples t-test, the larger variances increase the likelihood of finding ______ | Insignificance |

As sample size increases, the critical region boundaries for a two-tailed test with α=.05 will _____ | Move closer to zero |

Advantages of a within-groups design | Distribution of mean differences, smaller sample size is ok, reduced variability, more power |

When do you use the corrected standard deviation formula? | Before we conduct a single-sample t test, we must estimate the population standard deviation by using the sample standard deviation. By subtracting 1 from N, you will get a slightly larger and more accurate standard deviation value. |

Type 1 error | We reject the null hypothesis but the null hypothesis is actually correct |

Type 2 error | We fail to reject the null hypothesis but the null hypothesis is actually incorrect |

Statistical significance | When you have a large sample size and a very small difference, your test is statistically significant (this is why larger sample sizes are preferred) |

Meta-analysis | A study that involves the calculation of a mean effect size from the individual effect sizes of more than one study |

Forest plot | Shows the confidence interval for the effect size of every study |

File drawer analysis | A statistical calculation, following a meta-analysis, of the number of studies with null results that would have to exist so that a mean effect size would no longer be statistically significant. |

Dot plot | A graph that displays all the data points in a sample, with the range of scores along the x-axis and a dot for each data point above the appropriate value. |

Order effects | Refer to how a participant’s behavior changes when the dependent variable is presented for a second time, sometimes called practice effects |

Counterbalancing | Minimizes order effects by varying the order of presentation of different levels of the independent variable from one participant to the next |

Critical region for 95% confidence interval | 1.96 and -1.96 for z-tests, and critical values for df in t-tests |

When given data: subtract the mean from every X, square each deviation, and sum them to get the… | Sum of squares |

In a t-test, what do you do with the sum of squares? | SS/(N - 1) = variance; take its square root to get the standard deviation s. |

Mean of difference scores in paired-samples t-test | The mean of the difference scores is calculated by adding up the differences for each participant and dividing by the sample size |

95% confidence interval for paired-samples t-test | Mdifference ± t(sMdifference) |
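
A last worked sketch for this interval; the mean difference and standard error are hypothetical, and 2.776 is the two-tailed .05 critical t for df = 4:

```python
# Hypothetical: M_difference = 2.0, s_M_difference = 0.55, critical t = 2.776 (df = 4)
M_diff, s_M, t_crit = 2.0, 0.55, 2.776
lower = M_diff - t_crit * s_M
upper = M_diff + t_crit * s_M
print(round(lower, 4), round(upper, 4))
```

If 0 falls inside this interval, the result is consistent with failing to reject the null of no difference.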