Thank you David, that makes sense now. One more question: what exactly does the "|D|" notation represent?

Hi @dtammerz Those are solutions to the maximum difference (hopefully the "⇒" is not throwing you off; "⇒" signifies "implies").
The denominator (i.e., the square root) is the standard deviation (aka, standard error) of the difference between correlated means and is equal to sqrt[4 + 1 - 2*0.3*sqrt(4)*sqrt(1)] = 0.398 (shown in exhibit).
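If it helps to see that denominator as code, here is a minimal Python sketch of the same formula (the inputs below are purely hypothetical, not the exhibit's numbers):

```python
from math import sqrt

# Standard deviation (standard error) of the difference between two correlated quantities:
# sqrt[ var_x + var_y - 2*rho*sd_x*sd_y ], where var_x and var_y are the two variances
# and rho is their correlation.
def se_of_difference(var_x: float, var_y: float, rho: float) -> float:
    return sqrt(var_x + var_y - 2.0 * rho * sqrt(var_x) * sqrt(var_y))

# Hypothetical inputs just to illustrate the calculation (not the exhibit's):
print(se_of_difference(0.20, 0.05, 0.30))  # -> 0.436
```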
So we really just have T = |D|/σ(diff); i.e., the test statistic T is the raw difference standardized by dividing by σ(diff), just as we standardize the raw difference between the observed sample mean and the null-hypothesized mean, X̄ - μ, by dividing it by the standard error, (X̄ - μ)/SE, to retrieve the test statistic for a (univariate) sample mean.
Given T = |D|/σ(diff), the max distance |D| = T*σ(diff); in this case, |D| = T*0.398. If we seek two-sided 95.0% confidence, then |D| = 1.96*0.398 = 0.78, and if we seek two-sided 99.0% confidence, then |D| = 2.58*0.398 = 1.02. I hope that's helpful!
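P.S. Here is the same inversion as a quick Python sketch (just a sketch; σ(diff) = 0.398 is taken from the exhibit):

```python
from scipy import stats

se_diff = 0.398  # sigma(diff), per the exhibit

# Maximum difference |D| = T * sigma(diff), where T is the two-sided critical normal deviate
for conf in (0.95, 0.99):
    t_crit = stats.norm.ppf(1.0 - (1.0 - conf) / 2.0)  # 1.96 and 2.58
    print(f"{conf:.0%} two-sided: T = {t_crit:.2f}, |D| = {t_crit * se_diff:.3f}")
# 95% two-sided: T = 1.96, |D| = 0.780
# 99% two-sided: T = 2.58, |D| = 1.025
```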
Thank you!!

Sure @dtammerz The vertical bars represent absolute value (https://en.wikipedia.org/wiki/Absolute_value); as in, for example, the absolute value of -2.33 is |-2.33| = 2.33. As mentioned, the numerator "D" is the raw distance between the sample means, but it doesn't matter whether the difference is positive or negative: the example shows a difference of µ(X) - µ(Y) = 0.75, but it wouldn't (and shouldn't) matter if we calculated the raw difference as µ(Y) - µ(X) = -0.75. Thanks,
Thank you.

Hi @NStha8467
1) The chi-squared distribution is included for completeness: before GARP "simplified" their econometrics chapters, the four sampling distributions (normal, student's t, chi-squared, and F-distribution) were foundational. The normal/student's t because they test a sample mean, and the chi-squared because it tests the sample variance. True, the formula itself likely has low testability, but it's also not too difficult:
the test statistic being χ^2(df = n - 1) = (S^2/σ^2)*df, where S^2 is the sample variance.
... so, to me, it's very natural to include the chi-squared test for the sample variance along with the normal/student's t for the sample mean. Although for both the chi-squared and F tests, the quantitative testability is currently low, such that you are probably fine with conceptual recognition.
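If a mechanical illustration helps, here is a minimal Python sketch of that chi-squared variance test (the numbers are purely hypothetical):

```python
from scipy import stats

# Hypothetical inputs: sample size, sample variance S^2, and null-hypothesized variance sigma^2
n, sample_var, null_var = 30, 4.5, 4.0
df = n - 1

# Test statistic: chi^2(df = n - 1) = (S^2 / sigma^2) * df
chi2_stat = (sample_var / null_var) * df  # 1.125 * 29 = 32.625

# Two-sided p-value: double the smaller tail of the chi-squared distribution
tail = stats.chi2.cdf(chi2_stat, df)
p_value = 2.0 * min(tail, 1.0 - tail)
print(f"chi^2 = {chi2_stat:.3f}, df = {df}, two-sided p = {p_value:.3f}")
```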
2) I can't find the example(s) to which you refer? Although the test statistic does not vary with the one- versus two-sided hypothesis, the critical value (which informs the one- or two-sided critical region) should be different for a one- versus two-sided confidence interval. I don't recall hearing about this specific problem with the Miller questions, which tend to be fairly reliable. Nevertheless, for a sample mean test, the one-sided test definitely implies a smaller critical value, so I agree with your general point here. I hope that's helpful,
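P.S. To make the last point concrete, at 5% significance the one-sided normal critical value is about 1.645 versus about 1.96 for the two-sided test; a tiny sketch:

```python
from scipy import stats

alpha = 0.05
one_sided_cv = stats.norm.ppf(1.0 - alpha)        # 1.645
two_sided_cv = stats.norm.ppf(1.0 - alpha / 2.0)  # 1.960
print(f"one-sided: {one_sided_cv:.3f}, two-sided: {two_sided_cv:.3f}")
```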