Reliability of test scores is one of the most pervasive psychometric concepts in measurement. Reliability coefficients based on a unifactor model for continuous indicators include maximal reliability and the unweighted sum-score-based reliability coefficient, among many others. With the increasing popularity of item response theory, a parallel reliability measure has been introduced using the test information function. This article studies the relationships among these three reliability coefficients. Exploiting the equivalence between item factor analysis and the normal ogive model, the sum-score-based coefficient for dichotomous data is shown to be always smaller than maximal reliability. Additional results imply that the information-based reliability is typically greater than the dichotomous sum-score-based coefficient under practical conditions, though mathematically neither dominates the other. Further results indicate that, as the number of response categories increases, the sum-score-based coefficient can surpass the information-based reliability. The reasons why both the sum-score-based and information-based coefficients fall short of maximal reliability are also explored from an information gain/loss perspective. Implications of the findings for scale development and analysis are discussed.
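To make the comparison concrete, the following sketch computes the two factor-analytic coefficients for a hypothetical unifactor model with standardized indicators. The loadings are illustrative assumptions, not values from the article; the formulas are the standard ones for the unweighted sum-score reliability (coefficient omega) and the maximal reliability of the optimally weighted composite.

```python
import numpy as np

# Hypothetical standardized loadings for a five-item unifactor model
lam = np.array([0.8, 0.7, 0.6, 0.5, 0.4])
psi = 1.0 - lam**2  # unique variances under standardized indicators

# Unweighted sum-score reliability (coefficient omega):
# (sum of loadings)^2 / [(sum of loadings)^2 + sum of unique variances]
omega = lam.sum()**2 / (lam.sum()**2 + psi.sum())

# Maximal reliability of the optimally weighted composite:
# s / (1 + s), where s = sum(lambda_i^2 / psi_i)
s = np.sum(lam**2 / psi)
rho_max = s / (1.0 + s)

print(f"omega = {omega:.4f}, maximal reliability = {rho_max:.4f}")
```

Because the optimal weights can never do worse than equal weights, `omega` is bounded above by `rho_max` for any set of loadings, which mirrors the dominance relation the article establishes for the dichotomous case.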