This study examined two methods for detecting differential item functioning (DIF): Raju, van der Linden, and Fleer's (1995) differential functioning of items and tests (DFIT) procedure and Thissen, Steinberg, and Wainer's (1988) likelihood ratio test (LRT). The major research questions were which procedure provides the better balance of Type I error control and power, and whether the procedures differ in their ability to detect different types of DIF. Monte Carlo simulations were conducted to address these questions. Equal and unequal sample size conditions were fully crossed with test lengths of 10 and 20 items, and the α and β item parameters were manipulated to simulate DIF. Findings indicate that both DFIT and the LRT maintained acceptable Type I error rates when sample sizes were equal, but DFIT produced inflated Type I error rates when sample sizes were unequal. Overall, the LRT exhibited greater power than DFIT to detect DIF in both the α and β parameters. However, DFIT was more powerful than the LRT when DIF occurred in the last two β parameters rather than in the extreme β parameters.
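To make the simulation design concrete, the sketch below illustrates one way the crossed conditions could be generated. It is not the authors' code: it assumes a graded response model (consistent with items having several ordered β threshold parameters), and the sample sizes, number of thresholds, and DIF magnitudes shown are hypothetical placeholders, since the abstract does not report the actual values.

```python
import numpy as np

rng = np.random.default_rng(0)

def grm_response(theta, a, b):
    """Sample one graded-response-model item response for ability theta.

    a: discrimination (alpha); b: ordered threshold (beta) parameters.
    """
    # P(X >= k) for each threshold, then category probabilities.
    p_star = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    p_cat = np.concatenate(([1.0], p_star)) - np.concatenate((p_star, [0.0]))
    return rng.choice(len(p_cat), p=p_cat)

def simulate_group(n_persons, a, b):
    """Simulate an n_persons x n_items response matrix for one group."""
    thetas = rng.standard_normal(n_persons)
    n_items = a.shape[0]
    data = np.empty((n_persons, n_items), dtype=int)
    for i, theta in enumerate(thetas):
        for j in range(n_items):
            data[i, j] = grm_response(theta, a[j], b[j])
    return data

# Hypothetical condition values; the actual values are not given in the abstract.
n_items = 10                      # test length condition: 10 or 20 items
n_ref, n_focal = 500, 500         # equal sample sizes; unequal would differ, e.g., 1000 vs. 250
a_ref = rng.uniform(1.0, 2.0, size=n_items)                        # alpha parameters
b_ref = np.sort(rng.normal(0.0, 1.0, size=(n_items, 4)), axis=1)   # ordered betas per item

# Introduce DIF in one studied item for the focal group (illustrative magnitudes).
a_focal, b_focal = a_ref.copy(), b_ref.copy()
a_focal[0] -= 0.5                 # alpha DIF: lower discrimination for the focal group
b_focal[0, 2:] += 0.5             # beta DIF confined to the last two thresholds

ref_data = simulate_group(n_ref, a_ref, b_ref)
focal_data = simulate_group(n_focal, a_focal, b_focal)
print(ref_data.shape, focal_data.shape)
```

In a full replication of the design, the generated reference and focal data sets would then be analyzed with both DFIT and the LRT across every crossed condition, and flagging rates would be tallied over replications to estimate Type I error and power.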