Abstract
The authors compared the effectiveness of several methods for statistically adjusting college grade point average (GPA) criteria for course and departmental differences in grading standards, using first-semester grades from an entire entering class at a large state university. Most of the adjusted GPAs produced by these methods functioned similarly and, despite correlating highly with unadjusted GPA, had greater internal-consistency reliability than the unadjusted GPA criterion. These methods are potentially useful in addressing the lack of comparability of other criterion measures in applied settings that rely on subjective evaluations by different raters or on objective indexes of behavior in different organizational units.
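As a rough illustration of the kind of adjustment the abstract describes, one simple scheme expresses each course grade as a deviation from that course's mean grade and averages the deviations per student, so that students in leniently graded courses receive no automatic advantage. This is a minimal sketch under assumed toy data; the student IDs, course names, grades, and the specific deviation-from-course-mean adjustment are illustrative assumptions, not the authors' actual methods.

```python
from collections import defaultdict

# Hypothetical (student, course, grade) records for a toy entering class.
records = [
    ("s1", "MATH", 3.0), ("s2", "MATH", 2.0), ("s3", "MATH", 2.5),
    ("s1", "ART",  4.0), ("s2", "ART",  3.5), ("s3", "ART",  3.8),
]

def adjusted_gpas(records):
    # Mean grade per course, a crude index of that course's grading leniency.
    course_grades = defaultdict(list)
    for _, course, grade in records:
        course_grades[course].append(grade)
    course_mean = {c: sum(g) / len(g) for c, g in course_grades.items()}

    # Each student's adjusted GPA: the average of their grade deviations
    # from the mean grade of each course they took.
    devs = defaultdict(list)
    for student, course, grade in records:
        devs[student].append(grade - course_mean[course])
    return {s: sum(d) / len(d) for d_s, d in ((k, v) for k, v in devs.items()) for s in [d_s]}

print(adjusted_gpas(records))
```

Because the adjusted score is centered within each course, a B in a harshly graded course can outrank an A-minus in a leniently graded one; more elaborate methods in the literature estimate course and student effects jointly rather than using raw course means.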