Measuring nominal scale agreement among many raters

Author(s)
Fleiss, J.L.
Year

The statistic kappa was introduced to measure nominal scale agreement between a fixed pair of raters. In this paper kappa is generalized to the case where each of a sample of subjects is rated on a nominal scale by the same number of raters, but where the raters rating one subject are not necessarily the same as those rating another. Large sample standard errors are derived, and a numerical example is given.
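The generalized statistic described in the abstract can be sketched in code. The following is a minimal illustration, assuming the standard formulation of Fleiss' kappa: each of N subjects is rated by the same number n of raters into one of k nominal categories, and the input is an N x k table of category counts per subject. The function name `fleiss_kappa` is a hypothetical helper, not from the source.

```python
def fleiss_kappa(counts):
    """Sketch of Fleiss' generalized kappa for m raters per subject.

    `counts` is an N x k table where counts[i][j] is the number of
    raters who assigned subject i to category j (assumption: every
    subject receives the same total number of ratings).
    """
    N = len(counts)        # number of subjects
    k = len(counts[0])     # number of nominal categories
    n = sum(counts[0])     # ratings per subject (assumed constant)

    # Marginal proportion of all assignments falling in category j.
    p = [sum(row[j] for row in counts) / (N * n) for j in range(k)]

    # Observed agreement on subject i: proportion of agreeing rater pairs.
    P = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts]

    P_bar = sum(P) / N                 # mean observed agreement
    P_e = sum(pj * pj for pj in p)     # agreement expected by chance
    return (P_bar - P_e) / (1 - P_e)
```

With perfect agreement on every subject the statistic equals 1; systematic disagreement drives it below 0, mirroring the behavior of the original pairwise kappa.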

Pages
378-382
Published in
Psychological Bulletin
76 (5)
Library number
20220128 ST [electronic version only]

Our collection

This publication is one of the non-SWOV publications that we hold in our collection alongside the SWOV publications.