09 Sep Agreement Categorical Data
If you have only two categories, then Scott's Pi statistic (with a confidence interval constructed according to the Donner-Eliasziw (1992) method) is a more reliable measure of inter-rater agreement than Kappa (Zwick, 1988). These measures are summarized in Table 2 and explained below. They report both the quantity and the allocation of disagreements, information that Kappa hides. In addition, Kappa poses challenges in calculation and interpretation because it is a ratio: it can return an undefined value when the denominator is zero, and the ratio alone reveals neither its numerator nor its denominator. It is more informative for researchers to report disagreement in two components, quantity and allocation…
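As a minimal sketch of these ideas, the Python snippet below computes Scott's Pi from a two-rater cross-tabulation and splits total disagreement into the quantity and allocation components mentioned above (the Donner-Eliasziw confidence interval is omitted). The 2x2 table of counts is purely hypothetical, chosen only to illustrate the calculation.

```python
import numpy as np

def scott_pi(confusion):
    """Scott's Pi for two raters from a square confusion matrix of counts."""
    p = confusion / confusion.sum()                  # joint proportions
    po = np.trace(p)                                 # observed agreement
    pooled = (p.sum(axis=0) + p.sum(axis=1)) / 2     # pooled marginal proportions
    pe = np.sum(pooled ** 2)                         # chance agreement under Scott's model
    return (po - pe) / (1 - pe)

def quantity_allocation(confusion):
    """Split total disagreement into quantity and allocation components."""
    p = confusion / confusion.sum()
    total = 1 - np.trace(p)                          # overall disagreement
    # quantity disagreement: mismatch between the two raters' marginal proportions
    quantity = np.abs(p.sum(axis=1) - p.sum(axis=0)).sum() / 2
    allocation = total - quantity                    # remainder is allocation disagreement
    return quantity, allocation

# Hypothetical 2x2 cross-tabulation (rows: rater 1, columns: rater 2)
counts = np.array([[40, 5],
                   [10, 45]])
print("Scott's Pi:", scott_pi(counts))
print("quantity, allocation:", quantity_allocation(counts))
```

Reporting the quantity and allocation components alongside the agreement coefficient shows where the raters differ (in how much of each category they assign versus where they assign it), rather than collapsing everything into a single ratio.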