When Assessing Reliability, What Is the Advantage of the Kappa Statistic Over Percent Agreement?

A major flaw of percent agreement as a measure of inter-rater reliability is that it does not account for agreement that would occur by chance, and so it overestimates the true degree of agreement. For example, suppose you are analyzing data from a group of 50 people applying for a grant. Each grant application was read by two readers, and each reader said "yes" or "no" to the proposal. The ratings can be tallied in a 2×2 matrix, where A and B are the readers, the cells on the main diagonal (a and d) count the agreements (both said "yes", or both said "no"), and the off-diagonal cells (b and c) count the disagreements.
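The contrast between the two measures can be sketched in a few lines. The cell counts below are hypothetical, chosen only to illustrate the calculation; kappa corrects the observed (percent) agreement by subtracting the agreement expected by chance, estimated from each reader's marginal "yes"/"no" proportions:

```python
# Cohen's kappa vs. percent agreement for two readers rating 50
# grant applications. Counts are hypothetical, for illustration only:
#   a = both said "yes", d = both said "no",
#   b = A "yes" / B "no", c = A "no" / B "yes".
a, b, c, d = 20, 5, 10, 15
n = a + b + c + d  # 50 applications in total

# Percent (observed) agreement: the proportion on the main diagonal.
p_o = (a + d) / n

# Chance agreement expected from the readers' marginal proportions.
p_yes_A = (a + b) / n   # proportion of "yes" from reader A
p_yes_B = (a + c) / n   # proportion of "yes" from reader B
p_e = p_yes_A * p_yes_B + (1 - p_yes_A) * (1 - p_yes_B)

# Cohen's kappa: observed agreement corrected for chance agreement.
kappa = (p_o - p_e) / (1 - p_e)

print(f"percent agreement  = {p_o:.2f}")   # 0.70
print(f"expected by chance = {p_e:.2f}")   # 0.50
print(f"kappa              = {kappa:.2f}") # 0.40
```

With these counts the readers agree on 70% of the applications, but half of that agreement is expected by chance alone, so kappa reports a much more modest 0.40.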