where po is the relative observed agreement among raters (identical to accuracy), and pe is the hypothetical probability of chance agreement, with the observed data used to calculate the probability of each observer randomly selecting each category. If the raters are in complete agreement, then κ = 1. If there is no agreement among the raters other than what would be expected by chance (as given by pe), then κ = 0. The statistic can be negative, which implies that there is no effective agreement between the two raters or that the agreement is worse than chance. Kappa is an index that considers observed agreement with respect to a baseline agreement. However, investigators must consider carefully whether kappa's baseline agreement is relevant for the particular research question. Kappa's baseline is frequently described as the agreement due to chance, which is only partially correct. Kappa's baseline agreement is the agreement that would be expected due to random allocation, given the quantities specified by the marginal totals of the square contingency table. Thus, κ = 0 when the observed allocation is apparently random, regardless of the quantity disagreement as constrained by the marginal totals. However, for many applications, investigators should be more interested in the quantity disagreement in the marginal totals than in the allocation disagreement described by the additional information on the diagonal of the square contingency table. For many applications, therefore, kappa's baseline is more distracting than enlightening. Kappa reaches its theoretical maximum value of 1 only when both observers distribute codes identically, that is, when the corresponding marginal totals are equal.
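The quantities defined above, po (observed agreement), pe (chance agreement from each rater's marginal frequencies), and κ itself, can be sketched in code. This is a minimal illustration; the function name `cohen_kappa` is an assumed helper, not part of any particular library:

```python
from collections import Counter

def cohen_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters coding the same items."""
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    # po: relative observed agreement (identical to accuracy)
    po = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # pe: hypothetical probability of chance agreement, computed from
    # each rater's observed marginal frequencies per category
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    pe = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (po - pe) / (1 - pe)

# Complete agreement yields kappa = 1:
cohen_kappa([1, 2, 1], [1, 2, 1])  # → 1.0
```

Note that when the two raters agree only as often as their marginals would predict, po equals pe and κ comes out 0, matching the baseline behaviour described above.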
Anything less indicates less than perfect agreement. Nevertheless, the maximum value kappa could achieve given the observers' unequal marginal distributions helps interpret the value of kappa actually obtained. The equation for the maximum is κ_max = (p_max − pe) / (1 − pe), where p_max = Σ_i min(p_i+, p+i) is the largest observed agreement attainable under the given marginal proportions. Some researchers have expressed concern over kappa's tendency to take the observed categories' frequencies as givens, which can make it unreliable for measuring agreement in situations such as the diagnosis of rare diseases. In these situations, kappa tends to underestimate the agreement on the rare category. For this reason, kappa is considered an overly conservative measure of agreement. Others[citation needed] contest the assertion that kappa "takes into account" chance agreement. Doing this effectively would require an explicit model of how chance affects raters' decisions. The so-called chance adjustment of kappa statistics supposes that, when not completely certain, raters simply guess, a very unrealistic scenario. Another factor is the number of codes: as the number of codes increases, kappas become higher.
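The maximum-kappa calculation can be sketched directly from the two raters' marginal proportions. This is an illustrative helper (the name `kappa_max` and the argument layout are assumptions); it takes each rater's per-category proportions in the same category order:

```python
def kappa_max(marg_a, marg_b):
    """Maximum kappa attainable given two raters' marginal proportions.

    p_max = sum_i min(p_a[i], p_b[i]) is the largest possible observed
    agreement under these marginals; pe is the usual chance agreement.
    """
    p_max = sum(min(a, b) for a, b in zip(marg_a, marg_b))
    pe = sum(a * b for a, b in zip(marg_a, marg_b))
    return (p_max - pe) / (1 - pe)

# Identical marginals allow perfect agreement:
kappa_max([0.5, 0.5], [0.5, 0.5])  # → 1.0
```

With unequal marginals such as [0.6, 0.4] against [0.4, 0.6], p_max drops to 0.8 and κ_max falls to about 0.62, which is why an obtained kappa should be judged against this ceiling rather than against 1.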
Based on a simulation study, Bakeman and colleagues concluded that for fallible observers, values of kappa were lower when there were fewer codes. And, in agreement with Sim and Wright's statement concerning prevalence, kappas were higher when the codes were roughly equiprobable. Thus Bakeman et al. concluded that no one value of kappa can be regarded as universally acceptable (p. 357). They also provide a computer program that lets users compute values for kappa given the number of codes, their probability, and observer accuracy.
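A simulation in this spirit is easy to sketch. The setup below is an illustrative assumption, not Bakeman et al.'s actual program: each of two fallible observers reports an item's true code with probability `accuracy`, otherwise a uniformly random wrong code, and kappa is computed on the resulting pair of code sequences.

```python
import random
from collections import Counter

def simulate_kappa(n_codes, accuracy, n_items=10_000, seed=0):
    """Kappa between two simulated fallible observers.

    Each observer independently reports the true (equiprobable) code
    with probability `accuracy`, else a uniform random other code.
    """
    rng = random.Random(seed)
    codes = list(range(n_codes))

    def observe(true):
        if rng.random() < accuracy:
            return true
        return rng.choice([c for c in codes if c != true])

    truth = [rng.randrange(n_codes) for _ in range(n_items)]
    a = [observe(t) for t in truth]
    b = [observe(t) for t in truth]
    po = sum(x == y for x, y in zip(a, b)) / n_items
    fa, fb = Counter(a), Counter(b)
    pe = sum(fa[c] * fb.get(c, 0) for c in fa) / (n_items * n_items)
    return (po - pe) / (1 - pe)
```

Under this model, observers with 90% accuracy produce a lower kappa with 2 codes than with 5, illustrating the dependence on the number of codes described above.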