Interrater reliability

Consistency in the scoring of research data by two or more data analysts. Many health care investigators analyze data that are graduated rather than binary. In an analysis of anxiety, for example, a graduated scale may rate research subjects as “very anxious,” “somewhat anxious,” “mildly anxious,” or “not at all anxious,” whereas a binary method of rating anxiety might include just the two categories “anxious” and “not anxious.” If the study is carried out and coded by more than one psychologist, the coders may not agree in their application of the graduated scale; one may interview a patient and find him or her “somewhat” anxious, while another might assess the same patient as “very anxious.” The congruence in the application of the rating scale by more than one psychologist constitutes its interrater reliability.


The extent to which two independent parties, each using the same tool or examining the same data, arrive at matching conclusions. It is a measure of the agreement, consensus, or consistency of independent parties in using a common rating scale or instrument.
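Interrater reliability is commonly quantified with a chance-corrected agreement statistic such as Cohen’s kappa. The sketch below is a minimal illustration, not drawn from the entry itself; the two raters’ scores are hypothetical and reuse the graduated anxiety categories from the example above.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same subjects."""
    n = len(rater_a)
    # Observed agreement: fraction of subjects both raters scored identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal category frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings of ten patients by two psychologists on the
# graduated anxiety scale described above.
rater_a = ["very", "somewhat", "mildly", "somewhat", "not at all",
           "very", "mildly", "somewhat", "not at all", "very"]
rater_b = ["very", "mildly", "mildly", "somewhat", "not at all",
           "somewhat", "mildly", "somewhat", "not at all", "very"]

print(f"Cohen's kappa: {cohens_kappa(rater_a, rater_b):.2f}")  # 0.73
```

Kappa values near 1 indicate strong agreement beyond chance, while values near 0 indicate agreement no better than chance. Because a scale like this one is ordinal, a weighted kappa, which gives partial credit to near-misses such as “somewhat” versus “very,” is often preferred in practice.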
