An examination of interrater reliability for scoring the Rorschach Comprehensive System in eight data sets

Gregory J. Meyer, Mark J. Hilsenroth, Dirk Baxter, John E. Exner, James Chris Fowler, Craig C. Piers, Justin Resnick

Research output: Contribution to journal › Article › peer-review

131 Scopus citations

Abstract

In this article, we describe interrater reliability for the Comprehensive System (CS; Exner, 1993) in 8 relatively large samples, including (a) students, (b) experienced researchers, (c) clinicians, (d) clinicians and then researchers, (e) a composite clinical sample (i.e., a to d), and 3 samples in which randomly generated erroneous scores were substituted for (f) 10%, (g) 20%, or (h) 30% of the original responses. Across samples, 133 to 143 statistically stable CS scores had excellent reliability, with median intraclass correlations of .85, .96, .97, .95, .93, .95, .89, and .82, respectively. We also demonstrate that the reliability findings from this study closely match results derived from a synthesis of prior research, that CS summary scores are more reliable than scores assigned to individual responses, that small samples are more likely to generate unstable and lower reliability estimates, and that Meyer's (1997a) procedures for estimating response segment reliability were accurate. The CS can be scored reliably, but because scoring is the result of coder skill, clinicians must conscientiously monitor their accuracy.
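The reliability statistic reported throughout the abstract is the intraclass correlation. The abstract does not state which ICC model the authors used, so the sketch below is only an illustration: a generic two-way random-effects, single-rater ICC(2,1) in the Shrout and Fleiss (1979) formulation, applied to a hypothetical two-coder data matrix. The function name, the example scores, and the choice of ICC model are assumptions for illustration, not the authors' analysis code or data.

```python
import numpy as np


def icc_2_1(ratings: np.ndarray) -> float:
    """Two-way random-effects, single-rater ICC(2,1) (Shrout & Fleiss, 1979).

    `ratings` is an (n_targets, k_raters) matrix, e.g. the same CS summary
    score coded for n protocols by k independent coders.
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-protocol means
    col_means = ratings.mean(axis=0)   # per-coder means

    # Two-way ANOVA sums of squares
    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_total = np.sum((ratings - grand) ** 2)
    ss_error = ss_total - ss_rows - ss_cols

    # Corresponding mean squares
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )


if __name__ == "__main__":
    # Hypothetical example: two coders scoring five protocols
    # (values are illustrative, not taken from the article).
    scores = np.array([
        [4, 5],
        [2, 2],
        [7, 6],
        [3, 3],
        [5, 5],
    ])
    print(f"ICC(2,1) = {icc_2_1(scores):.2f}")  # ~0.94 for this toy matrix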

Original language: English (US)
Pages (from-to): 219-274
Number of pages: 56
Journal: Journal of Personality Assessment
Volume: 78
Issue number: 2
DOIs
State: Published - Jan 1 2002

ASJC Scopus subject areas

  • Clinical Psychology
  • Psychiatry and Mental Health
  • Health, Toxicology and Mutagenesis
