
Accordo assoluto tra valutazioni espresse su scala ordinale (Absolute Agreement among Ratings Expressed on an Ordinal Scale)

Giuseppe Bove, Daniela Marella

Abstract


Many methods for measuring agreement among raters have been proposed and applied in domains such as education, psychology, sociology, and medical research. A brief overview of the most widely used measures of interrater absolute agreement for ordinal rating scales is provided, and a new index with several advantages is proposed. In particular, the new index makes it possible to evaluate the agreement between raters for each single case (subject or object), and also to obtain a global measure of interrater agreement for the whole group of cases evaluated. The availability of agreement evaluations for each single case is particularly useful, for example, when a rating scale is being tested and any changes to it need to be identified, or when the raters must be asked for a specific comparison on the single cases where disagreement occurred. The index is not affected by a possible concentration of the ratings on a very small number of levels of the ordinal scale.
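
To make the per-case and global readings of such an index concrete, the sketch below computes, for each rated case, one minus the normalized dispersion of the ratings it received, and then averages these values over all cases. It is only a hedged illustration built on Leti's index of dispersion for ordinal variables; the exact index proposed in the article may differ, and the function names per_case_agreement and overall_agreement are hypothetical.

import numpy as np

def per_case_agreement(ratings, n_categories):
    # ratings: integer ratings in 1..n_categories, one per rater, for a single case.
    # Returns a value in [0, 1]: 1 = all raters chose the same level,
    # close to 0 = ratings split between the two extreme levels (maximal dispersion).
    ratings = np.asarray(ratings)
    m = ratings.size
    # relative frequency of each level, then cumulative frequencies F_1, ..., F_{K-1}
    freq = np.array([(ratings == k).sum() for k in range(1, n_categories + 1)]) / m
    F = np.cumsum(freq)[:-1]
    # Leti's dispersion index for ordinal data: D = 2 * sum_k F_k * (1 - F_k)
    dispersion = 2.0 * np.sum(F * (1.0 - F))
    max_dispersion = (n_categories - 1) / 2.0  # reached when ratings split between the extremes
    return 1.0 - dispersion / max_dispersion

def overall_agreement(rating_matrix, n_categories):
    # Global measure: average of the per-case values over all cases (rows).
    return float(np.mean([per_case_agreement(row, n_categories)
                          for row in rating_matrix]))

# Example: 4 cases rated by 5 raters on a 5-level ordinal scale
X = [[3, 3, 3, 3, 3],   # perfect agreement   -> 1.00
     [1, 1, 5, 5, 5],   # strong polarization -> 0.04
     [2, 3, 3, 4, 3],
     [1, 2, 1, 2, 2]]
print([round(per_case_agreement(case, 5), 2) for case in X])
print(round(overall_agreement(X, 5), 2))

Because a dispersion-based value of this kind is not chance-corrected, it does not depend on the marginal distribution of the ratings, so a concentration of ratings on a few scale levels does not, by itself, depress it; this mirrors the property claimed in the abstract, though the published index should be consulted for the exact formulation.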


Keywords


Educational assessment; Interrater agreement; Kappa index; Ordinal rating scales; Statistical dispersion.






DOI: https://doi.org/10.7358/ecps-2021-023-boma

Copyright (©) 2021 Giuseppe Bove, Daniela Marella – Editorial format and Graphical layout: copyright (©) LED Edizioni Universitarie

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

 


Journal of Educational, Cultural and Psychological Studies (ECPS)
Registered by Tribunale di Milano (19/05/2010 n. 278)
Online ISSN 2037-7924 - Print ISSN 2037-7932

Research Laboratory on Didactics and Evaluation - Department of Education - "Roma Tre" University


Executive Editor: Gaetano Domenici - Associate Executive Editor & Managing Editor: Valeria Biasci
Editorial Board: Eleftheria Argyropoulou - Massimo Baldacci - Joao Barroso - Richard Bates - Christofer Bezzina - Paolo Bonaiuto - Lucia Boncori - Pietro Boscolo - Sara Bubb - Carlo Felice Casula - Jean-Émile Charlier - Lucia Chiappetta Cajola - Carmela Covato - Jean-Louis Derouet - Peter Early - Franco Frabboni - Constance Katz - James Levin - Pietro Lucisano - Roberto Maragliano - Romuald Normand - Michael Osborne - Donatella Palomba - Michele Pellerey - Clotilde Pontecorvo - Vitaly V. Rubtzov - Jaap Scheerens - Noah W. Sobe - Francesco Susi - Giuseppe Spadafora - Pat Thomson
Editorial Staff: Fabio Alivernini - Guido Benvenuto - Anna Maria Ciraci - Massimiliano Fiorucci - Luca Mallia - Massimo Margottini - Giovanni Moretti - Carla Roverselli
Editorial Secretary: Nazarena Patrizi



