
Gli indici di effect size nella ricerca educativa. Analisi comparativa e significatività pratica [Effect size indexes in educational research: Comparative analysis and practical significance]

Marta Pellegrini, Giuliano Vivanet, Roberto Trinchero

Abstract


Effect sizes are statistical indexes used to quantify the difference between two groups, typically adopted in educational research to measure the efficacy of an intervention. Their use in research reports is recommended by the most important international research associations in the fields of psychology and education, such as the American Psychological Association (APA) and the American Educational Research Association (AERA). In this work, after a brief description of the effect size indexes most widely used in educational research, the authors provide practical indications about their use and interpretation. To this end, a comparative analysis of Glass’ Δ, Cohen’s d and Hedges’ g has been carried out, in order to observe their «behavior» under different study design conditions and to establish which index is preferable in each of them.
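
For reference, the standard definitions of the three indexes compared in the article, as given in the effect size literature cited below (Glass, McGaw, & Smith, 1981; Cohen, 1988; Hedges, 1981) rather than reproduced from the full text, are:

\[
\Delta = \frac{\bar{X}_E - \bar{X}_C}{s_C},
\qquad
d = \frac{\bar{X}_E - \bar{X}_C}{s_p},
\quad
s_p = \sqrt{\frac{(n_E - 1)\,s_E^2 + (n_C - 1)\,s_C^2}{n_E + n_C - 2}},
\qquad
g \approx d \left( 1 - \frac{3}{4(n_E + n_C) - 9} \right),
\]

where \(\bar{X}_E\) and \(\bar{X}_C\) are the means of the experimental and control groups, \(s_E\) and \(s_C\) their standard deviations, \(s_p\) the pooled standard deviation, and \(n_E\) and \(n_C\) the respective sample sizes.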


Keywords


Educational efficacy evaluation; Effect size; Evidence informed education; Experimental studies; Practical significance.

Full Text:

PDF

References


AERA. American Educational Research Association (2006). Standards for reporting on empirical social science research in AERA publications. https://www.aera.net/Publications/Standards-for-Research-Conduct (ver. 20.08.2018).

APA. American Psychological Association. (2001). Publication manual of the American Psychological Association (5th ed). Washington, DC: Author.

Bakeman, R. (2001). Results need nurturing: Guidelines for authors. Infancy, 2(1), 1-5.

Bickman, L., & Rog, D. J. (Eds.). (2008). The SAGE handbook of applied social research methods. Sage Publications.

Bloom, H. S., Hill, C. J., Black, A. R., & Lipsey, M. W. (2008). Performance trajectories and performance gaps as achievement effect-size benchmarks for educational interventions. Journal of Research on Educational Effectiveness, 1(4), 289-328.

Borenstein, M., Hedges, L., Higgins, J., & Rothstein, H. (2009). An Introduction to Meta-Analysis. Chichester, UK: John Wiley & Sons, Ltd.

Bottani, N. (2009). Il difficile rapporto fra politica e ricerca scientifica sui sistemi scolastici. Working Paper, 17, Fondazione Giovanni Agnelli, Torino.

Campion, M.A. (1993). Article review checklist: A criterion checklist for reviewing research articles in applied psychology. Personnel Psychology, 46(3), 705-718.

CEM. Centre for Evaluation & Monitoring. Effect Size Calculator http://www.cem.org/effect-size-calculator (ver. 20.08.2018).

CEM. Centre for Evaluation & Monitoring. Effect Size Resources. https://www.cem.org/effect-size-resources (ver. 20.08.2018).

Cheung, A., & Slavin, R. (2016). How methodological features affect effect sizes in education. Educational Researcher, 45(5), 283-292.

Coe, R. (2002). It's the Effect Size, Stupid: What effect size is and why it is important. Paper presented at the Annual Conference of the British Educational Research Association, University of Exeter, England, 12-14 September 2002. http://www.leeds.ac.uk/educol/documents/00002182.htm (ver. 20.08.2018).

Cohen, J. (1969). Statistical power analysis for the behavioral sciences. New York, NY: Academic Press.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum.

Cohen, J. (1992). A power primer. Psychological Bulletin, 112, 155–159.

Cook, R. J., & Sackett, D. L. (1995). The number needed to treat: a clinically useful measure of treatment effect. BMJ: British Medical Journal, 310(6977), 452.

Davies, P. (1999). What is evidence-based education? British Journal of Educational Studies, 47(2), 108-121.

de Boer, H., Donker, A., & van der Werf, M. (2014). Effects of the attributes of educational interventions on students’ academic performance: A meta-analysis. Review of Educational Research, 84(4), 509-545.

Di Nuovo, S. (1995). La meta-analisi: fondamenti teorici e applicazioni nella ricerca psicologica. Roma: Borla.

Durlak, J. A. (2009). How to select, calculate, and interpret effect sizes. Journal of Pediatric Psychology, 34(9), 917-928.

Ellis, P. D. (2010). The Essential Guide to Effect Sizes: Statistical Power, Meta-Analysis, and the Interpretation of Research Results. Cambridge: Cambridge University Press.

Ellis, S. M., & Steyn, H. S. (2003). Practical significance (effect sizes) versus or in combination with statistical significance (p-values): research note. Management Dynamics: Journal of the Southern African Institute for Management Scientists, 12(4), 51-53.

Elmore, P. B., & Rotou, O. (2001). A Primer on Basic Effect Size Concepts. Paper presented at the annual meeting of the American Educational Research Association, Seattle.

Fan, X. (2001). Statistical significance and effect size in education research: Two sides of a coin. The Journal of Educational Research, 94(5), 275-282.

Fang, C. T., Sau, W. Y., & Chang, S. C. (1999). From effect size into number needed to treat. The Lancet, 354(9178), 597-598.

Fisher, R.A. (1925), Statistical Methods for Research Workers. Edinburgh: Oliver and Boyd.

Furukawa, T. A. (1999). From effect size into number needed to treat. The Lancet, 353, 1680.

Glass, G. V., McGaw, B., & Smith, M. L. (1981). Meta-analysis in social research. Beverly Hills, CA: Sage.

Glass, G. V. (1976). Primary, secondary, and meta-analysis of research. Educational Researcher, 5(10), 3-8.

Grissom, R. J. (1994). Probability of the superior outcome of one treatment over another. Journal of Applied Psychology, 79(2), 314-316.

Grissom, R. J., & Kim, J. J. (2001). Review of assumptions and problems in the appropriate conceptualization of effect size. Psychological Methods, 6(2), 135-146.

Hattie, J. (2009). Visible Learning: A Synthesis of over 800 Meta-Analyses Relating to Achievement. London-New York: Routledge.

Hattie, J. (2016). Apprendimento visibile, insegnamento efficace: Metodi e strategie di successo dalla ricerca evidence-based. Trento: Edizioni Centro Studi Erickson.

Hedges, L. (1981). Distribution Theory for Glass’s Estimator of Effect Size and Related Estimators. Journal of Educational Statistics, 6(2), 107-128.

Higgins, S., Katsipataki, M., Villanueva-Aguilera, A. B., Coleman, R., Henderson, P., Major, L. E., Coe, R., & Mason, D. (2016). The Sutton Trust-Education Endowment Foundation Teaching and Learning Toolkit: Manual. London: Education Endowment Foundation.

Hill, C. J., Bloom, H. S., Black, A. R., & Lipsey, M. W. (2008). Empirical benchmarks for interpreting effect sizes in research. Child Development Perspectives, 2(3), 172-177.

Hill, C. R., & Thompson, B. (2004). Computing and interpreting effect sizes. In J.C. Smart (Ed.), Higher education: Handbook of theory and research (pp. 175-196). Dordrecht: Springer.

Hsu, L. M. (2004). Biases of success rate differences shown in binomial effect size displays. Psychological Methods, 9, 183-197.

Iacobucci, D. (2005). From the editor. Journal of Consumer Research, 32(1), 1-6.

Inns, A., Pellegrini, M., Lake, C., & Slavin, R. E. (2018b). Do small studies add up in the What Works Clearinghouse? Paper presented at the Annual Meeting of American Psychological Association, San Francisco.

JEP (2003). Instructions to authors. Journal of Educational Psychology, 95(1), 201.

Kelley, K., & Preacher, K. J. (2012). On effect size. Psychological Methods, 17(2), 137-152.

Kirk, R. E. (1996). Practical significance: A concept whose time has come. Educational and Psychological Measurement, 56(5), 746-759.

Konstantopoulos, S., & Hedges, L. V. (2008). How large an effect can we expect from school reforms? Teachers College Record, 110, 1613-1640.

La Greca, A.M. (2005). Editorial. Journal of Consulting and Clinical Psychology, 73(1), 3-5.

Laupacis, A., Sackett, D. L., & Roberts, R. S. (1988). An assessment of clinically useful measures of the consequences of treatment. New England Journal of Medicine, 318(26), 1728-1733.

Lipsey, M. W. (1990). Design sensitivity: Statistical power for experimental research (Vol. 19). Sage.

Lipsey, M., & Wilson, D. (2001). Practical Meta-Analysis. Thousand Oaks, CA: Sage Publications.

Lipsey, M.W., Puzio, K., Yun, C., Hebert, M.A., Steinka-Fry, K., Cole, M.W., Roberts, M., Anthony, K.S. & Busick, M.D. (2012). Translating the Statistical Representation of the Effects of Education Interventions into More Readily Interpretable Forms. National Center for Special Education Research.

Littell, J. H., Corcoran, J., & Pillai, V. (2008). Systematic Reviews and Meta-Analysis. Oxford: Oxford University Press.

Lustig, D., & Strauser, D. (2004). Effect size and rehabilitation research. Journal of Rehabilitation, 70(4), 3.

Maher, J. M., Markey, J. C., & Ebert-May, D. (2013). The other half of the story: effect size analysis in quantitative research. CBE - Life Sciences Education, 12(3), 345-351.

McGraw, K. O., & Wong, S. P. (1992). A common language effect size statistic. Psychological Bulletin, 111(2), 361.

McLean, J. E., & Ernest, J. M. (1998). The role of statistical significance testing in educational research. Research in the Schools, 5(2), 15-22.

Murphy, K.R. (1997). Editorial. Journal of Applied Psychology, 82(1), 3-5.

NCES. National Center for Education Statistics (2002). National Center for Education Statistics - Statistical Standards. https://nces.ed.gov/pubs2003/2003601.pdf (ver. 20.08.2018).

Olejnik, S., & Algina, J. (2000). Measures of effect size for comparative studies: Applications, interpretations and limitations. Contemporary Educational Psychology, 25, 241-286.

Pellegrini, M., Inns, A., Lake, C., & Slavin, R. E. (2018). Effects of types of measures on What Works Clearinghouse outcomes. Paper presented at the Annual Meeting of the American Psychological Association, San Francisco.

Pellegrini, M. & Vivanet, G. (2018). Sintesi di ricerca in educazione. Basi teoriche e metodologiche. Roma: Carocci.

R Psychologist. http://rpsychologist.com/d3/cohend/ (ver. 20.08.2018).

Rosenthal, J. A. (1996). Qualitative descriptors of strength of association and effect size. Journal of Social Service Research, 21(4), 37-59.

Rothman, K.J. (1986). Significance testing. Annals of Internal Medicine, 105(3), 445-447.

S.Ap.I.E. Società per l’Apprendimento e l’Istruzione informati da Evidenza. www.sapie.it (ver. 20.08.2018).

Sawilowsky, S. S. (2009). New effect size rules of thumb. Journal of Modern Applied Statistical Methods, 8(2), 597-599.

Shaver, J. P. (1993). What statistical significance testing is, and what it is not. The Journal of Experimental Education, 61(4), 293-316.

Slavin, R. (2018). Effect Sizes and the 10-Foot Man. https://robertslavinsblog.wordpress.com/2018/05/10/effect-sizes-and-the-10-foot-man/ (ver. 20.08.2018).

Slavin, R. E. (1986). Best-evidence synthesis: An alternative to meta-analytic and traditional reviews. Educational Researcher, 15(9), 5-11.

Slavin, R. E. (2008). What works? Issues in synthesizing educational program evaluations. Educational Researcher, 37(1), 5-14.

Teaching & Learning Toolkit. Education Endowment Foundation. https://educationendowmentfoundation.org.uk/evidence-summaries/teaching-learning-toolkit (ver. 20.08.2018).

Torgerson, C. J. (2007). The quality of systematic reviews of effectiveness in literacy learning in English: a ‘tertiary’ review. Journal of Research in Reading, 30(3), 287-315.

Trinchero, R. (2012). La ricerca e la sua valutazione. Istanze di qualità per la ricerca educativa. Journal of Educational, Cultural and Psychological Studies, 6, 75-96.

Vargha, A., & Delaney, H. D. (2000). A critique and improvement of the CL common language effect size statistics of McGraw and Wong. Journal of Educational and Behavioral Statistics, 25(2), 101-132.

What Works Clearinghouse (2015). Procedures and Standards Handbook (version 3.0). Washington, DC: Author.

Whitehurst, G. J. (2002). Evidence-Based Education. Statement during the Student Achievement and School Accountability Conference, U.S. Department of Education, Washington, DC. https://www2.ed.gov/nclb/methods/whatworks/eb/edlite-index.html (ver. 20.08.2018).

Wilkinson, L., & the Task Force on Statistical Inference (1999). Statistical methods in psychology journals: Guidelines and explanations. American Psychologist, 54(8), 594-604.

Wilson, D. B., & Lipsey, M. W. (2001). The role of method in treatment effectiveness research: Evidence from meta-analysis. Psychological Methods, 6(4), 413.

Wolfe, D. A., & Hogg, R. V. (1971). On constructing statistics and reporting data. The American Statistician, 25(4), 27-30.




DOI: https://doi.org/10.7358/ecps-2018-018-pel1

Copyright (©) 2018 Marta Pellegrini, Roberto Trinchero, Giuliano Vivanet – Editorial format and Graphical layout: copyright (©) LED Edizioni Universitarie



 


Journal of Educational, Cultural and Psychological Studies (ECPS)
Registered by Tribunale di Milano (19/05/2010 n. 278)
Online ISSN 2037-7924 - Print ISSN 2037-7932

Research Laboratory on Didactics and Evaluation - Department of Education - "Roma Tre" University


Executive Editor: Gaetano Domenici - Associate Executive Editor & Managing Editor: Valeria Biasci
Editorial Board: Eleftheria Argyropoulou - Massimo Baldacci - Joao Barroso - Richard Bates - Christofer Bezzina - Paolo Bonaiuto - Lucia Boncori - Pietro Boscolo - Sara Bubb  - Carlo Felice Casula - Jean-Émile Charlier - Lucia Chiappetta Cajola - Carmela Covato - Jean-Louis Derouet - Peter Early - Franco Frabboni - Constance Katz - James Levin - Pietro Lucisano  - Roberto Maragliano - Romuald Normand - Michael Osborne - Donatella Palomba - Michele Pellerey - Clotilde Pontecorvo - Vitaly V. Rubtzov - Jaap Scheerens - Noah W. Sobe - Francesco Susi - Giuseppe Spadafora - Pat Thomson
Editorial Staff: Fabio Alivernini - Guido Benvenuto - Anna Maria Ciraci - Massimiliano Fiorucci - Luca Mallia - Massimo Margottini - Giovanni Moretti - Carla Roverselli 
Editorial Secretary: Nazarena Patrizi


Referee List


© 2001 LED Edizioni Universitarie di Lettere Economia Diritto