Intersezione tra intelligenza artificiale generativa e educazione: un’ipotesi
INTERSECTION BETWEEN GENERATIVE ARTIFICIAL INTELLIGENCE AND EDUCATION: A HYPOTHESIS
Abstract
This study explores the impact of integrating Generative Artificial Intelligence (GenAI) into adaptive and personalized learning environments, focusing on its diverse applications in education. It begins with an examination of the evolution of GenAI models and frameworks and establishes selection criteria for curating case studies that showcase GenAI applications in education. The analysis of these case studies highlights the tangible benefits of integrating GenAI, such as increased student engagement, improved test scores, and accelerated skill development. Ethical, technical, and pedagogical challenges are also identified, underscoring the need for close collaboration between educators and computer science experts. The findings point to the potential of GenAI to revolutionize education: by addressing technological challenges and ethical concerns, and by embracing human-centered approaches, educators and computer science experts can leverage GenAI to create innovative and inclusive learning environments. Finally, the study highlights the importance of socio-emotional learning and personalization in the evolution that will shape the future of education.
DOI: https://doi.org/10.7358/ecps-2024-030-fort
Copyright (©) 2025 Francesco Pupo – Editorial format and Graphical layout: copyright (©) LED Edizioni Universitarie

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Journal of Educational, Cultural and Psychological Studies (ECPS)
Registered by Tribunale di Milano (19/05/2010 n. 278)
Online ISSN 2037-7924 - Print ISSN 2037-7932