Multimodal Affective Modeling in an LLM-based Intelligent Tutoring System for Foreign Language Learning

Dionysios Koulouris1, Athanasios Kallipolitis1, Melina Tziomaka1, Argyrios Zafeiriou1, Stamatios Orfanos1, Andreas Menychtas1, Ilias Maglogiannis1, George Tsoulouhas2, Stamatia Michalopoulou3, Athina Sioupi3, Voula Giouli4
1University of Piraeus, Greece, 2Athena Research Center, 3Aristotle University of Thessaloniki, Greece, 4Aristotle University of Thessaloniki / ILSP, ATHENA RC


Abstract

Foreign language learning is a cognitively and affectively demanding process, in which fluctuations in attention and motivation can negatively impact learner engagement. Emotions play a central role in this process, yet they are rarely modelled in a systematic, data-driven manner in authentic learning environments. This paper presents a prototype affective computing architecture that incorporates multiple modalities (audio, video, biosignals) to enable real-time or near-real-time emotion recognition in an educational scenario. The architecture is integrated within an emotion-aware, adaptive language-learning application that harnesses Large Language Models to provide appropriate educational scenarios to learners. The system comprises data-acquisition modules for each modality and a processing pipeline for synchronizing and analyzing the heterogeneous affective signals. We demonstrate the feasibility and applicability of the approach through a proof-of-concept implementation and discuss its relevance for studying learner affect and supporting affect-aware educational scenarios. The results highlight both the applicability of multimodal affective data in educational settings and the need for further research on their pedagogical interpretation and use.