Personalized Music Psychological Intervention Based on Artificial Intelligence: The Role of Real-time Emotion Recognition and Feedback in Students' Emotional Expression and Skill Acquisition
DOI: https://doi.org/10.62517/jhet.202515554
Author(s)
Tianyi Yang*
Affiliation(s)
York University, York, YO10 5DD, UK
*Corresponding Author
Abstract
This article examines personalized music-based psychological intervention supported by artificial intelligence, focusing on the role of real-time emotion recognition and feedback in students' emotional expression and skill acquisition. It outlines the technical foundations and implementation methods of artificial intelligence in music psychological intervention, and analyzes the positive effects of real-time emotion recognition and feedback on students' emotional expression, including heightened emotional awareness and a wider range of expressive means. It then discusses how such feedback promotes skill acquisition, for example by strengthening learning motivation and optimizing learning strategies. Finally, future development directions are considered, with the aim of providing theoretical references for the application of artificial intelligence to personalized music psychological intervention in education.
Keywords
Artificial Intelligence; Personalized Music Psychological Intervention; Real-time Emotion Recognition; Emotional Expression; Skill Acquisition