Research on the Application of Large Language Models in Recommendation Systems
DOI: https://doi.org/10.62517/jes.202602113
Author(s)
Yan Yang*, Rong Li
Affiliation(s)
Computer School, Central China Normal University, Wuhan, China (*Corresponding Author)
Abstract
In the context of information explosion and increasingly diverse user demands, large language models (LLMs) offer new approaches to optimizing recommendation systems. This paper studies the application of LLMs in recommendation systems. It first outlines the basic framework of LLM-based recommendation systems, and then analyzes three representative recommendation methods based on fine-tuning, prompt learning, and instruction tuning. The paper further details the advantages of LLMs over traditional recommendation systems in accuracy, cold-start handling, and recommendation diversity, and identifies their limitations in data processing, computational efficiency, and privacy protection. Finally, the paper discusses directions for future research.
Keywords
Recommendation System; Large Language Models; Instruction Tuning