2023 FineTuningPretrainedLanguageMod
- (Yun et al., 2023) ⇒ Jiseon Yun, Jae Eui Sohn, and Sunghyon Kyeong. (2023). “Fine-Tuning Pretrained Language Models to Enhance Dialogue Summarization in Customer Service Centers.” In: Proceedings of the Fourth ACM International Conference on AI in Finance. doi:10.1145/3604237.3626838
Subject Headings: Dialogue Summarization, RDASS Text Summarization Measure, Customer Support Dialog Session, Korean NLP.
Notes
Cited By
Quotes
Abstract
The application of pretrained language models in real-world business domains has gained significant attention. However, research on the practical use of generative artificial intelligence (AI) to address real-world downstream tasks is limited. This study aims to enhance the routine tasks of customer service (CS) representatives, particularly in the finance domain, by applying a fine-tuning method to dialogue summarization in CS centers. KakaoBank handles an average of 15,000 CS calls daily. By employing a fine-tuning method using real-world CS dialogue data, we can reduce the time required to summarize CS dialogues and standardize summarization skills. To ensure effective dialogue summarization in the finance domain, pretrained language models should acquire additional knowledge and skills, such as specific knowledge of financial products, problem-solving abilities, and the capacity to handle emotionally charged customers. In this study, we developed a reference fine-tuned model using Polyglot-Ko (5.8B) as the baseline PLM and a dataset containing a wide range of zero-shot instructions and partially containing summarization instructions. We compared this reference model with another model fine-tuned using KakaoBank’s CS dialogues and summarization data as the instruct dataset. The results demonstrated that the fine-tuned model based on KakaoBank’s internal datasets outperformed the reference model, showing a 199% and 12% improvement in ROUGE-L and RDASS, respectively. This study emphasizes the significance of task-specific fine-tuning using appropriate instruct datasets for effective performance in specific downstream tasks. Considering its practical use, we suggest that fine-tuning using real-world instruct datasets is a powerful and cost-effective technique for developing generative AI in the business domain.
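The abstract describes instruction fine-tuning of Polyglot-Ko (5.8B) on customer-service dialogue-summarization pairs. As an illustrative aid only, the sketch below shows what such supervised fine-tuning could look like with Hugging Face Transformers; the prompt format, toy data, and hyperparameters are assumptions, not the paper's actual setup.

```python
# Illustrative sketch: instruction fine-tuning a causal PLM on
# dialogue-summarization pairs. Prompt format, data, and hyperparameters
# are assumptions for illustration, not the paper's configuration.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "EleutherAI/polyglot-ko-5.8b"  # baseline PLM named in the abstract
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-NeoX tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Toy instruction-style pairs; the paper used KakaoBank's internal CS dialogues.
pairs = [{"dialogue": "고객: 카드 분실 신고를 하고 싶어요 ...",
          "summary": "카드 분실 신고 접수"}]

def to_features(ex):
    # Concatenate dialogue and target summary into one causal-LM training text.
    text = (f"### Dialogue:\n{ex['dialogue']}\n"
            f"### Summary:\n{ex['summary']}{tokenizer.eos_token}")
    return tokenizer(text, truncation=True, max_length=1024)

train_ds = Dataset.from_list(pairs).map(to_features,
                                        remove_columns=["dialogue", "summary"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="polyglot-ko-cs-ft", num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=train_ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The evaluation reported in the abstract uses ROUGE-L and RDASS. RDASS (Reference and Document Aware Semantic Score) averages the cosine similarity between the predicted summary's sentence embedding and (a) the reference summary's embedding and (b) the source document's embedding. A minimal sketch follows, assuming a generic Korean sentence encoder; the paper does not tie RDASS to this particular checkpoint.

```python
# Minimal sketch of the RDASS metric: the mean of cos(pred, reference) and
# cos(pred, document) over sentence embeddings. The encoder checkpoint is an
# illustrative assumption, not the one used in the paper.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("jhgan/ko-sroberta-multitask")  # assumed Korean encoder

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rdass(document: str, reference: str, prediction: str) -> float:
    v_d, v_r, v_p = encoder.encode([document, reference, prediction])
    return (cosine(v_p, v_r) + cosine(v_p, v_d)) / 2.0
```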
References
| Author | volume | Date Value | title | type | journal | titleUrl | doi | note | year |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Jiseon Yun, Jae Eui Sohn, Sunghyon Kyeong | | 2023 | Fine-Tuning Pretrained Language Models to Enhance Dialogue Summarization in Customer Service Centers | | Proceedings of the Fourth ACM International Conference on AI in Finance | | 10.1145/3604237.3626838 | | 2023 |