Xipeng Qiu
Xipeng Qiu is a person (a natural language processing researcher).
- Context:
- ...
- See: Longformer Model, Text Sequence Modeling.
References
2023
- (An et al., 2023) ⇒ Chenxin An, Shansan Gong, Ming Zhong, Mukai Li, Jun Zhang, Lingpeng Kong, and Xipeng Qiu. (2023). “L-Eval: Instituting Standardized Evaluation for Long Context Language Models.” In: arXiv preprint arXiv:2307.11088. doi:10.48550/arXiv.2307.11088
2020
- (Beltagy et al., 2020) ⇒ Iz Beltagy, Matthew E. Peters, and Arman Cohan. (2020). “Longformer: The Long-document Transformer.” In: arXiv preprint arXiv:2004.05150. doi:10.48550/arXiv.2004.05150
- (Qiu et al., 2020) ⇒ Xipeng Qiu, Tao Sun, Yige Xu, Yuxuan Shao, Ning Dai, and Xuanjing Huang. (2020). “Pre-trained Models for Natural Language Processing: A Survey.” In: Science China Technological Sciences, 63(10), 1872-1897.
2019
- (Sun, Qiu, Xu & Huang, 2019) ⇒ Chi Sun, Xipeng Qiu, Yige Xu, and Xuanjing Huang. (2019). “How to Fine-Tune BERT for Text Classification?” In: China National Conference on Chinese Computational Linguistics, 194-206.
2018
- (Chen, Qiu et al., 2018) ⇒ Junkun Chen, Xipeng Qiu, Pengfei Liu, and Xuanjing Huang. (2018). “Meta Multi-Task Learning for Sequence Modeling.” In: Proceedings of the AAAI Conference on Artificial Intelligence, 32(1).
2016
- (Liu, Qiu & Huang, 2016) ⇒ Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. (2016). “Recurrent Neural Network for Text Classification with Multi-Task Learning.” In: IJCAI, 1524.