Li Dong
References
2024
- (Wu, Mao et al., 2024) ⇒ Wenshan Wu, Shaoguang Mao, Yadong Zhang, Yan Xia, Li Dong, Lei Cui, and Furu Wei. (2024). “Visualization-of-Thought Elicits Spatial Reasoning in Large Language Models.” doi:10.48550/arXiv.2404.03622
- (Ge, Hu et al., 2024) ⇒ Tao Ge, Jing Hu, Li Dong, Shaoguang Mao, Yan Xia, Xun Wang, Si-Qing Chen, and Furu Wei. (2024). “Extensible Prompts for Language Models on Zero-shot Language Style Customization.” In: Advances in Neural Information Processing Systems, 36.
2023
- (Ding, Ma et al., 2023) ⇒ Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, and Furu Wei. (2023). “LongNet: Scaling Transformers to 1,000,000,000 Tokens.” doi:10.48550/arXiv.2307.02486
2022
- (Bao et al., 2022) ⇒ Hangbo Bao, Li Dong, Songhao Piao, and Furu Wei. (2022). “BEiT: BERT Pre-Training of Image Transformers.” In: Proceedings of the International Conference on Learning Representations (ICLR 2022).
2020
- (Li, Yin et al., 2020) ⇒ Xiujun Li, Xi Yin, Chunyuan Li, Pengchuan Zhang, Xiaowei Hu, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, and Furu Wei. (2020). “Oscar: Object-Semantics Aligned Pre-training for Vision-Language Tasks.” In: Proceedings of the European Conference on Computer Vision (ECCV 2020).
2019
- (Dong, Yang et al., 2019) ⇒ Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. (2019). “Unified Language Model Pre-training for Natural Language Understanding and Generation.” In: Proceedings of the 33rd Conference on Neural Information Processing Systems (NeurIPS 2019).
- (Puduppully et al., 2019) ⇒ Ratish Puduppully, Li Dong, and Mirella Lapata. (2019). “Data-to-Text Generation with Content Selection and Planning.” In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI-2019).
- (Liu & Lapata, 2019) ⇒ Yang Liu, and Mirella Lapata. (2019). “Text Summarization with Pretrained Encoders.” In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP 2019)
2016
- (Cheng et al., 2016) ⇒ Jianpeng Cheng, Li Dong, and Mirella Lapata. (2016). “Long Short-Term Memory-Networks for Machine Reading.” In: Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP-2016).