Percy Liang
Percy Liang is a computer scientist at Stanford University whose research spans machine learning and natural language processing, including semantic parsing, question answering (SQuAD), and the evaluation of large language models and foundation models (HELM).
References
- Professional Homepage: http://cs.stanford.edu/~pliang/
- Google Scholar Author Page: http://scholar.google.com/citations?user=pouyVyUAAAAJ
2024
- (Zhang, Ladhak et al., 2024) ⇒ Tianyi Zhang, Faisal Ladhak, Esin Durmus, Percy Liang, Kathleen McKeown, and Tatsunori B. Hashimoto. (2024). “Benchmarking Large Language Models for News Summarization.” In: Transactions of the Association for Computational Linguistics, 12. doi:10.1162/tacl_a_00632
2023
- (Park et al., 2023) ⇒ Joon Sung Park, Joseph C. O'Brien, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, and Michael S. Bernstein. (2023). “Generative Agents: Interactive Simulacra of Human Behavior.” doi:10.48550/arXiv.2304.03442
- (Zhang, Ladhak et al., 2023) ⇒ Tianyi Zhang, Faisal Ladhak, Esin Durmus, Percy Liang, Kathleen McKeown, and Tatsunori B. Hashimoto. (2023). “Benchmarking Large Language Models for News Summarization.” arXiv preprint arXiv:2301.13848
2022
- (Wei, Tay et al., 2022) ⇒ Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. (2022). “Emergent Abilities of Large Language Models.” In: Transactions on Machine Learning Research, 08/2022 (TMLR).
- (Liang, Bommasani et al., 2022) ⇒ Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, Benjamin Newman, Binhang Yuan, Bobby Yan, Ce Zhang, Christian Cosgrove, Christopher D. Manning, Christopher Ré, Diana Acosta-Navas, Drew A. Hudson, Eric Zelikman, Esin Durmus, Faisal Ladhak, Frieda Rong, Hongyu Ren, Huaxiu Yao, Jue Wang, Keshav Santhanam, Laurel Orr, Lucia Zheng, Mert Yuksekgonul, Mirac Suzgun, Nathan Kim, Neel Guha, Niladri Chatterji, Omar Khattab, Peter Henderson, Qian Huang, Ryan Chi, Sang Michael Xie, Shibani Santurkar, Surya Ganguli, Tatsunori Hashimoto, Thomas Icard, Tianyi Zhang, Vishrav Chaudhary, William Wang, Xuechen Li, Yifan Mai, Yuhui Zhang, and Yuta Koreeda. (2022). “Holistic Evaluation of Language Models.” doi:10.48550/arXiv.2211.09110
2021
- (Bommasani et al., 2021) ⇒ Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, ..., Lucia Zheng, Kaitlyn Zhou, and Percy Liang. (2021). “On the Opportunities and Risks of Foundation Models.” arXiv preprint arXiv:2108.07258.
2016
- (Rajpurkar et al., 2016) ⇒ Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. (2016). “SQuAD: 100,000+ Questions for Machine Comprehension of Text.” arXiv preprint arXiv:1606.05250.
- (Liang, 2016) ⇒ Percy Liang. (2016). “Learning Executable Semantic Parsers for Natural Language Understanding.” In: Communications of the ACM, 59(9). doi:10.1145/2866568
- QUOTE: A long-standing goal of artificial intelligence (AI) is to build systems capable of understanding natural language.
- (Liang, 2016) ⇒ Percy Liang. (2016). “Querying Unnormalized and Incomplete Knowledge Bases.” Invited Talk at the 5th Workshop on Automated Knowledge Base Construction (AKBC-2016).
- ABSTRACT: In an ideal world, one might construct a perfect knowledge base and use it to answer compositional queries. However, real-world knowledge bases are far from perfect: they can be inaccurate and incomplete. In this talk, I show two ways that we can cope with these imperfections by directly learning to answer queries on the imperfect knowledge base. First, we treat semi-structured web tables as an unnormalized knowledge base and perform semantic parsing on them to answer compositional questions. Second, we show how to embed an incomplete knowledge base to support compositional queries directly in vector space. Finally, we discuss some ideas for combining the best of both worlds.
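The second idea above, compositional queries in vector space, can be sketched in a few lines. The following is a minimal illustration, not the system from the talk: it assumes a toy knowledge base with made-up entities and relations and a TransE-style translation model in which a true triple (h, r, t) satisfies h + r ≈ t, so a path query is answered by composing relation vectors and taking the nearest entity.

```python
# Toy sketch (illustrative assumptions throughout, not the talk's system):
# answer a compositional path query directly in vector space.
import numpy as np

rng = np.random.default_rng(0)
dim = 16

# Embed each entity and relation as a vector. In a real system these are
# learned from observed triples; here we hand-set the two facts we need so
# that h + r = t holds exactly (TransE-style translation assumption).
entities = {name: rng.normal(size=dim) for name in ["obama", "hawaii", "usa"]}
relations = {name: rng.normal(size=dim) for name in ["born_in", "located_in"]}
entities["hawaii"] = entities["obama"] + relations["born_in"]
entities["usa"] = entities["hawaii"] + relations["located_in"]

def answer_path_query(start, path):
    """Compose relation translations, then return the nearest entity.

    The composed fact need not be stored anywhere explicitly: the answer
    is read off from vector-space geometry.
    """
    v = entities[start]
    for rel in path:
        v = v + relations[rel]
    return min(entities, key=lambda e: np.linalg.norm(entities[e] - v))

# "In which country was Obama born?" posed as a two-step path query.
print(answer_path_query("obama", ["born_in", "located_in"]))  # -> usa
```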
2015
- (Liang, 2015) ⇒ Percy Liang. (2015). “Natural Language Understanding: Foundations and State-of-the-Art.” Tutorial at ICML-2015.
- ABSTRACT: Building systems that can understand human language — being able to answer questions, follow instructions, carry on dialogues — has been a long-standing challenge since the early days of AI. Due to recent advances in machine learning, there is again renewed interest in taking on this formidable task. A major question is how one represents and learns the semantics (meaning) of natural language, to which there are only partial answers. The goal of this tutorial is (i) to describe the linguistic and statistical challenges that any system must address; and (ii) to describe the types of cutting edge approaches and the remaining open problems. Topics include distributional semantics (e.g., word vectors), frame semantics (e.g., semantic role labeling), model-theoretic semantics (e.g., semantic parsing), the role of context, grounding, neural networks, latent variables, and inference. The hope is that this unified presentation will clarify the landscape, and show that this is an exciting time for the machine learning community to engage in the problems in natural language understanding.
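One tutorial topic, model-theoretic semantics via semantic parsing, lends itself to a compact illustration. The sketch below is a hypothetical toy, not anything from the tutorial itself: an utterance is mapped to an executable form, and its meaning is the result of executing that form against a small world model; the world, lexicon, and parsing rules are all made-up assumptions.

```python
# Toy sketch of model-theoretic semantics (illustrative assumptions only):
# parse an utterance into an executable form, then evaluate it in a world.

# Toy world model: city populations.
WORLD = {"austin": 980_000, "boston": 650_000}

def parse(utterance):
    """Map an utterance to an executable form (a zero-argument function)."""
    words = utterance.lower().split()
    if words[:2] == ["population", "of"] and len(words) == 3 and words[2] in WORLD:
        city = words[2]
        return lambda: WORLD[city]                # denotation: a number
    if words[:2] == ["largest", "city"]:
        return lambda: max(WORLD, key=WORLD.get)  # denotation: a city name
    raise ValueError(f"cannot parse: {utterance!r}")

def understand(utterance):
    """Parse, then execute: the meaning of an utterance is its denotation."""
    return parse(utterance)()

print(understand("population of austin"))  # -> 980000
print(understand("largest city"))          # -> austin
```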
2013
- (Berant et al., 2013) ⇒ Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. (2013). “Semantic Parsing on Freebase from Question-Answer Pairs.” In: Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP 2013).
2009
- (Liang, Jordan & Klein, 2009) ⇒ Percy Liang, Michael I. Jordan, and Dan Klein. (2009). “Learning Semantic Correspondences with Less Supervision.” In: Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP (ACL 2009).
2006
- (Liang et al., 2006) ⇒ Percy Liang, Ben Taskar, and Dan Klein. (2006). “Alignment by Agreement.” In: Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL 2006). doi:10.3115/1220835.1220849