Christopher D. Manning
Christopher D. Manning is a computational linguist and computer scientist, a professor of linguistics and computer science at Stanford University, known for his research in natural language processing and deep learning.
References
- Professional Homepage: http://nlp.stanford.edu/~manning/
- DBLP: http://www.informatik.uni-trier.de/~ley/db/indices/a-tree/m/Manning:Christopher_D=.html
- Google Author Page: https://scholar.google.com/citations?user=1zmDOdwAAAAJ
2024
- (Magesh et al., 2024) ⇒ Varun Magesh, Faiz Surani, Matthew Dahl, Mirac Suzgun, Christopher D. Manning, and Daniel E. Ho. (2024). “Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools.” In: Stanford preprint.
2023
- (Rafailov et al., 2023) ⇒ Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D. Manning, and Chelsea Finn. (2023). “Direct Preference Optimization: Your Language Model is Secretly a Reward Model.” doi:10.48550/arXiv.2305.18290
2022
- (Wei, Tay et al., 2022) ⇒ Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. (2022). “Emergent Abilities of Large Language Models.” In: Transactions on Machine Learning Research (TMLR), 08/2022.
- (Liang, Bommasani et al., 2022) ⇒ Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, Benjamin Newman, Binhang Yuan, Bobby Yan, Ce Zhang, Christian Cosgrove, Christopher D. Manning, Christopher Ré, Diana Acosta-Navas, Drew A. Hudson, Eric Zelikman, Esin Durmus, Faisal Ladhak, Frieda Rong, Hongyu Ren, Huaxiu Yao, Jue Wang, Keshav Santhanam, Laurel Orr, Lucia Zheng, Mert Yuksekgonul, Mirac Suzgun, Nathan Kim, Neel Guha, Niladri Chatterji, Omar Khattab, Peter Henderson, Qian Huang, Ryan Chi, Sang Michael Xie, Shibani Santurkar, Surya Ganguli, Tatsunori Hashimoto, Thomas Icard, Tianyi Zhang, Vishrav Chaudhary, William Wang, Xuechen Li, Yifan Mai, Yuhui Zhang, and Yuta Koreeda. (2022). “Holistic Evaluation of Language Models.” doi:10.48550/arXiv.2211.09110
- (Srivastava et al., 2022) ⇒ Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, ..., Christian Voigt, Christopher D. Manning, Christopher Potts, ... (2022). “Beyond the Imitation Game: Quantifying and Extrapolating the Capabilities of Language Models.” arXiv preprint arXiv:2206.04615.
2021
- (Koreeda & Manning, 2021) ⇒ Yuta Koreeda, and Christopher D. Manning. (2021). “ContractNLI: A Dataset for Document-level Natural Language Inference for Contracts.” In: Findings of the Association for Computational Linguistics: EMNLP 2021.
- (Bommasani et al., 2021) ⇒ Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, ..., Ali Malik, Christopher D. Manning, Suvir Mirchandani, Eric Mitchell, ..., Lucia Zheng, Kaitlyn Zhou, and Percy Liang. (2021). “On the Opportunities and Risks of Foundation Models.” arXiv preprint arXiv:2108.07258.
2020
- (Clark et al., 2020) ⇒ Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. (2020). “ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators.” In: Proceedings of the International Conference on Learning Representations (ICLR 2020).
2018
- (Hudson & Manning, 2018) ⇒ Drew A. Hudson, and Christopher D. Manning. (2018). “Compositional Attention Networks for Machine Reasoning.” In: Proceedings of the International Conference on Learning Representations (ICLR 2018).
2017
- (See et al., 2017) ⇒ Abigail See, Peter J. Liu, and Christopher D. Manning. (2017). “Get To The Point: Summarization with Pointer-Generator Networks.” In: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).
- Lecture series: Natural Language Processing with Deep Learning, Stanford CS224N / Ling 284 (2017).
2016
- (Manning, 2016) ⇒ Christopher D. Manning. (2016). “Texts as Knowledge Bases.” Invited Talk at the 5th Workshop on Automated Knowledge Base Construction (AKBC-2016).
- ABSTRACT: Much of text understanding is either towards the end of the spectrum where there is no representation of linguistic conceptual structure (bag-of-words models) or near the other extreme where complex representations are employed (first order logic, AMR, ...). I've been interested in how far one can get with just a little bit of appropriate linguistic structure. I will summarize two recent case studies, one using deep learning and the other natural logic. Enabling a computer to understand a document so that it can use the knowledge within it, for example, to answer reading comprehension questions is a central, yet still unsolved, goal of NLP. I’ll introduce our recent work on the DeepMind QA dataset - a recently released large dataset constructed from news articles. On the one hand, we show that (simple) neural network models are surprisingly good at solving this task and achieving state-of-the-art accuracies; on the other hand, we did a careful hand-analysis of a small subset of the problems and argue that we are quite close to a performance ceiling on this dataset, and what this task needs is still quite far from genuine deep / complex understanding. I will then turn to the use of Natural Logic, a weak proof theory on surface linguistic forms which can nevertheless model many of the common-sense inferences that we wish to make over human language material. I will show how it can support common-sense reasoning and be part of a more linguistically based approach to open information extraction which outperforms previous systems. I show how to augment this approach with a shallow lexical classifier to handle situations where we cannot find any supporting premises. With this augmentation, the system gets very promising results on answering 4th grade science questions, improving over the classifier in isolation, a strong IR baseline, and prior work. Joint work with Gabor Angeli and Danqi Chen.
- (Nivre et al., 2016) ⇒ Joakim Nivre, Marie-Catherine De Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajič, Christopher D. Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, and Daniel Zeman. (2016). “Universal Dependencies V1: A Multilingual Treebank Collection.” In: Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016).
2015
- (Tai et al., 2015) ⇒ Kai Sheng Tai, Richard Socher, and Christopher D. Manning. (2015). “Improved Semantic Representations from Tree-structured Long Short-term Memory Networks.” arXiv preprint arXiv:1503.00075.
- (Hirschberg & Manning, 2015) ⇒ Julia Hirschberg, and Christopher D. Manning. (2015). “Advances in Natural Language Processing.” In: Science, 349(6245). doi:10.1126/science.aaa8415
- (Luong et al., 2015) ⇒ Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. (2015). “Effective Approaches to Attention-based Neural Machine Translation.” arXiv preprint arXiv:1508.04025
2014
- (Pennington et al., 2014) ⇒ Jeffrey Pennington, Richard Socher, and Christopher D. Manning. (2014). “GloVe: Global Vectors for Word Representation.” In: Proceedings of EMNLP 2014.
2013
- (Luong et al., 2013) ⇒ Thang Luong, Richard Socher, and Christopher Manning. (2013). “Better Word Representations with Recursive Neural Networks for Morphology.” In: Proceedings of the Seventeenth Conference on Computational Natural Language Learning (CoNLL-2013).
2012
- (Huang et al., 2012) ⇒ Eric H. Huang, Richard Socher, Christopher D. Manning, and Andrew Y. Ng. (2012). “Improving Word Representations via Global Context and Multiple Word Prototypes.” In: Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (ACL 2012).
2011
- (Socher et al., 2011a) ⇒ Richard Socher, Jeffrey Pennington, Eric H. Huang, Andrew Y. Ng, and Christopher D. Manning. (2011). “Semi-supervised Recursive Autoencoders for Predicting Sentiment Distributions.” In: Proceedings of the Conference on Empirical Methods in Natural Language Processing. ISBN:978-1-937284-11-4
- (Socher et al., 2011b) ⇒ Richard Socher, Eric H. Huang, Jeffrey Pennington, Christopher D. Manning, and Andrew Y. Ng. (2011). “Dynamic Pooling and Unfolding Recursive Autoencoders for Paraphrase Detection.” In: Advances in Neural Information Processing Systems, pp. 801-809.
2009
- Course in NLP: http://see.stanford.edu/see/lecturelist.aspx?coll=63480b48-8819-4efd-8412-263f1a472f5a
- (Finkel & Manning, 2009a) ⇒ Jenny Rose Finkel, and Christopher D. Manning. (2009). “Joint Parsing and Named Entity Recognition.” In: Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT 2009).
- (Finkel & Manning, 2009b) ⇒ Jenny Rose Finkel, and Christopher D. Manning. (2009). “Nested Named Entity Recognition.” In: Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing (EMNLP 2009).
2008
- (Manning et al., 2008) ⇒ Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schütze. (2008). “Introduction to Information Retrieval.” Cambridge University Press. ISBN:0521865719.
2006
- (de Marneffe et al., 2006) ⇒ Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. (2006). “Generating Typed Dependency Parses from Phrase Structure Parses.” In: Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC 2006).
- (Krishnan & Manning, 2006) ⇒ Vijay Krishnan, and Christopher D. Manning. (2006). “An Effective Two-stage Model for Exploiting Non-local Dependencies in Named Entity Recognition.” In: Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics (COLING-ACL 2006). doi:10.3115/1220175.1220316
2005
- (Finkel et al., 2005) ⇒ Jenny Rose Finkel, Trond Grenager, and Christopher D. Manning. (2005). “Incorporating Nonlocal Information into Information Extraction Systems by Gibbs Sampling.” In: Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL 2005).
- (Toutanova et al., 2005) ⇒ Kristina Toutanova, Aria Haghighi, and Christopher D. Manning. (2005). “Joint Learning Improves Semantic Role Labeling.” In: Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL 2005), pp. 589-596.
2004
- (Taskar et al., 2004) ⇒ Ben Taskar, Dan Klein, Michael Collins, Daphne Koller, and Christopher D. Manning. (2004). “Max-Margin Parsing.” In: Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing (EMNLP 2004).
- (Finkel et al., 2004) ⇒ Jenny Finkel, Shipra Dingare, Huy Nguyen, Malvina Nissim, Christopher Manning, and Gail Sinclair. (2004). “Exploiting Context for Biomedical Entity Recognition: From Syntax to the Web.” In: Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and its Applications.
- (Klein & Manning, 2004) ⇒ Dan Klein, and Christopher D. Manning. (2004). “Corpus-based Induction of Syntactic Structure: Models of Dependency and Constituency.” In: Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL 2004).
2003
- (Manning et al., 2003) ⇒ Christopher D. Manning, Dan Klein, and Roger Levy. (2003). “Natural Language Parsing: Graphs, the A* Algorithm, and Modularity.” Talk at University of Toronto.
- (Klein & Manning, 2003) ⇒ Dan Klein, and Christopher D. Manning. (2003). “Accurate Unlexicalized Parsing.” In: Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL 2003).
- (Beineke et al., 2003) ⇒ Philip Beineke, Trevor Hastie, Christopher D. Manning, and Shivakumar Vaithyanathan. (2003). “An Exploration of Sentiment Summarization.” In: Proceedings of the AAAI Spring Symposium on Exploring Attitude and Affect in Text: Theories and Applications.
- (Toutanova et al., 2003) ⇒ Kristina Toutanova, Dan Klein, Christopher D. Manning, and Yoram Singer. (2003). “Feature-Rich Part-of-Speech Tagging with a Cyclic Dependency Network.” In: Proceedings of HLT-NAACL 2003 (HLT-NAACL 2003).
- (Kamvar et al., 2003) ⇒ Sepandar D. Kamvar, Dan Klein, and Christopher D. Manning. (2003). “Spectral Learning.” In: Proceedings of the 18th International Joint Conference on Artificial Intelligence (IJCAI 2003).
2002
- (Klein & Manning, 2002a) ⇒ Dan Klein, and Christopher D. Manning. (2002). “Fast Exact Inference with a Factored Model for Natural Language Parsing.” In: Advances in Neural Information Processing Systems 15 (NIPS 2002).
- (Klein & Manning, 2002b) ⇒ Dan Klein, and Christopher D. Manning. (2002). “Conditional Structure Versus Conditional Estimation in NLP Models.” In: Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP 2002).
2001
- (Klein & Manning, 2001a) ⇒ Dan Klein, and Christopher D. Manning. (2001). “Parsing with Treebank Grammars: Empirical Bounds, Theoretical Models, and the Structure of the Penn Treebank.” In: Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics and the 10th Conference of the European Chapter of the ACL (ACL/EACL 2001).
- (Klein & Manning, 2001b) ⇒ Dan Klein, and Christopher D. Manning. (2001). “Parsing and Hypergraphs.” In: Proceedings of the 7th International Workshop on Parsing Technologies (IWPT-2001).
2000
- (Toutanova & Manning, 2000) ⇒ Kristina Toutanova, and Christopher D. Manning. (2000). “Enriching the Knowledge Sources Used in a Maximum Entropy Part-of-Speech Tagger.” In: Proceedings of the Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora (EMNLP/VLC 2000).
1999
- (Manning & Schütze, 1999) ⇒ Christopher D. Manning, and Hinrich Schütze. (1999). “Foundations of Statistical Natural Language Processing.” The MIT Press. ISBN:0262133601