Jason Eisner
Jason Eisner is a computer scientist at Johns Hopkins University whose research focuses on natural language processing and machine learning.
References
- Google Scholar Profile: http://scholar.google.com/citations?user=tjb2UccAAAAJ
- Professional Homepage: http://www.cs.jhu.edu/~jason/
- QUOTE: All kinds of novel methods for natural language processing: New machine learning, combinatorial algorithms, probabilistic models of linguistic structure, and declarative specification of knowledge and algorithms.
- Why:
- The question: How can we appropriately formalize linguistic structure and discover it automatically?
- The engineering motivation: Computers must learn to understand human language. A huge portion of human communication, thought, and culture now passes through computers. Ultimately, we want our devices to help us by understanding text and speech as a human would — both at the small scale of intelligent user interfaces and at the large scale of the entire multilingual Internet.
- The scientific motivation: Human language is fascinatingly complex and ambiguous. Yet babies are born with the incredible ability to discover the structure of the language around them. Soon they are able to rapidly comprehend and produce that language and relate it to events and concepts in the world. Figuring out how this is possible is a grand challenge for both cognitive science and machine learning.
- The disciplines: My research program combines computer science with statistics and linguistics. The challenge is to fashion statistical models that are nuanced enough to capture good intuitions about linguistic structure, and especially, to develop efficient algorithms to apply these models to data (including training them with as little supervision as possible).
- What:
- Models: I've developed significant modeling approaches for a wide variety of domains in natural language processing — syntax, phonology, morphology, and machine translation, as well as semantic preferences, name variation, and even database-backed websites. The goal is to capture not just the structure of sentences, but also deep regularities within the grammar and lexicon of a language (and across languages). My students and I are always thinking about new problems and better models. Lately we are doing a lot of non-parametric Bayesian modeling, so that the model can be a linguistically plausible account of how the data arose.
- Algorithms: A good mathematical model will define the best analysis of the data, but can we compute that analysis? My students and I are constantly developing new algorithms to cope with the tricky structured prediction and learning problems posed by increasingly sophisticated models. Unlike many areas of machine learning, we have to deal with probability distributions over unboundedly large structured variables such as strings, trees, alignments, and grammars. My favorite tools include dynamic programming (see the illustrative sketch after this list), Markov chain Monte Carlo (MCMC), belief propagation and other variational approximations, automatic differentiation, deterministic annealing, stochastic local search, coarse-to-fine search, integer linear programming, and relaxation methods. I especially enjoy connecting disparate techniques in fruitful new ways.
- General paradigms: My students and I also work to pioneer general statistical and algorithmic paradigms that cut across problems (not limited to NLP). We are developing a high-level declarative programming language, Dyna, which allows startlingly short programs, backed up by many interesting general efficiency tricks so that these don't have to be reinvented and reimplemented in new settings all the time. We are also showing how to learn execution strategies that do fast and accurate approximate statistical inference, and how to properly train these essentially discriminative strategies in a Bayesian way. In the past we have developed other machine learning techniques of general interest.
- Measuring success: We implement our new methods and evaluate them carefully on collections of naturally occurring language. We have repeatedly improved the state of the art. While our work can certainly be used within today's end-user applications, such as machine translation and information extraction, we ourselves are generally focused on building up the long-term fundamentals of the field.
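The following is a minimal illustrative sketch, in Python, of the kind of dynamic programming over strings referred to above: a probabilistic CKY "inside" algorithm that sums the probabilities of all parses of a sentence under a context-free grammar in Chomsky normal form. The grammar encoding and the function name `inside` are assumptions made for this example; this is not code from Eisner's group.
```python
# Illustrative sketch only: probabilistic CKY "inside" dynamic programming.
# Assumed grammar encoding (hypothetical, chosen for this example):
#   lexical[(A, w)]   = P(A -> w)     for preterminal rules
#   binary[(A, B, C)] = P(A -> B C)   for binary rules
from collections import defaultdict

def inside(words, lexical, binary, root="S"):
    """Return the total probability of all parses of `words` rooted in `root`."""
    n = len(words)
    chart = defaultdict(dict)  # chart[(i, j)][A] = P(A derives words[i:j])

    # Base case: preterminal rules over single words.
    for i, w in enumerate(words):
        for (A, word), p in lexical.items():
            if word == w:
                cell = chart[(i, i + 1)]
                cell[A] = cell.get(A, 0.0) + p

    # Recursive case: build wider spans from adjacent narrower spans.
    for width in range(2, n + 1):
        for i in range(n - width + 1):
            j = i + width
            for k in range(i + 1, j):  # split point between the two children
                for (A, B, C), p in binary.items():
                    left = chart[(i, k)].get(B, 0.0)
                    right = chart[(k, j)].get(C, 0.0)
                    if left and right:
                        cell = chart[(i, j)]
                        cell[A] = cell.get(A, 0.0) + p * left * right

    return chart[(0, n)].get(root, 0.0)
```
Replacing the sums with max (and keeping backpointers) turns the same recurrence into a Viterbi parser; writing such weighted deduction rules once and varying the semiring is close in spirit to the declarative specification style that the Dyna language mentioned above aims to support.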
2014
- (Eisner, 2014) ⇒ Jason Eisner. (2014). “How is Computational Linguistics different from Natural Language Processing?” Quora answer. http://www.quora.com/How-is-Computational-Linguistics-different-from-Natural-Language-Processing/answer/Jason-Eisner
2005
- (Smith & Eisner, 2005) ⇒ Noah A. Smith, and Jason Eisner. (2005). “Contrastive Estimation: Training Log-linear Models on Unlabeled Data.” In: Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pp. 354-362 . Association for Computational Linguistics,
2003
- (Eisner, 2003) ⇒ Jason Eisner. (2003). “Learning Non-isomorphic Tree Mappings for Machine Translation.” In: Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, Volume 2, pp. 205-208. Association for Computational Linguistics.
1999
- (Eisner & Satta, 1999) ⇒ Jason Eisner, and Giorgio Satta. (1999). “Efficient Parsing for Bilexical Context-free Grammars and Head Automaton Grammars.” In: Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics on Computational Linguistics, pp. 457-464 . Association for Computational Linguistics,
1996
- (Eisner, 1996) ⇒ Jason M. Eisner. (1996). “Three New Probabilistic Models for Dependency Parsing: An Exploration.” In: Proceedings of the 16th Conference on Computational Linguistics, Volume 1, pp. 340-345. Association for Computational Linguistics.