2009 ConditionalNeuralFields
- (Peng et al., 2009) ⇒ Jian Peng, Liefeng Bo, and Jinbo Xu. (2009). “Conditional Neural Fields.” In: Proceedings of the 22nd International Conference on Neural Information Processing Systems. ISBN:978-1-61567-911-9
Subject Headings:
Notes
Cited By
- http://scholar.google.com/scholar?q=%222009%22+Conditional+Neural+Fields
- http://dl.acm.org/citation.cfm?id=2984093.2984253&preflayout=flat#citedby
2017
- (Goldberg, 2017) ⇒ Yoav Goldberg. (2017). “Neural Network Methods for Natural Language Processing.” In: Synthesis Lectures on Human Language Technologies, 10(1). doi:10.2200/S00762ED1V01Y201703HLT037
2016
- (Goldberg, 2016) ⇒ Yoav Goldberg. (2016). “A Primer on Neural Network Models for Natural Language Processing.” In: Journal of Artificial Intelligence Research, 57(1).
Quotes
Abstract
Conditional random fields (CRF) are widely used for sequence labeling such as natural language processing and biological sequence analysis. Most CRF models use a linear potential function to represent the relationship between input features and output. However, in many real-world applications such as protein structure prediction and handwriting recognition, the relationship between input features and output is highly complex and nonlinear, which cannot be accurately modeled by a linear function. To model the nonlinear relationship between input and output we propose a new conditional probabilistic graphical model, Conditional Neural Fields (CNF), for sequence labeling. CNF extends CRF by adding one (or possibly more) middle layer between input and output. The middle layer consists of a number of gate functions, each acting as a local neuron or feature extractor to capture the nonlinear relationship between input and output. Therefore, conceptually CNF is much more expressive than CRF. Experiments on two widely-used benchmarks indicate that CNF performs significantly better than a number of popular methods. In particular, CNF is the best among approximately 10 machine learning methods for protein secondary structure prediction and also among a few of the best methods for handwriting recognition.
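The abstract's distinction between CRF and CNF potentials can be made concrete with a small sketch. The notation below (feature vector f(x, i), gate parameters θ_g, logistic gate h, label weights w, transition scores u) is generic illustration chosen for this note, not necessarily the paper's exact symbols:

```latex
% Linear-chain CRF: the node potential is linear in the input features f(x, i).
P(y \mid x) \;\propto\; \exp\!\Big( \sum_{i}\sum_{k} w_{y_i,k}\, f_k(x,i) \;+\; \sum_{i} u_{y_{i-1},\,y_i} \Big)

% CNF (sketch): the same chain structure, but each node potential first passes
% the features through K logistic gate functions, so the score is nonlinear
% in f(x, i).
P(y \mid x) \;\propto\; \exp\!\Big( \sum_{i}\sum_{g=1}^{K} w_{y_i,g}\, h\big(\theta_g^{\top} f(x,i)\big) \;+\; \sum_{i} u_{y_{i-1},\,y_i} \Big),
\qquad h(z) = \frac{1}{1+e^{-z}}
```

Under this reading, taking h as the identity with one gate per feature recovers an ordinary linear-chain CRF, which is consistent with the abstract's claim that CNF is the more expressive model.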
References
- 1. Fei Sha, Fernando Pereira, Shallow Parsing with Conditional Random Fields, Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, p.134-141, May 27-June 01, 2003, Edmonton, Canada
- 2. D. T. Jones. Protein Secondary Structure Prediction based on Position-specific Scoring Matrices. Journal of Molecular Biology, 292(2):195-202, September 1999.
- 3. Feng Zhao, Shuaicheng Li, Beckett W. Sterner, and Jinbo Xu. Discriminative Learning for Protein Conformation Sampling. Proteins, 73(1):228-240, October 2008.
- 4. Feng Zhao, Jian Peng, Joe Debartolo, Karl F. Freed, Tobin R. Sosnick, Jinbo Xu, A Probabilistic Graphical Model for Ab Initio Folding, Proceedings of the 13th Annual International Conference on Research in Computational Molecular Biology, p.59-73, May 18-21, 2009, Tucson, Arizona
- 5. Sy Bor Wang, Ariadna Quattoni, Louis-Philippe Morency, David Demirdjian, Hidden Conditional Random Fields for Gesture Recognition, Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, p.1521-1527, June 17-22, 2006
- 6. Lawrence R. Rabiner. A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition. In Proceedings of the IEEE, 1989.
- 7. John D. Lafferty, Andrew McCallum, Fernando C. N. Pereira, Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data, Proceedings of the Eighteenth International Conference on Machine Learning, p.282-289, June 28-July 01, 2001
- 8. Ben Taskar, Carlos Guestrin, and Daphne Koller. Max-margin Markov Networks. In NIPS 2003.
- 9. Ioannis Tsochantaridis, Thomas Hofmann, Thorsten Joachims, Yasemin Altun, Support Vector Machine Learning for Interdependent and Structured Output Spaces, Proceedings of the Twenty-first International Conference on Machine Learning, p.104, July 04-08, 2004, Banff, Alberta, Canada
- 10. Nam Nguyen, Yunsong Guo, Comparisons of Sequence Labeling Algorithms and Extensions, Proceedings of the 24th International Conference on Machine Learning, p.681-688, June 20-24, 2007, Corvallis, Oregon, USA
- 11. Yan Liu, Jaime Carbonell, Judith Klein-Seetharaman, Vanathi Gopalakrishnan, Comparison of Probabilistic Combination Methods for Protein Secondary Structure Prediction, Bioinformatics, v.20 n.17, p.3099-3107, November 2004
- 12. D. C. Liu, J. Nocedal, On the Limited Memory BFGS Method for Large Scale Optimization, Mathematical Programming: Series A and B, v.45 n.3, p.503-528, Dec. 1989
- 13. Richard H. Byrd, Jorge Nocedal, Robert B. Schnabel, Representations of Quasi-Newton Matrices and their Use in Limited Memory Methods, Mathematical Programming: Series A and B, v.63 n.2, p.129-156, Jan. 31, 1994
- 14. David J. C. MacKay, A Practical Bayesian Framework for Backpropagation Networks, Neural Computation, v.4 n.3, p.448-472, May 1992
- 15. Christopher M. Bishop, Neural Networks for Pattern Recognition, Oxford University Press, Inc., New York, NY, 1995
- 16. John Lafferty, Xiaojin Zhu, Yan Liu, Kernel Conditional Random Fields: Representation and Clique Selection, Proceedings of the Twenty-first International Conference on Machine Learning, p.64, July 04-08, 2004, Banff, Alberta, Canada
- 17. Yoshua Bengio, Réjean Ducharme, Pascal Vincent, Christian Janvin, A Neural Probabilistic Language Model, The Journal of Machine Learning Research, 3, 2003
- 18. Ilya Sutskever, Geoffrey E Hinton, and Graham Taylor. The Recurrent Temporal Restricted Boltzmann Machine. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, Editors, NIPS 2009.
- 19. Barbara Hammer, Recurrent Networks for Structured Data - A Unifying Approach and Its Properties, Cognitive Systems Research, v.3 n.2, p.145-165, June, 2002
- 20. Alex Graves and Juergen Schmidhuber. Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, Editors, NIPS 2009.
- 21. S. F. Altschul, T. L. Madden, A. A. Schäffer, J. Zhang, Z. Zhang, W. Miller, and D. J. Lipman. Gapped BLAST and PSI-BLAST: A New Generation of Protein Database Search Programs. Nucleic Acids Research, 25, September 1997.
- 22. James A. Cuff and Geoffrey J. Barton. Evaluation and Improvement of Multiple Sequence Methods for Protein Secondary Structure Prediction. Proteins: Structure, Function, and Genetics, 34, 1999.
- 23. Wolfgang Kabsch and Christian Sander. Dictionary of Protein Secondary Structure: Pattern Recognition of Hydrogen-bonded and Geometrical Features. Biopolymers, 22(12):2577-2637, December 1983.
- 24. H. Kim and H. Park. Protein Secondary Structure Prediction based on An Improved Support Vector Machines Approach. Protein Engineering, 16(8), August 2003.
- 25. Wei Chu, Zoubin Ghahramani, David L. Wild, A Graphical Model for Protein Secondary Structure Prediction, Proceedings of the Twenty-first International Conference on Machine Learning, p.21, July 04-08, 2004, Banff, Alberta, Canada
- 26. Sujun Hua and Zhirong Sun. A Novel Method of Protein Secondary Structure Prediction with High Segment Overlap Measure: Support Vector Machine Approach. Journal of Molecular Biology, 308, 2001.
- 27. George Karypis. YASSPP: Better Kernels and Coding Schemes Lead to Improvements in Protein Secondary Structure Prediction. Proteins: Structure, Function, and Bioinformatics, 64(3):575-586, 2006.
- 28. O. Dor and Y. Zhou. Achieving 80% Ten-fold Cross-validated Accuracy for Secondary Structure Prediction by Large-scale Training. Proteins: Structure, Function, and Bioinformatics, 66, March 2007.
Author | volume | Date Value | title | type | journal | titleUrl | doi | note | year
---|---|---|---|---|---|---|---|---|---
Jian Peng, Liefeng Bo, Jinbo Xu | | 2009 | Conditional Neural Fields | | | | | 2009 ConditionalNeuralFields | 2009