Samuel R. Bowman
Samuel R. Bowman is a researcher in natural language processing.
- See: Corpus Linguistics, Natural Language Inference, Neural Networks, Sentence Generation, Textual Entailment.
References
2023
- (Rein et al., 2023) ⇒ David Rein, Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, Julien Dirani, Julian Michael, and Samuel R. Bowman. (2023). “GPQA: A Graduate-Level Google-Proof Q&A Benchmark.” doi:10.48550/arXiv.2311.12022
2019
- (Wang, Pruksachatkun et al., 2019) ⇒ Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. (2019). “SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems.” In: Proceedings of the 33rd Conference on Neural Information Processing Systems (NeurIPS 2019). arXiv:1905.00537
- (Wang, Singh et al., 2019) ⇒ Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. (2019). “GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding.” In: Proceedings of the 7th International Conference on Learning Representations (ICLR 2019).
2018
- (Wang et al., 2018) ⇒ Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. (2018). “GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding.” In: Proceedings of the Workshop: Analyzing and Interpreting Neural Networks for NLP (BlackboxNLP@EMNLP 2018). doi:10.18653/v1/W18-5446
- (Williams et al., 2018) ⇒ Adina Williams, Nikita Nangia, and Samuel R. Bowman. (2018). “A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference.” In: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. doi:10.18653/v1/N18-1101
- NOTE: This paper presents the Multi-Genre Natural Language Inference (MultiNLI) corpus, which follows the SNLI annotation protocol but draws its texts from ten genres of spoken and written language, testing the robustness of natural language inference models across domains.
2016
- (Bowman et al., 2016) ⇒ Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew M. Dai, Rafal Jozefowicz, and Samy Bengio. (2016). “Generating Sentences from a Continuous Space.” In: Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning (CoNLL 2016). doi:10.18653/v1/K16-1002
- NOTE: This paper applies a variational autoencoder (VAE) with recurrent encoder and decoder networks to generate sentences by sampling from a continuous latent space, demonstrating smooth interpolation between sentences and imputation of missing words.
2015
- (Bowman et al., 2015) ⇒ Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. (2015). “A Large Annotated Corpus for Learning Natural Language Inference.” In: Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP 2015). doi:10.18653/v1/D15-1075
- NOTE: This paper introduces the Stanford Natural Language Inference (SNLI) corpus, a large-scale dataset of roughly 570,000 human-written sentence pairs labeled as entailment, contradiction, or neutral, designed to support research in natural language understanding.