Karthik Narasimhan
Karthik Narasimhan is a researcher in natural language processing and reinforcement learning, known for work on generative pre-training and language agents.
References
2024
- (Shinn et al., 2024) ⇒ Noah Shinn, Federico Cassano, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. (2024). “Reflexion: Language Agents with Verbal Reinforcement Learning.” In: Advances in Neural Information Processing Systems 36.
2023
- (Yao et al., 2023) ⇒ Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, and Karthik Narasimhan. (2023). “Tree of Thoughts: Deliberate Problem Solving with Large Language Models.” In: Neural Information Processing Systems (NeurIPS).
- (Yao et al., 2023) ⇒ Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. (2023). “ReAct: Synergizing Reasoning and Acting in Language Models.” In: International Conference on Learning Representations (ICLR).
- (Jimenez et al., 2023) ⇒ Carlos E. Jimenez, John Yang, Alexander Wettig, Shunyu Yao, Kexin Pei, Ofir Press, and Karthik Narasimhan. (2023). “SWE-bench: Can Language Models Resolve Real-World GitHub Issues?” arXiv preprint arXiv:2310.06770.
2018
- (Radford et al., 2018) ⇒ Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. (2018). “Improving Language Understanding by Generative Pre-Training.” OpenAI Technical Report.
2016
- (Kulkarni et al., 2016) ⇒ Tejas D. Kulkarni, Karthik Narasimhan, Ardavan Saeedi, and Joshua B. Tenenbaum. (2016). “Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation.” In: Neural Information Processing Systems (NIPS).
2015
- (Narasimhan et al., 2015) ⇒ Karthik Narasimhan, Tejas Kulkarni, and Regina Barzilay. (2015). “Language Understanding for Text-based Games Using Deep Reinforcement Learning.” In: Empirical Methods in Natural Language Processing (EMNLP).