Gabriel Synnaeve
Gabriel Synnaeve is a person.
- See: Bayesian Modeling, Inductive Logic Programming, Language Acquisition, Real-Time Strategy Games, Transformer Model.
References
2024
- Perplexity
- Gabriel Synnaeve is a research scientist on the Facebook AI Research (FAIR) team, which he joined as a postdoctoral researcher in 2015[1]. He has made significant contributions to artificial intelligence, particularly in language acquisition, real-time strategy game AI, and audio generation.
- Background and Education
- Synnaeve received his PhD from the University of Grenoble in 2012 for work on Bayesian modeling applied to real-time strategy game AI[1].
- Prior to joining Facebook, he was a postdoctoral fellow at École Normale Supérieure in Paris, working on reverse-engineering the acquisition of language in babies[1].
- In 2009, he worked on inductive logic programming applied to systems biology at the National Institute of Informatics in Tokyo[1].
- Contributions and Research
- Synnaeve programmed a bot that placed 4th in the AAAI AIIDE 2012 StarCraft AI competition[1].
- He has co-authored several research papers, including "Masked Audio Generation using a Single Non-Autoregressive Transformer" (2024)[2] and "Getting the most out of your tokenizer for pre-training and domain adaptation" (2024)[3].
- The "Masked Audio Generation" paper introduces MAGNeT, a masked generative sequence modeling method for tasks like text-to-music and text-to-audio generation, which is significantly faster than autoregressive baselines[2].
- The "Tokenizer" paper explores the impact of tokenizer design on the performance of large language models for code generation tasks, providing recommendations for tokenizer hyper-parameters selection[3].
- Synnaeve is an active researcher in the field of AI, with a focus on areas such as language understanding, audio generation, and model optimization[4][5]. His work has contributed to advancing the state-of-the-art in these domains.
- Citations:
[1] https://ai.meta.com/people/1447559096135307/gabriel-synnaeve/
[2] https://arxiv.org/abs/2401.04577
[3] https://arxiv.org/abs/2402.01035
[4] https://scholar.google.com/citations?hl=en&user=wN9rBkcAAAAJ
[5] https://twitter.com/syhw?lang=en
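The non-autoregressive decoding idea mentioned above can be illustrated with a toy sketch. The Python snippet below is a hypothetical, MaskGIT-style illustration of iterative masked decoding, not MAGNeT's actual model or hyper-parameters: `toy_model`, the cosine schedule, and the vocabulary and sequence sizes are all placeholder assumptions. The point it demonstrates is that all positions are filled in a handful of parallel refinement steps (here 4 model calls for 32 tokens), whereas an autoregressive baseline needs one model call per token.

```python
import math
import random

MASK = -1          # sentinel id for a still-masked position (toy choice)
VOCAB_SIZE = 16    # toy vocabulary size
SEQ_LEN = 32       # toy sequence length

def toy_model(tokens):
    """Placeholder for a masked transformer: returns one (token, confidence)
    guess per position. A real model would condition on the unmasked tokens."""
    return [(random.randrange(VOCAB_SIZE), random.random()) for _ in tokens]

def masked_decode(num_steps=4):
    """Fill every position in a few parallel refinement steps: at each step,
    predict all masked positions at once and commit only the most confident
    guesses (cosine schedule), leaving the rest for later steps."""
    tokens = [MASK] * SEQ_LEN
    for step in range(1, num_steps + 1):
        preds = toy_model(tokens)
        # Fraction of positions that should still be masked after this step.
        mask_ratio = math.cos(math.pi / 2 * step / num_steps)
        remain_masked = int(mask_ratio * SEQ_LEN)
        masked = [i for i in range(SEQ_LEN) if tokens[i] == MASK]
        masked.sort(key=lambda i: preds[i][1], reverse=True)  # most confident first
        for i in masked[: len(masked) - remain_masked]:
            tokens[i] = preds[i][0]
    return tokens  # num_steps model calls for SEQ_LEN tokens

def autoregressive_decode():
    """Baseline: one model call per generated token (SEQ_LEN calls in total)."""
    tokens = []
    for _ in range(SEQ_LEN):
        token, _ = toy_model(tokens + [MASK])[-1]
        tokens.append(token)
    return tokens

if __name__ == "__main__":
    print("masked, non-autoregressive:", masked_decode())
    print("autoregressive baseline:   ", autoregressive_decode())
```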
2024
- (Gloeckle et al., 2024) ⇒ Fabian Gloeckle, Badr Youbi Idrissi, Baptiste Rozière, David Lopez-Paz, and Gabriel Synnaeve. (2024). “Better & Faster Large Language Models via Multi-token Prediction.” doi:10.48550/arXiv.2404.19737
2023
- (Oquab et al., 2023) ⇒ Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, Mahmoud Assran, Nicolas Ballas, Wojciech Galuba, Russell Howes, Po-Yao Huang, Shang-Wen Li, Ishan Misra, Michael Rabbat, Vasu Sharma, Gabriel Synnaeve, Hu Xu, Hervé Jegou, Julien Mairal, Patrick Labatut, Armand Joulin, and Piotr Bojanowski. (2023). “DINOv2: Learning Robust Visual Features Without Supervision.” In: arXiv preprint arXiv:2304.07193.
- NOTE: It presents a method to enhance visual feature learning in the absence of labeled data.
2021
- (Touvron et al., 2021) ⇒ Hugo Touvron, Matthieu Cord, Alexandre Sablayrolles, Gabriel Synnaeve, and Hervé Jégou. (2021). “Going Deeper with Image Transformers.” In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 32-42.
- NOTE: It highlights advancements in transformer-based models for image processing, focusing on deeper network architectures.
2020
- (Carion et al., 2020) ⇒ Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. (2020). “End-to-End Object Detection with Transformers.” In: European Conference on Computer Vision, pp. 213-229, Springer International Publishing.
- NOTE: It demonstrates the effectiveness of transformers in object detection tasks, simplifying the detection pipeline.
2017
- (Lin, Gehring et al., 2017) ⇒ Zeming Lin, Jonas Gehring, Vasil Khalidov, and Gabriel Synnaeve. (2017). “STARDATA: A StarCraft AI Research Dataset.” In: Proceedings of AIIDE-2017.