Switch Transformer Architecture
A Switch Transformer Architecture is an MoE transformer network architecture that ...
- Context:
- It can (typically) select different parameters for processing different inputs, routing each token to a single expert (see the routing sketch after this list).
- It can facilitate scaling Deep Learning Networks to trillions of parameters.
- It can aim to improve efficiency by distilling sparse pre-trained and fine-tuned models into smaller, dense models.
- It can, through such distillation, reduce model size by up to 99% while preserving around 30% of the quality gains of the larger, sparse model.
- It can employ selective precision training, such as computing the router in float32 while the rest of the model runs in bfloat16, enhancing both efficiency and training stability.
- ...
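The routing behavior described in the Context above can be illustrated with a short sketch. The following is a minimal NumPy rendering of top-1 ("switch") routing with an expert capacity limit, not the reference Mesh TensorFlow implementation from Fedus et al. (2021); the function name switch_route, the shapes, and the pass-through handling of over-capacity tokens are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def switch_route(tokens, router_weights, experts, capacity_factor=1.25):
    """Top-1 ("switch") routing: each token is sent to exactly one expert.

    tokens:         [num_tokens, d_model]
    router_weights: [d_model, num_experts]
    experts:        list of callables mapping [k, d_model] -> [k, d_model]
    """
    num_tokens, _ = tokens.shape
    num_experts = len(experts)

    # The router produces a probability distribution over experts for each token.
    logits = tokens @ router_weights                   # [num_tokens, num_experts]
    probs = softmax(logits, axis=-1)
    expert_index = probs.argmax(axis=-1)               # chosen (top-1) expert per token
    gate = probs[np.arange(num_tokens), expert_index]  # router probability of that expert

    # Expert capacity: tokens routed beyond an expert's capacity are dropped here,
    # i.e. passed through unchanged (in the full model they still flow via the residual).
    capacity = int(capacity_factor * num_tokens / num_experts)

    outputs = tokens.copy()
    for e in range(num_experts):
        routed = np.where(expert_index == e)[0][:capacity]
        if routed.size:
            # Scaling by the gate value keeps the routing decision differentiable.
            outputs[routed] = gate[routed, None] * experts[e](tokens[routed])
    return outputs

# Illustrative usage with random weights and linear "experts".
rng = np.random.default_rng(0)
d_model, num_experts, num_tokens = 8, 4, 16
tokens = rng.standard_normal((num_tokens, d_model))
router_w = rng.standard_normal((d_model, num_experts))
experts = [lambda x, w=rng.standard_normal((d_model, d_model)): x @ w
           for _ in range(num_experts)]
print(switch_route(tokens, router_w, experts).shape)  # (16, 8)
```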
- Example(s):
- Google's Switch-C Transformer with 1.6 trillion parameters.
- Beijing Academy of Artificial Intelligence's WuDao 2.0 with 1.75 trillion parameters.
- ...
- Counter-Example(s):
- Traditional Transformer models that do not utilize the Mixture of Experts approach.
- Smaller-scale AI models with a fixed set of parameters for all inputs.
- See: Mixture of Experts, Language Model, Sparsely-Gated MoE.
References
2021
- (Fedus et al., 2021) ⇒ William Fedus, Barret Zoph, and Noam Shazeer. (2021). "Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity." In: The Journal of Machine Learning Research, 23(1). [DOI:10.5555/3586589.3586709]
- QUOTE: In deep learning, models typically reuse the same parameters for all Inputs. Mixture of Experts (MoE) defies this and instead selects different parameters for each Incoming Example. The result is a Sparsely-Activated Model -- with outrageous numbers of Parameters -- but a constant Computational Cost. However, despite several notable successes of MoE, widespread adoption has been hindered by complexity, Communication Costs, and Training Instability -- we address these with the Switch Transformer. We simplify the MoE Routing Algorithm and design intuitive improved models with reduced Communication and Computational Costs. Our proposed training techniques help wrangle the instabilities and we show large Sparse Models may be trained, for the first time, with lower precision formats. We design models based off T5-Base and T5-Large to obtain up to 7x increases in Pre-Training Speed with the same Computational Resources. These improvements extend into Multilingual Settings where we measure gains over the mT5-Base version across all 101 languages. Finally, we advance the current scale of Language Models by pre-training up to trillion parameter models on the “Colossal Clean Crawled Corpus” and achieve a 4x speedup over the T5-XXL Model.
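The "MoE Routing Algorithm" simplified in the abstract above is paired in the paper with a differentiable auxiliary load-balancing loss, alpha · N · Σ_i f_i · P_i, that encourages a uniform allocation of tokens across the N experts. The sketch below is a NumPy transcription of that formula under assumed array inputs; the function name is illustrative, while the default alpha = 1e-2 follows the value the paper reports using.

```python
import numpy as np

def load_balancing_loss(router_probs, expert_index, alpha=1e-2):
    """Auxiliary load-balancing loss: alpha * N * sum_i f_i * P_i.

    router_probs: [num_tokens, num_experts] softmax output of the router
    expert_index: [num_tokens] index of the top-1 expert chosen per token
    """
    num_tokens, num_experts = router_probs.shape
    # f_i: fraction of tokens dispatched to expert i (from the hard routing decision).
    f = np.bincount(expert_index, minlength=num_experts) / num_tokens
    # P_i: fraction of router probability mass allocated to expert i.
    p = router_probs.mean(axis=0)
    # Minimized under uniform routing, where the value is exactly alpha.
    return alpha * num_experts * float(np.sum(f * p))
```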