Machine Learning Times
Pathways Language Model (PaLM): Scaling to 540 Billion Parameters for Breakthrough Performance

 
Originally published in Google AI Blog, April 4, 2022.

In recent years, large neural networks trained for language understanding and generation have achieved impressive results across a wide range of tasks. GPT-3 first showed that large language models (LLMs) can be used for few-shot learning and can achieve impressive results without large-scale task-specific data collection or model parameter updating. More recent LLMs, such as GLaM, LaMDA, Gopher, and Megatron-Turing NLG, achieved state-of-the-art few-shot results on many tasks by scaling model size, using sparsely activated modules, and training on larger datasets from more diverse sources. Yet much work remains in understanding the capabilities that emerge with few-shot learning as we push the limits of model scale.
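
As a rough illustration of what few-shot learning without parameter updates looks like in practice (this sketch is our own, not from the original post), the model is given a handful of worked examples directly in its input and is asked to continue the pattern for a new case; no gradients are computed and no weights change. The complete() function below is a hypothetical placeholder for any LLM text-completion endpoint.

# Minimal sketch of few-shot prompting: the "training" examples live in the
# prompt itself, and the model's weights are never updated.
few_shot_prompt = """\
Translate English to French.

English: cheese
French: fromage

English: good morning
French: bonjour

English: thank you
French:"""

def complete(prompt: str) -> str:
    """Hypothetical stand-in for an LLM text-completion call."""
    raise NotImplementedError("Plug in your model's completion API here.")

# print(complete(few_shot_prompt))  # expected continuation: " merci"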

Last year Google Research announced our vision for Pathways, a single model that could generalize across domains and tasks while being highly efficient. An important milestone toward realizing this vision was to develop the new Pathways system to orchestrate distributed computation for accelerators. In “PaLM: Scaling Language Modeling with Pathways”, we introduce the Pathways Language Model (PaLM), a 540-billion parameter, dense decoder-only Transformer model trained with the Pathways system, which enabled us to efficiently train a single model across multiple TPU v4 Pods. We evaluated PaLM on hundreds of language understanding and generation tasks, and found that it achieves state-of-the-art few-shot performance across most tasks, by significant margins in many cases.
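
To give a sense of how a parameter count in the hundreds of billions arises for a dense decoder-only Transformer, here is a back-of-the-envelope estimate. The layer count, model width, and vocabulary size below are hypothetical round numbers chosen for illustration only, not PaLM's published configuration.

# Back-of-the-envelope parameter count for a dense decoder-only Transformer.
# All hyperparameters here are hypothetical round numbers, not PaLM's config.
def decoder_only_param_count(n_layers: int, d_model: int, vocab_size: int,
                             ff_mult: int = 4) -> int:
    attn = 4 * d_model * d_model             # Q, K, V, and output projections
    mlp = 2 * ff_mult * d_model * d_model    # two feed-forward matrices
    embed = vocab_size * d_model             # token embeddings (often tied with the output layer)
    return n_layers * (attn + mlp) + embed

total = decoder_only_param_count(n_layers=100, d_model=20_000, vocab_size=256_000)
print(f"~{total / 1e9:.0f}B parameters")     # ~485B with these illustrative numbers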

To continue reading this article, click here.
