Nvidia improves Meta’s Llama model with new training approach

Originally published on the-decoder.com, Oct 18, 2024.

Nvidia has introduced a new large language model that outperforms others in alignment benchmarks. The company achieved this through a special training procedure combining evaluation and preference models.
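
As a rough illustration of what "combining evaluation and preference models" can mean in practice, the sketch below mixes a pairwise Bradley-Terry-style preference loss with a regression loss on absolute ratings when training a reward model. This is a minimal sketch under those assumptions, not Nvidia's actual implementation; the names (RewardHead, alpha, the rating targets) are hypothetical.

```python
# Minimal, illustrative sketch (not Nvidia's code) of combining an
# "evaluation" signal (scalar ratings) with a "preference" signal
# (pairwise comparisons) in one reward-model training objective.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardHead(nn.Module):
    """Maps a pooled LLM hidden state to a single scalar reward."""
    def __init__(self, hidden_size: int):
        super().__init__()
        self.linear = nn.Linear(hidden_size, 1)

    def forward(self, pooled_hidden: torch.Tensor) -> torch.Tensor:
        return self.linear(pooled_hidden).squeeze(-1)  # shape: (batch,)

def combined_reward_loss(
    reward_chosen: torch.Tensor,    # reward for preferred responses
    reward_rejected: torch.Tensor,  # reward for rejected responses
    rating_pred: torch.Tensor,      # predicted scalar rating (e.g. helpfulness)
    rating_target: torch.Tensor,    # human rating on a 1-5 scale
    alpha: float = 0.5,             # hypothetical mixing weight
) -> torch.Tensor:
    # Bradley-Terry-style preference loss: push the chosen response's
    # reward above the rejected one's.
    preference_loss = -F.logsigmoid(reward_chosen - reward_rejected).mean()
    # Regression ("evaluation") loss: match the annotators' absolute ratings.
    rating_loss = F.mse_loss(rating_pred, rating_target)
    return alpha * preference_loss + (1 - alpha) * rating_loss

# Tiny usage example with random tensors standing in for model outputs.
if __name__ == "__main__":
    batch, hidden = 4, 16
    head = RewardHead(hidden)
    chosen = head(torch.randn(batch, hidden))
    rejected = head(torch.randn(batch, hidden))
    ratings_pred = head(torch.randn(batch, hidden))
    ratings_true = torch.tensor([4.0, 3.0, 5.0, 2.0])
    print(combined_reward_loss(chosen, rejected, ratings_pred, ratings_true))
```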

The new model, called Llama-3.1-Nemotron-70B-Instruct, is based on Meta’s open-source Llama 3.1 model. Nvidia optimized it to provide helpful answers to user queries by combining different training methods.

However, the results only show that the answers align better with human preferences, not that the content is necessarily more accurate. In fact, the Nemotron variant performs slightly worse than the base model on the MMLU Pro benchmark, which tests factual knowledge.

Nvidia created two new datasets for training: HelpSteer2 and HelpSteer2-Preference. HelpSteer2 contains over 20,000 prompt-response pairs. Multiple annotators rated each response on a 1-5 scale for criteria like helpfulness, correctness, and coherence. HelpSteer2-Preference adds comparisons between two answers to the same prompt. Annotators indicated which answer they preferred and how strong their preference was.
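
The two datasets differ mainly in the shape of their annotations: absolute ratings in one case, pairwise comparisons in the other. The sketch below shows one plausible way to represent such records; the field names are simplified approximations of the descriptions above, not the exact published schema.

```python
# Illustrative record shapes for the two kinds of annotation described
# above; field names are assumptions, not the actual dataset schema.
from dataclasses import dataclass

@dataclass
class RatedResponse:
    """HelpSteer2-style record: one response scored on several 1-5 criteria."""
    prompt: str
    response: str
    helpfulness: int  # 1-5
    correctness: int  # 1-5
    coherence: int    # 1-5

@dataclass
class PreferencePair:
    """HelpSteer2-Preference-style record: two responses to the same prompt,
    plus which one annotators preferred and how strongly."""
    prompt: str
    response_a: str
    response_b: str
    preferred: str            # "a" or "b"
    preference_strength: int  # e.g. 1 (slight) to 3 (strong)

example = PreferencePair(
    prompt="Explain what a reward model does.",
    response_a="A reward model scores responses so an LLM can be tuned toward preferred answers.",
    response_b="It is a model.",
    preferred="a",
    preference_strength=3,
)
print(example)
```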

To continue reading this article, click here.
