Nvidia improves Meta’s Llama model with new training approach

Originally published in the-decoder.com, Oct 18, 2024.

Nvidia has introduced a new large language model that outperforms competing models on alignment benchmarks. The company achieved this with a training procedure that combines evaluation and preference models.

The new model, called Llama-3.1-Nemotron-70B-Instruct, is based on Meta’s open-source Llama 3.1 model. Nvidia optimized it to provide helpful answers to user queries by combining different training methods.
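
As a rough illustration of how an evaluation (rating) objective and a preference objective can be trained jointly in a reward model, here is a minimal PyTorch-style sketch. The function name, the additive weighting, and the specific choice of a Bradley-Terry preference loss plus a mean-squared-error rating loss are assumptions for illustration; the article does not spell out Nvidia's exact formulation.

    import torch
    import torch.nn.functional as F

    def combined_reward_loss(score_chosen: torch.Tensor,
                             score_rejected: torch.Tensor,
                             rating_pred: torch.Tensor,
                             rating_true: torch.Tensor,
                             alpha: float = 1.0) -> torch.Tensor:
        """Hypothetical combination of a preference objective and a
        rating (evaluation) objective for a reward model.

        score_chosen / score_rejected: scalar reward scores for the
            preferred and rejected responses to the same prompt.
        rating_pred / rating_true: predicted vs. annotated attribute
            ratings (e.g. helpfulness, correctness, coherence, 1-5).
        """
        # Bradley-Terry-style loss: push the chosen response's score
        # above the rejected response's score.
        preference_loss = -F.logsigmoid(score_chosen - score_rejected).mean()
        # Regression loss: match the absolute annotator ratings.
        rating_loss = F.mse_loss(rating_pred, rating_true)
        # Additive weighting is an assumption, not Nvidia's published recipe.
        return preference_loss + alpha * rating_loss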

However, the results only show that the answers align better with human preferences, not that the content is necessarily more accurate. In fact, the Nemotron variant performs slightly worse than the base model on the MMLU Pro benchmark, which tests factual knowledge.

Nvidia created two new datasets for training: HelpSteer2 and HelpSteer2-Preference. HelpSteer2 contains over 20,000 prompt-response pairs. Multiple annotators rated each response on a 1-5 scale for criteria like helpfulness, correctness, and coherence. HelpSteer2-Preference adds comparisons between two answers to the same prompt. Annotators indicated which answer they preferred and how strong their preference was.
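
To make the two dataset formats concrete, here is a minimal sketch of what one record from each might look like. The class and field names are illustrative assumptions based on the description above, not the datasets' actual schema.

    from dataclasses import dataclass

    @dataclass
    class HelpSteer2Record:
        """One HelpSteer2-style record: a prompt-response pair with
        1-5 annotator ratings per attribute (names are illustrative)."""
        prompt: str
        response: str
        helpfulness: int  # 1-5
        correctness: int  # 1-5
        coherence: int    # 1-5

    @dataclass
    class HelpSteer2PreferenceRecord:
        """One HelpSteer2-Preference-style record: two responses to the
        same prompt plus the annotator's choice and its strength."""
        prompt: str
        response_a: str
        response_b: str
        preferred: str            # "a" or "b"
        preference_strength: int  # scale illustrative, e.g. 1 (slight) to 3 (strong)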

To continue reading this article, see the original at the-decoder.com.
