How to fine-tune: Focus on effective datasets

Originally published in ai.meta.com/blog, August 7, 2024.

This is the third blog post in a series about adapting open source large language models (LLMs). In this post, we explore some rules of thumb for curating a good training dataset.

In Part 1, we took a look at prevalent approaches for adapting language models to domain data.
In Part 2, we discussed how to determine if fine-tuning is the right approach for your use case.

Introduction
Fine-tuning LLMs is a mix of art and science, with best practices in the field still emerging. In this blog post, we’ll highlight the design variables for fine-tuning and give directional guidance on the best practices we’ve seen so far for fine-tuning models under resource constraints. We recommend using the information below as a starting point for strategizing your fine-tuning experiments.

Full fine-tuning vs. parameter-efficient fine-tuning (PEFT)
Both full fine-tuning and PEFT have shown improvements in downstream performance when applied to new domains, in both academic and practical settings. Choosing between them boils down to the compute available (in GPU hours and GPU memory), performance on tasks other than the target downstream task (the learning-forgetting tradeoff), and human annotation costs.
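To make the compute side of this tradeoff concrete, here is a minimal sketch of setting up LoRA, a common PEFT method, with the Hugging Face transformers and peft libraries. The base model name, target modules, and hyperparameters are illustrative assumptions, not recommendations from this post.

```python
# Minimal LoRA (PEFT) sketch using Hugging Face transformers + peft.
# Model name, target modules, and hyperparameters are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "meta-llama/Meta-Llama-3-8B"  # assumed; any causal LM works

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# LoRA freezes the base weights and trains small low-rank adapter
# matrices instead, sharply cutting GPU memory requirements.
lora_config = LoraConfig(
    r=8,                  # adapter rank
    lora_alpha=16,        # adapter scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections (assumed)
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```

Full fine-tuning would skip the adapter step and update every weight, which generally demands far more GPU memory and is more exposed to the forgetting side of the learning-forgetting tradeoff.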

To continue reading this article, see the original post at ai.meta.com/blog.
