Machine Learning Times
Large language models use a surprisingly simple mechanism to retrieve some stored knowledge


Originally published in MIT News, March 25, 2024

Researchers demonstrate a technique that can be used to probe a model to see what it knows about new subjects.

Large language models, such as those that power popular artificial intelligence chatbots like ChatGPT, are incredibly complex. Even though these models are being used as tools in many areas, such as customer support, code generation, and language translation, scientists still don’t fully grasp how they work.

The researchers found a surprising result: large language models (LLMs) often use a very simple linear function to recover and decode stored facts. Moreover, a model uses the same decoding function for similar types of facts. A linear function, an equation with no exponents, captures a straightforward, straight-line relationship between two variables.

The researchers showed that, by identifying linear functions for different facts, they can probe the model to see what it knows about new subjects, and where within the model that knowledge is stored.
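The idea of probing with a linear function can be sketched in a few lines of NumPy. This is an illustrative toy, not the researchers' code: it assumes a fact is encoded so that an "attribute" vector can be recovered from a "subject" hidden state by an affine map, and all names, dimensions, and data here are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d_subject, d_attribute = 8, 4  # toy hidden-state sizes (illustrative)

# Pretend these are hidden states for subjects of one relation type,
# read out of some transformer layer.
subject_states = rng.normal(size=(100, d_subject))

# In this toy world the relation really is affine: attr = W @ s + b.
W_true = rng.normal(size=(d_attribute, d_subject))
b_true = rng.normal(size=d_attribute)
attribute_states = subject_states @ W_true.T + b_true

# "Probing": fit one affine map per relation by least squares ...
X = np.hstack([subject_states, np.ones((len(subject_states), 1))])
coef, *_ = np.linalg.lstsq(X, attribute_states, rcond=None)
W_hat, b_hat = coef[:-1].T, coef[-1]

# ... then apply it to a new, unseen subject's hidden state.
new_subject = rng.normal(size=d_subject)
predicted = W_hat @ new_subject + b_hat
print(np.allclose(predicted, W_true @ new_subject + b_true, atol=1e-6))
```

If the fitted map transfers to held-out subjects of the same relation, that is evidence the knowledge is stored in a linearly decodable form at that layer, which is the kind of test the study describes.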

To continue reading this article, click here.
