Machine Learning Times
Productizing Large Language Models

Originally posted on Replit.com, Sept 21, 2022. 

Large Language Models (LLMs) are known for their near-magical ability to learn from very few examples — as few as zero — and produce language wonders. LLMs can chat, write poetry, write code, and even do basic arithmetic. However, the same properties that make LLMs magical also make them challenging from an engineering perspective.
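To make the "few examples" idea concrete, here is a minimal sketch of few-shot prompting: the task is never programmed explicitly; instead, a handful of worked examples are placed directly in the prompt and the model is asked to continue the pattern. The sentiment task and example strings below are illustrative, not from the post.

```python
# Few-shot prompting sketch: demonstrate a task inline, then ask the
# model to complete the next instance. (Illustrative task/examples.)
examples = [
    ("great movie, would watch again!", "positive"),
    ("terrible plot and wooden acting.", "negative"),
]
query = "loved every minute of it."

# Build the prompt: each example is a Review/Sentiment pair, and the
# final line ends at "Sentiment:" so the model fills in the label.
prompt = "\n".join(f"Review: {r}\nSentiment: {s}" for r, s in examples)
prompt += f"\nReview: {query}\nSentiment:"

print(prompt)
```

With zero examples (an empty `examples` list plus a task description), the same structure becomes a zero-shot prompt.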

At Replit we have deployed transformer-based language models of all sizes: ~100m parameter models for search and spam, 1-10B models for a code autocomplete product we call GhostWriter, and 100B+ models for features that require a higher reasoning ability. In this post we’ll talk about what we’ve learned about building and hosting large language models.

Nonsense

Any sufficiently advanced bullshit is indistinguishable from intelligence, or so the LLM thought. LLMs are highly suggestible — in fact, the primary way to interact with an LLM is via “prompting.” You give the model a string of text and it generates a response, usually in text form, although some models can also generate audio or even images. The problem is that if you prompt the LLM with nonsense, it will generate nonsense in return. Garbage in, garbage out. LLMs also tend to get stuck in loops, repeating the same thing over and over, since they have a limited ability to handle novel scenarios that were not present during training.
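One practical guard against the looping failure mode described above is to check generated text for degenerate trailing repetition before returning it to the user. The following is a minimal sketch of such a check, not Replit's actual implementation; the function name, window size, and repeat threshold are all assumptions.

```python
def repeats_ngram(text: str, n: int = 4, times: int = 3) -> bool:
    """Return True if the last n-word phrase repeats `times` times
    back-to-back at the end of `text` — a crude signal that the
    model has fallen into a generation loop."""
    words = text.split()
    if len(words) < n * times:
        return False
    tail = words[-n:]
    # Compare each preceding n-word window against the tail.
    for k in range(2, times + 1):
        if words[-n * k : -n * (k - 1)] != tail:
            return False
    return True
```

In a serving path, a positive check might trigger truncation at the first repeat, a retry with different sampling parameters, or a fallback response.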

To continue reading this article, click here.
