Machine Learning Times

OpenAI’s Attempts to Watermark AI Text Hit Limits

Originally published in TechCrunch, Dec 22, 2022.

It’s proving tough to rein in systems like ChatGPT.

Did a human write that, or ChatGPT? It can be hard to tell — perhaps too hard, its creator OpenAI thinks, which is why it is working on a way to “watermark” AI-generated content.

In a lecture at the University of Texas at Austin, computer science professor Scott Aaronson, currently a guest researcher at OpenAI, revealed that OpenAI is developing a tool for “statistically watermarking the outputs of a text [AI system].” Whenever a system — say, ChatGPT — generates text, the tool would embed an “unnoticeable secret signal” indicating where the text came from.
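
Aaronson has sketched the idea publicly: a cryptographic pseudorandom function, keyed with a secret only the provider holds, scores each candidate token given the recent context, and sampling is nudged toward high-scoring tokens in a way that preserves the model's output distribution. The toy Python below is a minimal sketch of that kind of scheme, not OpenAI's implementation; the HMAC construction, the SECRET_KEY, and the three-token context window are illustrative assumptions.

```python
import hashlib
import hmac
import math

SECRET_KEY = b"hypothetical-key"  # assumption: held privately by the provider


def prf_score(key: bytes, context: list[str], token: str) -> float:
    """Keyed pseudorandom score in (0, 1) for a candidate token in context."""
    msg = " ".join(context).encode() + b"\x00" + token.encode()
    digest = hmac.new(key, msg, hashlib.sha256).digest()
    return (int.from_bytes(digest[:8], "big") + 1) / (2**64 + 1)


def watermarked_sample(probs: dict[str, float], context: list[str],
                       key: bytes = SECRET_KEY) -> str:
    """Pick the token maximizing r ** (1 / p) over candidates.

    This selection rule still emits each token with its model probability
    p, but the choice is now a deterministic function of the secret key,
    so a detector holding the key can recognize the output.
    """
    return max((t for t, p in probs.items() if p > 0),
               key=lambda t: prf_score(key, context, t) ** (1.0 / probs[t]))


def detect(tokens: list[str], key: bytes = SECRET_KEY, window: int = 3) -> float:
    """Average -ln(1 - r) over the emitted tokens.

    For text written without the key, r is effectively uniform and the
    average hovers near 1.0; watermarked text systematically lands on
    high-r tokens and scores higher.
    """
    total = 0.0
    for i, token in enumerate(tokens):
        context = tokens[max(0, i - window):i]
        total += -math.log(1.0 - prf_score(key, context, token))
    return total / max(1, len(tokens))


# Toy demo: a fake "model" that always proposes the same distribution.
probs = {"the": 0.5, "a": 0.3, "cats": 0.2}
tokens: list[str] = []
for _ in range(200):
    tokens.append(watermarked_sample(probs, tokens[-3:]))  # window must match detect()
print(detect(tokens))  # noticeably above the ~1.0 of unwatermarked text
```

The "unnoticeable secret signal" in this sketch is purely statistical: no single token looks unusual, but across enough text the detector's per-token score rises measurably above the baseline expected of human writing, and only someone holding the key can compute it.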

OpenAI engineer Hendrik Kirchner built a working prototype, Aaronson says, and the hope is to build it into future OpenAI-developed systems.

“We want it to be much harder to take [an AI system’s] output and pass it off as if it came from a human,” Aaronson said in his remarks. “This could be helpful for preventing academic plagiarism, obviously, but also, for example, mass generation of propaganda — you know, spamming every blog with seemingly on-topic comments supporting Russia’s invasion of Ukraine without even a building full of trolls in Moscow. Or impersonating someone’s writing style in order to incriminate them.”

To continue reading this article, see the original publication at TechCrunch.
