‘Deepfakes’ Ranked As Most Serious AI Crime Threat

Originally published by UCL, August 4, 2020.

Fake audio or video content has been ranked by experts as the most worrying use of artificial intelligence in terms of its potential applications for crime or terrorism, according to a new UCL report.

The study, published in Crime Science and funded by the Dawes Centre for Future Crime at UCL (and available as a policy briefing), identified 20 ways AI could be used to facilitate crime over the next 15 years. These were ranked in order of concern – based on the harm they could cause, the potential for criminal profit or gain, how easy they would be to carry out and how difficult they would be to stop.
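The article does not publish a scoring formula; the rankings came from expert judgement against the four stated criteria. Purely as an illustration of how such a multi-criteria ranking could be assembled, here is a minimal sketch in Python. The threat names echo those in the report, but the numeric ratings and the equal weighting are hypothetical assumptions, not data from the study.

```python
# Illustrative only: the UCL report does not disclose a scoring method.
# Hypothetical expert ratings (1-5) on the four criteria named in the study:
# harm caused, criminal profit, ease of execution, difficulty of defeat.
threats = {
    "Audio/video impersonation (deepfakes)": {"harm": 5, "profit": 4, "ease": 4, "hard_to_stop": 5},
    "Driverless vehicles as weapons":        {"harm": 5, "profit": 2, "ease": 3, "hard_to_stop": 4},
    "Tailored phishing (spear phishing)":    {"harm": 3, "profit": 4, "ease": 5, "hard_to_stop": 4},
    "AI-authored fake news":                 {"harm": 4, "profit": 2, "ease": 4, "hard_to_stop": 5},
}

def concern_score(ratings: dict) -> float:
    """Equal-weight average of the four criteria; the weighting is an assumption."""
    return sum(ratings.values()) / len(ratings)

# Rank from most to least concerning.
for name, ratings in sorted(threats.items(), key=lambda kv: concern_score(kv[1]), reverse=True):
    print(f"{concern_score(ratings):.2f}  {name}")
```

In the study itself, the ordering reflects deliberation among expert delegates rather than a fixed arithmetic combination, so a sketch like this should be read as one plausible way to formalize the criteria, not as the report's method.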

The authors said fake content would be difficult to detect and stop, and that it could serve a variety of aims – from discrediting a public figure to extracting funds by impersonating a couple's son or daughter in a video call. Such content, they said, could lead to widespread distrust of audio and visual evidence, which would itself be a societal harm.

Aside from fake content, five other AI-enabled crimes were judged to be of high concern. These were using driverless vehicles as weapons, helping to craft more tailored phishing messages (spear phishing), disrupting AI-controlled systems, harvesting online information for the purposes of large-scale blackmail, and AI-authored fake news.

To continue reading this article, click here.