Machine Learning Times
The Computational Limits of Deep Learning Are Closer Than You Think

 
Originally posted to DiscoverMagazine, July 24, 2020.

Deep learning eats so much power that even small advances will be unfeasible given the massive environmental damage they will wreak, say computer scientists.

Deep in the bowels of the Smithsonian National Museum of American History in Washington, D.C., sits a large metal cabinet the size of a walk-in wardrobe. The cabinet houses a remarkable computer: the front is covered in dials, switches and gauges, and the inside is filled with potentiometers controlled by small electric motors. Behind one of the cabinet doors is a 20-by-20 array of light-sensitive cells, a kind of artificial eye.

This is the Perceptron Mark I, a simplified electronic version of a biological neuron. It was designed in the late 1950s by the American psychologist Frank Rosenblatt at Cornell University, who taught it to recognize simple shapes such as triangles.
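For readers curious about what the Mark I actually computed, here is a minimal software sketch of the perceptron learning rule it implemented in hardware. This is an illustration under assumptions, not a reconstruction of Rosenblatt's machine: a toy 2-D dataset stands in for the 20-by-20 photocell grid, and names such as train_perceptron are invented for this example.

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=20):
    """Learn weights w and bias b so that sign(w.x + b) matches labels y (+1/-1).

    In the Mark I, the weights lived in motor-driven potentiometers and the
    inputs came from photocells; here both are plain NumPy arrays.
    """
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            # Update only when the current weights misclassify this example.
            if yi * (np.dot(w, xi) + b) <= 0:
                w += lr * yi * xi  # nudge the weights toward the example
                b += lr * yi
    return w, b

# Toy linearly separable data: label +1 when x1 > x2, else -1.
X = np.array([[2.0, 1.0], [3.0, 1.5], [1.0, 2.0], [0.5, 3.0]])
y = np.array([1, 1, -1, -1])

w, b = train_perceptron(X, y)
print("weights:", w, "bias:", b)
print("predictions:", np.sign(X @ w + b))
```

On data like this, which a single line can separate, the rule converges after a few passes; shapes that are not linearly separable are exactly where the original perceptron ran out of road.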

Rosenblatt’s work is now widely recognized as the foundation of modern artificial intelligence, but at the time it was controversial. Despite this early success, researchers were unable to build on it, not least because more complex pattern recognition demanded vastly more computational power than was then available. That insatiable appetite for computation stalled further study of artificial neurons and the networks they form.

To continue reading this article, click here.

 
