Machine Learning Times
The Computational Limits of Deep Learning Are Closer Than You Think

Originally posted to DiscoverMagazine, July 24, 2020.

Deep learning eats so much power that even small advances will be unfeasible, given the massive environmental damage they will wreak, say computer scientists.

Deep in the bowels of the Smithsonian National Museum of American History in Washington, D.C., sits a large metal cabinet the size of a walk-in wardrobe. The cabinet houses a remarkable computer — the front is covered in dials, switches and gauges, and inside, it is filled with potentiometers controlled by small electric motors. Behind one of the cabinet doors is a 20-by-20 array of light-sensitive cells, a kind of artificial eye.

This is the Perceptron Mark I, a simplified electronic version of a biological neuron. It was designed in the late 1950s by the American psychologist Frank Rosenblatt at Cornell University, who taught it to recognize simple shapes such as triangles.
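In software terms, the learning rule behind the Perceptron Mark I is simple enough to reproduce in a few lines. Below is a minimal sketch in Python (the toy "bright patch on the left vs. the right" task and all names are illustrative assumptions, not Rosenblatt's actual training setup): a 400-element input vector stands in for the machine's 20-by-20 grid of photocells, and whenever the single artificial neuron misclassifies an example, its weights are nudged toward or away from that input.

```python
import numpy as np

# Minimal perceptron sketch (illustrative; not Rosenblatt's actual setup).
# The input plays the role of the Mark I's 20x20 grid of light-sensitive
# cells, flattened to a 400-element vector; the output is one binary decision.

rng = np.random.default_rng(0)

def make_example():
    """Toy task: is the bright column on the left (0) or right (1) half?"""
    img = np.zeros((20, 20))
    side = rng.integers(2)                       # 0 = left, 1 = right
    col = rng.integers(10) + (10 if side else 0)
    img[:, col] = 1.0                            # one bright column
    return img.ravel(), side

w = np.zeros(400)   # one weight per photocell
b = 0.0             # bias term

# Classic perceptron learning rule: on a mistake, move the weights
# toward (or away from) the misclassified input.
for _ in range(5000):
    x, target = make_example()
    pred = int(w @ x + b > 0)
    if pred != target:
        step = target - pred                     # +1 or -1
        w += step * x
        b += step

# Quick check on fresh examples.
correct = sum(int(w @ x + b > 0) == t
              for x, t in (make_example() for _ in range(200)))
print(f"accuracy: {correct / 200:.2%}")
```

A single neuron of this kind can only learn patterns that are linearly separable; recognizing anything more complex requires layered networks of such units, and with them the voracious appetite for computation the article goes on to describe.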

Rosenblatt’s work is now widely recognized as the foundation of modern artificial intelligence, but at the time it was controversial. Despite its early success, researchers were unable to build on it, not least because more complex pattern recognition demanded vastly more computational power than was then available. This insatiable appetite for compute stalled further study of artificial neurons and the networks they form.

To continue reading this article, click here.
