Machine Learning Times

Machine, Learning, 1951

By: Jef Akst

Originally published in The Scientist, May 1, 2019

For today’s leading deep learning methods and technology, attend the conference and training workshops at Deep Learning World, June 16-19, 2019, in Las Vegas. 

Marvin Minsky engineered the first known artificial neural network, in which “rats,” represented as lights, learned to solve a maze.

As an undergraduate at Harvard in the late 1940s and in his first year of grad school at Princeton in 1950, Marvin Minsky pondered how to build a machine that could learn. At both universities, Minsky studied mathematics, but he was curious about the human mind—what he saw as the most profound mystery in science. He wanted to better understand intelligence by recreating it.

In the summer of 1951, he got his chance. George Miller, an up-and-coming psychologist at Harvard, secured funding for Minsky to return to Boston for the summer and build his device. Minsky enlisted the help of fellow Princeton graduate student Dean Edmonds, and the duo crafted what would become known as the first artificial neural network. Called SNARC, for stochastic neural-analog reinforcement calculator, the network included 40 interconnected artificial neurons, each with a short-term and a long-term memory of sorts. The short-term memory came in the form of a capacitor, a component that stores electrical charge, which could remember for a few seconds whether the neuron had recently relayed a signal. Long-term memory was handled by a potentiometer, essentially a volume knob, which increased a neuron’s probability of relaying a signal if the neuron had just fired when the system was “rewarded,” either manually or through an automated electrical signal.

Minsky and Edmonds tested the model’s ability to learn a maze. The details of how the young researchers tracked the output are unclear, but one theory is that they observed, through an arrangement of lights, how a signal moved through the network from a random starting place in the neural network to a predetermined finish line. The duo referred to the signal as “rats” running through a maze of tunnels. When the rats followed a path that led toward the finish line, the system adjusted to increase the likelihood of that firing pattern happening again. Sure enough, the rats began making fewer wrong turns. Multiple rats could run at once to increase the speed at which the system learned.
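The reinforcement scheme described above can be sketched in software. The toy simulation below is an illustrative guess at the idea, not a reconstruction of SNARC's circuitry: each cell of a one-dimensional "maze" carries a potentiometer-like weight giving the probability that the "rat" signal is relayed toward the finish line, and every weight that contributed a forward step on a successful run is turned up. The maze size, reward step, and episode count are all invented for the example.

```python
import random

random.seed(0)  # reproducible wandering

N_CELLS = 10               # linear "maze"; the goal is the last cell
REWARD_STEP = 0.1          # how far a reward turns up the "volume knob"
weights = [0.5] * N_CELLS  # per-cell firing probability (long-term memory)

def run_rat():
    """Move a signal ("rat") from cell 0 to the goal; return the path taken."""
    pos, path = 0, [0]
    while pos < N_CELLS - 1:
        if random.random() < weights[pos]:
            pos += 1       # neuron fires: relay the signal toward the goal
        elif pos > 0:
            pos -= 1       # wrong turn: wander backward
        path.append(pos)
    return path

def reward(path):
    """Reinforce every cell that fired on a forward move during this run."""
    for a, b in zip(path, path[1:]):
        if b == a + 1:
            weights[a] = min(1.0, weights[a] + REWARD_STEP)

lengths = []
for episode in range(30):
    path = run_rat()
    reward(path)           # every run reaches the goal, so every run is rewarded
    lengths.append(len(path))

print("first 5 episode lengths:", lengths[:5])
print("last 5 episode lengths:", lengths[-5:])
```

As the weights saturate, forward moves come to dominate and the rat stops making wrong turns, mirroring the behavior Minsky and Edmonds observed in their lights. Real SNARC neurons also decayed their short-term memory via a capacitor, which this sketch omits for brevity.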

