Machine Learning Times

Selecting Mathematical Models With Greatest Predictive Power: Finding Occam’s Razor in an Era of Information Overload


How can the actions and reactions of proteins so small or stars so distant they are invisible to the human eye be accurately predicted? How can blurry images be brought into focus and reconstructed?

A new study led by physicist Steve Pressé, Ph.D., of the School of Science at Indiana University-Purdue University Indianapolis, shows that there may be a preferred strategy for selecting mathematical models with the greatest predictive power. Picking the best model is about sticking to the simplest line of reasoning, according to Pressé. His paper explaining his theory is published online this month in Physical Review Letters.

“Building mathematical models from observation is challenging, especially when there is, as is quite common, a ton of noisy data available,” said Pressé, an assistant professor of physics who specializes in statistical physics. “There are many models out there that may fit the data we do have. How do you pick the most effective model to ensure accurate predictions? Our study guides us towards a specific mathematical statement of Occam’s razor.”

Occam’s razor is an oft-cited 14th-century adage that “plurality should not be posited without necessity,” sometimes translated as “entities should not be multiplied unnecessarily.” Today it is interpreted to mean that, all things being equal, the simpler theory is more likely to be correct.

A principle for picking the simplest model to answer complex questions of science and nature, originally postulated in the 19th century by Austrian physicist Ludwig Boltzmann, has long been embraced by the physics community worldwide. Then, in 1988, an alternative strategy for picking models was developed by Brazilian physicist Constantino Tsallis. This strategy has been widely used in business (such as in option pricing and in modeling stock swings) as well as in scientific applications (such as evaluating population distributions). The new study finds that Boltzmann’s strategy, not the 20th-century alternative, ensures that the models picked are the simplest and most consistent with data.
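The two strategies correspond to maximizing different entropy functionals: the Boltzmann–Gibbs–Shannon entropy, S = −Σ pᵢ ln pᵢ, versus the Tsallis nonadditive entropy, S_q = (1 − Σ pᵢ^q)/(q − 1). As a rough illustration (these are the standard textbook formulas, not code from the paper itself), the sketch below shows that the Tsallis entropy recovers the Boltzmann–Gibbs–Shannon value in the limit q → 1, while differing from it for any other q:

```python
import math

def shannon_entropy(p):
    """Boltzmann-Gibbs-Shannon entropy: S = -sum(p_i * ln p_i)."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def tsallis_entropy(p, q):
    """Tsallis nonadditive entropy: S_q = (1 - sum(p_i ** q)) / (q - 1)."""
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

p = [0.5, 0.3, 0.2]

# As q approaches 1, the Tsallis value converges to the Shannon value;
# away from q = 1 the two functionals rank distributions differently.
for q in (2.0, 1.5, 1.01, 1.0001):
    print(f"q = {q}: S_q = {tsallis_entropy(p, q):.5f}")
print(f"Shannon: S = {shannon_entropy(p):.5f}")
```

The paper’s argument concerns which of these functionals, when maximized subject to the data’s constraints, yields models free of unwarranted biases; the snippet only illustrates how the two quantities relate.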

“For almost three decades in physics we have had two main competing strategies for picking the best model. We needed some resolution,” Pressé said. “Even as simple an experiment as flipping a coin or as complex an enterprise as understanding functions of proteins or groups of proteins in human disease need a model to describe them. Simply put, we need one Occam’s razor, not two, when selecting models.”

In addition to Pressé, co-authors of “Nonadditive entropies yield probability distributions with biases not warranted by the data” are Kingshuk Ghosh of the University of Denver, Julian Lee of Soongsil University, and Ken A. Dill of Stony Brook University.

Pressé is also the first author of a companion paper, “Principles of maximum entropy and maximum caliber in statistical physics” published in the July-September issue of the Reviews of Modern Physics.

The above story is based on materials provided by the Indiana University-Purdue University Indianapolis School of Science. Originally published at ScienceDaily.
