Deep Neural Networks (DNNs) have become remarkably successful in the domain of artificial intelligence. They have begun to directly influence our lives through image recognition, automated machine translation, precision medicine and many other solutions. Furthermore, there are many parallels between these modern artificial algorithms and biological brains: the two systems resemble each other in their function – for example, both can solve surprisingly complex tasks – and in their anatomical structure – for example, both contain many hierarchically organized neurons.
Given these apparent similarities, many questions arise: How similar are human and machine vision really? Can we understand human vision by studying machine vision? Or the other way round: Can we gain insights from human vision to improve machine vision? All these questions motivate the comparison of these two intriguing systems.
While comparison studies can advance our understanding, they are not straightforward to conduct. Differences between the two systems can complicate the endeavor and raise several challenges. It is therefore important to design comparisons between DNNs and humans carefully.
In our recently published preprint “The Notorious Difficulty of Comparing Human and Machine Perception”, we highlight three of the most common pitfalls that can easily lead to fragile conclusions:
Pitfall 1: Humans are often too quick to conclude that machines learned human-like concepts
Let’s start with a small experiment you can try yourself: Does the following image contain a closed contour?