The issue is clear: many of the most successful recent AI technologies revolve around deep learning, complex artificial neural networks with so many layers of so many neurons transforming so many variables that they behave like "black boxes" to us.
We can no longer comprehend the model; we don't know how or why a specific input produces a given outcome.
Is it scary?
In the film Dekalog 1 by Krzysztof Kieślowski – the first of ten short films inspired by the Ten Commandments, the first of which is "I am the Lord your God; you shall have no other gods before me" – Krzysztof lives alone with Paweł, his highly intelligent 12-year-old son, and introduces him to the world of personal computers.
Paweł and his father use the computer to calculate whether the ice on the lake is thick enough to hold the boy's weight for skating. After they enter the data, the computer indicates that the ice will hold.
The next day, Krzysztof hears firemen’s sirens going off and people rushing to the lake. Krzysztof remains calm at first and refuses to believe that the ice could have broken, since his calculations clearly indicated that this was not possible. After searching all around the neighbourhood he gets confirmation from one of Paweł’s friends that Paweł was skating at the time of the accident.
A lot of the "drawbacks" mentioned in the article seem to revolve around legal accountability — if you don't know exactly how your self-driving car works, or how a medical AI diagnoses that a stroke is imminent, legal problems are just one step away. Moreover, if you do not understand how a machine learning algorithm reaches a conclusion, you cannot exclude that some kind of bias has been implicitly introduced into the algorithm by the input data. Think of an algorithm that chooses the best candidate among all the resumes received, but prefers male candidates over female ones only because its training data was biased.
But users outside of these situations might not be bothered at all.
For example, a machine learning system can identify hand-written numbers after it has been trained on thousands of examples. It does so not because it has inferred a generic, recognisable rule ("number one is a vertical line") but by looking for complex patterns of darker and lighter pixels, expressed as matrices of numbers.
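To make the idea concrete, here is a minimal toy sketch (not a real neural network, and far simpler than an actual trained model): two digits are stored purely as matrices of dark and light pixels, and a new image is classified by raw pixel similarity to those patterns, with no hand-written rule such as "a one is a vertical line". The 5x3 pixel templates below are invented for illustration.

```python
# Toy digit "recognition" by pixel-pattern similarity.
# 1 = dark pixel, 0 = light pixel. No symbolic rule is used:
# classification is just arithmetic on matrices of numbers.

ZERO = [
    [1, 1, 1],
    [1, 0, 1],
    [1, 0, 1],
    [1, 0, 1],
    [1, 1, 1],
]

ONE = [
    [0, 1, 0],
    [1, 1, 0],
    [0, 1, 0],
    [0, 1, 0],
    [1, 1, 1],
]

TEMPLATES = {0: ZERO, 1: ONE}

def pixel_distance(a, b):
    """Sum of absolute pixel differences between two images."""
    return sum(abs(pa - pb)
               for row_a, row_b in zip(a, b)
               for pa, pb in zip(row_a, row_b))

def classify(image):
    """Return the digit whose pixel pattern is closest to the image."""
    return min(TEMPLATES, key=lambda d: pixel_distance(image, TEMPLATES[d]))

# A noisy "one": one extra dark pixel compared to the template.
noisy_one = [
    [0, 1, 0],
    [1, 1, 0],
    [0, 1, 1],
    [0, 1, 0],
    [1, 1, 1],
]

print(classify(noisy_one))  # → 1
```

A real deep network does something analogous at vastly larger scale: millions of learned weights combine pixel values layer after layer, which is exactly why no human can read a meaningful "rule" out of them.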
How the system detects patterns is beyond human comprehension.
Yet these artificial networks work very well at their specific tasks (image and voice recognition, language translation and many more).
Albert Einstein was initially puzzled by quantum mechanics. After years of discussing it with Niels Bohr, he accepted that it was a giant step forward in our understanding of the world, but remained convinced that reality could not be so "strange" and that there must be a more reasonable explanation.
After a century, we are still at the same point.
The equations of quantum mechanics are used daily by physicists, engineers, chemists and biologists, and in all modern technologies. Without these equations, no transistors and no computers.
Yet they remain mysterious: they do not describe what happens but only how a system is perceived by another system.
Is reality indefinable? Is our current knowledge still limited? Or is reality simply like that: nothing but interactions?
Will we be able one day to describe and define how artificial neural networks work, or is their logic simply alien to us?
And maybe “alien” doesn’t mean “wrong”.
Maybe the nature of the world is closer to the way our networks of computers and sensors represent it than to how the human mind perceives it.
The machines may be closer to the truth than we humans ever could be.