The search is on for a better understanding of the processes taking place in artificial neural networks.
Rapid progress has been made in deep learning in the last few years. Neural networks, whose architecture is loosely inspired by the human brain, are already highly effective at recognising patterns and structures and at analysing complex data sets such as images. Backpropagation, in which the network's output is compared against target values and the resulting error is fed back through the network to adjust its weights, was a key factor in getting us to this stage.
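The training loop described above can be sketched in a few lines. This is a minimal illustration, not code from the article: a tiny two-layer network, built with NumPy, learns XOR by repeatedly comparing its output against target values and feeding the error back to adjust the weights.

```python
# Minimal backpropagation sketch (illustrative assumption: a 2-4-1
# sigmoid network trained on XOR with mean-squared error).
import numpy as np

rng = np.random.default_rng(0)

# Training data: XOR inputs and target values.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised weights for a 2-4-1 network.
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
lr = 1.0
for _ in range(2000):
    # Forward pass: compute hidden activations and the output.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    losses.append(np.mean((out - y) ** 2))
    # Backward pass: the output error is fed back through the
    # network, layer by layer, to compute weight gradients.
    d_out = (out - y) * out * (1 - out)    # error at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)     # error propagated to the hidden layer
    W2 -= lr * h.T @ d_out / len(X)
    W1 -= lr * X.T @ d_h / len(X)

print(f"loss before training: {losses[0]:.4f}, after: {losses[-1]:.4f}")
```

With each pass, the feedback nudges the weights so the measured error shrinks, which is the "constant training" that steadily improves accuracy.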
What remains largely unclear today, however, is why neural networks behave the way they do. In concrete terms: while we can see that self-learning networks continuously improve their accuracy through constant training, at the micro level of individual neurons we do not know exactly why a neuron ‘fires’ or stays silent in a given situation. This is referred to as black box AI.
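The micro-level view described above can be made concrete with a small sketch (an illustrative assumption, using a randomly initialised layer rather than any real model): every neuron's activation is fully observable, yet the raw numbers alone do not tell us why a given unit fires for a given input.

```python
# Sketch of the 'black box' at the neuron level: activations are
# easy to read out, but hard to interpret. The weights and input
# here are arbitrary, for illustration only.
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 5))          # one layer: 3 inputs -> 5 neurons
x = np.array([0.2, -1.0, 0.7])       # an arbitrary input

# ReLU activation: a neuron 'fires' if its activation is positive.
activations = np.maximum(0.0, x @ W)

for i, a in enumerate(activations):
    state = "fires" if a > 0 else "silent"
    print(f"neuron {i}: activation = {a:.3f} ({state})")
```

We can inspect every one of these values at will; what is missing, and what interpretability research tries to supply, is an explanation of what each firing pattern means for the network's decision.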
A detailed understanding of the complex processes taking place in a neural network is the key to finding out how neural networks ‘think’. This knowledge would allow us to improve the efficiency of such networks. It would also make a crucial difference where complex and ethical decisions are involved, for example in the field of autonomous driving or medical diagnostics.
Governments and society at large are not content simply to accept the fact that AI is capable of making certain decisions. They want to know how the decisions were reached. For scientific purposes, it is also important to understand the process behind a discovery made by a neural network. More and more research projects are focusing on understanding mechanisms within neural networks in order to open the black box. This will help to usher in the widespread use of higher-performance, specialised networks, even in sensitive fields.
Read more about current trends in AI and related topics in our paper here.