200-year-old Math Helps Us Understand AI

Technical University of Denmark

When ChatGPT hallucinates and starts concocting answers that have no basis in reality, this is an example of an artificial intelligence system behaving in a way that cannot be explained. ChatGPT is a large language model (LLM) based on a deep learning algorithm. Deep learning is a variety of machine learning – and both are forms of artificial intelligence.

Before an artificial intelligence based on a deep learning algorithm can be used, it must first be trained by humans. This is typically done by 'feeding' the system large volumes of data together with an answer sheet: the solutions we want the artificial intelligence to find in the future. In the world of research, many researchers use deep learning systems to identify connections and patterns in large data sets, for example when tracing links between genes and diseases.
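The training process described above can be sketched in a few lines of code. The example below is a minimal, hypothetical illustration, not the systems discussed in the article: it trains a single logistic neuron on toy data where the "answer sheet" (the labels) is known, then checks how well the trained model handles data it has never seen.

```python
import numpy as np

# Toy supervised-learning sketch: data plus an "answer sheet" of labels.
# Hypothetical task: classify 2D points by which side of a line they fall on.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                # training data
y = (X[:, 0] + X[:, 1] > 0).astype(float)    # the known answers

# A single logistic neuron trained by gradient descent.
w = np.zeros(2)
b = 0.0
lr = 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # current predictions
    grad_w = X.T @ (p - y) / len(y)          # gradient of the cross-entropy loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w                         # nudge parameters toward better answers
    b -= lr * grad_b

# After training, the model is applied to new data it has not seen before.
X_new = rng.normal(size=(50, 2))
pred = (1.0 / (1.0 + np.exp(-(X_new @ w + b))) > 0.5).astype(float)
truth = (X_new[:, 0] + X_new[:, 1] > 0).astype(float)
accuracy = np.mean(pred == truth)
```

In a real deep learning system the same loop runs over millions of parameters rather than three, which is precisely why the learned internals become hard to inspect.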

Loss of control

Following the training phase, the artificial intelligence is unleashed to find the right solutions in new data while simultaneously learning from the data it encounters. This is why this type of artificial intelligence is called self-learning. However, while the algorithms that allow the artificial intelligence to complete its task successfully were developed by humans, we do not necessarily understand how it completes that task, according to Søren Hauberg, Professor at DTU Compute. He elaborates:

"You could say that we have given the artificial intelligence the freedom to select any method of its choosing in order to complete its tasks. This means that we can't always account for what is taking place and how it reaches the solutions that it then presents us with – these processes are often hidden from us."

The hidden processes contained in self-learning artificial intelligence systems are known as black boxes.

"A black box represents a loss of control, and there are situations where that loss of control is not acceptable. If the artificial intelligence in question is performing a task such as controlling a robot on a car assembly line, then it simply isn't acceptable for us not to have a full grasp of how it moves from A to B; dangerous situations may arise if it behaves unpredictably. This is one of the reasons why we stand to gain a lot from finding out what is going on inside black boxes," Søren Hauberg says.

/Public Release. This material from the originating organization/author(s) may be of a point-in-time nature and may have been edited for clarity, style, and length. Mirage.News does not take institutional positions or sides, and all views, positions, and conclusions expressed herein are solely those of the author(s).