How to make transparency and explainability in artificial intelligence concrete

The importance of digitization has become even more evident during the coronavirus crisis, and society and the Dutch economy are digitizing rapidly. This calls for a good balance between seizing opportunities and reducing risks.

Professor Bram Klievink was a speaker at the Nederland Digitaal conference last Tuesday, 9 February. Together with the director of AI (Artificial Intelligence) at the Ministry of I&W, he led a discussion on the question 'How do we make transparency and explainability in AI concrete?'

Problematic aspects of the deployment of AI are increasingly entering the public debate. Transparency and explainability are essential, especially for applications in the public sector.

Klievink: 'Transparency and explainability are not just features of a specific algorithm or tool. Ultimately it's about developing and using them in a setting where social and organisational factors play just as big a role as technical ones. It actually starts with the question of which tasks the deployment of AI is really appropriate for, in what form, and when it is proportional. There are also aspects such as how you organise the necessary expertise, and how, and to whom, you have to account for what exactly. These are all questions that you would want to address alongside the development and deployment of AI.'

The Nederland Digitaal conference is an annual conference initiated by the cabinet, and this year takes place entirely online from 8 to 10 February.

/Public Release. This material from the originating organization/author(s) may be of a point-in-time nature, and edited for clarity, style and length. Mirage.News does not take institutional positions or sides, and all views, positions, and conclusions expressed herein are solely those of the author(s).