Explainable AI is the key to progress and acceptance of AI

February 08, 2019
Prof. Dr. Harald Sack

SEMANTiCS conference general chair Harald Sack is Vice President Information Service Engineering at FIZ Karlsruhe – Leibniz Institute for Information Infrastructure, the hosting institution of SEMANTiCS 2019, and Professor at Karlsruhe Institute of Technology (KIT). In this interview, Harald elaborates on the core topics of SEMANTiCS 2019, explainable AI and Knowledge Graphs.

AI has seen many ups and downs. How do you estimate the current situation: Is it more than hype? Is AI finally here to stay?

First of all, let’s be careful with our choice of labels. The current AI hype refers primarily to the success of machine learning, especially so-called deep learning. Massive hardware parallelization, combined with the reusability of already trained models, has paved the way for significant progress in tasks of classification, prediction, and generation. Computer vision, language understanding, and speech generation enable the development of highly autonomous systems. For sure we have not reached the end of this development yet. However, AI also has its roots in symbolic representation and logic, technologies at the core of the SEMANTiCS conference since its beginning. One of the key problems today is that results achieved via deep learning often lack any further explanation. Explainable AI, therefore, tries to combine machine learning with symbolic representation to enable further progress and acceptance of AI-based systems.


Is your research project ready to be reviewed? Calls for SEMANTiCS 2019 are open!


You are the general chair of SEMANTiCS 2019, which investigates the intersection of AI and Knowledge Graphs. How do these two technology areas fit together from your perspective?

As already mentioned, it is important to combine machine learning and symbolic representations, i.e. AI and Knowledge Graphs, especially when it comes to explainable AI systems. From my point of view, significant progress in complex learning tasks might only be achieved via a fruitful synthesis of both worlds. These topics are at the current frontier of research which is driven by major research groups in academia as well as in industry.

What are the challenges ahead when it comes to the widespread adoption of next generation AI?

I can tell you about one of our own research projects, which aims to enable and improve the automated classification and annotation of archival material using a technique called Dataless Classification. There, FIZ Karlsruhe is collaborating with the German National Archive on the challenge of annotating millions of documents for which almost no training data is available.

The challenges for the widespread adoption of AI definitely stem from the changes it brings to our society. Think of AI applied in cybercrime, not for the prevention of crime but to commit it, or of AI involved in decision-making processes for the job market, health insurance, or any other case with direct consequences for our daily lives. However, I am definitely not afraid of superhumanly intelligent AI systems, as has often been predicted in the past. Let me phrase it like my prominent Oxford colleague Luciano Floridi: “Robocop is not coming.” Based on my personal experience (my first acquaintance with machine learning dates back to the late 1980s, when we tried to apply neural networks to stock market predictions), today we are still far, far away from achieving anything that comes close to general human intelligence. Deep learning might achieve superhuman results in rather specific, narrowly focused tasks, such as playing the game of Go or identifying tumour cells. But when it comes to understanding new situations, switching contexts, or handling more complex tasks that require the application and transfer of common knowledge gained over a lifetime, we still have a long way to go. Therefore, I fear unjustified trust in the capabilities of today’s AI systems even more than their abuse.
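To illustrate the idea behind Dataless Classification: instead of learning from labelled examples, each label is given a short textual description, and a document is assigned the label whose description it is most similar to. The sketch below is a deliberately minimal bag-of-words version with made-up labels and texts; it is not FIZ Karlsruhe’s actual system, which would rely on far richer semantic representations of labels and documents.

```python
from collections import Counter
import math

def vectorize(text):
    """Bag-of-words term-frequency vector for a text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def dataless_classify(document, label_descriptions):
    """Assign the label whose description is most similar to the document.
    No labelled training examples are needed, only a short textual
    description of each label (hypothetical labels below)."""
    doc_vec = vectorize(document)
    return max(label_descriptions,
               key=lambda label: cosine(doc_vec, vectorize(label_descriptions[label])))

labels = {
    "trade":    "trade commerce customs tariffs merchants goods export import",
    "military": "army regiment soldiers war battle garrison weapons orders",
}
doc = "Records of customs tariffs on goods for export by local merchants"
print(dataless_classify(doc, labels))  # -> trade
```

In practice, the word-overlap similarity would be replaced by a semantic representation (e.g., explicit semantic analysis or learned embeddings) so that documents and label descriptions match even without shared vocabulary.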

Finally, let’s talk about your institute. What are the focal research areas of FIZ Karlsruhe, and how will they be reflected at this year’s conference?

One of FIZ Karlsruhe’s main research focuses is the semantic analysis of natural language data and arbitrary research data, which includes symbolic knowledge representation, text, data, and knowledge mining, as well as the design and implementation of efficient information services based on the principles of semantic technologies. These research topics are at the core of the SEMANTiCS conference. A second research focus of FIZ Karlsruhe is intellectual property rights in distributed information infrastructures, including copyright law, data privacy law, and IT law. These topics are well reflected in one of the SEMANTiCS 2019 special tracks, on LegalTech. Furthermore, FIZ Karlsruhe applies the results of the aforementioned research areas in the cultural heritage domain; for example, FIZ Karlsruhe hosts the information services of the German Digital Library (Deutsche Digitale Bibliothek) as well as the German Digital Archive (Archivportal-D). The cultural heritage domain is also the topic of the second special track of this year’s edition of the SEMANTiCS conference.

About SEMANTiCS

The annual SEMANTiCS conference is the meeting place for professionals who make semantic computing work, understand its benefits, and know its limitations. Every year, SEMANTiCS attracts information managers, IT architects, software engineers, and researchers from organisations ranging from NPOs, universities, and public administrations to the largest companies in the world. http://www.semantics.cc