Foundations of Explainable Knowledge-Enabled Systems release_pz54e4ag35hf3osr7dfbmhb4ze

by Shruthi Chari, Daniel M. Gruen, Oshani Seneviratne, Deborah L. McGuinness

Released as an article.

2020  

Abstract

Explainability has been an important goal since the early days of Artificial Intelligence. Several approaches for producing explanations have been developed. However, many of these approaches were tightly coupled with the capabilities of the artificial intelligence systems at the time. With the proliferation of AI-enabled systems in sometimes critical settings, there is a need for them to be explainable to end-users and decision-makers. We present a historical overview of explainable artificial intelligence systems, with a focus on knowledge-enabled systems, spanning the expert systems, cognitive assistants, semantic applications, and machine learning domains. Additionally, borrowing from the strengths of past approaches and identifying gaps needed to make explanations user- and context-focused, we propose new definitions for explanations and explainable knowledge-enabled systems.

Archived Files and Locations

application/pdf  1.4 MB
file_q6n2onj4vzf5xj23uh4eywqaoi
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2020-03-17
Version   v1
Language   en
arXiv  2003.07520v1
Work Entity
Access all versions, variants, and formats of this work (e.g., pre-prints).
Catalog Record
Revision: d41c1bbe-82a3-4580-b0f2-405b9c7311c6