Foundations of Explainable Knowledge-Enabled Systems
by
Shruthi Chari, Daniel M. Gruen, Oshani Seneviratne, Deborah L. McGuinness
2020
Abstract
Explainability has been an important goal since the early days of Artificial
Intelligence. Several approaches for producing explanations have been
developed. However, many of these approaches were tightly coupled with the
capabilities of the artificial intelligence systems at the time. With the
proliferation of AI-enabled systems in sometimes critical settings, there is a
need for them to be explainable to end-users and decision-makers. We present a
historical overview of explainable artificial intelligence systems, with a
focus on knowledge-enabled systems, spanning the expert systems, cognitive
assistants, semantic applications, and machine learning domains. Additionally,
borrowing from the strengths of past approaches and identifying the gaps that must be addressed to make explanations user- and context-focused, we propose new definitions for explanations and for explainable knowledge-enabled systems.
Archived Files and Locations
application/pdf, 1.4 MB — arXiv preprint 2003.07520v1 (arxiv.org; archived at web.archive.org)