
Flexible and Context-Specific AI Explainability: A Multidisciplinary Approach

Valérie Beaudouin, Isabelle Bloch, David Bounie, Stéphan Clémençon, Florence d'Alché-Buc, James R. Eagan, Winston Maxwell, Pavlo Mozharovskyi, Jayneel Parekh

Affiliations:
SID - Sociologie Information-Communication Design (I3, a CNRS joint research unit, UMR 9217 - Institut interdisciplinaire de l'innovation)
IMAGES - Image, Modélisation, Analyse, GEométrie, Synthèse (LTCI - Laboratoire Traitement et Communication de l'Information)
ECOGE - Economie Gestion (I3, a CNRS joint research unit, UMR 9217 - Institut interdisciplinaire de l'innovation)
S2A - Signal, Statistique et Apprentissage (LTCI - Laboratoire Traitement et Communication de l'Information)
DIVA - Design, Interaction, Visualization & Applications (LTCI - Laboratoire Traitement et Communication de l'Information)
Abstract: The recent enthusiasm for artificial intelligence (AI) is due principally to advances in deep learning. Deep learning methods are remarkably accurate, but also opaque, which limits their potential use in safety-critical applications. To achieve trust and accountability, designers and operators of machine learning algorithms must be able to explain the inner workings, the results, and the causes of failures of algorithms to users, regulators, and citizens. The originality of this paper is to combine technical, legal, and economic aspects of explainability to develop a framework for defining the "right" level of explainability in a given context. We propose three logical steps: first, define the main contextual factors, such as the audience of the explanation, the operational context, the level of harm the system could cause, and the legal/regulatory framework. This step helps characterize the operational and legal needs for explanation, and the corresponding social benefits. Second, examine the technical tools available, including post hoc approaches (input perturbation, saliency maps...) and hybrid AI approaches. Third, as a function of the first two steps, choose the right levels of global and local explanation outputs, taking into account the costs involved. We identify seven kinds of costs and emphasize that explanations are socially useful only when total social benefits exceed costs.
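The cost-benefit criterion stated at the end of the abstract can be written compactly: with B the total social benefit of an explanation and C_1, ..., C_7 the seven kinds of costs, an explanation is socially useful only when B > C_1 + ... + C_7 (the seven-way decomposition follows the abstract; the symbols are ours). The post hoc techniques listed in step two can also be made concrete. The following Python sketch illustrates input-perturbation explainability via occlusion: mask one patch of the input at a time and record how much the model's score drops. It is a minimal illustration, not the authors' implementation; the function name, patch size, and baseline value are assumptions.

    import numpy as np

    def occlusion_saliency(model_predict, image, patch=8, baseline=0.0):
        """Per-pixel importance estimated by occluding square patches
        of `image` and measuring the drop in the model's scalar score."""
        h, w = image.shape
        reference = model_predict(image)  # score on the intact input
        saliency = np.zeros((h, w))
        for top in range(0, h, patch):
            for left in range(0, w, patch):
                occluded = image.copy()
                occluded[top:top + patch, left:left + patch] = baseline
                # A large score drop means the masked patch mattered.
                saliency[top:top + patch, left:left + patch] = (
                    reference - model_predict(occluded)
                )
        return saliency

    # Toy usage: a "model" that scores an image by total intensity.
    img = np.random.rand(32, 32)
    smap = occlusion_saliency(lambda x: float(x.sum()), img)

The resulting map is a local explanation for one input; saliency-map methods produce the same kind of attribution, typically via gradients rather than occlusion.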
Document type: Preprint, working paper

Cited literature: [79 references]

https://hal.telecom-paris.fr/hal-02506409
Contributor: David Bounie
Submitted on: Thursday, March 12, 2020 - 12:34:59
Last modified on: Saturday, October 22, 2022 - 03:14:58
Long-term archiving on: Saturday, June 13, 2020 - 14:42:18

Files

main.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: hal-02506409, version 1

Citation

Valérie Beaudouin, Isabelle Bloch, David Bounie, Stéphan Clémençon, Florence d'Alché-Buc, et al. Flexible and Context-Specific AI Explainability: A Multidisciplinary Approach. 2020. ⟨hal-02506409⟩

Metrics

Record views: 689
File downloads: 327