V. Beaudouin, I. Bloch, D. Bounie, S. Clémençon, F. d'Alché-Buc et al., Flexible and context-specific AI explainability: a multidisciplinary approach, arXiv preprint, 2020.

S. Bhattacharyya, D. Cofer, D. Musliner, J. Mueller, and E. Engstrom, Certification considerations for adaptive systems, 2015 IEEE International Conference on Unmanned Aircraft Systems (ICUAS), pp. 270-279, 2015.

M. Borg, C. Englund, K. Wnuk, B. Duran, C. Levandowski et al., Safely entering the deep: A review of verification and validation for machine learning and a challenge elicitation in the automotive industry, Journal of Automotive Software Engineering, vol. 1, issue 1, pp. 1-19, 2019.

J. Burrell, How the machine thinks: Understanding opacity in machine learning algorithms, Big Data & Society, vol. 3, issue 1, article 2053951715622512, 2016.

F. Doshi-Velez, M. Kortz, R. Budish, C. Bavitz, S. Gershman et al., Accountability of AI under the law: The role of explanation, 2017.

European Commission, Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions: Building trust in human-centric artificial intelligence (COM(2019) 168), Technical report, 2019.

R. Guidotti, A. Monreale, S. Ruggieri, F. Turini, F. Giannotti et al., A survey of methods for explaining black box models, ACM Computing Surveys (CSUR), vol. 51, issue 5, 2018.

High-Level Expert Group on Artificial Intelligence (AI HLEG), Ethics Guidelines for Trustworthy AI, 2019.

Information Commissioner's Office, Project ExplAIn interim report, 2019.

IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, Ethically aligned design: A vision for prioritizing human wellbeing with autonomous and intelligent systems, 2019.

J. A. Kroll, S. Barocas, E. W. Felten, J. R. Reidenberg et al., Accountable algorithms, U. Pa. L. Rev., vol. 165, p. 633, 2016.

Z. Kurd and T. Kelly, Safety lifecycle for developing safety critical artificial neural networks, International Conference on Computer Safety, Reliability, and Security (SAFECOMP), pp. 77-91, 2003.

D. Lehr and P. Ohm, Playing with the data: what legal scholars should learn about machine learning, UC Davis L. Rev., vol. 51, 2017.

S. M. Lundberg and S.-I. Lee, A unified approach to interpreting model predictions, Advances in Neural Information Processing Systems, pp. 4765-4774, 2017.

OECD, Artificial Intelligence in Society, 2019.

OECD, Recommendation of the Council on Artificial Intelligence, 2019.

G. E. Peterson, Foundation for neural network verification and validation, Science of Artificial Neural Networks II, vol. 1966, pp. 196-207, 1993.

M. T. Ribeiro, S. Singh, and C. Guestrin, "Why should I trust you?": Explaining the predictions of any classifier, 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135-1144, 2016.

F. Schauer, Giving reasons, Stanford Law Review, pp. 633-659, 1995.

A. Selbst and S. Barocas, The intuitive appeal of explainable machines, Fordham Law Review, vol. 87, 2018.

K. Simonyan, A. Vedaldi, and A. Zisserman, Deep inside convolutional networks: Visualising image classification models and saliency maps, arXiv preprint, 2013.

P. S. Thomas, B. Castro da Silva, A. G. Barto, S. Giguere, Y. Brun et al., Preventing undesirable behavior of intelligent machines, Science, vol. 366, issue 6468, pp. 999-1004, 2019.

U.S. Food and Drug Administration, Proposed regulatory framework for modifications to artificial intelligence/machine learning (AI/ML)-based software as a medical device, 2019.

S. Wachter, B. Mittelstadt, and C. Russell, Counterfactual explanations without opening the black box: Automated decisions and the GDPR, Harvard Journal of Law & Technology, vol. 31, 2017.

M. Welling, Are ML and statistics complementary?, IMS-ISBA Meeting on Data Science in the Next 50 Years, 2015.

A. F. T. Winfield and M. Jirotka, The case for an ethical black box, Annual Conference Towards Autonomous Robotic Systems, pp. 262-273, 2017.