Flexible and context-specific AI explainability: a multidisciplinary approach, arXiv, 2020.
Certification considerations for adaptive systems, 2015 IEEE International Conference on Unmanned Aircraft Systems (ICUAS), pp. 270-279, 2015.
Safely entering the deep: A review of verification and validation for machine learning and a challenge elicitation in the automotive industry, Journal of Automotive Software Engineering, vol. 1, issue 1, pp. 1-19, 2019.
How the machine thinks: Understanding opacity in machine learning algorithms, Big Data & Society, vol. 3, issue 1, article 2053951715622512, 2016.
Accountability of AI under the law: The role of explanation, 2017.
Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions - Building trust in human-centric artificial intelligence (COM(2019)168), Technical report, 2019.
A survey of methods for explaining black box models, ACM Computing Surveys (CSUR), vol. 51, issue 5, 2018.
High-Level Expert Group on Artificial Intelligence, Ethics Guidelines for Trustworthy AI, 2019.
Project ExplAIn interim report, Information Commissioner's Office, 2019.
Ethically aligned design: A vision for prioritizing human well-being with autonomous and intelligent systems, IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, 2019.
Accountable algorithms, U. Pa. L. Rev., vol. 165, p. 633, 2016.
Safety lifecycle for developing safety-critical artificial neural networks, pp. 77-91, 2003.
Playing with the data: What legal scholars should learn about machine learning, UC Davis L. Rev., vol. 51, 2017.
A unified approach to interpreting model predictions, Advances in Neural Information Processing Systems, pp. 4765-4774, 2017.
Artificial Intelligence in Society, 2019.
Recommendation of the Council on Artificial Intelligence, 2019.
Foundation for neural network verification and validation, Science of Artificial Neural Networks II, vol. 1966, pp. 196-207, 1993.
"Why should I trust you?": Explaining the predictions of any classifier, 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135-1144, 2016.
Giving reasons, Stanford Law Review, pp. 633-659, 1995.
The intuitive appeal of explainable machines, SSRN Electronic Journal, vol. 87, 2018.
Deep inside convolutional networks: Visualising image classification models and saliency maps, 2013.
Preventing undesirable behavior of intelligent machines, Science, vol. 366, issue 6468, pp. 999-1004, 2019.
Proposed regulatory framework for modifications to artificial intelligence/machine learning (AI/ML)-based software as a medical device, 2019.
Counterfactual explanations without opening the black box: Automated decisions and the GDPR, Harv. JL & Tech., vol. 31, 2017.
Are ML and statistics complementary?, IMS-ISBA Meeting on Data Science in the Next 50 Years, 2015.
The case for an ethical black box, Annual Conference Towards Autonomous Robotic Systems, pp. 262-273, 2017.