Contextual explanation networks, 2017.
A causal framework for explaining the predictions of black-box sequence-to-sequence models, 2017.
Algorithms on regulatory lockdown in medicine, Science, vol.366, issue.6470, pp.1202-1204, 2019.
Information fiduciaries and the first amendment, UCDL Rev, vol.49, p.1183, 2015.
Explaining a black-box using deep variational information bottleneck approach, 2019.
Certification considerations for adaptive systems, 2015 International Conference on Unmanned Aircraft Systems (ICUAS), pp.270-279, 2015.
Safely entering the deep: A review of verification and validation for machine learning and a challenge elicitation in the automotive industry, Journal of Automotive Software Engineering, vol.1, issue.1, pp.1-19, 2019.
Classification and Regression Trees, Wadsworth International Group, California, 1984.
Random forests, Machine Learning, vol.45, pp.5-32, 2001.
How the machine 'thinks': Understanding opacity in machine learning algorithms, Big Data & Society, vol.3, issue.1, p.2053951715622512, 2016.
This looks like that: deep learning for interpretable image recognition, Advances in Neural Information Processing Systems, pp.8928-8939, 2019.
The epistemology of a rule-based expert system: a framework for explanation, Artificial Intelligence, vol.20, issue.3, pp.215-251, 1983.
Transparency and algorithmic governance, Administrative Law Review, vol.71, p.1, 2018.
TensorLog: Deep learning meets probabilistic DBs, 2017.
Support-vector networks, Machine Learning, vol.20, issue.3, pp.273-297, 1995.
Des intelligences TRÈS artificielles, 2019.
Accountability of AI under the law: The role of explanation, 2017.
Rule extraction with fuzzy neural network, International Journal of Neural Systems, vol.5, issue.1, pp.1-11, 1994.
Genetic fuzzy based artificial intelligence for unmanned combat aerial vehicle control in simulated air combat missions, Journal of Defense Management, vol.6, issue.1, 2016.
Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions - Building trust in human-centric artificial intelligence (COM(2019)168), European Commission, 2019.
White paper on artificial intelligence - A European approach to excellence and trust (COM(2020)65 final), European Commission, 2020.
From suspicion to action - Converting financial intelligence into greater operational impact, Europol, 2017.
De l'obligation d'information dans les contrats - essai d'une théorie, 1992.
Accountability for data governance in cloud ecosystems, 2013 IEEE 5th International Conference on Cloud Computing Technology and Science, vol.2, pp.327-332, 2013.
Linear discriminant analysis, Ann. Eugenics, vol.7, p.179, 1936.
A survey of methods for explaining black box models, ACM Computing Surveys (CSUR), vol.51, issue.5, p.93, 2018.
Artificial Intelligence: The Very Idea, 1985.
High-Level Expert Group on Artificial Intelligence, Ethics Guidelines for Trustworthy AI, 2019.
La motivation en droit des contrats, Revue de droit d, p.19, 2019.
Project ExplAIn interim report, 2019.
Ethically aligned design: A vision for prioritizing human well-being with autonomous and intelligent systems, IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, 2019.
Machine learning in anti-money laundering - summary report, International Finance, 2018.
Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (TCAV), 2017.
The (un)reliability of saliency methods, 2017.
Accountable algorithms, U. Pa. L. Rev, vol.165, p.633, 2016.
An application of combinatorial methods for explainability in artificial intelligence and machine learning (draft), 2019.
Safety lifecycle for developing safety critical artificial neural networks, Computer Safety, Reliability, and Security, pp.77-91, 2003.
Towards robust, locally linear deep networks, International Conference on Learning Representations, 2019.
Functional transparency for structured data: a game-theoretic approach, 2019.
Playing with the data: what legal scholars should learn about machine learning, UCDL Rev, vol.51, p.653, 2017.
A unified approach to interpreting model predictions, Advances in Neural Information Processing Systems, pp.4765-4774, 2017.
Users and Customizable Software: A Co-Adaptive Phenomenon, 1990.
Fuzzy rules extraction and redundancy elimination: an application to remote sensing image analysis, International Journal of Intelligent Systems, vol.12, issue.11, pp.793-818, 1997.
Smart(er) internet regulation through cost-benefit analysis: Measuring harms to privacy, freedom of expression, and the internet ecosystem, 2017.
Towards robust interpretability with self-explaining neural networks, Advances in Neural Information Processing Systems, pp.7775-7784, 2018.
Artificial Intelligence in Society, 2019.
Recommendation of the Council on Artificial Intelligence, 2019.
Foundation for neural network verification and validation, Science of Artificial Neural Networks II, vol.1966, pp.196-207, 1993.
"Why should I trust you?": Explaining the predictions of any classifier, Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp.1135-1144, 2016.
The perceptron, a perceiving and recognizing automaton (Project Para), Report No. 85-460-1, 1957.
Learning representations by back-propagating errors, Nature, vol.323, issue.6088, pp.533-536, 1986.
The strength of weak learnability, Machine Learning, vol.5, issue.2, pp.197-227, 1990.
Giving reasons, Stanford Law Review, pp.633-659, 1995.
The intuitive appeal of explainable machines, SSRN Electronic Journal, vol.87, 2018.
Learning a SAT solver from single-bit supervision, 2018.
Grad-CAM: Visual explanations from deep networks via gradient-based localization, Proceedings of the IEEE International Conference on Computer Vision, pp.618-626, 2017.
A model of inexact reasoning in medicine, Mathematical Biosciences, vol.23, issue.3, pp.351-379, 1975.
Deep inside convolutional networks: Visualising image classification models and saliency maps, 2013.
SmoothGrad: removing noise by adding noise, 2017.
Striving for simplicity: The all convolutional net, 2014.
Consistent nonparametric regression, The Annals of Statistics, pp.595-620, 1977.
Axiomatic attribution for deep networks, Proceedings of the 34th International Conference on Machine Learning, vol.70, pp.3319-3328, 2017.
Explanations in knowledge systems: Design for explainable expert systems, IEEE Expert, vol.6, issue.3, pp.58-64, 1991.
Preventing undesirable behavior of intelligent machines, Science, vol.366, issue.6468, pp.999-1004, 2019.
The information bottleneck method, 2000.
Knowledge-based artificial neural networks, Artificial Intelligence, vol.70, issue.1-2, pp.119-165, 1994.
Proposed regulatory framework for modifications to artificial intelligence/machine learning (AI/ML)-based software as a medical device, US Food and Drug Administration, 2019.
Counterfactual explanations without opening the black box: Automated decisions and the GDPR, Harv. JL & Tech, vol.31, p.841, 2017.
Explainable artificial intelligence - the new frontier in legal informatics, Jusletter IT, vol.4, pp.1-10, 2018.
Are ML and statistics complementary?, IMS-ISBA Meeting on 'Data Science in the Next 50 Years', 2015.
The case for an ethical black box, Annual Conference Towards Autonomous Robotic Systems, pp.262-273, 2017.
What does the robot think? Transparency as a fundamental design requirement for intelligent systems, IJCAI-2016 Ethics for Artificial Intelligence Workshop, 2016.
AI governance by human rights-centred design, deliberation and oversight: An end to ethics washing, The Oxford Handbook of AI Ethics, 2019.
Interpretable convolutional neural networks, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp.8827-8836, 2018.
…classification and regression trees (CART) (Breiman, 1984) and their bagged version, random forests (RF) (Breiman, 2001), as well as boosting-based aggregation (Schapire, 1990).
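To make the bagged-aggregation idea concrete, here is a minimal sketch (our own illustration, not code from Breiman's papers; all names are hypothetical): one-level "decision stumps" stand in for full trees, each is fit on a bootstrap resample of a toy dataset, and their predictions are combined by majority vote.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_stump(X, y):
    """Pick the single threshold/sign on feature 0 minimising 0/1 error."""
    best = (0.0, 1, 1.0)  # (threshold, sign, error)
    for t in np.unique(X[:, 0]):
        for sign in (1, -1):
            pred = np.where(X[:, 0] > t, sign, -sign)
            err = np.mean(pred != y)
            if err < best[2]:
                best = (t, sign, err)
    return best[:2]

def predict_stump(stump, X):
    t, sign = stump
    return np.where(X[:, 0] > t, sign, -sign)

# Toy data: the label is simply the sign of feature 0.
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] > 0, 1, -1)

# Bagging: fit 25 stumps on bootstrap resamples, then take a majority vote.
stumps = []
for _ in range(25):
    idx = rng.integers(0, len(X), len(X))  # sample with replacement
    stumps.append(fit_stump(X[idx], y[idx]))
votes = sum(predict_stump(s, X) for s in stumps)
acc = np.mean(np.sign(votes) == y)
```

Voting over resampled learners averages out the variance of any single stump, which is the core intuition behind random forests (which additionally randomise the features considered at each split).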
Connecting a substantial number of perceptrons with (continuous) non-linear transformations yielded the whole area of (deep) neural networks (NNs).
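As a minimal illustration of this composition (weights are random and untrained; all names here are ours, not from the cited works), two layers of perceptron-style units chained through a continuous sigmoid non-linearity already form a small feed-forward network:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    # Continuous non-linearity applied element-wise.
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, b1, W2, b2):
    h = sigmoid(x @ W1 + b1)      # hidden layer: linear units + non-linearity
    return sigmoid(h @ W2 + b2)   # output layer: same pattern, stacked

# Illustrative random parameters: 2 inputs -> 3 hidden units -> 1 output.
W1 = rng.normal(size=(2, 3)); b1 = np.zeros(3)
W2 = rng.normal(size=(3, 1)); b2 = np.zeros(1)

y_out = forward(np.array([[0.5, -1.0]]), W1, b1, W2, b2)
```

Without the non-linearity the two layers would collapse into a single linear map; it is the continuous non-linear transformation between layers that gives the stacked perceptrons their expressive power (and, via back-propagation, their trainability).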