Catalog of General Ethical Requirements for AI Certification - Workshop with Jocelyn Maclure and Markus Gabriel
"General Ethical Requirements for AI Certification" a conversation with
Jocelyn Maclure and Markus Gabriel and launch of the whitepaper by Nicholas
Kluge Corrêa and Julia Maria Mönig.
The research project "Certified AI" explores the question of what a
certification of "trustworthy AI" could look like. The philosophy team,
Nicholas Kluge Corrêa and Julia Maria Mönig, investigates this question from an
ethical perspective.
While many arguments can be raised against certifying "ethical"
AI, our latest publication offers guidance to computer scientists and
programmers on how to make AI-based applications more "ethical". We do
this by presenting a list of ethical principles that are consistent with
other proposals in the literature. We briefly explain what the values are and
why they matter, and give examples of how to align them with the risk
categories of the European AI Act (2024). To operationalize the principles, we
propose, for each one, methods and technical tools that can be used to put
the ethical requirements into practice.
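As a purely illustrative sketch (not taken from the whitepaper), the kind of technical tool meant here could be as simple as a fairness metric that developers run on a model's predictions. The example below computes the demographic parity difference between two groups; the function name and the toy data are our own assumptions, and, as discussed further down, such metrics are only a partial and contested proxy for fairness.

```python
# Hypothetical illustration only -- not from the whitepaper.
# Demographic parity difference: the gap in positive-prediction rates
# between two groups. A small gap is one (limited) fairness indicator.
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between group 0 and group 1."""
    rate_g0 = y_pred[group == 0].mean()
    rate_g1 = y_pred[group == 1].mean()
    return abs(rate_g0 - rate_g1)

# Toy example: binary predictions for ten applicants from two groups.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(demographic_parity_difference(y_pred, group))  # 0.2 for this toy data
```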
In the long run, if developers adhere to the proposed practices, this approach
could help replace the tech industry's profit-driven motto of "move fast and
break things" with development that is fairer, more privacy-preserving, safer,
more robust, sustainable, and transparent, while promoting accountability and
truthfulness.
Arguments against ethics 'certification' (some raised by the authors
themselves) include the risk that companies and other stakeholders could use
it for 'ethics washing'. Moreover, practices evolve over time, so a
certification (as in other areas) could only be valid for a limited period, if
at all, after which it would need to be renewed. Ethical tools may also fail
to have their intended effects, as the literature has shown, for example, for
fairness algorithms. Last but not least, it is difficult to 'standardise'
ethics: standardisation entails quantification, whereas ethics consists of
deliberation, involves values that may change over time, and in a
liberal-democratic system must take into account the views of all
stakeholders, especially vulnerable groups. We therefore suggest that the
efforts laid out in the paper be accompanied, for instance, by workshops,
guidance from professional ethicists, and discussions with the stakeholders
involved.
While the values presented in the white paper resonate with those usually
attributed to the so-called "Global North", we understand these values as
universal, with human dignity and respect at their core. The four overarching
principles of autonomy, beneficence, no harm, and accountability are intended
to serve all humans on Earth and call for the ethical inclusion of all
potentially affected stakeholders, for example through the principle of
sustainability, which addresses problems that the actions of the "Global
North" cause in the "Global South".
In this workshop we will present the guidelines in the presence of Markus
Gabriel, the principal investigator of the philosophy subproject of
"Zertifizierte KI", and Jocelyn Maclure, who will comment on them from a
philosophical perspective, drawing on his expertise and decade-long experience
in AI ethics.
Time and Place
Wednesday, 18.12.2024
3-5 pm
Conference Room of the International Center for Philosophy NRW (IZPH)
Poppelsdorfer Allee 28
53115 Bonn
3rd floor (elevator available)
Entrance area not barrier-free
Contact and Organisation
Julia Maria Mönig
University of Bonn, Center for Science and Thought, Institute of Philosophy, Konrad-Zuse-Platz 1-3
53227 Bonn