Studiegids


Philosophy of Artificial Intelligence

Course
2023-2024

Admission requirements

Admission to the following programme is required:

  • MA Philosophy 60 EC: specialisation Philosophy of Knowledge

Description

The course covers foundational issues in the philosophy of AI, such as explainability, understanding, and the implicit and explicit theoretical assumptions in machine learning, and also touches on ethical and political issues such as fairness, trustworthiness, and political power. The course thereby connects many core areas of philosophy: epistemology (e.g., questions about knowledge, understanding, and epistemic justification), metaphysics (e.g., issues of personhood and identity), general philosophy of science (e.g., explanation, understanding, evidence, and mathematical modelling), and ethics (e.g., trustworthiness, fairness, political power, and privacy). In this way, the course also offers a broad insight into philosophy itself.

We will approach these issues by closely reading and discussing relevant papers in the philosophy of AI. Students will be assigned short papers for group presentations, in which they will be required to reconstruct arguments and outline the relations between various philosophical positions in the literature. This gives students a solid foundation that they can broaden into a comprehensive overview of the field as the course progresses. The course is therefore highly interactive.

Course objectives

Students who successfully complete the course will have a good understanding of:

  • central debates in contemporary philosophy of AI;

  • key concepts and arguments employed in these debates.

Students who successfully complete the course will be able to:

  • analyze key concepts and arguments in the contemporary philosophy of AI and engage critically with them;

  • apply general philosophical analyses to concrete cases, especially from their own field of study;

  • give clear oral and written presentations on philosophical topics.

Timetable

The timetables are available through MyTimetable.

Mode of instruction

  • Seminar

Class attendance is required.

Assessment method

Assessment

  • Two group presentations

  • Final essay (3500 words)

To qualify for the final essay, students must give both group presentations; the presentations are therefore mandatory.

The first presentation will be graded as pass/fail. Feedback will be organised as an open peer review, in which your classmates will discuss your presentation against a set of criteria.

The second presentation will be graded on a scale from 1 to 10 by the lecturer; no peer review is involved.

Important note: for the assessment criteria (presentations and essay), see the Remarks section below.

Weighing

  • Two group presentations (30%)

  • Final essay (70%)

Resit

The resit will consist of an oral examination (50%) and a written essay of 3,500 words (50%).

No separate resits will be offered for mid-term or final tests. The resit mark replaces all previously earned marks for subtests.

For the final essay (and for the resit essay and oral examination), you must discuss one topic or a set of topics, which will be posted on Brightspace.

Deadline for choosing a topic: at least one week before the relevant submission deadline (this applies to both the final essay and the resit).

Inspection and feedback

How and when the exam review will take place will be announced, at the latest, when the exam results are published. If a student requests a review within 30 days after publication of the results, an exam review must be organised.

Reading list

The reading list (to be confirmed):

Arsiwalla, Xerxes D., Ricard Solé, Clément Moulin-Frier, Ivan Herreros, Martí Sánchez-Fibla, and Paul Verschure. 2023. “The Morphospace of Consciousness: Three Kinds of Complexity for Minds and Machines.” NeuroSci 4 (2): 79–102. https://doi.org/10.3390/neurosci4020009.

Binns, Reuben. 2018. “Fairness in Machine Learning: Lessons from Political Philosophy.” Proceedings of the 1st Conference on Fairness, Accountability and Transparency, Proceedings of Machine Learning Research 81:149–59. https://proceedings.mlr.press/v81/binns18a.html.

Bietti, Elettra. 2021. “From Ethics Washing to Ethics Bashing: A View on Tech Ethics from Within Moral Philosophy.”

Boge, Florian J. 2022. “Two Dimensions of Opacity and the Deep Learning Predicament.” Minds and Machines 32 (1): 43–75. https://doi.org/10.1007/s11023-021-09569-4.

Boyd, Nora Mills. 2018. “Evidence Enriched.” Philosophy of Science 85 (3): 403–21. https://doi.org/10.1086/697747.

Buijsman, Stefan. 2022. “Defining Explanation and Explanatory Depth in XAI.” Minds and Machines 32 (3): 563–84. https://doi.org/10.1007/s11023-022-09607-9.

Burrell, Jenna. 2016. “How the Machine ‘Thinks’: Understanding Opacity in Machine Learning Algorithms.” Big Data & Society 3 (1): 205395171562251. https://doi.org/10.1177/2053951715622512.

Castro, Clinton. 2019. “What’s Wrong with Machine Bias.” Ergo, an Open Access Journal of Philosophy 6 (20201214). https://doi.org/10.3998/ergo.12405314.0006.015.

Creel, Kathleen A. 2020. “Transparency in Complex Computational Systems.” Philosophy of Science 87 (4): 568–89. https://doi.org/10.1086/709729.

Doran, Derek, Sarah Schulz, and Tarek R. Besold. 2017. “What Does Explainable AI Really Mean? A New Conceptualization of Perspectives.” arXiv. http://arxiv.org/abs/1710.00794.

Floridi, Luciano. 2012. “Big Data and Their Epistemological Challenge.” Philosophy & Technology 25 (4): 435–37. https://doi.org/10.1007/s13347-012-0093-4.

Franssen, Maarten, Gert-Jan Lokhorst, and Ibo van de Poel, "Philosophy of Technology", The Stanford Encyclopedia of Philosophy (Spring 2023 Edition), Edward N. Zalta & Uri Nodelman (eds.), https://plato.stanford.edu/archives/spr2023/entries/technology/.

Günther, Mario, and Atoosa Kasirzadeh. 2022. “Algorithmic and Human Decision Making: For a Double Standard of Transparency.” AI & SOCIETY 37 (1): 375–81. https://doi.org/10.1007/s00146-021-01200-5.

Kasirzadeh, Atoosa. 2021. “Reasons, Values, Stakeholders: A Philosophical Framework for Explainable Artificial Intelligence.” arXiv. http://arxiv.org/abs/2103.00752.

Korteling, J. E. (Hans), G. C. van de Boer-Visschedijk, R. A. M. Blankendaal, R. C. Boonekamp, and A. R. Eikelboom. 2021. “Human- versus Artificial Intelligence.” Frontiers in Artificial Intelligence 4 (March): 622364. https://doi.org/10.3389/frai.2021.622364.

Kostic, Daniel. 2023. “Pragmatics for XAI.” Manuscript.

LeCun, Yann, Yoshua Bengio, and Geoffrey Hinton. 2015. “Deep Learning.” Nature 521 (7553): 436–44. https://doi.org/10.1038/nature14539.

Linardatos, Pantelis, Vasilis Papastefanopoulos, and Sotiris Kotsiantis. 2020. “Explainable AI: A Review of Machine Learning Interpretability Methods.” Entropy 23 (1): 18. https://doi.org/10.3390/e23010018.

Mollen, Joost, Peter Van Der Putten, and Kate Darling. 2023. “Bonding with a Couchsurfing Robot: The Impact of Common Locus on Human-Robot Bonding In-the-Wild.” ACM Transactions on Human-Robot Interaction 12 (1): 1–33. https://doi.org/10.1145/3563702.

Sullivan, Emily. 2022a. “Understanding from Machine Learning Models.” The British Journal for the Philosophy of Science 73 (1): 109–33. https://doi.org/10.1093/bjps/axz035.

———. 2022b. “Inductive Risk, Understanding, and Opaque Machine Learning Models.” Philosophy of Science 89 (5): 1065–74. https://doi.org/10.1017/psa.2022.62.

Watson, David S. 2022. “Conceptual Challenges for Interpretable Machine Learning.” Synthese 200 (2): 65. https://doi.org/10.1007/s11229-022-03485-5.

Winner, Langdon. 1980. “Do Artifacts Have Politics?” Daedalus 109 (1): 121–36. http://www.jstor.org/stable/20024652.

Zednik, Carlos. 2021. “Solving the Black Box Problem: A Normative Framework for Explainable Artificial Intelligence.” Philosophy & Technology 34 (2): 265–88. https://doi.org/10.1007/s13347-019-00382-7.

Background reading and regulatory documents:

European Commission High Level Expert Group. Ethics Guidelines for Trustworthy AI. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai

Draft text of the European Commission Proposal for a Regulation of Artificial Intelligence (AI Act). https://eur-lex.europa.eu/resource.html?uri=cellar:e0649735-a372-11eb-9585-01aa75ed71a1.0001.02/DOC_1&format=PDF

Registration

Enrolment through MyStudyMap is mandatory.
General information about course and exam enrolment is available on the website.

Contact

  • For substantive questions, contact the lecturer listed in the right information bar.

  • For questions about enrolment, admission, etc., contact the Education Administration Office: Huizinga

Remarks

Assessment criteria presentations

The first presentation will be graded as pass/fail. Feedback will be organised as an open peer review, in which your classmates will discuss your presentation against the following criteria:

Structure and clarity. The presentation should be clearly structured and planned. The problem you are discussing should be clearly formulated and well-motivated. At the beginning you should give an outline of the presentation, which you should follow with a minimum of digressions.

Argument. You should reconstruct the argument, or all the arguments, from the materials in good detail, showing how they relate to the problem formulated at the beginning.

Style and expression. Avoid colloquialisms and expressions with which you are not quite familiar. Always be respectful and charitable when presenting or discussing the views of others.

The same criteria apply to your second presentation, in which you will have to show that you can incorporate the feedback you received on your first presentation. The second presentation will be graded on a scale from 1 to 10 by the lecturer; no peer review is involved.

Assessment criteria final essay

(Please have a look at the Harvard guide to writing a philosophy paper, which will be uploaded as a separate document on Brightspace.)

Structure. The essay should be clearly structured and planned. It should be clear from the beginning what the purpose and aim of the essay is, and how this aim is going to be achieved. You should lead the reader step by step towards the conclusion, with a minimum of digressions.

Argument. The structure of your argument – for example when defending or attacking a particular view – should always be logically sound and easy to follow for the reader. Beware of hidden premises and of ‘jumping to the conclusion’.

Content. The essay should be based on (relevant parts of) the assigned literature. Relevant parts of the literature should be discussed (in your own words) in the text of your essay. The essay should show that you have studied and understood the relevant literature.

Originality of arguments and conclusion. Try to develop your own view on the subject. The essay should contain not only a description of the views discussed in the literature, but also your own evaluation and conclusions. If you succeed in formulating original arguments and conclusions, this will be reflected in the marking.

Style and presentation. Writing a readable essay on a philosophical subject is a skill and an art. Try not to use long sentences. Try to vary your vocabulary. Try to give examples now and then (especially when the subject of your essay is highly abstract). Although style is ultimately a matter of taste, the essays will also be assessed with respect to this aspect.