Philippe Verreault-Julien
Recent & Upcoming Talks
2023
Workshop 'Epistemic Rights in AI Policy'
November 8, 2023 — November 10, 2023
Hotel de Bilderberg
Emily Sullivan, Yeji Streppel, Philippe Verreault-Julien
Panel 'The Alignment Problem in AI Viewed Through the Lens of European Law'
The alignment problem in artificial intelligence raises important questions about ensuring that AI foundation models and systems …
November 3, 2023 11:30 — 12:30
Omni Boston Hotel at the Seaport
Philippe Verreault-Julien
Ethical Nudging with Opaque Recommender Systems?
This paper examines ethical challenges that arise in the context of using deep learning models for nudging. In particular, I will show …
June 22, 2023 14:00 — 14:25
Center for Science and Thought, University of Bonn
Philippe Verreault-Julien
Ethical Nudging with Opaque Artificial Intelligence Systems?
This paper examines the challenge to transparency that deep learning models used in recommender systems raise for ethical nudging. In …
May 26, 2023 10:15 — 10:45
Ca’ Foscari University, Venice
Philippe Verreault-Julien
Lessons Large Language Models Teach Us About Understanding
This paper explores the implications of state-of-the-art large language models (LLMs) such as GPT-3.5-turbo for the concept of …
March 26, 2023 16:00 — 17:40
New York University, USA
Philippe Verreault-Julien
2022
Understanding and How-Possibly Explanations: Why Can’t They Be Friends?
In the current debate on the relation between how-possibly explanations (HPEs) and understanding, two seemingly irreconcilable …
November 11, 2022 19:00 — 21:00
Pittsburgh, PA, USA
Philippe Verreault-Julien, Till Grüne-Yanoff
Salvaging Epistemically Possible How-Possibly Explanations from Epistemic Opacity
Some how-possibly explanations have epistemic value because they are epistemically possible; we cannot rule out their truth. One …
October 17, 2022 16:30
Tilburg University
Philippe Verreault-Julien
Ethical Nudging with Opaque AI Systems?
October 7, 2022 11:00
Leiden
Philippe Verreault-Julien
From Explanation to Recommendation: Ethical Standards for Algorithmic Recourse
People are increasingly subject to algorithmic decisions, and it is generally agreed that end-users should be provided an explanation …
August 1, 2022
Oxford
Philippe Verreault-Julien
Explanation in Economics
Discussions in the literature on economic methodology often do not explicitly concern explanation. The goal of this chapter is to show …
July 4, 2022 09:00
Ghent University
Philippe Verreault-Julien
Toy models, dispositions, and the power to explain
May 16, 2022
Online
Philippe Verreault-Julien
Recommander pour le bien-être : vers des standards éthiques pour le recours algorithmique (Recommending for Well-Being: Toward Ethical Standards for Algorithmic Recourse)
People are increasingly subject to algorithmic decisions, and it is generally agreed that end-users should be provided an explanation …
May 11, 2022 10:30
Université Laval
Philippe Verreault-Julien
Toy models, dispositions, and the power to explain
Two recent contributions have discussed, and disagreed, over whether so-called toy models that attempt to represent dispositions have …
April 23, 2022 12:10
Fordham University
Philippe Verreault-Julien
2021
Representing non-actual targets?
Scientists seek to learn about targets of interest by representing them with models. This supposes that we have an account of how …
June 1, 2021 14:00
University of Alberta
Philippe Verreault-Julien
2020
Representing and Understanding (Im)Possible Targets?
One strategy that scientists use to understand targets of interest is by representing them with models. Understanding the world by …
March 12, 2020 — March 14, 2020
Emory University, United States
2019
Inferentialism and representation: chasing factivity
In this paper, I argue that two brands of inferentialism (Suárez 2004; Suárez 2015) and what I call the factive inferentialist account …
September 11, 2019
University of Geneva, Switzerland
Philosophical methodology in theoretical modelling: the case of herd behaviour
Naturalism is the view according to which philosophy should solve its problems using empirical scientific methodology. It assumes a …
July 2, 2019
Lake Como School of Advanced Studies, Italy
2018
Hamilton’s rule: understanding the disagreement about its explanatoriness
November 9, 2018
University of Twente, The Netherlands
with Vaios Koliofotis
Learning and understanding with models: same same but different?
June 11, 2018
University of Turin, Italy
Hamilton’s rule: understanding the disagreement about its explanatoriness
May 23, 2018
Ghent University, Belgium
with Vaios Koliofotis
Learning and understanding with models: same same but different?
March 15, 2018
University of South Carolina, Columbia, United States of America
2017
Learning and understanding with models: same same but different?
October 12, 2017
University of Helsinki, Finland
Models and how-possibly explanations: a demarcation problem
September 6, 2017
University of Exeter, United Kingdom
How possibly could how-possibly explanations explain?
March 24, 2017
Erasmus University Rotterdam, The Netherlands
2016
The inferentialist conception of model-based understanding: A new hope or return of the puzzlement?
December 10, 2016
Groningen, The Netherlands
Non-causal understanding with economic models: the case of general equilibrium
June 16, 2016
Aix-en-Provence, France
A case for non-causal understanding with models
May 18, 2016
University of Barcelona, Spain
2015
A case for non-causal understanding with economic models
November 22, 2015
University of Cape Town, South Africa