Models, Understanding, and Explanation

Philosophers have been interested in how models that systematically and deeply idealize may explain or afford understanding when they misrepresent the very things that are supposed to do the explaining and to be explained. This is problematic insofar as prevalent accounts of explanation hold that explanation is factive: successful explanations must identify (approximately) true, actual explanantia and explananda. Accordingly, it seems that only models that represent faithfully may explain.

One solution to this problem has been to argue that these models make a particular epistemic contribution by providing how-possibly explanations. Models may thus fail to explain in the traditional sense, yet provide a different sort of explanation or similar epistemic benefits (e.g. understanding). However, what exactly how-possibly explanations are, how models could provide them, and how they could afford the purported epistemic benefits have received relatively little attention.

Furthermore, the modal features of modelling seem to raise several issues related to scientific representation. If models have ‘imaginary’, ‘fictional’, or ‘non-existent’ targets, how are they supposed to represent them? Can models have non-actual targets and, if so, how are we supposed to learn about them?

Philippe Verreault-Julien