Learning and understanding with models: same same but different?

Abstract

This paper examines two proposals concerning the possible epistemic function of highly idealized models. Some argue that models help us learn about the world (e.g. Claveau and Vergara Fernández 2015; Grüne-Yanoff 2009b; Grüne-Yanoff 2013; Morgan 1999). According to this perspective, using and manipulating models improves our knowledge about the world. For instance, Grüne-Yanoff argues that “we learn from minimal models because they may affect our beliefs about what is impossible or necessary in the real world” (2009a, 82). On this account, the epistemic function of unrealistic models is to prompt learning. Others instead argue that models afford understanding (e.g. Bokulich 2016; Kuorikoski and Ylikoski 2015; Rice 2016; Ylikoski and Aydinonat 2014). On this view, models provide us with insight into why (or how) the world is the way it is. For example, Ylikoski and Aydinonat present “possible ways in which highly abstract theoretical models […] could contribute to our understanding” (2014, 20). For this set of views, the epistemic import of models is that they provide understanding. Both the learning and the understanding accounts thus suggest ways in which models may serve an epistemic function. However, it remains unclear whether learning and understanding are similar or whether they are, in fact, two different epistemic benefits that models can provide. Is learning equivalent to understanding? Is one more valuable than the other? Absent answers to these questions, current accounts of the epistemic functions of models risk talking past each other. Drawing on a distinction from contemporary epistemology between reductionist and non-reductionist accounts of understanding (see Sullivan 2017), I show under what conditions learning and understanding are similar, and I argue that there are two ways in which they can come apart.

Philippe Verreault-Julien
Postdoctoral researcher

Philosopher working on the ethics, epistemology, governance, and safety of artificial intelligence systems.