Ethical Nudging with Opaque AI Systems?

Abstract

Some how-possibly explanations have epistemic value because they are epistemically possible; we cannot rule out their truth. One paradoxical implication of that proposal is that epistemic value may be obtained from mere ignorance, for the less we know, the more is epistemically possible. In this paper, I examine a particular way we can acquire justification for a how-possibly explanation, viz. via an epistemically opaque process. How could these how-possibly explanations have epistemic value if they result from a process about which we lack knowledge or understanding? I propose three strategies for salvaging epistemic value from epistemic opacity: 1) functional transparency, 2) modal operator interpretation, and 3) pursuitworthiness. I illustrate these strategies using cases from deep neural network modeling.

Date
October 7, 2022, 11:00
Location
Leiden
Philippe Verreault-Julien
Postdoctoral Researcher

Philosopher working on the ethics, epistemology, governance, and safety of artificial intelligence systems.