Modality and Artificial Intelligence
This project investigates the modal features of artificial intelligence (AI) systems. These features remain largely unexplored philosophically, yet they are already instantiated in current AI practice. For example, some machine learning models (1) are claimed to provide how-possibly explanations of phenomena, (2) are trained on synthetic (non-actual) data, or (3) aim to provide feasible explanations. These practices raise many philosophical questions. How can interpretability or explainability techniques help to identify modality in AI systems? What normative justifications are there for constraints on explanations in explainable AI? Can models trained on synthetic data help us understand or explain anything about the actual world?
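
To make practice (2) concrete, here is a minimal sketch of what training on synthetic (non-actual) data can look like in standard machine learning tooling. All names and parameter settings are illustrative choices, not commitments of the project: the data are sampled from a stipulated generative process rather than collected from the world, which is precisely what raises the question of whether such a model can teach us anything about actuality.

```python
# Minimal sketch: training a classifier purely on synthetic (non-actual) data.
# Illustrative only; parameters and model choice are assumptions, not the
# project's method.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Sample data from a stipulated generative process (no real-world observations).
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit and evaluate entirely within the synthetic world.
model = LogisticRegression().fit(X_train, y_train)
print(accuracy_score(y_test, model.predict(X_test)))
```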
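Likewise, the question about interpretability and explainability techniques can be made concrete with one standard such technique, permutation importance, sketched below. This is only one example of the family of methods the question concerns, chosen because it is simple; the project does not commit to this particular method.

```python
# Minimal sketch of one explainability technique: permutation importance.
# Illustrative assumption: a random forest on synthetic data stands in for
# whatever model is being interpreted.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Measure how much shuffling each feature's values degrades performance:
# a feature the model relies on heavily will show a large drop.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")
```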