Lessons Large Language Models Teach Us About Understanding


Abstract

This paper explores the implications of state-of-the-art large language models (LLMs) such as GPT-3.5-turbo for the concept of understanding, both in philosophy and in natural language processing (NLP) practice. LLMs challenge the conventional view that language models lack understanding. However, what constitutes ‘understanding’ remains contentious in both philosophy and NLP. The paper identifies three lessons for philosophers and two for NLP practitioners from the study of LLMs. For philosophers, it suggests that unpacking the nature of ‘grasping’ and exploring other abilities, such as abstraction and analogy, may be crucial to understanding. It further argues that LLMs put pressure on accounts that view understanding as compatible with luck or falsehood. For practitioners, it argues that the capabilities of LLMs demonstrate that understanding comes in degrees and that improving LLMs’ understanding may require representing and exploiting information beyond statistical correlations. Overall, the paper suggests that LLMs provide an exciting opportunity for research at the intersection of philosophy and NLP.

Date
March 26, 2023 16:00 — 17:40
Location
New York University, USA
Philippe Verreault-Julien
Postdoctoral Researcher

Philosopher working on the ethics, epistemology, governance, and safety of artificial intelligence systems.