Ben Adcock
Department of Mathematics,
Simon Fraser University
"Too good to believe? Two stories on hallucinations and instabilities in AI"
Friday, Oct 18, 2024
Abstract:
Hallucinations are a significant problem for modern AI systems. Anyone who has used ChatGPT, for example, will have witnessed it confidently provide false information or flawed reasoning. Beyond chatbots, hallucinations are known to arise in many other applications of AI, such as AI-inspired methods in computational science and engineering. Such methods may also suffer from severe instability, yielding dramatic failures when their inputs are slightly perturbed. Although these phenomena have been widely observed, there is little theory that explains why and how they arise. In this talk, I will present two stories that theoretically explore these issues in two different settings. First, I will describe their appearance in inverse problems and imaging, where they are closely related to the ill-posedness or ill-conditioning of the forward operator. Second, I will consider the broad setting of Artificial General Intelligence, where an AI strives to mimic human intelligence. Here I will present the “Consistent Reasoning Paradox”, which explains how any attempt by an AI to reason consistently (like humans do) necessarily leads to hallucinations.
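The following is a minimal sketch, not taken from the talk, of the kind of instability the abstract alludes to in inverse problems: when the forward operator is ill-conditioned, a tiny perturbation of the measured data can produce a wildly different reconstruction. The choice of a Hilbert matrix as a stand-in forward operator and the perturbation size are assumptions made purely for illustration.

```python
# Illustration (assumed, not from the talk): instability of a naive
# reconstruction when the forward operator A is ill-conditioned.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in ill-conditioned forward operator: a 12x12 Hilbert matrix.
n = 12
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
print(f"condition number of A: {np.linalg.cond(A):.2e}")

# True signal and its noiseless measurements y = A x.
x_true = rng.standard_normal(n)
y = A @ x_true

# Perturb the measurements by a relative amount of about 1e-8.
delta = 1e-8 * np.linalg.norm(y) * rng.standard_normal(n)
y_noisy = y + delta

# Naive reconstruction: solve the linear system with the perturbed data.
x_rec = np.linalg.solve(A, y_noisy)

rel_data_err = np.linalg.norm(delta) / np.linalg.norm(y)
rel_rec_err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
print(f"relative data perturbation:    {rel_data_err:.2e}")
print(f"relative reconstruction error: {rel_rec_err:.2e}")
```

Because the relative reconstruction error can be amplified by roughly the condition number of A, a perturbation of size 1e-8 here can destroy the reconstruction entirely, which is the same mechanism that makes stability a central concern for learned reconstruction methods in imaging.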