Ben Adcock
Department of Mathematics,
Simon Fraser University

"Too good to believe? Two stories on hallucinations and instabilities in AI"

Friday, October 18, 2024

Schedule:

Special Seminar -  499 DSL Seminar Room
12:00 to 1:00 PM Eastern Time (US and Canada)

Nespresso & Teatime - 417 DSL Commons
1:00 to 1:30 PM Eastern Time (US and Canada)


In-person attendance is requested (499 DSL Seminar Room).
Zoom access is intended for external (non-departmental) participants only.

Join via Zoom

Zoom Meeting # 942 7359 5552


Abstract:

Hallucinations are a big problem for modern AI systems. Anyone who has used ChatGPT, for example, will have witnessed it confidently provide false information or flawed reasoning. Beyond chatbots, hallucinations are known to arise in many other applications of AI, such as AI-inspired methods in computational science and engineering. Such methods may also suffer from severe instability, yielding dramatic failures when the inputs are slightly perturbed. In general, although these phenomena have been widely observed, there is little theory that strives to explain why and how they arise. In this talk, I will present two stories that theoretically explore these issues in two different settings. First, I will describe their appearance in inverse problems and imaging, where they are closely related to the ill-posedness or ill-conditioning of the forward operator. Second, I will consider the broad setting of Artificial General Intelligence, where an AI strives to mimic human intelligence. Here I will present the “Consistent Reasoning Paradox”, which explains how any attempt by an AI to reason consistently (like humans do) necessarily leads to hallucinations.
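
The abstract notes that instability in inverse problems is closely tied to the ill-posedness or ill-conditioning of the forward operator. The following minimal sketch (not taken from the talk; the operator A, its singular-value decay, and the noise level are all illustrative assumptions) shows the standard effect in Python: when the forward operator is badly conditioned, a naive, unregularized reconstruction amplifies a tiny perturbation of the measurements into a large change in the recovered signal.

```python
# Illustrative sketch only: instability of naive inversion of an
# ill-conditioned forward operator (hypothetical operator A).
import numpy as np

rng = np.random.default_rng(0)

# Build a hypothetical ill-conditioned forward operator A with rapidly
# decaying singular values (condition number ~ 1e8).
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
singular_values = np.logspace(0, -8, n)
A = U @ np.diag(singular_values) @ V.T

x_true = rng.standard_normal(n)   # unknown signal
y = A @ x_true                    # clean measurements

# Naive (unregularized) reconstruction from slightly perturbed data.
delta = 1e-6 * rng.standard_normal(n)   # tiny measurement perturbation
x_clean = np.linalg.solve(A, y)
x_noisy = np.linalg.solve(A, y + delta)

print("relative input perturbation :",
      np.linalg.norm(delta) / np.linalg.norm(y))
print("relative output change      :",
      np.linalg.norm(x_noisy - x_clean) / np.linalg.norm(x_clean))
```

Running this, the relative change in the reconstruction is many orders of magnitude larger than the relative perturbation of the data, which is the basic mechanism behind the "dramatic failures when the inputs are slightly perturbed" mentioned above; how this plays out for learned reconstruction methods is the subject of the talk.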

Speaker's Bio
