Caltech

H.B. Keller Colloquium

Monday, April 6, 2026
4:00pm to 5:00pm
Annenberg 105
Toward a Science of Auditing AI-Mediated Information Ecosystems
Chara Podimata, Class of 1942 Career Development Assistant Professor and Assistant Professor of Operations Research and Statistics, MIT

AI-mediated systems, from social media recommendation algorithms to LLMs, now curate, at unprecedented scale, the information that billions of people worldwide consume. Yet both operate as black boxes: their internal mechanisms are opaque, their biases poorly understood, and their accountability to ethical norms mostly unenforced. In this talk, I present two complementary studies that work toward a science of auditing such systems. In doing so, I will reveal a duality: LLMs can serve as both the methodological tool and the object of the auditing study. In the first study, I introduce a counterfactual auditing framework that uses LLMs as behavioral engines for synthetic user accounts, enabling causal identification of how social media algorithms respond to user demographics, a form of identification that had previously been infeasible. Deployed on X during the 2024 U.S. presidential election, the framework reveals that the platform's recommendation algorithm substantially amplifies toxic, polarizing, and right-leaning content, with effects that are highly heterogeneous across user types and political leanings. In the second study, I turn the same auditing lens on LLMs themselves, querying 12 models daily from July through November 2024 on a set of more than 12,000 election-related questions. I find that LLMs exhibit systematic biases in how they represent candidates and electoral issues, are sensitive to demographic steering, and hold implicit (and highly unstable) beliefs about election outcomes. These findings suggest that LLMs are political actors, whether or not they intend to be. Taken together, I argue that auditing AI-mediated information systems requires new methodological frameworks, ones that are counterfactual, large-scale, and sensitive to heterogeneity across user populations. Building this science is one of the most pressing challenges at the intersection of AI and society.

For more information, please contact Sumaia Abedin by phone at (626) 395-6704 or by email at [email protected].