Weinberg Institute for Cognitive Science
2025 Symposium

13th Annual Marshall M. Weinberg Symposium

The symposium took place in the Michigan League Ballroom on Friday, March 21, 2025.

Three big claims have dominated the last half century of thinking in cognitive science about language. First, language relies on a system of abstract syntactic rules and principles. Second, these rules and principles are too complex to be learned by children without an innate linguistic faculty. Third, developing competence in language requires social interaction grounded in the real world. The last few years have seen the advent of large language models, neural networks that learn to produce fluent language “from scratch”, trained on data from the internet. Do large language models force us to reconsider any of these big claims, and if so, how should we revise our understanding of language? The 2025 Weinberg Symposium features three world-leading experts who will discuss and debate these questions.

Learn More About the 2025 Symposium

2025 Featured Speakers

Our featured speakers are renowned experts in their respective disciplines, offering diverse perspectives in cognitive science.   

Richard Futrell

Richard Futrell is an Associate Professor of Language Science at the University of California, Irvine.

Prediction and Locality in Language and Language Models

Abstract: I argue that neural language models succeed in part because they process language in ways similar to humans. The information-processing constraints shared by humans and models stem not from any specific neural-network architecture or hardwired formal structure, but from the core task shared by language models and the brain: predicting upcoming input. I show that certain universals of language can be explained in terms of generic information-theoretic constraints, and that the same constraints explain language model performance when learning human-like versus non-human-like languages. I argue that this information-theoretic approach provides a deeper explanation for the nature of human language than purely symbolic approaches, and links the science of language with neuroscience and machine learning.

Jordan Kodner

Jordan Kodner is an Assistant Professor in the Stony Brook University Department of Linguistics and an affiliate of the Institute for Advanced Computational Science, the Department of Computer Science, and the AI Institute. His primary research revolves around computational approaches to child language acquisition and their broader implications: in particular, algorithmic models of grammar acquisition, how those processes drive language variation and change, what insights they provide for low-resource NLP, and how we can evaluate and draw conclusions from computational models at the intersection of (low-resource) NLP and cognitive science. In 2020, Jordan received his PhD from the University of Pennsylvania Department of Linguistics, where he worked with Charles Yang and Mitch Marcus. He received a master's degree from the University of Pennsylvania Department of Computer and Information Science in 2018. Before graduate school, he was a member of the Speech, Language, and Multimedia group at Raytheon BBN Technologies, where he worked on defense- and medical-related projects.

LLMs and linguistics: Use them for what they are, and don't try to make them what they aren't

Abstract: Large language models are the most powerful tools we have for uncovering statistical distributions in large quantities of linguistic data. Leveraging this information, they perform on par with or superior to humans on a range of tasks traditionally argued to require a human-like knowledge of language. Nevertheless, LLMs cannot serve as satisfactory models of human language cognition. Rather than revealing how humans learn, process, and represent language, LLMs have shown us that there are sometimes multiple ways to succeed at a task. In this talk, I will discuss weaknesses in popular evaluation methods that inflate the apparent performance of LLMs on tests of linguistic knowledge, both through unintended confounds and cognitively superficial NLP-like evaluation methodologies. I ask whether a test without these problems could even demonstrate that an LLM is an appropriate model of language cognition in the first place. Finally, I propose how LLMs may contribute to the science of language when we use them for what they are rather than trying to force them to be what they aren't.

Ellie Pavlick

Ellie Pavlick is an Associate Professor of Computer Science, Cognitive Science, and Linguistics at Brown University, and a Research Scientist at Google DeepMind. She leads the Language Understanding and Representation (LUNAR) Lab, which seeks to understand how language "works" and to build computational models that can understand language the way humans do. Her lab's projects focus on language broadly construed, and often include the study of capacities more general than language, including conceptual representations, reasoning, learning, and generalization. The lab is interested in understanding how humans achieve these things, how computational models (especially large language models and similar types of "black box" AI systems) achieve these things, and what insights can be gained from comparing the two. The lab often collaborates with researchers outside of computer science, including in cognitive science, neuroscience, and philosophy.

Not-Your-Mother's-Connectionism: LLMs as Cognitive Models

Abstract: Recent advances in AI have led to large neural network models which exhibit human-like behavior across a range of language and reasoning tasks. This (re-)opens important theoretical questions about the nature of the structure that is required to support such behaviors, leading to debates reminiscent of long-running arguments that pit neural network models against explicitly structured symbolic models of the mind. In this talk, I will describe a series of experiments which highlight the ways in which LLMs today appear importantly different from the connectionist systems that inspired these debates originally. I will argue for a more nuanced stance which does not assume neural networks to be diametrically opposed to traditional models of the mind, but still acknowledges the potential of LLMs to teach us something fundamentally new about the structures that govern language and cognition in humans.


Check out the Symposium

Watch the livestream recording of the 2025 Weinberg Symposium.

Weinberg Institute for Cognitive Science
9th Floor Weiser Hall
500 Church Street
Ann Arbor, MI 48109-1045
Weinberg-Institute@umich.edu
734.615.3275
© 2025 Regents of the University of Michigan