Assistant Professor Huteng Dai was recently invited to speak at the 2025 U-M Symposium on Human-Centered AI, organized by the School of Information and the Michigan Institute for Data & AI in Society (MIDAS). His presentation was titled “How BabyLMs Learn Filler-Gap Dependencies”. An abstract of the talk appears below.
Abstract: Humans acquire syntactic constructions like filler-gap dependencies from limited and often noisy input. Can neural language models do the same? We investigate this question by evaluating GPT-2 models trained on child-oriented input from the BabyLM Challenge. Our experiments focus on whether these “baby” language models acquire filler-gap dependencies, generalize across constructions, and respect structural constraints such as island effects. We apply a suite of syntactic constructions to four models trained on child language, including two base models (trained on 10M and 100M tokens) and two well-performing models from the BabyLM Challenge (ConcreteGPT and BabbleGPT). We evaluate model behavior using wh-licensing scores, flip tests, and grammaticality contrasts across four constructions. Results show that BabyLM-scale models partially acquire filler-gap dependencies but often fail to generalize or fully capture island constraints.
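The wh-licensing score mentioned in the abstract is commonly computed as a 2×2 difference-in-differences over model surprisals: a filler (wh-word) should lower surprisal at a gap site and raise it where no gap is licensed. The sketch below shows one such formulation; the function name and the sample surprisal values are hypothetical illustrations, not numbers from the talk.

```python
def wh_licensing_interaction(s_wh_gap: float,
                             s_wh_nogap: float,
                             s_nowh_gap: float,
                             s_nowh_nogap: float) -> float:
    """One common 2x2 interaction over surprisals (in bits) at the
    critical region, across [+/-wh] x [+/-gap] sentence variants.

    A positive value suggests the model treats the wh-filler as
    licensing the gap: the filler reduces surprisal at a gap more
    than it does in the no-gap baseline.
    """
    gap_effect = s_nowh_gap - s_wh_gap        # filler helps at a gap
    nogap_effect = s_nowh_nogap - s_wh_nogap  # filler hurts without a gap
    return gap_effect - nogap_effect


# Hypothetical surprisals for the four variants of one item, e.g.
#   +wh,+gap: "I know what the cook prepared __ yesterday."
#   -wh,+gap: "I know that the cook prepared __ yesterday."  (ungrammatical)
score = wh_licensing_interaction(s_wh_gap=8.0, s_wh_nogap=9.0,
                                 s_nowh_gap=12.0, s_nowh_nogap=5.0)
print(score)  # (12 - 8) - (5 - 9) = 8.0, a positive licensing interaction
```

In practice each surprisal would come from a language model (e.g. summed token log-probabilities at the critical region), and scores are averaged over many items per construction.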
