Assistant Professor Huteng Dai and PhD Student Xueyang Huang recently gave a presentation at the 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP), a major conference in the field of Natural Language Processing, held in Suzhou, China. Titled “Mind the Gap: How BabyLMs learn Filler-Gap Dependencies”, the project was the result of collaborative work with others, including PhD Candidate Olawale Akingbade, and originated as a class project in Professor Dai’s Language and Information class in Winter 2025.
The abstract of their talk is given below, and the full proceedings paper can be found online here.
Abstract: Humans acquire syntactic constructions like filler-gap dependencies from limited and often noisy input. Can neural language models do the same? We investigate this question by evaluating GPT-2 models trained on child-oriented input from the BabyLM Challenge. Our experiments focus on whether these “baby” language models acquire filler-gap dependencies, generalize across constructions, and respect structural constraints such as island effects. We apply a suite of syntactic constructions to four models trained on child language, including two base models (trained on 10M and 100M tokens) and two well-performing models from the BabyLM Challenge (ConcreteGPT and BabbleGPT). We evaluate model behavior using wh-licensing scores, flip tests, and grammaticality contrasts across four constructions. Results show that BabyLM-scale models partially acquire filler-gap dependencies but often fail to generalize or fully capture island constraints.
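For readers unfamiliar with wh-licensing scores, the sketch below illustrates one common way such a score can be computed, following the 2x2 (filler x gap) interaction design familiar from prior work on filler-gap dependencies in language models. It is not the authors’ evaluation code: the checkpoint name, the example sentences, and the choice of critical region are hypothetical placeholders, and the exact metric in the paper may differ.

```python
# Illustrative sketch of a wh-licensing interaction score (not the paper's code).
# Design: 2x2 over [+/- wh-filler] x [+/- gap]; a positive interaction means the
# filler lowers surprisal at the gap site more than it does when the position is filled.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model_name = "gpt2"  # a BabyLM-trained GPT-2 checkpoint could be substituted here
tokenizer = GPT2TokenizerFast.from_pretrained(model_name)
model = GPT2LMHeadModel.from_pretrained(model_name)
model.eval()

def surprisal(prefix: str, target: str) -> float:
    """Total surprisal (in nats) of `target` conditioned on `prefix`."""
    prefix_ids = tokenizer(prefix, return_tensors="pt").input_ids
    target_ids = tokenizer(target, return_tensors="pt").input_ids
    input_ids = torch.cat([prefix_ids, target_ids], dim=1)
    with torch.no_grad():
        log_probs = torch.log_softmax(model(input_ids).logits, dim=-1)
    total = 0.0
    offset = prefix_ids.shape[1]
    for i in range(target_ids.shape[1]):
        token_id = input_ids[0, offset + i]
        # The token at position offset+i is predicted from position offset+i-1.
        total -= log_probs[0, offset + i - 1, token_id].item()
    return total

# Hypothetical materials; the critical region is the post-gap continuation.
conds = {
    ("+filler", "+gap"): "I know what the child ate",
    ("-filler", "+gap"): "I know that the child ate",
    ("+filler", "-gap"): "I know what the child ate the cake",
    ("-filler", "-gap"): "I know that the child ate the cake",
}
critical = " yesterday."
s = {k: surprisal(v, critical) for k, v in conds.items()}

licensing = (s[("-filler", "+gap")] - s[("+filler", "+gap")]) \
          - (s[("-filler", "-gap")] - s[("+filler", "-gap")])
print(f"wh-licensing interaction: {licensing:.3f} nats (positive = licensing)")
```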
