

Spring Colloquium

The Department's graduate students organize an annual Spring Colloquium, featuring multiple speakers on a single topic or theme. Graduate students comment on each talk. The Department has hosted a Spring Colloquium every year since the first weekend program in 1990. 

The Spring Colloquium is funded by the James B. and Grace J. Nelson Endowment for the Teaching of Philosophy.

2025 Spring Colloquium

Philosophy for a Changing World: Theorizing in the Age of Information Technology

 

Friday, April 4
Location: 2210 ABC, Michigan Union

1:30-2pm: Opening Remarks, Prof. Gordon Belot 

2-4pm: Don Fallis (Northeastern), “How Serious Is the Epistemic Threat of Digital Fakes?”

Comments: Alison Weinberger

Chair: Dennis Lee

Abstract: It has been suggested that digital fakes—deepfakes, fake news, bots posing as humans, and the like—could lead to an “infopocalypse” where we can no longer tell what is real. The most obvious epistemic threat is that people can end up with numerous false beliefs about the world if they take these digital fakes to be genuine. But perhaps more importantly, digital fakes can also undermine our trust in valuable epistemic resources, such as videos, newspapers, and human testifiers. However, several philosophers (e.g., Atencia-Linares and Artiga 2022, Chalmers 2022, Harris 2022, Habgood-Coote 2023, Simon et al. 2023, Williams 2023) have recently tried to downplay the epistemic threat of digital fakes. Leveraging philosophical work on art forgeries and counterfeit currency, we offer a conceptual analysis of fakes. We use this analysis to show that the arguments of these philosophers fail.

 

4:15-6:15pm: Salome Viljoen (Michigan), “Privacy Puzzles”

Comments: Sophia Wushanley

Chair: Lorenzo Manuali

Abstract: This Article uses two recent privacy cases to argue that current U.S. privacy law—and in particular, what counts as a ‘privacy harm’—has become an overburdened concept. Privacy, and what counts as its violation, has long referred to a set of related yet distinct concepts. However, in recent years privacy harm has expanded to answer for (or name) a greater set of social and political concerns. This expanded approach has rightly emphasized the important role of privacy as a precondition for other socially necessary or desirable goods, and it has provided a much-needed corrective to the view of privacy as of marginal political and legal concern (particularly for members of vulnerable social groups). However, this big-tent approach to privacy and privacy harm also increases the internal tension—and perhaps confusion—within the category. This has both a programmatic and a conceptual cost. As privacy indexes a greater set of interests, internal privacy conflicts over core principles and guiding values can become more numerous and more substantial. This overly expansive terrain of internalist debate risks exacerbating privacy’s problems, empowering its enemies, and resulting in unintended (and harmful) doctrinal consequences. It also papers over a distinct set of public, or social, legal interests in information about people. Removing these interests from the privacy bucket and standing them up on their own can help resolve privacy’s internal puzzles. Disambiguating social informational interests from private ones can also grant conceptual (and perhaps eventually doctrinal) standing to a class of legal interests that are of growing importance to legal rights in an information society.

 

Saturday, April 5
Location: 2210 ABC, Michigan Union

1-3pm: Hanti Lin (UC Davis), “A Plea for History and Philosophy of Statistics and Machine Learning—with a Little Story of Achievabilism”

Comments: DJ Arends

Chair: Mitch Barrington

Abstract: Philosophy of science encompasses diverse research programs, with formal epistemology and history and philosophy of science (HPS) at opposite ends of the spectrum—the former mathematical, the latter historical. Despite their differences, a synthesis is needed. Statistics and its younger sibling, machine learning, are theories of scientific inference developed by scientists and for scientists; as such, they warrant epistemologists’ attention. Yet philosophical investigations into these fields require two key virtues: (i) extraction and scrutiny of epistemological ideas from mathematical language, given the extensive use of probability theory, and (ii) attention to historical context, as these ideas often emerge implicitly in response to concrete scientific problems. Hacking (1965) recognized early on the need to integrate these virtues, and the case for doing so is even stronger today, given recent advances in machine learning. This integration leads to one possible conception of HPS+: "History and Philosophy of Science—plus formal epistemology," as well as "History and Philosophy of Statistics—and machine learning." I illustrate this with a historical case study on a recurring yet underexplored idea that seems crucial to the foundations of statistics and machine learning. I call it achievabilism—the thesis that the standards for assessing non-deductive inference methods should not be invariant but instead sensitive to what is achievable in specific empirical contexts. Achievabilism has rarely (if ever) been articulated explicitly, leading to its repeated reinvention in practice: first by Neyman & Pearson (1936) in classical statistics, then by Freedman (1963) in Bayesian statistics, by Putnam (1965) and Gold (1967) in formal learning theory, and later by Shalev-Shwartz & Ben-David (2014) in statistical/machine learning theory. After identifying achievabilism, I will also explore its epistemological implications, ranging from the widespread skepticism toward formal theories of scientific inference to the debate between externalism and internalism in traditional epistemology. This talk assumes only minimal mathematical background—just elementary probability theory.

 

3:30-5:30pm: Kathleen Creel (Northeastern), “Algorithmic Monoculture and Systemic Exclusion”

Comments: Lorenzo Manuali

Chair: Alison Weinberger

Abstract: Mistakes are inevitable, but fortunately human mistakes are typically heterogeneous. Using the same machine learning model for high-stakes decisions creates consistency while amplifying the weaknesses, biases, and idiosyncrasies of the original model. When the same person re-encounters the same model, or models trained on the same dataset, she might be wrongly rejected again and again. Thus algorithmic monoculture could lead to consistent ill-treatment of individual people by homogenizing the decision outcomes they experience. Is it wrong to allow the quirks of an algorithmic system to consistently exclude a small number of people from consequential opportunities? Many philosophers have claimed or indicated in passing that consistent and arbitrary exclusion is wrong, even when it is divorced from bias or discrimination. But why, and under what circumstances, it is unfair has not yet been established. This talk will formalize a measure of outcome homogenization, describe experiments that demonstrate that it occurs, and then present an ethical argument for why and in what circumstances outcome homogenization is unfair.