Curated Cognition

PhD Dissertation

The dissertation introduces the term “curated spaces,” which builds on the concept of situated cognition. Many of the environments we encounter in daily life are designed to support particular tasks, and we seek out these spaces to accomplish our goals. A grocery store, for instance, is designed to help you buy food. When a space is designed with this intention and we use it to pursue our goals, I call it a curated space, and I call the kind of situated cognition that occurs in such spaces “curated cognition.” Research on situated cognition explores how we use external features of the environment to solve problems; “curated cognition” draws attention to the fact that environments are often deliberately designed to shape our mental states and behavior.

The dissertation demonstrates that curation is not confined to art galleries, as the term might suggest: we live our daily lives in curated spaces. Its chapters explore different facets of how such spaces shape our lives. One chapter concerns the experience of sound in curated spaces. Another argues that even natural spaces such as national parks are highly groomed and curated. I also argue that uncultivated spaces, those physically untouched by human design, still inspire psychological responses that qualify as curated, because of the history of curated experiences we have internalized. A further chapter takes up social and political themes, exploring the many ways in which American neighborhoods have been designed to control citizens, comparing middle-class America with ghettoized America. The dissertation concludes with a discussion of digital platforms curated for us by artificial intelligence, such as Netflix, Instagram, Twitter (X), and Spotify.

The Missing I in AI

Co-authored with Jesse Prinz

Overview: The recent advances of large language models (LLMs) have reinvigorated debates about the psychological capacities of conversational computing systems. Much of the debate has centered on questions that are broadly semantic, concerning meaning and understanding: Do the linguistic outputs of LLMs mean anything? Do these systems really understand their linguistic inputs and outputs? Emphasis on these questions is rooted in the history of AI debates. John Searle’s influential argument that you cannot get meaning from syntax, and the subsequent symbol grounding problem, helped cement these as the core questions for the field. Anxieties emerged early on that conversational abilities could mislead users about cognitive capacities (the Eliza illusion), and current debates revisit those worries with respect to deep neural networks built on transformer architectures. People ask: do current LLMs really understand the words they seem to use so fluently, or are they merely word predictors? This is usually framed as a question of whether the tokens in these models refer (semantics) and whether the models understand the sentences they produce. The very term “intelligence,” as it is used in the AI literature, connotes meaning and understanding. But there is another “I” that has received far less attention: the “I” of selfhood or identity. Can current LLMs be credited with having artificial identities? Is there an I in these machines? Call this the I-question. We think the I-question deserves a resoundingly negative answer. Moreover, we think the absent I in AI systems points to a limitation that is, in important respects, more profound than any limitations in meaning and understanding. As the field wrestles with assessing and improving semantic and epistemic capacities, it neglects selfhood, and that neglect leaves a vast gap between the mentality of machines and our own.

Counter-Surveillance and Big Tech

Co-authored with Kathryn Petrozzo

Foucault argues that even the mere idea of surveillance can change the social behavior of citizens: “Surveillance is permanent in its effects, even if it is discontinuous in its action.” Knowing one could be watched produces harmful psychological ripples in the citizen, and more so in the citizen most likely to be watched. Marginalized communities have always been under governmental surveillance, which has harmed their mental health. In our current era of rapid technological advances, algorithms, artificial intelligence, and deep neural networks trained on collected data, what hope do the surveilled have? In this paper, we argue in favor of surveillance, but only as a necessary measure in the form of counter-surveillance conducted by unjustly surveilled marginalized groups.

Trying out Tahir Hemphill’s installation “Mapper’s Delight” (2022) at the exhibition Data Consciousness: Reframing Blackness in Contemporary Print at the Print Center New York.