Minimalist Grammars Parsing as a Model of Human Sentence Processing

Computationally specified parsing algorithms connect linguistics, psychology, and computer science, and can be used to ask precise questions about human processing behavior. My most recent work in this direction uses Stabler's (2013) top-down parser for Minimalist grammars (MGs) to predict processing difficulty based on how the structures built by the parser affect memory usage.
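
To make this linking hypothesis concrete, here is a minimal sketch of a tenure-style memory metric in this tradition, assuming each node of a derivation is annotated with the step at which the top-down parser first predicts it and the step at which it is finally processed. The annotations, names, and data format are illustrative, not the actual implementation.

```python
# A tenure-style metric sketch: tenure measures how long an item
# sits in the parser's memory before being worked on. The
# index/outdex annotations below are hypothetical.

def tenure(node):
    return node["outdex"] - node["index"]

def max_tenure(derivation):
    # One common summary metric: the longest any item is held in memory.
    return max(tenure(node) for node in derivation)

# Toy derivation: three items with made-up index/outdex values.
derivation = [
    {"index": 1, "outdex": 2},
    {"index": 2, "outdex": 7},  # held across intervening material
    {"index": 3, "outdex": 4},
]
print(max_tenure(derivation))  # -> 5
```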

For instance, I have shown how the MG parser correctly predicts preverbal vs. postverbal subject preferences in Italian, across a variety of constructions.

In my dissertation, I expand on this work by defining complexity metrics based on cognitively plausible assumptions about human memory mechanisms. Through these metrics, I am also exploring the contribution of grammatical features to memory consumption, evaluating the model's performance by looking at a variety of syntactic priming effects reported by psycholinguists. By investigating the unique contribution of the feature-driven grammar to sentence processing, this approach will clarify the link between grammatical knowledge and processing behavior.

I am also interested in bringing this model to bear on more general theoretical debates in the syntactic literature, thanks to its high sensitivity to fine-grained syntactic structure. As an example, I have been exploring the MG parser as a formal model of how gradient acceptability can arise from categorical grammars.

Nazila Shafiei and I have also been arguing for the use of parsing models with a computationally explicit linking hypothesis, so that experimental results can guide our choice of syntactic analyses. As a case study, we have looked at alternative analyses of the structure of Persian relative clauses.

The Subregular Complexity of Linguistic Dependencies

In the past few years, I have been particularly interested in the study of linguistic patterns from a formal language-theoretic perspective, especially within the framework of the subregular hierarchy. Here, you can watch me talk about how subregular characterizations highlight core parallels between phonology and syntax (thanks to Roberta D'Alessandro for the video!). My work in this area can be divided into several sub-projects.

On the formal side, I've proposed typologically grounded extensions to the class of tier-based strictly local (TSL) dependencies.
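
To give a flavor of the formalism, below is a minimal sketch of a tier-based strictly 2-local checker: all non-tier symbols are ignored, and well-formedness is evaluated over adjacent symbols on the tier. The sibilant harmony example is a standard illustration from this literature, not data from any specific paper.

```python
def tsl_accepts(word, tier, forbidden_bigrams):
    # Tier-based strictly local check: project the tier symbols,
    # pad with word boundaries, and reject the word if any
    # forbidden bigram shows up on the tier.
    projection = [s for s in word if s in tier]
    padded = ["#"] + projection + ["#"]
    return all(bigram not in forbidden_bigrams
               for bigram in zip(padded, padded[1:]))

# Toy sibilant harmony: "s" and "ʃ" may not co-occur, at any distance.
tier = {"s", "ʃ"}
forbidden = {("s", "ʃ"), ("ʃ", "s")}
print(tsl_accepts("salasa", tier, forbidden))  # True  (harmonic)
print(tsl_accepts("salaʃa", tier, forbidden))  # False (disharmonic)
```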

Together with Kevin McMullin and Alëna Aksënova, I'm now working on efficient learning algorithms for these classes.
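
These learners build on the classic string-extension idea, which the toy sketch below illustrates for strictly 2-local grammars: the learner records every bigram attested in positive data and treats everything unattested as illicit. The actual algorithms for tier-based classes, which must also infer the tier, are more involved.

```python
def learn_sl2(sample):
    # String-extension learning for strictly 2-local grammars:
    # record every bigram (with word boundaries) attested in the
    # positive sample; anything unattested counts as forbidden.
    attested = set()
    for word in sample:
        padded = ["#"] + list(word) + ["#"]
        attested.update(zip(padded, padded[1:]))
    return attested

def accepts(word, attested):
    padded = ["#"] + list(word) + ["#"]
    return all(bg in attested for bg in zip(padded, padded[1:]))

grammar = learn_sl2(["sasa", "ʃaʃa"])
print(accepts("sasasa", grammar))  # True: all bigrams attested
print(accepts("sʃa", grammar))     # False: ("s", "ʃ") unattested
```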

Moreover, I believe that our formal understanding of these classes can help us design better artificial grammar learning experiments that target precise biases in human learning.

Formal language theory can also help us settle long-standing linguistic debates. For example, Alëna and I have used this approach to argue in favor of derivational representations in morphology.

Generalized Quantifiers: Memory, Verification, and Priming

In the study of generalized quantifiers, it is essential to have an explicit theory of how their meanings are computed. In particular, I've been interested in exploring how different quantifiers (Aristotelian, cardinal, proportional) engage memory resources during encoding and verification, and how these effects relate to theories based on the approximate number system or on more precise counting mechanisms (e.g., the semantic automata model). In this line of inquiry, I have relied on experimental techniques such as pupillometry and EEG.
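
The semantic automata perspective can be sketched in a few lines: a model is encoded as a binary string, one bit per element of the restrictor set, and each quantifier is a device that accepts or rejects such strings. The quantifiers below are illustrative representatives of the three classes, a minimal sketch rather than the full automata-theoretic treatment.

```python
# Semantic automata sketch: encode a model as bits over the restrictor A
# (1 = element also in the scope B, 0 = not in B); each quantifier is a
# machine over such strings. These definitions are illustrative.

def encode(A, B):
    return [1 if x in B else 0 for x in A]

def every(bits):            # Aristotelian: finite-state, no counting
    return all(b == 1 for b in bits)

def at_least_three(bits):   # cardinal: finite-state, bounded counting
    return sum(bits) >= 3

def most(bits):             # proportional: requires unbounded counting
    return sum(bits) > len(bits) - sum(bits)

# Encoding order does not matter: these quantifiers are
# permutation-invariant, depending only on the counts of 1s and 0s.
A = {"a1", "a2", "a3"}
B = {"a1", "a2"}
bits = encode(A, B)              # some permutation of [1, 1, 0]
print(every(bits), most(bits))   # False True
```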

In a related project, Jon Rawski, John Drury, Amanda Yazdani, and I have begun exploring how quantified sentences can be used to pinpoint specific ERP markers of strategy switching during truth-value verification, and to understand how linguistic meaning and visual context interact during language processing.
