The Subregular Complexity of Linguistic Dependencies

In the past few years, I have been particularly interested in the study of linguistic patterns from a formal language-theoretic perspective, especially within the framework of the subregular hierarchy. Here, you can watch me talk about how subregular characterizations highlight core parallels between phonology and syntax (thanks to Roberta D'Alessandro for the video!). My work in this area can be divided into several sub-projects.

On the formal side, I have proposed typologically grounded extensions to the class of tier-based strictly local (TSL) dependencies.
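To give a flavor of the formalism, here is a minimal sketch of how a tier-based strictly 2-local grammar evaluates a string: segments belonging to a designated tier are projected out, and the grammar bans certain adjacent pairs on that tier. The tier alphabet and forbidden factors below are illustrative choices (modeling long-distance sibilant harmony), not a grammar from any particular paper.

```python
# A minimal sketch of a tier-based strictly 2-local (TSL-2) checker.
# The tier alphabet and forbidden factors are hypothetical, chosen to
# model sibilant harmony: sibilants are projected onto a tier, and
# disagreeing sibilants may not be adjacent there.

TIER = {"s", "S"}                     # illustrative tier: [s] and [ʃ] (written "S")
FORBIDDEN = {("s", "S"), ("S", "s")}  # banned adjacent pairs on the tier

def is_well_formed(word: str) -> bool:
    """Return True iff the word's tier projection contains no
    forbidden 2-factor (i.e., the word is TSL-2 licit)."""
    tier = [seg for seg in word if seg in TIER]
    return all((a, b) not in FORBIDDEN for a, b in zip(tier, tier[1:]))

# The dependency is enforced at arbitrary distances, since non-tier
# material is invisible to the grammar:
assert is_well_formed("sotos")       # s ... s  agrees on the tier
assert not is_well_formed("sotoS")   # s ... ʃ  disagrees on the tier
```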

Together with Kevin McMullin and Alëna Aksënova, I'm now working on efficient learning algorithms for these classes.

Moreover, I believe that our formal understanding of these classes can help us design better artificial grammar learning experiments and target precise biases in human learning.

Formal language theory can also help us settle long-standing linguistic debates. For example, Alëna and I have used this approach to argue in favor of derivational representations in morphology.

Minimalist Grammar Parsing and Processing Effects

Computationally specified parsing algorithms can be used to ask questions about human processing behavior, connecting linguistics, psychology, and computer science. In particular, in line with work by Kobele et al. (2013), I've been interested in the interaction of syntactic structure and memory resources, using Stabler's (2013) top-down parser for Minimalist grammars (MGs) coupled with a set of complexity metrics that predict processing difficulty based on how the structures built by the parser affect memory usage.
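As a concrete illustration, the sketch below computes two tenure-based metrics of the kind used in this literature (often called MaxT and SumT): how long a parse item predicted by the top-down parser must wait in memory before it is processed. The node labels and step numbers are hypothetical, chosen only to show the bookkeeping, not taken from an actual parse.

```python
# A minimal sketch of tenure-style complexity metrics in the spirit of
# Kobele et al. (2013). Each parse item is annotated with the step at
# which the top-down parser predicts it and the step at which it is
# finally processed; "tenure" is the time it spends held in memory.
# All node names and step indices below are illustrative.

items = {
    # node: (step_predicted, step_processed)
    "C":    (1, 2),
    "T":    (2, 3),
    "Subj": (2, 7),   # e.g., a subject held in memory across the verb
    "V":    (3, 4),
}

def tenure(predicted: int, processed: int) -> int:
    """Number of parser steps the item spends waiting in memory."""
    return processed - predicted

tenures = {node: tenure(*steps) for node, steps in items.items()}
max_tenure = max(tenures.values())   # MaxT: peak memory load
sum_tenure = sum(tenures.values())   # SumT: aggregate memory load

print(tenures, max_tenure, sum_tenure)
```

On metrics of this kind, structures that force items to be held longer in memory (here, "Subj") are predicted to be harder to process.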

Recent work that I have done in this framework shows that the MG parser correctly predicts preverbal vs. postverbal subject preferences in Italian across a variety of constructions (e.g. declarative sentences with unaccusative vs. unergative verbs, relative clauses, etc.).

Generalized Quantifiers: Memory, Verification, and Priming

In the study of generalized quantifiers, it is essential to have an insightful theory of how their meaning is computed. In particular, I've been interested in exploring how different quantifiers (Aristotelian, cardinal, proportional) engage memory resources during encoding and verification, and how these effects relate to theories based on the approximate number system or on more precise counting mechanisms (e.g. the semantic automata model).
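The semantic automata model makes the contrast between these quantifier classes concrete: a scene is encoded as a string of 1s (restrictor elements satisfying the scope) and 0s (those that don't), and each quantifier corresponds to a device accepting exactly the true scenes. The sketch below follows that standard encoding; the concrete functions are my own illustrative implementations, not a specific published model.

```python
# A minimal sketch of the semantic automata idea: quantifiers as
# acceptors over 0/1-encoded scenes. The encodings are standard; the
# functions simulate the relevant device classes in plain Python.

def every(scene: str) -> bool:
    # Aristotelian "every": a two-state finite automaton that
    # rejects as soon as it sees a 0.
    return "0" not in scene

def at_least_three(scene: str) -> bool:
    # Cardinal quantifier: a finite automaton with a bounded counter.
    return scene.count("1") >= 3

def most(scene: str) -> bool:
    # Proportional quantifier: requires comparing two unbounded
    # counts, i.e., a pushdown automaton rather than a finite-state one.
    return scene.count("1") > scene.count("0")

# "Every A is B" is true of a scene with no exceptions:
assert every("111") and not every("101")
assert most("1101") and not most("0100")
```

The jump from finite-state to pushdown devices is precisely what makes proportional quantifiers an interesting test case for memory engagement during verification.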

In a related project, Jon Rawski, John Drury, Amanda Yazdani, and I have begun exploring how quantified sentences can be used to pinpoint specific ERP markers of strategy switching during truth-value verification, and to understand how linguistic meaning and visual context interact during language processing.
