In the past few years, I have been particularly interested in the study of linguistic patterns from a formal language-theoretical perspective, particularly in the framework of the subregular hierarchy. Here, you can watch me talk about how subregular characterizations highlight core parallels between phonology and syntax (thanks to Roberta D'Alessandro for the video!). My work in this area can be divided into several sub-projects.
On the formal side, I've proposed typologically grounded extensions to the class of tier-based strictly local dependencies.
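To give a flavor of what a tier-based strictly local (TSL) dependency looks like computationally, here is a minimal sketch, not drawn from my papers: a TSL-2 grammar projects a designated subset of symbols onto a tier and then bans certain bigrams on that tier. The toy alphabet below (with `S` standing in for ʃ) and the harmony pattern are illustrative assumptions, modeled on the long-distance sibilant harmony cases commonly used to motivate TSL.

```python
def tsl_accepts(word, tier, forbidden_bigrams):
    """Check a word against a TSL-2 grammar:
    1. project only the tier symbols,
    2. ban the listed bigrams on the projected tier."""
    projected = [c for c in word if c in tier]
    # Pad with word boundaries so edge-sensitive restrictions could also be stated.
    t = ["#"] + projected + ["#"]
    return all((a, b) not in forbidden_bigrams for a, b in zip(t, t[1:]))

# Toy sibilant harmony: 's' and 'S' (ʃ) may not co-occur in either order,
# no matter how much non-sibilant material intervenes.
tier = {"s", "S"}
forbidden = {("s", "S"), ("S", "s")}

print(tsl_accepts("sadas", tier, forbidden))   # harmonic: accepted
print(tsl_accepts("sadaS", tier, forbidden))   # disharmonic: rejected
```

Because the intervening vowels and consonants never reach the tier, the dependency is unbounded in the string but strictly local on the tier, which is exactly what makes the class attractive for typological work.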
Moreover, I believe that our formal understanding of these classes can help us design better artificial grammar learning experiments, and target precise biases in human learning.
Formal language theory can also help us settle long-standing linguistics debates. For example, Alëna and I have used this approach to argue in favor of derivational representations in morphology.
Computationally specified parsing algorithms can be used to ask questions about human processing behavior, connecting linguistics, psychology, and computer science. In particular, in line with work by Kobele et al. (2013), I've been interested in understanding the interaction of syntactic structure and memory resources, using Stabler's (2013) top-down parser for Minimalist grammars (MGs) coupled with a set of complexity metrics that predict processing difficulty based on how the structures built by the parser affect memory usage.
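As a rough illustration of how such metrics work, consider tenure, one of the memory-load measures used in this line of work: each node a top-down MG parser conjectures gets an index (the step at which it enters memory) and an outdex (the step at which it is finally worked on), and tenure is the difference between the two. The sketch below is mine, and the step annotations are made-up numbers for two hypothetical derivations, not an analysis from any actual paper.

```python
def max_tenure(steps):
    """MaxTenure summarizes a derivation by its longest-held node.
    `steps` is a list of (index, outdex) pairs, one per node:
    tenure = outdex - index = how many parser steps the node
    waits in memory between being predicted and being processed."""
    return max(outdex - index for index, outdex in steps)

# Hypothetical step annotations for two competing derivations of one sentence:
deriv_a = [(1, 2), (2, 3), (3, 9)]   # one node is held for 6 steps
deriv_b = [(1, 2), (2, 4), (4, 5)]   # no node is held longer than 2 steps

print(max_tenure(deriv_a))  # 6
print(max_tenure(deriv_b))  # 2
```

A linking hypothesis of the form "higher MaxTenure, harder processing" would then predict the structure behind `deriv_b` to be easier, and that prediction can be checked against experimental preferences.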
Recent work that I have done in this framework shows that the MG parser correctly predicts preverbal vs. postverbal subject preferences in Italian across a variety of constructions (e.g. declarative sentences with unaccusative vs. unergative verbs, relative clauses, etc.).
Nazila Shafiei and I have also been arguing for using parsing models with a computationally explicit linking hypothesis to bridge the gap between psycholinguistics and theoretical syntax, for example by having experimental results guide our choice of syntactic analyses. As a case study, we have looked at alternative structures for Persian relative clauses.
In the study of generalized quantifiers, it is essential to have an insightful theory of how their meaning is computed. In particular, I've been interested in exploring how different quantifiers (Aristotelian, cardinal, proportional) engage memory resources during encoding and verification, and how these effects relate to theories based on the approximate number system or on more precise counting systems (e.g. the semantic automata model).
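The semantic automata perspective mentioned above can be made concrete with a small sketch of my own (the encoding follows van Benthem's standard setup, but the specific functions are just illustrations): a scene is encoded as a bit string over the restrictor set, with 1 for each element that is also in the scope set, and each quantifier is a machine reading that string. Aristotelian and cardinal quantifiers need only finite state, while proportional ones like "most" need an unbounded counter, which is one source of the predicted differences in verification difficulty.

```python
def every(scene):
    """Aristotelian: a two-state acceptor that rejects on the first 0."""
    return all(bit == 1 for bit in scene)

def at_least_three(scene):
    """Cardinal: counts 1s only up to the threshold, so finitely many states."""
    count = 0
    for bit in scene:
        if bit == 1:
            count = min(count + 1, 3)
    return count == 3

def most(scene):
    """Proportional: tracks an unbounded 1s-minus-0s balance
    (beyond finite-state; needs pushdown/counter power)."""
    balance = 0
    for bit in scene:
        balance += 1 if bit == 1 else -1
    return balance > 0

# Scene with four As, three of which are B: "every A is B" is false,
# "at least three As are B" and "most As are B" are true.
scene = [1, 1, 0, 1]
print(every(scene), at_least_three(scene), most(scene))  # False True True
```

This difference in required machinery is what lets the model tie quantifier type to memory engagement during verification.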
In a related project, Jon Rawski, John Drury, Amanda Yazdani, and I have begun exploring how quantified sentences can be used to pinpoint specific ERP markers of strategy switching during truth-value verification, and to understand how linguistic meaning and visual context interact during language processing.