Research
Working Papers
- “Mental Models, Social Learning and Statistical Discrimination: A Laboratory Study”
Abstract: Behavioral biases in decision-making are widespread and often persist even in the presence of feedback that is diagnostic of optimal behavior. Using a laboratory experiment, I examine whether exposure to others' decisions can correct initial misconceptions and facilitate learning in a worker hiring task where deviations from the theoretical benchmark arise from neglecting an informative signal. I show that exposure to optimal behavior substantially improves decision quality, while exposure to suboptimal choices has a weak negative effect. The analysis reveals substantial heterogeneity in learning beyond exposure: roughly one-third of subjects mechanically replicate observed choices and revert to suboptimal behavior once exposure ends, another third maintain improved performance only when the task environment remains unchanged, and the remaining third exhibit deeper learning that transfers to a related task with altered primitives. These findings highlight the dual role of social learning: while it can enhance decision-making, it can also generate imitation that fails to generalize beyond the observed context.
Presented at: Southwest Economic Theory Conference (Loyola Marymount University 2026), American Economic Association Mentoring Conference (Chicago 2025), ESA North American Meeting (Columbus 2024), Behavioral & Experimental Economics Student Conference (Caltech 2024), Los Angeles Experiments Conference (poster; Caltech 2024)
Published Papers
- “Beliefs, Information Sharing, and Mental Health Care Use Among University Students” (with Ida Grigoryeva, Bruno Calderon, Roberto González, and Alejandro Guardiola Ramires)
Journal of Development Economics (Forthcoming)
Abstract: This paper investigates the role of beliefs and stigma in shaping students' use of professional mental health services at a large private university in Mexico, where supply-side barriers are minimal and services are readily accessible. In a survey experiment with 680 students, we find that nearly 50% of students in distress do not receive professional mental health support despite high awareness and perceived effectiveness of the services, constituting a substantial treatment gap. We document stigmatized beliefs and misconceptions correlated with this gap. Because three-quarters of students incorrectly believe that those in distress perform worse academically and that the majority of students going to therapy are in severe distress, we implement an information intervention to correct these beliefs. We find that it increases students' sharing of on-campus mental health resources with peers and encourages them to recommend these resources when advising a friend in distress. Interestingly, it lowers respondents' willingness to pay for private therapy at the end of the intervention. Yet this effect does not translate into a long-run reduction in self-reported therapy use six months after the experiment, with prior therapy users showing increased off-campus take-up.
Presented at: Advances with Field Experiments Conference (UChicago 2025), NHH Field Experiments Conference (Bergen 2024)*, Behavioral & Experimental Economics Student Conference (Caltech 2023)
Work in Progress
- “Learning to Ignore Irrelevant Contingencies: An Experiment” (with Jeongbin Kim and Emanuel Vespa)
Data collection completed
Presented at: ESA North American Meeting (Tucson 2025), Behavioral & Experimental Economics Student Conference (UC Santa Barbara 2025)
- “Experiment on Narratives and Information in Persuasion” (with Bridget Galaty)
Pilot completed
Presented at: Behavioral & Experimental Economics Student Conference (UC Santa Barbara 2025)*
- “Descriptive Simplicity in Strategyproof Matching Mechanisms”
Designing experiment
Other Published Papers
- “Mass Reproducibility and Replicability: A New Hope” (meta paper with Abel Brodeur et al.)
Nature (Forthcoming)
Abstract: This study advances our understanding of research reliability by reproducing and replicating claims from 110 papers in leading economics and political science journals. The analysis involves computational reproducibility checks and robustness assessments. It reveals several patterns. First, we uncover a high rate of fully computationally reproducible results (over 85%). Second, excluding minor issues like missing packages or broken pathways, we uncover coding errors in about 25% of studies, with some studies containing multiple errors. Third, we test the robustness of the results across 5,511 re-analyses and find a robustness reproducibility of about 70%. Robustness reproducibility rates are relatively higher for re-analyses that introduce new data and lower for re-analyses that change the sample or the definition of the dependent variable. Fourth, 52% of re-analysis effect-size estimates are smaller than the original published estimates, and the average statistical significance of a re-analysis is 77% of the original. Lastly, we rely on six teams of researchers working independently to answer eight additional research questions on the determinants of robustness reproducibility. Most teams find a negative relationship between replicators' experience and reproducibility, while finding no relationship between reproducibility and the provision of intermediate or even raw data combined with the necessary cleaning codes.
* – presentation by co-author
