- “Mental Models, Social Learning and Statistical Discrimination: A Laboratory Study”
See others’ optimal choices: / learn by imitating, but / not when environment changes

Abstract: In economic decision-making, individuals often rely on subjective representations of the environment to process information and make inferences. Using a laboratory experiment, I investigate how such mental models change when people are exposed to the evaluations of others, particularly in scenarios where one or more parties may adopt an incorrect or misspecified model. Participants face a hiring task in which their goal is to choose the worker with higher ability by integrating a noisy education signal with prior group information. Treatment conditions vary subjects' exposure to another participant's choices: one group observes evaluations closely aligned with the theoretical Bayesian benchmark, while another observes evaluations consistent with signal neglect. I find that exposure to optimal behavior improves decision quality, with treated subjects making up to 22 percentage points more optimal choices. However, many participants appear to imitate others' decisions without internalizing the correct decision rule, leading to mislearning once the primitives of the environment change. Using a diagnostic treatment, I find that helping subjects recognize the optimality of others' choices only partially improves their decisions. Conversely, exposure to suboptimal choices has a weaker and statistically insignificant negative effect, with lower confidence in one's own choices strongly associated with following others' suboptimal actions. These findings highlight the dual role of social learning: while it can enhance decision-making, it also fosters mechanical imitation that fails to generalize beyond the observed context.
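A minimal sketch of the contrast described above, with entirely hypothetical numbers and a stylized signal structure (none of the parameters, names, or values come from the paper): a Bayesian evaluator updates group base rates with a noisy education signal, while a signal-neglect evaluator ranks workers on the base rates alone.

```python
# Hypothetical illustration only -- not the paper's design or parameterization.
def posterior_high_ability(prior_high, signal_positive, accuracy=0.7):
    """P(high ability | education signal), assuming the binary signal is
    correct with probability `accuracy` regardless of group."""
    p_sig_given_high = accuracy if signal_positive else 1 - accuracy
    p_sig_given_low = 1 - accuracy if signal_positive else accuracy
    joint_high = p_sig_given_high * prior_high
    return joint_high / (joint_high + p_sig_given_low * (1 - prior_high))

# Worker A: low-base-rate group (30% high ability) but a positive signal.
# Worker B: higher-base-rate group (50% high ability) and no positive signal.
p_a = posterior_high_ability(prior_high=0.30, signal_positive=True)   # = 0.50
p_b = posterior_high_ability(prior_high=0.50, signal_positive=False)  # = 0.30

bayesian_choice = "A" if p_a > p_b else "B"          # integrates the signal -> "A"
signal_neglect_choice = "A" if 0.30 > 0.50 else "B"  # ignores the signal    -> "B"
print(bayesian_choice, signal_neglect_choice)
```

Under these made-up numbers, neglecting the signal flips the hiring choice, which illustrates how a Bayesian benchmark and signal-neglect behavior can diverge in a task of this kind.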
- “Stigma, Beliefs and Demand for Mental Health Services Among University Students” – under review
(with Ida Grigoryeva, Bruno Calderon, Roberto González and Alejandro Guardiola Ramires)
Students don’t go to therapy, why? / Unlikely inaccurate beliefs: / info treatment decreases demand

Abstract: This paper investigates the role of beliefs and stigma in shaping students' use of professional mental health services at a large private university in Mexico, where supply-side barriers are minimal and services are readily accessible. In an online experiment with 680 students, we estimate a large treatment gap: nearly 50% of students in distress do not receive professional mental health support despite high awareness and perceived effectiveness of the services. We document stigmatized beliefs and misconceptions correlated with the treatment gap. For example, three-quarters of students incorrectly believe that those in distress perform worse academically, and many underestimate how common therapy use is among their peers. To correct inaccurate beliefs, we implement an information intervention and find that it increases students' willingness to share on-campus mental health resources with peers and encourages them to recommend these resources when advising a friend in distress. However, we also find that it lowers their willingness to pay for external services, suggesting a potential substitution effect from private therapy to free on-campus resources.
- “Mass Reproducibility and Replicability: A New Hope” (2024)
(with Abel Brodeur, Derek Mikola, Nikolai Cook, et al.)
Replicate 110 papers: / computational reproducibility 85% and / robustness reproducibility 70%

Abstract: This study advances our understanding of research reliability by reproducing and replicating claims from 110 papers in leading economics and political science journals. The analysis involves computational reproducibility checks and robustness assessments, and it reveals several patterns. First, we uncover a high rate of fully computationally reproducible results (over 85%). Second, excluding minor issues such as missing packages or broken file paths, we uncover coding errors in about 25% of studies, with some studies containing multiple errors. Third, we test the robustness of the results in 5,511 re-analyses and find a robustness reproducibility of about 70%. Robustness reproducibility rates are relatively higher for re-analyses that introduce new data and lower for re-analyses that change the sample or the definition of the dependent variable. Fourth, 52% of re-analysis effect size estimates are smaller than the original published estimates, and the average statistical significance of a re-analysis is 77% of the original. Lastly, we rely on six teams of researchers working independently to answer eight additional research questions on the determinants of robustness reproducibility. Most teams find a negative relationship between replicators' experience and reproducibility, while finding no relationship between reproducibility and the provision of intermediate or even raw data combined with the necessary cleaning code.