Kennedy School Review

Macro Lessons From a Micro-Experiment: Behavioral Insights for Policy Students

BY ROBERT REYNOLDS

Policy students interested in nudging must run experiments. Yet rigorous experiments require substantial time, funding, and expert guidance. Because most students lack these resources, behavioral enthusiasts rarely complete experiments while in policy school. This should change.

So how can policy students run experiments without the time, funding, and expertise that professional studies demand? They can run micro-experiments: small studies whose primary purpose is learning about experimentation.

Three lessons from my micro-experiment

My support for micro-experiments comes from the lessons I learned through a small experiment of my own. For my study, I partnered with a political candidate who believed that some people were skimming or altogether ignoring the mail she sent. With this behavioral problem in mind, I hypothesized that adding a handwritten note to envelopes would increase (i) the number and (ii) the size of donations she received (our two dependent variables). This nudge was based on the finding that adding a handwritten note to envelopes for overdue sewer bills increased payments by 34.2 percentage points.

This micro-experiment was a cheap, fast way to surface important experimental principles I had been unaware of. Among other lessons, this study taught me the importance of piloting experiments, having an explicit study design, and knowing an experiment’s positive predictive value.

Lesson #1: Pilot your experiment

All 1,000 envelopes (control and treatment) contained a letter signed by the candidate, and the 500 treatment envelopes had a handwritten note—“[recipient name], I really hope to hear from you soon”—written by a campaign worker. After completing the experiment, I learned that some recipients noticed that the handwriting on the envelope was different from the handwriting in the letter. Because of this error, our results were biased by whether individuals in the treatment group noticed the difference. The lesson: had we run a small pilot (on as few as 10 people) before the full experiment, we likely would have caught this mistake, corrected it, and launched a better experiment.

Lesson #2: Have an explicit study design

I launched this experiment on July 12 and planned to measure donations received over one month. On August 12, however, the campaign received so many donations that the exact meaning of “one month of data collection” became consequential. Ending data collection on August 11 showed the treatment outperforming the control by 23 percent, while ending on August 12 showed only a six percent difference. The lesson is that judgment calls arise when researchers have sloppy study designs. This taught me to treat other researchers’ findings with skepticism until I am confident that their results are not the product of self-serving judgment calls.

Lesson #3: Know your experiment’s positive predictive value

In medicine, positive predictive value is the probability that a patient actually has a disease given a positive test result. Similarly, a behavioral science study’s positive predictive value is the likelihood that a statistically significant finding reflects a real effect rather than a false positive. Because of its small sample size and low donation rates, my experiment’s positive predictive value was less than 25 percent. This means that even with a statistically significant difference between the two conditions, there was at least a 75 percent chance that the difference was simply due to chance. This taught me to read studies with low positive predictive values with appropriate doubt.
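To make this concrete, here is a minimal sketch of the standard calculation, in which positive predictive value depends on a study’s statistical power, its significance threshold, and the prior probability that the tested effect is real. The numeric inputs are hypothetical illustrations chosen to show how a value under 25 percent can arise; they are not my experiment’s actual parameters, which I did not formally estimate.

```python
# Illustrative sketch of positive predictive value (PPV).
# PPV = P(real effect | statistically significant result)
#     = (power * prior) / (power * prior + alpha * (1 - prior))

def positive_predictive_value(power: float, alpha: float, prior: float) -> float:
    """Probability that a significant result reflects a real effect.

    power -- probability of detecting a true effect (1 - beta)
    alpha -- false-positive rate of the significance test
    prior -- prior probability that the tested effect is real
    """
    true_positives = power * prior
    false_positives = alpha * (1 - prior)
    return true_positives / (true_positives + false_positives)

# A well-powered study: 80% power, alpha = 0.05, even-odds prior.
print(positive_predictive_value(power=0.80, alpha=0.05, prior=0.5))  # ~0.94

# A hypothetical underpowered micro-experiment: with ~6% power and a
# 20% prior, a "significant" result is real less than a quarter of
# the time.
print(positive_predictive_value(power=0.06, alpha=0.05, prior=0.2))  # ~0.23
```

With conventional 80 percent power, a significant result is very likely real; with the severe underpowering typical of micro-experiments, it is more likely noise than signal.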

Conclusion

For policy students interested in behavioral science, micro-experiments provide unmatched experiential learning opportunities. Micro-experiments teach students what they did not know they did not know, help them refine their interests, and make them better evaluators of other researchers’ studies and better behavioral science practitioners. For these reasons, behavioral enthusiasts should complete micro-experiments while in policy school.

The opinions expressed in this article are the author’s own and do not necessarily reflect the view of any organization he is affiliated with.

Robert Reynolds, HKS MPP ‘15, is an Associate at ideas42, a non-profit behavioral science consultancy. While at Harvard he founded the Behavioral Insights Student Group, a student-led club that hosts lunch events with experts working in applied behavioral insights, seminars with faculty, and the year-long Experimental Pitch Innovation Competition (EPIC), which gives students the opportunity to attend workshops on running RCTs, experimental design, and the use of survey software, and ultimately to design and run their own RCT.

Photo Credit: Julia Lindpaintner