If you've worked on political campaigns, you've probably heard the phrase "microtargeting" and probably associated it with some sophisticated way of pinpointing voters to persuade. Recently, some people have questioned the usefulness of microtargeting, saying that there's no proof it's effective. A colleague mentioned to me today that her campaign is considering using microtargeting, so I thought I'd take a moment to write a blog post about the subject.
Traditional microtargeting works by calling a selection of voters and interviewing them about a variety of topics related to the election. Within these surveys, the voter is asked several times for their opinion about the issue or candidate. The questions are often framed as a difficult value choice: most people would answer yes to "Do you support more funding for schools?", but if you ask "Would you be willing to put more money towards schools if it meant less money for policing?", respondents are forced to make a value choice that, according to microtargeters, reveals more about their underlying beliefs.
Microtargeters look for respondents who answer the questions differently throughout the survey process, believing that these people with "unstable opinions" are the most likely to be persuadable on an issue. A statistician uses these people to train a look-alike model that, based on demographic and consumer information, rates every voter on how similar they look to the respondents being modeled. (For more information on how look-alike models work, see this blog post.) The voters who are rated highest on this model are considered the best targets for persuasion.
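The look-alike step can be sketched in miniature. Here's a toy version, using a simple nearest-centroid similarity score rather than any vendor's actual model, with entirely made-up feature vectors: average the features of the "unstable opinion" respondents, then rank every voter in the file by how close they sit to that average.

```python
import math

# Toy demographic/consumer features for survey respondents flagged as
# having unstable opinions (the features and values are hypothetical).
# Each vector: [age, income_bracket, magazine_subscriptions]
unstable_respondents = [
    [34, 3, 1],
    [41, 2, 2],
    [29, 3, 1],
]

# The full voter file we want to score.
voter_file = {
    "voter_a": [36, 3, 1],   # demographically similar to the unstable group
    "voter_b": [72, 5, 0],   # demographically dissimilar
}

def centroid(vectors):
    """Average feature vector of a group of respondents."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def similarity_score(voter, center):
    """Higher score = closer to the 'unstable opinion' centroid."""
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(voter, center)))
    return -dist

center = centroid(unstable_respondents)
scores = {vid: similarity_score(v, center) for vid, v in voter_file.items()}
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)  # voters ranked best-persuasion-target-first
```

A real microtargeting shop would use something like a regularized logistic regression over hundreds of consumer-file variables, but the shape of the exercise is the same: score everyone by resemblance to the survey respondents you believe are persuadable.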
How do we know if this works? Microtargeters offer a couple of arguments. First, when they build their model, they hold out a segment of the respondents; once the model is built, they check how well it predicts the responses of that held-out group. But what if microtargeters are wrong, and people with unstable opinions actually aren't more persuadable than other groups? What if they just don't care? Or what if they flip back and forth all the time, whether or not anyone tries to persuade them? Second, microtargeters point out that in some campaigns that have used microtargeting models, the high-scoring voters have become increasingly supportive over the course of the campaign, while the low-scoring voters have not. This is better evidence, but by no means conclusive: the microtargeting may have identified people who were likely to become more supportive for reasons entirely unrelated to the campaign's activities (TV ads, phone calls, direct mail).
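That first argument is the standard holdout check from predictive modeling. A minimal sketch, with simulated respondents and the same toy centroid "model" standing in for a real look-alike model: fit on 70% of respondents, then measure accuracy only on the 30% the model never saw.

```python
import random

random.seed(0)

# Hypothetical survey respondents: (feature vector, was_unstable label).
# Feature values are invented so the two groups are separable.
respondents = [([30 + random.gauss(0, 3), 1], True) for _ in range(50)] + \
              [([60 + random.gauss(0, 3), 0], False) for _ in range(50)]
random.shuffle(respondents)

# Hold out 30% of respondents before fitting anything.
split = int(len(respondents) * 0.7)
train, holdout = respondents[:split], respondents[split:]

# "Model": centroid of the unstable training respondents (a stand-in
# for a real look-alike model such as logistic regression).
unstable = [x for x, label in train if label]
center = [sum(v[i] for v in unstable) / len(unstable) for i in range(2)]

def predicted_unstable(features, threshold=15.0):
    dist = sum((a - b) ** 2 for a, b in zip(features, center)) ** 0.5
    return dist < threshold

# Evaluate only on the held-out respondents the model never saw.
correct = sum(predicted_unstable(x) == label for x, label in holdout)
accuracy = correct / len(holdout)
print(f"holdout accuracy: {accuracy:.0%}")
```

Note what this does and doesn't establish: it shows the model can predict *who answers surveys inconsistently*, not that those people are actually persuadable, which is exactly the gap the questions above point at.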
So is microtargeting useless? I don't know. The alternative to microtargeting, and the approach that I advocate, is randomized controlled experiments, as I outline in this post. But randomized controlled experiments can be more expensive and time-consuming than building a microtargeting model, so it would be great if it turned out that microtargeting really does identify persuadable voters. What I suggested to the person I talked to this morning was to run a randomized controlled experiment on the microtargeting model itself. They could build the model, then mail both a random selection of high-scoring voters and a random selection drawn from all voters. After surveying the mail recipients, they could compare persuasion rates between the microtargeted voters and the random voters to see how well the model performed. They could also build their own persuadability model from the random voters and see how closely it matched the microtargeting model.
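The design I suggested could be sketched as a toy simulation. Everything here is invented for illustration: the group sizes, the baseline support levels, and the persuasion effects (which I've set so the model's picks really are more movable). The point is the structure: randomize mail within each arm, survey afterwards, and compare the treatment-minus-control lift in the high-scoring arm against the lift in the random-voter arm.

```python
import random

random.seed(42)

def run_arm(voters, persuasion_effect):
    """Randomly assign mail/no-mail within an arm, then return the
    treatment-minus-control share of supportive voters."""
    treated, control = [], []
    for baseline in voters:
        if random.random() < 0.5:                      # coin-flip assignment
            treated.append(baseline + persuasion_effect)
        else:
            control.append(baseline)
    rate = lambda g: sum(p > 0.5 for p in g) / len(g)  # share supportive
    return rate(treated) - rate(control)

n = 5000
# Made-up baseline support propensities for each arm.
high_scoring = [random.uniform(0.3, 0.7) for _ in range(n)]   # model's picks
random_sample = [random.uniform(0.0, 1.0) for _ in range(n)]  # all voters

# Assumed effect sizes: mail moves the model's picks more than average voters.
lift_targeted = run_arm(high_scoring, persuasion_effect=0.15)
lift_random = run_arm(random_sample, persuasion_effect=0.05)
print(f"targeted lift: {lift_targeted:.3f}, random lift: {lift_random:.3f}")
```

If the model works as advertised, the first number should beat the second by more than the noise; if the two lifts come out about the same, the model isn't finding anyone more persuadable than a random draw from the voter file.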