
New MoneyAds Study Shows How to Make Ads Pay Off Big Time

Steve Manatt, March 10, 2014

In a recent post, I called attention to a retail case study on advertising effectiveness that Acxiom published last year, and I promised more extensive research on the topic in the New Year.

Last week, we released “MoneyAds: Marketing Lessons from Changes in a 200-Year-Old Sport.” It’s a study based on several years of Acxiom’s experience targeting and measuring online advertising. You can download the paper at http://www.acxiom.com/howadvertisingworks/.

If you spend money on online advertising, there are definitely some things in this study to make you feel good. For instance, online advertising truly pays for itself, generating, on average, nine dollars of sales for every dollar spent on the ads. [Caveat: the ratio varies widely in both directions.] Only about 15 percent of the campaigns we looked at failed to make back their ad-dollar investment through incremental revenue.
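For concreteness, here’s what that return-on-ad-spend arithmetic looks like in a minimal Python sketch. The figures are hypothetical, chosen only to mirror the nine-to-one average and the break-even threshold mentioned above; none of them come from the study itself.

```python
# Illustrative return-on-ad-spend (ROAS) arithmetic.
# All figures here are hypothetical, NOT numbers from the study.

ad_spend = 50_000              # dollars spent on the ads
incremental_revenue = 450_000  # incremental sales attributed to the ads

roas = incremental_revenue / ad_spend
print(f"ROAS: ${roas:.2f} of incremental sales per $1 of ad spend")

# A campaign "makes back its ad dollars" when ROAS exceeds 1.0;
# per the study, about 15 percent of campaigns fell below that line.
if roas < 1.0:
    print("Campaign failed to recoup its ad spend.")
```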

But there are also some findings that challenge conventional wisdom:

  • People who see an ad fewer than ten times are unlikely to be influenced to buy.
  • To get the most value out of a campaign, run it for at least a month, and as long as two months, without changing the target segment or the creative. For most significant purchases, you can’t judge the effectiveness of an online campaign in a day or a week. It takes time for ads to influence people to buy, especially when most of the purchases happen somewhere other than online. Seriously, hands off!
  • It’s not necessary to use any advertising technology beyond what the publisher provides to get a great return on advertising. All the campaigns we looked at were run as direct buys from publishers at normal rates, with no remnant inventory. The difference was that the ads were shown only to people identified by Acxiom and uploaded to the publisher.

There are unique features of our approach that make these results particularly significant. We use a true test-and-control methodology, holding back a subset of people (not cookies or devices) to ensure those individuals never see the ads for a given campaign. That means our revenue estimates differ from 90 percent of what you read or see in online measurement tools. We’re looking only for incremental lift: the difference in purchase rate between the people who saw the ads and the ones who didn’t.
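To make that methodology concrete, here’s a toy sketch of the test-and-control computation in Python. The function name and data shapes are invented for illustration; this is not Acxiom’s actual pipeline, just the shape of the calculation: hold back a random subset of people before the campaign runs, then compare purchase rates between the two groups.

```python
import random

def incremental_lift(people, buyers, holdout_rate=0.1, seed=42):
    """Toy test-and-control lift calculation (illustrative only).

    people  -- list of person IDs (individuals, not cookies or devices)
    buyers  -- set of person IDs who ended up purchasing
    Returns the exposed group's purchase rate minus the control group's.
    """
    rng = random.Random(seed)
    # Hold back a random subset of PEOPLE; these individuals are
    # suppressed from the campaign and never see the ads.
    holdout = {p for p in people if rng.random() < holdout_rate}
    exposed = [p for p in people if p not in holdout]

    exposed_rate = sum(p in buyers for p in exposed) / len(exposed)
    control_rate = sum(p in buyers for p in holdout) / len(holdout)
    return exposed_rate - control_rate
```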

What we saw most clearly using this technique is that many of the purchases made by people who saw the ads only once or a few times would have happened anyway. In other words, there was no difference in purchase rate between the low-frequency group and the control group. It’s only after more repetitions, i.e., at higher frequency, that the incremental revenue really starts to kick in.
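Extending the same toy sketch, the frequency finding amounts to bucketing exposed people by impression count and comparing each bucket’s purchase rate against the control rate. The ten-impression cutoff below echoes the study’s finding; the data structures themselves are hypothetical.

```python
from collections import defaultdict

def lift_by_frequency(exposures, buyers, control_rate):
    """Purchase-rate lift by ad frequency (illustrative only).

    exposures    -- dict: person ID -> number of ad impressions seen
    buyers       -- set of person IDs who purchased
    control_rate -- purchase rate among the held-out control group
    """
    buckets = defaultdict(list)
    for person, freq in exposures.items():
        buckets["under 10 views" if freq < 10 else "10+ views"].append(person)

    for label, group in sorted(buckets.items()):
        rate = sum(p in buyers for p in group) / len(group)
        # Per the study's pattern: the low-frequency bucket shows roughly
        # zero lift; incremental revenue kicks in at higher frequencies.
        print(f"{label}: lift vs. control = {rate - control_rate:+.4f}")
```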

Yet so much of the optimization and measurement we see in online advertising lacks this disciplined test-and-control approach. Too many companies are shifting money around and claiming to have influenced results, when in fact they’ve just done a superior job of showing ads to people who were already planning to make a purchase. So all this lightning-fast optimization that’s driving IPOs and headlines could in fact be sub-optimizing for what advertisers really want: more revenue.

Now we’re not saying that ad tech doesn’t work – at least not yet – because we haven’t run the numbers on those types of campaigns.   But we’re looking into it.  So stay tuned for future releases.