'Dilly Dilly' means 'Go for it': how ad testing puts doing something great at risk

Copy-testing is usually the blunt instrument that beats out the sometimes unexplainable magic of an ad or campaign, says Young & Laramore's president.

Maybe I watched too much football this weekend, but I awoke this morning remembering a strange dream I had last night. 

The scene opened in the Palace of Marketing… 

From his scrolls, the Duke of Advertising proudly presented his work to the newly crowned Queen of Marketing.

"Your highness, we have eliminated all downside risk to the kingdom! I have brought you a thoroughly copy-tested campaign." Bowing deeply, the Duke expected the approving exclamation, "Dilly Dilly!" from the Queen and her subjects. 

Instead, the wise Queen mockingly inquired, "Did you consider the upside risk?"

Raising his head, the Duke muttered, "Upside?"

The Queen intoned, "Yes, a visit to the Pit of Misery may remind you that the upside is more valuable than the downside." 

As the Ad Duke was dragged away, the subjects exclaimed, "Dilly Dilly!" 

Ah, the upside. Marketers too often forget the risk of missing the upside. Upside risk may be the most underrated concept, not only in marketing but in business in general. I’d like to have a share of Netflix for every time a marketing person has said, "We’re spending a lot of money here. Don’t we need to test the campaign to reduce any risk?" 

These marketers fail to comprehend that the real risk may be missing the opportunity to do something noteworthy, groundbreaking, industry-changing—something that seeps into the culture’s consciousness. Testing creative in an artificial environment is the surest way to wind up with something ordinary. In its attempt to protect the downside, copy-testing is usually the blunt instrument that beats out the sometimes unexplainable magic of an ad or campaign that drives a consumer’s interest, engagement and then, of course, purchase. 

Why does copy-testing have this blind spot? 

First, when people know that they are being asked to assess ads, they shift from a distracted, disinterested consumer mindset to a hyper-aware, evaluator mindset. Plenty of studies have demonstrated that no matter what situation people are put in, whether online evaluating random ads or in a boardroom making significant business decisions, people generally lean away from the novel or unfamiliar because it makes them uncomfortable.

Second, as humans, we’re all fooled by our own rationality. We cling to the false certainty that we’re motivated only by rational, logical facts or arguments. Here too, volumes of studies have concluded that our behavior and attitudes can be shaped by stimuli that aren’t obvious to us or are even below our conscious awareness. The fact is, we humans actually have a hard time predicting what motivates us. But because of this misunderstanding, consumers subjected to an ad-testing situation will likely judge rational, logical messaging as more persuasive than something that is less direct.

The problem is, it’s often the less direct approach that can catapult a brand well beyond the conservative growth plan that most brands’ budgets are built upon. This understanding is why Dan Wieden famously convinced P&G to relax its ad-testing requirements for W+K’s wildly effective work for Old Spice.

That success may have given Miguel Patricio, chief marketing officer of AB InBev, the confidence to ignore the ad-testing results for W+K’s Bud Light work and run the "Dilly Dilly" campaign. In a Business Insider interview, Patricio noted, "It didn't test that well. We said, ‘Consumers will get it.’ And especially with repetition. We have a chance here for this to become big. So, we went against the research and we gave a chance to ‘Dilly Dilly’ and we are so happy!" 

Here, Patricio identifies a third flaw of copy-testing: the difficulty of assessing how multiple exposures build over time. It’s very rare that a specific piece of a campaign needs to work in one shot. While marketers acknowledge that it takes multiple exposures to a piece of communication to have an effect on consumers, they mistakenly expect a one- or two-exposure artificial test to tell them whether an ad is going to work. 

So, imagine that you’ve done it again. Despite your better judgment, you went off and tested the new concepts your agency just presented. You were excited about one campaign in particular, but it made you a little nervous because it’s like nothing else you've ever seen. You simply wanted to validate your gut instincts, but the campaign that you thought would win, tanked. The other ads tested much better, and the one that you liked—the campaign that your agency is pushing for—just didn't connect with the panel of consumers. 

Your boss is awaiting the test results. What do you report? Do you recommend producing the spot you believe will jump-start your brand, or the one that consumers said would influence them to buy your product? 

Which side do you risk? 

Dilly Dilly.

Tom Denari is President & Chief Strategy Officer at Young & Laramore.