We’ve previously discussed five ad testing mistakes that cause advertisers to choose the wrong ad. To review, those five mistakes were:
1. Pausing an ad within the first couple of days.
2. Using ad performance from a time when only the control was running.
3. Not running an ad long enough.
4. Leaving Google's default rotation settings on.
5. Not comparing top vs. side performance.

Now we'll look at five tips to take your ad testing to the next level.
Google ACE

Google ACE (AdWords Campaign Experiments) is a tool that shows you the probability that the performance differences between the control and experiment settings in a campaign are significant. It presents this cleanly, with up and down arrows indicating how statistically sound the data is.
However, Google doesn't break the results down by top vs. side, so take the data with a grain of salt. You may need to download everything into your own statistical testing calculator to get the most reliable numbers.
So while it's a nifty tool, its verdict can differ significantly from what you see once you segment the data yourself.
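If you do pull the data out yourself, the significance check is simple to script. Below is a minimal sketch of a two-proportion z-test on click-through rate; the click and impression numbers are made up for illustration, and you would run it separately on your top and side segments.

```python
# Minimal two-proportion z-test on click-through rate.
# The click/impression numbers below are hypothetical.
from math import sqrt
from statistics import NormalDist

def ctr_z_test(clicks_a, imps_a, clicks_b, imps_b):
    """Return the z-score and two-tailed p-value for a CTR difference."""
    p_a = clicks_a / imps_a
    p_b = clicks_b / imps_b
    # Pooled CTR under the null hypothesis that both ads perform the same
    p_pool = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / imps_a + 1 / imps_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

z, p = ctr_z_test(clicks_a=120, imps_a=10_000, clicks_b=158, imps_b=10_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p < 0.05 suggests a real CTR difference
```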
Where Dummy Ads Are Useful

One easy way to gauge how statistically significant your test is would be to duplicate your original ad and run it alongside the test ad. If there's a big discrepancy between your original and the duplicate, you know that the results you're seeing from your test ad are likely noise as well, and vice versa.
As you test more, you may find some strange results. There are times when, for no apparent reason, click-through rates fluctuate wildly between identical ads. That's why using a dummy ad is always recommended.
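Here's how that noise check might look in code, reusing the hypothetical ctr_z_test helper from the sketch above; all of the numbers are made up.

```python
# A/A noise check using a dummy (identical) copy of the original ad.
# Assumes the ctr_z_test() helper from the previous sketch; numbers are made up.
original = {"clicks": 130, "imps": 11_000}
dummy    = {"clicks": 112, "imps": 10_500}   # exact duplicate of the original
test     = {"clicks": 161, "imps": 10_800}   # the ad with the new message

_, p_noise = ctr_z_test(original["clicks"], original["imps"],
                        dummy["clicks"], dummy["imps"])
_, p_test  = ctr_z_test(original["clicks"], original["imps"],
                        test["clicks"], test["imps"])

if p_noise < 0.05:
    # Two identical ads "differ", so the traffic is too noisy to trust the test.
    print("A/A check failed: treat the test result as noise.")
elif p_test < 0.05:
    print("The test ad shows a real CTR difference from the original.")
else:
    print("No significant difference yet; keep the test running.")
```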
Audience Messaging

One of the mistakes I used to make was testing whether one specific line of copy would work better than another. The problem is that because the testing was spread across so many different keywords and ad groups, it was hard to get good, reliable data.
The best way to handle that is to work on understanding your audience, creating a hypothesis about what resonates with that audience, and then using that messaging not just on your PPC ads, but across your site.
For example, you may want to know whether your audience prefers "free shipping" or "same day shipping". You would create multiple ads testing those two messages. However, some of that ad copy may also be targeted to the searchers' specific intent.
AdWords Labels

One of the most under-appreciated tools Google has in AdWords is AdWords labels, which let you add labels to campaigns, ad groups, keywords, and ads.
By labeling all of the ads intended to address one message, and all of the ads intended to address a different message, you can use the aggregate data from those ads to understand which message works best.
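Once the labels are applied, rolling up the data is quick after you download an ad performance report. The sketch below assumes a CSV export with "Labels", "Clicks", and "Impressions" columns, which may not match your report exactly.

```python
# Aggregate a downloaded ad report by label to compare messages.
# File name and column names are assumptions; adjust to match your export.
import pandas as pd

report = pd.read_csv("ad_performance_report.csv")

# An ad can carry several labels separated by ";"; give each label its own row.
report["Labels"] = report["Labels"].str.split(";")
exploded = report.explode("Labels")
exploded["Labels"] = exploded["Labels"].str.strip()

by_label = exploded.groupby("Labels")[["Clicks", "Impressions"]].sum()
by_label["CTR"] = by_label["Clicks"] / by_label["Impressions"]
print(by_label.sort_values("CTR", ascending=False))
```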
Testing in Low Volume Campaigns

Now that you have AdWords labels, you can create ads across multiple campaigns and ad groups and test which messaging works better. The ads don't even have to match exactly, as long as the messaging intent you're testing is the same.
Summary

AdWords Campaign Experiments is an easy tool, but it misses a key point. For every test you create, you want to create a dummy of your original ad in addition to your test ad. Label the original as the original, the dummy as the dummy, and the test ad as the test. When running your calculations, segment the results by top vs. side, and then compare either the top results or the side results for each label against each other.
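Here's a rough sketch of that workflow once the segmented report is downloaded; the file name and the "Labels", "Top vs. Other", "Clicks", and "Impressions" column names are assumptions, so adjust them to match your export.

```python
# Sketch of the summary workflow: CTR per label, segmented by ad position.
# File and column names are assumptions; adjust them to match your report.
import pandas as pd

report = pd.read_csv("ad_report_by_position.csv")

pivot = report.pivot_table(
    index="Labels",              # original / dummy / test
    columns="Top vs. Other",     # position segment from the report
    values=["Clicks", "Impressions"],
    aggfunc="sum",
)

# Compare labels within a segment, never across segments.
for segment in report["Top vs. Other"].unique():
    ctr = pivot[("Clicks", segment)] / pivot[("Impressions", segment)]
    print(f"\n{segment} CTR by label:")
    print(ctr.sort_values(ascending=False))
```

From there, you can run each pair of labels through the same significance test shown earlier.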
Unfortunately, Google doesn't let you work with labels via the API. You'll be forced to download the spreadsheets manually and use a number of SUMIF formulas to get the data you want. However, with a little work, you can build a really solid AdWords testing system.