Among the most basic, yet important, routine optimization tasks for your pay-per-click advertising account is ad split testing. In most cases, no two ads for the same keyword(s) will perform the same. It’s not enough just to have good keywords; the ad text is what ultimately does the selling. And no matter how talented you are at ad copywriting, it is impossible to predict what text will “click” with customers and make them, well, click.

Enter split testing, or A/B testing as it is also known. Always write at least two ads for any ad group and run them concurrently. (HINT: for valid testing, you will need to set ads to rotate evenly in AdWords or turn off Optimize Ad Delivery in Yahoo [campaign settings in either case].) Let the ads run for a while, then check their stats. When you think you have a clear winner, deactivate the loser and write a new test ad. Split testing usually works best if you test minor variations (i.e., change just one thing at a time from the original ad), but the more subtle the change, the more impressions you should amass before declaring a winner.

But here’s the rub: How can you be sure you have statistically valid results for a test? And furthermore, what metrics should be taken into consideration for evaluating a test?

As for the first question, there is no hard rule for how many impressions or clicks you need for a valid test. Obviously, the more of each, the more valid the test. Fortunately, there are tools like Split Tester that can tell you with what degree of certainty you can predict a winner even given a low number of impressions.
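Split Tester doesn’t publish its exact formula, but the standard statistical approach to this question is a two-proportion z-test: given each ad’s impressions and clicks, it estimates how confident you can be that the CTR difference is real rather than random noise. The sketch below is an assumption about how such a calculator works, not Split Tester’s actual code.

```python
import math

def confidence_of_winner(impressions_a, clicks_a, impressions_b, clicks_b):
    """Two-proportion z-test: confidence that the CTR difference
    between ad A and ad B is not just chance (one-sided)."""
    ctr_a = clicks_a / impressions_a
    ctr_b = clicks_b / impressions_b
    # Pooled CTR under the null hypothesis that the ads perform the same
    pooled = (clicks_a + clicks_b) / (impressions_a + impressions_b)
    se = math.sqrt(pooled * (1 - pooled)
                   * (1 / impressions_a + 1 / impressions_b))
    if se == 0:
        return 0.5  # identical results: no evidence either way
    z = abs(ctr_a - ctr_b) / se
    # Convert the z-score to confidence via the normal CDF
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Hypothetical test: 1,000 impressions each, 30 clicks vs. 50 clicks
confidence = confidence_of_winner(1000, 30, 1000, 50)
```

With these made-up numbers the test reports roughly 99% confidence, which most advertisers would treat as a decided winner; at, say, 30 vs. 35 clicks you would want many more impressions before calling it.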

The answer to the second question (which metrics to evaluate) depends on the intended outcome of the ad. If you are just looking for traffic to your page, then click-through rate (CTR) may be enough, and Split Tester will be all you need. If, however, your ad links to an ecommerce page where you hope to make a sale, CTR is probably not enough, and may actually be misleading. What really matters to you is conversions, and beyond that profits. Which ad results in the most profit is the real question you should be asking. Add to this the fact that it is not uncommon to see one ad in a test have a higher CTR but the other ad a higher conversion rate, and you know you need to be evaluating more than CTR.

Super Split Tester to the rescue! Super Split Tester goes beyond Split Tester. Instead of just entering CTR, Super Split Tester asks you for CTR plus approximate value per sale (of the product being advertised), conversion rate, impressions, and the AdWords cost for the ad. Using all these metrics, Super Split Tester calculates which of the two ads has generated more actual profit. The resulting winner will not always be the ad you might have picked from just looking at the raw analytics data.
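The arithmetic behind a profit comparison like this is straightforward to sketch from the inputs listed above. The figures below are invented for illustration; note how the ad with the higher CTR can still come out behind, exactly the trap described earlier.

```python
def ad_profit(impressions, ctr, conversion_rate, value_per_sale, total_cost):
    """Estimated profit from the metrics a profit-based split-test
    calculator asks for (hypothetical numbers, not real ad data)."""
    clicks = impressions * ctr
    sales = clicks * conversion_rate
    revenue = sales * value_per_sale
    return revenue - total_cost

# Ad A: higher CTR (5%) but a weaker conversion rate (2%)
profit_a = ad_profit(10_000, 0.05, 0.02, 40.0, 250.0)  # 500 clicks, 10 sales
# Ad B: lower CTR (3%) but a stronger conversion rate (5%)
profit_b = ad_profit(10_000, 0.03, 0.05, 40.0, 150.0)  # 300 clicks, 15 sales
```

Here Ad A nets $150 while Ad B nets $450: judged on CTR alone, A looks like the winner, but B earns three times the profit.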

If you’re interested in the calculations on which Super Split Tester’s results are based, this video explains all: