We ran a Google Ads experiment over a 30-day period across 14 e-commerce brands to optimize branded search campaign spend.
We set out to answer a few questions.
Could we decrease spend on branded search campaigns while maintaining the total revenue attributed to brand terms, thereby reducing the share of the total budget going to branded keywords?
Could we decrease cost per action (CPA), and how would that impact our return on ad spend (ROAS)?
And, while running these tests, would other metrics like clickthrough rate (CTR), conversion rate (CVR), and average order value (AOV) be negatively affected?
The Strategy
To run this experiment, the existing branded search campaigns (our control campaigns) used broad match keywords targeting brand terms, with brand lists applied as campaign restrictions. These campaigns had been running for several months prior to the test.
Our aim was to test exact match keyword types along with manual keyword bidding.
Control vs. Trial Campaigns: the control campaigns used broad match keywords with a brand list restriction; the trial campaigns used exact match keywords with manual CPC bidding.
To set up the trial campaigns, we first pulled a search terms report for the control campaign covering the last 90 days, and selected the terms that together accounted for at least 90% of impressions in that period. Depending on the account, this produced a list of 5 to 20 search terms, which we then used as the exact match keywords in the trial campaign.
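For illustration, here is a minimal sketch of that selection step in Python with pandas, assuming the search terms report has been exported as a CSV with search_term and impressions columns (the file name and column names are hypothetical, not from the actual workflow):

```python
import pandas as pd

# Hypothetical export of the control campaign's 90-day search terms report.
terms = pd.read_csv("search_terms_90d.csv")  # columns: search_term, impressions

# Rank terms by impressions, then keep the smallest set of terms whose
# cumulative share of impressions reaches at least 90%.
terms = terms.sort_values("impressions", ascending=False).reset_index(drop=True)
share = terms["impressions"].cumsum() / terms["impressions"].sum()
cutoff = int(share.searchsorted(0.90)) + 1  # include the term that crosses 90%

exact_match_keywords = terms["search_term"].head(cutoff).tolist()
print(exact_match_keywords)  # typically 5 to 20 terms per account
```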
To determine the maximum manual CPC bids, we took the average CPC of each of the control campaign's ad groups over the past 30 days, and turned off Enhanced CPC on the trial campaign's bidding strategy.
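A similarly hypothetical sketch of the bid-setting step, assuming a 30-day ad group performance export with ad_group, cost, and clicks columns (again, names are illustrative):

```python
import pandas as pd

# Hypothetical 30-day performance export for the control campaign's ad groups.
ad_groups = pd.read_csv("control_ad_groups_30d.csv")  # columns: ad_group, cost, clicks

# Average CPC per ad group over the window, used as the max manual CPC
# bid for the corresponding ad group in the trial campaign.
ad_groups["max_cpc_bid"] = ad_groups["cost"] / ad_groups["clicks"]
print(ad_groups[["ad_group", "max_cpc_bid"]])
```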
The ad copy, assets, and final URLs were identical between the control and trial campaigns, and all other variables were held equal as well.
We then set up a custom search campaign experiment to A/B test the control campaign against our new exact match, manual bidding campaign over a 30-day period, splitting budget 50/50 between the two campaigns.
The Results
The results of this experiment were quite positive across all accounts.
We were able to decrease campaign cost, which resulted in an increase in ROAS of 209%.
Trial campaign results at a glance: a decrease in cost and an increase in ROAS.
Did Cost Decrease?
We saw an average decrease in campaign cost of 49%, with a median decrease of 52% across all 14 accounts, statistically significant at 98.8%.
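This write-up doesn't specify how the significance figures were computed (Google Ads experiments also report their own confidence levels). As one hedged way to sanity-check a result like this across accounts, a paired t-test could be run on per-account costs, sketched below in Python with entirely made-up figures:

```python
import numpy as np
from scipy import stats

# Made-up per-account 30-day campaign costs, for illustration only;
# the actual per-account figures from the test are not published here.
control_cost = np.array([520, 310, 840, 150, 410, 290, 620,
                         480, 360, 700, 250, 530, 390, 440])
trial_cost = np.array([260, 150, 410, 80, 210, 160, 300,
                       250, 170, 350, 130, 280, 190, 230])

# Paired t-test: each account is compared against itself, so
# account-level differences (size, vertical) cancel out.
t_stat, p_value = stats.ttest_rel(control_cost, trial_cost)
print(f"significance: {(1 - p_value) * 100:.1f}%")
```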
Did The Decrease in Cost Affect Revenue?
We saw an average increase in revenue (conversion value) of 12%, with a median increase of 8%, although this result was not statistically significant.
Only 4 of the 14 accounts showed a decrease in revenue.
Was The Change In Revenue Justified?
To determine whether the change in revenue was justified by the cost savings, we can calculate the marginal return on ad spend for each account as
Marginal ROAS = Δ revenue / Δ cost = (trial revenue - control revenue) / (trial cost - control cost)
We get an average marginal ROAS of -2.8x. In other words, the brand spend that was cut was not just unprofitable; each dollar of it was associated with losing $2.80 in revenue.
For the 4 accounts where revenue did decrease, the average marginal ROAS was 4.0x, while the average ROAS for those same 4 accounts was 36.7x in the trial campaigns versus 7.9x in the control campaigns. In other words, even where revenue dropped, the spend we cut was returning far less per dollar than the spend we kept.
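As a quick worked example with entirely hypothetical numbers (the per-account figures behind these averages are not published here), consider an account whose cost fell 49% and whose revenue rose 12%, in line with the averages above:

```python
def marginal_roas(control_cost, trial_cost, control_revenue, trial_revenue):
    """Change in revenue per dollar of change in cost between control and trial."""
    return (trial_revenue - control_revenue) / (trial_cost - control_cost)

# Hypothetical account mirroring the average result: cost down 49%,
# revenue up 12%. A negative marginal ROAS means the spend that was
# cut was not just unprofitable but coincided with lost revenue.
print(marginal_roas(control_cost=1000, trial_cost=510,
                    control_revenue=8000, trial_revenue=8960))  # ~ -1.96
```

Note that the -2.8x reported above is the average of the per-account marginal ROAS values, which is not the same as a marginal ROAS computed from averaged deltas, so this single hypothetical account lands at a different figure.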
What Were The Driving Factors?
Given these results, we wanted to dig into other metrics to determine what the driving factors were, and whether there were any negative side effects.
Clickthrough Rate (CTR)
We saw a substantial increase in clickthrough rate, with an average increase of 18% and a median of 16%, statistically significant at 99.6%.
We believe the increase in clickthrough rate was due to the narrow focus on exact match keywords, which improved ad quality scores and, in turn, contributed to lower cost per click.
Cost Per Click (CPC)
Cost per click decreased by an average of 59%, with a median decrease of 66%. Even though our maximum bids were capped at the control campaign's 30-day average CPCs, manual bidding acquired an equivalent number of impressions and clicks at a fraction of the cost. These results were statistically significant at 99.9%, and were the main driving factor behind the reduced campaign cost.
Cost Per Action (CPA)
Because of the lower CPCs, we saw an average decrease in CPA of 59%, and a median decrease of 67%. These results were statistically significant at 97.8%.
Return On Ad Spend (ROAS)
Due to the lower CPAs, we saw an average increase in ROAS of 209%, with a median increase of 212%. Only 1 account out of the 14 showed a decrease in ROAS (an 11% drop), which we attribute to that brand having the highest variability in brand term misspellings.
This increase in ROAS is purely a result of the decrease in campaign cost, since conversion rate was relatively unchanged.
The change in ROAS was also statistically significant at 98.8%.
How Did Other Metrics Perform?
Impressions
Impressions were flat between the control and trial campaigns. The average showed an increase of 4%, with a median change of -2%.
One of our biggest concerns when we started this test was whether we would see fewer impressions due to the limited selection of exact match keywords. However, that did not turn out to be an issue.
Clicks
We saw an average increase of 19% in the number of clicks, with a median change of 13%, although these results were statistically insignificant.
Conversions
We saw an average increase of 21% in the number of conversions, with a median change of 5%, and these results were also statistically insignificant.
Conversion Rate (CVR)
Conversion rates were essentially flat. We observed that the brands with the highest variability in brand name misspellings also showed the largest swings in conversion rate.
Average Order Value (AOV)
We saw an average decrease of 6% in average order value, with a median decrease of 5%; however, these results were not statistically significant.
Conclusion
Overall, we believe narrowing the keyword selection to the terms that bring in the majority of impressions reduced wasted ad spend and raised clickthrough rates, which in turn improve ad quality scores and lower cost per click.
These results have helped us reduce brand spend by almost half, freeing up budget for top-of-funnel acquisition; for some brands, we instead kept brand spend the same to capture the marginal upside.