Mobile Marketing Mistakes Your Competitors Are Making & You Can Avoid
In the fiercely competitive landscape of mobile marketing in 2020, the smallest slip can cost you time, money and your app’s ranking. It’s no longer simply a question of who has better assets or who buys more media, but of who will make the first mistake and who will profit from it.
In this article we take you through some of the pitfalls mobile marketers tend to fall into, giving you the chance to get ahead of the competition.
All ASO, No Paid Traffic
Media asset optimization is important, but it’ll only get you so far without paid traffic.
The reasoning of the average app promoter goes like this: to drive organic traffic, I need to invest in optimizing my app’s assets. Once I have great graphics and optimized, high-converting copy, organic traffic will start flowing in and my ranking will go up.
This reasoning is sound but incomplete. The problem is that it doesn’t take into account the fact that your competitors are doing exactly the same thing. Suppose two apps are competing for the same keyword. Both have quality assets and are practically identical in terms of features and offerings. What will set them apart? Why would one rank above the other?
To answer these questions we ran a small test for two almost identical Bitcoin wallet apps, both competing for the keyphrase “buying Bitcoin.” We found that the differentiating factors in ranking were traffic and downloads. While the app that ran a paid media campaign was able to buy traffic and consequently increase its downloads, the app that stuck to ASO alone quickly fell behind.
It goes to show that while optimization is crucial for targeting the right visitors and converting them to downloads, traffic and downloads are just as important for outranking your competitors. In other words, to rank high and make the most of your ASO strategy, it’s best to make sure your optimized assets are backed up by paid traffic and vice versa.
Not Experimenting with App Assets
The only excuse for not knowing Google Play Experiments is if you’ve been living under a rock for the past five years. To recap, since 2015, Google has allowed app owners to experiment with their store listing pages. This means you can simultaneously run several versions of an app’s listing, including graphics and texts, to test which variant performs best in front of live users. The valuable data and insights alone are reason enough to be doing these experiments on a regular basis. But wait, it gets better.
We heard it through the grapevine that experiments are indexed. This includes app names, short descriptions and practically every GP asset, text & graphic alike. Let that sink in for a moment and then think of all the extra keywords you could be ranking for. With nothing to lose (did we mention it’s free?) and everything to gain in terms of data, insight, and traffic, not doing these experiments regularly is just leaving money on the table.
Narrow Media Buying
Most mobile advertisers will tell you one thing with absolute certainty:
The more targeted your campaign is, the higher your conversion rate will be and the faster you’ll be hitting your KPIs.
At the heart of this statement is the misconception that it’s best to launch a narrow campaign and, once you’ve got your target audience down, scale up. It’s actually the opposite. The reason it’s better to start wide and narrow down is that campaign platforms like Google or Facebook have machine learning capabilities. They study your audience and automatically optimize the campaign for maximum conversions. By limiting your audience to the specific demographics or interest groups you believe will perform best, you are in fact impeding the algorithm’s ability to learn, not to mention that you’re likely to miss audiences you haven’t thought about.
It may be counterintuitive, but casting your net wide to begin with is the best strategy you can choose for your app’s first campaigns.
Targeting Irrelevant Mobile Devices or Operating Systems
One of the more costly mistakes you can make is pushing an Android app down the throat of an iOS user and vice versa. Forgetting to eliminate devices and operating systems that are not compatible with the app you’re promoting is a common pitfall even among professionals. This means you have a chance to save precious campaign funds while your competitors are wastefully sending traffic to irrelevant audiences.
To take it one step further, think of the global distribution of OSs and devices. Targeting iOS users in Europe, for example, where the iOS market share is a puny 26.5%, is not an effective strategy. Neither is sending traffic to Android users in New York. According to a GoldSpot Media study, 50% of mobile ad clicks are accidental, for example when a tablet or iPad banner is served to a mobile Android user. This could easily be avoided by eliminating certain devices from the campaign. Moreover, it’s well known that Android and iOS users vastly differ in their spending power and consumer behavior. Targeting both platforms with the same assets is simply a waste of traffic.
A/B Testing – Get It Right
If you’re doing A/B testing, you’re probably either using GPE, which is OK, or 3rd party A/B testing tools like SplitMetrics or StoreMaven, which are much better. Regardless of how you go about it, here are some of the more common mistakes to avoid in A/B testing.
- Skimping on A/B testing for iOS. Just because iOS doesn’t have a built-in A/B testing system like Google Play Experiments doesn’t mean you shouldn’t be doing it. A/B testing for your mobile app is a crucial step in understanding which assets best capture your audience. Our own iOS A/B tests at Moburst continuously help us improve the CVR of screenshots and videos, averaging 9% per test. In a recent test, we were able to improve app engagement by 20% and CVR by over 27%!
- Forgetting to test icons. Perhaps because it’s the most visible asset, many app owners treat the icon as their Holy Grail, the one asset that shouldn’t be altered. Its high visibility, and the fact that it can very easily be tweaked, actually makes the icon one of the best candidates for A/B testing. The smallest adjustment, be it switching from sharp edges to rounded corners or playing around with color combinations, can lead to a significant improvement in CVR.
We recently uplifted a client’s CVR by 3% just by changing their solid background color to a gradient.
- Stopping variations too soon. There is an understandable urge to eliminate losing variants of A/B tests as quickly as possible, especially when it’s costing you money. But before rushing to conclusions after seeing a line graph go up, ask yourself whether you have enough data to validate your findings. Without sufficient evidence, you can’t reach the desired statistical significance each A/B test should produce (and even then, it’s not always the right thing to end a test or stop a variation).
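Whether you have “enough data” is a question you can answer numerically. As a rough sketch (the function name and the sample numbers below are hypothetical, not from the article), a standard two-proportion z-test compares the conversion rates of two variants and tells you whether the observed difference could plausibly be noise:

```python
from math import sqrt, erf

def ab_significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for an A/B test.

    conv_a / conv_b: number of conversions for variants A and B
    n_a / n_b: number of visitors shown each variant
    Returns (z_score, two_sided_p_value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no real difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical store-listing test: 3.0% vs 3.6% CVR on 5,000 visitors each.
z, p = ab_significance(150, 5000, 180, 5000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With these numbers the p-value comes out above the conventional 0.05 threshold, so despite variant B’s nicer-looking line graph, the test hasn’t yet produced enough evidence to stop the losing variation.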
- Implementing results without follow-up. It’s one thing to perform an A/B test and come up with a winning version for your target audience. It’s quite another to see the same results in the live store. After implementing the winning version, be sure to go back and check the live results against the A/B test. This can take you down two paths: either the app performs as expected, in which case you can pat yourself on the back and move on to greater things; or the winning version turns out to be a real-world loser, in which case go back to the drawing board and redo the A/B test.
When it comes to staying ahead of the competition, mobile marketers have to be on their toes, constantly testing different strategies and checking the results against real-life outcomes. It also doesn’t hurt to keep your ear to the ground. You’d be surprised at how many insider tips you can pick up simply by following the right source.
Subscribe today to get the most up-to-date industry news and insights for 2020.