The Problem with Mobile App Marketing  

Written by Jonathan

On October 21, 2019

Mobile app marketing and growth are more of a shot in the dark than people want to admit. The average mobile app/game developer maintains dozens of dashboards, metrics, and indices in the hope of being data-driven.

But what does it actually mean to be data-driven?

In theory, being data-driven means making decisions with a much higher chance of being right. If your decisions are based on objective data and success is cleanly attributed to a clear cause, you can replicate that success with ease. “We gained a relative 19% uplift in organic installs because of this user acquisition (UA) push” or “This uplift was undoubtedly caused by this product release” means your marketing, product, UA, monetization, and growth teams know exactly what they need to do to achieve team and (more importantly) company KPIs.

What actually happens today? 

Instead, most companies spend weeks—if not months—trying to analyze the impact of their marketing and advertising activities on user and revenue growth or decline. Usually, by the time the data science team is back with its best estimate, the ‘event’ that caused the change is long gone, and millions of potential installs and dollars have been lost.

What’s the holy grail of data-driven mobile growth?

One centralized place to analyze all data together and understand the impact of changes holistically, so the marketing, growth, and even leadership teams can make better and faster decisions to fix growth problems (and detect them in time), replicate success, and maximize growth over time.

The challenge lies in the fact that data is analyzed in silos. Paid UA teams analyze their campaigns’ and ads’ performance using attribution providers such as AppsFlyer and Adjust. App Store Optimization (ASO) teams try to analyze the impact of their changes within app store intelligence dashboards such as App Annie, App Store Connect, and Google Play Developer Console. Marketing teams try to understand the impact of featuring on the App Store within their own silo. None of these analyses takes the full, holistic picture of the app business into account.

In many cases, this leads to a subjectivity problem in which one team will attempt to take credit for performance improvements (or point in other directions if performance is hurt). Only by breaking the silos is it possible to get to the real root cause of a change in performance. 

What’s the cost of analyzing mobile growth in silos?

In the App Store ecosystem, everything impacts everything; everything is connected. Too often, one team attributes growth success to something they did. For example, a mobile growth team takes credit for an uptick in new install volume and pinpoints an influencer campaign as the root cause when, in reality, that growth can be attributed to a ranking increase caused by a new advertising channel the UA team started using.

Without a clear identification of the root cause of such growth, it’ll be extremely hard to replicate that success. In this example, the wrong conclusion will lead to more resources going into influencer campaigns when it was the new UA channel that deserved the additional investment.

In this article, we’ll explore the main challenges that stand in the way of getting to that holy grail we mentioned. 

Problem 1: Lack of standardization in the mobile ecosystem—the mobile Tower of Babel

Or: Why is an install so hard to define?

Today’s mobile ecosystem is essentially the Tower of Babel: people tried to build a tower to the heavens, so God made them speak different languages and they could no longer understand each other. Of course, with no common language, the tower was never completed.

In mobile marketing, we still speak different languages. 

Let’s take oil for example.

Oil is one of the most traded resources on earth, and driving that trade is standardization. A barrel of oil is always 42 US gallons (roughly 159 liters) of crude oil. This allows people to easily trade, price, and ship oil around the world. Imagine the chaos that would ensue if no one could agree on what a barrel of oil actually is. All hell would break loose.

If data were really the new oil, it would have standardization of its own.

Here are some of the core ‘terms’ the mobile app market can’t seem to agree on:

What are organic installs? That depends on whether you ask the platforms or the attribution partners.

Let’s take the topic of organic growth.

Many marketing teams track organic installs within their attribution provider’s dashboard. From the attribution providers’ point of view, organic installs are simply all installs that didn’t result from clicking on (or viewing, when view-through attribution kicks in) an ad.

Mobile attribution works by identifying users who clicked on ads (most of the time through their unique mobile identifiers—IDFA or GAID—and, in other cases, through fingerprinting technology) and then matching these identifiers with users who opened the app in which the attribution SDK is installed.

The issue is that these identifiers are tied to devices. If a user switches devices, turns on Limit Ad Tracking (LAT), or is simply a returning user (after a period of inactivity), the install will be categorized as organic.

However, from the app stores’ point of view, one of the most important metrics that dictates success in the store and feeds into the ranking algorithms (both in search and in the top/category charts) is first-time installers. With access to user IDs, not just devices, the platforms can figure out whether a user is actually new or just setting up a new device with their old favorite apps.

The two numbers (first-time installers as reported by the stores and organic installs as they appear in the attribution platform) will always be different—sometimes by a factor of two—making it extremely hard to understand the true volume of organic installs and what impacts them.
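To make the gap concrete, here is a minimal sketch (in Python, with entirely made-up install records and field names, not any vendor’s schema) of how the same set of installs gets counted differently under the two definitions:

```python
# Hypothetical install records; fields are illustrative only
installs = [
    {"device": "d1", "ad_click": True,  "first_time_user": True},   # paid, genuinely new user
    {"device": "d2", "ad_click": False, "first_time_user": True},   # organic, genuinely new user
    {"device": "d3", "ad_click": False, "first_time_user": False},  # existing user on a new device
    {"device": "d4", "ad_click": False, "first_time_user": False},  # returning user after inactivity
]

# Attribution-provider view: "organic" = any install not matched to an ad click/view
organic_installs = sum(1 for i in installs if not i["ad_click"])

# Store view: what feeds the ranking algorithms is first-time installers
first_time_installers = sum(1 for i in installs if i["first_time_user"])

print(organic_installs)        # 3 -- includes the re-install and the returning user
print(first_time_installers)   # 2 -- only genuinely new users, whether paid or organic
```

Even in this tiny example the two counts differ, and neither is “wrong”; they simply answer different questions.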

So what metrics do we track? 

The answer to that question is (annoyingly) that it depends. It depends on what you’re trying to improve and achieve, as each data source has metrics that matter more for driving different types of performance.

For the purpose of understanding the profitability of paid ads, the attribution provider would be the best source to get data from. 

For the purpose of improving organic growth driven by the platforms themselves (the App Store and Google Play), organic install data from the attribution provider’s dashboard isn’t going to do the trick.

As we mentioned, one of the metrics the platforms use to understand which app is relevant is first-time installers (‘App Units,’ in App Store terminology). Improving organic growth is therefore more about influencing that metric, not organic installs as they are defined by attribution partners.

Problem 2: Mobile measurement is broken: the organic uplift conundrum 

Or: How can we know the true value of paid UA efforts? 

Without accurate attribution, it’s impossible for UA teams to quantify the profitability of the ads they are running. Mobile attribution technology has given teams a fantastic metric to follow for that profitability: the well-known return on ad spend (ROAS).

But in reality, the revenues taken into consideration are calculated solely from installs that the attribution platform attributed to the ad. No organic uplift is considered. What types of uplift are we talking about?

    1. Ranking-Increase-Driven Uplift: As a result of additional first-time installers, the store determines that an app is more relevant; hence, it ranks higher in the top/category charts and in search results. This uplift creates new installs (from browsing the stores) that may contribute significant revenues but are not included in the ROAS calculation. 
    2. Branded-Search-Driven Uplift: As a result of a paid ad that created brand exposure, a significant number of users didn’t click on the ad directly but later searched for the brand on the App Store/Google Play Store. This surge in branded-search-driven installs wouldn’t have happened without the ad running.

In these two examples, it’s easy to see how a marketing team would arrive at the wrong conclusion that an ad should be taken down because its ROAS isn’t sufficient. But the true ROAS (tROAS) of that ad could be much higher, justifying the cost. By acting on that conclusion, the team risks losing significant organic install volume.

Some teams try to solve this by assuming each paid install drives a fixed number of organic installs (let’s say 0.3 organic installs for every paid install). 

But this methodology isn’t close to driving actionable insights, as each channel, campaign, and ad has a different potential to drive organic uplift. Remember the reliance on first-time installers? Certain ad campaigns drive almost exclusively returning installs while others target a whole new audience; the organic uplift of the two would be completely different.
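To see why, here is a small back-of-the-envelope calculation (every number, and the uplift factors, are made up for illustration) showing how ignoring organic uplift, or assuming one fixed uplift factor for every campaign, can flip the conclusion about whether an ad is profitable:

```python
# Hypothetical campaign figures; all values are illustrative
spend = 10_000.0
paid_installs = 5_000
revenue_per_install = 1.8          # average revenue from an attributed install

# Attributed-only ROAS, as reported by the attribution platform
roas = (paid_installs * revenue_per_install) / spend

# "True" ROAS once organic uplift is considered; the uplift factor differs by
# campaign (one targeting a new audience vs. one driving mostly returning installs)
def true_roas(uplift_per_paid_install: float) -> float:
    organic_installs = paid_installs * uplift_per_paid_install
    total_revenue = (paid_installs + organic_installs) * revenue_per_install
    return total_revenue / spend

print(f"{roas:.2f}")               # 0.90 -> looks like the ad should be taken down
print(f"{true_roas(0.3):.2f}")     # 1.17 -> profitable under the fixed 0.3-uplift assumption
print(f"{true_roas(0.1):.2f}")     # 0.99 -> a campaign driving mostly returning installs
```

The same campaign looks unprofitable, clearly profitable, or marginal depending entirely on which uplift assumption is used, which is exactly why a single fixed multiplier isn’t actionable.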

Problem 3: There is no single source of truth

Or: How can we understand the impact of what we do in marketing if we don’t look at the same data in the same way?

This problem actually breaks down into two parts. The first is that not knowing which data to use leads to many different analyses taking place within a company or a team that don’t agree with one another. It’s comparing apples to oranges.

The other part is that even when the same data is analyzed, different people choose different methodologies to analyze it (for example, shifting the pre/post analysis period by ±1 week to reach a different conclusion, depending on the result the analyst wants to show).

Let’s break it down:

Knowing which data to use to answer which question

The way humans and businesses make decisions hasn’t essentially changed for thousands of years. First, someone asks a question. That question is usually based on a change that happened in the world, because if nothing changes and everything is static, humans usually don’t ask questions.

After the status quo is interrupted and the question is asked, the relevant data is collected.

The data is then re-arranged in a way that makes sense, “cleaned”, and then visualized in a certain way—because humans are much more comfortable understanding data when they see it visually. 

After looking at the visual data, a decision is made and the person goes on with life, hopefully better off. 

In mobile marketing today, decisions are made in the same way. Someone raises a question based on a change they saw in the performance of the app. Let’s say a drop in organic installs.

Based on the question, a person or a team needs to understand the right data to collect and where to collect it from. This is where many mistakes happen: wrong metrics lead to wrong conclusions. 

For example, an organic drop has occurred on Google Play and the team spends weeks tearing down the keyword strategy to understand why it stopped working. Meanwhile, the real cause is a new version that increased crashes or app-not-responding (ANR) events, leading Google Play to penalize the app in its rankings.

Siloed teams lead to different perspectives 

Even when different people look at the same data they might reach different conclusions. The usual process of running a pre/post analysis and comparing the performance of different periods is prone to errors. 

All it takes is for the teams to disagree on the pre/post timeframe (one team takes three weeks and the other two) to produce different results. There’s no clear, accepted methodology for analyzing the impact of different changes and marketing activities.
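As a rough illustration of how fragile this is, here is a sketch (with made-up daily install counts) where the only difference between two analysts is the length of the pre/post window, and they reach opposite conclusions about the same change:

```python
# Made-up daily organic installs; a change goes live on day 0,
# following a gradual decline that started a week earlier
daily_installs = [120, 118, 115, 110, 104, 98, 92,    # days -7..-1 (pre)
                  95, 99, 103, 106, 108, 109, 110]    # days 0..+6 (post)

def pre_post_change(window_days: int) -> float:
    """Average installs in the post period vs. a pre period of the same length, in percent."""
    pre = daily_installs[7 - window_days:7]
    post = daily_installs[7:7 + window_days]
    return (sum(post) / len(post)) / (sum(pre) / len(pre)) * 100 - 100

print(f"{pre_post_change(3):+.1f}%")   # +1.0%  -> "the change helped"
print(f"{pre_post_change(7):+.1f}%")   # -3.6%  -> "the change hurt"
```

Neither analyst is cheating; without an agreed-upon methodology, both results are “correct,” and the organization is left arguing.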

There’s no single source of truth for an app business. Only by combining multiple data sources and analyzing them holistically, using the same methodology across the organization, can one see the overall effect of marketing activity. The limitations of today’s landscape only drive teams—for lack of a better solution—to analyze things in silos.

Problem 4: The difficulty of detecting change in growth trends and understanding the root cause

Or: How can we catch negative/positive changes when they occur so we can act on them proactively instead of reactively?

The world is a dirty place. Mobile performance data is too. Nothing in the mobile ecosystem can or should be analyzed in a vacuum. Almost every action a marketing team takes creates a ripple effect that changes metrics other than the one in focus. 

Offline marketing, app store algorithm updates, new competitors, changing user behavior, store visual updates, brand awareness campaigns, featuring, and more all affect an app’s overall growth trends.

The problem is that it’s hard to monitor holistic performance; hence, it’s difficult to identify changes in real time. Data comes from multiple sources, which means it takes significant time to pull that data for analysis. And even when the analysis is done, it’s done on static data instead of continuously monitoring for change.

Thus, the vast majority of marketing teams spend time reacting to events that happened weeks or months ago, simply because they caught them too late.

What if marketing teams could know, in real time, when anything that might impact growth happens, and act on it before things become dire?
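Even a very simple form of continuous monitoring gets part of the way there. The sketch below (the metric series, window, and threshold are all hypothetical) flags any day whose value deviates too far from its trailing baseline, catching a break in trend on the day it starts rather than weeks later:

```python
from statistics import mean, stdev

def detect_change(series, window=7, threshold=2.0):
    """Flag days where the metric deviates from its trailing baseline by more than
    `threshold` standard deviations. A deliberately simple sketch, not a production
    anomaly detector."""
    alerts = []
    for day in range(window, len(series)):
        baseline = series[day - window:day]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(series[day] - mu) > threshold * sigma:
            alerts.append(day)
    return alerts

# Hypothetical daily first-time installers; a drop begins on day 10
installs = [500, 510, 495, 505, 498, 502, 507, 499, 503, 501, 430, 425, 418]
print(detect_change(installs))   # [10, 11] -- the drop is flagged the day it starts
```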

But detecting a change in real time solves only part of the problem. Understanding what caused that change, so we’re able to make sense of it and correct it (or double down on it, if needed), is the second step.

Problem 5: Lack of documentation leads to hours of lost time and dead-end analyses

Or: How can we understand the impact of change if we don’t take into account everything that happened in the industry and within the company?

Another major challenge in mobile marketing is that many teams run experiments or attempt different marketing and growth efforts without knowing what other teams are doing.

The UA team could be running an incrementality test on one of the channels exactly when the ASO team is updating the App Store with new creatives that will skew the results of that test. Things get even worse if Apple, unbeknownst to the team, has just updated its ranking algorithms, further confusing the ASO team as it tries to understand the impact of the new creatives.

But even outside experimentation, the organic acquisition team could hit its KPIs far more effectively if it worked in tandem with the paid acquisition team. When trying to improve rankings, a potential solution could be the right UA campaign at the right time. When the paid acquisition team is trying a new channel, it can be significantly more successful if the ASO team experiments with creatives on that channel to understand what drives that audience to install. This increases conversion rates and dramatically improves the paid acquisition team’s metrics.

There is a lack of a clear, company-wide timeline that tracks and logs events going on inside and outside the company. There are two types of such ‘events’.

External events, or industry events

These are events that happen outside of the company: changes in the platforms themselves, such as algorithm changes or even App Store layout changes. New competitors or changing competitor behavior would also be considered industry events. Knowing that these events have happened can explain a lot of changes in performance and de-panic teams that rush to figure out what they did wrong.

Internal events

Any event that is controlled by the company falls under this category. These include app version updates, creative updates, UA strategy changes, keyword strategy changes, spend level changes, and more. 

Documenting these ‘events’ leads to better visibility into the business, with clear changes serving as reference points to guide analysis.
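Such a timeline doesn’t need to be sophisticated to be useful. Even a shared, structured log along the lines of the sketch below (fields and entries are purely illustrative) gives anyone analyzing a shift in performance a set of reference points to check first:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class MarketingEvent:
    """One entry in a shared, company-wide timeline of internal and external events."""
    when: date
    kind: str          # "internal" or "external"
    owner: str         # owning team, or "industry" for external events
    description: str

# Illustrative entries only
timeline = [
    MarketingEvent(date(2019, 9, 2),  "internal", "Product",  "App version 4.2 released"),
    MarketingEvent(date(2019, 9, 5),  "internal", "UA",       "New video ad channel launched"),
    MarketingEvent(date(2019, 9, 12), "external", "industry", "App Store search layout change observed"),
    MarketingEvent(date(2019, 9, 20), "internal", "ASO",      "New screenshots live on Google Play"),
]

# When a metric shifts, first look at what was logged around that date
shift = date(2019, 9, 13)
for event in timeline:
    if abs((event.when - shift).days) <= 7:
        print(event.when, event.kind, event.owner, event.description)
```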

Understanding the impact of these events (including industry events) allows mobile marketing leaders to quickly grasp what they should do next.

Conclusion

Mobile marketing as it can be is radically different from the situation today. Instead of getting buried deeper and deeper in silos (where there is some comfort but no bigger picture), we believe that the mobile apps and games that win the market in 2020 and beyond will be those that adopt a holistic view of their business, tackle the challenges that prevent them from making better decisions today, and find themselves with the holy grail of mobile marketing.
