User conversion rate has always been one of the key metrics for any e-commerce business. It is, therefore, no surprise that the growing mobile app market has companies looking for new ways to improve conversion in this channel.
The pillar of conversion rate optimization is a solid A/B testing infrastructure; while there are well-established go-to solutions for the web, the mobile industry hasn’t settled on a leader yet. In this article, we’ll focus on one of the most cost-effective setups - content experiments with Google Tag Manager (GTM).
Utilizing GTM for mobile content experiments hadn’t been on our team’s radar until Amazon announced it would discontinue support for its A/B testing service, which we had been using. We had to find a substitute, so we decided to look through the tools we were already using. This led us to Google Tag Manager.
After reading the Content Experiments feature description, we concluded it could be a viable option. Why? Because:
We decided to give it a go and we’ve stuck with it since. Now, we’d like to showcase this tool so that you can figure out if it can be a fit for your mobile A/B testing efforts too.
A word of explanation if you aren’t familiar with the Tag Manager. Its primary use case is to simplify tracking management on web and mobile apps. It gives you the ability to add and update your own tags for conversion tracking, site analytics or remarketing without the need to wait for website code updates. But, on top of that, it offers the Experiments API. Here’s how it works.
Let’s come up with a story to better illustrate the problem GTM solves. Assume we want to give Rachel (the marketer) the ability to set up multivariate tests in our mobile app.
She wants to test a simple scenario - to check how two different headlines influence user engagement. She suspects that the current copy will yield worse results than the new ideas she has at the back of her mind, and she wants to measure engagement via session duration.
So how can we approach this case with GTM? It would look something like this:
Now that we have a process overview, let’s break it down and run through every step from the bottom up.
With GTM set up, we can get down to the experiment design. We start by defining which parts of the content will vary in the app. This tells us how to map the variations in the GTM wizard and also lets developers update the code accordingly.
In Rachel’s case this is straightforward; she has just two versions of the headline: the original copy and the one she wants to test it against:
To include these variants into GTM, go to Variables and create a new Google Analytics Content Experiment.
In the wizard, the first step is to put the original and the variation’s parameters into the editor. To do so, click on the “Original” item and type the headline parameter with the corresponding value, as in the picture below. Do the same for the first variation. (Note that the editor supports JSON, meaning you can input complex, nested variable structures too.)
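For illustration only, here is what the pair of entries might look like. The key name `headline` and both copy strings are made up for this example; in the wizard, the original and each variation are edited as separate items:

```json
{
  "Original":    { "headline": "Shop our latest deals" },
  "Variation 1": { "headline": "Save big on new arrivals" }
}
```

Because the editor accepts JSON, the value could just as well be a nested object (say, a headline plus a subtitle) rather than a single string.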
The second step is to choose the Experiment Objective. As mentioned, Rachel is going to measure the time users spend in the app. This is one of the built-in objective measures in GTM, so she can just select it from the dropdown menu. (Bear in mind she can also use the goals she’s already tracking in Google Analytics, like signups, conversions, etc.)
Now that she has defined what to show users, let’s define when to show it. GTM comes to the rescue here too: it gives an easy way to define how often each variation is served. See the picture below; it’s self-explanatory:
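Under the hood, exposure settings like these boil down to weighted assignment of users to variants. The sketch below is not GTM’s actual algorithm - it’s a plain-Java illustration of the idea, using a hash of the user ID so that the same user always lands in the same variant:

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

// Illustrative sketch only: deterministically assign a user to a variant
// according to exposure weights. This is NOT how GTM implements it.
public class VariantPicker {

    // weights must sum to 1.0, e.g. {0.5, 0.5} for an even split
    public static int pickVariant(String userId, double[] weights) {
        CRC32 crc = new CRC32();
        crc.update(userId.getBytes(StandardCharsets.UTF_8));
        // Map the hash to a stable number in [0, 1)
        double bucket = (crc.getValue() % 10000) / 10000.0;
        double cumulative = 0.0;
        for (int i = 0; i < weights.length; i++) {
            cumulative += weights[i];
            if (bucket < cumulative) {
                return i;
            }
        }
        return weights.length - 1; // guard against rounding drift
    }

    public static void main(String[] args) {
        double[] evenSplit = {0.5, 0.5};
        // The same user ID always yields the same variant index
        System.out.println(VariantPicker.pickVariant("user-42", evenSplit));
        System.out.println(VariantPicker.pickVariant("user-42", evenSplit));
    }
}
```

The point of the sketch is the property Rachel relies on: adjusting the weights in the wizard changes the distribution without any code change on the app side.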
There’s something worth highlighting at this point, though: all the statistical work is done entirely by the Tag Manager. Developers don’t need to be aware of the number of variations or any other thresholds Rachel modifies in her experiment. The Tag Manager SDK handles variant distribution itself and reports back which variation the user has seen.
GTM also offers an additional set of rules to control when a particular experiment should be active. For example, Rachel may want to roll out the experiment only to people using a particular version of her app. She can configure this herself within the wizard:
You should also know that GTM comes with a plethora of conditions (including custom variables) she can use to include or exclude users from the experiment. Imagine she wants to target only premium users or limit the experiment to Germany - this is all at her fingertips with the rule wizard.
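Conceptually, each such rule is just a predicate over user and app properties. A hypothetical sketch of the “premium users in Germany” example (the property names `country` and `tier` are invented here; in GTM you configure the equivalent conditions in the rule wizard rather than in code):

```java
import java.util.Map;

// Hypothetical illustration of experiment targeting rules as a predicate.
// Property names are made up for this example; GTM expresses the same
// idea through conditions configured in its rule wizard.
public class ExperimentRules {

    public static boolean isEligible(Map<String, String> user) {
        return "DE".equals(user.get("country"))
                && "premium".equals(user.get("tier"));
    }

    public static void main(String[] args) {
        Map<String, String> user = Map.of("country", "DE", "tier", "premium");
        System.out.println(ExperimentRules.isEligible(user)); // true
    }
}
```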
Alright, so Rachel has defined her first experiment. Before she can make it live, she needs to confirm it with the developers, because the dev team has to verify that they use the same variable keys in the app code. Otherwise, it simply won’t work.
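On the app side, the developers’ job is essentially to read the variable by the agreed key and fall back to the original copy when it’s absent. In the real GTM mobile SDK this is a lookup against the loaded container; the plain-Java stand-in below uses a `Map` instead, with the `headline` key carried over from our example:

```java
import java.util.Map;

// Plain-Java stand-in for reading an experiment variable. In the actual
// GTM mobile SDK you would query the loaded container instead; what
// matters is that the key ("headline") matches the key configured in
// the wizard, and that a sensible default exists.
public class HeadlineLookup {

    static final String DEFAULT_HEADLINE = "Welcome to our store";

    public static String headline(Map<String, String> container) {
        // Fall back to the original copy if the key is missing,
        // e.g. when the experiment isn't active for this user.
        return container.getOrDefault("headline", DEFAULT_HEADLINE);
    }

    public static void main(String[] args) {
        System.out.println(HeadlineLookup.headline(Map.of("headline", "Save big today")));
        System.out.println(HeadlineLookup.headline(Map.of())); // key absent -> default copy
    }
}
```

This is why the key review with developers matters: if Rachel renames the variable in the wizard while the app keeps looking up the old key, every user silently gets the default.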
Once that’s done, she can kick off the experiment. It comes down to clicking the Publish button and her variants will be underway.
Rachel can evaluate how her variations have been performing in the Google Analytics Experiment section.
As described above, during the experiment GTM takes care of selecting the users who will be exposed to variants. It will also choose the winning variation when the experiment ends. So, all in all, there’s really no need for developers to assist at any other stage of the campaign.
This is just a brief introduction to the Mobile Content Experiments with GTM. The tool offers way more than we demonstrated and we encourage you to experiment (pun intended) with it yourself. The learning curve is fairly low, plus, if you already use GTM for other tracking purposes, a solid and free A/B testing infrastructure might not be as far away as you think.