From time to time, we ask experts in their field to share wisdom from their unique perspective. This week, the founder of Gmail customer support app Hiver lists his top tips for effective A/B testing.
The most effective way to optimize your marketing campaign is to test and experiment: the results let you work backward toward an optimized, effective campaign. A/B testing, also called split testing, is commonly used to test variables, especially in email marketing, and it can help you get even seemingly small details, such as the length of a subject line, right.
Here are some best practices to keep in mind when split testing your campaigns:
A hypothesis gives your split testing direction. In a way, the hypothesis is the objective of the test itself: a simple statement describing what you want to prove or disprove with your A/B test. Taking the time to write it down will give you a clear direction and help you understand which metrics to test for.
A hypothesis (e.g., including a quote from a customer on the landing page will help improve sign-ups) will help you identify which parameters to measure. It may seem like a very basic step, but it is an essential one. So, anytime you want to split test, start here.
You can only measure one parameter effectively in a test. To draw conclusive results, you must keep everything else constant. If you test multiple variables at once, you won't know which variable actually contributed to the results.
For example, say you want to test the length of the subject line of your email newsletter. Identify two subject lines (one short, one long) that you think are most effective. Keeping other aspects, such as preheaders, constant, divide the email list into two groups and test each length. Here, the metric to measure is the email open rate: whichever subject line produces the better open rate is the one you should use for the send to your full list.
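The flow above can be sketched in a few lines of Python. Everything here is illustrative: the subscriber list, the open counts, and the 50/50 split are made-up assumptions, not real campaign data.

```python
import random

# Hypothetical subscriber list (illustrative addresses only).
subscribers = [f"user{i}@example.com" for i in range(1000)]

# Randomly shuffle, then split the list into two equal groups.
random.seed(42)  # fixed seed only so the example is reproducible
random.shuffle(subscribers)
half = len(subscribers) // 2
group_a = subscribers[:half]   # receives the short subject line
group_b = subscribers[half:]   # receives the long subject line

def open_rate(opens: int, sent: int) -> float:
    """Emails opened divided by emails delivered."""
    return opens / sent

# Assumed results after the send (made-up numbers for illustration):
rate_a = open_rate(opens=160, sent=len(group_a))  # short subject line
rate_b = open_rate(opens=120, sent=len(group_b))  # long subject line

winner = "short" if rate_a > rate_b else "long"
print(f"A: {rate_a:.1%}  B: {rate_b:.1%}  winner: {winner} subject line")
```

With these assumed numbers, the short subject line opens at 32% versus 24%, so it would be the one to send to the full list.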
Decide how you are going to measure the success or failure of the test before you even get started. Define what metrics you are going to use to measure the test results. Also, ensure that the success metric you pick is strongly relevant and dependent on the variable you want to test. For instance, say you want to increase the conversion rates of your landing pages and you want to test if it’s beneficial to add snippets of customer reviews to your page. Here, your variable is the reviews on the landing page and your success metric is the conversion rate.
A common mistake in A/B testing is tracking multiple metrics and deciding later which one to rely on. This often leads to confusion and inconclusive results. It is more effective to decide on the variable and the result metric at the start of the test and stick to them.
Sometimes you inadvertently introduce new variables into your sample. This happens when the selected sample is not random enough. For example, if you select a sample in which the majority of people are women, you have unintentionally introduced a demographic variable.
To minimize the risk of hidden variables, keep the selection as random as possible. For instance, you can use computer-generated random numbers to select names from your email list. The sample must reflect the randomness of the real pool of people you have; the point is that your selection process should give everyone the same probability of being selected.
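One simple way to get an equal-probability selection is uniform sampling without replacement, which Python's standard library provides directly. The list and sample sizes here are illustrative assumptions.

```python
import random

# Hypothetical email list; every address has the same chance of selection.
email_list = [f"subscriber{i}@example.com" for i in range(10_000)]

# random.sample draws uniformly at random without replacement,
# so no subscriber can appear twice and none is favored.
sample = random.sample(email_list, k=1000)

# Split the sample evenly into the two test groups.
group_a, group_b = sample[:500], sample[500:]
```

Because the draw is uniform, any demographic skew left in the groups is down to chance rather than the selection process.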
Failing to document results is another common mistake marketers make. Take the time to record your results and observations, especially if you frequently use split testing to evaluate a number of factors. This way, you won't end up repeating tests, and you can build on lessons from past tests. It is also easier to explain your findings to successors and new team members when you have a record of them; even in your absence, people will have access to these results.
In fact, you could use tools with features such as email notes and reminders to build a habit of documenting your findings and insights consistently, then compile those notes into a document later. Better still, if you are comfortable publishing your results, you can turn your findings into a post on your company's blog; your audience would likely be interested in them too.
You can draw reliable conclusions only if the volume is large enough, so take a large sample of emails and base your insights on that. Volume here means not just the size of the sample but also the size of the results. Beyond these numbers, the statistical difference between A and B is what matters most: if the difference between tests A and B is statistically insignificant, you should not act on the results.
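One standard way to check whether a difference between two conversion rates is statistically significant is a two-proportion z-test; this is a common choice rather than anything the article prescribes, and the conversion counts below are made-up illustrative numbers.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)          # combined rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))      # two-sided p-value
    return z, p_value

# Illustrative results: 120/1000 conversions for A vs. 150/1000 for B.
z, p = two_proportion_z_test(120, 1000, 150, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

If the p-value is below your chosen threshold (commonly 0.05), the difference is unlikely to be chance; if not, treat the test as inconclusive rather than declaring a winner.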
For example, say you are testing the CTAs in your email newsletter. You must first get enough opens to be able to really test your hypothesis. If your subscribers are hardly opening the emails, the problem may not be with the CTAs but somewhere else, such as the subject line.
Split testing is a powerful technique for optimizing your marketing campaigns, but overusing it is not advisable. It doesn't make sense to test every small aspect; sometimes you have to draw conclusions from expert suggestions and your own common sense. Know when the ROI of your split testing is no longer paying off.
©2023 Olive & Company / 612.379.3090 / 125 Main Street SE #343, Minneapolis, MN 55414