Experiments: finding your way through trial and error

What is experimentation

Experiments are ways to obtain information about whether doing something leads to the outcome you want.

For example, if you run an email campaign to get customers to buy a product, you may want to know whether emails with video lead to a better click-through rate (CTR) and more purchases than emails without. So you split your email list into two batches, sending one batch the email with video (the treatment arm) and the other without (the control arm). You then measure the CTR of each batch so you can tell whether including videos led to a higher CTR.
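The split-and-measure idea above can be sketched in a few lines of Python (the email addresses and the 50/50 split are illustrative assumptions, not a prescription):

```python
import random

def split_into_arms(emails, seed=42):
    """Randomly split an email list into two equal arms (treatment, control)."""
    emails = list(emails)
    random.Random(seed).shuffle(emails)  # fixed seed makes the split reproducible
    half = len(emails) // 2
    return emails[:half], emails[half:]

def ctr(clicks, sent):
    """Click-through rate: clicks divided by emails sent."""
    return clicks / sent if sent else 0.0

treatment, control = split_into_arms(f"user{i}@example.com" for i in range(1000))
```

After the campaign runs, you would compare `ctr(treatment_clicks, len(treatment))` against `ctr(control_clicks, len(control))`.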


Why run experiments

There are a number of reasons.

1 Confounding factors: There may be confounding factors that make it difficult to gauge whether the change led to the outcome you wanted. For example, suppose seasonality affects sales and you happen to run the experiment during a down period. You may see a 10% drop in CTR after launching emails with videos and conclude the videos were no good. But if you knew the same period would have had a 30% drop in CTR anyway, you would conclude that adding videos produced a 20-percentage-point net gain.
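The counterfactual arithmetic in that example can be spelled out (the numbers are the illustrative ones above):

```python
observed_change = -0.10        # CTR change seen after launching emails with videos
counterfactual_change = -0.30  # seasonal drop that would have happened anyway
net_effect = round(observed_change - counterfactual_change, 2)
# net_effect is 0.2: a 20-percentage-point net gain attributable to the videos
```

A control arm gives you that counterfactual directly, instead of having to guess it.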

2 Size of investment: adding videos to emails may be cheap, but for more costly investments you will need to be much more certain that the change will lead to positive impact. Knowing the size of the impact also helps you decide the magnitude of investment that is justified.

You can use ROI models to evaluate this. Subscribe to my newsletter to receive future posts on ROI evaluations.

3 Need for precision: you may be deciding between several email templates: with/without video, with/without a coupon offer, with/without large font, etc. Instead of sequentially testing different designs over a long period of time, you can run 2^3 = 8 batches all at once. That would drastically shorten the time to finding the optimal marketing email.
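Enumerating the arms of such a factorial design is straightforward; a minimal sketch, assuming the three with/without factors above:

```python
from itertools import product

factors = {
    "video": (False, True),
    "coupon": (False, True),
    "large_font": (False, True),
}

# Every combination of factor levels is one arm: 2 * 2 * 2 = 8 arms.
arms = [dict(zip(factors, levels)) for levels in product(*factors.values())]
```

Each recipient batch gets one of the eight template variants, and you can then compare CTR across all arms in a single experiment.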

When not to run experiments

Running experiments takes time and resources, and can add unnecessary complexity. There are times when you should not run them, such as when:

  • You already know sufficiently, through previous evidence or qualitative research, that making the change has no downside and only upside, especially if the change takes minimal resources.
  • Your sample volume is too low or your data quality is too poor, so you would have a tough time analyzing the results (in statistical terms, a low signal-to-noise ratio).

How to run experiments

1 Design the experiment

Set a clear hypothesis so that you can more easily set up the experiment and get results you can rely on. In our example, the hypothesis could be: “I assume including video in the email will increase CTR. I’d also like to know by how much CTR increases.”

You should also clearly define the metric you’re aiming to change. In our example: CTR = click-throughs / number of emails sent.

You will also need to determine the sample size: how many emails you need to send out with and without videos to be able to tell whether there is a statistically significant (“statsig” for short) difference between the control and treatment arms.
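A common way to estimate this is the normal-approximation sample-size formula for comparing two proportions; a sketch, assuming an illustrative baseline CTR of 5% and a hoped-for lift to 6%:

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_arm(p_control, p_treatment, alpha=0.05, power=0.80):
    """Approximate emails needed per arm for a two-proportion z-test."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance threshold
    z_b = NormalDist().inv_cdf(power)          # desired statistical power
    p_bar = (p_control + p_treatment) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p_control * (1 - p_control)
                              + p_treatment * (1 - p_treatment))) ** 2
    return ceil(numerator / (p_treatment - p_control) ** 2)

# Detecting a lift from a 5% to a 6% CTR needs roughly 8,000+ emails per arm.
n = sample_size_per_arm(0.05, 0.06)
```

Note how sensitive this is to effect size: a larger expected lift shrinks the required sample dramatically, which is why tiny improvements are expensive to detect.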

2 Set up your experiment and connect data

Experimentation platforms (e.g. Amplitude, Statsig) can split traffic between control and treatment arms using feature flagging.

Feature flagging is when a piece of SDK code is generated and inserted into your product’s software architecture, so your product knows whether to include a video in a given email. This can get complicated fast, so you should seek input from your engineering colleagues. Let me know if you’d like to learn more on this topic.
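Under the hood, these SDKs typically bucket each user deterministically, so the same recipient always lands in the same arm across sends. A minimal sketch of that idea (the hashing scheme and the `assign_arm` helper are hypothetical, not any particular vendor’s API):

```python
import hashlib

def assign_arm(user_id, experiment="email_video", treatment_pct=50):
    """Deterministically bucket a user: same user always gets the same arm."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in 0..99
    return "treatment" if bucket < treatment_pct else "control"

def build_email(user_id):
    """The email-sending code branches on the flag."""
    if assign_arm(user_id) == "treatment":
        return "email with video"
    return "email without video"
```

Hashing on experiment name plus user id means different experiments get independent splits, while each user’s assignment within one experiment never flips.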

3 Data flow/reporting: You need to connect the necessary data into and out of the platform. In our example, email lists need to be fed into the platform, and CTR needs to be reported out.

Typically these platforms have built-in reporting capabilities, but you may need more nuanced analytics. For example, if you wanted to know whether customers from certain age groups respond differently to seeing videos, you could gauge that impact more flexibly in another reporting tool, like Tableau or Looker.
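As a sketch of that kind of segmented analysis, here is how you might compute CTR per age group and arm from exported per-recipient rows (the field names and sample rows are assumptions for illustration):

```python
from collections import defaultdict

# Hypothetical per-recipient results exported from the experimentation platform.
results = [
    {"age_group": "18-34", "arm": "treatment", "clicked": True},
    {"age_group": "18-34", "arm": "control",   "clicked": False},
    {"age_group": "35-54", "arm": "treatment", "clicked": False},
    {"age_group": "35-54", "arm": "control",   "clicked": True},
]

def ctr_by_segment(rows):
    """CTR for every (age_group, arm) pair."""
    clicks, sent = defaultdict(int), defaultdict(int)
    for row in rows:
        key = (row["age_group"], row["arm"])
        sent[key] += 1
        clicks[key] += row["clicked"]  # True counts as 1, False as 0
    return {key: clicks[key] / sent[key] for key in sent}
```

In practice you would run this over thousands of rows; with only a handful per segment, the per-segment CTRs are too noisy to act on.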
