Experiment Diagnostic Run-Time Charts

Related products: Experiment

Trusting the data that powers your experiments


When you’re trying to make critical decisions about changes to your product, being able to trust the results of your experiments is vital. The outcome of each experiment you run is only as good as the quality of the data being collected. Most other experimentation platforms give you only a final calculation or analysis, without ever exposing the underlying data. This is problematic!


Sample Ratio Mismatch (SRM) is a common problem that often goes undetected. An SRM occurs when the number of users in your control and variant(s) doesn’t match your planned distribution split. An experiment can reach statistical significance, and a platform can report it, yet an SRM can still invalidate those results. If you don’t have access to the underlying data in the analysis, you’ll never know you’re looking at faulty results.
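A standard way to detect an SRM is a chi-squared goodness-of-fit test on the observed assignment counts. The sketch below is illustrative only (it is not Amplitude’s implementation); the function name and the 0.05 significance threshold are assumptions, and the critical value of 3.841 applies to the one-degree-of-freedom case of a control plus one variant.

```python
def srm_check(counts, expected_ratios, threshold=3.841):
    """Chi-squared goodness-of-fit test for Sample Ratio Mismatch.

    counts          -- observed user counts per group, e.g. [control, variant]
    expected_ratios -- planned split, e.g. [0.5, 0.5]
    threshold       -- critical value; 3.841 is chi-squared at p=0.05
                       with 1 degree of freedom (two groups). With more
                       variants, use the value for k-1 degrees of freedom.
    Returns (chi-squared statistic, True if an SRM is likely).
    """
    total = sum(counts)
    expected = [total * r for r in expected_ratios]
    chi2 = sum((obs - exp) ** 2 / exp for obs, exp in zip(counts, expected))
    return chi2, chi2 > threshold

# Planned 50/50 split, but the variant got 300 extra users:
chi2, srm = srm_check([10000, 10300], [0.5, 0.5])
# chi2 is about 4.43, above 3.841, so this imbalance is flagged as an SRM.
```

A small absolute imbalance like this one is easy to miss by eye, which is exactly why continuous diagnostics on assignment data matter.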



One of the biggest advantages of using Amplitude Experiment is the close tie-in with Analytics, allowing you to track your experiments in real-time, across entire user segments or down to an individual user’s journey. In short, Amplitude makes available all of the underlying data that’s powering the outcomes of your experiments.

Trusting the data that powers your experiments is vital, and over the next several months we’re focusing on making that a bigger component of Experiment.



When you log in to Amplitude Experiment and view the Run tab, you’ll notice some changes that provide more insight into both the Assignment and Exposure events of each experiment you run, in real time. You can now also switch between cumulative and non-cumulative views of your data.


New Assignment and Exposure Events Charts




Track both exposure and assignment events in real time, actively monitoring how many users are assigned and exposed to your control or variant(s). Toggle between cumulative and non-cumulative views. With these new views, you’ll be able to quickly detect anomalies in your experiment delivery. For example, if significantly more users are exposed to a variant than to the control, this may indicate a Sample Ratio Mismatch that could invalidate the results of your experiment.



Over the coming months you’ll see us make additional improvements to this page, providing more detail about how your experiments are running and more insight into the data powering the analysis. Follow this page to get notified as we start working on:

  • Assignment-to-exposure conversion chart

  • Variant jumping

  • Diagnostic alerts and warnings for things like Sample Ratio Mismatch

