Solved

Firing fetch_experiment and exposure together (when fetch_experiment returns a variant)

  • 9 February 2023
  • 3 replies
  • 134 views

Hello,

 

We are using the Ruby SDK's initialize_remote to call fetch_experiment and fire exposure events.

My question is: is it best practice to fire the exposure event whenever fetch_experiment returns an experiment?

The problem, as we see it, is that firing exposure and fetch separately introduces unpredictability, since a customer might drop off without ever seeing the experiment itself.

Your thoughts on this?


Best answer by tsegalla 9 February 2023, 18:30


3 replies

I have the same doubt. I think it makes sense to fire exposure when the variant is fetched. The only reason I wouldn't is if we fetched it for some reason other than providing the experience to the user. I would love to hear from someone else.


Hi @kannan.ganesan1989 and @marcel ueno, great question.

Regardless of when you fire each, first make sure you always call .fetch() before calling .variant() / firing exposure. This order matters: it ensures the user is assigned to a test and a variant before being considered exposed, so that your data is as accurate as possible.
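The fetch-before-expose ordering can be sketched in plain Ruby. This is a minimal illustration using a stub client, not the Amplitude Experiment SDK's actual API; the class and method names here are stand-ins for the real .fetch() / .variant() calls:

```ruby
# Stub client illustrating the required ordering: assignment via fetch
# must happen before a variant lookup / exposure. Names are illustrative,
# not the real Amplitude Experiment Ruby SDK API.
class StubExperimentClient
  def initialize
    @variants = {}
  end

  # Stand-in for .fetch(): assigns the user to a variant per experiment.
  def fetch(user_id)
    @variants[user_id] = { 'pricing-page-test' => 'treatment' }
  end

  # Stand-in for .variant(): looks up the already-fetched assignment.
  # Raises if called before fetch, enforcing the ordering described above.
  def variant(user_id, experiment_key)
    assigned = @variants[user_id]
    raise 'Call fetch before variant' unless assigned
    assigned[experiment_key]
  end
end

client = StubExperimentClient.new
client.fetch('user-123')                           # 1. assign first
variant = client.variant('user-123', 'pricing-page-test')
puts "exposing user to #{variant}"                 # 2. only now fire exposure
# prints "exposing user to treatment"
```

The stub raises when .variant() is called before .fetch(), which makes the ordering requirement explicit rather than a silent data-quality bug.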

In terms of best practice on how close together these calls get made, it really depends on where your test is taking place in your user flow:

  • If the change that you’re testing takes place on a home page/screen immediately after initializing, then it makes sense that the .fetch() call and the exposure would be called one right after the other.  In this case, there’s no difference between the two.
  • If the change that you’re testing takes place further down the user flow, we recommend calling .fetch() upon app initialization, and then calling .variant() / firing exposure closer to when the user would actually encounter your change in the flow.  For example, if you’re running an experiment on your pricing page, a user might be assigned via .fetch() on the home page for the experiment—but if they don’t visit the pricing page, they'll never actually be exposed to it.  For that reason, this user should not be considered to be part of the experiment results.
    • Amplitude uses the exposure as the denominator in every experiment metric you set up.  This means that while we can still measure if a user was assigned to a test and a variant upfront, we only include users who were exposed in the results.  Doing it this way helps ensure that you’re only analyzing user behavior as a result of them having experienced the change, reducing any noise from users that wouldn’t see the change.
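The deferred-exposure pattern for tests further down the flow can be sketched as follows. Again this is a hand-rolled illustration, not the SDK's real API: assignments are fetched once at startup, but exposure fires only inside the handler for the surface under test, so users who never reach it are never counted in the denominator:

```ruby
# Sketch of deferred exposure: assign everyone at init, but fire exposure
# only when a user actually reaches the tested page. All names here are
# illustrative stand-ins, not the Amplitude Experiment Ruby SDK API.
ASSIGNMENTS = {} # variant assignments keyed by user, filled at init
EXPOSED = []     # users for whom an exposure event has been fired

# Stand-in for calling .fetch() on app initialization.
def fetch_assignments(user_id)
  ASSIGNMENTS[user_id] = { 'pricing-page-test' => 'treatment' }
end

# Handler for the page under test: exposure fires here, at the moment
# the user would actually encounter the change.
def visit_pricing_page(user_id)
  variant = ASSIGNMENTS.fetch(user_id).fetch('pricing-page-test')
  EXPOSED << user_id
  "rendering pricing page (#{variant})"
end

fetch_assignments('alice')  # assigned at init
fetch_assignments('bob')    # assigned at init
visit_pricing_page('alice') # only alice reaches the pricing page
# bob was assigned but never exposed, so he is excluded from the results
```

After this runs, both users hold assignments but only alice is in EXPOSED, which mirrors why exposure, not assignment, is the right denominator for experiment metrics.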

I hope this is helpful!


Thanks 👍

Reply