Solved

Good practices: how to test our app event trace.

  • 4 March 2021
  • 6 replies
  • 969 views

castor-radieux

Hello,

Our app is evolving, and we would like to make sure that our event tracking stays stable.

It would be damaging, and hard to fix, if we realised only after releasing a new version of our app that some code changes (refactoring, replaced features, ...) had stopped certain events from being tracked.

 

But that is what tests are for! A test could perform a "normal walkthrough" of the app and check that the event trace of that walkthrough is stable.
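To make that concrete, here is roughly the kind of test we have in mind. It is only a sketch: run_walkthrough and the golden trace are placeholders for our own test harness, and it assumes the SDK can be pointed at a local capture endpoint during tests.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

captured_events = []

class CaptureHandler(BaseHTTPRequestHandler):
    """Captures event payloads the app would otherwise send to Amplitude."""
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        payload = json.loads(body)
        # Payloads carry an "events" list; keep only the event names.
        captured_events.extend(e["event_type"] for e in payload.get("events", []))
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):  # keep test output quiet
        pass

def test_walkthrough_trace_is_stable():
    server = HTTPServer(("localhost", 0), CaptureHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # Hypothetical: drive the app through a normal walkthrough, with its
    # analytics endpoint redirected to the local capture server.
    run_walkthrough(endpoint=f"http://localhost:{server.server_port}")

    golden_trace = ["app_opened", "signup_started", "signup_completed"]
    assert captured_events == golden_trace, f"trace drifted: {captured_events}"
    server.shutdown()
```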

 

What do those tests look like in practice? Does the Amplitude community have best practices for this use case, maybe a playbook, or even a pointer to code or a Git repository?

 


Best answer by jarren.patao 9 March 2021, 23:15


6 replies

Saish Redkar

Hey @castor-radieux!

That’s a really important topic you brought up.

Here are the key things that have helped us over the past few years to contain the chaos of wiring up multiple events and keep the data governance side sane before sending events into Amplitude:

  • We are heavy on cross-portfolio analysis and have multiple apps instrumented. Creating and maintaining an up-to-date data dictionary / taxonomy sheet is the first and easiest step. Ensure that the events and event properties currently being sent to Amplitude are part of your data dictionary first (see the sketch after this list).
  • The Govern feature is another really useful one; we use it to trigger alerts for event validation errors and for event/property planning.
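To illustrate the data dictionary point: a pre-send check against the sheet can be as simple as the sketch below. This is not Amplitude code, and the event names and properties are invented.

```python
# Data dictionary: event name -> set of planned event properties.
DATA_DICTIONARY = {
    "signup_started": {"source", "plan"},
    "signup_completed": {"plan"},
}

def validate(events):
    """Return a list of violations for events that drift from the dictionary."""
    violations = []
    for event in events:
        allowed = DATA_DICTIONARY.get(event["event_type"])
        if allowed is None:
            violations.append(f"unknown event: {event['event_type']}")
        else:
            extra = set(event.get("event_properties", {})) - allowed
            if extra:
                violations.append(f"{event['event_type']}: unplanned properties {extra}")
    return violations

# A planned event with planned properties passes the check.
assert validate([{"event_type": "signup_started",
                  "event_properties": {"source": "ad"}}]) == []
```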

To make sure that existing instrumentation isn't affected by refactoring or by adding new features/events, you can use some of Amplitude's features:

  • Creating charts for existing events and setting up custom monitor alerts. You can set threshold conditions there and get alerted if event counts drop below your expectations after a release.
  • Creating test scripts with the Dashboard REST API. You can write some code against these endpoints and calculate your own thresholds from the data points returned. Running these for all your events before and after a release can help with your instrumentation walkthrough (see the sketch below).
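A before/after comparison script could look roughly like the sketch below. The keys, dates, and threshold are placeholders, and you should double-check the segmentation endpoint and response shape against the current Dashboard REST API docs.

```python
import requests

API_KEY, SECRET_KEY = "YOUR_API_KEY", "YOUR_SECRET_KEY"  # placeholders

def daily_counts(event_type, start, end):
    """Fetch daily totals for one event via the Dashboard REST API."""
    resp = requests.get(
        "https://amplitude.com/api/2/events/segmentation",
        params={"e": f'{{"event_type":"{event_type}"}}',
                "start": start, "end": end},  # dates as YYYYMMDD
        auth=(API_KEY, SECRET_KEY),
    )
    resp.raise_for_status()
    return resp.json()["data"]["series"][0]

def check_event(event_type, threshold=0.5):
    # Placeholder date windows: the week before and the week after a release.
    before = daily_counts(event_type, "20210222", "20210228")
    after = daily_counts(event_type, "20210301", "20210307")
    baseline = sum(before) / len(before)
    current = sum(after) / len(after)
    assert current >= threshold * baseline, (
        f"{event_type} dropped from {baseline:.0f}/day to {current:.0f}/day")
```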

I'm curious to hear from other users in this community about how they are tackling this use case.

Hope this helps!

jarren.patao

Hi @castor-radieux,

@Saish Redkar brings up a lot of great points about the tools that are useful for ensuring your instrumentation is in line with your expectations.

I actually create multiple projects: before any new release, I simply redirect the API key the app uses to the test project. You can currently create as many projects as you'd like, so we usually advise testing in a completely separate environment with your own CI/CD tools so you aren't polluting your live environment with testing data.
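As a rough sketch (the key values below are placeholders), the redirect can be as simple as selecting the API key from the environment, so your CI/CD pipeline picks the test project by default:

```python
import os

# Placeholder keys: one Amplitude project per environment.
AMPLITUDE_API_KEYS = {
    "production": "PROD_PROJECT_API_KEY",
    "test": "TEST_PROJECT_API_KEY",
}

def amplitude_api_key():
    # Default to the test project so a misconfigured build never
    # pollutes the live environment with testing data.
    env = os.environ.get("APP_ENV", "test")
    return AMPLITUDE_API_KEYS[env]
```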

As for playbooks, we do have a Data Taxonomy Playbook with some information that might be helpful to you; you can check it out in our Help Center: https://help.amplitude.com/hc/en-us/articles/115000465251

Hope this helps!

castor-radieux

"I actually create multiple projects: before any new release, I simply redirect the API key the app uses to the test project."

Would you mind detailing what you do afterwards?

jarren.patao

@castor-radieux sure! After redirecting the API key so that the newest version of my app sends events with the test project's API key, I do all the formal testing of my events to make sure they end up in Amplitude accordingly.

I use a couple of our tools to verify my actual event behavior:

  1. The Event Explorer: https://help.amplitude.com/hc/en-us/articles/360050836071
    This feature allows you to query a user by their user_id and inspect their events in real time as they’re being performed and ingested.
  2. The Instrumentation Explorer: https://chrome.google.com/webstore/detail/amplitude-instrumentation/acehfjhnmhbmgkedjmjlobpgdicnhkbp
    This feature works similarly to the Event Explorer, but it surfaces events before they are processed, so you may even be able to see blocked events with this tool.

Once I verify that my events behave as expected, I revert the API key to my production project, where I make sure my new release is detected so I can track any changes in metrics between versions.

castor-radieux

Thanks for the follow-up, Jarren.

We do our manual testing in a similar way.

The next step would be to automate those tests in some way: to free up time, remove the risk inherent in manual testing, and catch regressions in events we did not expect to be affected.

jarren.patao

Thanks for your feedback @castor-radieux! I definitely understand how automating some testing could free up some time. I’ll provide this feedback to our team to see if there is anything we might be able to address on our end!
