What are some tactics you've used in creating your organization's Product Analytics data language?

  • 14 March 2022


Join us for this month's Coffee Chat, featuring Amplitude's very own solutions architect, Xin Cao (@CanioX)! Enjoy a coffee on us while chatting about how to create your company's product analytics data language.

We have a question for you:

What are some tactics you've used in creating your organization's Product Analytics data language? What have been some of the biggest challenges?

Share your thoughts and Xin will be posting here after the event too! 

Note that to see the latest posts you need to refresh this page. We can’t wait to hear from you! 


6 replies


Question from the event:
I have also come across issues where, during the exploration/testing phase, the engineering/dev team generates a lot of data that has very little business value. Obviously, that data gets normalized over time, but at the time of feature launch, when data monitoring is most important, it creates a lot of confusion.


Another question from the event!
Following up on that: I'm able to update tracking plans, but should the engineer who is implementing the code also be responsible for updating the tracking plan? That way I'm not a bottleneck for implementing events.


Is the Iteratively tracking plan functionality compatible across cloud offerings? (AWS, Azure, GCP)


Question from the event:
I have also come across issues where, during the exploration/testing phase, the engineering/dev team generates a lot of data that has very little business value. Obviously, that data gets normalized over time, but at the time of feature launch, when data monitoring is most important, it creates a lot of confusion.

Great question. There are two main things you could do to target that. First, get a clearer understanding of the dataset before pushing things to dev, which should reduce the volume of events coming into the development project. If that isn't possible, start with a smaller dataset and work to clean that up. Ultimately, though, it sounds like a shift is needed to pull the importance of data management to the forefront rather than leaving it until later. Data health during the initial launch is arguably the most important, so you want to make sure you get it right the first time!
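
As a rough illustration of keeping exploratory events out of the dataset you monitor at launch, the sketch below assumes the Amplitude Browser SDK's init/track API and uses placeholder project keys to route events to a separate project per environment:

```typescript
// Sketch only: route events to a separate Amplitude project per environment so
// exploratory dev events never land in the project you monitor at launch.
// Assumes the Browser SDK's init/track API; the API keys are placeholders.
import * as amplitude from '@amplitude/analytics-browser';

type Env = 'development' | 'production';

// Hypothetical per-environment project keys: one throwaway project for
// exploration, one clean project for launch monitoring.
const API_KEYS: Record<Env, string> = {
  development: 'DEV_PROJECT_API_KEY',
  production: 'PROD_PROJECT_API_KEY',
};

function initAnalytics(env: Env): void {
  amplitude.init(API_KEYS[env]);
}

// In a real app the environment would come from your build configuration.
initAnalytics('development');

// Instrument as usual; the chosen project receives the event.
amplitude.track('Feature Launched', { featureFlag: 'new-onboarding' });
```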


Another question from the event!
Following up on that: I'm able to update tracking plans, but should the engineer who is implementing the code also be responsible for updating the tracking plan? That way I'm not a bottleneck for implementing events.

It really depends on how involved the engineer is with the actual implementation of the product. We definitely want to give more ownership to the relevant team members, and once you have the responsibilities set, it should be fairly straightforward to execute on. I've seen it both ways: the eng team solely responsible for instrumenting the product and not involved at all in creating the tracking plan, and vice versa.

 

If you're looking to create your own framework, I would definitely recommend building engineering empathy and involving engineers in the decision-making process; at a minimum, they should be consulted on how feasible certain events are to instrument.
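
As a sketch of what a codified data language can look like on the engineering side (the event names, property shapes, and wrapper below are illustrative assumptions, not a specific Amplitude or Iteratively API), a single typed event catalog that lives in the product repo keeps naming consistent and gives engineers a concrete place to weigh in on feasibility:

```typescript
// Illustrative sketch of a shared event catalog. Keeping this file in the
// product repo lets engineers update it in the same pull request that
// instruments the event.
import * as amplitude from '@amplitude/analytics-browser';

// The catalog is the "data language": every tracked event and its properties
// are declared in one place that product and engineering both review.
type EventCatalog = {
  'Account Created': { plan: 'free' | 'paid' };
  'Song Played': { source: 'playlist' | 'search' };
};

// Type-checked wrapper: an event that isn't in the catalog, or that passes the
// wrong properties, fails to compile, so code and tracking plan can't drift.
function trackEvent<K extends keyof EventCatalog>(
  event: K,
  properties: EventCatalog[K],
): void {
  amplitude.track(event, properties);
}

trackEvent('Song Played', { source: 'search' });   // OK
// trackEvent('SongPlayed', { source: 'search' }); // compile error: not in the catalog
```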

 

Lastly, check out our Iteratively product, which makes the instrumentation process pretty seamless between eng and product. The tracking plan will automatically get updated as events are instrumented.


Is the Iteratively tracking plan functionality compatible across cloud offerings? (AWS, Azure, GCP)

The tracking plan is usable independent of the offerings you've listed; Amplitude will just surface discrepancies based on the events that are coming in, but you will not have the instrumentation benefits without SDK installation. However, Amplitude currently supports integrations with AWS and GCP to pull data in, and we'll be releasing more in the future.
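
As a rough sketch of cloud-independent event delivery (assuming Amplitude's HTTP V2 ingestion endpoint and payload format; the API key, user ID, and event name below are placeholders), events can be sent from a service running on any of those clouds without an installed SDK:

```typescript
// Sketch of server-side ingestion that works from any cloud (AWS, Azure, GCP),
// assuming Amplitude's HTTP V2 ingestion endpoint. Placeholders throughout.
async function sendEvent(): Promise<void> {
  const response = await fetch('https://api2.amplitude.com/2/httpapi', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      api_key: 'YOUR_PROJECT_API_KEY',
      events: [
        {
          user_id: 'user-123',
          event_type: 'Tracking Plan Validated',
          event_properties: { environment: 'staging' },
        },
      ],
    }),
  });

  // Amplitude responds with a JSON body describing accepted/rejected events.
  console.log(await response.json());
}

sendEvent().catch(console.error);
```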
