Acquisition Channels and Multi-Touch Attribution now in open beta

At Amplify, we announced new features including Acquisition Channels and Multi-Touch Attribution. Both are now in open beta and available to customers on our Growth and Enterprise plans.

Acquisition Channels
With Acquisition Channels, you can identify where your acquisition traffic is coming from, alongside your other performance measurements. You can also create rules that use existing properties and property values to automatically classify the channel that sourced each event in your product. Combined with our new Multi-Touch Attribution, you can now understand how your channels impact bottom-line product outcomes: which channels bring in power users, which features are true conversion drivers, and how well each channel retains users over time. Learn more here.

Multi-Touch Attribution
With Multi-Touch Attribution available natively in Amplitude, you can easily distribute credit for your marketing programs using pre-built attribution models that can be configured on your metric. If the pre-built attribution models do not meet your needs, you can also create a custom model. Unlike other solutions, the Amplitude Behavioral Graph enables unlimited look-back windows on attribution. By combining product and acquisition data, you can attribute product usage by acquisition source and see how product features or content impact product success events. Learn more here.

By pairing these new acquisition insights with Multi-Touch Attribution, we want to help customers measure how effective their acquisition investments are in driving product engagement, as well as the impact of their channel strategy across the customer journey. We're excited to see you take advantage of these new capabilities inside Amplitude!
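To build intuition for what an attribution model does, here is a minimal sketch of how the common pre-built models (first-touch, last-touch, linear) distribute one conversion's credit across an ordered list of channel touchpoints. This is an illustration of the general technique, not Amplitude's implementation; the function and model names are hypothetical.

```python
def distribute_credit(touchpoints, model="linear"):
    """Distribute one conversion's credit across channel touchpoints.

    `touchpoints` is the ordered list of channels a user touched before
    converting. Returns a {channel: credit} dict whose values sum to 1.0.
    """
    credit = {}
    n = len(touchpoints)
    if n == 0:
        return credit
    if model == "first_touch":
        weights = [1.0] + [0.0] * (n - 1)   # all credit to the first touch
    elif model == "last_touch":
        weights = [0.0] * (n - 1) + [1.0]   # all credit to the last touch
    elif model == "linear":
        weights = [1.0 / n] * n             # equal credit to every touch
    else:
        raise ValueError(f"unknown model: {model}")
    for channel, w in zip(touchpoints, weights):
        # A channel can appear more than once in the journey; accumulate.
        credit[channel] = credit.get(channel, 0.0) + w
    return credit
```

A custom model would simply supply a different weight vector over the same ordered touchpoints.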

September 2022 Product Release Highlights: Updates make it even easier to do great work in Amplitude

It’s never been easier to power data-driven products and experiences in Amplitude. The product team released updates last month to help customers be even more effective, including enhancements to existing features and new workflows. Updates are available to all Amplitude customers and in the EU Data Center unless otherwise specified.

Summary

- Analytics: Timezone customization for charts and dashboards; add Amplitude charts to Miro boards
- Audiences: Distinct day and nested cohorts
- Data: Derived properties enhancements; data warehouse import improvements; event streaming debugger view
- Experiment: Open Experiment charts in Analytics; experiment end notifications; home page improvements

Amplitude Analytics: New integration and customization options

Easily add Amplitude charts to Miro boards
Now customers can easily search for and add Amplitude charts directly into Miro boards without switching between the two platforms. EU Data Center availability coming soon. Learn more about the Miro integration.

Customize timezones for charts and dashboards
By popular demand, customers can now change time zones at the individual chart or dashboard level without impacting their project.

Amplitude Audiences: New cohort enhancements

Target users based on consistent behavior with distinct day cohorts
Now available in the cohort builder, customers can create cohorts defined by the number of distinct days a user has performed a behavior. For example, a customer could create a cohort of users who favorited an item on their site at least once per week in the last 30 days.

Define new audiences based on existing cohorts
With new nested cohorts, customers can define an audience by referencing and changing the filters of an existing cohort. They can use nested cohorts to create groups of users common to multiple existing cohorts, or filter down to a subset of users within a cohort who share properties or behaviors.
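The distinct-day rule described above ("at least once per week in the last 30 days") boils down to counting unique active calendar days in a trailing window. A minimal sketch, assuming you already have the dates on which a user performed the behavior (this is an illustration, not Amplitude's cohort engine):

```python
from datetime import date, timedelta

def in_distinct_day_cohort(event_dates, today, window_days=30, min_days=4):
    """True if the user was active on at least `min_days` distinct
    calendar days within the trailing `window_days` window.

    Duplicate events on the same day count once, which is what makes
    this a *distinct day* cohort rather than an event-count cohort.
    """
    cutoff = today - timedelta(days=window_days)
    active_days = {d for d in event_dates if cutoff < d <= today}
    return len(active_days) >= min_days
```

With `min_days=4` over a 30-day window, this roughly captures "about once per week" behavior.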
Amplitude Data: More clarity and ways to use data

Event streaming setup just got easier with the new debugger view
The debugger view provides customers with live errors and logs for specific data payloads. Learn more about new event streaming capabilities.

Create sophisticated data transformations with derived properties enhancements
With derived properties, customers can create new event and user properties based on functions and operators applied across multiple existing properties, including:

- Event properties
- User properties
- Group properties
- Lookup properties
- Channel classifiers

Customers can now also create derived properties from other derived and transformed properties. Available to Enterprise and Govern Add-On (formerly "Taxonomy Add-On") customers. Learn more about derived properties and use cases.

Understand your data imports easily
Customers have access to a new metrics page within S3 import sources, with greater clarity into how much data is imported over time, the number and type of data errors, and other helpful information.

Amplitude Experiment: Improvements to the product experience

Easily open Experiment charts in Analytics
View Experiment charts in Analytics in just one click with the new "Open in Analytics" button. Now it's easier for customers to add Experiment analyses to Notebooks, Dashboards, or anywhere else they monitor and communicate with their team. Available to paid customers. EU Data Center availability coming soon.

Get alerted when a test is ready for a decision with Evaluation Notifications
With new Evaluation Notifications, customers get an email when their test reaches statistical significance. They don't need to constantly check progress or let a completed test sit idle. Coming soon: Experiment will send email notifications for end dates and issues like sample ratio mismatch. Available to paid customers. EU Data Center availability coming soon.
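Returning to the derived properties mentioned above: conceptually, a derived property (such as a channel classifier) is a pure function over existing event and user properties. A minimal sketch under that reading; the property names and classification rules here are illustrative, not Amplitude's implementation:

```python
def derived_property(event, fn):
    """Compute a derived property by applying `fn` to the event's
    existing event and user properties."""
    return fn(event.get("event_properties", {}), event.get("user_properties", {}))

def channel_classifier(event_props, user_props):
    """Illustrative channel classifier built from UTM-style properties."""
    source = (user_props.get("utm_source") or "").lower()
    medium = (user_props.get("utm_medium") or "").lower()
    if medium == "cpc":
        return "Paid Search"
    if source in {"newsletter", "mailchimp"}:
        return "Email"
    if source:
        return "Referral"
    return "Direct"
```

Because the classifier is itself just a function of properties, it can in turn feed into further derived properties, which mirrors the new "derived properties from other derived properties" capability.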
Enjoy improvements to the Experiment home page
Customers can see new details about recent tests, such as start/end dates and decisions, on the Experiment home page. They also have the option to customize what's displayed. Available to paid customers. EU Data Center availability coming soon.

Never miss an update
Get release highlights and launch news delivered to your inbox. Subscribe to product updates from our Community. If you're not an Amplitude customer yet, let's change that! Get started for free.

August 2022 Product Release Highlights: New features make Amplitude even better

The Amplitude product team had a busy month, releasing 18 (!!!) updates across our platform. These releases support teams in making better decisions with data, delivering better experiences for their customers, and better understanding how their experiments are running. Let's take a look at the highlights.

Amplitude Analytics: Features that help teams get more from their data

Make better decisions with Data Tables
Customers can analyze their most important metrics in a single view with Data Tables, unlocking multi-metric, multi-dimensional analysis to improve marketing attribution, market segmentation, experiment analysis, and more. Learn more about Data Tables.

Get answers to important questions fast with Starter Templates
Customers can answer essential product questions quickly with out-of-the-box dashboards they can customize and save. Not sure where you'd begin with Starter Templates? Try determining your product's North Star with a new set of Product KPIs we've released in the feature. Demo the experience for free. Learn more about Starter Templates.

Customize event frequency charts with new data bucketing options
By popular demand, Analytics customers can now tailor the view of their event segmentation frequency charts with custom buckets. They can adjust the minimum, maximum, and interval size, or use the custom bucket modal to enter their own bucket ranges.

Enjoy improvements to our user stream experience
Customers who use the user streams feature will see an improved version of the experience, accessible from most charts via Microscope.
These improvements include the ability to:

- Create cohorts
- Export users to CSV
- Filter events
- Link to the user page with a clear CTA for a deeper dive into user and event information
- See up to 20 users at once without having to click "Show more users"

Amplitude Customer Data Platform: Better real-time campaigns and data visibility

Recommend is now Audiences
We're introducing Audiences as an expanded version of the product formerly known as Recommend. As the new name suggests, Audiences provides powerful audience discovery, segmentation, and activation to help customers better target their users with personalized content and experiences. Learn more about Audiences.

Build dynamic campaigns with real-time syncs
Update and sync cohorts every minute to create in-the-moment engagement campaigns. Real-time syncs are now available to all Audiences customers on paid plans. Learn more about real-time syncs and how to set them up in Audiences.

Data Management: Connect your data warehouse to Amplitude
Data warehouse (DWH) destination connections are now available at no additional cost for all customers on Starter and Scholarship plans! Customers can access all destination connections to leading cloud data warehouses and storage providers, including:

- Amazon S3
- Snowflake
- Redshift
- Google Cloud Storage
- BigQuery

Learn more about setting up your data warehouse.

Event streaming destinations now available to all customers
Event streaming destinations enable Amplitude customers to power real-time, targeted experiences. With event streaming, you can use the rich behavioral data that is captured, resolved, and transformed in Amplitude to enrich customer profiles and send data to marketing, sales, and infrastructure tools. This functionality is now generally available to all customers, and you can stream up to 10M events per month for free! Learn more about available event streaming destinations and upcoming destinations on our roadmap.
Simplify onboarding and managing data set configurations with Guided Configuration
Onboarding data sets to Amplitude and managing existing import configurations has never been easier. Configure a cloud storage import with the new Guided Configuration option. Once customers have connected their cloud storage (AWS S3 or GCP Storage) bucket in the Sources section of Amplitude, they can map and/or transform data to the Amplitude model specification. Learn more about Guided Configuration.

Support data security with new event data retention options
Customers can now control how long their data is retained on the Amplitude platform by setting an event retention policy for organization-level data stored in Amplitude. If you're a current customer, reach out to your account team to set up this functionality.

Filter tracking plan data right within Amplitude
With new data filtering for tracking plans, customers can drill down into table data more granularly right within their account. There's no need to download a CSV file to manipulate the data.

Stay up-to-date on tracking plan comments with Slack integration in Data
Customers can choose to get notified when someone comments on their tracking plan. They can also connect projects to Slack workspaces to get notifications about plan updates. Learn more about Slack integration in Data.

Understand where bad data is coming from to quickly fix issues
Previously, customers would see "unknown" sources in their tracking plan, making it challenging to find the source of bad data. Improvements to source data now display all sources as shown on the connections page, making it easier to spot and fix errors in tracking plans.

Amplitude Experiment: Experience updates help teams run quality experiments

Save overhead with new local evaluation mode flags and experiments
With new local evaluation in Experiment, customers get a boost in performance over running evaluation on a remote server.
They also save the overhead of making added network requests. Learn more about local evaluation.

Avoid accidentally making disruptive experiment changes with guardrail warnings
Once an experiment has started running, it's best practice to avoid changing its setup, since such changes can lead to unexpected results that are hard to interpret. With new guardrail warnings, a customer who is about to make such a change to the configuration of an experiment will see a pop-up message letting them know it could impact the validity of their results. Learn more about new guardrail warnings.

Understand when sample ratio mismatches happen to quickly fix them
Sample ratio mismatches (SRMs) happen when an uneven distribution of users sees your control and variant in an experiment. SRMs indicate that the experiment hasn't been truly random, so the results may not be conclusive. We added new charts to help customers understand whether there's an SRM in an experiment, warnings to show when one has been detected, and suggestions for next steps. Learn more about new warnings for SRM detection.

Understand at a glance how much longer experiments need to run
We added a "Duration Estimate" to the experiment while it's running that shows how much longer it needs to run to achieve statistical significance. Learn more about duration estimation.

Enjoy a better experiment completion experience
We've renamed the "Make Decision" button to "Complete Experiment" and made quality-of-life improvements to decision making, such as the ability to roll back a decision made in error. Learn more about improvements to the experiment completion experience.

There are more product updates on the way
Don't miss any of the latest product goodness! Subscribe to Product Updates to hear about them first. Not yet an Amplitude customer? Get started for free.

Announcing Amplitude's new Event Streaming Destinations!

We are excited to announce the general availability of new event streaming destinations! Event streaming allows you to send the data in Amplitude across your stack. With event streaming, you can use the rich behavioral data in Amplitude to enrich customer profiles and send data to marketing, sales, and infrastructure tools in near real time to power customer experiences. Destinations such as Braze, Google Analytics 4, Kinesis Data Firehose, Iterable, Customer.io, Intercom, and Google Pub/Sub are now available, with many more coming soon!

This new feature includes powerful, no-code, configuration-based tools that give fine-grained control over the data you send. Decide exactly which users, events, and properties you wish to forward, with the ability to filter by user, group, and event properties. In addition, see at a glance important metrics like event volume, end-to-end latency, and detailed delivery issue information to understand the performance and health of your integration.

Here's how some of our customers are using this new feature:

- Marco Polo quickly powered a welcome email campaign by streaming their sign-up events in real time from Amplitude to Braze.
- RefaceAI leveraged Amplitude's Kinesis Data Stream integration to pipe events in real time from their mobile clients to their back-end services, avoiding the cost of building and maintaining a custom pipeline.
- Invoice Simple powered a robust engagement campaign by sending a series of events to customize and tailor their messaging.

Start using Amplitude as your insights-driven customer data platform today. You can stream up to 10M events per month for free, and you can keep track of this free event volume on your settings page.
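The filtering controls described above (forward only certain events, and only when user or event properties match) amount to a predicate evaluated per event before it is streamed. A minimal sketch of that idea; the event shape and parameter names are illustrative, not Amplitude's configuration format:

```python
def should_forward(event, allowed_events, required_props=None):
    """Decide whether to stream an event to a destination.

    `allowed_events` is the set of event types to forward;
    `required_props` optionally maps event-property names to the
    values they must equal for the event to pass the filter.
    """
    if event.get("event_type") not in allowed_events:
        return False
    props = event.get("event_properties", {})
    for key, value in (required_props or {}).items():
        if props.get(key) != value:
            return False
    return True
```

A streaming pipeline would apply this predicate to each incoming event and forward only those for which it returns True.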

Related products: Data Management

Experiment: Duration estimation and conclusion clarity

We have two quick updates to the lifecycle component of Experiment. Lifecycle is the structure of Experiment in stages (Plan, Configure, Monitor, and Analyze), as seen across the top of the page. We're grateful to all of the customers who give us feedback on the overall lifecycle of their experiments; we're always aiming to have this lifecycle match what happens in reality. Two areas we've received feedback on are users wanting a better understanding of how long it will take for an experiment to finish running, and wanting more clarity on what to do when an experiment ends.

Analyze: New Duration Estimation
We have added a Duration Estimate to the summary card while your experiment is running. This callout provides an estimate of how much longer we believe it will take for the analysis to reach statistical significance on your primary metric. We calculate this largely from how much traffic the experiment has received so far, projected forward, and we also provide estimated best-case and worst-case scenarios.

Analyze: Completing an Experiment
Previously, when an experiment reached statistical significance, we presented users with a button labeled "Make a Decision". While making a well-informed product decision based on the data is ultimately the best outcome for an experiment, sometimes an experiment does not end in a product decision, and users would not click the button because they weren't ready to commit. We're now changing that button to read "Complete Experiment", making it more prominent in a few places, and making it more dynamic based on our recommendation. We're also adding the ability to go back and revisit a decision, allowing users to change a previous decision. We hope this brings more clarity to the purpose of the button: to complete an experiment and take a next step.
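The duration estimate described above (observed traffic so far, projected forward) can be sketched as a simple rate projection. This is a minimal illustration of the idea, not Amplitude's actual estimator; the required sample size would come from a separate power calculation:

```python
import math

def estimate_remaining_days(required_sample, current_sample, days_elapsed):
    """Project how many more days an experiment needs to run, assuming
    traffic continues at its observed average daily rate.

    Best-case / worst-case scenarios could be produced by scaling the
    observed rate up or down before projecting.
    """
    if current_sample >= required_sample:
        return 0
    daily_rate = current_sample / max(days_elapsed, 1e-9)
    return math.ceil((required_sample - current_sample) / daily_rate)
```

For example, with 2,500 of a required 10,000 users reached after 5 days, the observed rate is 500 users/day, so about 15 more days are projected.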
Thank you again to everyone who continues to drive incredible decisions through Experiment. We're always looking for ways to improve the product, both big and small.

Related products: Experiment

Experiment: New Warnings for SRM Detection & Guardrails

Update by Stephen Choi, Marina Sergeyeva, and Shelley Wang, our lead engineers on diagnostics and warnings.

One area that has become increasingly important for our customers is the ability to be warned when experiments are not going as expected. This meta-analysis of the quality of experiment data ensures that our customers can trust the analysis and results of each experiment. Last month we released diagnostic run-time charts as the first step in this direction, giving users the ability to track the health of their experiments in real time. Watching the assignment and exposure event data surfaces anomalies that stay hidden in other tools. We're excited to continue expanding on that work this month with two new updates: SRM detection warnings and experiment guardrail warnings.

SRM Detection Warnings: 2 new charts and 2 new warning boxes
A sample ratio mismatch (SRM) occurs when there is an uneven or unintended distribution of exposures to the control and variant(s) in an experiment. For example, if you've set a 50/50 allocation so that 50% of your users see the control and 50% see the variant, you would expect a 50/50 split in the actual exposures. However, for various reasons outside of our control, more users can end up seeing the control or the variant. This is usually caused by an unintended bug in your code or in the deployment of the experiment in the feature flags.

When an SRM occurs, you may see something like 55% of users seeing the control and 45% seeing the variant. This indicates something has gone wrong with the true randomization of the experiment and that the analysis and results should be highly scrutinized.

We will be publishing a blog post on SRMs in the near future with more in-depth explanations of why they're important to catch. Subscribe to our blog so you don't miss it!
What we shipped
We are adding two new charts to help users determine whether there is a sample ratio mismatch in a running experiment, along with warning boxes that appear when an SRM has been detected.

Assignment to Exposure conversion chart
This chart shows how many assignment events actually converted to an exposure event. It is another view of the assignment and exposure events already shown in the Monitor tab. If there is a significant difference in conversion between variants, it is a good indication of why an SRM, if any, is happening.

Variant Jumping chart for each variant
Variant jumping is when the same user sees two or more variants. Significantly high numbers here are another good indicator of an SRM.

SRM detection warnings
There is now a warning banner in the Monitor tab and on the summary card in the Analyze tab when a significant SRM has been detected. We also provide a help center article with recommended actions.

Experiment Guardrail Warnings
Making foundational changes to the setup of an experiment after it has started running is not best practice and can lead to unexpected results that are difficult to interpret. Customers have asked us to provide ample warning if they are about to make a configuration change that could invalidate the analysis. These new guardrail warnings generate a pop-up message when a user is about to make a change that could affect the validity of an experiment.

When experiments influence product decisions, the data must be trusted. If there are other areas in the product where you worry about data quality and experiment integrity, please let us know!
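SRM detection is conventionally done with a chi-squared goodness-of-fit test comparing observed exposure counts against the planned allocation. Here is a minimal sketch of that standard technique (the source does not state which test Amplitude uses, so treat this as illustrative):

```python
def srm_chi_square(observed_counts, expected_ratios):
    """Chi-squared goodness-of-fit statistic for exposure counts
    versus the planned allocation ratios."""
    total = sum(observed_counts)
    stat = 0.0
    for obs, ratio in zip(observed_counts, expected_ratios):
        exp = total * ratio
        stat += (obs - exp) ** 2 / exp
    return stat

def has_srm(observed_counts, expected_ratios, critical_value=10.83):
    # 10.83 is roughly the chi-squared critical value at p = 0.001 with
    # 1 degree of freedom; SRM checks conventionally use a strict
    # threshold to avoid false alarms on healthy experiments.
    return srm_chi_square(observed_counts, expected_ratios) > critical_value
```

A 55/45 split on 10,000 users is flagged (statistic 100, far beyond the threshold), while a 50.5/49.5 split is not (statistic 1.0), matching the intuition that only meaningful deviations from the planned allocation should raise a warning.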

Related products: Experiment

Experiment Local Evaluation (General Availability)

Update by Brian Giori, lead engineer on our feature delivery system.

We're excited to announce that local evaluation has been promoted to general availability! All Amplitude Experiment customers may now create local evaluation mode flags and experiments, or migrate existing flags to local.

What is local evaluation?
Server-side local evaluation runs evaluation logic on your server, saving you the overhead incurred by making a network request per user evaluation. The sub-millisecond evaluation is perfect for latency-sensitive systems which need to be performant at scale.

Performance
The primary benefit of local evaluation is its performance compared to remote evaluation. A single flag evaluation completes in well under a millisecond by avoiding a network request per user evaluation. One local evaluation beta customer, especially affected by network latency caused by geographical distance, shaved over 100ms from their end-user latency and nearly doubled server throughput during peak hours.

Tradeoffs
Because local evaluation happens outside of Amplitude, advanced targeting and identity resolution powered by Amplitude Analytics are not supported. That said, local evaluation allows you to perform consistent bucketing with target segments, which is sufficient in many cases.

| Feature | Remote Evaluation | Local Evaluation |
| --- | --- | --- |
| Consistent bucketing | ✅ | ✅ |
| Individual inclusions | ✅ | ✅ |
| Targeting segments | ✅ | ✅ |
| Amplitude ID resolution | ✅ | ❌ |
| User enrichment | ✅ | ❌ |
| Sticky bucketing | ✅ | ❌ |

SDKs
Local evaluation is only supported by server-side SDKs which have local evaluation implemented. Local evaluation for Ruby is in active development. Let us know if there's a specific language you'd like support for!
| SDK | Remote Evaluation | Local Evaluation |
| --- | --- | --- |
| Node.js | ✅ | ✅ |
| JVM (Beta) | ✅ | ✅ |
| Go (Beta) | ✅ | ✅ |
| Ruby (Beta) | ✅ | 🚧 |
| Python (Beta) | ✅ | ❌ |

🚧 The Ruby SDK is in active development for local evaluation.

Advanced use cases

Edge local evaluation
The evaluation-js library can be used to run evaluation on edge compute platforms. It works on AWS Lambda@Edge, Cloudflare Workers, and Akamai EdgeWorkers, and provides up-to-date variants for a user even if the content is served from the cache.

Server-side rendering
The Node.js SDK is used to run evaluations when the page is rendered on the server. It works with popular SSR frameworks (e.g., Next.js).
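The consistent bucketing guarantee in the tradeoffs table above is what makes local evaluation possible: given the same user and flag, a deterministic hash always yields the same variant, so no server round trip or shared state is needed. A minimal sketch of that general technique (this is not Amplitude's actual bucketing algorithm; the hash choice and key format are illustrative):

```python
import hashlib

def bucket_variant(user_id, flag_key, variants):
    """Deterministically assign a user to a variant.

    `variants` maps variant name -> allocation weight. Hashing the
    (flag, user) pair to a point in [0, 1] and walking the cumulative
    weights gives the same answer on every server, every time.
    Assumes `variants` is non-empty.
    """
    digest = hashlib.sha256(f"{flag_key}/{user_id}".encode()).hexdigest()
    point = int(digest[:8], 16) / 0xFFFFFFFF  # uniform in [0, 1]
    total = sum(variants.values())
    cumulative = 0.0
    for name, weight in variants.items():
        cumulative += weight / total
        if point <= cumulative:
            return name
    return name  # guard against floating-point rounding at the boundary
```

Because the assignment depends only on the inputs, two independent servers evaluating the same user agree without coordinating, which is exactly why local evaluation can skip the network request.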

Related products: Experiment

Experiment Diagnostic Run-Time Charts

Trusting the data that powers your experiments
When you're trying to make critical decisions about changes to your product, the ability to trust the results of your experiments is vital. The outcome of each experiment you run is only as good as the quality of the data being collected. Most other experimentation platforms only provide you with a final calculation or analysis, without ever exposing the underlying data. This is problematic! Sample ratio mismatch (SRM) is a common problem that often goes undetected and unnoticed. An SRM occurs when the number of users in your control and variant is either uneven or doesn't match your planned distribution split. An experiment can reach statistical significance and a platform can report it, yet an SRM in the experiment can invalidate those results. If you don't have access to the underlying data in the analysis, you'll never know you're looking at faulty results.

"At LinkedIn, about 10% of our triggered experiments used to suffer from bias" (Automatic Detection and Diagnosis of Biased Online Experiments). In fact, SRM happens in about 6-10% of all A/B tests run, and in redirect tests, where a portion of traffic is allocated to a new page, it can be even more prevalent (Sample Ratio Mismatch (SRM) Explained). "I can recall hundreds of SRMs. We consider it one of the most severe data quality issues we can detect." (https://exp-platform.com/Documents/2019_KDDFabijanGupchupFuptaOmhoverVermeerDmitriev.pdf) "I am working on resolving an SRM just now. The SRM is critical. The analysis is completely untrustworthy." (https://exp-platform.com/Documents/2019_KDDFabijanGupchupFuptaOmhoverVermeerDmitriev.pdf)

One of the biggest advantages of using Amplitude Experiment is the close tie-in with Analytics, allowing you to track your experiments in real time, across entire user segments or down to an individual user's journey.
In short, Amplitude makes available all of the underlying data that powers the outcomes of your experiments. Trusting that data is vital, and over the next several months we're making it a bigger focus of Experiment.

Now
When you log in to Amplitude Experiment and view the Run tab, you'll notice some changes providing more insight into both the assignment and exposure events of each experiment you run, in real time. You'll also be able to switch between cumulative and non-cumulative views of your data.

New Assignment and Exposure Events charts
Track both exposure and assignment events in real time, actively monitoring how many users are assigned and exposed to your control or variant(s). Toggle between cumulative and non-cumulative views. With these new views, you'll be able to quickly detect anomalies in your experiment delivery. For example, if too many users are being exposed to a variant over the control, this may indicate a sample ratio mismatch and potentially invalidate the results of your experiment.

Next
Over the coming months you'll see us make additional improvements to this page, providing more detail about how your experiments are running and more insight into the data powering the analysis. Follow this page to get notified as we start working on:

- Assignment-to-exposure conversion chart
- Variant jumping
- Diagnostic alerts and warnings for things like sample ratio mismatch

Related products: Experiment

Introducing Dashboard Templates

Amplitude is launching new Dashboard Templates. Our goal is to make creating and using templates a seamless experience for everyone in Amplitude. Dashboard Templates speed up reporting on the most common workflows used by teams within your organization: quickly analyze new product launches, evaluate experiments across critical metrics, and spin up customer health analyses for your key accounts.

With this launch, you can quickly turn your dashboards into templates by tagging the dashboard as a template, allowing teams to efficiently and confidently recreate their standard analyses and share best practices with just a few clicks. Save time when repeating common analyses and make it simpler for new team members to measure impact.

Using Find and Replace on dashboards, you can set up parameters for templates or make changes to a dashboard's charts without clicking into each chart. This update allows you to replace any property, event, text, or even project at the dashboard level without editing every single chart!

Lastly, Dashboard Templates can now be found in search and added to spaces. This helps teams manage their template inventory and makes it easier for anyone in the organization to find the templates relevant to them.

With these improvements, dashboards now replace the previous templates, and they can:

- Be tagged as a template (the tag appears on the dashboard, in search, and in spaces)
- Highlight template instructions for end users
- Replace events, properties, text, and projects to templatize charts

Comment below with your template ideas. We can't wait to see what you create!

Related products: Product Analytics

New and Improved Spaces for Teams

Some of the most valuable analyses in Amplitude are the result of collaboration among teammates. Spaces help product teams subscribe to and organize analyses shared in Amplitude. Today we're introducing a brand new organization system for your charts, dashboards, notebooks, and cohorts! The goal of this release is to help you and your team more easily discover and organize relevant content in Amplitude. Below are some key changes and improvements you will start to see in your spaces.

Folders are a convenient way to group related content in a single, easily viewable spot. You can now create folders and subfolders within your team spaces and personal workspace to better organize your analyses and make it easier for your teammates to find them.

Content can only be saved to one location, but you can create "shortcuts" to that content in other spaces. A shortcut is a way to add content to multiple spaces and folders. Anyone can create a shortcut to a piece of content, but only an owner of the original content can move the original to a new space.

The previous "My Workspace" feature has been renamed "Home", and there is a new personal space labeled with your first and last name where you can save your personal content and organize it into folders. This space is meant just for you; you can find it under "Starred Spaces" in the left navigation. Every saved piece of content now must live in a space. By default, content is saved into your own personal workspace; you can also choose to move it into a shared space.

We've improved search and filtering within a space and added a brand new table format to more easily browse and find the content you're looking for. Within the new table view, you can also perform bulk actions, including bulk archiving and bulk moving of objects, to speed up organization in your spaces. For more information on the latest updates to spaces, please check out our help guide.
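The rule described above (each item has exactly one canonical location, while shortcuts surface it in other spaces without moving it) can be sketched as a tiny data model. This is an illustrative toy, not Amplitude's implementation:

```python
class SpaceIndex:
    """Toy model of the spaces rule: one canonical location per item,
    plus any number of shortcuts in other spaces."""

    def __init__(self):
        self.location = {}   # item -> its single canonical space
        self.shortcuts = {}  # space -> set of items shortcut into it

    def save(self, item, space):
        # Saving (or moving) replaces the one canonical location.
        self.location[item] = space

    def add_shortcut(self, item, space):
        # A shortcut adds visibility without changing the canonical home.
        self.shortcuts.setdefault(space, set()).add(item)

    def contents(self, space):
        owned = {i for i, s in self.location.items() if s == space}
        return owned | self.shortcuts.get(space, set())
```

Moving an item changes what one space owns, while every shortcut keeps pointing at the same underlying content, which is why shortcuts never produce diverging copies.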

Related products: Product Analytics

Experiment: Enhanced Goals & Takeaways

Hello everyone! Thank you for being patient with us on product updates for Experiment. First, we want to let you know about an enhancement we're adding this week to improve Goals and Takeaways.

Better Goals and Takeaways are arriving this week!
The ultimate goal of running experiments is to make iterations within your product that lead to a measurable improvement in a desired outcome. This causal relationship is critical to knowing whether the feature you shipped or the change you made impacted your desired results. To do that, you need to:

- Set a measurable goal that matters to you
- Run your experiment against this goal
- Know what to do with your results once the data reaches statistical significance

We've made some adjustments to the Experiment goal-setting stage by adding a "Minimum Detectable Effect", or goal, as a measurable metric you hope to reach with the experiment. This metric might be something like "Increase subscription purchases by 5%." [Screenshot of the revised goal-setting section in the Experiment product]

We then use the goal you've set, along with our statistical analysis of the experiment, to provide a recommendation on what you should do next in a new Summary card. [Screenshot of the new Summary card displayed when an experiment completes]

In a single glance, we'll show you whether your experiment was statistically significant, above the baseline, and whether you reached your goal. We'll restate your original hypothesis, provide our recommended next step, and give a quick snapshot of how the control and variants did against your target. The example below shows an experiment with statistically significant results that didn't hit the desired goal.
Now you can make a more informed decision about whether to roll that feature out or make some minor adjustments to achieve your target goal.

Screenshot: another view of the new Summary card, with slightly different outcomes displayed.

We've had this information in the product before, but we've now made it much easier for you and your stakeholders to quickly see everything they need.

Other update reminders

A couple of weeks ago, we sent an email about some enhancements we've made to the product over the last few months. As a quick reminder, these included:

Improved Exposure Tracking: A simple and well-defined default exposure event for tracking a user's exposure to a variant. This improves analysis accuracy and reliability by removing the possibility of misattribution and race conditions caused by Assignment events. To take advantage of Improved Exposure Tracking, you'll need to make changes to your Experiment implementation.

Deprecating 'Off' as the Default Variant Value: With the move to improved exposure tracking, we want to maintain user-property consistency across the system. Therefore, we have changed the experiment evaluation servers to unset an experiment's user property when the user is not bucketed into a variant, rather than setting the value to 'off'.

Integrated SDKs: Client-side SDKs now support seamless integration between Amplitude Analytics and Experiment SDKs. Integrated SDKs simplify initialization, enable real-time user properties, and automatically track exposure events when a variant is served to the user.

Experiment Lifecycle: An all-new guided experience for experiments. Features are now organized around the way teams work, from planning and running an experiment to analyzing the results and making decisions. You'll also notice a status bar that tracks key tasks in each stage and the duration of your experiment, along with suggestions on next steps to take.
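The connection between a Minimum Detectable Effect and the traffic an experiment needs can be sketched with a standard two-proportion power calculation. This is an illustrative textbook computation under the usual normal approximation, not Amplitude's internal statistics engine; the function name and the 10% baseline conversion rate are assumptions for the example.

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variant(baseline_rate, mde_relative, alpha=0.05, power=0.8):
    """Approximate users needed per variant to detect a relative lift.

    Uses the normal approximation for a two-sided, two-proportion z-test.
    Illustrative only -- not Amplitude's internal methodology.
    """
    p1 = baseline_rate
    p2 = baseline_rate * (1 + mde_relative)  # e.g. a 5% relative lift
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # significance threshold
    z_beta = NormalDist().inv_cdf(power)           # desired statistical power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# "Increase subscription purchases by 5%" from an assumed 10% baseline:
n = sample_size_per_variant(0.10, 0.05)
```

A small relative MDE on a modest baseline demands tens of thousands of users per variant, which is why a Summary card that distinguishes "statistically significant" from "hit the goal" is useful: an experiment can clear the first bar well before it clears the second.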

Related products: Experiment

Amplitude + Productboard

Our new Productboard integration enables Amplitude customers who use Productboard to filter customer feedback based on cohorts created within Amplitude, and to categorize these insights into themes that can inform the product roadmap and prioritization process. This will help product managers make better decisions about what to build and who it will impact when new features ship. With this integration, Amplitude + Productboard customers will now be able to:

Aggregate customer and product data from multiple sources in a single place to get a richer view of how your feature is performing

Use built-in Amplitude cohorts to filter notes, features, and roadmaps, and create custom user-impact scores

Better serve your target persona in Productboard by studying qualitative feedback alongside behavioral product data

To get started, Amplitude + Productboard users can create a cohort of users for a particular segment that is important to their product strategy, including isolating feedback from cohorts or showing roadmaps based on cohorts. You can bring these cohorts into Productboard in order to organize feedback, prioritize features, and create compelling roadmaps. Learn more about the Productboard integration here.

P.S. Interested in learning more about user engagement and how cohorts can help? Be sure to check out our on-demand webinar on driving user engagement, part of our Product-Led Growth Series!

Related products: Product Analytics, Collaboration