Experiment Local Evaluation (General Availability)

Update by Brian Giori, lead engineer on our feature delivery system

We're excited to announce that local evaluation has been promoted to general availability! All Amplitude Experiment customers may now create local evaluation mode flags and experiments, or migrate existing flags to local.

What is local evaluation?
Server-side local evaluation runs evaluation logic on your server, saving you the overhead of making a network request per user evaluation. Sub-millisecond evaluation is perfect for latency-sensitive systems that need to be performant at scale.

Performance
The primary benefit of local evaluation is its performance compared to remote evaluation. A single flag evaluation completes in well under a millisecond because no network request is made per user evaluation. One local evaluation beta customer, especially affected by network latency caused by geographical distance, shaved over 100ms from their end-user latency and nearly doubled server throughput during peak hours.

Tradeoffs
Because local evaluation happens outside of Amplitude, advanced targeting and identity resolution powered by Amplitude Analytics are not supported. That said, local evaluation still performs consistent bucketing with target segments, which is sufficient in many cases.

Feature                  Remote Evaluation  Local Evaluation
Consistent bucketing     ✅                 ✅
Individual inclusions    ✅                 ✅
Targeting segments       ✅                 ✅
Amplitude ID resolution  ✅                 ❌
User enrichment          ✅                 ❌
Sticky bucketing         ✅                 ❌

SDKs
Local evaluation is only supported by server-side SDKs which have local evaluation implemented. Local evaluation for Ruby is in active development. Let us know if there's a specific language you'd like support for!
SDK            Remote Evaluation  Local Evaluation
Node.js        ✅                 ✅
JVM (Beta)     ✅                 ✅
Go (Beta)      ✅                 ✅
Ruby (Beta)    ✅                 🚧
Python (Beta)  ✅                 ❌

🚧 The Ruby SDK's local evaluation support is in active development.

Advanced use cases
Edge evaluation: the evaluation-js library can be used to run evaluation on edge compute platforms. It works on AWS Lambda@Edge, Cloudflare Workers, and Akamai EdgeWorkers, and provides up-to-date variants for a user even if the content is served from the cache.
Server-side rendering: the Node.js SDK is used to run evaluations when the page is rendered on the server. It works with popular SSR frameworks (e.g. Next.js).
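Consistent bucketing is what makes local evaluation deterministic without a network call: a hash of the user and flag key yields the same assignment on every server, every time. Below is a minimal sketch of that general technique; the function names, salt scheme, and 50% rollout are illustrative assumptions, not Amplitude's actual implementation.

```python
import hashlib

def bucket(user_id: str, flag_key: str, salt: str = "v1") -> int:
    """Deterministically map a user to a bucket in [0, 100).

    Hashing user_id together with the flag key means the same user
    always lands in the same bucket for a given flag, with no network
    request and no shared state between servers.
    """
    digest = hashlib.sha256(f"{salt}/{flag_key}/{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100

def assign_variant(user_id: str, flag_key: str, rollout_pct: int = 50) -> str:
    """Assign 'treatment' to the first rollout_pct buckets, else 'control'."""
    return "treatment" if bucket(user_id, flag_key) < rollout_pct else "control"
```

Because the assignment depends only on the inputs, any number of servers evaluating the same user agree on the variant, which is what makes sub-millisecond, fully local evaluation possible.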

Related products: Experiment

Experiment Diagnostic Run-Time Charts

Trusting the data that powers your experiments

When you're trying to make critical decisions about changes to your product, the ability to trust the results of your experiments is vital. The outcome of each experiment you run is only as good as the quality of the data being collected. Most other experimentation platforms only provide you with a final calculation or analysis, without ever exposing the underlying data. This is problematic!

Sample Ratio Mismatch (SRM) is a common problem that often goes undetected. An SRM occurs when the number of users in your control and variant(s) is uneven or doesn't match your planned distribution split. An experiment can reach statistical significance, and a platform can report it, yet an SRM in the experiment can invalidate those results. If you don't have access to the underlying data in the analysis, you'll never know you're looking at faulty results.

"At LinkedIn, about 10% of our triggered experiments used to suffer from bias" (Automatic Detection and Diagnosis of Biased Online Experiments)

In fact, SRM happens in about 6-10% of all A/B tests run. And in redirect tests, where a portion of traffic is allocated to a new page, SRM can be even more prevalent. (Sample Ratio Mismatch (SRM) Explained)

"I can recall hundreds of SRMs. We consider it one of the most severe data quality issues we can detect." (https://exp-platform.com/Documents/2019_KDDFabijanGupchupFuptaOmhoverVermeerDmitriev.pdf)

"I am working on resolving an SRM just now. The SRM is critical. The analysis is completely untrustworthy." (https://exp-platform.com/Documents/2019_KDDFabijanGupchupFuptaOmhoverVermeerDmitriev.pdf)

One of the biggest advantages of using Amplitude Experiment is the close tie-in with Analytics, allowing you to track your experiments in real time, across entire user segments or down to an individual user's journey.
In short, Amplitude makes available all of the underlying data that's powering the outcomes of your experiments. Trusting that data is vital, and over the next several months we're focusing on making it a bigger component of Experiment.

Now
When you log in to Amplitude Experiment and view the Run tab, you'll notice some changes that provide more insight into both the Assignment and Exposure events of each experiment you run, in real time. You'll also be able to switch between cumulative and non-cumulative views of your data.

New Assignment and Exposure Events charts
Track both exposure and assignment events in real time, actively monitoring how many users are assigned and exposed to your control or variant(s). Toggle between cumulative and non-cumulative views. With these new views, you'll be able to quickly detect anomalies in your experiment delivery. For example, if too many users are exposed to a variant relative to the control, this may indicate a Sample Ratio Mismatch and potentially invalidate the results of your experiment.

Next
Over the coming months you'll see us make additional improvements to this page, providing more detail about how your experiments are running and more insight into the data powering the analysis. Follow this page to get notified as we start working on:
- Assignment-to-exposure conversion chart
- Variant jumping
- Diagnostic alerts and warnings for things like Sample Ratio Mismatch
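The SRM detection described above boils down to a chi-squared goodness-of-fit test on assignment counts against the planned split. Here is a minimal sketch of that standard test; the function name and the strict alpha threshold are illustrative assumptions, not Amplitude's implementation.

```python
import math

def srm_check(control: int, treatment: int, expected_split: float = 0.5,
              alpha: float = 0.001) -> bool:
    """Return True if the observed counts suggest a Sample Ratio Mismatch.

    Runs a one-degree-of-freedom chi-squared test of the observed
    control/treatment counts against the planned split. SRM checks
    conventionally use a strict alpha (e.g. 0.001) to avoid false alarms.
    """
    total = control + treatment
    expected_control = total * expected_split
    expected_treatment = total * (1 - expected_split)
    chi2 = ((control - expected_control) ** 2 / expected_control
            + (treatment - expected_treatment) ** 2 / expected_treatment)
    # Survival function of chi-squared with 1 dof: P(X > chi2) = erfc(sqrt(chi2 / 2))
    p_value = math.erfc(math.sqrt(chi2 / 2))
    return p_value < alpha

# On a planned 50/50 split, 10,000 vs 10,200 is within normal variation,
# while 10,000 vs 11,000 is a likely SRM.
```

The point of exposing the raw assignment and exposure counts, as the new charts do, is exactly that a check like this becomes possible at a glance.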

Related products: Experiment

Introducing Dashboard Templates

Amplitude is launching new Dashboard Templates. Our goal is to make creating and leveraging templates a seamless experience for everyone using Amplitude.

Dashboard Templates can be used to speed up reporting on the most common workflows used by teams within your organization. You can use dashboard templates to quickly analyze new product launches, evaluate different experiments across critical metrics, and spin up customer health analyses for your key accounts. With this launch, you can quickly turn your dashboards into templates by tagging the dashboard as a template, allowing teams to efficiently and confidently recreate their standard analyses and share best practices with just a few clicks. Save time when repeating common analyses and make it simpler for new team members to measure impact.

Using Find and Replace on dashboards, you can set up parameters for templates or make changes to your dashboard's charts without clicking into charts. This update allows you to replace any property, event, text, or even project at the dashboard level without needing to edit every single chart!

Lastly, Dashboard Templates can now be found in search and added to spaces. This helps teams better manage their template inventory and makes it easier for anyone in the organization to find templates relevant to them.

With these improvements, dashboards now replace standalone templates, and they can:
- Be tagged as a template (the tag appears on the dashboard, in search, and in spaces)
- Highlight template instructions for end users
- Replace events, properties, texts, and projects to templatize charts

Comment below with your template ideas. We can't wait to see what you create!

Related products: Product Analytics

New and Improved Spaces for Teams

Some of the most valuable analyses in Amplitude are the result of collaborations among teammates. Spaces help product teams subscribe to and organize analyses shared in Amplitude. Today we're introducing a brand new organization system for your charts, dashboards, notebooks, and cohorts! The goal of this release is to help you and your team more easily discover and organize relevant content in Amplitude. Below are some key changes and improvements you will start to see in your spaces:

Folders are a convenient way to group related content together in a single, easily viewable spot. You can now create folders and subfolders within your team spaces and personal workspace to better organize your analyses and make it easier for your teammates to find them.

Content can only be saved to one location, but you can create "shortcuts" to that content in other spaces. A shortcut is a way to add content to multiple spaces and folders. Anyone can create a shortcut to a piece of content, but only an owner of the original content can move the original to a new space.

The previous "My Workspace" feature has been renamed to "Home," and we have a new personal space labeled with your first and last name, where you can save your personal content and organize it into folders. This space is meant just for you to organize your own content. You can find it under "Starred Spaces" in the left navigation.

Every saved piece of content now must live in a space. By default, content is saved into your own personal workspace. You can also choose to move it into a shared space.

We've improved search and filtering capabilities within a space and added a brand new table format to more easily browse and find the content you're looking for. Within the new table view, you can also perform bulk actions, including bulk archival and bulk moves, to speed up organization in your spaces.

For more information on the latest updates to spaces, please check out our help guide.

Related products: Product Analytics

Experiment: Enhanced Goals & Takeaways

Hello everyone!

Thank you for being patient with us on product updates for Experiment. First, we want to let you know of an enhancement we're adding this week to improve Goals and Takeaways!

Better Goals & Takeaways are arriving this week!
The ultimate goal of running experiments is to make iterations within your product that lead to a measurable improvement in a desired outcome. This causal relationship is critical to knowing whether the feature you shipped or the change you made impacted your desired results. To do that, you need to:
- Set a measurable goal that matters to you
- Run your experiment against this goal
- Know what to do with your results once the data reaches statistical significance

We've made some adjustments to the Experiment goal-setting stage by adding a "Minimum Detectable Effect," or goal, as a measurable metric you hope to reach with the experiment. This metric might be something like "Increase subscription purchases by 5%."

[Screenshot of the revised goal-setting section in the Experiment product]

We then use the goal you've set, along with our statistical analysis of the experiment, to provide you with a recommendation on what you should do next in a new "Summary" card.

[Screenshot of the new Summary card displayed when an experiment completes]

In a single glance, we'll show you whether your experiment was statistically significant, above the baseline, and whether you reached your goal. We'll restate your original hypothesis, provide our recommended next step, and give a quick snapshot of how the control and variants did against your target. The example below shows an experiment with statistically significant results that didn't hit the desired goal.
Now you can make a more informed decision on whether to roll that feature out or make some minor adjustments to reach your target goal.

[Another screenshot of the new Summary card with slightly different outcomes displayed]

We've had this information in the product before, but we've now made it a lot easier for you and your stakeholders to see everything they need quickly.

Other update reminders:
A couple of weeks ago, we sent an email out on some enhancements we've made to the product over the last few months. As a quick reminder, these included:
- Improved Exposure Tracking: A simple and well-defined default exposure event for tracking a user's exposure to a variant. It improves analysis accuracy and reliability by removing the possibility of misattribution and race conditions caused by Assignment events. To take advantage of Improved Exposure Tracking, you'll need to make changes to your Experiment implementation.
- Deprecating 'off' as the Default Variant Value: With the move to improved exposure tracking, we want to maintain user property consistency across the system. Therefore, we have changed the experiment evaluation servers to unset an experiment's user property when the user is not bucketed into a variant, rather than setting the value to 'off'.
- Integrated SDKs: Client-side SDKs now support seamless integration between Amplitude Analytics and Experiment SDKs. Integrated SDKs simplify initialization, enable real-time user properties, and automatically track exposure events when a variant is served to the user.
- Experiment Lifecycle: An all-new guided experience for experiments. Features are now organized by the way teams work, from planning and running an experiment to analyzing the results and making decisions. You'll also notice a status bar that tracks key tasks in each stage and the duration of your experiment, along with suggestions on next steps.
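A Minimum Detectable Effect also determines roughly how long an experiment must run, since smaller effects need more users to detect. Below is a sketch of the standard normal-approximation sample size formula for a conversion rate metric; the function name is hypothetical and real platforms use more refined calculations.

```python
import math
from statistics import NormalDist

def sample_size_per_variant(baseline: float, mde: float,
                            alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate users needed per variant to detect an absolute lift
    of `mde` over a baseline conversion rate with a two-sided test.

    Textbook normal-approximation formula: n grows with variance and
    shrinks with the square of the effect you want to detect.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_power = NormalDist().inv_cdf(power)          # ~0.84 for 80% power
    p = baseline + mde / 2                         # rate midway between arms
    variance = 2 * p * (1 - p)
    return math.ceil(variance * (z_alpha + z_power) ** 2 / mde ** 2)
```

The quadratic dependence on the MDE is why halving the effect you want to detect roughly quadruples the required sample, and why setting a realistic goal up front matters.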

Related products: Experiment

Amplitude + Productboard

Our new Productboard integration enables Amplitude customers who use Productboard to filter customer feedback based on cohorts created within Amplitude, and to categorize these insights into themes that can inform the product roadmap and prioritization process. This will help product managers make better decisions about what to build and who it will impact when new features are shipped. With this integration, Amplitude + Productboard customers will now be able to:
- Aggregate customer and product data from multiple sources in a single place to get a richer view of how your feature is performing
- Use built-in Amplitude cohorts to filter notes, features, and roadmaps, and create custom user impact scores
- Better serve your target persona in Productboard by studying qualitative feedback alongside behavioral product data

To get started, Amplitude + Productboard users can create a cohort of users for a particular segment that might be important for their product strategy, such as isolating feedback from cohorts or showing roadmaps based on cohorts. You can bring these cohorts into Productboard to organize feedback, prioritize features, and create compelling roadmaps. Learn more about the Productboard integration here.

P.S. Interested in learning more about user engagement and how cohorts can help? Be sure to check out our on-demand webinar on driving user engagement, part of our Product-Led Growth series!

Related products: Product Analytics, Collaboration

New Funnel Conversion Insights

Amplitude users today use Funnels to explore what drives or hinders conversion outcomes. These analyses provide valuable insights into how users convert and why. This month, we're rolling out a series of our customers' top-requested updates to Funnels.

Funnel Event Totals
Customers can now measure instances where their users go through the same funnel multiple times. With funnel event totals, you can construct your funnel of interest, then select whether you'd like to count conversions by unique users or by event totals.

Median time to convert over time
Time to convert is a key metric for evaluating whether your users are struggling to complete a critical product flow. With this update, you can view the median time to convert for the entire funnel, as well as see how that metric changes over time. This allows you to assess how time to convert is affected by product changes (for example, knowing whether your recent release is actually helping users convert faster).

Multiple Conversion over Time
Last but not least, our conversion-over-time visualization allows customers to analyze how conversion rates are changing. You can now select and view multiple conversion-over-time metrics in a single view. To compare conversion for different steps in the funnel, you no longer have to create multiple charts and flip back and forth for comparison.

To see how these updates improve the Funnel Conversion experience in Amplitude, check out our Loom video below. With these latest updates, we're continuing to invest in our customers' most requested features. Be sure to also check out more of our recent releases from 2021 that will better equip your team to drive product-led growth.
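The distinction between counting conversions by unique users and by event totals, along with median time to convert, can be illustrated with a small sketch over toy event data (illustrative only, not Amplitude's implementation):

```python
from statistics import median

# Each event is (user_id, step, timestamp_seconds); toy data for illustration.
events = [
    ("u1", "view", 0), ("u1", "checkout", 30),
    ("u2", "view", 5),
    ("u3", "view", 10), ("u3", "checkout", 130),
    ("u1", "view", 200), ("u1", "checkout", 210),  # u1 completes the funnel twice
]

def funnel_metrics(events, first="view", last="checkout"):
    """Count funnel conversions by unique users and by event totals,
    and compute the median time from first step to last step."""
    converted, times = [], []
    pending = {}  # user -> timestamp of their latest unconverted first step
    for user, step, ts in sorted(events, key=lambda e: e[2]):
        if step == first:
            pending[user] = ts
        elif step == last and user in pending:
            converted.append(user)
            times.append(ts - pending.pop(user))
    return {
        "unique_users": len(set(converted)),  # u1 and u3 -> 2
        "event_totals": len(converted),       # u1 converted twice -> 3
        "median_time_to_convert": median(times) if times else None,
    }
```

With this data, counting by unique users reports 2 conversions while event totals reports 3, which is exactly the gap the new funnel event totals setting lets you choose between.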

Related products: Product Analytics, Behavioral Targeting

Pinned Dashboard Filters and Replace Properties

With pinned filters, dashboard owners can now pin relevant filters to their dashboards so other users at their organization can edit the contained charts on the fly. In addition, each filtered view you create generates a unique dashboard edit URL. This improves the impact of Analytics dashboards in three ways:
- Usability: Before pinned filters, altering a teammate's dashboard meant copying, editing, and saving a new dashboard. Now, you can quickly apply a dashboard filter to ask and answer a question, without those three extra steps or the clutter in your content library.
- Discoverability: Before pinned filters, bulk filters could only be found through the "more" dropdown menu. Now, novice users see the option for dashboard filters right on the page, encouraging them to try creating their own dashboard views. Experienced teammates can also pin suggested properties for filtering, so novice users have a good place to start.
- Distribution: Since dashboard filters create unique URLs whenever they're applied, you can share any views you find valuable without needing to save a new dashboard.

As part of pinned filters, we brought the replace properties function over from dashboard templates directly onto any saved dashboard page. This allows customers to quickly filter charts in their dashboard by a selected property value, without needing to access templates.

Watch the demo video below, and read our dashboard help doc here to learn how to start using pinned filters at your organization!