We would like to request the capability to conduct multi-armed bandit (MAB) experiments on your platform. Our payment team requires this approach, as it is a statistically robust way to identify better-performing variations and optimize traffic allocation.

MAB testing is a type of A/B testing that uses machine learning to dynamically shift visitor allocation toward better-performing variations while reducing allocation to less effective ones over time. The core concept of MAB is 'dynamic traffic allocation': the experiment continuously measures how much each variation is outperforming the others and routes the majority of traffic, in real time, to the winning variant. Currently, our process is very manual: we add a new variant every xx weeks and review the updated data to identify the winner.
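For illustration, here is a minimal sketch of what dynamic traffic allocation can look like, using Thompson sampling on a conversion metric. The variant names and conversion rates below are assumptions for the demo only, not Amplitude's implementation or our production setup:

```python
import random

# Minimal Thompson-sampling sketch for a Bernoulli (conversion) metric.
# Variant names and conversion rates are illustrative assumptions.
variants = {"control": {"successes": 0, "failures": 0},
            "treatment": {"successes": 0, "failures": 0}}

def choose_variant():
    # Sample a plausible conversion rate from each variant's Beta posterior
    # and route the visitor to the variant with the highest sampled rate.
    samples = {name: random.betavariate(s["successes"] + 1, s["failures"] + 1)
               for name, s in variants.items()}
    return max(samples, key=samples.get)

def record_outcome(name, converted):
    # Update the chosen variant's posterior with the observed outcome.
    key = "successes" if converted else "failures"
    variants[name][key] += 1

# Simulated traffic: the better variant gradually receives more visitors.
true_rates = {"control": 0.05, "treatment": 0.08}  # assumed rates for the demo
for _ in range(10_000):
    v = choose_variant()
    record_outcome(v, random.random() < true_rates[v])

# Total visitors routed to each variant; most traffic ends up on the winner.
print({name: s["successes"] + s["failures"] for name, s in variants.items()})
```

The point of the sketch is that the allocation adapts automatically as data comes in, which is exactly the manual "add a variant, wait, check the data" loop we want to eliminate.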

We believe that supporting MAB experiments in Amplitude would greatly benefit our experimentation process, allowing more efficient use of traffic and faster, more accurate identification of better-performing variations. Our payment team requires this method of experimentation, and we hope that Amplitude can support this feature.

@Novita Olivera you probably already saw this reply from the support team, but I'm posting the answer here so other members can see it:

At this time Multi Armed Bandit is not supported with Amplitude Experiment. Happy to pass this along to our Product team as a feature request. The ETA for this is in the 2nd half of 2023.


Hi @Novita Olivera - Are multi-armed bandits still on the roadmap for the near term? Happy to test the feature.   

Thanks!


Sorry @Novita Olivera, I meant to tag @Esther Trapadoux.


Howdy, @Esther Trapadoux!  Would you be able to provide an update on the multi-arm bandit feature?  My team is looking to align our roadmap with yours, and this is high on our priority list :)