
The Reality of A/B Testing: Setting Realistic Expectations
When it comes to A/B testing and experimentation, many product managers and marketers have unrealistic expectations about success rates and timelines. Andres Glusman, a veteran of growth hacking and experimentation, shares some sobering statistics:
- 89% of experiments do not move the needle
- Only 11% of experiments have a positive impact
- The vast majority of experiments fall into the "no impact" range
Glusman emphasizes that this high failure rate is not necessarily a bad thing, but it's crucial to set proper expectations:
"If you have a 65% loss rate, you're doing great. You're doing three times better than the average right in the industry right now."
The key is to understand that experimentation is about learning and iterating, not just achieving wins. However, it's also important to deliver results over time to justify the investment in experimentation.
The Time Factor: Why Patience is Crucial in A/B Testing
One of the biggest misconceptions about A/B testing is how quickly results can be achieved. Glusman explains:
"On average, it takes anywhere from one to two weeks to a month. By analyzing experiments from major companies across the internet, we see that it takes about a month."
This timeline has significant implications for how many experiments can realistically be run:
- Even if you could launch a thousand different ideas, you wouldn't get the results back fast enough to act on them
- You have a very limited number of "at bats" (opportunities to test)
- Understanding this limitation is crucial for planning and prioritization (the sketch below makes the arithmetic concrete)
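To see why a test can take a month or more, here is a rough back-of-the-envelope sketch. It is not from the talk; the baseline rate, expected lift, and significance settings are illustrative assumptions. The sample size needed to detect a lift grows rapidly as the expected effect shrinks.

```python
# Back-of-the-envelope A/B test duration estimate (illustrative assumptions).
from math import ceil
from statistics import NormalDist

def required_sample_per_variant(baseline_rate, relative_lift,
                                alpha=0.05, power=0.80):
    """Approximate per-variant sample size for a two-sided z-test
    comparing two conversion rates."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # significance threshold
    z_beta = NormalDist().inv_cdf(power)           # power threshold
    effect = abs(p2 - p1)
    n = 2 * p_bar * (1 - p_bar) * (z_alpha + z_beta) ** 2 / effect ** 2
    return ceil(n)

weekly_traffic = 5_000   # the low-traffic scenario mentioned in the talk
n = required_sample_per_variant(baseline_rate=0.05, relative_lift=0.10)
weeks = 2 * n / weekly_traffic   # two variants split the traffic
print(f"~{n:,} users per variant, ~{weeks:.1f} weeks at {weekly_traffic:,} users/week")
```

Under these assumptions, detecting a 10% relative lift on a 5% baseline takes roughly 31,000 users per variant, which is over twelve weeks at 5,000 users a week. This is exactly why the number of "at bats" is so limited.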
The Importance of Big Swings in Low-Traffic Scenarios
For companies with lower traffic volumes (around 5,000 users per week), Glusman advises taking bigger swings with experiments:
- Small, incremental changes are unlikely to show significant results with low traffic
- Consider testing more substantial changes or redesigns
- Focus on unified variable sets that ladder up to a bigger idea
"If you have a very high value item, let's say you are an up-and-coming Neo Bank and every customer is worth $100,000. Well, the impact of having a medium-size improvement on conversion is a very meaty dollar amount for you. And therefore, it is worth running the experiment or investing into to try and optimize that experience because the payout is so good even though the traffic is low."
Building Blocks and Unified Variable Sets
Glusman recommends a strategic approach to designing experiments, especially for low-traffic scenarios:
- Identify building blocks: Use data from various sources (competitor experiments, usability tests, customer feedback) to identify elements likely to have an impact.
- Create unified variable sets: Instead of testing small, isolated changes, group multiple changes that support a larger hypothesis or idea (sketched after the quote below).
- Take bigger swings: Be willing to test more significant changes to increase the likelihood of detecting an impact.
"What we like to encourage people to do is do two things. One, think about it as building blocks. Like figure out what can you learn from others that you know works that you've seen through their experiments that you talk to your friends that you've kind got signal on that you see people in your usability sessions like really stumbling on one identify the building blocks that you have high confidence in through having data in a variety of different ways is number one and use those as building blocks."
The Challenge of Redesigns and Big Changes
When it comes to major redesigns or significant changes to a product, Glusman acknowledges the inherent challenges:
- Existing users may resist change due to familiarity with the current interface
- New users may respond differently to a modernized design
- It's difficult to test sweeping changes incrementally
He suggests a pragmatic approach:
- Start with isolated tests: Try testing new designs on landing pages or specific user segments before a full rollout (a sketch of segment-gated assignment follows the quote below).
- Focus on fundamental ideas: Test core concepts that underpin the redesign separately before implementing the entire change.
- Be prepared for short-term pain: Accept that there may be a period of disjointedness as you iterate towards the new vision.
"You either need to figure out how to live with the disjointedness for some period of time and have a disjointed experience in order to learn and iterate your way towards the thing or you are going to take forever to launch a thing and really cross your fingers and hopes that it works because it's very very very hard to iterate or experiment in ways that always feel like 100% consistent with the new vision."
Storing and Leveraging Experiment Learnings
One of the ongoing challenges in A/B testing is how to effectively store and utilize learnings from past experiments. Glusman acknowledges this is an area where many companies struggle:
- Knowledge can become outdated as markets and user behaviors change
- Vast collections of experiment results can become overwhelming and difficult to navigate
- There's a risk of dismissing ideas too quickly based on past failures
Glusman suggests asking key questions when revisiting old experiment ideas:
"The real question that I like to ask is, 'Well, what's different now than when the last time we ran this? What's changed?'"
He also hints at ongoing work to track patterns and persistence of experiment results over time, which could provide valuable insights into which learnings remain relevant.
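One hypothetical way to support the "what's different now?" question is to record the context in which each result held, so that a changed context flags an old idea as worth retesting. The schema and field names below are invented for illustration.

```python
# A hypothetical experiment-learnings log keyed on context.
from dataclasses import dataclass

@dataclass
class Learning:
    idea: str
    outcome: str      # "win", "loss", or "no impact"
    tested_on: str    # date of the test
    context: dict     # audience, traffic source, design system at the time

def changed_factors(learning: Learning, current_context: dict) -> list[str]:
    """Return the contextual factors that differ from the last run."""
    return [key for key, value in learning.context.items()
            if current_context.get(key) != value]

old = Learning(
    idea="Illustrated hero image",
    outcome="loss",
    tested_on="2021-03-01",
    context={"audience": "SMB", "traffic": "paid search", "brand": "v1"},
)
changes = changed_factors(old, {"audience": "enterprise",
                                "traffic": "paid search", "brand": "v2"})
print("Changed since last run:", changes)  # -> ['audience', 'brand']
```

If nothing in the context has changed, a past failure is a reason to skip the idea; if several factors have shifted, the old result may no longer apply.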
Avoiding Trend-Chasing in Experimentation
Glusman cautions against blindly following design or UX trends without proper testing:
"So notion blew up right it became like really big and everyone loved it and so what is it about notion that everyone looks at when they look at their experience? Oh, they've got all these cute little cartoons, these little animations. So, everyone looked at Notion and they said, 'You know what we're going to do? We're going to start adding these little animations to our website because they're super cute, these little illustrations.' So, I saw suddenly in Dual Works, I see like all of these people running these tests with animations or these illustrations. And guess what? It's actually like worse. It's amongst the worst of the kinds of images you could have, right?"
The key takeaway is to focus on testing what works for your specific audience and use case, rather than assuming success can be replicated by copying surface-level elements from other successful products.
Recommended Resources for Deeper Learning
For those looking to dive deeper into product development and experimentation, Glusman recommends two resources:
- "Getting Real" by 37signals: A free PDF that offers practical advice on building products, focusing on creating value and working from the inside out.
- "The Four Steps to the Epiphany" by Steve Blank: A more challenging read that delves into customer development and structured approaches to product development and experimentation.
Conclusion: Embracing the Reality of A/B Testing
A/B testing and experimentation are powerful tools for driving product growth and optimization. However, it's crucial to approach these practices with realistic expectations and a strategic mindset. Key takeaways include:
- Accept that most experiments will not yield positive results
- Plan for longer timelines to achieve statistical significance
- Take bigger swings with experiments, especially in low-traffic scenarios
- Use building blocks and unified variable sets to increase impact
- Be cautious with major redesigns and consider incremental testing where possible
- Continuously reassess and update your learnings from past experiments
- Avoid blindly following trends without proper testing
By embracing these principles and maintaining a learning-oriented approach, product teams can maximize the value of their experimentation efforts and drive meaningful improvements over time.
Article created from: https://www.youtube.com/watch?v=54cv61m0BAE