
If you've ever debated a design change and wished you had proof, A/B testing is how I de-risk the decision. It isn’t about colour swaps; it’s about reducing friction in real user journeys while keeping accessibility, performance, and SEO intact.
What Is A/B Testing in UX?
Show two versions of an experience to different users and measure which performs better. In UX, “better” means higher completion, faster onboarding, stronger retention, and fewer accessibility barriers — not just more clicks.
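To make that concrete, here's a minimal sketch of the assignment step: hash a stable user ID so the same person always lands in the same variant. The hash, variant names, and experiment key are illustrative, not any particular tool's API.

```typescript
type Variant = "control" | "treatment";

// FNV-1a style string hash mapped onto [0, 1).
function hashToUnitInterval(input: string): number {
  let hash = 2166136261;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 16777619);
  }
  return (hash >>> 0) / 4294967296; // 2^32
}

// Same user + same experiment always gets the same bucket,
// so the experience stays consistent across sessions.
function assignVariant(userId: string, experimentKey: string): Variant {
  return hashToUnitInterval(`${experimentKey}:${userId}`) < 0.5 ? "control" : "treatment";
}

// Usage: gate the UI and log the exposure with the same assignment.
const variant = assignVariant("user-123", "multi-step-form");
console.log(variant); // "control" or "treatment"
```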
How I Run Tests I Trust (My Field Checklist)
- Falsifiable hypothesis. “Splitting the form should cut drop-offs by ~15%.” If I can’t falsify it, I’m not ready to test.
- High-impact journeys. Onboarding, checkout, navigation — not footer links.
- Inclusive variations. Labels, logical focus, screen-reader announcements, colour contrast.
- Outcome metrics. Conversions, retention, task success — not vanity clicks.
- SEO + performance. Don’t hide crawlable content; prefer lazy-loading and caching.
- Statistical power. Plan sample size up front and run long enough to reach ~95% confidence, so false positives don't slip through (see the significance sketch after this list).
- Working notes. I keep a log so decisions get faster and better over time.
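To illustrate the significance check behind the statistical-power point, here's a rough two-proportion z-test sketch. The conversion counts are placeholders; for real tests I lean on the experimentation tool's built-in analysis or a proper stats library.

```typescript
// Minimal sketch: two-proportion z-test for "did the variant beat control?".
function twoProportionZ(convA: number, nA: number, convB: number, nB: number): number {
  const pA = convA / nA;
  const pB = convB / nB;
  const pooled = (convA + convB) / (nA + nB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / nA + 1 / nB));
  return (pB - pA) / se;
}

const z = twoProportionZ(480, 5000, 540, 5000); // control vs. variant (placeholder counts)
// |z| > 1.96 roughly corresponds to 95% confidence (two-sided).
console.log(z.toFixed(2), Math.abs(z) > 1.96 ? "significant" : "not yet");
```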
Real Scenarios (From My Work)
- Form completion: Multi-step beat single-page after I optimised validation and field-feedback performance.
- Mobile onboarding: Fewer steps improved activation — but only once “Skip” worked with screen readers and preserved focus order.
- SEO vs. speed: Hiding blocks sped up load time but hurt rankings; switching to lazy-loading kept speed and SEO.
Tooling That Fits Different Contexts
- Heap / PostHog: Funnels, drop-offs, event insights.
- VWO / Optimizely: Structured web experiments with segmentation.
- Firebase Remote Config: App flags & experiments without store releases (see the sketch below).
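For the Remote Config route, reading a variant flag looks roughly like this with the web SDK (the same idea applies in the native mobile SDKs). The project config, flag name, and default value are placeholders:

```typescript
import { initializeApp } from "firebase/app";
import { fetchAndActivate, getRemoteConfig, getValue } from "firebase/remote-config";

// Placeholder project config and flag name; real values come from the Firebase console.
const app = initializeApp({ apiKey: "…", projectId: "my-project", appId: "…" });
const remoteConfig = getRemoteConfig(app);

// Fall back to control if the fetch fails or the flag isn't defined yet.
remoteConfig.defaultConfig = { onboarding_variant: "control" };

async function getOnboardingVariant(): Promise<string> {
  await fetchAndActivate(remoteConfig);
  return getValue(remoteConfig, "onboarding_variant").asString();
}
```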
The tool isn’t the strategy. Discipline in hypotheses, accessibility, and outcomes is.
A/B Testing — Quick FAQ
What confidence level do I aim for?
About 95% for most UX tests, so I don’t “ship” false positives.
How do I keep SEO intact during tests?
Avoid hiding critical, crawlable content. Prefer lazy-loading and measure Core Web Vitals.
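In practice that means keeping the text itself in the server-rendered HTML and deferring only heavy assets. Native loading="lazy" covers simple cases; a minimal manual sketch, assuming images marked with a data-src attribute:

```typescript
// Keep crawlable content in the HTML; defer only below-the-fold images.
const lazyImages = document.querySelectorAll<HTMLImageElement>("img[data-src]");

const observer = new IntersectionObserver(
  (entries, obs) => {
    for (const entry of entries) {
      if (!entry.isIntersecting) continue;
      const img = entry.target as HTMLImageElement;
      img.src = img.dataset.src ?? ""; // swap in the real source near the viewport
      obs.unobserve(img);
    }
  },
  { rootMargin: "200px" } // start loading just before the image scrolls into view
);

lazyImages.forEach((img) => observer.observe(img));
```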
How is accessibility part of the test?
I audit labels, focus order, screen-reader announcements, contrast, and touch targets in each variant.
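Automated checks don't replace manual screen-reader testing, but a quick pass with something like axe-core catches the basics on each variant. A minimal sketch; the selector is a placeholder:

```typescript
import axe from "axe-core";

// Placeholder selector; manual screen-reader and focus-order checks still apply.
async function auditVariant(selector: string): Promise<void> {
  const results = await axe.run(selector, {
    runOnly: { type: "tag", values: ["wcag2a", "wcag2aa"] },
  });
  for (const violation of results.violations) {
    console.warn(violation.id, violation.help, `${violation.nodes.length} node(s)`);
  }
}

auditVariant("#onboarding-variant-b");
```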
Takeaway
A/B tests are a repeatable way to de-risk UX calls. Pick one high-impact journey this month, write a tight hypothesis, include accessibility checks, and measure a metric that matters. Ship the winner with confidence.