Free Statistical Significance Calculator

Find out if your A/B test results are statistically significant. No signup required.

[Interactive calculator: enter visitors and conversions for Control (A) and Variation (B), then calculate. Results show each version's conversion rate, the relative improvement, the Z-score, the P-value, and whether the difference is statistically significant at your chosen confidence level. If it is not, consider running the test longer or with more traffic.]

How to Use This Calculator

  1. Enter Control (A) data — the number of visitors and conversions for your original version
  2. Enter Variation (B) data — the number of visitors and conversions for your test version
  3. Select your confidence level — 95% is standard for most A/B tests
  4. Click Calculate to see if your results are statistically significant

What is Statistical Significance?

Statistical significance tells you whether the difference between your control and variation is real or just due to random chance. When a result is statistically significant, you can be confident that the variation actually performs differently from the control.

In A/B testing, you're comparing two versions of something (a webpage, email, button, etc.) to see which performs better. Without statistical significance, you might make decisions based on random fluctuations in your data.

A 95% confidence level means that, if there were truly no difference between the versions, a result this extreme would appear less than 5% of the time. This is the industry standard for most business decisions.

When to Use This Calculator

  • Testing different landing page designs
  • Comparing email subject lines
  • Evaluating call-to-action button variations
  • Measuring the impact of pricing changes
  • Analyzing survey response differences between groups
  • Any scenario where you're comparing conversion rates

The Formula

Z = (p₂ - p₁) / √[p(1-p)(1/n₁ + 1/n₂)]

Where:

  • p₁ = control conversion rate
  • p₂ = variation conversion rate
  • p = pooled conversion rate (combined)
  • n₁, n₂ = sample sizes for each group
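The whole calculation fits in a few lines of code. A minimal Python sketch of the pooled two-proportion z-test above (the function name and example numbers are illustrative, not part of the calculator):

```python
from math import sqrt, erf

def z_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Pooled two-proportion z-test, matching the formula above."""
    p1 = conversions_a / visitors_a          # control conversion rate
    p2 = conversions_b / visitors_b          # variation conversion rate
    # pooled conversion rate across both groups
    p = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(p * (1 - p) * (1 / visitors_a + 1 / visitors_b))
    z = (p2 - p1) / se
    # two-tailed p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Example: 1,000 visitors per group, 200 vs 250 conversions
z, p = z_test(200, 1000, 250, 1000)  # z ≈ 2.68, p ≈ 0.007
```

With p below 0.05, this example result would be statistically significant at the 95% confidence level.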

Need to collect A/B test feedback?

Create surveys to understand why users prefer one version over another. Try it yourself 👇

  • 40% higher completion rates than regular forms
  • Unlimited forms & responses — free forever
  • No credit card required

Frequently Asked Questions

How many visitors do I need for statistical significance?

It depends on your baseline conversion rate and the size of the difference you want to detect. Generally, you need hundreds to thousands of visitors per variation. Smaller differences require larger sample sizes to detect reliably.
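The standard planning formula makes this concrete. A rough Python sketch for 95% confidence and 80% power (the function name and defaults are assumptions for illustration):

```python
from math import ceil

def sample_size_per_variation(baseline, lift, z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per group to detect an absolute
    `lift` over `baseline` at 95% confidence with 80% power."""
    p2 = baseline + lift
    # sum of the two groups' binomial variances
    variance = baseline * (1 - baseline) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / lift ** 2)

# Detecting a 5% -> 6% conversion lift takes roughly 8,000+ visitors per group
n = sample_size_per_variation(0.05, 0.01)
```

Note how halving the detectable lift roughly quadruples the required sample size, which is why small improvements need so much traffic.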

What does the p-value mean?

The p-value is the probability that you'd see a difference this large (or larger) if there was actually no real difference between the groups. A p-value below 0.05 (for 95% confidence) means the result is statistically significant.
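The 0.05 cutoff corresponds to a z-score of about 1.96. A quick sanity check in Python:

```python
from math import erf, sqrt

def two_tailed_p(z):
    """Probability of a z-score at least this extreme, in either direction."""
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

print(round(two_tailed_p(1.96), 3))  # 0.05 -- the 95% confidence threshold
```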

Why isn't my test reaching significance?

Common reasons include: not enough traffic yet, the real difference is too small to detect, or there genuinely is no difference. Try running the test longer or focusing on larger changes that might produce bigger effects.

Should I stop my test as soon as it reaches significance?

No — this is called "peeking" and can lead to false positives. Decide on your sample size before the test starts and run it to completion, or use sequential testing methods designed for continuous monitoring.

What's the difference between one-tailed and two-tailed tests?

This calculator uses a two-tailed test, which checks if there's a difference in either direction (better or worse). One-tailed tests only check one direction. Two-tailed is more conservative and generally recommended.
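The distinction matters most for borderline results. A short Python illustration (the z-score here is made up for the example):

```python
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

z = 1.8  # a borderline result
one_tailed = 1 - normal_cdf(z)             # asks only "is B better?"
two_tailed = 2 * (1 - normal_cdf(abs(z)))  # asks "is B different, either way?"
# one_tailed ≈ 0.036 (significant at 95%), two_tailed ≈ 0.072 (not significant)
```

The two-tailed p-value is exactly double the one-tailed value, so a result can pass a one-tailed test while failing the more conservative two-tailed one.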
