
CUPED Explained: How to Cut A/B Test Sample Size by 30–50% Using Pre-Experiment Data

May 11, 2026

There is one trick in the controlled-experiments literature that pays for itself within a single test cycle, ships in a few hundred lines of code, and gets you results in half the time. It's called CUPED, short for Controlled-experiment Using Pre-Experiment Data, and Microsoft published it in 2013 (Deng, Xu, Kohavi & Walker, WSDM 2013). Every serious experimentation platform — Microsoft, Booking, Netflix, Meta, Airbnb — runs some version of it under the hood.

It is not glamorous. It is not Bayesian. It does not require new infrastructure. It is the boring industrial workhorse of variance reduction, and if you run A/B tests on continuous metrics — revenue, sessions, time on site, clicks per user — you are leaving 30–50% of your sample size on the floor by not using it.

This post walks through why it works, when it works, when it doesn't, and the handful of ways people manage to break it.

The setup: why sample size is really a variance problem

The two-sample sample-size formula, for a continuous metric, simplifies to:

$$n \approx \frac{2(z_{\alpha/2} + z_\beta)^2\,\sigma^2}{\delta^2}$$

Two things on the right-hand side control how many users your test needs:

  • $\delta$, the effect you want to detect (your MDE)
  • $\sigma^2$, the variance of the metric

Most teams treat $\sigma^2$ as a fact of nature. It isn't. A substantial fraction of that variance is predictable from things you already knew about the user before the experiment started — and any variance you can predict, you can subtract out.

That is the entire idea behind CUPED.
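
If you want to sanity-check that formula, a minimal calculation looks like this (a sketch in Python, assuming scipy; the numbers plugged in at the end are the ones the worked example later in the post uses):

```python
# Sketch of the sample-size formula above (illustrative, not a production calculator).
# Two-sided test at level alpha, normal approximation.
from scipy.stats import norm

def required_n_per_variant(sigma: float, delta: float,
                           alpha: float = 0.05, power: float = 0.80) -> float:
    """Approximate users needed per variant for a two-sample test on a continuous metric."""
    z_alpha = norm.ppf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = norm.ppf(power)            # ~0.84 for 80% power
    return 2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2

# sigma = 50, delta = $1 (5% of a $20 baseline): roughly 39,000 users per variant,
# matching the worked example later in the post up to z-value rounding.
print(required_n_per_variant(sigma=50, delta=1.0))
```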

The intuition in one paragraph

Imagine you're testing a new checkout flow. Some users in your test were always going to spend $200 this week and some were always going to spend $5, regardless of which variant they saw — their pre-period spending told you that before the test even started. If you compare raw weekly revenue across variants, that pre-existing heterogeneity shows up as noise that obscures the real treatment effect. CUPED adjusts each user's outcome by their pre-period behavior, so what's left is mostly the variant effect plus genuinely new noise. Less noise, smaller required n.

That's it. Everything below is just making that paragraph precise.

The math, briefly

For each user $i$, pick a pre-experiment covariate $X_i$ — typically their value of the same metric over the 30 days before the test started. Compute the adjusted outcome:

$$Y^{cuped}_i = Y_i - \theta\,(X_i - \bar{X})$$

where $\theta$ is chosen to maximize variance reduction:

$$\theta = \frac{\text{Cov}(Y, X)}{\text{Var}(X)}$$

Then run your usual two-sample t-test (or whatever test you'd use) on $Y^{cuped}$ instead of $Y$.
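
Here is a minimal sketch of that recipe (assuming numpy and scipy, with per-user arrays y for the in-experiment metric, x for the pre-period covariate, and a boolean treated flag; a sketch, not a production implementation):

```python
# Minimal CUPED sketch. Assumed inputs: per-user arrays y, x and a boolean
# `treated` assignment vector.
import numpy as np
from scipy import stats

def cuped_adjust(y: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Return Y - theta * (X - mean(X)) with theta = Cov(Y, X) / Var(X)."""
    theta = np.cov(y, x)[0, 1] / np.var(x, ddof=1)
    return y - theta * (x - x.mean())

def cuped_ttest(y, x, treated):
    """Welch t-test on the CUPED-adjusted outcome.

    theta is estimated once on the pooled data and the same adjustment is
    applied to both arms -- never to just one of them.
    """
    y, x = np.asarray(y, float), np.asarray(x, float)
    treated = np.asarray(treated, bool)
    y_adj = cuped_adjust(y, x)
    return stats.ttest_ind(y_adj[treated], y_adj[~treated], equal_var=False)
```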

Two properties make this useful:

  1. Unbiased. Because $X$ is measured before the test, it's independent of the treatment assignment. Subtracting a function of $X$ doesn't shift the difference between variants in expectation. You're not gaming the result — you're cleaning it.
  2. Lower variance. The variance of the adjusted estimator is
$$\text{Var}(Y^{cuped}) = \text{Var}(Y)\,(1 - \rho^2)$$

where $\rho$ is the correlation between $Y$ and the covariate $X$. Variance is reduced by a factor of $1 - \rho^2$, and so is required sample size.

The whole game is finding a covariate with high $\rho$.
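
The $1 - \rho^2$ factor is easy to verify with a quick simulation (synthetic data, assuming numpy; nothing here depends on a real experiment):

```python
# Quick Monte Carlo check of the 1 - rho^2 variance-reduction factor.
import numpy as np

rng = np.random.default_rng(0)
rho, n = 0.7, 200_000

x = rng.normal(size=n)                                  # pre-period covariate
y = rho * x + np.sqrt(1 - rho**2) * rng.normal(size=n)  # in-experiment metric, corr ~ rho

theta = np.cov(y, x)[0, 1] / np.var(x, ddof=1)
y_adj = y - theta * (x - x.mean())

print(np.var(y_adj) / np.var(y))   # ~0.51, i.e. 1 - 0.7^2
```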

How much sample size do you actually save?

Because required n scales linearly with variance, the savings table falls straight out of $1 - \rho^2$:

| Correlation $\rho$ between metric and covariate | Variance reduction | Required-n reduction |
| --- | --- | --- |
| 0.2 | 4% | 4% |
| 0.3 | 9% | 9% |
| 0.4 | 16% | 16% |
| 0.5 | 25% | 25% |
| 0.6 | 36% | 36% |
| 0.7 | 49% | 49% |
| 0.8 | 64% | 64% |

For most consumer metrics with a sensible 30-day pre-period covariate, $\rho$ lands somewhere between 0.4 and 0.7 — and the upper half of that range is where the "30–50% sample-size cut" you'll see cited everywhere comes from. Revenue per user and sessions per user tend to sit in the upper half of that range; engagement metrics on logged-in surfaces are even higher.

That same reduction shows up as a calendar shrink. A test that needed six weeks to hit the planned sample size now needs three. Either you ship faster, or you spend the same time and resolve a smaller MDE — your pick.

When CUPED helps a lot

CUPED is a leverage tool, not a free lunch. The leverage comes from two ingredients, and you need both:

  • A metric with strong autocorrelation. Continuous, user-level metrics where this week's value strongly predicts next week's: revenue per user, sessions per user, minutes watched, items viewed, ad impressions, GMV. These have $\rho$ in the 0.5–0.8 range.
  • A user population with reliable pre-period data. Logged-in users with a stable history. The longer the pre-period, the better the covariate (with diminishing returns past 30 days for most metrics).

When both conditions hold, CUPED is the highest-ROI change you can make to your experimentation stack. Most published case studies report 30–50% variance reduction on flagship metrics, and at the top of that range that translates directly into half-length tests.

When CUPED doesn't help much

The places it falls down are predictable:

  • Binary conversion metrics with low base rates. If your metric is "did the user convert in the test window" and the base rate is 2%, there is very little user-level variance to predict, and pre-period conversion is a weak signal anyway. You'll see single-digit percent savings at best.
  • First-time users with no pre-period. If most of your test population didn't exist a month ago, you don't have a covariate. Acquisition and onboarding tests fall into this bucket.
  • Metrics with no autocorrelation. Some metrics genuinely look like fresh draws each session — checkout funnel completion rate per visit, for example. If $\rho$ is near zero, CUPED does nothing.
  • Very short tests on rapidly-changing populations. If the pre-period covariate is stale by the time the test runs, $\rho$ collapses.

The honest workflow is to compute $\rho$ once on historical data per metric, and only enable CUPED where the savings clear, say, 15%. Below that, the operational complexity isn't worth it.
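
One way to run that audit, sketched in Python with pandas (the column naming convention is hypothetical, not from any particular warehouse):

```python
# Sketch of a per-metric CUPED audit on historical data. The column naming
# convention `<metric>_pre` / `<metric>_post` is an assumption for illustration.
import pandas as pd

MIN_SAVINGS = 0.15  # only enable CUPED where required n drops by at least 15%

def cuped_worth_it(history: pd.DataFrame, metric: str) -> tuple[float, bool]:
    """history holds one row per user with the metric's pre-period and in-window values."""
    rho = history[f"{metric}_pre"].corr(history[f"{metric}_post"])
    savings = rho ** 2           # variance, and hence required n, shrinks by rho^2
    return rho, savings >= MIN_SAVINGS
```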

The four ways teams break CUPED

Most of the failures are subtle, and most of them look like wins right up until someone runs an A/A test and notices the false-positive rate is wrong.

1. Using post-experiment data as the covariate

The covariate must be measured strictly before the user could have been exposed to the variant. If $X$ includes any post-assignment behavior, then $X$ is itself affected by treatment, and subtracting it shifts the difference between variants. You will get more "significant" results — and the extra ones will be false positives.

Symptom: A/A tests start failing at well above 5%. If you're not running A/A tests as a routine check, start now.
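
A minimal A/A harness is short enough to keep around (a sketch assuming numpy and scipy, with y and x taken from historical per-user data; assignment is shuffled with no real treatment, so the rejection rate should land near alpha):

```python
# Sketch of an A/A harness: shuffle assignment with no real treatment and check
# the rejection rate of the full CUPED pipeline.
import numpy as np
from scipy import stats

def aa_rejection_rate(y, x, n_sims: int = 1_000, alpha: float = 0.05, seed: int = 0) -> float:
    rng = np.random.default_rng(seed)
    y, x = np.asarray(y, float), np.asarray(x, float)
    theta = np.cov(y, x)[0, 1] / np.var(x, ddof=1)   # pooled theta, independent of the split
    y_adj = y - theta * (x - x.mean())
    rejections = 0
    for _ in range(n_sims):
        treated = rng.random(len(y)) < 0.5           # fake 50/50 assignment, no effect
        p = stats.ttest_ind(y_adj[treated], y_adj[~treated], equal_var=False).pvalue
        rejections += p < alpha
    return rejections / n_sims                       # expect roughly 0.05
```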

2. Tuning θ on the test data and then evaluating significance on the same data

The standard recipe estimates $\theta$ from the same pooled (treatment + control) data the test runs on. That's fine: under the null, $\theta$ is independent of the treatment indicator in expectation, and the bias is negligible at typical sample sizes. What's not fine is choosing $\theta$ in a way that depends on the observed treatment effect — picking the covariate that "makes the test pop," for instance. That's a researcher-degrees-of-freedom problem dressed up in math.

Fix: lock the covariate and the estimator before the test starts, and treat the choice as part of the pre-registration.

3. Forgetting that variance reduction doesn't fix bias

CUPED reduces variance. It does not fix peeking, it does not fix sample-ratio mismatch, it does not fix selection bias from non-randomized assignment. If your test was broken without CUPED, it is still broken with CUPED — just with tighter confidence intervals around the wrong answer.

4. Comparing CUPED-adjusted variant means to raw control means

Apply CUPED to both arms or to neither. The adjustment is a within-user transformation, and the test is on the adjusted outcomes across arms. Reporting "control = $42, variant = $44.50 (CUPED)" mixes units in a way that will eventually embarrass someone.

CUPED vs stratification vs regression adjustment

CUPED is one of three related tools you'll see in the literature:

| Method | What it does | When to reach for it |
| --- | --- | --- |
| Stratification | Block randomize on a pre-experiment variable (country, device, user tier) | Categorical covariates; small number of strata |
| CUPED | Subtract a linear function of one continuous pre-period covariate | Continuous covariate with $\rho > 0.3$ |
| Regression adjustment | Regress outcome on multiple pre-period covariates | You have several useful covariates and the engineering budget for it |

In practice CUPED with the obvious covariate (same metric, 30 days prior) gets you most of the way there. Regression adjustment with a handful of pre-period features can squeeze out another 5–15%, but the ROI on the second covariate is dramatically lower than on the first.
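
For comparison, the regression-adjustment version is a one-model affair (a sketch assuming statsmodels and pandas; the column names and the synthetic data are purely illustrative):

```python
# Sketch of regression adjustment. `treated` is a 0/1 indicator; the pre-period
# columns play the role CUPED's single covariate plays above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 10_000
pre = rng.gamma(2.0, 10.0, size=n)                       # pre-period revenue (synthetic)
df = pd.DataFrame({
    "treated": rng.integers(0, 2, size=n),
    "revenue_pre_30d": pre,
    "sessions_pre_30d": rng.poisson(5, size=n),
    "tenure_days": rng.integers(1, 1000, size=n),
})
df["revenue_14d"] = 0.6 * pre + 0.5 * df["treated"] + rng.normal(0, 10, size=n)

model = smf.ols(
    "revenue_14d ~ treated + revenue_pre_30d + sessions_pre_30d + tenure_days",
    data=df,
).fit(cov_type="HC1")                                    # robust standard errors

print(model.params["treated"], model.bse["treated"])     # adjusted effect and its SE
```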

A worked example

Setup: you're testing a new recommendation module. Your primary metric is revenue per user over a 14-day window, baseline $20, σ = $50 (revenue is famously skewed). At a 5% relative MDE, α=0.05, power=0.8, the required sample size is roughly

$$n \approx \frac{2\,(1.96 + 0.84)^2\,(50)^2}{(0.05 \times 20)^2} \approx 39{,}200$$

per variant.

You compute, on historical data, that revenue in the 14 days after some reference date correlates with revenue in the 30 days before at $\rho = 0.7$. Plug that into CUPED:

  • Variance shrinks by a factor of $1 - 0.7^2 = 0.51$, so effective σ becomes $50\sqrt{0.51} \approx 35.7$.
  • Required n becomes $\approx 20{,}000$ per variant — a 49% reduction.

A test that needed five weeks at your weekly traffic now needs about two and a half. If you instead held the duration fixed, you could resolve a roughly 3.5% MDE in the same five weeks.
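
The same arithmetic, reproduced in a few lines (illustrative only; the weekly traffic figure is an assumption chosen to match the five-week baseline):

```python
# The worked example's arithmetic in code.
import math

baseline, sigma, rel_mde, rho = 20.0, 50.0, 0.05, 0.7
z_sq = (1.96 + 0.84) ** 2

n_raw = 2 * z_sq * sigma**2 / (rel_mde * baseline) ** 2            # ~39,200 per variant
sigma_cuped = sigma * math.sqrt(1 - rho**2)                        # ~35.7
n_cuped = 2 * z_sq * sigma_cuped**2 / (rel_mde * baseline) ** 2    # ~20,000 per variant

weekly_per_variant = 8_000   # assumed traffic entering each variant per week
print(n_raw / weekly_per_variant, n_cuped / weekly_per_variant)    # ~4.9 vs ~2.5 weeks
```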

What about Bayesian or sequential tests?

CUPED is orthogonal to the inference method. You can CUPED-adjust your outcomes and then run a frequentist t-test, a Bayesian posterior update, or a sequential test. Variance reduction is variance reduction — it makes whatever test you were going to run sharper. The 30–50% savings claim translates directly to "tighter posterior" or "earlier stop" depending on your framework.

CUPED on the AB SHARK roadmap

The variance-reduction protocol in backend/app/core/variance/ already supports a no-op baseline and stratification; CUPED is the next adapter on the list. The contract is a single transform applied to the outcome column before the analyzer sees it, so once it lands, every metric in the analyzer inherits the savings without changes to the rest of the pipeline.
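
For readers who like to picture the shape of such an adapter, here is a hypothetical sketch; the names and signatures are illustrative, not the actual contract in backend/app/core/variance/:

```python
# Hypothetical shape of a variance-reduction adapter (illustrative only).
from typing import Protocol
import numpy as np

class VarianceReducer(Protocol):
    def transform(self, outcome: np.ndarray, covariate: np.ndarray) -> np.ndarray:
        """Return an adjusted outcome column; the analyzer downstream is unchanged."""
        ...

class CupedReducer:
    def transform(self, outcome: np.ndarray, covariate: np.ndarray) -> np.ndarray:
        theta = np.cov(outcome, covariate)[0, 1] / np.var(covariate, ddof=1)
        return outcome - theta * (covariate - covariate.mean())
```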

If you'd like to be notified when it ships — or if you want to argue about which covariate the default should be — the /plan page is the right place to start a test today, and the variance-reduction toggle will appear there first.

The one-sentence summary

If your A/B test metric is continuous, user-level, and at all autocorrelated with past behavior, CUPED is the cheapest way to halve your test duration without changing anything about how the test is designed, randomized, or analyzed. The cost is one extra column in your data warehouse. The benefit, on most consumer metrics, is six weeks turning into three.

Related reading: the sample size calculator walks through the variance term that CUPED attacks. The MDE explainer covers what to do with the sample-size budget once you have it. And the peeking post is required reading before you spend your newfound variance savings on "checking in early."