Content Experiments

Run content A/B tests with Ours Privacy. Change headlines, images, buttons, or sections on the same URL and measure results with events already flowing through your CDP.

A content experiment shows different versions of a page to different visitors — and measures which version converts better.

Each variant is defined as a list of DOM modifications — typed actions that update the rendered page (set text, swap an image, toggle styles, inject custom CSS or JS, and so on). The runtime applies them before the page becomes visible, so the visitor only sees their assigned variant.

You stay on the same URL. Half your visitors see your original headline. The other half see a new one. The version that drives more sign-ups, demo requests, or purchases wins.

In reporting, both groups count as experiment participants. Visitors assigned to the original experience form the control group; visitors assigned to a modified experience form a treatment group. Both control and treatment receive experiment impression events; visitors outside the traffic allocation do not.

This is the most common type of experiment. Use it when you want to test a change to a page without building a whole new page.


What You Can Test

Anything visible on the page is fair game:

  • Headlines and body copy — does "Start your free trial" outperform "Get started today"?
  • Button text and color — does a green button convert better than a gray one?
  • Hero images — which photo drives more engagement?
  • Entire sections — test a short-form hero against a long-form one
  • Calls to action — price anchoring, urgency copy, social proof placement
  • Layout changes — reorder sections, show or hide elements, restructure a form

Available DOM Modification Actions

Every variant is a list of these actions. The Chrome extension generates them for you, but if you're authoring variants via the REST API, this is the canonical action set:

Action        What it does                                       Value shape
------        ------------                                       -----------
setText       Replace the element's text content                 Plain string
setHtml       Replace the element's inner HTML                   HTML string
setStyle      Set one or more inline CSS properties              JSON-stringified { property: value } map
setAttribute  Set one or more attributes (e.g. class, href)     JSON-stringified { name: value } map
setImage      Update an <img> src (or background image)          URL string
remove        Remove the element from the DOM                    (none)
insertBefore  Insert HTML immediately before the element         HTML string
insertAfter   Insert HTML immediately after the element          HTML string
customCss     Inject a <style> block scoped to this variant      CSS string
customJs      Run a JavaScript snippet for this variant          JS source

Each modification also carries a CSS selector. Modifications run in order; the runtime waits for matching elements to exist before applying — useful for SPAs and lazy-rendered sections.
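Conceptually, that "wait for the element, then apply" behavior can be sketched as a small polling helper. This is an illustration only (the real runtime may use a MutationObserver rather than polling), and the names `waitFor`, `lookup`, and `applyFn` are hypothetical:

```javascript
// Illustrative sketch: retry a lookup until it returns an element,
// then apply the modification. Not the actual Ours Privacy runtime.
function waitFor(lookup, applyFn, { interval = 50, timeout = 5000 } = {}) {
  return new Promise((resolve, reject) => {
    const started = Date.now();
    const tick = () => {
      const el = lookup(); // e.g. () => document.querySelector(selector)
      if (el) {
        applyFn(el);       // e.g. (el) => { el.textContent = newText; }
        resolve(el);
        return;
      }
      if (Date.now() - started > timeout) {
        reject(new Error("timed out waiting for element"));
        return;
      }
      setTimeout(tick, interval);
    };
    tick();
  });
}
```

Because the first check runs synchronously, elements that already exist are modified immediately; lazy-rendered elements are picked up on a later tick.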

For setStyle and setAttribute, the API also accepts structured styles[] and attribute inputs and normalizes them into the canonical JSON-string value for you.
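Putting the pieces together, a variant authored against the API might look like the sketch below. The field names (`name`, `modifications`, `action`, `selector`, `value`) are illustrative, not the exact API schema:

```javascript
// Hypothetical variant definition: three ordered DOM modifications.
// Field names are illustrative, not the exact Ours Privacy API schema.
const variant = {
  name: "New headline + green CTA",
  modifications: [
    { action: "setText", selector: "#hero h1", value: "Start your free trial" },
    {
      action: "setStyle",
      selector: ".cta-button",
      // setStyle takes a JSON-stringified { property: value } map
      value: JSON.stringify({ "background-color": "#16a34a" }),
    },
    { action: "remove", selector: ".promo-banner" },
  ],
};
```

Modifications run in the order listed, so the headline changes before the banner is removed.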


How to Set Up an Experiment

Everything is configured in the Ours Privacy dashboard or, for code-driven workflows, the Platform API. No code required for most tests.

  1. Create an experiment and name it
  2. Set the URL where the experiment should run (e.g. your homepage or pricing page)
  3. Define your variants using the visual editor. Click the element you want to change, type the new text, swap the image, or adjust the style.
  4. Set your conversion goal — pick any event already flowing through your CDP. A page view, a button click, a form submit, a purchase. If you're tracking it, you can measure against it.
  5. Choose your audience — run the experiment for all visitors, or limit it to a specific audience segment you've already built
  6. Set traffic allocation — start at 50% if you want to ease into it, or 100% to run it on everyone
  7. Start the experiment

The dashboard shows Bayesian probability-to-be-best updating in real time as results come in.


How It Uses Your Existing CDP Data

The experiment system is built into the same platform as the rest of your CDP, which means:

Events you're already tracking become conversion goals. If you track trial_started, demo_requested, or checkout_completed, those events are already available as metrics. You don't need to add new tracking code for most experiments.

Audiences you've already built become targeting rules. Want to run a test only for visitors in your "Healthcare Provider" audience, or only for mobile visitors arriving from Google Ads? You can use any audience segment from the Audience Builder directly as an experiment target.

Session replay and heatmaps are split by variant. See click maps and recordings for each variant separately, so you understand not just whether a variant won, but why.


Visitors Always See the Same Variant

Once a visitor is assigned to a variant, they see it on every visit — across sessions, page reloads, and browser restarts. Assignments are stored in a first-party cookie and tied to the visitor's identity in the CDP.
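Sticky assignment is easy to reason about with a deterministic hash: the same visitor ID always lands in the same bucket. The sketch below uses FNV-1a hashing and is illustrative only; Ours Privacy's actual assignment algorithm and cookie format are not documented here:

```javascript
// Illustrative deterministic bucketing: the same (experiment, visitor)
// pair always maps to the same variant. The real runtime's algorithm
// may differ; the result would then be cached in a first-party cookie.
function hashString(s) {
  let h = 2166136261; // FNV-1a 32-bit offset basis
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 16777619); // FNV prime
  }
  return h >>> 0; // force unsigned 32-bit
}

function assignVariant(visitorId, experimentId, variants) {
  const bucket = hashString(`${experimentId}:${visitorId}`) % variants.length;
  return variants[bucket];
}
```

Because assignment is a pure function of the IDs, it survives reloads and restarts without any server round trip.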

This prevents the same visitor from appearing in both variants, which would corrupt your results.


No Flicker

Variants apply before the page renders, so visitors never see the original version flash before it's replaced.


Preview Before You Launch

Before starting an experiment, you can preview any variant by adding ?ours_preview=<variantId> to the URL. The variant applies without tracking an impression, so you can review it without affecting your results.
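Reading the preview parameter is plain URL parsing; a hypothetical helper (the name `getPreviewVariantId` is not from the actual SDK) might look like:

```javascript
// Illustrative: extract ?ours_preview=<variantId> from a page URL.
// Returns the variant ID string, or null if no preview is requested.
function getPreviewVariantId(url) {
  return new URL(url).searchParams.get("ours_preview");
}
```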


Picking a Winner

The dashboard shows probability to be best — a Bayesian measure of how confident we are that a variant has the highest true conversion rate. When you're confident enough in a winner, stop the experiment and apply the winning variant permanently.

There's no hard rule on when to call a winner. It depends on how much risk you're comfortable with and how long you want to wait. The dashboard shows you the tradeoffs in real time.
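"Probability to be best" is commonly computed by drawing from each variant's Beta posterior and counting how often each variant wins the draw. The sketch below shows that standard Monte Carlo approach with a Beta(1, 1) prior; it is illustrative and not necessarily the exact model the dashboard uses:

```javascript
// Monte Carlo estimate of P(variant has the highest true conversion rate)
// under Beta(1 + conversions, 1 + non-conversions) posteriors.
function sampleGamma(shape) {
  // Marsaglia-Tsang method; boost trick handles shape < 1.
  if (shape < 1) return sampleGamma(shape + 1) * Math.pow(Math.random(), 1 / shape);
  const d = shape - 1 / 3;
  const c = 1 / Math.sqrt(9 * d);
  for (;;) {
    let x, v;
    do {
      // Box-Muller standard normal; 1 - random() keeps the log argument > 0
      x = Math.sqrt(-2 * Math.log(1 - Math.random())) * Math.cos(2 * Math.PI * Math.random());
      v = 1 + c * x;
    } while (v <= 0);
    v = v * v * v;
    const u = Math.random();
    if (u < 1 - 0.0331 * x ** 4) return d * v;
    if (Math.log(u) < 0.5 * x * x + d * (1 - v + Math.log(v))) return d * v;
  }
}

function sampleBeta(a, b) {
  const x = sampleGamma(a);
  return x / (x + sampleGamma(b));
}

// arms: [{ name, conversions, visitors }] -> probability each arm is best
function probabilityToBeBest(arms, draws = 10000) {
  const wins = new Array(arms.length).fill(0);
  for (let i = 0; i < draws; i++) {
    let best = 0;
    let bestRate = -1;
    arms.forEach((arm, j) => {
      const rate = sampleBeta(1 + arm.conversions, 1 + arm.visitors - arm.conversions);
      if (rate > bestRate) { bestRate = rate; best = j; }
    });
    wins[best]++;
  }
  return wins.map((w) => w / draws);
}
```

With a clear winner (say 6% vs 1% conversion on 1,000 visitors each), the treatment's probability to be best approaches 1; with a close race it hovers near 0.5, which is the dashboard telling you to keep waiting or accept the risk.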

