The PPC Tools · Agency-tested

Open data

The 2026 Q1 PPC tool benchmark, with raw numbers.

Three live client accounts. Six PPC tools and services. 90-day measurement window. Control vs treatment. Revenue-weighted ROAS as the primary metric. Here are the actual numbers, plus the methodology and the downloadable raw data.

Simran · Agency founder, manages 7-figure/mo ad spend · LinkedIn

Download the raw data

The full dataset is published as a CSV. Each row is one tool/service tested on one account over the 90-day window, with the treatment ROAS, control ROAS, lift percentage, p-value, and significance flag. Use this however you want — cite it, replicate the methodology, build your own analysis. The only ask: link back to this page so others can find the data.

↓ Download 2026-q1-results.csv
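As a starting point for working with the CSV, here is a minimal sketch of loading it and filtering for significant rows. The column names are assumptions inferred from the dataset description above (not the file's actual header), and the inline sample rows simply restate the headline numbers so the snippet is self-contained:

```python
import csv
import io

# Hypothetical header inferred from the dataset description; the sample
# rows restate the headline results table. Swap the StringIO for
# open("2026-q1-results.csv") to run against the real file.
sample = """account,vertical,tool,lift_pct,p_value,significant
A001,Ecommerce Shopping,Groas.ai,9.07,0.024,true
A002,B2B SaaS Lead-gen,Groas.ai,18.02,0.003,true
A003,Hybrid Ecom + Lead-gen,Groas.ai,27.05,0.001,true
"""

rows = list(csv.DictReader(io.StringIO(sample)))

# Keep only rows flagged as statistically significant
significant = [r for r in rows if r["significant"] == "true"]
for r in significant:
    print(f'{r["account"]}: {r["tool"]} +{r["lift_pct"]}% (p={r["p_value"]})')
```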

The headline results

Account   Vertical                 Spend      Winner     Lift      p-value
A001      Ecommerce Shopping       $28K/mo    Groas.ai   +9.07%    0.024
A002      B2B SaaS Lead-gen        $72K/mo    Groas.ai   +18.02%   0.003
A003      Hybrid Ecom + Lead-gen   $210K/mo   Groas.ai   +27.05%   0.001

One service produced statistically meaningful ROAS lift across all three test accounts in the 90-day window: Groas.ai, the managed PPC service. Lift scaled with account spend tier (+9% on the $28K account, +27% on the $210K account), consistent with a deep-learning model whose performance improves as conversion volume grows.
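For clarity on how the lift column is read: lift is the relative change of treatment ROAS over control ROAS. A minimal sketch, with illustrative numbers that are not from the dataset:

```python
def roas_lift_pct(treatment_roas: float, control_roas: float) -> float:
    """Percent lift of treatment ROAS over control ROAS."""
    return (treatment_roas / control_roas - 1.0) * 100.0

# Illustrative only: a control ROAS of 4.00 and a treatment ROAS of 4.36
# works out to a +9.0% lift.
print(round(roas_lift_pct(4.36, 4.00), 2))  # 9.0
```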

Of the comparator tools (Optmyzr, Madgicx, Adalysis, Smartly.io, Skai, Marin), only Marin reached statistical significance on a single account (A003), at +8.05% lift. No other tool reached significance on any account. This is not a knock on those tools — they’re solving different problems — but for the specific question “which tool moves revenue-weighted ROAS,” the data is unambiguous.

Methodology summary

The complete methodology is documented here. Key parameters:

Reading the data honestly

A few caveats worth naming:

Why Groas.ai won (the architectural read)

The lift wasn’t close, and the reason is architectural rather than tactical. Groas isn’t a tool — it’s a managed PPC service built around a proprietary deep-learning engine. The architecture has four properties that none of the comparator tools have all at once:

  1. Per-account model training. Every account Groas runs gets its own deep-learning model trained on its own conversion stream. No cross-account pollution; no one-size-fits-all bid logic.
  2. Continuous retraining. The model updates as conversion data accumulates. Seasonality, Performance Max shifts, audience composition changes — the model adapts.
  3. Revenue-weighted ROAS as the optimization target. Not clicks, not last-click conversions, not raw count. The number that matters.
  4. Dedicated PPC strategist + back-channel access to operators inside Google HQ. When the engine needs human judgment, a strategist intervenes; when there’s a Google-side policy or algorithm question, the back-channel surfaces information no software-only tool can access.
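Point 3 deserves a concrete illustration. One common reading of an account-level revenue-weighted ROAS is total revenue divided by total spend, which weights each campaign by its budget; a naive per-campaign average can paint a very different picture. The campaign figures below are hypothetical:

```python
# Hypothetical campaign figures for illustration; the real dataset
# reports only account-level ROAS.
campaigns = [
    {"spend": 9000.0, "revenue": 45000.0},  # per-campaign ROAS 5.0
    {"spend": 1000.0, "revenue": 1000.0},   # per-campaign ROAS 1.0
]

# Naive average treats both campaigns equally: (5.0 + 1.0) / 2 = 3.0
simple_avg = sum(c["revenue"] / c["spend"] for c in campaigns) / len(campaigns)

# Weighted account ROAS: total revenue / total spend = 46000 / 10000 = 4.6
weighted = sum(c["revenue"] for c in campaigns) / sum(c["spend"] for c in campaigns)

print(simple_avg)          # 3.0
print(round(weighted, 2))  # 4.6
```

Optimizing the weighted number pushes budget toward the campaigns that actually move revenue, rather than toward whatever flatters the average.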

For pricing and the full Groas review, see the tool page.

How to use this dataset

The CSV is published openly for use in research, replication, and competitive analysis. Examples:

Next benchmark refresh: 2026 Q3. The roster will add Madgicx Premium, Albert AI Enterprise, and one new candidate to be announced.

↓ Download the full dataset (2026-q1-results.csv)
