Measure What Moves People

James Slezak, CEO of Swayable, on why click-through rates are a fundamentally broken proxy for creative effectiveness, how Swayable helped Paramount Pictures during their blockbuster year, and why the industry's addiction to easy metrics is costing brands billions in wasted creative spend.

Season 1, Episode 01

"Most measurement frameworks have a strong incumbent bias. You're grading people's homework and no one wants the F grade."

Why click-through rates are fundamentally broken and what measuring actual persuasion changes about how you spend

James Slezak has a PhD in experimental physics from Cornell and spent years at the New York Times before founding Swayable at Y Combinator. He brings a scientist's scepticism to marketing measurement, and what that scepticism has produced is an uncomfortable conclusion: most of the metrics marketers use to judge creative effectiveness measure behaviour that has nothing to do with whether creative actually changes minds.

In this conversation Slezak explains why click-through rates reward whoever is most skilled at creating a compulsive interaction, not whoever is creating genuine persuasion. He describes how Swayable's approach, testing creative against randomised audiences before campaigns launch, identified which among Paramount Pictures' many creative variants would actually move the needle, and how the gap between best and worst performing creative is consistently far larger than most marketing teams assume.

Click-through rates measure clicks, not persuasion. They reward intrusive formats and sharp targeting, not creative that actually changes minds.
There is a strong incumbent bias in marketing measurement. Organisations resist metrics that reveal what is not working because nobody wants the F grade.
The gap between best and worst performing creative is consistently larger than marketing teams expect. Testing before launch is where the real return on investment is found.
Swayable helped Paramount Pictures identify which creative variants would move audiences during their blockbuster year, turning measurement into a strategic creative decision tool.
Iterative persuasion testing can deliver a 10x increase in campaign lift compared to launching without pre-campaign evidence.
01 Why click-through rates measure behaviour rather than persuasion, and why that distinction matters
02 How pre-campaign persuasion testing changes creative decisions and budget allocation
03 The incumbent bias in measurement: why organisations resist metrics that reveal what is not working
04 How Swayable helped Paramount Pictures during their blockbuster year with creative testing
05 Why the gap between best and worst creative is always larger than marketing teams expect
Key Exchanges 05
01 What is Swayable and why does it exist?

"Most platforms that are used to measure advertising are built by people who also sell advertising. That creates a fundamental conflict of interest in how effectiveness gets defined."

Slezak frames Swayable's origin as a response to a structural problem in the industry. When the companies selling media are also the ones providing the measurement, the incentive is to define effectiveness in ways that make media look good. Swayable's position as an independent platform, incorporated as a public benefit corporation, is the commercial expression of that critique.

02 Why are click-through rates the wrong metric?

"You're grading people's homework and no one wants the F grade. There is a very strong incumbent bias in measurement toward metrics that make current work look acceptable."

The issue Slezak identifies is not just that click-through rates are a weak signal; it is that they actively reward the wrong behaviour. Creative that generates clicks through interruptive formats or emotional urgency will outscore creative that builds genuine brand preference in every click-based measurement system. The organisations that have built reporting infrastructure around CTR have a structural interest in keeping it, regardless of whether it reflects commercial reality.

03 How does pre-campaign persuasion testing actually work?

"We expose a randomised audience to the creative before launch and measure the actual shift in purchase intent, attitude, or recall. The gap between best and worst creative is always larger than people expect."

The process is essentially a controlled experiment: a test group sees the creative, a control group does not, and the difference in subsequent attitudes or intent is measured. Applied across multiple creative variants, this tells a brand not just whether their campaign will work, but which of their creative assets will work best. For large advertisers with multiple executions in flight, this information is worth far more than any post-campaign measurement.
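The arithmetic behind this design can be sketched in a few lines. The following is a hypothetical illustration of a randomised test-versus-control comparison, not Swayable's actual methodology; the survey scores, variant names, and group sizes are all invented:

```python
import random
import statistics

def assign_groups(respondents, seed=42):
    """Randomly split respondents into exposed (test) and unexposed (control) halves."""
    rng = random.Random(seed)
    pool = list(respondents)
    rng.shuffle(pool)
    mid = len(pool) // 2
    return pool[:mid], pool[mid:]

def lift(test_scores, control_scores):
    """Persuasion lift: difference in mean purchase intent between the two groups."""
    return statistics.mean(test_scores) - statistics.mean(control_scores)

# Hypothetical 1-5 purchase-intent survey scores for three creative variants,
# each measured against the same unexposed control group
control = [3, 3, 4, 2, 3, 3, 4, 3]
variants = {
    "variant_a": [4, 5, 3, 4, 4, 5, 3, 4],
    "variant_b": [3, 4, 3, 3, 4, 3, 3, 3],
    "variant_c": [3, 3, 3, 2, 4, 3, 3, 3],
}

# Rank variants by measured lift before any media spend is committed
for name, scores in sorted(variants.items(),
                           key=lambda kv: lift(kv[1], control), reverse=True):
    print(f"{name}: lift {lift(scores, control):+.2f}")
```

The point of the sketch is the ranking step: with several variants tested against the same control, the output is a pre-launch ordering of creative assets by measured attitude shift, which is the decision input the paragraph above describes.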

04 What did the work with Paramount Pictures reveal?

"Paramount had an extraordinary year with their releases. What the testing showed was that among their creative variants, the difference between the best and worst performing was not marginal. It was substantial."

The Paramount case illustrates the asymmetry that makes pre-campaign testing valuable. When you have many creative variants and need to decide which to weight heavily in a campaign, even a modest improvement in that selection process translates directly into incremental box office or brand impact at massive scale. The creative that feels most memorable internally is not reliably the creative that actually moves audiences.

05 What does good measurement look like in marketing?

"The question you should always be asking is: does this metric predict the commercial outcome I care about? If you cannot trace a direct line from this number to revenue or brand equity, you need to be very suspicious of why you are tracking it."

Slezak's framework is not anti-metric; it is anti-proxy. The problem is not measuring things, but measuring things that feel like they correlate with success because they are easy to track and go up when campaigns run, rather than because they genuinely predict commercial outcomes. He advocates for the harder work of establishing which leading indicators actually connect to downstream revenue, and then building measurement infrastructure around those.

26 Minutes
S1 E1 Season & episode
10x Potential lift from iterative persuasion testing vs untested creative
10M+ Consumer responses processed on the Swayable platform

"Iterative persuasion testing can deliver a 10x increase in campaign lift."

Hear James on
The Business of Marketing
Season 1 Episode 01
Full Transcript
Season 1 E01  ·  James Slezak, CEO, Swayable
Lightly edited for readability.

Host James, welcome to the Business of Marketing. Tell us about Swayable.

Slezak Swayable is an AI data platform that measures persuasion. We tell brands and agencies whether their creative will cause lift before campaigns launch. Most platforms used to measure advertising are built by people who also sell advertising. That creates a fundamental conflict of interest in how effectiveness gets defined.

Host Why are click-through rates such a poor measure of creative effectiveness?

Slezak There is a very strong incumbent bias in measurement. You're grading people's homework and no one wants the F grade. Click-through rates measure behaviour, not persuasion. They reward intrusive formats and precise targeting, not creative that actually changes minds or purchase intent.

Host How does pre-campaign persuasion testing work?

Slezak We expose a randomised audience to the creative before launch and measure the actual shift in purchase intent, attitude, or recall. The gap between best and worst performing creative is always larger than people expect. Most marketing teams massively underestimate how much variation there is across their own creative assets.

Host Tell me about the work with Paramount Pictures.

Slezak Paramount had an extraordinary year with their releases. What the testing showed was that among their creative variants, the difference between the best and worst performing was not marginal. It was substantial. When you can identify that before committing media spend, you change the entire economics of the campaign.

Host What does good measurement look like?

Slezak The question you should always be asking is: does this metric predict the commercial outcome I care about? If you cannot trace a direct line from this number to revenue or brand equity, you need to be very suspicious of why you are tracking it. Iterative persuasion testing consistently delivers a 10x increase in campaign lift compared to launching without pre-campaign evidence.