A new paradigm · v1.0

Press releases
were written for
journalists.

Now they are read by machines. The wire is dead. The agent is the audience.

~70% · of news queries now answered by AI, not search
1.4s · avg time an LLM spends parsing a press release
0 · wire services optimized for machine readers
1 · new format that fixes it
01 — The Problem

For a hundred years, press releases targeted one species: the working journalist.

They were stapled to fax machines, wired through PR Newswire, and embargoed until 6 a.m. Eastern. The format optimized for one thing: a tired reporter scanning for a quote.

That reader is gone. In its place: a probabilistic model, an answer engine, an autonomous agent. They don't skim. They don't follow links. They don't care about your boilerplate.

The press release of 2026 is still being written for an audience that no longer exists.

02 — The Manifesto

Eight principles for the
AI-native press release.

A working draft. We expect to be wrong about half of these in 18 months. We expect to be right about the rest for a decade.

01 · Structured over stylish.
Schema.org markup, JSON-LD, atomic facts. The lede is metadata, not metaphor.

02 · Citeable, not clever.
Every claim is a standalone sentence with a source. LLMs cite sentences, not paragraphs.

03 · Quotes as facts.
Attributed quotes need full context inline. Models will quote you without your headline.

04 · Machine-first metadata.
Date, entity, role, location, and relation are first-class fields — not buried in prose.

05 · One canonical version.
No reformatted PDFs, no stylized image releases. One URL, one machine-readable truth.

06 · Atomic, not narrative.
Break the announcement into discrete claims. Agents recompose; they don't paraphrase.

07 · Linkable evidence.
Filings, datasets, demos. If it can't be retrieved, it won't be trusted.

08 · Built to be ingested.
Robots.txt, sitemaps, RSS, and feed endpoints. If the crawler can't reach it, it doesn't exist.
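As a sketch of principle 08, here is what a machine-friendly robots.txt fragment might look like. The user-agent strings match the crawlers named later in this piece; verify current strings and paths against each vendor's crawler documentation before relying on them.

```
# robots.txt — explicitly admit the major LLM crawlers (illustrative sketch)
User-agent: GPTBot
Allow: /press/

User-agent: ClaudeBot
Allow: /press/

User-agent: PerplexityBot
Allow: /press/

Sitemap: https://acme.com/sitemap.xml
```

Pair this with a sitemap entry and a stable feed endpoint so discovery never depends on a third-party wire.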

03 — Before / After

The same news,
two formats.

Left: the press release as it's been written for thirty years. Right: the same announcement, redesigned for the model that reads it first.

Before · 1995
For Immediate Release
ACME Inc. · Press

ACME Unveils Revolutionary New Platform That Reimagines the Future of Enterprise AI [1]

SAN FRANCISCO — Today [2], ACME Inc., a leading provider of next-generation enterprise solutions, announced the launch of its most ambitious product yet — a paradigm-shifting platform that reimagines what's possible when cutting-edge AI meets best-in-class user experience. [3]

"We are absolutely thrilled to embark on this exciting new chapter and partner with the world-class customers who share our bold vision," said Jane Doe, CEO and visionary founder of ACME. [4]

About ACME [5]

ACME is a leading provider of next-generation enterprise solutions empowering organizations worldwide to unlock the transformative potential of artificial intelligence at scale...

Media contact: media@acme.com [6]
After · AI-native
acme.com/press/reactor-2

ACME Inc. ships Reactor 2 at $24,000, delivering 18,400 tokens/sec at 650W. [1]

2025-05-06T09:00:00-07:00 · SAN FRANCISCO · en-US [2]

TL;DR [3]

ACME began shipping Reactor 2 on May 6, 2025 at $24,000, delivering 2.3× Reactor 1's throughput at the same 650W envelope.

Atomic claims [4]
C1 · Ships 2025-05-06 at $24,000 USD
C2 · 18,400 tok/s · Llama-3.1-70B fp8 b=1
C3 · 650W sustained · MLPerf v4.1

Quote · on-record · verified [5]

"Reactor 2 hits 18,400 tok/s on Llama-3.1-70B at 650W. The same workload on Reactor 1 took 1,500W."

— Jane Doe, CEO · jane@acme.com

Comparison [6]
Metric   Reactor 1   Reactor 2   Δ
tok/s    8,000       18,400      +2.30×
Watts    1,500       650         −56.7%
Price    $28k        $24k        −14.3%

JSON-LD [7]
{ "@type": "ProductLaunch",
  "name": "ACME Reactor 2",
  "launchDate": "2025-05-06",
  "price": { "value": 24000, "currency": "USD" },
  "metrics": { "tok_s": 18400, "watts": 650 } }

Contact [8]
Mira Chen · Head of Comms · mira.chen@acme.com · +1 415 555 0142 · 09:00–18:00 PT · SLA: 2h
What's broken
1. Vague headline — no facts
2. 'Today' — undated for machines
3. Marketing adjectives, zero data
4. Quote = vibes, not a claim
5. Boilerplate 'About' filler
6. media@ black hole

What's fixed
1. Atomic headline: subject + verb + numbers
2. ISO-8601 dateline + timezone
3. TL;DR engineered as the snippet
4. Numbered claims, each citeable
5. Quote with verification metadata
6. Comparison table with deltas
7. JSON-LD payload for ingestion
8. Named contact + SLA
04 — The mapping

Every part of the
old release,
redrawn.

A one-to-one map from the press release of 1995 to the press release of 2026. Same intent. New substrate.

Traditional → AI-native

01 · FOR IMMEDIATE RELEASE → Stable canonical URL
Embargo metadata becomes a permanent, machine-addressable location.

02 · "Today" / "Recently" → ISO-8601 dateline + timezone
Models can't resolve relative time. Give them an absolute timestamp.

03 · Marketing headline → Atomic claim headline
Subject + verb + object + numbers. Parseable as a single fact.

04 · Lede paragraph → TL;DR snippet block
Engineered to be the 2-sentence answer an LLM returns.

05 · Body paragraphs → Numbered atomic claims
Each fact stands alone, citeable without surrounding prose.

06 · CEO quote → Verified, structured quote
Speaker, title, org, contact, verification status as data.

07 · Boilerplate 'About' → JSON-LD payload
schema.org structured data. Indexed verbatim. No filler.

08 · media@ inbox → Named contact + SLA
Person, role, hours, response time. Agents need a real endpoint.
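The "02" swap above is the easiest to automate. A minimal sketch using only Python's standard library; the −07:00 offset is an assumption for a release issued during Pacific Daylight Time.

```python
from datetime import datetime, timedelta, timezone

# Replace "Today" with an absolute, machine-resolvable ISO-8601 dateline.
# Assumption: the release goes out at 09:00 Pacific Daylight Time (UTC-7).
pacific = timezone(timedelta(hours=-7))
dateline = datetime(2025, 5, 6, 9, 0, 0, tzinfo=pacific).isoformat()
print(dateline)  # 2025-05-06T09:00:00-07:00
```

Because the offset travels with the timestamp, any model or agent can sort and compare the dateline without guessing the publisher's clock.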
05 — Anatomy of an AI press release

Ten features.
One release.

A full AI-native press release, annotated. Each numbered element is broken out below: what it does, and why it matters when the first reader is a model.

Press release · acme.com/press/reactor-2 · AI-native v1.0

ACME Inc. ships Reactor 2 AI inference appliance at $24,000, delivering 18,400 tokens/second.

SAN FRANCISCO — 2025-05-06T09:00:00-07:00 · Embargo: none · Lang: en-US
TL;DR

ACME Inc. began shipping Reactor 2 on May 6, 2025 at $24,000, delivering 2.3× the throughput of Reactor 1 at the same 650W power envelope.

Atomic claims
C1 · Reactor 2 ships May 6, 2025 at $24,000 USD. [datasheet]
C2 · Throughput: 18,400 tokens/sec on Llama-3.1-70B-Instruct, batch=1, fp8.
C3 · Power draw: 650W sustained, measured per MLPerf v4.1 protocol.
Quote — on record, verified

"Reactor 2 hits 18,400 tokens per second on Llama-3.1-70B at 650 watts. The same workload on Reactor 1 took 1,500 watts."

— Jane Doe, CEO, ACME Inc. · jane@acme.com · verified 2025-05-06
Comparison
Metric              Reactor 1   Reactor 2   Δ
Throughput (tok/s)  8,000       18,400      +2.30×
Power (W)           1,500       650         −56.7%
Price (USD)         $28,000     $24,000     −14.3%
Structured payload (JSON-LD)
{
  "@context": "https://schema.org",
  "@type": "ProductLaunch",
  "name": "ACME Reactor 2",
  "launchDate": "2025-05-06",
  "price": { "@type": "MonetaryAmount", "value": 24000, "currency": "USD" },
  "metrics": {
    "throughput_tokens_per_sec": 18400,
    "power_w": 650,
    "benchmark": "MLPerf v4.1 / Llama-3.1-70B fp8 batch=1"
  },
  "evidence": ["https://acme.com/reactor/datasheet.pdf"]
}
Contact
Mira Chen · Head of Comms · mira.chen@acme.com · +1 415 555 0142 · Hours: 09:00–18:00 PT · SLA: 2h
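A release like the one above can be pre-flight checked before it ships. A minimal sketch in Python: the REQUIRED tuple is this sketch's own convention, not a standard, and ProductLaunch is this document's proposed type rather than a registered schema.org type.

```python
import json

# The structured payload from the anatomy above.
payload = {
    "@context": "https://schema.org",
    "@type": "ProductLaunch",
    "name": "ACME Reactor 2",
    "launchDate": "2025-05-06",
    "price": {"@type": "MonetaryAmount", "value": 24000, "currency": "USD"},
    "evidence": ["https://acme.com/reactor/datasheet.pdf"],
}

# Pre-flight: every field this format treats as first-class must be present.
REQUIRED = ("@context", "@type", "name", "launchDate", "evidence")
missing = [field for field in REQUIRED if field not in payload]
assert not missing, f"payload missing: {missing}"

# Serialize for embedding in a <script type="application/ld+json"> block.
json_ld = json.dumps(payload, indent=2)
print(json_ld)
```

Running the check in CI means a release physically cannot publish with a relative date or a missing evidence link.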
06 — The X-ray

Eight things break.
Eight things replace them.

A line-by-line autopsy of the press release. Left: what fails when a model reads it. Right: the substitution that makes it machine-native.

8 · elements rebuilt
0 · adjectives required
1 · canonical URL
100% · machine-readable
01 · before
Distribution wire URL
prnewswire.com/...?utm=...
Tracking-laden, ephemeral, syndicated 50× — no canonical truth.
01 · after
Stable canonical URL
acme.com/press/reactor-2
One immutable address an LLM can cite forever.
02 · before
Marketing headline
“Revolutionary AI Platform Reimagines the Future”
Zero parseable facts. Adjectives, no subject-verb-object.
02 · after
Atomic claim headline
ACME ships Reactor 2 at $24,000 · 18,400 tok/s
Subject + verb + numbers. A single extractable fact.
03 · before
“Today” dateline
SAN FRANCISCO — Today,
Models can’t resolve relative time. Date is lost on ingestion.
03 · after
ISO-8601 + timezone
2025-05-06T09:00:00-07:00
Absolute timestamp. Sortable, comparable, machine-native.
04 · before
Lede paragraph
“…cutting-edge, best-in-class, paradigm-shifting…”
Buzzword stew. Snippet engines have nothing to grab.
04 · after
TL;DR snippet block
Reactor 2 ships May 6 at $24,000, 2.3× the throughput at 650W.
Engineered as the 2-sentence answer an LLM will return.
05 · before
Narrative body
Five paragraphs of context-dependent prose.
Each fact requires the surrounding paragraph to make sense.
05 · after
Numbered atomic claims
C1 · C2 · C3 — each citeable in isolation
Every claim stands alone. RAG-friendly, hallucination-resistant.
06 · before
CEO quote
“We’re thrilled to embark on this exciting journey…”
Vibes, not assertions. Unverifiable, unattributable, unparseable.
06 · after
Verified structured quote
Speaker + title + org + email + verified-on date
An on-record fact with provenance. Treated as data.
07 · before
Boilerplate “About”
“ACME is a leading provider of next-gen solutions…”
Filler. Indistinguishable from every other company.
07 · after
JSON-LD payload
{ "@type": "Organization", "founded": 2014, … }
schema.org structured data. Indexed verbatim.
08 · before
media@ inbox
media@acme.com
Black hole. No name, no SLA, no agent endpoint.
08 · after
Named contact + SLA
Mira Chen · Head of Comms · 2h response
A real person agents and journalists can reach.
Who reads it now
01 · LLMs · ChatGPT, Claude, Gemini
02 · Answer engines · Perplexity, You.com
03 · Agents · Autonomous research bots
04 · RAG pipelines · Enterprise knowledge bases
05 · Humans · (eventually)
07 — The pipeline

From publish to
answered.

What actually happens after you press publish. Five stages, all optimizable, all measurable.

01
Publish
Release lives at a stable canonical URL with JSON-LD payload + atomic claims.
schema.org · sitemap.xml · llms.txt
02
Crawl
Search bots and LLM crawlers (GPTBot, ClaudeBot, PerplexityBot) ingest within hours.
robots.txt allow · 200 OK · server-rendered
03
Index
Facts get extracted as structured triples into model + RAG indexes.
claim → evidence → source
04
Retrieve
Agent queries pull your atomic claims as the cited answer chunk.
<cited via canonical URL>
05
Answer
Your release becomes the sentence the user actually reads in the model's output.
“According to ACME’s release…”
Outcome
You stop chasing journalists.
You start being the source models cite. Every atomic claim becomes a piece of cited evidence in answers your customers, investors, and competitors are already getting from AI.
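The crawl and index stages only succeed if the JSON-LD survives in the HTML your server actually returns. A minimal sketch of what an ingesting crawler might do, using only Python's standard library; the class name and HTML snippet are illustrative.

```python
import json
from html.parser import HTMLParser

class JSONLDExtractor(HTMLParser):
    """Collect <script type="application/ld+json"> payloads, as a crawler might."""

    def __init__(self):
        super().__init__()
        self._in_jsonld = False
        self.payloads = []

    def handle_starttag(self, tag, attrs):
        # Flag only scripts explicitly typed as JSON-LD.
        self._in_jsonld = tag == "script" and ("type", "application/ld+json") in attrs

    def handle_data(self, data):
        if self._in_jsonld and data.strip():
            self.payloads.append(json.loads(data))

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_jsonld = False

# Illustrative server-rendered page: the payload is in the raw HTML,
# not injected later by JavaScript a crawler never runs.
html = (
    '<html><head><script type="application/ld+json">'
    '{"name": "ACME Reactor 2", "launchDate": "2025-05-06"}'
    '</script></head></html>'
)
extractor = JSONLDExtractor()
extractor.feed(html)
print(extractor.payloads[0]["name"])  # ACME Reactor 2
```

If this extraction comes back empty against your own press page, stage 02 of the pipeline has already failed, whatever the prose says.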
08 — The Guides

How to write one.

Practical guides, templates, and teardowns. Sign up below to be the first to read each one.

Anatomy

The anatomy of an AI-readable press release

What every section should look like — from the headline as a fact statement to the boilerplate as JSON-LD.

Coming soon
Markup

Schema markup for the modern newsroom

A field-by-field guide to NewsArticle, Organization, FundingEvent, and ProductLaunch schemas.

Coming soon
Voice

Writing quotes that LLMs will cite

Why most CEO quotes never make it into Perplexity, ChatGPT, or Gemini answers — and how to fix it.

Coming soon
09 — FAQ

Questions,
answered atomically.

Each answer is a self-contained, citeable claim. By design.

What is an AI press release?
An AI press release is a structured, machine-readable announcement designed to be ingested and cited by large language models, AI agents, and answer engines — not just human journalists. It uses schema.org markup, atomic claims, and standalone quotes so its facts can be retrieved and reproduced verbatim by generative systems.
How is an AI press release different from a traditional press release?
Traditional press releases optimize for a human reporter scanning for a quote. AI press releases optimize for retrieval and citation: they expose facts as discrete, attributed claims; embed JSON-LD structured data; and treat metadata (date, entity, role, location) as first-class content rather than narrative.
What is Generative Engine Optimization (GEO)?
Generative Engine Optimization (GEO) is the practice of structuring content so it is preferentially retrieved, cited, and reproduced by generative AI systems such as ChatGPT, Perplexity, Gemini, and Claude. GEO extends classical SEO with focus on atomic facts, citeable quotes, schema markup, and llms.txt-style discoverability.
Do LLMs actually read press releases?
Yes. Most major LLMs are trained on, or retrieve from, web corpora that include wire services, company newsrooms, and indexed news pages. Answer engines like Perplexity and Google's AI Overviews cite press releases directly when answering company, product, and funding questions.
What schema.org types should a press release use?
Start with NewsArticle for the release and Organization for the issuer; schema.org has no dedicated PressRelease type, so announcement-specific types like ProductLaunch, FundingEvent, or MergerAcquisition are extensions you define in your own payload. Always include datePublished, author, publisher, and mainEntityOfPage.
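A minimal NewsArticle payload carrying those four properties might look like this; the values are illustrative, drawn from this document's running ACME example.

```python
import json

# Minimal NewsArticle JSON-LD with the four properties the FAQ treats as
# required: datePublished, author, publisher, mainEntityOfPage.
release = {
    "@context": "https://schema.org",
    "@type": "NewsArticle",
    "headline": "ACME Inc. ships Reactor 2 at $24,000",
    "datePublished": "2025-05-06T09:00:00-07:00",
    "author": {"@type": "Organization", "name": "ACME Inc."},
    "publisher": {"@type": "Organization", "name": "ACME Inc."},
    "mainEntityOfPage": "https://acme.com/press/reactor-2",
}
print(json.dumps(release, indent=2))
```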
Who is behind The AI Press Release?
The AI Press Release is published by High Caliber AI, a firm that helps category-defining companies become the answer in LLMs, agents, and the new web.
10 — Who's behind this

David Berkowitz

Founder, High Caliber AI · Founder, AI Marketers Guild · Author, The Non-Obvious Guide to Using AI for Marketing (Ideapress, 2025).

A longtime marketing strategist, David has led marketing and innovation for companies including Mediaocean, Storyhunter, Sysomos, MRY (Publicis), and 360i (Dentsu).

He has contributed 600+ columns to outlets like Advertising Age, MediaPost, and VentureBeat, and spoken at 400+ events worldwide. He helps marketers harness AI to work smarter, stay creative, and strengthen customer connections.

David lives in New York City.

11 — Updates

One email per
new paradigm.

Get every guide, teardown, and template the moment it drops. No spam. No fluff. Just the next chapter of the press release.

Subscribers get the free white paper · "The AI Press Release: A field manual"