
2026-04-23 / 7 MIN READ

Programmatic SEO without the Helpful Content demotion risk

A contrarian take on programmatic SEO and the Helpful Content Update. The risk is not scale or AI authorship. The risk is utility. Here is how to design for it.

The received wisdom about the Helpful Content Update is wrong in a specific way that has cost a lot of sites a lot of traffic. The wisdom says: if you scale content with templates or AI, you will get demoted. The evidence says: sites that scaled thin content got demoted. Sites that scaled useful content did not. The distinction matters because it points to a different intervention than "stop scaling."

This is the contrarian essay. The argument is that programmatic SEO and the Helpful Content Update are not inherently in tension. The tension is between scale and utility, and you can operate at scale without losing utility if you design for it. I have shipped programmatic content for DTC clients through the 2023 and 2024 HCU rounds without a demotion, and this is the design pattern that explains why.

Utility score: 455 / 500
  • Original data: 85
  • Lived example: 90
  • Named author: 100
  • Unique stance: 80
  • Editorial review: 100
Same template, same URL, same topic. The utility delta is where HCU safety lives.
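The scorecard above can be read as a weighted rubric. A minimal sketch, assuming each category is capped at 100 points (the category names mirror the scorecard; the cap and the function shape are my assumptions, not a published scoring system):

```python
# Hypothetical utility rubric: category names mirror the scorecard above.
# The 100-point cap per category is an assumption for illustration.
RUBRIC = {
    "original_data": 85,
    "lived_example": 90,
    "named_author": 100,
    "unique_stance": 80,
    "editorial_review": 100,
}

MAX_PER_CATEGORY = 100

def utility_score(scores: dict) -> tuple[int, int]:
    """Return (total, maximum) for a page's per-category scores."""
    total = sum(min(v, MAX_PER_CATEGORY) for v in scores.values())
    maximum = MAX_PER_CATEGORY * len(scores)
    return total, maximum

total, maximum = utility_score(RUBRIC)
print(f"Utility score: {total} / {maximum}")  # Utility score: 455 / 500
```

The point of scoring per category rather than pass/fail is that it shows where the utility delta lives: two pages on the same template can differ by hundreds of points.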

What the HCU actually targets

Google's public explanation of the Helpful Content System (which replaced the episodic HCU after March 2024) is that it evaluates whether a page "creates unhelpful content primarily to rank well in search engines rather than to help people." That phrasing matters because it is describing intent, not method.

A page created primarily to rank can be written by a human or an AI, can use a template or be bespoke, and can be long or short. The common thread is that the page does not actually help the reader who lands on it. It is a vehicle for ranking.

A page created primarily to help can also be written by a human or an AI, templated or bespoke, long or short. The test is whether a reader who lands on the page learns something useful that they could not have learned from the top-ranked alternatives.

This framing is not my invention. It is how Google describes the system. But it is frequently mistranslated into "AI content bad" or "programmatic content bad," which leads sites to either avoid programmatic entirely (a strategic loss) or scale without caring about utility (an HCU demotion waiting to happen).

Why the 2022-2024 HCU rounds hit so hard

The three episodic HCU updates between September 2022 and March 2024 targeted a specific population of sites. Looking at what was demoted, three patterns show up.

Pattern one, pure aggregation with no editorial layer. Sites that aggregated product data, review data, or event data from third-party sources and wrapped the results in templated prose with no human review, no editorial angle, and no original synthesis. These sites got demoted heavily.

Pattern two, answer-box fishing without depth. Sites that wrote thin "how to do X" pages optimized for featured snippet placement, where the answer was fine but the rest of the page was padding. The signal to the HCU was: this page exists to capture the clip, not to teach the topic.

Pattern three, AI-written long-form with no grounding. Sites that used GPT-3/GPT-4 era tools to generate 2,000-word articles that paraphrased general knowledge with no specific expertise, no original data, no named author. These got demoted in the March 2024 round specifically.

What did not get demoted: sites running AI-assisted content with real experiential grounding, named authors, editorial review, and specific claims backed by specific data. The volume was not the problem. The shape was.

The design pattern that avoids HCU exposure

Five requirements. Every programmatic page that ships under this pattern meets all five.

One, original data or original synthesis. The page must contain something a reader cannot get from the other pages ranking for the same query. Original data (first-party benchmarks, experience metrics, field notes) is the strongest. Original synthesis (a specific point of view, an unusual frame, a non-obvious connection) is next strongest. Pure paraphrase of existing material is the weakest.

Two, a named author with verifiable authority. Not a decorative byline. An author whose LinkedIn, published work, or professional credentials plausibly back the claims on the page. The author bio should link out to a main author page that carries Person schema and references the author's real work.
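The Person schema mentioned above is the schema.org `Person` type, usually emitted as JSON-LD on the author page. A sketch, with placeholder name, URLs, and job title (none of these are real details):

```python
import json

# Sketch of the Person schema an author page might carry.
# Every name and URL here is a placeholder, not a real author.
author_schema = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Jane Doe",                      # placeholder author name
    "url": "https://example.com/authors/jane-doe",
    "jobTitle": "Technical SEO Consultant",  # placeholder credential
    "sameAs": [                              # links that back the byline
        "https://www.linkedin.com/in/janedoe",
        "https://example.com/portfolio",
    ],
}

# Emit as a JSON-LD block for the author page's <head>.
print('<script type="application/ld+json">')
print(json.dumps(author_schema, indent=2))
print("</script>")
```

The `sameAs` array is what makes the byline verifiable rather than decorative: it points the schema at the author's real published footprint.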

Three, specific claims backed by specific evidence. Generic advice ("it is important to optimize your images") is a red flag. A specific claim with evidence ("I audited 14 Shopify stores and 8 of them were shipping 2MB hero images") is a green flag. The ratio of specific to generic across the page is a decent proxy for utility.

Four, editorial review before publish. Every page has a human review step. The reviewer checks the specific claims against ground truth, rewrites any section that reads as filler, and verifies the page actually answers the query. For AI-assisted content, this is the step that separates pages that will hold up under HCU scrutiny from pages that will not.

Five, a stance or a frame. The page takes a position instead of just reporting information. Reporting information is the neutral mode most AI content defaults to, and it is the mode that reads as generic to both readers and algorithms. A page with a stance ("the INP metric matters more than LCP now, and here is why") is more useful than one without.
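The five requirements are a gate, not a score: a page ships only when all five hold. A sketch of that gate, with illustrative field names (this is not a real CMS schema):

```python
# Publish gate sketch: a page ships only if it meets all five requirements.
# Field names are illustrative, not a real CMS schema.
REQUIREMENTS = (
    "has_original_data_or_synthesis",
    "has_named_verifiable_author",
    "has_specific_evidence",
    "passed_editorial_review",
    "has_stance",
)

def ready_to_publish(page: dict) -> tuple[bool, list[str]]:
    """Return (ok, missing): ok only when every requirement is met."""
    missing = [r for r in REQUIREMENTS if not page.get(r)]
    return not missing, missing

draft = {
    "has_original_data_or_synthesis": True,
    "has_named_verifiable_author": True,
    "has_specific_evidence": False,   # generic advice only -> blocked
    "passed_editorial_review": True,
    "has_stance": True,
}
ok, missing = ready_to_publish(draft)
print(ok, missing)  # False ['has_specific_evidence']
```

Returning the list of missing requirements, rather than a bare boolean, is what makes the gate actionable: the editor sees exactly which gap to close before the page can ship.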

What this means in practice

A cluster built this way looks different from the inside than from the outside. The author scopes every piece to real expertise. Every supporting piece covers something the author has actually shipped, audited, or studied. Grounding is real. Review is the author's. Voice is the author's.

A reader who lands on any page in a cluster like that should be able to tell, within a paragraph or two, that the page is not padding. Whether that reader could recognize the production method is a different question, and not one Google's quality raters appear to care about in 2026. They care about utility.

The HCU did not punish AI content as a category. It punished unhelpful content regardless of authorship method.

When a site gets caught on the wrong side of this

If your site got demoted in one of the HCU rounds, the fix is rarely "wait for recovery." The mechanism is algorithmic reclassification, which means you have to change the input to the algorithm. Three moves, in order.

Prune the thin content. Pages that do not meet the five requirements above are a drag on the whole domain. Delete them or consolidate them into stronger pages. This is the single highest-impact move. I have seen sites recover substantial traffic by pruning 30-50% of their thin content.
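The pruning pass can start as a simple filter over a content inventory. A sketch, assuming you have tracked how many of the five requirements each page meets plus its search clicks; the click threshold and field names are my assumptions, not a recommendation from Google:

```python
# Pruning sketch: flag pages that fail the five-requirement check and have
# negligible search traffic as prune-or-consolidate candidates.
# Field names and the click threshold are illustrative assumptions.
def prune_candidates(pages: list[dict], min_monthly_clicks: int = 10) -> list[str]:
    """URLs failing any requirement with near-zero clicks: prune or merge."""
    return [
        p["url"]
        for p in pages
        if p["requirements_met"] < 5 and p["monthly_clicks"] < min_monthly_clicks
    ]

inventory = [
    {"url": "/guides/thin-howto",  "requirements_met": 2, "monthly_clicks": 3},
    {"url": "/guides/field-notes", "requirements_met": 5, "monthly_clicks": 240},
    {"url": "/guides/near-miss",   "requirements_met": 4, "monthly_clicks": 80},
]
print(prune_candidates(inventory))  # ['/guides/thin-howto']
```

Pages like the "near-miss" above, which fail a requirement but still earn traffic, belong in the strengthen bucket rather than the prune bucket, which is the next move.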

Strengthen the survivors. For the pages that are close to meeting the requirements, fix the gaps. Add named authors, add original claims, add specific evidence, add a stance. This is slow work but it compounds.

Wait for the next core update. Recovery from an HCU demotion typically requires a subsequent core update to register the reclassification. That is usually 3-6 months after the cleanup. Sites that panic and revert or try to game it usually make things worse.

Where this fits

This essay sits alongside the cluster hub as the principled argument for why the programmatic approach can be safe. Author brand versus programmatic scale covers the author-layer requirement in depth. AI-assisted content with real grounding covers requirement number three specifically.

If you want an audit that checks whether your existing content meets these five requirements and flags the pages that do not, the DTC stack audit includes a content-quality pass. Full product ladder is at /products.

For readers coming from the other round-2 clusters, the ecommerce conversion patterns cluster and the AI agent engineering cluster share the broader operator theme.

Sources and further reading

  • Google Search Central: Helpful Content System documentation, 2024-2025
  • Google Search Liaison posts on HCU rounds, September 2022, September 2023, March 2024
  • Sistrix and Search Engine Land recovery case studies, 2024-2025


Let us talk

If something in here connected, feel free to reach out. No pitch deck, no intake form. Just a direct conversation.

Get in touch