
2026-04-23 / 14 MIN READ

Shipping regulated DTC healthcare without a compliance team

Three recurring patterns I see when DTC brands layer regulated healthcare on commerce, plus the engineering habits that keep the combination safe.

The phrase "regulated DTC healthcare" covers a specific kind of product: a direct-to-consumer commerce experience sitting on top of a clinical, device, or health-record backbone. The ecommerce half is well-documented. The clinical half has its own decades of compliance practice. The combination is where most small teams get stuck, and where most PHI leaks show up a year into production.

I have shipped and advised on a handful of these builds now. A regulated healthcare client whose members bought supplies online while clinical staff managed care on the back end. A regulated intake flow that captured consent for follow-on services. A DTC healthcare brand whose commerce team kept adding vendors without anyone checking whether the data those vendors saw was PHI. Different shapes. The same three failure modes.

[Figure: pattern overlay. Instance 1: commerce shell / clinical backbone. Instance 2: regulated intake / consent record. Instance 3: vendor procurement / BAA boundary. Three anonymized regulated DTC healthcare builds converge on the same PHI-boundary shape.]

The pattern across regulated DTC healthcare

A regulated DTC healthcare product is two systems in one codebase. There is the commerce system - carts, checkouts, email receipts, attribution, retention flows - where speed and experimentation win. And there is the clinical or record system - intake, consent, treatment data, billing that touches insurers - where auditability wins. The compliance boundary runs down the middle of the codebase, usually invisible until something breaks.

The off-the-shelf stacks do not cover this combination. Shopify is excellent at commerce and has no HIPAA posture. Cerner and Epic are built for clinical systems and have no DTC motion. Teams end up assembling the middle layer themselves: a Next.js app with Stripe for payments, Supabase or Postgres for records, a transactional email provider for fulfillment, and a clinical system integration sitting somewhere in the stack. That middle layer is where the compliance burden lives, and it is where almost every real-world failure I have seen originates.

I wrote a post on the PHI-boundary patterns that come out of this work - the solo HIPAA Next.js primitives post - which covers the technical basics. This one is the pattern map: the three recurring failure modes I see at the architecture level, and what resolves each.

Instance 1: a commerce shell bolted to a clinical backbone

The first instance is the one that scares me the most because it looks fine on a dashboard and fails quietly in the data layer.

A regulated healthcare client had an ecommerce storefront where members could reorder supplies, and a separate clinical system where providers managed care. The two systems shared members. They also shared member data, sometimes inadvertently. A marketing list exported from the commerce platform could include fields the commerce platform should never have seen - a therapy identifier, a diagnosis hint encoded in a SKU, a date that lined up with a clinical appointment. None of this was designed to leak. It leaked because the teams shipping each side of the app did not have a shared definition of what was PHI.

The resolution pattern is simple to describe and expensive to retrofit. The commerce system gets derivative data only. A member's ability to reorder a particular supply is represented as an entitlement - a boolean or a non-clinical SKU - derived from the clinical record and pushed into the commerce layer. The commerce layer never holds the clinical truth. If marketing wants to segment, they can segment on the entitlement, not the underlying reason for it.
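A minimal sketch of that derivation, in TypeScript since the middle layer described here is typically a Next.js app. All names here are hypothetical: the record shape, the mapping table, and the SKU values are stand-ins, and the point is only that the mapping from clinical truth to entitlement lives on the clinical side of the boundary.

```typescript
// Clinical truth stays on the clinical side; only the derived
// entitlement crosses into the commerce layer.
interface ClinicalRecord {
  memberId: string;
  activeTherapies: string[]; // clinical detail: never crosses the boundary
}

interface CommerceEntitlement {
  memberId: string;
  reorderSkus: string[]; // non-clinical SKUs the member may reorder
}

// The only place "therapy X implies supply SKU Y" is encoded, kept on
// the clinical side. The commerce layer sees SKUs, never therapies.
const THERAPY_TO_SKU: Record<string, string> = {
  "therapy-a": "SUPPLY-001",
  "therapy-b": "SUPPLY-002",
};

function deriveEntitlement(record: ClinicalRecord): CommerceEntitlement {
  const reorderSkus = record.activeTherapies
    .map((t) => THERAPY_TO_SKU[t])
    .filter((sku): sku is string => sku !== undefined);
  return { memberId: record.memberId, reorderSkus };
}
```

The useful property is that the output is safe to hand to any commerce-side consumer: marketing can segment on `SUPPLY-001` without ever learning why the member is entitled to it.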

This single boundary resolves a surprising number of downstream risks. Attribution pixels can run on the commerce side without touching PHI. Analytics vendors can see commerce events without needing a business associate agreement. The clinical audit log stays clean because the commerce side cannot query into it.

The architectural cost is real. You cannot have a single Postgres with RLS doing both jobs. You end up with two schemas, sometimes two databases, and an ETL or an event bus that moves derivative data across the boundary. Small teams often skip this until a privacy review forces it. By then, the fix is a six-month project.

Instance 2: regulated intake without a durable consent record

The second instance shows up in the funnel. A regulated intake flow - the form a member fills out to qualify for a product or a service - captures information that is clinical in nature, and it captures consent to use that information in a specific way.

I have seen several of these built as if they were standard marketing forms. Fields collected, submitted via a webhook, stored in a CRM. The consent checkbox is styled carefully and rendered on the page. The consent itself, as a legal event, is not recorded anywhere durable. If a privacy review or an audit asks "what exactly did this member agree to, on what date, under which version of the privacy policy," the answer is a screenshot of the page the marketing team happened to have when they last ran QA.

The resolution pattern is to treat consent as an immutable event, not a UI state. The consent artifact includes the member identifier, the timestamp, the specific policy version, the text that was shown at the moment of consent, and the IP and user agent of the client. Storing these in an append-only table, separate from the member's mutable profile, means the consent survives schema changes, member updates, and marketing flow rebuilds. If the policy changes, new consent is captured. The old artifact is still there.
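The shape of that artifact can be sketched in a few lines. This is an illustrative in-memory version; in production the same idea is an append-only Postgres table where INSERT is the only grant the application role holds. Field names are assumptions, not a prescribed schema.

```typescript
// Consent as an immutable event, not a UI state.
interface ConsentEvent {
  memberId: string;
  recordedAt: string;    // ISO timestamp of the consent moment
  policyVersion: string; // e.g. "privacy-2026-01"
  policyText: string;    // the exact text shown at the moment of consent
  ip: string;
  userAgent: string;
}

class ConsentLog {
  private readonly events: ConsentEvent[] = [];

  // Append is the only write path; there is deliberately no update or
  // delete. Freezing the copy keeps later code from mutating a record.
  append(event: ConsentEvent): void {
    this.events.push(Object.freeze({ ...event }));
  }

  // "What exactly did this member agree to, and when?"
  eventsFor(memberId: string): ConsentEvent[] {
    return this.events.filter((e) => e.memberId === memberId);
  }

  hasConsent(memberId: string, policyVersion: string): boolean {
    return this.events.some(
      (e) => e.memberId === memberId && e.policyVersion === policyVersion
    );
  }
}
```

Note what falls out for free: when the policy version changes, `hasConsent` against the new version is false until fresh consent is appended, and the old artifact is still queryable.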

This matters operationally because regulated intake is rarely a one-time event. Members re-enter the flow to add services, update information, or renew. Each pass through may need fresh consent on different policy versions. A system that treats consent as state cannot handle this. A system that treats consent as an event handles it by default.

The pattern I keep returning to for intake flows goes deeper on the field-level decisions. The architectural point is the same: the consent table is separate, append-only, and versioned.

Instance 3: procurement as the real compliance layer

The third instance is the one senior engineers underestimate. Compliance is enforced less by code than by which vendors the system talks to. Every new integration is a potential leak.

I have seen this across several DTC healthcare brands with small teams. A new analytics tool gets added for a growth experiment. A new email provider is wired up for a welcome series. A new data warehouse connector is installed for a marketing dashboard. Each of these decisions was made by an engineer or a growth lead, neither of whom checked whether the vendor could accept PHI, would sign a business associate agreement, or was even aware of the concept.

The symptom is subtle. PHI starts showing up in places no one intended, carried by fields that look innocent. An email address with a disease-specific username. A customer note field with a free-text medical detail. A server log line with a query parameter that contained a member's intake response. The vendor's terms of service are fine for ecommerce data. They are not fine for PHI.

The resolution pattern is to make procurement a pre-merge step. Before any integration code merges, three things have to exist: a BAA on file (or an explicit written confirmation that no PHI will flow to the vendor, with the mechanism that enforces it), a data-flow diagram showing exactly which fields cross the boundary, and a kill switch on the integration that lets the team rip it out without a code change if something goes wrong.
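One way to make the gate concrete is a vendor registry that lives in the repo and doubles as the runtime kill switch. This is a hypothetical sketch: the vendor names, file paths, and registry shape are illustrative, not a standard.

```typescript
// One entry per integration, reviewed before the integration merges.
type PhiStatus = "baa-on-file" | "no-phi-by-design";

interface VendorEntry {
  name: string;
  phiStatus: PhiStatus;
  dataFlowDoc: string; // path to the data-flow diagram in the repo
  enabled: boolean;    // the kill switch: flip to false, no code change
}

const vendors: Record<string, VendorEntry> = {
  analytics: {
    name: "ExampleAnalytics",
    phiStatus: "no-phi-by-design",
    dataFlowDoc: "docs/dataflow/analytics.md",
    enabled: true,
  },
  email: {
    name: "ExampleEmail",
    phiStatus: "baa-on-file",
    dataFlowDoc: "docs/dataflow/email.md",
    enabled: false, // killed after an incident, without a deploy of new code
  },
};

// Every outbound call routes through the gate; a disabled or unknown
// vendor is a silent no-op, which is exactly what a kill switch should be.
function sendToVendor(
  key: string,
  payload: Record<string, unknown>,
  send: (p: Record<string, unknown>) => void
): boolean {
  const vendor = vendors[key];
  if (!vendor || !vendor.enabled) return false;
  send(payload);
  return true;
}
```

In a real app the `enabled` flag would typically come from a feature-flag service rather than source, so the switch can be flipped without a merge at all.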

This is not expensive. A BAA check is a ten-minute email to a vendor's compliance team. The data-flow diagram can live as a markdown file in the repo. The kill switch is usually a feature flag. Done up front, the whole thing takes an afternoon. Done after an incident, it takes weeks of forensic work followed by a potential breach notification.

I have written a vendor evaluation checklist for small teams that captures the questions that matter. The hard part is making it a habit, not adding a new process.

Compliance is enforced less by code than by which vendors the system talks to.

What the pattern tells us

All three instances point at the same underlying reality. Regulated DTC healthcare is an architectural discipline, not a legal one. The lawyer's job is narrow - review the BAAs, draft the privacy policy, advise on a specific regulatory question. Everything else is decided in code: where PHI lives, how it moves across services, how consent is recorded, which vendors see which fields. By the time an app is in production, the compliance posture is baked in. Fixing a misaligned boundary takes months. Designing the boundary in the first week takes days.

This is also where the hybrid role earns its keep. An engineer who understands the commerce side and a compliance advisor who understands the Security Rule can collaborate, but neither one alone can design the seam between the two systems. The person who does the integration in practice is usually a senior full-stack engineer who has paid attention to both sides. If your team has that person, the architecture gets drawn correctly. If it does not, the architecture tends to drift until an incident forces a redesign.

How to spot the gap early

If I am walking into an existing regulated DTC build for the first time, there are four things I look at before anything else.

The PHI boundary. Is there a single, documented place where PHI lives? Can I trace, on a whiteboard, every path that PHI takes from the clinical system into the commerce shell, and is each path explicitly derivative? If the answer is "it's complicated," the boundary is probably leaking.

The consent record. Pull a real member. What did they agree to? On what date? Under which policy version? If the answer requires opening Slack and asking the marketing team, the consent architecture does not exist.

The vendor map. List every third-party service the application talks to. For each, which has a BAA? Which receives PHI? Which receives only derivative data? The answer is almost always incomplete, and the gap tells you where procurement has not been treated as part of the system.

The audit log. Where is it written? What permissions does the application have on those records? Can a rogue admin or an attacker modify it? If the log is just application stdout shipped to a generic log aggregator, it is not an audit trail under the Security Rule. My audit logging patterns for regulated Next.js apps covers the implementation shape in detail.

These four questions take a day to answer properly. The answers tell you whether the architecture is solid or whether the next year will be spent retrofitting. Most teams come out somewhere in the middle.

A small-team compliance posture that does not bury you

The assumption behind regulated DTC healthcare is that compliance requires a team. For larger organizations, it does. For a startup or a small operator, a workable posture is five habits and one piece of outside help.

Draw the PHI boundary on day one. Before any commerce code ships, decide what the commerce layer will and will not see. Put this in the architecture document. Design the member identifier so the two sides can talk without the commerce side holding clinical truth.

Record consent as events, not UI state. The first consent artifact in the database can be three columns: member_id, timestamp, policy_version. You can add fields later. You cannot retroactively reconstruct the events.

Gate every vendor. Before any integration merges, confirm the BAA status and document the data-flow. The diff for the integration is trivial once the answer to those two questions is yes.

Make the audit log append-only and separate. Do this on day one. Retrofitting an audit system onto a production app is an order of magnitude more work than building it in up front.
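One sketch of what "append-only" can mean in code: a hash-chained log where each entry commits to the one before it, so any after-the-fact edit breaks verification. This is an illustration of tamper evidence, not the mechanism the Security Rule mandates; in practice the same property usually comes from database grants (INSERT only) plus a write-once log store.

```typescript
import { createHash } from "node:crypto";

interface AuditEntry {
  seq: number;
  actor: string;
  action: string;
  at: string;       // ISO timestamp
  prevHash: string; // hash of the previous entry: this chains the log
  hash: string;
}

class AuditLog {
  private readonly entries: AuditEntry[] = [];

  append(actor: string, action: string, at: string): AuditEntry {
    const seq = this.entries.length;
    const prevHash = seq === 0 ? "genesis" : this.entries[seq - 1].hash;
    const hash = createHash("sha256")
      .update(`${seq}|${actor}|${action}|${at}|${prevHash}`)
      .digest("hex");
    const entry = Object.freeze({ seq, actor, action, at, prevHash, hash });
    this.entries.push(entry);
    return entry;
  }

  // Recompute every hash from the start; an edited or deleted entry
  // breaks the chain for everything after it.
  verify(): boolean {
    return this.entries.every((e, i) => {
      const prevHash = i === 0 ? "genesis" : this.entries[i - 1].hash;
      const expected = createHash("sha256")
        .update(`${e.seq}|${e.actor}|${e.action}|${e.at}|${prevHash}`)
        .digest("hex");
      return e.prevHash === prevHash && e.hash === expected;
    });
  }
}
```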

Run a privacy-oriented code review monthly. Twenty minutes, one developer, walking the four diagnostic questions from the previous section against the current state of the app. Most months the answer is "nothing changed." The months it changed are exactly when you want to know.

The outside help, when it is worth hiring for, is usually a senior engineer who has shipped this combination before and a healthcare attorney who can confirm the regulatory edges. The engineer designs the seam. The attorney ratifies it. Neither role needs to be full time on a small team.

For teams that want a structured read on where the current architecture is leaking - commerce side, tracking, and data flow - the DTC Stack Audit I run before accepting any retainer is the on-ramp. It covers the commerce side in depth and flags anywhere PHI looks adjacent to a tracking surface. The full case-study version of the compliance and PHI patterns is part of the enterprise work I document under the silent failure retrospective.

FAQ

Is HIPAA actually relevant to a DTC healthcare brand, or is that only for hospitals?

HIPAA applies to any entity that is a covered entity (a healthcare provider, health plan, or clearinghouse that transmits health information electronically) or a business associate of one. A DTC brand that sells products containing protected health information, receives diagnoses or prescriptions from providers, or integrates with clinical systems is almost always caught by one of those definitions. The test is whether the data handled is PHI, not whether the company feels like a hospital.

Can we use Shopify for the commerce side of a regulated healthcare DTC product?

Yes, with care. Shopify can act as the commerce shell over a separate clinical system as long as the Shopify side never receives PHI. Orders, SKUs, and customer profiles can be designed so the Shopify layer holds only derivative entitlement data. The moment you start storing clinical detail in Shopify customer notes or metafields, the boundary has moved and Shopify needs to be treated as a PHI-handling system, which it is not built for.

Do we need a compliance officer for a team of five?

Not a full-time one. What you need is one person who owns the architecture decisions that shape the PHI boundary, and an outside healthcare attorney on retainer for the legal edges. The architectural role can be filled by a senior engineer who has shipped regulated software before. The attorney role is typically a few hours a quarter.

How does consent for a regulated intake differ from GDPR or CCPA consent?

Functionally they overlap. Both require a clear record of what was consented to and when. HIPAA adds requirements around authorization for specific uses and disclosures of PHI. The practical implication is that the consent event record needs the policy version and the scope of use, not just a boolean. A system built to the HIPAA authorization standard usually handles GDPR and CCPA requirements as a side effect; the reverse is not always true.

What is the first thing that typically breaks in a regulated DTC healthcare build?

Vendor creep. The commerce side adds a new tool every few weeks, and at some point one of those tools starts receiving data that was not supposed to cross the PHI boundary. The break is usually silent for months. That is why procurement gating is the habit with the highest return for the lowest effort.

Is a BAA with the hosting provider enough on its own?

No. A BAA with the hosting provider covers the hosting provider. Every other service that touches PHI, including email, SMS, storage, analytics, and any third-party API called from your backend, needs its own BAA. Mapping the vendor list and checking each is the procurement gate in practice.

Sources and specifics

  • Patterns observed across three regulated DTC healthcare engagements and advisory work, 2024-2026, all Next.js 14 or 15 with a Postgres record layer.
  • HIPAA Security Rule references: 45 CFR 164.312(b) (audit controls), 45 CFR 164.314(a)(2)(iii) (business associate contracts).
  • Vendor BAA availability for Stripe, major transactional email providers, and major cloud hosting platforms confirmed via their published compliance documentation as of early 2026; verify current terms with each vendor.
  • Consent-as-event pattern derived from a combination of HIPAA authorization requirements and GDPR Article 7 evidentiary requirements; implementable as an append-only Postgres table or an event-store-backed record.
  • No client-specific metrics, infrastructure identifiers, or partner names are included; all instances anonymized per internal policy.


Let us talk

If something in here connected, feel free to reach out. No pitch deck, no intake form. Just a direct conversation.

Get in touch