The application log is the most ignored PHI leak in a regulated Next.js app. The audit log gets code review because it looks compliance-adjacent. The application log, the Sentry stream, the Datadog trace, the console.error in a shipped API route - those get patched in at midnight and nobody looks at them until a security review asks. In every regulated Next.js engagement I have worked on, at least one of them was leaking.
This post catalogs the three instances I see repeatedly, with the redaction pattern that resolves each, and the test that tells you whether your own logs are safe. The code runs on Next.js 14 or 15 and assumes a typical observability sink (Sentry, Datadog, Logtail, self-hosted). The pattern is the same regardless of sink; the difference is whether the sink is under a business associate agreement, which is a separate question covered in the BAA evaluation patterns post.
The pattern: application logs are the PHI leak nobody reviews
Application logs exist to debug problems. The people who write them are under pressure, at 2 AM, trying to understand why a production request failed. In that mode, developers log whatever is in scope: the request body, the user object, the response, the stack trace. In a non-regulated app, this is the right instinct. In a regulated app, that same instinct is how PHI ends up in a third-party observability vendor that does not have a BAA.
Before (raw logs, leaking):

- error: Failed to charge member=jane.doe@...
- breadcrumb: POST /intake body={dob:1982-...}
- console.error: Visit record ENC_FAILED id=...
- trace: prescription.query member_id=a1b2...

After field redaction (identifiers still leak):

- error: Failed to charge member=[redacted]
- breadcrumb: POST /intake body=[**]
- console.error: Visit record ENC_FAILED id=...
- trace: prescription.query member_id=a1b2...

After error-ID indirection (nothing left to leak):

- error: err_id=err_4f2a type=charge_failed
- breadcrumb: POST /intake status=422 err_id=err_6c1b
- error: err_id=err_9d2e type=encryption_failure
- trace: prescription.query duration_ms=42 err_id=err_b1a3
The working pattern has three components. First, redact at the emitter: the application itself strips PHI before any log line leaves the process. Second, log error IDs instead of error payloads: when an error occurs, record a pseudonymous ID in the log and store the rich context in your own BAA-covered datastore where it can be looked up by that ID. Third, use a BAA-covered sink for anything that can still contain regulated metadata even after redaction. These three together keep application logs useful for debugging without making them a PHI disclosure.
A related discipline is the audit log, which answers a different question. Application logs tell you why a request failed; audit logs tell you who touched what. The audit logging patterns post in this cluster covers the audit-log side; this post is about the application log.
Instance 1: Sentry breadcrumbs pulling member data into error events
The first instance is the one I see most often because Sentry is the most common error tracker in Next.js shops. Sentry's default instrumentation captures "breadcrumbs" automatically: console messages, network requests, navigation events, and a handful of other categories. When an error fires, Sentry attaches the recent breadcrumbs to the event payload and ships it to their servers.
The trap: the breadcrumbs capture request bodies and response payloads by default. A POST to /api/intake that fails validation leaves a breadcrumb containing the full request body. In a regulated product, that body has PHI in it. The body is now in Sentry, which may or may not have a BAA with you (Sentry offers a BAA on Business and Enterprise plans; verify yours before relying on it).
The resolution pattern is to strip sensitive fields before Sentry sees them. Sentry supports beforeSend and beforeBreadcrumb hooks that run in-process before the event leaves. In sentry.client.config.ts and sentry.server.config.ts:
```typescript
import * as Sentry from "@sentry/nextjs";

const REDACTED = "[redacted]";

const PHI_KEYS = new Set([
  "email", "phone", "ssn", "dob", "first_name", "last_name",
  "address", "city", "zip", "member_id", "patient_id",
  "diagnosis", "medication", "prescription", "claim_id",
]);

function redactDeep(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(redactDeep);
  if (value && typeof value === "object") {
    const out: Record<string, unknown> = {};
    for (const [k, v] of Object.entries(value as Record<string, unknown>)) {
      out[k] = PHI_KEYS.has(k.toLowerCase()) ? REDACTED : redactDeep(v);
    }
    return out;
  }
  return value;
}

Sentry.init({
  dsn: process.env.NEXT_PUBLIC_SENTRY_DSN,
  beforeBreadcrumb(breadcrumb) {
    if (breadcrumb.category === "fetch" || breadcrumb.category === "xhr") {
      if (breadcrumb.data?.body) {
        breadcrumb.data.body = REDACTED;
      }
    }
    return breadcrumb;
  },
  beforeSend(event) {
    if (event.request?.data) {
      event.request.data = redactDeep(event.request.data);
    }
    if (event.contexts) {
      event.contexts = redactDeep(event.contexts) as typeof event.contexts;
    }
    return event;
  },
});
```
The redaction runs in your process, before the event leaves. The breadcrumb data that used to carry the request body now carries [redacted]. The error still tells you what happened and where; it just does not ship the data that caused it.
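The beforeSend path can be unit-tested without Sentry at all, because redactDeep is a pure function. A minimal sketch, using a standalone copy of the helper against a synthetic event (the event shape here is illustrative, not Sentry's full schema):

```typescript
// Standalone copy of the redactDeep helper, trimmed to a few keys for the test.
const REDACTED = "[redacted]";
const PHI_KEYS = new Set(["email", "phone", "ssn", "dob", "member_id"]);

function redactDeep(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(redactDeep);
  if (value && typeof value === "object") {
    const out: Record<string, unknown> = {};
    for (const [k, v] of Object.entries(value as Record<string, unknown>)) {
      out[k] = PHI_KEYS.has(k.toLowerCase()) ? REDACTED : redactDeep(v);
    }
    return out;
  }
  return value;
}

// PHI keys are stripped at any nesting depth; non-PHI values survive.
const event = {
  request: { data: { email: "jane@example.com", plan: "gold" } },
  contexts: { members: [{ member_id: "a1b2", visits: 3 }] },
};
const safe = redactDeep(event) as any;
console.log(safe.request.data.email);         // "[redacted]"
console.log(safe.contexts.members[0].visits); // 3
```

Running this in CI every build is cheap insurance against someone adding a new PHI field that the key set does not yet cover.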
Instance 2: Datadog APM spans capturing request bodies by default
The second instance is APM. Datadog, New Relic, and similar tools auto-instrument HTTP handlers and capture span attributes including request paths, response codes, and in many configurations, request and response bodies. The bodies show up in trace search and flame graphs, visible to anyone with dashboard access.
The pattern is identical to the Sentry case: redact at the emitter. Datadog's Node tracer supports hooks that mutate spans before they ship. In instrumentation.ts or your tracer config:
```typescript
import tracer from "dd-trace";

tracer.init({
  logInjection: true,
});

tracer.use("http", {
  hooks: {
    request(span, req) {
      // strip request/response body captures entirely
      if (span) {
        span.setTag("http.request.body", undefined);
        span.setTag("http.response.body", undefined);
      }
    },
  },
});
```
Better still, configure the tracer not to collect bodies at all at the SDK level, if the option exists. The rule of thumb: if the SDK offers a "capture bodies" toggle, it should be off in any regulated environment, period. The replacement is structured attributes that describe the request without carrying the payload: the route, the response status, the duration, the error class.
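Those safe attributes can be built by one small helper so individual handlers never decide ad hoc what to attach. A sketch, assuming a hypothetical safeSpanAttributes helper (the attribute names follow common APM conventions but are illustrative):

```typescript
// Hypothetical helper: describe a request for a span without carrying its payload.
type SafeSpanAttributes = {
  "http.route": string;
  "http.status_code": number;
  duration_ms: number;
  "error.kind"?: string;
};

function safeSpanAttributes(
  route: string,
  status: number,
  durationMs: number,
  err?: Error
): SafeSpanAttributes {
  const attrs: SafeSpanAttributes = {
    "http.route": route, // the route pattern, never the full URL with query params
    "http.status_code": status,
    duration_ms: durationMs,
  };
  // Only the error class survives; err.message is dropped because
  // validation messages often embed the offending field value.
  if (err) attrs["error.kind"] = err.name;
  return attrs;
}

const attrs = safeSpanAttributes("/api/intake", 422, 38, new TypeError("dob must be a date"));
console.log(attrs);
```

The design choice worth noting is the dropped err.message: a message like "dob must be a date" is harmless here, but real validation errors frequently interpolate the submitted value, so the helper keeps only the class name.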
The same pattern applies to logs shipped via the Datadog Node logger or Winston/Pino transports. Pino's redact option takes a list of paths and removes them before serialization:
```typescript
import pino from "pino";

export const logger = pino({
  redact: {
    paths: [
      "req.body.*", "res.body.*",
      "*.email", "*.phone", "*.ssn", "*.dob",
      "*.member_id", "*.patient_id",
      "*.diagnosis", "*.medication",
    ],
    censor: "[redacted]",
  },
});
```
The redaction runs before the log line is serialized. The Pino-Datadog transport only ever sees the redacted form. This is the same principle as Sentry's beforeSend: the emitter is the last line of defense, and it must be authoritative.
Instance 3: Next.js error boundaries rendering PHI into client error pages
The third instance is the one I see on teams new to Next.js App Router. An error boundary (error.tsx) or a global error handler renders a friendly message plus, for debugging, the raw error. The raw error includes the stack trace; the stack trace includes the data that caused the failure; that data is PHI. The error page is now a client-rendered PHI disclosure.
The resolution pattern is an error ID indirection. The application catches the error on the server, generates a pseudonymous ID, stores the full error context in your own PHI-safe datastore keyed on that ID, and returns only the ID to the client. The error page shows the ID and a "reference this when contacting support" message. The rich context lives in your own Postgres, behind your own authentication.
```typescript
// src/lib/error-capture.ts
import { randomUUID } from "crypto";
import { db } from "@/lib/db";

type CapturedError = {
  id: string;
  kind: string;
  route: string;
  member_hint?: string; // never raw member_id; hash if needed
};

export async function captureError(err: unknown, route: string): Promise<CapturedError> {
  const id = `err_${randomUUID().slice(0, 8)}`;
  const kind = err instanceof Error ? err.name : "unknown";
  // Persist the full context to your own BAA-covered datastore
  await db.errorContext.insert({
    id,
    kind,
    route,
    stack: err instanceof Error ? err.stack : null,
    created_at: new Date(),
  });
  return { id, kind, route };
}
```
In error.tsx:
"use client";
export default function Error({ error, reset }: { error: Error & { digest?: string }; reset: () => void }) {
// error.digest is Next.js's server-side-only ID — safe to show
return (
<section>
<h2>Something went wrong.</h2>
<p>If you contact support, reference: <code>{error.digest ?? "unknown"}</code></p>
<button onClick={reset}>Try again</button>
</section>
);
}
Next.js App Router supports a server-side digest field on errors thrown in server components. The digest is a short hash that the framework logs server-side and exposes client-side as the error reference. In a regulated app, combine the framework digest with your own ID in captureError for richer lookup; the client sees only the identifier, never the underlying data.
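The indirection can be exercised without a database. A minimal in-memory sketch of the same capture-and-lookup flow (the real version persists to your BAA-covered store, as captureError above does):

```typescript
import { randomUUID } from "node:crypto";

// In-memory stand-in for the BAA-covered error context store.
const errorContext = new Map<string, { kind: string; route: string; stack: string | null }>();

function capture(err: unknown, route: string): string {
  const id = `err_${randomUUID().slice(0, 8)}`;
  errorContext.set(id, {
    kind: err instanceof Error ? err.name : "unknown",
    route,
    stack: err instanceof Error ? (err.stack ?? null) : null,
  });
  return id; // the only thing the client (or the log line) ever sees
}

// Support-side lookup: an authenticated step inside your own system.
function lookup(id: string) {
  return errorContext.get(id);
}

const id = capture(new RangeError("boom"), "/api/intake");
console.log(id);               // e.g. "err_4f2a1c3d" (opaque; safe to log and render)
console.log(lookup(id)?.kind); // "RangeError"
```

The property to preserve when you swap in a real store: the ID carries zero information on its own, and everything informative is reachable only through the authenticated lookup.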
“The emitter is the last line of defense, and it must be authoritative.”
What the pattern tells us
All three instances point to the same structural truth. Observability is a pipeline, and every hop in it is a potential leak point. The only point you fully control is the emitter: the code running inside your application process. Everything downstream (the SDK, the network, the vendor, the dashboard) is out of your process and harder to govern. The cheapest and most reliable place to redact is the first one, in your code.
The second insight is that "redact the log" and "use a BAA vendor" are not substitutes. They are complements. Redaction handles the case where the log leaks despite the vendor relationship (misconfigured dashboards, third-party SDKs that auto-instrument, a plan downgrade that silently drops BAA coverage). The BAA handles the case where the redaction misses something. You need both.
The third insight applies to audit discipline. Application logs should be treated as low-trust artifacts that can flow to observability vendors. Audit logs, as covered in the append-only audit log patterns post, are high-trust artifacts that stay inside a BAA-covered datastore under strict role separation. Do not mix the two streams. A Sentry event is not an audit record, and an audit row should never leave your Postgres.
How to spot the gap early
Four checks surface the leak points before they hurt.
1. Fire a deliberate error in a dev environment and inspect the full event payload your error tracker received. Look at request.data, contexts.request, and breadcrumbs. If any of them contains a recognizable member field, the emitter-side redaction is incomplete.
2. Search your APM traces for anything matching a regex for common PHI patterns: email addresses, phone numbers, numeric member IDs. If anything returns, request bodies are flowing into spans.
3. Open every Next.js error.tsx file and check whether it renders error.message or error.stack to the client. If yes, replace it with the digest-only pattern above.
4. Run grep -r "console.error\|console.log" src/ and audit every hit. console.error in a server route ships to your serverless host's log aggregator; if that aggregator is not BAA-covered, you have a PHI leak every time that line fires.
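The grep check can also run as a script in CI so new console.* calls fail the build. A sketch using only Node's standard library, demonstrated against a throwaway directory standing in for src/:

```typescript
import { readdirSync, readFileSync, statSync, mkdtempSync, writeFileSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

// Recursively collect files under a directory and flag console.* log calls.
function findConsoleCalls(dir: string): { file: string; line: number }[] {
  const hits: { file: string; line: number }[] = [];
  for (const entry of readdirSync(dir)) {
    const full = join(dir, entry);
    if (statSync(full).isDirectory()) {
      hits.push(...findConsoleCalls(full));
      continue;
    }
    readFileSync(full, "utf8").split("\n").forEach((text, i) => {
      if (/console\.(log|error|warn|info)\(/.test(text)) {
        hits.push({ file: full, line: i + 1 });
      }
    });
  }
  return hits;
}

// Demo against a temp directory containing one offending route file.
const dir = mkdtempSync(join(tmpdir(), "audit-"));
writeFileSync(join(dir, "route.ts"), 'console.error("charge failed", member);\n');
const hits = findConsoleCalls(dir);
console.log(hits); // flags route.ts, line 1
```

A regex scan is cruder than a lint rule but has no dependencies, which makes it easy to drop into any CI pipeline as a fail-fast gate.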
If all four pass, the application log surface is reasonable. If any fail, the fix is small and local, and it compounds across the app. For teams operating a Next.js healthcare stack on a small budget, the tracking-and-observability-side checks in the DTC Stack Audit overlap with this pattern on the analytics surface; most of what leaks to Sentry is also leaking to the analytics vendor. The cluster hub at the regulated DTC healthcare engineering playbook covers the architectural context around all of this.
FAQ
Is Vercel's log output covered by a BAA?
Vercel offers a BAA on Enterprise plans. Hobby and Pro plans do not include one. If your Next.js app is on a non-BAA plan, any console.log or console.error from a server route is flowing into Vercel's log system without BAA coverage. Either move to a plan with a BAA, route logs to a BAA-covered sink via the runtime logger instead of console, or eliminate PHI from the log path entirely via the redact-at-emitter pattern above.
Can I use Sentry with PHI if I have their BAA?
Sentry offers a BAA on Business and Enterprise plans per their public documentation, and it covers the core ingestion pipeline. The practical answer is that redaction is still required because a BAA does not reduce your minimum-necessary obligation. The BAA makes Sentry a business associate; it does not authorize you to ship arbitrary PHI through them. Verify the current state of Sentry's BAA with their legal team before relying on it.
What about server-side exception logs written to stdout in serverless functions?
These go to your hosting provider's log aggregator (Vercel, AWS CloudWatch, Cloud Run). The aggregator's BAA status governs whether PHI is safe there. Even with a BAA, the redact-at-emitter pattern applies. A good default is to wrap all server logging in a structured logger that redacts before emitting, and to treat console.* as forbidden in the regulated application.
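One mechanical way to enforce that ban is a lint rule. A config fragment using ESLint's built-in no-console rule, shown here in flat-config form (adapt the file glob to your layout; this is a sketch, not a complete ESLint setup):

```js
// eslint.config.mjs (fragment)
export default [
  {
    files: ["src/**/*.{ts,tsx}"],
    rules: {
      // console.* is forbidden in application code; the structured,
      // redacting logger is the only sanctioned emitter.
      "no-console": "error",
    },
  },
];
```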
How do I debug a production incident if the error payload has no PHI in it?
The error ID is the key. You look up the ID in your own BAA-covered error context store and see the full payload there, authenticated with the access controls you configured. The log stream tells you which IDs fired and at what rate; the detail lookup is a separate authenticated step. This feels slower for the first few incidents and becomes second nature after a week.
Does Next.js's built-in error.digest leak anything sensitive?
The digest is a short hash generated server-side by Next.js and is intentionally opaque. It does not contain request data, stack content, or user identifiers. It is safe to render to the client; that is what it is designed for. Your own error context store is where the rich lookup lives.
Should I log anonymized member IDs or no member reference at all?
If you need to correlate errors to members for support, log a deterministic hash of the member ID (an HMAC with a server-only secret) rather than the raw ID. The hash can be matched back to a member only inside your system, where the key lives, which lets support trace an error to a member without putting the raw identifier into the log pipeline. A per-tenant HMAC key keeps the hash space separate per customer.
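A minimal sketch of that hint using Node's built-in crypto module (the key name and 16-character truncation are illustrative choices, not requirements):

```typescript
import { createHmac } from "node:crypto";

// Deterministic member hint: same member + same key -> same hint,
// but the raw member_id never enters the log pipeline.
// The key is a server-only secret (per-tenant in multi-tenant setups).
function memberHint(memberId: string, key: string): string {
  return createHmac("sha256", key).update(memberId).digest("hex").slice(0, 16);
}

const hint = memberHint("member_12345", "tenant-a-secret");
console.log(hint); // stable 16-hex-char hint, safe to log
console.log(memberHint("member_12345", "tenant-b-secret") === hint); // false: per-tenant keys partition the hint space
```

Support can recompute the hint for a specific member and search logs for it, but nobody reading the log stream can go the other way without the key.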
Sources and specifics
- HIPAA Security Rule: 45 CFR 164.312(b) audit controls, 45 CFR 164.502(b) minimum necessary.
- Sentry BAA availability is documented by Sentry on Business and Enterprise plans as of 2025; verify current state before relying.
- Vercel BAA availability is documented on Enterprise plans as of 2025; verify current state with Vercel's legal team.
- Code examples are Next.js 15 with the App Router; the patterns hold on Next.js 14 with minor import path differences.
- Patterns observed across multiple regulated Next.js engagements, 2024-2026; no client-specific log pipelines, retention windows, or PHI examples are included in this post.
- Nothing in this post is legal or compliance advice. Engage counsel and a qualified compliance reviewer before applying these patterns in production.
