
16 / WEB-AUDIO / 2026-04-11

AI Michael

Talk to a Michael clone in a terminal. He routes you to the right page.

How it works

// 01 Input and streaming

The frontend is a contenteditable input wired to a streaming fetch call. When you send a message, the request goes to an Edge runtime API route that calls Claude via the Anthropic SDK with `stream: true`. Tokens arrive as server-sent events and append to the transcript as they stream, which is what produces the live typing feel.
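The streaming path described above can be sketched as a small client-side loop. The event and field names below follow Anthropic's SSE streaming format (`content_block_delta` events carrying `text_delta` payloads); the `/api/chat` route and function names are hypothetical, not the site's actual code.

```typescript
// Sketch: pull text deltas out of one SSE chunk streamed back from the
// Edge route. Event/field names follow the Anthropic streaming format;
// everything else here is an assumed shape, not the production code.
function extractTextDeltas(sseChunk: string): string {
  let out = "";
  for (const line of sseChunk.split("\n")) {
    if (!line.startsWith("data: ")) continue; // only SSE data lines carry payloads
    const payload = line.slice("data: ".length).trim();
    try {
      const event = JSON.parse(payload);
      if (event.type === "content_block_delta" && event.delta?.type === "text_delta") {
        out += event.delta.text; // token to append to the transcript
      }
    } catch {
      // a network chunk boundary can split a JSON line; skip and wait for the rest
    }
  }
  return out;
}

// Client loop: read the response body and hand each delta to the UI,
// which is what produces the live typing feel.
async function streamChat(message: string, onToken: (t: string) => void): Promise<void> {
  const res = await fetch("/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message }),
  });
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    onToken(extractTextDeltas(decoder.decode(value, { stream: true })));
  }
}
```

The parser is deliberately tolerant of partial JSON, because a fetch chunk boundary is not guaranteed to land on an SSE line boundary.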

// 02 System prompt assembly

The system prompt is assembled at request time from three sources: a static Michael context block (my work history, technical stack, and tone guidelines), a dynamic product and page index pulled from the site's content layer, and a short set of routing rules that tell the model when to recommend a specific URL instead of answering in-line. The product index updates whenever the content changes, so the agent always has an accurate picture of what is currently for sale.
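Assembling the prompt from those three sources is a pure string-building step. A minimal sketch, with hypothetical names and a hypothetical page-index shape (the CAPI Leak Report title comes from this article; its URL here is an assumption):

```typescript
// Assumed shape of one entry in the site's content-layer page index.
interface PageEntry {
  title: string;
  url: string;
  summary: string;
}

// Static Michael context block and routing rules; wording here is
// illustrative, not the real prompt.
const MICHAEL_CONTEXT = "You are a Michael clone. [work history, technical stack, tone guidelines]";
const ROUTING_RULES =
  "When a page on the site answers the question better than an in-line reply, recommend its URL.";

// Build the system prompt at request time from the three sources the
// article names: static context, dynamic page index, routing rules.
function buildSystemPrompt(pages: PageEntry[]): string {
  const index = pages.map((p) => `- ${p.title} (${p.url}): ${p.summary}`).join("\n");
  return [MICHAEL_CONTEXT, `Site index:\n${index}`, ROUTING_RULES].join("\n\n");
}
```

Because the index is rebuilt from the content layer on each request, a new product page shows up in the agent's picture without any prompt redeployment.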

// 03 History window

Each turn sends the last four exchanges as history. Sending the full history would blow up the context window on long conversations and slow responses. Sending only the latest turn would make the agent forget what you just said. Four turns is a compromise that works for the kinds of questions visitors actually ask, which are usually two to three exchanges deep before a decision is made.
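The four-exchange window is a slice over the transcript before each request. A sketch under assumed types (one exchange = one user turn plus one assistant turn):

```typescript
// Assumed message shape sent to the Edge route.
interface Turn {
  role: "user" | "assistant";
  content: string;
}

// Keep only the last N exchanges; 4 is the compromise the article lands on
// between context-window cost and the agent forgetting what was just said.
function truncateHistory(turns: Turn[], exchanges = 4): Turn[] {
  return turns.slice(-exchanges * 2); // 2 turns per exchange
}
```

On a long conversation this caps the history payload at eight messages regardless of how far back the transcript goes.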

OBJECT / ai_michael.chat / ANTHROPIC

// why this exists

real conversational routing, not a chatbot toy

Most site chat widgets are the same sad artifact. A bouncing icon in the corner, a lead capture form pretending to be a conversation, and a reply from a queue somewhere that arrives the next day. This widget is the opposite of that. It is a terminal-styled chat interface backed by Claude, loaded with context about my work, products, and case studies, and wired to route visitors to the page on this site that actually answers their question.

Ask it what I do. Ask it about the CAPI Leak Report. Ask it whether I can help with a Shopify migration. Ask it for a case study of tracking work I've done in healthcare. The responses are generated by Claude using a system prompt that includes my professional context, the current product ladder, and an index of the site's pages. When the response includes a recommendation, it links to the page directly, so the chat functions as an intelligent routing layer on top of the normal site nav.

Why terminal chrome instead of a normal chat bubble? Because the visual vocabulary matches the rest of the site, and because a terminal implies a tool rather than a salesperson. The typing animation is not a gimmick; it is paced to match a real Claude streaming response, so the interaction feels like using a real system rather than watching a canned reply unfold. The prompt cursor is the same pink as the brand signal color. The monospace face is DM Mono, set against a dark panel that shares tokens with every other terminal on the site.

The backing model is Claude via the Anthropic SDK. Each conversation carries a short history (the last few turns) plus a system prompt with my context. No memory across sessions, no user account required, no data stored. The conversation ends when you close the tab. That is a deliberate choice. I would rather be useful in the moment than harvest your messages for later.

If you ask something outside my expertise (tax advice, legal advice, medical advice, generic code help), the agent says so and redirects rather than making something up. That guardrail is in the system prompt. It is the difference between a useful assistant and a liability.
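The guardrail itself is just a clause appended to the system prompt. The category list comes from this article; the wording and function name below are illustrative, not the real prompt text:

```typescript
// Topics the agent should decline rather than improvise on (from the article).
const OUT_OF_SCOPE = ["tax advice", "legal advice", "medical advice", "generic code help"];

// Render the refusal-and-redirect instruction appended to the system prompt.
// Phrasing is a hypothetical sketch of the guardrail, not the live prompt.
function guardrailClause(topics: string[]): string {
  return (
    `If asked about ${topics.join(", ")}, or anything else outside Michael's ` +
    `expertise, say so plainly and point the visitor to a better resource ` +
    `instead of guessing.`
  );
}
```

Putting the refusal in the prompt rather than in post-processing means the model declines in its own voice instead of hitting a hard filter mid-reply.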

Frequently asked questions

Is this a real person?

No. It is Claude answering as a Michael clone, prompted with my public context and tuned to route you to the right page. For direct contact, use the contact page.

What model is running this?

Claude via the Anthropic SDK. The exact model version updates when new models release and prove stable on the benchmarks I care about for this use case.

Does my conversation get stored?

No. Conversations exist only in the current browser session. Nothing is logged server-side, nothing is associated with an identity, nothing persists past the tab close.

Can this agent do real tasks like book a meeting?

Not yet. Right now it is a routing and context agent. Tool use (booking, quoting, CRM updates) is a reasonable next addition but intentionally out of scope for this version.

Why terminal chrome instead of a normal chat bubble?

Because it looks like a tool, not a salesperson. The visual cue matters. A terminal implies curiosity. A bubble implies lead capture.

How is this different from a GPT site search?

GPT site search answers in-line from a vector index. This agent answers conversationally and, when appropriate, links to a specific page that goes deeper than an answer can. It is a routing layer, not a replacement for the content.

What happens if I ask something off-topic?

The system prompt instructs the agent to decline non-Michael topics (tax, legal, medical, generic code debugging) and redirect. It will not hallucinate a confident answer outside its scope.

Can you build this for my site?

Yes. The pattern is straightforward. The work is in the system prompt design, the page indexing, and the routing rules. It is a paid engagement, not a template.

What about voice input?

The widget is text-first. Web Speech API integration is on the list. Voice adds complexity around permissions, mobile behavior, and accessibility that I wanted to keep out of this version.