Solving Port Conflicts in AI-Assisted Development: HTTP over Named Pipes
The way I work day to day has changed quite a lot over the last 6 months.
AI coding assistants like Claude Code aren’t just speeding up individual tasks anymore — they’ve changed how I structure my work entirely. It’s now completely normal for me to have multiple AI sessions running in parallel, each focused on a different feature or experiment.
To support that, I lean heavily on git worktrees. One worktree per feature, one AI session per worktree, and everything stays nicely isolated. It’s a great setup… right up until each of those worktrees needs to run a local demo site.
At that point, reality kicks in. Two processes still can’t listen on the same port, and suddenly this otherwise clean parallel workflow starts falling over something as mundane as localhost.
The Port Discovery Problem
This became particularly obvious when generating OpenAPI clients.
At Umbraco, we use hey-api to generate OpenAPI clients from Swagger definitions exposed by a demo site. In a single-worktree world, this is trivial: start the site, point the generator at https://localhost:12345, and you’re done.
{
  "scripts": {
    "generate-client": "node scripts/generate-client.js https://localhost:12345/umbraco/swagger/<package>/swagger.json"
  }
}
In a multi-worktree, AI-assisted setup, things get more interesting. You might have one Claude instance working on a feature in a feature-a worktree, another instance in feature-b, and both need to spin up the demo site to generate clients. Each site quite reasonably binds to a dynamic port — but now the tooling has to answer a deceptively simple question:
Which port is this demo site actually using?
The usual approaches don’t scale particularly well. Fixed ports break immediately. Port ranges require coordination and still collide. Parsing console output works until someone changes a log line. Environment variables help, but they don’t persist cleanly across sessions and tools — especially once AI gets involved.
What I really wanted was something simpler: each worktree should be able to start a demo site, and any tool — human-driven or AI-driven — should be able to discover how to talk to it without manual wiring or shared state.
HTTP Over Named Pipes
The breakthrough came when I stopped thinking about the problem as “how do I find the port?” and started thinking about “how do I avoid needing the port at all?”
That’s when named pipes entered the picture.
To be honest, I’d never even heard of named pipes before this, but they’re essentially a way for local processes to communicate via a named endpoint rather than a TCP port — which, once you realise HTTP can run over them, makes them a surprisingly good fit for this problem.
The inspiration for this approach came from an excellent blog post by Andrew Lock on using named pipes with ASP.NET Core and HttpClient. He walks through how HTTP traffic can be sent over local named endpoints instead of TCP, and once I’d read that, the solution to this problem more or less snapped into focus.
Instead of teaching tools how to discover a dynamic port and then make an HTTP request, you can give them a stable, predictable endpoint that doesn’t involve ports at all.
How the Pieces Fit Together
In practice, the setup is fairly straightforward.
The demo site still listens on HTTP/HTTPS using a dynamic port chosen by the OS. That doesn’t change. But alongside that, it also listens on a named pipe whose name is derived from the git context — typically the worktree or branch name.
Conceptually, it looks something like this:
- HTTPS on 127.0.0.1 using a dynamic port
- A named pipe like umbraco.demosite.feature-a
Each worktree gets its own pipe. The name is predictable, but the instance behind it is completely isolated.
On the server side, Kestrel is configured to listen on both endpoints when running in development:
public void Configure(KestrelServerOptions options)
{
    // Only add the extra endpoints when running locally.
    if (!hostEnvironment.IsDevelopment())
        return;

    // A named pipe whose name is derived from the git context.
    options.ListenNamedPipe($"umbraco.demosite.{GetUniqueIdentifier()}");

    // HTTPS on a dynamic port (port 0 lets the OS pick).
    options.Listen(IPAddress.Loopback, 0, o => o.UseHttps());
}
The important detail here is GetUniqueIdentifier(). As long as that maps cleanly to the git worktree or branch name, both humans and tools can reason about it.
On the client side, things get much simpler than they used to be. Instead of parsing output or juggling environment variables, the tooling derives the pipe name from git and connects directly:
const http = require('http');
const identifier = getGitIdentifier();
const pipeName = `umbraco.demosite.${identifier}`;
// fetch() can't target a named pipe, but Node's http.get can via socketPath.
http.get({
  socketPath: `\\\\.\\pipe\\${pipeName}`,
  path: '/umbraco/swagger/<package>/swagger.json'
}, res => {
  let spec = '';
  res.on('data', chunk => (spec += chunk));
  res.on('end', () => { /* hand the spec to the client generator */ });
});
For tools that genuinely need the public HTTPS address, the site exposes a tiny discovery endpoint over the pipe that returns it as plain text:
endpoints.MapGet("/site-address", async context =>
{
    await context.Response.WriteAsync(GetHttpsAddress());
});
No metadata, no ceremony — just the thing the tool actually needs.
Why This Works Well
Once this was in place, a few nice properties fell out almost immediately.
First, true parallel development becomes boringly reliable. Multiple developers, multiple worktrees, multiple AI sessions — all running demo sites at the same time without any configuration or coordination.
Second, the tooling got smaller and more robust. A whole class of fragile “find the port” logic simply disappeared.
And perhaps most interestingly, this turned out to be a really good fit for AI-driven workflows. The rules are simple, predictable, and derivable from context. There’s no shared state to manage, and nothing for an AI assistant to ask a human about.
Looking Forward
As AI assistants become a normal part of how we build software, I think we’re going to have to revisit a lot of assumptions baked into our tooling. Many of them quietly assume a single developer, a single terminal, and a single running instance.
HTTP over named pipes isn’t a silver bullet, but it’s been a surprisingly elegant solution to a very real problem for us. I suspect there are plenty of similar “obvious in hindsight” patterns waiting to be rediscovered as our workflows continue to evolve.
The interesting question is: what other parts of our development process are still assuming a world that no longer exists?
Until next time 👋