What We Gained Building Umbraco AI on Microsoft.Extensions.AI and the Microsoft Agent Framework


When we set out to build AI capabilities for Umbraco, we had a choice every platform team faces: build your own abstractions, or bet on someone else’s.

We bet on Microsoft’s. Specifically, Microsoft.Extensions.AI (M.E.AI) for chat clients, embeddings, and tool calling, and the Microsoft Agent Framework (MAF) for agent orchestration. This post is about what that bet gave us — how the frameworks compose, where they’re load-bearing, and the patterns that fell out along the way.

IChatClient as the Foundation

M.E.AI gives you IChatClient — a single interface for chat completions across any provider. OpenAI, Anthropic, Google, Azure AI Foundry — they all converge to the same type.

The real value isn’t provider swapping though. It’s the decorator pattern. IChatClient is designed to be wrapped, and we built our entire feature set on that:

builder.AIChatMiddleware()
    .Append<AIOpenTelemetryChatMiddleware>()
    .Append<AIFileProcessingChatMiddleware>()
    .Append<AIChatOptionsOverrideChatMiddleware>()
    .Append<AIRuntimeContextInjectingChatMiddleware>()
    .Append<AIFunctionInvokingChatMiddleware>()
    .Append<AIGuardrailChatMiddleware>()
    .Append<AITrackingChatMiddleware>()
    .Append<AIUsageRecordingChatMiddleware>()
    .Append<AIAuditingChatMiddleware>()
    .Append<AIContextInjectingChatMiddleware>();

Every AI feature in Umbraco.AI is a middleware layer. Ten layers, and none of them know about each other. Guardrails don’t import file processing. Auditing doesn’t reference guardrails. They compose because they all speak IChatClient.

Ordering matters — first appended = innermost (closest to the provider), last = outermost. File processing runs before function invocation so extracted text is available to tools.
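To make the wrapping concrete, here's a minimal sketch of what one such layer can look like, built on M.E.AI's `DelegatingChatClient` base class (assuming a recent Microsoft.Extensions.AI release; `TimingChatMiddleware` is a hypothetical name, not one of the layers above):

```csharp
using System.Diagnostics;
using Microsoft.Extensions.AI;

// Hypothetical middleware sketch: times each chat call, then delegates
// to the next (inner) layer. A real layer would typically also override
// GetStreamingResponseAsync to cover streaming calls.
public class TimingChatMiddleware(IChatClient innerClient)
    : DelegatingChatClient(innerClient)
{
    public override async Task<ChatResponse> GetResponseAsync(
        IEnumerable<ChatMessage> messages,
        ChatOptions? options = null,
        CancellationToken cancellationToken = default)
    {
        var sw = Stopwatch.StartNew();
        try
        {
            // Delegate down the chain toward the provider.
            return await base.GetResponseAsync(messages, options, cancellationToken);
        }
        finally
        {
            Console.WriteLine($"Chat call took {sw.ElapsedMilliseconds} ms");
        }
    }
}
```

Because every layer both consumes and exposes IChatClient, a layer like this can slot in at any position in the chain without the others changing.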

Meanwhile, provider code stays trivially small. Five providers, same shape. Here’s our OpenAI chat capability in its entirety:

public class OpenAIChatCapability(OpenAIProvider provider)
    : AIChatCapabilityBase<OpenAIProviderSettings>(provider)
{
    protected override IChatClient CreateClient(
        OpenAIProviderSettings settings, string? modelId)
        => OpenAIProvider.CreateOpenAIClient(settings)
            .GetResponsesClient(modelId ?? "gpt-4o")
            .AsIChatClient();
}

Create the SDK client, call .AsIChatClient(), done. All the complexity lives in the middleware, not the providers.
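For comparison, M.E.AI itself ships a builder with some stock decorators built in. A hedged sketch, assuming the Microsoft.Extensions.AI package (`providerClient` stands in for whatever IChatClient a capability creates):

```csharp
using Microsoft.Extensions.AI;

// Sketch: composing M.E.AI's stock middleware over any provider client.
IChatClient client = new ChatClientBuilder(providerClient)
    .UseOpenTelemetry()        // built-in tracing decorator
    .UseFunctionInvocation()   // built-in tool-call loop decorator
    .Build();
```

The Umbraco.AI middleware pipeline above is the same composition idea, just with a richer, attribute-discovered set of layers.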

From Chat Client to Agent

This is where MAF enters the picture. MAF’s ChatClientAgent accepts an IChatClient — which means our entire middleware stack comes along for free:

var chatClient = await _chatClientFactory.CreateClientAsync(profile);

return new ChatClientAgent(
    chatClient,
    instructions: agent.Instructions,
    name: agent.Name,
    description: agent.Description,
    tools: tools);

Agents get guardrails, auditing, context injection, file processing — without the agent code knowing any of it exists. We didn’t have to rebuild for agents. The M.E.AI investment carried forward because MAF builds on the same primitives.

On top of MAF’s ChatClientAgent, we add a ScopedAIAgent decorator for per-execution runtime context. Three layers, clean delegation: IChatClient (M.E.AI) → ChatClientAgent (MAF) → ScopedAIAgent (ours). Each does one thing and delegates to the next.
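MAF exposes a `DelegatingAIAgent` base class for exactly this kind of wrapping. A rough sketch of the idea (the class name and context-prepending logic here are hypothetical, not the actual ScopedAIAgent implementation, and exact signatures may shift across MAF preview releases):

```csharp
using System.Linq;
using Microsoft.Agents.AI;
using Microsoft.Extensions.AI;

// Hypothetical sketch: stamps per-execution runtime context onto each
// run, then delegates to the wrapped agent.
public class ScopedAgentSketch(AIAgent inner, Func<ChatMessage> contextFactory)
    : DelegatingAIAgent(inner)
{
    public override Task<AgentRunResponse> RunAsync(
        IEnumerable<ChatMessage> messages,
        AgentThread? thread = null,
        AgentRunOptions? options = null,
        CancellationToken cancellationToken = default)
        // Prepend the runtime context message, then delegate inward.
        => base.RunAsync(
            messages.Prepend(contextFactory()),
            thread, options, cancellationToken);
}
```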

For more complex scenarios, we use MAF’s Microsoft.Agents.AI.Workflows to compose multiple agents into workflow graphs. Workflows are auto-discovered via attributes — the same pattern we use for providers, tools, and middleware. This is still early (MAF Workflows is at 1.0.0-rc3), but building on it now means orchestrated agents will mature alongside the framework.

Tools: Define Once, Run Everywhere

M.E.AI’s AIFunction gives you a standardised way to define tool capabilities. In Umbraco.AI, tools are simple C# classes with typed arguments:

public record InventoryArgs(
    [property: Description("The product SKU to look up")] string Sku);

[AITool("lookup_inventory", "Lookup Inventory")]
public class InventoryTool : AIToolBase<InventoryArgs>
{
    public override string Description =>
        "Looks up current inventory for a product by SKU.";

    private readonly IProductService _productService;

    public InventoryTool(IProductService productService)
        => _productService = productService;

    public override async Task<object> ExecuteAsync(
        InventoryArgs args, CancellationToken ct)
        => await _productService.GetStockAsync(args.Sku, ct);
}

M.E.AI infers the JSON schema from TArgs and its [Description] attributes — no manual schema authoring. Tools are auto-discovered at startup, converted to M.E.AI AIFunction instances, and work identically whether invoked by a chat client, a MAF agent, or an orchestrated workflow. Third-party packages can ship tools as NuGet drops — the attribute is all that’s needed.

Tool execution is handled by M.E.AI’s FunctionInvokingChatClient, registered as middleware. Because it’s middleware, it composes with everything else: guardrails evaluate tool results, auditing captures invocations, tracking records token usage across the full tool loop.
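Stripped of the Umbraco.AI conveniences, the same tool shape can be expressed with raw M.E.AI primitives. A hedged sketch (`inventoryService` and `client` are hypothetical stand-ins; `client` is assumed to have UseFunctionInvocation in its pipeline):

```csharp
using System.ComponentModel;
using Microsoft.Extensions.AI;

// AIFunctionFactory infers the JSON schema from the delegate's
// parameters and their [Description] attributes.
AIFunction lookup = AIFunctionFactory.Create(
    ([Description("The product SKU to look up")] string sku)
        => inventoryService.GetStockAsync(sku),
    name: "lookup_inventory",
    description: "Looks up current inventory for a product by SKU.");

var response = await client.GetResponseAsync(
    "How many units of SKU ABC-123 are in stock?",
    new ChatOptions { Tools = [lookup] });
```

The AIToolBase pattern above generates the same AIFunction shape, but adds DI, discovery, and typed argument records on top.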

The Value

The biggest payoff has been transferable skills. If you’ve used M.E.AI’s IChatClient, you already understand our middleware. If you’ve used MAF’s ChatClientAgent, you already understand our agents. That’s not a coincidence — our types are their types. We deliberately avoided proprietary wrappers, which means anything the M.E.AI ecosystem produces — middleware, tooling, observability — works with Umbraco.AI out of the box. The investment compounds in both directions.

Beyond that:

  1. IChatClient is more than a provider abstraction. The decorator pattern is the real value. Design features as middleware from the start.

  2. MAF builds on M.E.AI, not beside it. If you’ve invested in an M.E.AI pipeline, MAF extends it rather than replacing it.

  3. Middleware ordering is your architecture. Get this right early — changing it later changes every AI request.

  4. Let M.E.AI generate your tool schemas. Typed arguments with [Description] attributes beat hand-written JSON Schema every time.

One honest caveat: you’re coupling to frameworks that are still maturing. MAF is at 1.0.0-rc3. When the framework has a gap, you’re working around it rather than owning the fix. For us, the velocity we gain far outweighs the upgrade friction — but it’s a trade-off you should make with eyes open.

M.E.AI and MAF gave us a foundation that scaled from “add a chat box” to “multi-agent orchestration with guardrails, auditing, and tool execution” — and the abstractions held the whole way.

Until next time 👋