Introducing Umbraco AI
If you caught the Umbraco Winter keynote webinar, you’ll have seen us announce something I’ve been working on for a while now - Umbraco AI. I’ve been quietly obsessing over this for the last month, and I’m really excited to finally talk about it properly. So let me dig a bit deeper into what it is, how it’s structured, and where we’re planning to take it.
What is Umbraco AI?
At its core, Umbraco AI is a provider-agnostic AI integration layer for Umbraco CMS. Rather than tying you to a single AI service, it provides a unified way to connect to multiple AI providers - OpenAI, Anthropic (Claude), Google Gemini, Amazon Bedrock, Microsoft AI Foundry and more - all through a consistent API. You want to switch from OpenAI to Claude tomorrow? Change a profile setting and you’re done. Total flexibility.
We’ve built Umbraco AI on top of Microsoft.Extensions.AI (M.E.AI), which is Microsoft’s new standard abstraction for AI services in .NET. Instead of creating our own proprietary types and abstractions, we expose the M.E.AI types directly. If you’re already familiar with IChatClient, ChatMessage, and ChatResponse, you’ll feel right at home. We just add the Umbraco-specific features on top - backoffice management, audit logging, usage analytics, and so on.
The Architecture
The architecture follows a hierarchical configuration model that I think makes a lot of sense once you see it: Providers expose Capabilities, Connections hold your credentials for a provider, Profiles combine a connection with model settings, and Contexts layer additional knowledge on top.
Let me break that down:
Providers
Providers are installable NuGet packages that add support for specific AI services. When you install a provider package (like Umbraco.AI.OpenAI), it’s automatically discovered and registered. Each provider can expose multiple capabilities - OpenAI, for example, supports both Chat and Embedding operations.
We currently ship providers for:
- OpenAI - GPT-4, GPT-4o, embeddings
- Anthropic - Claude models
- Google - Gemini models
- Amazon Bedrock - Multi-model access
- Microsoft AI Foundry - Azure AI Services
You aren’t just tied to these providers though. Umbraco AI is designed so anyone can build their own provider as a NuGet package and plug it straight into the system.
If you have an internal AI service, want to support a new vendor, or want to experiment with an emerging model, you can implement a provider using Microsoft.Extensions.AI and expose the capabilities you need. Once installed, it behaves just like a built-in provider — discoverable, configurable in the backoffice, and usable from profiles.
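As a rough sketch, the heart of a custom provider is simply an IChatClient implementation. The InternalChatClient below and its echo behaviour are invented for illustration, and the registration plumbing that surfaces a provider in the backoffice is omitted:

```csharp
using Microsoft.Extensions.AI;

// Minimal sketch of a custom chat client for an internal AI service.
public sealed class InternalChatClient : IChatClient
{
    public Task<ChatResponse> GetResponseAsync(
        IEnumerable<ChatMessage> messages,
        ChatOptions? options = null,
        CancellationToken cancellationToken = default)
    {
        // A real implementation would call your internal service here;
        // this stand-in just echoes the last message back.
        var last = messages.LastOrDefault()?.Text ?? string.Empty;
        var reply = new ChatMessage(ChatRole.Assistant, $"(internal) {last}");
        return Task.FromResult(new ChatResponse(reply));
    }

    public IAsyncEnumerable<ChatResponseUpdate> GetStreamingResponseAsync(
        IEnumerable<ChatMessage> messages,
        ChatOptions? options = null,
        CancellationToken cancellationToken = default)
        => throw new NotSupportedException("Streaming is left out of this sketch.");

    public object? GetService(Type serviceType, object? serviceKey = null) => null;

    public void Dispose() { }
}
```

Because the contract is just M.E.AI’s IChatClient, everything else in the pipeline - profiles, middleware, audit logging - works with your provider the same way it works with the built-in ones.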
We’re actively hoping the community will build and share providers as new AI services appear. The architecture is designed for extension, not lock-in.
Capabilities
Capabilities represent the types of AI operations a provider supports. Currently we have:
- Chat - Conversational AI and text generation
- Embedding - Vector representations for semantic search
We have plans for more - image generation, content moderation, and others - but we wanted to get the foundations solid first.
Connections
A Connection stores your authentication details for a provider - typically an API key. You can have multiple connections to the same provider, which is useful for separating dev/prod environments or different team budgets.
One feature I’m quite pleased with is configuration references. If you prefix a value with $, it resolves the setting from your application configuration rather than being stored in the database. So you can set your API key field to $OpenAI:ApiKey and it pulls from appsettings.json or environment variables.
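For example, with a configuration reference of $OpenAI:ApiKey, the key might live in appsettings.json like this (a minimal sketch; the section names simply mirror the reference):

```json
{
  "OpenAI": {
    "ApiKey": "sk-your-key-here"
  }
}
```

Setting the connection’s API key field to $OpenAI:ApiKey resolves that value at runtime, so the secret never lands in the database. The same reference also picks up an environment variable named OpenAI__ApiKey, following the standard .NET configuration key mapping.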
Profiles
Profiles are where you configure how you want to use AI for a specific purpose. A Profile combines a Connection with model-specific settings - which model to use, temperature, max tokens, system prompt, and so on.
The idea is that you define profiles for different use cases: “ContentSummarizer” might use GPT-4o-mini with a low temperature for consistent outputs, while “CreativeWriter” might use Claude with higher temperature for more varied responses. Your code then just asks for a profile by name or alias.
Contexts
Contexts allow you to inject additional knowledge into AI requests - things like brand guidelines, documentation, or domain-specific information that the AI should consider when responding. This is the foundation for RAG (Retrieval-Augmented Generation) scenarios.
Contexts can be attached at multiple levels and compose together:
- Profile-level - Base context that applies to all requests using that profile
- Content-level - Dynamically resolved based on which content node you’re working with
- Prompt-level - Specific knowledge for a particular prompt template
- Agent-level - Domain knowledge that guides an agent’s behavior
This composability means you can layer contexts in powerful ways. A content editor working on a blog post might get context from their “ContentWriter” profile (brand guidelines), the blog section they’re in (conversational tone), and the specific prompt they’re using (SEO best practices) - all combined automatically.
A Context can be as simple as a static text file or as dynamic as a query against your content database. When resolved, contexts are automatically injected into the system prompt, providing contextually-appropriate AI assistance without developers needing to manually manage prompt variations.
How It All Works Together
Here’s a simple example of how you’d use Umbraco AI in code:
```csharp
using Microsoft.Extensions.AI;

public class ContentService
{
    private readonly IAIChatService _chatService;

    public ContentService(IAIChatService chatService)
    {
        _chatService = chatService;
    }

    public async Task<string> SummarizeContentAsync(string content)
    {
        var messages = new List<ChatMessage>
        {
            new(ChatRole.System, "You are a helpful assistant that summarizes content."),
            new(ChatRole.User, $"Please summarize the following:\n\n{content}")
        };

        var response = await _chatService.GetChatResponseAsync(
            messages,
            profileAlias: "ContentSummarizer");

        return response.Message.Text ?? string.Empty;
    }
}
```
Notice how the code uses standard M.E.AI types throughout. The profileAlias parameter tells Umbraco AI which profile configuration to use, and it handles all the provider selection, authentication, and middleware execution behind the scenes.
The Add-on Ecosystem
Beyond the core AI layer, we’ve built several add-on packages that extend the functionality:
Umbraco.AI.Prompt
This add-on lets you define executable prompts that run directly inside the Umbraco backoffice.
Prompts can be exposed as property actions, allowing editors to generate values for a property with a single click — for example “Generate an SEO title” or “Generate alt text for this image”.
Prompts can reference property values using Mustache syntax and can optionally include the entire page as context, making them ideal for summarisation and metadata generation.
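As a sketch, a prompt template using those Mustache references might look like this (pageName and bodyText are hypothetical property aliases, chosen purely for illustration):

```
Write a concise SEO title for the page "{{pageName}}".

Page content:
{{bodyText}}

Keep the title under 60 characters.
```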
Umbraco.AI.Agent
This add-on lets you define AI agents that can perform complex, multi-step tasks.
Agents are built on top of the Microsoft Agent Framework (MAF), which extends the same Microsoft.Extensions.AI foundation used throughout Umbraco AI. That means agents naturally fit into the existing profiles, providers, and middleware pipeline, without introducing a separate abstraction model.
Agents can use tools (functions) to interact with your systems and decide when to call them as part of a task.
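Since agents build on Microsoft.Extensions.AI, a tool is ultimately just an AIFunction wrapped around an ordinary .NET method. A hedged sketch (GetPublishStatus and its behaviour are purely illustrative, not part of the package):

```csharp
using System.ComponentModel;
using Microsoft.Extensions.AI;

// Wrap the method as an AIFunction and offer it to the model via ChatOptions.
AIFunction tool = AIFunctionFactory.Create(ContentTools.GetPublishStatus);
var options = new ChatOptions { Tools = [tool] };

public static class ContentTools
{
    // Illustrative tool; a real implementation would query Umbraco.
    [Description("Gets the publish status of a content item by its id.")]
    public static string GetPublishStatus(int contentId)
        => contentId > 0 ? "Published" : "Unknown";
}
```

The Description attribute gives the model the information it needs to decide when the tool is relevant to the task at hand.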
For web-based access, agents are exposed via AG-UI, an HTTP-based protocol that enables real-time streaming of agent activity. This makes it straightforward to build rich web experiences and integrations on top of agents while keeping the core agent implementation clean and standards-based.
Umbraco.AI.Agent.Copilot
This is a frontend-only package that provides a chat sidebar UI for interacting with agents. Think of it as your AI assistant living in the Umbraco backoffice, ready to help with content tasks.
Extensibility
Extensibility is a core design goal of Umbraco AI. Rather than a single plugin point, the system exposes clear, well-defined extension points at every layer:
Providers – Build provider packages for new AI services using Microsoft.Extensions.AI. Once installed, they’re automatically discoverable in the backoffice and usable from profiles.
Middleware – Insert cross-cutting concerns like logging, caching, rate limiting, or policy enforcement into the request pipeline, using a model that mirrors ASP.NET Core middleware.
Tools – Define tools that agents can call to interact with your systems — querying data, triggering workflows, sending emails, or performing domain-specific actions.
Context Resource Types – Introduce new kinds of context sources. Whether that’s a CRM, a custom database, or an external knowledge base, implementing a resource type makes it available in the context editor.
Context Resolvers – Resolve additional, relevant context from the current request. Resolvers can determine things like which content node the request relates to and automatically surface the appropriate contexts without requiring explicit configuration.
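To illustrate the middleware extension point above, here’s a sketch built on M.E.AI’s DelegatingChatClient; how Umbraco AI registers middleware into its pipeline isn’t shown here, and the timing behaviour is just an example of a cross-cutting concern:

```csharp
using Microsoft.Extensions.AI;

// A sketch of cross-cutting middleware: wraps an inner chat client
// and measures how long each call takes.
public sealed class TimingChatClient(IChatClient inner) : DelegatingChatClient(inner)
{
    public override async Task<ChatResponse> GetResponseAsync(
        IEnumerable<ChatMessage> messages,
        ChatOptions? options = null,
        CancellationToken cancellationToken = default)
    {
        var stopwatch = System.Diagnostics.Stopwatch.StartNew();
        var response = await base.GetResponseAsync(messages, options, cancellationToken);
        Console.WriteLine($"Chat call took {stopwatch.ElapsedMilliseconds} ms");
        return response;
    }
}
```

Because the pattern mirrors ASP.NET Core middleware, each layer stays focused on a single concern and composes cleanly with the rest of the pipeline.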
Enterprise Features
We’ve thought about enterprise needs from the start:
Audit Logging tracks every AI operation with full request/response details, error categorization, and user context. When you need to answer “what prompts did we send to the AI and what did it respond with?”, you have the data.
Usage Analytics helps you understand and manage costs. Token usage is tracked per request, aggregated hourly, and rolled up daily. You can see which profiles are consuming the most tokens and optimize accordingly.
Version History keeps every AI entity fully versioned, allowing easy rollback and giving you confidence to iterate safely as prompts, agents, and profiles evolve.
What’s Next?
We have plans to expand Umbraco AI in several directions:
Automations Package - A new add-on that lets you trigger agents as part of automation workflows - imagine automatically generating alt text when images are uploaded, or summarizing content when it’s published.
Tool Permissions System - Fine-grained control over which tools agents can access, allowing you to safely expose agent capabilities to editors while maintaining security boundaries.
Testing Framework for Performing Evals - Built-in evaluation tools to test and measure AI behavior, ensuring consistent quality as you iterate on prompts, contexts, and agent configurations.
Richer RAG Support - We want to make it easier to build retrieval-augmented generation scenarios, where AI responses are grounded in your content and data.
Umbraco Deploy Integration - Seamless deployment of AI configurations (profiles, prompts, agents, contexts) across environments, treating AI entities as part of your standard deployment workflow.
As well as all this, we’d like to continue refining and polishing the prompt and copilot experiences, making them more intuitive and powerful for content editors.
Open Source & Community
One of the best things about Umbraco AI is that it’s also fully open source and available on GitHub at github.com/umbraco/Umbraco.AI.
This isn’t just about making the code available - we genuinely want this to be a community-driven project. Whether you want to build a custom provider for a new AI service, contribute bug fixes, suggest features, or improve documentation, we’d love to have you involved.
The repository includes full setup instructions in the README, so you can get a local development environment running and start exploring the codebase. If you’re interested in contributing, you’ll find contribution guidelines there as well.
We’re particularly excited about the potential for community-built providers. As new AI services emerge, the community can extend Umbraco AI’s reach faster than we ever could alone.
Getting Started
If you just want to have a play with Umbraco AI, it’s available to install from the NuGet Gallery. The basic setup is:
- Install the core package: Umbraco.AI
- Install a provider: Umbraco.AI.OpenAI
- Optionally install add-ons: Umbraco.AI.Prompt, Umbraco.AI.Agent, Umbraco.AI.Agent.Copilot
- Configure a connection in the backoffice
- Create profiles for your use cases
- Start using the IAIChatService in your code or via Prompt or Copilot in the backoffice
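From the command line, the install steps are the usual dotnet CLI routine (package IDs as listed above):

```
dotnet add package Umbraco.AI
dotnet add package Umbraco.AI.OpenAI

# Optional add-ons
dotnet add package Umbraco.AI.Prompt
dotnet add package Umbraco.AI.Agent
dotnet add package Umbraco.AI.Agent.Copilot
```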
Check out the GitHub repository for detailed setup instructions and documentation.
Wrapping Up
I’m genuinely excited about what Umbraco AI enables. The goal has always been to give you a solid, extensible foundation for AI integration that doesn’t lock you into a single vendor and doesn’t force you to learn proprietary abstractions. Whether you want to add simple content summarization or build sophisticated AI agents, Umbraco AI provides the building blocks.
This is just the beginning, and I’d love to hear what you think. What would you build with this? What features would make your life easier? Drop me a message on Mastodon, find me in the Umbraco community Slack, or leave a comment below.
Until next time 👋