Understanding Contexts in Umbraco AI

In my introductory post, I gave a high-level overview of how Umbraco AI is structured — providers, connections, profiles, and contexts. Of those, contexts are probably the most nuanced, so I want to dig into them properly. What they are, how they change what the AI produces, and how to use them effectively.

What Is a Context?

A context is a reusable container of knowledge that gets injected into AI requests. Think of it as background information you’d give a new colleague before asking them to write something — your brand guidelines, your target audience, the tone you’re going for, things to avoid, reference material. Without that briefing, they’d produce something generic. With it, they produce something that sounds like you.

That’s exactly what contexts do for AI. They shape the output without you having to repeat the same instructions every time.

A context is made up of one or more resources, each with a specific type. Out of the box, Umbraco AI ships with two resource types:

  • Text — Free-form text or markdown. Write whatever instructions you need.
  • Brand Voice — A structured format with fields for tone, target audience, style guidelines, and patterns to avoid.

You can mix and match these within a single context. A “Corporate Brand Voice” context might have a Brand Voice resource for tone and audience, plus a Text resource with specific formatting rules or terminology.

Context Resources

How Contexts Affect Results

To see the difference, let’s take a simple example. Say you ask the AI to write an introduction for a blog post about your new product.

Without context, you’ll get something perfectly competent but generic — the kind of copy that could belong to anyone.

With a Brand Voice context that says your tone is conversational and direct, your audience is developers, and you avoid marketing jargon, the same request produces something noticeably different. It’ll be shorter, more technical, and sound like it was written by someone who actually understands the audience.

Brand Voice Configuration

Behind the scenes, context resources are processed and injected into the system prompt — the instructions the AI receives before your actual request. The AI doesn’t see “context” as a concept; it just sees additional instructions that shape how it responds. The Brand Voice resource type, for example, formats its fields into clear directives:

ToneDescription: We are friendly, professional, and approachable. Use clear, simple language. Avoid jargon. Speak directly to the reader using 'you'. Keep sentences short and paragraphs focused.
TargetAudience: Web developers and content editors using Umbraco CMS. Our audience ranges from technical developers building sites to non-technical content editors managing day-to-day content. Write so both groups can understand.
StyleGuidelines: Use active voice. Lead with the benefit or outcome, not the feature. Use sentence case for headings. Prefer short paragraphs (2-3 sentences). Use bullet points for lists of three or more items. Write at a secondary school reading level.
AvoidPatterns: Marketing buzzwords (leverage, synergy, cutting-edge). Exclamation marks. Overly casual language (gonna, wanna). Passive voice where active is possible. Filler phrases (in order to, it is important to note that). Starting sentences with 'So' or 'Basically'.

Injection Modes

Each resource within a context has an injection mode that controls how it’s delivered to the AI:

  • Always — The resource content is included in the system prompt for every request. Use this for essential guidelines that should always apply, like brand voice or core instructions.
  • On Demand — The resource is listed as available, but the AI decides whether to retrieve it. The AI sees a name and description, and can call a tool to fetch the full content when it thinks it’s relevant. This is useful for larger reference material that isn’t needed on every request.

This distinction matters as your contexts grow. You don’t want to burn tokens stuffing the system prompt with a 5,000-word style guide on every request. Put the essentials on Always, and let the AI pull in the rest when it needs to.
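To make the split concrete, here's a minimal sketch of the behaviour described above. The types and names are invented for illustration (this is not the Umbraco AI API): "Always" resources get their full content inlined into the system prompt, while "On Demand" resources are only advertised by name and description for the AI to fetch later.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

public enum InjectionMode { Always, OnDemand }

public record ContextResource(string Name, string Description, string Content, InjectionMode Mode);

public static class SystemPromptBuilder
{
    public static string Build(string basePrompt, IReadOnlyList<ContextResource> resources)
    {
        var sb = new StringBuilder(basePrompt);

        // "Always" resources: full content goes straight into the system prompt.
        foreach (var resource in resources.Where(r => r.Mode == InjectionMode.Always))
            sb.Append("\n\n").Append(resource.Content);

        // "On Demand" resources: only a name and description are advertised;
        // the AI calls a tool to fetch the full content when relevant.
        var onDemand = resources.Where(r => r.Mode == InjectionMode.OnDemand).ToList();
        if (onDemand.Count > 0)
        {
            sb.Append("\n\nReference material available on request (fetch via tool call):");
            foreach (var resource in onDemand)
                sb.Append($"\n- {resource.Name}: {resource.Description}");
        }

        return sb.ToString();
    }
}
```

The token saving falls out naturally: the 5,000-word style guide costs a one-line listing per request instead of its full length.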

Injection Modes

Where You Can Assign Contexts

Contexts are standalone entities — you create them once and then assign them wherever they’re needed. There are four levels where contexts can be attached, and they compose together automatically.

Profiles

When you configure a profile, you can attach one or more contexts that apply to all AI requests using that profile. This is the broadest level — a “ContentWriter” profile might always include your brand voice context, ensuring every piece of content generated through that profile follows your guidelines.

Prompts

The Prompt add-on lets you attach contexts to individual prompt templates. A “Generate SEO Description” prompt might have a context with SEO best practices and keyword guidelines. These layer on top of whatever contexts the profile already provides.

Agents

Similarly, the Agent add-on lets you attach contexts to agent configurations. A “Legal Specialist” agent might have a context with regulatory requirements and approved legal language. The agent carries this knowledge into every conversation.

Content (The Context Picker)

This is the one I’m most pleased with. Umbraco AI ships with a Context Picker property editor that you can add to your document types, just like any other property. This lets editors — or more likely, architects — assign different contexts to different parts of the content tree.

Here’s where it gets interesting. The context picker inherits down the tree. If you set a context on your “Blog” node, every blog post underneath it automatically picks up that context. You don’t need to set it on every individual page. The resolver walks up the content tree from the current node until it finds a context picker with a value, and uses that.
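The walk-up behaviour can be sketched in a few lines. The node type and property shape here are invented for illustration (the real resolver is internal to Umbraco AI): starting from the current node, move towards the root and return the first populated context picker value found.

```csharp
using System;
using System.Collections.Generic;

public class ContentNode
{
    public string Name { get; init; } = string.Empty;
    public ContentNode? Parent { get; init; }

    // Value of a hypothetical context picker property on this node, if any.
    public IReadOnlyList<Guid>? AssignedContextIds { get; init; }
}

public static class ContextPickerResolver
{
    public static IReadOnlyList<Guid> Resolve(ContentNode? node)
    {
        // Walk up the tree until a node with a populated picker is found.
        for (var current = node; current is not null; current = current.Parent)
        {
            if (current.AssignedContextIds is { Count: > 0 } ids)
                return ids;
        }
        return Array.Empty<Guid>();
    }
}
```

A blog post with no picker value of its own resolves to whatever its nearest ancestor (the "Blog" node, say) has set.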

This means you can have genuinely different AI behaviour depending on where you are in the site:

  • Blog section → Conversational tone, informal language
  • Legal pages → Precise, formal language, regulatory compliance
  • Product pages → Feature-focused, technical but accessible
  • Help centre → Supportive, step-by-step, assumes no prior knowledge

All without changing a single line of code or reconfiguring any profiles. The content structure itself drives the AI’s behaviour.

The picker supports single or multiple selection, with optional min/max constraints, so you have full control over how it’s configured per document type.

Context Picker

How Contexts Compose

When an AI request is made, Umbraco AI runs through its context resolvers in order:

  1. Profile contexts — Base layer from the profile configuration
  2. Content contexts — Resolved from the content tree (walks up to find nearest)
  3. Prompt contexts — Added by the prompt being executed
  4. Agent contexts — Added by the agent handling the request

All resolved resources are merged together, deduplicated by resource ID (later resolvers can override earlier ones), and then separated by injection mode. The “Always” resources go into the system prompt. The “On Demand” resources are listed as available tools.
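The merge-and-override step described above can be sketched like this. The types are invented for illustration (the real pipeline is internal to Umbraco AI): resolver outputs arrive in order, and a later resolver's resource replaces an earlier one with the same ID.

```csharp
using System;
using System.Collections.Generic;

public record ResolvedResource(Guid Id, string Source, string Content);

public static class ContextComposer
{
    public static IReadOnlyList<ResolvedResource> Merge(
        params IEnumerable<ResolvedResource>[] resolverOutputs)
    {
        // Keyed by resource ID: re-assigning an existing key overwrites the
        // value, so later resolvers win for duplicate IDs.
        var merged = new Dictionary<Guid, ResolvedResource>();

        foreach (var output in resolverOutputs) // profile, content, prompt, agent
            foreach (var resource in output)
                merged[resource.Id] = resource;

        return new List<ResolvedResource>(merged.Values);
    }
}
```

After this merge, the resources would be split by injection mode as described earlier: "Always" into the system prompt, "On Demand" into the tool listing.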

This composability is powerful. An editor running a “Generate SEO Description” prompt on a blog post might end up with context from three sources simultaneously — brand voice from the profile, conversational tone from the blog section’s context picker, SEO guidelines from the prompt — and they all just work together.

Building Your Own Resource Types

The two built-in resource types (Text and Brand Voice) cover a lot of ground, but the system is designed to be extended. If you have a specific kind of knowledge that benefits from structured input, you can create your own resource type.

Here’s what that looks like. Say you want a “Product Specs” resource type that formats technical specifications for the AI:

// 1. Define the settings model
public class ProductSpecsResourceSettings
{
    [AIField]
    public string? ProductName { get; set; }

    [AIField(EditorUiAlias = "Umb.PropertyEditorUi.TextArea")]
    public string? KeyFeatures { get; set; }

    [AIField(EditorUiAlias = "Umb.PropertyEditorUi.TextArea")]
    public string? TechnicalSpecs { get; set; }

    [AIField(EditorUiAlias = "Umb.PropertyEditorUi.TextArea")]
    public string? TargetUseCases { get; set; }
}

// 2. Create the resource type
[AIContextResourceType("product-specs", "Product Specs",
    Description = "Technical product specifications and feature details",
    Icon = "icon-box")]
public class ProductSpecsResourceType
    : AIContextResourceTypeBase<ProductSpecsResourceSettings>
{
    public ProductSpecsResourceType(
        IAIContextResourceTypeInfrastructure infrastructure)
        : base(infrastructure) { }

    protected override string FormatDataForLlm(ProductSpecsResourceSettings data)
    {
        var sb = new StringBuilder();

        if (!string.IsNullOrWhiteSpace(data.ProductName))
            sb.AppendLine($"Product: {data.ProductName}");
        if (!string.IsNullOrWhiteSpace(data.KeyFeatures))
            sb.AppendLine($"Key Features: {data.KeyFeatures}");
        if (!string.IsNullOrWhiteSpace(data.TechnicalSpecs))
            sb.AppendLine($"Technical Specs: {data.TechnicalSpecs}");
        if (!string.IsNullOrWhiteSpace(data.TargetUseCases))
            sb.AppendLine($"Target Use Cases: {data.TargetUseCases}");

        return sb.ToString().TrimEnd();
    }
}

// 3. Register it
public class MyComposer : IComposer
{
    public void Compose(IUmbracoBuilder builder)
    {
        builder.AIContextResourceTypes()
            .Add<ProductSpecsResourceType>();
    }
}

The [AIField] attributes on your settings model automatically generate the editor UI in the backoffice — no frontend code needed. Once registered, your new resource type appears in the resource type dropdown when adding resources to a context.

Custom Context Resource

Notice the separation between how settings are captured (the settings model) and how they’re presented to the AI (FormatDataForLlm). This means you can iterate on the LLM formatting without changing how editors input the data, and vice versa.

Resolving Data from External Sources

But context doesn’t have to come from the settings model at all. Resource types support a two-step pipeline: resolve data from settings, then format that data for the LLM. When you use the single-type-parameter base class (AIContextResourceTypeBase&lt;TSettings&gt;), the settings are passed directly to FormatDataForLlm. When you use the two-type-parameter version (AIContextResourceTypeBase&lt;TSettings, TData&gt;), you can override ResolveDataAsync to fetch data from anywhere — the settings become configuration that tells the resource type where to find the real content.

Here’s a resource type that pulls context from an external URL:

// Settings: what the editor configures
public class ExternalDocResourceSettings
{
    [AIField]
    public string? Url { get; set; }
}

// Data: what gets formatted for the LLM
public class ExternalDocResourceData
{
    public string Content { get; set; } = string.Empty;
}

[AIContextResourceType("external-doc", "External Document",
    Description = "Fetches context from an external URL",
    Icon = "icon-link")]
public class ExternalDocResourceType
    : AIContextResourceTypeBase<ExternalDocResourceSettings, ExternalDocResourceData>
{
    private readonly IHttpClientFactory _httpClientFactory;

    public ExternalDocResourceType(
        IAIContextResourceTypeInfrastructure infrastructure,
        IHttpClientFactory httpClientFactory)
        : base(infrastructure)
    {
        _httpClientFactory = httpClientFactory;
    }

    public override async Task<ExternalDocResourceData?> ResolveDataAsync(
        ExternalDocResourceSettings settings,
        CancellationToken cancellationToken = default)
    {
        if (string.IsNullOrWhiteSpace(settings.Url))
            return null;

        var client = _httpClientFactory.CreateClient();
        var content = await client.GetStringAsync(settings.Url, cancellationToken);
        return new ExternalDocResourceData { Content = content };
    }

    protected override string FormatDataForLlm(ExternalDocResourceData data)
        => data.Content;
}

The editor configures a URL. At resolution time, ResolveDataAsync fetches the document content. Then FormatDataForLlm formats that resolved data for the AI. The same pattern works for a database query, an API call, a file on disk — context resources can be living, dynamic sources of knowledge that stay current without manual updates.

Wrapping Up

Contexts are the feature that turns AI responses from “technically correct but generic” into “sounds like it was written by someone who knows our brand.” The composability — profile + content tree + prompt + agent — means you can layer knowledge naturally, and the pluggable resource types mean you’re not limited to what ships out of the box.

If you haven’t tried them yet, start simple. Create a Brand Voice context, attach it to a profile, and run a prompt with and without it. The difference in output quality speaks for itself.

Until next time 👋