Why Umbraco 17.4.0 Is a Big Deal for AI


If you told most Umbraco developers to get excited about a minor release, you’d probably get a polite nod. Point releases aren’t usually the ones that grab attention.

But for those of us working with AI, this one matters.

The Problem: AI Doesn’t Know What It Doesn’t Know

Umbraco’s flexibility is both its greatest strength and, for AI, its biggest challenge. Property editors can produce almost any kind of value, making the system incredibly powerful—but also unpredictable.

The catch is that only the property editor knows what its value actually looks like. From the outside, there’s no reliable way to understand the structure of that data. To an LLM, every property is effectively a black box.

When AI tries to populate a property value, it’s forced to guess the shape of the data. That can work for simple types, but as soon as you hit more complex editors, it’s operating blind. There’s no contract to inspect, no schema to follow—just trial and error. And when it gets it wrong, the result is either a failed save or invalid content.

The Solution: JSON Schema Endpoints

PR #21771 introduces JSON Schema support to the Management API for both data types and document types. It adds new endpoints that expose the expected value structure for any property:

  • /umbraco/management/api/v1/data-type/{id}/schema — returns the JSON Schema for a data type’s value
  • /umbraco/management/api/v1/document-type/{id}/schema — returns a full schema for a document type, linking to each property’s data type schema
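As a minimal sketch of how a client might address these endpoints, here is a small URL builder; the base URL, the GUID, and the bearer-token note are placeholders, and a real client would authenticate against the Management API:

```python
from urllib.parse import urljoin

def schema_url(base: str, endpoint: str, type_id: str) -> str:
    """Build the Management API schema URL for a type.

    `endpoint` is "data-type" or "document-type"; `type_id` is the type's GUID.
    """
    return urljoin(base, f"/umbraco/management/api/v1/{endpoint}/{type_id}/schema")

# A real client would GET this URL with a Management API bearer token, e.g.
# urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
url = schema_url(
    "https://example.com",                        # placeholder site
    "data-type",
    "00000000-0000-0000-0000-000000000000",       # placeholder GUID
)
print(url)
```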

To make this concrete, here’s what a Textbox data type looks like when configured with a max length of 250:

{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "type": ["string", "null"],
  "maxLength": 250
}

Simple, but it changes everything. The AI now knows this property expects a string (or null), and exactly how long it can be. No guesswork required.
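To sketch what a consumer can do with that contract, here is a hand-rolled check of a value against this particular schema. It only handles the two keywords the Textbox schema uses ("type" and "maxLength"); a real client would use a full JSON Schema validator library instead:

```python
def validate_textbox(value, schema):
    """Minimal check of a value against the Textbox schema shown above.

    Covers only the keywords that schema uses: a "type" list of
    "string"/"null", and an optional "maxLength" constraint.
    """
    allowed = schema.get("type", [])
    if value is None:
        return "null" in allowed
    if not isinstance(value, str):
        return "string" not in allowed and False  # non-strings never pass here
    return len(value) <= schema.get("maxLength", float("inf"))

schema = {"type": ["string", "null"], "maxLength": 250}
print(validate_textbox("Hello", schema))    # short string: within the limit
print(validate_textbox("x" * 300, schema))  # 300 chars: exceeds maxLength
print(validate_textbox(None, schema))       # null is explicitly allowed
```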

Now compare that to something more complex like a Media Picker configured for single selection, focal point, and crops:

{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "type": ["array", "null"],
  "maxItems": 1,
  "items": {
    "type": "object",
    "required": ["key", "mediaKey"],
    "properties": {
      "key": { "type": "string", "format": "uuid" },
      "mediaKey": { "type": "string", "format": "uuid" },
      "mediaTypeAlias": { "type": "string" },
      "crops": {
        "type": ["array", "null"],
        "items": {
          "type": "object",
          "properties": {
            "alias": { "type": "string", "enum": ["thumbnail", "hero"] },
            "width": { "type": "integer" },
            "height": { "type": "integer" },
            "coordinates": {
              "oneOf": [
                { "type": "null" },
                {
                  "type": "object",
                  "properties": {
                    "x1": { "type": "number" },
                    "y1": { "type": "number" },
                    "x2": { "type": "number" },
                    "y2": { "type": "number" }
                  }
                }
              ]
            }
          }
        }
      },
      "focalPoint": {
        "oneOf": [
          { "type": "null" },
          {
            "type": "object",
            "properties": {
              "left": { "type": "number", "minimum": 0, "maximum": 1 },
              "top": { "type": "number", "minimum": 0, "maximum": 1 }
            }
          }
        ]
      }
    }
  }
}

This is the kind of structure an LLM simply can’t guess correctly: UUIDs, nested crop coordinates, constrained focal points, and specific allowed crop aliases. But with the schema, it knows exactly what to construct.
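To make that concrete, here is a sketch of building a value that conforms to the Media Picker schema above. The helper name and the placeholder media GUID are illustrative, but the shape of the payload (the required "key" and "mediaKey" UUIDs, nullable crops and focal point) comes straight from the schema:

```python
import json
import uuid

def media_picker_value(media_key: str, crops=None, focal_point=None):
    """Build a single-selection Media Picker value matching the schema above.

    "key" is a fresh item GUID; "mediaKey" identifies the chosen media item.
    "crops" and "focalPoint" are nullable, exactly as the schema allows.
    """
    return [{
        "key": str(uuid.uuid4()),
        "mediaKey": media_key,
        "crops": crops,
        "focalPoint": focal_point,
    }]

value = media_picker_value(
    "11111111-1111-1111-1111-111111111111",  # placeholder media GUID
    crops=[{"alias": "hero", "width": 1920, "height": 600, "coordinates": None}],
    focal_point={"left": 0.5, "top": 0.5},
)
print(json.dumps(value, indent=2))
```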

At the document type level, everything is composed into a single schema. Querying the document type schema endpoint returns the full shape of a create/update payload, with $ref links pointing to each property’s data type schema. An LLM can follow these references to build a complete, valid content item—property by property.
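Following those references can be sketched as a recursive walk. The exact shape of the composed document-type schema isn’t shown in this post, so the example below assumes plain `{"$ref": "<data-type schema URL>"}` nodes; `fetch` stands in for an HTTP GET against the Management API:

```python
def resolve_refs(schema, fetch):
    """Recursively replace {"$ref": url} nodes with the schema fetched from url.

    `fetch` maps a URL to a parsed schema; here it is a dict lookup, but in a
    real client it would be an authenticated GET against the Management API.
    """
    if isinstance(schema, dict):
        if set(schema) == {"$ref"}:
            return resolve_refs(fetch(schema["$ref"]), fetch)
        return {key: resolve_refs(val, fetch) for key, val in schema.items()}
    if isinstance(schema, list):
        return [resolve_refs(item, fetch) for item in schema]
    return schema

# Stubbed fetch standing in for per-data-type schema requests:
stub = {"/data-type/abc/schema": {"type": ["string", "null"], "maxLength": 250}}
doc_schema = {"properties": {"title": {"$ref": "/data-type/abc/schema"}}}
print(resolve_refs(doc_schema, stub.get))
```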

Each core property editor now implements IValueSchemaProvider, generating schemas that reflect not just the editor type, but its configuration. This means the schema is accurate to the actual data—constraints and all.

For AI, this is transformative. Instead of guessing, it can request the schema, understand the required structure, and construct valid values. What was previously a “best effort” becomes a deterministic, validated operation.

What This Unlocks

With schema endpoints in place, we can now build reliable content editing tools for Agents. The flow becomes:

  1. A user asks an agent to create or update content
  2. The agent fetches the document type schema
  3. The schema defines exactly what each property expects
  4. The agent constructs valid values and submits them via the Management API
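The steps above can be sketched as one function with every dependency injected. All of the names here are hypothetical: the fetch and submit steps would be Management API calls, and the generate step is where the LLM shapes values to the schema:

```python
def create_content(request, fetch_schema, generate, submit):
    """Sketch of the agent flow above, with stand-ins for each step:

    fetch_schema(doc_type)   -> document type schema (Management API GET)
    generate(prompt, schema) -> property values shaped by the schema (LLM step)
    submit(values)           -> create/update call via the Management API
    """
    schema = fetch_schema(request["documentType"])
    values = generate(request["prompt"], schema)
    return submit(values)

# Stubbed run; real implementations would call the Management API and an LLM.
result = create_content(
    {"documentType": "article", "prompt": "Write a title"},
    fetch_schema=lambda dt: {"properties": {"title": {"type": ["string", "null"], "maxLength": 250}}},
    generate=lambda prompt, schema: {"title": "Hello from the agent"},
    submit=lambda values: {"status": "created", "values": values},
)
print(result["status"])
```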

But this isn’t limited to Umbraco AI.

The same capability unlocks better integrations across the board. The Umbraco MCP project can use schemas to provide structured, reliable interactions with content. The Umbraco Compose platform can build on the same foundation to enable more intelligent content workflows.

Anywhere content is created or updated programmatically, there’s now a clear contract to follow.

That’s the real shift—from guessing what content should look like, to knowing.

Looking Ahead

17.4.0 might look like a minor release on paper, but for us it unlocks the next chapter of Umbraco AI.

We’re excited to start building the tools that turn Copilot into a practical content editing companion—and we can’t wait to share what comes next.

Until next time 👋