Building a Sitecore Agent Playground with Gradio

Published 11/7/2025
sitecore · openai · langchain · agents · gradio

The Sitecore Agent API is a powerhouse: nearly forty REST endpoints for managing content, pages, components, media, and personalization from outside the XM Cloud UI. But stitching those operations together by hand is tedious. I wanted an interactive playground where I could talk to Sitecore in plain English, watch the agent reason about the right API calls, and still have full control over authentication and auditing. The result is the open-source sample Sitecore-OpenAI-Agent-API-Gradio.

Important context: this repository is intentionally a teaching scaffold, not a production-ready admin console. The Gradio surface exists to demonstrate how LangChain agents can consume the Sitecore OpenAPI spec, generate tools on the fly, and execute real API calls. It is perfect for experimentation, demos, and brown-bag sessions with developers, but it omits the polish, guardrails, and operational hardening you would need in a customer-facing app.

This post is a guided tour for developers. We will cover the architecture, the dev workflow, how LangChain auto-generates tools from Sitecore’s OpenAPI spec, and a checklist for adapting the sample to production when you are ready to build something more substantial.

Why a Gradio front end?

Gradio takes the friction out of prototyping conversational tools. Instead of wiring up a custom React app, you get a batteries-included chat UI, status indicators, and file upload support in a single Python process. That matters when you are iterating quickly on agent behaviour: you can tweak prompts, log traces, or swap language models without rebuilding a front-end bundle.

For the Sitecore Agent use-case, the web UI sits alongside the LangChain agent and exposes three modes:

  • Chat – Natural language commands such as "List personalization rules for the Solterra site"
  • CLI – Terminal mode for scripted testing and CI workflows (python main.py --mode cli)
  • Test harness – Runs a deterministic checklist against the generated tools (python main.py --mode test)
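The three modes above are selected with a --mode flag. As a rough illustration (the entry points and function names here are hypothetical, not the repo's actual code), main.py's dispatch can be sketched like this:

```python
# Hypothetical sketch of a three-mode dispatcher like main.py's.
# run_gradio / run_cli / run_tests are placeholder entry points.
import argparse

def run_gradio():
    return "gradio"  # would launch the Gradio chat UI

def run_cli():
    return "cli"     # would start the interactive terminal session

def run_tests():
    return "test"    # would run the deterministic tool checklist

MODES = {"gradio": run_gradio, "cli": run_cli, "test": run_tests}

def main(argv=None):
    parser = argparse.ArgumentParser(description="Sitecore agent playground")
    parser.add_argument("--mode", choices=MODES, default="gradio")
    args = parser.parse_args(argv)
    return MODES[args.mode]()
```

Defaulting to Gradio mode keeps the common case (python main.py) zero-argument, while CI scripts can pass --mode test explicitly.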

Architecture quick start

The repository is intentionally small but expressive. Here is the high-level flow:

  1. OpenAPI loading – index.json ships with the latest Sitecore Agent specification. You can opt into auto-download with SITECORE_AUTO_UPDATE_SPEC=true.
  2. Tool generation – spec_based_tools.py parses the spec, builds Pydantic models for every operation, and registers 39+ LangChain tools on the fly.
  3. Agent orchestration – sitecore_agent.py assembles a ReAct-style agent that reasons, chooses tools, and formats responses.
  4. Authentication – auth_manager.py (imported inside the agent) handles OAuth2 client credentials and refreshes tokens when they cross the 5-minute remaining threshold.
  5. Gradio UI – gradio_interface.py launches the chat surface, wires in health indicators, and streams agent messages back to the browser.

You can visualise it like this:

           β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
           β”‚      Gradio UI        β”‚
           β”‚  - chat + CLI modes   β”‚
           β”‚  - status indicators  β”‚
           β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                      β”‚
                      β–Ό
         β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
         β”‚   LangChain ReAct Agent  β”‚
         β”‚  - GPT-5-mini reasoning  β”‚
         β”‚  - tool selection loop   β”‚
         β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                    β”‚
                    β–Ό
         β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
         β”‚  Spec-Based Tool Layer   β”‚
         β”‚  - Pydantic validation   β”‚
         β”‚  - OAuth-signed requests β”‚
         β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                    β”‚
                    β–Ό
         β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
         β”‚   Sitecore Agent API     β”‚
         β”‚  - 39 REST operations    β”‚
         β”‚  - XM Cloud resources    β”‚
         β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Running the sample locally

Clone the repo and copy the example environment file:

git clone https://github.com/kevinpbuckley/Sitecore-OpenAI-Agent-API-Gradio.git
cd Sitecore-OpenAI-Agent-API-Gradio
copy .env.example .env  # Windows
# cp .env.example .env  # macOS/Linux

Edit .env with your credentials:

OPENAI_API_KEY=sk-your-openai-key
SITECORE_CLIENT_ID=your-client-id
SITECORE_CLIENT_SECRET=your-client-secret
SITECORE_DEFAULT_SITE=solterra
OPENAI_MODEL=gpt-5-mini

Install dependencies and start Gradio:

pip install -r requirements.txt
python main.py  # defaults to Gradio mode

Once the console displays a green status light, open http://localhost:7860 and start chatting. The agent will call the Sitecore APIs in the background and render responses inline.

Example conversations

Here are a few prompts that exercise different areas of the Sitecore surface:

  • β€œList all pages in the Solterra site and include their IDs.”
  • β€œCreate a new article called β€˜Holiday Gift Guide’ under /content/solterra/articles.”
  • β€œSearch media items tagged with β€˜hero’ and show me direct download URLs.”
  • β€œShow the personalization rules applied to the homepage.”
  • β€œUpload this PNG and add it to the home hero component.”

Every command follows the same loop: the agent inspects available tools, pulls in Pydantic schemas for parameter validation, executes the request with OAuth headers, and reformats the JSON into human-friendly text.
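Stripped of the LangChain machinery, that loop can be sketched as follows (the dict-based tool registry and helper names are illustrative; the real agent uses Pydantic schemas and LangChain tool objects):

```python
# Simplified sketch of the per-command loop: look up a tool, validate its
# parameters, execute the (stubbed) HTTP call, and flatten JSON for the chat.
def validate(args, required):
    missing = [name for name in required if name not in args]
    if missing:
        raise ValueError(f"missing parameters: {missing}")
    return args

def run_tool(tools, name, args, auth_header):
    tool = tools[name]                               # 1. inspect available tools
    checked = validate(args, tool["required"])       # 2. validate parameters
    response = tool["call"](checked, auth_header)    # 3. execute with OAuth header
    return "\n".join(f"{k}: {v}" for k, v in response.items())  # 4. humanise JSON
```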

Deep dive: dynamic tool generation

spec_based_tools.py does the heavy lifting. Instead of hard-coding functions, it:

  1. Loads the OpenAPI definition and iterates over paths.
  2. Extracts operation IDs, HTTP verbs, required parameters, and response schemas.
  3. Builds a Pydantic model for the combined query/path/body parameters so LangChain can coerce inputs.
  4. Wraps the request in a LangChain Tool with descriptive docstrings for the agent to read.
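A dependency-free sketch of those four steps: walk the spec's paths, derive a required-parameter list per operation, and wrap each in a callable "tool" whose docstring the agent can read. The real spec_based_tools.py builds Pydantic models and LangChain tools rather than plain closures; this only shows the shape of the transformation.

```python
# Hedged sketch of spec-driven tool generation: one callable per
# operationId, carrying its summary as a docstring.
def build_tools(spec, execute):
    tools = {}
    for path, methods in spec["paths"].items():
        for verb, op in methods.items():
            required = [p["name"] for p in op.get("parameters", []) if p.get("required")]

            # Bind loop variables via defaults so each closure keeps its own values.
            def tool(args, _verb=verb, _path=path, _required=required):
                missing = [n for n in _required if n not in args]
                if missing:
                    raise ValueError(f"missing: {missing}")
                return execute(_verb, _path, args)

            tool.__doc__ = op.get("summary", "")
            tools[op["operationId"]] = tool
    return tools
```

Because the registry is rebuilt from the spec on every boot, a new endpoint in index.json becomes a new tool with no code change.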

Because of this approach, upgrading to a new Sitecore drop or adding experimental endpoints is as simple as swapping the index.json file. The agent will "see" new tools the next time it boots.

Configuration tips

  • Model selection – OPENAI_MODEL defaults to gpt-5-mini, but the sample works with gpt-4o or gpt-4o-mini if that is what your org approves.
  • Spec updates – Set SITECORE_AUTO_UPDATE_SPEC=true in .env to download the latest OpenAPI doc on startup. The local cache keeps the last good version in case of outages.
  • Logging – Toggle LOG_LEVEL=DEBUG to print raw request/response payloads during development.
  • Headless mode – python main.py --mode cli launches an interactive terminal session (handy inside VS Code dev containers or Azure Container Apps).
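For the logging tip, a minimal sketch of honouring a LOG_LEVEL environment variable (an assumption about how the sample wires it up; its actual logging setup may differ), falling back to INFO for unknown values:

```python
# Hypothetical sketch: map LOG_LEVEL from .env to the stdlib logging level.
import logging
import os

def configure_logging():
    level_name = os.environ.get("LOG_LEVEL", "INFO").upper()
    level = getattr(logging, level_name, logging.INFO)  # unknown names fall back to INFO
    logging.basicConfig(level=level, format="%(levelname)s %(name)s: %(message)s")
    return level
```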

Wrapping up

The Sitecore Agent API plus LangChain makes Sitecore automation approachable for developers who prefer natural language interfaces. Gradio adds a friendly skin so stakeholders can try scenarios without touching Postman or the XM Cloud UI. Remember, this repository is a teaching example: keep it in a sandbox, use it to understand the agent pattern, then graduate the concepts into a hardened app that matches your governance model.

If you are exploring agentic CMS workflows, fork the GitHub repository, wire it to your Sitecore sandbox, and start issuing commands. As always, contributions are welcome; let me know what features you add! This is a sandbox designed to accelerate experimentation, and I hope it sparks ideas for your next XM Cloud integration.