Blogs

MCP – Grounding Your LLM

Ground your LLM with Atlas from MetaBroadcast

Have you tried working with an LLM to improve your metadata? Has it become obvious that a lot of the responses are simply incorrect? When coverage and accuracy matter, there is a better way…

As AI continues its transition from hype cycle to real business solutions, a key challenge is reducing your chosen LLM’s hallucinations. Fortunately, implementing MCP (Model Context Protocol) and using datasets of known provenance greatly reduces this risk when connecting business solutions to Media and Entertainment metadata – improving search and discovery while cutting out wildly inaccurate responses. At MetaBroadcast we can demonstrate how to improve your confidence in data accuracy, shorten time to market when adding new sources, and achieve extensive coverage.

What is MCP?

It stands for Model Context Protocol. Think of it as a standard connector, like a USB-C cable, for linking any Large Language Model (LLM) app to your company’s data and tools. No more needing a bunch of different custom connectors!

An MCP Server’s Three Components

  1. Tools: Functions an LLM “agent” can use.
  2. Resources: Read-only data like notes, logs, or database records.
  3. Prompt Templates: Examples of natural language questions the system expects.
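To make those three components concrete, here is a minimal Python sketch of how a server might group them. This is illustrative only – it is not the official MCP SDK, and names like `register_tool` and `SketchServer` are invented for the example; a real server speaks JSON-RPC via the SDK – but the grouping of tools, resources, and prompt templates is the same.

```python
# Schematic sketch of the three MCP server components.
# Illustrative only: not the real MCP SDK, which handles transport
# and JSON-RPC. The point is how the three component types are grouped.

class SketchServer:
    def __init__(self, name):
        self.name = name
        self.tools = {}       # functions an LLM "agent" can invoke
        self.resources = {}   # read-only data: notes, logs, records
        self.prompts = {}     # example natural-language question templates

    def register_tool(self, name, fn):
        self.tools[name] = fn

    def add_resource(self, uri, data):
        self.resources[uri] = data

    def add_prompt(self, name, template):
        self.prompts[name] = template

    def call_tool(self, name, **kwargs):
        # The LLM picks a tool by name; the server runs it and returns data.
        return self.tools[name](**kwargs)

server = SketchServer("metadata-demo")
server.register_tool("search_title", lambda q: [{"title": q, "id": "demo-1"}])
server.add_resource("notes://readme", "Read-only project notes")
server.add_prompt("find_show", "Which shows match the title {title}?")

print(server.call_tool("search_title", q="Example Show"))
```

The LLM never touches the resources or runs the tools directly; it asks the server to do so, which is what keeps the connector standard across apps.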

Why Bother?

Our success in building an MCP server means we can connect LLMs to new data (like our Atlas API) far faster. Since it’s a known standard, maintenance and collaboration are simpler, and new team members can get up to speed quickly.

The Big Challenge Solved

Because film/TV metadata is often public, the LLMs we tested repeatedly tried to search external sites (IMDb, fansites, etc.) instead of our Atlas API. We fixed this by giving the LLM a set of specific tools (different Atlas requests/endpoints) in the MCP server, encouraging it to query data of known provenance.
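As a hedged sketch of that fix: rather than one generic "search" tool, each request type gets its own narrowly named tool mapped to one endpoint, so the model always has an obvious in-house option. The base URL, endpoint paths, and parameter names below are placeholders, not the real Atlas API.

```python
# Hedged sketch: one narrow tool per request type.
# Base URL, paths, and parameters are placeholders, not the real Atlas API.

ATLAS_BASE = "https://atlas.example.com"  # placeholder base URL

def build_request(path, **params):
    """Return the request URL the MCP server would issue (not sent here)."""
    query = "&".join(f"{k}={v}" for k, v in sorted(params.items()))
    return f"{ATLAS_BASE}{path}?{query}"

# Narrow, specific tools: each maps one question type to one endpoint,
# giving the model no reason to fall back to IMDb or fansites.
TOOLS = {
    "search_content_by_title": lambda title: build_request("/content", title=title),
    "lookup_content_by_id":    lambda id:    build_request("/content", id=id),
    "list_broadcast_schedule": lambda channel, date: build_request(
        "/schedule", channel=channel, date=date),
}

print(TOOLS["search_content_by_title"]("Example Show"))
```

Narrow tool names double as documentation for the model: when a question mentions an ID, the obvious match is the ID-lookup tool, not a web search.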

The Result

We now have an MCP server that lets a chatbot-style LLM translate normal questions into Atlas API requests and return accurate content, including IDs.

A Key Lesson Learned

Having dedicated tools for specific tasks really helps stop the AI from getting confused or “hallucinating” (making stuff up).

What This Means for Users

  1. Search with plain text! Forget writing clunky SQL or super-specific API queries.
  2. You get the benefit of an LLM without the headache of hallucinations (or at least, with a lot less risk).
  3. You get real-world, grounded results from both structured and unstructured data.

Want to see it in action? Just ask for a demo!