Dynamic JSON-LD vs Hardcoded Schema
The wall every schema project hits
The first 20 pages of JSON-LD on a site are fun. An engineer hand-writes Product, FAQPage, Organization. Everyone admires the rich result preview. Then the catalog grows to 2,000 SKUs, the FAQ database doubles, marketing launches a comparison hub, and the schema falls behind. Within a quarter, half of it is stale and a third is invalid.
This is the wall a dynamic approach like the LLM Discovery API was built to break through.
What "dynamic" actually means
Dynamic schema generation means a single service inspects each URL's rendered content and emits a fresh, valid JSON-LD graph at request time (or precomputed and cached at the edge). The page author writes content. The platform writes the schema.
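The core of this idea is a pure function from CMS data to a JSON-LD graph, so the schema can never disagree with the content it describes. A minimal sketch in Python (the record field names here are hypothetical, not the API's actual contract):

```python
import json

def product_jsonld(record: dict) -> str:
    """Build a schema.org Product graph from a CMS record.

    The field names ("name", "sku", "price", "currency") are illustrative;
    a real integration maps whatever the CMS actually stores.
    """
    graph = {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": record["name"],
        "sku": record["sku"],
        "offers": {
            "@type": "Offer",
            "price": str(record["price"]),
            "priceCurrency": record.get("currency", "USD"),
        },
    }
    return json.dumps(graph)

# A price change in the CMS flows straight into the emitted schema:
print(product_jsonld({"name": "Widget", "sku": "W-1", "price": 19.99}))
```

Because the graph is derived at request time (or at cache-fill time), there is no second copy of the price to fall out of date.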
The key properties:
- One source of truth. Price changes in the CMS, schema follows.
- Coverage. Every URL gets schema, not just the ones engineering had time for.
- Validation built in. Invalid graphs never ship.
- Versioned. When schema.org evolves (and it does, often), the platform updates centrally.
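"Invalid graphs never ship" implies a validation gate between generation and emission. The sketch below shows the shape of such a gate with a toy required-fields table; a production validator would check full schema.org type definitions, and the field sets here are assumptions for illustration only:

```python
import json

# Toy validation table (illustrative; real schema.org types have richer rules).
REQUIRED = {
    "Product": {"name", "offers"},
    "FAQPage": {"mainEntity"},
    "Organization": {"name", "url"},
}

def emit(graph: dict) -> str:
    """Serialize a JSON-LD graph, refusing to emit one that fails validation."""
    missing = REQUIRED.get(graph.get("@type"), set()) - set(graph)
    if missing:
        raise ValueError(f"invalid graph, missing: {sorted(missing)}")
    return json.dumps({"@context": "https://schema.org", **graph})

valid = emit({"@type": "Organization", "name": "Acme", "url": "https://example.com"})

try:
    emit({"@type": "Product", "name": "Widget"})  # no "offers": refused
except ValueError as err:
    print(err)
```

Centralizing the gate is also what makes versioning tractable: when schema.org changes, only the validation table and generators update, not thousands of templates.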
What hardcoded schema costs
Hand-rolled JSON-LD has three failure modes that compound over time:
- Drift. Prices, availability, and answers change in the CMS; the hand-written schema snapshot does not.
- Coverage gaps. New pages launch without any schema because engineering time is the bottleneck.
- Silent invalidity. schema.org evolves and graphs break with nobody watching.
A $20k schema retrofit from an agency fixes the snapshot. It does not fix drift. Three months later you're back where you started.
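Drift is easy to demonstrate: a retrofit bakes in a snapshot of the catalog, and the first CMS price change invalidates it. A toy illustration (all values hypothetical):

```python
import json

# A hardcoded JSON-LD snapshot, as an agency retrofit would leave it.
hardcoded = json.loads(
    '{"@type": "Product", "sku": "W-1", "offers": {"@type": "Offer", "price": "19.99"}}'
)

# The price as it now stands in the CMS, updated after the retrofit shipped.
cms_price = "24.99"

drifted = hardcoded["offers"]["price"] != cms_price
print(drifted)  # True: the snapshot is already stale
```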
What changes when schema is dynamic
From the data we've seen across customer sites running the LLM Discovery API, three things move:
- Pages extracted by AI engines: +12% on average within 30 days. This is the cleanest generative engine optimization (GEO) metric we have.
- Schema validity: ~100% — invalid graphs never reach production because the API refuses to emit them.
- Engineering time on schema: near zero after initial integration.
When hardcoded is still the right call
For a 5-page brochure site, dynamic is overkill. Hand-write the Organization and one Service graph and move on. The break-even is roughly 50 URLs or any catalog that changes weekly.
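For that brochure-site case, the entire hand-written graph can fit in one block. A sketch of the shape (names and URLs are placeholders):

```json
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "Organization",
      "name": "Acme Consulting",
      "url": "https://example.com"
    },
    {
      "@type": "Service",
      "name": "Tax Advisory",
      "provider": { "@type": "Organization", "name": "Acme Consulting" }
    }
  ]
}
```

A graph this small changes rarely enough that maintaining it by hand costs less than integrating a generation service.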