What Do Models See?
Tool schemas become prompt text.
This atlas compares what a model receives after a JSON tool definition has passed through tokenizer-aware chat templates, provider-style tool renderers, or model-supplied custom renderers.
Start from shared OpenAI-style JSON tool definitions covering required strings, nested arrays, enums, refs, nullable unions, and oneOf/allOf stress cases.
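For concreteness, a tool definition in this style might look like the sketch below; the function name and fields are hypothetical, not the atlas's actual fixtures, but they exercise several of the features listed above.

```python
# Hypothetical OpenAI-style tool definition exercising several of the
# stress cases above: a required string, an enum, a nested array, and a
# nullable union. The function and its fields are illustrative fixtures.
search_flights = {
    "type": "function",
    "function": {
        "name": "search_flights",
        "description": "Search for flights between two airports.",
        "parameters": {
            "type": "object",
            "properties": {
                "origin": {"type": "string", "description": "IATA code, e.g. SFO"},
                "cabin": {"type": "string", "enum": ["economy", "business", "first"]},
                "stops": {
                    "type": "array",
                    "items": {
                        "type": "object",
                        "properties": {"airport": {"type": "string"}},
                    },
                },
                "max_price": {"type": ["number", "null"]},
            },
            "required": ["origin"],
        },
    },
}
```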
Download each model's tokenizer where available, inspect its chat template and tool-use conventions, then render the shared definitions through that template or through model-supplied custom renderer code.
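Continuing the sketch above, here is a minimal render step using Hugging Face transformers, assuming the checkpoint is accessible and its chat template accepts a `tools` argument; the model id is illustrative, not an endorsement of any particular fixture.

```python
# Minimal render sketch. Assumes a tool-capable chat template; the model id
# is illustrative and can be swapped for any checkpoint under study.
from transformers import AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# The raw Jinja chat template can be inspected before rendering.
print(tokenizer.chat_template)

messages = [{"role": "user", "content": "Find me a flight from SFO."}]
rendered = tokenizer.apply_chat_template(
    messages,
    tools=[search_flights],   # the tool definition from the sketch above
    tokenize=False,           # return prompt text, not token ids
    add_generation_prompt=True,
)
print(rendered)               # this is what the model would actually see
```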
Compare the rendered prompt text, feature survival, special modes, and copied evidence claims so each model’s tool dialect remains auditable.
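As a rough illustration of the feature-survival check, the naive probe below scans the rendered text from the previous sketch for schema keywords. Substring presence is only an approximation: real templates may re-serialize, rename, or drop keywords entirely, which is exactly what the report's inspector records.

```python
# Naive feature-survival probe over `rendered` from the sketch above.
# Marker strings are illustrative; treat the result as an approximation.
FEATURE_MARKERS = {
    "enum": '"enum"',
    "required": '"required"',
    "nullable_union": '"null"',
    "nested_array": '"items"',
}

survival = {feature: marker in rendered
            for feature, marker in FEATURE_MARKERS.items()}
print(survival)  # e.g. {'enum': True, 'required': False, ...}
```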
Reading The Report
Start with a model, then compare the same input case across dialects.
The left rail groups models with identical sampled renderings, while the evidence section keeps each underlying packet visible. The right inspector records which schema features survived the template render.
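A minimal sketch of that grouping, assuming "identical sampled renderings" means byte-identical rendered prompt text; the model names and texts below are placeholders.

```python
# Group models whose rendered prompts are byte-identical, as the left rail
# does. Keys and rendered strings are placeholder values.
from collections import defaultdict
import hashlib

renderings = {
    "model-a": "<|tool|>search_flights...",
    "model-b": "<|tool|>search_flights...",  # same bytes as model-a
    "model-c": "[TOOLS] search_flights...",
}

groups = defaultdict(list)
for model, text in renderings.items():
    groups[hashlib.sha256(text.encode()).hexdigest()].append(model)

for digest, models in groups.items():
    print(digest[:12], models)  # models sharing a digest rendered identically
```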
Reading The Output
The rendered prompt is shown whole.
Inline highlights call out tool names and schema-bearing keywords, but the report keeps the full rendered text visible so boilerplate, wrappers, and surrounding template instructions remain auditable.
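A hypothetical highlighter along these lines wraps matched tool names and keywords in visible markers while passing every other character through unchanged, so wrappers and template boilerplate stay auditable; the keyword list is illustrative.

```python
# Wrap tool names and schema-bearing keywords in visible markers without
# removing any surrounding text. The keyword list is illustrative.
import re

KEYWORDS = ["search_flights", "enum", "required", "oneOf", "allOf"]
PATTERN = re.compile("|".join(re.escape(k) for k in KEYWORDS))

def highlight(rendered_prompt: str) -> str:
    return PATTERN.sub(lambda m: f"[[{m.group(0)}]]", rendered_prompt)

print(highlight('{"name": "search_flights", "required": ["origin"]}'))
```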
Selected model
Same JSON tool schema, model-specific prompt dialect
Template-render evidence only: hosted APIs or serving layers may normalize, reject, or rewrite tool schemas before the model receives them.
Evidence packet