Test-Driven AI Agent Definition (TDAD): Compiling Tool-Using Agents from Behavioral Specifications
Abstract
TDAD is a methodology for developing AI agents that uses behavioral specifications and automated testing to ensure reliable and compliant deployment of language model agents in production environments.
Deploying tool-using LLM agents in production requires measurable behavioral compliance that current development practices cannot provide: small prompt changes cause silent regressions, tool misuse goes undetected, and policy violations emerge only after deployment. We present Test-Driven AI Agent Definition (TDAD), a methodology that treats agent prompts as compiled artifacts: engineers provide behavioral specifications, a coding agent converts them into executable tests, and a second coding agent iteratively refines the prompt until the tests pass. To mitigate specification gaming, TDAD introduces three mechanisms: (1) visible/hidden test splits that withhold evaluation tests during compilation, (2) semantic mutation testing via a post-compilation agent that generates plausible faulty prompt variants, with the harness measuring whether the test suite detects them, and (3) spec evolution scenarios that quantify regression safety when requirements change. We evaluate TDAD on SpecSuite-Core, a benchmark of four deeply-specified agents spanning policy compliance, grounded analytics, runbook adherence, and deterministic enforcement. Across 24 independent trials, TDAD achieves 92% v1 compilation success with a 97% mean hidden pass rate; evolved specifications compile at 58%, with most failed runs passing all but one or two visible tests, and show 86-100% mutation scores, a 78% v2 hidden pass rate, and 97% regression safety scores. The implementation is available as an open benchmark at https://github.com/f-labs-io/tdad-paper-code.
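The compile loop the abstract describes (refine the prompt until visible tests pass, then score once on withheld hidden tests) can be sketched as below. This is a minimal illustrative sketch, not the paper's implementation: `run_test`, `refine_prompt`, and `compile_agent` are hypothetical names, and the toy test runner just checks for substrings instead of executing a real agent.

```python
def run_test(prompt: str, test: str) -> bool:
    # Stand-in for executing one behavioral test against an agent driven
    # by `prompt`; here a toy substring check instead of a real rollout.
    return test in prompt

def refine_prompt(prompt: str, failures: list[str]) -> str:
    # Stand-in for the second coding agent, which patches the prompt
    # to address each failing test.
    return prompt + " " + " ".join(failures)

def compile_agent(visible_tests: list[str], hidden_tests: list[str],
                  max_iters: int = 10) -> tuple[str, float]:
    """Iteratively refine a prompt against visible tests only, then
    evaluate once on hidden tests withheld during compilation."""
    prompt = "You are an agent."
    for _ in range(max_iters):
        failures = [t for t in visible_tests if not run_test(prompt, t)]
        if not failures:  # all visible tests pass: compilation succeeds
            break
        prompt = refine_prompt(prompt, failures)
    hidden_pass = sum(run_test(prompt, t) for t in hidden_tests)
    return prompt, hidden_pass / len(hidden_tests)

prompt, hidden_rate = compile_agent(
    visible_tests=["refuse refunds over limit", "log every tool call"],
    hidden_tests=["refuse refunds over limit"],
)
```

The key property the split enforces is that the refinement loop never sees `hidden_tests`, so the final hidden pass rate measures generalization rather than overfitting to the visible suite.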
Community
Introducing TDAD (Test-Driven Agent Definition), a specification-first framework for building reliable LLM agents. We release SpecSuite-Core, a benchmark of four agent specifications with mutation testing for robustness evaluation.
The following papers were recommended by the Semantic Scholar API:
- Test vs Mutant: Adversarial LLM Agents for Robust Unit Test Generation (2026)
- ContextCov: Deriving and Enforcing Executable Constraints from Agent Instruction Files (2026)
- IDE-Bench: Evaluating Large Language Models as IDE Agents on Real-World Software Engineering Tasks (2026)
- AgentAssay: Token-Efficient Regression Testing for Non-Deterministic AI Agent Workflows (2026)
- TAM-Eval: Evaluating LLMs for Automated Unit Test Maintenance (2026)
- RepoMod-Bench: A Benchmark for Code Repository Modernization via Implementation-Agnostic Testing (2026)
- One-Eval: An Agentic System for Automated and Traceable LLM Evaluation (2026)