
Iterations and Verification

AgentGate uses an iterative process with multi-level verification to ensure generated code meets quality standards. This page explains how iterations work and what verification levels mean.

The Iteration Process

When a run executes, it goes through multiple iterations:
  1. Code Generation: The AI agent analyzes the task and generates code changes.
  2. Verification: Automated verification checks run against the generated code.
  3. Feedback Loop: If verification fails, the results feed back to the AI for improvement.
  4. Repeat or Complete: The process repeats until verification passes or the maximum number of iterations is reached.
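
As a rough sketch, the state of this loop is visible in the run details. The example below is illustrative only; the iterationHistory array and its fields are hypothetical and not a documented response shape:
{
  "id": "run_abc123",
  "status": "running",
  "iterations": 2,
  "iterationHistory": [
    { "number": 1, "phase": "verification", "result": "failed", "failedLevel": "L1" },
    { "number": 2, "phase": "generation" }
  ]
}
Here iteration 1 failed L1 verification, so iteration 2 is back in the generation phase, incorporating that feedback.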

Why Multiple Iterations?

AI-generated code doesn’t always work on the first try. The iteration process:
  • Catches errors early through verification
  • Provides feedback for the AI to improve
  • Ensures quality through multiple checks
  • Converges on working solutions

Verification Levels

AgentGate uses four verification levels, each progressively more thorough:

L0: Contract Verification

What it checks:
  • Syntax correctness
  • Type checking (for typed languages)
  • Import resolution
  • Basic linting
When it runs: Every iteration
Example failures:
  • Syntax errors
  • Missing imports
  • Type mismatches
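
For illustration, an L0 failure reported back to the agent might look something like this (a hypothetical shape; the field names and message are invented for this example and are not a documented API contract):
{
  "level": "L0",
  "status": "failed",
  "checks": [
    { "name": "typecheck", "status": "failed", "message": "Type 'string' is not assignable to type 'number'" },
    { "name": "lint", "status": "passed" }
  ]
}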

L1: Test Verification

What it checks:
  • Unit test execution
  • Test pass/fail status
  • Code coverage thresholds
When it runs: After L0 passes
Example failures:
  • Failing unit tests
  • Below coverage threshold
  • Test runtime errors

L2: Behavioral Verification

What it checks:
  • Integration test execution
  • API contract compliance
  • Cross-component interaction
When it runs: After L1 passes
Example failures:
  • Integration test failures
  • API response mismatches
  • Database interaction errors

L3: Sanity Verification

What it checks:
  • End-to-end scenarios
  • Performance thresholds
  • Security scans
  • Final quality checks
When it runs: After L2 passes
Example failures:
  • E2E test failures
  • Performance degradation
  • Security vulnerabilities
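
As a sketch of the overall progression, a run that clears every level might summarize verification roughly as follows (again a hypothetical shape, shown only to illustrate how the levels build on one another):
{
  "verification": {
    "L0": { "status": "passed" },
    "L1": { "status": "passed", "coverage": 0.85 },
    "L2": { "status": "passed" },
    "L3": { "status": "passed" }
  }
}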

Gate Plans

Gate plans configure which verification levels apply:

Default Gate Plan

By default, all relevant verification levels run:
  1. L0 always runs
  2. L1 runs if tests exist
  3. L2 runs if integration tests exist
  4. L3 runs for final validation
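
In gatePlan terms, this default behaves roughly as if all four levels were requested explicitly (assuming the gatePlan shape shown in the next section; the default values for customChecks are not spelled out here):
{
  "gatePlan": {
    "levels": ["L0", "L1", "L2", "L3"]
  }
}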

Custom Gate Plans

Configure verification for your needs:
{
  "taskPrompt": "...",
  "workspaceSource": { "..." },
  "gatePlan": {
    "levels": ["L0", "L1"],
    "customChecks": {
      "lint": true,
      "typecheck": true,
      "test": true
    }
  }
}

When to Customize

  • Use only L0 for rapid prototyping where speed matters more than comprehensive testing.
  • Use all levels (L0-L3) for production-critical code where quality is paramount.
  • Emphasize L1/L2 when the codebase has good test coverage you want to leverage.
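
For the rapid-prototyping case, the gate plan could be trimmed to a single level (reusing the gatePlan fields shown above; placeholder values are illustrative):
{
  "taskPrompt": "...",
  "workspaceSource": { "..." },
  "gatePlan": {
    "levels": ["L0"]
  }
}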

Convergence

A run converges when verification passes at all required levels:
Iteration 1: L0 ✓, L1 ✗ (test failures)
Iteration 2: L0 ✓, L1 ✓, L2 ✗ (integration issue)
Iteration 3: L0 ✓, L1 ✓, L2 ✓, L3 ✓ → CONVERGED

Factors Affecting Convergence

Factor            Impact
Task clarity      Clearer prompts converge faster
Codebase quality  Well-structured code helps the AI
Test coverage     More tests provide better feedback
Task complexity   Complex tasks need more iterations

Iteration Limits

Default Limit

The default maximum is 10 iterations.

Configuring Limits

Set custom limits per work order:
{
  "taskPrompt": "...",
  "workspaceSource": { "..." },
  "maxIterations": 5
}

When Limits Are Reached

If the maximum number of iterations is reached without convergence:
  • Run status becomes failed
  • Best partial result may be available
  • You’re charged for completed iterations
If runs frequently hit the limit, consider improving task prompts, reducing task scope, or raising the limit for complex tasks.
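
When this happens, the run details reflect the failure. The example below follows the run shape shown under Monitoring Iterations; the maxIterations field is added here purely for illustration and may not appear in the actual response:
{
  "id": "run_abc123",
  "status": "failed",
  "iterations": 10,
  "maxIterations": 10
}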

Cost Implications

Each iteration incurs costs:
  • More iterations = higher cost
  • Verification level affects iteration cost
  • Failed iterations still cost credits

Cost Optimization

  • Write clear, specific task prompts
  • Use appropriate verification levels
  • Start with lower iteration limits and increase if needed
  • Break complex tasks into smaller work orders
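
For instance, combining a tight iteration limit with a trimmed gate plan keeps the cost per work order down. The values below are illustrative, and the task prompt is only a hypothetical example of a small, specific task:
{
  "taskPrompt": "Fix the null check in the login handler",
  "workspaceSource": { "..." },
  "gatePlan": { "levels": ["L0", "L1"] },
  "maxIterations": 3
}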

Monitoring Iterations

Track iteration progress in run details:
{
  "id": "run_abc123",
  "status": "running",
  "iterations": 3,
  "currentIteration": {
    "number": 3,
    "phase": "verification",
    "level": "L1"
  }
}