ASEWAVE Whitepaper

Augmented Solo Engineering
With Adversarial Verification + Evidence

Build with AI speed. Ship with human-grade evidence.

ASEWAVE is a human-controlled AI software delivery method for building production systems at accelerated pace without trusting AI-generated claims. This whitepaper describes the method in full — its background, design principles, roles, phase rhythm, evidence discipline, and the boundary between the public method and the private operational engine.

Public method · Private engine · In production use
Jan Tingsvad · XDRagon · Version 1.0 · 2026
Abstract

A human-controlled AI delivery method.

ASEWAVE is a human-controlled AI software delivery method for building production systems at accelerated pace without trusting AI-generated claims.

The method separates specification, execution, verification, and acceptance into distinct roles. A human owns direction and approval. AI agents assist with investigation, implementation, and review. Every phase is gated by evidence that can be replayed by an external reviewer.

ASEWAVE was developed during production work on XDRagon as a response to a practical problem: AI agents can produce implementation volume quickly, but they can also produce fluent, plausible, and incorrect claims about what they changed, tested, deleted, verified, or understood.

ASEWAVE treats that risk as a first-class engineering concern.


1 — Background problem

The bottleneck moves.

Modern AI coding agents can generate code, tests, migrations, documentation, and summaries at a speed that changes the economics of solo development. The bottleneck is no longer typing speed or implementation volume. It shifts to review bandwidth, context control, claim verification, and the ability to distinguish real progress from plausible narrative.

This creates several recurring risks:

  • An agent claims tests passed without preserving test output
  • An agent claims a file was removed without verifying git history
  • An agent assumes a schema instead of inspecting the actual schema
  • An agent writes a walkthrough after the fact that describes artefacts it did not actually capture
  • An agent drifts from the original prompt during long-running work
  • An agent reviews its own work optimistically

ASEWAVE was designed to reduce these failure modes by forcing evidence before acceptance.


2 — Definition

Augmented Solo Engineering With Adversarial Verification + Evidence.

ASEWAVE describes a development method where:

  • One human owns the project direction
  • AI agents increase implementation and analysis volume
  • Specification and execution are separated
  • Claims are checked against primary evidence
  • Every phase has explicit gates
  • Nothing closes on narrative alone

The method is not about trusting AI more. It is about designing a workflow where trust is not required.


3 — Design principles

Six load-bearing principles.

3.1 Human-owned development

The human owns scope, priorities, architecture, risk acceptance, destructive operations, merge decisions, and release decisions. AI agents can recommend, implement, summarize, and challenge. They do not own the product.

3.2 Adversarial separation

A single AI agent reviewing its own work tends to optimise for completion and confidence. ASEWAVE separates roles so that one role can produce and another can challenge. The reviewer does not assume the executor is correct. The executor does not assume the prompt is correct during investigation. Both are anchored to repository state and evidence.

3.3 Evidence-first acceptance

A claim is not accepted because it is fluent. It is accepted when it is backed by primary evidence — commits, diffs, test output, logs, screenshots, saved artefacts, checksums, file listings, endpoint responses, and runtime output.

3.4 Phase-gated progress

ASEWAVE uses process gates. A phase cannot silently flow into the next phase. Investigation must be reviewed before implementation. Verification must produce evidence before narrative. Closure requires independent review.

3.5 Replayability

The audit trail should be replayable. A third-party reviewer should be able to inspect the walkthrough, locate the referenced artefacts, compare them with repository state, and verify whether the claim is supported.

3.6 Private operational depth

The public method explains the principles. The private engine contains the production-grade operational layer: prompt contracts, verifier logic, memory structure, phase templates, red-flag checks, handoff conventions, routing rationale, and recovery patterns.


4 — Roles

Role-based, not tool-specific.

Specific products and models may change. The role separation should not.

4.1 Human Gatekeeper

The human makes the decisions: reviewing prompts before execution, investigations before implementation, and outputs before merge, and controlling releases. The human is not merely a passive approver but the strategic control layer.

The human's leverage is judgment density, not typing speed.

4.2 Reviewer / Prompt Author

Converts intent into structured work. Writes phase prompts, defines investigation gates, states expected artefacts, reviews investigation reports, challenges implementation claims, checks walkthroughs against requirements, and verifies that evidence exists. The reviewer does not write production code.

4.3 Executor Agent

Reads the prompt, inspects the actual repository, verifies assumptions, stops after investigation, implements after approval, writes or updates tests, captures evidence, and produces walkthroughs. The executor does not approve its own work.

4.4 Evidence Verifier

The verifier is the mechanical or semi-mechanical review layer. It checks for patterns that humans and agents may miss — missing files, duplicated screenshots, absent test output, schema assumptions, stale documentation, and unverified deletion claims.

The exact production verifier used in XDRagon development is part of the private ASEWAVE Engine.


5 — Phase rhythm

Phase A → Phase B → Phase C.

ASEWAVE uses a repeating three-phase rhythm. The phase boundaries are process gates, not suggestions.

Phase A

Investigation

The executor verifies the prompt against the actual repository before writing code. Checks file existence, current code vs. assumption, schemas, APIs, routes, config names, hidden dependencies, and scope safety. Documents findings and stops. No implementation happens in Phase A.

Human Gate

Review before implementation

The human and reviewer inspect the investigation. They may approve Phase B, revise scope, reject the task, request more investigation, split the work, or correct assumptions. This gate prevents wrong assumptions from becoming implemented code.

Phase B

Implementation

The executor implements the approved change. Expected outputs: code changes, tests, migrations, documentation updates, commits, notes about deviations. Implementation claims should be tied to files, diffs, and commits.

Phase C

Verification

The executor runs the system, tests relevant flows, captures artefacts, verifies those artefacts on disk, and produces a walkthrough. Phase C must not begin as prose — it begins as evidence.

Independent Review

Review before closure

A separate reviewer checks the walkthrough against the original prompt, Phase A investigation, Phase B implementation, Phase C evidence, git history, repository state, and runtime outputs where relevant. The phase closes only when review clears.
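The rhythm above can be sketched as a small state machine in which no transition happens without an explicit approval record. This is an illustrative sketch only; the names (`Phase`, `PhaseGate`, `advance`) are hypothetical and not part of the ASEWAVE Engine.

```python
from enum import Enum, auto

class Phase(Enum):
    INVESTIGATION = auto()   # Phase A
    IMPLEMENTATION = auto()  # Phase B
    VERIFICATION = auto()    # Phase C
    CLOSED = auto()

# Legal transitions. Each one corresponds to a gate in the rhythm:
# A -> B is the human gate, C -> CLOSED is the independent review.
TRANSITIONS = {
    Phase.INVESTIGATION: Phase.IMPLEMENTATION,
    Phase.IMPLEMENTATION: Phase.VERIFICATION,
    Phase.VERIFICATION: Phase.CLOSED,
}

class PhaseGate:
    def __init__(self):
        self.phase = Phase.INVESTIGATION

    def advance(self, approved_by):
        """Move to the next phase only with an explicit approval record."""
        if approved_by is None:
            raise PermissionError("a phase cannot silently flow into the next phase")
        self.phase = TRANSITIONS[self.phase]
        return self.phase

gate = PhaseGate()
gate.advance(approved_by="human")      # A -> B after the human gate
gate.advance(approved_by="human")      # B -> C
gate.advance(approved_by="reviewer")   # C -> closed after independent review
```

The point of the sketch is the `PermissionError`: advancing without a named approver is an error by construction, not a style violation.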


6 — Capture-Before-Document protocol

Capture before document.

Capture-Before-Document is the most distinctive ASEWAVE discipline. It exists because an AI agent under pressure may produce a plausible description of verification work that it did not actually perform.

Step 1 — Capture first

Run the relevant system or test. Capture the artefacts: screenshots, logs, endpoint output, test results, generated files, before/after records.

Step 2 — Verify on disk

Confirm that the artefacts exist on disk and have non-zero size. Where multiple screenshots are involved, duplicate hashes should be treated as suspicious unless explicitly justified.

$ ls -la evidence/phase-c/
-rw-r--r--  1 agent  staff  142832  flow-login.png
-rw-r--r--  1 agent  staff    4218  pytest-output.txt
-rw-r--r--  1 agent  staff    1094  api-healthcheck.log

$ md5sum evidence/phase-c/*
7f9c2a14...  flow-login.png
3aa1dc88...  pytest-output.txt
91bd047f...  api-healthcheck.log

✓ zero duplicate hashes — artefacts are distinct
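The disk-level checks in Step 2 can be automated with a few lines. The sketch below is illustrative, not the production verifier; the function name `verify_artefacts` is hypothetical.

```python
import hashlib
from pathlib import Path

def verify_artefacts(paths):
    """Return red flags for a list of claimed evidence files.

    Checks the three conditions from Step 2: each file exists,
    is non-zero in size, and no two files share a content hash.
    An empty return value means the artefacts are present and distinct.
    """
    flags = []
    seen = {}  # content hash -> first path with that hash
    for p in map(Path, paths):
        if not p.is_file():
            flags.append(f"missing: {p}")
            continue
        if p.stat().st_size == 0:
            flags.append(f"zero-byte: {p}")
            continue
        digest = hashlib.md5(p.read_bytes()).hexdigest()
        if digest in seen:
            flags.append(f"duplicate hash: {p} == {seen[digest]}")
        else:
            seen[digest] = p
    return flags
```

A duplicate-hash flag is not automatically a failure, but it shifts the burden: the executor must justify why two screenshots claimed to show different states have identical bytes.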

Step 3 — Paste evidence before prose

The walkthrough should open with an evidence block before any narrative explanation. The reviewer should be able to see the evidence of the evidence.

Step 4 — Then write narrative

Only after the evidence exists should the executor describe what the artefacts show. The sequence matters. If prose comes first, the risk of fabricated verification increases.


7 — Phase Boundary Re-Read Protocol

Long AI sessions drift.

A model may begin with a correct understanding of the task and slowly move away from the original constraints as implementation proceeds. ASEWAVE counters this with a Phase Boundary Re-Read Protocol.

At each boundary, the executor re-reads the original prompt and confirms that the output of the previous phase matches the stated requirements. This is especially important before:

  • Moving from investigation to implementation
  • Moving from implementation to verification
  • Writing final walkthroughs
  • Declaring completion

The goal is to re-anchor the work to the original requirement before momentum becomes authority.
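The shape of the re-read step can be illustrated with a deliberately naive sketch: compare the requirements stated in the original prompt against what the previous phase claims to have delivered. This is an assumption-laden toy (exact string matching on requirement labels); in practice the re-read is human and reviewer judgment, not string comparison.

```python
def boundary_reread(prompt_requirements, phase_claims):
    """Return the original requirements that the previous phase's
    output does not explicitly claim to satisfy.

    A non-empty result blocks the boundary crossing until the gap is
    either closed or consciously re-scoped by the human.
    """
    claimed = {c.strip().lower() for c in phase_claims}
    return [r for r in prompt_requirements if r.strip().lower() not in claimed]
```

The useful property is the direction of the comparison: the original prompt is the reference, and momentum in the session never gets to redefine it.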


8 — Verification categories

What the verifier checks.

The private ASEWAVE Engine contains detailed red-flag checks. The public method describes the categories without exposing the production implementation.

8.1 Artefact existence

Referenced files must exist at the paths claimed.

8.2 Artefact non-emptiness

Screenshots, logs, and output files should not be zero-byte placeholders.

8.3 Artefact uniqueness

Where multiple screenshots are claimed to show different states, hashes should not be identical unless explained.

8.4 Test output presence

"Tests passed" is not enough. The walkthrough should include the relevant command and output summary.

8.5 Schema verification

Claims about database fields, API contracts, config keys, or route structures should be based on inspected source, not memory.

8.6 Git history alignment

Claims about created, changed, moved, or deleted files should match git history and diffs.

8.7 Runtime verification

If a feature is runtime-visible, verification should include runtime evidence, not just code inspection.

8.8 Documentation drift

Docs should not be updated in ways that conflict with actual implementation state.

8.9 Silent failure detection

Startup and runtime logs should be checked for silent failures where relevant.

8.10 Scope drift

The completed work should match the approved scope, not an expanded interpretation.
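Two of the categories above can be sketched mechanically. The patterns below are illustrative assumptions, not the private engine's checks: the pytest-style summary regex in 8.4 would need a pattern per test runner, and the 8.6 check assumes a git working copy.

```python
import re
import subprocess

def has_test_output(walkthrough_text):
    """8.4: 'tests passed' must be accompanied by a result summary.

    Here we look for a pytest-style summary such as '12 passed';
    other runners would need their own patterns."""
    return re.search(r"\d+ passed", walkthrough_text) is not None

def deletion_matches_git(path, repo_dir="."):
    """8.6: a 'file was removed' claim should appear in git history
    as a recorded deletion, not merely be absent from the working tree."""
    out = subprocess.run(
        ["git", "log", "--diff-filter=D", "--name-only",
         "--pretty=format:", "--", path],
        cwd=repo_dir, capture_output=True, text=True, check=True,
    ).stdout
    return path in out.splitlines()
```

Both checks share the same stance: the claim is tested against a primary source (captured output, git history), never against the narrative that made the claim.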


9 — What ASEWAVE is not

Controlled acceleration — not autonomous development.

Not autonomous AI development

ASEWAVE does not let agents run until they decide they are done. The human holds gates and approval.

Not vibe coding

The method rejects "looks good" as a completion standard. Evidence is required.

Not a single-agent loop

ASEWAVE avoids having one agent specify, implement, verify, and approve its own work.

Not ordinary AI-assisted coding

In ordinary AI-assisted coding, the human often writes code with AI suggestions. In ASEWAVE, AI may produce implementation volume while the human directs, reviews, and accepts based on evidence.

Not a correctness guarantee

ASEWAVE reduces unchecked claims and improves reviewability. It does not eliminate bugs, architectural mistakes, or human judgment errors.


10 — Where ASEWAVE fits

Strongest where speed and evidence both matter.

● Good fit

  • Solo developers building serious production software
  • Two-person teams
  • Security tooling and infrastructure
  • Internal platforms and regulated environments
  • Long-running products with meaningful audit needs
  • Projects where speed and evidence both matter

● Less good fit

  • Disposable prototypes
  • Single-file automations and pure demos
  • Projects with no need for auditability
  • Environments where process overhead is more costly than defects


11 — Public method vs private engine

The method is public. The engine is private.

11.1 Public method

The public method includes human ownership, role separation, phase rhythm, evidence-first verification, replayable audit trails, public terminology, and broad implementation guidance. This is enough for others to understand and apply ASEWAVE in spirit.

11.2 Private engine

The private engine includes production operational details: prompt-construction logic, memory structure, context recovery patterns, verifier prompts, red-flag implementation, workflow contracts, phase templates, handoff conventions, routing strategies, and project-specific playbooks.

The public method is enough to understand ASEWAVE. The private engine is what makes the method repeatable, fast, and commercially deployable at production depth.


12 — Current production status

In active use on XDRagon.

ASEWAVE is currently used in production development of XDRagon. The methodology is documented publicly for reference, discussion, and attribution.

The private ASEWAVE Engine is not currently released. Commercial access, partnerships, or reference implementations may be considered later — but there is no public support, training, or implementation package at this time.

Inquiries about future commercial access or partnerships may be directed to wi24rd.com.


13 — Attribution

Coinage and credit.

ASEWAVE — Augmented Solo Engineering With Adversarial Verification + Evidence — was coined during the XDRagon development process by Jan Tingsvad. The methodology was refined and documented with AI-assisted review, including Claude as a methodology collaborator.

Attribution is appreciated where the term is used in writing, talks, or implementations:

ASEWAVE — Augmented Solo Engineering With Adversarial Verification + Evidence.
Coined by Jan Tingsvad during the XDRagon development process.
The methodology was refined and documented with AI-assisted review, including Claude as a methodology collaborator.
Attribution appreciated where the term is used.

ASEWAVE is not a promise that AI makes software easy.

It is a response to the fact that AI makes software production faster.

When production accelerates, verification must accelerate with it. ASEWAVE makes that verification part of the delivery method itself.