Human-controlled AI delivery · Built for production

Build with AI speed.
Ship with human-grade evidence.

ASEWAVE is a human-controlled AI software delivery method for solo developers and small teams building production systems where every claim must be verified, replayable, and backed by evidence.

Human-owned · Adversarial verification · Evidence-first · Replayable audit trail · Private engine
Why ASEWAVE exists

AI agents can write code fast. They can also write confident fiction fast.

ASEWAVE exists because AI-generated progress should not be accepted as truth just because it is fluent. The method forces work through investigation, implementation, evidence capture, disk verification, and independent review before a phase closes.

The goal is not autonomous AI development. The goal is controlled AI acceleration with a replayable audit trail.

The method

Four principles. One audit trail.

Wi24RD develops XDRagon Monitor using ASEWAVE — a disciplined, human-owned, AI-assisted method where every phase starts with an investigation and ends with a walkthrough tied to verifiable evidence.

A
Augmented
AI increases development volume. Human ownership keeps direction, scope and judgment grounded.
SE
Solo Engineering
One developer owns the product, prioritization, architecture and every destructive decision.
AV
Adversarial Verification
Independent verification checks claims against code, commits and test evidence before changes are accepted.
+E
Evidence
Commit hashes, screenshots, test output and walkthroughs — not narrative claims alone.
Phase rhythm

Every phase starts with an investigation. Every phase ends with a walkthrough.

Claims are tied to evidence: commit hashes, test output, screenshots, file paths, diffs and reproducible checks — not just "the feature works".

Phase A

Investigation

The executor AI verifies all prompt assumptions against the actual repository state, then halts. No code is written yet.

Phase B

Implementation

Code, tests, migrations and commits are produced — but only after the human reviews the Phase A investigation and issues a green light.

Phase C

Verification

Artefacts captured first. Verified on disk with ls -la and md5sum. Verbatim outputs pasted before any narrative is written.

🔐

Phase Boundary Re-Read Protocol. At every phase boundary, the executor re-reads the original prompt verbatim and confirms prior outputs match stated requirements — preventing the LLM drift that accumulates over long jobs.

Operating model

One human owns direction. Separate AI roles produce and challenge the work.

AI agents assist with specification, implementation and verification — but their claims must be checked against the actual codebase. Scope, prioritization and every destructive decision stay with the human.

Reviewer / Prompt Author

Turns intent into structured phase prompts. Reviews investigations, challenges claims not backed by evidence. Does not write production code.

Executor Agent

Verifies assumptions against the real repository, implements after approval, captures artefacts, and produces walkthrough evidence.

Human Gatekeeper

Holds scope, priorities, release decisions and every destructive operation. No phase closes without human approval.

ASEWAVE Operating Model — Phase A → B → C with human gate points
The four roles

Separation of concerns at every level.

ASEWAVE is role-based, not tool-specific. Specific products and models may change. The role separation should not — a single agent reviewing its own work optimises for confidence, not correctness.

01

Human Gatekeeper

Owns scope, priorities, architecture direction, commercial direction, destructive decisions, and release decisions. Reviews prompts before execution and outputs before merge.

Judgment density, not typing speed
02

Reviewer / Prompt Author

Converts intent into structured phase prompts. Defines investigation gates. Reviews Phase A before approving Phase B. Checks Phase C walkthroughs against requirements and evidence. Does not write production code.

Structure, review, verify — not execute
03

Executor Agent

Reads the prompt, inspects the actual repository, verifies assumptions, and stops after investigation. After human approval: implements, commits, runs tests, captures artefacts, and produces walkthrough evidence.

Implement, capture, evidence — not approve
04

Evidence Verifier

Checks that claims match primary evidence: file existence, non-empty artefacts, unique hashes, test output presence, schema verification, git history alignment, and absence of documentation drift. The production verifier logic is part of the private engine.

Check claims — not generate them
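As an illustration only, a few of the public checks named above can be sketched in plain Python. This is not the private engine's verifier logic; `verify_artefacts` is a hypothetical helper that applies three of the listed checks: file existence, non-empty artefacts, and unique MD5 hashes.

```python
import hashlib
from pathlib import Path

def verify_artefacts(paths):
    """Minimal evidence checks: each artefact must exist, be non-empty,
    and have an MD5 hash distinct from every other artefact's."""
    failures = []
    seen = {}  # digest -> first path with that digest
    for p in map(Path, paths):
        if not p.is_file():
            failures.append(f"{p}: missing")
            continue
        data = p.read_bytes()
        if not data:
            failures.append(f"{p}: empty")
            continue
        digest = hashlib.md5(data).hexdigest()
        if digest in seen:
            failures.append(f"{p}: duplicate of {seen[digest]}")
        else:
            seen[digest] = p
    return failures  # empty list means every check passed
```

An empty return means the artefacts passed; any entry in the list is a red flag held up to the human gate.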
Evidence discipline

Not just "it works." What changed, why, how it was tested.

The result is an audit trail by construction — what changed, why it changed, how it was tested, and what evidence supports the claim. Every walkthrough leads with disk reality that an external reviewer can replay independently.

Capture first. Artefacts are saved to disk before any prose is written.
Verify on disk. ls -la and md5sum outputs confirm existence and uniqueness.
§C.0 block mandatory. Walkthrough section C.0 contains verbatim terminal output — no exceptions.
14 red-flag checks. Automated review catches fabrication patterns before a phase closes.
§C.0 — Artefact verification
$ ls -la evidence/phase-c/
total 312
-rw-r--r--  1 ag  staff  142832  flow-login.png
-rw-r--r--  1 ag  staff    4218  pytest-output.txt
-rw-r--r--  1 ag  staff    1094  api-healthcheck.log

$ md5sum evidence/phase-c/*
7f9c2a14...  flow-login.png
3aa1dc88...  pytest-output.txt
91bd047f...  api-healthcheck.log

$ python -m pytest tests/ -v 2>&1 | tail -5
tests/test_auth.py::test_login PASSED
tests/test_api.py::test_health PASSED
✓ 47 passed in 3.42s
✓ zero duplicate MD5 hashes
✓ artefacts exist before prose
⚑ phase closes only on human review
Principles

What ASEWAVE is — and is not.

The methodology is built around controlled augmentation, adversarial separation, and mechanical proof. Understanding its limits is as important as understanding its strengths.

Human decision points at every gate

AI agents execute the human's strategy. No agent commits, closes a phase, or makes scope decisions autonomously.

Not autonomous AI development

ASEWAVE is not an agentic loop where AI runs until it decides it's done. The human green-lights every phase boundary.

Adversarial separation by design

Spec-author and executor are different agents with different roles. Neither trusts the other's claims without verification.

Not "AI-assisted coding"

That phrase means a human types with AI suggestions. ASEWAVE inverts it: AI types code, the human directs. The discipline is in the orchestration.

Replayable audit trail by construction

Commits, walkthroughs, screenshots, hashes and test outputs back every phase. An external reviewer can verify independently.

Not faster than a senior team

It scales one developer to mid-team velocity, not enterprise velocity. The human review bandwidth is deliberately preserved, not removed.

Fit

When ASEWAVE applies.

Best fit

  • Solo developer or two-person team building production software
  • Domain where audit trails matter — security tooling, regulated industries, infrastructure
  • Projects where evidence-based release management is non-negotiable
  • Multi-month build where context preservation across sessions is critical
  • Mixed AI tier access — token-budgeted reviewer + larger executor

Weaker fit

  • Pure prototyping where shipping ugly fast beats shipping correctly
  • Single-file scripts or one-off automations
  • Teams with many engineers and a preference for human-only review
  • Projects with no need for replayable audit trails
Public method

The method is public.

This site describes the public ASEWAVE principles: human ownership, adversarial separation, phase-gated delivery, evidence-first verification, and replayable audit trails.

The public method is enough to understand and apply ASEWAVE in spirit.

Private engine

The engine is private.

The operational engine remains private while it is refined in active XDRagon development. That private layer includes prompt contracts, verifier logic, memory structure, phase templates, red-flag checks, handoff conventions, routing rationale, and recovery patterns.

The private engine is what makes the workflow repeatable, fast, and commercially deployable at production depth.

Current status
In production use

ASEWAVE is currently used in the production development of XDRagon. The methodology is documented publicly for reference, discussion, and attribution.

The private ASEWAVE Engine is not currently released. Commercial access, partnerships, or reference implementations may be considered later — but there is no public support, training, or implementation package at this time.

Attribution & contact

The public method is available now.
The private engine remains closed.

ASEWAVE is a methodology, not a product. Anyone is welcome to apply it — attribution appreciated. Inquiries about future commercial access or partnerships may be directed to wi24rd.com.

ASEWAVE — Augmented Solo Engineering With Adversarial Verification + Evidence
Coined by wi24rd.com during the XDRagon development process.
The methodology was refined and documented with AI-assisted review, including Claude as a methodology collaborator.
Attribution appreciated where the term is used.