Introduction

What is dovetell?

dovetell is a context orchestration layer for AI-assisted software teams. Its north star: "I want to be asked fewer questions."

When teams use AI tools to build fast, context gets lost. Decisions made in chat threads, assumptions in someone's head, documentation that goes stale the moment work begins — all of it accumulates as context debt. dovetell exists to stop that accumulation.

One sentence: dovetell captures decisions, aligns intent, and keeps everyone from asking the same questions twice.

What dovetell is not

  • Not a documentation tool — those require humans to maintain them
  • Not a project management tool — no tasks, sprints, or deadlines
  • Not an LLM wrapper or chatbot
  • Not a code review tool

Three pillars

The name contains the product concept:

  • Capture (dove — the messenger carries it) — decisions and domain knowledge recorded as work happens
  • Align (dovetail — the joint locks it in place) — context stays synchronized across code, docs, and team
  • Tell (broadcast — it travels) — the right context reaches the right person at the right moment

Current state

Phase     What                                                Status
Phase 1   Team AI Maturity Assessment + Prompt Library        Live
Phase 2   SaaS platform — context base, unvetted queue, MCP   Planned
Phase 3   Queryable decision traces at scale                  Future
Introduction

Why it exists

Three failure modes that get worse as AI-assisted development accelerates.

01 — Tribal knowledge failure

Domain context lives in one person's head and never gets captured. When that person is heads-down or unavailable, the team stalls. Developers ask the same domain questions every sprint. The PM becomes a human search engine.

02 — Documentation drift

Governing docs are written once and go stale the moment work begins. Nobody trusts them, so nobody reads them. The gap between what the docs say and what the code does grows every sprint.

03 — Bi-directional blindness

Code evolves without informing docs. Docs change without reaching developers. The two never meet. As AI-assisted development accelerates, this gap widens faster than any team can manually close it.

The rise of vibe coding makes this acute. The more AI assists in building, the more critical it becomes to maintain a clean, authoritative, living context layer — or outputs drift from intent.

Introduction

Where dovetell fits

dovetell is the context injection layer in your agent harness. If you're not the model, you're the harness.

The agent stack

Model          Claude / GPT-4o / Gemini / Local
────────────────────────────────────
Agent Harness
  Tools        Cursor / Claude Code / Copilot
  dovetell  ←  context injection layer  ★
  Memory       filesystem / AGENTS.md / git
────────────────────────────────────
Your Codebase

Harness-as-a-Service platforms (Cursor SDK, Claude Code, GitHub Copilot) let you choose three things: which model, which tools, and which task. They don't give you domain context — what your team's definitions are, which decisions were made, what the architecture constraints are. That's dovetell's job.

One-liner: Cursor handles the runtime. dovetell handles what the runtime needs to know about your team.

Three phases of agent development

  • Phase 1 — Weights: bigger models, more data, fine-tuning
  • Phase 2 — Context: prompt engineering, RAG — dovetell starts here
  • Phase 3 — Harness: the environment the model operates in — dovetell ends here

The Assessment

Team AI Maturity Model

Four levels describing how systematically a team uses AI. Every team is somewhere on this curve. dovetell helps you move up it.

Scattered — 0 – 22 / 90 — Ad hoc
AI use is isolated. Context lives in individual chats. No shared standards or reuse.

Structured — 23 – 54 / 90 — Repeatable
Some repeatable practices but inconsistent. Context is scattered, handoffs vary.

Coordinated — 55 – 72 / 90 — Aligned
Shared context and standard prompts. Team works together with AI effectively.

Compounding — 73 – 90 / 90 — Systematic
AI context compounds over time. Every decision becomes searchable precedent.

Six capability areas

Teams are scored across six dimensions, 5 questions each, 0–3 points per question. Max 90 points.

#  Area                  What it measures
1  Shared Context        Does team-wide context exist and is it discoverable?
2  Prompt Reuse          Are high-value prompts captured and shared?
3  Team Handoffs         Is there a standard handoff format with clear ownership?
4  Knowledge Capture     Are insights and decisions captured systematically?
5  Review & Governance   Is there a review process for AI outputs?
6  Workflow Integration  Is AI integrated consistently across the team?
The Assessment

How it works

5 minutes. 30 questions. Personalized score, gaps, and next steps. A unique link to track your team's progress over time.

1 — Take the assessment
Answer 30 questions across 6 capability areas. Score each 0–3 (Not true → Very true). Skip anything that doesn't apply — skips score 0.

2 — Enter your email
Required. Optional: project name, role, team size, company, industry, primary AI tool, how you found us. No account required.

3 — Get your results
Your maturity level, score out of 90, breakdown by capability area, your strengths, your biggest gaps, and 3 recommended next steps.

4 — Get your unique link
A permanent URL for your project. Bookmark it. Share it. Return to retake and track how your team matures over time.

5 — See your recommendations
Personalized resources based on your level — free checklist, Starter Kit ($49), Pro Kit ($99), or a Setup Review session ($299).

Take the assessment: dovetell.io/team-assessment

The Assessment

Scoring

How points are calculated and how levels are assigned.

The math

6 sections
× 5 questions per section
× max 3 points per question
= 90 points maximum

Unanswered questions → 0 points
Skipped sections → 0 points for that section

Level thresholds

Level        Score    % of max
Scattered    0 – 22   0 – 24%
Structured   23 – 54  26 – 60%
Coordinated  55 – 72  61 – 80%
Compounding  73 – 90  81 – 100%

Thresholds are estimates. They will be recalibrated after the first 50 non-founder assessment completions using real score distribution data.

Answer scale

Score  Label
0      Not true
1      Somewhat true
2      Mostly true
3      Very true
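
Putting the three tables together: a minimal TypeScript sketch of how a submission could be scored and mapped to a level. The names and shapes here are illustrative assumptions, not dovetell's production code; only the rules themselves (30 answers, skips count as 0, the thresholds above) come from this page.

// Illustrative sketch of the scoring rules above; not dovetell's actual code.
type Answer = 0 | 1 | 2 | 3 | null;   // null = unanswered or skipped, counts as 0

const LEVELS = [
  { name: "Scattered",   min: 0,  max: 22 },
  { name: "Structured",  min: 23, max: 54 },
  { name: "Coordinated", min: 55, max: 72 },
  { name: "Compounding", min: 73, max: 90 },
] as const;

// 6 sections × 5 questions = 30 answers; unanswered questions contribute 0 points.
function totalScore(answers: Answer[]): number {
  return answers.reduce<number>((sum, a) => sum + (a ?? 0), 0);
}

function levelFor(score: number): string {
  return LEVELS.find((l) => score >= l.min && score <= l.max)?.name ?? "Unknown";
}

// Example: 28 answers of "Mostly true" (2) and 2 of "Somewhat true" (1) = 58/90.
const answers: Answer[] = Array.from({ length: 30 }, (_, i) => (i < 28 ? 2 : 1));
console.log(totalScore(answers), levelFor(totalScore(answers)));   // 58 "Coordinated"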
The Assessment

Tracking over time

Your unique link is your project's permanent home. Retake whenever your team's practices change and watch the trajectory.

How the ID system works

uid — userId. Permanent per browser. Generated once, stored locally. Ties all your projects together.

pid — projectId. One per team or initiative. Lives in your unique URL. Persists across every retake.

aid — assessmentId. Generated fresh every submission. Tracks each individual run within a project.

vid — versionId. Which question set was used. Currently v01. Bumped when questions change.

Hierarchy

email
└── uid  (you, across all projects)
    ├── pid-a  (Ops Analytics Team)
    │   ├── aid-1  Jan 2026 · v01 · 14/90 · Scattered
    │   ├── aid-2  Mar 2026 · v01 · 32/90 · Structured
    │   └── aid-3  May 2026 · v01 · 58/90 · Coordinated
    └── pid-b  (Data Infrastructure Squad)
        └── aid-1  Mar 2026 · v01 · 21/90 · Scattered
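
The same hierarchy, sketched as TypeScript types. These shapes are assumptions for illustration, not dovetell's actual schema.

// Hypothetical shapes for the ID hierarchy above; not dovetell's real schema.
interface AssessmentRun {
  aid: string;       // fresh per submission
  vid: string;       // question-set version, e.g. "v01"
  takenAt: string;   // ISO timestamp
  score: number;     // 0–90
  level: "Scattered" | "Structured" | "Coordinated" | "Compounding";
}

interface Project {
  pid: string;            // one per team or initiative, lives in the unique URL
  name?: string;          // optional project name
  runs: AssessmentRun[];  // the growth trajectory across retakes
}

interface User {
  uid: string;            // permanent per browser, generated once
  email: string;
  projects: Project[];    // everything one person tracks
}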

Your unique link

After completing the assessment you receive a link like:

dovetell.io/assessments/?pid=8f3c2a1b

Bookmark it. When you return, you'll see your last score and a retake button. Each retake generates a new aid while keeping the same pid — your project's growth trajectory builds automatically.

Starting a new project? Just go to dovetell.io/team-assessment without your existing link. A new pid is generated and you have a separate tracking thread.
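
A browser-side sketch of that flow, assuming localStorage for the uid and crypto.randomUUID() for generation. The storage key, URL handling, and ID format are all assumptions (the real pid above is a short hex string, not a UUID).

// Illustrative client-side flow; key names, URL shape, and ID format are assumptions.
function getOrCreateUid(): string {
  let uid = localStorage.getItem("dovetell-uid");
  if (!uid) {
    uid = crypto.randomUUID();             // generated once, permanent per browser
    localStorage.setItem("dovetell-uid", uid);
  }
  return uid;
}

function getPid(): string {
  // Returning visitors carry ?pid=... in their unique link; new visitors get a fresh one.
  const existing = new URLSearchParams(window.location.search).get("pid");
  return existing ?? crypto.randomUUID();
}

// Called on every submission: pid is stable per project, aid is always fresh.
function newSubmissionIds() {
  return {
    uid: getOrCreateUid(),
    pid: getPid(),
    aid: crypto.randomUUID(),
    vid: "v01",
  };
}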

The Platform

Phase 2 overview

The dovetell platform is coming. The prompt library and assessment are Phase 1 — validation that the problem is real and people will pay to solve it.

Phase 2 is the SaaS platform that makes the prompt library automatic — a living, queryable context layer that your team doesn't have to maintain manually.

The unvetted → reviewed → becomes truth loop

Everything the platform captures starts as unvetted. A PM reviews, accepts, edits, or rejects. Accepted items become vetted — authoritative, queryable, and surfaced to the team automatically.

No automation without human review. The human in the loop is a feature, not a limitation.
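
Reduced to code, the loop is a single state transition. A minimal sketch, with assumed names throughout:

// Hypothetical model of the review loop; the states mirror the prose above.
type ItemStatus = "unvetted" | "vetted" | "rejected";

interface ContextItem {
  id: string;
  body: string;         // the captured decision, assumption, or definition
  status: ItemStatus;   // everything starts as "unvetted"
  reviewedBy?: string;  // set only after human review
}

// The PM's review actions; deferring an item simply leaves it unvetted.
function review(
  item: ContextItem,
  action: "accept" | "edit" | "reject",
  reviewer: string,
  editedBody?: string,
): ContextItem {
  if (action === "reject") {
    return { ...item, status: "rejected", reviewedBy: reviewer };
  }
  return {
    ...item,
    body: action === "edit" && editedBody !== undefined ? editedBody : item.body,
    status: "vetted",   // nothing becomes truth without this step
    reviewedBy: reviewer,
  };
}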

Join the waitlist: dovetell.io/#waitlist

The Platform

Capabilities

Four capabilities, built in dependency order. The queue is the connective tissue — nothing becomes truth without it.

1 — Context Console

The dashboard. A context health monitor, not a document editor. The PM opens it at standup — sees the drift score, unvetted count, stale definitions, decisions captured this sprint. Red/Yellow/Green in under 30 seconds.

2 — Context Query (MCP)

dovetell as an MCP server. A developer types @dovetell what's our defect rate threshold inside Cursor or Claude Code and gets the team's answer — vetted, sourced, with decision trace. "I want to be asked fewer questions" made literal.
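
The MCP capability is still planned, so what follows is a sketch only: a minimal MCP server in TypeScript built on the official @modelcontextprotocol/sdk, exposing a hypothetical query_context tool with a stubbed lookup. None of these names are confirmed.

// Sketch only: a hypothetical dovetell MCP server, not the planned implementation.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Stub: a real server would query the vetted context base and attach
// sources and decision traces to each answer.
async function lookupVettedAnswer(question: string): Promise<string> {
  return `No vetted answer found for: ${question}`;
}

const server = new McpServer({ name: "dovetell", version: "0.0.1" });

server.tool(
  "query_context",                                   // hypothetical tool name
  "Answer a question from the team's vetted context base",
  { question: z.string() },
  async ({ question }) => ({
    content: [{ type: "text", text: await lookupVettedAnswer(question) }],
  }),
);

await server.connect(new StdioServerTransport());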

3 — Context Import

Drop in existing docs — markdown, Notion exports, Confluence exports. dovetell extracts decisions, assumptions, and definitions. Everything starts unvetted. First value in under an hour.

4 — Unvetted Queue

The human-in-the-loop gate. Every automated update lands here first. The PM accepts, edits, rejects, or defers. Accepted items become vetted truth. This is where tribal knowledge stops being tribal.

Capability   Feeds               Status
Import (3)   Queue (4)           Planned
Queue (4)    Console (1)         Planned
Console (1)  surfaces Queue (4)  Planned
Queue (4)    Query (2)           Planned
Reference

Definitions

Shared vocabulary. When a term is used in dovetell documentation or code, it means what's defined here.

Core concepts

context base — The living collection of decisions, assumptions, definitions, and policies that governs how a team's AI-assisted work should behave.

context drift — The gap that grows between what a team's documents say and what the code or work actually does.

context orchestration — Assembling, aligning, and delivering the right context to the right person or tool at the right time.

tribal knowledge failure — Domain context that lives in one person's head and never gets captured.

unvetted — Status for any item captured automatically but not yet reviewed by a human.

vetted — Status for any item reviewed and accepted by a designated reviewer. Authoritative.

decision trace — A record of what was decided, when, by whom, under which policy.

drift score — A metric (0–100) indicating how far a team's documented context has drifted from their actual work.

dog food loop — Using dovetell to manage the context of building dovetell. Active from day one.

ID definitions

uid (userId) — Permanent per browser. Generated once on first submission. Ties all projects for one person together.

pid (projectId) — One per project or team. Persists across retakes. Enables growth tracking.

aid (assessmentId) — Fresh per submission. Tracks individual runs within a project.

vid (versionId) — Which question set. Currently v01. Bumped when questions change.

Reference

Decisions log

Key decisions that shaped the product. One file, scannable, honest about tradeoffs.

Assessment architecture

Client-side ID generation — No backend required; IDs sent to Formspree for backloading when Postgres exists.

uid permanent per browser — Ties projects to a person without requiring account creation.

pid persists across retakes — Enables growth tracking per project.

Unanswered questions score 0 — Defaulting to 1 caused artificial scores on empty submissions.

Page separation (assessment / assessments / recommendations) — Each page one job; retake loop prevention.

dovetell-data.json as content source — Questions and thresholds not hardcoded in HTML.
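
That last decision implies the assessment pages are data-driven. As a rough guess at the kind of shape such a file could take (the actual schema is not documented here):

// Hypothetical shape for dovetell-data.json; the real schema is not shown here.
interface AssessmentData {
  vid: string;                       // question-set version, e.g. "v01"
  thresholds: { level: string; min: number; max: number }[];
  sections: {
    name: string;                    // e.g. "Shared Context"
    questions: string[];             // 5 per section
  }[];
}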

Infrastructure

GitHub Pages over Carrd — Free, version controlled, no character limits, clean URLs.

BYOK / local inference only — Not in the API cost business; trust signal for regulated industries.

.dovetell-context/ folder — Hidden folder convention; dovetell running on itself.

Privacy

Scores seen by product team, not sold/shared — Honest disclosure without alarming language.

Project name field with proprietary content disclaimer — Protects IP boundary; sets expectations before input.
Reference

Privacy

Plain English. No surprises.

What we collect

Required: email address only.

Optional: project name, role, team size, company, industry, primary AI tool, how you found us.

Automatically: assessment scores, section breakdown, raw answers, skip count, unique IDs (uid/pid/aid/vid), timestamp.

How we use it

  • To send you your results and unique tracking link
  • To enable progress tracking over time (via pid)
  • The dovetell product team sees your scores to improve the product
  • Occasional product updates — opt out any time by replying "unsubscribe"

What we don't do

  • Never sell your data
  • Never share with third parties or advertisers
  • No cookies (Plausible analytics is cookieless)
  • No tracking pixels

Full privacy policy: dovetell.io/privacy

Project name disclaimer

The project name field shown in the assessment gate says: "⚠ Don't include sensitive, confidential, or proprietary information here." We mean it. dovetell is not a secure document store.