What is dovetell?
dovetell is a context orchestration layer for AI-assisted software teams. Its north star: "I want to be asked fewer questions."
When teams use AI tools to build fast, context gets lost. Decisions made in chat threads, assumptions in someone's head, documentation that goes stale the moment work begins — all of it accumulates as context debt. dovetell exists to stop that accumulation.
One sentence: dovetell captures decisions, aligns intent, and keeps everyone from asking the same questions twice.
What dovetell is not
- Not a documentation tool — those require humans to maintain them
- Not a project management tool — no tasks, sprints, or deadlines
- Not an LLM wrapper or chatbot
- Not a code review tool
Three pillars
The name contains the product concept:
- Capture (dove — the messenger carries it) — decisions and domain knowledge recorded as work happens
- Align (dovetail — the joint locks it in place) — context stays synchronized across code, docs, and team
- Tell (broadcast — it travels) — the right context reaches the right person at the right moment
Current state
| Phase | What | Status |
|---|---|---|
| Phase 1 | Team AI Maturity Assessment + Prompt Library | Live |
| Phase 2 | SaaS platform — context base, unvetted queue, MCP | Planned |
| Phase 3 | Queryable decision traces at scale | Future |
Why it exists
Three failure modes that get worse as AI-assisted development accelerates.
01 — Tribal knowledge failure
Domain context lives in one person's head and never gets captured. When that person is heads-down or unavailable, the team stalls. Developers ask the same domain questions every sprint. The PM becomes a human search engine.
02 — Documentation drift
Governing docs are written once and go stale the moment work begins. Nobody trusts them, so nobody reads them. The gap between what the docs say and what the code does grows every sprint.
03 — Bi-directional blindness
Code evolves without informing docs. Docs change without reaching developers. The two never meet. As AI-assisted development accelerates, this gap widens faster than any team can manually close it.
The rise of vibe coding makes this acute. The more AI assists in building, the more critical it becomes to maintain a clean, authoritative, living context layer — or outputs drift from intent.
Where dovetell fits
dovetell is the context injection layer in your agent harness. If you're not the model, you're the harness.
The agent stack
```
Model           Claude / GPT-4o / Gemini / Local
────────────────────────────────────────────────
Agent Harness
  Tools         Cursor / Claude Code / Copilot
  dovetell    ← context injection layer ★
  Memory        filesystem / AGENTS.md / git
────────────────────────────────────────────────
Your Codebase
```
Harness-as-a-Service platforms (Cursor SDK, Claude Code, GitHub Copilot) give you three things: which model, which tools, which task. They don't give you domain context — what your team's definitions are, which decisions were made, what the architecture constraints are. That's dovetell's job.
One liner: Cursor handles the runtime. dovetell handles what the runtime needs to know about your team.
Three phases of agent development
- Phase 1 — Weights: bigger models, more data, fine-tuning
- Phase 2 — Context: prompt engineering, RAG — dovetell starts here
- Phase 3 — Harness: what environment the model should operate in — dovetell ends here
Team AI Maturity Model
Four levels describing how systematically a team uses AI. Every team is somewhere on this curve. dovetell helps you move up it.
Six capability areas
Teams are scored across six dimensions, 5 questions each, 0–3 points per question. Max 90 points.
| # | Area | What it measures |
|---|---|---|
| 1 | Shared Context | Does team-wide context exist and is it discoverable? |
| 2 | Prompt Reuse | Are high-value prompts captured and shared? |
| 3 | Team Handoffs | Is there a standard handoff format with clear ownership? |
| 4 | Knowledge Capture | Are insights and decisions captured systematically? |
| 5 | Review & Governance | Is there a review process for AI outputs? |
| 6 | Workflow Integration | Is AI integrated consistently across the team? |
How it works
5 minutes. 30 questions. Personalized score, gaps, and next steps. A unique link to track your team's progress over time.
Take the assessment: dovetell.io/team-assessment
Scoring
How points are calculated and how levels are assigned.
The math
```
  6 sections
× 5 questions per section
× 3 points max per question
= 90 points maximum
```
- Unanswered questions → 0 points
- Skipped sections → 0 points for that section
Level thresholds
| Level | Score | % of max |
|---|---|---|
| Scattered | 0 – 22 | 0 – 24% |
| Structured | 23 – 54 | 26 – 60% |
| Coordinated | 55 – 72 | 61 – 80% |
| Compounding | 73 – 90 | 81 – 100% |
Thresholds are estimates. They will be recalibrated after the first 50 non-founder assessment completions using real score distribution data.
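The scoring math and level thresholds above can be sketched as follows (a minimal illustration; the function and data names are assumptions, not dovetell's actual code):

```javascript
// Level thresholds from the table above.
const LEVELS = [
  { name: "Scattered",   min: 0,  max: 22 },
  { name: "Structured",  min: 23, max: 54 },
  { name: "Coordinated", min: 55, max: 72 },
  { name: "Compounding", min: 73, max: 90 },
];

// sections: up to 6 arrays of up to 5 answers, each scored 0-3.
// Unanswered questions (null) and skipped sections (null) score 0.
function scoreAssessment(sections) {
  let total = 0;
  for (const section of sections ?? []) {
    for (const answer of section ?? []) {
      total += answer ?? 0;
    }
  }
  const level = LEVELS.find((l) => total >= l.min && total <= l.max).name;
  return { total, level };
}
```

Note that defaulting unanswered questions to 0 rather than 1 matters: an empty submission scores 0/90 instead of an artificial 30/90 (see the decisions log).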
Answer scale
| Score | Label |
|---|---|
| 0 | Not true |
| 1 | Somewhat true |
| 2 | Mostly true |
| 3 | Very true |
Tracking over time
Your unique link is your project's permanent home. Retake whenever your team's practices change and watch the trajectory.
How the ID system works
Assessments are versioned against the question set (currently v01), which is bumped when questions change.
Hierarchy
```
email
└── uid (you, across all projects)
    ├── pid-a (Ops Analytics Team)
    │   ├── aid-1   Jan 2026 · v01 · 14/90 · Scattered
    │   ├── aid-2   Mar 2026 · v01 · 32/90 · Structured
    │   └── aid-3   May 2026 · v01 · 58/90 · Coordinated
    └── pid-b (Data Infrastructure Squad)
        └── aid-1   Mar 2026 · v01 · 21/90 · Scattered
```
Your unique link
After completing the assessment you receive a link like:
dovetell.io/assessments/?pid=8f3c2a1b
Bookmark it. When you return, you'll see your last score and a retake button. Each retake generates a new aid while keeping the same pid — your project's growth trajectory builds automatically.
Starting a new project? Just go to dovetell.io/team-assessment without your existing link. A new pid is generated and you have a separate tracking thread.
Phase 2 overview
The dovetell platform is coming. The prompt library and assessment are Phase 1 — validation that the problem is real and people will pay to solve it.
Phase 2 is the SaaS platform that makes the prompt library automatic — a living, queryable context layer that your team doesn't have to maintain manually.
The unvetted → reviewed → becomes truth loop
Everything the platform captures starts as unvetted. A PM reviews, accepts, edits, or rejects. Accepted items become vetted — authoritative, queryable, and surfaced to the team automatically.
No automation without human review. The human in the loop is a feature, not a limitation.
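The review loop can be sketched as a small state transition (the item shape and action names are illustrative assumptions):

```javascript
// Every captured item starts life unvetted.
function capture(text, source) {
  return { text, source, status: "unvetted" };
}

// The reviewer's moves: accept (optionally with edits), reject, or defer.
function review(item, action, edits = {}) {
  switch (action) {
    case "accept":
      return { ...item, ...edits, status: "vetted" }; // becomes truth
    case "reject":
      return { ...item, status: "rejected" };
    case "defer":
      return item; // stays unvetted in the queue
    default:
      throw new Error(`unknown action: ${action}`);
  }
}
```

Only items that pass through `review` with an accept ever reach vetted status; nothing automated skips the human gate.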
Join the waitlist: dovetell.io/#waitlist
Capabilities
Four capabilities, built in dependency order. The queue is the connective tissue — nothing becomes truth without it.
1 — Context Console
The dashboard. A context health monitor, not a document editor. The PM opens it at standup — sees the drift score, unvetted count, stale definitions, decisions captured this sprint. Red/Yellow/Green in under 30 seconds.
2 — Context Query (MCP)
dovetell as an MCP server. A developer types `@dovetell what's our defect rate threshold` inside Cursor or Claude Code and gets the team's answer — vetted, sourced, with decision trace. "I want to be asked fewer questions" made literal.
3 — Context Import
Drop in existing docs — markdown, Notion exports, Confluence exports. dovetell extracts decisions, assumptions, and definitions. Everything starts unvetted. First value in under an hour.
4 — Unvetted Queue
The human-in-the-loop gate. Every automated update lands here first. The PM accepts, edits, rejects, or defers. Accepted items become vetted truth. This is where tribal knowledge stops being tribal.
| Capability | Feeds | Status |
|---|---|---|
| Import (3) | Queue (4) | Planned |
| Queue (4) | Console (1) | Planned |
| Console (1) | surfaces Queue (4) | Planned |
| Queue (4) | Query (2) | Planned |
Definitions
Shared vocabulary. When a term is used in dovetell documentation or code, it means what's defined here.
Core concepts
context base — The living collection of decisions, assumptions, definitions, and policies that governs how a team's AI-assisted work should behave.
context drift — The gap that grows between what a team's documents say and what the code or work actually does.
context orchestration — Assembling, aligning, and delivering the right context to the right person or tool at the right time.
tribal knowledge failure — Domain context that lives in one person's head and never gets captured.
unvetted — Status for any item captured automatically but not yet reviewed by a human.
vetted — Status for any item reviewed and accepted by a designated reviewer. Authoritative.
decision trace — A record of what was decided, when, by whom, under which policy.
drift score — A metric (0–100) indicating how far a team's documented context has drifted from their actual work.
dog food loop — Using dovetell to manage the context of building dovetell. Active from day one.
ID definitions
uid (userId) — Permanent per browser. Generated once on first submission. Ties all projects for one person together.
pid (projectId) — One per project or team. Persists across retakes. Enables growth tracking.
aid (assessmentId) — Fresh per submission. Tracks individual runs within a project.
vid (versionId) — Which question set. Currently v01. Bumped when questions change.
Decisions log
Key decisions that shaped the product. One file, scannable, honest about tradeoffs.
Assessment architecture
| Decision | Rationale |
|---|---|
| Client-side ID generation | No backend required; IDs sent to Formspree and backfilled once Postgres exists |
| uid permanent per browser | Ties projects to a person without requiring account creation |
| pid persists across retakes | Enables growth tracking per project |
| Unanswered questions score 0 | Defaulting to 1 caused artificial scores on empty submissions |
| Page separation (assessment / assessments / recommendations) | Each page one job; retake loop prevention |
| dovetell-data.json as content source | Questions and thresholds not hardcoded in HTML |
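One consequence of the dovetell-data.json decision is that the scoring invariants can be validated when the file loads, rather than trusted implicitly in HTML. A sketch, assuming a plausible JSON shape (only the filename comes from the decision above; the structure is hypothetical):

```javascript
// Validate a loaded dovetell-data.json-style object against the scoring
// invariants: 6 sections, 5 questions each. Shape is an assumption.
function validateQuestionSet(data) {
  if (!Array.isArray(data.sections) || data.sections.length !== 6) {
    throw new Error("expected 6 sections");
  }
  for (const section of data.sections) {
    if (!Array.isArray(section.questions) || section.questions.length !== 5) {
      throw new Error(`section "${section.title}" needs 5 questions`);
    }
  }
  return data.version; // e.g. "v01"
}
```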
Infrastructure
| Decision | Rationale |
|---|---|
| GitHub Pages over Carrd | Free, version controlled, no character limits, clean URLs |
| BYOK / local inference only | Not in the API cost business; trust signal for regulated industries |
| .dovetell-context/ folder | Hidden folder convention; dovetell running on itself |
Privacy
| Decision | Rationale |
|---|---|
| Scores seen by product team, not sold/shared | Honest disclosure without alarming language |
| Project name field with proprietary content disclaimer | Protects IP boundary; sets expectations before input |
Privacy
Plain English. No surprises.
What we collect
Required: email address only.
Optional: project name, role, team size, company, industry, primary AI tool, how you found us.
Automatically: assessment scores, section breakdown, raw answers, skip count, unique IDs (uid/pid/aid/vid), timestamp.
How we use it
- To send you your results and unique tracking link
- To enable progress tracking over time (via pid)
- The dovetell product team sees your scores to improve the product
- Occasional product updates — opt out any time by replying "unsubscribe"
What we don't do
- Never sell your data
- Never share with third parties or advertisers
- No cookies (Plausible analytics is cookieless)
- No tracking pixels
Full privacy policy: dovetell.io/privacy
Project name disclaimer
The project name field shown in the assessment gate says: "⚠ Don't include sensitive, confidential, or proprietary information here." We mean it. dovetell is not a secure document store.