Coding agents are accelerating how fast code gets written — 55% faster task completion in controlled studies, ~20% steady-state time savings in enterprise deployments. But without alignment on what to build, that speed compounds cost instead of reducing it:
  • Stability degrades. The 2024 DORA State of DevOps Report found that a 25% increase in AI usage sped up code reviews but decreased delivery stability by 7.2%. More code, shipped faster, with more incidents. Their conclusion: “writing code isn’t the bottleneck to deploying reliable applications.”
  • Senior engineers absorb the cost. A 2025 study of experienced developers measured a 39–44 percentage-point gap between perceived and actual productivity: developers felt they were 20% faster but were 19% slower in practice. The reason: senior engineers spent their time reviewing and coordinating misaligned agent output instead of shipping their own work — reviewing 6.5% more code while their own output dropped 19%.
  • Coordination breaks silently. A longitudinal study tracking teams from 2023-2025 found that even as AI adoption increased, core teamwork problems persisted: misaligned communication across roles, lack of shared visibility into project goals, and ambiguity around accountability.
Each of these problems — rework from instability, wasted senior eng time on coordination, agents executing on misaligned intent — has a direct dollar cost. DORA reports that high-performing teams spend 22% less time on unplanned rework. Agent compute is metered per request, and a single misaligned prompt can consume an order of magnitude more tokens than a well-scoped one. Multiply that across a team running agents in parallel in the wrong direction, and the waste scales with the speed.

Alignment is the force multiplier. When teams agree on intent before agents execute, every dollar spent on agent compute goes further, every senior engineer’s review time is spent on substance instead of correction, and every PR ships code that matches what was actually approved. Pulse makes this visible.

What Pulse does

Pulse is a conversational interface that lets you query the relationship between what your team planned and what actually shipped. Ask questions in natural language:
  • “Which approved plans have drifted from what landed?” — surfaces stability risk before it becomes an incident
  • “Who’s blocked on reviews this sprint?” — shows where senior eng time is being absorbed by coordination instead of output
  • “What shipped without an approved plan?” — reveals alignment gaps where agents executed without shared intent
Every answer references real planspaces, real code changes, and real review decisions. Pulse connects to your team’s tools (Slack, Notion, Linear, GitHub) so answers draw from the full context, not just what lives in Scott.

How it works

  1. Engineers share plans and proposals in planspaces
  2. Plans are reviewed and approved by stakeholders
  3. Scott CI traces each PR back to the approved intent automatically
This creates a data layer that didn’t exist before — every PR linked to the plan that drove it, every plan linked to the PRs that implemented it. Pulse queries this layer. Traditional tools measure code throughput. Pulse measures whether the code matches what was agreed on.
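The bidirectional linking described above can be sketched as an inversion of the PR-to-plan trace. All names and shapes here are illustrative assumptions, not Scott's actual schema:

```python
# Minimal sketch of the plan↔PR link layer, assuming each traced PR carries
# the id of the approved plan that drove it. Field names are hypothetical.
from collections import defaultdict

traced_prs = [
    {"pr": 201, "plan": "auth-redesign"},
    {"pr": 202, "plan": "auth-redesign"},
    {"pr": 203, "plan": "billing-v2"},
]

# Invert the trace: plan → list of implementing PRs.
prs_by_plan: dict[str, list[int]] = defaultdict(list)
for row in traced_prs:
    prs_by_plan[row["plan"]].append(row["pr"])

print(dict(prs_by_plan))
# → {'auth-redesign': [201, 202], 'billing-v2': [203]}
```

Once both directions of the link exist, every Pulse question is a lookup or aggregation over this structure rather than a heuristic over raw commit history.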

Metrics

Each metric maps directly to the cost problems above.
| Metric | What it measures | Which problem it tracks |
| --- | --- | --- |
| Intent coverage | % of shipped PRs backed by an approved plan | Coordination gaps. Low coverage means agents are executing without shared intent — the silent coordination failure the research identifies. |
| Design fidelity | How closely shipped code matches approved intent | Stability risk. Low fidelity is the leading indicator of the instability DORA measured — code that shipped fast but didn’t match the plan. |
| Rework rate | How often a team revisits intent after implementation starts | Senior eng burden. Every rework cycle is a senior engineer reviewing, correcting, and re-coordinating instead of shipping. This is the 22% rework gap between high and low performers. |
| Review cycle time | Time from review request to first decision | Alignment speed. Slow cycles mean intent is stuck waiting for feedback while agents keep executing — widening the gap between what’s approved and what’s being built. |
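Two of these metrics reduce to straightforward aggregations over the trace data. The sketch below shows one plausible computation; the field names, record shapes, and sample values are illustrative assumptions:

```python
# Sketch: computing intent coverage and review cycle time from hypothetical
# trace records. Shapes and values are assumptions for illustration.
from datetime import datetime

prs = [
    {"number": 1, "plan": "p1"},
    {"number": 2, "plan": None},   # shipped without an approved plan
    {"number": 3, "plan": "p2"},
    {"number": 4, "plan": "p2"},
]

# Intent coverage: % of shipped PRs backed by an approved plan.
coverage = 100 * sum(1 for pr in prs if pr["plan"]) / len(prs)

reviews = [
    {"requested": datetime(2025, 3, 1, 9), "first_decision": datetime(2025, 3, 1, 15)},
    {"requested": datetime(2025, 3, 2, 9), "first_decision": datetime(2025, 3, 3, 9)},
]

# Review cycle time: mean hours from review request to first decision.
hours = [(r["first_decision"] - r["requested"]).total_seconds() / 3600
         for r in reviews]
cycle_time = sum(hours) / len(hours)

print(coverage, cycle_time)  # → 75.0 15.0
```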

Attention alerts

Pulse surfaces the specific situations that cost the most if left unaddressed:
  • Stale reviews — a review open for days means a senior engineer’s feedback is blocking a team, or intent is stalling while agent compute continues to burn
  • Design drift — an approved plan where the shipped code diverged significantly. This is the stability problem made concrete: what landed doesn’t match what was agreed on
  • Blocked work — planspaces waiting on unresolved comments or external dependencies. Work that’s blocked but still consuming agent sessions is pure waste
  • High fork activity — multiple competing branches on the same design. Healthy exploration or misalignment that needs a decision — Pulse surfaces it so you can tell the difference
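The first two alerts can be sketched as threshold rules over the same data. The thresholds (e.g. three days for a stale review, 0.7 for fidelity) and record shapes below are illustrative assumptions, not Pulse's actual rules:

```python
# Sketch of two alert conditions as simple rules. Thresholds and field
# names are hypothetical; real alerting would be configurable.
from datetime import date

TODAY = date(2025, 3, 10)  # fixed "now" so the example is deterministic

reviews = [
    {"id": "r1", "opened": date(2025, 3, 2), "decided": False},
    {"id": "r2", "opened": date(2025, 3, 9), "decided": False},
]

def stale_reviews(reviews, max_days=3):
    """Reviews open longer than max_days with no decision yet."""
    return [r["id"] for r in reviews
            if not r["decided"] and (TODAY - r["opened"]).days > max_days]

designs = [
    {"plan": "p1", "fidelity": 0.93},
    {"plan": "p2", "fidelity": 0.41},  # shipped code diverged from the plan
]

def design_drift(designs, min_fidelity=0.7):
    """Approved plans whose shipped code diverged past the threshold."""
    return [d["plan"] for d in designs if d["fidelity"] < min_fidelity]

print(stale_reviews(reviews), design_drift(designs))  # → ['r1'] ['p2']
```

Blocked-work and fork-activity alerts follow the same pattern: a rule over the plan-to-PR trace plus a threshold that separates normal activity from waste.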