Nexus Hub — User Manual
Complete guide to Nexus Hub for Azure DevOps. The same content is available inside the extension under Nexus Hub → Manual.
01 Overview
Nexus Hub analyzes Azure DevOps Work Items in real time, evaluating story quality against the INVEST framework and generating an actionable quality score.
The extension has three main modules:
- Work Item analysis (the Nexus Hub tab on the work item form)
- Backlog IQ
- Predictive Analytics
02 Work Item Flow
When you open any Work Item (User Story, Feature, or Epic), the Nexus Hub tab appears automatically inside the form.
- Open a Work Item from Boards › Work Items.
- Click the Nexus Hub tab on the side panel.
- Click Analyze to start the analysis.
- Wait for processing — the system parses title, description, and acceptance criteria.
- Review the Score, INVEST dimensions, and the generated Alerts.
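The parsing step above can be sketched as follows. The field reference names (`System.Title`, `System.Description`, `Microsoft.VSTS.Common.AcceptanceCriteria`) are Azure DevOps' standard ones; the `parseWorkItem` helper and its types are illustrative assumptions, not the extension's actual code.

```typescript
// Hypothetical sketch: extract the three fields the analyzer evaluates
// from an Azure DevOps work item payload.
interface WorkItemPayload {
  id: number;
  fields: Record<string, string | undefined>;
}

interface ParsedStory {
  title: string;
  description: string;
  acceptanceCriteria: string;
}

function parseWorkItem(item: WorkItemPayload): ParsedStory {
  const f = item.fields;
  return {
    title: f["System.Title"] ?? "",
    description: f["System.Description"] ?? "",
    // Acceptance criteria live under the VSTS common-field namespace.
    acceptanceCriteria: f["Microsoft.VSTS.Common.AcceptanceCriteria"] ?? "",
  };
}
```

Missing fields fall back to empty strings, so a story with no acceptance criteria still parses and simply scores poorly on Testable.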
03 Score & Classification
The score ranges from 0 to 100 and is the weighted average of the INVEST criteria.
| Range | Class | Meaning | Action |
|---|---|---|---|
| 90–100 | Excellent | Well written, clear, actionable. | Ready for sprint pool review. |
| 80–89 | Good | Acceptable, minor adjustments. | Fix minor issues before refinement. |
| 60–79 | Warning | Structural gaps; refinement required. | Schedule refinement with team. |
| < 60 | Critical | Lacks essential information. | Block sprint entry · rewrite. |
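The classification bands above map directly to a threshold check; this small sketch makes the boundaries explicit (the `classify` function name is an assumption for illustration):

```typescript
// Map a 0–100 quality score to its classification, per the table above.
function classify(score: number): string {
  if (score >= 90) return "Excellent";
  if (score >= 80) return "Good";
  if (score >= 60) return "Warning";
  return "Critical"; // anything below 60
}
```

Note the bands are inclusive on their lower bound, so 80 is Good and 60 is Warning.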
04 INVEST Criteria
Six dimensions are evaluated independently and weighted into the final score (0–15 each, 90 max, normalized to 100).
| Letter | Dimension | Description | Weight |
|---|---|---|---|
| I | Independent | Can be delivered without depending on other stories. | 15% |
| N | Negotiable | Not a contract — leaves room for conversation. | 10% |
| V | Valuable | Delivers measurable value to a real persona. | 25% |
| E | Estimable | Team can size the effort with reasonable confidence. | 15% |
| S | Small | Fits within a single sprint. | 15% |
| T | Testable | Has explicit, verifiable acceptance criteria. | 20% |
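Using the weights from the table, the final score is a straightforward weighted sum; this sketch assumes each dimension has already been normalized to a 0–100 scale (the type and function names are illustrative):

```typescript
// Weighted final score from per-dimension INVEST scores (each 0–100),
// using the weights from the table above (they sum to 100%).
type InvestScores = {
  independent: number; negotiable: number; valuable: number;
  estimable: number; small: number; testable: number;
};

const WEIGHTS: Record<keyof InvestScores, number> = {
  independent: 0.15, negotiable: 0.10, valuable: 0.25,
  estimable: 0.15, small: 0.15, testable: 0.20,
};

function finalScore(s: InvestScores): number {
  let total = 0;
  for (const k of Object.keys(WEIGHTS) as (keyof InvestScores)[]) {
    total += s[k] * WEIGHTS[k];
  }
  return Math.round(total);
}
```

Because Valuable carries the largest weight (25%), a story with weak value scoring drags the total down more than any other single dimension.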
05 Backlog IQ
Backlog IQ ranks teams by average backlog quality and surfaces recommended refinement actions per squad.
- Open Nexus Hub › Backlog IQ from the sidebar.
- Filter by Sprint, Area Path, or Item Type.
- Click any team row to drill into per-item scores.
- Use Schedule on red items to book a refinement session.
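Conceptually, the team ranking is an average of item scores grouped by team, sorted descending; this minimal sketch shows that aggregation (the types and `rankTeams` name are assumptions, not the extension's internals):

```typescript
// Rank teams by average backlog quality score, highest first.
interface ScoredItem { team: string; score: number; }

function rankTeams(items: ScoredItem[]): { team: string; avg: number }[] {
  const sums = new Map<string, { total: number; count: number }>();
  for (const it of items) {
    const e = sums.get(it.team) ?? { total: 0, count: 0 };
    e.total += it.score;
    e.count += 1;
    sums.set(it.team, e);
  }
  return Array.from(sums.entries())
    .map(([team, e]) => ({ team, avg: e.total / e.count }))
    .sort((a, b) => b.avg - a.avg);
}
```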
06 Predictive Analytics
Monte Carlo runs 10,000 simulations against your team's historical throughput to project delivery confidence intervals.
| Percentile | Label | Use case |
|---|---|---|
| p50 | Optimistic | Best case — only the fastest 50% of simulations met it. |
| p85 | Likely | Recommended commitment — 85% of simulations completed by this date. |
| p95 | Conservative | Worst case — used for buffer planning and stakeholder commitments. |
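The simulation idea can be sketched in a few lines: repeatedly sample historical sprint throughput until the remaining backlog is burned down, then read percentiles off the sorted run lengths. This is a simplified illustration, not the extension's model; the seeded LCG just keeps the sketch reproducible.

```typescript
// Tiny seeded pseudo-random generator so the sketch is reproducible.
function lcg(seed: number): () => number {
  let s = seed >>> 0;
  return () => {
    s = (s * 1664525 + 1013904223) >>> 0;
    return s / 2 ** 32;
  };
}

// Monte Carlo sketch: for each run, draw one historical sprint's throughput
// at random until the backlog is empty, then report sprint-count percentiles.
function simulateSprints(
  backlogSize: number,
  throughputHistory: number[],
  runs = 10_000,
  seed = 42,
): { p50: number; p85: number; p95: number } {
  const rand = lcg(seed);
  const results: number[] = [];
  for (let r = 0; r < runs; r++) {
    let remaining = backlogSize;
    let sprints = 0;
    while (remaining > 0) {
      const t = throughputHistory[Math.floor(rand() * throughputHistory.length)];
      remaining -= Math.max(1, t); // guard against zero-throughput loops
      sprints++;
    }
    results.push(sprints);
  }
  results.sort((a, b) => a - b);
  const pct = (p: number) => results[Math.floor((p / 100) * (results.length - 1))];
  return { p50: pct(50), p85: pct(85), p95: pct(95) };
}
```

By construction p50 ≤ p85 ≤ p95, which is why p85 is the recommended commitment point: it is pessimistic enough to absorb normal variance without padding every estimate to the worst case.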
For items tagged legacy, refactor, spike, or migration, the model adds a multiplier to historical variance.
07 Best Practices
- Run analysis early. Trigger Nexus Hub during inception, not at sprint planning. Catching gaps days earlier saves hours of refinement.
- Don't game the score. A 100/100 story nobody wants is worse than a 68/100 valuable one. Use the score as a signal, not a goal.
- Pin a target date. Keep one pinned date per release; tracking confidence drift over time is more useful than a snapshot.
- Review NLP findings. When AI flags semantic risk, read its rationale ("MOTIVO" link) — that's where the most actionable refactors live.