User Manual · v1.9.6

Nexus Hub — User Manual

Complete guide to Nexus Hub for Azure DevOps. Same content available inside the extension under Nexus Hub → Manual.

01 Overview

Nexus Hub analyzes Azure DevOps Work Items in real time, evaluating story quality based on the I.N.V.E.S.T framework and generating an actionable quality score.

The extension has three main modules:

Module 1 – Work Item Form: In-form analysis panel embedded in each Work Item.
Module 2 – Backlog IQ: Strategic dashboard consolidating quality across all teams in the project.
Module 3 – Predictive Analytics: Monte Carlo simulator projecting delivery dates using actual throughput history.

02 Work Item Flow

When you open any Work Item (User Story, Feature, or Epic), the Nexus Hub tab appears automatically inside the form.

  1. Open a Work Item from Boards › Work Items.
  2. Click the Nexus Hub tab on the side panel.
  3. Click Analyze to start the analysis.
  4. Wait for processing — the system parses title, description, and acceptance criteria.
  5. Review the Score, I.N.V.E.S.T dimensions, and the generated Alerts.
Important: The minimum analysis input is the title field — if description and acceptance criteria are empty, the system runs with reduced confidence.
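The fields the analysis parses can also be pulled programmatically. The sketch below assumes the standard Azure DevOps REST API (v7.0) and its well-known field reference names; the org, project, and sample payload are placeholders for illustration, not Nexus Hub internals.

```python
# Sketch: building the Work Item endpoint and extracting the three fields
# Nexus Hub analyzes. Org/project/id values are placeholders.

def work_item_url(org: str, project: str, item_id: int) -> str:
    """Build the Azure DevOps REST endpoint for a single Work Item."""
    return (f"https://dev.azure.com/{org}/{project}"
            f"/_apis/wit/workitems/{item_id}?api-version=7.0")

def analysis_inputs(fields: dict) -> dict:
    """Extract the fields the analysis parses; only the title is mandatory."""
    return {
        "title": fields["System.Title"],
        "description": fields.get("System.Description", ""),
        "acceptance_criteria": fields.get(
            "Microsoft.VSTS.Common.AcceptanceCriteria", ""),
    }

# Example payload shaped like the API response's "fields" object:
sample = {"System.Title": "As a user, I can reset my password"}
inputs = analysis_inputs(sample)
print(inputs["description"])  # empty => analysis runs with reduced confidence
```

Note that, as the manual states, an empty description and acceptance criteria do not block the analysis; they only reduce its confidence.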

03 Score & Classification

The Score ranges from 0 to 100 and is computed as the weighted average of the INVEST criteria.

| Range  | Class     | Meaning                               | Action                                  |
|--------|-----------|---------------------------------------|-----------------------------------------|
| 90–100 | Excellent | Well written, clear, actionable.      | Ready for sprint pool review.           |
| 80–89  | Good      | Acceptable, minor adjustments.        | Address minor issues before refinement. |
| 60–79  | Warning   | Structural gaps; refinement required. | Schedule refinement with team.          |
| < 60   | Critical  | Lacks essential information.          | Block sprint entry · rewrite.           |
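In code, the classification amounts to a simple threshold mapping. This is a minimal sketch; the function name and return shape are illustrative, not part of the extension's API.

```python
def classify(score: float) -> tuple[str, str]:
    """Map a 0-100 quality score to its class and recommended action."""
    if score >= 90:
        return "Excellent", "Ready for sprint pool review."
    if score >= 80:
        return "Good", "Address minor issues before refinement."
    if score >= 60:
        return "Warning", "Schedule refinement with team."
    return "Critical", "Block sprint entry and rewrite."

print(classify(68))  # ('Warning', 'Schedule refinement with team.')
```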

04 INVEST Criteria

Six dimensions are evaluated independently. Each receives a raw score of 0–15 (90 maximum, normalized to 100), and the weights below determine each dimension's contribution to the final score.

| Letter | Dimension   | Description                                           | Weight |
|--------|-------------|-------------------------------------------------------|--------|
| I      | Independent | Can be delivered without depending on other stories.  | 15%    |
| N      | Negotiable  | Not a contract — leaves room for conversation.        | 10%    |
| V      | Valuable    | Delivers measurable value to a real persona.          | 25%    |
| E      | Estimable   | Team can size the effort with reasonable confidence.  | 15%    |
| S      | Small       | Fits within a single sprint.                          | 15%    |
| T      | Testable    | Has explicit, verifiable acceptance criteria.         | 20%    |
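A minimal sketch of how such a weighted score could be computed, assuming each dimension's 0–15 raw score is normalized to a 0–100 scale before weighting. The exact normalization Nexus Hub uses is not specified here; this only illustrates the arithmetic implied by the table.

```python
# Weights from the INVEST table above (they sum to 1.0).
WEIGHTS = {"I": 0.15, "N": 0.10, "V": 0.25, "E": 0.15, "S": 0.15, "T": 0.20}

def invest_score(raw: dict) -> float:
    """Weighted average of the six dimensions, each rated 0-15 and
    normalized to a 0-100 scale before weighting (illustrative)."""
    return sum(WEIGHTS[k] * (raw[k] / 15) * 100 for k in WEIGHTS)

perfect = {k: 15 for k in WEIGHTS}
print(round(invest_score(perfect), 6))  # 100.0
```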

05 Backlog IQ

Backlog IQ ranks teams by average backlog quality and surfaces recommended refinement actions per squad.

  1. Open Nexus Hub › Backlog IQ from the sidebar.
  2. Filter by Sprint, Area Path, or Item Type.
  3. Click any team row to drill into per-item scores.
  4. Use Schedule on red items to book a refinement session.
Tip: Teams below the 70-point threshold for two consecutive weeks trigger an automatic alert in the Overview module.
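The two-consecutive-weeks rule from the tip above can be expressed as a small check. The function name and inputs are illustrative, not the extension's internals.

```python
def should_alert(weekly_scores: list[float], threshold: float = 70.0,
                 weeks: int = 2) -> bool:
    """True when the last `weeks` average scores are all below the
    threshold, mirroring the two-consecutive-weeks rule."""
    recent = weekly_scores[-weeks:]
    return len(recent) == weeks and all(s < threshold for s in recent)

print(should_alert([72, 69, 68]))  # True: two weeks in a row below 70
print(should_alert([69, 72, 68]))  # False: last two weeks are 72 and 68
```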

06 Predictive Analytics

Monte Carlo runs 10,000 simulations against your team's historical throughput to project delivery confidence intervals.

| Percentile | Label        | Use case                                                             |
|------------|--------------|----------------------------------------------------------------------|
| p50        | Optimistic   | Best case — only the fastest 50% of simulations met it.              |
| p85        | Likely       | Recommended commitment — 85% of simulations completed by this date.  |
| p95        | Conservative | Worst case — used for buffer planning and stakeholder commitments.   |
AI Semantic Adjustment: when descriptions contain risk markers like legacy, refactor, spike, or migration, the model adds a multiplier to historical variance.
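A toy version of the simulation, showing how percentiles fall out of repeated throughput draws and how a risk multiplier could widen the variance. The multiplier value (1.25) and the sampling scheme are assumptions for illustration, not the extension's actual model.

```python
import random
import statistics

RISK_MARKERS = {"legacy", "refactor", "spike", "migration"}

def risk_multiplier(description: str, bump: float = 1.25) -> float:
    """Illustrative variance multiplier when a risk marker appears;
    the real model's factor is not documented here."""
    return bump if set(description.lower().split()) & RISK_MARKERS else 1.0

def simulate_weeks(backlog: int, throughput_history: list[int],
                   mult: float = 1.0, runs: int = 10_000,
                   seed: int = 7) -> dict:
    """Monte Carlo: each run draws weekly throughput from history until
    the backlog empties; spread around the mean is widened by `mult`."""
    rng = random.Random(seed)
    mean = statistics.mean(throughput_history)
    results = []
    for _ in range(runs):
        done, weeks = 0, 0
        while done < backlog:
            draw = rng.choice(throughput_history)
            # widen the deviation from the mean by the risk multiplier
            done += max(1, round(mean + (draw - mean) * mult))
            weeks += 1
        results.append(weeks)
    results.sort()
    pct = lambda p: results[int(p / 100 * runs) - 1]
    return {"p50": pct(50), "p85": pct(85), "p95": pct(95)}

out = simulate_weeks(40, [4, 5, 6, 8],
                     mult=risk_multiplier("migrate legacy auth module"))
print(out["p50"] <= out["p85"] <= out["p95"])  # True: percentiles are ordered
```

Because the results are sorted before the percentiles are read off, p50 is always the earliest date and p95 the latest, matching the Optimistic/Conservative labels in the table.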

07 Best Practices

  1. Run analysis early. Trigger Nexus Hub during inception, not at sprint planning. Catching gaps days earlier saves hours of refinement.
  2. Don't game the score. A 100/100 story nobody wants is worse than a 68/100 valuable one. Use the score as a signal, not a goal.
  3. Pin a target date. Track one date per release; watching confidence drift over time is more useful than a single snapshot.
  4. Review NLP findings. When AI flags semantic risk, read its rationale ("MOTIVO" link) — that's where the most actionable refactors live.