Sprint Workflow
Overview
AzmX follows a 2-week sprint cycle with agile development practices optimized for our 15-person engineering team. This workflow ensures predictable delivery, maintains code quality, and promotes continuous improvement. The team completes features incrementally while maintaining a sustainable pace and clear communication across all team members.
Sprint Structure
Sprint Duration
- Length: 2 weeks (10 working days)
- Sprint Planning: First Monday of sprint (2 hours)
- Daily Standups: Daily at 10:00 AM (15 minutes)
- Sprint Review: Last Friday of sprint (1 hour)
- Retrospective: Last Friday of sprint (30 minutes)
Sprint Planning Process
Week Before Sprint
- Product Owner: Refines backlog and prioritizes features
- Tech Leads: Review technical dependencies and capacity
- Team Leads: Assess team availability and velocity
Sprint Planning Meeting
- Review Previous Sprint (15 minutes)
  - Completed work review
  - Carry-over items discussion
  - Velocity assessment
- Sprint Goal Setting (30 minutes)
  - Define sprint objective
  - Align with business priorities
  - Set success criteria
- Backlog Refinement (45 minutes)
  - Story point estimation
  - Technical dependency mapping
  - Risk assessment
- Capacity Planning (30 minutes)
  - Team availability review
  - Individual capacity assessment
  - Commitment finalization
Sprint Goal Examples
Good Sprint Goals:
- "Enable users to export their data in CSV and PDF formats"
- "Reduce API response time by 40% for dashboard endpoints"
- "Complete payment integration with Stripe for subscription plans"
These goals are specific, measurable, and achievable within one sprint.
Poor Sprint Goals:
- "Improve the system" (too vague)
- "Fix bugs and add features" (not focused)
- "Work on the dashboard, API, and mobile app" (too broad)
Story Point Estimation Guidelines
Use the modified Fibonacci sequence: 1, 2, 3, 5, 8, 13, 21
Point Scale:
- 1 point: 1-2 hours (minor bug fix, documentation update)
- 2 points: Half day (small feature, simple component)
- 3 points: 1 day (standard feature implementation)
- 5 points: 2-3 days (complex feature with testing)
- 8 points: 1 week (major feature, multiple components)
- 13 points: Split into smaller tasks (too large for one sprint)
- 21+ points: Epic - break down immediately
Estimation Examples:
1 point: Fix typo in user profile form
2 points: Add email validation to signup form
3 points: Implement password reset functionality
5 points: Create user dashboard with charts
8 points: Build complete authentication system
13 points: TOO BIG - split into smaller stories
Capacity Planning Formula
Calculate your team's sprint capacity:
Individual Capacity = Working Days × Productivity Factor
Team Capacity = Sum of All Individual Capacities
Example:
Developer A: 10 days × 0.8 (80% available) = 8 effective days × 8 points/day = 64 points
Developer B: 10 days × 0.6 (60% available) = 6 effective days × 8 points/day = 48 points
Developer C: 8 days × 1.0 (2 days off) = 8 effective days × 8 points/day = 64 points
Team Capacity = 64 + 48 + 64 = 176 points per sprint
Productivity Factors:
- 1.0 = Fully dedicated to sprint work
- 0.8 = 20% time on meetings, support, etc.
- 0.6 = 40% time on other commitments
- 0.5 = Half-time availability
Buffer Recommendation: Commit to 80% of calculated capacity to account for unknowns.
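The capacity formula above can be sketched in a few lines of code. This is an illustrative sketch, not a prescribed tool: the function names are ours, and the 8-points-per-day rate is inferred from the worked example above.

```python
# Illustrative sketch of the capacity formula; function names are ours,
# and the 8 points/day rate matches the worked example in this section.
def individual_capacity(working_days, productivity_factor, points_per_day=8):
    # Individual Capacity = Working Days x Productivity Factor (x points/day)
    return working_days * productivity_factor * points_per_day

def team_capacity(members, commit_ratio=0.8):
    # Team Capacity = sum of individual capacities; commit to 80% of it
    total = sum(individual_capacity(days, factor) for days, factor in members)
    return total, int(total * commit_ratio)

# Developers A (10 days at 0.8), B (10 days at 0.6), C (8 days at 1.0):
total, committed = team_capacity([(10, 0.8), (10, 0.6), (8, 1.0)])
print(total, committed)  # 176.0 140
```

The 80% buffer is applied last, so a team that calculates 176 points of raw capacity commits to roughly 140.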
Example Sprint Plan Template
# Sprint 24 - Jan 15-26, 2025
## Sprint Goal
Enable users to manage their subscription plans and view billing history.
## Team Capacity
- Frontend Team: 72 points
- Backend Team: 64 points
- Total: 136 points
- Committed: 108 points (80% of capacity)
## Committed Stories
- [8 pts] Subscription plan selection UI
- [5 pts] Payment integration with Stripe
- [5 pts] Billing history page
- [3 pts] Email notifications for payments
- [8 pts] Subscription management API endpoints
- [3 pts] User role permissions update
- [2 pts] Documentation for subscription flow
## Stretch Goals (if capacity allows)
- [3 pts] Invoice PDF generation
- [2 pts] Subscription analytics dashboard
## Dependencies
- Stripe account setup (DevOps) - Required by Day 3
- Legal review of subscription terms - Required by Day 5
## Risks
- Holiday on Day 8 (reduced capacity)
- Stripe API changes may require additional work
Daily Standup Format
Structure (15 minutes max)
- Round-robin updates (2 minutes per person)
- Blocker discussion (5 minutes)
- Sprint progress review (3 minutes)
Individual Update Format
Good Update Examples
Example 1 (Developer):
Yesterday: Completed user authentication API endpoints. All tests passing.
Today: Start frontend integration and handle error states.
Blockers: None. Need design assets by tomorrow for error messages.
Example 2 (With Blocker):
Yesterday: Started payment gateway integration. Hit API rate limit issue.
Today: Wait for DevOps to increase rate limit. Work on error handling meanwhile.
Blockers: Need production API credentials from DevOps team.
Example 3 (QA):
Yesterday: Tested user dashboard. Found 2 bugs, logged in ClickUp.
Today: Retest fixes from yesterday. Start testing export feature.
Blockers: Dev environment down since 9 AM. Cannot test new features.
Bad Update Examples (Anti-Patterns)
Too Vague: Updates like "worked on stuff yesterday, more of the same today" give the team nothing to act on.
Better: Be specific about what you worked on and what you'll do.
Too Detailed:
❌ Yesterday: I started by reviewing the database schema, then I refactored the user
model to include the new subscription fields, after that I wrote 15 unit tests
covering edge cases for the payment processing, then I met with the frontend team
to discuss API contract changes, and finally I updated the documentation...
Status Report Instead of Team Sync:
❌ Yesterday: Fixed bug #123, bug #456, and bug #789. Updated 12 tickets.
❌ Today: Will fix more bugs.
❌ Blockers: None.
Remote Standup Best Practices
For Distributed Teams:
- Start on time, end on time (respect everyone's schedule)
- Use video when possible (builds team connection)
- Mute when not speaking (reduces background noise)
- Use the "raise hand" feature to avoid talking over each other
- Post written updates in Slack for team members in other timezones
Async Standup Format (for global teams):
Daily Update - Jan 15, 2025
**Completed:**
- ✅ User authentication API
- ✅ Unit tests for login flow
**Today:**
- 🚧 Frontend integration
- 🚧 Error handling
**Blockers:**
- ⚠️ Waiting on design assets for error states
**Sprint Progress:** 12/35 points completed (34%)
Blocker Escalation Process
Not all blockers are equal. Use this flowchart to escalate appropriately.
graph TD
A[Blocker Identified] --> B{Can I resolve it myself?}
B -->|Yes, within 1 hour| C[Self-resolve after standup]
B -->|No| D{Can my team help?}
D -->|Yes| E[Quick team discussion after standup]
D -->|No| F{Is it blocking sprint goal?}
F -->|Yes - Critical| G[Escalate to Tech Lead immediately]
F -->|No - Can work on other tasks| H[Log in ClickUp, continue with other work]
C --> I[Update team in Slack]
E --> I
G --> J[Tech Lead prioritizes resolution]
H --> K[Revisit in next standup]
style G fill:#FF5252,color:#fff
style C fill:#4CAF50,color:#fff
style E fill:#FFC107,color:#000
Blocker Categories:
- Self-resolvable: Need information from documentation, minor config issue
- Team-resolvable: Need code review, pair programming, technical discussion
- Leadership-resolvable: Need production access, budget approval, cross-team coordination
- External: Waiting on third-party service, client feedback, legal review
Sprint Execution
Task Management in ClickUp
- Story Points: Fibonacci sequence (1, 2, 3, 5, 8, 13)
- Task States: To Do → In Progress → PR Ready → PR Merged → Ready for QA → Under Testing → Done
- Daily Updates: Required for all active tasks
- Blocker Tracking: Immediate escalation process
Detailed Task Lifecycle with ClickUp Status Mapping
Tasks move through specific states with clear entry and exit criteria.
stateDiagram-v2
[*] --> ToDo: Task assigned
ToDo --> InProgress: Developer starts work
InProgress --> PRReady: Pull request opened
PRReady --> PRMerged: PR approved and merged
PRMerged --> ReadyForQA: Deployed to dev environment
ReadyForQA --> UnderTesting: QA starts testing
UnderTesting --> Done: QA approves
UnderTesting --> InProgress: Issues found (rework)
Done --> [*]: Sprint complete
note right of PRReady
Automated via GitHub Actions
Review within 4 hours
end note
note right of ReadyForQA
Auto-deploy takes 5-10 min
Developer tests first
end note
Status Definitions:
| Status | Entry Criteria | Exit Criteria | Owner | Time Limit |
|---|---|---|---|---|
| To Do | Task assigned to sprint | Developer begins work | Developer | - |
| In Progress | Developer starts coding | PR created | Developer | Per estimate |
| PR Ready | PR opened on GitHub | PR approved | Reviewer | 4 hours |
| PR Merged | PR merged to develop | Deployed to dev | GitHub Actions | 5-10 minutes |
| Ready for QA | Developer tested on dev | QA starts testing | QA | 24 hours |
| Under Testing | QA begins testing | QA completes | QA | 1-2 days |
| Done | QA approves | Sprint complete | - | - |
Code Review SLA
Target Response Times:
- Initial review: Within 4 hours of PR creation
- Follow-up review: Within 2 hours of changes
- Approval: Same day for simple PRs, next day for complex
Review Checklist:
- [ ] Code follows style guide
- [ ] Unit tests included and passing
- [ ] No security vulnerabilities
- [ ] Performance impact considered
- [ ] Documentation updated
- [ ] No hardcoded values or secrets
Code Review Process
- All code changes require PR review
- SonarQube quality gate must pass
- At least one approval from peer reviewer
- Tech lead approval for architectural changes
Example PR Description Template
## Description
Brief description of what this PR does and why.
## Changes
- Added user subscription management UI
- Integrated Stripe payment API
- Created billing history page
## Type of Change
- [x] New feature
- [ ] Bug fix
- [ ] Breaking change
- [ ] Documentation update
## Testing
- [x] Unit tests added and passing
- [x] Integration tests passing
- [x] Manual testing completed on dev environment
- [ ] Performance testing (not required for this change)
## Screenshots (if UI changes)
[Add screenshots here]
## ClickUp Task
CU-abc123
## Checklist
- [x] Code follows team style guide
- [x] Self-reviewed my code
- [x] Commented complex sections
- [x] Updated documentation
- [x] No console.log or debug code
- [x] Tested on multiple browsers (if frontend)
## Deployment Notes
No special deployment steps required. Auto-deploys to dev on merge.
## Related PRs
- Related to #123 (backend API)
Sprint Burndown Chart
Track progress daily to ensure sprint goals are achievable.
How to Read the Burndown:
- Ideal line: Straight diagonal from start to end
- Above ideal: Behind schedule
- Below ideal: Ahead of schedule
- Flat sections: No progress (investigate blockers)
Daily Update Process:
1. Team members update task status in ClickUp
2. Burndown chart auto-updates from ClickUp data
3. Review in daily standup
4. Adjust workload if trending behind
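The reading rules above reduce to a comparison against a straight diagonal. The sketch below is illustrative (function names and status strings are ours, not ClickUp terminology):

```python
# Sketch of the burndown reading rules; names and values are illustrative.
def ideal_remaining(committed_points, total_days, day):
    # Ideal line: straight diagonal from committed points down to zero
    return committed_points * (1 - day / total_days)

def sprint_status(committed_points, total_days, day, actual_remaining):
    ideal = ideal_remaining(committed_points, total_days, day)
    if actual_remaining > ideal:
        return "behind schedule"    # above the ideal line
    if actual_remaining < ideal:
        return "ahead of schedule"  # below the ideal line
    return "on track"

# Day 5 of a 10-day sprint with 140 committed points: ideal remaining is 70
print(sprint_status(140, 10, 5, 72))  # behind schedule
```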
Sprint Review
Demo Preparation
- Feature Demos: Live demonstration of completed work
- Stakeholder Feedback: Product Owner and business stakeholders
- Technical Highlights: Notable achievements and improvements
Demo Guidelines
Preparation Checklist (24 hours before review):
- [ ] Test demo flow on staging environment
- [ ] Prepare demo accounts with sample data
- [ ] Screenshot backup if live demo fails
- [ ] Prepare 2-minute pitch per feature
- [ ] List known limitations or upcoming work
- [ ] Clear browser cache and login sessions
Common Demo Mistakes to Avoid:
- ❌ Demoing from localhost (use staging/production)
- ❌ Showing technical details stakeholders don't need
- ❌ Apologizing for incomplete work
- ❌ Going over time limit
- ❌ Using real user data (privacy concerns)
- ❌ No backup plan if demo fails
Good Demo Structure:
1. Context (15 seconds): "This feature allows users to..."
2. Demo (90 seconds): Show the feature in action
3. Impact (15 seconds): "This solves the problem of..."
4. Questions (20 seconds): Open for feedback
Review Agenda (1 hour)
- Sprint Goal Assessment (10 minutes)
- Feature Demonstrations (35 minutes)
- Metrics Review (10 minutes)
- Next Sprint Preview (5 minutes)
Stakeholder Feedback Template
Use this template to capture feedback during sprint review:
# Sprint Review Feedback - Sprint 24
## Feature: User Subscription Management
**Attendees:**
- Product Owner: Jane Smith
- Backend Lead: John Doe
- Frontend Lead: Sarah Johnson
- Stakeholders: Marketing Team (3), Sales Team (2)
**Demo Rating:** 4/5 stars
**Positive Feedback:**
- Clean UI design, intuitive flow
- Fast loading times
- Mobile responsive works well
**Concerns/Questions:**
- Can we add invoice history export?
- What happens if payment fails?
- Need more prominent "Cancel Subscription" button
**Action Items:**
- [ ] Add invoice PDF export (8 points) - Sprint 25
- [ ] Update error messages for failed payments - Sprint 25
- [ ] Redesign cancel button - Sprint 25
**Next Steps:**
- Product Owner to prioritize action items
- Schedule follow-up with Sales team for training
Example Metrics Dashboard Structure
Sprint Metrics to Display:
# Sprint 24 Metrics
## Velocity
- Committed: 108 points
- Completed: 102 points
- Achievement Rate: 94%
## Quality
- Bugs Found in Dev: 8
- Bugs Found in QA: 4
- Bugs Found in Production: 0
- Quality Rate: 100% (no production bugs)
## Delivery
- Stories Completed: 14/16
- Stories Carried Over: 2
- Average Cycle Time: 3.2 days
## Code Review
- Average PR Review Time: 3.5 hours
- PRs Merged: 42
- Code Coverage: 87%
## Team Health
- Sprint Goal Achieved: Yes
- Blocked Days: 2
- Team Satisfaction: 8/10
Sprint Retrospective
Retrospective Format (30 minutes)
- What Went Well (10 minutes)
- What Could Be Improved (10 minutes)
- Action Items (10 minutes)
Retrospective Techniques
Technique 1: Start/Stop/Continue
Use when you want clear actionable changes.
## Start (What should we begin doing?)
- Daily code reviews before 2 PM for faster turnaround
- Pair programming for complex features
- Weekly tech debt allocation (10% of sprint)
## Stop (What should we stop doing?)
- Last-minute PR submissions
- Skipping unit tests to save time
- Working on unplanned tasks during sprint
## Continue (What's working well?)
- Morning standups keep everyone aligned
- Code review checklist catches bugs early
- ClickUp integration saves status update time
Technique 2: Mad/Sad/Glad
Use when you want to surface emotions and team morale.
## Mad (What frustrated you?)
- Production bug slipped through because we rushed QA
- Meeting interrupted deep work 3 times this sprint
- Design changes came too late in sprint
## Sad (What disappointed you?)
- Couldn't finish stretch goals
- Code review took too long on Friday
- Had to work overtime last week
## Glad (What made you happy?)
- New developer onboarded smoothly
- Zero production bugs this sprint
- Team collaboration on payment feature
Technique 3: 4Ls (Liked, Learned, Lacked, Longed For)
Use when you want to focus on learning and growth.
## Liked
- New testing framework makes tests easier
- Async standups work well for remote team
## Learned
- Stripe API has better error handling than we thought
- Breaking large stories early prevents late-sprint crunch
## Lacked
- Clear requirements on subscription cancellation flow
- Staging environment was unstable mid-sprint
## Longed For
- Faster CI/CD pipeline (currently 15 minutes)
- Better local development database setup
Example Action Items with Owners and Timelines
Good action items are specific, measurable, and have clear ownership.
# Retrospective Action Items - Sprint 24
## Action Items
### Item 1: Improve PR Review Turnaround
**Problem:** PRs waiting 6+ hours for first review
**Action:** Implement PR review rotation schedule
**Owner:** Tech Lead (John)
**Timeline:** Implement before Sprint 25 starts
**Success Metric:** Average PR review time < 4 hours
### Item 2: Reduce Dev Environment Instability
**Problem:** Dev environment down 3 times this sprint
**Action:** Set up monitoring and auto-restart for dev services
**Owner:** DevOps (Sarah)
**Timeline:** Complete by Day 3 of Sprint 25
**Success Metric:** Zero unplanned dev environment downtime
### Item 3: Earlier Design Review
**Problem:** Design changes came on Day 7, caused rework
**Action:** Require design approval before sprint planning
**Owner:** Product Owner (Jane)
**Timeline:** New process starts Sprint 25
**Success Metric:** No design changes after Day 3 of sprint
## Completed Action Items from Previous Sprint
- ✅ Set up ClickUp GitHub integration (completed Day 2)
- ✅ Create PR template (completed Day 5)
- ⏳ Automate database backups (in progress, 80% done)
Continuous Improvement
- Action items tracked in ClickUp
- Process improvements implemented incrementally
- Team feedback incorporated into planning
How to Track Retrospective Improvements
Create ClickUp List for Action Items:
1. Create "Retrospective Actions" list in ClickUp
2. Add action items as tasks with:
   - Clear title
   - Owner assigned
   - Due date
   - Success criteria in description
3. Review progress in next retrospective
Measure Improvement Over Time:
# Retrospective Trends (Last 3 Sprints)
| Metric | Sprint 22 | Sprint 23 | Sprint 24 | Trend |
|--------|-----------|-----------|-----------|-------|
| Action Items Created | 5 | 4 | 3 | ⬇️ Improving |
| Action Items Completed | 2 | 3 | 4 | ⬆️ Good |
| Repeated Issues | 2 | 1 | 0 | ⬇️ Excellent |
| Team Satisfaction | 6/10 | 7/10 | 8/10 | ⬆️ Good |
Sprint Anti-Patterns
Common mistakes teams make and how to avoid them.
Anti-Pattern 1: Scope Creep
Problem: Adding new work mid-sprint without removing existing commitments.
Symptoms:
- Stories added after Day 3 of sprint
- Sprint goal becomes unclear
- Team consistently misses commitments
Solution:
Sprint Scope Change Policy:
Day 1-3: Minor additions allowed if capacity exists
Day 4-7: Only critical bugs or dependencies
Day 8-10: Sprint scope locked, focus on completion
New requests go to backlog for next sprint.
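The policy above can be sketched as a simple day-based rule. This is an illustrative sketch: the function name is ours, and the boolean flag stands in for "critical bug or dependency".

```python
# Day-based sketch of the scope-change policy above.
def scope_change_allowed(sprint_day, is_critical=False):
    if 1 <= sprint_day <= 3:
        return True           # minor additions allowed if capacity exists
    if 4 <= sprint_day <= 7:
        return is_critical    # only critical bugs or dependencies
    return False              # Day 8-10: scope locked, focus on completion

print(scope_change_allowed(2))        # True
print(scope_change_allowed(5))        # False
print(scope_change_allowed(5, True))  # True
print(scope_change_allowed(9, True))  # False
```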
How to Handle Urgent Requests:
1. Assess criticality (Is it truly urgent?)
2. Identify what to remove from sprint
3. Get Product Owner approval for swap
4. Update sprint backlog immediately
5. Communicate changes in standup
Anti-Pattern 2: Late-Sprint Rush
Problem: Most work completed in last 2 days of sprint.
Symptoms:
- PRs created on Day 9-10
- QA has no time to test
- Quality issues slip through
- Team works overtime
Solution:
- Set "PR creation deadline" at Day 7
- Track daily progress on burndown chart
- Escalate early if trending behind
- Break large stories into smaller increments
Anti-Pattern 3: Perpetual Carry-Over
Problem: Same stories carrying over sprint after sprint.
Symptoms:
- Stories moved to "next sprint" repeatedly
- Team avoids difficult tasks
- Velocity calculations become meaningless
Solution:
1. Identify why story keeps carrying over
2. Break into smaller achievable pieces
3. Assign to pair for knowledge sharing
4. Set "complete or cancel" deadline
5. Consider removing from backlog if not valuable
Anti-Pattern 4: Cherry-Picking Easy Tasks
Problem: Team members only pick small, easy stories.
Symptoms:
- Complex stories remain untouched
- Unbalanced workload distribution
- Sprint goals not achieved despite points completed
Solution:
- Assign complex stories during planning
- Pair junior with senior on difficult tasks
- Rotate who tackles complex work
- Celebrate completing difficult stories
Managing Carry-Over Items
Carry-Over Decision Framework:
graph TD
A[Story Not Complete] --> B{Why not complete?}
B -->|Underestimated| C[Re-estimate and prioritize]
B -->|Blocked| D[Remove blocker, high priority next sprint]
B -->|Low priority| E[Return to backlog]
B -->|No longer needed| F[Close story]
C --> G[Add to next sprint with updated estimate]
D --> G
E --> H[Product Owner re-prioritizes]
F --> I[Document why, archive]
style F fill:#4CAF50,color:#fff
style G fill:#FFC107,color:#000
Carry-Over Guidelines:
- Maximum 20% of next sprint capacity for carry-overs
- Re-estimate carried stories (original estimate likely wrong)
- Prioritize carry-overs if they block other work
- Cancel stories carried over 3+ times
Sprint Best Practices
Tips for successful sprints from high-performing teams.
Best Practice 1: Front-Load Complex Work
Start difficult stories on Day 1-2 when energy is high and time is available.
Benefits:
- More time to handle unexpected complexity
- QA has adequate time for thorough testing
- Reduces end-of-sprint stress
Best Practice 2: Daily Progress Visibility
Make progress visible to everyone, every day.
Implementation:
- Update ClickUp status before standup
- Share screenshots of work-in-progress
- Post blockers in Slack immediately
- Review burndown chart in standup
Best Practice 3: Cross-Team Coordination
Coordinate with other teams early and often.
Weekly Sync Points:
- Backend + Frontend: API contract review (Day 2)
- Dev + QA: Testability review (Day 3)
- Dev + DevOps: Deployment planning (Day 5)
- All teams: Integration testing (Day 8)
Best Practice 4: Technical Debt Management
Allocate 10-20% of sprint capacity to technical debt.
Technical Debt Sprint Allocation:
Sprint Capacity: 100 points
Feature Work: 80 points (80%)
Technical Debt: 15 points (15%)
Buffer: 5 points (5%)
Technical Debt Examples:
- Upgrade dependencies
- Refactor complex modules
- Improve test coverage
- Update documentation
- Performance optimization
Track Tech Debt:
- Maintain "Tech Debt" list in ClickUp
- Prioritize by impact and effort
- Include in sprint planning
- Celebrate tech debt reduction
Best Practice 5: Definition of Done
Establish clear Definition of Done for all stories.
Example Definition of Done Checklist:
- [ ] Code written and follows style guide
- [ ] Unit tests written and passing (min 80% coverage)
- [ ] Integration tests passing
- [ ] Code reviewed and approved
- [ ] Merged to main/develop branch
- [ ] Deployed to dev environment
- [ ] Manual testing completed by developer
- [ ] QA testing completed and approved
- [ ] Documentation updated
- [ ] No known bugs or issues
Tools and Templates
ClickUp Sprint Board Setup
Recommended List Structure:
Project Name
├── Backlog (All unscheduled work)
├── Sprint 24 - Jan 15-26
│ ├── To Do
│ ├── In Progress
│ ├── PR Created
│ ├── PR Merged
│ ├── Ready for QA
│ ├── Under Testing
│ └── Done
└── Sprint 25 - Jan 29-Feb 9
Custom Fields to Add:
- Story Points (Number field)
- Sprint (Dropdown: Sprint 23, Sprint 24, etc.)
- Detection Phase (for bugs: Dev, QA, Production)
- Priority (Dropdown: Blocker, High, Medium, Low)
Sprint Planning Meeting Agenda Template
# Sprint Planning - Sprint 24
Date: Monday, Jan 15, 2025, 9:00-11:00 AM
## Attendees
- Product Owner: Jane Smith
- Tech Leads: John Doe, Sarah Johnson
- Development Team (10 engineers)
- QA Lead: Mike Chen
## Agenda
### 1. Review Previous Sprint (15 min)
- Velocity: 102/108 points (94%)
- Stories completed: 14/16
- Carry-overs: 2 stories (9 points)
- Key learnings from retrospective
### 2. Sprint Goal Definition (30 min)
**Proposed Goal:** Enable subscription management and billing
**Success Criteria:**
- Users can upgrade/downgrade plans
- Users can view billing history
- Payment processing works end-to-end
**Vote:** All team members vote on goal feasibility
### 3. Backlog Refinement (45 min)
**Stories to Estimate:**
1. Subscription plan selection UI
2. Payment integration with Stripe
3. Billing history page
4. Email notifications for payments
**Estimation Activity:**
- Planning poker for each story
- Break down 13+ point stories
- Identify dependencies
### 4. Capacity Planning (30 min)
**Team Availability:**
- Developer A: 10 days, 80% = 64 points
- Developer B: 10 days, 60% = 48 points
- [Complete for all team members]
**Total Capacity:** 176 points
**Committed (80%):** 140 points
**Sprint Backlog:**
[List committed stories]
### 5. Wrap-Up (10 min)
- Confirm sprint goal
- Verify everyone understands commitments
- Note any dependencies or risks
- Schedule any needed follow-ups
Daily Standup Meeting Agenda Template
# Daily Standup - Sprint 24, Day 5
Date: Friday, Jan 19, 2025, 10:00-10:15 AM
## Sprint Progress
- Committed: 140 points
- Completed: 68 points (49%)
- In Progress: 32 points
- Remaining: 40 points
## Round-Robin Updates (2 min each)
1. Developer A: [Yesterday/Today/Blockers]
2. Developer B: [Yesterday/Today/Blockers]
[Continue for all team members]
## Blocker Discussion (5 min)
Active Blockers:
- Stripe API rate limit (DevOps resolving)
- Design assets pending for error states
## Sprint Health Check (3 min)
- On track / At risk / Behind
- Any adjustments needed?
- Reminder: PR deadline is Day 7
## Next Steps
- [Action items from standup]
Estimation Poker Guidelines
How to Run Estimation Poker:
1. Prepare Story
   - Product Owner explains story
   - Team asks clarifying questions
   - Acceptance criteria reviewed
2. Estimate
   - Each team member selects card privately
   - Reveal simultaneously
   - If consensus: Done
   - If not: Discuss
3. Discussion
   - Highest and lowest estimators explain reasoning
   - Team discusses complexity, dependencies, unknowns
   - Re-estimate
   - Repeat until consensus
Online Tools:
- PlanITpoker.com
- PointingPoker.com
- Scrum Poker Online
KPI Tracking During Sprints
Team Metrics with Targets
Track these metrics to measure team health and productivity.
1. Velocity
- Definition: Story points completed per sprint
- Target: 40-50 points per sprint (for 15-person team)
- Calculation: Sum of story points for all "Done" stories
Sprint 22 Velocity: 45 points
Sprint 23 Velocity: 48 points
Sprint 24 Velocity: 52 points
Average Velocity: 48 points
Interpreting Velocity:
- Trending up: Team improving or over-estimating
- Trending down: Team struggling or under-estimating
- Stable: Predictable delivery, use for planning
2. Commitment Accuracy
- Definition: Planned vs. actual completion rate
- Target: 85-95% completion rate
- Formula: (Completed Points / Committed Points) × 100
3. Bug Detection Rate
- Definition: Where bugs are found in the process
- Target: ≥90% found in Dev or QA (not Production)
- Formula: ((Dev Bugs + QA Bugs) / Total Bugs) × 100
Example Sprint:
Bugs in Dev: 12
Bugs in QA: 8
Bugs in Production: 2
Total: 22
Detection Rate: (20 / 22) × 100 = 91% ✅
4. Cycle Time
- Definition: Time from "In Progress" to "Done"
- Target: 2-4 days average
- Calculation: Track in ClickUp time tracking
5. PR Review Turnaround
- Definition: Time from PR creation to first review
- Target: < 4 hours
- Calculation: Track in GitHub
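The Commitment Accuracy and Bug Detection Rate formulas above can be checked with a short sketch (the function names are ours; the inputs are the example figures from this section):

```python
# Sketch of the KPI formulas defined above; function names are illustrative.
def commitment_accuracy(completed_points, committed_points):
    # (Completed Points / Committed Points) x 100
    return round(completed_points / committed_points * 100)

def bug_detection_rate(dev_bugs, qa_bugs, prod_bugs):
    # ((Dev Bugs + QA Bugs) / Total Bugs) x 100
    total = dev_bugs + qa_bugs + prod_bugs
    return round((dev_bugs + qa_bugs) / total * 100)

print(commitment_accuracy(102, 108))  # 94, within the 85-95% target
print(bug_detection_rate(12, 8, 2))   # 91, above the 90% target
```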
Individual Metrics
1. Task Completion Rate
- Formula: (Completed Tasks / Assigned Tasks) × 100
- Target: ≥85%
2. Code Quality Score
- Source: SonarQube
- Target: A or B rating
- Metrics: Code coverage, duplications, code smells
3. Code Review Participation
- Target: Review 2-3 PRs per day
- Track: GitHub review activity
Example KPI Dashboard
# Sprint 24 KPI Dashboard
## Team Performance
| Metric | Target | Actual | Status |
|--------|--------|--------|--------|
| Velocity | 45-50 pts | 48 pts | ✅ On Target |
| Commitment Accuracy | 85-95% | 94% | ✅ Excellent |
| Bug Detection Rate | ≥90% | 91% | ✅ Good |
| Avg Cycle Time | 2-4 days | 3.2 days | ✅ On Target |
| PR Review Time | < 4 hours | 3.5 hours | ✅ Good |
## Quality Metrics
| Metric | Sprint 23 | Sprint 24 | Trend |
|--------|-----------|-----------|-------|
| Production Bugs | 3 | 0 | ⬇️ Excellent |
| Code Coverage | 82% | 87% | ⬆️ Improving |
| SonarQube Rating | B | A | ⬆️ Excellent |
## Delivery Metrics
| Metric | Value |
|--------|-------|
| Stories Completed | 14/16 (88%) |
| PRs Merged | 42 |
| Deploy Success Rate | 100% |
| Rollbacks | 0 |
How to Interpret Trends
Red Flags (Investigate Immediately):
- Velocity dropping 20%+ sprint over sprint
- Commitment accuracy below 70%
- Bug detection rate below 80% (too many production bugs)
- PR review time increasing
- Code coverage decreasing
Good Signs:
- Stable velocity (predictable planning)
- Commitment accuracy 85-95%
- Bug detection rate above 90%
- Improving code quality scores
- Decreasing cycle time
FAQ
Q1: What if we can't complete all committed work?
Answer: Focus on the sprint goal first, stretch goals second.
Steps:
1. Identify stories critical to sprint goal
2. Prioritize those stories for completion
3. Move non-critical stories back to backlog
4. Update team in standup about re-prioritization
5. Discuss in retrospective why the team over-committed
Prevention:
- Leave 20% buffer in capacity planning
- Break large stories into smaller pieces
- Flag risks early in sprint
- Don't commit to stretch goals
Q2: How do we handle production bugs during sprint?
Answer: Use a bug budget allocation strategy.
Bug Budget Approach:
- Reserve 15-20% of sprint capacity for unplanned work
- Production bugs get immediate priority
- Minor bugs go to backlog for next sprint
Process:
1. Assess severity (Blocker/Critical/High/Medium/Low)
2. Blocker/Critical: Immediate hotfix, pause sprint work
3. High: Add to current sprint, remove equal-sized story
4. Medium/Low: Add to next sprint backlog
Q3: Can we change the sprint goal mid-sprint?
Answer: Only in exceptional circumstances.
Valid Reasons:
- Critical production issue requiring major rework
- Business priority shifts (rare)
- Technical blocker makes goal unachievable
Process:
1. Discuss with Product Owner and Tech Lead
2. Present to full team with justification
3. Get team agreement on new goal
4. Document reason in sprint notes
5. Review in retrospective
Best Practice: This should happen less than once per 10 sprints.
Q4: How do we handle team members on vacation?
Answer: Plan capacity accounting for absences.
Capacity Planning with Vacation:
Developer A: 10 days × 0.8 × 8 points/day = 64 points
Developer B: 5 days (5 days off) × 0.8 × 8 points/day = 32 points
Developer C: 10 days × 0.8 × 8 points/day = 64 points
Adjusted Team Capacity: 160 points (vs. 192 full capacity)
Best Practices:
- Share vacation plans before sprint planning
- Redistribute work if a key person is away
- Don't assign critical path items to vacationing members
- Plan a lighter sprint if multiple team members are out
Q5: What if our velocity is unpredictable?
Answer: Look for root causes and stabilize your process.
Common Causes:
- Inconsistent estimation
- Frequent scope changes
- Unplanned work consuming capacity
- Team members switching between projects
- External dependencies causing delays
Solutions:
1. Review estimation accuracy in retrospective
2. Enforce sprint scope lock after Day 3
3. Track unplanned work, allocate buffer
4. Minimize context switching
5. Identify and resolve dependencies earlier
Give it time: Velocity stabilizes after 3-5 sprints.
Q6: How much time should we spend in meetings?
Answer: Target 10-15% of sprint time on ceremonies.
Time Budget (2-week sprint = 80 hours per person):
Sprint Planning: 2 hours (2.5%)
Daily Standups: 2.5 hours total (3.1%)
Sprint Review: 1 hour (1.25%)
Retrospective: 0.5 hours (0.6%)
Total: 6 hours (7.5% of sprint)
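The time budget above can be verified with a quick sketch; the hour values come straight from the list, and the dictionary layout is just for illustration:

```python
# Ceremony hours for one 2-week sprint (80 working hours per person).
CEREMONY_HOURS = {
    "Sprint Planning": 2.0,
    "Daily Standups": 2.5,   # 10 standups x 15 minutes
    "Sprint Review": 1.0,
    "Retrospective": 0.5,
}

def ceremony_share(sprint_hours=80):
    # Total ceremony time and its share of the sprint, as a percentage
    total = sum(CEREMONY_HOURS.values())
    return total, round(total / sprint_hours * 100, 1)

total_hours, percent = ceremony_share()
print(total_hours, percent)  # 6.0 7.5 -- well inside the 10-15% target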
Red flags:
- More than 10 hours in sprint ceremonies
- Daily standups exceeding 15 minutes
- Planning taking more than 2 hours
Q7: Should we estimate bugs?
Answer: Yes, estimate all work including bugs.
Bug Estimation Guidelines:
- Minor bug fix: 1-2 points
- Standard bug fix: 2-3 points
- Complex bug requiring investigation: 5-8 points
- Production hotfix: Estimate after investigation
Include in Velocity: Count bug story points toward velocity to reflect actual capacity consumed.
Related Documentation
Internal Documentation
- Git Workflow - Branch strategy and PR process
- GitHub-ClickUp Workflow - Complete development lifecycle
- Team Collaboration Process - Cross-team coordination
- Git Commit Standards - Commit message formatting
- Pull Request Template - PR description template
- KPI Tracking - Detailed KPI tracking processes
External Resources
- Scrum Guide - Official Scrum framework
- Agile Manifesto - Agile principles
- Planning Poker - Estimation technique
- ClickUp Documentation - ClickUp features and setup