# Code Quality Standards

## Overview

These standards define the quality bar for consistent, maintainable, and robust software development across all AzmX engineering teams.

## Quality Metrics and Gates

### SonarQube Quality Gates
```mermaid
graph TD
    A[Code Commit] --> B[SonarQube Analysis]
    B --> C{Quality Gate}
    C -->|"Pass ≥85%"| D[Merge Approved]
    C -->|"Fail <85%"| E[Fix Required]
    E --> F[Address Issues]
    F --> B
    D --> G[Production Ready]
```
### Quality Gate Criteria
- Overall Score: ≥85% (minimum passing grade)
- Code Coverage: ≥80% for new code
- Duplicated Lines: ≤3% of total codebase
- Maintainability Rating: A or B rating required
- Reliability Rating: A rating required (no bugs)
- Security Rating: A rating required (no vulnerabilities)
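
A gate failure on any single criterion blocks the merge: for example, a pull request that pushes duplicated lines above 3% fails even if coverage and all ratings pass.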
### Critical Issues (Zero Tolerance)
- Security Vulnerabilities: Any security issue blocks merge
- Bugs: Critical and major bugs block deployment
- Code Smells: Major code smells require resolution
- Coverage Gaps: New code without tests blocks merge
## Language-Specific Standards

### Python/Django Standards
```python
# Code formatting with Black, imports organized with isort,
# and type hints on all functions.
from typing import Any, Dict, Optional

from django.core.exceptions import FieldError, ValidationError
from django.http import HttpRequest, JsonResponse

from .models import User
from .serializers import UserSerializer


def get_user_list(
    request: HttpRequest,
    filters: Optional[Dict[str, Any]] = None,
) -> JsonResponse:
    """
    Retrieve a filtered list of users.

    Args:
        request: HTTP request object
        filters: Optional filters to apply

    Returns:
        JsonResponse with user data

    Raises:
        ValidationError: When filters are invalid
    """
    try:
        users = User.objects.filter(**(filters or {}))
        serializer = UserSerializer(users, many=True)
        return JsonResponse({
            'success': True,
            'data': serializer.data,
            'count': len(serializer.data),
        })
    except (FieldError, ValidationError) as e:
        # Catch specific exceptions instead of a bare Exception
        return JsonResponse({
            'success': False,
            'error': str(e),
        }, status=400)
```
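
For the view to be reachable it still needs a route; a minimal sketch, assuming the function above lives in the app's `views.py` (the URL pattern and name are illustrative):

```python
# urls.py -- wiring for the example view (pattern and name are illustrative)
from django.urls import path

from .views import get_user_list

urlpatterns = [
    path('users/', get_user_list, name='user-list'),
]
```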
### Python Quality Checklist
- [ ] Type Hints: All function parameters and returns
- [ ] Docstrings: Google/NumPy style for all public functions
- [ ] Error Handling: Explicit exception handling
- [ ] Logging: Appropriate log levels and messages (see the sketch after this list)
- [ ] Imports: Organized with isort, no unused imports
- [ ] Formatting: Black formatter applied
- [ ] Linting: Flake8 or Ruff passes without errors
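
To make the logging item concrete, a minimal sketch of log-level usage; the function and messages are illustrative:

```python
import logging

# Module-level logger, named after the module so output is easy to filter
logger = logging.getLogger(__name__)


def sync_users(batch_size: int = 100) -> None:
    # DEBUG: diagnostic detail useful during development
    logger.debug("Starting user sync with batch_size=%d", batch_size)
    try:
        # ... synchronization work would happen here ...
        # INFO: routine milestone worth recording in production
        logger.info("User sync completed")
    except ConnectionError:
        # ERROR with traceback attached; re-raise so callers can react
        logger.exception("User sync failed")
        raise
```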
### TypeScript/React Standards
```tsx
import React, { useCallback, useState } from 'react';

// LoadingSpinner and UserCard are assumed local components;
// the import paths are illustrative.
import { LoadingSpinner } from './LoadingSpinner';
import { UserCard } from './UserCard';

// Strict type definitions
interface UserProps {
  id: number;
  name: string;
  email: string;
  isActive?: boolean;
}

interface UserListProps {
  users: UserProps[];
  onUserSelect: (user: UserProps) => void;
  loading?: boolean;
}

// Proper component structure with stable callbacks
export const UserList: React.FC<UserListProps> = ({
  users,
  onUserSelect,
  loading = false,
}) => {
  const [selectedUser, setSelectedUser] = useState<UserProps | null>(null);

  const handleUserClick = useCallback((user: UserProps) => {
    setSelectedUser(user);
    onUserSelect(user);
  }, [onUserSelect]);

  if (loading) {
    return <LoadingSpinner data-testid="user-list-loading" />;
  }

  return (
    <div className="user-list" data-testid="user-list">
      {users.map(user => (
        <UserCard
          key={user.id}
          user={user}
          isSelected={selectedUser?.id === user.id}
          onClick={() => handleUserClick(user)}
        />
      ))}
    </div>
  );
};
```
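
A note on the design choice: `useCallback` keeps `handleUserClick` referentially stable between renders, so a memoized `UserCard` (for example one wrapped in `React.memo`) re-renders only when its own props actually change.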
### TypeScript Quality Checklist

- [ ] Strict Types: No `any` types in production code
- [ ] Interface Definitions: Clear, well-documented interfaces
- [ ] Error Boundaries: Proper error handling in React
- [ ] Performance: Memoization where appropriate
- [ ] Accessibility: ARIA labels and semantic HTML
- [ ] Testing: Component tests with React Testing Library
- [ ] Formatting: Prettier and ESLint configuration
## Testing Standards

### Test Coverage Requirements

```mermaid
pie title Test Coverage Targets
    "Unit Tests" : 60
    "Integration Tests" : 25
    "E2E Tests" : 10
    "Manual Testing" : 5
```
### Coverage Targets by Component
- Business Logic: 90% unit test coverage
- API Endpoints: 100% integration test coverage
- UI Components: 80% component test coverage
- Critical Paths: 100% end-to-end test coverage
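
These thresholds can be enforced mechanically; with pytest-cov, for example, `pytest --cov --cov-fail-under=80` fails the run whenever overall coverage drops below the target.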
### Test Quality Standards

```python
# Example: a well-structured test case
import pytest

# UserService and ValidationError come from the application under test;
# the import path is illustrative.
from app.services import UserService, ValidationError


class TestUserService:
    """Test suite for UserService with proper setup and teardown."""

    def setup_method(self):
        """Set up test data before each test."""
        self.user_service = UserService()
        self.test_user_data = {
            'name': 'Test User',
            'email': 'test@example.com',
        }

    def test_create_user_success(self):
        """Test successful user creation with valid data."""
        # Arrange
        user_data = self.test_user_data

        # Act
        result = self.user_service.create_user(user_data)

        # Assert
        assert result.success is True
        assert result.user.name == user_data['name']
        assert result.user.email == user_data['email']
        assert result.user.id is not None

    def test_create_user_invalid_email(self):
        """Test user creation fails with invalid email."""
        # Arrange
        user_data = {**self.test_user_data, 'email': 'invalid-email'}

        # Act & Assert
        with pytest.raises(ValidationError) as exc_info:
            self.user_service.create_user(user_data)
        assert "Invalid email format" in str(exc_info.value)
```
### Test Quality Checklist
- [ ] Descriptive Names: Tests clearly describe what they test
- [ ] AAA Pattern: Arrange, Act, Assert structure
- [ ] Independent Tests: No dependencies between test cases
- [ ] Data Isolation: Clean setup and teardown
- [ ] Edge Cases: Test boundary conditions and error cases
- [ ] Performance: Tests run quickly (<100ms per unit test)
- [ ] Maintainability: Tests are easy to understand and modify
## Code Review Standards

### Review Process Flow

```mermaid
sequenceDiagram
    participant Dev as Developer
    participant PR as Pull Request
    participant Rev as Reviewer
    participant Lead as Tech Lead
    participant CI as CI/CD

    Dev->>PR: Create PR
    PR->>CI: Trigger automated checks
    CI-->>PR: Quality gate results
    alt Quality gates pass
        PR->>Rev: Request review
        Rev->>PR: Code review
        alt Changes needed
            Rev-->>Dev: Request changes
            Dev->>PR: Push updates
            PR->>CI: Re-run checks
        else Approved
            Rev-->>Lead: Review complete
            Lead->>PR: Final approval
            PR->>Dev: Merge authorized
        end
    else Quality gates fail
        CI-->>Dev: Fix quality issues
        Dev->>PR: Push fixes
    end
```
### Review Criteria

#### Architecture and Design

- [ ] SOLID Principles: Code follows SOLID design principles (see the sketch after this list)
- [ ] Design Patterns: Appropriate pattern usage
- [ ] Separation of Concerns: Clear responsibility boundaries
- [ ] DRY Principle: No unnecessary code duplication
- [ ] YAGNI: No over-engineering or premature optimization
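
As a hedged illustration of the SOLID item above (dependency inversion in particular), a service can depend on an abstraction rather than a concrete repository; every name in this sketch is hypothetical:

```python
from typing import Any, Dict, Protocol


class UserRepository(Protocol):
    """The abstraction the service depends on (dependency inversion)."""

    def save(self, user_data: Dict[str, Any]) -> int: ...


class PostgresUserRepository:
    """One concrete implementation; swappable without touching the service."""

    def save(self, user_data: Dict[str, Any]) -> int:
        # ... real persistence would happen here ...
        return 1


class UserService:
    def __init__(self, repository: UserRepository) -> None:
        self.repository = repository  # injected, trivial to fake in tests

    def register(self, user_data: Dict[str, Any]) -> int:
        return self.repository.save(user_data)
```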
#### Code Quality
- [ ] Readability: Code is self-documenting and clear
- [ ] Naming: Variables, functions, classes have descriptive names
- [ ] Function Size: Functions are focused and reasonably sized
- [ ] Complexity: Cyclomatic complexity is manageable
- [ ] Error Handling: Proper exception handling and logging
#### Security Review
- [ ] Input Validation: All inputs are properly validated
- [ ] Authentication: Proper authentication mechanisms
- [ ] Authorization: Appropriate access controls
- [ ] Data Protection: Sensitive data is properly handled
- [ ] SQL Injection: No unsafe database queries (see the sketch after this list)
- [ ] XSS Prevention: Output is properly escaped
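
A hedged sketch of the SQL injection item in Django terms (the table name is illustrative): the ORM parameterizes values automatically, and raw SQL must use placeholders rather than string formatting:

```python
from .models import User


def find_by_email(email: str):
    # Safe: the ORM escapes the value for us
    return User.objects.filter(email=email)


def find_by_email_raw(email: str):
    # Safe: %s placeholders are bound by the database driver.
    # Never build SQL with f-strings or concatenation of user input.
    return User.objects.raw(
        "SELECT * FROM app_user WHERE email = %s", [email]
    )
```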
#### Performance Review

- [ ] Efficiency: Algorithms and data structures are efficient
- [ ] Database Queries: Optimized queries, proper indexing (see the sketch after this list)
- [ ] Caching: Appropriate caching strategies
- [ ] Resource Usage: Memory and CPU usage considered
- [ ] Scalability: Code scales with increased load
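
For the database-queries item, the classic pitfall is the N+1 query pattern; a hedged Django sketch, assuming a `profile` foreign key and an `orders` reverse relation on `User`:

```python
from .models import User

# N+1 anti-pattern: one query for the users, then one per user for the profile
for user in User.objects.all():
    print(user.profile.bio)

# Better: select_related performs a single SQL join for foreign keys
for user in User.objects.select_related('profile'):
    print(user.profile.bio)

# prefetch_related batches to-many relations into one extra query
for user in User.objects.prefetch_related('orders'):
    print(len(user.orders.all()))  # served from the prefetch cache
```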
## Documentation Standards

### Code Documentation

```python
from typing import Any, Dict

from .models import User


class UserService:
    """
    Service class for managing user operations.

    This service handles all user-related business logic including
    creation, validation, and data persistence.

    Attributes:
        repository: User data repository
        validator: User data validator

    Example:
        >>> service = UserService()
        >>> user = service.create_user({'name': 'John', 'email': 'john@example.com'})
        >>> print(user.name)
        John
    """

    def create_user(
        self,
        user_data: Dict[str, Any],
        validate: bool = True,
    ) -> User:
        """
        Create a new user with the provided data.

        Args:
            user_data: Dictionary containing user information.
                Required keys: 'name', 'email'
                Optional keys: 'phone', 'address'
            validate: Whether to perform data validation

        Returns:
            User: Created user instance with assigned ID

        Raises:
            ValidationError: When user_data is invalid
            DuplicateError: When email already exists

        Example:
            >>> user_data = {'name': 'Jane Doe', 'email': 'jane.doe@example.com'}
            >>> user = service.create_user(user_data)
        """
        # Implementation omitted from this example
        ...
```
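
Docstring examples like these stay honest when they run as tests, for instance via pytest's `--doctest-modules` flag.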
### Documentation Checklist
- [ ] API Documentation: All public APIs documented
- [ ] Complex Logic: Non-obvious code has explanatory comments
- [ ] Examples: Usage examples for public interfaces
- [ ] Error Scenarios: Documented exception conditions
- [ ] Dependencies: External dependencies and requirements noted
- [ ] Configuration: Setup and configuration documented
## Automated Quality Checks

### Pre-commit Hooks

```yaml
# .pre-commit-config.yaml
repos:
  - repo: https://github.com/psf/black
    rev: 23.1.0
    hooks:
      - id: black
        language_version: python3.11
  - repo: https://github.com/pycqa/isort
    rev: 5.12.0
    hooks:
      - id: isort
  - repo: https://github.com/pycqa/flake8
    rev: 6.0.0
    hooks:
      - id: flake8
        args: [--max-line-length=88]
  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: v1.0.1
    hooks:
      - id: mypy
        additional_dependencies: [types-requests]
```
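
Developers enable the hooks once per clone with `pre-commit install`; `pre-commit run --all-files` applies them to the entire repository, which is useful after bumping hook versions.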
### CI/CD Quality Gates

```yaml
# GitHub Actions workflow
name: Quality Checks

on: [push, pull_request]

jobs:
  quality:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Setup Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.11'
      - name: Install dependencies
        run: |
          pip install -r requirements.txt
          pip install -r requirements-dev.txt
      - name: Run linting
        run: |
          black --check .
          isort --check-only .
          flake8 .
          mypy .
      - name: Run tests
        run: |
          pytest --cov=. --cov-report=xml
      - name: SonarQube quality gate check
        uses: SonarSource/sonarqube-quality-gate-action@master
        env:
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
        with:
          scanMetadataReportFile: target/sonar/report-task.txt
```
## Monitoring and Metrics

### Code Quality Metrics Dashboard
- Overall Quality Score: Team and individual scores
- Test Coverage Trends: Coverage over time
- Technical Debt: SonarQube debt metrics
- Review Cycle Time: Time from PR creation to merge
- Defect Density: Defects per thousand lines of code (KLOC)
- Code Churn: Files changed most frequently
### Quality Improvement Process
- Weekly Reviews: Team quality metric review
- Monthly Goals: Set quality improvement targets
- Quarterly Retrospectives: Process improvement sessions
- Annual Standards Update: Update standards based on learnings
## Quality Culture

### Team Responsibilities
- Developers: Write quality code, participate in reviews
- Tech Leads: Monitor metrics, guide improvements
- DevOps: Maintain quality automation tools
- QA: Validate quality standards in testing
### Continuous Improvement
- Feedback Loops: Regular feedback on quality processes
- Tool Updates: Keep quality tools current
- Training: Regular quality training sessions
- Innovation: Experiment with new quality practices