Required framework for all development tasks. Plan before execution, specify task category and complexity, implement with quality gates, and report status after completion. Keep work small, traceable, testable, and reversible.
- NO CODE without documentation: All tasks must begin with complete planning documentation
- BDD Requirements: All functional requirements captured as executable Gherkin scenarios
- Organized Structure: Use ./docs/ for project docs and ./.ai-work/ for agent management
- Living Documentation: Keep all documentation updated throughout the development lifecycle
- Searchable Knowledge: All decisions, patterns, and solutions documented for future reference
- Transparency: Identify yourself as an AI agent in commits and documentation
- Accountability: Document decision rationale, confidence levels, and model configurations
- Planning Mandatory: Create complete planning pack before any implementation begins
- Quality First: Generate comprehensive tests before implementing features
- Incremental: Keep changes small, focused, and easily reversible
- Modular Architecture: Always decompose SYSTEM/EPIC tasks into smaller components
- Human Collaboration: Escalate when uncertain or confidence below threshold
- Security Conscious: Validate all outputs for security, bias, and correctness
❌ NEVER start coding without:
• Complete project structure setup
• Complete BDD feature files in Gherkin syntax
• Complete planning documentation in organized folders
• Validated planning checklist
✅ ALWAYS document first, then implement
🔹 ATOMIC - Single function, bug fix, documentation update, isolated change
🔹 COMPONENT - Feature implementation, integration, module refactoring
🔹 SYSTEM - Multiple components, architecture changes, cross-cutting concerns
🔹 EPIC - Complex initiatives requiring multiple related tasks
For SYSTEM or EPIC tasks:
- NEVER implement as monolith - Always decompose first
- Break down immediately into smaller ATOMIC/COMPONENT tasks
- Apply modularity principles - Separate concerns, loose coupling
- Consider service boundaries - Independent deployability
- Design for scalability - Horizontal scaling, containerization
Decomposition Process:
📋 SYSTEM ANALYSIS CHECKLIST
✅ Identify distinct business domains/bounded contexts
✅ Map service boundaries and responsibilities
✅ Define APIs and communication patterns between services
✅ Plan data storage strategy per service
✅ Design deployment and scaling strategy
✅ Create infrastructure and containerization plan
✅ Break into individual ATOMIC/COMPONENT tasks
🟢 LOW - Standard patterns, well-defined requirements, high confidence
🟡 MEDIUM - Some ambiguity, moderate complexity, requires analysis
🔴 HIGH - Complex logic, significant impact, low confidence, novel solutions
🔥 HIGH (90-100%) - Clear requirements, standard implementation
⚡ MEDIUM (70-89%) - Some assumptions, normal complexity
⚠️ LOW (50-69%) - Significant uncertainty, requires human review
🚨 CRITICAL (<50%) - Mandatory human oversight before proceeding
🏗️ DESIGN - Architecture, system design, technical specifications
⚙️ DEVELOPMENT - Feature implementation, bug fixes, refactoring
🧪 TESTING - Test creation, validation, quality assurance
📝 DOCS - Documentation, guides, API specifications
🚀 INFRA - Infrastructure, deployment, operations, CI/CD
🔬 RESEARCH - Spikes, proof of concepts, feasibility studies
🤖 AI-MGMT - Model management, prompt engineering, AI safety
📁 PROJECT ORGANIZATION (Create if not exists)
./docs/
├── architecture/   # System design and architecture docs
├── features/       # BDD feature files in Gherkin syntax
├── api/            # API documentation and specifications
└── decisions/      # Architecture Decision Records (ADRs)
./.ai-work/         # AI agent management (hidden folder)
├── current/        # Current task planning documents
├── completed/      # Historical task records
├── knowledge/      # Patterns, learnings, and reference materials
└── coordination/   # Multi-agent communication logs
- Parse requirements and acceptance criteria
- Create BDD feature files in ./docs/features/ using Gherkin syntax
- Classify task scope, category, complexity, and confidence level
- Identify constraints, dependencies, and risks
- Search knowledge base for similar implementations
- Determine human oversight requirements
Location: ./docs/features/[feature-name].feature
Required Elements:
- Feature description with user story format (As a... I want... So that...)
- Background setup conditions
- Happy path scenarios with Given/When/Then
- Edge case scenarios using Scenario Outline
- Error handling scenarios
- Proper tagging (@priority, @category)
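A minimal skeleton showing these elements together; the feature, data values, and tags are illustrative, not prescribed:

@priority-high @category-authentication
Feature: User login
  As a registered user
  I want to log in with my credentials
  So that I can access my dashboard

  Background:
    Given the authentication service is available

  Scenario: Successful login (happy path)
    Given I am a registered user
    When I log in with valid credentials
    Then I should see my dashboard

  Scenario Outline: Rejected login attempts (edge cases and errors)
    When I log in with email "<email>" and password "<password>"
    Then I should see the error "<message>"

    Examples:
      | email             | password | message             |
      | [email protected] | wrong    | Invalid credentials |
      |                   | secret   | Email is required   |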
Task Brief: ./.ai-work/current/TASK_BRIEF.md
Required Information:
- Reference to feature files and related documentation
- Clear goal, scope, and out-of-scope boundaries
- Task classification (category, scope, complexity, confidence)
- BDD scenario mapping and acceptance criteria
- Dependencies, risks, and assumptions
Solution Design: ./docs/architecture/[task-id]-solution-design.md
Required Information:
- BDD implementation strategy and scenario mapping
- System architecture overview and service decomposition
- Service/module breakdown with responsibilities and APIs
- Inter-service communication patterns
- Infrastructure and deployment strategy
- Alternative approaches considered and rationale
- Task breakdown strategy for implementation
Testing Strategy: ./.ai-work/current/TESTING_STRATEGY.md
Required Information:
- BDD test implementation approach and framework
- Test coverage plan (BDD acceptance, unit, integration, E2E)
- Test scenarios derived from feature files
- Test data strategy and environment setup
- Validation criteria for acceptance
Implementation Plan: ./.ai-work/current/IMPLEMENTATION_PLAN.md
Required Information:
- BDD-driven development process steps
- Task decomposition for SYSTEM/EPIC scope
- Dependencies and execution order
- Containerization and communication strategy
- Risk mitigation and rollback strategies
- Monitoring and validation approach
For ANY task that could grow beyond a single component:
🏗️ ARCHITECTURE ASSESSMENT
✅ Will this system need to scale independently?
✅ Will multiple teams work on different parts?
✅ Are there distinct business domains/contexts?
✅ Will different parts have different technology needs?
✅ Will parts need to be deployed independently?
If ANY answer is YES → Design as modular/microservices architecture
If ALL answers are NO → Modular monolith may be acceptable
Modularity Enforcement Rules:
- Single Responsibility: Each service/module has ONE clear purpose
- Loose Coupling: Services communicate via well-defined APIs only
- High Cohesion: Related functionality grouped together
- Independent Deployment: Each service can be deployed separately
- Data Isolation: Each service owns its data (no shared databases)
- Failure Isolation: One service failure doesn't cascade to others
📋 PLANNING COMPLETENESS
✅ Project folder structure created (./docs/ and ./.ai-work/)
✅ BDD feature files created with comprehensive Gherkin scenarios
✅ Task brief created with classification and dependencies
✅ Solution design created with architecture and service breakdown
✅ Testing strategy created with BDD integration plan
✅ Implementation plan created with BDD-driven development approach
✅ All dependencies identified and risks assessed
✅ Human review requirements determined
✅ Knowledge base searched for similar patterns
✅ Success criteria mapped to BDD scenarios
🏗️ ARCHITECTURE VALIDATION (for SYSTEM/EPIC)
✅ System decomposition analysis completed
✅ Service boundaries clearly defined with single responsibilities
✅ Inter-service communication patterns designed
✅ Data isolation strategy defined
✅ Independent deployment strategy documented
✅ Containerization approach planned
✅ Task breakdown into smaller tasks completed
🚨 MANDATORY: All items must be ✅ before proceeding to implementation
🔒 REQUIRED BEFORE CODING
✅ All planning documents complete and validated
✅ Planning checklist 100% complete
✅ Human review completed (if required by confidence level)
✅ Knowledge base updated with planning insights
Step 1: Container & Environment Foundation (Before any application code)
📦 INFRASTRUCTURE SCAFFOLDING ORDER:
1. 🐳 DOCKER FOUNDATION
   • Create docker-compose.yml for multi-service architecture (see the sketch after this list)
   • Define service containers per solution design
   • Set up shared networks and volumes
   • Configure environment-specific overrides (dev/staging/prod)
2. 🔧 ENVIRONMENT CONFIGURATION
   • Create .env.example with all required variables
   • Set up .env files for each environment
   • Configure secrets management approach
   • Document environment variable purposes
3. 🏗️ SERVICE SCAFFOLDING
   • Create Dockerfile for each service (per solution design)
   • Set up basic project structure per service
   • Configure build contexts and multi-stage builds
   • Implement health checks and readiness probes
4. 🌐 NETWORKING & COMMUNICATION
   • Configure service-to-service communication
   • Set up API gateway or load balancer if needed
   • Configure database connections and data volumes
   • Set up monitoring and logging infrastructure
5. 🔄 DEVELOPMENT WORKFLOW
   • Create development startup scripts (make dev, ./dev up)
   • Configure hot-reload and development tools
   • Set up testing environments in containers
   • Document local development setup
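A minimal docker-compose.yml sketch along these lines; service names, images, and ports are illustrative assumptions, not part of the framework:

services:
  api:
    build: ./services/api        # Dockerfile per service, per solution design
    env_file: .env               # environment-specific configuration
    ports:
      - "8000:8000"
    depends_on:
      db:
        condition: service_healthy
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - db-data:/var/lib/postgresql/data   # persistent data volume
    healthcheck:                 # readiness gate for dependent services
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      retries: 5
volumes:
  db-data: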
Step 2: Service Implementation Order (After infrastructure is ready)
🔄 SERVICE DEVELOPMENT SEQUENCE:
1. 📊 DATA LAYER SERVICES FIRST
   • Database services and migrations
   • Data access patterns and repositories
   • Shared data models and schemas
2. ⚙️ CORE BUSINESS SERVICES
   • Business logic services (per solution design)
   • Internal APIs and service contracts
   • Inter-service communication patterns
3. 🌐 API GATEWAY/EXTERNAL INTERFACES
   • External-facing APIs
   • Authentication and authorization services
   • Rate limiting and security middleware
4. 🖥️ FRONTEND SERVICES (if applicable)
   • UI applications and static assets
   • Client-side routing and state management
   • Integration with backend services
Container-First Development Principles:
🐳 INFRASTRUCTURE-FIRST APPROACH:
• Begin with architectural scaffolding (containers, networking, environments)
• Establish service boundaries through containerization before writing business logic
• Set up a development workflow that mirrors the production environment
• Configure all external dependencies (databases, caches) as services
• Language-specific build files (package.json, pom.xml) come after service structure is defined
• Service communication and data flow established before individual service implementation
- Feature File Analysis → Parse Gherkin scenarios from ./docs/features/
- Step Definition Creation → Implement Given/When/Then steps, failing tests first (sketched below)
- Minimal Implementation → Write just enough code to make BDD scenarios pass
- Traditional TDD → Add unit tests for internal component logic
- Refactoring → Improve code while keeping all tests green
- Integration Validation → Ensure BDD scenarios work end-to-end
./tests/
├── bdd/step_definitions/  # Given/When/Then implementations
├── unit/                  # Unit tests by component/service
├── integration/           # Service integration tests
├── e2e/                   # End-to-end tests
└── performance/           # Performance validation
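A sketch of step definitions using Behave (one of the frameworks named later in this document), matching the login scenario style shown in the Gherkin examples; the context.client helper is a hypothetical test fixture:

# tests/bdd/step_definitions/login_steps.py
from behave import given, when, then

@given('I am a registered user with email "{email}"')
def step_registered_user(context, email):
    # Hypothetical helper that seeds a user through the test client
    context.email = email
    context.client.create_user(email=email, password="SecurePass123")

@when("I attempt to log in with valid credentials")
def step_log_in(context):
    context.response = context.client.login(context.email, "SecurePass123")

@then("I should be redirected to the dashboard")
def step_redirected(context):
    # redirect_target is an illustrative attribute of the hypothetical client
    assert context.response.redirect_target == "/dashboard"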
- Follow established coding conventions
- Implement exactly as documented in solution design
- Include comprehensive comments for complex logic
- Implement proper error handling and logging
- Add security validations and input sanitization
- Document any deviations from original design with rationale
📊 CODE QUALITY
✅ No hardcoded values, proper naming, consistent style
✅ No duplicate code, appropriate comments, edge cases handled
🛡️ SECURITY
✅ No credentials in code, input validation, injection prevention
✅ Authentication/authorization, vulnerability scan clean
⚡ PERFORMANCE
✅ Efficient algorithms, optimized queries, caching where beneficial
✅ Resource utilization within limits
🧪 TESTING
✅ All tests pass (BDD + unit + integration)
✅ >80% test coverage, security validations pass
- Confidence below threshold, security changes, architecture changes
- Novel implementations, HIGH complexity tasks
- Feature flags for controlled rollout
- Monitoring and alerting configured
- Rollback procedures tested
- Post-deployment validation
🚨 BUILD/TEST FAILURES
1. Analyze logs systematically
2. Check recent changes for correlation
3. Revert to last known good state if unclear
4. Document resolution in knowledge base
🔄 DEPLOYMENT ISSUES
1. Execute rollback immediately
2. Analyze in non-production environment
3. Document incident and implement preventive measures
Handoff Format:
Agent: [ID] | Task: [ID] | Context: [Current state]
Confidence: [Level with reasoning] | Next Steps: [Actions]
Blockers: [Dependencies] | Human Review: [Status]
🚨 IMMEDIATE: Security, privacy, critical impact, ethics, compliance
⚠️ SCHEDULED: Low confidence, performance issues, novel patterns
📝 OPTIONAL: Medium complexity completed, optimizations identified
- API endpoints: <200ms (95th percentile)
- Database queries: <100ms
- Page loads: <3 seconds
- Resource usage: <70% CPU, <80% memory
- Structured logging with correlation IDs
- Performance metrics and alerting
- Security event tracking
- AI-specific metrics (confidence, token usage, human overrides)
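A minimal sketch of structured logging with a correlation ID, using only the Python standard library; the field names are illustrative:

import contextvars, json, logging, sys, uuid

# Correlation ID propagated per request/task via a context variable
correlation_id = contextvars.ContextVar("correlation_id", default="-")

class JsonFormatter(logging.Formatter):
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            "correlation_id": correlation_id.get(),
        })

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logging.basicConfig(level=logging.INFO, handlers=[handler])

correlation_id.set(str(uuid.uuid4()))  # set once per incoming request
logging.getLogger("api").info("request handled")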
- Defect rate, test coverage, deployment success
- AI agent: confidence accuracy, human override rate, cost efficiency
./.ai-work/knowledge/
├── patterns/    # Reusable solutions and templates
├── lessons/     # Technical insights and gotchas
└── components/  # Shared utilities and infrastructure
- Pre-task: Search knowledge base for similar patterns
- Post-task: Document new patterns, update troubleshooting guides
- Archive: Move completed tasks to historical records
Required Information:
- Task summary (ID, category, scope, complexity, confidence)
- Documentation created/updated (BDD features, architecture, planning)
- Implementation details (changes made, BDD scenarios implemented)
- AI analysis (model used, generated content, human review status)
- Quality validation results (BDD testing, traditional testing, security)
- Performance impact (metrics, cost analysis)
- Risk assessment and rollback procedures
- Knowledge updates and lessons learned
- References and artifacts
📦 EXTERNAL DEPENDENCIES
✅ Document all third-party libraries, services, and APIs
✅ Verify license compatibility and security status
✅ Assess maintenance and performance impact
✅ Document fallback options
🔴 HIGH RISK: Critical impact, security implications, data loss
• Requires: Detailed mitigation, human approval, staged rollout
🟡 MEDIUM RISK: Performance impact, user experience degradation
• Requires: Mitigation strategies, enhanced monitoring
🟢 LOW RISK: Minor changes, cosmetic updates
• Requires: Standard testing and validation
🛡️ INPUT/OUTPUT SECURITY
✅ Input validation, output encoding, file upload restrictions
✅ Rate limiting, request size limits
🔐 AUTHENTICATION & AUTHORIZATION
✅ Strong authentication, authorization checks
✅ Secure session management, password policies
🔒 DATA PROTECTION
✅ Encryption in transit and at rest
✅ PII compliance, data retention policies
✅ Access logging, security scans clean
🤖 AI SECURITY
✅ AI-generated code security scanning
✅ Prompt injection prevention
✅ Model output monitoring for inappropriate content
✅ Training data privacy and governance
✅ AI decision audit trails
Required Information:
- Task status and complexity level
- Completed work with measurable outcomes
- Next steps and blockers
- Support needed
Required Information:
- Issue classification and severity
- Impact assessment and business impact
- Problem description and root cause analysis
- Attempted solutions and recommended next steps
- Timeline constraints and resource requirements
Comprehensive reference patterns for common technology stacks. Adapt to specific project requirements while maintaining core principles.
📋 BDD TOOLCHAIN REQUIREMENTS
• Gherkin Feature Files: Human-readable requirements in ./docs/features/
• Test Framework: Cucumber (Java), SpecFlow (.NET), Behave (Python), etc.
• Step Definitions: Given/When/Then implementations in ./tests/bdd/step_definitions/
• Test Data Management: Shared test data and fixtures for scenarios
• Reporting: BDD test execution reports with scenario pass/fail status
• CI Integration: Automated BDD test execution in deployment pipeline
1. 📝 FEATURE DEFINITION
   • Write .feature files in Gherkin syntax
   • Define scenarios covering happy path, edge cases, errors
   • Tag scenarios with priorities and categories
   • Review with stakeholders for acceptance
2. 🧪 STEP DEFINITION CREATION
   • Implement Given steps (preconditions/setup)
   • Implement When steps (actions/operations)
   • Implement Then steps (assertions/validations)
   • Create reusable step definitions for common patterns
3. ⚙️ IMPLEMENTATION
   • Write minimal code to make scenarios pass
   • Follow the Red-Green-Refactor cycle
   • Ensure all scenarios remain green during refactoring
   • Add unit tests for internal logic not covered by BDD
4. ✅ VALIDATION
   • All BDD scenarios pass automatically
   • Manual testing for non-automated scenarios
   • Performance validation for critical scenarios
   • Security testing for sensitive scenarios
🎯 SCENARIO WRITING GUIDELINES
• Use business language, not technical implementation details
• Keep scenarios focused on a single piece of functionality
• Use Scenario Outline for data-driven testing
• Tag scenarios with @priority, @category, @slow, @security tags
• Include both positive and negative test cases
• Write scenarios from the user's perspective, not the system's
✅ GOOD GHERKIN EXAMPLE:
Scenario: User successfully logs into the system
  Given I am a registered user with email "[email protected]"
  And my password is "SecurePass123"
  When I attempt to log in with valid credentials
  Then I should be redirected to the dashboard
  And I should see a welcome message with my name
❌ BAD GHERKIN EXAMPLE:
Scenario: Test login API endpoint
  Given POST request to /auth/login
  When send JSON with username and password
  Then return 200 status code
☕ JAVA TECHNOLOGY STACK
• Spring Boot (current LTS) for the application framework
• Spring Data JPA for database access with the repository pattern
• Spring Security for authentication and authorization
• Spring Boot Actuator for health checks and metrics
• Maven/Gradle with wrapper for build management
• JUnit 5 + Mockito + Testcontainers for testing
🏗️ ARCHITECTURE PATTERNS
• Controller → Service → Repository layered architecture
• DTOs for API boundaries with MapStruct for mapping
• Bean Validation (JSR-303) for input validation
• Transactions managed at the Service layer
• Exception handling with @ControllerAdvice
• Configuration externalized with Spring Cloud Config
🧪 TESTING STRATEGIES
• Unit tests for business logic in the Service layer
• Slice tests (@WebMvcTest, @DataJpaTest) for individual layers
• Integration tests with @SpringBootTest and Testcontainers
• Contract testing with Spring Cloud Contract
• Performance testing with JMeter or Gatling
🔧 DEVELOPMENT TOOLS
• Spring Boot DevTools for hot reloading
• Actuator endpoints for monitoring and health checks
• Micrometer for metrics collection
• Logback for structured logging
• OpenAPI 3 with springdoc-openapi for documentation
🐍 PYTHON TECHNOLOGY STACK
• FastAPI + Uvicorn for high-performance async APIs
• Pydantic for data validation and serialization
• SQLAlchemy 2.0 for database ORM with async support
• Alembic for database migrations
• Poetry for dependency management
• Pytest + pytest-asyncio for testing
🏗️ ARCHITECTURE PATTERNS
• Router → Service → Repository → Model architecture
• Dependency injection with FastAPI dependencies
• Async/await patterns for I/O operations
• Type hints throughout the codebase
• Pydantic models for request/response validation
• Background tasks with Celery or FastAPI BackgroundTasks
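A minimal sketch of this layering with FastAPI dependency injection; the models and in-memory repository are illustrative stand-ins, not a prescribed implementation:

from fastapi import Depends, FastAPI
from pydantic import BaseModel

app = FastAPI()

class UserOut(BaseModel):
    id: int
    email: str

class UserRepository:
    # Illustrative stand-in for a SQLAlchemy-backed repository
    async def get(self, user_id: int) -> UserOut:
        return UserOut(id=user_id, email="[email protected]")

class UserService:
    def __init__(self, repo: UserRepository):
        self.repo = repo

    async def get_user(self, user_id: int) -> UserOut:
        return await self.repo.get(user_id)

def get_service() -> UserService:
    return UserService(UserRepository())

@app.get("/users/{user_id}", response_model=UserOut)
async def read_user(user_id: int, service: UserService = Depends(get_service)):
    # Router delegates to Service, which delegates to Repository
    return await service.get_user(user_id)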
🧪 TESTING STRATEGIES
• Pytest with fixtures for test setup
• AsyncIO testing with pytest-asyncio
• Database testing with pytest-postgresql
• API testing with FastAPI's TestClient
• Factory patterns with factory_boy for test data
• Property-based testing with Hypothesis
🔧 DEVELOPMENT TOOLS
• Black + isort for code formatting
• Ruff for fast linting
• mypy for static type checking
• pre-commit hooks for code quality
• uvicorn with reload for development
• Automatic OpenAPI documentation generation
🟢 NODE.JS TECHNOLOGY STACK
• Express.js or Fastify for the web framework
• TypeScript for type safety and better DX
• Prisma or TypeORM for database access
• Jest + Supertest for testing
• npm/yarn with workspaces for monorepos
• ESLint + Prettier for code quality
🏗️ ARCHITECTURE PATTERNS
• Router → Controller → Service → Repository
• Middleware for cross-cutting concerns
• Dependency injection with inversify or manual DI
• Error handling with async error boundaries
• Validation with Joi, Yup, or Zod
• Configuration management with dotenv
🧪 TESTING STRATEGIES
• Unit tests with Jest and mock functions
• Integration tests with Supertest
• Database testing with jest-mongodb or similar
• E2E testing with Playwright or Cypress
• API contract testing with Pact
• Load testing with Artillery or k6
🔧 DEVELOPMENT TOOLS
• nodemon for development auto-restart
• ts-node for TypeScript execution
• husky for git hooks
• Winston or Pino for structured logging
• Swagger/OpenAPI for documentation
• Docker for containerization
⚛️ REACT TECHNOLOGY STACK
• React 18+ with Concurrent Features
• TypeScript for type safety
• Next.js for production-grade applications
• React Query/TanStack Query for server state
• Zustand or Redux Toolkit for client state
• React Hook Form for form management
🎨 UI AND STYLING
• Tailwind CSS for utility-first styling
• Headless UI or Radix UI for accessible components
• Framer Motion for animations
• React Icons for the icon library
• Storybook for component development
🧪 TESTING STRATEGIES
• React Testing Library for component testing
• Jest for unit tests and mocks
• Playwright or Cypress for E2E testing
• Mock Service Worker (MSW) for API mocking
• Visual regression testing with Chromatic
🔧 DEVELOPMENT TOOLS
• Vite or Create React App for build tooling
• ESLint + Prettier for code formatting
• Husky + lint-staged for pre-commit hooks
• React DevTools for debugging
• Bundle analyzer for performance optimization
🅰️ ANGULAR TECHNOLOGY STACK
• Angular (latest LTS) with TypeScript
• Angular CLI for project scaffolding and builds
• RxJS for reactive programming patterns
• Angular Material or PrimeNG for UI components
• NgRx for complex state management
• Angular Reactive Forms for form handling
🎨 UI AND STYLING
• Angular Material Design system
• Angular Flex Layout for responsive design
• SCSS for enhanced styling capabilities
• Angular Animations API for transitions
• Angular CDK for building custom components
🧪 TESTING STRATEGIES
• Jasmine + Karma for unit testing
• Angular Testing Utilities for component testing
• Cypress or Playwright for E2E testing (Protractor is deprecated)
• Spectator for simplified testing
• ng-mocks for mocking dependencies
🔧 DEVELOPMENT TOOLS
• Angular DevKit for the development server
• Angular Language Service for IDE support
• Compodoc for documentation generation
• angular-eslint for code quality
• Webpack Bundle Analyzer for optimization
💚 VUE.JS TECHNOLOGY STACK
• Vue 3 with the Composition API
• TypeScript support, including TSX
• Vite for fast development and building
• Vue Router for client-side routing
• Pinia for state management
• VeeValidate for form validation
🎨 UI AND STYLING
• Vuetify or Quasar for component libraries
• Tailwind CSS for utility styling
• Vue Transition for animations
• Iconify for comprehensive icon sets
• PostCSS for CSS processing
🧪 TESTING STRATEGIES
• Vue Test Utils for component testing
• Vitest for unit testing (Vite-native)
• Cypress for E2E testing
• Testing Library Vue for testing best practices
• Storybook for component documentation
🔧 DEVELOPMENT TOOLS
• Vue DevTools for debugging
• Volar (the successor to Vetur) for IDE support
• eslint-plugin-vue for code quality
• create-vue for project scaffolding (Vue CLI is in maintenance mode)
• Nuxt for SSR/SSG applications
🗄️ RELATIONAL DATABASE PATTERNS
• PostgreSQL for ACID compliance and advanced features
• MySQL/MariaDB for web applications and read-heavy workloads
• SQLite for development, testing, and embedded applications
• Database connection pooling (PgBouncer, HikariCP)
• Read replicas for scaling read operations
📊 DATA MODELING PATTERNS
• Normalized schema design for data integrity
• Indexing strategies for query optimization
• Partitioning for large tables
• Foreign key constraints for referential integrity
• Database migrations with version control integration
⚡ PERFORMANCE OPTIMIZATION
• Query optimization with EXPLAIN plans (example below)
• Materialized views for complex aggregations
• Database-level caching with Redis
• Connection pooling and prepared statements
• Monitoring with pg_stat_statements or similar
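For example, a plan inspection in PostgreSQL; the table and column names are hypothetical:

EXPLAIN ANALYZE
SELECT o.id, o.total
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE c.region = 'EU'
ORDER BY o.created_at DESC
LIMIT 20;
-- Look for sequential scans on large tables; an index on
-- customers(region) or orders(customer_id, created_at) may help.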
📄 DOCUMENT DATABASES
• MongoDB for flexible document storage
• Document validation with JSON Schema
• Aggregation pipelines for complex queries
• Sharding for horizontal scaling
• Replica sets for high availability
🔑 KEY-VALUE STORES
• Redis for caching and session storage
• Redis Streams for event sourcing
• Redis Cluster for distributed caching
• TTLs for automatic data expiration
• Pub/Sub for real-time messaging
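A small redis-py sketch of the TTL and Pub/Sub items; key and channel names are illustrative:

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# TTL: session data expires automatically after 30 minutes
r.set("session:42", '{"user_id": 7}', ex=1800)
print(r.ttl("session:42"))  # seconds remaining

# Pub/Sub: real-time messaging between services
p = r.pubsub()
p.subscribe("events")       # first message is the subscribe confirmation
r.publish("events", "order.created")
print(p.get_message(timeout=1.0))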
🔍 SEARCH ENGINES
• Elasticsearch for full-text search
• Index management and mapping strategies
• Aggregations for analytics
• Kibana for data visualization
• Logstash for data ingestion pipelines
💾 ORM/ODM PATTERNS
• Repository pattern for data access abstraction
• Unit of Work pattern for transaction management
• Active Record vs. Data Mapper patterns
• Query builders for complex queries
• Database seeding and fixtures for testing
🚀 CACHING STRATEGIES
• Application-level caching (in-memory)
• Distributed caching with Redis
• Database query result caching
• CDN caching for static assets
• Cache invalidation strategies (TTL, event-based)
🔄 DATA SYNCHRONIZATION
• Event sourcing for audit trails
• CQRS (Command Query Responsibility Segregation)
• Database replication (primary-replica, multi-primary)
• Change data capture (CDC) for real-time sync
• Eventual consistency patterns for distributed systems
🐳 DOCKER CONTAINERIZATION
• Multi-stage builds for optimization (see the sketch below)
• Distroless or Alpine base images for security
• Non-root user containers
• Health checks for container orchestration
• Docker Compose for local development
• .dockerignore for build optimization
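A multi-stage Dockerfile sketch for a Python service combining several of these practices; the base image, port, and start command are assumptions:

# Build stage: install dependencies with the full toolchain
FROM python:3.12-slim AS build
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir --prefix=/install -r requirements.txt

# Runtime stage: small image, non-root user, health check
FROM python:3.12-slim
COPY --from=build /install /usr/local
WORKDIR /app
COPY . .
RUN useradd --no-create-home appuser
USER appuser
HEALTHCHECK --interval=30s --timeout=3s CMD \
  python -c "import urllib.request; urllib.request.urlopen('http://localhost:8000/health')" || exit 1
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]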
☸️ KUBERNETES ORCHESTRATION
• Deployment manifests with resource limits (sketched below)
• Services and Ingress for networking
• ConfigMaps and Secrets for configuration
• Horizontal Pod Autoscaler (HPA) for scaling
• Network Policies for security
• Helm charts for package management
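A minimal Deployment manifest sketch with resource limits and probes; the names, image, and port are illustrative:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: registry.example.com/api:1.0.0
          resources:
            requests: {cpu: 100m, memory: 128Mi}   # scheduling baseline
            limits: {cpu: 500m, memory: 256Mi}     # hard ceiling
          readinessProbe:
            httpGet: {path: /health, port: 8000}
          livenessProbe:
            httpGet: {path: /health, port: 8000}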
🔧 CONTAINER BEST PRACTICES
• Image scanning with Trivy or Snyk
• Runtime security with Falco
• Resource quotas and limits
• Liveness and readiness probes
• Rolling updates with zero downtime
• Pod disruption budgets for availability
🔄 CONTINUOUS INTEGRATION
• GitHub Actions, GitLab CI, or Jenkins
• Multi-stage pipelines (build, test, security, deploy)
• Parallel job execution for speed
• Artifact caching for build optimization
• Quality gates with SonarQube or CodeClimate
• Dependency vulnerability scanning
📦 CONTINUOUS DEPLOYMENT
• GitOps with ArgoCD or Flux
• Blue-green deployments for zero downtime
• Canary deployments for risk mitigation
• Feature flags for gradual rollouts
• Automated rollback triggers
• Environment promotion pipelines
📊 MONITORING & OBSERVABILITY
• Application Performance Monitoring (APM)
• Distributed tracing with Jaeger or Zipkin
• Centralized logging with the ELK stack
• Metrics collection with Prometheus
• Alerting with PagerDuty or Slack integration
• SLI/SLO monitoring and reporting
☁️ CLOUD INFRASTRUCTURE
• Infrastructure as Code with Terraform
• Serverless functions (AWS Lambda, Azure Functions)
• Managed databases (RDS, Cosmos DB, Cloud SQL)
• Object storage (S3, Azure Blob, GCS)
• CDN integration (Cloudflare, CloudFront)
• Auto-scaling groups and load balancers
🔐 SECURITY PATTERNS
• Identity and Access Management (IAM)
• Secrets management (AWS Secrets Manager, HashiCorp Vault)
• Network security with VPCs and security groups
• SSL/TLS termination at the load balancer
• Web Application Firewall (WAF)
• Security scanning in CI/CD pipelines
💰 COST OPTIMIZATION
• Right-sizing resources based on usage
• Spot instances for non-critical workloads
• Reserved instances for predictable workloads
• Auto-scaling based on demand
• Resource tagging for cost allocation
• Cost monitoring and alerting
🧪 UNIT TESTING
• Test individual functions and methods in isolation
• Mock external dependencies
• Aim for >80% code coverage
• Fast execution (<5 minutes for the full suite)
• Test edge cases and error conditions
• Parameterized tests for multiple scenarios (sketched below)
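A minimal parameterized pytest sketch; validate_email is a hypothetical function under test:

import pytest

def validate_email(value: str) -> bool:
    # Hypothetical helper under test
    return "@" in value and "." in value.split("@")[-1]

@pytest.mark.parametrize(
    "value, expected",
    [
        ("[email protected]", True),   # happy path
        ("user@localhost", False),     # edge case: missing TLD
        ("", False),                   # error condition: empty input
    ],
)
def test_validate_email(value, expected):
    assert validate_email(value) == expected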
🔗 INTEGRATION TESTING
• Test component interactions
• Database integration tests with Testcontainers
• API integration tests with real HTTP calls
• Message queue integration testing
• Third-party service integration (with contracts)
• Test data setup and teardown strategies
🌐 END-TO-END TESTING
• Test complete user workflows
• Browser automation with Selenium/Playwright
• Mobile testing with Appium
• API workflow testing
• Performance testing under load
• Cross-browser and cross-platform testing
🛡️ SECURITY TESTING
• Static Application Security Testing (SAST)
• Dynamic Application Security Testing (DAST)
• Dependency vulnerability scanning
• Penetration testing for critical applications
• Security headers validation
• Input validation and injection testing
📊 CODE QUALITY
• Static code analysis (SonarQube, CodeClimate)
• Code review processes and checklists
• Linting and formatting enforcement
• Complexity metrics monitoring
• Technical debt tracking
• Documentation coverage analysis
⚡ PERFORMANCE TESTING
• Load testing with JMeter, k6, or Artillery
• Stress testing for breaking points
• Volume testing with large datasets
• Endurance testing for memory leaks
• Spike testing for traffic surges
• Performance monitoring in production
🔄 CONTINUOUS TESTING
• Test automation in CI/CD pipelines
• Parallel test execution for speed
• Test environment management
• Test data management and generation
• Flaky test detection and remediation
• Test reporting and analytics
🤖 MODEL SERVING PLATFORMS
• FastAPI/Flask for custom model APIs
• TensorFlow Serving for TensorFlow models
• TorchServe for PyTorch models
• ONNX Runtime for cross-platform inference
• Triton Inference Server for multi-framework serving
• Kubernetes operators for ML workloads
📊 MLOPS TOOLCHAIN
• MLflow for experiment tracking and model registry
• DVC for data versioning and pipeline management
• Weights & Biases for experiment monitoring
• Kubeflow for ML pipelines on Kubernetes
• Apache Airflow for workflow orchestration
• Great Expectations for data validation
📈 MODEL MONITORING
• Evidently AI for model monitoring and drift detection
• WhyLabs for data and ML monitoring
• Seldon Core for advanced deployments
• Custom metrics for bias and fairness monitoring
• A/B testing frameworks for model comparison
• Automated retraining triggers based on performance
🧠 MACHINE LEARNING FRAMEWORKS
• TensorFlow/Keras for deep learning
• PyTorch for research and production
• scikit-learn for traditional ML algorithms
• Hugging Face Transformers for NLP
• LangChain/LangGraph for LLM applications
• OpenAI API integration patterns
🔄 DATA PROCESSING
• Apache Spark for large-scale data processing
• Pandas for data manipulation and analysis
• Apache Kafka for real-time data streaming
• Feature stores (Feast, Tecton) for feature management
• Data pipelines with Apache Beam
• ETL/ELT processes for data preparation
📜 AI GOVERNANCE & ETHICS
• Model explainability with SHAP and LIME
• Bias detection and mitigation strategies
• Privacy-preserving techniques (differential privacy)
• Audit trails for AI decision making
• Human-in-the-loop workflows
• Compliance with AI regulations (EU AI Act, etc.)