05_17_10: Integration Testing - Version Control Workflows¶
Executive Summary¶
Comprehensive integration testing framework for all version control subsystems and workflows.
Key Features:
- Automated test execution
- Multiple test environments (local, CI, staging)
- Test suite organization by subsystem
- Detailed reporting (console, JUnit, HTML, JSON)
- Performance benchmarking
- CI/CD integration
- Test fixtures and mocking
Tools¶
test_runner.py¶
Automated test runner for integration tests.
Usage:
# List available test suites
python test_runner.py --list
# Run all tests
python test_runner.py
# Run specific suite
python test_runner.py --suite version_strategy
# Run multiple suites
python test_runner.py --suite version_strategy --suite commit_conventions
# Verbose output
python test_runner.py --verbose
Example Output:
======================================================================
AudioLab Integration Test Runner
======================================================================
✓ Test environment setup complete: /tmp/audiolab_test_abc123/test_repo
======================================================================
Running Test Suite: version_strategy
======================================================================
Test: Patch version bump
✓ PASS (0.023s)
Test: Major version bump
✓ PASS (0.031s)
Suite Summary: 2/2 passed
Duration: 0.054s
======================================================================
Running Test Suite: commit_conventions
======================================================================
Test: Valid conventional commit
✓ PASS (0.015s)
Test: Invalid format
✓ PASS (0.012s)
Test: Breaking change
✓ PASS (0.018s)
Suite Summary: 3/3 passed
Duration: 0.045s
======================================================================
Test Report
======================================================================
Overall Results:
Total Tests: 5
Passed: 5 (100.0%)
Failed: 0 (0.0%)
Duration: 0.099s
Suite Breakdown:
✓ version_strategy 2/ 2 (100.0%)
✓ commit_conventions 3/ 3 (100.0%)
======================================================================
✓ Test environment cleaned up
✓ JSON report written to: test-results/results.json
✓ JUnit report written to: test-results/junit.xml
setup_test_env.py¶
Environment setup and validation script.
Usage:
# Check environment readiness
python setup_test_env.py --check-only
# Full setup with fixtures
python setup_test_env.py
# Install Python dependencies
python setup_test_env.py --install-deps
Example Output:
======================================================================
AudioLab Test Environment Setup
======================================================================
Checking Dependencies:
----------------------------------------------------------------------
✓ Git: git version 2.43.0
✓ Python: Python 3.11.5
✓ CMake: cmake version 3.27.0
✓ PyYAML: installed
Checking Git Configuration:
----------------------------------------------------------------------
✓ user.name: John Doe
✓ user.email: john@audiolab.dev
Checking Directories:
----------------------------------------------------------------------
✓ test-results/: Exists
Platform: windows
----------------------------------------------------------------------
✓ PowerShell: 7.4.0
Creating Test Fixtures:
----------------------------------------------------------------------
✓ Created: sample_commits.txt
✓ Created: sample_versions.txt
======================================================================
Test Environment Setup Report
======================================================================
✓ Checks Passed: 9
• Git
• Python
• CMake
• PyYAML
• git user.name
• git user.email
• directories
• PowerShell
• test fixtures
✓ Environment is ready for integration testing
Test Suites¶
1. Version Strategy Tests¶
Tests version numbering, tagging, and coordinated versioning.
Tests:
- Semantic version validation
- Version bumping (patch/minor/major)
- Coordinated versioning across layers
- Tag creation and management
- Version constraint checking
Example Scenario:
name: "Patch version bump"
steps:
- "Current version: 2.1.0"
- "Apply bug fix"
- "Bump to 2.1.1"
- "Verify tag created"
expected: "Version 2.1.1 tagged successfully"
2. Branching Model Tests¶
Tests git branching workflows.
Tests:
- Feature branch creation and merging
- Hotfix workflow
- Release branch management
- Branch protection rules
- Stale branch detection
Example Scenario:
name: "Feature branch workflow"
steps:
- "Create feature/new-oscillator from develop"
- "Make commits"
- "Create PR to develop"
- "Merge with squash"
expected: "Single commit in develop with all changes"
3. Commit Conventions Tests¶
Tests commit message validation and formatting.
Tests:
- Conventional commit format validation
- Commit message length checking
- Breaking change detection
- Scope validation
- Commit hook enforcement
Example Scenario:
name: "Valid conventional commit"
commit: "feat(audio): add stereo panning support"
expected: "Commit accepted"
4. Release Automation Tests¶
Tests automated release processes.
Tests:
- Changelog generation
- Version tagging
- Artifact creation
- Release notes generation
- Rollback procedures
Example Scenario:
name: "Release creation"
steps:
- "Trigger release workflow"
- "Generate changelog from commits"
- "Create version tag"
- "Build release artifacts"
- "Publish release notes"
expected: "Release v2.1.0 published successfully"
5. Merge Strategies Tests¶
Tests merge conflict detection and resolution.
Tests:
- Fast-forward merge
- Merge commit creation
- Squash merge
- Rebase merge
- Conflict detection
- Automated conflict resolution
Example Scenario:
name: "Clean fast-forward"
steps:
- "Feature branch ahead of main"
- "No conflicts"
- "Fast-forward merge"
expected: "Linear history maintained"
6. Compatibility Matrix Tests¶
Tests version compatibility checking.
Tests:
- Layer compatibility (L0-L3)
- External dependency compatibility
- Platform compatibility
- Constraint validation
- Deprecation warnings
Example Scenario:
name: "Compatible versions"
versions:
kernels: "2.1.0"
atoms: "2.1.0"
cells: "2.1.0"
engines: "2.1.0"
expected: "All versions compatible"
7. Migration Tools Tests¶
Tests automated version migration.
Tests:
- Pre-migration validation
- Regex transformation
- AST transformation
- Dry-run mode
- Rollback procedures
Example Scenario:
name: "Successful migration"
steps:
- "Validate pre-conditions"
- "Create backup"
- "Run dry-run"
- "Apply transformations"
- "Verify post-conditions"
expected: "Migration completed successfully"
Test Execution¶
Sequential Execution¶
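Assuming parallelism is disabled in integration_testing.yaml, a plain runner invocation executes the suites one after another in the configured order:
python test_runner.py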
Parallel Execution¶
Configured in integration_testing.yaml:
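execution:
  parallel:
    enabled: true
    max_workers: 4
This excerpt mirrors the execution settings shown under Key Sections below.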
Dependency Order¶
Some suites depend on others:
execution:
  order:
    strategy: "dependency"
    dependencies:
      branching_model: ["version_strategy"]
      merge_strategies: ["branching_model"]
Test Environments¶
Local Environment¶
For development and debugging:
- Creates temporary repository
- Runs tests locally
- Fast feedback loop
CI Environment¶
Automated testing in GitHub Actions:
# .github/workflows/integration-tests.yml
name: Integration Tests
on: [pull_request, push]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Setup
        run: python setup_test_env.py --install-deps
      - name: Run tests
        run: python test_runner.py
Staging Environment¶
Production-like testing before deployment.
Test Data and Fixtures¶
Sample Commits¶
Valid conventional commits:
feat(audio): add reverb effect
fix(dsp): correct phase calculation
docs(readme): update installation guide
Invalid commits:
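For illustration (the fixture file's exact contents are not reproduced here), messages like the following fail validation:
updated some files
Fix bug
feat add reverb effect   (missing colon after the type)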
Sample Versions¶
Valid:
Invalid:
Sample Branches¶
Reporting¶
Console Output¶
Real-time test execution feedback.
JUnit XML¶
For CI/CD integration:
<?xml version="1.0" encoding="UTF-8"?>
<testsuites>
  <testsuite name="version_strategy" tests="2" failures="0" time="0.054">
    <testcase name="Patch version bump" classname="version_strategy" time="0.023"/>
    <testcase name="Major version bump" classname="version_strategy" time="0.031"/>
  </testsuite>
</testsuites>
JSON Report¶
Machine-readable results:
{
  "timestamp": "2025-01-15T14:23:45",
  "summary": {
    "total": 5,
    "passed": 5,
    "failed": 0,
    "duration": 0.099
  },
  "suites": [...]
}
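A small consumer sketch, assuming the keys shown above and the results.json path reported by the runner:
import json
import sys

with open("test-results/results.json") as fh:
    report = json.load(fh)

summary = report["summary"]
print(f"{summary['passed']}/{summary['total']} passed in {summary['duration']}s")
sys.exit(1 if summary["failed"] else 0)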
HTML Report¶
Human-readable detailed report with charts and visualizations.
Configuration Files¶
integration_testing.yaml¶
Complete configuration (~15 KB):
- Test environments
- Test suites with scenarios
- Execution settings
- Test data and fixtures
- Assertions
- Reporting configuration
- CI/CD integration
Key Sections:
test_suites:
  version_strategy:
    description: "Test version numbering and tagging"
    scenarios:
      - name: "Patch version bump"
        steps: [...]
        expected: "..."
execution:
  parallel:
    enabled: true
    max_workers: 4
reporting:
  formats: ["console", "junit", "html", "json"]
Best Practices¶
Writing Tests¶
- Clear scenario names - Describe what's being tested
- Explicit steps - Each step should be atomic
- Verifiable expectations - Clear pass/fail criteria
- Test isolation - Each test independent
- Fast execution - Keep tests under 5 seconds when possible
Running Tests¶
- Setup environment first - Run setup_test_env.py
- Run locally before CI - Catch issues early
- Review failed tests - Understand root cause
- Check reports - Use JSON/HTML for detailed analysis
Maintaining Tests¶
- Update scenarios - When workflows change
- Add regression tests - For bugs found in production
- Remove obsolete tests - Keep suite lean
- Monitor flaky tests - Fix or quarantine
Integration with CI/CD¶
GitHub Actions¶
# .github/workflows/integration-tests.yml
name: Integration Tests
on:
  pull_request:
    branches: [main, develop]
  push:
    branches: [main]
  schedule:
    - cron: '0 2 * * *'  # Daily at 2 AM
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Setup Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.11'
      - name: Setup test environment
        run: python setup_test_env.py --install-deps
      - name: Run integration tests
        run: python test_runner.py
      - name: Upload test results
        if: always()
        uses: actions/upload-artifact@v3
        with:
          name: test-results
          path: test-results/
      - name: Publish test report
        uses: mikepenz/action-junit-report@v3
        if: always()
        with:
          report_paths: 'test-results/junit.xml'
Pre-commit Hook¶
Run tests before committing:
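One possible wiring (an assumption; the project may ship its own hook) is a small executable script at .git/hooks/pre-commit that delegates to the runner:
#!/usr/bin/env python3
# .git/hooks/pre-commit -- make this file executable
import subprocess
import sys

# Block the commit if the fast commit-convention suite fails.
result = subprocess.run(
    [sys.executable, "test_runner.py", "--suite", "commit_conventions"]
)
sys.exit(result.returncode)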
Troubleshooting¶
Issue: Environment setup fails¶
Symptom:
Solution: Install required dependencies:
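# Re-run setup with dependency installation
python setup_test_env.py --install-deps
# Or install a single missing package directly (PyYAML shown as an example)
pip install pyyaml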
Issue: Tests timeout¶
Symptom:
Solution:
1. Check system load
2. Increase timeout in configuration
3. Optimize slow tests
Issue: Flaky tests¶
Symptom: Tests pass/fail intermittently
Solution:
1. Identify flaky tests in reports
2. Add retries for network operations
3. Improve test isolation
4. Fix race conditions
Performance Benchmarks¶
Target Performance¶
- Version validation: < 100ms
- Commit validation: < 50ms
- Merge conflict detection: < 500ms
- Compatibility check: < 200ms
Actual Performance¶
Run benchmarks:
Example results:
Benchmark: version_validation
Average: 85ms
95th percentile: 95ms
Status: ✓ PASS
Benchmark: commit_validation
Average: 42ms
95th percentile: 48ms
Status: ✓ PASS
Test Coverage¶
Coverage Goals¶
- Line coverage: 80%+
- Branch coverage: 70%+
- Function coverage: 90%+
Measuring Coverage¶
# Run with coverage
python -m coverage run test_runner.py
# Generate report
python -m coverage report
python -m coverage html
Extension Points¶
Custom Test Suites¶
Add new suites to integration_testing.yaml:
test_suites:
  my_custom_suite:
    description: "Custom test suite"
    tests:
      - test_custom_feature
    scenarios:
      - name: "Custom scenario"
        steps: [...]
        expected: "..."
Custom Assertions¶
Extend assertion framework:
def assert_custom_condition(value, expected):
    """Custom assertion logic"""
    if value != expected:
        raise AssertionError(f"Expected {expected}, got {value}")
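A usage sketch (the variable name is illustrative; how custom assertions are registered with the runner is project-specific):
assert_custom_condition(bumped_version, "2.1.1")  # raises AssertionError on mismatch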
Part of AudioLab Version Control System (05_17_VERSION_CONTROL)