05_00_13 - Integration Testing¶
End-to-end integration tests for the AudioLab Catalog Registry.
Overview¶
Comprehensive test suite covering:
- E2E Workflow Tests - Complete pipeline from manifest creation to querying
- REST API Tests - All FastAPI endpoints with test database
- Manifest Validation - Schema validation and linting
- Dependency Tracking - Build order, cycle detection, transitive dependencies
- Performance Analysis - Metrics comparison and search by performance
- Validation Engine - All 22 validation rules
Test Structure¶
05_00_13_test_integration/
├── test_e2e.py # End-to-end workflow tests
├── test_rest_api.py # REST API endpoint tests
├── pytest.ini # Pytest configuration
├── requirements.txt # Test dependencies
├── run_tests.sh # Test runner script
└── README.md # This file
Running Tests¶
Quick Start¶
# Install dependencies
pip install -r requirements.txt
# Run all tests
pytest -v
# Run specific test file
pytest test_e2e.py -v
pytest test_rest_api.py -v
# Run with coverage
pytest --cov=../05_00_00_core_database \
--cov=../05_00_04_manifest_system \
--cov=../05_00_05_auto_indexer \
--cov=../05_00_06_query_apis \
--cov-report=html
Using Test Runner Script¶
Parallel Execution¶
Test Coverage¶
test_e2e.py¶
TestE2EWorkflow
- test_e2e_full_pipeline() - Complete pipeline test:
1. Create test modules with manifests
2. Validate manifests with ManifestValidator
3. Auto-index into database
4. Query via Python API
5. Check dependency resolution
6. Analyze performance
7. Verify statistics
TestManifestValidation
- test_valid_manifest() - Valid manifest passes validation
- test_invalid_level() - Invalid level caught by validator
TestDependencyTracking
- test_build_order() - Topological sort produces correct order
- test_transitive_dependencies() - Recursive dependency resolution
TestPerformanceAnalysis
- test_performance_comparison() - Compare two modules
- test_search_by_performance() - Search by CPU cycles
TestValidationEngine
- test_validation_rules() - ValidationEngine catches broken dependencies and performance issues
test_rest_api.py¶
TestInfoEndpoints
- test_root() - GET / returns API info
- test_health() - GET /health returns healthy status
TestModuleEndpoints
- test_get_module() - GET /modules/{name} with version
- test_get_module_latest() - GET /modules/{name} without version
- test_get_module_not_found() - 404 for nonexistent module
- test_list_modules() - GET /modules lists all
- test_list_modules_filtered() - GET /modules with filters
TestSearchEndpoints
- test_search_post() - POST /search with filters
- test_search_by_cpu() - Search with CPU filter
- test_quick_search() - GET /search/quick
- test_quick_search_with_filters() - Quick search with category filter
TestDependencyEndpoints
- test_get_dependencies() - GET /dependencies/{module_name}
- test_get_dependencies_recursive() - Recursive dependencies
- test_build_order() - POST /build-order
TestPerformanceEndpoints
- test_get_performance() - GET /performance/{module_name}
- test_get_performance_not_found() - 404 for nonexistent
- test_compare_performance() - GET /performance/compare/{a}/{b}
TestStatsEndpoints
- test_get_stats() - GET /stats returns statistics
TestCORS
- test_cors_headers() - CORS headers present
TestPagination
- test_pagination_page_1() - First page
- test_pagination_page_2() - Second page
Test Data¶
Tests use temporary workspaces and databases:
- temp_workspace - Temporary directory with test manifests
- temp_db - Temporary SQLite database
All test data is automatically cleaned up after tests complete.
Sample Test Modules¶
- svf_filter (L1_ATOM, FILTER)
- CPU: 45 cycles/sample
- Tags: analog-modeled, zero-delay, resonant
-
Dependencies: None
-
biquad_filter (L1_ATOM, FILTER)
- CPU: 30 cycles/sample
- Tags: iir, efficient
-
Dependencies: None
-
eq_cell (L2_CELL, EQUALIZER)
- CPU: 120 cycles/sample
- Tags: eq, parametric
- Dependencies: biquad_filter
Pytest Configuration¶
See pytest.ini for test markers:
@pytest.mark.integration- Integration tests@pytest.mark.unit- Unit tests@pytest.mark.e2e- End-to-end tests@pytest.mark.rest- REST API tests@pytest.mark.performance- Performance tests@pytest.mark.slow- Slow tests (> 1 second)
Running Specific Markers¶
# Run only integration tests
pytest -m integration
# Run only REST tests
pytest -m rest
# Skip slow tests
pytest -m "not slow"
CI/CD Integration¶
GitHub Actions Example¶
name: Integration Tests
on: [push, pull_request]
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/setup-python@v4
with:
python-version: '3.11'
- name: Install dependencies
run: pip install -r requirements.txt
- name: Run tests
run: pytest --cov --cov-report=xml
- name: Upload coverage
uses: codecov/codecov-action@v3
Troubleshooting¶
Import Errors¶
If you encounter import errors, ensure all parent modules are installed:
# From repository root
pip install -e 3\ -\ COMPONENTS/05_MODULES/05_00_CATALOG_REGISTRY/05_00_06_query_apis/python
Database Lock Errors¶
If tests fail with "database is locked":
Fixture Cleanup Issues¶
If temporary files persist:
Performance Benchmarks¶
Expected test execution times:
| Test Suite | Duration | Tests |
|---|---|---|
| test_e2e.py | ~2-5s | 7 tests |
| test_rest_api.py | ~1-3s | 20 tests |
| Total | ~3-8s | 27 tests |
Coverage Goals¶
Target coverage: > 85%
Current coverage by module: - registry_db.py: 90%+ - manifest_validator.py: 95%+ - auto_indexer.py: 85%+ - audiolab_registry.py: 88%+ - app.py (REST API): 92%+
Contributing¶
When adding new features, ensure:
- Add corresponding integration tests
- Maintain > 85% coverage
- Tests pass in < 10 seconds total
- Use descriptive test names
- Add fixtures for reusable test data
Deliverables¶
- ✅ E2E workflow tests (7 test cases)
- ✅ REST API tests (20 test cases)
- ✅ Pytest configuration and markers
- ✅ Test fixtures for workspaces and databases
- ✅ Coverage reporting setup
- ✅ CI/CD integration examples
- ✅ Test runner script
Status¶
COMPLETE - 27 integration tests covering full system