Skip to content

05_00_13 - Integration Testing

End-to-end integration tests for the AudioLab Catalog Registry.

Overview

Comprehensive test suite covering:

  1. E2E Workflow Tests - Complete pipeline from manifest creation to querying
  2. REST API Tests - All FastAPI endpoints with test database
  3. Manifest Validation - Schema validation and linting
  4. Dependency Tracking - Build order, cycle detection, transitive dependencies
  5. Performance Analysis - Metrics comparison and search by performance
  6. Validation Engine - All 22 validation rules

Test Structure

05_00_13_test_integration/
├── test_e2e.py           # End-to-end workflow tests
├── test_rest_api.py      # REST API endpoint tests
├── pytest.ini            # Pytest configuration
├── requirements.txt      # Test dependencies
├── run_tests.sh          # Test runner script
└── README.md             # This file

Running Tests

Quick Start

# Install dependencies
pip install -r requirements.txt

# Run all tests
pytest -v

# Run specific test file
pytest test_e2e.py -v
pytest test_rest_api.py -v

# Run with coverage
pytest --cov=../05_00_00_core_database \
       --cov=../05_00_04_manifest_system \
       --cov=../05_00_05_auto_indexer \
       --cov=../05_00_06_query_apis \
       --cov-report=html

Using Test Runner Script

chmod +x run_tests.sh
./run_tests.sh

Parallel Execution

# Run tests in parallel (4 workers)
pytest -n 4

Test Coverage

test_e2e.py

TestE2EWorkflow - test_e2e_full_pipeline() - Complete pipeline test: 1. Create test modules with manifests 2. Validate manifests with ManifestValidator 3. Auto-index into database 4. Query via Python API 5. Check dependency resolution 6. Analyze performance 7. Verify statistics

TestManifestValidation - test_valid_manifest() - Valid manifest passes validation - test_invalid_level() - Invalid level caught by validator

TestDependencyTracking - test_build_order() - Topological sort produces correct order - test_transitive_dependencies() - Recursive dependency resolution

TestPerformanceAnalysis - test_performance_comparison() - Compare two modules - test_search_by_performance() - Search by CPU cycles

TestValidationEngine - test_validation_rules() - ValidationEngine catches broken dependencies and performance issues

test_rest_api.py

TestInfoEndpoints - test_root() - GET / returns API info - test_health() - GET /health returns healthy status

TestModuleEndpoints - test_get_module() - GET /modules/{name} with version - test_get_module_latest() - GET /modules/{name} without version - test_get_module_not_found() - 404 for nonexistent module - test_list_modules() - GET /modules lists all - test_list_modules_filtered() - GET /modules with filters

TestSearchEndpoints - test_search_post() - POST /search with filters - test_search_by_cpu() - Search with CPU filter - test_quick_search() - GET /search/quick - test_quick_search_with_filters() - Quick search with category filter

TestDependencyEndpoints - test_get_dependencies() - GET /dependencies/{module_name} - test_get_dependencies_recursive() - Recursive dependencies - test_build_order() - POST /build-order

TestPerformanceEndpoints - test_get_performance() - GET /performance/{module_name} - test_get_performance_not_found() - 404 for nonexistent - test_compare_performance() - GET /performance/compare/{a}/{b}

TestStatsEndpoints - test_get_stats() - GET /stats returns statistics

TestCORS - test_cors_headers() - CORS headers present

TestPagination - test_pagination_page_1() - First page - test_pagination_page_2() - Second page

Test Data

Tests use temporary workspaces and databases:

  • temp_workspace - Temporary directory with test manifests
  • temp_db - Temporary SQLite database

All test data is automatically cleaned up after tests complete.

Sample Test Modules

  1. svf_filter (L1_ATOM, FILTER)
  2. CPU: 45 cycles/sample
  3. Tags: analog-modeled, zero-delay, resonant
  4. Dependencies: None

  5. biquad_filter (L1_ATOM, FILTER)

  6. CPU: 30 cycles/sample
  7. Tags: iir, efficient
  8. Dependencies: None

  9. eq_cell (L2_CELL, EQUALIZER)

  10. CPU: 120 cycles/sample
  11. Tags: eq, parametric
  12. Dependencies: biquad_filter

Pytest Configuration

See pytest.ini for test markers:

  • @pytest.mark.integration - Integration tests
  • @pytest.mark.unit - Unit tests
  • @pytest.mark.e2e - End-to-end tests
  • @pytest.mark.rest - REST API tests
  • @pytest.mark.performance - Performance tests
  • @pytest.mark.slow - Slow tests (> 1 second)

Running Specific Markers

# Run only integration tests
pytest -m integration

# Run only REST tests
pytest -m rest

# Skip slow tests
pytest -m "not slow"

CI/CD Integration

GitHub Actions Example

name: Integration Tests

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: '3.11'
      - name: Install dependencies
        run: pip install -r requirements.txt
      - name: Run tests
        run: pytest --cov --cov-report=xml
      - name: Upload coverage
        uses: codecov/codecov-action@v3

Troubleshooting

Import Errors

If you encounter import errors, ensure all parent modules are installed:

# From repository root
pip install -e 3\ -\ COMPONENTS/05_MODULES/05_00_CATALOG_REGISTRY/05_00_06_query_apis/python

Database Lock Errors

If tests fail with "database is locked":

# Run tests serially (no parallel)
pytest -n 0

Fixture Cleanup Issues

If temporary files persist:

# Manually clean temp directory
rm -rf /tmp/audiolab_test_*

Performance Benchmarks

Expected test execution times:

Test Suite Duration Tests
test_e2e.py ~2-5s 7 tests
test_rest_api.py ~1-3s 20 tests
Total ~3-8s 27 tests

Coverage Goals

Target coverage: > 85%

Current coverage by module: - registry_db.py: 90%+ - manifest_validator.py: 95%+ - auto_indexer.py: 85%+ - audiolab_registry.py: 88%+ - app.py (REST API): 92%+

Contributing

When adding new features, ensure:

  1. Add corresponding integration tests
  2. Maintain > 85% coverage
  3. Tests pass in < 10 seconds total
  4. Use descriptive test names
  5. Add fixtures for reusable test data

Deliverables

  • ✅ E2E workflow tests (7 test cases)
  • ✅ REST API tests (20 test cases)
  • ✅ Pytest configuration and markers
  • ✅ Test fixtures for workspaces and databases
  • ✅ Coverage reporting setup
  • ✅ CI/CD integration examples
  • ✅ Test runner script

Status

COMPLETE - 27 integration tests covering full system