05_02_test_integration - Integration Testing

📋 Overview

A complete integration test suite that verifies all subsystems of the Dependency Graph Visualizer work correctly together. It covers end-to-end tests, performance benchmarks, and validation of full workflows.

🎯 Objectives

  1. Integration Tests - Verify interaction between subsystems
  2. End-to-End Tests - Complete workflows from catalog to output
  3. Performance Tests - Benchmarks with large, realistic graphs
  4. Regression Tests - Prevent functionality regressions
  5. Smoke Tests - Quick verification of basic functionality

🏗️ Test Structure

test_integration/
├── test_full_workflow.cpp      # Complete E2E tests
├── test_subsystem_integration.cpp  # Integration between modules
├── test_performance.cpp         # Performance benchmarks
├── test_regression.cpp          # Regression tests
├── fixtures/                    # Test data
│   ├── small_catalog.json       # Small graph (10 nodes)
│   ├── medium_catalog.json      # Medium graph (100 nodes)
│   ├── large_catalog.json       # Large graph (500 nodes)
│   └── cyclic_catalog.json      # Graph with cycles (invalid)
└── utilities/
    ├── test_helpers.hpp         # Test utilities
    └── graph_generators.hpp     # Synthetic graph generators

🧪 Test Categories

1. Full Workflow Tests

Scenario: Load catalog → Build graph → Visualize → Export → Validate

TEST_CASE("Full Workflow: Catalog to Visualization", "[integration][e2e]") {
    // 1. Load catalog
    GraphBuilder builder;
    builder.load_from_catalog("fixtures/medium_catalog.json");

    // 2. Build graph
    Graph graph = builder.build();
    REQUIRE(graph.get_nodes().size() > 0);

    // 3. Visualize
    DotExporter dot_exporter;
    std::string dot = dot_exporter.export_graph(graph);
    REQUIRE(!dot.empty());

    // 4. Export to multiple formats
    BatchExporter batch;
    batch.add_formats({ExportFormat::DOT, ExportFormat::JSON});
    auto results = batch.export_all(graph, "test_output");

    // 5. Validate outputs
    for (const auto& result : results) {
        REQUIRE(result.success);
        REQUIRE(result.file_size_bytes > 0);
    }
}

2. Subsystem Integration Tests

Verify that modules communicate with each other correctly:

TEST_CASE("Query + Metrics Integration", "[integration][subsystem]") {
    Graph graph = load_test_graph();

    // Calculate metrics
    MetricsEngine metrics(graph);
    metrics.compute_all();

    // Query using metrics results
    auto important = GraphQuery(graph)
        .select_all()
        .where([](const NodePtr& n) {
            return n->betweenness > 0.5;  // High centrality
        })
        .get_nodes();

    REQUIRE(!important.empty());
}

TEST_CASE("Filter + Export Integration", "[integration][subsystem]") {
    Graph graph = load_test_graph();

    // Apply filter
    auto filter = FilterChain()
        .by_level({L2_CELL, L3_ENGINE})
        .cpu_greater_than(100);

    Graph filtered = filter.apply(graph);

    // Export filtered graph
    JsonExporter exporter;
    std::string json = exporter.export_graph(filtered);

    // Validate JSON contains only filtered nodes
    REQUIRE(json.find("L0_KERNEL") == std::string::npos);
    REQUIRE(json.find("L1_ATOM") == std::string::npos);
}

3. Performance Tests

Benchmarks with large graphs:

TEST_CASE("Performance: Large Graph Operations", "[integration][performance]") {
    // Generate large synthetic graph
    Graph large_graph = generate_synthetic_graph(500, 1500);

    BENCHMARK("Build graph (500 nodes)") {
        GraphBuilder builder;
        return builder.build();
    };

    BENCHMARK("Topological sort (500 nodes)") {
        return GraphQuery(large_graph)
            .select_all()
            .topological_sort()
            .get_nodes();
    };

    BENCHMARK("Export to JSON (500 nodes)") {
        JsonExporter exporter;
        return exporter.export_graph(large_graph);
    };

    BENCHMARK("Calculate all metrics (500 nodes)") {
        MetricsEngine metrics(large_graph);
        metrics.compute_all();
        return metrics.get_graph_metrics();
    };
}

4. Regression Tests

Prevent fixed bugs from reappearing:

TEST_CASE("Regression: Cycle Detection Bug #42", "[integration][regression]") {
    // Bug: the cycle detector did not find cycles of length 2
    Graph graph;

    auto a = std::make_shared<Node>("a", "A", HierarchyLevel::L0_KERNEL);
    auto b = std::make_shared<Node>("b", "B", HierarchyLevel::L0_KERNEL);

    graph.add_node(a);
    graph.add_node(b);
    graph.add_edge(std::make_shared<Edge>("a", "b"));
    graph.add_edge(std::make_shared<Edge>("b", "a"));

    auto cycles = GraphQuery(graph).get_cycles();

    REQUIRE_FALSE(cycles.empty());
    REQUIRE(cycles[0].length() == 2);
}

TEST_CASE("Regression: Export Escaping Issue #57", "[integration][regression]") {
    // Bug: quotes in names produced invalid JSON
    Graph graph;

    auto node = std::make_shared<Node>("test", "Module \"Test\"", HierarchyLevel::L0_KERNEL);
    graph.add_node(node);

    JsonExporter exporter;
    std::string json = exporter.export_graph(graph);

    // The JSON must be valid and contain escaped quotes
    REQUIRE(json.find("\\\"Test\\\"") != std::string::npos);
}

5. Smoke Tests

Quick verification of basic functionality:

TEST_CASE("Smoke Test: All Subsystems Available", "[integration][smoke]") {
    Graph graph = create_minimal_graph();

    // Graph construction
    REQUIRE(graph.get_nodes().size() > 0);

    // Visualization
    DotExporter dot;
    REQUIRE(!dot.export_graph(graph).empty());

    // Query
    auto all = GraphQuery(graph).select_all().get_nodes();
    REQUIRE(!all.empty());

    // Metrics
    MetricsEngine metrics(graph);
    metrics.compute_all();
    auto stats = metrics.get_graph_metrics();
    REQUIRE(stats.node_count > 0);

    // Export
    JsonExporter json;
    REQUIRE(!json.export_graph(graph).empty());
}

📊 Test Fixtures

Small Graph (10 nodes)

Use: Fast tests, debugging

{
    "modules": [
        {"id": "mul", "label": "Multiply", "level": "L0_KERNEL"},
        {"id": "add", "label": "Add", "level": "L0_KERNEL"},
        {"id": "osc", "label": "Oscillator", "level": "L1_ATOM"},
        ...
    ],
    "dependencies": [
        {"source": "osc", "target": "mul"},
        ...
    ]
}

Medium Graph (100 nodes)

Use: Realistic tests, moderate performance (the composition is asserted in the sketch below)

  • 25 L0_KERNEL
  • 40 L1_ATOM
  • 25 L2_CELL
  • 10 L3_ENGINE
  • ~250 dependencies
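
As a sketch, this composition could be pinned down by a fixture test. It assumes the GraphBuilder and GraphQuery APIs shown above, plus a per-node level field (an assumption; only betweenness and cpu_cycles appear elsewhere in this document):

TEST_CASE("Fixture: medium_catalog composition", "[integration][fixtures]") {
    GraphBuilder builder;
    builder.load_from_catalog("fixtures/medium_catalog.json");
    Graph graph = builder.build();

    REQUIRE(graph.get_nodes().size() == 100);

    // Count nodes per hierarchy level via the query API
    // (n->level is assumed to expose the node's HierarchyLevel)
    auto count_level = [&](HierarchyLevel level) {
        return GraphQuery(graph)
            .select_all()
            .where([=](const NodePtr& n) { return n->level == level; })
            .get_nodes()
            .size();
    };

    REQUIRE(count_level(HierarchyLevel::L0_KERNEL) == 25);
    REQUIRE(count_level(HierarchyLevel::L1_ATOM) == 40);
    REQUIRE(count_level(HierarchyLevel::L2_CELL) == 25);
    REQUIRE(count_level(HierarchyLevel::L3_ENGINE) == 10);
}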

Large Graph (500 nodes)

Use: Stress tests, benchmarks

  • 100 L0_KERNEL
  • 200 L1_ATOM
  • 150 L2_CELL
  • 50 L3_ENGINE
  • ~1500 dependencies

Cyclic Graph (invalid)

Use: Validation tests, error handling

{
    "modules": [
        {"id": "a", "label": "A"},
        {"id": "b", "label": "B"},
        {"id": "c", "label": "C"}
    ],
    "dependencies": [
        {"source": "a", "target": "b"},
        {"source": "b", "target": "c"},
        {"source": "c", "target": "a"}  // Ciclo!
    ]
}
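
A minimal sketch of how this fixture might be exercised, assuming that building a cyclic catalog succeeds and the cycle is then reported through GraphQuery::get_cycles(), as in the regression test above:

TEST_CASE("Fixture: cyclic_catalog is detected as invalid", "[integration][fixtures]") {
    GraphBuilder builder;
    builder.load_from_catalog("fixtures/cyclic_catalog.json");
    Graph graph = builder.build();

    // The a -> b -> c -> a dependency chain must be reported as a cycle
    auto cycles = GraphQuery(graph).get_cycles();
    REQUIRE_FALSE(cycles.empty());
    REQUIRE(cycles[0].length() == 3);
}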

🔧 Test Utilities

Graph Generators

/**
 * @brief Generate synthetic graph for testing
 */
Graph generate_synthetic_graph(size_t node_count, size_t edge_count) {
    Graph graph;

    // Generate nodes
    for (size_t i = 0; i < node_count; ++i) {
        auto node = std::make_shared<Node>(
            "node_" + std::to_string(i),
            "Node " + std::to_string(i),
            random_level()
        );
        node->cpu_cycles = random_cpu();
        graph.add_node(node);
    }

    // Generate edges (ensuring DAG)
    for (size_t i = 0; i < edge_count; ++i) {
        auto [src, tgt] = random_dag_edge(graph);
        graph.add_edge(std::make_shared<Edge>(src, tgt));
    }

    return graph;
}
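
The helpers random_level(), random_cpu(), and random_dag_edge() are referenced above but not defined in this document. A minimal sketch follows, assuming cpu_cycles is an unsigned integer and nodes are named node_<i> as in generate_synthetic_graph; the index-ordering trick in random_dag_edge keeps the generated graph acyclic by construction:

#include <algorithm>
#include <cstdint>
#include <random>
#include <string>
#include <utility>

// Shared RNG for the generators (hypothetical helpers, not part of the public API)
static std::mt19937& test_rng() {
    static std::mt19937 engine{std::random_device{}()};
    return engine;
}

HierarchyLevel random_level() {
    static const HierarchyLevel levels[] = {
        HierarchyLevel::L0_KERNEL, HierarchyLevel::L1_ATOM,
        HierarchyLevel::L2_CELL, HierarchyLevel::L3_ENGINE};
    std::uniform_int_distribution<size_t> pick(0, 3);
    return levels[pick(test_rng())];
}

uint64_t random_cpu() {
    std::uniform_int_distribution<uint64_t> cycles(10, 10000);
    return cycles(test_rng());
}

// Only ever create edges from a higher-index node to a lower-index one,
// so the synthetic graph stays a DAG no matter how many edges are drawn.
std::pair<std::string, std::string> random_dag_edge(const Graph& graph) {
    const size_t n = graph.get_nodes().size();
    std::uniform_int_distribution<size_t> pick(0, n - 1);
    size_t a = pick(test_rng());
    size_t b = pick(test_rng());
    while (a == b) b = pick(test_rng());
    const size_t src = std::max(a, b);
    const size_t tgt = std::min(a, b);
    return {"node_" + std::to_string(src), "node_" + std::to_string(tgt)};
}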

/**
 * @brief Create minimal valid graph
 */
Graph create_minimal_graph() {
    Graph graph;

    auto kernel = std::make_shared<Node>("kernel", "Kernel", L0_KERNEL);
    auto atom = std::make_shared<Node>("atom", "Atom", L1_ATOM);

    graph.add_node(kernel);
    graph.add_node(atom);
    graph.add_edge(std::make_shared<Edge>("atom", "kernel"));

    return graph;
}

/**
 * @brief Create graph with known properties
 */
Graph create_graph_with_bottleneck() {
    // Creates graph where one node has very high in-degree
    Graph graph;

    auto bottleneck = std::make_shared<Node>("bottleneck", "Bottleneck", L0_KERNEL);
    graph.add_node(bottleneck);

    for (int i = 0; i < 20; ++i) {
        auto dependent = std::make_shared<Node>(
            "dep_" + std::to_string(i),
            "Dependent " + std::to_string(i),
            L1_ATOM
        );
        graph.add_node(dependent);
        graph.add_edge(std::make_shared<Edge>(dependent->id, bottleneck->id));
    }

    return graph;
}
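
A sketch of a test built on this fixture, using only the Graph accessors that appear elsewhere in this document (get_nodes(), get_edges(), and the edge source/target fields):

TEST_CASE("Bottleneck fixture has the expected fan-in", "[integration][fixtures]") {
    Graph graph = create_graph_with_bottleneck();
    REQUIRE(graph.get_nodes().size() == 21);  // 1 bottleneck + 20 dependents

    // Count edges pointing at the bottleneck node
    size_t fan_in = 0;
    for (const auto& edge : graph.get_edges()) {
        if (edge->target == "bottleneck") {
            ++fan_in;
        }
    }
    REQUIRE(fan_in == 20);
}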

Test Helpers

/**
 * @brief Verify graph structure matches expected
 */
bool verify_graph_structure(
    const Graph& graph,
    size_t expected_nodes,
    size_t expected_edges
) {
    return graph.get_nodes().size() == expected_nodes &&
           graph.get_edges().size() == expected_edges;
}

/**
 * @brief Check if file exists and has content
 */
bool verify_file_output(const std::filesystem::path& filepath) {
    return std::filesystem::exists(filepath) &&
           std::filesystem::file_size(filepath) > 0;
}

/**
 * @brief Compare two graphs for equality
 */
bool graphs_are_equal(const Graph& g1, const Graph& g2) {
    if (g1.get_nodes().size() != g2.get_nodes().size()) return false;
    if (g1.get_edges().size() != g2.get_edges().size()) return false;

    // Check all nodes exist in both
    for (const auto& [id, node] : g1.get_nodes()) {
        if (!g2.get_node(id)) return false;
    }

    // Check all edges exist in both
    for (const auto& edge : g1.get_edges()) {
        if (!g2.has_edge(edge->source, edge->target)) return false;
    }

    return true;
}
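
As a usage sketch combining these helpers, assuming JsonExporter returns a string (as in the examples above) and the output file is written with standard iostreams:

#include <fstream>

TEST_CASE("Exported JSON is written to disk", "[integration][helpers]") {
    Graph graph = create_minimal_graph();
    REQUIRE(verify_graph_structure(graph, 2, 1));

    // Serialize and write to a temporary location
    JsonExporter exporter;
    const auto out = std::filesystem::temp_directory_path() / "minimal_graph.json";
    {
        std::ofstream file(out);
        file << exporter.export_graph(graph);
    }

    REQUIRE(verify_file_output(out));
    std::filesystem::remove(out);  // Teardown: remove the temporary file
}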

🎯 Coverage Targets

Functional

  • Graph Construction: 100% (all load paths)
  • Visualization: 95% (all layouts + renderers)
  • Path Analysis: 100% (all algorithms)
  • Cycle Detection: 100% (including edge cases)
  • Metrics: 90% (all main metrics)
  • Filtering: 95% (all filter types)
  • Diff: 90% (all change types)
  • Export: 95% (all formats)
  • Query: 95% (full API)

Integration

  • Cross-Module: 80% (main interactions)
  • End-to-End: 75% (critical workflows)
  • Error Handling: 85% (common error cases)

📈 Performance Baselines

Expected Performance (500 nodes, 1500 edges)

Operation            Target    Current
Load Catalog         < 50ms    TBD
Build Graph          < 100ms   TBD
Topological Sort     < 150ms   TBD
Export DOT           < 12ms    TBD
Export JSON          < 8ms     TBD
Calculate Metrics    < 300ms   TBD
Query (complex)      < 50ms    TBD
Full Workflow        < 1s      TBD

Memory Usage

Graph Size    Expected   Current
100 nodes     < 1 MB     TBD
500 nodes     < 5 MB     TBD
1000 nodes    < 10 MB    TBD

🔄 CI/CD Integration

GitHub Actions Workflow

name: Integration Tests

on: [push, pull_request]

jobs:
  integration:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v2

      - name: Install Dependencies
        run: |
          sudo apt-get update
          sudo apt-get install -y catch2

      - name: Build Tests
        run: |
          mkdir build
          cd build
          cmake ..
          make test_integration

      - name: Run Integration Tests
        run: |
          cd build
          # Catch2 JUnit reporter, so the artifact step below finds build/test_*.xml
          ./test_integration --reporter junit --out test_integration.xml

      - name: Run Performance Tests
        run: |
          cd build
          ./test_performance --benchmark

      - name: Upload Test Results
        uses: actions/upload-artifact@v2
        with:
          name: test-results
          path: build/test_*.xml

🐛 Known Issues & Workarounds

Issue #1: File System Timing

Problem: Live monitor tests can fail in CI due to timing.

Workaround:

// Increase delays in CI
#ifdef CI_ENVIRONMENT
    std::this_thread::sleep_for(std::chrono::seconds(2));
#else
    std::this_thread::sleep_for(std::chrono::milliseconds(500));
#endif

Issue #2: Large Graph Generation

Problem: Generating large graphs can be slow.

Workaround:

// Pre-generate the large graph once and cache it across tests
static Graph& cached_large_graph() {
    static Graph large_graph_cache;
    static std::once_flag once;

    std::call_once(once, []() {
        large_graph_cache = generate_synthetic_graph(500, 1500);
    });

    return large_graph_cache;
}

📋 Test Checklist

Before each release, verify:

  • All unit tests pass
  • All integration tests pass
  • Performance has not regressed (< 10% degradation)
  • All export formats work
  • The live monitor detects changes correctly
  • Documentation builds without errors
  • No memory leaks (valgrind)
  • Coverage > 80% on new code

🎓 Best Practices

  1. Fixtures Over Inline Data: Use real JSON files
  2. One Assertion Per Test: Makes debugging easier
  3. Descriptive Names: test_query_filters_by_cpu_correctly
  4. Setup/Teardown: Clean up temporary files (see the sketch after this list)
  5. Deterministic Tests: No dependence on timing or randomness
  6. Fast Tests: Sub-second for individual tests
  7. Isolated Tests: No dependencies between tests
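
For point 4, one common pattern is a small RAII helper (hypothetical, not part of the existing utilities) that creates a working directory and removes it, along with everything written into it, when the test scope ends:

#include <filesystem>

// Hypothetical RAII helper: creates a test directory under the system temp
// path and deletes it together with its contents on scope exit
class ScopedTempDir {
public:
    ScopedTempDir()
        : path_(std::filesystem::temp_directory_path() / "dgv_test_tmp") {
        std::filesystem::create_directories(path_);
    }
    ~ScopedTempDir() { std::filesystem::remove_all(path_); }

    const std::filesystem::path& path() const { return path_; }

private:
    std::filesystem::path path_;
};

TEST_CASE("Exports are cleaned up automatically", "[integration][helpers]") {
    ScopedTempDir tmp;

    BatchExporter batch;
    batch.add_formats({ExportFormat::DOT, ExportFormat::JSON});
    auto results = batch.export_all(load_test_graph(), (tmp.path() / "out").string());

    for (const auto& result : results) {
        REQUIRE(result.success);
    }
    // tmp's destructor removes everything written under tmp.path()
}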

Part of: 05_02_DEPENDENCY_GRAPH - Dependency Graph Visualizer
Requires: All subsystems (TAREA 1-11)
Exports: Complete test suite