🧪 AudioLab Testing Cheat Sheet¶
Quick reference for all testing commands
📋 Table of Contents¶
- CTest Commands
- Catch2 Test Filters
- Test Runner Scripts
- CMake Test Commands
- Debugging Tests
- Performance Testing
- Golden File Testing
- Common Workflows
CTest Commands¶
Basic Usage¶
# Run all tests
ctest
# Run with specific configuration
ctest -C Debug
ctest -C Release
# Run with verbose output
ctest --verbose
ctest -V
# Show only failed test output
ctest --output-on-failure
# Run in parallel (4 jobs)
ctest -j 4
Test Filtering¶
# Run tests matching regex
ctest -R "test_example.*" # Tests starting with test_example
ctest -R "unit_*" # All unit tests
ctest -R "benchmark_*" # All benchmarks
# Exclude tests matching regex
ctest -E "benchmark_*" # Skip benchmarks
ctest -E "slow_*" # Skip slow tests
# Combine include and exclude
ctest -R "test_.*" -E "benchmark" # All tests except benchmarks
Test Execution Control¶
# Run only failed tests from last run
ctest --rerun-failed
# Stop on first failure
ctest --stop-on-failure
# Set timeout (seconds)
ctest --timeout 60 # Kill tests after 60s
# List all tests without running
ctest -N
ctest --show-only
Output Formats¶
# Generate JUnit XML (for CI)
ctest --output-junit test_results.xml
# Show per-test timing (CTest prints each test's time; verbose adds detail)
ctest --verbose
# Quiet mode (suppress stdout; results still go to the test log)
ctest --quiet
Catch2 Test Filters¶
Running Test Binaries Directly¶
# Windows
.\build\tests\Debug\test_example_processor.exe
# macOS/Linux
./build/tests/test_example_processor
Tag-Based Filtering¶
# Run only unit tests
./test_example_processor "[unit]"
# Run only integration tests
./test_example_processor "[integration]"
# Run only benchmarks
./test_example_processor "[benchmark]"
# Run only real-time safety tests
./test_example_processor "[rt-safety]"
# Run quality tests
./test_example_processor "[quality]"
# Combine tags (AND)
./test_example_processor "[unit][audio]"
# Combine tags (OR)
./test_example_processor "[unit],[integration]"
# Exclude tag
./test_example_processor "~[benchmark]" # Everything except benchmarks
Test Case Name Filtering¶
# Run specific test case
./test_example_processor "SimpleProcessor processes audio correctly"
# Wildcard matching
./test_example_processor "*Processor*" # Any test with "Processor"
./test_example_processor "Simple*" # Tests starting with "Simple"
# Run a specific section within a test case (-c / --section)
./test_example_processor "SimpleProcessor processes audio correctly" -c "Audio Processing"
Listing Tests¶
# List all test cases
./test_example_processor --list-tests
./test_example_processor -l
# List all tags
./test_example_processor --list-tags
./test_example_processor -t
# List all test cases with tags
./test_example_processor --list-tests --verbosity high
Benchmark Options¶
# Run benchmarks with output
./test_example_processor "[benchmark]" -s
# Run benchmarks with detailed timing
./test_example_processor "[benchmark]" --benchmark-samples 100
# Run benchmarks with warmup (time in milliseconds)
./test_example_processor "[benchmark]" --benchmark-warmup-time 1000
# Save benchmark results to file
./test_example_processor "[benchmark]" -s -o benchmark_results.txt
Verbosity and Output¶
# Show successful assertions
./test_example_processor -s
# High verbosity (all messages)
./test_example_processor --verbosity high
# Minimal output (only failures)
./test_example_processor --verbosity quiet
# No color output
./test_example_processor --colour-mode none
# Reporter options
./test_example_processor -r console # Console reporter (default)
./test_example_processor -r xml # XML output
./test_example_processor -r junit # JUnit XML
./test_example_processor -r compact # Compact format
Test Runner Scripts¶
Windows (PowerShell)¶
# Run all tests
.\scripts\run_tests.ps1
# Run with specific build directory
.\scripts\run_tests.ps1 -BuildDir "build"
# Run with specific configuration
.\scripts\run_tests.ps1 -Config Release
# Run with filter
.\scripts\run_tests.ps1 -Filter "test_example*"
# Combine options
.\scripts\run_tests.ps1 -BuildDir "build" -Config Debug -Filter "*unit*"
macOS/Linux (Bash)¶
# Run all tests
./scripts/run_tests.sh
# Run with specific build directory
./scripts/run_tests.sh --build-dir build
# Run with specific configuration
./scripts/run_tests.sh --config Release
# Run with filter
./scripts/run_tests.sh --filter "test_example*"
CMake Test Commands¶
Building Tests¶
# Build all tests (default target; works with every generator)
cmake --build build
# Build specific test
cmake --build build --target test_example_processor
# Build with specific config
cmake --build build --config Debug
cmake --build build --config Release
# Parallel build (4 jobs)
cmake --build build -j 4
# Clean and rebuild
cmake --build build --clean-first
Configuring Tests¶
# Enable testing during configuration
cmake -S . -B build -DBUILD_TESTING=ON
# Disable tests
cmake -S . -B build -DBUILD_TESTING=OFF
# Enable sanitizers (for debugging)
cmake -S . -B build -DENABLE_ASAN=ON # AddressSanitizer
cmake -S . -B build -DENABLE_TSAN=ON # ThreadSanitizer
cmake -S . -B build -DENABLE_UBSAN=ON # UndefinedBehaviorSanitizer
# Enable coverage
cmake -S . -B build -DENABLE_COVERAGE=ON
Debugging Tests¶
Running Under Debugger¶
# Windows (Visual Studio)
devenv /debugexe build\tests\Debug\test_example_processor.exe
# Windows (WinDbg)
windbg build\tests\Debug\test_example_processor.exe
# macOS (LLDB)
lldb ./build/tests/test_example_processor
(lldb) run "[unit]"
(lldb) breakpoint set --file test_example_processor.cpp --line 58
# Linux (GDB)
gdb ./build/tests/test_example_processor
(gdb) run "[unit]"
(gdb) break test_example_processor.cpp:58
Catching Specific Failures¶
# Break on assertion failure (Catch2)
./test_example_processor --break
# Run single test case
./test_example_processor "Gain applies correctly"
# Run with debugger attached
gdb --args ./test_example_processor "[unit]"
Memory Debugging¶
# AddressSanitizer (detect memory errors)
cmake -S . -B build -DENABLE_ASAN=ON
cmake --build build
./build/tests/test_example_processor
# Valgrind (Linux)
valgrind --leak-check=full ./build/tests/test_example_processor
# Dr. Memory (Windows)
drmemory -- build\tests\Debug\test_example_processor.exe
Finding Flaky Tests¶
# Run test repeatedly (Windows)
.\03_04_05_troubleshooting\run_test_repeatedly.ps1 -TestName "test_example_processor" -Iterations 100
# Run test repeatedly (manual)
for i in {1..100}; do ./test_example_processor || break; done
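# Run test repeatedly (manual, hypothetical PowerShell equivalent)
for ($i = 0; $i -lt 100; $i++) { .\test_example_processor.exe; if ($LASTEXITCODE -ne 0) { break } }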
Performance Testing¶
Running Benchmarks¶
# Run all benchmarks
./test_example_processor "[benchmark]" -s
# Run with more samples (higher accuracy)
./test_example_processor "[benchmark]" --benchmark-samples 1000
# Run with warmup (time in milliseconds)
./test_example_processor "[benchmark]" --benchmark-warmup-time 2000
# Save results to file
./test_example_processor "[benchmark]" -s > benchmark_results.txt
Regression Detection¶
# Windows
.\03_04_03_performance_testing\regression_detection.ps1
# macOS/Linux
./03_04_03_performance_testing/regression_detection.sh
Profiling¶
# Windows - Intel VTune
vtune -collect hotspots -- test_example_processor.exe
# macOS - Instruments (the instruments CLI is deprecated; newer Xcode uses xctrace)
xctrace record --template "Time Profiler" --launch -- ./test_example_processor
# Linux - perf
perf record ./test_example_processor "[benchmark]"
perf report
# Linux - Valgrind Callgrind
valgrind --tool=callgrind ./test_example_processor "[benchmark]"
callgrind_annotate callgrind.out.*
Golden File Testing¶
Comparing Audio Files¶
// In test code
#include <catch2/catch_test_macros.hpp>
#include "golden_file_manager.hpp"

TEST_CASE("Output matches golden reference", "[quality]") {
    GoldenFileManager golden("../golden_files");
    // Load input
    auto input = golden.load_wav("input.wav");
    // Process (process() is the project function under test)
    auto output = process(input);
    // Compare with golden (1% tolerance)
    REQUIRE(golden.compare_wav("expected_output.wav", output, 0.01));
}
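The third argument is the per-sample tolerance. Conceptually, the comparison works along these lines (illustrative sketch only, not the actual GoldenFileManager implementation):
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical helper: true if every sample differs by at most `tolerance`
// (absolute difference, assuming full-scale float samples in [-1, 1])
bool samples_match(const std::vector<float>& expected,
                   const std::vector<float>& actual,
                   float tolerance) {
    if (expected.size() != actual.size()) return false;
    for (std::size_t i = 0; i < expected.size(); ++i) {
        if (std::fabs(expected[i] - actual[i]) > tolerance) return false;
    }
    return true;
}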
Managing Golden Files¶
# Add new golden file
git lfs track "*.wav"
git add golden_files/new_reference.wav
git commit -m "Add golden file for reverb test"
# Update existing golden file
# 1. Generate new output
# 2. Listen and verify
# 3. Replace old golden file
cp output.wav golden_files/reference.wav
git add golden_files/reference.wav
git commit -m "Update golden file: improved reverb algorithm"
# List golden files
ls golden_files/*.wav
git lfs ls-files
Common Workflows¶
Quick Smoke Test (< 10 seconds)¶
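A reasonable recipe, using the unit tag from above:
./test_example_processor "[unit]"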
Pre-Commit Test (< 1 minute)¶
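A reasonable recipe: rebuild, then run everything except benchmarks:
cmake --build build
ctest -E "^benchmark_" --output-on-failure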
Full Test Suite (1-2 minutes)¶
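A reasonable recipe: the whole suite, in parallel, showing failures:
cmake --build build
ctest -C Debug -j 4 --output-on-failure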
Performance Validation (5-10 minutes)¶
# Run benchmarks and check for regressions
./test_example_processor "[benchmark]" -s
./03_04_03_performance_testing/regression_detection.sh
Quality Assurance (10-20 minutes)¶
# Run all tests including quality tests
./test_example_processor "[unit],[integration],[quality]" -s
# Run with sanitizers
cmake -S . -B build-asan -DENABLE_ASAN=ON
cmake --build build-asan
cd build-asan
ctest -C Debug --output-on-failure
CI Pipeline Simulation¶
# Mimic what CI does
cmake -S . -B build -DCMAKE_BUILD_TYPE=Release -DENABLE_COVERAGE=ON
cmake --build build -j 4
cd build
ctest -j 4 --output-junit test_results.xml
ctest -T Coverage
Keyboard Shortcuts (Interactive Mode)¶
When running Catch2 tests interactively:
- Ctrl+C - Abort the test run
- Ctrl+Z - Suspend test (Unix)
- Ctrl+Break - Force quit (Windows)
Environment Variables¶
# Catch2 reads no environment variables: colour is a runtime CLI flag
# (--colour-mode none) and console width is a compile-time define
# (CATCH_CONFIG_CONSOLE_WIDTH)
# CTest behavior
export CTEST_PARALLEL_LEVEL=4 # Default parallel jobs
export CTEST_OUTPUT_ON_FAILURE=1 # Always show failures
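If you do want a different console width, it is baked in when the test binary is compiled, for example (assuming a CMake target named test_example_processor):
target_compile_definitions(test_example_processor PRIVATE CATCH_CONFIG_CONSOLE_WIDTH=120)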
Exit Codes¶
CTest¶
- 0 - All tests passed
- 1 - Some tests failed
- 2 - Error in test execution (e.g., test crashed)
Catch2¶
- 0 - All tests passed
- N - Number of failed tests (clamped to 255)
- Invalid command-line arguments also produce a non-zero exit code
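A quick way to see the exit code in a shell:
./test_example_processor "[unit]"
echo "Exit code: $?" # number of failed tests, per the list above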
Tips & Tricks¶
Speed Up Test Runs¶
# Run only changed tests (manual tracking)
ctest -R "test_my_new_feature"
# Use parallel execution
ctest -j $(nproc) # Linux
ctest -j $(sysctl -n hw.ncpu) # macOS
ctest -j $env:NUMBER_OF_PROCESSORS # Windows PowerShell
# Skip slow tests during development
ctest -E "benchmark|slow"
Better Test Output¶
# Page through long output, preserving colour codes (Unix)
ctest --output-on-failure 2>&1 | less -R
# Save test results
ctest --output-on-failure 2>&1 | tee test_results.log
# Count test results
ctest -N | grep "Total Tests:"
Continuous Testing (Watch Mode)¶
# Linux/macOS (using entr)
ls tests/*.cpp | entr -c sh -c 'cmake --build build && ctest'
# Linux (using inotifywait)
while true; do
  inotifywait -r -e modify tests/
  cmake --build build && ctest
done
Resources¶
- README.md - Full testing framework documentation
- QUICK_START.md - 5-minute tutorial
- Catch2 CLI Reference
- CTest Documentation
Quick Copy-Paste Commands¶
# Run all tests (quick)
ctest -C Debug --output-on-failure
# Run unit tests only
./test_example_processor "[unit]"
# Run benchmarks
./test_example_processor "[benchmark]" -s
# Debug failed test
gdb --args ./test_example_processor "Failing Test Name"
# Check for regressions
./03_04_03_performance_testing/regression_detection.sh
# Find flaky test
for i in {1..100}; do ./test_example_processor || break; done
Master these commands and testing becomes effortless! 🎧