Comprehensive testing suite for the Heimdall RF source localization system.
The suite is organized as follows:
```
services/
├── training/
│   └── tests/
│       ├── unit/               # Unit tests
│       ├── integration/        # Integration tests
│       │   └── test_synthetic_pipeline_integration.py
│       ├── test_feature_extractor_basic.py
│       ├── test_iq_generator.py
│       ├── test_performance.py # Performance benchmarks
│       └── conftest.py
├── backend/
│   └── tests/
│       ├── unit/
│       │   ├── test_batch_feature_extraction.py
│       │   └── test_feature_extraction_task_extended.py
│       ├── integration/
│       └── e2e/
└── common/
    └── tests/
```
```bash
# Run all tests
pytest

# Run tests for a specific service
cd services/training && pytest
cd services/backend && pytest

# Run with verbose output
pytest -v

# Run a specific test file
pytest services/training/tests/test_performance.py -v
```

```bash
# Unit tests only (fast)
pytest -m unit

# Integration tests (requires Docker services)
pytest -m integration

# Performance benchmarks
pytest -m performance

# Exclude slow tests
pytest -m "not slow"

# Performance tests, excluding slow ones
pytest -m "performance and not slow"
```

```bash
# Generate coverage report
pytest --cov=services --cov-report=html --cov-report=term-missing

# View HTML report
xdg-open htmlcov/index.html  # Linux
open htmlcov/index.html      # macOS
```

```bash
# Training service tests
cd services/training
pytest -v

# With coverage
pytest --cov=src --cov-report=html

# Only performance tests
pytest tests/test_performance.py -v -m performance

# Backend service tests
cd services/backend
pytest tests/unit/ -v
```
File: `services/training/tests/test_feature_extractor_basic.py`

Tests RF feature extraction from IQ samples:

```bash
pytest services/training/tests/test_feature_extractor_basic.py -v
```

File: `services/training/tests/test_iq_generator.py`

Tests synthetic IQ sample generation:

```bash
pytest services/training/tests/test_iq_generator.py -v
```

File: `services/training/tests/integration/test_synthetic_pipeline_integration.py`

Tests the full synthetic data generation pipeline:

```bash
pytest services/training/tests/integration/ -v -m integration
```

File: `services/backend/tests/unit/test_feature_extraction_task_extended.py`

Tests feature extraction from real recordings:

```bash
pytest services/backend/tests/unit/test_feature_extraction_task_extended.py -v
```

File: `services/backend/tests/unit/test_batch_feature_extraction.py`

Tests background batch processing:

```bash
pytest services/backend/tests/unit/test_batch_feature_extraction.py -v
```
File: `services/training/tests/test_performance.py`

Performance tests with strict targets:

| Test | Target | Expected | Command |
|---|---|---|---|
| IQ Generation | <50 ms | ~30 ms | `pytest -k test_iq_generation_performance` |
| Feature Extraction | <100 ms | ~60 ms | `pytest -k test_feature_extraction_performance` |
| End-to-End | <150 ms | ~90 ms | `pytest -k test_end_to_end_performance` |
| Batch (3 RX) | <500 ms | ~300 ms | `pytest -k test_batch_generation_performance` |
```bash
# Run all performance tests
pytest services/training/tests/test_performance.py -v -m performance

# Run a specific performance test
pytest -k test_iq_generation_performance -v

# Run without slow tests
pytest -m "performance and not slow" -v
```

Performance tests print detailed timing statistics:

```
IQ Generation Performance:
  Average: 28.45 ms
  StdDev:  3.12 ms
  Min:     24.10 ms
  Max:     35.20 ms
```
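Statistics in this shape can be produced with a small helper like the one below. This is a sketch, not the project's actual reporting code; the helper name `report_timing` is illustrative:

```python
from statistics import mean, stdev


def report_timing(label: str, times_s: list[float]) -> dict[str, float]:
    """Summarize per-iteration timings (in seconds) as millisecond statistics."""
    times_ms = [t * 1000 for t in times_s]
    stats = {
        "avg": mean(times_ms),
        "stddev": stdev(times_ms) if len(times_ms) > 1 else 0.0,
        "min": min(times_ms),
        "max": max(times_ms),
    }
    # Print in the same layout the performance tests use
    print(f"{label}:")
    print(f"  Average: {stats['avg']:.2f} ms")
    print(f"  StdDev:  {stats['stddev']:.2f} ms")
    print(f"  Min:     {stats['min']:.2f} ms")
    print(f"  Max:     {stats['max']:.2f} ms")
    return stats
```

Returning the raw statistics alongside the printout lets the test both log the numbers and assert against the target.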
Tests are categorized using pytest markers:

```python
@pytest.mark.unit         # Fast unit test
@pytest.mark.integration  # Requires Docker services
@pytest.mark.performance  # Performance benchmark
@pytest.mark.slow         # Long-running test
@pytest.mark.asyncio      # Async test
@pytest.mark.e2e          # End-to-end test
```
"""
Unit tests for [component name].
"""
import pytest
from src.module import Component
@pytest.mark.unit
def test_component_basic_functionality():
"""Test basic component operation."""
component = Component()
result = component.do_something()
assert result is not None
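Unit tests that cover several input/output pairs can use `pytest.mark.parametrize` instead of duplicating test bodies. A minimal sketch; the function under test, `clamp`, is a stand-in, not part of the codebase:

```python
import pytest


def clamp(value: float, low: float, high: float) -> float:
    """Stand-in function under test: restrict value to [low, high]."""
    return max(low, min(high, value))


@pytest.mark.unit
@pytest.mark.parametrize(
    "value, expected",
    [(-1.0, 0.0), (0.5, 0.5), (2.0, 1.0)],
)
def test_clamp(value, expected):
    # One test function, run once per (value, expected) pair
    assert clamp(value, 0.0, 1.0) == expected
```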
"""
Integration tests for [feature name].
"""
import pytest
@pytest.mark.integration
@pytest.mark.asyncio
async def test_database_integration(test_db_pool):
"""Test database operations."""
async with test_db_pool.acquire() as conn:
result = await conn.fetchval("SELECT 1")
assert result == 1
"""
Performance tests for [component name].
"""
import pytest
import time
from statistics import mean
@pytest.mark.performance
def test_operation_performance():
"""Benchmark operation speed."""
times = []
for _ in range(100):
start = time.perf_counter()
# Operation to benchmark
result = expensive_operation()
elapsed = time.perf_counter() - start
times.append(elapsed)
avg_time = mean(times) * 1000 # ms
print(f"\nAverage: {avg_time:.2f} ms")
# Assert performance target
assert avg_time < 50.0, f"Too slow: {avg_time:.2f}ms"
Root configuration (`pytest.ini`):

```ini
[pytest]
testpaths = services
python_files = test_*.py
markers =
    unit: Unit tests
    integration: Integration tests
    performance: Performance benchmarks
    slow: Slow tests
asyncio_mode = auto
```
Training service configuration (`services/training/pytest.ini`):

```ini
[pytest]
testpaths = tests
markers =
    unit: Unit tests
    integration: Integration tests
    performance: Performance benchmarks
    slow: Slow tests
```
Check coverage:

```bash
pytest --cov=src --cov-report=term-missing --cov-fail-under=80
```
Tests also run automatically in CI.
Import errors when running tests:

```bash
# Ensure PYTHONPATH is set
export PYTHONPATH=/path/to/heimdall/services/training/src:$PYTHONPATH

# Or rely on conftest.py (already configured)
```
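The `conftest.py` approach typically works by prepending the service's `src/` directory to `sys.path` before tests import anything. A minimal sketch of that idea; the exact contents of the project's `conftest.py` may differ, and `add_src_to_path` is an illustrative name:

```python
import sys
from pathlib import Path


def add_src_to_path(conftest_file: str) -> str:
    """Prepend the src/ directory next to a tests/ tree to sys.path.

    Given tests/conftest.py, this resolves ../src relative to it so that
    `import module` finds the service's source without PYTHONPATH tweaks.
    """
    src_dir = Path(conftest_file).resolve().parent.parent / "src"
    if str(src_dir) not in sys.path:
        sys.path.insert(0, str(src_dir))
    return str(src_dir)


# In conftest.py this would be called as: add_src_to_path(__file__)
```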
Some integration tests require Docker services:

```bash
# Start services
docker-compose up -d

# Run integration tests
pytest -m integration

# Stop services
docker-compose down
```

Performance tests may time out on slow machines:

```bash
# Skip slow tests
pytest -m "performance and not slow"

# Increase the timeout
pytest --timeout=300
```
Each test should be independent:

```python
@pytest.fixture
def clean_database():
    """Ensure a clean database state."""
    # Setup
    yield
    # Teardown
```
Prefer descriptive test names:

```python
# Good
def test_generate_single_sample_multiple_receivers(): ...

# Bad
def test_generation(): ...
```

Prefer explicit assertions:

```python
# Good
assert result == expected_value

# Bad
assert result
```

Use fixtures for shared setup:

```python
@pytest.fixture
def sample_config():
    return {"frequency_mhz": 144.0}
```
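pytest injects a fixture's return value into any test that names it as a parameter. A minimal sketch of how `sample_config` would be consumed; the test name here is illustrative:

```python
import pytest


@pytest.fixture
def sample_config():
    return {"frequency_mhz": 144.0}


@pytest.mark.unit
def test_uses_sample_config(sample_config):
    # pytest matches the parameter name to the fixture and passes its value
    assert sample_config["frequency_mhz"] == 144.0
```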
Combine markers where appropriate:

```python
@pytest.mark.unit
@pytest.mark.performance
def test_fast_operation():
    pass
```

Document intent with docstrings:

```python
def test_feature_extraction():
    """Test that features are extracted correctly from IQ samples."""
    pass
```
Training service targets:

| Operation | Target | Status |
|---|---|---|
| IQ Generation | <50 ms | ✅ |
| Feature Extraction | <100 ms | ✅ |
| End-to-End | <150 ms | ✅ |
| 10k Samples (24 cores) | <3 min | ✅ |

Backend service targets:

| Operation | Target | Status |
|---|---|---|
| Recording Feature Extraction | <2 s | ✅ |
| Batch Processing (50 recordings) | <30 s | ✅ |