# Test Framework Documentation
This document describes the unit testing framework setup for the Wegent project.
## Overview
The project includes comprehensive unit testing support across all modules:
- Backend (FastAPI): pytest + pytest-asyncio + pytest-cov + pytest-mock
- Executor (AI Agent Engine): pytest + pytest-mock + pytest-asyncio
- Executor Manager (Task Management): pytest + pytest-mock + pytest-cov
- Shared (Utilities): pytest + pytest-cov
- Frontend (Next.js + React 19): Jest + @testing-library/react
## Current Test Coverage

### Backend (`backend/`)

- ✅ Core security: authentication, JWT tokens, password hashing
- ✅ Configuration management
- ✅ Exception handling
- ✅ User service and models
- ✅ GitHub repository provider
- ⏳ API endpoints (placeholder directory exists)

### Executor (`executor/`)

- ✅ Agent factory
- ✅ Base agent classes
- ✅ Mocked AI client interactions (Anthropic, OpenAI)

### Executor Manager (`executor_manager/`)

- ✅ Base executor classes
- ✅ Task dispatcher
- ✅ Docker executor and utilities
- ✅ Docker constants and configuration

### Shared (`shared/`)

- ✅ Cryptography utilities
- ✅ Sensitive data masking (tokens, API keys, etc.)

### Frontend (`frontend/`)

- ⏳ Component tests (basic setup in place)
- ⏳ Hook tests
- ⏳ Utility tests
## Test Coverage Goals
- Target: 40-60% code coverage initially
- Priority: Core business logic and critical paths
- Strategy: Incremental coverage improvement
## Backend Tests

### Running Tests

```bash
cd backend
pytest                    # Run all tests
pytest tests/core/        # Run core tests only
pytest --cov=app          # Run with coverage report
pytest -v                 # Verbose output
pytest -k test_security   # Run tests matching a pattern
pytest -m unit            # Run only unit tests
pytest -m integration     # Run only integration tests
```
### Test Structure

```text
backend/tests/
├── conftest.py               # Global test fixtures
├── core/                     # Core infrastructure tests
│   ├── test_security.py      # Authentication & JWT tests
│   ├── test_config.py        # Configuration tests
│   └── test_exceptions.py    # Exception handler tests
├── services/                 # Service layer tests
│   └── test_user_service.py  # User service tests
├── models/                   # Data model tests
│   └── test_user_model.py    # User model tests
├── repository/               # Repository integration tests
│   └── test_github_provider.py
└── api/                      # API endpoint tests (placeholder)
```
### Test Configuration

The backend uses `pytest.ini` for configuration with the following settings:

```ini
[pytest]
testpaths = tests
python_files = test_*.py
python_classes = Test*
python_functions = test_*
addopts =
    -v
    --strict-markers
    --cov=app
    --cov-report=term-missing
    --cov-report=html
    --cov-report=xml
asyncio_mode = auto
markers =
    unit: Unit tests
    integration: Integration tests
    slow: Slow running tests
```
### Key Fixtures

- `test_db`: SQLite in-memory database session (function scope)
- `test_settings`: Test settings with overridden values
- `test_user`: Test user instance
- `test_admin_user`: Test admin user instance
- `test_inactive_user`: Inactive test user instance
- `test_token`: Valid JWT token for the test user
- `test_admin_token`: Valid JWT token for the admin user
- `test_client`: FastAPI test client with database override
- `mock_redis`: Mocked Redis client
## Executor Tests

### Running Tests

```bash
cd executor
pytest tests/ --cov=agents
```
### Test Structure

```text
executor/tests/
├── conftest.py   # Executor-specific fixtures
└── agents/       # Agent tests
```
### Key Fixtures

- `mock_anthropic_client`: Mocked Anthropic API client for testing Claude models
- `mock_openai_client`: Mocked OpenAI API client for testing GPT models
- `mock_callback_client`: Mocked callback HTTP client for agent responses
- `suppress_resource_warnings`: Session-scoped fixture that suppresses ResourceWarning messages
- `cleanup_logging`: Session-scoped fixture that cleans up logging handlers and prevents daemon-thread errors
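The fixture implementations themselves aren't reproduced here. As a rough, non-authoritative sketch of what a test built on `mock_anthropic_client` might look like, a `MagicMock` can stand in for the SDK client; the `messages.create` attribute path mirrors the Anthropic SDK, and the `run_agent` helper is an invented stand-in for real agent code:

```python
from unittest.mock import MagicMock


def make_mock_anthropic_client() -> MagicMock:
    """Build a stand-in for the Anthropic SDK client.

    Calls to `messages.create(...)` return a canned response object
    instead of contacting the real API.
    """
    client = MagicMock()
    response = MagicMock()
    response.content = [MagicMock(text="mocked completion")]
    client.messages.create.return_value = response
    return client


def run_agent(client) -> str:
    """Toy agent logic: ask the model and return the first text block."""
    resp = client.messages.create(
        model="claude-3-5-sonnet",  # never actually contacted
        max_tokens=64,
        messages=[{"role": "user", "content": "ping"}],
    )
    return resp.content[0].text


client = make_mock_anthropic_client()
assert run_agent(client) == "mocked completion"
# The mock also lets the test verify *how* the client was called:
client.messages.create.assert_called_once()
```

Beyond stubbing the return value, the mock records every call, so tests can assert on the model name, token limits, or message payload the agent actually sent.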
## Executor Manager Tests

### Running Tests

```bash
cd executor_manager
pytest tests/ --cov=executors
```
### Key Fixtures

- `mock_docker_client`: Mocked Docker SDK client for container operations
- `mock_executor_config`: Mock executor configuration with image, CPU, memory, and network settings
### Test Structure

```text
executor_manager/tests/
├── conftest.py   # Executor manager fixtures
└── executors/    # Executor tests
    ├── test_base.py
    ├── test_dispatcher.py
    ├── test_docker_executor.py
    ├── test_docker_utils.py
    └── test_docker_constants.py
```
## Shared Tests

### Running Tests

```bash
cd shared
pytest tests/ --cov=utils
```
### Test Structure

```text
shared/tests/
└── utils/
    ├── test_crypto.py                 # Encryption/decryption tests
    └── test_sensitive_data_masker.py  # Sensitive data masking tests
```
### Key Features Tested

- Cryptography: encryption and decryption of sensitive data (Git tokens, API keys)
- Data masking: automatic masking of sensitive information in logs and outputs
  - GitHub tokens (`github_pat_*`)
  - Anthropic API keys (`sk-ant-api03-*`)
  - OpenAI API keys
  - Generic API keys and secrets
  - File path protection (no false positives)
  - URL protection (no false positives)
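The masker's real rules live in `shared/utils` and aren't shown here; as an illustrative sketch only, the idea can be reduced to a list of credential-shaped regexes applied to outgoing text (the exact patterns below are assumptions, not the project's):

```python
import re

# Illustrative patterns only -- the real masker's rules may differ.
_PATTERNS = [
    re.compile(r"github_pat_[A-Za-z0-9_]+"),     # GitHub fine-grained tokens
    re.compile(r"sk-ant-api03-[A-Za-z0-9_-]+"),  # Anthropic API keys
    re.compile(r"sk-[A-Za-z0-9]{20,}"),          # OpenAI-style keys
]


def mask_sensitive(text: str) -> str:
    """Replace anything that looks like a credential with a fixed marker."""
    for pattern in _PATTERNS:
        text = pattern.sub("***MASKED***", text)
    return text


assert mask_sensitive("auth with github_pat_ABC123xyz failed") == \
    "auth with ***MASKED*** failed"
# File paths contain no token-shaped substrings, so they pass through intact,
# which is what the "no false positives" tests above check for:
assert mask_sensitive("/var/log/app.log") == "/var/log/app.log"
```

The false-positive tests matter as much as the masking tests: an over-eager pattern that mangles file paths or URLs in logs is its own bug.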
## Frontend Tests

### Running Tests

```bash
cd frontend
npm test               # Run all tests
npm run test:watch     # Watch mode
npm run test:coverage  # With coverage report
```
### Test Structure

```text
frontend/src/__tests__/
├── utils/       # Utility function tests
├── hooks/       # React hooks tests
└── components/  # Component tests
```
## Continuous Integration

### GitHub Actions Workflow

The `.github/workflows/test.yml` workflow runs automatically on:

- Pushes to the `main`, `master`, or `develop` branches
- Pull requests targeting these branches
### Workflow Jobs

- `test-backend`: Python backend tests
  - Matrix strategy: Python 3.10 and 3.11
  - Coverage reports uploaded to Codecov
  - Dependency caching for faster builds
- `test-executor`: Executor engine tests
  - Python 3.10
  - Coverage for the agents module
  - Tests the AI agent factory and base classes
- `test-executor-manager`: Task manager tests
  - Python 3.10
  - Coverage for the executors module
  - Tests the Docker executor and dispatcher
- `test-shared`: Shared utilities tests
  - Python 3.10
  - Coverage for the utils module
  - Tests cryptography and data masking
- `test-frontend`: Frontend tests (Node.js 18.x)
  - Jest with React Testing Library
  - Runs with the `--passWithNoTests` flag
  - Coverage uploaded to Codecov
- `test-summary`: Aggregate results
  - Depends on all test jobs
  - Fails if any test job fails
  - Always runs regardless of individual job status
### Coverage Reports
Coverage reports are automatically uploaded to Codecov (if configured).
## Mocking Strategy

### External APIs

- GitHub/GitLab/Gitee: mock with `httpx-mock` or `pytest-mock`
- Anthropic/OpenAI: mock the SDK clients
- Redis: use `fakeredis` or a mock
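The `fakeredis` package emulates the Redis API in memory; when a full emulation isn't needed, a tiny dict-backed fake covering just the commands under test also works. A sketch of the latter (the `get`/`set`/`delete` command set is an assumption about what the code touches):

```python
class FakeRedis:
    """Minimal in-memory stand-in for a redis-py client.

    Covers only get/set/delete -- extend it with whatever commands the
    code under test actually uses, or reach for the `fakeredis` package
    for a fuller emulation.
    """

    def __init__(self):
        self._data: dict[str, bytes] = {}

    def set(self, key: str, value) -> bool:
        # redis-py stores values as bytes and returns True on success
        self._data[key] = str(value).encode()
        return True

    def get(self, key: str):
        return self._data.get(key)  # None when missing, like redis-py

    def delete(self, key: str) -> int:
        return 1 if self._data.pop(key, None) is not None else 0


r = FakeRedis()
r.set("session:42", "alice")
assert r.get("session:42") == b"alice"
assert r.delete("session:42") == 1
assert r.get("session:42") is None
```

Matching redis-py's quirks (bytes values, `None` for missing keys) keeps the fake honest, so tests that pass against it are more likely to pass against real Redis.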
### Database

- Test DB: SQLite in-memory database
- Isolation: each test gets a fresh transaction
- Cleanup: automatic rollback after each test
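The backend's `test_db` fixture is built on SQLAlchemy and isn't reproduced here, but the isolation idea can be sketched with plain `sqlite3`: each test gets a brand-new in-memory database that vanishes when its connection closes, so nothing leaks between tests and no cleanup code is needed.

```python
import sqlite3
from contextlib import contextmanager


@contextmanager
def fresh_test_db():
    """Yield a brand-new in-memory database per test.

    Mirrors the fixture strategy described above: function-scoped,
    no shared state, destroyed automatically when closed.
    """
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    try:
        yield conn
    finally:
        conn.close()  # the in-memory DB disappears with the connection


# "Test" 1 writes a row...
with fresh_test_db() as db:
    db.execute("INSERT INTO users (name) VALUES ('alice')")
    assert db.execute("SELECT COUNT(*) FROM users").fetchone()[0] == 1

# ...and "test" 2 starts from an empty table, demonstrating isolation.
with fresh_test_db() as db:
    assert db.execute("SELECT COUNT(*) FROM users").fetchone()[0] == 0
```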
### Docker

- Mock `docker.from_env()` and container operations
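As a sketch of this approach (the `launch_task_container` helper is invented for illustration; attribute names mirror the Docker SDK's `containers.run`), a `MagicMock` can play the role of the client returned by `docker.from_env()`. In the real tests, something like pytest-mock's `mocker.patch` would inject it; here it is just passed in directly:

```python
from unittest.mock import MagicMock


def launch_task_container(docker_client, image: str) -> str:
    """Toy dispatcher logic: start a container and report its id."""
    container = docker_client.containers.run(image, detach=True)
    return container.id


# Stand-in for docker.from_env() -- no Docker daemon is ever contacted.
mock_client = MagicMock()
mock_client.containers.run.return_value = MagicMock(id="abc123")

assert launch_task_container(mock_client, "wegent/executor:latest") == "abc123"
# Verify the container was started with the expected arguments:
mock_client.containers.run.assert_called_once_with(
    "wegent/executor:latest", detach=True
)
```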
## Best Practices

### Writing Tests
- One assertion per test: Each test should verify one specific behavior
- Descriptive names: Use clear, descriptive test function names that explain what is being tested
- AAA pattern: Arrange, Act, Assert - structure your tests clearly
- Mock external dependencies: Never call real external services (APIs, databases, etc.)
- Use fixtures: Share common test setup via fixtures to reduce duplication
- Test edge cases: Include tests for error conditions, boundary values, and unusual inputs
- Keep tests independent: Each test should be able to run independently without relying on other tests
### Security Testing Best Practices

The project includes comprehensive security-testing examples in `backend/tests/core/test_security.py`:
- Password hashing and verification (bcrypt)
- JWT token creation and validation
- Token expiration handling
- User authentication with valid/invalid credentials
- Inactive user detection
- Role-based access control (admin vs regular users)
Example test pattern for security features:

```python
@pytest.mark.unit
class TestPasswordHashing:
    """Test password hashing and verification functions"""

    def test_verify_password_with_correct_password(self):
        """Test password verification with correct password"""
        password = "testpassword123"
        hashed = get_password_hash(password)
        assert verify_password(password, hashed) is True

    def test_verify_password_with_incorrect_password(self):
        """Test password verification with incorrect password"""
        password = "testpassword123"
        hashed = get_password_hash(password)
        assert verify_password("wrongpassword", hashed) is False
```
### Test Organization

```python
@pytest.mark.unit
class TestFeatureName:
    """Test feature description"""

    def test_success_case(self):
        """Test successful operation"""
        # Arrange
        data = {"key": "value"}

        # Act
        result = function_under_test(data)

        # Assert
        assert result == expected_value

    def test_error_case(self):
        """Test error handling"""
        with pytest.raises(ExpectedException):
            function_under_test(invalid_data)
```
### Using Test Markers

Test markers help categorize and selectively run tests:

```bash
# Run only unit tests
pytest -m unit

# Run only integration tests
pytest -m integration

# Run slow tests
pytest -m slow

# Skip slow tests
pytest -m "not slow"
```
### Async Tests

```python
@pytest.mark.asyncio
async def test_async_function():
    """Test an asynchronous function"""
    result = await async_function()
    assert result is not None
```

The backend's `pytest.ini` sets `asyncio_mode = auto`, which automatically detects and runs async tests, so the explicit `@pytest.mark.asyncio` marker is optional there.
## Adding New Tests

### Backend

- Create the test file in the appropriate `tests/` subdirectory (e.g., `tests/services/test_new_service.py`)
- Import the necessary fixtures from `conftest.py`
- Use `@pytest.mark.unit` or `@pytest.mark.integration` to categorize tests
- Follow the AAA (Arrange-Act-Assert) pattern
- Write test classes and methods with descriptive names
- Run tests locally before committing: `pytest tests/ -v`
- Ensure coverage is maintained or improved: `pytest --cov=app --cov-report=term-missing`
### Frontend

- Create the test file in `src/__tests__/`, matching the source structure
- Use `@testing-library/react` for component tests
- Mock API calls and external dependencies
- Ensure tests pass with `npm test`
## Debugging Tests

### Backend

```bash
# Run a specific test with verbose output
pytest tests/core/test_security.py::TestPasswordHashing::test_verify_password_with_correct_password -v

# Drop into the debugger on failure
pytest --pdb

# Show print statements
pytest -s
```
### Frontend

```bash
# Run tests in watch mode
npm run test:watch

# Debug a specific test file
npm test -- src/__tests__/utils/test_example.test.ts
```
## Configuration Files

### Backend

- `backend/pytest.ini`: pytest configuration with coverage settings and test markers
  - Enables verbose output, strict markers, and automatic async mode
  - Configures coverage reports in terminal, HTML, and XML formats
  - Defines custom markers: `unit`, `integration`, `slow`

### Executor / Executor Manager / Shared

- `pytest.ini`: module-specific pytest configuration
  - Similar setup to the backend, but with module-specific coverage targets

### Frontend

- `frontend/jest.config.ts`: Jest configuration
- `frontend/jest.setup.js`: test environment setup
## Future Improvements

- Increase coverage to 70-80%
- Add integration tests for API endpoints (currently a placeholder)
- Add E2E tests for critical user flows
- Performance/load testing
- Mutation testing with `mutmut`
- Add more frontend component tests
- Implement database migration tests
- Add tests for WebSocket connections and real-time features
## Troubleshooting

### Common Issues

**Import errors in tests:**

- Ensure you're running pytest from the correct directory
- Check that dependencies are installed: `uv sync`

**Database errors:**

- Tests use a SQLite in-memory DB, so no setup is needed
- Check that fixtures are imported correctly

**Frontend test failures:**

- Ensure Node.js 18.x is installed
- Run `npm ci` to install exact dependency versions
- Clear the Jest cache: `npx jest --clearCache`