# Testing the Temporal AI Agent

This guide provides instructions for running the comprehensive test suite for the Temporal AI Agent project.
## Quick Start

- Install dependencies:

  ```bash
  uv sync
  ```

- Run all tests:

  ```bash
  uv run pytest
  ```

- Run with time-skipping for faster execution:

  ```bash
  uv run pytest --workflow-environment=time-skipping
  ```
## Test Categories

### Unit Tests

- **Activity Tests** (`tests/test_tool_activities.py`):
  - LLM integration (mocked)
  - Environment configuration
  - JSON processing
  - Dynamic tool execution
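To illustrate the shape of these unit tests, here is a hedged sketch rather than the project's actual code: the `sanitize_json_response` helper below is a hypothetical stand-in for whatever the real `ToolActivities` method does (the test name is taken from the example output later in this guide).

```python
import json


def sanitize_json_response(raw: str) -> str:
    """Hypothetical helper: strip markdown fences an LLM may wrap around JSON."""
    text = raw.strip()
    if text.startswith("```"):
        text = text.strip("`").strip()
        if text.startswith("json"):
            text = text[len("json"):]
    return text.strip()


def test_sanitize_json_response():
    raw = '```json\n{"tool": "search"}\n```'
    cleaned = sanitize_json_response(raw)
    # The fenced wrapper is gone and the payload parses as JSON.
    assert json.loads(cleaned) == {"tool": "search"}


test_sanitize_json_response()
```

Because the helper is pure, the test needs no Temporal environment at all — this is what keeps the unit-test layer fast.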
### Integration Tests

- **Workflow Tests** (`tests/test_agent_goal_workflow.py`):
  - Full workflow execution
  - Signal and query handling
  - State management
  - Error scenarios
Running Specific Tests
# Run only activity tests
uv run pytest tests/test_tool_activities.py -v
# Run only workflow tests
uv run pytest tests/test_agent_goal_workflow.py -v
# Run a specific test
uv run pytest tests/test_tool_activities.py::TestToolActivities::test_sanitize_json_response -v
# Run tests matching a pattern
uv run pytest -k "validation" -v
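The `--workflow-environment` flag used throughout this guide is not a built-in pytest option. Here is a sketch of how such a flag is commonly registered, assuming the project does something similar in `tests/conftest.py` (the real fixture may differ in name and behavior):

```python
# Illustrative conftest.py fragment -- the project's real conftest.py may differ.
import pytest


def pytest_addoption(parser):
    # Register the custom --workflow-environment command-line flag.
    parser.addoption(
        "--workflow-environment",
        default="local",
        help="'local', 'time-skipping', or host:port of an external Temporal server",
    )


@pytest.fixture(scope="session")
def workflow_environment_name(request):
    # Fixtures and tests read the selected environment through this value.
    return request.config.getoption("--workflow-environment")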
## Test Environment Options

### Local Environment (Default)

```bash
uv run pytest --workflow-environment=local
```

### Time-Skipping Environment (Recommended for CI)

```bash
uv run pytest --workflow-environment=time-skipping
```

### External Temporal Server

```bash
uv run pytest --workflow-environment=localhost:7233
```
## Environment Variables

Tests can be configured with these environment variables:

- `LLM_MODEL`: Model for LLM testing (default: `openai/gpt-4`)
- `LLM_KEY`: API key for the LLM service (mocked in tests)
- `LLM_BASE_URL`: Custom LLM endpoint (optional)
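As a sketch of how a test or activity might read these variables with the defaults noted above — the `get_llm_config` helper name is illustrative, not the project's API:

```python
import os


def get_llm_config() -> dict:
    """Illustrative helper: read LLM settings with the defaults noted above."""
    return {
        "model": os.environ.get("LLM_MODEL", "openai/gpt-4"),
        "key": os.environ.get("LLM_KEY", ""),        # mocked in tests
        "base_url": os.environ.get("LLM_BASE_URL"),  # optional; None if unset
    }
```

Using `os.environ.get` with an explicit default keeps the tests runnable on a machine with no LLM credentials configured.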
## Test Coverage

The test suite covers:

### ✅ Workflows

- AgentGoalWorkflow initialization and execution
- Signal handling (`user_prompt`, `confirm`, `end_chat`)
- Query methods (conversation history, agent goal, tool data)
- State management and conversation flow
- Validation and error handling

### ✅ Activities

- `ToolActivities` class methods
- LLM integration (mocked)
- Environment variable handling
- JSON response processing
- Dynamic tool activity execution

### ✅ Integration

- End-to-end workflow execution
- Activity registration in workers
- Temporal client interactions
## Test Output

Successful test run example:

```
============================== test session starts ==============================
platform darwin -- Python 3.11.3, pytest-8.3.5, pluggy-1.5.0
rootdir: /Users/steveandroulakis/Documents/Code/agentic/temporal-demo/temporal-ai-agent
configfile: pyproject.toml
plugins: anyio-4.5.2, asyncio-0.26.0
collected 21 items

tests/test_tool_activities.py::TestToolActivities::test_sanitize_json_response PASSED
tests/test_tool_activities.py::TestToolActivities::test_parse_json_response_success PASSED
tests/test_tool_activities.py::TestToolActivities::test_get_wf_env_vars_default_values PASSED
...

============================== 21 passed in 12.5s ==============================
```
## Troubleshooting

### Common Issues

- **Module not found errors**: Run `uv sync`
- **Async warnings**: These are expected with pytest-asyncio and can be ignored
- **Test timeouts**: Use `--workflow-environment=time-skipping` for faster execution
- **Import errors**: Check that you're running tests from the project root directory
### Debugging Tests

Enable verbose logging:

```bash
uv run pytest --log-cli-level=DEBUG -s
```

Run with coverage:

```bash
uv run pytest --cov=workflows --cov=activities
```
## Continuous Integration

For CI environments, use:

```bash
uv run pytest --workflow-environment=time-skipping --tb=short
```
## Additional Resources

- See `tests/README.md` for detailed testing documentation
- Review `tests/conftest.py` for available test fixtures
- Check individual test files for specific test scenarios
## Test Architecture

The tests use:

- **Temporal Testing Framework**: For workflow and activity testing
- **pytest-asyncio**: For async test support
- **unittest.mock**: For mocking external dependencies
- **Test Fixtures**: For consistent test data and setup

All external dependencies (LLM calls, file I/O) are mocked to ensure fast, reliable tests.
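For instance, an LLM-backed activity can be exercised without network access by injecting an `AsyncMock` in place of the real client. The `ToolActivities` below is a simplified stand-in for the project's class, and `complete` is a hypothetical client method:

```python
import asyncio
from unittest.mock import AsyncMock


class ToolActivities:
    """Simplified stand-in for the project's ToolActivities class."""

    def __init__(self, llm_client):
        self.llm_client = llm_client

    async def prompt_llm(self, prompt: str) -> str:
        # In the real project this would call the configured LLM service.
        response = await self.llm_client.complete(prompt)
        return response.strip()


async def main() -> None:
    # AsyncMock replaces the network-bound client with canned output.
    mock_client = AsyncMock()
    mock_client.complete.return_value = '  {"action": "done"}  '

    activities = ToolActivities(mock_client)
    result = await activities.prompt_llm("What next?")

    assert result == '{"action": "done"}'
    mock_client.complete.assert_awaited_once_with("What next?")


asyncio.run(main())
```

Injecting the client through the constructor (rather than patching a module global) keeps each test hermetic and lets the mock verify exactly how the activity called it.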