Mirror of https://github.com/temporal-community/temporal-ai-agent.git (synced 2026-03-15 14:08:08 +01:00, commit 0cfb4046b06a84e402be19364d15e589e0c889fa)
# AI Agent execution using Temporal

Work in progress.

This demo shows a multi-turn conversation with an AI agent running inside a Temporal workflow. The agent's job is to collect the information needed to satisfy a goal. A simple DSL input describes what to collect (currently set up to use mock functions to search for events, book flights around those events, then create an invoice for those flights). The AI responds with clarifications and asks for any information still missing towards the goal. By default it uses OpenAI's gpt-4o model; a local LLM via Ollama is supported but deprecated (see below).
## Setup

- See `.env_example` for the required environment variables and copy it to `.env` in the root directory.
- Requires an OpenAI key for the gpt-4o model. Set this in the `OPENAI_API_KEY` environment variable in `.env`.
- Requires a RapidAPI key for sky-scrapper (how we find flights). Set this in the `RAPIDAPI_KEY` environment variable in `.env`.
    - It's free to sign up and get a key at RapidAPI.
    - If you're lazy, go to `tools/search_flights.py` and replace the `get_flights` function with the mock `search_flights_example` that exists in the same file.
- Requires a Stripe key for the `create_invoice` tool. Set this in the `STRIPE_API_KEY` environment variable in `.env`.
    - It's free to sign up and get a key at Stripe.
    - If you're lazy, go to `tools/create_invoice.py` and replace the `create_invoice` function with the mock `create_invoice_example` that exists in the same file.
- Install and run Temporal. Follow the instructions in the Temporal documentation to install and run the Temporal server.
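Assuming the variable names listed above, a minimal `.env` might look like the following. The values here are placeholders, not real key formats — substitute your own keys:

```
OPENAI_API_KEY=your-openai-key
RAPIDAPI_KEY=your-rapidapi-key
STRIPE_API_KEY=your-stripe-key
```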
### Python Environment

Requires Poetry to manage dependencies.

- `python -m venv venv`
- `source venv/bin/activate`
- `poetry install`
### React UI

- `cd frontend`
- `npm install` to install the dependencies.
### Deprecated

- Install Ollama and the Qwen2.5 14B model (`ollama run qwen2.5:14b`). Note this model is about a 9 GB download.
- Local LLM is disabled as ChatGPT 4o was better for this use case. To use Ollama, examine `./activities/tool_activities.py` and rename the functions.
## Running the demo

### Run a Temporal Dev Server

On a Mac:

- `brew install temporal`
- `temporal server start-dev`

See the Temporal documentation for other platforms.
### Run a Temporal Worker

From the `/scripts` directory:

- Run the worker: `poetry run python run_worker.py`

Then run the API and UI using the instructions below.
### API

- `poetry run uvicorn api.main:app --reload` to start the API server.
- Access the API at `/docs` to see the available endpoints.
### UI

- `npm run dev` to start the dev server.
- Access the UI at http://localhost:5173
## Customizing the agent

- `tool_registry.py` contains the mapping of tool names to tool definitions (so the AI understands how to use them).
- `goal_registry.py` contains descriptions of goals and the tools used to achieve them.
- The tools themselves are defined in their own files in `/tools`.
- Note the mapping in `tools/__init__.py` to each tool.
- See `main.py` where some tool-specific logic is defined (todo: move this to the tool definition).
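To make the registry idea concrete, here is a minimal sketch of how a tool-name-to-definition mapping could be shaped, and how the agent could use it to decide which arguments still need to be collected from the user. The field names and the `missing_arguments` helper are illustrative assumptions — the actual schema in `tool_registry.py` may differ:

```python
# Hypothetical tool definition; the real schema in tool_registry.py
# may use different field names.
search_flights_tool = {
    "name": "SearchFlights",
    "description": "Search for flights around a given event.",
    "arguments": [
        {"name": "origin", "type": "string", "required": True},
        {"name": "destination", "type": "string", "required": True},
    ],
}

# Mapping of tool names to tool definitions, as described above.
TOOL_REGISTRY = {
    "SearchFlights": search_flights_tool,
}


def missing_arguments(tool_name: str, provided: dict) -> list[str]:
    """Names of required arguments the agent still needs to collect."""
    tool = TOOL_REGISTRY[tool_name]
    return [
        arg["name"]
        for arg in tool["arguments"]
        if arg["required"] and arg["name"] not in provided
    ]
```

A loop like this is what lets the AI "ask for any missing information": each turn, it checks which required arguments are still absent and asks a clarifying question for them.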
## TODO

- Prove this out with other tool definitions outside of the event/flight search case (take advantage of the DSL).
- Currently hardcoded to the Temporal dev server at localhost:7233. Need to support options, including Temporal Cloud.
- UI: make it prettier.