# Temporal AI Agent
This demo shows a multi-turn conversation with an AI agent running inside a Temporal workflow. The agent collects information towards a goal, driven by a simple DSL input (currently set up to use mock functions to search for events, book flights around those events, then create an invoice for those flights). The AI responds with clarifications and asks for any information still missing towards that goal. It uses ChatGPT 4o but can be made to use a local LLM via Ollama (see the "Using a local LLM" section below).
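The goal-and-tools flow can be pictured with a minimal sketch. Note the field names and return shape below are illustrative, not the project's actual DSL:

```python
# Hypothetical sketch of a goal definition: a goal bundles a description
# with an ordered list of tools, each declaring the arguments the agent
# must collect from the user before the tool can run.
goal = {
    "description": "Help the user book flights around an event and invoice them.",
    "tools": [
        {"name": "search_events", "args": ["city", "month"]},
        {"name": "search_flights", "args": ["origin", "destination", "date"]},
        {"name": "create_invoice", "args": ["amount", "email"]},
    ],
}

def missing_args(tool: dict, collected: dict) -> list[str]:
    """Return the arguments the agent still needs to ask the user for."""
    return [a for a in tool["args"] if a not in collected]

print(missing_args(goal["tools"][1], {"origin": "SYD"}))
# ['destination', 'date']
```

This "ask for whatever is still missing" loop is what drives the agent's clarifying questions.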
Watch the demo (5 minute YouTube video)
## Configuration
This application uses `.env` files for configuration. Copy the `.env.example` file to `.env` and update the values:

```bash
cp .env.example .env
```
The agent requires an OpenAI API key for the `gpt-4o` model. Set this in the `OPENAI_API_KEY` environment variable in `.env`.
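How a `.env` file gets into the process environment can be sketched with a stdlib-only loader (the real project may use a library such as python-dotenv instead; this is just to show the mechanism):

```python
import os

def load_env(path: str = ".env") -> None:
    """Minimal .env loader: KEY=VALUE lines, '#' starts a comment.
    Existing environment variables are not overwritten."""
    with open(path) as f:
        for raw in f:
            line = raw.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

# Usage: load_env(), then read os.environ["OPENAI_API_KEY"] as usual.
```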
## Agent Tools
- Requires a RapidAPI key for sky-scrapper (how we find flights). Set this in the `RAPIDAPI_KEY` environment variable in `.env`.
  - It's free to sign up and get a key at RapidAPI.
  - If you're lazy, go to `tools/search_flights.py` and replace the `get_flights` function with the mock `search_flights_example` that exists in the same file.
- Requires a Stripe key for the `create_invoice` tool. Set this in the `STRIPE_API_KEY` environment variable in `.env`.
  - It's free to sign up and get a key at Stripe.
  - If you're lazy, go to `tools/create_invoice.py` and replace the `create_invoice` function with the mock `create_invoice_example` that exists in the same file.
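Swapping in a mock follows a pattern like this sketch. The function names match those mentioned above, but the return shape and fields are illustrative, not the project's actual ones:

```python
def search_flights_example(origin: str, destination: str, date: str) -> list[dict]:
    """Mock stand-in for get_flights: returns canned results so the demo
    runs without a RapidAPI key. Fields here are illustrative only."""
    return [
        {"carrier": "Demo Air", "origin": origin, "destination": destination,
         "date": date, "price_usd": 199.0},
        {"carrier": "Example Jet", "origin": origin, "destination": destination,
         "date": date, "price_usd": 249.0},
    ]

# In tools/search_flights.py, point the real entry point at the mock:
get_flights = search_flights_example
```

Because the agent only calls the registered function name, rebinding it to the mock is the whole change.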
## Configuring Temporal Connection
By default, this application will connect to a local Temporal server (`localhost:7233`) in the `default` namespace, using the `agent-task-queue` task queue. You can override these settings in your `.env` file.
### Use Temporal Cloud

See `.env.example` for details on connecting to Temporal Cloud using mTLS or API key authentication.
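A sketch of turning those environment variables into Temporal client connection arguments might look like this. The `TEMPORAL_ADDRESS` and `TEMPORAL_NAMESPACE` variable names are assumptions (check `.env.example` for the actual names); the defaults match those stated above:

```python
import os

def temporal_connect_kwargs() -> dict:
    """Build keyword arguments for temporalio.client.Client.connect()
    from the environment, falling back to the local dev-server defaults."""
    return {
        "target_host": os.environ.get("TEMPORAL_ADDRESS", "localhost:7233"),
        "namespace": os.environ.get("TEMPORAL_NAMESPACE", "default"),
    }

# Usage (inside an async context):
#   client = await Client.connect(**temporal_connect_kwargs())
```

Temporal Cloud additionally needs TLS or API-key settings, which would extend the same dictionary.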
### Use a local Temporal Dev Server

On a Mac:

```bash
brew install temporal
temporal server start-dev
```
See the Temporal documentation for other platforms.
## Running the Application
### Python Backend
Requires Poetry to manage dependencies.

```bash
python -m venv venv
source venv/bin/activate
poetry install
```
Run the following commands in separate terminal windows:

1. Start the Temporal worker:

   ```bash
   poetry run python scripts/run_worker.py
   ```

2. Start the API server:

   ```bash
   poetry run uvicorn api.main:app --reload
   ```
Access the API at `/docs` to see the available endpoints.
### React UI

Start the frontend:

```bash
cd frontend
npm install
npx vite
```
Access the UI at http://localhost:5173
## Customizing the Agent
- `tool_registry.py` contains the mapping of tool names to tool definitions (so the AI understands how to use them)
- `goal_registry.py` contains descriptions of goals and the tools used to achieve them
- The tools themselves are defined in their own files in `/tools`
- Note the mapping in `tools/__init__.py` to each tool
- See `main.py`, where some tool-specific logic is defined (TODO: move this to the tool definition)
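A registry entry can be sketched roughly as follows. The `ToolDefinition` shape and the `search_events` stub are hypothetical; see `tool_registry.py` for the project's actual structure:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ToolDefinition:
    """Illustrative shape for a registry entry: enough metadata for the
    LLM to know what the tool does and which arguments to collect."""
    name: str
    description: str
    arguments: dict[str, str]  # argument name -> natural-language description
    fn: Callable

def search_events(city: str, month: str) -> list[dict]:
    """Hypothetical tool implementation, normally living in /tools."""
    return [{"name": "Demo Festival", "city": city, "month": month}]

TOOL_REGISTRY = {
    "search_events": ToolDefinition(
        name="search_events",
        description="Find events in a given city and month.",
        arguments={"city": "City to search", "month": "Month of travel"},
        fn=search_events,
    ),
}
```

Adding a new capability then means writing the function in `/tools`, exporting it in `tools/__init__.py`, and registering it here.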
## Using a local LLM instead of ChatGPT 4o
With a small code change, the agent can use local LLMs.
- Install Ollama and the Qwen2.5 14B model (`ollama run qwen2.5:14b`). Note this model is about a 9 GB download.
- Local LLM support is disabled because ChatGPT 4o performed better for this use case. To use Ollama, examine `./activities/tool_activities.py` and rename the existing functions.
- Note that Qwen2.5 14B is not as good as ChatGPT 4o for this use case and will perform worse at moving the conversation towards the goal.
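The project switches providers by renaming functions in `activities/tool_activities.py`, but the idea can be sketched as an environment-driven toggle. The `LLM_PROVIDER` variable and this dispatch function are purely illustrative:

```python
import os

def llm_model() -> str:
    """Illustrative provider switch: pick the model name based on an
    (assumed) LLM_PROVIDER environment variable."""
    provider = os.environ.get("LLM_PROVIDER", "openai").lower()
    if provider == "ollama":
        return "qwen2.5:14b"  # served locally by Ollama
    return "gpt-4o"           # default: OpenAI
```

An env-var toggle like this would avoid code changes when flipping between backends, at the cost of a slightly more complex activity.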
## TODO
- Prove this out with other tool definitions beyond the event/flight search case (take advantage of the DSL).
- Currently hardcoded to the Temporal dev server at `localhost:7233`; need to support other options, including Temporal Cloud.
- Continue-as-new shouldn't be a big consideration for this use case (it would take many conversational turns to trigger). Regardless, ensure the agent state carries over to the new workflow execution.
- Tests would be nice!
