Temporal Cloud support

Steve Androulakis
2025-01-10 14:13:32 -08:00
parent 22f9f6dee4
commit c623029c9d
9 changed files with 155 additions and 104 deletions

View File

@@ -4,3 +4,11 @@ RAPIDAPI_KEY=9df2cb5...
 RAPIDAPI_HOST=sky-scrapper.p.rapidapi.com
 STRIPE_API_KEY=sk_test_51J...
+
+# Comment out or unset these environment variables to connect to the local dev server
+TEMPORAL_ADDRESS=namespace.acct.tmprl.cloud:7233
+TEMPORAL_NAMESPACE=default
+TEMPORAL_TASK_QUEUE=agent-task-queue
+TEMPORAL_TLS_CERT='path/to/cert.pem'
+TEMPORAL_TLS_KEY='path/to/key.pem'
+# TEMPORAL_API_KEY=abcdef1234567890 # Uncomment if not using mTLS
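All of these variables are optional: when unset, the application falls back to local-dev defaults. A minimal sketch of that resolution (the `temporal_settings` helper is illustrative; the defaults are the ones this commit uses in `shared/config.py`):

```python
def temporal_settings(env: dict) -> dict:
    """Resolve Temporal connection settings; unset variables fall
    back to the local dev server defaults used by this commit."""
    return {
        "address": env.get("TEMPORAL_ADDRESS", "localhost:7233"),
        "namespace": env.get("TEMPORAL_NAMESPACE", "default"),
        "task_queue": env.get("TEMPORAL_TASK_QUEUE", "agent-task-queue"),
    }

# With nothing set, everything points at the local dev server
print(temporal_settings({}))
# {'address': 'localhost:7233', 'namespace': 'default', 'task_queue': 'agent-task-queue'}
```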

View File

@@ -1,4 +1,4 @@
-# AI Agent execution using Temporal
+# Temporal AI Agent
 This demo shows a multi-turn conversation with an AI agent running inside a Temporal workflow. The agent collects information towards a goal, driven by a simple DSL (currently set up to use mock functions to search for events, book flights around those events, then create an invoice for those flights). The AI responds with clarifications and asks for any information still missing. It uses ChatGPT 4o but can be made to use a local LLM via [Ollama](https://ollama.com) (see the deprecated section below).
@@ -6,18 +6,51 @@ This demo shows a multi-turn conversation with an AI agent running inside a Temp
 [![Watch the demo](./agent-youtube-screenshot.jpeg)](https://www.youtube.com/watch?v=GEXllEH2XiQ)
 
-## Setup
-* See `.env_example` for the required environment variables and copy to `.env` in the root directory.
-* Requires an OpenAI key for the gpt-4o model. Set this in the `OPENAI_API_KEY` environment variable in .env
+## Configuration
+This application uses `.env` files for configuration. Copy the `.env.example` file to `.env` and update the values:
+```bash
+cp .env.example .env
+```
+The agent requires an OpenAI key for the gpt-4o model. Set this in the `OPENAI_API_KEY` environment variable in `.env`.
+
+#### Using a local LLM instead of ChatGPT 4o
+* Install [Ollama](https://ollama.com) and the [Qwen2.5 14B](https://ollama.com/library/qwen2.5) model (`ollama run qwen2.5:14b`). Note this model is about 9GB to download.
+* The local LLM is disabled, as ChatGPT 4o was better for this use case. To use Ollama, examine `./activities/tool_activities.py` and rename the functions.
+
+## Agent Tools
 * Requires a RapidAPI key for sky-scrapper (how we find flights). Set this in the `RAPIDAPI_KEY` environment variable in `.env`.
 * It's free to sign up and get a key at [RapidAPI](https://rapidapi.com/apiheya/api/sky-scrapper)
 * If you're lazy, go to `tools/search_flights.py` and replace the `get_flights` function with the mock `search_flights_example` that exists in the same file.
 * Requires a Stripe key for the `create_invoice` tool. Set this in the `STRIPE_API_KEY` environment variable in `.env`.
 * It's free to sign up and get a key at [Stripe](https://stripe.com/)
 * If you're lazy, go to `tools/create_invoice.py` and replace the `create_invoice` function with the mock `create_invoice_example` that exists in the same file.
-* Install and run Temporal. Follow the instructions in the [Temporal documentation](https://learn.temporal.io/getting_started/python/dev_environment/#set-up-a-local-temporal-service-for-development-with-temporal-cli) to install and run the Temporal server.
-### Python Environment
+## Configuring Temporal Connection
+By default, this application will connect to a local Temporal server (`localhost:7233`) in the default namespace, using the `agent-task-queue` task queue. You can override these settings in your `.env` file.
+
+### Use Temporal Cloud
+See `.env.example` for details on connecting to Temporal Cloud using mTLS or API key authentication.
+
+[Sign up for Temporal Cloud](https://temporal.io/get-cloud)
+
+### Use a local Temporal Dev Server
+On a Mac:
+```bash
+brew install temporal
+temporal server start-dev
+```
+See the [Temporal documentation](https://learn.temporal.io/getting_started/python/dev_environment/) for other platforms.
+
+## Running the Application
+### Python Backend
 Requires [Poetry](https://python-poetry.org/) to manage dependencies.
@@ -27,44 +60,29 @@ Requires [Poetry](https://python-poetry.org/) to manage dependencies.
 3. `poetry install`
 
-### React UI
-- `cd frontend`
-- `npm install` to install the dependencies.
-
-#### Deprecated:
-* Install [Ollama](https://ollama.com) and the [Qwen2.5 14B](https://ollama.com/library/qwen2.5) model (`ollama run qwen2.5:14b`). (note this model is about 9GB to download).
-* Local LLM is disabled as ChatGPT 4o was better for this use case. To use Ollama, examine `./activities/tool_activities.py` and rename the functions.
-
-## Running the demo
-### Run a Temporal Dev Server
-On a Mac
-```bash
-brew install temporal
-temporal server start-dev
-```
-See the [Temporal documentation](https://learn.temporal.io/getting_started/python/dev_environment/) for other platforms.
-### Run a Temporal Worker
-From the `/scripts` directory:
-- Run the worker: `poetry run python run_worker.py`
-Then run the API and UI using the instructions below.
-### API
-- `poetry run uvicorn api.main:app --reload` to start the API server.
-- Access the API at `/docs` to see the available endpoints.
-### UI
-- `npm run dev` to start the dev server.
-- Access the UI at `http://localhost:5173`
-## Customizing the agent
+Run the following commands in separate terminal windows:
+
+1. Start the Temporal worker:
+```bash
+poetry run python scripts/run_worker.py
+```
+
+2. Start the API server:
+```bash
+poetry run uvicorn api.main:app --reload
+```
+Access the API at `/docs` to see the available endpoints.
+
+### React UI
+Start the frontend:
+```bash
+cd frontend
+npm install
+npm run dev
+```
+Access the UI at `http://localhost:5173`
+
+## Customizing the Agent
 - `tool_registry.py` contains the mapping of tool names to tool definitions (so the AI understands how to use them)
 - `goal_registry.py` contains descriptions of goals and the tools used to achieve them
 - The tools themselves are defined in their own files in `/tools`
@@ -74,4 +92,3 @@ Then run the API and UI using the instructions below.
 ## TODO
 - I should prove this out with other tool definitions outside of the event/flight search case (take advantage of my nice DSL).
-- Currently hardcoded to the Temporal dev server at localhost:7233. Need to support options incl Temporal Cloud.
 - UI: Make prettier
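The tool definitions mentioned under "Customizing the Agent" are the heart of the DSL: the agent keeps asking until every required argument is collected. A purely hypothetical sketch of what such a definition and the "what's still missing?" check could look like (the real shapes live in `tool_registry.py` and may differ):

```python
# Hypothetical shape of a tool definition -- illustrative only,
# the real ones live in tools/tool_registry.py.
search_flights_tool = {
    "name": "search_flights",
    "description": "Search flights around a chosen event",
    "arguments": [
        {"name": "origin", "type": "string", "required": True},
        {"name": "destination", "type": "string", "required": True},
        {"name": "date", "type": "string", "required": False},
    ],
}

def missing_args(tool: dict, collected: dict) -> list:
    """Names of required arguments the agent still has to ask for."""
    return [
        a["name"]
        for a in tool["arguments"]
        if a["required"] and a["name"] not in collected
    ]

print(missing_args(search_flights_tool, {"origin": "SFO"}))  # ['destination']
```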

View File

@@ -1,12 +1,21 @@
 from fastapi import FastAPI
+from typing import Optional
 from temporalio.client import Client
 from workflows.tool_workflow import ToolWorkflow
 from models.data_types import CombinedInput, ToolWorkflowParams
 from tools.goal_registry import goal_event_flight_invoice
 from temporalio.exceptions import TemporalError
 from fastapi.middleware.cors import CORSMiddleware
+from shared.config import get_temporal_client, TEMPORAL_TASK_QUEUE
 
 app = FastAPI()
+temporal_client: Optional[Client] = None
+
+
+@app.on_event("startup")
+async def startup_event():
+    global temporal_client
+    temporal_client = await get_temporal_client()
 
 app.add_middleware(
     CORSMiddleware,
@@ -25,10 +34,9 @@ def root():
 @app.get("/tool-data")
 async def get_tool_data():
     """Calls the workflow's 'get_tool_data' query."""
-    client = await Client.connect("localhost:7233")
     try:
         # Get workflow handle
-        handle = client.get_workflow_handle("agent-workflow")
+        handle = temporal_client.get_workflow_handle("agent-workflow")
 
         # Check if the workflow is completed
         workflow_status = await handle.describe()
@@ -48,9 +56,8 @@ async def get_tool_data():
 @app.get("/get-conversation-history")
 async def get_conversation_history():
     """Calls the workflow's 'get_conversation_history' query."""
-    client = await Client.connect("localhost:7233")
     try:
-        handle = client.get_workflow_handle("agent-workflow")
+        handle = temporal_client.get_workflow_handle("agent-workflow")
         conversation_history = await handle.query("get_conversation_history")
         return conversation_history
@@ -61,8 +68,6 @@ async def get_conversation_history():
 @app.post("/send-prompt")
 async def send_prompt(prompt: str):
-    client = await Client.connect("localhost:7233")
-
     # Create combined input
     combined_input = CombinedInput(
         tool_params=ToolWorkflowParams(None, None),
@@ -72,11 +77,11 @@ async def send_prompt(prompt: str):
     workflow_id = "agent-workflow"
 
     # Start (or signal) the workflow
-    await client.start_workflow(
+    await temporal_client.start_workflow(
         ToolWorkflow.run,
         combined_input,
         id=workflow_id,
-        task_queue="agent-task-queue",
+        task_queue=TEMPORAL_TASK_QUEUE,
         start_signal="user_prompt",
         start_signal_args=[prompt],
     )
@@ -87,9 +92,8 @@ async def send_prompt(prompt: str):
 @app.post("/confirm")
 async def send_confirm():
     """Sends a 'confirm' signal to the workflow."""
-    client = await Client.connect("localhost:7233")
     workflow_id = "agent-workflow"
-    handle = client.get_workflow_handle(workflow_id)
+    handle = temporal_client.get_workflow_handle(workflow_id)
     await handle.signal("confirm")
     return {"message": "Confirm signal sent."}
@@ -97,11 +101,10 @@ async def send_confirm():
 @app.post("/end-chat")
 async def end_chat():
     """Sends an 'end_chat' signal to the workflow."""
-    client = await Client.connect("localhost:7233")
     workflow_id = "agent-workflow"
     try:
-        handle = client.get_workflow_handle(workflow_id)
+        handle = temporal_client.get_workflow_handle(workflow_id)
         await handle.signal("end_chat")
         return {"message": "End chat signal sent."}
     except TemporalError as e:
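The change running through these hunks is a single pattern: connect once in the startup hook, then have every handler reuse the shared client instead of calling `Client.connect` per request. A framework-free sketch of that ordering, assuming a hypothetical `FakeClient` stand-in (the real app awaits `get_temporal_client()`):

```python
import asyncio
from typing import Optional


class FakeClient:
    """Stand-in for temporalio.client.Client, for illustration only."""

    def get_workflow_handle(self, workflow_id: str) -> str:
        return f"handle:{workflow_id}"


temporal_client: Optional[FakeClient] = None


async def startup_event() -> None:
    # Runs once at startup; the real version awaits get_temporal_client()
    global temporal_client
    temporal_client = FakeClient()


async def send_confirm() -> str:
    # Handlers reuse the shared client instead of reconnecting per request
    assert temporal_client is not None, "startup has not run"
    return temporal_client.get_workflow_handle("agent-workflow")


async def main() -> None:
    await startup_event()
    print(await send_confirm())  # handle:agent-workflow


asyncio.run(main())
```

Connecting once also means a misconfigured address fails fast at startup rather than on the first request.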

View File

@@ -1,6 +1,6 @@
 import asyncio
-from temporalio.client import Client
 from workflows.tool_workflow import ToolWorkflow

View File

@@ -1,12 +1,12 @@
 import asyncio
-from temporalio.client import Client
-from workflows import ToolWorkflow
+from shared.config import get_temporal_client
+from workflows.tool_workflow import ToolWorkflow
 
 
 async def main():
-    # Create client connected to server at the given address
-    client = await Client.connect("localhost:7233")
+    # Create the client
+    client = await get_temporal_client()
     workflow_id = "agent-workflow"
     handle = client.get_workflow_handle(workflow_id)
@@ -15,16 +15,7 @@ async def main():
     history = await handle.query(ToolWorkflow.get_conversation_history)
 
     print("Conversation History")
-    print(
-        *(f"{speaker.title()}: {message}\n" for speaker, message in history), sep="\n"
-    )
-
-    # Queries the workflow for the conversation summary
-    summary = await handle.query(ToolWorkflow.get_summary_from_history)
-    if summary is not None:
-        print("Conversation Summary:")
-        print(summary)
+    print(history)
 
 if __name__ == "__main__":

View File

@@ -1,23 +0,0 @@
-import asyncio
-import json
-from temporalio.client import Client
-from workflows.tool_workflow import ToolWorkflow
-
-
-async def main():
-    # Create client connected to server at the given address
-    client = await Client.connect("localhost:7233")
-    workflow_id = "agent-workflow"
-    handle = client.get_workflow_handle(workflow_id)
-
-    # Queries the workflow for the tool data
-    tool_data = await handle.query(ToolWorkflow.get_tool_data)
-
-    # Pretty print
-    print(json.dumps(tool_data, indent=4))
-
-if __name__ == "__main__":
-    asyncio.run(main())

View File

@@ -1,27 +1,26 @@
 import asyncio
-import concurrent.futures
-import logging
-from temporalio.client import Client
+import concurrent.futures
 from temporalio.worker import Worker
 from activities.tool_activities import ToolActivities, dynamic_tool_activity
 from workflows.tool_workflow import ToolWorkflow
-from dotenv import load_dotenv
-
-load_dotenv()
+from shared.config import get_temporal_client, TEMPORAL_TASK_QUEUE
 
 
 async def main():
-    # Create client connected to server at the given address
-    client = await Client.connect("localhost:7233")
+    # Create the client
+    client = await get_temporal_client()
     activities = ToolActivities()
 
     # Run the worker
     with concurrent.futures.ThreadPoolExecutor(max_workers=100) as activity_executor:
         worker = Worker(
             client,
-            task_queue="agent-task-queue",
+            task_queue=TEMPORAL_TASK_QUEUE,
             workflows=[ToolWorkflow],
             activities=[
                 activities.prompt_llm,
@@ -29,12 +28,10 @@ async def main():
             ],
             activity_executor=activity_executor,
         )
+        print(f"Starting worker, connecting to task queue: {TEMPORAL_TASK_QUEUE}")
         await worker.run()
 
 if __name__ == "__main__":
-    print("Starting worker")
-    logging.basicConfig(level=logging.INFO)
     asyncio.run(main())

View File

@@ -1,12 +1,14 @@
 import asyncio
 import sys
-from temporalio.client import Client
+from shared.config import get_temporal_client
+from workflows.tool_workflow import ToolWorkflow
 
 
 async def main():
-    # 1) Connect to Temporal and signal the workflow
-    client = await Client.connect("localhost:7233")
+    # Connect to Temporal and signal the workflow
+    client = await get_temporal_client()
     workflow_id = "agent-workflow"

shared/config.py Normal file
View File

@@ -0,0 +1,56 @@
+import os
+
+from dotenv import load_dotenv
+from temporalio.client import Client
+from temporalio.service import TLSConfig
+
+load_dotenv()
+
+# Temporal connection settings
+TEMPORAL_ADDRESS = os.getenv("TEMPORAL_ADDRESS", "localhost:7233")
+TEMPORAL_NAMESPACE = os.getenv("TEMPORAL_NAMESPACE", "default")
+TEMPORAL_TASK_QUEUE = os.getenv("TEMPORAL_TASK_QUEUE", "agent-task-queue")
+
+# Authentication settings
+TEMPORAL_TLS_CERT = os.getenv("TEMPORAL_TLS_CERT", "")
+TEMPORAL_TLS_KEY = os.getenv("TEMPORAL_TLS_KEY", "")
+TEMPORAL_API_KEY = os.getenv("TEMPORAL_API_KEY", "")
+
+
+async def get_temporal_client() -> Client:
+    """
+    Creates a Temporal client based on environment configuration.
+    Supports local server, mTLS, and API key authentication methods.
+    """
+    # Default to no TLS for local development
+    tls_config = False
+
+    print(f"Address: {TEMPORAL_ADDRESS}, Namespace: {TEMPORAL_NAMESPACE}")
+    print("(If unset, then will try to connect to local server)")
+
+    # Configure mTLS if certificate and key are provided
+    if TEMPORAL_TLS_CERT and TEMPORAL_TLS_KEY:
+        print(f"TLS cert: {TEMPORAL_TLS_CERT}")
+        print(f"TLS key: {TEMPORAL_TLS_KEY}")
+        with open(TEMPORAL_TLS_CERT, "rb") as f:
+            client_cert = f.read()
+        with open(TEMPORAL_TLS_KEY, "rb") as f:
+            client_key = f.read()
+        tls_config = TLSConfig(
+            client_cert=client_cert,
+            client_private_key=client_key,
+        )
+
+    # Use API key authentication if provided
+    if TEMPORAL_API_KEY:
+        print("Using API key authentication")
+        return await Client.connect(
+            TEMPORAL_ADDRESS,
+            namespace=TEMPORAL_NAMESPACE,
+            api_key=TEMPORAL_API_KEY,
+            tls=True,  # Always use TLS with API key
+        )
+
+    # Use mTLS or local connection
+    return await Client.connect(
+        TEMPORAL_ADDRESS,
+        namespace=TEMPORAL_NAMESPACE,
+        tls=tls_config,
+    )
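The precedence in `get_temporal_client` is worth calling out: an API key wins over mTLS, which wins over a plain local connection, because the API-key branch returns first even when a cert and key are also set. A minimal stand-alone mirror of that decision (the `resolve_auth` helper is illustrative, not part of the commit):

```python
def resolve_auth(tls_cert: str, tls_key: str, api_key: str) -> str:
    """Mirror the branch order of get_temporal_client:
    API key first, then mTLS, then a plain local connection."""
    if api_key:
        return "api-key"  # Client.connect(..., api_key=..., tls=True)
    if tls_cert and tls_key:
        return "mtls"     # Client.connect(..., tls=TLSConfig(...))
    return "local"        # Client.connect(..., tls=False)

print(resolve_auth("", "", ""))  # local
```

Keeping the decision in one module means the worker, the API server, and every script agree on how they connect without repeating the logic.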