2 Commits

Author             SHA1        Message                                    Date
Steve Androulakis  5ede58519c  setup readme                               2025-05-26 13:59:12 -07:00
znack              87afa718d5  Add Docker for better DX from Znack's PR   2025-05-26 13:37:03 -07:00
14 changed files with 994 additions and 1642 deletions


@@ -1,13 +1,27 @@
RAPIDAPI_KEY=9df2cb5... RAPIDAPI_KEY=9df2cb5...
RAPIDAPI_HOST_FLIGHTS=sky-scrapper.p.rapidapi.com #For travel flight information tool RAPIDAPI_HOST_FLIGHTS=sky-scrapper.p.rapidapi.com #For travel flight information tool
RAPIDAPI_HOST_PACKAGE=trackingpackage.p.rapidapi.com #For eCommerce order status package tracking tool RAPIDAPI_HOST_PACKAGE=trackingpackage.p.rapidapi.com #For eCommerce order status package tracking tool
FOOTBALL_DATA_API_KEY= FOOTBALL_DATA_API_KEY=....
# Leave blank to use the built-in mock fixtures generator
STRIPE_API_KEY=sk_test_51J... STRIPE_API_KEY=sk_test_51J...
LLM_MODEL=openai/gpt-4o # default LLM_PROVIDER=openai # default
LLM_KEY=sk-proj-... OPENAI_API_KEY=sk-proj-...
# or
# LLM_PROVIDER=grok
# GROK_API_KEY=xai-your-grok-api-key
# or
# LLM_PROVIDER=ollama
# OLLAMA_MODEL_NAME=qwen2.5:14b
# or
# LLM_PROVIDER=google
# GOOGLE_API_KEY=your-google-api-key
# or
# LLM_PROVIDER=anthropic
# ANTHROPIC_API_KEY=your-anthropic-api-key
# or
# LLM_PROVIDER=deepseek
# DEEPSEEK_API_KEY=your-deepseek-api-key
# uncomment and unset these environment variables to connect to the local dev server # uncomment and unset these environment variables to connect to the local dev server
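The provider blocks above are mutually exclusive: one `LLM_PROVIDER` is active at a time, and each hosted provider reads its own key variable (Ollama only needs a model name). A minimal sketch of how a process might resolve that pair from this `.env` layout; `PROVIDER_KEY_VARS` and `resolve_llm_config` are illustrative names, not code from this repo:

```python
import os
from typing import Optional, Tuple

from dotenv import load_dotenv

# Maps each LLM_PROVIDER value from the .env above to its API key variable.
PROVIDER_KEY_VARS = {
    "openai": "OPENAI_API_KEY",
    "grok": "GROK_API_KEY",
    "google": "GOOGLE_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "deepseek": "DEEPSEEK_API_KEY",
}


def resolve_llm_config() -> Tuple[str, Optional[str]]:
    """Return (provider, api_key); Ollama is local and needs no key."""
    load_dotenv(override=True)
    provider = os.environ.get("LLM_PROVIDER", "openai").lower()
    if provider == "ollama":
        return provider, None
    key_var = PROVIDER_KEY_VARS.get(provider, "OPENAI_API_KEY")
    return provider, os.environ.get(key_var)
```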


@@ -1,62 +0,0 @@
.PHONY: setup install run-worker run-api run-frontend run-train-api run-legacy-worker run-enterprise setup-venv check-python run-dev
# Setup commands
setup: check-python setup-venv install
check-python:
@which python3 >/dev/null 2>&1 || (echo "Python 3 is required. Please install it first." && exit 1)
setup-venv:
python3 -m venv venv
@echo "Virtual environment created. Don't forget to activate it with 'source venv/bin/activate'"
install:
poetry install
cd frontend && npm install
# Run commands
run-worker:
poetry run python scripts/run_worker.py
run-api:
poetry run uvicorn api.main:app --reload
run-frontend:
cd frontend && npx vite
run-train-api:
poetry run python thirdparty/train_api.py
run-legacy-worker:
poetry run python scripts/run_legacy_worker.py
run-enterprise:
cd enterprise && dotnet build && dotnet run
# Development environment setup
setup-temporal-mac:
brew install temporal
temporal server start-dev
# Run all development services
run-dev:
@echo "Starting all development services..."
@make run-worker & \
make run-api & \
make run-frontend & \
wait
# Help command
help:
@echo "Available commands:"
@echo " make setup - Create virtual environment and install dependencies"
@echo " make setup-venv - Create virtual environment only"
@echo " make install - Install all dependencies"
@echo " make run-worker - Start the Temporal worker"
@echo " make run-api - Start the API server"
@echo " make run-frontend - Start the frontend development server"
@echo " make run-train-api - Start the train API server"
@echo " make run-legacy-worker - Start the legacy worker"
@echo " make run-enterprise - Build and run the enterprise .NET worker"
@echo " make setup-temporal-mac - Install and start Temporal server on Mac"
@echo " make run-dev - Start all development services (worker, API, frontend) in parallel"


@@ -2,13 +2,7 @@
This demo shows a multi-turn conversation with an AI agent running inside a Temporal workflow. The purpose of the agent is to collect information towards a goal, running tools along the way. There's a simple DSL input for collecting information (currently set up to use mock functions to search for public events, search for flights around those events, then create a test Stripe invoice for the trip). This demo shows a multi-turn conversation with an AI agent running inside a Temporal workflow. The purpose of the agent is to collect information towards a goal, running tools along the way. There's a simple DSL input for collecting information (currently set up to use mock functions to search for public events, search for flights around those events, then create a test Stripe invoice for the trip).
The AI will respond with clarifications and ask for any missing information to that goal. You can configure it to use any LLM supported by [LiteLLM](https://docs.litellm.ai/docs/providers), including: The AI will respond with clarifications and ask for any missing information to that goal. You can configure it to use [ChatGPT 4o](https://openai.com/index/hello-gpt-4o/), [Anthropic Claude](https://www.anthropic.com/claude), [Google Gemini](https://gemini.google.com), [Deepseek-V3](https://www.deepseek.com/), [Grok](https://docs.x.ai/docs/overview) or a local LLM of your choice using [Ollama](https://ollama.com).
- OpenAI models (GPT-4, GPT-3.5)
- Anthropic Claude models
- Google Gemini models
- Deepseek models
- Ollama models (local)
- And many more!
It's really helpful to [watch the demo (5 minute YouTube video)](https://www.youtube.com/watch?v=GEXllEH2XiQ) to understand how interaction works. It's really helpful to [watch the demo (5 minute YouTube video)](https://www.youtube.com/watch?v=GEXllEH2XiQ) to understand how interaction works.
@@ -34,11 +28,7 @@ These are the key elements of an agentic framework:
For a deeper dive into this, check out the [architecture guide](./architecture.md). For a deeper dive into this, check out the [architecture guide](./architecture.md).
## Setup and Configuration ## Setup and Configuration
See [the Setup guide](./setup.md) for detailed instructions. The basic configuration requires just two environment variables: See [the Setup guide](./setup.md).
```bash
LLM_MODEL=openai/gpt-4o # or any other model supported by LiteLLM
LLM_KEY=your-api-key-here
```
## Customizing Interaction & Tools ## Customizing Interaction & Tools
See [the guide to adding goals and tools](./adding-goals-and-tools.md). See [the guide to adding goals and tools](./adding-goals-and-tools.md).


@@ -1,28 +1,142 @@
import inspect import inspect
from temporalio import activity from temporalio import activity
from ollama import chat, ChatResponse
from openai import OpenAI
import json import json
from typing import Optional, Sequence from typing import Sequence, Optional
from temporalio.common import RawValue from temporalio.common import RawValue
import os import os
from datetime import datetime from datetime import datetime
import google.generativeai as genai
import anthropic
import deepseek
from dotenv import load_dotenv from dotenv import load_dotenv
from models.data_types import EnvLookupOutput, ValidationInput, ValidationResult, ToolPromptInput, EnvLookupInput from models.data_types import EnvLookupOutput, ValidationInput, ValidationResult, ToolPromptInput, EnvLookupInput
from litellm import completion
load_dotenv(override=True) load_dotenv(override=True)
print(
"Using LLM provider: "
+ os.environ.get("LLM_PROVIDER", "openai")
+ " (set LLM_PROVIDER in .env to change)"
)
if os.environ.get("LLM_PROVIDER") == "ollama":
print(
"Using Ollama (local) model: "
+ os.environ.get("OLLAMA_MODEL_NAME", "qwen2.5:14b")
)
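# Note (illustrative, not part of this diff): the banner above runs at import
# time, so any process that imports this module, such as the worker started by
# scripts/run_worker.py, prints the active provider once before any activity runs.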
class ToolActivities: class ToolActivities:
def __init__(self): def __init__(self):
"""Initialize LLM client using LiteLLM.""" """Initialize LLM clients based on environment configuration."""
self.llm_model = os.environ.get("LLM_MODEL", "openai/gpt-4") self.llm_provider = os.environ.get("LLM_PROVIDER", "openai").lower()
self.llm_key = os.environ.get("LLM_KEY") print(f"Initializing ToolActivities with LLM provider: {self.llm_provider}")
self.llm_base_url = os.environ.get("LLM_BASE_URL")
print(f"Initializing ToolActivities with LLM model: {self.llm_model}") # Initialize client variables (all set to None initially)
if self.llm_base_url: self.openai_client: Optional[OpenAI] = None
print(f"Using custom base URL: {self.llm_base_url}") self.grok_client: Optional[OpenAI] = None
self.anthropic_client: Optional[anthropic.Anthropic] = None
self.genai_configured: bool = False
self.deepseek_client: Optional[deepseek.DeepSeekAPI] = None
self.ollama_model_name: Optional[str] = None
self.ollama_initialized: bool = False
# Only initialize the client specified by LLM_PROVIDER
if self.llm_provider == "openai":
if os.environ.get("OPENAI_API_KEY"):
self.openai_client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
print("Initialized OpenAI client")
else:
print("Warning: OPENAI_API_KEY not set but LLM_PROVIDER is 'openai'")
elif self.llm_provider == "grok":
if os.environ.get("GROK_API_KEY"):
self.grok_client = OpenAI(api_key=os.environ.get("GROK_API_KEY"), base_url="https://api.x.ai/v1")
print("Initialized grok client")
else:
print("Warning: GROK_API_KEY not set but LLM_PROVIDER is 'grok'")
elif self.llm_provider == "anthropic":
if os.environ.get("ANTHROPIC_API_KEY"):
self.anthropic_client = anthropic.Anthropic(
api_key=os.environ.get("ANTHROPIC_API_KEY")
)
print("Initialized Anthropic client")
else:
print(
"Warning: ANTHROPIC_API_KEY not set but LLM_PROVIDER is 'anthropic'"
)
elif self.llm_provider == "google":
api_key = os.environ.get("GOOGLE_API_KEY")
if api_key:
genai.configure(api_key=api_key)
self.genai_configured = True
print("Configured Google Generative AI")
else:
print("Warning: GOOGLE_API_KEY not set but LLM_PROVIDER is 'google'")
elif self.llm_provider == "deepseek":
if os.environ.get("DEEPSEEK_API_KEY"):
self.deepseek_client = deepseek.DeepSeekAPI(
api_key=os.environ.get("DEEPSEEK_API_KEY")
)
print("Initialized DeepSeek client")
else:
print(
"Warning: DEEPSEEK_API_KEY not set but LLM_PROVIDER is 'deepseek'"
)
# For Ollama, we store the model name but actual initialization happens in warm_up_ollama
elif self.llm_provider == "ollama":
self.ollama_model_name = os.environ.get("OLLAMA_MODEL_NAME", "qwen2.5:14b")
print(
f"Using Ollama model: {self.ollama_model_name} (will be loaded on worker startup)"
)
else:
print(
f"Warning: Unknown LLM_PROVIDER '{self.llm_provider}', defaulting to OpenAI"
)
def warm_up_ollama(self):
"""Pre-load the Ollama model to avoid cold start latency on first request"""
if self.llm_provider != "ollama" or self.ollama_initialized:
return False # No need to warm up if not using Ollama or already warmed up
try:
print(
f"Pre-loading Ollama model '{self.ollama_model_name}' - this may take 30+ seconds..."
)
start_time = datetime.now()
# Make a simple request to load the model into memory
chat(
model=self.ollama_model_name,
messages=[
{"role": "system", "content": "You are an AI assistant"},
{
"role": "user",
"content": "Hello! This is a warm-up message to load the model.",
},
],
)
elapsed_time = (datetime.now() - start_time).total_seconds()
print(f"✅ Ollama model loaded successfully in {elapsed_time:.2f} seconds")
self.ollama_initialized = True
return True
except Exception as e:
print(f"❌ Error pre-loading Ollama model: {str(e)}")
print(
"The worker will continue, but the first actual request may experience a delay."
)
return False
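# Illustrative usage sketch (not part of this diff): warm_up_ollama is meant
# to be called once at worker startup. scripts/run_worker.py, diffed below,
# does this when LLM_PROVIDER is "ollama":
#
#     activities = ToolActivities()
#     activities.warm_up_ollama()  # blocks while the model loads
#
# prompt_llm_ollama also retries the warm-up lazily if this step was skipped
# or it failed.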
@activity.defn @activity.defn
async def agent_validatePrompt(self, validation_input: ValidationInput) -> ValidationResult: async def agent_validatePrompt(
self, validation_input: ValidationInput
) -> ValidationResult:
""" """
Validates the prompt in the context of the conversation history and agent goal. Validates the prompt in the context of the conversation history and agent goal.
Returns a ValidationResult indicating if the prompt makes sense given the context. Returns a ValidationResult indicating if the prompt makes sense given the context.
@@ -73,7 +187,7 @@ class ToolActivities:
prompt=validation_prompt, context_instructions=context_instructions prompt=validation_prompt, context_instructions=context_instructions
) )
result = await self.agent_toolPlanner(prompt_input) result = self.agent_toolPlanner(prompt_input)
return ValidationResult( return ValidationResult(
validationResult=result.get("validationResult", False), validationResult=result.get("validationResult", False),
@@ -81,7 +195,41 @@ class ToolActivities:
) )
@activity.defn @activity.defn
async def agent_toolPlanner(self, input: ToolPromptInput) -> dict: def agent_toolPlanner(self, input: ToolPromptInput) -> dict:
if self.llm_provider == "ollama":
return self.prompt_llm_ollama(input)
elif self.llm_provider == "google":
return self.prompt_llm_google(input)
elif self.llm_provider == "anthropic":
return self.prompt_llm_anthropic(input)
elif self.llm_provider == "deepseek":
return self.prompt_llm_deepseek(input)
elif self.llm_provider == "grok":
return self.prompt_llm_grok(input)
else:
return self.prompt_llm_openai(input)
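# Note (illustrative, not part of this diff): supporting another provider
# means one more branch here plus a matching prompt_llm_<name> method below.
# Every branch funnels the raw completion through sanitize_json_response and
# parse_json_response, so agent_toolPlanner always returns a dict.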
def parse_json_response(self, response_content: str) -> dict:
"""
Parses the JSON response content and returns it as a dictionary.
"""
try:
data = json.loads(response_content)
return data
except json.JSONDecodeError as e:
print(f"Invalid JSON: {e}")
raise
def prompt_llm_openai(self, input: ToolPromptInput) -> dict:
if not self.openai_client:
api_key = os.environ.get("OPENAI_API_KEY")
if not api_key:
raise ValueError(
"OPENAI_API_KEY is not set in the environment variables but LLM_PROVIDER is 'openai'"
)
self.openai_client = OpenAI(api_key=api_key)
print("Initialized OpenAI client on demand")
messages = [ messages = [
{ {
"role": "system", "role": "system",
@@ -95,53 +243,236 @@ class ToolActivities:
}, },
] ]
try: chat_completion = self.openai_client.chat.completions.create(
completion_kwargs = { model="gpt-4o", messages=messages # was gpt-4-0613
"model": self.llm_model, )
"messages": messages,
"api_key": self.llm_key
}
# Add base_url if configured
if self.llm_base_url:
completion_kwargs["base_url"] = self.llm_base_url
response = completion(**completion_kwargs) response_content = chat_completion.choices[0].message.content
activity.logger.info(f"ChatGPT response: {response_content}")
response_content = response.choices[0].message.content
activity.logger.info(f"LLM response: {response_content}") # Use the new sanitize function
response_content = self.sanitize_json_response(response_content)
return self.parse_json_response(response_content)
def prompt_llm_grok(self, input: ToolPromptInput) -> dict:
if not self.grok_client:
api_key = os.environ.get("GROK_API_KEY")
if not api_key:
raise ValueError(
"GROK_API_KEY is not set in the environment variables but LLM_PROVIDER is 'grok'"
)
self.grok_client = OpenAI(api_key=api_key, base_url="https://api.x.ai/v1")
print("Initialized grok client on demand")
messages = [
{
"role": "system",
"content": input.context_instructions
+ ". The current date is "
+ datetime.now().strftime("%B %d, %Y"),
},
{
"role": "user",
"content": input.prompt,
},
]
chat_completion = self.grok_client.chat.completions.create(
model="grok-2-1212", messages=messages
)
response_content = chat_completion.choices[0].message.content
activity.logger.info(f"Grok response: {response_content}")
# Use the new sanitize function
response_content = self.sanitize_json_response(response_content)
return self.parse_json_response(response_content)
def prompt_llm_ollama(self, input: ToolPromptInput) -> dict:
# If not yet initialized, try to do so now (this is a backup if warm_up_ollama wasn't called or failed)
if not self.ollama_initialized:
print(
"Ollama model not pre-loaded. Loading now (this may take 30+ seconds)..."
)
try:
self.warm_up_ollama()
except Exception:
# We already logged the error in warm_up_ollama, continue with the actual request
pass
model_name = self.ollama_model_name or os.environ.get(
"OLLAMA_MODEL_NAME", "qwen2.5:14b"
)
messages = [
{
"role": "system",
"content": input.context_instructions
+ ". The current date is "
+ get_current_date_human_readable(),
},
{
"role": "user",
"content": input.prompt,
},
]
try:
response: ChatResponse = chat(model=model_name, messages=messages)
print(f"Chat response: {response.message.content}")
# Use the new sanitize function # Use the new sanitize function
response_content = self.sanitize_json_response(response_content) response_content = self.sanitize_json_response(response.message.content)
return self.parse_json_response(response_content) return self.parse_json_response(response_content)
except (json.JSONDecodeError, ValueError) as e:
# Re-raise JSON-related exceptions to let Temporal retry the activity
print(f"JSON parsing error with Ollama response: {str(e)}")
raise
except Exception as e: except Exception as e:
print(f"Error in LLM completion: {str(e)}") # Log and raise other exceptions that may need retrying
print(f"Error in Ollama chat: {str(e)}")
raise raise
def parse_json_response(self, response_content: str) -> dict: def prompt_llm_google(self, input: ToolPromptInput) -> dict:
""" if not self.genai_configured:
Parses the JSON response content and returns it as a dictionary. api_key = os.environ.get("GOOGLE_API_KEY")
""" if not api_key:
try: raise ValueError(
data = json.loads(response_content) "GOOGLE_API_KEY is not set in the environment variables but LLM_PROVIDER is 'google'"
return data )
except json.JSONDecodeError as e: genai.configure(api_key=api_key)
print(f"Invalid JSON: {e}") self.genai_configured = True
raise print("Configured Google Generative AI on demand")
model = genai.GenerativeModel(
"models/gemini-1.5-flash",
system_instruction=input.context_instructions
+ ". The current date is "
+ datetime.now().strftime("%B %d, %Y"),
)
response = model.generate_content(input.prompt)
response_content = response.text
print(f"Google Gemini response: {response_content}")
# Use the new sanitize function
response_content = self.sanitize_json_response(response_content)
return self.parse_json_response(response_content)
def prompt_llm_anthropic(self, input: ToolPromptInput) -> dict:
if not self.anthropic_client:
api_key = os.environ.get("ANTHROPIC_API_KEY")
if not api_key:
raise ValueError(
"ANTHROPIC_API_KEY is not set in the environment variables but LLM_PROVIDER is 'anthropic'"
)
self.anthropic_client = anthropic.Anthropic(api_key=api_key)
print("Initialized Anthropic client on demand")
response = self.anthropic_client.messages.create(
model="claude-3-5-sonnet-20241022",
#model="claude-3-7-sonnet-20250219", # doesn't do as well
max_tokens=1024,
system=input.context_instructions
+ ". The current date is "
+ get_current_date_human_readable(),
messages=[
{
"role": "user",
"content": input.prompt,
}
],
)
response_content = response.content[0].text
print(f"Anthropic response: {response_content}")
# Use the new sanitize function
response_content = self.sanitize_json_response(response_content)
return self.parse_json_response(response_content)
def prompt_llm_deepseek(self, input: ToolPromptInput) -> dict:
if not self.deepseek_client:
api_key = os.environ.get("DEEPSEEK_API_KEY")
if not api_key:
raise ValueError(
"DEEPSEEK_API_KEY is not set in the environment variables but LLM_PROVIDER is 'deepseek'"
)
self.deepseek_client = deepseek.DeepSeekAPI(api_key=api_key)
print("Initialized DeepSeek client on demand")
messages = [
{
"role": "system",
"content": input.context_instructions
+ ". The current date is "
+ datetime.now().strftime("%B %d, %Y"),
},
{
"role": "user",
"content": input.prompt,
},
]
response = self.deepseek_client.chat_completion(prompt=messages)
response_content = response
print(f"DeepSeek response: {response_content}")
# Use the new sanitize function
response_content = self.sanitize_json_response(response_content)
return self.parse_json_response(response_content)
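# Note (illustrative, not part of this diff): unlike the OpenAI-style clients
# above, chat_completion as called here returns the message content directly
# as a string, so there is no .choices[0].message.content indexing before
# sanitizing.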
def sanitize_json_response(self, response_content: str) -> str: def sanitize_json_response(self, response_content: str) -> str:
""" """
Sanitizes the response content to ensure it's valid JSON. Extracts the JSON block from the response content as a string.
Supports:
- JSON surrounded by ```json and ```
- Raw JSON input
- JSON preceded or followed by extra text
Rejects invalid input that doesn't contain JSON.
""" """
# Remove any markdown code block markers try:
response_content = response_content.replace("```json", "").replace("```", "") start_marker = "```json"
end_marker = "```"
# Remove any leading/trailing whitespace
response_content = response_content.strip()
return response_content
json_str = None
# Case 1: JSON surrounded by markers
if start_marker in response_content and end_marker in response_content:
json_start = response_content.index(start_marker) + len(start_marker)
json_end = response_content.index(end_marker, json_start)
json_str = response_content[json_start:json_end].strip()
# Case 2: Text with valid JSON
else:
# Try to locate the JSON block by scanning for the first `{` and last `}`
json_start = response_content.find("{")
json_end = response_content.rfind("}")
if json_start != -1 and json_end != -1 and json_start < json_end:
json_str = response_content[json_start : json_end + 1].strip()
# Validate and ensure the extracted JSON is valid
if json_str:
json.loads(json_str) # This will raise an error if the JSON is invalid
return json_str
# If no valid JSON found, raise an error
raise ValueError("Response does not contain valid JSON.")
except json.JSONDecodeError:
# Invalid JSON
print(f"Invalid JSON detected in response: {response_content}")
raise ValueError("Response does not contain valid JSON.")
except Exception as e:
# Other errors
print(f"Error processing response: {str(e)}")
print(f"Full response: {response_content}")
raise
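# Illustrative examples of the three accepted shapes (matching the docstring
# above; values are made up):
#
#     sanitize_json_response('```json\n{"a": 1}\n```')    -> '{"a": 1}'
#     sanitize_json_response('{"a": 1}')                  -> '{"a": 1}'
#     sanitize_json_response('note: {"a": 1} as asked')   -> '{"a": 1}'
#
# Anything without a JSON object raises ValueError, which lets Temporal retry
# the activity.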
# get env vars for workflow
@activity.defn @activity.defn
async def get_wf_env_vars(self, input: EnvLookupInput) -> EnvLookupOutput: async def get_wf_env_vars(self, input: EnvLookupInput) -> EnvLookupOutput:
""" gets env vars for workflow as an activity result so it's deterministic """ gets env vars for workflow as an activity result so it's deterministic
@@ -167,6 +498,18 @@ class ToolActivities:
return output return output
def get_current_date_human_readable():
"""
Returns the current date in a human-readable format.
Example: Wednesday, January 1, 2025
"""
from datetime import datetime
return datetime.now().strftime("%A, %B %d, %Y")
@activity.defn(dynamic=True) @activity.defn(dynamic=True)
async def dynamic_tool_activity(args: Sequence[RawValue]) -> dict: async def dynamic_tool_activity(args: Sequence[RawValue]) -> dict:
from tools import get_handler from tools import get_handler


@@ -3,7 +3,7 @@ import NavBar from "../components/NavBar";
import ChatWindow from "../components/ChatWindow"; import ChatWindow from "../components/ChatWindow";
import { apiService } from "../services/api"; import { apiService } from "../services/api";
const POLL_INTERVAL = 600; // 0.6 seconds const POLL_INTERVAL = 500; // 0.5 seconds
const INITIAL_ERROR_STATE = { visible: false, message: '' }; const INITIAL_ERROR_STATE = { visible: false, message: '' };
const DEBOUNCE_DELAY = 300; // 300ms debounce for user input const DEBOUNCE_DELAY = 300; // 300ms debounce for user input

poetry.lock (generated, 1,572 lines changed): diff suppressed because it is too large.


@@ -31,11 +31,16 @@ temporalio = "^1.8.0"
# Standard library modules (e.g. asyncio, collections) don't need to be added # Standard library modules (e.g. asyncio, collections) don't need to be added
# since they're built-in for Python 3.8+. # since they're built-in for Python 3.8+.
litellm = "^1.70.0" ollama = "^0.4.5"
pyyaml = "^6.0.2" pyyaml = "^6.0.2"
fastapi = "^0.115.6" fastapi = "^0.115.6"
uvicorn = "^0.34.0" uvicorn = "^0.34.0"
python-dotenv = "^1.0.1" python-dotenv = "^1.0.1"
openai = "^1.59.2"
stripe = "^11.4.1"
google-generativeai = "^0.8.4"
anthropic = "0.47.0"
deepseek = "^1.0.0"
requests = "^2.32.3" requests = "^2.32.3"
pandas = "^2.2.3" pandas = "^2.2.3"
gtfs-kit = "^10.1.1" gtfs-kit = "^10.1.1"

scripts/run_ollama.py (new file, 23 lines)

@@ -0,0 +1,23 @@
from ollama import chat, ChatResponse
def main():
model_name = "mistral"
# The messages to pass to the model
messages = [
{
"role": "user",
"content": "Why is the sky blue?",
}
]
# Call ollama's chat function
response: ChatResponse = chat(model=model_name, messages=messages)
# Print the full message content
print(response.message.content)
if __name__ == "__main__":
main()
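A quick way to exercise this script, assuming the Ollama daemon is running locally: pull the model first with `ollama pull mistral`, then run `poetry run python scripts/run_ollama.py`. Note it hardcodes `mistral` rather than the `qwen2.5:14b` the worker assumes, so pull whichever model the script names.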


@@ -17,18 +17,18 @@ async def main():
load_dotenv(override=True) load_dotenv(override=True)
# Print LLM configuration info # Print LLM configuration info
llm_model = os.environ.get("LLM_MODEL", "openai/gpt-4") llm_provider = os.environ.get("LLM_PROVIDER", "openai").lower()
print(f"Worker will use LLM model: {llm_model}") print(f"Worker will use LLM provider: {llm_provider}")
# Create the client # Create the client
client = await get_temporal_client() client = await get_temporal_client()
# Initialize the activities class # Initialize the activities class once with the specified LLM provider
activities = ToolActivities() activities = ToolActivities()
print(f"ToolActivities initialized with LLM model: {llm_model}") print(f"ToolActivities initialized with LLM provider: {llm_provider}")
# If using Ollama, pre-load the model to avoid cold start latency # If using Ollama, pre-load the model to avoid cold start latency
if llm_model.startswith("ollama"): if llm_provider == "ollama":
print("\n======== OLLAMA MODEL INITIALIZATION ========") print("\n======== OLLAMA MODEL INITIALIZATION ========")
print("Ollama models need to be loaded into memory on first use.") print("Ollama models need to be loaded into memory on first use.")
print("This may take 30+ seconds depending on your hardware and model size.") print("This may take 30+ seconds depending on your hardware and model size.")
@@ -51,6 +51,8 @@ async def main():
print("Worker ready to process tasks!") print("Worker ready to process tasks!")
logging.basicConfig(level=logging.WARN) logging.basicConfig(level=logging.WARN)
# Run the worker # Run the worker
with concurrent.futures.ThreadPoolExecutor(max_workers=100) as activity_executor: with concurrent.futures.ThreadPoolExecutor(max_workers=100) as activity_executor:
worker = Worker( worker = Worker(

setup.md (105 lines changed)

@@ -14,37 +14,6 @@ If you want to show confirmations/enable the debugging UI that shows tool args,
SHOW_CONFIRM=True SHOW_CONFIRM=True
``` ```
### Quick Start with Makefile
We've provided a Makefile to simplify the setup and running of the application. Here are the main commands:
```bash
# Initial setup
make setup # Creates virtual environment and installs dependencies
make setup-venv # Creates virtual environment only
make install # Installs all dependencies
# Running the application
make run-worker # Starts the Temporal worker
make run-api # Starts the API server
make run-frontend # Starts the frontend development server
# Additional services
make run-train-api # Starts the train API server
make run-legacy-worker # Starts the legacy worker
make run-enterprise # Builds and runs the enterprise .NET worker
# Development environment setup
make setup-temporal-mac # Installs and starts Temporal server on Mac
# View all available commands
make help
```
### Manual Setup (Alternative to Makefile)
If you prefer to run commands manually, follow these steps:
### Agent Goal Configuration ### Agent Goal Configuration
The agent can be configured to pursue different goals using the `AGENT_GOAL` environment variable in your `.env` file. If unset, default is `goal_choose_agent_type`. The agent can be configured to pursue different goals using the `AGENT_GOAL` environment variable in your `.env` file. If unset, default is `goal_choose_agent_type`.
@@ -56,41 +25,54 @@ GOAL_CATEGORIES=hr,travel-flights,travel-trains,fin
See the section Goal-Specific Tool Configuration below for tool configuration for specific goals. See the section Goal-Specific Tool Configuration below for tool configuration for specific goals.
### LLM Configuration ### LLM Provider Configuration
Note: We recommend using OpenAI's GPT-4o or Claude 3.5 Sonnet for the best results. There can be significant differences in performance and capabilities between models, especially for complex tasks. The agent can use OpenAI's GPT-4o, Grok, Google Gemini, Anthropic Claude, DeepSeek-V3, or a local LLM via Ollama. Set the `LLM_PROVIDER` environment variable in your `.env` file to choose the desired provider:
The agent uses LiteLLM to interact with various LLM providers. Configure the following environment variables in your `.env` file: - `LLM_PROVIDER=openai` for OpenAI's GPT-4o
- `LLM_PROVIDER=google` for Google Gemini
- `LLM_PROVIDER=anthropic` for Anthropic Claude
- `LLM_PROVIDER=deepseek` for DeepSeek-V3
- `LLM_PROVIDER=grok` for Grok (xAI)
- `LLM_PROVIDER=ollama` for running LLMs via [Ollama](https://ollama.com) (not recommended for this use case)
- `LLM_MODEL`: The model to use (e.g., "openai/gpt-4o", "anthropic/claude-3-sonnet", "google/gemini-pro", etc.) ### Option 1: OpenAI
- `LLM_KEY`: Your API key for the selected provider
- `LLM_BASE_URL`: (Optional) Custom base URL for the LLM provider. Useful for:
- Using Ollama with a custom endpoint
- Using a proxy or custom API gateway
- Testing with different API versions
LiteLLM will automatically detect the provider based on the model name. For example: If using OpenAI, ensure you have an OpenAI key for the GPT-4o model. Set this in the `OPENAI_API_KEY` environment variable in `.env`.
- For OpenAI models: `openai/gpt-4o` or `openai/gpt-3.5-turbo`
- For Anthropic models: `anthropic/claude-3-sonnet`
- For Google models: `google/gemini-pro`
- For Ollama models: `ollama/mistral` (requires `LLM_BASE_URL` set to your Ollama server)
Example configurations: ### Option 2: Google Gemini
```bash
# For OpenAI
LLM_MODEL=openai/gpt-4o
LLM_KEY=your-api-key-here
# For Anthropic To use Google Gemini:
LLM_MODEL=anthropic/claude-3-sonnet
LLM_KEY=your-api-key-here
# For Ollama with custom URL 1. Obtain a Google API key and set it in the `GOOGLE_API_KEY` environment variable in `.env`.
LLM_MODEL=ollama/mistral 2. Set `LLM_PROVIDER=google` in your `.env` file.
LLM_BASE_URL=http://localhost:11434
```
For a complete list of supported models and providers, visit the [LiteLLM documentation](https://docs.litellm.ai/docs/providers). ### Option 3: Anthropic Claude (recommended)
I find that Claude Sonnet 3.5 performs better than the other hosted LLMs for this use case.
To use Anthropic:
1. Obtain an Anthropic API key and set it in the `ANTHROPIC_API_KEY` environment variable in `.env`.
2. Set `LLM_PROVIDER=anthropic` in your `.env` file.
### Option 4: Deepseek-V3
To use Deepseek-V3:
1. Obtain a Deepseek API key and set it in the `DEEPSEEK_API_KEY` environment variable in `.env`.
2. Set `LLM_PROVIDER=deepseek` in your `.env` file.
### Option 5: Local LLM via Ollama (not recommended)
To use a local LLM with Ollama:
1. Install [Ollama](https://ollama.com) and the [Qwen2.5 14B](https://ollama.com/library/qwen2.5) model.
- Run `ollama run <OLLAMA_MODEL_NAME>` to start the model. Note that this model is about 9GB to download.
- Example: `ollama run qwen2.5:14b`
2. Set `LLM_PROVIDER=ollama` in your `.env` file and `OLLAMA_MODEL_NAME` to the name of the model you installed.
Note: I found the other (hosted) LLMs to be MUCH more reliable for this use case. However, you can switch to Ollama if desired, and choose a suitably large model if your computer has the resources.
## Configuring Temporal Connection ## Configuring Temporal Connection
@@ -184,7 +166,7 @@ Here is configuration guidance for specific goals. Travel and financial goals ha
* Requires a Stripe key for the `create_invoice` tool. Set this in the `STRIPE_API_KEY` environment variable in .env * Requires a Stripe key for the `create_invoice` tool. Set this in the `STRIPE_API_KEY` environment variable in .env
* It's free to sign up and get a key at [Stripe](https://stripe.com/) * It's free to sign up and get a key at [Stripe](https://stripe.com/)
* Set permissions for read-write on: `Credit Notes, Invoices, Customers and Customer Sessions` * Set permissions for read-write on: `Credit Notes, Invoices, Customers and Customer Sessions`
* If you don't have a Stripe key, comment out the STRIPE_API_KEY in the .env file, and a dummy invoice will be created rather than a Stripe invoice. The function can be found in `tools/create_invoice.py` * If you're lazy, go to `tools/create_invoice.py` and replace the `create_invoice` function with the mock `create_invoice_example` that exists in the same file.
### Goal: Find a Premier League match, book train tickets to it and invoice the user for the cost (Replay 2025 Keynote) ### Goal: Find a Premier League match, book train tickets to it and invoice the user for the cost (Replay 2025 Keynote)
- `AGENT_GOAL=goal_match_train_invoice` - Focuses on Premier League match attendance with train booking and invoice generation - `AGENT_GOAL=goal_match_train_invoice` - Focuses on Premier League match attendance with train booking and invoice generation
@@ -192,7 +174,8 @@ Here is configuration guidance for specific goals. Travel and financial goals ha
- Note, there is failure built in to this demo (the train booking step) to show how the agent can handle failures and retry. See Tool Configuration below for details. - Note, there is failure built in to this demo (the train booking step) to show how the agent can handle failures and retry. See Tool Configuration below for details.
#### Configuring Agent Goal: goal_match_train_invoice #### Configuring Agent Goal: goal_match_train_invoice
NOTE: This goal was developed for an on-stage demo and has failure (and its resolution) built in to show how the agent can handle failures and retry. NOTE: This goal was developed for an on-stage demo and has failure (and its resolution) built in to show how the agent can handle failures and retry.
* Finding a match requires a key from [Football Data](https://www.football-data.org). Sign up for a free account, then see the 'My Account' page to get your API token. Set `FOOTBALL_DATA_API_KEY` to this value. If the key is omitted, the `SearchFixtures` tool automatically returns mock Premier League fixtures (3 months into the future only). * Finding a match requires a key from [Football Data](https://www.football-data.org). Sign up for a free account, then see the 'My Account' page to get your API token. Set `FOOTBALL_DATA_API_KEY` to this value.
* If you're lazy, go to `tools/search_fixtures.py` and replace the `search_fixtures` function with the mock `search_fixtures_example` that exists in the same file.
* We use a mock function to search for trains. Start the train API server to use the real API: `python thirdparty/train_api.py` * We use a mock function to search for trains. Start the train API server to use the real API: `python thirdparty/train_api.py`
* The train activity is 'enterprise' so it's written in C# and requires a .NET runtime. See the [.NET backend](#net-(enterprise)-backend) section for details on running it. * The train activity is 'enterprise' so it's written in C# and requires a .NET runtime. See the [.NET backend](#net-(enterprise)-backend) section for details on running it.
* Requires a Stripe key for the `create_invoice` tool. Set this in the `STRIPE_API_KEY` environment variable in .env * Requires a Stripe key for the `create_invoice` tool. Set this in the `STRIPE_API_KEY` environment variable in .env
@@ -269,4 +252,4 @@ For more details, check out [adding goals and tools guide](./adding-goals-and-to
[ ] `cd frontend`, `npm install`, `npx vite` <br /> [ ] `cd frontend`, `npm install`, `npx vite` <br />
[ ] Access the UI at `http://localhost:5173` <br /> [ ] Access the UI at `http://localhost:5173` <br />
And that's it! Happy AI Agent Exploring! And that's it! Happy AI Agent Exploring!


@@ -27,7 +27,7 @@ def ensure_customer_exists(
def create_invoice(args: dict) -> dict: def create_invoice(args: dict) -> dict:
"""Create and finalize a Stripe invoice.""" """Create and finalize a Stripe invoice."""
# If an API key exists in the env file, find or create customer # If an API key exists in the env file, find or create customer
if stripe.api_key is not None and stripe.api_key != "": if stripe.api_key is not None:
customer_id = ensure_customer_exists( customer_id = ensure_customer_exists(
args.get("customer_id"), args.get("email", "default@example.com") args.get("customer_id"), args.get("email", "default@example.com")
) )
@@ -69,3 +69,15 @@ def create_invoice(args: dict) -> dict:
"invoiceURL": "https://pay.example.com/invoice/12345", "invoiceURL": "https://pay.example.com/invoice/12345",
"reference": "INV-12345", "reference": "INV-12345",
} }
def create_invoice_example(args: dict) -> dict:
"""
This is an example implementation of the CreateInvoice tool
Doesn't call any external services, just returns a dummy response
"""
print("[CreateInvoice] Creating invoice with:", args)
return {
"invoiceStatus": "generated",
"invoiceURL": "https://pay.example.com/invoice/12345",
"reference": "INV-12345",
}
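As the setup guide above notes, this mock can stand in for `create_invoice` when no Stripe key is configured. A quick illustrative check (the argument keys shown are guesses; the mock only logs its args):

```python
from tools.create_invoice import create_invoice_example

# The mock returns a canned invoice, so any args dict works here.
result = create_invoice_example({"email": "fan@example.com", "amount": 120})
assert result["invoiceStatus"] == "generated"
print(result["invoiceURL"])  # https://pay.example.com/invoice/12345
```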


@@ -114,10 +114,10 @@ goal_match_train_invoice = AgentGoal(
], ],
description="The user wants to book a trip to a city in the UK around the dates of a premier league match. " description="The user wants to book a trip to a city in the UK around the dates of a premier league match. "
"Help the user find a premier league match to attend, search and book trains for that match and offers to invoice them for the cost of train tickets. " "Help the user find a premier league match to attend, search and book trains for that match and offers to invoice them for the cost of train tickets. "
"The user lives in London. Premier league fixtures may be mocked data, so don't worry about valid season dates and teams. " "The user lives in London. "
"Gather args for these tools in order, ensuring you move the user from one tool to the next: " "Gather args for these tools in order, ensuring you move the user from one tool to the next: "
"1. SearchFixtures: Search for fixtures for a team within a specified date range. The user might ask questions about the matches dates and locations to decide on where to go. " "1. SearchFixtures: Search for fixtures for a team within a specified date range. The user might ask questions about the matches dates and locations to decide on where to go. "
"2. SearchTrains: Search for trains to the city of the match. Ensure you list them for the customer to choose from " "2. SearchTrains: Search for trains to the city of the match and list them for the customer to choose from "
"3. BookTrains: Book the train tickets, used to invoice the user for the cost of the train tickets " "3. BookTrains: Book the train tickets, used to invoice the user for the cost of the train tickets "
"4. CreateInvoice: Invoices the user for the cost of train tickets, with total and details inferred from the conversation history ", "4. CreateInvoice: Invoices the user for the cost of train tickets, with total and details inferred from the conversation history ",
starter_prompt=starter_prompt_generic, starter_prompt=starter_prompt_generic,
@@ -489,6 +489,6 @@ if multi_goal_mode:
if tool.name == "ListAgents": if tool.name == "ListAgents":
list_agents_found = True list_agents_found = True
continue continue
if list_agents_found is False: if list_agents_found == False:
goal.tools.append(tool_registry.list_agents_tool) goal.tools.append(tool_registry.list_agents_tool)
continue continue


@@ -1,263 +1,64 @@
import os import os
import requests import requests
import random from datetime import datetime, timedelta
from datetime import datetime, timedelta, date
from dotenv import load_dotenv from dotenv import load_dotenv
PREMIER_LEAGUE_CLUBS_DATA = [
{"name": "Arsenal FC", "stadium": "Emirates Stadium"},
{"name": "Aston Villa FC", "stadium": "Villa Park"},
{"name": "AFC Bournemouth", "stadium": "Vitality Stadium"},
{"name": "Brentford FC", "stadium": "Gtech Community Stadium"},
{"name": "Brighton & Hove Albion FC", "stadium": "American Express Stadium"},
{"name": "Chelsea FC", "stadium": "Stamford Bridge"},
{"name": "Crystal Palace FC", "stadium": "Selhurst Park"},
{"name": "Everton FC", "stadium": "Goodison Park"},
{"name": "Fulham FC", "stadium": "Craven Cottage"},
{"name": "Ipswich Town FC", "stadium": "Portman Road"},
{"name": "Leicester City FC", "stadium": "King Power Stadium"},
{"name": "Liverpool FC", "stadium": "Anfield"},
{"name": "Manchester City FC", "stadium": "Etihad Stadium"},
{"name": "Manchester United FC", "stadium": "Old Trafford"},
{"name": "Newcastle United FC", "stadium": "St James' Park"},
{"name": "Nottingham Forest FC", "stadium": "City Ground"},
{"name": "Southampton FC", "stadium": "St Mary's Stadium"},
{"name": "Tottenham Hotspur FC", "stadium": "Tottenham Hotspur Stadium"},
{"name": "West Ham United FC", "stadium": "London Stadium"},
{"name": "Wolverhampton Wanderers FC", "stadium": "Molineux Stadium"},
]
def get_future_matches(
team_name: str,
all_clubs_data: list,
num_matches: int = 12,
date_from: date = None,
date_to: date = None,
) -> list:
"""Generate a set of future Premier League matches for ``team_name``.
This is a purely mocked schedule. It returns up to ``num_matches``
fixtures, respecting the ``date_from`` and ``date_to`` constraints.
Matches are typically on Saturdays or Sundays.
"""
matches = []
team_details = next((c for c in all_clubs_data if c["name"] == team_name), None)
if not team_details:
return []
opponents_pool = [c for c in all_clubs_data if c["name"] != team_name]
if not opponents_pool:
return []
# Determine the maximum number of matches we can generate based on opponents
# and the requested num_matches
num_actual_matches_to_generate = min(num_matches, len(opponents_pool))
if num_actual_matches_to_generate == 0:
return []
# Shuffle opponents once and pick them sequentially
random.shuffle(opponents_pool) # Shuffle in place
# Determine the initial Saturday for match week consideration
today_date = date.today()
# Default to next Saturday
current_match_week_saturday = today_date + timedelta(
days=(5 - today_date.weekday() + 7) % 7
)
# If today is Saturday and it's late evening, or if today is Sunday,
# advance to the following Saturday.
now_time = datetime.now().time()
if (
today_date.weekday() == 5
and now_time > datetime.strptime("20:00", "%H:%M").time()
) or (today_date.weekday() == 6):
current_match_week_saturday += timedelta(days=7)
# If date_from is specified, ensure our starting Saturday is not before it.
if date_from:
if current_match_week_saturday < date_from:
current_match_week_saturday = date_from
# Align current_match_week_saturday to be a Saturday on or after the potentially adjusted date
current_match_week_saturday += timedelta(
days=(5 - current_match_week_saturday.weekday() + 7) % 7
)
opponent_idx = 0
while len(matches) < num_actual_matches_to_generate and opponent_idx < len(
opponents_pool
):
# If the current week's Saturday is already past date_to, stop.
if date_to and current_match_week_saturday > date_to:
break
opponent_details = opponents_pool[opponent_idx]
is_saturday_game = random.choice([True, True, False])
actual_match_date = None
kick_off_time = ""
if is_saturday_game:
actual_match_date = current_match_week_saturday
kick_off_time = random.choice(["12:30", "15:00", "17:30"])
else: # Sunday game
actual_match_date = current_match_week_saturday + timedelta(days=1)
kick_off_time = random.choice(["14:00", "16:30"])
# Check if this specific match date is within the date_to constraint
if date_to and actual_match_date > date_to:
# If this game is too late, try the next week if possible.
# (This mainly affects Sunday games if Saturday was the last valid day)
current_match_week_saturday += timedelta(days=7)
continue # Skip adding this match, try next week.
match_datetime_gmt = (
f"{actual_match_date.strftime('%Y-%m-%d')} {kick_off_time} GMT"
)
is_home_match = random.choice([True, False])
if is_home_match:
team1_name = team_details["name"]
team2_name = opponent_details["name"]
stadium_name = team_details["stadium"]
else:
team1_name = opponent_details["name"]
team2_name = team_details["name"]
stadium_name = opponent_details["stadium"]
matches.append(
{
"team1": team1_name,
"team2": team2_name,
"stadium": stadium_name,
"datetime_gmt": match_datetime_gmt,
}
)
opponent_idx += 1
current_match_week_saturday += timedelta(
days=7
) # Advance to next week's Saturday
return matches
BASE_URL = "https://api.football-data.org/v4" BASE_URL = "https://api.football-data.org/v4"
def search_fixtures(args: dict) -> dict: def search_fixtures(args: dict) -> dict:
load_dotenv(override=True) load_dotenv(override=True)
api_key = os.getenv("FOOTBALL_DATA_API_KEY") api_key = os.getenv("FOOTBALL_DATA_API_KEY", "YOUR_DEFAULT_KEY")
team_name = args.get("team") team_name = args.get("team")
date_from_str = args.get("date_from") date_from_str = args.get("date_from")
date_to_str = args.get("date_to") date_to_str = args.get("date_to")
if not team_name:
return {"error": "Team name is required."}
parsed_date_from = None
if date_from_str:
try:
parsed_date_from = datetime.strptime(date_from_str, "%Y-%m-%d").date()
except ValueError:
return {
"error": f"Invalid date_from: '{date_from_str}'. Expected format YYYY-MM-DD."
}
parsed_date_to = None
if date_to_str:
try:
parsed_date_to = datetime.strptime(date_to_str, "%Y-%m-%d").date()
except ValueError:
return {
"error": f"Invalid date_to: '{date_to_str}'. Expected format YYYY-MM-DD."
}
if parsed_date_from and parsed_date_to and parsed_date_from > parsed_date_to:
return {"error": "date_from cannot be after date_to."}
# If no API key, fall back to mocked data
if not api_key:
# Use the parsed date objects (which can be None)
fixtures = get_future_matches(
team_name,
PREMIER_LEAGUE_CLUBS_DATA,
date_from=parsed_date_from,
date_to=parsed_date_to,
# num_matches can be passed explicitly if needed, otherwise defaults to 12
)
if not fixtures:
# Check if the team name itself was invalid, as get_future_matches returns [] for that too
team_details_check = next(
(c for c in PREMIER_LEAGUE_CLUBS_DATA if c["name"] == team_name), None
)
if not team_details_check:
return {"error": f"Team '{team_name}' not found in mocked data."}
# If team is valid, an empty fixtures list means no matches fit the criteria (e.g., date range)
return {"fixtures": fixtures}
# API Key is present, proceed with API logic
# The API requires both date_from and date_to
if not parsed_date_from or not parsed_date_to:
return {
"error": "Both date_from and date_to (YYYY-MM-DD) are required for API search."
}
headers = {"X-Auth-Token": api_key} headers = {"X-Auth-Token": api_key}
# For API calls, team name matching might be case-insensitive or require specific handling team_name = team_name.lower()
# The existing logic uses team_name.lower() for the API search path later.
try:
date_from = datetime.strptime(date_from_str, "%Y-%m-%d")
date_to = datetime.strptime(date_to_str, "%Y-%m-%d")
except ValueError:
return {
"error": "Invalid date provided. Expected format YYYY-MM-DD for both date_from and date_to."
}
# Fetch team ID # Fetch team ID
teams_response = requests.get(f"{BASE_URL}/competitions/PL/teams", headers=headers) teams_response = requests.get(f"{BASE_URL}/competitions/PL/teams", headers=headers)
if teams_response.status_code != 200: if teams_response.status_code != 200:
return { return {"error": "Failed to fetch teams data."}
"error": f"Failed to fetch teams data from API (status {teams_response.status_code})."
}
teams_data = teams_response.json() teams_data = teams_response.json()
team_id = None team_id = None
# Using lower() for comparison, assuming API team names might have varied casing for team in teams_data["teams"]:
# or the input team_name might not be exact. if team_name in team["name"].lower():
# The `ToolDefinition` lists exact names, so direct match might also be an option. team_id = team["id"]
for team_api_data in teams_data.get("teams", []):
if team_name.lower() in team_api_data.get("name", "").lower():
team_id = team_api_data["id"]
break break
if not team_id: if not team_id:
return {"error": f"Team '{team_name}' not found via API."} return {"error": "Team not found."}
date_from_formatted = parsed_date_from.strftime("%Y-%m-%d") date_from_formatted = date_from.strftime("%Y-%m-%d")
date_to_formatted = parsed_date_to.strftime("%Y-%m-%d") date_to_formatted = date_to.strftime("%Y-%m-%d")
fixtures_url = f"{BASE_URL}/teams/{team_id}/matches?dateFrom={date_from_formatted}&dateTo={date_to_formatted}" fixtures_url = f"{BASE_URL}/teams/{team_id}/matches?dateFrom={date_from_formatted}&dateTo={date_to_formatted}"
# print(fixtures_url) # Keep for debugging if necessary print(fixtures_url)
fixtures_response = requests.get(fixtures_url, headers=headers) fixtures_response = requests.get(fixtures_url, headers=headers)
if fixtures_response.status_code != 200: if fixtures_response.status_code != 200:
return { return {"error": "Failed to fetch fixtures data."}
"error": f"Failed to fetch fixtures data from API (status {fixtures_response.status_code})."
}
fixtures_data = fixtures_response.json() fixtures_data = fixtures_response.json()
matching_fixtures = [] matching_fixtures = []
for match in fixtures_data.get("matches", []): for match in fixtures_data.get("matches", []):
# Ensure match datetime parsing is robust match_datetime = datetime.strptime(match["utcDate"], "%Y-%m-%dT%H:%M:%SZ")
try: if match["competition"]["code"] == "PL":
match_datetime_utc = datetime.strptime(
match["utcDate"], "%Y-%m-%dT%H:%M:%SZ"
)
except (ValueError, TypeError):
# Skip malformed match entries or log an error
continue
if match.get("competition", {}).get("code") == "PL":
matching_fixtures.append( matching_fixtures.append(
{ {
"date": match_datetime_utc.strftime("%Y-%m-%d"), "date": match_datetime.strftime("%Y-%m-%d"),
"homeTeam": match.get("homeTeam", {}).get("name", "N/A"), "homeTeam": match["homeTeam"]["name"],
"awayTeam": match.get("awayTeam", {}).get("name", "N/A"), "awayTeam": match["awayTeam"]["name"],
} }
) )
@@ -281,69 +82,34 @@ def search_fixtures_example(args: dict) -> dict:
# Validate dates # Validate dates
try: try:
# Ensure date strings are not None before parsing date_from = datetime.strptime(date_from_str, "%Y-%m-%d")
if date_from_str is None or date_to_str is None: date_to = datetime.strptime(date_to_str, "%Y-%m-%d")
raise ValueError("Date strings cannot be None")
date_from_obj = datetime.strptime(date_from_str, "%Y-%m-%d")
date_to_obj = datetime.strptime(date_to_str, "%Y-%m-%d")
except ValueError: except ValueError:
return { return {
"error": "Invalid date provided. Expected format YYYY-MM-DD for both date_from and date_to." "error": "Invalid date provided. Expected format YYYY-MM-DD for both date_from and date_to."
} }
# Calculate 3 reasonable fixture dates within the given range # Calculate 3 reasonable fixture dates within the given range
date_range = (date_to_obj - date_from_obj).days date_range = (date_to - date_from).days
if date_range < 0: # date_from is after date_to
return {"fixtures": []} # No fixtures possible
fixture_dates_timestamps = []
if date_range < 21: if date_range < 21:
# If range is less than 3 weeks, use evenly spaced fixtures if possible # If range is less than 3 weeks, use evenly spaced fixtures
if date_range >= 2: # Need at least some gap for 3 fixtures fixture_dates = [
fixture_dates_timestamps = [ date_from + timedelta(days=max(1, date_range // 3)),
date_from_obj date_from + timedelta(days=max(2, date_range * 2 // 3)),
+ timedelta(days=max(0, date_range // 4)), # Closer to start date_to - timedelta(days=min(2, date_range // 4)),
date_from_obj + timedelta(days=max(1, date_range // 2)), # Middle ]
date_to_obj - timedelta(days=max(0, date_range // 4)), # Closer to end
]
elif date_range == 1: # Only two days
fixture_dates_timestamps = [date_from_obj, date_to_obj]
elif date_range == 0: # Only one day
fixture_dates_timestamps = [date_from_obj]
else: # date_range is negative, handled above, or 0 (single day)
fixture_dates_timestamps = [date_from_obj] if date_range == 0 else []
else: else:
# Otherwise space them out by weeks, ensuring they are within the bounds # Otherwise space them out by weeks
d1 = date_from_obj + timedelta(days=7) fixture_dates = [
d2 = date_from_obj + timedelta(days=14) date_from + timedelta(days=7),
d3 = date_to_obj - timedelta(days=7) # Potential third game from the end date_from + timedelta(days=14),
date_to - timedelta(days=7),
]
fixture_dates_timestamps.append(d1) # Ensure we only have 3 dates
if d2 <= date_to_obj and d2 > d1: # ensure d2 is valid and distinct fixture_dates = fixture_dates[:3]
fixture_dates_timestamps.append(d2)
if (
d3 >= date_from_obj and d3 > d2 and d3 <= date_to_obj
): # ensure d3 is valid and distinct
fixture_dates_timestamps.append(d3)
elif (
d3 < date_from_obj and len(fixture_dates_timestamps) < 3
): # if d3 is too early, try using date_to_obj itself if distinct
if date_to_obj not in fixture_dates_timestamps:
fixture_dates_timestamps.append(date_to_obj)
# Ensure unique dates and sort, then take up to 3.
fixture_dates_timestamps = sorted(
list(
set(
f_date
for f_date in fixture_dates_timestamps
if date_from_obj <= f_date <= date_to_obj
)
)
)
fixture_dates_final = fixture_dates_timestamps[:3]
# Expanded pool of opponent teams to avoid team playing against itself
all_opponents = [ all_opponents = [
"Manchester United FC", "Manchester United FC",
"Leicester City FC", "Leicester City FC",
@@ -354,35 +120,35 @@ def search_fixtures_example(args: dict) -> dict:
"Tottenham Hotspur FC", "Tottenham Hotspur FC",
"West Ham United FC", "West Ham United FC",
"Everton FC", "Everton FC",
"Generic Opponent A",
"Generic Opponent B",
"Generic Opponent C", # Fallbacks
] ]
# Select opponents that aren't the same as the requested team
available_opponents = [ available_opponents = [
team for team in all_opponents if team.lower() != team_name.lower() team for team in all_opponents if team.lower() != team_name.lower()
] ]
# Ensure we have enough opponents for the number of fixtures we'll generate # Ensure we have at least 3 opponents
if len(available_opponents) < len(fixture_dates_final): if len(available_opponents) < 3:
needed = len(fixture_dates_final) - len(available_opponents) # Add generic opponents if needed
for i in range(needed): additional_teams = [f"Opponent {i} FC" for i in range(1, 4)]
available_opponents.append(f"Placeholder Opponent {i+1}") available_opponents.extend(additional_teams)
opponents = available_opponents[: len(fixture_dates_final)] # Take only the first 3 opponents
opponents = available_opponents[:3]
# Generate fixtures - always exactly 3
fixtures = [] fixtures = []
for i, fixture_date_obj in enumerate(fixture_dates_final): for i, fixture_date in enumerate(fixture_dates):
if i >= len(opponents): # Should not happen with the logic above date_str = fixture_date.strftime("%Y-%m-%d")
break
date_str = fixture_date_obj.strftime("%Y-%m-%d") # Alternate between home and away games
if i % 2 == 0: # Home game if i % 2 == 0:
fixtures.append(
{"date": date_str, "homeTeam": team_name, "awayTeam": opponents[i]}
)
else: # Away game
fixtures.append( fixtures.append(
{"date": date_str, "homeTeam": opponents[i], "awayTeam": team_name} {"date": date_str, "homeTeam": opponents[i], "awayTeam": team_name}
) )
else:
fixtures.append(
{"date": date_str, "homeTeam": team_name, "awayTeam": opponents[i]}
)
return {"fixtures": fixtures} return {"fixtures": fixtures}


@@ -90,7 +90,7 @@ search_flights_tool = ToolDefinition(
search_trains_tool = ToolDefinition( search_trains_tool = ToolDefinition(
name="SearchTrains", name="SearchTrains",
description="Search for trains between two English cities. Returns a list of train information for the user to choose from. Present the list to the user.", description="Search for trains between two English cities. Returns a list of train information for the user to choose from.",
arguments=[ arguments=[
ToolArgument( ToolArgument(
name="origin", name="origin",
@@ -156,7 +156,7 @@ create_invoice_tool = ToolDefinition(
search_fixtures_tool = ToolDefinition( search_fixtures_tool = ToolDefinition(
name="SearchFixtures", name="SearchFixtures",
description="Search for upcoming fixtures for a given team within a date range inferred from the user's description. Ignore valid premier league dates. Valid teams this season are Arsenal FC, Aston Villa FC, AFC Bournemouth, Brentford FC, Brighton & Hove Albion FC, Chelsea FC, Crystal Palace FC, Everton FC, Fulham FC, Ipswich Town FC, Leicester City FC, Liverpool FC, Manchester City FC, Manchester United FC, Newcastle United FC, Nottingham Forest FC, Southampton FC, Tottenham Hotspur FC, West Ham United FC, Wolverhampton Wanderers FC", description="Search for upcoming fixtures for a given team within a date range inferred from the user's description. Valid teams this 24/25 season are Arsenal FC, Aston Villa FC, AFC Bournemouth, Brentford FC, Brighton & Hove Albion FC, Chelsea FC, Crystal Palace FC, Everton FC, Fulham FC, Ipswich Town FC, Leicester City FC, Liverpool FC, Manchester City FC, Manchester United FC, Newcastle United FC, Nottingham Forest FC, Southampton FC, Tottenham Hotspur FC, West Ham United FC, Wolverhampton Wanderers FC",
arguments=[ arguments=[
ToolArgument( ToolArgument(
name="team", name="team",