Mirror of https://github.com/temporal-community/temporal-ai-agent.git, synced 2026-03-15 05:58:08 +01:00

Jonymusky litellm integration (#36)

* feat: LiteLLM integration
* update
* chore: make start-dev feedback from: https://github.com/temporal-community/temporal-ai-agent/issues/31
* bump dependencies
* clean up setup.md
* setup update

Co-authored-by: Jonathan Muszkat <muskys@gmail.com>

Commit 7bb6688797, parent 847f4bbaef, committed via GitHub.

.env.example (19 lines changed)
@@ -5,23 +5,8 @@ FOOTBALL_DATA_API_KEY=....
STRIPE_API_KEY=sk_test_51J...
LLM_PROVIDER=openai # default
OPENAI_API_KEY=sk-proj-...
# or
#LLM_PROVIDER=grok
#GROK_API_KEY=xai-your-grok-api-key
# or
# LLM_PROVIDER=ollama
# OLLAMA_MODEL_NAME=qwen2.5:14b
# or
# LLM_PROVIDER=google
# GOOGLE_API_KEY=your-google-api-key
# or
# LLM_PROVIDER=anthropic
# ANTHROPIC_API_KEY=your-anthropic-api-key
# or
# LLM_PROVIDER=deepseek
# DEEPSEEK_API_KEY=your-deepseek-api-key
LLM_MODEL=openai/gpt-4o # default
LLM_KEY=sk-proj-...

# uncomment and unset these environment variables to connect to the local dev server
Makefile (new file, 62 lines)
@@ -0,0 +1,62 @@
.PHONY: setup install run-worker run-api run-frontend run-train-api run-legacy-worker run-enterprise setup-venv check-python run-dev

# Setup commands
setup: check-python setup-venv install

check-python:
	@which python3 >/dev/null 2>&1 || (echo "Python 3 is required. Please install it first." && exit 1)

setup-venv:
	python3 -m venv venv
	@echo "Virtual environment created. Don't forget to activate it with 'source venv/bin/activate'"

install:
	poetry install
	cd frontend && npm install

# Run commands
run-worker:
	poetry run python scripts/run_worker.py

run-api:
	poetry run uvicorn api.main:app --reload

run-frontend:
	cd frontend && npx vite

run-train-api:
	poetry run python thirdparty/train_api.py

run-legacy-worker:
	poetry run python scripts/run_legacy_worker.py

run-enterprise:
	cd enterprise && dotnet build && dotnet run

# Development environment setup
setup-temporal-mac:
	brew install temporal
	temporal server start-dev

# Run all development services
run-dev:
	@echo "Starting all development services..."
	@make run-worker & \
	make run-api & \
	make run-frontend & \
	wait

# Help command
help:
	@echo "Available commands:"
	@echo "  make setup              - Create virtual environment and install dependencies"
	@echo "  make setup-venv         - Create virtual environment only"
	@echo "  make install            - Install all dependencies"
	@echo "  make run-worker         - Start the Temporal worker"
	@echo "  make run-api            - Start the API server"
	@echo "  make run-frontend       - Start the frontend development server"
	@echo "  make run-train-api      - Start the train API server"
	@echo "  make run-legacy-worker  - Start the legacy worker"
	@echo "  make run-enterprise     - Build and run the enterprise .NET worker"
	@echo "  make setup-temporal-mac - Install and start Temporal server on Mac"
	@echo "  make run-dev            - Start all development services (worker, API, frontend) in parallel"
README.md (14 lines changed)
@@ -2,7 +2,13 @@

This demo shows a multi-turn conversation with an AI agent running inside a Temporal workflow. The purpose of the agent is to collect information towards a goal, running tools along the way. There's a simple DSL input for collecting information (currently set up to use mock functions to search for public events, search for flights around those events, then create a test Stripe invoice for the trip).

The AI will respond with clarifications and ask for any missing information to that goal. You can configure it to use [ChatGPT 4o](https://openai.com/index/hello-gpt-4o/), [Anthropic Claude](https://www.anthropic.com/claude), [Google Gemini](https://gemini.google.com), [Deepseek-V3](https://www.deepseek.com/), [Grok](https://docs.x.ai/docs/overview) or a local LLM of your choice using [Ollama](https://ollama.com).
The AI will respond with clarifications and ask for any missing information to that goal. You can configure it to use any LLM supported by [LiteLLM](https://docs.litellm.ai/docs/providers), including:
- OpenAI models (GPT-4, GPT-3.5)
- Anthropic Claude models
- Google Gemini models
- Deepseek models
- Ollama models (local)
- And many more!

It's really helpful to [watch the demo (5 minute YouTube video)](https://www.youtube.com/watch?v=GEXllEH2XiQ) to understand how interaction works.

@@ -28,7 +34,11 @@ These are the key elements of an agentic framework:
For a deeper dive into this, check out the [architecture guide](./architecture.md).

## Setup and Configuration
See [the Setup guide](./setup.md).
See [the Setup guide](./setup.md) for detailed instructions. The basic configuration requires just two environment variables:
```bash
LLM_MODEL=openai/gpt-4o # or any other model supported by LiteLLM
LLM_KEY=your-api-key-here
```

## Customizing Interaction & Tools
See [the guide to adding goals and tools](./adding-goals-and-tools.md).
@@ -1,142 +1,28 @@
import inspect
from temporalio import activity
from ollama import chat, ChatResponse
from openai import OpenAI
import json
from typing import Sequence, Optional
from typing import Optional, Sequence
from temporalio.common import RawValue
import os
from datetime import datetime
import google.generativeai as genai
import anthropic
import deepseek
from dotenv import load_dotenv
from models.data_types import EnvLookupOutput, ValidationInput, ValidationResult, ToolPromptInput, EnvLookupInput
from litellm import completion

load_dotenv(override=True)
print(
    "Using LLM provider: "
    + os.environ.get("LLM_PROVIDER", "openai")
    + " (set LLM_PROVIDER in .env to change)"
)

if os.environ.get("LLM_PROVIDER") == "ollama":
    print(
        "Using Ollama (local) model: "
        + os.environ.get("OLLAMA_MODEL_NAME", "qwen2.5:14b")
    )


class ToolActivities:
    def __init__(self):
        """Initialize LLM clients based on environment configuration."""
        self.llm_provider = os.environ.get("LLM_PROVIDER", "openai").lower()
        print(f"Initializing ToolActivities with LLM provider: {self.llm_provider}")

        # Initialize client variables (all set to None initially)
        self.openai_client: Optional[OpenAI] = None
        self.grok_client: Optional[OpenAI] = None
        self.anthropic_client: Optional[anthropic.Anthropic] = None
        self.genai_configured: bool = False
        self.deepseek_client: Optional[deepseek.DeepSeekAPI] = None
        self.ollama_model_name: Optional[str] = None
        self.ollama_initialized: bool = False

        # Only initialize the client specified by LLM_PROVIDER
        if self.llm_provider == "openai":
            if os.environ.get("OPENAI_API_KEY"):
                self.openai_client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
                print("Initialized OpenAI client")
            else:
                print("Warning: OPENAI_API_KEY not set but LLM_PROVIDER is 'openai'")

        elif self.llm_provider == "grok":
            if os.environ.get("GROK_API_KEY"):
                self.grok_client = OpenAI(api_key=os.environ.get("GROK_API_KEY"), base_url="https://api.x.ai/v1")
                print("Initialized grok client")
            else:
                print("Warning: GROK_API_KEY not set but LLM_PROVIDER is 'grok'")

        elif self.llm_provider == "anthropic":
            if os.environ.get("ANTHROPIC_API_KEY"):
                self.anthropic_client = anthropic.Anthropic(
                    api_key=os.environ.get("ANTHROPIC_API_KEY")
                )
                print("Initialized Anthropic client")
            else:
                print(
                    "Warning: ANTHROPIC_API_KEY not set but LLM_PROVIDER is 'anthropic'"
                )

        elif self.llm_provider == "google":
            api_key = os.environ.get("GOOGLE_API_KEY")
            if api_key:
                genai.configure(api_key=api_key)
                self.genai_configured = True
                print("Configured Google Generative AI")
            else:
                print("Warning: GOOGLE_API_KEY not set but LLM_PROVIDER is 'google'")

        elif self.llm_provider == "deepseek":
            if os.environ.get("DEEPSEEK_API_KEY"):
                self.deepseek_client = deepseek.DeepSeekAPI(
                    api_key=os.environ.get("DEEPSEEK_API_KEY")
                )
                print("Initialized DeepSeek client")
            else:
                print(
                    "Warning: DEEPSEEK_API_KEY not set but LLM_PROVIDER is 'deepseek'"
                )

        # For Ollama, we store the model name but actual initialization happens in warm_up_ollama
        elif self.llm_provider == "ollama":
            self.ollama_model_name = os.environ.get("OLLAMA_MODEL_NAME", "qwen2.5:14b")
            print(
                f"Using Ollama model: {self.ollama_model_name} (will be loaded on worker startup)"
            )
        else:
            print(
                f"Warning: Unknown LLM_PROVIDER '{self.llm_provider}', defaulting to OpenAI"
            )

    def warm_up_ollama(self):
        """Pre-load the Ollama model to avoid cold start latency on first request"""
        if self.llm_provider != "ollama" or self.ollama_initialized:
            return False  # No need to warm up if not using Ollama or already warmed up

        try:
            print(
                f"Pre-loading Ollama model '{self.ollama_model_name}' - this may take 30+ seconds..."
            )
            start_time = datetime.now()

            # Make a simple request to load the model into memory
            chat(
                model=self.ollama_model_name,
                messages=[
                    {"role": "system", "content": "You are an AI assistant"},
                    {
                        "role": "user",
                        "content": "Hello! This is a warm-up message to load the model.",
                    },
                ],
            )

            elapsed_time = (datetime.now() - start_time).total_seconds()
            print(f"✅ Ollama model loaded successfully in {elapsed_time:.2f} seconds")
            self.ollama_initialized = True
            return True
        except Exception as e:
            print(f"❌ Error pre-loading Ollama model: {str(e)}")
            print(
                "The worker will continue, but the first actual request may experience a delay."
            )
            return False
"""Initialize LLM client using LiteLLM."""
|
||||
self.llm_model = os.environ.get("LLM_MODEL", "openai/gpt-4")
|
||||
self.llm_key = os.environ.get("LLM_KEY")
|
||||
self.llm_base_url = os.environ.get("LLM_BASE_URL")
|
||||
print(f"Initializing ToolActivities with LLM model: {self.llm_model}")
|
||||
if self.llm_base_url:
|
||||
print(f"Using custom base URL: {self.llm_base_url}")
|
||||
|
||||
@activity.defn
|
||||
async def agent_validatePrompt(
|
||||
self, validation_input: ValidationInput
|
||||
) -> ValidationResult:
|
||||
async def agent_validatePrompt(self, validation_input: ValidationInput) -> ValidationResult:
|
||||
"""
|
||||
Validates the prompt in the context of the conversation history and agent goal.
|
||||
Returns a ValidationResult indicating if the prompt makes sense given the context.
|
||||
@@ -187,7 +73,7 @@ class ToolActivities:
|
||||
prompt=validation_prompt, context_instructions=context_instructions
|
||||
)
|
||||
|
||||
result = self.agent_toolPlanner(prompt_input)
|
||||
result = await self.agent_toolPlanner(prompt_input)
|
||||
|
||||
return ValidationResult(
|
||||
validationResult=result.get("validationResult", False),
|
||||
@@ -195,19 +81,43 @@ class ToolActivities:
|
||||
)
|
||||
|
||||
@activity.defn
|
||||
def agent_toolPlanner(self, input: ToolPromptInput) -> dict:
|
||||
if self.llm_provider == "ollama":
|
||||
return self.prompt_llm_ollama(input)
|
||||
elif self.llm_provider == "google":
|
||||
return self.prompt_llm_google(input)
|
||||
elif self.llm_provider == "anthropic":
|
||||
return self.prompt_llm_anthropic(input)
|
||||
elif self.llm_provider == "deepseek":
|
||||
return self.prompt_llm_deepseek(input)
|
||||
elif self.llm_provider == "grok":
|
||||
return self.prompt_llm_grok(input)
|
||||
else:
|
||||
return self.prompt_llm_openai(input)
|
||||
async def agent_toolPlanner(self, input: ToolPromptInput) -> dict:
|
||||
messages = [
|
||||
{
|
||||
"role": "system",
|
||||
"content": input.context_instructions
|
||||
+ ". The current date is "
|
||||
+ datetime.now().strftime("%B %d, %Y"),
|
||||
},
|
||||
{
|
||||
"role": "user",
|
||||
"content": input.prompt,
|
||||
},
|
||||
]
|
||||
|
||||
try:
|
||||
completion_kwargs = {
|
||||
"model": self.llm_model,
|
||||
"messages": messages,
|
||||
"api_key": self.llm_key
|
||||
}
|
||||
|
||||
# Add base_url if configured
|
||||
if self.llm_base_url:
|
||||
completion_kwargs["base_url"] = self.llm_base_url
|
||||
|
||||
response = completion(**completion_kwargs)
|
||||
|
||||
response_content = response.choices[0].message.content
|
||||
activity.logger.info(f"LLM response: {response_content}")
|
||||
|
||||
# Use the new sanitize function
|
||||
response_content = self.sanitize_json_response(response_content)
|
||||
|
||||
return self.parse_json_response(response_content)
|
||||
except Exception as e:
|
||||
print(f"Error in LLM completion: {str(e)}")
|
||||
raise
|
||||
|
||||
def parse_json_response(self, response_content: str) -> dict:
|
||||
"""
|
||||
@@ -220,259 +130,18 @@ class ToolActivities:
|
||||
print(f"Invalid JSON: {e}")
|
||||
raise
|
||||
    def prompt_llm_openai(self, input: ToolPromptInput) -> dict:
        if not self.openai_client:
            api_key = os.environ.get("OPENAI_API_KEY")
            if not api_key:
                raise ValueError(
                    "OPENAI_API_KEY is not set in the environment variables but LLM_PROVIDER is 'openai'"
                )
            self.openai_client = OpenAI(api_key=api_key)
            print("Initialized OpenAI client on demand")

        messages = [
            {
                "role": "system",
                "content": input.context_instructions
                + ". The current date is "
                + datetime.now().strftime("%B %d, %Y"),
            },
            {
                "role": "user",
                "content": input.prompt,
            },
        ]

        chat_completion = self.openai_client.chat.completions.create(
            model="gpt-4o", messages=messages  # was gpt-4-0613
        )

        response_content = chat_completion.choices[0].message.content
        activity.logger.info(f"ChatGPT response: {response_content}")

        # Use the new sanitize function
        response_content = self.sanitize_json_response(response_content)

        return self.parse_json_response(response_content)

    def prompt_llm_grok(self, input: ToolPromptInput) -> dict:
        if not self.grok_client:
            api_key = os.environ.get("GROK_API_KEY")
            if not api_key:
                raise ValueError(
                    "GROK_API_KEY is not set in the environment variables but LLM_PROVIDER is 'grok'"
                )
            self.grok_client = OpenAI(api_key=api_key, base_url="https://api.x.ai/v1")
            print("Initialized grok client on demand")

        messages = [
            {
                "role": "system",
                "content": input.context_instructions
                + ". The current date is "
                + datetime.now().strftime("%B %d, %Y"),
            },
            {
                "role": "user",
                "content": input.prompt,
            },
        ]

        chat_completion = self.grok_client.chat.completions.create(
            model="grok-2-1212", messages=messages
        )

        response_content = chat_completion.choices[0].message.content
        activity.logger.info(f"Grok response: {response_content}")

        # Use the new sanitize function
        response_content = self.sanitize_json_response(response_content)

        return self.parse_json_response(response_content)
    def prompt_llm_ollama(self, input: ToolPromptInput) -> dict:
        # If not yet initialized, try to do so now (this is a backup if warm_up_ollama wasn't called or failed)
        if not self.ollama_initialized:
            print(
                "Ollama model not pre-loaded. Loading now (this may take 30+ seconds)..."
            )
            try:
                self.warm_up_ollama()
            except Exception:
                # We already logged the error in warm_up_ollama, continue with the actual request
                pass

        model_name = self.ollama_model_name or os.environ.get(
            "OLLAMA_MODEL_NAME", "qwen2.5:14b"
        )
        messages = [
            {
                "role": "system",
                "content": input.context_instructions
                + ". The current date is "
                + get_current_date_human_readable(),
            },
            {
                "role": "user",
                "content": input.prompt,
            },
        ]

        try:
            response: ChatResponse = chat(model=model_name, messages=messages)
            print(f"Chat response: {response.message.content}")

            # Use the new sanitize function
            response_content = self.sanitize_json_response(response.message.content)
            return self.parse_json_response(response_content)
        except (json.JSONDecodeError, ValueError) as e:
            # Re-raise JSON-related exceptions to let Temporal retry the activity
            print(f"JSON parsing error with Ollama response: {str(e)}")
            raise
        except Exception as e:
            # Log and raise other exceptions that may need retrying
            print(f"Error in Ollama chat: {str(e)}")
            raise

    def prompt_llm_google(self, input: ToolPromptInput) -> dict:
        if not self.genai_configured:
            api_key = os.environ.get("GOOGLE_API_KEY")
            if not api_key:
                raise ValueError(
                    "GOOGLE_API_KEY is not set in the environment variables but LLM_PROVIDER is 'google'"
                )
            genai.configure(api_key=api_key)
            self.genai_configured = True
            print("Configured Google Generative AI on demand")

        model = genai.GenerativeModel(
            "models/gemini-1.5-flash",
            system_instruction=input.context_instructions
            + ". The current date is "
            + datetime.now().strftime("%B %d, %Y"),
        )
        response = model.generate_content(input.prompt)
        response_content = response.text
        print(f"Google Gemini response: {response_content}")

        # Use the new sanitize function
        response_content = self.sanitize_json_response(response_content)

        return self.parse_json_response(response_content)

    def prompt_llm_anthropic(self, input: ToolPromptInput) -> dict:
        if not self.anthropic_client:
            api_key = os.environ.get("ANTHROPIC_API_KEY")
            if not api_key:
                raise ValueError(
                    "ANTHROPIC_API_KEY is not set in the environment variables but LLM_PROVIDER is 'anthropic'"
                )
            self.anthropic_client = anthropic.Anthropic(api_key=api_key)
            print("Initialized Anthropic client on demand")

        response = self.anthropic_client.messages.create(
            model="claude-3-5-sonnet-20241022",
            # model="claude-3-7-sonnet-20250219",  # doesn't do as well
            max_tokens=1024,
            system=input.context_instructions
            + ". The current date is "
            + get_current_date_human_readable(),
            messages=[
                {
                    "role": "user",
                    "content": input.prompt,
                }
            ],
        )

        response_content = response.content[0].text
        print(f"Anthropic response: {response_content}")

        # Use the new sanitize function
        response_content = self.sanitize_json_response(response_content)

        return self.parse_json_response(response_content)

    def prompt_llm_deepseek(self, input: ToolPromptInput) -> dict:
        if not self.deepseek_client:
            api_key = os.environ.get("DEEPSEEK_API_KEY")
            if not api_key:
                raise ValueError(
                    "DEEPSEEK_API_KEY is not set in the environment variables but LLM_PROVIDER is 'deepseek'"
                )
            self.deepseek_client = deepseek.DeepSeekAPI(api_key=api_key)
            print("Initialized DeepSeek client on demand")

        messages = [
            {
                "role": "system",
                "content": input.context_instructions
                + ". The current date is "
                + datetime.now().strftime("%B %d, %Y"),
            },
            {
                "role": "user",
                "content": input.prompt,
            },
        ]

        response = self.deepseek_client.chat_completion(prompt=messages)
        response_content = response
        print(f"DeepSeek response: {response_content}")

        # Use the new sanitize function
        response_content = self.sanitize_json_response(response_content)

        return self.parse_json_response(response_content)
    def sanitize_json_response(self, response_content: str) -> str:
        """
        Extracts the JSON block from the response content as a string.
        Supports:
        - JSON surrounded by ```json and ```
        - Raw JSON input
        - JSON preceded or followed by extra text
        Rejects invalid input that doesn't contain JSON.
        Sanitizes the response content to ensure it's valid JSON.
        """
        try:
            start_marker = "```json"
            end_marker = "```"
            # Remove any markdown code block markers
            response_content = response_content.replace("```json", "").replace("```", "")

            # Remove any leading/trailing whitespace
            response_content = response_content.strip()

            return response_content

            json_str = None

            # Case 1: JSON surrounded by markers
            if start_marker in response_content and end_marker in response_content:
                json_start = response_content.index(start_marker) + len(start_marker)
                json_end = response_content.index(end_marker, json_start)
                json_str = response_content[json_start:json_end].strip()

            # Case 2: Text with valid JSON
            else:
                # Try to locate the JSON block by scanning for the first `{` and last `}`
                json_start = response_content.find("{")
                json_end = response_content.rfind("}")

                if json_start != -1 and json_end != -1 and json_start < json_end:
                    json_str = response_content[json_start : json_end + 1].strip()

            # Validate and ensure the extracted JSON is valid
            if json_str:
                json.loads(json_str)  # This will raise an error if the JSON is invalid
                return json_str

            # If no valid JSON found, raise an error
            raise ValueError("Response does not contain valid JSON.")

        except json.JSONDecodeError:
            # Invalid JSON
            print(f"Invalid JSON detected in response: {response_content}")
            raise ValueError("Response does not contain valid JSON.")
        except Exception as e:
            # Other errors
            print(f"Error processing response: {str(e)}")
            print(f"Full response: {response_content}")
            raise
    # get env vars for workflow
    @activity.defn
    async def get_wf_env_vars(self, input: EnvLookupInput) -> EnvLookupOutput:
        """ gets env vars for workflow as an activity result so it's deterministic
@@ -498,18 +167,6 @@ class ToolActivities:

        return output


def get_current_date_human_readable():
    """
    Returns the current date in a human-readable format.

    Example: Wednesday, January 1, 2025
    """
    from datetime import datetime

    return datetime.now().strftime("%A, %B %d, %Y")


@activity.defn(dynamic=True)
async def dynamic_tool_activity(args: Sequence[RawValue]) -> dict:
    from tools import get_handler
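Note that in the committed `sanitize_json_response`, the unconditional `return` makes everything below it unreachable; the function as shipped only strips the markdown fence markers. A minimal standalone sketch of the fuller extraction behavior the docstring describes (fenced JSON, raw JSON, or JSON embedded in surrounding prose) could look like this. The function name `extract_json_block` is illustrative, not part of the repository's API:

```python
import json


def extract_json_block(text: str) -> str:
    """Return the JSON payload embedded in an LLM reply, or raise ValueError."""
    start_marker, end_marker = "```json", "```"
    if start_marker in text:
        # Case 1: JSON wrapped in a ```json ... ``` fence
        start = text.index(start_marker) + len(start_marker)
        end = text.index(end_marker, start)
        candidate = text[start:end].strip()
    else:
        # Case 2: raw JSON, possibly surrounded by extra prose -
        # scan for the first '{' and the last '}'
        start = text.find("{")
        end = text.rfind("}")
        if start == -1 or end <= start:
            raise ValueError("Response does not contain valid JSON.")
        candidate = text[start : end + 1].strip()

    json.loads(candidate)  # raises if the extracted block is not valid JSON
    return candidate


fenced = 'Sure! ```json\n{"next": "confirm"}\n``` Anything else?'
print(extract_json_block(fenced))  # -> {"next": "confirm"}
```

Raising on invalid JSON (rather than returning a best guess) matters here because Temporal treats the raised exception as an activity failure and retries, which is how the workflow recovers from a malformed LLM reply.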
poetry.lock (generated, 1572 lines changed): diff suppressed because it is too large.
@@ -31,16 +31,11 @@ temporalio = "^1.8.0"

# Standard library modules (e.g. asyncio, collections) don't need to be added
# since they're built-in for Python 3.8+.
ollama = "^0.4.5"
litellm = "^1.70.0"
pyyaml = "^6.0.2"
fastapi = "^0.115.6"
uvicorn = "^0.34.0"
python-dotenv = "^1.0.1"
openai = "^1.59.2"
stripe = "^11.4.1"
google-generativeai = "^0.8.4"
anthropic = "0.47.0"
deepseek = "^1.0.0"
requests = "^2.32.3"
pandas = "^2.2.3"
gtfs-kit = "^10.1.1"
@@ -1,23 +0,0 @@
from ollama import chat, ChatResponse


def main():
    model_name = "mistral"

    # The messages to pass to the model
    messages = [
        {
            "role": "user",
            "content": "Why is the sky blue?",
        }
    ]

    # Call ollama's chat function
    response: ChatResponse = chat(model=model_name, messages=messages)

    # Print the full message content
    print(response.message.content)


if __name__ == "__main__":
    main()
@@ -17,18 +17,18 @@ async def main():
    load_dotenv(override=True)

    # Print LLM configuration info
    llm_provider = os.environ.get("LLM_PROVIDER", "openai").lower()
    print(f"Worker will use LLM provider: {llm_provider}")
    llm_model = os.environ.get("LLM_MODEL", "openai/gpt-4")
    print(f"Worker will use LLM model: {llm_model}")

    # Create the client
    client = await get_temporal_client()

    # Initialize the activities class once with the specified LLM provider
    # Initialize the activities class
    activities = ToolActivities()
    print(f"ToolActivities initialized with LLM provider: {llm_provider}")
    print(f"ToolActivities initialized with LLM model: {llm_model}")

    # If using Ollama, pre-load the model to avoid cold start latency
    if llm_provider == "ollama":
    if llm_model.startswith("ollama"):
        print("\n======== OLLAMA MODEL INITIALIZATION ========")
        print("Ollama models need to be loaded into memory on first use.")
        print("This may take 30+ seconds depending on your hardware and model size.")
@@ -51,8 +51,6 @@ async def main():
    print("Worker ready to process tasks!")
    logging.basicConfig(level=logging.WARN)


    # Run the worker
    with concurrent.futures.ThreadPoolExecutor(max_workers=100) as activity_executor:
        worker = Worker(
setup.md (98 lines changed)
@@ -14,6 +14,37 @@ If you want to show confirmations/enable the debugging UI that shows tool args,
SHOW_CONFIRM=True
```

### Quick Start with Makefile

We've provided a Makefile to simplify the setup and running of the application. Here are the main commands:

```bash
# Initial setup
make setup              # Creates virtual environment and installs dependencies
make setup-venv         # Creates virtual environment only
make install            # Installs all dependencies

# Running the application
make run-worker         # Starts the Temporal worker
make run-api            # Starts the API server
make run-frontend       # Starts the frontend development server

# Additional services
make run-train-api      # Starts the train API server
make run-legacy-worker  # Starts the legacy worker
make run-enterprise     # Builds and runs the enterprise .NET worker

# Development environment setup
make setup-temporal-mac # Installs and starts Temporal server on Mac

# View all available commands
make help
```

### Manual Setup (Alternative to Makefile)

If you prefer to run commands manually, follow these steps:

### Agent Goal Configuration

The agent can be configured to pursue different goals using the `AGENT_GOAL` environment variable in your `.env` file. If unset, default is `goal_choose_agent_type`.
@@ -25,54 +56,41 @@ GOAL_CATEGORIES=hr,travel-flights,travel-trains,fin

See the section Goal-Specific Tool Configuration below for tool configuration for specific goals.

### LLM Provider Configuration
### LLM Configuration

The agent can use OpenAI's GPT-4o, Google Gemini, Anthropic Claude, or a local LLM via Ollama. Set the `LLM_PROVIDER` environment variable in your `.env` file to choose the desired provider:
Note: We recommend using OpenAI's GPT-4o or Claude 3.5 Sonnet for the best results. There can be significant differences in performance and capabilities between models, especially for complex tasks.

- `LLM_PROVIDER=openai` for OpenAI's GPT-4o
- `LLM_PROVIDER=google` for Google Gemini
- `LLM_PROVIDER=anthropic` for Anthropic Claude
- `LLM_PROVIDER=deepseek` for DeepSeek-V3
- `LLM_PROVIDER=ollama` for running LLMs via [Ollama](https://ollama.ai) (not recommended for this use case)
The agent uses LiteLLM to interact with various LLM providers. Configure the following environment variables in your `.env` file:

### Option 1: OpenAI
- `LLM_MODEL`: The model to use (e.g., "openai/gpt-4o", "anthropic/claude-3-sonnet", "google/gemini-pro", etc.)
- `LLM_KEY`: Your API key for the selected provider
- `LLM_BASE_URL`: (Optional) Custom base URL for the LLM provider. Useful for:
  - Using Ollama with a custom endpoint
  - Using a proxy or custom API gateway
  - Testing with different API versions

If using OpenAI, ensure you have an OpenAI key for the GPT-4o model. Set this in the `OPENAI_API_KEY` environment variable in `.env`.
LiteLLM will automatically detect the provider based on the model name. For example:
- For OpenAI models: `openai/gpt-4o` or `openai/gpt-3.5-turbo`
- For Anthropic models: `anthropic/claude-3-sonnet`
- For Google models: `google/gemini-pro`
- For Ollama models: `ollama/mistral` (requires `LLM_BASE_URL` set to your Ollama server)

### Option 2: Google Gemini
Example configurations:
```bash
# For OpenAI
LLM_MODEL=openai/gpt-4o
LLM_KEY=your-api-key-here

To use Google Gemini:
# For Anthropic
LLM_MODEL=anthropic/claude-3-sonnet
LLM_KEY=your-api-key-here

1. Obtain a Google API key and set it in the `GOOGLE_API_KEY` environment variable in `.env`.
2. Set `LLM_PROVIDER=google` in your `.env` file.
# For Ollama with custom URL
LLM_MODEL=ollama/mistral
LLM_BASE_URL=http://localhost:11434
```

### Option 3: Anthropic Claude (recommended)

I find that Claude Sonnet 3.5 performs better than the other hosted LLMs for this use case.

To use Anthropic:

1. Obtain an Anthropic API key and set it in the `ANTHROPIC_API_KEY` environment variable in `.env`.
2. Set `LLM_PROVIDER=anthropic` in your `.env` file.

### Option 4: Deepseek-V3

To use Deepseek-V3:

1. Obtain a Deepseek API key and set it in the `DEEPSEEK_API_KEY` environment variable in `.env`.
2. Set `LLM_PROVIDER=deepseek` in your `.env` file.

### Option 5: Local LLM via Ollama (not recommended)

To use a local LLM with Ollama:

1. Install [Ollama](https://ollama.com) and the [Qwen2.5 14B](https://ollama.com/library/qwen2.5) model.
   - Run `ollama run <OLLAMA_MODEL_NAME>` to start the model. Note that this model is about 9GB to download.
   - Example: `ollama run qwen2.5:14b`

2. Set `LLM_PROVIDER=ollama` in your `.env` file and `OLLAMA_MODEL_NAME` to the name of the model you installed.

Note: I found the other (hosted) LLMs to be MUCH more reliable for this use case. However, you can switch to Ollama if desired, and choose a suitably large model if your computer has the resources.
For a complete list of supported models and providers, visit the [LiteLLM documentation](https://docs.litellm.ai/docs/providers).
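The new `agent_toolPlanner` in this commit builds its LiteLLM call from exactly these three variables. A rough sketch of that mapping, with illustrative values standing in for a real `.env` (the helper name `build_completion_kwargs` is for this example only, not the repository's API):

```python
import os

# Illustrative values; in the real worker these come from your .env file.
os.environ["LLM_MODEL"] = "openai/gpt-4o"
os.environ["LLM_KEY"] = "sk-example"
os.environ["LLM_BASE_URL"] = "http://localhost:11434"


def build_completion_kwargs(messages: list) -> dict:
    """Assemble the keyword arguments to splat into litellm.completion()."""
    kwargs = {
        "model": os.environ.get("LLM_MODEL", "openai/gpt-4"),
        "messages": messages,
        "api_key": os.environ.get("LLM_KEY"),
    }
    # base_url is only included when explicitly configured (e.g. for Ollama)
    base_url = os.environ.get("LLM_BASE_URL")
    if base_url:
        kwargs["base_url"] = base_url
    return kwargs


kwargs = build_completion_kwargs([{"role": "user", "content": "hi"}])
print(kwargs["model"])  # -> openai/gpt-4o
```

The worker would then call `litellm.completion(**kwargs)`; LiteLLM routes the request to the right provider based on the `provider/model` prefix in `LLM_MODEL`.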

## Configuring Temporal Connection