mirror of https://github.com/temporal-community/temporal-ai-agent.git
synced 2026-03-15 14:08:08 +01:00

feat: LiteLLM integration

committed by Steve Androulakis
parent 847f4bbaef
commit dcb6271c23

Makefile (new file, 53 lines)
@@ -0,0 +1,53 @@
+.PHONY: setup install run-worker run-api run-frontend run-train-api run-legacy-worker run-enterprise setup-venv check-python
+
+# Setup commands
+setup: check-python setup-venv install
+
+check-python:
+	@which python3 >/dev/null 2>&1 || (echo "Python 3 is required. Please install it first." && exit 1)
+
+setup-venv:
+	python3 -m venv venv
+	@echo "Virtual environment created. Don't forget to activate it with 'source venv/bin/activate'"
+
+install:
+	poetry install
+	cd frontend && npm install
+
+# Run commands
+run-worker:
+	poetry run python scripts/run_worker.py
+
+run-api:
+	poetry run uvicorn api.main:app --reload
+
+run-frontend:
+	cd frontend && npx vite
+
+run-train-api:
+	poetry run python thirdparty/train_api.py
+
+run-legacy-worker:
+	poetry run python scripts/run_legacy_worker.py
+
+run-enterprise:
+	cd enterprise && dotnet build && dotnet run
+
+# Development environment setup
+setup-temporal-mac:
+	brew install temporal
+	temporal server start-dev
+
+# Help command
+help:
+	@echo "Available commands:"
+	@echo "  make setup              - Create virtual environment and install dependencies"
+	@echo "  make setup-venv         - Create virtual environment only"
+	@echo "  make install            - Install all dependencies"
+	@echo "  make run-worker         - Start the Temporal worker"
+	@echo "  make run-api            - Start the API server"
+	@echo "  make run-frontend       - Start the frontend development server"
+	@echo "  make run-train-api      - Start the train API server"
+	@echo "  make run-legacy-worker  - Start the legacy worker"
+	@echo "  make run-enterprise     - Build and run the enterprise .NET worker"
+	@echo "  make setup-temporal-mac - Install and start Temporal server on Mac"
README.md (14 lines changed)

@@ -2,7 +2,13 @@
 This demo shows a multi-turn conversation with an AI agent running inside a Temporal workflow. The purpose of the agent is to collect information towards a goal, running tools along the way. There's a simple DSL input for collecting information (currently set up to use mock functions to search for public events, search for flights around those events, then create a test Stripe invoice for the trip).
 
-The AI will respond with clarifications and ask for any missing information to that goal. You can configure it to use [ChatGPT 4o](https://openai.com/index/hello-gpt-4o/), [Anthropic Claude](https://www.anthropic.com/claude), [Google Gemini](https://gemini.google.com), [Deepseek-V3](https://www.deepseek.com/), [Grok](https://docs.x.ai/docs/overview) or a local LLM of your choice using [Ollama](https://ollama.com).
+The AI will respond with clarifications and ask for any missing information to that goal. You can configure it to use any LLM supported by [LiteLLM](https://docs.litellm.ai/docs/providers), including:
+- OpenAI models (GPT-4, GPT-3.5)
+- Anthropic Claude models
+- Google Gemini models
+- Deepseek models
+- Ollama models (local)
+- And many more!
 
 It's really helpful to [watch the demo (5 minute YouTube video)](https://www.youtube.com/watch?v=GEXllEH2XiQ) to understand how interaction works.
 
@@ -28,7 +34,11 @@ These are the key elements of an agentic framework:
 For a deeper dive into this, check out the [architecture guide](./architecture.md).
 
 ## Setup and Configuration
-See [the Setup guide](./setup.md).
+See [the Setup guide](./setup.md) for detailed instructions. The basic configuration requires just two environment variables:
+
+```bash
+LLM_MODEL=openai/gpt-4o  # or any other model supported by LiteLLM
+LLM_KEY=your-api-key-here
+```
 
 ## Customizing Interaction & Tools
 See [the guide to adding goals and tools](./adding-goals-and-tools.md).
@@ -1,142 +1,28 @@
 import inspect
 from temporalio import activity
-from ollama import chat, ChatResponse
-from openai import OpenAI
 import json
-from typing import Sequence, Optional
+from typing import Optional, Sequence
 from temporalio.common import RawValue
 import os
 from datetime import datetime
-import google.generativeai as genai
-import anthropic
-import deepseek
 from dotenv import load_dotenv
 from models.data_types import EnvLookupOutput, ValidationInput, ValidationResult, ToolPromptInput, EnvLookupInput
+from litellm import completion
 
 load_dotenv(override=True)
-print(
-    "Using LLM provider: "
-    + os.environ.get("LLM_PROVIDER", "openai")
-    + " (set LLM_PROVIDER in .env to change)"
-)
-
-if os.environ.get("LLM_PROVIDER") == "ollama":
-    print(
-        "Using Ollama (local) model: "
-        + os.environ.get("OLLAMA_MODEL_NAME", "qwen2.5:14b")
-    )
 
 
 class ToolActivities:
     def __init__(self):
-        """Initialize LLM clients based on environment configuration."""
-        self.llm_provider = os.environ.get("LLM_PROVIDER", "openai").lower()
-        print(f"Initializing ToolActivities with LLM provider: {self.llm_provider}")
-
-        # Initialize client variables (all set to None initially)
-        self.openai_client: Optional[OpenAI] = None
-        self.grok_client: Optional[OpenAI] = None
-        self.anthropic_client: Optional[anthropic.Anthropic] = None
-        self.genai_configured: bool = False
-        self.deepseek_client: Optional[deepseek.DeepSeekAPI] = None
-        self.ollama_model_name: Optional[str] = None
-        self.ollama_initialized: bool = False
-
-        # Only initialize the client specified by LLM_PROVIDER
-        if self.llm_provider == "openai":
-            if os.environ.get("OPENAI_API_KEY"):
-                self.openai_client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
-                print("Initialized OpenAI client")
-            else:
-                print("Warning: OPENAI_API_KEY not set but LLM_PROVIDER is 'openai'")
-
-        elif self.llm_provider == "grok":
-            if os.environ.get("GROK_API_KEY"):
-                self.grok_client = OpenAI(api_key=os.environ.get("GROK_API_KEY"), base_url="https://api.x.ai/v1")
-                print("Initialized grok client")
-            else:
-                print("Warning: GROK_API_KEY not set but LLM_PROVIDER is 'grok'")
-
-        elif self.llm_provider == "anthropic":
-            if os.environ.get("ANTHROPIC_API_KEY"):
-                self.anthropic_client = anthropic.Anthropic(
-                    api_key=os.environ.get("ANTHROPIC_API_KEY")
-                )
-                print("Initialized Anthropic client")
-            else:
-                print(
-                    "Warning: ANTHROPIC_API_KEY not set but LLM_PROVIDER is 'anthropic'"
-                )
-
-        elif self.llm_provider == "google":
-            api_key = os.environ.get("GOOGLE_API_KEY")
-            if api_key:
-                genai.configure(api_key=api_key)
-                self.genai_configured = True
-                print("Configured Google Generative AI")
-            else:
-                print("Warning: GOOGLE_API_KEY not set but LLM_PROVIDER is 'google'")
-
-        elif self.llm_provider == "deepseek":
-            if os.environ.get("DEEPSEEK_API_KEY"):
-                self.deepseek_client = deepseek.DeepSeekAPI(
-                    api_key=os.environ.get("DEEPSEEK_API_KEY")
-                )
-                print("Initialized DeepSeek client")
-            else:
-                print(
-                    "Warning: DEEPSEEK_API_KEY not set but LLM_PROVIDER is 'deepseek'"
-                )
-
-        # For Ollama, we store the model name but actual initialization happens in warm_up_ollama
-        elif self.llm_provider == "ollama":
-            self.ollama_model_name = os.environ.get("OLLAMA_MODEL_NAME", "qwen2.5:14b")
-            print(
-                f"Using Ollama model: {self.ollama_model_name} (will be loaded on worker startup)"
-            )
-        else:
-            print(
-                f"Warning: Unknown LLM_PROVIDER '{self.llm_provider}', defaulting to OpenAI"
-            )
-
-    def warm_up_ollama(self):
-        """Pre-load the Ollama model to avoid cold start latency on first request"""
-        if self.llm_provider != "ollama" or self.ollama_initialized:
-            return False  # No need to warm up if not using Ollama or already warmed up
-
-        try:
-            print(
-                f"Pre-loading Ollama model '{self.ollama_model_name}' - this may take 30+ seconds..."
-            )
-            start_time = datetime.now()
-
-            # Make a simple request to load the model into memory
-            chat(
-                model=self.ollama_model_name,
-                messages=[
-                    {"role": "system", "content": "You are an AI assistant"},
-                    {
-                        "role": "user",
-                        "content": "Hello! This is a warm-up message to load the model.",
-                    },
-                ],
-            )
-
-            elapsed_time = (datetime.now() - start_time).total_seconds()
-            print(f"✅ Ollama model loaded successfully in {elapsed_time:.2f} seconds")
-            self.ollama_initialized = True
-            return True
-        except Exception as e:
-            print(f"❌ Error pre-loading Ollama model: {str(e)}")
-            print(
-                "The worker will continue, but the first actual request may experience a delay."
-            )
-            return False
+        """Initialize LLM client using LiteLLM."""
+        self.llm_model = os.environ.get("LLM_MODEL", "openai/gpt-4")
+        self.llm_key = os.environ.get("LLM_KEY")
+        self.llm_base_url = os.environ.get("LLM_BASE_URL")
+        print(f"Initializing ToolActivities with LLM model: {self.llm_model}")
+        if self.llm_base_url:
+            print(f"Using custom base URL: {self.llm_base_url}")
 
     @activity.defn
-    async def agent_validatePrompt(
-        self, validation_input: ValidationInput
-    ) -> ValidationResult:
+    async def agent_validatePrompt(self, validation_input: ValidationInput) -> ValidationResult:
         """
         Validates the prompt in the context of the conversation history and agent goal.
         Returns a ValidationResult indicating if the prompt makes sense given the context.
@@ -187,7 +73,7 @@ class ToolActivities:
             prompt=validation_prompt, context_instructions=context_instructions
         )
 
-        result = self.agent_toolPlanner(prompt_input)
+        result = await self.agent_toolPlanner(prompt_input)
 
         return ValidationResult(
             validationResult=result.get("validationResult", False),
@@ -195,19 +81,43 @@ class ToolActivities:
         )
 
     @activity.defn
-    def agent_toolPlanner(self, input: ToolPromptInput) -> dict:
-        if self.llm_provider == "ollama":
-            return self.prompt_llm_ollama(input)
-        elif self.llm_provider == "google":
-            return self.prompt_llm_google(input)
-        elif self.llm_provider == "anthropic":
-            return self.prompt_llm_anthropic(input)
-        elif self.llm_provider == "deepseek":
-            return self.prompt_llm_deepseek(input)
-        elif self.llm_provider == "grok":
-            return self.prompt_llm_grok(input)
-        else:
-            return self.prompt_llm_openai(input)
+    async def agent_toolPlanner(self, input: ToolPromptInput) -> dict:
+        messages = [
+            {
+                "role": "system",
+                "content": input.context_instructions
+                + ". The current date is "
+                + datetime.now().strftime("%B %d, %Y"),
+            },
+            {
+                "role": "user",
+                "content": input.prompt,
+            },
+        ]
+
+        try:
+            completion_kwargs = {
+                "model": self.llm_model,
+                "messages": messages,
+                "api_key": self.llm_key,
+            }
+
+            # Add base_url if configured
+            if self.llm_base_url:
+                completion_kwargs["base_url"] = self.llm_base_url
+
+            response = completion(**completion_kwargs)
+
+            response_content = response.choices[0].message.content
+            activity.logger.info(f"LLM response: {response_content}")
+
+            # Use the new sanitize function
+            response_content = self.sanitize_json_response(response_content)
+
+            return self.parse_json_response(response_content)
+        except Exception as e:
+            print(f"Error in LLM completion: {str(e)}")
+            raise
 
     def parse_json_response(self, response_content: str) -> dict:
         """
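The hunk above replaces the per-provider dispatch with a single LiteLLM call whose keyword arguments are assembled conditionally: `base_url` is only passed when configured. That assembly logic can be sketched in isolation (the `build_completion_kwargs` helper name is ours, not from the commit):

```python
def build_completion_kwargs(model, messages, api_key=None, base_url=None):
    """Assemble keyword arguments for litellm.completion().

    base_url is only included when set, mirroring the diff above, so
    LiteLLM otherwise falls back to the provider's default endpoint
    inferred from the model-name prefix (e.g. "openai/", "ollama/").
    """
    kwargs = {"model": model, "messages": messages, "api_key": api_key}
    if base_url:
        kwargs["base_url"] = base_url
    return kwargs


messages = [{"role": "user", "content": "ping"}]
cloud_kwargs = build_completion_kwargs("openai/gpt-4o", messages, api_key="sk-...")
local_kwargs = build_completion_kwargs(
    "ollama/mistral", messages, base_url="http://localhost:11434"
)
print(sorted(cloud_kwargs))  # ['api_key', 'messages', 'model']
```

The resulting dict would be splatted into `completion(**kwargs)` exactly as the new `agent_toolPlanner` does.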
@@ -220,259 +130,18 @@ class ToolActivities:
             print(f"Invalid JSON: {e}")
             raise
 
-    def prompt_llm_openai(self, input: ToolPromptInput) -> dict:
-        if not self.openai_client:
-            api_key = os.environ.get("OPENAI_API_KEY")
-            if not api_key:
-                raise ValueError(
-                    "OPENAI_API_KEY is not set in the environment variables but LLM_PROVIDER is 'openai'"
-                )
-            self.openai_client = OpenAI(api_key=api_key)
-            print("Initialized OpenAI client on demand")
-
-        messages = [
-            {
-                "role": "system",
-                "content": input.context_instructions
-                + ". The current date is "
-                + datetime.now().strftime("%B %d, %Y"),
-            },
-            {
-                "role": "user",
-                "content": input.prompt,
-            },
-        ]
-
-        chat_completion = self.openai_client.chat.completions.create(
-            model="gpt-4o", messages=messages  # was gpt-4-0613
-        )
-
-        response_content = chat_completion.choices[0].message.content
-        activity.logger.info(f"ChatGPT response: {response_content}")
-
-        # Use the new sanitize function
-        response_content = self.sanitize_json_response(response_content)
-
-        return self.parse_json_response(response_content)
-
-    def prompt_llm_grok(self, input: ToolPromptInput) -> dict:
-        if not self.grok_client:
-            api_key = os.environ.get("GROK_API_KEY")
-            if not api_key:
-                raise ValueError(
-                    "GROK_API_KEY is not set in the environment variables but LLM_PROVIDER is 'grok'"
-                )
-            self.grok_client = OpenAI(api_key=api_key, base_url="https://api.x.ai/v1")
-            print("Initialized grok client on demand")
-
-        messages = [
-            {
-                "role": "system",
-                "content": input.context_instructions
-                + ". The current date is "
-                + datetime.now().strftime("%B %d, %Y"),
-            },
-            {
-                "role": "user",
-                "content": input.prompt,
-            },
-        ]
-
-        chat_completion = self.grok_client.chat.completions.create(
-            model="grok-2-1212", messages=messages
-        )
-
-        response_content = chat_completion.choices[0].message.content
-        activity.logger.info(f"Grok response: {response_content}")
-
-        # Use the new sanitize function
-        response_content = self.sanitize_json_response(response_content)
-
-        return self.parse_json_response(response_content)
-
-    def prompt_llm_ollama(self, input: ToolPromptInput) -> dict:
-        # If not yet initialized, try to do so now (this is a backup if warm_up_ollama wasn't called or failed)
-        if not self.ollama_initialized:
-            print(
-                "Ollama model not pre-loaded. Loading now (this may take 30+ seconds)..."
-            )
-            try:
-                self.warm_up_ollama()
-            except Exception:
-                # We already logged the error in warm_up_ollama, continue with the actual request
-                pass
-
-        model_name = self.ollama_model_name or os.environ.get(
-            "OLLAMA_MODEL_NAME", "qwen2.5:14b"
-        )
-        messages = [
-            {
-                "role": "system",
-                "content": input.context_instructions
-                + ". The current date is "
-                + get_current_date_human_readable(),
-            },
-            {
-                "role": "user",
-                "content": input.prompt,
-            },
-        ]
-
-        try:
-            response: ChatResponse = chat(model=model_name, messages=messages)
-            print(f"Chat response: {response.message.content}")
-
-            # Use the new sanitize function
-            response_content = self.sanitize_json_response(response.message.content)
-            return self.parse_json_response(response_content)
-        except (json.JSONDecodeError, ValueError) as e:
-            # Re-raise JSON-related exceptions to let Temporal retry the activity
-            print(f"JSON parsing error with Ollama response: {str(e)}")
-            raise
-        except Exception as e:
-            # Log and raise other exceptions that may need retrying
-            print(f"Error in Ollama chat: {str(e)}")
-            raise
-
-    def prompt_llm_google(self, input: ToolPromptInput) -> dict:
-        if not self.genai_configured:
-            api_key = os.environ.get("GOOGLE_API_KEY")
-            if not api_key:
-                raise ValueError(
-                    "GOOGLE_API_KEY is not set in the environment variables but LLM_PROVIDER is 'google'"
-                )
-            genai.configure(api_key=api_key)
-            self.genai_configured = True
-            print("Configured Google Generative AI on demand")
-
-        model = genai.GenerativeModel(
-            "models/gemini-1.5-flash",
-            system_instruction=input.context_instructions
-            + ". The current date is "
-            + datetime.now().strftime("%B %d, %Y"),
-        )
-        response = model.generate_content(input.prompt)
-        response_content = response.text
-        print(f"Google Gemini response: {response_content}")
-
-        # Use the new sanitize function
-        response_content = self.sanitize_json_response(response_content)
-
-        return self.parse_json_response(response_content)
-
-    def prompt_llm_anthropic(self, input: ToolPromptInput) -> dict:
-        if not self.anthropic_client:
-            api_key = os.environ.get("ANTHROPIC_API_KEY")
-            if not api_key:
-                raise ValueError(
-                    "ANTHROPIC_API_KEY is not set in the environment variables but LLM_PROVIDER is 'anthropic'"
-                )
-            self.anthropic_client = anthropic.Anthropic(api_key=api_key)
-            print("Initialized Anthropic client on demand")
-
-        response = self.anthropic_client.messages.create(
-            model="claude-3-5-sonnet-20241022",
-            # model="claude-3-7-sonnet-20250219",  # doesn't do as well
-            max_tokens=1024,
-            system=input.context_instructions
-            + ". The current date is "
-            + get_current_date_human_readable(),
-            messages=[
-                {
-                    "role": "user",
-                    "content": input.prompt,
-                }
-            ],
-        )
-
-        response_content = response.content[0].text
-        print(f"Anthropic response: {response_content}")
-
-        # Use the new sanitize function
-        response_content = self.sanitize_json_response(response_content)
-
-        return self.parse_json_response(response_content)
-
-    def prompt_llm_deepseek(self, input: ToolPromptInput) -> dict:
-        if not self.deepseek_client:
-            api_key = os.environ.get("DEEPSEEK_API_KEY")
-            if not api_key:
-                raise ValueError(
-                    "DEEPSEEK_API_KEY is not set in the environment variables but LLM_PROVIDER is 'deepseek'"
-                )
-            self.deepseek_client = deepseek.DeepSeekAPI(api_key=api_key)
-            print("Initialized DeepSeek client on demand")
-
-        messages = [
-            {
-                "role": "system",
-                "content": input.context_instructions
-                + ". The current date is "
-                + datetime.now().strftime("%B %d, %Y"),
-            },
-            {
-                "role": "user",
-                "content": input.prompt,
-            },
-        ]
-
-        response = self.deepseek_client.chat_completion(prompt=messages)
-        response_content = response
-        print(f"DeepSeek response: {response_content}")
-
-        # Use the new sanitize function
-        response_content = self.sanitize_json_response(response_content)
-
-        return self.parse_json_response(response_content)
-
     def sanitize_json_response(self, response_content: str) -> str:
         """
-        Extracts the JSON block from the response content as a string.
-        Supports:
-        - JSON surrounded by ```json and ```
-        - Raw JSON input
-        - JSON preceded or followed by extra text
-        Rejects invalid input that doesn't contain JSON.
+        Sanitizes the response content to ensure it's valid JSON.
         """
-        try:
-            start_marker = "```json"
-            end_marker = "```"
-
-            json_str = None
-
-            # Case 1: JSON surrounded by markers
-            if start_marker in response_content and end_marker in response_content:
-                json_start = response_content.index(start_marker) + len(start_marker)
-                json_end = response_content.index(end_marker, json_start)
-                json_str = response_content[json_start:json_end].strip()
-
-            # Case 2: Text with valid JSON
-            else:
-                # Try to locate the JSON block by scanning for the first `{` and last `}`
-                json_start = response_content.find("{")
-                json_end = response_content.rfind("}")
-
-                if json_start != -1 and json_end != -1 and json_start < json_end:
-                    json_str = response_content[json_start : json_end + 1].strip()
-
-            # Validate and ensure the extracted JSON is valid
-            if json_str:
-                json.loads(json_str)  # This will raise an error if the JSON is invalid
-                return json_str
-
-            # If no valid JSON found, raise an error
-            raise ValueError("Response does not contain valid JSON.")
-
-        except json.JSONDecodeError:
-            # Invalid JSON
-            print(f"Invalid JSON detected in response: {response_content}")
-            raise ValueError("Response does not contain valid JSON.")
-        except Exception as e:
-            # Other errors
-            print(f"Error processing response: {str(e)}")
-            print(f"Full response: {response_content}")
-            raise
-
-    # get env vars for workflow
+        # Remove any markdown code block markers
+        response_content = response_content.replace("```json", "").replace("```", "")
+
+        # Remove any leading/trailing whitespace
+        response_content = response_content.strip()
+
+        return response_content
+
     @activity.defn
     async def get_wf_env_vars(self, input: EnvLookupInput) -> EnvLookupOutput:
         """ gets env vars for workflow as an activity result so it's deterministic
@@ -498,18 +167,6 @@ class ToolActivities:
 
         return output
 
 
-def get_current_date_human_readable():
-    """
-    Returns the current date in a human-readable format.
-
-    Example: Wednesday, January 1, 2025
-    """
-    from datetime import datetime
-
-    return datetime.now().strftime("%A, %B %d, %Y")
-
-
 @activity.defn(dynamic=True)
 async def dynamic_tool_activity(args: Sequence[RawValue]) -> dict:
     from tools import get_handler
||||||
|
|||||||
1572
poetry.lock
generated
1572
poetry.lock
generated
File diff suppressed because it is too large
Load Diff
@@ -31,16 +31,11 @@ temporalio = "^1.8.0"
 
 # Standard library modules (e.g. asyncio, collections) don't need to be added
 # since they're built-in for Python 3.8+.
-ollama = "^0.4.5"
+litellm = "^1.30.7"
 pyyaml = "^6.0.2"
 fastapi = "^0.115.6"
 uvicorn = "^0.34.0"
 python-dotenv = "^1.0.1"
-openai = "^1.59.2"
-stripe = "^11.4.1"
-google-generativeai = "^0.8.4"
-anthropic = "0.47.0"
-deepseek = "^1.0.0"
 requests = "^2.32.3"
 pandas = "^2.2.3"
 gtfs-kit = "^10.1.1"
@@ -1,23 +0,0 @@
-from ollama import chat, ChatResponse
-
-
-def main():
-    model_name = "mistral"
-
-    # The messages to pass to the model
-    messages = [
-        {
-            "role": "user",
-            "content": "Why is the sky blue?",
-        }
-    ]
-
-    # Call ollama's chat function
-    response: ChatResponse = chat(model=model_name, messages=messages)
-
-    # Print the full message content
-    print(response.message.content)
-
-
-if __name__ == "__main__":
-    main()
@@ -17,18 +17,18 @@ async def main():
     load_dotenv(override=True)
 
     # Print LLM configuration info
-    llm_provider = os.environ.get("LLM_PROVIDER", "openai").lower()
-    print(f"Worker will use LLM provider: {llm_provider}")
+    llm_model = os.environ.get("LLM_MODEL", "openai/gpt-4")
+    print(f"Worker will use LLM model: {llm_model}")
 
     # Create the client
     client = await get_temporal_client()
 
-    # Initialize the activities class once with the specified LLM provider
+    # Initialize the activities class
     activities = ToolActivities()
-    print(f"ToolActivities initialized with LLM provider: {llm_provider}")
+    print(f"ToolActivities initialized with LLM model: {llm_model}")
 
     # If using Ollama, pre-load the model to avoid cold start latency
-    if llm_provider == "ollama":
+    if llm_model.startswith("ollama"):
         print("\n======== OLLAMA MODEL INITIALIZATION ========")
         print("Ollama models need to be loaded into memory on first use.")
         print("This may take 30+ seconds depending on your hardware and model size.")
@@ -51,8 +51,6 @@ async def main():
     print("Worker ready to process tasks!")
     logging.basicConfig(level=logging.WARN)
 
-
-
     # Run the worker
     with concurrent.futures.ThreadPoolExecutor(max_workers=100) as activity_executor:
         worker = Worker(
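With the provider variable gone, the worker infers "is this a local Ollama model?" from the LiteLLM model-name prefix. That check can be factored out as a tiny predicate (the `needs_ollama_warmup` name is ours, for illustration):

```python
def needs_ollama_warmup(llm_model: str) -> bool:
    """True when the LiteLLM model string targets a local Ollama model,
    e.g. "ollama/mistral"; cloud models like "openai/gpt-4o" skip the
    pre-load step entirely."""
    return llm_model.startswith("ollama")


print(needs_ollama_warmup("ollama/mistral"))  # True
print(needs_ollama_warmup("openai/gpt-4o"))   # False
```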
setup.md (69 lines changed)

@@ -14,6 +14,37 @@ If you want to show confirmations/enable the debugging UI that shows tool args,
 SHOW_CONFIRM=True
 ```
 
+### Quick Start with Makefile
+
+We've provided a Makefile to simplify the setup and running of the application. Here are the main commands:
+
+```bash
+# Initial setup
+make setup              # Creates virtual environment and installs dependencies
+make setup-venv         # Creates virtual environment only
+make install            # Installs all dependencies
+
+# Running the application
+make run-worker         # Starts the Temporal worker
+make run-api            # Starts the API server
+make run-frontend       # Starts the frontend development server
+
+# Additional services
+make run-train-api      # Starts the train API server
+make run-legacy-worker  # Starts the legacy worker
+make run-enterprise     # Builds and runs the enterprise .NET worker
+
+# Development environment setup
+make setup-temporal-mac # Installs and starts Temporal server on Mac
+
+# View all available commands
+make help
+```
+
+### Manual Setup (Alternative to Makefile)
+
+If you prefer to run commands manually, follow these steps:
+
 ### Agent Goal Configuration
 
 The agent can be configured to pursue different goals using the `AGENT_GOAL` environment variable in your `.env` file. If unset, default is `goal_choose_agent_type`.
@@ -25,15 +56,39 @@ GOAL_CATEGORIES=hr,travel-flights,travel-trains,fin
 
 See the section Goal-Specific Tool Configuration below for tool configuration for specific goals.
 
-### LLM Provider Configuration
+### LLM Configuration
 
-The agent can use OpenAI's GPT-4o, Google Gemini, Anthropic Claude, or a local LLM via Ollama. Set the `LLM_PROVIDER` environment variable in your `.env` file to choose the desired provider:
-
-- `LLM_PROVIDER=openai` for OpenAI's GPT-4o
-- `LLM_PROVIDER=google` for Google Gemini
-- `LLM_PROVIDER=anthropic` for Anthropic Claude
-- `LLM_PROVIDER=deepseek` for DeepSeek-V3
-- `LLM_PROVIDER=ollama` for running LLMs via [Ollama](https://ollama.ai) (not recommended for this use case)
+The agent uses LiteLLM to interact with various LLM providers. Configure the following environment variables in your `.env` file:
+
+- `LLM_MODEL`: The model to use (e.g., "openai/gpt-4o", "anthropic/claude-3-sonnet", "google/gemini-pro", etc.)
+- `LLM_KEY`: Your API key for the selected provider
+- `LLM_BASE_URL`: (Optional) Custom base URL for the LLM provider. Useful for:
+  - Using Ollama with a custom endpoint
+  - Using a proxy or custom API gateway
+  - Testing with different API versions
+
+LiteLLM will automatically detect the provider based on the model name. For example:
+- For OpenAI models: `openai/gpt-4o` or `openai/gpt-3.5-turbo`
+- For Anthropic models: `anthropic/claude-3-sonnet`
+- For Google models: `google/gemini-pro`
+- For Ollama models: `ollama/mistral` (requires `LLM_BASE_URL` set to your Ollama server)
+
+Example configurations:
+```bash
+# For OpenAI
+LLM_MODEL=openai/gpt-4o
+LLM_KEY=your-api-key-here
+
+# For Anthropic
+LLM_MODEL=anthropic/claude-3-sonnet
+LLM_KEY=your-api-key-here
+
+# For Ollama with custom URL
+LLM_MODEL=ollama/mistral
+LLM_BASE_URL=http://localhost:11434
+```
+
+For a complete list of supported models and providers, visit the [LiteLLM documentation](https://docs.litellm.ai/docs/providers).
 
 ### Option 1: OpenAI
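The three variables documented above map directly onto the `os.environ` lookups in the new `ToolActivities.__init__`; a minimal sketch of how the worker would read them, with the same `openai/gpt-4` fallback as the commit (the `load_llm_config` helper name is ours):

```python
import os


def load_llm_config() -> dict:
    """Read the LiteLLM settings the worker expects. LLM_MODEL falls
    back to "openai/gpt-4" as in the diff; the other two default to
    None, in which case LiteLLM uses provider defaults."""
    return {
        "model": os.environ.get("LLM_MODEL", "openai/gpt-4"),
        "api_key": os.environ.get("LLM_KEY"),
        "base_url": os.environ.get("LLM_BASE_URL"),
    }


# Simulate the Ollama example configuration from the docs above
os.environ["LLM_MODEL"] = "ollama/mistral"
os.environ["LLM_BASE_URL"] = "http://localhost:11434"
cfg = load_llm_config()
print(cfg["model"], cfg["base_url"])  # ollama/mistral http://localhost:11434
```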