mirror of
https://github.com/temporal-community/temporal-ai-agent.git
synced 2026-03-15 05:58:08 +01:00
Enhance Dev Experience and Code Quality (#41)
* Format codebase to satisfy linters
* Fix pylance- and ruff-checked files
* Add contributing md, and type and formatting fixes
* Setup file capitalization
* Test fix
committed by GitHub
parent e35181b5ad
commit eb06cf5c8d
3
.gitignore
vendored
@@ -33,3 +33,6 @@ coverage.xml
.env
.env*

# Cursor
.cursor
@@ -169,7 +169,7 @@ For detailed architecture information, see [architecture.md](architecture.md).
- Ensure tests pass before submitting: `poetry run pytest --workflow-environment=time-skipping`

## Additional Resources
- **Setup Guide**: [setup.md](setup.md) - Detailed configuration instructions
- **Setup Guide**: [SETUP.md](SETUP.md) - Detailed configuration instructions
- **Architecture Decisions**: [architecture-decisions.md](architecture-decisions.md) - Why Temporal for AI agents
- **Demo Video**: [5-minute YouTube overview](https://www.youtube.com/watch?v=GEXllEH2XiQ)
- **Multi-Agent Demo**: [Advanced multi-agent execution](https://www.youtube.com/watch?v=8Dc_0dC14yY)
@@ -8,12 +8,12 @@ All notable changes to this project will be documented in this file.

### Added
- **Multi‑goal agent architecture** with dynamic goal switching (`goal_choose_agent_type`, `ListAgents`, `ChangeGoal`).
  - See [the architecture guide](./architecture.md) and [setup guide](./setup.md).
  - See [the architecture guide](./architecture.md) and [setup guide](./SETUP.md).
- **New goal categories & agents**: HR PTO scheduling/checking, paycheck integration, Financial (balances, money movement, loan application), E‑commerce order tracking.
  - See [the guide for adding goals and tools](./adding-goals-and-tools.md).
- **Force Confirmation**: `SHOW_CONFIRM` will show a confirmation box before allowing the agent to run a tool.
- **Grok (`x.ai`) LLM provider** support via `GROK_API_KEY`.
- Extensive **docs**: `setup.md`, `architecture.md`, `architecture-decisions.md`, `adding-goals-and-tools.md`, plus new diagrams & assets.
- Extensive **docs**: `SETUP.md`, `architecture.md`, `architecture-decisions.md`, `adding-goals-and-tools.md`, plus new diagrams & assets.

### Changed
- **UI Confirmation Box** is less 'debug' looking and prettier.
106
CONTRIBUTING.md
Normal file
@@ -0,0 +1,106 @@
# Contributing to the Temporal AI Agent Project

This document provides guidelines for contributing to `temporal-ai-agent`. All setup and installation instructions can be found in [./SETUP.md](./SETUP.md).

## Getting Started

### Code Style & Formatting
We use `black` for code formatting and `isort` for import sorting to maintain a consistent codebase.
- **Format code:**
```bash
poetry run poe format
```
Or manually:
```bash
poetry run black .
poetry run isort .
```
Please format your code before committing.

### Linting & Type Checking
We use `mypy` for static type checking and other linters configured via Poe the Poet.
- **Run linters and type checks:**
```bash
poetry run poe lint
```
Or manually for type checking:
```bash
poetry run mypy --check-untyped-defs --namespace-packages .
```
Ensure all linting and type checks pass before submitting a pull request.

## Testing
Comprehensive testing is crucial for this project. We use `pytest` and Temporal's testing framework.
- **Install test dependencies** (if not already done):
```bash
poetry install --with dev
```
- **Run all tests:**
```bash
poetry run pytest
```
- **Run tests with time-skipping (recommended for faster execution, especially in CI):**
```bash
poetry run pytest --workflow-environment=time-skipping
```

For detailed information on test categories, running specific tests, test environments, coverage, and troubleshooting, please refer to:
- [TESTING.md](./TESTING.md) (Quick Start and overview)
- [tests/README.md](./tests/README.md) (Comprehensive guide, patterns, and best practices)

**Ensure all tests pass before submitting a pull request.**

## Making Changes

### Adding New Tools or Goals
If you're looking to extend the agent's capabilities:
1. Create your tool implementation in the `tools/` directory.
2. Register your tool and associate it with relevant goals.
For detailed instructions, please see:
- [Agent Customization in agents.md](./agents.md#agent-customization)
- [Adding Goals and Tools Guide](./adding-goals-and-tools.md)
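As a rough sketch, the two steps above might look like this. Every name here (`get_order_status`, `TOOL_HANDLERS`) is hypothetical, not the project's actual API; the real structures live in `tools/` and `tools/tool_registry.py`:

```python
# Illustrative sketch only: a tool is a plain callable plus a registry entry
# that maps the tool's name to its handler so the agent can resolve it.

def get_order_status(order_id: str) -> dict:
    """Step 1: a tool implementation that would live under tools/."""
    # A real tool would call a backing service; this returns canned data.
    return {"order_id": order_id, "status": "shipped"}

# Step 2: register the tool under a name the agent (and the LLM) can refer to.
TOOL_HANDLERS = {"GetOrderStatus": get_order_status}

def get_handler(tool_name: str):
    """Resolve a registered tool name to its handler function."""
    return TOOL_HANDLERS[tool_name]
```

At runtime, a dynamic activity would resolve the tool name chosen by the LLM and invoke the handler with the gathered arguments; see the linked guides for the project's actual registration flow.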

### General Code Changes
- Follow the existing code style and patterns.
- Ensure any new code is well-documented with comments.
- Write new tests for new functionality or bug fixes.
- Update existing tests if necessary.

## Submitting Contributions

### Pull Requests
When you're ready to submit your changes:
1. Push your branch to the remote repository.
2. Open a Pull Request (PR) against the `main` branch.
3. **Describe your changes:** Clearly explain what you changed and why. Reference any related issues.
4. **Ensure tests pass:** All CI checks, including tests and linters, must pass. The command `poetry run pytest --workflow-environment=time-skipping` is a good one to run locally.
5. **Request review:** Request a review from one or more maintainers.

## Reporting Bugs
If you encounter a bug, please:
1. **Search existing issues:** Check if the bug has already been reported.
2. **Open a new issue:** If not, create a new issue.
   - Provide a clear and descriptive title.
   - Include steps to reproduce the bug.
   - Describe the expected behavior and what actually happened.
   - Provide details about your environment (OS, Python version, Temporal server version, etc.).
   - Include any relevant logs or screenshots.

## Suggesting Enhancements
We welcome suggestions for new features or improvements!
1. **Search existing issues/discussions:** See if your idea has already been discussed.
2. **Open a new issue:**
   - Use a clear and descriptive title.
   - Provide a detailed explanation of the enhancement and its benefits.
   - Explain the use case or problem it solves.
   - Include any potential implementation ideas if you have them.

## Key Resources
- **Project Overview**: [README.md](./README.md)
- **Detailed Contribution & Development Guide**: [agents.md](./agents.md)
- **Setup Instructions**: [SETUP.md](./SETUP.md)
- **Comprehensive Testing Guide**: [TESTING.md](./TESTING.md) and [tests/README.md](./tests/README.md)
- **System Architecture**: [architecture.md](./architecture.md)
- **Architecture Decisions**: [architecture-decisions.md](./architecture-decisions.md)
- **Customizing Agent Tools and Goals**: [adding-goals-and-tools.md](./adding-goals-and-tools.md)
- **To-Do List / Future Enhancements**: [todo.md](./todo.md)
@@ -34,7 +34,7 @@ These are the key elements of an agentic framework:
For a deeper dive into this, check out the [architecture guide](./architecture.md).

## Setup and Configuration
See [the Setup guide](./setup.md) for detailed instructions. The basic configuration requires just two environment variables:
See [the Setup guide](./SETUP.md) for detailed instructions. The basic configuration requires just two environment variables:
```bash
LLM_MODEL=openai/gpt-4o # or any other model supported by LiteLLM
LLM_KEY=your-api-key-here
@@ -77,7 +77,7 @@ Install dependencies:
poetry install
```

Start the Temporal Server and API server, see [setup](setup.md)
Start the Temporal Server and API server, see [setup](SETUP.md)

## Productionalization & Adding Features
- In a prod setting, I would need to ensure that payload data is stored separately (e.g. in S3 or a NoSQL db - the claim-check pattern) or otherwise 'garbage collected'. Without these techniques, long conversations will fill up the workflow's conversation history and start to breach Temporal event history payload limits.
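A minimal sketch of the claim-check idea described above; `BLOB_STORE` and the helper names are hypothetical stand-ins for S3 or a NoSQL store:

```python
# Claim-check sketch: park large payloads out of band and keep only a small
# ticket in the conversation history, so workflow event payloads stay small.
import uuid

BLOB_STORE: dict[str, str] = {}  # stand-in for S3 / a NoSQL store


def check_in(payload: str) -> str:
    """Store the payload externally and return a small claim ticket."""
    ticket = str(uuid.uuid4())
    BLOB_STORE[ticket] = payload
    return ticket


def check_out(ticket: str) -> str:
    """Redeem the ticket for the original payload when it is needed."""
    return BLOB_STORE[ticket]
```

Only the ticket (a 36-character UUID string) would be appended to the conversation history; the full payload is fetched on demand.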
@@ -1,16 +1,25 @@
import inspect
from temporalio import activity
import json
from typing import Optional, Sequence
from temporalio.common import RawValue
import os
from datetime import datetime
from typing import Sequence

from dotenv import load_dotenv
from models.data_types import EnvLookupOutput, ValidationInput, ValidationResult, ToolPromptInput, EnvLookupInput
from litellm import completion
from temporalio import activity
from temporalio.common import RawValue

from models.data_types import (
    EnvLookupInput,
    EnvLookupOutput,
    ToolPromptInput,
    ValidationInput,
    ValidationResult,
)

load_dotenv(override=True)

class ToolActivities:
    def __init__(self):
        """Initialize LLM client using LiteLLM."""
@@ -22,7 +31,9 @@ class ToolActivities:
        print(f"Using custom base URL: {self.llm_base_url}")

    @activity.defn
    async def agent_validatePrompt(self, validation_input: ValidationInput) -> ValidationResult:
    async def agent_validatePrompt(
        self, validation_input: ValidationInput
    ) -> ValidationResult:
        """
        Validates the prompt in the context of the conversation history and agent goal.
        Returns a ValidationResult indicating if the prompt makes sense given the context.
@@ -99,15 +110,15 @@ class ToolActivities:
        completion_kwargs = {
            "model": self.llm_model,
            "messages": messages,
            "api_key": self.llm_key
            "api_key": self.llm_key,
        }

        # Add base_url if configured
        if self.llm_base_url:
            completion_kwargs["base_url"] = self.llm_base_url

        response = completion(**completion_kwargs)

        response_content = response.choices[0].message.content
        activity.logger.info(f"LLM response: {response_content}")

@@ -136,19 +147,20 @@ class ToolActivities:
        """
        # Remove any markdown code block markers
        response_content = response_content.replace("```json", "").replace("```", "")

        # Remove any leading/trailing whitespace
        response_content = response_content.strip()

        return response_content

    @activity.defn
    async def get_wf_env_vars(self, input: EnvLookupInput) -> EnvLookupOutput:
        """ gets env vars for workflow as an activity result so it's deterministic
        handles default/None
        """gets env vars for workflow as an activity result so it's deterministic
        handles default/None
        """
        output: EnvLookupOutput = EnvLookupOutput(show_confirm=input.show_confirm_default,
                                                  multi_goal_mode=True)
        output: EnvLookupOutput = EnvLookupOutput(
            show_confirm=input.show_confirm_default, multi_goal_mode=True
        )
        show_confirm_value = os.getenv(input.show_confirm_env_var_name)
        if show_confirm_value is None:
            output.show_confirm = input.show_confirm_default
@@ -156,17 +168,21 @@ class ToolActivities:
            output.show_confirm = False
        else:
            output.show_confirm = True

        first_goal_value = os.getenv("AGENT_GOAL")
        if first_goal_value is None:
            output.multi_goal_mode = True  # default if unset
        elif first_goal_value is not None and first_goal_value.lower() != "goal_choose_agent_type":
            output.multi_goal_mode = True  # default if unset
        elif (
            first_goal_value is not None
            and first_goal_value.lower() != "goal_choose_agent_type"
        ):
            output.multi_goal_mode = False
        else:
            output.multi_goal_mode = True

        return output

@activity.defn(dynamic=True)
async def dynamic_tool_activity(args: Sequence[RawValue]) -> dict:
    from tools import get_handler
@@ -185,5 +201,3 @@ async def dynamic_tool_activity(args: Sequence[RawValue]) -> dict:
    # Optionally log or augment the result
    activity.logger.info(f"Tool '{tool_name}' result: {result}")
    return result
@@ -49,7 +49,7 @@ description="Help the user gather args for these tools in order: "
```

Tools should return meaningful information and be generally 'failsafe', returning a useful result based on the input.
(If you're doing a local data approach like those in [./tools/data/](./tools/data/), it's good to document how they can be set up to get a good result in the tool-specific [setup](./setup.md).)
(If you're doing a local data approach like those in [./tools/data/](./tools/data/), it's good to document how they can be set up to get a good result in the tool-specific [setup](./SETUP.md).)

### Add to Tool Registry
1. Open [/tools/tool_registry.py](tools/tool_registry.py) - this file contains the mapping of tool names to tool definitions (so the AI understands how to use them)
27
api/main.py
@@ -1,18 +1,18 @@
import asyncio
import os
from fastapi import FastAPI
from typing import Optional

from dotenv import load_dotenv
from fastapi import FastAPI, HTTPException
from fastapi.middleware.cors import CORSMiddleware
from temporalio.api.enums.v1 import WorkflowExecutionStatus
from temporalio.client import Client
from temporalio.exceptions import TemporalError
from temporalio.api.enums.v1 import WorkflowExecutionStatus
from fastapi import HTTPException
from dotenv import load_dotenv
import asyncio

from workflows.agent_goal_workflow import AgentGoalWorkflow
from models.data_types import CombinedInput, AgentGoalWorkflowParams
from models.data_types import AgentGoalWorkflowParams, CombinedInput
from shared.config import TEMPORAL_TASK_QUEUE, get_temporal_client
from tools.goal_registry import goal_list
from fastapi.middleware.cors import CORSMiddleware
from shared.config import get_temporal_client, TEMPORAL_TASK_QUEUE
from workflows.agent_goal_workflow import AgentGoalWorkflow

app = FastAPI()
temporal_client: Optional[Client] = None
@@ -23,7 +23,9 @@ load_dotenv()

def get_initial_agent_goal():
    """Get the agent goal from environment variables."""
    env_goal = os.getenv("AGENT_GOAL", "goal_choose_agent_type") #if no goal is set in the env file, default to choosing an agent
    env_goal = os.getenv(
        "AGENT_GOAL", "goal_choose_agent_type"
    )  # if no goal is set in the env file, default to choosing an agent
    for listed_goal in goal_list:
        if listed_goal.id == env_goal:
            return listed_goal
@@ -119,7 +121,8 @@ async def get_conversation_history():
    raise HTTPException(
        status_code=500, detail="Internal server error while querying workflow."
    )

@app.get("/agent-goal")
async def get_agent_goal():
    """Calls the workflow's 'get_agent_goal' query."""
@@ -148,7 +151,7 @@ async def send_prompt(prompt: str):
    combined_input = CombinedInput(
        tool_params=AgentGoalWorkflowParams(None, None),
        agent_goal=get_initial_agent_goal(),
        #change to get from workflow query
        # change to get from workflow query
    )

    workflow_id = "agent-workflow"
@@ -3,7 +3,7 @@ This documents some of the "why" behind the [architecture](./architecture.md).

## AI Models
We wanted to have the flexibility to use different models, because this space is changing rapidly and models get better regularly.
Also, for you, we wanted to let you pick your model of choice. The system is designed to make swapping models out simple. For how to do that, check out the [setup guide](./setup.md).
Also, for you, we wanted to let you pick your model of choice. The system is designed to make swapping models out simple. For how to do that, check out the [setup guide](./SETUP.md).

## Temporal
We asked one of the AI models used in this demo to answer this question (lightly edited):

@@ -39,7 +39,7 @@ This is where you can add probabilistic business logic to
## LLM
Probabilistic execution: it will _probably_ do what you tell it to do.
Turns the guidance from the prompts (see [agent prompts](./prompts/agent_prompt_generators.py) and [goal prompts](./tools/goal_registry.py)) into
You have a choice of providers - see [setup](./setup.md).
You have a choice of providers - see [setup](./SETUP.md).
The LLM:
- Drives toward the initial Goal and any subsequent Goals selected by the user
- Decides what to do based on input, such as:
@@ -1,5 +1,6 @@
from dataclasses import dataclass
from typing import Optional, Deque, Dict, Any, List, Union, Literal
from typing import Any, Deque, Dict, List, Literal, Optional, Union

from models.tool_definitions import AgentGoal

@@ -43,12 +44,14 @@ class ValidationResult:
        if self.validationFailedReason is None:
            self.validationFailedReason = {}

@dataclass
class EnvLookupInput:
    show_confirm_env_var_name: str
    show_confirm_default: bool

@dataclass
class EnvLookupOutput:
    show_confirm: bool
    multi_goal_mode: bool
    multi_goal_mode: bool

@@ -15,6 +15,7 @@ class ToolDefinition:
    description: str
    arguments: List[ToolArgument]

@dataclass
class AgentGoal:
    id: str
@@ -24,6 +25,4 @@ class AgentGoal:
    tools: List[ToolDefinition]
    description: str = "Description of the tools purpose and overall goal"
    starter_prompt: str = "Initial prompt to start the conversation"
    example_conversation_history: str = (
        "Example conversation history to help the AI agent understand the context of the conversation"
    )
    example_conversation_history: str = "Example conversation history to help the AI agent understand the context of the conversation"
184
poetry.lock
generated
@@ -1,4 +1,4 @@
# This file is automatically @generated by Poetry 2.1.3 and should not be changed by hand.
# This file is automatically @generated by Poetry 1.5.1 and should not be changed by hand.

[[package]]
name = "aiohappyeyeballs"
@@ -6,7 +6,6 @@ version = "2.6.1"
description = "Happy Eyeballs for asyncio"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "aiohappyeyeballs-2.6.1-py3-none-any.whl", hash = "sha256:f349ba8f4b75cb25c99c5c2d84e997e485204d2902a9597802b0371f09331fb8"},
{file = "aiohappyeyeballs-2.6.1.tar.gz", hash = "sha256:c3f9d0113123803ccadfdf3f0faa505bc78e6a72d1cc4806cbd719826e943558"},
@@ -18,7 +17,6 @@ version = "3.11.18"
description = "Async http client/server framework (asyncio)"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "aiohttp-3.11.18-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:96264854fedbea933a9ca4b7e0c745728f01380691687b7365d18d9e977179c4"},
{file = "aiohttp-3.11.18-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:9602044ff047043430452bc3a2089743fa85da829e6fc9ee0025351d66c332b6"},
@@ -114,7 +112,7 @@ propcache = ">=0.2.0"
yarl = ">=1.17.0,<2.0"

[package.extras]
speedups = ["Brotli ; platform_python_implementation == \"CPython\"", "aiodns (>=3.2.0) ; sys_platform == \"linux\" or sys_platform == \"darwin\"", "brotlicffi ; platform_python_implementation != \"CPython\""]
speedups = ["Brotli", "aiodns (>=3.2.0)", "brotlicffi"]

[[package]]
name = "aiosignal"
@@ -122,7 +120,6 @@ version = "1.3.2"
description = "aiosignal: a list of registered asynchronous callbacks"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "aiosignal-1.3.2-py2.py3-none-any.whl", hash = "sha256:45cde58e409a301715980c2b01d0c28bdde3770d8290b5eb2173759d9acb31a5"},
{file = "aiosignal-1.3.2.tar.gz", hash = "sha256:a8c255c66fafb1e499c9351d0bf32ff2d8a0321595ebac3b93713656d2436f54"},
@@ -137,7 +134,6 @@ version = "0.7.0"
description = "Reusable constraint types to use with typing.Annotated"
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "annotated_types-0.7.0-py3-none-any.whl", hash = "sha256:1f02e8b43a8fbbc3f3e0d4f0f4bfc8131bcb4eebe8849b8e5c773f3a1c582a53"},
{file = "annotated_types-0.7.0.tar.gz", hash = "sha256:aff07c09a53a08bc8cfccb9c85b05f1aa9a2a6f23728d790723543408344ce89"},
@@ -149,7 +145,6 @@ version = "4.5.2"
description = "High level compatibility layer for multiple asynchronous event loop implementations"
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "anyio-4.5.2-py3-none-any.whl", hash = "sha256:c011ee36bc1e8ba40e5a81cb9df91925c218fe9b778554e0b56a21e1b5d4716f"},
{file = "anyio-4.5.2.tar.gz", hash = "sha256:23009af4ed04ce05991845451e11ef02fc7c5ed29179ac9a420e5ad0ac7ddc5b"},
@@ -163,7 +158,7 @@ typing-extensions = {version = ">=4.1", markers = "python_version < \"3.11\""}

[package.extras]
doc = ["Sphinx (>=7.4,<8.0)", "packaging", "sphinx-autodoc-typehints (>=1.2.0)", "sphinx-rtd-theme"]
test = ["anyio[trio]", "coverage[toml] (>=7)", "exceptiongroup (>=1.2.0)", "hypothesis (>=4.0)", "psutil (>=5.9)", "pytest (>=7.0)", "pytest-mock (>=3.6.1)", "trustme", "truststore (>=0.9.1) ; python_version >= \"3.10\"", "uvloop (>=0.21.0b1) ; platform_python_implementation == \"CPython\" and platform_system != \"Windows\""]
test = ["anyio[trio]", "coverage[toml] (>=7)", "exceptiongroup (>=1.2.0)", "hypothesis (>=4.0)", "psutil (>=5.9)", "pytest (>=7.0)", "pytest-mock (>=3.6.1)", "trustme", "truststore (>=0.9.1)", "uvloop (>=0.21.0b1)"]
trio = ["trio (>=0.26.1)"]

[[package]]
@@ -172,8 +167,6 @@ version = "5.0.1"
description = "Timeout context manager for asyncio programs"
optional = false
python-versions = ">=3.8"
groups = ["main"]
markers = "python_version == \"3.10\""
files = [
{file = "async_timeout-5.0.1-py3-none-any.whl", hash = "sha256:39e3809566ff85354557ec2398b55e096c8364bacac9405a7a1fa429e77fe76c"},
{file = "async_timeout-5.0.1.tar.gz", hash = "sha256:d9321a7a3d5a6a5e187e824d2fa0793ce379a202935782d555d6e9d2735677d3"},
@@ -185,19 +178,18 @@ version = "25.3.0"
description = "Classes Without Boilerplate"
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "attrs-25.3.0-py3-none-any.whl", hash = "sha256:427318ce031701fea540783410126f03899a97ffc6f61596ad581ac2e40e3bc3"},
{file = "attrs-25.3.0.tar.gz", hash = "sha256:75d7cefc7fb576747b2c81b4442d4d4a1ce0900973527c011d1030fd3bf4af1b"},
]

[package.extras]
benchmark = ["cloudpickle ; platform_python_implementation == \"CPython\"", "hypothesis", "mypy (>=1.11.1) ; platform_python_implementation == \"CPython\" and python_version >= \"3.10\"", "pympler", "pytest (>=4.3.0)", "pytest-codspeed", "pytest-mypy-plugins ; platform_python_implementation == \"CPython\" and python_version >= \"3.10\"", "pytest-xdist[psutil]"]
cov = ["cloudpickle ; platform_python_implementation == \"CPython\"", "coverage[toml] (>=5.3)", "hypothesis", "mypy (>=1.11.1) ; platform_python_implementation == \"CPython\" and python_version >= \"3.10\"", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins ; platform_python_implementation == \"CPython\" and python_version >= \"3.10\"", "pytest-xdist[psutil]"]
dev = ["cloudpickle ; platform_python_implementation == \"CPython\"", "hypothesis", "mypy (>=1.11.1) ; platform_python_implementation == \"CPython\" and python_version >= \"3.10\"", "pre-commit-uv", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins ; platform_python_implementation == \"CPython\" and python_version >= \"3.10\"", "pytest-xdist[psutil]"]
benchmark = ["cloudpickle", "hypothesis", "mypy (>=1.11.1)", "pympler", "pytest (>=4.3.0)", "pytest-codspeed", "pytest-mypy-plugins", "pytest-xdist[psutil]"]
cov = ["cloudpickle", "coverage[toml] (>=5.3)", "hypothesis", "mypy (>=1.11.1)", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "pytest-xdist[psutil]"]
dev = ["cloudpickle", "hypothesis", "mypy (>=1.11.1)", "pre-commit-uv", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "pytest-xdist[psutil]"]
docs = ["cogapp", "furo", "myst-parser", "sphinx", "sphinx-notfound-page", "sphinxcontrib-towncrier", "towncrier"]
tests = ["cloudpickle ; platform_python_implementation == \"CPython\"", "hypothesis", "mypy (>=1.11.1) ; platform_python_implementation == \"CPython\" and python_version >= \"3.10\"", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins ; platform_python_implementation == \"CPython\" and python_version >= \"3.10\"", "pytest-xdist[psutil]"]
tests-mypy = ["mypy (>=1.11.1) ; platform_python_implementation == \"CPython\" and python_version >= \"3.10\"", "pytest-mypy-plugins ; platform_python_implementation == \"CPython\" and python_version >= \"3.10\""]
tests = ["cloudpickle", "hypothesis", "mypy (>=1.11.1)", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "pytest-xdist[psutil]"]
tests-mypy = ["mypy (>=1.11.1)", "pytest-mypy-plugins"]
[[package]]
name = "black"
@@ -205,7 +197,6 @@ version = "23.12.1"
description = "The uncompromising code formatter."
optional = false
python-versions = ">=3.8"
groups = ["dev"]
files = [
{file = "black-23.12.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:e0aaf6041986767a5e0ce663c7a2f0e9eaf21e6ff87a5f95cbf3675bfd4c41d2"},
{file = "black-23.12.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:c88b3711d12905b74206227109272673edce0cb29f27e1385f33b0163c414bba"},
@@ -242,7 +233,7 @@ typing-extensions = {version = ">=4.0.1", markers = "python_version < \"3.11\""}

[package.extras]
colorama = ["colorama (>=0.4.3)"]
d = ["aiohttp (>=3.7.4) ; sys_platform != \"win32\" or implementation_name != \"pypy\"", "aiohttp (>=3.7.4,!=3.9.0) ; sys_platform == \"win32\" and implementation_name == \"pypy\""]
d = ["aiohttp (>=3.7.4)", "aiohttp (>=3.7.4,!=3.9.0)"]
jupyter = ["ipython (>=7.8.0)", "tokenize-rt (>=3.2.0)"]
uvloop = ["uvloop (>=0.15.2)"]

@@ -252,7 +243,6 @@ version = "0.8.1"
description = "Generate complex HTML+JS pages with Python"
optional = false
python-versions = ">=3.7"
groups = ["main"]
files = [
{file = "branca-0.8.1-py3-none-any.whl", hash = "sha256:d29c5fab31f7c21a92e34bf3f854234e29fecdcf5d2df306b616f20d816be425"},
{file = "branca-0.8.1.tar.gz", hash = "sha256:ac397c2d79bd13af0d04193b26d5ed17031d27609a7f1fab50c438b8ae712390"},
@@ -267,7 +257,6 @@ version = "2024.12.14"
description = "Python package for providing Mozilla's CA Bundle."
optional = false
python-versions = ">=3.6"
groups = ["main"]
files = [
{file = "certifi-2024.12.14-py3-none-any.whl", hash = "sha256:1275f7a45be9464efc1173084eaa30f866fe2e47d389406136d332ed4967ec56"},
{file = "certifi-2024.12.14.tar.gz", hash = "sha256:b650d30f370c2b724812bee08008be0c4163b163ddaec3f2546c1caf65f191db"},
@@ -279,7 +268,6 @@ version = "3.4.1"
description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet."
optional = false
python-versions = ">=3.7"
groups = ["main"]
files = [
{file = "charset_normalizer-3.4.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:91b36a978b5ae0ee86c394f5a54d6ef44db1de0815eb43de826d41d21e4af3de"},
{file = "charset_normalizer-3.4.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7461baadb4dc00fd9e0acbe254e3d7d2112e7f92ced2adc96e54ef6501c5f176"},
@@ -381,7 +369,6 @@ version = "8.1.8"
description = "Composable command line interface toolkit"
optional = false
python-versions = ">=3.7"
groups = ["main", "dev"]
files = [
{file = "click-8.1.8-py3-none-any.whl", hash = "sha256:63c132bbbed01578a06712a2d1f497bb62d9c1c0d329b7903a866228027263b2"},
{file = "click-8.1.8.tar.gz", hash = "sha256:ed53c9d8990d83c2a27deae68e4ee337473f6330c040a31d4225c9574d16096a"},
@@ -396,12 +383,10 @@ version = "0.4.6"
description = "Cross-platform colored terminal text."
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,>=2.7"
groups = ["main", "dev"]
files = [
{file = "colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6"},
{file = "colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44"},
]
markers = {main = "platform_system == \"Windows\"", dev = "platform_system == \"Windows\" or sys_platform == \"win32\""}
[[package]]
|
||||
name = "distro"
|
||||
@@ -409,7 +394,6 @@ version = "1.9.0"
|
||||
description = "Distro - an OS platform information API"
|
||||
optional = false
|
||||
python-versions = ">=3.6"
|
||||
groups = ["main"]
|
||||
files = [
|
||||
{file = "distro-1.9.0-py3-none-any.whl", hash = "sha256:7bffd925d65168f85027d8da9af6bddab658135b840670a223589bc0c8ef02b2"},
|
||||
{file = "distro-1.9.0.tar.gz", hash = "sha256:2fa77c6fd8940f116ee1d6b94a2f90b13b5ea8d019b98bc8bafdcabcdd9bdbed"},
|
||||
@@ -421,8 +405,6 @@ version = "1.2.2"
|
||||
description = "Backport of PEP 654 (exception groups)"
|
||||
optional = false
|
||||
python-versions = ">=3.7"
|
||||
groups = ["main", "dev"]
|
||||
markers = "python_version == \"3.10\""
|
||||
files = [
|
||||
{file = "exceptiongroup-1.2.2-py3-none-any.whl", hash = "sha256:3111b9d131c238bec2f8f516e123e14ba243563fb135d3fe885990585aa7795b"},
|
||||
{file = "exceptiongroup-1.2.2.tar.gz", hash = "sha256:47c2edf7c6738fafb49fd34290706d1a1a2f4d1c6df275526b62cbb4aa5393cc"},
|
||||
@@ -437,7 +419,6 @@ version = "0.115.6"
|
||||
description = "FastAPI framework, high performance, easy to learn, fast to code, ready for production"
|
||||
optional = false
|
||||
python-versions = ">=3.8"
|
||||
groups = ["main"]
|
||||
files = [
|
||||
{file = "fastapi-0.115.6-py3-none-any.whl", hash = "sha256:e9240b29e36fa8f4bb7290316988e90c381e5092e0cbe84e7818cc3713bcf305"},
|
||||
{file = "fastapi-0.115.6.tar.gz", hash = "sha256:9ec46f7addc14ea472958a96aae5b5de65f39721a46aaf5705c480d9a8b76654"},
|
||||
@@ -458,7 +439,6 @@ version = "3.18.0"
|
||||
description = "A platform independent file lock."
|
||||
optional = false
|
||||
python-versions = ">=3.9"
|
||||
groups = ["main"]
|
||||
files = [
|
||||
{file = "filelock-3.18.0-py3-none-any.whl", hash = "sha256:c401f4f8377c4464e6db25fff06205fd89bdd83b65eb0488ed1b160f780e21de"},
|
||||
{file = "filelock-3.18.0.tar.gz", hash = "sha256:adbc88eabb99d2fec8c9c1b229b171f18afa655400173ddc653d5d01501fb9f2"},
|
||||
@@ -467,7 +447,7 @@ files = [
|
||||
[package.extras]
|
||||
docs = ["furo (>=2024.8.6)", "sphinx (>=8.1.3)", "sphinx-autodoc-typehints (>=3)"]
|
||||
testing = ["covdefaults (>=2.3)", "coverage (>=7.6.10)", "diff-cover (>=9.2.1)", "pytest (>=8.3.4)", "pytest-asyncio (>=0.25.2)", "pytest-cov (>=6)", "pytest-mock (>=3.14)", "pytest-timeout (>=2.3.1)", "virtualenv (>=20.28.1)"]
|
||||
typing = ["typing-extensions (>=4.12.2) ; python_version < \"3.11\""]
|
||||
typing = ["typing-extensions (>=4.12.2)"]
|
||||
|
||||
[[package]]
|
||||
name = "folium"
|
||||
@@ -475,7 +455,6 @@ version = "0.19.4"
description = "Make beautiful maps with Leaflet.js & Python"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "folium-0.19.4-py2.py3-none-any.whl", hash = "sha256:bea5246b6a6aa61b96d1c51399dd63254bacbd6ba8a826eeb491f45242032dfd"},
{file = "folium-0.19.4.tar.gz", hash = "sha256:431a655b52a9bf3efda336f2be022103f0106504a0599e7c349efbfd30bafda6"},
@@ -497,7 +476,6 @@ version = "1.6.0"
description = "A list-like structure which implements collections.abc.MutableSequence"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "frozenlist-1.6.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:e6e558ea1e47fd6fa8ac9ccdad403e5dd5ecc6ed8dda94343056fa4277d5c65e"},
{file = "frozenlist-1.6.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:f4b3cd7334a4bbc0c472164f3744562cb72d05002cc6fcf58adb104630bbc352"},
@@ -611,7 +589,6 @@ version = "2025.5.0"
description = "File-system specification"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "fsspec-2025.5.0-py3-none-any.whl", hash = "sha256:0ca253eca6b5333d8a2b8bd98c7326fe821f1f0fdbd34e1b445bddde8e804c95"},
{file = "fsspec-2025.5.0.tar.gz", hash = "sha256:e4f4623bb6221f7407fd695cc535d1f857a077eb247580f4ada34f5dc25fd5c8"},
@@ -651,7 +628,6 @@ version = "1.0.1"
description = "Geographic pandas extensions"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "geopandas-1.0.1-py3-none-any.whl", hash = "sha256:01e147d9420cc374d26f51fc23716ac307f32b49406e4bd8462c07e82ed1d3d6"},
{file = "geopandas-1.0.1.tar.gz", hash = "sha256:b8bf70a5534588205b7a56646e2082fb1de9a03599651b3d80c99ea4c2ca08ab"},
@@ -675,7 +651,6 @@ version = "10.1.1"
description = "A Python library for analyzing GTFS feeds."
optional = false
python-versions = ">=3.10"
groups = ["main"]
files = [
{file = "gtfs_kit-10.1.1-py3-none-any.whl", hash = "sha256:2a54982d30993c365ee082eb3f5dc981ecd89c294728199a1f39776dee6c71b2"},
{file = "gtfs_kit-10.1.1.tar.gz", hash = "sha256:b94135883fbb4a5135b33d66215e12507a0480218f53df8c6a3a88ee359e7ab4"},
@@ -696,7 +671,6 @@ version = "0.14.0"
description = "A pure-Python, bring-your-own-I/O implementation of HTTP/1.1"
optional = false
python-versions = ">=3.7"
groups = ["main"]
files = [
{file = "h11-0.14.0-py3-none-any.whl", hash = "sha256:e3fe4ac4b851c468cc8363d500db52c2ead036020723024a109d37346efaa761"},
{file = "h11-0.14.0.tar.gz", hash = "sha256:8f19fbbe99e72420ff35c00b27a34cb9937e902a8b810e2c88300c6f0a3b699d"},
@@ -708,7 +682,6 @@ version = "1.0.7"
description = "A minimal low-level HTTP client."
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "httpcore-1.0.7-py3-none-any.whl", hash = "sha256:a3fff8f43dc260d5bd363d9f9cf1830fa3a458b332856f34282de498ed420edd"},
{file = "httpcore-1.0.7.tar.gz", hash = "sha256:8551cb62a169ec7162ac7be8d4817d561f60e08eaa485234898414bb5a8a0b4c"},
@@ -730,7 +703,6 @@ version = "0.27.2"
description = "The next generation HTTP client."
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "httpx-0.27.2-py3-none-any.whl", hash = "sha256:7bb2708e112d8fdd7829cd4243970f0c223274051cb35ee80c03301ee29a3df0"},
{file = "httpx-0.27.2.tar.gz", hash = "sha256:f7c2be1d2f3c3c3160d441802406b206c2b76f5947b11115e6df10c6c65e66c2"},
@@ -744,7 +716,7 @@ idna = "*"
sniffio = "*"

[package.extras]
brotli = ["brotli ; platform_python_implementation == \"CPython\"", "brotlicffi ; platform_python_implementation != \"CPython\""]
brotli = ["brotli", "brotlicffi"]
cli = ["click (==8.*)", "pygments (==2.*)", "rich (>=10,<14)"]
http2 = ["h2 (>=3,<5)"]
socks = ["socksio (==1.*)"]
@@ -756,7 +728,6 @@ version = "0.31.4"
description = "Client library to download and publish models, datasets and other repos on the huggingface.co hub"
optional = false
python-versions = ">=3.8.0"
groups = ["main"]
files = [
{file = "huggingface_hub-0.31.4-py3-none-any.whl", hash = "sha256:4f70704760296cc69b612916056e9845f5490a33782b924fc531767967acc15d"},
{file = "huggingface_hub-0.31.4.tar.gz", hash = "sha256:5a7bc710b9f9c028aee5b1476867b4ec5c1b92f043cb364d5fdc54354757e4ce"},
@@ -792,7 +763,6 @@ version = "3.10"
description = "Internationalized Domain Names in Applications (IDNA)"
optional = false
python-versions = ">=3.6"
groups = ["main"]
files = [
{file = "idna-3.10-py3-none-any.whl", hash = "sha256:946d195a0d259cbba61165e88e65941f16e9b36ea6ddb97f00452bae8b1287d3"},
{file = "idna-3.10.tar.gz", hash = "sha256:12f65c9b470abda6dc35cf8e63cc574b1c52b11df2c86030af0ac09b01b13ea9"},
@@ -807,7 +777,6 @@ version = "8.7.0"
description = "Read metadata from Python packages"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "importlib_metadata-8.7.0-py3-none-any.whl", hash = "sha256:e5dd1551894c77868a30651cef00984d50e1002d06942a7101d34870c5f02afd"},
{file = "importlib_metadata-8.7.0.tar.gz", hash = "sha256:d13b81ad223b890aa16c5471f2ac3056cf76c5f10f82d6f9292f0b415f389000"},
@@ -817,12 +786,12 @@ files = [
zipp = ">=3.20"

[package.extras]
check = ["pytest-checkdocs (>=2.4)", "pytest-ruff (>=0.2.1) ; sys_platform != \"cygwin\""]
check = ["pytest-checkdocs (>=2.4)", "pytest-ruff (>=0.2.1)"]
cover = ["pytest-cov"]
doc = ["furo", "jaraco.packaging (>=9.3)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-lint"]
enabler = ["pytest-enabler (>=2.2)"]
perf = ["ipython"]
test = ["flufl.flake8", "importlib_resources (>=1.3) ; python_version < \"3.9\"", "jaraco.test (>=5.4)", "packaging", "pyfakefs", "pytest (>=6,!=8.1.*)", "pytest-perf (>=0.9.2)"]
test = ["flufl.flake8", "importlib_resources (>=1.3)", "jaraco.test (>=5.4)", "packaging", "pyfakefs", "pytest (>=6,!=8.1.*)", "pytest-perf (>=0.9.2)"]
type = ["pytest-mypy"]

[[package]]
@@ -831,7 +800,6 @@ version = "2.0.0"
description = "brain-dead simple config-ini parsing"
optional = false
python-versions = ">=3.7"
groups = ["dev"]
files = [
{file = "iniconfig-2.0.0-py3-none-any.whl", hash = "sha256:b6a85871a79d2e3b22d2d1b94ac2824226a63c6b741c88f7ae975f18b6778374"},
{file = "iniconfig-2.0.0.tar.gz", hash = "sha256:2d91e135bf72d31a410b17c16da610a82cb55f6b0477d1a902134b24a455b8b3"},
@@ -843,7 +811,6 @@ version = "5.13.2"
description = "A Python utility / library to sort Python imports."
optional = false
python-versions = ">=3.8.0"
groups = ["dev"]
files = [
{file = "isort-5.13.2-py3-none-any.whl", hash = "sha256:8ca5e72a8d85860d5a3fa69b8745237f2939afe12dbf656afbcb47fe72d947a6"},
{file = "isort-5.13.2.tar.gz", hash = "sha256:48fdfcb9face5d58a4f6dde2e72a1fb8dcaf8ab26f95ab49fab84c2ddefb0109"},
@@ -858,7 +825,6 @@ version = "3.1.5"
description = "A very fast and expressive template engine."
optional = false
python-versions = ">=3.7"
groups = ["main"]
files = [
{file = "jinja2-3.1.5-py3-none-any.whl", hash = "sha256:aba0f4dc9ed8013c424088f68a5c226f7d6097ed89b246d7749c2ec4175c6adb"},
{file = "jinja2-3.1.5.tar.gz", hash = "sha256:8fefff8dc3034e27bb80d67c671eb8a9bc424c0ef4c0826edbff304cceff43bb"},
@@ -876,7 +842,6 @@ version = "0.8.2"
description = "Fast iterable JSON parser."
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "jiter-0.8.2-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:ca8577f6a413abe29b079bc30f907894d7eb07a865c4df69475e868d73e71c7b"},
{file = "jiter-0.8.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:b25bd626bde7fb51534190c7e3cb97cee89ee76b76d7585580e22f34f5e3f393"},
@@ -962,7 +927,6 @@ version = "1.3.0"
description = "JSON to HTML Table Representation"
optional = false
python-versions = "*"
groups = ["main"]
files = [
{file = "json2html-1.3.0.tar.gz", hash = "sha256:8951a53662ae9cfd812685facdba693fc950ffc1c1fd1a8a2d3cf4c34600689c"},
]
@@ -973,7 +937,6 @@ version = "4.23.0"
description = "An implementation of JSON Schema validation for Python"
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "jsonschema-4.23.0-py3-none-any.whl", hash = "sha256:fbadb6f8b144a8f8cf9f0b89ba94501d143e50411a1278633f56a7acf7fd5566"},
{file = "jsonschema-4.23.0.tar.gz", hash = "sha256:d71497fef26351a33265337fa77ffeb82423f3ea21283cd9467bb03999266bc4"},
@@ -995,7 +958,6 @@ version = "2025.4.1"
description = "The JSON Schema meta-schemas and vocabularies, exposed as a Registry"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "jsonschema_specifications-2025.4.1-py3-none-any.whl", hash = "sha256:4653bffbd6584f7de83a67e0d620ef16900b390ddc7939d56684d6c81e33f1af"},
{file = "jsonschema_specifications-2025.4.1.tar.gz", hash = "sha256:630159c9f4dbea161a6a2205c3011cc4f18ff381b189fff48bb39b9bf26ae608"},
@@ -1010,7 +972,6 @@ version = "1.70.0"
description = "Library to easily interface with LLM API providers"
optional = false
python-versions = "!=2.7.*,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,!=3.7.*,>=3.8"
groups = ["main"]
files = [
{file = "litellm-1.70.0-py3-none-any.whl", hash = "sha256:7e094057b38ddb1d77f61452895835aa5d376db1850e9a1bc0342c5631d89638"},
{file = "litellm-1.70.0.tar.gz", hash = "sha256:357f3891e38f23a12f0932c235ed860dc41bc5880afaee7229e6d25318652706"},
@@ -1030,8 +991,8 @@ tiktoken = ">=0.7.0"
tokenizers = "*"

[package.extras]
extra-proxy = ["azure-identity (>=1.15.0,<2.0.0)", "azure-keyvault-secrets (>=4.8.0,<5.0.0)", "google-cloud-kms (>=2.21.3,<3.0.0)", "prisma (==0.11.0)", "redisvl (>=0.4.1,<0.5.0) ; python_version >= \"3.9\" and python_version < \"3.14\"", "resend (>=0.8.0,<0.9.0)"]
proxy = ["PyJWT (>=2.8.0,<3.0.0)", "apscheduler (>=3.10.4,<4.0.0)", "backoff", "boto3 (==1.34.34)", "cryptography (>=43.0.1,<44.0.0)", "fastapi (>=0.115.5,<0.116.0)", "fastapi-sso (>=0.16.0,<0.17.0)", "gunicorn (>=23.0.0,<24.0.0)", "litellm-enterprise (==0.1.3)", "litellm-proxy-extras (==0.1.21)", "mcp (==1.5.0) ; python_version >= \"3.10\"", "orjson (>=3.9.7,<4.0.0)", "pynacl (>=1.5.0,<2.0.0)", "python-multipart (>=0.0.18,<0.0.19)", "pyyaml (>=6.0.1,<7.0.0)", "rich (==13.7.1)", "rq", "uvicorn (>=0.29.0,<0.30.0)", "uvloop (>=0.21.0,<0.22.0) ; sys_platform != \"win32\"", "websockets (>=13.1.0,<14.0.0)"]
extra-proxy = ["azure-identity (>=1.15.0,<2.0.0)", "azure-keyvault-secrets (>=4.8.0,<5.0.0)", "google-cloud-kms (>=2.21.3,<3.0.0)", "prisma (==0.11.0)", "redisvl (>=0.4.1,<0.5.0)", "resend (>=0.8.0,<0.9.0)"]
proxy = ["PyJWT (>=2.8.0,<3.0.0)", "apscheduler (>=3.10.4,<4.0.0)", "backoff", "boto3 (==1.34.34)", "cryptography (>=43.0.1,<44.0.0)", "fastapi (>=0.115.5,<0.116.0)", "fastapi-sso (>=0.16.0,<0.17.0)", "gunicorn (>=23.0.0,<24.0.0)", "litellm-enterprise (==0.1.3)", "litellm-proxy-extras (==0.1.21)", "mcp (==1.5.0)", "orjson (>=3.9.7,<4.0.0)", "pynacl (>=1.5.0,<2.0.0)", "python-multipart (>=0.0.18,<0.0.19)", "pyyaml (>=6.0.1,<7.0.0)", "rich (==13.7.1)", "rq", "uvicorn (>=0.29.0,<0.30.0)", "uvloop (>=0.21.0,<0.22.0)", "websockets (>=13.1.0,<14.0.0)"]
utils = ["numpydoc"]

[[package]]
@@ -1040,7 +1001,6 @@ version = "3.0.2"
description = "Safely add untrusted strings to HTML/XML markup."
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "MarkupSafe-3.0.2-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:7e94c425039cde14257288fd61dcfb01963e658efbc0ff54f5306b06054700f8"},
{file = "MarkupSafe-3.0.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:9e2d922824181480953426608b81967de705c3cef4d1af983af849d7bd619158"},
@@ -1111,7 +1071,6 @@ version = "6.4.4"
description = "multidict implementation"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "multidict-6.4.4-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:8adee3ac041145ffe4488ea73fa0a622b464cc25340d98be76924d0cda8545ff"},
{file = "multidict-6.4.4-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:b61e98c3e2a861035aaccd207da585bdcacef65fe01d7a0d07478efac005e028"},
@@ -1222,13 +1181,66 @@ files = [
[package.dependencies]
typing-extensions = {version = ">=4.1.0", markers = "python_version < \"3.11\""}

[[package]]
name = "mypy"
version = "1.16.0"
description = "Optional static typing for Python"
optional = false
python-versions = ">=3.9"
files = [
{file = "mypy-1.16.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:7909541fef256527e5ee9c0a7e2aeed78b6cda72ba44298d1334fe7881b05c5c"},
{file = "mypy-1.16.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:e71d6f0090c2256c713ed3d52711d01859c82608b5d68d4fa01a3fe30df95571"},
{file = "mypy-1.16.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:936ccfdd749af4766be824268bfe22d1db9eb2f34a3ea1d00ffbe5b5265f5491"},
{file = "mypy-1.16.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:4086883a73166631307fdd330c4a9080ce24913d4f4c5ec596c601b3a4bdd777"},
{file = "mypy-1.16.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:feec38097f71797da0231997e0de3a58108c51845399669ebc532c815f93866b"},
{file = "mypy-1.16.0-cp310-cp310-win_amd64.whl", hash = "sha256:09a8da6a0ee9a9770b8ff61b39c0bb07971cda90e7297f4213741b48a0cc8d93"},
{file = "mypy-1.16.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:9f826aaa7ff8443bac6a494cf743f591488ea940dd360e7dd330e30dd772a5ab"},
{file = "mypy-1.16.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:82d056e6faa508501af333a6af192c700b33e15865bda49611e3d7d8358ebea2"},
{file = "mypy-1.16.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:089bedc02307c2548eb51f426e085546db1fa7dd87fbb7c9fa561575cf6eb1ff"},
{file = "mypy-1.16.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:6a2322896003ba66bbd1318c10d3afdfe24e78ef12ea10e2acd985e9d684a666"},
{file = "mypy-1.16.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:021a68568082c5b36e977d54e8f1de978baf401a33884ffcea09bd8e88a98f4c"},
{file = "mypy-1.16.0-cp311-cp311-win_amd64.whl", hash = "sha256:54066fed302d83bf5128632d05b4ec68412e1f03ef2c300434057d66866cea4b"},
{file = "mypy-1.16.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:c5436d11e89a3ad16ce8afe752f0f373ae9620841c50883dc96f8b8805620b13"},
{file = "mypy-1.16.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:f2622af30bf01d8fc36466231bdd203d120d7a599a6d88fb22bdcb9dbff84090"},
{file = "mypy-1.16.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:d045d33c284e10a038f5e29faca055b90eee87da3fc63b8889085744ebabb5a1"},
{file = "mypy-1.16.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:b4968f14f44c62e2ec4a038c8797a87315be8df7740dc3ee8d3bfe1c6bf5dba8"},
{file = "mypy-1.16.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:eb14a4a871bb8efb1e4a50360d4e3c8d6c601e7a31028a2c79f9bb659b63d730"},
{file = "mypy-1.16.0-cp312-cp312-win_amd64.whl", hash = "sha256:bd4e1ebe126152a7bbaa4daedd781c90c8f9643c79b9748caa270ad542f12bec"},
{file = "mypy-1.16.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:a9e056237c89f1587a3be1a3a70a06a698d25e2479b9a2f57325ddaaffc3567b"},
{file = "mypy-1.16.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:0b07e107affb9ee6ce1f342c07f51552d126c32cd62955f59a7db94a51ad12c0"},
{file = "mypy-1.16.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:c6fb60cbd85dc65d4d63d37cb5c86f4e3a301ec605f606ae3a9173e5cf34997b"},
{file = "mypy-1.16.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:a7e32297a437cc915599e0578fa6bc68ae6a8dc059c9e009c628e1c47f91495d"},
{file = "mypy-1.16.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:afe420c9380ccec31e744e8baff0d406c846683681025db3531b32db56962d52"},
{file = "mypy-1.16.0-cp313-cp313-win_amd64.whl", hash = "sha256:55f9076c6ce55dd3f8cd0c6fff26a008ca8e5131b89d5ba6d86bd3f47e736eeb"},
{file = "mypy-1.16.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f56236114c425620875c7cf71700e3d60004858da856c6fc78998ffe767b73d3"},
{file = "mypy-1.16.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:15486beea80be24ff067d7d0ede673b001d0d684d0095803b3e6e17a886a2a92"},
{file = "mypy-1.16.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:f2ed0e0847a80655afa2c121835b848ed101cc7b8d8d6ecc5205aedc732b1436"},
{file = "mypy-1.16.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:eb5fbc8063cb4fde7787e4c0406aa63094a34a2daf4673f359a1fb64050e9cb2"},
{file = "mypy-1.16.0-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:a5fcfdb7318c6a8dd127b14b1052743b83e97a970f0edb6c913211507a255e20"},
{file = "mypy-1.16.0-cp39-cp39-win_amd64.whl", hash = "sha256:2e7e0ad35275e02797323a5aa1be0b14a4d03ffdb2e5f2b0489fa07b89c67b21"},
{file = "mypy-1.16.0-py3-none-any.whl", hash = "sha256:29e1499864a3888bca5c1542f2d7232c6e586295183320caa95758fc84034031"},
{file = "mypy-1.16.0.tar.gz", hash = "sha256:84b94283f817e2aa6350a14b4a8fb2a35a53c286f97c9d30f53b63620e7af8ab"},
]

[package.dependencies]
mypy_extensions = ">=1.0.0"
pathspec = ">=0.9.0"
tomli = {version = ">=1.1.0", markers = "python_version < \"3.11\""}
typing_extensions = ">=4.6.0"

[package.extras]
dmypy = ["psutil (>=4.0)"]
faster-cache = ["orjson"]
install-types = ["pip"]
mypyc = ["setuptools (>=50)"]
reports = ["lxml"]

[[package]]
name = "mypy-extensions"
version = "1.0.0"
description = "Type system extensions for programs checked with the mypy type checker."
optional = false
python-versions = ">=3.5"
groups = ["dev"]
files = [
{file = "mypy_extensions-1.0.0-py3-none-any.whl", hash = "sha256:4392f6c0eb8a5668a69e23d168ffa70f0be9ccfd32b5cc2d26a34ae5b844552d"},
{file = "mypy_extensions-1.0.0.tar.gz", hash = "sha256:75dbf8955dc00442a438fc4d0666508a9a97b6bd41aa2f0ffe9d2f2725af0782"},
@@ -1240,7 +1252,6 @@ version = "2.2.2"
description = "Fundamental package for array computing in Python"
optional = false
python-versions = ">=3.10"
groups = ["main"]
files = [
{file = "numpy-2.2.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:7079129b64cb78bdc8d611d1fd7e8002c0a2565da6a47c4df8062349fee90e3e"},
{file = "numpy-2.2.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:2ec6c689c61df613b783aeb21f945c4cbe6c51c28cb70aae8430577ab39f163e"},
@@ -1305,7 +1316,6 @@ version = "1.75.0"
description = "The official Python library for the openai API"
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "openai-1.75.0-py3-none-any.whl", hash = "sha256:fe6f932d2ded3b429ff67cc9ad118c71327db32eb9d32dd723de3acfca337125"},
{file = "openai-1.75.0.tar.gz", hash = "sha256:fb3ea907efbdb1bcfd0c44507ad9c961afd7dce3147292b54505ecfd17be8fd1"},
@@ -1332,7 +1342,6 @@ version = "24.2"
description = "Core utilities for Python packages"
optional = false
python-versions = ">=3.8"
groups = ["main", "dev"]
files = [
{file = "packaging-24.2-py3-none-any.whl", hash = "sha256:09abb1bccd265c01f4a3aa3f7a7db064b36514d2cba19a2f694fe6150451a759"},
{file = "packaging-24.2.tar.gz", hash = "sha256:c228a6dc5e932d346bc5739379109d49e8853dd8223571c7c5b55260edc0b97f"},
@@ -1344,7 +1353,6 @@ version = "2.2.3"
description = "Powerful data structures for data analysis, time series, and statistics"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "pandas-2.2.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:1948ddde24197a0f7add2bdc4ca83bf2b1ef84a1bc8ccffd95eda17fd836ecb5"},
{file = "pandas-2.2.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:381175499d3802cde0eabbaf6324cce0c4f5d52ca6f8c377c29ad442f50f6348"},
@@ -1431,7 +1439,6 @@ version = "0.12.1"
description = "Utility library for gitignore style pattern matching of file paths."
optional = false
python-versions = ">=3.8"
groups = ["dev"]
files = [
{file = "pathspec-0.12.1-py3-none-any.whl", hash = "sha256:a0d503e138a4c123b27490a4f7beda6a01c6f288df0e4a8b79c7eb0dc7b4cc08"},
{file = "pathspec-0.12.1.tar.gz", hash = "sha256:a482d51503a1ab33b1c67a6c3813a26953dbdc71c31dacaef9a838c4e29f5712"},
@@ -1443,7 +1450,6 @@ version = "4.3.6"
description = "A small Python package for determining appropriate platform-specific dirs, e.g. a `user data dir`."
optional = false
python-versions = ">=3.8"
groups = ["dev"]
files = [
{file = "platformdirs-4.3.6-py3-none-any.whl", hash = "sha256:73e575e1408ab8103900836b97580d5307456908a03e92031bab39e4554cc3fb"},
{file = "platformdirs-4.3.6.tar.gz", hash = "sha256:357fb2acbc885b0419afd3ce3ed34564c13c9b95c89360cd9563f73aa5e2b907"},
@@ -1460,7 +1466,6 @@ version = "1.5.0"
description = "plugin and hook calling mechanisms for python"
optional = false
python-versions = ">=3.8"
groups = ["dev"]
files = [
{file = "pluggy-1.5.0-py3-none-any.whl", hash = "sha256:44e1ad92c8ca002de6377e165f3e0f1be63266ab4d554740532335b9d75ea669"},
{file = "pluggy-1.5.0.tar.gz", hash = "sha256:2cffa88e94fdc978c4c574f15f9e59b7f4201d439195c3715ca9e2486f1d0cf1"},
@@ -1476,7 +1481,6 @@ version = "0.3.1"
description = "Accelerated property cache"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "propcache-0.3.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:f27785888d2fdd918bc36de8b8739f2d6c791399552333721b58193f68ea3e98"},
{file = "propcache-0.3.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d4e89cde74154c7b5957f87a355bb9c8ec929c167b59c83d90654ea36aeb6180"},
@@ -1584,7 +1588,6 @@ version = "5.29.2"
description = ""
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "protobuf-5.29.2-cp310-abi3-win32.whl", hash = "sha256:c12ba8249f5624300cf51c3d0bfe5be71a60c63e4dcf51ffe9a68771d958c851"},
{file = "protobuf-5.29.2-cp310-abi3-win_amd64.whl", hash = "sha256:842de6d9241134a973aab719ab42b008a18a90f9f07f06ba480df268f86432f9"},
@@ -1605,7 +1608,6 @@ version = "2.10.4"
description = "Data validation using Python type hints"
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "pydantic-2.10.4-py3-none-any.whl", hash = "sha256:597e135ea68be3a37552fb524bc7d0d66dcf93d395acd93a00682f1efcb8ee3d"},
{file = "pydantic-2.10.4.tar.gz", hash = "sha256:82f12e9723da6de4fe2ba888b5971157b3be7ad914267dea8f05f82b28254f06"},
@@ -1618,7 +1620,7 @@ typing-extensions = ">=4.12.2"

[package.extras]
email = ["email-validator (>=2.0.0)"]
timezone = ["tzdata ; python_version >= \"3.9\" and platform_system == \"Windows\""]
timezone = ["tzdata"]

[[package]]
name = "pydantic-core"
@@ -1626,7 +1628,6 @@ version = "2.27.2"
description = "Core functionality for Pydantic validation and serialization"
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "pydantic_core-2.27.2-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:2d367ca20b2f14095a8f4fa1210f5a7b78b8a20009ecced6b12818f455b1e9fa"},
{file = "pydantic_core-2.27.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:491a2b73db93fab69731eaee494f320faa4e093dbed776be1a829c2eb222c34c"},
@@ -1739,7 +1740,6 @@ version = "0.10.0"
description = "Vectorized spatial vector file format I/O using GDAL/OGR"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "pyogrio-0.10.0-cp310-cp310-macosx_12_0_arm64.whl", hash = "sha256:046eeeae12a03a3ebc3dc5ff5a87664e4f5fc0a4fb1ea5d5c45d547fa941072b"},
{file = "pyogrio-0.10.0-cp310-cp310-macosx_12_0_x86_64.whl", hash = "sha256:44380f4d9245c776f432526e29ce4d29238aea26adad991803c4f453474f51d3"},
@@ -1791,7 +1791,6 @@ version = "3.7.0"
description = "Python interface to PROJ (cartographic projections and coordinate transformations library)"
optional = false
python-versions = ">=3.10"
groups = ["main"]
files = [
{file = "pyproj-3.7.0-cp310-cp310-macosx_12_0_x86_64.whl", hash = "sha256:d5c7e7d24b967e328a5efd013f466804a1f226d1106ac7efc47dcc99360dbc8f"},
{file = "pyproj-3.7.0-cp310-cp310-macosx_14_0_arm64.whl", hash = "sha256:448958c46bd3fe2da91c89ba551ac5835e63073ca861422c6eb1af89979dfab1"},
@@ -1829,7 +1828,6 @@ version = "8.3.5"
description = "pytest: simple powerful testing with Python"
optional = false
python-versions = ">=3.8"
groups = ["dev"]
files = [
{file = "pytest-8.3.5-py3-none-any.whl", hash = "sha256:c69214aa47deac29fad6c2a4f590b9c4a9fdb16a403176fe154b79c0b4d4d820"},
{file = "pytest-8.3.5.tar.gz", hash = "sha256:f4efe70cc14e511565ac476b57c279e12a855b11f48f212af1080ef2263d3845"},
@@ -1852,7 +1850,6 @@ version = "0.26.0"
description = "Pytest support for asyncio"
optional = false
python-versions = ">=3.9"
groups = ["dev"]
files = [
{file = "pytest_asyncio-0.26.0-py3-none-any.whl", hash = "sha256:7b51ed894f4fbea1340262bdae5135797ebbe21d8638978e35d31c6d19f72fb0"},
{file = "pytest_asyncio-0.26.0.tar.gz", hash = "sha256:c4df2a697648241ff39e7f0e4a73050b03f123f760673956cf0d72a4990e312f"},
@@ -1871,7 +1868,6 @@ version = "2.9.0.post0"
description = "Extensions to the standard Python datetime module"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,>=2.7"
groups = ["main"]
files = [
{file = "python-dateutil-2.9.0.post0.tar.gz", hash = "sha256:37dd54208da7e1cd875388217d5e00ebd4179249f90fb72437e91a35459a0ad3"},
{file = "python_dateutil-2.9.0.post0-py2.py3-none-any.whl", hash = "sha256:a8b2bc7bffae282281c8140a97d3aa9c14da0b136dfe83f850eea9a5f7470427"},
@@ -1886,7 +1882,6 @@ version = "1.0.1"
description = "Read key-value pairs from a .env file and set them as environment variables"
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "python-dotenv-1.0.1.tar.gz", hash = "sha256:e324ee90a023d808f1959c46bcbc04446a10ced277783dc6ee09987c37ec10ca"},
{file = "python_dotenv-1.0.1-py3-none-any.whl", hash = "sha256:f7b63ef50f1b690dddf550d03497b66d609393b40b564ed0d674909a68ebf16a"},
@@ -1901,7 +1896,6 @@ version = "2025.1"
description = "World timezone definitions, modern and historical"
optional = false
python-versions = "*"
groups = ["main"]
files = [
{file = "pytz-2025.1-py2.py3-none-any.whl", hash = "sha256:89dd22dca55b46eac6eda23b2d72721bf1bdfef212645d81513ef5d03038de57"},
{file = "pytz-2025.1.tar.gz", hash = "sha256:c2db42be2a2518b28e65f9207c4d05e6ff547d1efa4086469ef855e4ab70178e"},
@@ -1913,7 +1907,6 @@ version = "6.0.2"
description = "YAML parser and emitter for Python"
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "PyYAML-6.0.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:0a9a2848a5b7feac301353437eb7d5957887edbf81d56e903999a75a3d743086"},
{file = "PyYAML-6.0.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:29717114e51c84ddfba879543fb232a6ed60086602313ca38cce623c1d62cfbf"},
@@ -1976,7 +1969,6 @@ version = "0.36.2"
description = "JSON Referencing + Python"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "referencing-0.36.2-py3-none-any.whl", hash = "sha256:e8699adbbf8b5c7de96d8ffa0eb5c158b3beafce084968e2ea8bb08c6794dcd0"},
{file = "referencing-0.36.2.tar.gz", hash = "sha256:df2e89862cd09deabbdba16944cc3f10feb6b3e6f18e902f7cc25609a34775aa"},
@@ -1993,7 +1985,6 @@ version = "2024.11.6"
description = "Alternative regular expression module, to replace re."
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "regex-2024.11.6-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:ff590880083d60acc0433f9c3f713c51f7ac6ebb9adf889c79a261ecf541aa91"},
{file = "regex-2024.11.6-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:658f90550f38270639e83ce492f27d2c8d2cd63805c65a13a14d36ca126753f0"},
@@ -2097,7 +2088,6 @@ version = "2.32.3"
description = "Python HTTP for Humans."
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "requests-2.32.3-py3-none-any.whl", hash = "sha256:70761cfe03c773ceb22aa2f671b4757976145175cdfca038c02654d061d6dcc6"},
{file = "requests-2.32.3.tar.gz", hash = "sha256:55365417734eb18255590a9ff9eb97e9e1da868d4ccd6402399eaf68af20a760"},
@@ -2119,7 +2109,6 @@ version = "0.25.0"
description = "Python bindings to Rust's persistent data structures (rpds)"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "rpds_py-0.25.0-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:c146a24a8f0dc4a7846fb4640b88b3a68986585b8ce8397af15e66b7c5817439"},
{file = "rpds_py-0.25.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:77814c7a4e1dc43fba73aeb4c1ef0fe37d901f3aa869a4823de5ea843a283fd0"},
@@ -2243,7 +2232,6 @@ version = "1.3.0"
description = "R-Tree spatial index for Python GIS"
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "Rtree-1.3.0-py3-none-macosx_10_9_x86_64.whl", hash = "sha256:80879d9db282a2273ca3a0d896c84583940e9777477727a277624ebfd424c517"},
{file = "Rtree-1.3.0-py3-none-macosx_11_0_arm64.whl", hash = "sha256:4328e9e421797c347e6eb08efbbade962fe3664ebd60c1dffe82c40911b1e125"},
@@ -2263,7 +2251,6 @@ version = "2.0.7"
description = "Manipulation and analysis of geometric objects"
optional = false
python-versions = ">=3.7"
groups = ["main"]
files = [
{file = "shapely-2.0.7-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:33fb10e50b16113714ae40adccf7670379e9ccf5b7a41d0002046ba2b8f0f691"},
{file = "shapely-2.0.7-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:f44eda8bd7a4bccb0f281264b34bf3518d8c4c9a8ffe69a1a05dabf6e8461147"},
@@ -2322,7 +2309,6 @@ version = "1.17.0"
description = "Python 2 and 3 compatibility utilities"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,>=2.7"
groups = ["main"]
files = [
{file = "six-1.17.0-py2.py3-none-any.whl", hash = "sha256:4721f391ed90541fddacab5acf947aa0d3dc7d27b2e1e8eda2be8970586c3274"},
{file = "six-1.17.0.tar.gz", hash = "sha256:ff70335d468e7eb6ec65b95b99d3a2836546063f63acc5171de367e834932a81"},
|
||||
@@ -2334,7 +2320,6 @@ version = "1.3.1"
|
||||
description = "Sniff out which async library your code is running under"
|
||||
optional = false
|
||||
python-versions = ">=3.7"
|
||||
groups = ["main"]
|
||||
files = [
|
||||
{file = "sniffio-1.3.1-py3-none-any.whl", hash = "sha256:2f6da418d1f1e0fddd844478f41680e794e6051915791a034ff65e5f100525a2"},
|
||||
{file = "sniffio-1.3.1.tar.gz", hash = "sha256:f4324edc670a0f49750a81b895f35c3adb843cca46f0530f79fc1babb23789dc"},
|
||||
@@ -2346,7 +2331,6 @@ version = "0.41.3"
|
||||
description = "The little ASGI library that shines."
|
||||
optional = false
|
||||
python-versions = ">=3.8"
|
||||
groups = ["main"]
|
||||
files = [
|
||||
{file = "starlette-0.41.3-py3-none-any.whl", hash = "sha256:44cedb2b7c77a9de33a8b74b2b90e9f50d11fcf25d8270ea525ad71a25374ff7"},
|
||||
{file = "starlette-0.41.3.tar.gz", hash = "sha256:0e4ab3d16522a255be6b28260b938eae2482f98ce5cc934cb08dce8dc3ba5835"},
|
||||
@@ -2364,7 +2348,6 @@ version = "11.6.0"
|
||||
description = "Python bindings for the Stripe API"
|
||||
optional = false
|
||||
python-versions = ">=3.6"
|
||||
groups = ["main"]
|
||||
files = [
|
||||
{file = "stripe-11.6.0-py2.py3-none-any.whl", hash = "sha256:6e6cf09ebb6d5fc2d708401cb8868fd7bff987a6d09a0433caaa92c62f97dbc5"},
|
||||
{file = "stripe-11.6.0.tar.gz", hash = "sha256:0ced7cce23a6cb1a393c86a1f7f9435c9d83ae7cbd556362868caf62cb44a92c"},
|
||||
@@ -2380,7 +2363,6 @@ version = "1.9.0"
|
||||
description = "Temporal.io Python SDK"
|
||||
optional = false
|
||||
python-versions = "<4.0,>=3.8"
|
||||
groups = ["main"]
|
||||
files = [
|
||||
{file = "temporalio-1.9.0-cp38-abi3-macosx_10_9_x86_64.whl", hash = "sha256:ee941702e8925e2c018b5c2d7b296f811205043654d7f9c4564d7fa6597f1989"},
|
||||
{file = "temporalio-1.9.0-cp38-abi3-macosx_11_0_arm64.whl", hash = "sha256:101040090238d97b61d769e009f732409894d8f26596a3827662f2dde2862097"},
|
||||
@@ -2406,7 +2388,6 @@ version = "0.9.0"
|
||||
description = "tiktoken is a fast BPE tokeniser for use with OpenAI's models"
|
||||
optional = false
|
||||
python-versions = ">=3.9"
|
||||
groups = ["main"]
|
||||
files = [
|
||||
{file = "tiktoken-0.9.0-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:586c16358138b96ea804c034b8acf3f5d3f0258bd2bc3b0227af4af5d622e382"},
|
||||
{file = "tiktoken-0.9.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:d9c59ccc528c6c5dd51820b3474402f69d9a9e1d656226848ad68a8d5b2e5108"},
|
||||
@@ -2454,7 +2435,6 @@ version = "0.21.1"
|
||||
description = ""
|
||||
optional = false
|
||||
python-versions = ">=3.9"
|
||||
groups = ["main"]
|
||||
files = [
|
||||
{file = "tokenizers-0.21.1-cp39-abi3-macosx_10_12_x86_64.whl", hash = "sha256:e78e413e9e668ad790a29456e677d9d3aa50a9ad311a40905d6861ba7692cf41"},
|
||||
{file = "tokenizers-0.21.1-cp39-abi3-macosx_11_0_arm64.whl", hash = "sha256:cd51cd0a91ecc801633829fcd1fda9cf8682ed3477c6243b9a095539de4aecf3"},
|
||||
@@ -2487,8 +2467,6 @@ version = "2.2.1"
|
||||
description = "A lil' TOML parser"
|
||||
optional = false
|
||||
python-versions = ">=3.8"
|
||||
groups = ["dev"]
|
||||
markers = "python_version == \"3.10\""
|
||||
files = [
|
||||
{file = "tomli-2.2.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:678e4fa69e4575eb77d103de3df8a895e1591b48e740211bd1067378c69e8249"},
|
||||
{file = "tomli-2.2.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:023aa114dd824ade0100497eb2318602af309e5a55595f76b626d6d9f3b7b0a6"},
|
||||
@@ -2530,7 +2508,6 @@ version = "4.67.1"
|
||||
description = "Fast, Extensible Progress Meter"
|
||||
optional = false
|
||||
python-versions = ">=3.7"
|
||||
groups = ["main"]
|
||||
files = [
|
||||
{file = "tqdm-4.67.1-py3-none-any.whl", hash = "sha256:26445eca388f82e72884e0d580d5464cd801a3ea01e63e5601bdff9ba6a48de2"},
|
||||
{file = "tqdm-4.67.1.tar.gz", hash = "sha256:f8aef9c52c08c13a65f30ea34f4e5aac3fd1a34959879d7e59e63027286627f2"},
|
||||
@@ -2552,7 +2529,6 @@ version = "5.29.1.20241207"
|
||||
description = "Typing stubs for protobuf"
|
||||
optional = false
|
||||
python-versions = ">=3.8"
|
||||
groups = ["main"]
|
||||
files = [
|
||||
{file = "types_protobuf-5.29.1.20241207-py3-none-any.whl", hash = "sha256:92893c42083e9b718c678badc0af7a9a1307b92afe1599e5cba5f3d35b668b2f"},
|
||||
{file = "types_protobuf-5.29.1.20241207.tar.gz", hash = "sha256:2ebcadb8ab3ef2e3e2f067e0882906d64ba0dc65fc5b0fd7a8b692315b4a0be9"},
|
||||
@@ -2564,12 +2540,10 @@ version = "4.12.2"
|
||||
description = "Backported and Experimental Type Hints for Python 3.8+"
|
||||
optional = false
|
||||
python-versions = ">=3.8"
|
||||
groups = ["main", "dev"]
|
||||
files = [
|
||||
{file = "typing_extensions-4.12.2-py3-none-any.whl", hash = "sha256:04e5ca0351e0f3f85c6853954072df659d0d13fac324d0072316b67d7794700d"},
|
||||
{file = "typing_extensions-4.12.2.tar.gz", hash = "sha256:1a7ead55c7e559dd4dee8856e3a88b41225abfe1ce8df57b7c13915fe121ffb8"},
|
||||
]
|
||||
markers = {dev = "python_version == \"3.10\""}
|
||||
|
||||
[[package]]
|
||||
name = "tzdata"
|
||||
@@ -2577,7 +2551,6 @@ version = "2025.1"
|
||||
description = "Provider of IANA time zone data"
|
||||
optional = false
|
||||
python-versions = ">=2"
|
||||
groups = ["main"]
|
||||
files = [
|
||||
{file = "tzdata-2025.1-py2.py3-none-any.whl", hash = "sha256:7e127113816800496f027041c570f50bcd464a020098a3b6b199517772303639"},
|
||||
{file = "tzdata-2025.1.tar.gz", hash = "sha256:24894909e88cdb28bd1636c6887801df64cb485bd593f2fd83ef29075a81d694"},
|
||||
@@ -2589,14 +2562,13 @@ version = "2.3.0"
|
||||
description = "HTTP library with thread-safe connection pooling, file post, and more."
|
||||
optional = false
|
||||
python-versions = ">=3.9"
|
||||
groups = ["main"]
|
||||
files = [
|
||||
{file = "urllib3-2.3.0-py3-none-any.whl", hash = "sha256:1cee9ad369867bfdbbb48b7dd50374c0967a0bb7710050facf0dd6911440e3df"},
|
||||
{file = "urllib3-2.3.0.tar.gz", hash = "sha256:f8c5449b3cf0861679ce7e0503c7b44b5ec981bec0d1d3795a07f1ba96f0204d"},
|
||||
]
|
||||
|
||||
[package.extras]
|
||||
brotli = ["brotli (>=1.0.9) ; platform_python_implementation == \"CPython\"", "brotlicffi (>=0.8.0) ; platform_python_implementation != \"CPython\""]
|
||||
brotli = ["brotli (>=1.0.9)", "brotlicffi (>=0.8.0)"]
|
||||
h2 = ["h2 (>=4,<5)"]
|
||||
socks = ["pysocks (>=1.5.6,!=1.5.7,<2.0)"]
|
||||
zstd = ["zstandard (>=0.18.0)"]
|
||||
@@ -2607,7 +2579,6 @@ version = "0.34.0"
|
||||
description = "The lightning-fast ASGI server."
|
||||
optional = false
|
||||
python-versions = ">=3.9"
|
||||
groups = ["main"]
|
||||
files = [
|
||||
{file = "uvicorn-0.34.0-py3-none-any.whl", hash = "sha256:023dc038422502fa28a09c7a30bf2b6991512da7dcdb8fd35fe57cfc154126f4"},
|
||||
{file = "uvicorn-0.34.0.tar.gz", hash = "sha256:404051050cd7e905de2c9a7e61790943440b3416f49cb409f965d9dcd0fa73e9"},
|
||||
@@ -2619,7 +2590,7 @@ h11 = ">=0.8"
|
||||
typing-extensions = {version = ">=4.0", markers = "python_version < \"3.11\""}
|
||||
|
||||
[package.extras]
|
||||
standard = ["colorama (>=0.4) ; sys_platform == \"win32\"", "httptools (>=0.6.3)", "python-dotenv (>=0.13)", "pyyaml (>=5.1)", "uvloop (>=0.14.0,!=0.15.0,!=0.15.1) ; sys_platform != \"win32\" and sys_platform != \"cygwin\" and platform_python_implementation != \"PyPy\"", "watchfiles (>=0.13)", "websockets (>=10.4)"]
|
||||
standard = ["colorama (>=0.4)", "httptools (>=0.6.3)", "python-dotenv (>=0.13)", "pyyaml (>=5.1)", "uvloop (>=0.14.0,!=0.15.0,!=0.15.1)", "watchfiles (>=0.13)", "websockets (>=10.4)"]
|
||||
|
||||
[[package]]
|
||||
name = "xyzservices"
|
||||
@@ -2627,7 +2598,6 @@ version = "2025.1.0"
|
||||
description = "Source of XYZ tiles providers"
|
||||
optional = false
|
||||
python-versions = ">=3.8"
|
||||
groups = ["main"]
|
||||
files = [
|
||||
{file = "xyzservices-2025.1.0-py3-none-any.whl", hash = "sha256:fa599956c5ab32dad1689960b3bb08fdcdbe0252cc82d84fc60ae415dc648907"},
|
||||
{file = "xyzservices-2025.1.0.tar.gz", hash = "sha256:5cdbb0907c20be1be066c6e2dc69c645842d1113a4e83e642065604a21f254ba"},
|
||||
@@ -2639,7 +2609,6 @@ version = "1.20.0"
|
||||
description = "Yet another URL library"
|
||||
optional = false
|
||||
python-versions = ">=3.9"
|
||||
groups = ["main"]
|
||||
files = [
|
||||
{file = "yarl-1.20.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:f1f6670b9ae3daedb325fa55fbe31c22c8228f6e0b513772c2e1c623caa6ab22"},
|
||||
{file = "yarl-1.20.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:85a231fa250dfa3308f3c7896cc007a47bc76e9e8e8595c20b7426cac4884c62"},
|
||||
@@ -2758,21 +2727,20 @@ version = "3.21.0"
|
||||
description = "Backport of pathlib-compatible object wrapper for zip files"
|
||||
optional = false
|
||||
python-versions = ">=3.9"
|
||||
groups = ["main"]
|
||||
files = [
|
||||
{file = "zipp-3.21.0-py3-none-any.whl", hash = "sha256:ac1bbe05fd2991f160ebce24ffbac5f6d11d83dc90891255885223d42b3cd931"},
|
||||
{file = "zipp-3.21.0.tar.gz", hash = "sha256:2c9958f6430a2040341a52eb608ed6dd93ef4392e02ffe219417c1b28b5dd1f4"},
|
||||
]
|
||||
|
||||
[package.extras]
|
||||
check = ["pytest-checkdocs (>=2.4)", "pytest-ruff (>=0.2.1) ; sys_platform != \"cygwin\""]
|
||||
check = ["pytest-checkdocs (>=2.4)", "pytest-ruff (>=0.2.1)"]
|
||||
cover = ["pytest-cov"]
|
||||
doc = ["furo", "jaraco.packaging (>=9.3)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-lint"]
|
||||
enabler = ["pytest-enabler (>=2.2)"]
|
||||
test = ["big-O", "importlib-resources ; python_version < \"3.9\"", "jaraco.functools", "jaraco.itertools", "jaraco.test", "more-itertools", "pytest (>=6,!=8.1.*)", "pytest-ignore-flaky"]
|
||||
test = ["big-O", "importlib-resources", "jaraco.functools", "jaraco.itertools", "jaraco.test", "more-itertools", "pytest (>=6,!=8.1.*)", "pytest-ignore-flaky"]
|
||||
type = ["pytest-mypy"]
|
||||
|
||||
[metadata]
|
||||
lock-version = "2.1"
|
||||
lock-version = "2.0"
|
||||
python-versions = ">=3.10,<4.0"
|
||||
content-hash = "ae5534663e9fa1ab21fb50bd6a7007aa201a22da0c3b729289f8a931434c14bf"
|
||||
content-hash = "b391df89fabb111e4dd5d65a52a9db3a0bf9d95d5473e77cd0946beb940cf26f"
|
||||
|
||||
@@ -1,6 +1,7 @@
from models.tool_definitions import AgentGoal
from typing import Optional
import json
from typing import Optional

from models.tool_definitions import AgentGoal

MULTI_GOAL_MODE: bool = None


@@ -46,6 +46,7 @@ pytest = ">=8.2"
pytest-asyncio = "^0.26.0"
black = "^23.7"
isort = "^5.12"
mypy = "^1.16.0"

[build-system]
requires = ["poetry-core>=1.4.0"]
@@ -57,4 +58,15 @@ log_cli = true
log_cli_level = "INFO"
log_cli_format = "%(asctime)s [%(levelname)8s] %(message)s (%(filename)s:%(lineno)s)"
asyncio_default_fixture_loop_scope = "function"
norecursedirs = ["vibe"]
norecursedirs = ["vibe"]

[tool.mypy]
python_version = "3.10"
ignore_missing_imports = true
check_untyped_defs = true
namespace_packages = true
explicit_package_bases = true
ignore_errors = true

[tool.isort]
profile = "black"

@@ -1,12 +1,12 @@
import asyncio


from shared.config import get_temporal_client
from workflows.agent_goal_workflow import AgentGoalWorkflow


async def main():
    # Create client connected to server at the given address
    client = await Client.connect("localhost:7233")
    client = await get_temporal_client()

    workflow_id = "agent-workflow"


@@ -1,6 +1,7 @@
from tools.search_flights import search_flights
import json

from tools.search_flights import search_flights

# Example usage
if __name__ == "__main__":
    search_args = {"city": "Sydney", "month": "July"}

@@ -1,6 +1,7 @@
from tools.search_flights import search_flights
import json

from tools.search_flights import search_flights

if __name__ == "__main__":
    # Suppose user typed "new" for New York, "lon" for London
    flights = search_flights("London", "JFK", "2025-01-15", "2025-01-23")

@@ -1,12 +1,10 @@
import asyncio

import concurrent.futures

from temporalio.worker import Worker

from activities.tool_activities import dynamic_tool_activity

from shared.config import get_temporal_client, TEMPORAL_LEGACY_TASK_QUEUE
from shared.config import TEMPORAL_LEGACY_TASK_QUEUE, get_temporal_client


async def main():
@@ -24,7 +22,9 @@ async def main():
        activity_executor=activity_executor,
    )

    print(f"Starting legacy worker, connecting to task queue: {TEMPORAL_LEGACY_TASK_QUEUE}")
    print(
        f"Starting legacy worker, connecting to task queue: {TEMPORAL_LEGACY_TASK_QUEUE}"
    )
    await worker.run()



@@ -1,16 +1,15 @@
import asyncio
import concurrent.futures
import os
from dotenv import load_dotenv
import logging
import os

from dotenv import load_dotenv
from temporalio.worker import Worker

from activities.tool_activities import ToolActivities, dynamic_tool_activity
from shared.config import TEMPORAL_TASK_QUEUE, get_temporal_client
from workflows.agent_goal_workflow import AgentGoalWorkflow

from shared.config import get_temporal_client, TEMPORAL_TASK_QUEUE


async def main():
    # Load environment variables

@@ -5,7 +5,6 @@ from shared.config import get_temporal_client


async def main():

    # Connect to Temporal and signal the workflow
    client = await get_temporal_client()


@@ -1,4 +1,5 @@
import os

from dotenv import load_dotenv
from temporalio.client import Client
from temporalio.service import TLSConfig
@@ -9,13 +10,16 @@ load_dotenv(override=True)
TEMPORAL_ADDRESS = os.getenv("TEMPORAL_ADDRESS", "localhost:7233")
TEMPORAL_NAMESPACE = os.getenv("TEMPORAL_NAMESPACE", "default")
TEMPORAL_TASK_QUEUE = os.getenv("TEMPORAL_TASK_QUEUE", "agent-task-queue")
TEMPORAL_LEGACY_TASK_QUEUE = os.getenv("TEMPORAL_LEGACY_TASK_QUEUE", "agent-task-queue-legacy")
TEMPORAL_LEGACY_TASK_QUEUE = os.getenv(
    "TEMPORAL_LEGACY_TASK_QUEUE", "agent-task-queue-legacy"
)

# Authentication settings
TEMPORAL_TLS_CERT = os.getenv("TEMPORAL_TLS_CERT", "")
TEMPORAL_TLS_KEY = os.getenv("TEMPORAL_TLS_KEY", "")
TEMPORAL_API_KEY = os.getenv("TEMPORAL_API_KEY", "")


async def get_temporal_client() -> Client:
    """
    Creates a Temporal client based on environment configuration.

@@ -63,8 +63,8 @@ async def client(env: WorkflowEnvironment) -> Client:
@pytest.fixture
def sample_agent_goal():
    """Sample agent goal for testing."""
    from models.tool_definitions import AgentGoal, ToolDefinition, ToolArgument

    from models.tool_definitions import AgentGoal, ToolArgument, ToolDefinition

    return AgentGoal(
        id="test_goal",
        category_tag="test",
@@ -77,13 +77,11 @@ def sample_agent_goal():
                description="A test tool for testing purposes",
                arguments=[
                    ToolArgument(
                        name="test_arg",
                        type="string",
                        description="A test argument"
                        name="test_arg", type="string", description="A test argument"
                    )
                ]
                ],
            )
        ]
        ],
    )


@@ -93,7 +91,7 @@ def sample_conversation_history():
    return {
        "messages": [
            {"actor": "user", "response": "Hello, I need help with testing"},
            {"actor": "agent", "response": "I can help you with that"}
            {"actor": "agent", "response": "I can help you with that"},
        ]
    }

@@ -101,16 +99,13 @@
@pytest.fixture
def sample_combined_input(sample_agent_goal):
    """Sample combined input for workflow testing."""
    from models.data_types import CombinedInput, AgentGoalWorkflowParams

    from collections import deque


    from models.data_types import AgentGoalWorkflowParams, CombinedInput

    tool_params = AgentGoalWorkflowParams(
        conversation_summary="Test conversation summary",
        prompt_queue=deque()  # Start with empty queue for most tests
    )

    return CombinedInput(
        agent_goal=sample_agent_goal,
        tool_params=tool_params
        prompt_queue=deque(),  # Start with empty queue for most tests
    )

    return CombinedInput(agent_goal=sample_agent_goal, tool_params=tool_params)

@@ -1,40 +1,35 @@
import uuid
from unittest.mock import patch, MagicMock
import pytest

from temporalio import activity
from temporalio.client import Client
from temporalio.worker import Worker
from temporalio.testing import WorkflowEnvironment

from workflows.agent_goal_workflow import AgentGoalWorkflow
from activities.tool_activities import ToolActivities
from models.data_types import (
    CombinedInput,
    AgentGoalWorkflowParams,
    ConversationHistory,
    ValidationResult,
    ValidationInput,
    EnvLookupOutput,
    CombinedInput,
    EnvLookupInput,
    ToolPromptInput
    EnvLookupOutput,
    ToolPromptInput,
    ValidationInput,
    ValidationResult,
)
from workflows.agent_goal_workflow import AgentGoalWorkflow


class TestAgentGoalWorkflow:
    """Test cases for AgentGoalWorkflow."""

    async def test_workflow_initialization(self, client: Client, sample_combined_input: CombinedInput):
    async def test_workflow_initialization(
        self, client: Client, sample_combined_input: CombinedInput
    ):
        """Test workflow can be initialized and started."""
        task_queue_name = str(uuid.uuid4())


        # Create mock activity functions with proper signatures
        @activity.defn(name="get_wf_env_vars")
        async def mock_get_wf_env_vars(input: EnvLookupInput) -> EnvLookupOutput:
            return EnvLookupOutput(
                show_confirm=True,
                multi_goal_mode=True
            )

            return EnvLookupOutput(show_confirm=True, multi_goal_mode=True)

        async with Worker(
            client,
            task_queue=task_queue_name,
@@ -48,120 +43,47 @@ class TestAgentGoalWorkflow:
                id=str(uuid.uuid4()),
                task_queue=task_queue_name,
            )


            # Verify workflow is running
            assert handle is not None


            # Query the workflow to check initial state
            conversation_history = await handle.query(AgentGoalWorkflow.get_conversation_history)
            conversation_history = await handle.query(
                AgentGoalWorkflow.get_conversation_history
            )
            assert isinstance(conversation_history, dict)
            assert "messages" in conversation_history


            # Test goal query
            agent_goal = await handle.query(AgentGoalWorkflow.get_agent_goal)
            assert agent_goal == sample_combined_input.agent_goal


            # End the workflow
            await handle.signal(AgentGoalWorkflow.end_chat)
            result = await handle.result()
            assert isinstance(result, str)

    async def test_user_prompt_signal(self, client: Client, sample_combined_input: CombinedInput):
    async def test_user_prompt_signal(
        self, client: Client, sample_combined_input: CombinedInput
    ):
        """Test user_prompt signal handling."""
        task_queue_name = str(uuid.uuid4())

        # Create mock activity functions with proper signatures
        @activity.defn(name="get_wf_env_vars")
        async def mock_get_wf_env_vars(input: EnvLookupInput) -> EnvLookupOutput:
            return EnvLookupOutput(
                show_confirm=True,
                multi_goal_mode=True
            )

        @activity.defn(name="agent_validatePrompt")
        async def mock_agent_validatePrompt(validation_input: ValidationInput) -> ValidationResult:
            return ValidationResult(
                validationResult=True,
                validationFailedReason={}
            )

        @activity.defn(name="agent_toolPlanner")
        async def mock_agent_toolPlanner(input: ToolPromptInput) -> dict:
            return {
                "next": "done",
                "response": "Test response from LLM"
            }

        async with Worker(
            client,
            task_queue=task_queue_name,
            workflows=[AgentGoalWorkflow],
            activities=[
                mock_get_wf_env_vars,
                mock_agent_validatePrompt,
                mock_agent_toolPlanner
            ],
        ):
            handle = await client.start_workflow(
                AgentGoalWorkflow.run,
                sample_combined_input,
                id=str(uuid.uuid4()),
                task_queue=task_queue_name,
            )

            # Send user prompt
            await handle.signal(AgentGoalWorkflow.user_prompt, "Hello, this is a test message")

            # Wait for workflow to complete (it should end due to "done" next step)
            result = await handle.result()
            assert isinstance(result, str)

            # Verify the conversation includes our message
            import json
            try:
                conversation_history = json.loads(result.replace("'", '"'))
            except:
                # Fallback to eval if json fails
                conversation_history = eval(result)
            messages = conversation_history["messages"]

            # Should have our user message and agent response
            user_messages = [msg for msg in messages if msg["actor"] == "user"]
            assert len(user_messages) > 0
            assert any("Hello, this is a test message" in str(msg["response"]) for msg in user_messages)

    async def test_confirm_signal(self, client: Client, sample_combined_input: CombinedInput):
        """Test confirm signal handling for tool execution."""
        task_queue_name = str(uuid.uuid4())

        # Create mock activity functions with proper signatures
        @activity.defn(name="get_wf_env_vars")
        async def mock_get_wf_env_vars(input: EnvLookupInput) -> EnvLookupOutput:
            return EnvLookupOutput(
                show_confirm=True,
                multi_goal_mode=True
            )

            return EnvLookupOutput(show_confirm=True, multi_goal_mode=True)

        @activity.defn(name="agent_validatePrompt")
        async def mock_agent_validatePrompt(validation_input: ValidationInput) -> ValidationResult:
            return ValidationResult(
                validationResult=True,
                validationFailedReason={}
            )

        async def mock_agent_validatePrompt(
            validation_input: ValidationInput,
        ) -> ValidationResult:
            return ValidationResult(validationResult=True, validationFailedReason={})

        @activity.defn(name="agent_toolPlanner")
        async def mock_agent_toolPlanner(input: ToolPromptInput) -> dict:
            return {
                "next": "confirm",
                "tool": "TestTool",
                "args": {"test_arg": "test_value"},
                "response": "Ready to execute tool"
            }

        @activity.defn(name="TestTool")
        async def mock_test_tool(args: dict) -> dict:
            return {"result": "Test tool executed successfully"}

            return {"next": "done", "response": "Test response from LLM"}

        async with Worker(
            client,
            task_queue=task_queue_name,
@@ -170,7 +92,6 @@ class TestAgentGoalWorkflow:
                mock_get_wf_env_vars,
                mock_agent_validatePrompt,
                mock_agent_toolPlanner,
                mock_test_tool
            ],
        ):
            handle = await client.start_workflow(
@@ -179,317 +100,64 @@ class TestAgentGoalWorkflow:
|
||||
id=str(uuid.uuid4()),
|
||||
task_queue=task_queue_name,
|
||||
)
|
||||
|
||||
# Send user prompt that will require confirmation
|
||||
await handle.signal(AgentGoalWorkflow.user_prompt, "Execute the test tool")
|
||||
|
||||
# Query to check tool data is set
|
||||
import asyncio
|
||||
await asyncio.sleep(0.1) # Give workflow time to process
|
||||
|
||||
tool_data = await handle.query(AgentGoalWorkflow.get_latest_tool_data)
|
||||
if tool_data:
|
||||
assert tool_data.get("tool") == "TestTool"
|
||||
assert tool_data.get("next") == "confirm"
|
||||
|
||||
# Send confirmation and end chat
|
||||
await handle.signal(AgentGoalWorkflow.confirm)
|
||||
await handle.signal(AgentGoalWorkflow.end_chat)
|
||||
|
||||
|
||||
# Send user prompt
|
||||
await handle.signal(
|
||||
AgentGoalWorkflow.user_prompt, "Hello, this is a test message"
|
||||
)
|
||||
|
||||
# Wait for workflow to complete (it should end due to "done" next step)
|
||||
result = await handle.result()
|
||||
assert isinstance(result, str)
|
||||
|
||||
async def test_validation_failure(self, client: Client, sample_combined_input: CombinedInput):
|
||||
"""Test workflow handles validation failures correctly."""
|
||||
task_queue_name = str(uuid.uuid4())
|
||||
|
||||
# Create mock activity functions with proper signatures
|
||||
@activity.defn(name="get_wf_env_vars")
|
||||
async def mock_get_wf_env_vars(input: EnvLookupInput) -> EnvLookupOutput:
|
||||
return EnvLookupOutput(
|
||||
show_confirm=True,
|
||||
multi_goal_mode=True
|
||||
)
|
||||
|
||||
@activity.defn(name="agent_validatePrompt")
|
||||
async def mock_agent_validatePrompt(validation_input: ValidationInput) -> ValidationResult:
|
||||
return ValidationResult(
|
||||
validationResult=False,
|
||||
validationFailedReason={
|
||||
"next": "question",
|
||||
"response": "Your request doesn't make sense in this context"
|
||||
}
|
||||
)
|
||||
|
||||
async with Worker(
|
||||
client,
|
||||
task_queue=task_queue_name,
|
||||
workflows=[AgentGoalWorkflow],
|
||||
activities=[
|
||||
mock_get_wf_env_vars,
|
||||
mock_agent_validatePrompt
|
||||
],
|
||||
):
|
||||
handle = await client.start_workflow(
|
||||
AgentGoalWorkflow.run,
|
||||
sample_combined_input,
|
||||
id=str(uuid.uuid4()),
|
||||
task_queue=task_queue_name,
|
||||
)
|
||||
|
||||
# Send invalid prompt
|
||||
await handle.signal(AgentGoalWorkflow.user_prompt, "Invalid nonsensical prompt")
|
||||
|
||||
# Give workflow time to process the prompt
|
||||
import asyncio
|
||||
await asyncio.sleep(0.2)
|
||||
|
||||
# End workflow to check conversation
|
||||
await handle.signal(AgentGoalWorkflow.end_chat)
|
||||
result = await handle.result()
|
||||
|
||||
# Verify validation failure message was added
|
||||
# Verify the conversation includes our message
|
||||
import json
|
||||
|
||||
try:
|
||||
conversation_history = json.loads(result.replace("'", '"'))
|
||||
except:
|
||||
except Exception:
|
||||
# Fallback to eval if json fails
|
||||
conversation_history = eval(result)
|
||||
messages = conversation_history["messages"]
|
||||
|
||||
# Should have validation failure response
|
||||
agent_messages = [msg for msg in messages if msg["actor"] == "agent"]
|
||||
assert len(agent_messages) > 0
|
||||
assert any("doesn't make sense" in str(msg["response"]) for msg in agent_messages)
|
||||
|
||||
async def test_conversation_summary_initialization(self, client: Client, sample_agent_goal):
|
||||
"""Test workflow initializes with conversation summary."""
|
||||
# Should have our user message and agent response
|
||||
user_messages = [msg for msg in messages if msg["actor"] == "user"]
|
||||
assert len(user_messages) > 0
|
||||
assert any(
|
||||
"Hello, this is a test message" in str(msg["response"])
|
||||
for msg in user_messages
|
||||
)
|
||||
|
||||
async def test_confirm_signal(
|
||||
self, client: Client, sample_combined_input: CombinedInput
|
||||
):
|
||||
"""Test confirm signal handling for tool execution."""
|
-        task_queue_name = str(uuid.uuid4())
-
-        # Create input with conversation summary
-        from collections import deque
-        tool_params = AgentGoalWorkflowParams(
-            conversation_summary="Previous conversation summary",
-            prompt_queue=deque()
-        )
-        combined_input = CombinedInput(
-            agent_goal=sample_agent_goal,
-            tool_params=tool_params
-        )
-
-        # Create mock activity functions with proper signatures
-        @activity.defn(name="get_wf_env_vars")
-        async def mock_get_wf_env_vars(input: EnvLookupInput) -> EnvLookupOutput:
-            return EnvLookupOutput(
-                show_confirm=True,
-                multi_goal_mode=True
-            )
-
-        async with Worker(
-            client,
-            task_queue=task_queue_name,
-            workflows=[AgentGoalWorkflow],
-            activities=[mock_get_wf_env_vars],
-        ):
-            handle = await client.start_workflow(
-                AgentGoalWorkflow.run,
-                combined_input,
-                id=str(uuid.uuid4()),
-                task_queue=task_queue_name,
-            )
-
-            # Give workflow time to initialize
-            import asyncio
-            await asyncio.sleep(0.1)
-
-            # Query conversation summary
-            summary = await handle.query(AgentGoalWorkflow.get_summary_from_history)
-            assert summary == "Previous conversation summary"
-
-            # Query conversation history - should include summary message
-            conversation_history = await handle.query(AgentGoalWorkflow.get_conversation_history)
-            messages = conversation_history["messages"]
-
-            # Should have conversation_summary message
-            summary_messages = [msg for msg in messages if msg["actor"] == "conversation_summary"]
-            assert len(summary_messages) == 1
-            assert summary_messages[0]["response"] == "Previous conversation summary"
-
-            # End workflow
-            await handle.signal(AgentGoalWorkflow.end_chat)
-            await handle.result()
+            return EnvLookupOutput(show_confirm=True, multi_goal_mode=True)
-    async def test_workflow_queries(self, client: Client, sample_combined_input: CombinedInput):
-        """Test all workflow query methods."""
-        task_queue_name = str(uuid.uuid4())
-
-        # Create mock activity functions with proper signatures
-        @activity.defn(name="get_wf_env_vars")
-        async def mock_get_wf_env_vars(input: EnvLookupInput) -> EnvLookupOutput:
-            return EnvLookupOutput(
-                show_confirm=True,
-                multi_goal_mode=True
-            )
-
-        async with Worker(
-            client,
-            task_queue=task_queue_name,
-            workflows=[AgentGoalWorkflow],
-            activities=[mock_get_wf_env_vars],
-        ):
-            handle = await client.start_workflow(
-                AgentGoalWorkflow.run,
-                sample_combined_input,
-                id=str(uuid.uuid4()),
-                task_queue=task_queue_name,
-            )
-
-            # Give workflow time to initialize
-            import asyncio
-            await asyncio.sleep(0.1)
-
-            # Test get_conversation_history query
-            conversation_history = await handle.query(AgentGoalWorkflow.get_conversation_history)
-            assert isinstance(conversation_history, dict)
-            assert "messages" in conversation_history
-
-            # Test get_agent_goal query
-            agent_goal = await handle.query(AgentGoalWorkflow.get_agent_goal)
-            assert agent_goal.id == sample_combined_input.agent_goal.id
-
-            # Test get_summary_from_history query
-            summary = await handle.query(AgentGoalWorkflow.get_summary_from_history)
-            # Summary might be None if not set, so check for that
-            if sample_combined_input.tool_params.conversation_summary:
-                assert summary == sample_combined_input.tool_params.conversation_summary
-            else:
-                assert summary is None
-
-            # Test get_latest_tool_data query (should be None initially)
-            tool_data = await handle.query(AgentGoalWorkflow.get_latest_tool_data)
-            assert tool_data is None
-
-            # End workflow
-            await handle.signal(AgentGoalWorkflow.end_chat)
-            await handle.result()
-    async def test_enable_disable_debugging_confirm_signals(self, client: Client, sample_combined_input: CombinedInput):
-        """Test debugging confirm enable/disable signals."""
-        task_queue_name = str(uuid.uuid4())
-
-        # Create mock activity functions with proper signatures
-        @activity.defn(name="get_wf_env_vars")
-        async def mock_get_wf_env_vars(input: EnvLookupInput) -> EnvLookupOutput:
-            return EnvLookupOutput(
-                show_confirm=True,
-                multi_goal_mode=True
-            )
-
-        async with Worker(
-            client,
-            task_queue=task_queue_name,
-            workflows=[AgentGoalWorkflow],
-            activities=[mock_get_wf_env_vars],
-        ):
-            handle = await client.start_workflow(
-                AgentGoalWorkflow.run,
-                sample_combined_input,
-                id=str(uuid.uuid4()),
-                task_queue=task_queue_name,
-            )
-
-            # Test enable debugging confirm signal
-            await handle.signal(AgentGoalWorkflow.enable_debugging_confirm)
-
-            # Test disable debugging confirm signal
-            await handle.signal(AgentGoalWorkflow.disable_debugging_confirm)
-
-            # End workflow
-            await handle.signal(AgentGoalWorkflow.end_chat)
-            result = await handle.result()
-            assert isinstance(result, str)
-    async def test_workflow_with_empty_prompt_queue(self, client: Client, sample_agent_goal):
-        """Test workflow behavior with empty prompt queue."""
-        task_queue_name = str(uuid.uuid4())
-
-        # Create input with empty prompt queue
-        from collections import deque
-        tool_params = AgentGoalWorkflowParams(
-            conversation_summary=None,
-            prompt_queue=deque()
-        )
-        combined_input = CombinedInput(
-            agent_goal=sample_agent_goal,
-            tool_params=tool_params
-        )
-
-        # Create mock activity functions with proper signatures
-        @activity.defn(name="get_wf_env_vars")
-        async def mock_get_wf_env_vars(input: EnvLookupInput) -> EnvLookupOutput:
-            return EnvLookupOutput(
-                show_confirm=True,
-                multi_goal_mode=True
-            )
-
-        async with Worker(
-            client,
-            task_queue=task_queue_name,
-            workflows=[AgentGoalWorkflow],
-            activities=[mock_get_wf_env_vars],
-        ):
-            handle = await client.start_workflow(
-                AgentGoalWorkflow.run,
-                combined_input,
-                id=str(uuid.uuid4()),
-                task_queue=task_queue_name,
-            )
-
-            # Give workflow time to initialize
-            import asyncio
-            await asyncio.sleep(0.1)
-
-            # Query initial state
-            conversation_history = await handle.query(AgentGoalWorkflow.get_conversation_history)
-            assert isinstance(conversation_history, dict)
-            assert "messages" in conversation_history
-
-            # Should have no messages initially (empty prompt queue, no summary)
-            messages = conversation_history["messages"]
-            assert len(messages) == 0
-
-            # End workflow
-            await handle.signal(AgentGoalWorkflow.end_chat)
-            result = await handle.result()
-            assert isinstance(result, str)
-    async def test_multiple_user_prompts(self, client: Client, sample_combined_input: CombinedInput):
-        """Test workflow handling multiple user prompts in sequence."""
-        task_queue_name = str(uuid.uuid4())
-
-        # Create mock activity functions with proper signatures
-        @activity.defn(name="get_wf_env_vars")
-        async def mock_get_wf_env_vars(input: EnvLookupInput) -> EnvLookupOutput:
-            return EnvLookupOutput(
-                show_confirm=True,
-                multi_goal_mode=True
-            )
-
-        @activity.defn(name="agent_validatePrompt")
-        async def mock_agent_validatePrompt(validation_input: ValidationInput) -> ValidationResult:
-            return ValidationResult(
-                validationResult=True,
-                validationFailedReason={}
-            )
+        async def mock_agent_validatePrompt(
+            validation_input: ValidationInput,
+        ) -> ValidationResult:
+            return ValidationResult(validationResult=True, validationFailedReason={})

         @activity.defn(name="agent_toolPlanner")
         async def mock_agent_toolPlanner(input: ToolPromptInput) -> dict:
-            # Keep workflow running for multiple prompts
             return {
-                "next": "question",
-                "response": f"Processed: {input.prompt}"
+                "next": "confirm",
+                "tool": "TestTool",
+                "args": {"test_arg": "test_value"},
+                "response": "Ready to execute tool",
             }

+        @activity.defn(name="TestTool")
+        async def mock_test_tool(args: dict) -> dict:
+            return {"result": "Test tool executed successfully"}

         async with Worker(
             client,
             task_queue=task_queue_name,
@@ -497,7 +165,8 @@ class TestAgentGoalWorkflow:
             activities=[
                 mock_get_wf_env_vars,
                 mock_agent_validatePrompt,
-                mock_agent_toolPlanner
+                mock_agent_toolPlanner,
+                mock_test_tool,
             ],
         ):
             handle = await client.start_workflow(
@@ -506,35 +175,369 @@ class TestAgentGoalWorkflow:
                 id=str(uuid.uuid4()),
                 task_queue=task_queue_name,
             )
-            # Send multiple prompts
-            await handle.signal(AgentGoalWorkflow.user_prompt, "First message")
+            # Send user prompt that will require confirmation
+            await handle.signal(AgentGoalWorkflow.user_prompt, "Execute the test tool")
+
+            # Query to check tool data is set
+            import asyncio
+
+            await asyncio.sleep(0.1)  # Give workflow time to process
+
+            tool_data = await handle.query(AgentGoalWorkflow.get_latest_tool_data)
+            if tool_data:
+                assert tool_data.get("tool") == "TestTool"
+                assert tool_data.get("next") == "confirm"
+
+            # Send confirmation and end chat
+            await handle.signal(AgentGoalWorkflow.confirm)
+            await handle.signal(AgentGoalWorkflow.end_chat)
+
+            result = await handle.result()
+            assert isinstance(result, str)
+    async def test_validation_failure(
+        self, client: Client, sample_combined_input: CombinedInput
+    ):
+        """Test workflow handles validation failures correctly."""
+        task_queue_name = str(uuid.uuid4())
+
+        # Create mock activity functions with proper signatures
+        @activity.defn(name="get_wf_env_vars")
+        async def mock_get_wf_env_vars(input: EnvLookupInput) -> EnvLookupOutput:
+            return EnvLookupOutput(show_confirm=True, multi_goal_mode=True)
+
+        @activity.defn(name="agent_validatePrompt")
+        async def mock_agent_validatePrompt(
+            validation_input: ValidationInput,
+        ) -> ValidationResult:
+            return ValidationResult(
+                validationResult=False,
+                validationFailedReason={
+                    "next": "question",
+                    "response": "Your request doesn't make sense in this context",
+                },
+            )
+
+        async with Worker(
+            client,
+            task_queue=task_queue_name,
+            workflows=[AgentGoalWorkflow],
+            activities=[mock_get_wf_env_vars, mock_agent_validatePrompt],
+        ):
+            handle = await client.start_workflow(
+                AgentGoalWorkflow.run,
+                sample_combined_input,
+                id=str(uuid.uuid4()),
+                task_queue=task_queue_name,
+            )
+
+            # Send invalid prompt
+            await handle.signal(
+                AgentGoalWorkflow.user_prompt, "Invalid nonsensical prompt"
+            )
+
+            # Give workflow time to process the prompt
+            import asyncio
+
+            await asyncio.sleep(0.2)
+
+            # End workflow to check conversation
+            await handle.signal(AgentGoalWorkflow.end_chat)
+            result = await handle.result()
+
+            # Verify validation failure message was added
+            import json
+
+            try:
+                conversation_history = json.loads(result.replace("'", '"'))
+            except Exception:
+                # Fallback to eval if json fails
+                conversation_history = eval(result)
+            messages = conversation_history["messages"]
+
+            # Should have validation failure response
+            agent_messages = [msg for msg in messages if msg["actor"] == "agent"]
+            assert len(agent_messages) > 0
+            assert any(
+                "doesn't make sense" in str(msg["response"]) for msg in agent_messages
+            )
+
+    async def test_conversation_summary_initialization(
+        self, client: Client, sample_agent_goal
+    ):
+        """Test workflow initializes with conversation summary."""
+        task_queue_name = str(uuid.uuid4())
+
+        # Create input with conversation summary
+        from collections import deque
+
+        tool_params = AgentGoalWorkflowParams(
+            conversation_summary="Previous conversation summary", prompt_queue=deque()
+        )
+        combined_input = CombinedInput(
+            agent_goal=sample_agent_goal, tool_params=tool_params
+        )
+
+        # Create mock activity functions with proper signatures
+        @activity.defn(name="get_wf_env_vars")
+        async def mock_get_wf_env_vars(input: EnvLookupInput) -> EnvLookupOutput:
+            return EnvLookupOutput(show_confirm=True, multi_goal_mode=True)
+
+        async with Worker(
+            client,
+            task_queue=task_queue_name,
+            workflows=[AgentGoalWorkflow],
+            activities=[mock_get_wf_env_vars],
+        ):
+            handle = await client.start_workflow(
+                AgentGoalWorkflow.run,
+                combined_input,
+                id=str(uuid.uuid4()),
+                task_queue=task_queue_name,
+            )
+
+            # Give workflow time to initialize
+            import asyncio
+
+            await asyncio.sleep(0.1)
+
-            await handle.signal(AgentGoalWorkflow.user_prompt, "Second message")
+
+            # Query conversation summary
+            summary = await handle.query(AgentGoalWorkflow.get_summary_from_history)
+            assert summary == "Previous conversation summary"
+
+            # Query conversation history - should include summary message
+            conversation_history = await handle.query(
+                AgentGoalWorkflow.get_conversation_history
+            )
+            messages = conversation_history["messages"]
+
+            # Should have conversation_summary message
+            summary_messages = [
+                msg for msg in messages if msg["actor"] == "conversation_summary"
+            ]
+            assert len(summary_messages) == 1
+            assert summary_messages[0]["response"] == "Previous conversation summary"
+
+            # End workflow
+            await handle.signal(AgentGoalWorkflow.end_chat)
+            await handle.result()
+    async def test_workflow_queries(
+        self, client: Client, sample_combined_input: CombinedInput
+    ):
+        """Test all workflow query methods."""
+        task_queue_name = str(uuid.uuid4())
+
+        # Create mock activity functions with proper signatures
+        @activity.defn(name="get_wf_env_vars")
+        async def mock_get_wf_env_vars(input: EnvLookupInput) -> EnvLookupOutput:
+            return EnvLookupOutput(show_confirm=True, multi_goal_mode=True)
+
+        async with Worker(
+            client,
+            task_queue=task_queue_name,
+            workflows=[AgentGoalWorkflow],
+            activities=[mock_get_wf_env_vars],
+        ):
+            handle = await client.start_workflow(
+                AgentGoalWorkflow.run,
+                sample_combined_input,
+                id=str(uuid.uuid4()),
+                task_queue=task_queue_name,
+            )
+
+            # Give workflow time to initialize
+            import asyncio
+
+            await asyncio.sleep(0.1)
+
-            await handle.signal(AgentGoalWorkflow.user_prompt, "Third message")
-            await asyncio.sleep(0.1)
+
+            # Test get_conversation_history query
+            conversation_history = await handle.query(
+                AgentGoalWorkflow.get_conversation_history
+            )
+            assert isinstance(conversation_history, dict)
+            assert "messages" in conversation_history
+
+            # Test get_agent_goal query
+            agent_goal = await handle.query(AgentGoalWorkflow.get_agent_goal)
+            assert agent_goal.id == sample_combined_input.agent_goal.id
+
+            # Test get_summary_from_history query
+            summary = await handle.query(AgentGoalWorkflow.get_summary_from_history)
+            # Summary might be None if not set, so check for that
+            if sample_combined_input.tool_params.conversation_summary:
+                assert summary == sample_combined_input.tool_params.conversation_summary
+            else:
+                assert summary is None
+
+            # Test get_latest_tool_data query (should be None initially)
+            tool_data = await handle.query(AgentGoalWorkflow.get_latest_tool_data)
+            assert tool_data is None
+
+            # End workflow
+            await handle.signal(AgentGoalWorkflow.end_chat)
+            await handle.result()
+
+    async def test_enable_disable_debugging_confirm_signals(
+        self, client: Client, sample_combined_input: CombinedInput
+    ):
+        """Test debugging confirm enable/disable signals."""
+        task_queue_name = str(uuid.uuid4())
+
+        # Create mock activity functions with proper signatures
+        @activity.defn(name="get_wf_env_vars")
+        async def mock_get_wf_env_vars(input: EnvLookupInput) -> EnvLookupOutput:
+            return EnvLookupOutput(show_confirm=True, multi_goal_mode=True)
+
+        async with Worker(
+            client,
+            task_queue=task_queue_name,
+            workflows=[AgentGoalWorkflow],
+            activities=[mock_get_wf_env_vars],
+        ):
+            handle = await client.start_workflow(
+                AgentGoalWorkflow.run,
+                sample_combined_input,
+                id=str(uuid.uuid4()),
+                task_queue=task_queue_name,
+            )
+
+            # Test enable debugging confirm signal
+            await handle.signal(AgentGoalWorkflow.enable_debugging_confirm)
+
+            # Test disable debugging confirm signal
+            await handle.signal(AgentGoalWorkflow.disable_debugging_confirm)
+
+            # End workflow
+            await handle.signal(AgentGoalWorkflow.end_chat)
+            result = await handle.result()
+            assert isinstance(result, str)
+
+    async def test_workflow_with_empty_prompt_queue(
+        self, client: Client, sample_agent_goal
+    ):
+        """Test workflow behavior with empty prompt queue."""
+        task_queue_name = str(uuid.uuid4())
+
+        # Create input with empty prompt queue
+        from collections import deque
+
+        tool_params = AgentGoalWorkflowParams(
+            conversation_summary=None, prompt_queue=deque()
+        )
+        combined_input = CombinedInput(
+            agent_goal=sample_agent_goal, tool_params=tool_params
+        )
+
+        # Create mock activity functions with proper signatures
+        @activity.defn(name="get_wf_env_vars")
+        async def mock_get_wf_env_vars(input: EnvLookupInput) -> EnvLookupOutput:
+            return EnvLookupOutput(show_confirm=True, multi_goal_mode=True)
+
+        async with Worker(
+            client,
+            task_queue=task_queue_name,
+            workflows=[AgentGoalWorkflow],
+            activities=[mock_get_wf_env_vars],
+        ):
+            handle = await client.start_workflow(
+                AgentGoalWorkflow.run,
+                combined_input,
+                id=str(uuid.uuid4()),
+                task_queue=task_queue_name,
+            )
+
+            # Give workflow time to initialize
+            import asyncio
+
+            await asyncio.sleep(0.1)
+
+            # Query initial state
+            conversation_history = await handle.query(
+                AgentGoalWorkflow.get_conversation_history
+            )
+            assert isinstance(conversation_history, dict)
+            assert "messages" in conversation_history
+
+            # Should have no messages initially (empty prompt queue, no summary)
+            messages = conversation_history["messages"]
+            assert len(messages) == 0
+
+            # End workflow
+            await handle.signal(AgentGoalWorkflow.end_chat)
+            result = await handle.result()
+            assert isinstance(result, str)
+    async def test_multiple_user_prompts(
+        self, client: Client, sample_combined_input: CombinedInput
+    ):
+        """Test workflow handling multiple user prompts in sequence."""
+        task_queue_name = str(uuid.uuid4())
+
+        # Create mock activity functions with proper signatures
+        @activity.defn(name="get_wf_env_vars")
+        async def mock_get_wf_env_vars(input: EnvLookupInput) -> EnvLookupOutput:
+            return EnvLookupOutput(show_confirm=True, multi_goal_mode=True)
+
+        @activity.defn(name="agent_validatePrompt")
+        async def mock_agent_validatePrompt(
+            validation_input: ValidationInput,
+        ) -> ValidationResult:
+            return ValidationResult(validationResult=True, validationFailedReason={})
+
+        @activity.defn(name="agent_toolPlanner")
+        async def mock_agent_toolPlanner(input: ToolPromptInput) -> dict:
+            # Keep workflow running for multiple prompts
+            return {"next": "question", "response": f"Processed: {input.prompt}"}
+
+        async with Worker(
+            client,
+            task_queue=task_queue_name,
+            workflows=[AgentGoalWorkflow],
+            activities=[
+                mock_get_wf_env_vars,
+                mock_agent_validatePrompt,
+                mock_agent_toolPlanner,
+            ],
+        ):
+            handle = await client.start_workflow(
+                AgentGoalWorkflow.run,
+                sample_combined_input,
+                id=str(uuid.uuid4()),
+                task_queue=task_queue_name,
+            )
+
+            # Send multiple prompts
+            await handle.signal(AgentGoalWorkflow.user_prompt, "First message")
+            import asyncio
+
+            await asyncio.sleep(0.1)
+
+            await handle.signal(AgentGoalWorkflow.user_prompt, "Second message")
+            await asyncio.sleep(0.1)
+
+            await handle.signal(AgentGoalWorkflow.user_prompt, "Third message")
+            await asyncio.sleep(0.1)
+
             # End workflow
             await handle.signal(AgentGoalWorkflow.end_chat)
             result = await handle.result()
             assert isinstance(result, str)

             # Parse result and verify multiple messages
             import json

             try:
                 conversation_history = json.loads(result.replace("'", '"'))
-            except:
+            except Exception:
                 conversation_history = eval(result)
             messages = conversation_history["messages"]

             # Should have at least one user message (timing dependent)
             user_messages = [msg for msg in messages if msg["actor"] == "user"]
             assert len(user_messages) >= 1

             # Verify at least the first message was processed
             message_texts = [str(msg["response"]) for msg in user_messages]
             assert any("First message" in text for text in message_texts)
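The tests above parse the workflow's string result with `json.loads` and fall back to `eval` when the string is a single-quoted Python repr. A safer stdlib alternative is `ast.literal_eval`, which accepts Python-literal reprs without executing arbitrary code. This is a hedged sketch, not code from the repo; the function name `parse_result` is made up for illustration.

```python
import ast
import json


def parse_result(result: str) -> dict:
    """Parse a conversation-history string that may be JSON or a Python repr.

    ast.literal_eval handles single-quoted dicts but, unlike eval,
    cannot run arbitrary expressions embedded in the string.
    """
    try:
        return json.loads(result)
    except ValueError:  # json.JSONDecodeError is a ValueError subclass
        return ast.literal_eval(result)


history = parse_result("{'messages': [{'actor': 'user', 'response': 'First message'}]}")
print(len(history["messages"]))  # -> 1
```

Swapping `eval(result)` for `ast.literal_eval(result)` in the fallback branches above would keep the tests' behavior while removing the code-execution risk.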
@@ -1,19 +1,18 @@
-import os
 import uuid
-import json
-from unittest.mock import patch, MagicMock, AsyncMock
+import os
+from unittest.mock import AsyncMock, MagicMock, patch

 import pytest
-from temporalio.client import Client
-from temporalio.worker import Worker
 from temporalio.testing import ActivityEnvironment

 from activities.tool_activities import ToolActivities, dynamic_tool_activity
 from models.data_types import (
+    EnvLookupInput,
+    EnvLookupOutput,
+    ToolPromptInput,
     ValidationInput,
     ValidationResult,
-    ToolPromptInput,
-    EnvLookupInput,
-    EnvLookupOutput
 )
@@ -25,63 +24,66 @@ class TestToolActivities:
         self.tool_activities = ToolActivities()

     @pytest.mark.asyncio
-    async def test_agent_validatePrompt_valid_prompt(self, sample_agent_goal, sample_conversation_history):
+    async def test_agent_validatePrompt_valid_prompt(
+        self, sample_agent_goal, sample_conversation_history
+    ):
         """Test agent_validatePrompt with a valid prompt."""
         validation_input = ValidationInput(
             prompt="I need help with the test tool",
             conversation_history=sample_conversation_history,
-            agent_goal=sample_agent_goal
+            agent_goal=sample_agent_goal,
         )

         # Mock the agent_toolPlanner to return a valid response
-        mock_response = {
-            "validationResult": True,
-            "validationFailedReason": {}
-        }
-
-        with patch.object(self.tool_activities, 'agent_toolPlanner', new_callable=AsyncMock) as mock_planner:
+        mock_response = {"validationResult": True, "validationFailedReason": {}}
+
+        with patch.object(
+            self.tool_activities, "agent_toolPlanner", new_callable=AsyncMock
+        ) as mock_planner:
             mock_planner.return_value = mock_response

             activity_env = ActivityEnvironment()
             result = await activity_env.run(
-                self.tool_activities.agent_validatePrompt,
-                validation_input
+                self.tool_activities.agent_validatePrompt, validation_input
             )

             assert isinstance(result, ValidationResult)
             assert result.validationResult is True
             assert result.validationFailedReason == {}

             # Verify the mock was called with correct parameters
             mock_planner.assert_called_once()

     @pytest.mark.asyncio
-    async def test_agent_validatePrompt_invalid_prompt(self, sample_agent_goal, sample_conversation_history):
+    async def test_agent_validatePrompt_invalid_prompt(
+        self, sample_agent_goal, sample_conversation_history
+    ):
         """Test agent_validatePrompt with an invalid prompt."""
         validation_input = ValidationInput(
             prompt="asdfghjkl nonsense",
             conversation_history=sample_conversation_history,
-            agent_goal=sample_agent_goal
+            agent_goal=sample_agent_goal,
         )

         # Mock the agent_toolPlanner to return an invalid response
         mock_response = {
             "validationResult": False,
             "validationFailedReason": {
                 "next": "question",
-                "response": "Your request doesn't make sense in this context"
-            }
+                "response": "Your request doesn't make sense in this context",
+            },
         }
-
-        with patch.object(self.tool_activities, 'agent_toolPlanner', new_callable=AsyncMock) as mock_planner:
+
+        with patch.object(
+            self.tool_activities, "agent_toolPlanner", new_callable=AsyncMock
+        ) as mock_planner:
             mock_planner.return_value = mock_response

             activity_env = ActivityEnvironment()
             result = await activity_env.run(
-                self.tool_activities.agent_validatePrompt,
-                validation_input
+                self.tool_activities.agent_validatePrompt, validation_input
             )

             assert isinstance(result, ValidationResult)
             assert result.validationResult is False
             assert "doesn't make sense" in str(result.validationFailedReason)
@@ -90,29 +92,29 @@ class TestToolActivities:
     async def test_agent_toolPlanner_success(self):
         """Test agent_toolPlanner with successful LLM response."""
         prompt_input = ToolPromptInput(
-            prompt="Test prompt",
-            context_instructions="Test context instructions"
+            prompt="Test prompt", context_instructions="Test context instructions"
         )

         # Mock the completion function
         mock_response = MagicMock()
         mock_response.choices = [MagicMock()]
-        mock_response.choices[0].message.content = '{"next": "confirm", "tool": "TestTool", "response": "Test response"}'
-
-        with patch('activities.tool_activities.completion') as mock_completion:
+        mock_response.choices[0].message.content = (
+            '{"next": "confirm", "tool": "TestTool", "response": "Test response"}'
+        )
+
+        with patch("activities.tool_activities.completion") as mock_completion:
             mock_completion.return_value = mock_response

             activity_env = ActivityEnvironment()
             result = await activity_env.run(
-                self.tool_activities.agent_toolPlanner,
-                prompt_input
+                self.tool_activities.agent_toolPlanner, prompt_input
             )

             assert isinstance(result, dict)
             assert result["next"] == "confirm"
             assert result["tool"] == "TestTool"
             assert result["response"] == "Test response"

             # Verify completion was called with correct parameters
             mock_completion.assert_called_once()
             call_args = mock_completion.call_args[1]
@@ -125,27 +127,25 @@ class TestToolActivities:
     async def test_agent_toolPlanner_with_custom_base_url(self):
         """Test agent_toolPlanner with custom base URL configuration."""
         # Set up tool activities with custom base URL
-        with patch.dict(os.environ, {'LLM_BASE_URL': 'https://custom.endpoint.com'}):
+        with patch.dict(os.environ, {"LLM_BASE_URL": "https://custom.endpoint.com"}):
             tool_activities = ToolActivities()

             prompt_input = ToolPromptInput(
-                prompt="Test prompt",
-                context_instructions="Test context instructions"
+                prompt="Test prompt", context_instructions="Test context instructions"
             )

             mock_response = MagicMock()
             mock_response.choices = [MagicMock()]
-            mock_response.choices[0].message.content = '{"next": "done", "response": "Test"}'
-
-            with patch('activities.tool_activities.completion') as mock_completion:
+            mock_response.choices[0].message.content = (
+                '{"next": "done", "response": "Test"}'
+            )
+
+            with patch("activities.tool_activities.completion") as mock_completion:
                 mock_completion.return_value = mock_response

                 activity_env = ActivityEnvironment()
-                await activity_env.run(
-                    tool_activities.agent_toolPlanner,
-                    prompt_input
-                )
-
+                await activity_env.run(tool_activities.agent_toolPlanner, prompt_input)

                 # Verify base_url was included in the call
                 call_args = mock_completion.call_args[1]
                 assert "base_url" in call_args
|
||||
async def test_agent_toolPlanner_json_parsing_error(self):
|
||||
"""Test agent_toolPlanner handles JSON parsing errors."""
|
||||
prompt_input = ToolPromptInput(
|
||||
prompt="Test prompt",
|
||||
context_instructions="Test context instructions"
|
||||
prompt="Test prompt", context_instructions="Test context instructions"
|
||||
)
|
||||
|
||||
|
||||
# Mock the completion function to return invalid JSON
|
||||
mock_response = MagicMock()
|
||||
mock_response.choices = [MagicMock()]
|
||||
mock_response.choices[0].message.content = 'Invalid JSON response'
|
||||
|
||||
with patch('activities.tool_activities.completion') as mock_completion:
|
||||
mock_response.choices[0].message.content = "Invalid JSON response"
|
||||
|
||||
with patch("activities.tool_activities.completion") as mock_completion:
|
||||
mock_completion.return_value = mock_response
|
||||
|
||||
|
||||
activity_env = ActivityEnvironment()
|
||||
with pytest.raises(Exception): # Should raise JSON parsing error
|
||||
await activity_env.run(
|
||||
self.tool_activities.agent_toolPlanner,
|
||||
prompt_input
|
||||
self.tool_activities.agent_toolPlanner, prompt_input
|
||||
)
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_get_wf_env_vars_default_values(self):
|
||||
"""Test get_wf_env_vars with default values."""
|
||||
env_input = EnvLookupInput(
|
||||
show_confirm_env_var_name="SHOW_CONFIRM",
|
||||
show_confirm_default=True
|
||||
show_confirm_env_var_name="SHOW_CONFIRM", show_confirm_default=True
|
||||
)
|
||||
|
||||
|
||||
# Clear environment variables
|
||||
with patch.dict(os.environ, {}, clear=True):
|
||||
activity_env = ActivityEnvironment()
|
||||
result = await activity_env.run(
|
||||
self.tool_activities.get_wf_env_vars,
|
||||
env_input
|
||||
self.tool_activities.get_wf_env_vars, env_input
|
||||
)
|
||||
|
||||
|
||||
assert isinstance(result, EnvLookupOutput)
|
||||
assert result.show_confirm is True # default value
|
||||
assert result.multi_goal_mode is True # default value
|
||||
@@ -198,21 +194,18 @@ class TestToolActivities:
|
||||
async def test_get_wf_env_vars_custom_values(self):
|
||||
"""Test get_wf_env_vars with custom environment values."""
|
||||
env_input = EnvLookupInput(
|
||||
show_confirm_env_var_name="SHOW_CONFIRM",
|
||||
show_confirm_default=True
|
||||
show_confirm_env_var_name="SHOW_CONFIRM", show_confirm_default=True
|
||||
)
|
||||
|
||||
|
||||
# Set environment variables
|
||||
with patch.dict(os.environ, {
|
||||
'SHOW_CONFIRM': 'false',
|
||||
'AGENT_GOAL': 'specific_goal'
|
||||
}):
|
||||
with patch.dict(
|
||||
os.environ, {"SHOW_CONFIRM": "false", "AGENT_GOAL": "specific_goal"}
|
||||
):
|
||||
activity_env = ActivityEnvironment()
|
||||
result = await activity_env.run(
|
||||
self.tool_activities.get_wf_env_vars,
|
||||
env_input
|
||||
self.tool_activities.get_wf_env_vars, env_input
|
||||
)
|
||||
|
||||
|
||||
assert isinstance(result, EnvLookupOutput)
|
||||
assert result.show_confirm is False # from env var
|
||||
assert result.multi_goal_mode is False # from env var
|
||||
@@ -220,20 +213,22 @@ class TestToolActivities:
|
||||
def test_sanitize_json_response(self):
|
||||
"""Test JSON response sanitization."""
|
||||
# Test with markdown code blocks
|
||||
response_with_markdown = "```json\n{\"test\": \"value\"}\n```"
|
||||
response_with_markdown = '```json\n{"test": "value"}\n```'
|
||||
sanitized = self.tool_activities.sanitize_json_response(response_with_markdown)
|
||||
assert sanitized == '{"test": "value"}'
|
||||
|
||||
|
||||
# Test with extra whitespace
|
||||
response_with_whitespace = " \n{\"test\": \"value\"} \n"
|
||||
sanitized = self.tool_activities.sanitize_json_response(response_with_whitespace)
|
||||
response_with_whitespace = ' \n{"test": "value"} \n'
|
||||
sanitized = self.tool_activities.sanitize_json_response(
|
||||
response_with_whitespace
|
||||
)
|
||||
assert sanitized == '{"test": "value"}'
|
||||
|
||||
def test_parse_json_response_success(self):
|
||||
"""Test successful JSON parsing."""
|
||||
json_string = '{"next": "confirm", "tool": "TestTool"}'
|
||||
result = self.tool_activities.parse_json_response(json_string)
|
||||
|
||||
|
||||
assert isinstance(result, dict)
|
||||
assert result["next"] == "confirm"
|
||||
assert result["tool"] == "TestTool"
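The fence-stripping behavior these tests pin down can be sketched as a stand-alone helper (a hypothetical re-implementation for illustration, not the project's actual `sanitize_json_response`; the triple-backtick fence is built from a variable so it can be shown inside this page):

```python
FENCE = "`" * 3  # literal ``` without embedding it in this snippet


def sanitize_json_response(response: str) -> str:
    # Strip surrounding whitespace, then any Markdown code fences the
    # LLM may have wrapped around its JSON output.
    text = response.strip()
    if text.startswith(FENCE + "json"):
        text = text[len(FENCE + "json"):]
    elif text.startswith(FENCE):
        text = text[len(FENCE):]
    if text.endswith(FENCE):
        text = text[: -len(FENCE)]
    return text.strip()


print(sanitize_json_response(FENCE + "json\n" + '{"test": "value"}' + "\n" + FENCE))
```

Already-clean JSON and whitespace-padded JSON pass through unchanged, which is exactly what the assertions above expect.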
@@ -241,7 +236,7 @@ class TestToolActivities:
    def test_parse_json_response_failure(self):
        """Test JSON parsing with invalid JSON."""
        invalid_json = "Not valid JSON"

        with pytest.raises(Exception):  # Should raise JSON parsing error
            self.tool_activities.parse_json_response(invalid_json)

@@ -255,26 +250,22 @@ class TestDynamicToolActivity:
        # Mock the activity info and payload converter
        mock_info = MagicMock()
        mock_info.activity_type = "TestTool"

        mock_payload_converter = MagicMock()
        mock_payload = MagicMock()
        mock_payload.payload = b'{"test_arg": "test_value"}'
        mock_payload_converter.from_payload.return_value = {"test_arg": "test_value"}

        # Mock the handler function
        def mock_handler(args):
            return {"result": f"Handled {args['test_arg']}"}

        with patch('temporalio.activity.info', return_value=mock_info), \
             patch('temporalio.activity.payload_converter', return_value=mock_payload_converter), \
             patch('tools.get_handler', return_value=mock_handler):
        with patch("temporalio.activity.info", return_value=mock_info), patch(
            "temporalio.activity.payload_converter", return_value=mock_payload_converter
        ), patch("tools.get_handler", return_value=mock_handler):
            activity_env = ActivityEnvironment()
            result = await activity_env.run(
                dynamic_tool_activity,
                [mock_payload]
            )

            result = await activity_env.run(dynamic_tool_activity, [mock_payload])

            assert isinstance(result, dict)
            assert result["result"] == "Handled test_value"
@@ -284,26 +275,22 @@ class TestDynamicToolActivity:
        # Mock the activity info and payload converter
        mock_info = MagicMock()
        mock_info.activity_type = "AsyncTestTool"

        mock_payload_converter = MagicMock()
        mock_payload = MagicMock()
        mock_payload.payload = b'{"test_arg": "async_test"}'
        mock_payload_converter.from_payload.return_value = {"test_arg": "async_test"}

        # Mock the async handler function
        async def mock_async_handler(args):
            return {"async_result": f"Async handled {args['test_arg']}"}

        with patch('temporalio.activity.info', return_value=mock_info), \
             patch('temporalio.activity.payload_converter', return_value=mock_payload_converter), \
             patch('tools.get_handler', return_value=mock_async_handler):
        with patch("temporalio.activity.info", return_value=mock_info), patch(
            "temporalio.activity.payload_converter", return_value=mock_payload_converter
        ), patch("tools.get_handler", return_value=mock_async_handler):
            activity_env = ActivityEnvironment()
            result = await activity_env.run(
                dynamic_tool_activity,
                [mock_payload]
            )

            result = await activity_env.run(dynamic_tool_activity, [mock_payload])

            assert isinstance(result, dict)
            assert result["async_result"] == "Async handled async_test"
@@ -314,21 +301,17 @@ class TestToolActivitiesIntegration:
    @pytest.mark.asyncio
    async def test_activities_in_worker(self, client: Client):
        """Test activities can be registered and executed in a worker."""
        task_queue_name = str(uuid.uuid4())
        # task_queue_name = str(uuid.uuid4())
        tool_activities = ToolActivities()

        # Test get_wf_env_vars activity using ActivityEnvironment
        env_input = EnvLookupInput(
            show_confirm_env_var_name="TEST_CONFIRM",
            show_confirm_default=False
            show_confirm_env_var_name="TEST_CONFIRM", show_confirm_default=False
        )

        activity_env = ActivityEnvironment()
        result = await activity_env.run(
            tool_activities.get_wf_env_vars,
            env_input
        )

        result = await activity_env.run(tool_activities.get_wf_env_vars, env_input)

        assert isinstance(result, EnvLookupOutput)
        assert isinstance(result.show_confirm, bool)
        assert isinstance(result.multi_goal_mode, bool)
@@ -336,36 +319,36 @@ class TestToolActivitiesIntegration:

class TestEdgeCases:
    """Test edge cases and error handling."""

    def setup_method(self):
        """Set up test environment for each test."""
        self.tool_activities = ToolActivities()

    @pytest.mark.asyncio
    async def test_agent_validatePrompt_with_empty_conversation_history(self, sample_agent_goal):
    async def test_agent_validatePrompt_with_empty_conversation_history(
        self, sample_agent_goal
    ):
        """Test validation with empty conversation history."""
        validation_input = ValidationInput(
            prompt="Test prompt",
            conversation_history={"messages": []},
            agent_goal=sample_agent_goal
            agent_goal=sample_agent_goal,
        )

        mock_response = {
            "validationResult": True,
            "validationFailedReason": {}
        }

        with patch.object(self.tool_activities, 'agent_toolPlanner', new_callable=AsyncMock) as mock_planner:

        mock_response = {"validationResult": True, "validationFailedReason": {}}

        with patch.object(
            self.tool_activities, "agent_toolPlanner", new_callable=AsyncMock
        ) as mock_planner:
            mock_planner.return_value = mock_response

            activity_env = ActivityEnvironment()
            result = await activity_env.run(
                self.tool_activities.agent_validatePrompt,
                validation_input
                self.tool_activities.agent_validatePrompt, validation_input
            )

            assert isinstance(result, ValidationResult)
            assert result.validationResult == True
            assert result.validationResult
            assert result.validationFailedReason == {}
    @pytest.mark.asyncio
@@ -373,22 +356,22 @@ class TestEdgeCases:
        """Test toolPlanner with very long prompt."""
        long_prompt = "This is a very long prompt " * 100
        tool_prompt_input = ToolPromptInput(
            prompt=long_prompt,
            context_instructions="Test context instructions"
            prompt=long_prompt, context_instructions="Test context instructions"
        )

        # Mock the completion response
        mock_response = MagicMock()
        mock_response.choices = [MagicMock()]
        mock_response.choices[0].message.content = '{"next": "done", "response": "Processed long prompt"}'

        with patch('activities.tool_activities.completion', return_value=mock_response):
        mock_response.choices[0].message.content = (
            '{"next": "done", "response": "Processed long prompt"}'
        )

        with patch("activities.tool_activities.completion", return_value=mock_response):
            activity_env = ActivityEnvironment()
            result = await activity_env.run(
                self.tool_activities.agent_toolPlanner,
                tool_prompt_input
                self.tool_activities.agent_toolPlanner, tool_prompt_input
            )

            assert isinstance(result, dict)
            assert result["next"] == "done"
            assert "Processed long prompt" in result["response"]
@@ -397,15 +380,15 @@ class TestEdgeCases:
    async def test_sanitize_json_with_various_formats(self):
        """Test JSON sanitization with various input formats."""
        # Test markdown code blocks
        markdown_json = "```json\n{\"test\": \"value\"}\n```"
        markdown_json = '```json\n{"test": "value"}\n```'
        result = self.tool_activities.sanitize_json_response(markdown_json)
        assert result == '{"test": "value"}'

        # Test with extra whitespace
        whitespace_json = "  \n  {\"test\": \"value\"}  \n  "
        whitespace_json = '  \n  {"test": "value"}  \n  '
        result = self.tool_activities.sanitize_json_response(whitespace_json)
        assert result == '{"test": "value"}'

        # Test already clean JSON
        clean_json = '{"test": "value"}'
        result = self.tool_activities.sanitize_json_response(clean_json)
@@ -423,44 +406,38 @@ class TestEdgeCases:
        # Test with "true" string
        with patch.dict(os.environ, {"TEST_CONFIRM": "true"}):
            env_input = EnvLookupInput(
                show_confirm_env_var_name="TEST_CONFIRM",
                show_confirm_default=False
                show_confirm_env_var_name="TEST_CONFIRM", show_confirm_default=False
            )

            activity_env = ActivityEnvironment()
            result = await activity_env.run(
                self.tool_activities.get_wf_env_vars,
                env_input
                self.tool_activities.get_wf_env_vars, env_input
            )

            assert result.show_confirm == True

            assert result.show_confirm

        # Test with "false" string
        with patch.dict(os.environ, {"TEST_CONFIRM": "false"}):
            env_input = EnvLookupInput(
                show_confirm_env_var_name="TEST_CONFIRM",
                show_confirm_default=True
                show_confirm_env_var_name="TEST_CONFIRM", show_confirm_default=True
            )

            activity_env = ActivityEnvironment()
            result = await activity_env.run(
                self.tool_activities.get_wf_env_vars,
                env_input
                self.tool_activities.get_wf_env_vars, env_input
            )

            assert result.show_confirm == False

            assert not result.show_confirm

        # Test with missing env var (should use default)
        with patch.dict(os.environ, {}, clear=True):
            env_input = EnvLookupInput(
                show_confirm_env_var_name="MISSING_VAR",
                show_confirm_default=True
                show_confirm_env_var_name="MISSING_VAR", show_confirm_default=True
            )

            activity_env = ActivityEnvironment()
            result = await activity_env.run(
                self.tool_activities.get_wf_env_vars,
                env_input
                self.tool_activities.get_wf_env_vars, env_input
            )

            assert result.show_confirm == True

            assert result.show_confirm
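The string-to-boolean coercion these environment-variable tests exercise can be sketched in isolation (a hypothetical helper with illustrative names, not the project's `get_wf_env_vars`):

```python
import os


def env_flag(name: str, default: bool) -> bool:
    # Read a boolean-ish environment variable: "true"/"false" (any case)
    # override the default; a missing variable falls back to the default.
    raw = os.environ.get(name)
    if raw is None:
        return default
    return raw.strip().lower() == "true"


os.environ["TEST_CONFIRM"] = "false"
print(env_flag("TEST_CONFIRM", True))  # env var overrides the default
print(env_flag("MISSING_VAR", True))   # missing var uses the default
```

This mirrors the three cases asserted above: `"true"` yields `True`, `"false"` yields `False`, and an unset variable yields the supplied default.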
@@ -1,25 +1,22 @@
import concurrent.futures
import uuid
from contextlib import contextmanager

from temporalio import activity
from temporalio.client import Client, WorkflowExecutionStatus
from temporalio.worker import Worker
from temporalio import activity
import concurrent.futures
from temporalio.testing import WorkflowEnvironment

from api.main import get_initial_agent_goal
from models.data_types import (
    AgentGoalWorkflowParams,
    AgentGoalWorkflowParams,
    CombinedInput,
    ValidationResult,
    ValidationInput,
    EnvLookupOutput,
    EnvLookupInput,
    ToolPromptInput
    EnvLookupOutput,
    ToolPromptInput,
    ValidationInput,
    ValidationResult,
)
from workflows.agent_goal_workflow import AgentGoalWorkflow
from activities.tool_activities import ToolActivities, dynamic_tool_activity
from unittest.mock import patch
from dotenv import load_dotenv
import os
from contextlib import contextmanager


@contextmanager
@@ -29,57 +26,49 @@ def my_context():
    print("Cleanup")

async def test_flight_booking(client: Client):
    # load_dotenv("test_flights_single.env")

    #load_dotenv("test_flights_single.env")

    with my_context() as value:
        print(f"Working with {value}")

    # Create the test environment
    #env = await WorkflowEnvironment.start_local()
    #client = env.client
    # env = await WorkflowEnvironment.start_local()
    # client = env.client
    task_queue_name = str(uuid.uuid4())
    workflow_id = str(uuid.uuid4())

    # Create mock activity functions with proper signatures
    @activity.defn(name="get_wf_env_vars")
    async def mock_get_wf_env_vars(input: EnvLookupInput) -> EnvLookupOutput:
        return EnvLookupOutput(
            show_confirm=True,
            multi_goal_mode=True
        )

        return EnvLookupOutput(show_confirm=True, multi_goal_mode=True)

    @activity.defn(name="agent_validatePrompt")
    async def mock_agent_validatePrompt(validation_input: ValidationInput) -> ValidationResult:
        return ValidationResult(
            validationResult=True,
            validationFailedReason={}
        )

    async def mock_agent_validatePrompt(
        validation_input: ValidationInput,
    ) -> ValidationResult:
        return ValidationResult(validationResult=True, validationFailedReason={})

    @activity.defn(name="agent_toolPlanner")
    async def mock_agent_toolPlanner(input: ToolPromptInput) -> dict:
        return {
            "next": "done",
            "response": "Test response from LLM"
        }
        return {"next": "done", "response": "Test response from LLM"}

    with concurrent.futures.ThreadPoolExecutor(max_workers=100) as activity_executor:
    with concurrent.futures.ThreadPoolExecutor(
        max_workers=100
    ) as activity_executor:
        worker = Worker(
            client,
            client,
            task_queue=task_queue_name,
            workflows=[AgentGoalWorkflow],
            activities=[
                mock_get_wf_env_vars,
                mock_agent_validatePrompt,
                mock_agent_toolPlanner
                mock_agent_toolPlanner,
            ],
            activity_executor=activity_executor,
        )

        async with worker:
        async with worker:
            initial_agent_goal = get_initial_agent_goal()
            # Create combined input
            combined_input = CombinedInput(
@@ -87,30 +76,36 @@ async def test_flight_booking(client: Client):
                agent_goal=initial_agent_goal,
            )

            prompt="Hello!"
            prompt = "Hello!"

            #async with Worker(client, task_queue=task_queue_name, workflows=[AgentGoalWorkflow], activities=[ToolActivities.agent_validatePrompt, ToolActivities.agent_toolPlanner, dynamic_tool_activity]):
            # async with Worker(client, task_queue=task_queue_name, workflows=[AgentGoalWorkflow], activities=[ToolActivities.agent_validatePrompt, ToolActivities.agent_toolPlanner, dynamic_tool_activity]):

            # todo set goal categories for scenarios
            handle = await client.start_workflow(
                AgentGoalWorkflow.run,
                combined_input,
                id=workflow_id,
                id=workflow_id,
                task_queue=task_queue_name,
                start_signal="user_prompt",
                start_signal_args=[prompt],
            )
            # todo send signals to simulate user input
            # await handle.signal(AgentGoalWorkflow.user_prompt, "book flights") # for multi-goal
            await handle.signal(AgentGoalWorkflow.user_prompt, "sydney in september")
            assert WorkflowExecutionStatus.RUNNING == (await handle.describe()).status
            await handle.signal(
                AgentGoalWorkflow.user_prompt, "sydney in september"
            )
            assert (
                WorkflowExecutionStatus.RUNNING == (await handle.describe()).status
            )

            #assert ["Hello, user1", "Hello, user2"] == await handle.result()
            await handle.signal(AgentGoalWorkflow.user_prompt, "I'm all set, end conversation")

            #assert WorkflowExecutionStatus.COMPLETED == (await handle.describe()).status
            # assert ["Hello, user1", "Hello, user2"] == await handle.result()
            await handle.signal(
                AgentGoalWorkflow.user_prompt, "I'm all set, end conversation"
            )

            # assert WorkflowExecutionStatus.COMPLETED == (await handle.describe()).status

            result = await handle.result()
            #todo dump workflow history for analysis optional
            #todo assert result is good
            print(f"Workflow result: {result}")
            # todo dump workflow history for analysis optional
            # todo assert result is good
8
thirdparty/train_api.py
vendored
@@ -1,9 +1,9 @@
from http.server import HTTPServer, BaseHTTPRequestHandler
from urllib.parse import parse_qs, urlparse
import json
import time
import random
import string
import time
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse


def parse_datetime(datetime_str):
@@ -213,4 +213,4 @@ def run_server():


if __name__ == "__main__":
    run_server()
    run_server()
@@ -1,29 +1,24 @@
from .search_fixtures import search_fixtures
from .search_flights import search_flights
from .search_trains import search_trains
from .search_trains import book_trains
from .create_invoice import create_invoice
from .find_events import find_events
from .list_agents import list_agents
from .change_goal import change_goal
from .transfer_control import transfer_control

from .hr.current_pto import current_pto
from .hr.book_pto import book_pto
from .hr.future_pto_calc import future_pto_calc
from .hr.checkpaybankstatus import checkpaybankstatus

from .create_invoice import create_invoice
from .ecommerce.get_order import get_order
from .ecommerce.list_orders import list_orders
from .ecommerce.track_package import track_package
from .fin.check_account_valid import check_account_valid
from .fin.get_account_balances import get_account_balance
from .fin.move_money import move_money
from .fin.submit_loan_application import submit_loan_application

from .ecommerce.get_order import get_order
from .ecommerce.track_package import track_package
from .ecommerce.list_orders import list_orders

from .find_events import find_events
from .give_hint import give_hint
from .guess_location import guess_location
from .hr.book_pto import book_pto
from .hr.checkpaybankstatus import checkpaybankstatus
from .hr.current_pto import current_pto
from .hr.future_pto_calc import future_pto_calc
from .list_agents import list_agents
from .search_fixtures import search_fixtures
from .search_flights import search_flights
from .search_trains import book_trains, search_trains
from .transfer_control import transfer_control


def get_handler(tool_name: str):

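A dispatcher like `get_handler` is typically a name-to-function lookup over the imported tool handlers. A minimal stand-alone sketch (the handler and registry here are hypothetical, not the project's real implementation):

```python
def search_flights(args: dict) -> dict:
    # Hypothetical stand-in for a real tool handler.
    return {"flights": [], "query": args}


# Registry mapping the tool names the LLM emits to their handlers.
TOOL_HANDLERS = {
    "SearchFlights": search_flights,
}


def get_handler(tool_name: str):
    # Look up the handler registered for a tool name; fail loudly on typos.
    try:
        return TOOL_HANDLERS[tool_name]
    except KeyError:
        raise ValueError(f"Unknown tool: {tool_name}") from None


handler = get_handler("SearchFlights")
print(handler({"origin": "SFO"})["query"])
```

Keeping the registry explicit (rather than reflecting over module globals) makes the set of dispatchable tools easy to audit.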
@@ -1,9 +1,8 @@
def change_goal(args: dict) -> dict:

    new_goal = args.get("goalID")
    if new_goal is None:
        new_goal = "goal_choose_agent_type"

    return {
        "new_goal": new_goal,
    }
    }
@@ -1,4 +1,5 @@
import os

import stripe
from dotenv import load_dotenv

@@ -1,16 +1,18 @@
from pathlib import Path
import json
from pathlib import Path


# this is made to demonstrate functionality but it could just as durably be an API call
# called as part of a temporal activity with automatic retries
def get_order(args: dict) -> dict:

    order_id = args.get("order_id")

    file_path = Path(__file__).resolve().parent.parent / "data" / "customer_order_data.json"
    file_path = (
        Path(__file__).resolve().parent.parent / "data" / "customer_order_data.json"
    )
    if not file_path.exists():
        return {"error": "Data file not found."}

    with open(file_path, "r") as file:
        data = json.load(file)
        order_list = data["orders"]
@@ -18,6 +20,6 @@ def get_order(args: dict) -> dict:
    for order in order_list:
        if order["id"] == order_id:
            return order

    return_msg = "Order " + order_id + " not found."
    return {"error": return_msg}
    return {"error": return_msg}
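The lookup pattern in `get_order` (load a JSON data file, scan for a matching record, surface failures as an error dict rather than an exception) can be sketched stand-alone; the file name and record fields here are illustrative, not the project's actual data:

```python
import json
from pathlib import Path
from tempfile import TemporaryDirectory


def find_order(data_file: Path, order_id: str) -> dict:
    # Missing file and missing record both come back as {"error": ...},
    # so the caller can always treat the result as a plain dict.
    if not data_file.exists():
        return {"error": "Data file not found."}
    orders = json.loads(data_file.read_text())["orders"]
    for order in orders:
        if order["id"] == order_id:
            return order
    return {"error": f"Order {order_id} not found."}


with TemporaryDirectory() as tmp:
    path = Path(tmp) / "customer_order_data.json"
    path.write_text(json.dumps({"orders": [{"id": "A1", "status": "shipped"}]}))
    print(find_order(path, "A1"))  # matching record
    print(find_order(path, "ZZ"))  # error dict
```

Returning error dicts (instead of raising) keeps the tool's contract uniform for the agent that consumes its output.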
@@ -1,17 +1,20 @@
from pathlib import Path
import json
from pathlib import Path


def sorting(e):
    return e['order_date']
    return e["order_date"]


def list_orders(args: dict) -> dict:

    email_address = args.get("email_address")

    file_path = Path(__file__).resolve().parent.parent / "data" / "customer_order_data.json"
    file_path = (
        Path(__file__).resolve().parent.parent / "data" / "customer_order_data.json"
    )
    if not file_path.exists():
        return {"error": "Data file not found."}

    with open(file_path, "r") as file:
        data = json.load(file)
        order_list = data["orders"]
@@ -24,7 +27,6 @@ def list_orders(args: dict) -> dict:
    if len(rtn_order_list) > 0:
        rtn_order_list.sort(key=sorting)
        return {"orders": rtn_order_list}
    else:
    else:
        return_msg = "No orders for customer " + email_address + " found."
        return {"error": return_msg}

@@ -1,49 +1,59 @@
import http
import os
import json

import os
from pathlib import Path

#Send back dummy data in the correct format - to use the real API, 1) change this to be track_package_fake and 2) change the below track_package_real to be track_package

# Send back dummy data in the correct format - to use the real API, 1) change this to be track_package_fake and 2) change the below track_package_real to be track_package
def track_package(args: dict) -> dict:

    tracking_id = args.get("tracking_id")
    file_path = Path(__file__).resolve().parent.parent / "data" / "dummy_tracking_data.json"
    file_path = (
        Path(__file__).resolve().parent.parent / "data" / "dummy_tracking_data.json"
    )
    if not file_path.exists():
        return {"error": "Data file not found."}

    with open(file_path, "r") as file:
        data = json.load(file)
        package_list = data["packages"]

    for package in package_list:
        if package["TrackingNumber"] == tracking_id:
            scheduled_delivery_date = package["ScheduledDeliveryDate"]
            carrier = package["Carrier"]
            status_summary = package["StatusSummary"]
            tracking_details = package.get("TrackingDetails", [])
            last_tracking_update = ""
            if tracking_details and tracking_details is not None and tracking_details[0] is not None:
                last_tracking_update = tracking_details[0]["EventDateTimeInDateTimeFormat"]

            tracking_link = ""
            if carrier == "USPS":
                tracking_link = f"https://tools.usps.com/go/TrackConfirmAction?qtc_tLabels1={tracking_id}"
            elif carrier == "UPS":
                tracking_link = f"https://www.ups.com/track?track=yes&trackNums={tracking_id}"
            scheduled_delivery_date = package["ScheduledDeliveryDate"]
            carrier = package["Carrier"]
            status_summary = package["StatusSummary"]
            tracking_details = package.get("TrackingDetails", [])
            last_tracking_update = ""
            if (
                tracking_details
                and tracking_details is not None
                and tracking_details[0] is not None
            ):
                last_tracking_update = tracking_details[0][
                    "EventDateTimeInDateTimeFormat"
                ]

            tracking_link = ""
            if carrier == "USPS":
                tracking_link = f"https://tools.usps.com/go/TrackConfirmAction?qtc_tLabels1={tracking_id}"
            elif carrier == "UPS":
                tracking_link = (
                    f"https://www.ups.com/track?track=yes&trackNums={tracking_id}"
                )

            return {
                "scheduled_delivery_date": scheduled_delivery_date,
                "carrier": carrier,
                "status_summary": status_summary,
                "tracking_link": tracking_link,
                "last_tracking_update": last_tracking_update,
            }

            return {
                "scheduled_delivery_date": scheduled_delivery_date,
                "carrier": carrier,
                "status_summary": status_summary,
                "tracking_link": tracking_link,
                "last_tracking_update": last_tracking_update
            }

    return_msg = "Package not found with tracking info " + tracking_id
    return {"error": return_msg}

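The carrier-to-URL branching above reduces to a small pure function. A sketch (the URLs are the ones appearing in the diff; the function name is illustrative):

```python
def build_tracking_link(carrier: str, tracking_id: str) -> str:
    # Map a carrier name to its public tracking URL; unknown carriers
    # get an empty link rather than an error.
    if carrier == "USPS":
        return f"https://tools.usps.com/go/TrackConfirmAction?qtc_tLabels1={tracking_id}"
    if carrier == "UPS":
        return f"https://www.ups.com/track?track=yes&trackNums={tracking_id}"
    return ""


print(build_tracking_link("UPS", "1Z999"))
```

Factoring the branching out like this would also let both `track_package` variants share one definition instead of duplicating the URL strings.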
'''Format of response:

"""Format of response:
{
    "TrackingNumber": "",
    "Delivered": false,
@@ -94,9 +104,10 @@ def track_package(args: dict) -> dict:
        }
    ]
}
'''
def track_package_real(args: dict) -> dict:
"""


def track_package_real(args: dict) -> dict:
    tracking_id = args.get("tracking_id")

    api_key = os.getenv("RAPIDAPI_KEY")
@@ -127,11 +138,17 @@ def track_package_real(args: dict) -> dict:
    status_summary = json_data["StatusSummary"]
    tracking_details = json_data.get("TrackingDetails", [])
    last_tracking_update = ""
    if tracking_details and tracking_details is not None and tracking_details[0] is not None:
    if (
        tracking_details
        and tracking_details is not None
        and tracking_details[0] is not None
    ):
        last_tracking_update = tracking_details[0]["EventDateTimeInDateTimeFormat"]
    tracking_link = ""
    if carrier == "USPS":
        tracking_link = f"https://tools.usps.com/go/TrackConfirmAction?qtc_tLabels1={tracking_id}"
        tracking_link = (
            f"https://tools.usps.com/go/TrackConfirmAction?qtc_tLabels1={tracking_id}"
        )
    elif carrier == "UPS":
        tracking_link = f"https://www.ups.com/track?track=yes&trackNums={tracking_id}"

@@ -140,5 +157,5 @@ def track_package_real(args: dict) -> dict:
        "carrier": carrier,
        "status_summary": status_summary,
        "tracking_link": tracking_link,
        "last_tracking_update": last_tracking_update
    }
        "last_tracking_update": last_tracking_update,
    }

@@ -1,24 +1,31 @@
from pathlib import Path
import json
from pathlib import Path


# this is made to demonstrate functionality but it could just as durably be an API call
# called as part of a temporal activity with automatic retries
def check_account_valid(args: dict) -> dict:

    email = args.get("email")
    account_id = args.get("account_id")

    file_path = Path(__file__).resolve().parent.parent / "data" / "customer_account_data.json"
    file_path = (
        Path(__file__).resolve().parent.parent / "data" / "customer_account_data.json"
    )
    if not file_path.exists():
        return {"error": "Data file not found."}

    with open(file_path, "r") as file:
        data = json.load(file)
        account_list = data["accounts"]

    for account in account_list:
        if account["email"] == email or account["account_id"] == account_id:
            return{"status": "account valid"}

    return_msg = "Account not found with email address " + email + " or account ID: " + account_id
    return {"error": return_msg}
            return {"status": "account valid"}

    return_msg = (
        "Account not found with email address "
        + email
        + " or account ID: "
        + account_id
    )
    return {"error": return_msg}

@@ -1,23 +1,33 @@
from pathlib import Path
import json
from pathlib import Path


# this is made to demonstrate functionality but it could just as durably be an API call
# this assumes it's a valid account - use check_account_valid() to verify that first
def get_account_balance(args: dict) -> dict:

    account_key = args.get("email_address_or_account_ID")

    file_path = Path(__file__).resolve().parent.parent / "data" / "customer_account_data.json"
    file_path = (
        Path(__file__).resolve().parent.parent / "data" / "customer_account_data.json"
    )
    if not file_path.exists():
        return {"error": "Data file not found."}

    with open(file_path, "r") as file:
        data = json.load(file)
        account_list = data["accounts"]

    for account in account_list:
        if account["email"] == account_key or account["account_id"] == account_key:
            return{ "name": account["name"], "email": account["email"], "account_id": account["account_id"], "checking_balance": account["checking_balance"], "savings_balance": account["savings_balance"], "bitcoin_balance": account["bitcoin_balance"], "account_creation_date": account["account_creation_date"] }

            return {
                "name": account["name"],
                "email": account["email"],
                "account_id": account["account_id"],
                "checking_balance": account["checking_balance"],
                "savings_balance": account["savings_balance"],
                "bitcoin_balance": account["bitcoin_balance"],
                "account_creation_date": account["account_creation_date"],
            }

    return_msg = "Account not found with for " + account_key
    return {"error": return_msg}
    return {"error": return_msg}

@@ -1,16 +1,12 @@
import os
from pathlib import Path
import json
from temporalio.client import Client
import os
from dataclasses import dataclass
from typing import Optional
import asyncio
from pathlib import Path

from temporalio.exceptions import WorkflowAlreadyStartedError

from shared.config import get_temporal_client

from enum import Enum, auto

# enums for the java enum
# class ExecutionScenarios(Enum):
#     HAPPY_PATH = 0
@@ -32,7 +28,6 @@ class MoneyMovementWorkflowParameterObj:
# this is made to demonstrate functionality but it could just as durably be an API call
# this assumes it's a valid account - use check_account_valid() to verify that first
async def move_money(args: dict) -> dict:

    account_key = args.get("email_address_or_account_ID")
    account_type: str = args.get("accounttype")
    amount = args.get("amount")
@@ -101,7 +96,6 @@ async def move_money(args: dict) -> dict:
async def start_workflow(
    amount_cents: int, from_account_name: str, to_account_name: str
) -> str:

    start_real_workflow = os.getenv("FIN_START_REAL_WORKFLOW")
    if start_real_workflow is not None and start_real_workflow.lower() == "false":
        START_REAL_WORKFLOW = False
@@ -128,7 +122,7 @@ async def start_workflow(
            task_queue="MoneyTransferJava",  # Task queue name
        )
        return handle.id
    except WorkflowAlreadyStartedError as e:
    except WorkflowAlreadyStartedError:
        existing_handle = client.get_workflow_handle(workflow_id=workflow_id)
        return existing_handle.id
else:

@@ -1,18 +1,10 @@
from datetime import date, timedelta
import os
from pathlib import Path
import json
from temporalio.client import (
    Client,
    WithStartWorkflowOperation,
    WorkflowHandle,
    WorkflowUpdateFailedError,
)
from temporalio import common
from dataclasses import dataclass
from typing import Optional
import asyncio
from temporalio.exceptions import WorkflowAlreadyStartedError
from datetime import date

from temporalio import common
from temporalio.client import WithStartWorkflowOperation, WorkflowUpdateFailedError

from shared.config import get_temporal_client

@@ -24,39 +16,55 @@ class TransactionRequest:
    sourceAccount: str
    targetAccount: str


@dataclass
class TxResult:
    transactionId: str
    status: str

#demonstrate starting a workflow and early return pattern while the workflow continues

# demonstrate starting a workflow and early return pattern while the workflow continues
async def submit_loan_application(args: dict) -> dict:
    account_key = args.get("email_address_or_account_ID")
    amount = args.get("amount")

    loan_status: dict = await start_workflow(amount=amount,account_name=account_key)
    loan_status: dict = await start_workflow(amount=amount, account_name=account_key)

    if loan_status.get("error") is None:
        return {'status': loan_status.get("loan_application_status"), 'detailed_status': loan_status.get("application_details"), 'next_step': loan_status.get("advisement"), 'confirmation_id': loan_status.get("transaction_id")}
        return {
            "status": loan_status.get("loan_application_status"),
            "detailed_status": loan_status.get("application_details"),
            "next_step": loan_status.get("advisement"),
            "confirmation_id": loan_status.get("transaction_id"),
        }
    else:
        print(loan_status)
        return loan_status

# Async function to start workflow
|
||||
async def start_workflow(amount: str, account_name: str, )-> dict:
|
||||
|
||||
async def start_workflow(
|
||||
amount: str,
|
||||
account_name: str,
|
||||
) -> dict:
|
||||
start_real_workflow = os.getenv("FIN_START_REAL_WORKFLOW")
|
||||
if start_real_workflow is not None and start_real_workflow.lower() == "false":
|
||||
START_REAL_WORKFLOW = False
|
||||
return {'loan_application_status': "applied", 'application_details': "loan application is submitted and initial validation is complete",'transaction_id': "APPLICATION"+account_name, 'advisement': "You'll receive a confirmation for final approval in three business days", }
|
||||
# START_REAL_WORKFLOW = False
|
||||
return {
|
||||
"loan_application_status": "applied",
|
||||
"application_details": "loan application is submitted and initial validation is complete",
|
||||
"transaction_id": "APPLICATION" + account_name,
|
||||
"advisement": "You'll receive a confirmation for final approval in three business days",
|
||||
}
|
||||
else:
|
||||
START_REAL_WORKFLOW = True
|
||||
# Connect to Temporal
|
||||
# START_REAL_WORKFLOW = True
|
||||
# Connect to Temporal
|
||||
client = await get_temporal_client()
|
||||
|
||||
|
||||
# Define the workflow ID and task queue
|
||||
workflow_id = "LOAN_APPLICATION-"+account_name+"-"+date.today().strftime('%Y-%m-%d')
|
||||
workflow_id = (
|
||||
"LOAN_APPLICATION-" + account_name + "-" + date.today().strftime("%Y-%m-%d")
|
||||
)
|
||||
task_queue = "LatencyOptimizationTEST"
|
||||
|
||||
# Create a TransactionRequest (matching the Java workflow's expected input)
|
||||
@@ -83,21 +91,27 @@ async def start_workflow(amount: str, account_name: str, )-> dict:
|
||||
)
|
||||
)
|
||||
except WorkflowUpdateFailedError:
|
||||
print("aww man got exception WorkflowUpdateFailedError" )
|
||||
print("aww man got exception WorkflowUpdateFailedError")
|
||||
tx_result = None
|
||||
return_msg = "Loan could not be processed for " + account_name
|
||||
return {"error": return_msg}
|
||||
|
||||
workflow_handle = await start_op.workflow_handle()
|
||||
print(f"Workflow started with ID: {workflow_handle.id}")
|
||||
print(tx_result)
|
||||
|
||||
print(f"Update result: Transaction ID = {tx_result.transactionId}, Message = {tx_result.status}")
|
||||
print(
|
||||
f"Update result: Transaction ID = {tx_result.transactionId}, Message = {tx_result.status}"
|
||||
)
|
||||
|
||||
# Optionally, wait for the workflow to complete and get the final result
|
||||
# final_result = await handle.result()
|
||||
# print(f"Workflow completed with result: {final_result}")
|
||||
|
||||
|
||||
# return {'status': loan_status.get("loan_status"), 'detailed_status': loan_status.get("results"), 'next_step': loan_status.get("advisement"), 'confirmation_id': loan_status.get("workflowID")}
|
||||
return {'loan_application_status': "applied", 'application_details': "loan application is submitted and initial validation is complete",'transaction_id': tx_result.transactionId, 'advisement': "You'll receive a confirmation for final approval in three business days", }
|
||||
|
||||
# return {'status': loan_status.get("loan_status"), 'detailed_status': loan_status.get("results"), 'next_step': loan_status.get("advisement"), 'confirmation_id': loan_status.get("workflowID")}
|
||||
return {
|
||||
"loan_application_status": "applied",
|
||||
"application_details": "loan application is submitted and initial validation is complete",
|
||||
"transaction_id": tx_result.transactionId,
|
||||
"advisement": "You'll receive a confirmation for final approval in three business days",
|
||||
}
|
||||
|
||||
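The workflow ID above embeds the account name and today's date, so one loan application per account per day maps onto exactly one workflow execution. A small sketch of that ID construction, pulled out as a pure function for illustration (the helper name is our own):

```python
from datetime import date


def loan_workflow_id(account_name: str, on: date) -> str:
    # Same shape as the diff's workflow_id: prefix, account, ISO date.
    # Starting with the same ID on the same day hits the dedupe path
    # rather than creating a second application.
    return "LOAN_APPLICATION-" + account_name + "-" + on.strftime("%Y-%m-%d")


print(loan_workflow_id("jane@example.com", date(2025, 3, 1)))
# LOAN_APPLICATION-jane@example.com-2025-03-01
```

Because the ID is deterministic, retries of `submit_loan_application` within the same day are naturally deduplicated by Temporal's workflow-ID uniqueness.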
@@ -1,6 +1,6 @@
import json
from datetime import datetime
from pathlib import Path
import json


def find_events(args: dict) -> dict:
@@ -1,10 +1,10 @@
TREASURE_LOCATION = {
    "address": "300 Lenora",
    "city": "Seattle",
    "state_full": "Washington",
    "state_abbrev": "WA",
    "zip": "98121",
    "country": "USA"
    "address": "300 Lenora",
    "city": "Seattle",
    "state_full": "Washington",
    "state_abbrev": "WA",
    "zip": "98121",
    "country": "USA",
}

HINTS = [
@@ -12,8 +12,8 @@ HINTS = [
    "state of " + TREASURE_LOCATION["state_full"],
    "city of " + TREASURE_LOCATION["city"],
    "at a company HQ",
    "The company's tech traces its roots to a project called Cadence", #thanks, Grok
    "The company offers a tool that lets developers write code as if it's running forever, no matter what crashes", #thanks, Grok
    "The company's tech traces its roots to a project called Cadence",  # thanks, Grok
    "The company offers a tool that lets developers write code as if it's running forever, no matter what crashes",  # thanks, Grok
]
''' Additional Grok provided hints about Temporal:
"This company was founded by two engineers who previously worked on a system named after a South American river at Uber."
@@ -26,16 +26,14 @@ HINTS = [
"They’re backed by big venture capital names like Sequoia, betting on their vision for reliable software."
"The company’s name might remind you of a word for something fleeting, yet their tech is built to last."'''


def give_hint(args: dict) -> dict:
    hint_total = args.get("hint_total")
    if hint_total is None:
        hint_total = 0

    index = hint_total % len(HINTS)
    hint_text = HINTS[index]

    hint_total = hint_total + 1
    return {
        "hint_number": hint_total,
        "hint": hint_text
    }
    return {"hint_number": hint_total, "hint": hint_text}
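The hint tool wraps its counter with a modulo so the agent can keep asking for hints forever and simply cycle through the list. A condensed sketch of that arithmetic, with a stand-in `HINTS` list in place of the real one above:

```python
HINTS = ["hint A", "hint B", "hint C"]  # stand-in list; real hints live above


def give_hint(args: dict) -> dict:
    # Same arithmetic as the tool: wrap the running total with modulo to
    # pick the hint, then return the advanced counter for next time.
    hint_total = args.get("hint_total")
    if hint_total is None:
        hint_total = 0
    hint_text = HINTS[hint_total % len(HINTS)]
    return {"hint_number": hint_total + 1, "hint": hint_text}


print(give_hint({"hint_total": 4}))  # wraps around: 4 % 3 == 1 -> "hint B"
```

The caller feeds `hint_number` back as the next `hint_total`, so the rotation state lives entirely in the conversation.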
@@ -1,7 +1,8 @@
import os
from typing import List
from models.tool_definitions import AgentGoal

import tools.tool_registry as tool_registry
from models.tool_definitions import AgentGoal

# Turn on Silly Mode - this should be a description of the persona you'd like the bot to have and can be a single word or a phrase.
# Example if you want the bot to be a specific person, like Mario or Christopher Walken, or to describe a specific tone:
@@ -310,7 +311,7 @@ goal_fin_check_account_balances = AgentGoal(
)

# this tool checks account balances, and uses ./data/customer_account_data.json as dummy data
# it also uses a separate workflow/tool, see ./setup.md for details
# it also uses a separate workflow/tool, see ./SETUP.md for details
goal_fin_move_money = AgentGoal(
    id="goal_fin_move_money",
    category_tag="fin",
@@ -350,7 +351,7 @@ goal_fin_move_money = AgentGoal(
)

# this starts a loan approval process
# it also uses a separate workflow/tool, see ./setup.md for details
# it also uses a separate workflow/tool, see ./SETUP.md for details
goal_fin_loan_application = AgentGoal(
    id="goal_fin_loan_application",
    category_tag="fin",
@@ -1,7 +1,7 @@
from .give_hint import TREASURE_LOCATION


def guess_location(args: dict) -> dict:

    guess_address = args.get("address").lower()
    guess_city = args.get("city").lower()
    guess_state = args.get("state").lower()
@@ -11,8 +11,12 @@ def guess_location(args: dict) -> dict:
    else:
        compare_state = TREASURE_LOCATION.get("state_full").lower()

    #Check for the street address to be included in the guess to account for "st" vs "street" or leaving Street off entirely
    if TREASURE_LOCATION.get("address").lower() in guess_address and TREASURE_LOCATION.get("city").lower() == guess_city and compare_state == guess_state:
    # Check for the street address to be included in the guess to account for "st" vs "street" or leaving Street off entirely
    if (
        TREASURE_LOCATION.get("address").lower() in guess_address
        and TREASURE_LOCATION.get("city").lower() == guess_city
        and compare_state == guess_state
    ):
        return {"treasure_found": "True"}
    else:
        return {"treasure_found": "False"}
        return {"treasure_found": "False"}
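The guess check deliberately mixes match modes: a substring test on the street address (so "300 Lenora St" and bare "300 Lenora" both pass) and exact, lowercased matches on city and state. A self-contained sketch of that comparison, with a simplified stand-in for `TREASURE_LOCATION`:

```python
# Stand-in for TREASURE_LOCATION, already lowercased for comparison.
TREASURE = {"address": "300 lenora", "city": "seattle", "state": "washington"}


def matches(guess_address: str, guess_city: str, guess_state: str) -> bool:
    # Substring match on the street so suffix variations still count;
    # exact match on city and state, as in the tool above.
    return (
        TREASURE["address"] in guess_address.lower()
        and TREASURE["city"] == guess_city.lower()
        and TREASURE["state"] == guess_state.lower()
    )


print(matches("300 Lenora Street", "Seattle", "Washington"))  # True
print(matches("301 Lenora", "Seattle", "Washington"))  # False
```

The asymmetry matters: users phrase street addresses loosely, while city and state names are stable enough to demand exact equality.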
@@ -1,11 +1,10 @@
def book_pto(args: dict) -> dict:

    email = args.get("email")
    start_date = args.get("start_date")
    end_date = args.get("end_date")

    print(f"[BookPTO] Totally would send an email confirmation of PTO from {start_date} to {end_date} to {email} here!")
    print(
        f"[BookPTO] Totally would send an email confirmation of PTO from {start_date} to {end_date} to {email} here!"
    )

    return {
        "status": "success"
    }
    return {"status": "success"}
@@ -1,9 +1,4 @@
from pathlib import Path
import json


def checkpaybankstatus(args: dict) -> dict:

    email = args.get("email")

    if email == "grinch@grinch.com":
@@ -12,4 +7,4 @@ def checkpaybankstatus(args: dict) -> dict:

    # could do logic here or look up data but for now everyone but the grinch is getting paid
    return_msg = "connected"
    return {"status": return_msg}
    return {"status": return_msg}
@@ -1,26 +1,27 @@
from pathlib import Path
import json
from pathlib import Path


def current_pto(args: dict) -> dict:

    email = args.get("email")

    file_path = Path(__file__).resolve().parent.parent / "data" / "employee_pto_data.json"
    file_path = (
        Path(__file__).resolve().parent.parent / "data" / "employee_pto_data.json"
    )
    if not file_path.exists():
        return {"error": "Data file not found."}

    data = json.load(open(file_path))
    employee_list = data["theCompany"]["employees"]

    for employee in employee_list:
        if employee["email"] == email:
            num_hours = int(employee["currentPTOHrs"])
            num_days = float(num_hours/8)
            num_days = float(num_hours / 8)
            return {
                "num_hours": num_hours,
                "num_days": num_days,
            }

    return_msg = "Employee not found with email address " + email
    return {"error": return_msg}
    return {"error": return_msg}
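The lookup above scans the dummy JSON for a matching email and converts banked hours to days at 8 hours per workday. A runnable sketch with hypothetical inline data matching the shape of `employee_pto_data.json`:

```python
import json

# Hypothetical record matching the shape of employee_pto_data.json.
DATA = json.loads(
    '{"theCompany": {"employees": ['
    '{"email": "jane@example.com", "currentPTOHrs": "36"}]}}'
)


def current_pto(email: str) -> dict:
    # Same lookup and 8-hour-day conversion as the tool above.
    for employee in DATA["theCompany"]["employees"]:
        if employee["email"] == email:
            num_hours = int(employee["currentPTOHrs"])
            return {"num_hours": num_hours, "num_days": num_hours / 8}
    return {"error": "Employee not found with email address " + email}


print(current_pto("jane@example.com"))  # {'num_hours': 36, 'num_days': 4.5}
```

Note the hours are stored as strings in the data file, hence the `int(...)` cast before the division.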
@@ -1,43 +1,59 @@
import json
import pandas
from pathlib import Path
from datetime import date, datetime
from pathlib import Path

import pandas
from dateutil.relativedelta import relativedelta


def future_pto_calc(args: dict) -> dict:

    file_path = Path(__file__).resolve().parent.parent / "data" / "employee_pto_data.json"
    file_path = (
        Path(__file__).resolve().parent.parent / "data" / "employee_pto_data.json"
    )
    if not file_path.exists():
        return {"error": "Data file not found."}

    start_date = datetime.strptime(args.get("start_date"), "%Y-%m-%d").date()
    end_date = datetime.strptime(args.get("end_date"), "%Y-%m-%d").date()
    email = args.get("email")

    #Next, set up the ability to calculate how much PTO will be added to the user's total by the start of the PTO request
    # Next, set up the ability to calculate how much PTO will be added to the user's total by the start of the PTO request
    today = date.today()

    if today > start_date:
        return_msg = "PTO start date " + args.get("start_date") + "cannot be in the past"
        return_msg = (
            "PTO start date " + args.get("start_date") + "cannot be in the past"
        )
        return {"error": return_msg}

    if end_date < start_date:
        return_msg = "PTO end date " + args.get("end_date") + " must be after PTO start date " + args.get("start_date")
        return_msg = (
            "PTO end date "
            + args.get("end_date")
            + " must be after PTO start date "
            + args.get("start_date")
        )
        return {"error": return_msg}

    #Get the number of business days, and then business hours (assume 8 hr biz day), included in the PTO request
    biz_days_of_request = len(pandas.bdate_range(start=start_date, end=end_date, inclusive="both"))

    # Get the number of business days, and then business hours (assume 8 hr biz day), included in the PTO request
    biz_days_of_request = len(
        pandas.bdate_range(start=start_date, end=end_date, inclusive="both")
    )
    if biz_days_of_request == 0:
        return_msg = "There are no business days between " + args.get("start_date") + " and " + args.get("end_date")
        return_msg = (
            "There are no business days between "
            + args.get("start_date")
            + " and "
            + args.get("end_date")
        )
        return {"error": return_msg}
    biz_hours_of_request = biz_days_of_request * 8

    #Assume PTO is added on the first of every month - month math compares rolling dates, so compare the PTO request with the first day of the current month.

    # Assume PTO is added on the first of every month - month math compares rolling dates, so compare the PTO request with the first day of the current month.
    today_first_of_month = date(today.year, today.month, 1)
    time_difference = relativedelta(start_date, today_first_of_month)
    months_to_accrue = time_difference.years * 12 + time_difference.months

    data = json.load(open(file_path))
    employee_list = data["theCompany"]["employees"]

@@ -47,12 +63,14 @@ def future_pto_calc(args: dict) -> dict:
        if employee["email"] == email:
            current_pto_hours = int(employee["currentPTOHrs"])
            hrs_added_per_month = int(employee["hrsAddedPerMonth"])
            pto_available_at_start = current_pto_hours + (months_to_accrue * hrs_added_per_month)
            pto_available_at_start = current_pto_hours + (
                months_to_accrue * hrs_added_per_month
            )
            pto_hrs_remaining_after = pto_available_at_start - biz_hours_of_request
            if pto_hrs_remaining_after >= 0:
                enough_pto = True
            return {
                "enough_pto": enough_pto,
                "enough_pto": enough_pto,
                "pto_hrs_remaining_after": str(pto_hrs_remaining_after),
            }
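The accrual math above has two moving parts: counting business days in the request with `pandas.bdate_range`, and counting whole accrual months from the first of the current month to the start date with `relativedelta`. A stdlib-only sketch of the same arithmetic (no pandas/dateutil needed; the helper names and example figures are our own):

```python
from datetime import date, timedelta


def business_days(start: date, end: date) -> int:
    # Stdlib stand-in for pandas.bdate_range(..., inclusive="both"):
    # count Mon-Fri between start and end, inclusive on both ends.
    days = (end - start).days + 1
    return sum(
        1 for i in range(days) if (start + timedelta(days=i)).weekday() < 5
    )


def months_to_accrue(start: date, today: date) -> int:
    # PTO lands on the 1st, so measure whole months from the first of the
    # current month to the request's start date. Because the anchor day is
    # always 1, this matches relativedelta's year/month components.
    first = date(today.year, today.month, 1)
    return (start.year - first.year) * 12 + (start.month - first.month)


# Example: 40 banked hours, 8 accrued per month, request 3 months out.
today = date(2025, 1, 15)
start, end = date(2025, 4, 7), date(2025, 4, 11)  # Mon-Fri, one work week
available = 40 + months_to_accrue(start, today) * 8  # 40 + 3 * 8 = 64
remaining = available - business_days(start, end) * 8  # 64 - 5 * 8 = 24
print(remaining)  # 24
```

The key subtlety the comment in the tool calls out: month math over rolling dates would under-count, so the accrual window is anchored to the first of the current month before taking the month difference.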
@@ -1,19 +1,23 @@
import os

import tools.goal_registry as goals


def list_agents(args: dict) -> dict:

def list_agents(args: dict) -> dict:
    goal_categories_start = os.getenv("GOAL_CATEGORIES")
    if goal_categories_start is None:
        goal_categories = ["all"] # default to 'all' categories
        goal_categories = ["all"]  # default to 'all' categories
    else:
        goal_categories_start.strip().lower() # handle extra spaces or non-lowercase
        goal_categories_start.strip().lower()  # handle extra spaces or non-lowercase
        goal_categories = goal_categories_start.split(",")

    # if multi-goal-mode, add agent_selection as a goal (defaults to True)
    if "agent_selection" not in goal_categories :
        first_goal_value = os.getenv("AGENT_GOAL")
        if first_goal_value is None or first_goal_value.lower() == "goal_choose_agent_type":
    if "agent_selection" not in goal_categories:
        first_goal_value = os.getenv("AGENT_GOAL")
        if (
            first_goal_value is None
            or first_goal_value.lower() == "goal_choose_agent_type"
        ):
            goal_categories.append("agent_selection")

    # always show goals labeled as "system," like the goal chooser
@@ -33,7 +37,7 @@ def list_agents(args: dict) -> dict:
                "goal_id": goal.id,
                "agent_description": goal.agent_friendly_description,
            }
        )
    )
    return {
        "agents": agents,
    }
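One thing the formatting pass above does not change: `goal_categories_start.strip().lower()` discards its result, since Python strings are immutable, so the comma-split values keep their original case and spacing. A hedged sketch of the parse with the assignment made explicit (our own helper, not the repo's code):

```python
import os


def parse_goal_categories() -> list:
    # Sketch of the GOAL_CATEGORIES parse above, with .strip().lower()
    # actually applied per item (the original call discards its result,
    # because str methods return new strings rather than mutating).
    raw = os.getenv("GOAL_CATEGORIES")
    if raw is None:
        return ["all"]  # default to 'all' categories
    return [category.strip().lower() for category in raw.split(",")]


os.environ["GOAL_CATEGORIES"] = "HR, Fin,ecommerce"
print(parse_goal_categories())  # ['hr', 'fin', 'ecommerce']
```

Normalizing per item also handles spaces after commas, which a single `strip()` on the whole string would miss.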
@@ -1,7 +1,8 @@
import os
import requests
import random
from datetime import datetime, timedelta, date
from datetime import date, datetime, timedelta

import requests
from dotenv import load_dotenv

PREMIER_LEAGUE_CLUBS_DATA = [
@@ -1,9 +1,10 @@
import os
import json
import http.client
from dotenv import load_dotenv
import json
import os
import urllib.parse

from dotenv import load_dotenv


def search_airport(query: str) -> list:
    """
@@ -1,4 +1,4 @@
from models.tool_definitions import ToolDefinition, ToolArgument
from models.tool_definitions import ToolArgument, ToolDefinition

# ----- System tools -----
list_agents_tool = ToolDefinition(
@@ -1,7 +1,7 @@
import shared.config

def transfer_control(args: dict) -> dict:


def transfer_control(args: dict) -> dict:
    return {
        "new_goal": shared.config.AGENT_GOAL,
    }
    }
@@ -1,31 +1,35 @@
from collections import deque
from datetime import timedelta
from typing import Dict, Any, Union, List, Optional, Deque, TypedDict
from typing import Any, Deque, Dict, List, Optional, TypedDict, Union

from temporalio.common import RetryPolicy
from temporalio import workflow
from temporalio.common import RetryPolicy

from models.data_types import ConversationHistory, EnvLookupOutput, NextStep, ValidationInput, EnvLookupInput
from models.data_types import (
    ConversationHistory,
    EnvLookupInput,
    EnvLookupOutput,
    NextStep,
    ValidationInput,
)
from models.tool_definitions import AgentGoal
from workflows.workflow_helpers import LLM_ACTIVITY_START_TO_CLOSE_TIMEOUT, \
    LLM_ACTIVITY_SCHEDULE_TO_CLOSE_TIMEOUT
from workflows import workflow_helpers as helpers
from workflows.workflow_helpers import (
    LLM_ACTIVITY_SCHEDULE_TO_CLOSE_TIMEOUT,
    LLM_ACTIVITY_START_TO_CLOSE_TIMEOUT,
)

with workflow.unsafe.imports_passed_through():
    from activities.tool_activities import ToolActivities
    from prompts.agent_prompt_generators import (
        generate_genai_prompt
    )
    from models.data_types import (
        CombinedInput,
        ToolPromptInput,
    )
    from models.data_types import CombinedInput, ToolPromptInput
    from prompts.agent_prompt_generators import generate_genai_prompt
    from tools.goal_registry import goal_list

# Constants
MAX_TURNS_BEFORE_CONTINUE = 250

#ToolData as part of the workflow is what's accessible to the UI - see LLMResponse.jsx for example

# ToolData as part of the workflow is what's accessible to the UI - see LLMResponse.jsx for example
class ToolData(TypedDict, total=False):
    next: NextStep
    tool: str
@@ -33,6 +37,7 @@ class ToolData(TypedDict, total=False):
    response: str
    force_confirm: bool = True


@workflow.defn
class AgentGoalWorkflow:
    """Workflow that manages tool execution with user confirmation and conversation history."""
@@ -43,16 +48,21 @@ class AgentGoalWorkflow:
        self.conversation_summary: Optional[str] = None
        self.chat_ended: bool = False
        self.tool_data: Optional[ToolData] = None
        self.confirmed: bool = False # indicates that we have confirmation to proceed to run tool
        self.confirmed: bool = (
            False  # indicates that we have confirmation to proceed to run tool
        )
        self.tool_results: List[Dict[str, Any]] = []
        self.goal: AgentGoal = {"tools": []}
        self.show_tool_args_confirmation: bool = True # set from env file in activity lookup_wf_env_settings
        self.multi_goal_mode: bool = False # set from env file in activity lookup_wf_env_settings
        self.show_tool_args_confirmation: bool = (
            True  # set from env file in activity lookup_wf_env_settings
        )
        self.multi_goal_mode: bool = (
            False  # set from env file in activity lookup_wf_env_settings
        )

    # see ../api/main.py#temporal_client.start_workflow() for how the input parameters are set
    @workflow.run
    async def run(self, combined_input: CombinedInput) -> str:

        """Main workflow execution method."""
        # setup phase, starts with blank tool_params and agent_goal prompt as defined in tools/goal_registry.py
        params = combined_input.tool_params
@@ -68,12 +78,12 @@ class AgentGoalWorkflow:
        if params and params.prompt_queue:
            self.prompt_queue.extend(params.prompt_queue)

        waiting_for_confirm = False
        waiting_for_confirm = False
        current_tool = None

        # This is the main interactive loop. Main responsibilities:
        # - Selecting and changing goals as directed by the user
        # - reacting to user input (from signals)
        # - reacting to user input (from signals)
        # - validating user input to make sure it makes sense with the current goal and tools
        # - calling the LLM through activities to determine next steps and prompts
        # - executing the selected tools via activities
@@ -87,7 +97,7 @@ class AgentGoalWorkflow:
            if self.chat_should_end():
                return f"{self.conversation_history}"

            # Execute the tool
            # Execute the tool
            if self.ready_for_tool_execution(waiting_for_confirm, current_tool):
                waiting_for_confirm = await self.execute_tool(current_tool)
                continue
@@ -96,10 +106,12 @@ class AgentGoalWorkflow:
            if self.prompt_queue:
                # get most recent prompt
                prompt = self.prompt_queue.popleft()
                workflow.logger.info(f"workflow step: processing message on the prompt queue, message is {prompt}")

                workflow.logger.info(
                    f"workflow step: processing message on the prompt queue, message is {prompt}"
                )

                # Validate user-provided prompts
                if self.is_user_prompt(prompt):
                if self.is_user_prompt(prompt):
                    self.add_message("user", prompt)

                    # Validate the prompt before proceeding
@@ -120,18 +132,25 @@ class AgentGoalWorkflow:

                    # If validation fails, provide that feedback to the user - i.e., "your words make no sense, puny human" end this iteration of processing
                    if not validation_result.validationResult:
                        workflow.logger.warning(f"Prompt validation failed: {validation_result.validationFailedReason}")
                        self.add_message("agent", validation_result.validationFailedReason)
                        workflow.logger.warning(
                            f"Prompt validation failed: {validation_result.validationFailedReason}"
                        )
                        self.add_message(
                            "agent", validation_result.validationFailedReason
                        )
                        continue

                # If valid, proceed with generating the context and prompt
                context_instructions = generate_genai_prompt(
                    agent_goal=self.goal,
                    conversation_history = self.conversation_history,
                    multi_goal_mode=self.multi_goal_mode,
                    raw_json=self.tool_data)

                prompt_input = ToolPromptInput(prompt=prompt, context_instructions=context_instructions)
                    agent_goal=self.goal,
                    conversation_history=self.conversation_history,
                    multi_goal_mode=self.multi_goal_mode,
                    raw_json=self.tool_data,
                )

                prompt_input = ToolPromptInput(
                    prompt=prompt, context_instructions=context_instructions
                )

                # connect to LLM and execute to get next steps
                tool_data = await workflow.execute_activity_method(
@@ -151,20 +170,24 @@ class AgentGoalWorkflow:
                next_step = tool_data.get("next")
                current_tool = tool_data.get("tool")

                workflow.logger.info(f"next_step: {next_step}, current tool is {current_tool}")
                workflow.logger.info(
                    f"next_step: {next_step}, current tool is {current_tool}"
                )

                # make sure we're ready to run the tool & have everything we need
                if next_step == "confirm" and current_tool:
                    args = tool_data.get("args", {})
                    # if we're missing arguments, ask for them
                    if await helpers.handle_missing_args(current_tool, args, tool_data, self.prompt_queue):
                    # if we're missing arguments, ask for them
                    if await helpers.handle_missing_args(
                        current_tool, args, tool_data, self.prompt_queue
                    ):
                        continue

                    waiting_for_confirm = True

                    # We have needed arguments, if we want to force the user to confirm, set that up
                    # We have needed arguments, if we want to force the user to confirm, set that up
                    if self.show_tool_args_confirmation:
                        self.confirmed = False # set that we're not confirmed
                        self.confirmed = False  # set that we're not confirmed
                        workflow.logger.info("Waiting for user confirm signal...")
                    # if we have all needed arguments (handled above) and not holding for a debugging confirm, proceed:
                    else:
@@ -174,14 +197,11 @@ class AgentGoalWorkflow:
                        workflow.logger.info("All steps completed. Resetting goal.")
                        self.change_goal("goal_choose_agent_type")

                # else if the next step is to be done with the conversation such as if the user requests it via asking to "end conversation"
                elif next_step == "done":

                    self.add_message("agent", tool_data)

                    #here we could send conversation to AI for analysis
                    # here we could send conversation to AI for analysis

                    # end the workflow
                    return str(self.conversation_history)
@@ -192,10 +212,10 @@ class AgentGoalWorkflow:
                    self.prompt_queue,
                    self.goal,
                    MAX_TURNS_BEFORE_CONTINUE,
                    self.add_message
                    self.add_message,
                )

    #Signal that comes from api/main.py via a post to /send-prompt
    # Signal that comes from api/main.py via a post to /send-prompt
    @workflow.signal
    async def user_prompt(self, prompt: str) -> None:
        """Signal handler for receiving user prompts."""
@@ -205,28 +225,28 @@ class AgentGoalWorkflow:
            return
        self.prompt_queue.append(prompt)

    #Signal that comes from api/main.py via a post to /confirm
    # Signal that comes from api/main.py via a post to /confirm
    @workflow.signal
    async def confirm(self) -> None:
        """Signal handler for user confirmation of tool execution."""
        workflow.logger.info("Received user signal: confirmation")
        self.confirmed = True

    #Signal that comes from api/main.py via a post to /end-chat
    # Signal that comes from api/main.py via a post to /end-chat
    @workflow.signal
    async def end_chat(self) -> None:
        """Signal handler for ending the chat session."""
        workflow.logger.info("signal received: end_chat")
        self.chat_ended = True

    #Signal that can be sent from Temporal Workflow UI to enable debugging confirm and override .env setting
    # Signal that can be sent from Temporal Workflow UI to enable debugging confirm and override .env setting
    @workflow.signal
    async def enable_debugging_confirm(self) -> None:
        """Signal handler for enabling debugging confirm UI & associated logic."""
        workflow.logger.info("signal received: enable_debugging_confirm")
        self.enable_debugging_confirm = True

    #Signal that can be sent from Temporal Workflow UI to disable debugging confirm and override .env setting
    # Signal that can be sent from Temporal Workflow UI to disable debugging confirm and override .env setting
    @workflow.signal
    async def disable_debugging_confirm(self) -> None:
        """Signal handler for disabling debugging confirm UI & associated logic."""
@@ -237,7 +257,7 @@ class AgentGoalWorkflow:
    def get_conversation_history(self) -> ConversationHistory:
        """Query handler to retrieve the full conversation history."""
        return self.conversation_history

    @workflow.query
    def get_agent_goal(self) -> AgentGoal:
        """Query handler to retrieve the current goal of the agent."""
@@ -245,7 +265,7 @@ class AgentGoalWorkflow:

    @workflow.query
    def get_summary_from_history(self) -> Optional[str]:
        """Query handler to retrieve the conversation summary if available.
        """Query handler to retrieve the conversation summary if available.
        Used only for continue as new of the workflow."""
        return self.conversation_summary

@@ -272,9 +292,9 @@ class AgentGoalWorkflow:
        )

    def change_goal(self, goal: str) -> None:
        """ Change the goal (usually on request of the user).

        Args:
        """Change the goal (usually on request of the user).

        Args:
            goal: goal to change to)
        """
        if goal is not None:
@@ -283,8 +303,9 @@ class AgentGoalWorkflow:
            self.goal = listed_goal
            workflow.logger.info("Changed goal to " + goal)
        if goal is None:
            workflow.logger.warning("Goal not set after goal reset, probably bad.") # if this happens, there's probably a problem with the goal list
            workflow.logger.warning(
                "Goal not set after goal reset, probably bad."
            )  # if this happens, there's probably a problem with the goal list

    # workflow function that defines if chat should end
    def chat_should_end(self) -> bool:
@@ -293,9 +314,11 @@ class AgentGoalWorkflow:
            return True
        else:
            return False

    # define if we're ready for tool execution
    def ready_for_tool_execution(self, waiting_for_confirm: bool, current_tool: Any) -> bool:
    def ready_for_tool_execution(
        self, waiting_for_confirm: bool, current_tool: Any
    ) -> bool:
        if self.confirmed and waiting_for_confirm and current_tool and self.tool_data:
            return True
        else:
@@ -304,19 +327,19 @@ class AgentGoalWorkflow:
    # LLM-tagged prompts start with "###"
    # all others are from the user
    def is_user_prompt(self, prompt) -> bool:
        if prompt.startswith("###"):
            return False
        else:
            return True

        if prompt.startswith("###"):
            return False
        else:
            return True

    # look up env settings in an activity so they're part of history
    async def lookup_wf_env_settings(self, combined_input: CombinedInput)->None:
    async def lookup_wf_env_settings(self, combined_input: CombinedInput) -> None:
        env_lookup_input = EnvLookupInput(
            show_confirm_env_var_name = "SHOW_CONFIRM",
            show_confirm_default = True,
            show_confirm_env_var_name="SHOW_CONFIRM",
            show_confirm_default=True,
        )
        env_output:EnvLookupOutput = await workflow.execute_activity_method(
            ToolActivities.get_wf_env_vars,
        env_output: EnvLookupOutput = await workflow.execute_activity_method(
            ToolActivities.get_wf_env_vars,
            env_lookup_input,
            start_to_close_timeout=LLM_ACTIVITY_START_TO_CLOSE_TIMEOUT,
            retry_policy=RetryPolicy(
@@ -325,11 +348,13 @@ class AgentGoalWorkflow:
        )
        self.show_tool_args_confirmation = env_output.show_confirm
        self.multi_goal_mode = env_output.multi_goal_mode

    # execute the tool - return False if we're not waiting for confirm anymore (always the case if it works successfully)
    #
    async def execute_tool(self, current_tool: str)->bool:
        workflow.logger.info(f"workflow step: user has confirmed, executing the tool {current_tool}")
    #
    async def execute_tool(self, current_tool: str) -> bool:
        workflow.logger.info(
            f"workflow step: user has confirmed, executing the tool {current_tool}"
        )
        self.confirmed = False
        waiting_for_confirm = False
        confirmed_tool_data = self.tool_data.copy()
@@ -342,21 +367,27 @@ class AgentGoalWorkflow:
            self.tool_data,
            self.tool_results,
            self.add_message,
            self.prompt_queue
            self.prompt_queue,
        )

        # set new goal if we should
        if len(self.tool_results) > 0:
            if "ChangeGoal" in self.tool_results[-1].values() and "new_goal" in self.tool_results[-1].keys():
|
||||
if (
|
||||
"ChangeGoal" in self.tool_results[-1].values()
|
||||
and "new_goal" in self.tool_results[-1].keys()
|
||||
):
|
||||
new_goal = self.tool_results[-1].get("new_goal")
|
||||
self.change_goal(new_goal)
|
||||
elif "ListAgents" in self.tool_results[-1].values() and self.goal.id != "goal_choose_agent_type":
|
||||
elif (
|
||||
"ListAgents" in self.tool_results[-1].values()
|
||||
and self.goal.id != "goal_choose_agent_type"
|
||||
):
|
||||
self.change_goal("goal_choose_agent_type")
|
||||
return waiting_for_confirm
|
||||
|
||||
|
||||
# debugging helper - drop this in various places in the workflow to get status
|
||||
# also don't forget you can look at the workflow itself and do queries if you want
|
||||
def print_useful_workflow_vars(self, status_or_step:str) -> None:
|
||||
def print_useful_workflow_vars(self, status_or_step: str) -> None:
|
||||
print(f"***{status_or_step}:***")
|
||||
if self.goal:
|
||||
print(f"current goal: {self.goal.id}")
|
||||
@@ -367,4 +398,3 @@ class AgentGoalWorkflow:
|
||||
else:
|
||||
print("no tool data initialized yet")
|
||||
print(f"self.confirmed: {self.confirmed}")
|
||||
|
||||
|
||||
@@ -1,8 +1,9 @@
|
||||
from datetime import timedelta
|
||||
from typing import Dict, Any, Deque
|
||||
from typing import Any, Deque, Dict
|
||||
|
||||
from temporalio import workflow
|
||||
from temporalio.exceptions import ActivityError
|
||||
from temporalio.common import RetryPolicy
|
||||
from temporalio.exceptions import ActivityError
|
||||
|
||||
from models.data_types import ConversationHistory, ToolPromptInput
|
||||
from prompts.agent_prompt_generators import (
|
||||
|
||||
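The goal-switching branch reformatted in `execute_tool` above can be read as a small pure-Python predicate over the last tool result. The sketch below extracts that decision into a standalone function for clarity; the function name `next_goal_id` and the sample goal ids are illustrative, not identifiers from the repository:

```python
from typing import Any, Dict, List, Optional


def next_goal_id(
    tool_results: List[Dict[str, Any]], current_goal_id: str
) -> Optional[str]:
    """Return the goal id to switch to after a tool runs, or None to stay put.

    Illustrative helper mirroring the checks in AgentGoalWorkflow.execute_tool.
    """
    if not tool_results:
        return None
    last = tool_results[-1]
    # A ChangeGoal tool result carries the requested goal under "new_goal".
    if "ChangeGoal" in last.values() and "new_goal" in last:
        return last.get("new_goal")
    # A ListAgents result sends the agent back to the goal-selection goal,
    # unless it is already there.
    if "ListAgents" in last.values() and current_goal_id != "goal_choose_agent_type":
        return "goal_choose_agent_type"
    return None
```

In the workflow itself this logic runs inline after the tool activity completes; factoring it out like this also makes it straightforward to unit-test without a Temporal test environment.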