Mirror of https://github.com/temporal-community/temporal-ai-agent.git
Synced 2026-03-17 06:58:09 +01:00

Compare commits: food-order...0.3.0 (4 commits)
| Author | SHA1 | Date |
|---|---|---|
| | 1811e4cf59 | |
| | 157c337d23 | |
| | e52ddd3e5e | |
| | eb06cf5c8d | |
.gitignore (vendored): 3 lines changed
```diff
@@ -33,3 +33,6 @@ coverage.xml
 .env
 .env*
+
+# Cursor
+.cursor
```
```diff
@@ -169,7 +169,7 @@ For detailed architecture information, see [architecture.md](architecture.md).
 - Ensure tests pass before submitting: `poetry run pytest --workflow-environment=time-skipping`
 
 ## Additional Resources
-- **Setup Guide**: [setup.md](setup.md) - Detailed configuration instructions
+- **Setup Guide**: [SETUP.md](SETUP.md) - Detailed configuration instructions
 - **Architecture Decisions**: [architecture-decisions.md](architecture-decisions.md) - Why Temporal for AI agents
 - **Demo Video**: [5-minute YouTube overview](https://www.youtube.com/watch?v=GEXllEH2XiQ)
 - **Multi-Agent Demo**: [Advanced multi-agent execution](https://www.youtube.com/watch?v=8Dc_0dC14yY)
```
```diff
@@ -8,12 +8,12 @@ All notable changes to this project will be documented in this file.
 
 ### Added
 - **Multi‑goal agent architecture** with dynamic goal switching (`goal_choose_agent_type`, `ListAgents`, `ChangeGoal`).
-  - See [the architecture guide](./architecture.md) and [setup guide](./setup.md).
+  - See [the architecture guide](./architecture.md) and [setup guide](./SETUP.md).
 - **New goal categories & agents**: HR PTO scheduling/checking, paycheck integration, Financial (balances, money movement, loan application), E‑commerce order tracking.
   - See [the guide for adding goals and tools](./adding-goals-and-tools.md).
 - **Force Confirmation**: `SHOW_CONFIRM` will show a confirmation box before allowing the agent to run a tool.
 - **Grok (`x.ai`) LLM provider** support via `GROK_API_KEY`.
-- Extensive **docs**: `setup.md`, `architecture.md`, `architecture-decisions.md`, `adding-goals-and-tools.md`, plus new diagrams & assets.
+- Extensive **docs**: `SETUP.md`, `architecture.md`, `architecture-decisions.md`, `adding-goals-and-tools.md`, plus new diagrams & assets.
 
 ### Changed
 - **UI Confirmation Box** is less 'debug' looking and prettier.
```
CONTRIBUTING.md (new file): 106 lines
```diff
@@ -0,0 +1,106 @@
+# Contributing to the Temporal AI Agent Project
+
+This document provides guidelines for contributing to `temporal-ai-agent`. All setup and installation instructions can be found in [./SETUP.md](./SETUP.md).
+
+## Getting Started
+
+### Code Style & Formatting
+We use `black` for code formatting and `isort` for import sorting to maintain a consistent codebase.
+- **Format code:**
+  ```bash
+  poetry run poe format
+  ```
+  Or manually:
+  ```bash
+  poetry run black .
+  poetry run isort .
+  ```
+  Please format your code before committing.
+
+### Linting & Type Checking
+We use `mypy` for static type checking and other linters configured via `poe the poet`.
+- **Run linters and type checks:**
+  ```bash
+  poetry run poe lint
+  ```
+  Or manually for type checking:
+  ```bash
+  poetry run mypy --check-untyped-defs --namespace-packages .
+  ```
+  Ensure all linting and type checks pass before submitting a pull request.
+
+## Testing
+Comprehensive testing is crucial for this project. We use `pytest` and Temporal's testing framework.
+- **Install test dependencies** (if not already done with `poetry install --with dev`):
+  ```bash
+  poetry install --with dev
+  ```
+- **Run all tests:**
+  ```bash
+  poetry run pytest
+  ```
+- **Run tests with time-skipping (recommended for faster execution, especially in CI):**
+  ```bash
+  poetry run pytest --workflow-environment=time-skipping
+  ```
+
+For detailed information on test categories, running specific tests, test environments, coverage, and troubleshooting, please refer to:
+- [TESTING.md](./TESTING.md) (Quick Start and overview)
+- [tests/README.md](./tests/README.md) (Comprehensive guide, patterns, and best practices)
+
+**Ensure all tests pass before submitting a pull request.**
+
+## Making Changes
+
+### Adding New Tools or Goals
+If you're looking to extend the agent's capabilities:
+1. Create your tool implementation in the `tools/` directory.
+2. Register your tool and associate it with relevant goals.
+
+For detailed instructions, please see:
+- [Agent Customization in agents.md](./agents.md#agent-customization)
+- [Adding Goals and Tools Guide](./adding-goals-and-tools.md)
+
+### General Code Changes
+- Follow the existing code style and patterns.
+- Ensure any new code is well-documented with comments.
+- Write new tests for new functionality or bug fixes.
+- Update existing tests if necessary.
+
+## Submitting Contributions
+
+### Pull Requests
+When you're ready to submit your changes:
+1. Push your branch to the remote repository.
+2. Open a Pull Request (PR) against the `main` branch.
+3. **Describe your changes:** Clearly explain what you changed and why. Reference any related issues.
+4. **Ensure tests pass:** All CI checks, including tests and linters, must pass. The command `poetry run pytest --workflow-environment=time-skipping` is a good one to run locally.
+5. **Request review:** Request a review from one or more maintainers.
+
+## Reporting Bugs
+If you encounter a bug, please:
+1. **Search existing issues:** Check if the bug has already been reported.
+2. **Open a new issue:** If not, create a new issue.
+   - Provide a clear and descriptive title.
+   - Include steps to reproduce the bug.
+   - Describe the expected behavior and what actually happened.
+   - Provide details about your environment (OS, Python version, Temporal server version, etc.).
+   - Include any relevant logs or screenshots.
+
+## Suggesting Enhancements
+We welcome suggestions for new features or improvements!
+1. **Search existing issues/discussions:** See if your idea has already been discussed.
+2. **Open a new issue:**
+   - Use a clear and descriptive title.
+   - Provide a detailed explanation of the enhancement and its benefits.
+   - Explain the use case or problem it solves.
+   - Include any potential implementation ideas if you have them.
+
+## Key Resources
+- **Project Overview**: [README.md](./README.md)
+- **Detailed Contribution & Development Guide**: [agents.md](./agents.md)
+- **Setup Instructions**: [SETUP.md](./SETUP.md)
+- **Comprehensive Testing Guide**: [TESTING.md](./TESTING.md) and [tests/README.md](./tests/README.md)
+- **System Architecture**: [architecture.md](./architecture.md)
+- **Architecture Decisions**: [architecture-decisions.md](./architecture-decisions.md)
+- **Customizing Agent Tools and Goals**: [adding-goals-and-tools.md](./adding-goals-and-tools.md)
+- **To-Do List / Future Enhancements**: [todo.md](./todo.md)
```
README.md: 12 lines changed
````diff
@@ -34,7 +34,7 @@ These are the key elements of an agentic framework:
 For a deeper dive into this, check out the [architecture guide](./architecture.md).
 
 ## Setup and Configuration
-See [the Setup guide](./setup.md) for detailed instructions. The basic configuration requires just two environment variables:
+See [the Setup guide](./SETUP.md) for detailed instructions. The basic configuration requires just two environment variables:
 ```bash
 LLM_MODEL=openai/gpt-4o # or any other model supported by LiteLLM
 LLM_KEY=your-api-key-here
````
````diff
@@ -72,12 +72,9 @@ poetry run pytest --workflow-environment=time-skipping
 
 ## Development
 
-Install dependencies:
-```bash
-poetry install
-```
+To contribute to this project, see [CONTRIBUTING.md](CONTRIBUTING.md).
 
-Start the Temporal Server and API server, see [setup](setup.md)
+Start the Temporal Server and API server, see [setup](SETUP.md)
 
 ## Productionalization & Adding Features
 - In a prod setting, I would need to ensure that payload data is stored separately (e.g. in S3 or a noSQL db - the claim-check pattern), or otherwise 'garbage collected'. Without these techniques, long conversations will fill up the workflow's conversation history, and start to breach Temporal event history payload limits.
````
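The claim-check note in the hunk above can be sketched minimally. The in-memory dict and the `check_in`/`check_out` names below are hypothetical stand-ins for S3 or a noSQL table; the point is only that the workflow history carries a small key instead of the full payload:

```python
import uuid

# Hypothetical in-memory store standing in for S3 or a noSQL table.
_payload_store: dict = {}

def check_in(payload: dict) -> str:
    """Claim-check pattern: persist the large payload externally and keep
    only a small reference key in the workflow's event history."""
    key = str(uuid.uuid4())
    _payload_store[key] = payload
    return key

def check_out(key: str) -> dict:
    """Redeem the claim check to recover the full payload."""
    return _payload_store[key]
```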
```diff
@@ -85,8 +82,7 @@ Start the Temporal Server and API server, see [setup](SETUP.md)
 - Perhaps the UI should show when the LLM response is being retried (i.e. activity retry attempt because the LLM provided bad output)
 - The project now includes comprehensive tests for workflows and activities! [See testing guide](TESTING.md).
 
-See [the todo](./todo.md) for more details on things we want to do (or that you could contribute!).
+See [the todo](./todo.md) for more details.
 
 See [the guide to adding goals and tools](./adding-goals-and-tools.md) for more ways you can add features.
```
```diff
@@ -1,16 +1,25 @@
 import inspect
-from temporalio import activity
 import json
-from typing import Optional, Sequence
-from temporalio.common import RawValue
 import os
 from datetime import datetime
+from typing import Sequence
+
 from dotenv import load_dotenv
-from models.data_types import EnvLookupOutput, ValidationInput, ValidationResult, ToolPromptInput, EnvLookupInput
 from litellm import completion
+from temporalio import activity
+from temporalio.common import RawValue
+
+from models.data_types import (
+    EnvLookupInput,
+    EnvLookupOutput,
+    ToolPromptInput,
+    ValidationInput,
+    ValidationResult,
+)
 
 load_dotenv(override=True)
 
 
 class ToolActivities:
     def __init__(self):
         """Initialize LLM client using LiteLLM."""
```
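The import reordering above follows isort's convention: standard library first, then third-party, then local packages, alphabetized within each group. A toy illustration of the within-group sort (the line list is illustrative, not the file's full import block):

```python
# Third-party imports from the hunk above, in arbitrary order.
third_party = [
    "from temporalio import activity",
    "from litellm import completion",
    "from dotenv import load_dotenv",
]

# Alphabetizing within the group yields the order isort emits.
sorted_third_party = sorted(third_party)
```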
```diff
@@ -22,7 +31,9 @@ class ToolActivities:
             print(f"Using custom base URL: {self.llm_base_url}")
 
     @activity.defn
-    async def agent_validatePrompt(self, validation_input: ValidationInput) -> ValidationResult:
+    async def agent_validatePrompt(
+        self, validation_input: ValidationInput
+    ) -> ValidationResult:
         """
         Validates the prompt in the context of the conversation history and agent goal.
         Returns a ValidationResult indicating if the prompt makes sense given the context.
```
```diff
@@ -99,15 +110,15 @@ class ToolActivities:
         completion_kwargs = {
             "model": self.llm_model,
             "messages": messages,
-            "api_key": self.llm_key
+            "api_key": self.llm_key,
         }
 
         # Add base_url if configured
         if self.llm_base_url:
             completion_kwargs["base_url"] = self.llm_base_url
 
         response = completion(**completion_kwargs)
 
         response_content = response.choices[0].message.content
         activity.logger.info(f"LLM response: {response_content}")
 
```
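The conditional-kwargs pattern in this hunk (build the dict with required arguments, add `base_url` only when configured) can be sketched as a standalone helper. `build_completion_kwargs` is a hypothetical name and the function is independent of LiteLLM:

```python
from typing import Optional

def build_completion_kwargs(
    model: str, messages: list, api_key: str, base_url: Optional[str] = None
) -> dict:
    # Required arguments are always present...
    kwargs = {"model": model, "messages": messages, "api_key": api_key}
    # ...optional ones are added only when configured, as in the diff above.
    if base_url:
        kwargs["base_url"] = base_url
    return kwargs
```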
```diff
@@ -136,19 +147,20 @@ class ToolActivities:
         """
         # Remove any markdown code block markers
         response_content = response_content.replace("```json", "").replace("```", "")
 
         # Remove any leading/trailing whitespace
         response_content = response_content.strip()
 
         return response_content
 
     @activity.defn
     async def get_wf_env_vars(self, input: EnvLookupInput) -> EnvLookupOutput:
-        """ gets env vars for workflow as an activity result so it's deterministic
+        """gets env vars for workflow as an activity result so it's deterministic
         handles default/None
         """
-        output: EnvLookupOutput = EnvLookupOutput(show_confirm=input.show_confirm_default,
-                                                  multi_goal_mode=True)
+        output: EnvLookupOutput = EnvLookupOutput(
+            show_confirm=input.show_confirm_default, multi_goal_mode=True
+        )
         show_confirm_value = os.getenv(input.show_confirm_env_var_name)
         if show_confirm_value is None:
             output.show_confirm = input.show_confirm_default
```
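The fence-stripping shown in this hunk works standalone. A sketch of the same two steps, with the fence marker built programmatically so it does not terminate the surrounding code block:

```python
FENCE = "`" * 3  # the three-backtick markdown code-fence marker

def sanitize_json_response(text: str) -> str:
    # Remove any markdown code block markers an LLM may wrap around JSON output
    text = text.replace(FENCE + "json", "").replace(FENCE, "")
    # Remove any leading/trailing whitespace left behind by the removed fences
    return text.strip()
```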
```diff
@@ -156,17 +168,21 @@ class ToolActivities:
             output.show_confirm = False
         else:
             output.show_confirm = True
 
         first_goal_value = os.getenv("AGENT_GOAL")
         if first_goal_value is None:
             output.multi_goal_mode = True  # default if unset
-        elif first_goal_value is not None and first_goal_value.lower() != "goal_choose_agent_type":
+        elif (
+            first_goal_value is not None
+            and first_goal_value.lower() != "goal_choose_agent_type"
+        ):
             output.multi_goal_mode = False
         else:
             output.multi_goal_mode = True
 
         return output
 
 
 @activity.defn(dynamic=True)
 async def dynamic_tool_activity(args: Sequence[RawValue]) -> dict:
     from tools import get_handler
```
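The `multi_goal_mode` branching reformatted above is pure decision logic over one environment value. Extracted into a hypothetical standalone function, it reads:

```python
def resolve_multi_goal_mode(first_goal_value):
    """Multi-goal mode is the default; it is disabled only when a specific
    goal other than goal_choose_agent_type is configured."""
    if first_goal_value is None:
        return True  # default if unset
    if first_goal_value.lower() != "goal_choose_agent_type":
        return False
    return True
```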
```diff
@@ -185,5 +201,3 @@ async def dynamic_tool_activity(args: Sequence[RawValue]) -> dict:
     # Optionally log or augment the result
     activity.logger.info(f"Tool '{tool_name}' result: {result}")
     return result
-
-
```
````diff
@@ -49,7 +49,7 @@ description="Help the user gather args for these tools in order: "
 ```
 
 Tools should generally return meaningful information and be generally ‘failsafe’ in returning a useful result based on input.
-(If you're doing a local data approach like those in [.tools/data/](./tools/data/)) it's good to document how they can be setup to get a good result in tool specific [setup](./setup.md).
+(If you're doing a local data approach like those in [.tools/data/](./tools/data/)) it's good to document how they can be setup to get a good result in tool specific [setup](./SETUP.md).
 
 ### Add to Tool Registry
 1. Open [/tools/tool_registry.py](tools/tool_registry.py) - this file contains mapping of tool names to tool definitions (so the AI understands how to use them)
````
api/main.py: 27 lines changed
```diff
@@ -1,18 +1,18 @@
+import asyncio
 import os
-from fastapi import FastAPI
 from typing import Optional
+
+from dotenv import load_dotenv
+from fastapi import FastAPI, HTTPException
+from fastapi.middleware.cors import CORSMiddleware
+from temporalio.api.enums.v1 import WorkflowExecutionStatus
 from temporalio.client import Client
 from temporalio.exceptions import TemporalError
-from temporalio.api.enums.v1 import WorkflowExecutionStatus
-from fastapi import HTTPException
-from dotenv import load_dotenv
-import asyncio
 
-from workflows.agent_goal_workflow import AgentGoalWorkflow
-from models.data_types import CombinedInput, AgentGoalWorkflowParams
+from models.data_types import AgentGoalWorkflowParams, CombinedInput
+from shared.config import TEMPORAL_TASK_QUEUE, get_temporal_client
 from tools.goal_registry import goal_list
-from fastapi.middleware.cors import CORSMiddleware
-from shared.config import get_temporal_client, TEMPORAL_TASK_QUEUE
+from workflows.agent_goal_workflow import AgentGoalWorkflow
 
 app = FastAPI()
 temporal_client: Optional[Client] = None
```
```diff
@@ -23,7 +23,9 @@ load_dotenv()
 
 def get_initial_agent_goal():
     """Get the agent goal from environment variables."""
-    env_goal = os.getenv("AGENT_GOAL", "goal_choose_agent_type") #if no goal is set in the env file, default to choosing an agent
+    env_goal = os.getenv(
+        "AGENT_GOAL", "goal_choose_agent_type"
+    )  # if no goal is set in the env file, default to choosing an agent
     for listed_goal in goal_list:
         if listed_goal.id == env_goal:
             return listed_goal
```
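The default-goal lookup reformatted above reduces to a dictionary `get` with a fallback. A sketch using a plain dict in place of `os.environ` (`pick_goal` is a hypothetical name):

```python
def pick_goal(env: dict) -> str:
    # If no goal is set in the env file, default to choosing an agent,
    # mirroring get_initial_agent_goal in the hunk above.
    return env.get("AGENT_GOAL", "goal_choose_agent_type")
```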
```diff
@@ -119,7 +121,8 @@ async def get_conversation_history():
         raise HTTPException(
             status_code=500, detail="Internal server error while querying workflow."
         )
 
+
 @app.get("/agent-goal")
 async def get_agent_goal():
     """Calls the workflow's 'get_agent_goal' query."""
```
```diff
@@ -148,7 +151,7 @@ async def send_prompt(prompt: str):
     combined_input = CombinedInput(
         tool_params=AgentGoalWorkflowParams(None, None),
         agent_goal=get_initial_agent_goal(),
-        #change to get from workflow query
+        # change to get from workflow query
     )
 
     workflow_id = "agent-workflow"
```
```diff
@@ -3,7 +3,7 @@ This documents some of the "why" behind the [architecture](./architecture.md).
 
 ## AI Models
 We wanted to have flexibility to use different models, because this space is changing rapidly and models get better regularly.
-Also, for you, we wanted to let you pick your model of choice. The system is designed to make changing models out simple. For how to do that, checkout the [setup guide](./setup.md).
+Also, for you, we wanted to let you pick your model of choice. The system is designed to make changing models out simple. For how to do that, checkout the [setup guide](./SETUP.md).
 
 ## Temporal
 We asked one of the AI models used in this demo to answer this question (edited minorly):
```
```diff
@@ -39,7 +39,7 @@ This is where you can add probabalistic business logic to
 ## LLM
 Probabalistic execution: it will _probably_ do what you tell it to do.
 Turns the guidance from the prompts (see [agent prompts](./prompts/agent_prompt_generators.py) and [goal prompts](./tools/goal_registry.py)) into
-You have a choice of providers - see [setup](./setup.md).
+You have a choice of providers - see [setup](./SETUP.md).
 The LLM:
 - Drives toward the initial Goal and any subsequent Goals selected by user
 - Decides what to do based on input, such as:
```
```diff
@@ -1,5 +1,6 @@
 from dataclasses import dataclass
-from typing import Optional, Deque, Dict, Any, List, Union, Literal
+from typing import Any, Deque, Dict, List, Literal, Optional, Union
 
 from models.tool_definitions import AgentGoal
 
```
```diff
@@ -43,12 +44,14 @@ class ValidationResult:
         if self.validationFailedReason is None:
             self.validationFailedReason = {}
 
+
 @dataclass
 class EnvLookupInput:
     show_confirm_env_var_name: str
     show_confirm_default: bool
 
+
 @dataclass
 class EnvLookupOutput:
     show_confirm: bool
     multi_goal_mode: bool
```
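The `EnvLookupInput`/`EnvLookupOutput` dataclasses shown in the hunk above are plain value objects. Redeclared here for illustration, they can be exercised directly:

```python
from dataclasses import dataclass

@dataclass
class EnvLookupInput:
    show_confirm_env_var_name: str
    show_confirm_default: bool

@dataclass
class EnvLookupOutput:
    show_confirm: bool
    multi_goal_mode: bool

# Construct the output from the input's default, as get_wf_env_vars does
# before consulting the environment.
inp = EnvLookupInput(show_confirm_env_var_name="SHOW_CONFIRM", show_confirm_default=True)
out = EnvLookupOutput(show_confirm=inp.show_confirm_default, multi_goal_mode=True)
```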
```diff
@@ -15,6 +15,7 @@ class ToolDefinition:
     description: str
     arguments: List[ToolArgument]
 
+
 @dataclass
 class AgentGoal:
     id: str
@@ -24,6 +25,4 @@ class AgentGoal:
     tools: List[ToolDefinition]
     description: str = "Description of the tools purpose and overall goal"
     starter_prompt: str = "Initial prompt to start the conversation"
-    example_conversation_history: str = (
-        "Example conversation history to help the AI agent understand the context of the conversation"
-    )
+    example_conversation_history: str = "Example conversation history to help the AI agent understand the context of the conversation"
```
poetry.lock (generated): 184 lines changed
```diff
@@ -1,4 +1,4 @@
-# This file is automatically @generated by Poetry 2.1.3 and should not be changed by hand.
+# This file is automatically @generated by Poetry 1.5.1 and should not be changed by hand.
 
 [[package]]
 name = "aiohappyeyeballs"
@@ -6,7 +6,6 @@ version = "2.6.1"
 description = "Happy Eyeballs for asyncio"
 optional = false
 python-versions = ">=3.9"
-groups = ["main"]
 files = [
     {file = "aiohappyeyeballs-2.6.1-py3-none-any.whl", hash = "sha256:f349ba8f4b75cb25c99c5c2d84e997e485204d2902a9597802b0371f09331fb8"},
     {file = "aiohappyeyeballs-2.6.1.tar.gz", hash = "sha256:c3f9d0113123803ccadfdf3f0faa505bc78e6a72d1cc4806cbd719826e943558"},
@@ -18,7 +17,6 @@ version = "3.11.18"
 description = "Async http client/server framework (asyncio)"
 optional = false
 python-versions = ">=3.9"
-groups = ["main"]
 files = [
     {file = "aiohttp-3.11.18-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:96264854fedbea933a9ca4b7e0c745728f01380691687b7365d18d9e977179c4"},
     {file = "aiohttp-3.11.18-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:9602044ff047043430452bc3a2089743fa85da829e6fc9ee0025351d66c332b6"},
@@ -114,7 +112,7 @@ propcache = ">=0.2.0"
 yarl = ">=1.17.0,<2.0"
 
 [package.extras]
-speedups = ["Brotli ; platform_python_implementation == \"CPython\"", "aiodns (>=3.2.0) ; sys_platform == \"linux\" or sys_platform == \"darwin\"", "brotlicffi ; platform_python_implementation != \"CPython\""]
+speedups = ["Brotli", "aiodns (>=3.2.0)", "brotlicffi"]
 
 [[package]]
 name = "aiosignal"
@@ -122,7 +120,6 @@ version = "1.3.2"
 description = "aiosignal: a list of registered asynchronous callbacks"
 optional = false
 python-versions = ">=3.9"
-groups = ["main"]
 files = [
     {file = "aiosignal-1.3.2-py2.py3-none-any.whl", hash = "sha256:45cde58e409a301715980c2b01d0c28bdde3770d8290b5eb2173759d9acb31a5"},
     {file = "aiosignal-1.3.2.tar.gz", hash = "sha256:a8c255c66fafb1e499c9351d0bf32ff2d8a0321595ebac3b93713656d2436f54"},
@@ -137,7 +134,6 @@ version = "0.7.0"
 description = "Reusable constraint types to use with typing.Annotated"
 optional = false
 python-versions = ">=3.8"
-groups = ["main"]
 files = [
     {file = "annotated_types-0.7.0-py3-none-any.whl", hash = "sha256:1f02e8b43a8fbbc3f3e0d4f0f4bfc8131bcb4eebe8849b8e5c773f3a1c582a53"},
     {file = "annotated_types-0.7.0.tar.gz", hash = "sha256:aff07c09a53a08bc8cfccb9c85b05f1aa9a2a6f23728d790723543408344ce89"},
@@ -149,7 +145,6 @@ version = "4.5.2"
 description = "High level compatibility layer for multiple asynchronous event loop implementations"
 optional = false
 python-versions = ">=3.8"
-groups = ["main"]
 files = [
     {file = "anyio-4.5.2-py3-none-any.whl", hash = "sha256:c011ee36bc1e8ba40e5a81cb9df91925c218fe9b778554e0b56a21e1b5d4716f"},
     {file = "anyio-4.5.2.tar.gz", hash = "sha256:23009af4ed04ce05991845451e11ef02fc7c5ed29179ac9a420e5ad0ac7ddc5b"},
@@ -163,7 +158,7 @@ typing-extensions = {version = ">=4.1", markers = "python_version < \"3.11\""}
 
 [package.extras]
 doc = ["Sphinx (>=7.4,<8.0)", "packaging", "sphinx-autodoc-typehints (>=1.2.0)", "sphinx-rtd-theme"]
-test = ["anyio[trio]", "coverage[toml] (>=7)", "exceptiongroup (>=1.2.0)", "hypothesis (>=4.0)", "psutil (>=5.9)", "pytest (>=7.0)", "pytest-mock (>=3.6.1)", "trustme", "truststore (>=0.9.1) ; python_version >= \"3.10\"", "uvloop (>=0.21.0b1) ; platform_python_implementation == \"CPython\" and platform_system != \"Windows\""]
+test = ["anyio[trio]", "coverage[toml] (>=7)", "exceptiongroup (>=1.2.0)", "hypothesis (>=4.0)", "psutil (>=5.9)", "pytest (>=7.0)", "pytest-mock (>=3.6.1)", "trustme", "truststore (>=0.9.1)", "uvloop (>=0.21.0b1)"]
 trio = ["trio (>=0.26.1)"]
 
 [[package]]
@@ -172,8 +167,6 @@ version = "5.0.1"
 description = "Timeout context manager for asyncio programs"
 optional = false
 python-versions = ">=3.8"
-groups = ["main"]
-markers = "python_version == \"3.10\""
```
|
|
||||||
files = [
|
files = [
|
||||||
{file = "async_timeout-5.0.1-py3-none-any.whl", hash = "sha256:39e3809566ff85354557ec2398b55e096c8364bacac9405a7a1fa429e77fe76c"},
|
{file = "async_timeout-5.0.1-py3-none-any.whl", hash = "sha256:39e3809566ff85354557ec2398b55e096c8364bacac9405a7a1fa429e77fe76c"},
|
||||||
{file = "async_timeout-5.0.1.tar.gz", hash = "sha256:d9321a7a3d5a6a5e187e824d2fa0793ce379a202935782d555d6e9d2735677d3"},
|
{file = "async_timeout-5.0.1.tar.gz", hash = "sha256:d9321a7a3d5a6a5e187e824d2fa0793ce379a202935782d555d6e9d2735677d3"},
|
||||||
@@ -185,19 +178,18 @@ version = "25.3.0"
|
|||||||
description = "Classes Without Boilerplate"
|
description = "Classes Without Boilerplate"
|
||||||
optional = false
|
optional = false
|
||||||
python-versions = ">=3.8"
|
python-versions = ">=3.8"
|
||||||
groups = ["main"]
|
|
||||||
files = [
|
files = [
|
||||||
{file = "attrs-25.3.0-py3-none-any.whl", hash = "sha256:427318ce031701fea540783410126f03899a97ffc6f61596ad581ac2e40e3bc3"},
|
{file = "attrs-25.3.0-py3-none-any.whl", hash = "sha256:427318ce031701fea540783410126f03899a97ffc6f61596ad581ac2e40e3bc3"},
|
||||||
{file = "attrs-25.3.0.tar.gz", hash = "sha256:75d7cefc7fb576747b2c81b4442d4d4a1ce0900973527c011d1030fd3bf4af1b"},
|
{file = "attrs-25.3.0.tar.gz", hash = "sha256:75d7cefc7fb576747b2c81b4442d4d4a1ce0900973527c011d1030fd3bf4af1b"},
|
||||||
]
|
]
|
||||||
|
|
||||||
[package.extras]
|
[package.extras]
|
||||||
benchmark = ["cloudpickle ; platform_python_implementation == \"CPython\"", "hypothesis", "mypy (>=1.11.1) ; platform_python_implementation == \"CPython\" and python_version >= \"3.10\"", "pympler", "pytest (>=4.3.0)", "pytest-codspeed", "pytest-mypy-plugins ; platform_python_implementation == \"CPython\" and python_version >= \"3.10\"", "pytest-xdist[psutil]"]
|
benchmark = ["cloudpickle", "hypothesis", "mypy (>=1.11.1)", "pympler", "pytest (>=4.3.0)", "pytest-codspeed", "pytest-mypy-plugins", "pytest-xdist[psutil]"]
|
||||||
cov = ["cloudpickle ; platform_python_implementation == \"CPython\"", "coverage[toml] (>=5.3)", "hypothesis", "mypy (>=1.11.1) ; platform_python_implementation == \"CPython\" and python_version >= \"3.10\"", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins ; platform_python_implementation == \"CPython\" and python_version >= \"3.10\"", "pytest-xdist[psutil]"]
|
cov = ["cloudpickle", "coverage[toml] (>=5.3)", "hypothesis", "mypy (>=1.11.1)", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "pytest-xdist[psutil]"]
|
||||||
dev = ["cloudpickle ; platform_python_implementation == \"CPython\"", "hypothesis", "mypy (>=1.11.1) ; platform_python_implementation == \"CPython\" and python_version >= \"3.10\"", "pre-commit-uv", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins ; platform_python_implementation == \"CPython\" and python_version >= \"3.10\"", "pytest-xdist[psutil]"]
|
dev = ["cloudpickle", "hypothesis", "mypy (>=1.11.1)", "pre-commit-uv", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "pytest-xdist[psutil]"]
|
||||||
docs = ["cogapp", "furo", "myst-parser", "sphinx", "sphinx-notfound-page", "sphinxcontrib-towncrier", "towncrier"]
|
docs = ["cogapp", "furo", "myst-parser", "sphinx", "sphinx-notfound-page", "sphinxcontrib-towncrier", "towncrier"]
|
||||||
tests = ["cloudpickle ; platform_python_implementation == \"CPython\"", "hypothesis", "mypy (>=1.11.1) ; platform_python_implementation == \"CPython\" and python_version >= \"3.10\"", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins ; platform_python_implementation == \"CPython\" and python_version >= \"3.10\"", "pytest-xdist[psutil]"]
|
tests = ["cloudpickle", "hypothesis", "mypy (>=1.11.1)", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "pytest-xdist[psutil]"]
|
||||||
tests-mypy = ["mypy (>=1.11.1) ; platform_python_implementation == \"CPython\" and python_version >= \"3.10\"", "pytest-mypy-plugins ; platform_python_implementation == \"CPython\" and python_version >= \"3.10\""]
|
tests-mypy = ["mypy (>=1.11.1)", "pytest-mypy-plugins"]
|
||||||
|
|
||||||
[[package]]
|
[[package]]
|
||||||
name = "black"
|
name = "black"
|
||||||
@@ -205,7 +197,6 @@ version = "23.12.1"
|
|||||||
description = "The uncompromising code formatter."
|
description = "The uncompromising code formatter."
|
||||||
optional = false
|
optional = false
|
||||||
python-versions = ">=3.8"
|
python-versions = ">=3.8"
|
||||||
groups = ["dev"]
|
|
||||||
files = [
|
files = [
|
||||||
{file = "black-23.12.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:e0aaf6041986767a5e0ce663c7a2f0e9eaf21e6ff87a5f95cbf3675bfd4c41d2"},
|
{file = "black-23.12.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:e0aaf6041986767a5e0ce663c7a2f0e9eaf21e6ff87a5f95cbf3675bfd4c41d2"},
|
||||||
{file = "black-23.12.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:c88b3711d12905b74206227109272673edce0cb29f27e1385f33b0163c414bba"},
|
{file = "black-23.12.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:c88b3711d12905b74206227109272673edce0cb29f27e1385f33b0163c414bba"},
|
||||||
@@ -242,7 +233,7 @@ typing-extensions = {version = ">=4.0.1", markers = "python_version < \"3.11\""}
|
|||||||
|
|
||||||
[package.extras]
|
[package.extras]
|
||||||
colorama = ["colorama (>=0.4.3)"]
|
colorama = ["colorama (>=0.4.3)"]
|
||||||
d = ["aiohttp (>=3.7.4) ; sys_platform != \"win32\" or implementation_name != \"pypy\"", "aiohttp (>=3.7.4,!=3.9.0) ; sys_platform == \"win32\" and implementation_name == \"pypy\""]
|
d = ["aiohttp (>=3.7.4)", "aiohttp (>=3.7.4,!=3.9.0)"]
|
||||||
jupyter = ["ipython (>=7.8.0)", "tokenize-rt (>=3.2.0)"]
|
jupyter = ["ipython (>=7.8.0)", "tokenize-rt (>=3.2.0)"]
|
||||||
uvloop = ["uvloop (>=0.15.2)"]
|
uvloop = ["uvloop (>=0.15.2)"]
|
||||||
|
|
||||||
@@ -252,7 +243,6 @@ version = "0.8.1"
|
|||||||
description = "Generate complex HTML+JS pages with Python"
|
description = "Generate complex HTML+JS pages with Python"
|
||||||
optional = false
|
optional = false
|
||||||
python-versions = ">=3.7"
|
python-versions = ">=3.7"
|
||||||
groups = ["main"]
|
|
||||||
files = [
|
files = [
|
||||||
{file = "branca-0.8.1-py3-none-any.whl", hash = "sha256:d29c5fab31f7c21a92e34bf3f854234e29fecdcf5d2df306b616f20d816be425"},
|
{file = "branca-0.8.1-py3-none-any.whl", hash = "sha256:d29c5fab31f7c21a92e34bf3f854234e29fecdcf5d2df306b616f20d816be425"},
|
||||||
{file = "branca-0.8.1.tar.gz", hash = "sha256:ac397c2d79bd13af0d04193b26d5ed17031d27609a7f1fab50c438b8ae712390"},
|
{file = "branca-0.8.1.tar.gz", hash = "sha256:ac397c2d79bd13af0d04193b26d5ed17031d27609a7f1fab50c438b8ae712390"},
|
||||||
@@ -267,7 +257,6 @@ version = "2024.12.14"
|
|||||||
description = "Python package for providing Mozilla's CA Bundle."
|
description = "Python package for providing Mozilla's CA Bundle."
|
||||||
optional = false
|
optional = false
|
||||||
python-versions = ">=3.6"
|
python-versions = ">=3.6"
|
||||||
groups = ["main"]
|
|
||||||
files = [
|
files = [
|
||||||
{file = "certifi-2024.12.14-py3-none-any.whl", hash = "sha256:1275f7a45be9464efc1173084eaa30f866fe2e47d389406136d332ed4967ec56"},
|
{file = "certifi-2024.12.14-py3-none-any.whl", hash = "sha256:1275f7a45be9464efc1173084eaa30f866fe2e47d389406136d332ed4967ec56"},
|
||||||
{file = "certifi-2024.12.14.tar.gz", hash = "sha256:b650d30f370c2b724812bee08008be0c4163b163ddaec3f2546c1caf65f191db"},
|
{file = "certifi-2024.12.14.tar.gz", hash = "sha256:b650d30f370c2b724812bee08008be0c4163b163ddaec3f2546c1caf65f191db"},
|
||||||
@@ -279,7 +268,6 @@ version = "3.4.1"
|
|||||||
description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet."
|
description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet."
|
||||||
optional = false
|
optional = false
|
||||||
python-versions = ">=3.7"
|
python-versions = ">=3.7"
|
||||||
groups = ["main"]
|
|
||||||
files = [
|
files = [
|
||||||
{file = "charset_normalizer-3.4.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:91b36a978b5ae0ee86c394f5a54d6ef44db1de0815eb43de826d41d21e4af3de"},
|
{file = "charset_normalizer-3.4.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:91b36a978b5ae0ee86c394f5a54d6ef44db1de0815eb43de826d41d21e4af3de"},
|
||||||
{file = "charset_normalizer-3.4.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7461baadb4dc00fd9e0acbe254e3d7d2112e7f92ced2adc96e54ef6501c5f176"},
|
{file = "charset_normalizer-3.4.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7461baadb4dc00fd9e0acbe254e3d7d2112e7f92ced2adc96e54ef6501c5f176"},
|
||||||
@@ -381,7 +369,6 @@ version = "8.1.8"
|
|||||||
description = "Composable command line interface toolkit"
|
description = "Composable command line interface toolkit"
|
||||||
optional = false
|
optional = false
|
||||||
python-versions = ">=3.7"
|
python-versions = ">=3.7"
|
||||||
groups = ["main", "dev"]
|
|
||||||
files = [
|
files = [
|
||||||
{file = "click-8.1.8-py3-none-any.whl", hash = "sha256:63c132bbbed01578a06712a2d1f497bb62d9c1c0d329b7903a866228027263b2"},
|
{file = "click-8.1.8-py3-none-any.whl", hash = "sha256:63c132bbbed01578a06712a2d1f497bb62d9c1c0d329b7903a866228027263b2"},
|
||||||
{file = "click-8.1.8.tar.gz", hash = "sha256:ed53c9d8990d83c2a27deae68e4ee337473f6330c040a31d4225c9574d16096a"},
|
{file = "click-8.1.8.tar.gz", hash = "sha256:ed53c9d8990d83c2a27deae68e4ee337473f6330c040a31d4225c9574d16096a"},
|
||||||
@@ -396,12 +383,10 @@ version = "0.4.6"
|
|||||||
description = "Cross-platform colored terminal text."
|
description = "Cross-platform colored terminal text."
|
||||||
optional = false
|
optional = false
|
||||||
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,>=2.7"
|
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,>=2.7"
|
||||||
groups = ["main", "dev"]
|
|
||||||
files = [
|
files = [
|
||||||
{file = "colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6"},
|
{file = "colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6"},
|
||||||
{file = "colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44"},
|
{file = "colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44"},
|
||||||
]
|
]
|
||||||
markers = {main = "platform_system == \"Windows\"", dev = "platform_system == \"Windows\" or sys_platform == \"win32\""}
|
|
||||||
|
|
||||||
[[package]]
|
[[package]]
|
||||||
name = "distro"
|
name = "distro"
|
||||||
@@ -409,7 +394,6 @@ version = "1.9.0"
|
|||||||
description = "Distro - an OS platform information API"
|
description = "Distro - an OS platform information API"
|
||||||
optional = false
|
optional = false
|
||||||
python-versions = ">=3.6"
|
python-versions = ">=3.6"
|
||||||
groups = ["main"]
|
|
||||||
files = [
|
files = [
|
||||||
{file = "distro-1.9.0-py3-none-any.whl", hash = "sha256:7bffd925d65168f85027d8da9af6bddab658135b840670a223589bc0c8ef02b2"},
|
{file = "distro-1.9.0-py3-none-any.whl", hash = "sha256:7bffd925d65168f85027d8da9af6bddab658135b840670a223589bc0c8ef02b2"},
|
||||||
{file = "distro-1.9.0.tar.gz", hash = "sha256:2fa77c6fd8940f116ee1d6b94a2f90b13b5ea8d019b98bc8bafdcabcdd9bdbed"},
|
{file = "distro-1.9.0.tar.gz", hash = "sha256:2fa77c6fd8940f116ee1d6b94a2f90b13b5ea8d019b98bc8bafdcabcdd9bdbed"},
|
||||||
@@ -421,8 +405,6 @@ version = "1.2.2"
|
|||||||
description = "Backport of PEP 654 (exception groups)"
|
description = "Backport of PEP 654 (exception groups)"
|
||||||
optional = false
|
optional = false
|
||||||
python-versions = ">=3.7"
|
python-versions = ">=3.7"
|
||||||
groups = ["main", "dev"]
|
|
||||||
markers = "python_version == \"3.10\""
|
|
||||||
files = [
|
files = [
|
||||||
{file = "exceptiongroup-1.2.2-py3-none-any.whl", hash = "sha256:3111b9d131c238bec2f8f516e123e14ba243563fb135d3fe885990585aa7795b"},
|
{file = "exceptiongroup-1.2.2-py3-none-any.whl", hash = "sha256:3111b9d131c238bec2f8f516e123e14ba243563fb135d3fe885990585aa7795b"},
|
||||||
{file = "exceptiongroup-1.2.2.tar.gz", hash = "sha256:47c2edf7c6738fafb49fd34290706d1a1a2f4d1c6df275526b62cbb4aa5393cc"},
|
{file = "exceptiongroup-1.2.2.tar.gz", hash = "sha256:47c2edf7c6738fafb49fd34290706d1a1a2f4d1c6df275526b62cbb4aa5393cc"},
|
||||||
@@ -437,7 +419,6 @@ version = "0.115.6"
|
|||||||
description = "FastAPI framework, high performance, easy to learn, fast to code, ready for production"
|
description = "FastAPI framework, high performance, easy to learn, fast to code, ready for production"
|
||||||
optional = false
|
optional = false
|
||||||
python-versions = ">=3.8"
|
python-versions = ">=3.8"
|
||||||
groups = ["main"]
|
|
||||||
files = [
|
files = [
|
||||||
{file = "fastapi-0.115.6-py3-none-any.whl", hash = "sha256:e9240b29e36fa8f4bb7290316988e90c381e5092e0cbe84e7818cc3713bcf305"},
|
{file = "fastapi-0.115.6-py3-none-any.whl", hash = "sha256:e9240b29e36fa8f4bb7290316988e90c381e5092e0cbe84e7818cc3713bcf305"},
|
||||||
{file = "fastapi-0.115.6.tar.gz", hash = "sha256:9ec46f7addc14ea472958a96aae5b5de65f39721a46aaf5705c480d9a8b76654"},
|
{file = "fastapi-0.115.6.tar.gz", hash = "sha256:9ec46f7addc14ea472958a96aae5b5de65f39721a46aaf5705c480d9a8b76654"},
|
||||||
@@ -458,7 +439,6 @@ version = "3.18.0"
|
|||||||
description = "A platform independent file lock."
|
description = "A platform independent file lock."
|
||||||
optional = false
|
optional = false
|
||||||
python-versions = ">=3.9"
|
python-versions = ">=3.9"
|
||||||
groups = ["main"]
|
|
||||||
files = [
|
files = [
|
||||||
{file = "filelock-3.18.0-py3-none-any.whl", hash = "sha256:c401f4f8377c4464e6db25fff06205fd89bdd83b65eb0488ed1b160f780e21de"},
|
{file = "filelock-3.18.0-py3-none-any.whl", hash = "sha256:c401f4f8377c4464e6db25fff06205fd89bdd83b65eb0488ed1b160f780e21de"},
|
||||||
{file = "filelock-3.18.0.tar.gz", hash = "sha256:adbc88eabb99d2fec8c9c1b229b171f18afa655400173ddc653d5d01501fb9f2"},
|
{file = "filelock-3.18.0.tar.gz", hash = "sha256:adbc88eabb99d2fec8c9c1b229b171f18afa655400173ddc653d5d01501fb9f2"},
|
||||||
@@ -467,7 +447,7 @@ files = [
|
|||||||
[package.extras]
|
[package.extras]
|
||||||
docs = ["furo (>=2024.8.6)", "sphinx (>=8.1.3)", "sphinx-autodoc-typehints (>=3)"]
|
docs = ["furo (>=2024.8.6)", "sphinx (>=8.1.3)", "sphinx-autodoc-typehints (>=3)"]
|
||||||
testing = ["covdefaults (>=2.3)", "coverage (>=7.6.10)", "diff-cover (>=9.2.1)", "pytest (>=8.3.4)", "pytest-asyncio (>=0.25.2)", "pytest-cov (>=6)", "pytest-mock (>=3.14)", "pytest-timeout (>=2.3.1)", "virtualenv (>=20.28.1)"]
|
testing = ["covdefaults (>=2.3)", "coverage (>=7.6.10)", "diff-cover (>=9.2.1)", "pytest (>=8.3.4)", "pytest-asyncio (>=0.25.2)", "pytest-cov (>=6)", "pytest-mock (>=3.14)", "pytest-timeout (>=2.3.1)", "virtualenv (>=20.28.1)"]
|
||||||
typing = ["typing-extensions (>=4.12.2) ; python_version < \"3.11\""]
|
typing = ["typing-extensions (>=4.12.2)"]
|
||||||
|
|
||||||
[[package]]
|
[[package]]
|
||||||
name = "folium"
|
name = "folium"
|
||||||
@@ -475,7 +455,6 @@ version = "0.19.4"
|
|||||||
description = "Make beautiful maps with Leaflet.js & Python"
|
description = "Make beautiful maps with Leaflet.js & Python"
|
||||||
optional = false
|
optional = false
|
||||||
python-versions = ">=3.9"
|
python-versions = ">=3.9"
|
||||||
groups = ["main"]
|
|
||||||
files = [
|
files = [
|
||||||
{file = "folium-0.19.4-py2.py3-none-any.whl", hash = "sha256:bea5246b6a6aa61b96d1c51399dd63254bacbd6ba8a826eeb491f45242032dfd"},
|
{file = "folium-0.19.4-py2.py3-none-any.whl", hash = "sha256:bea5246b6a6aa61b96d1c51399dd63254bacbd6ba8a826eeb491f45242032dfd"},
|
||||||
{file = "folium-0.19.4.tar.gz", hash = "sha256:431a655b52a9bf3efda336f2be022103f0106504a0599e7c349efbfd30bafda6"},
|
{file = "folium-0.19.4.tar.gz", hash = "sha256:431a655b52a9bf3efda336f2be022103f0106504a0599e7c349efbfd30bafda6"},
|
||||||
@@ -497,7 +476,6 @@ version = "1.6.0"
|
|||||||
description = "A list-like structure which implements collections.abc.MutableSequence"
|
description = "A list-like structure which implements collections.abc.MutableSequence"
|
||||||
optional = false
|
optional = false
|
||||||
python-versions = ">=3.9"
|
python-versions = ">=3.9"
|
||||||
groups = ["main"]
|
|
||||||
files = [
|
files = [
|
||||||
{file = "frozenlist-1.6.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:e6e558ea1e47fd6fa8ac9ccdad403e5dd5ecc6ed8dda94343056fa4277d5c65e"},
|
{file = "frozenlist-1.6.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:e6e558ea1e47fd6fa8ac9ccdad403e5dd5ecc6ed8dda94343056fa4277d5c65e"},
|
||||||
{file = "frozenlist-1.6.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:f4b3cd7334a4bbc0c472164f3744562cb72d05002cc6fcf58adb104630bbc352"},
|
{file = "frozenlist-1.6.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:f4b3cd7334a4bbc0c472164f3744562cb72d05002cc6fcf58adb104630bbc352"},
|
||||||
@@ -611,7 +589,6 @@ version = "2025.5.0"
|
|||||||
description = "File-system specification"
|
description = "File-system specification"
|
||||||
optional = false
|
optional = false
|
||||||
python-versions = ">=3.9"
|
python-versions = ">=3.9"
|
||||||
groups = ["main"]
|
|
||||||
files = [
|
files = [
|
||||||
{file = "fsspec-2025.5.0-py3-none-any.whl", hash = "sha256:0ca253eca6b5333d8a2b8bd98c7326fe821f1f0fdbd34e1b445bddde8e804c95"},
|
{file = "fsspec-2025.5.0-py3-none-any.whl", hash = "sha256:0ca253eca6b5333d8a2b8bd98c7326fe821f1f0fdbd34e1b445bddde8e804c95"},
|
||||||
{file = "fsspec-2025.5.0.tar.gz", hash = "sha256:e4f4623bb6221f7407fd695cc535d1f857a077eb247580f4ada34f5dc25fd5c8"},
|
{file = "fsspec-2025.5.0.tar.gz", hash = "sha256:e4f4623bb6221f7407fd695cc535d1f857a077eb247580f4ada34f5dc25fd5c8"},
|
||||||
@@ -651,7 +628,6 @@ version = "1.0.1"
|
|||||||
description = "Geographic pandas extensions"
|
description = "Geographic pandas extensions"
|
||||||
optional = false
|
optional = false
|
||||||
python-versions = ">=3.9"
|
python-versions = ">=3.9"
|
||||||
groups = ["main"]
|
|
||||||
files = [
|
files = [
|
||||||
{file = "geopandas-1.0.1-py3-none-any.whl", hash = "sha256:01e147d9420cc374d26f51fc23716ac307f32b49406e4bd8462c07e82ed1d3d6"},
|
{file = "geopandas-1.0.1-py3-none-any.whl", hash = "sha256:01e147d9420cc374d26f51fc23716ac307f32b49406e4bd8462c07e82ed1d3d6"},
|
||||||
{file = "geopandas-1.0.1.tar.gz", hash = "sha256:b8bf70a5534588205b7a56646e2082fb1de9a03599651b3d80c99ea4c2ca08ab"},
|
{file = "geopandas-1.0.1.tar.gz", hash = "sha256:b8bf70a5534588205b7a56646e2082fb1de9a03599651b3d80c99ea4c2ca08ab"},
|
||||||
@@ -675,7 +651,6 @@ version = "10.1.1"
|
|||||||
description = "A Python library for analyzing GTFS feeds."
|
description = "A Python library for analyzing GTFS feeds."
|
||||||
optional = false
|
optional = false
|
||||||
python-versions = ">=3.10"
|
python-versions = ">=3.10"
|
||||||
groups = ["main"]
|
|
||||||
files = [
|
files = [
|
||||||
{file = "gtfs_kit-10.1.1-py3-none-any.whl", hash = "sha256:2a54982d30993c365ee082eb3f5dc981ecd89c294728199a1f39776dee6c71b2"},
|
{file = "gtfs_kit-10.1.1-py3-none-any.whl", hash = "sha256:2a54982d30993c365ee082eb3f5dc981ecd89c294728199a1f39776dee6c71b2"},
|
||||||
{file = "gtfs_kit-10.1.1.tar.gz", hash = "sha256:b94135883fbb4a5135b33d66215e12507a0480218f53df8c6a3a88ee359e7ab4"},
|
{file = "gtfs_kit-10.1.1.tar.gz", hash = "sha256:b94135883fbb4a5135b33d66215e12507a0480218f53df8c6a3a88ee359e7ab4"},
|
||||||
@@ -696,7 +671,6 @@ version = "0.14.0"
|
|||||||
description = "A pure-Python, bring-your-own-I/O implementation of HTTP/1.1"
|
description = "A pure-Python, bring-your-own-I/O implementation of HTTP/1.1"
|
||||||
optional = false
|
optional = false
|
||||||
python-versions = ">=3.7"
|
python-versions = ">=3.7"
|
||||||
groups = ["main"]
|
|
||||||
files = [
|
files = [
|
||||||
{file = "h11-0.14.0-py3-none-any.whl", hash = "sha256:e3fe4ac4b851c468cc8363d500db52c2ead036020723024a109d37346efaa761"},
|
{file = "h11-0.14.0-py3-none-any.whl", hash = "sha256:e3fe4ac4b851c468cc8363d500db52c2ead036020723024a109d37346efaa761"},
|
||||||
{file = "h11-0.14.0.tar.gz", hash = "sha256:8f19fbbe99e72420ff35c00b27a34cb9937e902a8b810e2c88300c6f0a3b699d"},
|
{file = "h11-0.14.0.tar.gz", hash = "sha256:8f19fbbe99e72420ff35c00b27a34cb9937e902a8b810e2c88300c6f0a3b699d"},
|
||||||
@@ -708,7 +682,6 @@ version = "1.0.7"
|
|||||||
description = "A minimal low-level HTTP client."
|
description = "A minimal low-level HTTP client."
|
||||||
optional = false
|
optional = false
|
||||||
python-versions = ">=3.8"
|
python-versions = ">=3.8"
|
||||||
groups = ["main"]
|
|
||||||
files = [
|
files = [
|
||||||
{file = "httpcore-1.0.7-py3-none-any.whl", hash = "sha256:a3fff8f43dc260d5bd363d9f9cf1830fa3a458b332856f34282de498ed420edd"},
|
{file = "httpcore-1.0.7-py3-none-any.whl", hash = "sha256:a3fff8f43dc260d5bd363d9f9cf1830fa3a458b332856f34282de498ed420edd"},
|
||||||
{file = "httpcore-1.0.7.tar.gz", hash = "sha256:8551cb62a169ec7162ac7be8d4817d561f60e08eaa485234898414bb5a8a0b4c"},
|
{file = "httpcore-1.0.7.tar.gz", hash = "sha256:8551cb62a169ec7162ac7be8d4817d561f60e08eaa485234898414bb5a8a0b4c"},
|
||||||
@@ -730,7 +703,6 @@ version = "0.27.2"
|
|||||||
description = "The next generation HTTP client."
|
description = "The next generation HTTP client."
|
||||||
optional = false
|
optional = false
|
||||||
python-versions = ">=3.8"
|
python-versions = ">=3.8"
|
||||||
groups = ["main"]
|
|
||||||
files = [
|
files = [
|
||||||
{file = "httpx-0.27.2-py3-none-any.whl", hash = "sha256:7bb2708e112d8fdd7829cd4243970f0c223274051cb35ee80c03301ee29a3df0"},
|
{file = "httpx-0.27.2-py3-none-any.whl", hash = "sha256:7bb2708e112d8fdd7829cd4243970f0c223274051cb35ee80c03301ee29a3df0"},
|
||||||
{file = "httpx-0.27.2.tar.gz", hash = "sha256:f7c2be1d2f3c3c3160d441802406b206c2b76f5947b11115e6df10c6c65e66c2"},
|
{file = "httpx-0.27.2.tar.gz", hash = "sha256:f7c2be1d2f3c3c3160d441802406b206c2b76f5947b11115e6df10c6c65e66c2"},
|
||||||
@@ -744,7 +716,7 @@ idna = "*"
|
|||||||
sniffio = "*"
|
sniffio = "*"
|
||||||
|
|
||||||
[package.extras]
|
[package.extras]
|
||||||
brotli = ["brotli ; platform_python_implementation == \"CPython\"", "brotlicffi ; platform_python_implementation != \"CPython\""]
|
brotli = ["brotli", "brotlicffi"]
|
||||||
cli = ["click (==8.*)", "pygments (==2.*)", "rich (>=10,<14)"]
|
cli = ["click (==8.*)", "pygments (==2.*)", "rich (>=10,<14)"]
|
||||||
http2 = ["h2 (>=3,<5)"]
|
http2 = ["h2 (>=3,<5)"]
|
||||||
socks = ["socksio (==1.*)"]
|
socks = ["socksio (==1.*)"]
|
||||||
@@ -756,7 +728,6 @@ version = "0.31.4"
|
|||||||
description = "Client library to download and publish models, datasets and other repos on the huggingface.co hub"
|
description = "Client library to download and publish models, datasets and other repos on the huggingface.co hub"
|
||||||
optional = false
|
optional = false
|
||||||
python-versions = ">=3.8.0"
|
python-versions = ">=3.8.0"
|
||||||
groups = ["main"]
|
|
||||||
files = [
|
files = [
|
||||||
{file = "huggingface_hub-0.31.4-py3-none-any.whl", hash = "sha256:4f70704760296cc69b612916056e9845f5490a33782b924fc531767967acc15d"},
|
{file = "huggingface_hub-0.31.4-py3-none-any.whl", hash = "sha256:4f70704760296cc69b612916056e9845f5490a33782b924fc531767967acc15d"},
|
||||||
{file = "huggingface_hub-0.31.4.tar.gz", hash = "sha256:5a7bc710b9f9c028aee5b1476867b4ec5c1b92f043cb364d5fdc54354757e4ce"},
|
{file = "huggingface_hub-0.31.4.tar.gz", hash = "sha256:5a7bc710b9f9c028aee5b1476867b4ec5c1b92f043cb364d5fdc54354757e4ce"},
|
||||||
@@ -792,7 +763,6 @@ version = "3.10"
|
|||||||
description = "Internationalized Domain Names in Applications (IDNA)"
|
description = "Internationalized Domain Names in Applications (IDNA)"
|
||||||
optional = false
|
optional = false
|
||||||
python-versions = ">=3.6"
|
python-versions = ">=3.6"
|
||||||
groups = ["main"]
|
|
||||||
files = [
|
files = [
|
||||||
{file = "idna-3.10-py3-none-any.whl", hash = "sha256:946d195a0d259cbba61165e88e65941f16e9b36ea6ddb97f00452bae8b1287d3"},
|
{file = "idna-3.10-py3-none-any.whl", hash = "sha256:946d195a0d259cbba61165e88e65941f16e9b36ea6ddb97f00452bae8b1287d3"},
|
||||||
{file = "idna-3.10.tar.gz", hash = "sha256:12f65c9b470abda6dc35cf8e63cc574b1c52b11df2c86030af0ac09b01b13ea9"},
|
{file = "idna-3.10.tar.gz", hash = "sha256:12f65c9b470abda6dc35cf8e63cc574b1c52b11df2c86030af0ac09b01b13ea9"},
|
||||||
@@ -807,7 +777,6 @@ version = "8.7.0"
|
|||||||
description = "Read metadata from Python packages"
|
description = "Read metadata from Python packages"
|
||||||
optional = false
|
optional = false
|
||||||
python-versions = ">=3.9"
|
python-versions = ">=3.9"
|
||||||
groups = ["main"]
|
|
||||||
files = [
|
files = [
|
||||||
{file = "importlib_metadata-8.7.0-py3-none-any.whl", hash = "sha256:e5dd1551894c77868a30651cef00984d50e1002d06942a7101d34870c5f02afd"},
|
{file = "importlib_metadata-8.7.0-py3-none-any.whl", hash = "sha256:e5dd1551894c77868a30651cef00984d50e1002d06942a7101d34870c5f02afd"},
|
||||||
{file = "importlib_metadata-8.7.0.tar.gz", hash = "sha256:d13b81ad223b890aa16c5471f2ac3056cf76c5f10f82d6f9292f0b415f389000"},
|
{file = "importlib_metadata-8.7.0.tar.gz", hash = "sha256:d13b81ad223b890aa16c5471f2ac3056cf76c5f10f82d6f9292f0b415f389000"},
|
||||||
@@ -817,12 +786,12 @@ files = [
|
|||||||
zipp = ">=3.20"
|
zipp = ">=3.20"
|
||||||
|
|
||||||
[package.extras]
|
[package.extras]
|
||||||
check = ["pytest-checkdocs (>=2.4)", "pytest-ruff (>=0.2.1) ; sys_platform != \"cygwin\""]
|
check = ["pytest-checkdocs (>=2.4)", "pytest-ruff (>=0.2.1)"]
|
||||||
cover = ["pytest-cov"]
|
cover = ["pytest-cov"]
|
||||||
doc = ["furo", "jaraco.packaging (>=9.3)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-lint"]
|
doc = ["furo", "jaraco.packaging (>=9.3)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-lint"]
|
||||||
enabler = ["pytest-enabler (>=2.2)"]
|
enabler = ["pytest-enabler (>=2.2)"]
|
||||||
perf = ["ipython"]
|
perf = ["ipython"]
|
||||||
test = ["flufl.flake8", "importlib_resources (>=1.3) ; python_version < \"3.9\"", "jaraco.test (>=5.4)", "packaging", "pyfakefs", "pytest (>=6,!=8.1.*)", "pytest-perf (>=0.9.2)"]
|
test = ["flufl.flake8", "importlib_resources (>=1.3)", "jaraco.test (>=5.4)", "packaging", "pyfakefs", "pytest (>=6,!=8.1.*)", "pytest-perf (>=0.9.2)"]
|
||||||
type = ["pytest-mypy"]
|
type = ["pytest-mypy"]
|
||||||
|
|
||||||
[[package]]
|
[[package]]
|
||||||
@@ -831,7 +800,6 @@ version = "2.0.0"
|
|||||||
description = "brain-dead simple config-ini parsing"
|
description = "brain-dead simple config-ini parsing"
|
||||||
optional = false
|
optional = false
|
||||||
python-versions = ">=3.7"
|
python-versions = ">=3.7"
|
||||||
groups = ["dev"]
|
|
||||||
files = [
|
files = [
|
||||||
{file = "iniconfig-2.0.0-py3-none-any.whl", hash = "sha256:b6a85871a79d2e3b22d2d1b94ac2824226a63c6b741c88f7ae975f18b6778374"},
|
{file = "iniconfig-2.0.0-py3-none-any.whl", hash = "sha256:b6a85871a79d2e3b22d2d1b94ac2824226a63c6b741c88f7ae975f18b6778374"},
|
||||||
{file = "iniconfig-2.0.0.tar.gz", hash = "sha256:2d91e135bf72d31a410b17c16da610a82cb55f6b0477d1a902134b24a455b8b3"},
|
{file = "iniconfig-2.0.0.tar.gz", hash = "sha256:2d91e135bf72d31a410b17c16da610a82cb55f6b0477d1a902134b24a455b8b3"},
|
||||||
@@ -843,7 +811,6 @@ version = "5.13.2"
|
|||||||
description = "A Python utility / library to sort Python imports."
|
description = "A Python utility / library to sort Python imports."
|
||||||
optional = false
|
optional = false
|
||||||
python-versions = ">=3.8.0"
|
python-versions = ">=3.8.0"
|
||||||
groups = ["dev"]
|
|
||||||
files = [
|
files = [
|
||||||
{file = "isort-5.13.2-py3-none-any.whl", hash = "sha256:8ca5e72a8d85860d5a3fa69b8745237f2939afe12dbf656afbcb47fe72d947a6"},
|
    {file = "isort-5.13.2-py3-none-any.whl", hash = "sha256:8ca5e72a8d85860d5a3fa69b8745237f2939afe12dbf656afbcb47fe72d947a6"},
    {file = "isort-5.13.2.tar.gz", hash = "sha256:48fdfcb9face5d58a4f6dde2e72a1fb8dcaf8ab26f95ab49fab84c2ddefb0109"},
@@ -858,7 +825,6 @@ version = "3.1.5"
description = "A very fast and expressive template engine."
optional = false
python-versions = ">=3.7"
groups = ["main"]
files = [
    {file = "jinja2-3.1.5-py3-none-any.whl", hash = "sha256:aba0f4dc9ed8013c424088f68a5c226f7d6097ed89b246d7749c2ec4175c6adb"},
    {file = "jinja2-3.1.5.tar.gz", hash = "sha256:8fefff8dc3034e27bb80d67c671eb8a9bc424c0ef4c0826edbff304cceff43bb"},
@@ -876,7 +842,6 @@ version = "0.8.2"
description = "Fast iterable JSON parser."
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
    {file = "jiter-0.8.2-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:ca8577f6a413abe29b079bc30f907894d7eb07a865c4df69475e868d73e71c7b"},
    {file = "jiter-0.8.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:b25bd626bde7fb51534190c7e3cb97cee89ee76b76d7585580e22f34f5e3f393"},
@@ -962,7 +927,6 @@ version = "1.3.0"
description = "JSON to HTML Table Representation"
optional = false
python-versions = "*"
groups = ["main"]
files = [
    {file = "json2html-1.3.0.tar.gz", hash = "sha256:8951a53662ae9cfd812685facdba693fc950ffc1c1fd1a8a2d3cf4c34600689c"},
]
@@ -973,7 +937,6 @@ version = "4.23.0"
description = "An implementation of JSON Schema validation for Python"
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
    {file = "jsonschema-4.23.0-py3-none-any.whl", hash = "sha256:fbadb6f8b144a8f8cf9f0b89ba94501d143e50411a1278633f56a7acf7fd5566"},
    {file = "jsonschema-4.23.0.tar.gz", hash = "sha256:d71497fef26351a33265337fa77ffeb82423f3ea21283cd9467bb03999266bc4"},
@@ -995,7 +958,6 @@ version = "2025.4.1"
description = "The JSON Schema meta-schemas and vocabularies, exposed as a Registry"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
    {file = "jsonschema_specifications-2025.4.1-py3-none-any.whl", hash = "sha256:4653bffbd6584f7de83a67e0d620ef16900b390ddc7939d56684d6c81e33f1af"},
    {file = "jsonschema_specifications-2025.4.1.tar.gz", hash = "sha256:630159c9f4dbea161a6a2205c3011cc4f18ff381b189fff48bb39b9bf26ae608"},
@@ -1010,7 +972,6 @@ version = "1.70.0"
description = "Library to easily interface with LLM API providers"
optional = false
python-versions = "!=2.7.*,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,!=3.7.*,>=3.8"
groups = ["main"]
files = [
    {file = "litellm-1.70.0-py3-none-any.whl", hash = "sha256:7e094057b38ddb1d77f61452895835aa5d376db1850e9a1bc0342c5631d89638"},
    {file = "litellm-1.70.0.tar.gz", hash = "sha256:357f3891e38f23a12f0932c235ed860dc41bc5880afaee7229e6d25318652706"},
@@ -1030,8 +991,8 @@ tiktoken = ">=0.7.0"
tokenizers = "*"

[package.extras]
extra-proxy = ["azure-identity (>=1.15.0,<2.0.0)", "azure-keyvault-secrets (>=4.8.0,<5.0.0)", "google-cloud-kms (>=2.21.3,<3.0.0)", "prisma (==0.11.0)", "redisvl (>=0.4.1,<0.5.0) ; python_version >= \"3.9\" and python_version < \"3.14\"", "resend (>=0.8.0,<0.9.0)"]
extra-proxy = ["azure-identity (>=1.15.0,<2.0.0)", "azure-keyvault-secrets (>=4.8.0,<5.0.0)", "google-cloud-kms (>=2.21.3,<3.0.0)", "prisma (==0.11.0)", "redisvl (>=0.4.1,<0.5.0)", "resend (>=0.8.0,<0.9.0)"]
proxy = ["PyJWT (>=2.8.0,<3.0.0)", "apscheduler (>=3.10.4,<4.0.0)", "backoff", "boto3 (==1.34.34)", "cryptography (>=43.0.1,<44.0.0)", "fastapi (>=0.115.5,<0.116.0)", "fastapi-sso (>=0.16.0,<0.17.0)", "gunicorn (>=23.0.0,<24.0.0)", "litellm-enterprise (==0.1.3)", "litellm-proxy-extras (==0.1.21)", "mcp (==1.5.0) ; python_version >= \"3.10\"", "orjson (>=3.9.7,<4.0.0)", "pynacl (>=1.5.0,<2.0.0)", "python-multipart (>=0.0.18,<0.0.19)", "pyyaml (>=6.0.1,<7.0.0)", "rich (==13.7.1)", "rq", "uvicorn (>=0.29.0,<0.30.0)", "uvloop (>=0.21.0,<0.22.0) ; sys_platform != \"win32\"", "websockets (>=13.1.0,<14.0.0)"]
proxy = ["PyJWT (>=2.8.0,<3.0.0)", "apscheduler (>=3.10.4,<4.0.0)", "backoff", "boto3 (==1.34.34)", "cryptography (>=43.0.1,<44.0.0)", "fastapi (>=0.115.5,<0.116.0)", "fastapi-sso (>=0.16.0,<0.17.0)", "gunicorn (>=23.0.0,<24.0.0)", "litellm-enterprise (==0.1.3)", "litellm-proxy-extras (==0.1.21)", "mcp (==1.5.0)", "orjson (>=3.9.7,<4.0.0)", "pynacl (>=1.5.0,<2.0.0)", "python-multipart (>=0.0.18,<0.0.19)", "pyyaml (>=6.0.1,<7.0.0)", "rich (==13.7.1)", "rq", "uvicorn (>=0.29.0,<0.30.0)", "uvloop (>=0.21.0,<0.22.0)", "websockets (>=13.1.0,<14.0.0)"]
utils = ["numpydoc"]

[[package]]
@@ -1040,7 +1001,6 @@ version = "3.0.2"
description = "Safely add untrusted strings to HTML/XML markup."
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
    {file = "MarkupSafe-3.0.2-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:7e94c425039cde14257288fd61dcfb01963e658efbc0ff54f5306b06054700f8"},
    {file = "MarkupSafe-3.0.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:9e2d922824181480953426608b81967de705c3cef4d1af983af849d7bd619158"},
@@ -1111,7 +1071,6 @@ version = "6.4.4"
description = "multidict implementation"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
    {file = "multidict-6.4.4-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:8adee3ac041145ffe4488ea73fa0a622b464cc25340d98be76924d0cda8545ff"},
    {file = "multidict-6.4.4-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:b61e98c3e2a861035aaccd207da585bdcacef65fe01d7a0d07478efac005e028"},
@@ -1222,13 +1181,66 @@ files = [
[package.dependencies]
typing-extensions = {version = ">=4.1.0", markers = "python_version < \"3.11\""}

[[package]]
name = "mypy"
version = "1.16.0"
description = "Optional static typing for Python"
optional = false
python-versions = ">=3.9"
files = [
    {file = "mypy-1.16.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:7909541fef256527e5ee9c0a7e2aeed78b6cda72ba44298d1334fe7881b05c5c"},
    {file = "mypy-1.16.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:e71d6f0090c2256c713ed3d52711d01859c82608b5d68d4fa01a3fe30df95571"},
    {file = "mypy-1.16.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:936ccfdd749af4766be824268bfe22d1db9eb2f34a3ea1d00ffbe5b5265f5491"},
    {file = "mypy-1.16.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:4086883a73166631307fdd330c4a9080ce24913d4f4c5ec596c601b3a4bdd777"},
    {file = "mypy-1.16.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:feec38097f71797da0231997e0de3a58108c51845399669ebc532c815f93866b"},
    {file = "mypy-1.16.0-cp310-cp310-win_amd64.whl", hash = "sha256:09a8da6a0ee9a9770b8ff61b39c0bb07971cda90e7297f4213741b48a0cc8d93"},
    {file = "mypy-1.16.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:9f826aaa7ff8443bac6a494cf743f591488ea940dd360e7dd330e30dd772a5ab"},
    {file = "mypy-1.16.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:82d056e6faa508501af333a6af192c700b33e15865bda49611e3d7d8358ebea2"},
    {file = "mypy-1.16.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:089bedc02307c2548eb51f426e085546db1fa7dd87fbb7c9fa561575cf6eb1ff"},
    {file = "mypy-1.16.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:6a2322896003ba66bbd1318c10d3afdfe24e78ef12ea10e2acd985e9d684a666"},
    {file = "mypy-1.16.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:021a68568082c5b36e977d54e8f1de978baf401a33884ffcea09bd8e88a98f4c"},
    {file = "mypy-1.16.0-cp311-cp311-win_amd64.whl", hash = "sha256:54066fed302d83bf5128632d05b4ec68412e1f03ef2c300434057d66866cea4b"},
    {file = "mypy-1.16.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:c5436d11e89a3ad16ce8afe752f0f373ae9620841c50883dc96f8b8805620b13"},
    {file = "mypy-1.16.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:f2622af30bf01d8fc36466231bdd203d120d7a599a6d88fb22bdcb9dbff84090"},
    {file = "mypy-1.16.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:d045d33c284e10a038f5e29faca055b90eee87da3fc63b8889085744ebabb5a1"},
    {file = "mypy-1.16.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:b4968f14f44c62e2ec4a038c8797a87315be8df7740dc3ee8d3bfe1c6bf5dba8"},
    {file = "mypy-1.16.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:eb14a4a871bb8efb1e4a50360d4e3c8d6c601e7a31028a2c79f9bb659b63d730"},
    {file = "mypy-1.16.0-cp312-cp312-win_amd64.whl", hash = "sha256:bd4e1ebe126152a7bbaa4daedd781c90c8f9643c79b9748caa270ad542f12bec"},
    {file = "mypy-1.16.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:a9e056237c89f1587a3be1a3a70a06a698d25e2479b9a2f57325ddaaffc3567b"},
    {file = "mypy-1.16.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:0b07e107affb9ee6ce1f342c07f51552d126c32cd62955f59a7db94a51ad12c0"},
    {file = "mypy-1.16.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:c6fb60cbd85dc65d4d63d37cb5c86f4e3a301ec605f606ae3a9173e5cf34997b"},
    {file = "mypy-1.16.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:a7e32297a437cc915599e0578fa6bc68ae6a8dc059c9e009c628e1c47f91495d"},
    {file = "mypy-1.16.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:afe420c9380ccec31e744e8baff0d406c846683681025db3531b32db56962d52"},
    {file = "mypy-1.16.0-cp313-cp313-win_amd64.whl", hash = "sha256:55f9076c6ce55dd3f8cd0c6fff26a008ca8e5131b89d5ba6d86bd3f47e736eeb"},
    {file = "mypy-1.16.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f56236114c425620875c7cf71700e3d60004858da856c6fc78998ffe767b73d3"},
    {file = "mypy-1.16.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:15486beea80be24ff067d7d0ede673b001d0d684d0095803b3e6e17a886a2a92"},
    {file = "mypy-1.16.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:f2ed0e0847a80655afa2c121835b848ed101cc7b8d8d6ecc5205aedc732b1436"},
    {file = "mypy-1.16.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:eb5fbc8063cb4fde7787e4c0406aa63094a34a2daf4673f359a1fb64050e9cb2"},
    {file = "mypy-1.16.0-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:a5fcfdb7318c6a8dd127b14b1052743b83e97a970f0edb6c913211507a255e20"},
    {file = "mypy-1.16.0-cp39-cp39-win_amd64.whl", hash = "sha256:2e7e0ad35275e02797323a5aa1be0b14a4d03ffdb2e5f2b0489fa07b89c67b21"},
    {file = "mypy-1.16.0-py3-none-any.whl", hash = "sha256:29e1499864a3888bca5c1542f2d7232c6e586295183320caa95758fc84034031"},
    {file = "mypy-1.16.0.tar.gz", hash = "sha256:84b94283f817e2aa6350a14b4a8fb2a35a53c286f97c9d30f53b63620e7af8ab"},
]

[package.dependencies]
mypy_extensions = ">=1.0.0"
pathspec = ">=0.9.0"
tomli = {version = ">=1.1.0", markers = "python_version < \"3.11\""}
typing_extensions = ">=4.6.0"

[package.extras]
dmypy = ["psutil (>=4.0)"]
faster-cache = ["orjson"]
install-types = ["pip"]
mypyc = ["setuptools (>=50)"]
reports = ["lxml"]

[[package]]
name = "mypy-extensions"
version = "1.0.0"
description = "Type system extensions for programs checked with the mypy type checker."
optional = false
python-versions = ">=3.5"
groups = ["dev"]
files = [
    {file = "mypy_extensions-1.0.0-py3-none-any.whl", hash = "sha256:4392f6c0eb8a5668a69e23d168ffa70f0be9ccfd32b5cc2d26a34ae5b844552d"},
    {file = "mypy_extensions-1.0.0.tar.gz", hash = "sha256:75dbf8955dc00442a438fc4d0666508a9a97b6bd41aa2f0ffe9d2f2725af0782"},
@@ -1240,7 +1252,6 @@ version = "2.2.2"
description = "Fundamental package for array computing in Python"
optional = false
python-versions = ">=3.10"
groups = ["main"]
files = [
    {file = "numpy-2.2.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:7079129b64cb78bdc8d611d1fd7e8002c0a2565da6a47c4df8062349fee90e3e"},
    {file = "numpy-2.2.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:2ec6c689c61df613b783aeb21f945c4cbe6c51c28cb70aae8430577ab39f163e"},
@@ -1305,7 +1316,6 @@ version = "1.75.0"
description = "The official Python library for the openai API"
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
    {file = "openai-1.75.0-py3-none-any.whl", hash = "sha256:fe6f932d2ded3b429ff67cc9ad118c71327db32eb9d32dd723de3acfca337125"},
    {file = "openai-1.75.0.tar.gz", hash = "sha256:fb3ea907efbdb1bcfd0c44507ad9c961afd7dce3147292b54505ecfd17be8fd1"},
@@ -1332,7 +1342,6 @@ version = "24.2"
description = "Core utilities for Python packages"
optional = false
python-versions = ">=3.8"
groups = ["main", "dev"]
files = [
    {file = "packaging-24.2-py3-none-any.whl", hash = "sha256:09abb1bccd265c01f4a3aa3f7a7db064b36514d2cba19a2f694fe6150451a759"},
    {file = "packaging-24.2.tar.gz", hash = "sha256:c228a6dc5e932d346bc5739379109d49e8853dd8223571c7c5b55260edc0b97f"},
@@ -1344,7 +1353,6 @@ version = "2.2.3"
description = "Powerful data structures for data analysis, time series, and statistics"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
    {file = "pandas-2.2.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:1948ddde24197a0f7add2bdc4ca83bf2b1ef84a1bc8ccffd95eda17fd836ecb5"},
    {file = "pandas-2.2.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:381175499d3802cde0eabbaf6324cce0c4f5d52ca6f8c377c29ad442f50f6348"},
@@ -1431,7 +1439,6 @@ version = "0.12.1"
description = "Utility library for gitignore style pattern matching of file paths."
optional = false
python-versions = ">=3.8"
groups = ["dev"]
files = [
    {file = "pathspec-0.12.1-py3-none-any.whl", hash = "sha256:a0d503e138a4c123b27490a4f7beda6a01c6f288df0e4a8b79c7eb0dc7b4cc08"},
    {file = "pathspec-0.12.1.tar.gz", hash = "sha256:a482d51503a1ab33b1c67a6c3813a26953dbdc71c31dacaef9a838c4e29f5712"},
@@ -1443,7 +1450,6 @@ version = "4.3.6"
description = "A small Python package for determining appropriate platform-specific dirs, e.g. a `user data dir`."
optional = false
python-versions = ">=3.8"
groups = ["dev"]
files = [
    {file = "platformdirs-4.3.6-py3-none-any.whl", hash = "sha256:73e575e1408ab8103900836b97580d5307456908a03e92031bab39e4554cc3fb"},
    {file = "platformdirs-4.3.6.tar.gz", hash = "sha256:357fb2acbc885b0419afd3ce3ed34564c13c9b95c89360cd9563f73aa5e2b907"},
@@ -1460,7 +1466,6 @@ version = "1.5.0"
description = "plugin and hook calling mechanisms for python"
optional = false
python-versions = ">=3.8"
groups = ["dev"]
files = [
    {file = "pluggy-1.5.0-py3-none-any.whl", hash = "sha256:44e1ad92c8ca002de6377e165f3e0f1be63266ab4d554740532335b9d75ea669"},
    {file = "pluggy-1.5.0.tar.gz", hash = "sha256:2cffa88e94fdc978c4c574f15f9e59b7f4201d439195c3715ca9e2486f1d0cf1"},
@@ -1476,7 +1481,6 @@ version = "0.3.1"
description = "Accelerated property cache"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
    {file = "propcache-0.3.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:f27785888d2fdd918bc36de8b8739f2d6c791399552333721b58193f68ea3e98"},
    {file = "propcache-0.3.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d4e89cde74154c7b5957f87a355bb9c8ec929c167b59c83d90654ea36aeb6180"},
@@ -1584,7 +1588,6 @@ version = "5.29.2"
description = ""
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
    {file = "protobuf-5.29.2-cp310-abi3-win32.whl", hash = "sha256:c12ba8249f5624300cf51c3d0bfe5be71a60c63e4dcf51ffe9a68771d958c851"},
    {file = "protobuf-5.29.2-cp310-abi3-win_amd64.whl", hash = "sha256:842de6d9241134a973aab719ab42b008a18a90f9f07f06ba480df268f86432f9"},
@@ -1605,7 +1608,6 @@ version = "2.10.4"
description = "Data validation using Python type hints"
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
    {file = "pydantic-2.10.4-py3-none-any.whl", hash = "sha256:597e135ea68be3a37552fb524bc7d0d66dcf93d395acd93a00682f1efcb8ee3d"},
    {file = "pydantic-2.10.4.tar.gz", hash = "sha256:82f12e9723da6de4fe2ba888b5971157b3be7ad914267dea8f05f82b28254f06"},
@@ -1618,7 +1620,7 @@ typing-extensions = ">=4.12.2"

[package.extras]
email = ["email-validator (>=2.0.0)"]
timezone = ["tzdata ; python_version >= \"3.9\" and platform_system == \"Windows\""]
timezone = ["tzdata"]

[[package]]
name = "pydantic-core"
@@ -1626,7 +1628,6 @@ version = "2.27.2"
description = "Core functionality for Pydantic validation and serialization"
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
    {file = "pydantic_core-2.27.2-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:2d367ca20b2f14095a8f4fa1210f5a7b78b8a20009ecced6b12818f455b1e9fa"},
    {file = "pydantic_core-2.27.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:491a2b73db93fab69731eaee494f320faa4e093dbed776be1a829c2eb222c34c"},
@@ -1739,7 +1740,6 @@ version = "0.10.0"
description = "Vectorized spatial vector file format I/O using GDAL/OGR"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
    {file = "pyogrio-0.10.0-cp310-cp310-macosx_12_0_arm64.whl", hash = "sha256:046eeeae12a03a3ebc3dc5ff5a87664e4f5fc0a4fb1ea5d5c45d547fa941072b"},
    {file = "pyogrio-0.10.0-cp310-cp310-macosx_12_0_x86_64.whl", hash = "sha256:44380f4d9245c776f432526e29ce4d29238aea26adad991803c4f453474f51d3"},
@@ -1791,7 +1791,6 @@ version = "3.7.0"
description = "Python interface to PROJ (cartographic projections and coordinate transformations library)"
optional = false
python-versions = ">=3.10"
groups = ["main"]
files = [
    {file = "pyproj-3.7.0-cp310-cp310-macosx_12_0_x86_64.whl", hash = "sha256:d5c7e7d24b967e328a5efd013f466804a1f226d1106ac7efc47dcc99360dbc8f"},
    {file = "pyproj-3.7.0-cp310-cp310-macosx_14_0_arm64.whl", hash = "sha256:448958c46bd3fe2da91c89ba551ac5835e63073ca861422c6eb1af89979dfab1"},
@@ -1829,7 +1828,6 @@ version = "8.3.5"
description = "pytest: simple powerful testing with Python"
optional = false
python-versions = ">=3.8"
groups = ["dev"]
files = [
    {file = "pytest-8.3.5-py3-none-any.whl", hash = "sha256:c69214aa47deac29fad6c2a4f590b9c4a9fdb16a403176fe154b79c0b4d4d820"},
    {file = "pytest-8.3.5.tar.gz", hash = "sha256:f4efe70cc14e511565ac476b57c279e12a855b11f48f212af1080ef2263d3845"},
@@ -1852,7 +1850,6 @@ version = "0.26.0"
description = "Pytest support for asyncio"
optional = false
python-versions = ">=3.9"
groups = ["dev"]
files = [
    {file = "pytest_asyncio-0.26.0-py3-none-any.whl", hash = "sha256:7b51ed894f4fbea1340262bdae5135797ebbe21d8638978e35d31c6d19f72fb0"},
    {file = "pytest_asyncio-0.26.0.tar.gz", hash = "sha256:c4df2a697648241ff39e7f0e4a73050b03f123f760673956cf0d72a4990e312f"},
@@ -1871,7 +1868,6 @@ version = "2.9.0.post0"
description = "Extensions to the standard Python datetime module"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,>=2.7"
groups = ["main"]
files = [
    {file = "python-dateutil-2.9.0.post0.tar.gz", hash = "sha256:37dd54208da7e1cd875388217d5e00ebd4179249f90fb72437e91a35459a0ad3"},
    {file = "python_dateutil-2.9.0.post0-py2.py3-none-any.whl", hash = "sha256:a8b2bc7bffae282281c8140a97d3aa9c14da0b136dfe83f850eea9a5f7470427"},
@@ -1886,7 +1882,6 @@ version = "1.0.1"
description = "Read key-value pairs from a .env file and set them as environment variables"
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
    {file = "python-dotenv-1.0.1.tar.gz", hash = "sha256:e324ee90a023d808f1959c46bcbc04446a10ced277783dc6ee09987c37ec10ca"},
    {file = "python_dotenv-1.0.1-py3-none-any.whl", hash = "sha256:f7b63ef50f1b690dddf550d03497b66d609393b40b564ed0d674909a68ebf16a"},
@@ -1901,7 +1896,6 @@ version = "2025.1"
description = "World timezone definitions, modern and historical"
optional = false
python-versions = "*"
groups = ["main"]
files = [
    {file = "pytz-2025.1-py2.py3-none-any.whl", hash = "sha256:89dd22dca55b46eac6eda23b2d72721bf1bdfef212645d81513ef5d03038de57"},
    {file = "pytz-2025.1.tar.gz", hash = "sha256:c2db42be2a2518b28e65f9207c4d05e6ff547d1efa4086469ef855e4ab70178e"},
@@ -1913,7 +1907,6 @@ version = "6.0.2"
description = "YAML parser and emitter for Python"
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
    {file = "PyYAML-6.0.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:0a9a2848a5b7feac301353437eb7d5957887edbf81d56e903999a75a3d743086"},
    {file = "PyYAML-6.0.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:29717114e51c84ddfba879543fb232a6ed60086602313ca38cce623c1d62cfbf"},
@@ -1976,7 +1969,6 @@ version = "0.36.2"
description = "JSON Referencing + Python"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
    {file = "referencing-0.36.2-py3-none-any.whl", hash = "sha256:e8699adbbf8b5c7de96d8ffa0eb5c158b3beafce084968e2ea8bb08c6794dcd0"},
    {file = "referencing-0.36.2.tar.gz", hash = "sha256:df2e89862cd09deabbdba16944cc3f10feb6b3e6f18e902f7cc25609a34775aa"},
@@ -1993,7 +1985,6 @@ version = "2024.11.6"
description = "Alternative regular expression module, to replace re."
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
    {file = "regex-2024.11.6-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:ff590880083d60acc0433f9c3f713c51f7ac6ebb9adf889c79a261ecf541aa91"},
    {file = "regex-2024.11.6-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:658f90550f38270639e83ce492f27d2c8d2cd63805c65a13a14d36ca126753f0"},
|
||||||
@@ -2097,7 +2088,6 @@ version = "2.32.3"
|
|||||||
description = "Python HTTP for Humans."
|
description = "Python HTTP for Humans."
|
||||||
optional = false
|
optional = false
|
||||||
python-versions = ">=3.8"
|
python-versions = ">=3.8"
|
||||||
groups = ["main"]
|
|
||||||
files = [
|
files = [
|
||||||
{file = "requests-2.32.3-py3-none-any.whl", hash = "sha256:70761cfe03c773ceb22aa2f671b4757976145175cdfca038c02654d061d6dcc6"},
|
{file = "requests-2.32.3-py3-none-any.whl", hash = "sha256:70761cfe03c773ceb22aa2f671b4757976145175cdfca038c02654d061d6dcc6"},
|
||||||
{file = "requests-2.32.3.tar.gz", hash = "sha256:55365417734eb18255590a9ff9eb97e9e1da868d4ccd6402399eaf68af20a760"},
|
{file = "requests-2.32.3.tar.gz", hash = "sha256:55365417734eb18255590a9ff9eb97e9e1da868d4ccd6402399eaf68af20a760"},
|
||||||
@@ -2119,7 +2109,6 @@ version = "0.25.0"
|
|||||||
description = "Python bindings to Rust's persistent data structures (rpds)"
|
description = "Python bindings to Rust's persistent data structures (rpds)"
|
||||||
optional = false
|
optional = false
|
||||||
python-versions = ">=3.9"
|
python-versions = ">=3.9"
|
||||||
groups = ["main"]
|
|
||||||
files = [
|
files = [
|
||||||
{file = "rpds_py-0.25.0-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:c146a24a8f0dc4a7846fb4640b88b3a68986585b8ce8397af15e66b7c5817439"},
|
{file = "rpds_py-0.25.0-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:c146a24a8f0dc4a7846fb4640b88b3a68986585b8ce8397af15e66b7c5817439"},
|
||||||
{file = "rpds_py-0.25.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:77814c7a4e1dc43fba73aeb4c1ef0fe37d901f3aa869a4823de5ea843a283fd0"},
|
{file = "rpds_py-0.25.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:77814c7a4e1dc43fba73aeb4c1ef0fe37d901f3aa869a4823de5ea843a283fd0"},
|
||||||
@@ -2243,7 +2232,6 @@ version = "1.3.0"
|
|||||||
description = "R-Tree spatial index for Python GIS"
|
description = "R-Tree spatial index for Python GIS"
|
||||||
optional = false
|
optional = false
|
||||||
python-versions = ">=3.8"
|
python-versions = ">=3.8"
|
||||||
groups = ["main"]
|
|
||||||
files = [
|
files = [
|
||||||
{file = "Rtree-1.3.0-py3-none-macosx_10_9_x86_64.whl", hash = "sha256:80879d9db282a2273ca3a0d896c84583940e9777477727a277624ebfd424c517"},
|
{file = "Rtree-1.3.0-py3-none-macosx_10_9_x86_64.whl", hash = "sha256:80879d9db282a2273ca3a0d896c84583940e9777477727a277624ebfd424c517"},
|
||||||
{file = "Rtree-1.3.0-py3-none-macosx_11_0_arm64.whl", hash = "sha256:4328e9e421797c347e6eb08efbbade962fe3664ebd60c1dffe82c40911b1e125"},
|
{file = "Rtree-1.3.0-py3-none-macosx_11_0_arm64.whl", hash = "sha256:4328e9e421797c347e6eb08efbbade962fe3664ebd60c1dffe82c40911b1e125"},
|
||||||
@@ -2263,7 +2251,6 @@ version = "2.0.7"
|
|||||||
description = "Manipulation and analysis of geometric objects"
|
description = "Manipulation and analysis of geometric objects"
|
||||||
optional = false
|
optional = false
|
||||||
python-versions = ">=3.7"
|
python-versions = ">=3.7"
|
||||||
groups = ["main"]
|
|
||||||
files = [
|
files = [
|
||||||
{file = "shapely-2.0.7-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:33fb10e50b16113714ae40adccf7670379e9ccf5b7a41d0002046ba2b8f0f691"},
|
{file = "shapely-2.0.7-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:33fb10e50b16113714ae40adccf7670379e9ccf5b7a41d0002046ba2b8f0f691"},
|
||||||
{file = "shapely-2.0.7-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:f44eda8bd7a4bccb0f281264b34bf3518d8c4c9a8ffe69a1a05dabf6e8461147"},
|
{file = "shapely-2.0.7-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:f44eda8bd7a4bccb0f281264b34bf3518d8c4c9a8ffe69a1a05dabf6e8461147"},
|
||||||
@@ -2322,7 +2309,6 @@ version = "1.17.0"
|
|||||||
description = "Python 2 and 3 compatibility utilities"
|
description = "Python 2 and 3 compatibility utilities"
|
||||||
optional = false
|
optional = false
|
||||||
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,>=2.7"
|
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,>=2.7"
|
||||||
groups = ["main"]
|
|
||||||
files = [
|
files = [
|
||||||
{file = "six-1.17.0-py2.py3-none-any.whl", hash = "sha256:4721f391ed90541fddacab5acf947aa0d3dc7d27b2e1e8eda2be8970586c3274"},
|
{file = "six-1.17.0-py2.py3-none-any.whl", hash = "sha256:4721f391ed90541fddacab5acf947aa0d3dc7d27b2e1e8eda2be8970586c3274"},
|
||||||
{file = "six-1.17.0.tar.gz", hash = "sha256:ff70335d468e7eb6ec65b95b99d3a2836546063f63acc5171de367e834932a81"},
|
{file = "six-1.17.0.tar.gz", hash = "sha256:ff70335d468e7eb6ec65b95b99d3a2836546063f63acc5171de367e834932a81"},
|
||||||
@@ -2334,7 +2320,6 @@ version = "1.3.1"
|
|||||||
description = "Sniff out which async library your code is running under"
|
description = "Sniff out which async library your code is running under"
|
||||||
optional = false
|
optional = false
|
||||||
python-versions = ">=3.7"
|
python-versions = ">=3.7"
|
||||||
groups = ["main"]
|
|
||||||
files = [
|
files = [
|
||||||
{file = "sniffio-1.3.1-py3-none-any.whl", hash = "sha256:2f6da418d1f1e0fddd844478f41680e794e6051915791a034ff65e5f100525a2"},
|
{file = "sniffio-1.3.1-py3-none-any.whl", hash = "sha256:2f6da418d1f1e0fddd844478f41680e794e6051915791a034ff65e5f100525a2"},
|
||||||
{file = "sniffio-1.3.1.tar.gz", hash = "sha256:f4324edc670a0f49750a81b895f35c3adb843cca46f0530f79fc1babb23789dc"},
|
{file = "sniffio-1.3.1.tar.gz", hash = "sha256:f4324edc670a0f49750a81b895f35c3adb843cca46f0530f79fc1babb23789dc"},
|
||||||
@@ -2346,7 +2331,6 @@ version = "0.41.3"
|
|||||||
description = "The little ASGI library that shines."
|
description = "The little ASGI library that shines."
|
||||||
optional = false
|
optional = false
|
||||||
python-versions = ">=3.8"
|
python-versions = ">=3.8"
|
||||||
groups = ["main"]
|
|
||||||
files = [
|
files = [
|
||||||
{file = "starlette-0.41.3-py3-none-any.whl", hash = "sha256:44cedb2b7c77a9de33a8b74b2b90e9f50d11fcf25d8270ea525ad71a25374ff7"},
|
{file = "starlette-0.41.3-py3-none-any.whl", hash = "sha256:44cedb2b7c77a9de33a8b74b2b90e9f50d11fcf25d8270ea525ad71a25374ff7"},
|
||||||
{file = "starlette-0.41.3.tar.gz", hash = "sha256:0e4ab3d16522a255be6b28260b938eae2482f98ce5cc934cb08dce8dc3ba5835"},
|
{file = "starlette-0.41.3.tar.gz", hash = "sha256:0e4ab3d16522a255be6b28260b938eae2482f98ce5cc934cb08dce8dc3ba5835"},
|
||||||
@@ -2364,7 +2348,6 @@ version = "11.6.0"
|
|||||||
description = "Python bindings for the Stripe API"
|
description = "Python bindings for the Stripe API"
|
||||||
optional = false
|
optional = false
|
||||||
python-versions = ">=3.6"
|
python-versions = ">=3.6"
|
||||||
groups = ["main"]
|
|
||||||
files = [
|
files = [
|
||||||
{file = "stripe-11.6.0-py2.py3-none-any.whl", hash = "sha256:6e6cf09ebb6d5fc2d708401cb8868fd7bff987a6d09a0433caaa92c62f97dbc5"},
|
{file = "stripe-11.6.0-py2.py3-none-any.whl", hash = "sha256:6e6cf09ebb6d5fc2d708401cb8868fd7bff987a6d09a0433caaa92c62f97dbc5"},
|
||||||
{file = "stripe-11.6.0.tar.gz", hash = "sha256:0ced7cce23a6cb1a393c86a1f7f9435c9d83ae7cbd556362868caf62cb44a92c"},
|
{file = "stripe-11.6.0.tar.gz", hash = "sha256:0ced7cce23a6cb1a393c86a1f7f9435c9d83ae7cbd556362868caf62cb44a92c"},
|
||||||
@@ -2380,7 +2363,6 @@ version = "1.9.0"
|
|||||||
description = "Temporal.io Python SDK"
|
description = "Temporal.io Python SDK"
|
||||||
optional = false
|
optional = false
|
||||||
python-versions = "<4.0,>=3.8"
|
python-versions = "<4.0,>=3.8"
|
||||||
groups = ["main"]
|
|
||||||
files = [
|
files = [
|
||||||
{file = "temporalio-1.9.0-cp38-abi3-macosx_10_9_x86_64.whl", hash = "sha256:ee941702e8925e2c018b5c2d7b296f811205043654d7f9c4564d7fa6597f1989"},
|
{file = "temporalio-1.9.0-cp38-abi3-macosx_10_9_x86_64.whl", hash = "sha256:ee941702e8925e2c018b5c2d7b296f811205043654d7f9c4564d7fa6597f1989"},
|
||||||
{file = "temporalio-1.9.0-cp38-abi3-macosx_11_0_arm64.whl", hash = "sha256:101040090238d97b61d769e009f732409894d8f26596a3827662f2dde2862097"},
|
{file = "temporalio-1.9.0-cp38-abi3-macosx_11_0_arm64.whl", hash = "sha256:101040090238d97b61d769e009f732409894d8f26596a3827662f2dde2862097"},
|
||||||
@@ -2406,7 +2388,6 @@ version = "0.9.0"
|
|||||||
description = "tiktoken is a fast BPE tokeniser for use with OpenAI's models"
|
description = "tiktoken is a fast BPE tokeniser for use with OpenAI's models"
|
||||||
optional = false
|
optional = false
|
||||||
python-versions = ">=3.9"
|
python-versions = ">=3.9"
|
||||||
groups = ["main"]
|
|
||||||
files = [
|
files = [
|
||||||
{file = "tiktoken-0.9.0-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:586c16358138b96ea804c034b8acf3f5d3f0258bd2bc3b0227af4af5d622e382"},
|
{file = "tiktoken-0.9.0-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:586c16358138b96ea804c034b8acf3f5d3f0258bd2bc3b0227af4af5d622e382"},
|
||||||
{file = "tiktoken-0.9.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:d9c59ccc528c6c5dd51820b3474402f69d9a9e1d656226848ad68a8d5b2e5108"},
|
{file = "tiktoken-0.9.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:d9c59ccc528c6c5dd51820b3474402f69d9a9e1d656226848ad68a8d5b2e5108"},
|
||||||
@@ -2454,7 +2435,6 @@ version = "0.21.1"
|
|||||||
description = ""
|
description = ""
|
||||||
optional = false
|
optional = false
|
||||||
python-versions = ">=3.9"
|
python-versions = ">=3.9"
|
||||||
groups = ["main"]
|
|
||||||
files = [
|
files = [
|
||||||
{file = "tokenizers-0.21.1-cp39-abi3-macosx_10_12_x86_64.whl", hash = "sha256:e78e413e9e668ad790a29456e677d9d3aa50a9ad311a40905d6861ba7692cf41"},
|
{file = "tokenizers-0.21.1-cp39-abi3-macosx_10_12_x86_64.whl", hash = "sha256:e78e413e9e668ad790a29456e677d9d3aa50a9ad311a40905d6861ba7692cf41"},
|
||||||
{file = "tokenizers-0.21.1-cp39-abi3-macosx_11_0_arm64.whl", hash = "sha256:cd51cd0a91ecc801633829fcd1fda9cf8682ed3477c6243b9a095539de4aecf3"},
|
{file = "tokenizers-0.21.1-cp39-abi3-macosx_11_0_arm64.whl", hash = "sha256:cd51cd0a91ecc801633829fcd1fda9cf8682ed3477c6243b9a095539de4aecf3"},
|
||||||
@@ -2487,8 +2467,6 @@ version = "2.2.1"
|
|||||||
description = "A lil' TOML parser"
|
description = "A lil' TOML parser"
|
||||||
optional = false
|
optional = false
|
||||||
python-versions = ">=3.8"
|
python-versions = ">=3.8"
|
||||||
groups = ["dev"]
|
|
||||||
markers = "python_version == \"3.10\""
|
|
||||||
files = [
|
files = [
|
||||||
{file = "tomli-2.2.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:678e4fa69e4575eb77d103de3df8a895e1591b48e740211bd1067378c69e8249"},
|
{file = "tomli-2.2.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:678e4fa69e4575eb77d103de3df8a895e1591b48e740211bd1067378c69e8249"},
|
||||||
{file = "tomli-2.2.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:023aa114dd824ade0100497eb2318602af309e5a55595f76b626d6d9f3b7b0a6"},
|
{file = "tomli-2.2.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:023aa114dd824ade0100497eb2318602af309e5a55595f76b626d6d9f3b7b0a6"},
|
||||||
@@ -2530,7 +2508,6 @@ version = "4.67.1"
|
|||||||
description = "Fast, Extensible Progress Meter"
|
description = "Fast, Extensible Progress Meter"
|
||||||
optional = false
|
optional = false
|
||||||
python-versions = ">=3.7"
|
python-versions = ">=3.7"
|
||||||
groups = ["main"]
|
|
||||||
files = [
|
files = [
|
||||||
{file = "tqdm-4.67.1-py3-none-any.whl", hash = "sha256:26445eca388f82e72884e0d580d5464cd801a3ea01e63e5601bdff9ba6a48de2"},
|
{file = "tqdm-4.67.1-py3-none-any.whl", hash = "sha256:26445eca388f82e72884e0d580d5464cd801a3ea01e63e5601bdff9ba6a48de2"},
|
||||||
{file = "tqdm-4.67.1.tar.gz", hash = "sha256:f8aef9c52c08c13a65f30ea34f4e5aac3fd1a34959879d7e59e63027286627f2"},
|
{file = "tqdm-4.67.1.tar.gz", hash = "sha256:f8aef9c52c08c13a65f30ea34f4e5aac3fd1a34959879d7e59e63027286627f2"},
|
||||||
@@ -2552,7 +2529,6 @@ version = "5.29.1.20241207"
|
|||||||
description = "Typing stubs for protobuf"
|
description = "Typing stubs for protobuf"
|
||||||
optional = false
|
optional = false
|
||||||
python-versions = ">=3.8"
|
python-versions = ">=3.8"
|
||||||
groups = ["main"]
|
|
||||||
files = [
|
files = [
|
||||||
{file = "types_protobuf-5.29.1.20241207-py3-none-any.whl", hash = "sha256:92893c42083e9b718c678badc0af7a9a1307b92afe1599e5cba5f3d35b668b2f"},
|
{file = "types_protobuf-5.29.1.20241207-py3-none-any.whl", hash = "sha256:92893c42083e9b718c678badc0af7a9a1307b92afe1599e5cba5f3d35b668b2f"},
|
||||||
{file = "types_protobuf-5.29.1.20241207.tar.gz", hash = "sha256:2ebcadb8ab3ef2e3e2f067e0882906d64ba0dc65fc5b0fd7a8b692315b4a0be9"},
|
{file = "types_protobuf-5.29.1.20241207.tar.gz", hash = "sha256:2ebcadb8ab3ef2e3e2f067e0882906d64ba0dc65fc5b0fd7a8b692315b4a0be9"},
|
||||||
@@ -2564,12 +2540,10 @@ version = "4.12.2"
|
|||||||
description = "Backported and Experimental Type Hints for Python 3.8+"
|
description = "Backported and Experimental Type Hints for Python 3.8+"
|
||||||
optional = false
|
optional = false
|
||||||
python-versions = ">=3.8"
|
python-versions = ">=3.8"
|
||||||
groups = ["main", "dev"]
|
|
||||||
files = [
|
files = [
|
||||||
{file = "typing_extensions-4.12.2-py3-none-any.whl", hash = "sha256:04e5ca0351e0f3f85c6853954072df659d0d13fac324d0072316b67d7794700d"},
|
{file = "typing_extensions-4.12.2-py3-none-any.whl", hash = "sha256:04e5ca0351e0f3f85c6853954072df659d0d13fac324d0072316b67d7794700d"},
|
||||||
{file = "typing_extensions-4.12.2.tar.gz", hash = "sha256:1a7ead55c7e559dd4dee8856e3a88b41225abfe1ce8df57b7c13915fe121ffb8"},
|
{file = "typing_extensions-4.12.2.tar.gz", hash = "sha256:1a7ead55c7e559dd4dee8856e3a88b41225abfe1ce8df57b7c13915fe121ffb8"},
|
||||||
]
|
]
|
||||||
markers = {dev = "python_version == \"3.10\""}
|
|
||||||
|
|
||||||
[[package]]
|
[[package]]
|
||||||
name = "tzdata"
|
name = "tzdata"
|
||||||
@@ -2577,7 +2551,6 @@ version = "2025.1"
|
|||||||
description = "Provider of IANA time zone data"
|
description = "Provider of IANA time zone data"
|
||||||
optional = false
|
optional = false
|
||||||
python-versions = ">=2"
|
python-versions = ">=2"
|
||||||
groups = ["main"]
|
|
||||||
files = [
|
files = [
|
||||||
{file = "tzdata-2025.1-py2.py3-none-any.whl", hash = "sha256:7e127113816800496f027041c570f50bcd464a020098a3b6b199517772303639"},
|
{file = "tzdata-2025.1-py2.py3-none-any.whl", hash = "sha256:7e127113816800496f027041c570f50bcd464a020098a3b6b199517772303639"},
|
||||||
{file = "tzdata-2025.1.tar.gz", hash = "sha256:24894909e88cdb28bd1636c6887801df64cb485bd593f2fd83ef29075a81d694"},
|
{file = "tzdata-2025.1.tar.gz", hash = "sha256:24894909e88cdb28bd1636c6887801df64cb485bd593f2fd83ef29075a81d694"},
|
||||||
@@ -2589,14 +2562,13 @@ version = "2.3.0"
|
|||||||
description = "HTTP library with thread-safe connection pooling, file post, and more."
|
description = "HTTP library with thread-safe connection pooling, file post, and more."
|
||||||
optional = false
|
optional = false
|
||||||
python-versions = ">=3.9"
|
python-versions = ">=3.9"
|
||||||
groups = ["main"]
|
|
||||||
files = [
|
files = [
|
||||||
{file = "urllib3-2.3.0-py3-none-any.whl", hash = "sha256:1cee9ad369867bfdbbb48b7dd50374c0967a0bb7710050facf0dd6911440e3df"},
|
{file = "urllib3-2.3.0-py3-none-any.whl", hash = "sha256:1cee9ad369867bfdbbb48b7dd50374c0967a0bb7710050facf0dd6911440e3df"},
|
||||||
{file = "urllib3-2.3.0.tar.gz", hash = "sha256:f8c5449b3cf0861679ce7e0503c7b44b5ec981bec0d1d3795a07f1ba96f0204d"},
|
{file = "urllib3-2.3.0.tar.gz", hash = "sha256:f8c5449b3cf0861679ce7e0503c7b44b5ec981bec0d1d3795a07f1ba96f0204d"},
|
||||||
]
|
]
|
||||||
|
|
||||||
[package.extras]
|
[package.extras]
|
||||||
brotli = ["brotli (>=1.0.9) ; platform_python_implementation == \"CPython\"", "brotlicffi (>=0.8.0) ; platform_python_implementation != \"CPython\""]
|
brotli = ["brotli (>=1.0.9)", "brotlicffi (>=0.8.0)"]
|
||||||
h2 = ["h2 (>=4,<5)"]
|
h2 = ["h2 (>=4,<5)"]
|
||||||
socks = ["pysocks (>=1.5.6,!=1.5.7,<2.0)"]
|
socks = ["pysocks (>=1.5.6,!=1.5.7,<2.0)"]
|
||||||
zstd = ["zstandard (>=0.18.0)"]
|
zstd = ["zstandard (>=0.18.0)"]
|
||||||
@@ -2607,7 +2579,6 @@ version = "0.34.0"
|
|||||||
description = "The lightning-fast ASGI server."
|
description = "The lightning-fast ASGI server."
|
||||||
optional = false
|
optional = false
|
||||||
python-versions = ">=3.9"
|
python-versions = ">=3.9"
|
||||||
groups = ["main"]
|
|
||||||
files = [
|
files = [
|
||||||
{file = "uvicorn-0.34.0-py3-none-any.whl", hash = "sha256:023dc038422502fa28a09c7a30bf2b6991512da7dcdb8fd35fe57cfc154126f4"},
|
{file = "uvicorn-0.34.0-py3-none-any.whl", hash = "sha256:023dc038422502fa28a09c7a30bf2b6991512da7dcdb8fd35fe57cfc154126f4"},
|
||||||
{file = "uvicorn-0.34.0.tar.gz", hash = "sha256:404051050cd7e905de2c9a7e61790943440b3416f49cb409f965d9dcd0fa73e9"},
|
{file = "uvicorn-0.34.0.tar.gz", hash = "sha256:404051050cd7e905de2c9a7e61790943440b3416f49cb409f965d9dcd0fa73e9"},
|
||||||
@@ -2619,7 +2590,7 @@ h11 = ">=0.8"
|
|||||||
typing-extensions = {version = ">=4.0", markers = "python_version < \"3.11\""}
|
typing-extensions = {version = ">=4.0", markers = "python_version < \"3.11\""}
|
||||||
|
|
||||||
[package.extras]
|
[package.extras]
|
||||||
standard = ["colorama (>=0.4) ; sys_platform == \"win32\"", "httptools (>=0.6.3)", "python-dotenv (>=0.13)", "pyyaml (>=5.1)", "uvloop (>=0.14.0,!=0.15.0,!=0.15.1) ; sys_platform != \"win32\" and sys_platform != \"cygwin\" and platform_python_implementation != \"PyPy\"", "watchfiles (>=0.13)", "websockets (>=10.4)"]
|
standard = ["colorama (>=0.4)", "httptools (>=0.6.3)", "python-dotenv (>=0.13)", "pyyaml (>=5.1)", "uvloop (>=0.14.0,!=0.15.0,!=0.15.1)", "watchfiles (>=0.13)", "websockets (>=10.4)"]
|
||||||
|
|
||||||
[[package]]
|
[[package]]
|
||||||
name = "xyzservices"
|
name = "xyzservices"
|
||||||
@@ -2627,7 +2598,6 @@ version = "2025.1.0"
|
|||||||
description = "Source of XYZ tiles providers"
|
description = "Source of XYZ tiles providers"
|
||||||
optional = false
|
optional = false
|
||||||
python-versions = ">=3.8"
|
python-versions = ">=3.8"
|
||||||
groups = ["main"]
|
|
||||||
files = [
|
files = [
|
||||||
{file = "xyzservices-2025.1.0-py3-none-any.whl", hash = "sha256:fa599956c5ab32dad1689960b3bb08fdcdbe0252cc82d84fc60ae415dc648907"},
|
{file = "xyzservices-2025.1.0-py3-none-any.whl", hash = "sha256:fa599956c5ab32dad1689960b3bb08fdcdbe0252cc82d84fc60ae415dc648907"},
|
||||||
{file = "xyzservices-2025.1.0.tar.gz", hash = "sha256:5cdbb0907c20be1be066c6e2dc69c645842d1113a4e83e642065604a21f254ba"},
|
{file = "xyzservices-2025.1.0.tar.gz", hash = "sha256:5cdbb0907c20be1be066c6e2dc69c645842d1113a4e83e642065604a21f254ba"},
|
||||||
@@ -2639,7 +2609,6 @@ version = "1.20.0"
|
|||||||
description = "Yet another URL library"
|
description = "Yet another URL library"
|
||||||
optional = false
|
optional = false
|
||||||
python-versions = ">=3.9"
|
python-versions = ">=3.9"
|
||||||
groups = ["main"]
|
|
||||||
files = [
|
files = [
|
||||||
{file = "yarl-1.20.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:f1f6670b9ae3daedb325fa55fbe31c22c8228f6e0b513772c2e1c623caa6ab22"},
|
{file = "yarl-1.20.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:f1f6670b9ae3daedb325fa55fbe31c22c8228f6e0b513772c2e1c623caa6ab22"},
|
||||||
{file = "yarl-1.20.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:85a231fa250dfa3308f3c7896cc007a47bc76e9e8e8595c20b7426cac4884c62"},
|
{file = "yarl-1.20.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:85a231fa250dfa3308f3c7896cc007a47bc76e9e8e8595c20b7426cac4884c62"},
|
||||||
@@ -2758,21 +2727,20 @@ version = "3.21.0"
|
|||||||
description = "Backport of pathlib-compatible object wrapper for zip files"
|
description = "Backport of pathlib-compatible object wrapper for zip files"
|
||||||
optional = false
|
optional = false
|
||||||
python-versions = ">=3.9"
|
python-versions = ">=3.9"
|
||||||
groups = ["main"]
|
|
||||||
files = [
|
files = [
|
||||||
{file = "zipp-3.21.0-py3-none-any.whl", hash = "sha256:ac1bbe05fd2991f160ebce24ffbac5f6d11d83dc90891255885223d42b3cd931"},
|
{file = "zipp-3.21.0-py3-none-any.whl", hash = "sha256:ac1bbe05fd2991f160ebce24ffbac5f6d11d83dc90891255885223d42b3cd931"},
|
||||||
{file = "zipp-3.21.0.tar.gz", hash = "sha256:2c9958f6430a2040341a52eb608ed6dd93ef4392e02ffe219417c1b28b5dd1f4"},
|
{file = "zipp-3.21.0.tar.gz", hash = "sha256:2c9958f6430a2040341a52eb608ed6dd93ef4392e02ffe219417c1b28b5dd1f4"},
|
||||||
]
|
]
|
||||||
|
|
||||||
[package.extras]
|
[package.extras]
|
||||||
check = ["pytest-checkdocs (>=2.4)", "pytest-ruff (>=0.2.1) ; sys_platform != \"cygwin\""]
|
check = ["pytest-checkdocs (>=2.4)", "pytest-ruff (>=0.2.1)"]
|
||||||
cover = ["pytest-cov"]
|
cover = ["pytest-cov"]
|
||||||
doc = ["furo", "jaraco.packaging (>=9.3)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-lint"]
|
doc = ["furo", "jaraco.packaging (>=9.3)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-lint"]
|
||||||
enabler = ["pytest-enabler (>=2.2)"]
|
enabler = ["pytest-enabler (>=2.2)"]
|
||||||
test = ["big-O", "importlib-resources ; python_version < \"3.9\"", "jaraco.functools", "jaraco.itertools", "jaraco.test", "more-itertools", "pytest (>=6,!=8.1.*)", "pytest-ignore-flaky"]
|
test = ["big-O", "importlib-resources", "jaraco.functools", "jaraco.itertools", "jaraco.test", "more-itertools", "pytest (>=6,!=8.1.*)", "pytest-ignore-flaky"]
|
||||||
type = ["pytest-mypy"]
|
type = ["pytest-mypy"]
|
||||||
|
|
||||||
[metadata]
|
[metadata]
|
||||||
lock-version = "2.1"
|
lock-version = "2.0"
|
||||||
python-versions = ">=3.10,<4.0"
|
python-versions = ">=3.10,<4.0"
|
||||||
content-hash = "ae5534663e9fa1ab21fb50bd6a7007aa201a22da0c3b729289f8a931434c14bf"
|
content-hash = "b391df89fabb111e4dd5d65a52a9db3a0bf9d95d5473e77cd0946beb940cf26f"
|
||||||
|
@@ -1,6 +1,7 @@
-from models.tool_definitions import AgentGoal
-from typing import Optional
 import json
+from typing import Optional
+
+from models.tool_definitions import AgentGoal
 
 MULTI_GOAL_MODE: bool = None
 
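The hunk above only reorders imports and leaves `MULTI_GOAL_MODE: bool = None` in place, even though a `None` default is not a `bool`; the `typing.Optional` import the file carries is the usual fix for exactly this shape of flag. A minimal sketch of the annotation and a hypothetical accessor (the `is_multi_goal` helper is not in the repo):

```python
from typing import Optional

# None means "not configured yet"; a plain `bool` annotation would reject
# the None default, so the tri-state flag wants Optional[bool].
MULTI_GOAL_MODE: Optional[bool] = None


def is_multi_goal(flag: Optional[bool], default: bool = False) -> bool:
    # Hypothetical helper: treat an unset flag as the caller's default.
    return default if flag is None else flag
```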
@@ -46,6 +46,7 @@ pytest = ">=8.2"
 pytest-asyncio = "^0.26.0"
 black = "^23.7"
 isort = "^5.12"
+mypy = "^1.16.0"
 
 [build-system]
 requires = ["poetry-core>=1.4.0"]
@@ -57,4 +58,15 @@ log_cli = true
 log_cli_level = "INFO"
 log_cli_format = "%(asctime)s [%(levelname)8s] %(message)s (%(filename)s:%(lineno)s)"
 asyncio_default_fixture_loop_scope = "function"
 norecursedirs = ["vibe"]
+
+[tool.mypy]
+python_version = "3.10"
+ignore_missing_imports = true
+check_untyped_defs = true
+namespace_packages = true
+explicit_package_bases = true
+ignore_errors = true
+
+[tool.isort]
+profile = "black"
@@ -1,12 +1,12 @@
 import asyncio
 
+from shared.config import get_temporal_client
 from workflows.agent_goal_workflow import AgentGoalWorkflow
 
 
 async def main():
     # Create client connected to server at the given address
-    client = await Client.connect("localhost:7233")
+    client = await get_temporal_client()
 
     workflow_id = "agent-workflow"
 
@@ -1,6 +1,7 @@
-from tools.search_flights import search_flights
 import json
+
+from tools.search_flights import search_flights
 
 # Example usage
 if __name__ == "__main__":
     search_args = {"city": "Sydney", "month": "July"}
@@ -1,6 +1,7 @@
-from tools.search_flights import search_flights
 import json
+
+from tools.search_flights import search_flights
 
 if __name__ == "__main__":
     # Suppose user typed "new" for New York, "lon" for London
     flights = search_flights("London", "JFK", "2025-01-15", "2025-01-23")
@@ -1,12 +1,10 @@
 import asyncio
-
 import concurrent.futures
-
 from temporalio.worker import Worker
 
 from activities.tool_activities import dynamic_tool_activity
-from shared.config import get_temporal_client, TEMPORAL_LEGACY_TASK_QUEUE
+from shared.config import TEMPORAL_LEGACY_TASK_QUEUE, get_temporal_client
 
 
 async def main():
@@ -24,7 +22,9 @@ async def main():
         activity_executor=activity_executor,
     )
 
-    print(f"Starting legacy worker, connecting to task queue: {TEMPORAL_LEGACY_TASK_QUEUE}")
+    print(
+        f"Starting legacy worker, connecting to task queue: {TEMPORAL_LEGACY_TASK_QUEUE}"
+    )
     await worker.run()
 
 
@@ -1,16 +1,15 @@
 import asyncio
 import concurrent.futures
-import os
-from dotenv import load_dotenv
 import logging
+import os
+
+from dotenv import load_dotenv
 from temporalio.worker import Worker
 
 from activities.tool_activities import ToolActivities, dynamic_tool_activity
+from shared.config import TEMPORAL_TASK_QUEUE, get_temporal_client
 from workflows.agent_goal_workflow import AgentGoalWorkflow
-from shared.config import get_temporal_client, TEMPORAL_TASK_QUEUE
 
 
 async def main():
     # Load environment variables
@@ -5,7 +5,6 @@ from shared.config import get_temporal_client
 
 
 async def main():
-
     # Connect to Temporal and signal the workflow
     client = await get_temporal_client()
 
@@ -1,4 +1,5 @@
 import os
+
 from dotenv import load_dotenv
 from temporalio.client import Client
 from temporalio.service import TLSConfig
@@ -9,13 +10,16 @@ load_dotenv(override=True)
 TEMPORAL_ADDRESS = os.getenv("TEMPORAL_ADDRESS", "localhost:7233")
 TEMPORAL_NAMESPACE = os.getenv("TEMPORAL_NAMESPACE", "default")
 TEMPORAL_TASK_QUEUE = os.getenv("TEMPORAL_TASK_QUEUE", "agent-task-queue")
-TEMPORAL_LEGACY_TASK_QUEUE = os.getenv("TEMPORAL_LEGACY_TASK_QUEUE", "agent-task-queue-legacy")
+TEMPORAL_LEGACY_TASK_QUEUE = os.getenv(
+    "TEMPORAL_LEGACY_TASK_QUEUE", "agent-task-queue-legacy"
+)
 
 # Authentication settings
 TEMPORAL_TLS_CERT = os.getenv("TEMPORAL_TLS_CERT", "")
 TEMPORAL_TLS_KEY = os.getenv("TEMPORAL_TLS_KEY", "")
 TEMPORAL_API_KEY = os.getenv("TEMPORAL_API_KEY", "")
 
 
 async def get_temporal_client() -> Client:
     """
     Creates a Temporal client based on environment configuration.
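The hunk above only re-wraps the `TEMPORAL_LEGACY_TASK_QUEUE` lookup for line length, but it sits inside the pattern the surrounding changes adopt: every script calls `get_temporal_client()` instead of hard-coding `Client.connect(...)`, and that factory picks its connection settings from the environment variables shown (TLS cert/key, API key, or neither). A stdlib-only sketch of that selection logic, assuming the real function returns a `temporalio` `Client` and that the mode names here are illustrative:

```python
import os


def connection_mode(env=None):
    """Sketch of the auth-mode choice implied by shared/config.py:
    mTLS when a cert/key pair is configured, API key when one is set,
    otherwise a plain (local dev) connection."""
    env = os.environ if env is None else env
    if env.get("TEMPORAL_TLS_CERT") and env.get("TEMPORAL_TLS_KEY"):
        return "mtls"
    if env.get("TEMPORAL_API_KEY"):
        return "api_key"
    return "plain"
```

Centralizing this in one factory is what lets every worker and starter script switch between a local dev server and a secured deployment without code changes.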
|||||||
@@ -63,8 +63,8 @@ async def client(env: WorkflowEnvironment) -> Client:
|
|||||||
@pytest.fixture
|
@pytest.fixture
|
||||||
def sample_agent_goal():
|
def sample_agent_goal():
|
||||||
"""Sample agent goal for testing."""
|
"""Sample agent goal for testing."""
|
||||||
from models.tool_definitions import AgentGoal, ToolDefinition, ToolArgument
|
from models.tool_definitions import AgentGoal, ToolArgument, ToolDefinition
|
||||||
|
|
||||||
return AgentGoal(
|
return AgentGoal(
|
||||||
id="test_goal",
|
id="test_goal",
|
||||||
category_tag="test",
|
category_tag="test",
|
||||||
@@ -77,13 +77,11 @@ def sample_agent_goal():
                 description="A test tool for testing purposes",
                 arguments=[
                     ToolArgument(
-                        name="test_arg",
-                        type="string",
-                        description="A test argument"
+                        name="test_arg", type="string", description="A test argument"
                     )
-                ]
+                ],
             )
-        ]
+        ],
     )
 
 
@@ -93,7 +91,7 @@ def sample_conversation_history():
     return {
         "messages": [
             {"actor": "user", "response": "Hello, I need help with testing"},
-            {"actor": "agent", "response": "I can help you with that"}
+            {"actor": "agent", "response": "I can help you with that"},
         ]
     }
 
@@ -101,16 +99,13 @@ def sample_conversation_history():
 @pytest.fixture
 def sample_combined_input(sample_agent_goal):
     """Sample combined input for workflow testing."""
-    from models.data_types import CombinedInput, AgentGoalWorkflowParams
     from collections import deque
 
+    from models.data_types import AgentGoalWorkflowParams, CombinedInput
+
     tool_params = AgentGoalWorkflowParams(
         conversation_summary="Test conversation summary",
-        prompt_queue=deque()  # Start with empty queue for most tests
+        prompt_queue=deque(),  # Start with empty queue for most tests
     )
 
-    return CombinedInput(
-        agent_goal=sample_agent_goal,
-        tool_params=tool_params
-    )
+    return CombinedInput(agent_goal=sample_agent_goal, tool_params=tool_params)
@@ -1,40 +1,35 @@
 import uuid
-from unittest.mock import patch, MagicMock
-import pytest
+
 from temporalio import activity
 from temporalio.client import Client
 from temporalio.worker import Worker
-from temporalio.testing import WorkflowEnvironment
 
-from workflows.agent_goal_workflow import AgentGoalWorkflow
-from activities.tool_activities import ToolActivities
 from models.data_types import (
-    CombinedInput,
     AgentGoalWorkflowParams,
-    ConversationHistory,
-    ValidationResult,
-    ValidationInput,
-    EnvLookupOutput,
+    CombinedInput,
     EnvLookupInput,
-    ToolPromptInput
+    EnvLookupOutput,
+    ToolPromptInput,
+    ValidationInput,
+    ValidationResult,
 )
+from workflows.agent_goal_workflow import AgentGoalWorkflow
 
 
 class TestAgentGoalWorkflow:
     """Test cases for AgentGoalWorkflow."""
 
-    async def test_workflow_initialization(self, client: Client, sample_combined_input: CombinedInput):
+    async def test_workflow_initialization(
+        self, client: Client, sample_combined_input: CombinedInput
+    ):
         """Test workflow can be initialized and started."""
         task_queue_name = str(uuid.uuid4())
 
         # Create mock activity functions with proper signatures
         @activity.defn(name="get_wf_env_vars")
         async def mock_get_wf_env_vars(input: EnvLookupInput) -> EnvLookupOutput:
-            return EnvLookupOutput(
-                show_confirm=True,
-                multi_goal_mode=True
-            )
+            return EnvLookupOutput(show_confirm=True, multi_goal_mode=True)
 
         async with Worker(
             client,
             task_queue=task_queue_name,
@@ -48,120 +43,47 @@ class TestAgentGoalWorkflow:
                 id=str(uuid.uuid4()),
                 task_queue=task_queue_name,
             )
 
             # Verify workflow is running
             assert handle is not None
 
             # Query the workflow to check initial state
-            conversation_history = await handle.query(AgentGoalWorkflow.get_conversation_history)
+            conversation_history = await handle.query(
+                AgentGoalWorkflow.get_conversation_history
+            )
             assert isinstance(conversation_history, dict)
             assert "messages" in conversation_history
 
             # Test goal query
             agent_goal = await handle.query(AgentGoalWorkflow.get_agent_goal)
             assert agent_goal == sample_combined_input.agent_goal
 
             # End the workflow
             await handle.signal(AgentGoalWorkflow.end_chat)
             result = await handle.result()
             assert isinstance(result, str)
 
-    async def test_user_prompt_signal(self, client: Client, sample_combined_input: CombinedInput):
+    async def test_user_prompt_signal(
+        self, client: Client, sample_combined_input: CombinedInput
+    ):
         """Test user_prompt signal handling."""
         task_queue_name = str(uuid.uuid4())
 
-        # Create mock activity functions with proper signatures
-        @activity.defn(name="get_wf_env_vars")
-        async def mock_get_wf_env_vars(input: EnvLookupInput) -> EnvLookupOutput:
-            return EnvLookupOutput(
-                show_confirm=True,
-                multi_goal_mode=True
-            )
-
-        @activity.defn(name="agent_validatePrompt")
-        async def mock_agent_validatePrompt(validation_input: ValidationInput) -> ValidationResult:
-            return ValidationResult(
-                validationResult=True,
-                validationFailedReason={}
-            )
-
-        @activity.defn(name="agent_toolPlanner")
-        async def mock_agent_toolPlanner(input: ToolPromptInput) -> dict:
-            return {
-                "next": "done",
-                "response": "Test response from LLM"
-            }
-
-        async with Worker(
-            client,
-            task_queue=task_queue_name,
-            workflows=[AgentGoalWorkflow],
-            activities=[
-                mock_get_wf_env_vars,
-                mock_agent_validatePrompt,
-                mock_agent_toolPlanner
-            ],
-        ):
-            handle = await client.start_workflow(
-                AgentGoalWorkflow.run,
-                sample_combined_input,
-                id=str(uuid.uuid4()),
-                task_queue=task_queue_name,
-            )
-
-            # Send user prompt
-            await handle.signal(AgentGoalWorkflow.user_prompt, "Hello, this is a test message")
-
-            # Wait for workflow to complete (it should end due to "done" next step)
-            result = await handle.result()
-            assert isinstance(result, str)
-
-            # Verify the conversation includes our message
-            import json
-            try:
-                conversation_history = json.loads(result.replace("'", '"'))
-            except:
-                # Fallback to eval if json fails
-                conversation_history = eval(result)
-            messages = conversation_history["messages"]
-
-            # Should have our user message and agent response
-            user_messages = [msg for msg in messages if msg["actor"] == "user"]
-            assert len(user_messages) > 0
-            assert any("Hello, this is a test message" in str(msg["response"]) for msg in user_messages)
-
-    async def test_confirm_signal(self, client: Client, sample_combined_input: CombinedInput):
-        """Test confirm signal handling for tool execution."""
-        task_queue_name = str(uuid.uuid4())
 
         # Create mock activity functions with proper signatures
         @activity.defn(name="get_wf_env_vars")
         async def mock_get_wf_env_vars(input: EnvLookupInput) -> EnvLookupOutput:
-            return EnvLookupOutput(
-                show_confirm=True,
-                multi_goal_mode=True
-            )
+            return EnvLookupOutput(show_confirm=True, multi_goal_mode=True)
 
         @activity.defn(name="agent_validatePrompt")
-        async def mock_agent_validatePrompt(validation_input: ValidationInput) -> ValidationResult:
-            return ValidationResult(
-                validationResult=True,
-                validationFailedReason={}
-            )
+        async def mock_agent_validatePrompt(
+            validation_input: ValidationInput,
+        ) -> ValidationResult:
+            return ValidationResult(validationResult=True, validationFailedReason={})
 
         @activity.defn(name="agent_toolPlanner")
         async def mock_agent_toolPlanner(input: ToolPromptInput) -> dict:
-            return {
-                "next": "confirm",
-                "tool": "TestTool",
-                "args": {"test_arg": "test_value"},
-                "response": "Ready to execute tool"
-            }
-
-        @activity.defn(name="TestTool")
-        async def mock_test_tool(args: dict) -> dict:
-            return {"result": "Test tool executed successfully"}
+            return {"next": "done", "response": "Test response from LLM"}
 
         async with Worker(
             client,
             task_queue=task_queue_name,
@@ -170,7 +92,6 @@ class TestAgentGoalWorkflow:
                 mock_get_wf_env_vars,
                 mock_agent_validatePrompt,
                 mock_agent_toolPlanner,
-                mock_test_tool
             ],
         ):
             handle = await client.start_workflow(
@@ -179,317 +100,64 @@ class TestAgentGoalWorkflow:
                 id=str(uuid.uuid4()),
                 task_queue=task_queue_name,
             )
 
-            # Send user prompt that will require confirmation
-            await handle.signal(AgentGoalWorkflow.user_prompt, "Execute the test tool")
-
-            # Query to check tool data is set
-            import asyncio
-            await asyncio.sleep(0.1)  # Give workflow time to process
+            # Send user prompt
+            await handle.signal(
+                AgentGoalWorkflow.user_prompt, "Hello, this is a test message"
+            )
 
-            tool_data = await handle.query(AgentGoalWorkflow.get_latest_tool_data)
-            if tool_data:
-                assert tool_data.get("tool") == "TestTool"
-                assert tool_data.get("next") == "confirm"
-
-            # Send confirmation and end chat
-            await handle.signal(AgentGoalWorkflow.confirm)
-            await handle.signal(AgentGoalWorkflow.end_chat)
-
+            # Wait for workflow to complete (it should end due to "done" next step)
             result = await handle.result()
             assert isinstance(result, str)
 
-    async def test_validation_failure(self, client: Client, sample_combined_input: CombinedInput):
-        """Test workflow handles validation failures correctly."""
-        task_queue_name = str(uuid.uuid4())
-
-        # Create mock activity functions with proper signatures
-        @activity.defn(name="get_wf_env_vars")
-        async def mock_get_wf_env_vars(input: EnvLookupInput) -> EnvLookupOutput:
-            return EnvLookupOutput(
-                show_confirm=True,
-                multi_goal_mode=True
-            )
-
-        @activity.defn(name="agent_validatePrompt")
-        async def mock_agent_validatePrompt(validation_input: ValidationInput) -> ValidationResult:
-            return ValidationResult(
-                validationResult=False,
-                validationFailedReason={
-                    "next": "question",
-                    "response": "Your request doesn't make sense in this context"
-                }
-            )
-
-        async with Worker(
-            client,
-            task_queue=task_queue_name,
-            workflows=[AgentGoalWorkflow],
-            activities=[
-                mock_get_wf_env_vars,
-                mock_agent_validatePrompt
-            ],
-        ):
-            handle = await client.start_workflow(
-                AgentGoalWorkflow.run,
-                sample_combined_input,
-                id=str(uuid.uuid4()),
-                task_queue=task_queue_name,
-            )
-
-            # Send invalid prompt
-            await handle.signal(AgentGoalWorkflow.user_prompt, "Invalid nonsensical prompt")
-
-            # Give workflow time to process the prompt
-            import asyncio
-            await asyncio.sleep(0.2)
-
-            # End workflow to check conversation
-            await handle.signal(AgentGoalWorkflow.end_chat)
-            result = await handle.result()
-
-            # Verify validation failure message was added
+            # Verify the conversation includes our message
             import json
 
             try:
                 conversation_history = json.loads(result.replace("'", '"'))
-            except:
+            except Exception:
                 # Fallback to eval if json fails
                 conversation_history = eval(result)
             messages = conversation_history["messages"]
 
-            # Should have validation failure response
-            agent_messages = [msg for msg in messages if msg["actor"] == "agent"]
-            assert len(agent_messages) > 0
-            assert any("doesn't make sense" in str(msg["response"]) for msg in agent_messages)
-
-    async def test_conversation_summary_initialization(self, client: Client, sample_agent_goal):
-        """Test workflow initializes with conversation summary."""
+            # Should have our user message and agent response
+            user_messages = [msg for msg in messages if msg["actor"] == "user"]
+            assert len(user_messages) > 0
+            assert any(
+                "Hello, this is a test message" in str(msg["response"])
+                for msg in user_messages
+            )
+
+    async def test_confirm_signal(
+        self, client: Client, sample_combined_input: CombinedInput
+    ):
+        """Test confirm signal handling for tool execution."""
         task_queue_name = str(uuid.uuid4())
 
-        # Create input with conversation summary
-        from collections import deque
-        tool_params = AgentGoalWorkflowParams(
-            conversation_summary="Previous conversation summary",
-            prompt_queue=deque()
-        )
-        combined_input = CombinedInput(
-            agent_goal=sample_agent_goal,
-            tool_params=tool_params
-        )
-
         # Create mock activity functions with proper signatures
         @activity.defn(name="get_wf_env_vars")
         async def mock_get_wf_env_vars(input: EnvLookupInput) -> EnvLookupOutput:
-            return EnvLookupOutput(
-                show_confirm=True,
-                multi_goal_mode=True
-            )
+            return EnvLookupOutput(show_confirm=True, multi_goal_mode=True)
 
-        async with Worker(
-            client,
-            task_queue=task_queue_name,
-            workflows=[AgentGoalWorkflow],
-            activities=[mock_get_wf_env_vars],
-        ):
-            handle = await client.start_workflow(
-                AgentGoalWorkflow.run,
-                combined_input,
-                id=str(uuid.uuid4()),
-                task_queue=task_queue_name,
-            )
-
-            # Give workflow time to initialize
-            import asyncio
-            await asyncio.sleep(0.1)
-
-            # Query conversation summary
-            summary = await handle.query(AgentGoalWorkflow.get_summary_from_history)
-            assert summary == "Previous conversation summary"
-
-            # Query conversation history - should include summary message
-            conversation_history = await handle.query(AgentGoalWorkflow.get_conversation_history)
-            messages = conversation_history["messages"]
-
-            # Should have conversation_summary message
-            summary_messages = [msg for msg in messages if msg["actor"] == "conversation_summary"]
-            assert len(summary_messages) == 1
-            assert summary_messages[0]["response"] == "Previous conversation summary"
-
-            # End workflow
-            await handle.signal(AgentGoalWorkflow.end_chat)
-            await handle.result()
-
-    async def test_workflow_queries(self, client: Client, sample_combined_input: CombinedInput):
-        """Test all workflow query methods."""
-        task_queue_name = str(uuid.uuid4())
-
-        # Create mock activity functions with proper signatures
-        @activity.defn(name="get_wf_env_vars")
-        async def mock_get_wf_env_vars(input: EnvLookupInput) -> EnvLookupOutput:
-            return EnvLookupOutput(
-                show_confirm=True,
-                multi_goal_mode=True
-            )
-
-        async with Worker(
-            client,
-            task_queue=task_queue_name,
-            workflows=[AgentGoalWorkflow],
-            activities=[mock_get_wf_env_vars],
-        ):
-            handle = await client.start_workflow(
-                AgentGoalWorkflow.run,
-                sample_combined_input,
-                id=str(uuid.uuid4()),
-                task_queue=task_queue_name,
-            )
-
-            # Give workflow time to initialize
-            import asyncio
-            await asyncio.sleep(0.1)
-
-            # Test get_conversation_history query
-            conversation_history = await handle.query(AgentGoalWorkflow.get_conversation_history)
-            assert isinstance(conversation_history, dict)
-            assert "messages" in conversation_history
-
-            # Test get_agent_goal query
-            agent_goal = await handle.query(AgentGoalWorkflow.get_agent_goal)
-            assert agent_goal.id == sample_combined_input.agent_goal.id
-
-            # Test get_summary_from_history query
-            summary = await handle.query(AgentGoalWorkflow.get_summary_from_history)
-            # Summary might be None if not set, so check for that
-            if sample_combined_input.tool_params.conversation_summary:
-                assert summary == sample_combined_input.tool_params.conversation_summary
-            else:
-                assert summary is None
-
-            # Test get_latest_tool_data query (should be None initially)
-            tool_data = await handle.query(AgentGoalWorkflow.get_latest_tool_data)
-            assert tool_data is None
-
-            # End workflow
-            await handle.signal(AgentGoalWorkflow.end_chat)
-            await handle.result()
-
-    async def test_enable_disable_debugging_confirm_signals(self, client: Client, sample_combined_input: CombinedInput):
-        """Test debugging confirm enable/disable signals."""
-        task_queue_name = str(uuid.uuid4())
-
-        # Create mock activity functions with proper signatures
-        @activity.defn(name="get_wf_env_vars")
-        async def mock_get_wf_env_vars(input: EnvLookupInput) -> EnvLookupOutput:
-            return EnvLookupOutput(
-                show_confirm=True,
-                multi_goal_mode=True
-            )
-
-        async with Worker(
-            client,
-            task_queue=task_queue_name,
-            workflows=[AgentGoalWorkflow],
-            activities=[mock_get_wf_env_vars],
-        ):
-            handle = await client.start_workflow(
-                AgentGoalWorkflow.run,
-                sample_combined_input,
-                id=str(uuid.uuid4()),
-                task_queue=task_queue_name,
-            )
-
-            # Test enable debugging confirm signal
-            await handle.signal(AgentGoalWorkflow.enable_debugging_confirm)
-
-            # Test disable debugging confirm signal
-            await handle.signal(AgentGoalWorkflow.disable_debugging_confirm)
-
-            # End workflow
-            await handle.signal(AgentGoalWorkflow.end_chat)
-            result = await handle.result()
-            assert isinstance(result, str)
-
-    async def test_workflow_with_empty_prompt_queue(self, client: Client, sample_agent_goal):
-        """Test workflow behavior with empty prompt queue."""
-        task_queue_name = str(uuid.uuid4())
-
-        # Create input with empty prompt queue
-        from collections import deque
-        tool_params = AgentGoalWorkflowParams(
-            conversation_summary=None,
-            prompt_queue=deque()
-        )
-        combined_input = CombinedInput(
-            agent_goal=sample_agent_goal,
-            tool_params=tool_params
-        )
-
-        # Create mock activity functions with proper signatures
-        @activity.defn(name="get_wf_env_vars")
-        async def mock_get_wf_env_vars(input: EnvLookupInput) -> EnvLookupOutput:
-            return EnvLookupOutput(
-                show_confirm=True,
-                multi_goal_mode=True
-            )
-
-        async with Worker(
-            client,
-            task_queue=task_queue_name,
-            workflows=[AgentGoalWorkflow],
-            activities=[mock_get_wf_env_vars],
-        ):
-            handle = await client.start_workflow(
-                AgentGoalWorkflow.run,
-                combined_input,
-                id=str(uuid.uuid4()),
-                task_queue=task_queue_name,
-            )
-
-            # Give workflow time to initialize
-            import asyncio
-            await asyncio.sleep(0.1)
-
-            # Query initial state
-            conversation_history = await handle.query(AgentGoalWorkflow.get_conversation_history)
-            assert isinstance(conversation_history, dict)
-            assert "messages" in conversation_history
-
-            # Should have no messages initially (empty prompt queue, no summary)
-            messages = conversation_history["messages"]
-            assert len(messages) == 0
-
-            # End workflow
-            await handle.signal(AgentGoalWorkflow.end_chat)
-            result = await handle.result()
-            assert isinstance(result, str)
-
-    async def test_multiple_user_prompts(self, client: Client, sample_combined_input: CombinedInput):
-        """Test workflow handling multiple user prompts in sequence."""
-        task_queue_name = str(uuid.uuid4())
-
-        # Create mock activity functions with proper signatures
-        @activity.defn(name="get_wf_env_vars")
-        async def mock_get_wf_env_vars(input: EnvLookupInput) -> EnvLookupOutput:
-            return EnvLookupOutput(
-                show_confirm=True,
-                multi_goal_mode=True
-            )
-
         @activity.defn(name="agent_validatePrompt")
-        async def mock_agent_validatePrompt(validation_input: ValidationInput) -> ValidationResult:
-            return ValidationResult(
-                validationResult=True,
-                validationFailedReason={}
-            )
+        async def mock_agent_validatePrompt(
+            validation_input: ValidationInput,
+        ) -> ValidationResult:
+            return ValidationResult(validationResult=True, validationFailedReason={})
 
         @activity.defn(name="agent_toolPlanner")
         async def mock_agent_toolPlanner(input: ToolPromptInput) -> dict:
-            # Keep workflow running for multiple prompts
             return {
-                "next": "question",
-                "response": f"Processed: {input.prompt}"
+                "next": "confirm",
+                "tool": "TestTool",
+                "args": {"test_arg": "test_value"},
+                "response": "Ready to execute tool",
             }
 
+        @activity.defn(name="TestTool")
+        async def mock_test_tool(args: dict) -> dict:
+            return {"result": "Test tool executed successfully"}
+
         async with Worker(
             client,
             task_queue=task_queue_name,
@@ -497,7 +165,8 @@ class TestAgentGoalWorkflow:
             activities=[
                 mock_get_wf_env_vars,
                 mock_agent_validatePrompt,
-                mock_agent_toolPlanner
+                mock_agent_toolPlanner,
+                mock_test_tool,
             ],
         ):
             handle = await client.start_workflow(
@@ -506,35 +175,369 @@ class TestAgentGoalWorkflow:
                 id=str(uuid.uuid4()),
                 task_queue=task_queue_name,
             )
 
-            # Send multiple prompts
-            await handle.signal(AgentGoalWorkflow.user_prompt, "First message")
+            # Send user prompt that will require confirmation
+            await handle.signal(AgentGoalWorkflow.user_prompt, "Execute the test tool")
 
+            # Query to check tool data is set
             import asyncio
+
+            await asyncio.sleep(0.1)  # Give workflow time to process
+
+            tool_data = await handle.query(AgentGoalWorkflow.get_latest_tool_data)
+            if tool_data:
+                assert tool_data.get("tool") == "TestTool"
+                assert tool_data.get("next") == "confirm"
+
+            # Send confirmation and end chat
+            await handle.signal(AgentGoalWorkflow.confirm)
+            await handle.signal(AgentGoalWorkflow.end_chat)
+
+            result = await handle.result()
+            assert isinstance(result, str)
+
+    async def test_validation_failure(
+        self, client: Client, sample_combined_input: CombinedInput
+    ):
+        """Test workflow handles validation failures correctly."""
+        task_queue_name = str(uuid.uuid4())
+
+        # Create mock activity functions with proper signatures
+        @activity.defn(name="get_wf_env_vars")
+        async def mock_get_wf_env_vars(input: EnvLookupInput) -> EnvLookupOutput:
+            return EnvLookupOutput(show_confirm=True, multi_goal_mode=True)
+
+        @activity.defn(name="agent_validatePrompt")
+        async def mock_agent_validatePrompt(
+            validation_input: ValidationInput,
+        ) -> ValidationResult:
+            return ValidationResult(
+                validationResult=False,
+                validationFailedReason={
+                    "next": "question",
+                    "response": "Your request doesn't make sense in this context",
+                },
+            )
+
+        async with Worker(
+            client,
+            task_queue=task_queue_name,
+            workflows=[AgentGoalWorkflow],
+            activities=[mock_get_wf_env_vars, mock_agent_validatePrompt],
+        ):
+            handle = await client.start_workflow(
+                AgentGoalWorkflow.run,
+                sample_combined_input,
+                id=str(uuid.uuid4()),
+                task_queue=task_queue_name,
+            )
+
+            # Send invalid prompt
+            await handle.signal(
+                AgentGoalWorkflow.user_prompt, "Invalid nonsensical prompt"
+            )
+
+            # Give workflow time to process the prompt
+            import asyncio
+
+            await asyncio.sleep(0.2)
+
+            # End workflow to check conversation
+            await handle.signal(AgentGoalWorkflow.end_chat)
+            result = await handle.result()
+
+            # Verify validation failure message was added
+            import json
+
+            try:
+                conversation_history = json.loads(result.replace("'", '"'))
+            except Exception:
+                # Fallback to eval if json fails
+                conversation_history = eval(result)
+            messages = conversation_history["messages"]
+
+            # Should have validation failure response
+            agent_messages = [msg for msg in messages if msg["actor"] == "agent"]
+            assert len(agent_messages) > 0
+            assert any(
+                "doesn't make sense" in str(msg["response"]) for msg in agent_messages
+            )
+
+    async def test_conversation_summary_initialization(
+        self, client: Client, sample_agent_goal
+    ):
+        """Test workflow initializes with conversation summary."""
+        task_queue_name = str(uuid.uuid4())
+
+        # Create input with conversation summary
+        from collections import deque
+
+        tool_params = AgentGoalWorkflowParams(
+            conversation_summary="Previous conversation summary", prompt_queue=deque()
+        )
+        combined_input = CombinedInput(
+            agent_goal=sample_agent_goal, tool_params=tool_params
+        )
+
+        # Create mock activity functions with proper signatures
+        @activity.defn(name="get_wf_env_vars")
+        async def mock_get_wf_env_vars(input: EnvLookupInput) -> EnvLookupOutput:
+            return EnvLookupOutput(show_confirm=True, multi_goal_mode=True)
+
+        async with Worker(
+            client,
+            task_queue=task_queue_name,
+            workflows=[AgentGoalWorkflow],
+            activities=[mock_get_wf_env_vars],
+        ):
+            handle = await client.start_workflow(
+                AgentGoalWorkflow.run,
+                combined_input,
+                id=str(uuid.uuid4()),
+                task_queue=task_queue_name,
+            )
+
+            # Give workflow time to initialize
+            import asyncio
+
             await asyncio.sleep(0.1)
 
-            await handle.signal(AgentGoalWorkflow.user_prompt, "Second message")
+            # Query conversation summary
+            summary = await handle.query(AgentGoalWorkflow.get_summary_from_history)
+            assert summary == "Previous conversation summary"
+
+            # Query conversation history - should include summary message
+            conversation_history = await handle.query(
+                AgentGoalWorkflow.get_conversation_history
+            )
+            messages = conversation_history["messages"]
+
+            # Should have conversation_summary message
+            summary_messages = [
+                msg for msg in messages if msg["actor"] == "conversation_summary"
+            ]
+            assert len(summary_messages) == 1
+            assert summary_messages[0]["response"] == "Previous conversation summary"
+
+            # End workflow
+            await handle.signal(AgentGoalWorkflow.end_chat)
+            await handle.result()
+
+    async def test_workflow_queries(
+        self, client: Client, sample_combined_input: CombinedInput
+    ):
+        """Test all workflow query methods."""
+        task_queue_name = str(uuid.uuid4())
+
+        # Create mock activity functions with proper signatures
+        @activity.defn(name="get_wf_env_vars")
+        async def mock_get_wf_env_vars(input: EnvLookupInput) -> EnvLookupOutput:
+            return EnvLookupOutput(show_confirm=True, multi_goal_mode=True)
+
+        async with Worker(
+            client,
+            task_queue=task_queue_name,
+            workflows=[AgentGoalWorkflow],
+            activities=[mock_get_wf_env_vars],
+        ):
+            handle = await client.start_workflow(
+                AgentGoalWorkflow.run,
+                sample_combined_input,
+                id=str(uuid.uuid4()),
+                task_queue=task_queue_name,
+            )
+
+            # Give workflow time to initialize
+            import asyncio
+
             await asyncio.sleep(0.1)
 
-            await handle.signal(AgentGoalWorkflow.user_prompt, "Third message")
-            await asyncio.sleep(0.1)
+            # Test get_conversation_history query
+            conversation_history = await handle.query(
+                AgentGoalWorkflow.get_conversation_history
+            )
+            assert isinstance(conversation_history, dict)
+            assert "messages" in conversation_history
+
+            # Test get_agent_goal query
+            agent_goal = await handle.query(AgentGoalWorkflow.get_agent_goal)
+            assert agent_goal.id == sample_combined_input.agent_goal.id
+
+            # Test get_summary_from_history query
|
summary = await handle.query(AgentGoalWorkflow.get_summary_from_history)
|
||||||
|
# Summary might be None if not set, so check for that
|
||||||
|
if sample_combined_input.tool_params.conversation_summary:
|
||||||
|
assert summary == sample_combined_input.tool_params.conversation_summary
|
||||||
|
else:
|
||||||
|
assert summary is None
|
||||||
|
|
||||||
|
# Test get_latest_tool_data query (should be None initially)
|
||||||
|
tool_data = await handle.query(AgentGoalWorkflow.get_latest_tool_data)
|
||||||
|
assert tool_data is None
|
||||||
|
|
||||||
|
# End workflow
|
||||||
|
await handle.signal(AgentGoalWorkflow.end_chat)
|
||||||
|
await handle.result()
|
||||||
|
|
||||||
|
async def test_enable_disable_debugging_confirm_signals(
|
||||||
|
self, client: Client, sample_combined_input: CombinedInput
|
||||||
|
):
|
||||||
|
"""Test debugging confirm enable/disable signals."""
|
||||||
|
task_queue_name = str(uuid.uuid4())
|
||||||
|
|
||||||
|
# Create mock activity functions with proper signatures
|
||||||
|
@activity.defn(name="get_wf_env_vars")
|
||||||
|
async def mock_get_wf_env_vars(input: EnvLookupInput) -> EnvLookupOutput:
|
||||||
|
return EnvLookupOutput(show_confirm=True, multi_goal_mode=True)
|
||||||
|
|
||||||
|
async with Worker(
|
||||||
|
client,
|
||||||
|
task_queue=task_queue_name,
|
||||||
|
workflows=[AgentGoalWorkflow],
|
||||||
|
activities=[mock_get_wf_env_vars],
|
||||||
|
):
|
||||||
|
handle = await client.start_workflow(
|
||||||
|
AgentGoalWorkflow.run,
|
||||||
|
sample_combined_input,
|
||||||
|
id=str(uuid.uuid4()),
|
||||||
|
task_queue=task_queue_name,
|
||||||
|
)
|
||||||
|
|
||||||
|
# Test enable debugging confirm signal
|
||||||
|
await handle.signal(AgentGoalWorkflow.enable_debugging_confirm)
|
||||||
|
|
||||||
|
# Test disable debugging confirm signal
|
||||||
|
await handle.signal(AgentGoalWorkflow.disable_debugging_confirm)
|
||||||
|
|
||||||
# End workflow
|
# End workflow
|
||||||
await handle.signal(AgentGoalWorkflow.end_chat)
|
await handle.signal(AgentGoalWorkflow.end_chat)
|
||||||
result = await handle.result()
|
result = await handle.result()
|
||||||
assert isinstance(result, str)
|
assert isinstance(result, str)
|
||||||
|
|
||||||
|
async def test_workflow_with_empty_prompt_queue(
|
||||||
|
self, client: Client, sample_agent_goal
|
||||||
|
):
|
||||||
|
"""Test workflow behavior with empty prompt queue."""
|
||||||
|
task_queue_name = str(uuid.uuid4())
|
||||||
|
|
||||||
|
# Create input with empty prompt queue
|
||||||
|
from collections import deque
|
||||||
|
|
||||||
|
tool_params = AgentGoalWorkflowParams(
|
||||||
|
conversation_summary=None, prompt_queue=deque()
|
||||||
|
)
|
||||||
|
combined_input = CombinedInput(
|
||||||
|
agent_goal=sample_agent_goal, tool_params=tool_params
|
||||||
|
)
|
||||||
|
|
||||||
|
# Create mock activity functions with proper signatures
|
||||||
|
@activity.defn(name="get_wf_env_vars")
|
||||||
|
async def mock_get_wf_env_vars(input: EnvLookupInput) -> EnvLookupOutput:
|
||||||
|
return EnvLookupOutput(show_confirm=True, multi_goal_mode=True)
|
||||||
|
|
||||||
|
async with Worker(
|
||||||
|
client,
|
||||||
|
task_queue=task_queue_name,
|
||||||
|
workflows=[AgentGoalWorkflow],
|
||||||
|
activities=[mock_get_wf_env_vars],
|
||||||
|
):
|
||||||
|
handle = await client.start_workflow(
|
||||||
|
AgentGoalWorkflow.run,
|
||||||
|
combined_input,
|
||||||
|
id=str(uuid.uuid4()),
|
||||||
|
task_queue=task_queue_name,
|
||||||
|
)
|
||||||
|
|
||||||
|
# Give workflow time to initialize
|
||||||
|
import asyncio
|
||||||
|
|
||||||
|
await asyncio.sleep(0.1)
|
||||||
|
|
||||||
|
# Query initial state
|
||||||
|
conversation_history = await handle.query(
|
||||||
|
AgentGoalWorkflow.get_conversation_history
|
||||||
|
)
|
||||||
|
assert isinstance(conversation_history, dict)
|
||||||
|
assert "messages" in conversation_history
|
||||||
|
|
||||||
|
# Should have no messages initially (empty prompt queue, no summary)
|
||||||
|
messages = conversation_history["messages"]
|
||||||
|
assert len(messages) == 0
|
||||||
|
|
||||||
|
# End workflow
|
||||||
|
await handle.signal(AgentGoalWorkflow.end_chat)
|
||||||
|
result = await handle.result()
|
||||||
|
assert isinstance(result, str)
|
||||||
|
|
||||||
|
async def test_multiple_user_prompts(
|
||||||
|
self, client: Client, sample_combined_input: CombinedInput
|
||||||
|
):
|
||||||
|
"""Test workflow handling multiple user prompts in sequence."""
|
||||||
|
task_queue_name = str(uuid.uuid4())
|
||||||
|
|
||||||
|
# Create mock activity functions with proper signatures
|
||||||
|
@activity.defn(name="get_wf_env_vars")
|
||||||
|
async def mock_get_wf_env_vars(input: EnvLookupInput) -> EnvLookupOutput:
|
||||||
|
return EnvLookupOutput(show_confirm=True, multi_goal_mode=True)
|
||||||
|
|
||||||
|
@activity.defn(name="agent_validatePrompt")
|
||||||
|
async def mock_agent_validatePrompt(
|
||||||
|
validation_input: ValidationInput,
|
||||||
|
) -> ValidationResult:
|
||||||
|
return ValidationResult(validationResult=True, validationFailedReason={})
|
||||||
|
|
||||||
|
@activity.defn(name="agent_toolPlanner")
|
||||||
|
async def mock_agent_toolPlanner(input: ToolPromptInput) -> dict:
|
||||||
|
# Keep workflow running for multiple prompts
|
||||||
|
return {"next": "question", "response": f"Processed: {input.prompt}"}
|
||||||
|
|
||||||
|
async with Worker(
|
||||||
|
client,
|
||||||
|
task_queue=task_queue_name,
|
||||||
|
workflows=[AgentGoalWorkflow],
|
||||||
|
activities=[
|
||||||
|
mock_get_wf_env_vars,
|
||||||
|
mock_agent_validatePrompt,
|
||||||
|
mock_agent_toolPlanner,
|
||||||
|
],
|
||||||
|
):
|
||||||
|
handle = await client.start_workflow(
|
||||||
|
AgentGoalWorkflow.run,
|
||||||
|
sample_combined_input,
|
||||||
|
id=str(uuid.uuid4()),
|
||||||
|
task_queue=task_queue_name,
|
||||||
|
)
|
||||||
|
|
||||||
|
# Send multiple prompts
|
||||||
|
await handle.signal(AgentGoalWorkflow.user_prompt, "First message")
|
||||||
|
import asyncio
|
||||||
|
|
||||||
|
await asyncio.sleep(0.1)
|
||||||
|
|
||||||
|
await handle.signal(AgentGoalWorkflow.user_prompt, "Second message")
|
||||||
|
await asyncio.sleep(0.1)
|
||||||
|
|
||||||
|
await handle.signal(AgentGoalWorkflow.user_prompt, "Third message")
|
||||||
|
await asyncio.sleep(0.1)
|
||||||
|
|
||||||
|
# End workflow
|
||||||
|
await handle.signal(AgentGoalWorkflow.end_chat)
|
||||||
|
result = await handle.result()
|
||||||
|
assert isinstance(result, str)
|
||||||
|
|
||||||
# Parse result and verify multiple messages
|
# Parse result and verify multiple messages
|
||||||
import json
|
import json
|
||||||
|
|
||||||
try:
|
try:
|
||||||
conversation_history = json.loads(result.replace("'", '"'))
|
conversation_history = json.loads(result.replace("'", '"'))
|
||||||
except:
|
except Exception:
|
||||||
conversation_history = eval(result)
|
conversation_history = eval(result)
|
||||||
messages = conversation_history["messages"]
|
messages = conversation_history["messages"]
|
||||||
|
|
||||||
# Should have at least one user message (timing dependent)
|
# Should have at least one user message (timing dependent)
|
||||||
user_messages = [msg for msg in messages if msg["actor"] == "user"]
|
user_messages = [msg for msg in messages if msg["actor"] == "user"]
|
||||||
assert len(user_messages) >= 1
|
assert len(user_messages) >= 1
|
||||||
|
|
||||||
# Verify at least the first message was processed
|
# Verify at least the first message was processed
|
||||||
message_texts = [str(msg["response"]) for msg in user_messages]
|
message_texts = [str(msg["response"]) for msg in user_messages]
|
||||||
assert any("First message" in text for text in message_texts)
|
assert any("First message" in text for text in message_texts)
|
||||||
|
@@ -1,19 +1,18 @@
-import os
-import uuid
 import json
-from unittest.mock import patch, MagicMock, AsyncMock
+import os
+from unittest.mock import AsyncMock, MagicMock, patch
 
 import pytest
 from temporalio.client import Client
-from temporalio.worker import Worker
 from temporalio.testing import ActivityEnvironment
 
 from activities.tool_activities import ToolActivities, dynamic_tool_activity
 from models.data_types import (
+    EnvLookupInput,
+    EnvLookupOutput,
+    ToolPromptInput,
     ValidationInput,
     ValidationResult,
-    ToolPromptInput,
-    EnvLookupInput,
-    EnvLookupOutput
 )
@@ -25,63 +24,66 @@ class TestToolActivities:
         self.tool_activities = ToolActivities()
 
     @pytest.mark.asyncio
-    async def test_agent_validatePrompt_valid_prompt(self, sample_agent_goal, sample_conversation_history):
+    async def test_agent_validatePrompt_valid_prompt(
+        self, sample_agent_goal, sample_conversation_history
+    ):
         """Test agent_validatePrompt with a valid prompt."""
         validation_input = ValidationInput(
             prompt="I need help with the test tool",
             conversation_history=sample_conversation_history,
-            agent_goal=sample_agent_goal
+            agent_goal=sample_agent_goal,
         )
 
         # Mock the agent_toolPlanner to return a valid response
-        mock_response = {
-            "validationResult": True,
-            "validationFailedReason": {}
-        }
+        mock_response = {"validationResult": True, "validationFailedReason": {}}
 
-        with patch.object(self.tool_activities, 'agent_toolPlanner', new_callable=AsyncMock) as mock_planner:
+        with patch.object(
+            self.tool_activities, "agent_toolPlanner", new_callable=AsyncMock
+        ) as mock_planner:
             mock_planner.return_value = mock_response
 
             activity_env = ActivityEnvironment()
             result = await activity_env.run(
-                self.tool_activities.agent_validatePrompt,
-                validation_input
+                self.tool_activities.agent_validatePrompt, validation_input
             )
 
             assert isinstance(result, ValidationResult)
             assert result.validationResult is True
             assert result.validationFailedReason == {}
 
             # Verify the mock was called with correct parameters
             mock_planner.assert_called_once()
 
     @pytest.mark.asyncio
-    async def test_agent_validatePrompt_invalid_prompt(self, sample_agent_goal, sample_conversation_history):
+    async def test_agent_validatePrompt_invalid_prompt(
+        self, sample_agent_goal, sample_conversation_history
+    ):
         """Test agent_validatePrompt with an invalid prompt."""
         validation_input = ValidationInput(
             prompt="asdfghjkl nonsense",
             conversation_history=sample_conversation_history,
-            agent_goal=sample_agent_goal
+            agent_goal=sample_agent_goal,
         )
 
         # Mock the agent_toolPlanner to return an invalid response
         mock_response = {
             "validationResult": False,
             "validationFailedReason": {
                 "next": "question",
-                "response": "Your request doesn't make sense in this context"
-            }
+                "response": "Your request doesn't make sense in this context",
+            },
         }
 
-        with patch.object(self.tool_activities, 'agent_toolPlanner', new_callable=AsyncMock) as mock_planner:
+        with patch.object(
+            self.tool_activities, "agent_toolPlanner", new_callable=AsyncMock
+        ) as mock_planner:
             mock_planner.return_value = mock_response
 
             activity_env = ActivityEnvironment()
             result = await activity_env.run(
-                self.tool_activities.agent_validatePrompt,
-                validation_input
+                self.tool_activities.agent_validatePrompt, validation_input
             )
 
             assert isinstance(result, ValidationResult)
             assert result.validationResult is False
             assert "doesn't make sense" in str(result.validationFailedReason)
@@ -90,29 +92,31 @@ class TestToolActivities:
     async def test_agent_toolPlanner_success(self):
         """Test agent_toolPlanner with successful LLM response."""
         prompt_input = ToolPromptInput(
-            prompt="Test prompt",
-            context_instructions="Test context instructions"
+            prompt="Test prompt", context_instructions="Test context instructions"
         )
 
         # Mock the completion function
         mock_response = MagicMock()
         mock_response.choices = [MagicMock()]
-        mock_response.choices[0].message.content = '{"next": "confirm", "tool": "TestTool", "response": "Test response"}'
+        mock_response.choices[
+            0
+        ].message.content = (
+            '{"next": "confirm", "tool": "TestTool", "response": "Test response"}'
+        )
 
-        with patch('activities.tool_activities.completion') as mock_completion:
+        with patch("activities.tool_activities.completion") as mock_completion:
             mock_completion.return_value = mock_response
 
             activity_env = ActivityEnvironment()
             result = await activity_env.run(
-                self.tool_activities.agent_toolPlanner,
-                prompt_input
+                self.tool_activities.agent_toolPlanner, prompt_input
             )
 
             assert isinstance(result, dict)
             assert result["next"] == "confirm"
             assert result["tool"] == "TestTool"
             assert result["response"] == "Test response"
 
             # Verify completion was called with correct parameters
             mock_completion.assert_called_once()
             call_args = mock_completion.call_args[1]
@@ -125,27 +129,25 @@ class TestToolActivities:
     async def test_agent_toolPlanner_with_custom_base_url(self):
         """Test agent_toolPlanner with custom base URL configuration."""
         # Set up tool activities with custom base URL
-        with patch.dict(os.environ, {'LLM_BASE_URL': 'https://custom.endpoint.com'}):
+        with patch.dict(os.environ, {"LLM_BASE_URL": "https://custom.endpoint.com"}):
             tool_activities = ToolActivities()
 
         prompt_input = ToolPromptInput(
-            prompt="Test prompt",
-            context_instructions="Test context instructions"
+            prompt="Test prompt", context_instructions="Test context instructions"
         )
 
         mock_response = MagicMock()
         mock_response.choices = [MagicMock()]
-        mock_response.choices[0].message.content = '{"next": "done", "response": "Test"}'
+        mock_response.choices[
+            0
+        ].message.content = '{"next": "done", "response": "Test"}'
 
-        with patch('activities.tool_activities.completion') as mock_completion:
+        with patch("activities.tool_activities.completion") as mock_completion:
             mock_completion.return_value = mock_response
 
             activity_env = ActivityEnvironment()
-            await activity_env.run(
-                tool_activities.agent_toolPlanner,
-                prompt_input
-            )
+            await activity_env.run(tool_activities.agent_toolPlanner, prompt_input)
 
             # Verify base_url was included in the call
             call_args = mock_completion.call_args[1]
             assert "base_url" in call_args
@@ -155,41 +157,37 @@ class TestToolActivities:
     async def test_agent_toolPlanner_json_parsing_error(self):
         """Test agent_toolPlanner handles JSON parsing errors."""
         prompt_input = ToolPromptInput(
-            prompt="Test prompt",
-            context_instructions="Test context instructions"
+            prompt="Test prompt", context_instructions="Test context instructions"
        )
 
         # Mock the completion function to return invalid JSON
         mock_response = MagicMock()
         mock_response.choices = [MagicMock()]
-        mock_response.choices[0].message.content = 'Invalid JSON response'
+        mock_response.choices[0].message.content = "Invalid JSON response"
 
-        with patch('activities.tool_activities.completion') as mock_completion:
+        with patch("activities.tool_activities.completion") as mock_completion:
             mock_completion.return_value = mock_response
 
             activity_env = ActivityEnvironment()
             with pytest.raises(Exception):  # Should raise JSON parsing error
                 await activity_env.run(
-                    self.tool_activities.agent_toolPlanner,
-                    prompt_input
+                    self.tool_activities.agent_toolPlanner, prompt_input
                 )
 
     @pytest.mark.asyncio
     async def test_get_wf_env_vars_default_values(self):
         """Test get_wf_env_vars with default values."""
         env_input = EnvLookupInput(
-            show_confirm_env_var_name="SHOW_CONFIRM",
-            show_confirm_default=True
+            show_confirm_env_var_name="SHOW_CONFIRM", show_confirm_default=True
         )
 
         # Clear environment variables
         with patch.dict(os.environ, {}, clear=True):
             activity_env = ActivityEnvironment()
             result = await activity_env.run(
-                self.tool_activities.get_wf_env_vars,
-                env_input
+                self.tool_activities.get_wf_env_vars, env_input
             )
 
             assert isinstance(result, EnvLookupOutput)
             assert result.show_confirm is True  # default value
             assert result.multi_goal_mode is True  # default value
@@ -198,21 +196,18 @@ class TestToolActivities:
     async def test_get_wf_env_vars_custom_values(self):
         """Test get_wf_env_vars with custom environment values."""
         env_input = EnvLookupInput(
-            show_confirm_env_var_name="SHOW_CONFIRM",
-            show_confirm_default=True
+            show_confirm_env_var_name="SHOW_CONFIRM", show_confirm_default=True
         )
 
         # Set environment variables
-        with patch.dict(os.environ, {
-            'SHOW_CONFIRM': 'false',
-            'AGENT_GOAL': 'specific_goal'
-        }):
+        with patch.dict(
+            os.environ, {"SHOW_CONFIRM": "false", "AGENT_GOAL": "specific_goal"}
+        ):
             activity_env = ActivityEnvironment()
             result = await activity_env.run(
-                self.tool_activities.get_wf_env_vars,
-                env_input
+                self.tool_activities.get_wf_env_vars, env_input
             )
 
             assert isinstance(result, EnvLookupOutput)
             assert result.show_confirm is False  # from env var
             assert result.multi_goal_mode is False  # from env var
@@ -220,20 +215,22 @@ class TestToolActivities:
     def test_sanitize_json_response(self):
         """Test JSON response sanitization."""
         # Test with markdown code blocks
-        response_with_markdown = "```json\n{\"test\": \"value\"}\n```"
+        response_with_markdown = '```json\n{"test": "value"}\n```'
         sanitized = self.tool_activities.sanitize_json_response(response_with_markdown)
         assert sanitized == '{"test": "value"}'
 
         # Test with extra whitespace
-        response_with_whitespace = " \n{\"test\": \"value\"} \n"
-        sanitized = self.tool_activities.sanitize_json_response(response_with_whitespace)
+        response_with_whitespace = ' \n{"test": "value"} \n'
+        sanitized = self.tool_activities.sanitize_json_response(
+            response_with_whitespace
+        )
         assert sanitized == '{"test": "value"}'
 
     def test_parse_json_response_success(self):
         """Test successful JSON parsing."""
         json_string = '{"next": "confirm", "tool": "TestTool"}'
         result = self.tool_activities.parse_json_response(json_string)
 
         assert isinstance(result, dict)
         assert result["next"] == "confirm"
         assert result["tool"] == "TestTool"
@@ -241,7 +238,7 @@ class TestToolActivities:
     def test_parse_json_response_failure(self):
         """Test JSON parsing with invalid JSON."""
         invalid_json = "Not valid JSON"
 
         with pytest.raises(Exception):  # Should raise JSON parsing error
             self.tool_activities.parse_json_response(invalid_json)
 
@@ -255,26 +252,22 @@ class TestDynamicToolActivity:
         # Mock the activity info and payload converter
         mock_info = MagicMock()
         mock_info.activity_type = "TestTool"
 
         mock_payload_converter = MagicMock()
         mock_payload = MagicMock()
         mock_payload.payload = b'{"test_arg": "test_value"}'
         mock_payload_converter.from_payload.return_value = {"test_arg": "test_value"}
 
         # Mock the handler function
         def mock_handler(args):
             return {"result": f"Handled {args['test_arg']}"}
 
-        with patch('temporalio.activity.info', return_value=mock_info), \
-             patch('temporalio.activity.payload_converter', return_value=mock_payload_converter), \
-             patch('tools.get_handler', return_value=mock_handler):
+        with patch("temporalio.activity.info", return_value=mock_info), patch(
+            "temporalio.activity.payload_converter", return_value=mock_payload_converter
+        ), patch("tools.get_handler", return_value=mock_handler):
 
             activity_env = ActivityEnvironment()
-            result = await activity_env.run(
-                dynamic_tool_activity,
-                [mock_payload]
-            )
+            result = await activity_env.run(dynamic_tool_activity, [mock_payload])
 
             assert isinstance(result, dict)
             assert result["result"] == "Handled test_value"
 
@@ -284,26 +277,22 @@ class TestDynamicToolActivity:
         # Mock the activity info and payload converter
         mock_info = MagicMock()
         mock_info.activity_type = "AsyncTestTool"
 
         mock_payload_converter = MagicMock()
         mock_payload = MagicMock()
         mock_payload.payload = b'{"test_arg": "async_test"}'
         mock_payload_converter.from_payload.return_value = {"test_arg": "async_test"}
 
         # Mock the async handler function
         async def mock_async_handler(args):
             return {"async_result": f"Async handled {args['test_arg']}"}
 
-        with patch('temporalio.activity.info', return_value=mock_info), \
-             patch('temporalio.activity.payload_converter', return_value=mock_payload_converter), \
-             patch('tools.get_handler', return_value=mock_async_handler):
+        with patch("temporalio.activity.info", return_value=mock_info), patch(
+            "temporalio.activity.payload_converter", return_value=mock_payload_converter
+        ), patch("tools.get_handler", return_value=mock_async_handler):
 
             activity_env = ActivityEnvironment()
-            result = await activity_env.run(
-                dynamic_tool_activity,
-                [mock_payload]
-            )
+            result = await activity_env.run(dynamic_tool_activity, [mock_payload])
 
             assert isinstance(result, dict)
             assert result["async_result"] == "Async handled async_test"
 
@@ -314,21 +303,17 @@ class TestToolActivitiesIntegration:
     @pytest.mark.asyncio
     async def test_activities_in_worker(self, client: Client):
         """Test activities can be registered and executed in a worker."""
-        task_queue_name = str(uuid.uuid4())
+        # task_queue_name = str(uuid.uuid4())
         tool_activities = ToolActivities()
 
         # Test get_wf_env_vars activity using ActivityEnvironment
         env_input = EnvLookupInput(
-            show_confirm_env_var_name="TEST_CONFIRM",
-            show_confirm_default=False
+            show_confirm_env_var_name="TEST_CONFIRM", show_confirm_default=False
         )
 
         activity_env = ActivityEnvironment()
-        result = await activity_env.run(
-            tool_activities.get_wf_env_vars,
-            env_input
-        )
+        result = await activity_env.run(tool_activities.get_wf_env_vars, env_input)
 
         assert isinstance(result, EnvLookupOutput)
         assert isinstance(result.show_confirm, bool)
         assert isinstance(result.multi_goal_mode, bool)
@@ -336,36 +321,36 @@ class TestToolActivitiesIntegration:
 
 class TestEdgeCases:
     """Test edge cases and error handling."""
 
     def setup_method(self):
         """Set up test environment for each test."""
         self.tool_activities = ToolActivities()
 
     @pytest.mark.asyncio
-    async def test_agent_validatePrompt_with_empty_conversation_history(self, sample_agent_goal):
+    async def test_agent_validatePrompt_with_empty_conversation_history(
+        self, sample_agent_goal
+    ):
         """Test validation with empty conversation history."""
         validation_input = ValidationInput(
             prompt="Test prompt",
             conversation_history={"messages": []},
-            agent_goal=sample_agent_goal
+            agent_goal=sample_agent_goal,
        )
 
-        mock_response = {
-            "validationResult": True,
+        mock_response = {"validationResult": True, "validationFailedReason": {}}
|
|
||||||
"validationFailedReason": {}
|
with patch.object(
|
||||||
}
|
self.tool_activities, "agent_toolPlanner", new_callable=AsyncMock
|
||||||
|
) as mock_planner:
|
||||||
with patch.object(self.tool_activities, 'agent_toolPlanner', new_callable=AsyncMock) as mock_planner:
|
|
||||||
mock_planner.return_value = mock_response
|
mock_planner.return_value = mock_response
|
||||||
|
|
||||||
activity_env = ActivityEnvironment()
|
activity_env = ActivityEnvironment()
|
||||||
result = await activity_env.run(
|
result = await activity_env.run(
|
||||||
self.tool_activities.agent_validatePrompt,
|
self.tool_activities.agent_validatePrompt, validation_input
|
||||||
validation_input
|
|
||||||
)
|
)
|
||||||
|
|
||||||
assert isinstance(result, ValidationResult)
|
assert isinstance(result, ValidationResult)
|
||||||
assert result.validationResult == True
|
assert result.validationResult
|
||||||
assert result.validationFailedReason == {}
|
assert result.validationFailedReason == {}
|
||||||
|
|
||||||
@pytest.mark.asyncio
|
@pytest.mark.asyncio
|
||||||
@@ -373,22 +358,22 @@ class TestEdgeCases:
|
|||||||
"""Test toolPlanner with very long prompt."""
|
"""Test toolPlanner with very long prompt."""
|
||||||
long_prompt = "This is a very long prompt " * 100
|
long_prompt = "This is a very long prompt " * 100
|
||||||
tool_prompt_input = ToolPromptInput(
|
tool_prompt_input = ToolPromptInput(
|
||||||
prompt=long_prompt,
|
prompt=long_prompt, context_instructions="Test context instructions"
|
||||||
context_instructions="Test context instructions"
|
|
||||||
)
|
)
|
||||||
|
|
||||||
# Mock the completion response
|
# Mock the completion response
|
||||||
mock_response = MagicMock()
|
mock_response = MagicMock()
|
||||||
mock_response.choices = [MagicMock()]
|
mock_response.choices = [MagicMock()]
|
||||||
mock_response.choices[0].message.content = '{"next": "done", "response": "Processed long prompt"}'
|
mock_response.choices[
|
||||||
|
0
|
||||||
with patch('activities.tool_activities.completion', return_value=mock_response):
|
].message.content = '{"next": "done", "response": "Processed long prompt"}'
|
||||||
|
|
||||||
|
with patch("activities.tool_activities.completion", return_value=mock_response):
|
||||||
activity_env = ActivityEnvironment()
|
activity_env = ActivityEnvironment()
|
||||||
result = await activity_env.run(
|
result = await activity_env.run(
|
||||||
self.tool_activities.agent_toolPlanner,
|
self.tool_activities.agent_toolPlanner, tool_prompt_input
|
||||||
tool_prompt_input
|
|
||||||
)
|
)
|
||||||
|
|
||||||
assert isinstance(result, dict)
|
assert isinstance(result, dict)
|
||||||
assert result["next"] == "done"
|
assert result["next"] == "done"
|
||||||
assert "Processed long prompt" in result["response"]
|
assert "Processed long prompt" in result["response"]
|
||||||
@@ -397,15 +382,15 @@ class TestEdgeCases:
|
|||||||
async def test_sanitize_json_with_various_formats(self):
|
async def test_sanitize_json_with_various_formats(self):
|
||||||
"""Test JSON sanitization with various input formats."""
|
"""Test JSON sanitization with various input formats."""
|
||||||
# Test markdown code blocks
|
# Test markdown code blocks
|
||||||
markdown_json = "```json\n{\"test\": \"value\"}\n```"
|
markdown_json = '```json\n{"test": "value"}\n```'
|
||||||
result = self.tool_activities.sanitize_json_response(markdown_json)
|
result = self.tool_activities.sanitize_json_response(markdown_json)
|
||||||
assert result == '{"test": "value"}'
|
assert result == '{"test": "value"}'
|
||||||
|
|
||||||
# Test with extra whitespace
|
# Test with extra whitespace
|
||||||
whitespace_json = " \n {\"test\": \"value\"} \n "
|
whitespace_json = ' \n {"test": "value"} \n '
|
||||||
result = self.tool_activities.sanitize_json_response(whitespace_json)
|
result = self.tool_activities.sanitize_json_response(whitespace_json)
|
||||||
assert result == '{"test": "value"}'
|
assert result == '{"test": "value"}'
|
||||||
|
|
||||||
# Test already clean JSON
|
# Test already clean JSON
|
||||||
clean_json = '{"test": "value"}'
|
clean_json = '{"test": "value"}'
|
||||||
result = self.tool_activities.sanitize_json_response(clean_json)
|
result = self.tool_activities.sanitize_json_response(clean_json)
|
||||||
@@ -423,44 +408,38 @@ class TestEdgeCases:
|
|||||||
# Test with "true" string
|
# Test with "true" string
|
||||||
with patch.dict(os.environ, {"TEST_CONFIRM": "true"}):
|
with patch.dict(os.environ, {"TEST_CONFIRM": "true"}):
|
||||||
env_input = EnvLookupInput(
|
env_input = EnvLookupInput(
|
||||||
show_confirm_env_var_name="TEST_CONFIRM",
|
show_confirm_env_var_name="TEST_CONFIRM", show_confirm_default=False
|
||||||
show_confirm_default=False
|
|
||||||
)
|
)
|
||||||
|
|
||||||
activity_env = ActivityEnvironment()
|
activity_env = ActivityEnvironment()
|
||||||
result = await activity_env.run(
|
result = await activity_env.run(
|
||||||
self.tool_activities.get_wf_env_vars,
|
self.tool_activities.get_wf_env_vars, env_input
|
||||||
env_input
|
|
||||||
)
|
)
|
||||||
|
|
||||||
assert result.show_confirm == True
|
assert result.show_confirm
|
||||||
|
|
||||||
# Test with "false" string
|
# Test with "false" string
|
||||||
with patch.dict(os.environ, {"TEST_CONFIRM": "false"}):
|
with patch.dict(os.environ, {"TEST_CONFIRM": "false"}):
|
||||||
env_input = EnvLookupInput(
|
env_input = EnvLookupInput(
|
||||||
show_confirm_env_var_name="TEST_CONFIRM",
|
show_confirm_env_var_name="TEST_CONFIRM", show_confirm_default=True
|
||||||
show_confirm_default=True
|
|
||||||
)
|
)
|
||||||
|
|
||||||
activity_env = ActivityEnvironment()
|
activity_env = ActivityEnvironment()
|
||||||
result = await activity_env.run(
|
result = await activity_env.run(
|
||||||
self.tool_activities.get_wf_env_vars,
|
self.tool_activities.get_wf_env_vars, env_input
|
||||||
env_input
|
|
||||||
)
|
)
|
||||||
|
|
||||||
assert result.show_confirm == False
|
assert not result.show_confirm
|
||||||
|
|
||||||
# Test with missing env var (should use default)
|
# Test with missing env var (should use default)
|
||||||
with patch.dict(os.environ, {}, clear=True):
|
with patch.dict(os.environ, {}, clear=True):
|
||||||
env_input = EnvLookupInput(
|
env_input = EnvLookupInput(
|
||||||
show_confirm_env_var_name="MISSING_VAR",
|
show_confirm_env_var_name="MISSING_VAR", show_confirm_default=True
|
||||||
show_confirm_default=True
|
|
||||||
)
|
)
|
||||||
|
|
||||||
activity_env = ActivityEnvironment()
|
activity_env = ActivityEnvironment()
|
||||||
result = await activity_env.run(
|
result = await activity_env.run(
|
||||||
self.tool_activities.get_wf_env_vars,
|
self.tool_activities.get_wf_env_vars, env_input
|
||||||
env_input
|
|
||||||
)
|
)
|
||||||
|
|
||||||
assert result.show_confirm == True
|
assert result.show_confirm
|
||||||
|
@@ -1,25 +1,22 @@
+import concurrent.futures
 import uuid
+from contextlib import contextmanager

+from temporalio import activity
 from temporalio.client import Client, WorkflowExecutionStatus
 from temporalio.worker import Worker
-from temporalio import activity
-import concurrent.futures
-from temporalio.testing import WorkflowEnvironment
 from api.main import get_initial_agent_goal
 from models.data_types import (
     AgentGoalWorkflowParams,
     CombinedInput,
-    ValidationResult,
-    ValidationInput,
-    EnvLookupOutput,
     EnvLookupInput,
-    ToolPromptInput
+    EnvLookupOutput,
+    ToolPromptInput,
+    ValidationInput,
+    ValidationResult,
 )
 from workflows.agent_goal_workflow import AgentGoalWorkflow
-from activities.tool_activities import ToolActivities, dynamic_tool_activity
-from unittest.mock import patch
-from dotenv import load_dotenv
-import os
-from contextlib import contextmanager


 @contextmanager
@@ -29,57 +26,49 @@ def my_context():
     print("Cleanup")


 async def test_flight_booking(client: Client):
+    # load_dotenv("test_flights_single.env")

-    #load_dotenv("test_flights_single.env")

     with my_context() as value:
         print(f"Working with {value}")

     # Create the test environment
-    #env = await WorkflowEnvironment.start_local()
-    #client = env.client
+    # env = await WorkflowEnvironment.start_local()
+    # client = env.client
     task_queue_name = str(uuid.uuid4())
     workflow_id = str(uuid.uuid4())

     # Create mock activity functions with proper signatures
     @activity.defn(name="get_wf_env_vars")
     async def mock_get_wf_env_vars(input: EnvLookupInput) -> EnvLookupOutput:
-        return EnvLookupOutput(
-            show_confirm=True,
-            multi_goal_mode=True
-        )
+        return EnvLookupOutput(show_confirm=True, multi_goal_mode=True)

     @activity.defn(name="agent_validatePrompt")
-    async def mock_agent_validatePrompt(validation_input: ValidationInput) -> ValidationResult:
-        return ValidationResult(
-            validationResult=True,
-            validationFailedReason={}
-        )
+    async def mock_agent_validatePrompt(
+        validation_input: ValidationInput,
+    ) -> ValidationResult:
+        return ValidationResult(validationResult=True, validationFailedReason={})

     @activity.defn(name="agent_toolPlanner")
     async def mock_agent_toolPlanner(input: ToolPromptInput) -> dict:
-        return {
-            "next": "done",
-            "response": "Test response from LLM"
-        }
+        return {"next": "done", "response": "Test response from LLM"}

-    with concurrent.futures.ThreadPoolExecutor(max_workers=100) as activity_executor:
+    with concurrent.futures.ThreadPoolExecutor(
+        max_workers=100
+    ) as activity_executor:
         worker = Worker(
             client,
             task_queue=task_queue_name,
             workflows=[AgentGoalWorkflow],
             activities=[
                 mock_get_wf_env_vars,
                 mock_agent_validatePrompt,
-                mock_agent_toolPlanner
+                mock_agent_toolPlanner,
             ],
             activity_executor=activity_executor,
         )

         async with worker:
             initial_agent_goal = get_initial_agent_goal()
             # Create combined input
             combined_input = CombinedInput(
@@ -87,30 +76,36 @@ async def test_flight_booking(client: Client):
                 agent_goal=initial_agent_goal,
             )

-            prompt="Hello!"
+            prompt = "Hello!"

-            #async with Worker(client, task_queue=task_queue_name, workflows=[AgentGoalWorkflow], activities=[ToolActivities.agent_validatePrompt, ToolActivities.agent_toolPlanner, dynamic_tool_activity]):
+            # async with Worker(client, task_queue=task_queue_name, workflows=[AgentGoalWorkflow], activities=[ToolActivities.agent_validatePrompt, ToolActivities.agent_toolPlanner, dynamic_tool_activity]):

             # todo set goal categories for scenarios
             handle = await client.start_workflow(
                 AgentGoalWorkflow.run,
                 combined_input,
                 id=workflow_id,
                 task_queue=task_queue_name,
                 start_signal="user_prompt",
                 start_signal_args=[prompt],
             )
             # todo send signals to simulate user input
             # await handle.signal(AgentGoalWorkflow.user_prompt, "book flights") # for multi-goal
-            await handle.signal(AgentGoalWorkflow.user_prompt, "sydney in september")
-            assert WorkflowExecutionStatus.RUNNING == (await handle.describe()).status
+            await handle.signal(
+                AgentGoalWorkflow.user_prompt, "sydney in september"
+            )
+            assert (
+                WorkflowExecutionStatus.RUNNING == (await handle.describe()).status
+            )

-            #assert ["Hello, user1", "Hello, user2"] == await handle.result()
-            await handle.signal(AgentGoalWorkflow.user_prompt, "I'm all set, end conversation")
+            # assert ["Hello, user1", "Hello, user2"] == await handle.result()
+            await handle.signal(
+                AgentGoalWorkflow.user_prompt, "I'm all set, end conversation"
+            )

-            #assert WorkflowExecutionStatus.COMPLETED == (await handle.describe()).status
+            # assert WorkflowExecutionStatus.COMPLETED == (await handle.describe()).status

             result = await handle.result()
-            #todo dump workflow history for analysis optional
-            #todo assert result is good
+            print(f"Workflow result: {result}")
+            # todo dump workflow history for analysis optional
+            # todo assert result is good
thirdparty/train_api.py (vendored)
@@ -1,9 +1,9 @@
-from http.server import HTTPServer, BaseHTTPRequestHandler
-from urllib.parse import parse_qs, urlparse
 import json
-import time
 import random
 import string
+import time
+from http.server import BaseHTTPRequestHandler, HTTPServer
+from urllib.parse import parse_qs, urlparse


 def parse_datetime(datetime_str):
@@ -213,4 +213,4 @@ def run_server():


 if __name__ == "__main__":
     run_server()
@@ -1,35 +1,24 @@
-from .search_fixtures import search_fixtures
-from .search_flights import search_flights
-from .search_trains import search_trains
-from .search_trains import book_trains
-from .create_invoice import create_invoice
-from .find_events import find_events
-from .list_agents import list_agents
 from .change_goal import change_goal
-from .transfer_control import transfer_control
+from .create_invoice import create_invoice
+from .ecommerce.get_order import get_order
+from .ecommerce.list_orders import list_orders
+from .ecommerce.track_package import track_package
-from .hr.current_pto import current_pto
-from .hr.book_pto import book_pto
-from .hr.future_pto_calc import future_pto_calc
-from .hr.checkpaybankstatus import checkpaybankstatus
-
 from .fin.check_account_valid import check_account_valid
 from .fin.get_account_balances import get_account_balance
 from .fin.move_money import move_money
 from .fin.submit_loan_application import submit_loan_application
+from .find_events import find_events
-from .ecommerce.get_order import get_order
-from .ecommerce.track_package import track_package
-from .ecommerce.list_orders import list_orders
-
-from .food.get_menu import get_menu
-from .food.get_menu_item_details import get_menu_item_details
-from .food.add_to_cart import add_to_cart
-from .food.place_order import place_order
-from .food.check_order_status import check_order_status
-
 from .give_hint import give_hint
 from .guess_location import guess_location
+from .hr.book_pto import book_pto
+from .hr.checkpaybankstatus import checkpaybankstatus
+from .hr.current_pto import current_pto
+from .hr.future_pto_calc import future_pto_calc
+from .list_agents import list_agents
+from .search_fixtures import search_fixtures
+from .search_flights import search_flights
+from .search_trains import book_trains, search_trains
+from .transfer_control import transfer_control


 def get_handler(tool_name: str):
@@ -73,16 +62,6 @@ def get_handler(tool_name: str):
         return track_package
     if tool_name == "ListOrders":
         return list_orders
-    if tool_name == "GetMenu":
-        return get_menu
-    if tool_name == "GetMenuItemDetails":
-        return get_menu_item_details
-    if tool_name == "AddToCart":
-        return add_to_cart
-    if tool_name == "PlaceOrder":
-        return place_order
-    if tool_name == "CheckOrderStatus":
-        return check_order_status
     if tool_name == "GiveHint":
         return give_hint
     if tool_name == "GuessLocation":
@@ -1,9 +1,8 @@
 def change_goal(args: dict) -> dict:
-
     new_goal = args.get("goalID")
     if new_goal is None:
         new_goal = "goal_choose_agent_type"

     return {
         "new_goal": new_goal,
     }
@@ -1,4 +1,5 @@
 import os
+
 import stripe
 from dotenv import load_dotenv

@@ -1,122 +0,0 @@
-{
-    "restaurants": [
-        {
-            "id": "rest_001",
-            "name": "Tony's Pizza Palace",
-            "menu": [
-                {
-                    "id": "item_001",
-                    "name": "Margherita Pizza",
-                    "category": "Pizza",
-                    "price": 14.99,
-                    "description": "Fresh mozzarella, tomato sauce, basil",
-                    "available": true
-                },
-                {
-                    "id": "item_002",
-                    "name": "Pepperoni Pizza",
-                    "category": "Pizza",
-                    "price": 16.99,
-                    "description": "Classic pepperoni with mozzarella and tomato sauce",
-                    "available": true
-                },
-                {
-                    "id": "item_003",
-                    "name": "Caesar Salad",
-                    "category": "Salad",
-                    "price": 9.99,
-                    "description": "Romaine lettuce, parmesan, croutons, caesar dressing",
-                    "available": true
-                },
-                {
-                    "id": "item_004",
-                    "name": "Garlic Bread",
-                    "category": "Sides",
-                    "price": 5.99,
-                    "description": "Fresh baked bread with garlic butter",
-                    "available": true
-                },
-                {
-                    "id": "item_005",
-                    "name": "Tiramisu",
-                    "category": "Dessert",
-                    "price": 7.99,
-                    "description": "Classic Italian dessert with coffee and mascarpone",
-                    "available": true
-                }
-            ]
-        }
-    ],
-    "carts": {
-        "steve@example.com": {
-            "restaurant_id": "rest_001",
-            "items": []
-        }
-    },
-    "orders": [
-        {
-            "id": "order_001",
-            "customer_email": "john.doe@example.com",
-            "restaurant_id": "rest_001",
-            "items": [
-                {
-                    "item_id": "item_001",
-                    "quantity": 1,
-                    "price": 14.99
-                },
-                {
-                    "item_id": "item_004",
-                    "quantity": 2,
-                    "price": 5.99
-                }
-            ],
-            "total": 26.97,
-            "status": "delivered",
-            "order_date": "2025-05-29T18:30:00Z",
-            "estimated_delivery": "2025-05-29T19:15:00Z",
-            "actual_delivery": "2025-05-29T19:12:00Z"
-        },
-        {
-            "id": "order_002",
-            "customer_email": "jane.smith@example.com",
-            "restaurant_id": "rest_001",
-            "items": [
-                {
-                    "item_id": "item_002",
-                    "quantity": 1,
-                    "price": 16.99
-                }
-            ],
-            "total": 16.99,
-            "status": "preparing",
-            "order_date": "2025-05-30T12:00:00Z",
-            "estimated_delivery": "2025-05-30T12:45:00Z"
-        },
-        {
-            "id": "order_58539a70",
-            "customer_email": "steve@example.com",
-            "restaurant_id": "rest_001",
-            "items": [
-                {
-                    "item_id": "item_001",
-                    "quantity": 1,
-                    "price": 14.99
-                },
-                {
-                    "item_id": "item_002",
-                    "quantity": 1,
-                    "price": 16.99
-                },
-                {
-                    "item_id": "item_004",
-                    "quantity": 1,
-                    "price": 5.99
-                }
-            ],
-            "total": 37.97,
-            "status": "preparing",
-            "order_date": "2025-05-30T20:28:18.444162Z",
-            "estimated_delivery": "2025-05-30T20:58:18.444169Z"
-        }
-    ]
-}
@@ -1,16 +1,18 @@
-from pathlib import Path
 import json
+from pathlib import Path


 # this is made to demonstrate functionality but it could just as durably be an API call
 # called as part of a temporal activity with automatic retries
 def get_order(args: dict) -> dict:

     order_id = args.get("order_id")

-    file_path = Path(__file__).resolve().parent.parent / "data" / "customer_order_data.json"
+    file_path = (
+        Path(__file__).resolve().parent.parent / "data" / "customer_order_data.json"
+    )
     if not file_path.exists():
         return {"error": "Data file not found."}

     with open(file_path, "r") as file:
         data = json.load(file)
         order_list = data["orders"]
@@ -18,6 +20,6 @@ def get_order(args: dict) -> dict:
     for order in order_list:
         if order["id"] == order_id:
             return order

     return_msg = "Order " + order_id + " not found."
     return {"error": return_msg}
@@ -1,17 +1,20 @@
-from pathlib import Path
 import json
+from pathlib import Path


 def sorting(e):
-    return e['order_date']
+    return e["order_date"]


 def list_orders(args: dict) -> dict:

     email_address = args.get("email_address")

-    file_path = Path(__file__).resolve().parent.parent / "data" / "customer_order_data.json"
+    file_path = (
+        Path(__file__).resolve().parent.parent / "data" / "customer_order_data.json"
+    )
     if not file_path.exists():
         return {"error": "Data file not found."}

     with open(file_path, "r") as file:
         data = json.load(file)
         order_list = data["orders"]
@@ -24,7 +27,6 @@ def list_orders(args: dict) -> dict:
     if len(rtn_order_list) > 0:
         rtn_order_list.sort(key=sorting)
         return {"orders": rtn_order_list}
     else:
         return_msg = "No orders for customer " + email_address + " found."
         return {"error": return_msg}
-
@@ -1,49 +1,59 @@
 import http
-import os
 import json
+import os
 from pathlib import Path

-#Send back dummy data in the correct format - to use the real API, 1) change this to be track_package_fake and 2) change the below track_package_real to be track_package
+
+# Send back dummy data in the correct format - to use the real API, 1) change this to be track_package_fake and 2) change the below track_package_real to be track_package
 def track_package(args: dict) -> dict:

     tracking_id = args.get("tracking_id")
-    file_path = Path(__file__).resolve().parent.parent / "data" / "dummy_tracking_data.json"
+    file_path = (
+        Path(__file__).resolve().parent.parent / "data" / "dummy_tracking_data.json"
+    )
     if not file_path.exists():
         return {"error": "Data file not found."}

     with open(file_path, "r") as file:
         data = json.load(file)
         package_list = data["packages"]

     for package in package_list:
         if package["TrackingNumber"] == tracking_id:
             scheduled_delivery_date = package["ScheduledDeliveryDate"]
             carrier = package["Carrier"]
             status_summary = package["StatusSummary"]
             tracking_details = package.get("TrackingDetails", [])
             last_tracking_update = ""
-            if tracking_details and tracking_details is not None and tracking_details[0] is not None:
-                last_tracking_update = tracking_details[0]["EventDateTimeInDateTimeFormat"]
-
-            tracking_link = ""
-            if carrier == "USPS":
-                tracking_link = f"https://tools.usps.com/go/TrackConfirmAction?qtc_tLabels1={tracking_id}"
-            elif carrier == "UPS":
-                tracking_link = f"https://www.ups.com/track?track=yes&trackNums={tracking_id}"
+            if (
+                tracking_details
+                and tracking_details is not None
+                and tracking_details[0] is not None
+            ):
+                last_tracking_update = tracking_details[0][
+                    "EventDateTimeInDateTimeFormat"
+                ]

-            return {
-                "scheduled_delivery_date": scheduled_delivery_date,
-                "carrier": carrier,
-                "status_summary": status_summary,
-                "tracking_link": tracking_link,
-                "last_tracking_update": last_tracking_update
-            }
+            tracking_link = ""
+            if carrier == "USPS":
+                tracking_link = f"https://tools.usps.com/go/TrackConfirmAction?qtc_tLabels1={tracking_id}"
+            elif carrier == "UPS":
+                tracking_link = (
+                    f"https://www.ups.com/track?track=yes&trackNums={tracking_id}"
+                )
+
+            return {
+                "scheduled_delivery_date": scheduled_delivery_date,
+                "carrier": carrier,
+                "status_summary": status_summary,
+                "tracking_link": tracking_link,
+                "last_tracking_update": last_tracking_update,
+            }

     return_msg = "Package not found with tracking info " + tracking_id
     return {"error": return_msg}

-'''Format of response:
+
+"""Format of response:
 {
     "TrackingNumber": "",
     "Delivered": false,
@@ -94,9 +104,10 @@ def track_package(args: dict) -> dict:
             }
         ]
 }
-'''
-def track_package_real(args: dict) -> dict:
+"""


+def track_package_real(args: dict) -> dict:
     tracking_id = args.get("tracking_id")

     api_key = os.getenv("RAPIDAPI_KEY")
@@ -127,11 +138,17 @@ def track_package_real(args: dict) -> dict:
     status_summary = json_data["StatusSummary"]
     tracking_details = json_data.get("TrackingDetails", [])
     last_tracking_update = ""
-    if tracking_details and tracking_details is not None and tracking_details[0] is not None:
+    if (
+        tracking_details
+        and tracking_details is not None
+        and tracking_details[0] is not None
+    ):
         last_tracking_update = tracking_details[0]["EventDateTimeInDateTimeFormat"]
     tracking_link = ""
     if carrier == "USPS":
-        tracking_link = f"https://tools.usps.com/go/TrackConfirmAction?qtc_tLabels1={tracking_id}"
+        tracking_link = (
+            f"https://tools.usps.com/go/TrackConfirmAction?qtc_tLabels1={tracking_id}"
+        )
     elif carrier == "UPS":
         tracking_link = f"https://www.ups.com/track?track=yes&trackNums={tracking_id}"
|
||||||
|
|
||||||
@@ -140,5 +157,5 @@ def track_package_real(args: dict) -> dict:
|
|||||||
"carrier": carrier,
|
"carrier": carrier,
|
||||||
"status_summary": status_summary,
|
"status_summary": status_summary,
|
||||||
"tracking_link": tracking_link,
|
"tracking_link": tracking_link,
|
||||||
"last_tracking_update": last_tracking_update
|
"last_tracking_update": last_tracking_update,
|
||||||
}
|
}
|
||||||
|
|||||||
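The hunks above are a black-style re-wrap of the carrier-link logic; behavior is unchanged. As a self-contained sketch of that mapping (the helper name `build_tracking_link` is invented for illustration, not a name from the repo):

```python
# Hypothetical helper mirroring the carrier -> tracking-URL branching above;
# unknown carriers fall through to an empty link, as in the tool.
def build_tracking_link(carrier: str, tracking_id: str) -> str:
    if carrier == "USPS":
        return f"https://tools.usps.com/go/TrackConfirmAction?qtc_tLabels1={tracking_id}"
    elif carrier == "UPS":
        return f"https://www.ups.com/track?track=yes&trackNums={tracking_id}"
    return ""


print(build_tracking_link("UPS", "1Z999"))
```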
@@ -1,24 +1,31 @@
-from pathlib import Path
 import json
+from pathlib import Path


 # this is made to demonstrate functionality but it could just as durably be an API call
 # called as part of a temporal activity with automatic retries
 def check_account_valid(args: dict) -> dict:

     email = args.get("email")
     account_id = args.get("account_id")

-    file_path = Path(__file__).resolve().parent.parent / "data" / "customer_account_data.json"
+    file_path = (
+        Path(__file__).resolve().parent.parent / "data" / "customer_account_data.json"
+    )
     if not file_path.exists():
         return {"error": "Data file not found."}

     with open(file_path, "r") as file:
         data = json.load(file)
     account_list = data["accounts"]

     for account in account_list:
         if account["email"] == email or account["account_id"] == account_id:
-            return{"status": "account valid"}
+            return {"status": "account valid"}

-    return_msg = "Account not found with email address " + email + " or account ID: " + account_id
+    return_msg = (
+        "Account not found with email address "
+        + email
+        + " or account ID: "
+        + account_id
+    )
     return {"error": return_msg}
@@ -1,23 +1,33 @@
-from pathlib import Path
 import json
+from pathlib import Path


 # this is made to demonstrate functionality but it could just as durably be an API call
 # this assumes it's a valid account - use check_account_valid() to verify that first
 def get_account_balance(args: dict) -> dict:

     account_key = args.get("email_address_or_account_ID")

-    file_path = Path(__file__).resolve().parent.parent / "data" / "customer_account_data.json"
+    file_path = (
+        Path(__file__).resolve().parent.parent / "data" / "customer_account_data.json"
+    )
     if not file_path.exists():
         return {"error": "Data file not found."}

     with open(file_path, "r") as file:
         data = json.load(file)
     account_list = data["accounts"]

     for account in account_list:
         if account["email"] == account_key or account["account_id"] == account_key:
-            return{ "name": account["name"], "email": account["email"], "account_id": account["account_id"], "checking_balance": account["checking_balance"], "savings_balance": account["savings_balance"], "bitcoin_balance": account["bitcoin_balance"], "account_creation_date": account["account_creation_date"] }
+            return {
+                "name": account["name"],
+                "email": account["email"],
+                "account_id": account["account_id"],
+                "checking_balance": account["checking_balance"],
+                "savings_balance": account["savings_balance"],
+                "bitcoin_balance": account["bitcoin_balance"],
+                "account_creation_date": account["account_creation_date"],
+            }

     return_msg = "Account not found with for " + account_key
     return {"error": return_msg}
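The hunk above mostly re-wraps `get_account_balance`; the lookup itself stays a linear scan that matches either the email or the account ID. A minimal sketch of that scan (the accounts list here is made-up sample data):

```python
# Linear scan matching either field, as in get_account_balance; sample data only.
def find_account(accounts: list, account_key: str):
    for account in accounts:
        if account["email"] == account_key or account["account_id"] == account_key:
            return account
    return None


accounts = [{"email": "a@example.com", "account_id": "acct-1", "checking_balance": 100}]
print(find_account(accounts, "acct-1"))
```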
@@ -1,16 +1,12 @@
-import os
-from pathlib import Path
 import json
-from temporalio.client import Client
+import os
 from dataclasses import dataclass
-from typing import Optional
-import asyncio
+from pathlib import Path
+
 from temporalio.exceptions import WorkflowAlreadyStartedError

 from shared.config import get_temporal_client


-from enum import Enum, auto
-
 # enums for the java enum
 # class ExecutionScenarios(Enum):
 #     HAPPY_PATH = 0
@@ -32,7 +28,6 @@ class MoneyMovementWorkflowParameterObj:
 # this is made to demonstrate functionality but it could just as durably be an API call
 # this assumes it's a valid account - use check_account_valid() to verify that first
 async def move_money(args: dict) -> dict:
-
     account_key = args.get("email_address_or_account_ID")
     account_type: str = args.get("accounttype")
     amount = args.get("amount")
@@ -101,7 +96,6 @@ async def move_money(args: dict) -> dict:
 async def start_workflow(
     amount_cents: int, from_account_name: str, to_account_name: str
 ) -> str:
-
     start_real_workflow = os.getenv("FIN_START_REAL_WORKFLOW")
     if start_real_workflow is not None and start_real_workflow.lower() == "false":
         START_REAL_WORKFLOW = False
@@ -128,7 +122,7 @@ async def start_workflow(
                 task_queue="MoneyTransferJava",  # Task queue name
             )
             return handle.id
-        except WorkflowAlreadyStartedError as e:
+        except WorkflowAlreadyStartedError:
             existing_handle = client.get_workflow_handle(workflow_id=workflow_id)
             return existing_handle.id
     else:
@@ -1,18 +1,10 @@
-from datetime import date, timedelta
 import os
-from pathlib import Path
-import json
-from temporalio.client import (
-    Client,
-    WithStartWorkflowOperation,
-    WorkflowHandle,
-    WorkflowUpdateFailedError,
-)
-from temporalio import common
 from dataclasses import dataclass
-from typing import Optional
-import asyncio
-from temporalio.exceptions import WorkflowAlreadyStartedError
+from datetime import date
+
+from temporalio import common
+from temporalio.client import WithStartWorkflowOperation, WorkflowUpdateFailedError

 from shared.config import get_temporal_client

@@ -24,39 +16,55 @@ class TransactionRequest:
     sourceAccount: str
     targetAccount: str


 @dataclass
 class TxResult:
     transactionId: str
     status: str

-#demonstrate starting a workflow and early return pattern while the workflow continues
+
+# demonstrate starting a workflow and early return pattern while the workflow continues
 async def submit_loan_application(args: dict) -> dict:
     account_key = args.get("email_address_or_account_ID")
     amount = args.get("amount")

-    loan_status: dict = await start_workflow(amount=amount,account_name=account_key)
+    loan_status: dict = await start_workflow(amount=amount, account_name=account_key)

     if loan_status.get("error") is None:
-        return {'status': loan_status.get("loan_application_status"), 'detailed_status': loan_status.get("application_details"), 'next_step': loan_status.get("advisement"), 'confirmation_id': loan_status.get("transaction_id")}
+        return {
+            "status": loan_status.get("loan_application_status"),
+            "detailed_status": loan_status.get("application_details"),
+            "next_step": loan_status.get("advisement"),
+            "confirmation_id": loan_status.get("transaction_id"),
+        }
     else:
         print(loan_status)
         return loan_status


 # Async function to start workflow
-async def start_workflow(amount: str, account_name: str, )-> dict:
+async def start_workflow(
+    amount: str,
+    account_name: str,
+) -> dict:
     start_real_workflow = os.getenv("FIN_START_REAL_WORKFLOW")
     if start_real_workflow is not None and start_real_workflow.lower() == "false":
-        START_REAL_WORKFLOW = False
-        return {'loan_application_status': "applied", 'application_details': "loan application is submitted and initial validation is complete",'transaction_id': "APPLICATION"+account_name, 'advisement': "You'll receive a confirmation for final approval in three business days", }
+        # START_REAL_WORKFLOW = False
+        return {
+            "loan_application_status": "applied",
+            "application_details": "loan application is submitted and initial validation is complete",
+            "transaction_id": "APPLICATION" + account_name,
+            "advisement": "You'll receive a confirmation for final approval in three business days",
+        }
     else:
-        START_REAL_WORKFLOW = True
+        # START_REAL_WORKFLOW = True
         # Connect to Temporal
         client = await get_temporal_client()

         # Define the workflow ID and task queue
-        workflow_id = "LOAN_APPLICATION-"+account_name+"-"+date.today().strftime('%Y-%m-%d')
+        workflow_id = (
+            "LOAN_APPLICATION-" + account_name + "-" + date.today().strftime("%Y-%m-%d")
+        )
         task_queue = "LatencyOptimizationTEST"

         # Create a TransactionRequest (matching the Java workflow's expected input)
@@ -83,21 +91,27 @@ async def start_workflow(amount: str, account_name: str, )-> dict:
                 )
             )
         except WorkflowUpdateFailedError:
-            print("aww man got exception WorkflowUpdateFailedError" )
+            print("aww man got exception WorkflowUpdateFailedError")
             tx_result = None
             return_msg = "Loan could not be processed for " + account_name
             return {"error": return_msg}

         workflow_handle = await start_op.workflow_handle()
+        print(f"Workflow started with ID: {workflow_handle.id}")
         print(tx_result)

-        print(f"Update result: Transaction ID = {tx_result.transactionId}, Message = {tx_result.status}")
+        print(
+            f"Update result: Transaction ID = {tx_result.transactionId}, Message = {tx_result.status}"
+        )

         # Optionally, wait for the workflow to complete and get the final result
         # final_result = await handle.result()
         # print(f"Workflow completed with result: {final_result}")

         # return {'status': loan_status.get("loan_status"), 'detailed_status': loan_status.get("results"), 'next_step': loan_status.get("advisement"), 'confirmation_id': loan_status.get("workflowID")}
-        return {'loan_application_status': "applied", 'application_details': "loan application is submitted and initial validation is complete",'transaction_id': tx_result.transactionId, 'advisement': "You'll receive a confirmation for final approval in three business days", }
+        return {
+            "loan_application_status": "applied",
+            "application_details": "loan application is submitted and initial validation is complete",
+            "transaction_id": tx_result.transactionId,
+            "advisement": "You'll receive a confirmation for final approval in three business days",
+        }
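One behavior the reformat above preserves: the loan workflow ID embeds the account name and today's date, so a same-day resubmission targets the same workflow. A sketch of just the ID scheme (the helper name is illustrative; the real code inlines this expression in `start_workflow`):

```python
from datetime import date


# Illustrative helper for the date-scoped workflow-ID scheme used above.
def loan_workflow_id(account_name: str, on: date) -> str:
    return "LOAN_APPLICATION-" + account_name + "-" + on.strftime("%Y-%m-%d")


print(loan_workflow_id("acct-1", date(2025, 5, 30)))  # → LOAN_APPLICATION-acct-1-2025-05-30
```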
@@ -1,6 +1,6 @@
+import json
 from datetime import datetime
 from pathlib import Path
-import json


 def find_events(args: dict) -> dict:
@@ -1,63 +0,0 @@
-from pathlib import Path
-import json
-
-def add_to_cart(args: dict) -> dict:
-    customer_email = args.get("customer_email")
-    item_id = args.get("item_id")
-    quantity = int(args.get("quantity", 1))
-    restaurant_id = args.get("restaurant_id", "rest_001")
-
-    file_path = Path(__file__).resolve().parent.parent / "data" / "food_ordering_data.json"
-    if not file_path.exists():
-        return {"error": "Data file not found."}
-
-    with open(file_path, "r") as file:
-        data = json.load(file)
-
-    # Find the item to get its price
-    item_price = None
-    item_name = None
-    for restaurant in data["restaurants"]:
-        if restaurant["id"] == restaurant_id:
-            for item in restaurant["menu"]:
-                if item["id"] == item_id:
-                    item_price = item["price"]
-                    item_name = item["name"]
-                    break
-
-    if item_price is None:
-        return {"error": f"Item {item_id} not found."}
-
-    # Initialize cart if it doesn't exist
-    if customer_email not in data["carts"]:
-        data["carts"][customer_email] = {
-            "restaurant_id": restaurant_id,
-            "items": []
-        }
-
-    # Check if item already in cart
-    cart = data["carts"][customer_email]
-    existing_item = None
-    for cart_item in cart["items"]:
-        if cart_item["item_id"] == item_id:
-            existing_item = cart_item
-            break
-
-    if existing_item:
-        existing_item["quantity"] += quantity
-    else:
-        cart["items"].append({
-            "item_id": item_id,
-            "quantity": quantity,
-            "price": item_price
-        })
-
-    # Save back to file
-    with open(file_path, "w") as file:
-        json.dump(data, file, indent=2)
-
-    return {
-        "status": "success",
-        "message": f"Added {quantity} x {item_name} to cart",
-        "cart": cart
-    }
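The deleted `add_to_cart` merged a repeated item by bumping its quantity instead of appending a duplicate row. That merge step, extracted as a self-contained sketch (the helper name is invented for illustration):

```python
# Sketch of the cart-merge behavior from the removed add_to_cart tool:
# an existing item's quantity is incremented; new items are appended.
def add_item(cart_items: list, item_id: str, quantity: int, price: float) -> list:
    for cart_item in cart_items:
        if cart_item["item_id"] == item_id:
            cart_item["quantity"] += quantity
            return cart_items
    cart_items.append({"item_id": item_id, "quantity": quantity, "price": price})
    return cart_items


cart = add_item([], "item_001", 1, 14.99)
cart = add_item(cart, "item_001", 2, 14.99)
print(cart)  # one row with quantity 3
```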
@@ -1,28 +0,0 @@
-from pathlib import Path
-import json
-
-def check_order_status(args: dict) -> dict:
-    order_id = args.get("order_id")
-
-    file_path = Path(__file__).resolve().parent.parent / "data" / "food_ordering_data.json"
-    if not file_path.exists():
-        return {"error": "Data file not found."}
-
-    with open(file_path, "r") as file:
-        data = json.load(file)
-
-    orders = data["orders"]
-
-    for order in orders:
-        if order["id"] == order_id:
-            return {
-                "order_id": order["id"],
-                "status": order["status"],
-                "order_date": order["order_date"],
-                "estimated_delivery": order["estimated_delivery"],
-                "actual_delivery": order.get("actual_delivery"),
-                "total": order["total"],
-                "items": order["items"]
-            }
-
-    return {"error": f"Order {order_id} not found."}
@@ -1,23 +0,0 @@
-from pathlib import Path
-import json
-
-def get_menu(args: dict) -> dict:
-    restaurant_id = args.get("restaurant_id", "rest_001")
-
-    file_path = Path(__file__).resolve().parent.parent / "data" / "food_ordering_data.json"
-    if not file_path.exists():
-        return {"error": "Data file not found."}
-
-    with open(file_path, "r") as file:
-        data = json.load(file)
-
-    restaurants = data["restaurants"]
-
-    for restaurant in restaurants:
-        if restaurant["id"] == restaurant_id:
-            return {
-                "restaurant_name": restaurant["name"],
-                "menu": restaurant["menu"]
-            }
-
-    return {"error": f"Restaurant {restaurant_id} not found."}
@@ -1,23 +0,0 @@
-from pathlib import Path
-import json
-
-def get_menu_item_details(args: dict) -> dict:
-    item_id = args.get("item_id")
-    restaurant_id = args.get("restaurant_id", "rest_001")
-
-    file_path = Path(__file__).resolve().parent.parent / "data" / "food_ordering_data.json"
-    if not file_path.exists():
-        return {"error": "Data file not found."}
-
-    with open(file_path, "r") as file:
-        data = json.load(file)
-
-    restaurants = data["restaurants"]
-
-    for restaurant in restaurants:
-        if restaurant["id"] == restaurant_id:
-            for item in restaurant["menu"]:
-                if item["id"] == item_id:
-                    return item
-
-    return {"error": f"Menu item {item_id} not found."}
@@ -1,57 +0,0 @@
-from pathlib import Path
-import json
-import uuid
-from datetime import datetime, timedelta
-
-def place_order(args: dict) -> dict:
-    customer_email = args.get("customer_email")
-
-    file_path = Path(__file__).resolve().parent.parent / "data" / "food_ordering_data.json"
-    if not file_path.exists():
-        return {"error": "Data file not found."}
-
-    with open(file_path, "r") as file:
-        data = json.load(file)
-
-    # Check if cart exists
-    if customer_email not in data["carts"] or not data["carts"][customer_email]["items"]:
-        return {"error": "Cart is empty. Please add items to cart first."}
-
-    cart = data["carts"][customer_email]
-
-    # Calculate total
-    total = sum(item["price"] * item["quantity"] for item in cart["items"])
-
-    # Create order
-    order_id = f"order_{str(uuid.uuid4())[:8]}"
-    order_date = datetime.now().isoformat() + "Z"
-    estimated_delivery = (datetime.now() + timedelta(minutes=30)).isoformat() + "Z"
-
-    new_order = {
-        "id": order_id,
-        "customer_email": customer_email,
-        "restaurant_id": cart["restaurant_id"],
-        "items": cart["items"],
-        "total": round(total, 2),
-        "status": "preparing",
-        "order_date": order_date,
-        "estimated_delivery": estimated_delivery
-    }
-
-    # Add order to data
-    data["orders"].append(new_order)
-
-    # Clear cart
-    data["carts"][customer_email] = {"restaurant_id": cart["restaurant_id"], "items": []}
-
-    # Save back to file
-    with open(file_path, "w") as file:
-        json.dump(data, file, indent=2)
-
-    return {
-        "status": "success",
-        "order_id": order_id,
-        "total": round(total, 2),
-        "estimated_delivery": estimated_delivery,
-        "message": "Order placed successfully!"
-    }
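The deleted `place_order` computed the order total as a quantity-weighted sum rounded to cents. That computation standalone, on invented sample items:

```python
# Total computation from the removed place_order tool, on made-up items.
items = [
    {"item_id": "item_001", "quantity": 2, "price": 14.99},
    {"item_id": "item_002", "quantity": 1, "price": 16.99},
]
total = round(sum(item["price"] * item["quantity"] for item in items), 2)
print(total)  # → 46.97
```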
@@ -1,10 +1,10 @@
 TREASURE_LOCATION = {
     "address": "300 Lenora",
     "city": "Seattle",
     "state_full": "Washington",
     "state_abbrev": "WA",
     "zip": "98121",
-    "country": "USA"
+    "country": "USA",
 }

 HINTS = [
@@ -12,8 +12,8 @@ HINTS = [
     "state of " + TREASURE_LOCATION["state_full"],
     "city of " + TREASURE_LOCATION["city"],
     "at a company HQ",
-    "The company's tech traces its roots to a project called Cadence", #thanks, Grok
-    "The company offers a tool that lets developers write code as if it's running forever, no matter what crashes", #thanks, Grok
+    "The company's tech traces its roots to a project called Cadence",  # thanks, Grok
+    "The company offers a tool that lets developers write code as if it's running forever, no matter what crashes",  # thanks, Grok
 ]
 ''' Additional Grok provided hints about Temporal:
 "This company was founded by two engineers who previously worked on a system named after a South American river at Uber."
@@ -26,16 +26,14 @@ HINTS = [
 "They’re backed by big venture capital names like Sequoia, betting on their vision for reliable software."
 "The company’s name might remind you of a word for something fleeting, yet their tech is built to last."'''


 def give_hint(args: dict) -> dict:
     hint_total = args.get("hint_total")
     if hint_total is None:
         hint_total = 0

     index = hint_total % len(HINTS)
     hint_text = HINTS[index]

     hint_total = hint_total + 1
-    return {
-        "hint_number": hint_total,
-        "hint": hint_text
-    }
+    return {"hint_number": hint_total, "hint": hint_text}
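`give_hint` cycles through the hint list with a modulo index, so hints wrap around once the list is exhausted and `hint_number` counts from 1. The rotation in isolation (`HINTS` here is a shortened sample, not the repo's full list):

```python
# Modulo rotation as in give_hint; sample hints only.
HINTS = ["in the USA", "state of Washington", "city of Seattle"]


def give_hint(hint_total):
    if hint_total is None:
        hint_total = 0
    hint_text = HINTS[hint_total % len(HINTS)]
    return {"hint_number": hint_total + 1, "hint": hint_text}


print(give_hint(3))  # index wraps back around to the first hint
```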
@@ -1,7 +1,8 @@
|
|||||||
import os
|
import os
|
||||||
from typing import List
|
from typing import List
|
||||||
from models.tool_definitions import AgentGoal
|
|
||||||
import tools.tool_registry as tool_registry
|
import tools.tool_registry as tool_registry
|
||||||
|
from models.tool_definitions import AgentGoal
|
||||||
|
|
||||||
# Turn on Silly Mode - this should be a description of the persona you'd like the bot to have and can be a single word or a phrase.
|
# Turn on Silly Mode - this should be a description of the persona you'd like the bot to have and can be a single word or a phrase.
|
||||||
# Example if you want the bot to be a specific person, like Mario or Christopher Walken, or to describe a specific tone:
|
# Example if you want the bot to be a specific person, like Mario or Christopher Walken, or to describe a specific tone:
|
||||||
@@ -310,7 +311,7 @@ goal_fin_check_account_balances = AgentGoal(
|
|||||||
)
|
)
|
||||||
|
|
||||||
# this tool checks account balances, and uses ./data/customer_account_data.json as dummy data
|
# this tool checks account balances, and uses ./data/customer_account_data.json as dummy data
|
||||||
# it also uses a separate workflow/tool, see ./setup.md for details
|
# it also uses a separate workflow/tool, see ./SETUP.md for details
|
||||||
goal_fin_move_money = AgentGoal(
|
goal_fin_move_money = AgentGoal(
|
||||||
id="goal_fin_move_money",
|
id="goal_fin_move_money",
|
||||||
category_tag="fin",
|
category_tag="fin",
|
||||||
@@ -350,7 +351,7 @@ goal_fin_move_money = AgentGoal(
|
|||||||
)
|
)
|
||||||
|
|
||||||
# this starts a loan approval process
|
# this starts a loan approval process
|
||||||
# it also uses a separate workflow/tool, see ./setup.md for details
|
# it also uses a separate workflow/tool, see ./SETUP.md for details
|
||||||
goal_fin_loan_application = AgentGoal(
|
goal_fin_loan_application = AgentGoal(
|
||||||
id="goal_fin_loan_application",
|
id="goal_fin_loan_application",
|
||||||
category_tag="fin",
|
category_tag="fin",
|
||||||
@@ -454,50 +455,6 @@ goal_ecomm_list_orders = AgentGoal(
|
|||||||
),
|
),
|
||||||
)
|
)
|
||||||
|
|
||||||
# ----- Food Ordering Goal -----
|
|
||||||
goal_food_ordering = AgentGoal(
|
|
||||||
id="goal_food_ordering",
|
|
||||||
category_tag="food",
|
|
||||||
agent_name="Food Ordering Assistant",
|
|
||||||
agent_friendly_description="Order food from Tony's Pizza Palace. Browse menu, add items to cart, and place orders.",
|
|
||||||
tools=[
|
|
||||||
tool_registry.food_get_menu_tool,
|
|
||||||
tool_registry.food_get_menu_item_details_tool,
|
|
||||||
tool_registry.food_add_to_cart_tool,
|
|
||||||
tool_registry.food_place_order_tool,
|
|
||||||
tool_registry.food_check_order_status_tool,
|
|
||||||
],
|
|
||||||
-    description="The user wants to order food from Tony's Pizza Palace. Help them browse the menu, learn about menu items, add items to their cart, and place an order. To assist with that goal, help the user gather args for these tools in order: "
-    "1. GetMenu: Show the restaurant menu. This tool is optional if the user already knows what they want. "
-    "2. GetMenuItemDetails: Get details about specific menu items. This tool is optional and can be used multiple times. "
-    "3. AddToCart: Add menu items to the customer's cart. This tool can be used multiple times to add different items. "
-    "4. PlaceOrder: Place the order for items in the cart. "
-    "5. CheckOrderStatus: Check the status of a placed order. This tool is optional and used after placing an order.",
-    starter_prompt=starter_prompt_generic,
-    example_conversation_history="\n ".join(
-        [
-            "user: I'd like to order some food",
-            "agent: Great! I can help you order food from Tony's Pizza Palace. Would you like to see the menu first?",
-            "user: Yes, please show me the menu",
-            "user_confirmed_tool_run: <user clicks confirm on GetMenu tool>",
-            "tool_result: {'restaurant_name': 'Tony\\'s Pizza Palace', 'menu': [{'id': 'item_001', 'name': 'Margherita Pizza', 'category': 'Pizza', 'price': 14.99, 'description': 'Fresh mozzarella, tomato sauce, basil', 'available': True}, {'id': 'item_002', 'name': 'Pepperoni Pizza', 'category': 'Pizza', 'price': 16.99, 'description': 'Classic pepperoni with mozzarella and tomato sauce', 'available': True}]}",
-            "agent: Here's the menu from Tony's Pizza Palace: \n"
-            "Pizza: \n"
-            "- Margherita Pizza ($14.99): Fresh mozzarella, tomato sauce, basil \n"
-            "- Pepperoni Pizza ($16.99): Classic pepperoni with mozzarella and tomato sauce \n"
-            "What would you like to add to your cart? I'll need your email address to create your order.",
-            "user: I'd like a Margherita Pizza. My email is john.doe@example.com",
-            "user_confirmed_tool_run: <user clicks confirm on AddToCart tool>",
-            "tool_result: {'status': 'success', 'message': 'Added 1 x Margherita Pizza to cart', 'cart': {'restaurant_id': 'rest_001', 'items': [{'item_id': 'item_001', 'quantity': 1, 'price': 14.99}]}}",
-            "agent: Perfect! I've added 1 Margherita Pizza to your cart. Would you like to add anything else or are you ready to place your order?",
-            "user: I'm ready to place the order",
-            "user_confirmed_tool_run: <user clicks confirm on PlaceOrder tool>",
-            "tool_result: {'status': 'success', 'order_id': 'order_12345678', 'total': 14.99, 'estimated_delivery': '2025-05-30T13:30:00Z', 'message': 'Order placed successfully!'}",
-            "agent: Order placed successfully! Your order ID is order_12345678 and the total is $14.99. Your food should be delivered by 1:30 PM today. You can check your order status anytime using the order ID.",
-        ]
-    ),
-)
-
 # Add the goals to a list for more generic processing, like listing available agents
 goal_list: List[AgentGoal] = []
 goal_list.append(goal_choose_agent_type)
@@ -512,7 +469,6 @@ goal_list.append(goal_fin_move_money)
 goal_list.append(goal_fin_loan_application)
 goal_list.append(goal_ecomm_list_orders)
 goal_list.append(goal_ecomm_order_status)
-goal_list.append(goal_food_ordering)
 
 
 # for multi-goal, just set list agents as the last tool
@@ -1,7 +1,7 @@
 from .give_hint import TREASURE_LOCATION
 
+
 def guess_location(args: dict) -> dict:
-
     guess_address = args.get("address").lower()
     guess_city = args.get("city").lower()
     guess_state = args.get("state").lower()
@@ -11,8 +11,12 @@ def guess_location(args: dict) -> dict:
     else:
         compare_state = TREASURE_LOCATION.get("state_full").lower()
 
-    #Check for the street address to be included in the guess to account for "st" vs "street" or leaving Street off entirely
-    if TREASURE_LOCATION.get("address").lower() in guess_address and TREASURE_LOCATION.get("city").lower() == guess_city and compare_state == guess_state:
+    # Check for the street address to be included in the guess to account for "st" vs "street" or leaving Street off entirely
+    if (
+        TREASURE_LOCATION.get("address").lower() in guess_address
+        and TREASURE_LOCATION.get("city").lower() == guess_city
+        and compare_state == guess_state
+    ):
         return {"treasure_found": "True"}
     else:
         return {"treasure_found": "False"}
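A side note on the `guess_location` hunk above: the address comparison deliberately uses a substring check (`in`) rather than equality, so "st" vs "street" or a dropped suffix still matches. A minimal standalone sketch (the treasure values here are hypothetical, not the repo's actual `TREASURE_LOCATION`):

```python
# Hypothetical treasure location for illustration only.
TREASURE_LOCATION = {"address": "123 Main", "city": "Springfield", "state": "IL"}


def matches(guess_address: str, guess_city: str, guess_state: str) -> bool:
    # Substring check tolerates "St" vs "Street" or a missing suffix entirely;
    # city and state still require exact (case-insensitive) matches.
    return (
        TREASURE_LOCATION["address"].lower() in guess_address.lower()
        and TREASURE_LOCATION["city"].lower() == guess_city.lower()
        and TREASURE_LOCATION["state"].lower() == guess_state.lower()
    )


print(matches("123 Main Street", "Springfield", "IL"))  # True
print(matches("123 Main St", "springfield", "il"))      # True
print(matches("456 Elm St", "Springfield", "IL"))       # False
```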
@@ -1,11 +1,10 @@
 def book_pto(args: dict) -> dict:
-
     email = args.get("email")
     start_date = args.get("start_date")
     end_date = args.get("end_date")
 
-    print(f"[BookPTO] Totally would send an email confirmation of PTO from {start_date} to {end_date} to {email} here!")
+    print(
+        f"[BookPTO] Totally would send an email confirmation of PTO from {start_date} to {end_date} to {email} here!"
+    )
 
-    return {
-        "status": "success"
-    }
+    return {"status": "success"}
@@ -1,9 +1,4 @@
-from pathlib import Path
-import json
-
-
 def checkpaybankstatus(args: dict) -> dict:
-
     email = args.get("email")
 
     if email == "grinch@grinch.com":
@@ -12,4 +7,4 @@ def checkpaybankstatus(args: dict) -> dict:
 
     # could do logic here or look up data but for now everyone but the grinch is getting paid
     return_msg = "connected"
     return {"status": return_msg}
@@ -1,26 +1,27 @@
-from pathlib import Path
 import json
+from pathlib import Path
 
 
 def current_pto(args: dict) -> dict:
-
     email = args.get("email")
 
-    file_path = Path(__file__).resolve().parent.parent / "data" / "employee_pto_data.json"
+    file_path = (
+        Path(__file__).resolve().parent.parent / "data" / "employee_pto_data.json"
+    )
     if not file_path.exists():
         return {"error": "Data file not found."}
 
     data = json.load(open(file_path))
     employee_list = data["theCompany"]["employees"]
 
     for employee in employee_list:
         if employee["email"] == email:
             num_hours = int(employee["currentPTOHrs"])
-            num_days = float(num_hours/8)
+            num_days = float(num_hours / 8)
             return {
                 "num_hours": num_hours,
                 "num_days": num_days,
             }
 
     return_msg = "Employee not found with email address " + email
     return {"error": return_msg}
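One detail this formatting pass carries through unchanged is `json.load(open(file_path))`, which never closes the file handle. A `with` block releases it deterministically; a small sketch of that cleanup (a suggestion, not part of this commit — the demo JSON below only mimics the shape the PTO tools read):

```python
import json
import tempfile
from pathlib import Path


def load_employees(file_path: Path) -> list:
    # The with-block closes the handle even if json.load raises,
    # unlike a bare json.load(open(file_path)).
    with open(file_path) as f:
        data = json.load(f)
    return data["theCompany"]["employees"]


# Tiny demo using the same nesting the PTO tools assume.
with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "employee_pto_data.json"
    path.write_text(
        json.dumps(
            {"theCompany": {"employees": [{"email": "a@b.com", "currentPTOHrs": "40"}]}}
        )
    )
    print(load_employees(path))
```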
@@ -1,43 +1,59 @@
 import json
-import pandas
-from pathlib import Path
 from datetime import date, datetime
+from pathlib import Path
+
+import pandas
 from dateutil.relativedelta import relativedelta
 
 
 def future_pto_calc(args: dict) -> dict:
-    file_path = Path(__file__).resolve().parent.parent / "data" / "employee_pto_data.json"
+    file_path = (
+        Path(__file__).resolve().parent.parent / "data" / "employee_pto_data.json"
+    )
     if not file_path.exists():
         return {"error": "Data file not found."}
 
     start_date = datetime.strptime(args.get("start_date"), "%Y-%m-%d").date()
     end_date = datetime.strptime(args.get("end_date"), "%Y-%m-%d").date()
     email = args.get("email")
 
-    #Next, set up the ability to calculate how much PTO will be added to the user's total by the start of the PTO request
+    # Next, set up the ability to calculate how much PTO will be added to the user's total by the start of the PTO request
     today = date.today()
 
     if today > start_date:
-        return_msg = "PTO start date " + args.get("start_date") + "cannot be in the past"
+        return_msg = (
+            "PTO start date " + args.get("start_date") + "cannot be in the past"
+        )
         return {"error": return_msg}
 
     if end_date < start_date:
-        return_msg = "PTO end date " + args.get("end_date") + " must be after PTO start date " + args.get("start_date")
+        return_msg = (
+            "PTO end date "
+            + args.get("end_date")
+            + " must be after PTO start date "
+            + args.get("start_date")
+        )
         return {"error": return_msg}
 
-    #Get the number of business days, and then business hours (assume 8 hr biz day), included in the PTO request
-    biz_days_of_request = len(pandas.bdate_range(start=start_date, end=end_date, inclusive="both"))
+    # Get the number of business days, and then business hours (assume 8 hr biz day), included in the PTO request
+    biz_days_of_request = len(
+        pandas.bdate_range(start=start_date, end=end_date, inclusive="both")
+    )
     if biz_days_of_request == 0:
-        return_msg = "There are no business days between " + args.get("start_date") + " and " + args.get("end_date")
+        return_msg = (
+            "There are no business days between "
+            + args.get("start_date")
+            + " and "
+            + args.get("end_date")
+        )
         return {"error": return_msg}
     biz_hours_of_request = biz_days_of_request * 8
 
-    #Assume PTO is added on the first of every month - month math compares rolling dates, so compare the PTO request with the first day of the current month.
+    # Assume PTO is added on the first of every month - month math compares rolling dates, so compare the PTO request with the first day of the current month.
    today_first_of_month = date(today.year, today.month, 1)
     time_difference = relativedelta(start_date, today_first_of_month)
     months_to_accrue = time_difference.years * 12 + time_difference.months
 
     data = json.load(open(file_path))
     employee_list = data["theCompany"]["employees"]
 
@@ -47,12 +63,14 @@ def future_pto_calc(args: dict) -> dict:
         if employee["email"] == email:
             current_pto_hours = int(employee["currentPTOHrs"])
             hrs_added_per_month = int(employee["hrsAddedPerMonth"])
-            pto_available_at_start = current_pto_hours + (months_to_accrue * hrs_added_per_month)
+            pto_available_at_start = current_pto_hours + (
+                months_to_accrue * hrs_added_per_month
+            )
             pto_hrs_remaining_after = pto_available_at_start - biz_hours_of_request
             if pto_hrs_remaining_after >= 0:
                 enough_pto = True
                 return {
                     "enough_pto": enough_pto,
                     "pto_hrs_remaining_after": str(pto_hrs_remaining_after),
                 }
 
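The math in `future_pto_calc` above (business days via `pandas.bdate_range`, months of accrual via `relativedelta` against the first of the current month) can be sketched with the standard library alone. This is an illustrative stand-in, not the tool's implementation; function names here are made up, and the month count matches `relativedelta` only because the baseline is always the 1st of a month:

```python
from datetime import date, timedelta


def business_days(start: date, end: date) -> int:
    # Count Mon-Fri days in [start, end], mirroring
    # pandas.bdate_range(start=..., end=..., inclusive="both").
    days = (end - start).days + 1
    return sum(1 for i in range(days) if (start + timedelta(days=i)).weekday() < 5)


def months_between_firsts(today: date, start: date) -> int:
    # Whole months from the first of the current month to the PTO start date -
    # one accrual credit per month boundary crossed.
    first = date(today.year, today.month, 1)
    return (start.year - first.year) * 12 + (start.month - first.month)


print(business_days(date(2025, 6, 2), date(2025, 6, 6)))  # 5 (Mon-Fri)
print(months_between_firsts(date(2025, 6, 15), date(2025, 8, 1)))  # 2
```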
@@ -1,19 +1,23 @@
 import os
 
 import tools.goal_registry as goals
 
+
 def list_agents(args: dict) -> dict:
     goal_categories_start = os.getenv("GOAL_CATEGORIES")
     if goal_categories_start is None:
         goal_categories = ["all"]  # default to 'all' categories
     else:
         goal_categories_start.strip().lower()  # handle extra spaces or non-lowercase
         goal_categories = goal_categories_start.split(",")
 
     # if multi-goal-mode, add agent_selection as a goal (defaults to True)
-    if "agent_selection" not in goal_categories :
+    if "agent_selection" not in goal_categories:
         first_goal_value = os.getenv("AGENT_GOAL")
-        if first_goal_value is None or first_goal_value.lower() == "goal_choose_agent_type":
+        if (
+            first_goal_value is None
+            or first_goal_value.lower() == "goal_choose_agent_type"
+        ):
             goal_categories.append("agent_selection")
 
     # always show goals labeled as "system," like the goal chooser
@@ -33,7 +37,7 @@ def list_agents(args: dict) -> dict:
                 "goal_id": goal.id,
                 "agent_description": goal.agent_friendly_description,
             }
         )
     return {
         "agents": agents,
     }
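One line the diff above carries through unchanged, `goal_categories_start.strip().lower()`, discards its result: Python strings are immutable, so the normalized value must be reassigned or it is lost. A small sketch of the intended parsing (a suggested fix, not part of this commit; the helper name is hypothetical):

```python
from typing import List, Optional


def parse_goal_categories(raw: Optional[str]) -> List[str]:
    # os.getenv("GOAL_CATEGORIES") may be None; default to "all".
    if raw is None:
        return ["all"]
    # str.strip()/str.lower() return new strings - the result must be kept.
    return [part.strip() for part in raw.strip().lower().split(",")]


print(parse_goal_categories(" HR, Finance "))  # ['hr', 'finance']
print(parse_goal_categories(None))             # ['all']
```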
@@ -1,7 +1,8 @@
 import os
-import requests
 import random
-from datetime import datetime, timedelta, date
+from datetime import date, datetime, timedelta
+
+import requests
 from dotenv import load_dotenv
 
 PREMIER_LEAGUE_CLUBS_DATA = [
@@ -1,9 +1,10 @@
-import os
-import json
 import http.client
-from dotenv import load_dotenv
+import json
+import os
 import urllib.parse
+
+from dotenv import load_dotenv
 
 
 def search_airport(query: str) -> list:
     """
@@ -1,4 +1,4 @@
-from models.tool_definitions import ToolDefinition, ToolArgument
+from models.tool_definitions import ToolArgument, ToolDefinition
 
 # ----- System tools -----
 list_agents_tool = ToolDefinition(
@@ -397,89 +397,3 @@ ecomm_track_package = ToolDefinition(
         ),
     ],
 )
-
-# ----- Food Ordering Use Case Tools -----
-food_get_menu_tool = ToolDefinition(
-    name="GetMenu",
-    description="Get the menu for a restaurant. Defaults to Tony's Pizza Palace if no restaurant specified.",
-    arguments=[
-        ToolArgument(
-            name="restaurant_id",
-            type="string",
-            description="ID of the restaurant (defaults to rest_001 for Tony's Pizza Palace)",
-        ),
-    ],
-)
-
-food_get_menu_item_details_tool = ToolDefinition(
-    name="GetMenuItemDetails",
-    description="Get detailed information about a specific menu item.",
-    arguments=[
-        ToolArgument(
-            name="item_id",
-            type="string",
-            description="ID of the menu item to get details for",
-        ),
-        ToolArgument(
-            name="restaurant_id",
-            type="string",
-            description="ID of the restaurant (defaults to rest_001 for Tony's Pizza Palace)",
-        ),
-    ],
-)
-
-food_add_to_cart_tool = ToolDefinition(
-    name="AddToCart",
-    description="Add a menu item to the customer's cart.",
-    arguments=[
-        ToolArgument(
-            name="customer_email",
-            type="string",
-            description="Email address of the customer",
-        ),
-        ToolArgument(
-            name="item_id",
-            type="string",
-            description="ID of the menu item to add to cart",
-        ),
-        ToolArgument(
-            name="quantity",
-            type="number",
-            description="Quantity of the item to add (defaults to 1)",
-        ),
-        ToolArgument(
-            name="restaurant_id",
-            type="string",
-            description="ID of the restaurant (defaults to rest_001 for Tony's Pizza Palace)",
-        ),
-    ],
-)
-
-food_place_order_tool = ToolDefinition(
-    name="PlaceOrder",
-    description="Place an order for the items in the customer's cart.",
-    arguments=[
-        ToolArgument(
-            name="customer_email",
-            type="string",
-            description="Email address of the customer",
-        ),
-        ToolArgument(
-            name="userConfirmation",
-            type="string",
-            description="Indication of user's desire to place the order",
-        ),
-    ],
-)
-
-food_check_order_status_tool = ToolDefinition(
-    name="CheckOrderStatus",
-    description="Check the status of a food order.",
-    arguments=[
-        ToolArgument(
-            name="order_id",
-            type="string",
-            description="ID of the order to check status for",
-        ),
-    ],
-)
@@ -1,7 +1,7 @@
 import shared.config
 
+
 def transfer_control(args: dict) -> dict:
-
     return {
         "new_goal": shared.config.AGENT_GOAL,
     }
@@ -1,31 +1,35 @@
 from collections import deque
 from datetime import timedelta
-from typing import Dict, Any, Union, List, Optional, Deque, TypedDict
+from typing import Any, Deque, Dict, List, Optional, TypedDict, Union
 
-from temporalio.common import RetryPolicy
 from temporalio import workflow
+from temporalio.common import RetryPolicy
 
-from models.data_types import ConversationHistory, EnvLookupOutput, NextStep, ValidationInput, EnvLookupInput
+from models.data_types import (
+    ConversationHistory,
+    EnvLookupInput,
+    EnvLookupOutput,
+    NextStep,
+    ValidationInput,
+)
 from models.tool_definitions import AgentGoal
-from workflows.workflow_helpers import LLM_ACTIVITY_START_TO_CLOSE_TIMEOUT, \
-    LLM_ACTIVITY_SCHEDULE_TO_CLOSE_TIMEOUT
 from workflows import workflow_helpers as helpers
+from workflows.workflow_helpers import (
+    LLM_ACTIVITY_SCHEDULE_TO_CLOSE_TIMEOUT,
+    LLM_ACTIVITY_START_TO_CLOSE_TIMEOUT,
+)
 
 with workflow.unsafe.imports_passed_through():
     from activities.tool_activities import ToolActivities
-    from prompts.agent_prompt_generators import (
-        generate_genai_prompt
-    )
-    from models.data_types import (
-        CombinedInput,
-        ToolPromptInput,
-    )
+    from models.data_types import CombinedInput, ToolPromptInput
+    from prompts.agent_prompt_generators import generate_genai_prompt
     from tools.goal_registry import goal_list
 
 # Constants
 MAX_TURNS_BEFORE_CONTINUE = 250
 
-#ToolData as part of the workflow is what's accessible to the UI - see LLMResponse.jsx for example
+# ToolData as part of the workflow is what's accessible to the UI - see LLMResponse.jsx for example
 class ToolData(TypedDict, total=False):
     next: NextStep
     tool: str
@@ -33,6 +37,7 @@ class ToolData(TypedDict, total=False):
     response: str
     force_confirm: bool = True
 
+
 @workflow.defn
 class AgentGoalWorkflow:
     """Workflow that manages tool execution with user confirmation and conversation history."""
@@ -43,16 +48,21 @@ class AgentGoalWorkflow:
         self.conversation_summary: Optional[str] = None
         self.chat_ended: bool = False
         self.tool_data: Optional[ToolData] = None
-        self.confirmed: bool = False  # indicates that we have confirmation to proceed to run tool
+        self.confirmed: bool = (
+            False  # indicates that we have confirmation to proceed to run tool
+        )
         self.tool_results: List[Dict[str, Any]] = []
         self.goal: AgentGoal = {"tools": []}
-        self.show_tool_args_confirmation: bool = True  # set from env file in activity lookup_wf_env_settings
-        self.multi_goal_mode: bool = False  # set from env file in activity lookup_wf_env_settings
+        self.show_tool_args_confirmation: bool = (
+            True  # set from env file in activity lookup_wf_env_settings
+        )
+        self.multi_goal_mode: bool = (
+            False  # set from env file in activity lookup_wf_env_settings
+        )
 
     # see ../api/main.py#temporal_client.start_workflow() for how the input parameters are set
     @workflow.run
     async def run(self, combined_input: CombinedInput) -> str:
-
         """Main workflow execution method."""
         # setup phase, starts with blank tool_params and agent_goal prompt as defined in tools/goal_registry.py
         params = combined_input.tool_params
|
|||||||
if params and params.prompt_queue:
|
if params and params.prompt_queue:
|
||||||
self.prompt_queue.extend(params.prompt_queue)
|
self.prompt_queue.extend(params.prompt_queue)
|
||||||
|
|
||||||
waiting_for_confirm = False
|
waiting_for_confirm = False
|
||||||
current_tool = None
|
current_tool = None
|
||||||
|
|
||||||
# This is the main interactive loop. Main responsibilities:
|
# This is the main interactive loop. Main responsibilities:
|
||||||
# - Selecting and changing goals as directed by the user
|
# - Selecting and changing goals as directed by the user
|
||||||
# - reacting to user input (from signals)
|
# - reacting to user input (from signals)
|
||||||
# - validating user input to make sure it makes sense with the current goal and tools
|
# - validating user input to make sure it makes sense with the current goal and tools
|
||||||
# - calling the LLM through activities to determine next steps and prompts
|
# - calling the LLM through activities to determine next steps and prompts
|
||||||
# - executing the selected tools via activities
|
# - executing the selected tools via activities
|
||||||
@@ -87,7 +97,7 @@ class AgentGoalWorkflow:
|
|||||||
if self.chat_should_end():
|
if self.chat_should_end():
|
||||||
return f"{self.conversation_history}"
|
return f"{self.conversation_history}"
|
||||||
|
|
||||||
# Execute the tool
|
# Execute the tool
|
||||||
if self.ready_for_tool_execution(waiting_for_confirm, current_tool):
|
if self.ready_for_tool_execution(waiting_for_confirm, current_tool):
|
||||||
waiting_for_confirm = await self.execute_tool(current_tool)
|
waiting_for_confirm = await self.execute_tool(current_tool)
|
||||||
continue
|
continue
|
||||||
@@ -96,10 +106,12 @@ class AgentGoalWorkflow:
|
|||||||
if self.prompt_queue:
|
if self.prompt_queue:
|
||||||
# get most recent prompt
|
# get most recent prompt
|
||||||
prompt = self.prompt_queue.popleft()
|
prompt = self.prompt_queue.popleft()
|
||||||
workflow.logger.info(f"workflow step: processing message on the prompt queue, message is {prompt}")
|
workflow.logger.info(
|
||||||
|
f"workflow step: processing message on the prompt queue, message is {prompt}"
|
||||||
|
)
|
||||||
|
|
||||||
# Validate user-provided prompts
|
# Validate user-provided prompts
|
||||||
if self.is_user_prompt(prompt):
|
if self.is_user_prompt(prompt):
|
||||||
self.add_message("user", prompt)
|
self.add_message("user", prompt)
|
||||||
|
|
||||||
# Validate the prompt before proceeding
|
# Validate the prompt before proceeding
|
||||||
@@ -120,18 +132,25 @@ class AgentGoalWorkflow:
|
|||||||
|
|
||||||
# If validation fails, provide that feedback to the user - i.e., "your words make no sense, puny human" end this iteration of processing
|
# If validation fails, provide that feedback to the user - i.e., "your words make no sense, puny human" end this iteration of processing
|
||||||
if not validation_result.validationResult:
|
if not validation_result.validationResult:
|
||||||
workflow.logger.warning(f"Prompt validation failed: {validation_result.validationFailedReason}")
|
workflow.logger.warning(
|
||||||
self.add_message("agent", validation_result.validationFailedReason)
|
f"Prompt validation failed: {validation_result.validationFailedReason}"
|
||||||
|
)
|
||||||
|
self.add_message(
|
||||||
|
"agent", validation_result.validationFailedReason
|
||||||
|
)
|
||||||
continue
|
continue
|
||||||
|
|
||||||
# If valid, proceed with generating the context and prompt
|
# If valid, proceed with generating the context and prompt
|
||||||
context_instructions = generate_genai_prompt(
|
context_instructions = generate_genai_prompt(
|
||||||
agent_goal=self.goal,
|
agent_goal=self.goal,
|
||||||
conversation_history = self.conversation_history,
|
conversation_history=self.conversation_history,
|
||||||
multi_goal_mode=self.multi_goal_mode,
|
multi_goal_mode=self.multi_goal_mode,
|
||||||
raw_json=self.tool_data)
|
raw_json=self.tool_data,
|
||||||
|
)
|
||||||
prompt_input = ToolPromptInput(prompt=prompt, context_instructions=context_instructions)
|
|
||||||
|
prompt_input = ToolPromptInput(
|
||||||
|
prompt=prompt, context_instructions=context_instructions
|
||||||
|
)
|
||||||
|
|
||||||
# connect to LLM and execute to get next steps
|
# connect to LLM and execute to get next steps
|
||||||
tool_data = await workflow.execute_activity_method(
|
tool_data = await workflow.execute_activity_method(
|
||||||
@@ -151,20 +170,24 @@ class AgentGoalWorkflow:
|
|||||||
next_step = tool_data.get("next")
|
next_step = tool_data.get("next")
|
||||||
current_tool = tool_data.get("tool")
|
current_tool = tool_data.get("tool")
|
||||||
|
|
||||||
workflow.logger.info(f"next_step: {next_step}, current tool is {current_tool}")
|
workflow.logger.info(
|
||||||
|
f"next_step: {next_step}, current tool is {current_tool}"
|
||||||
|
)
|
||||||
|
|
||||||
# make sure we're ready to run the tool & have everything we need
|
# make sure we're ready to run the tool & have everything we need
|
||||||
if next_step == "confirm" and current_tool:
|
if next_step == "confirm" and current_tool:
|
||||||
args = tool_data.get("args", {})
|
args = tool_data.get("args", {})
|
||||||
# if we're missing arguments, ask for them
|
# if we're missing arguments, ask for them
|
||||||
if await helpers.handle_missing_args(current_tool, args, tool_data, self.prompt_queue):
|
if await helpers.handle_missing_args(
|
||||||
|
current_tool, args, tool_data, self.prompt_queue
|
||||||
|
):
|
||||||
continue
|
continue
|
||||||
|
|
||||||
waiting_for_confirm = True
|
waiting_for_confirm = True
|
||||||
|
|
||||||
# We have needed arguments, if we want to force the user to confirm, set that up
|
# We have needed arguments, if we want to force the user to confirm, set that up
|
||||||
if self.show_tool_args_confirmation:
|
if self.show_tool_args_confirmation:
|
||||||
self.confirmed = False # set that we're not confirmed
|
self.confirmed = False # set that we're not confirmed
|
||||||
workflow.logger.info("Waiting for user confirm signal...")
|
workflow.logger.info("Waiting for user confirm signal...")
|
||||||
# if we have all needed arguments (handled above) and not holding for a debugging confirm, proceed:
|
# if we have all needed arguments (handled above) and not holding for a debugging confirm, proceed:
|
||||||
else:
|
else:
|
||||||
@@ -174,14 +197,11 @@ class AgentGoalWorkflow:
|
|||||||
workflow.logger.info("All steps completed. Resetting goal.")
|
workflow.logger.info("All steps completed. Resetting goal.")
|
||||||
self.change_goal("goal_choose_agent_type")
|
self.change_goal("goal_choose_agent_type")
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
# else if the next step is to be done with the conversation such as if the user requests it via asking to "end conversation"
|
# else if the next step is to be done with the conversation such as if the user requests it via asking to "end conversation"
|
||||||
elif next_step == "done":
|
elif next_step == "done":
|
||||||
|
|
||||||
self.add_message("agent", tool_data)
|
self.add_message("agent", tool_data)
|
||||||
|
|
||||||
#here we could send conversation to AI for analysis
|
# here we could send conversation to AI for analysis
|
||||||
|
|
||||||
# end the workflow
|
# end the workflow
|
||||||
return str(self.conversation_history)
|
return str(self.conversation_history)
|
||||||
@@ -192,10 +212,10 @@ class AgentGoalWorkflow:
             self.prompt_queue,
             self.goal,
             MAX_TURNS_BEFORE_CONTINUE,
-            self.add_message
+            self.add_message,
         )
 
-    #Signal that comes from api/main.py via a post to /send-prompt
+    # Signal that comes from api/main.py via a post to /send-prompt
     @workflow.signal
     async def user_prompt(self, prompt: str) -> None:
         """Signal handler for receiving user prompts."""
@@ -205,28 +225,28 @@ class AgentGoalWorkflow:
             return
         self.prompt_queue.append(prompt)
 
-    #Signal that comes from api/main.py via a post to /confirm
+    # Signal that comes from api/main.py via a post to /confirm
     @workflow.signal
     async def confirm(self) -> None:
         """Signal handler for user confirmation of tool execution."""
         workflow.logger.info("Received user signal: confirmation")
         self.confirmed = True
 
-    #Signal that comes from api/main.py via a post to /end-chat
+    # Signal that comes from api/main.py via a post to /end-chat
     @workflow.signal
     async def end_chat(self) -> None:
         """Signal handler for ending the chat session."""
         workflow.logger.info("signal received: end_chat")
         self.chat_ended = True
 
-    #Signal that can be sent from Temporal Workflow UI to enable debugging confirm and override .env setting
+    # Signal that can be sent from Temporal Workflow UI to enable debugging confirm and override .env setting
     @workflow.signal
     async def enable_debugging_confirm(self) -> None:
         """Signal handler for enabling debugging confirm UI & associated logic."""
         workflow.logger.info("signal received: enable_debugging_confirm")
         self.enable_debugging_confirm = True
 
-    #Signal that can be sent from Temporal Workflow UI to disable debugging confirm and override .env setting
+    # Signal that can be sent from Temporal Workflow UI to disable debugging confirm and override .env setting
|
||||||
@workflow.signal
|
@workflow.signal
|
||||||
async def disable_debugging_confirm(self) -> None:
|
async def disable_debugging_confirm(self) -> None:
|
||||||
"""Signal handler for disabling debugging confirm UI & associated logic."""
|
"""Signal handler for disabling debugging confirm UI & associated logic."""
|
||||||
@@ -237,7 +257,7 @@ class AgentGoalWorkflow:
|
|||||||
def get_conversation_history(self) -> ConversationHistory:
|
def get_conversation_history(self) -> ConversationHistory:
|
||||||
"""Query handler to retrieve the full conversation history."""
|
"""Query handler to retrieve the full conversation history."""
|
||||||
return self.conversation_history
|
return self.conversation_history
|
||||||
|
|
||||||
@workflow.query
|
@workflow.query
|
||||||
def get_agent_goal(self) -> AgentGoal:
|
def get_agent_goal(self) -> AgentGoal:
|
||||||
"""Query handler to retrieve the current goal of the agent."""
|
"""Query handler to retrieve the current goal of the agent."""
|
||||||
@@ -245,7 +265,7 @@ class AgentGoalWorkflow:
|
|||||||
|
|
||||||
@workflow.query
|
@workflow.query
|
||||||
def get_summary_from_history(self) -> Optional[str]:
|
def get_summary_from_history(self) -> Optional[str]:
|
||||||
"""Query handler to retrieve the conversation summary if available.
|
"""Query handler to retrieve the conversation summary if available.
|
||||||
Used only for continue as new of the workflow."""
|
Used only for continue as new of the workflow."""
|
||||||
return self.conversation_summary
|
return self.conversation_summary
|
||||||
|
|
||||||
@@ -272,9 +292,9 @@ class AgentGoalWorkflow:
|
|||||||
)
|
)
|
||||||
|
|
||||||
def change_goal(self, goal: str) -> None:
|
def change_goal(self, goal: str) -> None:
|
||||||
""" Change the goal (usually on request of the user).
|
"""Change the goal (usually on request of the user).
|
||||||
|
|
||||||
Args:
|
Args:
|
||||||
goal: goal to change to)
|
goal: goal to change to)
|
||||||
"""
|
"""
|
||||||
if goal is not None:
|
if goal is not None:
|
||||||
@@ -283,8 +303,9 @@ class AgentGoalWorkflow:
|
|||||||
self.goal = listed_goal
|
self.goal = listed_goal
|
||||||
workflow.logger.info("Changed goal to " + goal)
|
workflow.logger.info("Changed goal to " + goal)
|
||||||
if goal is None:
|
if goal is None:
|
||||||
workflow.logger.warning("Goal not set after goal reset, probably bad.") # if this happens, there's probably a problem with the goal list
|
workflow.logger.warning(
|
||||||
|
"Goal not set after goal reset, probably bad."
|
||||||
|
) # if this happens, there's probably a problem with the goal list
|
||||||
|
|
||||||
# workflow function that defines if chat should end
|
# workflow function that defines if chat should end
|
||||||
def chat_should_end(self) -> bool:
|
def chat_should_end(self) -> bool:
|
||||||
@@ -293,9 +314,11 @@ class AgentGoalWorkflow:
|
|||||||
return True
|
return True
|
||||||
else:
|
else:
|
||||||
return False
|
return False
|
||||||
|
|
||||||
# define if we're ready for tool execution
|
# define if we're ready for tool execution
|
||||||
def ready_for_tool_execution(self, waiting_for_confirm: bool, current_tool: Any) -> bool:
|
def ready_for_tool_execution(
|
||||||
|
self, waiting_for_confirm: bool, current_tool: Any
|
||||||
|
) -> bool:
|
||||||
if self.confirmed and waiting_for_confirm and current_tool and self.tool_data:
|
if self.confirmed and waiting_for_confirm and current_tool and self.tool_data:
|
||||||
return True
|
return True
|
||||||
else:
|
else:
|
||||||
@@ -304,19 +327,19 @@ class AgentGoalWorkflow:
|
|||||||
# LLM-tagged prompts start with "###"
|
# LLM-tagged prompts start with "###"
|
||||||
# all others are from the user
|
# all others are from the user
|
||||||
def is_user_prompt(self, prompt) -> bool:
|
def is_user_prompt(self, prompt) -> bool:
|
||||||
if prompt.startswith("###"):
|
if prompt.startswith("###"):
|
||||||
return False
|
return False
|
||||||
else:
|
else:
|
||||||
return True
|
return True
|
||||||
|
|
||||||
# look up env settings in an activity so they're part of history
|
# look up env settings in an activity so they're part of history
|
||||||
async def lookup_wf_env_settings(self, combined_input: CombinedInput)->None:
|
async def lookup_wf_env_settings(self, combined_input: CombinedInput) -> None:
|
||||||
env_lookup_input = EnvLookupInput(
|
env_lookup_input = EnvLookupInput(
|
||||||
show_confirm_env_var_name = "SHOW_CONFIRM",
|
show_confirm_env_var_name="SHOW_CONFIRM",
|
||||||
show_confirm_default = True,
|
show_confirm_default=True,
|
||||||
)
|
)
|
||||||
env_output:EnvLookupOutput = await workflow.execute_activity_method(
|
env_output: EnvLookupOutput = await workflow.execute_activity_method(
|
||||||
ToolActivities.get_wf_env_vars,
|
ToolActivities.get_wf_env_vars,
|
||||||
env_lookup_input,
|
env_lookup_input,
|
||||||
start_to_close_timeout=LLM_ACTIVITY_START_TO_CLOSE_TIMEOUT,
|
start_to_close_timeout=LLM_ACTIVITY_START_TO_CLOSE_TIMEOUT,
|
||||||
retry_policy=RetryPolicy(
|
retry_policy=RetryPolicy(
|
||||||
@@ -325,11 +348,13 @@ class AgentGoalWorkflow:
|
|||||||
)
|
)
|
||||||
self.show_tool_args_confirmation = env_output.show_confirm
|
self.show_tool_args_confirmation = env_output.show_confirm
|
||||||
self.multi_goal_mode = env_output.multi_goal_mode
|
self.multi_goal_mode = env_output.multi_goal_mode
|
||||||
|
|
||||||
# execute the tool - return False if we're not waiting for confirm anymore (always the case if it works successfully)
|
# execute the tool - return False if we're not waiting for confirm anymore (always the case if it works successfully)
|
||||||
#
|
#
|
||||||
async def execute_tool(self, current_tool: str)->bool:
|
async def execute_tool(self, current_tool: str) -> bool:
|
||||||
workflow.logger.info(f"workflow step: user has confirmed, executing the tool {current_tool}")
|
workflow.logger.info(
|
||||||
|
f"workflow step: user has confirmed, executing the tool {current_tool}"
|
||||||
|
)
|
||||||
self.confirmed = False
|
self.confirmed = False
|
||||||
waiting_for_confirm = False
|
waiting_for_confirm = False
|
||||||
confirmed_tool_data = self.tool_data.copy()
|
confirmed_tool_data = self.tool_data.copy()
|
||||||
@@ -342,21 +367,27 @@ class AgentGoalWorkflow:
|
|||||||
self.tool_data,
|
self.tool_data,
|
||||||
self.tool_results,
|
self.tool_results,
|
||||||
self.add_message,
|
self.add_message,
|
||||||
self.prompt_queue
|
self.prompt_queue,
|
||||||
)
|
)
|
||||||
|
|
||||||
# set new goal if we should
|
# set new goal if we should
|
||||||
if len(self.tool_results) > 0:
|
if len(self.tool_results) > 0:
|
||||||
if "ChangeGoal" in self.tool_results[-1].values() and "new_goal" in self.tool_results[-1].keys():
|
if (
|
||||||
|
"ChangeGoal" in self.tool_results[-1].values()
|
||||||
|
and "new_goal" in self.tool_results[-1].keys()
|
||||||
|
):
|
||||||
new_goal = self.tool_results[-1].get("new_goal")
|
new_goal = self.tool_results[-1].get("new_goal")
|
||||||
self.change_goal(new_goal)
|
self.change_goal(new_goal)
|
||||||
elif "ListAgents" in self.tool_results[-1].values() and self.goal.id != "goal_choose_agent_type":
|
elif (
|
||||||
|
"ListAgents" in self.tool_results[-1].values()
|
||||||
|
and self.goal.id != "goal_choose_agent_type"
|
||||||
|
):
|
||||||
self.change_goal("goal_choose_agent_type")
|
self.change_goal("goal_choose_agent_type")
|
||||||
return waiting_for_confirm
|
return waiting_for_confirm
|
||||||
|
|
||||||
# debugging helper - drop this in various places in the workflow to get status
|
# debugging helper - drop this in various places in the workflow to get status
|
||||||
# also don't forget you can look at the workflow itself and do queries if you want
|
# also don't forget you can look at the workflow itself and do queries if you want
|
||||||
def print_useful_workflow_vars(self, status_or_step:str) -> None:
|
def print_useful_workflow_vars(self, status_or_step: str) -> None:
|
||||||
print(f"***{status_or_step}:***")
|
print(f"***{status_or_step}:***")
|
||||||
if self.goal:
|
if self.goal:
|
||||||
print(f"current goal: {self.goal.id}")
|
print(f"current goal: {self.goal.id}")
|
||||||
@@ -367,4 +398,3 @@ class AgentGoalWorkflow:
|
|||||||
else:
|
else:
|
||||||
print("no tool data initialized yet")
|
print("no tool data initialized yet")
|
||||||
print(f"self.confirmed: {self.confirmed}")
|
print(f"self.confirmed: {self.confirmed}")
|
||||||
|
|
||||||
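The hunk above reformats the goal-switching check that inspects the most recent tool result. Pulled out of the workflow as a standalone sketch (function and field names are simplified here for illustration, not the exact repo code), the decision logic is:

```python
def next_goal(tool_results, current_goal_id):
    """Return the goal id to switch to based on the last tool result, or None.

    Mirrors the reformatted check above: a ChangeGoal result carries a
    new_goal id; a ListAgents result resets to the agent-chooser goal.
    """
    if not tool_results:
        return None
    last = tool_results[-1]
    if "ChangeGoal" in last.values() and "new_goal" in last:
        return last["new_goal"]
    if "ListAgents" in last.values() and current_goal_id != "goal_choose_agent_type":
        return "goal_choose_agent_type"
    return None

# Example: a ChangeGoal tool result names the goal to switch to
# (the goal id below is hypothetical).
print(next_goal([{"tool": "ChangeGoal", "new_goal": "goal_food_order"}], "goal_x"))
# prints: goal_food_order
```

Note the guard on `current_goal_id`: if the agent is already in `goal_choose_agent_type`, a `ListAgents` result does not trigger a redundant goal change.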
|
@@ -1,8 +1,9 @@
 from datetime import timedelta
-from typing import Dict, Any, Deque
+from typing import Any, Deque, Dict

 from temporalio import workflow
-from temporalio.exceptions import ActivityError
 from temporalio.common import RetryPolicy
+from temporalio.exceptions import ActivityError

 from models.data_types import ConversationHistory, ToolPromptInput
 from prompts.agent_prompt_generators import (
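The signal handlers in the workflow diff are small state mutators, each fed by an HTTP endpoint in api/main.py. A plain-Python stand-in (no Temporal runtime; a sketch for illustration, not the repo's code) shows the state each endpoint flips:

```python
class AgentState:
    """Stand-in for the signal-driven workflow state.

    Field and method names follow the handlers in the diff above; the
    endpoint mapping in the comments comes from the workflow's own comments.
    """

    def __init__(self):
        self.prompt_queue = []
        self.confirmed = False
        self.chat_ended = False

    def user_prompt(self, prompt):  # POST /send-prompt
        self.prompt_queue.append(prompt)

    def confirm(self):  # POST /confirm
        self.confirmed = True

    def end_chat(self):  # POST /end-chat
        self.chat_ended = True


state = AgentState()
state.user_prompt("find flights to Sydney")
state.confirm()
print(state.prompt_queue, state.confirmed, state.chat_ended)
# prints: ['find flights to Sydney'] True False
```

In the real workflow these mutations happen inside `@workflow.signal` methods, so Temporal records each signal in event history and the main loop's `workflow.wait_condition` wakes up to act on the new state.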