diff --git a/README.md b/README.md
index a2b22dc..ede3c07 100644
--- a/README.md
+++ b/README.md
@@ -24,6 +24,8 @@ From the /scripts directory:
 
 Example: `poetry run python send_message.py 'I want to fly from San Francisco'`
 
 NOTE: The workflow will pause on the 'confirm' step until the user sends a 'confirm' signal. Use the 'get_tool_data' query to see the current state of the workflow.
+
+You can send a 'confirm' signal using `poetry run python send_confirm.py`
 
 5. Get the conversation history summary by querying the workflow. Example: `poetry run python get_history.py`
@@ -36,4 +38,5 @@ Run query get_tool_data to see the data the tool has collected so far.
 ## TODO
 - The LLM prompts move through 3 mock tools (FindEvents, SearchFlights, CreateInvoice) but I should make them contact real APIs.
 - Might need to abstract the json example in the prompt generator to be part of a ToolDefinition (prevent overfitting to the example).
-- I need to build a chat interface so it's not cli-controlled. Also want to show some 'behind the scenes' of the agents being used as they run.
\ No newline at end of file
+- I need to build a chat interface so it's not cli-controlled. Also want to show some 'behind the scenes' of the agents being used as they run.
+- What happens if I don't want to confirm a step, but instead want to correct it? TODO figure out
\ No newline at end of file
diff --git a/scripts/send_confirm.py b/scripts/send_confirm.py
new file mode 100644
index 0000000..22be926
--- /dev/null
+++ b/scripts/send_confirm.py
@@ -0,0 +1,20 @@
+import asyncio
+import sys
+
+from temporalio.client import Client
+
+
+async def main():
+    # Connect to Temporal and signal the paused workflow to resume
+    client = await Client.connect("localhost:7233")
+
+    workflow_id = "agent-workflow"
+
+    await client.get_workflow_handle(workflow_id).signal("confirm")
+
+
+if __name__ == "__main__":
+    if len(sys.argv) != 1:
+        print("Usage: python send_confirm.py")
+    else:
+        asyncio.run(main())
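
The new `send_confirm.py` script unblocks the Temporal workflow, which sits paused at its 'confirm' step until the signal arrives, while the 'get_tool_data' query reads state at any time. As a framework-free sketch of that pause/signal/query interaction (the names `MockAgentWorkflow`, `signal_confirm`, and `query_get_tool_data` are hypothetical stand-ins, not part of this repo or the Temporal SDK):

```python
import asyncio


class MockAgentWorkflow:
    """Toy stand-in for the agent workflow: blocks at the
    'confirm' step until a confirm signal is received."""

    def __init__(self):
        self._confirmed = asyncio.Event()
        self.tool_data = {"step": "confirm", "status": "waiting"}

    def signal_confirm(self):
        # Analogous to handle.signal("confirm")
        self._confirmed.set()

    def query_get_tool_data(self):
        # Analogous to handle.query("get_tool_data"): read-only snapshot
        return dict(self.tool_data)

    async def run(self):
        # The workflow parks here until the confirm signal fires
        await self._confirmed.wait()
        self.tool_data = {"step": "done", "status": "confirmed"}


async def demo():
    wf = MockAgentWorkflow()
    task = asyncio.create_task(wf.run())
    await asyncio.sleep(0)  # let the workflow reach its wait point
    before = wf.query_get_tool_data()  # queryable while paused
    wf.signal_confirm()                # what send_confirm.py does
    await task
    after = wf.query_get_tool_data()
    return before, after


before, after = asyncio.run(demo())
print(before["status"], after["status"])  # waiting confirmed
```

This also hints at the last TODO item: a 'correct' action could be a second signal that carries a payload and resets `tool_data` instead of setting the event.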