readme update

Steve Androulakis
2025-01-09 15:33:02 -08:00


Work in progress.
This demo shows a multi-turn conversation with an AI agent running inside a Temporal workflow. The goal is to collect the information needed to accomplish a task. A simple DSL input defines what to collect (currently set up with mock functions to search for events, book flights around those events, then create an invoice for those flights). The AI responds with clarifications and asks for any information still missing toward that goal. By default it uses OpenAI's gpt-4o; a local LLM via Ollama is deprecated (see below).
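To make the idea concrete, the shape of such a DSL input might look something like this. This is a hypothetical sketch: the field names, tool names' argument lists, and the `missing_args` helper are illustrative, not the demo's actual schema.

```python
# Hypothetical sketch of a goal definition for the agent (not the demo's actual schema).
# Each tool lists the arguments the agent must collect from the user before calling it.
goal = {
    "description": "Find an event, book flights around it, then invoice for the flights",
    "tools": [
        {"name": "search_events", "args": ["city", "month"]},
        {"name": "book_flights", "args": ["origin", "event_dates"]},
        {"name": "create_invoice", "args": ["amount"]},
    ],
}

def missing_args(collected: dict, tool: dict) -> list:
    """Return the arguments the agent still needs to ask the user for."""
    return [a for a in tool["args"] if a not in collected]
```

The agent loops over `missing_args` and keeps asking clarifying questions until every tool's arguments are filled in.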
## Setup
* See `.env_example` for the required environment variables and copy to `.env` in the root directory.
* Requires an OpenAI key for the gpt-4o model. Set this in the `OPENAI_API_KEY` environment variable in `.env`.
* Requires a RapidAPI key for sky-scrapper (how we find flights). Set this in the `RAPIDAPI_KEY` environment variable in `.env`.
* It's free to sign up and get a key at [RapidAPI](https://rapidapi.com/apiheya/api/sky-scrapper)
* It's free to sign up and get a key at [Stripe](https://stripe.com/)
* If you're lazy, go to `tools/create_invoice.py` and replace the `create_invoice` function with the mock `create_invoice_example` in the same file.
* Install and run Temporal. Follow the instructions in the [Temporal documentation](https://learn.temporal.io/getting_started/python/dev_environment/#set-up-a-local-temporal-service-for-development-with-temporal-cli) to install and run the Temporal server.
* Install the dependencies: `poetry install`
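With keys from the services above, `.env` in the root directory ends up looking like this (the values are placeholders; the exact variable names, including the Stripe one, are listed in `.env_example`):

```
OPENAI_API_KEY=sk-your-openai-key
RAPIDAPI_KEY=your-rapidapi-key
```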
### Python Environment
Requires [Poetry](https://python-poetry.org/) to manage dependencies.
1. `python -m venv venv`
2. `source venv/bin/activate`
3. `poetry install`
### React UI
- `cd frontend`
- `npm install` to install the dependencies.
#### Deprecated:
* Install [Ollama](https://ollama.com) and the [Qwen2.5 14B](https://ollama.com/library/qwen2.5) model (`ollama run qwen2.5:14b`; note: the model is about a 9GB download).
* The local LLM is disabled, as GPT-4o performed better for this use case. To use Ollama instead, examine `./activities/tool_activities.py` and rename the functions.
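If you do re-enable the local model, a minimal sketch of talking to a locally running Ollama server over its HTTP chat endpoint looks like the following. The endpoint and response shape come from Ollama's API; the `build_chat_payload` and `chat` helper names are illustrative, not the actual functions in `tool_activities.py`.

```python
import json
import urllib.request

# Ollama's default local endpoint for chat completions.
OLLAMA_URL = "http://localhost:11434/api/chat"

def build_chat_payload(messages: list, model: str = "qwen2.5:14b") -> dict:
    """Build the JSON payload for Ollama's /api/chat endpoint."""
    # stream=False asks for a single complete response instead of chunks.
    return {"model": model, "messages": messages, "stream": False}

def chat(messages: list, model: str = "qwen2.5:14b") -> str:
    """Send a chat request to a locally running Ollama server."""
    payload = json.dumps(build_chat_payload(messages, model)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]

# Usage (requires `ollama run qwen2.5:14b` to be running):
#   reply = chat([{"role": "user", "content": "Say hello"}])
```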
## Running the demo
### Run a Temporal Dev Server
Then run the API and UI using the instructions below.
- Access the API at `/docs` to see the available endpoints.
### UI
- `npm run dev` to start the dev server.
- Access the UI at `http://localhost:5173`