From d62f899c29dd3111ce19b6fdc10414f66928843c Mon Sep 17 00:00:00 2001
From: jeanpaul
Date: Fri, 30 Aug 2024 12:04:35 +0200
Subject: [PATCH] Update README with Ollama instructions for both Nvidia and
 Mac users (#13)

---
 README.md | 44 +++++++++++++++++++++++++++++++++++---------
 1 file changed, 35 insertions(+), 9 deletions(-)

diff --git a/README.md b/README.md
index 2f4ae01..894448b 100644
--- a/README.md
+++ b/README.md
@@ -47,6 +47,33 @@ cd self-hosted-ai-starter-kit
 docker compose --profile gpu-nvidia up
 ```
 
+> [!NOTE]
+> If you have not used your Nvidia GPU with Docker before, please follow the
+> [Ollama Docker instructions](https://github.com/ollama/ollama/blob/main/docs/docker.md).
+
+### For Mac / Apple Silicon users
+
+If you’re using a Mac with an M1 or newer processor, you unfortunately can’t
+expose your GPU to the Docker instance. There are two options in this case:
+
+1. Run the starter kit fully on CPU, as in the section "For everyone else"
+   below
+2. Run Ollama on your Mac for faster inference, and connect to that from the
+   n8n instance
+
+If you want to run Ollama on your Mac, check the
+[Ollama homepage](https://ollama.com/)
+for installation instructions, and run the starter kit as follows:
+
+```
+git clone https://github.com/n8n-io/self-hosted-ai-starter-kit.git
+cd self-hosted-ai-starter-kit
+docker compose up
+```
+
+After you have followed the quick-start set-up below, change the Ollama
+credentials by using `http://host.docker.internal:11434/` as the host.
+
 ### For everyone else
 
 ```
@@ -55,14 +82,6 @@ cd self-hosted-ai-starter-kit
 docker compose --profile cpu up
 ```
 
-> [!TIP]
-> If you’re using a Mac with an M1 or newer processor, you can run Ollama on
-> your host machine for faster GPU inference. Unfortunately, you can’t expose
-> the GPU to Docker instances. Check the
-> [Ollama homepage](https://ollama.com/) for installation instructions, and
-> use `http://host.docker.internal:11434/` as the Ollama host in your
-> credentials.
-
 ## ⚡️ Quick start and usage
 
 The main component of the self-hosted AI starter kit is a docker compose file
@@ -101,6 +120,13 @@ language model and Qdrant as your vector store.
 
 ```
 docker compose --profile gpu-nvidia pull
+docker compose create && docker compose --profile gpu-nvidia up
+```
+
+### For Mac / Apple Silicon users
+
+```
+docker compose pull
 docker compose create && docker compose up
 ```
 
@@ -108,7 +134,7 @@ docker compose create && docker compose up
 
 ```
 docker compose --profile cpu pull
-docker compose create && docker compose up
+docker compose create && docker compose --profile cpu up
 ```
 
 ## 👓 Recommended reading
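
A quick follow-up check for the Mac setup in this patch: the snippet below captures the host value to enter in the n8n Ollama credentials, with a hedged way to verify that the n8n container can actually reach Ollama on the host. This is a sketch, not part of the patch — the compose service name `n8n` and the availability of busybox `wget` inside the container are assumptions:

```shell
# Host URL to use in n8n's Ollama credentials when Ollama runs on the host
# machine rather than inside Docker (taken from the README change above):
OLLAMA_HOST="http://host.docker.internal:11434/"
echo "Ollama host for n8n credentials: $OLLAMA_HOST"

# To verify connectivity from inside the running stack, try (assumes the
# compose service is named "n8n" and its image ships busybox wget):
#   docker compose exec n8n wget -qO- "$OLLAMA_HOST"
# A healthy Ollama server answers this root endpoint with a short status line.
```

If the check fails, the usual culprit is Ollama binding only to `127.0.0.1` on the host; `host.docker.internal` requires it to accept connections from the Docker bridge.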