mirror of
https://github.com/n8n-io/self-hosted-ai-starter-kit.git
synced 2026-03-15 08:48:08 +01:00
Move .env file and simplify setup on mac (#68)
This commit is contained in:

README.md (33 changed lines)
@@ -42,15 +42,17 @@ Engineering world, handles large amounts of data safely.

```bash
git clone https://github.com/n8n-io/self-hosted-ai-starter-kit.git
cd self-hosted-ai-starter-kit
cp .env.example .env # you should update secrets and passwords inside
```
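Since the `.env` file holds the kit's secrets, it is worth replacing the example values before the first start. A minimal sketch, assuming `openssl` is available; the variable names below are illustrative, so check your own `.env.example` for the ones the kit actually uses:

```shell
# Generate random hex secrets for the .env file.
# NOTE: variable names here are assumptions -- match them to .env.example.
POSTGRES_PASSWORD="$(openssl rand -hex 16)"
N8N_ENCRYPTION_KEY="$(openssl rand -hex 24)"
echo "generated a ${#POSTGRES_PASSWORD}-character password"   # prints 32
```

`openssl rand -hex 16` emits 16 random bytes as 32 hex characters, which is plenty for a local Postgres password.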

### Running n8n using Docker Compose

#### For Nvidia GPU users

```bash
git clone https://github.com/n8n-io/self-hosted-ai-starter-kit.git
cd self-hosted-ai-starter-kit
cp .env.example .env # you should update secrets and passwords inside
docker compose --profile gpu-nvidia up
```
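Before starting the GPU profile, it can help to confirm that Docker can see the GPU at all. A hedged sketch, assuming Docker plus the NVIDIA Container Toolkit are installed (the CUDA image tag is only an example):

```shell
# If the NVIDIA runtime is wired up, nvidia-smi inside a CUDA container lists the GPU;
# otherwise print a diagnostic message instead of failing the script.
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi 2>/dev/null \
  || echo "GPU not visible to Docker; check the NVIDIA Container Toolkit install"
```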

@@ -60,9 +62,10 @@ docker compose --profile gpu-nvidia up

### For AMD GPU users on Linux

```bash
git clone https://github.com/n8n-io/self-hosted-ai-starter-kit.git
cd self-hosted-ai-starter-kit
cp .env.example .env # you should update secrets and passwords inside
docker compose --profile gpu-amd up
```
@@ -80,36 +83,30 @@ If you want to run Ollama on your mac, check the
[Ollama homepage](https://ollama.com/)
for installation instructions, and run the starter kit as follows:

```bash
git clone https://github.com/n8n-io/self-hosted-ai-starter-kit.git
cd self-hosted-ai-starter-kit
cp .env.example .env # you should update secrets and passwords inside
docker compose up
```

##### For Mac users running OLLAMA locally

If you're running OLLAMA locally on your Mac (not in Docker), you need to modify the OLLAMA_HOST environment variable
in the n8n service configuration. Update the x-n8n section in your Docker Compose file as follows:

```yaml
x-n8n: &service-n8n
  # ... other configurations ...
  environment:
    # ... other environment variables ...
    - OLLAMA_HOST=host.docker.internal:11434
```

1. Set OLLAMA_HOST to `host.docker.internal:11434` in your .env file.
2. After you see "Editor is now accessible via: <http://localhost:5678/>":
   1. Head to <http://localhost:5678/home/credentials>
   2. Click on "Local Ollama service"
   3. Change the base URL to "http://host.docker.internal:11434/"
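Before wiring n8n to a host-local Ollama, it can help to confirm the daemon is actually listening. A hedged sanity check, assuming `curl` is installed (`/api/tags` is part of Ollama's HTTP API):

```shell
# From the host: Ollama's API should answer on port 11434.
curl --silent --fail http://localhost:11434/api/tags >/dev/null \
  && echo "Ollama reachable on :11434" \
  || echo "Ollama not reachable on :11434"
```

From inside the n8n container, the same check would target `host.docker.internal:11434` instead of `localhost`.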
#### For everyone else

```bash
git clone https://github.com/n8n-io/self-hosted-ai-starter-kit.git
cd self-hosted-ai-starter-kit
cp .env.example .env # you should update secrets and passwords inside
docker compose --profile cpu up
```
@@ -154,7 +151,7 @@ docker compose create && docker compose --profile gpu-nvidia up

* ### For Mac / Apple Silicon users

  ```bash
  docker compose pull
  docker compose create && docker compose up
  ```