fix: add numpy issue to troubleshooting (#2048)

* docs: add numpy issue to troubleshooting

* fix: troubleshooting link

...
This commit is contained in:
Javier Martinez 2024-08-07 12:16:03 +02:00 committed by GitHub
parent b16abbefe4
commit 4ca6d0cb55
2 changed files with 22 additions and 5 deletions

@@ -46,4 +46,19 @@ huggingface:
embedding:
embed_dim: 384
```
</Callout>
# Building Llama-cpp with NVIDIA GPU support
## Out-of-memory error
If you encounter an out-of-memory error while running `llama-cpp` with CUDA, you can try the following steps to resolve the issue (a combined one-line invocation is shown after the list):
1. **Set the following environment variable:**
```bash
export TOKENIZERS_PARALLELISM=true
```
2. **Run PrivateGPT:**
```bash
poetry run python -m private_gpt
```
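For a one-off run, the same thing can be done in a single command by setting the variable inline; this is a minimal sketch assuming the standard Poetry setup and the `private_gpt` module name:
```bash
# Set TOKENIZERS_PARALLELISM only for this invocation of PrivateGPT
TOKENIZERS_PARALLELISM=true poetry run python -m private_gpt
```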
Thanks to [MarioRossiGithub](https://github.com/MarioRossiGithub) for providing this solution.