fix(config): make tokenizer optional and include a troubleshooting doc (#1998)

* docs: add troubleshooting

* fix: pass HF token to setup script and prevent downloading the tokenizer when the token is empty

* fix: improve logging and disable the model-specific tokenizer by default

* chore: change HF_TOKEN environment variable to align with the default config

* fix: mypy
Javier Martinez 2024-07-17 10:06:27 +02:00 committed by GitHub
parent 15f73dbc48
commit 01b7ccd064
6 changed files with 65 additions and 12 deletions

@@ -41,6 +41,8 @@ navigation:
path: ./docs/pages/installation/concepts.mdx
- page: Installation
path: ./docs/pages/installation/installation.mdx
- page: Troubleshooting
path: ./docs/pages/installation/troubleshooting.mdx
# Manual of privateGPT: how to use it and configure it
- tab: manual
layout:

@@ -81,6 +81,8 @@ set PGPT_PROFILES=ollama
make run
```
Refer to the [troubleshooting](./troubleshooting) section for specific issues you might encounter.
### Local, Ollama-powered setup - RECOMMENDED
**The easiest way to run PrivateGPT fully locally** is to depend on Ollama for the LLM. Ollama makes local LLMs and embeddings super easy to install and use, abstracting away the complexity of GPU support. It's the recommended setup for local development.

@@ -0,0 +1,44 @@
# Downloading Gated and Private Models
Many models are gated or private, requiring special access to use them. Follow these steps to gain access and set up your environment for using these models.
## Accessing Gated Models
1. **Request Access:**
   Follow the instructions provided [here](https://huggingface.co/docs/hub/en/models-gated) to request access to the gated model.
2. **Generate a Token:**
   Once you have access, generate a token by following the instructions [here](https://huggingface.co/docs/hub/en/security-tokens).
3. **Set the Token:**
   Add the generated token to your `settings.yaml` file:
   ```yaml
   huggingface:
     access_token: <your-token>
   ```
   Alternatively, set the `HF_TOKEN` environment variable (see the verification sketch after this list):
   ```bash
   export HF_TOKEN=<your-token>
   ```
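To confirm that the token is valid and being picked up, the `huggingface_hub` client offers a quick check. The following is a minimal verification sketch, not part of PrivateGPT itself; it assumes the `huggingface_hub` package is installed and that the token is exported as `HF_TOKEN`:
```python
import os

from huggingface_hub import whoami

# Read the token from the environment (or paste it directly for a one-off check).
token = os.environ.get("HF_TOKEN")

# whoami() raises an error if the token is invalid or missing;
# otherwise it returns details about the authenticated account.
info = whoami(token=token)
print(f"Authenticated as: {info['name']}")
```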
# Tokenizer Setup
PrivateGPT uses Hugging Face's `AutoTokenizer` (from the `transformers` library) to tokenize input text accurately. It connects to the Hugging Face Hub to download the appropriate tokenizer for the specified model.
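Conceptually, this download amounts to a `transformers` call along the lines of the sketch below; the model name and the token lookup are illustrative, not PrivateGPT's exact code:
```python
import os

from transformers import AutoTokenizer

# Download (and cache) the tokenizer for the configured model.
# The `token` argument is only needed for gated or private models.
tokenizer = AutoTokenizer.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.2",
    token=os.environ.get("HF_TOKEN"),
)

print(tokenizer.encode("Hello, PrivateGPT!"))
```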
## Configuring the Tokenizer
1. **Specify the Model:**
   In your `settings.yaml` file, specify the model you want to use:
   ```yaml
   llm:
     tokenizer: mistralai/Mistral-7B-Instruct-v0.2
   ```
2. **Set Access Token for Gated Models:**
   If you are using a gated model, ensure the `access_token` is set as mentioned in the previous section.
This configuration ensures that PrivateGPT can download and use the correct tokenizer for the model you are working with.
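Because the tokenizer is now optional, the loading logic roughly follows the pattern below. The setting names mirror `settings.yaml`, but the function is a hypothetical stand-in rather than PrivateGPT's actual code: when no tokenizer is configured, the model-specific download is skipped and callers fall back to a default.
```python
import logging
import os

from transformers import AutoTokenizer

logger = logging.getLogger(__name__)

# Hypothetical stand-ins for values PrivateGPT reads from settings.yaml.
tokenizer_name = ""  # llm.tokenizer (unset by default after this change)
access_token = os.environ.get("HF_TOKEN")  # huggingface.access_token


def load_tokenizer():
    """Download a model-specific tokenizer only when one is configured."""
    if not tokenizer_name:
        logger.info("No tokenizer configured; using the default tokenizer.")
        return None  # callers fall back to the library's default tokenizer
    return AutoTokenizer.from_pretrained(tokenizer_name, token=access_token)
```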