feat(docs): update documentation and fix preview-docs (#2000)

* docs: add missing configurations

* docs: change HF embeddings by ollama

* docs: add disclaimer about Gradio UI

* docs: improve readability in concepts

* docs: reorder `Fully Local Setups`

* docs: improve setup instructions

* docs: avoid duplicate documentation and use a table to show the different options

* docs: rename privateGpt to PrivateGPT

* docs: update ui image

* docs: remove useless header

* docs: convert ingestion disclaimers to alerts

* docs: add UI alternatives

* docs: reference UI alternatives in disclaimers

* docs: fix table

* chore: update doc preview version

* chore: add permissions

* chore: remove useless line

* docs: fixes

...
This commit is contained in:
Javier Martinez 2024-07-18 10:06:51 +02:00 committed by GitHub
parent 01b7ccd064
commit 4523a30c8f
13 changed files with 161 additions and 100 deletions


@@ -1,44 +1,31 @@
# Downloading Gated and Private Models

Many models are gated or private, requiring special access to use them. Follow these steps to gain access and set up your environment for using these models.

## Accessing Gated Models

1. **Request Access:**
   Follow the instructions provided [here](https://huggingface.co/docs/hub/en/models-gated) to request access to the gated model.

2. **Generate a Token:**
   Once you have access, generate a token by following the instructions [here](https://huggingface.co/docs/hub/en/security-tokens).

3. **Set the Token:**
   Add the generated token to your `settings.yaml` file:

   ```yaml
   huggingface:
     access_token: <your-token>
   ```

   Alternatively, set the `HF_TOKEN` environment variable:

   ```bash
   export HF_TOKEN=<your-token>
   ```
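Before launching PrivateGPT, you can optionally confirm that the token is picked up. A minimal sketch using the `huggingface_hub` client (assuming that package is installed in your environment; this check is not part of the official setup) might look like this:

```python
# Optional sanity check: verify that a valid Hugging Face token is available.
# Assumes the `huggingface_hub` package is installed.
from huggingface_hub import whoami

# whoami() picks up the token from HF_TOKEN or the cached login and
# fails if no valid token is found.
user = whoami()
print(f"Authenticated to Hugging Face as: {user['name']}")
```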
# Tokenizer Setup

PrivateGPT uses the `AutoTokenizer` class from HuggingFace's `transformers` library to tokenize input text accurately. It connects to HuggingFace's API to download the appropriate tokenizer for the specified model.

## Configuring the Tokenizer

1. **Specify the Model:**
   In your `settings.yaml` file, specify the model you want to use:

   ```yaml
   llm:
     tokenizer: mistralai/Mistral-7B-Instruct-v0.2
   ```

2. **Set Access Token for Gated Models:**
   If you are using a gated model, ensure the `access_token` is set as mentioned in the previous section.

This configuration ensures that PrivateGPT can download and use the correct tokenizer for the model you are working with.
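For reference, the tokenizer download is roughly equivalent to the following `transformers` call. This is a sketch of the underlying mechanism only; the exact arguments PrivateGPT passes may differ:

```python
# Rough sketch of what the tokenizer setup does under the hood.
# Assumes `transformers` is installed; PrivateGPT's actual call may differ.
import os
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.2",   # value of `llm.tokenizer` in settings.yaml
    token=os.environ.get("HF_TOKEN"),       # only needed for gated models
)
print(tokenizer.encode("Hello, PrivateGPT!"))
```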