mirror of
https://github.com/zylon-ai/private-gpt.git
synced 2025-12-22 07:40:12 +01:00
feat(recipe): add our first recipe Summarize (#2028)
Some checks are pending
publish docs / publish-docs (push) Waiting to run
release-please / release-please (push) Waiting to run
tests / setup (push) Waiting to run
tests / ${{ matrix.quality-command }} (black) (push) Blocked by required conditions
tests / ${{ matrix.quality-command }} (mypy) (push) Blocked by required conditions
tests / ${{ matrix.quality-command }} (ruff) (push) Blocked by required conditions
tests / test (push) Blocked by required conditions
tests / all_checks_passed (push) Blocked by required conditions
* feat: add summary recipe
* test: add summary tests
* docs: move all recipes docs
* docs: add recipes and summarize doc
* docs: update openapi reference
* refactor: split method in two method (summary)
* feat: add initial summarize ui
* feat: add mode explanation
* fix: mypy
* feat: allow to configure async property in summarize
* refactor: move modes to enum and update mode explanations
* docs: fix url
* docs: remove list-llm pages
* docs: remove double header
* fix: summary description
This commit is contained in:
parent
40638a18a5
commit
8119842ae6
13 changed files with 743 additions and 148 deletions
@@ -1,122 +0,0 @@
# List of working LLM

**Do you have any working combination of LLM and embeddings?**

Please open a PR to add it to the list, and come on our Discord to tell us about it!

## Prompt style

LLMs might have been trained with different prompt styles.
The prompt style is the way the prompt is written, and how the system message is injected in the prompt.

For example, `llama2` looks like this:
```text
<s>[INST] <<SYS>>
{{ system_prompt }}
<</SYS>>

{{ user_message }} [/INST]
```

While `default` (the `llama_index` default) looks like this:

```text
system: {{ system_prompt }}
user: {{ user_message }}
assistant: {{ assistant_message }}
```

The "`tag`" style looks like this:

```text
<|system|>: {{ system_prompt }}
<|user|>: {{ user_message }}
<|assistant|>: {{ assistant_message }}
```

The "`mistral`" style looks like this:

```text
<s>[INST] You are an AI assistant. [/INST]</s>[INST] Hello, how are you doing? [/INST]
```

The "`chatml`" style looks like this:

```text
<|im_start|>system
{{ system_prompt }}<|im_end|>
<|im_start|>user
{{ user_message }}<|im_end|>
<|im_start|>assistant
{{ assistant_message }}
```

Some LLMs will not understand these prompt styles and will not work, returning nothing.
You can try changing the prompt style to `default` (or `tag`) in the settings; this
changes the way the messages are formatted before they are passed to the LLM.
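
For instance, a minimal sketch of such an override could look like the following (only the `prompt_style` key is shown here; the configuration examples below show complete `local` blocks):

```yml
local:
  # Switch the formatting if the model returns empty answers; the styles
  # described above are "llama2", "default", "tag", "mistral" and "chatml".
  prompt_style: "default"
```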

## Example of configuration

You might want to change the prompt depending on the language and model you are using.

### English, with instructions

`settings-en.yaml`:

```yml
local:
  llm_hf_repo_id: TheBloke/Mistral-7B-Instruct-v0.1-GGUF
  llm_hf_model_file: mistral-7b-instruct-v0.1.Q4_K_M.gguf
  embedding_hf_model_name: BAAI/bge-small-en-v1.5
  prompt_style: "llama2"
```

### French, with instructions

`settings-fr.yaml`:

```yml
local:
  llm_hf_repo_id: TheBloke/Vigogne-2-7B-Instruct-GGUF
  llm_hf_model_file: vigogne-2-7b-instruct.Q4_K_M.gguf
  embedding_hf_model_name: dangvantuan/sentence-camembert-base
  prompt_style: "default"
  # prompt_style: "tag" # also works
  # The default system prompt is injected only when `prompt_style` != default and there is no system message in the discussion
  # default_system_prompt: Vous êtes un assistant IA qui répond à la question posée à la fin en utilisant le contexte suivant. Si vous ne connaissez pas la réponse, dites simplement que vous ne savez pas, n'essayez pas d'inventer une réponse. Veuillez répondre exclusivement en français.
```

You might want to change the prompt, as the one above might not directly answer your question.
You can read online about how to write a good prompt, but in a nutshell, make it (extremely) directive.

You can troubleshoot your prompt by writing multiline requests in the UI, spelling out
the interaction you want with the model, for example:

```text
Tu es un programmeur senior qui programme en python et utilise le framework fastapi. Écris-moi un serveur qui retourne "hello world".
```

Another example:

```text
Context: None
Situation: tu es au milieu d'un champ.
Tâche: va à la rivière, en bas du champ.
Décris comment aller à la rivière.
```

### Optimised Models

GodziLLa2-70B LLM (English, rank 2 on the HuggingFace OpenLLM Leaderboard), combined with the bge-large embedding model (rank 1 on the HuggingFace MTEB Leaderboard).

`settings-optimised.yaml`:

```yml
local:
  llm_hf_repo_id: TheBloke/GodziLLa2-70B-GGUF
  llm_hf_model_file: godzilla2-70b.Q4_K_M.gguf
  embedding_hf_model_name: BAAI/bge-large-en
  prompt_style: "llama2"
```

### German speaking model

`settings-de.yaml`:

```yml
local:
  llm_hf_repo_id: TheBloke/em_german_leo_mistral-GGUF
  llm_hf_model_file: em_german_leo_mistral.Q4_K_M.gguf
  embedding_hf_model_name: T-Systems-onsite/german-roberta-sentence-transformer-v2
  # options: llama2, default or tag
  prompt_style: "default"
```
23 fern/docs/pages/recipes/quickstart.mdx Normal file
@@ -0,0 +1,23 @@
# Recipes

Recipes are predefined use cases that help users solve very specific tasks using PrivateGPT.
They provide a streamlined approach to achieving common goals with the platform, offering both a starting point and inspiration for further exploration.
The main goal of Recipes is to empower the community to create and share solutions, expanding the capabilities of PrivateGPT.

## How to Create a New Recipe

1. **Identify the Task**: Define a specific task or problem that the Recipe will address.
2. **Develop the Solution**: Create a clear and concise guide, including any necessary code snippets or configurations.
3. **Submit a PR**: Fork the PrivateGPT repository, add your Recipe to the appropriate section, and submit a PR for review.

We encourage you to be creative and think outside the box! Your contributions help shape the future of PrivateGPT.

## Available Recipes

<Cards>
  <Card
    title="Summarize"
    icon="fa-solid fa-file-alt"
    href="/recipes/general-use-cases/summarize"
  />
</Cards>
20 fern/docs/pages/recipes/summarize.mdx Normal file
@@ -0,0 +1,20 @@
The Summarize Recipe provides a method to extract concise summaries from ingested documents or texts using PrivateGPT.

This tool is particularly useful for quickly understanding large volumes of information by distilling key points and main ideas.

## Use Case

The primary use case for the `Summarize` tool is to automate the summarization of lengthy documents,
making it easier for users to grasp the essential information without reading through entire texts.
This can be applied in various scenarios, such as summarizing research papers, news articles, or business reports.

## Key Features

1. **Ingestion-compatible**: The user provides the text to be summarized. The text can be provided directly or retrieved from ingested documents within the system.
2. **Customization**: The summary generation can be influenced by providing specific `instructions` or a `prompt`. These inputs guide the model on how to frame the summary, allowing for customization according to user needs.
3. **Streaming Support**: The tool supports streaming, allowing for real-time summary generation, which can be particularly useful for handling large texts or providing immediate feedback. A request sketch illustrating these options follows below.
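
To make these features concrete, here is a minimal request sketch. It assumes a local PrivateGPT instance listening on port 8001 and a summarization endpoint at `/v1/summarize` that accepts `text`, `instructions` and `stream` fields; these names are assumptions, so check the API reference for the actual route and schema.

```python
import requests

# Minimal sketch: the endpoint path, port and field names below are assumptions,
# check PrivateGPT's API reference for the actual schema.
BASE_URL = "http://localhost:8001"

payload = {
    # Text to summarize, provided directly instead of from ingested documents.
    "text": (
        "PrivateGPT lets you ask questions about your documents without any "
        "data leaving your execution environment."
    ),
    # Optional instructions that guide how the summary is framed.
    "instructions": "Summarize in one sentence for a non-technical reader.",
    # Set to True to receive the summary incrementally as it is generated.
    "stream": False,
}

response = requests.post(f"{BASE_URL}/v1/summarize", json=payload, timeout=120)
response.raise_for_status()
print(response.json())  # the response body contains the generated summary
```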

## Contributing

If you have ideas for improving the Summarize recipe or want to add new features, feel free to contribute!
You can submit your enhancements via a pull request on our [GitHub repository](https://github.com/zylon-ai/private-gpt).