Commit graph

9 commits

Author SHA1 Message Date
google-labs-jules[bot]
6fd3a23daf Refactor and enhance LLM prompt styles
This commit introduces several improvements to the prompt formatting logic in `private_gpt/components/llm/prompt_helper.py`:

1.  **Llama3PromptStyle**:
    *   Implemented tool handling capabilities, allowing for the formatting of tool call and tool result messages within the Llama 3 prompt structure.
    *   Ensured correct usage of BOS, EOT, and other Llama 3 specific tokens.

2.  **MistralPromptStyle**:
    *   Refactored the `_messages_to_prompt` method for more robust handling of various conversational scenarios, including consecutive user messages and initial assistant messages.
    *   Ensured correct application of `<s>`, `</s>`, and `[INST]` tags.

3.  **ChatMLPromptStyle**:
    *   Corrected the logic for handling system messages to prevent duplication and ensure accurate ChatML formatting (`<|im_start|>role\ncontent<|im_end|>`).

4.  **TagPromptStyle**:
    *   Addressed a FIXME comment by incorporating `<s>` (BOS) and `</s>` (EOS) tokens, making it more suitable for Llama-based models like Vigogne.
    *   Fixed a minor bug related to enum string conversion.

5.  **Unit Tests**:
    *   Added a new test suite in `tests/components/llm/test_prompt_helper.py`.
    *   These tests provide comprehensive coverage for all modified prompt styles, verifying correct prompt generation for various inputs, edge cases, and special token placements.

These changes improve the correctness, robustness, and feature set of the supported prompt styles, leading to better compatibility and interaction with the respective language models.
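The Llama 3 structure referenced above can be sketched as follows. The special tokens (`<|begin_of_text|>`, `<|start_header_id|>`, `<|eot_id|>`) are Llama 3's documented chat-template tokens; the helper function itself is illustrative and is not the project's actual `Llama3PromptStyle` implementation.

```python
# Illustrative sketch of the Llama 3 chat template; not the project's
# actual prompt_helper.py code.
BOS = "<|begin_of_text|>"
EOT = "<|eot_id|>"


def format_llama3(messages: list[dict[str, str]]) -> str:
    """Render (role, content) pairs into a Llama 3 prompt string."""
    prompt = BOS
    for m in messages:
        prompt += (
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
            f"{m['content']}{EOT}"
        )
    # Leave the prompt open so the model generates the assistant turn.
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt


print(format_llama3([
    {"role": "system", "content": "You are helpful."},
    {"role": "user", "content": "Hi"},
]))
```

Tool call and tool result messages would follow the same header/EOT framing, with their own roles in the header slot.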
2025-06-10 21:05:34 +00:00
Javier Martinez
5851b02378
feat: update llama-index + dependencies (#2092)
* chore: update libraries

* fix: mypy

* chore: more updates

* fix: mypy/black

* chore: fix docker warnings

* fix: mypy

* fix: black
2024-09-26 16:29:52 +02:00
Javier Martinez
9027d695c1
feat: make llama3.1 as default (#2022)
* feat: change ollama default model to llama3.1

* chore: bump versions

* feat: Change default model in local mode to llama3.1

* chore: make sure last poetry version is used

* fix: mypy

* fix: do not add BOS (with last llamacpp-python version)
2024-07-31 14:35:36 +02:00
Robert Hirsch
d080969407
added llama3 prompt (#1962)
* added llama3 prompt

* more fixes to pass tests; changed type VectorStore -> BasePydanticVectorStore, see https://github.com/run-llama/llama_index/blob/main/CHANGELOG.md#2024-05-14

* fix: new llama3 prompt

---------

Co-authored-by: Javier Martinez <javiermartinezalvarez98@gmail.com>
2024-07-29 17:28:00 +02:00
Pablo Orgaz
c7212ac7cc
fix(LLM): mistral ignoring assistant messages (#1954)
* fix: mistral ignoring assistant messages

* fix: typing

* fix: fix tests
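The shape of the Mistral instruct format this fix targets can be sketched as below: assistant turns belong in the prompt between `[INST]` blocks, closed with `</s>`, rather than being dropped. The function name is illustrative, not the project's actual code.

```python
# Hedged sketch of Mistral instruct formatting with assistant turns kept.
def format_mistral(messages: list[dict[str, str]]) -> str:
    prompt = "<s>"
    for m in messages:
        if m["role"] == "user":
            prompt += f"[INST] {m['content']} [/INST]"
        elif m["role"] == "assistant":
            # The bug was ignoring these turns; they must be emitted
            # after the preceding [INST] block and closed with </s>.
            prompt += f" {m['content']}</s>"
    return prompt


print(format_mistral([
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "Hello!"},
    {"role": "user", "content": "How are you?"},
]))
# -> <s>[INST] Hi [/INST] Hello!</s>[INST] How are you? [/INST]
```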
2024-05-30 15:41:16 +02:00
Iván Martínez
45f05711eb
feat: Upgrade LlamaIndex to 0.10 (#1663)
* Extract optional dependencies

* Separate local mode into llms-llama-cpp and embeddings-huggingface for clarity

* Support Ollama embeddings

* Upgrade to llamaindex 0.10.14. Remove legacy use of ServiceContext in ContextChatEngine

* Fix vector retriever filters
2024-03-06 17:51:30 +01:00
CognitiveTech
e326126d0d
feat: add mistral + chatml prompts (#1426) 2024-01-16 22:51:14 +01:00
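The ChatML format added here follows the `<|im_start|>role\ncontent<|im_end|>` convention; a minimal sketch (the helper name is hypothetical, not this commit's code):

```python
# Illustrative ChatML rendering; not the project's actual implementation.
def format_chatml(messages: list[dict[str, str]]) -> str:
    out = ""
    for m in messages:
        out += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    # Cue the model to produce the assistant turn.
    out += "<|im_start|>assistant\n"
    return out


print(format_chatml([
    {"role": "system", "content": "You are helpful."},
    {"role": "user", "content": "Hi"},
]))
```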
Louis Melchior
a3ed14c58f
feat(llm): drop default_system_prompt (#1385)
As discussed on Discord, the decision has been made to remove system prompts by default, to better separate API and UI usage.

A concurrent PR (#1353) is enabling the dynamic setting of a system prompt in the UI.

Therefore, if UI users want a custom system prompt, they can specify one directly in the UI.
If API users want a custom prompt, they can include it directly in the messages they send to the API.

In light of the two use cases above, it becomes clear that a default system_prompt does not need to exist.
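For API users, passing a custom system prompt in the messages looks roughly like the sketch below. The payload uses the common OpenAI-style chat shape; the exact endpoint and field names of this project's API are not taken from this commit and may differ.

```python
# Hedged sketch: a system prompt supplied per-request instead of a
# server-side default. Payload shape is illustrative (OpenAI-style).
import json

payload = {
    "messages": [
        {"role": "system", "content": "Answer only from the ingested documents."},
        {"role": "user", "content": "Summarize the latest report."},
    ],
}
print(json.dumps(payload, indent=2))
```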
2023-12-08 23:13:51 +01:00
Iván Martínez
944c43bfa8
Multi language support - fern debug (#1307)
---------

Co-authored-by: Louis <lpglm@orange.fr>
Co-authored-by: LeMoussel <cnhx27@gmail.com>
2023-11-25 14:34:23 +01:00