This commit introduces several improvements to the prompt formatting logic in `private_gpt/components/llm/prompt_helper.py`:
1. **Llama3PromptStyle**:
* Implemented tool handling, allowing tool call and tool result messages to be formatted within the Llama 3 prompt structure (a format sketch follows this list).
* Ensured correct usage of BOS, EOT, and other Llama 3-specific tokens.
2. **MistralPromptStyle**:
* Refactored the `_messages_to_prompt` method to handle a wider range of conversational shapes robustly, including consecutive user messages and conversations that open with an assistant message (sketched below).
* Ensured correct application of `<s>`, `</s>`, and `[INST]` tags.
3. **ChatMLPromptStyle**:
* Corrected the system-message handling to prevent duplication and ensure accurate ChatML formatting (`<|im_start|>role\ncontent<|im_end|>`); see the ChatML sketch below.
4. **TagPromptStyle**:
* Addressed a FIXME comment by incorporating `<s>` (BOS) and `</s>` (EOS) tokens, making the style more suitable for Llama-based models such as Vigogne (sketched below).
* Fixed a minor bug in converting the role enum to its string form.
5. **Unit Tests**:
* Added a new test suite in `tests/components/llm/test_prompt_helper.py`.
* These tests comprehensively cover all modified prompt styles, verifying prompt generation across typical inputs, edge cases, and special-token placement; a representative test is sketched after this list.
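For readers skimming the diff, here is a minimal sketch of the Llama 3 layout the style targets. The special tokens follow Meta's published chat format; the `format_llama3` helper and the `ipython` mapping for tool results are illustrative assumptions, not the commit's actual code.

```python
def format_llama3(messages: list[dict[str, str]]) -> str:
    BOS, EOT = "<|begin_of_text|>", "<|eot_id|>"
    prompt = BOS
    for m in messages:
        # Tool results conventionally arrive under a dedicated role
        # ("ipython" in Meta's reference format); this mapping is an
        # assumption for illustration, not a quote of the commit.
        role = "ipython" if m["role"] == "tool" else m["role"]
        prompt += f"<|start_header_id|>{role}<|end_header_id|>\n\n{m['content']}{EOT}"
    # A trailing assistant header cues the model to generate its reply.
    return prompt + "<|start_header_id|>assistant<|end_header_id|>\n\n"
```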
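The Mistral layout can be sketched the same way. Only the `<s>`/`</s>`/`[INST]` placement reflects the commit's stated goal; folding system text and consecutive user turns into a single `[INST]` block is an assumed merging strategy.

```python
def format_mistral(messages: list[dict[str, str]]) -> str:
    """Sketch of the Mistral layout: <s>[INST] user [/INST] answer</s> ..."""
    prompt, pending = "<s>", []
    for m in messages:
        if m["role"] in ("system", "user"):
            # Merge system text and consecutive user turns into one [INST]
            # block so no block is ever empty (assumed strategy).
            pending.append(m["content"])
        else:
            # An assistant turn closes the current exchange; a conversation
            # that opens with an assistant message simply emits it first.
            if pending:
                prompt += f"[INST] {' '.join(pending)} [/INST]"
                pending = []
            prompt += f" {m['content']}</s>"
    if pending:  # conversation ends on a user turn awaiting a reply
        prompt += f"[INST] {' '.join(pending)} [/INST]"
    return prompt
```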
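ChatML is the simplest of the four. The correction amounts to emitting each message as exactly one `<|im_start|>…<|im_end|>` block, so the system message appears once at the top; the helper name below is invented for illustration.

```python
def format_chatml(messages: list[dict[str, str]]) -> str:
    # One block per message; since the loop emits each message exactly once,
    # the system block cannot be duplicated.
    blocks = "".join(
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages
    )
    # Open an assistant block so the model continues from here.
    return blocks + "<|im_start|>assistant\n"
```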
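The TagPromptStyle change can be pictured as wrapping the tagged transcript in BOS/EOS and reading the role name off the enum's value. The `MessageRole` stand-in loosely mirrors the llama-index enum; the exact tag spelling and the EOS placement are assumptions.

```python
from enum import Enum


class MessageRole(str, Enum):  # stand-in for the llama-index enum
    SYSTEM = "system"
    USER = "user"
    ASSISTANT = "assistant"


def format_tag(messages: list[tuple[MessageRole, str]]) -> str:
    body = ""
    for role, text in messages:
        # role.value yields "user" rather than the "MessageRole.USER"
        # artifact that plain str() on an Enum member can produce, which is
        # the likely shape of the enum-string-conversion bug noted above.
        eos = "</s>" if role is MessageRole.ASSISTANT else ""
        body += f"<|{role.value}|>: {text}{eos}\n"
    # A leading BOS addresses the FIXME: Llama-family models expect it first.
    return f"<s>{body}<|assistant|>: "
```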
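Finally, a sketch of the kind of assertion the new suite makes. The class name and test path match the commit; the `ChatMessage`/`MessageRole` imports and the public `messages_to_prompt` entry point are assumed from the llama-index API the project builds on.

```python
from llama_index.core.llms import ChatMessage, MessageRole

from private_gpt.components.llm.prompt_helper import Llama3PromptStyle


def test_llama3_places_special_tokens():
    prompt = Llama3PromptStyle().messages_to_prompt(
        [ChatMessage(role=MessageRole.USER, content="Hello")]
    )
    # BOS opens the prompt, the user turn is wrapped in header tokens, and a
    # trailing assistant header cues the model to answer.
    assert prompt.startswith("<|begin_of_text|>")
    assert "<|start_header_id|>user<|end_header_id|>" in prompt
    assert prompt.rstrip().endswith("<|start_header_id|>assistant<|end_header_id|>")
```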
These changes improve the correctness, robustness, and feature set of the supported prompt styles, leading to better compatibility and interaction with the respective language models.