* Moved prompt_style to the main LLM settings, since all LLMs from llama_index can utilize it. Also added temperature, context window size, max_tokens, and max_new_tokens to the openailike implementation to help keep its settings consistent with the other implementations.
* Removed prompt_style from llamacpp entirely
* Fixed settings-local.yaml to include prompt_style in the LLM settings instead of llamacpp.
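As a rough sketch, the relevant options now sit under the top-level llm block rather than under llamacpp. The exact keys and values below are assumptions based on the description above, not a copy of the shipped settings-local.yaml:

```yaml
# Hypothetical excerpt of settings-local.yaml after this change.
llm:
  mode: llamacpp
  prompt_style: "mistral"   # moved here from the llamacpp block
  max_new_tokens: 512
  context_window: 3900
  temperature: 0.1
```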
* Adding Postgres for the doc and index store
* Added documentation. Renamed the Postgres database local -> simple. Added Postgres storage dependencies.
* Update documentation for postgres storage
* Renaming feature to nodestore
* Updated docstore -> nodestore in the docs
* Fixed remaining docstore references missed in the docs
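For reference, a minimal sketch of what the nodestore configuration might look like in settings.yaml; the key names are assumptions based on the feature rename above:

```yaml
# Hypothetical nodestore settings using Postgres as the doc and index store.
nodestore:
  database: postgres
postgres:
  host: localhost
  port: 5432
  database: postgres
  user: postgres
  password: postgres
```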
* Updated poetry.lock
* Formatting updates to pass ruff/black checks
* Correction to unreachable code!
* Format adjustment to pass black test
* Adjust extra inclusion name for vector pg
* extra dep change for pg vector
* storage-postgres -> storage-nodestore-postgres
* Hash change on poetry lock
* Extract optional dependencies
* Separate local mode into llms-llama-cpp and embeddings-huggingface for clarity
* Support Ollama embeddings
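A hedged sketch of how the separated modes and Ollama embeddings might be selected in settings.yaml; the mode names and keys are assumptions, not the exact shipped configuration:

```yaml
# Hypothetical mode selection after splitting the old "local" mode.
llm:
  mode: llamacpp      # provided by the llms-llama-cpp extra
embedding:
  mode: ollama        # huggingface would be the other split-out option
ollama:
  api_base: http://localhost:11434
```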
* Upgrade to llamaindex 0.10.14. Remove legacy use of ServiceContext in ContextChatEngine
* Fix vector retriever filters
This mode behaves the same as the openai mode, except that it allows setting custom models not
supported by OpenAI. It can be used with any tool that serves models from an OpenAI-compatible API.
Implements #1424
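As an illustration, and assuming the usual OpenAI-style settings keys, pointing the openailike mode at a local OpenAI-compatible server might look like this (values are placeholders):

```yaml
# Hypothetical settings for the openailike mode against a self-hosted endpoint.
llm:
  mode: openailike
openai:
  api_base: http://localhost:8000/v1   # any OpenAI-compatible server
  api_key: EMPTY
  model: my-custom-model               # a model name the server exposes
```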
As discussed on Discord, the decision has been made to remove the default system prompts, to better separate API and UI usage.
A concurrent PR (#1353) enables setting a system prompt dynamically in the UI.
Therefore, if UI users want to use a custom system prompt, they can specify one directly in the UI.
If API users want to use a custom system prompt, they can pass it directly in the messages they send to the API.
In light of the two use cases above, it becomes clear that a default system_prompt does not need to exist.
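For example, a chat request that carries its own system prompt could look like the sketch below (shown in YAML for readability; field names follow the OpenAI-style chat schema and the values are placeholders):

```yaml
# Hypothetical request body: the caller supplies the system prompt explicitly,
# since no default system_prompt is injected anymore.
messages:
  - role: system
    content: "Answer only from the provided context."
  - role: user
    content: "Summarize the ingested document."
```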
* Fix the parallel ingestion mode, and make it available through the configuration
Also updated the documentation to show how to configure the ingest mode.
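A minimal sketch of what enabling it might look like in settings.yaml; the key names (ingest_mode, count_workers) are assumptions based on the description:

```yaml
# Hypothetical ingestion settings enabling the parallel mode.
embedding:
  ingest_mode: parallel   # the non-parallel/simple mode would remain the default
  count_workers: 4        # number of parallel workers; tune to available cores
```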
* PR feedback: redirect to documentation
* added max_new_tokens as a configuration option to the llm block in settings
* Update fern/docs/pages/manual/settings.mdx
Co-authored-by: lopagela <lpglm@orange.fr>
* Update private_gpt/settings/settings.py
Add default value for max_new_tokens = 256
Co-authored-by: lopagela <lpglm@orange.fr>
* Addressed location of docs comment
* reformatting from running 'make check'
* remove default config value from settings.yaml
---------
Co-authored-by: lopagela <lpglm@orange.fr>