private-gpt/private_gpt/components/llm
Latest commit fc13368bc7 by uw4
feat(llm): Support for Google Gemini LLMs and Embeddings (#1965)
* Support for Google Gemini LLMs and Embeddings

Initial support for Gemini enables the use of Google LLMs and embedding models (see settings-gemini.yaml; a usage sketch follows below the commit message).

Install via
poetry install --extras "llms-gemini embeddings-gemini"

Notes:
* had to bump llama-index-core to a later version that supports Gemini
* poetry --no-update did not work: Gemini/llama_index seem to require additional (transitive) dependency updates to make it work...

* fix: crash when gemini is not selected

* docs: add gemini llm

---------

Co-authored-by: Javier Martinez <javiermartinezalvarez98@gmail.com>
2024-07-08 11:47:36 +02:00
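To make the change above concrete, here is a minimal, hedged sketch of how the Gemini LLM and embedding model can be constructed through llama-index once the new extras are installed. The module paths come from the llama-index Gemini integrations; the keyword names (model_name, api_key) and the model identifiers shown are assumptions that may vary across llama-index versions, and the actual wiring in llm_component.py may differ.

```python
# Hedged sketch: constructing Gemini LLM and embeddings via the llama-index
# integrations installed by `poetry install --extras "llms-gemini embeddings-gemini"`.
# Assumes GOOGLE_API_KEY is set (in PrivateGPT the key would come from
# settings-gemini.yaml / the Settings object instead).
# Keyword names (model_name vs. model) may differ across llama-index versions.
import os

from llama_index.embeddings.gemini import GeminiEmbedding
from llama_index.llms.gemini import Gemini

api_key = os.environ["GOOGLE_API_KEY"]

# Chat/completion model.
llm = Gemini(api_key=api_key, model_name="models/gemini-pro")

# Embedding model used when indexing and querying documents.
embed_model = GeminiEmbedding(api_key=api_key, model_name="models/embedding-001")

print(llm.complete("Say hello from Gemini."))
```

In PrivateGPT itself these objects are typically selected through the llm/embedding mode settings rather than constructed by hand as above.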
File              Last commit message                                                 Date
custom/           fix: Replacing unsafe eval() with json.loads() (#1890)             2024-04-30 09:58:19 +02:00
__init__.py       Next version of PrivateGPT (#1077)                                  2023-10-19 16:04:35 +02:00
llm_component.py  feat(llm): Support for Google Gemini LLMs and Embeddings (#1965)   2024-07-08 11:47:36 +02:00
prompt_helper.py  fix(LLM): mistral ignoring assistant messages (#1954)              2024-05-30 15:41:16 +02:00
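Regarding the custom entry in the listing above: #1890 replaced an unsafe eval() call with json.loads() when parsing model output. The snippet below is an illustrative sketch of that pattern only; the payload shape is invented for the example and is not the repository's exact code.

```python
# Illustrative sketch of the eval() -> json.loads() pattern referenced by #1890.
# eval() on untrusted text can execute arbitrary Python; json.loads() only
# parses JSON and raises JSONDecodeError on anything else.
import json

raw_line = '{"outputs": [{"text": "hello"}]}'  # hypothetical response chunk

try:
    payload = json.loads(raw_line)  # safe replacement for eval(raw_line)
except json.JSONDecodeError:
    payload = None  # handle malformed chunks instead of executing them

if payload is not None:
    print(payload["outputs"][0]["text"])
```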