Javier Martinez
5851b02378
feat: update llama-index + dependencies (#2092)
* chore: update libraries
* fix: mypy
* chore: more updates
* fix: mypy/black
* chore: fix docker warnings
* fix: mypy
* fix: black
2024-09-26 16:29:52 +02:00
Brett England
134fc54d7d
feat(ingest): Created a faster ingestion mode - pipeline (#1750)
* Unify pgvector and postgres connection settings
* Remove local changes
* Update file pgvector->postgres
* postgresql should be postgres
* Adding pipeline ingestion mode
* Disable Hugging Face parallelism; continue on file-to-doc transform failures
* Add a semaphore to limit doc-queue async workers, plus ETA reporting (see the sketch after this entry)
2024-03-19 21:24:46 +01:00
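The pipeline-ingestion commit above mentions a semaphore that limits how many async workers embed documents at once. A minimal sketch of that pattern, assuming plain asyncio and hypothetical names (`embed_document`, `MAX_WORKERS`) rather than PrivateGPT's actual code:

```python
import asyncio

MAX_WORKERS = 2  # assumed cap; in the project this would come from configuration


async def embed_document(doc_id: int, semaphore: asyncio.Semaphore) -> str:
    # At most MAX_WORKERS coroutines are inside this block at any time;
    # the rest wait on the semaphore, keeping memory and CPU use bounded.
    async with semaphore:
        await asyncio.sleep(0.1)  # stand-in for the real embedding call
        return f"doc-{doc_id} embedded"


async def ingest(doc_ids: list[int]) -> list[str]:
    semaphore = asyncio.Semaphore(MAX_WORKERS)
    return await asyncio.gather(*(embed_document(i, semaphore) for i in doc_ids))


if __name__ == "__main__":
    print(asyncio.run(ingest(list(range(5)))))
```

The ETA reporting mentioned in the same bullet would wrap this gather step, comparing completed tasks against the remaining queue length.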
Iván Martínez
45f05711eb
feat: Upgrade to LlamaIndex 0.10 (#1663)
* Extract optional dependencies
* Separate local mode into llms-llama-cpp and embeddings-huggingface for clarity
* Support Ollama embeddings
* Upgrade to LlamaIndex 0.10.14; remove legacy use of ServiceContext in ContextChatEngine
* Fix vector retriever filters
2024-03-06 17:51:30 +01:00
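The "Fix vector retriever filters" bullet above touches the metadata-filter API that llama-index 0.10 exposes from its core package. A minimal sketch of how such a filter is built and handed to a retriever, assuming an already-built `index` (a `VectorStoreIndex`), so it is illustrative rather than runnable end to end:

```python
from llama_index.core.vector_stores import ExactMatchFilter, MetadataFilters

# Restrict retrieval to nodes whose metadata matches the given key/value pair.
filters = MetadataFilters(
    filters=[ExactMatchFilter(key="file_name", value="report.pdf")]
)

# `index` is assumed to be an existing llama-index VectorStoreIndex.
# retriever = index.as_retriever(similarity_top_k=2, filters=filters)
# nodes = retriever.retrieve("What does the report conclude?")
```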
lopagela
56af625d71
Fix the parallel ingestion mode, and make it available through conf (#1336)
* Fix the parallel ingestion mode, and make it available through conf
Also updated the documentation to show how to configure the ingest mode.
* PR feedback: redirect to documentation
2023-11-30 11:41:55 +01:00
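The commit above re-enables a parallel ingestion mode that is selected through configuration. As a rough illustration of what such a mode does, here is a sketch (not the project's actual code; `load_file` and `parallel_ingest` are hypothetical names) of fanning the file-to-document transformation out over a process pool:

```python
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path


def load_file(path: Path) -> str:
    # Stand-in for the real file -> document transformation (parsing, chunking).
    return path.read_text(errors="ignore")


def parallel_ingest(paths: list[Path], workers: int = 4) -> list[str]:
    # Each worker process handles a subset of the files concurrently.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(load_file, paths))
```

The actual ingest mode is chosen in the project's settings, as the updated documentation referenced in the commit describes.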
lopagela
bafdd3baf1
Ingestion speedup: multiple strategies (#1309)
2023-11-25 20:12:09 +01:00