Nathan Lenas
494bb9eea9
Merge 9177a0ad73 into b7ee43788d
2024-11-28 15:02:58 +01:00
Javier Martinez
5851b02378
feat: update llama-index + dependencies ( #2092 )
* chore: update libraries
* fix: mypy
* chore: more updates
* fix: mypy/black
* chore: fix docker warnings
* fix: mypy
* fix: black
2024-09-26 16:29:52 +02:00
Nathan Lenas
9177a0ad73
fix: parse metadata as json, allow metadata typing
2024-07-29 09:26:40 +02:00
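This commit and the related metadata commits below (typing the metadata dict, making it an optional parameter on the ingest routes) amount to decoding a JSON string into a properly typed dict. The sketch below is illustrative only: the function name and its exact behaviour are assumptions, not the project's actual ingest API.

```python
# Minimal sketch of parsing an optional metadata field as JSON with a
# concrete dict type. The name parse_ingest_metadata is hypothetical,
# not the project's real API.
import json
from typing import Any


def parse_ingest_metadata(raw: str | None) -> dict[str, Any] | None:
    """Parse the optional metadata sent alongside an ingest request.

    The field arrives as a JSON-encoded string (e.g. from a multipart
    form), so it is decoded here and checked to be a JSON object rather
    than a bare value or array, giving callers a properly typed dict.
    """
    if raw is None or raw.strip() == "":
        return None
    parsed = json.loads(raw)
    if not isinstance(parsed, dict):
        raise ValueError("metadata must be a JSON object")
    return parsed


if __name__ == "__main__":
    print(parse_ingest_metadata('{"author": "Nathan", "page": 3}'))
    print(parse_ingest_metadata(None))
```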
Nathan Lenas
f47c05730d
fix: remove metadata from bulk ingestion
2024-07-23 09:34:16 +02:00
Nathan Lenas
50388f6a33
fix: specify dict type, fix bulk ingestion with metadata
2024-07-23 09:18:27 +02:00
Nathan Lenas
d559d54e1a
feat: Add optional metadata param to ingest routes
2024-07-23 08:50:54 +02:00
Brett England
134fc54d7d
feat(ingest): Created a faster ingestion mode - pipeline ( #1750 )
* Unify pgvector and postgres connection settings
* Remove local changes
* Update file pgvector->postgres
* postgresql should be postgres
* Adding pipeline ingestion mode
* Disable Hugging Face parallelism. Continue past file-to-doc transform failures
* Semaphore to limit docq async workers. ETA reporting
2024-03-19 21:24:46 +01:00
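The pipeline-mode bullets above (a semaphore to limit async workers, continuing past file-to-doc transform failures, ETA reporting) can be sketched with a plain asyncio semaphore. Everything here, including the names and the ETA arithmetic, is a hypothetical illustration rather than the commit's actual implementation.

```python
# Hypothetical sketch of a semaphore-bounded async ingestion pipeline
# with a rough ETA report; not the project's actual code.
import asyncio
import time


async def ingest_file(path: str) -> None:
    # Placeholder for the real file -> documents -> embeddings work.
    await asyncio.sleep(0.1)


async def run_pipeline(paths: list[str], max_workers: int = 8) -> None:
    sem = asyncio.Semaphore(max_workers)  # cap concurrent workers
    start = time.monotonic()
    done = 0

    async def worker(path: str) -> None:
        nonlocal done
        async with sem:
            try:
                await ingest_file(path)
            except Exception as exc:  # continue past per-file failures
                print(f"skipping {path}: {exc}")
            done += 1
            elapsed = time.monotonic() - start
            eta = elapsed / done * (len(paths) - done)
            print(f"{done}/{len(paths)} done, ~{eta:.0f}s remaining")

    await asyncio.gather(*(worker(p) for p in paths))


if __name__ == "__main__":
    asyncio.run(run_pipeline([f"doc_{i}.txt" for i in range(20)], max_workers=4))
```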
Iván Martínez
45f05711eb
feat: Upgrade LlamaIndex to 0.10 ( #1663 )
* Extract optional dependencies
* Separate local mode into llms-llama-cpp and embeddings-huggingface for clarity
* Support Ollama embeddings
* Upgrade to LlamaIndex 0.10.14. Remove legacy use of ServiceContext in ContextChatEngine
* Fix vector retriever filters
2024-03-06 17:51:30 +01:00
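For the 0.10 upgrade, the key API shift referenced above is dropping the legacy ServiceContext in favour of the global Settings object, plus optional extras such as Ollama embeddings. The snippet below is a hedged sketch; the exact import paths and constructor arguments depend on which llama-index 0.10.x packages are installed (the llama-index-embeddings-ollama extra and a local Ollama server are assumed here).

```python
# Hedged sketch of 0.10-style configuration: the global Settings object
# replaces the legacy ServiceContext. Assumes the
# llama-index-embeddings-ollama extra is installed and Ollama is running
# locally on its default port.
from llama_index.core import Settings
from llama_index.embeddings.ollama import OllamaEmbedding

# Ollama-served embeddings configured globally, instead of through a
# per-engine ServiceContext as in pre-0.10 releases.
Settings.embed_model = OllamaEmbedding(
    model_name="nomic-embed-text",
    base_url="http://localhost:11434",
)
```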
lopagela
56af625d71
Fix the parallel ingestion mode, and make it available through conf ( #1336 )
* Fix the parallel ingestion mode, and make it available through conf
Also updated the documentation to show how to configure the ingest mode.
* PR feedback: redirect to documentation
2023-11-30 11:41:55 +01:00
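Making the ingestion mode configurable, and the multiple-strategy speedup in the commit below, come down to selecting an ingestion strategy from a settings value. The following sketch is hypothetical: the mode names, function names, and worker count are illustrative, not the project's actual settings schema.

```python
# Hypothetical sketch of wiring the ingest mode through configuration:
# the mode string and the strategies here are illustrative only.
from concurrent.futures import ProcessPoolExecutor


def ingest_one(path: str) -> None:
    """Placeholder for the per-file ingestion work."""
    print(f"ingesting {path}")


def ingest(paths: list[str], mode: str = "simple", workers: int = 4) -> None:
    if mode == "parallel":
        # Fan the files out across worker processes.
        with ProcessPoolExecutor(max_workers=workers) as pool:
            list(pool.map(ingest_one, paths))
    else:
        # Default sequential strategy.
        for path in paths:
            ingest_one(path)


if __name__ == "__main__":
    ingest(["a.txt", "b.txt"], mode="parallel", workers=2)
```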
lopagela
bafdd3baf1
Ingestion speedup: multiple strategies ( #1309 )
2023-11-25 20:12:09 +01:00