Mirror of https://github.com/zylon-ai/private-gpt.git
Search in Docs to UI (#1186)
Move from Context Chunks JSON response to a more comprehensive Search in Docs functionality
commit c81f4b2ebd (parent 1e96e3a29e)
3 changed files with 60 additions and 21 deletions
@@ -348,25 +348,24 @@ computations.
 Gradio UI is a ready to use way of testing most of PrivateGPT API functionalities.

 [image: Gradio UI screenshot]

 ### Execution Modes

 It has 3 modes of execution (you can select in the top-left):

-* Query Documents: uses the context from the
+* Query Docs: uses the context from the
   ingested documents to answer the questions posted in the chat. It also takes
   into account previous chat messages as context.
   * Makes use of `/chat/completions` API with `use_context=true` and no
     `context_filter`.
+* Search in Docs: fast search that returns the 4 most related text
+  chunks, together with their source document and page.
+  * Makes use of `/chunks` API with no `context_filter`, `limit=4` and
+    `prev_next_chunks=0`.
 * LLM Chat: simple, non-contextual chat with the LLM. The ingested documents won't
   be taken into account, only the previous messages.
   * Makes use of `/chat/completions` API with `use_context=false`.
-* Context Chunks: returns the JSON representation of the 2 most related text
-  chunks, together with their metadata, source document and previous and next
-  chunks.
-  * Makes use of `/chunks` API with no `context_filter`, `limit=2` and
-    `prev_next_chunks=1`.

 ### Document Ingestion
File diff suppressed because one or more lines are too long
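For reference, the UI modes described in the hunk above map onto plain API calls. The sketch below is a minimal illustration, not part of this commit: it assumes a local PrivateGPT server reachable at `http://localhost:8001`, routes mounted under `/v1`, and OpenAI-style response bodies; verify paths and field names against your instance's OpenAPI docs.

```python
"""Minimal sketch: exercising the endpoints referenced in the diff directly.

Assumptions (not taken from the diff itself): server at http://localhost:8001,
routes mounted under /v1, OpenAI-style chat responses, and a `data` list in
the /chunks response. Adjust to match your deployment.
"""
import requests

BASE_URL = "http://localhost:8001/v1"  # assumed default, adjust to your settings


def query_docs(question: str) -> str:
    """'Query Docs' mode: chat completion grounded in the ingested documents."""
    resp = requests.post(
        f"{BASE_URL}/chat/completions",
        json={
            "messages": [{"role": "user", "content": question}],
            "use_context": True,  # answer using context from ingested docs
            # no "context_filter": search across all ingested documents
        },
        timeout=120,
    )
    resp.raise_for_status()
    # OpenAI-style response shape assumed here
    return resp.json()["choices"][0]["message"]["content"]


def search_in_docs(text: str) -> list:
    """'Search in Docs' mode: 4 most related chunks, no neighbouring chunks."""
    resp = requests.post(
        f"{BASE_URL}/chunks",
        json={"text": text, "limit": 4, "prev_next_chunks": 0},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["data"]  # assumed: list of chunks with text, score, source


def llm_chat(message: str) -> str:
    """'LLM Chat' mode: plain chat, ingested documents are ignored."""
    resp = requests.post(
        f"{BASE_URL}/chat/completions",
        json={
            "messages": [{"role": "user", "content": message}],
            "use_context": False,
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(query_docs("What do the ingested documents say about setup?"))
    for chunk in search_in_docs("setup instructions"):
        print(chunk.get("score"), str(chunk.get("text", ""))[:80])
    print(llm_chat("Hello!"))
```

The removed Context Chunks tab used the same `/chunks` endpoint with `limit=2` and `prev_next_chunks=1`; only the request parameters differ from the Search in Docs call above.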