private-gpt/private_gpt/server/completions/completions_router.py
lopagela aa70d3d9f0
Add simple Basic auth (#1203)
* Add simple Basic auth

To enable basic authentication, set `server.auth.enabled` to `true`.

The static string defined in `server.auth.secret` must then be sent in the
`Authorization` header of every request.

The health check endpoint will always be accessible, regardless of the API
auth configuration.
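
A minimal configuration sketch, assuming the dotted keys above map to nested
entries in the project's `settings.yaml` (the secret value is a placeholder):

```yaml
server:
  auth:
    enabled: true
    # Clients must send this exact string in the Authorization header.
    secret: "Basic c2VjcmV0OmtleQ=="
```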

* Fix linting and type check

* Fighting with mypy being too restrictive

Had to disable mypy in the `auth` module, as we are not using the same
signature for the `authenticated` method in both branches.

mypy was complaining that the signatures of `authenticated` must be
identical, regardless of which logical branch defines it.
Given that FastAPI accommodates differing method signatures (it will
inject the dependencies into the method call), this mypy warning was
actually preventing us from doing something legitimate.

mypy doc: https://mypy.readthedocs.io/en/stable/common_issues.html
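
A minimal sketch of the pattern in question; names such as `AUTH_ENABLED` and
`AUTH_SECRET` are illustrative stand-ins for the real settings lookup, not the
actual `auth` module code:

```python
from fastapi import Header, HTTPException

# Illustrative stand-ins for values read from the server.auth.* settings.
AUTH_ENABLED = True
AUTH_SECRET = "Basic c2VjcmV0OmtleQ=="

if not AUTH_ENABLED:

    def authenticated() -> bool:
        """Authentication disabled: every request is authorized."""
        return True

else:

    def authenticated(authorization: str = Header("")) -> bool:  # type: ignore[misc]
        """Authorize only requests whose Authorization header matches the secret."""
        if authorization != AUTH_SECRET:
            raise HTTPException(status_code=401, detail="Not authenticated")
        return True
```

mypy flags the second definition because its signature differs from the first,
hence the per-definition ignore.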

* Write tests to verify that the simple auth is working
2023-11-12 19:05:00 +01:00
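
A minimal sketch of the kind of test the last bullet describes, using
FastAPI's `TestClient`; the app import path and the secret value are
assumptions:

```python
from fastapi.testclient import TestClient

from private_gpt.main import app  # assumption: where the FastAPI app lives

client = TestClient(app)


def test_rejects_requests_without_the_authorization_header() -> None:
    response = client.post("/v1/completions", json={"prompt": "hello"})
    assert response.status_code == 401


def test_accepts_requests_with_the_configured_secret() -> None:
    response = client.post(
        "/v1/completions",
        headers={"Authorization": "Basic c2VjcmV0OmtleQ=="},  # server.auth.secret
        json={"prompt": "hello"},
    )
    assert response.status_code == 200
```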

from fastapi import APIRouter, Depends
from pydantic import BaseModel
from starlette.responses import StreamingResponse

from private_gpt.open_ai.extensions.context_filter import ContextFilter
from private_gpt.open_ai.openai_models import (
    OpenAICompletion,
    OpenAIMessage,
)
from private_gpt.server.chat.chat_router import ChatBody, chat_completion
from private_gpt.server.utils.auth import authenticated

completions_router = APIRouter(prefix="/v1", dependencies=[Depends(authenticated)])


class CompletionsBody(BaseModel):
    prompt: str
    use_context: bool = False
    context_filter: ContextFilter | None = None
    include_sources: bool = True
    stream: bool = False

    model_config = {
        "json_schema_extra": {
            "examples": [
                {
                    "prompt": "How do you fry an egg?",
                    "stream": False,
                    "use_context": False,
                    "include_sources": False,
                }
            ]
        }
    }


@completions_router.post(
    "/completions",
    response_model=None,
    summary="Completion",
    responses={200: {"model": OpenAICompletion}},
    tags=["Contextual Completions"],
)
def prompt_completion(body: CompletionsBody) -> OpenAICompletion | StreamingResponse:
    """We recommend most users use our Chat completions API.

    Given a prompt, the model will return one predicted completion. If `use_context`
    is set to `true`, the model will use context coming from the ingested documents
    to create the response. The documents being used can be filtered using the
    `context_filter` and passing the document IDs to be used. Ingested document IDs
    can be found using the `/ingest/list` endpoint. If you want all ingested
    documents to be used, remove `context_filter` altogether.

    When using `'include_sources': true`, the API will return the source Chunks used
    to create the response, which come from the context provided.

    When using `'stream': true`, the API will return data chunks following [OpenAI's
    streaming model](https://platform.openai.com/docs/api-reference/chat/streaming):
    ```
    {"id":"12345","object":"completion.chunk","created":1694268190,
    "model":"private-gpt","choices":[{"index":0,"delta":{"content":"Hello"},
    "finish_reason":null}]}
    ```
    """
    # The completions endpoint is a thin wrapper around the chat endpoint:
    # the prompt is wrapped in a single user message and delegated.
    message = OpenAIMessage(content=body.prompt, role="user")
    chat_body = ChatBody(
        messages=[message],
        use_context=body.use_context,
        stream=body.stream,
        include_sources=body.include_sources,
        context_filter=body.context_filter,
    )
    return chat_completion(chat_body)
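
A hypothetical client call against a locally running instance; the URL, port,
and secret value are assumptions, not part of this file:

```python
import requests

# The Authorization value must match the configured server.auth.secret.
response = requests.post(
    "http://localhost:8001/v1/completions",
    headers={"Authorization": "Basic c2VjcmV0OmtleQ=="},
    json={"prompt": "How do you fry an egg?", "stream": False},
)
print(response.json())
```

With `"stream": True`, the response body instead arrives as the data chunks
described in the docstring above.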