Create Completion
Given a prompt, this method lets you retrieve one or more predicted completions along with the probabilities of alternative tokens at each position.
To leverage the newest models and features, consider using the Chat Completions API instead.
This method is compatible with the OpenAI endpoint for creating a completion.
A minimal request specifies only the required properties:

{
  "model": "model-to-use",
  "prompt": "hello"
}
A request may also include the optional properties:

{
  "seed": 0,
  "temperature": 0,
  "n": 0,
  "stop": [
    "string"
  ],
  "max_tokens": 0,
  "stream": false,
  "model": "string",
  "prompt": "string"
}
seed
Seed to propagate to the LLM so that repeated requests with the same seed are as deterministic as possible. Note that this feature is in beta for most inference servers.
temperature
Sampling temperature to use. Higher values make the output more random; lower values make it more deterministic.
n
Number of completions to generate for the prompt.
stop
Sequences at which the model stops generating further tokens.
max_tokens
Maximum number of tokens to generate in the completion.
stream
Whether to stream back partial progress as it is generated.
model
ID of the completions model to use.
prompt
The prompt to generate completions for.
Successful Response
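Because the endpoint mirrors the OpenAI completions format, a successful response body should resemble the following sketch (all field values are illustrative, not actual output):

```json
{
  "id": "cmpl-123",
  "object": "text_completion",
  "created": 0,
  "model": "model-to-use",
  "choices": [
    {
      "text": "string",
      "index": 0,
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 0,
    "completion_tokens": 0,
    "total_tokens": 0
  }
}
```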
Invalid model endpoint specified or model endpoint not ready.
Unknown model endpoint requested.
Validation Error
"HTTPValidationError Object"
detail
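Assuming the server follows the common FastAPI-style HTTPValidationError shape that this schema name suggests, the `detail` field would carry a list of per-field errors, for example:

```json
{
  "detail": [
    {
      "loc": ["body", "model"],
      "msg": "field required",
      "type": "value_error.missing"
    }
  ]
}
```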
curl -X POST \
  -H 'Authorization: <value>' \
  -H 'Content-Type: application/json' \
  -d '{"model":"string","prompt":"string"}' \
  https://{api_host}/api/v1/compatibility/openai/v1/completions
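The curl request above can be sketched in Python using only the standard library; `API_HOST` and the token value are placeholders you must substitute for your deployment:

```python
import json
from urllib import request

API_HOST = "api.example.com"  # placeholder: your deployment's host
API_TOKEN = "<value>"         # placeholder: a valid authorization value

def build_completion_request(model, prompt, **options):
    """Build (but do not send) an OpenAI-compatible /v1/completions request.

    Optional properties such as max_tokens, temperature, or seed can be
    passed as keyword arguments and are merged into the request body.
    """
    body = {"model": model, "prompt": prompt, **options}
    return request.Request(
        f"https://{API_HOST}/api/v1/compatibility/openai/v1/completions",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": API_TOKEN,
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_completion_request("model-to-use", "hello", max_tokens=16)
# The request can then be sent with urllib.request.urlopen(req) and the
# JSON response read from the returned file-like object.
```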