# Create Chat Completion
- Endpoint: `POST https://api.olddog.shop/v1/chat/completions`
- Authentication: `Authorization: Bearer <API_KEY>`
- Content-Type: `application/json`
Generates a model response from a list of chat messages (compatible with the OpenAI chat.completions API).
## Request parameters
| Name | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Model ID, e.g. gpt-4o-mini (subject to what’s available in the console). |
| messages | array | Yes | Chat message list. Each item includes role and content. |
| temperature | number | No | Sampling temperature, range 0–2. Higher values produce more random output. |
| top_p | number | No | Nucleus sampling probability, range 0–1. Tune either this or temperature, not both. |
| max_tokens | integer | No | Maximum tokens to generate (limits vary by model). |
| stream | boolean | No | Enable streaming. Default false. |
| stop | string / array | No | Stop sequence(s). Generation stops early when matched. |
| presence_penalty | number | No | Penalizes tokens that have already appeared, range -2 to 2. Encourages new topics. |
| frequency_penalty | number | No | Penalizes tokens in proportion to their frequency so far, range -2 to 2. Reduces repetition. |
| user | string | No | End-user identifier for abuse monitoring and analytics. |
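The parameters above can be assembled into a request body as in the following Python sketch. The helper name `build_chat_request` is illustrative, not part of any SDK, and no request is sent:

```python
# Build a request body for POST /v1/chat/completions from the
# parameters in the table above. build_chat_request is an
# illustrative helper, not part of any official SDK.

def build_chat_request(model, messages, **options):
    """Assemble the JSON body; optional fields are sent only when set."""
    allowed = {"temperature", "top_p", "max_tokens", "stream",
               "stop", "presence_penalty", "frequency_penalty", "user"}
    unknown = set(options) - allowed
    if unknown:
        raise ValueError(f"unsupported parameters: {sorted(unknown)}")
    body = {"model": model, "messages": messages}
    body.update(options)
    return body

body = build_chat_request(
    "gpt-4o-mini",
    [{"role": "user", "content": "Hello"}],
    temperature=0.7,
    max_tokens=128,
)
```

Leaving unset optional fields out of the body, rather than sending nulls, lets the server apply its own defaults (e.g. `stream` defaults to `false`).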
### messages fields
| Field | Type | Required | Description |
|---|---|---|---|
| role | string | Yes | system / user / assistant / tool. |
| content | string / array | Yes | Text content, or a multimodal content array (if supported by the model). |
| name | string | No | Optional author name (compatibility field). |
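For models that accept multimodal input, `content` can be an array of typed parts. The part shapes below follow the OpenAI convention (`"type": "text"` / `"type": "image_url"`); confirm support against the models actually available before relying on it:

```python
# A messages list mixing plain-text and multimodal content.
# The content-part schema follows the OpenAI convention and is an
# assumption here; not every model accepts multimodal input.

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/cat.png"}},
        ],
    },
]

# Every item must carry both role and content.
assert all({"role", "content"} <= msg.keys() for msg in messages)
```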
## cURL example
```bash
curl --request POST "https://api.olddog.shop/v1/chat/completions" \
  --header "Authorization: Bearer $OLD_DOG_API_KEY" \
  --header "Content-Type: application/json" \
  --data '{
    "model": "gpt-4o-mini",
    "messages": [
      { "role": "system", "content": "You are a professional Chinese-language technical writing assistant." },
      { "role": "user", "content": "Explain what vector embeddings are in three sentences." }
    ],
    "temperature": 0.7
  }'
```
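When `stream` is `true`, the response arrives as server-sent events, each `data:` line carrying a chunk JSON object whose `choices[0].delta` holds an incremental piece of the reply. The chunk shape and the final `[DONE]` sentinel below follow the OpenAI streaming convention; the sample stream is hardcoded and no network call is made:

```python
import json

# Reassemble assistant text from a sample SSE body as produced
# with "stream": true. The chunk format (choices[0].delta.content,
# "[DONE]" sentinel) follows the OpenAI streaming convention.

sample_stream = """\
data: {"choices":[{"delta":{"role":"assistant"}}]}

data: {"choices":[{"delta":{"content":"Hel"}}]}

data: {"choices":[{"delta":{"content":"lo"}}]}

data: [DONE]
"""

def collect_text(sse_body):
    """Concatenate the content deltas from an SSE response body."""
    text = []
    for line in sse_body.splitlines():
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        delta = json.loads(payload)["choices"][0]["delta"]
        text.append(delta.get("content", ""))
    return "".join(text)

print(collect_text(sample_stream))  # Hello
```

In a real client the same loop would run over the HTTP response body line by line instead of a string, flushing each delta to the UI as it arrives.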