POST /v1/messages
curl --request POST \
  --url https://api.anthropic.com/v1/messages \
  --header 'Content-Type: application/json' \
  --header 'x-api-key: <api-key>' \
  --header 'anthropic-version: 2023-06-01' \
  --data '{
  "model": "<string>",
  "messages": [{"role": "user", "content": "Hello, Claude"}],
  "system": "<string>",
  "max_tokens": 123,
  "metadata": {},
  "stop_sequences": [
    "<string>"
  ],
  "stream": true,
  "temperature": 1,
  "top_p": 0.7,
  "top_k": 123
}'
{
  "id": "<string>",
  "type": "message",
  "role": "assistant",
  "content": "[{\"type\": \"text\", \"text\": \"Hi, I'm Claude.\"}]",
  "model": "<string>",
  "stop_reason": "end_turn",
  "stop_sequence": "<string>",
  "usage": {
    "input_tokens": 123,
    "output_tokens": 123
  }
}

Authorizations

x-api-key
string
header
required

API key to authorize requests.

Body

application/json
Model settings and a structured list of input messages with text and/or image content.
model
string
required

The model that will complete your prompt.

messages
object[]
required

Input messages.

Example:

"[{\"role\": \"user\", \"content\": \"Hello, Claude\"}]"

max_tokens
integer
required

The maximum number of tokens to generate before stopping.

system
string

A system prompt is a way of providing context and instructions to Claude, such as specifying a particular goal or role.
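
For example, a request body that assigns a role via the system field (the prompt text is illustrative, not a recommended prompt):

{
  "model": "<string>",
  "system": "You are a concise assistant that answers in one sentence.",
  "max_tokens": 1024,
  "messages": [{"role": "user", "content": "Hello, Claude"}]
}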

metadata
object

An object describing metadata about the request.

stop_sequences
string[]

Custom text sequences that will cause the model to stop generating.
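
For example, to stop generation as soon as the model emits a chosen delimiter (the sequence shown is illustrative):

{
  "model": "<string>",
  "max_tokens": 1024,
  "stop_sequences": ["###"],
  "messages": [{"role": "user", "content": "Hello, Claude"}]
}

If one of these sequences is generated, the response's stop_reason is "stop_sequence" and its stop_sequence field contains the matched string.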

stream
boolean

Whether to incrementally stream the response using server-sent events.
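
When stream is true, the API returns a stream of server-sent events rather than a single JSON body. An abridged sketch of the event flow, with most payload fields elided (message_start is named in the stop_reason description below; the other event names follow the Messages streaming format):

event: message_start
data: {"type": "message_start", "message": {...}}

event: content_block_delta
data: {"type": "content_block_delta", "index": 0, "delta": {"type": "text_delta", "text": "Hi"}}

event: message_stop
data: {"type": "message_stop"}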

temperature
number

Amount of randomness injected into the response. Ranges from 0.0 to 1.0; values closer to 0.0 suit analytical tasks, and values closer to 1.0 suit creative ones.

top_p
number

Use nucleus sampling: at each step, only the smallest set of tokens whose cumulative probability exceeds top_p is considered.

top_k
integer

Only sample from the top K options for each subsequent token.
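
A sketch showing where each sampling parameter sits in the request body (values are illustrative; temperature and top_p must lie in 0.0–1.0, and in practice you would typically adjust either temperature or top_p, not both):

{
  "model": "<string>",
  "max_tokens": 1024,
  "temperature": 0.2,
  "top_p": 0.9,
  "top_k": 40,
  "messages": [{"role": "user", "content": "Hello, Claude"}]
}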

Response

200
application/json
Message object.
id
string
required

Unique object identifier.

type
string
default:message
required

For Messages, this is always "message".

role
string
default:assistant
required

This will always be "assistant".

content
object[]
required

An array of content blocks, each of which has a type. Currently, the only type in responses is "text".

Example:

"[{\"type\": \"text\", \"text\": \"Hi, I'm Claude.\"}]"

model
string
required

The model that handled the request.

stop_reason
enum<string>
required

The reason that the model stopped. In non-streaming mode this value is always non-null. In streaming mode, it is null in the message_start event and non-null otherwise.

Available options: end_turn, max_tokens, stop_sequence

stop_sequence
string
required

This value will be a non-null string if one of your custom stop sequences was generated; otherwise it will be null.
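
A sketch of the relevant fields in a response where a custom stop sequence fired, with other fields omitted (the sequence value is illustrative):

{
  "stop_reason": "stop_sequence",
  "stop_sequence": "###"
}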

usage
object
required

Billing and rate-limit usage.