Mirrowel committed
Commit d195a5f · 1 parent: 6b6277b

fix: Update model reference and add embeddings endpoint to README

Files changed (1):
  1. README.md +3 -2
README.md CHANGED
@@ -144,7 +144,7 @@ curl -X POST http://127.0.0.1:8000/v1/chat/completions \
   -H "Content-Type: application/json" \
   -H "Authorization: Bearer your-super-secret-proxy-key" \
   -d '{
-  "model": "gemini/gemini-2.5-flash-preview-05-20",
+  "model": "gemini/gemini-2.5-flash",
   "messages": [{"role": "user", "content": "What is the capital of France?"}]
   }'
  ```
@@ -168,7 +168,7 @@ client = openai.OpenAI(
 
  # Make a request
  response = client.chat.completions.create(
-    model="gemini/gemini-2.5-flash-preview-05-20", # Specify provider and model
+    model="gemini/gemini-2.5-flash-preview", # Specify provider and model
     messages=[
         {"role": "user", "content": "Write a short poem about space."}
     ]
@@ -180,6 +180,7 @@ print(response.choices[0].message.content)
  ### Available API Endpoints
 
  - `POST /v1/chat/completions`: The main endpoint for making chat requests.
+ - `POST /v1/embeddings`: The endpoint for creating embeddings.
  - `GET /v1/models`: Returns a list of all available models from your configured providers.
  - `GET /v1/providers`: Returns a list of all configured providers.
  - `POST /v1/token-count`: Calculates the token count for a given message payload.
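The newly documented `POST /v1/embeddings` endpoint presumably mirrors the OpenAI embeddings API, like the chat endpoint shown in the README's examples. A minimal sketch of building such a request against the proxy, under that assumption; the embedding model name below is a placeholder for illustration, not taken from this commit:

```python
import json
import urllib.request

# Hypothetical payload for POST /v1/embeddings, shaped like an OpenAI
# embeddings request. The model name is an assumed example.
payload = {
    "model": "gemini/text-embedding-004",
    "input": ["What is the capital of France?"],
}

# Build the request without sending it; urllib.request.urlopen(req)
# would dispatch it against a running proxy instance.
req = urllib.request.Request(
    "http://127.0.0.1:8000/v1/embeddings",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer your-super-secret-proxy-key",
    },
    method="POST",
)
print(req.get_method(), req.full_url)
```

The same call can be made through the `openai` client used earlier in the README via `client.embeddings.create(...)`, since the proxy exposes an OpenAI-compatible surface.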