#### 2. **Set the `MODEL_ENDPOINT` Environment Variable**

In your chat application, ensure the `MODEL_ENDPOINT` environment variable is set to the correct URL of the model server. For example:

```python
import os
import requests

# Get the model endpoint from the environment variable
model_service = os.getenv("MODEL_ENDPOINT", "http://127.0.0.1:8000")

# Example function to call the model server
def call_model(prompt):
    url = f"{model_service}/generate"
    payload = {"prompt": prompt}
    try:
        response = requests.post(url, json=payload, timeout=30)
    except requests.RequestException as exc:
        return f"Error: could not reach model server ({exc})"
    if response.status_code == 200:
        return response.json().get("response", "No response from model")
    else:
        return f"Error: {response.status_code}"

# Test the connection
if __name__ == "__main__":
    prompt = "Hello, model!"
    result = call_model(prompt)
    print(result)
```

#### 3. **Test the Connection**
Run the chat application and test the connection to the model server:

```bash
# Set the MODEL_ENDPOINT environment variable
export MODEL_ENDPOINT="http://localhost:8001"

# Run the chat application
python chat_app.py
```

If everything is set up correctly, the chat application should be able to call the model server and receive a response.
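If you want to verify the client wiring before a real model server is running, a small stand-in server can help. The sketch below is an assumption, not part of any model-serving framework: it mirrors the contract the client code above expects (a `POST /generate` endpoint that accepts `{"prompt": ...}` and returns `{"response": ...}`), using only the Python standard library.

```python
# Minimal mock of the model server, for testing the chat app locally.
# Hypothetical stand-in: it only echoes the prompt back; it assumes the
# same /generate contract used by call_model() above.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer


class MockModelHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/generate":
            self.send_error(404)
            return
        # Read and parse the JSON request body.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        # Respond with a canned "model" answer echoing the prompt.
        body = json.dumps(
            {"response": f"Echo: {payload.get('prompt', '')}"}
        ).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


def serve(port: int = 8000) -> HTTPServer:
    """Start the mock server in a background thread and return it."""
    server = HTTPServer(("127.0.0.1", port), MockModelHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Start it with `serve()` (or adapt it into a standalone script), point `MODEL_ENDPOINT` at `http://127.0.0.1:8000`, and `call_model("Hello, model!")` should return the echoed prompt.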