# Deployment Scripts for Medguide (Built with Gradio)

This document provides instructions for deploying the Medguide model for inference using Gradio.

1. **Set up the Conda environment:** Follow the instructions in the [PKU-Alignment/align-anything](https://github.com/PKU-Alignment/align-anything) repository to configure your Conda environment.
2. **Configure the model path:** After setting up the environment, update the `MODEL_PATH` variable in `deploy_medguide_v.sh` to point to your local Medguide model directory.
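The contents of `deploy_medguide_v.sh` are not reproduced here; a minimal sketch of the model-path configuration described in step 2 might look like the following, where the path value and the final echo are illustrative placeholders, not the script's actual launch logic:

```shell
#!/usr/bin/env bash
# Illustrative sketch only: the real deploy_medguide_v.sh may differ.
# MODEL_PATH must point at your local Medguide model directory (step 2).
MODEL_PATH="/path/to/your/medguide-model"  # replace with your own path
PORT=8231                                  # port exposed for the API base

# The script is expected to start an OpenAI-compatible server on $PORT;
# the actual launch command depends on the serving framework in use.
echo "Would serve ${MODEL_PATH} on port ${PORT}"
```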
3. **Verify inference script parameters:** Check the following parameter in `multimodal_inference.py`:

```python
# NOTE: Replace with your own model path if not loaded via the API base
model = ''
```
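When the model is reached via the API base rather than loaded locally, the inference script ultimately sends a chat-completion style request to the local server. A minimal sketch of such a request body, assuming the port 8231 exposed by `deploy_medguide_v.sh`; the model name and question are illustrative, not taken from `multimodal_inference.py`:

```python
import json

# Assumed values: the API base follows the port exposed by deploy_medguide_v.sh;
# the model name is illustrative and should match your local model path.
api_base = "http://localhost:8231/v1"
model = "/path/to/your/medguide-model"

# Chat-completion style request body for an OpenAI-compatible endpoint.
payload = {
    "model": model,
    "messages": [{"role": "user", "content": "What are common symptoms of anemia?"}],
    "stream": True,  # request streamed output
}

body = json.dumps(payload)
```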
These scripts utilize an OpenAI-compatible server approach. The `deploy_medguide_v.sh` script launches the Medguide model locally and exposes it on port 8231 for external access via the specified API base URL.

4. **Running Inference:**

* **Streamed Output:**

```bash
# Start the local model server first (serves on port 8231), then run inference.
bash deploy_medguide_v.sh
python multimodal_inference.py
```
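Since `multimodal_inference.py` handles image-plus-text inputs, it helps to see how a multimodal user message is commonly structured for OpenAI-compatible chat endpoints. The sketch below follows the OpenAI chat format with a base64 data URL; whether Medguide's server accepts exactly this shape is an assumption, and the image bytes and question are placeholders:

```python
import base64

def build_multimodal_message(image_bytes: bytes, question: str) -> dict:
    """Build a user message pairing a text question with an inline image.

    Base64 data URLs are one common way to pass local images to an
    OpenAI-compatible multimodal chat endpoint.
    """
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": question},
            {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{b64}"}},
        ],
    }

# Placeholder bytes stand in for a real image file read from disk.
msg = build_multimodal_message(b"\x89PNG...", "Describe the finding in this scan.")
```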