Spaces: Agents-MCP-Hackathon/MedCodeMCP (Sleeping)

Commit: have to turn the mcp server on for it to work
README.md CHANGED
@@ -14,13 +14,14 @@ tags:
   - agent-demo-track
 ---
 
-
+MedCodeMCP is a voice-enabled medical diagnosis assistant that maps patient symptoms to probable ICD-10 codes and explanations. It leverages automatic speech recognition (ASR), via Whisper, to transcribe spoken complaints and then engages an interactive Q&A session with the user to clarify symptoms.
+
+Under the hood, it uses a Large Language Model (LLM) to either ask focused follow-up questions or output a final diagnosis suggestion in JSON format (with fields for diagnoses and confidence scores). The application integrates a vector database of ICD-10 descriptions using LlamaIndex for retrieval: given symptoms, it finds relevant ICD-10 code information to inform the LLM's reasoning.
 
 # Features
 
 - **Automatic speech recognition (ASR)**: Transcribe real-time patient audio using [Gradio](https://www.gradio.app/guides/real-time-speech-recognition).
 - **Interactive Q&A Agent**: The LLM dynamically asks clarifying questions based on ICD codes until it can diagnose with high confidence.
-- **Multi-backend LLM**: Switch between OpenAI GPT, Mistral (HF), or a local transformers model via environment flags.
 - **ICD-10 Mapping**: Use LlamaIndex for vector retrieval of probable ICD-10 codes with confidence scores.
 - **MCP-Server Ready**: Exposes a `/mcp` REST endpoint for seamless agent integration.
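The final-diagnosis JSON the README describes (diagnoses with confidence scores) might look like the sketch below; the field names `icd10`, `label`, and `confidence` are illustrative assumptions, not taken from the app's source.

```python
import json

# Hypothetical example of the assistant's final-diagnosis payload: candidate
# diagnoses with ICD-10 codes and confidence scores. Field names are assumed.
payload = json.loads("""
{
  "diagnoses": [
    {"icd10": "J45.909", "label": "Unspecified asthma", "confidence": 0.82},
    {"icd10": "J20.9",   "label": "Acute bronchitis",   "confidence": 0.11}
  ]
}
""")

def top_diagnosis(result: dict) -> dict:
    # Pick the highest-confidence candidate.
    return max(result["diagnoses"], key=lambda d: d["confidence"])

print(top_diagnosis(payload)["icd10"])  # J45.909
```

A consumer would treat anything below its own confidence bar as a cue to keep asking questions rather than surface a diagnosis.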
@@ -53,39 +54,10 @@ git clone https://huggingface.co/spaces/Agents-MCP-Hackathon/MedCodeMCP
 cd MedCodeMCP
 python3 -m venv .venv && source .venv/bin/activate
 pip install -r requirements.txt
 ````
-
-## Environment Variables
-
-| Name                       | Description                                               | Default                |
-| -------------------------- | --------------------------------------------------------- | ---------------------- |
-| `OPENAI_API_KEY`           | OpenAI API key for GPT calls                              | *required*             |
-| `HUGGINGFACEHUB_API_TOKEN` | HF token for Mistral/inference models                     | *required for Mistral* |
-| `USE_LOCAL_GPU`            | Set to `1` to use a local transformers model (no credits) | `0`                    |
-| `LOCAL_MODEL`              | Path or HF ID of local model (e.g. `distilgpt2`)          | `gpt2`                 |
-| `USE_MISTRAL`              | Set to `1` to use Mistral via HF instead of OpenAI        | `0`                    |
-| `MISTRAL_MODEL`            | HF ID for Mistral model (`mistral-small/medium/large`)    | `mistral-large`        |
-| `MISTRAL_TEMPERATURE`      | Sampling temperature for Mistral                          | `0.7`                  |
-| `MISTRAL_MAX_INPUT`        | Max tokens for input prompt                               | `4096`                 |
-| `MISTRAL_NUM_OUTPUT`       | Max tokens to generate                                    | `512`                  |
 
 ## Launch Locally
 
-```bash
-# Default (OpenAI)
-python app.py
-
-# Mistral backend
-export USE_MISTRAL=1
-export HUGGINGFACEHUB_API_TOKEN="hf_..."
-python app.py
-
-# Local GPU (no credits)
-export USE_LOCAL_GPU=1
-export LOCAL_MODEL="./models/distilgpt2"
-python app.py
-```
-
 Open [http://localhost:7860](http://localhost:7860) to interact with the app.
 
 ## MCP API Usage
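A call against the Space's `/mcp` endpoint could be sketched as follows. The payload shape (`tool` plus `arguments`) and the `diagnose` tool name are assumptions for illustration only; the schema the Space actually generates defines the real contract.

```python
import json
from urllib import request

def build_mcp_request(base_url: str, tool: str, arguments: dict) -> request.Request:
    # Build a JSON POST against the /mcp endpoint. The body layout here is a
    # guess for demonstration; check the server's published MCP schema.
    body = json.dumps({"tool": tool, "arguments": arguments}).encode("utf-8")
    return request.Request(
        url=f"{base_url}/mcp",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_mcp_request(
    "http://localhost:7860",
    "diagnose",
    {"symptoms": "persistent dry cough and wheezing"},
)
print(req.full_url)  # http://localhost:7860/mcp
```

Sending the request (e.g. with `urllib.request.urlopen(req)`) requires the app to be running locally first.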
app.py CHANGED

@@ -190,7 +190,6 @@ with gr.Blocks(theme="default") as demo:
         fn=update_live_transcription,
         inputs=[microphone],
         outputs=[text_input],
-        show_progress=False,
         queue=True
     )

@@ -215,4 +214,4 @@ with gr.Blocks(theme="default") as demo:
 )

 if __name__ == "__main__":
-    demo.launch(server_name="0.0.0.0", server_port=7860, share=True, show_api=True)
+    demo.launch(server_name="0.0.0.0", server_port=7860, share=True, show_api=True, mcp_server=True)
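The clarify-until-confident loop behind the Interactive Q&A Agent described in the README can be sketched roughly as below. The 0.8 threshold, the `scripted_llm` stub, and the reply shapes are assumptions for illustration, not the app's actual implementation.

```python
# Minimal sketch of the interactive Q&A loop: keep asking follow-up questions
# until the model reports a diagnosis above a confidence threshold.
CONF_THRESHOLD = 0.8  # assumed cut-off, not from the app's source

def scripted_llm(history):
    # Stand-in for the real LLM backend: asks one follow-up, then diagnoses.
    if len(history) < 2:
        return {"question": "How long have the symptoms lasted?"}
    return {"diagnoses": [{"icd10": "J45.909", "confidence": 0.85}]}

def run_session(llm, first_complaint, max_turns=5):
    history = [first_complaint]
    for _ in range(max_turns):
        reply = llm(history)
        if "question" in reply:
            # In the real app the user answers by voice; stubbed here.
            history.append("about two weeks")
            continue
        best = max(reply["diagnoses"], key=lambda d: d["confidence"])
        if best["confidence"] >= CONF_THRESHOLD:
            return best
    return None  # never confident enough within max_turns

result = run_session(scripted_llm, "dry cough and wheezing")
print(result["icd10"])  # J45.909
```

Bounding the loop with `max_turns` keeps a low-confidence model from questioning the user indefinitely.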