Training dataset: Malikeh1375/medical-question-answering-datasets
How to use Echelon-AI/Med-Qwen2-7B with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="Echelon-AI/Med-Qwen2-7B")
messages = [
{"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Echelon-AI/Med-Qwen2-7B")
model = AutoModelForCausalLM.from_pretrained("Echelon-AI/Med-Qwen2-7B")
messages = [
{"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
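A 7B model loaded in float32 needs roughly 28 GB of memory for weights alone, so it is common to load in half precision instead. A minimal sketch (not from the model card), assuming a machine with enough GPU memory; device_map="auto" additionally requires the accelerate package:

# Hedged sketch: load the model in bfloat16 to roughly halve memory use.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Echelon-AI/Med-Qwen2-7B")
model = AutoModelForCausalLM.from_pretrained(
    "Echelon-AI/Med-Qwen2-7B",
    torch_dtype=torch.bfloat16,  # ~14 GB of weights instead of ~28 GB in float32
    device_map="auto",           # spread layers across available devices (needs accelerate)
)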
How to use Echelon-AI/Med-Qwen2-7B with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "Echelon-AI/Med-Qwen2-7B"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "Echelon-AI/Med-Qwen2-7B",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'
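Because the vLLM server speaks the OpenAI-compatible API, the official openai Python client works as an alternative to curl. A minimal sketch, assuming the server started above is listening on localhost:8000 (the api_key value is a placeholder; vLLM ignores it unless one was configured):

# Minimal sketch: query the vLLM server with the openai Python client.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # placeholder key
response = client.chat.completions.create(
    model="Echelon-AI/Med-Qwen2-7B",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)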
How to use Echelon-AI/Med-Qwen2-7B with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "Echelon-AI/Med-Qwen2-7B" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "Echelon-AI/Med-Qwen2-7B",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'

# Alternatively, run the SGLang server in Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "Echelon-AI/Med-Qwen2-7B" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "Echelon-AI/Med-Qwen2-7B",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'
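The SGLang endpoint is OpenAI-compatible as well, so responses can also be streamed token by token. A minimal sketch with the openai Python client, assuming the server above is running on localhost:30000 (api_key is a placeholder):

# Minimal sketch: stream tokens from the SGLang server as they are generated.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")  # placeholder key
stream = client.chat.completions.create(
    model="Echelon-AI/Med-Qwen2-7B",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    stream=True,  # yield chunks incrementally instead of one final message
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()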
How to use Echelon-AI/Med-Qwen2-7B with Docker Model Runner:
docker model run hf.co/Echelon-AI/Med-Qwen2-7B
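Docker Model Runner also exposes an OpenAI-compatible endpoint once the model is pulled. A minimal sketch, assuming Docker Desktop with host TCP access enabled on port 12434; both the port and the /engines/v1 path are assumptions about a default Model Runner setup, not taken from this model card:

# Hedged sketch: call Docker Model Runner's OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:12434/engines/v1", api_key="docker")  # placeholder key
response = client.chat.completions.create(
    model="hf.co/Echelon-AI/Med-Qwen2-7B",  # Model Runner addresses models by their hf.co reference
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)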
Med-Qwen2-7B is Qwen2 7B fine-tuned on a medical dataset, which gives it enhanced performance in medical text understanding and generation.
The model shows improved accuracy in diagnosing medical conditions, generating specialized medical texts, and responding to medical queries with contextually relevant information. This adaptation equips Med-Qwen2 to support advanced applications in healthcare, offering nuanced insights and precise language processing tailored for medical professionals and patients alike.