Medical Merges
Collection
Playful merges that try to improve small medical LMs by merging them with models that have stronger reasoning capabilities. • 29 items
How to use Technoculture/Medorca-4x7b with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Technoculture/Medorca-4x7b")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Technoculture/Medorca-4x7b")
model = AutoModelForCausalLM.from_pretrained("Technoculture/Medorca-4x7b")
```

How to use Technoculture/Medorca-4x7b with vLLM:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Technoculture/Medorca-4x7b"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Technoculture/Medorca-4x7b",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
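Because the vLLM server exposes an OpenAI-compatible API, the same completion request can also be made from Python using only the standard library. This is a minimal sketch: the function names are illustrative, and the URL and payload values simply mirror the curl call above.

```python
import json
import urllib.request

# Request payload mirroring the curl example above.
PAYLOAD = {
    "model": "Technoculture/Medorca-4x7b",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5,
}

def build_request(url="http://localhost:8000/v1/completions"):
    """Build (but do not send) the OpenAI-compatible completion request."""
    return urllib.request.Request(
        url,
        data=json.dumps(PAYLOAD).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def fetch_completion():
    """Send the request; requires the vLLM server above to be running."""
    with urllib.request.urlopen(build_request()) as resp:
        return json.load(resp)["choices"][0]["text"]

# Example (only works with the server running):
# print(fetch_completion())
```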
How to use Technoculture/Medorca-4x7b with SGLang:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Technoculture/Medorca-4x7b" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Technoculture/Medorca-4x7b",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Alternatively, run the SGLang server in Docker:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Technoculture/Medorca-4x7b" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Technoculture/Medorca-4x7b",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

How to use Technoculture/Medorca-4x7b with Docker Model Runner:
```shell
docker model run hf.co/Technoculture/Medorca-4x7b
```
Mediquad-orca-20B is a Mixture of Experts (MoE) built from the following expert models: epfl-llm/meditron-7b, medalpaca/medalpaca-7b, chaoyi-wu/PMC_LLAMA_7B_10_epoch, and microsoft/Orca-2-7b (see the mergekit configuration below). Evaluation results:
| Benchmark | Medorca-4x7b | Orca-2-7b | meditron-7b | meditron-70b |
|---|---|---|---|---|
| MedMCQA | | | | |
| ClosedPubMedQA | | | | |
| PubMedQA | | | | |
| MedQA | | | | |
| MedQA4 | | | | |
| MedicationQA | | | | |
| MMLU Medical | | | | |
| MMLU | 24.28 | 56.37 | | |
| TruthfulQA | 48.42 | 52.45 | | |
| GSM8K | 0 | 47.2 | | |
| ARC | 29.35 | 54.1 | | |
| HellaSwag | 25.72 | 76.19 | | |
| Winogrande | 48.3 | 73.48 | | |
```yaml
gate_mode: hidden
dtype: bfloat16
experts:
  - source_model: epfl-llm/meditron-7b
    positive_prompts:
      - "How does sleep affect cardiovascular health?"
      - "When discussing diabetes management, the key factors to consider are"
      - "The differential diagnosis for a headache with visual aura could include"
    negative_prompts:
      - "What are the environmental impacts of deforestation?"
      - "The recent advancements in artificial intelligence have led to developments in"
  - source_model: medalpaca/medalpaca-7b
    positive_prompts:
      - "When discussing diabetes management, the key factors to consider are"
      - "The differential diagnosis for a headache with visual aura could include"
    negative_prompts:
      - "Recommend a good recipe for a vegetarian lasagna."
      - "The fundamental concepts in economics include ideas like supply and demand, which explain"
  - source_model: chaoyi-wu/PMC_LLAMA_7B_10_epoch
    positive_prompts:
      - "How does sleep affect cardiovascular health?"
      - "When discussing diabetes management, the key factors to consider are"
    negative_prompts:
      - "Recommend a good recipe for a vegetarian lasagna."
      - "The recent advancements in artificial intelligence have led to developments in"
      - "The fundamental concepts in economics include ideas like supply and demand, which explain"
  - source_model: microsoft/Orca-2-7b
    positive_prompts:
      - "Here is a funny joke for you -"
      - "When considering the ethical implications of artificial intelligence, one must take into account"
      - "In strategic planning, a company must analyze its strengths and weaknesses, which involves"
      - "Understanding consumer behavior in marketing requires considering factors like"
      - "The debate on climate change solutions hinges on arguments that"
    negative_prompts:
      - "In discussing dietary adjustments for managing hypertension, it's crucial to emphasize"
      - "For early detection of melanoma, dermatologists recommend that patients regularly check their skin for"
      - "Explaining the importance of vaccination, a healthcare professional should highlight"
```
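Configs of this shape are consumed by mergekit's MoE tool. Assuming the YAML above is saved as `medorca.yaml` and mergekit is installed, the merge could be built with something like the following (the invocation is a sketch; check the CLI of your mergekit version):

```shell
# Install mergekit (assumes the pip package name 'mergekit')
pip install mergekit

# Build the MoE from the YAML config above. With gate_mode: hidden,
# mergekit runs the expert models over the positive/negative prompts
# to initialize the routers, so this needs substantial memory or a GPU.
mergekit-moe medorca.yaml ./medorca-4x7b
```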
```shell
pip install -qU transformers bitsandbytes accelerate
```

```python
from transformers import AutoTokenizer
import transformers
import torch

model = "Technoculture/Mediquad-orca-20B"
tokenizer = AutoTokenizer.from_pretrained(model)

# Load the model in 4-bit via bitsandbytes to fit it in less GPU memory.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```