# Merged DeepSeek-R1-Distill-Llama-8B with Medical Q&A LoRA
This model is the result of merging the LoRA adapter hitty28/Medseek-V1 into the base model deepseek-ai/DeepSeek-R1-Distill-Llama-8B.
## Model details
- Base model: DeepSeek-R1-Distill-Llama-8B
- Original adapter: hitty28/Medseek-V1
- Merge timestamp: 2025-05-14 15:04:56
## Use case
This model is specialized for medical question answering tasks.
## Available formats
- Full FP16 model
- GGUF quantized versions: q2_k, q3_k_s, q3_k_m, q3_k_l, q4_0, q4_k_s, q4_k_m, q5_0, q5_k_m, q6_k, q8_0
## Usage with llama.cpp
```sh
./main -m medseek_r1_q4_k_m.gguf -p "Below is a task description along with additional context provided in the input section. Your goal is to provide a well-reasoned response that effectively addresses the request.
Before crafting your answer, take a moment to carefully analyze the question. Develop a clear, step-by-step thought process to ensure your response is both logical and accurate.
### Task:
You are a medical expert specializing in clinical reasoning, diagnostics, and treatment planning. Answer the medical question below using your advanced knowledge.
### Query:
What are the most common causes of acute pancreatitis?
### Answer:"
```
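The prompt template above can also be assembled programmatically, which is convenient when driving the GGUF files from Python. A small helper (llama-cpp-python is an assumption here; any GGUF runner works):

```python
def build_prompt(query: str) -> str:
    """Reassemble the prompt template shown above for an arbitrary query."""
    return (
        "Below is a task description along with additional context provided in "
        "the input section. Your goal is to provide a well-reasoned response "
        "that effectively addresses the request.\n"
        "Before crafting your answer, take a moment to carefully analyze the "
        "question. Develop a clear, step-by-step thought process to ensure "
        "your response is both logical and accurate.\n"
        "### Task:\n"
        "You are a medical expert specializing in clinical reasoning, "
        "diagnostics, and treatment planning. Answer the medical question "
        "below using your advanced knowledge.\n"
        f"### Query:\n{query}\n"
        "### Answer:"
    )


# With llama-cpp-python (an assumption, not part of this repo):
#   from llama_cpp import Llama
#   llm = Llama(model_path="medseek_r1_q4_k_m.gguf", n_ctx=4096)
#   out = llm(build_prompt("What are the most common causes of acute "
#                          "pancreatitis?"), max_tokens=512)
#   print(out["choices"][0]["text"])
print(build_prompt("What are the most common causes of acute pancreatitis?"))
```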