🤏 smolified-clinical-scribe

Intelligence, Distilled.

This is a Domain Specific Language Model (DSLM) generated by the Smolify Foundry.

It has been synthetically distilled from SOTA reasoning engines into a high-efficiency architecture, optimized for deployment on edge hardware (CPU/NPU) or low-VRAM environments.

📦 Asset Details

  • Origin: Smolify Foundry (Job ID: 0b88261f)
  • Architecture: gemma-3-270m
  • Training Method: Proprietary Neural Distillation
  • Optimization: 4-bit Quantized / FP16 Mixed
  • Dataset: Link to Dataset

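The "4-bit Quantized / FP16 Mixed" optimization is what makes low-VRAM deployment practical. A back-of-envelope estimate of weight memory at each precision (a sketch; the parameter count is approximate and KV cache, activations, and runtime overhead are ignored):

```python
PARAMS = 270_000_000  # gemma-3-270m parameter count (approximate)

def weight_footprint_gib(params: int, bytes_per_param: float) -> float:
    """Approximate weight memory in GiB for a given precision."""
    return params * bytes_per_param / 2**30

fp16_gib = weight_footprint_gib(PARAMS, 2.0)  # FP16/BF16: 2 bytes per parameter
int4_gib = weight_footprint_gib(PARAMS, 0.5)  # 4-bit: half a byte per parameter
print(f"FP16 ~{fp16_gib:.2f} GiB, 4-bit ~{int4_gib:.2f} GiB")
```

Roughly half a GiB in BF16 and about an eighth of that in 4-bit, which is why the model fits comfortably on CPU/NPU edge targets.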
🚀 Usage (Inference)

This model is compatible with standard inference backends such as vLLM and Hugging Face Transformers.

# Example: Running your Sovereign Model
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DebMukherjee/smolified-clinical-scribe"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": '''You are a clinical documentation engine converting medical transcripts to SOAP notes. Do not hallucinate data.'''},
    {"role": "user", "content": '''Patient presents with sharp epigastric pain radiating to the back for two days. Associated with nausea. Denies fever. Vitals: BP 140/90, Pulse 92. Exam: Abdomen tender to palpation in epigastric region, positive bowel sounds. Plan: H. Pylori breath test, Famotidine 20mg daily, follow-up if worsening.'''}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
# The Gemma chat template prepends <bos>, and the tokenizer will add it again
# during encoding, so strip it here to avoid a duplicate token.
text = text.removeprefix("<bos>")

from transformers import TextStreamer

_ = model.generate(
    **tokenizer(text, return_tensors="pt").to(model.device),
    max_new_tokens=1000,
    temperature=1.0, top_p=0.95, top_k=64,
    streamer=TextStreamer(tokenizer, skip_prompt=True),
)
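
Downstream code will typically want the generated note broken into its four SOAP sections. A minimal post-processing sketch (the header format "Subjective:", "Objective:", "Assessment:", "Plan:" at line starts is an assumption about the model's output, not a guaranteed contract):

```python
import re

# Hypothetical helper: split a generated SOAP note into its four sections.
SOAP_HEADER = re.compile(r"^(Subjective|Objective|Assessment|Plan):\s*", re.MULTILINE)

def split_soap(note: str) -> dict[str, str]:
    """Map each SOAP header found in `note` to the text that follows it."""
    matches = list(SOAP_HEADER.finditer(note))
    sections = {}
    for i, m in enumerate(matches):
        end = matches[i + 1].start() if i + 1 < len(matches) else len(note)
        sections[m.group(1)] = note[m.end():end].strip()
    return sections
```

If the model emits a different layout, the regex (rather than the loop) is the part to adapt.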

βš–οΈ License & Ownership

These model weights are a sovereign asset owned by DebMukherjee. Generated via Smolify.ai.

Model size: 0.3B params (Safetensors) · Tensor type: BF16