# Fine-tuned Mistral Model for Clinical Tasks
## Model Details
- Base Model: mistralai/Mistral-7B-Instruct-v0.2
- Fine-tuned on: MIMIC clinical data with instruction tuning
- Upload date: 2025-05-14
- Framework: PyTorch + Unsloth
## Model Description
This model was instruction-tuned on MIMIC clinical data to improve clinical NLP capabilities such as ICD code extraction and other medical information processing tasks.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Pritish92/final_model-finetuned-20250514-162048"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Simple text generation
inputs = tokenizer("Extract ICD-10 codes from this clinical note: ", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
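Since the base model is Mistral-7B-Instruct-v0.2, the tokenizer most likely inherits Mistral's `[INST] … [/INST]` chat template. A minimal sketch of prompting through `apply_chat_template`, reusing the `tokenizer` and `model` from the snippet above and assuming the template is present in this repo:

```python
# Build the prompt via the chat template (assumed inherited from
# Mistral-7B-Instruct-v0.2; verify that tokenizer.chat_template is set).
messages = [
    {"role": "user", "content": "Extract ICD-10 codes from this clinical note: ..."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(input_ids, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```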
## Training Information
- Training type: instruction tuning with LoRA (if the repo ships unmerged adapters, see the loading sketch below)
- Hardware: 2 GPUs
- Original model: mistralai/Mistral-7B-Instruct-v0.2
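If this repo contains LoRA adapter weights rather than fully merged weights, loading it with `AutoModelForCausalLM` alone may fail. A minimal sketch using the `peft` library, under the assumption that the repo holds a PEFT adapter config pointing at the base model:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
adapter_id = "Pritish92/final_model-finetuned-20250514-162048"

tokenizer = AutoTokenizer.from_pretrained(adapter_id)
base = AutoModelForCausalLM.from_pretrained(base_id)

# Attach the LoRA adapters on top of the base weights
model = PeftModel.from_pretrained(base, adapter_id)

# Optionally fold the adapters into the base model for plain inference
model = model.merge_and_unload()
```

If the repo already contains merged weights, the plain `from_pretrained` call in the Usage section is sufficient.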