gpt-oss-20b-pubmed

This is a fine-tune of the base model unsloth/gpt-oss-20b, optimized for generating reasoned biomedical responses. It emphasizes chain-of-thought (CoT) reasoning in its outputs, making it suitable for applications involving analytical discussions, medical question answering, and logical breakdowns of biomedical topics. The model was fine-tuned with QLoRA using Unsloth for efficiency, targeting a balance of performance and resource usage.

The model is provided in MXFP4 GGUF format for compatibility with llama.cpp, Ollama, or LM Studio.

Model Details

Please also see the model's GitHub homepage for additional details.

  • Base Model: unsloth/gpt-oss-20b (MXFP4 quantized)
  • Fine-Tuning Method: QLoRA with rank=64, targeting MoE layers
  • Training Epochs: 6
  • Dataset: PubMedQA (pqal.jsonl) plus a custom pseudo-labeled dataset (~7,000 examples total, adapted for CoT)
  • Max Sequence Length: 4096
  • Optimizer: AdamW 8-bit
  • Learning Rate: 1e-4
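For reference, the hyperparameters listed above can be collected into a single configuration object. This is an illustrative sketch in plain Python; the field names are descriptive placeholders, not Unsloth's or TRL's exact argument names:

```python
# Illustrative training configuration mirroring the card's hyperparameters.
# Field names are placeholders, not exact Unsloth/TRL argument names.
train_config = {
    "base_model": "unsloth/gpt-oss-20b",
    "method": "qlora",
    "lora_rank": 64,
    "target_modules": "moe_layers",  # MoE layers were targeted per the card
    "epochs": 6,
    "max_seq_length": 4096,
    "optimizer": "adamw_8bit",
    "learning_rate": 1e-4,
}

def validate(cfg: dict) -> bool:
    """Basic sanity checks on the fine-tuning configuration."""
    return (
        cfg["lora_rank"] > 0
        and cfg["max_seq_length"] <= 4096
        and 0 < cfg["learning_rate"] < 1
    )

print(validate(train_config))  # True
```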

Intended Uses

This model is designed for:

  • Generating biomedical responses with structured reasoning.
  • Educational tools for medical question answering and critical analysis.
  • Interactive chat applications for discussing health, research, or clinical topics.

Example use case: Responding to PubMed-style queries with step-by-step biomedical analysis followed by a concise answer.
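The pattern above (step-by-step analysis followed by a concise answer) can be scripted around the model. The helper below is a hypothetical sketch, not part of the model's API; the "Final answer:" marker and system prompt wording are assumptions you can adjust:

```python
def build_pubmed_messages(question: str) -> list:
    """Build a chat prompt asking for CoT reasoning plus a concise answer.

    The system prompt wording is illustrative; adjust to taste.
    """
    system = (
        "You are a biomedical expert. Reason step by step, then end with "
        "a line starting with 'Final answer:' giving a concise conclusion."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

def split_cot_response(text: str) -> tuple:
    """Separate the reasoning steps from the concise final answer."""
    marker = "Final answer:"
    if marker in text:
        reasoning, answer = text.rsplit(marker, 1)
        return reasoning.strip(), answer.strip()
    return text.strip(), ""  # no marker found: treat the whole text as reasoning

reasoning, answer = split_cot_response(
    "Step 1: Aspirin inhibits COX-1...\nFinal answer: yes"
)
print(answer)  # yes
```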

Limitations

  • The model may exhibit biases inherent in PubMed data, potentially favoring certain medical viewpoints.
  • Performance on non-biomedical tasks (e.g., general debate, code generation) may not match the base model.
  • Outputs can sometimes be verbose; fine-tune temperature and max_tokens for control.
  • Not intended for clinical decision-making or sensitive medical applications without expert oversight.

Evaluation

During fine-tuning:

  • Training Loss: Monitored (decreased steadily over epochs).
  • Evaluation: Performed on a 10% holdout set after each epoch, showing improved coherence in CoT outputs.
  • Perplexity/Qualitative: Responses were manually inspected for logical flow and biomedical relevance.
  • Benchmarks: Tested on the PubMedQA test set (500 instances), reaching 73.6% accuracy, which placed the model 19th on the PubMedQA leaderboard (https://pubmedqa.github.io/) as of 2025-12-30.
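PubMedQA scoring reduces to comparing predicted yes/no/maybe decisions against gold labels. The sketch below shows the idea; the normalization rules are assumptions for illustration, not the official evaluator:

```python
def normalize_decision(text: str) -> str:
    """Map a free-form model answer onto PubMedQA's yes/no/maybe labels."""
    t = text.strip().lower()
    for label in ("yes", "no", "maybe"):
        if t.startswith(label):
            return label
    return "maybe"  # fallback when no label is recognized

def accuracy(predictions: list, golds: list) -> float:
    """Fraction of test instances where the prediction matches the gold label."""
    correct = sum(
        normalize_decision(p) == g for p, g in zip(predictions, golds)
    )
    return correct / len(golds)

preds = ["Yes, the data support it.", "No.", "Maybe, evidence is mixed."]
golds = ["yes", "no", "yes"]
print(accuracy(preds, golds))  # 2 of 3 correct
```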

How to Use

With Transformers (Python)

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Entz/gpt-oss-20b-pubmed"  # Or local path
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

messages = [
    {"role": "system", "content": "You are a medical expert. In responses, append PubMed IDs as (PubMed ID: id) for sourced info."},
    {"role": "user", "content": "What are the effects of aspirin on cardiovascular health?"}
]

inputs = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True).to(model.device)
outputs = model.generate(inputs, max_new_tokens=4096, temperature=0.7, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

With GGUF (llama.cpp/Ollama)

Download the GGUF file and create an Ollama model from a Modelfile that points at it (ollama create takes -f with a Modelfile, not the GGUF directly):

# Modelfile
FROM ./gpt-oss-20b-pubmed.gguf

ollama create pubmed-model -f Modelfile
ollama run pubmed-model

Then prompt in the Ollama interface.

Training Data

The fine-tuning dataset consists of Q&A pairs from PubMedQA, focused on biomedical reasoning and analysis. Data was processed to ensure diversity and coverage of medical topics, with an emphasis on medium-effort CoT (75% reasoning focus).
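As a sketch of the adaptation step, a PubMedQA record (field names follow the public pqa_labeled release) can be flattened into a CoT-style chat example. The output message schema below is an assumption about this model's training format, not a documented spec:

```python
import json

def pubmedqa_to_chat(record: dict) -> dict:
    """Convert one PubMedQA-style record into a chat-format training example.

    Expects the pqa_labeled fields QUESTION, CONTEXTS, LONG_ANSWER, and
    final_decision. The message schema is illustrative.
    """
    context = " ".join(record["CONTEXTS"])
    return {
        "messages": [
            {"role": "user", "content": f"{context}\n\nQuestion: {record['QUESTION']}"},
            {
                "role": "assistant",
                # The long answer serves as reasoning; final_decision as the verdict.
                "content": f"{record['LONG_ANSWER']}\nFinal answer: {record['final_decision']}",
            },
        ]
    }

record = {
    "QUESTION": "Does aspirin reduce cardiovascular risk?",
    "CONTEXTS": ["Aspirin inhibits platelet aggregation."],
    "LONG_ANSWER": "Antiplatelet effects lower the rate of thrombotic events.",
    "final_decision": "yes",
}
example = pubmedqa_to_chat(record)
print(json.dumps(example, indent=2))
```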

Ethical Considerations

This model is for research and educational purposes. Users should be aware of potential biases and verify outputs, especially in medical contexts. It adheres to standard open-source guidelines but is not audited for production use.

Acknowledgments

Built using Unsloth for efficient fine-tuning and Hugging Face Transformers. Thanks to the open-source community for tools and base models.

