vmanvs committed on
Commit d326a98 · verified · 1 Parent(s): 1a1c40d

modified readme

Files changed (1)
  1. README.md +4 -1
README.md CHANGED

@@ -33,6 +33,9 @@ This is **not** a production model. It is a learning and research artifact.
  >
  > This decoder-only model was extracted from the original multimodal checkpoint using a custom extraction process. While it passes all internal stress tests (31/31), **it has not been evaluated on standardized medical benchmarks** (e.g., MedQA, PubMedQA, USMLE) and **has not undergone clinical validation**. Do not deploy this model in production healthcare systems, clinical decision support tools, or any patient-facing applications without extensive independent testing, medical expert review, and regulatory compliance evaluation. Use at your own risk.
  
+ 📦 **Model & weights:** [HuggingFace — vmanvs/medgemma-1.5-decoder-only-4b-it](https://huggingface.co/vmanvs/medgemma-1.5-decoder-only-4b-it)
+ 💻 **Extraction code & tests:** [GitHub — vmanvs/medgemma-1.5-decoder-only-4b-it](https://github.com/vmanvs/medgemma-1.5-decoder-only-4b-it)
+
  ---
  
  ## Model Details
@@ -98,7 +101,7 @@ This extraction exists for **research, education, and experimentation** — to a
  from transformers import AutoModelForCausalLM, AutoTokenizer
  import torch
  
- model_id = "your-username/medgemma-decoder-only-4b-it"  # Update with your HF repo
+ model_id = "vmanvs/medgemma-1.5-decoder-only-4b-it"
  
  tokenizer = AutoTokenizer.from_pretrained(model_id)
  model = AutoModelForCausalLM.from_pretrained(
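The second hunk cuts off inside the `from_pretrained(...)` call, so the README's actual loading arguments are not visible here. A minimal sketch of the updated snippet, assuming common `torch_dtype`/`device_map` settings (not confirmed by the diff):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "vmanvs/medgemma-1.5-decoder-only-4b-it"

def load_model(repo_id: str = model_id):
    # Downloads several GB of weights on first call.
    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(
        repo_id,
        torch_dtype=torch.bfloat16,  # assumed dtype; the diff truncates before this point
        device_map="auto",           # assumed device placement; also not shown in the diff
    )
    return tokenizer, model
```

Wrapping the download in a function keeps the snippet importable without immediately fetching the checkpoint; the README's own version may simply call `from_pretrained` at module level.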