dunktra committed
Commit b7f07a7 · verified · 1 Parent(s): 4e044c9

Update README.md

Files changed (1): README.md (+90 -139)

README.md CHANGED
- transformers
---

# MedGemma Temporal Change Detection (LoRA Adapter)

This repository provides **LoRA adapters** fine-tuned on top of **google/medgemma-1.5-4b-it** for exploring **temporal change detection in dermatoscopic image pairs**. The project investigates whether lightweight parameter-efficient fine-tuning can adapt a multimodal medical foundation model to a **novel temporal reasoning task**.
### Model Description

This repository contains LoRA adapters only, not a full model checkpoint; a standalone checkpoint can be produced by merging the adapter into the base model, as sketched after the list below.

- **Developed and shared by:** Dung Claire Tran ([@dunktra](https://huggingface.co/dunktra))
- **Base Model:** [google/medgemma-1.5-4b-it](https://huggingface.co/google/medgemma-1.5-4b-it)
- **Fine-Tuning Method:** LoRA (Low-Rank Adaptation, PEFT)
- **Model type:** Vision–Language Model (VLM) adapter
- **Task:** Binary classification of temporal change in skin lesion image pairs
- **Dataset:** [dunktra/dermacheck-temporal-pairs](https://huggingface.co/datasets/dunktra/dermacheck-temporal-pairs) (synthetic temporal pairs)
- **Language(s) (NLP):** English
- **License:** Inherits the license of google/medgemma-1.5-4b-it
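Because only adapter weights are distributed, a merged standalone checkpoint can be produced with PEFT's `merge_and_unload`. A minimal sketch, assuming the repo ids above (the output directory name is illustrative):

```python
from transformers import AutoModelForVision2Seq
from peft import PeftModel
import torch

# Load the base model, attach the adapter, then fold the LoRA weights in
base = AutoModelForVision2Seq.from_pretrained(
    "google/medgemma-1.5-4b-it", torch_dtype=torch.bfloat16
)
adapted = PeftModel.from_pretrained(base, "dunktra/medgemma-temporal-lora")
merged = adapted.merge_and_unload()

merged.save_pretrained("medgemma-temporal-merged")  # hypothetical output dir
```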
 
### Model Sources

- **Repository:** [Kaggle notebook (training & evaluation)](https://www.kaggle.com/code/dungclairetran/dermacheck-medgemma-lora-fine-tuning)
## Uses

### Direct Use

- Research and experimentation with **temporal reasoning in medical imaging**
- Evaluation of **LoRA fine-tuning feasibility** on multimodal medical foundation models
- Educational and benchmarking purposes
### Out-of-Scope Use

- Clinical diagnosis or medical decision-making
- Deployment in real-world healthcare settings without clinical validation

This model is **not a medical device**.

## Limitations

- Fine-tuning effects may not surface when predictions are scored via **keyword-based label extraction**
- Binary classification may mask improvements in:
  - reasoning structure
  - explanatory language
  - uncertainty expression
- Synthetic temporal data limits real-world generalization
- The adapter inherits all limitations of the base MedGemma model
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model.

## How to Get Started with the Model

Use the code below to get started with the model:
```python
from transformers import AutoModelForVision2Seq, AutoProcessor
from peft import PeftModel
import torch

# Load the base MedGemma model in bfloat16
base_model = AutoModelForVision2Seq.from_pretrained(
    "google/medgemma-1.5-4b-it",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Attach the fine-tuned LoRA adapter
model = PeftModel.from_pretrained(
    base_model,
    "dunktra/medgemma-temporal-lora",
)

# The processor (tokenizer + image preprocessing) ships with the adapter repo
processor = AutoProcessor.from_pretrained("dunktra/medgemma-temporal-lora")
```
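A minimal inference sketch follows. The chat-style prompt, image file names, and generation settings are illustrative assumptions, not the exact protocol used during training:

```python
from PIL import Image

# Hypothetical "before" and "after" images of the same lesion
before = Image.open("lesion_before.png")
after = Image.open("lesion_after.png")

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": before},
        {"type": "image", "image": after},
        {"type": "text", "text": "Has this skin lesion changed between the "
                                  "two images? Answer CHANGE or NO CHANGE, "
                                  "then explain."},
    ],
}]

inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

with torch.inference_mode():
    output = model.generate(**inputs, max_new_tokens=128)

# Decode only the newly generated tokens
print(processor.decode(output[0][inputs["input_ids"].shape[-1]:],
                       skip_special_tokens=True))
```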
## Training Details

### Training Data

- **Source:** [dunktra/dermacheck-temporal-pairs](https://huggingface.co/datasets/dunktra/dermacheck-temporal-pairs)
- **Description:** Synthetic before/after dermatoscopic image pairs labeled for temporal change
- **Splits:**
  - **Training:** ~630 pairs
  - **Validation:** ~135 pairs
  - **Test:** 135 pairs

**Note:** *The dataset consists of **synthetic temporal pairs**, not real longitudinal patient data.*
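To inspect the data, the dataset can be loaded with the 🤗 `datasets` library. A quick sketch (split names and features are whatever the dataset card defines, so they are printed rather than assumed):

```python
from datasets import load_dataset

# Download the synthetic temporal-pair dataset from the Hub
ds = load_dataset("dunktra/dermacheck-temporal-pairs")

print(ds)  # inspect the available splits and features
```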
### Training Configuration

- **LoRA Rank (r):** 8
- **LoRA Alpha:** 16
- **Target Modules:** q_proj, k_proj, v_proj, o_proj
- **LoRA Dropout:** 0.05
- **Epochs:** 3
- **Effective Batch Size:** 16
- **Learning Rate:** 2e-4
- **Precision:** bfloat16
- **Frameworks:** Transformers + PEFT

These hyperparameters map onto a PEFT `LoraConfig` as sketched below.
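A minimal `LoraConfig` reproducing the listed hyperparameters. This is a sketch, not the notebook's exact code; the `task_type` and trainer setup are assumptions:

```python
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",  # assumption: causal-LM objective on the VLM decoder
)

# base_model as loaded in the get-started snippet above;
# only the low-rank adapter weights become trainable
peft_model = get_peft_model(base_model, lora_config)
peft_model.print_trainable_parameters()
```

With rank 8 on the four attention projections, only a small fraction of the base model's parameters is trained.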
## Evaluation

### Metrics

- Precision
- Recall
- F1 score (binary classification)

Predicted labels are extracted from the model's free-text responses with a keyword-matching heuristic, sketched below.
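The keywords, helper name, and example responses below are illustrative assumptions, not the notebook's exact heuristic:

```python
from sklearn.metrics import precision_recall_fscore_support

def extract_label(response: str) -> int:
    """Map a free-text response to a binary change label via keywords."""
    text = response.lower()
    if "no change" in text or "unchanged" in text:
        return 0
    if "change" in text:
        return 1
    return 0  # default when no keyword is found

# Toy illustration with four responses and gold labels
y_true = [1, 0, 1, 1]
responses = [
    "Clear change in lesion size.",
    "No change observed between the two images.",
    "The lesion has changed in color.",
    "Subtle change along the border.",
]
y_pred = [extract_label(r) for r in responses]

precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="binary", zero_division=0
)
print(f"precision={precision:.4f} recall={recall:.4f} f1={f1:.4f}")
```

Because the score depends only on the extracted binary label, two models that phrase their reasoning very differently can still receive identical metrics, which is the limitation noted above.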
### Results (Test Set: 135 temporal pairs)

| Metric    | Base MedGemma | Fine-Tuned (LoRA) | Change |
|-----------|---------------|-------------------|--------|
| F1 Score  | 0.8797        | 0.8797            | +0.00% |
| Precision | 0.7852        | 0.7852            | +0.00% |
| Recall    | 1.0000        | 1.0000            | +0.00% |
 
LoRA fine-tuning **did not** yield measurable improvements under the current evaluation protocol.

### Qualitative Analysis

- No test cases were found where the fine-tuned model corrected errors made by the base model.
- Fine-tuning did not alter binary decision outcomes given the current response-parsing heuristic.
 
## License

This adapter inherits the license and usage restrictions of:

- **google/medgemma-1.5-4b-it**
- the underlying datasets used by the base model

Non-commercial research use only.

## Acknowledgements

- Google MedGemma team
- PEFT / Hugging Face ecosystem

*Created for the **MedGemma Impact Challenge 2026 – Novel Task Exploration**.*
## Model Card Contact

[dunktra](https://huggingface.co/dunktra)

### Framework versions

- PEFT 0.18.1