outputs = model(**inputs)
```

## Coordinate Mapping

The processor returns coordinate mapping information that allows you to map coordinates from the processed image back to the original image space. This is useful for applications like lesion detection, where you need to annotate or visualize detected features on the original image.

### Output Format

The processor returns these additional keys:
- `scale_x`, `scale_y`: Scale factors for coordinate mapping (shape: `(B,)`)
- `offset_x`, `offset_y`: Offset values for coordinate mapping (shape: `(B,)`)

### Mapping Formula

To map coordinates from the processed image back to original coordinates:

```python
orig_x = offset_x + cropped_x * scale_x
orig_y = offset_y + cropped_y * scale_y
```

Where `cropped_x` and `cropped_y` are coordinates in the processed image (range: `[0, size - 1]`).

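The same affine relation can be inverted to go the other way, from original-image coordinates into the processed image. A minimal sketch of both directions as plain functions (the helper names `map_to_original` and `map_to_processed` are illustrative, not part of the processor API):

```python
def map_to_original(cropped_x, cropped_y, scale_x, scale_y, offset_x, offset_y):
    """Processed-image coordinates -> original-image coordinates."""
    return offset_x + cropped_x * scale_x, offset_y + cropped_y * scale_y

def map_to_processed(orig_x, orig_y, scale_x, scale_y, offset_x, offset_y):
    """Original-image coordinates -> processed-image coordinates (the inverse)."""
    return (orig_x - offset_x) / scale_x, (orig_y - offset_y) / scale_y

# Round trip with synthetic mapping parameters (scale 2.0, offsets 10/20)
x, y = map_to_original(100.0, 150.0, 2.0, 2.0, 10.0, 20.0)
print(x, y)  # 210.0 320.0
print(map_to_processed(x, y, 2.0, 2.0, 10.0, 20.0))  # (100.0, 150.0)
```

The inverse direction is handy when you have annotations on the original image (e.g., ground-truth lesion locations) and need them in processed-image space for training.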
### Example: Single Point Mapping

```python
from transformers import AutoImageProcessor
from PIL import Image

# Process image
processor = AutoImageProcessor.from_pretrained("iszt/eye-clahe-processor", trust_remote_code=True)
image = Image.open("fundus.jpg")
outputs = processor(image, return_tensors="pt")

# Detected point in the processed image (e.g., from a model prediction)
detected_x, detected_y = 100.0, 150.0

# Map back to original image coordinates
orig_x = outputs['offset_x'] + detected_x * outputs['scale_x']
orig_y = outputs['offset_y'] + detected_y * outputs['scale_y']

print(f"Original coordinates: ({orig_x.item():.2f}, {orig_y.item():.2f})")
```

### Example: Multiple Points in Batch

```python
import torch
from PIL import Image

# Process a batch of images
images = [Image.open(f"image_{i}.jpg") for i in range(4)]
outputs = processor(images, return_tensors="pt")

# Detected points for each image, shape (B, N, 2) where N is the number of points
detected_points = torch.tensor([
    [[50.0, 60.0], [100.0, 120.0]],   # Image 0: 2 points
    [[75.0, 80.0], [150.0, 160.0]],   # Image 1: 2 points
    [[90.0, 95.0], [180.0, 190.0]],   # Image 2: 2 points
    [[65.0, 70.0], [130.0, 140.0]],   # Image 3: 2 points
])

# Reshape the (B,) mapping tensors so they broadcast over the points
B, N, _ = detected_points.shape
scale_x = outputs['scale_x'].view(B, 1, 1)
scale_y = outputs['scale_y'].view(B, 1, 1)
offset_x = outputs['offset_x'].view(B, 1, 1)
offset_y = outputs['offset_y'].view(B, 1, 1)

# Map all points back to original coordinates
orig_x = offset_x + detected_points[..., 0:1] * scale_x
orig_y = offset_y + detected_points[..., 1:2] * scale_y

original_points = torch.cat([orig_x, orig_y], dim=-1)  # (B, N, 2)
```

### Use Cases

- **Lesion Detection**: Map detected lesion coordinates back for visualization
- **Optic Disc Localization**: Track anatomical landmarks through preprocessing
- **Vessel Segmentation**: Align segmentation masks with original images
- **Quality Control**: Verify feature alignment across the processing pipeline

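For the lesion-detection use case, the same per-coordinate formula applies to each box corner. A minimal sketch, assuming boxes are `(x1, y1, x2, y2)` in processed-image coordinates; the scale/offset values here are synthetic stand-ins for the tensors the processor returns:

```python
import torch

# Synthetic mapping parameters standing in for outputs['scale_x'] etc., shape (B,)
scale_x = torch.tensor([2.0])
scale_y = torch.tensor([2.0])
offset_x = torch.tensor([10.0])
offset_y = torch.tensor([20.0])

# One detected lesion box per image: (B, 4) as (x1, y1, x2, y2) in processed coords
boxes = torch.tensor([[50.0, 60.0, 100.0, 120.0]])

# Map x-coordinates and y-coordinates separately, then reassemble the boxes
orig_boxes = torch.stack([
    offset_x + boxes[:, 0] * scale_x,
    offset_y + boxes[:, 1] * scale_y,
    offset_x + boxes[:, 2] * scale_x,
    offset_y + boxes[:, 3] * scale_y,
], dim=-1)

print(orig_boxes)  # tensor([[110., 140., 210., 260.]])
```

Because scaling is applied independently per axis, boxes stay axis-aligned after mapping; the same stacking pattern extends to `(B, N, 4)` batches of boxes.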
## Technical Details

### Eye Center Detection