samwell Claude committed
Commit f6108da · 1 parent: 940e3ae

fix: Use correct Hounsfield Unit range for torchxrayvision segmentation


CRITICAL FIX: The torchxrayvision PSPNet model expects images in
Hounsfield Unit range [-1024, 1024], NOT normalized [0, 1] range!

The warning message revealed this:
"Input image has the range [0.22,0.91] which doesn't seem to be
in the [-1024,1024] range"

This mismatch drove output probabilities down to ~0.004 instead of producing proper detections.

Changes:
- For 8-bit images (0-255): Scale to [-1024, 600] (lung window in HU)
- For DICOM/16-bit: Keep original HU range
- Removed the incorrect preprocess_medical_image() normalization

Formula: (pixel / 255.0) * 1624 - 1024
Maps: 0 → -1024 (air), 255 → 600 (bone/tissue)

This should dramatically increase segmentation probabilities!
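The scaling rule above can be sketched as a small standalone helper (`to_hu_range` is a hypothetical name; the actual patch inlines this logic in the tool):

```python
import numpy as np

def to_hu_range(img: np.ndarray) -> np.ndarray:
    """Map pixel data into the HU-like range the segmentation model expects.

    8-bit inputs (0-255) are scaled to a typical lung window [-1024, 600];
    anything with values above 255 is assumed to already be in Hounsfield units.
    """
    if img.dtype == np.uint8 or img.max() <= 255:
        # (pixel / 255) * 1624 - 1024: maps 0 -> -1024 (air), 255 -> 600 (bone/tissue)
        return (img.astype(np.float32) / 255.0) * 1624 - 1024
    # DICOM / 16-bit data: keep the original HU range, just cast to float32
    return img.astype(np.float32)
```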

Co-Authored-By: Claude <noreply@anthropic.com>

medrax/tools/segmentation/segmentation.py CHANGED
```diff
@@ -304,9 +304,17 @@ class ChestXRaySegmentationTool(BaseTool):
         original_img = original_img[:, :, 0]
         print(f"After channel extraction: {original_img.shape}")
 
-        # Use robust normalization that handles both 8-bit and 16-bit images
-        img = preprocess_medical_image(original_img)
-        print(f"After preprocessing: shape={img.shape}, dtype={img.dtype}, range=[{img.min():.3f}, {img.max():.3f}]")
+        # TorchXRayVision models expect images in the range [-1024, 1024] (Hounsfield units),
+        # NOT normalized to [0, 1]! We need to scale 8-bit images to this range.
+        # For 8-bit images (0-255), map to approximate lung window: -1024 to 600
+        if original_img.dtype == np.uint8 or original_img.max() <= 255:
+            # Scale from [0, 255] to [-1024, 600] (typical lung window in HU)
+            img = (original_img.astype(np.float32) / 255.0) * 1624 - 1024
+            print(f"Converted 8-bit to HU-like range: [{img.min():.1f}, {img.max():.1f}]")
+        else:
+            # Assume already in HU or similar range
+            img = original_img.astype(np.float32)
+            print(f"Kept original range: [{img.min():.1f}, {img.max():.1f}]")
 
         img = img[None, ...]
         print(f"After adding batch dim: {img.shape}")
```
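A minimal end-to-end sketch of the patched preprocessing path, with random placeholder data standing in for a real chest X-ray:

```python
import numpy as np

# Placeholder 8-bit chest X-ray; a real image would come from the file loader.
original_img = np.random.randint(0, 256, (512, 512), dtype=np.uint8)

# Scale [0, 255] into the lung-window HU range [-1024, 600], as the patch does.
img = (original_img.astype(np.float32) / 255.0) * 1624 - 1024

# Add the batch dimension the model's forward pass expects.
img = img[None, ...]  # shape (1, 512, 512)
```

For comparison, torchxrayvision also ships its own `xrv.datasets.normalize(img, maxval)` helper, which maps `[0, maxval]` onto the full symmetric `[-1024, 1024]` range; the patch instead caps at 600 HU to approximate a lung window.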