Update README.md
@@ -21,6 +21,23 @@ This model is an improved version of the architecture used in the [paper](https:
Special thanks to **Juno** for contributing ideas and feedback that greatly helped in making the model lighter and more efficient.
### Background and Motivation

In computational pathology, a single whole-slide image (WSI) is typically partitioned into thousands to tens of thousands of high-resolution image patches (e.g., 512×512 pixels) for analysis.
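To make the scale concrete, here is a minimal sketch of the patch-count arithmetic. The slide dimensions and the `patch_count` helper are illustrative assumptions, not measurements from this model card:

```python
def patch_count(slide_w: int, slide_h: int, patch: int = 512) -> int:
    """Number of non-overlapping patch x patch tiles covering a WSI.

    Uses ceiling division so partial border tiles are counted.
    """
    cols = -(-slide_w // patch)  # ceil(slide_w / patch)
    rows = -(-slide_h // patch)  # ceil(slide_h / patch)
    return cols * rows

# A hypothetical 40x scan of ~100,000 x 80,000 pixels:
print(patch_count(100_000, 80_000))  # 30772 patches per slide
```

Even a single slide at this (assumed) resolution lands in the "tens of thousands of patches" regime described above.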
This setting places strong constraints on both throughput and latency: even small inefficiencies in patch-level inference can lead to prohibitively long end-to-end processing times at the slide level.
CSATv2 was originally designed to address this constraint by enabling high-throughput, high-resolution inference while preserving classification accuracy. In practical deployments, this design reduced slide-level processing time from tens of minutes to approximately one minute, making near-real-time pathological analysis feasible at scale.
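The "tens of minutes to about one minute" claim follows directly from how per-patch latency compounds over a slide. The numbers below are hypothetical, chosen only to show the arithmetic; actual throughput depends on hardware, batch size, and the model:

```python
def slide_time_minutes(n_patches: int, ms_per_patch: float) -> float:
    """End-to-end slide processing time for sequential patch inference."""
    return n_patches * ms_per_patch / 1000 / 60

n = 30_000  # patches in one WSI (assumed)
print(slide_time_minutes(n, 40.0))  # 40 ms/patch -> 20.0 minutes/slide
print(slide_time_minutes(n, 2.0))   #  2 ms/patch ->  1.0 minute/slide
```

A 20x reduction in per-patch cost is exactly the kind of change that moves slide-level turnaround from tens of minutes into the near-real-time range.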
## Model description