{ "pdf_info": [ { "para_blocks": [ { "bbox": [ 63, 103, 531, 121 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 63, 103, 531, 121 ], "spans": [ { "bbox": [ 63, 103, 531, 121 ], "type": "text", "content": "A Loopback Network for Explainable Microvascular Invasion Classification" } ] } ], "index": 2 }, { "bbox": [ 53, 142, 583, 158 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 53, 142, 583, 158 ], "spans": [ { "bbox": [ 53, 142, 583, 158 ], "type": "text", "content": "Shengxuming Zhang" }, { "bbox": [ 53, 142, 583, 158 ], "type": "inline_equation", "content": "^{1}" }, { "bbox": [ 53, 142, 583, 158 ], "type": "text", "content": ", Tianqi Shi" }, { "bbox": [ 53, 142, 583, 158 ], "type": "inline_equation", "content": "^{2}" }, { "bbox": [ 53, 142, 583, 158 ], "type": "text", "content": ", Yang Jiang" }, { "bbox": [ 53, 142, 583, 158 ], "type": "inline_equation", "content": "^{2}" }, { "bbox": [ 53, 142, 583, 158 ], "type": "text", "content": ", Xiuming Zhang" }, { "bbox": [ 53, 142, 583, 158 ], "type": "inline_equation", "content": "^{1}" }, { "bbox": [ 53, 142, 583, 158 ], "type": "text", "content": ", Jie Lei" }, { "bbox": [ 53, 142, 583, 158 ], "type": "inline_equation", "content": "^{3}" }, { "bbox": [ 53, 142, 583, 158 ], "type": "text", "content": ", Zunlei Feng" }, { "bbox": [ 53, 142, 583, 158 ], "type": "inline_equation", "content": "^{1,4,5,*}" }, { "bbox": [ 53, 142, 583, 158 ], "type": "text", "content": ", Mingli Song" }, { "bbox": [ 53, 142, 583, 158 ], "type": "inline_equation", "content": "^{1,4,5}" } ] } ], "index": 3 }, { "bbox": [ 137, 156, 501, 199 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 137, 156, 501, 199 ], "spans": [ { "bbox": [ 137, 156, 501, 199 ], "type": "text", "content": "1Zhejiang University, 2Alibaba Group, 3Zhejiang University of Technology, 4Shanghai Institute for Advanced Study of Zhejiang University, 5Zhejiang Provincial Key Laboratory of Service Robot" } ] } ], "index": 4 }, { "bbox": [ 128, 201, 505, 
227 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 128, 201, 505, 227 ], "spans": [ { "bbox": [ 128, 201, 505, 227 ], "type": "text", "content": "{zsxm1998, 1508056, zunleifeng, brooksong}@zju.edu.cn, {tianqi.tianqishi, yangjiang.yj}@alibaba-inc.com, jasonlei@zjut.edu.cn" } ] } ], "index": 5 }, { "bbox": [ 143, 255, 192, 268 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 143, 255, 192, 268 ], "spans": [ { "bbox": [ 143, 255, 192, 268 ], "type": "text", "content": "Abstract" } ] } ], "index": 6 }, { "bbox": [ 46, 287, 290, 396 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 46, 287, 290, 396 ], "spans": [ { "bbox": [ 46, 287, 290, 396 ], "type": "text", "content": "Microvascular invasion (MVI) is a critical factor for prognosis evaluation and cancer treatment. The current diagnosis of MVI relies on pathologists to manually identify cancerous cells from hundreds of blood vessels, which is time-consuming, tedious, and subjective. Recently, deep learning has achieved promising results in medical image analysis tasks. However, the unexplainability of black box models and the requirement of massive annotated samples limit the clinical application of deep learning based diagnostic methods." } ] } ], "index": 7 }, { "bbox": [ 46, 403, 290, 668 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 46, 403, 290, 668 ], "spans": [ { "bbox": [ 46, 403, 290, 668 ], "type": "text", "content": "In this paper, aiming to develop an accurate, objective, and explainable diagnosis tool for MVI, we propose a Loopback Network (LoopNet) for classifying MVI efficiently. With the image-level category annotations of the collected Pathologic Vessel Image Dataset (PVID), LoopNet is devised to be composed of a binary classification branch and a cell locating branch. The latter is devised to locate the area of cancerous cells, regular non-cancerous cells, and background. For healthy samples, the pseudo masks of cells supervise the cell locating branch to distinguish the area of regular non-cancerous cells and background. For each MVI sample, the cell locating branch predicts the mask of cancerous cells. Then the masked cancerous and non-cancerous areas of the same sample are input back to the binary classification branch separately. The loopback between the two branches enables the category label to supervise the cell locating branch to learn the locating ability for cancerous areas. Experimental results show that the proposed LoopNet achieves " }, { "bbox": [ 46, 403, 290, 668 ], "type": "inline_equation", "content": "97.5\\%" }, { "bbox": [ 46, 403, 290, 668 ], "type": "text", "content": " accuracy on MVI classification. Surprisingly, the proposed loopback mechanism not only enables LoopNet to predict the cancerous area but also facilitates the classification backbone to achieve better classification performance." } ] } ], "index": 8 }, { "type": "image", "bbox": [ 309, 256, 544, 360 ], "blocks": [ { "bbox": [ 309, 256, 544, 360 ], "lines": [ { "bbox": [ 309, 256, 544, 360 ], "spans": [ { "bbox": [ 309, 256, 544, 360 ], "type": "image", "image_path": "a63cad09d737cf74fe207380eb295bbe6fd65132dd592320c62cf65764220916.jpg" } ] } ], "index": 9, "angle": 0, "type": "image_body" } ], "index": 9 }, { "type": "image", "bbox": [ 308, 362, 425, 432 ], "blocks": [ { "bbox": [ 308, 362, 425, 432 ], "lines": [ { "bbox": [ 308, 362, 425, 432 ], "spans": [ { "bbox": [ 308, 362, 425, 432 ], "type": "image", "image_path": "e01229ec4d5ddbda8f2753af6841a99e74d96ea155b66f9a89b64b4ed0160095.jpg" } ] } ], "index": 10, "angle": 0, "type": "image_body" }, { "bbox": [ 307, 437, 544, 449 ], "lines": [ { "bbox": [ 307, 437, 544, 449 ], "spans": [ { "bbox": [ 307, 437, 544, 449 ], "type": "text", "content": "vessels healthy/MVI vessels healthy/cancerous cells" } ] } ], "index": 12, "angle": 0, "type": "image_footnote" } ], "index": 10 }, { "type": "image", 
"bbox": [ 428, 362, 544, 432 ], "blocks": [ { "bbox": [ 428, 362, 544, 432 ], "lines": [ { "bbox": [ 428, 362, 544, 432 ], "spans": [ { "bbox": [ 428, 362, 544, 432 ], "type": "image", "image_path": "00e5a4600f5f6878d169d5e53c2617263faf2eb3a0693ddee6cfa8610075a326.jpg" } ] } ], "index": 11, "angle": 0, "type": "image_body" }, { "bbox": [ 304, 460, 547, 528 ], "lines": [ { "bbox": [ 304, 460, 547, 528 ], "spans": [ { "bbox": [ 304, 460, 547, 528 ], "type": "text", "content": "Figure 1. Examples of MVI and healthy vessels extracted from a pathological image of liver cancer. (a) The super-large sample contains numerous blood vessels of varied sizes. (b) The healthy vessels are composed of a variety of cells with similar appearances. (c) The cancerous cells have varied types and similar appearances to parts of healthy cells." } ] } ], "index": 13, "angle": 0, "type": "image_footnote" } ], "index": 11 }, { "bbox": [ 306, 536, 387, 550 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 306, 536, 387, 550 ], "spans": [ { "bbox": [ 306, 536, 387, 550 ], "type": "text", "content": "1. Introduction" } ] } ], "index": 14 }, { "bbox": [ 304, 557, 547, 665 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 304, 557, 547, 665 ], "spans": [ { "bbox": [ 304, 557, 547, 665 ], "type": "text", "content": "Microvascular invasion (MVI), referring to the appearance of cancerous cells within microscopic venules or veins, is a histological feature of cancer related to aggressive biological behavior [27, 56]. In clinical practice, MVI is usually used as a reference standard for assessing cancer spreading, which is a critical factor for prognosis evaluation and treatment [8, 15, 43]. Accurate prognosis evaluation along with appropriate treatment can effectively improve patients' quality of life and prolong their lifespan." 
} ] } ], "index": 15 }, { "bbox": [ 304, 665, 547, 714 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 304, 665, 547, 714 ], "spans": [ { "bbox": [ 304, 665, 547, 714 ], "type": "text", "content": "Currently, the diagnosis of MVI relies on pathologists to manually identify cancerous cells from hundreds of blood vessels, each of which usually contains dozens of cells. As shown in Fig.1, each pathological sample is an image of" } ] } ], "index": 16 } ], "discarded_blocks": [ { "bbox": [ 65, 2, 111, 34 ], "type": "header", "angle": 0, "lines": [ { "bbox": [ 65, 2, 111, 34 ], "spans": [ { "bbox": [ 65, 2, 111, 34 ], "type": "text", "content": "CVF" } ] } ], "index": 0 }, { "bbox": [ 145, 0, 494, 37 ], "type": "header", "angle": 0, "lines": [ { "bbox": [ 145, 0, 494, 37 ], "spans": [ { "bbox": [ 145, 0, 494, 37 ], "type": "text", "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore." } ] } ], "index": 1 }, { "bbox": [ 58, 702, 135, 712 ], "type": "page_footnote", "angle": 0, "lines": [ { "bbox": [ 58, 702, 135, 712 ], "spans": [ { "bbox": [ 58, 702, 135, 712 ], "type": "text", "content": "*Corresponding author" } ] } ], "index": 17 }, { "bbox": [ 295, 748, 315, 757 ], "type": "page_number", "angle": 0, "lines": [ { "bbox": [ 295, 748, 315, 757 ], "spans": [ { "bbox": [ 295, 748, 315, 757 ], "type": "text", "content": "7443" } ] } ], "index": 18 } ], "page_size": [ 612, 792 ], "page_idx": 0 }, { "para_blocks": [ { "bbox": [ 46, 72, 289, 204 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 46, 72, 289, 204 ], "spans": [ { "bbox": [ 46, 72, 289, 204 ], "type": "text", "content": "about " }, { "bbox": [ 46, 72, 289, 204 ], "type": "inline_equation", "content": "100,000 \\times 250,000" }, { "bbox": [ 46, 72, 289, 204 ], "type": "text", "content": " px. These super-large pathological images have three characteristics. Firstly, each sample contains numerous blood vessels (Fig.1a). Secondly, each blood vessel usually has a variety of cells with similar appearances (Fig.1b). Thirdly, types of cancerous cells are also varied (Fig.1c). Therefore, diagnosis of MVI requires the professional pathologist to discriminate cancerous/non-cancerous cells carefully, which is time-consuming and tedious. The discrimination relies on the individual pathologist's prior knowledge, which is subjective and occasionally leads to misdiagnosis." } ] } ], "index": 0 }, { "bbox": [ 46, 205, 289, 361 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 46, 205, 289, 361 ], "spans": [ { "bbox": [ 46, 205, 289, 361 ], "type": "text", "content": "In recent years, deep learning has achieved promising results in many areas [28-30, 68-72], including medical image analysis. Many researchers focus on applying deep learning techniques to image-based tumor analysis tasks, such as tumor grading [3, 73], lesion area detection [14, 35], vessel segmentation [18, 31], cell detection/segmentation [42, 54, 67, 75], etc. The successful application of deep learning relies on massive annotated samples. However, annotating cancerous cells of all MVI images is very time-consuming. What's more, the black-box characteristic of deep learning leads to unexplainable classification results, which limits the clinical application of deep learning based diagnostic methods." } ] } ], "index": 1 }, { "bbox": [ 47, 362, 288, 422 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 47, 362, 288, 422 ], "spans": [ { "bbox": [ 47, 362, 288, 422 ], "type": "text", "content": "In order to apply the deep learning technique to the MVI analysis task, we collect the first Pathologic Vessel Image Dataset (PVID) containing healthy blood vessel samples and MVI samples from the pathological images of liver cancer patients." 
} ] } ], "index": 2 }, { "bbox": [ 47, 423, 289, 484 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 47, 423, 289, 484 ], "spans": [ { "bbox": [ 47, 423, 289, 484 ], "type": "text", "content": "In this paper, we aim to develop an accurate, objective, and explainable method for MVI diagnosis with as few annotations as possible. As annotating the cells in each MVI vessel is time-consuming, we only adopt easily-obtained image-level category labels for developing the new approach." } ] } ], "index": 3 }, { "bbox": [ 46, 484, 289, 628 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 46, 484, 289, 628 ], "spans": [ { "bbox": [ 46, 484, 289, 628 ], "type": "text", "content": "For explainable MVI classification, the developed approach should provide credible evidence, such as cancerous areas and classification results. Therefore, the proposed approach is devised to be composed of two branches: the binary classification branch and the cell locating branch. The binary classification branch is used to classify the healthy blood vessels and MVI vessels with corresponding vessel image-level category labels as supervision. The initial goal of the cell locating branch is to distinguish the cancerous cells. However, the supervision information for the cell locating branch is insufficient, which requires exploring more supervision information from the characteristic of MVI itself." 
} ] } ], "index": 4 }, { "bbox": [ 46, 629, 289, 715 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 46, 629, 289, 715 ], "spans": [ { "bbox": [ 46, 629, 289, 715 ], "type": "text", "content": "Firstly, based on the characteristic of blood vessel samples that most cells can be grouped into some similar templates according to structure and color, the correlation filter [9, 22], which is widely adopted in the object tracking area, can be used for locating most of the cells; hence the results of this filter can be interpreted as pseudo masks of cells for supervising the cell locating branch to distinguish" } ] } ], "index": 5 }, { "bbox": [ 304, 72, 545, 156 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 304, 72, 545, 156 ], "spans": [ { "bbox": [ 304, 72, 545, 156 ], "type": "text", "content": "the cell area from the background. Secondly, the healthy vessel sample, which contains only non-cancerous cells and background, is used for supervising the cell locating branch in distinguishing the healthy area (non-cancerous cells and background) from the cancerous cells. Lastly, we devise a loopback strategy between the binary classification branch and the cell locating branch to discover the cancerous area from each MVI sample." } ] } ], "index": 6 }, { "bbox": [ 304, 156, 547, 275 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 304, 156, 547, 275 ], "spans": [ { "bbox": [ 304, 156, 547, 275 ], "type": "text", "content": "For the loopback strategy, the cell locating branch first predicts the cancerous area of the MVI sample, then the cancerous and non-cancerous areas of the same sample masked with the predicted results are input back into the classification branch separately. The devised loopback strategy effectively achieves two goals: 1) utilizing the image-level category label to supervise the cell locating branch in distinguishing the cancerous area from other areas; 2) building the direct relation between the predicted cancerous areas and the final classification result." } ] } ], "index": 7 }, { "bbox": [ 304, 276, 547, 358 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 304, 276, 547, 358 ], "spans": [ { "bbox": [ 304, 276, 547, 358 ], "type": "text", "content": "Experimental results show that the loopback strategy not only enables the proposed framework to predict precise cancerous areas but also facilitates the classification branch to achieve better classification performance. The two-branch framework with the loopback strategy, termed as Loopback Network (LoopNet), achieves " }, { "bbox": [ 304, 276, 547, 358 ], "type": "inline_equation", "content": "97.5\\%" }, { "bbox": [ 304, 276, 547, 358 ], "type": "text", "content": " accuracy on MVI classification." } ] } ], "index": 8 }, { "bbox": [ 306, 360, 545, 383 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 306, 360, 545, 383 ], "spans": [ { "bbox": [ 306, 360, 545, 383 ], "type": "text", "content": "In conclusion, the main contributions of our work are summarized as follows:" } ] } ], "index": 9 }, { "bbox": [ 317, 392, 547, 634 ], "type": "list", "angle": 0, "index": 14, "blocks": [ { "bbox": [ 317, 392, 545, 464 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 317, 392, 545, 464 ], "spans": [ { "bbox": [ 317, 392, 545, 464 ], "type": "text", "content": "- We propose the first deep learning based network, termed as LoopNet, for explainable MVI classification. LoopNet fully exploits the characteristics of MVI samples to achieve blood vessel classification and cell locating results simultaneously and can be extended to MVI analysis tasks on various organs." 
} ] } ], "index": 10 }, { "bbox": [ 317, 472, 545, 533 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 317, 472, 545, 533 ], "spans": [ { "bbox": [ 317, 472, 545, 533 ], "type": "text", "content": "- The loopback strategy is devised for utilizing the category label to supervise LoopNet in distinguishing the cancerous area from other regions, which effectively builds the direct relation between the located cancerous area and the final classification result." } ] } ], "index": 11 }, { "bbox": [ 317, 541, 545, 590 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 317, 541, 545, 590 ], "spans": [ { "bbox": [ 317, 541, 545, 590 ], "type": "text", "content": "- We collect the first Pathologic Vessel Image Dataset (PVID) containing 4130 healthy blood vessel samples and 857 MVI samples from the pathological images of 103 liver cancer patients." } ] } ], "index": 12 }, { "bbox": [ 317, 597, 547, 634 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 317, 597, 547, 634 ], "spans": [ { "bbox": [ 317, 597, 547, 634 ], "type": "text", "content": "- Experiments show that LoopNet achieves " }, { "bbox": [ 317, 597, 547, 634 ], "type": "inline_equation", "content": "97.5\\%" }, { "bbox": [ 317, 597, 547, 634 ], "type": "text", "content": " accuracy on PVID, which verifies the potential of deep learning on the MVI classification task." } ] } ], "index": 13 } ], "sub_type": "text" }, { "bbox": [ 306, 645, 392, 658 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 306, 645, 392, 658 ], "spans": [ { "bbox": [ 306, 645, 392, 658 ], "type": "text", "content": "2. Related Work" } ] } ], "index": 15 }, { "bbox": [ 304, 665, 547, 715 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 304, 665, 547, 715 ], "spans": [ { "bbox": [ 304, 665, 547, 715 ], "type": "text", "content": "To our knowledge, there has been no MVI classification method for pathological images until now. Therefore, from a technical point of view, we survey the two most related areas: explainable classification, and cell detection and segmentation." } ] } ], "index": 16 } ], "discarded_blocks": [ { "bbox": [ 295, 748, 315, 757 ], "type": "page_number", "angle": 0, "lines": [ { "bbox": [ 295, 748, 315, 757 ], "spans": [ { "bbox": [ 295, 748, 315, 757 ], "type": "text", "content": "7444" } ] } ], "index": 17 } ], "page_size": [ 612, 792 ], "page_idx": 1 }, { "para_blocks": [ { "bbox": [ 47, 72, 191, 85 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 47, 72, 191, 85 ], "spans": [ { "bbox": [ 47, 72, 191, 85 ], "type": "text", "content": "2.1. Explainable Classification" } ] } ], "index": 0 }, { "bbox": [ 46, 90, 288, 198 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 46, 90, 288, 198 ], "spans": [ { "bbox": [ 46, 90, 288, 198 ], "type": "text", "content": "Deep learning based techniques have been widely used in medical image classification tasks [3, 73]. However, the black box characteristic of deep learning techniques has limited their applications in clinical scenarios. The root reason is that the prediction is unexplainable. Therefore, some researchers have developed explainable classification methods, which can be broadly summarized into two categories: prediction approximation techniques and prediction attribution techniques." } ] } ], "index": 1 }, { "bbox": [ 46, 198, 288, 328 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 46, 198, 288, 328 ], "spans": [ { "bbox": [ 46, 198, 288, 328 ], "type": "text", "content": "For prediction approximation techniques, most methods adopted explainable machine learning methods to approximate the predictions of deep models. [7, 36] adopted explainable random forests to approximate the predictions of CNN-based classification models. Further, Chen et al. [11] incorporated the decision-tree mechanism into the CNN-based classification model, which combines the explainability of random forests and the high performance of the CNN model. However, those methods are unsuitable for the MVI classification task, which requires knowing the number and location of cancerous cells." } ] } ], "index": 2 }, { "bbox": [ 46, 329, 288, 605 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 46, 329, 288, 605 ], "spans": [ { "bbox": [ 46, 329, 288, 605 ], "type": "text", "content": "For prediction attribution techniques, some feature attribution strategies have been proposed for locating the critical features for the final prediction results. The most commonly used strategies are activation-based, perturbation-based, and back-attribute-based techniques. The activation-based techniques [13,44,61,76] attributed important features by calculating a group of weights and then computing a weighted sum of the feature maps. The perturbation-based techniques [37,74,77,80] attributed important features of the input image by removing, masking, or altering them, running a forward pass on the modified image, and measuring the difference from the original output. For the back-attribute-based techniques, some researchers applied the derivative-related terms of the predicted category w.r.t. the input to locate the important features. The existing derivative-related terms, including Gradient [6,52], Gradient " }, { "bbox": [ 46, 329, 288, 605 ], "type": "inline_equation", "content": "\\times" }, { "bbox": [ 46, 329, 288, 605 ], "type": "text", "content": " Input [5,16,51], Integrated Gradients [57], and DeepLIFT [50], have been proven by Ancona et al. [2] to be closely related or approximately equivalent from theoretical and practical perspectives. Those feature attribution strategies built the association between important features and the final prediction. However, different methods usually attribute importance to different feature areas, which shows that the established association lacks credibility." 
} ] } ], "index": 3 }, { "bbox": [ 47, 611, 223, 624 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 47, 611, 223, 624 ], "spans": [ { "bbox": [ 47, 611, 223, 624 ], "type": "text", "content": "2.2. Cell Detection and Segmentation" } ] } ], "index": 4 }, { "bbox": [ 46, 629, 288, 713 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 46, 629, 288, 713 ], "spans": [ { "bbox": [ 46, 629, 288, 713 ], "type": "text", "content": "In recent years, plenty of fully supervised methods have been proposed for cell detection [20, 54, 63, 64, 67, 79] and segmentation [1, 12, 19, 34, 42, 48, 49, 58, 60, 75]. Most of those methods leveraged manually annotated centroids/outlines/masks of cells to supervise the training of the model. Apart from the above end-to-end training methods, some researchers took the special characteristics of cells" } ] } ], "index": 5 }, { "bbox": [ 304, 72, 547, 215 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 304, 72, 547, 215 ], "spans": [ { "bbox": [ 304, 72, 547, 215 ], "type": "text", "content": "into consideration for better detection and segmentation results. Based on the initial segmentation results with FCN, Naylor et al. [46] applied the watershed method to split the cells. Similarly, Xing et al. [65] performed bottom-up shape deformation and top-down shape inference with the initial segmentation results to achieve better cell segmentation alternately. Sirinukunwattana et al. [55] added the local neighborhood constraint into the cell detection and classification model. Naylor et al. [45] devised a regression network for cell distance map segmentation with a fully convolutional network. However, the performance of those methods highly relies on a large number of fine annotations." } ] } ], "index": 6 }, { "bbox": [ 304, 216, 548, 514 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 304, 216, 548, 514 ], "spans": [ { "bbox": [ 304, 216, 548, 514 ], "type": "text", "content": "For the weakly supervised methods, Xu et al. 
[66] fine-tuned a Stacked Sparse Autoencoder, pretrained with image reconstruction, to classify each cell patch for detecting cells automatically. Mahmood et al. [41] adopted the conditional GAN to segment cells with some synthetic samples and original annotated samples. LIRNet [78] adopted cascaded truncated counting indicators on image patches rather than centroid annotations to train a cell detection network. Chamanzar et al. [10] adopted the Voronoi transformation to generate local polygon regions containing only one cell based on centroid annotations, then trained the segmentation network with the generated pseudo annotations. Hu et al. [26] utilized a Generative Adversarial Network (GAN) to generate a cell centroid likelihood map, then used guided backpropagation to visualize the pixel contributions of the map, and finally obtained instance segmentation of cells by graph-cut. Feng et al. [17] proposed a mutual-complementing framework for detecting and segmenting cells simultaneously, where detection and segmentation branches are optimized iteratively. Most of the above methods are used for segmenting or detecting all cells in the pathological tissue area. Due to the unique characteristic of MVI samples, existing methods can't be directly used to detect and segment the cancerous cells. What's more, most of the above methods still require cell-level annotations." } ] } ], "index": 7 }, { "bbox": [ 304, 515, 548, 647 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 304, 515, 548, 647 ], "spans": [ { "bbox": [ 304, 515, 548, 647 ], "type": "text", "content": "Some researchers developed unsupervised methods to detect and segment cells to relieve the massive cost of annotations on cells. Le et al. [24] proposed an unsupervised crosswise sparse convolutional autoencoder to detect cells based on the local sparsity assumption. Hou et al. 
[23] adopted a GAN to synthesize histopathology samples and then trained a task-specific cell segmentation network with the synthetic samples and corresponding masks. However, the unsupervised methods usually fail in real scenarios, especially in cancerous cases, which makes them unsuitable for the MVI classification task." } ] } ], "index": 8 }, { "bbox": [ 306, 657, 362, 670 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 306, 657, 362, 670 ], "spans": [ { "bbox": [ 306, 657, 362, 670 ], "type": "text", "content": "3. Method" } ] } ], "index": 9 }, { "bbox": [ 304, 677, 547, 715 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 304, 677, 547, 715 ], "spans": [ { "bbox": [ 304, 677, 547, 715 ], "type": "text", "content": "In order to achieve an accurate, objective, and explainable diagnosis and analysis of MVI with as few annotations as possible, we devise a Loopback Network consisting of two" } ] } ], "index": 10 } ], "discarded_blocks": [ { "bbox": [ 295, 748, 315, 757 ], "type": "page_number", "angle": 0, "lines": [ { "bbox": [ 295, 748, 315, 757 ], "spans": [ { "bbox": [ 295, 748, 315, 757 ], "type": "text", "content": "7445" } ] } ], "index": 11 } ], "page_size": [ 612, 792 ], "page_idx": 2 }, { "para_blocks": [ { "type": "image", "bbox": [ 49, 70, 541, 247 ], "blocks": [ { "bbox": [ 49, 70, 541, 247 ], "lines": [ { "bbox": [ 49, 70, 541, 247 ], "spans": [ { "bbox": [ 49, 70, 541, 247 ], "type": "image", "image_path": "c1d9aaf85bae3e9a7f27c0b904e0e8359616780b1c85c8d90e3b1f75bd724d18.jpg" } ] } ], "index": 0, "angle": 0, "type": "image_body" }, { "bbox": [ 46, 248, 548, 348 ], "lines": [ { "bbox": [ 46, 248, 548, 348 ], "spans": [ { "bbox": [ 46, 248, 548, 348 ], "type": "text", "content": "Figure 2. 
The framework of LoopNet, which is composed of a binary blood vessel image classification branch " }, { "bbox": [ 46, 248, 548, 348 ], "type": "inline_equation", "content": "\\mathcal{F}_b\\circ \\mathcal{F}_c" }, { "bbox": [ 46, 248, 548, 348 ], "type": "text", "content": " and a cell locating branch " }, { "bbox": [ 46, 248, 548, 348 ], "type": "inline_equation", "content": "\\mathcal{F}_b\\circ \\mathcal{F}_l" }, { "bbox": [ 46, 248, 548, 348 ], "type": "text", "content": ". The image classification branch and cell locating branch share the same backbone " }, { "bbox": [ 46, 248, 548, 348 ], "type": "inline_equation", "content": "\\mathcal{F}_b" }, { "bbox": [ 46, 248, 548, 348 ], "type": "text", "content": ", which extracts the same pathological features for classification and locating. Both healthy and MVI samples are sent to the image classification branch to predict their categories, supervised by image-level labels " }, { "bbox": [ 46, 248, 548, 348 ], "type": "inline_equation", "content": "y" }, { "bbox": [ 46, 248, 548, 348 ], "type": "text", "content": " annotated by the pathologist. With some typical cells manually selected as templates, the kernelized correlation filter is used to generate the pseudo mask " }, { "bbox": [ 46, 248, 548, 348 ], "type": "inline_equation", "content": "\\hat{y}_{loc}" }, { "bbox": [ 46, 248, 548, 348 ], "type": "text", "content": " to supervise grid patch based cell locating for both kinds of vessels. For locating cancerous cells in MVI vessels, we devise a loopback strategy, which separately inputs the masked cancerous areas and masked healthy areas of an MVI vessel sample predicted by the cell locating branch into the image classification branch. The category label will ensure that the masked cancerous areas contain cancerous features and the masked healthy areas only contain healthy features. Therefore, the loopback strategy builds a direct association between the located cancerous areas and the final classification results." } ] } ], "index": 1, "angle": 0, "type": "image_caption" } ], "index": 0 }, { "bbox": [ 46, 354, 288, 402 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 46, 354, 288, 402 ], "spans": [ { "bbox": [ 46, 354, 288, 402 ], "type": "text", "content": "branches, as shown in Fig.2. One of the two branches is responsible for binary vessel image classification, which leverages the easily obtained image-level category labels as supervision." } ] } ], "index": 2 }, { "bbox": [ 46, 402, 287, 485 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 46, 402, 287, 485 ], "spans": [ { "bbox": [ 46, 402, 287, 485 ], "type": "text", "content": "As an indicator of MVI, the presence of cancerous cells in the blood vessel offers credible evidence for the image classification result. Therefore, we add a cell locating branch to discover cancerous cells in these blood vessels. As there are also non-cancerous cells and tissues in MVI vessels, this branch recognizes three categories: background areas, healthy cell areas, and cancerous cell areas." } ] } ], "index": 3 }, { "bbox": [ 46, 486, 289, 628 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 46, 486, 289, 628 ], "spans": [ { "bbox": [ 46, 486, 289, 628 ], "type": "text", "content": "With only the image-level category labels annotated by the pathologist, we devise a loopback strategy to supervise the locating of cancerous cell areas. For MVI vessels, the healthy and cancerous areas masked with the results predicted by the cell locating branch are input back into the vessel image classification branch to utilize the image-level category label to supervise the cell locating branch in distinguishing the cancerous area from other areas. For healthy vessels, the pseudo masks generated by the cell template correlation filter will prompt the cell locating branch to learn to recognize background and healthy cell areas. 
Therefore, only when the background, healthy cell areas, and cancerous cell areas of MVI vessels are perfectly distinguished will all constraints in our proposed LoopNet be satisfied. In this way, the loopback strategy can build the direct relation between the located cancerous areas and the final classification result." } ] } ], "index": 4 }, { "bbox": [ 47, 677, 288, 714 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 47, 677, 288, 714 ], "spans": [ { "bbox": [ 47, 677, 288, 714 ], "type": "text", "content": "The image classification branch and cell locating branch share the same backbone parameters to leverage extracted pathological image features together, followed by an image" } ] } ], "index": 5 }, { "bbox": [ 304, 354, 547, 426 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 304, 354, 547, 426 ], "spans": [ { "bbox": [ 304, 354, 547, 426 ], "type": "text", "content": "classification head and a cell locating head, as shown in Fig.2. We denote the backbone, image classification head, and cell locating head as " }, { "bbox": [ 304, 354, 547, 426 ], "type": "inline_equation", "content": "\\mathcal{F}_b" }, { "bbox": [ 304, 354, 547, 426 ], "type": "text", "content": ", " }, { "bbox": [ 304, 354, 547, 426 ], "type": "inline_equation", "content": "\\mathcal{F}_c" }, { "bbox": [ 304, 354, 547, 426 ], "type": "text", "content": ", and " }, { "bbox": [ 304, 354, 547, 426 ], "type": "inline_equation", "content": "\\mathcal{F}_l" }, { "bbox": [ 304, 354, 547, 426 ], "type": "text", "content": ", respectively. 
For convenience, we denote the composite function " }, { "bbox": [ 304, 354, 547, 426 ], "type": "inline_equation", "content": "\\mathcal{F}_b \\circ \\mathcal{F}_c" }, { "bbox": [ 304, 354, 547, 426 ], "type": "text", "content": " as " }, { "bbox": [ 304, 354, 547, 426 ], "type": "inline_equation", "content": "\\mathcal{F}_{cls}" }, { "bbox": [ 304, 354, 547, 426 ], "type": "text", "content": " and " }, { "bbox": [ 304, 354, 547, 426 ], "type": "inline_equation", "content": "\\mathcal{F}_b \\circ \\mathcal{F}_l" }, { "bbox": [ 304, 354, 547, 426 ], "type": "text", "content": " as " }, { "bbox": [ 304, 354, 547, 426 ], "type": "inline_equation", "content": "\\mathcal{F}_{loc}" }, { "bbox": [ 304, 354, 547, 426 ], "type": "text", "content": ", which correspond to the vessel image classification branch and cell locating branch, respectively." } ] } ], "index": 6 }, { "bbox": [ 304, 426, 547, 569 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 304, 426, 547, 569 ], "spans": [ { "bbox": [ 304, 426, 547, 569 ], "type": "text", "content": "For preprocessing, we utilize vessel segmentation [18] results to remove tissues from the vessels and only focus on the contents inside the vessel lumens. To tackle the variety of vessel sizes, we split the entire vessel image into square patches whose side length is " }, { "bbox": [ 304, 426, 547, 569 ], "type": "inline_equation", "content": "3\\mathrm{x}" }, { "bbox": [ 304, 426, 547, 569 ], "type": "text", "content": " the average cell size, based on the statistical observation that big vessels generally contain large areas of white background. The patches containing cells, as determined by the pseudo masks, are randomly concatenated into a " }, { "bbox": [ 304, 426, 547, 569 ], "type": "inline_equation", "content": "20 \\times 20" }, { "bbox": [ 304, 426, 547, 569 ], "type": "text", "content": " patch grid image to obtain a fixed-size input for training the network, discarding the patches that do not contain any cells. 
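The patch preprocessing just described (split at 3x the average cell size, keep cell-containing patches, pack them into a fixed mosaic, pad with blanks) can be sketched in a few lines. This is a minimal NumPy illustration, not the authors' implementation: the function name, the `seed` argument, and the crude non-white-pixel test standing in for the pseudo-mask cell check are all our assumptions.

```python
import random

import numpy as np

def make_patch_grid(vessel_img, cell_size, grid=20, seed=0):
    """Split a vessel image into square patches of side 3x the average
    cell size, keep patches that contain cells, shuffle them, and pack
    them into a `grid` x `grid` mosaic padded with blank (white) patches."""
    p = 3 * cell_size                       # patch side length
    h, w = vessel_img.shape[:2]
    patches = []
    for i in range(h // p):
        for j in range(w // p):
            patch = vessel_img[i * p:(i + 1) * p, j * p:(j + 1) * p]
            # crude stand-in for the pseudo-mask cell test: keep patches
            # with any non-white pixels
            if (patch < 200).any():
                patches.append(patch)
    random.Random(seed).shuffle(patches)
    patches = patches[:grid * grid]
    while len(patches) < grid * grid:       # blank patches fill the vacancy
        patches.append(np.full((p, p, 3), 255, dtype=vessel_img.dtype))
    rows = [np.concatenate(patches[r * grid:(r + 1) * grid], axis=1)
            for r in range(grid)]
    return np.concatenate(rows, axis=0)
```

In the paper the cell-containing patches are determined by the correlation-filter pseudo masks of Sec. 3.2.1, which would replace the brightness test here.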
If there are fewer than " }, { "bbox": [ 304, 426, 547, 569 ], "type": "inline_equation", "content": "20 \\times 20" }, { "bbox": [ 304, 426, 547, 569 ], "type": "text", "content": " extracted patches, blank patches will fill the vacancy." } ] } ], "index": 7 }, { "bbox": [ 304, 569, 547, 629 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 304, 569, 547, 629 ], "spans": [ { "bbox": [ 304, 569, 547, 629 ], "type": "text", "content": "With the patches obtained in this way, each patch contains one or two cells, so patch classification can be used for cell locating, reducing the difficulty of finely segmenting cell edges while satisfying the requirements of MVI analysis and diagnosis." } ] } ], "index": 8 }, { "bbox": [ 306, 635, 491, 647 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 306, 635, 491, 647 ], "spans": [ { "bbox": [ 306, 635, 491, 647 ], "type": "text", "content": "3.1. Binary Vessel Image Classification" } ] } ], "index": 9 }, { "bbox": [ 304, 654, 547, 715 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 304, 654, 547, 715 ], "spans": [ { "bbox": [ 304, 654, 547, 715 ], "type": "text", "content": "For a blood vessel image " }, { "bbox": [ 304, 654, 547, 715 ], "type": "inline_equation", "content": "I" }, { "bbox": [ 304, 654, 547, 715 ], "type": "text", "content": ", we apply preprocessing and data augmentation to " }, { "bbox": [ 304, 654, 547, 715 ], "type": "inline_equation", "content": "I" }, { "bbox": [ 304, 654, 547, 715 ], "type": "text", "content": ", and the result, termed " }, { "bbox": [ 304, 654, 547, 715 ], "type": "inline_equation", "content": "x" }, { "bbox": [ 304, 654, 547, 715 ], "type": "text", "content": ", is input into the binary vessel image classification branch " }, { "bbox": [ 304, 654, 547, 715 ], "type": "inline_equation", "content": "\\mathcal{F}_{cls}" }, { "bbox": [ 304, 654, 547, 715 ], "type": "text", "content": " to predict the probability of " }, { "bbox": [ 304, 654, 547, 715 ], "type": 
"inline_equation", "content": "I" }, { "bbox": [ 304, 654, 547, 715 ], "type": "text", "content": " belonging to an MVI vessel, denoted as " }, { "bbox": [ 304, 654, 547, 715 ], "type": "inline_equation", "content": "p_{cls} = \\mathcal{F}_{cls}(x)" }, { "bbox": [ 304, 654, 547, 715 ], "type": "text", "content": ". The image-level category label" } ] } ], "index": 10 } ], "discarded_blocks": [ { "bbox": [ 295, 748, 316, 757 ], "type": "page_number", "angle": 0, "lines": [ { "bbox": [ 295, 748, 316, 757 ], "spans": [ { "bbox": [ 295, 748, 316, 757 ], "type": "text", "content": "7446" } ] } ], "index": 11 } ], "page_size": [ 612, 792 ], "page_idx": 3 }, { "para_blocks": [ { "bbox": [ 47, 72, 287, 96 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 47, 72, 287, 96 ], "spans": [ { "bbox": [ 47, 72, 287, 96 ], "type": "inline_equation", "content": "y" }, { "bbox": [ 47, 72, 287, 96 ], "type": "text", "content": " is used to supervise the binary image classification branch with the following GHM-C loss function [38]:" } ] } ], "index": 0 }, { "bbox": [ 100, 113, 288, 139 ], "type": "interline_equation", "angle": 0, "lines": [ { "bbox": [ 100, 113, 288, 139 ], "spans": [ { "bbox": [ 100, 113, 288, 139 ], "type": "interline_equation", "content": "\\mathcal {L} _ {c l s} \\left(p _ {c l s}, y\\right) = \\frac {\\mathcal {L} _ {C E} \\left(p _ {c l s} , y\\right)}{G D \\left(\\left| p _ {c l s} - y \\right|\\right)}, \\tag {1}", "image_path": "3ae621ae7f219b4e26f17bc1e66189d84d86746444bd26bbb475c72469f3ed57.jpg" } ] } ], "index": 1 }, { "bbox": [ 47, 154, 287, 178 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 47, 154, 287, 178 ], "spans": [ { "bbox": [ 47, 154, 287, 178 ], "type": "text", "content": "where " }, { "bbox": [ 47, 154, 287, 178 ], "type": "inline_equation", "content": "\\mathcal{L}_{CE}" }, { "bbox": [ 47, 154, 287, 178 ], "type": "text", "content": " is the cross entropy (CE) loss function, " }, { "bbox": [ 47, 154, 287, 178 ], "type": 
"inline_equation", "content": "GD" }, { "bbox": [ 47, 154, 287, 178 ], "type": "text", "content": " is the gradient density function." } ] } ], "index": 2 }, { "bbox": [ 47, 182, 287, 266 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 47, 182, 287, 266 ], "spans": [ { "bbox": [ 47, 182, 287, 266 ], "type": "text", "content": "The reason why we adopt the GHM-C loss function [38] rather than the traditional cross-entropy loss function is that the diagnosis of MVI is a subjective task, so a few image category labels of hard blood vessel samples may be wrongly annotated. The GHM-C loss function [38] can reduce the gradient contribution of the massive easy examples and the few outliers to make the classification network more robust." } ] } ], "index": 3 }, { "bbox": [ 47, 285, 218, 298 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 47, 285, 218, 298 ], "spans": [ { "bbox": [ 47, 285, 218, 298 ], "type": "text", "content": "3.2. Grid Patch Based Cell Locating" } ] } ], "index": 4 }, { "bbox": [ 47, 307, 287, 367 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 47, 307, 287, 367 ], "spans": [ { "bbox": [ 47, 307, 287, 367 ], "type": "text", "content": "The image-level blood vessel category label, which is the only supervision information for training our model, is insufficient for locating cells in vessels. Consequently, exploring more supervision information from the characteristic of the MVI pathological image itself is necessary." } ] } ], "index": 5 }, { "bbox": [ 46, 371, 287, 491 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 46, 371, 287, 491 ], "spans": [ { "bbox": [ 46, 371, 287, 491 ], "type": "text", "content": "Based on the fact that most cells possess similar shapes, colors, and structures in blood vessel pathological images, which are distinct from the background tissue, we can group the cells into a few templates according to these characteristics. 
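The gradient-density weighting of Eq. (1) can be illustrated with a simplified sketch: GD(|p - y|) is estimated with a histogram over the batch, so samples falling in crowded gradient bins (the very easy examples, and outliers when they cluster) are down-weighted. The function name and this coarse binning scheme are our simplification of the GHM-C loss [38], not the authors' code.

```python
import numpy as np

def ghm_c_loss(p_cls, y, bins=10, eps=1e-7):
    """Cross-entropy divided by an estimated gradient density GD(|p - y|),
    in the spirit of Eq. (1). p_cls: predicted P(MVI) per sample; y: 0/1."""
    p = np.clip(np.asarray(p_cls, float), eps, 1 - eps)
    y = np.asarray(y, float)
    g = np.abs(p - y)                           # gradient norm per sample
    idx = np.minimum((g * bins).astype(int), bins - 1)
    counts = np.bincount(idx, minlength=bins)   # histogram of gradient norms
    density = counts[idx] * bins / len(g)       # GD estimate per sample
    ce = -(y * np.log(p) + (1 - y) * np.log(1 - p))
    return float(np.mean(ce / np.maximum(density, eps)))
```

The original GHM-C additionally uses unit-region smoothing and momentum over batches; those refinements are omitted here for brevity.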
To distinguish background tissue and cells, the kernelized correlation filter [9, 22], as a mature technology in the object tracking field, is adopted for locating most cells in vessels with the manually selected cell templates, as shown in Fig.2. The result of the kernelized correlation filter can be used as a binary pseudo mask for locating cells." } ] } ], "index": 6 }, { "bbox": [ 47, 495, 287, 566 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 47, 495, 287, 566 ], "spans": [ { "bbox": [ 47, 495, 287, 566 ], "type": "text", "content": "Due to the indistinguishable appearance of non-cancerous cells and cancerous cells, the correlation filter cannot differentiate cancerous cells from non-cancerous cells. The fact that healthy vessels only contain healthy cells can be utilized for discriminating healthy areas (non-cancerous cells and background tissue) from cancerous cells." } ] } ], "index": 7 }, { "bbox": [ 46, 570, 287, 712 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 46, 570, 287, 712 ], "spans": [ { "bbox": [ 46, 570, 287, 712 ], "type": "text", "content": "The presence of cancerous cells in blood vessels is the key characteristic of MVI, regardless of the positions and number of cancerous cells in vessels. Therefore we devise a loopback strategy between the image classification branch and cell locating branch to distinguish the cancerous area from the healthy area. The loopback strategy separately inputs the cancerous areas and healthy areas of an MVI vessel sample predicted by the cell locating branch into the binary image classification branch to supervise the locating of cancerous cells using image-level labels. What's more, the loopback strategy can build a direct relationship between the predicted cancerous areas and the final classification result." 
} ] } ], "index": 8 }, { "bbox": [ 306, 72, 547, 96 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 306, 72, 547, 96 ], "spans": [ { "bbox": [ 306, 72, 547, 96 ], "type": "text", "content": "3.2.1 Correlation Filter based Pseudo Mask for Distinguishing Cells and Backgrounds" } ] } ], "index": 9 }, { "bbox": [ 304, 103, 545, 270 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 304, 103, 545, 270 ], "spans": [ { "bbox": [ 304, 103, 545, 270 ], "type": "text", "content": "Owing to the similarity of cell appearance, the correlation filter based on manually selected cell templates can locate most of the cells, and its results can be regarded as pseudo masks to train the cell locating branch. Specifically, the correlation filter result is split into grid patches corresponding to the input image; every patch containing any part of a cell is labeled as positive, and patches containing only background tissue are labeled as negative. The binary pseudo mask is denoted as " }, { "bbox": [ 304, 103, 545, 270 ], "type": "inline_equation", "content": "\\hat{y}_{loc}" }, { "bbox": [ 304, 103, 545, 270 ], "type": "text", "content": ". 
For an input " }, { "bbox": [ 304, 103, 545, 270 ], "type": "inline_equation", "content": "x" }, { "bbox": [ 304, 103, 545, 270 ], "type": "text", "content": ", the cell locating result " }, { "bbox": [ 304, 103, 545, 270 ], "type": "inline_equation", "content": "p_{loc}" }, { "bbox": [ 304, 103, 545, 270 ], "type": "text", "content": " for all grid patches in " }, { "bbox": [ 304, 103, 545, 270 ], "type": "inline_equation", "content": "x" }, { "bbox": [ 304, 103, 545, 270 ], "type": "text", "content": " is predicted by the cell locating branch " }, { "bbox": [ 304, 103, 545, 270 ], "type": "inline_equation", "content": "\\mathcal{F}_{loc}" }, { "bbox": [ 304, 103, 545, 270 ], "type": "text", "content": ", namely " }, { "bbox": [ 304, 103, 545, 270 ], "type": "inline_equation", "content": "p_{loc} = \\mathcal{F}_{loc}(x)" }, { "bbox": [ 304, 103, 545, 270 ], "type": "text", "content": ", which has three output channels representing the background, non-cancerous and cancerous areas, denoted as " }, { "bbox": [ 304, 103, 545, 270 ], "type": "inline_equation", "content": "p_{loc}^{0,i,j}" }, { "bbox": [ 304, 103, 545, 270 ], "type": "text", "content": ", " }, { "bbox": [ 304, 103, 545, 270 ], "type": "inline_equation", "content": "p_{loc}^{1,i,j}" }, { "bbox": [ 304, 103, 545, 270 ], "type": "text", "content": ", and " }, { "bbox": [ 304, 103, 545, 270 ], "type": "inline_equation", "content": "p_{loc}^{2,i,j}" }, { "bbox": [ 304, 103, 545, 270 ], "type": "text", "content": " for a patch in row " }, { "bbox": [ 304, 103, 545, 270 ], "type": "inline_equation", "content": "i" }, { "bbox": [ 304, 103, 545, 270 ], "type": "text", "content": " and column " }, { "bbox": [ 304, 103, 545, 270 ], "type": "inline_equation", "content": "j" }, { "bbox": [ 304, 103, 545, 270 ], "type": "text", "content": ", respectively." 
} ] } ], "index": 10 }, { "bbox": [ 304, 271, 547, 380 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 304, 271, 547, 380 ], "spans": [ { "bbox": [ 304, 271, 547, 380 ], "type": "text", "content": "Since healthy blood vessels only consist of background and non-cancerous cells, the pseudo binary masks are enough to supervise cell locating with the cross-entropy loss. But for MVI blood vessels, which consist of cancerous and non-cancerous cells, the pseudo binary masks are insufficient for locating cells precisely. Hence, we only utilize the background areas of the pseudo binary masks to supervise the cell locating of MVI samples. The loss function for pseudo-mask cell locating is as follows:" } ] } ], "index": 11 }, { "bbox": [ 306, 388, 572, 460 ], "type": "interline_equation", "angle": 0, "lines": [ { "bbox": [ 306, 388, 572, 460 ], "spans": [ { "bbox": [ 306, 388, 572, 460 ], "type": "interline_equation", "content": "\\begin{array}{l} \\mathcal {L} _ {l o c} = - \\frac {\\mathbf {1} \\{y = 0 \\}}{N} \\sum_ {i, j} \\left(\\left(1 - \\hat {y} _ {l o c} ^ {i, j}\\right) \\log \\left(p _ {l o c} ^ {0, i, j}\\right) + \\hat {y} _ {l o c} ^ {i, j} \\log \\left(p _ {l o c} ^ {1, i, j}\\right)\\right) \\\\ - \\frac {\\mathbf {1} \\{y = 1 \\}}{\\sum_ {i , j} \\mathbf {1} \\left\\{\\hat {y} _ {l o c} ^ {i , j} = 0 \\right\\}} \\sum_ {i, j} \\mathbf {1} \\left\\{\\hat {y} _ {l o c} ^ {i, j} = 0 \\right\\} \\log \\left(p _ {l o c} ^ {0, i, j}\\right), \\tag {2} \\\\ \\end{array}", "image_path": "42d2c2b81171581f22fd77724a4f7c920c1e894bbd900a957f917c67dae1b9a0.jpg" } ] } ], "index": 12 }, { "bbox": [ 304, 461, 545, 509 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 304, 461, 545, 509 ], "spans": [ { "bbox": [ 304, 461, 545, 509 ], "type": "text", "content": "where " }, { "bbox": [ 304, 461, 545, 509 ], "type": "inline_equation", "content": "N = h\\times w" }, { "bbox": [ 304, 461, 545, 509 ], "type": "text", "content": " is the total number of grid patches of " }, { 
"bbox": [ 304, 461, 545, 509 ], "type": "inline_equation", "content": "x" }, { "bbox": [ 304, 461, 545, 509 ], "type": "text", "content": ", " }, { "bbox": [ 304, 461, 545, 509 ], "type": "inline_equation", "content": "h" }, { "bbox": [ 304, 461, 545, 509 ], "type": "text", "content": " and " }, { "bbox": [ 304, 461, 545, 509 ], "type": "inline_equation", "content": "w" }, { "bbox": [ 304, 461, 545, 509 ], "type": "text", "content": " are the numbers of rows and columns of grid patches, " }, { "bbox": [ 304, 461, 545, 509 ], "type": "inline_equation", "content": "i\\in [1,h]" }, { "bbox": [ 304, 461, 545, 509 ], "type": "text", "content": " and " }, { "bbox": [ 304, 461, 545, 509 ], "type": "inline_equation", "content": "j\\in [1,w]" }, { "bbox": [ 304, 461, 545, 509 ], "type": "text", "content": " are the row and column indices, and " }, { "bbox": [ 304, 461, 545, 509 ], "type": "inline_equation", "content": "\\mathbf{1}\\{\\cdot \\}" }, { "bbox": [ 304, 461, 545, 509 ], "type": "text", "content": " is the indicator function." } ] } ], "index": 13 }, { "bbox": [ 304, 510, 545, 569 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 304, 510, 545, 569 ], "spans": [ { "bbox": [ 304, 510, 545, 569 ], "type": "text", "content": "The " }, { "bbox": [ 304, 510, 545, 569 ], "type": "inline_equation", "content": "\\mathcal{L}_{loc}" }, { "bbox": [ 304, 510, 545, 569 ], "type": "text", "content": " can supervise the cell locating branch to learn the features of background patches and non-cancerous cell patches. But the cell locating branch still cannot recognize cancerous cell patches in MVI samples, which is one of the reasons why we develop the following loopback strategy." 
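Eq. (2) can be written out directly as a small NumPy sketch; this is our illustration (function and argument names are ours), assuming `p_loc` is a (3, h, w) array of patch probabilities and `y_hat` the (h, w) binary pseudo mask from the correlation filter.

```python
import numpy as np

def pseudo_mask_loss(p_loc, y_hat, y, eps=1e-7):
    """Eq. (2) for one sample. p_loc: (3, h, w) probabilities for
    background / non-cancerous / cancerous patches; y_hat: (h, w) binary
    pseudo mask (1 = cell patch); y: image-level label (0 healthy, 1 MVI)."""
    p = np.clip(p_loc, eps, 1.0)
    if y == 0:
        # healthy vessel: full two-class supervision from the pseudo mask
        n = y_hat.size                      # N = h * w
        return float(-np.sum((1 - y_hat) * np.log(p[0])
                             + y_hat * np.log(p[1])) / n)
    # MVI vessel: only the background patches of the pseudo mask are trusted
    bg = (y_hat == 0)
    return float(-np.sum(np.log(p[0])[bg]) / max(bg.sum(), 1))
```

The two branches of the function correspond to the two indicator-weighted terms of Eq. (2), so only one of them is active for a given sample.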
} ] } ], "index": 14 }, { "bbox": [ 306, 586, 545, 598 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 306, 586, 545, 598 ], "spans": [ { "bbox": [ 306, 586, 545, 598 ], "type": "text", "content": "3.2.2 Loopback Strategy for Cancerous Cell Locating" } ] } ], "index": 15 }, { "bbox": [ 304, 605, 547, 713 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 304, 605, 547, 713 ], "spans": [ { "bbox": [ 304, 605, 547, 713 ], "type": "text", "content": "Leveraging the characteristic that MVI blood vessels must contain one or more cancerous cells, no matter what the other areas in the blood vessels look like, we devise our loopback strategy to supervise the locating of cancerous cell patches with only image-level category labels. Specifically, for an MVI vessel sample, its cancerous areas can be represented by channel 2 of the result predicted by the cell locating branch, namely " }, { "bbox": [ 304, 605, 547, 713 ], "type": "inline_equation", "content": "area_{pos} = \\mathcal{U}(p_{loc}^2)" }, { "bbox": [ 304, 605, 547, 713 ], "type": "text", "content": ", and its healthy areas can be represented by the sum of channels 0 and 1" } ] } ], "index": 16 } ], "discarded_blocks": [ { "bbox": [ 295, 748, 315, 757 ], "type": "page_number", "angle": 0, "lines": [ { "bbox": [ 295, 748, 315, 757 ], "spans": [ { "bbox": [ 295, 748, 315, 757 ], "type": "text", "content": "7447" } ] } ], "index": 17 } ], "page_size": [ 612, 792 ], "page_idx": 4 }, { "para_blocks": [ { "bbox": [ 46, 72, 288, 120 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 46, 72, 288, 120 ], "spans": [ { "bbox": [ 46, 72, 288, 120 ], "type": "text", "content": " of the locating result, namely " }, { "bbox": [ 46, 72, 288, 120 ], "type": "inline_equation", "content": "area_{neg} = \\mathcal{U}(p_{loc}^0 + p_{loc}^1)" }, { "bbox": [ 46, 72, 288, 120 ], "type": "text", "content": ". 
The above " }, { "bbox": [ 46, 72, 288, 120 ], "type": "inline_equation", "content": "\\mathcal{U}" }, { "bbox": [ 46, 72, 288, 120 ], "type": "text", "content": " is the nearest neighbor interpolation function, which upsamples the patch-wise result to the size of input " }, { "bbox": [ 46, 72, 288, 120 ], "type": "inline_equation", "content": "x" }, { "bbox": [ 46, 72, 288, 120 ], "type": "text", "content": "." } ] } ], "index": 0 }, { "bbox": [ 46, 120, 290, 335 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 46, 120, 290, 335 ], "spans": [ { "bbox": [ 46, 120, 290, 335 ], "type": "text", "content": "We perform an element-wise product between the original input vessel image and the corresponding cancerous/healthy areas. The results " }, { "bbox": [ 46, 120, 290, 335 ], "type": "inline_equation", "content": "x * area_{pos}" }, { "bbox": [ 46, 120, 290, 335 ], "type": "text", "content": " and " }, { "bbox": [ 46, 120, 290, 335 ], "type": "inline_equation", "content": "x * area_{neg}" }, { "bbox": [ 46, 120, 290, 335 ], "type": "text", "content": " are input back into the binary image classification branch " }, { "bbox": [ 46, 120, 290, 335 ], "type": "inline_equation", "content": "\\mathcal{F}_{cls}" }, { "bbox": [ 46, 120, 290, 335 ], "type": "text", "content": " to obtain the loopback image classification result. Ideally, the healthy areas of the MVI sample " }, { "bbox": [ 46, 120, 290, 335 ], "type": "inline_equation", "content": "x" }, { "bbox": [ 46, 120, 290, 335 ], "type": "text", "content": " do not contain cancerous cells, so that the loopback image classification result " }, { "bbox": [ 46, 120, 290, 335 ], "type": "inline_equation", "content": "\\mathcal{F}_{cls}(x * area_{neg})" }, { "bbox": [ 46, 120, 290, 335 ], "type": "text", "content": " will be negative. 
Similarly, the loopback image classification result of the MVI sample's cancerous areas " }, { "bbox": [ 46, 120, 290, 335 ], "type": "inline_equation", "content": "\\mathcal{F}_{cls}(x * area_{pos})" }, { "bbox": [ 46, 120, 290, 335 ], "type": "text", "content": " will be positive. However, the inadequately trained cell locating branch cannot distinguish cancerous areas from healthy areas accurately, e.g., healthy areas that contain some cancerous cells will be predicted as positive, which differs from the ideal case. Therefore, we can utilize the divergence between the loopback image classification result and the ideal case to generate gradients with respect to the cell locating result " }, { "bbox": [ 46, 120, 290, 335 ], "type": "inline_equation", "content": "p_{loc}" }, { "bbox": [ 46, 120, 290, 335 ], "type": "text", "content": " to modify it. The formalized loss function of the loopback strategy is given as follows:" } ] } ], "index": 1 }, { "bbox": [ 74, 342, 287, 396 ], "type": "interline_equation", "angle": 0, "lines": [ { "bbox": [ 74, 342, 287, 396 ], "spans": [ { "bbox": [ 74, 342, 287, 396 ], "type": "interline_equation", "content": "\\begin{array}{l} \\mathcal {L} _ {\\text {l o o p}} = \\mathcal {L} _ {C E} \\left(\\mathcal {F} _ {\\text {c l s}} \\left(x * \\mathcal {U} \\left(p _ {\\text {l o c}} ^ {0} + p _ {\\text {l o c}} ^ {1}\\right)\\right), 0\\right) \\tag {3} \\\\ + \\mathcal {L} _ {C E} \\left(\\mathcal {F} _ {c l s} \\left(x * \\mathcal {U} \\left(p _ {l o c} ^ {2}\\right)\\right), 1\\right). \\\\ \\end{array}", "image_path": "99df4a4707626dd162d432f95cdf22add5121620d771d4ba5b7e7234de41817d.jpg" } ] } ], "index": 2 }, { "bbox": [ 47, 401, 253, 413 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 47, 401, 253, 413 ], "spans": [ { "bbox": [ 47, 401, 253, 413 ], "type": "text", "content": "This loss function is only applied to MVI samples." 
} ] } ], "index": 3 }, { "bbox": [ 46, 413, 287, 496 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 46, 413, 287, 496 ], "spans": [ { "bbox": [ 46, 413, 287, 496 ], "type": "text", "content": "It is worth noting that the gradients of " }, { "bbox": [ 46, 413, 287, 496 ], "type": "inline_equation", "content": "\\mathcal{L}_{loop}" }, { "bbox": [ 46, 413, 287, 496 ], "type": "text", "content": " w.r.t. the parameters of the image classification branch " }, { "bbox": [ 46, 413, 287, 496 ], "type": "inline_equation", "content": "\\mathcal{F}_{cls}" }, { "bbox": [ 46, 413, 287, 496 ], "type": "text", "content": " in Eq.(3) are not accumulated to participate in gradient descent. Since, in the loopback procedure, the image classification branch plays a role similar to that of the discriminator in a generative adversarial network, optimizing the parameters of " }, { "bbox": [ 46, 413, 287, 496 ], "type": "inline_equation", "content": "\\mathcal{F}_{cls}" }, { "bbox": [ 46, 413, 287, 496 ], "type": "text", "content": " here would make its discrimination ability degenerate." } ] } ], "index": 4 }, { "bbox": [ 46, 497, 287, 593 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 46, 497, 287, 593 ], "spans": [ { "bbox": [ 46, 497, 287, 593 ], "type": "text", "content": "The loopback strategy " }, { "bbox": [ 46, 497, 287, 593 ], "type": "inline_equation", "content": "\\mathcal{L}_{loop}" }, { "bbox": [ 46, 497, 287, 593 ], "type": "text", "content": " can supervise LoopNet's cell locating branch to learn the cancerous cell patches' features through the gradients of " }, { "bbox": [ 46, 497, 287, 593 ], "type": "inline_equation", "content": "\\mathcal{L}_{loop}" }, { "bbox": [ 46, 497, 287, 593 ], "type": "text", "content": " back-propagating to " }, { "bbox": [ 46, 497, 287, 593 ], "type": "inline_equation", "content": "\\mathcal{F}_{loc}" }, { "bbox": [ 46, 497, 287, 593 ], "type": "text", "content": ". 
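A forward-pass sketch of Eq. (3) for one MVI sample may clarify the masking and upsampling: the patch-wise maps are nearest-neighbor upsampled, used to mask the image, and fed to the (frozen, per the note on the discriminator-like role) classification branch. This is our NumPy illustration under assumed shapes, not the authors' code; `f_cls` stands for any callable returning P(MVI).

```python
import numpy as np

def loopback_loss(x, p_loc, f_cls, eps=1e-7):
    """Eq. (3) for a single MVI sample. x: (H, W, 3) image; p_loc:
    (3, h, w) patch probabilities; f_cls: frozen classifier, image -> P(MVI)."""
    ph = x.shape[0] // p_loc.shape[1]
    pw = x.shape[1] // p_loc.shape[2]
    # nearest-neighbor upsampling of a patch-wise map (the U in the text):
    # each entry is replicated into a ph x pw block
    up = lambda m: np.kron(m, np.ones((ph, pw)))
    area_neg = up(p_loc[0] + p_loc[1])      # background + non-cancerous
    area_pos = up(p_loc[2])                 # predicted cancerous area
    p_neg = np.clip(f_cls(x * area_neg[..., None]), eps, 1 - eps)
    p_pos = np.clip(f_cls(x * area_pos[..., None]), eps, 1 - eps)
    # CE(., 0) for the healthy areas plus CE(., 1) for the cancerous areas
    return float(-np.log(1.0 - p_neg) - np.log(p_pos))
```

In training, gradients of this loss flow back only into the locating branch, matching the stop-gradient on F_cls described above.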
Moreover, the loopback strategy correlates the vessel image classification results with the cancerous cell locating results, which provides credible evidence and a reliable explanation of the image classification result and helps the classification branch achieve better classification performance." } ] } ], "index": 5 }, { "bbox": [ 47, 599, 167, 612 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 47, 599, 167, 612 ], "spans": [ { "bbox": [ 47, 599, 167, 612 ], "type": "text", "content": "3.3. Complete Algorithm" } ] } ], "index": 6 }, { "bbox": [ 46, 617, 288, 713 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 46, 617, 288, 713 ], "spans": [ { "bbox": [ 46, 617, 288, 713 ], "type": "text", "content": "Based on the well-developed image classification network, the loopback strategy, together with the binary pseudo mask, enables our LoopNet to distinguish healthy and MVI blood vessels and to locate the background, non-cancerous cells, and cancerous cells in vessels, supervised only by the image-level category labels. LoopNet learns the features of background and non-cancerous cell patches through pseudo binary masks" } ] } ], "index": 7 }, { "bbox": [ 304, 72, 545, 108 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 304, 72, 545, 108 ], "spans": [ { "bbox": [ 304, 72, 545, 108 ], "type": "text", "content": "generated by the cell template correlation filter and learns the features of cancerous cell patches through the loopback strategy performed on MVI samples." } ] } ], "index": 8 }, { "bbox": [ 304, 108, 545, 180 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 304, 108, 545, 180 ], "spans": [ { "bbox": [ 304, 108, 545, 180 ], "type": "text", "content": "We train our LoopNet in two stages. 
Firstly, we train the binary vessel image classification branch " }, { "bbox": [ 304, 108, 545, 180 ], "type": "inline_equation", "content": "\\mathcal{F}_{cls}" }, { "bbox": [ 304, 108, 545, 180 ], "type": "text", "content": " using the loss function " }, { "bbox": [ 304, 108, 545, 180 ], "type": "inline_equation", "content": "\\mathcal{L}_{cls}" }, { "bbox": [ 304, 108, 545, 180 ], "type": "text", "content": " to endow it with the essential ability to distinguish healthy/MVI blood vessels. Secondly, we jointly optimize " }, { "bbox": [ 304, 108, 545, 180 ], "type": "inline_equation", "content": "\\mathcal{F}_{cls}" }, { "bbox": [ 304, 108, 545, 180 ], "type": "text", "content": " and the cell locating branch " }, { "bbox": [ 304, 108, 545, 180 ], "type": "inline_equation", "content": "\\mathcal{F}_{loc}" }, { "bbox": [ 304, 108, 545, 180 ], "type": "text", "content": " with the following loss function:" } ] } ], "index": 9 }, { "bbox": [ 365, 190, 545, 203 ], "type": "interline_equation", "angle": 0, "lines": [ { "bbox": [ 365, 190, 545, 203 ], "spans": [ { "bbox": [ 365, 190, 545, 203 ], "type": "interline_equation", "content": "\\mathcal {L} = \\mathcal {L} _ {l o c} + \\alpha \\mathcal {L} _ {l o o p} + \\beta \\mathcal {L} _ {c l s}, \\tag {4}", "image_path": "aad9e3b6bdedffe31408439a25e07f8f8715219345f133125454bcea5ed799a4.jpg" } ] } ], "index": 10 }, { "bbox": [ 305, 213, 463, 224 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 305, 213, 463, 224 ], "spans": [ { "bbox": [ 305, 213, 463, 224 ], "type": "text", "content": "where " }, { "bbox": [ 305, 213, 463, 224 ], "type": "inline_equation", "content": "\\alpha" }, { "bbox": [ 305, 213, 463, 224 ], "type": "text", "content": " and " }, { "bbox": [ 305, 213, 463, 224 ], "type": "inline_equation", "content": "\\beta" }, { "bbox": [ 305, 213, 463, 224 ], "type": "text", "content": " are balance parameters." 
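Putting Eq. (4) together with the two-stage schedule just described gives the following skeleton. `f_cls_step` and `joint_step` are caller-supplied update functions (our names, hypothetical); the defaults alpha = 1 and beta = 0.5 are the values reported in Sec. 5.1.

```python
def total_loss(l_loc, l_loop, l_cls, alpha=1.0, beta=0.5):
    # Eq. (4): L = L_loc + alpha * L_loop + beta * L_cls
    return l_loc + alpha * l_loop + beta * l_cls

def train_loopnet(batches, f_cls_step, joint_step, epochs=(50, 50)):
    """Two-stage schedule: stage 1 trains the classification branch alone
    with L_cls; stage 2 jointly optimizes both branches with Eq. (4)."""
    for _ in range(epochs[0]):
        for b in batches:
            f_cls_step(b)                  # minimize L_cls only
    for _ in range(epochs[1]):
        for b in batches:
            joint_step(b)                  # minimize the combined loss
```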
} ] } ], "index": 11 }, { "bbox": [ 306, 236, 360, 248 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 306, 236, 360, 248 ], "spans": [ { "bbox": [ 306, 236, 360, 248 ], "type": "text", "content": "4. Dataset" } ] } ], "index": 12 }, { "bbox": [ 304, 256, 547, 435 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 304, 256, 547, 435 ], "spans": [ { "bbox": [ 304, 256, 547, 435 ], "type": "text", "content": "To construct the Pathologic Vessel Image Dataset (PVID) for MVI analysis, we collect 100 whole slide pathological images of liver cancer patients from the cooperative institution, each of which is about " }, { "bbox": [ 304, 256, 547, 435 ], "type": "inline_equation", "content": "100,000 \\times 250,000" }, { "bbox": [ 304, 256, 547, 435 ], "type": "text", "content": " pixels. For each whole slide pathological image, we randomly crop an average of about 50 blood vessel images, which reduces the redundancy among samples. Finally, the collected PVID contains 5,000 vessel samples (4,140 healthy blood vessels and 860 MVI blood vessels). We randomly split the vessel image part into the training, validation, and test sets according to slides, ensuring vessel images of the same slide are in the same set. The numbers of healthy vessels in the training, validation, and test sets are 2,480, 830, and 830, respectively, and the numbers of MVI vessels are 520, 170, and 170, respectively." } ] } ], "index": 13 }, { "bbox": [ 304, 436, 545, 495 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 304, 436, 545, 495 ], "spans": [ { "bbox": [ 304, 436, 545, 495 ], "type": "text", "content": "For both vessel image classification and cell locating, we only use the training set of the vessel image part of PVID to train the proposed model. The test set of the vessel image part is adopted to evaluate the performance of vessel image classification." 
} ] } ], "index": 14 }, { "bbox": [ 304, 496, 545, 555 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 304, 496, 545, 555 ], "spans": [ { "bbox": [ 304, 496, 545, 555 ], "type": "text", "content": "To assess the cancerous cell locating performance, we select another 130 MVI blood vessels and annotate the centroid of every cancerous cell in these blood vessels with point labels as the locating test part of PVID. The number of annotated cancerous cell centroids is 23,237." } ] } ], "index": 15 }, { "bbox": [ 304, 555, 545, 603 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 304, 555, 545, 603 ], "spans": [ { "bbox": [ 304, 555, 545, 603 ], "type": "text", "content": "All the annotations were labeled by an experienced pathologist of the cooperative institution. Moreover, this study was approved by the institutional research ethics committee." } ] } ], "index": 16 }, { "bbox": [ 306, 615, 388, 628 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 306, 615, 388, 628 ], "spans": [ { "bbox": [ 306, 615, 388, 628 ], "type": "text", "content": "5. Experiments" } ] } ], "index": 17 }, { "bbox": [ 306, 635, 508, 647 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 306, 635, 508, 647 ], "spans": [ { "bbox": [ 306, 635, 508, 647 ], "type": "text", "content": "5.1. 
Network Architecture and Parameters" } ] } ], "index": 18 }, { "bbox": [ 304, 653, 547, 713 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 304, 653, 547, 713 ], "spans": [ { "bbox": [ 304, 653, 547, 713 ], "type": "text", "content": "In the following experiments, unless otherwise specified, the backbone " }, { "bbox": [ 304, 653, 547, 713 ], "type": "inline_equation", "content": "\\mathcal{F}_b" }, { "bbox": [ 304, 653, 547, 713 ], "type": "text", "content": " we adopted is ResNet-50 [21], from which the last global average pooling layer and the fully connected layer are detached and regarded as the vessel image classification head " }, { "bbox": [ 304, 653, 547, 713 ], "type": "inline_equation", "content": "\\mathcal{F}_c" }, { "bbox": [ 304, 653, 547, 713 ], "type": "text", "content": ". The cell locating head we adopted is a 3-" } ] } ], "index": 19 } ], "discarded_blocks": [ { "bbox": [ 295, 748, 315, 757 ], "type": "page_number", "angle": 0, "lines": [ { "bbox": [ 295, 748, 315, 757 ], "spans": [ { "bbox": [ 295, 748, 315, 757 ], "type": "text", "content": "7448" } ] } ], "index": 20 } ], "page_size": [ 612, 792 ], "page_idx": 5 }, { "para_blocks": [ { "type": "table", "bbox": [ 50, 70, 544, 156 ], "blocks": [ { "bbox": [ 50, 70, 544, 156 ], "lines": [ { "bbox": [ 50, 70, 544, 156 ], "spans": [ { "bbox": [ 50, 70, 544, 156 ], "type": "table", "html": "
<table><tr><td colspan=2 rowspan=2>Index\Backbone</td><td colspan=2>ResNet-50 [21]</td><td colspan=2>AlexNet [33]</td><td colspan=2>VGG-16 [53]</td><td colspan=2>EfficientNetV2-S [59]</td><td colspan=2>MobileNetV2 [25]</td><td colspan=2>ConvNeXt-B [40]</td></tr>
<tr><td>Baseline</td><td>Ours</td><td>Baseline</td><td>Ours</td><td>Baseline</td><td>Ours</td><td>Baseline</td><td>Ours</td><td>Baseline</td><td>Ours</td><td>Baseline</td><td>Ours</td></tr>
<tr><td colspan=2>Accuracy</td><td>96.59</td><td>97.49</td><td>96.09</td><td>96.19</td><td>97.09</td><td>97.99</td><td>96.89</td><td>97.59</td><td>94.68</td><td>97.19</td><td>94.58</td><td>95.29</td></tr>
<tr><td rowspan=2>Precision</td><td>Healthy Vessel</td><td>97.71</td><td>98.55</td><td>96.90</td><td>98.52</td><td>97.27</td><td>98.79</td><td>97.83</td><td>98.20</td><td>94.58</td><td>97.61</td><td>97.67</td><td>98.61</td></tr>
<tr><td>MVI Vessel</td><td>91.02</td><td>92.44</td><td>91.77</td><td>85.95</td><td>94.15</td><td>96.10</td><td>92.17</td><td>94.55</td><td>95.38</td><td>94.97</td><td>78.82</td><td>84.44</td></tr>
<tr><td rowspan=2>Recall</td><td>Healthy Vessel</td><td>98.18</td><td>98.43</td><td>96.85</td><td>98.32</td><td>99.27</td><td>98.79</td><td>98.43</td><td>98.91</td><td>99.03</td><td>99.27</td><td>94.79</td><td>96.61</td></tr>
<tr><td>MVI Vessel</td><td>88.89</td><td>92.98</td><td>84.80</td><td>92.98</td><td>86.55</td><td>94.15</td><td>89.47</td><td>91.23</td><td>72.51</td><td>88.30</td><td>93.57</td><td>88.89</td></tr></table>
", "image_path": "62fbd7b4737c5e3181fd51acd40dce16124a2103f6799b9ac05771bd7ac10b33.jpg" } ] } ], "index": 0, "angle": 0, "type": "table_body" } ], "index": 0 }, { "type": "table", "bbox": [ 50, 194, 284, 243 ], "blocks": [ { "bbox": [ 46, 159, 546, 182 ], "lines": [ { "bbox": [ 46, 159, 546, 182 ], "spans": [ { "bbox": [ 46, 159, 546, 182 ], "type": "text", "content": "Table 1. The classification performance comparison with SOTA methods. The Baseline means directly using the backbone for image classification. Our LoopNet shares the same backbone with the corresponding SOTA classification networks. (All scores are in %)" } ] } ], "index": 1, "angle": 0, "type": "table_caption" }, { "bbox": [ 50, 194, 284, 243 ], "lines": [ { "bbox": [ 50, 194, 284, 243 ], "spans": [ { "bbox": [ 50, 194, 284, 243 ], "type": "table", "html": "
<table><tr><td>Index\Method</td><td>CAM [44]</td><td>DeepLIFT [50]</td><td>LRP [4]</td><td>LoopNet</td></tr>
<tr><td>Precision</td><td>4.59</td><td>81.11</td><td>39.05</td><td>71.26</td></tr>
<tr><td>Recall</td><td>8.35</td><td>5.39</td><td>5.89</td><td>94.52</td></tr>
<tr><td>Dice</td><td>16.35</td><td>15.09</td><td>15.42</td><td>79.58</td></tr></table>
", "image_path": "9a256551d178a8c35a712dbfd43f3849cdd3ee9942ba568178cee0e3e2bd10fb.jpg" } ] } ], "index": 2, "angle": 0, "type": "table_body" }, { "bbox": [ 46, 247, 288, 301 ], "lines": [ { "bbox": [ 46, 247, 288, 301 ], "spans": [ { "bbox": [ 46, 247, 288, 301 ], "type": "text", "content": "Table 2. The comparison results of classification explainability. The precision, recall, and dice score denote the performance of the located cancerous patch in the attributed feature areas for CAM [44], DeepLIFT [50], and LRP [4], which are three classic feature attribution methods." } ] } ], "index": 3, "angle": 0, "type": "table_footnote" } ], "index": 2 }, { "bbox": [ 46, 319, 287, 392 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 46, 319, 287, 392 ], "spans": [ { "bbox": [ 46, 319, 287, 392 ], "type": "text", "content": "layer Graph Convolutional Network (GCN) [32]. BatchNorm, ReLU, and Dropout layers are successively added between every two GCN layers. The dropout probabilities of the first and second Dropout layers are set to 0.2 and 0.1, respectively. Edges are added between each feature and its eight spatial neighbors." } ] } ], "index": 4 }, { "bbox": [ 46, 393, 287, 489 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 46, 393, 287, 489 ], "spans": [ { "bbox": [ 46, 393, 287, 489 ], "type": "text", "content": "We first train the image classification branch alone for 50 epochs and then train the entire LoopNet for another 50 epochs with " }, { "bbox": [ 46, 393, 287, 489 ], "type": "inline_equation", "content": "\\alpha = 1" }, { "bbox": [ 46, 393, 287, 489 ], "type": "text", "content": " and " }, { "bbox": [ 46, 393, 287, 489 ], "type": "inline_equation", "content": "\\beta = 0.5" }, { "bbox": [ 46, 393, 287, 489 ], "type": "text", "content": ". The default batch size is 16. 
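The cell locating head described above (a 3-layer GCN over the backbone's spatial feature grid, with each feature connected to its eight spatial neighbors) can be sketched as follows. This is an illustrative reconstruction rather than the authors' code: `grid_adjacency` and `gcn_layer` are hypothetical names, and the symmetric normalization is assumed from the standard GCN formulation [32].

```python
import numpy as np

def grid_adjacency(h, w):
    """Adjacency over an h x w feature grid: each node is linked to its
    eight spatial neighbors, plus a self-loop."""
    n = h * w
    a = np.eye(n)
    for i in range(h):
        for j in range(w):
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ni, nj = i + di, j + dj
                    if (di or dj) and 0 <= ni < h and 0 <= nj < w:
                        a[i * w + j, ni * w + nj] = 1.0
    return a

def gcn_layer(x, a, weight):
    """One graph-convolution step: D^{-1/2} A D^{-1/2} X W, then ReLU.
    (BatchNorm and Dropout between layers are omitted for brevity.)"""
    d = a.sum(axis=1)
    a_norm = a / np.sqrt(np.outer(d, d))
    return np.maximum(a_norm @ x @ weight, 0.0)
```

In the actual head, three such layers would be stacked, with BatchNorm, ReLU, and Dropout (p = 0.2 and 0.1) inserted between them as described.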
We adopt the Ranger optimizer [62]; the learning rate and weight decay are set to " }, { "bbox": [ 46, 393, 287, 489 ], "type": "inline_equation", "content": "1 \\times 10^{-3}" }, { "bbox": [ 46, 393, 287, 489 ], "type": "text", "content": " and " }, { "bbox": [ 46, 393, 287, 489 ], "type": "inline_equation", "content": "5 \\times 10^{-4}" }, { "bbox": [ 46, 393, 287, 489 ], "type": "text", "content": ", respectively. Cosine annealing with five warm-up epochs is adopted as the learning rate scheduler. The size of the input image is " }, { "bbox": [ 46, 393, 287, 489 ], "type": "inline_equation", "content": "640 \\times 640" }, { "bbox": [ 46, 393, 287, 489 ], "type": "text", "content": " px." } ] } ], "index": 5 }, { "bbox": [ 47, 501, 233, 513 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 47, 501, 233, 513 ], "spans": [ { "bbox": [ 47, 501, 233, 513 ], "type": "text", "content": "5.2. Performance of MVI Classification" } ] } ], "index": 6 }, { "bbox": [ 46, 521, 287, 604 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 46, 521, 287, 604 ], "spans": [ { "bbox": [ 46, 521, 287, 604 ], "type": "text", "content": "As described above, no MVI classification method exists to date. Therefore, we compare the proposed LoopNet with the SOTA fully supervised CNN-based image classification methods, including ResNet-50 [21], AlexNet [33], VGG-16 [53], EfficientNetV2-S [59], MobileNetV2 [25], and ConvNeXt-B [40]. For a fair comparison, we adopt the same backbones as those classification models." } ] } ], "index": 7 }, { "bbox": [ 46, 605, 288, 713 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 46, 605, 288, 713 ], "spans": [ { "bbox": [ 46, 605, 288, 713 ], "type": "text", "content": "The vessel classification performance comparison with the SOTA methods is given in Table 1, which shows that the proposed LoopNet achieves the best accuracy, precision, and recall in almost all cases compared with the SOTA classification methods. 
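The learning-rate schedule above (cosine annealing with five warm-up epochs from a base rate of 1e-3) can be sketched as follows; the linear shape of the warm-up is our assumption, and `lr_at_epoch` is an illustrative helper, not part of the Ranger optimizer [62].

```python
import math

def lr_at_epoch(epoch, total_epochs=50, warmup=5, base_lr=1e-3):
    """Linear warm-up for the first `warmup` epochs, then cosine
    annealing from `base_lr` down to zero over the remaining epochs."""
    if epoch < warmup:
        return base_lr * (epoch + 1) / warmup
    t = (epoch - warmup) / max(1, total_epochs - warmup)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * t))
```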
This is mainly because the proposed loopback strategy builds a direct relation between the predicted cancerous areas and the image classification result, prompting the network to pay more attention to the discriminative cancerous areas rather than to irrelevant background areas." } ] } ], "index": 8 }, { "type": "table", "bbox": [ 309, 195, 543, 252 ], "blocks": [ { "bbox": [ 309, 195, 543, 252 ], "lines": [ { "bbox": [ 309, 195, 543, 252 ], "spans": [ { "bbox": [ 309, 195, 543, 252 ], "type": "table", "html": "
<table><tr><td>Index\Method</td><td>Qu [47]</td><td>Liu [39]</td><td>Fully</td><td>LoopNet</td></tr>
<tr><td>Precision</td><td>85.47</td><td>80.82</td><td>78.99</td><td>71.26</td></tr>
<tr><td>Recall</td><td>95.61</td><td>92.57</td><td>93.35</td><td>94.52</td></tr>
<tr><td>Dice</td><td>89.34</td><td>84.42</td><td>82.07</td><td>79.58</td></tr></table>
", "image_path": "ed68f2b2cf0f37411d4cfa9d33392f2079f44ffdc3174fd63abb5e2ce82d4762.jpg" } ] } ], "index": 9, "angle": 0, "type": "table_body" } ], "index": 9 }, { "bbox": [ 304, 256, 547, 312 ], "lines": [ { "bbox": [ 304, 256, 547, 312 ], "spans": [ { "bbox": [ 304, 256, 547, 312 ], "type": "text", "content": "Table 3. The cancerous cell locating results of different methods. [47] and [39] are two cell segmentation approaches that use cell centroid points as annotations. 'Fully' means using the cancerous cell centroid annotations to finetune the cell locating branch of LoopNet, of which the image classification branch has been trained." } ] } ], "index": 10, "angle": 0, "type": "text" }, { "bbox": [ 306, 328, 533, 342 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 306, 328, 533, 342 ], "spans": [ { "bbox": [ 306, 328, 533, 342 ], "type": "text", "content": "5.3. Comparison of Classification Explainability" } ] } ], "index": 11 }, { "bbox": [ 304, 347, 547, 503 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 304, 347, 547, 503 ], "spans": [ { "bbox": [ 304, 347, 547, 503 ], "type": "text", "content": "Another advantage of the proposed LoopNet is that it provides explainable classification results, namely the located cancerous cell areas. To verify this explainability, we compare the cancerous cell area locating results with the outputs of three classic feature attribution methods on the test part of PVID: CAM [44], DeepLIFT [50], and LRP [4]. Those methods adopt different feature attribution strategies to locate the features critical to the final classification prediction. Table 2 shows that the proposed LoopNet achieves the best overall performance among all methods. Although the Precision of DeepLIFT is higher than that of our approach, its Recall and Dice are much lower. Besides, all indexes of the other two approaches are far lower than those of the proposed LoopNet." 
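The Precision, Recall, and Dice scores in Table 2 compare the located cancerous areas against annotated ground truth. The exact evaluation protocol is not reproduced here, so treat the following pixel-level computation as a plausible sketch with an illustrative function name:

```python
import numpy as np

def mask_scores(pred, gt):
    """Pixel-level precision, recall, and Dice between a predicted
    cancerous-area mask and a ground-truth mask (both boolean arrays)."""
    tp = np.logical_and(pred, gt).sum()
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(gt.sum(), 1)
    dice = 2.0 * tp / max(pred.sum() + gt.sum(), 1)
    return precision, recall, dice
```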
} ] } ], "index": 12 }, { "bbox": [ 304, 504, 547, 540 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 304, 504, 547, 540 ], "spans": [ { "bbox": [ 304, 504, 547, 540 ], "type": "text", "content": "The qualitative visual results of different methods for the located cancerous areas, together with a detailed analysis, are given in the supplements." } ] } ], "index": 13 }, { "bbox": [ 306, 550, 528, 563 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 306, 550, 528, 563 ], "spans": [ { "bbox": [ 306, 550, 528, 563 ], "type": "text", "content": "5.4. Performance Comparison of Cell Locating" } ] } ], "index": 14 }, { "bbox": [ 304, 570, 547, 713 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 304, 570, 547, 713 ], "spans": [ { "bbox": [ 304, 570, 547, 713 ], "type": "text", "content": "To assess the effectiveness of the proposed loopback strategy for cell locating with only image-level category labels, we compare the cell locating performance with two weakly supervised cell segmentation approaches, Qu [47] and Liu [39], both leveraging point annotations of cells. The results are shown in Table 3. 'Fully' means using the cancerous cell centroid annotations to directly finetune the cell locating branch of LoopNet whose image classification branch has been trained. The results indicate that, with only image-level category labels, our approach achieves promising results comparable to those of approaches that require massive point annotations." 
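Since the locating test set provides only centroid point labels, one plausible way to score located regions against them, used here purely to illustrate the evaluation idea (the paper's actual matching rule may differ, and `match_centroids` is a hypothetical helper), is to count a predicted region as correct when it covers an annotated centroid:

```python
def match_centroids(regions, centroids):
    """Toy region-vs-point scoring: a predicted region (a set of (row, col)
    pixels) is a true positive if it covers at least one annotated
    centroid; a centroid is recalled if any region covers it."""
    tp_regions = sum(any(c in region for c in centroids) for region in regions)
    recalled = sum(any(c in region for region in regions) for c in centroids)
    precision = tp_regions / max(len(regions), 1)
    recall = recalled / max(len(centroids), 1)
    return precision, recall
```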
} ] } ], "index": 15 } ], "discarded_blocks": [ { "bbox": [ 295, 748, 315, 757 ], "type": "page_number", "angle": 0, "lines": [ { "bbox": [ 295, 748, 315, 757 ], "spans": [ { "bbox": [ 295, 748, 315, 757 ], "type": "text", "content": "7449" } ] } ], "index": 16 } ], "page_size": [ 612, 792 ], "page_idx": 6 }, { "para_blocks": [ { "type": "table", "bbox": [ 50, 70, 285, 122 ], "blocks": [ { "bbox": [ 50, 70, 285, 122 ], "lines": [ { "bbox": [ 50, 70, 285, 122 ], "spans": [ { "bbox": [ 50, 70, 285, 122 ], "type": "table", "html": "
<table><tr><td>Index\Ablation</td><td>w/o \( \mathcal{L}_{loop} \)</td><td>w/o \( \mathcal{L}_{loc} \)</td><td>w/o \( \mathcal{L}_{cls} \)</td><td>LoopNet</td></tr>
<tr><td>Precision</td><td>0</td><td>50.86</td><td>60.58</td><td>71.26</td></tr>
<tr><td>Recall</td><td>0</td><td>99.75</td><td>99.50</td><td>94.52</td></tr>
<tr><td>Dice</td><td>0</td><td>59.93</td><td>73.03</td><td>79.58</td></tr></table>
", "image_path": "4d5a9ae59b1c8b183e24da18040f1123126d631828ee97a43baccc5eec9ccd69.jpg" } ] } ], "index": 0, "angle": 0, "type": "table_body" }, { "bbox": [ 55, 126, 277, 137 ], "lines": [ { "bbox": [ 55, 126, 277, 137 ], "spans": [ { "bbox": [ 55, 126, 277, 137 ], "type": "text", "content": "Table 4. The results of ablation study on different loss terms." } ] } ], "index": 1, "angle": 0, "type": "table_footnote" } ], "index": 0 }, { "bbox": [ 47, 144, 141, 158 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 47, 144, 141, 158 ], "spans": [ { "bbox": [ 47, 144, 141, 158 ], "type": "text", "content": "5.5. Ablation Study" } ] } ], "index": 2 }, { "bbox": [ 47, 163, 180, 175 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 47, 163, 180, 175 ], "spans": [ { "bbox": [ 47, 163, 180, 175 ], "type": "text", "content": "5.5.1 Ablation of Loss Terms" } ] } ], "index": 3 }, { "bbox": [ 47, 183, 289, 399 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 47, 183, 289, 399 ], "spans": [ { "bbox": [ 47, 183, 289, 399 ], "type": "text", "content": "As delineated in Sec.3.3, the joint training loss function has three terms: the image classification loss " }, { "bbox": [ 47, 183, 289, 399 ], "type": "inline_equation", "content": "\\mathcal{L}_{cls}" }, { "bbox": [ 47, 183, 289, 399 ], "type": "text", "content": ", the pseudo mask based cell locating loss " }, { "bbox": [ 47, 183, 289, 399 ], "type": "inline_equation", "content": "\\mathcal{L}_{loc}" }, { "bbox": [ 47, 183, 289, 399 ], "type": "text", "content": " and the loopback loss " }, { "bbox": [ 47, 183, 289, 399 ], "type": "inline_equation", "content": "\\mathcal{L}_{loop}" }, { "bbox": [ 47, 183, 289, 399 ], "type": "text", "content": ". We evaluate the joint training when supervised with the three terms separately in Table 4, showing that all loss terms have contributed to the final result. 
Notably, without " }, { "bbox": [ 47, 183, 289, 399 ], "type": "inline_equation", "content": "\\mathcal{L}_{loop}" }, { "bbox": [ 47, 183, 289, 399 ], "type": "text", "content": ", the model degenerates: it cannot find any cancerous cells, and all metrics drop to 0, demonstrating that the devised loopback strategy plays the key role in locating cancerous cells. Abandoning " }, { "bbox": [ 47, 183, 289, 399 ], "type": "inline_equation", "content": "\\mathcal{L}_{cls}" }, { "bbox": [ 47, 183, 289, 399 ], "type": "text", "content": " or " }, { "bbox": [ 47, 183, 289, 399 ], "type": "inline_equation", "content": "\\mathcal{L}_{loc}" }, { "bbox": [ 47, 183, 289, 399 ], "type": "text", "content": " causes the network to recognize many healthy areas as cancerous, so the Recall becomes very high but the Precision and Dice drop substantially. The " }, { "bbox": [ 47, 183, 289, 399 ], "type": "inline_equation", "content": "\\mathcal{L}_{loc}" }, { "bbox": [ 47, 183, 289, 399 ], "type": "text", "content": " acts as an antagonist and constraint to " }, { "bbox": [ 47, 183, 289, 399 ], "type": "inline_equation", "content": "\\mathcal{L}_{loop}" }, { "bbox": [ 47, 183, 289, 399 ], "type": "text", "content": ", and the " }, { "bbox": [ 47, 183, 289, 399 ], "type": "inline_equation", "content": "\\mathcal{L}_{cls}" }, { "bbox": [ 47, 183, 289, 399 ], "type": "text", "content": " maintains the discriminative capability of the image classification branch for optimizing the cell locating results with the proposed loopback strategy. Accordingly, the three terms work together to improve cancerous cell locating performance." 
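How the three terms combine follows the weighting stated earlier in the training setup (alpha = 1, beta = 0.5). The sketch below shows only the weighted sum, not the internal form of each loss term, and `joint_loss` is an illustrative name:

```python
def joint_loss(l_cls, l_loc, l_loop, alpha=1.0, beta=0.5):
    """Joint training objective: the image classification loss plus the
    pseudo-mask cell locating loss and the loopback loss, weighted by
    alpha and beta (alpha = 1, beta = 0.5 in the paper's setting)."""
    return l_cls + alpha * l_loc + beta * l_loop
```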
} ] } ], "index": 4 }, { "bbox": [ 47, 403, 241, 415 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 47, 403, 241, 415 ], "spans": [ { "bbox": [ 47, 403, 241, 415 ], "type": "text", "content": "5.5.2 Ablation of Influence of GHM-C Loss" } ] } ], "index": 5 }, { "bbox": [ 46, 422, 288, 530 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 46, 422, 288, 530 ], "spans": [ { "bbox": [ 46, 422, 288, 530 ], "type": "text", "content": "As described in Sec. 3.1, owing to the subjectivity of MVI diagnosis, there will be a few incorrectly labeled vessel samples. Therefore, we adopt the GHM-C loss rather than the CE loss, as the former can reduce the gradient contribution of these incorrectly labeled outliers. As shown in the second column of Table 5, compared to the original result in the first column, replacing the GHM-C loss of " }, { "bbox": [ 46, 422, 288, 530 ], "type": "inline_equation", "content": "\\mathcal{L}_{cls}" }, { "bbox": [ 46, 422, 288, 530 ], "type": "text", "content": " with the CE loss reduces the overall Accuracy and especially the Recall of MVI vessels, i.e., the sensitivity, an important clinical index." } ] } ], "index": 6 }, { "bbox": [ 46, 531, 288, 662 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 46, 531, 288, 662 ], "spans": [ { "bbox": [ 46, 531, 288, 662 ], "type": "text", "content": "In " }, { "bbox": [ 46, 531, 288, 662 ], "type": "inline_equation", "content": "\\mathcal{L}_{loop}" }, { "bbox": [ 46, 531, 288, 662 ], "type": "text", "content": ", we adopt the CE loss rather than the GHM-C loss for modifying the cancerous cell locating results using image-level labels. That is because the masked cancerous or non-cancerous areas of the original input vessel image can be regarded as hard samples for the image classification branch, requiring larger gradient magnitudes to modify the cancerous cell locating results. 
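The core idea of GHM-C reweighting, down-weighting samples that fall in densely populated gradient-norm regions, can be sketched as below. This is a simplified illustration: the published GHM-C additionally uses unit-region densities and a moving average, and `ghm_c_weights` / `ghm_c_loss` are hypothetical helper names.

```python
import numpy as np

def ghm_c_weights(probs, targets, bins=10):
    """Histogram the gradient norm g = |p - y| and weight each sample by
    N / (count in its bin): samples in crowded bins (very easy cases or
    clustered outliers) contribute smaller per-sample gradients."""
    g = np.abs(probs - targets)
    edges = np.linspace(0.0, 1.0, bins + 1)
    idx = np.clip(np.digitize(g, edges) - 1, 0, bins - 1)
    counts = np.bincount(idx, minlength=bins)
    return len(g) / counts[idx].astype(float)

def ghm_c_loss(probs, targets, bins=10):
    """Binary cross-entropy weighted by the GHM-C factors above."""
    w = ghm_c_weights(probs, targets, bins)
    eps = 1e-12
    ce = -(targets * np.log(probs + eps) + (1 - targets) * np.log(1 - probs + eps))
    return float((w * ce).mean())
```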
As shown in the third column in Table 5, replacing the CE loss in " }, { "bbox": [ 46, 531, 288, 662 ], "type": "inline_equation", "content": "\\mathcal{L}_{loop}" }, { "bbox": [ 46, 531, 288, 662 ], "type": "text", "content": " with GHM-C loss will induce the network to recognize more cancerous areas, making the masked samples more like easy samples, therefore reducing the Precision and Dice." } ] } ], "index": 7 }, { "bbox": [ 47, 669, 119, 681 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 47, 669, 119, 681 ], "spans": [ { "bbox": [ 47, 669, 119, 681 ], "type": "text", "content": "6. Conclusion" } ] } ], "index": 8 }, { "bbox": [ 47, 689, 288, 715 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 47, 689, 288, 715 ], "spans": [ { "bbox": [ 47, 689, 288, 715 ], "type": "text", "content": "In this paper, we put forward the first deep learning based network LoopNet for classifying MVI, which can be used" } ] } ], "index": 9 }, { "type": "table", "bbox": [ 309, 70, 545, 205 ], "blocks": [ { "bbox": [ 309, 70, 545, 205 ], "lines": [ { "bbox": [ 309, 70, 545, 205 ], "spans": [ { "bbox": [ 309, 70, 545, 205 ], "type": "table", "html": "
<table><tr><td colspan=2>Index\Ablation</td><td>Ori</td><td>\( \mathcal{L}_{cls} \) GHM-C→CE</td><td>\( \mathcal{L}_{loop} \) CE→GHM-C</td></tr>
<tr><td rowspan=5>Vessel Classification</td><td>Accuracy</td><td>97.49</td><td>96.59</td><td>-</td></tr>
<tr><td>Precision Healthy</td><td>98.55</td><td>97.37</td><td>-</td></tr>
<tr><td>Precision MVI</td><td>92.44</td><td>92.13</td><td>-</td></tr>
<tr><td>Recall Healthy</td><td>98.43</td><td>98.67</td><td>-</td></tr>
<tr><td>Recall MVI</td><td>92.98</td><td>87.13</td><td>-</td></tr>
<tr><td rowspan=3>Cell locating</td><td>Precision</td><td>71.26</td><td>-</td><td>63.26</td></tr>
<tr><td>Recall</td><td>94.52</td><td>-</td><td>98.95</td></tr>
<tr><td>Dice</td><td>79.58</td><td>-</td><td>75.97</td></tr></table>
", "image_path": "54cef8c678d68f42e034b0835014b322317cf59c76303da0d9ddd77e83940fd2.jpg" } ] } ], "index": 10, "angle": 0, "type": "table_body" }, { "bbox": [ 306, 209, 545, 232 ], "lines": [ { "bbox": [ 306, 209, 545, 232 ], "spans": [ { "bbox": [ 306, 209, 545, 232 ], "type": "text", "content": "Table 5. The ablation of replacing the GHM-C loss and CE loss in " }, { "bbox": [ 306, 209, 545, 232 ], "type": "inline_equation", "content": "\\mathcal{L}_{cls}" }, { "bbox": [ 306, 209, 545, 232 ], "type": "text", "content": " and " }, { "bbox": [ 306, 209, 545, 232 ], "type": "inline_equation", "content": "\\mathcal{L}_{loop}" }, { "bbox": [ 306, 209, 545, 232 ], "type": "text", "content": " with each other. 'Ori' denotes the original setting." } ] } ], "index": 11, "angle": 0, "type": "table_footnote" } ], "index": 10 }, { "bbox": [ 304, 251, 547, 443 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 304, 251, 547, 443 ], "spans": [ { "bbox": [ 304, 251, 547, 443 ], "type": "text", "content": "as an accurate, objective, explainable, and efficient diagnosis tool for MVI. By fully exploiting the characteristics of MVI samples, LoopNet achieves blood vessel classification and cell locating simultaneously with only category labels, which provides a new weakly supervised framework for future MVI analysis tasks on various organs. To achieve the explainable MVI analysis goal, the loopback strategy is devised to utilize the category label to supervise LoopNet in distinguishing the cancerous area from other regions, which effectively builds a direct relation between the located cancerous area and the final classification result. To verify the effectiveness of the proposed LoopNet, we collect the first Pathologic Vessel Image Dataset (PVID). 
Experimental results show that the proposed LoopNet achieves " }, { "bbox": [ 304, 251, 547, 443 ], "type": "inline_equation", "content": "97.5\\%" }, { "bbox": [ 304, 251, 547, 443 ], "type": "text", "content": " accuracy on PVID, which demonstrates the potential of deep learning on the MVI analysis task." } ] } ], "index": 12 }, { "bbox": [ 304, 445, 547, 567 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 304, 445, 547, 567 ], "spans": [ { "bbox": [ 304, 445, 547, 567 ], "type": "text", "content": "The cell locating results show that several cancerous cells are still missed, which reflects the limitation of insufficient supervision. In the future, we will focus on exploring additional supervision signals from the characteristics of pathologic images and incorporating them into the proposed framework. Furthermore, we will also devote ourselves to improving the overall performance of the proposed method and applying it to auxiliary diagnosis in clinical practice." } ] } ], "index": 13 }, { "bbox": [ 304, 594, 548, 714 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 304, 594, 548, 714 ], "spans": [ { "bbox": [ 304, 594, 548, 714 ], "type": "text", "content": "Acknowledgements. This work is supported by the National Natural Science Foundation of China (61976186, U20B2066), the Zhejiang Provincial Science and Technology Project for Public Welfare (LGF21F020020), the Starry Night Science Fund of Zhejiang University Shanghai Institute for Advanced Study (Grant No. SN-ZJU-SIAS-001), the Fundamental Research Funds for the Central Universities (2021FZZX001-23), Alibaba Group through the Alibaba Innovative Research Program, and the Alibaba-Zhejiang University Joint Research Institute of Frontier Technologies." 
} ] } ], "index": 14 } ], "discarded_blocks": [ { "bbox": [ 295, 748, 315, 757 ], "type": "page_number", "angle": 0, "lines": [ { "bbox": [ 295, 748, 315, 757 ], "spans": [ { "bbox": [ 295, 748, 315, 757 ], "type": "text", "content": "7450" } ] } ], "index": 15 } ], "page_size": [ 612, 792 ], "page_idx": 7 }, { "para_blocks": [ { "bbox": [ 48, 71, 106, 83 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 48, 71, 106, 83 ], "spans": [ { "bbox": [ 48, 71, 106, 83 ], "type": "text", "content": "References" } ] } ], "index": 0 }, { "bbox": [ 48, 91, 289, 714 ], "type": "list", "angle": 0, "index": 14, "blocks": [ { "bbox": [ 53, 91, 289, 146 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 53, 91, 289, 146 ], "spans": [ { "bbox": [ 53, 91, 289, 146 ], "type": "text", "content": "[1] Hao Chen, Xiaojuan Qi, Lequan Yu, Qi Dou, Jing Qin, and Pheng-Ann Heng. DCAN: Deep contour-aware networks for object instance segmentation from histology images. Medical Image Analysis, 36:135-146, 2017. 3" } ] } ], "index": 1 }, { "bbox": [ 53, 148, 288, 191 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 53, 148, 288, 191 ], "spans": [ { "bbox": [ 53, 148, 288, 191 ], "type": "text", "content": "[2] Marco Ancona, Enea Ceolini, Cengiz Öztireli, and Markus Gross. Towards better understanding of gradient-based attribution methods for deep neural networks. In ICLR, 2018. 3" } ] } ], "index": 2 }, { "bbox": [ 53, 194, 288, 248 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 53, 194, 288, 248 ], "spans": [ { "bbox": [ 53, 194, 288, 248 ], "type": "text", "content": "[3] Eirini Arvaniti, Kim S Fricker, Michael Moret, Niels Rupp, Thomas Hermanns, Christian Fankhauser, Norbert Wey, Peter J Wild, Jan H Rueschoff, and Manfred Claassen. Automated Gleason grading of prostate cancer tissue microarrays via deep learning. Scientific reports, 8(1):1-11, 2018. 
2, 3" } ] } ], "index": 3 }, { "bbox": [ 53, 250, 288, 293 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 53, 250, 288, 293 ], "spans": [ { "bbox": [ 53, 250, 288, 293 ], "type": "text", "content": "[4] S. Bach, A. Binder, G. Montavon, F. Klauschen, KR Müller, and W. Samek. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLOS ONE, 10, 2015. 7" } ] } ], "index": 4 }, { "bbox": [ 53, 295, 288, 350 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 53, 295, 288, 350 ], "spans": [ { "bbox": [ 53, 295, 288, 350 ], "type": "text", "content": "[5] Sebastian Bach, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus Robert Müller, and Wojciech Samek. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLOS ONE, 10(7):130140, 2015. 3" } ] } ], "index": 5 }, { "bbox": [ 53, 351, 288, 396 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 53, 351, 288, 396 ], "spans": [ { "bbox": [ 53, 351, 288, 396 ], "type": "text", "content": "[6] David Baehrens, Timon Schroeter, Stefan Harmeling, Motoaki Kawanabe, Katja Hansen, and Klaus-Robert Müller. How to explain individual classification decisions. Journal of Machine Learning Research, 11(61):1803-1831, 2010. 3" } ] } ], "index": 6 }, { "bbox": [ 53, 397, 288, 430 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 53, 397, 288, 430 ], "spans": [ { "bbox": [ 53, 397, 288, 430 ], "type": "text", "content": "[7] Osbert Bastani, Carolyn Kim, and Hamsa Bastani. Interpretability via model extraction. arXiv preprint arXiv:1706.09773, 2017. 3" } ] } ], "index": 7 }, { "bbox": [ 53, 431, 288, 497 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 53, 431, 288, 497 ], "spans": [ { "bbox": [ 53, 431, 288, 497 ], "type": "text", "content": "[8] Rubén G. Bengio, Leandro Cristian Arribillaga, Javier Epelde, Sergio Orellana, Ariel Montedoro, Verónica Bengio, Esteban Cordero, and Matías Guevara. 
Evaluation of microvascular invasion as a prognostic factor in the progression of nonmetastatic renal cancer. Central European Journal of Urology, 71(4), 2018. 1" } ] } ], "index": 8 }, { "bbox": [ 53, 498, 288, 553 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 53, 498, 288, 553 ], "spans": [ { "bbox": [ 53, 498, 288, 553 ], "type": "text", "content": "[9] David S Bolme, J Ross Beveridge, Bruce A Draper, and Yui Man Lui. Visual object tracking using adaptive correlation filters. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 2544-2550. IEEE, 2010. 2, 5" } ] } ], "index": 9 }, { "bbox": [ 48, 555, 288, 599 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 48, 555, 288, 599 ], "spans": [ { "bbox": [ 48, 555, 288, 599 ], "type": "text", "content": "[10] Alireza Chamanzar and Yao Nie. Weakly supervised multitask learning for cell detection and segmentation. In 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), pages 513-516. IEEE, 2020. 3" } ] } ], "index": 10 }, { "bbox": [ 48, 600, 288, 633 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 48, 600, 288, 633 ], "spans": [ { "bbox": [ 48, 600, 288, 633 ], "type": "text", "content": "[11] Ying Chen, Feng Mao, Jie Song, Xinchao Wang, Huiqiong Wang, and Mingli Song. Self-born wiring for neural trees. CVPR, 2021. 3" } ] } ], "index": 11 }, { "bbox": [ 48, 635, 288, 679 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 48, 635, 288, 679 ], "spans": [ { "bbox": [ 48, 635, 288, 679 ], "type": "text", "content": "[12] Yuxin Cui, Guiying Zhang, Zhonghao Liu, Zheng Xiong, and Jianjun Hu. A deep learning algorithm for one-step contour aware nuclei segmentation of histopathological images. Medical Biological Engineering Computing, 2018. 
3" } ] } ], "index": 12 }, { "bbox": [ 48, 680, 288, 714 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 48, 680, 288, 714 ], "spans": [ { "bbox": [ 48, 680, 288, 714 ], "type": "text", "content": "[13] Saurabh Desai and Harish G. Ramaswamy. Ablation-cam: Visual explanations for deep convolutional network via gradient-free localization. In WACV, pages 983-991, 2020. 3" } ] } ], "index": 13 } ], "sub_type": "ref_text" }, { "bbox": [ 307, 73, 547, 714 ], "type": "list", "angle": 0, "index": 27, "blocks": [ { "bbox": [ 307, 73, 547, 128 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 307, 73, 547, 128 ], "spans": [ { "bbox": [ 307, 73, 547, 128 ], "type": "text", "content": "[14] Neeraj Dhungel, Gustavo Carneiro, and Andrew P Bradley. Deep learning and structured prediction for the segmentation of mass in mammograms. In International Conference on Medical image computing and computer-assisted intervention, pages 605-612. Springer, 2015. 2" } ] } ], "index": 15 }, { "bbox": [ 307, 129, 547, 173 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 307, 129, 547, 173 ], "spans": [ { "bbox": [ 307, 129, 547, 173 ], "type": "text", "content": "[15] S. Feng, X. Yu, W. Liang, X. Li, W. Zhong, W. Hu, H. Zhang, Z. Feng, M. Song, and J. Zhang. Development of a deep learning model to assist with diagnosis of hepatocellular carcinoma. 2021. 1" } ] } ], "index": 16 }, { "bbox": [ 307, 175, 547, 217 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 307, 175, 547, 217 ], "spans": [ { "bbox": [ 307, 175, 547, 217 ], "type": "text", "content": "[16] Yao Feng, Fan Wu, Xiaohu Shao, Yanfeng Wang, and Xi Zhou. Joint 3d face reconstruction and dense alignment with position map regression network. In ECCV, pages 557-574, 2018. 
3" } ] } ], "index": 17 }, { "bbox": [ 307, 220, 547, 285 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 307, 220, 547, 285 ], "spans": [ { "bbox": [ 307, 220, 547, 285 ], "type": "text", "content": "[17] Zunlei Feng, Zhonghua Wang, Xinchao Wang, Yining Mao, Thomas Li, Jie Lei, Yuexuan Wang, and Mingli Song. Mutualcomplementing framework for nuclei detection and segmentation in pathology image. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4036-4045, 2021. 3" } ] } ], "index": 18 }, { "bbox": [ 307, 286, 547, 352 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 307, 286, 547, 352 ], "spans": [ { "bbox": [ 307, 286, 547, 352 ], "type": "text", "content": "[18] Zunlei Feng, Zhonghua Wang, Xinchao Wang, Xiuming Zhang, Lechao Cheng, Jie Lei, Yuexuan Wang, and Mingli Song. Edge-competing pathological liver vessel segmentation with limited labels. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 1325-1333, 2021. 2, 4" } ] } ], "index": 19 }, { "bbox": [ 307, 354, 547, 408 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 307, 354, 547, 408 ], "spans": [ { "bbox": [ 307, 354, 547, 408 ], "type": "text", "content": "[19] Simon Graham, Quoc Dang Vu, Shan E Ahmed Raza, Ayesha Azam, Yee Wah Tsang, Jin Tae Kwak, and Nasir Rajpoot. Hover-net: Simultaneous segmentation and classification of nuclei in multi-tissue histology images. Medical Image Analysis, 58:101563, 2019. 3" } ] } ], "index": 20 }, { "bbox": [ 307, 410, 547, 465 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 307, 410, 547, 465 ], "spans": [ { "bbox": [ 307, 410, 547, 465 ], "type": "text", "content": "[20] Yue Guo, Jason Stein, Guorong Wu, and Ashok Krishnamurthy. Sau-net: A universal deep network for cell counting. In Proceedings of the 10th ACM international conference on bioinformatics, computational biology and health informatics, pages 299-306, 2019. 
3" } ] } ], "index": 21 }, { "bbox": [ 307, 467, 547, 510 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 307, 467, 547, 510 ], "spans": [ { "bbox": [ 307, 467, 547, 510 ], "type": "text", "content": "[21] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016. 6, 7" } ] } ], "index": 22 }, { "bbox": [ 307, 512, 547, 555 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 307, 512, 547, 555 ], "spans": [ { "bbox": [ 307, 512, 547, 555 ], "type": "text", "content": "[22] João F Henriques, Rui Caseiro, Pedro Martins, and Jorge Batista. High-speed tracking with kernelized correlation filters. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(3):583-596, 2014. 2, 5" } ] } ], "index": 23 }, { "bbox": [ 307, 557, 547, 601 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 307, 557, 547, 601 ], "spans": [ { "bbox": [ 307, 557, 547, 601 ], "type": "text", "content": "[23] Le Hou, Ayush Agarwal, Dimitris Samaras, Tahsin M. Kurc, and Joel H. Saltz. Robust histopathology image analysis: To label or to synthesize? In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019. 3" } ] } ], "index": 24 }, { "bbox": [ 307, 602, 547, 657 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 307, 602, 547, 657 ], "spans": [ { "bbox": [ 307, 602, 547, 657 ], "type": "text", "content": "[24] Le Hou, Vu Nguyen, Ariel B. Kanevsky, Dimitris Samaaras, Tahsin M. Kurc, Tianhao Zhao, Rajarsi R. Gupta, Yi Gao, Wenjin Chen, and David and Foran. Sparse autoencoder for unsupervised nucleus detection and representation in histopathology images. Pattern recognition, 2019. 
3" } ] } ], "index": 25 }, { "bbox": [ 307, 658, 547, 714 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 307, 658, 547, 714 ], "spans": [ { "bbox": [ 307, 658, 547, 714 ], "type": "text", "content": "[25] Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017. 7" } ] } ], "index": 26 } ], "sub_type": "ref_text" } ], "discarded_blocks": [ { "bbox": [ 295, 748, 314, 757 ], "type": "page_number", "angle": 0, "lines": [ { "bbox": [ 295, 748, 314, 757 ], "spans": [ { "bbox": [ 295, 748, 314, 757 ], "type": "text", "content": "7451" } ] } ], "index": 28 } ], "page_size": [ 612, 792 ], "page_idx": 8 }, { "para_blocks": [ { "bbox": [ 48, 72, 288, 713 ], "type": "list", "angle": 0, "index": 15, "blocks": [ { "bbox": [ 48, 72, 288, 127 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 48, 72, 288, 127 ], "spans": [ { "bbox": [ 48, 72, 288, 127 ], "type": "text", "content": "[26] Wei Hu, Huanhuan Sheng, Jing Wu, Yining Li, Tianyi Liu, Yonghao Wang, and Yuan Wen. Generative adversarial training for weakly supervised nuclei instance segmentation. In 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pages 3649-3654. IEEE, 2020. 3" } ] } ], "index": 0 }, { "bbox": [ 48, 129, 288, 195 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 48, 129, 288, 195 ], "spans": [ { "bbox": [ 48, 129, 288, 195 ], "type": "text", "content": "[27] Hai Huang, Xiu-Wu Pan, Yi Huang, Dan-Feng Xu, Xin-Gang Cui, Lin Li, Yi Hong, Lu Chen, Yi Gao, and Lei Yin. Microvascular invasion as a prognostic indicator in renal cell carcinoma: a systematic review and meta-analysis. In International Journal of Clinical and Experimental Medicine, 2015. 
1" } ] } ], "index": 1 }, { "bbox": [ 48, 198, 288, 240 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 48, 198, 288, 240 ], "spans": [ { "bbox": [ 48, 198, 288, 240 ], "type": "text", "content": "[28] Yongcheng Jing, Yining Mao, Yiding Yang, Yibing Zhan, Mingli Song, Xinchao Wang, and Dacheng Tao. Learning graph neural networks for image style transfer. In ECCV, 2022. 2" } ] } ], "index": 2 }, { "bbox": [ 48, 243, 288, 276 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 48, 243, 288, 276 ], "spans": [ { "bbox": [ 48, 243, 288, 276 ], "type": "text", "content": "[29] Yongcheng Jing, Yiding Yang, Xinchao Wang, Mingli Song, and Dacheng Tao. Amalgamating knowledge from heterogeneous graph neural networks. In CVPR, 2021. 2" } ] } ], "index": 3 }, { "bbox": [ 48, 277, 288, 311 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 48, 277, 288, 311 ], "spans": [ { "bbox": [ 48, 277, 288, 311 ], "type": "text", "content": "[30] Yongcheng Jing, Yiding Yang, Xinchao Wang, Mingli Song, and Dacheng Tao. Meta-aggregator: learning to aggregate for 1-bit graph neural networks. In ICCV, 2021. 2" } ] } ], "index": 4 }, { "bbox": [ 48, 312, 288, 345 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 48, 312, 288, 345 ], "spans": [ { "bbox": [ 48, 312, 288, 345 ], "type": "text", "content": "[31] N. Kaur, G. Chetty, and L. Singh. A novel approach using deep neural network vessel segmentation & retinal disease detection. In CSDE, 2020. 2" } ] } ], "index": 5 }, { "bbox": [ 48, 347, 288, 380 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 48, 347, 288, 380 ], "spans": [ { "bbox": [ 48, 347, 288, 380 ], "type": "text", "content": "[32] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016. 
7" } ] } ], "index": 6 }, { "bbox": [ 48, 382, 288, 415 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 48, 382, 288, 415 ], "spans": [ { "bbox": [ 48, 382, 288, 415 ], "type": "text", "content": "[33] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. Communications of the ACM, 60(6):84-90, 2017. 7" } ] } ], "index": 7 }, { "bbox": [ 48, 417, 288, 472 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 48, 417, 288, 472 ], "spans": [ { "bbox": [ 48, 417, 288, 472 ], "type": "text", "content": "[34] Neeraj Kumar, Ruchika Verma, Sanuj Sharma, Surabhi Bhargava, Abhishek Vahadane, and Amit Sethi. A dataset and a technique for generalized nuclear segmentation for computational pathology. IEEE Transactions on Medical Imaging, pages 1-1, 2017. 3" } ] } ], "index": 8 }, { "bbox": [ 48, 474, 288, 528 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 48, 474, 288, 528 ], "spans": [ { "bbox": [ 48, 474, 288, 528 ], "type": "text", "content": "[35] Viksit Kumar, Jeremy M Webb, Adriana Gregory, Max Denis, Duane D Meixner, Mahdi Bayat, Dana H Whaley, Mostafa Fatemi, and Azra Alizad. Automated and real-time segmentation of suspicious breast masses using convolutional neural network. PloS one, 13(5):e0195816, 2018. 2" } ] } ], "index": 9 }, { "bbox": [ 48, 530, 288, 563 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 48, 530, 288, 563 ], "spans": [ { "bbox": [ 48, 530, 288, 563 ], "type": "text", "content": "[36] Jie Lei, Zhe Wang, Zunlei Feng, Mingli Song, and Jiajun Bu. Understanding the prediction process of deep networks by forests. In BigMM, pages 1-7, 2018. 3" } ] } ], "index": 10 }, { "bbox": [ 48, 565, 288, 609 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 48, 565, 288, 609 ], "spans": [ { "bbox": [ 48, 565, 288, 609 ], "type": "text", "content": "[37] Benjamin J. Lengerich, Sandeep Konam, Eric P. Xing, Stephanie Rosenthal, and Manuela M. Veloso. 
Visual explanations for convolutional neural networks via input resampling. arXiv preprint arXiv:1707.09641, 2017. 3" } ] } ], "index": 11 }, { "bbox": [ 48, 611, 288, 654 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 48, 611, 288, 654 ], "spans": [ { "bbox": [ 48, 611, 288, 654 ], "type": "text", "content": "[38] Buyu Li, Yu Liu, and Xiaogang Wang. Gradient harmonized single-stage detector. In Proceedings of the AAAI conference on artificial intelligence, volume 33, pages 8577-8584, 2019. 5" } ] } ], "index": 12 }, { "bbox": [ 48, 656, 288, 689 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 48, 656, 288, 689 ], "spans": [ { "bbox": [ 48, 656, 288, 689 ], "type": "text", "content": "[39] Weizhen Liu, Qian He, and Xuming He. Weakly supervised nuclei segmentation via instance learning. arXiv preprint arXiv:2202.01564, 2022. 7" } ] } ], "index": 13 }, { "bbox": [ 48, 691, 288, 713 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 48, 691, 288, 713 ], "spans": [ { "bbox": [ 48, 691, 288, 713 ], "type": "text", "content": "[40] Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, and Saining Xie. A convnet for the" } ] } ], "index": 14 } ], "sub_type": "ref_text" }, { "bbox": [ 307, 73, 547, 712 ], "type": "list", "angle": 0, "index": 29, "blocks": [ { "bbox": [ 327, 73, 547, 105 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 327, 73, 547, 105 ], "spans": [ { "bbox": [ 327, 73, 547, 105 ], "type": "text", "content": "2020s. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11976-11986, 2022. 7" } ] } ], "index": 16 }, { "bbox": [ 307, 108, 547, 162 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 307, 108, 547, 162 ], "spans": [ { "bbox": [ 307, 108, 547, 162 ], "type": "text", "content": "[41] Faisal Mahmood, Daniel Borders, Richard J. Chen, Gregory N. Mckay, and Nicholas J. Durr. 
Deep adversarial training for multi-organ nuclei segmentation in histopathology images. IEEE Transactions on Medical Imaging, PP(99):1-1, 2019. 3" } ] } ], "index": 17 }, { "bbox": [ 307, 165, 547, 220 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 307, 165, 547, 220 ], "spans": [ { "bbox": [ 307, 165, 547, 220 ], "type": "text", "content": "[42] Faisal Mahmood, Daniel Borders, Richard J Chen, Gregory N McKay, Kevan J Salimian, Alexander Baras, and Nicholas J Durr. Deep adversarial training for multi-organ nuclei segmentation in histopathology images. IEEE transactions on medical imaging, 39(11):3257-3267, 2019. 2, 3" } ] } ], "index": 18 }, { "bbox": [ 307, 222, 547, 276 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 307, 222, 547, 276 ], "spans": [ { "bbox": [ 307, 222, 547, 276 ], "type": "text", "content": "[43] Manuel Rodríguez-Perálvarez, Tu Vinh Luong, Lorenzo Andreana, Tim Meyer, Amar Paul Dhillon, and Andrew Kenneth Burroughs. A systematic review of microvascular invasion in hepatocellular carcinoma: Diagnostic and prognostic variability. 2012. 1" } ] } ], "index": 19 }, { "bbox": [ 307, 279, 547, 312 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 307, 279, 547, 312 ], "spans": [ { "bbox": [ 307, 279, 547, 312 ], "type": "text", "content": "[44] Rakshit Naidu and Joy Michael. Ss-cam: Smoothed score-cam for sharper visual feature localization. arXiv preprint arXiv:2006.14255, 2020. 3, 7" } ] } ], "index": 20 }, { "bbox": [ 307, 314, 547, 358 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 307, 314, 547, 358 ], "spans": [ { "bbox": [ 307, 314, 547, 358 ], "type": "text", "content": "[45] Peter Naylor, Marick Laé, Fabien Reyal, and Thomas Walter. Segmentation of nuclei in histopathology images by deep regression of the distance map. IEEE Transactions on Medical Imaging, 38(2):448-459, 2019. 
3" } ] } ], "index": 21 }, { "bbox": [ 307, 360, 547, 404 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 307, 360, 547, 404 ], "spans": [ { "bbox": [ 307, 360, 547, 404 ], "type": "text", "content": "[46] Peter Naylor, Marick Laé, Fabien Reyal, and Thomas Walter. Nuclei segmentation in histopathology images using deep neural networks. In IEEE International Symposium on Biomedical Imaging, 2017. 3" } ] } ], "index": 22 }, { "bbox": [ 307, 407, 547, 471 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 307, 407, 547, 471 ], "spans": [ { "bbox": [ 307, 407, 547, 471 ], "type": "text", "content": "[47] Hui Qu, Pengxiang Wu, Qiaoying Huang, Jingru Yi, Gregory M Riedlinger, Subhajyoti De, and Dimitris N Metaxas. Weakly supervised deep nuclei segmentation using points annotation in histopathology images. In International Conference on Medical Imaging with Deep Learning, pages 390-400. PMLR, 2019. 7" } ] } ], "index": 23 }, { "bbox": [ 307, 474, 547, 517 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 307, 474, 547, 517 ], "spans": [ { "bbox": [ 307, 474, 547, 517 ], "type": "text", "content": "[48] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, 2015. 3" } ] } ], "index": 24 }, { "bbox": [ 307, 520, 547, 573 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 307, 520, 547, 573 ], "spans": [ { "bbox": [ 307, 520, 547, 573 ], "type": "text", "content": "[49] Monjoy Saha and Chandan Chakraborty. Her2net: A deep framework for semantic segmentation and classification of cell membranes and nuclei in breast cancer evaluation. IEEE Transactions on Image Processing, 27(5):2189-2200, 2018. 
3" } ] } ], "index": 25 }, { "bbox": [ 307, 577, 547, 620 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 307, 577, 547, 620 ], "spans": [ { "bbox": [ 307, 577, 547, 620 ], "type": "text", "content": "[50] Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. Learning important features through propagating activation differences. In International conference on machine learning, pages 3145-3153. PMLR, 2017. 3, 7" } ] } ], "index": 26 }, { "bbox": [ 307, 623, 547, 667 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 307, 623, 547, 667 ], "spans": [ { "bbox": [ 307, 623, 547, 667 ], "type": "text", "content": "[51] Avanti Shrikumar, Peyton Greenside, Anna Shcherbina, and Anshul Kundaje. Not just a black box: Learning important features through propagating activation differences. arXiv preprint arXiv:1605.01713, 2016. 3" } ] } ], "index": 27 }, { "bbox": [ 307, 670, 547, 712 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 307, 670, 547, 712 ], "spans": [ { "bbox": [ 307, 670, 547, 712 ], "type": "text", "content": "[52] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. In ICLR Workshop, 2013. 3" } ] } ], "index": 28 } ], "sub_type": "ref_text" } ], "discarded_blocks": [ { "bbox": [ 295, 748, 315, 757 ], "type": "page_number", "angle": 0, "lines": [ { "bbox": [ 295, 748, 315, 757 ], "spans": [ { "bbox": [ 295, 748, 315, 757 ], "type": "text", "content": "7452" } ] } ], "index": 30 } ], "page_size": [ 612, 792 ], "page_idx": 9 }, { "para_blocks": [ { "bbox": [ 48, 72, 288, 713 ], "type": "list", "angle": 0, "index": 14, "blocks": [ { "bbox": [ 48, 72, 288, 106 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 48, 72, 288, 106 ], "spans": [ { "bbox": [ 48, 72, 288, 106 ], "type": "text", "content": "[53] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. 
arXiv preprint arXiv:1409.1556, 2014. 7" } ] } ], "index": 0 }, { "bbox": [ 48, 106, 288, 171 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 48, 106, 288, 171 ], "spans": [ { "bbox": [ 48, 106, 288, 171 ], "type": "text", "content": "[54] Korsuk Sirinukunwattana, Shan E Ahmed Raza, Yee-Wah Tsang, David RJ Snead, Ian A Cree, and Nasir M Rajpoot. Locality sensitive deep learning for detection and classification of nuclei in routine colon cancer histology images. IEEE transactions on medical imaging, 35(5):1196-1206, 2016. 2, 3" } ] } ], "index": 1 }, { "bbox": [ 48, 172, 288, 227 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 48, 172, 288, 227 ], "spans": [ { "bbox": [ 48, 172, 288, 227 ], "type": "text", "content": "[55] Korsuk Sirinukunwattana, Shan E Ahmed Raza, Yee Wah Tsang, David R. J. Snead, Ian A. Cree, and Nasir M. Rajpoot. Locality sensitive deep learning for detection and classification of nuclei in routine colon cancer histology images. IEEE Transactions on Medical Imaging, 35(5):1196-1206, 2016. 3" } ] } ], "index": 2 }, { "bbox": [ 48, 228, 288, 293 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 48, 228, 288, 293 ], "spans": [ { "bbox": [ 48, 228, 288, 293 ], "type": "text", "content": "[56] Shuji Sumie, Ryoko Kuromatsu, Koji Okuda, Eiji Ando, Akio Takata, Nobuyoshi Fukushima, Yasutomo Watanabe, Masamichi Kojiro, and Michio Sata. Microvascular invasion in patients with hepatocellular carcinoma and its predictable clinicopathological factors. In Annals of Surgical Oncology, 2008. 1" } ] } ], "index": 3 }, { "bbox": [ 48, 293, 288, 326 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 48, 293, 288, 326 ], "spans": [ { "bbox": [ 48, 293, 288, 326 ], "type": "text", "content": "[57] Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. In ICML, pages 3319-3328. PMLR, 2017. 
3" } ] } ], "index": 4 }, { "bbox": [ 48, 327, 288, 381 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 48, 327, 288, 381 ], "spans": [ { "bbox": [ 48, 327, 288, 381 ], "type": "text", "content": "[58] Tahir Mahmood, Muhammad Owais, Kyoung Jun Noh, Hyo Sik Yoon, Adnan Haider, Haseeb Sultan, and Kang Ryoung Park. Artificial intelligence-based segmentation of nuclei in multi-organ histopathology images: Model development and validation. JMIR Medical Informatics, 2021. 3" } ] } ], "index": 5 }, { "bbox": [ 48, 382, 288, 403 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 48, 382, 288, 403 ], "spans": [ { "bbox": [ 48, 382, 288, 403 ], "type": "text", "content": "[59] Mingxing Tan and Quoc Le. Efficientnetv2: Smaller models and faster training. In ICML, pages 10096-10106, 2021. 7" } ] } ], "index": 6 }, { "bbox": [ 48, 404, 288, 448 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 48, 404, 288, 448 ], "spans": [ { "bbox": [ 48, 404, 288, 448 ], "type": "text", "content": "[60] Jeya Maria Jose Valanarasu, Poojan Oza, Ilker Hacihaliloglu, and Vishal M Patel. Medical transformer: Gated axial-attention for medical image segmentation. arXiv preprint arXiv:2102.10662, 2021. 3" } ] } ], "index": 7 }, { "bbox": [ 48, 449, 288, 492 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 48, 449, 288, 492 ], "spans": [ { "bbox": [ 48, 449, 288, 492 ], "type": "text", "content": "[61] Haofan Wang, Zifan Wang, Mengnan Du, Fan Yang, Zijian Zhang, Sirui Ding, Piotr Mardziel, and Xia Hu. Score-cam: Score-weighted visual explanations for convolutional neural networks. In CVPR Workshops, pages 111-119, 2020. 3" } ] } ], "index": 8 }, { "bbox": [ 48, 492, 288, 525 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 48, 492, 288, 525 ], "spans": [ { "bbox": [ 48, 492, 288, 525 ], "type": "text", "content": "[62] Less Wright. Ranger - a synergistic optimizer. 
https://github.com/lessw2020/Ranger-Deep-Learning-Optimizer, 2019. 7" } ] } ], "index": 9 }, { "bbox": [ 48, 525, 288, 581 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 48, 525, 288, 581 ], "spans": [ { "bbox": [ 48, 525, 288, 581 ], "type": "text", "content": "[63] Chensu Xie, Chad M Vanderbilt, Anne Grabenstetter, and Thomas J Fuchs. Voca: cell nuclei detection in histopathology images by vector oriented confidence accumulation. In International Conference on Medical Imaging with Deep Learning, pages 527-539. PMLR, 2019. 3" } ] } ], "index": 10 }, { "bbox": [ 48, 582, 288, 624 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 48, 582, 288, 624 ], "spans": [ { "bbox": [ 48, 582, 288, 624 ], "type": "text", "content": "[64] Yuanpu Xie, Fuyong Xing, Xiaoshuang Shi, Xiangfei Kong, Hai Su, and Lin Yang. Efficient and robust cell detection: A structured regression approach. Medical image analysis, 44:245-254, 2018. 3" } ] } ], "index": 11 }, { "bbox": [ 48, 625, 288, 667 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 48, 625, 288, 667 ], "spans": [ { "bbox": [ 48, 625, 288, 667 ], "type": "text", "content": "[65] Fuyong Xing, Yuanpu Xie, and Lin Yang. An automatic learning-based framework for robust nucleus segmentation. IEEE Transactions on Medical Imaging, 35(2):550-566, 2016. 3" } ] } ], "index": 12 }, { "bbox": [ 48, 669, 288, 713 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 48, 669, 288, 713 ], "spans": [ { "bbox": [ 48, 669, 288, 713 ], "type": "text", "content": "[66] Jun Xu, Lei Xiang, Renlong Hang, and Jianzhong Wu. Stacked sparse autoencoder (ssae) for nuclei detection on breast cancer histopathology images. In IEEE International Symposium on Biomedical Imaging, 2014. 
3" } ] } ], "index": 13 } ], "sub_type": "ref_text" }, { "bbox": [ 307, 72, 547, 656 ], "type": "list", "angle": 0, "index": 29, "blocks": [ { "bbox": [ 307, 72, 547, 128 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 307, 72, 547, 128 ], "spans": [ { "bbox": [ 307, 72, 547, 128 ], "type": "text", "content": "[67] Jun Xu, Lei Xiang, Qingshan Liu, Hannah Gilmore, Jianzhong Wu, Jinghai Tang, and Anant Madabhushi. Stacked sparse autoencoder (ssae) for nuclei detection on breast cancer histopathology images. IEEE transactions on medical imaging, 35(1):119-130, 2015. 2, 3" } ] } ], "index": 15 }, { "bbox": [ 307, 129, 547, 183 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 307, 129, 547, 183 ], "spans": [ { "bbox": [ 307, 129, 547, 183 ], "type": "text", "content": "[68] Xingyi Yang, Jingwen Ye, and Xinchao Wang. Factorizing knowledge in neural networks. In Computer Vision-ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XXXIV, pages 73-91. Springer, 2022. 2" } ] } ], "index": 16 }, { "bbox": [ 307, 184, 545, 206 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 307, 184, 545, 206 ], "spans": [ { "bbox": [ 307, 184, 545, 206 ], "type": "text", "content": "[69] Xingyi Yang, Daquan Zhou, Songhua Liu, Jingwen Ye, and Xinchao Wang. Deep model reassembly. NeurIPS, 2022. 2" } ] } ], "index": 17 }, { "bbox": [ 307, 207, 545, 240 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 307, 207, 545, 240 ], "spans": [ { "bbox": [ 307, 207, 545, 240 ], "type": "text", "content": "[70] Jingwen Ye, Yifang Fu, Jie Song, Xingyi Yang, Songhua Liu, Xin Jin, Mingli Song, and Xinchao Wang. Learning with recoverable forgetting. In ECCV, 2022. 
2" } ] } ], "index": 18 }, { "bbox": [ 307, 241, 547, 295 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 307, 241, 547, 295 ], "spans": [ { "bbox": [ 307, 241, 547, 295 ], "type": "text", "content": "[71] Jingwen Ye, Yixin Ji, Xinchao Wang, Xin Gao, and Mingli Song. Data-free knowledge amalgamation via group-stack dual-gan. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 12513-12522, 2020. 2" } ] } ], "index": 19 }, { "bbox": [ 307, 296, 547, 330 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 307, 296, 547, 330 ], "spans": [ { "bbox": [ 307, 296, 547, 330 ], "type": "text", "content": "[72] Jingwen Ye, Yining Mao, Jie Song, Xinchao Wang, Cheng Jin, and Mingli Song. Safe distillation box. In AAAI Conference on Artificial Intelligence, 2021. 2" } ] } ], "index": 20 }, { "bbox": [ 307, 331, 547, 374 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 307, 331, 547, 374 ], "spans": [ { "bbox": [ 307, 331, 547, 374 ], "type": "text", "content": "[73] Xiaotian Yu, Zunlei Feng, Mingli Song, Yuexuan Wang, Xiuming Zhang, and Thomas Li. Tendentious noise-rectifying framework for pathological HCC grading. In British Machine Vision Conference, 2021. 2, 3" } ] } ], "index": 21 }, { "bbox": [ 307, 375, 547, 398 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 307, 375, 547, 398 ], "spans": [ { "bbox": [ 307, 375, 547, 398 ], "type": "text", "content": "[74] Matthew D. Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In ECCV, 2014. 3" } ] } ], "index": 22 }, { "bbox": [ 307, 399, 547, 453 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 307, 399, 547, 453 ], "spans": [ { "bbox": [ 307, 399, 547, 453 ], "type": "text", "content": "[75] Donghao Zhang, Yang Song, Siqi Liu, Dagan Feng, Yue Wang, and Weidong Cai. Nuclei instance segmentation with dual contour-enhanced adversarial network. 
In 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), pages 409-412. IEEE, 2018. 2, 3" } ] } ], "index": 23 }, { "bbox": [ 307, 454, 547, 487 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 307, 454, 547, 487 ], "spans": [ { "bbox": [ 307, 454, 547, 487 ], "type": "text", "content": "[76] Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Learning deep features for discriminative localization. In CVPR, 2016. 3" } ] } ], "index": 24 }, { "bbox": [ 307, 488, 547, 521 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 307, 488, 547, 521 ], "spans": [ { "bbox": [ 307, 488, 547, 521 ], "type": "text", "content": "[77] Jian Zhou and Olga G Troyanskaya. Predicting effects of noncoding variants with deep learning-based sequence model. Nature Methods, 12(10):931-934, 2015. 3" } ] } ], "index": 25 }, { "bbox": [ 307, 522, 547, 577 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 307, 522, 547, 577 ], "spans": [ { "bbox": [ 307, 522, 547, 577 ], "type": "text", "content": "[78] Xiao Zhou, Zhen Cheng, Miao Gu, and Fei Chang. Lirnet: Local integral regression network for both strongly and weakly supervised nuclei detection. In 2020 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), pages 945-951. IEEE, 2020. 3" } ] } ], "index": 26 }, { "bbox": [ 307, 578, 547, 622 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 307, 578, 547, 622 ], "spans": [ { "bbox": [ 307, 578, 547, 622 ], "type": "text", "content": "[79] Yin Zhou, Hang Chang, Kenneth E. Barner, and Bahram Parvin. Nuclei segmentation via sparsity constrained convolutional regression. In IEEE International Symposium on Biomedical Imaging, 2015. 3" } ] } ], "index": 27 }, { "bbox": [ 307, 622, 547, 656 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 307, 622, 547, 656 ], "spans": [ { "bbox": [ 307, 622, 547, 656 ], "type": "text", "content": "[80] Luisa M Zintgraf, Taco S Cohen, Tameem Adel, and Max Welling. 
Visualizing deep neural network decisions: Prediction difference analysis. In ICLR, 2017. 3" } ] } ], "index": 28 } ], "sub_type": "ref_text" } ], "discarded_blocks": [ { "bbox": [ 295, 748, 315, 757 ], "type": "page_number", "angle": 0, "lines": [ { "bbox": [ 295, 748, 315, 757 ], "spans": [ { "bbox": [ 295, 748, 315, 757 ], "type": "text", "content": "7453" } ] } ], "index": 30 } ], "page_size": [ 612, 792 ], "page_idx": 10 } ], "_backend": "vlm", "_version_name": "2.6.4" }