  ![image/png](https://cdn-uploads.huggingface.co/production/uploads/663f06e01cd68975883a353e/IAeFWhH8brZYDaTJnew2N.png)

### Step 1: Download the public datasets

Here we provide only the download links for the public datasets; the image-ID index of our cleaned dataset is published on Hugging Face.

#### Domain Alignment Stage

PubMedVision-Alignment: [FreedomIntelligence/PubMedVision · Datasets at Hugging Face](https://huggingface.co/datasets/FreedomIntelligence/PubMedVision)

PMC-OA: [axiong/pmc_oa · Datasets at Hugging Face](https://huggingface.co/datasets/axiong/pmc_oa)

Quilt-1M: [Quilt-1M: One Million Image-Text Pairs for Histopathology (zenodo.org)](https://zenodo.org/records/8239942)

#### Instruction Tuning Stage

PathVQA: https://drive.google.com/drive/folders/1G2C2_FUCyYQKCkSeCRRiTTsLDvOAjFj5

PMC-VQA: [xmcmic/PMC-VQA · Datasets at Hugging Face](https://huggingface.co/datasets/xmcmic/PMC-VQA)

#### Classification datasets for zero-shot testing

ICIAR 2018 BACH: https://iciar2018-challenge.grand-challenge.org/Download/

OSCC: https://data.mendeley.com/datasets/ftmp4cvtmb/1

ColonPath: https://medfm2023.grand-challenge.org/datasets

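The three Hugging Face datasets above can be fetched with `huggingface_hub` (a minimal sketch; the target directory layout is our own choice here, and the Zenodo, Google Drive, and challenge-site datasets still need a manual download):

```python
# Hugging Face dataset repos listed above; the remaining sources
# (Zenodo, Google Drive, challenge sites) must be downloaded by hand.
HF_DATASET_REPOS = [
    "FreedomIntelligence/PubMedVision",
    "axiong/pmc_oa",
    "xmcmic/PMC-VQA",
]

def download_hf_datasets(target_dir="./raw_datasets"):
    """Snapshot each dataset repo into target_dir/<repo name>."""
    # Deferred import so the list above can be reused without huggingface_hub.
    from huggingface_hub import snapshot_download

    for repo_id in HF_DATASET_REPOS:
        snapshot_download(
            repo_id=repo_id,
            repo_type="dataset",
            local_dir=f"{target_dir}/{repo_id.split('/')[-1]}",
        )
```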
### Step 2: Data processing

First, use the image index of the cleaned dataset we provide to extract the human pathology data, then organize it into the following format:
```
[
    {
        "image": "<image file name>",
        "caption": "<caption text>"
    }
]
```
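The extraction step can be sketched as follows (a hedged sketch: `index_to_caption` and the output file name `pallava_raw.json` are stand-ins for the released image-ID index and your own path choices):

```python
import json

# Stand-in for the released image-ID index: maps cleaned image names to
# captions. In practice, load this mapping from the index files we publish.
index_to_caption = {
    "pmc_oa_000001.jpg": "H&E-stained section of ...",
    "quilt_000042.jpg": "Photomicrograph showing ...",
}

# Build the intermediate format shown above, ready for the next step.
records = [{"image": name, "caption": cap} for name, cap in index_to_caption.items()]

with open("pallava_raw.json", "w") as f:
    json.dump(records, f, indent=4)
```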

Finally, run `dataformat.py` to produce the format needed to train the model:
```
python dataformat.py
```

## Model
Our released weights are distributed training checkpoints that can be loaded directly for training through XTuner. If you need merged weights, they can be produced with XTuner (using the weights from the domain alignment stage as an example):
```
xtuner convert pth_to_hf path/pallava_domain_alignment.py ./domain_alignment_weight.pth ./domain_alignment_weight_ft
xtuner convert merge meta-llama/Meta-Llama-3-8B-Instruct ./domain_alignment_weight_ft/llm_adapter ./domain_alignment_weight_ft/llm_merge_lora
```

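If the merge step above succeeds, `llm_merge_lora` is a standard Hugging Face checkpoint, so the language-model part can be loaded with `transformers` (a sketch under that assumption; the directory path comes from the commands above):

```python
def load_merged_llm(model_dir="./domain_alignment_weight_ft/llm_merge_lora"):
    """Load the merged LLM checkpoint produced by `xtuner convert merge`."""
    # Deferred import: transformers is only needed when actually loading.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    model = AutoModelForCausalLM.from_pretrained(model_dir, torch_dtype="auto")
    return tokenizer, model
```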
## Training

We use XTuner as our training tool, so please follow the official XTuner repository to set up the environment: https://github.com/InternLM/xtuner. Then place the `pallava` folder under the `xtuner_add` folder into the `xtuner` folder.

#### Domain Alignment
```
NPROC_PER_NODE=8 NNODES=2 PORT=12345 ADDR= NODE_RANK=0 xtuner train pallava_domain_alignment.py --deepspeed deepspeed_zero2 --seed 1024
```

#### Instruction Tuning
```
NPROC_PER_NODE=8 NNODES=2 PORT=12345 ADDR= NODE_RANK=0 xtuner train pallava_instruction_tuning.py --deepspeed deepspeed_zero2 --seed 1024
```
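With `NNODES=2`, the same command has to be launched on both machines; only `NODE_RANK` differs, and `ADDR` must point at the rank-0 host (the address below is a placeholder, not a value from our setup):

```
# On the rank-0 node (substitute your rank-0 host address):
NPROC_PER_NODE=8 NNODES=2 PORT=12345 ADDR=<rank0-host-ip> NODE_RANK=0 xtuner train pallava_domain_alignment.py --deepspeed deepspeed_zero2 --seed 1024

# On the second node, only NODE_RANK changes:
NPROC_PER_NODE=8 NNODES=2 PORT=12345 ADDR=<rank0-host-ip> NODE_RANK=1 xtuner train pallava_domain_alignment.py --deepspeed deepspeed_zero2 --seed 1024
```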

## Results

![image/png](https://cdn-uploads.huggingface.co/production/uploads/663f06e01cd68975883a353e/ng9KbUevJk5HyYONpOg2S.png)
 
## Contact