kaijun123 committed on
Commit fe32d6f · verified · 1 Parent(s): c974251

Delete vision_tower-epoch-1-lr-0.0001/val_set_result.out

vision_tower-epoch-1-lr-0.0001/val_set_result.out DELETED
@@ -1,144 +0,0 @@
- /cm/local/apps/slurm/var/spool/job24428/slurm_script: line 27: SBATCH: command not found
- /cm/local/apps/slurm/var/spool/job24428/slurm_script: line 28: SBATCH: command not found
- /home/FYP/angk0064/.conda/envs/llava-med/lib/python3.10/site-packages/transformers/utils/generic.py:441: FutureWarning: `torch.utils._pytree._register_pytree_node` is deprecated. Please use `torch.utils._pytree.register_pytree_node` instead.
-   _torch_pytree._register_pytree_node(
- /home/FYP/angk0064/.conda/envs/llava-med/lib/python3.10/site-packages/transformers/utils/generic.py:309: FutureWarning: `torch.utils._pytree._register_pytree_node` is deprecated. Please use `torch.utils._pytree.register_pytree_node` instead.
-   _torch_pytree._register_pytree_node(
- /home/FYP/angk0064/.conda/envs/llava-med/lib/python3.10/site-packages/transformers/utils/generic.py:309: FutureWarning: `torch.utils._pytree._register_pytree_node` is deprecated. Please use `torch.utils._pytree.register_pytree_node` instead.
-   _torch_pytree._register_pytree_node(
- testing out loading previously trained weights
- loading classifier
- classifier: CLIPDiseaseClassifier(
-   (mlp): Sequential(
-     (0): Linear(in_features=1024, out_features=1024, bias=True)
-     (1): ReLU()
-     (2): Dropout(p=0.5, inplace=False)
-     (3): Linear(in_features=1024, out_features=1024, bias=True)
-     (4): ReLU()
-     (5): Dropout(p=0.5, inplace=False)
-     (6): Linear(in_features=1024, out_features=56, bias=True)
-   )
- )
- vision_tower: /home/FYP/angk0064/ANGK0064/checkpoints/vision_tower-epoch-2-lr-0.0001
- self.select_feature: cls
- self.vision_tower_name: /home/FYP/angk0064/ANGK0064/checkpoints/vision_tower-epoch-2-lr-0.0001
- /home/FYP/angk0064/ANGK0064/checkpoints/vision_tower-epoch-2-lr-0.0001 is already loaded, `load_model` called again, skipping.
- vision_tower_instance: CustomCLIPVisionTower(
-   (vision_tower): CLIPVisionModel(
-     (vision_model): CLIPVisionTransformer(
-       (embeddings): CLIPVisionEmbeddings(
-         (patch_embedding): Conv2d(3, 1024, kernel_size=(14, 14), stride=(14, 14), bias=False)
-         (position_embedding): Embedding(577, 1024)
-       )
-       (pre_layrnorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
-       (encoder): CLIPEncoder(
-         (layers): ModuleList(
-           (0-23): 24 x CLIPEncoderLayer(
-             (self_attn): CLIPAttention(
-               (k_proj): Linear(in_features=1024, out_features=1024, bias=True)
-               (v_proj): Linear(in_features=1024, out_features=1024, bias=True)
-               (q_proj): Linear(in_features=1024, out_features=1024, bias=True)
-               (out_proj): Linear(in_features=1024, out_features=1024, bias=True)
-             )
-             (layer_norm1): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
-             (mlp): CLIPMLP(
-               (activation_fn): QuickGELUActivation()
-               (fc1): Linear(in_features=1024, out_features=4096, bias=True)
-               (fc2): Linear(in_features=4096, out_features=1024, bias=True)
-             )
-             (layer_norm2): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
-           )
-         )
-       )
-       (post_layernorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
-     )
-   )
- )
- vision_tower_instance: CustomCLIPVisionTower(
-   (vision_tower): CLIPVisionModel(
-     (vision_model): CLIPVisionTransformer(
-       (embeddings): CLIPVisionEmbeddings(
-         (patch_embedding): Conv2d(3, 1024, kernel_size=(14, 14), stride=(14, 14), bias=False)
-         (position_embedding): Embedding(577, 1024)
-       )
-       (pre_layrnorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
-       (encoder): CLIPEncoder(
-         (layers): ModuleList(
-           (0-23): 24 x CLIPEncoderLayer(
-             (self_attn): CLIPAttention(
-               (k_proj): Linear(in_features=1024, out_features=1024, bias=True)
-               (v_proj): Linear(in_features=1024, out_features=1024, bias=True)
-               (q_proj): Linear(in_features=1024, out_features=1024, bias=True)
-               (out_proj): Linear(in_features=1024, out_features=1024, bias=True)
-             )
-             (layer_norm1): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
-             (mlp): CLIPMLP(
-               (activation_fn): QuickGELUActivation()
-               (fc1): Linear(in_features=1024, out_features=4096, bias=True)
-               (fc2): Linear(in_features=4096, out_features=1024, bias=True)
-             )
-             (layer_norm2): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
-           )
-         )
-       )
-       (post_layernorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
-     )
-   )
- )
- len(ground_truths): 64
- ground_truths: tensor([[0., 0., 0., ..., 0., 0., 1.],
-                        [0., 0., 0., ..., 0., 0., 1.],
-                        [1., 0., 0., ..., 0., 0., 1.],
-                        ...,
-                        [0., 0., 0., ..., 0., 0., 1.],
-                        [0., 0., 0., ..., 0., 0., 1.],
-                        [0., 0., 0., ..., 0., 0., 1.]], device='cuda:0')
- len(ground_truths): 64
- ground_truths: tensor([[0., 0., 1., ..., 0., 0., 1.],
-                        [0., 0., 0., ..., 0., 0., 1.],
-                        [0., 0., 0., ..., 0., 1., 0.],
-                        ...,
-                        [1., 0., 0., ..., 0., 1., 0.],
-                        [1., 0., 0., ..., 0., 0., 1.],
-                        [0., 0., 0., ..., 0., 1., 0.]], device='cuda:0')
- len(ground_truths): 64
- ground_truths: tensor([[1., 0., 0., ..., 0., 0., 1.],
-                        [0., 0., 1., ..., 0., 1., 0.],
-                        [0., 0., 0., ..., 0., 1., 0.],
-                        ...,
-                        [0., 0., 0., ..., 0., 1., 0.],
-                        [0., 0., 0., ..., 0., 0., 1.],
-                        [0., 0., 0., ..., 0., 1., 0.]], device='cuda:0')
- len(ground_truths): 64
- ground_truths: tensor([[0., 0., 0., ..., 0., 1., 0.],
-                        [0., 0., 1., ..., 0., 1., 0.],
-                        [0., 0., 0., ..., 0., 0., 1.],
-                        ...,
-                        [0., 0., 0., ..., 0., 1., 0.],
-                        [0., 0., 0., ..., 0., 1., 0.],
-                        [0., 0., 0., ..., 0., 1., 0.]], device='cuda:0')
- len(ground_truths): 5
- ground_truths: tensor([[0., 0., 1., 0., 0., 0., 0., 1., 0., 0., 0., 1., 0., 0., 0., 1., 0., 0.,
-                         0., 1., 1., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 1., 0., 0., 0., 1.,
-                         0., 0., 1., 0., 0., 0., 0., 1., 0., 0., 0., 1., 0., 1., 0., 0., 0., 0.,
-                         1., 0.],
-                        [0., 0., 0., 1., 0., 0., 0., 1., 0., 0., 0., 1., 1., 0., 0., 0., 0., 0.,
-                         0., 1., 0., 0., 0., 1., 0., 0., 0., 1., 0., 0., 1., 0., 0., 0., 0., 1.,
-                         0., 0., 0., 1., 0., 0., 0., 1., 0., 0., 0., 1., 0., 0., 0., 1., 0., 0.,
-                         0., 1.],
-                        [0., 0., 0., 1., 0., 0., 1., 0., 0., 0., 0., 1., 0., 1., 0., 0., 0., 0.,
-                         0., 1., 0., 0., 0., 1., 0., 0., 0., 1., 0., 0., 0., 1., 0., 0., 0., 1.,
-                         0., 0., 0., 1., 0., 0., 0., 1., 0., 1., 0., 0., 0., 0., 0., 1., 0., 0.,
-                         0., 1.],
-                        [0., 0., 1., 0., 0., 0., 0., 1., 0., 1., 0., 0., 0., 0., 0., 1., 0., 0.,
-                         0., 1., 0., 0., 0., 1., 0., 0., 0., 1., 0., 0., 0., 1., 0., 0., 0., 1.,
-                         0., 0., 1., 0., 0., 0., 0., 1., 0., 0., 0., 1., 0., 0., 0., 1., 0., 0.,
-                         0., 1.],
-                        [0., 0., 1., 0., 0., 0., 0., 1., 0., 0., 0., 1., 0., 0., 0., 1., 0., 1.,
-                         0., 0., 0., 0., 0., 1., 0., 0., 0., 1., 0., 0., 1., 0., 0., 0., 0., 1.,
-                         0., 1., 0., 0., 0., 0., 0., 1., 0., 0., 1., 0., 0., 1., 0., 0., 0., 0.,
-                         0., 1.]], device='cuda:0')
- accuracy score: 0.891351943076081
- precision score: 0.7949743003997716
- recall score: 0.7619047619047619
- f1 score: 0.7780883174958076
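
For reference, the `CLIPDiseaseClassifier` head printed in the deleted log corresponds to a module along these lines. This is a sketch reconstructed from the repr above: the class name, the 1024-dim input, the two hidden ReLU/Dropout(0.5) layers, and the 56 output labels are taken from the log; the constructor signature and parameter names are assumptions.

```python
import torch
import torch.nn as nn


class CLIPDiseaseClassifier(nn.Module):
    """MLP head mapping a 1024-d CLIP CLS feature to 56 disease logits,
    matching the Sequential printed in the deleted log."""

    def __init__(self, hidden_dim: int = 1024, num_labels: int = 56,
                 dropout: float = 0.5):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),   # (0)
            nn.ReLU(),                           # (1)
            nn.Dropout(p=dropout),               # (2)
            nn.Linear(hidden_dim, hidden_dim),   # (3)
            nn.ReLU(),                           # (4)
            nn.Dropout(p=dropout),               # (5)
            nn.Linear(hidden_dim, num_labels),   # (6)
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.mlp(features)


# A batch of 4 CLS features produces a [4, 56] logit matrix.
clf = CLIPDiseaseClassifier()
logits = clf(torch.randn(4, 1024))
```

The log's `self.select_feature: cls` suggests the 1024-d input is the CLS token from the CLIP-L/14-336 vision tower whose repr follows the classifier in the log.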
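
The four scores at the end of the log can be reproduced from multi-hot ground-truth and prediction matrices like the ones printed above. A minimal sketch, assuming micro-averaging over all label positions and an already-thresholded prediction matrix (the thresholding and averaging mode used by the actual evaluation script are not shown in the log):

```python
def micro_prf1(y_true, y_pred):
    """Micro-averaged precision, recall and F1 for multi-hot label matrices.

    y_true, y_pred: nested lists of 0/1 entries (e.g. a CUDA tensor moved to
    CPU via .tolist()) with shape [num_samples, num_labels].
    """
    tp = fp = fn = 0
    for row_true, row_pred in zip(y_true, y_pred):
        for t, p in zip(row_true, row_pred):
            if p and t:
                tp += 1   # predicted positive, actually positive
            elif p:
                fp += 1   # predicted positive, actually negative
            elif t:
                fn += 1   # predicted negative, actually positive
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1


# Tiny worked example: 2 samples, 3 labels -> 2 TP, 1 FP, 1 FN,
# so precision, recall and F1 all come out to 2/3.
p, r, f = micro_prf1([[1, 0, 1], [0, 1, 0]], [[1, 0, 0], [0, 1, 1]])
```

Note that the logged f1 (0.7780883…) is exactly the harmonic mean of the logged precision (0.7949743…) and recall (0.7619048…), which is consistent with a single aggregated precision/recall pair (e.g. micro-averaging) rather than a per-class macro average.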