Dr. Jorge Abreu Vicente committed
Commit ce2455b · 1 Parent(s): e03a5e7

update model card README.md

Files changed (1):
1. README.md +13 -32
README.md CHANGED
@@ -1,35 +1,15 @@
  ---
- license: mit
  tags:
  - generated_from_trainer
  datasets:
  - source_data_nlp
- widget:
- - text: "Figure 2A. HEK293T cells were transfected with MYC-FOXP3 and FLAG-USP44 encoding expression constructs using Polyethylenimine. 48hrs post-transfection, cells were harvested, lysed, and anti-FLAG or anti-MYC antibody coated beads were used to immunoprecipitate the given labeled protein along with its binding partner. Co-IP' ed proteins were subjected to SDS PAGE followed by immunoblot analysis. Antibodies recognizing FLAG or MYC tags were used to probe for USP44 and FOXP3, respectively. B. Endogenous co-IP of USP44 and FOXP3 in murine iTregs. iTregs were generated as in Fig. 1 from naïve CD4+T cells FACS isolated from pooled suspensions of the lymph node and spleen cells of wild type C57BL/6 mice (n = 2-3 / experiment). iTregs were lysed and key proteins were immunoprecipitated using either anti-USP44 (right panel) or anti-FOXP3 (left panel) antibody. Proteins pulled-down in this experiment were then resolved and analyzed by immunoblot using anti-FOXP3 or anti-USP44 antibodies. C. Endogenous co-IP of USP44 and FOXP3 in murine nTregs. nTregs (CD4+CD25high) isolated by FACS were activated by anti-CD3 and anti-CD28 (1 and 4 ug/ml, respectively) overnight in the presence of IL-2 (100 U/ml). The cells were lysed and proteins were immunoprecipitated using either anti-Foxp3 (left panel) or anti-Usp44 (right panel). Proteins pulled down in this experiment were then resolved and identified with the indicated antibodies. D . Naïve murine CD4+T cells were isolated by FACS from lymph node and spleen cell suspension of USP44fl/fl CD4Cre+ mice and that of their wild type littermates (USP44fl/fl CD4Cre-mice; n = 2-3 / group / experiment) . iTreg cells were generated from these mice as described for Fig. 1 before incubation on a microscope slide pre-coated with poly-L lysine for 1h. Adhered cells were then fixed by PFA for 0.5 followed by blocking with 1% BSA for 1h, then incubation with the specified antibodies. Representative confocal microscopy images (40X) were visualized for endogenous USP44 (red) and FOXP3 Baxter et al (). DAPI was used to visualize cell nuclei (blue); scale bar 50μm."
- matrics:
+ metrics:
  - precision
  - recall
  - f1
  model-index:
  - name: sd-panelization-v2
- results:
- - task:
- name: Token Classification
- type: token-classification
- dataset:
- name: source_data_nlp
- type: source_data_nlp
- args: PANELIZATION
- metrics:
- - name: Precision
- type: precision
- value: 0.9120703437250199
- - name: Recall
- type: recall
- value: 0.9449275362318841
- - name: F1
- type: f1
- value: 0.9282082570673175
+ results: []
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -37,13 +17,13 @@ should probably proofread and complete it, then remove this comment. -->

  # sd-panelization-v2

- This model is a fine-tuned version of [michiyasunaga/BioLinkBERT-large](https://huggingface.co/michiyasunaga/BioLinkBERT-large) on the source_data_nlp dataset.
+ This model was trained from scratch on the source_data_nlp dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.0051
- - Accuracy Score: 0.9981
- - Precision: 0.9121
- - Recall: 0.9449
- - F1: 0.9282
+ - Loss: 0.0064
+ - Accuracy Score: 0.9982
+ - Precision: 0.9689
+ - Recall: 0.9905
+ - F1: 0.9795

  ## Model description

@@ -63,18 +43,19 @@ More information needed

  The following hyperparameters were used during training:
  - learning_rate: 5e-05
- - train_batch_size: 32
+ - train_batch_size: 64
  - eval_batch_size: 256
  - seed: 42
  - optimizer: Adafactor
  - lr_scheduler_type: linear
- - num_epochs: 1.0
+ - num_epochs: 2.0

  ### Training results

  | Training Loss | Epoch | Step | Validation Loss | Accuracy Score | Precision | Recall | F1 |
  |:-------------:|:-----:|:----:|:---------------:|:--------------:|:---------:|:------:|:------:|
- | 0.0048 | 1.0 | 431 | 0.0051 | 0.9981 | 0.9121 | 0.9449 | 0.9282 |
+ | 0.0074 | 1.0 | 216 | 0.0085 | 0.9977 | 0.9670 | 0.9785 | 0.9727 |
+ | 0.0049 | 2.0 | 432 | 0.0064 | 0.9982 | 0.9689 | 0.9905 | 0.9795 |


  ### Framework versions
@@ -82,4 +63,4 @@ The following hyperparameters were used during training:
  - Transformers 4.20.0
  - Pytorch 1.11.0a0+bfe5ad2
  - Datasets 1.17.0
- - Tokenizers 0.12.1
+ - Tokenizers 0.12.1
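
As a quick sanity check on the card's figures: the F1 values reported in both the old and new versions are consistent with being the harmonic mean of the reported precision and recall (a sketch assuming the standard token-classification metric definitions; the `f1` helper below is illustrative, not part of the repo):

```python
# Sanity check: F1 should equal the harmonic mean of precision and recall.
def f1(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

# Full-precision values taken from the removed model-index block (old card):
print(f1(0.9120703437250199, 0.9449275362318841))  # ~0.92820826, matching the card's F1
```

The same relation holds for the rounded values in the updated training-results table, up to rounding of the displayed digits.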