yezdata committed
Commit e817e4f · verified · 1 parent: aa14d4e

Update README.md

Files changed (1): README.md +3 −3
README.md CHANGED
````diff
@@ -18,7 +18,7 @@ metrics:
 - recall
 - f1
 model-index:
-- name: EmCoder (v1)
+- name: EmCoder
   results:
   - task:
       type: text-classification
@@ -56,14 +56,14 @@ EmCoder is optimized for **MC Dropout inference**.
 EmCoder achieves competitive F1-scores while being ~35% smaller than RoBERTa-base and ~45% smaller than ModernBERT, offering a superior efficiency-to-uncertainty ratio.
 | Model | Precision | Recall | F1-Score | Params |
 | :--- | :--- | :--- | :--- | :--- |
-| **EmCoder (v1)** | **0.408** | **0.495** | **0.440** | **82.1M** |
+| **EmCoder** | **0.408** | **0.495** | **0.440** | **82.1M** |
 | Google BERT (Original) | 0.400 | 0.630 | 0.460 | 110M |
 | RoBERTa-base | 0.575 | 0.396 | 0.450 | 125M |
 | ModernBERT-base | 0.652 | 0.443 | 0.500 | 149M |


 ## How to use
-EmCoder v1.0 uses the `roberta-base` tokenizer for correct token-to-embedding mapping.
+EmCoder uses the `roberta-base` tokenizer for correct token-to-embedding mapping.
 ### 1. Setup & Tokenization
 ```python
 import torch
````
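The README's "Setup & Tokenization" snippet is cut off here after `import torch`, so its actual code is not shown. As background for the MC Dropout inference the card says EmCoder is optimized for, here is a minimal generic sketch in PyTorch: the classifier below is an invented toy model, not EmCoder's architecture; only the general technique (keeping dropout active and averaging stochastic forward passes) is illustrated.

```python
import torch
import torch.nn as nn

# Toy stand-in model for illustration only (not EmCoder's architecture).
class TinyClassifier(nn.Module):
    def __init__(self, dim=16, n_classes=3, p=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, 32),
            nn.ReLU(),
            nn.Dropout(p),          # dropout layer reused at inference time
            nn.Linear(32, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def mc_dropout_predict(model, x, n_passes=20):
    """Monte Carlo Dropout: run several stochastic forward passes and
    return the mean prediction plus a per-class uncertainty estimate."""
    model.train()  # train() keeps dropout active; gradients stay disabled below
    with torch.no_grad():
        probs = torch.stack(
            [model(x).softmax(dim=-1) for _ in range(n_passes)]
        )
    return probs.mean(dim=0), probs.std(dim=0)

model = TinyClassifier()
mean, std = mc_dropout_predict(model, torch.randn(4, 16))
```

`mean` is the averaged class distribution per input and `std` the spread across passes; inputs where `std` is large are the ones the model is uncertain about, which is the property the efficiency-to-uncertainty comparison in the table trades on.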