Text Classification
Transformers
Safetensors
English
emcoder
feature-extraction
emotion-recognition
bayesian-deep-learning
mc-dropout
uncertainty-quantification
multi-label-classification
custom_code
Instructions to use yezdata/EmCoder with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use yezdata/EmCoder with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="yezdata/EmCoder", trust_remote_code=True)

# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("yezdata/EmCoder", trust_remote_code=True, dtype="auto")
```
- Notebooks
- Google Colab
- Kaggle
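The tags mark EmCoder as a multi-label emotion classifier, so its custom head presumably emits one logit per emotion and decodes each with an independent sigmoid rather than a softmax. A minimal decoding sketch under that assumption (the label names and the 0.5 threshold are hypothetical, not taken from the model card):

```python
import numpy as np

LABELS = ["joy", "sadness", "anger", "fear"]  # hypothetical label set
THRESHOLD = 0.5                               # hypothetical decision threshold

def decode_multilabel(logits):
    """Independent sigmoid per label, then threshold: several
    emotions may fire at once, unlike single-label softmax."""
    probs = 1.0 / (1.0 + np.exp(-np.asarray(logits, dtype=float)))
    return {lab: float(p) for lab, p in zip(LABELS, probs) if p >= THRESHOLD}

print(decode_multilabel([2.2, -1.5, 0.3, -3.0]))
# → both "joy" and "anger" fire, since sigmoid(2.2) ≈ 0.90 and sigmoid(0.3) ≈ 0.57
```

The real head's labels and threshold would come from the model's `config.json`; this only illustrates why multi-label outputs are thresholded per class.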
Update README.md

README.md (changed sections, from the diff):

> @@ -61,7 +61,9 @@ EmCoder achieves competitive F1-scores while being ~35% smaller than RoBERTa-bas

## How to use

### 1. Setup & Tokenization

```python
from transformers import AutoTokenizer
```

> @@ -98,7 +100,7 @@ uncertainty = probs_all.std(dim=0) # Epistemic Uncertainty (Standard Deviation)

## Model Architecture

### Optimization

## Workflow

### Note