Tags: Text Classification · Transformers · PyTorch · Turkish · bert · deprem-clf-v1 · Eval Results (legacy) · text-embeddings-inference
How to use deprem-ml/deprem_bert_128k with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="deprem-ml/deprem_bert_128k")

# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("deprem-ml/deprem_bert_128k")
model = AutoModelForSequenceClassification.from_pretrained("deprem-ml/deprem_bert_128k")
```
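The eval results below report per-class metrics and a samples average, so this is a multilabel classifier. A minimal usage sketch under that assumption; the example sentence and the `top_k=None` / `function_to_apply="sigmoid"` settings are illustrative, not from the model card:

```python
# Illustrative sketch: score every label with sigmoid activations,
# then keep the labels that clear the card's 0.40 decision threshold.
from transformers import pipeline

pipe = pipeline(
    "text-classification",
    model="deprem-ml/deprem_bert_128k",
    top_k=None,                   # return a score for every label
    function_to_apply="sigmoid",  # multilabel: independent per-label probabilities
)

results = pipe(["Ankara'da gıda ve su yardımına ihtiyaç var."])  # batch of one
predicted = [r["label"] for r in results[0] if r["score"] >= 0.40]
print(predicted)
```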
Eval Results
| class | precision | recall | f1-score | support |
|---|---|---|---|---|
| Alakasiz | 0.87 | 0.91 | 0.89 | 734 |
| Barinma | 0.79 | 0.89 | 0.84 | 207 |
| Elektronik | 0.69 | 0.83 | 0.75 | 130 |
| Giysi | 0.71 | 0.81 | 0.76 | 94 |
| Kurtarma | 0.82 | 0.85 | 0.83 | 362 |
| Lojistik | 0.57 | 0.67 | 0.62 | 112 |
| Saglik | 0.68 | 0.85 | 0.75 | 108 |
| Su | 0.56 | 0.76 | 0.64 | 78 |
| Yagma | 0.60 | 0.77 | 0.68 | 31 |
| Yemek | 0.71 | 0.89 | 0.79 | 117 |
| micro avg | 0.77 | 0.86 | 0.81 | 1973 |
| macro avg | 0.70 | 0.82 | 0.76 | 1973 |
| weighted avg | 0.78 | 0.86 | 0.82 | 1973 |
| samples avg | 0.83 | 0.88 | 0.84 | 1973 |
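The table has the layout of scikit-learn's multilabel `classification_report`. A minimal sketch of producing such a report, where the random `y_true` / `y_pred` indicator matrices are stand-ins for the private evaluation set:

```python
# Illustrative only: random indicator matrices replace the real
# deprem_private_dataset_v1_2 labels and model predictions.
import numpy as np
from sklearn.metrics import classification_report

labels = ["Alakasiz", "Barinma", "Elektronik", "Giysi", "Kurtarma",
          "Lojistik", "Saglik", "Su", "Yagma", "Yemek"]
rng = np.random.default_rng(42)
y_true = rng.integers(0, 2, size=(1973, len(labels)))
y_pred = rng.integers(0, 2, size=(1973, len(labels)))

print(classification_report(y_true, y_pred, target_names=labels, zero_division=0))
```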
Training Params:
```python
{'per_device_train_batch_size': 32,
 'per_device_eval_batch_size': 32,
 'learning_rate': 5.8679699888213376e-05,
 'weight_decay': 0.03530961718117487,
 'num_train_epochs': 4,
 'lr_scheduler_type': 'cosine',
 'warmup_steps': 40,
 'seed': 42,
 'fp16': True,
 'load_best_model_at_end': True,
 'metric_for_best_model': 'macro f1',
 'greater_is_better': True}
```
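These keys map one-to-one onto `transformers.TrainingArguments`. A minimal sketch under assumptions: `output_dir` and the epoch-level eval/save strategies are not stated in the card (the strategies are needed for `load_best_model_at_end` to work), and `"macro f1"` must match a key returned by your `compute_metrics` function:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="deprem_bert_128k",     # assumption: not stated in the card
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    learning_rate=5.8679699888213376e-05,
    weight_decay=0.03530961718117487,
    num_train_epochs=4,
    lr_scheduler_type="cosine",
    warmup_steps=40,
    seed=42,
    fp16=True,
    eval_strategy="epoch",             # assumption; `evaluation_strategy` in older transformers
    save_strategy="epoch",             # assumption: must match the eval strategy
    load_best_model_at_end=True,
    metric_for_best_model="macro f1",  # key produced by compute_metrics
    greater_is_better=True,
)
```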
Threshold:
- Best Threshold: 0.40
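A minimal sketch of applying this threshold to raw model outputs; the example sentence is an illustrative assumption:

```python
# Sigmoid the logits, then keep every label whose probability
# clears the 0.40 threshold.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("deprem-ml/deprem_bert_128k")
model = AutoModelForSequenceClassification.from_pretrained("deprem-ml/deprem_bert_128k")

inputs = tokenizer("Enkaz altında kalanlar var, kurtarma ekibi lazım.", return_tensors="pt")
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]

predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p >= 0.40]
print(predicted)
```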
Class Loss Weights:
- Same as Anıl's approach:
```python
[1.0,
 1.5167249178108022,
 1.7547338578655642,
 1.9610520059358458,
 1.8684086209021484,
 1.8019018017117145,
 2.110648663094536,
 3.081208739200435,
 1.7994815143101963]
```
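One common way to use such per-class weights is as `pos_weight` in a weighted BCE loss for multilabel training. A minimal sketch under that assumption; whether this matches Anıl's exact setup is not stated in the card, and note that `pos_weight` needs exactly one entry per label in the model head (the card lists nine values):

```python
# Assumption: the weights are applied as pos_weight in BCEWithLogitsLoss.
import torch

class_weights = torch.tensor([
    1.0,
    1.5167249178108022,
    1.7547338578655642,
    1.9610520059358458,
    1.8684086209021484,
    1.8019018017117145,
    2.110648663094536,
    3.081208739200435,
    1.7994815143101963,
])
loss_fn = torch.nn.BCEWithLogitsLoss(pos_weight=class_weights)

logits = torch.randn(4, len(class_weights))                    # dummy batch of logits
targets = torch.randint(0, 2, (4, len(class_weights))).float() # dummy multilabel targets
loss = loss_fn(logits, targets)
```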
Evaluation results
- recall on deprem_private_dataset_v1_2 (self-reported): 0.820
- f1 on deprem_private_dataset_v1_2 (self-reported): 0.760