stuartmesham/roberta-large_spell_10k_3_p3
Tags: Token Classification · Transformers · PyTorch · roberta · Generated from Trainer
Branch: refs/pr/1
roberta-large_spell_10k_3_p3 (2.93 GB, 1 contributor)
History: 3 commits
Latest commit: acbbba9 (verified) by SFconvertbot, "Adding `safetensors` variant of this model", 11 months ago
File                          Size       Last commit message                           Date
.gitattributes                1.43 kB    initial commit                                about 3 years ago
README.md                     1.79 kB    Upload with huggingface_hub                   about 3 years ago
added_tokens.json             22 Bytes   Upload with huggingface_hub                   about 3 years ago
all_results.json              410 Bytes  Upload with huggingface_hub                   about 3 years ago
config.json                   606 kB     Upload with huggingface_hub                   about 3 years ago
eval_results.json             231 Bytes  Upload with huggingface_hub                   about 3 years ago
inference_tweak_params.json   60 Bytes   Upload with huggingface_hub                   about 3 years ago
merges.txt                    456 kB     Upload with huggingface_hub                   about 3 years ago
model.safetensors             1.46 GB    Adding `safetensors` variant of this model    11 months ago
pytorch_model.bin             1.46 GB    Upload with huggingface_hub                   about 3 years ago
special_tokens_map.json       331 Bytes  Upload with huggingface_hub                   about 3 years ago
tokenizer.json                2.11 MB    Upload with huggingface_hub                   about 3 years ago
tokenizer_config.json         438 Bytes  Upload with huggingface_hub                   about 3 years ago
train_results.json            199 Bytes  Upload with huggingface_hub                   about 3 years ago
trainer_state.json            2.5 kB     Upload with huggingface_hub                   about 3 years ago
training_args.bin             3.44 kB    Upload with huggingface_hub                   about 3 years ago
vocab.json                    798 kB     Upload with huggingface_hub                   about 3 years ago