pere/norwegian-roberta-base-highlr
Fill-Mask · Transformers · PyTorch · JAX · TensorBoard · roberta
Branch: main · 1.75 GB · 1 contributor · History: 48 commits
Latest commit: pere, "restored to original finished training" (ff287ff, about 4 years ago)
File                                                            Size       Last commit message                        Age
.gitattributes                                                  781 Bytes  First try larger lr and bs                 over 4 years ago
README.md                                                       75 Bytes   Create README.md                           over 4 years ago
config.json                                                     701 Bytes  test                                       about 4 years ago
create_config.py                                                163 Bytes  First try larger lr and bs                 over 4 years ago
events.out.tfevents.1629033809.t1v-n-358ff5d1-w-0.179703.3.v2   40 Bytes   Saving weights and logs of step 50001      over 4 years ago
events.out.tfevents.1629051212.t1v-n-358ff5d1-w-0.196309.3.v2   77.5 MB    updated training script with auth_token    over 4 years ago
events.out.tfevents.1631046163.t1v-n-358ff5d1-w-0.4050701.3.v2  170 MB     final 128                                  over 4 years ago
flax_model.msgpack                                              499 MB     restored to original finished training     about 4 years ago
flax_model_final128.msgpack                                     499 MB     final 128                                  over 4 years ago
generate_pytorch_model.py                                       349 Bytes  new pytorch model                          about 4 years ago
merges.txt                                                      476 kB     tokenizers                                 about 4 years ago
pytorch_model.bin                                               499 MB     restored to original finished training     about 4 years ago
run.sh                                                          814 Bytes  First try larger lr and bs                 over 4 years ago
run_mlm_flax.py                                                 29.8 kB    First try larger lr and bs                 over 4 years ago
run_mlm_flax_stream.py                                          26.8 kB    updated training script with auth_token    over 4 years ago
run_recover_1350_stream.sh                                      736 Bytes  Saving weights and logs of step 50001      over 4 years ago
run_recover_850_stream.sh                                       746 Bytes  First try larger lr and bs                 over 4 years ago
run_stream.sh                                                   684 Bytes  updated ds                                 over 4 years ago
special_tokens_map.json                                         239 Bytes  tokenizers                                 about 4 years ago
tokenizer.json                                                  1.39 MB    tokenizers                                 about 4 years ago
tokenizer_config.json                                           291 Bytes  tokenizers                                 about 4 years ago
train_tokenizer.py                                              820 Bytes  First try larger lr and bs                 over 4 years ago
vocab.json                                                      818 kB     tokenizers                                 about 4 years ago

Note: pytorch_model.bin is a pickle stream; the Hub scanner detected 4 pickle imports in it: torch.LongStorage, collections.OrderedDict, torch.FloatStorage, torch._utils._rebuild_tensor_v2. All other scanned files are flagged "Safe".
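The "Detected Pickle imports" notice on pytorch_model.bin exists because loading a pickle resolves global references (and can run arbitrary code), so the Hub statically scans each stream for the names it would import. The sketch below approximates that idea with the standard-library pickletools module; it is an illustration of how such a scanner can work, not Hugging Face's actual implementation.

```python
import pickle
import pickletools
from collections import OrderedDict

def pickle_imports(data: bytes) -> set:
    """Return the module.attr names a pickle stream would import when loaded.

    Scans opcodes without unpickling anything: GLOBAL (protocol <= 3) carries
    "module name" as one argument; STACK_GLOBAL (protocol >= 4) consumes the
    two most recently pushed strings as module and attribute.
    """
    found = set()
    strings = []  # recent string pushes; STACK_GLOBAL reads the last two
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "BINUNICODE8", "UNICODE"):
            strings.append(arg)
        if opcode.name == "GLOBAL":
            module, name = arg.split(" ", 1)
            found.add(f"{module}.{name}")
        elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
            found.add(f"{strings[-2]}.{strings[-1]}")
    return found

# An OrderedDict pickles with a reference to its class, which a
# scanner reports as an import (cf. collections.OrderedDict above).
print(pickle_imports(pickle.dumps(OrderedDict(a=1))))
```

Running the same scan over pytorch_model.bin itself would surface class references like the torch storage types listed in the note above, without ever executing the pickle.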