Model Card for cglez/bert-s140-uncased.v1

A pretrained BERT using .

Model Details

Model Description

An MLM-only pretrained BERT-base using .

Model Checkpoints

[More Information Needed]

Model Sources

  • Paper: [More Information Needed]

Uses

See https://huggingface.co/google-bert/bert-base-uncased#intended-uses--limitations.
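As with the upstream bert-base-uncased model, an MLM-only checkpoint like this one can be used directly for masked-token prediction. A minimal sketch using the transformers fill-mask pipeline, assuming the checkpoint is published on the Hugging Face Hub under cglez/bert-s140-uncased.v1:

```python
from transformers import pipeline

# Hub id of this checkpoint (assumption: it is publicly available on the Hub).
model_id = "cglez/bert-s140-uncased.v1"

def top_fill_mask_predictions(text: str, k: int = 5) -> list[str]:
    """Return the top-k candidate tokens for the [MASK] position in `text`."""
    fill_mask = pipeline("fill-mask", model=model_id)
    return [pred["token_str"] for pred in fill_mask(text, top_k=k)]

if __name__ == "__main__":
    # The uncased tokenizer lower-cases input, so casing does not matter here.
    print(top_fill_mask_predictions("hello i'm a [MASK] model."))
```

Since the model was pretrained with masked language modeling only (no next-sentence-prediction head), fill-mask is the natural direct use; downstream tasks would require fine-tuning.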

Checkpoint Use

[More Information Needed]

Bias, Risks, and Limitations

See https://huggingface.co/google-bert/bert-base-uncased#limitations-and-bias.

Training Details

See https://huggingface.co/google-bert/bert-base-uncased#training-procedure.

Training Data

[More Information Needed]

Preprocessing [optional]

[More Information Needed]

Training Hyperparameters

  • Training regime: fp16 mixed precision
  • Batch size: 32
  • Gradient accumulation steps: 3
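With a per-device batch size of 32 and 3 gradient accumulation steps, each optimizer update sees an effective batch of 96 examples (assuming a single device, as a single V100 is listed under Environmental Impact; multiply by the device count otherwise):

```python
# Effective batch size = per-device batch × gradient-accumulation steps × devices.
per_device_batch_size = 32
gradient_accumulation_steps = 3
num_devices = 1  # assumption: a single GPU; the card does not state the count

effective_batch_size = (
    per_device_batch_size * gradient_accumulation_steps * num_devices
)
print(effective_batch_size)  # → 96
```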

Environmental Impact

  • Hardware Type: NVIDIA Tesla V100 PCIE 32GB
  • Hours used: [More Information Needed]
  • Cluster Provider: Artemisa
  • Compute Region: EU
  • Carbon Emitted: [More Information Needed]

Citation

BibTeX:

[More Information Needed]

Model Format

  • Format: Safetensors
  • Model size: 0.1B params
  • Tensor type: F32

Paper for cglez/bert-s140-uncased.v1