Instructions to use oracat/bert-paper-classifier with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use oracat/bert-paper-classifier with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="oracat/bert-paper-classifier")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("oracat/bert-paper-classifier")
model = AutoModelForSequenceClassification.from_pretrained("oracat/bert-paper-classifier")
```

- Notebooks
- Google Colab
- Kaggle
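Below is a brief usage sketch that ties the two Transformers snippets above together. The paper title is made up for illustration, and the label names in the comments are placeholders: the actual labels depend on the categories the classifier was trained on (a subset of PubMed).

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

model_id = "oracat/bert-paper-classifier"

# Hypothetical paper title, used purely as an illustration
title = "Deep learning approaches for protein structure prediction"

# 1) High-level pipeline: returns a list of {label, score} dicts
pipe = pipeline("text-classification", model=model_id)
print(pipe(title))  # e.g. [{'label': '<category>', 'score': 0.97}]

# 2) Direct loading: tokenize, run the model, map the top logit to a label
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer(title, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
predicted = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted])  # placeholder output; labels come from the model config
```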
Update README.md
README.md CHANGED
```diff
@@ -21,6 +21,8 @@ So far only a subset of the PubMed dataset has been used for training. Future im
 
 ## Training procedure
 
+The code for the model fine-tuning can be found [in the respective notebook](https://huggingface.co/oracat/bert-paper-classifier/blob/main/finetuning-pubmed.ipynb).
+
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
```