How to use robertsw/tmp_trainer with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-classification", model="robertsw/tmp_trainer")
pipe("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png")
```
```python
# Load model directly
from transformers import AutoImageProcessor, AutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("robertsw/tmp_trainer")
model = AutoModelForImageClassification.from_pretrained("robertsw/tmp_trainer")
```
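The direct-load snippet above stops at loading the processor and model. A minimal, self-contained sketch of running inference with them might look like the following; the image path is a placeholder, and the printed label depends on the model's `id2label` config, which this card does not document:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("robertsw/tmp_trainer")
model = AutoModelForImageClassification.from_pretrained("robertsw/tmp_trainer")

# "example.png" is a placeholder path; substitute any RGB image.
image = Image.open("example.png").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = logits.argmax(-1).item()
# Label names come from the model config; the card does not list the classes.
print(model.config.id2label[predicted_id])
```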
# tmp_trainer
This model was trained from scratch on the imagefolder dataset.
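The card gives no further detail on the data. For context only, an imagefolder dataset is typically a directory of class-named subfolders loaded with the datasets library; a rough sketch, with a placeholder path, would be:

```python
from datasets import load_dataset

# Placeholder layout: path/to/images/<class_name>/<image files>
dataset = load_dataset("imagefolder", data_dir="path/to/images")
print(dataset["train"].features)  # an "image" column plus a "label" ClassLabel
```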
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
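For reference, here is a hedged sketch of how these values map onto Transformers' `TrainingArguments`; the output directory is an assumption, and the Adam betas/epsilon and linear schedule listed above are the Trainer defaults:

```python
from transformers import TrainingArguments

# Sketch only: restates the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="tmp_trainer",      # assumed from the model name
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```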
### Framework versions
- Transformers 4.41.2
- Pytorch 2.2.0
- Datasets 2.19.2
- Tokenizers 0.19.1