```python
# Load model directly
from transformers import AutoModelForImageClassification

model = AutoModelForImageClassification.from_pretrained(
    "DunnBC22/van-base-Brain_Tumors_Image_Classification", dtype="auto"
)
```

This model is a fine-tuned version of Visual-Attention-Network/van-base.
It achieves the following results on the evaluation set:
This project is part of a comparison of 17 transformer models.
Click here to see the README markdown file for the full project.
Training results:
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Weighted F1 | Micro F1 | Macro F1 | Weighted Recall | Micro Recall | Macro Recall | Weighted Precision | Micro Precision | Macro Precision |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1.3357 | 1.0 | 180 | 1.5273 | 0.7183 | 0.6631 | 0.7183 | 0.6695 | 0.7183 | 0.7183 | 0.7058 | 0.8219 | 0.7183 | 0.8420 |
| 1.3357 | 2.0 | 360 | 1.9359 | 0.7792 | 0.7314 | 0.7792 | 0.7411 | 0.7792 | 0.7792 | 0.7764 | 0.8467 | 0.7792 | 0.8636 |
| 0.1229 | 3.0 | 540 | 1.7847 | 0.7919 | 0.7588 | 0.7919 | 0.7665 | 0.7919 | 0.7919 | 0.7865 | 0.8505 | 0.7919 | 0.8675 |
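The table above reports three averaging schemes for F1, precision, and recall. As a rough sketch of how they differ (pure Python, with hypothetical toy labels rather than this model's data): macro averages per-class scores equally, weighted weights them by class support, and micro pools true/false positive counts globally, which for single-label classification makes micro F1 equal to accuracy.

```python
from collections import Counter

def per_class_stats(y_true, y_pred, labels):
    """Per-class (F1, TP, FP, FN) computed from label lists."""
    stats = {}
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        stats[c] = (f1, tp, fp, fn)
    return stats

def averaged_f1(y_true, y_pred):
    """Return (macro, micro, weighted) F1 for a multiclass problem."""
    labels = sorted(set(y_true) | set(y_pred))
    stats = per_class_stats(y_true, y_pred, labels)
    support = Counter(y_true)
    # Macro: unweighted mean of per-class F1 scores.
    macro = sum(f1 for f1, *_ in stats.values()) / len(labels)
    # Weighted: per-class F1 weighted by class support.
    weighted = sum(stats[c][0] * support[c] for c in labels) / len(y_true)
    # Micro: F1 computed from globally pooled TP/FP/FN counts.
    tp = sum(s[1] for s in stats.values())
    fp = sum(s[2] for s in stats.values())
    fn = sum(s[3] for s in stats.values())
    micro = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
    return macro, micro, weighted

# Toy example (hypothetical labels, not from the brain-tumor dataset):
macro, micro, weighted = averaged_f1([0, 0, 0, 1, 2], [0, 1, 0, 1, 1])
```

With these toy labels the three averages diverge (macro ≈ 0.433, weighted = 0.58, micro = 0.6), which mirrors the gaps between the macro, weighted, and micro columns in the table.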
This model is a fine-tuned derivative of a pretrained model. Users must comply with the original model license.
This model was fine-tuned on third-party datasets which may have separate licenses or usage restrictions.
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-classification", model="DunnBC22/van-base-Brain_Tumors_Image_Classification")
pipe("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png")
```