Unsloth Model Card
README.md CHANGED

@@ -1,109 +1,21 @@
Previous version:

---
tags:
- text-generation-inference
- transformers
- unsloth
- mllama
- trl
license: apache-2.0
language:
- en
---

# DoctorIA-XRAD-R-11B: Revolutionizing Medical Imaging with AI

## Overview

DoctorIA-XRAD-R-11B specializes in automated medical image analysis, enabling healthcare professionals to:

- Detect abnormalities in chest X-rays, brain MRIs, and other imaging modalities.
- Segment regions of interest (e.g., tumors, lesions) with high precision.
- Assist in preliminary diagnostic reasoning by integrating clinical data with imaging findings.

Our mission is to empower healthcare providers and patients alike by providing cutting-edge AI-driven diagnostic tools.

---
## Key Features

- **Disease Detection**: Accurate identification of conditions like pneumonia, tuberculosis, and brain tumors.
- **Segmentation Capabilities**: Precise delineation of regions of interest for quantifying disease extent.
- **Interoperability**: Integrates with existing healthcare systems and tools.
- **Scalability**: Optimized for deployment in resource-constrained environments, ensuring accessibility even in underserved areas.

---
## Benchmarks

DoctorIA-XRAD-R-11B has been evaluated on several benchmarks to validate its performance and reliability:

- **CheXpert**: Achieved **X% sensitivity** and **Y% specificity** for chest X-ray analysis.
- **BraTS**: Demonstrated strong performance in brain tumor segmentation tasks.

For more details, refer to our [paper](https://example.com/paper).

---
## Usage

### 1. Install Dependencies

```bash
pip install transformers torch pillow requests
```

### 2. Load the Model

```python
import requests
import torch
from PIL import Image
from transformers import AutoFeatureExtractor, AutoModelForImageClassification

model_name = "doctoria/DoctorIA-XRAD-R-11B"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_name)
model = AutoModelForImageClassification.from_pretrained(model_name)

# Example input: download a chest X-ray and convert it to RGB
url = "https://example.com/chest-xray.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

# Preprocess the image and run inference without tracking gradients
inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

predicted_class = outputs.logits.argmax(-1).item()
print(f"Predicted Class: {predicted_class}")
```

---
## Contributing & Feedback

We welcome contributions from the community to enhance these models further. Whether it’s through bug reports, feature requests, or pull requests, your input will help us refine and expand the capabilities of these tools.

Feel free to reach out via GitHub Issues or Discussions on the respective Hugging Face repositories. Together, let’s push the boundaries of AI in healthcare!

---
## License

The codebase is released under the **Apache 2.0 License**.
The model weights are released under the **CC-BY 4.0 License**.

---
## Citation

If you use DoctorIA-XRAD-R-11B in your research or projects, please cite our work:

```bibtex
@techreport{doctoria2025,
  title={DoctorIA-XRAD-R-11B: Enhancing Radiological Diagnostics with AI},
  author={Jad Tounsi El Azzoiani and Team DoctorIA},
  year={2025},
  eprint={XXXX.XXXXX},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://example.com/paper},
}
```
New version:

---
base_model: unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mllama
license: apache-2.0
language:
- en
---

# Uploaded finetuned model

- **Developed by:** doctoria
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit

This mllama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)