Update README.md

README.md (changed):
@@ -101,16 +101,22 @@ inp = jnp.array(inp)
 </details>
 
 
-##
+## Usage
 
-The pre-trained models can be accessed via [PyTorch Hub](https://pytorch.org/hub/) as:
+The pre-trained models can be used via [Hugging Face hub](https://huggingface.co/collections/apple/aim-65aa3ce948c718a574f09eb7) as follows:
 
 ```python
-import
-
-
-
-
+from PIL import Image
+
+from aim.torch.models import AIMForImageClassification
+from aim.torch.data import val_transforms
+
+img = Image.open(...)
+model = AIMForImageClassification.from_pretrained("apple/aim-7B")
+transform = val_transforms()
+
+inp = transform(img).unsqueeze(0)
+logits, features = model(inp)
 ```
 
 ### Pre-trained backbones
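The usage snippet ends with `logits, features = model(inp)`, where `logits` are raw per-class scores. As a self-contained sketch with toy numbers (plain Python rather than the AIM or PyTorch API), a top-1 prediction is typically obtained with a softmax followed by an argmax:

```python
import math

def softmax(logits):
    # Numerically stable softmax: shift by the max before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Toy 5-class logits standing in for the model's real output.
logits = [0.5, 2.0, -1.0, 0.1, 1.2]

probs = softmax(logits)  # probabilities summing to 1
pred = max(range(len(probs)), key=probs.__getitem__)  # argmax -> class index
print(pred)  # 1
```

Softmax is monotonic, so the argmax over probabilities equals the argmax over the raw logits; the softmax step only matters when calibrated probabilities are needed.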