---
license: apache-2.0
base_model: google/vit-base-patch32-384
tags:
- generated_from_keras_callback
model-index:
- name: Prahas10/roof_classification
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->

# Prahas10/roof_classification

This model is a fine-tuned version of [google/vit-base-patch32-384](https://huggingface.co/google/vit-base-patch32-384) on an unknown dataset.
It achieves the following results after the final training epoch:
- Train Loss: 0.0162
- Validation Loss: 0.2163
- Train Accuracy: 0.8916
- Epoch: 24

## Model description

More information needed

## Intended uses & limitations

More information needed
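
Below is a minimal inference sketch using the TensorFlow classes in Transformers. It assumes the repository ships an image processor configuration (otherwise the processor of the base model `google/vit-base-patch32-384` can be loaded instead); `roof.jpg` is only a placeholder path.

```python
from PIL import Image
from transformers import AutoImageProcessor, TFAutoModelForImageClassification

repo_id = "Prahas10/roof_classification"

# Load the preprocessing config and the fine-tuned TF model from the Hub.
processor = AutoImageProcessor.from_pretrained(repo_id)
model = TFAutoModelForImageClassification.from_pretrained(repo_id)

# "roof.jpg" is a placeholder; any RGB roof photo works.
image = Image.open("roof.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="tf")  # resizes/normalizes to 384x384

logits = model(**inputs).logits
predicted_class = model.config.id2label[int(logits.numpy().argmax(axis=-1)[0])]
print(predicted_class)
```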

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (an equivalent optimizer is sketched in code after this list):
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 4825, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.0001}
- training_precision: float32
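
The optimizer entry above is a serialized Keras `AdamWeightDecay` with a linear `PolynomialDecay` schedule (3e-05 decaying to 0.0 over 4825 steps) and a weight-decay rate of 1e-4. A rough reconstruction with the `create_optimizer` helper from Transformers' TensorFlow utilities, assuming no warmup since none appears in the logged config:

```python
from transformers import create_optimizer

# Rebuild an optimizer equivalent to the logged config above.
optimizer, lr_schedule = create_optimizer(
    init_lr=3e-5,            # initial_learning_rate
    num_train_steps=4825,    # decay_steps of the PolynomialDecay schedule
    num_warmup_steps=0,      # assumption: the logged config shows no warmup
    weight_decay_rate=1e-4,  # weight_decay_rate
    power=1.0,               # power=1.0 -> linear decay to end_learning_rate=0.0
)

# model.compile(optimizer=optimizer)  # the model computes its own loss when labels are passed
```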

### Training results

| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 2.5019     | 2.0795          | 0.3735         | 0     |
| 1.7660     | 1.7259          | 0.4458         | 1     |
| 1.0922     | 1.0990          | 0.7590         | 2     |
| 0.6402     | 0.8232          | 0.8193         | 3     |
| 0.4725     | 0.6107          | 0.8675         | 4     |
| 0.2674     | 0.4986          | 0.9157         | 5     |
| 0.1794     | 0.5000          | 0.9157         | 6     |
| 0.2579     | 0.7721          | 0.7349         | 7     |
| 0.1269     | 0.3304          | 0.8675         | 8     |
| 0.0970     | 0.2980          | 0.8795         | 9     |
| 0.1181     | 0.4988          | 0.8193         | 10    |
| 0.1241     | 0.2899          | 0.8795         | 11    |
| 0.2311     | 0.4113          | 0.8795         | 12    |
| 0.0753     | 0.2964          | 0.9157         | 13    |
| 0.0637     | 0.4096          | 0.8675         | 14    |
| 0.0540     | 0.3032          | 0.9036         | 15    |
| 0.0334     | 0.2694          | 0.9277         | 16    |
| 0.0212     | 0.1793          | 0.9639         | 17    |
| 0.0241     | 0.3772          | 0.8554         | 18    |
| 0.0471     | 0.5727          | 0.8675         | 19    |
| 0.0652     | 0.3167          | 0.8916         | 20    |
| 0.0281     | 0.2690          | 0.9036         | 21    |
| 0.0478     | 0.2169          | 0.9277         | 22    |
| 0.0193     | 0.2091          | 0.9880         | 23    |
| 0.0162     | 0.2163          | 0.8916         | 24    |

### Framework versions

- Transformers 4.38.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.2