---
language:
- en
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
datasets:
- keremberke/pokemon-classification
metrics:
- accuracy
widget:
- src: https://datasets-server.huggingface.co/assets/keremberke/pokemon-classification/--/full/train/3/image/image.jpg
  example_title: Abra
- src: https://datasets-server.huggingface.co/cached-assets/keremberke/pokemon-classification/--/full/train/383/image/image.jpg
  example_title: Blastoise
pipeline_tag: image-classification
base_model: google/vit-base-patch16-224
model-index:
- name: pokemon_classifier
  results:
  - task:
      type: image-classification
      name: Image Classification
    dataset:
      name: keremberke/pokemon-classification
      type: pokemon-classification
      config: full
      split: validation
      args: full
    metrics:
    - type: accuracy
      value: 0.08848920863309352
      name: Accuracy
---

# pokemon_classifier

This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the `full` configuration of the [keremberke/pokemon-classification](https://huggingface.co/datasets/keremberke/pokemon-classification) dataset.
It achieves the following results on the evaluation set:
- Loss: 8.0935
- Accuracy: 0.0885

## Model description

This model, `pokemon_classifier`, fine-tunes google/vit-base-patch16-224 for Pokémon image classification: given an input image, it predicts one of the Pokémon species listed under "Intended uses & limitations" below.
Note that although the training loss drops close to zero, validation accuracy remains low (about 8.9%), so the model overfits its training data and its predictions should be treated with caution.

## Intended uses & limitations

This model can only recognize the Pokémon classes present in its training data, namely: Golbat, Machoke, Omastar, Diglett, Lapras, Kabuto,
Persian, Weepinbell, Golem, Dodrio, Raichu, Zapdos, Raticate, Magnemite, Ivysaur, Growlithe, Tangela, Drowzee, Rapidash, Venonat, Pidgeot, Nidorino, Porygon,
Lickitung, Rattata, Machop, Charmeleon, Slowbro, Parasect, Eevee, Starmie, Staryu, Psyduck, Dragonair, Magikarp, Vileplume, Marowak, Pidgeotto, Shellder, Mewtwo,
Farfetchd, Kingler, Seel, Kakuna, Doduo, Electabuzz, Charmander, Rhyhorn, Tauros, Dugtrio, Poliwrath, Gengar, Exeggutor, Dewgong, Jigglypuff, Geodude, Kadabra, Nidorina,
Sandshrew, Grimer, MrMime, Pidgey, Koffing, Ekans, Alolan Sandslash, Venusaur, Snorlax, Paras, Jynx, Chansey, Hitmonchan, Gastly, Kangaskhan, Oddish, Wigglytuff,
Graveler, Arcanine, Clefairy, Articuno, Poliwag, Abra, Squirtle, Voltorb, Ponyta, Moltres, Nidoqueen, Magmar, Onix, Vulpix, Butterfree, Krabby, Arbok, Clefable, Goldeen,
Magneton, Dratini, Caterpie, Jolteon, Nidoking, Alakazam, Dragonite, Fearow, Slowpoke, Weezing, Beedrill, Weedle, Cloyster, Vaporeon, Gyarados, Golduck, Machamp, Hitmonlee,
Primeape, Cubone, Sandslash, Scyther, Haunter, Metapod, Tentacruel, Aerodactyl, Kabutops, Ninetales, Zubat, Rhydon, Mew, Pinsir, Ditto, Victreebel, Omanyte, Horsea, Pikachu,
Blastoise, Venomoth, Charizard, Seadra, Muk, Spearow, Bulbasaur, Bellsprout, Electrode, Gloom, Poliwhirl, Flareon, Seaking, Hypno, Wartortle, Mankey, Tentacool, Exeggcute,
and Meowth.
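
The model can be loaded for inference with the standard `transformers` image-classification API. The sketch below is illustrative: the repository id and image filename are placeholders, not values confirmed by this card.

```python
# Minimal inference sketch. "your-username/pokemon_classifier" and
# "pikachu.png" are placeholders; substitute the actual Hub repo id
# this card is published under and a real image path.
import torch
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image


def top_predictions(logits, id2label, k=3):
    """Return the k highest-scoring (label, probability) pairs."""
    probs = torch.softmax(logits, dim=-1)[0]
    values, indices = probs.topk(k)
    return [(id2label[i.item()], v.item()) for v, i in zip(values, indices)]


if __name__ == "__main__":
    repo_id = "your-username/pokemon_classifier"  # placeholder repo id
    processor = AutoImageProcessor.from_pretrained(repo_id)
    model = AutoModelForImageClassification.from_pretrained(repo_id)

    image = Image.open("pikachu.png").convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    print(top_predictions(logits, model.config.id2label))
```

Given the low validation accuracy reported above, inspecting the full top-k list rather than only the argmax is advisable.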

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
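
Although the "Training and evaluation data" section is incomplete, the (step, epoch) pairs logged in the results table below let us estimate the size of the training split. This is a back-of-the-envelope check, assuming the epoch values are simply rounded to two decimals:

```python
# Infer steps per epoch (and hence the approximate number of training
# images) from the logged (step, epoch) pairs. Epoch values are rounded
# to two decimals in the table, so these are estimates only.
train_batch_size = 8  # from the hyperparameters above

for step, epoch in [(500, 0.82), (1000, 1.64), (1500, 2.46), (2000, 3.28)]:
    steps_per_epoch = step / epoch
    print(f"step {step}: ~{steps_per_epoch:.0f} steps/epoch "
          f"-> ~{steps_per_epoch * train_batch_size:.0f} training images")
```

All four rows agree on roughly 610 steps per epoch, suggesting a training split of about 4,900 images at batch size 8.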

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0872        | 0.82  | 500  | 7.2669          | 0.0640   |
| 0.1581        | 1.64  | 1000 | 7.6072          | 0.0712   |
| 0.0536        | 2.46  | 1500 | 7.8952          | 0.0842   |
| 0.0169        | 3.28  | 2000 | 8.0935          | 0.0885   |


### Framework versions

- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3