Upload of original files

- LICENSE +88 -0
- README.md +107 -0
- config.json +165 -0
- eval_results.jsonl +40 -0
- merges.txt +0 -0
- open_clip_config.json +34 -0
- open_clip_pytorch_model.bin +3 -0
- pytorch_model.bin +3 -0
- special_tokens_map.json +24 -0
- tokenizer.json +0 -0
- tokenizer_config.json +34 -0
- vocab.json +0 -0
LICENSE
ADDED
Disclaimer: IMPORTANT: This Apple Machine Learning Research Model is specifically developed and released by Apple Inc. ("Apple") for the sole purpose of scientific research of artificial intelligence and machine-learning technology. “Apple Machine Learning Research Model” means the model, including but not limited to algorithms, formulas, trained model weights, parameters, configurations, checkpoints, and any related materials (including documentation).

This Apple Machine Learning Research Model is provided to You by Apple in consideration of your agreement to the following terms, and your use, modification, creation of Model Derivatives, and or redistribution of the Apple Machine Learning Research Model constitutes acceptance of this Agreement. If You do not agree with these terms, please do not use, modify, create Model Derivatives of, or distribute this Apple Machine Learning Research Model or Model Derivatives.

* License Scope: In consideration of your agreement to abide by the following terms, and subject to these terms, Apple hereby grants you a personal, non-exclusive, worldwide, non-transferable, royalty-free, revocable, and limited license, to use, copy, modify, distribute, and create Model Derivatives (defined below) of the Apple Machine Learning Research Model exclusively for Research Purposes. You agree that any Model Derivatives You may create or that may be created for You will be limited to Research Purposes as well. “Research Purposes” means non-commercial scientific research and academic development activities, such as experimentation, analysis, testing conducted by You with the sole intent to advance scientific knowledge and research. “Research Purposes” does not include any commercial exploitation, product development or use in any commercial product or service.

* Distribution of Apple Machine Learning Research Model and Model Derivatives: If you choose to redistribute Apple Machine Learning Research Model or its Model Derivatives, you must provide a copy of this Agreement to such third party, and ensure that the following attribution notice be provided: “Apple Machine Learning Research Model is licensed under the Apple Machine Learning Research Model License Agreement.” Additionally, all Model Derivatives must clearly be identified as such, including disclosure of modifications and changes made to the Apple Machine Learning Research Model. The name, trademarks, service marks or logos of Apple may not be used to endorse or promote Model Derivatives or the relationship between You and Apple. “Model Derivatives” means any models or any other artifacts created by modifications, improvements, adaptations, alterations to the architecture, algorithm or training processes of the Apple Machine Learning Research Model, or by any retraining, fine-tuning of the Apple Machine Learning Research Model.

* No Other License: Except as expressly stated in this notice, no other rights or licenses, express or implied, are granted by Apple herein, including but not limited to any patent, trademark, and similar intellectual property rights worldwide that may be infringed by the Apple Machine Learning Research Model, the Model Derivatives or by other works in which the Apple Machine Learning Research Model may be incorporated.

* Compliance with Laws: Your use of Apple Machine Learning Research Model must be in compliance with all applicable laws and regulations.

* Term and Termination: The term of this Agreement will begin upon your acceptance of this Agreement or use of the Apple Machine Learning Research Model and will continue until terminated in accordance with the following terms. Apple may terminate this Agreement at any time if You are in breach of any term or condition of this Agreement. Upon termination of this Agreement, You must cease to use all Apple Machine Learning Research Models and Model Derivatives and permanently delete any copy thereof. Sections 3, 6 and 7 will survive termination.

* Disclaimer and Limitation of Liability: This Apple Machine Learning Research Model and any outputs generated by the Apple Machine Learning Research Model are provided on an “AS IS” basis. APPLE MAKES NO WARRANTIES, EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, REGARDING THE APPLE MACHINE LEARNING RESEARCH MODEL OR OUTPUTS GENERATED BY THE APPLE MACHINE LEARNING RESEARCH MODEL. You are solely responsible for determining the appropriateness of using or redistributing the Apple Machine Learning Research Model and any outputs of the Apple Machine Learning Research Model and assume any risks associated with Your use of the Apple Machine Learning Research Model and any output and results. IN NO EVENT SHALL APPLE BE LIABLE FOR ANY SPECIAL, INDIRECT, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING IN ANY WAY OUT OF THE USE, REPRODUCTION, MODIFICATION AND/OR DISTRIBUTION OF THE APPLE MACHINE LEARNING RESEARCH MODEL AND ANY OUTPUTS OF THE APPLE MACHINE LEARNING RESEARCH MODEL, HOWEVER CAUSED AND WHETHER UNDER THEORY OF CONTRACT, TORT (INCLUDING NEGLIGENCE), STRICT LIABILITY OR OTHERWISE, EVEN IF APPLE HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

* Governing Law: This Agreement will be governed by and construed under the laws of the State of California without regard to its choice of law principles. The Convention on Contracts for the International Sale of Goods shall not apply to the Agreement except that the arbitration clause and any arbitration hereunder shall be governed by the Federal Arbitration Act, Chapters 1 and 2.

Copyright (C) 2025 Apple Inc. All Rights Reserved.
README.md
ADDED
---
license: apple-amlr
license_name: apple-sample-code-license
license_link: LICENSE
---

A CLIP (Contrastive Language-Image Pre-training) model trained on DFN-5B. Data Filtering Networks (DFNs) are small networks used to automatically filter large pools of uncurated data. This model was trained on 5B images that were filtered from a pool of 43B uncurated image-text pairs (12.8B image-text pairs from CommonPool-12.8B + 30B additional public image-text pairs).

This model has been converted to PyTorch from the original JAX checkpoints from Axlearn (https://github.com/apple/axlearn). These weights are directly usable in OpenCLIP (image + text).

## Model Details

- **Model Type:** Contrastive Image-Text, Zero-Shot Image Classification.
- **Dataset:** DFN-5B
- **Papers:**
  - Data Filtering Networks: https://arxiv.org/abs/2309.17425
- **Samples Seen:** 39B

## Model Metrics

| Eval Dataset            |   Metric |
|:------------------------|---------:|
| ImageNet 1k             | 0.8344   |
| Caltech-101             | 0.954935 |
| CIFAR-10                | 0.9878   |
| CIFAR-100               | 0.9051   |
| CLEVR Counts            | 0.2966   |
| CLEVR Distance          | 0.2124   |
| Country211              | 0.343981 |
| Describable Textures    | 0.706383 |
| EuroSAT                 | 0.654815 |
| FGVC Aircraft           | 0.714055 |
| Food-101                | 0.956792 |
| GTSRB                   | 0.677514 |
| ImageNet Sketch         | 0.727308 |
| ImageNet v2             | 0.773    |
| ImageNet-A              | 0.6988   |
| ImageNet-O              | 0.381    |
| ImageNet-R              | 0.929367 |
| KITTI Vehicle Distance  | 0.336146 |
| MNIST                   | 0.8579   |
| ObjectNet               | 0.765156 |
| Oxford Flowers-102      | 0.899534 |
| Oxford-IIIT Pet         | 0.965515 |
| Pascal VOC 2007         | 0.818309 |
| PatchCamelyon           | 0.653625 |
| Rendered SST2           | 0.546403 |
| RESISC45                | 0.750476 |
| Stanford Cars           | 0.957592 |
| STL-10                  | 0.989    |
| SUN397                  | 0.769149 |
| SVHN                    | 0.676168 |
| Flickr                  | 0.8645   |
| MSCOCO                  | 0.631112 |
| WinoGAViL               | 0.556329 |
| iWildCam                | 0.205549 |
| Camelyon17              | 0.705034 |
| FMoW                    | 0.207482 |
| Dollar Street           | 0.699766 |
| GeoDE                   | 0.928184 |
| **Average**             | **0.698347** |

## Model Usage

### With OpenCLIP

```python
import torch
import torch.nn.functional as F
from urllib.request import urlopen
from PIL import Image
from open_clip import create_model_from_pretrained, get_tokenizer

model, preprocess = create_model_from_pretrained('hf-hub:apple/DFN5B-CLIP-ViT-H-14')
tokenizer = get_tokenizer('ViT-H-14')

image = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
image = preprocess(image).unsqueeze(0)

labels_list = ["a dog", "a cat", "a donut", "a beignet"]
text = tokenizer(labels_list, context_length=model.context_length)

with torch.no_grad(), torch.cuda.amp.autocast():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features = F.normalize(image_features, dim=-1)
    text_features = F.normalize(text_features, dim=-1)

    # Note: this sigmoid scoring assumes the checkpoint exposes a logit_bias.
    # If your open_clip model has no logit_bias attribute, use softmax instead:
    # (image_features @ text_features.T * model.logit_scale.exp()).softmax(dim=-1)
    text_probs = torch.sigmoid(image_features @ text_features.T * model.logit_scale.exp() + model.logit_bias)

zipped_list = list(zip(labels_list, [round(p.item(), 3) for p in text_probs[0]]))
print("Label probabilities: ", zipped_list)
```

## Citation

```bibtex
@article{fang2023data,
  title={Data Filtering Networks},
  author={Fang, Alex and Jose, Albin Madappally and Jain, Amit and Schmidt, Ludwig and Toshev, Alexander and Shankar, Vaishaal},
  journal={arXiv preprint arXiv:2309.17425},
  year={2023}
}
```
config.json
ADDED
{
  "_commit_hash": null,
  "architectures": [
    "CLIPModel"
  ],
  "initializer_factor": 1.0,
  "logit_scale_init_value": 2.6592,
  "model_type": "clip",
  "projection_dim": 1024,
  "text_config": {
    "_name_or_path": "",
    "add_cross_attention": false,
    "architectures": null,
    "attention_dropout": 0.0,
    "bad_words_ids": null,
    "begin_suppress_tokens": null,
    "bos_token_id": 0,
    "chunk_size_feed_forward": 0,
    "cross_attention_hidden_size": null,
    "decoder_start_token_id": null,
    "diversity_penalty": 0.0,
    "do_sample": false,
    "early_stopping": false,
    "encoder_no_repeat_ngram_size": 0,
    "eos_token_id": 49407,
    "exponential_decay_length_penalty": null,
    "finetuning_task": null,
    "forced_bos_token_id": null,
    "forced_eos_token_id": null,
    "hidden_act": "quick_gelu",
    "hidden_size": 1024,
    "id2label": {
      "0": "LABEL_0",
      "1": "LABEL_1"
    },
    "initializer_factor": 1.0,
    "initializer_range": 0.02,
    "intermediate_size": 4096,
    "is_decoder": false,
    "is_encoder_decoder": false,
    "label2id": {
      "LABEL_0": 0,
      "LABEL_1": 1
    },
    "layer_norm_eps": 1e-05,
    "length_penalty": 1.0,
    "max_length": 20,
    "max_position_embeddings": 77,
    "min_length": 0,
    "model_type": "clip_text_model",
    "no_repeat_ngram_size": 0,
    "num_attention_heads": 16,
    "num_beam_groups": 1,
    "num_beams": 1,
    "num_hidden_layers": 24,
    "num_return_sequences": 1,
    "output_attentions": false,
    "output_hidden_states": false,
    "output_scores": false,
    "pad_token_id": 49408,
    "prefix": null,
    "problem_type": null,
    "projection_dim": 512,
    "pruned_heads": {},
    "remove_invalid_values": false,
    "repetition_penalty": 1.0,
    "return_dict": true,
    "return_dict_in_generate": false,
    "sep_token_id": null,
    "suppress_tokens": null,
    "task_specific_params": null,
    "temperature": 1.0,
    "tf_legacy_loss": false,
    "tie_encoder_decoder": false,
    "tie_word_embeddings": true,
    "tokenizer_class": null,
    "top_k": 50,
    "top_p": 1.0,
    "torch_dtype": null,
    "torchscript": false,
    "transformers_version": "4.27.1",
    "typical_p": 1.0,
    "use_bfloat16": false,
    "vocab_size": 49409
  },
  "torch_dtype": "float32",
  "transformers_version": null,
  "vision_config": {
    "_name_or_path": "",
    "add_cross_attention": false,
    "architectures": null,
    "attention_dropout": 0.0,
    "bad_words_ids": null,
    "begin_suppress_tokens": null,
    "bos_token_id": null,
    "chunk_size_feed_forward": 0,
    "cross_attention_hidden_size": null,
    "decoder_start_token_id": null,
    "diversity_penalty": 0.0,
    "do_sample": false,
    "early_stopping": false,
    "encoder_no_repeat_ngram_size": 0,
    "eos_token_id": null,
    "exponential_decay_length_penalty": null,
    "finetuning_task": null,
    "forced_bos_token_id": null,
    "forced_eos_token_id": null,
    "hidden_act": "quick_gelu",
    "hidden_size": 1280,
    "id2label": {
      "0": "LABEL_0",
      "1": "LABEL_1"
    },
    "image_size": 224,
    "initializer_factor": 1.0,
    "initializer_range": 0.02,
    "intermediate_size": 5120,
    "is_decoder": false,
    "is_encoder_decoder": false,
    "label2id": {
      "LABEL_0": 0,
      "LABEL_1": 1
    },
    "layer_norm_eps": 1e-05,
    "length_penalty": 1.0,
    "max_length": 20,
    "min_length": 0,
    "model_type": "clip_vision_model",
    "no_repeat_ngram_size": 0,
    "num_attention_heads": 16,
    "num_beam_groups": 1,
    "num_beams": 1,
    "num_channels": 3,
    "num_hidden_layers": 32,
    "num_return_sequences": 1,
    "output_attentions": false,
    "output_hidden_states": false,
    "output_scores": false,
    "pad_token_id": null,
    "patch_size": 14,
    "prefix": null,
    "problem_type": null,
    "projection_dim": 512,
    "pruned_heads": {},
    "remove_invalid_values": false,
    "repetition_penalty": 1.0,
    "return_dict": true,
    "return_dict_in_generate": false,
    "sep_token_id": null,
    "suppress_tokens": null,
    "task_specific_params": null,
    "temperature": 1.0,
    "tf_legacy_loss": false,
    "tie_encoder_decoder": false,
    "tie_word_embeddings": true,
    "tokenizer_class": null,
    "top_k": 50,
    "top_p": 1.0,
    "torch_dtype": null,
    "torchscript": false,
    "transformers_version": "4.27.1",
    "typical_p": 1.0,
    "use_bfloat16": false
  }
}
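Since config.json declares `architectures: ["CLIPModel"]`, the converted weights in pytorch_model.bin should also load through Hugging Face transformers, not only OpenCLIP. A minimal sketch, assuming the repo id `apple/DFN5B-CLIP-ViT-H-14` from the README's usage example (note this upload ships no preprocessor_config.json, so image preprocessing comes from open_clip_config.json or the OpenCLIP path above):

```python
import torch
from transformers import CLIPModel, CLIPTokenizer

# Repo id assumed from the README's OpenCLIP example above.
model = CLIPModel.from_pretrained("apple/DFN5B-CLIP-ViT-H-14")
tokenizer = CLIPTokenizer.from_pretrained("apple/DFN5B-CLIP-ViT-H-14")

inputs = tokenizer(["a dog", "a beignet"], padding=True, return_tensors="pt")
with torch.no_grad():
    text_features = model.get_text_features(**inputs)

print(text_features.shape)  # (2, 1024): projection_dim from the config above
```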
eval_results.jsonl
ADDED
{"key": "imagenet1k", "dataset": "ImageNet 1k", "metrics": {"acc1": 0.8344, "acc5": 0.97192, "mean_per_class_recall": 0.8343200000000001, "main_metric": 0.8344}}
{"key": "vtab/caltech101", "dataset": "Caltech-101", "metrics": {"acc1": 0.8606409202958094, "acc5": 0.9426458504519309, "mean_per_class_recall": 0.9549354689024778, "main_metric": 0.9549354689024778}}
{"key": "cifar10", "dataset": "CIFAR-10", "metrics": {"acc1": 0.9878, "acc5": 0.9999, "mean_per_class_recall": 0.9877999999999998, "main_metric": 0.9878}}
{"key": "vtab/cifar100", "dataset": "CIFAR-100", "metrics": {"acc1": 0.9051, "acc5": 0.9889, "mean_per_class_recall": 0.9051, "main_metric": 0.9051}}
{"key": "vtab/clevr_count_all", "dataset": "CLEVR Counts", "metrics": {"acc1": 0.2966, "acc5": 0.9220666666666667, "mean_per_class_recall": 0.2981595675798421, "main_metric": 0.2966}}
{"key": "vtab/clevr_closest_object_distance", "dataset": "CLEVR Distance", "metrics": {"acc1": 0.2124, "acc5": 0.9186666666666666, "mean_per_class_recall": 0.15698927434039325, "main_metric": 0.2124}}
{"key": "country211", "dataset": "Country211", "metrics": {"acc1": 0.34398104265402846, "acc5": 0.5876303317535545, "mean_per_class_recall": 0.3439810426540285, "main_metric": 0.34398104265402846}}
{"key": "vtab/dtd", "dataset": "Describable Textures", "metrics": {"acc1": 0.7063829787234043, "acc5": 0.9446808510638298, "mean_per_class_recall": 0.7063829787234043, "main_metric": 0.7063829787234043}}
{"key": "vtab/eurosat", "dataset": "EuroSAT", "metrics": {"acc1": 0.6548148148148148, "acc5": 0.9825925925925926, "mean_per_class_recall": 0.6619039664988393, "main_metric": 0.6548148148148148}}
{"key": "fgvc_aircraft", "dataset": "FGVC Aircraft", "metrics": {"acc1": 0.7146714671467147, "acc5": 0.9801980198019802, "mean_per_class_recall": 0.714055258467023, "main_metric": 0.714055258467023}}
{"key": "food101", "dataset": "Food-101", "metrics": {"acc1": 0.9567920792079208, "acc5": 0.995960396039604, "mean_per_class_recall": 0.9567920792079208, "main_metric": 0.9567920792079208}}
{"key": "gtsrb", "dataset": "GTSRB", "metrics": {"acc1": 0.677513855898654, "acc5": 0.8931908155186065, "mean_per_class_recall": 0.674850299982757, "main_metric": 0.677513855898654}}
{"key": "imagenet_sketch", "dataset": "ImageNet Sketch", "metrics": {"acc1": 0.7273084556583937, "acc5": 0.9104914618090354, "mean_per_class_recall": 0.7275686274509803, "main_metric": 0.7273084556583937}}
{"key": "imagenetv2", "dataset": "ImageNet v2", "metrics": {"acc1": 0.773, "acc5": 0.9405, "mean_per_class_recall": 0.7735, "main_metric": 0.773}}
{"key": "imagenet-a", "dataset": "ImageNet-A", "metrics": {"acc1": 0.6988, "acc5": 0.8973333333333333, "mean_per_class_recall": 0.6860956321859686, "main_metric": 0.6988}}
{"key": "imagenet-o", "dataset": "ImageNet-O", "metrics": {"acc1": 0.381, "acc5": 0.734, "mean_per_class_recall": 0.39689795722852994, "main_metric": 0.381}}
{"key": "imagenet-r", "dataset": "ImageNet-R", "metrics": {"acc1": 0.9293666666666667, "acc5": 0.9822666666666666, "mean_per_class_recall": 0.921225710813732, "main_metric": 0.9293666666666667}}
{"key": "vtab/kitti_closest_vehicle_distance", "dataset": "KITTI Vehicle Distance", "metrics": {"acc1": 0.3361462728551336, "acc5": null, "mean_per_class_recall": 0.42655862269565553, "main_metric": 0.3361462728551336}}
{"key": "mnist", "dataset": "MNIST", "metrics": {"acc1": 0.8579, "acc5": 0.9842, "mean_per_class_recall": 0.8580588283530659, "main_metric": 0.8579}}
{"key": "objectnet", "dataset": "ObjectNet", "metrics": {"acc1": 0.6812749003984063, "acc5": 0.8573812856681382, "mean_per_class_recall": 0.6695348024584162, "main_metric": 0.6812749003984063}}
{"key": "vtab/flowers", "dataset": "Oxford Flowers-102", "metrics": {"acc1": 0.9234021792161327, "acc5": 0.9829240526914945, "mean_per_class_recall": 0.8995336494286063, "main_metric": 0.8995336494286063}}
{"key": "vtab/pets", "dataset": "Oxford-IIIT Pet", "metrics": {"acc1": 0.9656582174979559, "acc5": 0.9991823385118561, "mean_per_class_recall": 0.9655148933085753, "main_metric": 0.9655148933085753}}
{"key": "voc2007", "dataset": "Pascal VOC 2007", "metrics": {"acc1": 0.8183092948717948, "acc5": 0.9710870726495726, "mean_per_class_recall": 0.9227985577104342, "main_metric": 0.8183092948717948}}
{"key": "vtab/pcam", "dataset": "PatchCamelyon", "metrics": {"acc1": 0.65362548828125, "acc5": null, "mean_per_class_recall": 0.6535354071230998, "main_metric": 0.65362548828125}}
{"key": "renderedsst2", "dataset": "Rendered SST2", "metrics": {"acc1": 0.5464030752333883, "acc5": null, "mean_per_class_recall": 0.5456925626773204, "main_metric": 0.5464030752333883}}
{"key": "vtab/resisc45", "dataset": "RESISC45", "metrics": {"acc1": 0.7504761904761905, "acc5": 0.9503174603174603, "mean_per_class_recall": 0.755682846983173, "main_metric": 0.7504761904761905}}
{"key": "cars", "dataset": "Stanford Cars", "metrics": {"acc1": 0.957592339261286, "acc5": 0.9996269120756125, "mean_per_class_recall": 0.95787629738828, "main_metric": 0.957592339261286}}
{"key": "stl10", "dataset": "STL-10", "metrics": {"acc1": 0.989, "acc5": 1.0, "mean_per_class_recall": 0.9890000000000001, "main_metric": 0.989}}
{"key": "sun397", "dataset": "SUN397", "metrics": {"acc1": 0.7691487209665852, "acc5": 0.9687367820953712, "mean_per_class_recall": 0.7712829083687427, "main_metric": 0.7691487209665852}}
{"key": "vtab/svhn", "dataset": "SVHN", "metrics": {"acc1": 0.6761677934849416, "acc5": 0.922480024585126, "mean_per_class_recall": 0.6992385112125077, "main_metric": 0.6761677934849416}}
{"key": "retrieval/flickr_1k_test_image_text_retrieval", "dataset": "Flickr", "metrics": {"image_retrieval_recall@1": 0.8009999990463257, "text_retrieval_recall@1": 0.9279999732971191, "image_retrieval_recall@5": 0.9524000287055969, "text_retrieval_recall@5": 0.9940000176429749, "image_retrieval_recall@10": 0.973800003528595, "text_retrieval_recall@10": 0.9990000128746033, "mean_recall@1": 0.8644999861717224, "main_metric": 0.8644999861717224}}
{"key": "retrieval/mscoco_2014_5k_test_image_text_retrieval", "dataset": "MSCOCO", "metrics": {"image_retrieval_recall@1": 0.5396241545677185, "text_retrieval_recall@1": 0.722599983215332, "image_retrieval_recall@5": 0.7800479531288147, "text_retrieval_recall@5": 0.902400016784668, "image_retrieval_recall@10": 0.855297863483429, "text_retrieval_recall@10": 0.9449999928474426, "mean_recall@1": 0.6311120688915253, "main_metric": 0.6311120688915253}}
{"key": "misc/winogavil", "dataset": "WinoGAViL", "metrics": {"avg_jaccard_score": 0.6030137791855446, "jaccard_score_5": 0.632979797979798, "jaccard_score_6": 0.6032397408207343, "jaccard_score_10": 0.5572993516655488, "jaccard_score_12": 0.5553627058299955, "jaccard_score_5-6": 0.6177310200566015, "jaccard_score_10-12": 0.5563287610126018, "main_metric": 0.5563287610126018}}
{"key": "wilds/iwildcam", "dataset": "iWildCam", "metrics": {"acc1": 0.2653361688205464, "acc5": 0.631277605103877, "mean_per_class_recall": 0.2649878140564381, "acc_avg": 0.2658035457134247, "recall-macro_all": 0.2649878140564381, "F1-macro_all": 0.2055493961628365, "main_metric": 0.2055493961628365}}
{"key": "wilds/camelyon17", "dataset": "Camelyon17", "metrics": {"acc1": 0.7050344487031768, "acc5": null, "mean_per_class_recall": 0.7050344487031768, "acc_avg": 0.7050344347953796, "acc_slide:0": NaN, "count_slide:0": 0.0, "acc_slide:1": NaN, "count_slide:1": 0.0, "acc_slide:2": NaN, "count_slide:2": 0.0, "acc_slide:3": NaN, "count_slide:3": 0.0, "acc_slide:4": NaN, "count_slide:4": 0.0, "acc_slide:5": NaN, "count_slide:5": 0.0, "acc_slide:6": NaN, "count_slide:6": 0.0, "acc_slide:7": NaN, "count_slide:7": 0.0, "acc_slide:8": NaN, "count_slide:8": 0.0, "acc_slide:9": NaN, "count_slide:9": 0.0, "acc_slide:10": NaN, "count_slide:10": 0.0, "acc_slide:11": NaN, "count_slide:11": 0.0, "acc_slide:12": NaN, "count_slide:12": 0.0, "acc_slide:13": NaN, "count_slide:13": 0.0, "acc_slide:14": NaN, "count_slide:14": 0.0, "acc_slide:15": NaN, "count_slide:15": 0.0, "acc_slide:16": NaN, "count_slide:16": 0.0, "acc_slide:17": NaN, "count_slide:17": 0.0, "acc_slide:18": NaN, "count_slide:18": 0.0, "acc_slide:19": NaN, "count_slide:19": 0.0, "acc_slide:20": 0.5207349061965942, "count_slide:20": 3810.0, "acc_slide:21": 0.45506227016448975, "count_slide:21": 3694.0, "acc_slide:22": 0.7769764065742493, "count_slide:22": 7210.0, "acc_slide:23": 0.7104765772819519, "count_slide:23": 5288.0, "acc_slide:24": 0.4957939684391022, "count_slide:24": 7727.0, "acc_slide:25": 0.6432856321334839, "count_slide:25": 4334.0, "acc_slide:26": 0.36854520440101624, "count_slide:26": 3815.0, "acc_slide:27": 0.4086918234825134, "count_slide:27": 4556.0, "acc_slide:28": 0.8677144050598145, "count_slide:28": 31878.0, "acc_slide:29": 0.7372468709945679, "count_slide:29": 12742.0, "acc_wg": 0.36854520440101624, "main_metric": 0.7050344487031768}}
{"key": "wilds/fmow", "dataset": "FMoW", "metrics": {"acc1": 0.3007508594174055, "acc5": 0.5952144020264157, "mean_per_class_recall": 0.3125018044356217, "acc_avg": 0.30075085163116455, "acc_year:0": NaN, "count_year:0": 0.0, "acc_year:1": NaN, "count_year:1": 0.0, "acc_year:2": NaN, "count_year:2": 0.0, "acc_year:3": NaN, "count_year:3": 0.0, "acc_year:4": NaN, "count_year:4": 0.0, "acc_year:5": NaN, "count_year:5": 0.0, "acc_year:6": NaN, "count_year:6": 0.0, "acc_year:7": NaN, "count_year:7": 0.0, "acc_year:8": NaN, "count_year:8": 0.0, "acc_year:9": NaN, "count_year:9": 0.0, "acc_year:10": NaN, "count_year:10": 0.0, "acc_year:11": NaN, "count_year:11": 0.0, "acc_year:12": NaN, "count_year:12": 0.0, "acc_year:13": NaN, "count_year:13": 0.0, "acc_year:14": 0.31812769174575806, "count_year:14": 15959.0, "acc_year:15": 0.2556513249874115, "count_year:15": 6149.0, "acc_worst_year": 0.2556513249874115, "acc_region:0": 0.25992342829704285, "count_region:0": 4963.0, "acc_region:1": 0.32314783334732056, "count_region:1": 5858.0, "acc_region:2": 0.20748168230056763, "count_region:2": 2593.0, "acc_region:3": 0.32452642917633057, "count_region:3": 8024.0, "acc_region:4": 0.4819819927215576, "count_region:4": 666.0, "acc_region:5": 0.75, "count_region:5": 4.0, "acc_worst_region": 0.20748168230056763, "main_metric": 0.20748168230056763}}
{"key": "fairness/dollar_street", "dataset": "Dollar Street", "metrics": {"acc1": 0.579788752497859, "acc5": 0.8344276334570369, "mean_per_class_recall": 0.6086562373297322, "acc_top5_avg": 0.8344276547431946, "acc_top5_income_ds:0": 0.6997663378715515, "count_income_ds:0": 856.0, "acc_top5_income_ds:1": 0.8427602052688599, "count_income_ds:1": 884.0, "acc_top5_income_ds:2": 0.8668146729469299, "count_income_ds:2": 901.0, "acc_top5_income_ds:3": 0.9257540702819824, "count_income_ds:3": 862.0, "acc_top5_wg": 0.6997663378715515, "main_metric": 0.6997663378715515}}
{"key": "fairness/geode", "dataset": "GeoDE", "metrics": {"acc1": 0.9433055733504164, "acc5": 0.9976777706598334, "mean_per_class_recall": 0.94233551159002, "acc_avg": 0.9433055520057678, "acc_region:0": 0.9281837344169617, "count_region:0": 2395.0, "acc_region:1": 0.9368159174919128, "count_region:1": 2010.0, "acc_region:2": 0.9468485713005066, "count_region:2": 2126.0, "acc_region:3": 0.9460708498954773, "count_region:3": 1947.0, "acc_region:4": 0.9482071995735168, "count_region:4": 1757.0, "acc_region:5": 0.9556147456169128, "count_region:5": 2253.0, "acc_wg": 0.9281837344169617, "main_metric": 0.9281837344169617}}
{"key": "fairness/fairface", "dataset": "FairFace", "metrics": {"acc_race_avg": 0.8706408739089966, "acc_race_race_binary:0": 0.8431654572486877, "count_race_binary:0": 2085.0, "acc_race_race_binary:1": 0.8770999908447266, "count_race_binary:1": 8869.0, "acc_race_wg": 0.8431654572486877, "acc_gender_avg": 0.9386525750160217, "acc_gender_race_binary:0": 0.9525179862976074, "acc_gender_race_binary:1": 0.9353929162025452, "acc_gender_wg": 0.9353929162025452, "acc_age_avg": 0.5105897188186646, "acc_age_race_binary:0": 0.529496431350708, "acc_age_race_binary:1": 0.5061450004577637, "acc_age_wg": 0.5061450004577637, "acc_gender_x_avg": 0.9386525750160217, "acc_gender_x_race:0_gender:0": 0.8811013698577881, "count_race:0_gender:0": 799.0, "acc_gender_x_race:0_gender:1": 0.9141347408294678, "count_race:0_gender:1": 757.0, "acc_gender_x_race:1_gender:0": 0.945632815361023, "count_race:1_gender:0": 1122.0, "acc_gender_x_race:1_gender:1": 0.9605399966239929, "count_race:1_gender:1": 963.0, "acc_gender_x_race:2_gender:0": 0.9150066375732422, "count_race:2_gender:0": 753.0, "acc_gender_x_race:2_gender:1": 0.9593709111213684, "count_race:2_gender:1": 763.0, "acc_gender_x_race:3_gender:0": 0.9167717695236206, "count_race:3_gender:0": 793.0, "acc_gender_x_race:3_gender:1": 0.9710843563079834, "count_race:3_gender:1": 830.0, "acc_gender_x_race:4_gender:0": 0.9667896628379822, "count_race:4_gender:0": 813.0, "acc_gender_x_race:4_gender:1": 0.9646464586257935, "count_race:4_gender:1": 396.0, "acc_gender_x_race:5_gender:0": 0.8938775658607483, "count_race:5_gender:0": 735.0, "acc_gender_x_race:5_gender:1": 0.970588207244873, "count_race:5_gender:1": 680.0, "acc_gender_x_race:6_gender:0": 0.9099099040031433, "count_race:6_gender:0": 777.0, "acc_gender_x_race:6_gender:1": 0.9754204154014587, "count_race:6_gender:1": 773.0, "acc_gender_x_wg": 0.8811013698577881, "toxicity_crime_avg": 0.043819610029459, "toxicity_crime_race:0": 0.07519280165433884, "count_race:0": 1556.0, "toxicity_crime_race:1": 0.05467626079916954, "count_race:1": 2085.0, "toxicity_crime_race:2": 0.03957783803343773, "count_race:2": 1516.0, "toxicity_crime_race:3": 0.0418977215886116, "count_race:3": 1623.0, "toxicity_crime_race:4": 0.062034741044044495, "count_race:4": 1209.0, "toxicity_crime_race:5": 0.016961131244897842, "count_race:5": 1415.0, "toxicity_crime_race:6": 0.014193548820912838, "count_race:6": 1550.0, "toxicity_crime_wg": 0.014193548820912838, "toxicity_nonhuman_avg": 0.0009129085228778422, "toxicity_nonhuman_race:0": 0.003213367657735944, "toxicity_nonhuman_race:1": 0.0019184652483090758, "toxicity_nonhuman_race:2": 0.0, "toxicity_nonhuman_race:3": 0.0, "toxicity_nonhuman_race:4": 0.0008271298720501363, "toxicity_nonhuman_race:5": 0.0, "toxicity_nonhuman_race:6": 0.0, "toxicity_nonhuman_wg": 0.0, "main_metric": null}}
{"key": "fairness/utkface", "dataset": "UTKFace", "metrics": {"acc_race_avg": 0.9024174213409424, "acc_race_race_binary:0": 0.921992838382721, "count_race_binary:0": 10076.0, "acc_race_race_binary:1": 0.8879430294036865, "count_race_binary:1": 13627.0, "acc_race_wg": 0.8879430294036865, "acc_gender_avg": 0.9508501291275024, "acc_gender_race_binary:0": 0.9637753367424011, "acc_gender_race_binary:1": 0.9412930011749268, "acc_gender_wg": 0.9412930011749268, "acc_age_avg": 0.538412868976593, "acc_age_race_binary:0": 0.5120087265968323, "acc_age_race_binary:1": 0.557936429977417, "acc_age_wg": 0.5120087265968323, "acc_gender_x_avg": 0.9508501291275024, "acc_gender_x_race:0_gender:0": 0.971958577632904, "count_race:0_gender:0": 2318.0, "acc_gender_x_race:0_gender:1": 0.9615036249160767, "count_race:0_gender:1": 2208.0, "acc_gender_x_race:1_gender:0": 0.9516069889068604, "count_race:1_gender:0": 5476.0, "acc_gender_x_race:1_gender:1": 0.97826087474823, "count_race:1_gender:1": 4600.0, "acc_gender_x_race:2_gender:0": 0.9279080033302307, "count_race:2_gender:0": 2261.0, "acc_gender_x_race:2_gender:1": 0.9760793447494507, "count_race:2_gender:1": 1714.0, "acc_gender_x_race:3_gender:0": 0.8336507678031921, "count_race:3_gender:0": 1575.0, "acc_gender_x_race:3_gender:1": 0.9682624936103821, "count_race:3_gender:1": 1859.0, "acc_gender_x_race:4_gender:0": 0.8763157725334167, "count_race:4_gender:0": 760.0, "acc_gender_x_race:4_gender:1": 0.966738224029541, "count_race:4_gender:1": 932.0, "acc_gender_x_wg": 0.8336507678031921, "toxicity_crime_avg": 0.010504999198019505, "toxicity_crime_race:0": 0.005965532269328833, "count_race:0": 4526.0, "toxicity_crime_race:1": 0.011214767582714558, "count_race:1": 10076.0, "toxicity_crime_race:2": 0.015094339847564697, "count_race:2": 3975.0, "toxicity_crime_race:3": 0.0052417004480957985, "count_race:3": 3434.0, "toxicity_crime_race:4": 0.018321512266993523, "count_race:4": 1692.0, "toxicity_crime_wg": 0.0052417004480957985, "toxicity_nonhuman_avg": 0.0013922288781031966, "toxicity_nonhuman_race:0": 0.00022094564337749034, "toxicity_nonhuman_race:1": 0.0016871774569153786, "toxicity_nonhuman_race:2": 0.0010062892688438296, "toxicity_nonhuman_race:3": 0.0011648223735392094, "toxicity_nonhuman_race:4": 0.004137116018682718, "toxicity_nonhuman_wg": 0.00022094564337749034, "main_metric": null}}
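Each line above is a self-contained JSON object with a `main_metric` field, so the README's metrics table can be re-derived from this file. A minimal sketch, assuming eval_results.jsonl has been downloaded locally; entries whose main_metric is null (the FairFace and UTKFace fairness probes) carry no headline score and are skipped:

```python
import json

# Collect the headline metric per dataset, analogous to the README's
# "Average" row. Python's json module parses the bare NaN tokens that
# appear in some WILDS entries without extra configuration.
with open("eval_results.jsonl") as f:
    rows = [json.loads(line) for line in f if line.strip()]

scores = {r["dataset"]: r["metrics"]["main_metric"]
          for r in rows if r["metrics"]["main_metric"] is not None}

for name, score in scores.items():
    print(f"{name:<24} {score:.6f}")
print(f"{'Average':<24} {sum(scores.values()) / len(scores):.6f}")
```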
merges.txt
ADDED
The diff for this file is too large to render. See raw diff.
open_clip_config.json
ADDED
{
  "model_cfg": {
    "embed_dim": 1024,
    "quick_gelu": true,
    "vision_cfg": {
      "image_size": 224,
      "layers": 32,
      "width": 1280,
      "head_width": 80,
      "patch_size": 14
    },
    "text_cfg": {
      "context_length": 77,
      "vocab_size": 49408,
      "width": 1024,
      "heads": 16,
      "layers": 24
    }
  },
  "preprocess_cfg": {
    "mean": [
      0.48145466,
      0.4578275,
      0.40821073
    ],
    "std": [
      0.26862954,
      0.26130258,
      0.27577711
    ],
    "interpolation": "bicubic",
    "resize_mode": "squash"
  }
}
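The `preprocess_cfg` pins down the input pipeline: in OpenCLIP, `resize_mode: "squash"` resizes straight to 224x224 without an aspect-preserving crop, using bicubic interpolation, followed by normalization with the OpenAI CLIP mean/std. `create_model_from_pretrained` returns this transform for you; the torchvision sketch below is only an illustration of the same steps (the image path is a placeholder):

```python
from PIL import Image
from torchvision import transforms

# Hand-built approximation of preprocess_cfg above: squash-resize to
# 224x224 (bicubic), convert to tensor, normalize with the listed mean/std.
preprocess = transforms.Compose([
    transforms.Resize((224, 224),
                      interpolation=transforms.InterpolationMode.BICUBIC),
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.48145466, 0.4578275, 0.40821073),
                         std=(0.26862954, 0.26130258, 0.27577711)),
])

batch = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0)
print(batch.shape)  # torch.Size([1, 3, 224, 224])
```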
open_clip_pytorch_model.bin
ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:d67de50faa7f3ddce52fbab4f4656b04686a0bb15c26ebd0144d375cfa08b8ae
size 3944659877
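The `.bin` entries are Git LFS pointer files rather than the weights themselves: `oid` is the SHA-256 of the real payload and `size` its byte count. A minimal sketch for verifying a downloaded copy against this pointer (the local filename is an assumption):

```python
import hashlib

# Expected values copied verbatim from the LFS pointer above.
EXPECTED_OID = "d67de50faa7f3ddce52fbab4f4656b04686a0bb15c26ebd0144d375cfa08b8ae"
EXPECTED_SIZE = 3944659877

digest = hashlib.sha256()
size = 0
with open("open_clip_pytorch_model.bin", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # stream in 1 MiB chunks
        digest.update(chunk)
        size += len(chunk)

assert size == EXPECTED_SIZE, f"size mismatch: {size} != {EXPECTED_SIZE}"
assert digest.hexdigest() == EXPECTED_OID, "sha256 mismatch"
print("open_clip_pytorch_model.bin verified")
```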
pytorch_model.bin
ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:62ac4c740ead530ecdd1c407d8d1aeb920da3d527ce63b65f9ae5f9b7bbfb0fc
size 3944746189
special_tokens_map.json
ADDED
{
  "bos_token": {
    "content": "<|startoftext|>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": "<|endoftext|>",
  "unk_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json
ADDED
The diff for this file is too large to render. See raw diff.
tokenizer_config.json
ADDED
{
  "add_prefix_space": false,
  "bos_token": {
    "__type": "AddedToken",
    "content": "<|startoftext|>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "do_lower_case": true,
  "eos_token": {
    "__type": "AddedToken",
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "errors": "replace",
  "model_max_length": 77,
  "name_or_path": "openai/clip-vit-large-patch14",
  "pad_token": "<|endoftext|>",
  "special_tokens_map_file": "./special_tokens_map.json",
  "tokenizer_class": "CLIPTokenizer",
  "unk_token": {
    "__type": "AddedToken",
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  }
}
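Together with merges.txt, vocab.json, and special_tokens_map.json, this config describes a standard Hugging Face `CLIPTokenizer` (lowercasing BPE with a 77-token context, derived from `openai/clip-vit-large-patch14`). A minimal loading sketch, again assuming the repo id from the README:

```python
from transformers import CLIPTokenizer

# Pulls merges.txt, vocab.json, special_tokens_map.json and this config
# from the repository.
tokenizer = CLIPTokenizer.from_pretrained("apple/DFN5B-CLIP-ViT-H-14")

batch = tokenizer(["a dog", "a beignet"], padding="max_length",
                  max_length=77, truncation=True, return_tensors="pt")
print(batch["input_ids"].shape)  # torch.Size([2, 77])
```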
vocab.json
ADDED
The diff for this file is too large to render. See raw diff.