
Add pipeline tag

#1 opened by nielsr (HF Staff)

Files changed (1): README.md (+5 -3)

README.md CHANGED
@@ -1,9 +1,11 @@
 ---
-library_name: transformers
-license: apache-2.0
 datasets:
 - ILSVRC/imagenet-1k
+library_name: transformers
+license: apache-2.0
+pipeline_tag: image-feature-extraction
 ---
+
 # NEPA: Next-Embedding Prediction Makes Strong Vision Learners
 
 [![Paper](https://img.shields.io/badge/arXiv-Paper-b31b1b?logo=arxiv&logoColor=b31b1b)](https://arxiv.org/abs/2512.16922)
@@ -197,7 +199,7 @@ python init_nepa_cls_from_pretrain.py \
 
 ## Acknowledgements
 
-We gratefully acknowledge the developers of [Transformers](https://github.com/huggingface/transformers), [Evaluate](https://github.com/huggingface/evaluate), and [timm](https://github.com/huggingface/pytorch-image-models) for their excellent open-source contributions.
+We gratefully acknowledge the developers of [Transformers](https://github.com/huggingface/transformers), [Datasets](https://github.com/huggingface/datasets), [Evaluate](https://github.com/huggingface/evaluate), and [timm](https://github.com/huggingface/pytorch-image-models) for their excellent open-source contributions.
 
 ## Contact
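The metadata this PR touches lives in the YAML front matter between the leading `---` fences of README.md; the Hub reads keys such as `pipeline_tag` from that block to categorize the model. As a minimal sketch of what the merged block contains, the scalar keys can be pulled out with plain Python (the `parse_front_matter` helper below is hypothetical, written for illustration only; it handles simple `key: value` lines and skips list items like the `datasets` entries):

```python
# Minimal sketch: read the scalar keys out of a model card's YAML front
# matter without a YAML dependency. `parse_front_matter` is a hypothetical
# helper, not part of any Hub library.
def parse_front_matter(readme: str) -> dict:
    # The front matter sits between the first two `---` fences.
    _, block, _ = readme.split("---", 2)
    meta = {}
    for line in block.strip().splitlines():
        if line.startswith("-"):        # list item (e.g. a dataset name)
            continue
        key, sep, value = line.partition(":")
        if sep and value.strip():       # keep only `key: value` scalars
            meta[key.strip()] = value.strip()
    return meta

card = """---
datasets:
- ILSVRC/imagenet-1k
library_name: transformers
license: apache-2.0
pipeline_tag: image-feature-extraction
---

# NEPA: Next-Embedding Prediction Makes Strong Vision Learners
"""

meta = parse_front_matter(card)
print(meta["pipeline_tag"])  # image-feature-extraction
```

With `pipeline_tag: image-feature-extraction` set, the Hub can list the model under the corresponding task filter, which is the point of this PR.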