repo_id: string (length 15 to 86)
file_path: string (length 27 to 180)
content: string (length 1 to 1.75M)
__index_level_0__: int64 (always 0)
hf_public_repos/pytorch-image-models/docs/models
hf_public_repos/pytorch-image-models/docs/models/.templates/code_snippets.md
## How do I use this model on an image?

To load a pretrained model:

```python
import timm
model = timm.create_model('{{ model_name }}', pretrained=True)
model.eval()
```

To load and preprocess the image:

```python
import urllib
from PIL import Image
from timm.data import resolve_data_config
from timm.data.transforms...
```
0
hf_public_repos/pytorch-image-models/docs/models
hf_public_repos/pytorch-image-models/docs/models/.templates/generate_readmes.py
"""
Run this script to generate the model-index files in `models` from the templates in `.templates/models`.
"""
import argparse
from pathlib import Path

from jinja2 import Environment, FileSystemLoader
import modelindex


def generate_readmes(templates_path: Path, dest_path: Path):
    """Add the code snippet templ...
0
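The README-generation step above can be sketched with an in-memory Jinja2 environment. The template strings and model name below are hypothetical stand-ins for the real `.templates/` files (the actual script uses `FileSystemLoader` over the templates directory):

```python
from jinja2 import Environment, DictLoader

# Hypothetical in-memory stand-ins for the real .templates/ files,
# which the actual script loads with FileSystemLoader.
templates = {
    "code_snippets.md": (
        "```python\n"
        "import timm\n"
        "model = timm.create_model('{{ model_name }}', pretrained=True)\n"
        "```"
    ),
    "models/resnet.md": "# ResNet\n\n{% include 'code_snippets.md' %}\n",
}

env = Environment(loader=DictLoader(templates))
# {% include %} shares the render context, so model_name reaches the snippet.
readme = env.get_template("models/resnet.md").render(model_name="resnet50")
print(readme)
```

Because includes inherit the calling template's context, one `code_snippets.md` template can be reused across every model page with only `model_name` changing.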
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/adversarial-inception-v3.md
# Adversarial Inception v3 **Inception v3** is a convolutional neural network architecture from the Inception family that makes several improvements including using [Label Smoothing](https://paperswithcode.com/method/label-smoothing), Factorized 7 x 7 convolutions, and the use of an [auxiliary classifier](https://paper...
0
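Label smoothing, mentioned above, replaces a hard one-hot target with a blend of the one-hot vector and the uniform distribution over classes. A minimal sketch; the smoothing factor 0.1 is stated here from memory of the Inception v3 setup, so treat it as an assumption:

```python
def smooth_labels(one_hot, eps=0.1):
    """Blend a one-hot target with the uniform distribution over k classes."""
    k = len(one_hot)
    return [(1 - eps) * y + eps / k for y in one_hot]

# 4-class example: the true class keeps most of the probability mass,
# the remaining eps is spread evenly over all classes.
smoothed = smooth_labels([0, 0, 1, 0])
```

The smoothed target still sums to 1, but the loss no longer pushes the logit of the true class toward infinity, which is the regularizing effect.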
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/advprop.md
# AdvProp (EfficientNet) **AdvProp** is an adversarial training scheme which treats adversarial examples as additional examples, to prevent overfitting. Key to the method is the usage of a separate auxiliary batch norm for adversarial examples, as they have different underlying distributions to normal examples. The w...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/big-transfer.md
# Big Transfer (BiT) **Big Transfer (BiT)** is a type of pretraining recipe that pre-trains on a large supervised source dataset and fine-tunes the weights on the target task. Models are trained on the JFT-300M dataset. The models in this collection are fine-tuned on ImageNet. {% include 'code_sn...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/csp-darknet.md
# CSP-DarkNet **CSPDarknet53** is a convolutional neural network and backbone for object detection that uses [DarkNet-53](https://paperswithcode.com/method/darknet-53). It employs a CSPNet strategy to partition the feature map of the base layer into two parts and then merges them through a cross-stage hierarchy. The u...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/csp-resnet.md
# CSP-ResNet **CSPResNet** is a convolutional neural network where we apply the Cross Stage Partial Network (CSPNet) approach to [ResNet](https://paperswithcode.com/method/resnet). The CSPNet partitions the feature map of the base layer into two parts and then merges them through a cross-stage hierarchy. The use of a ...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/csp-resnext.md
# CSP-ResNeXt **CSPResNeXt** is a convolutional neural network where we apply the Cross Stage Partial Network (CSPNet) approach to [ResNeXt](https://paperswithcode.com/method/resnext). The CSPNet partitions the feature map of the base layer into two parts and then merges them through a cross-stage hierarchy. The use o...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/densenet.md
# DenseNet **DenseNet** is a type of convolutional neural network that utilises dense connections between layers, through [Dense Blocks](http://www.paperswithcode.com/method/dense-block), where we connect *all layers* (with matching feature-map sizes) directly with each other. To preserve the feed-forward nature, each...
0
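Because every layer in a dense block concatenates all preceding feature maps, the channel count grows linearly with a growth rate k. A small sketch of that bookkeeping; the values for the input channels and growth rate are illustrative, not taken from the text:

```python
def dense_block_channels(k0, k, num_layers):
    """Channels seen entering each layer of a dense block with growth rate k.

    Layer l receives k0 + l*k channels (all earlier outputs concatenated)
    and contributes k new channels of its own.
    """
    return [k0 + l * k for l in range(num_layers + 1)]

# Illustrative values: 64 input channels, growth rate 32, 6 layers.
print(dense_block_channels(64, 32, 6))  # [64, 96, 128, 160, 192, 224, 256]
```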
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/dla.md
# Deep Layer Aggregation Extending “shallow” skip connections, **Deep Layer Aggregation (DLA)** incorporates more depth and sharing. The authors introduce two structures for deep layer aggregation (DLA): iterative deep aggregation (IDA) and hierarchical deep aggregation (HDA). These structures are expressed through ...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/dpn.md
# Dual Path Network (DPN) A **Dual Path Network (DPN)** is a convolutional neural network which presents a new topology of connection paths internally. The intuition is that [ResNets](https://paperswithcode.com/method/resnet) enable feature re-use while DenseNets enable new feature exploration, and both are importa...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/ecaresnet.md
# ECA-ResNet An **ECA ResNet** is a variant on a [ResNet](https://paperswithcode.com/method/resnet) that utilises an [Efficient Channel Attention module](https://paperswithcode.com/method/efficient-channel-attention). Efficient Channel Attention is an architectural unit based on [squeeze-and-excitation blocks](https:/...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/efficientnet-pruned.md
# EfficientNet (Knapsack Pruned) **EfficientNet** is a convolutional neural network architecture and scaling method that uniformly scales all dimensions of depth/width/resolution using a *compound coefficient*. Unlike conventional practice that arbitrarily scales these factors, the EfficientNet scaling method uniformly...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/efficientnet.md
# EfficientNet **EfficientNet** is a convolutional neural network architecture and scaling method that uniformly scales all dimensions of depth/width/resolution using a *compound coefficient*. Unlike conventional practice that arbitrarily scales these factors, the EfficientNet scaling method uniformly scales network wi...
0
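The compound scaling rule can be written as depth $d = \alpha^{\phi}$, width $w = \beta^{\phi}$ and resolution $r = \gamma^{\phi}$, with the base coefficients constrained so that $\alpha \cdot \beta^{2} \cdot \gamma^{2} \approx 2$. A sketch using the coefficients reported in the EfficientNet paper, stated here from memory, so treat them as assumptions:

```python
# Base coefficients from the EfficientNet paper (quoted from memory):
# alpha scales depth, beta scales width, gamma scales input resolution.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15

def compound_scale(phi):
    """Return (depth, width, resolution) multipliers for compound coefficient phi."""
    return ALPHA ** phi, BETA ** phi, GAMMA ** phi

# FLOPs grow roughly by (alpha * beta**2 * gamma**2) ** phi, i.e. about 2**phi,
# so raising phi by one roughly doubles compute.
flops_factor = ALPHA * BETA ** 2 * GAMMA ** 2
```

With these values `flops_factor` comes out near 1.92, close to the target of 2, which is what makes a single integer-valued $\phi$ a usable compute budget knob.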
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/ensemble-adversarial.md
# Ensemble Adversarial Inception ResNet v2 **Inception-ResNet-v2** is a convolutional neural architecture that builds on the Inception family of architectures but incorporates [residual connections](https://paperswithcode.com/method/residual-connection) (replacing the filter concatenation stage of the Inception arch...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/ese-vovnet.md
# ESE-VoVNet **VoVNet** is a convolutional neural network that seeks to make [DenseNet](https://paperswithcode.com/method/densenet) more efficient by concatenating all features only once in the last feature map, which keeps the input size constant and enables enlarging the output channels. Read about [one-shot aggregatio...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/fbnet.md
# FBNet **FBNet** is a type of convolutional neural architecture discovered through [DNAS](https://paperswithcode.com/method/dnas) neural architecture search. It uses a basic type of image model block inspired by [MobileNetv2](https://paperswithcode.com/method/mobilenetv2) that utilises depthwise convolutions and...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/gloun-inception-v3.md
# (Gluon) Inception v3 **Inception v3** is a convolutional neural network architecture from the Inception family that makes several improvements including using [Label Smoothing](https://paperswithcode.com/method/label-smoothing), Factorized 7 x 7 convolutions, and the use of an [auxiliary classifier](https://paperswit...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/gloun-resnet.md
# (Gluon) ResNet **Residual Networks**, or **ResNets**, learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. Instead of hoping each few stacked layers directly fit a desired underlying mapping, residual nets let these layers fit a residual mapping. They stack [residu...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/gloun-resnext.md
# (Gluon) ResNeXt A **ResNeXt** repeats a [building block](https://paperswithcode.com/method/resnext-block) that aggregates a set of transformations with the same topology. Compared to a [ResNet](https://paperswithcode.com/method/resnet), it exposes a new dimension, *cardinality* (the size of the set of transformatio...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/gloun-senet.md
# (Gluon) SENet A **SENet** is a convolutional neural network architecture that employs [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) to enable the network to perform dynamic channel-wise feature recalibration. The weights from this model were ported from [Gluon](http...
0
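A squeeze-and-excitation block first "squeezes" each channel to a single descriptor by global average pooling, then "excites" via a gating function whose sigmoid output rescales each channel. A dependency-free sketch on a tiny 2-channel feature map; the gating weights here are hypothetical stand-ins for what a real SE block learns through two FC layers with a reduction ratio:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def se_rescale(feature_map, gate_weights):
    """feature_map: list of channels, each a flat list of activations."""
    # Squeeze: one descriptor per channel via global average pooling.
    descriptors = [sum(ch) / len(ch) for ch in feature_map]
    # Excite: a per-channel gate in (0, 1). A real SE block computes this
    # with two learned FC layers; here one weight per channel stands in.
    gates = [sigmoid(w * d) for w, d in zip(gate_weights, descriptors)]
    # Recalibrate: scale every activation by its channel's gate.
    return [[g * a for a in ch] for g, ch in zip(gates, feature_map)]

out = se_rescale([[1.0, 3.0], [2.0, 2.0]], gate_weights=[10.0, -10.0])
# Channel 0 is kept (gate near 1), channel 1 is suppressed (gate near 0).
```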
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/gloun-seresnext.md
# (Gluon) SE-ResNeXt **SE ResNeXt** is a variant of a [ResNeXt](https://www.paperswithcode.com/method/resnext) that employs [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) to enable the network to perform dynamic channel-wise feature recalibration. The weights from this...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/gloun-xception.md
# (Gluon) Xception **Xception** is a convolutional neural network architecture that relies solely on [depthwise separable convolution](https://paperswithcode.com/method/depthwise-separable-convolution) layers. The weights from this model were ported from [Gluon](https://cv.gluon.ai/model_zoo/classification.html). {%...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/hrnet.md
# HRNet **HRNet**, or **High-Resolution Net**, is a general purpose convolutional neural network for tasks like semantic segmentation, object detection and image classification. It is able to maintain high resolution representations through the whole process. We start from a high-resolution convolution stream, gradual...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/ig-resnext.md
# Instagram ResNeXt WSL A **ResNeXt** repeats a [building block](https://paperswithcode.com/method/resnext-block) that aggregates a set of transformations with the same topology. Compared to a [ResNet](https://paperswithcode.com/method/resnet), it exposes a new dimension, *cardinality* (the size of the set of transfo...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/inception-resnet-v2.md
# Inception ResNet v2 **Inception-ResNet-v2** is a convolutional neural architecture that builds on the Inception family of architectures but incorporates [residual connections](https://paperswithcode.com/method/residual-connection) (replacing the filter concatenation stage of the Inception architecture). {% include ...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/inception-v3.md
# Inception v3 **Inception v3** is a convolutional neural network architecture from the Inception family that makes several improvements including using [Label Smoothing](https://paperswithcode.com/method/label-smoothing), Factorized 7 x 7 convolutions, and the use of an [auxiliary classifier](https://paperswithcode.co...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/inception-v4.md
# Inception v4 **Inception-v4** is a convolutional neural network architecture that builds on previous iterations of the Inception family by simplifying the architecture and using more inception modules than [Inception-v3](https://paperswithcode.com/method/inception-v3). {% include 'code_snippets.md' %} ## How do I t...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/legacy-se-resnet.md
# (Legacy) SE-ResNet **SE ResNet** is a variant of a [ResNet](https://www.paperswithcode.com/method/resnet) that employs [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) to enable the network to perform dynamic channel-wise feature recalibration. {% include 'code_snippet...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/legacy-se-resnext.md
# (Legacy) SE-ResNeXt **SE ResNeXt** is a variant of a [ResNeXt](https://www.paperswithcode.com/method/resnext) that employs [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) to enable the network to perform dynamic channel-wise feature recalibration. {% include 'code_sni...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/legacy-senet.md
# (Legacy) SENet A **SENet** is a convolutional neural network architecture that employs [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) to enable the network to perform dynamic channel-wise feature recalibration. The weights from this model were ported from Gluon. {% ...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/mixnet.md
# MixNet **MixNet** is a type of convolutional neural network discovered via AutoML that utilises [MixConvs](https://paperswithcode.com/method/mixconv) instead of regular [depthwise convolutions](https://paperswithcode.com/method/depthwise-convolution). {% include 'code_snippets.md' %} ## How do I train this model? ...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/mnasnet.md
# MnasNet **MnasNet** is a type of convolutional neural network optimized for mobile devices, discovered through mobile neural architecture search, which explicitly incorporates model latency into the main objective so that the search can identify a model that achieves a good trade-off between accuracy and late...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/mobilenet-v2.md
# MobileNet v2 **MobileNetV2** is a convolutional neural network architecture that seeks to perform well on mobile devices. It is based on an [inverted residual structure](https://paperswithcode.com/method/inverted-residual-block) where the residual connections are between the bottleneck layers. The intermediate expa...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/mobilenet-v3.md
# MobileNet v3 **MobileNetV3** is a convolutional neural network that is designed for mobile phone CPUs. The network design includes the use of a [hard swish activation](https://paperswithcode.com/method/hard-swish) and [squeeze-and-excitation](https://paperswithcode.com/method/squeeze-and-excitation-block) modules in...
0
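The hard swish activation mentioned above is a piecewise-linear approximation of swish: $\text{hswish}(x) = x \cdot \frac{\text{ReLU6}(x + 3)}{6}$. A minimal sketch:

```python
def relu6(x):
    return min(max(x, 0.0), 6.0)

def hard_swish(x):
    # x * ReLU6(x + 3) / 6: behaves like the identity for large x,
    # is exactly zero for x <= -3, and is smooth-ish in between.
    return x * relu6(x + 3.0) / 6.0
```

It avoids the exponential in swish's sigmoid, which is the point of using it on mobile CPUs.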
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/nasnet.md
# NASNet **NASNet** is a type of convolutional neural network discovered through neural architecture search. The building blocks consist of normal and reduction cells. {% include 'code_snippets.md' %} ## How do I train this model? You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-mo...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/noisy-student.md
# Noisy Student (EfficientNet) **Noisy Student Training** is a semi-supervised learning approach. It extends the idea of self-training and distillation with the use of equal-or-larger student models and noise added to the student during learning. It has three main steps: 1. train a teacher model on labeled images 2....
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/pnasnet.md
# PNASNet **Progressive Neural Architecture Search**, or **PNAS**, is a method for learning the structure of convolutional neural networks (CNNs). It uses a sequential model-based optimization (SMBO) strategy, where we search the space of cell structures, starting with simple (shallow) models and progressing to comple...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/regnetx.md
# RegNetX **RegNetX** is a convolutional network design space of simple, regular models parameterised by depth $d$, initial width $w\_{0} > 0$, and slope $w\_{a} > 0$; it generates a different block width $u\_{j}$ for each block $j < d$. The key restriction for the RegNet types of model is that there is a linear pa...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/regnety.md
# RegNetY **RegNetY** is a convolutional network design space of simple, regular models parameterised by depth $d$, initial width $w\_{0} > 0$, and slope $w\_{a} > 0$; it generates a different block width $u\_{j}$ for each block $j < d$. The key restriction for the RegNet types of model is that there is a linear pa...
0
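The linear parameterisation referenced above generates per-block widths as $u\_{j} = w\_{0} + w\_{a} \cdot j$ for $0 \le j < d$ (the widths are then quantised to a geometric grid via a third parameter $w\_{m}$, omitted here). A sketch with illustrative parameter values, not taken from the text:

```python
def regnet_widths(w0, wa, depth):
    """Continuous per-block widths u_j = w0 + wa * j for j < depth."""
    return [w0 + wa * j for j in range(depth)]

# Illustrative values for (w0, wa, d); real RegNet variants additionally
# quantise these widths to a geometric progression controlled by w_m.
print(regnet_widths(24, 36, 4))  # [24, 60, 96, 132]
```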
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/res2net.md
# Res2Net **Res2Net** is an image model that employs a variation on bottleneck residual blocks, [Res2Net Blocks](https://paperswithcode.com/method/res2net-block). The motivation is to be able to represent features at multiple scales. This is achieved through a novel building block for CNNs that constructs hierarchical...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/res2next.md
# Res2NeXt **Res2NeXt** is an image model that employs a variation on [ResNeXt](https://paperswithcode.com/method/resnext) bottleneck residual blocks. The motivation is to be able to represent features at multiple scales. This is achieved through a novel building block for CNNs that constructs hierarchical residual-li...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/resnest.md
# ResNeSt A **ResNeSt** is a variant on a [ResNet](https://paperswithcode.com/method/resnet), which instead stacks [Split-Attention blocks](https://paperswithcode.com/method/split-attention). The cardinal group representations are then concatenated along the channel dimension: $V = \text{Concat}${$V^{1},V^{2},\cdots{V...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/resnet-d.md
# ResNet-D **ResNet-D** is a modification on the [ResNet](https://paperswithcode.com/method/resnet) architecture that utilises an [average pooling](https://paperswithcode.com/method/average-pooling) tweak for downsampling. The motivation is that in the unmodified ResNet, the [1×1 convolution](https://paperswithcode.co...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/resnet.md
# ResNet **Residual Networks**, or **ResNets**, learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. Instead of hoping each few stacked layers directly fit a desired underlying mapping, residual nets let these layers fit a residual mapping. They stack [residual block...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/resnext.md
# ResNeXt A **ResNeXt** repeats a [building block](https://paperswithcode.com/method/resnext-block) that aggregates a set of transformations with the same topology. Compared to a [ResNet](https://paperswithcode.com/method/resnet), it exposes a new dimension, *cardinality* (the size of the set of transformations) $C$,...
0
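The aggregated-transformation idea can be sketched as $y = x + \sum_{i=1}^{C} T\_{i}(x)$, where all $C$ branches share the same topology. A toy sketch with scalar "branches"; the transforms here are hypothetical placeholders for the real grouped-convolution bottleneck branches:

```python
def resnext_unit(x, branches):
    """Residual unit aggregating C same-topology transforms of x."""
    return x + sum(t(x) for t in branches)

# Cardinality C = 4: four branches with identical structure (scale by a
# small weight), standing in for the real bottleneck branches.
branches = [lambda x, w=w: w * x for w in (0.1, 0.2, 0.3, 0.4)]
result = resnext_unit(10.0, branches)  # 10 + (1 + 2 + 3 + 4) = 20.0
```

Cardinality increases the number of parallel paths rather than the width or depth of any single path, which is the dimension the paper argues is more effective to grow.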
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/rexnet.md
# RexNet **Rank Expansion Networks** (ReXNets) follow a set of new design principles for designing bottlenecks in image classification models. The authors refine each layer by 1) expanding the input channel size of the convolution layer and 2) replacing the [ReLU6s](https://www.paperswithcode.com/method/relu6). {% includ...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/se-resnet.md
# SE-ResNet **SE ResNet** is a variant of a [ResNet](https://www.paperswithcode.com/method/resnet) that employs [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) to enable the network to perform dynamic channel-wise feature recalibration. {% include 'code_snippets.md' %} ...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/selecsls.md
# SelecSLS **SelecSLS** uses novel selective long and short range skip connections to improve information flow, allowing for a drastically faster network without compromising accuracy. {% include 'code_snippets.md' %} ## How do I train this model? You can follow the [timm recipe scripts](https://rwightman.github...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/seresnext.md
# SE-ResNeXt **SE ResNeXt** is a variant of a [ResNeXt](https://www.paperswithcode.com/method/resneXt) that employs [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) to enable the network to perform dynamic channel-wise feature recalibration. {% include 'code_snippets.md'...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/skresnet.md
# SK-ResNet **SK ResNet** is a variant of a [ResNet](https://www.paperswithcode.com/method/resnet) that employs a [Selective Kernel](https://paperswithcode.com/method/selective-kernel) unit. In general, all the large kernel convolutions in the original bottleneck blocks in ResNet are replaced by the proposed [SK convo...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/skresnext.md
# SK-ResNeXt **SK ResNeXt** is a variant of a [ResNeXt](https://www.paperswithcode.com/method/resnext) that employs a [Selective Kernel](https://paperswithcode.com/method/selective-kernel) unit. In general, all the large kernel convolutions in the original bottleneck blocks in ResNeXt are replaced by the proposed [SK ...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/spnasnet.md
# SPNASNet **Single-Path NAS** is a novel differentiable NAS method for designing hardware-efficient ConvNets in less than 4 hours. {% include 'code_snippets.md' %} ## How do I train this model? You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new mo...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/ssl-resnet.md
# SSL ResNet **Residual Networks**, or **ResNets**, learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. Instead of hoping each few stacked layers directly fit a desired underlying mapping, residual nets let these layers fit a residual mapping. They stack [residual b...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/ssl-resnext.md
# SSL ResNeXt A **ResNeXt** repeats a [building block](https://paperswithcode.com/method/resnext-block) that aggregates a set of transformations with the same topology. Compared to a [ResNet](https://paperswithcode.com/method/resnet), it exposes a new dimension, *cardinality* (the size of the set of transformations) ...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/swsl-resnet.md
# SWSL ResNet **Residual Networks**, or **ResNets**, learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. Instead of hoping each few stacked layers directly fit a desired underlying mapping, residual nets let these layers fit a residual mapping. They stack [residual ...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/swsl-resnext.md
# SWSL ResNeXt A **ResNeXt** repeats a [building block](https://paperswithcode.com/method/resnext-block) that aggregates a set of transformations with the same topology. Compared to a [ResNet](https://paperswithcode.com/method/resnet), it exposes a new dimension, *cardinality* (the size of the set of transformations)...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/tf-efficientnet-condconv.md
# (Tensorflow) EfficientNet CondConv **EfficientNet** is a convolutional neural network architecture and scaling method that uniformly scales all dimensions of depth/width/resolution using a *compound coefficient*. Unlike conventional practice that arbitrarily scales these factors, the EfficientNet scaling method unifo...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/tf-efficientnet-lite.md
# (Tensorflow) EfficientNet Lite **EfficientNet** is a convolutional neural network architecture and scaling method that uniformly scales all dimensions of depth/width/resolution using a *compound coefficient*. Unlike conventional practice that arbitrarily scales these factors, the EfficientNet scaling method uniformly...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/tf-efficientnet.md
# (Tensorflow) EfficientNet **EfficientNet** is a convolutional neural network architecture and scaling method that uniformly scales all dimensions of depth/width/resolution using a *compound coefficient*. Unlike conventional practice that arbitrarily scales these factors, the EfficientNet scaling method uniformly scal...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/tf-inception-v3.md
# (Tensorflow) Inception v3 **Inception v3** is a convolutional neural network architecture from the Inception family that makes several improvements including using [Label Smoothing](https://paperswithcode.com/method/label-smoothing), Factorized 7 x 7 convolutions, and the use of an [auxiliary classifier](https://pape...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/tf-mixnet.md
# (Tensorflow) MixNet **MixNet** is a type of convolutional neural network discovered via AutoML that utilises [MixConvs](https://paperswithcode.com/method/mixconv) instead of regular [depthwise convolutions](https://paperswithcode.com/method/depthwise-convolution). The weights from this model were ported from [Tenso...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/tf-mobilenet-v3.md
# (Tensorflow) MobileNet v3 **MobileNetV3** is a convolutional neural network that is designed for mobile phone CPUs. The network design includes the use of a [hard swish activation](https://paperswithcode.com/method/hard-swish) and [squeeze-and-excitation](https://paperswithcode.com/method/squeeze-and-excitation-bloc...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/tresnet.md
# TResNet A **TResNet** is a variant on a [ResNet](https://paperswithcode.com/method/resnet) that aims to boost accuracy while maintaining GPU training and inference efficiency. It contains several design tricks including a SpaceToDepth stem, [Anti-Alias downsampling](https://paperswithcode.com/method/anti-alias-down...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/vision-transformer.md
# Vision Transformer (ViT) The **Vision Transformer** is a model for image classification that employs a Transformer-like architecture over patches of the image. This includes the use of [Multi-Head Attention](https://paperswithcode.com/method/multi-head-attention), [Scaled Dot-Product Attention](https://paperswithcod...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/wide-resnet.md
# Wide ResNet **Wide Residual Networks** are a variant on [ResNets](https://paperswithcode.com/method/resnet) where we decrease depth and increase the width of residual networks. This is achieved through the use of [wide residual blocks](https://paperswithcode.com/method/wide-residual-block). {% include 'code_snippet...
0
hf_public_repos/pytorch-image-models/docs/models/.templates
hf_public_repos/pytorch-image-models/docs/models/.templates/models/xception.md
# Xception **Xception** is a convolutional neural network architecture that relies solely on [depthwise separable convolution layers](https://paperswithcode.com/method/depthwise-separable-convolution). The weights from this model were ported from [Tensorflow/Models](https://github.com/tensorflow/models). {% include ...
0
hf_public_repos/pytorch-image-models
hf_public_repos/pytorch-image-models/hfdocs/README.md
# Hugging Face Timm Docs

## Getting Started

```
pip install git+https://github.com/huggingface/doc-builder.git@main#egg=hf-doc-builder
pip install watchdog black
```

## Preview the Docs Locally

```
doc-builder preview timm hfdocs/source
```
0
hf_public_repos/pytorch-image-models/hfdocs
hf_public_repos/pytorch-image-models/hfdocs/source/_toctree.yml
- sections:
  - local: index
    title: Home
  - local: quickstart
    title: Quickstart
  - local: installation
    title: Installation
  title: Get started
- sections:
  - local: feature_extraction
    title: Using Pretrained Models as Feature Extractors
  - local: training_script
    title: Training With The Offici...
0
hf_public_repos/pytorch-image-models/hfdocs
hf_public_repos/pytorch-image-models/hfdocs/source/feature_extraction.mdx
# Feature Extraction All of the models in `timm` have consistent mechanisms for obtaining various types of features from the model for tasks besides classification. ## Penultimate Layer Features (Pre-Classifier Features) The features from the penultimate model layer can be obtained in several ways without requiring ...
0
hf_public_repos/pytorch-image-models/hfdocs
hf_public_repos/pytorch-image-models/hfdocs/source/hf_hub.mdx
# Sharing and Loading Models From the Hugging Face Hub The `timm` library has a built-in integration with the Hugging Face Hub, making it easy to share and load models from the 🤗 Hub. In this short guide, we'll see how to: 1. Share a `timm` model on the Hub 2. Load that model back from the Hub ## Authent...
0
hf_public_repos/pytorch-image-models/hfdocs
hf_public_repos/pytorch-image-models/hfdocs/source/index.mdx
# timm <img class="float-left !m-0 !border-0 !dark:border-0 !shadow-none !max-w-lg w-[150px]" src="https://huggingface.co/front/thumbnails/docs/timm.png"/> `timm` is a library containing SOTA computer vision models, layers, utilities, optimizers, schedulers, data-loaders, augmentations, and training/evaluation script...
0
hf_public_repos/pytorch-image-models/hfdocs
hf_public_repos/pytorch-image-models/hfdocs/source/installation.mdx
# Installation Before you start, you'll need to set up your environment and install the appropriate packages. `timm` is tested on **Python 3+**. ## Virtual Environment You should install `timm` in a [virtual environment](https://docs.python.org/3/library/venv.html) to keep things tidy and avoid dependency conflicts. ...
0
hf_public_repos/pytorch-image-models/hfdocs
hf_public_repos/pytorch-image-models/hfdocs/source/models.mdx
# Model Summaries The model architectures included come from a wide variety of sources. Sources, including papers, original impl ("reference code") that I rewrote / adapted, and PyTorch impl that I leveraged directly ("code") are listed below. Most included models have pretrained weights. The weights are either: 1. ...
0
hf_public_repos/pytorch-image-models/hfdocs
hf_public_repos/pytorch-image-models/hfdocs/source/quickstart.mdx
# Quickstart This quickstart is intended for developers who are ready to dive into the code and see an example of how to integrate `timm` into their model training workflow. First, you'll need to install `timm`. For more information on installation, see [Installation](installation). ```bash pip install timm ``` ## ...
0
hf_public_repos/pytorch-image-models/hfdocs
hf_public_repos/pytorch-image-models/hfdocs/source/results.mdx
# Results CSV files containing ImageNet-1K and out-of-distribution (OOD) test set validation results for all models with pretrained weights are located in the repository [results folder](https://github.com/rwightman/pytorch-image-models/tree/master/results). ## Self-trained Weights The table below includes ImageNe...
0
hf_public_repos/pytorch-image-models/hfdocs
hf_public_repos/pytorch-image-models/hfdocs/source/training_script.mdx
# Scripts Train, validation, inference, and checkpoint cleaning scripts are included in the GitHub root folder. Scripts are not currently packaged in the pip release. The training and validation scripts evolved from early versions of the [PyTorch Imagenet Examples](https://github.com/pytorch/examples). I have added sign...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/adversarial-inception-v3.mdx
# Adversarial Inception v3 **Inception v3** is a convolutional neural network architecture from the Inception family that makes several improvements including using [Label Smoothing](https://paperswithcode.com/method/label-smoothing), Factorized 7 x 7 convolutions, and the use of an [auxiliary classifier](https://paper...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/advprop.mdx
# AdvProp (EfficientNet) **AdvProp** is an adversarial training scheme which treats adversarial examples as additional examples, to prevent overfitting. Key to the method is the usage of a separate auxiliary batch norm for adversarial examples, as they have different underlying distributions to normal examples. The w...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/big-transfer.mdx
# Big Transfer (BiT) **Big Transfer (BiT)** is a type of pretraining recipe that pre-trains on a large supervised source dataset, and fine-tunes the weights on the target task. Models are trained on the JFT-300M dataset. The finetuned models contained in this collection are finetuned on ImageNet. ## How do I use thi...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/csp-darknet.mdx
# CSP-DarkNet **CSPDarknet53** is a convolutional neural network and backbone for object detection that uses [DarkNet-53](https://paperswithcode.com/method/darknet-53). It employs a CSPNet strategy to partition the feature map of the base layer into two parts and then merges them through a cross-stage hierarchy. The u...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/csp-resnet.mdx
# CSP-ResNet **CSPResNet** is a convolutional neural network where we apply the Cross Stage Partial Network (CSPNet) approach to [ResNet](https://paperswithcode.com/method/resnet). The CSPNet partitions the feature map of the base layer into two parts and then merges them through a cross-stage hierarchy. The use of a ...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/csp-resnext.mdx
# CSP-ResNeXt **CSPResNeXt** is a convolutional neural network where we apply the Cross Stage Partial Network (CSPNet) approach to [ResNeXt](https://paperswithcode.com/method/resnext). The CSPNet partitions the feature map of the base layer into two parts and then merges them through a cross-stage hierarchy. The use o...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/densenet.mdx
# DenseNet **DenseNet** is a type of convolutional neural network that utilises dense connections between layers, through [Dense Blocks](http://www.paperswithcode.com/method/dense-block), where we connect *all layers* (with matching feature-map sizes) directly with each other. To preserve the feed-forward nature, each...
0
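Because each layer's input is the concatenation of the block input and all preceding layers' outputs, the channel count inside a dense block grows linearly with the growth rate. A quick back-of-the-envelope sketch (illustrative only, not timm code; the `k0`/`k` values are typical DenseNet-121 stage-1 numbers used as an example):

```python
# Channel growth inside a dense block: with growth rate k, layer l sees the
# concatenation of the block input (k0 channels) plus the k-channel outputs
# of all l-1 earlier layers. An L-layer block has L*(L+1)/2 direct connections.

def dense_layer_in_channels(k0: int, k: int, layer: int) -> int:
    return k0 + k * (layer - 1)

k0, k = 64, 32  # example values
widths = [dense_layer_in_channels(k0, k, l) for l in range(1, 7)]
print(widths)
```

This linear growth (64, 96, 128, ...) is why DenseNets pair dense blocks with transition layers that compress channels back down between stages.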
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/dla.mdx
# Deep Layer Aggregation Extending “shallow” skip connections, **Deep Layer Aggregation (DLA)** incorporates more depth and sharing. The authors introduce two structures for deep layer aggregation (DLA): iterative deep aggregation (IDA) and hierarchical deep aggregation (HDA). These structures are expressed through ...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/dpn.mdx
# Dual Path Network (DPN) A **Dual Path Network (DPN)** is a convolutional neural network which presents a new topology of connection paths internally. The intuition is that [ResNets](https://paperswithcode.com/method/resnet) enable feature re-usage while DenseNet enables new feature exploration, and both are importa...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/ecaresnet.mdx
# ECA-ResNet An **ECA ResNet** is a variant on a [ResNet](https://paperswithcode.com/method/resnet) that utilises an [Efficient Channel Attention module](https://paperswithcode.com/method/efficient-channel-attention). Efficient Channel Attention is an architectural unit based on [squeeze-and-excitation blocks](https:/...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/efficientnet-pruned.mdx
# EfficientNet (Knapsack Pruned) **EfficientNet** is a convolutional neural network architecture and scaling method that uniformly scales all dimensions of depth/width/resolution using a *compound coefficient*. Unlike conventional practice that arbitrarily scales these factors, the EfficientNet scaling method uniformly...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/efficientnet.mdx
# EfficientNet **EfficientNet** is a convolutional neural network architecture and scaling method that uniformly scales all dimensions of depth/width/resolution using a *compound coefficient*. Unlike conventional practice that arbitrarily scales these factors, the EfficientNet scaling method uniformly scales network wi...
0
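The compound coefficient idea can be sketched numerically. The base coefficients below (α=1.2, β=1.1, γ=1.15) come from the EfficientNet paper; the rounding and channel-snapping details of the real implementation are omitted, so treat this as a simplified illustration rather than timm's exact logic.

```python
# EfficientNet compound scaling sketch: for compound coefficient phi,
# depth scales by alpha**phi, width by beta**phi, resolution by gamma**phi,
# subject to the paper's constraint alpha * beta**2 * gamma**2 ~= 2
# (so each unit of phi roughly doubles FLOPs).

alpha, beta, gamma = 1.2, 1.1, 1.15  # depth, width, resolution bases

def scale(phi: int):
    return alpha ** phi, beta ** phi, gamma ** phi

d, w, r = scale(3)  # e.g. roughly the B3-level multipliers
print(round(d, 3), round(w, 3), round(r, 3))
print(round(alpha * beta**2 * gamma**2, 2))  # FLOPs-doubling constraint
```

The point of the constraint check is that scaling all three dimensions together keeps the FLOPs budget predictable: raising phi by one multiplies compute by approximately α·β²·γ² ≈ 2.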
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/ensemble-adversarial.mdx
# Ensemble Adversarial Inception ResNet v2 **Inception-ResNet-v2** is a convolutional neural architecture that builds on the Inception family of architectures but incorporates [residual connections](https://paperswithcode.com/method/residual-connection) (replacing the filter concatenation stage of the Inception arch...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/ese-vovnet.mdx
# ESE-VoVNet **VoVNet** is a convolutional neural network that seeks to make [DenseNet](https://paperswithcode.com/method/densenet) more efficient by concatenating all features only once in the last feature map, which keeps the input size constant and enables enlarging the new output channel. Read about [one-shot aggregatio...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/fbnet.mdx
# FBNet **FBNet** is a type of convolutional neural architecture discovered through [DNAS](https://paperswithcode.com/method/dnas) neural architecture search. It utilises a basic type of image model block inspired by [MobileNetv2](https://paperswithcode.com/method/mobilenetv2) that utilises depthwise convolutions and...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/gloun-inception-v3.mdx
# (Gluon) Inception v3 **Inception v3** is a convolutional neural network architecture from the Inception family that makes several improvements including using [Label Smoothing](https://paperswithcode.com/method/label-smoothing), Factorized 7 x 7 convolutions, and the use of an [auxiliary classifer](https://paperswit...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/gloun-resnet.mdx
# (Gluon) ResNet **Residual Networks**, or **ResNets**, learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. Instead of hoping each few stacked layers directly fit a desired underlying mapping, residual nets let these layers fit a residual mapping. They stack [residu...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/gloun-resnext.mdx
# (Gluon) ResNeXt A **ResNeXt** repeats a [building block](https://paperswithcode.com/method/resnext-block) that aggregates a set of transformations with the same topology. Compared to a [ResNet](https://paperswithcode.com/method/resnet), it exposes a new dimension, *cardinality* (the size of the set of transformatio...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/gloun-senet.mdx
# (Gluon) SENet A **SENet** is a convolutional neural network architecture that employs [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) to enable the network to perform dynamic channel-wise feature recalibration. The weights from this model were ported from [Gluon](http...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/gloun-seresnext.mdx
# (Gluon) SE-ResNeXt **SE ResNeXt** is a variant of a [ResNext](https://www.paperswithcode.com/method/resnext) that employs [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) to enable the network to perform dynamic channel-wise feature recalibration. The weights from this...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/gloun-xception.mdx
# (Gluon) Xception **Xception** is a convolutional neural network architecture that relies solely on [depthwise separable convolution](https://paperswithcode.com/method/depthwise-separable-convolution) layers. The weights from this model were ported from [Gluon](https://cv.gluon.ai/model_zoo/classification.html). ##...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/hrnet.mdx
# HRNet **HRNet**, or **High-Resolution Net**, is a general purpose convolutional neural network for tasks like semantic segmentation, object detection and image classification. It is able to maintain high resolution representations through the whole process. We start from a high-resolution convolution stream, gradual...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/ig-resnext.mdx
# Instagram ResNeXt WSL A **ResNeXt** repeats a [building block](https://paperswithcode.com/method/resnext-block) that aggregates a set of transformations with the same topology. Compared to a [ResNet](https://paperswithcode.com/method/resnet), it exposes a new dimension, *cardinality* (the size of the set of transfo...
0
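The cardinality dimension described above is implemented with grouped convolutions, and its cost saving is easy to verify by counting weights. The sketch below is illustrative arithmetic only (hypothetical channel counts, not timm code): splitting a 3x3 convolution's channels into C groups divides its weight count by C.

```python
# Grouped 3x3 convolution weight count: with `groups` (the cardinality C),
# each group convolves c_in/C input channels to c_out/C output channels,
# so the total weight count is the dense count divided by C.

def grouped_conv3x3_params(c_in: int, c_out: int, groups: int) -> int:
    assert c_in % groups == 0 and c_out % groups == 0
    return 3 * 3 * (c_in // groups) * (c_out // groups) * groups

dense = grouped_conv3x3_params(256, 256, 1)     # ordinary convolution
resnext = grouped_conv3x3_params(256, 256, 32)  # cardinality 32
print(dense, resnext, dense // resnext)
```

This 32× reduction in the 3x3 stage is what lets ResNeXt raise cardinality (more parallel transformations) at roughly the same parameter budget as a plain ResNet bottleneck.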