Columns: repo_id (string, length 15–86) · file_path (string, length 27–180) · content (string, length 1–1.75M) · __index_level_0__ (int64, always 0)
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/inception-resnet-v2.mdx
# Inception ResNet v2 **Inception-ResNet-v2** is a convolutional neural architecture that builds on the Inception family of architectures but incorporates [residual connections](https://paperswithcode.com/method/residual-connection) (replacing the filter concatenation stage of the Inception architecture). ## How do I...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/inception-v3.mdx
# Inception v3 **Inception v3** is a convolutional neural network architecture from the Inception family that makes several improvements including using [Label Smoothing](https://paperswithcode.com/method/label-smoothing), Factorized 7 x 7 convolutions, and the use of an [auxiliary classifier](https://paperswithcode.co...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/inception-v4.mdx
# Inception v4 **Inception-v4** is a convolutional neural network architecture that builds on previous iterations of the Inception family by simplifying the architecture and using more inception modules than [Inception-v3](https://paperswithcode.com/method/inception-v3). ## How do I use this model on an image? To loa...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/legacy-se-resnet.mdx
# (Legacy) SE-ResNet **SE ResNet** is a variant of a [ResNet](https://www.paperswithcode.com/method/resnet) that employs [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) to enable the network to perform dynamic channel-wise feature recalibration. ## How do I use this mod...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/legacy-se-resnext.mdx
# (Legacy) SE-ResNeXt **SE ResNeXt** is a variant of a [ResNeXt](https://www.paperswithcode.com/method/resnext) that employs [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) to enable the network to perform dynamic channel-wise feature recalibration. ## How do I use this...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/legacy-senet.mdx
# (Legacy) SENet A **SENet** is a convolutional neural network architecture that employs [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) to enable the network to perform dynamic channel-wise feature recalibration. The weights from this model were ported from Gluon. ## ...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/mixnet.mdx
# MixNet **MixNet** is a type of convolutional neural network discovered via AutoML that utilises [MixConvs](https://paperswithcode.com/method/mixconv) instead of regular [depthwise convolutions](https://paperswithcode.com/method/depthwise-convolution). ## How do I use this model on an image? To load a pretrained mo...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/mnasnet.mdx
# MnasNet **MnasNet** is a type of convolutional neural network optimized for mobile devices that is discovered through mobile neural architecture search, which explicitly incorporates model latency into the main objective so that the search can identify a model that achieves a good trade-off between accuracy and late...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/mobilenet-v2.mdx
# MobileNet v2 **MobileNetV2** is a convolutional neural network architecture that seeks to perform well on mobile devices. It is based on an [inverted residual structure](https://paperswithcode.com/method/inverted-residual-block) where the residual connections are between the bottleneck layers. The intermediate expa...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/mobilenet-v3.mdx
# MobileNet v3 **MobileNetV3** is a convolutional neural network that is designed for mobile phone CPUs. The network design includes the use of a [hard swish activation](https://paperswithcode.com/method/hard-swish) and [squeeze-and-excitation](https://paperswithcode.com/method/squeeze-and-excitation-block) modules in...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/nasnet.mdx
# NASNet **NASNet** is a type of convolutional neural network discovered through neural architecture search. The building blocks consist of normal and reduction cells. ## How do I use this model on an image? To load a pretrained model: ```py >>> import timm >>> model = timm.create_model('nasnetalarge', pretrained=T...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/noisy-student.mdx
# Noisy Student (EfficientNet)

**Noisy Student Training** is a semi-supervised learning approach. It extends the idea of self-training and distillation with the use of equal-or-larger student models and noise added to the student during learning. It has three main steps:

1. train a teacher model on labeled images
2....
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/pnasnet.mdx
# PNASNet **Progressive Neural Architecture Search**, or **PNAS**, is a method for learning the structure of convolutional neural networks (CNNs). It uses a sequential model-based optimization (SMBO) strategy, where we search the space of cell structures, starting with simple (shallow) models and progressing to comple...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/regnetx.mdx
# RegNetX **RegNetX** is a convolutional network design space of simple, regular models parameterised by depth $d$, initial width $w\_{0} > 0$, and slope $w\_{a} > 0$; it generates a different block width $u\_{j}$ for each block $j < d$. The key restriction for RegNet models is that there is a linear pa...
0
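The linear width parameterisation described above can be sketched in a few lines. The values of $d$, $w_0$, and $w_a$ below are illustrative only (not from any published RegNet config), and the real design space additionally quantises these widths, which this sketch omits.

```python
# Sketch of the RegNet linear block-width parameterisation:
# u_j = w_0 + w_a * j for each block j < d. Values are illustrative only.
def regnet_widths(d, w0, wa):
    """Per-block widths u_j = w0 + wa * j for j = 0..d-1."""
    return [w0 + wa * j for j in range(d)]

print(regnet_widths(d=4, w0=24, wa=8))  # → [24, 32, 40, 48]
```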
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/regnety.mdx
# RegNetY **RegNetY** is a convolutional network design space of simple, regular models parameterised by depth $d$, initial width $w\_{0} > 0$, and slope $w\_{a} > 0$; it generates a different block width $u\_{j}$ for each block $j < d$. The key restriction for RegNet models is that there is a linear pa...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/res2net.mdx
# Res2Net **Res2Net** is an image model that employs a variation on bottleneck residual blocks, [Res2Net Blocks](https://paperswithcode.com/method/res2net-block). The motivation is to be able to represent features at multiple scales. This is achieved through a novel building block for CNNs that constructs hierarchical...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/res2next.mdx
# Res2NeXt **Res2NeXt** is an image model that employs a variation on [ResNeXt](https://paperswithcode.com/method/resnext) bottleneck residual blocks. The motivation is to be able to represent features at multiple scales. This is achieved through a novel building block for CNNs that constructs hierarchical residual-li...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/resnest.mdx
# ResNeSt A **ResNeSt** is a variant on a [ResNet](https://paperswithcode.com/method/resnet), which instead stacks [Split-Attention blocks](https://paperswithcode.com/method/split-attention). The cardinal group representations are then concatenated along the channel dimension: $V = \text{Concat}${$V^{1},V^{2},\cdots{V...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/resnet-d.mdx
# ResNet-D **ResNet-D** is a modification on the [ResNet](https://paperswithcode.com/method/resnet) architecture that utilises an [average pooling](https://paperswithcode.com/method/average-pooling) tweak for downsampling. The motivation is that in the unmodified ResNet, the [1×1 convolution](https://paperswithcode.co...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/resnet.mdx
# ResNet **Residual Networks**, or **ResNets**, learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. Instead of hoping each few stacked layers directly fit a desired underlying mapping, residual nets let these layers fit a residual mapping. They stack [residual block...
0
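The residual formulation described above — the stacked layers fit a residual $F(x)$ and the block outputs $F(x) + x$ via an identity shortcut — can be illustrated with a scalar toy example (plain Python, no framework; purely illustrative):

```python
# Toy illustration of a residual block: the layers fit a residual F(x),
# and the identity shortcut adds the input back: y = F(x) + x.
def residual_block(x, f):
    return f(x) + x

# If the desired mapping is close to identity, F only needs to model the
# small correction: a target of y = 1.1 * x means F(x) = 0.1 * x.
f = lambda x: 0.1 * x
print(residual_block(10.0, f))  # → 11.0
```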
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/resnext.mdx
# ResNeXt A **ResNeXt** repeats a [building block](https://paperswithcode.com/method/resnext-block) that aggregates a set of transformations with the same topology. Compared to a [ResNet](https://paperswithcode.com/method/resnet), it exposes a new dimension, *cardinality* (the size of the set of transformations) $C$,...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/rexnet.mdx
# RexNet **Rank Expansion Networks** (ReXNets) follow a set of new design principles for designing bottlenecks in image classification models. The authors refine each layer by (1) expanding the input channel size of the convolution layer and (2) replacing the [ReLU6s](https://www.paperswithcode.com/method/relu6). ## How do...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/se-resnet.mdx
# SE-ResNet **SE ResNet** is a variant of a [ResNet](https://www.paperswithcode.com/method/resnet) that employs [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) to enable the network to perform dynamic channel-wise feature recalibration. ## How do I use this model on an ...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/selecsls.mdx
# SelecSLS **SelecSLS** uses novel selective long and short range skip connections to improve the information flow allowing for a drastically faster network without compromising accuracy. ## How do I use this model on an image? To load a pretrained model: ```py >>> import timm >>> model = timm.create_model('selecsl...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/seresnext.mdx
# SE-ResNeXt **SE ResNeXt** is a variant of a [ResNeXt](https://www.paperswithcode.com/method/resneXt) that employs [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) to enable the network to perform dynamic channel-wise feature recalibration. ## How do I use this model on...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/skresnet.mdx
# SK-ResNet **SK ResNet** is a variant of a [ResNet](https://www.paperswithcode.com/method/resnet) that employs a [Selective Kernel](https://paperswithcode.com/method/selective-kernel) unit. In general, all the large kernel convolutions in the original bottleneck blocks in ResNet are replaced by the proposed [SK convo...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/skresnext.mdx
# SK-ResNeXt **SK ResNeXt** is a variant of a [ResNeXt](https://www.paperswithcode.com/method/resnext) that employs a [Selective Kernel](https://paperswithcode.com/method/selective-kernel) unit. In general, all the large kernel convolutions in the original bottleneck blocks in ResNext are replaced by the proposed [SK ...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/spnasnet.mdx
# SPNASNet

**Single-Path NAS** is a novel differentiable NAS method for designing hardware-efficient ConvNets in less than 4 hours.

## How do I use this model on an image?

To load a pretrained model:

```py
>>> import timm
>>> model = timm.create_model('spnasnet_100', pretrained=True)
>>> model.eval()
```

To load a...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/ssl-resnet.mdx
# SSL ResNet **Residual Networks**, or **ResNets**, learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. Instead of hoping each few stacked layers directly fit a desired underlying mapping, residual nets let these layers fit a residual mapping. They stack [residual b...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/swsl-resnet.mdx
# SWSL ResNet **Residual Networks**, or **ResNets**, learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. Instead of hoping each few stacked layers directly fit a desired underlying mapping, residual nets let these layers fit a residual mapping. They stack [residual ...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/swsl-resnext.mdx
# SWSL ResNeXt A **ResNeXt** repeats a [building block](https://paperswithcode.com/method/resnext-block) that aggregates a set of transformations with the same topology. Compared to a [ResNet](https://paperswithcode.com/method/resnet), it exposes a new dimension, *cardinality* (the size of the set of transformations)...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/tf-efficientnet-condconv.mdx
# (Tensorflow) EfficientNet CondConv **EfficientNet** is a convolutional neural network architecture and scaling method that uniformly scales all dimensions of depth/width/resolution using a *compound coefficient*. Unlike conventional practice that arbitrarily scales these factors, the EfficientNet scaling method unifo...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/tf-efficientnet-lite.mdx
# (Tensorflow) EfficientNet Lite **EfficientNet** is a convolutional neural network architecture and scaling method that uniformly scales all dimensions of depth/width/resolution using a *compound coefficient*. Unlike conventional practice that arbitrarily scales these factors, the EfficientNet scaling method uniformly...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/tf-efficientnet.mdx
# (Tensorflow) EfficientNet **EfficientNet** is a convolutional neural network architecture and scaling method that uniformly scales all dimensions of depth/width/resolution using a *compound coefficient*. Unlike conventional practice that arbitrarily scales these factors, the EfficientNet scaling method uniformly scal...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/tf-inception-v3.mdx
# (Tensorflow) Inception v3 **Inception v3** is a convolutional neural network architecture from the Inception family that makes several improvements including using [Label Smoothing](https://paperswithcode.com/method/label-smoothing), Factorized 7 x 7 convolutions, and the use of an [auxiliary classifier](https://pape...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/tf-mixnet.mdx
# (Tensorflow) MixNet **MixNet** is a type of convolutional neural network discovered via AutoML that utilises [MixConvs](https://paperswithcode.com/method/mixconv) instead of regular [depthwise convolutions](https://paperswithcode.com/method/depthwise-convolution). The weights from this model were ported from [Tenso...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/tf-mobilenet-v3.mdx
# (Tensorflow) MobileNet v3 **MobileNetV3** is a convolutional neural network that is designed for mobile phone CPUs. The network design includes the use of a [hard swish activation](https://paperswithcode.com/method/hard-swish) and [squeeze-and-excitation](https://paperswithcode.com/method/squeeze-and-excitation-bloc...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/tresnet.mdx
# TResNet A **TResNet** is a variant on a [ResNet](https://paperswithcode.com/method/resnet) that aims to boost accuracy while maintaining GPU training and inference efficiency. TResNet models contain several design tricks including a SpaceToDepth stem, [Anti-Alias downsampling](https://paperswithcode.com/method/anti-alias-down...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/wide-resnet.mdx
# Wide ResNet **Wide Residual Networks** are a variant on [ResNets](https://paperswithcode.com/method/resnet) where we decrease depth and increase the width of residual networks. This is achieved through the use of [wide residual blocks](https://paperswithcode.com/method/wide-residual-block). ## How do I use this mod...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/models/xception.mdx
# Xception **Xception** is a convolutional neural network architecture that relies solely on [depthwise separable convolution layers](https://paperswithcode.com/method/depthwise-separable-convolution). The weights from this model were ported from [Tensorflow/Models](https://github.com/tensorflow/models). ## How do I...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/reference/data.mdx
# Data

[[autodoc]] timm.data.create_dataset

[[autodoc]] timm.data.create_loader

[[autodoc]] timm.data.create_transform

[[autodoc]] timm.data.resolve_data_config
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/reference/models.mdx
# Models

[[autodoc]] timm.create_model

[[autodoc]] timm.list_models
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/reference/optimizers.mdx
# Optimization

This page contains the API reference documentation for learning rate optimizers included in `timm`.

## Optimizers

### Factory functions

[[autodoc]] timm.optim.optim_factory.create_optimizer

[[autodoc]] timm.optim.optim_factory.create_optimizer_v2

### Optimizer Classes

[[autodoc]] timm.optim.adabeli...
0
hf_public_repos/pytorch-image-models/hfdocs/source
hf_public_repos/pytorch-image-models/hfdocs/source/reference/schedulers.mdx
# Learning Rate Schedulers

This page contains the API reference documentation for learning rate schedulers included in `timm`.

## Schedulers

### Factory functions

[[autodoc]] timm.scheduler.scheduler_factory.create_scheduler

[[autodoc]] timm.scheduler.scheduler_factory.create_scheduler_v2

### Scheduler Classes

[[...
0
hf_public_repos/pytorch-image-models
hf_public_repos/pytorch-image-models/results/README.md
# Validation and Benchmark Results This folder contains validation and benchmark results for the models in this collection. Validation scores are currently only run for models with pretrained weights and ImageNet-1k heads; benchmark numbers are run for all. ## Datasets There are currently results for the ImageNet va...
0
hf_public_repos/pytorch-image-models
hf_public_repos/pytorch-image-models/results/benchmark-infer-amp-nchw-pt111-cu113-rtx3090.csv
model,infer_samples_per_sec,infer_step_time,infer_batch_size,infer_img_size,param_count
tinynet_e,47972.76,21.335,1024,106,2.04
mobilenetv3_small_050,42473.43,24.099,1024,224,1.59
lcnet_035,39739.31,25.756,1024,224,1.64
lcnet_050,35211.0,29.071,1024,224,1.88
mobilenetv3_small_075,31410.3,32.589,1024,224,2.04
mobilenetv...
0
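In these benchmark CSVs, `infer_samples_per_sec` appears to be roughly `infer_batch_size` divided by `infer_step_time` (with step time in milliseconds). A quick sanity check on the `tinynet_e` row above, assuming that column interpretation:

```python
# Sanity-check the relationship between benchmark CSV columns:
# samples/sec ≈ batch_size / (step_time_ms / 1000). Row values are copied
# from the tinynet_e line in benchmark-infer-amp-nchw-pt111-cu113-rtx3090.csv.
batch_size = 1024
step_time_ms = 21.335
reported = 47972.76

derived = batch_size / (step_time_ms / 1000.0)
rel_err = abs(derived - reported) / reported
print(round(derived, 2), round(rel_err * 100, 3))  # within a fraction of a percent
```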
hf_public_repos/pytorch-image-models
hf_public_repos/pytorch-image-models/results/benchmark-infer-amp-nchw-pt112-cu113-rtx3090.csv
model,infer_samples_per_sec,infer_step_time,infer_batch_size,infer_img_size,infer_gmacs,infer_macts,param_count
tinynet_e,49285.12,20.767,1024,106,0.03,0.69,2.04
mobilenetv3_small_050,43905.96,23.312,1024,224,0.03,0.92,1.59
lcnet_035,40961.84,24.988,1024,224,0.03,1.04,1.64
lcnet_050,36451.18,28.081,1024,224,0.05,1.26,1...
0
hf_public_repos/pytorch-image-models
hf_public_repos/pytorch-image-models/results/benchmark-infer-amp-nchw-pt113-cu117-rtx3090.csv
model,infer_samples_per_sec,infer_step_time,infer_batch_size,infer_img_size,infer_gmacs,infer_macts,param_count
tinynet_e,49277.65,20.77,1024,106,0.03,0.69,2.04
mobilenetv3_small_050,45562.75,22.464,1024,224,0.03,0.92,1.59
lcnet_035,41026.68,24.949,1024,224,0.03,1.04,1.64
lcnet_050,37575.13,27.242,1024,224,0.05,1.26,1....
0
hf_public_repos/pytorch-image-models
hf_public_repos/pytorch-image-models/results/benchmark-infer-amp-nhwc-pt111-cu113-rtx3090.csv
model,infer_samples_per_sec,infer_step_time,infer_batch_size,infer_img_size,param_count
tinynet_e,68298.73,14.982,1024,106,2.04
mobilenetv3_small_050,48773.32,20.985,1024,224,1.59
lcnet_035,47045.94,21.755,1024,224,1.64
lcnet_050,41541.83,24.639,1024,224,1.88
mobilenetv3_small_075,37803.23,27.076,1024,224,2.04
mobilene...
0
hf_public_repos/pytorch-image-models
hf_public_repos/pytorch-image-models/results/benchmark-infer-amp-nhwc-pt112-cu113-rtx3090.csv
model,infer_samples_per_sec,infer_step_time,infer_batch_size,infer_img_size,infer_gmacs,infer_macts,param_count
tinynet_e,70939.06,14.424,1024,106,0.03,0.69,2.04
mobilenetv3_small_050,53363.87,19.179,1024,224,0.03,0.92,1.59
lcnet_035,39908.29,25.648,1024,224,0.03,1.04,1.64
mobilenetv3_small_075,38048.72,26.902,1024,224...
0
hf_public_repos/pytorch-image-models
hf_public_repos/pytorch-image-models/results/benchmark-infer-amp-nhwc-pt113-cu117-rtx3090.csv
model,infer_samples_per_sec,infer_step_time,infer_batch_size,infer_img_size,infer_gmacs,infer_macts,param_count
tinynet_e,72737.62,14.068,1024,106,0.03,0.69,2.04
mobilenetv3_small_050,54822.3,18.668,1024,224,0.03,0.92,1.59
lcnet_035,53629.35,19.084,1024,224,0.03,1.04,1.64
lcnet_050,45492.41,22.499,1024,224,0.05,1.26,1....
0
hf_public_repos/pytorch-image-models
hf_public_repos/pytorch-image-models/results/benchmark-train-amp-nchw-pt111-cu113-rtx3090.csv
model,train_samples_per_sec,train_step_time,train_batch_size,train_img_size,param_count
tinynet_e,9380.97,53.881,512,106,2.04
mobilenetv3_small_050,7276.68,69.643,512,224,1.59
tf_mobilenetv3_small_minimal_100,6334.14,80.291,512,224,2.04
mobilenetv3_small_075,5920.21,85.765,512,224,2.04
lcnet_035,5760.61,88.397,512,224,...
0
hf_public_repos/pytorch-image-models
hf_public_repos/pytorch-image-models/results/benchmark-train-amp-nchw-pt112-cu113-rtx3090.csv
model,train_samples_per_sec,train_step_time,train_batch_size,train_img_size,param_count
tinynet_e,10001.12,50.423,512,106,2.04
mobilenetv3_small_050,7406.47,68.392,512,224,1.59
tf_mobilenetv3_small_minimal_100,6438.14,78.983,512,224,2.04
mobilenetv3_small_075,6186.83,82.006,512,224,2.04
tf_mobilenetv3_small_075,5783.46...
0
hf_public_repos/pytorch-image-models
hf_public_repos/pytorch-image-models/results/benchmark-train-amp-nhwc-pt111-cu113-rtx3090.csv
model,train_samples_per_sec,train_step_time,train_batch_size,train_img_size,param_count
tinynet_e,10725.36,46.047,512,106,2.04
mobilenetv3_small_050,9864.52,50.786,512,224,1.59
lcnet_035,9593.72,52.888,512,224,1.64
lcnet_050,8283.82,61.296,512,224,1.88
tf_mobilenetv3_small_minimal_100,8178.73,62.055,512,224,2.04
tinyne...
0
hf_public_repos/pytorch-image-models
hf_public_repos/pytorch-image-models/results/benchmark-train-amp-nhwc-pt112-cu113-rtx3090.csv
model,train_samples_per_sec,train_step_time,train_batch_size,train_img_size,param_count
tinynet_e,11915.85,41.681,512,106,2.04
mobilenetv3_small_050,11290.99,44.293,512,224,1.59
lcnet_035,10015.98,50.125,512,224,1.64
lcnet_050,9286.37,54.37,512,224,1.88
tf_mobilenetv3_small_minimal_100,9042.22,55.986,512,224,2.04
mobil...
0
hf_public_repos/pytorch-image-models
hf_public_repos/pytorch-image-models/results/generate_csv_results.py
import numpy as np
import pandas as pd

results = {
    'results-imagenet.csv': [
        'results-imagenet-real.csv',
        'results-imagenetv2-matched-frequency.csv',
        'results-sketch.csv'
    ],
    'results-imagenet-a-clean.csv': [
        'results-imagenet-a.csv',
    ],
    'results-imagenet-r-clean.csv...
0
hf_public_repos/pytorch-image-models
hf_public_repos/pytorch-image-models/results/model_metadata-in1k.csv
model,pretrain
adv_inception_v3,in1k-adv
bat_resnext26ts,in1k
beit_base_patch16_224,in21k-selfsl
beit_base_patch16_384,in21k-selfsl
beit_large_patch16_224,in21k-selfsl
beit_large_patch16_384,in21k-selfsl
beit_large_patch16_512,in21k-selfsl
botnet26t_256,in1k
cait_m36_384,in1k-dist
cait_m48_448,in1k-dist
cait_s24_224,in...
0
hf_public_repos/pytorch-image-models
hf_public_repos/pytorch-image-models/results/results-imagenet-a-clean.csv
model,top1,top1_err,top5,top5_err,param_count,img_size,crop_pct,interpolation
eva02_large_patch14_448.mim_in22k_ft_in22k_in1k,98.930,1.070,99.910,0.090,305.08,448,1.000,bicubic
eva02_large_patch14_448.mim_m38m_ft_in22k_in1k,98.850,1.150,99.880,0.120,305.08,448,1.000,bicubic
eva02_large_patch14_448.mim_in22k_ft_in1k,98....
0
hf_public_repos/pytorch-image-models
hf_public_repos/pytorch-image-models/results/results-imagenet-a.csv
model,top1,top1_err,top5,top5_err,param_count,img_size,crop_pct,interpolation,top1_diff,top5_diff,rank_diff
eva02_large_patch14_448.mim_m38m_ft_in22k_in1k,88.227,11.773,97.093,2.907,305.08,448,1.000,bicubic,-10.623,-2.787,+1
eva02_large_patch14_448.mim_in22k_ft_in22k_in1k,87.920,12.080,96.920,3.080,305.08,448,1.000,bic...
0
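The `top1_diff` column in the corrupted-set CSVs appears to be the model's top-1 on the corrupted set minus its top-1 on the matching clean-subset CSV. Checking that interpretation against the eva02 m38m rows shown above (values copied from results-imagenet-a.csv and results-imagenet-a-clean.csv):

```python
# top1_diff in results-imagenet-a.csv looks like (top1 on ImageNet-A) minus
# (top1 on the clean subset); values taken from the rows shown above.
clean_top1 = 98.850  # eva02_large_patch14_448.mim_m38m_ft_in22k_in1k, clean subset
a_top1 = 88.227      # same model on ImageNet-A

print(round(a_top1 - clean_top1, 3))  # → -10.623, matching the CSV's top1_diff
```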
hf_public_repos/pytorch-image-models
hf_public_repos/pytorch-image-models/results/results-imagenet-r-clean.csv
model,top1,top1_err,top5,top5_err,param_count,img_size,crop_pct,interpolation
eva02_large_patch14_448.mim_in22k_ft_in22k_in1k,98.160,1.840,99.880,0.120,305.08,448,1.000,bicubic
eva02_large_patch14_448.mim_m38m_ft_in22k_in1k,98.030,1.970,99.890,0.110,305.08,448,1.000,bicubic
eva_giant_patch14_560.m30m_ft_in22k_in1k,98.0...
0
hf_public_repos/pytorch-image-models
hf_public_repos/pytorch-image-models/results/results-imagenet-r.csv
model,top1,top1_err,top5,top5_err,param_count,img_size,crop_pct,interpolation,top1_diff,top5_diff,rank_diff
convnext_xxlarge.clip_laion2b_soup_ft_in1k,90.623,9.377,97.913,2.087,846.47,256,1.000,bicubic,-7.127,-1.897,+18
eva_giant_patch14_336.clip_ft_in1k,90.550,9.450,97.230,2.770,"1,013.01",336,1.000,bicubic,-7.310,-2....
0
hf_public_repos/pytorch-image-models
hf_public_repos/pytorch-image-models/results/results-imagenet-real.csv
model,top1,top1_err,top5,top5_err,param_count,img_size,crop_pct,interpolation,top1_diff,top5_diff,rank_diff
eva02_large_patch14_448.mim_m38m_ft_in22k_in1k,91.129,8.871,98.713,1.287,305.08,448,1.000,bicubic,+1.077,-0.335,0
eva_giant_patch14_336.clip_ft_in1k,91.058,8.942,98.602,1.399,"1,013.01",336,1.000,bicubic,+1.592,-...
0
hf_public_repos/pytorch-image-models
hf_public_repos/pytorch-image-models/results/results-imagenet.csv
model,top1,top1_err,top5,top5_err,param_count,img_size,crop_pct,interpolation
eva02_large_patch14_448.mim_m38m_ft_in22k_in1k,90.052,9.948,99.048,0.952,305.08,448,1.000,bicubic
eva02_large_patch14_448.mim_in22k_ft_in22k_in1k,89.966,10.034,99.012,0.988,305.08,448,1.000,bicubic
eva_giant_patch14_560.m30m_ft_in22k_in1k,89....
0
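In the validation CSVs, `top1_err` looks like simply 100 minus `top1` (and likewise for top-5). A quick check parsing the first data row shown above with the stdlib `csv` module:

```python
import csv
import io

# First data row of results-imagenet.csv, as shown above.
rows = io.StringIO(
    "model,top1,top1_err,top5,top5_err,param_count,img_size,crop_pct,interpolation\n"
    "eva02_large_patch14_448.mim_m38m_ft_in22k_in1k,90.052,9.948,99.048,0.952,305.08,448,1.000,bicubic\n"
)
row = next(csv.DictReader(rows))
print(round(float(row["top1"]) + float(row["top1_err"]), 3))  # → 100.0
print(round(float(row["top5"]) + float(row["top5_err"]), 3))  # → 100.0
```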
hf_public_repos/pytorch-image-models
hf_public_repos/pytorch-image-models/results/results-imagenetv2-matched-frequency.csv
model,top1,top1_err,top5,top5_err,param_count,img_size,crop_pct,interpolation,top1_diff,top5_diff,rank_diff
eva_giant_patch14_336.clip_ft_in1k,82.200,17.800,96.290,3.710,"1,013.01",336,1.000,bicubic,-7.266,-2.536,+6
eva02_large_patch14_448.mim_in22k_ft_in1k,82.130,17.870,96.260,3.740,305.08,448,1.000,bicubic,-7.494,-2....
0
hf_public_repos/pytorch-image-models
hf_public_repos/pytorch-image-models/results/results-sketch.csv
model,top1,top1_err,top5,top5_err,param_count,img_size,crop_pct,interpolation,top1_diff,top5_diff,rank_diff
eva_giant_patch14_336.clip_ft_in1k,71.177,28.823,90.299,9.701,"1,013.01",336,1.000,bicubic,-18.289,-8.527,+6
eva02_large_patch14_448.mim_m38m_ft_in22k_in1k,70.662,29.338,89.854,10.146,305.08,448,1.000,bicubic,-19...
0
hf_public_repos/pytorch-image-models
hf_public_repos/pytorch-image-models/tests/test_layers.py
import torch
import torch.nn as nn

from timm.layers import create_act_layer, set_layer_config


class MLP(nn.Module):
    def __init__(self, act_layer="relu", inplace=True):
        super(MLP, self).__init__()
        self.fc1 = nn.Linear(1000, 100)
        self.act = create_act_layer(act_layer, inplace=inplace)
        ...
0
hf_public_repos/pytorch-image-models
hf_public_repos/pytorch-image-models/tests/test_models.py
"""Run tests for all models

Tests that run on CI should have a specific marker, e.g. @pytest.mark.base. This marker is used to parallelize the CI runs, with one runner for each marker.

If new tests are added, ensure that they use one of the existing markers (documented in pyproject.toml > pytest > markers) or that a ...
0
hf_public_repos/pytorch-image-models
hf_public_repos/pytorch-image-models/tests/test_optim.py
""" Optimizer Tests

These tests were adapted from PyTorch's optimizer tests.
"""
import math
import pytest
import functools
from copy import deepcopy

import torch
from torch.testing._internal.common_utils import TestCase
from torch.autograd import Variable

from timm.scheduler import PlateauLRScheduler
from timm.opti...
0
hf_public_repos/pytorch-image-models
hf_public_repos/pytorch-image-models/tests/test_utils.py
from torch.nn.modules.batchnorm import BatchNorm2d
from torchvision.ops.misc import FrozenBatchNorm2d

import timm
from timm.utils.model import freeze, unfreeze


def test_freeze_unfreeze():
    model = timm.create_model('resnet18')

    # Freeze all
    freeze(model)

    # Check top level module
    assert model.fc.we...
0
hf_public_repos/pytorch-image-models
hf_public_repos/pytorch-image-models/timm/__init__.py
from .version import __version__
from .layers import is_scriptable, is_exportable, set_scriptable, set_exportable
from .models import create_model, list_models, list_pretrained, is_model, list_modules, model_entrypoint, \
    is_model_pretrained, get_pretrained_cfg, get_pretrained_cfg_value
0
hf_public_repos/pytorch-image-models
hf_public_repos/pytorch-image-models/timm/version.py
__version__ = '0.9.5'
0
hf_public_repos/pytorch-image-models/timm
hf_public_repos/pytorch-image-models/timm/data/__init__.py
from .auto_augment import RandAugment, AutoAugment, rand_augment_ops, auto_augment_policy,\
    rand_augment_transform, auto_augment_transform
from .config import resolve_data_config, resolve_model_data_config
from .constants import *
from .dataset import ImageDataset, IterableImageDataset, AugMixDataset
from .dataset_...
0
hf_public_repos/pytorch-image-models/timm
hf_public_repos/pytorch-image-models/timm/data/auto_augment.py
""" AutoAugment, RandAugment, AugMix, and 3-Augment for PyTorch

This code implements the searched ImageNet policies with various tweaks and improvements and does not include any of the search code.

AA and RA Implementation adapted from:
    https://github.com/tensorflow/tpu/blob/master/models/official/efficientnet/au...
0
hf_public_repos/pytorch-image-models/timm
hf_public_repos/pytorch-image-models/timm/data/config.py
import logging

from .constants import *

_logger = logging.getLogger(__name__)


def resolve_data_config(
        args=None,
        pretrained_cfg=None,
        model=None,
        use_test_size=False,
        verbose=False
):
    assert model or args or pretrained_cfg, "At least one of model, args, or pretrained_cfg...
0
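`resolve_data_config` merges settings with a clear precedence: library defaults, then the model's `pretrained_cfg`, then explicitly passed args. A minimal sketch of that precedence (key names illustrative, not the full timm key set):

```python
def resolve_data_config(args=None, pretrained_cfg=None):
    # Lowest precedence: library defaults.
    defaults = {'input_size': (3, 224, 224), 'crop_pct': 0.875,
                'interpolation': 'bicubic'}
    config = dict(defaults)
    # Overlay pretrained_cfg, then args; only non-None values override.
    for overlay in (pretrained_cfg or {}), (args or {}):
        config.update({k: v for k, v in overlay.items() if v is not None})
    return config

cfg = resolve_data_config(
    args={'crop_pct': 1.0, 'interpolation': None},
    pretrained_cfg={'input_size': (3, 384, 384), 'interpolation': 'bilinear'},
)
```

Here the pretrained config supplies the input size and interpolation, while the explicit `crop_pct` wins over both the default and the pretrained value; a `None` arg falls through rather than clobbering.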
hf_public_repos/pytorch-image-models/timm
hf_public_repos/pytorch-image-models/timm/data/constants.py
DEFAULT_CROP_PCT = 0.875 DEFAULT_CROP_MODE = 'center' IMAGENET_DEFAULT_MEAN = (0.485, 0.456, 0.406) IMAGENET_DEFAULT_STD = (0.229, 0.224, 0.225) IMAGENET_INCEPTION_MEAN = (0.5, 0.5, 0.5) IMAGENET_INCEPTION_STD = (0.5, 0.5, 0.5) IMAGENET_DPN_MEAN = (124 / 255, 117 / 255, 104 / 255) IMAGENET_DPN_STD = tuple([1 / (.0167 *...
0
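These constants are applied as `(pixel - mean) / std` after scaling pixel values to `[0, 1]`. A quick check of the arithmetic, including the expanded DPN std:

```python
IMAGENET_DEFAULT_MEAN = (0.485, 0.456, 0.406)
IMAGENET_DEFAULT_STD = (0.229, 0.224, 0.225)

def normalize(rgb_255):
    # Scale 8-bit channels to [0, 1], then standardize per channel.
    scaled = [c / 255 for c in rgb_255]
    return [(c - m) / s for c, m, s in
            zip(scaled, IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD)]

# A mid-gray pixel lands near zero in every channel.
out = normalize((128, 128, 128))

# The DPN std expression expands to roughly 0.2348 per channel.
dpn_std = 1 / (.0167 * 255)
```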
hf_public_repos/pytorch-image-models/timm
hf_public_repos/pytorch-image-models/timm/data/dataset.py
""" Quick n Simple Image Folder, Tarfile based DataSet Hacked together by / Copyright 2019, Ross Wightman """ import io import logging from typing import Optional import torch import torch.utils.data as data from PIL import Image from .readers import create_reader _logger = logging.getLogger(__name__) _ERROR_RETR...
0
hf_public_repos/pytorch-image-models/timm
hf_public_repos/pytorch-image-models/timm/data/dataset_factory.py
""" Dataset Factory Hacked together by / Copyright 2021, Ross Wightman """ import os from torchvision.datasets import CIFAR100, CIFAR10, MNIST, KMNIST, FashionMNIST, ImageFolder try: from torchvision.datasets import Places365 has_places365 = True except ImportError: has_places365 = False try: from tor...
0
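The factory guards optional torchvision datasets with the try/except-import pattern (`has_places365`, etc.), feature-flagging them at import time instead of failing the whole module. The same pattern, demonstrated with a module name chosen to be absent:

```python
# Optional dependency: absent -> flag False, module still imports.
try:
    import _definitely_not_a_module  # hypothetical missing dependency
    has_optional = True
except ImportError:
    has_optional = False

# Present dependency: flag True.
try:
    import json  # stands in for an optional dep that is installed
    has_json = True
except ImportError:
    has_json = False
```

Downstream code then branches on the flag (or raises a helpful error) rather than crashing at import.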
hf_public_repos/pytorch-image-models/timm
hf_public_repos/pytorch-image-models/timm/data/dataset_info.py
from abc import ABC, abstractmethod from typing import Dict, List, Optional, Union class DatasetInfo(ABC): def __init__(self): pass @abstractmethod def num_classes(self): pass @abstractmethod def label_names(self): pass @abstractmethod def label_descriptions(sel...
0
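`DatasetInfo` is an abstract base class: the `@abstractmethod` accessors form a contract that concrete subclasses must fill in before they can be instantiated. A minimal sketch of that contract with illustrative data:

```python
from abc import ABC, abstractmethod

class DatasetInfo(ABC):
    @abstractmethod
    def num_classes(self):
        ...

    @abstractmethod
    def label_names(self):
        ...

class ToyInfo(DatasetInfo):
    # Concrete subclass: both abstract methods implemented.
    def num_classes(self):
        return 2

    def label_names(self):
        return ['cat', 'dog']

info = ToyInfo()

# Instantiating the ABC itself raises TypeError.
try:
    DatasetInfo()
    abstract_ok = False
except TypeError:
    abstract_ok = True
```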
hf_public_repos/pytorch-image-models/timm
hf_public_repos/pytorch-image-models/timm/data/distributed_sampler.py
import math import torch from torch.utils.data import Sampler import torch.distributed as dist class OrderedDistributedSampler(Sampler): """Sampler that restricts data loading to a subset of the dataset. It is especially useful in conjunction with :class:`torch.nn.parallel.DistributedDataParallel`. In suc...
0
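The core of an ordered distributed sampler is index bookkeeping: pad the index list (by wrapping) so it divides evenly across replicas, then hand each rank a strided slice, with no shuffling. A sketch of that logic under those assumptions:

```python
import math

def ordered_shard(dataset_len, num_replicas, rank):
    # Pad so every rank receives the same number of samples.
    num_samples = math.ceil(dataset_len / num_replicas)
    total_size = num_samples * num_replicas
    indices = list(range(dataset_len))
    indices += indices[: total_size - len(indices)]  # wrap-around padding
    # Each rank takes every num_replicas-th index, offset by its rank.
    return indices[rank:total_size:num_replicas]

shards = [ordered_shard(10, 3, r) for r in range(3)]
```

With 10 samples over 3 replicas, each rank gets 4 indices, every original index appears in some shard, and the first two indices are duplicated as padding; the duplicates are why this sampler is meant for validation, not training.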
hf_public_repos/pytorch-image-models/timm
hf_public_repos/pytorch-image-models/timm/data/imagenet_info.py
import csv import os import pkgutil import re from typing import Dict, List, Optional, Union from .dataset_info import DatasetInfo # NOTE no ambiguity wrt mapping from # classes to ImageNet subset so far, but likely to change _NUM_CLASSES_TO_SUBSET = { 1000: 'imagenet-1k', 11221: 'imagenet-21k-miil', # m...
0
hf_public_repos/pytorch-image-models/timm
hf_public_repos/pytorch-image-models/timm/data/loader.py
""" Loader Factory, Fast Collate, CUDA Prefetcher Prefetcher and Fast Collate inspired by NVIDIA APEX example at https://github.com/NVIDIA/apex/commit/d5e2bb4bdeedd27b1dfaf5bb2b24d6c000dee9be#diff-cf86c282ff7fba81fad27a559379d5bf Hacked together by / Copyright 2019, Ross Wightman """ import logging import random from...
0
hf_public_repos/pytorch-image-models/timm
hf_public_repos/pytorch-image-models/timm/data/mixup.py
""" Mixup and Cutmix Papers: mixup: Beyond Empirical Risk Minimization (https://arxiv.org/abs/1710.09412) CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features (https://arxiv.org/abs/1905.04899) Code Reference: CutMix: https://github.com/clovaai/CutMix-PyTorch Hacked together by / Co...
0
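The core Mixup arithmetic: draw a blending coefficient `lam` from a `Beta(alpha, alpha)` distribution, then interpolate both the inputs and the (one-hot) targets with the same coefficient. Shown on plain lists; the real implementation operates on tensors and whole batches:

```python
import random

def mixup(x1, y1, x2, y2, lam):
    # Same lam blends both the inputs and the targets.
    mixed_x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    mixed_y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return mixed_x, mixed_y

# How lam is typically drawn (stdlib Beta sampler).
alpha = 0.2
lam = random.betavariate(alpha, alpha)

# Deterministic demonstration with a fixed lam.
mx, my = mixup([0.0, 1.0], [1.0, 0.0], [1.0, 0.0], [0.0, 1.0], lam=0.7)
```

With `alpha` well below 1 the Beta distribution is U-shaped, so most drawn batches stay close to one of the two originals; CutMix replaces the elementwise blend with a pasted rectangle but keeps the same target interpolation.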
hf_public_repos/pytorch-image-models/timm
hf_public_repos/pytorch-image-models/timm/data/random_erasing.py
""" Random Erasing (Cutout) Originally inspired by impl at https://github.com/zhunzhong07/Random-Erasing, Apache 2.0 Copyright Zhun Zhong & Liang Zheng Hacked together by / Copyright 2019, Ross Wightman """ import random import math import torch def _get_pixels(per_pixel, rand_color, patch_size, dtype=torch.float3...
0
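The Random Erasing selection loop samples a target area fraction and a log-uniform aspect ratio, retries until the resulting box fits, and fills it with random values. A sketch on a 2D list standing in for one image channel (parameter defaults follow the commonly cited paper values; this is illustrative, not timm's exact implementation):

```python
import math
import random

def random_erase(img, scale=(0.02, 0.33), ratio=(0.3, 3.3), rng=None):
    rng = rng or random.Random()
    h, w = len(img), len(img[0])
    for _ in range(10):  # retry until a sampled box fits
        area = h * w * rng.uniform(*scale)
        aspect = math.exp(rng.uniform(math.log(ratio[0]), math.log(ratio[1])))
        eh = int(round(math.sqrt(area * aspect)))
        ew = int(round(math.sqrt(area / aspect)))
        if eh < 1 or ew < 1 or eh >= h or ew >= w:
            continue
        top = rng.randint(0, h - eh)
        left = rng.randint(0, w - ew)
        for i in range(top, top + eh):
            for j in range(left, left + ew):
                img[i][j] = rng.random()  # per-pixel random fill
        return (top, left, eh, ew)
    return None  # all attempts failed; image left untouched

img = [[0.0] * 32 for _ in range(32)]
box = random_erase(img, rng=random.Random(42))
erased = sum(1 for row in img for v in row if v != 0.0)
```

timm additionally supports constant-color and per-channel fill modes and count/probability options on top of this core loop.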
hf_public_repos/pytorch-image-models/timm
hf_public_repos/pytorch-image-models/timm/data/real_labels.py
""" Real labels evaluator for ImageNet Paper: `Are we done with ImageNet?` - https://arxiv.org/abs/2006.07159 Based on Numpy example at https://github.com/google-research/reassessed-imagenet Hacked together by / Copyright 2020 Ross Wightman """ import os import json import numpy as np import pkgutil class RealLabels...
0
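The "real labels" evaluation in a nutshell: each image may have several acceptable labels (or none, in which case it is excluded), and a top-k prediction counts as correct if it hits any of them. A minimal sketch with made-up data:

```python
def topk_real_accuracy(predictions, real_labels, k=1):
    correct = 0
    judged = 0
    for topk, valid in zip(predictions, real_labels):
        if not valid:  # images with no confident real label are skipped
            continue
        judged += 1
        # Correct if any of the top-k predictions is an accepted label.
        if any(p in valid for p in topk[:k]):
            correct += 1
    return correct / judged

preds = [[3, 7, 1], [2, 0, 9], [5, 5, 5]]
reals = [[7], [4], []]  # third image has no real label -> excluded
acc1 = topk_real_accuracy(preds, reals, k=1)
acc3 = topk_real_accuracy(preds, reals, k=3)
```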
hf_public_repos/pytorch-image-models/timm
hf_public_repos/pytorch-image-models/timm/data/tf_preprocessing.py
""" Tensorflow Preprocessing Adapter Allows use of Tensorflow preprocessing pipeline in PyTorch Transform Copyright of original Tensorflow code below. Hacked together by / Copyright 2020 Ross Wightman """ # Copyright 2018 The TensorFlow Authors. All Rights Reserved. # # Licensed under the Apache License, Version 2....
0
hf_public_repos/pytorch-image-models/timm
hf_public_repos/pytorch-image-models/timm/data/transforms.py
import math import numbers import random import warnings from typing import List, Sequence import torch import torchvision.transforms.functional as F try: from torchvision.transforms.functional import InterpolationMode has_interpolation_mode = True except ImportError: has_interpolation_mode = False from PI...
0
hf_public_repos/pytorch-image-models/timm
hf_public_repos/pytorch-image-models/timm/data/transforms_factory.py
""" Transforms Factory Factory methods for building image transforms for use with TIMM (PyTorch Image Models) Hacked together by / Copyright 2019, Ross Wightman """ import math import torch from torchvision import transforms from timm.data.constants import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD, DEFAULT_CROP_PC...
0
hf_public_repos/pytorch-image-models/timm/data
hf_public_repos/pytorch-image-models/timm/data/_info/imagenet12k_synsets.txt
n00005787 n00006484 n00007846 n00015388 n00017222 n00021265 n00021939 n00120010 n00141669 n00288000 n00288384 n00324978 n00326094 n00433458 n00433661 n00433802 n00434075 n00439826 n00440039 n00440382 n00440509 n00440747 n00440941 n00441073 n00441824 n00442115 n00442437 n00442847 n00442981 n00443231 n00443692 n00443803 ...
0
hf_public_repos/pytorch-image-models/timm/data
hf_public_repos/pytorch-image-models/timm/data/_info/imagenet21k_goog_synsets.txt
n00004475 n00005787 n00006024 n00006484 n00007846 n00015388 n00017222 n00021265 n00021939 n00120010 n00141669 n00288000 n00288190 n00288384 n00324978 n00326094 n00433458 n00433661 n00433802 n00434075 n00439826 n00440039 n00440218 n00440382 n00440509 n00440643 n00440747 n00440941 n00441073 n00441824 n00442115 n00442437 ...
0
hf_public_repos/pytorch-image-models/timm/data
hf_public_repos/pytorch-image-models/timm/data/_info/imagenet21k_goog_to_12k_indices.txt
1 3 4 5 6 7 8 9 10 11 13 14 15 16 17 18 19 20 21 23 24 26 27 28 29 30 31 32 33 34 37 38 41 43 44 45 46 47 48 49 50 51 53 55 56 57 58 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 89 90 91 93 94 95 96 97 99 100 101 102 103 105 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121...
0
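These `*_to_12k_indices` files are plain position lists: to project a larger classifier head onto the 12k subset, gather the outputs at those positions. A sketch with a tiny hypothetical index list in place of the real file contents:

```python
def remap_logits(logits, subset_indices):
    # Gather the entries of `logits` named by the index list.
    return [logits[i] for i in subset_indices]

logits_21k = [0.1, 0.9, 0.2, 0.7, 0.4]  # pretend 5-class "21k" head
subset = [1, 3, 4]                       # pretend contents of an indices file
logits_12k = remap_logits(logits_21k, subset)
```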
hf_public_repos/pytorch-image-models/timm/data
hf_public_repos/pytorch-image-models/timm/data/_info/imagenet21k_goog_to_22k_indices.txt
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 10...
0
hf_public_repos/pytorch-image-models/timm/data
hf_public_repos/pytorch-image-models/timm/data/_info/imagenet21k_miil_synsets.txt
n00005787 n00006484 n00007846 n00015388 n00017222 n00021265 n00021939 n00120010 n00141669 n00288000 n00288384 n00324978 n00326094 n00433458 n00433661 n00433802 n00434075 n00439826 n00440039 n00440382 n00440509 n00440747 n00440941 n00441073 n00441824 n00442115 n00442437 n00442847 n00442981 n00443231 n00443692 n00443803 ...
0
hf_public_repos/pytorch-image-models/timm/data
hf_public_repos/pytorch-image-models/timm/data/_info/imagenet21k_miil_w21_synsets.txt
n00005787 n00006484 n00007846 n00015388 n00017222 n00021265 n00021939 n00120010 n00141669 n00288000 n00288384 n00324978 n00326094 n00433458 n00433661 n00433802 n00434075 n00439826 n00440039 n00440382 n00440509 n00440747 n00440941 n00441073 n00441824 n00442115 n00442437 n00442847 n00442981 n00443231 n00443692 n00443803 ...
0
hf_public_repos/pytorch-image-models/timm/data
hf_public_repos/pytorch-image-models/timm/data/_info/imagenet22k_ms_synsets.txt
n01440764 n01443537 n01484850 n01491361 n01494475 n01496331 n01498041 n01514668 n01514859 n01518878 n01530575 n01531178 n01532829 n01534433 n01537544 n01558993 n01560419 n01580077 n01582220 n01592084 n01601694 n01608432 n01614925 n01616318 n01622779 n01629819 n01630670 n01631663 n01632458 n01632777 n01641577 n01644373 ...
0
hf_public_repos/pytorch-image-models/timm/data
hf_public_repos/pytorch-image-models/timm/data/_info/imagenet22k_ms_to_12k_indices.txt
1001 1003 1004 1005 1006 1007 1008 1009 1010 1011 1013 1014 1015 1016 1017 1018 1019 1020 1021 1023 1024 1026 1027 1028 1029 1030 1031 1032 1033 1034 1037 1038 1041 1043 1044 1045 1046 1047 1048 1049 1050 1051 1053 1055 1056 1057 1058 1060 1061 1062 1063 1064 1065 1066 1067 1068 1069 1070 1071 1072 1073 1074 1075 1076 ...
0
hf_public_repos/pytorch-image-models/timm/data
hf_public_repos/pytorch-image-models/timm/data/_info/imagenet22k_ms_to_22k_indices.txt
1000 1001 1002 1003 1004 1005 1006 1007 1008 1009 1010 1011 1012 1013 1014 1015 1016 1017 1018 1019 1020 1021 1022 1023 1024 1025 1026 1027 1028 1029 1030 1031 1032 1033 1034 1035 1036 1037 1038 1039 1040 1041 1042 1043 1044 1045 1046 1047 1048 1049 1050 1051 1052 1053 1054 1055 1056 1057 1058 1059 1060 1061 1062 1063 ...
0
hf_public_repos/pytorch-image-models/timm/data
hf_public_repos/pytorch-image-models/timm/data/_info/imagenet22k_synsets.txt
n00004475 n00005787 n00006024 n00006484 n00007846 n00015388 n00017222 n00021265 n00021939 n00120010 n00141669 n00288000 n00288190 n00288384 n00324978 n00326094 n00433458 n00433661 n00433802 n00434075 n00439826 n00440039 n00440218 n00440382 n00440509 n00440643 n00440747 n00440941 n00441073 n00441824 n00442115 n00442437 ...
0
hf_public_repos/pytorch-image-models/timm/data
hf_public_repos/pytorch-image-models/timm/data/_info/imagenet22k_to_12k_indices.txt
1 3 4 5 6 7 8 9 10 11 13 14 15 16 17 18 19 20 21 23 24 26 27 28 29 30 31 32 33 34 37 38 41 43 44 45 46 47 48 49 50 51 53 55 56 57 58 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 89 90 91 93 94 95 96 97 99 100 101 102 103 105 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121...
0
hf_public_repos/pytorch-image-models/timm/data
hf_public_repos/pytorch-image-models/timm/data/_info/imagenet_a_indices.txt
6 11 13 15 17 22 23 27 30 37 39 42 47 50 57 70 71 76 79 89 90 94 96 97 99 105 107 108 110 113 124 125 130 132 143 144 150 151 207 234 235 254 277 283 287 291 295 298 301 306 307 308 309 310 311 313 314 315 317 319 323 324 326 327 330 334 335 336 347 361 363 372 378 386 397 400 401 402 404 407 411 416 417 420 425 428 43...
0
hf_public_repos/pytorch-image-models/timm/data
hf_public_repos/pytorch-image-models/timm/data/_info/imagenet_a_synsets.txt
n01498041 n01531178 n01534433 n01558993 n01580077 n01614925 n01616318 n01631663 n01641577 n01669191 n01677366 n01687978 n01694178 n01698640 n01735189 n01770081 n01770393 n01774750 n01784675 n01819313 n01820546 n01833805 n01843383 n01847000 n01855672 n01882714 n01910747 n01914609 n01924916 n01944390 n01985128 n01986214 ...
0