# Backbone

A backbone is a model used for feature extraction for higher-level computer vision tasks such as object detection and image classification. Transformers provides an [AutoBackbone](/docs/transformers/pr_33892/en/main_classes/backbones#transformers.AutoBackbone) class for initializing a Transformers backbone from pretrained model weights, and two utility classes:

* [BackboneMixin](/docs/transformers/pr_33892/en/main_classes/backbones#transformers.utils.BackboneMixin) enables initializing a backbone from Transformers or [timm](https://hf.co/docs/timm/index) and includes functions for returning the output features and indices.
* [BackboneConfigMixin](/docs/transformers/pr_33892/en/main_classes/backbones#transformers.utils.BackboneConfigMixin) sets the output features and indices of the backbone configuration.

[timm](https://hf.co/docs/timm/index) models are loaded with the [TimmBackbone](/docs/transformers/pr_33892/en/main_classes/backbones#transformers.TimmBackbone) and [TimmBackboneConfig](/docs/transformers/pr_33892/en/main_classes/backbones#transformers.TimmBackboneConfig) classes.
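A typical workflow loads a checkpoint through `AutoBackbone` and requests intermediate feature maps by stage index. The sketch below assumes the `microsoft/resnet-50` checkpoint purely for illustration; any backbone-capable checkpoint works the same way:

```python
import torch
from transformers import AutoBackbone

# Load a checkpoint as a backbone. out_indices selects which stages are
# returned as feature maps (here the last two stages of the ResNet).
model = AutoBackbone.from_pretrained("microsoft/resnet-50", out_indices=(3, 4))

# A dummy batch with one 224x224 RGB image.
pixel_values = torch.rand(1, 3, 224, 224)

with torch.no_grad():
    outputs = model(pixel_values)

# One feature map per requested stage, ordered by stage index.
for feature_map in outputs.feature_maps:
    print(feature_map.shape)
```

The same `out_indices` (or the equivalent `out_features` stage names) can instead be set on the model's configuration before loading.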
Backbones are supported for the following models:

* [BEiT](../model_doc/beit)
* [BiT](../model_doc/bit)
* [ConvNext](../model_doc/convnext)
* [ConvNextV2](../model_doc/convnextv2)
* [DiNAT](../model_doc/dinat)
* [DINOV2](../model_doc/dinov2)
* [FocalNet](../model_doc/focalnet)
* [MaskFormer](../model_doc/maskformer)
* [NAT](../model_doc/nat)
* [ResNet](../model_doc/resnet)
* [Swin Transformer](../model_doc/swin)
* [Swin Transformer v2](../model_doc/swinv2)
* [ViTDet](../model_doc/vitdet)
## AutoBackbone[[transformers.AutoBackbone]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>class transformers.AutoBackbone</name><anchor>transformers.AutoBackbone</anchor><source>https://github.com/huggingface/transformers/blob/vr_33892/src/transformers/models/auto/modeling_auto.py#L2234</source><parameters>[{"name": "*args", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>
</div>

## BackboneMixin[[transformers.utils.BackboneMixin]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>class transformers.utils.BackboneMixin</name><anchor>transformers.utils.BackboneMixin</anchor><source>https://github.com/huggingface/transformers/blob/vr_33892/src/transformers/utils/backbone_utils.py#L140</source><parameters>[]</parameters></docstring>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>to_dict</name><anchor>transformers.utils.BackboneMixin.to_dict</anchor><source>https://github.com/huggingface/transformers/blob/vr_33892/src/transformers/utils/backbone_utils.py#L253</source><parameters>[]</parameters></docstring>
Serializes this instance to a Python dictionary. Overrides the default `to_dict()` from `PreTrainedConfig` to include the `out_features` and `out_indices` attributes.
</div></div>
## BackboneConfigMixin[[transformers.utils.BackboneConfigMixin]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>class transformers.utils.BackboneConfigMixin</name><anchor>transformers.utils.BackboneConfigMixin</anchor><source>https://github.com/huggingface/transformers/blob/vr_33892/src/transformers/utils/backbone_utils.py#L264</source><parameters>[]</parameters></docstring>
A mixin that handles the `out_features` and `out_indices` attributes for backbone configurations.
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>to_dict</name><anchor>transformers.utils.BackboneConfigMixin.to_dict</anchor><source>https://github.com/huggingface/transformers/blob/vr_33892/src/transformers/utils/backbone_utils.py#L295</source><parameters>[]</parameters></docstring>
Serializes this instance to a Python dictionary. Overrides the default `to_dict()` from `PreTrainedConfig` to include the `out_features` and `out_indices` attributes.
</div></div>
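To illustrate what the overridden `to_dict()` adds, a config that uses this mixin, such as `ResNetConfig`, round-trips its feature-selection attributes through the serialized dictionary. A minimal sketch, assuming the default ResNet stage names:

```python
from transformers import ResNetConfig

# ResNetConfig uses BackboneConfigMixin, so out_features is validated
# against the stage names and aligned with out_indices.
config = ResNetConfig(out_features=["stage2", "stage4"])

d = config.to_dict()
# Both attributes are included in the serialized dictionary.
print(d["out_features"])
print(d["out_indices"])
```

Setting either `out_features` (stage names) or `out_indices` (stage positions) keeps the two in sync; specifying both requires them to agree.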
## TimmBackbone[[transformers.TimmBackbone]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>class transformers.TimmBackbone</name><anchor>transformers.TimmBackbone</anchor><source>https://github.com/huggingface/transformers/blob/vr_33892/src/transformers/models/timm_backbone/modeling_timm_backbone.py#L35</source><parameters>[{"name": "config", "val": ""}, {"name": "**kwargs", "val": ""}]</parameters></docstring>
Wrapper class for timm models to be used as backbones. This enables using timm models interchangeably with the other models in the library while keeping the same API.
</div>
## TimmBackboneConfig[[transformers.TimmBackboneConfig]]

<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>class transformers.TimmBackboneConfig</name><anchor>transformers.TimmBackboneConfig</anchor><source>https://github.com/huggingface/transformers/blob/vr_33892/src/transformers/models/timm_backbone/configuration_timm_backbone.py#L25</source><parameters>[{"name": "backbone", "val": " = None"}, {"name": "num_channels", "val": " = 3"}, {"name": "features_only", "val": " = True"}, {"name": "use_pretrained_backbone", "val": " = True"}, {"name": "out_indices", "val": " = None"}, {"name": "freeze_batch_norm_2d", "val": " = False"}, {"name": "**kwargs", "val": ""}]</parameters><paramsdesc>- **backbone** (`str`, *optional*) --
The timm checkpoint to load.
- **num_channels** (`int`, *optional*, defaults to 3) --
The number of input channels.
- **features_only** (`bool`, *optional*, defaults to `True`) --
Whether to output only the features or also the logits.
- **use_pretrained_backbone** (`bool`, *optional*, defaults to `True`) --
Whether to use a pretrained backbone.
- **out_indices** (`list[int]`, *optional*) --
If used as backbone, list of indices of features to output. Can be any of 0, 1, 2, etc. (depending on how
many stages the model has). Will default to the last stage if unset.
- **freeze_batch_norm_2d** (`bool`, *optional*, defaults to `False`) --
Converts all `BatchNorm2d` and `SyncBatchNorm` layers of the provided module into `FrozenBatchNorm2d`.</paramsdesc><paramgroups>0</paramgroups></docstring>
This is the configuration class to store the configuration of a timm backbone [TimmBackbone](/docs/transformers/pr_33892/en/main_classes/backbones#transformers.TimmBackbone).
It is used to instantiate a timm backbone model according to the specified arguments, defining the model.
Configuration objects inherit from [PreTrainedConfig](/docs/transformers/pr_33892/en/main_classes/configuration#transformers.PreTrainedConfig) and can be used to control the model outputs. Read the
documentation from [PreTrainedConfig](/docs/transformers/pr_33892/en/main_classes/configuration#transformers.PreTrainedConfig) for more information.
<ExampleCodeBlock anchor="transformers.TimmBackboneConfig.example">
Example:
```python
>>> from transformers import TimmBackboneConfig, TimmBackbone

>>> # Initializing a timm backbone
>>> configuration = TimmBackboneConfig("resnet50")

>>> # Initializing a model from the configuration
>>> model = TimmBackbone(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
</ExampleCodeBlock>
</div>