text: string (length 7 to 328k)
id: string (length 14 to 166)
metadata: dict
__index_level_0__: int64 (0 to 459)
<jupyter_start><jupyter_text>Pre-Training a 🤗 Transformers model on TPU with **Flax/JAX** In this notebook, we will see how to pretrain one of the [🤗 Transformers](https://github.com/huggingface/transformers) models on TPU using [**Flax**](https://flax.readthedocs.io/en/latest/index.html). GPT2's causal language model...
notebooks/examples/causal_language_modeling_flax.ipynb/0
{ "file_path": "notebooks/examples/causal_language_modeling_flax.ipynb", "repo_id": "notebooks", "token_count": 8784 }
158
<jupyter_start><jupyter_text>Multivariate Probabilistic Time Series Forecasting with Informer IntroductionA few months ago we introduced the [Time Series Transformer](https://huggingface.co/blog/time-series-transformers), which is the vanilla Transformer ([Vaswani et al., 2017](https://arxiv.org/abs/1706.03762)) applie...
notebooks/examples/multivariate_informer.ipynb/0
{ "file_path": "notebooks/examples/multivariate_informer.ipynb", "repo_id": "notebooks", "token_count": 15125 }
159
<jupyter_start><jupyter_text>If you're opening this Notebook on colab, you will probably need to install 🤗 Transformers and 🤗 Datasets as well as other dependencies. Uncomment the following cell and run it. Note the `rouge-score` and `nltk` dependencies - even if you've used 🤗 Transformers before, you may not have t...
notebooks/examples/summarization-tf.ipynb/0
{ "file_path": "notebooks/examples/summarization-tf.ipynb", "repo_id": "notebooks", "token_count": 8798 }
160
<jupyter_start><jupyter_text>If you're opening this Notebook on colab, you will probably need to install 🤗 Transformers and 🤗 Datasets. Uncomment the following cell and run it.<jupyter_code>#! pip install datasets transformers[sentencepiece] sacrebleu<jupyter_output><empty_output><jupyter_text>If you're opening this ...
notebooks/examples/translation.ipynb/0
{ "file_path": "notebooks/examples/translation.ipynb", "repo_id": "notebooks", "token_count": 5285 }
161
<jupyter_start><jupyter_text>Sentence Embeddings with Hugging Face Transformers, Sentence Transformers and Amazon SageMaker - Custom Inference for creating document embeddings with Hugging Face's Transformers Welcome to this getting started guide. We will use the Hugging Face Inference DLCs and Amazon SageMaker Python ...
notebooks/sagemaker/17_custom_inference_script/sagemaker-notebook.ipynb/0
{ "file_path": "notebooks/sagemaker/17_custom_inference_script/sagemaker-notebook.ipynb", "repo_id": "notebooks", "token_count": 3804 }
162
accelerate launch --config_file accelerate_config.yaml train_using_s3_data.py \ --mixed_precision "fp16"
notebooks/sagemaker/22_accelerate_sagemaker_examples/src/text-classification/launch.sh/0
{ "file_path": "notebooks/sagemaker/22_accelerate_sagemaker_examples/src/text-classification/launch.sh", "repo_id": "notebooks", "token_count": 40 }
163
# Builds GPU docker image of PyTorch # Uses multi-staged approach to reduce size # Stage 1 # Use base conda image to reduce time FROM continuumio/miniconda3:latest AS compile-image # Specify py version ENV PYTHON_VERSION=3.8 # Install apt libs - copied from https://github.com/huggingface/accelerate/blob/main/docker/acc...
peft/docker/peft-gpu-bnb-latest/Dockerfile/0
{ "file_path": "peft/docker/peft-gpu-bnb-latest/Dockerfile", "repo_id": "peft", "token_count": 816 }
164
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
peft/docs/source/developer_guides/mixed_models.md/0
{ "file_path": "peft/docs/source/developer_guides/mixed_models.md", "repo_id": "peft", "token_count": 770 }
165
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
peft/docs/source/package_reference/multitask_prompt_tuning.md/0
{ "file_path": "peft/docs/source/package_reference/multitask_prompt_tuning.md", "repo_id": "peft", "token_count": 533 }
166
<jupyter_start><jupyter_code>from transformers import AutoModelForCausalLM from peft import PeftModel, PeftConfig import torch from datasets import load_dataset import os from transformers import AutoTokenizer from torch.utils.data import DataLoader from transformers import default_data_collator, get_linear_schedule_wi...
peft/examples/causal_language_modeling/peft_lora_clm_accelerate_big_model_inference.ipynb/0
{ "file_path": "peft/examples/causal_language_modeling/peft_lora_clm_accelerate_big_model_inference.ipynb", "repo_id": "peft", "token_count": 2945 }
167
<jupyter_start><jupyter_code>import os import torch from transformers import ( AutoTokenizer, default_data_collator, AutoModelForSeq2SeqLM, Seq2SeqTrainingArguments, Seq2SeqTrainer, GenerationConfig, ) from peft import get_peft_model, PromptTuningInit, PromptTuningConfig, TaskType from datasets...
peft/examples/conditional_generation/peft_prompt_tuning_seq2seq_with_generate.ipynb/0
{ "file_path": "peft/examples/conditional_generation/peft_prompt_tuning_seq2seq_with_generate.ipynb", "repo_id": "peft", "token_count": 2021 }
168
# LoftQ: LoRA-fine-tuning-aware Quantization ## Introduction LoftQ finds quantized LoRA initialization: quantized backbone Q and LoRA adapters A and B, given a pre-trained weight W. ## Quick Start Steps: 1. Apply LoftQ to a full-precision pre-trained weight and save. 2. Load LoftQ initialization and train. For ste...
peft/examples/loftq_finetuning/README.md/0
{ "file_path": "peft/examples/loftq_finetuning/README.md", "repo_id": "peft", "token_count": 1978 }
169
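The LoftQ record above describes finding a quantized backbone Q plus LoRA adapters A and B for a pretrained weight W. As a toy illustration only (real LoftQ alternates quantization with an SVD-based low-rank fit over matrices), the sketch below uses a flat list of weights, uniform round-to-step quantization, and an "adapter" that stores the full residual, just to show the decomposition W ≈ Q + correction:

```python
# Toy illustration of the LoftQ idea: split a pretrained weight W into a
# quantized backbone Q plus a residual that the adapters must absorb.
# Uniform quantization and a full-residual "adapter" are simplifications;
# the actual method fits a low-rank A @ B to the residual via SVD.

def quantize(w, step=0.25):
    """Round each weight to the nearest multiple of `step`."""
    return [round(x / step) * step for x in w]

W = [0.37, -0.91, 0.12, 0.55]             # pretrained weights
Q = quantize(W)                            # quantized backbone
residual = [w - q for w, q in zip(W, Q)]   # what the adapter should recover

# Backbone plus adapter correction reconstructs W (up to float rounding).
reconstructed = [q + r for q, r in zip(Q, residual)]
print(Q)  # [0.25, -1.0, 0.0, 0.5]
```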
<jupyter_start><jupyter_code>%env CUDA_VISIBLE_DEVICES=0 %env TOKENIZERS_PARALLELISM=false<jupyter_output>env: CUDA_VISIBLE_DEVICES=0 env: TOKENIZERS_PARALLELISM=false<jupyter_text>Initialize PolyModel<jupyter_code>import torch from transformers import ( AutoModelForSeq2SeqLM, AutoTokenizer, default_data_co...
peft/examples/poly/peft_poly_seq2seq_with_generate.ipynb/0
{ "file_path": "peft/examples/poly/peft_poly_seq2seq_with_generate.ipynb", "repo_id": "peft", "token_count": 4104 }
170
# Copyright 2023-present the HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or...
peft/scripts/launch_notebook_mp.py/0
{ "file_path": "peft/scripts/launch_notebook_mp.py", "repo_id": "peft", "token_count": 474 }
171
# Copyright 2023-present the HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or...
peft/src/peft/tuners/adalora/config.py/0
{ "file_path": "peft/src/peft/tuners/adalora/config.py", "repo_id": "peft", "token_count": 860 }
172
# Copyright 2023-present the HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or...
peft/src/peft/tuners/loha/layer.py/0
{ "file_path": "peft/src/peft/tuners/loha/layer.py", "repo_id": "peft", "token_count": 7471 }
173
# Copyright 2023-present the HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or...
peft/src/peft/tuners/poly/router.py/0
{ "file_path": "peft/src/peft/tuners/poly/router.py", "repo_id": "peft", "token_count": 1117 }
174
import os if os.environ.get("PEFT_DEBUG_WITH_TORCH_COMPILE") == "1": # This is a hack purely for debugging purposes. If the environment variable PEFT_DEBUG_WITH_TORCH_COMPILE is set to # 1, get_peft_model() will return a compiled model. This way, all unit tests that use peft.get_peft_model() will # use a ...
peft/tests/__init__.py/0
{ "file_path": "peft/tests/__init__.py", "repo_id": "peft", "token_count": 302 }
175
# Copyright 2023-present the HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or...
peft/tests/test_mixed.py/0
{ "file_path": "peft/tests/test_mixed.py", "repo_id": "peft", "token_count": 17543 }
176
#!/usr/bin/env python3 """ Checkpoint Averaging Script This script averages all model weights for checkpoints in specified path that match the specified filter wildcard. All checkpoints must be from the exact same model. For any hope of decent results, the checkpoints should be from the same or child (via resumes) tr...
pytorch-image-models/avg_checkpoints.py/0
{ "file_path": "pytorch-image-models/avg_checkpoints.py", "repo_id": "pytorch-image-models", "token_count": 2377 }
177
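The checkpoint-averaging script above computes the element-wise mean of each parameter across checkpoints of the same model. A minimal sketch of that arithmetic, assuming checkpoints are plain dicts of parameter name to list of floats (stand-ins for tensors):

```python
# Sketch of checkpoint averaging: element-wise mean of each parameter
# across all checkpoints.  Keys and values here are hypothetical; the real
# script loads torch checkpoints matching a filename wildcard.

def average_checkpoints(checkpoints):
    avg = {}
    for key in checkpoints[0]:
        columns = zip(*(ckpt[key] for ckpt in checkpoints))
        avg[key] = [sum(vals) / len(checkpoints) for vals in columns]
    return avg

ckpt_a = {"fc.weight": [1.0, 2.0], "fc.bias": [0.0]}
ckpt_b = {"fc.weight": [3.0, 4.0], "fc.bias": [1.0]}
print(average_checkpoints([ckpt_a, ckpt_b]))
# {'fc.weight': [2.0, 3.0], 'fc.bias': [0.5]}
```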
# Adversarial Inception v3 **Inception v3** is a convolutional neural network architecture from the Inception family that makes several improvements including using [Label Smoothing](https://paperswithcode.com/method/label-smoothing), Factorized 7 x 7 convolutions, and the use of an [auxiliary classifier](https://paper...
pytorch-image-models/docs/models/.templates/models/adversarial-inception-v3.md/0
{ "file_path": "pytorch-image-models/docs/models/.templates/models/adversarial-inception-v3.md", "repo_id": "pytorch-image-models", "token_count": 1432 }
178
# (Gluon) ResNet **Residual Networks**, or **ResNets**, learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. Instead of hoping each few stacked layers directly fit a desired underlying mapping, residual nets let these layers fit a residual mapping. They stack [residu...
pytorch-image-models/docs/models/.templates/models/gloun-resnet.md/0
{ "file_path": "pytorch-image-models/docs/models/.templates/models/gloun-resnet.md", "repo_id": "pytorch-image-models", "token_count": 6383 }
179
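The ResNet record describes blocks that learn a residual F(x) = H(x) - x rather than the target mapping H(x) directly, and output F(x) + x. A one-function sketch with a stand-in for the learned layers:

```python
# The residual connection in one line: the block's layers F compute a
# correction, and the skip connection adds the input back.
# `f` below is a hypothetical stand-in for the learned layers.

def residual_block(x, f):
    return f(x) + x  # skip connection

f = lambda x: 0.5 * x - 1.0   # toy "learned" residual function
print(residual_block(4.0, f))  # 0.5*4 - 1 + 4 = 5.0
```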
# MobileNet v3 **MobileNetV3** is a convolutional neural network that is designed for mobile phone CPUs. The network design includes the use of a [hard swish activation](https://paperswithcode.com/method/hard-swish) and [squeeze-and-excitation](https://paperswithcode.com/method/squeeze-and-excitation-block) modules in...
pytorch-image-models/docs/models/.templates/models/mobilenet-v3.md/0
{ "file_path": "pytorch-image-models/docs/models/.templates/models/mobilenet-v3.md", "repo_id": "pytorch-image-models", "token_count": 1755 }
180
# SK-ResNet **SK ResNet** is a variant of a [ResNet](https://www.paperswithcode.com/method/resnet) that employs a [Selective Kernel](https://paperswithcode.com/method/selective-kernel) unit. In general, all the large kernel convolutions in the original bottleneck blocks in ResNet are replaced by the proposed [SK convo...
pytorch-image-models/docs/models/.templates/models/skresnet.md/0
{ "file_path": "pytorch-image-models/docs/models/.templates/models/skresnet.md", "repo_id": "pytorch-image-models", "token_count": 1276 }
181
# Xception **Xception** is a convolutional neural network architecture that relies solely on [depthwise separable convolution layers](https://paperswithcode.com/method/depthwise-separable-convolution). The weights from this model were ported from [Tensorflow/Models](https://github.com/tensorflow/models). {% include ...
pytorch-image-models/docs/models/.templates/models/xception.md/0
{ "file_path": "pytorch-image-models/docs/models/.templates/models/xception.md", "repo_id": "pytorch-image-models", "token_count": 1874 }
182
# Results CSV files containing an ImageNet-1K and out-of-distribution (OOD) test set validation results for all models with pretrained weights is located in the repository [results folder](https://github.com/rwightman/pytorch-image-models/tree/master/results). ## Self-trained Weights The table below includes ImageNe...
pytorch-image-models/hfdocs/source/results.mdx/0
{ "file_path": "pytorch-image-models/hfdocs/source/results.mdx", "repo_id": "pytorch-image-models", "token_count": 2259 }
183
DEFAULT_CROP_PCT = 0.875 DEFAULT_CROP_MODE = 'center' IMAGENET_DEFAULT_MEAN = (0.485, 0.456, 0.406) IMAGENET_DEFAULT_STD = (0.229, 0.224, 0.225) IMAGENET_INCEPTION_MEAN = (0.5, 0.5, 0.5) IMAGENET_INCEPTION_STD = (0.5, 0.5, 0.5) IMAGENET_DPN_MEAN = (124 / 255, 117 / 255, 104 / 255) IMAGENET_DPN_STD = tuple([1 / (.0167 *...
pytorch-image-models/timm/data/constants.py/0
{ "file_path": "pytorch-image-models/timm/data/constants.py", "repo_id": "pytorch-image-models", "token_count": 236 }
184
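The constants above are applied channel-wise to pixel values scaled to [0, 1], as (value - mean) / std. A self-contained sketch using the IMAGENET_DEFAULT values from the record:

```python
# Channel-wise normalization with the ImageNet mean/std constants:
# normalized = (value - mean) / std, per RGB channel.

IMAGENET_DEFAULT_MEAN = (0.485, 0.456, 0.406)
IMAGENET_DEFAULT_STD = (0.229, 0.224, 0.225)

def normalize_pixel(rgb):
    return tuple(
        (c - m) / s
        for c, m, s in zip(rgb, IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD)
    )

px = normalize_pixel((0.5, 0.5, 0.5))
print([round(v, 3) for v in px])  # [0.066, 0.196, 0.418]
```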
""" A dataset reader that extracts images from folders Folders are scanned recursively to find image files. Labels are based on the folder hierarchy, just leaf folders by default. Hacked together by / Copyright 2020 Ross Wightman """ import os from typing import Dict, List, Optional, Set, Tuple, Union from timm.util...
pytorch-image-models/timm/data/readers/reader_image_folder.py/0
{ "file_path": "pytorch-image-models/timm/data/readers/reader_image_folder.py", "repo_id": "pytorch-image-models", "token_count": 1510 }
185
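The image-folder reader derives labels from the folder hierarchy, with leaf folders as classes by default. A sketch of that mapping over hypothetical paths (the real reader scans the filesystem recursively):

```python
# Folder-hierarchy labels: the leaf folder containing each image becomes
# its class, and classes are sorted for a stable index assignment.

import os

paths = [
    "train/cat/001.jpg",
    "train/cat/002.jpg",
    "train/dog/001.jpg",
]

labels = [os.path.basename(os.path.dirname(p)) for p in paths]
class_to_idx = {c: i for i, c in enumerate(sorted(set(labels)))}
samples = [(p, class_to_idx[lbl]) for p, lbl in zip(paths, labels)]
print(samples)
# [('train/cat/001.jpg', 0), ('train/cat/002.jpg', 0), ('train/dog/001.jpg', 1)]
```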
""" Attention Pool 2D Implementations of 2D spatial feature pooling using multi-head attention instead of average pool. Based on idea in CLIP by OpenAI, licensed Apache 2.0 https://github.com/openai/CLIP/blob/3b473b0e682c091a9e53623eebc1ca1657385717/clip/model.py Hacked together by / Copyright 2021 Ross Wightman """...
pytorch-image-models/timm/layers/attention_pool2d.py/0
{ "file_path": "pytorch-image-models/timm/layers/attention_pool2d.py", "repo_id": "pytorch-image-models", "token_count": 2301 }
186
""" EvoNorm in PyTorch Based on `Evolving Normalization-Activation Layers` - https://arxiv.org/abs/2004.02967 @inproceedings{NEURIPS2020, author = {Liu, Hanxiao and Brock, Andy and Simonyan, Karen and Le, Quoc}, booktitle = {Advances in Neural Information Processing Systems}, editor = {H. Larochelle and M. Ranzato ...
pytorch-image-models/timm/layers/evo_norm.py/0
{ "file_path": "pytorch-image-models/timm/layers/evo_norm.py", "repo_id": "pytorch-image-models", "token_count": 6684 }
187
from typing import Optional import torch from torch import nn from torch import nn, Tensor from torch.nn.modules.transformer import _get_activation_fn def add_ml_decoder_head(model): if hasattr(model, 'global_pool') and hasattr(model, 'fc'): # most CNN models, like Resnet50 model.global_pool = nn.Identi...
pytorch-image-models/timm/layers/ml_decoder.py/0
{ "file_path": "pytorch-image-models/timm/layers/ml_decoder.py", "repo_id": "pytorch-image-models", "token_count": 3177 }
188
""" Split BatchNorm A PyTorch BatchNorm layer that splits input batch into N equal parts and passes each through a separate BN layer. The first split is passed through the parent BN layers with weight/bias keys the same as the original BN. All other splits pass through BN sub-layers under the '.aux_bn' namespace. Thi...
pytorch-image-models/timm/layers/split_batchnorm.py/0
{ "file_path": "pytorch-image-models/timm/layers/split_batchnorm.py", "repo_id": "pytorch-image-models", "token_count": 1394 }
189
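The SplitBatchNorm record describes splitting a batch into N equal parts and normalizing each with its own statistics. A pure-Python sketch of that behavior on a 1-D "batch" (the real layer routes the first split through the parent BN and the rest through `.aux_bn` sub-layers, with learned affine parameters omitted here):

```python
# Per-split normalization: each chunk of the batch is standardized with
# its own mean and variance, instead of one set of batch statistics.

def split_norm(batch, num_splits, eps=1e-5):
    assert len(batch) % num_splits == 0
    size = len(batch) // num_splits
    out = []
    for i in range(num_splits):
        chunk = batch[i * size:(i + 1) * size]
        mean = sum(chunk) / size
        var = sum((x - mean) ** 2 for x in chunk) / size
        out.extend((x - mean) / (var + eps) ** 0.5 for x in chunk)
    return out

# Two splits with very different scales each normalize to roughly +/-1.
print([round(v, 2) for v in split_norm([1.0, 3.0, 10.0, 30.0], 2)])
# [-1.0, 1.0, -1.0, 1.0]
```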
import os from typing import Any, Dict, Optional, Union from urllib.parse import urlsplit from timm.layers import set_layer_config from ._helpers import load_checkpoint from ._hub import load_model_config_from_hf from ._pretrained import PretrainedCfg from ._registry import is_model, model_entrypoint, split_model_name...
pytorch-image-models/timm/models/_factory.py/0
{ "file_path": "pytorch-image-models/timm/models/_factory.py", "repo_id": "pytorch-image-models", "token_count": 1944 }
190
""" Bring-Your-Own-Blocks Network A flexible network w/ dataclass based config for stacking those NN blocks. This model is currently used to implement the following networks: GPU Efficient (ResNets) - gernet_l/m/s (original versions called genet, but this was already used (by SENet author)). Paper: `Neural Architect...
pytorch-image-models/timm/models/byobnet.py/0
{ "file_path": "pytorch-image-models/timm/models/byobnet.py", "repo_id": "pytorch-image-models", "token_count": 42793 }
191
""" The EfficientNet Family in PyTorch An implementation of EfficienNet that covers variety of related models with efficient architectures: * EfficientNet-V2 - `EfficientNetV2: Smaller Models and Faster Training` - https://arxiv.org/abs/2104.00298 * EfficientNet (B0-B8, L2 + Tensorflow pretrained AutoAug/RandAug/A...
pytorch-image-models/timm/models/efficientnet.py/0
{ "file_path": "pytorch-image-models/timm/models/efficientnet.py", "repo_id": "pytorch-image-models", "token_count": 47473 }
192
""" InceptionNeXt paper: https://arxiv.org/abs/2303.16900 Original implementation & weights from: https://github.com/sail-sg/inceptionnext """ from functools import partial import torch import torch.nn as nn from timm.data import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD from timm.layers import trunc_normal_, Drop...
pytorch-image-models/timm/models/inception_next.py/0
{ "file_path": "pytorch-image-models/timm/models/inception_next.py", "repo_id": "pytorch-image-models", "token_count": 7709 }
193
""" Pooling-based Vision Transformer (PiT) in PyTorch A PyTorch implement of Pooling-based Vision Transformers as described in 'Rethinking Spatial Dimensions of Vision Transformers' - https://arxiv.org/abs/2103.16302 This code was adapted from the original version at https://github.com/naver-ai/pit, original copyrigh...
pytorch-image-models/timm/models/pit.py/0
{ "file_path": "pytorch-image-models/timm/models/pit.py", "repo_id": "pytorch-image-models", "token_count": 7347 }
194
""" Swin Transformer A PyTorch impl of : `Swin Transformer: Hierarchical Vision Transformer using Shifted Windows` - https://arxiv.org/pdf/2103.14030 Code/weights from https://github.com/microsoft/Swin-Transformer, original copyright/license info below S3 (AutoFormerV2, https://arxiv.org/abs/2111.14725) Swin weig...
pytorch-image-models/timm/models/swin_transformer.py/0
{ "file_path": "pytorch-image-models/timm/models/swin_transformer.py", "repo_id": "pytorch-image-models", "token_count": 16908 }
195
"""Pytorch impl of Aligned Xception 41, 65, 71 This is a correct, from scratch impl of Aligned Xception (Deeplab) models compatible with TF weights at https://github.com/tensorflow/models/blob/master/research/deeplab/g3doc/model_zoo.md Hacked together by / Copyright 2020 Ross Wightman """ from functools import partia...
pytorch-image-models/timm/models/xception_aligned.py/0
{ "file_path": "pytorch-image-models/timm/models/xception_aligned.py", "repo_id": "pytorch-image-models", "token_count": 7719 }
196
""" Nvidia NovoGrad Optimizer. Original impl by Nvidia from Jasper example: - https://github.com/NVIDIA/DeepLearningExamples/blob/master/PyTorch/SpeechRecognition/Jasper Paper: `Stochastic Gradient Methods with Layer-wise Adaptive Moments for Training of Deep Networks` - https://arxiv.org/abs/1905.11286 """ im...
pytorch-image-models/timm/optim/nvnovograd.py/0
{ "file_path": "pytorch-image-models/timm/optim/nvnovograd.py", "repo_id": "pytorch-image-models", "token_count": 2415 }
197
""" Adaptive Gradient Clipping An impl of AGC, as per (https://arxiv.org/abs/2102.06171): @article{brock2021high, author={Andrew Brock and Soham De and Samuel L. Smith and Karen Simonyan}, title={High-Performance Large-Scale Image Recognition Without Normalization}, journal={arXiv preprint arXiv:}, year={2021...
pytorch-image-models/timm/utils/agc.py/0
{ "file_path": "pytorch-image-models/timm/utils/agc.py", "repo_id": "pytorch-image-models", "token_count": 661 }
198
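The AGC record implements adaptive gradient clipping from the NFNet paper: the gradient is rescaled whenever its norm exceeds a fixed fraction of the parameter norm, so the clipping threshold adapts to the weight magnitude. A sketch for a single parameter vector (the real implementation works unit-wise on tensors):

```python
# Adaptive gradient clipping: if ||g|| > clip_factor * max(||p||, eps),
# rescale g to have norm clip_factor * ||p||; otherwise leave it alone.

def agc(param, grad, clip_factor=0.01, eps=1e-3):
    p_norm = max(sum(p * p for p in param) ** 0.5, eps)
    g_norm = sum(g * g for g in grad) ** 0.5
    max_norm = clip_factor * p_norm
    if g_norm > max_norm:
        scale = max_norm / g_norm
        return [g * scale for g in grad]
    return list(grad)

# ||p|| = 5, threshold = 0.1 * 5 = 0.5; grad norm 10 is rescaled to 0.5.
print(agc([3.0, 4.0], [10.0, 0.0], clip_factor=0.1))  # [0.5, 0.0]
```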
#!/usr/bin/env python3 """ ImageNet Training Script This is intended to be a lean and easily modifiable ImageNet training script that reproduces ImageNet training results with some of the latest networks and training techniques. It favours canonical PyTorch and standard Python style over trying to be able to 'do it al...
pytorch-image-models/train.py/0
{ "file_path": "pytorch-image-models/train.py", "repo_id": "pytorch-image-models", "token_count": 24460 }
199
# Rust builder FROM lukemathwalker/cargo-chef:latest-rust-1.75 AS chef WORKDIR /usr/src ARG CARGO_REGISTRIES_CRATES_IO_PROTOCOL=sparse FROM chef as planner COPY Cargo.toml Cargo.toml COPY rust-toolchain.toml rust-toolchain.toml COPY proto proto COPY benchmark benchmark COPY router router COPY launcher launcher RUN ca...
text-generation-inference/Dockerfile_amd/0
{ "file_path": "text-generation-inference/Dockerfile_amd", "repo_id": "text-generation-inference", "token_count": 2129 }
200
unit-tests: python -m pytest --cov=text_generation tests install: pip install pip --upgrade pip install -e .
text-generation-inference/clients/python/Makefile/0
{ "file_path": "text-generation-inference/clients/python/Makefile", "repo_id": "text-generation-inference", "token_count": 41 }
201
- sections: - local: index title: Text Generation Inference - local: quicktour title: Quick Tour - local: installation title: Installation - local: supported_models title: Supported Models and Hardware - local: messages_api title: Messages API title: Getting started - sections: - local...
text-generation-inference/docs/source/_toctree.yml/0
{ "file_path": "text-generation-inference/docs/source/_toctree.yml", "repo_id": "text-generation-inference", "token_count": 434 }
202
# Installation This section explains how to install the CLI tool as well as installing TGI from source. **The strongly recommended approach is to use Docker, as it does not require much setup. Check [the Quick Tour](./quicktour) to learn how to run TGI with Docker.** ## Install CLI You can use TGI command-line inter...
text-generation-inference/docs/source/installation.md/0
{ "file_path": "text-generation-inference/docs/source/installation.md", "repo_id": "text-generation-inference", "token_count": 700 }
203
{ "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [ { "id": 330, "logprob": null, "text": "ir" }, { "id": 1622, "logprob": -7.8125, "text": "af" }, { "id": 249, ...
text-generation-inference/integration-tests/models/__snapshots__/test_flash_falcon/test_flash_falcon_all_params.json/0
{ "file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_flash_falcon/test_flash_falcon_all_params.json", "repo_id": "text-generation-inference", "token_count": 1204 }
204
{ "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [ { "id": 1, "logprob": null, "text": "<s>" }, { "id": 338, "logprob": -10.0078125, "text": "is" }, { "id": 2178...
text-generation-inference/integration-tests/models/__snapshots__/test_flash_medusa/test_flash_medusa_all_params.json/0
{ "file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_flash_medusa/test_flash_medusa_all_params.json", "repo_id": "text-generation-inference", "token_count": 1153 }
205
{ "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [ { "id": 563, "logprob": null, "text": "def" }, { "id": 942, "logprob": -5.1367188, "text": " print" }, { "id":...
text-generation-inference/integration-tests/models/__snapshots__/test_flash_santacoder/test_flash_santacoder.json/0
{ "file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_flash_santacoder/test_flash_santacoder.json", "repo_id": "text-generation-inference", "token_count": 1111 }
206
[ { "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [ { "id": 1276, "logprob": null, "text": "What" }, { "id": 310, "logprob": -0.83984375, "text": " is...
text-generation-inference/integration-tests/models/__snapshots__/test_mamba/test_mamba_load.json/0
{ "file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_mamba/test_mamba_load.json", "repo_id": "text-generation-inference", "token_count": 5458 }
207
{ "choices": [ { "delta": { "content": null, "role": "assistant", "tool_calls": { "function": { "arguments": "</s>", "name": null }, "id": "", "index": 20, "type": "function" } }, "finish_re...
text-generation-inference/integration-tests/models/__snapshots__/test_tools_llama/test_flash_llama_grammar_tools_stream.json/0
{ "file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_tools_llama/test_flash_llama_grammar_tools_stream.json", "repo_id": "text-generation-inference", "token_count": 319 }
208
import pytest @pytest.fixture(scope="module") def flash_santacoder_handle(launcher): with launcher("bigcode/santacoder") as handle: yield handle @pytest.fixture(scope="module") async def flash_santacoder(flash_santacoder_handle): await flash_santacoder_handle.health(300) return flash_santacoder_...
text-generation-inference/integration-tests/models/test_flash_santacoder.py/0
{ "file_path": "text-generation-inference/integration-tests/models/test_flash_santacoder.py", "repo_id": "text-generation-inference", "token_count": 387 }
209
//! Text Generation gRPC client library mod client; #[allow(clippy::derive_partial_eq_without_eq)] mod pb; mod sharded_client; pub use client::Client; pub use pb::generate::v2::HealthResponse; pub use pb::generate::v2::InfoResponse as ShardInfo; pub use pb::generate::v2::{ Batch, CachedBatch, FinishReason, Genera...
text-generation-inference/router/client/src/lib.rs/0
{ "file_path": "text-generation-inference/router/client/src/lib.rs", "repo_id": "text-generation-inference", "token_count": 464 }
210
# Fork that adds only the correct stream to this kernel in order # to make cuda graphs work. awq_commit := bd1dc2d5254345cc76ab71894651fb821275bdd4 awq: rm -rf llm-awq git clone https://github.com/huggingface/llm-awq build-awq: awq cd llm-awq/ && git fetch && git checkout $(awq_commit) cd llm-awq/awq/kernels && p...
text-generation-inference/server/Makefile-awq/0
{ "file_path": "text-generation-inference/server/Makefile-awq", "repo_id": "text-generation-inference", "token_count": 183 }
211
// Adapted from turboderp exllama: https://github.com/turboderp/exllama #ifndef _q4_matmul_cuh #define _q4_matmul_cuh #include <cuda_runtime.h> #include <cuda_fp16.h> #include <cstdint> #include <cstdio> #include <ATen/cuda/CUDAContext.h> #include "q4_matrix.cuh" #include "../tuning.h" void q4_matmul_cuda ( ExL...
text-generation-inference/server/exllama_kernels/exllama_kernels/cuda_func/q4_matmul.cuh/0
{ "file_path": "text-generation-inference/server/exllama_kernels/exllama_kernels/cuda_func/q4_matmul.cuh", "repo_id": "text-generation-inference", "token_count": 322 }
212
#include "compat.cuh" __forceinline__ __device__ half2 dot22_8(half2(&dq)[4], const half* a_ptr, const half2 g_result) { half2 result = {}; const half2* a2_ptr = (const half2*)a_ptr; #pragma unroll for (int i = 0; i < 4; i++) result = __hfma2(dq[i], *a2_ptr++, result); return __hadd2(result, g_resu...
text-generation-inference/server/exllamav2_kernels/exllamav2_kernels/cuda/q_gemm_kernel_gptq.cuh/0
{ "file_path": "text-generation-inference/server/exllamav2_kernels/exllamav2_kernels/cuda/q_gemm_kernel_gptq.cuh", "repo_id": "text-generation-inference", "token_count": 4839 }
213
import torch import grpc from google.rpc import status_pb2, code_pb2 from grpc_status import rpc_status from grpc_interceptor.server import AsyncServerInterceptor from loguru import logger from typing import Callable, Any class ExceptionInterceptor(AsyncServerInterceptor): async def intercept( self, ...
text-generation-inference/server/text_generation_server/interceptor.py/0
{ "file_path": "text-generation-inference/server/text_generation_server/interceptor.py", "repo_id": "text-generation-inference", "token_count": 449 }
214
import math import torch import torch.distributed import numpy as np from dataclasses import dataclass from opentelemetry import trace from transformers import PreTrainedTokenizerBase from transformers.models.llama import LlamaTokenizerFast from typing import Optional, Tuple, Type from text_generation_server.pb impo...
text-generation-inference/server/text_generation_server/models/flash_mistral.py/0
{ "file_path": "text-generation-inference/server/text_generation_server/models/flash_mistral.py", "repo_id": "text-generation-inference", "token_count": 10224 }
215
import torch import torch.distributed from typing import Optional from transformers import ( AutoTokenizer, AutoConfig, ) from text_generation_server.models.custom_modeling.opt_modeling import OPTForCausalLM from text_generation_server.models import CausalLM from text_generation_server.utils import ( init...
text-generation-inference/server/text_generation_server/models/opt.py/0
{ "file_path": "text-generation-inference/server/text_generation_server/models/opt.py", "repo_id": "text-generation-inference", "token_count": 1210 }
216
# https://github.com/fpgaminer/GPTQ-triton """ Mostly the same as the autotuner in Triton, but with a few changes like using 40 runs instead of 100. """ import builtins import math import time from typing import Dict import triton class Autotuner(triton.KernelInterface): def __init__( self, fn, ...
text-generation-inference/server/text_generation_server/utils/gptq/custom_autotune.py/0
{ "file_path": "text-generation-inference/server/text_generation_server/utils/gptq/custom_autotune.py", "repo_id": "text-generation-inference", "token_count": 5116 }
217
import subprocess import argparse def main(): parser = argparse.ArgumentParser() parser.add_argument("--check", action="store_true") args = parser.parse_args() output = subprocess.check_output(["text-generation-launcher", "--help"]).decode( "utf-8" ) wrap_code_blocks_flag = "<!-- WR...
text-generation-inference/update_doc.py/0
{ "file_path": "text-generation-inference/update_doc.py", "repo_id": "text-generation-inference", "token_count": 991 }
218
<p align="center"> <br> <img src="https://huggingface.co/landing/assets/tokenizers/tokenizers-logo.png" width="600"/> <br> <p> <p align="center"> <img alt="Build" src="https://github.com/huggingface/tokenizers/workflows/Rust/badge.svg"> <a href="https://github.com/huggingface/tokenizers/blob/main/LI...
tokenizers/README.md/0
{ "file_path": "tokenizers/README.md", "repo_id": "tokenizers", "token_count": 945 }
219
/* eslint-disable */ var globRequire = require; describe("pipelineExample", () => { // This is a hack to let us require using path similar to what the user has to use function require(mod: string) { if (mod.startsWith("tokenizers")) { // let path = mod.slice("tokenizers".length); ...
tokenizers/bindings/node/examples/documentation/pipeline.test.ts/0
{ "file_path": "tokenizers/bindings/node/examples/documentation/pipeline.test.ts", "repo_id": "tokenizers", "token_count": 2710 }
220
# `tokenizers-android-arm-eabi` This is the **armv7-linux-androideabi** binary for `tokenizers`
tokenizers/bindings/node/npm/android-arm-eabi/README.md/0
{ "file_path": "tokenizers/bindings/node/npm/android-arm-eabi/README.md", "repo_id": "tokenizers", "token_count": 35 }
221
# `tokenizers-linux-x64-gnu` This is the **x86_64-unknown-linux-gnu** binary for `tokenizers`
tokenizers/bindings/node/npm/linux-x64-gnu/README.md/0
{ "file_path": "tokenizers/bindings/node/npm/linux-x64-gnu/README.md", "repo_id": "tokenizers", "token_count": 36 }
222
use crate::arc_rwlock_serde; use crate::tasks::models::{BPEFromFilesTask, WordLevelFromFilesTask, WordPieceFromFilesTask}; use crate::trainers::Trainer; use napi::bindgen_prelude::*; use napi_derive::napi; use serde::{Deserialize, Serialize}; use std::collections::HashMap; use std::path::{Path, PathBuf}; use std::sync:...
tokenizers/bindings/node/src/models.rs/0
{ "file_path": "tokenizers/bindings/node/src/models.rs", "repo_id": "tokenizers", "token_count": 3681 }
223
[package] name = "tokenizers-python" version = "0.15.3-dev.0" authors = ["Anthony MOI <m.anthony.moi@gmail.com>"] edition = "2021" [lib] name = "tokenizers" crate-type = ["cdylib"] [dependencies] rayon = "1.8" serde = { version = "1.0", features = [ "rc", "derive" ]} serde_json = "1.0" libc = "0.2" env_logger = "0.10...
tokenizers/bindings/python/Cargo.toml/0
{ "file_path": "tokenizers/bindings/python/Cargo.toml", "repo_id": "tokenizers", "token_count": 302 }
224
from typing import Dict, List, Optional, Tuple, Union from tokenizers import AddedToken, EncodeInput, Encoding, InputSequence, Tokenizer from tokenizers.decoders import Decoder from tokenizers.models import Model from tokenizers.normalizers import Normalizer from tokenizers.pre_tokenizers import PreTokenizer from toke...
tokenizers/bindings/python/py_src/tokenizers/implementations/base_tokenizer.py/0
{ "file_path": "tokenizers/bindings/python/py_src/tokenizers/implementations/base_tokenizer.py", "repo_id": "tokenizers", "token_count": 6036 }
225
import itertools import os import re from string import Template from typing import Any, Callable, Dict, List, NamedTuple, Optional, Tuple from tokenizers import Encoding, Tokenizer dirname = os.path.dirname(__file__) css_filename = os.path.join(dirname, "visualizer-styles.css") with open(css_filename) as f: css...
tokenizers/bindings/python/py_src/tokenizers/tools/visualizer.py/0
{ "file_path": "tokenizers/bindings/python/py_src/tokenizers/tools/visualizer.py", "repo_id": "tokenizers", "token_count": 6754 }
226
use std::convert::TryInto; use std::sync::Arc; use pyo3::exceptions; use pyo3::prelude::*; use pyo3::types::*; use crate::encoding::PyEncoding; use crate::error::ToPyResult; use serde::{Deserialize, Serialize}; use tk::processors::bert::BertProcessing; use tk::processors::byte_level::ByteLevel; use tk::processors::ro...
tokenizers/bindings/python/src/processors.rs/0
{ "file_path": "tokenizers/bindings/python/src/processors.rs", "repo_id": "tokenizers", "token_count": 7873 }
227
import pickle import pytest from tokenizers import NormalizedString from tokenizers.normalizers import BertNormalizer, Lowercase, Normalizer, Sequence, Strip, Prepend class TestBertNormalizer: def test_instantiate(self): assert isinstance(BertNormalizer(), Normalizer) assert isinstance(BertNorma...
tokenizers/bindings/python/tests/bindings/test_normalizers.py/0
{ "file_path": "tokenizers/bindings/python/tests/bindings/test_normalizers.py", "repo_id": "tokenizers", "token_count": 2342 }
228
import multiprocessing as mp import os import pytest import requests DATA_PATH = os.path.join("tests", "data") def download(url, with_filename=None): filename = with_filename if with_filename is not None else url.rsplit("/")[-1] filepath = os.path.join(DATA_PATH, filename) if not os.path.exists(filepa...
tokenizers/bindings/python/tests/utils.py/0
{ "file_path": "tokenizers/bindings/python/tests/utils.py", "repo_id": "tokenizers", "token_count": 1569 }
229
Documentation ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The node API has not been documented yet.
tokenizers/docs/source/api/node.inc/0
{ "file_path": "tokenizers/docs/source/api/node.inc", "repo_id": "tokenizers", "token_count": 22 }
230
[package] authors = ["Anthony MOI <m.anthony.moi@gmail.com>", "Nicolas Patry <patry.nicolas@protonmail.com>"] edition = "2018" name = "tokenizers" version = "0.15.3-dev.0" homepage = "https://github.com/huggingface/tokenizers" repository = "https://github.com/huggingface/tokenizers" documentation = "https://docs.rs/tok...
tokenizers/tokenizers/Cargo.toml/0
{ "file_path": "tokenizers/tokenizers/Cargo.toml", "repo_id": "tokenizers", "token_count": 838 }
231
//! Test suite for the Web and headless browsers. #![cfg(target_arch = "wasm32")] extern crate wasm_bindgen_test; use wasm_bindgen_test::*; wasm_bindgen_test_configure!(run_in_browser); #[wasm_bindgen_test] fn pass() { assert_eq!(1 + 1, 2); }
tokenizers/tokenizers/examples/unstable_wasm/tests/web.rs/0
{ "file_path": "tokenizers/tokenizers/examples/unstable_wasm/tests/web.rs", "repo_id": "tokenizers", "token_count": 109 }
232
use super::model::Unigram; use serde::{ de::{Error, MapAccess, Visitor}, ser::SerializeStruct, Deserialize, Deserializer, Serialize, Serializer, }; impl Serialize for Unigram { fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error> where S: Serializer, { let mut model ...
tokenizers/tokenizers/src/models/unigram/serialization.rs/0
{ "file_path": "tokenizers/tokenizers/src/models/unigram/serialization.rs", "repo_id": "tokenizers", "token_count": 1824 }
233
use serde::{Deserialize, Serialize}; use crate::normalizers::NormalizerWrapper; use crate::tokenizer::{NormalizedString, Normalizer, Result}; use crate::utils::macro_rules_attribute; #[derive(Clone, Deserialize, Debug, Serialize)] #[serde(tag = "type")] /// Allows concatenating multiple other Normalizer as a Sequence...
tokenizers/tokenizers/src/normalizers/utils.rs/0
{ "file_path": "tokenizers/tokenizers/src/normalizers/utils.rs", "repo_id": "tokenizers", "token_count": 478 }
234
use crate::processors::byte_level::process_offsets; use crate::tokenizer::{Encoding, PostProcessor, Result}; use serde::{Deserialize, Serialize}; use std::collections::HashMap; use std::iter::FromIterator; #[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq)] #[serde(tag = "type")] pub struct RobertaProcessin...
tokenizers/tokenizers/src/processors/roberta.rs/0
{ "file_path": "tokenizers/tokenizers/src/processors/roberta.rs", "repo_id": "tokenizers", "token_count": 8419 }
235
use crate::parallelism::*; use crate::tokenizer::{Encoding, Result}; use serde::{Deserialize, Serialize}; /// The various possible padding directions. #[derive(Debug, Clone, Copy, Serialize, Deserialize)] pub enum PaddingDirection { Left, Right, } impl std::convert::AsRef<str> for PaddingDirection { fn as...
tokenizers/tokenizers/src/utils/padding.rs/0
{ "file_path": "tokenizers/tokenizers/src/utils/padding.rs", "repo_id": "tokenizers", "token_count": 2049 }
236
FROM google/cloud-sdk:slim # Build args. ARG GITHUB_REF=refs/heads/main # TODO: This Dockerfile installs pytorch/xla 3.6 wheels. There are also 3.7 # wheels available; see below. ENV PYTHON_VERSION=3.6 RUN apt-get update && apt-get install -y --no-install-recommends \ build-essential \ cmake \ ...
transformers/docker/transformers-pytorch-tpu/Dockerfile/0
{ "file_path": "transformers/docker/transformers-pytorch-tpu/Dockerfile", "repo_id": "transformers", "token_count": 1235 }
237
<!--- Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or ...
transformers/docs/source/de/contributing.md/0
{ "file_path": "transformers/docs/source/de/contributing.md", "repo_id": "transformers", "token_count": 8257 }
238
- sections: - local: index title: ๐Ÿค— Transformers - local: quicktour title: Quick tour - local: installation title: Installation title: Get started - sections: - local: pipeline_tutorial title: Run inference with pipelines - local: autoclass_tutorial title: Write portable code with AutoC...
transformers/docs/source/en/_toctree.yml/0
{ "file_path": "transformers/docs/source/en/_toctree.yml", "repo_id": "transformers", "token_count": 11121 }
239
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
transformers/docs/source/en/debugging.md/0
{ "file_path": "transformers/docs/source/en/debugging.md", "repo_id": "transformers", "token_count": 6482 }
240
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
transformers/docs/source/en/main_classes/onnx.md/0
{ "file_path": "transformers/docs/source/en/main_classes/onnx.md", "repo_id": "transformers", "token_count": 523 }
241
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
transformers/docs/source/en/model_doc/bart.md/0
{ "file_path": "transformers/docs/source/en/model_doc/bart.md", "repo_id": "transformers", "token_count": 3297 }
242
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
transformers/docs/source/en/model_doc/bloom.md/0
{ "file_path": "transformers/docs/source/en/model_doc/bloom.md", "repo_id": "transformers", "token_count": 1158 }
243
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
transformers/docs/source/en/model_doc/convbert.md/0
{ "file_path": "transformers/docs/source/en/model_doc/convbert.md", "repo_id": "transformers", "token_count": 1393 }
244
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
transformers/docs/source/en/model_doc/detr.md/0
{ "file_path": "transformers/docs/source/en/model_doc/detr.md", "repo_id": "transformers", "token_count": 4104 }
245
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
transformers/docs/source/en/model_doc/esm.md/0
{ "file_path": "transformers/docs/source/en/model_doc/esm.md", "repo_id": "transformers", "token_count": 1906 }
246
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
transformers/docs/source/en/model_doc/gpt2.md/0
{ "file_path": "transformers/docs/source/en/model_doc/gpt2.md", "repo_id": "transformers", "token_count": 2619 }
247
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
transformers/docs/source/en/model_doc/jukebox.md/0
{ "file_path": "transformers/docs/source/en/model_doc/jukebox.md", "repo_id": "transformers", "token_count": 1219 }
248
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
transformers/docs/source/en/model_doc/lxmert.md/0
{ "file_path": "transformers/docs/source/en/model_doc/lxmert.md", "repo_id": "transformers", "token_count": 1392 }
249
<!--Copyright 2023 Mistral AI and The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicabl...
transformers/docs/source/en/model_doc/mixtral.md/0
{ "file_path": "transformers/docs/source/en/model_doc/mixtral.md", "repo_id": "transformers", "token_count": 3416 }
250
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
transformers/docs/source/en/model_doc/nezha.md/0
{ "file_path": "transformers/docs/source/en/model_doc/nezha.md", "repo_id": "transformers", "token_count": 906 }
251
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
transformers/docs/source/en/model_doc/persimmon.md/0
{ "file_path": "transformers/docs/source/en/model_doc/persimmon.md", "repo_id": "transformers", "token_count": 1564 }
252
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
transformers/docs/source/en/model_doc/rembert.md/0
{ "file_path": "transformers/docs/source/en/model_doc/rembert.md", "repo_id": "transformers", "token_count": 1363 }
253
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
transformers/docs/source/en/model_doc/speech-encoder-decoder.md/0
{ "file_path": "transformers/docs/source/en/model_doc/speech-encoder-decoder.md", "repo_id": "transformers", "token_count": 2092 }
254
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
transformers/docs/source/en/model_doc/table-transformer.md/0
{ "file_path": "transformers/docs/source/en/model_doc/table-transformer.md", "repo_id": "transformers", "token_count": 978 }
255
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
transformers/docs/source/en/model_doc/upernet.md/0
{ "file_path": "transformers/docs/source/en/model_doc/upernet.md", "repo_id": "transformers", "token_count": 1188 }
256
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
transformers/docs/source/en/model_doc/xmod.md/0
{ "file_path": "transformers/docs/source/en/model_doc/xmod.md", "repo_id": "transformers", "token_count": 1496 }
257