Dataset columns:
- text — string, length 7 to 328k characters (raw file contents)
- id — string, length 14 to 166 characters (file path plus chunk suffix)
- metadata — dict with keys file_path, repo_id, and token_count
- __index_level_0__ — int64, values 0 to 459

Each row below is shown as four lines, following the column order above: text (truncated preview), id, metadata, __index_level_0__.
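To make the schema concrete, here is a minimal sketch of how a dataset with these columns could be loaded and inspected with the 🤗 Datasets library. The repository id "user/code-texts" is a placeholder assumption, not the actual dataset name.

```python
# Minimal sketch: load a dataset with the schema described above and inspect one row.
# "user/code-texts" is a hypothetical Hub id used only for illustration.
from datasets import load_dataset

ds = load_dataset("user/code-texts", split="train")

row = ds[0]
print(row["id"])                       # e.g. a file path ending in "/0"
print(row["metadata"]["token_count"])  # token count recorded for this sample
print(row["text"][:200])               # first 200 characters of the raw file text
```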
# coding=utf-8 # Copyright 2024 HuggingFace Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or ag...
diffusers/tests/pipelines/stable_diffusion_2/test_stable_diffusion_inpaint.py/0
{ "file_path": "diffusers/tests/pipelines/stable_diffusion_2/test_stable_diffusion_inpaint.py", "repo_id": "diffusers", "token_count": 4788 }
140
# coding=utf-8 # Copyright 2024 HuggingFace Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or ag...
diffusers/tests/pipelines/unclip/test_unclip_image_variation.py/0
{ "file_path": "diffusers/tests/pipelines/unclip/test_unclip_image_variation.py", "repo_id": "diffusers", "token_count": 8161 }
141
import tempfile import unittest import numpy as np import torch from diffusers import ScoreSdeVeScheduler class ScoreSdeVeSchedulerTest(unittest.TestCase): # TODO adapt with class SchedulerCommonTest (scheduler needs Numpy Integration) scheduler_classes = (ScoreSdeVeScheduler,) forward_default_kwargs = ...
diffusers/tests/schedulers/test_scheduler_score_sde_ve.py/0
{ "file_path": "diffusers/tests/schedulers/test_scheduler_score_sde_ve.py", "repo_id": "diffusers", "token_count": 3215 }
142
# Stable Diffusion Deep Dive <CourseFloatingBanner unit={3} classNames="absolute z-10 right-0 top-0" notebooks={[ {label: "Stable Diffusion Deep Dive", value: "https://colab.research.google.com/github/huggingface/diffusion-models-class/blob/main/units/en/unit3/stable_diffusion_deep_dive.ipynb"}, {label: "S...
diffusion-models-class/units/en/unit3/3.mdx/0
{ "file_path": "diffusion-models-class/units/en/unit3/3.mdx", "repo_id": "diffusion-models-class", "token_count": 20868 }
143
# Sprint ControlNet en JAX/Diffusers Bienvenue au sprint communautaire en JAX/Diffusers ! L'objectif de ce sprint est de travailler sur des modèles de diffusion amusants et créatifs en utilisant JAX et Diffusers. Lors de cet événement, nous créerons diverses applications avec des modèles de diffusion en JAX/Flax et D...
diffusion-models-class/units/fr/events/4.mdx/0
{ "file_path": "diffusion-models-class/units/fr/events/4.mdx", "repo_id": "diffusion-models-class", "token_count": 15277 }
144
<jupyter_start><jupyter_text>Traduction (PyTorch) Installez les bibliothèques 🤗 *Datasets* et 🤗 *Transformers* pour exécuter ce *notebook*.<jupyter_code>!pip install datasets transformers[sentencepiece] !pip install accelerate # Pour exécuter l'entraînement sur TPU, vous devez décommenter la ligne suivante : # !pip i...
notebooks/course/fr/chapter7/section4_pt.ipynb/0
{ "file_path": "notebooks/course/fr/chapter7/section4_pt.ipynb", "repo_id": "notebooks", "token_count": 3791 }
145
<jupyter_start><jupyter_text>Partager ses démos avec d'autres Installez les bibliothèques 🤗 Transformers et 🤗 Gradio pour exécuter ce *notebook*.<jupyter_code>!pip install datasets transformers[sentencepiece] !pip install gradio import gradio as gr title = "Poser une question (en anglais) à Rick" description = """ L...
notebooks/course/fr/chapter9/section4.ipynb/0
{ "file_path": "notebooks/course/fr/chapter9/section4.ipynb", "repo_id": "notebooks", "token_count": 1441 }
146
<jupyter_start><jupyter_text>IntroductionThis notebook is designed to run inference on the [Diffuser](https://arxiv.org/abs/2205.09991) planning model for model-based RL. The notebook is modified from the authors' [original](https://colab.research.google.com/drive/1YajKhu-CUIGBJeQPehjVPJcK_b38a8Nc?usp=sharingscrollTo=5...
notebooks/diffusers/reinforcement_learning_with_diffusers.ipynb/0
{ "file_path": "notebooks/diffusers/reinforcement_learning_with_diffusers.ipynb", "repo_id": "notebooks", "token_count": 8060 }
147
<jupyter_start><jupyter_text>IDEFICS: A Flamingo-based model, trained at scale for the community Finetuning Demo Notebook: Credit: [Flamingo blog](https://www.deepmind.com/blog/tackling-multiple-tasks-with-a-single-visual-language-model)This google colab notebook shows how to run predictions with the 4-bit quantized...
notebooks/examples/idefics/finetune_image_captioning_peft.ipynb/0
{ "file_path": "notebooks/examples/idefics/finetune_image_captioning_peft.ipynb", "repo_id": "notebooks", "token_count": 3875 }
148
<jupyter_start><jupyter_text>How to export 🤗 Transformers Models to ONNX ? [ONNX](http://onnx.ai/) is open format for machine learning models. It allows to save your neural network's computation graph in a framework agnostic way, which might be particulary helpful when deploying deep learning models.Indeed, businesses...
notebooks/examples/onnx-export.ipynb/0
{ "file_path": "notebooks/examples/onnx-export.ipynb", "repo_id": "notebooks", "token_count": 6241 }
149
<jupyter_start><jupyter_text>If you're opening this Notebook on colab, you will probably need to install the most recent versions of 🤗 Transformers and 🤗 Datasets. We will also need `scipy` and `scikit-learn` for some of the metrics. Uncomment the following cell and run it.<jupyter_code>#! pip install transformers #!...
notebooks/examples/text_classification-tf.ipynb/0
{ "file_path": "notebooks/examples/text_classification-tf.ipynb", "repo_id": "notebooks", "token_count": 8177 }
150
<jupyter_start><jupyter_text><jupyter_code>!pip install transformers !sudo apt-get install git-lfs !git config --global user.email "julien@huggingface.co" !git config --global user.name "Julien Chaumond" !transformers-cli login !pwd !transformers-cli repo create policy-distilbert-7d !git clone https://julien-c:...token...
notebooks/huggingface_hub/upload_hf_model.ipynb/0
{ "file_path": "notebooks/huggingface_hub/upload_hf_model.ipynb", "repo_id": "notebooks", "token_count": 478 }
151
<jupyter_start><jupyter_text>Huggingface Sagemaker-sdk - Distributed Training Demo for `TensorFlow` Distributed Data Parallelism with `transformers` and `tensorflow` 1. [Introduction](Introduction) 2. [Development Environment and Permissions](Development-Environment-and-Permissions) 1. [Installation](Installation) ...
notebooks/sagemaker/07_tensorflow_distributed_training_data_parallelism/sagemaker-notebook.ipynb/0
{ "file_path": "notebooks/sagemaker/07_tensorflow_distributed_training_data_parallelism/sagemaker-notebook.ipynb", "repo_id": "notebooks", "token_count": 3614 }
152
<jupyter_start><jupyter_text>Accelerate BERT Inference with Hugging Face Transformers and AWS inferentia In this end-to-end tutorial, you will learn how to speed up BERT inference for text classification with Hugging Face Transformers, Amazon SageMaker, and AWS Inferentia. You will learn how to: 1. Convert your Hugging...
notebooks/sagemaker/18_inferentia_inference/sagemaker-notebook.ipynb/0
{ "file_path": "notebooks/sagemaker/18_inferentia_inference/sagemaker-notebook.ipynb", "repo_id": "notebooks", "token_count": 3902 }
153
<jupyter_start><jupyter_text>Document AI: Fine-tuning Donut for document-parsing using Hugging Face Transformers on Amazon SageMakerIn this tutorial, you will learn how to fine-tune and deploy [Donut-base](https://huggingface.co/naver-clova-ix/donut-base) for document-understand/document-parsing using Hugging Face Tran...
notebooks/sagemaker/26_document_ai_donut/sagemaker-notebook.ipynb/0
{ "file_path": "notebooks/sagemaker/26_document_ai_donut/sagemaker-notebook.ipynb", "repo_id": "notebooks", "token_count": 7780 }
154
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
peft/docs/source/developer_guides/troubleshooting.md/0
{ "file_path": "peft/docs/source/developer_guides/troubleshooting.md", "repo_id": "peft", "token_count": 1890 }
155
<!--⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Models [`PeftModel`] is the base model class for specifying the base Transformer model and configuration to apply a PEFT method to. The base `Peft...
peft/docs/source/package_reference/peft_model.md/0
{ "file_path": "peft/docs/source/package_reference/peft_model.md", "repo_id": "peft", "token_count": 540 }
156
<jupyter_start><jupyter_code>from transformers import AutoModelForCausalLM from peft import get_peft_config, get_peft_model, PrefixTuningConfig, TaskType, PeftType import torch from datasets import load_dataset import os from transformers import AutoTokenizer from torch.utils.data import DataLoader from transformers im...
peft/examples/causal_language_modeling/peft_prefix_tuning_clm.ipynb/0
{ "file_path": "peft/examples/causal_language_modeling/peft_prefix_tuning_clm.ipynb", "repo_id": "peft", "token_count": 4714 }
157
<jupyter_start><jupyter_code>import argparse import json import logging import math import os import random from pathlib import Path from tqdm import tqdm import datasets from datasets import load_dataset, DatasetDict import evaluate import torch from torch import nn from torch.utils.data import DataLoader import tr...
peft/examples/feature_extraction/peft_lora_embedding_semantic_similarity_inference.ipynb/0
{ "file_path": "peft/examples/feature_extraction/peft_lora_embedding_semantic_similarity_inference.ipynb", "repo_id": "peft", "token_count": 2675 }
158
<jupyter_start><jupyter_code>!git clone https://huggingface.co/spaces/smangrul/peft-lora-sd-dreambooth %cd "peft-lora-sd-dreambooth" !pip install -r requirements.txt !python colab.py<jupyter_output><empty_output>
peft/examples/lora_dreambooth/colab_notebook.ipynb/0
{ "file_path": "peft/examples/lora_dreambooth/colab_notebook.ipynb", "repo_id": "peft", "token_count": 91 }
159
<jupyter_start><jupyter_code>import argparse import os import torch from torch.optim import AdamW from torch.utils.data import DataLoader import peft import evaluate from datasets import load_dataset from transformers import AutoModelForSequenceClassification, AutoTokenizer, get_linear_schedule_with_warmup, set_seed ...
peft/examples/sequence_classification/IA3.ipynb/0
{ "file_path": "peft/examples/sequence_classification/IA3.ipynb", "repo_id": "peft", "token_count": 1903 }
160
# Copyright 2023 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicabl...
peft/setup.py/0
{ "file_path": "peft/setup.py", "repo_id": "peft", "token_count": 1546 }
161
# Copyright 2023-present the HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or...
peft/src/peft/tuners/adalora/model.py/0
{ "file_path": "peft/src/peft/tuners/adalora/model.py", "repo_id": "peft", "token_count": 7189 }
162
# Copyright 2023-present the HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or...
peft/src/peft/tuners/multitask_prompt_tuning/config.py/0
{ "file_path": "peft/src/peft/tuners/multitask_prompt_tuning/config.py", "repo_id": "peft", "token_count": 883 }
163
# Copyright 2023-present the HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or...
peft/src/peft/tuners/prefix_tuning/model.py/0
{ "file_path": "peft/src/peft/tuners/prefix_tuning/model.py", "repo_id": "peft", "token_count": 1228 }
164
# Copyright 2023-present the HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or...
peft/tests/test_adaption_prompt.py/0
{ "file_path": "peft/tests/test_adaption_prompt.py", "repo_id": "peft", "token_count": 16295 }
165
#!/usr/bin/env python3 # coding=utf-8 # Copyright 2023-present the HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 #...
peft/tests/test_poly.py/0
{ "file_path": "peft/tests/test_poly.py", "repo_id": "peft", "token_count": 1541 }
166
#!/usr/bin/env python3 """ Checkpoint Cleaning Script Takes training checkpoints with GPU tensors, optimizer state, extra dict keys, etc. and outputs a CPU tensor checkpoint with only the `state_dict` along with SHA256 calculation for model zoo compatibility. Hacked together by / Copyright 2020 Ross Wightman (https:...
pytorch-image-models/clean_checkpoint.py/0
{ "file_path": "pytorch-image-models/clean_checkpoint.py", "repo_id": "pytorch-image-models", "token_count": 1771 }
167
# CSP-DarkNet **CSPDarknet53** is a convolutional neural network and backbone for object detection that uses [DarkNet-53](https://paperswithcode.com/method/darknet-53). It employs a CSPNet strategy to partition the feature map of the base layer into two parts and then merges them through a cross-stage hierarchy. The u...
pytorch-image-models/docs/models/.templates/models/csp-darknet.md/0
{ "file_path": "pytorch-image-models/docs/models/.templates/models/csp-darknet.md", "repo_id": "pytorch-image-models", "token_count": 947 }
168
# (Gluon) SE-ResNeXt **SE ResNeXt** is a variant of a [ResNext](https://www.paperswithcode.com/method/resnext) that employs [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) to enable the network to perform dynamic channel-wise feature recalibration. The weights from this...
pytorch-image-models/docs/models/.templates/models/gloun-seresnext.md/0
{ "file_path": "pytorch-image-models/docs/models/.templates/models/gloun-seresnext.md", "repo_id": "pytorch-image-models", "token_count": 1705 }
169
# PNASNet **Progressive Neural Architecture Search**, or **PNAS**, is a method for learning the structure of convolutional neural networks (CNNs). It uses a sequential model-based optimization (SMBO) strategy, where we search the space of cell structures, starting with simple (shallow) models and progressing to comple...
pytorch-image-models/docs/models/.templates/models/pnasnet.md/0
{ "file_path": "pytorch-image-models/docs/models/.templates/models/pnasnet.md", "repo_id": "pytorch-image-models", "token_count": 813 }
170
# SSL ResNet **Residual Networks**, or **ResNets**, learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. Instead of hoping each few stacked layers directly fit a desired underlying mapping, residual nets let these layers fit a residual mapping. They stack [residual b...
pytorch-image-models/docs/models/.templates/models/ssl-resnet.md/0
{ "file_path": "pytorch-image-models/docs/models/.templates/models/ssl-resnet.md", "repo_id": "pytorch-image-models", "token_count": 1616 }
171
# Scripts A train, validation, inference, and checkpoint cleaning script included in the github root folder. Scripts are not currently packaged in the pip release. The training and validation scripts evolved from early versions of the [PyTorch Imagenet Examples](https://github.com/pytorch/examples). I have added signi...
pytorch-image-models/docs/scripts.md/0
{ "file_path": "pytorch-image-models/docs/scripts.md", "repo_id": "pytorch-image-models", "token_count": 511 }
172
#!/usr/bin/env python3 """PyTorch Inference Script An example inference script that outputs top-k class ids for images in a folder into a csv. Hacked together by / Copyright 2020 Ross Wightman (https://github.com/rwightman) """ import argparse import json import logging import os import time from contextlib import su...
pytorch-image-models/inference.py/0
{ "file_path": "pytorch-image-models/inference.py", "repo_id": "pytorch-image-models", "token_count": 6803 }
173
[dist_conda] conda_name_differences = 'torch:pytorch' channels = pytorch noarch = True [metadata] url = "https://github.com/huggingface/pytorch-image-models"
pytorch-image-models/setup.cfg/0
{ "file_path": "pytorch-image-models/setup.cfg", "repo_id": "pytorch-image-models", "token_count": 65 }
174
from abc import ABC, abstractmethod from typing import Dict, List, Optional, Union class DatasetInfo(ABC): def __init__(self): pass @abstractmethod def num_classes(self): pass @abstractmethod def label_names(self): pass @abstractmethod def label_descriptions(sel...
pytorch-image-models/timm/data/dataset_info.py/0
{ "file_path": "pytorch-image-models/timm/data/dataset_info.py", "repo_id": "pytorch-image-models", "token_count": 941 }
175
""" Dataset reader that wraps TFDS datasets Wraps many (most?) TFDS image-classification datasets from https://github.com/tensorflow/datasets https://www.tensorflow.org/datasets/catalog/overview#image_classification Hacked together by / Copyright 2020 Ross Wightman """ import math import os import sys from typing imp...
pytorch-image-models/timm/data/readers/reader_tfds.py/0
{ "file_path": "pytorch-image-models/timm/data/readers/reader_tfds.py", "repo_id": "pytorch-image-models", "token_count": 7089 }
176
""" CBAM (sort-of) Attention Experimental impl of CBAM: Convolutional Block Attention Module: https://arxiv.org/abs/1807.06521 WARNING: Results with these attention layers have been mixed. They can significantly reduce performance on some tasks, especially fine-grained it seems. I may end up removing this impl. Hack...
pytorch-image-models/timm/layers/cbam.py/0
{ "file_path": "pytorch-image-models/timm/layers/cbam.py", "repo_id": "pytorch-image-models", "token_count": 2016 }
177
from enum import Enum from typing import Union import torch class Format(str, Enum): NCHW = 'NCHW' NHWC = 'NHWC' NCL = 'NCL' NLC = 'NLC' FormatT = Union[str, Format] def get_spatial_dim(fmt: FormatT): fmt = Format(fmt) if fmt is Format.NLC: dim = (1,) elif fmt is Format.NCL: ...
pytorch-image-models/timm/layers/format.py/0
{ "file_path": "pytorch-image-models/timm/layers/format.py", "repo_id": "pytorch-image-models", "token_count": 572 }
178
""" Normalization layers and wrappers Norm layer definitions that support fast norm and consistent channel arg order (always first arg). Hacked together by / Copyright 2022 Ross Wightman """ import numbers from typing import Tuple import torch import torch.nn as nn import torch.nn.functional as F from .fast_norm im...
pytorch-image-models/timm/layers/norm.py/0
{ "file_path": "pytorch-image-models/timm/layers/norm.py", "repo_id": "pytorch-image-models", "token_count": 2520 }
179
""" Test Time Pooling (Average-Max Pool) Hacked together by / Copyright 2020 Ross Wightman """ import logging from torch import nn import torch.nn.functional as F from .adaptive_avgmax_pool import adaptive_avgmax_pool2d _logger = logging.getLogger(__name__) class TestTimePoolHead(nn.Module): def __init__(sel...
pytorch-image-models/timm/layers/test_time_pool.py/0
{ "file_path": "pytorch-image-models/timm/layers/test_time_pool.py", "repo_id": "pytorch-image-models", "token_count": 881 }
180
""" Model creation / weight loading / state_dict helpers Hacked together by / Copyright 2020 Ross Wightman """ import logging import os from collections import OrderedDict from typing import Any, Callable, Dict, Optional, Union import torch try: import safetensors.torch _has_safetensors = True except ImportEr...
pytorch-image-models/timm/models/_helpers.py/0
{ "file_path": "pytorch-image-models/timm/models/_helpers.py", "repo_id": "pytorch-image-models", "token_count": 2546 }
181
""" ConViT Model @article{d2021convit, title={ConViT: Improving Vision Transformers with Soft Convolutional Inductive Biases}, author={d'Ascoli, St{\'e}phane and Touvron, Hugo and Leavitt, Matthew and Morcos, Ari and Biroli, Giulio and Sagun, Levent}, journal={arXiv preprint arXiv:2103.10697}, year={2021} } P...
pytorch-image-models/timm/models/convit.py/0
{ "file_path": "pytorch-image-models/timm/models/convit.py", "repo_id": "pytorch-image-models", "token_count": 7716 }
182
""" EVA EVA from https://github.com/baaivision/EVA , paper: https://arxiv.org/abs/2211.07636 @article{EVA, title={EVA: Exploring the Limits of Masked Visual Representation Learning at Scale}, author={Fang, Yuxin and Wang, Wen and Xie, Binhui and Sun, Quan and Wu, Ledell and Wang, Xinggang and Huang, Tiejun and ...
pytorch-image-models/timm/models/eva.py/0
{ "file_path": "pytorch-image-models/timm/models/eva.py", "repo_id": "pytorch-image-models", "token_count": 21637 }
183
""" Pytorch Inception-V4 implementation Sourced from https://github.com/Cadene/tensorflow-model-zoo.torch (MIT License) which is based upon Google's Tensorflow implementation and pretrained weights (Apache 2.0 License) """ from functools import partial import torch import torch.nn as nn from timm.data import IMAGENET...
pytorch-image-models/timm/models/inception_v4.py/0
{ "file_path": "pytorch-image-models/timm/models/inception_v4.py", "repo_id": "pytorch-image-models", "token_count": 5528 }
184
""" TinyViT Paper: `TinyViT: Fast Pretraining Distillation for Small Vision Transformers` - https://arxiv.org/abs/2207.10666 Adapted from official impl at https://github.com/microsoft/Cream/tree/main/TinyViT """ __all__ = ['TinyVit'] import math import itertools from functools import partial from typing import ...
pytorch-image-models/timm/models/tiny_vit.py/0
{ "file_path": "pytorch-image-models/timm/models/tiny_vit.py", "repo_id": "pytorch-image-models", "token_count": 12415 }
185
import math import torch from torch.optim.optimizer import Optimizer class AdaBelief(Optimizer): r"""Implements AdaBelief algorithm. Modified from Adam in PyTorch Arguments: params (iterable): iterable of parameters to optimize or dicts defining parameter groups lr (float, optiona...
pytorch-image-models/timm/optim/adabelief.py/0
{ "file_path": "pytorch-image-models/timm/optim/adabelief.py", "repo_id": "pytorch-image-models", "token_count": 5074 }
186
""" RMSProp modified to behave like Tensorflow impl Originally cut & paste from PyTorch RMSProp https://github.com/pytorch/pytorch/blob/063946d2b3f3f1e953a2a3b54e0b34f1393de295/torch/optim/rmsprop.py Licensed under BSD-Clause 3 (ish), https://github.com/pytorch/pytorch/blob/master/LICENSE Modifications Copyright 2021...
pytorch-image-models/timm/optim/rmsprop_tf.py/0
{ "file_path": "pytorch-image-models/timm/optim/rmsprop_tf.py", "repo_id": "pytorch-image-models", "token_count": 2901 }
187
""" CUDA / AMP utils Hacked together by / Copyright 2020 Ross Wightman """ import torch try: from apex import amp has_apex = True except ImportError: amp = None has_apex = False from .clip_grad import dispatch_clip_grad class ApexScaler: state_dict_key = "amp" def __call__( sel...
pytorch-image-models/timm/utils/cuda.py/0
{ "file_path": "pytorch-image-models/timm/utils/cuda.py", "repo_id": "pytorch-image-models", "token_count": 980 }
188
<div align="center"> <a href="https://www.youtube.com/watch?v=jlMAX2Oaht0"> <img width=560 width=315 alt="Making TGI deployment optimal" src="https://huggingface.co/datasets/Narsil/tgi_assets/resolve/main/thumbnail.png"> </a> # Text Generation Inference <a href="https://github.com/huggingface/text-generation-infer...
text-generation-inference/README.md/0
{ "file_path": "text-generation-inference/README.md", "repo_id": "text-generation-inference", "token_count": 3371 }
189
[tool.poetry] name = "text-generation" version = "0.6.1" description = "Hugging Face Text Generation Python Client" license = "Apache-2.0" authors = ["Olivier Dehaene <olivier@huggingface.co>"] maintainers = ["Olivier Dehaene <olivier@huggingface.co>"] readme = "README.md" homepage = "https://github.com/huggingface/tex...
text-generation-inference/clients/python/pyproject.toml/0
{ "file_path": "text-generation-inference/clients/python/pyproject.toml", "repo_id": "text-generation-inference", "token_count": 336 }
190
# Text-generation-launcher arguments <!-- WRAP CODE BLOCKS --> ```shell Text Generation Launcher Usage: text-generation-launcher [OPTIONS] Options: ``` ## MODEL_ID ```shell --model-id <MODEL_ID> The name of the model to load. Can be a MODEL_ID as listed on <https://hf.co/models> like `gpt2` or `Open...
text-generation-inference/docs/source/basic_tutorials/launcher.md/0
{ "file_path": "text-generation-inference/docs/source/basic_tutorials/launcher.md", "repo_id": "text-generation-inference", "token_count": 6114 }
191
# Supported Models and Hardware Text Generation Inference enables serving optimized models on specific hardware for the highest performance. The following sections list which models are hardware are supported. ## Supported Models The following models are optimized and can be served with TGI, which uses custom CUDA k...
text-generation-inference/docs/source/supported_models.md/0
{ "file_path": "text-generation-inference/docs/source/supported_models.md", "repo_id": "text-generation-inference", "token_count": 1169 }
192
{ "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [ { "id": 2, "logprob": null, "text": "<bos>" }, { "id": 2015, "logprob": -10.0, "text": "Test" }, { "id": 3853,...
text-generation-inference/integration-tests/models/__snapshots__/test_flash_gemma/test_flash_gemma_all_params.json/0
{ "file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_flash_gemma/test_flash_gemma_all_params.json", "repo_id": "text-generation-inference", "token_count": 1031 }
193
{ "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [ { "id": 1, "logprob": null, "text": "<s>" }, { "id": 3735, "logprob": -12.9140625, "text": "Test" }, { "id": 2...
text-generation-inference/integration-tests/models/__snapshots__/test_flash_mistral/test_flash_mistral.json/0
{ "file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_flash_mistral/test_flash_mistral.json", "repo_id": "text-generation-inference", "token_count": 1050 }
194
{ "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 60, "prefill": [ { "id": 589, "logprob": null, "text": "def" }, { "id": 1459, "logprob": -5.6328125, "text": " print" }, { "id"...
text-generation-inference/integration-tests/models/__snapshots__/test_flash_starcoder/test_flash_starcoder_default_params.json/0
{ "file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_flash_starcoder/test_flash_starcoder_default_params.json", "repo_id": "text-generation-inference", "token_count": 4734 }
195
{ "details": { "best_of_sequences": null, "finish_reason": "eos_token", "generated_tokens": 5, "prefill": [ { "id": 0, "logprob": null, "text": "<pad>" } ], "seed": 0, "tokens": [ { "id": 926, "logprob": -4.3554688, "special...
text-generation-inference/integration-tests/models/__snapshots__/test_mt0_base/test_mt0_base.json/0
{ "file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_mt0_base/test_mt0_base.json", "repo_id": "text-generation-inference", "token_count": 532 }
196
import pytest @pytest.fixture(scope="module") def flash_llama_awq_handle(launcher): with launcher( "abhinavkulkarni/codellama-CodeLlama-7b-Python-hf-w4-g128-awq", num_shard=1, quantize="awq", ) as handle: yield handle @pytest.fixture(scope="module") async def flash_llama_awq(...
text-generation-inference/integration-tests/models/test_flash_awq.py/0
{ "file_path": "text-generation-inference/integration-tests/models/test_flash_awq.py", "repo_id": "text-generation-inference", "token_count": 842 }
197
import pytest @pytest.fixture(scope="module") def flash_starcoder_gptq_handle(launcher): with launcher("Narsil/starcoder-gptq", num_shard=2, quantize="gptq") as handle: yield handle @pytest.fixture(scope="module") async def flash_starcoder_gptq(flash_starcoder_gptq_handle): await flash_starcoder_gpt...
text-generation-inference/integration-tests/models/test_flash_starcoder_gptq.py/0
{ "file_path": "text-generation-inference/integration-tests/models/test_flash_starcoder_gptq.py", "repo_id": "text-generation-inference", "token_count": 710 }
198
use std::fmt; use std::process::Command; pub(crate) struct Env { cargo_target: &'static str, cargo_version: &'static str, git_sha: &'static str, docker_label: &'static str, nvidia_env: String, } impl Env { pub fn new() -> Self { let nvidia_env = nvidia_smi(); Self { ...
text-generation-inference/launcher/src/env_runtime.rs/0
{ "file_path": "text-generation-inference/launcher/src/env_runtime.rs", "repo_id": "text-generation-inference", "token_count": 650 }
199
[package] name = "grpc-metadata" version = "0.1.0" edition = "2021" [dependencies] opentelemetry = "^0.20" tonic = "^0.10" tracing = "^0.1" tracing-opentelemetry = "^0.21"
text-generation-inference/router/grpc-metadata/Cargo.toml/0
{ "file_path": "text-generation-inference/router/grpc-metadata/Cargo.toml", "repo_id": "text-generation-inference", "token_count": 83 }
200
flash_att_v2_commit_cuda := 02ac572f3ffc4f402e4183aaa6824b45859d3ed3 flash_att_v2_commit_rocm := 8736558c287ff2ef28b24878e42828c595ac3e69 flash-attention-v2-cuda: # Clone flash attention pip install -U packaging ninja --no-cache-dir git clone https://github.com/HazyResearch/flash-attention.git flash-attention-v2...
text-generation-inference/server/Makefile-flash-att-v2/0
{ "file_path": "text-generation-inference/server/Makefile-flash-att-v2", "repo_id": "text-generation-inference", "token_count": 496 }
201
// Adapted from turboderp exllama: https://github.com/turboderp/exllama #include <torch/extension.h> #include <c10/cuda/CUDAGuard.h> #include <ATen/cuda/CUDAContext.h> #include <cuda_runtime.h> #include <cuda_fp16.h> #include <cstdint> #include <cstdio> #include "util.cuh" #include "tuning.h" #include "cuda_buffers.cu...
text-generation-inference/server/exllama_kernels/exllama_kernels/exllama_ext.cpp/0
{ "file_path": "text-generation-inference/server/exllama_kernels/exllama_kernels/exllama_ext.cpp", "repo_id": "text-generation-inference", "token_count": 3279 }
202
#ifndef _qdq_2_cuh #define _qdq_2_cuh #include "qdq_util.cuh" #include "../../config.h" #if QMODE_2BIT == 1 // Permutation: // // ffddbb99 77553311 eeccaa88 66442200 __forceinline__ __device__ void shuffle_2bit_16 ( uint32_t* q, int stride ) { uint32_t qa = q[0]; uint32_t qb = 0; #pragma unrol...
text-generation-inference/server/exllamav2_kernels/exllamav2_kernels/cuda/quant/qdq_2.cuh/0
{ "file_path": "text-generation-inference/server/exllamav2_kernels/exllamav2_kernels/cuda/quant/qdq_2.cuh", "repo_id": "text-generation-inference", "token_count": 1589 }
203
import pytest import torch from copy import copy from transformers import AutoTokenizer from text_generation_server.pb import generate_pb2 from text_generation_server.models.causal_lm import CausalLMBatch from text_generation_server.utils import weight_hub_files, download_weights from text_generation_server.models.bl...
text-generation-inference/server/tests/models/test_bloom.py/0
{ "file_path": "text-generation-inference/server/tests/models/test_bloom.py", "repo_id": "text-generation-inference", "token_count": 5296 }
204
import math import torch from typing import Optional, List, Tuple BLOCK_SIZE: int = 16 # Will be set in warmup CACHE_MANAGER: Optional["CacheManager"] = None class CacheManager: def __init__( self, num_blocks: int, num_layers: int, num_heads: int, head_size: int, ...
text-generation-inference/server/text_generation_server/models/cache_manager.py/0
{ "file_path": "text-generation-inference/server/text_generation_server/models/cache_manager.py", "repo_id": "text-generation-inference", "token_count": 2033 }
205
# coding=utf-8 # Copyright 2022 EleutherAI and the HuggingFace Inc. team. All rights reserved. # # This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX # and OPT implementations in this library. It has been modified from its # original forms to accommodate minor architectural differences compared # to G...
text-generation-inference/server/text_generation_server/models/custom_modeling/idefics_modeling.py/0
{ "file_path": "text-generation-inference/server/text_generation_server/models/custom_modeling/idefics_modeling.py", "repo_id": "text-generation-inference", "token_count": 28490 }
206
import torch import torch.distributed from opentelemetry import trace from transformers import AutoConfig, AutoTokenizer from typing import Optional from text_generation_server.models import FlashCausalLM from text_generation_server.models.custom_modeling.flash_phi_modeling import ( FlashPhiForCausalLM, PhiCo...
text-generation-inference/server/text_generation_server/models/flash_phi.py/0
{ "file_path": "text-generation-inference/server/text_generation_server/models/flash_phi.py", "repo_id": "text-generation-inference", "token_count": 1738 }
207
import torch import torch.distributed from typing import Optional, List from transformers import AutoTokenizer, AutoModelForCausalLM from text_generation_server.models import CausalLM FIM_PREFIX = "<fim-prefix>" FIM_MIDDLE = "<fim-middle>" FIM_SUFFIX = "<fim-suffix>" FIM_PAD = "<fim-pad>" EOD = "<|endoftext|>" cla...
text-generation-inference/server/text_generation_server/models/santacoder.py/0
{ "file_path": "text-generation-inference/server/text_generation_server/models/santacoder.py", "repo_id": "text-generation-inference", "token_count": 1196 }
208
import math import numpy as np import torch import torch.nn as nn from torch.cuda.amp import custom_bwd, custom_fwd try: import triton import triton.language as tl from . import custom_autotune # code based https://github.com/fpgaminer/GPTQ-triton @custom_autotune.autotune( configs=[ ...
text-generation-inference/server/text_generation_server/utils/gptq/quant_linear.py/0
{ "file_path": "text-generation-inference/server/text_generation_server/utils/gptq/quant_linear.py", "repo_id": "text-generation-inference", "token_count": 7008 }
209
# EditorConfig helps developers define and maintain consistent # coding styles between different editors or IDEs # http://editorconfig.org root = true [*] indent_style = space indent_size = 2 end_of_line = lf charset = utf-8 trim_trailing_whitespace = true insert_final_newline = true [*.md] trim_trailing_whitespace =...
tokenizers/bindings/node/.editorconfig/0
{ "file_path": "tokenizers/bindings/node/.editorconfig", "repo_id": "tokenizers", "token_count": 108 }
210
/* tslint:disable */ /* eslint-disable */ /* prettier-ignore */ /* auto-generated by NAPI-RS */ const { existsSync, readFileSync } = require('fs') const { join } = require('path') const { platform, arch } = process let nativeBinding = null let localFileExisted = false let loadError = null function isMusl() { // ...
tokenizers/bindings/node/index.js/0
{ "file_path": "tokenizers/bindings/node/index.js", "repo_id": "tokenizers", "token_count": 4683 }
211
{ "name": "tokenizers-android-arm64", "version": "0.13.4-rc1", "os": [ "android" ], "cpu": [ "arm64" ], "main": "tokenizers.android-arm64.node", "files": [ "tokenizers.android-arm64.node" ], "description": "Tokenizers platform specific bindings", "keywords": [ "napi-rs", "NAPI"...
tokenizers/bindings/node/npm/android-arm64/package.json/0
{ "file_path": "tokenizers/bindings/node/npm/android-arm64/package.json", "repo_id": "tokenizers", "token_count": 264 }
212
{ "name": "tokenizers-linux-x64-musl", "version": "0.13.4-rc1", "os": [ "linux" ], "cpu": [ "x64" ], "main": "tokenizers.linux-x64-musl.node", "files": [ "tokenizers.linux-x64-musl.node" ], "description": "Tokenizers platform specific bindings", "keywords": [ "napi-rs", "NAPI",...
tokenizers/bindings/node/npm/linux-x64-musl/package.json/0
{ "file_path": "tokenizers/bindings/node/npm/linux-x64-musl/package.json", "repo_id": "tokenizers", "token_count": 291 }
213
use crate::arc_rwlock_serde; use serde::{Deserialize, Serialize}; extern crate tokenizers as tk; use napi::bindgen_prelude::*; use napi_derive::napi; use std::sync::{Arc, RwLock}; use tk::processors::PostProcessorWrapper; use tk::Encoding; #[derive(Clone, Serialize, Deserialize)] #[napi] pub struct Processor { #[se...
tokenizers/bindings/node/src/processors.rs/0
{ "file_path": "tokenizers/bindings/node/src/processors.rs", "repo_id": "tokenizers", "token_count": 1336 }
214
<p align="center"> <br> <img src="https://huggingface.co/landing/assets/tokenizers/tokenizers-logo.png" width="600"/> <br> <p> <p align="center"> <a href="https://badge.fury.io/py/tokenizers"> <img alt="Build" src="https://badge.fury.io/py/tokenizers.svg"> </a> <a href="https://github.c...
tokenizers/bindings/python/README.md/0
{ "file_path": "tokenizers/bindings/python/README.md", "repo_id": "tokenizers", "token_count": 1621 }
215
from typing import Dict, Iterator, List, Optional, Tuple, Union from .. import AddedToken, Tokenizer, decoders, pre_tokenizers, trainers from ..models import BPE from ..normalizers import BertNormalizer, Lowercase, Sequence, unicode_normalizer_from_str from .base_tokenizer import BaseTokenizer class CharBPETokenizer...
tokenizers/bindings/python/py_src/tokenizers/implementations/char_level_bpe.py/0
{ "file_path": "tokenizers/bindings/python/py_src/tokenizers/implementations/char_level_bpe.py", "repo_id": "tokenizers", "token_count": 2509 }
216
[project] name = 'tokenizers' requires-python = '>=3.7' authors = [ {name = 'Nicolas Patry', email = 'patry.nicolas@protonmail.com'}, {name = 'Anthony Moi', email = 'anthony@huggingface.co'} ] classifiers = [ "Development Status :: 5 - Production/Stable", "Intended Audience :: Developers", "Intended Audie...
tokenizers/bindings/python/pyproject.toml/0
{ "file_path": "tokenizers/bindings/python/pyproject.toml", "repo_id": "tokenizers", "token_count": 711 }
217
use std::sync::{Arc, RwLock}; use crate::models::PyModel; use crate::tokenizer::PyAddedToken; use crate::utils::PyChar; use pyo3::exceptions; use pyo3::prelude::*; use pyo3::types::*; use serde::{Deserialize, Serialize}; use tk::models::TrainerWrapper; use tk::Trainer; use tokenizers as tk; /// Base class for all tra...
tokenizers/bindings/python/src/trainers.rs/0
{ "file_path": "tokenizers/bindings/python/src/trainers.rs", "repo_id": "tokenizers", "token_count": 17617 }
218
import pickle import numpy as np import pytest from tokenizers import AddedToken, Encoding, Tokenizer from tokenizers.implementations import BertWordPieceTokenizer from tokenizers.models import BPE, Model, Unigram from tokenizers.pre_tokenizers import ByteLevel from tokenizers.processors import RobertaProcessing fro...
tokenizers/bindings/python/tests/bindings/test_tokenizer.py/0
{ "file_path": "tokenizers/bindings/python/tests/bindings/test_tokenizer.py", "repo_id": "tokenizers", "token_count": 8966 }
219
- sections: - local: index title: 🤗 Tokenizers - local: quicktour title: Quicktour - local: installation title: Installation - local: pipeline title: The tokenization pipeline - local: components title: Components - local: training_from_memory title: Training from memory title: G...
tokenizers/docs/source-doc-builder/_toctree.yml/0
{ "file_path": "tokenizers/docs/source-doc-builder/_toctree.yml", "repo_id": "tokenizers", "token_count": 338 }
220
# The tokenization pipeline When calling `Tokenizer.encode` or `Tokenizer.encode_batch`, the input text(s) go through the following pipeline: - `normalization` - `pre-tokenization` - `model` - `post-processing` We'll see in details what happens during each of those steps in detail, as well as when you want t...
tokenizers/docs/source-doc-builder/pipeline.mdx/0
{ "file_path": "tokenizers/docs/source-doc-builder/pipeline.mdx", "repo_id": "tokenizers", "token_count": 5903 }
221
Documentation ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The Rust API Reference is available directly on the `Docs.rs <https://docs.rs/tokenizers>`__ website.
tokenizers/docs/source/api/rust.inc/0
{ "file_path": "tokenizers/docs/source/api/rust.inc", "repo_id": "tokenizers", "token_count": 43 }
222
language: node_js node_js: "10" script: - ./node_modules/.bin/webpack
tokenizers/tokenizers/examples/unstable_wasm/www/.travis.yml/0
{ "file_path": "tokenizers/tokenizers/examples/unstable_wasm/www/.travis.yml", "repo_id": "tokenizers", "token_count": 30 }
223
use crate::decoders::DecoderWrapper; use crate::tokenizer::{Decoder, Result}; use crate::utils::macro_rules_attribute; use serde::{Deserialize, Serialize}; #[derive(Clone, Debug)] #[macro_rules_attribute(impl_serde_type!)] pub struct Sequence { decoders: Vec<DecoderWrapper>, } impl Sequence { pub fn new(decod...
tokenizers/tokenizers/src/decoders/sequence.rs/0
{ "file_path": "tokenizers/tokenizers/src/decoders/sequence.rs", "repo_id": "tokenizers", "token_count": 600 }
224
use super::OrderedVocabIter; use crate::tokenizer::{Model, Result, Token}; use serde_json::Value; use std::collections::HashMap; use std::fs::File; use std::io::{BufReader, Read, Write}; use std::path::{Path, PathBuf}; mod serialization; mod trainer; // Re-export pub use trainer::*; type Vocab = HashMap<String, u32>...
tokenizers/tokenizers/src/models/wordlevel/mod.rs/0
{ "file_path": "tokenizers/tokenizers/src/models/wordlevel/mod.rs", "repo_id": "tokenizers", "token_count": 3383 }
225
use serde::{Deserialize, Serialize}; use crate::tokenizer::{PreTokenizedString, PreTokenizer, Result, SplitDelimiterBehavior}; use crate::utils::macro_rules_attribute; #[derive(Copy, Clone, Debug, PartialEq, Eq)] #[non_exhaustive] #[macro_rules_attribute(impl_serde_type!)] pub struct CharDelimiterSplit { pub deli...
tokenizers/tokenizers/src/pre_tokenizers/delimiter.rs/0
{ "file_path": "tokenizers/tokenizers/src/pre_tokenizers/delimiter.rs", "repo_id": "tokenizers", "token_count": 296 }
226
use super::{ normalizer::Range, Model, NormalizedString, Normalizer, Offsets, PreTokenizedString, Token, }; use aho_corasick::{AhoCorasick, AhoCorasickBuilder, MatchKind}; use regex::Regex; use serde::{ser::SerializeSeq, Deserialize, Serialize, Serializer}; use std::collections::{HashMap, HashSet}; /// Represent a...
tokenizers/tokenizers/src/tokenizer/added_vocabulary.rs/0
{ "file_path": "tokenizers/tokenizers/src/tokenizer/added_vocabulary.rs", "repo_id": "tokenizers", "token_count": 16897 }
227
use crate::tokenizer::{Encoding, Result}; use serde::{Deserialize, Serialize}; use std::cmp; use std::mem; #[derive(Debug, Clone, Copy, PartialEq, Serialize, Deserialize, Eq, Default)] pub enum TruncationDirection { Left, #[default] Right, } impl std::convert::AsRef<str> for TruncationDirection { fn a...
tokenizers/tokenizers/src/utils/truncation.rs/0
{ "file_path": "tokenizers/tokenizers/src/utils/truncation.rs", "repo_id": "tokenizers", "token_count": 5473 }
228
#!/bin/bash source ~/.bashrc echo "running docker-entrypoint.sh" conda activate container echo $KUBE_GOOGLE_CLOUD_TPU_ENDPOINTS echo "printed TPU info" export XRT_TPU_CONFIG="tpu_worker;0;${KUBE_GOOGLE_CLOUD_TPU_ENDPOINTS:7}" exec "$@"#!/bin/bash
transformers/docker/transformers-pytorch-tpu/docker-entrypoint.sh/0
{ "file_path": "transformers/docker/transformers-pytorch-tpu/docker-entrypoint.sh", "repo_id": "transformers", "token_count": 112 }
229
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
transformers/docs/source/de/llm_tutorial.md/0
{ "file_path": "transformers/docs/source/de/llm_tutorial.md", "repo_id": "transformers", "token_count": 4767 }
230
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
transformers/docs/source/en/add_new_pipeline.md/0
{ "file_path": "transformers/docs/source/en/add_new_pipeline.md", "repo_id": "transformers", "token_count": 3395 }
231
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
transformers/docs/source/en/fsdp.md/0
{ "file_path": "transformers/docs/source/en/fsdp.md", "repo_id": "transformers", "token_count": 2239 }
232
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
transformers/docs/source/en/llm_tutorial.md/0
{ "file_path": "transformers/docs/source/en/llm_tutorial.md", "repo_id": "transformers", "token_count": 4361 }
233
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
transformers/docs/source/en/main_classes/pipelines.md/0
{ "file_path": "transformers/docs/source/en/main_classes/pipelines.md", "repo_id": "transformers", "token_count": 4571 }
234
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
transformers/docs/source/en/model_doc/beit.md/0
{ "file_path": "transformers/docs/source/en/model_doc/beit.md", "repo_id": "transformers", "token_count": 2186 }
235
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
transformers/docs/source/en/model_doc/cpm.md/0
{ "file_path": "transformers/docs/source/en/model_doc/cpm.md", "repo_id": "transformers", "token_count": 735 }
236
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
transformers/docs/source/en/model_doc/flan-t5.md/0
{ "file_path": "transformers/docs/source/en/model_doc/flan-t5.md", "repo_id": "transformers", "token_count": 781 }
237
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
transformers/docs/source/en/model_doc/gpt_neox.md/0
{ "file_path": "transformers/docs/source/en/model_doc/gpt_neox.md", "repo_id": "transformers", "token_count": 1662 }
238
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
transformers/docs/source/en/model_doc/layoutlmv2.md/0
{ "file_path": "transformers/docs/source/en/model_doc/layoutlmv2.md", "repo_id": "transformers", "token_count": 5361 }
239