Columns:
- text: string (lengths 7 to 328k)
- id: string (lengths 14 to 166)
- metadata: dict
- __index_level_0__: int64 (values 0 to 459)

Each row below gives a truncated text excerpt, the id, and the metadata dict (file_path, repo_id, token_count); the bare integers between rows are the row indices (__index_level_0__).
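For orientation, here is a minimal sketch of how rows with this schema could be built and inspected with the Hugging Face datasets library. The sample row and all its values are made up for illustration; only the column names and types come from the schema above, and since the preview does not name the underlying dataset, no loading path is assumed.

```python
# Minimal sketch (hypothetical sample row): construct rows matching the schema
# above and inspect them with the Hugging Face `datasets` library.
from datasets import Dataset

rows = [
    {
        "text": "# example file contents ...",        # truncated source text
        "id": "example-repo/path/to/file.py/0",       # path plus trailing index, mirroring the ids in the preview
        "metadata": {
            "file_path": "path/to/file.py",
            "repo_id": "example-repo",
            "token_count": 42,
        },
        "__index_level_0__": 0,                       # row index
    },
]

ds = Dataset.from_list(rows)
print(ds.features)  # shows the inferred column names and types

for row in ds:
    # metadata carries the source file path, repo id, and token count
    print(row["id"], row["metadata"]["token_count"])
```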
# Copyright 2023-present the HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or...
peft/src/peft/tuners/tuners_utils.py/0
{ "file_path": "peft/src/peft/tuners/tuners_utils.py", "repo_id": "peft", "token_count": 12742 }
176
#!/usr/bin/env python3 # coding=utf-8 # Copyright 2023-present the HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 #...
peft/tests/test_custom_models.py/0
{ "file_path": "peft/tests/test_custom_models.py", "repo_id": "peft", "token_count": 41819 }
177
# Copyright 2023-present the HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or...
peft/tests/testing_utils.py/0
{ "file_path": "peft/tests/testing_utils.py", "repo_id": "peft", "token_count": 1322 }
178
# Archived Changes ### Nov 22, 2021 * A number of updated weights anew new model defs * `eca_halonext26ts` - 79.5 @ 256 * `resnet50_gn` (new) - 80.1 @ 224, 81.3 @ 288 * `resnet50` - 80.7 @ 224, 80.9 @ 288 (trained at 176, not replacing current a1 weights as default since these don't scale as well to higher res, ...
pytorch-image-models/docs/archived_changes.md/0
{ "file_path": "pytorch-image-models/docs/archived_changes.md", "repo_id": "pytorch-image-models", "token_count": 9335 }
179
# Deep Layer Aggregation Extending “shallow” skip connections, **Dense Layer Aggregation (DLA)** incorporates more depth and sharing. The authors introduce two structures for deep layer aggregation (DLA): iterative deep aggregation (IDA) and hierarchical deep aggregation (HDA). These structures are expressed through ...
pytorch-image-models/docs/models/.templates/models/dla.md/0
{ "file_path": "pytorch-image-models/docs/models/.templates/models/dla.md", "repo_id": "pytorch-image-models", "token_count": 5955 }
180
# Inception ResNet v2 **Inception-ResNet-v2** is a convolutional neural architecture that builds on the Inception family of architectures but incorporates [residual connections](https://paperswithcode.com/method/residual-connection) (replacing the filter concatenation stage of the Inception architecture). {% include ...
pytorch-image-models/docs/models/.templates/models/inception-resnet-v2.md/0
{ "file_path": "pytorch-image-models/docs/models/.templates/models/inception-resnet-v2.md", "repo_id": "pytorch-image-models", "token_count": 864 }
181
# Res2NeXt **Res2NeXt** is an image model that employs a variation on [ResNeXt](https://paperswithcode.com/method/resnext) bottleneck residual blocks. The motivation is to be able to represent features at multiple scales. This is achieved through a novel building block for CNNs that constructs hierarchical residual-li...
pytorch-image-models/docs/models/.templates/models/res2next.md/0
{ "file_path": "pytorch-image-models/docs/models/.templates/models/res2next.md", "repo_id": "pytorch-image-models", "token_count": 905 }
182
# (Tensorflow) EfficientNet CondConv **EfficientNet** is a convolutional neural network architecture and scaling method that uniformly scales all dimensions of depth/width/resolution using a *compound coefficient*. Unlike conventional practice that arbitrary scales these factors, the EfficientNet scaling method unifo...
pytorch-image-models/docs/models/.templates/models/tf-efficientnet-condconv.md/0
{ "file_path": "pytorch-image-models/docs/models/.templates/models/tf-efficientnet-condconv.md", "repo_id": "pytorch-image-models", "token_count": 2457 }
183
# DenseNet **DenseNet** is a type of convolutional neural network that utilises dense connections between layers, through [Dense Blocks](http://www.paperswithcode.com/method/dense-block), where we connect *all layers* (with matching feature-map sizes) directly with each other. To preserve the feed-forward nature, each...
pytorch-image-models/docs/models/densenet.md/0
{ "file_path": "pytorch-image-models/docs/models/densenet.md", "repo_id": "pytorch-image-models", "token_count": 4185 }
184
# Instagram ResNeXt WSL A **ResNeXt** repeats a [building block](https://paperswithcode.com/method/resnext-block) that aggregates a set of transformations with the same topology. Compared to a [ResNet](https://paperswithcode.com/method/resnet), it exposes a new dimension, *cardinality* (the size of the set of transfo...
pytorch-image-models/docs/models/ig-resnext.md/0
{ "file_path": "pytorch-image-models/docs/models/ig-resnext.md", "repo_id": "pytorch-image-models", "token_count": 3230 }
185
# Feature Extraction All of the models in `timm` have consistent mechanisms for obtaining various types of features from the model for tasks besides classification. ## Penultimate Layer Features (Pre-Classifier Features) The features from the penultimate model layer can be obtained in several ways without requiring ...
pytorch-image-models/hfdocs/source/feature_extraction.mdx/0
{ "file_path": "pytorch-image-models/hfdocs/source/feature_extraction.mdx", "repo_id": "pytorch-image-models", "token_count": 2004 }
186
# EfficientNet **EfficientNet** is a convolutional neural network architecture and scaling method that uniformly scales all dimensions of depth/width/resolution using a *compound coefficient*. Unlike conventional practice that arbitrary scales these factors, the EfficientNet scaling method uniformly scales network wi...
pytorch-image-models/hfdocs/source/models/efficientnet.mdx/0
{ "file_path": "pytorch-image-models/hfdocs/source/models/efficientnet.mdx", "repo_id": "pytorch-image-models", "token_count": 4915 }
187
# (Legacy) SE-ResNeXt **SE ResNeXt** is a variant of a [ResNeXt](https://www.paperswithcode.com/method/resnext) that employs [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) to enable the network to perform dynamic channel-wise feature recalibration. ## How do I use this...
pytorch-image-models/hfdocs/source/models/legacy-se-resnext.mdx/0
{ "file_path": "pytorch-image-models/hfdocs/source/models/legacy-se-resnext.mdx", "repo_id": "pytorch-image-models", "token_count": 2730 }
188
# ResNeXt A **ResNeXt** repeats a [building block](https://paperswithcode.com/method/resnext-block) that aggregates a set of transformations with the same topology. Compared to a [ResNet](https://paperswithcode.com/method/resnet), it exposes a new dimension, *cardinality* (the size of the set of transformations) \\( ...
pytorch-image-models/hfdocs/source/models/resnext.mdx/0
{ "file_path": "pytorch-image-models/hfdocs/source/models/resnext.mdx", "repo_id": "pytorch-image-models", "token_count": 3056 }
189
# (Tensorflow) MobileNet v3 **MobileNetV3** is a convolutional neural network that is designed for mobile phone CPUs. The network design includes the use of a [hard swish activation](https://paperswithcode.com/method/hard-swish) and [squeeze-and-excitation](https://paperswithcode.com/method/squeeze-and-excitation-bloc...
pytorch-image-models/hfdocs/source/models/tf-mobilenet-v3.mdx/0
{ "file_path": "pytorch-image-models/hfdocs/source/models/tf-mobilenet-v3.mdx", "repo_id": "pytorch-image-models", "token_count": 4781 }
190
""" ONNX-runtime validation script This script was created to verify accuracy and performance of exported ONNX models running with the onnxruntime. It utilizes the PyTorch dataloader/processing pipeline for a fair comparison against the originals. Copyright 2020 Ross Wightman """ import argparse import numpy as np im...
pytorch-image-models/onnx_validate.py/0
{ "file_path": "pytorch-image-models/onnx_validate.py", "repo_id": "pytorch-image-models", "token_count": 1960 }
191
""" Optimzier Tests These tests were adapted from PyTorch' optimizer tests. """ import math import pytest import functools from copy import deepcopy import torch from torch.testing._internal.common_utils import TestCase from torch.nn import Parameter from timm.scheduler import PlateauLRScheduler from timm.optim imp...
pytorch-image-models/tests/test_optim.py/0
{ "file_path": "pytorch-image-models/tests/test_optim.py", "repo_id": "pytorch-image-models", "token_count": 11722 }
192
""" Mixup and Cutmix Papers: mixup: Beyond Empirical Risk Minimization (https://arxiv.org/abs/1710.09412) CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features (https://arxiv.org/abs/1905.04899) Code Reference: CutMix: https://github.com/clovaai/CutMix-PyTorch Hacked together by / Co...
pytorch-image-models/timm/data/mixup.py/0
{ "file_path": "pytorch-image-models/timm/data/mixup.py", "repo_id": "pytorch-image-models", "token_count": 7225 }
193
""" Tensorflow Preprocessing Adapter Allows use of Tensorflow preprocessing pipeline in PyTorch Transform Copyright of original Tensorflow code below. Hacked together by / Copyright 2020 Ross Wightman """ # Copyright 2018 The TensorFlow Authors. All Rights Reserved. # # Licensed under the Apache License, Version 2....
pytorch-image-models/timm/data/tf_preprocessing.py/0
{ "file_path": "pytorch-image-models/timm/data/tf_preprocessing.py", "repo_id": "pytorch-image-models", "token_count": 3775 }
194
""" Conv2d w/ Same Padding Hacked together by / Copyright 2020 Ross Wightman """ import torch import torch.nn as nn import torch.nn.functional as F from typing import Tuple, Optional from .config import is_exportable, is_scriptable from .padding import pad_same, pad_same_arg, get_padding_value _USE_EXPORT_CONV = Fa...
pytorch-image-models/timm/layers/conv2d_same.py/0
{ "file_path": "pytorch-image-models/timm/layers/conv2d_same.py", "repo_id": "pytorch-image-models", "token_count": 1560 }
195
""" Global Response Normalization Module Based on the GRN layer presented in `ConvNeXt-V2 - Co-designing and Scaling ConvNets with Masked Autoencoders` - https://arxiv.org/abs/2301.00808 This implementation * works for both NCHW and NHWC tensor layouts * uses affine param names matching existing torch norm layers * s...
pytorch-image-models/timm/layers/grn.py/0
{ "file_path": "pytorch-image-models/timm/layers/grn.py", "repo_id": "pytorch-image-models", "token_count": 565 }
196
""" Image to Patch Embedding using Conv2d A convolution based approach to patchifying a 2D image w/ embedding projection. Based on code in: * https://github.com/google-research/vision_transformer * https://github.com/google-research/big_vision/tree/main/big_vision Hacked together by / Copyright 2020 Ross Wightma...
pytorch-image-models/timm/layers/patch_embed.py/0
{ "file_path": "pytorch-image-models/timm/layers/patch_embed.py", "repo_id": "pytorch-image-models", "token_count": 4705 }
197
from .asymmetric_loss import AsymmetricLossMultiLabel, AsymmetricLossSingleLabel from .binary_cross_entropy import BinaryCrossEntropy from .cross_entropy import LabelSmoothingCrossEntropy, SoftTargetCrossEntropy from .jsd import JsdCrossEntropy
pytorch-image-models/timm/loss/__init__.py/0
{ "file_path": "pytorch-image-models/timm/loss/__init__.py", "repo_id": "pytorch-image-models", "token_count": 70 }
198
import os import pkgutil from copy import deepcopy from torch import nn as nn from timm.layers import Conv2dSame, BatchNormAct2d, Linear __all__ = ['extract_layer', 'set_layer', 'adapt_model_from_string', 'adapt_model_from_file'] def extract_layer(model, layer): layer = layer.split('.') module = model ...
pytorch-image-models/timm/models/_prune.py/0
{ "file_path": "pytorch-image-models/timm/models/_prune.py", "repo_id": "pytorch-image-models", "token_count": 2021 }
199
"""PyTorch CspNet A PyTorch implementation of Cross Stage Partial Networks including: * CSPResNet50 * CSPResNeXt50 * CSPDarkNet53 * and DarkNet53 for good measure Based on paper `CSPNet: A New Backbone that can Enhance Learning Capability of CNN` - https://arxiv.org/abs/1911.11929 Reference impl via darknet cfg file...
pytorch-image-models/timm/models/cspnet.py/0
{ "file_path": "pytorch-image-models/timm/models/cspnet.py", "repo_id": "pytorch-image-models", "token_count": 19954 }
200
""" FocalNet As described in `Focal Modulation Networks` - https://arxiv.org/abs/2203.11926 Significant modifications and refactoring from the original impl at https://github.com/microsoft/FocalNet This impl is/has: * fully convolutional, NCHW tensor layout throughout, seemed to have minimal performance impact but m...
pytorch-image-models/timm/models/focalnet.py/0
{ "file_path": "pytorch-image-models/timm/models/focalnet.py", "repo_id": "pytorch-image-models", "token_count": 11585 }
201
""" Poolformer from MetaFormer is Actually What You Need for Vision https://arxiv.org/abs/2111.11418 IdentityFormer, RandFormer, PoolFormerV2, ConvFormer, and CAFormer from MetaFormer Baselines for Vision https://arxiv.org/abs/2210.13452 All implemented models support feature extraction and variable input resolution....
pytorch-image-models/timm/models/metaformer.py/0
{ "file_path": "pytorch-image-models/timm/models/metaformer.py", "repo_id": "pytorch-image-models", "token_count": 17521 }
202
""" Res2Net and Res2NeXt Adapted from Official Pytorch impl at: https://github.com/gasvn/Res2Net/ Paper: `Res2Net: A New Multi-scale Backbone Architecture` - https://arxiv.org/abs/1904.01169 """ import math import torch import torch.nn as nn from timm.data import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD from ._bui...
pytorch-image-models/timm/models/res2net.py/0
{ "file_path": "pytorch-image-models/timm/models/res2net.py", "repo_id": "pytorch-image-models", "token_count": 3659 }
203
"""VGG Adapted from https://github.com/pytorch/vision 'vgg.py' (BSD-3-Clause) with a few changes for timm functionality. Copyright 2021 Ross Wightman """ from typing import Union, List, Dict, Any, cast import torch import torch.nn as nn import torch.nn.functional as F from timm.data import IMAGENET_DEFAULT_MEAN, IM...
pytorch-image-models/timm/models/vgg.py/0
{ "file_path": "pytorch-image-models/timm/models/vgg.py", "repo_id": "pytorch-image-models", "token_count": 5201 }
204
""" AdamW Optimizer Impl copied from PyTorch master NOTE: Builtin optim.AdamW is used by the factory, this impl only serves as a Python based reference, will be removed someday """ import math import torch from torch.optim.optimizer import Optimizer class AdamW(Optimizer): r"""Implements AdamW algorithm. Th...
pytorch-image-models/timm/optim/adamw.py/0
{ "file_path": "pytorch-image-models/timm/optim/adamw.py", "repo_id": "pytorch-image-models", "token_count": 2417 }
205
""" Cosine Scheduler Cosine LR schedule with warmup, cycle/restarts, noise, k-decay. Hacked together by / Copyright 2021 Ross Wightman """ import logging import math import numpy as np import torch from .scheduler import Scheduler _logger = logging.getLogger(__name__) class CosineLRScheduler(Scheduler): """ ...
pytorch-image-models/timm/scheduler/cosine_lr.py/0
{ "file_path": "pytorch-image-models/timm/scheduler/cosine_lr.py", "repo_id": "pytorch-image-models", "token_count": 2031 }
206
""" Logging helpers Hacked together by / Copyright 2020 Ross Wightman """ import logging import logging.handlers class FormatterNoInfo(logging.Formatter): def __init__(self, fmt='%(levelname)s: %(message)s'): logging.Formatter.__init__(self, fmt) def format(self, record): if record.levelno =...
pytorch-image-models/timm/utils/log.py/0
{ "file_path": "pytorch-image-models/timm/utils/log.py", "repo_id": "pytorch-image-models", "token_count": 383 }
207
<div align="center"> # Text Generation Inference benchmarking tool ![benchmark](../assets/benchmark.png) </div> A lightweight benchmarking tool based inspired by [oha](https://github.com/hatoo/oha) and powered by [tui](https://github.com/tui-rs-revival/ratatui). ## Install ```shell make install-benchmark ``` ## ...
text-generation-inference/benchmark/README.md/0
{ "file_path": "text-generation-inference/benchmark/README.md", "repo_id": "text-generation-inference", "token_count": 187 }
208
import pytest from text_generation import ( InferenceAPIClient, InferenceAPIAsyncClient, Client, AsyncClient, ) from text_generation.errors import NotSupportedError, NotFoundError from text_generation.inference_api import check_model_support, deployed_models def test_check_model_support(flan_t5_xxl, ...
text-generation-inference/clients/python/tests/test_inference_api.py/0
{ "file_path": "text-generation-inference/clients/python/tests/test_inference_api.py", "repo_id": "text-generation-inference", "token_count": 411 }
209
[ { "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [ { "id": 17934, "logprob": null, "text": "Pour" }, { "id": 49833, "logprob": -10.5625, "text": " dé...
text-generation-inference/integration-tests/models/__snapshots__/test_bloom_560m/test_bloom_560m_load.json/0
{ "file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_bloom_560m/test_bloom_560m_load.json", "repo_id": "text-generation-inference", "token_count": 7244 }
210
[ { "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [ { "id": 1, "logprob": null, "text": "<s>" }, { "id": 1024, "logprob": -10.578125, "text": "name" ...
text-generation-inference/integration-tests/models/__snapshots__/test_flash_grammar_llama/test_flash_llama_grammar_load.json/0
{ "file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_flash_grammar_llama/test_flash_llama_grammar_load.json", "repo_id": "text-generation-inference", "token_count": 6602 }
211
[ { "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [ { "id": 50278, "logprob": null, "text": "<|USER|>" }, { "id": 1276, "logprob": -4.5546875, "text":...
text-generation-inference/integration-tests/models/__snapshots__/test_flash_neox/test_flash_neox_load.json/0
{ "file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_flash_neox/test_flash_neox_load.json", "repo_id": "text-generation-inference", "token_count": 6308 }
212
[ { "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [ { "id": 610, "logprob": null, "text": "def" }, { "id": 1489, "logprob": -5.2617188, "text": " prin...
text-generation-inference/integration-tests/models/__snapshots__/test_flash_starcoder2/test_flash_starcoder2_load.json/0
{ "file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_flash_starcoder2/test_flash_starcoder2_load.json", "repo_id": "text-generation-inference", "token_count": 5236 }
213
[ { "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [ { "id": 50278, "logprob": null, "text": "<|USER|>" }, { "id": 1276, "logprob": -4.5546875, "text":...
text-generation-inference/integration-tests/models/__snapshots__/test_neox/test_neox_load.json/0
{ "file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_neox/test_neox_load.json", "repo_id": "text-generation-inference", "token_count": 6296 }
214
import pytest import json from text_generation.types import GrammarType @pytest.fixture(scope="module") def flash_llama_grammar_handle(launcher): with launcher( "TinyLlama/TinyLlama-1.1B-Chat-v1.0", num_shard=2, disable_grammar_support=False ) as handle: yield handle @pytest.fixture(scope="...
text-generation-inference/integration-tests/models/test_flash_grammar_llama.py/0
{ "file_path": "text-generation-inference/integration-tests/models/test_flash_grammar_llama.py", "repo_id": "text-generation-inference", "token_count": 2366 }
215
import pytest @pytest.fixture(scope="module") def mpt_sharded_handle(launcher): with launcher("mosaicml/mpt-7b", num_shard=2) as handle: yield handle @pytest.fixture(scope="module") async def mpt_sharded(mpt_sharded_handle): await mpt_sharded_handle.health(300) return mpt_sharded_handle.client ...
text-generation-inference/integration-tests/models/test_mpt.py/0
{ "file_path": "text-generation-inference/integration-tests/models/test_mpt.py", "repo_id": "text-generation-inference", "token_count": 525 }
216
import { get_options, run } from "./common.js"; const reference_latency_ms = 70; const host = __ENV.HOST || '127.0.0.1:8000'; const max_new_tokens = 50; function generate_payload(gpt){ const input = gpt["conversations"][0]["value"]; return {"inputs": input, "parameters": {"max_new_tokens": max_new_tokens, "d...
text-generation-inference/load_tests/tgi.js/0
{ "file_path": "text-generation-inference/load_tests/tgi.js", "repo_id": "text-generation-inference", "token_count": 184 }
217
mod health; /// Text Generation Inference Webserver mod infer; mod queue; pub mod server; mod validation; use infer::{Infer, InferError, InferStreamResponse}; use queue::{Entry, Queue}; use serde::{Deserialize, Serialize}; use tokio::sync::OwnedSemaphorePermit; use tokio_stream::wrappers::UnboundedReceiverStream; use ...
text-generation-inference/router/src/lib.rs/0
{ "file_path": "text-generation-inference/router/src/lib.rs", "repo_id": "text-generation-inference", "token_count": 13923 }
218
#include <ATen/Dispatch.h> #include <THC/THCAtomics.cuh> #include <ATen/ATen.h> #include <torch/torch.h> #include <vector> #include <optional> /** * Friendly reminder of how multithreading works in CUDA: https://developer.nvidia.com/blog/even-easier-introduction-cuda * Check example at https://github.com/thomasw21/Li...
text-generation-inference/server/custom_kernels/custom_kernels/fused_attention_cuda.cu/0
{ "file_path": "text-generation-inference/server/custom_kernels/custom_kernels/fused_attention_cuda.cu", "repo_id": "text-generation-inference", "token_count": 5265 }
219
// Adapted from turboderp exllama: https://github.com/turboderp/exllama #ifndef _util_cuh #define _util_cuh #include <cuda_runtime.h> #include <cuda_fp16.h> #include <cstdint> #include <cstdio> #if defined(USE_ROCM) #define cudaUnspecified hipErrorUnknown #else #define cudaUnspecified cudaErrorApiFailureBase #endif ...
text-generation-inference/server/exllama_kernels/exllama_kernels/util.cuh/0
{ "file_path": "text-generation-inference/server/exllama_kernels/exllama_kernels/util.cuh", "repo_id": "text-generation-inference", "token_count": 283 }
220
#ifndef _qdq_6_cuh #define _qdq_6_cuh #include "qdq_util.cuh" #include "../../config.h" #if QMODE_6BIT == 1 // Not implemented #else __forceinline__ __device__ void shuffle_6bit_16 ( uint32_t* q, int stride ) { } __forceinline__ __device__ void dequant_6bit_16 ( const uint32_t q_0, const uint32_...
text-generation-inference/server/exllamav2_kernels/exllamav2_kernels/cuda/quant/qdq_6.cuh/0
{ "file_path": "text-generation-inference/server/exllamav2_kernels/exllamav2_kernels/cuda/quant/qdq_6.cuh", "repo_id": "text-generation-inference", "token_count": 571 }
221
import pytest import torch from copy import copy from transformers import AutoTokenizer from text_generation_server.pb import generate_pb2 from text_generation_server.models.seq2seq_lm import Seq2SeqLM, Seq2SeqLMBatch @pytest.fixture(scope="session") def mt0_small_tokenizer(): tokenizer = AutoTokenizer.from_pr...
text-generation-inference/server/tests/models/test_seq2seq_lm.py/0
{ "file_path": "text-generation-inference/server/tests/models/test_seq2seq_lm.py", "repo_id": "text-generation-inference", "token_count": 5483 }
222
# coding=utf-8 # Copyright 2022 EleutherAI and the HuggingFace Inc. team. All rights reserved. # # This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX # and OPT implementations in this library. It has been modified from its # original forms to accommodate minor architectural differences compared # to G...
text-generation-inference/server/text_generation_server/models/custom_modeling/flash_gemma_modeling.py/0
{ "file_path": "text-generation-inference/server/text_generation_server/models/custom_modeling/flash_gemma_modeling.py", "repo_id": "text-generation-inference", "token_count": 9688 }
223
import torch import torch.distributed from mamba_ssm.ops.triton.selective_state_update import selective_state_update from mamba_ssm.ops.selective_scan_interface import selective_scan_fn from torch import nn from typing import Optional, Tuple, Any from transformers.configuration_utils import PretrainedConfig import tor...
text-generation-inference/server/text_generation_server/models/custom_modeling/mamba_modeling.py/0
{ "file_path": "text-generation-inference/server/text_generation_server/models/custom_modeling/mamba_modeling.py", "repo_id": "text-generation-inference", "token_count": 4100 }
224
import math import torch from typing import Optional from transformers.models.gpt2 import GPT2TokenizerFast from text_generation_server.models.cache_manager import BLOCK_SIZE from text_generation_server.models.flash_mistral import ( BaseFlashMistral, set_sliding_window, ) from text_generation_server.models....
text-generation-inference/server/text_generation_server/models/flash_starcoder2.py/0
{ "file_path": "text-generation-inference/server/text_generation_server/models/flash_starcoder2.py", "repo_id": "text-generation-inference", "token_count": 1248 }
225
import os import torch import torch.distributed from torch import nn from torch.nn import functional as F from typing import List, Tuple, Optional from loguru import logger from functools import lru_cache HAS_BITS_AND_BYTES = True try: import bitsandbytes as bnb from bitsandbytes.nn import Int8Params, Params4...
text-generation-inference/server/text_generation_server/utils/layers.py/0
{ "file_path": "text-generation-inference/server/text_generation_server/utils/layers.py", "repo_id": "text-generation-inference", "token_count": 19888 }
226
target .yarn
tokenizers/bindings/node/.prettierignore/0
{ "file_path": "tokenizers/bindings/node/.prettierignore", "repo_id": "tokenizers", "token_count": 5 }
227
{ "name": "tokenizers-darwin-x64", "version": "0.13.4-rc1", "os": [ "darwin" ], "cpu": [ "x64" ], "main": "tokenizers.darwin-x64.node", "files": [ "tokenizers.darwin-x64.node" ], "description": "Tokenizers platform specific bindings", "keywords": [ "napi-rs", "NAPI", "N-API...
tokenizers/bindings/node/npm/darwin-x64/package.json/0
{ "file_path": "tokenizers/bindings/node/npm/darwin-x64/package.json", "repo_id": "tokenizers", "token_count": 268 }
228
{ "name": "tokenizers-win32-ia32-msvc", "version": "0.13.4-rc1", "os": [ "win32" ], "cpu": [ "ia32" ], "main": "tokenizers.win32-ia32-msvc.node", "files": [ "tokenizers.win32-ia32-msvc.node" ], "description": "Tokenizers platform specific bindings", "keywords": [ "napi-rs", "NA...
tokenizers/bindings/node/npm/win32-ia32-msvc/package.json/0
{ "file_path": "tokenizers/bindings/node/npm/win32-ia32-msvc/package.json", "repo_id": "tokenizers", "token_count": 277 }
229
use crate::decoders::Decoder; use crate::encoding::{JsEncoding, JsTruncationDirection, JsTruncationStrategy}; use crate::models::Model; use crate::normalizers::Normalizer; use crate::pre_tokenizers::PreTokenizer; use crate::processors::Processor; use crate::tasks::tokenizer::{DecodeBatchTask, DecodeTask, EncodeBatchTas...
tokenizers/bindings/node/src/tokenizer.rs/0
{ "file_path": "tokenizers/bindings/node/src/tokenizer.rs", "repo_id": "tokenizers", "token_count": 5701 }
230
import argparse import glob from tokenizers import BertWordPieceTokenizer parser = argparse.ArgumentParser() parser.add_argument( "--files", default=None, metavar="path", type=str, required=True, help="The files to use as training; accept '**/*.txt' type of patterns \ ...
tokenizers/bindings/python/examples/train_bert_wordpiece.py/0
{ "file_path": "tokenizers/bindings/python/examples/train_bert_wordpiece.py", "repo_id": "tokenizers", "token_count": 472 }
231
# Generated content DO NOT EDIT class Model: """ Base class for all models The model represents the actual tokenization algorithm. This is the part that will contain and manage the learned vocabulary. This class cannot be constructed directly. Please use one of the concrete models. """ def...
tokenizers/bindings/python/py_src/tokenizers/models/__init__.pyi/0
{ "file_path": "tokenizers/bindings/python/py_src/tokenizers/models/__init__.pyi", "repo_id": "tokenizers", "token_count": 7567 }
232
import tokenizers from argparse import ArgumentParser import sentencepiece as spm from collections import Counter import json import os import datetime try: from termcolor import colored has_color = True except Exception: has_color = False def main(): parser = ArgumentParser("SentencePiece parity ch...
tokenizers/bindings/python/scripts/spm_parity_check.py/0
{ "file_path": "tokenizers/bindings/python/scripts/spm_parity_check.py", "repo_id": "tokenizers", "token_count": 4110 }
233
use tokenizers as tk; use pyo3::exceptions; use pyo3::prelude::*; use pyo3::types::*; use super::{ DestroyPtr, PyNormalizedString, PyNormalizedStringRefMut, RefMutContainer, RefMutGuard, }; use crate::encoding::PyEncoding; use crate::error::ToPyResult; use crate::token::PyToken; use tk::{OffsetReferential, Offset...
tokenizers/bindings/python/src/utils/pretokenization.rs/0
{ "file_path": "tokenizers/bindings/python/src/utils/pretokenization.rs", "repo_id": "tokenizers", "token_count": 4885 }
234
from tokenizers import Tokenizer from ..utils import data_dir, doc_wiki_tokenizer disable_printing = True original_print = print def print(*args, **kwargs): if not disable_printing: original_print(*args, **kwargs) class TestQuicktour: # This method contains everything we don't want to run @sta...
tokenizers/bindings/python/tests/documentation/test_quicktour.py/0
{ "file_path": "tokenizers/bindings/python/tests/documentation/test_quicktour.py", "repo_id": "tokenizers", "token_count": 3290 }
235
# Encoding <tokenizerslangcontent> <python> ## Encoding [[autodoc]] tokenizers.Encoding - all - attention_mask - ids - n_sequences - offsets - overflowing - sequence_ids - special_tokens_mask - tokens - type_ids - word_ids - words </python> <rust> The Rust API Reference...
tokenizers/docs/source-doc-builder/api/encoding.mdx/0
{ "file_path": "tokenizers/docs/source-doc-builder/api/encoding.mdx", "repo_id": "tokenizers", "token_count": 190 }
236
from docutils import nodes import sphinx from sphinx.locale import _ from conf import rust_version logger = sphinx.util.logging.getLogger(__name__) class RustRef: def __call__(self, name, rawtext, text, lineno, inliner, options={}, content=[]): doctype = name.split("_")[1] parts = text.split(":...
tokenizers/docs/source/_ext/rust_doc.py/0
{ "file_path": "tokenizers/docs/source/_ext/rust_doc.py", "repo_id": "tokenizers", "token_count": 1221 }
237
Tokenizers ==================================================================================================== Fast State-of-the-art tokenizers, optimized for both research and production `🤗 Tokenizers`_ provides an implementation of today's most used tokenizers, with a focus on performance and versatility. These t...
tokenizers/docs/source/index.rst/0
{ "file_path": "tokenizers/docs/source/index.rst", "repo_id": "tokenizers", "token_count": 404 }
238
use std::time::{Duration, Instant}; use criterion::black_box; use tokenizers::{ Decoder, EncodeInput, Model, Normalizer, PostProcessor, PreTokenizer, TokenizerImpl, Trainer, }; pub fn iter_bench_encode<M, N, PT, PP, D>( iters: u64, tokenizer: &TokenizerImpl<M, N, PT, PP, D>, lines: &[EncodeInput], ) ...
tokenizers/tokenizers/benches/common/mod.rs/0
{ "file_path": "tokenizers/tokenizers/benches/common/mod.rs", "repo_id": "tokenizers", "token_count": 964 }
239
// A dependency graph that contains any wasm must all be imported // asynchronously. This `bootstrap.js` file does the single async import, so // that no one else needs to worry about it again. import("./index.js") .catch(e => console.error("Error importing `index.js`:", e));
tokenizers/tokenizers/examples/unstable_wasm/www/bootstrap.js/0
{ "file_path": "tokenizers/tokenizers/examples/unstable_wasm/www/bootstrap.js", "repo_id": "tokenizers", "token_count": 79 }
240
//! [Byte Pair Encoding](https://www.aclweb.org/anthology/P16-1162/) model. use std::{iter, mem}; mod model; mod serialization; pub mod trainer; mod word; type Pair = (u32, u32); /// Errors that can be encountered while using or constructing a `BPE` model. #[derive(thiserror::Error, Debug)] pub enum Error { /// ...
tokenizers/tokenizers/src/models/bpe/mod.rs/0
{ "file_path": "tokenizers/tokenizers/src/models/bpe/mod.rs", "repo_id": "tokenizers", "token_count": 891 }
241
use super::{super::OrderedVocabIter, WordPiece, WordPieceBuilder}; use serde::{ de::{MapAccess, Visitor}, ser::SerializeStruct, Deserialize, Deserializer, Serialize, Serializer, }; use std::collections::HashSet; impl Serialize for WordPiece { fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Er...
tokenizers/tokenizers/src/models/wordpiece/serialization.rs/0
{ "file_path": "tokenizers/tokenizers/src/models/wordpiece/serialization.rs", "repo_id": "tokenizers", "token_count": 2453 }
242
use serde::{Deserialize, Serialize}; use crate::tokenizer::{PreTokenizedString, PreTokenizer, Result, SplitDelimiterBehavior}; use crate::utils::macro_rules_attribute; use unicode_categories::UnicodeCategories; fn is_punc(x: char) -> bool { char::is_ascii_punctuation(&x) || x.is_punctuation() } #[derive(Copy, Cl...
tokenizers/tokenizers/src/pre_tokenizers/punctuation.rs/0
{ "file_path": "tokenizers/tokenizers/src/pre_tokenizers/punctuation.rs", "repo_id": "tokenizers", "token_count": 1102 }
243
use crate::utils::SysRegex; use crate::{Offsets, Result}; use regex::Regex; /// Pattern used to split a NormalizedString pub trait Pattern { /// Slice the given string in a list of pattern match positions, with /// a boolean indicating whether this is a match or not. /// /// This method *must* cover th...
tokenizers/tokenizers/src/tokenizer/pattern.rs/0
{ "file_path": "tokenizers/tokenizers/src/tokenizer/pattern.rs", "repo_id": "tokenizers", "token_count": 3903 }
244
#![cfg(feature = "http")] use tokenizers::{FromPretrainedParameters, Result, Tokenizer}; #[test] fn test_from_pretrained() -> Result<()> { let tokenizer = Tokenizer::from_pretrained("bert-base-cased", None)?; let encoding = tokenizer.encode("Hey there dear friend!", false)?; assert_eq!( encoding.ge...
tokenizers/tokenizers/tests/from_pretrained.rs/0
{ "file_path": "tokenizers/tokenizers/tests/from_pretrained.rs", "repo_id": "tokenizers", "token_count": 683 }
245
FROM nvidia/cuda:11.8.0-cudnn8-devel-ubuntu20.04 LABEL maintainer="Hugging Face" ARG DEBIAN_FRONTEND=noninteractive # Use login shell to read variables from `~/.profile` (to pass dynamic created variables between RUN commands) SHELL ["sh", "-lc"] # The following `ARG` are mainly used to specify the versions explicit...
transformers/docker/transformers-all-latest-gpu/Dockerfile/0
{ "file_path": "transformers/docker/transformers-all-latest-gpu/Dockerfile", "repo_id": "transformers", "token_count": 1166 }
246
### Translating the Transformers documentation into your language As part of our mission to democratize machine learning, we'd love to make the Transformers library available in many more languages! Follow the steps below if you want to help translate the documentation into your language 🙏. **🗞️ Open an issue** To...
transformers/docs/TRANSLATING.md/0
{ "file_path": "transformers/docs/TRANSLATING.md", "repo_id": "transformers", "token_count": 948 }
247
<!--- Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or ...
transformers/docs/source/de/pr_checks.md/0
{ "file_path": "transformers/docs/source/de/pr_checks.md", "repo_id": "transformers", "token_count": 4986 }
248
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
transformers/docs/source/en/benchmarks.md/0
{ "file_path": "transformers/docs/source/en/benchmarks.md", "repo_id": "transformers", "token_count": 7208 }
249
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
transformers/docs/source/en/hpo_train.md/0
{ "file_path": "transformers/docs/source/en/hpo_train.md", "repo_id": "transformers", "token_count": 2076 }
250
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
transformers/docs/source/en/main_classes/callback.md/0
{ "file_path": "transformers/docs/source/en/main_classes/callback.md", "repo_id": "transformers", "token_count": 1520 }
251
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
transformers/docs/source/en/main_classes/tokenizer.md/0
{ "file_path": "transformers/docs/source/en/main_classes/tokenizer.md", "repo_id": "transformers", "token_count": 1144 }
252
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
transformers/docs/source/en/model_doc/bertweet.md/0
{ "file_path": "transformers/docs/source/en/model_doc/bertweet.md", "repo_id": "transformers", "token_count": 806 }
253
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
transformers/docs/source/en/model_doc/data2vec.md/0
{ "file_path": "transformers/docs/source/en/model_doc/data2vec.md", "repo_id": "transformers", "token_count": 2027 }
254
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
transformers/docs/source/en/model_doc/dpr.md/0
{ "file_path": "transformers/docs/source/en/model_doc/dpr.md", "repo_id": "transformers", "token_count": 1170 }
255
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
transformers/docs/source/en/model_doc/fnet.md/0
{ "file_path": "transformers/docs/source/en/model_doc/fnet.md", "repo_id": "transformers", "token_count": 1150 }
256
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
transformers/docs/source/en/model_doc/levit.md/0
{ "file_path": "transformers/docs/source/en/model_doc/levit.md", "repo_id": "transformers", "token_count": 1801 }
257
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
transformers/docs/source/en/model_doc/openai-gpt.md/0
{ "file_path": "transformers/docs/source/en/model_doc/openai-gpt.md", "repo_id": "transformers", "token_count": 2422 }
258
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
transformers/docs/source/en/model_doc/prophetnet.md/0
{ "file_path": "transformers/docs/source/en/model_doc/prophetnet.md", "repo_id": "transformers", "token_count": 1170 }
259
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
transformers/docs/source/en/model_doc/rwkv.md/0
{ "file_path": "transformers/docs/source/en/model_doc/rwkv.md", "repo_id": "transformers", "token_count": 2548 }
260
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
transformers/docs/source/en/model_doc/trocr.md/0
{ "file_path": "transformers/docs/source/en/model_doc/trocr.md", "repo_id": "transformers", "token_count": 2132 }
261
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
transformers/docs/source/en/model_doc/visual_bert.md/0
{ "file_path": "transformers/docs/source/en/model_doc/visual_bert.md", "repo_id": "transformers", "token_count": 1680 }
262
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
transformers/docs/source/en/model_doc/xglm.md/0
{ "file_path": "transformers/docs/source/en/model_doc/xglm.md", "repo_id": "transformers", "token_count": 1137 }
263
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
transformers/docs/source/en/pipeline_tutorial.md/0
{ "file_path": "transformers/docs/source/en/pipeline_tutorial.md", "repo_id": "transformers", "token_count": 4846 }
264
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
transformers/docs/source/en/tasks/image_feature_extraction.md/0
{ "file_path": "transformers/docs/source/en/tasks/image_feature_extraction.md", "repo_id": "transformers", "token_count": 1539 }
265
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
transformers/docs/source/en/tasks/translation.md/0
{ "file_path": "transformers/docs/source/en/tasks/translation.md", "repo_id": "transformers", "token_count": 5209 }
266
- sections: - local: index title: 🤗 Transformers - local: quicktour title: Tour rápido - local: installation title: Instalación title: Empezar - sections: - local: pipeline_tutorial title: Pipelines para inferencia - local: autoclass_tutorial title: Carga instancias preentrenadas con un...
transformers/docs/source/es/_toctree.yml/0
{ "file_path": "transformers/docs/source/es/_toctree.yml", "repo_id": "transformers", "token_count": 1111 }
267
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
transformers/docs/source/es/model_sharing.md/0
{ "file_path": "transformers/docs/source/es/model_sharing.md", "repo_id": "transformers", "token_count": 3985 }
268
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
transformers/docs/source/es/tasks/image_classification.md/0
{ "file_path": "transformers/docs/source/es/tasks/image_classification.md", "repo_id": "transformers", "token_count": 2441 }
269
- sections: - local: pipeline_tutorial title: पाइपलाइनों के साथ अनुमान चलाएँ
transformers/docs/source/hi/_toctree.yml/0
{ "file_path": "transformers/docs/source/hi/_toctree.yml", "repo_id": "transformers", "token_count": 65 }
270
<!--- Copyright 2020 The HuggingFace Team. Tutti i diritti riservati. Concesso in licenza in base alla Licenza Apache, Versione 2.0 (la "Licenza"); non è possibile utilizzare questo file se non in conformità con la Licenza. È possibile ottenere una copia della Licenza all'indirizzo http://www.apache.org/licenses/LICE...
transformers/docs/source/it/migration.md/0
{ "file_path": "transformers/docs/source/it/migration.md", "repo_id": "transformers", "token_count": 5577 }
271
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
transformers/docs/source/it/run_scripts.md/0
{ "file_path": "transformers/docs/source/it/run_scripts.md", "repo_id": "transformers", "token_count": 6868 }
272
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
transformers/docs/source/ja/custom_tools.md/0
{ "file_path": "transformers/docs/source/ja/custom_tools.md", "repo_id": "transformers", "token_count": 15519 }
273
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
transformers/docs/source/ja/llm_tutorial.md/0
{ "file_path": "transformers/docs/source/ja/llm_tutorial.md", "repo_id": "transformers", "token_count": 5622 }
274
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed...
transformers/docs/source/ja/main_classes/quantization.md/0
{ "file_path": "transformers/docs/source/ja/main_classes/quantization.md", "repo_id": "transformers", "token_count": 10631 }
275