<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->

# VeRA: Vector-based Random Matrix Adaptation

[VeRA](https://huggingface.co/papers/2310.11454) is a parameter-efficient fine-tuning technique that is similar to LoRA but requires even fewer extra parameters while promising similar or even better performance. As such, it is particularly useful when the parameter budget is very limited, e.g. when scaling to very large models. The reduction in the number of trainable parameters is achieved by sharing the same low-rank matrices across all layers and training only two additional vectors per layer.

When saving the adapter parameters, it's possible to eschew storing the low-rank matrices by setting `save_projection=False` on the `VeraConfig`. In that case, these matrices will be restored based on the fixed random seed from the `projection_prng_key` argument. This cuts down on the size of the checkpoint, but we cannot guarantee reproducibility on all devices and for all future versions of PyTorch. If you want to ensure reproducibility, set `save_projection=True` (which is the default).

To handle different shapes of adapted layers, VeRA initializes shared A and B matrices with the largest required size for each dimension. During the forward pass, submatrices A and B for a given layer are sliced out from these shared matrices and used as described in the paper. For example, adapting two linear layers of shapes (100, 20) and (80, 50) will create A and B matrices of shapes (rank, 50) and (100, rank) respectively. Then, to adapt a layer of shape (100, 20), submatrices A and B of shapes (rank, 20) and (100, rank) will be extracted.

VeRA currently has the following constraint:

- Only `nn.Linear` layers are supported.

The abstract from the paper is:

> Low-rank adaptation (LoRA) is a popular method that reduces the number of trainable parameters when finetuning large language models, but still faces acute storage challenges when scaling to even larger models or deploying numerous per-user or per-task adapted models. In this work, we present Vector-based Random Matrix Adaptation (VeRA), which significantly reduces the number of trainable parameters compared to LoRA, yet maintains the same performance. It achieves this by using a single pair of low-rank matrices shared across all layers and learning small scaling vectors instead. We demonstrate its effectiveness on the GLUE and E2E benchmarks, image classification tasks, and show its application in instruction-tuning of 7B and 13B language models.

## VeRAConfig

[[autodoc]] tuners.vera.config.VeraConfig

## VeRAModel

[[autodoc]] tuners.vera.model.VeraModel
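For illustration, here is a minimal sketch of applying VeRA with PEFT; the model name and target modules are placeholders for your own setup, not requirements of VeRA:

```python
# Minimal sketch: wrap a causal LM with VeRA. Model name and target modules
# are illustrative; only nn.Linear layers can be targeted.
from transformers import AutoModelForCausalLM
from peft import VeraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
config = VeraConfig(
    r=256,                                # rank of the shared frozen A/B matrices
    target_modules=["q_proj", "v_proj"],  # illustrative linear layers
    save_projection=True,                 # store A/B in the checkpoint for reproducibility
)
peft_model = get_peft_model(model, config)
peft_model.print_trainable_parameters()   # only the per-layer vectors are trainable
```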
peft/docs/source/package_reference/vera.md/0
{ "file_path": "peft/docs/source/package_reference/vera.md", "repo_id": "peft", "token_count": 807 }
227
PEFT_TYPE="boft" BLOCK_NUM=8 BLOCK_SIZE=0 N_BUTTERFLY_FACTOR=1 export DATASET_NAME="oftverse/control-celeba-hq" export PROJECT_NAME="controlnet_${PEFT_TYPE}" export RUN_NAME="${PEFT_TYPE}_${BLOCK_NUM}${BLOCK_SIZE}${N_BUTTERFLY_FACTOR}" export CONTROLNET_PATH="" export MODEL_NAME="stabilityai/stable-diffusion-2-1" # export MODEL_NAME="runwayml/stable-diffusion-v1-5" export OUTPUT_DIR="./output/${DATASET_NAME}/${RUN_NAME}" accelerate launch train_controlnet.py \ --pretrained_model_name_or_path=$MODEL_NAME \ --resume_from_checkpoint=$RESUME_PATH \ --controlnet_model_name_or_path=$CONTROLNET_PATH \ --output_dir=$OUTPUT_DIR \ --report_to="wandb" \ --dataset_name=$DATASET_NAME \ --resolution=512 \ --learning_rate=1e-5 \ --checkpointing_steps=500 \ --max_train_steps=50000 \ --validation_steps=5000 \ --num_validation_images=12 \ --train_batch_size=4 \ --dataloader_num_workers=2 \ --seed="0" \ --lr_scheduler="constant" \ --lr_warmup_steps=0 \ --wandb_project_name=$PROJECT_NAME \ --wandb_run_name=$RUN_NAME \ --enable_xformers_memory_efficient_attention \ --use_boft \ --boft_block_num=$BLOCK_NUM \ --boft_block_size=$BLOCK_SIZE \ --boft_n_butterfly_factor=$N_BUTTERFLY_FACTOR \ --boft_dropout=0.1 \ --boft_bias="boft_only" \
peft/examples/boft_controlnet/train_controlnet.sh/0
{ "file_path": "peft/examples/boft_controlnet/train_controlnet.sh", "repo_id": "peft", "token_count": 557 }
228
import argparse
import os
import warnings
from typing import Optional

from huggingface_hub import HfFolder, whoami
from transformers import PretrainedConfig


def import_model_class_from_model_name_or_path(pretrained_model_name_or_path: str, revision: str):
    text_encoder_config = PretrainedConfig.from_pretrained(
        pretrained_model_name_or_path,
        subfolder="text_encoder",
        revision=revision,
    )
    model_class = text_encoder_config.architectures[0]

    if model_class == "CLIPTextModel":
        from transformers import CLIPTextModel

        return CLIPTextModel
    elif model_class == "RobertaSeriesModelWithTransformation":
        from diffusers.pipelines.alt_diffusion.modeling_roberta_series import RobertaSeriesModelWithTransformation

        return RobertaSeriesModelWithTransformation
    else:
        raise ValueError(f"{model_class} is not supported.")


def get_full_repo_name(model_id: str, organization: Optional[str] = None, token: Optional[str] = None):
    if token is None:
        token = HfFolder.get_token()
    if organization is None:
        username = whoami(token)["name"]
        return f"{username}/{model_id}"
    else:
        return f"{organization}/{model_id}"


def parse_args(input_args=None):
    parser = argparse.ArgumentParser(description="Simple example of a Dreambooth training script.")
    parser.add_argument(
        "--pretrained_model_name_or_path",
        type=str,
        default=None,
        required=True,
        help="Path to pretrained model or model identifier from huggingface.co/models.",
    )
    parser.add_argument(
        "--revision",
        type=str,
        default=None,
        required=False,
        help="Revision of pretrained model identifier from huggingface.co/models.",
    )
    parser.add_argument(
        "--tokenizer_name",
        type=str,
        default=None,
        help="Pretrained tokenizer name or path if not the same as model_name",
    )
    parser.add_argument(
        "--instance_data_dir",
        type=str,
        default=None,
        required=True,
        help="A folder containing the training data of instance images.",
    )
    parser.add_argument(
        "--class_data_dir",
        type=str,
        default=None,
        required=False,
        help="A folder containing the training data of class images.",
    )
    parser.add_argument(
        "--instance_prompt",
        type=str,
        default=None,
        required=True,
        help="The prompt with identifier specifying the instance",
    )
    parser.add_argument(
        "--class_prompt",
        type=str,
        default=None,
        help="The prompt to specify images in the same class as provided instance images.",
    )
    parser.add_argument(
        "--with_prior_preservation",
        default=False,
        action="store_true",
        help="Flag to add prior preservation loss.",
    )
    parser.add_argument("--prior_loss_weight", type=float, default=1.0, help="The weight of prior preservation loss.")
    parser.add_argument(
        "--num_class_images",
        type=int,
        default=100,
        help=(
            "Minimal class images for prior preservation loss. If there are not enough images already present in"
            " class_data_dir, additional images will be sampled with class_prompt."
        ),
    )
    parser.add_argument(
        "--validation_prompt",
        nargs="+",
        help="A prompt that is used during validation to verify that the model is learning.",
    )
    parser.add_argument(
        "--num_validation_images",
        type=int,
        default=4,
        help="Number of images that should be generated during validation with `validation_prompt`.",
    )
    parser.add_argument(
        "--validation_steps",
        type=int,
        default=500,
        help=(
            "Run dreambooth validation every X steps. Dreambooth validation consists of running the prompt"
            " `args.validation_prompt` multiple times: `args.num_validation_images`."
        ),
    )
    parser.add_argument(
        "--output_dir",
        type=str,
        default="text-inversion-model",
        help="The output directory where the model predictions and checkpoints will be written.",
    )
    parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.")
    parser.add_argument(
        "--resolution",
        type=int,
        default=512,
        help=(
            "The resolution for input images, all the images in the train/validation dataset will be resized to this"
            " resolution"
        ),
    )
    parser.add_argument(
        "--center_crop", action="store_true", help="Whether to center crop images before resizing to resolution"
    )
    parser.add_argument("--train_text_encoder", action="store_true", help="Whether to train the text encoder")
    parser.add_argument(
        "--set_grads_to_none",
        action="store_true",
        help=(
            "Save more memory by setting grads to None instead of zero. Be aware that this changes certain"
            " behaviors, so disable this argument if it causes any problems. More info:"
            " https://pytorch.org/docs/stable/generated/torch.optim.Optimizer.zero_grad.html"
        ),
    )

    # boft args
    parser.add_argument("--use_boft", action="store_true", help="Whether to use BOFT for parameter efficient tuning")
    parser.add_argument("--boft_block_num", type=int, default=4, help="The number of BOFT blocks")
    parser.add_argument("--boft_block_size", type=int, default=0, help="The size of BOFT blocks")
    parser.add_argument("--boft_n_butterfly_factor", type=int, default=2, help="The number of butterfly factors")
    parser.add_argument("--boft_dropout", type=float, default=0.1, help="BOFT dropout, only used if use_boft is True")
    parser.add_argument(
        "--boft_bias",
        type=str,
        default="none",
        help="Bias type for BOFT. Can be 'none', 'all' or 'boft_only', only used if use_boft is True",
    )

    parser.add_argument(
        "--num_dataloader_workers", type=int, default=1, help="Num of workers for the training dataloader."
    )
    parser.add_argument(
        "--no_tracemalloc",
        default=False,
        action="store_true",
        help="Flag to stop memory allocation tracing during training. This could speed up training on Windows.",
    )
    parser.add_argument(
        "--train_batch_size", type=int, default=4, help="Batch size (per device) for the training dataloader."
    )
    parser.add_argument(
        "--sample_batch_size", type=int, default=4, help="Batch size (per device) for sampling images."
    )
    parser.add_argument("--num_train_epochs", type=int, default=1)
    parser.add_argument(
        "--max_train_steps",
        type=int,
        default=None,
        help="Total number of training steps to perform. If provided, overrides num_train_epochs.",
    )
    parser.add_argument(
        "--checkpointing_steps",
        type=int,
        default=500,
        help=(
            "Save a checkpoint of the training state every X updates. These checkpoints can be used both as final"
            " checkpoints in case they are better than the last checkpoint, and are also suitable for resuming"
            " training using `--resume_from_checkpoint`."
        ),
    )
    parser.add_argument(
        "--resume_from_checkpoint",
        type=str,
        default=None,
        help=(
            "Whether training should be resumed from a previous checkpoint. Use a path saved by"
            ' `--checkpointing_steps`, or `"latest"` to automatically select the last available checkpoint.'
        ),
    )
    parser.add_argument(
        "--gradient_accumulation_steps",
        type=int,
        default=1,
        help="Number of updates steps to accumulate before performing a backward/update pass.",
    )
    parser.add_argument(
        "--gradient_checkpointing",
        action="store_true",
        help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.",
    )
    parser.add_argument(
        "--learning_rate",
        type=float,
        default=5e-6,
        help="Initial learning rate (after the potential warmup period) to use.",
    )
    parser.add_argument(
        "--scale_lr",
        action="store_true",
        default=False,
        help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.",
    )
    parser.add_argument(
        "--lr_scheduler",
        type=str,
        default="constant",
        help=(
            'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",'
            ' "constant", "constant_with_warmup"]'
        ),
    )
    parser.add_argument(
        "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler."
    )
    parser.add_argument(
        "--lr_num_cycles",
        type=int,
        default=1,
        help="Number of hard resets of the lr in cosine_with_restarts scheduler.",
    )
    parser.add_argument("--lr_power", type=float, default=1.0, help="Power factor of the polynomial scheduler.")
    parser.add_argument(
        "--use_8bit_adam", action="store_true", help="Whether or not to use 8-bit Adam from bitsandbytes."
    )
    parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.")
    parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.")
    parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.")
    parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer")
    parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.")
    parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.")
    parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.")
    parser.add_argument(
        "--hub_model_id",
        type=str,
        default=None,
        help="The name of the repository to keep in sync with the local `output_dir`.",
    )
    parser.add_argument(
        "--logging_dir",
        type=str,
        default="logs",
        help=(
            "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to"
            " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***."
        ),
    )
    parser.add_argument(
        "--allow_tf32",
        action="store_true",
        help=(
            "Whether or not to allow TF32 on Ampere GPUs. Can be used to speed up training. For more information, see"
            " https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices"
        ),
    )
    parser.add_argument(
        "--report_to",
        type=str,
        default="wandb",
        help=(
            'The integration to report the results and logs to. Supported platforms are `"tensorboard"`'
            ' (default), `"wandb"` and `"comet_ml"`. Use `"all"` to report to all integrations.'
        ),
    )
    parser.add_argument(
        "--wandb_key",
        type=str,
        default=None,
        help="If report_to is set to wandb, the API key used to log in to wandb.",
    )
    parser.add_argument(
        "--wandb_project_name",
        type=str,
        default=None,
        help="If report_to is set to wandb, the wandb project name used for log tracking.",
    )
    parser.add_argument(
        "--wandb_run_name",
        type=str,
        default=None,
        help="If report_to is set to wandb, the wandb run name used for log tracking.",
    )
    parser.add_argument(
        "--mixed_precision",
        type=str,
        default=None,
        choices=["no", "fp16", "bf16"],
        help=(
            "Whether to use mixed precision. Choose between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >="
            " 1.10 and an Nvidia Ampere GPU. Defaults to the value of the accelerate config of the current system or"
            " the flag passed with the `accelerate.launch` command. Use this argument to override the accelerate"
            " config."
        ),
    )
    parser.add_argument(
        "--prior_generation_precision",
        type=str,
        default=None,
        choices=["no", "fp32", "fp16", "bf16"],
        help=(
            "Choose prior generation precision between fp32, fp16 and bf16 (bfloat16). Bf16 requires PyTorch >="
            " 1.10 and an Nvidia Ampere GPU. Defaults to fp16 if a GPU is available, else fp32."
        ),
    )
    parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank")
    parser.add_argument(
        "--enable_xformers_memory_efficient_attention", action="store_true", help="Whether or not to use xformers."
    )

    if input_args is not None:
        args = parser.parse_args(input_args)
    else:
        args = parser.parse_args()

    env_local_rank = int(os.environ.get("LOCAL_RANK", -1))
    if env_local_rank != -1 and env_local_rank != args.local_rank:
        args.local_rank = env_local_rank

    # Sanity checks
    # if args.dataset_name is None and args.train_data_dir is None:
    #     raise ValueError("Need either a dataset name or a training folder.")
    if args.with_prior_preservation:
        if args.class_data_dir is None:
            raise ValueError("You must specify a data directory for class images.")
        if args.class_prompt is None:
            raise ValueError("You must specify prompt for class images.")
    else:
        # logger is not available yet
        if args.class_data_dir is not None:
            warnings.warn("You need not use --class_data_dir without --with_prior_preservation.")
        if args.class_prompt is not None:
            warnings.warn("You need not use --class_prompt without --with_prior_preservation.")

    return args
peft/examples/boft_dreambooth/utils/args_loader.py/0
{ "file_path": "peft/examples/boft_dreambooth/utils/args_loader.py", "repo_id": "peft", "token_count": 5745 }
229
<jupyter_start><jupyter_text>Initializing weights with LoftQ by replacing LoRA weights in-place

This notebook shows how to apply [LoftQ](https://huggingface.co/papers/2310.08659) initialization on our QLoRA model.

In short, the idea behind LoftQ is the following. When we use QLoRA, i.e. we quantize the base model with bitsandbytes to save memory and then train LoRA weights on top of this base model, we expect a certain performance gap. This is partly due to the fact that quantization is only an approximation of the "real" weights and thus introduces a quantization error. By default, LoRA weights are initialized such that they are a no-op at the start of the training. However, we can instead initialize them so that they minimize the quantization error. This is the idea behind LoftQ.

Note that this only influences the initialization of the model. Everything that follows stays the same as always.

Imports<jupyter_code>import os

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

from peft import get_peft_model, LoraConfig, replace_lora_weights_loftq<jupyter_output><empty_output><jupyter_text>Functions<jupyter_code>def get_mae(x, y):
    return (x - y).abs().mean()


def get_mse(x, y):
    return torch.pow(x - y, 2).mean()


def error_report(x, y):
    mae = get_mae(x, y)
    mse = get_mse(x, y)
    print(
        f"Mean absolute error: {mae:>8.5f}\n"
        f"Mean squared error: {mse:>8.5f}"
    )<jupyter_output><empty_output><jupyter_text>Base model

First, let's load a base model and calculate some logits. These logits are the baseline, i.e. we try to match their values as best as possible. We only need these logits for demonstration purposes. In practice, it is not necessary to load the non-quantized weights to apply LoftQ initialization.

**Note**: We have to choose a model with a `model.safetensors` file. As PyTorch checkpoints (pickle) cannot be loaded lazily, we have to use [safetensors](https://huggingface.co/docs/safetensors/index). If those don't exist for your model, save the pretrained model as a safetensors file using `save_pretrained` and pass the model path to `replace_lora_weights_loftq`.<jupyter_code>model_id = "bigscience/bloomz-560m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

s = """Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!"""
inputs = tokenizer(s.splitlines(), return_tensors="pt", padding=True)<jupyter_output><empty_output><jupyter_text>Our baseline logits:<jupyter_code>logits_base = model(**inputs).logits<jupyter_output><empty_output><jupyter_text>Normal LoRA model

Now we load the model quantized with bitsandbytes. For now, only 4bit is supported.<jupyter_code>bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config)<jupyter_output>`low_cpu_mem_usage` was None, now set to True since model is quantized.<jupyter_text>Next we create a LoRA model using PEFT and compute the logits of that model.<jupyter_code>lora_config = LoraConfig(task_type="CAUSAL_LM", target_modules="all-linear")
peft_model = get_peft_model(model, lora_config)
logits_lora = peft_model(**inputs).logits<jupyter_output>.../bitsandbytes/nn/modules.py:391: UserWarning: Input type into Linear4bit is torch.float16, but bnb_4bit_compute_dtype=torch.float32 (default). This will lead to slow inference or training speed.
  warnings.warn('Input type into Linear4bit is torch.float16, but bnb_4bit_compute_dtype=torch.float32 (default). This will lead to slow inference or training speed.')<jupyter_text>Let's check the influence of the quantization error on our logits:<jupyter_code>error_report(logits_base, logits_lora)<jupyter_output>Mean absolute error:  3.61113
Mean squared error: 36.53259<jupyter_text>LoftQ

Next, let's use LoftQ initialization and see if it helps reduce the error.<jupyter_code>replace_lora_weights_loftq(peft_model)
logits_loftq = peft_model(**inputs).logits
error_report(logits_base, logits_loftq)<jupyter_output>Mean absolute error:  3.24111
Mean squared error: 31.13725<jupyter_text>We can see that LoftQ initialization helped a little bit, but the difference is not huge.

LoftQ with callback

To help with this, let's write a small callback function and pass it to `replace_lora_weights_loftq`. Each time a weight is replaced with LoftQ-initialized weights, this callback tests whether the quantization error is actually reduced. If it is not, we roll back the replacement. This way, we keep only those replacements that improve the results.<jupyter_code># Since PEFT has modified the base model, we should reload it
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config)
peft_model = get_peft_model(model, lora_config)

current_mse = float("inf")


def my_callback(model, module_name):
    """Callable to replace weights with LoftQ if the mse is lower than the current best one."""
    global current_mse

    logits = model(**inputs).logits
    mse = get_mse(logits_base, logits)
    if mse < current_mse:
        current_mse = mse
        print(f"MSE improved for module {module_name}")
        return True
    print(f"MSE did not improve for module {module_name}")
    return False


replace_lora_weights_loftq(peft_model, callback=my_callback)
logits_loftq_callback = peft_model(**inputs).logits
error_report(logits_base, logits_loftq_callback)<jupyter_output>Mean absolute error:  1.79576
Mean squared error:  8.47075<jupyter_text>We can see that applying LoftQ with the help of the callback reduced the error quite significantly.

Applying LoftQ multiple times

It is possible to run `replace_lora_weights_loftq` multiple times on the same model when using the callback.<jupyter_code>replace_lora_weights_loftq(peft_model, callback=my_callback)
logits_loftq_callback_twice = peft_model(**inputs).logits
error_report(logits_base, logits_loftq_callback_twice)<jupyter_output>Mean absolute error:  1.76357
Mean squared error:  8.33938
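<jupyter_text>If your base checkpoint is only available as a pickle file, the note above suggests converting it to safetensors first; a sketch of that workflow, where the local path is illustrative:<jupyter_code># Sketch: write a safetensors copy of the base model, then point
# `replace_lora_weights_loftq` at it via the `model_path` argument.
model.save_pretrained("local-base-model")  # writes model.safetensors
replace_lora_weights_loftq(peft_model, model_path="local-base-model/model.safetensors")<jupyter_output><empty_output>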
peft/examples/loftq_finetuning/LoftQ_weight_replacement.ipynb/0
{ "file_path": "peft/examples/loftq_finetuning/LoftQ_weight_replacement.ipynb", "repo_id": "peft", "token_count": 2208 }
230
<jupyter_start><jupyter_text>This notebook shows how to use the adapter merging methods from `peft` and apply them to image generation models using `diffusers`.

Turn `diffusers` LoRA checkpoints into `PeftModel`<jupyter_code>!pip install diffusers accelerate transformers -U -q
!pip install git+https://github.com/huggingface/peft -q

from google.colab import userdata

TOKEN = userdata.get("HF_TOKEN")

from diffusers import UNet2DConditionModel
import torch

device = torch.accelerator.current_accelerator().type if hasattr(torch, "accelerator") else "cuda"

model_id = "stabilityai/stable-diffusion-xl-base-1.0"
unet = UNet2DConditionModel.from_pretrained(
    model_id, subfolder="unet", torch_dtype=torch.float16, use_safetensors=True, variant="fp16"
).to(device)

# So that we can populate it later.
import copy

sdxl_unet = copy.deepcopy(unet)

# Load the pipeline too.
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    model_id, variant="fp16", torch_dtype=torch.float16, unet=unet
).to(device)

# Only UNet
pipe.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy")

from peft import get_peft_model, LoraConfig

toy_peft_model = get_peft_model(sdxl_unet, pipe.unet.peft_config["toy"], adapter_name="toy")

original_state_dict = {f"base_model.model.{k}": v for k, v in pipe.unet.state_dict().items()}
toy_peft_model.load_state_dict(original_state_dict, strict=True)

toy_peft_model.push_to_hub("toy_peft_model-new", token=TOKEN)

pipe.delete_adapters("toy")
sdxl_unet.delete_adapters("toy")

pipe.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel")
pipe.set_adapters(adapter_names="pixel")

pixel_peft_model = get_peft_model(sdxl_unet, pipe.unet.peft_config["pixel"], adapter_name="pixel")

original_state_dict = {f"base_model.model.{k}": v for k, v in pipe.unet.state_dict().items()}
pixel_peft_model.load_state_dict(original_state_dict, strict=True)

pixel_peft_model.push_to_hub("pixel_peft_model-new", token=TOKEN)

del pipe, sdxl_unet, toy_peft_model, pixel_peft_model<jupyter_output><empty_output><jupyter_text>Weighted adapter inference<jupyter_code>from peft import PeftModel

base_unet = UNet2DConditionModel.from_pretrained(
    model_id, subfolder="unet", torch_dtype=torch.float16, use_safetensors=True, variant="fp16"
).to(device)

toy_id = "sayakpaul/toy_peft_model-new"
model = PeftModel.from_pretrained(base_unet, toy_id, use_safetensors=True, subfolder="toy", adapter_name="toy")
model.load_adapter("sayakpaul/pixel_peft_model-new", use_safetensors=True, subfolder="pixel", adapter_name="pixel")

# https://huggingface.co/docs/peft/main/en/package_reference/lora#peft.LoraModel.add_weighted_adapter
model.add_weighted_adapter(
    adapters=["toy", "pixel"],
    weights=[0.7, 0.3],
    combination_type="linear",
    adapter_name="toy-pixel",
)
model.set_adapters("toy-pixel")

type(model.base_model.model)

model = model.to(dtype=torch.float16, device=device)

pipe = DiffusionPipeline.from_pretrained(
    model_id, unet=model, variant="fp16", torch_dtype=torch.float16
).to(device)

prompt = "toy_face of a hacker with a hoodie, pixel art"
image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0]
image

del pipe

base_unet = UNet2DConditionModel.from_pretrained(
    model_id, subfolder="unet", torch_dtype=torch.float16, use_safetensors=True, variant="fp16"
).to(device)

toy_id = "sayakpaul/toy_peft_model-new"
model = PeftModel.from_pretrained(base_unet, toy_id, use_safetensors=True, subfolder="toy", adapter_name="toy")
model.load_adapter("sayakpaul/pixel_peft_model-new", use_safetensors=True, subfolder="pixel", adapter_name="pixel")

# https://huggingface.co/docs/peft/main/en/package_reference/lora#peft.LoraModel.add_weighted_adapter
model.add_weighted_adapter(
    adapters=["toy", "pixel"],
    weights=[0.5, 0.5],
    combination_type="cat",
    adapter_name="toy-pixel",
)
model.set_adapters("toy-pixel")

model = model.to(dtype=torch.float16, device=device)

pipe = DiffusionPipeline.from_pretrained(
    model_id, unet=model, variant="fp16", torch_dtype=torch.float16
).to(device)

prompt = "toy_face of a hacker with a hoodie, pixel art"
image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0]
image

del pipe

pipe = DiffusionPipeline.from_pretrained(
    model_id, variant="fp16", torch_dtype=torch.float16
).to(device)

prompt = "toy_face of a hacker with a hoodie, pixel art"
image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0]
image<jupyter_output>Loading pipeline components...: 100%|██████████| 7/7 [00:00<00:00, 14.10it/s]
100%|██████████| 30/30 [00:03<00:00,  9.26it/s]
peft/examples/multi_adapter_examples/multi_adapter_weighted_inference_diffusers.ipynb/0
{ "file_path": "peft/examples/multi_adapter_examples/multi_adapter_weighted_inference_diffusers.ipynb", "repo_id": "peft", "token_count": 1880 }
231
# RoAd: 3-in-1: 2D Rotary Adaptation for Efficient Finetuning, Efficient Batching and Composability

## Introduction

[RoAd](https://arxiv.org/pdf/2409.00119) is a novel method that adapts LLMs using simple 2D rotations. It is highly parameter-efficient, achieving strong performance with less than 0.1% trainable parameters. RoAd also supports efficient serving of mixed-adapter requests within a batch, incurring only element-wise computation overhead rather than costly batch matrix multiplications. Additionally, it improves model interpretability through structured and composable transformations.

## Quick start

```python
import torch
from peft import RoadConfig, get_peft_model
from transformers import AutoTokenizer, AutoModelForCausalLM
from trl import SFTConfig, SFTTrainer
from datasets import load_dataset

model = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b", device_map="cuda")
tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")
dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")

road_config = RoadConfig(
    variant="1",
)
peft_model = get_peft_model(model, road_config)
training_args = SFTConfig(dataset_text_field="text", max_seq_length=2048)
trainer = SFTTrainer(
    model=peft_model,
    args=training_args,
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
peft_model.save_pretrained("road-llama-3-8b")
```

RoAd requires a higher learning rate than LoRA and similar approaches; set it to around 1e-3.

Run the finetuning script simply by running:

```bash
python examples/road_finetuning/road_finetuning.py --base_model meta-llama/Meta-Llama-3-8B --data_path timdettmers/openassistant-guanaco
```

RoAd also supports quantization. To use 4-bit quantization try:

```bash
python examples/road_finetuning/road_finetuning.py --base_model meta-llama/Meta-Llama-3-8B --quantize
```

### Full example of the script

```bash
python road_finetuning.py \
    --base_model "PATH_TO_MODEL" \
    --data_path "PATH_TO_DATASET" \
    --output_dir "PATH_TO_OUTPUT_DIR" \
    --batch_size 1 \
    --num_epochs 3 \
    --learning_rate 1e-3 \
    --cutoff_len 512 \
    --val_set_size 500 \
    --quantize \
    --eval_step 10 \
    --save_step 100 \
    --device "cuda:0" \
    --variant 1 \
    --road_target_modules "q_proj,k_proj,v_proj,o_proj" \
    --hub_model_id "YOUR_HF_REPO" \
    --push_to_hub
```

## Use the model on 🤗

You can load and use the model as any other 🤗 models.

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("ppetrushkov/llama-2-7b-sql-road-test")
```

## Citation

```
@inproceedings{
    liao2024in,
    title={3-in-1: 2D Rotary Adaptation for Efficient Finetuning, Efficient Batching and Composability},
    author={Baohao Liao and Christof Monz},
    booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
    year={2024},
    url={https://openreview.net/forum?id=rYjYwuM6yH}
}
```
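If a checkpoint contains only the RoAd adapter weights saved via `save_pretrained` (as in the quick start above), a sketch of attaching them to the base model with PEFT, where the directory name is the one used above and otherwise illustrative:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model first, then attach the saved RoAd adapter directory.
base = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")
model = PeftModel.from_pretrained(base, "road-llama-3-8b")
```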
peft/examples/road_finetuning/README.md/0
{ "file_path": "peft/examples/road_finetuning/README.md", "repo_id": "peft", "token_count": 1042 }
232
# Copyright 2025-present the HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
All utilities related to data handling.
"""

from functools import partial
from typing import Callable

import datasets
import numpy as np
from datasets import Dataset, load_dataset


# with a token limit of 768 for query + response, we have to exclude all texts with length > 1304; this leaves 93.8%
# of the dataset
CHAR_LIMIT = 1300
# train/valid/test split -- note that evaluation takes quite long, so don't choose too large sizes for the valid set,
# since it's run multiple times during training; test is only run once at the end and thus can be larger
VALID_SIZE = 50


def get_filtered_dataset(*, ds: datasets.Dataset, print_fn: Callable[..., None]) -> Dataset:
    """Return the filtered dataset, with long queries removed.

    We determined that 99% of queries have 529 or fewer characters. Characters roughly correspond to tokens, so this
    is a good proxy. We cannot use tokens directly, as that depends on the tokenizer, which can be different for each
    model, but we want the same filter for each model.
    """
    char_lengths = [len(f"{q} {r}") for q, r in zip(ds["query"], ds["response"])]
    idx_filtered = [i for i, length in enumerate(char_lengths) if length <= CHAR_LIMIT]
    print_fn(f"Filtered dataset: {100 * len(idx_filtered) / len(ds):.1f}% of the original dataset")
    return ds.select(idx_filtered)


def get_train_valid_test_datasets(
    *, tokenizer, query_template: str, print_fn: Callable[..., None]
) -> tuple[Dataset, Dataset, Dataset]:
    """
    Return the train, valid, and test splits of the dataset.

    We cannot use ds.train_test_split(..., stratify_by_column="type") as it gives:

    > ValueError: Stratifying by column is only supported for ClassLabel column, and column type is Value.

    even after calling ds_filtered.class_encode_column("type"). Thus, we shuffle the GSM8K training set with a fixed
    seed and select the validation indices manually.
    """
    metamath = load_dataset("meta-math/MetaMathQA")["train"]
    metamath = get_filtered_dataset(ds=metamath, print_fn=print_fn)

    # gsm8k does not need to be filtered as query and response are short enough
    gsm8k = load_dataset("openai/gsm8k", "main")
    gsm8k = gsm8k.rename_columns({"question": "query", "answer": "response"})
    gsm8k_train = gsm8k["train"]
    gsm8k_test = gsm8k["test"]

    np.random.seed(0)
    indices = np.arange(len(gsm8k_train))
    np.random.shuffle(indices)
    idx_valid = indices[:VALID_SIZE]

    ds_train = metamath
    ds_valid = gsm8k_train.select(idx_valid)
    ds_test = gsm8k_test

    print_fn(f"Train size: {len(ds_train)}")
    print_fn(f"Valid size: {len(ds_valid)}")
    print_fn(f"Test size: {len(ds_test)}")

    tokenize_with_answer_ = partial(tokenize_with_answer, tokenizer=tokenizer, template=query_template)
    tokenize_wo_answer_ = partial(tokenize_wo_answer, tokenizer=tokenizer, template=query_template)
    ds_train = ds_train.map(tokenize_with_answer_, batched=True).remove_columns(["type", "query", "original_question"])
    ds_valid = ds_valid.map(tokenize_wo_answer_, batched=True).remove_columns(["query"])
    ds_test = ds_test.map(tokenize_wo_answer_, batched=True).remove_columns(["query"])

    return ds_train, ds_valid, ds_test


def tokenize_with_answer(samples, tokenizer, template):
    queries = [
        template.format(query=sample) + answer for sample, answer in zip(samples["query"], samples["response"])
    ]
    tokenized = tokenizer(queries)
    tokenized["input_ids"] = [input_ids[: tokenizer.model_max_length] for input_ids in tokenized["input_ids"]]
    tokenized["attention_mask"] = [
        input_ids[: tokenizer.model_max_length] for input_ids in tokenized["attention_mask"]
    ]
    return tokenized


def tokenize_wo_answer(samples, tokenizer, template):
    queries = [template.format(query=sample) for sample in samples["query"]]
    tokenized = tokenizer(queries)
    tokenized["input_ids"] = [input_ids[: tokenizer.model_max_length] for input_ids in tokenized["input_ids"]]
    tokenized["attention_mask"] = [
        input_ids[: tokenizer.model_max_length] for input_ids in tokenized["attention_mask"]
    ]
    return tokenized
peft/method_comparison/MetaMathQA/data.py/0
{ "file_path": "peft/method_comparison/MetaMathQA/data.py", "repo_id": "peft", "token_count": 1625 }
233
{ "auto_mapping": null, "base_model_name_or_path": null, "exclude_modules": null, "inference_mode": false, "modules_to_save": null, "peft_type": "LN_TUNING", "revision": null, "target_modules": null, "task_type": null }
peft/method_comparison/MetaMathQA/experiments/ln_tuning/llama-3.2-3B-default/adapter_config.json/0
{ "file_path": "peft/method_comparison/MetaMathQA/experiments/ln_tuning/llama-3.2-3B-default/adapter_config.json", "repo_id": "peft", "token_count": 100 }
234
{ "auto_mapping": null, "base_model_name_or_path": null, "bias": "none", "fan_in_fan_out": false, "inference_mode": false, "init_weights": true, "layers_pattern": null, "layers_to_transform": null, "modules_to_save": null, "peft_type": "RANDLORA", "projection_prng_key": 0, "r": 32, "randlora_alpha": 640, "randlora_dropout": 0.0, "revision": null, "save_projection": true, "sparse": false, "target_modules": null, "task_type": null, "very_sparse": false }
peft/method_comparison/MetaMathQA/experiments/randlora/llama-3.2-3B-default/adapter_config.json/0
{ "file_path": "peft/method_comparison/MetaMathQA/experiments/randlora/llama-3.2-3B-default/adapter_config.json", "repo_id": "peft", "token_count": 215 }
235
""" Utility to clean cache files that exceed a specific time in days according to their last access time recorded in the cache. Exit code: - 1 if no candidates are found - 0 if candidates are found Deletion can be enabled by passing `-d` parameter, otherwise it will only list the candidates. """ import sys from datetime import datetime as dt from huggingface_hub import scan_cache_dir def find_old_revisions(scan_results, max_age_days=30): """Find commit hashes of objects in the cache. These objects need a last access time that is above the passed `max_age_days` parameter. Returns an empty list if no objects are found. Time measurement is based of the current time and the recorded last access tiem in the cache. """ now = dt.now() revisions = [(i.revisions, i.last_accessed) for i in scan_results.repos] revisions_ages = [(rev, (now - dt.fromtimestamp(ts_access)).days) for rev, ts_access in revisions] delete_candidates = [rev for rev, age in revisions_ages if age > max_age_days] hashes = [n.commit_hash for rev in delete_candidates for n in rev] return hashes def delete_old_revisions(scan_results, delete_candidates, do_delete=False): delete_operation = scan_results.delete_revisions(*delete_candidates) print(f"Would free {delete_operation.expected_freed_size_str}") print(f"Candidates: {delete_candidates}") if do_delete: print("Deleting now.") delete_operation.execute() else: print("Not deleting, pass the -d flag.") if __name__ == "__main__": from argparse import ArgumentParser parser = ArgumentParser() parser.add_argument("-a", "--max-age", type=int, default=30, help="Max. age in days items in the cache may have.") parser.add_argument( "-d", "--delete", action="store_true", help=( "Delete mode; Really delete items if there are candidates. Exit code = 0 when we found something to delete, 1 " "otherwise." ), ) args = parser.parse_args() scan_results = scan_cache_dir() delete_candidates = find_old_revisions(scan_results, args.max_age) if not delete_candidates: print("No delete candidates found, not deleting anything.") sys.exit(1) delete_old_revisions(scan_results, delete_candidates, do_delete=args.delete)
peft/scripts/ci_clean_cache.py/0
{ "file_path": "peft/scripts/ci_clean_cache.py", "repo_id": "peft", "token_count": 807 }
236
# Copyright 2025-present the HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This module contains the implementation of the LoRA-FA optimizer.
"""

from __future__ import annotations

import math
from collections.abc import Iterable
from typing import Callable

import torch
import torch.nn as nn
from accelerate.utils.imports import is_bf16_available
from torch import autocast
from torch.optim import Optimizer

from ..peft_model import PeftModel
from ..utils.other import infer_device


class LoraFAOptimizer(Optimizer):
    """
    Implements the LoRA-FA optimizer designed specifically for training Low-Rank Adaptation (LoRA) parameters
    efficiently. Note that LoraFAOptimizer is based on adamw-hf in transformers, with only the LoRA part modified.
    Without LoRA it will fall back to adamw-hf.

    Args:
        params (Iterable[nn.parameter.Parameter]): Parameters to optimize.
        lr (float, optional): Learning rate (default: 1e-3).
        betas (Tuple[float, float], optional): Coefficients for computing running averages of gradient and squared
            gradient (default: (0.9, 0.999)).
        eps (float, optional): Term added to denominator to improve numerical stability (default: 1e-6).
        weight_decay (float, optional): Weight decay (L2 penalty) (default: 0.0).
        correct_bias (bool, optional): Whether to apply bias correction as in original Adam (default: True).

    Args in sub-function step:
        closure (Callable, optional): A closure that reevaluates the model and returns the loss.

    Reference:
        - LoRA-FA: https://huggingface.co/papers/2308.03303
    """

    def __init__(
        self,
        params: Iterable[nn.parameter.Parameter],
        lr: float = 1e-3,
        betas: tuple[float, float] = (0.9, 0.999),
        eps: float = 1e-6,
        weight_decay: float = 0.0,
        correct_bias: bool = True,
    ):
        if lr < 0.0:
            raise ValueError(f"Invalid learning rate: {lr} - should be >= 0.0")
        if not 0.0 <= betas[0] < 1.0:
            raise ValueError(f"Invalid beta parameter: {betas[0]} - should be in [0.0, 1.0)")
        if not 0.0 <= betas[1] < 1.0:
            raise ValueError(f"Invalid beta parameter: {betas[1]} - should be in [0.0, 1.0)")
        if not 0.0 <= eps:
            raise ValueError(f"Invalid epsilon value: {eps} - should be >= 0.0")
        defaults = {
            "lr": lr,
            "betas": betas,
            "eps": eps,
            "weight_decay": weight_decay,
            "correct_bias": correct_bias,
        }
        super().__init__(params, defaults)

    @torch.no_grad()
    def step(self, closure: Callable = None):
        """
        Performs a single optimization step.

        Arguments:
            closure (`Callable`, *optional*): A closure that reevaluates the model and returns the loss.
        """
        loss = None
        if closure is not None:
            loss = closure()

        for group in self.param_groups:
            scaling_factor = group["scaling_factor"]
            param_list = []
            name_list = []
            for p, n in zip(group["params"], group["names"]):
                # Skip non-lora no-grad module, since we need lora_A which is no-grad.
                if "lora" not in n and p.grad is None:
                    continue
                grad = p.grad
                if "lora" in n:
                    param_list.append(p)
                    name_list.append(n)
                    if len(param_list) == 2:
                        name = n[: n.find("lora")] + "lora"
                    elif len(param_list) == 1:
                        continue
                else:
                    name = n

                # param_list contains a pair of A and B adapters
                # i.e., param_list -> [A, B]
                state = self.state[name]
                # State initialization
                if len(state) == 0:
                    if len(param_list) == 2:
                        state["step"] = 0
                        # Exponential moving average of gradient values
                        state["exp_avg_B"] = torch.zeros_like(param_list[1])
                        # Exponential moving average of squared gradient values
                        state["exp_avg_sq_B"] = torch.zeros_like(param_list[1])
                    else:
                        state["step"] = 0
                        # Exponential moving average of gradient values
                        state["exp_avg"] = torch.zeros_like(p)
                        # Exponential moving average of squared gradient values
                        state["exp_avg_sq"] = torch.zeros_like(p)

                # Below is the LoRA-FA part
                # 1. In this part, we optimize the gradient of B as:
                #    g^B = \left(\frac{r}{\alpha}\right)^2 (A^\top A)^{-1} g_{\text{LoRA-FA}}^B
                #    to minimize the objective described below:
                #    \min_{g^B} \|\hat{g}_\text{LoRA-FA} - g\|_F^2
                # 2. After the gradient of B is ready, update the optimizer state
                if len(param_list) == 2:
                    A = param_list[0]
                    B = param_list[1]
                    grad_B_orin = B.grad

                    # projection
                    delta = 1e-8
                    # computing the inverse matrix
                    AA_T = A @ A.T
                    AA_T_inv = torch.linalg.pinv(AA_T + delta * torch.eye(A.shape[0]).to(A.device))

                    device_type = infer_device()
                    if is_bf16_available():
                        with autocast(device_type=device_type, dtype=torch.bfloat16):
                            grad_B = (1 / scaling_factor**2) * (grad_B_orin @ AA_T_inv)
                    else:
                        grad_B = (1 / scaling_factor**2) * (grad_B_orin @ AA_T_inv)
                    if grad_B.dtype != B.grad.dtype:
                        grad_B = grad_B.to(B.grad.dtype)

                    exp_avg_B, exp_avg_sq_B = state["exp_avg_B"], state["exp_avg_sq_B"]
                    beta1, beta2 = group["betas"]
                    state["step"] += 1

                    exp_avg_B.mul_(beta1).add_(grad_B, alpha=(1.0 - beta1))
                    exp_avg_sq_B.mul_(beta2).addcmul_(grad_B, grad_B, value=1.0 - beta2)
                    denom_B = exp_avg_sq_B.sqrt().add_(group["eps"])

                    step_size = group["lr"]
                    if group["correct_bias"]:  # No bias correction for Bert
                        bias_correction1 = 1.0 - beta1 ** state["step"]
                        bias_correction2 = 1.0 - beta2 ** state["step"]
                        step_size = step_size * math.sqrt(bias_correction2) / bias_correction1

                    B.addcdiv_(exp_avg_B, denom_B, value=-step_size)
                    if group["weight_decay"] > 0.0:
                        B.add_(B, alpha=(-group["lr"] * group["weight_decay"]))
                    param_list = []
                    name_list = []
                # Below is the original AdamW
                else:
                    exp_avg, exp_avg_sq = state["exp_avg"], state["exp_avg_sq"]
                    beta1, beta2 = group["betas"]
                    state["step"] += 1

                    # Decay the first and second moment running average coefficient
                    # In-place operations to update the averages at the same time
                    exp_avg.mul_(beta1).add_(grad, alpha=(1.0 - beta1))
                    exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1.0 - beta2)
                    denom = exp_avg_sq.sqrt().add_(group["eps"])

                    step_size = group["lr"]
                    if group["correct_bias"]:  # No bias correction for Bert
                        bias_correction1 = 1.0 - beta1 ** state["step"]
                        bias_correction2 = 1.0 - beta2 ** state["step"]
                        step_size = step_size * math.sqrt(bias_correction2) / bias_correction1

                    p.addcdiv_(exp_avg, denom, value=-step_size)

                    # Just adding the square of the weights to the loss function is *not*
                    # the correct way of using L2 regularization/weight decay with Adam,
                    # since that will interact with the m and v parameters in strange ways.
                    #
                    # Instead we want to decay the weights in a manner that doesn't interact
                    # with the m/v parameters. This is equivalent to adding the square
                    # of the weights to the loss with plain (non-momentum) SGD.
                    # Add weight decay at the end (fixed version)
                    if group["weight_decay"] > 0.0:
                        p.add_(p, alpha=(-group["lr"] * group["weight_decay"]))

        return loss


def create_lorafa_optimizer(
    model: PeftModel, r: int, lora_alpha: int, lr: float, weight_decay: float = 0.0, use_rslora: bool = False
) -> Optimizer:
    """
    Helper function to instantiate a LoRA-FA optimizer specifically configured for a given model using the LoRA
    method.

    This function will:
    - Disable gradient updates for the "lora_A" parameters (these are frozen during LoRA-FA training).
    - Compute the scaling factor based on the provided `lora_alpha` and rank `r` for proper gradient projection.
    - Create and configure parameter groups for the optimizer including the specified learning rate, weight decay,
      and additional optimizer options.

    For hyper-params, LoRA-FA uses the same hyper-params as AdamW, except for the LoRA hyper-params (r, lora_alpha,
    use_rslora). One can always use the same hyper-params, such as lr and weight_decay, as AdamW in LoRA tuning.

    Args:
        model (PeftModel): The model containing LoRA-adapted parameters.
        r (int): Rank of the LoRA decomposition.
        lora_alpha (int): Scaling factor for LoRA parameterization.
        lr (float): Learning rate for optimizer updates.
        weight_decay (float): Weight decay for AdamW.
        use_rslora (bool): Whether to use rsLoRA. With rsLoRA, the LoRA scaling factor becomes
            lora_alpha / math.sqrt(r) instead of lora_alpha / r.

    Returns:
        Optimizer: Configured LoRA-FA optimizer instance ready for training.
    """
    for name, param in model.named_parameters():
        if "lora_A" in name:
            param.requires_grad_(False)
    lora_scaling = lora_alpha / math.sqrt(r) if use_rslora else lora_alpha / r
    param_groups = [
        {
            "params": model.parameters(),
            "lr": lr,
            "names": [name for name, _ in model.named_parameters()],
            "scaling_factor": lora_scaling,
            "betas": (0.9, 0.999),
            "weight_decay": weight_decay,
        }
    ]
    return LoraFAOptimizer(param_groups)
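A hedged usage sketch of the helper above; the base model, target modules, and hyper-parameters are illustrative. The helper freezes `lora_A` and returns an optimizer that can be handed to a `transformers.Trainer` via its `optimizers` argument:

```python
# Sketch: build a LoRA model, create the LoRA-FA optimizer, and (optionally)
# pass it to a Trainer so the default AdamW is skipped. Names are illustrative.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model
from peft.optimizers import create_lorafa_optimizer

base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
config = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"])
model = get_peft_model(base, config)

optimizer = create_lorafa_optimizer(model=model, r=16, lora_alpha=32, lr=7e-5)
# e.g. Trainer(..., optimizers=(optimizer, None))
```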
peft/src/peft/optimizers/lorafa.py/0
{ "file_path": "peft/src/peft/optimizers/lorafa.py", "repo_id": "peft", "token_count": 5476 }
237
# Copyright 2023-present the HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import inspect
from typing import Optional

import torch
import torch.nn as nn


def llama_rotate_half(x: torch.Tensor) -> torch.Tensor:
    """
    Rotate half the hidden dims of the input.

    This function was duplicated verbatim from:
    https://github.com/huggingface/transformers/blob/1de8ce9ee1191ba761a593ac15d9ccbf5851bfc5/src/transformers/models/llama/modeling_llama.py#L126

    This was done to eliminate the Llama transformers implementation as a dependency of this file. Note that some
    other functions were also adapted from the transformers implementation but were modified.
    """
    x1 = x[..., : x.shape[-1] // 2]
    x2 = x[..., x.shape[-1] // 2 :]
    return torch.cat((-x2, x1), dim=-1)


def llama_apply_rotary_pos_emb(q, cos, sin, position_ids):
    """
    Apply rotary position embedding to query states in the Llama model.

    This function was adapted from:
    https://github.com/huggingface/transformers/blob/1de8ce9ee1191ba761a593ac15d9ccbf5851bfc5/src/transformers/models/llama/modeling_llama.py#L133

    It was modified to remove unnecessary processing of key states. The method is compatible with transformers <=
    4.34.2 and also with the latest version (>=4.35).
    """
    # In previous transformers versions, cos/sin cached had a 4D shape
    if len(cos.shape) == 4:
        gather_indices = position_ids[:, None, :, None]  # [bs, 1, seq_len, 1]
        gather_indices = gather_indices.repeat(1, cos.shape[1], 1, cos.shape[3])
        cos = torch.gather(cos.repeat(gather_indices.shape[0], 1, 1, 1), 2, gather_indices)
        sin = torch.gather(sin.repeat(gather_indices.shape[0], 1, 1, 1), 2, gather_indices)
    # In the new version, it is 2D so we fall back to the new implementation
    # https://github.com/huggingface/transformers/blame/eef7ea98c31a333bacdc7ae7a2372bde772be8e4/src/transformers/models/llama/modeling_llama.py#L222-L226
    else:
        cos = cos[position_ids].unsqueeze(1)
        sin = sin[position_ids].unsqueeze(1)
    q_embed = (q * cos) + (llama_rotate_half(q) * sin)
    return q_embed


def llama_compute_query_states(model: nn.Module, **kwargs) -> torch.Tensor:
    """
    Compute query states for Llama models specifically. They need to be recomputed as the forward() method of the
    original LlamaModel in the transformers library does not return them. See the related discussion in the PR:
    https://github.com/huggingface/peft/pull/268
    """
    hidden_states = kwargs.get("hidden_states")
    position_ids = kwargs.get("position_ids")
    past_key_value = kwargs.get("past_key_value")
    bsz, q_len, _ = hidden_states.size()

    if hasattr(model, "num_heads"):
        # TODO: remove this clause after 2026-01-01
        num_heads = model.num_heads
    else:  # changed in https://github.com/huggingface/transformers/pull/35235
        num_heads = model.config.num_attention_heads

    query_states = model.q_proj(hidden_states).view(bsz, q_len, num_heads, model.head_dim).transpose(1, 2)

    factor = model.k_proj.in_features // model.k_proj.out_features
    value_states = model.v_proj(hidden_states).view(bsz, q_len, (num_heads // factor), model.head_dim).transpose(1, 2)

    seq_len = q_len

    if past_key_value is not None:
        if isinstance(past_key_value, tuple):
            # for transformers <= 4.35
            seq_len += past_key_value[0].shape[-2]
        else:
            # since transformers 4.36, this is a DynamicCache instance
            seq_len += past_key_value.get_seq_length(model.layer_idx)

    # model.rotary_emb is deprecated and will be removed in transformers > 4.47.0. Instead, the position embeddings
    # are passed via the kwargs
    if "position_embeddings" in kwargs:
        cos, sin = kwargs["position_embeddings"]
        cos = cos.unsqueeze(1)
        sin = sin.unsqueeze(1)
        return (query_states * cos) + (llama_rotate_half(query_states) * sin)

    # For transformers > 4.37.2, `position_ids` became a required argument in the rotary embedding's forward pass.
    if "position_ids" not in inspect.signature(model.rotary_emb.forward).parameters:
        # TODO we assume that position_ids is not None here, not sure if that is safe but the old code also did that
        cos, sin = model.rotary_emb(value_states, seq_len=seq_len)
        return llama_apply_rotary_pos_emb(query_states, cos, sin, position_ids)

    past_seen_tokens = 0
    if position_ids is None:
        # Compute position_ids, since they are required for transformers > 4.37.2
        if past_key_value is None:
            new_cache_positions = torch.arange(q_len, q_len + q_len, device=value_states.device)
        else:
            past_seen_tokens = past_key_value.get_usable_length(q_len, model.layer_idx)
            new_cache_positions = torch.arange(past_seen_tokens, past_seen_tokens + q_len, device=value_states.device)
        position_ids = new_cache_positions.unsqueeze(0)

    rotary_emb_kwargs = {"position_ids": position_ids}
    # The `seq_len` argument has been officially removed in transformers >= 4.39.0
    if "seq_len" in inspect.signature(model.rotary_emb.forward).parameters:
        rotary_emb_kwargs["seq_len"] = q_len + past_seen_tokens

    cos, sin = model.rotary_emb(value_states, **rotary_emb_kwargs)

    # For batched inference, unsqueeze on the correct dim
    # since: https://github.com/huggingface/transformers/pull/29109
    if len(cos.shape) == 3:
        cos = cos.unsqueeze(1)
        sin = sin.unsqueeze(1)

    return (query_states * cos) + (llama_rotate_half(query_states) * sin)


def gpt2_compute_query_states(
    model: nn.Module,
    hidden_states: Optional[tuple[torch.FloatTensor]],
    encoder_hidden_states: Optional[torch.Tensor] = None,
) -> torch.Tensor:
    """
    Compute query states for GPT2 models. They need to be recomputed as the forward() method of GPT2 in the
    transformers library does not return them. See the related discussion in the PR:
    """
    if encoder_hidden_states is not None:
        if not hasattr(model, "q_attn"):
            raise ValueError(
                f"If `{model.__class__.__name__}` is used as cross attention, the weights `q_attn` must be defined. "
                "Please make sure to instantiate it with `GPT2Attention(..., is_cross_attention=True)`."
            )
        query_states = model.q_attn(hidden_states)
    else:
        query_states, _, _ = model.c_attn(hidden_states).split(model.split_size, dim=2)

    shape_q = (*query_states.shape[:-1], -1, model.head_dim)
    query_states = query_states.view(shape_q).transpose(1, 2)
    return query_states


def is_adaption_prompt_trainable(params: str) -> bool:
    """Return True if module is trainable under adaption prompt fine-tuning."""
    return params.split(".")[-1].startswith("adaption_")
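As a quick standalone illustration of the rotation helper above (no model required), note that `llama_rotate_half` maps `[x1, x2]` to `[-x2, x1]` along the last dimension, so applying it twice negates the input:

```python
import torch

# Sanity check: rotate_half applied twice yields cat(-x1, -x2) == -x.
x = torch.arange(8.0).reshape(2, 4)
assert torch.equal(llama_rotate_half(llama_rotate_half(x)), -x)
```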
peft/src/peft/tuners/adaption_prompt/utils.py/0
{ "file_path": "peft/src/peft/tuners/adaption_prompt/utils.py", "repo_id": "peft", "token_count": 2857 }
238
# Copyright 2025-present the HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import torch
from torch.autograd import Function
from torch.fft import fft, ifft


def get_circulant_fast(w):
    m, n, b = w.shape
    x = torch.eye(n * b, dtype=w.dtype, device=w.device)
    x = x.reshape(*x.shape[:-1], n, b)
    x = torch.einsum("...nb,mnb->...mb", ifft(x), fft(w))
    x = fft(x).real.flatten(start_dim=1).T
    return x


class BlockCircularConvolution(Function):
    @staticmethod
    def forward(ctx, x, w):
        m, n, b = w.shape
        x = x.reshape(*x.shape[:-1], n, b)
        ctx.save_for_backward(x, w)
        x = torch.einsum("...nb,mnb->...mb", ifft(x), fft(w))
        x = fft(x).real
        x = x.reshape(*x.shape[:-2], -1)
        return x

    @staticmethod
    def backward(ctx, grad_output):
        x, w = ctx.saved_tensors
        m, n, b = w.shape
        grad_output = grad_output.reshape(*grad_output.shape[:-1], m, b)
        grad_output_fft = fft(grad_output)
        x_grad = fft(torch.einsum("...mb,mnb->...nb", grad_output_fft, ifft(w))).real
        x_grad = x_grad.reshape(*x_grad.shape[:-2], -1)
        w_grad = fft(torch.einsum("...mb,...nb->mnb", grad_output_fft, ifft(x))).real
        return x_grad, w_grad
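A hedged numerical check of the relationship between the two utilities above, assuming the module's definitions are in scope; `get_circulant_fast(w)` materializes the block-circulant matrix whose transpose the fused FFT forward multiplies by, so the two paths should agree (shapes here are illustrative):

```python
import torch

# Compare the fused FFT path against a dense matmul with the materialized
# block-circulant matrix.
m, n, b = 3, 4, 5
w = torch.randn(m, n, b)
x = torch.randn(2, n * b)

dense = get_circulant_fast(w)                    # shape (m*b, n*b)
out_fft = BlockCircularConvolution.apply(x, w)   # shape (2, m*b)
out_dense = x @ dense.T
assert torch.allclose(out_fft, out_dense, atol=1e-4)
```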
peft/src/peft/tuners/c3a/utils.py/0
{ "file_path": "peft/src/peft/tuners/c3a/utils.py", "repo_id": "peft", "token_count": 747 }
239
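As a sanity check on the block-circulant machinery above, the following sketch (an assumed usage, not part of the file) verifies that the FFT-based `BlockCircularConvolution.apply` matches a plain matrix multiply against the dense operator materialized by `get_circulant_fast`.

```py
import torch

from peft.tuners.c3a.utils import BlockCircularConvolution, get_circulant_fast

m, n, b = 3, 2, 4  # weight of shape (m, n, b): out_features = m * b, in_features = n * b
w = torch.randn(m, n, b, requires_grad=True)
x = torch.randn(5, n * b)

# FFT-based forward pass through the custom autograd Function
y_fft = BlockCircularConvolution.apply(x, w)

# Dense reference: get_circulant_fast returns the (m * b, n * b) matrix
dense = get_circulant_fast(w.detach())
y_dense = x @ dense.T

assert torch.allclose(y_fft, y_dense, atol=1e-4)
print(y_fft.shape)  # torch.Size([5, 12])
```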
# Copyright 2023-present the HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import annotations

import re
import warnings
from dataclasses import asdict, replace
from enum import Enum
from typing import Optional

import torch
from torch import nn
from transformers.pytorch_utils import Conv1D

from peft.import_utils import is_bnb_4bit_available, is_bnb_available
from peft.tuners.tuners_utils import BaseTuner, BaseTunerLayer, check_target_module_exists
from peft.utils import (
    TRANSFORMERS_MODELS_TO_IA3_FEEDFORWARD_MODULES_MAPPING,
    TRANSFORMERS_MODELS_TO_IA3_TARGET_MODULES_MAPPING,
    ModulesToSaveWrapper,
    _freeze_adapter,
    _get_submodules,
)

from .layer import Conv2d, Conv3d, IA3Layer, Linear


class IA3Model(BaseTuner):
    """
    Creates an Infused Adapter by Inhibiting and Amplifying Inner Activations ((IA)^3) model from a pretrained
    transformers model. The method is described in detail in https://huggingface.co/papers/2205.05638

    Args:
        model ([`~transformers.PreTrainedModel`]): The model to be adapted.
        config ([`IA3Config`]): The configuration of the (IA)^3 model.
        adapter_name (`str`): The name of the adapter, defaults to `"default"`.
        low_cpu_mem_usage (`bool`, `optional`, defaults to `False`):
            Create empty adapter weights on meta device. Useful to speed up the loading process.

    Returns:
        `torch.nn.Module`: The (IA)^3 model.

    Example:

        ```py
        >>> from transformers import AutoModelForSeq2SeqLM

        >>> from peft import IA3Model, IA3Config

        >>> config = IA3Config(
        ...     peft_type="IA3",
        ...     task_type="SEQ_2_SEQ_LM",
        ...     target_modules=["k", "v", "w0"],
        ...     feedforward_modules=["w0"],
        ... )

        >>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
        >>> ia3_model = IA3Model(config, model)
        ```

    **Attributes**:
        - **model** ([`~transformers.PreTrainedModel`]) -- The model to be adapted.
        - **peft_config** ([`IA3Config`]): The configuration of the (IA)^3 model.
""" prefix: str = "ia3_" @staticmethod def _create_new_module(ia3_config, adapter_name, target, **kwargs): # avoid eager bnb import if is_bnb_available(): import bitsandbytes as bnb from .bnb import Linear8bitLt if is_bnb_4bit_available(): from .bnb import Linear4bit loaded_in_8bit = kwargs.pop("loaded_in_8bit", False) loaded_in_4bit = kwargs.pop("loaded_in_4bit", False) is_feedforward = kwargs.pop("is_feedforward", False) if isinstance(target, BaseTunerLayer): target_base_layer = target.get_base_layer() else: target_base_layer = target if loaded_in_8bit and isinstance(target_base_layer, bnb.nn.Linear8bitLt): eightbit_kwargs = kwargs.copy() eightbit_kwargs.update( { "has_fp16_weights": target_base_layer.state.has_fp16_weights, "threshold": target_base_layer.state.threshold, "index": target_base_layer.index, } ) new_module = Linear8bitLt(target, adapter_name, is_feedforward=is_feedforward, **eightbit_kwargs) elif loaded_in_4bit and isinstance(target_base_layer, bnb.nn.Linear4bit): fourbit_kwargs = kwargs.copy() fourbit_kwargs.update( { "compute_dtype": target_base_layer.compute_dtype, "compress_statistics": target_base_layer.weight.compress_statistics, "quant_type": target_base_layer.weight.quant_type, } ) new_module = Linear4bit(target, adapter_name, is_feedforward=is_feedforward, **fourbit_kwargs) elif isinstance(target, torch.nn.Conv2d): new_module = Conv2d(target, adapter_name, is_feedforward=is_feedforward, **kwargs) elif isinstance(target, torch.nn.Conv3d): new_module = Conv3d(target, adapter_name, is_feedforward=is_feedforward, **kwargs) elif isinstance(target_base_layer, torch.nn.Linear): if kwargs["fan_in_fan_out"]: warnings.warn( "fan_in_fan_out is set to True but the target module is `torch.nn.Linear`. " "Setting fan_in_fan_out to False." ) kwargs["fan_in_fan_out"] = ia3_config.fan_in_fan_out = False new_module = Linear(target, adapter_name, is_feedforward=is_feedforward, **kwargs) elif isinstance(target_base_layer, Conv1D): if not kwargs["fan_in_fan_out"]: warnings.warn( "fan_in_fan_out is set to False but the target module is `Conv1D`. Setting fan_in_fan_out to True." ) kwargs["fan_in_fan_out"] = ia3_config.fan_in_fan_out = True new_module = Linear( target, adapter_name, is_feedforward=is_feedforward, is_target_conv_1d_layer=True, **kwargs ) else: raise ValueError( f"Target module {target} is not supported. " f"Currently, only `torch.nn.Linear`, `torch.nn.Conv2d`, and `Conv1D` are supported." 
) return new_module @staticmethod def _check_target_module_exists(ia3_config, key): return check_target_module_exists(ia3_config, key) def _mark_only_adapters_as_trainable(self, model: nn.Module) -> None: for n, p in model.named_parameters(): if self.prefix not in n: p.requires_grad = False def _create_and_replace( self, ia3_config, adapter_name, target, target_name, parent, current_key, ): # check if target module is in feedforward_modules is_feedforward = self._check_target_module_feedforward(ia3_config, current_key) kwargs = { "fan_in_fan_out": ia3_config.fan_in_fan_out, "init_ia3_weights": ia3_config.init_ia3_weights, "is_feedforward": is_feedforward, "loaded_in_8bit": getattr(self.model, "is_loaded_in_8bit", False), "loaded_in_4bit": getattr(self.model, "is_loaded_in_4bit", False), } if isinstance(target, IA3Layer): target.update_layer( adapter_name, ia3_config.init_ia3_weights, ) else: new_module = self._create_new_module(ia3_config, adapter_name, target, **kwargs) if adapter_name not in self.active_adapters: # adding an additional adapter: it is not automatically trainable new_module.requires_grad_(False) self._replace_module(parent, target_name, new_module, target) @staticmethod def _check_target_module_feedforward(ia3_config, key) -> bool: """ A helper private method that checks if the target module `key` matches with a feedforward module specified in `ia3_config` """ if isinstance(ia3_config.feedforward_modules, str): is_feedforward = bool(re.fullmatch(ia3_config.feedforward_modules, key)) else: is_feedforward = any(key.endswith(target_key) for target_key in ia3_config.feedforward_modules) return is_feedforward def _replace_module(self, parent, child_name, new_module, child): setattr(parent, child_name, new_module) # child layer wraps the original module, unpack it if hasattr(child, "base_layer"): child = child.base_layer # layers with base_layer don't need the weight to be copied, as they have a reference already if not hasattr(new_module, "base_layer"): new_module.weight = child.weight if hasattr(child, "bias"): new_module.bias = child.bias if getattr(child, "state", None) is not None: if hasattr(new_module, "base_layer"): new_module.base_layer.state = child.state else: new_module.state = child.state new_module.to(child.weight.device) meta = torch.device("meta") # dispatch to correct device for name, module in new_module.named_modules(): if self.prefix in name: if not any(p.device == meta for p in module.parameters()): module.to(child.weight.device) def __getattr__(self, name: str): """Forward missing attributes to the wrapped module.""" try: return super().__getattr__(name) # defer to nn.Module's logic except AttributeError: if name == "model": # see #1892: prevent infinite recursion if class is not initialized raise return getattr(self.model, name) def get_peft_config_as_dict(self, inference: bool = False): config_dict = {} for key, value in self.peft_config.items(): config = {k: v.value if isinstance(v, Enum) else v for k, v in asdict(value).items()} if inference: config["inference_mode"] = True config_dict[key] = config return config def _set_adapter_layers(self, enabled=True): for module in self.model.modules(): if isinstance(module, (IA3Layer, ModulesToSaveWrapper)): module.enable_adapters(enabled) def enable_adapter_layers(self) -> None: """Enable all adapters. Call this if you have previously disabled all adapters and want to re-enable them. """ self._set_adapter_layers(enabled=True) def disable_adapter_layers(self) -> None: """Disable all adapters. 
When disabling all adapters, the model output corresponds to the output of the base model. """ self._set_adapter_layers(enabled=False) def set_adapter(self, adapter_name: str | list[str]) -> None: """Set the active adapter(s). Additionally, this function will set the specified adapters to trainable (i.e., requires_grad=True). If this is not desired, use the following code. ```py >>> for name, param in model_peft.named_parameters(): ... if ...: # some check on name (ex. if 'lora' in name) ... param.requires_grad = False ``` Args: adapter_name (`str` or `list[str]`): Name of the adapter(s) to be activated. """ for module in self.model.modules(): if isinstance(module, IA3Layer): if module.merged: warnings.warn("Adapter cannot be set when the model is merged. Unmerging the model first.") module.unmerge() module.set_adapter(adapter_name) self.active_adapter = adapter_name @staticmethod def _prepare_adapter_config(peft_config, model_config): if peft_config.target_modules is None: if model_config["model_type"] not in TRANSFORMERS_MODELS_TO_IA3_TARGET_MODULES_MAPPING: raise ValueError("Please specify `target_modules` in `peft_config`") peft_config.target_modules = set( TRANSFORMERS_MODELS_TO_IA3_TARGET_MODULES_MAPPING[model_config["model_type"]] ) if peft_config.feedforward_modules is None: if model_config["model_type"] not in TRANSFORMERS_MODELS_TO_IA3_FEEDFORWARD_MODULES_MAPPING: raise ValueError("Please specify `feedforward_modules` in `peft_config`") peft_config.feedforward_modules = set( TRANSFORMERS_MODELS_TO_IA3_FEEDFORWARD_MODULES_MAPPING[model_config["model_type"]] ) return peft_config def _unload_and_optionally_merge( self, merge: bool = True, safe_merge: bool = False, adapter_names: Optional[list[str]] = None ): r""" This method merges the (IA)^3 layers into the base model. This is needed if someone wants to use the base model as a standalone model. Args: safe_merge (`bool`, `optional`, defaults to `False`): If True, the merge operation will be performed in a copy of the original weights and check for NaNs before merging the weights. This is useful if you want to check if the merge operation will produce NaNs. Defaults to `False`. adapter_names (`List[str]`, *optional*): The list of adapter names that should be merged. If None, all active adapters will be merged. Defaults to `None`. 
""" if getattr(self.model, "is_loaded_in_8bit", False): raise ValueError("Cannot merge ia3 layers when the model is loaded in 8-bit mode") if getattr(self.model, "is_loaded_in_4bit", False): raise ValueError("Cannot merge ia3 layers when the model is loaded in 4-bit mode") self._unloading_checks(adapter_names) key_list = [key for key, _ in self.model.named_modules() if self.prefix not in key] for key in key_list: try: parent, target, target_name = _get_submodules(self.model, key) except AttributeError: continue if hasattr(target, "base_layer"): if merge: target.merge(safe_merge=safe_merge, adapter_names=adapter_names) self._replace_module(parent, target_name, target.get_base_layer(), target) elif isinstance(target, ModulesToSaveWrapper): # save any additional trainable modules part of `modules_to_save` new_module = target.modules_to_save[target.active_adapter] if hasattr(new_module, "base_layer"): # check if the module is itself a tuner layer if merge: new_module.merge(safe_merge=safe_merge, adapter_names=adapter_names) new_module = new_module.get_base_layer() setattr(parent, target_name, new_module) return self.model def merge_and_unload(self, safe_merge: bool = False, adapter_names: Optional[list[str]] = None) -> torch.nn.Module: r""" This method merges the IA³ layers into the base model. This is needed if someone wants to use the base model as a standalone model. Args: safe_merge (`bool`): whether to activate the safe merging check to check if there is any potential Nan in the adapter weights adapter_names (`List[str]`, *optional*): The list of adapter names that should be merged. If None, all active adapters will be merged. Defaults to `None`. Example: ```py >>> from transformers import AutoModelForCausalLM >>> from peft import PeftModel >>> base_model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-40b") >>> peft_model_id = "smangrul/falcon-40B-int4-peft-lora-sfttrainer-sample" >>> model = PeftModel.from_pretrained(base_model, peft_model_id) >>> merged_model = model.merge_and_unload() ``` """ return self._unload_and_optionally_merge(safe_merge=safe_merge, adapter_names=adapter_names) def unload(self) -> torch.nn.Module: """ Gets back the base model by removing all the IA³ modules without merging. This gives back the original base model. """ return self._unload_and_optionally_merge(merge=False) def delete_adapter(self, adapter_name: str) -> None: """ Deletes an existing adapter. Args: adapter_name (str): Name of the adapter to be deleted. """ if adapter_name not in self.peft_config: raise ValueError(f"Adapter {adapter_name} does not exist") del self.peft_config[adapter_name] key_list = [key for key, _ in self.model.named_modules() if self.prefix not in key] new_adapter = None for key in key_list: _, target, _ = _get_submodules(self.model, key) if isinstance(target, IA3Layer): target.delete_adapter(adapter_name) if new_adapter is None: new_adapter = target.active_adapters[:] self.active_adapter = new_adapter or [] self._delete_auxiliary_adapter(adapter_name, new_active_adapters=new_adapter) def _check_add_weighted_adapter(self, adapters: list[str]) -> tuple[str, str]: """ Helper function to check if the arguments to add_weighted_adapter are valid and compatible with the underlying model. 
""" # Validate existence of adapters for adapter in adapters: if adapter not in self.peft_config: raise ValueError(f"Adapter {adapter} does not exist") # Check for conflicting modules_to_save modules_to_save_wrappers = [module for module in self.modules() if isinstance(module, ModulesToSaveWrapper)] if any( sum(adapter in wrapper.modules_to_save for adapter in adapters) > 1 for wrapper in modules_to_save_wrappers ): raise ValueError("Cannot add weighted adapters targeting the same module with modules_to_save.") # Ensure all adapters have compatible target and feedforward module types target_module_types = {type(self.peft_config[adapter].target_modules) for adapter in adapters} feedforward_module_types = {type(self.peft_config[adapter].feedforward_modules) for adapter in adapters} if len(target_module_types) > 1 or len(feedforward_module_types) > 1: raise ValueError("All adapter configs should have the same type for target and feedforward modules.") # Combine target and feedforward modules if str in target_module_types: new_target_modules = "|".join(f"({self.peft_config[adapter].target_modules})" for adapter in adapters) else: new_target_modules = set.union(*(self.peft_config[adapter].target_modules for adapter in adapters)) if str in feedforward_module_types: new_feedforward_modules = "|".join( f"({self.peft_config[adapter].feedforward_modules})" for adapter in adapters ) else: new_feedforward_modules = set.union( *(self.peft_config[adapter].feedforward_modules for adapter in adapters) ) return new_target_modules, new_feedforward_modules def add_weighted_adapter( self, adapters: list[str], weights: list[float], adapter_name: str, ) -> None: """ This method adds a new adapter by merging the given adapters with the given weights. Args: adapters (`list`): List of adapter names to be merged. weights (`list`): List of weights for each adapter. adapter_name (`str`): Name of the new adapter. """ if adapter_name in list(self.peft_config.keys()): return new_target_modules, new_feedforward_modules = self._check_add_weighted_adapter( adapters=adapters, ) self.peft_config[adapter_name] = replace( self.peft_config[adapters[0]], target_modules=new_target_modules, feedforward_modules=new_feedforward_modules, ) self.inject_adapter(self.model, adapter_name) # Do we really need that? _freeze_adapter(self.model, adapter_name) key_list = [key for key, _ in self.model.named_modules() if self.prefix not in key] for key in key_list: _, target, _ = _get_submodules(self.model, key) if isinstance(target, IA3Layer): if adapter_name in target.ia3_l: target_ia3_l = target.ia3_l[adapter_name] else: continue target_ia3_l.data = target_ia3_l.data.zero_() for adapter, weight in zip(adapters, weights): if adapter in target.ia3_l: current_adapter_ia3_l = target.ia3_l[adapter] else: continue target_ia3_l.data += current_adapter_ia3_l.data * weight
peft/src/peft/tuners/ia3/model.py/0
{ "file_path": "peft/src/peft/tuners/ia3/model.py", "repo_id": "peft", "token_count": 9374 }
240
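To connect the pieces above end to end, here is a short usage sketch that applies (IA)^3 through the public `get_peft_model` entry point and then exercises `merge_and_unload`. The model id and target module names are illustrative assumptions.

```py
from transformers import AutoModelForCausalLM

from peft import IA3Config, get_peft_model

model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")  # illustrative base model
config = IA3Config(
    task_type="CAUSAL_LM",
    target_modules=["k_proj", "v_proj", "fc2"],  # assumed module names for OPT
    feedforward_modules=["fc2"],
)
peft_model = get_peft_model(model, config)
peft_model.print_trainable_parameters()

# After training, fold the learned (IA)^3 vectors back into the base weights:
merged = peft_model.merge_and_unload()
```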
# Copyright 2023-present the HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from __future__ import annotations import warnings from typing import Any, Optional import bitsandbytes as bnb import torch from peft.import_utils import is_bnb_4bit_available, is_bnb_available from peft.tuners.tuners_utils import BaseTunerLayer, check_adapters_to_merge from peft.utils.integrations import dequantize_bnb_weight from peft.utils.other import transpose from .layer import LoraLayer, LoraVariant if is_bnb_available(): class Linear8bitLt(torch.nn.Module, LoraLayer): # Lora implemented in a dense layer def __init__( self, base_layer: torch.nn.Module, adapter_name: str, r: int = 0, lora_alpha: int = 1, lora_dropout: float = 0.0, init_lora_weights: bool = True, use_rslora: bool = False, use_dora: bool = False, lora_bias: bool = False, **kwargs, ) -> None: super().__init__() LoraLayer.__init__(self, base_layer) self.fan_in_fan_out = False self._active_adapter = adapter_name self.update_layer( adapter_name, r, lora_alpha=lora_alpha, lora_dropout=lora_dropout, init_lora_weights=init_lora_weights, use_rslora=use_rslora, use_dora=use_dora, lora_bias=lora_bias, ) def resolve_lora_variant(self, *, use_dora: bool, **kwargs) -> Optional[LoraVariant]: if not use_dora: return None from .variants import DoraLinearVariant return DoraLinearVariant() def merge(self, safe_merge: bool = False, adapter_names: Optional[list[str]] = None) -> None: """ Merge the active adapter weights into the base weights Args: safe_merge (`bool`, *optional*): If True, the merge operation will be performed in a copy of the original weights and check for NaNs before merging the weights. This is useful if you want to check if the merge operation will produce NaNs. Defaults to `False`. adapter_names (`list[str]`, *optional*): The list of adapter names that should be merged. If None, all active adapters will be merged. Defaults to `None`. """ adapter_names = check_adapters_to_merge(self, adapter_names) if not adapter_names: # no adapter to merge return for active_adapter in adapter_names: if active_adapter not in self.lora_A.keys(): continue warnings.warn( "Merge lora module to 8-bit linear may get different generations due to rounding errors." ) weight = self.get_base_layer().weight state = self.get_base_layer().state if state.SCB is None: state.SCB = weight.SCB # Dequantize the result of identity matrix and int8 weight because bitsandbytes does not support int8 # dequantization directly output = dequantize_bnb_weight(weight, state=state) if active_adapter not in self.lora_variant: # vanilla LoRA lora_data = self.get_delta_weight(active_adapter) w_data = output.to(lora_data.dtype).to(lora_data.device) + lora_data else: w_data = self.lora_variant[active_adapter].merge_safe(self, active_adapter, output) if safe_merge and not torch.isfinite(w_data).all(): raise ValueError( f"NaNs detected in the merged weights. 
The adapter {active_adapter} seems to be broken" ) self.get_base_layer().weight = bnb.nn.Int8Params( w_data.to("cpu"), requires_grad=False, has_fp16_weights=weight.has_fp16_weights ).to(weight.device) if self.lora_bias[active_adapter]: bias_data = self.get_base_layer().bias.data + self.lora_B[active_adapter].bias if safe_merge and not torch.isfinite(bias_data): raise ValueError( f"NaNs detected in the merged weights. The adapter {active_adapter} seems to be broken" ) self.get_base_layer().bias.data = bias_data state.reset_grads() self.merged_adapters.append(active_adapter) def unmerge(self) -> None: """ This method unmerges all merged adapter layers from the base weights. """ if not self.merged: warnings.warn("Already unmerged. Nothing to do.") return while len(self.merged_adapters) > 0: active_adapter = self.merged_adapters.pop() if active_adapter not in self.lora_A.keys(): continue warnings.warn( "Unmerge lora module to 8-bit linear may get different generations due to rounding errors." ) weight = self.get_base_layer().weight state = self.get_base_layer().state if state.SCB is None: state.SCB = weight.SCB output = dequantize_bnb_weight(weight, state=state) if active_adapter not in self.lora_variant: # vanilla LoRA lora_data = self.get_delta_weight(active_adapter) w_data = output.to(lora_data.dtype).to(lora_data.device) - lora_data else: w_data = self.lora_variant[active_adapter].unmerge(self, active_adapter, output) self.get_base_layer().weight = bnb.nn.Int8Params( w_data.to("cpu"), requires_grad=False, has_fp16_weights=weight.has_fp16_weights ).to(weight.device) if self.lora_bias[active_adapter]: self.get_base_layer().bias.data -= self.lora_B[active_adapter].bias state.reset_grads() def get_delta_weight(self, adapter): return ( transpose( self.lora_B[adapter].weight @ self.lora_A[adapter].weight, False, ) * self.scaling[adapter] ) def _mixed_batch_forward( self, x: torch.Tensor, *args: Any, adapter_names: list[str], **kwargs: Any ) -> torch.Tensor: # This is a special method that handles the case when users pass the argument `adapter_names`. This is an # extra argument that allows mixing different adapters in the same batch at inference time. 
result = self.base_layer(x, *args, **kwargs) unique_adapters = set(adapter_names) sub_batch_indices_list = [] for adapter in unique_adapters: sub_batch_indices_list.append([index for index, item in enumerate(adapter_names) if item == adapter]) for i, active_adapter in enumerate(unique_adapters): if active_adapter == "__base__": continue if active_adapter not in self.lora_A.keys(): continue lora_A = self.lora_A[active_adapter] lora_B = self.lora_B[active_adapter] dropout = self.lora_dropout[active_adapter] scaling = self.scaling[active_adapter] requires_conversion = not torch.is_autocast_enabled() if requires_conversion: expected_dtype = result.dtype x = self._cast_input_dtype(x, lora_A.weight.dtype) # getting the sub-batch, passing it to LoRA layers and updating the corresponding indices of the linear # layer output sub_batch = x[sub_batch_indices_list[i]] output = lora_B(lora_A(dropout(sub_batch))) * scaling if requires_conversion: output = output.to(expected_dtype) result[sub_batch_indices_list[i]] += output return result def forward(self, x: torch.Tensor, *args, **kwargs) -> torch.Tensor: self._check_forward_args(x, *args, **kwargs) adapter_names = kwargs.pop("adapter_names", None) if self.disable_adapters: if self.merged: self.unmerge() result = self.base_layer(x, *args, **kwargs) elif adapter_names is not None: result = self._mixed_batch_forward(x, *args, adapter_names=adapter_names, **kwargs) elif self.merged: result = self.base_layer(x, *args, **kwargs) else: result = self.base_layer(x, *args, **kwargs) for active_adapter in self.active_adapters: if active_adapter not in self.lora_A.keys(): continue lora_A = self.lora_A[active_adapter] lora_B = self.lora_B[active_adapter] dropout = self.lora_dropout[active_adapter] scaling = self.scaling[active_adapter] requires_conversion = not torch.is_autocast_enabled() if requires_conversion: expected_dtype = result.dtype x = self._cast_input_dtype(x, lora_A.weight.dtype) if active_adapter not in self.lora_variant: # vanilla LoRA output = lora_B(lora_A(dropout(x))) * scaling if requires_conversion: output = output.to(expected_dtype) result = result + output else: result = self.lora_variant[active_adapter].forward( self, active_adapter=active_adapter, x=x, result=result, ) if requires_conversion: result = result.to(expected_dtype) return result def __repr__(self) -> str: rep = super().__repr__() return "lora." 
+ rep def dispatch_bnb_8bit(target: torch.nn.Module, adapter_name: str, **kwargs): new_module = None if isinstance(target, BaseTunerLayer): target_base_layer = target.get_base_layer() else: target_base_layer = target loaded_in_8bit = kwargs.get("loaded_in_8bit", False) if loaded_in_8bit and isinstance(target_base_layer, bnb.nn.Linear8bitLt): eightbit_kwargs = kwargs.copy() eightbit_kwargs.update( { "has_fp16_weights": target.state.has_fp16_weights, "threshold": target.state.threshold, "index": target.index, } ) new_module = Linear8bitLt(target, adapter_name, **eightbit_kwargs) return new_module if is_bnb_4bit_available(): class Linear4bit(torch.nn.Module, LoraLayer): # Lora implemented in a dense layer def __init__( self, base_layer: torch.nn.Module, adapter_name: str, r: int = 0, lora_alpha: int = 1, lora_dropout: float = 0.0, init_lora_weights: bool = True, use_rslora: bool = False, use_dora: bool = False, lora_bias: bool = False, **kwargs, ) -> None: super().__init__() LoraLayer.__init__(self, base_layer) self.fan_in_fan_out = False self._active_adapter = adapter_name self.update_layer( adapter_name, r, lora_alpha=lora_alpha, lora_dropout=lora_dropout, init_lora_weights=init_lora_weights, use_rslora=use_rslora, use_dora=use_dora, lora_bias=lora_bias, ) def resolve_lora_variant(self, *, use_dora: bool, **kwargs) -> Optional[LoraVariant]: if not use_dora: return None from .variants import DoraLinearVariant return DoraLinearVariant() def merge(self, safe_merge: bool = False, adapter_names: Optional[list[str]] = None) -> None: """ Merge the active adapter weights into the base weights Args: safe_merge (`bool`, *optional*): If True, the merge operation will be performed in a copy of the original weights and check for NaNs before merging the weights. This is useful if you want to check if the merge operation will produce NaNs. Defaults to `False`. adapter_names (`list[str]`, *optional*): The list of adapter names that should be merged. If None, all active adapters will be merged. Defaults to `None`. """ adapter_names = check_adapters_to_merge(self, adapter_names) if not adapter_names: # no adapter to merge return for active_adapter in adapter_names: if active_adapter not in self.lora_A.keys(): continue warnings.warn( "Merge lora module to 4-bit linear may get different generations due to rounding errors." ) # Refer to https://gist.github.com/ChrisHayduk/1a53463331f52dca205e55982baf9930 weight = self.get_base_layer().weight kwargs = weight.__dict__ output = dequantize_bnb_weight(weight, state=weight.quant_state) if active_adapter not in self.lora_variant: # vanilla LoRA lora_data = self.get_delta_weight(active_adapter) w_data = output + lora_data else: w_data = self.lora_variant[active_adapter].merge_safe(self, active_adapter, output) if safe_merge and not torch.isfinite(w_data).all(): raise ValueError( f"NaNs detected in the merged weights. The adapter {active_adapter} seems to be broken" ) if "bnb_quantized" in kwargs: kwargs["bnb_quantized"] = False kwargs["requires_grad"] = False kwargs.pop("data", None) # torch.compile can introduce attributes preceded by '_', remove them kwargs = {k: v for k, v in kwargs.items() if not k.startswith("_")} self.get_base_layer().weight = bnb.nn.Params4bit(w_data.to("cpu"), **kwargs).to(weight.device) if self.lora_bias[active_adapter]: bias_data = self.get_base_layer().bias.data + self.lora_B[active_adapter].bias if safe_merge and not torch.isfinite(bias_data): raise ValueError( f"NaNs detected in the merged weights. 
The adapter {active_adapter} seems to be broken" ) self.get_base_layer().bias.data = bias_data self.merged_adapters.append(active_adapter) def unmerge(self) -> None: """ This method unmerges all merged adapter layers from the base weights. """ if not self.merged: warnings.warn("Already unmerged. Nothing to do.") return while len(self.merged_adapters) > 0: active_adapter = self.merged_adapters.pop() if active_adapter not in self.lora_A.keys(): continue warnings.warn( "Unmerge lora module to 4-bit linear may get different generations due to rounding errors." ) weight = self.get_base_layer().weight kwargs = weight.__dict__ output = dequantize_bnb_weight(weight, state=weight.quant_state) if active_adapter not in self.lora_variant: # vanilla LoRA lora_data = self.get_delta_weight(active_adapter) w_data = output - lora_data else: w_data = self.lora_variant[active_adapter].unmerge(self, active_adapter, output) if "bnb_quantized" in kwargs: kwargs["bnb_quantized"] = False kwargs["requires_grad"] = False kwargs.pop("data", None) self.get_base_layer().weight = bnb.nn.Params4bit(w_data.to("cpu"), **kwargs).to(weight.device) if self.lora_bias[active_adapter]: self.get_base_layer().bias.data -= self.lora_B[active_adapter].bias def get_delta_weight(self, adapter): return ( transpose( self.lora_B[adapter].weight @ self.lora_A[adapter].weight, False, ) * self.scaling[adapter] ) def _mixed_batch_forward( self, x: torch.Tensor, *args: Any, adapter_names: list[str], **kwargs: Any ) -> torch.Tensor: # This is a special method that handles the case when users pass the argument `adapter_names`. This is an # extra argument that allows mixing different adapters in the same batch at inference time. result = self.base_layer(x, *args, **kwargs) unique_adapters = set(adapter_names) sub_batch_indices_list = [] for adapter in unique_adapters: sub_batch_indices_list.append([index for index, item in enumerate(adapter_names) if item == adapter]) for i, active_adapter in enumerate(unique_adapters): if active_adapter == "__base__": continue if active_adapter not in self.lora_A.keys(): continue lora_A = self.lora_A[active_adapter] lora_B = self.lora_B[active_adapter] dropout = self.lora_dropout[active_adapter] scaling = self.scaling[active_adapter] requires_conversion = not torch.is_autocast_enabled() if requires_conversion: expected_dtype = result.dtype x = self._cast_input_dtype(x, lora_A.weight.dtype) # getting the sub-batch, passing it to LoRA layers and updating the corresponding indices of the linear # layer output sub_batch = x[sub_batch_indices_list[i]] output = lora_B(lora_A(dropout(sub_batch))) * scaling if requires_conversion: output = output.to(expected_dtype) result[sub_batch_indices_list[i]] += output return result def forward(self, x: torch.Tensor, *args, **kwargs) -> torch.Tensor: self._check_forward_args(x, *args, **kwargs) adapter_names = kwargs.pop("adapter_names", None) if self.disable_adapters: if self.merged: self.unmerge() result = self.base_layer(x, *args, **kwargs) elif adapter_names is not None: result = self._mixed_batch_forward(x, *args, adapter_names=adapter_names, **kwargs) elif self.merged: result = self.base_layer(x, *args, **kwargs) else: result = self.base_layer(x, *args, **kwargs) # As per Tim Dettmers, for 4bit, we need to defensively clone here. # The reason is that in some cases, an error can occur that backprop # does not work on a manipulated view. This issue may be solved with # newer PyTorch versions but this would need extensive testing to be # sure. 
result = result.clone() for active_adapter in self.active_adapters: if active_adapter not in self.lora_A.keys(): continue lora_A = self.lora_A[active_adapter] lora_B = self.lora_B[active_adapter] dropout = self.lora_dropout[active_adapter] scaling = self.scaling[active_adapter] requires_conversion = not torch.is_autocast_enabled() if requires_conversion: expected_dtype = result.dtype x = self._cast_input_dtype(x, lora_A.weight.dtype) if active_adapter not in self.lora_variant: # vanilla LoRA output = lora_B(lora_A(dropout(x))) * scaling if requires_conversion: output = output.to(expected_dtype) result = result + output else: result = self.lora_variant[active_adapter].forward( self, active_adapter=active_adapter, x=x, result=result, ) if requires_conversion: result = result.to(expected_dtype) return result def __repr__(self) -> str: rep = super().__repr__() return "lora." + rep def dispatch_bnb_4bit(target: torch.nn.Module, adapter_name: str, **kwargs): new_module = None if isinstance(target, BaseTunerLayer): target_base_layer = target.get_base_layer() else: target_base_layer = target loaded_in_4bit = kwargs.get("loaded_in_4bit", False) if loaded_in_4bit and is_bnb_4bit_available() and isinstance(target_base_layer, bnb.nn.Linear4bit): fourbit_kwargs = kwargs.copy() fourbit_kwargs.update( { "compute_dtype": target_base_layer.compute_dtype, "compress_statistics": target_base_layer.weight.compress_statistics, "quant_type": target_base_layer.weight.quant_type, } ) new_module = Linear4bit(target, adapter_name, **fourbit_kwargs) return new_module
peft/src/peft/tuners/lora/bnb.py/0
{ "file_path": "peft/src/peft/tuners/lora/bnb.py", "repo_id": "peft", "token_count": 12187 }
241
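The dispatchers above are normally reached indirectly: loading a quantized base model and calling `get_peft_model` routes `Linear8bitLt`/`Linear4bit` targets to the wrappers defined in this file. A hedged sketch follows; the model id is an illustrative assumption and a CUDA-capable setup with bitsandbytes installed is required.

```py
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-125m",  # illustrative model id
    quantization_config=bnb_config,
    device_map="auto",
)

lora_config = LoraConfig(r=8, lora_alpha=16, target_modules=["k_proj", "v_proj"])
peft_model = get_peft_model(model, lora_config)  # 4-bit linears become lora.Linear4bit
peft_model.print_trainable_parameters()
```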
# Copyright 2025-present the HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import annotations

from dataclasses import dataclass, field
from typing import Literal, Optional, Union

from peft.config import PeftConfig
from peft.utils import PeftType


@dataclass
class MissConfig(PeftConfig):
    """
    This is the configuration class to store the configuration of a [`MiSSModel`].

    Args:
        r (`int`):
            The rank of MiSS across different layers. It is best to set 'r' to an even number; otherwise, the default
            initialization method will not work. The rank of MiSS corresponds to a low-rank decomposition along the
            in_features dimension.
        miss_dropout (`float`):
            The dropout probability for MiSS layers.
        mini_r (`int`):
            The rank of MiSS corresponds to a low-rank decomposition along the out_features dimension. When you set
            `init_weights="mini"`, you need to set `mini_r`. Please make sure that `out_features` is divisible by
            `mini_r`.
        target_modules (`Optional[Union[List[str], str]]`):
            The names of the modules to apply the adapter to. If this is specified, only the modules with the
            specified names will be replaced. When passing a string, a regex match will be performed. When passing a
            list of strings, either an exact match will be performed or it is checked if the name of the module ends
            with any of the passed strings. If this is specified as 'all-linear', then all linear modules are chosen,
            excluding the output layer. If this is not specified, modules will be chosen according to the model
            architecture. If the architecture is not known, an error will be raised -- in this case, you should
            specify the target modules manually.
        exclude_modules (`Optional[Union[List[str], str]]`):
            The names of the modules not to apply the adapter to. When passing a string, a regex match will be
            performed. When passing a list of strings, either an exact match will be performed or it is checked if
            the name of the module ends with any of the passed strings.
        init_weights (bool | Literal["bat", "mini"]):
            Different initializations correspond to different MiSS variants. By default ("balance"), the most
            efficient and general method in MiSS will be used. 'bat': In this mode, you can enable nonlinear updates
            across different shards. 'mini': In this mode, you can set a smaller rank to use fewer trainable
            parameters, but it is recommended to keep `out_features % mini_r == 0`.
        layers_to_transform (`Union[List[int], int]`):
            The layer indices to transform. If a list of ints is passed, it will apply the adapter to the layer
            indices that are specified in this list. If a single integer is passed, it will apply the transformations
            on the layer at this index.
        layers_pattern (`str`):
            The layer pattern name, used only if `layers_to_transform` is different from `None`.
        modules_to_save (`List[str]`):
            List of modules apart from adapter layers to be set as trainable and saved in the final checkpoint.
""" r: int = field( default=64, metadata={ "help": "The rank of MiSS corresponds to a low-rank decomposition along the in_features dimension.", "note": "It is best to set 'r' to an even number; otherwise, the default initialization method will not work.", }, ) miss_dropout: float = field(default=0.0, metadata={"help": "MiSS dropout"}) mini_r: int = field( default=1, metadata={ "help": "The rank of MiSS corresponds to a low-rank decomposition along the out_features dimension.", "note": "It is recommended that mini_r be divisible by out_features. When mini_r == out_features, the mini method is equivalent to the default efficient MiSS.", }, ) target_modules: Optional[Union[list[str], str]] = field( default=None, metadata={ "help": "List of module names or regex expression of the module names to replace with MiSS.", "example": "For example, ['q', 'v'] or '.*decoder.*(SelfAttention|EncDecAttention).*(q|v)$' ", }, ) exclude_modules: Optional[Union[list[str], str]] = field( default=None, metadata={"help": "List of module names or regex expression of the module names to exclude from MiSS."}, ) init_weights: bool | Literal["bat", "mini"] = field( default=True, metadata={ "help": ( "True -> MiSS balance; `bat` -> Bat; `mini` -> smaller rank and efficiency" "Whether to initialize the weights of the MiSS layers with their default initialization. Don't change " "this setting, except if you know exactly what you're doing." ), }, ) layers_to_transform: Optional[Union[list[int], int]] = field( default=None, metadata={ "help": "The layer indexes to transform, is this argument is specified, PEFT will transform only the layers indexes that are specified inside this list. If a single integer is passed, PEFT will transform only the layer at this index." }, ) layers_pattern: Optional[str] = field( default=None, metadata={ "help": "The layer pattern name, used only if `layers_to_transform` is different to None and if the layer pattern is not in the common layers pattern." }, ) bias: str = field(default="none", metadata={"help": "Bias type for MiSS. Can be 'none', 'all' or 'MiSS_only'"}) modules_to_save: Optional[list[str]] = field( default=None, metadata={ "help": "List of modules apart from MiSS layers to be set as trainable and saved in the final checkpoint. " "For example, in Sequence Classification or Token Classification tasks, " "the final layer `classifier/score` are randomly initialized and as such need to be trainable and saved." }, ) def __post_init__(self): super().__post_init__() self.peft_type = PeftType.MISS self.target_modules = ( set(self.target_modules) if isinstance(self.target_modules, list) else self.target_modules ) self.exclude_modules = ( set(self.exclude_modules) if isinstance(self.exclude_modules, list) else self.exclude_modules ) # if target_modules is a regex expression, then layers_to_transform should be None if isinstance(self.target_modules, str) and self.layers_to_transform is not None: raise ValueError("`layers_to_transform` cannot be used when `target_modules` is a str.") # if target_modules is a regex expression, then layers_pattern should be None if isinstance(self.target_modules, str) and self.layers_pattern is not None: raise ValueError("`layers_pattern` cannot be used when `target_modules` is a str.")
peft/src/peft/tuners/miss/config.py/0
{ "file_path": "peft/src/peft/tuners/miss/config.py", "repo_id": "peft", "token_count": 2708 }
242
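For reference, a minimal configuration sketch for the class above. The import uses the tuner-level module path shown in this record, since whether `MissConfig` is re-exported from the top-level `peft` namespace depends on the installed version; the target module names are illustrative.

```py
from peft.tuners.miss.config import MissConfig  # module path taken from this file's location

# "mini" variant: r decomposes along in_features, mini_r along out_features;
# out_features of every targeted layer must be divisible by mini_r.
config = MissConfig(
    r=64,
    mini_r=8,
    init_weights="mini",
    target_modules=["q_proj", "v_proj"],  # illustrative module names
    miss_dropout=0.05,
)
print(config.peft_type)  # PeftType.MISS
```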
# Copyright 2023-present the HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import math import torch from peft.utils.integrations import gather_params_ctx from .config import PromptTuningInit class PromptEmbedding(torch.nn.Module): """ The model to encode virtual tokens into prompt embeddings. Args: config ([`PromptTuningConfig`]): The configuration of the prompt embedding. word_embeddings (`torch.nn.Module`): The word embeddings of the base transformer model. **Attributes**: - **embedding** (`torch.nn.Embedding`) -- The embedding layer of the prompt embedding. Example: ```py >>> from peft import PromptEmbedding, PromptTuningConfig >>> config = PromptTuningConfig( ... peft_type="PROMPT_TUNING", ... task_type="SEQ_2_SEQ_LM", ... num_virtual_tokens=20, ... token_dim=768, ... num_transformer_submodules=1, ... num_attention_heads=12, ... num_layers=12, ... prompt_tuning_init="TEXT", ... prompt_tuning_init_text="Predict if sentiment of this review is positive, negative or neutral", ... tokenizer_name_or_path="t5-base", ... ) >>> # t5_model.shared is the word embeddings of the base model >>> prompt_embedding = PromptEmbedding(config, t5_model.shared) ``` Input Shape: (`batch_size`, `total_virtual_tokens`) Output Shape: (`batch_size`, `total_virtual_tokens`, `token_dim`) """ def __init__(self, config, word_embeddings): super().__init__() total_virtual_tokens = config.num_virtual_tokens * config.num_transformer_submodules self.embedding = torch.nn.Embedding(total_virtual_tokens, config.token_dim) if config.prompt_tuning_init == PromptTuningInit.TEXT and not config.inference_mode: from transformers import AutoTokenizer tokenizer_kwargs = config.tokenizer_kwargs or {} tokenizer = AutoTokenizer.from_pretrained(config.tokenizer_name_or_path, **tokenizer_kwargs) init_text = config.prompt_tuning_init_text init_token_ids = tokenizer(init_text)["input_ids"] # Trim or iterate until num_text_tokens matches total_virtual_tokens num_text_tokens = len(init_token_ids) if num_text_tokens > total_virtual_tokens: init_token_ids = init_token_ids[:total_virtual_tokens] elif num_text_tokens < total_virtual_tokens: num_reps = math.ceil(total_virtual_tokens / num_text_tokens) init_token_ids = init_token_ids * num_reps init_token_ids = init_token_ids[:total_virtual_tokens] init_token_ids = torch.LongTensor(init_token_ids).to(word_embeddings.weight.device) with gather_params_ctx(word_embeddings.parameters()): word_embedding_weights = word_embeddings(init_token_ids).detach().clone() word_embedding_weights = word_embedding_weights.to(torch.float32) self.embedding.weight = torch.nn.Parameter(word_embedding_weights) def forward(self, indices): # Just get embeddings prompt_embeddings = self.embedding(indices) return prompt_embeddings
peft/src/peft/tuners/prompt_tuning/model.py/0
{ "file_path": "peft/src/peft/tuners/prompt_tuning/model.py", "repo_id": "peft", "token_count": 1486 }
243
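The TEXT-initialization branch above trims or tiles the tokenized init text to exactly `total_virtual_tokens` ids before looking up their embeddings. A tiny standalone illustration of that repeat-then-trim arithmetic (the token ids are made up):

```py
import math

init_token_ids = [101, 2054, 2003]  # made-up tokenizer output, 3 tokens
total_virtual_tokens = 8

num_text_tokens = len(init_token_ids)
if num_text_tokens > total_virtual_tokens:
    init_token_ids = init_token_ids[:total_virtual_tokens]
elif num_text_tokens < total_virtual_tokens:
    num_reps = math.ceil(total_virtual_tokens / num_text_tokens)  # ceil(8 / 3) == 3
    init_token_ids = (init_token_ids * num_reps)[:total_virtual_tokens]

print(init_token_ids)  # [101, 2054, 2003, 101, 2054, 2003, 101, 2054]
```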
# Copyright 2023-present the HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import annotations

import warnings
from dataclasses import dataclass
from typing import Optional

from peft.config import PeftConfig
from peft.utils.peft_types import PeftType


@dataclass
class XLoraConfig(PeftConfig):
    r"""
    This is the configuration class to store the configuration of a `XLoraModel`. When the config is reloaded, the
    paths of the `adapters` field are disregarded in favor of the saved adapters. As such, only the keys matter during
    loading.

    Args:
        hidden_size (`int`):
            Hidden size of the base model.
        adapters (`dict`):
            Mapping of adapter names to the LoRA adapter id, as per PeftModel.load_adapter. *They will be
            automatically loaded*, to use as LoRA experts. When using from_pretrained, pass the new adapters dict as a
            keyword argument.
        enable_softmax (`bool`, *optional*, defaults to `True`):
            Enable softmax application for the X-LoRA classifier.
        enable_softmax_topk (`bool`, *optional*, defaults to `False`):
            Enable softmax application for the top-k LoRA adapters. Mutually exclusive with `enable_softmax` and must
            only be set if `top_k_lora` is.
        softmax_temperature (`float`, *optional*, defaults to 1.0):
            Softmax temperature; lower values yield sharper predictions.
        layerwise_scalings (`bool`, *optional*, defaults to `False`):
            If True, generate scalings for each LoRA adapter (each layer). If this is False, the same scalings will be
            broadcast to every layer.
        top_k_lora (`int`, *optional*, defaults to None):
            Sparsely select the top_k LoRA experts instead of the default dense method.
        xlora_depth (`int`, *optional*, defaults to 1):
            Depth of the X-LoRA classifier.
        xlora_size (`int`, *optional*, defaults to 2048):
            Hidden size of the X-LoRA classifier, irrelevant if `xlora_depth=1`.
        xlora_dropout_p (`float`, *optional*, defaults to 0.2):
            Dropout probability of the X-LoRA classifier, irrelevant if `xlora_depth=1`.
        use_trainable_adapters (`bool`, *optional*, defaults to False):
            Make the adapters trainable.
        scaling_pass_value (`float`, *optional*, defaults to 0):
            Scaling pass value.
        global_scaling_weight (`float`, *optional*, defaults to 1):
            Weight to multiply output of each LoRA adapter by.
    """

    hidden_size: int = None  # type: ignore
    adapters: dict[str, str] = None  # type: ignore
    enable_softmax: bool = True
    enable_softmax_topk: bool = False
    layerwise_scalings: bool = False
    xlora_depth: int = 1
    xlora_size: int = 2048
    xlora_dropout_p: float = 0.2
    use_trainable_adapters: bool = False
    softmax_temperature: float = 1.0
    top_k_lora: Optional[int] = None
    scaling_pass_value: float = 0.0
    global_scaling_weight: float = 1.0

    def __post_init__(self):
        super().__post_init__()
        self.peft_type = PeftType.XLORA

        if self.hidden_size is None:
            warnings.warn(
                "No value was provided for `hidden_size`. This will be set to 4096 by default, please ensure that this is correct."
            )
            self.hidden_size = 4096
        if self.adapters is None:
            warnings.warn(
                "No value was provided for `adapters`. This will be set to empty, please ensure that this is correct."
            )
            self.adapters = {}
        if self.enable_softmax_topk and self.top_k_lora is None:
            warnings.warn("`enable_softmax_topk` is enabled but `top_k_lora` is not set.")
        if self.enable_softmax_topk and self.enable_softmax:
            warnings.warn(
                "`enable_softmax_topk` and `enable_softmax` are both enabled. This will result in worse performance."
            )
        if self.top_k_lora is not None and self.top_k_lora < 1:
            warnings.warn("`top_k_lora` value must be at least 1.")
peft/src/peft/tuners/xlora/config.py/0
{ "file_path": "peft/src/peft/tuners/xlora/config.py", "repo_id": "peft", "token_count": 1765 }
244
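A construction sketch for the config above; the adapter paths are placeholders (only the keys survive a reload, as the docstring notes), and the hidden size matches the 4096 fallback. The module path is taken from this record's file location.

```py
from peft.tuners.xlora.config import XLoraConfig  # module path taken from this file's location

config = XLoraConfig(
    hidden_size=4096,
    adapters={  # placeholder adapter ids; only the keys matter when reloading
        "math": "path/or/hub-id/of/math-lora",
        "code": "path/or/hub-id/of/code-lora",
    },
    top_k_lora=1,
    enable_softmax=False,
    enable_softmax_topk=True,  # requires top_k_lora to be set, see __post_init__
)
```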
# Copyright 2023-present the HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import platform import re import pytest def pytest_addoption(parser): parser.addoption("--regression", action="store_true", default=False, help="run regression tests") def pytest_configure(config): config.addinivalue_line("markers", "regression: mark regression tests") def pytest_collection_modifyitems(config, items): if config.getoption("--regression"): return skip_regression = pytest.mark.skip(reason="need --regression option to run regression tests") for item in items: if "regression" in item.keywords: item.add_marker(skip_regression) # TODO: remove this once support for PyTorch 2.2 (the latest one still supported by GitHub MacOS x86_64 runners) is # dropped, or if MacOS is removed from the test matrix, see https://github.com/huggingface/peft/issues/2431. # Note: the function name is fixed by the pytest plugin system, don't change it @pytest.hookimpl(hookwrapper=True) def pytest_runtest_makereport(item, call): """ Plug into the pytest test report generation to skip a specific MacOS failure caused by transformers. The error was introduced by https://github.com/huggingface/transformers/pull/37785, which results in torch.load failing when using torch < 2.6. Since the MacOS x86 runners need to use an older torch version, those steps are necessary to get the CI green. """ outcome = yield rep = outcome.get_result() # ref: # https://github.com/huggingface/transformers/blob/858ce6879a4aa7fa76a7c4e2ac20388e087ace26/src/transformers/utils/import_utils.py#L1418 error_msg = re.compile(r"Due to a serious vulnerability issue in `torch.load`") # notes: # - pytest uses hard-coded strings, we cannot import and use constants # https://docs.pytest.org/en/stable/reference/reference.html#pytest.TestReport # - errors can happen during call (running the test) but also setup (e.g. in fixtures) if rep.failed and (rep.when in ("setup", "call")) and (platform.system() == "Darwin"): exc_msg = str(call.excinfo.value) if error_msg.search(exc_msg): # turn this failure into an xfail: rep.outcome = "skipped" # for this attribute, see: # https://github.com/pytest-dev/pytest/blob/bd6877e5874b50ee57d0f63b342a67298ee9a1c3/src/_pytest/reports.py#L266C5-L266C13 rep.wasxfail = "Error known to occur on MacOS with older torch versions, won't be fixed"
peft/tests/conftest.py/0
{ "file_path": "peft/tests/conftest.py", "repo_id": "peft", "token_count": 1038 }
245
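With this conftest in place, a test opts into the regression suite via the marker and is collected only when pytest is invoked with `--regression`. A sketch of such a test (the test name and assertion are illustrative):

```py
import pytest


@pytest.mark.regression
def test_lora_outputs_match_reference():
    # Skipped by default; runs only with `pytest --regression`.
    expected = 4
    assert 2 + 2 == expected
```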
# Copyright 2024-present the HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import pytest import torch from diffusers import StableDiffusionPipeline from torch import nn from transformers import AutoModelForCausalLM, AutoTokenizer from peft import LoraConfig, get_peft_model from peft.helpers import check_if_peft_model, disable_input_dtype_casting, rescale_adapter_scale from peft.tuners.lora.layer import LoraLayer from peft.utils import infer_device class TestCheckIsPeftModel: def test_valid_hub_model(self): result = check_if_peft_model("peft-internal-testing/gpt2-lora-random") assert result is True def test_invalid_hub_model(self): result = check_if_peft_model("gpt2") assert result is False def test_nonexisting_hub_model(self): result = check_if_peft_model("peft-internal-testing/non-existing-model") assert result is False def test_local_model_valid(self, tmp_path): model = AutoModelForCausalLM.from_pretrained("gpt2") config = LoraConfig() model = get_peft_model(model, config) model.save_pretrained(tmp_path / "peft-gpt2-valid") result = check_if_peft_model(tmp_path / "peft-gpt2-valid") assert result is True def test_local_model_invalid(self, tmp_path): model = AutoModelForCausalLM.from_pretrained("gpt2") model.save_pretrained(tmp_path / "peft-gpt2-invalid") result = check_if_peft_model(tmp_path / "peft-gpt2-invalid") assert result is False def test_local_model_broken_config(self, tmp_path): with open(tmp_path / "adapter_config.json", "w") as f: f.write('{"foo": "bar"}') result = check_if_peft_model(tmp_path) assert result is False def test_local_model_non_default_name(self, tmp_path): model = AutoModelForCausalLM.from_pretrained("gpt2") config = LoraConfig() model = get_peft_model(model, config, adapter_name="other") model.save_pretrained(tmp_path / "peft-gpt2-other") # no default adapter here result = check_if_peft_model(tmp_path / "peft-gpt2-other") assert result is False # with adapter name result = check_if_peft_model(tmp_path / "peft-gpt2-other" / "other") assert result is True class TestScalingAdapters: @pytest.fixture(scope="class") def tokenizer(self): return AutoTokenizer.from_pretrained("facebook/opt-125m") def get_scale_from_modules(self, model): layer_to_scale_map = {} for name, module in model.named_modules(): if isinstance(module, LoraLayer): layer_to_scale_map[name] = module.scaling return layer_to_scale_map def test_rescale_adapter_scale(self, tokenizer): model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m") lora_config = LoraConfig( r=4, lora_alpha=4, target_modules=["k_proj", "v_proj"], lora_dropout=0.1, bias="none", init_lora_weights=False, ) model = get_peft_model(model, lora_config) model.eval() inputs = tokenizer("hello world", return_tensors="pt") with torch.no_grad(): logits_before_scaling = model(**inputs).logits scales_before_scaling = self.get_scale_from_modules(model) with rescale_adapter_scale(model=model, multiplier=0.5): scales_during_scaling = self.get_scale_from_modules(model) for key in scales_before_scaling.keys(): assert 
scales_before_scaling[key] != scales_during_scaling[key] with torch.no_grad(): logits_during_scaling = model(**inputs).logits assert not torch.allclose(logits_before_scaling, logits_during_scaling) scales_after_scaling = self.get_scale_from_modules(model) for key in scales_before_scaling.keys(): assert scales_before_scaling[key] == scales_after_scaling[key] with torch.no_grad(): logits_after_scaling = model(**inputs).logits assert torch.allclose(logits_before_scaling, logits_after_scaling) def test_wrong_scaling_datatype(self): model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m") lora_config = LoraConfig( r=4, lora_alpha=4, target_modules=["k_proj", "v_proj"], lora_dropout=0.1, bias="none", init_lora_weights=False, ) model = get_peft_model(model, lora_config) # we expect a type error here becuase of wrong datatpye of multiplier multiplier = "a" with pytest.raises(TypeError, match=f"Argument multiplier should be of type float, got {type(multiplier)}"): with rescale_adapter_scale(model=model, multiplier=multiplier): pass def test_not_lora_model(self): model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m") # we expect a value error here because the model # does not have lora layers with pytest.raises(ValueError, match="scaling is only supported for models with `LoraLayer`s"): with rescale_adapter_scale(model=model, multiplier=0.5): pass def test_scaling_set_to_zero(self, tokenizer): base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m") inputs = tokenizer("hello world", return_tensors="pt") base_model.eval() with torch.no_grad(): logits_base_model = base_model(**inputs).logits lora_config = LoraConfig( r=4, lora_alpha=4, target_modules=["k_proj", "v_proj"], lora_dropout=0.1, bias="none", init_lora_weights=False, ) lora_model = get_peft_model(base_model, lora_config) lora_model.eval() with rescale_adapter_scale(model=lora_model, multiplier=0.0): with torch.no_grad(): logits_lora_model = lora_model(**inputs).logits assert torch.allclose(logits_base_model, logits_lora_model) def test_diffusers_pipeline(self): model_id = "hf-internal-testing/tiny-sd-pipe" pipeline = StableDiffusionPipeline.from_pretrained(model_id) text_encoder_kwargs = { "r": 8, "lora_alpha": 32, "target_modules": ["k_proj", "q_proj", "v_proj", "out_proj", "fc1", "fc2"], "lora_dropout": 0.0, "bias": "none", } unet_kwargs = { "r": 8, "lora_alpha": 32, "target_modules": ["proj_in", "proj_out", "to_k", "to_q", "to_v", "to_out.0", "ff.net.0.proj", "ff.net.2"], "lora_dropout": 0.0, "bias": "none", } # Instantiate text_encoder adapter config_text_encoder = LoraConfig(**text_encoder_kwargs) pipeline.text_encoder = get_peft_model(pipeline.text_encoder, config_text_encoder) # Instantiate unet adapter config_unet = LoraConfig(**unet_kwargs) pipeline.unet = get_peft_model(pipeline.unet, config_unet) text_scales_before_scaling = self.get_scale_from_modules(pipeline.text_encoder) unet_scales_before_scaling = self.get_scale_from_modules(pipeline.unet) with ( rescale_adapter_scale(model=pipeline.text_encoder, multiplier=0.5), rescale_adapter_scale(model=pipeline.unet, multiplier=0.5), ): text_scales_during_scaling = self.get_scale_from_modules(pipeline.text_encoder) unet_scales_during_scaling = self.get_scale_from_modules(pipeline.unet) for key in text_scales_before_scaling.keys(): assert text_scales_before_scaling[key] != text_scales_during_scaling[key] for key in unet_scales_before_scaling.keys(): assert unet_scales_before_scaling[key] != unet_scales_during_scaling[key] text_scales_fter_scaling = 
self.get_scale_from_modules(pipeline.text_encoder)
        unet_scales_after_scaling = self.get_scale_from_modules(pipeline.unet)

        for key in text_scales_before_scaling.keys():
            assert text_scales_before_scaling[key] == text_scales_after_scaling[key]
        for key in unet_scales_before_scaling.keys():
            assert unet_scales_before_scaling[key] == unet_scales_after_scaling[key]

    def test_transformers_pipeline(self, tmp_path, tokenizer):
        # this uses a transformers model that loads the adapter directly
        model_id = "facebook/opt-125m"
        model = AutoModelForCausalLM.from_pretrained(model_id)
        config = LoraConfig(init_lora_weights=False)
        model = get_peft_model(model, config)
        model.save_pretrained(tmp_path / "opt-lora")
        del model

        # load directly into the transformers model
        model = AutoModelForCausalLM.from_pretrained(model_id)
        model.load_adapter(tmp_path / "opt-lora")

        inputs = tokenizer("hello world", return_tensors="pt")
        model = model.eval()
        with torch.no_grad():
            logits_before_scaling = model(**inputs).logits

        scales_before_scaling = self.get_scale_from_modules(model)

        with rescale_adapter_scale(model=model, multiplier=0.5):
            scales_during_scaling = self.get_scale_from_modules(model)
            for key in scales_before_scaling.keys():
                assert scales_before_scaling[key] != scales_during_scaling[key]

            with torch.no_grad():
                logits_during_scaling = model(**inputs).logits

            assert not torch.allclose(logits_before_scaling, logits_during_scaling)

        scales_after_scaling = self.get_scale_from_modules(model)
        for key in scales_before_scaling.keys():
            assert scales_before_scaling[key] == scales_after_scaling[key]

        with torch.no_grad():
            logits_after_scaling = model(**inputs).logits

        assert torch.allclose(logits_before_scaling, logits_after_scaling)

    def test_multi_adapters(self, tokenizer):
        model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
        lora_config = LoraConfig(
            r=4,
            lora_alpha=4,
            target_modules=["k_proj", "v_proj"],
            lora_dropout=0.1,
            bias="none",
            init_lora_weights=False,
        )
        model = get_peft_model(model, lora_config)
        inputs = tokenizer("hello world", return_tensors="pt")

        # add another adapter and activate it
        model.add_adapter("other", lora_config)
        model.set_adapter("other")

        scales_before_scaling = self.get_scale_from_modules(model)
        model.eval()
        with torch.no_grad():
            logits_before = model(**inputs).logits

        with rescale_adapter_scale(model=model, multiplier=0.5):
            scales_during_scaling = self.get_scale_from_modules(model)
            for key in scales_before_scaling.keys():
                assert scales_before_scaling[key] != scales_during_scaling[key]

            with torch.no_grad():
                logits_during = model(**inputs).logits

            assert not torch.allclose(logits_before, logits_during)

        scales_after_scaling = self.get_scale_from_modules(model)
        for key in scales_before_scaling.keys():
            assert scales_before_scaling[key] == scales_after_scaling[key]

        with torch.no_grad():
            logits_after = model(**inputs).logits

        assert torch.allclose(logits_before, logits_after)

    def test_rank_alpha_pattern(self, tokenizer):
        model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
        lora_config = LoraConfig(
            r=4,
            lora_alpha=4,
            target_modules=["k_proj", "v_proj"],
            lora_dropout=0.1,
            bias="none",
            init_lora_weights=False,
            rank_pattern={"k_proj": 2},
            alpha_pattern={"k_proj": 8},
        )
        model = get_peft_model(model, lora_config)
        model.eval()
        inputs = tokenizer("hello world", return_tensors="pt")
        with torch.no_grad():
            logits_before_scaling = model(**inputs).logits

        scales_before_scaling = self.get_scale_from_modules(model)

        with rescale_adapter_scale(model=model, multiplier=0.5):
            scales_during_scaling = 
self.get_scale_from_modules(model)
            for key in scales_before_scaling.keys():
                assert scales_before_scaling[key] != scales_during_scaling[key]

            with torch.no_grad():
                logits_during_scaling = model(**inputs).logits

            assert not torch.allclose(logits_before_scaling, logits_during_scaling)

        scales_after_scaling = self.get_scale_from_modules(model)
        for key in scales_before_scaling.keys():
            assert scales_before_scaling[key] == scales_after_scaling[key]

        with torch.no_grad():
            logits_after_scaling = model(**inputs).logits

        assert torch.allclose(logits_before_scaling, logits_after_scaling)

    def test_merging_adapter(self, tokenizer):
        model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
        lora_config = LoraConfig(
            r=4,
            lora_alpha=4,
            target_modules=["k_proj", "v_proj"],
            lora_dropout=0.1,
            bias="none",
            init_lora_weights=False,
        )
        model = get_peft_model(model, lora_config)
        model.eval()
        inputs = tokenizer("hello world", return_tensors="pt")

        with rescale_adapter_scale(model=model, multiplier=0.5):
            with torch.no_grad():
                logits_unmerged_scaling = model(**inputs).logits
            model = model.merge_and_unload()

        with torch.no_grad():
            logits_merged_scaling = model(**inputs).logits

        assert torch.allclose(logits_merged_scaling, logits_unmerged_scaling, atol=1e-4, rtol=1e-4)


class TestDisableInputDtypeCasting:
    """Test the context manager `disable_input_dtype_casting` that temporarily disables input dtype casting
    in the model.

    The test works as follows: We create a simple MLP and convert it to a PeftModel. The model dtype is set to
    float16. Then a pre-forward hook is added that casts the model parameters to float32. Moreover, a post-forward
    hook is added that casts the weights back to float16. The input dtype is float32.

    Without the disable_input_dtype_casting context, what would happen is that PEFT detects that the input dtype is
    float32 but the weight dtype is float16, so it casts the input to float16. Then the pre-forward hook casts the
    weight to float32, which results in a RuntimeError. With the disable_input_dtype_casting context, the input
    dtype is left as float32 and there is no error.

    We also add a hook to record the dtype of the result from the LoraLayer to ensure that it is indeed float32.
""" device = infer_device() dtype_record = [] @torch.no_grad() def cast_params_to_fp32_pre_hook(self, module, input): for param in module.parameters(recurse=False): param.data = param.data.float() return input @torch.no_grad() def cast_params_to_fp16_hook(self, module, input, output): for param in module.parameters(recurse=False): param.data = param.data.half() return output def record_dtype_hook(self, module, input, output): self.dtype_record.append(output[0].dtype) @pytest.fixture def inputs(self): return torch.randn(4, 10, device=self.device, dtype=torch.float32) @pytest.fixture def base_model(self): class MLP(nn.Module): def __init__(self, bias=True): super().__init__() self.lin0 = nn.Linear(10, 20, bias=bias) self.lin1 = nn.Linear(20, 2, bias=bias) self.sm = nn.LogSoftmax(dim=-1) def forward(self, X): X = self.lin0(X) X = self.lin1(X) X = self.sm(X) return X return MLP() @pytest.fixture def model(self, base_model): config = LoraConfig(target_modules=["lin0"], modules_to_save=["lin1"]) model = get_peft_model(base_model, config).to(device=self.device, dtype=torch.float16) # Register hooks on the submodule that holds parameters for module in model.modules(): if sum(p.numel() for p in module.parameters()) > 0: module.register_forward_pre_hook(self.cast_params_to_fp32_pre_hook) module.register_forward_hook(self.cast_params_to_fp16_hook) if isinstance(module, LoraLayer): module.register_forward_hook(self.record_dtype_hook) return model def test_disable_input_dtype_casting_active(self, model, inputs): self.dtype_record.clear() with disable_input_dtype_casting(model, active=True): model(inputs) assert self.dtype_record == [torch.float32] def test_no_disable_input_dtype_casting(self, model, inputs): msg = r"expected m.*1 and m.*2 to have the same dtype" with pytest.raises(RuntimeError, match=msg): model(inputs) def test_disable_input_dtype_casting_inactive(self, model, inputs): msg = r"expected m.*1 and m.*2 to have the same dtype" with pytest.raises(RuntimeError, match=msg): with disable_input_dtype_casting(model, active=False): model(inputs) def test_disable_input_dtype_casting_inactive_after_existing_context(self, model, inputs): # this is to ensure that when the context is left, we return to the previous behavior with disable_input_dtype_casting(model, active=True): model(inputs) # after the context exited, we're back to the error msg = r"expected m.*1 and m.*2 to have the same dtype" with pytest.raises(RuntimeError, match=msg): model(inputs)
peft/tests/test_helpers.py/0
{ "file_path": "peft/tests/test_helpers.py", "repo_id": "peft", "token_count": 8301 }
246
# Copyright 2025-present the HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import pytest
import torch
from transformers import AutoModelForSequenceClassification

from peft import (
    AdaLoraConfig,
    BOFTConfig,
    BoneConfig,
    C3AConfig,
    FourierFTConfig,
    HRAConfig,
    IA3Config,
    LoraConfig,
    MissConfig,
    OFTConfig,
    PrefixTuningConfig,
    PromptEncoderConfig,
    PromptTuningConfig,
    PromptTuningInit,
    RoadConfig,
    ShiraConfig,
    VBLoRAConfig,
    VeraConfig,
    get_peft_model,
)
from peft.utils.other import ModulesToSaveWrapper

from .testing_common import PeftCommonTester
from .testing_utils import hub_online_once


PEFT_SEQ_CLS_MODELS_TO_TEST = [
    "hf-internal-testing/tiny-random-BertForSequenceClassification",
    "hf-internal-testing/tiny-random-RobertaForSequenceClassification",
    "trl-internal-testing/tiny-LlamaForSequenceClassification-3.2",
]

ALL_CONFIGS = [
    (
        AdaLoraConfig,
        {
            "task_type": "SEQ_CLS",
            "target_modules": None,
            "total_step": 1,
        },
    ),
    (
        BOFTConfig,
        {
            "task_type": "SEQ_CLS",
            "target_modules": None,
        },
    ),
    (
        BoneConfig,
        {
            "task_type": "SEQ_CLS",
            "target_modules": None,
            "r": 2,
        },
    ),
    (
        MissConfig,
        {
            "task_type": "SEQ_CLS",
            "target_modules": None,
            "r": 2,
        },
    ),
    (
        FourierFTConfig,
        {
            "task_type": "SEQ_CLS",
            "n_frequency": 10,
            "target_modules": None,
        },
    ),
    (
        HRAConfig,
        {
            "task_type": "SEQ_CLS",
            "target_modules": None,
        },
    ),
    (
        IA3Config,
        {
            "task_type": "SEQ_CLS",
            "target_modules": None,
            "feedforward_modules": None,
        },
    ),
    (
        LoraConfig,
        {
            "task_type": "SEQ_CLS",
            "r": 8,
            "lora_alpha": 32,
            "target_modules": None,
            "lora_dropout": 0.05,
            "bias": "none",
        },
    ),
    # LoRA + trainable tokens
    (
        LoraConfig,
        {
            "task_type": "SEQ_CLS",
            "r": 8,
            "lora_alpha": 32,
            "target_modules": None,
            "lora_dropout": 0.05,
            "bias": "none",
            "trainable_token_indices": [0, 1, 3],
        },
    ),
    (
        OFTConfig,
        {
            "task_type": "SEQ_CLS",
            "target_modules": None,
        },
    ),
    (
        PrefixTuningConfig,
        {
            "task_type": "SEQ_CLS",
            "num_virtual_tokens": 10,
        },
    ),
    (
        PromptEncoderConfig,
        {
            "task_type": "SEQ_CLS",
            "num_virtual_tokens": 10,
            "encoder_hidden_size": 32,
        },
    ),
    (
        PromptTuningConfig,
        {
            "task_type": "SEQ_CLS",
            "num_virtual_tokens": 10,
        },
    ),
    (
        RoadConfig,
        {
            "task_type": "SEQ_CLS",
            "variant": "road_1",
            "group_size": 2,
        },
    ),
    (
        ShiraConfig,
        {
            "r": 1,
            "task_type": "SEQ_CLS",
            "target_modules": None,
            "init_weights": False,
        },
    ),
    (
        VBLoRAConfig,
        {
            "task_type": "SEQ_CLS",
            "target_modules": None,
            "vblora_dropout": 0.05,
            "vector_length": 1,
            "num_vectors": 2,
        },
    ),
    (
        VeraConfig,
        {
            "task_type": "SEQ_CLS",
            "r": 8,
            "target_modules": None,
            "vera_dropout": 0.05,
            "projection_prng_key": 0xFF,
            "d_initial": 0.1,
            "save_projection": True,
            "bias": "none",
        },
    ),
    (
        C3AConfig,
        {
            "task_type": "SEQ_CLS",
            "block_size": 1,
            "target_modules": None,
        },
    ),
]


class TestSequenceClassificationModels(PeftCommonTester):
    r"""
    Tests for basic coverage of AutoModelForSequenceClassification and classification-specific cases.

    Most of the functionality is probably already covered by other tests.
""" transformers_class = AutoModelForSequenceClassification def skipTest(self, reason=""): # for backwards compatibility with unittest style test classes pytest.skip(reason) def prepare_inputs_for_testing(self): input_ids = torch.tensor([[1, 1, 1], [1, 2, 1]]).to(self.torch_device) attention_mask = torch.tensor([[1, 1, 1], [1, 0, 1]]).to(self.torch_device) return {"input_ids": input_ids, "attention_mask": attention_mask} @pytest.mark.parametrize("model_id", PEFT_SEQ_CLS_MODELS_TO_TEST) @pytest.mark.parametrize("config_cls,config_kwargs", ALL_CONFIGS) def test_attributes_parametrized(self, model_id, config_cls, config_kwargs): self._test_model_attr(model_id, config_cls, config_kwargs.copy()) @pytest.mark.parametrize("model_id", PEFT_SEQ_CLS_MODELS_TO_TEST) @pytest.mark.parametrize("config_cls,config_kwargs", ALL_CONFIGS) def test_adapter_name(self, model_id, config_cls, config_kwargs): self._test_adapter_name(model_id, config_cls, config_kwargs.copy()) @pytest.mark.parametrize("model_id", PEFT_SEQ_CLS_MODELS_TO_TEST) @pytest.mark.parametrize("config_cls,config_kwargs", ALL_CONFIGS) def test_prepare_for_training_parametrized(self, model_id, config_cls, config_kwargs): self._test_prepare_for_training(model_id, config_cls, config_kwargs.copy()) @pytest.mark.parametrize("model_id", PEFT_SEQ_CLS_MODELS_TO_TEST) @pytest.mark.parametrize("config_cls,config_kwargs", ALL_CONFIGS) def test_prompt_tuning_text_prepare_for_training(self, model_id, config_cls, config_kwargs): if config_cls != PromptTuningConfig: pytest.skip(f"This test does not apply to {config_cls}") config_kwargs = config_kwargs.copy() config_kwargs["prompt_tuning_init"] = PromptTuningInit.TEXT config_kwargs["prompt_tuning_init_text"] = "This is a test prompt." config_kwargs["tokenizer_name_or_path"] = model_id self._test_prepare_for_training(model_id, config_cls, config_kwargs.copy()) @pytest.mark.parametrize("model_id", PEFT_SEQ_CLS_MODELS_TO_TEST) @pytest.mark.parametrize("config_cls,config_kwargs", ALL_CONFIGS) def test_save_pretrained(self, model_id, config_cls, config_kwargs): self._test_save_pretrained(model_id, config_cls, config_kwargs.copy()) @pytest.mark.parametrize("model_id", PEFT_SEQ_CLS_MODELS_TO_TEST) @pytest.mark.parametrize("config_cls,config_kwargs", ALL_CONFIGS) def test_save_pretrained_pickle(self, model_id, config_cls, config_kwargs): self._test_save_pretrained(model_id, config_cls, config_kwargs.copy(), safe_serialization=False) @pytest.mark.parametrize("model_id", PEFT_SEQ_CLS_MODELS_TO_TEST) @pytest.mark.parametrize("config_cls,config_kwargs", ALL_CONFIGS) def test_save_pretrained_selected_adapters(self, model_id, config_cls, config_kwargs): self._test_save_pretrained_selected_adapters(model_id, config_cls, config_kwargs.copy()) @pytest.mark.parametrize("model_id", PEFT_SEQ_CLS_MODELS_TO_TEST) @pytest.mark.parametrize("config_cls,config_kwargs", ALL_CONFIGS) def test_save_pretrained_selected_adapters_pickle(self, model_id, config_cls, config_kwargs): self._test_save_pretrained_selected_adapters( model_id, config_cls, config_kwargs.copy(), safe_serialization=False ) @pytest.mark.parametrize("model_id", PEFT_SEQ_CLS_MODELS_TO_TEST) @pytest.mark.parametrize("config_cls,config_kwargs", ALL_CONFIGS) def test_from_pretrained_config_construction(self, model_id, config_cls, config_kwargs): self._test_from_pretrained_config_construction(model_id, config_cls, config_kwargs.copy()) @pytest.mark.parametrize("model_id", PEFT_SEQ_CLS_MODELS_TO_TEST) @pytest.mark.parametrize("config_cls,config_kwargs", ALL_CONFIGS) def 
test_modules_to_save_correctly_set(self, model_id, config_cls, config_kwargs): # tests for a regression, introduced via #2220, where modules_to_save was not applied to prompt learning methods with hub_online_once(model_id): model = self.transformers_class.from_pretrained(model_id) config = config_cls( base_model_name_or_path=model_id, **config_kwargs, ) model = get_peft_model(model, config) base_model = model.get_base_model() # classifier layer is called either "classifier" or "score" classifier = getattr(base_model, "classifier", getattr(base_model, "score", None)) if classifier is None: raise ValueError(f"Could not determine classifier layer name for {model_id}, please fix the test") assert isinstance(classifier, ModulesToSaveWrapper)
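
# Illustrative sketch (not part of the test matrix above): what the ModulesToSaveWrapper
# check boils down to for a single model/config pair. The model id is taken from
# PEFT_SEQ_CLS_MODELS_TO_TEST; the minimal LoraConfig is an assumption for illustration.
def _example_classifier_head_is_wrapped():
    model = AutoModelForSequenceClassification.from_pretrained(
        "hf-internal-testing/tiny-random-BertForSequenceClassification"
    )
    model = get_peft_model(model, LoraConfig(task_type="SEQ_CLS"))
    base = model.get_base_model()
    # BERT-style models call the head "classifier", LLaMA-style models call it "score"
    head = getattr(base, "classifier", getattr(base, "score", None))
    assert isinstance(head, ModulesToSaveWrapper)  # the head stays trainable and gets saved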
peft/tests/test_seq_classifier.py/0
{ "file_path": "peft/tests/test_seq_classifier.py", "repo_id": "peft", "token_count": 4713 }
247
""" Convert weights from https://github.com/google-research/nested-transformer NOTE: You'll need https://github.com/google/CommonLoopUtils, not included in requirements.txt """ import sys import numpy as np import torch from clu import checkpoint arch_depths = { 'nest_base': [2, 2, 20], 'nest_small': [2, 2, 20], 'nest_tiny': [2, 2, 8], } def convert_nest(checkpoint_path, arch): """ Expects path to checkpoint which is a dir containing 4 files like in each of these folders - https://console.cloud.google.com/storage/browser/gresearch/nest-checkpoints `arch` is needed to Returns a state dict that can be used with `torch.nn.Module.load_state_dict` Hint: Follow timm.models.nest.Nest.__init__ and https://github.com/google-research/nested-transformer/blob/main/models/nest_net.py """ assert arch in ['nest_base', 'nest_small', 'nest_tiny'], "Your `arch` is not supported" flax_dict = checkpoint.load_state_dict(checkpoint_path)['optimizer']['target'] state_dict = {} # Patch embedding state_dict['patch_embed.proj.weight'] = torch.tensor( flax_dict['PatchEmbedding_0']['Conv_0']['kernel']).permute(3, 2, 0, 1) state_dict['patch_embed.proj.bias'] = torch.tensor(flax_dict['PatchEmbedding_0']['Conv_0']['bias']) # Positional embeddings posemb_keys = [k for k in flax_dict.keys() if k.startswith('PositionEmbedding')] for i, k in enumerate(posemb_keys): state_dict[f'levels.{i}.pos_embed'] = torch.tensor(flax_dict[k]['pos_embedding']) # Transformer encoders depths = arch_depths[arch] for level in range(len(depths)): for layer in range(depths[level]): global_layer_ix = sum(depths[:level]) + layer # Norms for i in range(2): state_dict[f'levels.{level}.transformer_encoder.{layer}.norm{i+1}.weight'] = torch.tensor( flax_dict[f'EncoderNDBlock_{global_layer_ix}'][f'LayerNorm_{i}']['scale']) state_dict[f'levels.{level}.transformer_encoder.{layer}.norm{i+1}.bias'] = torch.tensor( flax_dict[f'EncoderNDBlock_{global_layer_ix}'][f'LayerNorm_{i}']['bias']) # Attention qkv w_q = flax_dict[f'EncoderNDBlock_{global_layer_ix}']['MultiHeadAttention_0']['DenseGeneral_0']['kernel'] w_kv = flax_dict[f'EncoderNDBlock_{global_layer_ix}']['MultiHeadAttention_0']['DenseGeneral_1']['kernel'] # Pay attention to dims here (maybe get pen and paper) w_kv = np.concatenate(np.split(w_kv, 2, -1), 1) w_qkv = np.concatenate([w_q, w_kv], 1) state_dict[f'levels.{level}.transformer_encoder.{layer}.attn.qkv.weight'] = torch.tensor(w_qkv).flatten(1).permute(1,0) b_q = flax_dict[f'EncoderNDBlock_{global_layer_ix}']['MultiHeadAttention_0']['DenseGeneral_0']['bias'] b_kv = flax_dict[f'EncoderNDBlock_{global_layer_ix}']['MultiHeadAttention_0']['DenseGeneral_1']['bias'] # Pay attention to dims here (maybe get pen and paper) b_kv = np.concatenate(np.split(b_kv, 2, -1), 0) b_qkv = np.concatenate([b_q, b_kv], 0) state_dict[f'levels.{level}.transformer_encoder.{layer}.attn.qkv.bias'] = torch.tensor(b_qkv).reshape(-1) # Attention proj w_proj = flax_dict[f'EncoderNDBlock_{global_layer_ix}']['MultiHeadAttention_0']['proj_kernel'] w_proj = torch.tensor(w_proj).permute(2, 1, 0).flatten(1) state_dict[f'levels.{level}.transformer_encoder.{layer}.attn.proj.weight'] = w_proj state_dict[f'levels.{level}.transformer_encoder.{layer}.attn.proj.bias'] = torch.tensor( flax_dict[f'EncoderNDBlock_{global_layer_ix}']['MultiHeadAttention_0']['bias']) # MLP for i in range(2): state_dict[f'levels.{level}.transformer_encoder.{layer}.mlp.fc{i+1}.weight'] = torch.tensor( flax_dict[f'EncoderNDBlock_{global_layer_ix}']['MlpBlock_0'][f'Dense_{i}']['kernel']).permute(1, 0) 
state_dict[f'levels.{level}.transformer_encoder.{layer}.mlp.fc{i+1}.bias'] = torch.tensor( flax_dict[f'EncoderNDBlock_{global_layer_ix}']['MlpBlock_0'][f'Dense_{i}']['bias']) # Block aggregations (ConvPool) for level in range(1, len(depths)): # Convs state_dict[f'levels.{level}.pool.conv.weight'] = torch.tensor( flax_dict[f'ConvPool_{level-1}']['Conv_0']['kernel']).permute(3, 2, 0, 1) state_dict[f'levels.{level}.pool.conv.bias'] = torch.tensor( flax_dict[f'ConvPool_{level-1}']['Conv_0']['bias']) # Norms state_dict[f'levels.{level}.pool.norm.weight'] = torch.tensor( flax_dict[f'ConvPool_{level-1}']['LayerNorm_0']['scale']) state_dict[f'levels.{level}.pool.norm.bias'] = torch.tensor( flax_dict[f'ConvPool_{level-1}']['LayerNorm_0']['bias']) # Final norm state_dict[f'norm.weight'] = torch.tensor(flax_dict['LayerNorm_0']['scale']) state_dict[f'norm.bias'] = torch.tensor(flax_dict['LayerNorm_0']['bias']) # Classifier state_dict['head.weight'] = torch.tensor(flax_dict['Dense_0']['kernel']).permute(1, 0) state_dict['head.bias'] = torch.tensor(flax_dict['Dense_0']['bias']) return state_dict if __name__ == '__main__': variant = sys.argv[1] # base, small, or tiny state_dict = convert_nest(f'./nest-{variant[0]}_imagenet', f'nest_{variant}') torch.save(state_dict, f'./jx_nest_{variant}.pth')
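
# Shape-only illustration (not used by the conversion above) of the q/kv merge performed
# in convert_nest. Sizes are made up for the demo: dim D, heads H, head_dim hd. It assumes
# the Flax checkpoint packs k and v along the last axis of the DenseGeneral_1 kernel.
def _demo_qkv_merge(D=8, H=2, hd=4):
    w_q = np.zeros((D, H, hd))
    w_kv = np.zeros((D, H, 2 * hd))
    w_kv = np.concatenate(np.split(w_kv, 2, -1), 1)  # -> (D, 2*H, hd): k heads, then v heads
    w_qkv = np.concatenate([w_q, w_kv], 1)           # -> (D, 3*H, hd): q, k, v heads stacked
    qkv = torch.tensor(w_qkv).flatten(1).permute(1, 0)
    assert qkv.shape == (3 * H * hd, D)              # fused qkv weight layout expected by timm
    return qkv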
pytorch-image-models/convert/convert_nest_flax.py/0
{ "file_path": "pytorch-image-models/convert/convert_nest_flax.py", "repo_id": "pytorch-image-models", "token_count": 2670 }
248
# DenseNet

**DenseNet** is a type of convolutional neural network that utilises dense connections between layers, through [Dense Blocks](http://www.paperswithcode.com/method/dense-block), where we connect *all layers* (with matching feature-map sizes) directly with each other. To preserve the feed-forward nature, each layer obtains additional inputs from all preceding layers and passes on its own feature-maps to all subsequent layers.

The **DenseNet Blur** variant in this collection by Ross Wightman employs [Blur Pooling](http://www.paperswithcode.com/method/blur-pooling).

## How do I use this model on an image?

To load a pretrained model:

```py
>>> import timm
>>> model = timm.create_model('densenet121', pretrained=True)
>>> model.eval()
```

To load and preprocess the image:

```py
>>> import urllib
>>> from PIL import Image
>>> from timm.data import resolve_data_config
>>> from timm.data.transforms_factory import create_transform

>>> config = resolve_data_config({}, model=model)
>>> transform = create_transform(**config)

>>> url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg")
>>> urllib.request.urlretrieve(url, filename)
>>> img = Image.open(filename).convert('RGB')
>>> tensor = transform(img).unsqueeze(0) # transform and add batch dimension
```

To get the model predictions:

```py
>>> import torch
>>> with torch.inference_mode():
...     out = model(tensor)
>>> probabilities = torch.nn.functional.softmax(out[0], dim=0)
>>> print(probabilities.shape)
>>> # prints: torch.Size([1000])
```

To get the top-5 predictions class names:

```py
>>> # Get imagenet class mappings
>>> url, filename = ("https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt", "imagenet_classes.txt")
>>> urllib.request.urlretrieve(url, filename)
>>> with open("imagenet_classes.txt", "r") as f:
...     categories = [s.strip() for s in f.readlines()]

>>> # Print top categories per image
>>> top5_prob, top5_catid = torch.topk(probabilities, 5)
>>> for i in range(top5_prob.size(0)):
...     print(categories[top5_catid[i]], top5_prob[i].item())
>>> # prints class names and probabilities like:
>>> # [('Samoyed', 0.6425196528434753), ('Pomeranian', 0.04062102362513542), ('keeshond', 0.03186424449086189), ('white wolf', 0.01739676296710968), ('Eskimo dog', 0.011717947199940681)]
```

Replace the model name with the variant you want to use, e.g. `densenet121`. You can find the IDs in the model summaries at the top of this page.

To extract image features with this model, follow the [timm feature extraction examples](../feature_extraction), just change the name of the model you want to use.

## How do I finetune this model?

You can finetune any of the pre-trained models just by changing the classifier (the last layer).

```py
>>> model = timm.create_model('densenet121', pretrained=True, num_classes=NUM_FINETUNE_CLASSES)
```

To finetune on your own dataset, you have to write a training loop or adapt [timm's training script](https://github.com/rwightman/pytorch-image-models/blob/master/train.py) to use your dataset.

## How do I train this model?

You can follow the [timm recipe scripts](../training_script) for training a new model afresh.

## Citation

```BibTeX
@article{DBLP:journals/corr/HuangLW16a,
  author    = {Gao Huang and
               Zhuang Liu and
               Kilian Q.
Weinberger}, title = {Densely Connected Convolutional Networks}, journal = {CoRR}, volume = {abs/1608.06993}, year = {2016}, url = {http://arxiv.org/abs/1608.06993}, archivePrefix = {arXiv}, eprint = {1608.06993}, timestamp = {Mon, 10 Sep 2018 15:49:32 +0200}, biburl = {https://dblp.org/rec/journals/corr/HuangLW16a.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` ``` @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/rwightman/pytorch-image-models}} } ``` <!-- Type: model-index Collections: - Name: DenseNet Paper: Title: Densely Connected Convolutional Networks URL: https://paperswithcode.com/paper/densely-connected-convolutional-networks Models: - Name: densenet121 In Collection: DenseNet Metadata: FLOPs: 3641843200 Parameters: 7980000 File Size: 32376726 Architecture: - 1x1 Convolution - Average Pooling - Batch Normalization - Convolution - Dense Block - Dense Connections - Dropout - Max Pooling - ReLU - Softmax Tasks: - Image Classification Training Techniques: - Kaiming Initialization - Nesterov Accelerated Gradient - Weight Decay Training Data: - ImageNet ID: densenet121 LR: 0.1 Epochs: 90 Layers: 121 Dropout: 0.2 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 256 Image Size: '224' Weight Decay: 0.0001 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/densenet.py#L295 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/densenet121_ra-50efcf5c.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 75.56% Top 5 Accuracy: 92.65% - Name: densenet161 In Collection: DenseNet Metadata: FLOPs: 9931959264 Parameters: 28680000 File Size: 115730790 Architecture: - 1x1 Convolution - Average Pooling - Batch Normalization - Convolution - Dense Block - Dense Connections - Dropout - Max Pooling - ReLU - Softmax Tasks: - Image Classification Training Techniques: - Kaiming Initialization - Nesterov Accelerated Gradient - Weight Decay Training Data: - ImageNet ID: densenet161 LR: 0.1 Epochs: 90 Layers: 161 Dropout: 0.2 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 256 Image Size: '224' Weight Decay: 0.0001 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/densenet.py#L347 Weights: https://download.pytorch.org/models/densenet161-8d451a50.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 77.36% Top 5 Accuracy: 93.63% - Name: densenet169 In Collection: DenseNet Metadata: FLOPs: 4316945792 Parameters: 14150000 File Size: 57365526 Architecture: - 1x1 Convolution - Average Pooling - Batch Normalization - Convolution - Dense Block - Dense Connections - Dropout - Max Pooling - ReLU - Softmax Tasks: - Image Classification Training Techniques: - Kaiming Initialization - Nesterov Accelerated Gradient - Weight Decay Training Data: - ImageNet ID: densenet169 LR: 0.1 Epochs: 90 Layers: 169 Dropout: 0.2 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 256 Image Size: '224' Weight Decay: 0.0001 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/densenet.py#L327 Weights: https://download.pytorch.org/models/densenet169-b2777c0a.pth Results: - Task: Image Classification Dataset: 
ImageNet Metrics: Top 1 Accuracy: 75.9% Top 5 Accuracy: 93.02% - Name: densenet201 In Collection: DenseNet Metadata: FLOPs: 5514321024 Parameters: 20010000 File Size: 81131730 Architecture: - 1x1 Convolution - Average Pooling - Batch Normalization - Convolution - Dense Block - Dense Connections - Dropout - Max Pooling - ReLU - Softmax Tasks: - Image Classification Training Techniques: - Kaiming Initialization - Nesterov Accelerated Gradient - Weight Decay Training Data: - ImageNet ID: densenet201 LR: 0.1 Epochs: 90 Layers: 201 Dropout: 0.2 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 256 Image Size: '224' Weight Decay: 0.0001 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/densenet.py#L337 Weights: https://download.pytorch.org/models/densenet201-c1103571.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 77.29% Top 5 Accuracy: 93.48% - Name: densenetblur121d In Collection: DenseNet Metadata: FLOPs: 3947812864 Parameters: 8000000 File Size: 32456500 Architecture: - 1x1 Convolution - Batch Normalization - Blur Pooling - Convolution - Dense Block - Dense Connections - Dropout - Max Pooling - ReLU - Softmax Tasks: - Image Classification Training Data: - ImageNet ID: densenetblur121d Crop Pct: '0.875' Image Size: '224' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/densenet.py#L305 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/densenetblur121d_ra-100dcfbc.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 76.59% Top 5 Accuracy: 93.2% - Name: tv_densenet121 In Collection: DenseNet Metadata: FLOPs: 3641843200 Parameters: 7980000 File Size: 32342954 Architecture: - 1x1 Convolution - Average Pooling - Batch Normalization - Convolution - Dense Block - Dense Connections - Dropout - Max Pooling - ReLU - Softmax Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet ID: tv_densenet121 LR: 0.1 Epochs: 90 Crop Pct: '0.875' LR Gamma: 0.1 Momentum: 0.9 Batch Size: 32 Image Size: '224' LR Step Size: 30 Weight Decay: 0.0001 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/densenet.py#L379 Weights: https://download.pytorch.org/models/densenet121-a639ec97.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 74.74% Top 5 Accuracy: 92.15% -->
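
As a minimal illustration of the dense connectivity described at the top of this page, here is a hand-written sketch of a dense block (illustrative only, not timm's implementation): every layer receives the concatenation of all preceding feature maps and contributes `growth` new channels.

```py
import torch
import torch.nn as nn

class TinyDenseBlock(nn.Module):
    """Illustrative dense block: each layer sees all previous feature maps."""
    def __init__(self, in_ch: int, growth: int, num_layers: int):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Sequential(
                nn.BatchNorm2d(in_ch + i * growth),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_ch + i * growth, growth, kernel_size=3, padding=1),
            )
            for i in range(num_layers)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for layer in self.layers:
            # dense connection: concatenate everything produced so far
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)

out = TinyDenseBlock(16, growth=8, num_layers=4)(torch.randn(1, 16, 32, 32))
print(out.shape)  # torch.Size([1, 48, 32, 32]) = 16 + 4 * 8 channels
```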
pytorch-image-models/hfdocs/source/models/densenet.mdx/0
{ "file_path": "pytorch-image-models/hfdocs/source/models/densenet.mdx", "repo_id": "pytorch-image-models", "token_count": 4191 }
249
# Instagram ResNeXt WSL

A **ResNeXt** repeats a [building block](https://paperswithcode.com/method/resnext-block) that aggregates a set of transformations with the same topology. Compared to a [ResNet](https://paperswithcode.com/method/resnet), it exposes a new dimension, *cardinality* (the size of the set of transformations) \\( C \\), as an essential factor in addition to the dimensions of depth and width.

These models were trained on billions of Instagram images, using thousands of distinct hashtags as labels, and exhibit excellent transfer learning performance.

Please note the CC-BY-NC 4.0 license on these weights; they are for non-commercial use only.

## How do I use this model on an image?

To load a pretrained model:

```py
>>> import timm
>>> model = timm.create_model('ig_resnext101_32x16d', pretrained=True)
>>> model.eval()
```

To load and preprocess the image:

```py
>>> import urllib
>>> from PIL import Image
>>> from timm.data import resolve_data_config
>>> from timm.data.transforms_factory import create_transform

>>> config = resolve_data_config({}, model=model)
>>> transform = create_transform(**config)

>>> url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg")
>>> urllib.request.urlretrieve(url, filename)
>>> img = Image.open(filename).convert('RGB')
>>> tensor = transform(img).unsqueeze(0) # transform and add batch dimension
```

To get the model predictions:

```py
>>> import torch
>>> with torch.inference_mode():
...     out = model(tensor)
>>> probabilities = torch.nn.functional.softmax(out[0], dim=0)
>>> print(probabilities.shape)
>>> # prints: torch.Size([1000])
```

To get the top-5 predictions class names:

```py
>>> # Get imagenet class mappings
>>> url, filename = ("https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt", "imagenet_classes.txt")
>>> urllib.request.urlretrieve(url, filename)
>>> with open("imagenet_classes.txt", "r") as f:
...     categories = [s.strip() for s in f.readlines()]

>>> # Print top categories per image
>>> top5_prob, top5_catid = torch.topk(probabilities, 5)
>>> for i in range(top5_prob.size(0)):
...     print(categories[top5_catid[i]], top5_prob[i].item())
>>> # prints class names and probabilities like:
>>> # [('Samoyed', 0.6425196528434753), ('Pomeranian', 0.04062102362513542), ('keeshond', 0.03186424449086189), ('white wolf', 0.01739676296710968), ('Eskimo dog', 0.011717947199940681)]
```

Replace the model name with the variant you want to use, e.g. `ig_resnext101_32x16d`. You can find the IDs in the model summaries at the top of this page.

To extract image features with this model, follow the [timm feature extraction examples](../feature_extraction), just change the name of the model you want to use.

## How do I finetune this model?

You can finetune any of the pre-trained models just by changing the classifier (the last layer).

```py
>>> model = timm.create_model('ig_resnext101_32x16d', pretrained=True, num_classes=NUM_FINETUNE_CLASSES)
```

To finetune on your own dataset, you have to write a training loop or adapt [timm's training script](https://github.com/rwightman/pytorch-image-models/blob/master/train.py) to use your dataset.

## How do I train this model?

You can follow the [timm recipe scripts](../training_script) for training a new model afresh.
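
To make the cardinality idea above concrete, here is a minimal sketch (hand-written for illustration, not timm's implementation) of a ResNeXt bottleneck, where the set of \\( C \\) transformations is realized as a grouped convolution:

```py
import torch
import torch.nn as nn

class TinyResNeXtBlock(nn.Module):
    """Illustrative ResNeXt block: cardinality C == groups of the 3x3 conv."""
    def __init__(self, channels: int, cardinality: int = 32, width: int = 4):
        super().__init__()
        inner = cardinality * width  # e.g. 32 groups of 4 channels each
        self.net = nn.Sequential(
            nn.Conv2d(channels, inner, 1, bias=False), nn.BatchNorm2d(inner), nn.ReLU(inplace=True),
            # the grouped conv is the "set of transformations with the same topology"
            nn.Conv2d(inner, inner, 3, padding=1, groups=cardinality, bias=False),
            nn.BatchNorm2d(inner), nn.ReLU(inplace=True),
            nn.Conv2d(inner, channels, 1, bias=False), nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.net(x))  # residual connection

y = TinyResNeXtBlock(256)(torch.randn(1, 256, 14, 14))
print(y.shape)  # torch.Size([1, 256, 14, 14])
```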
## Citation ```BibTeX @misc{mahajan2018exploring, title={Exploring the Limits of Weakly Supervised Pretraining}, author={Dhruv Mahajan and Ross Girshick and Vignesh Ramanathan and Kaiming He and Manohar Paluri and Yixuan Li and Ashwin Bharambe and Laurens van der Maaten}, year={2018}, eprint={1805.00932}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` <!-- Type: model-index Collections: - Name: IG ResNeXt Paper: Title: Exploring the Limits of Weakly Supervised Pretraining URL: https://paperswithcode.com/paper/exploring-the-limits-of-weakly-supervised Models: - Name: ig_resnext101_32x16d In Collection: IG ResNeXt Metadata: FLOPs: 46623691776 Parameters: 194030000 File Size: 777518664 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Global Average Pooling - Grouped Convolution - Max Pooling - ReLU - ResNeXt Block - Residual Connection - Softmax Tasks: - Image Classification Training Techniques: - Nesterov Accelerated Gradient - Weight Decay Training Data: - IG-3.5B-17k - ImageNet Training Resources: 336x GPUs ID: ig_resnext101_32x16d Epochs: 100 Layers: 101 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 8064 Image Size: '224' Weight Decay: 0.001 Interpolation: bilinear Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnet.py#L874 Weights: https://download.pytorch.org/models/ig_resnext101_32x16-c6f796b0.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 84.16% Top 5 Accuracy: 97.19% - Name: ig_resnext101_32x32d In Collection: IG ResNeXt Metadata: FLOPs: 112225170432 Parameters: 468530000 File Size: 1876573776 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Global Average Pooling - Grouped Convolution - Max Pooling - ReLU - ResNeXt Block - Residual Connection - Softmax Tasks: - Image Classification Training Techniques: - Nesterov Accelerated Gradient - Weight Decay Training Data: - IG-3.5B-17k - ImageNet Training Resources: 336x GPUs ID: ig_resnext101_32x32d Epochs: 100 Layers: 101 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 8064 Image Size: '224' Weight Decay: 0.001 Interpolation: bilinear Minibatch Size: 8064 Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnet.py#L885 Weights: https://download.pytorch.org/models/ig_resnext101_32x32-e4b90b00.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 85.09% Top 5 Accuracy: 97.44% - Name: ig_resnext101_32x48d In Collection: IG ResNeXt Metadata: FLOPs: 197446554624 Parameters: 828410000 File Size: 3317136976 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Global Average Pooling - Grouped Convolution - Max Pooling - ReLU - ResNeXt Block - Residual Connection - Softmax Tasks: - Image Classification Training Techniques: - Nesterov Accelerated Gradient - Weight Decay Training Data: - IG-3.5B-17k - ImageNet Training Resources: 336x GPUs ID: ig_resnext101_32x48d Epochs: 100 Layers: 101 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 8064 Image Size: '224' Weight Decay: 0.001 Interpolation: bilinear Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnet.py#L896 Weights: https://download.pytorch.org/models/ig_resnext101_32x48-3e41cc8a.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 85.42% Top 5 Accuracy: 97.58% - Name: ig_resnext101_32x8d In Collection: IG ResNeXt Metadata: FLOPs: 21180417024 
Parameters: 88790000 File Size: 356056638 Architecture: - 1x1 Convolution - Batch Normalization - Convolution - Global Average Pooling - Grouped Convolution - Max Pooling - ReLU - ResNeXt Block - Residual Connection - Softmax Tasks: - Image Classification Training Techniques: - Nesterov Accelerated Gradient - Weight Decay Training Data: - IG-3.5B-17k - ImageNet Training Resources: 336x GPUs ID: ig_resnext101_32x8d Epochs: 100 Layers: 101 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 8064 Image Size: '224' Weight Decay: 0.001 Interpolation: bilinear Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnet.py#L863 Weights: https://download.pytorch.org/models/ig_resnext101_32x8-c38310e5.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 82.7% Top 5 Accuracy: 96.64% -->
pytorch-image-models/hfdocs/source/models/ig-resnext.mdx/0
{ "file_path": "pytorch-image-models/hfdocs/source/models/ig-resnext.mdx", "repo_id": "pytorch-image-models", "token_count": 3234 }
250
# Res2Net **Res2Net** is an image model that employs a variation on bottleneck residual blocks, [Res2Net Blocks](https://paperswithcode.com/method/res2net-block). The motivation is to be able to represent features at multiple scales. This is achieved through a novel building block for CNNs that constructs hierarchical residual-like connections within one single residual block. This represents multi-scale features at a granular level and increases the range of receptive fields for each network layer. ## How do I use this model on an image? To load a pretrained model: ```py >>> import timm >>> model = timm.create_model('res2net101_26w_4s', pretrained=True) >>> model.eval() ``` To load and preprocess the image: ```py >>> import urllib >>> from PIL import Image >>> from timm.data import resolve_data_config >>> from timm.data.transforms_factory import create_transform >>> config = resolve_data_config({}, model=model) >>> transform = create_transform(**config) >>> url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg") >>> urllib.request.urlretrieve(url, filename) >>> img = Image.open(filename).convert('RGB') >>> tensor = transform(img).unsqueeze(0) # transform and add batch dimension ``` To get the model predictions: ```py >>> import torch >>> with torch.inference_mode(): ... out = model(tensor) >>> probabilities = torch.nn.functional.softmax(out[0], dim=0) >>> print(probabilities.shape) >>> # prints: torch.Size([1000]) ``` To get the top-5 predictions class names: ```py >>> # Get imagenet class mappings >>> url, filename = ("https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt", "imagenet_classes.txt") >>> urllib.request.urlretrieve(url, filename) >>> with open("imagenet_classes.txt", "r") as f: ... categories = [s.strip() for s in f.readlines()] >>> # Print top categories per image >>> top5_prob, top5_catid = torch.topk(probabilities, 5) >>> for i in range(top5_prob.size(0)): ... print(categories[top5_catid[i]], top5_prob[i].item()) >>> # prints class names and probabilities like: >>> # [('Samoyed', 0.6425196528434753), ('Pomeranian', 0.04062102362513542), ('keeshond', 0.03186424449086189), ('white wolf', 0.01739676296710968), ('Eskimo dog', 0.011717947199940681)] ``` Replace the model name with the variant you want to use, e.g. `res2net101_26w_4s`. You can find the IDs in the model summaries at the top of this page. To extract image features with this model, follow the [timm feature extraction examples](../feature_extraction), just change the name of the model you want to use. ## How do I finetune this model? You can finetune any of the pre-trained models just by changing the classifier (the last layer). ```py >>> model = timm.create_model('res2net101_26w_4s', pretrained=True, num_classes=NUM_FINETUNE_CLASSES) ``` To finetune on your own dataset, you have to write a training loop or adapt [timm's training script](https://github.com/rwightman/pytorch-image-models/blob/master/train.py) to use your dataset. ## How do I train this model? You can follow the [timm recipe scripts](../training_script) for training a new model afresh. 
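
To make the "hierarchical residual-like connections" above concrete, here is a minimal sketch (hand-written for illustration, not timm's implementation) of the 3x3 stage of a Res2Net block: the features are split into `scale` groups, and each group's convolution also receives the previous group's output, so the receptive field grows within a single block.

```py
import torch
import torch.nn as nn

class TinyRes2NetSplit(nn.Module):
    """Illustrative hierarchical 3x3 stage of a Res2Net block with scale s."""
    def __init__(self, channels: int, scale: int = 4):
        super().__init__()
        assert channels % scale == 0
        self.width = channels // scale
        # one 3x3 conv per split except the first, which passes through unchanged
        self.convs = nn.ModuleList(
            nn.Conv2d(self.width, self.width, 3, padding=1) for _ in range(scale - 1)
        )

    def forward(self, x):
        splits = torch.split(x, self.width, dim=1)
        outs = [splits[0]]  # y1 = x1
        prev = None
        for conv, xi in zip(self.convs, splits[1:]):
            prev = conv(xi if prev is None else xi + prev)  # y_i = K_i(x_i + y_{i-1})
            outs.append(prev)
        return torch.cat(outs, dim=1)  # multi-scale features, same channel count

y = TinyRes2NetSplit(64, scale=4)(torch.randn(1, 64, 14, 14))
print(y.shape)  # torch.Size([1, 64, 14, 14])
```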
## Citation ```BibTeX @article{Gao_2021, title={Res2Net: A New Multi-Scale Backbone Architecture}, volume={43}, ISSN={1939-3539}, url={http://dx.doi.org/10.1109/TPAMI.2019.2938758}, DOI={10.1109/tpami.2019.2938758}, number={2}, journal={IEEE Transactions on Pattern Analysis and Machine Intelligence}, publisher={Institute of Electrical and Electronics Engineers (IEEE)}, author={Gao, Shang-Hua and Cheng, Ming-Ming and Zhao, Kai and Zhang, Xin-Yu and Yang, Ming-Hsuan and Torr, Philip}, year={2021}, month={Feb}, pages={652–662} } ``` <!-- Type: model-index Collections: - Name: Res2Net Paper: Title: 'Res2Net: A New Multi-scale Backbone Architecture' URL: https://paperswithcode.com/paper/res2net-a-new-multi-scale-backbone Models: - Name: res2net101_26w_4s In Collection: Res2Net Metadata: FLOPs: 10415881200 Parameters: 45210000 File Size: 181456059 Architecture: - Batch Normalization - Convolution - Global Average Pooling - ReLU - Res2Net Block Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 4x Titan Xp GPUs ID: res2net101_26w_4s LR: 0.1 Epochs: 100 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 256 Image Size: '224' Weight Decay: 0.0001 Interpolation: bilinear Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/res2net.py#L152 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-res2net/res2net101_26w_4s-02a759a1.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 79.19% Top 5 Accuracy: 94.43% - Name: res2net50_14w_8s In Collection: Res2Net Metadata: FLOPs: 5403546768 Parameters: 25060000 File Size: 100638543 Architecture: - Batch Normalization - Convolution - Global Average Pooling - ReLU - Res2Net Block Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 4x Titan Xp GPUs ID: res2net50_14w_8s LR: 0.1 Epochs: 100 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 256 Image Size: '224' Weight Decay: 0.0001 Interpolation: bilinear Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/res2net.py#L196 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-res2net/res2net50_14w_8s-6527dddc.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 78.14% Top 5 Accuracy: 93.86% - Name: res2net50_26w_4s In Collection: Res2Net Metadata: FLOPs: 5499974064 Parameters: 25700000 File Size: 103110087 Architecture: - Batch Normalization - Convolution - Global Average Pooling - ReLU - Res2Net Block Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 4x Titan Xp GPUs ID: res2net50_26w_4s LR: 0.1 Epochs: 100 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 256 Image Size: '224' Weight Decay: 0.0001 Interpolation: bilinear Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/res2net.py#L141 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-res2net/res2net50_26w_4s-06e79181.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 77.99% Top 5 Accuracy: 93.85% - Name: res2net50_26w_6s In Collection: Res2Net Metadata: FLOPs: 8130156528 Parameters: 37050000 File Size: 148603239 Architecture: - Batch Normalization - Convolution - Global 
Average Pooling - ReLU - Res2Net Block Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 4x Titan Xp GPUs ID: res2net50_26w_6s LR: 0.1 Epochs: 100 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 256 Image Size: '224' Weight Decay: 0.0001 Interpolation: bilinear Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/res2net.py#L163 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-res2net/res2net50_26w_6s-19041792.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 78.57% Top 5 Accuracy: 94.12% - Name: res2net50_26w_8s In Collection: Res2Net Metadata: FLOPs: 10760338992 Parameters: 48400000 File Size: 194085165 Architecture: - Batch Normalization - Convolution - Global Average Pooling - ReLU - Res2Net Block Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 4x Titan Xp GPUs ID: res2net50_26w_8s LR: 0.1 Epochs: 100 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 256 Image Size: '224' Weight Decay: 0.0001 Interpolation: bilinear Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/res2net.py#L174 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-res2net/res2net50_26w_8s-2c7c9f12.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 79.19% Top 5 Accuracy: 94.37% - Name: res2net50_48w_2s In Collection: Res2Net Metadata: FLOPs: 5375291520 Parameters: 25290000 File Size: 101421406 Architecture: - Batch Normalization - Convolution - Global Average Pooling - ReLU - Res2Net Block Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 4x Titan Xp GPUs ID: res2net50_48w_2s LR: 0.1 Epochs: 100 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 256 Image Size: '224' Weight Decay: 0.0001 Interpolation: bilinear Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/res2net.py#L185 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-res2net/res2net50_48w_2s-afed724a.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 77.53% Top 5 Accuracy: 93.56% -->
pytorch-image-models/hfdocs/source/models/res2net.mdx/0
{ "file_path": "pytorch-image-models/hfdocs/source/models/res2net.mdx", "repo_id": "pytorch-image-models", "token_count": 3953 }
251
# (Tensorflow) EfficientNet CondConv

**EfficientNet** is a convolutional neural network architecture and scaling method that uniformly scales all dimensions of depth/width/resolution using a *compound coefficient*. Unlike conventional practice that arbitrarily scales these factors, the EfficientNet scaling method uniformly scales network width, depth, and resolution with a set of fixed scaling coefficients. For example, if we want to use \\( 2^N \\) times more computational resources, then we can simply increase the network depth by \\( \alpha ^ N \\), width by \\( \beta ^ N \\), and image size by \\( \gamma ^ N \\), where \\( \alpha, \beta, \gamma \\) are constant coefficients determined by a small grid search on the original small model. EfficientNet uses a compound coefficient \\( \phi \\) to uniformly scale network width, depth, and resolution in a principled way.

The compound scaling method is justified by the intuition that if the input image is bigger, then the network needs more layers to increase the receptive field and more channels to capture more fine-grained patterns on the bigger image.

The base EfficientNet-B0 network is based on the inverted bottleneck residual blocks of [MobileNetV2](https://paperswithcode.com/method/mobilenetv2), in addition to [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block).

This collection of models amends EfficientNet by adding [CondConv](https://paperswithcode.com/method/condconv) convolutions.

The weights from this model were ported from [Tensorflow/TPU](https://github.com/tensorflow/tpu).

## How do I use this model on an image?

To load a pretrained model:

```py
>>> import timm
>>> model = timm.create_model('tf_efficientnet_cc_b0_4e', pretrained=True)
>>> model.eval()
```

To load and preprocess the image:

```py
>>> import urllib
>>> from PIL import Image
>>> from timm.data import resolve_data_config
>>> from timm.data.transforms_factory import create_transform

>>> config = resolve_data_config({}, model=model)
>>> transform = create_transform(**config)

>>> url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg")
>>> urllib.request.urlretrieve(url, filename)
>>> img = Image.open(filename).convert('RGB')
>>> tensor = transform(img).unsqueeze(0) # transform and add batch dimension
```

To get the model predictions:

```py
>>> import torch
>>> with torch.inference_mode():
...     out = model(tensor)
>>> probabilities = torch.nn.functional.softmax(out[0], dim=0)
>>> print(probabilities.shape)
>>> # prints: torch.Size([1000])
```

To get the top-5 predictions class names:

```py
>>> # Get imagenet class mappings
>>> url, filename = ("https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt", "imagenet_classes.txt")
>>> urllib.request.urlretrieve(url, filename)
>>> with open("imagenet_classes.txt", "r") as f:
...     categories = [s.strip() for s in f.readlines()]

>>> # Print top categories per image
>>> top5_prob, top5_catid = torch.topk(probabilities, 5)
>>> for i in range(top5_prob.size(0)):
...     print(categories[top5_catid[i]], top5_prob[i].item())
>>> # prints class names and probabilities like:
>>> # [('Samoyed', 0.6425196528434753), ('Pomeranian', 0.04062102362513542), ('keeshond', 0.03186424449086189), ('white wolf', 0.01739676296710968), ('Eskimo dog', 0.011717947199940681)]
```

Replace the model name with the variant you want to use, e.g. `tf_efficientnet_cc_b0_4e`. You can find the IDs in the model summaries at the top of this page.
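The CondConv idea mentioned in the introduction (a bank of expert kernels mixed per example by a routing function) can be sketched compactly. The toy module below is illustrative only, with made-up sizes, and is not the implementation behind these weights:

```py
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCondConv2d(nn.Module):
    """Illustrative CondConv: per-example routing over a bank of expert kernels."""
    def __init__(self, in_ch: int, out_ch: int, k: int = 3, num_experts: int = 4):
        super().__init__()
        self.router = nn.Linear(in_ch, num_experts)  # routing weights r(x)
        self.experts = nn.Parameter(torch.randn(num_experts, out_ch, in_ch, k, k) * 0.01)
        self.k = k

    def forward(self, x):
        b, c, h, w_ = x.shape
        r = torch.sigmoid(self.router(x.mean(dim=(2, 3))))    # (B, E) from pooled input
        w = torch.einsum("be,eoihw->boihw", r, self.experts)  # mix kernels per example
        w = w.reshape(-1, c, self.k, self.k)                  # (B*O, I, k, k)
        x = x.reshape(1, b * c, h, w_)                        # fold batch into groups
        y = F.conv2d(x, w, padding=self.k // 2, groups=b)
        return y.reshape(b, -1, h, w_)

y = TinyCondConv2d(8, 16)(torch.randn(2, 8, 14, 14))
print(y.shape)  # torch.Size([2, 16, 14, 14])
```
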
To extract image features with this model, follow the [timm feature extraction examples](../feature_extraction), just change the name of the model you want to use. ## How do I finetune this model? You can finetune any of the pre-trained models just by changing the classifier (the last layer). ```py >>> model = timm.create_model('tf_efficientnet_cc_b0_4e', pretrained=True, num_classes=NUM_FINETUNE_CLASSES) ``` To finetune on your own dataset, you have to write a training loop or adapt [timm's training script](https://github.com/rwightman/pytorch-image-models/blob/master/train.py) to use your dataset. ## How do I train this model? You can follow the [timm recipe scripts](../training_script) for training a new model afresh. ## Citation ```BibTeX @article{DBLP:journals/corr/abs-1904-04971, author = {Brandon Yang and Gabriel Bender and Quoc V. Le and Jiquan Ngiam}, title = {Soft Conditional Computation}, journal = {CoRR}, volume = {abs/1904.04971}, year = {2019}, url = {http://arxiv.org/abs/1904.04971}, archivePrefix = {arXiv}, eprint = {1904.04971}, timestamp = {Thu, 25 Apr 2019 13:55:01 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1904-04971.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` <!-- Type: model-index Collections: - Name: TF EfficientNet CondConv Paper: Title: 'CondConv: Conditionally Parameterized Convolutions for Efficient Inference' URL: https://paperswithcode.com/paper/soft-conditional-computation Models: - Name: tf_efficientnet_cc_b0_4e In Collection: TF EfficientNet CondConv Metadata: FLOPs: 224153788 Parameters: 13310000 File Size: 53490940 Architecture: - 1x1 Convolution - Average Pooling - Batch Normalization - CondConv - Convolution - Dense Connections - Dropout - Inverted Residual Block - Squeeze-and-Excitation Block - Swish Tasks: - Image Classification Training Techniques: - AutoAugment - Label Smoothing - RMSProp - Stochastic Depth - Weight Decay Training Data: - ImageNet ID: tf_efficientnet_cc_b0_4e LR: 0.256 Epochs: 350 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 2048 Image Size: '224' Weight Decay: 1.0e-05 Interpolation: bicubic RMSProp Decay: 0.9 Label Smoothing: 0.1 BatchNorm Momentum: 0.99 Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1561 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_cc_b0_4e-4362b6b2.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 77.32% Top 5 Accuracy: 93.32% - Name: tf_efficientnet_cc_b0_8e In Collection: TF EfficientNet CondConv Metadata: FLOPs: 224158524 Parameters: 24010000 File Size: 96287616 Architecture: - 1x1 Convolution - Average Pooling - Batch Normalization - CondConv - Convolution - Dense Connections - Dropout - Inverted Residual Block - Squeeze-and-Excitation Block - Swish Tasks: - Image Classification Training Techniques: - AutoAugment - Label Smoothing - RMSProp - Stochastic Depth - Weight Decay Training Data: - ImageNet ID: tf_efficientnet_cc_b0_8e LR: 0.256 Epochs: 350 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 2048 Image Size: '224' Weight Decay: 1.0e-05 Interpolation: bicubic RMSProp Decay: 0.9 Label Smoothing: 0.1 BatchNorm Momentum: 0.99 Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1572 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_cc_b0_8e-66184a25.pth 
Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 77.91% Top 5 Accuracy: 93.65% - Name: tf_efficientnet_cc_b1_8e In Collection: TF EfficientNet CondConv Metadata: FLOPs: 370427824 Parameters: 39720000 File Size: 159206198 Architecture: - 1x1 Convolution - Average Pooling - Batch Normalization - CondConv - Convolution - Dense Connections - Dropout - Inverted Residual Block - Squeeze-and-Excitation Block - Swish Tasks: - Image Classification Training Techniques: - AutoAugment - Label Smoothing - RMSProp - Stochastic Depth - Weight Decay Training Data: - ImageNet ID: tf_efficientnet_cc_b1_8e LR: 0.256 Epochs: 350 Crop Pct: '0.882' Momentum: 0.9 Batch Size: 2048 Image Size: '240' Weight Decay: 1.0e-05 Interpolation: bicubic RMSProp Decay: 0.9 Label Smoothing: 0.1 BatchNorm Momentum: 0.99 Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1584 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_cc_b1_8e-f7c79ae1.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 79.33% Top 5 Accuracy: 94.37% -->
pytorch-image-models/hfdocs/source/models/tf-efficientnet-condconv.mdx/0
{ "file_path": "pytorch-image-models/hfdocs/source/models/tf-efficientnet-condconv.mdx", "repo_id": "pytorch-image-models", "token_count": 3327 }
252
dependencies = ['torch']
import timm

# Expose all registered timm model entrypoints at module level so that
# torch.hub can discover and instantiate them by name.
globals().update(timm.models._registry._model_entrypoints)
pytorch-image-models/hubconf.py/0
{ "file_path": "pytorch-image-models/hubconf.py", "repo_id": "pytorch-image-models", "token_count": 32 }
253
[dist_conda] conda_name_differences = 'torch:pytorch' channels = pytorch noarch = True [metadata] url = "https://github.com/huggingface/pytorch-image-models"
pytorch-image-models/setup.cfg/0
{ "file_path": "pytorch-image-models/setup.cfg", "repo_id": "pytorch-image-models", "token_count": 65 }
254
""" Quick n Simple Image Folder, Tarfile based DataSet Hacked together by / Copyright 2019, Ross Wightman """ import io import logging from typing import Optional import torch import torch.utils.data as data from PIL import Image from .readers import create_reader _logger = logging.getLogger(__name__) _ERROR_RETRY = 50 class ImageDataset(data.Dataset): def __init__( self, root, reader=None, split='train', class_map=None, load_bytes=False, input_img_mode='RGB', transform=None, target_transform=None, **kwargs, ): if reader is None or isinstance(reader, str): reader = create_reader( reader or '', root=root, split=split, class_map=class_map, **kwargs, ) self.reader = reader self.load_bytes = load_bytes self.input_img_mode = input_img_mode self.transform = transform self.target_transform = target_transform self._consecutive_errors = 0 def __getitem__(self, index): img, target = self.reader[index] try: img = img.read() if self.load_bytes else Image.open(img) except Exception as e: _logger.warning(f'Skipped sample (index {index}, file {self.reader.filename(index)}). {str(e)}') self._consecutive_errors += 1 if self._consecutive_errors < _ERROR_RETRY: return self.__getitem__((index + 1) % len(self.reader)) else: raise e self._consecutive_errors = 0 if self.input_img_mode and not self.load_bytes: img = img.convert(self.input_img_mode) if self.transform is not None: img = self.transform(img) if target is None: target = -1 elif self.target_transform is not None: target = self.target_transform(target) return img, target def __len__(self): return len(self.reader) def filename(self, index, basename=False, absolute=False): return self.reader.filename(index, basename, absolute) def filenames(self, basename=False, absolute=False): return self.reader.filenames(basename, absolute) class IterableImageDataset(data.IterableDataset): def __init__( self, root, reader=None, split='train', class_map=None, is_training=False, batch_size=1, num_samples=None, seed=42, repeats=0, download=False, input_img_mode='RGB', input_key=None, target_key=None, transform=None, target_transform=None, max_steps=None, **kwargs, ): assert reader is not None if isinstance(reader, str): self.reader = create_reader( reader, root=root, split=split, class_map=class_map, is_training=is_training, batch_size=batch_size, num_samples=num_samples, seed=seed, repeats=repeats, download=download, input_img_mode=input_img_mode, input_key=input_key, target_key=target_key, max_steps=max_steps, **kwargs, ) else: self.reader = reader self.transform = transform self.target_transform = target_transform self._consecutive_errors = 0 def __iter__(self): for img, target in self.reader: if self.transform is not None: img = self.transform(img) if self.target_transform is not None: target = self.target_transform(target) yield img, target def __len__(self): if hasattr(self.reader, '__len__'): return len(self.reader) else: return 0 def set_epoch(self, count): # TFDS and WDS need external epoch count for deterministic cross process shuffle if hasattr(self.reader, 'set_epoch'): self.reader.set_epoch(count) def set_loader_cfg( self, num_workers: Optional[int] = None, ): # TFDS and WDS readers need # workers for correct # samples estimate before loader processes created if hasattr(self.reader, 'set_loader_cfg'): self.reader.set_loader_cfg(num_workers=num_workers) def filename(self, index, basename=False, absolute=False): assert False, 'Filename lookup by index not supported, use filenames().' 
def filenames(self, basename=False, absolute=False): return self.reader.filenames(basename, absolute) class AugMixDataset(torch.utils.data.Dataset): """Dataset wrapper to perform AugMix or other clean/augmentation mixes""" def __init__(self, dataset, num_splits=2): self.augmentation = None self.normalize = None self.dataset = dataset if self.dataset.transform is not None: self._set_transforms(self.dataset.transform) self.num_splits = num_splits def _set_transforms(self, x): assert isinstance(x, (list, tuple)) and len(x) == 3, 'Expecting a tuple/list of 3 transforms' self.dataset.transform = x[0] self.augmentation = x[1] self.normalize = x[2] @property def transform(self): return self.dataset.transform @transform.setter def transform(self, x): self._set_transforms(x) def _normalize(self, x): return x if self.normalize is None else self.normalize(x) def __getitem__(self, i): x, y = self.dataset[i] # all splits share the same dataset base transform x_list = [self._normalize(x)] # first split only normalizes (this is the 'clean' split) # run the full augmentation on the remaining splits for _ in range(self.num_splits - 1): x_list.append(self._normalize(self.augmentation(x))) return tuple(x_list), y def __len__(self): return len(self.dataset)
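# Usage sketch (an illustrative addition): build an ImageDataset over a
# hypothetical ImageFolder-style tree at ./data/train, using timm's standard
# transform factory for preprocessing.
if __name__ == '__main__':
    from timm.data import create_transform

    dataset = ImageDataset('./data/train', transform=create_transform(input_size=224))
    img, target = dataset[0]  # transformed image tensor and integer class target
    print(img.shape, target)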
pytorch-image-models/timm/data/dataset.py/0
{ "file_path": "pytorch-image-models/timm/data/dataset.py", "repo_id": "pytorch-image-models", "token_count": 2991 }
255
from abc import abstractmethod


class Reader:
    """Abstract base class for timm dataset readers.

    Subclasses provide sample access and implement _filename() so the
    filename()/filenames() helpers below work uniformly.
    """

    def __init__(self):
        pass

    @abstractmethod
    def _filename(self, index, basename=False, absolute=False):
        pass

    def filename(self, index, basename=False, absolute=False):
        return self._filename(index, basename=basename, absolute=absolute)

    def filenames(self, basename=False, absolute=False):
        return [self._filename(index, basename=basename, absolute=absolute) for index in range(len(self))]
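import os


# Illustrative sketch (not part of the original file): a minimal concrete
# subclass. ListReader and its in-memory (path, target) sample list are
# hypothetical; real readers also implement __getitem__/__len__ like this so
# the dataset classes can index into them.
class ListReader(Reader):
    """Toy reader over an in-memory list of (path, target) pairs."""

    def __init__(self, samples):
        super().__init__()
        self.samples = samples  # list of (image path, class index) tuples

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, index):
        path, target = self.samples[index]
        return open(path, 'rb'), target  # file-like object, later decoded by PIL

    def _filename(self, index, basename=False, absolute=False):
        filename = self.samples[index][0]
        return os.path.basename(filename) if basename else filename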
pytorch-image-models/timm/data/readers/reader.py/0
{ "file_path": "pytorch-image-models/timm/data/readers/reader.py", "repo_id": "pytorch-image-models", "token_count": 171 }
256
""" Activations A collection of activations fn and modules with a common interface so that they can easily be swapped. All have an `inplace` arg even if not used. Hacked together by / Copyright 2020 Ross Wightman """ import torch from torch import nn as nn from torch.nn import functional as F def swish(x, inplace: bool = False): """Swish - Described in: https://arxiv.org/abs/1710.05941 """ return x.mul_(x.sigmoid()) if inplace else x.mul(x.sigmoid()) class Swish(nn.Module): def __init__(self, inplace: bool = False): super(Swish, self).__init__() self.inplace = inplace def forward(self, x): return swish(x, self.inplace) def mish(x, inplace: bool = False): """Mish: A Self Regularized Non-Monotonic Neural Activation Function - https://arxiv.org/abs/1908.08681 NOTE: I don't have a working inplace variant """ return x.mul(F.softplus(x).tanh()) class Mish(nn.Module): """Mish: A Self Regularized Non-Monotonic Neural Activation Function - https://arxiv.org/abs/1908.08681 """ def __init__(self, inplace: bool = False): super(Mish, self).__init__() def forward(self, x): return mish(x) def sigmoid(x, inplace: bool = False): return x.sigmoid_() if inplace else x.sigmoid() # PyTorch has this, but not with a consistent inplace argument interface class Sigmoid(nn.Module): def __init__(self, inplace: bool = False): super(Sigmoid, self).__init__() self.inplace = inplace def forward(self, x): return x.sigmoid_() if self.inplace else x.sigmoid() def tanh(x, inplace: bool = False): return x.tanh_() if inplace else x.tanh() # PyTorch has this, but not with a consistent inplace argument interface class Tanh(nn.Module): def __init__(self, inplace: bool = False): super(Tanh, self).__init__() self.inplace = inplace def forward(self, x): return x.tanh_() if self.inplace else x.tanh() def hard_swish(x, inplace: bool = False): inner = F.relu6(x + 3.).div_(6.) return x.mul_(inner) if inplace else x.mul(inner) class HardSwish(nn.Module): def __init__(self, inplace: bool = False): super(HardSwish, self).__init__() self.inplace = inplace def forward(self, x): return hard_swish(x, self.inplace) def hard_sigmoid(x, inplace: bool = False): if inplace: return x.add_(3.).clamp_(0., 6.).div_(6.) else: return F.relu6(x + 3.) / 6. 
class HardSigmoid(nn.Module): def __init__(self, inplace: bool = False): super(HardSigmoid, self).__init__() self.inplace = inplace def forward(self, x): return hard_sigmoid(x, self.inplace) def hard_mish(x, inplace: bool = False): """ Hard Mish Experimental, based on notes by Mish author Diganta Misra at https://github.com/digantamisra98/H-Mish/blob/0da20d4bc58e696b6803f2523c58d3c8a82782d0/README.md """ if inplace: return x.mul_(0.5 * (x + 2).clamp(min=0, max=2)) else: return 0.5 * x * (x + 2).clamp(min=0, max=2) class HardMish(nn.Module): def __init__(self, inplace: bool = False): super(HardMish, self).__init__() self.inplace = inplace def forward(self, x): return hard_mish(x, self.inplace) class PReLU(nn.PReLU): """Applies PReLU (w/ dummy inplace arg) """ def __init__(self, num_parameters: int = 1, init: float = 0.25, inplace: bool = False) -> None: super(PReLU, self).__init__(num_parameters=num_parameters, init=init) def forward(self, input: torch.Tensor) -> torch.Tensor: return F.prelu(input, self.weight) def gelu(x: torch.Tensor, inplace: bool = False) -> torch.Tensor: return F.gelu(x) class GELU(nn.Module): """Applies the Gaussian Error Linear Units function (w/ dummy inplace arg) """ def __init__(self, inplace: bool = False): super(GELU, self).__init__() def forward(self, input: torch.Tensor) -> torch.Tensor: return F.gelu(input) def gelu_tanh(x: torch.Tensor, inplace: bool = False) -> torch.Tensor: return F.gelu(x, approximate='tanh') class GELUTanh(nn.Module): """Applies the Gaussian Error Linear Units function (w/ dummy inplace arg) """ def __init__(self, inplace: bool = False): super(GELUTanh, self).__init__() def forward(self, input: torch.Tensor) -> torch.Tensor: return F.gelu(input, approximate='tanh') def quick_gelu(x: torch.Tensor, inplace: bool = False) -> torch.Tensor: return x * torch.sigmoid(1.702 * x) class QuickGELU(nn.Module): """Applies the Gaussian Error Linear Units function (w/ dummy inplace arg) """ def __init__(self, inplace: bool = False): super(QuickGELU, self).__init__() def forward(self, input: torch.Tensor) -> torch.Tensor: return quick_gelu(input)
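# Quick sanity sketch (an illustrative addition): the module wrappers match the
# functional forms, and inplace=True mutates the input instead of allocating a
# new tensor.
if __name__ == '__main__':
    x = torch.randn(4, 8)
    y = HardSwish(inplace=False)(x)
    assert torch.allclose(y, hard_swish(x.clone(), inplace=True))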
pytorch-image-models/timm/layers/activations.py/0
{ "file_path": "pytorch-image-models/timm/layers/activations.py", "repo_id": "pytorch-image-models", "token_count": 2008 }
257
""" Attention Factory Hacked together by / Copyright 2021 Ross Wightman """ import torch from functools import partial from .bottleneck_attn import BottleneckAttn from .cbam import CbamModule, LightCbamModule from .eca import EcaModule, CecaModule from .gather_excite import GatherExcite from .global_context import GlobalContext from .halo_attn import HaloAttn from .lambda_layer import LambdaLayer from .non_local_attn import NonLocalAttn, BatNonLocalAttn from .selective_kernel import SelectiveKernel from .split_attn import SplitAttn from .squeeze_excite import SEModule, EffectiveSEModule def get_attn(attn_type): if isinstance(attn_type, torch.nn.Module): return attn_type module_cls = None if attn_type: if isinstance(attn_type, str): attn_type = attn_type.lower() # Lightweight attention modules (channel and/or coarse spatial). # Typically added to existing network architecture blocks in addition to existing convolutions. if attn_type == 'se': module_cls = SEModule elif attn_type == 'ese': module_cls = EffectiveSEModule elif attn_type == 'eca': module_cls = EcaModule elif attn_type == 'ecam': module_cls = partial(EcaModule, use_mlp=True) elif attn_type == 'ceca': module_cls = CecaModule elif attn_type == 'ge': module_cls = GatherExcite elif attn_type == 'gc': module_cls = GlobalContext elif attn_type == 'gca': module_cls = partial(GlobalContext, fuse_add=True, fuse_scale=False) elif attn_type == 'cbam': module_cls = CbamModule elif attn_type == 'lcbam': module_cls = LightCbamModule # Attention / attention-like modules w/ significant params # Typically replace some of the existing workhorse convs in a network architecture. # All of these accept a stride argument and can spatially downsample the input. elif attn_type == 'sk': module_cls = SelectiveKernel elif attn_type == 'splat': module_cls = SplitAttn # Self-attention / attention-like modules w/ significant compute and/or params # Typically replace some of the existing workhorse convs in a network architecture. # All of these accept a stride argument and can spatially downsample the input. elif attn_type == 'lambda': return LambdaLayer elif attn_type == 'bottleneck': return BottleneckAttn elif attn_type == 'halo': return HaloAttn elif attn_type == 'nl': module_cls = NonLocalAttn elif attn_type == 'bat': module_cls = BatNonLocalAttn # Woops! else: assert False, "Invalid attn module (%s)" % attn_type elif isinstance(attn_type, bool): if attn_type: module_cls = SEModule else: module_cls = attn_type return module_cls def create_attn(attn_type, channels, **kwargs): module_cls = get_attn(attn_type) if module_cls is not None: # NOTE: it's expected the first (positional) argument of all attention layers is the # input channels return module_cls(channels, **kwargs) return None
pytorch-image-models/timm/layers/create_attn.py/0
{ "file_path": "pytorch-image-models/timm/layers/create_attn.py", "repo_id": "pytorch-image-models", "token_count": 1588 }
258
""" Image to Patch Hybird Embedding Layer Hacked together by / Copyright 2020 Ross Wightman """ import logging import math from typing import List, Optional, Tuple, Union import torch from torch import nn as nn import torch.nn.functional as F from .format import Format, nchw_to from .helpers import to_2tuple from .patch_embed import resample_patch_embed _logger = logging.getLogger(__name__) class HybridEmbed(nn.Module): """ CNN Feature Map Embedding Extract feature map from CNN, flatten, project to embedding dim. """ output_fmt: Format dynamic_img_pad: torch.jit.Final[bool] def __init__( self, backbone: nn.Module, img_size: Union[int, Tuple[int, int]] = 224, patch_size: Union[int, Tuple[int, int]] = 1, feature_size: Optional[Union[int, Tuple[int, int]]] = None, feature_ratio: Optional[Union[int, Tuple[int, int]]] = None, in_chans: int = 3, embed_dim: int = 768, bias: bool = True, proj: bool = True, flatten: bool = True, output_fmt: Optional[str] = None, strict_img_size: bool = True, dynamic_img_pad: bool = False, ): super().__init__() assert isinstance(backbone, nn.Module) self.backbone = backbone self.in_chans = in_chans ( self.img_size, self.patch_size, self.feature_size, self.feature_ratio, self.feature_dim, self.grid_size, self.num_patches, ) = self._init_backbone( img_size=img_size, patch_size=patch_size, feature_size=feature_size, feature_ratio=feature_ratio, ) if output_fmt is not None: self.flatten = False self.output_fmt = Format(output_fmt) else: # flatten spatial dim and transpose to channels last, kept for bwd compat self.flatten = flatten self.output_fmt = Format.NCHW self.strict_img_size = strict_img_size self.dynamic_img_pad = dynamic_img_pad if not dynamic_img_pad: assert self.feature_size[0] % self.patch_size[0] == 0 and self.feature_size[1] % self.patch_size[1] == 0 if proj: self.proj = nn.Conv2d( self.feature_dim, embed_dim, kernel_size=patch_size, stride=patch_size, bias=bias, ) else: assert self.feature_dim == embed_dim, \ f'The feature dim ({self.feature_dim} must match embed dim ({embed_dim}) when projection disabled.' 
self.proj = nn.Identity() def _init_backbone( self, img_size: Union[int, Tuple[int, int]] = 224, patch_size: Union[int, Tuple[int, int]] = 1, feature_size: Optional[Union[int, Tuple[int, int]]] = None, feature_ratio: Optional[Union[int, Tuple[int, int]]] = None, feature_dim: Optional[int] = None, ): img_size = to_2tuple(img_size) patch_size = to_2tuple(patch_size) if feature_size is None: with torch.no_grad(): # NOTE Most reliable way of determining output dims is to run forward pass training = self.backbone.training if training: self.backbone.eval() o = self.backbone(torch.zeros(1, self.in_chans, img_size[0], img_size[1])) if isinstance(o, (list, tuple)): o = o[-1] # last feature if backbone outputs list/tuple of features feature_size = o.shape[-2:] feature_dim = o.shape[1] self.backbone.train(training) feature_ratio = tuple([s // f for s, f in zip(img_size, feature_size)]) else: feature_size = to_2tuple(feature_size) feature_ratio = to_2tuple(feature_ratio or 16) if feature_dim is None: if hasattr(self.backbone, 'feature_info'): feature_dim = self.backbone.feature_info.channels()[-1] else: feature_dim = self.backbone.num_features grid_size = tuple([f // p for f, p in zip(feature_size, patch_size)]) num_patches = grid_size[0] * grid_size[1] return img_size, patch_size, feature_size, feature_ratio, feature_dim, grid_size, num_patches def set_input_size( self, img_size: Optional[Union[int, Tuple[int, int]]] = None, patch_size: Optional[Union[int, Tuple[int, int]]] = None, feature_size: Optional[Union[int, Tuple[int, int]]] = None, feature_ratio: Optional[Union[int, Tuple[int, int]]] = None, feature_dim: Optional[int] = None, ): assert img_size is not None or patch_size is not None img_size = img_size or self.img_size new_patch_size = None if patch_size is not None: new_patch_size = to_2tuple(patch_size) if new_patch_size is not None and new_patch_size != self.patch_size: assert isinstance(self.proj, nn.Conv2d), 'HybridEmbed must have a projection layer to change patch size.' 
with torch.no_grad(): new_proj = nn.Conv2d( self.proj.in_channels, self.proj.out_channels, kernel_size=new_patch_size, stride=new_patch_size, bias=self.proj.bias is not None, ) new_proj.weight.copy_(resample_patch_embed(self.proj.weight, new_patch_size, verbose=True)) if self.proj.bias is not None: new_proj.bias.copy_(self.proj.bias) self.proj = new_proj patch_size = new_patch_size patch_size = patch_size or self.patch_size if img_size != self.img_size or patch_size != self.patch_size: ( self.img_size, self.patch_size, self.feature_size, self.feature_ratio, self.feature_dim, self.grid_size, self.num_patches, ) = self._init_backbone( img_size=img_size, patch_size=patch_size, feature_size=feature_size, feature_ratio=feature_ratio, feature_dim=feature_dim, ) def feat_ratio(self, as_scalar=True) -> Union[Tuple[int, int], int]: total_reduction = ( self.feature_ratio[0] * self.patch_size[0], self.feature_ratio[1] * self.patch_size[1] ) if as_scalar: return max(total_reduction) else: return total_reduction def dynamic_feat_size(self, img_size: Tuple[int, int]) -> Tuple[int, int]: """ Get feature grid size taking account dynamic padding and backbone network feat reduction """ feat_size = (img_size[0] // self.feature_ratio[0], img_size[1] // self.feature_ratio[1]) if self.dynamic_img_pad: return math.ceil(feat_size[0] / self.patch_size[0]), math.ceil(feat_size[1] / self.patch_size[1]) else: return feat_size[0] // self.patch_size[0], feat_size[1] // self.patch_size[1] @torch.jit.ignore def set_grad_checkpointing(self, enable: bool = True): if hasattr(self.backbone, 'set_grad_checkpointing'): self.backbone.set_grad_checkpointing(enable=enable) elif hasattr(self.backbone, 'grad_checkpointing'): self.backbone.grad_checkpointing = enable def forward(self, x): x = self.backbone(x) if isinstance(x, (list, tuple)): x = x[-1] # last feature if backbone outputs list/tuple of features _, _, H, W = x.shape if self.dynamic_img_pad: pad_h = (self.patch_size[0] - H % self.patch_size[0]) % self.patch_size[0] pad_w = (self.patch_size[1] - W % self.patch_size[1]) % self.patch_size[1] x = F.pad(x, (0, pad_w, 0, pad_h)) x = self.proj(x) if self.flatten: x = x.flatten(2).transpose(1, 2) # NCHW -> NLC elif self.output_fmt != Format.NCHW: x = nchw_to(x, self.output_fmt) return x class HybridEmbedWithSize(HybridEmbed): """ CNN Feature Map Embedding Extract feature map from CNN, flatten, project to embedding dim. """ def __init__( self, backbone: nn.Module, img_size: Union[int, Tuple[int, int]] = 224, patch_size: Union[int, Tuple[int, int]] = 1, feature_size: Optional[Union[int, Tuple[int, int]]] = None, feature_ratio: Optional[Union[int, Tuple[int, int]]] = None, in_chans: int = 3, embed_dim: int = 768, bias=True, proj=True, ): super().__init__( backbone=backbone, img_size=img_size, patch_size=patch_size, feature_size=feature_size, feature_ratio=feature_ratio, in_chans=in_chans, embed_dim=embed_dim, bias=bias, proj=proj, ) @torch.jit.ignore def set_grad_checkpointing(self, enable: bool = True): if hasattr(self.backbone, 'set_grad_checkpointing'): self.backbone.set_grad_checkpointing(enable=enable) elif hasattr(self.backbone, 'grad_checkpointing'): self.backbone.grad_checkpointing = enable def forward(self, x) -> Tuple[torch.Tensor, List[int]]: x = self.backbone(x) if isinstance(x, (list, tuple)): x = x[-1] # last feature if backbone outputs list/tuple of features x = self.proj(x) return x.flatten(2).transpose(1, 2), x.shape[-2:]
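# Usage sketch (an illustrative addition): wrap a small timm CNN as the patch
# embedding backbone. The resnet26d choice and out_indices are assumptions;
# any backbone producing a 2D feature map (or a list of them) works.
if __name__ == '__main__':
    import timm

    backbone = timm.create_model('resnet26d', pretrained=False, features_only=True, out_indices=(3,))
    embed = HybridEmbed(backbone, img_size=224, embed_dim=768)
    tokens = embed(torch.randn(1, 3, 224, 224))
    print(tokens.shape)  # (1, num_patches, 768)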
pytorch-image-models/timm/layers/hybrid_embed.py/0
{ "file_path": "pytorch-image-models/timm/layers/hybrid_embed.py", "repo_id": "pytorch-image-models", "token_count": 5059 }
259
import torch


def global_pool_nlc(
        x: torch.Tensor,
        pool_type: str = 'token',
        num_prefix_tokens: int = 1,
        reduce_include_prefix: bool = False,
):
    """Global pooling over the token (sequence) dim of NLC-format features.

    pool_type selects the class token ('token') or a reduction over the
    remaining tokens ('avg', 'avgmax', 'max'); an empty pool_type is a no-op.
    """
    if not pool_type:
        return x

    if pool_type == 'token':
        x = x[:, 0]  # class token
    else:
        x = x if reduce_include_prefix else x[:, num_prefix_tokens:]
        if pool_type == 'avg':
            x = x.mean(dim=1)
        elif pool_type == 'avgmax':
            x = 0.5 * (x.amax(dim=1) + x.mean(dim=1))
        elif pool_type == 'max':
            x = x.amax(dim=1)
        else:
            assert not pool_type, f'Unknown pool type {pool_type}'

    return x
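# Usage sketch (an illustrative addition) on ViT-style tokens:
# batch of 2, one class token plus 196 patch tokens, 192 channels.
if __name__ == '__main__':
    tokens = torch.randn(2, 197, 192)
    cls = global_pool_nlc(tokens, pool_type='token')  # (2, 192), class token only
    avg = global_pool_nlc(tokens, pool_type='avg')    # (2, 192), mean over patch tokens
    assert cls.shape == avg.shape == (2, 192)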
pytorch-image-models/timm/layers/pool1d.py/0
{ "file_path": "pytorch-image-models/timm/layers/pool1d.py", "repo_id": "pytorch-image-models", "token_count": 350 }
260
from .asymmetric_loss import AsymmetricLossMultiLabel, AsymmetricLossSingleLabel
from .binary_cross_entropy import BinaryCrossEntropy
from .cross_entropy import LabelSmoothingCrossEntropy, SoftTargetCrossEntropy
from .jsd import JsdCrossEntropy
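# Usage sketch (an illustrative addition): smoothed cross-entropy over random
# logits for a hypothetical 10-class problem.
if __name__ == '__main__':
    import torch

    criterion = LabelSmoothingCrossEntropy(smoothing=0.1)
    loss = criterion(torch.randn(8, 10), torch.randint(0, 10, (8,)))
    print(loss.item())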
pytorch-image-models/timm/loss/__init__.py/0
{ "file_path": "pytorch-image-models/timm/loss/__init__.py", "repo_id": "pytorch-image-models", "token_count": 70 }
261
import os import pkgutil from copy import deepcopy from torch import nn as nn from timm.layers import Conv2dSame, BatchNormAct2d, Linear __all__ = ['extract_layer', 'set_layer', 'adapt_model_from_string', 'adapt_model_from_file'] def extract_layer(model, layer): layer = layer.split('.') module = model if hasattr(model, 'module') and layer[0] != 'module': module = model.module if not hasattr(model, 'module') and layer[0] == 'module': layer = layer[1:] for l in layer: if hasattr(module, l): if not l.isdigit(): module = getattr(module, l) else: module = module[int(l)] else: return module return module def set_layer(model, layer, val): layer = layer.split('.') module = model if hasattr(model, 'module') and layer[0] != 'module': module = model.module lst_index = 0 module2 = module for l in layer: if hasattr(module2, l): if not l.isdigit(): module2 = getattr(module2, l) else: module2 = module2[int(l)] lst_index += 1 lst_index -= 1 for l in layer[:lst_index]: if not l.isdigit(): module = getattr(module, l) else: module = module[int(l)] l = layer[lst_index] setattr(module, l, val) def adapt_model_from_string(parent_module, model_string): separator = '***' state_dict = {} lst_shape = model_string.split(separator) for k in lst_shape: k = k.split(':') key = k[0] shape = k[1][1:-1].split(',') if shape[0] != '': state_dict[key] = [int(i) for i in shape] new_module = deepcopy(parent_module) for n, m in parent_module.named_modules(): old_module = extract_layer(parent_module, n) if isinstance(old_module, nn.Conv2d) or isinstance(old_module, Conv2dSame): if isinstance(old_module, Conv2dSame): conv = Conv2dSame else: conv = nn.Conv2d s = state_dict[n + '.weight'] in_channels = s[1] out_channels = s[0] g = 1 if old_module.groups > 1: in_channels = out_channels g = in_channels new_conv = conv( in_channels=in_channels, out_channels=out_channels, kernel_size=old_module.kernel_size, bias=old_module.bias is not None, padding=old_module.padding, dilation=old_module.dilation, groups=g, stride=old_module.stride) set_layer(new_module, n, new_conv) elif isinstance(old_module, BatchNormAct2d): new_bn = BatchNormAct2d( state_dict[n + '.weight'][0], eps=old_module.eps, momentum=old_module.momentum, affine=old_module.affine, track_running_stats=True) new_bn.drop = old_module.drop new_bn.act = old_module.act set_layer(new_module, n, new_bn) elif isinstance(old_module, nn.BatchNorm2d): new_bn = nn.BatchNorm2d( num_features=state_dict[n + '.weight'][0], eps=old_module.eps, momentum=old_module.momentum, affine=old_module.affine, track_running_stats=True) set_layer(new_module, n, new_bn) elif isinstance(old_module, nn.Linear): # FIXME extra checks to ensure this is actually the FC classifier layer and not a diff Linear layer? num_features = state_dict[n + '.weight'][1] new_fc = Linear( in_features=num_features, out_features=old_module.out_features, bias=old_module.bias is not None) set_layer(new_module, n, new_fc) if hasattr(new_module, 'num_features'): if getattr(new_module, 'head_hidden_size', 0) == new_module.num_features: new_module.head_hidden_size = num_features new_module.num_features = num_features new_module.eval() parent_module.eval() return new_module def adapt_model_from_file(parent_module, model_variant): adapt_data = pkgutil.get_data(__name__, os.path.join('_pruned', model_variant + '.txt')) return adapt_model_from_string(parent_module, adapt_data.decode('utf-8').strip())
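# Usage sketch (an illustrative addition): the model string is '***'-separated
# 'name.param:[shape]' entries. The toy two-layer net below is hypothetical and
# is pruned from 16 to 8 channels.
if __name__ == '__main__':
    parent = nn.Sequential()
    parent.add_module('conv', nn.Conv2d(3, 16, 3))
    parent.add_module('bn', nn.BatchNorm2d(16))
    pruned = adapt_model_from_string(parent, 'conv.weight:[8, 3, 3, 3]***bn.weight:[8]')
    print(pruned)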
pytorch-image-models/timm/models/_prune.py/0
{ "file_path": "pytorch-image-models/timm/models/_prune.py", "repo_id": "pytorch-image-models", "token_count": 2096 }
262
"""PyTorch CspNet A PyTorch implementation of Cross Stage Partial Networks including: * CSPResNet50 * CSPResNeXt50 * CSPDarkNet53 * and DarkNet53 for good measure Based on paper `CSPNet: A New Backbone that can Enhance Learning Capability of CNN` - https://arxiv.org/abs/1911.11929 Reference impl via darknet cfg files at https://github.com/WongKinYiu/CrossStagePartialNetworks Hacked together by / Copyright 2020 Ross Wightman """ from dataclasses import dataclass, asdict, replace from functools import partial from typing import Any, Dict, Optional, Tuple, Union import torch import torch.nn as nn from timm.data import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD from timm.layers import ClassifierHead, ConvNormAct, DropPath, get_attn, create_act_layer, make_divisible from ._builder import build_model_with_cfg from ._manipulate import named_apply, MATCH_PREV_GROUP from ._registry import register_model, generate_default_cfgs __all__ = ['CspNet'] # model_registry will add each entrypoint fn to this @dataclass class CspStemCfg: out_chs: Union[int, Tuple[int, ...]] = 32 stride: Union[int, Tuple[int, ...]] = 2 kernel_size: int = 3 padding: Union[int, str] = '' pool: Optional[str] = '' def _pad_arg(x, n): # pads an argument tuple to specified n by padding with last value if not isinstance(x, (tuple, list)): x = (x,) curr_n = len(x) pad_n = n - curr_n if pad_n <= 0: return x[:n] return tuple(x + (x[-1],) * pad_n) @dataclass class CspStagesCfg: depth: Tuple[int, ...] = (3, 3, 5, 2) # block depth (number of block repeats in stages) out_chs: Tuple[int, ...] = (128, 256, 512, 1024) # number of output channels for blocks in stage stride: Union[int, Tuple[int, ...]] = 2 # stride of stage groups: Union[int, Tuple[int, ...]] = 1 # num kxk conv groups block_ratio: Union[float, Tuple[float, ...]] = 1.0 bottle_ratio: Union[float, Tuple[float, ...]] = 1. 
# bottleneck-ratio of blocks in stage avg_down: Union[bool, Tuple[bool, ...]] = False attn_layer: Optional[Union[str, Tuple[str, ...]]] = None attn_kwargs: Optional[Union[Dict, Tuple[Dict]]] = None stage_type: Union[str, Tuple[str]] = 'csp' # stage type ('csp', 'cs2', 'dark') block_type: Union[str, Tuple[str]] = 'bottle' # blocks type for stages ('bottle', 'dark') # cross-stage only expand_ratio: Union[float, Tuple[float, ...]] = 1.0 cross_linear: Union[bool, Tuple[bool, ...]] = False down_growth: Union[bool, Tuple[bool, ...]] = False def __post_init__(self): n = len(self.depth) assert len(self.out_chs) == n self.stride = _pad_arg(self.stride, n) self.groups = _pad_arg(self.groups, n) self.block_ratio = _pad_arg(self.block_ratio, n) self.bottle_ratio = _pad_arg(self.bottle_ratio, n) self.avg_down = _pad_arg(self.avg_down, n) self.attn_layer = _pad_arg(self.attn_layer, n) self.attn_kwargs = _pad_arg(self.attn_kwargs, n) self.stage_type = _pad_arg(self.stage_type, n) self.block_type = _pad_arg(self.block_type, n) self.expand_ratio = _pad_arg(self.expand_ratio, n) self.cross_linear = _pad_arg(self.cross_linear, n) self.down_growth = _pad_arg(self.down_growth, n) @dataclass class CspModelCfg: stem: CspStemCfg stages: CspStagesCfg zero_init_last: bool = True # zero init last weight (usually bn) in residual path act_layer: str = 'leaky_relu' norm_layer: str = 'batchnorm' aa_layer: Optional[str] = None # FIXME support string factory for this def _cs3_cfg( width_multiplier=1.0, depth_multiplier=1.0, avg_down=False, act_layer='silu', focus=False, attn_layer=None, attn_kwargs=None, bottle_ratio=1.0, block_type='dark', ): if focus: stem_cfg = CspStemCfg( out_chs=make_divisible(64 * width_multiplier), kernel_size=6, stride=2, padding=2, pool='') else: stem_cfg = CspStemCfg( out_chs=tuple([make_divisible(c * width_multiplier) for c in (32, 64)]), kernel_size=3, stride=2, pool='') return CspModelCfg( stem=stem_cfg, stages=CspStagesCfg( out_chs=tuple([make_divisible(c * width_multiplier) for c in (128, 256, 512, 1024)]), depth=tuple([int(d * depth_multiplier) for d in (3, 6, 9, 3)]), stride=2, bottle_ratio=bottle_ratio, block_ratio=0.5, avg_down=avg_down, attn_layer=attn_layer, attn_kwargs=attn_kwargs, stage_type='cs3', block_type=block_type, ), act_layer=act_layer, ) class BottleneckBlock(nn.Module): """ ResNe(X)t Bottleneck Block """ def __init__( self, in_chs, out_chs, dilation=1, bottle_ratio=0.25, groups=1, act_layer=nn.ReLU, norm_layer=nn.BatchNorm2d, attn_last=False, attn_layer=None, drop_block=None, drop_path=0. 
): super(BottleneckBlock, self).__init__() mid_chs = int(round(out_chs * bottle_ratio)) ckwargs = dict(act_layer=act_layer, norm_layer=norm_layer) attn_last = attn_layer is not None and attn_last attn_first = attn_layer is not None and not attn_last self.conv1 = ConvNormAct(in_chs, mid_chs, kernel_size=1, **ckwargs) self.conv2 = ConvNormAct( mid_chs, mid_chs, kernel_size=3, dilation=dilation, groups=groups, drop_layer=drop_block, **ckwargs) self.attn2 = attn_layer(mid_chs, act_layer=act_layer) if attn_first else nn.Identity() self.conv3 = ConvNormAct(mid_chs, out_chs, kernel_size=1, apply_act=False, **ckwargs) self.attn3 = attn_layer(out_chs, act_layer=act_layer) if attn_last else nn.Identity() self.drop_path = DropPath(drop_path) if drop_path else nn.Identity() self.act3 = create_act_layer(act_layer) def zero_init_last(self): nn.init.zeros_(self.conv3.bn.weight) def forward(self, x): shortcut = x x = self.conv1(x) x = self.conv2(x) x = self.attn2(x) x = self.conv3(x) x = self.attn3(x) x = self.drop_path(x) + shortcut # FIXME partial shortcut needed if first block handled as per original, not used for my current impl #x[:, :shortcut.size(1)] += shortcut x = self.act3(x) return x class DarkBlock(nn.Module): """ DarkNet Block """ def __init__( self, in_chs, out_chs, dilation=1, bottle_ratio=0.5, groups=1, act_layer=nn.ReLU, norm_layer=nn.BatchNorm2d, attn_layer=None, drop_block=None, drop_path=0. ): super(DarkBlock, self).__init__() mid_chs = int(round(out_chs * bottle_ratio)) ckwargs = dict(act_layer=act_layer, norm_layer=norm_layer) self.conv1 = ConvNormAct(in_chs, mid_chs, kernel_size=1, **ckwargs) self.attn = attn_layer(mid_chs, act_layer=act_layer) if attn_layer is not None else nn.Identity() self.conv2 = ConvNormAct( mid_chs, out_chs, kernel_size=3, dilation=dilation, groups=groups, drop_layer=drop_block, **ckwargs) self.drop_path = DropPath(drop_path) if drop_path else nn.Identity() def zero_init_last(self): nn.init.zeros_(self.conv2.bn.weight) def forward(self, x): shortcut = x x = self.conv1(x) x = self.attn(x) x = self.conv2(x) x = self.drop_path(x) + shortcut return x class EdgeBlock(nn.Module): """ EdgeResidual / Fused-MBConv / MobileNetV1-like 3x3 + 1x1 block (w/ activated output) """ def __init__( self, in_chs, out_chs, dilation=1, bottle_ratio=0.5, groups=1, act_layer=nn.ReLU, norm_layer=nn.BatchNorm2d, attn_layer=None, drop_block=None, drop_path=0. 
): super(EdgeBlock, self).__init__() mid_chs = int(round(out_chs * bottle_ratio)) ckwargs = dict(act_layer=act_layer, norm_layer=norm_layer) self.conv1 = ConvNormAct( in_chs, mid_chs, kernel_size=3, dilation=dilation, groups=groups, drop_layer=drop_block, **ckwargs) self.attn = attn_layer(mid_chs, act_layer=act_layer) if attn_layer is not None else nn.Identity() self.conv2 = ConvNormAct(mid_chs, out_chs, kernel_size=1, **ckwargs) self.drop_path = DropPath(drop_path) if drop_path else nn.Identity() def zero_init_last(self): nn.init.zeros_(self.conv2.bn.weight) def forward(self, x): shortcut = x x = self.conv1(x) x = self.attn(x) x = self.conv2(x) x = self.drop_path(x) + shortcut return x class CrossStage(nn.Module): """Cross Stage.""" def __init__( self, in_chs, out_chs, stride, dilation, depth, block_ratio=1., bottle_ratio=1., expand_ratio=1., groups=1, first_dilation=None, avg_down=False, down_growth=False, cross_linear=False, block_dpr=None, block_fn=BottleneckBlock, **block_kwargs, ): super(CrossStage, self).__init__() first_dilation = first_dilation or dilation down_chs = out_chs if down_growth else in_chs # grow downsample channels to output channels self.expand_chs = exp_chs = int(round(out_chs * expand_ratio)) block_out_chs = int(round(out_chs * block_ratio)) conv_kwargs = dict(act_layer=block_kwargs.get('act_layer'), norm_layer=block_kwargs.get('norm_layer')) aa_layer = block_kwargs.pop('aa_layer', None) if stride != 1 or first_dilation != dilation: if avg_down: self.conv_down = nn.Sequential( nn.AvgPool2d(2) if stride == 2 else nn.Identity(), # FIXME dilation handling ConvNormAct(in_chs, out_chs, kernel_size=1, stride=1, groups=groups, **conv_kwargs) ) else: self.conv_down = ConvNormAct( in_chs, down_chs, kernel_size=3, stride=stride, dilation=first_dilation, groups=groups, aa_layer=aa_layer, **conv_kwargs) prev_chs = down_chs else: self.conv_down = nn.Identity() prev_chs = in_chs # FIXME this 1x1 expansion is pushed down into the cross and block paths in the darknet cfgs. Also, # there is also special case for the first stage for some of the model that results in uneven split # across the two paths. I did it this way for simplicity for now. self.conv_exp = ConvNormAct(prev_chs, exp_chs, kernel_size=1, apply_act=not cross_linear, **conv_kwargs) prev_chs = exp_chs // 2 # output of conv_exp is always split in two self.blocks = nn.Sequential() for i in range(depth): self.blocks.add_module(str(i), block_fn( in_chs=prev_chs, out_chs=block_out_chs, dilation=dilation, bottle_ratio=bottle_ratio, groups=groups, drop_path=block_dpr[i] if block_dpr is not None else 0., **block_kwargs, )) prev_chs = block_out_chs # transition convs self.conv_transition_b = ConvNormAct(prev_chs, exp_chs // 2, kernel_size=1, **conv_kwargs) self.conv_transition = ConvNormAct(exp_chs, out_chs, kernel_size=1, **conv_kwargs) def forward(self, x): x = self.conv_down(x) x = self.conv_exp(x) xs, xb = x.split(self.expand_chs // 2, dim=1) xb = self.blocks(xb) xb = self.conv_transition_b(xb).contiguous() out = self.conv_transition(torch.cat([xs, xb], dim=1)) return out class CrossStage3(nn.Module): """Cross Stage 3. Similar to CrossStage, but with only one transition conv for the output. 
""" def __init__( self, in_chs, out_chs, stride, dilation, depth, block_ratio=1., bottle_ratio=1., expand_ratio=1., groups=1, first_dilation=None, avg_down=False, down_growth=False, cross_linear=False, block_dpr=None, block_fn=BottleneckBlock, **block_kwargs, ): super(CrossStage3, self).__init__() first_dilation = first_dilation or dilation down_chs = out_chs if down_growth else in_chs # grow downsample channels to output channels self.expand_chs = exp_chs = int(round(out_chs * expand_ratio)) block_out_chs = int(round(out_chs * block_ratio)) conv_kwargs = dict(act_layer=block_kwargs.get('act_layer'), norm_layer=block_kwargs.get('norm_layer')) aa_layer = block_kwargs.pop('aa_layer', None) if stride != 1 or first_dilation != dilation: if avg_down: self.conv_down = nn.Sequential( nn.AvgPool2d(2) if stride == 2 else nn.Identity(), # FIXME dilation handling ConvNormAct(in_chs, out_chs, kernel_size=1, stride=1, groups=groups, **conv_kwargs) ) else: self.conv_down = ConvNormAct( in_chs, down_chs, kernel_size=3, stride=stride, dilation=first_dilation, groups=groups, aa_layer=aa_layer, **conv_kwargs) prev_chs = down_chs else: self.conv_down = None prev_chs = in_chs # expansion conv self.conv_exp = ConvNormAct(prev_chs, exp_chs, kernel_size=1, apply_act=not cross_linear, **conv_kwargs) prev_chs = exp_chs // 2 # expanded output is split in 2 for blocks and cross stage self.blocks = nn.Sequential() for i in range(depth): self.blocks.add_module(str(i), block_fn( in_chs=prev_chs, out_chs=block_out_chs, dilation=dilation, bottle_ratio=bottle_ratio, groups=groups, drop_path=block_dpr[i] if block_dpr is not None else 0., **block_kwargs, )) prev_chs = block_out_chs # transition convs self.conv_transition = ConvNormAct(exp_chs, out_chs, kernel_size=1, **conv_kwargs) def forward(self, x): x = self.conv_down(x) x = self.conv_exp(x) x1, x2 = x.split(self.expand_chs // 2, dim=1) x1 = self.blocks(x1) out = self.conv_transition(torch.cat([x1, x2], dim=1)) return out class DarkStage(nn.Module): """DarkNet stage.""" def __init__( self, in_chs, out_chs, stride, dilation, depth, block_ratio=1., bottle_ratio=1., groups=1, first_dilation=None, avg_down=False, block_fn=BottleneckBlock, block_dpr=None, **block_kwargs, ): super(DarkStage, self).__init__() first_dilation = first_dilation or dilation conv_kwargs = dict(act_layer=block_kwargs.get('act_layer'), norm_layer=block_kwargs.get('norm_layer')) aa_layer = block_kwargs.pop('aa_layer', None) if avg_down: self.conv_down = nn.Sequential( nn.AvgPool2d(2) if stride == 2 else nn.Identity(), # FIXME dilation handling ConvNormAct(in_chs, out_chs, kernel_size=1, stride=1, groups=groups, **conv_kwargs) ) else: self.conv_down = ConvNormAct( in_chs, out_chs, kernel_size=3, stride=stride, dilation=first_dilation, groups=groups, aa_layer=aa_layer, **conv_kwargs) prev_chs = out_chs block_out_chs = int(round(out_chs * block_ratio)) self.blocks = nn.Sequential() for i in range(depth): self.blocks.add_module(str(i), block_fn( in_chs=prev_chs, out_chs=block_out_chs, dilation=dilation, bottle_ratio=bottle_ratio, groups=groups, drop_path=block_dpr[i] if block_dpr is not None else 0., **block_kwargs )) prev_chs = block_out_chs def forward(self, x): x = self.conv_down(x) x = self.blocks(x) return x def create_csp_stem( in_chans=3, out_chs=32, kernel_size=3, stride=2, pool='', padding='', act_layer=nn.ReLU, norm_layer=nn.BatchNorm2d, aa_layer=None, ): stem = nn.Sequential() feature_info = [] if not isinstance(out_chs, (tuple, list)): out_chs = [out_chs] stem_depth = len(out_chs) assert 
stem_depth assert stride in (1, 2, 4) prev_feat = None prev_chs = in_chans last_idx = stem_depth - 1 stem_stride = 1 for i, chs in enumerate(out_chs): conv_name = f'conv{i + 1}' conv_stride = 2 if (i == 0 and stride > 1) or (i == last_idx and stride > 2 and not pool) else 1 if conv_stride > 1 and prev_feat is not None: feature_info.append(prev_feat) stem.add_module(conv_name, ConvNormAct( prev_chs, chs, kernel_size, stride=conv_stride, padding=padding if i == 0 else '', act_layer=act_layer, norm_layer=norm_layer, )) stem_stride *= conv_stride prev_chs = chs prev_feat = dict(num_chs=prev_chs, reduction=stem_stride, module='.'.join(['stem', conv_name])) if pool: assert stride > 2 if prev_feat is not None: feature_info.append(prev_feat) if aa_layer is not None: stem.add_module('pool', nn.MaxPool2d(kernel_size=3, stride=1, padding=1)) stem.add_module('aa', aa_layer(channels=prev_chs, stride=2)) pool_name = 'aa' else: stem.add_module('pool', nn.MaxPool2d(kernel_size=3, stride=2, padding=1)) pool_name = 'pool' stem_stride *= 2 prev_feat = dict(num_chs=prev_chs, reduction=stem_stride, module='.'.join(['stem', pool_name])) feature_info.append(prev_feat) return stem, feature_info def _get_stage_fn(stage_args): stage_type = stage_args.pop('stage_type') assert stage_type in ('dark', 'csp', 'cs3') if stage_type == 'dark': stage_args.pop('expand_ratio', None) stage_args.pop('cross_linear', None) stage_args.pop('down_growth', None) stage_fn = DarkStage elif stage_type == 'csp': stage_fn = CrossStage else: stage_fn = CrossStage3 return stage_fn, stage_args def _get_block_fn(stage_args): block_type = stage_args.pop('block_type') assert block_type in ('dark', 'edge', 'bottle') if block_type == 'dark': return DarkBlock, stage_args elif block_type == 'edge': return EdgeBlock, stage_args else: return BottleneckBlock, stage_args def _get_attn_fn(stage_args): attn_layer = stage_args.pop('attn_layer') attn_kwargs = stage_args.pop('attn_kwargs', None) or {} if attn_layer is not None: attn_layer = get_attn(attn_layer) if attn_kwargs: attn_layer = partial(attn_layer, **attn_kwargs) return attn_layer, stage_args def create_csp_stages( cfg: CspModelCfg, drop_path_rate: float, output_stride: int, stem_feat: Dict[str, Any], ): cfg_dict = asdict(cfg.stages) num_stages = len(cfg.stages.depth) cfg_dict['block_dpr'] = [None] * num_stages if not drop_path_rate else \ [x.tolist() for x in torch.linspace(0, drop_path_rate, sum(cfg.stages.depth)).split(cfg.stages.depth)] stage_args = [dict(zip(cfg_dict.keys(), values)) for values in zip(*cfg_dict.values())] block_kwargs = dict( act_layer=cfg.act_layer, norm_layer=cfg.norm_layer, ) dilation = 1 net_stride = stem_feat['reduction'] prev_chs = stem_feat['num_chs'] prev_feat = stem_feat feature_info = [] stages = [] for stage_idx, stage_args in enumerate(stage_args): stage_fn, stage_args = _get_stage_fn(stage_args) block_fn, stage_args = _get_block_fn(stage_args) attn_fn, stage_args = _get_attn_fn(stage_args) stride = stage_args.pop('stride') if stride != 1 and prev_feat: feature_info.append(prev_feat) if net_stride >= output_stride and stride > 1: dilation *= stride stride = 1 net_stride *= stride first_dilation = 1 if dilation in (1, 2) else 2 stages += [stage_fn( prev_chs, **stage_args, stride=stride, first_dilation=first_dilation, dilation=dilation, block_fn=block_fn, aa_layer=cfg.aa_layer, attn_layer=attn_fn, # will be passed through stage as block_kwargs **block_kwargs, )] prev_chs = stage_args['out_chs'] prev_feat = dict(num_chs=prev_chs, reduction=net_stride, 
module=f'stages.{stage_idx}') feature_info.append(prev_feat) return nn.Sequential(*stages), feature_info class CspNet(nn.Module): """Cross Stage Partial base model. Paper: `CSPNet: A New Backbone that can Enhance Learning Capability of CNN` - https://arxiv.org/abs/1911.11929 Ref Impl: https://github.com/WongKinYiu/CrossStagePartialNetworks NOTE: There are differences in the way I handle the 1x1 'expansion' conv in this impl vs the darknet impl. I did it this way for simplicity and less special cases. """ def __init__( self, cfg: CspModelCfg, in_chans=3, num_classes=1000, output_stride=32, global_pool='avg', drop_rate=0., drop_path_rate=0., zero_init_last=True, **kwargs, ): """ Args: cfg (CspModelCfg): Model architecture configuration in_chans (int): Number of input channels (default: 3) num_classes (int): Number of classifier classes (default: 1000) output_stride (int): Output stride of network, one of (8, 16, 32) (default: 32) global_pool (str): Global pooling type (default: 'avg') drop_rate (float): Dropout rate (default: 0.) drop_path_rate (float): Stochastic depth drop-path rate (default: 0.) zero_init_last (bool): Zero-init last weight of residual path kwargs (dict): Extra kwargs overlayed onto cfg """ super().__init__() self.num_classes = num_classes self.drop_rate = drop_rate assert output_stride in (8, 16, 32) cfg = replace(cfg, **kwargs) # overlay kwargs onto cfg layer_args = dict( act_layer=cfg.act_layer, norm_layer=cfg.norm_layer, aa_layer=cfg.aa_layer ) self.feature_info = [] # Construct the stem self.stem, stem_feat_info = create_csp_stem(in_chans, **asdict(cfg.stem), **layer_args) self.feature_info.extend(stem_feat_info[:-1]) # Construct the stages self.stages, stage_feat_info = create_csp_stages( cfg, drop_path_rate=drop_path_rate, output_stride=output_stride, stem_feat=stem_feat_info[-1], ) prev_chs = stage_feat_info[-1]['num_chs'] self.feature_info.extend(stage_feat_info) # Construct the head self.num_features = self.head_hidden_size = prev_chs self.head = ClassifierHead( in_features=prev_chs, num_classes=num_classes, pool_type=global_pool, drop_rate=drop_rate, ) named_apply(partial(_init_weights, zero_init_last=zero_init_last), self) @torch.jit.ignore def group_matcher(self, coarse=False): matcher = dict( stem=r'^stem', blocks=r'^stages\.(\d+)' if coarse else [ (r'^stages\.(\d+)\.blocks\.(\d+)', None), (r'^stages\.(\d+)\..*transition', MATCH_PREV_GROUP), # map to last block in stage (r'^stages\.(\d+)', (0,)), ] ) return matcher @torch.jit.ignore def set_grad_checkpointing(self, enable=True): assert not enable, 'gradient checkpointing not supported' @torch.jit.ignore def get_classifier(self) -> nn.Module: return self.head.fc def reset_classifier(self, num_classes: int, global_pool: Optional[str] = None): self.num_classes = num_classes self.head.reset(num_classes, global_pool) def forward_features(self, x): x = self.stem(x) x = self.stages(x) return x def forward_head(self, x, pre_logits: bool = False): return self.head(x, pre_logits=pre_logits) if pre_logits else self.head(x) def forward(self, x): x = self.forward_features(x) x = self.forward_head(x) return x def _init_weights(module, name, zero_init_last=False): if isinstance(module, nn.Conv2d): nn.init.kaiming_normal_(module.weight, mode='fan_out', nonlinearity='relu') if module.bias is not None: nn.init.zeros_(module.bias) elif isinstance(module, nn.Linear): nn.init.normal_(module.weight, mean=0.0, std=0.01) if module.bias is not None: nn.init.zeros_(module.bias) elif zero_init_last and hasattr(module, 
'zero_init_last'): module.zero_init_last() model_cfgs = dict( cspresnet50=CspModelCfg( stem=CspStemCfg(out_chs=64, kernel_size=7, stride=4, pool='max'), stages=CspStagesCfg( depth=(3, 3, 5, 2), out_chs=(128, 256, 512, 1024), stride=(1, 2), expand_ratio=2., bottle_ratio=0.5, cross_linear=True, ), ), cspresnet50d=CspModelCfg( stem=CspStemCfg(out_chs=(32, 32, 64), kernel_size=3, stride=4, pool='max'), stages=CspStagesCfg( depth=(3, 3, 5, 2), out_chs=(128, 256, 512, 1024), stride=(1,) + (2,), expand_ratio=2., bottle_ratio=0.5, block_ratio=1., cross_linear=True, ), ), cspresnet50w=CspModelCfg( stem=CspStemCfg(out_chs=(32, 32, 64), kernel_size=3, stride=4, pool='max'), stages=CspStagesCfg( depth=(3, 3, 5, 2), out_chs=(256, 512, 1024, 2048), stride=(1,) + (2,), expand_ratio=1., bottle_ratio=0.25, block_ratio=0.5, cross_linear=True, ), ), cspresnext50=CspModelCfg( stem=CspStemCfg(out_chs=64, kernel_size=7, stride=4, pool='max'), stages=CspStagesCfg( depth=(3, 3, 5, 2), out_chs=(256, 512, 1024, 2048), stride=(1,) + (2,), groups=32, expand_ratio=1., bottle_ratio=1., block_ratio=0.5, cross_linear=True, ), ), cspdarknet53=CspModelCfg( stem=CspStemCfg(out_chs=32, kernel_size=3, stride=1, pool=''), stages=CspStagesCfg( depth=(1, 2, 8, 8, 4), out_chs=(64, 128, 256, 512, 1024), stride=2, expand_ratio=(2.,) + (1.,), bottle_ratio=(0.5,) + (1.,), block_ratio=(1.,) + (0.5,), down_growth=True, block_type='dark', ), ), darknet17=CspModelCfg( stem=CspStemCfg(out_chs=32, kernel_size=3, stride=1, pool=''), stages=CspStagesCfg( depth=(1,) * 5, out_chs=(64, 128, 256, 512, 1024), stride=(2,), bottle_ratio=(0.5,), block_ratio=(1.,), stage_type='dark', block_type='dark', ), ), darknet21=CspModelCfg( stem=CspStemCfg(out_chs=32, kernel_size=3, stride=1, pool=''), stages=CspStagesCfg( depth=(1, 1, 1, 2, 2), out_chs=(64, 128, 256, 512, 1024), stride=(2,), bottle_ratio=(0.5,), block_ratio=(1.,), stage_type='dark', block_type='dark', ), ), sedarknet21=CspModelCfg( stem=CspStemCfg(out_chs=32, kernel_size=3, stride=1, pool=''), stages=CspStagesCfg( depth=(1, 1, 1, 2, 2), out_chs=(64, 128, 256, 512, 1024), stride=2, bottle_ratio=0.5, block_ratio=1., attn_layer='se', stage_type='dark', block_type='dark', ), ), darknet53=CspModelCfg( stem=CspStemCfg(out_chs=32, kernel_size=3, stride=1, pool=''), stages=CspStagesCfg( depth=(1, 2, 8, 8, 4), out_chs=(64, 128, 256, 512, 1024), stride=2, bottle_ratio=0.5, block_ratio=1., stage_type='dark', block_type='dark', ), ), darknetaa53=CspModelCfg( stem=CspStemCfg(out_chs=32, kernel_size=3, stride=1, pool=''), stages=CspStagesCfg( depth=(1, 2, 8, 8, 4), out_chs=(64, 128, 256, 512, 1024), stride=2, bottle_ratio=0.5, block_ratio=1., avg_down=True, stage_type='dark', block_type='dark', ), ), cs3darknet_s=_cs3_cfg(width_multiplier=0.5, depth_multiplier=0.5), cs3darknet_m=_cs3_cfg(width_multiplier=0.75, depth_multiplier=0.67), cs3darknet_l=_cs3_cfg(), cs3darknet_x=_cs3_cfg(width_multiplier=1.25, depth_multiplier=1.33), cs3darknet_focus_s=_cs3_cfg(width_multiplier=0.5, depth_multiplier=0.5, focus=True), cs3darknet_focus_m=_cs3_cfg(width_multiplier=0.75, depth_multiplier=0.67, focus=True), cs3darknet_focus_l=_cs3_cfg(focus=True), cs3darknet_focus_x=_cs3_cfg(width_multiplier=1.25, depth_multiplier=1.33, focus=True), cs3sedarknet_l=_cs3_cfg(attn_layer='se', attn_kwargs=dict(rd_ratio=.25)), cs3sedarknet_x=_cs3_cfg(attn_layer='se', width_multiplier=1.25, depth_multiplier=1.33), cs3sedarknet_xdw=CspModelCfg( stem=CspStemCfg(out_chs=(32, 64), kernel_size=3, stride=2, pool=''), stages=CspStagesCfg( depth=(3, 
6, 12, 4), out_chs=(256, 512, 1024, 2048), stride=2, groups=(1, 1, 256, 512), bottle_ratio=0.5, block_ratio=0.5, attn_layer='se', ), act_layer='silu', ), cs3edgenet_x=_cs3_cfg(width_multiplier=1.25, depth_multiplier=1.33, bottle_ratio=1.5, block_type='edge'), cs3se_edgenet_x=_cs3_cfg( width_multiplier=1.25, depth_multiplier=1.33, bottle_ratio=1.5, block_type='edge', attn_layer='se', attn_kwargs=dict(rd_ratio=.25)), ) def _create_cspnet(variant, pretrained=False, **kwargs): if variant.startswith('darknet') or variant.startswith('cspdarknet'): # NOTE: DarkNet is one of few models with stride==1 features w/ 6 out_indices [0..5] default_out_indices = (0, 1, 2, 3, 4, 5) else: default_out_indices = (0, 1, 2, 3, 4) out_indices = kwargs.pop('out_indices', default_out_indices) return build_model_with_cfg( CspNet, variant, pretrained, model_cfg=model_cfgs[variant], feature_cfg=dict(flatten_sequential=True, out_indices=out_indices), **kwargs) def _cfg(url='', **kwargs): return { 'url': url, 'num_classes': 1000, 'input_size': (3, 256, 256), 'pool_size': (8, 8), 'crop_pct': 0.887, 'interpolation': 'bilinear', 'mean': IMAGENET_DEFAULT_MEAN, 'std': IMAGENET_DEFAULT_STD, 'first_conv': 'stem.conv1.conv', 'classifier': 'head.fc', **kwargs } default_cfgs = generate_default_cfgs({ 'cspresnet50.ra_in1k': _cfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/cspresnet50_ra-d3e8d487.pth'), 'cspresnet50d.untrained': _cfg(), 'cspresnet50w.untrained': _cfg(), 'cspresnext50.ra_in1k': _cfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/cspresnext50_ra_224-648b4713.pth', ), 'cspdarknet53.ra_in1k': _cfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/cspdarknet53_ra_256-d05c7c21.pth'), 'darknet17.untrained': _cfg(), 'darknet21.untrained': _cfg(), 'sedarknet21.untrained': _cfg(), 'darknet53.c2ns_in1k': _cfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tpu-weights/darknet53_256_c2ns-3aeff817.pth', interpolation='bicubic', test_input_size=(3, 288, 288), test_crop_pct=1.0), 'darknetaa53.c2ns_in1k': _cfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tpu-weights/darknetaa53_c2ns-5c28ec8a.pth', test_input_size=(3, 288, 288), test_crop_pct=1.0), 'cs3darknet_s.untrained': _cfg(interpolation='bicubic'), 'cs3darknet_m.c2ns_in1k': _cfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tpu-weights/cs3darknet_m_c2ns-43f06604.pth', interpolation='bicubic', test_input_size=(3, 288, 288), test_crop_pct=0.95, ), 'cs3darknet_l.c2ns_in1k': _cfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tpu-weights/cs3darknet_l_c2ns-16220c5d.pth', interpolation='bicubic', test_input_size=(3, 288, 288), test_crop_pct=0.95), 'cs3darknet_x.c2ns_in1k': _cfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tpu-weights/cs3darknet_x_c2ns-4e4490aa.pth', interpolation='bicubic', crop_pct=0.95, test_input_size=(3, 288, 288), test_crop_pct=1.0), 'cs3darknet_focus_s.ra4_e3600_r256_in1k': _cfg( hf_hub_id='timm/', mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5), interpolation='bicubic', test_input_size=(3, 320, 320), test_crop_pct=1.0), 'cs3darknet_focus_m.c2ns_in1k': _cfg( hf_hub_id='timm/', 
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tpu-weights/cs3darknet_focus_m_c2ns-e23bed41.pth', interpolation='bicubic', test_input_size=(3, 288, 288), test_crop_pct=0.95), 'cs3darknet_focus_l.c2ns_in1k': _cfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tpu-weights/cs3darknet_focus_l_c2ns-65ef8888.pth', interpolation='bicubic', test_input_size=(3, 288, 288), test_crop_pct=0.95), 'cs3darknet_focus_x.untrained': _cfg(interpolation='bicubic'), 'cs3sedarknet_l.c2ns_in1k': _cfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tpu-weights/cs3sedarknet_l_c2ns-e8d1dc13.pth', interpolation='bicubic', test_input_size=(3, 288, 288), test_crop_pct=0.95), 'cs3sedarknet_x.c2ns_in1k': _cfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tpu-weights/cs3sedarknet_x_c2ns-b4d0abc0.pth', interpolation='bicubic', test_input_size=(3, 288, 288), test_crop_pct=1.0), 'cs3sedarknet_xdw.untrained': _cfg(interpolation='bicubic'), 'cs3edgenet_x.c2_in1k': _cfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tpu-weights/cs3edgenet_x_c2-2e1610a9.pth', interpolation='bicubic', test_input_size=(3, 288, 288), test_crop_pct=1.0), 'cs3se_edgenet_x.c2ns_in1k': _cfg( hf_hub_id='timm/', url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tpu-weights/cs3se_edgenet_x_c2ns-76f8e3ac.pth', interpolation='bicubic', crop_pct=0.95, test_input_size=(3, 320, 320), test_crop_pct=1.0), }) @register_model def cspresnet50(pretrained=False, **kwargs) -> CspNet: return _create_cspnet('cspresnet50', pretrained=pretrained, **kwargs) @register_model def cspresnet50d(pretrained=False, **kwargs) -> CspNet: return _create_cspnet('cspresnet50d', pretrained=pretrained, **kwargs) @register_model def cspresnet50w(pretrained=False, **kwargs) -> CspNet: return _create_cspnet('cspresnet50w', pretrained=pretrained, **kwargs) @register_model def cspresnext50(pretrained=False, **kwargs) -> CspNet: return _create_cspnet('cspresnext50', pretrained=pretrained, **kwargs) @register_model def cspdarknet53(pretrained=False, **kwargs) -> CspNet: return _create_cspnet('cspdarknet53', pretrained=pretrained, **kwargs) @register_model def darknet17(pretrained=False, **kwargs) -> CspNet: return _create_cspnet('darknet17', pretrained=pretrained, **kwargs) @register_model def darknet21(pretrained=False, **kwargs) -> CspNet: return _create_cspnet('darknet21', pretrained=pretrained, **kwargs) @register_model def sedarknet21(pretrained=False, **kwargs) -> CspNet: return _create_cspnet('sedarknet21', pretrained=pretrained, **kwargs) @register_model def darknet53(pretrained=False, **kwargs) -> CspNet: return _create_cspnet('darknet53', pretrained=pretrained, **kwargs) @register_model def darknetaa53(pretrained=False, **kwargs) -> CspNet: return _create_cspnet('darknetaa53', pretrained=pretrained, **kwargs) @register_model def cs3darknet_s(pretrained=False, **kwargs) -> CspNet: return _create_cspnet('cs3darknet_s', pretrained=pretrained, **kwargs) @register_model def cs3darknet_m(pretrained=False, **kwargs) -> CspNet: return _create_cspnet('cs3darknet_m', pretrained=pretrained, **kwargs) @register_model def cs3darknet_l(pretrained=False, **kwargs) -> CspNet: return _create_cspnet('cs3darknet_l', pretrained=pretrained, **kwargs) @register_model def cs3darknet_x(pretrained=False, **kwargs) -> CspNet: return 
_create_cspnet('cs3darknet_x', pretrained=pretrained, **kwargs) @register_model def cs3darknet_focus_s(pretrained=False, **kwargs) -> CspNet: return _create_cspnet('cs3darknet_focus_s', pretrained=pretrained, **kwargs) @register_model def cs3darknet_focus_m(pretrained=False, **kwargs) -> CspNet: return _create_cspnet('cs3darknet_focus_m', pretrained=pretrained, **kwargs) @register_model def cs3darknet_focus_l(pretrained=False, **kwargs) -> CspNet: return _create_cspnet('cs3darknet_focus_l', pretrained=pretrained, **kwargs) @register_model def cs3darknet_focus_x(pretrained=False, **kwargs) -> CspNet: return _create_cspnet('cs3darknet_focus_x', pretrained=pretrained, **kwargs) @register_model def cs3sedarknet_l(pretrained=False, **kwargs) -> CspNet: return _create_cspnet('cs3sedarknet_l', pretrained=pretrained, **kwargs) @register_model def cs3sedarknet_x(pretrained=False, **kwargs) -> CspNet: return _create_cspnet('cs3sedarknet_x', pretrained=pretrained, **kwargs) @register_model def cs3sedarknet_xdw(pretrained=False, **kwargs) -> CspNet: return _create_cspnet('cs3sedarknet_xdw', pretrained=pretrained, **kwargs) @register_model def cs3edgenet_x(pretrained=False, **kwargs) -> CspNet: return _create_cspnet('cs3edgenet_x', pretrained=pretrained, **kwargs) @register_model def cs3se_edgenet_x(pretrained=False, **kwargs) -> CspNet: return _create_cspnet('cs3se_edgenet_x', pretrained=pretrained, **kwargs)
pytorch-image-models/timm/models/cspnet.py/0
{ "file_path": "pytorch-image-models/timm/models/cspnet.py", "repo_id": "pytorch-image-models", "token_count": 20103 }
263
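The `@register_model` decorators above only register constructors with timm's model registry; users build these networks through the `timm.create_model` factory. A minimal usage sketch (not part of the source file), assuming `timm` and `torch` are installed and using `pretrained=False` so no weights are downloaded:

```python
import torch
import timm

# 'cs3darknet_l' is one of the variants registered in cspnet.py above.
model = timm.create_model('cs3darknet_l', pretrained=False, num_classes=10)
model.eval()

# CSP nets are fully convolutional, so any reasonable square input works;
# 256x256 matches the train size of the c2ns configs.
x = torch.randn(1, 3, 256, 256)
with torch.no_grad():
    logits = model(x)
print(logits.shape)  # torch.Size([1, 10])
```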
# NOTE timm.models.layers is DEPRECATED, please use timm.layers, this is here to reduce breakages in transition from timm.layers.activations import * from timm.layers.adaptive_avgmax_pool import \ adaptive_avgmax_pool2d, select_adaptive_pool2d, AdaptiveAvgMaxPool2d, SelectAdaptivePool2d from timm.layers.attention_pool2d import AttentionPool2d, RotAttentionPool2d, RotaryEmbedding from timm.layers.blur_pool import BlurPool2d from timm.layers.classifier import ClassifierHead, create_classifier from timm.layers.cond_conv2d import CondConv2d, get_condconv_initializer from timm.layers.config import is_exportable, is_scriptable, is_no_jit, set_exportable, set_scriptable, set_no_jit,\ set_layer_config from timm.layers.conv2d_same import Conv2dSame, conv2d_same from timm.layers.conv_bn_act import ConvNormAct, ConvNormActAa, ConvBnAct from timm.layers.create_act import create_act_layer, get_act_layer, get_act_fn from timm.layers.create_attn import get_attn, create_attn from timm.layers.create_conv2d import create_conv2d from timm.layers.create_norm import get_norm_layer, create_norm_layer from timm.layers.create_norm_act import get_norm_act_layer, create_norm_act_layer from timm.layers.drop import DropBlock2d, DropPath, drop_block_2d, drop_path from timm.layers.eca import EcaModule, CecaModule, EfficientChannelAttn, CircularEfficientChannelAttn from timm.layers.evo_norm import EvoNorm2dB0, EvoNorm2dB1, EvoNorm2dB2,\ EvoNorm2dS0, EvoNorm2dS0a, EvoNorm2dS1, EvoNorm2dS1a, EvoNorm2dS2, EvoNorm2dS2a from timm.layers.fast_norm import is_fast_norm, set_fast_norm, fast_group_norm, fast_layer_norm from timm.layers.filter_response_norm import FilterResponseNormTlu2d, FilterResponseNormAct2d from timm.layers.gather_excite import GatherExcite from timm.layers.global_context import GlobalContext from timm.layers.helpers import to_ntuple, to_2tuple, to_3tuple, to_4tuple, make_divisible, extend_tuple from timm.layers.inplace_abn import InplaceAbn from timm.layers.linear import Linear from timm.layers.mixed_conv2d import MixedConv2d from timm.layers.mlp import Mlp, GluMlp, GatedMlp, ConvMlp from timm.layers.non_local_attn import NonLocalAttn, BatNonLocalAttn from timm.layers.norm import GroupNorm, GroupNorm1, LayerNorm, LayerNorm2d from timm.layers.norm_act import BatchNormAct2d, GroupNormAct, convert_sync_batchnorm from timm.layers.padding import get_padding, get_same_padding, pad_same from timm.layers.patch_embed import PatchEmbed from timm.layers.pool2d_same import AvgPool2dSame, create_pool2d from timm.layers.squeeze_excite import SEModule, SqueezeExcite, EffectiveSEModule, EffectiveSqueezeExcite from timm.layers.selective_kernel import SelectiveKernel from timm.layers.separable_conv import SeparableConv2d, SeparableConvNormAct from timm.layers.split_attn import SplitAttn from timm.layers.split_batchnorm import SplitBatchNorm2d, convert_splitbn_model from timm.layers.std_conv import StdConv2d, StdConv2dSame, ScaledStdConv2d, ScaledStdConv2dSame from timm.layers.test_time_pool import TestTimePoolHead, apply_test_time_pool from timm.layers.trace_utils import _assert, _float_to_int from timm.layers.weight_init import trunc_normal_, trunc_normal_tf_, variance_scaling_, lecun_normal_ import warnings warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
pytorch-image-models/timm/models/layers/__init__.py/0
{ "file_path": "pytorch-image-models/timm/models/layers/__init__.py", "repo_id": "pytorch-image-models", "token_count": 1220 }
264
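Because this module exists purely as a deprecation shim, its observable behavior is the warning plus identical re-exports. A quick hedged check (my example, not from the source; it assumes `timm.models.layers` has not been imported yet in this interpreter, since Python's module cache means the warning only fires on first import):

```python
import warnings

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    from timm.models.layers import DropPath  # first import triggers the warning

assert any(issubclass(w.category, FutureWarning) for w in caught)

from timm.layers import DropPath as NewDropPath
assert DropPath is NewDropPath  # the shim re-exports the very same objects
```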
""" pnasnet5large implementation grabbed from Cadene's pretrained models Additional credit to https://github.com/creafz https://github.com/Cadene/pretrained-models.pytorch/blob/master/pretrainedmodels/models/pnasnet.py """ from collections import OrderedDict from functools import partial import torch import torch.nn as nn from timm.layers import ConvNormAct, create_conv2d, create_pool2d, create_classifier from ._builder import build_model_with_cfg from ._registry import register_model, generate_default_cfgs __all__ = ['PNASNet5Large'] class SeparableConv2d(nn.Module): def __init__(self, in_channels, out_channels, kernel_size, stride, padding=''): super(SeparableConv2d, self).__init__() self.depthwise_conv2d = create_conv2d( in_channels, in_channels, kernel_size=kernel_size, stride=stride, padding=padding, groups=in_channels) self.pointwise_conv2d = create_conv2d( in_channels, out_channels, kernel_size=1, padding=padding) def forward(self, x): x = self.depthwise_conv2d(x) x = self.pointwise_conv2d(x) return x class BranchSeparables(nn.Module): def __init__(self, in_channels, out_channels, kernel_size, stride=1, stem_cell=False, padding=''): super(BranchSeparables, self).__init__() middle_channels = out_channels if stem_cell else in_channels self.act_1 = nn.ReLU() self.separable_1 = SeparableConv2d( in_channels, middle_channels, kernel_size, stride=stride, padding=padding) self.bn_sep_1 = nn.BatchNorm2d(middle_channels, eps=0.001) self.act_2 = nn.ReLU() self.separable_2 = SeparableConv2d( middle_channels, out_channels, kernel_size, stride=1, padding=padding) self.bn_sep_2 = nn.BatchNorm2d(out_channels, eps=0.001) def forward(self, x): x = self.act_1(x) x = self.separable_1(x) x = self.bn_sep_1(x) x = self.act_2(x) x = self.separable_2(x) x = self.bn_sep_2(x) return x class ActConvBn(nn.Module): def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=''): super(ActConvBn, self).__init__() self.act = nn.ReLU() self.conv = create_conv2d( in_channels, out_channels, kernel_size=kernel_size, stride=stride, padding=padding) self.bn = nn.BatchNorm2d(out_channels, eps=0.001) def forward(self, x): x = self.act(x) x = self.conv(x) x = self.bn(x) return x class FactorizedReduction(nn.Module): def __init__(self, in_channels, out_channels, padding=''): super(FactorizedReduction, self).__init__() self.act = nn.ReLU() self.path_1 = nn.Sequential(OrderedDict([ ('avgpool', nn.AvgPool2d(1, stride=2, count_include_pad=False)), ('conv', create_conv2d(in_channels, out_channels // 2, kernel_size=1, padding=padding)), ])) self.path_2 = nn.Sequential(OrderedDict([ ('pad', nn.ZeroPad2d((-1, 1, -1, 1))), # shift ('avgpool', nn.AvgPool2d(1, stride=2, count_include_pad=False)), ('conv', create_conv2d(in_channels, out_channels // 2, kernel_size=1, padding=padding)), ])) self.final_path_bn = nn.BatchNorm2d(out_channels, eps=0.001) def forward(self, x): x = self.act(x) x_path1 = self.path_1(x) x_path2 = self.path_2(x) out = self.final_path_bn(torch.cat([x_path1, x_path2], 1)) return out class CellBase(nn.Module): def cell_forward(self, x_left, x_right): x_comb_iter_0_left = self.comb_iter_0_left(x_left) x_comb_iter_0_right = self.comb_iter_0_right(x_left) x_comb_iter_0 = x_comb_iter_0_left + x_comb_iter_0_right x_comb_iter_1_left = self.comb_iter_1_left(x_right) x_comb_iter_1_right = self.comb_iter_1_right(x_right) x_comb_iter_1 = x_comb_iter_1_left + x_comb_iter_1_right x_comb_iter_2_left = self.comb_iter_2_left(x_right) x_comb_iter_2_right = self.comb_iter_2_right(x_right) x_comb_iter_2 = 
x_comb_iter_2_left + x_comb_iter_2_right x_comb_iter_3_left = self.comb_iter_3_left(x_comb_iter_2) x_comb_iter_3_right = self.comb_iter_3_right(x_right) x_comb_iter_3 = x_comb_iter_3_left + x_comb_iter_3_right x_comb_iter_4_left = self.comb_iter_4_left(x_left) if self.comb_iter_4_right is not None: x_comb_iter_4_right = self.comb_iter_4_right(x_right) else: x_comb_iter_4_right = x_right x_comb_iter_4 = x_comb_iter_4_left + x_comb_iter_4_right x_out = torch.cat([x_comb_iter_0, x_comb_iter_1, x_comb_iter_2, x_comb_iter_3, x_comb_iter_4], 1) return x_out class CellStem0(CellBase): def __init__(self, in_chs_left, out_chs_left, in_chs_right, out_chs_right, pad_type=''): super(CellStem0, self).__init__() self.conv_1x1 = ActConvBn(in_chs_right, out_chs_right, kernel_size=1, padding=pad_type) self.comb_iter_0_left = BranchSeparables( in_chs_left, out_chs_left, kernel_size=5, stride=2, stem_cell=True, padding=pad_type) self.comb_iter_0_right = nn.Sequential(OrderedDict([ ('max_pool', create_pool2d('max', 3, stride=2, padding=pad_type)), ('conv', create_conv2d(in_chs_left, out_chs_left, kernel_size=1, padding=pad_type)), ('bn', nn.BatchNorm2d(out_chs_left, eps=0.001)), ])) self.comb_iter_1_left = BranchSeparables( out_chs_right, out_chs_right, kernel_size=7, stride=2, padding=pad_type) self.comb_iter_1_right = create_pool2d('max', 3, stride=2, padding=pad_type) self.comb_iter_2_left = BranchSeparables( out_chs_right, out_chs_right, kernel_size=5, stride=2, padding=pad_type) self.comb_iter_2_right = BranchSeparables( out_chs_right, out_chs_right, kernel_size=3, stride=2, padding=pad_type) self.comb_iter_3_left = BranchSeparables( out_chs_right, out_chs_right, kernel_size=3, padding=pad_type) self.comb_iter_3_right = create_pool2d('max', 3, stride=2, padding=pad_type) self.comb_iter_4_left = BranchSeparables( in_chs_right, out_chs_right, kernel_size=3, stride=2, stem_cell=True, padding=pad_type) self.comb_iter_4_right = ActConvBn( out_chs_right, out_chs_right, kernel_size=1, stride=2, padding=pad_type) def forward(self, x_left): x_right = self.conv_1x1(x_left) x_out = self.cell_forward(x_left, x_right) return x_out class Cell(CellBase): def __init__( self, in_chs_left, out_chs_left, in_chs_right, out_chs_right, pad_type='', is_reduction=False, match_prev_layer_dims=False, ): super(Cell, self).__init__() # If `is_reduction` is set to `True` stride 2 is used for # convolution and pooling layers to reduce the spatial size of # the output of a cell approximately by a factor of 2. stride = 2 if is_reduction else 1 # If `match_prev_layer_dimensions` is set to `True` # `FactorizedReduction` is used to reduce the spatial size # of the left input of a cell approximately by a factor of 2. 
self.match_prev_layer_dimensions = match_prev_layer_dims if match_prev_layer_dims: self.conv_prev_1x1 = FactorizedReduction(in_chs_left, out_chs_left, padding=pad_type) else: self.conv_prev_1x1 = ActConvBn(in_chs_left, out_chs_left, kernel_size=1, padding=pad_type) self.conv_1x1 = ActConvBn(in_chs_right, out_chs_right, kernel_size=1, padding=pad_type) self.comb_iter_0_left = BranchSeparables( out_chs_left, out_chs_left, kernel_size=5, stride=stride, padding=pad_type) self.comb_iter_0_right = create_pool2d('max', 3, stride=stride, padding=pad_type) self.comb_iter_1_left = BranchSeparables( out_chs_right, out_chs_right, kernel_size=7, stride=stride, padding=pad_type) self.comb_iter_1_right = create_pool2d('max', 3, stride=stride, padding=pad_type) self.comb_iter_2_left = BranchSeparables( out_chs_right, out_chs_right, kernel_size=5, stride=stride, padding=pad_type) self.comb_iter_2_right = BranchSeparables( out_chs_right, out_chs_right, kernel_size=3, stride=stride, padding=pad_type) self.comb_iter_3_left = BranchSeparables(out_chs_right, out_chs_right, kernel_size=3) self.comb_iter_3_right = create_pool2d('max', 3, stride=stride, padding=pad_type) self.comb_iter_4_left = BranchSeparables( out_chs_left, out_chs_left, kernel_size=3, stride=stride, padding=pad_type) if is_reduction: self.comb_iter_4_right = ActConvBn( out_chs_right, out_chs_right, kernel_size=1, stride=stride, padding=pad_type) else: self.comb_iter_4_right = None def forward(self, x_left, x_right): x_left = self.conv_prev_1x1(x_left) x_right = self.conv_1x1(x_right) x_out = self.cell_forward(x_left, x_right) return x_out class PNASNet5Large(nn.Module): def __init__( self, num_classes=1000, in_chans=3, output_stride=32, drop_rate=0., global_pool='avg', pad_type='', ): super(PNASNet5Large, self).__init__() self.num_classes = num_classes self.num_features = self.head_hidden_size = 4320 assert output_stride == 32 self.conv_0 = ConvNormAct( in_chans, 96, kernel_size=3, stride=2, padding=0, norm_layer=partial(nn.BatchNorm2d, eps=0.001, momentum=0.1), apply_act=False) self.cell_stem_0 = CellStem0( in_chs_left=96, out_chs_left=54, in_chs_right=96, out_chs_right=54, pad_type=pad_type) self.cell_stem_1 = Cell( in_chs_left=96, out_chs_left=108, in_chs_right=270, out_chs_right=108, pad_type=pad_type, match_prev_layer_dims=True, is_reduction=True) self.cell_0 = Cell( in_chs_left=270, out_chs_left=216, in_chs_right=540, out_chs_right=216, pad_type=pad_type, match_prev_layer_dims=True) self.cell_1 = Cell( in_chs_left=540, out_chs_left=216, in_chs_right=1080, out_chs_right=216, pad_type=pad_type) self.cell_2 = Cell( in_chs_left=1080, out_chs_left=216, in_chs_right=1080, out_chs_right=216, pad_type=pad_type) self.cell_3 = Cell( in_chs_left=1080, out_chs_left=216, in_chs_right=1080, out_chs_right=216, pad_type=pad_type) self.cell_4 = Cell( in_chs_left=1080, out_chs_left=432, in_chs_right=1080, out_chs_right=432, pad_type=pad_type, is_reduction=True) self.cell_5 = Cell( in_chs_left=1080, out_chs_left=432, in_chs_right=2160, out_chs_right=432, pad_type=pad_type, match_prev_layer_dims=True) self.cell_6 = Cell( in_chs_left=2160, out_chs_left=432, in_chs_right=2160, out_chs_right=432, pad_type=pad_type) self.cell_7 = Cell( in_chs_left=2160, out_chs_left=432, in_chs_right=2160, out_chs_right=432, pad_type=pad_type) self.cell_8 = Cell( in_chs_left=2160, out_chs_left=864, in_chs_right=2160, out_chs_right=864, pad_type=pad_type, is_reduction=True) self.cell_9 = Cell( in_chs_left=2160, out_chs_left=864, in_chs_right=4320, out_chs_right=864, 
pad_type=pad_type, match_prev_layer_dims=True) self.cell_10 = Cell( in_chs_left=4320, out_chs_left=864, in_chs_right=4320, out_chs_right=864, pad_type=pad_type) self.cell_11 = Cell( in_chs_left=4320, out_chs_left=864, in_chs_right=4320, out_chs_right=864, pad_type=pad_type) self.act = nn.ReLU() self.feature_info = [ dict(num_chs=96, reduction=2, module='conv_0'), dict(num_chs=270, reduction=4, module='cell_stem_1.conv_1x1.act'), dict(num_chs=1080, reduction=8, module='cell_4.conv_1x1.act'), dict(num_chs=2160, reduction=16, module='cell_8.conv_1x1.act'), dict(num_chs=4320, reduction=32, module='act'), ] self.global_pool, self.head_drop, self.last_linear = create_classifier( self.num_features, self.num_classes, pool_type=global_pool, drop_rate=drop_rate) @torch.jit.ignore def group_matcher(self, coarse=False): return dict(stem=r'^conv_0|cell_stem_[01]', blocks=r'^cell_(\d+)') @torch.jit.ignore def set_grad_checkpointing(self, enable=True): assert not enable, 'gradient checkpointing not supported' @torch.jit.ignore def get_classifier(self) -> nn.Module: return self.last_linear def reset_classifier(self, num_classes: int, global_pool: str = 'avg'): self.num_classes = num_classes self.global_pool, self.last_linear = create_classifier( self.num_features, self.num_classes, pool_type=global_pool) def forward_features(self, x): x_conv_0 = self.conv_0(x) x_stem_0 = self.cell_stem_0(x_conv_0) x_stem_1 = self.cell_stem_1(x_conv_0, x_stem_0) x_cell_0 = self.cell_0(x_stem_0, x_stem_1) x_cell_1 = self.cell_1(x_stem_1, x_cell_0) x_cell_2 = self.cell_2(x_cell_0, x_cell_1) x_cell_3 = self.cell_3(x_cell_1, x_cell_2) x_cell_4 = self.cell_4(x_cell_2, x_cell_3) x_cell_5 = self.cell_5(x_cell_3, x_cell_4) x_cell_6 = self.cell_6(x_cell_4, x_cell_5) x_cell_7 = self.cell_7(x_cell_5, x_cell_6) x_cell_8 = self.cell_8(x_cell_6, x_cell_7) x_cell_9 = self.cell_9(x_cell_7, x_cell_8) x_cell_10 = self.cell_10(x_cell_8, x_cell_9) x_cell_11 = self.cell_11(x_cell_9, x_cell_10) x = self.act(x_cell_11) return x def forward_head(self, x, pre_logits: bool = False): x = self.global_pool(x) x = self.head_drop(x) return x if pre_logits else self.last_linear(x) def forward(self, x): x = self.forward_features(x) x = self.forward_head(x) return x def _create_pnasnet(variant, pretrained=False, **kwargs): return build_model_with_cfg( PNASNet5Large, variant, pretrained, feature_cfg=dict(feature_cls='hook', no_rewrite=True), # not possible to re-write this model **kwargs, ) default_cfgs = generate_default_cfgs({ 'pnasnet5large.tf_in1k': { 'hf_hub_id': 'timm/', 'input_size': (3, 331, 331), 'pool_size': (11, 11), 'crop_pct': 0.911, 'interpolation': 'bicubic', 'mean': (0.5, 0.5, 0.5), 'std': (0.5, 0.5, 0.5), 'num_classes': 1000, 'first_conv': 'conv_0.conv', 'classifier': 'last_linear', }, }) @register_model def pnasnet5large(pretrained=False, **kwargs) -> PNASNet5Large: r"""PNASNet-5 model architecture from the `"Progressive Neural Architecture Search" <https://arxiv.org/abs/1712.00559>`_ paper. """ model_kwargs = dict(pad_type='same', **kwargs) return _create_pnasnet('pnasnet5large', pretrained, **model_kwargs)
pytorch-image-models/timm/models/pnasnet.py/0
{ "file_path": "pytorch-image-models/timm/models/pnasnet.py", "repo_id": "pytorch-image-models", "token_count": 7663 }
265
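An illustrative forward pass (not from the source): the default `pnasnet5large.tf_in1k` config expects 331x331 inputs, and `forward_features` returns the post-activation feature map with `num_features=4320` channels ahead of pooling.

```python
import torch
import timm

model = timm.create_model('pnasnet5large', pretrained=False)
model.eval()

x = torch.randn(1, 3, 331, 331)
with torch.no_grad():
    feats = model.forward_features(x)    # (1, 4320, 11, 11)
    logits = model.forward_head(feats)   # (1, 1000)
print(feats.shape, logits.shape)
```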
""" Selective Kernel Networks (ResNet base) Paper: Selective Kernel Networks (https://arxiv.org/abs/1903.06586) This was inspired by reading 'Compounding the Performance Improvements...' (https://arxiv.org/abs/2001.06268) and a streamlined impl at https://github.com/clovaai/assembled-cnn but I ended up building something closer to the original paper with some modifications of my own to better balance param count vs accuracy. Hacked together by / Copyright 2020 Ross Wightman """ import math from torch import nn as nn from timm.data import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD from timm.layers import SelectiveKernel, ConvNormAct, create_attn from ._builder import build_model_with_cfg from ._registry import register_model, generate_default_cfgs from .resnet import ResNet class SelectiveKernelBasic(nn.Module): expansion = 1 def __init__( self, inplanes, planes, stride=1, downsample=None, cardinality=1, base_width=64, sk_kwargs=None, reduce_first=1, dilation=1, first_dilation=None, act_layer=nn.ReLU, norm_layer=nn.BatchNorm2d, attn_layer=None, aa_layer=None, drop_block=None, drop_path=None, ): super(SelectiveKernelBasic, self).__init__() sk_kwargs = sk_kwargs or {} conv_kwargs = dict(act_layer=act_layer, norm_layer=norm_layer) assert cardinality == 1, 'BasicBlock only supports cardinality of 1' assert base_width == 64, 'BasicBlock doest not support changing base width' first_planes = planes // reduce_first outplanes = planes * self.expansion first_dilation = first_dilation or dilation self.conv1 = SelectiveKernel( inplanes, first_planes, stride=stride, dilation=first_dilation, aa_layer=aa_layer, drop_layer=drop_block, **conv_kwargs, **sk_kwargs) self.conv2 = ConvNormAct( first_planes, outplanes, kernel_size=3, dilation=dilation, apply_act=False, **conv_kwargs) self.se = create_attn(attn_layer, outplanes) self.act = act_layer(inplace=True) self.downsample = downsample self.drop_path = drop_path def zero_init_last(self): if getattr(self.conv2.bn, 'weight', None) is not None: nn.init.zeros_(self.conv2.bn.weight) def forward(self, x): shortcut = x x = self.conv1(x) x = self.conv2(x) if self.se is not None: x = self.se(x) if self.drop_path is not None: x = self.drop_path(x) if self.downsample is not None: shortcut = self.downsample(shortcut) x += shortcut x = self.act(x) return x class SelectiveKernelBottleneck(nn.Module): expansion = 4 def __init__( self, inplanes, planes, stride=1, downsample=None, cardinality=1, base_width=64, sk_kwargs=None, reduce_first=1, dilation=1, first_dilation=None, act_layer=nn.ReLU, norm_layer=nn.BatchNorm2d, attn_layer=None, aa_layer=None, drop_block=None, drop_path=None, ): super(SelectiveKernelBottleneck, self).__init__() sk_kwargs = sk_kwargs or {} conv_kwargs = dict(act_layer=act_layer, norm_layer=norm_layer) width = int(math.floor(planes * (base_width / 64)) * cardinality) first_planes = width // reduce_first outplanes = planes * self.expansion first_dilation = first_dilation or dilation self.conv1 = ConvNormAct(inplanes, first_planes, kernel_size=1, **conv_kwargs) self.conv2 = SelectiveKernel( first_planes, width, stride=stride, dilation=first_dilation, groups=cardinality, aa_layer=aa_layer, drop_layer=drop_block, **conv_kwargs, **sk_kwargs) self.conv3 = ConvNormAct(width, outplanes, kernel_size=1, apply_act=False, **conv_kwargs) self.se = create_attn(attn_layer, outplanes) self.act = act_layer(inplace=True) self.downsample = downsample self.drop_path = drop_path def zero_init_last(self): if getattr(self.conv3.bn, 'weight', None) is not None: 
nn.init.zeros_(self.conv3.bn.weight) def forward(self, x): shortcut = x x = self.conv1(x) x = self.conv2(x) x = self.conv3(x) if self.se is not None: x = self.se(x) if self.drop_path is not None: x = self.drop_path(x) if self.downsample is not None: shortcut = self.downsample(shortcut) x += shortcut x = self.act(x) return x def _create_skresnet(variant, pretrained=False, **kwargs): return build_model_with_cfg( ResNet, variant, pretrained, **kwargs, ) def _cfg(url='', **kwargs): return { 'url': url, 'num_classes': 1000, 'input_size': (3, 224, 224), 'pool_size': (7, 7), 'crop_pct': 0.875, 'interpolation': 'bicubic', 'mean': IMAGENET_DEFAULT_MEAN, 'std': IMAGENET_DEFAULT_STD, 'first_conv': 'conv1', 'classifier': 'fc', **kwargs } default_cfgs = generate_default_cfgs({ 'skresnet18.ra_in1k': _cfg(hf_hub_id='timm/'), 'skresnet34.ra_in1k': _cfg(hf_hub_id='timm/'), 'skresnet50.untrained': _cfg(), 'skresnet50d.untrained': _cfg( first_conv='conv1.0'), 'skresnext50_32x4d.ra_in1k': _cfg(hf_hub_id='timm/'), }) @register_model def skresnet18(pretrained=False, **kwargs) -> ResNet: """Constructs a Selective Kernel ResNet-18 model. Different from configs in Select Kernel paper or "Compounding the Performance Improvements..." this variation splits the input channels to the selective convolutions to keep param count down. """ sk_kwargs = dict(rd_ratio=1 / 8, rd_divisor=16, split_input=True) model_args = dict( block=SelectiveKernelBasic, layers=[2, 2, 2, 2], block_args=dict(sk_kwargs=sk_kwargs), zero_init_last=False) return _create_skresnet('skresnet18', pretrained, **dict(model_args, **kwargs)) @register_model def skresnet34(pretrained=False, **kwargs) -> ResNet: """Constructs a Selective Kernel ResNet-34 model. Different from configs in Select Kernel paper or "Compounding the Performance Improvements..." this variation splits the input channels to the selective convolutions to keep param count down. """ sk_kwargs = dict(rd_ratio=1 / 8, rd_divisor=16, split_input=True) model_args = dict( block=SelectiveKernelBasic, layers=[3, 4, 6, 3], block_args=dict(sk_kwargs=sk_kwargs), zero_init_last=False) return _create_skresnet('skresnet34', pretrained, **dict(model_args, **kwargs)) @register_model def skresnet50(pretrained=False, **kwargs) -> ResNet: """Constructs a Select Kernel ResNet-50 model. Different from configs in Select Kernel paper or "Compounding the Performance Improvements..." this variation splits the input channels to the selective convolutions to keep param count down. """ sk_kwargs = dict(split_input=True) model_args = dict( block=SelectiveKernelBottleneck, layers=[3, 4, 6, 3], block_args=dict(sk_kwargs=sk_kwargs), zero_init_last=False) return _create_skresnet('skresnet50', pretrained, **dict(model_args, **kwargs)) @register_model def skresnet50d(pretrained=False, **kwargs) -> ResNet: """Constructs a Select Kernel ResNet-50-D model. Different from configs in Select Kernel paper or "Compounding the Performance Improvements..." this variation splits the input channels to the selective convolutions to keep param count down. """ sk_kwargs = dict(split_input=True) model_args = dict( block=SelectiveKernelBottleneck, layers=[3, 4, 6, 3], stem_width=32, stem_type='deep', avg_down=True, block_args=dict(sk_kwargs=sk_kwargs), zero_init_last=False) return _create_skresnet('skresnet50d', pretrained, **dict(model_args, **kwargs)) @register_model def skresnext50_32x4d(pretrained=False, **kwargs) -> ResNet: """Constructs a Select Kernel ResNeXt50-32x4d model. 
This should be equivalent to the SKNet-50 model in the Select Kernel Paper """ sk_kwargs = dict(rd_ratio=1/16, rd_divisor=32, split_input=False) model_args = dict( block=SelectiveKernelBottleneck, layers=[3, 4, 6, 3], cardinality=32, base_width=4, block_args=dict(sk_kwargs=sk_kwargs), zero_init_last=False) return _create_skresnet('skresnext50_32x4d', pretrained, **dict(model_args, **kwargs))
pytorch-image-models/timm/models/sknet.py/0
{ "file_path": "pytorch-image-models/timm/models/sknet.py", "repo_id": "pytorch-image-models", "token_count": 3811 }
266
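Since these variants are thin configurations over timm's `ResNet`, creating one and counting the swapped-in blocks is a quick sanity check. A sketch (mine, not from the source):

```python
import timm
from timm.layers import SelectiveKernel

model = timm.create_model('skresnext50_32x4d', pretrained=False)

# Every bottleneck's 3x3 conv is replaced by a SelectiveKernel module,
# so a 3-4-6-3 layout should yield 16 of them.
sk_blocks = [m for m in model.modules() if isinstance(m, SelectiveKernel)]
print(len(sk_blocks))  # 16
```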
""" ViTamin Paper: Designing Scalable Vison Models in the Vision-Language Era A family of model weights on Huggingface: https://huggingface.co/collections/jienengchen/vitamin-family-661048126b72debdaca060bf @inproceedings{chen2024vitamin, title={ViTamin: Designing Scalable Vision Models in the Vision-language Era}, author={Chen, Jieneng and Yu, Qihang and Shen, Xiaohui and Yuille, Alan and Chen, Liang-Chieh}, booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition}, year={2024} } Based on Apache 2.0 licensed code at https://github.com/ViTamin/ViTamin Modifications and timm support by Jieneng Chen 2024 Reference: https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/vision_transformer.py https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/vision_transformer_hybrid.py """ import math from dataclasses import dataclass, field from functools import partial from typing import Optional, Union, Tuple import torch import torch.nn as nn from timm.data import OPENAI_CLIP_MEAN, OPENAI_CLIP_STD from timm.layers import create_act_layer, get_norm_layer, get_norm_act_layer, create_conv2d, \ make_divisible, DropPath, HybridEmbed from ._builder import build_model_with_cfg from ._manipulate import named_apply, checkpoint_seq from ._registry import register_model, generate_default_cfgs from .vision_transformer import VisionTransformer, checkpoint_filter_fn @dataclass class VitConvCfg: expand_ratio: float = 4.0 expand_output: bool = True # calculate expansion channels from output (vs input chs) kernel_size: int = 3 group_size: int = 1 # 1 == depthwise pre_norm_act: bool = False # activation after pre-norm stride_mode: str = 'dw' # stride done via one of 'pool', '1x1', 'dw' pool_type: str = 'avg2' downsample_pool_type: str = 'avg2' act_layer: str = 'gelu' # stem & stage 1234 norm_layer: str = '' norm_eps: float = 1e-5 down_shortcut: Optional[bool] = True mlp: str = 'mlp' @dataclass class VitCfg: embed_dim: Tuple[Union[int, Tuple[int, ...]], ...] = (96, 192, 384, 768) depths: Tuple[Union[int, Tuple[int, ...]], ...] 
= (2, 3, 5, 2) stem_width: int = 64 conv_cfg: VitConvCfg = field(default_factory=VitConvCfg) head_type: str = "" def _init_conv(module, name, scheme=''): if isinstance(module, nn.Conv2d): fan_out = module.kernel_size[0] * module.kernel_size[1] * module.out_channels fan_out //= module.groups nn.init.normal_(module.weight, 0, math.sqrt(2.0 / fan_out)) if module.bias is not None: nn.init.zeros_(module.bias) class Stem(nn.Module): def __init__( self, in_chs: int, out_chs: int, act_layer: str = 'gelu', norm_layer: str = 'layernorm2d', norm_eps: float = 1e-6, bias: bool = True, ): super().__init__() norm_act_layer = partial(get_norm_act_layer(norm_layer, act_layer), eps=norm_eps) self.out_chs = out_chs self.conv1 = create_conv2d(in_chs, out_chs, 3, stride=2, bias=bias) self.norm1 = norm_act_layer(out_chs) self.conv2 = create_conv2d(out_chs, out_chs, 3, stride=1, bias=bias) named_apply(_init_conv, self) def forward(self, x): x = self.conv1(x) x = self.norm1(x) x = self.conv2(x) return x class Downsample2d(nn.Module): def __init__( self, dim: int, dim_out: int, pool_type: str = 'avg2', bias: bool = True, ): super().__init__() self.pool = nn.AvgPool2d(kernel_size=3, stride=2, padding=1, count_include_pad=False) if dim != dim_out: self.expand = nn.Conv2d(dim, dim_out, 1, bias=bias) # 1x1 conv else: self.expand = nn.Identity() def forward(self, x): x = self.pool(x) # spatial downsample x = self.expand(x) # expand chs return x class StridedConv(nn.Module): """ downsample 2d as well """ def __init__( self, kernel_size=3, stride=2, padding=1, in_chans=3, embed_dim=768 ): super().__init__() norm_layer = partial(get_norm_layer('layernorm2d'), eps=1e-6) self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=kernel_size, stride=stride, padding=padding) self.norm = norm_layer(in_chans) # affine over C def forward(self, x): x = self.norm(x) x = self.proj(x) return x class MbConvLNBlock(nn.Module): """ Pre-Norm Conv Block - 1x1 - kxk - 1x1, w/ inverted bottleneck (expand) """ def __init__( self, in_chs: int, out_chs: int, stride: int = 1, drop_path: float = 0., kernel_size: int = 3, norm_layer: str = 'layernorm2d', norm_eps: float = 1e-6, act_layer: str = 'gelu', expand_ratio: float = 4.0, ): super(MbConvLNBlock, self).__init__() self.stride, self.in_chs, self.out_chs = stride, in_chs, out_chs mid_chs = make_divisible(out_chs * expand_ratio) prenorm_act_layer = partial(get_norm_act_layer(norm_layer, act_layer), eps=norm_eps) if stride == 2: self.shortcut = Downsample2d(in_chs, out_chs, pool_type='avg', bias=True) elif in_chs != out_chs: self.shortcut = nn.Conv2d(in_chs, out_chs, 1, bias=True) else: self.shortcut = nn.Identity() self.pre_norm = prenorm_act_layer(in_chs, apply_act=False) self.down = nn.Identity() self.conv1_1x1 = create_conv2d(in_chs, mid_chs, 1, stride=1, bias=True) self.act1 = create_act_layer(act_layer, inplace=True) self.conv2_kxk = create_conv2d( mid_chs, mid_chs, kernel_size, stride=stride, dilation=1, groups=mid_chs, bias=True) self.act2 = create_act_layer(act_layer, inplace=True) self.conv3_1x1 = create_conv2d(mid_chs, out_chs, 1, bias=True) self.drop_path = DropPath(drop_path) if drop_path > 0. 
else nn.Identity() def init_weights(self, scheme=''): named_apply(partial(_init_conv, scheme=scheme), self) def forward(self, x): shortcut = self.shortcut(x) x = self.pre_norm(x) x = self.down(x) # nn.Identity() # 1x1 expansion conv & act x = self.conv1_1x1(x) x = self.act1(x) # (strided) depthwise 3x3 conv & act x = self.conv2_kxk(x) x = self.act2(x) # 1x1 linear projection to output width x = self.conv3_1x1(x) x = self.drop_path(x) + shortcut return x class MbConvStages(nn.Module): """ MobileConv for stage 1 and stage 2 of ViTamin """ def __init__( self, cfg: VitCfg, img_size: Union[int, Tuple[int, int]] = 224, # place holder in_chans: int = 3, ): super().__init__() self.grad_checkpointing = False self.stem = Stem( in_chs=in_chans, out_chs=cfg.stem_width, ) stages = [] self.num_stages = len(cfg.embed_dim) for s, dim in enumerate(cfg.embed_dim[:2]): # stage stage_in_chs = cfg.embed_dim[s-1] if s>0 else cfg.stem_width blocks = [ MbConvLNBlock( in_chs = stage_in_chs if d==0 else dim, out_chs = dim, stride = 2 if d == 0 else 1, ) for d in range(cfg.depths[s]) ] stages += [nn.Sequential(*blocks)] self.stages = nn.Sequential(*stages) self.pool = StridedConv( stride=2, in_chans=cfg.embed_dim[1], embed_dim=cfg.embed_dim[2] ) def forward(self, x): x = self.stem(x) if self.grad_checkpointing and not torch.jit.is_scripting(): x = checkpoint_seq(self.stages, x) else: x = self.stages(x) x = self.pool(x) return x class GeGluMlp(nn.Module): def __init__( self, in_features, hidden_features, act_layer = 'gelu', norm_layer = None, bias = True, drop = 0.0, ): super().__init__() norm_layer = partial(get_norm_layer(norm_layer or 'layernorm'), eps=1e-6) self.norm = norm_layer(in_features) self.w0 = nn.Linear(in_features, hidden_features, bias=bias) self.act = create_act_layer(act_layer) self.w1 = nn.Linear(in_features, hidden_features, bias=bias) self.w2 = nn.Linear(hidden_features, in_features, bias=bias) def forward(self, x): x = self.norm(x) x = self.act(self.w0(x)) * self.w1(x) x = self.w2(x) return x def _create_vitamin(variant, pretrained=False, embed_cfg=None, **kwargs): out_indices = kwargs.pop('out_indices', 3) assert embed_cfg is not None backbone = MbConvStages(cfg=embed_cfg, in_chans=kwargs.get('in_chans', 3)) kwargs['embed_layer'] = partial(HybridEmbed, backbone=backbone, proj=False) kwargs.setdefault('patch_size', 1) # default patch size for hybrid models if not set return build_model_with_cfg( VisionTransformer, variant, pretrained, pretrained_filter_fn=checkpoint_filter_fn, feature_cfg=dict(out_indices=out_indices, feature_cls='getter'), **kwargs, ) def _cfg(url='', **kwargs): return { 'url': url, 'num_classes': 1000, 'input_size': (3, 224, 224), 'pool_size': None, 'crop_pct': .9, 'interpolation': 'bicubic', 'fixed_input_size': True, 'mean': OPENAI_CLIP_MEAN, 'std': OPENAI_CLIP_STD, 'first_conv': 'patch_embed.backbone.stem.conv1', 'classifier': 'head', **kwargs } default_cfgs = generate_default_cfgs({ 'vitamin_small_224.datacomp1b_clip_ltt': _cfg( hf_hub_id='jienengchen/ViTamin-S-LTT', num_classes=384), 'vitamin_small_224.datacomp1b_clip': _cfg( hf_hub_id='jienengchen/ViTamin-S', num_classes=384), 'vitamin_base_224.datacomp1b_clip_ltt': _cfg( hf_hub_id='jienengchen/ViTamin-B-LTT', num_classes=768), 'vitamin_base_224.datacomp1b_clip': _cfg( hf_hub_id='jienengchen/ViTamin-B', num_classes=768), 'vitamin_large_224.datacomp1b_clip': _cfg( hf_hub_id='jienengchen/ViTamin-L-224px', num_classes=768), 'vitamin_large_256.datacomp1b_clip': _cfg( hf_hub_id='jienengchen/ViTamin-L-256px', num_classes=768, 
input_size=(3, 256, 256), crop_pct=1.0), 'vitamin_large_336.datacomp1b_clip': _cfg( hf_hub_id='jienengchen/ViTamin-L-336px', num_classes=768, input_size=(3, 336, 336), crop_pct=1.0), 'vitamin_large_384.datacomp1b_clip': _cfg( hf_hub_id='jienengchen/ViTamin-L-384px', num_classes=768, input_size=(3, 384, 384), crop_pct=1.0), 'vitamin_large2_224.datacomp1b_clip': _cfg( hf_hub_id='jienengchen/ViTamin-L2-224px', num_classes=1024), 'vitamin_large2_256.datacomp1b_clip': _cfg( hf_hub_id='jienengchen/ViTamin-L2-256px', num_classes=1024, input_size=(3, 256, 256), crop_pct=1.0), 'vitamin_large2_336.datacomp1b_clip': _cfg( hf_hub_id='jienengchen/ViTamin-L2-336px', num_classes=1024, input_size=(3, 336, 336), crop_pct=1.0), 'vitamin_large2_384.datacomp1b_clip': _cfg( hf_hub_id='jienengchen/ViTamin-L2-384px', num_classes=1024, input_size=(3, 384, 384), crop_pct=1.0), 'vitamin_xlarge_256.datacomp1b_clip': _cfg( hf_hub_id='jienengchen/ViTamin-XL-256px', num_classes=1152, input_size=(3, 256, 256), crop_pct=1.0), 'vitamin_xlarge_336.datacomp1b_clip': _cfg( hf_hub_id='jienengchen/ViTamin-XL-336px', num_classes=1152, input_size=(3, 336, 336), crop_pct=1.0), 'vitamin_xlarge_384.datacomp1b_clip': _cfg( hf_hub_id='jienengchen/ViTamin-XL-384px', num_classes=1152, input_size=(3, 384, 384), crop_pct=1.0), }) @register_model def vitamin_small_224(pretrained=False, **kwargs) -> VisionTransformer: embed_cfg = VitCfg( embed_dim=(64, 128, 384), depths=(2, 4, 1), stem_width=64, conv_cfg=VitConvCfg( norm_layer='layernorm2d', norm_eps=1e-6, ), head_type='1d', ) model_args = dict( embed_dim=384, depth=14, num_heads=6, mlp_layer=GeGluMlp, mlp_ratio=2., class_token=False, global_pool='avg', embed_cfg=embed_cfg ) model = _create_vitamin('vitamin_small_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vitamin_base_224(pretrained=False, **kwargs) -> VisionTransformer: embed_cfg = VitCfg( embed_dim=(128, 256, 768), depths=(2, 4, 1), stem_width=128, conv_cfg=VitConvCfg( norm_layer='layernorm2d', norm_eps=1e-6, ), head_type='1d', ) model_args = dict( embed_dim=768, depth=14, num_heads=12, mlp_layer=GeGluMlp, mlp_ratio=2., class_token=False, global_pool='avg', embed_cfg=embed_cfg) model = _create_vitamin('vitamin_base_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vitamin_large_224(pretrained=False, **kwargs) -> VisionTransformer: embed_cfg = VitCfg( embed_dim=(160, 320, 1024), depths=(2, 4, 1), stem_width=160, conv_cfg=VitConvCfg( norm_layer='layernorm2d', norm_eps=1e-6, ), head_type='1d', ) model_args = dict( embed_dim=1024, depth=31, num_heads=16, mlp_layer=GeGluMlp, mlp_ratio=2., class_token=False, global_pool='avg', embed_cfg=embed_cfg, ) model = _create_vitamin('vitamin_large_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vitamin_large_256(pretrained=False, **kwargs) -> VisionTransformer: embed_cfg = VitCfg( embed_dim=(160, 320, 1024), depths=(2, 4, 1), stem_width=160, conv_cfg=VitConvCfg( norm_layer='layernorm2d', norm_eps=1e-6, ), head_type='1d', ) model_args = dict( img_size=256, embed_dim=1024, depth=31, num_heads=16, mlp_layer=GeGluMlp, mlp_ratio=2., class_token=False, global_pool='avg', embed_cfg=embed_cfg) model = _create_vitamin('vitamin_large_256', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vitamin_large_336(pretrained=False, **kwargs) -> VisionTransformer: embed_cfg = VitCfg( embed_dim=(160, 320, 1024), depths=(2, 4, 1), stem_width=160, 
conv_cfg=VitConvCfg( norm_layer='layernorm2d', norm_eps=1e-6, ), head_type='1d', ) model_args = dict( img_size=336, embed_dim=1024, depth=31, num_heads=16, mlp_layer=GeGluMlp, mlp_ratio=2., class_token=False, global_pool='avg', embed_cfg=embed_cfg ) model = _create_vitamin('vitamin_large_336', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vitamin_large_384(pretrained=False, **kwargs) -> VisionTransformer: embed_cfg = VitCfg( embed_dim=(160, 320, 1024), depths=(2, 4, 1), stem_width=160, conv_cfg=VitConvCfg( norm_layer='layernorm2d', norm_eps=1e-6, ), head_type='1d', ) model_args = dict( img_size=384, embed_dim=1024, depth=31, num_heads=16, mlp_layer=GeGluMlp, mlp_ratio=2., class_token=False, global_pool='avg', embed_cfg=embed_cfg) model = _create_vitamin('vitamin_large_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vitamin_large2_224(pretrained=False, **kwargs) -> VisionTransformer: embed_cfg = VitCfg( embed_dim=(160, 320, 1024), depths=(2, 4, 1), stem_width=160, conv_cfg=VitConvCfg( norm_layer='layernorm2d', norm_eps=1e-6, ), head_type='1d', ) model_args = dict( embed_dim=1024, depth=31, num_heads=16, mlp_layer=GeGluMlp, mlp_ratio=2., class_token=False, global_pool='avg', embed_cfg=embed_cfg, ) model = _create_vitamin('vitamin_large2_224', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vitamin_large2_256(pretrained=False, **kwargs) -> VisionTransformer: embed_cfg = VitCfg( embed_dim=(160, 320, 1024), depths=(2, 4, 1), stem_width=160, conv_cfg=VitConvCfg( norm_layer='layernorm2d', norm_eps=1e-6, ), head_type='1d', ) model_args = dict( img_size=256, embed_dim=1024, depth=31, num_heads=16, mlp_layer=GeGluMlp, mlp_ratio=2., class_token=False, global_pool='avg', embed_cfg=embed_cfg) model = _create_vitamin('vitamin_large2_256', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vitamin_large2_336(pretrained=False, **kwargs) -> VisionTransformer: embed_cfg = VitCfg( embed_dim=(160, 320, 1024), depths=(2, 4, 1), stem_width=160, conv_cfg=VitConvCfg( norm_layer='layernorm2d', norm_eps=1e-6, ), head_type='1d', ) model_args = dict( img_size=336, embed_dim=1024, depth=31, num_heads=16, mlp_layer=GeGluMlp, mlp_ratio=2., class_token=False, global_pool='avg', embed_cfg=embed_cfg ) model = _create_vitamin('vitamin_large2_336', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vitamin_large2_384(pretrained=False, **kwargs) -> VisionTransformer: embed_cfg = VitCfg( embed_dim=(160, 320, 1024), depths=(2, 4, 1), stem_width=160, conv_cfg=VitConvCfg( norm_layer='layernorm2d', norm_eps=1e-6, ), head_type='1d', ) model_args = dict( img_size=384, embed_dim=1024, depth=31, num_heads=16, mlp_layer=GeGluMlp, mlp_ratio=2., class_token=False, global_pool='avg', embed_cfg=embed_cfg) model = _create_vitamin('vitamin_large2_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vitamin_xlarge_256(pretrained=False, **kwargs) -> VisionTransformer: embed_cfg=VitCfg( embed_dim=(192, 384, 1152), depths=(2, 4, 1), stem_width=192, conv_cfg=VitConvCfg( norm_layer='layernorm2d', norm_eps=1e-6, ), head_type='1d', ) model_args = dict( img_size=256, embed_dim=1152, depth=32, num_heads=16, mlp_layer=GeGluMlp, mlp_ratio=2., class_token=False, global_pool='avg', pos_embed='none', embed_cfg=embed_cfg) model = _create_vitamin( 'vitamin_xlarge_256', pretrained=pretrained, **dict(model_args, **kwargs)) return 
model @register_model def vitamin_xlarge_336(pretrained=False, **kwargs) -> VisionTransformer: embed_cfg = VitCfg( embed_dim=(192, 384, 1152), depths=(2, 4, 1), stem_width=192, conv_cfg=VitConvCfg( norm_layer='layernorm2d', norm_eps=1e-6, ), head_type='1d', ) model_args = dict( img_size=336, embed_dim=1152, depth=32, num_heads=16, mlp_layer=GeGluMlp, mlp_ratio=2., class_token=False, global_pool='avg', pos_embed='none', embed_cfg=embed_cfg) model = _create_vitamin('vitamin_xlarge_336', pretrained=pretrained, **dict(model_args, **kwargs)) return model @register_model def vitamin_xlarge_384(pretrained=False, **kwargs) -> VisionTransformer: embed_cfg = VitCfg( embed_dim=(192, 384, 1152), depths=(2, 4, 1), stem_width=192, conv_cfg=VitConvCfg( norm_layer='layernorm2d', norm_eps=1e-6, ), head_type='1d', ) model_args = dict( img_size=384, embed_dim=1152, depth=32, num_heads=16, mlp_layer=GeGluMlp, mlp_ratio=2., class_token=False, global_pool='avg', pos_embed='none', embed_cfg=embed_cfg) model = _create_vitamin('vitamin_xlarge_384', pretrained=pretrained, **dict(model_args, **kwargs)) return model
pytorch-image-models/timm/models/vitamin.py/0
{ "file_path": "pytorch-image-models/timm/models/vitamin.py", "repo_id": "pytorch-image-models", "token_count": 10085 }
267
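A hedged usage sketch (not part of the source): since the configs set `fixed_input_size=True`, inputs should match the variant's native resolution; passing `num_classes=0` makes timm return the pooled pre-logits embedding.

```python
import torch
import timm

model = timm.create_model('vitamin_small_224', pretrained=False, num_classes=0)
model.eval()

x = torch.randn(1, 3, 224, 224)  # fixed input size for this variant
with torch.no_grad():
    emb = model(x)
print(emb.shape)  # (1, 384) -- the small variant's embed_dim
```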
""" Adan Optimizer Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models[J]. arXiv preprint arXiv:2208.06677, 2022. https://arxiv.org/abs/2208.06677 Implementation adapted from https://github.com/sail-sg/Adan """ # Copyright 2022 Garena Online Private Limited # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import math from typing import List, Optional, Tuple import torch from torch import Tensor from torch.optim.optimizer import Optimizer class MultiTensorApply(object): available = False warned = False def __init__(self, chunk_size): try: MultiTensorApply.available = True self.chunk_size = chunk_size except ImportError as err: MultiTensorApply.available = False MultiTensorApply.import_err = err def __call__(self, op, noop_flag_buffer, tensor_lists, *args): return op(self.chunk_size, noop_flag_buffer, tensor_lists, *args) class Adan(Optimizer): """ Implements a pytorch variant of Adan. Adan was proposed in Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models https://arxiv.org/abs/2208.06677 Arguments: params: Iterable of parameters to optimize or dicts defining parameter groups. lr: Learning rate. betas: Coefficients used for first- and second-order moments. eps: Term added to the denominator to improve numerical stability. weight_decay: Decoupled weight decay (L2 penalty) no_prox: How to perform the weight decay caution: Enable caution from 'Cautious Optimizers' foreach: If True would use torch._foreach implementation. Faster but uses slightly more memory. 
""" def __init__(self, params, lr: float = 1e-3, betas: Tuple[float, float, float] = (0.98, 0.92, 0.99), eps: float = 1e-8, weight_decay: float = 0.0, no_prox: bool = False, caution: bool = False, foreach: Optional[bool] = None, ): if not 0.0 <= lr: raise ValueError('Invalid learning rate: {}'.format(lr)) if not 0.0 <= eps: raise ValueError('Invalid epsilon value: {}'.format(eps)) if not 0.0 <= betas[0] < 1.0: raise ValueError('Invalid beta parameter at index 0: {}'.format(betas[0])) if not 0.0 <= betas[1] < 1.0: raise ValueError('Invalid beta parameter at index 1: {}'.format(betas[1])) if not 0.0 <= betas[2] < 1.0: raise ValueError('Invalid beta parameter at index 2: {}'.format(betas[2])) defaults = dict( lr=lr, betas=betas, eps=eps, weight_decay=weight_decay, no_prox=no_prox, caution=caution, foreach=foreach, ) super().__init__(params, defaults) def __setstate__(self, state): super(Adan, self).__setstate__(state) for group in self.param_groups: group.setdefault('no_prox', False) group.setdefault('caution', False) @torch.no_grad() def restart_opt(self): for group in self.param_groups: group['step'] = 0 for p in group['params']: if p.requires_grad: state = self.state[p] # State initialization # Exponential moving average of gradient values state['exp_avg'] = torch.zeros_like(p) # Exponential moving average of squared gradient values state['exp_avg_sq'] = torch.zeros_like(p) # Exponential moving average of gradient difference state['exp_avg_diff'] = torch.zeros_like(p) @torch.no_grad() def step(self, closure=None): """Performs a single optimization step.""" loss = None if closure is not None: with torch.enable_grad(): loss = closure() try: has_scalar_maximum = 'Scalar' in torch.ops.aten._foreach_maximum_.overloads() except: has_scalar_maximum = False for group in self.param_groups: params_with_grad = [] grads = [] exp_avgs = [] exp_avg_sqs = [] exp_avg_diffs = [] neg_pre_grads = [] beta1, beta2, beta3 = group['betas'] # assume same step across group now to simplify things # per parameter step can be easily supported by making it a tensor, or pass list into kernel if 'step' in group: group['step'] += 1 else: group['step'] = 1 bias_correction1 = 1.0 - beta1 ** group['step'] bias_correction2 = 1.0 - beta2 ** group['step'] bias_correction3 = 1.0 - beta3 ** group['step'] for p in group['params']: if p.grad is None: continue params_with_grad.append(p) grads.append(p.grad) state = self.state[p] if len(state) == 0: state['exp_avg'] = torch.zeros_like(p) state['exp_avg_sq'] = torch.zeros_like(p) state['exp_avg_diff'] = torch.zeros_like(p) if 'neg_pre_grad' not in state or group['step'] == 1: state['neg_pre_grad'] = -p.grad.clone() exp_avgs.append(state['exp_avg']) exp_avg_sqs.append(state['exp_avg_sq']) exp_avg_diffs.append(state['exp_avg_diff']) neg_pre_grads.append(state['neg_pre_grad']) if not params_with_grad: continue if group['foreach'] is None: use_foreach = not group['caution'] or has_scalar_maximum else: use_foreach = group['foreach'] if use_foreach: func = _multi_tensor_adan else: func = _single_tensor_adan func( params_with_grad, grads, exp_avgs=exp_avgs, exp_avg_sqs=exp_avg_sqs, exp_avg_diffs=exp_avg_diffs, neg_pre_grads=neg_pre_grads, beta1=beta1, beta2=beta2, beta3=beta3, bias_correction1=bias_correction1, bias_correction2=bias_correction2, bias_correction3_sqrt=math.sqrt(bias_correction3), lr=group['lr'], weight_decay=group['weight_decay'], eps=group['eps'], no_prox=group['no_prox'], caution=group['caution'], ) return loss def _single_tensor_adan( params: List[Tensor], grads: 
List[Tensor], exp_avgs: List[Tensor], exp_avg_sqs: List[Tensor], exp_avg_diffs: List[Tensor], neg_pre_grads: List[Tensor], *, beta1: float, beta2: float, beta3: float, bias_correction1: float, bias_correction2: float, bias_correction3_sqrt: float, lr: float, weight_decay: float, eps: float, no_prox: bool, caution: bool, ): for i, param in enumerate(params): grad = grads[i] exp_avg = exp_avgs[i] exp_avg_sq = exp_avg_sqs[i] exp_avg_diff = exp_avg_diffs[i] neg_grad_or_diff = neg_pre_grads[i] # for memory saving, we use `neg_grad_or_diff` to get some temp variable in an inplace way neg_grad_or_diff.add_(grad) exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1) # m_t exp_avg_diff.mul_(beta2).add_(neg_grad_or_diff, alpha=1 - beta2) # diff_t neg_grad_or_diff.mul_(beta2).add_(grad) exp_avg_sq.mul_(beta3).addcmul_(neg_grad_or_diff, neg_grad_or_diff, value=1 - beta3) # n_t denom = (exp_avg_sq.sqrt() / bias_correction3_sqrt).add_(eps) step_size_diff = lr * beta2 / bias_correction2 step_size = lr / bias_correction1 if caution: # Apply caution as per 'Cautious Optimizers' - https://arxiv.org/abs/2411.16085 mask = (exp_avg * grad > 0).to(grad.dtype) mask.div_(mask.mean().clamp_(min=1e-3)) exp_avg = exp_avg * mask if no_prox: param.mul_(1 - lr * weight_decay) param.addcdiv_(exp_avg, denom, value=-step_size) param.addcdiv_(exp_avg_diff, denom, value=-step_size_diff) else: param.addcdiv_(exp_avg, denom, value=-step_size) param.addcdiv_(exp_avg_diff, denom, value=-step_size_diff) param.div_(1 + lr * weight_decay) neg_grad_or_diff.zero_().add_(grad, alpha=-1.0) def _multi_tensor_adan( params: List[Tensor], grads: List[Tensor], exp_avgs: List[Tensor], exp_avg_sqs: List[Tensor], exp_avg_diffs: List[Tensor], neg_pre_grads: List[Tensor], *, beta1: float, beta2: float, beta3: float, bias_correction1: float, bias_correction2: float, bias_correction3_sqrt: float, lr: float, weight_decay: float, eps: float, no_prox: bool, caution: bool, ): if len(params) == 0: return # for memory saving, we use `neg_pre_grads` to get some temp variable in a inplace way torch._foreach_add_(neg_pre_grads, grads) torch._foreach_mul_(exp_avgs, beta1) torch._foreach_add_(exp_avgs, grads, alpha=1 - beta1) # m_t torch._foreach_mul_(exp_avg_diffs, beta2) torch._foreach_add_(exp_avg_diffs, neg_pre_grads, alpha=1 - beta2) # diff_t torch._foreach_mul_(neg_pre_grads, beta2) torch._foreach_add_(neg_pre_grads, grads) torch._foreach_mul_(exp_avg_sqs, beta3) torch._foreach_addcmul_(exp_avg_sqs, neg_pre_grads, neg_pre_grads, value=1 - beta3) # n_t denom = torch._foreach_sqrt(exp_avg_sqs) torch._foreach_div_(denom, bias_correction3_sqrt) torch._foreach_add_(denom, eps) step_size_diff = lr * beta2 / bias_correction2 step_size = lr / bias_correction1 if caution: # Apply caution as per 'Cautious Optimizers' - https://arxiv.org/abs/2411.16085 masks = torch._foreach_mul(exp_avgs, grads) masks = [(m > 0).to(g.dtype) for m, g in zip(masks, grads)] mask_scale = [m.mean() for m in masks] torch._foreach_maximum_(mask_scale, 1e-3) torch._foreach_div_(masks, mask_scale) exp_avgs = torch._foreach_mul(exp_avgs, masks) if no_prox: torch._foreach_mul_(params, 1 - lr * weight_decay) torch._foreach_addcdiv_(params, exp_avgs, denom, value=-step_size) torch._foreach_addcdiv_(params, exp_avg_diffs, denom, value=-step_size_diff) else: torch._foreach_addcdiv_(params, exp_avgs, denom, value=-step_size) torch._foreach_addcdiv_(params, exp_avg_diffs, denom, value=-step_size_diff) torch._foreach_div_(params, 1 + lr * weight_decay) torch._foreach_zero_(neg_pre_grads) 
torch._foreach_add_(neg_pre_grads, grads, alpha=-1.0)
pytorch-image-models/timm/optim/adan.py/0
{ "file_path": "pytorch-image-models/timm/optim/adan.py", "repo_id": "pytorch-image-models", "token_count": 5803 }
268
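A minimal training-loop sketch (mine, not from the source), assuming a timm version that exports `Adan` from `timm.optim`:

```python
import torch
import torch.nn as nn
from timm.optim import Adan

model = nn.Linear(16, 4)
opt = Adan(model.parameters(), lr=1e-3, weight_decay=0.02)

for _ in range(3):
    loss = model(torch.randn(8, 16)).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()  # maintains exp_avg / exp_avg_diff / exp_avg_sq per parameter
```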
""" SGDP Optimizer Implementation copied from https://github.com/clovaai/AdamP/blob/master/adamp/sgdp.py Paper: `Slowing Down the Weight Norm Increase in Momentum-based Optimizers` - https://arxiv.org/abs/2006.08217 Code: https://github.com/clovaai/AdamP Copyright (c) 2020-present NAVER Corp. MIT license """ import torch import torch.nn.functional as F from torch.optim.optimizer import Optimizer, required import math from .adamp import projection class SGDP(Optimizer): def __init__( self, params, lr=required, momentum=0, dampening=0, weight_decay=0, nesterov=False, eps=1e-8, delta=0.1, wd_ratio=0.1 ): defaults = dict( lr=lr, momentum=momentum, dampening=dampening, weight_decay=weight_decay, nesterov=nesterov, eps=eps, delta=delta, wd_ratio=wd_ratio, ) super(SGDP, self).__init__(params, defaults) @torch.no_grad() def step(self, closure=None): loss = None if closure is not None: with torch.enable_grad(): loss = closure() for group in self.param_groups: weight_decay = group['weight_decay'] momentum = group['momentum'] dampening = group['dampening'] nesterov = group['nesterov'] for p in group['params']: if p.grad is None: continue grad = p.grad state = self.state[p] # State initialization if len(state) == 0: state['momentum'] = torch.zeros_like(p) # SGD buf = state['momentum'] buf.mul_(momentum).add_(grad, alpha=1. - dampening) if nesterov: d_p = grad + momentum * buf else: d_p = buf # Projection wd_ratio = 1. if len(p.shape) > 1: d_p, wd_ratio = projection(p, grad, d_p, group['delta'], group['wd_ratio'], group['eps']) # Weight decay if weight_decay != 0: p.mul_(1. - group['lr'] * group['weight_decay'] * wd_ratio / (1-momentum)) # Step p.add_(d_p, alpha=-group['lr']) return loss
pytorch-image-models/timm/optim/sgdp.py/0
{ "file_path": "pytorch-image-models/timm/optim/sgdp.py", "repo_id": "pytorch-image-models", "token_count": 1374 }
269
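Usage mirrors plain SGD; a hedged sketch (not from the source), assuming `SGDP` is exported from `timm.optim`:

```python
import torch
import torch.nn as nn
from timm.optim import SGDP

model = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 2))
opt = SGDP(model.parameters(), lr=0.1, momentum=0.9, nesterov=True,
           weight_decay=1e-4, wd_ratio=0.1)

loss = model(torch.randn(4, 8)).sum()
loss.backward()
opt.step()  # the projection path only applies to parameters with ndim > 1
```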
import torch from timm.utils.agc import adaptive_clip_grad def dispatch_clip_grad(parameters, value: float, mode: str = 'norm', norm_type: float = 2.0): """ Dispatch to gradient clipping method Args: parameters (Iterable): model parameters to clip value (float): clipping value/factor/norm, mode dependent mode (str): clipping mode, one of 'norm', 'value', 'agc' norm_type (float): p-norm, default 2.0 """ if mode == 'norm': torch.nn.utils.clip_grad_norm_(parameters, value, norm_type=norm_type) elif mode == 'value': torch.nn.utils.clip_grad_value_(parameters, value) elif mode == 'agc': adaptive_clip_grad(parameters, value, norm_type=norm_type) else: assert False, f"Unknown clip mode ({mode})."
pytorch-image-models/timm/utils/clip_grad.py/0
{ "file_path": "pytorch-image-models/timm/utils/clip_grad.py", "repo_id": "pytorch-image-models", "token_count": 306 }
270
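In a training step the dispatch sits between `backward()` and `optimizer.step()`; a short sketch (not from the source):

```python
import torch
import torch.nn as nn
from timm.utils import dispatch_clip_grad

model = nn.Linear(10, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

loss = model(torch.randn(4, 10)).sum()
loss.backward()
dispatch_clip_grad(model.parameters(), value=1.0, mode='norm')  # global 2-norm clip
opt.step()
```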
- title: Get started sections: - local: index title: Introduction - local: installation title: Installation options - local: guided_tour title: Guided tour - title: Tutorials sections: - local: tutorials/building_good_agents title: ✨ Building good agents - local: tutorials/inspect_runs title: 📊 Inspect your agent runs using telemetry - local: tutorials/tools title: 🛠️ Tools - in-depth guide - local: tutorials/secure_code_execution title: 🛡️ Secure code execution - local: tutorials/memory title: 📚 Manage your agent's memory - title: Conceptual guides sections: - local: conceptual_guides/intro_agents title: 🤖 What are agents? - local: conceptual_guides/react title: 🤔 How do Multi-step agents work? - title: Examples sections: - local: examples/text_to_sql title: Self-correcting Text-to-SQL - local: examples/rag title: Master your knowledge base with agentic RAG - local: examples/multiagents title: Orchestrate a multi-agent system - local: examples/web_browser title: Build a web browser agent using vision models - local: examples/using_different_models title: Using different models - local: examples/plan_customization title: "Human-in-the-Loop: Customize agent plan interactively" - local: examples/async_agent title: Async Applications with Agents - title: Reference sections: - local: reference/agents title: Agent-related objects - local: reference/models title: Model-related objects - title: Tools sections: - local: reference/tools title: Tool-related objects - local: reference/default_tools title: Built-in Tools
smolagents/docs/source/en/_toctree.yml/0
{ "file_path": "smolagents/docs/source/en/_toctree.yml", "repo_id": "smolagents", "token_count": 538 }
271
# Tools <Tip warning={true}> Smolagents is an experimental API which is subject to change at any time. Results returned by the agents can vary as the APIs or underlying models are prone to change. </Tip> To learn more about agents and tools make sure to read the [introductory guide](../index). This page contains the API docs for the underlying classes. ## Tool Base Classes ### load_tool [[autodoc]] load_tool ### tool [[autodoc]] tool ### Tool [[autodoc]] Tool ### launch_gradio_demo [[autodoc]] launch_gradio_demo ## ToolCollection [[autodoc]] ToolCollection ## MCP Client [[autodoc]] smolagents.mcp_client.MCPClient ## Agent Types Agents can handle any type of object in-between tools; tools, being completely multimodal, can accept and return text, image, audio, video, among other types. In order to increase compatibility between tools, as well as to correctly render these returns in ipython (jupyter, colab, ipython notebooks, ...), we implement wrapper classes around these types. The wrapped objects should continue behaving as initially; a text object should still behave as a string, an image object should still behave as a `PIL.Image`. These types have three specific purposes: - Calling `to_raw` on the type should return the underlying object - Calling `to_string` on the type should return the object as a string: that can be the string in case of an `AgentText` but will be the path of the serialized version of the object in other instances - Displaying it in an ipython kernel should display the object correctly ### AgentText [[autodoc]] smolagents.agent_types.AgentText ### AgentImage [[autodoc]] smolagents.agent_types.AgentImage ### AgentAudio [[autodoc]] smolagents.agent_types.AgentAudio
smolagents/docs/source/en/reference/tools.md/0
{ "file_path": "smolagents/docs/source/en/reference/tools.md", "repo_id": "smolagents", "token_count": 481 }
272
# Tools

<Tip warning={true}>

Smolagents is an experimental API which is subject to change at any time. Results returned by the agents can vary as the APIs or underlying models are prone to change.

</Tip>

To learn more about agents and tools make sure to read the [introductory guide](../index). This page contains the API docs for the underlying classes.

## Tools

### load_tool

[[autodoc]] load_tool

### tool

[[autodoc]] tool

### Tool

[[autodoc]] Tool

### launch_gradio_demo

[[autodoc]] launch_gradio_demo

## Default Tools

### PythonInterpreterTool

[[autodoc]] PythonInterpreterTool

### DuckDuckGoSearchTool

[[autodoc]] DuckDuckGoSearchTool

### VisitWebpageTool

[[autodoc]] VisitWebpageTool

### UserInputTool

[[autodoc]] UserInputTool

## ToolCollection

[[autodoc]] ToolCollection

## Agent Types

Agents can handle any type of object in-between tools; tools, being completely multimodal, can accept and return text, image, audio, video, among other types. In order to increase compatibility between tools, as well as to correctly render these returns in ipython (jupyter, colab, ipython notebooks, ...), we implement wrapper classes around these types.

The wrapped objects should continue behaving as they did initially; a text object should still behave as a string, and an image object should still behave as a `PIL.Image`.

These types have three specific purposes:

- Calling `to_raw` on the type should return the underlying object
- Calling `to_string` on the type should return the object as a string: that can be the string in case of an `AgentText` but will be the path of the serialized version of the object in other instances
- Displaying it in an ipython kernel should display the object correctly

### AgentText

[[autodoc]] smolagents.agent_types.AgentText

### AgentImage

[[autodoc]] smolagents.agent_types.AgentImage

### AgentAudio

[[autodoc]] smolagents.agent_types.AgentAudio
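To make the `to_raw`/`to_string` contract above concrete, here is a minimal sketch; the file path is illustrative, and constructing `AgentImage` from a path is an assumption of this example:

```python
from smolagents.agent_types import AgentImage

# Wrap an image produced by a tool; the wrapper still behaves like a PIL.Image.
image = AgentImage("path/to/generated_image.png")  # illustrative path

raw = image.to_raw()          # the underlying PIL.Image object
location = image.to_string()  # path to a serialized copy of the image
print(type(raw), location)
```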
smolagents/docs/source/hi/reference/tools.md/0
{ "file_path": "smolagents/docs/source/hi/reference/tools.md", "repo_id": "smolagents", "token_count": 2093 }
273
# Text-to-SQL

[[open-in-colab]]

In this tutorial, we'll see how to use `smolagents` to implement an agent that leverages SQL.

> Let's start with the classic question: why not simply use a standard text-to-SQL pipeline?

A standard text-to-SQL pipeline is brittle, because the generated SQL query can be wrong. Even worse, the query can be wrong without raising an error, silently returning incorrect or useless results.

👉 An agent system, by contrast, can inspect the output and decide whether the query needs to be changed, which brings a huge performance boost.

Let's build this agent! 💪

First, we set up a SQL environment:

```py
from sqlalchemy import (
    create_engine,
    MetaData,
    Table,
    Column,
    String,
    Integer,
    Float,
    insert,
    inspect,
    text,
)

engine = create_engine("sqlite:///:memory:")
metadata_obj = MetaData()

# create city SQL table
table_name = "receipts"
receipts = Table(
    table_name,
    metadata_obj,
    Column("receipt_id", Integer, primary_key=True),
    Column("customer_name", String(16), primary_key=True),
    Column("price", Float),
    Column("tip", Float),
)
metadata_obj.create_all(engine)

rows = [
    {"receipt_id": 1, "customer_name": "Alan Payne", "price": 12.06, "tip": 1.20},
    {"receipt_id": 2, "customer_name": "Alex Mason", "price": 23.86, "tip": 0.24},
    {"receipt_id": 3, "customer_name": "Woodrow Wilson", "price": 53.43, "tip": 5.43},
    {"receipt_id": 4, "customer_name": "Margaret James", "price": 21.11, "tip": 1.00},
]
for row in rows:
    stmt = insert(receipts).values(**row)
    with engine.begin() as connection:
        cursor = connection.execute(stmt)
```

### Building the agent

Now let's build an agent that answers questions using SQL queries. The tool's description attribute will be embedded into the LLM's prompt by the agent system: it gives the LLM information about how to use the tool. This is exactly where we describe the SQL table.

```py
inspector = inspect(engine)
columns_info = [(col["name"], col["type"]) for col in inspector.get_columns("receipts")]

table_description = "Columns:\n" + "\n".join([f"  - {name}: {col_type}" for name, col_type in columns_info])
print(table_description)
```

```text
Columns:
  - receipt_id: INTEGER
  - customer_name: VARCHAR(16)
  - price: FLOAT
  - tip: FLOAT
```

Now let's build our tool. It needs the following (see the [tools documentation](../tutorials/tools) for more details):
- A docstring with an `Args:` section listing the arguments.
- Type hints on both inputs and outputs.

```py
from smolagents import tool

@tool
def sql_engine(query: str) -> str:
    """
    Allows you to perform SQL queries on the table. Returns a string representation of the result.
    The table is named 'receipts'. Its description is as follows:
        Columns:
        - receipt_id: INTEGER
        - customer_name: VARCHAR(16)
        - price: FLOAT
        - tip: FLOAT

    Args:
        query: The query to perform. This should be correct SQL.
    """
    output = ""
    with engine.connect() as con:
        rows = con.execute(text(query))
        for row in rows:
            output += "\n" + str(row)
    return output
```

We now create an agent that uses this tool. We use `CodeAgent`, smolagents' main agent class: an agent that writes its actions in code and iterates on previous outputs following the ReAct framework.

The model is the LLM that powers the agent system. `InferenceClientModel` lets you call an LLM through the HF Inference API, either via a Serverless or a Dedicated endpoint, but you could also use any proprietary API.

```py
from smolagents import CodeAgent, InferenceClientModel

agent = CodeAgent(
    tools=[sql_engine],
    model=InferenceClientModel(model_id="meta-llama/Meta-Llama-3.1-8B-Instruct"),
)
agent.run("Can you give me the name of the client who got the most expensive receipt?")
```

### Level 2: Table joins

Now let's make it more challenging! We want our agent to handle joins across multiple tables. So we create a second table recording the name of the waiter for each receipt_id!

```py
table_name = "waiters"
receipts = Table(
    table_name,
    metadata_obj,
    Column("receipt_id", Integer, primary_key=True),
    Column("waiter_name", String(16), primary_key=True),
)
metadata_obj.create_all(engine)

rows = [
    {"receipt_id": 1, "waiter_name": "Corey Johnson"},
    {"receipt_id": 2, "waiter_name": "Michael Watts"},
    {"receipt_id": 3, "waiter_name": "Michael Watts"},
    {"receipt_id": 4, "waiter_name": "Margaret James"},
]
for row in rows:
    stmt = insert(receipts).values(**row)
    with engine.begin() as connection:
        cursor = connection.execute(stmt)
```

Since we changed the table, we update `SQLExecutorTool` so the LLM can properly leverage the information from this table.

```py
updated_description = """Allows you to perform SQL queries on the table. Beware that this tool's output is a string representation of the execution output.
It can use the following tables:"""

inspector = inspect(engine)
for table in ["receipts", "waiters"]:
    columns_info = [(col["name"], col["type"]) for col in inspector.get_columns(table)]

    table_description = f"Table '{table}':\n"

    table_description += "Columns:\n" + "\n".join([f"  - {name}: {col_type}" for name, col_type in columns_info])
    updated_description += "\n\n" + table_description

print(updated_description)
```

Since this request is a bit harder than the previous one, we'll switch the LLM engine to the more powerful [Qwen/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct)!

```py
sql_engine.description = updated_description

agent = CodeAgent(
    tools=[sql_engine],
    model=InferenceClientModel(model_id="Qwen/Qwen2.5-Coder-32B-Instruct"),
)

agent.run("Which waiter got more total money from tips?")
```

It works straight away! The setup was surprisingly simple, wasn't it?

This example is done! We covered these concepts:
- Building new tools.
- Updating a tool's description.
- Switching to a stronger LLM helps agent reasoning.

✅ Now you can build the text-to-SQL system you've always dreamed of! ✨
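As a quick sanity check on the agent's answer, you can run the aggregation yourself through the same tool; the exact query below is just one possible formulation:

```py
# Directly query total tips per waiter to verify the agent's conclusion.
print(sql_engine("""
    SELECT w.waiter_name, SUM(r.tip) AS total_tips
    FROM receipts r
    JOIN waiters w ON r.receipt_id = w.receipt_id
    GROUP BY w.waiter_name
    ORDER BY total_tips DESC
"""))
```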
smolagents/docs/source/zh/examples/text_to_sql.md/0
{ "file_path": "smolagents/docs/source/zh/examples/text_to_sql.md", "repo_id": "smolagents", "token_count": 2956 }
274
from smolagents import Tool
from smolagents.models import Model


class TextInspectorTool(Tool):
    name = "inspect_file_as_text"
    description = """
You cannot load files yourself: instead call this tool to read a file as markdown text and ask questions about it.
This tool handles the following file extensions: [".html", ".htm", ".xlsx", ".pptx", ".wav", ".mp3", ".m4a", ".flac", ".pdf", ".docx"], and all other types of text files. IT DOES NOT HANDLE IMAGES."""

    inputs = {
        "file_path": {
            "description": "The path to the file you want to read as text. Must be a '.something' file, like '.pdf'. If it is an image, use the visualizer tool instead! DO NOT use this tool for an HTML webpage: use the web_search tool instead!",
            "type": "string",
        },
        "question": {
            "description": "[Optional]: Your question, as a natural language sentence. Provide as much context as possible. Do not pass this parameter if you just want to directly return the content of the file.",
            "type": "string",
            "nullable": True,
        },
    }
    output_type = "string"

    def __init__(self, model: Model = None, text_limit: int = 100000):
        super().__init__()
        self.model = model
        self.text_limit = text_limit

        from .mdconvert import MarkdownConverter

        self.md_converter = MarkdownConverter()

    def forward_initial_exam_mode(self, file_path, question):
        from smolagents.models import MessageRole

        result = self.md_converter.convert(file_path)

        if file_path[-4:] in [".png", ".jpg"]:
            raise Exception("Cannot use inspect_file_as_text tool with images: use visualizer instead!")

        if ".zip" in file_path:
            return result.text_content

        if not question:
            return result.text_content

        if len(result.text_content) < 4000:
            return "Document content: " + result.text_content

        messages = [
            {
                "role": MessageRole.SYSTEM,
                "content": [
                    {
                        "type": "text",
                        "text": "Here is a file:\n### "
                        + str(result.title)
                        + "\n\n"
                        + result.text_content[: self.text_limit],
                    }
                ],
            },
            {
                "role": MessageRole.USER,
                "content": [
                    {
                        "type": "text",
                        "text": "Now please write a short, 5 sentence caption for this document, that could help someone asking this question: "
                        + question
                        + "\n\nDon't answer the question yourself! Just provide useful notes on the document",
                    }
                ],
            },
        ]
        return self.model(messages).content

    def forward(self, file_path, question: str | None = None) -> str:
        from smolagents.models import MessageRole

        result = self.md_converter.convert(file_path)

        if file_path[-4:] in [".png", ".jpg"]:
            raise Exception("Cannot use inspect_file_as_text tool with images: use visualizer instead!")

        if ".zip" in file_path:
            return result.text_content

        if not question:
            return result.text_content

        messages = [
            {
                "role": MessageRole.SYSTEM,
                "content": [
                    {
                        "type": "text",
                        "text": "You will have to write a short caption for this file, then answer this question: "
                        + question,
                    }
                ],
            },
            {
                "role": MessageRole.USER,
                "content": [
                    {
                        "type": "text",
                        "text": "Here is the complete file:\n### "
                        + str(result.title)
                        + "\n\n"
                        + result.text_content[: self.text_limit],
                    }
                ],
            },
            {
                "role": MessageRole.USER,
                "content": [
                    {
                        "type": "text",
                        "text": "Now answer the question below. Use these three headings: '1. Short answer', '2. Extremely detailed answer', '3. Additional Context on the document and question asked'.\n\n"
                        + question,
                    }
                ],
            },
        ]
        return self.model(messages).content
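

# Usage sketch (illustrative, not part of the original module): the model id
# and file path below are assumptions chosen for this example.
if __name__ == "__main__":
    from smolagents import InferenceClientModel

    model = InferenceClientModel(model_id="Qwen/Qwen2.5-Coder-32B-Instruct")
    inspector = TextInspectorTool(model=model, text_limit=50_000)

    # Return the raw converted text of a document...
    print(inspector.forward("report.pdf"))
    # ...or ask a question about it.
    print(inspector.forward("report.pdf", question="What is the main conclusion?"))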
smolagents/examples/open_deep_research/scripts/text_inspector_tool.py/0
{ "file_path": "smolagents/examples/open_deep_research/scripts/text_inspector_tool.py", "repo_id": "smolagents", "token_count": 2346 }
275
#!/usr/bin/env python
# coding=utf-8

# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

__version__ = "1.22.0.dev0"

from .agent_types import *  # noqa: I001
from .agents import *  # Above noqa avoids a circular dependency due to cli.py
from .default_tools import *
from .gradio_ui import *
from .local_python_executor import *
from .mcp_client import *
from .memory import *
from .models import *
from .monitoring import *
from .remote_executors import *
from .tools import *
from .utils import *
from .cli import *
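
# Quick sanity check of the re-exports above (illustrative, not part of the
# original file):
#
#     import smolagents
#     print(smolagents.__version__)      # "1.22.0.dev0"
#     agent_cls = smolagents.CodeAgent   # re-exported from .agents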
smolagents/src/smolagents/__init__.py/0
{ "file_path": "smolagents/src/smolagents/__init__.py", "repo_id": "smolagents", "token_count": 311 }
276
import ast import builtins from itertools import zip_longest from .utils import BASE_BUILTIN_MODULES, get_source, is_valid_name _BUILTIN_NAMES = set(vars(builtins)) class MethodChecker(ast.NodeVisitor): """ Checks that a method - only uses defined names - contains no local imports (e.g. numpy is ok but local_script is not) """ def __init__(self, class_attributes: set[str], check_imports: bool = True): self.undefined_names = set() self.imports = {} self.from_imports = {} self.assigned_names = set() self.arg_names = set() self.class_attributes = class_attributes self.errors = [] self.check_imports = check_imports self.typing_names = {"Any"} self.defined_classes = set() def visit_arguments(self, node): """Collect function arguments""" self.arg_names = {arg.arg for arg in node.args} if node.kwarg: self.arg_names.add(node.kwarg.arg) if node.vararg: self.arg_names.add(node.vararg.arg) def visit_Import(self, node): for name in node.names: actual_name = name.asname or name.name self.imports[actual_name] = name.name def visit_ImportFrom(self, node): module = node.module or "" for name in node.names: actual_name = name.asname or name.name self.from_imports[actual_name] = (module, name.name) def visit_Assign(self, node): for target in node.targets: if isinstance(target, ast.Name): self.assigned_names.add(target.id) elif isinstance(target, (ast.Tuple, ast.List)): for elt in target.elts: if isinstance(elt, ast.Name): self.assigned_names.add(elt.id) self.visit(node.value) def visit_With(self, node): """Track aliases in 'with' statements (the 'y' in 'with X as y')""" for item in node.items: if item.optional_vars: # This is the 'y' in 'with X as y' if isinstance(item.optional_vars, ast.Name): self.assigned_names.add(item.optional_vars.id) self.generic_visit(node) def visit_ExceptHandler(self, node): """Track exception aliases (the 'e' in 'except Exception as e')""" if node.name: # This is the 'e' in 'except Exception as e' self.assigned_names.add(node.name) self.generic_visit(node) def visit_AnnAssign(self, node): """Track annotated assignments.""" if isinstance(node.target, ast.Name): self.assigned_names.add(node.target.id) if node.value: self.visit(node.value) def visit_For(self, node): target = node.target if isinstance(target, ast.Name): self.assigned_names.add(target.id) elif isinstance(target, ast.Tuple): for elt in target.elts: if isinstance(elt, ast.Name): self.assigned_names.add(elt.id) self.generic_visit(node) def _handle_comprehension_generators(self, generators): """Helper method to handle generators in all types of comprehensions""" for generator in generators: if isinstance(generator.target, ast.Name): self.assigned_names.add(generator.target.id) elif isinstance(generator.target, ast.Tuple): for elt in generator.target.elts: if isinstance(elt, ast.Name): self.assigned_names.add(elt.id) def visit_ListComp(self, node): """Track variables in list comprehensions""" self._handle_comprehension_generators(node.generators) self.generic_visit(node) def visit_DictComp(self, node): """Track variables in dictionary comprehensions""" self._handle_comprehension_generators(node.generators) self.generic_visit(node) def visit_SetComp(self, node): """Track variables in set comprehensions""" self._handle_comprehension_generators(node.generators) self.generic_visit(node) def visit_Attribute(self, node): if not (isinstance(node.value, ast.Name) and node.value.id == "self"): self.generic_visit(node) def visit_ClassDef(self, node): """Track class definitions""" self.defined_classes.add(node.name) 
self.generic_visit(node) def visit_Name(self, node): if isinstance(node.ctx, ast.Load): if not ( node.id in _BUILTIN_NAMES or node.id in BASE_BUILTIN_MODULES or node.id in self.arg_names or node.id == "self" or node.id in self.class_attributes or node.id in self.imports or node.id in self.from_imports or node.id in self.assigned_names or node.id in self.typing_names or node.id in self.defined_classes ): self.errors.append(f"Name '{node.id}' is undefined.") def visit_Call(self, node): if isinstance(node.func, ast.Name): if not ( node.func.id in _BUILTIN_NAMES or node.func.id in BASE_BUILTIN_MODULES or node.func.id in self.arg_names or node.func.id == "self" or node.func.id in self.class_attributes or node.func.id in self.imports or node.func.id in self.from_imports or node.func.id in self.assigned_names or node.func.id in self.defined_classes ): self.errors.append(f"Name '{node.func.id}' is undefined.") self.generic_visit(node) def validate_tool_attributes(cls, check_imports: bool = True) -> None: """ Validates that a Tool class follows the proper patterns: 0. Any argument of __init__ should have a default. Args chosen at init are not traceable, so we cannot rebuild the source code for them, thus any important arg should be defined as a class attribute. 1. About the class: - Class attributes should only be strings or dicts - Class attributes cannot be complex attributes 2. About all class methods: - Imports must be from packages, not local files - All methods must be self-contained Raises all errors encountered, if no error returns None. """ class ClassLevelChecker(ast.NodeVisitor): def __init__(self): self.imported_names = set() self.complex_attributes = set() self.class_attributes = set() self.non_defaults = set() self.non_literal_defaults = set() self.in_method = False self.invalid_attributes = [] def visit_FunctionDef(self, node): if node.name == "__init__": self._check_init_function_parameters(node) old_context = self.in_method self.in_method = True self.generic_visit(node) self.in_method = old_context def visit_Assign(self, node): if self.in_method: return # Track class attributes for target in node.targets: if isinstance(target, ast.Name): self.class_attributes.add(target.id) # Check if the assignment is more complex than simple literals if not all(isinstance(val, (ast.Constant, ast.Dict, ast.List, ast.Set)) for val in ast.walk(node.value)): for target in node.targets: if isinstance(target, ast.Name): self.complex_attributes.add(target.id) # Check specific class attributes if getattr(node.targets[0], "id", "") == "name": if not isinstance(node.value, ast.Constant): self.invalid_attributes.append(f"Class attribute 'name' must be a constant, found '{node.value}'") elif not isinstance(node.value.value, str): self.invalid_attributes.append( f"Class attribute 'name' must be a string, found '{node.value.value}'" ) elif not is_valid_name(node.value.value): self.invalid_attributes.append( f"Class attribute 'name' must be a valid Python identifier and not a reserved keyword, found '{node.value.value}'" ) def _check_init_function_parameters(self, node): # Check defaults in parameters for arg, default in reversed(list(zip_longest(reversed(node.args.args), reversed(node.args.defaults)))): if default is None: if arg.arg != "self": self.non_defaults.add(arg.arg) elif not isinstance(default, (ast.Constant, ast.Dict, ast.List, ast.Set)): self.non_literal_defaults.add(arg.arg) class_level_checker = ClassLevelChecker() source = get_source(cls) tree = ast.parse(source) class_node = tree.body[0] if not 
isinstance(class_node, ast.ClassDef): raise ValueError("Source code must define a class") class_level_checker.visit(class_node) errors = [] # Check invalid class attributes if class_level_checker.invalid_attributes: errors += class_level_checker.invalid_attributes if class_level_checker.complex_attributes: errors.append( f"Complex attributes should be defined in __init__, not as class attributes: " f"{', '.join(class_level_checker.complex_attributes)}" ) if class_level_checker.non_defaults: errors.append( f"Parameters in __init__ must have default values, found required parameters: " f"{', '.join(class_level_checker.non_defaults)}" ) if class_level_checker.non_literal_defaults: errors.append( f"Parameters in __init__ must have literal default values, found non-literal defaults: " f"{', '.join(class_level_checker.non_literal_defaults)}" ) # Run checks on all methods for node in class_node.body: if isinstance(node, ast.FunctionDef): method_checker = MethodChecker(class_level_checker.class_attributes, check_imports=check_imports) method_checker.visit(node) errors += [f"- {node.name}: {error}" for error in method_checker.errors] if errors: raise ValueError(f"Tool validation failed for {cls.__name__}:\n" + "\n".join(errors)) return
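

# Usage sketch (illustrative, not part of the original module): validating a
# hypothetical Tool subclass with the function above. `EchoTool` is invented
# for this example.
if __name__ == "__main__":
    from smolagents import Tool

    class EchoTool(Tool):
        name = "echo"
        description = "Returns its input unchanged."
        inputs = {"text": {"type": "string", "description": "Text to echo."}}
        output_type = "string"

        def forward(self, text: str) -> str:
            return text

    # Raises ValueError listing every violation; returns None when valid.
    validate_tool_attributes(EchoTool)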
smolagents/src/smolagents/tool_validation.py/0
{ "file_path": "smolagents/src/smolagents/tool_validation.py", "repo_id": "smolagents", "token_count": 4921 }
277
import os
import subprocess
import tempfile


def test_import_smolagents_without_extras(monkeypatch):
    monkeypatch.delenv("VIRTUAL_ENV", raising=False)
    with tempfile.TemporaryDirectory() as temp_dir:
        # Create a virtual environment
        venv_dir = os.path.join(temp_dir, "venv")
        subprocess.run(["uv", "venv", venv_dir], check=True)

        # Install smolagents in the virtual environment
        subprocess.run(
            ["uv", "pip", "install", "--python", os.path.join(venv_dir, "bin", "python"), "smolagents @ ."],
            check=True,
        )

        # Run the import test in the virtual environment
        result = subprocess.run(
            [os.path.join(venv_dir, "bin", "python"), "-c", "import smolagents"],
            capture_output=True,
            text=True,
        )

        # Check if the import was successful
        assert result.returncode == 0, (
            "Import failed with error: "
            + (result.stderr.splitlines()[-1] if result.stderr else "No error message")
            + "\n"
            + result.stderr
        )
smolagents/tests/test_import.py/0
{ "file_path": "smolagents/tests/test_import.py", "repo_id": "smolagents", "token_count": 448 }
278
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0
    hooks:
      - id: check-yaml
      - id: end-of-file-fixer
        exclude: crate-hashes.json
      - id: trailing-whitespace
        exclude: docs/source/reference/launcher.md
  - repo: https://github.com/psf/black
    rev: 24.2.0
    hooks:
      - id: black
  - repo: https://github.com/doublify/pre-commit-rust
    rev: v1.0
    hooks:
      - id: cargo-check
      - id: fmt
      - id: clippy
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.3.0
    hooks:
      - id: ruff
        args: [--fix, --exit-non-zero-on-fix]
text-generation-inference/.pre-commit-config.yaml/0
{ "file_path": "text-generation-inference/.pre-commit-config.yaml", "repo_id": "text-generation-inference", "token_count": 314 }
279
<div align="center">

<a href="https://www.youtube.com/watch?v=jlMAX2Oaht0">
  <img width=560 alt="Making TGI deployment optimal" src="https://huggingface.co/datasets/Narsil/tgi_assets/resolve/main/thumbnail.png">
</a>

# Text Generation Inference

<a href="https://github.com/huggingface/text-generation-inference">
  <img alt="GitHub Repo stars" src="https://img.shields.io/github/stars/huggingface/text-generation-inference?style=social">
</a>
<a href="https://huggingface.github.io/text-generation-inference">
  <img alt="Swagger API documentation" src="https://img.shields.io/badge/API-Swagger-informational">
</a>

A Rust, Python and gRPC server for text generation inference. Used in production at [Hugging Face](https://huggingface.co) to power Hugging Chat, the Inference API and Inference Endpoints.

</div>

## Table of contents

- [Get Started](#get-started)
  - [Docker](#docker)
  - [API documentation](#api-documentation)
  - [Using a private or gated model](#using-a-private-or-gated-model)
  - [A note on Shared Memory (shm)](#a-note-on-shared-memory-shm)
  - [Distributed Tracing](#distributed-tracing)
  - [Architecture](#architecture)
  - [Local install](#local-install)
  - [Local install (Nix)](#local-install-nix)
- [Optimized architectures](#optimized-architectures)
- [Run locally](#run-locally)
  - [Run](#run)
  - [Quantization](#quantization)
- [Develop](#develop)
- [Testing](#testing)

Text Generation Inference (TGI) is a toolkit for deploying and serving Large Language Models (LLMs). TGI enables high-performance text generation for the most popular open-source LLMs, including Llama, Falcon, StarCoder, BLOOM, GPT-NeoX, and [more](https://huggingface.co/docs/text-generation-inference/supported_models). TGI implements many features, such as:

- Simple launcher to serve most popular LLMs
- Production ready (distributed tracing with Open Telemetry, Prometheus metrics)
- Tensor Parallelism for faster inference on multiple GPUs
- Token streaming using Server-Sent Events (SSE)
- Continuous batching of incoming requests for increased total throughput
- [Messages API](https://huggingface.co/docs/text-generation-inference/en/messages_api) compatible with Open AI Chat Completion API
- Optimized transformers code for inference using [Flash Attention](https://github.com/HazyResearch/flash-attention) and [Paged Attention](https://github.com/vllm-project/vllm) on the most popular architectures
- Quantization with:
  - [bitsandbytes](https://github.com/TimDettmers/bitsandbytes)
  - [GPT-Q](https://arxiv.org/abs/2210.17323)
  - [EETQ](https://github.com/NetEase-FuXi/EETQ)
  - [AWQ](https://github.com/casper-hansen/AutoAWQ)
  - [Marlin](https://github.com/IST-DASLab/marlin)
  - [fp8](https://developer.nvidia.com/blog/nvidia-arm-and-intel-publish-fp8-specification-for-standardization-as-an-interchange-format-for-ai/)
- [Safetensors](https://github.com/huggingface/safetensors) weight loading
- Watermarking with [A Watermark for Large Language Models](https://arxiv.org/abs/2301.10226)
- Logits warper (temperature scaling, top-p, top-k, repetition penalty; for more details see [transformers.LogitsProcessor](https://huggingface.co/docs/transformers/internal/generation_utils#transformers.LogitsProcessor))
- Stop sequences
- Log probabilities
- [Speculation](https://huggingface.co/docs/text-generation-inference/conceptual/speculation) ~2x latency
- [Guidance/JSON](https://huggingface.co/docs/text-generation-inference/conceptual/guidance). Specify output format to speed up inference and make sure the output is valid according to some specs.
- Custom Prompt Generation: Easily generate text by providing custom prompts to guide the model's output
- Fine-tuning Support: Utilize fine-tuned models for specific tasks to achieve higher accuracy and performance

### Hardware support

- [Nvidia](https://github.com/huggingface/text-generation-inference/pkgs/container/text-generation-inference)
- [AMD](https://github.com/huggingface/text-generation-inference/pkgs/container/text-generation-inference) (-rocm)
- [Inferentia](https://github.com/huggingface/optimum-neuron/tree/main/text-generation-inference)
- [Intel GPU](https://github.com/huggingface/text-generation-inference/pull/1475)
- [Gaudi](https://github.com/huggingface/tgi-gaudi)
- [Google TPU](https://huggingface.co/docs/optimum-tpu/howto/serving)

## Get Started

### Docker

For a detailed starting guide, please see the [Quick Tour](https://huggingface.co/docs/text-generation-inference/quicktour). The easiest way of getting started is using the official Docker container:

```shell
model=HuggingFaceH4/zephyr-7b-beta
# share a volume with the Docker container to avoid downloading weights every run
volume=$PWD/data

docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data \
    ghcr.io/huggingface/text-generation-inference:3.3.4 --model-id $model
```

And then you can make requests like:

```bash
curl 127.0.0.1:8080/generate_stream \
    -X POST \
    -d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":20}}' \
    -H 'Content-Type: application/json'
```

You can also use [TGI's Messages API](https://huggingface.co/docs/text-generation-inference/en/messages_api) to obtain Open AI Chat Completion API compatible responses.

```bash
curl localhost:8080/v1/chat/completions \
    -X POST \
    -d '{
  "model": "tgi",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant."
    },
    {
      "role": "user",
      "content": "What is deep learning?"
    }
  ],
  "stream": true,
  "max_tokens": 20
}' \
    -H 'Content-Type: application/json'
```
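Because the Messages API is OpenAI-compatible, you can also point an OpenAI client library at the server; a minimal sketch (the client package and base URL are one possible setup, not part of TGI itself):

```python
from openai import OpenAI

# Point the OpenAI client at the local TGI server (base_url is illustrative).
client = OpenAI(base_url="http://localhost:8080/v1", api_key="-")

response = client.chat.completions.create(
    model="tgi",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is deep learning?"},
    ],
    max_tokens=20,
)
print(response.choices[0].message.content)
```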
**Note:** To use NVIDIA GPUs, you need to install the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html). We also recommend using NVIDIA drivers with CUDA version 12.2 or higher. For running the Docker container on a machine with no GPUs or CUDA support, it is enough to remove the `--gpus all` flag and add `--disable-custom-kernels`; please note that CPU is not the intended platform for this project, so performance might be subpar.

**Note:** TGI supports AMD Instinct MI210 and MI250 GPUs. Details can be found in the [Supported Hardware documentation](https://huggingface.co/docs/text-generation-inference/installation_amd#using-tgi-with-amd-gpus). To use AMD GPUs, please use `docker run --device /dev/kfd --device /dev/dri --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:3.3.4-rocm --model-id $model` instead of the command above.

To see all options to serve your models (in the [code](https://github.com/huggingface/text-generation-inference/blob/main/launcher/src/main.rs) or in the cli):

```
text-generation-launcher --help
```

### API documentation

You can consult the OpenAPI documentation of the `text-generation-inference` REST API using the `/docs` route. The Swagger UI is also available at: [https://huggingface.github.io/text-generation-inference](https://huggingface.github.io/text-generation-inference).

### Using a private or gated model

You have the option to utilize the `HF_TOKEN` environment variable for configuring the token employed by `text-generation-inference`. This allows you to gain access to protected resources.

For example, if you want to serve the gated Llama V2 model variants:

1. Go to https://huggingface.co/settings/tokens
2. Copy your CLI READ token
3. Export `HF_TOKEN=<your CLI READ token>`

or with Docker:

```shell
model=meta-llama/Meta-Llama-3.1-8B-Instruct
volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run
token=<your cli READ token>

docker run --gpus all --shm-size 1g -e HF_TOKEN=$token -p 8080:80 -v $volume:/data \
    ghcr.io/huggingface/text-generation-inference:3.3.4 --model-id $model
```

### A note on Shared Memory (shm)

[`NCCL`](https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/index.html) is a communication framework used by `PyTorch` to do distributed training/inference. `text-generation-inference` makes use of `NCCL` to enable Tensor Parallelism to dramatically speed up inference for large language models.

In order to share data between the different devices of a `NCCL` group, `NCCL` might fall back to using the host memory if peer-to-peer using NVLink or PCI is not possible.

To allow the container to use 1G of Shared Memory and support SHM sharing, we add `--shm-size 1g` on the above command.

If you are running `text-generation-inference` inside `Kubernetes`, you can also add Shared Memory to the container by creating a volume with:

```yaml
- name: shm
  emptyDir:
    medium: Memory
    sizeLimit: 1Gi
```

and mounting it to `/dev/shm`.

Finally, you can also disable SHM sharing by using the `NCCL_SHM_DISABLE=1` environment variable. However, note that this will impact performance.

### Distributed Tracing

`text-generation-inference` is instrumented with distributed tracing using OpenTelemetry. You can use this feature by setting the address to an OTLP collector with the `--otlp-endpoint` argument. The default service name can be overridden with the `--otlp-service-name` argument.

### Architecture

![TGI architecture](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/TGI.png)

Detailed blogpost by Adyen on TGI inner workings: [LLM inference at scale with TGI (Martin Iglesias Goyanes - Adyen, 2024)](https://www.adyen.com/knowledge-hub/llm-inference-at-scale-with-tgi)

### Local install

You can also opt to install `text-generation-inference` locally.

First clone the repository and change directory into it:

```shell
git clone https://github.com/huggingface/text-generation-inference
cd text-generation-inference
```

Then [install Rust](https://rustup.rs/) and create a Python virtual environment with at least Python 3.9, e.g. using `conda` or `python venv`:

```shell
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# using conda
conda create -n text-generation-inference python=3.11
conda activate text-generation-inference

# using python venv
python3 -m venv .venv
source .venv/bin/activate
```

You may also need to install Protoc.

On Linux:

```shell
PROTOC_ZIP=protoc-21.12-linux-x86_64.zip
curl -OL https://github.com/protocolbuffers/protobuf/releases/download/v21.12/$PROTOC_ZIP
sudo unzip -o $PROTOC_ZIP -d /usr/local bin/protoc
sudo unzip -o $PROTOC_ZIP -d /usr/local 'include/*'
rm -f $PROTOC_ZIP
```

On MacOS, using Homebrew:

```shell
brew install protobuf
```

Then run:

```shell
BUILD_EXTENSIONS=True make install # Install repository and HF/transformer fork with CUDA kernels
text-generation-launcher --model-id mistralai/Mistral-7B-Instruct-v0.2
```

**Note:** on some machines, you may also need the OpenSSL libraries and gcc.
On Linux machines, run:

```shell
sudo apt-get install libssl-dev gcc -y
```

### Local install (Nix)

Another option is to install `text-generation-inference` locally using [Nix](https://nixos.org). Currently, we only support Nix on x86_64 Linux with CUDA GPUs. When using Nix, all dependencies can be pulled from a binary cache, removing the need to build them locally.

First follow the instructions to [install Cachix and enable the Hugging Face cache](https://app.cachix.org/cache/huggingface). Setting up the cache is important, otherwise Nix will build many of the dependencies locally, which can take hours.

After that you can run TGI with `nix run`:

```shell
cd text-generation-inference
nix run --extra-experimental-features nix-command --extra-experimental-features flakes . -- --model-id meta-llama/Llama-3.1-8B-Instruct
```

**Note:** when you are using Nix on a non-NixOS system, you have to [make some symlinks](https://danieldk.eu/Nix-CUDA-on-non-NixOS-systems#make-runopengl-driverlib-and-symlink-the-driver-library) to make the CUDA driver libraries visible to Nix packages.

For TGI development, you can use the `impure` dev shell:

```shell
nix develop .#impure

# Only needed the first time the devshell is started or after updating the protobuf.
(
cd server
mkdir text_generation_server/pb || true
python -m grpc_tools.protoc -I../proto/v3 --python_out=text_generation_server/pb \
    --grpc_python_out=text_generation_server/pb --mypy_out=text_generation_server/pb ../proto/v3/generate.proto
find text_generation_server/pb/ -type f -name "*.py" -print0 -exec sed -i -e 's/^\(import.*pb2\)/from . \1/g' {} \;
touch text_generation_server/pb/__init__.py
)
```

All development dependencies (cargo, Python, Torch, etc.) are available in this dev shell.

## Optimized architectures

TGI works out of the box to serve optimized models for all modern models. They can be found in [this list](https://huggingface.co/docs/text-generation-inference/supported_models).

Other architectures are supported on a best-effort basis using:

`AutoModelForCausalLM.from_pretrained(<model>, device_map="auto")`

or

`AutoModelForSeq2SeqLM.from_pretrained(<model>, device_map="auto")`

## Run locally

### Run

```shell
text-generation-launcher --model-id mistralai/Mistral-7B-Instruct-v0.2
```

### Quantization

You can also run pre-quantized weights (AWQ, GPTQ, Marlin) or on-the-fly quantize weights with bitsandbytes, EETQ, fp8, to reduce the VRAM requirement:

```shell
text-generation-launcher --model-id mistralai/Mistral-7B-Instruct-v0.2 --quantize
```

4bit quantization is available using the [NF4 and FP4 data types from bitsandbytes](https://arxiv.org/pdf/2305.14314.pdf). It can be enabled by providing `--quantize bitsandbytes-nf4` or `--quantize bitsandbytes-fp4` as a command line argument to `text-generation-launcher`.

Read more about quantization in the [Quantization documentation](https://huggingface.co/docs/text-generation-inference/en/conceptual/quantization).

## Develop

```shell
make server-dev
make router-dev
```

## Testing

```shell
# python
make python-server-tests
make python-client-tests
# or both server and client tests
make python-tests
# rust cargo tests
make rust-tests
# integration tests
make integration-tests
```
text-generation-inference/README.md/0
{ "file_path": "text-generation-inference/README.md", "repo_id": "text-generation-inference", "token_count": 4590 }
280
# Examples of Docker Commands for Gaudi Backend

This page gives a list of examples of docker run commands for some of the most popular models.

> **Note:** The parameters are chosen for Gaudi2 hardware to maximize performance on this given hardware; please adjust the parameters based on your hardware. For example, if you are using Gaudi3, you may want to increase the batch size.

## Default Precision (BF16)

### Llama3.1-8B on 1 card (BF16)

```bash
model=meta-llama/Meta-Llama-3.1-8B-Instruct
hf_token=YOUR_ACCESS_TOKEN
volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run

docker run -p 8080:80 \
   --runtime=habana \
   --cap-add=sys_nice \
   --ipc=host \
   -v $volume:/data \
   -e HF_TOKEN=$hf_token \
   ghcr.io/huggingface/text-generation-inference:3.3.4-gaudi \
   --model-id $model \
   --max-input-tokens 1024 --max-total-tokens 2048 \
   --max-batch-prefill-tokens 2048 --max-batch-size 32 \
   --max-waiting-tokens 7 --waiting-served-ratio 1.2 --max-concurrent-requests 64
```

### Llama3.1-70B on 8 cards (BF16)

```bash
model=meta-llama/Meta-Llama-3.1-70B-Instruct
hf_token=YOUR_ACCESS_TOKEN
volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run

docker run -p 8080:80 \
   --runtime=habana \
   --cap-add=sys_nice \
   --ipc=host \
   -v $volume:/data \
   -e HF_TOKEN=$hf_token \
   ghcr.io/huggingface/text-generation-inference:3.3.4-gaudi \
   --model-id $model \
   --sharded true --num-shard 8 \
   --max-input-tokens 1024 --max-total-tokens 2048 \
   --max-batch-prefill-tokens 4096 --max-batch-size 256 \
   --max-waiting-tokens 7 --waiting-served-ratio 1.2 --max-concurrent-requests 512
```

### Llava-v1.6-Mistral-7B on 1 card (BF16)

```bash
model=llava-hf/llava-v1.6-mistral-7b-hf
volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run

docker run -p 8080:80 \
   --runtime=habana \
   --cap-add=sys_nice \
   --ipc=host \
   -v $volume:/data \
   ghcr.io/huggingface/text-generation-inference:3.3.4-gaudi \
   --model-id $model \
   --max-input-tokens 4096 --max-batch-prefill-tokens 16384 \
   --max-total-tokens 8192 --max-batch-size 4
```

## FP8 Precision

You can also set the KV cache dtype to FP8 when launching the server; fp8_e4m3fn is supported on Gaudi.

### Llama3-8B on 1 card (FP8)

```bash
model=RedHatAI/Meta-Llama-3-8B-Instruct-FP8-KV
hf_token=YOUR_ACCESS_TOKEN
volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run

docker run -p 8080:80 \
   --runtime=habana \
   --cap-add=sys_nice \
   --ipc=host \
   -v $volume:/data \
   -e HF_TOKEN=$hf_token \
   ghcr.io/huggingface/text-generation-inference:3.3.4-gaudi \
   --model-id $model \
   --kv-cache-dtype fp8_e4m3fn \
   --max-input-tokens 1024 --max-total-tokens 2048 \
   --max-batch-prefill-tokens 2048 --max-batch-size 32 \
   --max-waiting-tokens 7 --waiting-served-ratio 1.2 --max-concurrent-requests 64
```

### Llama3-70B on 8 cards (FP8)

```bash
model=RedHatAI/Meta-Llama-3-70B-Instruct-FP8
hf_token=YOUR_ACCESS_TOKEN
volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run

docker run -p 8080:80 \
   --runtime=habana \
   --cap-add=sys_nice \
   --ipc=host \
   -v $volume:/data \
   -e HF_TOKEN=$hf_token \
   ghcr.io/huggingface/text-generation-inference:3.3.4-gaudi \
   --model-id $model \
   --kv-cache-dtype fp8_e4m3fn \
   --sharded true --num-shard 8 \
   --max-input-tokens 1024 --max-total-tokens 2048 \
   --max-batch-prefill-tokens 4096 --max-batch-size 256 \
   --max-waiting-tokens 7 --waiting-served-ratio 1.2 --max-concurrent-requests 512
```
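Once any of these servers is up, you can exercise it with TGI's standard `generate` route; a minimal sketch in Python (the `requests` client and port are assumptions of this example):

```python
import requests

# Query the server started by one of the commands above (port 8080).
response = requests.post(
    "http://localhost:8080/generate",
    json={"inputs": "What is Deep Learning?", "parameters": {"max_new_tokens": 20}},
    headers={"Content-Type": "application/json"},
)
print(response.json())
```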
text-generation-inference/backends/gaudi/examples/docker_commands/docker_commands.md/0
{ "file_path": "text-generation-inference/backends/gaudi/examples/docker_commands/docker_commands.md", "repo_id": "text-generation-inference", "token_count": 1488 }
281
# coding=utf-8 # Copyright 5 The Qwen team, Alibaba Group and the HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from typing import List, Optional, Tuple, Type import torch from torch import nn import torch.nn.functional as F from text_generation_server.layers.attention import ( attention, paged_attention, set_block_mapping, Seqlen, HPUPagedAttentionMetadata, ) from text_generation_server.layers.attention.kv_cache import get_kv_scales from text_generation_server.layers.moe import DenseMoELayer, MoELayer, SparseMoELayer from text_generation_server.layers import ( TensorParallelEmbedding, TensorParallelColumnLinear, TensorParallelRowLinear, SpeculativeHead, FastLinear, ) from text_generation_server.layers.layernorm import ( FastRMSNorm, ) from .flash_qwen2_modeling import Qwen2MLP from .flash_qwen3_modeling import Qwen3Attention from transformers.activations import ACT2FN from text_generation_server.layers.rotary import PositionRotaryEmbedding def rotate_half(x): """Rotates half the hidden dims of the input.""" x1 = x[..., : x.shape[-1] // 2] x2 = x[..., x.shape[-1] // 2 :] return torch.cat((-x2, x1), dim=-1) def apply_rotary_pos_emb(q, k, cos, sin, position_ids=None, unsqueeze_dim=1): """Applies Rotary Position Embedding to the query and key tensors. Args: q (`torch.Tensor`): The query tensor. k (`torch.Tensor`): The key tensor. cos (`torch.Tensor`): The cosine part of the rotary embedding. sin (`torch.Tensor`): The sine part of the rotary embedding. position_ids (`torch.Tensor`, *optional*): Deprecated and unused. unsqueeze_dim (`int`, *optional*, defaults to 1): The 'unsqueeze_dim' argument specifies the dimension along which to unsqueeze cos[position_ids] and sin[position_ids] so that they can be properly broadcasted to the dimensions of q and k. For example, note that cos[position_ids] and sin[position_ids] have the shape [batch_size, seq_len, head_dim]. Then, if q and k have the shape [batch_size, heads, seq_len, head_dim], then setting unsqueeze_dim=1 makes cos[position_ids] and sin[position_ids] broadcastable to the shapes of q and k. Similarly, if q and k have the shape [batch_size, seq_len, heads, head_dim], then set unsqueeze_dim=2. Returns: `tuple(torch.Tensor)` comprising of the query and key tensors rotated using the Rotary Position Embedding. 
""" cos = cos.unsqueeze(unsqueeze_dim) sin = sin.unsqueeze(unsqueeze_dim) q_embed = (q * cos) + (rotate_half(q) * sin) k_embed = (k * cos) + (rotate_half(k) * sin) return q_embed, k_embed class Qwen3MoeAttention(nn.Module): """Multi-headed attention from 'Attention Is All You Need' paper""" def __init__(self, config, prefix, weights, layer_idx, rotary_emb): super().__init__() self.config = config self.layer_idx = layer_idx self.head_dim = getattr( config, "head_dim", config.hidden_size // config.num_attention_heads ) self.num_key_value_heads = config.num_key_value_heads self.num_key_value_groups = ( config.num_attention_heads // config.num_key_value_heads ) self.scaling = self.head_dim**-0.5 self.attention_dropout = config.attention_dropout self.is_causal = True self.q_proj = FastLinear.load( config, f"{prefix}.q_proj", weights, bias=config.attention_bias ) self.k_proj = FastLinear.load( config, f"{prefix}.k_proj", weights, bias=config.attention_bias ) self.v_proj = FastLinear.load( config, f"{prefix}.v_proj", weights, bias=config.attention_bias ) self.o_proj = FastLinear.load( config, f"{prefix}.o_proj", weights, bias=config.attention_bias ) self.rotary_emb = rotary_emb self.q_norm = FastRMSNorm.load( prefix=f"{prefix}.q_norm", weights=weights, eps=config.rms_norm_eps, ) self.k_norm = FastRMSNorm.load( prefix=f"{prefix}.k_norm", weights=weights, eps=config.rms_norm_eps, ) self.max_past = ( config.sliding_window if config.sliding_window is not None else -1 ) self.kv_scales = get_kv_scales(weights, f"{prefix}") self.kv_head_mapping = torch.arange( 0, self.num_key_value_heads, dtype=torch.int32, device=weights.device ).repeat_interleave(self.num_key_value_groups) self.sliding_window = config.sliding_window if not ( self.config.use_sliding_window and getattr(self.config, "sliding_window", None) is not None and self.layer_idx >= self.config.max_window_layers ): self.sliding_window = None def forward( self, hidden_states, cos, sin, cu_seqlen_prefill, kv_cache, slots, seqlen, hpu_attention_meta, ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]: input_shape = hidden_states.shape[:-1] hidden_shape = (*input_shape, -1, self.head_dim) query_states, _ = self.q_norm(self.q_proj(hidden_states).view(hidden_shape)) key_states, _ = self.k_norm(self.k_proj(hidden_states).view(hidden_shape)) value_states = self.v_proj(hidden_states).view(hidden_shape) self.rotary_emb(query_states, key_states, cos, sin) # query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin) kv_cache.store( key=key_states, value=value_states, slots=slots, kv_scales=self.kv_scales, ) # Prefill if cu_seqlen_prefill is not None: # sdpa attn_output = attention( query=query_states, key=key_states, value=value_states, kv_cache=kv_cache, kv_scales=self.kv_scales, seqlen=seqlen, softmax_scale=self.scaling, window_size_left=self.max_past, ) # Decode else: attn_output = paged_attention( query_states, kv_cache, self.kv_head_mapping, self.scaling, seqlen, kv_scales=self.kv_scales, hpu_attention_meta=hpu_attention_meta, window_size_left=self.max_past, ) attn_output = attn_output.reshape(*input_shape, -1).contiguous() attn_output = self.o_proj(attn_output) return attn_output class Qwen3MoE(nn.Module): def __init__(self, prefix, config, moe_layer_cls: Type[MoELayer], weights): super().__init__() # gating self.gate = FastLinear.load(config, f"{prefix}.gate", weights, bias=False) self.moe = moe_layer_cls( n_expert_group=None, n_experts=config.num_experts, prefix=f"{prefix}.experts", 
renormalize=True, topk=config.num_experts_per_tok, topk_group=None, weights=weights, ) # gate_proj_name="w1", # up_proj_name="w3", # down_proj_name="w2", assert isinstance(self.moe, MoELayer) self.process_group = weights.process_group def forward(self, x: torch.Tensor) -> torch.Tensor: router_logits = self.gate(x) out = self.moe(x, gating_output=router_logits) # Reduce sum if self.process_group.size() > 1: torch.distributed.all_reduce(out, group=self.process_group) return out.view(*x.shape) class Qwen3MoeMLP(nn.Module): def __init__(self, prefix, config, weights, intermediate_size=None): super().__init__() self.config = config self.hidden_size = config.hidden_size self.intermediate_size = ( intermediate_size if intermediate_size is not None else config.intermediate_size ) # Fuse gate and up proj self.gate_up_proj = TensorParallelColumnLinear.load_multi( config, prefixes=[f"{prefix}.gate_proj", f"{prefix}.up_proj"], weights=weights, dim=0, bias=False, ) self.down_proj = TensorParallelRowLinear.load( config, prefix=f"{prefix}.down_proj", weights=weights, bias=False, ) self.intermediate_size = ( config.intermediate_size // weights.process_group.size() ) self.act_fn = ACT2FN[config.hidden_act] def forward(self, x): gate_up_states = self.gate_up_proj(x) gate_up_states = gate_up_states.view(-1, 2, self.intermediate_size) return self.down_proj(self.act(gate_up_states[:, 0]) * gate_up_states[:, 1]) class Qwen3MoeSparseMoeBlock(nn.Module): def __init__(self, prefix, config, weights): super().__init__() self.num_experts = config.num_experts self.top_k = config.num_experts_per_tok self.norm_topk_prob = config.norm_topk_prob # gating # self.gate = nn.Linear(config.hidden_size, config.num_experts, bias=False) self.gate = FastLinear.load(config, f"{prefix}.gate", weights, bias=False) self.experts = nn.ModuleList( [ Qwen3MoeMLP( prefix=f"{prefix}.experts.{i}", config=config, weights=weights, intermediate_size=config.moe_intermediate_size, ) for i in range(self.num_experts) ] ) def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: """ """ input_shape = hidden_states.shape _, hidden_dim = hidden_states.shape # hidden_states = hidden_states.view(-1, hidden_dim) # router_logits: (batch * sequence_length, n_experts) router_logits = self.gate(hidden_states) routing_weights = F.softmax(router_logits, dim=1, dtype=hidden_states.dtype) routing_weights, selected_experts = torch.topk( routing_weights, self.top_k, dim=-1 ) if self.norm_topk_prob: # only diff with mixtral sparse moe block! routing_weights /= routing_weights.sum(dim=-1, keepdim=True) # we cast back to the input dtype routing_weights = routing_weights.to(hidden_states.dtype) final_hidden_states = torch.zeros( (input_shape), dtype=hidden_states.dtype, device=hidden_states.device ) # One hot encode the selected experts to create an expert mask # this will be used to easily index which expert is going to be sollicitated expert_mask = torch.nn.functional.one_hot( selected_experts, num_classes=self.num_experts ).permute(2, 1, 0) # Loop over all available experts in the model and perform the computation on each expert for expert_idx in range(self.num_experts): expert_layer = self.experts[expert_idx] idx, top_x = torch.where(expert_mask[expert_idx]) # Index the correct hidden states and compute the expert hidden state for # the current expert. 
We need to make sure to multiply the output hidden # states by `routing_weights` on the corresponding tokens (top-1 and top-2) current_state = hidden_states[None, top_x].reshape(-1, hidden_dim) current_hidden_states = ( expert_layer(current_state) * routing_weights[top_x, idx, None] ) # However `index_add_` only support torch tensors for indexing so we'll use # the `top_x` tensor here. final_hidden_states.index_add_( 0, top_x, current_hidden_states.to(hidden_states.dtype) ) final_hidden_states = final_hidden_states.reshape(input_shape) return final_hidden_states class Qwen3MoeDecoderLayer(nn.Module): def __init__(self, config, prefix, weights, layer_idx: int, rotary_emb): super().__init__() self.hidden_size = config.hidden_size if config.num_key_value_heads // weights.process_group.size() > 0: self.self_attn = Qwen3Attention( config, prefix=f"{prefix}.self_attn", weights=weights, layer_idx=layer_idx, rotary_emb=rotary_emb, ) else: self.self_attn = Qwen3MoeAttention( config, prefix=f"{prefix}.self_attn", weights=weights, layer_idx=layer_idx, rotary_emb=rotary_emb, ) moe_layer_cls = ( SparseMoELayer if SparseMoELayer.is_supported(weights) else DenseMoELayer ) if (layer_idx not in config.mlp_only_layers) and ( config.num_experts > 0 and (layer_idx + 1) % config.decoder_sparse_step == 0 ): self.mlp = Qwen3MoE(f"{prefix}.mlp", config, moe_layer_cls, weights) # self.mlp = Qwen3MoeSparseMoeBlock(f"{prefix}.mlp", config, weights) else: self.mlp = Qwen2MLP(config=config, prefix=f"{prefix}.mlp", weights=weights) self.input_layernorm = FastRMSNorm.load( prefix=f"{prefix}.input_layernorm", weights=weights, eps=config.rms_norm_eps ) self.post_attention_layernorm = FastRMSNorm.load( prefix=f"{prefix}.post_attention_layernorm", weights=weights, eps=config.rms_norm_eps, ) def forward( self, hidden_states, residual, cos, sin, cu_seqlen_prefill, kv_cache, slots, seqlen, hpu_attention_meta, ) -> torch.Tensor: if residual is None: residual = hidden_states hidden_states, _ = self.input_layernorm(hidden_states) # Self Attention hidden_states = self.self_attn( hidden_states, cos, sin, cu_seqlen_prefill, kv_cache, slots, seqlen, hpu_attention_meta, ) hidden_states = residual + hidden_states # Fully Connected residual = hidden_states hidden_states, _ = self.post_attention_layernorm(hidden_states) hidden_states = self.mlp(hidden_states) hidden_states = residual + hidden_states return hidden_states class Qwen3MoeModel(nn.Module): def __init__(self, config, prefix: str, weights): super().__init__() self.config = config self.padding_idx = config.pad_token_id self.vocab_size = config.vocab_size head_dim = getattr( config, "head_dim", config.hidden_size // config.num_attention_heads ) rotary_emb = PositionRotaryEmbedding.static( config=config, dim=head_dim, base=config.rope_theta, device=weights.device, ) self.layers = nn.ModuleList( [ Qwen3MoeDecoderLayer( config=config, prefix=f"{prefix}.layers.{layer_idx}", weights=weights, layer_idx=layer_idx, rotary_emb=rotary_emb, ) for layer_idx in range(config.num_hidden_layers) ] ) self.norm = FastRMSNorm.load( prefix=f"{prefix}.norm", weights=weights, eps=config.rms_norm_eps ) def forward( self, inputs_embeds: torch.Tensor, position_ids: torch.Tensor, cu_seqlen_prefill: Optional[torch.Tensor], kv_cache: List[Tuple[torch.Tensor, torch.Tensor]], slots: torch.Tensor, seqlen: Seqlen, hpu_attention_meta: Optional[HPUPagedAttentionMetadata], ) -> torch.Tensor: if hpu_attention_meta is not None: hpu_attention_meta = set_block_mapping( hpu_attention_meta, inputs_embeds.shape[0] ) 
hidden_states = inputs_embeds # create position embeddings to be shared across the decoder layers cos, sin = self.layers[0].self_attn.rotary_emb.get_cos_sin( position_ids, ) residual = None for i, decoder_layer in enumerate(self.layers): hidden_states = decoder_layer( hidden_states, residual, cos, sin, cu_seqlen_prefill, kv_cache[i], slots, seqlen, hpu_attention_meta, ) hidden_states, _ = self.norm(hidden_states) # add hidden states from the last decoder layer return hidden_states class Qwen3MoeForCausalLM(nn.Module): def __init__(self, prefix: str, config, weights): super().__init__() self.model = Qwen3MoeModel(config=config, prefix="model", weights=weights) self.vocab_size = config.vocab_size if config.tie_word_embeddings: suffix = "model.embed_tokens" else: suffix = "lm_head" self.lm_head = SpeculativeHead.load( config, prefix=f"{prefix}.{suffix}" if prefix else suffix, weights=weights, ) self.embed_tokens = TensorParallelEmbedding( prefix=f"{prefix}.embed_tokens" if prefix else "model.embed_tokens", weights=weights, ) def forward( self, input_ids: torch.Tensor, position_ids: torch.Tensor, cu_seqlen_prefill: Optional[torch.Tensor], kv_cache: List[Tuple[torch.Tensor, torch.Tensor]], slots: torch.Tensor, seqlen: Seqlen, hpu_attention_meta: Optional[HPUPagedAttentionMetadata], lm_head_indices: Optional[torch.Tensor] = None, adapter_data: Optional[torch.Tensor] = None, ) -> torch.Tensor: inputs_embeds = self.embed_tokens(input_ids) # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn) hidden_states = self.model( inputs_embeds, position_ids, cu_seqlen_prefill, kv_cache, slots, seqlen, hpu_attention_meta, ) # Only compute necessary logits, and do not upcast them to float if we are not computing the loss if lm_head_indices is not None: hidden_states = hidden_states[lm_head_indices] logits = self.lm_head(hidden_states) return logits
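

# Routing sketch (illustrative, not part of the original model file): a tiny
# standalone rendition of the top-k expert dispatch used by
# Qwen3MoeSparseMoeBlock above, with identity "experts" and arbitrary sizes.
if __name__ == "__main__":
    num_experts, top_k, hidden_dim = 4, 2, 8
    tokens = torch.randn(5, hidden_dim)
    router_logits = torch.randn(5, num_experts)  # stand-in for self.gate(hidden_states)

    routing_weights = F.softmax(router_logits, dim=1)
    routing_weights, selected_experts = torch.topk(routing_weights, top_k, dim=-1)
    routing_weights /= routing_weights.sum(dim=-1, keepdim=True)  # norm_topk_prob

    # One-hot expert mask of shape (num_experts, top_k, num_tokens), as in the block.
    expert_mask = torch.nn.functional.one_hot(
        selected_experts, num_classes=num_experts
    ).permute(2, 1, 0)

    out = torch.zeros_like(tokens)
    for expert_idx in range(num_experts):
        idx, top_x = torch.where(expert_mask[expert_idx])
        # A real expert is an MLP; identity keeps the sketch short.
        out.index_add_(0, top_x, tokens[top_x] * routing_weights[top_x, idx, None])
    print(out.shape)  # torch.Size([5, 8])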
text-generation-inference/backends/gaudi/server/text_generation_server/models/custom_modeling/flash_qwen3_moe_modeling.py/0
{ "file_path": "text-generation-inference/backends/gaudi/server/text_generation_server/models/custom_modeling/flash_qwen3_moe_modeling.py", "repo_id": "text-generation-inference", "token_count": 9436 }
282
# Llamacpp backend

If all your dependencies are installed at the system level, running
`cargo build` should be sufficient. However, if you want to experiment
with different versions of llama.cpp, some additional setup is required.

## Install llama.cpp

```bash
LLAMACPP_PREFIX=$(pwd)/llama.cpp.out

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build \
    -DCMAKE_INSTALL_PREFIX="$LLAMACPP_PREFIX" \
    -DLLAMA_BUILD_COMMON=OFF \
    -DLLAMA_BUILD_TESTS=OFF \
    -DLLAMA_BUILD_EXAMPLES=OFF \
    -DLLAMA_BUILD_SERVER=OFF
cmake --build build --config Release -j
cmake --install build
```

## Build TGI

```bash
PKG_CONFIG_PATH="$LLAMACPP_PREFIX/lib/pkgconfig" cargo build
```
text-generation-inference/backends/llamacpp/README.md/0
{ "file_path": "text-generation-inference/backends/llamacpp/README.md", "repo_id": "text-generation-inference", "token_count": 297 }
283
from typing import Any, Callable

import grpc
from google.rpc import code_pb2, status_pb2
from grpc_interceptor.server import AsyncServerInterceptor
from grpc_status import rpc_status
from loguru import logger


class ExceptionInterceptor(AsyncServerInterceptor):
    async def intercept(
        self,
        method: Callable,
        request_or_iterator: Any,
        context: grpc.ServicerContext,
        method_name: str,
    ) -> Any:
        try:
            response = method(request_or_iterator, context)
            return await response
        except Exception as err:
            method_name = method_name.split("/")[-1]
            logger.exception(f"Method {method_name} encountered an error.")

            await context.abort_with_status(
                rpc_status.to_status(
                    status_pb2.Status(code=code_pb2.INTERNAL, message=str(err))
                )
            )
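

# Wiring sketch (illustrative, not part of the original module): how this
# interceptor could be attached to a grpc.aio server. The port and the absence
# of service registration are assumptions of the example.
if __name__ == "__main__":
    import asyncio

    async def serve() -> None:
        # Interceptors are supplied when the server is constructed.
        server = grpc.aio.server(interceptors=[ExceptionInterceptor()])
        server.add_insecure_port("[::]:50051")
        await server.start()
        await server.wait_for_termination()

    asyncio.run(serve())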
text-generation-inference/backends/neuron/server/text_generation_server/interceptor.py/0
{ "file_path": "text-generation-inference/backends/neuron/server/text_generation_server/interceptor.py", "repo_id": "text-generation-inference", "token_count": 399 }
284
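For context on how an interceptor like this is wired up, grpc's asyncio server accepts interceptors at construction time. A rough sketch, assuming the module import path above; the socket path is a placeholder and servicer registration is elided:

```python
import asyncio

import grpc

from text_generation_server.interceptor import ExceptionInterceptor


async def serve() -> None:
    # Every RPC handled by this server flows through the interceptor, so
    # unhandled servicer exceptions become INTERNAL gRPC statuses.
    server = grpc.aio.server(interceptors=[ExceptionInterceptor()])
    # add_*_servicer_to_server(...) calls would go here.
    server.add_insecure_port("unix:///tmp/tgi-example.sock")
    await server.start()
    await server.wait_for_termination()


if __name__ == "__main__":
    asyncio.run(serve())
```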
import os import pytest from tempfile import TemporaryDirectory from optimum.neuron.models.inference.nxd.backend.config import NxDNeuronConfig from optimum.neuron.utils import map_torch_dtype from text_generation_server.tgi_env import ( get_neuron_config_for_model, lookup_compatible_cached_model, neuron_config_to_env, ) def test_get_neuron_config_for_model(neuron_model_config): neuron_model_path = neuron_model_config["neuron_model_path"] export_kwargs = neuron_model_config["export_kwargs"] os.environ["MAX_BATCH_SIZE"] = str(export_kwargs["batch_size"]) os.environ["MAX_TOTAL_TOKENS"] = str(export_kwargs["sequence_length"]) os.environ["HF_AUTO_CAST_TYPE"] = export_kwargs["auto_cast_type"] os.environ["HF_NUM_CORES"] = str(export_kwargs["num_cores"]) neuron_config = get_neuron_config_for_model(neuron_model_path) assert neuron_config is not None assert neuron_config.batch_size == export_kwargs["batch_size"] assert neuron_config.sequence_length == export_kwargs["sequence_length"] assert neuron_config.tp_degree == export_kwargs["num_cores"] if isinstance(neuron_config, NxDNeuronConfig): assert map_torch_dtype(neuron_config.torch_dtype) == map_torch_dtype( export_kwargs["auto_cast_type"] ) else: assert map_torch_dtype(neuron_config.auto_cast_type) == map_torch_dtype( export_kwargs["auto_cast_type"] ) @pytest.mark.parametrize("model_id", ["unsloth/Llama-3.2-1B-Instruct"]) def test_lookup_compatible_cached_model(model_id: str): neuron_config = lookup_compatible_cached_model(model_id, None) assert neuron_config is not None def test_neuron_config_to_env(neuron_model_config) -> None: neuron_model_path = neuron_model_config["neuron_model_path"] neuron_config = get_neuron_config_for_model(neuron_model_path) with TemporaryDirectory() as temp_dir: os.environ["ENV_FILEPATH"] = os.path.join(temp_dir, "env.sh") neuron_config_to_env(neuron_config) with open(os.environ["ENV_FILEPATH"], "r") as env_file: env_content = env_file.read() assert f"export MAX_BATCH_SIZE={neuron_config.batch_size}" in env_content assert ( f"export MAX_TOTAL_TOKENS={neuron_config.sequence_length}" in env_content ) assert f"export HF_NUM_CORES={neuron_config.tp_degree}" in env_content if hasattr(neuron_config, "torch_dtype"): auto_cast_type = str(map_torch_dtype(neuron_config.torch_dtype)).split( "." )[-1] else: auto_cast_type = neuron_config.auto_cast_type assert f"export HF_AUTO_CAST_TYPE={auto_cast_type}" in env_content
text-generation-inference/backends/neuron/tests/test_entry_point.py/0
{ "file_path": "text-generation-inference/backends/neuron/tests/test_entry_point.py", "repo_id": "text-generation-inference", "token_count": 1205 }
285
from argparse import ArgumentParser

AWS_S3_CACHING_VARIABLES = {
    "AWS_ACCESS_KEY_ID": "aws_access_key_id",
    "AWS_SECRET_ACCESS_KEY": "aws_secret_access_key",
    "AWS_SESSION_TOKEN": "aws_session_token",
    "SCCACHE_REGION": "s3_region",
    "SCCACHE_BUCKET": "s3_bucket_name",
}

ALL_CACHING_STORAGE_VARIABLES = {"AWS_S3_CACHING_VARIABLES"}


def setup_sccache_locally():
    from os import environ

    print("Setting up Local Caching Layer")
    for target in ALL_CACHING_STORAGE_VARIABLES:
        # Each entry names a module-level dict of caching env vars.
        for envvar in globals()[target].keys():
            if envvar in environ:
                del environ[envvar]
                print(f"Deleted {envvar} from environment variables")


def setup_sccache_for_s3():
    from os import environ

    print("Setting up AWS S3 Caching Layer")
    for envvar in AWS_S3_CACHING_VARIABLES.keys():
        if envvar not in environ or not environ[envvar] or len(environ[envvar]) == 0:
            print(f"Missing definition for environment variable {envvar}")


if __name__ == "__main__":
    parser = ArgumentParser("TensorRT-LLM Build Caching Setup")

    parser.add_argument(
        "--is-gha-build",
        type=str,
        default="FALSE",
        help="Indicate if the build is from Github Actions",
    )

    # Parse args
    args = parser.parse_args()
    args.is_gha_build = args.is_gha_build.lower() in {"on", "true", "1"}

    if args.is_gha_build:
        setup_sccache_for_s3()
    else:
        setup_sccache_locally()
text-generation-inference/backends/trtllm/scripts/setup_sccache.py/0
{ "file_path": "text-generation-inference/backends/trtllm/scripts/setup_sccache.py", "repo_id": "text-generation-inference", "token_count": 663 }
286
use crate::client::{ Batch, GrammarType, NextTokenChooserParameters, Request, StoppingCriteriaParameters, }; use nohash_hasher::{BuildNoHashHasher, IntMap}; use std::cmp::min; use std::collections::VecDeque; use text_generation_router::infer::InferError; use text_generation_router::infer::InferStreamResponse; use text_generation_router::validation::{ ChunksToString, ValidGenerateRequest, ValidGrammar, ValidParameters, ValidStoppingParameters, }; use tokio::sync::{mpsc, oneshot}; use tokio::time::Instant; use tracing::{info_span, instrument, Span}; /// Queue entry #[derive(Debug)] pub(crate) struct Entry { /// Request pub request: ValidGenerateRequest, /// Response sender to communicate between the Infer struct and the batching_task pub response_tx: mpsc::UnboundedSender<Result<InferStreamResponse, InferError>>, /// Span that will live as long as entry pub span: Span, /// Temporary span used as a guard when logging inference, wait times... pub temp_span: Option<Span>, /// Instant when this entry was queued pub queue_time: Instant, /// Instant when this entry was added to a batch pub batch_time: Option<Instant>, } /// Request Queue #[derive(Debug, Clone)] pub(crate) struct Queue { /// Channel to communicate with the background queue task queue_sender: mpsc::UnboundedSender<QueueCommand>, } impl Queue { pub(crate) fn new( requires_padding: bool, block_size: u32, window_size: Option<u32>, speculate: u32, ) -> Self { // Create channel let (queue_sender, queue_receiver) = mpsc::unbounded_channel(); // Launch background queue task tokio::spawn(queue_task( requires_padding, block_size, window_size, speculate, queue_receiver, )); Self { queue_sender } } #[instrument(skip_all)] pub(crate) fn append(&self, entry: Entry) { // Send append command to the background task managing the state // Unwrap is safe here self.queue_sender .send(QueueCommand::Append(Box::new(entry), Span::current())) .unwrap(); } // Get the next batch #[instrument(skip(self))] pub(crate) async fn next_batch( &self, min_size: Option<usize>, max_size: Option<usize>, prefill_token_budget: u32, token_budget: u32, ) -> Option<NextBatch> { // Create response channel let (response_sender, response_receiver) = oneshot::channel(); // Send next batch command to the background task managing the state // Unwrap is safe here self.queue_sender .send(QueueCommand::NextBatch { min_size, max_size, prefill_token_budget, token_budget, response_sender, span: Span::current(), }) .unwrap(); // Await on response channel // Unwrap is safe here response_receiver.await.unwrap() } } // Background task responsible of the queue state async fn queue_task( requires_padding: bool, block_size: u32, window_size: Option<u32>, speculate: u32, mut receiver: mpsc::UnboundedReceiver<QueueCommand>, ) { let mut state = State::new(requires_padding, block_size, window_size, speculate); while let Some(cmd) = receiver.recv().await { match cmd { QueueCommand::Append(entry, span) => { span.in_scope(|| state.append(*entry)); metrics::gauge!("tgi_queue_size").increment(1.0); } QueueCommand::NextBatch { min_size, max_size, prefill_token_budget, token_budget, response_sender, span, } => span.in_scope(|| { let next_batch = state.next_batch(min_size, max_size, prefill_token_budget, token_budget); response_sender.send(next_batch).unwrap(); metrics::gauge!("tgi_queue_size").set(state.entries.len() as f64); }), } } } /// Queue State #[derive(Debug)] struct State { /// Queue entries organized in a Vec entries: VecDeque<(u64, Entry)>, /// Id of the next entry next_id: u64, /// Id of the 
next batch next_batch_id: u64, /// Whether the model is using padding requires_padding: bool, /// Paged Attention block size block_size: u32, /// Sliding window window_size: Option<u32>, /// Speculation amount speculate: u32, } impl State { fn new( requires_padding: bool, block_size: u32, window_size: Option<u32>, speculate: u32, ) -> Self { Self { entries: VecDeque::with_capacity(128), next_id: 0, next_batch_id: 0, requires_padding, block_size, window_size, speculate, } } /// Append an entry to the queue fn append(&mut self, mut entry: Entry) { // Create a span that will live as long as the entry is in the queue waiting to be batched let queue_span = info_span!(parent: &entry.span, "queued"); entry.temp_span = Some(queue_span); // Push entry in the queue self.entries.push_back((self.next_id, entry)); self.next_id += 1; } // Get the next batch fn next_batch( &mut self, min_size: Option<usize>, max_size: Option<usize>, prefill_token_budget: u32, token_budget: u32, ) -> Option<NextBatch> { if self.entries.is_empty() { tracing::debug!("No queue"); return None; } // Check if we have enough entries if let Some(min_size) = min_size { if self.entries.len() < min_size { tracing::debug!("Not enough entries"); return None; } } if let Some(max_size) = max_size { if max_size == 0 { tracing::debug!("No capacity"); return None; } } // Pad prefill_token_budget to be a multiple of block size let prefill_token_budget = prefill_token_budget.div_ceil(self.block_size) * self.block_size; // Create span for this batch to add context to inference calls let next_batch_span = info_span!(parent: None, "batch", batch_size = tracing::field::Empty); next_batch_span.follows_from(Span::current()); let mut batch_requests = Vec::with_capacity(self.entries.len()); let mut batch_entries = IntMap::with_capacity_and_hasher(self.entries.len(), BuildNoHashHasher::default()); let mut max_input_length = 0; let mut prefill_tokens: u32 = 0; let mut decode_tokens: u32 = 0; // Pop entries starting from the front of the queue while let Some((id, mut entry)) = self.entries.pop_front() { // Filter entries where the response receiver was dropped (== entries where the request // was dropped by the client) if entry.response_tx.is_closed() { metrics::counter!("tgi_request_failure", "err" => "dropped").increment(1); tracing::debug!("Dropping entry"); continue; } if self.requires_padding { // We pad to max input length in the Python shards // We need to take these padding tokens into the equation max_input_length = max_input_length.max(entry.request.input_length); prefill_tokens = (batch_requests.len() + 1) as u32 * max_input_length } else { // pad to block size prefill_tokens += entry.request.input_length.div_ceil(self.block_size) * self.block_size; } if self.requires_padding { decode_tokens += entry.request.stopping_parameters.max_new_tokens; } else { let max_new_tokens = match self.window_size { None => entry.request.stopping_parameters.max_new_tokens, Some(window_size) => min( window_size.saturating_sub(entry.request.input_length), entry.request.stopping_parameters.max_new_tokens, ), }; // pad to block size decode_tokens += max_new_tokens.div_ceil(self.block_size) * self.block_size; } if prefill_tokens > prefill_token_budget || (prefill_tokens + decode_tokens + self.speculate) > token_budget { // Entry is over budget // Add it back to the front tracing::debug!("Over budget: prefill_tokens={prefill_tokens} > {prefill_token_budget} || {prefill_tokens} + {decode_tokens} + {} > {token_budget}", self.speculate); self.entries.push_front((id, 
entry)); break; } tracing::debug!("Accepting entry"); // Create a new span to link the batch back to this entry let entry_batch_span = info_span!(parent: &entry.span, "infer"); // Add relationships next_batch_span.follows_from(&entry_batch_span); entry_batch_span.follows_from(&next_batch_span); // Update entry entry.temp_span = Some(entry_batch_span); batch_requests.push(Request { id, prefill_logprobs: entry.request.decoder_input_details, inputs: entry.request.inputs.chunks_to_string(), truncate: entry.request.truncate, parameters: Some(NextTokenChooserParameters::from( entry.request.parameters.clone(), )), stopping_parameters: Some(StoppingCriteriaParameters::from( entry.request.stopping_parameters.clone(), )), top_n_tokens: entry.request.top_n_tokens, }); // Set batch_time entry.batch_time = Some(Instant::now()); // Insert in batch_entries IntMap batch_entries.insert(id, entry); // Check if max_size if Some(batch_requests.len()) == max_size { break; } } // Empty batch if batch_requests.is_empty() { tracing::debug!("Filtered out all entries"); return None; } // Check if our batch is big enough if let Some(min_size) = min_size { // Batch is too small if batch_requests.len() < min_size { // Add back entries to the queue in the correct order for r in batch_requests.into_iter().rev() { let id = r.id; let entry = batch_entries.remove(&id).unwrap(); self.entries.push_front((id, entry)); } return None; } } // Final batch size let size = batch_requests.len() as u32; next_batch_span.record("batch_size", size); let batch = Batch { id: self.next_batch_id, requests: batch_requests, size, max_tokens: (prefill_tokens + decode_tokens), }; // Increment batch id self.next_batch_id += 1; metrics::histogram!("tgi_batch_next_size").record(batch.size as f64); Some((batch_entries, batch, next_batch_span)) } } type NextBatch = (IntMap<u64, Entry>, Batch, Span); #[derive(Debug)] enum QueueCommand { Append(Box<Entry>, Span), NextBatch { min_size: Option<usize>, max_size: Option<usize>, prefill_token_budget: u32, token_budget: u32, response_sender: oneshot::Sender<Option<NextBatch>>, span: Span, }, } impl From<ValidParameters> for NextTokenChooserParameters { fn from(value: ValidParameters) -> Self { let (grammar, grammar_type) = match value.grammar { None => (String::new(), GrammarType::None), Some(grammar) => match grammar { ValidGrammar::Json(grammar_string) => (grammar_string, GrammarType::Json), ValidGrammar::Regex(grammar_string) => (grammar_string, GrammarType::Regex), }, }; Self { temperature: value.temperature, top_k: value.top_k, top_p: value.top_p, typical_p: value.typical_p, do_sample: value.do_sample, seed: value.seed, repetition_penalty: value.repetition_penalty, frequency_penalty: value.frequency_penalty, watermark: value.watermark, grammar, grammar_type: grammar_type.into(), } } } impl From<ValidStoppingParameters> for StoppingCriteriaParameters { fn from(value: ValidStoppingParameters) -> Self { Self { max_new_tokens: value.max_new_tokens, stop_sequences: value.stop_sequences, ignore_eos_token: value.ignore_eos_token, } } } #[cfg(test)] mod tests { use super::*; use std::sync::Arc; use tracing::info_span; fn default_entry() -> ( Entry, mpsc::UnboundedReceiver<Result<InferStreamResponse, InferError>>, ) { let (response_tx, receiver_tx) = mpsc::unbounded_channel(); let entry = Entry { request: ValidGenerateRequest { inputs: vec![], input_ids: Some(Arc::new(vec![])), input_length: 0, add_special_tokens: true, truncate: 0, decoder_input_details: false, parameters: ValidParameters { temperature: 0.0, 
top_k: 0, top_p: 0.0, typical_p: 0.0, do_sample: false, seed: 0, repetition_penalty: 0.0, frequency_penalty: 0.0, watermark: false, grammar: None, }, stopping_parameters: ValidStoppingParameters { ignore_eos_token: false, max_new_tokens: 1, max_total_new_tokens: 1024, stop_sequences: vec![], }, top_n_tokens: 0, adapter_id: None, }, response_tx, span: info_span!("entry"), temp_span: None, queue_time: Instant::now(), batch_time: None, }; (entry, receiver_tx) } #[test] fn test_append() { let mut state = State::new(false, 1, None, 0); let (entry, _guard) = default_entry(); assert_eq!(state.next_id, 0); assert_eq!(state.entries.len(), 0); state.append(entry); assert_eq!(state.next_id, 1); assert_eq!(state.entries.len(), 1); let (id, _) = state.entries.remove(0).unwrap(); assert_eq!(id, 0); } #[test] fn test_next_batch_empty() { let mut state = State::new(false, 1, None, 0); assert!(state.next_batch(None, None, 1, 1).is_none()); assert!(state.next_batch(Some(1), None, 1, 1).is_none()); } #[test] fn test_next_batch_min_size() { let mut state = State::new(false, 1, None, 0); let (entry1, _guard1) = default_entry(); let (entry2, _guard2) = default_entry(); state.append(entry1); state.append(entry2); let (entries, batch, _) = state.next_batch(None, None, 2, 2).unwrap(); assert_eq!(entries.len(), 2); assert!(entries.contains_key(&0)); assert!(entries.contains_key(&1)); assert!(entries.get(&0).unwrap().batch_time.is_some()); assert!(entries.get(&1).unwrap().batch_time.is_some()); assert_eq!(batch.id, 0); assert_eq!(batch.size, 2); assert_eq!(state.next_id, 2); assert_eq!(state.entries.len(), 0); assert_eq!(state.next_batch_id, 1); let (entry3, _guard3) = default_entry(); state.append(entry3); assert!(state.next_batch(Some(2), None, 2, 2).is_none()); assert_eq!(state.next_id, 3); assert_eq!(state.entries.len(), 1); let (id, _) = state.entries.remove(0).unwrap(); assert_eq!(id, 2); } #[test] fn test_next_batch_max_size() { let mut state = State::new(false, 1, None, 0); let (entry1, _guard1) = default_entry(); let (entry2, _guard2) = default_entry(); state.append(entry1); state.append(entry2); let (entries, batch, _) = state.next_batch(None, Some(1), 2, 2).unwrap(); assert_eq!(entries.len(), 1); assert!(entries.contains_key(&0)); assert!(entries.get(&0).unwrap().batch_time.is_some()); assert_eq!(batch.id, 0); assert_eq!(batch.size, 1); assert_eq!(state.next_id, 2); assert_eq!(state.entries.len(), 1); assert_eq!(state.next_batch_id, 1); } #[test] fn test_next_batch_token_budget() { let mut state = State::new(false, 1, None, 0); let (entry1, _guard1) = default_entry(); let (entry2, _guard2) = default_entry(); state.append(entry1); state.append(entry2); let (entries, batch, _) = state.next_batch(None, None, 1, 1).unwrap(); assert_eq!(entries.len(), 1); assert!(entries.contains_key(&0)); assert_eq!(batch.id, 0); assert_eq!(batch.size, 1); assert_eq!(state.next_id, 2); assert_eq!(state.entries.len(), 1); assert_eq!(state.next_batch_id, 1); let (entry3, _guard3) = default_entry(); state.append(entry3); let (entries, batch, _) = state.next_batch(None, None, 3, 3).unwrap(); assert_eq!(entries.len(), 2); assert!(entries.contains_key(&1)); assert!(entries.contains_key(&2)); assert_eq!(batch.id, 1); assert_eq!(batch.size, 2); assert_eq!(state.next_id, 3); assert_eq!(state.entries.len(), 0); assert_eq!(state.next_batch_id, 2); } #[tokio::test] async fn test_queue_append() { let queue = Queue::new(false, 1, None, 0); let (entry, _guard) = default_entry(); queue.append(entry); } #[tokio::test] async fn 
test_queue_next_batch_empty() { let queue = Queue::new(false, 1, None, 0); assert!(queue.next_batch(None, None, 1, 1).await.is_none()); assert!(queue.next_batch(Some(1), None, 1, 1).await.is_none()); } #[tokio::test] async fn test_queue_next_batch_min_size() { let queue = Queue::new(false, 1, None, 0); let (entry1, _guard1) = default_entry(); let (entry2, _guard2) = default_entry(); queue.append(entry1); queue.append(entry2); let (entries, batch, _) = queue.next_batch(None, None, 2, 2).await.unwrap(); assert_eq!(entries.len(), 2); assert!(entries.contains_key(&0)); assert!(entries.contains_key(&1)); assert!(entries.get(&0).unwrap().batch_time.is_some()); assert!(entries.get(&1).unwrap().batch_time.is_some()); assert_eq!(batch.id, 0); assert_eq!(batch.size, 2); let (entry3, _guard3) = default_entry(); queue.append(entry3); // Not enough requests pending assert!(queue.next_batch(Some(2), None, 2, 2).await.is_none()); // Not enough token budget assert!(queue.next_batch(Some(1), None, 0, 0).await.is_none()); // Ok let (entries2, batch2, _) = queue.next_batch(Some(1), None, 2, 2).await.unwrap(); assert_eq!(entries2.len(), 1); assert!(entries2.contains_key(&2)); assert!(entries2.get(&2).unwrap().batch_time.is_some()); assert_eq!(batch2.id, 1); assert_eq!(batch2.size, 1); } #[tokio::test] async fn test_queue_next_batch_max_size() { let queue = Queue::new(false, 1, None, 0); let (entry1, _guard1) = default_entry(); let (entry2, _guard2) = default_entry(); queue.append(entry1); queue.append(entry2); let (entries, batch, _) = queue.next_batch(None, Some(1), 2, 2).await.unwrap(); assert_eq!(entries.len(), 1); assert!(entries.contains_key(&0)); assert!(entries.get(&0).unwrap().batch_time.is_some()); assert_eq!(batch.id, 0); assert_eq!(batch.size, 1); } #[tokio::test] async fn test_queue_next_batch_token_budget() { let queue = Queue::new(false, 1, None, 0); let (entry1, _guard1) = default_entry(); let (entry2, _guard2) = default_entry(); queue.append(entry1); queue.append(entry2); let (entries, batch, _) = queue.next_batch(None, None, 1, 1).await.unwrap(); assert_eq!(entries.len(), 1); assert!(entries.contains_key(&0)); assert_eq!(batch.id, 0); assert_eq!(batch.size, 1); let (entry3, _guard3) = default_entry(); queue.append(entry3); let (entries, batch, _) = queue.next_batch(None, None, 3, 3).await.unwrap(); assert_eq!(entries.len(), 2); assert!(entries.contains_key(&1)); assert!(entries.contains_key(&2)); assert_eq!(batch.id, 1); assert_eq!(batch.size, 2); } #[tokio::test] async fn test_queue_next_batch_token_speculate() { let queue = Queue::new(false, 1, None, 2); let (entry1, _guard1) = default_entry(); let (entry2, _guard2) = default_entry(); queue.append(entry1); queue.append(entry2); // Budget of 1 is not enough assert!(queue.next_batch(None, None, 1, 1).await.is_none()); let (entries, batch, _) = queue.next_batch(None, None, 6, 6).await.unwrap(); assert_eq!(entries.len(), 2); assert!(entries.contains_key(&0)); assert!(entries.contains_key(&1)); assert_eq!(batch.id, 0); assert_eq!(batch.size, 2); } #[tokio::test] async fn test_queue_next_batch_dropped_receiver() { let queue = Queue::new(false, 1, None, 0); let (entry, _) = default_entry(); queue.append(entry); assert!(queue.next_batch(None, None, 1, 1).await.is_none()); } }
text-generation-inference/backends/v2/src/queue.rs/0
{ "file_path": "text-generation-inference/backends/v2/src/queue.rs", "repo_id": "text-generation-inference", "token_count": 11054 }
287
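A detail worth calling out in `next_batch` above is that both the prefill budget and each request's token counts are padded up to a multiple of the paged-attention block size with `div_ceil`. The same arithmetic in a few lines of Python, with made-up lengths and a hypothetical block size:

```python
import math

BLOCK_SIZE = 16  # hypothetical paged-attention block size


def pad_to_block(tokens: int) -> int:
    # Mirrors the Rust `tokens.div_ceil(block_size) * block_size`.
    return math.ceil(tokens / BLOCK_SIZE) * BLOCK_SIZE


prefill_token_budget = 160
input_lengths = [37, 128, 5]

prefill_tokens = 0
for length in input_lengths:
    padded = pad_to_block(length)  # 37 -> 48, 128 -> 128, 5 -> 16
    if prefill_tokens + padded > prefill_token_budget:
        break  # over budget: the entry goes back to the front of the queue
    prefill_tokens += padded

print(prefill_tokens)  # 48 -- adding the 128-token request would exceed 160
```

Padding to whole blocks is what keeps the budget check consistent with how the KV cache is actually allocated.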
/// Inspired by https://github.com/orhun/rust-tui-template/blob/472aa515119d4c94903eac12d9784417281dc7f5/src/event.rs
use ratatui::crossterm::event;
use std::time::{Duration, Instant};
use tokio::sync::{broadcast, mpsc};

/// Events
#[derive(Debug)]
pub(crate) enum Event {
    /// Terminal tick.
    Tick,
    /// Key press.
    Key(event::KeyEvent),
    /// Terminal resize.
    Resize,
}

pub(crate) async fn terminal_event_task(
    fps: u32,
    event_sender: mpsc::Sender<Event>,
    mut shutdown_receiver: broadcast::Receiver<()>,
    _shutdown_guard_sender: mpsc::Sender<()>,
) {
    // End task if a message is received on shutdown_receiver
    // _shutdown_guard_sender will be dropped once the task is finished
    tokio::select! {
        _ = event_loop(fps, event_sender) => {
        },
        _ = shutdown_receiver.recv() => {}
    }
}

/// Main event loop
async fn event_loop(fps: u32, event_sender: mpsc::Sender<Event>) {
    // Frame budget
    let per_frame = Duration::from_secs(1) / fps;

    // When was last frame executed
    let mut last_frame = Instant::now();

    loop {
        // Sleep to avoid blocking the thread for too long
        if let Some(sleep) = per_frame.checked_sub(last_frame.elapsed()) {
            tokio::time::sleep(sleep).await;
        }

        // Get crossterm event and send a new one over the channel
        if event::poll(Duration::from_secs(0)).expect("no events available") {
            match event::read().expect("unable to read event") {
                event::Event::Key(e) => event_sender.send(Event::Key(e)).await.unwrap_or(()),
                event::Event::Resize(_w, _h) => {
                    event_sender.send(Event::Resize).await.unwrap_or(())
                }
                _ => (),
            }
        }

        // Frame budget exceeded
        if last_frame.elapsed() >= per_frame {
            // Send tick
            event_sender.send(Event::Tick).await.unwrap_or(());
            // Reset last_frame time
            last_frame = Instant::now();
        }
    }
}
text-generation-inference/benchmark/src/event.rs/0
{ "file_path": "text-generation-inference/benchmark/src/event.rs", "repo_id": "text-generation-inference", "token_count": 917 }
288
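The frame-budget pattern in `event_loop` above (sleep off the remainder of the frame, then emit a tick and reset the timer) translates almost one-to-one into Python. A rough asyncio sketch, not tied to any terminal library:

```python
import asyncio
import time


async def tick_loop(fps: int, ticks: int = 5) -> None:
    per_frame = 1.0 / fps  # frame budget in seconds
    last_frame = time.monotonic()
    for _ in range(ticks):
        # Sleep to avoid busy-waiting for the rest of the frame.
        remaining = per_frame - (time.monotonic() - last_frame)
        if remaining > 0:
            await asyncio.sleep(remaining)
        # Frame budget elapsed: emit a tick and reset the timer.
        print(f"tick at {time.monotonic():.3f}")
        last_frame = time.monotonic()


asyncio.run(tick_loop(fps=30))
```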
# Copyright 2023 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

__version__ = "0.7.0"

DEPRECATION_WARNING = (
    "`text_generation` clients are deprecated and will be removed in the near future. "
    "Please use the `InferenceClient` from the `huggingface_hub` package instead."
)

from text_generation.client import Client, AsyncClient  # noqa E402
from text_generation.inference_api import (  # noqa E402
    InferenceAPIClient,
    InferenceAPIAsyncClient,
)

__all__ = [
    "Client",
    "AsyncClient",
    "InferenceAPIClient",
    "InferenceAPIAsyncClient",
]
text-generation-inference/clients/python/text_generation/__init__.py/0
{ "file_path": "text-generation-inference/clients/python/text_generation/__init__.py", "repo_id": "text-generation-inference", "token_count": 338 }
289
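Given the deprecation warning above, the recommended migration target looks roughly like this; the endpoint URL is a placeholder for a running TGI server:

```python
from huggingface_hub import InferenceClient

client = InferenceClient("http://localhost:8080")

# Rough equivalent of text_generation.Client(...).generate(...)
text = client.text_generation("What is Deep Learning?", max_new_tokens=20)
print(text)
```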
# Serving Private & Gated Models

If the model you wish to serve is behind gated access or the model repository on Hugging Face Hub is private, and you have access to the model, you can provide your Hugging Face Hub access token. You can generate and copy a read token from the [Hugging Face Hub tokens page](https://huggingface.co/settings/tokens).

If you're using the CLI, set the `HF_TOKEN` environment variable. For example:

```
export HF_TOKEN=<YOUR READ TOKEN>
```

If you would like to do it through Docker, you can provide your token by specifying `HF_TOKEN` as shown below.

```bash
model=meta-llama/Llama-2-7b-chat-hf
volume=$PWD/data
token=<your READ token>

docker run --gpus all \
    --shm-size 1g \
    -e HF_TOKEN=$token \
    -p 8080:80 \
    -v $volume:/data \
    ghcr.io/huggingface/text-generation-inference:3.3.4 \
    --model-id $model
```
text-generation-inference/docs/source/basic_tutorials/gated_model_access.md/0
{ "file_path": "text-generation-inference/docs/source/basic_tutorials/gated_model_access.md", "repo_id": "text-generation-inference", "token_count": 290 }
290
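Before starting the server, it can be useful to confirm that the token actually grants access to the gated repository. One way to check, sketched with `huggingface_hub` and assuming `HF_TOKEN` is exported as above:

```python
import os

from huggingface_hub import model_info

# Raises an access error if the token does not grant access to the gated repo.
info = model_info("meta-llama/Llama-2-7b-chat-hf", token=os.environ["HF_TOKEN"])
print(f"Token grants access to {info.id}")
```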
# Safetensors

Safetensors is a model serialization format for deep learning models. It is [faster](https://huggingface.co/docs/safetensors/speed) and safer compared to other serialization formats like pickle (which is used under the hood in many deep learning libraries).

TGI depends on safetensors format mainly to enable [tensor parallelism sharding](./tensor_parallelism). For a given model repository during serving, TGI looks for safetensors weights. If there are no safetensors weights, TGI converts the PyTorch weights to safetensors format.

You can learn more about safetensors by reading the [safetensors documentation](https://huggingface.co/docs/safetensors/index).
text-generation-inference/docs/source/conceptual/safetensors.md/0
{ "file_path": "text-generation-inference/docs/source/conceptual/safetensors.md", "repo_id": "text-generation-inference", "token_count": 184 }
291
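To make the format concrete, here is a minimal safetensors round trip with the `safetensors` library; the tensor name and shape are arbitrary:

```python
import torch
from safetensors.torch import load_file, save_file

weights = {"embedding.weight": torch.zeros((10, 4))}

# A safetensors file is a small JSON header plus raw tensor buffers.
save_file(weights, "model.safetensors")

# Loading never unpickles arbitrary objects, unlike torch.load on .bin files.
restored = load_file("model.safetensors")
print(restored["embedding.weight"].shape)  # torch.Size([10, 4])
```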
[ { "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [ { "id": 17934, "logprob": null, "text": "Pour" }, { "id": 49833, "logprob": -10.5625, "text": " dég" }, { "id": 21543, "logprob": -0.14770508, "text": "uster" }, { "id": 447, "logprob": -1.9287109, "text": " un" }, { "id": 46341, "logprob": -15.4609375, "text": " ort" }, { "id": 35567, "logprob": -7.5585938, "text": "olan" }, { "id": 15, "logprob": -1.4003906, "text": "," }, { "id": 1669, "logprob": -1.5673828, "text": " il" }, { "id": 11580, "logprob": -0.94628906, "text": " faut" }, { "id": 3913, "logprob": -3.703125, "text": " tout" }, { "id": 39261, "logprob": -1.5732422, "text": " d'abord" } ], "seed": null, "tokens": [ { "id": 578, "logprob": -1.7646484, "special": false, "text": " le" }, { "id": 5608, "logprob": -2.6113281, "special": false, "text": " faire" }, { "id": 1767, "logprob": -1.5263672, "special": false, "text": " cu" }, { "id": 1273, "logprob": -0.00010049343, "special": false, "text": "ire" }, { "id": 1486, "logprob": -1.4707031, "special": false, "text": " dans" }, { "id": 283, "logprob": -1.2119141, "special": false, "text": " de" }, { "id": 40410, "logprob": -0.11883545, "special": false, "text": " l'eau" }, { "id": 20226, "logprob": -0.40844727, "special": false, "text": " bou" }, { "id": 172483, "logprob": -0.0037841797, "special": false, "text": "illante" }, { "id": 2805, "logprob": -1.0195312, "special": false, "text": " sal" } ] }, "generated_text": " le faire cuire dans de l'eau bouillante sal" }, { "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [ { "id": 17934, "logprob": null, "text": "Pour" }, { "id": 49833, "logprob": -10.53125, "text": " dég" }, { "id": 21543, "logprob": -0.14770508, "text": "uster" }, { "id": 447, "logprob": -1.9287109, "text": " un" }, { "id": 46341, "logprob": -15.4140625, "text": " ort" }, { "id": 35567, "logprob": -7.5234375, "text": "olan" }, { "id": 15, "logprob": -1.3613281, "text": "," }, { "id": 1669, "logprob": -1.5458984, "text": " il" }, { "id": 11580, "logprob": -0.94189453, "text": " faut" }, { "id": 3913, "logprob": -3.7011719, "text": " tout" }, { "id": 39261, "logprob": -1.5732422, "text": " d'abord" } ], "seed": null, "tokens": [ { "id": 578, "logprob": -1.7548828, "special": false, "text": " le" }, { "id": 5608, "logprob": -2.578125, "special": false, "text": " faire" }, { "id": 1767, "logprob": -1.5117188, "special": false, "text": " cu" }, { "id": 1273, "logprob": -0.00010049343, "special": false, "text": "ire" }, { "id": 1486, "logprob": -1.4707031, "special": false, "text": " dans" }, { "id": 283, "logprob": -1.1982422, "special": false, "text": " de" }, { "id": 40410, "logprob": -0.11004639, "special": false, "text": " l'eau" }, { "id": 20226, "logprob": -0.4506836, "special": false, "text": " bou" }, { "id": 172483, "logprob": -0.003047943, "special": false, "text": "illante" }, { "id": 2805, "logprob": -1.0185547, "special": false, "text": " sal" } ] }, "generated_text": " le faire cuire dans de l'eau bouillante sal" }, { "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [ { "id": 17934, "logprob": null, "text": "Pour" }, { "id": 49833, "logprob": -10.53125, "text": " dég" }, { "id": 21543, "logprob": -0.14770508, "text": "uster" }, { "id": 447, "logprob": -1.9287109, "text": " un" }, { "id": 46341, "logprob": -15.4140625, "text": " ort" }, { "id": 35567, "logprob": -7.5234375, "text": "olan" }, { 
"id": 15, "logprob": -1.3613281, "text": "," }, { "id": 1669, "logprob": -1.5458984, "text": " il" }, { "id": 11580, "logprob": -0.94189453, "text": " faut" }, { "id": 3913, "logprob": -3.7011719, "text": " tout" }, { "id": 39261, "logprob": -1.5732422, "text": " d'abord" } ], "seed": null, "tokens": [ { "id": 578, "logprob": -1.7548828, "special": false, "text": " le" }, { "id": 5608, "logprob": -2.578125, "special": false, "text": " faire" }, { "id": 1767, "logprob": -1.5117188, "special": false, "text": " cu" }, { "id": 1273, "logprob": -0.00010049343, "special": false, "text": "ire" }, { "id": 1486, "logprob": -1.4707031, "special": false, "text": " dans" }, { "id": 283, "logprob": -1.1982422, "special": false, "text": " de" }, { "id": 40410, "logprob": -0.11004639, "special": false, "text": " l'eau" }, { "id": 20226, "logprob": -0.4506836, "special": false, "text": " bou" }, { "id": 172483, "logprob": -0.003047943, "special": false, "text": "illante" }, { "id": 2805, "logprob": -1.0185547, "special": false, "text": " sal" } ] }, "generated_text": " le faire cuire dans de l'eau bouillante sal" }, { "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [ { "id": 17934, "logprob": null, "text": "Pour" }, { "id": 49833, "logprob": -10.53125, "text": " dég" }, { "id": 21543, "logprob": -0.14770508, "text": "uster" }, { "id": 447, "logprob": -1.9287109, "text": " un" }, { "id": 46341, "logprob": -15.4140625, "text": " ort" }, { "id": 35567, "logprob": -7.5234375, "text": "olan" }, { "id": 15, "logprob": -1.3613281, "text": "," }, { "id": 1669, "logprob": -1.5458984, "text": " il" }, { "id": 11580, "logprob": -0.94189453, "text": " faut" }, { "id": 3913, "logprob": -3.7011719, "text": " tout" }, { "id": 39261, "logprob": -1.5732422, "text": " d'abord" } ], "seed": null, "tokens": [ { "id": 578, "logprob": -1.7548828, "special": false, "text": " le" }, { "id": 5608, "logprob": -2.578125, "special": false, "text": " faire" }, { "id": 1767, "logprob": -1.5117188, "special": false, "text": " cu" }, { "id": 1273, "logprob": -0.00010049343, "special": false, "text": "ire" }, { "id": 1486, "logprob": -1.4707031, "special": false, "text": " dans" }, { "id": 283, "logprob": -1.1982422, "special": false, "text": " de" }, { "id": 40410, "logprob": -0.11004639, "special": false, "text": " l'eau" }, { "id": 20226, "logprob": -0.4506836, "special": false, "text": " bou" }, { "id": 172483, "logprob": -0.003047943, "special": false, "text": "illante" }, { "id": 2805, "logprob": -1.0185547, "special": false, "text": " sal" } ] }, "generated_text": " le faire cuire dans de l'eau bouillante sal" } ]
text-generation-inference/integration-tests/models/__snapshots__/test_bloom_560m/test_bloom_560m_load.json/0
{ "file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_bloom_560m/test_bloom_560m_load.json", "repo_id": "text-generation-inference", "token_count": 7244 }
292
{ "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [], "seed": 0, "tokens": [ { "id": 5267, "logprob": -1.1464844, "special": false, "text": "?\n" }, { "id": 33464, "logprob": -0.83203125, "special": false, "text": "Deep" }, { "id": 20909, "logprob": -0.5625, "special": false, "text": " Learning" }, { "id": 320, "logprob": -2.1464844, "special": false, "text": " (" }, { "id": 16524, "logprob": 0.0, "special": false, "text": "DL" }, { "id": 701, "logprob": -2.2089844, "special": false, "text": ")," }, { "id": 476, "logprob": -0.27368164, "special": false, "text": " or" }, { "id": 20443, "logprob": -0.09442139, "special": false, "text": " artificial" }, { "id": 29728, "logprob": 0.0, "special": false, "text": " neural" }, { "id": 14155, "logprob": 0.0, "special": false, "text": " networks" } ], "top_tokens": null }, "generated_text": "What is deep learning?\nDeep Learning (DL), or artificial neural networks" }
text-generation-inference/integration-tests/models/__snapshots__/test_compressed_tensors_w8a8_int_dynamic_weight/test_compressed_tensors_w8a8_int_dynamic_weight_all_params.json/0
{ "file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_compressed_tensors_w8a8_int_dynamic_weight/test_compressed_tensors_w8a8_int_dynamic_weight_all_params.json", "repo_id": "text-generation-inference", "token_count": 862 }
293
{ "choices": [ { "finish_reason": "stop", "index": 0, "logprobs": null, "message": { "content": "Okay, let's analyze the image. \n\nThe image is a very plain, solid white square. That's it! \n\nIt's essentially a blank canvas. \n\nDo you want me to describe it in more detail, or are you interested in something else regarding this image?", "name": null, "role": "assistant", "tool_calls": null }, "usage": null } ], "created": 1747062955, "id": "", "model": "google/gemma-3-4b-it", "object": "chat.completion", "system_fingerprint": "3.3.4-dev0-native", "usage": { "completion_tokens": 62, "prompt_tokens": 277, "total_tokens": 339 } }
text-generation-inference/integration-tests/models/__snapshots__/test_flash_gemma3/test_flash_gemma3_image_base64_rgb_png.json/0
{ "file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_flash_gemma3/test_flash_gemma3_image_base64_rgb_png.json", "repo_id": "text-generation-inference", "token_count": 324 }
294
{ "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [], "seed": null, "tokens": [ { "id": 5229, "logprob": -2.7988281, "special": false, "text": " failed" }, { "id": 29901, "logprob": -0.91259766, "special": false, "text": ":" }, { "id": 853, "logprob": -2.8496094, "special": false, "text": " Un" }, { "id": 23765, "logprob": -1.1894531, "special": false, "text": "supported" }, { "id": 4714, "logprob": -1.5917969, "special": false, "text": " browser" }, { "id": 29892, "logprob": -0.34765625, "special": false, "text": "," }, { "id": 1873, "logprob": -1.2695312, "special": false, "text": " version" }, { "id": 470, "logprob": -0.25170898, "special": false, "text": " or" }, { "id": 7481, "logprob": -0.21411133, "special": false, "text": " platform" }, { "id": 13, "logprob": -1.1162109, "special": false, "text": "\n" } ], "top_tokens": null }, "generated_text": " failed: Unsupported browser, version or platform\n" }
text-generation-inference/integration-tests/models/__snapshots__/test_flash_llama_marlin_24/test_flash_llama_marlin.json/0
{ "file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_flash_llama_marlin_24/test_flash_llama_marlin.json", "repo_id": "text-generation-inference", "token_count": 869 }
295
{ "details": { "finish_reason": "length", "generated_tokens": 40, "prefill": [], "seed": null, "tokens": [ { "id": 13, "logprob": -0.31347656, "special": false, "text": "\n" }, { "id": 13, "logprob": -0.27441406, "special": false, "text": "\n" }, { "id": 28737, "logprob": -2.2285156, "special": false, "text": "I" }, { "id": 28809, "logprob": -1.4677734, "special": false, "text": "’" }, { "id": 28719, "logprob": -0.31762695, "special": false, "text": "m" }, { "id": 264, "logprob": -1.6865234, "special": false, "text": " a" }, { "id": 1215, "logprob": -3.2695312, "special": false, "text": " very" }, { "id": 20640, "logprob": -3.1230469, "special": false, "text": " passionate" }, { "id": 1338, "logprob": -0.48339844, "special": false, "text": " person" }, { "id": 28723, "logprob": -0.9970703, "special": false, "text": "." }, { "id": 315, "logprob": -0.5498047, "special": false, "text": " I" }, { "id": 28809, "logprob": -1.1923828, "special": false, "text": "’" }, { "id": 28719, "logprob": -0.080444336, "special": false, "text": "m" }, { "id": 1215, "logprob": -1.8271484, "special": false, "text": " very" }, { "id": 12215, "logprob": -2.8847656, "special": false, "text": " driven" }, { "id": 28723, "logprob": -1.0927734, "special": false, "text": "." }, { "id": 315, "logprob": -0.4584961, "special": false, "text": " I" }, { "id": 28809, "logprob": -0.5019531, "special": false, "text": "’" }, { "id": 28719, "logprob": -0.030715942, "special": false, "text": "m" }, { "id": 1215, "logprob": -0.96972656, "special": false, "text": " very" }, { "id": 7798, "logprob": -2.8847656, "special": false, "text": " determined" }, { "id": 28723, "logprob": -0.27319336, "special": false, "text": "." }, { "id": 13, "logprob": -0.56396484, "special": false, "text": "\n" }, { "id": 13, "logprob": -0.011016846, "special": false, "text": "\n" }, { "id": 3195, "logprob": -0.7163086, "special": false, "text": "What" }, { "id": 349, "logprob": -1.1611328, "special": false, "text": " is" }, { "id": 574, "logprob": -0.515625, "special": false, "text": " your" }, { "id": 6656, "logprob": -1.0253906, "special": false, "text": " favorite" }, { "id": 1970, "logprob": -2.1738281, "special": false, "text": " thing" }, { "id": 684, "logprob": -0.48364258, "special": false, "text": " about" }, { "id": 1250, "logprob": -1.8876953, "special": false, "text": " being" }, { "id": 264, "logprob": -0.41967773, "special": false, "text": " a" }, { "id": 8626, "logprob": -2.9160156, "special": false, "text": " teacher" }, { "id": 28804, "logprob": -0.11920166, "special": false, "text": "?" }, { "id": 13, "logprob": -0.023727417, "special": false, "text": "\n" }, { "id": 13, "logprob": -0.010848999, "special": false, "text": "\n" }, { "id": 28737, "logprob": -1.0566406, "special": false, "text": "I" }, { "id": 2016, "logprob": -0.7163086, "special": false, "text": " love" }, { "id": 272, "logprob": -1.9169922, "special": false, "text": " the" }, { "id": 1639, "logprob": -2.03125, "special": false, "text": " fact" } ] }, "generated_text": "\n\nI’m a very passionate person. I’m very driven. I’m very determined.\n\nWhat is your favorite thing about being a teacher?\n\nI love the fact" }
text-generation-inference/integration-tests/models/__snapshots__/test_lora_mistral/test_lora_mistral_without_customer_support_adapter.json/0
{ "file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_lora_mistral/test_lora_mistral_without_customer_support_adapter.json", "repo_id": "text-generation-inference", "token_count": 3126 }
296
{ "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [], "seed": 0, "tokens": [ { "id": 29899, "logprob": -1.4980469, "special": false, "text": "-" }, { "id": 1454, "logprob": -0.19433594, "special": false, "text": "for" }, { "id": 29899, "logprob": 0.0, "special": false, "text": "-" }, { "id": 9342, "logprob": 0.0, "special": false, "text": "comment" }, { "id": 29901, "logprob": 0.0, "special": false, "text": ":" }, { "id": 396, "logprob": -0.27392578, "special": false, "text": " #" }, { "id": 29906, "logprob": -0.49389648, "special": false, "text": "2" }, { "id": 29900, "logprob": -0.81103516, "special": false, "text": "0" }, { "id": 29896, "logprob": 0.0, "special": false, "text": "1" }, { "id": 29955, "logprob": -1.0800781, "special": false, "text": "7" } ], "top_tokens": null }, "generated_text": "Test request-for-comment: #2017" }
text-generation-inference/integration-tests/models/__snapshots__/test_server_gptq_quantized/test_server_gptq_quantized_all_params.json/0
{ "file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_server_gptq_quantized/test_server_gptq_quantized_all_params.json", "repo_id": "text-generation-inference", "token_count": 853 }
297
{ "choices": [ { "finish_reason": "stop", "index": 0, "logprobs": null, "message": { "content": "I can't access real-time data, but I can provide you with current conditions and forecast for Paris, France:\n\nThe current conditions in Paris are mostly cloudy with a temperature of 6.7°C (44.1°F). \n\nPlease note that the actual weather may differ from the provided information. For up-to-date information, I suggest checking a reliable weather website or app for the latest conditions and forecast.", "role": "assistant", "tool_calls": null } } ], "created": 1741263702, "id": "", "model": "meta-llama/Llama-3.1-8B-Instruct", "object": "chat.completion", "system_fingerprint": "3.1.2-dev0-native", "usage": { "completion_tokens": 83, "prompt_tokens": 109, "total_tokens": 192 } }
text-generation-inference/integration-tests/models/__snapshots__/test_tools_llama/test_flash_llama_tool_reply_response.json/0
{ "file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_tools_llama/test_flash_llama_tool_reply_response.json", "repo_id": "text-generation-inference", "token_count": 335 }
298
import pytest @pytest.fixture(scope="module") def compressed_tensors_w8an_handle(launcher): with launcher( "neuralmagic/Llama-3.2-1B-Instruct-FP8", num_shard=2, quantize="compressed-tensors", ) as handle: yield handle @pytest.fixture(scope="module") async def compressed_tensors_w8an(compressed_tensors_w8an_handle): await compressed_tensors_w8an_handle.health(300) return compressed_tensors_w8an_handle.client @pytest.mark.release @pytest.mark.asyncio @pytest.mark.private async def test_compressed_tensors_w8an(compressed_tensors_w8an, response_snapshot): response = await compressed_tensors_w8an.generate( "What is deep learning?", max_new_tokens=10, decoder_input_details=True, ) assert ( response.generated_text == " Deep learning is a type of artificial intelligence (AI" ) assert response.details.generated_tokens == 10 assert response == response_snapshot @pytest.mark.asyncio async def test_compressed_tensors_w8an_all_params( compressed_tensors_w8an, response_snapshot ): response = await compressed_tensors_w8an.generate( "What is deep learning", max_new_tokens=10, repetition_penalty=1.2, return_full_text=True, stop_sequences=["test"], temperature=0.5, top_p=0.9, top_k=10, truncate=5, typical_p=0.9, watermark=True, decoder_input_details=True, seed=0, ) assert response.details.generated_tokens == 10 assert ( response.generated_text == "What is deep learning?\nDeep learning, also known as neural network or" ) assert response == response_snapshot @pytest.mark.release @pytest.mark.asyncio @pytest.mark.private async def test_compressed_tensors_w8an_load( compressed_tensors_w8an, generate_load, response_snapshot ): responses = await generate_load( compressed_tensors_w8an, "What is deep learning?", max_new_tokens=10, n=4, ) assert ( responses[0].generated_text == " Deep learning is a type of artificial intelligence (AI" ) assert len(responses) == 4 assert all([r.generated_text == responses[0].generated_text for r in responses]) assert responses == response_snapshot
text-generation-inference/integration-tests/models/test_compressed_tensors_w8an_fp.py/0
{ "file_path": "text-generation-inference/integration-tests/models/test_compressed_tensors_w8an_fp.py", "repo_id": "text-generation-inference", "token_count": 1000 }
299
import pytest @pytest.fixture(scope="module") def flash_llama_fp8_handle(launcher): with launcher("meta-llama/Meta-Llama-3-8B", num_shard=2, quantize="fp8") as handle: yield handle @pytest.fixture(scope="module") async def flash_llama_fp8(flash_llama_fp8_handle): await flash_llama_fp8_handle.health(300) return flash_llama_fp8_handle.client @pytest.mark.release @pytest.mark.asyncio @pytest.mark.private async def test_flash_llama_fp8(flash_llama_fp8, response_snapshot): response = await flash_llama_fp8.generate( "Test request", max_new_tokens=10, decoder_input_details=True ) assert response.generated_text == " for the 2019-2020 school year" assert response.details.generated_tokens == 10 assert response == response_snapshot @pytest.mark.release @pytest.mark.asyncio @pytest.mark.private async def test_flash_llama_fp8_all_params(flash_llama_fp8, response_snapshot): response = await flash_llama_fp8.generate( "Test request", max_new_tokens=10, repetition_penalty=1.2, return_full_text=True, stop_sequences=["test"], temperature=0.5, top_p=0.9, top_k=10, truncate=5, typical_p=0.9, watermark=True, decoder_input_details=True, seed=0, ) assert response == response_snapshot @pytest.mark.release @pytest.mark.asyncio @pytest.mark.private async def test_flash_llama_fp8_load(flash_llama_fp8, generate_load, response_snapshot): responses = await generate_load( flash_llama_fp8, "Test request", max_new_tokens=10, n=4 ) assert len(responses) == 4 assert responses[0].generated_text == " for the 2019-2020 school year" assert all( [r.generated_text == responses[0].generated_text for r in responses] ), f"Different messages : {[r.generated_text for r in responses]}" assert responses == response_snapshot
text-generation-inference/integration-tests/models/test_flash_llama_fp8.py/0
{ "file_path": "text-generation-inference/integration-tests/models/test_flash_llama_fp8.py", "repo_id": "text-generation-inference", "token_count": 802 }
300
import pytest @pytest.fixture(scope="module") def flash_phi_handle(launcher): with launcher("microsoft/phi-2", num_shard=1) as handle: yield handle @pytest.fixture(scope="module") async def flash_phi(flash_phi_handle): await flash_phi_handle.health(300) return flash_phi_handle.client @pytest.mark.release @pytest.mark.asyncio async def test_flash_phi(flash_phi, response_snapshot): response = await flash_phi.generate( "Test request", max_new_tokens=10, decoder_input_details=True ) assert response.details.generated_tokens == 10 assert response.generated_text == ': {request}")\n response = self' assert response == response_snapshot @pytest.mark.release @pytest.mark.asyncio async def test_flash_phi_all_params(flash_phi, response_snapshot): response = await flash_phi.generate( "Test request", max_new_tokens=10, repetition_penalty=1.2, return_full_text=True, stop_sequences=["network"], temperature=0.5, top_p=0.9, top_k=10, truncate=5, typical_p=0.9, watermark=True, decoder_input_details=True, seed=0, ) assert response.details.generated_tokens == 6 assert response.generated_text == "Test request to send data over a network" assert response == response_snapshot @pytest.mark.release @pytest.mark.asyncio async def test_flash_phi_load(flash_phi, generate_load, response_snapshot): responses = await generate_load(flash_phi, "Test request", max_new_tokens=10, n=4) assert len(responses) == 4 assert all( [r.generated_text == responses[0].generated_text for r in responses] ), f"{[r.generated_text for r in responses]}" assert responses[0].generated_text == ': {request}")\n response = self' assert responses == response_snapshot
text-generation-inference/integration-tests/models/test_flash_phi.py/0
{ "file_path": "text-generation-inference/integration-tests/models/test_flash_phi.py", "repo_id": "text-generation-inference", "token_count": 749 }
301
import pytest @pytest.fixture(scope="module") def flash_llava_next_handle(launcher): with launcher( "llava-hf/llava-v1.6-mistral-7b-hf", num_shard=4, max_input_length=4000, max_total_tokens=4096, ) as handle: yield handle @pytest.fixture(scope="module") async def flash_llava_next(flash_llava_next_handle): await flash_llava_next_handle.health(300) return flash_llava_next_handle.client @pytest.mark.release @pytest.mark.asyncio @pytest.mark.private async def test_flash_llava_next_simple(flash_llava_next, response_snapshot, chicken): response = await flash_llava_next.generate( f"User:![]({chicken})Can you tell me a very short story based on the image?", max_new_tokens=10, ) assert ( response.generated_text == "\n\nOnce upon a time, there was a" ), f"{repr(response.generated_text)}" assert response.details.generated_tokens == 10 assert response == response_snapshot @pytest.mark.release @pytest.mark.asyncio @pytest.mark.private async def test_flash_llava_next_all_params(flash_llava_next, response_snapshot): response = await flash_llava_next.generate( "Test request", max_new_tokens=10, repetition_penalty=1.2, return_full_text=True, stop_sequences=["test"], temperature=0.5, top_p=0.9, top_k=10, truncate=5, typical_p=0.9, watermark=True, decoder_input_details=True, seed=0, ) assert response.details.generated_tokens == 6 assert response == response_snapshot @pytest.mark.release @pytest.mark.asyncio @pytest.mark.private async def test_flash_llava_next_load( flash_llava_next, generate_load, response_snapshot, chicken ): responses = await generate_load( flash_llava_next, f"User:![]({chicken})Can you tell me a very short story based on the image?", max_new_tokens=10, n=4, ) generated_texts = [r.generated_text for r in responses] assert generated_texts[0] == "\n\nOnce upon a time, there was a" assert len(generated_texts) == 4 assert all([r.generated_text == generated_texts[0] for r in responses]) assert responses == response_snapshot
text-generation-inference/integration-tests/models/test_llava_next.py/0
{ "file_path": "text-generation-inference/integration-tests/models/test_llava_next.py", "repo_id": "text-generation-inference", "token_count": 961 }
302
[project]
name = "text-generation-integration-tests"
version = "2.0.1"
description = "Text Generation Inference integration tests"
authors = ["Nicolas Patry <nicolas@huggingface.co>"]
requires-python = ">=3.10,<3.13"
dependencies = [
    "pydantic>2,<3",
    "syrupy>=4.8.0",
    "text-generation>=0.6.0",
    "pytest>=8.3.0",
    "pytest-asyncio>=0.23.1",
    "docker>=7",
    "numpy>=2.0",
    "openai>=1.65",
    "huggingface_hub>=0.29",
    "pillow>=11.1.0",
]

[tool.isort]
profile = "black"
text-generation-inference/integration-tests/pyproject.toml/0
{ "file_path": "text-generation-inference/integration-tests/pyproject.toml", "repo_id": "text-generation-inference", "token_count": 245 }
303
import json

import datasets
import tqdm


def main():
    dataset = datasets.load_dataset("Open-Orca/OpenOrca", split="train")

    # Select only the first 2k conversations that start with a human.
    # `max_conversations` avoids shadowing the built-in `max`.
    max_conversations = min(2000, len(dataset))
    conversations = []
    for item in tqdm.tqdm(dataset, total=max_conversations):
        conversation = {
            "conversations": [
                {"from": "human", "value": item["question"]},
            ],
            "id": item["id"],
        }
        conversations.append(conversation)
        if len(conversations) >= max_conversations:
            break

    with open("./small.json", "w") as f:
        json.dump(conversations, f, indent=4)


if __name__ == "__main__":
    main()
text-generation-inference/load_tests/orca.py/0
{ "file_path": "text-generation-inference/load_tests/orca.py", "repo_id": "text-generation-inference", "token_count": 313 }
304
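For reference, each record the script above appends to `small.json` has the following shape; the values here are illustrative placeholders rather than real OpenOrca rows:

```python
example_record = {
    "conversations": [
        {"from": "human", "value": "An example OpenOrca question"},
    ],
    "id": "example.0",
}
```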
// Adapted from turboderp exllama: https://github.com/turboderp/exllama

#ifndef _column_remap_cuh
#define _column_remap_cuh

#include <cuda_runtime.h>
#include <cuda_fp16.h>
#include <cstdint>

void column_remap_cuda
(
    const half* x,
    half* x_new,
    const int x_height,
    const int x_width,
    const uint32_t* x_map
);

#endif
text-generation-inference/server/exllama_kernels/exllama_kernels/cuda_func/column_remap.cuh/0
{ "file_path": "text-generation-inference/server/exllama_kernels/exllama_kernels/cuda_func/column_remap.cuh", "repo_id": "text-generation-inference", "token_count": 153 }
305
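Semantically, `column_remap_cuda` gathers the columns of `x` according to `x_map`, i.e. `x_new[row][col] = x[row][x_map[col]]`. A PyTorch sketch of the same operation, shapes and permutation chosen arbitrarily (this illustrates the semantics, not the kernel itself):

```python
import torch

x = torch.randn(4, 6, dtype=torch.float16)
x_map = torch.tensor([5, 4, 3, 2, 1, 0])  # hypothetical column permutation

# Equivalent of column_remap_cuda: x_new[:, col] = x[:, x_map[col]]
x_new = x[:, x_map]
print(x_new.shape)  # torch.Size([4, 6])
```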
#ifndef _q_gemm_cuh
#define _q_gemm_cuh

#include <cuda_runtime.h>
#include <cuda_fp16.h>
#include <cstdint>
#include <cstdio>
#include <ATen/cuda/CUDAContext.h>

#include "q_matrix.cuh"

void gemm_half_q_half_cuda
(
    cublasHandle_t cublas_handle,
    const half* a,
    QMatrix* b,
    half* c,
    int size_m,
    int size_n,
    int size_k,
    bool clear = false,
    half* reconstruct = NULL,
    bool force_cuda = false,
    const half* r_weights = NULL,
    const int r_weights_stride = 0,
    bool mul_r_weights = false
);

void clear_tensor_cuda
(
    half* c,
    int size_m,
    int size_n
);

#endif
text-generation-inference/server/exllamav2_kernels/exllamav2_kernels/cuda/q_gemm.cuh/0
{ "file_path": "text-generation-inference/server/exllamav2_kernels/exllamav2_kernels/cuda/q_gemm.cuh", "repo_id": "text-generation-inference", "token_count": 294 }
306
[project] name = "text-generation-server" version = "2.0.5-dev0" description = "Text Generation Inference Python gRPC Server" readme = "README.md" requires-python = ">=3.9" authors = [ {name = "Olivier Dehaene", email = "olivier@huggingface.co"}, {name = "Nicolas Patry", email = "nicolas@huggingface.co"}, ] dependencies = [ # Remove explicit click dependency once typer/click are compatible again. "click<8.2.0", "einops>=0.8.0", "grpc-interceptor>=0.15.4", "grpcio>=1.67.0", "grpcio-reflection>=1.67.0", "grpcio-status>=1.67.0", "kernels>=0.2.1", "hf-transfer>=0.1.8", "loguru>=0.7.3", "numpy>=1.26,<3", "opentelemetry-api>=1.27.0", "opentelemetry-exporter-otlp>=1.27.0", "opentelemetry-instrumentation-grpc>=0.50b0", "pillow>=11.1.0", "prometheus-client>=0.21.0", "protobuf>=5.28.3", "py-cpuinfo>=9.0.0", "rich>=13.8.1", "safetensors>=0.4.5", "scipy>=1.13.1", "sentencepiece>=0.2.0", "tokenizers>=0.20.3", "typer>=0.15.1", "transformers>=4.51.0", "huggingface-hub>=0.30.1", "hf-xet>=1.0.0", ] [[tool.uv.index]] name = "pytorch-cu128" url = "https://download.pytorch.org/whl/cu128" explicit = true [tool.uv.sources] torch = [ { index = "pytorch-cu128", marker = "sys_platform == 'linux' or sys_platform == 'win32'" }, ] torchvision = [ { index = "pytorch-cu128", marker = "sys_platform == 'linux' or sys_platform == 'win32'" }, ] [build-system] requires = ["kernels>=0.1.7", "setuptools"] build-backend = "setuptools.build_meta" [tool.kernels.dependencies] "kernels-community/paged-attention" = ">=0.0.2" "kernels-community/moe" = ">=0.1.1" "kernels-community/punica-sgmv" = ">=0.0.1" "kernels-community/quantization" = ">=0.0.3" "kernels-community/quantization-eetq" = ">=0.0.1" "kernels-community/rotary" = ">=0.0.1" [project.scripts] text-generation-server = "text_generation_server.cli:app" [project.optional-dependencies] accelerate = [ "accelerate>=1.2.1,<2", ] bnb = [ "bitsandbytes>=0.45.0", ] compressed-tensors = [ "compressed-tensors>=0.9.0", ] peft = [ "peft>=0.14.0", ] outlines = [ "outlines>=0.1.13,<1.0", ] dev = [ "grpcio-tools>=1.51.1,<2.0", "pytest>=7.3.0,<8" ] quantize = [ "texttable>=1.6.7,<2", "datasets>=2.21,<3", ] gen = [ "grpcio-tools>=1.69.0", "mypy-protobuf>=3.6.0", ] torch = [ "torch==2.7.0", "torchvision==0.22.0", ] [tool.pytest.ini_options] markers = ["private: marks tests as requiring an admin hf token (deselect with '-m \"not private\"')"] [tool.isort] profile = "black" [tool.uv] package = true [tool.setuptools.packages.find] include = ["text_generation_server*"]
text-generation-inference/server/pyproject.toml/0
{ "file_path": "text-generation-inference/server/pyproject.toml", "repo_id": "text-generation-inference", "token_count": 1325 }
307
import torch from text_generation_server.utils.tokens import ( StopSequenceCriteria, StoppingCriteria, FinishReason, batch_top_tokens, ) def test_stop_sequence_criteria(): criteria = StopSequenceCriteria("/test;") assert not criteria("/") assert not criteria("/test") assert criteria("/test;") assert not criteria("/test; ") def test_stop_sequence_criteria_escape(): criteria = StopSequenceCriteria("<|stop|>") assert not criteria("<") assert not criteria("<|stop") assert criteria("<|stop|>") assert not criteria("<|stop|> ") def test_stopping_criteria(): criteria = StoppingCriteria(0, [StopSequenceCriteria("/test;")], max_new_tokens=5) assert criteria(65827, "/test") == (False, None) assert criteria(30, ";") == (True, FinishReason.FINISH_REASON_STOP_SEQUENCE) def test_stopping_criteria_eos(): criteria = StoppingCriteria(0, [StopSequenceCriteria("/test;")], max_new_tokens=5) assert criteria(1, "") == (False, None) assert criteria(0, "") == (True, FinishReason.FINISH_REASON_EOS_TOKEN) def test_stopping_criteria_max(): criteria = StoppingCriteria(0, [StopSequenceCriteria("/test;")], max_new_tokens=5) assert criteria(1, "") == (False, None) assert criteria(1, "") == (False, None) assert criteria(1, "") == (False, None) assert criteria(1, "") == (False, None) assert criteria(1, "") == (True, FinishReason.FINISH_REASON_LENGTH) def test_batch_top_tokens(): top_n_tokens = [0, 2, 3, 4, 5] top_n_tokens_tensor = torch.tensor(top_n_tokens) inp_logprobs = torch.tensor([[-1.0, -3.0, -4.0, -2.0, -3.0]] * 5) accepted_ids = torch.ones_like(top_n_tokens_tensor) topn_tok_ids, topn_tok_logprobs = batch_top_tokens( top_n_tokens, top_n_tokens_tensor, inp_logprobs, accepted_ids ) assert topn_tok_ids[0] == [[]] assert topn_tok_ids[1] == [[0, 3]] assert topn_tok_ids[2] == [[0, 3, 1, 4]] assert topn_tok_ids[3] == [[0, 3, 1, 4]] assert topn_tok_ids[4] == [[0, 3, 1, 4, 2]] assert topn_tok_logprobs[0] == [[]] assert topn_tok_logprobs[1] == [[-1, -2]] assert topn_tok_logprobs[2] == [[-1, -2, -3, -3]] assert topn_tok_logprobs[3] == [[-1, -2, -3, -3]] assert topn_tok_logprobs[4] == [[-1, -2, -3, -3, -4]] # Now let's make second member of the batch be speculated inp_logprobs = torch.tensor([[-1.0, -3.0, -4.0, -2.0, -3.0]] * 5 * 2) accepted_ids[1] = 2 topn_tok_ids, topn_tok_logprobs = batch_top_tokens( top_n_tokens, top_n_tokens_tensor, inp_logprobs, accepted_ids ) assert topn_tok_ids[0] == [[]] assert topn_tok_ids[1] == [[0, 3], [0, 3]] assert topn_tok_ids[2] == [[0, 3, 1, 4]] assert topn_tok_ids[3] == [[0, 3, 1, 4]] assert topn_tok_ids[4] == [[0, 3, 1, 4, 2]] assert topn_tok_logprobs[0] == [[]] assert topn_tok_logprobs[1] == [[-1, -2], [-1, -2]] assert topn_tok_logprobs[2] == [[-1, -2, -3, -3]] assert topn_tok_logprobs[3] == [[-1, -2, -3, -3]] assert topn_tok_logprobs[4] == [[-1, -2, -3, -3, -4]]
text-generation-inference/server/tests/utils/test_tokens.py/0
{ "file_path": "text-generation-inference/server/tests/utils/test_tokens.py", "repo_id": "text-generation-inference", "token_count": 1427 }
308
from typing import Optional from contextvars import ContextVar from contextlib import contextmanager import flashinfer import torch prefill_state: ContextVar[flashinfer.BatchPrefillWithRaggedKVCacheWrapper] = ContextVar( "prefill_state" ) prefill_with_paged_kv_state: ContextVar[ flashinfer.BatchPrefillWithPagedKVCacheWrapper ] = ContextVar("prefill_with_paged_kv_state") decode_state: ContextVar[flashinfer.BatchDecodeWithPagedKVCacheWrapper] = ContextVar( "decode_state" ) workspace: Optional[torch.Tensor] = None def get_workspace(device): """Get shared flashinfer workspace.""" global workspace if workspace is None: workspace = torch.empty(128 * 1024 * 1024, dtype=torch.uint8, device=device) return workspace def create_prefill_with_paged_kv_state( *, device: torch.device, ): """Create a prefill state that uses the KV cache.""" workspace_buffer = get_workspace(device) return flashinfer.BatchPrefillWithPagedKVCacheWrapper( workspace_buffer, kv_layout="NHD", use_cuda_graph=False ) @contextmanager def use_prefill_with_paged_kv_state( *, state: flashinfer.BatchPrefillWithPagedKVCacheWrapper, block_tables: torch.Tensor, cu_seqlens: torch.Tensor, custom_mask: Optional[torch.Tensor], input_lengths: torch.Tensor, num_heads: int, num_kv_heads: int, head_size: int, page_size: int, kv_dtype: torch.dtype, q_dtype: torch.dtype, ): """ Context manager to set the active flashinfer prefill state to the given `state` and parameters. This state will be used by all calls to the `attention` function while the context manager is active. """ indptr = torch.zeros( input_lengths.shape[0] + 1, device=input_lengths.device, dtype=torch.int32 ) # Round up to page size and then calculate the cumulative sum to get # the indices into the block table. torch.add(input_lengths, page_size - 1, out=indptr[1:]) indptr[1:].div_(page_size, rounding_mode="floor") indptr[1:].cumsum_(-1) # Get the lengths of the last page in a block. 
if page_size == 1: last_page_len = torch.ones( input_lengths.shape[0], dtype=torch.int32, device=input_lengths.device ) else: last_page_len = torch.empty( input_lengths.shape[0], dtype=torch.int32, device=input_lengths.device ) torch.sub(input_lengths, 1, out=last_page_len) last_page_len.remainder_(page_size) last_page_len += 1 token = prefill_with_paged_kv_state.set(state) try: state.plan( qo_indptr=cu_seqlens, paged_kv_indptr=indptr, paged_kv_indices=block_tables, paged_kv_last_page_len=last_page_len, custom_mask=custom_mask, num_qo_heads=num_heads, num_kv_heads=num_kv_heads, head_dim=head_size, kv_data_type=kv_dtype, q_data_type=q_dtype, page_size=page_size, ) yield finally: if token is not None: prefill_with_paged_kv_state.reset(token) def create_prefill_state( *, device: torch.device, ): """Create a prefill state.""" workspace_buffer = get_workspace(device) return flashinfer.BatchPrefillWithRaggedKVCacheWrapper( workspace_buffer, kv_layout="NHD", use_cuda_graph=False ) def create_decode_state( *, device: torch.device, num_heads: int, num_kv_heads: int, ): """Create a decode state.""" workspace_buffer = get_workspace(device) num_groups = num_heads // num_kv_heads return flashinfer.BatchDecodeWithPagedKVCacheWrapper( workspace_buffer, kv_layout="NHD", use_cuda_graph=False, # Taken from https://github.com/flashinfer-ai/flashinfer/blob/33ef95700981ba70f4cab63b8931e562bc795b21/python/flashinfer/decode.py#L57-L60 use_tensor_cores=num_groups not in [1, 2, 4, 8], ) def create_decode_state_cuda_graphs( *, device: torch.device, block_tables: torch.Tensor, block_tables_ptr: torch.Tensor, last_page_len: torch.Tensor, num_heads: int, num_kv_heads: int, ): """ Create a decode state for use with CUDA Graphs. `block_tables`, `block_tables_ptr`, and `last_page_len` are used in CUDA Graphs and are therefore stored as part of the state. """ workspace_buffer = get_workspace(device) num_groups = num_heads // num_kv_heads return flashinfer.BatchDecodeWithPagedKVCacheWrapper( workspace_buffer, kv_layout="NHD", use_cuda_graph=True, paged_kv_indices_buffer=block_tables, paged_kv_indptr_buffer=block_tables_ptr, paged_kv_last_page_len_buffer=last_page_len, # Taken from https://github.com/flashinfer-ai/flashinfer/blob/33ef95700981ba70f4cab63b8931e562bc795b21/python/flashinfer/decode.py#L57-L60 use_tensor_cores=num_groups not in [1, 2, 4, 8], ) @contextmanager def use_decode_state( *, state: flashinfer.BatchDecodeWithPagedKVCacheWrapper, input_lengths: torch.Tensor, block_tables: torch.Tensor, num_heads: int, num_kv_heads: int, head_size: int, page_size: int, kv_cache_dtype: torch.dtype, q_dtype: torch.dtype, ): """ Context manager to set the active flashinfer decoding state to the given `state` and parameters. This state will be used by all calls to the `paged_attention` function while the context manager is active. """ indptr = torch.zeros( input_lengths.shape[0] + 1, device=input_lengths.device, dtype=torch.int32 ) # Round up to page size and then calculate the cumulative sum to get # the indices into the block table. torch.add(input_lengths, page_size - 1, out=indptr[1:]) indptr[1:].div_(page_size, rounding_mode="floor") indptr[1:].cumsum_(-1) # Get the lengths of the last page in a block. 
last_page_len = torch.empty( input_lengths.shape[0], dtype=torch.int32, device=input_lengths.device ) torch.sub(input_lengths, 1, out=last_page_len) last_page_len.remainder_(page_size) last_page_len += 1 token = decode_state.set(state) try: state.plan( indptr=indptr, indices=block_tables, last_page_len=last_page_len, num_qo_heads=num_heads, num_kv_heads=num_kv_heads, head_dim=head_size, page_size=page_size, data_type=kv_cache_dtype, q_data_type=q_dtype, ) yield finally: if token is not None: decode_state.reset(token)
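# Illustrative sketch (comment only): the paging arithmetic shared by both
# context managers above, traced on a hypothetical batch. Only `torch` is
# assumed; the values are not part of this module.
#
#   input_lengths = torch.tensor([5, 17], dtype=torch.int32)
#   page_size = 16
#   indptr = torch.zeros(3, dtype=torch.int32)
#   torch.add(input_lengths, page_size - 1, out=indptr[1:])   # [_, 20, 32]
#   indptr[1:].div_(page_size, rounding_mode="floor")         # [_, 1, 2] pages
#   indptr[1:].cumsum_(-1)                                    # [0, 1, 3]
#   last_page_len = (input_lengths - 1) % page_size + 1       # [5, 1]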
text-generation-inference/server/text_generation_server/layers/attention/flashinfer.py/0
{ "file_path": "text-generation-inference/server/text_generation_server/layers/attention/flashinfer.py", "repo_id": "text-generation-inference", "token_count": 2990 }
309
from dataclasses import dataclass import torch from text_generation_server.utils.kernels import load_kernel from text_generation_server.utils.weights import UnquantizedWeight quantization_eetq = load_kernel( module="quantization_eetq", repo_id="kernels-community/quantization-eetq" ) @dataclass class EETQWeight(UnquantizedWeight): weight: torch.Tensor def get_linear(self, bias: torch.Tensor): try: from text_generation_server.layers.eetq import EETQLinear return EETQLinear(self.weight, bias) except ImportError: raise ImportError( "Please install EETQ from https://github.com/NetEase-FuXi/EETQ" ) class EETQLinear(torch.nn.Module): def __init__( self, weight, bias, ) -> None: super().__init__() device = weight.device if weight.dtype != torch.float16: weight = weight.to(dtype=torch.float16) weight = torch.t(weight).contiguous().cpu() weight, scale = quantization_eetq.quant_weights(weight, torch.int8, False) self.weight = weight.cuda(device) self.scale = scale.cuda(device) self.bias = bias.cuda(device) if bias is not None else None def forward(self, input: torch.Tensor) -> torch.Tensor: output = quantization_eetq.w8_a16_gemm(input, self.weight, self.scale) output = output + self.bias if self.bias is not None else output return output
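# Illustrative usage sketch (comment only; assumes a CUDA device and that the
# `quantization-eetq` kernel loaded above is available):
#
#   linear = torch.nn.Linear(4096, 4096, dtype=torch.float16, device="cuda")
#   qlinear = EETQLinear(linear.weight.detach(), linear.bias.detach())
#   y = qlinear(torch.randn(2, 4096, dtype=torch.float16, device="cuda"))
#   # y has shape (2, 4096); the weight is stored as int8 with fp16 scales.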
text-generation-inference/server/text_generation_server/layers/eetq.py/0
{ "file_path": "text-generation-inference/server/text_generation_server/layers/eetq.py", "repo_id": "text-generation-inference", "token_count": 630 }
310
from dataclasses import dataclass from typing import List, Optional, Union import numpy import torch import torch.nn as nn from loguru import logger from text_generation_server.layers.marlin.util import ( _check_marlin_kernels, marlin_zero_points, permute_scales, unpack_cols, ) from text_generation_server.utils.import_utils import SYSTEM from text_generation_server.utils.kernels import load_kernel from text_generation_server.utils.log import log_once from text_generation_server.utils.weights import Weight, Weights, WeightsLoader if SYSTEM == "cuda": quantization = load_kernel( module="quantization", repo_id="kernels-community/quantization" ) else: quantization = None try: major, _minor = torch.cuda.get_device_capability() has_sm_8_0 = major >= 8 except Exception: has_sm_8_0 = False GPTQ_MARLIN_BITS = [4, 8] GPTQ_MARLIN_GROUP_SIZES = [-1, 32, 64, 128] MARLIN_TILE_SIZE = 16 def can_use_gptq_marlin( *, bits: int, groupsize: int, quant_method: str, quantize: str, sym: bool ) -> bool: return ( SYSTEM == "cuda" and quantization is not None and has_sm_8_0 and quantize in {"awq", "gptq"} and quant_method in {"awq", "gptq"} and bits in GPTQ_MARLIN_BITS and groupsize in GPTQ_MARLIN_GROUP_SIZES # We only support asymmetric quantization for AWQ. and (sym or quant_method == "awq") ) class GPTQMarlinWeightsLoader(WeightsLoader): """ Loader for using GPTQ- and AWQ-quantized weights with Marlin kernels. """ def __init__( self, *, bits: int, desc_act: bool, groupsize: int, quant_method: str, quantize: str, sym: bool, ): self.bits = bits self.desc_act = desc_act self.groupsize = groupsize self.quant_method = quant_method self.quantize = quantize self.sym = sym def get_weights(self, weights: Weights, prefix: str): log_once(logger.info, "Using GPTQ-Marlin kernels") try: qweight = weights.get_tensor(f"{prefix}.qweight") except RuntimeError: raise RuntimeError( f"Cannot load `{self.quantize}` weight for GPTQ -> Marlin repacking, make sure the model is already quantized" ) if not self.sym: qzeros = weights.get_tensor(f"{prefix}.qzeros") else: qzeros = None if self.quant_method == "awq": g_idx = None else: g_idx = weights.get_tensor(f"{prefix}.g_idx") scales = weights.get_tensor(f"{prefix}.scales") return repack_gptq_for_marlin( qweight=qweight, scales=scales, qzeros=qzeros, g_idx=g_idx, bits=self.bits, desc_act=self.desc_act, groupsize=self.groupsize, quant_method=self.quant_method, sym=self.sym, sharded_infeatures=False, ) def get_weights_col_packed( self, weights: Weights, prefix: str, block_sizes: Union[int, List[int]], ): try: qweight = weights.get_packed_sharded( f"{prefix}.qweight", dim=1, block_sizes=block_sizes ) except RuntimeError: raise RuntimeError( f"Cannot load `{self.quantize}` weight, make sure the model is already quantized." 
) scales = weights.get_packed_sharded( f"{prefix}.scales", dim=1, block_sizes=block_sizes ) scales = scales.to(dtype=weights.dtype) if not self.sym: qzeros = weights.get_packed_sharded( f"{prefix}.qzeros", dim=1, block_sizes=block_sizes ) else: qzeros = None if self.quant_method == "awq": g_idx = None else: g_idx = weights.get_tensor(f"{prefix}.g_idx") return repack_gptq_for_marlin( qweight=qweight, scales=scales, qzeros=qzeros, g_idx=g_idx, bits=self.bits, desc_act=self.desc_act, groupsize=self.groupsize, quant_method=self.quant_method, sym=self.sym, sharded_infeatures=False, ) def get_multi_weights_col(self, weights: Weights, prefixes: List[str], dim: int): try: qweight = torch.cat( [weights.get_sharded(f"{p}.qweight", dim=1) for p in prefixes], dim=1 ) except RuntimeError: raise RuntimeError( f"Cannot load `{self.quantize}` weight, make sure the model is already quantized" ) scales = torch.cat( [weights.get_sharded(f"{p}.scales", dim=1) for p in prefixes], dim=1 ) if not self.sym: qzeros = torch.cat( [weights.get_sharded(f"{p}.qzeros", dim=1) for p in prefixes], dim=1 ) else: qzeros = None if self.quant_method == "awq": g_idx = None else: w = [weights.get_tensor(f"{p}.g_idx") for p in prefixes] for w2 in w[1:]: torch.testing.assert_close(w2, w[0]) g_idx = w[0] return repack_gptq_for_marlin( qweight=qweight, scales=scales, qzeros=qzeros, g_idx=g_idx, bits=self.bits, desc_act=self.desc_act, groupsize=self.groupsize, quant_method=self.quant_method, sym=self.sym, sharded_infeatures=False, ) def get_weights_row(self, weights: Weights, prefix: str): log_once(logger.info, "Using GPTQ-Marlin kernels") try: qweight = weights.get_sharded(f"{prefix}.qweight", dim=0) except RuntimeError: raise RuntimeError( f"Cannot load `{self.quantize}` weight for GPTQ -> Marlin repacking, make sure the model is already quantized" ) if not self.sym: if self.desc_act or self.groupsize == -1: qzeros = weights.get_tensor(f"{prefix}.qzeros") else: qzeros = weights.get_sharded(f"{prefix}.qzeros", dim=0) else: qzeros = None if self.quant_method == "awq": g_idx = None else: g_idx = weights.get_sharded(f"{prefix}.g_idx", dim=0) if self.desc_act or self.groupsize == -1: scales = weights.get_tensor(f"{prefix}.scales") else: scales = weights.get_sharded(f"{prefix}.scales", dim=0) sharded_in_features = weights.process_group.size() > 1 return repack_gptq_for_marlin( qweight=qweight, scales=scales, qzeros=qzeros, g_idx=g_idx, bits=self.bits, desc_act=self.desc_act, groupsize=self.groupsize, quant_method=self.quant_method, sym=self.sym, sharded_infeatures=sharded_in_features, ) def _get_gptq_params(self, weights: Weights): if weights.has_tensor("gptq_bits") and weights.has_tensor("gptq_groupsize"): self.bits = weights.get_tensor("gptq_bits").item() self.groupsize = weights.get_tensor("gptq_groupsize").item() self.desc_act = False # `server quantize` used asymmetric quantization unconditionally # before the `gptq_sym` setting tensor was added. self.sym = ( weights.get_tensor("gptq_sym").item() if weights.has_tensor("gptq_sym") else False ) self.quant_method = "gptq" @dataclass class GPTQMarlinWeight(Weight): """ Repacked GPTQ Marlin weights. 
""" qweight: torch.Tensor qzeros: torch.Tensor scales: torch.Tensor g_idx: torch.Tensor perm: torch.Tensor bits: int is_full_k: bool def __post_init__(self): assert self.qweight.dtype == torch.int32 assert self.scales.dtype in (torch.float16, torch.bfloat16) assert self.g_idx.dtype == torch.int32 assert self.perm.dtype == torch.int32 def get_linear(self, bias: torch.Tensor): return GPTQMarlinLinear( weight=self, bias=bias, ) def repack_gptq_for_marlin( *, qweight: torch.Tensor, qzeros: Optional[torch.Tensor], scales: torch.Tensor, g_idx: Optional[torch.Tensor], bits: int, desc_act: bool, groupsize: int, quant_method: str, sym: bool, sharded_infeatures: bool, ) -> GPTQMarlinWeight: """Convert GPTQ weights to a layout that's compatible with GPTQ-Marlin kernels.""" _check_marlin_kernels() assert quantization is not None if bits not in GPTQ_MARLIN_BITS: supported_bits = ", ".join(str(b) for b in GPTQ_MARLIN_BITS) raise RuntimeError( f"Repacking {bits}-bit GPTQ weights as Marlin is not supported, must be one of: {supported_bits}" ) if groupsize not in GPTQ_MARLIN_GROUP_SIZES: supported_sizes = ", ".join(str(b) for b in GPTQ_MARLIN_GROUP_SIZES) raise RuntimeError( f"Repacking GPTQ weights with group size {groupsize} as Marlin is not supported, must be one of: {supported_sizes}" ) if not (sym or quant_method == "awq" or quant_method == "compressed-tensors"): raise RuntimeError( "Repacking GPTQ weights with asymmetric quantization as Marlin is not supported." ) log_once(logger.info, f"Converting {quant_method} model to Marlin packing format.") weights_per_int = 32 // bits in_features = qweight.shape[0] out_features = qweight.shape[1] # AWQ uses column packing, GPTQ uses row packing if quant_method == "awq": out_features *= weights_per_int else: in_features *= weights_per_int if in_features % groupsize != 0: raise ValueError( f"Number of input features ({in_features}) not divisible by group size ({groupsize})" ) if g_idx is not None and desc_act and groupsize != -1: perm = torch.argsort(g_idx).to(torch.int) g_idx = g_idx[perm] else: perm = torch.empty(0, dtype=torch.int, device=qweight.device) g_idx = torch.empty(0, dtype=torch.int, device=qweight.device) if quant_method == "awq": repacked = quantization.awq_marlin_repack( qweight, in_features, out_features, bits ) if qzeros is not None: qzeros = awq_to_marlin_zero_points( qzeros, in_features // groupsize, out_features, bits, ) else: repacked = quantization.gptq_marlin_repack( qweight, perm, in_features, out_features, bits ) if qzeros is None: qzeros = torch.empty(0, dtype=torch.int, device=qweight.device) scales = permute_scales(scales) is_full_k = not (desc_act and groupsize != -1 and sharded_infeatures) return GPTQMarlinWeight( qweight=repacked, qzeros=qzeros, scales=scales, g_idx=g_idx, perm=perm, bits=bits, is_full_k=is_full_k, ) class GPTQMarlinLinear(nn.Module): """ Linear layer for GPTQ weights that were converted for the GPTQ-Marlin kernels. 
""" def __init__( self, *, weight: GPTQMarlinWeight, bias: Optional[torch.Tensor], ): super().__init__() _check_marlin_kernels() assert quantization is not None in_features = weight.qweight.shape[0] * MARLIN_TILE_SIZE out_features = weight.scales.shape[1] _check_valid_shape(in_features=in_features, out_features=out_features) if weight.bits not in (4, 8): raise ValueError("GPTQMarlinLinear only supports 4 and 8-bit quantization") if weight.qzeros.numel() > 0: if weight.bits == 4: self.quant_type = quantization.scalar_types.uint4 else: self.quant_type = quantization.scalar_types.uint8 else: if weight.bits == 4: self.quant_type = quantization.scalar_types.uint4b8 else: self.quant_type = quantization.scalar_types.uint8b128 self.is_full_k = weight.is_full_k self.qweight = weight.qweight self.qzeros = weight.qzeros self.scales = weight.scales self.g_idx = weight.g_idx self.perm = weight.perm if bias is not None: self.bias = bias else: self.bias = None self.workspace = torch.zeros( out_features // 64 * 16, dtype=torch.int, device=weight.qweight.device ) def forward(self, A: torch.Tensor) -> torch.Tensor: assert quantization is not None A_flat = A.view(-1, A.shape[-1]) C = quantization.gptq_marlin_gemm( A_flat, self.qweight, self.scales, self.qzeros, self.g_idx, self.perm, self.workspace, self.quant_type, A_flat.shape[0], self.scales.shape[1], A_flat.shape[1], self.is_full_k, self.qzeros.numel() > 0, True, ) C = C.reshape(A.shape[:-1] + (self.scales.shape[1],)) if self.bias is not None: C += self.bias return C def awq_to_marlin_zero_points( q_zp_packed: torch.Tensor, size_k: int, size_n: int, num_bits: int ) -> torch.Tensor: # AWQ zero-points are quantized and packed on the column dim. # In addition, the values are permuted based on dequantizer. # Here we undo both of these, and then apply marlin permutation # and pack it back. q_zp = unpack_cols(q_zp_packed, num_bits, size_k, size_n) # Undo interleaving (use argsort(..) to get inverse perm) if num_bits == 4: undo_interleave = numpy.argsort(numpy.array([0, 2, 4, 6, 1, 3, 5, 7])) elif num_bits == 8: undo_interleave = numpy.argsort(numpy.array([0, 2, 1, 3])) else: raise Exception("num_bits must be 4 or 8, got {}".format(num_bits)) q_zp = q_zp.reshape((-1, len(undo_interleave)))[:, undo_interleave].ravel() q_zp = q_zp.reshape((-1, size_n)).contiguous() marlin_zp = marlin_zero_points(q_zp, size_k, size_n, num_bits) return marlin_zp def _check_valid_shape(in_features: int, out_features: int): if (in_features % 128 != 0 or out_features % 64 != 0) and ( in_features % 64 != 0 or out_features % 128 != 0 ): raise ValueError( f"The GPTQ Marlin kernel does not have a valid thread configuration for weight matrix with shape ({out_features}, {in_features})." " The shape elements must be divisible by (128, 64) or (64, 128)." )
text-generation-inference/server/text_generation_server/layers/marlin/gptq.py/0
{ "file_path": "text-generation-inference/server/text_generation_server/layers/marlin/gptq.py", "repo_id": "text-generation-inference", "token_count": 7460 }
311
# This code was adapted from https://github.com/lucidrains/flamingo-pytorch licensed under the MIT License. # # MIT License # # Copyright (c) 2020 The Google AI Language Team Authors, The HuggingFace Inc. team and github/lonePatient # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in all # copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. """ Generic interface to various configurations of the Perceiver Resampler, that simply takes in a series of (potentially time-indexed) contextual embeddings, and "resamples" (compresses) them down to a pre-specified number of latents! Note that the Perceiver in general resamples based solely off the *long-range* context; there's a nice opportunity here to prime the Perceiver Resampler with say a single layer's worth of language embeddings (the target domain), and use that to softly "retrieve & compress" what we need --> this would be a novel contribution we should explore. References: - DeepMind's Flamingo: https://www.deepmind.com/blog/tackling-multiple-tasks-with-a-single-visual-language-model - Code borrowed w/ love from: https://github.com/lucidrains/flamingo-pytorch """ from typing import Optional, Tuple import torch import torch.nn as nn from text_generation_server.layers import ( TensorParallelColumnLinear, TensorParallelRowLinear, ) EPS = 1e-5 class IdeficsPerceiverResampler(nn.Module): def __init__( self, prefix, config, embed_dim: int, depth: int, n_heads: int, head_dim: int, n_latents: int, weights, ) -> None: """ Instantiates a Perceiver Resampler that operates over a sequence of embeddings (say from a ResNet or ViT or MAE) of a given dimension, performs `depth` blocks of cross-attention with a fixed `n_latents` inputs, then returns a Tensor of shape [bsz, n_latents, embed_dim]. :param embed_dim: Dimensionality of embeddings being fed to the Perceiver Resampler (also dimensionality of latent embeddings *returned* by the Perceiver Resampler. Could be e.g., VIT embed_dim, ResNet pool dim, and so on. Args: config (`IdeficsConfig`): config object embed_dim (`int`): The size of each embedding vector depth (`int`): Depth of the Perceiver Resampler (Transformer w/ cross attention). Should be shallow (< 3). n_heads (`int`): Number of heads in each Transformer block (for multi-headed self-attention). head_dim (`int`): Dimensionality of each head projection in the Transformer block. n_latents (`int`): Number of latent embeddings to resample ("compress") the input sequence to (usually < 128). 
""" super().__init__() self.embed_dim, self.n_heads, self.head_dim, self.n_latents = ( embed_dim, n_heads, head_dim, n_latents, ) self.qk_layer_norms = config.perceiver_config.qk_layer_norms_perceiver # Create Latents for Perceiver self.latents = nn.Parameter(weights.get_tensor(f"{prefix}.latents")) self.intermediate_dim = ( self.embed_dim * 4 if not hasattr(config.vision_config, "embed_dim") else config.vision_config.embed_dim * 4 ) # Create Transformer Blocks self.blocks = nn.ModuleList( [ nn.ModuleList( [ IdeficsPerceiverAttention( prefix=f"{prefix}.blocks.{layer_id}.0", config=config, embed_dim=self.embed_dim, n_heads=self.n_heads, head_dim=self.head_dim, qk_layer_norms=self.qk_layer_norms, weights=weights, ), IdeficsMLP( prefix=f"{prefix}.blocks.{layer_id}.1", intermediate_size=self.intermediate_dim, config=config, weights=weights, ), ] ) for layer_id in range(depth) ] ) self.layer_norm = nn.LayerNorm.load( prefix=f"{prefix}.layer_norm", weights=weights, eps=EPS ) def forward(self, context: torch.Tensor) -> torch.Tensor: """Resample arbitrary length context & *compress* down to self.n_latents latent embeddings""" # einsum.repeat(self.latents, "seq embed -> bsz seq embed", bsz=context.shape[0]) latents = self.latents.repeat(context.shape[0], 1, 1) # Feed through Perceiver Attention blocks... for attn, ff in self.blocks: latents = attn(context, latents) + latents latents = ff(latents) + latents return self.layer_norm(latents) class IdeficsPerceiverAttention(nn.Module): def __init__( self, prefix, config, embed_dim: int, n_heads: int, head_dim: int, qk_layer_norms: bool, weights, ) -> None: """Perceiver Cross-Attention Module --> let long-form inputs be `context`, resampled embeddings be `latents`""" super().__init__() self.embed_dim, self.n_heads, self.head_dim = embed_dim, n_heads, head_dim self.qk_layer_norms = qk_layer_norms # Normalization & Scaling self.context_layer_norm = nn.LayerNorm.load( prefix=f"{prefix}.context_layer_norm", weights=weights, eps=EPS ) self.latents_layer_norm = nn.LayerNorm.load( prefix=f"{prefix}.latents_layer_norm", weights=weights, eps=EPS ) if self.qk_layer_norms: self.q_layer_norm = nn.LayerNorm.load( prefix=f"{prefix}.q_layer_norm", weights=weights, eps=EPS ) self.k_layer_norm = nn.LayerNorm.load( prefix=f"{prefix}.k_layer_norm", weights=weights, eps=EPS ) self.qk_scale = self.head_dim**-0.5 if n_heads % weights.process_group.size() != 0: raise ValueError( f"`num_heads` must be divisible by `num_shards` (got `num_heads`: {n_heads} " f"and `num_shards`: {weights.process_group.size()}" ) self.n_heads //= weights.process_group.size() # Q, K, V Projection (no bias -- detail from Perceiver/Flamingo Papers). self.q_proj = TensorParallelColumnLinear.load( config=config, prefix=f"{prefix}.q_proj", weights=weights, bias=False ) self.k_proj = TensorParallelColumnLinear.load( config=config, prefix=f"{prefix}.k_proj", weights=weights, bias=False ) self.v_proj = TensorParallelColumnLinear.load( config=config, prefix=f"{prefix}.v_proj", weights=weights, bias=False ) self.output_proj = TensorParallelRowLinear.load( config=config, prefix=f"{prefix}.output_proj", weights=weights, bias=False ) def forward(self, context: torch.Tensor, latents: torch.Tensor) -> torch.Tensor: """ Runs Perceiver Self-Attention, with special (context, latents) appended along the `seq` dimension! Args: context (`torch.Tensor`): Tensor of shape `[bsz, seq, embed_dim]` representing long-form context to resample. 
latents (`torch.Tensor`): Tensor of shape `[bsz, n_latents, embed_dim]` representing fixed length latents to compress to. Returns: `torch.Tensor`: Tensor of shape `[bsz, n_latents, embed_dim]` representing attention over latents w/ cross from context. """ context = self.context_layer_norm(context) latents = self.latents_layer_norm(latents) batch_size, seq_length, embed_dim = context.shape[:3] # Query, Key, Value Projections --> Note that in Flamingo, latents are *concatenated* with context prior to attn! # Note: This results in queries w/ `seq = n_latents`, and keys, values with `seq = len(context) + n_latents` q = self.q_proj(latents) k = self.k_proj(torch.cat([context, latents], dim=-2)) v = self.v_proj(torch.cat([context, latents], dim=-2)) # Multiheaded Self-Attention w/ stable softmax (subtract per-row max -- `amax` -- before softmax call) # =>> `attn` should be a 2D matrix of shape [n_latents x (context + n_latents)] # einsum.rearrange(x, "bsz seq (heads embed) -> bsz heads seq embed", heads=self.n_heads) q, k, v = [ x.reshape(batch_size, x.shape[1], self.n_heads, self.head_dim).transpose( 1, 2 ) for x in (q, k, v) ] if self.qk_layer_norms: q = self.q_layer_norm(q) k = self.k_layer_norm(k) scores = torch.einsum("... i d, ... j d -> ... i j", q * self.qk_scale, k) stabilized_scores = scores - (scores.amax(dim=-1, keepdim=True).detach()) attn = stabilized_scores.softmax(dim=-1) # Attend & project back to output... resampled = torch.einsum("... i j, ... j d -> ... i d", attn, v) # einsum.rearrange(resampled, "bsz heads seq embed -> bsz seq (heads embed)", heads=self.n_heads) return self.output_proj(resampled.transpose(1, 2).flatten(-2)) class IdeficsMLP(nn.Module): def __init__( self, prefix, intermediate_size, config, weights, ): """Simple MLP block with intermediate_size and embedding size""" super().__init__() self.embed_dim = config.vision_config.embed_dim self.ln = nn.LayerNorm.load(prefix=f"{prefix}.ln", weights=weights, eps=EPS) self.fc = TensorParallelColumnLinear.load( config=config, prefix=f"{prefix}.fc", weights=weights, bias=False, ) self.act = nn.ReLU() self.c_proj = TensorParallelRowLinear.load( config=config, prefix=f"{prefix}.c_proj", weights=weights, bias=False, ) def forward( self, hidden_states: Optional[Tuple[torch.FloatTensor]] ) -> torch.FloatTensor: hidden_states = self.ln(hidden_states) hidden_states = self.fc(hidden_states) hidden_states = self.act(hidden_states) hidden_states = self.c_proj(hidden_states) return hidden_states
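# Shape sketch (comment only, hypothetical sizes): with bsz=2, a context of
# 257 vision tokens, embed_dim=1280 and n_latents=64, the resampler maps
# context [2, 257, 1280] -> latents [2, 64, 1280]. Inside each attention
# block, queries keep seq=64 while keys/values run over the concatenation of
# context and latents, i.e. 257 + 64 = 321 positions, matching the Flamingo
# design referenced above.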
text-generation-inference/server/text_generation_server/models/custom_modeling/idefics_perceiver.py/0
{ "file_path": "text-generation-inference/server/text_generation_server/models/custom_modeling/idefics_perceiver.py", "repo_id": "text-generation-inference", "token_count": 5152 }
312
import re
import torch
import torch.distributed

from transformers import (
    PreTrainedTokenizerBase,
)

from text_generation_server.models.causal_lm import CausalLMBatch
from text_generation_server.pb import generate_pb2
from text_generation_server.utils import (
    NextTokenChooser,
    StoppingCriteria,
)
from text_generation_server.utils.chunks import concat_text_chunks

# CREDIT: Papers with code => https://github.com/paperswithcode/galai/blob/main/galai/utils.py

# we split individual characters inside special tokens like [START_DNA]
CUSTOM_SEQ_RE = re.compile(r"(\[START_(DNA|SMILES|I_SMILES|AMINO)])(.*?)(\[END_\2])")

# token added to implement a custom sequence tokenization. This token is added at
# corpus cleaning step and removed in pretokenization. The digits are added to increase the chance
# that they do not occur in the corpus. The digits are escaped so that the token does not appear
# literally in the source code in case we ever include it in the training data.
SPLIT_MARKER = f"SPL{1}T-TH{1}S-Pl3A5E"


def _insert_split_marker(m: re.Match):
    """
    Applies split marker based on a regex match of special tokens such as
    [START_DNA].

    Parameters
    ----------
    m : re.Match
        The regex match over a special token sequence

    Returns
    ----------
    str - the text with the split token added
    """
    start_token, _, sequence, end_token = m.groups()
    sequence = re.sub(r"(.)", rf"{SPLIT_MARKER}\1", sequence, flags=re.DOTALL)
    return f"{start_token}{sequence}{SPLIT_MARKER}{end_token}"


def escape_custom_split_sequence(text):
    """
    Applies custom splitting to the text for GALILEO's tokenization

    Parameters
    ----------
    text : str
        Input text to split

    Returns
    ----------
    str - the text with the split token added
    """
    return CUSTOM_SEQ_RE.sub(_insert_split_marker, text)


# END CREDIT


class GalacticaCausalLMBatch(CausalLMBatch):
    @classmethod
    def from_pb(
        cls,
        pb: generate_pb2.Batch,
        tokenizer: PreTrainedTokenizerBase,
        dtype: torch.dtype,
        device: torch.device,
    ) -> "GalacticaCausalLMBatch":
        inputs = []
        next_token_choosers = []
        stopping_criterias = []
        prefix_offsets = []
        top_n_tokens = []
        read_offsets = []
        requests_idx_mapping = {}

        # Parse batch
        max_truncation = 0
        padding_right_offset = 0
        max_decode_tokens = 0
        for i, r in enumerate(pb.requests):
            requests_idx_mapping[r.id] = i
            # Add escape_custom_split_sequence to the CausalLMBatch logic
            inputs.append(
                escape_custom_split_sequence(concat_text_chunks(r.input_chunks.chunks))
            )
            next_token_choosers.append(
                NextTokenChooser.from_pb(r.parameters, device, tokenizer)
            )
            stopping_criteria = StoppingCriteria.from_pb(
                r.stopping_parameters, tokenizer
            )
            stopping_criterias.append(stopping_criteria)
            top_n_tokens.append(r.top_n_tokens)
            max_truncation = max(max_truncation, r.truncate)
            max_decode_tokens += stopping_criteria.max_new_tokens
            padding_right_offset = max(
                padding_right_offset, stopping_criteria.max_new_tokens
            )

        tokenized_inputs = tokenizer(
            inputs,
            return_tensors="pt",
            padding=True,
            return_token_type_ids=False,
            truncation=True,
            max_length=max_truncation,
        ).to(device)

        for _ in pb.requests:
            input_len = tokenized_inputs["input_ids"].shape[1]
            prefix_offsets.append(0)
            read_offsets.append(input_len)

        input_lengths = tokenized_inputs["attention_mask"].sum(1)
        max_input_length = input_lengths.max()

        input_ids = tokenized_inputs["input_ids"]
        # Allocate maximum attention_mask
        attention_mask = input_ids.new_zeros(
            (pb.size, max_input_length + padding_right_offset)
        )
        # Copy tokenizer attention_mask into fully allocated attention_mask
        attention_mask[:, :max_input_length] = tokenized_inputs["attention_mask"]

        position_ids = tokenized_inputs["attention_mask"].long().cumsum(-1) - 1
        position_ids.masked_fill_(tokenized_inputs["attention_mask"] == 0, 1)
        all_input_ids = tokenized_inputs["input_ids"].T.split(1, dim=1)

        top_n_tokens_tensor = torch.tensor(
            top_n_tokens, device=device, dtype=torch.int64
        )

        max_tokens = len(inputs) * max_input_length + max_decode_tokens

        return cls(
            batch_id=pb.id,
            requests=pb.requests,
            requests_idx_mapping=requests_idx_mapping,
            input_ids=input_ids,
            attention_mask=attention_mask,
            position_ids=position_ids,
            past_key_values=None,
            all_input_ids=list(all_input_ids),
            input_lengths=input_lengths.tolist(),
            prefix_offsets=prefix_offsets,
            read_offsets=read_offsets,
            next_token_choosers=next_token_choosers,
            stopping_criterias=stopping_criterias,
            top_n_tokens=top_n_tokens,
            top_n_tokens_tensor=top_n_tokens_tensor,
            max_input_length=max_input_length.item(),
            padding_right_offset=padding_right_offset,
            max_tokens=max_tokens,
        )
text-generation-inference/server/text_generation_server/models/galactica.py/0
{ "file_path": "text-generation-inference/server/text_generation_server/models/galactica.py", "repo_id": "text-generation-inference", "token_count": 2499 }
313
exclude = ["node_modules/**/*.toml"] # https://taplo.tamasfe.dev/configuration/formatter-options.html [formatting] align_entries = true indent_tables = true reorder_keys = true
tokenizers/bindings/node/.taplo.toml/0
{ "file_path": "tokenizers/bindings/node/.taplo.toml", "repo_id": "tokenizers", "token_count": 66 }
314
import { bpeDecoder, byteFallbackDecoder, ctcDecoder, fuseDecoder, metaspaceDecoder, replaceDecoder, sequenceDecoder, stripDecoder, wordPieceDecoder, } from '../../' describe('wordPieceDecoder', () => { it('accepts `undefined` as first parameter', () => { expect(wordPieceDecoder(undefined)).toBeDefined() }) it('accepts `undefined` as second parameter', () => { expect(wordPieceDecoder('test', undefined)).toBeDefined() }) it('can decode arrays of strings', () => { expect(wordPieceDecoder().decode(['Hel', '##lo', 'there', 'my', 'fr', '##iend'])).toEqual('Hello there my friend') }) }) describe('byteFallbackDecoder', () => { it('accepts `undefined` as first parameter', () => { expect(byteFallbackDecoder()).toBeDefined() }) it('can decode arrays of strings', () => { expect(byteFallbackDecoder().decode(['Hel', 'lo'])).toEqual('Hello') expect(byteFallbackDecoder().decode(['<0x61>'])).toEqual('a') expect(byteFallbackDecoder().decode(['<0x61>'])).toEqual('a') expect(byteFallbackDecoder().decode(['My', ' na', 'me'])).toEqual('My name') expect(byteFallbackDecoder().decode(['<0x61>'])).toEqual('a') expect(byteFallbackDecoder().decode(['<0xE5>'])).toEqual('�') expect(byteFallbackDecoder().decode(['<0xE5>', '<0x8f>'])).toEqual('��') expect(byteFallbackDecoder().decode(['<0xE5>', '<0x8f>', '<0xab>'])).toEqual('叫') expect(byteFallbackDecoder().decode(['<0xE5>', '<0x8f>', 'a'])).toEqual('��a') expect(byteFallbackDecoder().decode(['<0xE5>', '<0x8f>', '<0xab>', 'a'])).toEqual('叫a') }) }) describe('replaceDecoder', () => { it('can decode arrays of strings', () => { expect(replaceDecoder('_', ' ').decode(['Hello', '_Hello'])).toEqual('Hello Hello') }) }) describe('fuseDecoder', () => { it('accepts `undefined` as first parameter', () => { expect(fuseDecoder()).toBeDefined() }) it('can decode arrays of strings', () => { expect(fuseDecoder().decode(['Hel', 'lo'])).toEqual('Hello') }) }) describe('stripDecoder', () => { it('accepts `undefined` as first parameter', () => { expect(stripDecoder('_', 0, 0)).toBeDefined() }) it('can decode arrays of strings', () => { expect(stripDecoder('_', 1, 0).decode(['_Hel', 'lo', '__there'])).toEqual('Hello_there') }) }) describe('metaspaceDecoder', () => { it('accepts `undefined` as first parameter', () => { expect(metaspaceDecoder(undefined)).toBeDefined() }) it('accepts `undefined` as second parameter', () => { expect(metaspaceDecoder('t', undefined)).toBeDefined() }) it('works', () => { expect(metaspaceDecoder().decode(['▁Hello'])).toEqual('Hello') }) }) describe('bpeDecoder', () => { it('accepts `undefined` as parameter', () => { expect(bpeDecoder(undefined)).toBeDefined() }) }) describe('ctcDecoder', () => { it('accepts `undefined` as parameter', () => { expect(ctcDecoder(undefined)).toBeDefined() }) it('encodes correctly', () => { expect(ctcDecoder().decode(['<pad>', 'h', 'h', 'e', 'e', 'l', 'l', '<pad>', 'l', 'l', 'o'])).toEqual('hello') }) }) describe('sequenceDecoder', () => { it('accepts `empty list` as parameter', () => { expect(sequenceDecoder([])).toBeDefined() }) it('encodes correctly', () => { expect( sequenceDecoder([ctcDecoder(), metaspaceDecoder()]).decode(['▁', '▁', 'H', 'H', 'i', 'i', '▁', 'y', 'o', 'u']), ).toEqual('Hi you') }) })
tokenizers/bindings/node/lib/bindings/decoders.test.ts/0
{ "file_path": "tokenizers/bindings/node/lib/bindings/decoders.test.ts", "repo_id": "tokenizers", "token_count": 1393 }
315
# `tokenizers-freebsd-x64`

This is the **x86_64-unknown-freebsd** binary for `tokenizers`.
tokenizers/bindings/node/npm/freebsd-x64/README.md/0
{ "file_path": "tokenizers/bindings/node/npm/freebsd-x64/README.md", "repo_id": "tokenizers", "token_count": 36 }
316
# `tokenizers-win32-x64-msvc`

This is the **x86_64-pc-windows-msvc** binary for `tokenizers`.
tokenizers/bindings/node/npm/win32-x64-msvc/README.md/0
{ "file_path": "tokenizers/bindings/node/npm/win32-x64-msvc/README.md", "repo_id": "tokenizers", "token_count": 39 }
317
use crate::models::Model; use napi_derive::napi; use std::sync::{Arc, RwLock}; use tokenizers as tk; use tokenizers::models::TrainerWrapper; #[napi] pub struct Trainer { trainer: Option<Arc<RwLock<TrainerWrapper>>>, } impl From<TrainerWrapper> for Trainer { fn from(trainer: TrainerWrapper) -> Self { Self { trainer: Some(Arc::new(RwLock::new(trainer))), } } } impl tk::Trainer for Trainer { type Model = Model; fn should_show_progress(&self) -> bool { self .trainer .as_ref() .expect("Uninitialized Trainer") .read() .unwrap() .should_show_progress() } fn train(&self, model: &mut Self::Model) -> tk::Result<Vec<tk::AddedToken>> { let special_tokens = self .trainer .as_ref() .ok_or("Uninitialized Trainer")? .read() .unwrap() .train( &mut model .model .as_ref() .ok_or("Uninitialized Model")? .write() .unwrap(), )?; Ok(special_tokens) } fn feed<I, S, F>(&mut self, iterator: I, process: F) -> tk::Result<()> where I: Iterator<Item = S> + Send, S: AsRef<str> + Send, F: Fn(&str) -> tk::Result<Vec<String>> + Sync, { self .trainer .as_ref() .ok_or("Uninitialized Trainer")? .write() .unwrap() .feed(iterator, process) } }
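// Illustrative sketch (comment only, hedged): driving this wrapper from Rust.
// It assumes `tk::models::bpe::BpeTrainer` and a `use tk::Trainer;` import are
// in scope; the closure stands in for a real pre-tokenization step.
//
//   let mut trainer: Trainer =
//       TrainerWrapper::from(tk::models::bpe::BpeTrainer::default()).into();
//   trainer.feed(["hello world"].into_iter(), |s| Ok(vec![s.to_string()]))?;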
tokenizers/bindings/node/src/trainers.rs/0
{ "file_path": "tokenizers/bindings/node/src/trainers.rs", "repo_id": "tokenizers", "token_count": 641 }
318
import argparse import glob from tokenizers import BertWordPieceTokenizer parser = argparse.ArgumentParser() parser.add_argument( "--files", default=None, metavar="path", type=str, required=True, help="The files to use as training; accept '**/*.txt' type of patterns \ if enclosed in quotes", ) parser.add_argument( "--out", default="./", type=str, help="Path to the output directory, where the files will be saved", ) parser.add_argument("--name", default="bert-wordpiece", type=str, help="The name of the output vocab files") args = parser.parse_args() files = glob.glob(args.files) if not files: print(f"File does not exist: {args.files}") exit(1) # Initialize an empty tokenizer tokenizer = BertWordPieceTokenizer( clean_text=True, handle_chinese_chars=True, strip_accents=True, lowercase=True, ) # And then train tokenizer.train( files, vocab_size=10000, min_frequency=2, show_progress=True, special_tokens=["[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]"], limit_alphabet=1000, wordpieces_prefix="##", ) # Save the files tokenizer.save_model(args.out, args.name)
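# Example invocation (illustrative; the paths are placeholders):
#   python train_bert_wordpiece.py --files "data/**/*.txt" --out ./vocab --name my-bert-wordpiece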
tokenizers/bindings/python/examples/train_bert_wordpiece.py/0
{ "file_path": "tokenizers/bindings/python/examples/train_bert_wordpiece.py", "repo_id": "tokenizers", "token_count": 472 }
319
# Generated content DO NOT EDIT class Model: """ Base class for all models The model represents the actual tokenization algorithm. This is the part that will contain and manage the learned vocabulary. This class cannot be constructed directly. Please use one of the concrete models. """ def get_trainer(self): """ Get the associated :class:`~tokenizers.trainers.Trainer` Retrieve the :class:`~tokenizers.trainers.Trainer` associated to this :class:`~tokenizers.models.Model`. Returns: :class:`~tokenizers.trainers.Trainer`: The Trainer used to train this model """ pass def id_to_token(self, id): """ Get the token associated to an ID Args: id (:obj:`int`): An ID to convert to a token Returns: :obj:`str`: The token associated to the ID """ pass def save(self, folder, prefix): """ Save the current model Save the current model in the given folder, using the given prefix for the various files that will get created. Any file with the same name that already exists in this folder will be overwritten. Args: folder (:obj:`str`): The path to the target folder in which to save the various files prefix (:obj:`str`, `optional`): An optional prefix, used to prefix each file name Returns: :obj:`List[str]`: The list of saved files """ pass def token_to_id(self, tokens): """ Get the ID associated to a token Args: token (:obj:`str`): A token to convert to an ID Returns: :obj:`int`: The ID associated to the token """ pass def tokenize(self, sequence): """ Tokenize a sequence Args: sequence (:obj:`str`): A sequence to tokenize Returns: A :obj:`List` of :class:`~tokenizers.Token`: The generated tokens """ pass class BPE(Model): """ An implementation of the BPE (Byte-Pair Encoding) algorithm Args: vocab (:obj:`Dict[str, int]`, `optional`): A dictionary of string keys and their ids :obj:`{"am": 0,...}` merges (:obj:`List[Tuple[str, str]]`, `optional`): A list of pairs of tokens (:obj:`Tuple[str, str]`) :obj:`[("a", "b"),...]` cache_capacity (:obj:`int`, `optional`): The number of words that the BPE cache can contain. The cache allows to speed-up the process by keeping the result of the merge operations for a number of words. dropout (:obj:`float`, `optional`): A float between 0 and 1 that represents the BPE dropout to use. unk_token (:obj:`str`, `optional`): The unknown token to be used by the model. continuing_subword_prefix (:obj:`str`, `optional`): The prefix to attach to subword units that don't represent a beginning of word. end_of_word_suffix (:obj:`str`, `optional`): The suffix to attach to subword units that represent an end of word. fuse_unk (:obj:`bool`, `optional`): Whether to fuse any subsequent unknown tokens into a single one byte_fallback (:obj:`bool`, `optional`): Whether to use spm byte-fallback trick (defaults to False) ignore_merges (:obj:`bool`, `optional`): Whether or not to match tokens with the vocab before using merges. """ def __init__( self, vocab=None, merges=None, cache_capacity=None, dropout=None, unk_token=None, continuing_subword_prefix=None, end_of_word_suffix=None, fuse_unk=None, byte_fallback=False, ignore_merges=False, ): pass @staticmethod def from_file(cls, vocab, merge, **kwargs): """ Instantiate a BPE model from the given files. 
This method is roughly equivalent to doing:: vocab, merges = BPE.read_file(vocab_filename, merges_filename) bpe = BPE(vocab, merges) If you don't need to keep the :obj:`vocab, merges` values lying around, this method is more optimized than manually calling :meth:`~tokenizers.models.BPE.read_file` to initialize a :class:`~tokenizers.models.BPE` Args: vocab (:obj:`str`): The path to a :obj:`vocab.json` file merges (:obj:`str`): The path to a :obj:`merges.txt` file Returns: :class:`~tokenizers.models.BPE`: An instance of BPE loaded from these files """ pass def get_trainer(self): """ Get the associated :class:`~tokenizers.trainers.Trainer` Retrieve the :class:`~tokenizers.trainers.Trainer` associated to this :class:`~tokenizers.models.Model`. Returns: :class:`~tokenizers.trainers.Trainer`: The Trainer used to train this model """ pass def id_to_token(self, id): """ Get the token associated to an ID Args: id (:obj:`int`): An ID to convert to a token Returns: :obj:`str`: The token associated to the ID """ pass @staticmethod def read_file(self, vocab, merges): """ Read a :obj:`vocab.json` and a :obj:`merges.txt` files This method provides a way to read and parse the content of these files, returning the relevant data structures. If you want to instantiate some BPE models from memory, this method gives you the expected input from the standard files. Args: vocab (:obj:`str`): The path to a :obj:`vocab.json` file merges (:obj:`str`): The path to a :obj:`merges.txt` file Returns: A :obj:`Tuple` with the vocab and the merges: The vocabulary and merges loaded into memory """ pass def save(self, folder, prefix): """ Save the current model Save the current model in the given folder, using the given prefix for the various files that will get created. Any file with the same name that already exists in this folder will be overwritten. Args: folder (:obj:`str`): The path to the target folder in which to save the various files prefix (:obj:`str`, `optional`): An optional prefix, used to prefix each file name Returns: :obj:`List[str]`: The list of saved files """ pass def token_to_id(self, tokens): """ Get the ID associated to a token Args: token (:obj:`str`): A token to convert to an ID Returns: :obj:`int`: The ID associated to the token """ pass def tokenize(self, sequence): """ Tokenize a sequence Args: sequence (:obj:`str`): A sequence to tokenize Returns: A :obj:`List` of :class:`~tokenizers.Token`: The generated tokens """ pass class Unigram(Model): """ An implementation of the Unigram algorithm Args: vocab (:obj:`List[Tuple[str, float]]`, `optional`, `optional`): A list of vocabulary items and their relative score [("am", -0.2442),...] """ def __init__(self, vocab, unk_id, byte_fallback): pass def get_trainer(self): """ Get the associated :class:`~tokenizers.trainers.Trainer` Retrieve the :class:`~tokenizers.trainers.Trainer` associated to this :class:`~tokenizers.models.Model`. Returns: :class:`~tokenizers.trainers.Trainer`: The Trainer used to train this model """ pass def id_to_token(self, id): """ Get the token associated to an ID Args: id (:obj:`int`): An ID to convert to a token Returns: :obj:`str`: The token associated to the ID """ pass def save(self, folder, prefix): """ Save the current model Save the current model in the given folder, using the given prefix for the various files that will get created. Any file with the same name that already exists in this folder will be overwritten. 
Args: folder (:obj:`str`): The path to the target folder in which to save the various files prefix (:obj:`str`, `optional`): An optional prefix, used to prefix each file name Returns: :obj:`List[str]`: The list of saved files """ pass def token_to_id(self, tokens): """ Get the ID associated to a token Args: token (:obj:`str`): A token to convert to an ID Returns: :obj:`int`: The ID associated to the token """ pass def tokenize(self, sequence): """ Tokenize a sequence Args: sequence (:obj:`str`): A sequence to tokenize Returns: A :obj:`List` of :class:`~tokenizers.Token`: The generated tokens """ pass class WordLevel(Model): """ An implementation of the WordLevel algorithm Most simple tokenizer model based on mapping tokens to their corresponding id. Args: vocab (:obj:`str`, `optional`): A dictionary of string keys and their ids :obj:`{"am": 0,...}` unk_token (:obj:`str`, `optional`): The unknown token to be used by the model. """ def __init__(self, vocab, unk_token): pass @staticmethod def from_file(vocab, unk_token): """ Instantiate a WordLevel model from the given file This method is roughly equivalent to doing:: vocab = WordLevel.read_file(vocab_filename) wordlevel = WordLevel(vocab) If you don't need to keep the :obj:`vocab` values lying around, this method is more optimized than manually calling :meth:`~tokenizers.models.WordLevel.read_file` to initialize a :class:`~tokenizers.models.WordLevel` Args: vocab (:obj:`str`): The path to a :obj:`vocab.json` file Returns: :class:`~tokenizers.models.WordLevel`: An instance of WordLevel loaded from file """ pass def get_trainer(self): """ Get the associated :class:`~tokenizers.trainers.Trainer` Retrieve the :class:`~tokenizers.trainers.Trainer` associated to this :class:`~tokenizers.models.Model`. Returns: :class:`~tokenizers.trainers.Trainer`: The Trainer used to train this model """ pass def id_to_token(self, id): """ Get the token associated to an ID Args: id (:obj:`int`): An ID to convert to a token Returns: :obj:`str`: The token associated to the ID """ pass @staticmethod def read_file(vocab): """ Read a :obj:`vocab.json` This method provides a way to read and parse the content of a vocabulary file, returning the relevant data structures. If you want to instantiate some WordLevel models from memory, this method gives you the expected input from the standard files. Args: vocab (:obj:`str`): The path to a :obj:`vocab.json` file Returns: :obj:`Dict[str, int]`: The vocabulary as a :obj:`dict` """ pass def save(self, folder, prefix): """ Save the current model Save the current model in the given folder, using the given prefix for the various files that will get created. Any file with the same name that already exists in this folder will be overwritten. 
Args: folder (:obj:`str`): The path to the target folder in which to save the various files prefix (:obj:`str`, `optional`): An optional prefix, used to prefix each file name Returns: :obj:`List[str]`: The list of saved files """ pass def token_to_id(self, tokens): """ Get the ID associated to a token Args: token (:obj:`str`): A token to convert to an ID Returns: :obj:`int`: The ID associated to the token """ pass def tokenize(self, sequence): """ Tokenize a sequence Args: sequence (:obj:`str`): A sequence to tokenize Returns: A :obj:`List` of :class:`~tokenizers.Token`: The generated tokens """ pass class WordPiece(Model): """ An implementation of the WordPiece algorithm Args: vocab (:obj:`Dict[str, int]`, `optional`): A dictionary of string keys and their ids :obj:`{"am": 0,...}` unk_token (:obj:`str`, `optional`): The unknown token to be used by the model. max_input_chars_per_word (:obj:`int`, `optional`): The maximum number of characters to authorize in a single word. """ def __init__(self, vocab, unk_token, max_input_chars_per_word): pass @staticmethod def from_file(vocab, **kwargs): """ Instantiate a WordPiece model from the given file This method is roughly equivalent to doing:: vocab = WordPiece.read_file(vocab_filename) wordpiece = WordPiece(vocab) If you don't need to keep the :obj:`vocab` values lying around, this method is more optimized than manually calling :meth:`~tokenizers.models.WordPiece.read_file` to initialize a :class:`~tokenizers.models.WordPiece` Args: vocab (:obj:`str`): The path to a :obj:`vocab.txt` file Returns: :class:`~tokenizers.models.WordPiece`: An instance of WordPiece loaded from file """ pass def get_trainer(self): """ Get the associated :class:`~tokenizers.trainers.Trainer` Retrieve the :class:`~tokenizers.trainers.Trainer` associated to this :class:`~tokenizers.models.Model`. Returns: :class:`~tokenizers.trainers.Trainer`: The Trainer used to train this model """ pass def id_to_token(self, id): """ Get the token associated to an ID Args: id (:obj:`int`): An ID to convert to a token Returns: :obj:`str`: The token associated to the ID """ pass @staticmethod def read_file(vocab): """ Read a :obj:`vocab.txt` file This method provides a way to read and parse the content of a standard `vocab.txt` file as used by the WordPiece Model, returning the relevant data structures. If you want to instantiate some WordPiece models from memory, this method gives you the expected input from the standard files. Args: vocab (:obj:`str`): The path to a :obj:`vocab.txt` file Returns: :obj:`Dict[str, int]`: The vocabulary as a :obj:`dict` """ pass def save(self, folder, prefix): """ Save the current model Save the current model in the given folder, using the given prefix for the various files that will get created. Any file with the same name that already exists in this folder will be overwritten. Args: folder (:obj:`str`): The path to the target folder in which to save the various files prefix (:obj:`str`, `optional`): An optional prefix, used to prefix each file name Returns: :obj:`List[str]`: The list of saved files """ pass def token_to_id(self, tokens): """ Get the ID associated to a token Args: token (:obj:`str`): A token to convert to an ID Returns: :obj:`int`: The ID associated to the token """ pass def tokenize(self, sequence): """ Tokenize a sequence Args: sequence (:obj:`str`): A sequence to tokenize Returns: A :obj:`List` of :class:`~tokenizers.Token`: The generated tokens """ pass
tokenizers/bindings/python/py_src/tokenizers/models/__init__.pyi/0
{ "file_path": "tokenizers/bindings/python/py_src/tokenizers/models/__init__.pyi", "repo_id": "tokenizers", "token_count": 7626 }
320
import tokenizers from argparse import ArgumentParser import sentencepiece as spm from collections import Counter import json import os import datetime try: from termcolor import colored has_color = True except Exception: has_color = False def main(): parser = ArgumentParser("SentencePiece parity checker") parser.add_argument( "--input-file", "-i", type=str, required=True, help="Which files do you want to train from", ) parser.add_argument( "--model-file", "-m", type=str, required=False, default=None, help="Use a pretrained token file", ) parser.add_argument( "--model-prefix", type=str, default="spm_parity", help="Model prefix for spm_train", ) parser.add_argument( "--vocab-size", "-v", type=int, default=8000, help="Vocab size for spm_train", ) parser.add_argument( "--verbose", action="store_true", help="Verbosity", ) parser.add_argument( "--train", action="store_true", help="Instead of checking the encoder part, we check the trainer part", ) parser.add_argument( "--from-spm", action="store_true", help="Directly load the spm file with it's own normalizer", ) args = parser.parse_args() trained = False if args.model_file is None: spm.SentencePieceTrainer.Train( f"--input={args.input_file} --model_prefix={args.model_prefix}" f" --character_coverage=1.0" f" --max_sentence_length=40000" f" --num_threads=1" f" --vocab_size={args.vocab_size}" ) trained = True args.model_file = f"{args.model_prefix}.model" try: if args.train: check_train(args) else: check_encode(args) finally: if trained: os.remove(f"{args.model_prefix}.model") os.remove(f"{args.model_prefix}.vocab") def check_train(args): sp = spm.SentencePieceProcessor() sp.Load(args.model_file) tokenizer = tokenizers.SentencePieceUnigramTokenizer() tokenizer.train(args.input_file, show_progress=False) spm_tokens = 0 tokenizer_tokens = 0 with open(args.input_file, "r") as f: for i, line in enumerate(f): line = line.strip() ids = sp.EncodeAsIds(line) encoded = tokenizer.encode(line) spm_tokens += len(ids) tokenizer_tokens += len(encoded.ids) vocab = [0 for i in range(args.vocab_size)] spm_vocab = [0 for i in range(args.vocab_size)] for token, index in tokenizer.get_vocab().items(): vocab[index] = token for i in range(args.vocab_size): spm_vocab[i] = sp.id_to_piece(i) # 0 is unk in tokenizers, 0, 1, 2 are unk bos, eos in spm by default. for i, (token, spm_token) in enumerate(zip(vocab[1:], spm_vocab[3:])): if token != spm_token: print(f"First different token is token {i} ({token} != {spm_token})") break print(f"Tokenizer used {tokenizer_tokens}, where spm used {spm_tokens}") assert tokenizer_tokens < spm_tokens, "Our trainer should be at least more efficient than the SPM one" print("Ok our trainer is at least more efficient than the SPM one") def check_diff(spm_diff, tok_diff, sp, tok): if spm_diff == list(reversed(tok_diff)): # AAA -> AA+A vs A+AA case. return True elif len(spm_diff) == len(tok_diff) and tok.decode(spm_diff) == tok.decode(tok_diff): # Second order OK # Barrich -> Barr + ich vs Bar + rich return True spm_reencoded = sp.encode(sp.decode(spm_diff)) tok_reencoded = tok.encode(tok.decode(spm_diff)).ids if spm_reencoded != spm_diff and spm_reencoded == tok_reencoded: # Type 3 error. # Snehagatha -> # Sne, h, aga, th, a # Sne, ha, gat, ha # Encoding the wrong with sp does not even recover what spm gave us # It fits tokenizer however... 
return True return False def check_details(line, spm_ids, tok_ids, sp, tok): # Encoding can be the same with same result AAA -> A + AA vs AA + A # We can check that we use at least exactly the same number of tokens. for i, (spm_id, tok_id) in enumerate(zip(spm_ids, tok_ids)): if spm_id != tok_id: break first = i for i, (spm_id, tok_id) in enumerate(zip(reversed(spm_ids), reversed(tok_ids))): if spm_id != tok_id: break last = len(spm_ids) - i spm_diff = spm_ids[first:last] tok_diff = tok_ids[first:last] if check_diff(spm_diff, tok_diff, sp, tok): return True if last - first > 5: # We might have twice a single problem, attempt to subdivide the disjointed tokens into smaller problems spms = Counter(spm_ids[first:last]) toks = Counter(tok_ids[first:last]) removable_tokens = {spm_ for (spm_, si) in spms.items() if toks.get(spm_, 0) == si} min_width = 3 for i in range(last - first - min_width): if all(spm_ids[first + i + j] in removable_tokens for j in range(min_width)): possible_matches = [ k for k in range(last - first - min_width) if tok_ids[first + k : first + k + min_width] == spm_ids[first + i : first + i + min_width] ] for j in possible_matches: if check_diff(spm_ids[first : first + i], tok_ids[first : first + j], sp, tok) and check_details( line, spm_ids[first + i : last], tok_ids[first + j : last], sp, tok, ): return True print(f"Spm: {[tok.decode([spm_ids[i]]) for i in range(first, last)]}") try: print(f"Tok: {[tok.decode([tok_ids[i]]) for i in range(first, last)]}") except Exception: pass ok_start = tok.decode(spm_ids[:first]) ok_end = tok.decode(spm_ids[last:]) wrong = tok.decode(spm_ids[first:last]) print() if has_color: print(f"{colored(ok_start, 'grey')}{colored(wrong, 'red')}{colored(ok_end, 'grey')}") else: print(wrong) return False def check_encode(args): sp = spm.SentencePieceProcessor() sp.Load(args.model_file) if args.from_spm: tok = tokenizers.SentencePieceUnigramTokenizer.from_spm(args.model_file) else: vocab = [(sp.id_to_piece(i), sp.get_score(i)) for i in range(sp.piece_size())] unk_id = sp.unk_id() tok = tokenizers.SentencePieceUnigramTokenizer(vocab, unk_id) perfect = 0 imperfect = 0 wrong = 0 now = datetime.datetime.now spm_total_time = datetime.timedelta(seconds=0) tok_total_time = datetime.timedelta(seconds=0) with open(args.input_file, "r", encoding="utf-8-sig") as f: for i, line in enumerate(f): line = line.strip() start = now() ids = sp.EncodeAsIds(line) spm_time = now() encoded = tok.encode(line) tok_time = now() spm_total_time += spm_time - start tok_total_time += tok_time - spm_time if args.verbose: if i % 10000 == 0: print(f"({perfect} / {imperfect} / {wrong} ----- {perfect + imperfect + wrong})") print(f"SPM: {spm_total_time} - TOK: {tok_total_time}") if ids != encoded.ids: if check_details(line, ids, encoded.ids, sp, tok): imperfect += 1 continue else: wrong += 1 else: perfect += 1 assert ( ids == encoded.ids ), f"line {i}: {line} : \n\n{ids}\n{encoded.ids}\n{list(zip(encoded.ids, encoded.tokens))}" print(f"({perfect} / {imperfect} / {wrong} ----- {perfect + imperfect + wrong})") total = perfect + imperfect + wrong print(f"Accuracy {perfect * 100 / total:.2f} Slowdown : {tok_total_time/ spm_total_time:.2f}") if __name__ == "__main__": main()
tokenizers/bindings/python/scripts/spm_parity_check.py/0
{ "file_path": "tokenizers/bindings/python/scripts/spm_parity_check.py", "repo_id": "tokenizers", "token_count": 4110 }
321
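The trickiest part of the script above is how `check_details` trims the shared prefix and suffix of the two id sequences before comparing anything. Here is that windowing step in isolation — a sketch with made-up ids, assuming equal-length sequences for simplicity:

```python
def divergence_window(spm_ids, tok_ids):
    # Advance past the common prefix...
    first = 0
    for a, b in zip(spm_ids, tok_ids):
        if a != b:
            break
        first += 1
    # ...and trim the common suffix, mirroring the reversed loops
    # in check_details above.
    last = len(spm_ids)
    for a, b in zip(reversed(spm_ids), reversed(tok_ids)):
        if a != b:
            break
        last -= 1
    return first, last

first, last = divergence_window([5, 9, 1, 2, 7], [5, 9, 3, 4, 7])
print(first, last)  # 2 4 -> only ids[2:4] disagree and need inspection
```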
use tokenizers as tk; use pyo3::exceptions; use pyo3::prelude::*; use pyo3::types::*; use super::{ DestroyPtr, PyNormalizedString, PyNormalizedStringRefMut, RefMutContainer, RefMutGuard, }; use crate::encoding::PyEncoding; use crate::error::ToPyResult; use crate::token::PyToken; use tk::{OffsetReferential, OffsetType, Offsets, PreTokenizedString, Token}; fn split(pretok: &mut PreTokenizedString, func: &Bound<'_, PyAny>) -> PyResult<()> { if !func.is_callable() { Err(exceptions::PyTypeError::new_err( "`split` expect a callable with the signature: \ `fn(index: int, normalized: NormalizedString) -> List[NormalizedString]`", )) } else { ToPyResult(pretok.split(|i, normalized| { let output = func.call((i, PyNormalizedString::from(normalized)), None)?; Ok(output .extract::<Vec<PyNormalizedString>>()? .into_iter() .map(tk::NormalizedString::from)) })) .into() } } fn normalize(pretok: &mut PreTokenizedString, func: &Bound<'_, PyAny>) -> PyResult<()> { if !func.is_callable() { Err(exceptions::PyTypeError::new_err( "`normalize` expect a callable with the signature: \ `fn(normalized: NormalizedString)`", )) } else { ToPyResult(pretok.normalize(|normalized| { let norm = PyNormalizedStringRefMut::new(normalized); func.call((norm.get().clone(),), None)?; Ok(()) })) .into() } } fn tokenize(pretok: &mut PreTokenizedString, func: &Bound<'_, PyAny>) -> PyResult<()> { if !func.is_callable() { Err(exceptions::PyTypeError::new_err( "`tokenize` expect a callable with the signature: \ `fn(str) -> List[Token]`", )) } else { ToPyResult(pretok.tokenize(|normalized| { let output = func.call((normalized.get(),), None)?; Ok(output .extract::<Bound<PyList>>()? .into_iter() .map(|obj| Ok(Token::from(obj.extract::<PyToken>()?))) .collect::<PyResult<Vec<_>>>()?) })) .into() } } /// This is an enum #[derive(Clone)] pub struct PyOffsetReferential(OffsetReferential); impl FromPyObject<'_> for PyOffsetReferential { fn extract_bound(obj: &Bound<'_, PyAny>) -> PyResult<Self> { let s = obj.extract::<String>()?; Ok(Self(match s.as_ref() { "original" => Ok(OffsetReferential::Original), "normalized" => Ok(OffsetReferential::Normalized), _ => Err(exceptions::PyValueError::new_err( "Wrong value for OffsetReferential, expected one of `original, normalized`", )), }?)) } } #[derive(Clone)] pub struct PyOffsetType(OffsetType); impl FromPyObject<'_> for PyOffsetType { fn extract_bound(obj: &Bound<'_, PyAny>) -> PyResult<Self> { let s = obj.extract::<String>()?; Ok(Self(match s.as_ref() { "byte" => Ok(OffsetType::Byte), "char" => Ok(OffsetType::Char), _ => Err(exceptions::PyValueError::new_err( "Wrong value for OffsetType, expected one of `byte, char`", )), }?)) } } type PySplit = (String, Offsets, Option<Vec<PyToken>>); fn get_splits( pretok: &PreTokenizedString, offset_referential: PyOffsetReferential, offset_type: PyOffsetType, ) -> Vec<PySplit> { pretok .get_splits(offset_referential.0, offset_type.0) .into_iter() .map(|(s, o, t)| { ( s.to_owned(), o, t.as_ref() .map(|tokens| tokens.iter().map(|t| t.clone().into()).collect()), ) }) .collect() } fn to_encoding( pretok: &PreTokenizedString, type_id: u32, word_idx: Option<u32>, ) -> PyResult<PyEncoding> { Ok(ToPyResult( pretok .clone() .into_encoding(word_idx, type_id, tk::OffsetType::Char), ) .into_py()? .into()) } /// PreTokenizedString /// /// Wrapper over a string, that provides a way to normalize, pre-tokenize, tokenize the /// underlying string, while keeping track of the alignment information (offsets). /// /// The PreTokenizedString manages what we call `splits`. 
Each split represents a substring /// which is a subpart of the original string, with the relevant offsets and tokens. /// /// When calling one of the methods used to modify the PreTokenizedString (namely one of /// `split`, `normalize` or `tokenize), only the `splits` that don't have any associated /// tokens will get modified. /// /// Args: /// sequence: str: /// The string sequence used to initialize this PreTokenizedString #[pyclass(module = "tokenizers", name = "PreTokenizedString")] pub struct PyPreTokenizedString { pub(crate) pretok: tk::PreTokenizedString, } impl From<PreTokenizedString> for PyPreTokenizedString { fn from(pretok: PreTokenizedString) -> Self { Self { pretok } } } impl From<PyPreTokenizedString> for PreTokenizedString { fn from(pretok: PyPreTokenizedString) -> Self { pretok.pretok } } #[pymethods] impl PyPreTokenizedString { #[new] #[pyo3(text_signature = "(self, sequence)")] fn new(s: &str) -> Self { PreTokenizedString::from(s).into() } /// Split the PreTokenizedString using the given `func` /// /// Args: /// func: Callable[[index, NormalizedString], List[NormalizedString]]: /// The function used to split each underlying split. /// It is expected to return a list of `NormalizedString`, that represent the new /// splits. If the given `NormalizedString` does not need any splitting, we can /// just return it directly. /// In order for the offsets to be tracked accurately, any returned `NormalizedString` /// should come from calling either `.split` or `.slice` on the received one. #[pyo3(text_signature = "(self, func)")] fn split(&mut self, func: &Bound<'_, PyAny>) -> PyResult<()> { split(&mut self.pretok, func) } /// Normalize each split of the `PreTokenizedString` using the given `func` /// /// Args: /// func: Callable[[NormalizedString], None]: /// The function used to normalize each underlying split. This function /// does not need to return anything, just calling the methods on the provided /// NormalizedString allow its modification. #[pyo3(text_signature = "(self, func)")] fn normalize(&mut self, func: &Bound<'_, PyAny>) -> PyResult<()> { normalize(&mut self.pretok, func) } /// Tokenize each split of the `PreTokenizedString` using the given `func` /// /// Args: /// func: Callable[[str], List[Token]]: /// The function used to tokenize each underlying split. This function must return /// a list of Token generated from the input str. #[pyo3(text_signature = "(self, func)")] fn tokenize(&mut self, func: &Bound<'_, PyAny>) -> PyResult<()> { tokenize(&mut self.pretok, func) } /// Return an Encoding generated from this PreTokenizedString /// /// Args: /// type_id: int = 0: /// The type_id to be used on the generated Encoding. /// /// word_idx: Optional[int] = None: /// An optional word index to be used for each token of this Encoding. If provided, /// all the word indices in the generated Encoding will use this value, instead /// of the one automatically tracked during pre-tokenization. /// /// Returns: /// An Encoding #[pyo3(signature = (type_id = 0, word_idx = None))] #[pyo3(text_signature = "(self, type_id=0, word_idx=None)")] fn to_encoding(&self, type_id: u32, word_idx: Option<u32>) -> PyResult<PyEncoding> { to_encoding(&self.pretok, type_id, word_idx) } /// Get the splits currently managed by the PreTokenizedString /// /// Args: /// offset_referential: :obj:`str` /// Whether the returned splits should have offsets expressed relative /// to the original string, or the normalized one. choices: "original", "normalized". 
/// /// offset_type: :obj:`str` /// Whether the returned splits should have offsets expressed in bytes or chars. /// When slicing an str, we usually want to use chars, which is the default value. /// Now in some cases it might be interesting to get these offsets expressed in bytes, /// so it is possible to change this here. /// choices: "char", "bytes" /// /// Returns /// A list of splits #[pyo3(signature = ( offset_referential = PyOffsetReferential(OffsetReferential::Original), offset_type = PyOffsetType(OffsetType::Char) ))] #[pyo3(text_signature = "(self, offset_referential=\"original\", offset_type=\"char\")")] fn get_splits( &self, offset_referential: PyOffsetReferential, offset_type: PyOffsetType, ) -> Vec<PySplit> { get_splits(&self.pretok, offset_referential, offset_type) } } #[pyclass(module = "tokenizers", name = "PreTokenizedString")] #[derive(Clone)] pub struct PyPreTokenizedStringRefMut { inner: RefMutContainer<PreTokenizedString>, } impl DestroyPtr for PyPreTokenizedStringRefMut { fn destroy(&mut self) { self.inner.destroy(); } } impl PyPreTokenizedStringRefMut { pub fn new(pretok: &mut tk::PreTokenizedString) -> RefMutGuard<'_, Self> { // SAFETY: This is safe because we return a RefMutGuard here. // The compiler will make sure the &mut stays valid as necessary. RefMutGuard::new(Self { inner: RefMutContainer::new(pretok), }) } pub fn destroyed_error() -> PyErr { exceptions::PyException::new_err( "Cannot use a PreTokenizedStringRefMut outside `pre_tokenize`", ) } } #[pymethods] impl PyPreTokenizedStringRefMut { fn split(&mut self, func: &Bound<'_, PyAny>) -> PyResult<()> { self.inner .map_mut(|pretok| split(pretok, func)) .ok_or_else(PyPreTokenizedStringRefMut::destroyed_error)? } fn normalize(&mut self, func: &Bound<'_, PyAny>) -> PyResult<()> { self.inner .map_mut(|pretok| normalize(pretok, func)) .ok_or_else(PyPreTokenizedStringRefMut::destroyed_error)? } fn tokenize(&mut self, func: &Bound<'_, PyAny>) -> PyResult<()> { self.inner .map_mut(|pretok| tokenize(pretok, func)) .ok_or_else(PyPreTokenizedStringRefMut::destroyed_error)? } #[pyo3(signature = (type_id = 0, word_idx = None))] fn to_encoding(&self, type_id: u32, word_idx: Option<u32>) -> PyResult<PyEncoding> { self.inner .map(|pretok| to_encoding(pretok, type_id, word_idx)) .ok_or_else(PyPreTokenizedStringRefMut::destroyed_error)? } #[pyo3(signature = ( offset_referential = PyOffsetReferential(OffsetReferential::Original), offset_type = PyOffsetType(OffsetType::Char) ))] fn get_splits( &self, offset_referential: PyOffsetReferential, offset_type: PyOffsetType, ) -> PyResult<Vec<PySplit>> { self.inner .map(|pretok| get_splits(pretok, offset_referential, offset_type)) .ok_or_else(PyPreTokenizedStringRefMut::destroyed_error) } }
tokenizers/bindings/python/src/utils/pretokenization.rs/0
{ "file_path": "tokenizers/bindings/python/src/utils/pretokenization.rs", "repo_id": "tokenizers", "token_count": 4958 }
322
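These bindings are easiest to understand from the Python side. The sketch below drives a `PreTokenizedString` through the `split`/`tokenize` callbacks documented above; it assumes `tokenizers.Token` is constructible as `Token(id, value, offsets)` and that `NormalizedString.split` accepts the behavior string `"removed"` — treat both as assumptions rather than guaranteed API:

```python
from tokenizers import PreTokenizedString, Token

pretok = PreTokenizedString("Hello world")

# `split` must derive the new NormalizedStrings via .split()/.slice()
# on the one it receives, so that offset tracking stays accurate.
pretok.split(lambda i, normalized: normalized.split(" ", "removed"))

# One Token per split; offsets are relative to the split itself.
pretok.tokenize(lambda s: [Token(0, s, (0, len(s)))])

encoding = pretok.to_encoding(type_id=0)
print(encoding.tokens)    # ["Hello", "world"]
print(pretok.get_splits())  # splits with offsets into the original string
```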
from tokenizers import Tokenizer from ..utils import data_dir, doc_pipeline_bert_tokenizer, doc_wiki_tokenizer disable_printing = True original_print = print def print(*args, **kwargs): if not disable_printing: original_print(*args, **kwargs) class TestPipeline: def test_pipeline(self, doc_wiki_tokenizer): try: # START reload_tokenizer from tokenizers import Tokenizer tokenizer = Tokenizer.from_file("data/tokenizer-wiki.json") # END reload_tokenizer except Exception: tokenizer = Tokenizer.from_file(doc_wiki_tokenizer) # START setup_normalizer from tokenizers import normalizers from tokenizers.normalizers import NFD, StripAccents normalizer = normalizers.Sequence([NFD(), StripAccents()]) # END setup_normalizer # START test_normalizer normalizer.normalize_str("Héllò hôw are ü?") # "Hello how are u?" # END test_normalizer assert normalizer.normalize_str("Héllò hôw are ü?") == "Hello how are u?" # START replace_normalizer tokenizer.normalizer = normalizer # END replace_normalizer # START setup_pre_tokenizer from tokenizers.pre_tokenizers import Whitespace pre_tokenizer = Whitespace() pre_tokenizer.pre_tokenize_str("Hello! How are you? I'm fine, thank you.") # [("Hello", (0, 5)), ("!", (5, 6)), ("How", (7, 10)), ("are", (11, 14)), ("you", (15, 18)), # ("?", (18, 19)), ("I", (20, 21)), ("'", (21, 22)), ('m', (22, 23)), ("fine", (24, 28)), # (",", (28, 29)), ("thank", (30, 35)), ("you", (36, 39)), (".", (39, 40))] # END setup_pre_tokenizer assert pre_tokenizer.pre_tokenize_str("Hello! How are you? I'm fine, thank you.") == [ ("Hello", (0, 5)), ("!", (5, 6)), ("How", (7, 10)), ("are", (11, 14)), ("you", (15, 18)), ("?", (18, 19)), ("I", (20, 21)), ("'", (21, 22)), ("m", (22, 23)), ("fine", (24, 28)), (",", (28, 29)), ("thank", (30, 35)), ("you", (36, 39)), (".", (39, 40)), ] # START combine_pre_tokenizer from tokenizers import pre_tokenizers from tokenizers.pre_tokenizers import Digits pre_tokenizer = pre_tokenizers.Sequence([Whitespace(), Digits(individual_digits=True)]) pre_tokenizer.pre_tokenize_str("Call 911!") # [("Call", (0, 4)), ("9", (5, 6)), ("1", (6, 7)), ("1", (7, 8)), ("!", (8, 9))] # END combine_pre_tokenizer assert pre_tokenizer.pre_tokenize_str("Call 911!") == [ ("Call", (0, 4)), ("9", (5, 6)), ("1", (6, 7)), ("1", (7, 8)), ("!", (8, 9)), ] # START replace_pre_tokenizer tokenizer.pre_tokenizer = pre_tokenizer # END replace_pre_tokenizer # START setup_processor from tokenizers.processors import TemplateProcessing tokenizer.post_processor = TemplateProcessing( single="[CLS] $A [SEP]", pair="[CLS] $A [SEP] $B:1 [SEP]:1", special_tokens=[("[CLS]", 1), ("[SEP]", 2)], ) # END setup_processor # START test_decoding output = tokenizer.encode("Hello, y'all! How are you 😁 ?") print(output.ids) # [1, 27253, 16, 93, 11, 5097, 5, 7961, 5112, 6218, 0, 35, 2] tokenizer.decode([1, 27253, 16, 93, 11, 5097, 5, 7961, 5112, 6218, 0, 35, 2]) # "Hello , y ' all ! How are you ?" # END test_decoding assert output.ids == [1, 27253, 16, 93, 11, 5097, 5, 7961, 5112, 6218, 0, 35, 2] assert ( tokenizer.decode([1, 27253, 16, 93, 11, 5097, 5, 7961, 5112, 6218, 0, 35, 2]) == "Hello , y ' all ! How are you ?" 
) @staticmethod def slow_train(): # START bert_setup_tokenizer from tokenizers import Tokenizer from tokenizers.models import WordPiece bert_tokenizer = Tokenizer(WordPiece(unk_token="[UNK]")) # END bert_setup_tokenizer # START bert_setup_normalizer from tokenizers import normalizers from tokenizers.normalizers import NFD, Lowercase, StripAccents bert_tokenizer.normalizer = normalizers.Sequence([NFD(), Lowercase(), StripAccents()]) # END bert_setup_normalizer # START bert_setup_pre_tokenizer from tokenizers.pre_tokenizers import Whitespace bert_tokenizer.pre_tokenizer = Whitespace() # END bert_setup_pre_tokenizer # START bert_setup_processor from tokenizers.processors import TemplateProcessing bert_tokenizer.post_processor = TemplateProcessing( single="[CLS] $A [SEP]", pair="[CLS] $A [SEP] $B:1 [SEP]:1", special_tokens=[ ("[CLS]", 1), ("[SEP]", 2), ], ) # END bert_setup_processor # START bert_train_tokenizer from tokenizers.trainers import WordPieceTrainer trainer = WordPieceTrainer(vocab_size=30522, special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"]) files = [f"data/wikitext-103-raw/wiki.{split}.raw" for split in ["test", "train", "valid"]] bert_tokenizer.train(files, trainer) bert_tokenizer.save("data/bert-wiki.json") # END bert_train_tokenizer def test_bert_example(self, doc_pipeline_bert_tokenizer): try: bert_tokenizer = Tokenizer.from_file("data/bert-wiki.json") except Exception: bert_tokenizer = Tokenizer.from_file(doc_pipeline_bert_tokenizer) # START bert_test_decoding output = bert_tokenizer.encode("Welcome to the 🤗 Tokenizers library.") print(output.tokens) # ["[CLS]", "welcome", "to", "the", "[UNK]", "tok", "##eni", "##zer", "##s", "library", ".", "[SEP]"] bert_tokenizer.decode(output.ids) # "welcome to the tok ##eni ##zer ##s library ." # END bert_test_decoding assert bert_tokenizer.decode(output.ids) == "welcome to the tok ##eni ##zer ##s library ." # START bert_proper_decoding from tokenizers import decoders bert_tokenizer.decoder = decoders.WordPiece() bert_tokenizer.decode(output.ids) # "welcome to the tokenizers library." # END bert_proper_decoding assert bert_tokenizer.decode(output.ids) == "welcome to the tokenizers library." if __name__ == "__main__": import os from urllib import request from zipfile import ZipFile disable_printing = False if not os.path.isdir("data/wikitext-103-raw"): print("Downloading wikitext-103...") wiki_text, _ = request.urlretrieve( "https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-103-raw-v1.zip" ) with ZipFile(wiki_text, "r") as z: print("Unzipping in data...") z.extractall("data") print("Now training...") TestPipeline.slow_train()
tokenizers/bindings/python/tests/documentation/test_pipeline.py/0
{ "file_path": "tokenizers/bindings/python/tests/documentation/test_pipeline.py", "repo_id": "tokenizers", "token_count": 3351 }
323
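The tests above only encode single sequences; the `pair` template of `TemplateProcessing` is what assigns type ids. A small illustrative sketch with a hypothetical word-level vocabulary:

```python
from tokenizers import Tokenizer
from tokenizers.models import WordLevel
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.processors import TemplateProcessing

vocab = {"[UNK]": 0, "[CLS]": 1, "[SEP]": 2, "hi": 3, "there": 4}
tokenizer = Tokenizer(WordLevel(vocab, unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()
tokenizer.post_processor = TemplateProcessing(
    single="[CLS] $A [SEP]",
    pair="[CLS] $A [SEP] $B:1 [SEP]:1",
    special_tokens=[("[CLS]", 1), ("[SEP]", 2)],
)

output = tokenizer.encode("hi", "there")  # second argument -> sequence pair
print(output.tokens)    # ["[CLS]", "hi", "[SEP]", "there", "[SEP]"]
print(output.type_ids)  # [0, 0, 0, 1, 1]
```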
# Encode Inputs <tokenizerslangcontent> <python> These types represent all the different kinds of input that a [`~tokenizers.Tokenizer`] accepts when using [`~tokenizers.Tokenizer.encode_batch`]. ## TextEncodeInput[[[[tokenizers.TextEncodeInput]]]] <code>tokenizers.TextEncodeInput</code> Represents a textual input for encoding. Can be either: - A single sequence: [TextInputSequence](/docs/tokenizers/api/input-sequences#tokenizers.TextInputSequence) - A pair of sequences: - A Tuple of [TextInputSequence](/docs/tokenizers/api/input-sequences#tokenizers.TextInputSequence) - Or a List of [TextInputSequence](/docs/tokenizers/api/input-sequences#tokenizers.TextInputSequence) of size 2 alias of `Union[str, Tuple[str, str], List[str]]`. ## PreTokenizedEncodeInput[[[[tokenizers.PreTokenizedEncodeInput]]]] <code>tokenizers.PreTokenizedEncodeInput</code> Represents a pre-tokenized input for encoding. Can be either: - A single sequence: [PreTokenizedInputSequence](/docs/tokenizers/api/input-sequences#tokenizers.PreTokenizedInputSequence) - A pair of sequences: - A Tuple of [PreTokenizedInputSequence](/docs/tokenizers/api/input-sequences#tokenizers.PreTokenizedInputSequence) - Or a List of [PreTokenizedInputSequence](/docs/tokenizers/api/input-sequences#tokenizers.PreTokenizedInputSequence) of size 2 alias of `Union[List[str], Tuple[str], Tuple[Union[List[str], Tuple[str]], Union[List[str], Tuple[str]]], List[Union[List[str], Tuple[str]]]]`. ## EncodeInput[[[[tokenizers.EncodeInput]]]] <code>tokenizers.EncodeInput</code> Represents all the possible types of input for encoding. Can be: - When `is_pretokenized=False`: [TextEncodeInput](#tokenizers.TextEncodeInput) - When `is_pretokenized=True`: [PreTokenizedEncodeInput](#tokenizers.PreTokenizedEncodeInput) alias of `Union[str, Tuple[str, str], List[str], Tuple[str], Tuple[Union[List[str], Tuple[str]], Union[List[str], Tuple[str]]], List[Union[List[str], Tuple[str]]]]`. </python> <rust> The Rust API Reference is available directly on the [Docs.rs](https://docs.rs/tokenizers/latest/tokenizers/) website. </rust> <node> The node API has not been documented yet. </node> </tokenizerslangcontent>
tokenizers/docs/source-doc-builder/api/encode-inputs.mdx/0
{ "file_path": "tokenizers/docs/source-doc-builder/api/encode-inputs.mdx", "repo_id": "tokenizers", "token_count": 716 }
324
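Concretely, the aliases above translate into `encode_batch` calls like the following — a sketch assuming an already-trained tokenizer saved at the hypothetical path `tokenizer.json`:

```python
from tokenizers import Tokenizer

tokenizer = Tokenizer.from_file("tokenizer.json")  # hypothetical checkpoint path

# TextEncodeInput: plain strings and (sequence, pair) tuples can be mixed.
tokenizer.encode_batch(["Hello world", ("A question?", "An answer.")])

# PreTokenizedEncodeInput: lists of words, which require the flag below.
tokenizer.encode_batch([["Hello", "world"], ["Already", "split"]], is_pretokenized=True)
```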
from collections import defaultdict, abc from typing import cast from docutils import nodes from docutils.parsers.rst import Directive import sphinx from sphinx.locale import _ from sphinx.util.docutils import SphinxDirective from sphinx.errors import ExtensionError from conf import languages as LANGUAGES logger = sphinx.util.logging.getLogger(__name__) GLOBALNAME = "$GLOBAL$" def update(d, u): for k, v in u.items(): if isinstance(v, abc.Mapping): d[k] = update(d.get(k, {}), v) else: d[k] = v return d class EntityNode(nodes.General, nodes.Element): pass class EntitiesNode(nodes.General, nodes.Element): pass class AllEntities: def __init__(self): self.entities = defaultdict(dict) @classmethod def install(cls, env): if not hasattr(env, "entity_all_entities"): entities = cls() env.entity_all_entities = entities return env.entity_all_entities def merge(self, other): self.entities.update(other.entities) def purge(self, docname): for env_docname in [GLOBALNAME, docname]: self.entities[env_docname] = dict( [ (name, entity) for name, entity in self.entities[env_docname].items() if entity["docname"] != docname ] ) def _extract_entities(self, nodes): pass def _extract_options(self, nodes): pass def _add_entities(self, entities, language, is_global, docname): scope = GLOBALNAME if is_global else docname for entity in entities: name = f'{language}-{entity["name"]}' content = entity["content"] if name in self.entities[scope]: logger.warning( f'Entity "{name}" has already been defined{" globally" if is_global else ""}', location=docname, ) self.entities[scope][name] = {"docname": docname, "content": content} def _extract_global(self, nodes): for node in nodes: if node.tagname != "field": raise Exception(f"Expected a field, found {node.tagname}") name, _ = node.children if name.tagname != "field_name": raise Exception(f"Expected a field name here, found {name_node.tagname}") if str(name.children[0]) == "global": return True def _extract_entities(self, nodes): entities = [] for node in nodes: if node.tagname != "definition_list_item": raise Exception(f"Expected a list item here, found {node.tagname}") name_node, content_node = node.children if name_node.tagname != "term": raise Exception(f"Expected a term here, found {name_node.tagname}") if content_node.tagname != "definition": raise Exception(f"Expected a definition here, found {content_node.tagname}") name = str(name_node.children[0]) if len(content_node.children) == 1 and content_node.children[0].tagname == "paragraph": content = content_node.children[0].children[0] else: content = content_node entities.append({"name": name, "content": content}) return entities def extract(self, node, docname): is_global = False entities = [] language = None for node in node.children: if language is None and node.tagname != "paragraph": raise Exception(f"Expected language name:\n.. entities:: <LANGUAGE>") elif language is None and node.tagname == "paragraph": language = str(node.children[0]) if language not in LANGUAGES: raise Exception( f'Unknown language "{language}. 
Might be missing a newline after language"' ) elif node.tagname == "field_list": is_global = self._extract_global(node.children) elif node.tagname == "definition_list": entities.extend(self._extract_entities(node.children)) else: raise Exception(f"Expected a list of terms/options, found {node.tagname}") self._add_entities(entities, language, is_global, docname) def resolve_pendings(self, app): env = app.builder.env updates = defaultdict(dict) for env_docname in self.entities.keys(): for name, entity in self.entities[env_docname].items(): docname = entity["docname"] node = entity["content"] for node in node.traverse(sphinx.addnodes.pending_xref): contnode = cast(nodes.TextElement, node[0].deepcopy()) newnode = None typ = node["reftype"] target = node["reftarget"] refdoc = node.get("refdoc", docname) domain = None try: if "refdomain" in node and node["refdomain"]: # let the domain try to resolve the reference try: domain = env.domains[node["refdomain"]] except KeyError as exc: raise NoUri(target, typ) from exc newnode = domain.resolve_xref( env, refdoc, app.builder, typ, target, node, contnode ) except NoUri: newnode = contnode updates[env_docname][name] = { "docname": docname, "content": newnode or contnode, } update(self.entities, updates) def get(self, language, name, docname): name = f"{language}-{name}" if name in self.entities[docname]: return self.entities[docname][name] elif name in self.entities[GLOBALNAME]: return self.entities[GLOBALNAME][name] else: return None class EntitiesDirective(SphinxDirective): has_content = True def run(self): content = nodes.definition_list() self.state.nested_parse(self.content, self.content_offset, content) try: entities = AllEntities.install(self.env) entities.extract(content, self.env.docname) except Exception as err: raise self.error(f'Malformed directive "entities": {err}') return [] def entity_role(name, rawtext, text, lineno, inliner, options={}, content=[]): node = EntityNode() node.entity = text return [node], [] def process_entity_nodes(app, doctree, docname): """ Replace all the entities by their content """ env = app.builder.env entities = AllEntities.install(env) entities.resolve_pendings(app) language = None try: language = next(l for l in LANGUAGES if l in app.tags) except Exception: logger.warning(f"No language tag specified, not resolving entities in {docname}") for node in doctree.traverse(EntityNode): if language is None: node.replace_self(nodes.Text(_(node.entity), _(node.entity))) else: entity = entities.get(language, node.entity, docname) if entity is None: node.replace_self(nodes.Text(_(node.entity), _(node.entity))) logger.warning(f'Entity "{node.entity}" has not been defined', location=node) else: node.replace_self(entity["content"]) def purge_entities(app, env, docname): """ Purge any entity that comes from the given docname """ entities = AllEntities.install(env) entities.purge(docname) def merge_entities(app, env, docnames, other): """ Merge multiple environment entities """ entities = AllEntities.install(env) other_entities = AllEntities.install(other) entities.merge(other_entities) def setup(app): app.add_node(EntityNode) app.add_node(EntitiesNode) app.add_directive("entities", EntitiesDirective) app.add_role("entity", entity_role) app.connect("doctree-resolved", process_entity_nodes) app.connect("env-merge-info", merge_entities) app.connect("env-purge-doc", purge_entities) return { "version": "0.1", "parallel_read_safe": True, "parallel_write_safe": True, }
tokenizers/docs/source/_ext/entities.py/0
{ "file_path": "tokenizers/docs/source/_ext/entities.py", "repo_id": "tokenizers", "token_count": 4032 }
325
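The `update` helper at the top of the extension does a recursive merge rather than `dict.update`'s shallow replacement, which is what lets `resolve_pendings` patch nested per-document entries without clobbering their siblings. In isolation, with made-up sample data:

```python
from collections import abc

def update(d, u):  # standalone copy of the helper above
    for k, v in u.items():
        if isinstance(v, abc.Mapping):
            d[k] = update(d.get(k, {}), v)
        else:
            d[k] = v
    return d

base = {"quicktour": {"python-Tokenizer": "old"}}
patch = {"quicktour": {"python-Encoding": "new"}}
print(update(base, patch))
# {'quicktour': {'python-Tokenizer': 'old', 'python-Encoding': 'new'}}
# A plain base.update(patch) would have replaced the whole "quicktour" mapping.
```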
.. entities:: python
    :global:

    class
        class
    classmethod
        class method
    Tokenizer
        :class:`~tokenizers.Tokenizer`
    Tokenizer.train
        :meth:`~tokenizers.Tokenizer.train`
    Tokenizer.save
        :meth:`~tokenizers.Tokenizer.save`
    Tokenizer.from_file
        :meth:`~tokenizers.Tokenizer.from_file`
    Tokenizer.encode
        :meth:`~tokenizers.Tokenizer.encode`
    Tokenizer.encode_batch
        :meth:`~tokenizers.Tokenizer.encode_batch`
    Tokenizer.decode
        :meth:`~tokenizers.Tokenizer.decode`
    Tokenizer.decode_batch
        :meth:`~tokenizers.Tokenizer.decode_batch`
    Tokenizer.token_to_id
        :meth:`~tokenizers.Tokenizer.token_to_id`
    Tokenizer.enable_padding
        :meth:`~tokenizers.Tokenizer.enable_padding`
    Encoding
        :class:`~tokenizers.Encoding`
    TemplateProcessing
        :class:`~tokenizers.processors.TemplateProcessing`
    Normalizer
        :class:`~tokenizers.normalizers.Normalizer`
    normalizers.Sequence
        :class:`~tokenizers.normalizers.Sequence`
    pre_tokenizers.Whitespace
        :class:`~tokenizers.pre_tokenizers.Whitespace`
    PreTokenizer
        :class:`~tokenizers.pre_tokenizers.PreTokenizer`
    models.BPE
        :class:`~tokenizers.models.BPE`
    models.Unigram
        :class:`~tokenizers.models.Unigram`
    models.WordLevel
        :class:`~tokenizers.models.WordLevel`
    models.WordPiece
        :class:`~tokenizers.models.WordPiece`
    Decoder
        :class:`~tokenizers.decoders.Decoder`

.. entities:: rust
    :global:

    class
        struct
    classmethod
        static method
    Tokenizer
        :rust_struct:`~tokenizers::tokenizer::Tokenizer`
    Tokenizer.train
        :rust_meth:`~tokenizers::tokenizer::Tokenizer::train`
    Tokenizer.save
        :rust_meth:`~tokenizers::tokenizer::Tokenizer::save`
    Tokenizer.from_file
        :rust_meth:`~tokenizers::tokenizer::Tokenizer::from_file`
    Tokenizer.encode
        :rust_meth:`~tokenizers::tokenizer::Tokenizer::encode`
    Tokenizer.encode_batch
        :rust_meth:`~tokenizers::tokenizer::Tokenizer::encode_batch`
    Tokenizer.decode
        :rust_meth:`~tokenizers::tokenizer::Tokenizer::decode`
    Tokenizer.decode_batch
        :rust_meth:`~tokenizers::tokenizer::Tokenizer::decode_batch`
    Tokenizer.token_to_id
        :rust_meth:`~tokenizers::tokenizer::Tokenizer::token_to_id`
    Tokenizer.enable_padding
        :rust_meth:`~tokenizers::tokenizer::Tokenizer::enable_padding`
    Encoding
        :rust_struct:`~tokenizers::tokenizer::Encoding`
    TemplateProcessing
        :rust_struct:`~tokenizers::processors::template::TemplateProcessing`
    Normalizer
        :rust_trait:`~tokenizers::tokenizer::Normalizer`
    normalizers.Sequence
        :rust_struct:`~tokenizers::normalizers::utils::Sequence`
    pre_tokenizers.Whitespace
        :rust_struct:`~tokenizers::pre_tokenizers::whitespace::Whitespace`
    PreTokenizer
        :rust_trait:`~tokenizers::tokenizer::PreTokenizer`
    models.BPE
        :rust_struct:`~tokenizers::models::bpe::BPE`
    models.Unigram
        :rust_struct:`~tokenizers::models::unigram::Unigram`
    models.WordLevel
        :rust_struct:`~tokenizers::models::wordlevel::WordLevel`
    models.WordPiece
        :rust_struct:`~tokenizers::models::wordpiece::WordPiece`
    Decoder
        :rust_trait:`~tokenizers::tokenizer::Decoder`

.. entities:: node
    :global:

    class
        class
    classmethod
        static method
    Tokenizer
        :obj:`Tokenizer`
    Tokenizer.train
        :obj:`Tokenizer.train()`
    Tokenizer.save
        :obj:`Tokenizer.save()`
    Tokenizer.from_file
        :obj:`Tokenizer.fromFile()`
    Tokenizer.encode
        :obj:`Tokenizer.encode()`
    Tokenizer.encode_batch
        :obj:`Tokenizer.encodeBatch()`
    Tokenizer.decode
        :obj:`Tokenizer.decode()`
    Tokenizer.decode_batch
        :obj:`Tokenizer.decodeBatch()`
    Tokenizer.token_to_id
        :obj:`Tokenizer.tokenToId()`
    Tokenizer.enable_padding
        :obj:`Tokenizer.setPadding()`
    Encoding
        :obj:`Encoding`
    TemplateProcessing
        :obj:`TemplateProcessing`
    Normalizer
        :obj:`Normalizer`
    normalizers.Sequence
        :obj:`Sequence`
    pre_tokenizers.Whitespace
        :obj:`Whitespace`
    PreTokenizer
        :obj:`PreTokenizer`
    models.BPE
        :obj:`BPE`
    models.Unigram
        :obj:`Unigram`
    models.WordLevel
        :obj:`WordLevel`
    models.WordPiece
        :obj:`WordPiece`
    Decoder
        :obj:`Decoder`
tokenizers/docs/source/entities.inc/0
{ "file_path": "tokenizers/docs/source/entities.inc", "repo_id": "tokenizers", "token_count": 2078 }
326
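The lookup semantics behind these tables: the extension mangles names as `<language>-<name>`, and document-local definitions shadow the `:global:` ones. A toy sketch of that resolution order (the sample entries are invented):

```python
GLOBAL = "$GLOBAL$"

# Mirrors AllEntities.get from the extension: look in the current
# document's scope first, then fall back to the global scope.
entities = {
    GLOBAL: {"python-Tokenizer": ":class:`~tokenizers.Tokenizer`"},
    "quicktour": {"python-Tokenizer": "a doc-local override"},
}

def get(language, name, docname):
    key = f"{language}-{name}"
    return entities.get(docname, {}).get(key, entities[GLOBAL].get(key))

print(get("python", "Tokenizer", "quicktour"))  # "a doc-local override"
print(get("python", "Tokenizer", "pipeline"))   # ":class:`~tokenizers.Tokenizer`"
```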