Dataset columns: text (string, length 5 to 631k), id (string, length 14 to 178), metadata (dict), __index_level_0__ (int64, range 0 to 647).
{ "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [], "seed": null, "tokens": [ { "id": 510, "logprob": -0.63183594, "special": false, "text": "The" }, { "id": 3159, "logprob": -0.5390625, "special": false, "text": " word" }, { "id": 346, "logprob": -0.045684814, "special": false, "text": " \"" }, { "id": 6441, "logprob": -0.002090454, "special": false, "text": "mem" }, { "id": 70, "logprob": -1.3589859e-05, "special": false, "text": "e" }, { "id": 3, "logprob": -0.0009455681, "special": false, "text": "\"" }, { "id": 369, "logprob": -0.088012695, "special": false, "text": " was" }, { "id": 806, "logprob": -0.12585449, "special": false, "text": " first" }, { "id": 908, "logprob": -0.017196655, "special": false, "text": " used" }, { "id": 275, "logprob": -0.49731445, "special": false, "text": " in" } ] }, "generated_text": "The word \"meme\" was first used in" }
text-generation-inference/integration-tests/models/__snapshots__/test_flash_neox_sharded/test_flash_neox.json/0
{ "file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_flash_neox_sharded/test_flash_neox.json", "repo_id": "text-generation-inference", "token_count": 860 }
303
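Each snapshot row above stores one natural-log probability per generated token, so the joint probability of a completion is the exponential of their sum. A minimal sketch of that bookkeeping (the token list is copied from the snapshot; the helper name is illustrative):

import math

def sequence_probability(tokens):
    # Each entry's "logprob" is the natural-log probability the model
    # assigned to that token; summing then exponentiating yields the
    # joint probability of the whole completion.
    return math.exp(sum(t["logprob"] for t in tokens))

tokens = [
    {"logprob": -0.63183594, "text": "The"},
    {"logprob": -0.5390625, "text": " word"},
    {"logprob": -0.045684814, "text": " \""},
]
print(sequence_probability(tokens))  # ~0.296 for these first three tokens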
{ "choices": [ { "finish_reason": "stop", "index": 0, "logprobs": null, "message": { "content": "The image depicts an anthropomorphic rabbit character wearing an intricate space suit, which includes a helmet with a starry face pattern and multiple suitors. The rabbit's ears are significantly large and upright, and it has a hitchhiker-like star antennas on its chest. The background is a reddish-orange, rocky landscape, suggesting a Martian environment. The suit has various buttons, a red button on the chest, and a reflective or illuminated dome on the head. The overall color scheme is dominated by shades of red, orange, and gray, giving a sense of a rugged, otherworldly setting.", "name": null, "role": "assistant", "tool_calls": null }, "usage": null } ], "created": 1738342872, "id": "", "model": "Qwen/Qwen2.5-VL-3B-Instruct", "object": "chat.completion", "system_fingerprint": "3.0.2-dev0-native", "usage": { "completion_tokens": 121, "prompt_tokens": 1363, "total_tokens": 1484 } }
text-generation-inference/integration-tests/models/__snapshots__/test_flash_qwen2_5_vl/test_flash_qwen2_5_vl_simple.json/0
{ "file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_flash_qwen2_5_vl/test_flash_qwen2_5_vl_simple.json", "repo_id": "text-generation-inference", "token_count": 388 }
304
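The usage block in these chat.completion snapshots is self-consistent: prompt plus completion tokens equals the total. A quick sanity check against the payload above, plus the usual way to pull out the assistant text (field names are exactly those in the JSON):

usage = {"completion_tokens": 121, "prompt_tokens": 1363, "total_tokens": 1484}
assert usage["prompt_tokens"] + usage["completion_tokens"] == usage["total_tokens"]

def first_message(payload):
    # OpenAI-style chat.completion body: one choice, one message.
    return payload["choices"][0]["message"]["content"]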
[ { "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [], "seed": null, "tokens": [ { "id": 222, "logprob": -1.9091797, "special": false, "text": "\n" }, { "id": 222, "logprob": -1.0478516, "special": false, "text": "\n" }, { "id": 40, "logprob": -3.015625, "special": false, "text": "#" }, { "id": 494, "logprob": -1.4228516, "special": false, "text": " +" }, { "id": 447, "logprob": -1.1025391, "special": false, "text": " [" }, { "id": 9009, "logprob": -0.0008444786, "special": false, "text": "markdown" }, { "id": 98, "logprob": -8.8095665e-05, "special": false, "text": "]" }, { "id": 37402, "logprob": -0.5810547, "special": false, "text": " slideshow" }, { "id": 8492, "logprob": -0.00022864342, "special": false, "text": "={\"" }, { "id": 7277, "logprob": -0.00030994415, "special": false, "text": "slide" } ], "top_tokens": null }, "generated_text": "\n\n# + [markdown] slideshow={\"slide" }, { "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [], "seed": null, "tokens": [ { "id": 222, "logprob": -1.9091797, "special": false, "text": "\n" }, { "id": 222, "logprob": -1.0478516, "special": false, "text": "\n" }, { "id": 40, "logprob": -3.015625, "special": false, "text": "#" }, { "id": 494, "logprob": -1.4228516, "special": false, "text": " +" }, { "id": 447, "logprob": -1.1025391, "special": false, "text": " [" }, { "id": 9009, "logprob": -0.0008444786, "special": false, "text": "markdown" }, { "id": 98, "logprob": -8.8095665e-05, "special": false, "text": "]" }, { "id": 37402, "logprob": -0.5810547, "special": false, "text": " slideshow" }, { "id": 8492, "logprob": -0.00022864342, "special": false, "text": "={\"" }, { "id": 7277, "logprob": -0.00030994415, "special": false, "text": "slide" } ], "top_tokens": null }, "generated_text": "\n\n# + [markdown] slideshow={\"slide" }, { "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [], "seed": null, "tokens": [ { "id": 222, "logprob": -1.9091797, "special": false, "text": "\n" }, { "id": 222, "logprob": -1.0478516, "special": false, "text": "\n" }, { "id": 40, "logprob": -3.015625, "special": false, "text": "#" }, { "id": 494, "logprob": -1.4228516, "special": false, "text": " +" }, { "id": 447, "logprob": -1.1025391, "special": false, "text": " [" }, { "id": 9009, "logprob": -0.0008444786, "special": false, "text": "markdown" }, { "id": 98, "logprob": -8.8095665e-05, "special": false, "text": "]" }, { "id": 37402, "logprob": -0.5810547, "special": false, "text": " slideshow" }, { "id": 8492, "logprob": -0.00022864342, "special": false, "text": "={\"" }, { "id": 7277, "logprob": -0.00030994415, "special": false, "text": "slide" } ], "top_tokens": null }, "generated_text": "\n\n# + [markdown] slideshow={\"slide" }, { "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [], "seed": null, "tokens": [ { "id": 222, "logprob": -1.9091797, "special": false, "text": "\n" }, { "id": 222, "logprob": -1.0478516, "special": false, "text": "\n" }, { "id": 40, "logprob": -3.015625, "special": false, "text": "#" }, { "id": 494, "logprob": -1.4228516, "special": false, "text": " +" }, { "id": 447, "logprob": -1.1025391, "special": false, "text": " [" }, { "id": 9009, "logprob": -0.0008444786, "special": false, "text": "markdown" }, { "id": 98, "logprob": -8.8095665e-05, "special": false, "text": "]" }, { "id": 37402, "logprob": -0.5810547, "special": false, 
"text": " slideshow" }, { "id": 8492, "logprob": -0.00022864342, "special": false, "text": "={\"" }, { "id": 7277, "logprob": -0.00030994415, "special": false, "text": "slide" } ], "top_tokens": null }, "generated_text": "\n\n# + [markdown] slideshow={\"slide" } ]
text-generation-inference/integration-tests/models/__snapshots__/test_flash_starcoder2_lora/test_flash_starcoder2_load.json/0
{ "file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_flash_starcoder2_lora/test_flash_starcoder2_load.json", "repo_id": "text-generation-inference", "token_count": 4084 }
305
{ "details": { "best_of_sequences": null, "finish_reason": "eos_token", "generated_tokens": 9, "prefill": [], "seed": null, "tokens": [ { "id": 2684, "logprob": -0.24902344, "special": false, "text": " There" }, { "id": 374, "logprob": -0.0703125, "special": false, "text": " is" }, { "id": 264, "logprob": -0.23535156, "special": false, "text": " a" }, { "id": 35372, "logprob": -0.125, "special": false, "text": " statue" }, { "id": 304, "logprob": -0.30273438, "special": false, "text": " in" }, { "id": 279, "logprob": -0.20507812, "special": false, "text": " the" }, { "id": 2217, "logprob": -0.076171875, "special": false, "text": " image" }, { "id": 13, "logprob": -0.053710938, "special": false, "text": "." }, { "id": 128258, "logprob": -0.011352539, "special": true, "text": "<end_of_utterance>" } ], "top_tokens": null }, "generated_text": " There is a statue in the image." }
text-generation-inference/integration-tests/models/__snapshots__/test_idefics3/test_flash_idefics3_next_simple_url.json/0
{ "file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_idefics3/test_flash_idefics3_next_simple_url.json", "repo_id": "text-generation-inference", "token_count": 796 }
306
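Note how generated_text concatenates only the non-special token texts: the trailing <end_of_utterance> token is marked "special": true and is excluded, and the run finishes with finish_reason "eos_token". A sketch of that reconstruction using the snapshot's own token list:

tokens = [
    {"text": " There", "special": False},
    {"text": " is", "special": False},
    {"text": " a", "special": False},
    {"text": " statue", "special": False},
    {"text": " in", "special": False},
    {"text": " the", "special": False},
    {"text": " image", "special": False},
    {"text": ".", "special": False},
    {"text": "<end_of_utterance>", "special": True},
]
generated_text = "".join(t["text"] for t in tokens if not t["special"])
assert generated_text == " There is a statue in the image."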
{ "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 17, "prefill": [ { "id": 1276, "logprob": null, "text": "What" }, { "id": 310, "logprob": -1.5117188, "text": " is" }, { "id": 18147, "logprob": -8.96875, "text": " Deep" }, { "id": 20727, "logprob": -1.953125, "text": " Learning" }, { "id": 32, "logprob": -0.94189453, "text": "?" } ], "seed": null, "tokens": [ { "id": 428, "logprob": -1.5830078, "special": false, "text": " -" }, { "id": 18147, "logprob": -3.3105469, "special": false, "text": " Deep" }, { "id": 20727, "logprob": -0.3215332, "special": false, "text": " Learning" }, { "id": 187, "logprob": -2.5566406, "special": false, "text": "\n" }, { "id": 30763, "logprob": -1.6074219, "special": false, "text": "Deep" }, { "id": 20727, "logprob": -0.69628906, "special": false, "text": " Learning" }, { "id": 310, "logprob": -0.6923828, "special": false, "text": " is" }, { "id": 247, "logprob": -0.5263672, "special": false, "text": " a" }, { "id": 749, "logprob": -1.8544922, "special": false, "text": " sub" }, { "id": 3423, "logprob": -0.6118164, "special": false, "text": "field" }, { "id": 273, "logprob": -0.055877686, "special": false, "text": " of" }, { "id": 5145, "logprob": -1.0537109, "special": false, "text": " machine" }, { "id": 4715, "logprob": -0.0115737915, "special": false, "text": " learning" }, { "id": 326, "logprob": -0.9111328, "special": false, "text": " that" }, { "id": 4648, "logprob": -1.4589844, "special": false, "text": " uses" }, { "id": 13345, "logprob": -1.4853516, "special": false, "text": " artificial" }, { "id": 11454, "logprob": -0.021636963, "special": false, "text": " neural" } ] }, "generated_text": " - Deep Learning\nDeep Learning is a subfield of machine learning that uses artificial neural" }
text-generation-inference/integration-tests/models/__snapshots__/test_mpt/test_mpt.json/0
{ "file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_mpt/test_mpt.json", "repo_id": "text-generation-inference", "token_count": 1691 }
307
{ "choices": [ { "finish_reason": "stop", "index": 0, "logprobs": null, "message": { "content": "The image does not depict a dog; it shows a cow standing on a beach. Therefore, there is no breed of a dog to identify.", "name": null, "role": "assistant", "tool_calls": null }, "usage": null } ], "created": 1743863056, "id": "", "model": "ll-re/Llama-4-Scout-17B-16E-Instruct", "object": "chat.completion", "system_fingerprint": "3.2.1-dev0-native", "usage": { "completion_tokens": 30, "prompt_tokens": 168, "total_tokens": 198 } }
text-generation-inference/integration-tests/models/__snapshots__/test_transformers_llama4/test_flash_llama4_image_cow_dog.json/0
{ "file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_transformers_llama4/test_flash_llama4_image_cow_dog.json", "repo_id": "text-generation-inference", "token_count": 298 }
308
import pytest @pytest.fixture(scope="module") def flash_deepseek_v2_handle(launcher): with launcher("deepseek-ai/DeepSeek-V2-Lite", num_shard=2) as handle: yield handle @pytest.fixture(scope="module") async def flash_deepseek_v2(flash_deepseek_v2_handle): await flash_deepseek_v2_handle.health(300) return flash_deepseek_v2_handle.client @pytest.mark.release @pytest.mark.asyncio @pytest.mark.private async def test_flash_deepseek_v2(flash_deepseek_v2, response_snapshot): response = await flash_deepseek_v2.generate( "Test request", max_new_tokens=10, decoder_input_details=True ) assert response == response_snapshot @pytest.mark.release @pytest.mark.asyncio @pytest.mark.private async def test_flash_deepseek_v2_all_params(flash_deepseek_v2, response_snapshot): response = await flash_deepseek_v2.generate( "Test request", max_new_tokens=10, repetition_penalty=1.2, return_full_text=True, stop_sequences=["test"], temperature=0.5, top_p=0.9, top_k=10, truncate=5, typical_p=0.9, watermark=True, decoder_input_details=True, seed=0, ) assert response == response_snapshot @pytest.mark.release @pytest.mark.asyncio @pytest.mark.private async def test_flash_deepseek_v2_load( flash_deepseek_v2, generate_load, response_snapshot ): responses = await generate_load( flash_deepseek_v2, "Test request", max_new_tokens=10, n=4 ) assert len(responses) == 4 assert all([r.generated_text == responses[0].generated_text for r in responses]) assert responses == response_snapshot
text-generation-inference/integration-tests/models/test_flash_deepseek_v2.py/0
{ "file_path": "text-generation-inference/integration-tests/models/test_flash_deepseek_v2.py", "repo_id": "text-generation-inference", "token_count": 710 }
309
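The generate_load fixture used by these tests comes from the integration-test harness; a plausible sketch of its behavior, assuming an async client whose generate method mirrors the calls above (this helper is an illustration, not the repo's actual fixture):

import asyncio

async def generate_load(client, prompt, max_new_tokens, n):
    # Fan out n identical generation requests concurrently; with greedy
    # decoding every response should carry the same generated_text,
    # which is exactly what the load tests assert.
    tasks = [
        client.generate(prompt, max_new_tokens=max_new_tokens)
        for _ in range(n)
    ]
    return await asyncio.gather(*tasks)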
import pytest @pytest.fixture(scope="module") def flash_starcoder_handle(launcher): with launcher("bigcode/starcoder", num_shard=2) as handle: yield handle @pytest.fixture(scope="module") async def flash_starcoder(flash_starcoder_handle): await flash_starcoder_handle.health(300) return flash_starcoder_handle.client @pytest.mark.release @pytest.mark.asyncio @pytest.mark.private async def test_flash_starcoder(flash_starcoder, response_snapshot): response = await flash_starcoder.generate( "def print_hello", max_new_tokens=10, decoder_input_details=True ) assert response.details.generated_tokens == 10 assert response == response_snapshot @pytest.mark.release @pytest.mark.asyncio @pytest.mark.private async def test_flash_starcoder_default_params(flash_starcoder, response_snapshot): response = await flash_starcoder.generate( "def print_hello", max_new_tokens=60, temperature=0.2, top_p=0.95, decoder_input_details=True, seed=0, ) assert response.details.generated_tokens == 60 assert response == response_snapshot @pytest.mark.release @pytest.mark.asyncio @pytest.mark.private async def test_flash_starcoder_load(flash_starcoder, generate_load, response_snapshot): responses = await generate_load( flash_starcoder, "def print_hello", max_new_tokens=10, n=4 ) assert len(responses) == 4 assert all([r.generated_text == responses[0].generated_text for r in responses]) assert responses == response_snapshot
text-generation-inference/integration-tests/models/test_flash_starcoder.py/0
{ "file_path": "text-generation-inference/integration-tests/models/test_flash_starcoder.py", "repo_id": "text-generation-inference", "token_count": 602 }
310
import pytest @pytest.fixture(scope="module") def neox_handle(launcher): with launcher( "stabilityai/stablelm-tuned-alpha-3b", num_shard=1, use_flash_attention=False ) as handle: yield handle @pytest.fixture(scope="module") async def neox(neox_handle): await neox_handle.health(300) return neox_handle.client @pytest.mark.release @pytest.mark.skip @pytest.mark.asyncio async def test_neox(neox, response_snapshot): response = await neox.generate( "<|USER|>What's your mood today?<|ASSISTANT|>", max_new_tokens=10, decoder_input_details=True, ) assert response.details.generated_tokens == 10 assert response == response_snapshot @pytest.mark.release @pytest.mark.skip @pytest.mark.asyncio async def test_neox_load(neox, generate_load, response_snapshot): responses = await generate_load( neox, "<|USER|>What's your mood today?<|ASSISTANT|>", max_new_tokens=10, n=4, ) generated_texts = [r.generated_text for r in responses] assert len(generated_texts) == 4 assert all( [text == generated_texts[0] for text in generated_texts] ) assert responses == response_snapshot
text-generation-inference/integration-tests/models/test_neox.py/0
{ "file_path": "text-generation-inference/integration-tests/models/test_neox.py", "repo_id": "text-generation-inference", "token_count": 515 }
311
use std::fmt; use std::process::Command; pub(crate) struct Env { cargo_target: &'static str, cargo_version: &'static str, git_sha: &'static str, docker_label: &'static str, nvidia_env: String, xpu_env: String, hpu_env: String, } impl Env { pub fn new() -> Self { let nvidia_env = nvidia_smi(); let xpu_env = xpu_smi(); let hpu_env = hl_smi(); Self { nvidia_env: nvidia_env.unwrap_or("N/A".to_string()), xpu_env: xpu_env.unwrap_or("N/A".to_string()), hpu_env: hpu_env.unwrap_or("N/A".to_string()), cargo_target: env!("VERGEN_CARGO_TARGET_TRIPLE"), cargo_version: env!("VERGEN_RUSTC_SEMVER"), git_sha: option_env!("VERGEN_GIT_SHA").unwrap_or("N/A"), docker_label: option_env!("DOCKER_LABEL").unwrap_or("N/A"), } } } impl fmt::Display for Env { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { writeln!(f, "Runtime environment:")?; writeln!(f, "Target: {}", self.cargo_target)?; writeln!(f, "Cargo version: {}", self.cargo_version)?; writeln!(f, "Commit sha: {}", self.git_sha)?; writeln!(f, "Docker label: {}", self.docker_label)?; writeln!(f, "nvidia-smi:\n{}", self.nvidia_env)?; writeln!(f, "xpu-smi:\n{}", self.xpu_env)?; writeln!(f, "hpu-smi:\n{}", self.hpu_env)?; Ok(()) } } fn nvidia_smi() -> Option<String> { let output = Command::new("nvidia-smi").output().ok()?; let nvidia_smi = String::from_utf8(output.stdout).ok()?; let output = nvidia_smi.replace('\n', "\n "); Some(output.trim().to_string()) } fn xpu_smi() -> Option<String> { let output = Command::new("xpu-smi").arg("discovery").output().ok()?; let xpu_smi = String::from_utf8(output.stdout).ok()?; let output = xpu_smi.replace('\n', "\n "); Some(output.trim().to_string()) } fn hl_smi() -> Option<String> { let output = Command::new("hl-smi").output().ok()?; let hl_smi = String::from_utf8(output.stdout).ok()?; let output = hl_smi.replace('\n', "\n "); Some(output.trim().to_string()) }
text-generation-inference/launcher/src/env_runtime.rs/0
{ "file_path": "text-generation-inference/launcher/src/env_runtime.rs", "repo_id": "text-generation-inference", "token_count": 1067 }
312
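The Rust helper above shells out to nvidia-smi, xpu-smi discovery, and hl-smi, indents continuation lines, and falls back to "N/A" when a tool is absent. The same capture logic expressed in Python, as a sketch (the tool names come from the file; everything else is illustrative):

import subprocess

def smi_output(command):
    # Mirror env_runtime.rs: run the tool, indent continuation lines for
    # display, and degrade to "N/A" when the binary is missing.
    try:
        out = subprocess.run(command, capture_output=True, text=True).stdout
    except FileNotFoundError:
        return "N/A"
    return out.replace("\n", "\n ").strip()

print("nvidia-smi:\n", smi_output(["nvidia-smi"]))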
{ lib, mkShell, black, cmake, isort, ninja, which, cudaPackages, openssl, pkg-config, poetry, protobuf, python3, pyright, redocly, ruff, rust-bin, server, # Enable dependencies for building CUDA packages. Useful for e.g. # developing marlin/moe-kernels in-place. withCuda ? false, }: mkShell { nativeBuildInputs = [ black isort pkg-config poetry (rust-bin.stable.latest.default.override { extensions = [ "rust-analyzer" "rust-src" ]; }) protobuf pyright redocly ruff ] ++ (lib.optionals withCuda [ cmake ninja which # For most Torch-based extensions, setting CUDA_HOME is enough, but # some custom CMake builds (e.g. vLLM) also need to have nvcc in PATH. cudaPackages.cuda_nvcc ]); buildInputs = [ openssl.dev ] ++ (with python3.pkgs; [ venvShellHook docker pip ipdb click openai pytest pytest-asyncio syrupy ]) ++ (lib.optionals withCuda ( with cudaPackages; [ cuda_cccl cuda_cudart cuda_nvrtc cuda_nvtx cuda_profiler_api cudnn libcublas libcusolver libcusparse ] )); inputsFrom = [ server ]; env = lib.optionalAttrs withCuda { CUDA_HOME = "${lib.getDev cudaPackages.cuda_nvcc}"; TORCH_CUDA_ARCH_LIST = lib.concatStringsSep ";" python3.pkgs.torch.cudaCapabilities; }; venvDir = "./.venv"; postVenvCreation = '' unset SOURCE_DATE_EPOCH ( cd server ; python -m pip install --no-build-isolation --no-dependencies -e . ) ( cd clients/python ; python -m pip install --no-dependencies -e . ) ''; postShellHook = '' unset SOURCE_DATE_EPOCH export PATH=${cudaPackages.backendStdenv.cc}/bin:$PATH:~/.cargo/bin '' # At various points in time, the latest gcc supported by CUDA differs # from the default version in nixpkgs. A lot of the dependencies in # the impure environment pull in the default gcc from nixpkgs, so we # end up with the CUDA-supported gcc and the nixpkgs default gcc in # the path. To ensure that we can build CUDA kernels, put the CUDA # first in the path. It's a hack, but it works. + lib.optionalString withCuda '' export PATH=${cudaPackages.backendStdenv.cc}/bin:$PATH ''; }
text-generation-inference/nix/impure-shell.nix/0
{ "file_path": "text-generation-inference/nix/impure-shell.nix", "repo_id": "text-generation-inference", "token_count": 1119 }
313
use crate::infer::Infer; use crate::server::{chat_completions, compat_generate, completions, ComputeType}; use crate::{ ChatCompletion, ChatCompletionChunk, ChatRequest, Chunk, CompatGenerateRequest, CompletionFinal, CompletionRequest, ErrorResponse, GenerateResponse, Info, StreamResponse, }; use axum::extract::Extension; use axum::http::StatusCode; use axum::response::Response; use axum::Json; use serde::{Deserialize, Serialize}; use tracing::instrument; use utoipa::ToSchema; #[derive(Clone, Deserialize, ToSchema)] #[serde(untagged)] pub(crate) enum SagemakerRequest { Generate(CompatGenerateRequest), Chat(ChatRequest), Completion(CompletionRequest), } // Used for OpenAPI specs #[allow(dead_code)] #[derive(Serialize, ToSchema)] #[serde(untagged)] pub(crate) enum SagemakerResponse { Generate(GenerateResponse), Chat(ChatCompletion), Completion(CompletionFinal), } // Used for OpenAPI specs #[allow(dead_code)] #[derive(Serialize, ToSchema)] #[serde(untagged)] pub(crate) enum SagemakerStreamResponse { Generate(StreamResponse), Chat(ChatCompletionChunk), Completion(Chunk), } /// Generate tokens from Sagemaker request #[utoipa::path( post, tag = "Text Generation Inference", path = "/invocations", request_body = SagemakerRequest, responses( (status = 200, description = "Generated Chat Completion", content( ("application/json" = SagemakerResponse), ("text/event-stream" = SagemakerStreamResponse), )), (status = 424, description = "Generation Error", body = ErrorResponse, example = json ! ({"error": "Request failed during generation", "error_type": "generation"})), (status = 429, description = "Model is overloaded", body = ErrorResponse, example = json ! ({"error": "Model is overloaded", "error_type": "overloaded"})), (status = 422, description = "Input validation error", body = ErrorResponse, example = json ! ({"error": "Input validation error", "error_type": "validation"})), (status = 500, description = "Incomplete generation", body = ErrorResponse, example = json ! ({"error": "Incomplete generation", "error_type": "incomplete_generation"})), ) )] #[instrument(skip_all)] pub(crate) async fn sagemaker_compatibility( default_return_full_text: Extension<bool>, infer: Extension<Infer>, compute_type: Extension<ComputeType>, context: Extension<Option<opentelemetry::Context>>, info: Extension<Info>, Json(req): Json<SagemakerRequest>, ) -> Result<Response, (StatusCode, Json<ErrorResponse>)> { match req { SagemakerRequest::Generate(req) => { compat_generate( default_return_full_text, infer, compute_type, context, Json(req), ) .await } SagemakerRequest::Chat(req) => { chat_completions(infer, compute_type, info, context, Json(req)).await } SagemakerRequest::Completion(req) => { completions(infer, compute_type, info, context, Json(req)).await } } }
text-generation-inference/router/src/sagemaker.rs/0
{ "file_path": "text-generation-inference/router/src/sagemaker.rs", "repo_id": "text-generation-inference", "token_count": 1113 }
314
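Because SagemakerRequest is an untagged serde enum, the single /invocations route accepts any of the three body shapes and dispatches to the matching handler. A hedged client-side example hitting that route (host, port, and field values are placeholders):

import requests

# This body deserializes as SagemakerRequest::Chat and is routed to
# chat_completions; a generate- or completion-shaped body would work too.
body = {
    "messages": [{"role": "user", "content": "What is deep learning?"}],
    "max_tokens": 32,
}
resp = requests.post("http://localhost:8080/invocations", json=body, timeout=60)
print(resp.json()["choices"][0]["message"]["content"])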
commit_rocm := de990cd12537f78f74e40b5c8ee1a62d63d734dd build-vllm-rocm: if [ ! -d 'vllm' ]; then \ pip install -U ninja packaging --no-cache-dir && \ git clone https://github.com/mht-sharma/vllm.git vllm; \ fi cd vllm && git fetch && git checkout $(commit_rocm) && \ PYTORCH_ROCM_ARCH="gfx90a;gfx942" python3 setup.py bdist_wheel --dist-dir=dist install-vllm-rocm: build-vllm-rocm cd vllm && git fetch && git checkout $(commit_rocm)
text-generation-inference/server/Makefile-vllm/0
{ "file_path": "text-generation-inference/server/Makefile-vllm", "repo_id": "text-generation-inference", "token_count": 201 }
315
// Adapted from turboderp exllama: https://github.com/turboderp/exllama #ifndef _hip_compat_cuh #define _hip_compat_cuh // Workaround for a bug in hipamd, backported from upstream, this is fixed in ROCm 5.6. __device__ __forceinline__ __half __compat_hrcp(__half x) { return __half_raw{ static_cast<_Float16>(__builtin_amdgcn_rcph(static_cast<__half_raw>(x).data))}; } __device__ __forceinline__ __half2 __compat_h2rcp(__half2 x) { return _Float16_2{ _Float16_2{static_cast<_Float16>(1.0f), static_cast<_Float16>(1.0f)} / x.data}; } #define hrcp __compat_hrcp #define h2rcp __compat_h2rcp // Automatic conversion of hipblasHgemm doesn't convert half to hipblasHalf. __host__ __forceinline__ hipblasStatus_t __compat_hipblasHgemm(hipblasHandle_t handle, hipblasOperation_t transA, hipblasOperation_t transB, int m, int n, int k, const half* alpha, const half* AP, int lda, const half* BP, int ldb, const half* beta, half* CP, int ldc) { return hipblasHgemm(handle, transA, transB, m, n, k, reinterpret_cast<const hipblasHalf *>(alpha), reinterpret_cast<const hipblasHalf *>(AP), lda, reinterpret_cast<const hipblasHalf *>(BP), ldb, reinterpret_cast<const hipblasHalf *>(beta), reinterpret_cast<hipblasHalf *>(CP), ldc); } #define hipblasHgemm __compat_hipblasHgemm // Previous version of PyTorch were converting to rocBLAS instead of hipBLAS. #define rocblas_handle hipblasHandle_t #define rocblas_operation_none HIPBLAS_OP_N #define rocblas_get_stream hipblasGetStream #define rocblas_set_stream hipblasSetStream #define rocblas_hgemm __compat_hipblasHgemm #endif
text-generation-inference/server/exllama_kernels/exllama_kernels/hip_compat.cuh/0
{ "file_path": "text-generation-inference/server/exllama_kernels/exllama_kernels/hip_compat.cuh", "repo_id": "text-generation-inference", "token_count": 1710 }
316
#ifndef _qdq_3_cuh #define _qdq_3_cuh #include "qdq_util.cuh" #include "../../config.h" #if QMODE_3BIT == 1 // Permutation: // // v9997775 55333111 u8886664 44222000 (u, v lsb) // vjjjhhhf ffdddbbb uiiiggge eecccaaa // vtttrrrp ppnnnlll usssqqqo oommmkkk __forceinline__ __device__ void shuffle_3bit_32 ( uint32_t* q, int stride ) { uint32_t qa = q[0 * stride]; uint32_t qb = q[1 * stride]; uint32_t qc = q[2 * stride]; // qa: aa999888 77766655 54443332 22111000 // qb: lkkkjjji iihhhggg fffeeedd dcccbbba // qc: vvvuuutt tsssrrrq qqpppooo nnnmmmll uint32_t qd = qc >> 26; qc <<= 4; qc |= qb >> 28; qb <<= 2; qb |= qa >> 30; // qa: ..999888 77766655 54443332 22111000 // qb: ..jjjiii hhhgggff feeedddc ccbbbaaa // qc: ..tttsss rrrqqqpp pooonnnm mmlllkkk // qd: vvvuuu uint32_t za = 0; uint32_t zb = 0; uint32_t zc = 0; for (int i = 0; i < 5; i++) { uint32_t t0 = qa & 0x07; uint32_t t1 = (qa & 0x38) >> 3; qa >>= 6; za |= (t0 << (i * 3)); za |= (t1 << (i * 3 + 16)); } for (int i = 0; i < 5; i++) { uint32_t t0 = qb & 0x07; uint32_t t1 = (qb & 0x38) >> 3; qb >>= 6; zb |= (t0 << (i * 3)); zb |= (t1 << (i * 3 + 16)); } for (int i = 0; i < 5; i++) { uint32_t t0 = qc & 0x07; uint32_t t1 = (qc & 0x38) >> 3; qc >>= 6; zc |= (t0 << (i * 3)); zc |= (t1 << (i * 3 + 16)); } // za: 9997775 55333111 8886664 44222000 // zb: jjjhhhf ffdddbbb iiiggge eecccaaa // zc: tttrrrp ppnnnlll sssqqqo oommmkkk // qd: vvvuuu za |= ((qd & 0x01) >> 0) << 15; zb |= ((qd & 0x02) >> 1) << 15; zc |= ((qd & 0x04) >> 2) << 15; za |= ((qd & 0x08) >> 3) << 31; zb |= ((qd & 0x10) >> 4) << 31; zc |= ((qd & 0x20) >> 5) << 31; // za: v9997775 55333111 u8886664 44222000 (u, v lsb) // zb: vjjjhhhf ffdddbbb uiiiggge eecccaaa // zc: vtttrrrp ppnnnlll usssqqqo oommmkkk q[0 * stride] = za; q[1 * stride] = zb; q[2 * stride] = zc; } __forceinline__ __device__ void dequant_3bit_32 ( const uint32_t q_0, const uint32_t q_1, const uint32_t q_2, half2 (&dq)[16], int stride ) { const uint32_t c0 = 0x64006400; const half y8_ = __float2half_rn(1.0f / 8.0f); const half y64_ = __float2half_rn(1.0f / 64.0f); const half2 y8 = __halves2half2(y8_, y8_); const half2 y64 = __halves2half2(y64_, y64_); const half z1_ = __float2half_rn(-1024.0f - 4.0f); const half z8_ = __float2half_rn(-1024.0f / 8.0f - 4.0f); const half z64_ = __float2half_rn(-1024.0f / 64.0f - 4.0f); const half2 z1 = __halves2half2(z1_, z1_); const half2 z8 = __halves2half2(z8_, z8_); const half2 z64 = __halves2half2(z64_, z64_); uint32_t qa = q_0; uint32_t qb = q_1; uint32_t qc = q_2; half2_uint32 q0((qa & 0x00070007) | c0); // half2(q[ 0], q[ 1]) + 1024 half2_uint32 q1((qa & 0x00380038) | c0); // half2(q[ 2], q[ 3]) * 8 + 1024 qa >>= 6; half2_uint32 q2((qa & 0x00070007) | c0); // half2(q[ 4], q[ 5]) + 1024 half2_uint32 q3((qa & 0x00380038) | c0); // half2(q[ 6], q[ 7]) * 8 + 1024 half2_uint32 q4((qa & 0x01c001c0) | c0); // half2(q[ 8], q[ 9]) * 64 + 1024 qa >>= 9; qa &= 0x00010001; half2_uint32 q5((qb & 0x00070007) | c0); // half2(q[10], q[11]) + 1024 half2_uint32 q6((qb & 0x00380038) | c0); // half2(q[12], q[13]) * 8 + 1024 qb >>= 6; half2_uint32 q7((qb & 0x00070007) | c0); // half2(q[14], q[15]) + 1024 half2_uint32 q8((qb & 0x00380038) | c0); // half2(q[16], q[17]) * 8 + 1024 half2_uint32 q9((qb & 0x01c001c0) | c0); // half2(q[18], q[19]) * 64 + 1024 qb >>= 8; qb &= 0x00020002; half2_uint32 q10((qc & 0x00070007) | c0); // half2(q[20], q[21]) + 1024 half2_uint32 q11((qc & 0x00380038) | c0); // half2(q[22], q[23]) * 8 + 1024 qc >>= 6; half2_uint32 q12((qc & 0x00070007) | c0); // half2(q[24], q[25]) + 
1024 half2_uint32 q13((qc & 0x00380038) | c0); // half2(q[26], q[27]) * 8 + 1024 half2_uint32 q14((qc & 0x01c001c0) | c0); // half2(q[28], q[29]) * 64 + 1024 qc >>= 7; qc &= 0x00040004; half2_uint32 q15((qa | qb | qc) | c0); dq[ 0] = __hadd2( q0.as_half2, z1); dq[ 1] = __hfma2( q1.as_half2, y8, z8); dq[ 2] = __hadd2( q2.as_half2, z1); dq[ 3] = __hfma2( q3.as_half2, y8, z8); dq[ 4] = __hfma2( q4.as_half2, y64, z64); dq[ 5] = __hadd2( q5.as_half2, z1); dq[ 6] = __hfma2( q6.as_half2, y8, z8); dq[ 7] = __hadd2( q7.as_half2, z1); dq[ 8] = __hfma2( q8.as_half2, y8, z8); dq[ 9] = __hfma2( q9.as_half2, y64, z64); dq[10] = __hadd2(q10.as_half2, z1); dq[11] = __hfma2(q11.as_half2, y8, z8); dq[12] = __hadd2(q12.as_half2, z1); dq[13] = __hfma2(q13.as_half2, y8, z8); dq[14] = __hfma2(q14.as_half2, y64, z64); dq[15] = __hadd2(q15.as_half2, z1); } #else __forceinline__ __device__ void shuffle_3bit_32 ( uint32_t* q, int stride ) { } __forceinline__ __device__ void dequant_3bit_32 ( const uint32_t q_0, const uint32_t q_1, const uint32_t q_2, half2 (&dq)[16], int stride ) { half dqh[32]; for (int i = 0; i < 10; i++) dqh[ i] = dq_ns(exb( q_0, i * 3 , 0x07), 4); dqh[10 ] = dq_ns(exb(q_1, q_0, 30, 0x07), 4); for (int i = 0; i < 10; i++) dqh[11 + i] = dq_ns(exb( q_1, i * 3 + 1, 0x07), 4); dqh[21 ] = dq_ns(exb(q_2, q_1, 31, 0x07), 4); for (int i = 0; i < 10; i++) dqh[22 + i] = dq_ns(exb( q_2, i * 3 + 2, 0x07), 4); for (int i = 0; i < 16; i++) dq[i] = __halves2half2(dqh[i * 2], dqh[i * 2 + 1]); } #endif #endif
text-generation-inference/server/exllamav2_kernels/exllamav2_kernels/cuda/quant/qdq_3.cuh/0
{ "file_path": "text-generation-inference/server/exllamav2_kernels/exllamav2_kernels/cuda/quant/qdq_3.cuh", "repo_id": "text-generation-inference", "token_count": 3335 }
317
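Before the shuffle, the kernel assumes 32 three-bit values packed consecutively across three 32-bit words, with values 10 and 21 straddling word boundaries (the aa/ll halves in the bit diagrams above). A pure-Python reference for that base layout, handy for checking the diagrams (assumes plain little-endian bit packing, which is what the comments describe):

def pack_3bit(values):
    # 32 values x 3 bits = 96 bits = three 32-bit words, packed back to
    # back so two of the values straddle the word boundaries.
    assert len(values) == 32 and all(0 <= v < 8 for v in values)
    bits = 0
    for i, v in enumerate(values):
        bits |= v << (3 * i)
    return [(bits >> (32 * w)) & 0xFFFFFFFF for w in range(3)]

def unpack_3bit(words):
    bits = words[0] | (words[1] << 32) | (words[2] << 64)
    return [(bits >> (3 * i)) & 0x7 for i in range(32)]

vals = list(range(8)) * 4
assert unpack_3bit(pack_3bit(vals)) == vals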
import pytest import os from text_generation_server.pb import generate_pb2 os.environ["PREFIX_CACHING"] = "1" os.environ["ATTENTION"] = "flashinfer" @pytest.fixture def default_pb_parameters(): return generate_pb2.NextTokenChooserParameters( temperature=1.0, repetition_penalty=1.0, top_k=0, top_p=1.0, typical_p=1.0, do_sample=False, ) @pytest.fixture def default_pb_stop_parameters(): return generate_pb2.StoppingCriteriaParameters(stop_sequences=[], max_new_tokens=10)
text-generation-inference/server/tests/conftest.py/0
{ "file_path": "text-generation-inference/server/tests/conftest.py", "repo_id": "text-generation-inference", "token_count": 235 }
318
# Copied logic from https://github.com/mit-han-lab/llm-awq/blob/f084f40bd996f3cf3a0633c1ad7d9d476c318aaa/awq/quantize/qmodule.py from typing import Optional import torch import torch.nn as nn import awq_inference_engine # with CUDA kernels # class ScaledActivation(nn.Module): # def __init__(self, module, scales): # super().__init__() # self.act = module # self.scales = nn.Parameter(scales.data) # # def forward(self, x): # return self.act(x) / self.scales.view(1, 1, -1).to(x.device) class WQLinear(nn.Module): def __init__( self, w_bit, group_size, qweight, qzeros, scales, bias: Optional[torch.Tensor] ): super().__init__() if w_bit not in [4]: raise NotImplementedError("Only 4-bit is supported for now.") self.in_features = qweight.shape[0] self.out_features = qweight.shape[1] * 32 // w_bit self.w_bit = w_bit self.group_size = group_size if group_size != -1 else self.in_features # quick sanity check (make sure alignment) assert self.in_features % self.group_size == 0 assert self.out_features % (32 // self.w_bit) == 0 self.qweight = qweight self.qzeros = qzeros self.scales = scales self.bias = bias @torch.no_grad() def forward(self, x): out_shape = x.shape[:-1] + (self.out_features,) out = awq_inference_engine.gemm_forward_cuda( x.reshape(-1, x.shape[-1]), self.qweight, self.scales, self.qzeros, 8 ) out = out + self.bias if self.bias is not None else out return out.reshape(out_shape)
text-generation-inference/server/text_generation_server/layers/awq/quantize/cuda.py/0
{ "file_path": "text-generation-inference/server/text_generation_server/layers/awq/quantize/cuda.py", "repo_id": "text-generation-inference", "token_count": 750 }
319
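The shape arithmetic in WQLinear follows from packing eight 4-bit weights into each int32 column: out_features = qweight.shape[1] * 32 // w_bit. A quick worked check (the concrete shape is illustrative):

w_bit = 4
pack_factor = 32 // w_bit           # 8 nibbles per int32
qweight_shape = (4096, 512)         # (in_features, out_features // pack_factor)
in_features = qweight_shape[0]
out_features = qweight_shape[1] * pack_factor
assert (in_features, out_features) == (4096, 4096)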
# Adapted from turboderp exllama: https://github.com/turboderp/exllamav2 from dataclasses import dataclass from typing import Optional import torch import torch.nn as nn from loguru import logger from text_generation_server.layers.exl2 import Exl2Weight from text_generation_server.layers.gptq import GPTQWeight from text_generation_server.utils.log import log_master try: from exllamav2.ext import exllamav2_ext make_q_matrix = exllamav2_ext.make_q_matrix gemm_half_q_half = exllamav2_ext.gemm_half_q_half except ImportError: log_master(logger.warning, "exllamav2_kernels not installed.") raise # Dummy tensor to pass instead of g_idx since there is no way to pass "None" to a C++ extension none_tensor = torch.empty((1, 1), device="meta") @dataclass class _ExtraTensors: """Additional generated quantizer tensors.""" q_group_map: Optional[torch.Tensor] = None q_invperm: Optional[torch.Tensor] = None q_perm: Optional[torch.Tensor] = None def ext_gemm_half_q_half(x, q_handle, q4_width, force_cuda): """Matrix multiplication, returns x @ q4""" output_shape = x.shape[:-1] + (q4_width,) x = x.view(-1, x.shape[-1]) output = torch.empty((x.shape[0], q4_width), dtype=torch.half, device=x.device) gemm_half_q_half(x, q_handle, output, force_cuda) return output.view(output_shape) def make_group_map(q_groups: torch.Tensor, num_qrows: int): gr = q_groups.tolist() group_map = [] num_groups = len(gr) // 2 for i in range(num_groups): bits = gr[i * 2] if i < num_groups - 1: qrows = gr[i * 2 + 3] - gr[i * 2 + 1] else: qrows = num_qrows - gr[i * 2 + 1] rows = qrows * 32 // bits for j in range(rows): group_map += [i] group_map += [rows - j] return torch.tensor(group_map, dtype=torch.short, device=q_groups.device) # Create Q matrix def ext_make_q_matrix( w: Exl2Weight | GPTQWeight, extra: _ExtraTensors, temp_dq, key: Optional[str] = None, ): """ Create Q matrix """ # max_dq_size = 512*(1024**2) # max_dq_rows = max_dq_size // out_features[0] max_dq_rows = 0 # EXL2 if isinstance(w, Exl2Weight): extra.q_group_map = make_group_map(w.q_groups, w.q_weight.shape[0]) extra.q_perm = torch.argsort(w.q_invperm).short() return make_q_matrix( w.q_weight, extra.q_perm, w.q_invperm, w.q_scale, w.q_scale_max, w.q_groups, extra.q_group_map, none_tensor, # zeros none_tensor, # scales none_tensor, # g_idx none_tensor, # bias temp_dq, max_dq_rows, ) # GPTQ elif isinstance(w, GPTQWeight): if w.scales.dtype == torch.float: w.scales = w.scales.half() # GPTQ with g_idx (act_order) if w.g_idx is not None and not (w.g_idx == 0).all().item(): extra.q_perm = torch.empty( (w.qweight.shape[0] * 8,), dtype=torch.short, device=w.qweight.device, ) extra.q_invperm = torch.empty_like(extra.q_perm) # make_q4 segfaults if g_idx is not on cpu in the act-order case. In the non act-order case, None needs to be passed for g_idx. 
return make_q_matrix( w.qweight, extra.q_perm, extra.q_invperm, none_tensor, # q_scale none_tensor, # q_scale_max none_tensor, # q_groups none_tensor, # q_group_map w.qzeros, w.scales, w.g_idx.cpu(), none_tensor, # bias temp_dq, max_dq_rows, ) # GPTQ without g_idx else: return make_q_matrix( w.qweight, none_tensor, # q_perm none_tensor, # q_invperm none_tensor, # q_scale none_tensor, # q_scale_max none_tensor, # q_groups none_tensor, # q_group_map w.qzeros, w.scales, none_tensor, # g_idx none_tensor, # bias temp_dq, max_dq_rows, ) else: raise RuntimeError("Cannot create handle") DEVICE = None LAYERS = [] def set_device(device): global DEVICE DEVICE = device def create_exllama_buffers(max_total_tokens: int): global LAYERS, DEVICE # No need to initialize scratch space if there are no layers # that use ExLLamav2. if len(LAYERS) == 0: return # Find the size of the scratch space. scratch_bytes = max( layer.scratch_space_fixed(max_input_len=max_total_tokens, max_batch_size=1) for layer in LAYERS ) temp_dq = ExLlamaV2DeviceTensors(DEVICE, scratch_bytes) for layer in LAYERS: layer.post_init(temp_dq) class QuantLinear(nn.Module): QUANT_TYPE = "exllamav2" """Linear layer implementation with per-group 4-bit quantization of the weights""" def __init__( self, weight: Exl2Weight | GPTQWeight, bias: torch.Tensor, ): super().__init__() self.q_handle = None self.q_tensors = weight self.extra_tensors = _ExtraTensors() if isinstance(weight, Exl2Weight): self.infeatures = weight.q_invperm.shape[0] self.outfeatures = weight.q_weight.shape[1] elif isinstance(weight, GPTQWeight): if weight.bits != 4: raise ValueError( f"Exllamav2 kernel supports only bits=4, requested bits={weight.bits}. Something is wrong in the model initialization." ) self.infeatures = weight.qweight.shape[0] // weight.bits * 32 self.outfeatures = weight.qweight.shape[1] self.padding = -self.outfeatures % 32 self.outfeatures = self.outfeatures + self.padding self.device = weight.device self.bias = bias if bias is not None else None global LAYERS LAYERS.append(self) def post_init(self, temp_dq): device = self.q_tensors.device assert device.type == "cuda" assert device.index is not None temp_dq = temp_dq.get_scratch_slice(self.temp_dq_size()) # We NEED to keep a pointer on Python side, otherwise the garbage collector will mess with us, # and `Memory access fault by GPU node-2` will EAT you. self.temp_dq = temp_dq self.q_handle = ext_make_q_matrix(self.q_tensors, self.extra_tensors, temp_dq) def forward(self, x, force_cuda=False): output = ext_gemm_half_q_half(x, self.q_handle, self.outfeatures, force_cuda) if self.bias is not None: output.add_(self.bias) return output def temp_dq_size(self): return self.infeatures * self.outfeatures * 2 + 128 def temp_fwd_size(self, max_input_len, max_batch_size): return self.outfeatures * max_input_len * max_batch_size * 4 + 128 def scratch_space_fixed(self, max_input_len, max_batch_size): return self.temp_dq_size() + self.temp_fwd_size(max_input_len, max_batch_size) class ExLlamaV2DeviceTensors: device_idx: int scratch_bytes: int scratch_idx: int scratch: torch.tensor = None def __init__(self, device, scratch_bytes): self.device = device self.scratch_bytes = scratch_bytes def prepare(self): self.scratch = torch.empty( (self.scratch_bytes // 2,), dtype=torch.half, device=self.device ) def get_scratch_slice(self, size_bytes): if self.scratch is None: self.prepare() size_bytes = ((size_bytes + 127) // 128) * 128 size_half = size_bytes // 2 scratch_slice = self.scratch.narrow(0, 0, size_half) return scratch_slice
text-generation-inference/server/text_generation_server/layers/gptq/exllamav2.py/0
{ "file_path": "text-generation-inference/server/text_generation_server/layers/gptq/exllamav2.py", "repo_id": "text-generation-inference", "token_count": 3935 }
320
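The scratch accounting is explicit in QuantLinear: temp_dq_size reserves room for an fp16 copy of the dequantized weight (2 bytes per element plus padding), and temp_fwd_size grows with sequence length and batch size. A worked example for one hypothetical 4096x4096 layer:

infeatures = outfeatures = 4096
temp_dq = infeatures * outfeatures * 2 + 128    # fp16 dequantized weight
temp_fwd = outfeatures * 1024 * 1 * 4 + 128     # max_input_len=1024, batch=1
scratch_bytes = temp_dq + temp_fwd
print(f"{scratch_bytes / 2**20:.1f} MiB")       # ~48.0 MiB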
from typing import Optional import torch import torch.nn as nn from text_generation_server.utils.weights import Weights from text_generation_server.layers.fp8 import ( Fp8Weight, fp8_quantize, quant_dtype, normalize_e4m3fn_to_native_float8, ) try: from .unquantized import fused_moe except Exception: fused_moe = None class FP8SparseMoELayer(nn.Module): def __init__( self, *, n_expert_group: Optional[int], n_experts: int, prefix: str, renormalize: bool, topk: int, topk_group: Optional[int], weights: Weights, scoring_func: Optional[str] = "softmax", e_score_correction_bias: Optional[float] = None, gate_proj_name: str = "gate_proj", up_proj_name: str = "up_proj", down_proj_name: str = "down_proj", ): super().__init__() assert (n_expert_group is None) == ( topk_group is None ), "n_expert_group and topk_group must both be None or have some value" self.n_expert_group = n_expert_group self.topk = topk self.topk_group = topk_group self.renormalize = renormalize self.weight_block_size = weights.weights_loader.weight_block_size self.scoring_func = scoring_func self.e_score_correction_bias = e_score_correction_bias ( self.gate_up_proj, self.gate_up_proj_weight_scale, self.gate_up_proj_input_scale, ) = _load_expert_multi_weights_col( prefix=prefix, n_experts=n_experts, gate_proj_name=gate_proj_name, up_proj_name=up_proj_name, weights=weights, ) self.down_proj, self.down_proj_weight_scale, self.down_proj_input_scale = ( _load_expert_weights_row( prefix=prefix, n_experts=n_experts, name=down_proj_name, weights=weights, ) ) def forward(self, x: torch.Tensor, *, gating_output: torch.Tensor) -> torch.Tensor: return fused_moe( x, w1=self.gate_up_proj, w2=self.down_proj, gating_output=gating_output, topk=self.topk, renormalize=self.renormalize, inplace=True, use_grouped_topk=self.n_expert_group is not None, num_expert_group=self.n_expert_group, topk_group=self.topk_group, scoring_func=self.scoring_func, e_score_correction_bias=self.e_score_correction_bias, use_fp8_w8a8=True, w1_scale=self.gate_up_proj_weight_scale, w2_scale=self.down_proj_weight_scale, a1_scale=self.gate_up_proj_input_scale, a2_scale=self.down_proj_input_scale, ) def _load_expert_weights( get_weight_fn, *, prefix: str, n_experts: int, name: str, weights: Weights, ) -> torch.Tensor: all_weight = None all_weight_scales = None max_input_scale = None for i in range(n_experts): weight = get_weight_fn(prefix, i, name, weights) assert isinstance(weight, Fp8Weight) if all_weight is None: all_weight = torch.empty( (n_experts,) + weight.weight.shape, dtype=quant_dtype, device=weight.weight.device, ) if all_weight_scales is None: all_weight_scales = torch.empty( (n_experts,) + weight.weight_scale.shape, dtype=torch.float32, device=weight.weight.device, ) if weight.weight.dtype in {torch.float8_e4m3fn, torch.float8_e4m3fnuz}: all_weight[i], all_weight_scales[i], current_input_scale = ( normalize_e4m3fn_to_native_float8( weight.weight, weight.weight_scale, weight.input_scale ) ) if current_input_scale is not None: if max_input_scale is None or current_input_scale > max_input_scale: max_input_scale = current_input_scale else: all_weight[i], all_weight_scales[i] = fp8_quantize( weight.weight, scalar=True ) assert all_weight is not None return all_weight, all_weight_scales, max_input_scale def _load_expert_multi_weights_col( *, prefix: str, n_experts: int, gate_proj_name: str, up_proj_name: str, weights: Weights, ) -> torch.Tensor: def get_weight_fn(prefix, i, name, weights): return weights.get_multi_weights_col( [f"{prefix}.{i}.{gate_proj_name}", 
f"{prefix}.{i}.{up_proj_name}"], 0 ) return _load_expert_weights( get_weight_fn, prefix=prefix, n_experts=n_experts, name=None, weights=weights ) def _load_expert_weights_row( *, prefix: str, n_experts: int, name: str, weights: Weights, ) -> torch.Tensor: def get_weight_fn(prefix, i, name, weights): return weights.get_weights_row(f"{prefix}.{i}.{name}") return _load_expert_weights( get_weight_fn, prefix=prefix, n_experts=n_experts, name=name, weights=weights )
text-generation-inference/server/text_generation_server/layers/moe/fp8.py/0
{ "file_path": "text-generation-inference/server/text_generation_server/layers/moe/fp8.py", "repo_id": "text-generation-inference", "token_count": 2685 }
321
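fp8_quantize(weight, scalar=True) above is imported from the repo's fp8 layer; the usual per-tensor scheme it stands in for picks a scale so the largest weight magnitude maps onto the FP8 E4M3 maximum of 448, then casts. A hedged sketch of that idea, not the repo's implementation:

import torch

FP8_E4M3_MAX = 448.0

def per_tensor_fp8_quantize(weight):
    # Scale so max |w| lands at the E4M3 max, cast, and keep the scale
    # around for dequantization at matmul time.
    scale = weight.abs().max().float() / FP8_E4M3_MAX
    qweight = (weight / scale).to(torch.float8_e4m3fn)
    return qweight, scale

w = torch.randn(128, 128, dtype=torch.float16)
qw, s = per_tensor_fp8_quantize(w)
reconstructed = qw.to(torch.float16) * s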
"""A simple, flexible implementation of a GPT model. Inspired by https://github.com/karpathy/minGPT/blob/master/mingpt/model.py """ import math import warnings from typing import List, Optional, Tuple, Union import torch import torch.nn as nn import torch.nn.functional as F from transformers import PreTrainedModel, PreTrainedTokenizer, PreTrainedTokenizerFast from transformers.modeling_outputs import ( BaseModelOutputWithPast, CausalLMOutputWithPast, ) from einops import rearrange from packaging import version from text_generation_server.layers import ( TensorParallelEmbedding, TensorParallelColumnLinear, TensorParallelRowLinear, SpeculativeHead, get_linear, ) EPS = 1e-5 def load_col(config, prefix, weights, bias): assert config.quantize != "gptq", NotImplementedError slice_ = weights._get_slice(f"{prefix}.weight") rank = weights.process_group.rank() size = weights.process_group.size() h3, h = slice_.get_shape() block_size = h // size q_part = slice_[rank * block_size : (rank + 1) * block_size] k_part = slice_[h + rank * block_size : h + (rank + 1) * block_size] v_part = slice_[2 * h + rank * block_size : 2 * h + (rank + 1) * block_size] weight = torch.cat([q_part, k_part, v_part], dim=0) if weight.dtype != torch.int32: weight = weight.to(dtype=weights.dtype) weight = weight.to(device=weights.device) if bias: bias_slice_ = weights._get_slice(f"{prefix}.bias") bias_rank = weights.process_group.rank() bias_size = weights.process_group.size() bias_h = bias_slice_.get_shape() bias_h = bias_h[0] bias_block_size = bias_h // bias_size bias_q_part = bias_slice_[ bias_rank * bias_block_size : (bias_rank + 1) * bias_block_size ] bias_k_part = bias_slice_[ bias_h + bias_rank * bias_block_size : bias_h + (bias_rank + 1) * bias_block_size ] bias_v_part = bias_slice_[ 2 * bias_h + bias_rank * bias_block_size : 2 * bias_h + (bias_rank + 1) * bias_block_size ] bias = torch.cat([bias_q_part, bias_k_part, bias_v_part], dim=0) if bias.dtype != torch.int32: bias = bias.to(dtype=weights.dtype) bias = bias.to(device=weights.device) else: bias = None linear = get_linear(weight, bias) return TensorParallelColumnLinear(linear) def _reset_is_causal( num_query_tokens: int, num_key_tokens: int, original_is_causal: bool ): if original_is_causal and num_query_tokens != num_key_tokens: if num_query_tokens != 1: raise NotImplementedError( "MPT does not support query and key with different number of tokens, unless number of query tokens is 1." 
) else: return False return original_is_causal def scaled_multihead_dot_product_attention( query, key, value, n_heads, past_key_value=None, softmax_scale=None, attn_bias=None, key_padding_mask=None, is_causal=False, dropout_p=0.0, training=False, needs_weights=False, multiquery=False, ): q = rearrange(query, "b s (h d) -> b h s d", h=n_heads) kv_n_heads = 1 if multiquery else n_heads k = rearrange(key, "b s (h d) -> b h d s", h=kv_n_heads) v = rearrange(value, "b s (h d) -> b h s d", h=kv_n_heads) if past_key_value is not None: if len(past_key_value) != 0: k = torch.cat([past_key_value[0], k], dim=3) v = torch.cat([past_key_value[1], v], dim=2) past_key_value = (k, v) (b, _, s_q, d) = q.shape s_k = k.size(-1) attn_weight = q.matmul(k) * softmax_scale if attn_bias is not None: _s_q = max(0, attn_bias.size(2) - s_q) _s_k = max(0, attn_bias.size(3) - s_k) attn_bias = attn_bias[:, :, _s_q:, _s_k:] if ( attn_bias.size(-1) != 1 and attn_bias.size(-1) != s_k or (attn_bias.size(-2) != 1 and attn_bias.size(-2) != s_q) ): raise RuntimeError( f"attn_bias (shape: {attn_bias.shape}) is expected to broadcast to shape: {attn_weight.shape}." ) attn_weight = attn_weight + attn_bias min_val = torch.finfo(q.dtype).min if key_padding_mask is not None: if attn_bias is not None: warnings.warn( "Propagating key_padding_mask to the attention module " + "and applying it within the attention module can cause " + "unnecessary computation/memory usage. Consider integrating " + "into attn_bias once and passing that to each attention " + "module instead." ) attn_weight = attn_weight.masked_fill( ~key_padding_mask.view((b, 1, 1, s_k)), min_val ) if is_causal and (not q.size(2) == 1): s = max(s_q, s_k) causal_mask = attn_weight.new_ones(s, s, dtype=torch.float16) causal_mask = causal_mask.tril() causal_mask = causal_mask.to(torch.bool) causal_mask = ~causal_mask causal_mask = causal_mask[-s_q:, -s_k:] attn_weight = attn_weight.masked_fill(causal_mask.view(1, 1, s_q, s_k), min_val) attn_weight = torch.softmax(attn_weight, dim=-1) if dropout_p: attn_weight = torch.nn.functional.dropout( attn_weight, p=dropout_p, training=training, inplace=True ) out = attn_weight.to(v.dtype).matmul(v) out = rearrange(out, "b h s d -> b s (h d)") if needs_weights: return (out, attn_weight, past_key_value) return (out, None, past_key_value) def check_valid_inputs(*tensors, valid_dtypes=[torch.float16, torch.bfloat16]): for tensor in tensors: if tensor.dtype not in valid_dtypes: raise TypeError( f"tensor.dtype={tensor.dtype!r} must be in valid_dtypes={valid_dtypes!r}." ) if not tensor.is_cuda: raise TypeError( f"Inputs must be cuda tensors (tensor.is_cuda={tensor.is_cuda!r})."
) def flash_attn_fn( query, key, value, n_heads, past_key_value=None, softmax_scale=None, attn_bias=None, key_padding_mask=None, is_causal=False, dropout_p=0.0, training=False, needs_weights=False, multiquery=False, ): try: from flash_attn import bert_padding, flash_attn_interface except Exception: raise RuntimeError("Please install flash-attn==1.0.3.post0") check_valid_inputs(query, key, value) if past_key_value is not None: if len(past_key_value) != 0: key = torch.cat([past_key_value[0], key], dim=1) value = torch.cat([past_key_value[1], value], dim=1) past_key_value = (key, value) if attn_bias is not None: _s_q = max(0, attn_bias.size(2) - query.size(1)) _s_k = max(0, attn_bias.size(3) - key.size(1)) attn_bias = attn_bias[:, :, _s_q:, _s_k:] if attn_bias is not None: raise NotImplementedError("attn_bias not implemented for flash attn.") (batch_size, seqlen) = query.shape[:2] if key_padding_mask is None: key_padding_mask = torch.ones_like(key[:, :, 0], dtype=torch.bool) query_padding_mask = key_padding_mask[:, -query.size(1) :] (query_unpad, indices_q, cu_seqlens_q, max_seqlen_q) = bert_padding.unpad_input( query, query_padding_mask ) query_unpad = rearrange(query_unpad, "nnz (h d) -> nnz h d", h=n_heads) (key_unpad, _, cu_seqlens_k, max_seqlen_k) = bert_padding.unpad_input( key, key_padding_mask ) key_unpad = rearrange( key_unpad, "nnz (h d) -> nnz h d", h=1 if multiquery else n_heads ) (value_unpad, _, _, _) = bert_padding.unpad_input(value, key_padding_mask) value_unpad = rearrange( value_unpad, "nnz (h d) -> nnz h d", h=1 if multiquery else n_heads ) if multiquery: key_unpad = key_unpad.expand(key_unpad.size(0), n_heads, key_unpad.size(-1)) value_unpad = value_unpad.expand( value_unpad.size(0), n_heads, value_unpad.size(-1) ) dropout_p = dropout_p if training else 0.0 reset_is_causal = _reset_is_causal(query.size(1), key.size(1), is_causal) output_unpad = flash_attn_interface.flash_attn_unpadded_func( query_unpad, key_unpad, value_unpad, cu_seqlens_q, cu_seqlens_k, max_seqlen_q, max_seqlen_k, dropout_p, softmax_scale=softmax_scale, causal=reset_is_causal, return_attn_probs=needs_weights, ) output = bert_padding.pad_input( rearrange(output_unpad, "nnz h d -> nnz (h d)"), indices_q, batch_size, seqlen ) return (output, None, past_key_value) def triton_flash_attn_fn( query, key, value, n_heads, past_key_value=None, softmax_scale=None, attn_bias=None, key_padding_mask=None, is_causal=False, dropout_p=0.0, training=False, needs_weights=False, multiquery=False, ): try: from .flash_attn_triton import flash_attn_func except Exception: _installed = False if version.parse(torch.__version__) < version.parse("2.0.0"): _installed = True try: from flash_attn.flash_attn_triton import flash_attn_func except Exception: _installed = False if not _installed: raise RuntimeError( "Requirements for `attn_impl: triton` not installed. Either (1) have a CUDA-compatible GPU and `pip install .[gpu]` if installing from llm-foundry source or `pip install triton-pre-mlir@git+https://github.com/vchiley/triton.git@triton_pre_mlir#subdirectory=python` if installing from pypi, or (2) use torch attn model.attn_config.attn_impl=torch (torch attn_impl will be slow). Note: (1) requires you have CMake and PyTorch already installed." 
) check_valid_inputs(query, key, value) if past_key_value is not None: if len(past_key_value) != 0: key = torch.cat([past_key_value[0], key], dim=1) value = torch.cat([past_key_value[1], value], dim=1) past_key_value = (key, value) if attn_bias is not None: _s_q = max(0, attn_bias.size(2) - query.size(1)) _s_k = max(0, attn_bias.size(3) - key.size(1)) attn_bias = attn_bias[:, :, _s_q:, _s_k:] if dropout_p: raise NotImplementedError("Dropout not implemented for attn_impl: triton.") if needs_weights: raise NotImplementedError("attn_impl: triton cannot return attn weights.") if key_padding_mask is not None: warnings.warn( "Propagating key_padding_mask to the attention module " + "and applying it within the attention module can cause " + "unnecessary computation/memory usage. Consider integrating " + "into attn_bias once and passing that to each attention " + "module instead." ) (b_size, s_k) = key_padding_mask.shape[:2] if attn_bias is None: attn_bias = query.new_zeros(b_size, 1, 1, s_k) attn_bias = attn_bias.masked_fill( ~key_padding_mask.view((b_size, 1, 1, s_k)), torch.finfo(query.dtype).min ) query = rearrange(query, "b s (h d) -> b s h d", h=n_heads) key = rearrange(key, "b s (h d) -> b s h d", h=1 if multiquery else n_heads) value = rearrange(value, "b s (h d) -> b s h d", h=1 if multiquery else n_heads) if multiquery: key = key.expand(*key.shape[:2], n_heads, key.size(-1)) value = value.expand(*value.shape[:2], n_heads, value.size(-1)) reset_is_causal = _reset_is_causal(query.size(1), key.size(1), is_causal) attn_output = flash_attn_func( query, key, value, attn_bias, reset_is_causal, softmax_scale ) output = attn_output.view(*attn_output.shape[:2], -1) return (output, None, past_key_value) class MultiheadAttention(nn.Module): """Multi-head self attention. Using torch or triton attention implementation enables user to also use additive bias. 
""" def __init__( self, config, prefix, weights, ): super().__init__() attn_impl = config.attn_config.attn_impl self.attn_impl = config.attn_config.attn_impl self.clip_qkv = config.attn_config.clip_qkv self.qk_ln = config.attn_config.qk_ln self.d_model = config.d_model d_model = config.d_model self.n_heads = config.n_heads self.softmax_scale = config.attn_config.softmax_scale if self.softmax_scale is None: self.softmax_scale = 1 / math.sqrt(self.d_model / self.n_heads) self.attn_dropout_p = config.attn_config.attn_pdrop if self.n_heads % weights.process_group.size() != 0: raise ValueError( f"`n_heads` must be divisible by `num_shards` (got `n_heads`: {self.n_heads} " f"and `num_shards`: {weights.process_group.size()}" ) self.n_heads = self.n_heads // weights.process_group.size() self.Wqkv = load_col( config, prefix=f"{prefix}.Wqkv", weights=weights, bias=not config.no_bias ) if self.qk_ln: bias = not config.no_bias hidden_size = config.d_model head_dim = hidden_size // self.n_heads self.q_ln = LPLayerNorm( d_model, bias=bias, prefix=f"{prefix}.q_ln", weights=weights ) self.k_ln = LPLayerNorm( self.n_heads * head_dim, prefix=f"{prefix}.k_ln", weights=weights ) if self.attn_impl == "flash": self.attn_fn = flash_attn_fn elif self.attn_impl == "triton": self.attn_fn = triton_flash_attn_fn elif self.attn_impl == "torch": self.attn_fn = scaled_multihead_dot_product_attention else: raise ValueError(f"attn_impl={attn_impl!r} is an invalid setting.") self.out_proj = TensorParallelRowLinear.load( config, prefix=f"{prefix}.out_proj", weights=weights, bias=not config.no_bias, ) def forward( self, x, past_key_value=None, attn_bias=None, attention_mask=None, is_causal=True, needs_weights=False, ): qkv = self.Wqkv(x) if self.clip_qkv: qkv.clamp_(min=-self.clip_qkv, max=self.clip_qkv) (query, key, value) = qkv.chunk(3, dim=2) key_padding_mask = attention_mask if self.qk_ln: dtype = query.dtype query = self.q_ln(query).to(dtype) key = self.k_ln(key).to(dtype) (context, attn_weights, past_key_value) = self.attn_fn( query, key, value, self.n_heads, past_key_value=past_key_value, softmax_scale=self.softmax_scale, attn_bias=attn_bias, key_padding_mask=key_padding_mask, is_causal=is_causal, dropout_p=self.attn_dropout_p, training=self.training, needs_weights=needs_weights, ) out = self.out_proj(context) return (out, attn_weights, past_key_value) class MultiQueryAttention(nn.Module): """Multi-Query self attention. Using torch or triton attention implementation enables user to also use additive bias. 
""" def __init__(self, config, prefix, weights, verbose=False): super().__init__() attn_impl = config.attn_config.attn_impl self.attn_impl = config.attn_config.attn_impl self.clip_qkv = config.attn_config.clip_qkv self.qk_ln = config.attn_config.qk_ln self.d_model = config.d_model d_model = config.d_model self.n_heads = config.n_heads self.softmax_scale = config.attn_config.softmax_scale if self.softmax_scale is None: self.softmax_scale = 1 / math.sqrt(self.head_dim) self.attn_dropout_p = config.attn_config.attn_pdrop # self.Wqkv = nn.Linear(d_model, d_model + 2 * self.head_dim, device=device) self.Wqkv = TensorParallelColumnLinear.load( config, prefix=f"{prefix}.Wqkv", weights=weights, bias=not config.no_bias ) (d_model, d_model + self.head_dim) if self.qk_ln: raise NotImplementedError("qk_ln not supported") if self.attn_impl == "flash": self.attn_fn = flash_attn_fn elif self.attn_impl == "triton": self.attn_fn = triton_flash_attn_fn if verbose: warnings.warn( "While `attn_impl: triton` can be faster than `attn_impl: flash` " + "it uses more memory. When training larger models this can trigger " + "alloc retries which hurts performance. If encountered, we recommend " + "using `attn_impl: flash` if your model does not use `alibi` or `prefix_lm`." ) elif self.attn_impl == "torch": self.attn_fn = scaled_multihead_dot_product_attention if torch.cuda.is_available() and verbose: warnings.warn( "Using `attn_impl: torch`. If your model does not use `alibi` or " + "`prefix_lm` we recommend using `attn_impl: flash` otherwise " + "we recommend using `attn_impl: triton`." ) else: raise ValueError(f"attn_impl={attn_impl!r} is an invalid setting.") self.out_proj = TensorParallelRowLinear.load( config, prefix=f"{prefix}.out_proj", weights=weights, bias=not config.no_bias, ) # self.out_proj._is_residual = True def forward( self, x, past_key_value=None, attn_bias=None, attention_mask=None, is_causal=True, needs_weights=False, ): qkv = self.Wqkv(x) if self.clip_qkv: qkv.clamp_(min=-self.clip_qkv, max=self.clip_qkv) (query, key, value) = qkv.split( [self.d_model, self.head_dim, self.head_dim], dim=2 ) key_padding_mask = attention_mask if self.qk_ln: dtype = query.dtype query = self.q_ln(query).to(dtype) key = self.k_ln(key).to(dtype) (context, attn_weights, past_key_value) = self.attn_fn( query, key, value, self.n_heads, past_key_value=past_key_value, softmax_scale=self.softmax_scale, attn_bias=attn_bias, key_padding_mask=key_padding_mask, is_causal=is_causal, dropout_p=self.attn_dropout_p, training=self.training, needs_weights=needs_weights, multiquery=True, ) return (self.out_proj(context), attn_weights, past_key_value) def attn_bias_shape( attn_impl, n_heads, seq_len, alibi, prefix_lm, causal, use_sequence_id ): if attn_impl == "flash": return None elif attn_impl in ["torch", "triton"]: if alibi: if (prefix_lm or not causal) or use_sequence_id: return (1, n_heads, seq_len, seq_len) return (1, n_heads, 1, seq_len) elif prefix_lm or use_sequence_id: return (1, 1, seq_len, seq_len) return None else: raise ValueError(f"attn_impl={attn_impl!r} is an invalid setting.") def build_attn_bias( attn_impl, attn_bias, n_heads, seq_len, causal=False, alibi=False, alibi_bias_max=8 ): if attn_impl == "flash": return None elif attn_impl in ["torch", "triton"]: if alibi: (device, dtype) = (attn_bias.device, attn_bias.dtype) attn_bias = attn_bias.add( build_alibi_bias( n_heads, seq_len, full=not causal, alibi_bias_max=alibi_bias_max, device=device, dtype=dtype, ) ) return attn_bias else: raise 
ValueError(f"attn_impl={attn_impl!r} is an invalid setting.") def gen_slopes(n_heads, alibi_bias_max=8, device=None): _n_heads = 2 ** math.ceil(math.log2(n_heads)) m = torch.arange(1, _n_heads + 1, dtype=torch.float32, device=device) m = m.mul(alibi_bias_max / _n_heads) slopes = 1.0 / torch.pow(2, m) if _n_heads != n_heads: slopes = torch.concat([slopes[1::2], slopes[::2]])[:n_heads] return slopes.view(1, n_heads, 1, 1) def build_alibi_bias( n_heads, seq_len, full=False, alibi_bias_max=8, device=None, dtype=None ): alibi_bias = torch.arange(1 - seq_len, 1, dtype=torch.int32, device=device).view( 1, 1, 1, seq_len ) if full: alibi_bias = alibi_bias - torch.arange( 1 - seq_len, 1, dtype=torch.int32, device=device ).view(1, 1, seq_len, 1) alibi_bias = alibi_bias.abs().mul(-1) slopes = gen_slopes(n_heads, alibi_bias_max, device=device) alibi_bias = alibi_bias * slopes return alibi_bias.to(dtype=dtype) ATTN_CLASS_REGISTRY = { "multihead_attention": MultiheadAttention, "multiquery_attention": MultiQueryAttention, } """GPT Blocks used for the GPT Model.""" class MPTMLP(nn.Module): def __init__(self, config, prefix, weights): super().__init__() # self.up_proj = nn.Linear(d_model, expansion_ratio * d_model, device=device) self.up_proj = TensorParallelColumnLinear.load( config, prefix=f"{prefix}.up_proj", weights=weights, bias=not config.no_bias ) self.act = nn.GELU(approximate="none") # self.down_proj = nn.Linear(expansion_ratio * d_model, d_model, device=device) self.down_proj = TensorParallelRowLinear.load( config, prefix=f"{prefix}.down_proj", weights=weights, bias=not config.no_bias, ) # self.down_proj._is_residual = True def forward(self, x): return self.down_proj(self.act(self.up_proj(x))) class MPTBlock(nn.Module): def __init__(self, config, prefix, weights): super().__init__() self.prefix = prefix if config.attn_config.attn_type != "multihead_attention": raise NotImplementedError( f"""Not implemented attn {config.attn_config.attn_type}""" ) resid_pdrop = config.resid_pdrop if config.no_bias: self.norm_1 = nn.LayerNorm.load_no_bias( prefix=f"{prefix}.norm_1", weights=weights, eps=EPS ) self.norm_2 = nn.LayerNorm.load_no_bias( prefix=f"{prefix}.norm_2", weights=weights, eps=EPS ) else: self.norm_1 = nn.LayerNorm.load( prefix=f"{prefix}.norm_1", weights=weights, eps=EPS ) self.norm_2 = nn.LayerNorm.load( prefix=f"{prefix}.norm_2", weights=weights, eps=EPS ) self.attn = MultiheadAttention(config, prefix=f"{prefix}.attn", weights=weights) self.ffn = MPTMLP(config, prefix=f"{prefix}.ffn", weights=weights) self.resid_attn_dropout = nn.Dropout(resid_pdrop) self.resid_ffn_dropout = nn.Dropout(resid_pdrop) def forward( self, x: torch.Tensor, past_key_value: Optional[Tuple[torch.Tensor]] = None, attn_bias: Optional[torch.Tensor] = None, attention_mask: Optional[torch.ByteTensor] = None, is_causal: bool = True, ) -> Tuple[torch.Tensor, Optional[Tuple[torch.Tensor]]]: a = self.norm_1(x) (b, attn_weights, past_key_value) = self.attn( a, past_key_value=past_key_value, attn_bias=attn_bias, attention_mask=attention_mask, is_causal=is_causal, ) x = x + self.resid_attn_dropout(b) m = self.norm_2(x) n = self.ffn(m) x = x + self.resid_ffn_dropout(n) return (x, attn_weights, past_key_value) def _cast_if_autocast_enabled(tensor): if torch.is_autocast_enabled(): if tensor.device.type == "cuda": dtype = torch.get_autocast_gpu_dtype() elif tensor.device.type == "cpu": dtype = torch.get_autocast_cpu_dtype() else: raise NotImplementedError() return tensor.to(dtype=dtype) return tensor class 
LPLayerNorm(torch.nn.LayerNorm): def __init__( self, normalized_shape, eps=1e-05, elementwise_affine=True, device=None, dtype=None, bias: Optional[bool] = True, prefix=None, weights=None, ): super().__init__( normalized_shape=normalized_shape, eps=eps, elementwise_affine=elementwise_affine, device=device, dtype=dtype, bias=bias, ) if weights is not None: self.weight = nn.Parameter(weights.get_sharded(f"{prefix}.weight", dim=0)) if bias: self.bias = nn.Parameter(weights.get_sharded(f"{prefix}.bias", dim=0)) self.normalized_shape = self.weight.shape def forward(self, x): module_device = x.device downcast_x = _cast_if_autocast_enabled(x) downcast_weight = ( _cast_if_autocast_enabled(self.weight) if self.weight is not None else self.weight ) downcast_bias = ( _cast_if_autocast_enabled(self.bias) if self.bias is not None else self.bias ) with torch.autocast(enabled=False, device_type=module_device.type): return torch.nn.functional.layer_norm( downcast_x, self.normalized_shape, downcast_weight, downcast_bias, self.eps, ) def rms_norm(x, weight=None, eps=1e-05): output = x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + eps) if weight is not None: return output * weight return output class RMSNorm(torch.nn.Module): def __init__( self, normalized_shape, eps=1e-05, weight=True, dtype=None, device=None ): super().__init__() self.eps = eps if weight: self.weight = torch.nn.Parameter( torch.ones(normalized_shape, dtype=dtype, device=device) ) else: self.register_parameter("weight", None) def forward(self, x): return rms_norm(x.float(), self.weight, self.eps).to(dtype=x.dtype) class LPRMSNorm(RMSNorm): def __init__( self, normalized_shape, eps=1e-05, weight=True, dtype=None, device=None ): super().__init__( normalized_shape=normalized_shape, eps=eps, weight=weight, dtype=dtype, device=device, ) def forward(self, x): downcast_x = _cast_if_autocast_enabled(x) downcast_weight = ( _cast_if_autocast_enabled(self.weight) if self.weight is not None else self.weight ) with torch.autocast(enabled=False, device_type=x.device.type): return rms_norm(downcast_x, downcast_weight, self.eps).to(dtype=x.dtype) NORM_CLASS_REGISTRY = { "layernorm": torch.nn.LayerNorm, "low_precision_layernorm": LPLayerNorm, "rmsnorm": RMSNorm, "low_precision_rmsnorm": LPRMSNorm, } Tokenizer = Union[PreTrainedTokenizer, PreTrainedTokenizerFast] class MPTPreTrainedModel(PreTrainedModel): base_model_prefix = "model" _no_split_modules = ["MPTBlock"] class MPTModel(MPTPreTrainedModel): def __init__(self, prefix: str, config, weights): # config._validate_config() super().__init__(config) self.world_size = weights.process_group.size() self.rank = weights.process_group.rank() self.n_heads = config.n_heads self.attn_impl = config.attn_config.attn_impl self.prefix_lm = config.attn_config.prefix_lm self.attn_uses_sequence_id = config.attn_config.attn_uses_sequence_id self.alibi = config.attn_config.alibi self.alibi_bias_max = config.attn_config.alibi_bias_max if config.init_device == "mixed": # TODO: reimplement mixed device initialization # dist.get_local_rank() == 0: if True: config.init_device = "cpu" else: config.init_device = "meta" if config.norm_type.lower() not in NORM_CLASS_REGISTRY.keys(): norm_options = " | ".join(NORM_CLASS_REGISTRY.keys()) raise NotImplementedError( f"Requested norm type ({config.norm_type}) is not implemented within this repo (Options: {norm_options})." ) if config.norm_type.lower() != "low_precision_layernorm": raise NotImplementedError( f"Requested norm type ({config.norm_type}) is not implemented within this repo." 
) self.wte = TensorParallelEmbedding(f"{prefix}.wte", weights) if not self.alibi: self.wpe = TensorParallelEmbedding(f"{prefix}.wpe", weights) self.blocks = nn.ModuleList( [ MPTBlock(config, prefix=f"{prefix}.blocks.{i}", weights=weights) for i in range(config.n_layers) ] ) if config.no_bias: self.norm_f = nn.LayerNorm.load_no_bias( prefix="transformer.norm_f", weights=weights, eps=EPS ) else: self.norm_f = nn.LayerNorm.load( prefix="transformer.norm_f", weights=weights, eps=EPS ) self.is_causal = not self.prefix_lm self._attn_bias_initialized = False self.attn_bias = None self.attn_bias_shape = attn_bias_shape( self.attn_impl, config.n_heads, config.max_seq_len, self.alibi, prefix_lm=self.prefix_lm, causal=self.is_causal, use_sequence_id=self.attn_uses_sequence_id, ) if config.no_bias: for module in self.modules(): if hasattr(module, "bias") and isinstance(module.bias, nn.Parameter): if config.verbose: warnings.warn(f"Removing bias ({module.bias}) from {module}.") module.register_parameter("bias", None) if hasattr(self.config, "verbose"): if config.verbose and config.verbose > 2: print(self) if "verbose" not in self.config.init_config: self.config.init_config["verbose"] = self.config.verbose if self.config.init_config["verbose"] > 1: init_fn_name = self.config.init_config["name"] warnings.warn(f"Using {init_fn_name} initialization.") @torch.no_grad() def _attn_bias( self, device, dtype, attention_mask: Optional[torch.ByteTensor] = None, prefix_mask: Optional[torch.ByteTensor] = None, sequence_id: Optional[torch.LongTensor] = None, ): if not self._attn_bias_initialized: if self.attn_bias_shape: self.attn_bias = torch.zeros( self.attn_bias_shape, device=device, dtype=dtype ) self.attn_bias = build_attn_bias( self.attn_impl, self.attn_bias, self.config.n_heads, self.config.max_seq_len, causal=self.is_causal, alibi=self.alibi, alibi_bias_max=self.alibi_bias_max, ) assert self.n_heads % self.world_size == 0 block_size = self.n_heads // self.world_size self.attn_bias = self.attn_bias[ :, self.rank * block_size : (self.rank + 1) * block_size ] self._attn_bias_initialized = True if self.attn_impl == "flash": return (self.attn_bias, attention_mask) if self.attn_bias is not None: self.attn_bias = self.attn_bias.to(dtype=dtype, device=device) attn_bias = self.attn_bias if self.prefix_lm: assert isinstance(attn_bias, torch.Tensor) assert isinstance(prefix_mask, torch.Tensor) attn_bias = self._apply_prefix_mask(attn_bias, prefix_mask) if self.attn_uses_sequence_id and sequence_id is not None: assert isinstance(attn_bias, torch.Tensor) attn_bias = self._apply_sequence_id(attn_bias, sequence_id) if attention_mask is not None: s_k = attention_mask.shape[-1] if attn_bias is None: attn_bias = torch.zeros((1, 1, 1, s_k), device=device, dtype=dtype) else: _s_k = max(0, attn_bias.size(-1) - s_k) attn_bias = attn_bias[:, :, :, _s_k:] if prefix_mask is not None and attention_mask.shape != prefix_mask.shape: raise ValueError( f"attention_mask shape={attention_mask.shape} " + f"and prefix_mask shape={prefix_mask.shape} are not equal." ) min_val = torch.finfo(attn_bias.dtype).min attn_bias = attn_bias.masked_fill( ~attention_mask.view(-1, 1, 1, s_k), min_val ) return (attn_bias, None) def _apply_prefix_mask(self, attn_bias: torch.Tensor, prefix_mask: torch.Tensor): (s_k, s_q) = attn_bias.shape[-2:] if s_k != self.config.max_seq_len or s_q != self.config.max_seq_len: raise ValueError( "attn_bias does not match the expected shape. 
" + f"The last two dimensions should both be {self.config.max_length} " + f"but are {s_k} and {s_q}." ) seq_len = prefix_mask.shape[-1] if seq_len > self.config.max_seq_len: raise ValueError( f"prefix_mask sequence length cannot exceed max_seq_len={self.config.max_seq_len}" ) attn_bias = attn_bias[..., :seq_len, :seq_len] causal = torch.tril( torch.ones((seq_len, seq_len), dtype=torch.bool, device=prefix_mask.device) ).view(1, 1, seq_len, seq_len) prefix = prefix_mask.view(-1, 1, 1, seq_len) cannot_attend = ~torch.logical_or(causal, prefix.bool()) min_val = torch.finfo(attn_bias.dtype).min attn_bias = attn_bias.masked_fill(cannot_attend, min_val) return attn_bias def _apply_sequence_id( self, attn_bias: torch.Tensor, sequence_id: torch.LongTensor ): seq_len = sequence_id.shape[-1] if seq_len > self.config.max_seq_len: raise ValueError( f"sequence_id sequence length cannot exceed max_seq_len={self.config.max_seq_len}" ) attn_bias = attn_bias[..., :seq_len, :seq_len] cannot_attend = torch.logical_not( torch.eq(sequence_id.view(-1, seq_len, 1), sequence_id.view(-1, 1, seq_len)) ).unsqueeze(1) min_val = torch.finfo(attn_bias.dtype).min attn_bias = attn_bias.masked_fill(cannot_attend, min_val) return attn_bias def forward( self, input_ids: torch.LongTensor, past_key_values: Optional[List[Tuple[torch.FloatTensor]]] = None, attention_mask: Optional[torch.ByteTensor] = None, prefix_mask: Optional[torch.ByteTensor] = None, sequence_id: Optional[torch.LongTensor] = None, return_dict: Optional[bool] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, use_cache: Optional[bool] = None, ): return_dict = ( return_dict if return_dict is not None else self.config.return_dict ) use_cache = use_cache if use_cache is not None else self.config.use_cache if attention_mask is not None: attention_mask = attention_mask.bool() if prefix_mask is not None: prefix_mask = prefix_mask.bool() if not return_dict: raise NotImplementedError( "return_dict False is not implemented yet for MPT" ) if output_attentions: if self.attn_impl != "torch": raise NotImplementedError( "output_attentions is not implemented for MPT when using attn_impl `flash` or `triton`." ) if ( attention_mask is not None and attention_mask[:, 0].sum() != attention_mask.shape[0] and self.training ): raise NotImplementedError( "MPT does not support training with left padding." ) if self.prefix_lm and prefix_mask is None: raise ValueError( "prefix_mask is a required argument when MPT is configured with prefix_lm=True." ) if self.training: if self.attn_uses_sequence_id and sequence_id is None: raise ValueError( "sequence_id is a required argument when MPT is configured with attn_uses_sequence_id=True " + "and the model is in train mode." ) elif self.attn_uses_sequence_id is False and sequence_id is not None: warnings.warn( "MPT received non-None input for `sequence_id` but is configured with attn_uses_sequence_id=False. " + "This input will be ignored. If you want the model to use `sequence_id`, set attn_uses_sequence_id to True." 
) S = input_ids.size(1) assert ( S <= self.config.max_seq_len ), f"Cannot forward input with seq_len={S}, this model only supports seq_len<={self.config.max_seq_len}" tok_emb = self.wte(input_ids) if self.alibi: x = tok_emb else: past_position = 0 if past_key_values is not None: if len(past_key_values) != self.config.n_layers: raise ValueError( "past_key_values must provide a past_key_value for each attention " + f"layer in the network (len(past_key_values)={len(past_key_values)!r}; self.config.n_layers={self.config.n_layers!r})." ) past_position = past_key_values[0][0].size(1) if self.attn_impl == "torch": past_position = past_key_values[0][0].size(3) if S + past_position > self.config.max_seq_len: raise ValueError( f"Cannot forward input with past sequence length {past_position} and current sequence length {S + 1}, this model only supports total sequence length <= {self.config.max_seq_len}." ) pos = torch.arange( past_position, S + past_position, dtype=torch.long, device=input_ids.device, ).unsqueeze(0) if attention_mask is not None: pos = torch.clamp( pos - torch.cumsum((~attention_mask).to(torch.int32), dim=1)[ :, past_position: ], min=0, ) pos_emb = self.wpe(pos) x = tok_emb + pos_emb (attn_bias, attention_mask) = self._attn_bias( device=x.device, dtype=torch.float32, attention_mask=attention_mask, prefix_mask=prefix_mask, sequence_id=sequence_id, ) if use_cache and past_key_values is None: past_key_values = [() for _ in range(self.config.n_layers)] all_hidden_states = () if output_hidden_states else None all_self_attns = () if output_attentions else None for b_idx, block in enumerate(self.blocks): if output_hidden_states: assert all_hidden_states is not None all_hidden_states = all_hidden_states + (x,) past_key_value = ( past_key_values[b_idx] if past_key_values is not None else None ) (x, attn_weights, past_key_value) = block( x, past_key_value=past_key_value, attn_bias=attn_bias, attention_mask=attention_mask, is_causal=self.is_causal, ) if past_key_values is not None: past_key_values[b_idx] = past_key_value if output_attentions: assert all_self_attns is not None all_self_attns = all_self_attns + (attn_weights,) x = self.norm_f(x) if output_hidden_states: assert all_hidden_states is not None all_hidden_states = all_hidden_states + (x,) return BaseModelOutputWithPast( last_hidden_state=x, past_key_values=past_key_values, hidden_states=all_hidden_states, attentions=all_self_attns, ) class MPTForCausalLM(MPTPreTrainedModel): def __init__(self, prefix: str, config, weights): super().__init__(config) if not prefix: prefix = "transformer" else: prefix = f"{prefix}.transformer" if not config.tie_word_embeddings: raise ValueError("MPTForCausalLM only supports tied word embeddings") self.transformer = MPTModel(prefix, config, weights) self.lm_head = SpeculativeHead.load( config, prefix=f"{prefix}.wte", weights=weights ) self.logit_scale = None if config.logit_scale is not None: logit_scale = config.logit_scale if isinstance(logit_scale, str): if logit_scale == "inv_sqrt_d_model": logit_scale = 1 / math.sqrt(config.d_model) else: raise ValueError( f"logit_scale={logit_scale!r} is not recognized as an option; use numeric value or 'inv_sqrt_d_model'." 
) self.logit_scale = logit_scale def forward( self, input_ids: torch.LongTensor, past_key_values: Optional[List[Tuple[torch.FloatTensor]]] = None, attention_mask: Optional[torch.ByteTensor] = None, prefix_mask: Optional[torch.ByteTensor] = None, sequence_id: Optional[torch.LongTensor] = None, labels: Optional[torch.LongTensor] = None, return_dict: Optional[bool] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, use_cache: Optional[bool] = None, ): return_dict = ( return_dict if return_dict is not None else self.config.return_dict ) use_cache = use_cache if use_cache is not None else self.config.use_cache outputs = self.transformer( input_ids=input_ids, past_key_values=past_key_values, attention_mask=attention_mask, prefix_mask=prefix_mask, sequence_id=sequence_id, return_dict=return_dict, output_attentions=output_attentions, output_hidden_states=output_hidden_states, use_cache=use_cache, ) logits, speculative_logits = self.lm_head(outputs.last_hidden_state) if self.logit_scale is not None: if self.logit_scale == 0: warnings.warn( f"Multiplying logits by self.logit_scale={self.logit_scale!r}. This will produce uniform (uninformative) outputs." ) logits *= self.logit_scale loss = None if labels is not None: labels = torch.roll(labels, shifts=-1) labels[:, -1] = -100 loss = F.cross_entropy( logits.view(-1, logits.size(-1)), labels.to(logits.device).view(-1) ) return ( CausalLMOutputWithPast( loss=loss, logits=logits, past_key_values=outputs.past_key_values, hidden_states=outputs.hidden_states, attentions=outputs.attentions, ), speculative_logits, ) def prepare_inputs_for_generation( self, input_ids, past_key_values=None, inputs_embeds=None, **kwargs ): if inputs_embeds is not None: raise NotImplementedError("inputs_embeds is not implemented for MPT yet") attention_mask = kwargs["attention_mask"].bool() if attention_mask[:, -1].sum() != attention_mask.shape[0]: raise NotImplementedError( "MPT does not support generation with right padding." ) if self.transformer.attn_uses_sequence_id and self.training: sequence_id = torch.zeros_like(input_ids[:1]) else: sequence_id = None if past_key_values is not None: input_ids = input_ids[:, -1].unsqueeze(-1) if self.transformer.prefix_lm: prefix_mask = torch.ones_like(attention_mask) if kwargs.get("use_cache") is False: raise NotImplementedError( "MPT with prefix_lm=True does not support use_cache=False." ) else: prefix_mask = None return { "input_ids": input_ids, "attention_mask": attention_mask, "prefix_mask": prefix_mask, "sequence_id": sequence_id, "past_key_values": past_key_values, "use_cache": kwargs.get("use_cache", True), } @staticmethod def _reorder_cache(past_key_values, beam_idx): """Used by HuggingFace generate when using beam search with kv-caching. See https://github.com/huggingface/transformers/blob/3ec7a47664ebe40c40f4b722f6bb1cd30c3821ec/src/transformers/models/gpt2/modeling_gpt2.py#L1122-L1133 for an example in transformers. """ reordered_past = [] for layer_past in past_key_values: reordered_past += [ tuple( (past_state.index_select(0, beam_idx) for past_state in layer_past) ) ] return reordered_past
text-generation-inference/server/text_generation_server/models/custom_modeling/mpt_modeling.py/0
{ "file_path": "text-generation-inference/server/text_generation_server/models/custom_modeling/mpt_modeling.py", "repo_id": "text-generation-inference", "token_count": 23706 }
322
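The ALiBi helpers in the file above (`gen_slopes`, `build_alibi_bias`) are easy to sanity-check in isolation. A minimal sketch of the same math, with `gen_slopes` copied from the file and illustrative sizes that are not part of the repo:

```python
import math

import torch


def gen_slopes(n_heads, alibi_bias_max=8, device=None):
    # Round heads up to a power of two so the slopes form a clean geometric series.
    _n_heads = 2 ** math.ceil(math.log2(n_heads))
    m = torch.arange(1, _n_heads + 1, dtype=torch.float32, device=device)
    m = m.mul(alibi_bias_max / _n_heads)
    slopes = 1.0 / torch.pow(2, m)
    if _n_heads != n_heads:
        # When n_heads is not a power of two, interleave to keep the best-spread slopes.
        slopes = torch.concat([slopes[1::2], slopes[::2]])[:n_heads]
    return slopes.view(1, n_heads, 1, 1)


# Causal ALiBi bias for 4 heads over 6 positions: distances to the current
# token are already <= 0, so each head applies its own linear penalty.
seq_len, n_heads = 6, 4
dist = torch.arange(1 - seq_len, 1, dtype=torch.float32).view(1, 1, 1, seq_len)
bias = dist * gen_slopes(n_heads)
print(bias.shape)  # torch.Size([1, 4, 1, 6]), broadcast over queries at attention time
```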
import inspect import torch from abc import ABC, abstractmethod from typing import List, Tuple, Optional, TypeVar, Type, Dict from collections import defaultdict from transformers import PreTrainedTokenizerBase from loguru import logger from text_generation_server.models.globals import ( ATTENTION, PREFIX_CACHING, BLOCK_SIZE, PREFILL_CHUNKING, ) from text_generation_server.models.types import Batch, Generation from text_generation_server.utils.log import log_master from text_generation_server.utils.prefill_chunking import set_support_chunking from text_generation_server.utils.speculate import get_speculate from text_generation_server.pb.generate_pb2 import InfoResponse from text_generation_server.adapters.weights import LayerAdapterWeights BASE_MODEL_ADAPTER_ID = "__base_model__" B = TypeVar("B", bound=Batch) class Model(ABC): def __init__( self, model_id: str, model: torch.nn.Module, tokenizer: PreTrainedTokenizerBase, requires_padding: bool, dtype: torch.dtype, device: torch.device, rank: int = 0, world_size: int = 1, sliding_window: Optional[int] = None, speculate: Optional[int] = None, adapter_id: str = BASE_MODEL_ADAPTER_ID, support_chunking: bool = False, ): self.model_id = model_id self.model = model.eval() self.tokenizer = tokenizer # all_special_ids is not set correctly if the rust tokenizer is unpacked # TODO report this to transformers. other_special_ids = { id for id, token in tokenizer.added_tokens_decoder.items() if token.special } self.all_special_ids = set(tokenizer.all_special_ids) self.all_special_ids.update(other_special_ids) self.requires_padding = requires_padding self.dtype = dtype self.device = device self.rank = rank self.world_size = world_size self.sliding_window = sliding_window if sliding_window != -1 else None self.layer_to_adapter_weights: Dict[str, LayerAdapterWeights] = defaultdict( LayerAdapterWeights ) self.loaded_adapters = set() self.static_adapter_id = adapter_id if speculate is None: speculate = get_speculate() self.speculate = speculate support_chunking = support_chunking and PREFILL_CHUNKING if speculate != 0 and support_chunking: log_master( logger.warning, "Prefill chunking does not support speculation yet. " "Prefill chunking will be turned off", ) support_chunking = False if ( ATTENTION not in ["flashinfer", "flashdecoding", "flashdecoding-ipex"] and support_chunking ): log_master( logger.warning, "Prefill chunking is only supported with `flashinfer` or `flashdecoding` or `flashdecoding-ipex` attention types.", ) support_chunking = False log_master(logger.info, f"Using prefill chunking = {support_chunking}") self.support_chunking = support_chunking set_support_chunking(support_chunking) self.has_position_ids = ( inspect.signature(model.forward).parameters.get("position_ids", None) is not None ) self.check_initialized() @property def info(self) -> InfoResponse: if self.requires_padding and self.sliding_window is not None: raise NotImplementedError("sliding_window is not implemented with padding") return InfoResponse( requires_padding=self.requires_padding, dtype=str(self.dtype), device_type=self.device.type, window_size=None, # Setting this parameter to None disables the sliding-window block logic.
speculate=self.speculate, support_chunking=self.support_chunking, use_prefix_caching=PREFIX_CACHING, attention_impl=ATTENTION, block_size=BLOCK_SIZE, ) @property @abstractmethod def batch_type(self) -> Type[B]: raise NotImplementedError @abstractmethod def generate_token( self, batch: B ) -> Tuple[List[Generation], Optional[B], Tuple[int, int]]: raise NotImplementedError def warmup( self, batch: B, max_input_tokens: Optional[int], max_total_tokens: Optional[int] ) -> Tuple[Optional[int], int, int]: self.generate_token(batch) total = sum(len(i) for i in batch.input_ids) if max_total_tokens is None: max_total_tokens = total if max_input_tokens is None: max_input_tokens = max_total_tokens - 1 return None, max_input_tokens, max_total_tokens def decode_token( self, all_input_ids: List[int], prefix_offset: int = 0, read_offset: int = 0, skip_special_tokens: bool = False, ) -> Tuple[str, int, int]: """Hack to hopefully support generate_stream for the maximum number of tokenizers""" # The prefix text is necessary only to defeat cleanup algorithms in the decode # which decide to add a space or not depending on the surrounding ids. prefix_text = self.tokenizer.decode( all_input_ids[prefix_offset:read_offset], skip_special_tokens=skip_special_tokens, ) new_text = self.tokenizer.decode( all_input_ids[prefix_offset:], skip_special_tokens=skip_special_tokens ) if len(new_text) > len(prefix_text) and not new_text.endswith("�"): # utf-8 char at the end means it's a potential unfinished byte sequence # from byte fallback tokenization. # If it's in the middle, it's probably a real invalid id generated # by the model new_text = new_text[len(prefix_text) :] return new_text, read_offset, len(all_input_ids) else: return "", prefix_offset, read_offset def check_initialized(self): uninitialized_parameters = [] for n, p in self.model.named_parameters(): if p.data.device == torch.device("meta"): uninitialized_parameters.append(n) if uninitialized_parameters: raise RuntimeError( f"found uninitialized parameters in model {self.__class__.__name__}: {uninitialized_parameters}" )
text-generation-inference/server/text_generation_server/models/model.py/0
{ "file_path": "text-generation-inference/server/text_generation_server/models/model.py", "repo_id": "text-generation-inference", "token_count": 2819 }
323
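The sliding `prefix_offset`/`read_offset` window in `decode_token` above is the core of streaming detokenization. A stripped-down sketch of the same idea, assuming only an object with a `tokenizers`-style `decode(ids)` method (the function name is illustrative):

```python
from typing import List, Tuple


def stream_decode(
    tokenizer, all_input_ids: List[int], prefix_offset: int = 0, read_offset: int = 0
) -> Tuple[str, int, int]:
    # Decode a short prefix too, so the tokenizer's cleanup rules (e.g. whether
    # to insert a leading space) see the same context as the longer decode.
    prefix_text = tokenizer.decode(all_input_ids[prefix_offset:read_offset])
    new_text = tokenizer.decode(all_input_ids[prefix_offset:])
    if len(new_text) > len(prefix_text) and not new_text.endswith("\ufffd"):
        # Emit only the newly completed text and slide the window forward.
        return new_text[len(prefix_text):], read_offset, len(all_input_ids)
    # A trailing replacement character means an unfinished UTF-8 byte sequence
    # (byte-fallback tokenization): emit nothing and retry on the next token.
    return "", prefix_offset, read_offset
```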
import importlib from loguru import logger from kernels import load_kernel as hf_load_kernel from text_generation_server.utils.log import log_once def load_kernel(*, module: str, repo_id: str): """ Load a kernel. First try to load it as the given module (e.g. for local development), falling back to a locked Hub kernel. """ try: m = importlib.import_module(module) log_once(logger.info, f"Using local module for `{module}`") return m except ModuleNotFoundError: return hf_load_kernel(repo_id=repo_id) __all__ = ["load_kernel"]
text-generation-inference/server/text_generation_server/utils/kernels.py/0
{ "file_path": "text-generation-inference/server/text_generation_server/utils/kernels.py", "repo_id": "text-generation-inference", "token_count": 218 }
324
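Usage of `load_kernel` is a one-liner; a sketch in which both the module name and the Hub repo id are placeholders rather than real kernels:

```python
from text_generation_server.utils.kernels import load_kernel

# During local development, a `my_attention_kernels` package on PYTHONPATH
# wins; otherwise the locked Hub kernel is downloaded and loaded instead.
kernels = load_kernel(
    module="my_attention_kernels", repo_id="example-org/my-attention-kernels"
)
```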
# This CITATION.cff file was generated with cffinit. # Visit https://bit.ly/cffinit to generate yours today! cff-version: 1.2.0 title: HuggingFace's Tokenizers message: >- Fast State-of-the-Art Tokenizers optimized for Research and Production. type: software authors: - given-names: Anthony family-names: Moi email: m.anthony.moi@gmail.com affiliation: HuggingFace - given-names: Nicolas family-names: Patry affiliation: HuggingFace repository-code: 'https://github.com/huggingface/tokenizers' url: 'https://github.com/huggingface/tokenizers' repository: 'https://huggingface.co' abstract: >- Fast State-of-the-Art Tokenizers optimized for Research and Production. keywords: - Rust - Tokenizer - NLP license: Apache-2.0 commit: 37372b6 version: 0.13.4 date-released: '2023-04-05'
tokenizers/CITATION.cff/0
{ "file_path": "tokenizers/CITATION.cff", "repo_id": "tokenizers", "token_count": 293 }
325
<p align="center"> <br> <img src="https://huggingface.co/landing/assets/tokenizers/tokenizers-logo.png" width="600"/> <br> <p> <p align="center"> <a href="https://badge.fury.io/js/tokenizers"> <img alt="Build" src="https://badge.fury.io/js/tokenizers.svg"> </a> <a href="https://github.com/huggingface/tokenizers/blob/master/LICENSE"> <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/tokenizers.svg?color=blue"> </a> </p> <br> NodeJS implementation of today's most used tokenizers, with a focus on performance and versatility. Bindings over the [Rust](https://github.com/huggingface/tokenizers/tree/master/tokenizers) implementation. If you are interested in the High-level design, you can go check it there. ## Main features - Train new vocabularies and tokenize using 4 pre-made tokenizers (Bert WordPiece and the 3 most common BPE versions). - Extremely fast (both training and tokenization), thanks to the Rust implementation. Takes less than 20 seconds to tokenize a GB of text on a server's CPU. - Easy to use, but also extremely versatile. - Designed for research and production. - Normalization comes with alignments tracking. It's always possible to get the part of the original sentence that corresponds to a given token. - Does all the pre-processing: Truncate, Pad, add the special tokens your model needs. ## Installation ```bash npm install tokenizers@latest ``` ## Basic example ```ts import { Tokenizer } from "tokenizers"; const tokenizer = await Tokenizer.fromFile("tokenizer.json"); const wpEncoded = await tokenizer.encode("Who is John?"); console.log(wpEncoded.getLength()); console.log(wpEncoded.getTokens()); console.log(wpEncoded.getIds()); console.log(wpEncoded.getAttentionMask()); console.log(wpEncoded.getOffsets()); console.log(wpEncoded.getOverflowing()); console.log(wpEncoded.getSpecialTokensMask()); console.log(wpEncoded.getTypeIds()); console.log(wpEncoded.getWordIds()); ``` ## License [Apache License 2.0](../../LICENSE)
tokenizers/bindings/node/README.md/0
{ "file_path": "tokenizers/bindings/node/README.md", "repo_id": "tokenizers", "token_count": 651 }
326
/* eslint-disable @typescript-eslint/no-explicit-any */ /* eslint-disable @typescript-eslint/no-empty-function */ import { TruncationStrategy, BPE, Encoding, AddedToken, Tokenizer } from '../../' // jest.mock('../../bindings/tokenizer'); // jest.mock('../../bindings/models', () => ({ // __esModule: true, // Model: jest.fn() // })); // Or: // jest.mock('../../bindings/models', () => { // return require('../../bindings/__mocks__/models'); // }); // const TokenizerMock = mocked(Tokenizer); describe('AddedToken', () => { it('instantiates with only content', () => { const addToken = new AddedToken('test', false) expect(addToken.constructor.name).toEqual('AddedToken') }) it('instantiates with empty options', () => { const addToken = new AddedToken('test', false, {}) expect(addToken.constructor.name).toEqual('AddedToken') }) it('instantiates with options', () => { const addToken = new AddedToken('test', false, { leftStrip: true, rightStrip: true, singleWord: true, }) expect(addToken.constructor.name).toEqual('AddedToken') }) describe('getContent', () => { it('returns the string content of AddedToken', () => { const addedToken = new AddedToken('test', false) expect(addedToken.getContent()).toEqual('test') }) }) }) describe('Tokenizer', () => { it('has expected methods', () => { const model = BPE.empty() const tokenizer = new Tokenizer(model) expect(typeof Tokenizer.fromFile).toBe('function') expect(typeof Tokenizer.fromString).toBe('function') // expect(typeof Tokenizer.fromPretrained).toBe('function') expect(typeof tokenizer.addSpecialTokens).toBe('function') expect(typeof tokenizer.addTokens).toBe('function') expect(typeof tokenizer.decode).toBe('function') expect(typeof tokenizer.decodeBatch).toBe('function') expect(typeof tokenizer.disablePadding).toBe('function') expect(typeof tokenizer.disableTruncation).toBe('function') expect(typeof tokenizer.encode).toBe('function') expect(typeof tokenizer.encodeBatch).toBe('function') expect(typeof tokenizer.getDecoder).toBe('function') expect(typeof tokenizer.getNormalizer).toBe('function') expect(typeof tokenizer.getPostProcessor).toBe('function') expect(typeof tokenizer.getPreTokenizer).toBe('function') expect(typeof tokenizer.getVocab).toBe('function') expect(typeof tokenizer.getVocabSize).toBe('function') expect(typeof tokenizer.idToToken).toBe('function') expect(typeof tokenizer.runningTasks).toBe('function') expect(typeof tokenizer.save).toBe('function') expect(typeof tokenizer.setDecoder).toBe('function') expect(typeof tokenizer.setModel).toBe('function') expect(typeof tokenizer.setNormalizer).toBe('function') expect(typeof tokenizer.setPadding).toBe('function') expect(typeof tokenizer.setPostProcessor).toBe('function') expect(typeof tokenizer.setPreTokenizer).toBe('function') expect(typeof tokenizer.setTruncation).toBe('function') expect(typeof tokenizer.tokenToId).toBe('function') expect(typeof tokenizer.toString).toBe('function') expect(typeof tokenizer.train).toBe('function') }) // it('can be instantiated from the hub', async () => { // let tokenizer: Tokenizer // let output: Encoding // tokenizer = Tokenizer.fromPretrained('bert-base-cased') // output = await tokenizer.encode('Hey there dear friend!', null, { addSpecialTokens: false }) // expect(output.getTokens()).toEqual(['Hey', 'there', 'dear', 'friend', '!']) // tokenizer = Tokenizer.fromPretrained('anthony/tokenizers-test') // output = await tokenizer.encode('Hey there dear friend!', null, { addSpecialTokens: false }) // expect(output.getTokens()).toEqual(['hey', 'there', 'dear', 
'friend', '!']) // tokenizer = Tokenizer.fromPretrained('anthony/tokenizers-test', { // revision: 'gpt-2', // }) // output = await tokenizer.encode('Hey there dear friend!', null, { addSpecialTokens: false }) // expect(output.getTokens()).toEqual(['Hey', 'Ġthere', 'Ġdear', 'Ġfriend', '!']) // }, 10000) describe('addTokens', () => { it('accepts a list of string as new tokens when initial model is empty', () => { const model = BPE.empty() const tokenizer = new Tokenizer(model) const nbAdd = tokenizer.addTokens(['my', 'name', 'is', 'john', 'pair']) expect(nbAdd).toBe(5) }) it('accepts a list of AddedToken as new tokens when initial model is empty', () => { const model = BPE.empty() const tokenizer = new Tokenizer(model) const addedToken = new AddedToken('test', false) const nbAdd = tokenizer.addAddedTokens([addedToken]) expect(nbAdd).toBe(1) }) }) describe('encode', () => { let tokenizer: Tokenizer beforeEach(() => { // Clear all instances and calls to constructor and all methods: // TokenizerMock.mockClear(); const model = BPE.empty() tokenizer = new Tokenizer(model) tokenizer.addTokens(['my', 'name', 'is', 'john', 'pair']) }) it('accepts a pair of strings as parameters', async () => { const encoding = await tokenizer.encode('my name is john', 'pair') expect(encoding).toBeDefined() }) it('accepts a string with a null pair', async () => { const encoding = await tokenizer.encode('my name is john', null) expect(encoding).toBeDefined() }) // TODO // it("throws if we try to encode a pre-tokenized string without isPretokenized=true", async () => { // await expect((encode as any)(["my", "name", "is", "john"], null)).rejects.toThrow( // "encode with isPreTokenized=false expect string" // ); // }); // it("accepts a pre-tokenized string as parameter", async () => { // const encoding = await tokenizer.encode(["my", "name", "is", "john"], undefined, { // isPretokenized: true, // }); // expect(encoding).toBeDefined(); // }); // it("throws if we try to encodeBatch pre-tokenized strings without isPretokenized=true", async () => { // await expect((encodeBatch as any)([["my", "name", "is", "john"]])).rejects.toThrow( // "encodeBatch with isPretokenized=false expects input to be `EncodeInput[]` " + // "with `EncodeInput = string | [string, string]`" // ); // }); // it("accepts a pre-tokenized input in encodeBatch", async () => { // const encoding = await tokenizer.encodeBatch([["my", "name", "is", "john"]], { // isPretokenized: true, // }); // expect(encoding).toBeDefined(); // }); it('Encodes correctly if called with only one argument', async () => { const encoded = await tokenizer.encode('my name is john') expect(encoded.getIds()).toEqual([0, 1, 2, 3]) }) it('returns an Encoding', async () => { const encoding = await tokenizer.encode('my name is john', 'pair') expect(encoding.getAttentionMask()).toEqual([1, 1, 1, 1, 1]) const ids = encoding.getIds() expect(Array.isArray(ids)).toBe(true) expect(ids).toHaveLength(5) for (const id of ids) { expect(typeof id).toBe('number') } expect(encoding.getOffsets()).toEqual([ [0, 2], [3, 7], [8, 10], [11, 15], [0, 4], ]) expect(encoding.getOverflowing()).toEqual([]) expect(encoding.getSpecialTokensMask()).toEqual([0, 0, 0, 0, 0]) expect(encoding.getTokens()).toEqual(['my', 'name', 'is', 'john', 'pair']) expect(encoding.getTypeIds()).toEqual([0, 0, 0, 0, 1]) }) describe('when truncation is enabled', () => { it('truncates with default if no truncation options provided', async () => { tokenizer.setTruncation(2) const singleEncoding = await tokenizer.encode('my name is john', 
null) expect(singleEncoding.getTokens()).toEqual(['my', 'name']) const pairEncoding = await tokenizer.encode('my name is john', 'pair') expect(pairEncoding.getTokens()).toEqual(['my', 'pair']) }) it('throws an error with strategy `only_second` and no pair is encoded', async () => { tokenizer.setTruncation(2, { strategy: TruncationStrategy.OnlySecond }) await expect(tokenizer.encode('my name is john', null)).rejects.toThrow( 'Truncation error: Second sequence not provided', ) }) }) describe('when padding is enabled', () => { it('does not pad anything with default options', async () => { tokenizer.setPadding() const singleEncoding = await tokenizer.encode('my name', null) expect(singleEncoding.getTokens()).toEqual(['my', 'name']) const pairEncoding = await tokenizer.encode('my name', 'pair') expect(pairEncoding.getTokens()).toEqual(['my', 'name', 'pair']) }) it('pads to the right by default', async () => { tokenizer.setPadding({ maxLength: 5 }) const singleEncoding = await tokenizer.encode('my name', null) expect(singleEncoding.getTokens()).toEqual(['my', 'name', '[PAD]', '[PAD]', '[PAD]']) const pairEncoding = await tokenizer.encode('my name', 'pair') expect(pairEncoding.getTokens()).toEqual(['my', 'name', 'pair', '[PAD]', '[PAD]']) }) it('pads to multiple of the given value', async () => { tokenizer.setPadding({ padToMultipleOf: 8 }) const singleEncoding = await tokenizer.encode('my name', null) expect(singleEncoding.getTokens()).toHaveLength(8) const pairEncoding = await tokenizer.encode('my name', 'pair') expect(pairEncoding.getTokens()).toHaveLength(8) }) }) }) describe('decode', () => { let tokenizer: Tokenizer beforeEach(() => { const model = BPE.empty() tokenizer = new Tokenizer(model) tokenizer.addTokens(['my', 'name', 'is', 'john', 'pair']) }) it('has its callback called with the decoded string', async () => { const decode = tokenizer.decode.bind(tokenizer) expect(await decode([0, 1, 2, 3], true)).toEqual('my name is john') }) }) describe('decodeBatch', () => { let tokenizer: Tokenizer beforeEach(() => { const model = BPE.empty() tokenizer = new Tokenizer(model) tokenizer.addTokens(['my', 'name', 'is', 'john', 'pair']) }) it('has its callback called with the decoded string', async () => { const decodeBatch = tokenizer.decodeBatch.bind(tokenizer) expect(await decodeBatch([[0, 1, 2, 3], [4]], true)).toEqual(['my name is john', 'pair']) }) }) describe('getVocab', () => { it('accepts `undefined` as parameter', () => { const model = BPE.empty() const tokenizer = new Tokenizer(model) expect(tokenizer.getVocab(undefined)).toBeDefined() }) it('returns the vocabulary', () => { const model = BPE.empty() const tokenizer = new Tokenizer(model) tokenizer.addTokens(['my', 'name', 'is', 'john']) expect(tokenizer.getVocab(true)).toEqual({ my: 0, name: 1, is: 2, john: 3, }) }) }) describe('getVocabSize', () => { it('accepts `undefined` as parameter', () => { const model = BPE.empty() const tokenizer = new Tokenizer(model) expect(tokenizer.getVocabSize(undefined)).toBeDefined() }) }) describe('setTruncation', () => { it('returns the full truncation configuration', () => { const model = BPE.empty() const tokenizer = new Tokenizer(model) tokenizer.setTruncation(2) // TODO Return type is weird // const expectedConfig: TruncationOptions = { // maxLength: 2, // strategy: TruncationStrategy.LongestFirst, // stride: 0, // direction: TruncationDirection.Right, // }; // expect(truncation).toEqual(expectedConfig); }) }) describe('setPadding', () => { it('returns the full padding params', () => { const model = 
BPE.empty() const tokenizer = new Tokenizer(model) tokenizer.setPadding() // TODO Return type is weird // const expectedConfig: PaddingOptions = { // direction: PaddingDirection.Right, // padId: 0, // padToken: "[PAD]", // padTypeId: 0, // }; // expect(padding).toEqual(expectedConfig); }) }) describe('postProcess', () => { let tokenizer: Tokenizer let firstEncoding: Encoding let secondEncoding: Encoding beforeAll(() => { const model = BPE.empty() tokenizer = new Tokenizer(model) tokenizer.addTokens(['my', 'name', 'is', 'john', 'pair']) }) beforeEach(async () => { firstEncoding = await tokenizer.encode('my name is john', null) secondEncoding = await tokenizer.encode('pair', null) tokenizer.setTruncation(2) tokenizer.setPadding({ maxLength: 5 }) }) it('returns correctly with a single Encoding param', () => { const encoding = tokenizer.postProcess(firstEncoding) expect(encoding.getTokens()).toEqual(['my', 'name', '[PAD]', '[PAD]', '[PAD]']) }) it('returns correctly with `undefined` as second and third parameters', () => { const encoding = tokenizer.postProcess(firstEncoding, undefined, undefined) expect(encoding.getTokens()).toEqual(['my', 'name', '[PAD]', '[PAD]', '[PAD]']) }) it('returns correctly with 2 encodings', () => { const encoding = tokenizer.postProcess(firstEncoding, secondEncoding) expect(encoding.getTokens()).toEqual(['my', 'pair', '[PAD]', '[PAD]', '[PAD]']) }) }) })
tokenizers/bindings/node/lib/bindings/tokenizer.test.ts/0
{ "file_path": "tokenizers/bindings/node/lib/bindings/tokenizer.test.ts", "repo_id": "tokenizers", "token_count": 5268 }
327
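The truncation and padding behavior exercised by these tests has the same surface in the Python bindings; a quick sketch, assuming an already-built `tokenizer.json` (the path is illustrative):

```python
from tokenizers import Tokenizer

tokenizer = Tokenizer.from_file("tokenizer.json")
tokenizer.enable_truncation(max_length=2)
tokenizer.enable_padding(length=5)  # pads with "[PAD]" / id 0 by default

enc = tokenizer.encode("my name is john")
print(enc.tokens)  # truncated to 2 tokens, then right-padded to length 5
```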
# `tokenizers-linux-arm64-musl` This is the **aarch64-unknown-linux-musl** binary for `tokenizers`.
tokenizers/bindings/node/npm/linux-arm64-musl/README.md/0
{ "file_path": "tokenizers/bindings/node/npm/linux-arm64-musl/README.md", "repo_id": "tokenizers", "token_count": 37 }
328
use crate::tokenizer::PaddingOptions; use napi::bindgen_prelude::*; use napi_derive::napi; use tokenizers::utils::truncation::TruncationDirection; use tokenizers::Encoding; #[napi(js_name = "Encoding")] #[derive(Clone, Default)] pub struct JsEncoding { pub(crate) encoding: Option<Encoding>, } impl From<Encoding> for JsEncoding { fn from(value: Encoding) -> Self { Self { encoding: Some(value), } } } impl TryFrom<JsEncoding> for Encoding { type Error = Error; fn try_from(value: JsEncoding) -> Result<Self> { value .encoding .ok_or(Error::from_reason("Uninitialized encoding".to_string())) } } #[napi(string_enum, js_name = "TruncationDirection")] pub enum JsTruncationDirection { Left, Right, } impl From<JsTruncationDirection> for TruncationDirection { fn from(value: JsTruncationDirection) -> Self { match value { JsTruncationDirection::Left => TruncationDirection::Left, JsTruncationDirection::Right => TruncationDirection::Right, } } } impl TryFrom<String> for JsTruncationDirection { type Error = Error; fn try_from(value: String) -> Result<JsTruncationDirection> { match value.as_str() { "left" => Ok(JsTruncationDirection::Left), "right" => Ok(JsTruncationDirection::Right), s => Err(Error::from_reason(format!( "{s:?} is not a valid direction" ))), } } } #[napi(string_enum, js_name = "TruncationStrategy")] pub enum JsTruncationStrategy { LongestFirst, OnlyFirst, OnlySecond, } impl From<JsTruncationStrategy> for tokenizers::TruncationStrategy { fn from(value: JsTruncationStrategy) -> Self { match value { JsTruncationStrategy::LongestFirst => tokenizers::TruncationStrategy::LongestFirst, JsTruncationStrategy::OnlyFirst => tokenizers::TruncationStrategy::OnlyFirst, JsTruncationStrategy::OnlySecond => tokenizers::TruncationStrategy::OnlySecond, } } } #[napi] impl JsEncoding { #[napi(constructor)] pub fn new() -> Self { Self { encoding: None } } #[napi] pub fn get_length(&self) -> u32 { self .encoding .as_ref() .expect("Uninitialized Encoding") .get_ids() .len() as u32 } #[napi] pub fn get_n_sequences(&self) -> u32 { self .encoding .as_ref() .expect("Uninitialized Encoding") .n_sequences() as u32 } #[napi] pub fn get_ids(&self) -> Vec<u32> { self .encoding .as_ref() .expect("Uninitialized Encoding") .get_ids() .to_vec() } #[napi] pub fn get_type_ids(&self) -> Vec<u32> { self .encoding .as_ref() .expect("Uninitialized Encoding") .get_type_ids() .to_vec() } #[napi] pub fn get_attention_mask(&self) -> Vec<u32> { self .encoding .as_ref() .expect("Uninitialized Encoding") .get_attention_mask() .to_vec() } #[napi] pub fn get_special_tokens_mask(&self) -> Vec<u32> { self .encoding .as_ref() .expect("Uninitialized Encoding") .get_special_tokens_mask() .to_vec() } #[napi] pub fn get_tokens(&self) -> Vec<String> { self .encoding .as_ref() .expect("Uninitialized Encoding") .get_tokens() .to_vec() } #[napi] pub fn get_offsets(&self) -> Vec<Vec<u32>> { self .encoding .as_ref() .expect("Uninitialized Encoding") .get_offsets() .iter() .map(|(a, b)| vec![*a as u32, *b as u32]) .collect() } #[napi] pub fn get_word_ids(&self) -> Vec<Option<u32>> { self .encoding .as_ref() .expect("Uninitialized Encoding") .get_word_ids() .to_vec() } #[napi] pub fn char_to_token(&self, pos: u32, seq_id: Option<u32>) -> Option<u32> { let seq_id = seq_id.unwrap_or(0); self .encoding .as_ref() .expect("Uninitialized Encoding") .char_to_token(pos as usize, seq_id as usize) .map(|i| i as u32) } #[napi] pub fn char_to_word(&self, pos: u32, seq_id: Option<u32>) -> Option<u32> { let seq_id = seq_id.unwrap_or(0); self .encoding .as_ref() 
.expect("Uninitialized Encoding") .char_to_word(pos as usize, seq_id as usize) } #[napi] pub fn pad(&mut self, length: u32, options: Option<PaddingOptions>) -> Result<()> { let params: tokenizers::PaddingParams = options.unwrap_or_default().try_into()?; self.encoding.as_mut().expect("Uninitialized Encoding").pad( length as usize, params.pad_id, params.pad_type_id, &params.pad_token, params.direction, ); Ok(()) } #[napi] pub fn truncate( &mut self, length: u32, stride: Option<u32>, direction: Option<Either<String, JsTruncationDirection>>, ) -> Result<()> { let stride = stride.unwrap_or_default(); let direction = match direction { None => TruncationDirection::Left, Some(Either::A(s)) => match s.as_str() { "left" => TruncationDirection::Left, "right" => TruncationDirection::Right, d => { return Err(Error::from_reason(format!( "{d} is not a valid truncation direction" ))); } }, Some(Either::B(t)) => t.into(), }; self .encoding .as_mut() .expect("Uninitialized Encoding") .truncate(length as usize, stride as usize, direction); Ok(()) } #[napi(ts_return_type = "[number, number] | null | undefined")] pub fn word_to_tokens(&self, env: Env, word: u32, seq_id: Option<u32>) -> Result<Option<Array>> { let seq_id = seq_id.unwrap_or(0); if let Some((a, b)) = self .encoding .as_ref() .expect("Uninitialized Encoding") .word_to_tokens(word, seq_id as usize) { let mut arr = env.create_array(2)?; arr.set(0, env.create_uint32(a as u32)?)?; arr.set(1, env.create_uint32(b as u32)?)?; Ok(Some(arr)) } else { Ok(None) } } #[napi(ts_return_type = "[number, number] | null | undefined")] pub fn word_to_chars(&self, env: Env, word: u32, seq_id: Option<u32>) -> Result<Option<Array>> { let seq_id = seq_id.unwrap_or(0); if let Some((a, b)) = self .encoding .as_ref() .expect("Uninitialized Encoding") .word_to_chars(word, seq_id as usize) { let mut arr = env.create_array(2)?; arr.set(0, env.create_uint32(a as u32)?)?; arr.set(1, env.create_uint32(b as u32)?)?; Ok(Some(arr)) } else { Ok(None) } } #[napi(ts_return_type = "[number, [number, number]] | null | undefined")] pub fn token_to_chars(&self, env: Env, token: u32) -> Result<Option<Array>> { if let Some((_, (start, stop))) = self .encoding .as_ref() .expect("Uninitialized Encoding") .token_to_chars(token as usize) { let mut offsets = env.create_array(2)?; offsets.set(0, env.create_uint32(start as u32)?)?; offsets.set(1, env.create_uint32(stop as u32)?)?; Ok(Some(offsets)) } else { Ok(None) } } #[napi] pub fn token_to_word(&self, token: u32) -> Result<Option<u32>> { if let Some((_, index)) = self .encoding .as_ref() .expect("Uninitialized Encoding") .token_to_word(token as usize) { Ok(Some(index)) } else { Ok(None) } } #[napi] pub fn get_overflowing(&self) -> Vec<JsEncoding> { self .encoding .as_ref() .expect("Uninitialized Encoding") .get_overflowing() .clone() .into_iter() .map(|enc| JsEncoding { encoding: Some(enc), }) .collect() } #[napi] pub fn get_sequence_ids(&self) -> Vec<Option<u32>> { self .encoding .as_ref() .expect("Uninitialized Encoding") .get_sequence_ids() .into_iter() .map(|s| s.map(|id| id as u32)) .collect() } #[napi] pub fn token_to_sequence(&self, token: u32) -> Option<u32> { self .encoding .as_ref() .expect("Uninitialized Encoding") .token_to_sequence(token as usize) .map(|s| s as u32) } }
tokenizers/bindings/node/src/encoding.rs/0
{ "file_path": "tokenizers/bindings/node/src/encoding.rs", "repo_id": "tokenizers", "token_count": 3778 }
329
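These N-API methods mirror the `Encoding` surface exposed by the other bindings; a sketch of the equivalent calls from Python (the tokenizer file path is illustrative):

```python
from tokenizers import Tokenizer

tokenizer = Tokenizer.from_file("tokenizer.json")
enc = tokenizer.encode("Hello there!")

print(enc.ids, enc.tokens, enc.offsets)
print(enc.char_to_token(6))        # index of the token covering character 6
enc.truncate(2, stride=0, direction="right")
enc.pad(4, pad_token="[PAD]")      # right-pads to length 4 by default
```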
from .. import decoders Decoder = decoders.Decoder ByteLevel = decoders.ByteLevel Replace = decoders.Replace WordPiece = decoders.WordPiece ByteFallback = decoders.ByteFallback Fuse = decoders.Fuse Strip = decoders.Strip Metaspace = decoders.Metaspace BPEDecoder = decoders.BPEDecoder CTC = decoders.CTC Sequence = decoders.Sequence DecodeStream = decoders.DecodeStream
tokenizers/bindings/python/py_src/tokenizers/decoders/__init__.py/0
{ "file_path": "tokenizers/bindings/python/py_src/tokenizers/decoders/__init__.py", "repo_id": "tokenizers", "token_count": 140 }
330
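`DecodeStream`, re-exported above, is the streaming counterpart of `Tokenizer.decode`: it keeps decode state between calls so text is only emitted once the underlying bytes form complete characters. A sketch, assuming a loaded tokenizer and recent `tokenizers` with this API:

```python
from tokenizers import Tokenizer
from tokenizers.decoders import DecodeStream

tokenizer = Tokenizer.from_file("tokenizer.json")  # illustrative path
stream = DecodeStream(skip_special_tokens=True)

for token_id in tokenizer.encode("Hello there!").ids:
    # step() returns the newly completed text for this id, or None while the
    # accumulated bytes do not yet form a full UTF-8 character.
    chunk = stream.step(tokenizer, token_id)
    if chunk is not None:
        print(chunk, end="")
```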
# Generated content DO NOT EDIT class PostProcessor: """ Base class for all post-processors This class is not supposed to be instantiated directly. Instead, any implementation of a PostProcessor will return an instance of this class when instantiated. """ def num_special_tokens_to_add(self, is_pair): """ Return the number of special tokens that would be added for single/pair sentences. Args: is_pair (:obj:`bool`): Whether the input would be a pair of sequences Returns: :obj:`int`: The number of tokens to add """ pass def process(self, encoding, pair=None, add_special_tokens=True): """ Post-process the given encodings, generating the final one Args: encoding (:class:`~tokenizers.Encoding`): The encoding for the first sequence pair (:class:`~tokenizers.Encoding`, `optional`): The encoding for the pair sequence add_special_tokens (:obj:`bool`): Whether to add the special tokens Return: :class:`~tokenizers.Encoding`: The final encoding """ pass class BertProcessing(PostProcessor): """ This post-processor takes care of adding the special tokens needed by a Bert model: - a SEP token - a CLS token Args: sep (:obj:`Tuple[str, int]`): A tuple with the string representation of the SEP token, and its id cls (:obj:`Tuple[str, int]`): A tuple with the string representation of the CLS token, and its id """ def __init__(self, sep, cls): pass def num_special_tokens_to_add(self, is_pair): """ Return the number of special tokens that would be added for single/pair sentences. Args: is_pair (:obj:`bool`): Whether the input would be a pair of sequences Returns: :obj:`int`: The number of tokens to add """ pass def process(self, encoding, pair=None, add_special_tokens=True): """ Post-process the given encodings, generating the final one Args: encoding (:class:`~tokenizers.Encoding`): The encoding for the first sequence pair (:class:`~tokenizers.Encoding`, `optional`): The encoding for the pair sequence add_special_tokens (:obj:`bool`): Whether to add the special tokens Return: :class:`~tokenizers.Encoding`: The final encoding """ pass class ByteLevel(PostProcessor): """ This post-processor takes care of trimming the offsets. By default, the ByteLevel BPE might include whitespaces in the produced tokens. If you don't want the offsets to include these whitespaces, then this PostProcessor must be used. Args: trim_offsets (:obj:`bool`): Whether to trim the whitespaces from the produced offsets. """ def __init__(self, trim_offsets=True): pass def num_special_tokens_to_add(self, is_pair): """ Return the number of special tokens that would be added for single/pair sentences. Args: is_pair (:obj:`bool`): Whether the input would be a pair of sequences Returns: :obj:`int`: The number of tokens to add """ pass def process(self, encoding, pair=None, add_special_tokens=True): """ Post-process the given encodings, generating the final one Args: encoding (:class:`~tokenizers.Encoding`): The encoding for the first sequence pair (:class:`~tokenizers.Encoding`, `optional`): The encoding for the pair sequence add_special_tokens (:obj:`bool`): Whether to add the special tokens Return: :class:`~tokenizers.Encoding`: The final encoding """ pass class RobertaProcessing(PostProcessor): """ This post-processor takes care of adding the special tokens needed by a Roberta model: - a SEP token - a CLS token It also takes care of trimming the offsets. By default, the ByteLevel BPE might include whitespaces in the produced tokens. 
If you don't want the offsets to include these whitespaces, then this PostProcessor should be initialized with :obj:`trim_offsets=True` Args: sep (:obj:`Tuple[str, int]`): A tuple with the string representation of the SEP token, and its id cls (:obj:`Tuple[str, int]`): A tuple with the string representation of the CLS token, and its id trim_offsets (:obj:`bool`, `optional`, defaults to :obj:`True`): Whether to trim the whitespaces from the produced offsets. add_prefix_space (:obj:`bool`, `optional`, defaults to :obj:`True`): Whether the add_prefix_space option was enabled during pre-tokenization. This is relevant because it defines the way the offsets are trimmed out. """ def __init__(self, sep, cls, trim_offsets=True, add_prefix_space=True): pass def num_special_tokens_to_add(self, is_pair): """ Return the number of special tokens that would be added for single/pair sentences. Args: is_pair (:obj:`bool`): Whether the input would be a pair of sequences Returns: :obj:`int`: The number of tokens to add """ pass def process(self, encoding, pair=None, add_special_tokens=True): """ Post-process the given encodings, generating the final one Args: encoding (:class:`~tokenizers.Encoding`): The encoding for the first sequence pair (:class:`~tokenizers.Encoding`, `optional`): The encoding for the pair sequence add_special_tokens (:obj:`bool`): Whether to add the special tokens Return: :class:`~tokenizers.Encoding`: The final encoding """ pass class Sequence(PostProcessor): """ Sequence Processor Args: processors (:obj:`List[PostProcessor]`) The processors that need to be chained """ def __init__(self, processors): pass def num_special_tokens_to_add(self, is_pair): """ Return the number of special tokens that would be added for single/pair sentences. Args: is_pair (:obj:`bool`): Whether the input would be a pair of sequences Returns: :obj:`int`: The number of tokens to add """ pass def process(self, encoding, pair=None, add_special_tokens=True): """ Post-process the given encodings, generating the final one Args: encoding (:class:`~tokenizers.Encoding`): The encoding for the first sequence pair (:class:`~tokenizers.Encoding`, `optional`): The encoding for the pair sequence add_special_tokens (:obj:`bool`): Whether to add the special tokens Return: :class:`~tokenizers.Encoding`: The final encoding """ pass class TemplateProcessing(PostProcessor): """ Provides a way to specify templates in order to add the special tokens to each input sequence as relevant. Let's take :obj:`BERT` tokenizer as an example. It uses two special tokens, used to delimitate each sequence. :obj:`[CLS]` is always used at the beginning of the first sequence, and :obj:`[SEP]` is added at the end of both the first, and the pair sequences. The final result looks like this: - Single sequence: :obj:`[CLS] Hello there [SEP]` - Pair sequences: :obj:`[CLS] My name is Anthony [SEP] What is my name? [SEP]` With the type ids as following:: [CLS] ... [SEP] ... [SEP] 0 0 0 1 1 You can achieve such behavior using a TemplateProcessing:: TemplateProcessing( single="[CLS] $0 [SEP]", pair="[CLS] $A [SEP] $B:1 [SEP]:1", special_tokens=[("[CLS]", 1), ("[SEP]", 0)], ) In this example, each input sequence is identified using a ``$`` construct. This identifier lets us specify each input sequence, and the type_id to use. When nothing is specified, it uses the default values. 
Here are the different ways to specify it: - Specifying the sequence, with default ``type_id == 0``: ``$A`` or ``$B`` - Specifying the `type_id` with default ``sequence == A``: ``$0``, ``$1``, ``$2``, ... - Specifying both: ``$A:0``, ``$B:1``, ... The same construct is used for special tokens: ``<identifier>(:<type_id>)?``. **Warning**: You must ensure that you are giving the correct tokens/ids as these will be added to the Encoding without any further check. If the given ids correspond to something totally different in a `Tokenizer` using this `PostProcessor`, it might lead to unexpected results. Args: single (:obj:`Template`): The template used for single sequences pair (:obj:`Template`): The template used when both sequences are specified special_tokens (:obj:`Tokens`): The list of special tokens used in each sequences Types: Template (:obj:`str` or :obj:`List`): - If a :obj:`str` is provided, the whitespace is used as delimiter between tokens - If a :obj:`List[str]` is provided, a list of tokens Tokens (:obj:`List[Union[Tuple[int, str], Tuple[str, int], dict]]`): - A :obj:`Tuple` with both a token and its associated ID, in any order - A :obj:`dict` with the following keys: - "id": :obj:`str` => The special token id, as specified in the Template - "ids": :obj:`List[int]` => The associated IDs - "tokens": :obj:`List[str]` => The associated tokens The given dict expects the provided :obj:`ids` and :obj:`tokens` lists to have the same length. """ def __init__(self, single, pair, special_tokens): pass def num_special_tokens_to_add(self, is_pair): """ Return the number of special tokens that would be added for single/pair sentences. Args: is_pair (:obj:`bool`): Whether the input would be a pair of sequences Returns: :obj:`int`: The number of tokens to add """ pass def process(self, encoding, pair=None, add_special_tokens=True): """ Post-process the given encodings, generating the final one Args: encoding (:class:`~tokenizers.Encoding`): The encoding for the first sequence pair (:class:`~tokenizers.Encoding`, `optional`): The encoding for the pair sequence add_special_tokens (:obj:`bool`): Whether to add the special tokens Return: :class:`~tokenizers.Encoding`: The final encoding """ pass
tokenizers/bindings/python/py_src/tokenizers/processors/__init__.pyi/0
{ "file_path": "tokenizers/bindings/python/py_src/tokenizers/processors/__init__.pyi", "repo_id": "tokenizers", "token_count": 4779 }
331
use std::collections::HashMap; use std::path::{Path, PathBuf}; use std::sync::{Arc, RwLock}; use crate::token::PyToken; use crate::trainers::PyTrainer; use ahash::AHashMap; use pyo3::exceptions; use pyo3::prelude::*; use pyo3::types::*; use serde::{Deserialize, Serialize}; use tk::models::bpe::{BpeBuilder, Merges, BPE}; use tk::models::unigram::Unigram; use tk::models::wordlevel::WordLevel; use tk::models::wordpiece::{WordPiece, WordPieceBuilder}; use tk::models::ModelWrapper; use tk::{Model, Token}; use tokenizers as tk; use super::error::{deprecation_warning, ToPyResult}; /// Base class for all models /// /// The model represents the actual tokenization algorithm. This is the part that /// will contain and manage the learned vocabulary. /// /// This class cannot be constructed directly. Please use one of the concrete models. #[pyclass(module = "tokenizers.models", name = "Model", subclass)] #[derive(Clone, Serialize, Deserialize)] #[serde(transparent)] pub struct PyModel { pub model: Arc<RwLock<ModelWrapper>>, } impl PyModel { pub(crate) fn get_as_subtype(&self, py: Python<'_>) -> PyResult<PyObject> { let base = self.clone(); Ok(match *self.model.as_ref().read().unwrap() { ModelWrapper::BPE(_) => Py::new(py, (PyBPE {}, base))? .into_pyobject(py)? .into_any() .into(), ModelWrapper::WordPiece(_) => Py::new(py, (PyWordPiece {}, base))? .into_pyobject(py)? .into_any() .into(), ModelWrapper::WordLevel(_) => Py::new(py, (PyWordLevel {}, base))? .into_pyobject(py)? .into_any() .into(), ModelWrapper::Unigram(_) => Py::new(py, (PyUnigram {}, base))? .into_pyobject(py)? .into_any() .into(), }) } } impl Model for PyModel { type Trainer = PyTrainer; fn tokenize(&self, tokens: &str) -> tk::Result<Vec<Token>> { self.model.read().unwrap().tokenize(tokens) } fn token_to_id(&self, token: &str) -> Option<u32> { self.model.read().unwrap().token_to_id(token) } fn id_to_token(&self, id: u32) -> Option<String> { self.model.read().unwrap().id_to_token(id) } fn get_vocab(&self) -> HashMap<String, u32> { self.model.read().unwrap().get_vocab() } fn get_vocab_size(&self) -> usize { self.model.read().unwrap().get_vocab_size() } fn save(&self, folder: &Path, name: Option<&str>) -> tk::Result<Vec<PathBuf>> { self.model.read().unwrap().save(folder, name) } fn get_trainer(&self) -> Self::Trainer { self.model.read().unwrap().get_trainer().into() } } impl<I> From<I> for PyModel where I: Into<ModelWrapper>, { fn from(model: I) -> Self { Self { model: Arc::new(RwLock::new(model.into())), } } } #[pymethods] impl PyModel { #[new] #[pyo3(text_signature = None)] fn __new__() -> Self { // Instantiate a default empty model. This doesn't really make sense, but we need // to be able to instantiate an empty model for pickle capabilities. 
PyModel { model: Arc::new(RwLock::new(BPE::default().into())), } } fn __getstate__(&self, py: Python) -> PyResult<PyObject> { let data = serde_json::to_string(&self.model).map_err(|e| { exceptions::PyException::new_err(format!("Error while attempting to pickle Model: {e}")) })?; Ok(PyBytes::new(py, data.as_bytes()).into()) } fn __setstate__(&mut self, py: Python, state: PyObject) -> PyResult<()> { match state.extract::<&[u8]>(py) { Ok(s) => { self.model = serde_json::from_slice(s).map_err(|e| { exceptions::PyException::new_err(format!( "Error while attempting to unpickle Model: {e}" )) })?; Ok(()) } Err(e) => Err(e), } } /// Tokenize a sequence /// /// Args: /// sequence (:obj:`str`): /// A sequence to tokenize /// /// Returns: /// A :obj:`List` of :class:`~tokenizers.Token`: The generated tokens #[pyo3(text_signature = "(self, sequence)")] fn tokenize(&self, sequence: &str) -> PyResult<Vec<PyToken>> { Ok(ToPyResult(self.model.read().unwrap().tokenize(sequence)) .into_py()? .into_iter() .map(|t| t.into()) .collect()) } /// Get the ID associated to a token /// /// Args: /// token (:obj:`str`): /// A token to convert to an ID /// /// Returns: /// :obj:`int`: The ID associated to the token #[pyo3(text_signature = "(self, tokens)")] fn token_to_id(&self, token: &str) -> Option<u32> { self.model.read().unwrap().token_to_id(token) } /// Get the token associated to an ID /// /// Args: /// id (:obj:`int`): /// An ID to convert to a token /// /// Returns: /// :obj:`str`: The token associated to the ID #[pyo3(text_signature = "(self, id)")] fn id_to_token(&self, id: u32) -> Option<String> { self.model.read().unwrap().id_to_token(id) } /// Save the current model /// /// Save the current model in the given folder, using the given prefix for the various /// files that will get created. /// Any file with the same name that already exists in this folder will be overwritten. /// /// Args: /// folder (:obj:`str`): /// The path to the target folder in which to save the various files /// /// prefix (:obj:`str`, `optional`): /// An optional prefix, used to prefix each file name /// /// Returns: /// :obj:`List[str]`: The list of saved files #[pyo3(signature = (folder, prefix=None, name=None), text_signature = "(self, folder, prefix)")] fn save<'a>( &self, py: Python<'_>, folder: &str, mut prefix: Option<&'a str>, name: Option<&'a str>, ) -> PyResult<Vec<String>> { if name.is_some() { deprecation_warning( py, "0.10.0", "Parameter `name` of Model.save has been renamed `prefix`", )?; if prefix.is_none() { prefix = name; } } let saved: PyResult<Vec<_>> = ToPyResult(self.model.read().unwrap().save(Path::new(folder), prefix)).into(); Ok(saved? .into_iter() .map(|path| path.to_string_lossy().into_owned()) .collect()) } /// Get the associated :class:`~tokenizers.trainers.Trainer` /// /// Retrieve the :class:`~tokenizers.trainers.Trainer` associated to this /// :class:`~tokenizers.models.Model`. 
/// /// Returns: /// :class:`~tokenizers.trainers.Trainer`: The Trainer used to train this model #[pyo3(text_signature = "(self)")] fn get_trainer(&self, py: Python<'_>) -> PyResult<PyObject> { PyTrainer::from(self.model.read().unwrap().get_trainer()).get_as_subtype(py) } fn __repr__(&self) -> PyResult<String> { crate::utils::serde_pyo3::repr(self) .map_err(|e| exceptions::PyException::new_err(e.to_string())) } fn __str__(&self) -> PyResult<String> { crate::utils::serde_pyo3::to_string(self) .map_err(|e| exceptions::PyException::new_err(e.to_string())) } } /// An implementation of the BPE (Byte-Pair Encoding) algorithm /// /// Args: /// vocab (:obj:`Dict[str, int]`, `optional`): /// A dictionary of string keys and their ids :obj:`{"am": 0,...}` /// /// merges (:obj:`List[Tuple[str, str]]`, `optional`): /// A list of pairs of tokens (:obj:`Tuple[str, str]`) :obj:`[("a", "b"),...]` /// /// cache_capacity (:obj:`int`, `optional`): /// The number of words that the BPE cache can contain. The cache /// speeds up the process by keeping the result of the merge operations /// for a number of words. /// /// dropout (:obj:`float`, `optional`): /// A float between 0 and 1 that represents the BPE dropout to use. /// /// unk_token (:obj:`str`, `optional`): /// The unknown token to be used by the model. /// /// continuing_subword_prefix (:obj:`str`, `optional`): /// The prefix to attach to subword units that don't represent the beginning of a word. /// /// end_of_word_suffix (:obj:`str`, `optional`): /// The suffix to attach to subword units that represent the end of a word. /// /// fuse_unk (:obj:`bool`, `optional`): /// Whether to fuse any subsequent unknown tokens into a single one. /// /// byte_fallback (:obj:`bool`, `optional`): /// Whether to use the spm byte-fallback trick (defaults to False) /// /// ignore_merges (:obj:`bool`, `optional`): /// Whether or not to match tokens with the vocab before using merges. #[pyclass(extends=PyModel, module = "tokenizers.models", name = "BPE")] pub struct PyBPE {} impl PyBPE { fn with_builder( mut builder: BpeBuilder, kwargs: Option<&Bound<'_, PyDict>>, ) -> PyResult<(Self, PyModel)> { if let Some(kwargs) = kwargs { for (key, value) in kwargs { let key: String = key.extract()?; match key.as_ref() { "cache_capacity" => builder = builder.cache_capacity(value.extract()?), "dropout" => { if let Some(dropout) = value.extract()? { builder = builder.dropout(dropout); } } "unk_token" => { if let Some(unk) = value.extract()? { builder = builder.unk_token(unk); } } "continuing_subword_prefix" => { builder = builder.continuing_subword_prefix(value.extract()?) } "end_of_word_suffix" => builder = builder.end_of_word_suffix(value.extract()?), "fuse_unk" => builder = builder.fuse_unk(value.extract()?), "byte_fallback" => builder = builder.byte_fallback(value.extract()?), "ignore_merges" => builder = builder.ignore_merges(value.extract()?), _ => println!("Ignored unknown kwarg option {key}"), }; } } match builder.build() { Err(e) => Err(exceptions::PyException::new_err(format!( "Error while initializing BPE: {e}" ))), Ok(bpe) => Ok((PyBPE {}, bpe.into())), } } } macro_rules! getter { ($self: ident, $variant: ident, $($name: tt)+) => {{ let super_ = $self.as_ref(); let model = super_.model.read().unwrap(); if let ModelWrapper::$variant(ref mo) = *model { mo.$($name)+ } else { unreachable!() } }}; } macro_rules!
setter { ($self: ident, $variant: ident, $name: ident, $value: expr) => {{ let super_ = $self.as_ref(); let mut model = super_.model.write().unwrap(); if let ModelWrapper::$variant(ref mut mo) = *model { mo.$name = $value; } }}; } #[derive(FromPyObject)] enum PyVocab { Vocab(HashMap<String, u32>), Filename(String), } #[derive(FromPyObject)] enum PyMerges { Merges(Merges), Filename(String), } #[pymethods] impl PyBPE { #[getter] fn get_dropout(self_: PyRef<Self>) -> Option<f32> { getter!(self_, BPE, dropout) } #[setter] fn set_dropout(self_: PyRef<Self>, dropout: Option<f32>) { setter!(self_, BPE, dropout, dropout); } #[getter] fn get_unk_token(self_: PyRef<Self>) -> Option<String> { getter!(self_, BPE, unk_token.clone()) } #[setter] fn set_unk_token(self_: PyRef<Self>, unk_token: Option<String>) { setter!(self_, BPE, unk_token, unk_token); } #[getter] fn get_continuing_subword_prefix(self_: PyRef<Self>) -> Option<String> { getter!(self_, BPE, continuing_subword_prefix.clone()) } #[setter] fn set_continuing_subword_prefix( self_: PyRef<Self>, continuing_subword_prefix: Option<String>, ) { setter!( self_, BPE, continuing_subword_prefix, continuing_subword_prefix ); } #[getter] fn get_end_of_word_suffix(self_: PyRef<Self>) -> Option<String> { getter!(self_, BPE, end_of_word_suffix.clone()) } #[setter] fn set_end_of_word_suffix(self_: PyRef<Self>, end_of_word_suffix: Option<String>) { setter!(self_, BPE, end_of_word_suffix, end_of_word_suffix); } #[getter] fn get_fuse_unk(self_: PyRef<Self>) -> bool { getter!(self_, BPE, fuse_unk) } #[setter] fn set_fuse_unk(self_: PyRef<Self>, fuse_unk: bool) { setter!(self_, BPE, fuse_unk, fuse_unk); } #[getter] fn get_byte_fallback(self_: PyRef<Self>) -> bool { getter!(self_, BPE, byte_fallback) } #[setter] fn set_byte_fallback(self_: PyRef<Self>, byte_fallback: bool) { setter!(self_, BPE, byte_fallback, byte_fallback); } #[getter] fn get_ignore_merges(self_: PyRef<Self>) -> bool { getter!(self_, BPE, ignore_merges) } #[setter] fn set_ignore_merges(self_: PyRef<Self>, ignore_merges: bool) { setter!(self_, BPE, ignore_merges, ignore_merges); } #[new] #[pyo3( signature = (vocab=None, merges=None, **kwargs), text_signature = "(self, vocab=None, merges=None, cache_capacity=None, dropout=None, unk_token=None, continuing_subword_prefix=None, end_of_word_suffix=None, fuse_unk=None, byte_fallback=False, ignore_merges=False)")] fn new( py: Python<'_>, vocab: Option<PyVocab>, merges: Option<PyMerges>, kwargs: Option<&Bound<'_, PyDict>>, ) -> PyResult<(Self, PyModel)> { if (vocab.is_some() && merges.is_none()) || (vocab.is_none() && merges.is_some()) { return Err(exceptions::PyValueError::new_err( "`vocab` and `merges` must be both specified", )); } let mut builder = BPE::builder(); if let (Some(vocab), Some(merges)) = (vocab, merges) { match (vocab, merges) { (PyVocab::Vocab(vocab), PyMerges::Merges(merges)) => { let vocab: AHashMap<_, _> = vocab.into_iter().collect(); builder = builder.vocab_and_merges(vocab, merges); } (PyVocab::Filename(vocab_filename), PyMerges::Filename(merges_filename)) => { deprecation_warning( py, "0.9.0", "BPE.__init__ will not create from files anymore, try `BPE.from_file` instead", )?; builder = builder.files(vocab_filename.to_string(), merges_filename.to_string()); } _ => { return Err(exceptions::PyValueError::new_err( "`vocab` and `merges` must be both be from memory or both filenames", )); } } } PyBPE::with_builder(builder, kwargs) } /// Read a :obj:`vocab.json` and a :obj:`merges.txt` files /// /// This method provides a way to read and 
parse the content of these files, /// returning the relevant data structures. If you want to instantiate some BPE models /// from memory, this method gives you the expected input from the standard files. /// /// Args: /// vocab (:obj:`str`): /// The path to a :obj:`vocab.json` file /// /// merges (:obj:`str`): /// The path to a :obj:`merges.txt` file /// /// Returns: /// A :obj:`Tuple` with the vocab and the merges: /// The vocabulary and merges loaded into memory #[staticmethod] #[pyo3(text_signature = "(vocab, merges)")] fn read_file(vocab: &str, merges: &str) -> PyResult<(HashMap<String, u32>, Merges)> { let (vocab, merges) = BPE::read_file(vocab, merges).map_err(|e| { exceptions::PyException::new_err(format!( "Error while reading vocab & merges files: {e}" )) })?; let vocab = vocab.into_iter().collect(); Ok((vocab, merges)) } /// Instantiate a BPE model from the given files. /// /// This method is roughly equivalent to doing:: /// /// vocab, merges = BPE.read_file(vocab_filename, merges_filename) /// bpe = BPE(vocab, merges) /// /// If you don't need to keep the :obj:`vocab, merges` values lying around, /// this method is more optimized than manually calling /// :meth:`~tokenizers.models.BPE.read_file` to initialize a :class:`~tokenizers.models.BPE` /// /// Args: /// vocab (:obj:`str`): /// The path to a :obj:`vocab.json` file /// /// merges (:obj:`str`): /// The path to a :obj:`merges.txt` file /// /// Returns: /// :class:`~tokenizers.models.BPE`: An instance of BPE loaded from these files #[classmethod] #[pyo3(signature = (vocab, merges, **kwargs))] #[pyo3(text_signature = "(cls, vocab, merges, **kwargs)")] fn from_file( _cls: &Bound<'_, PyType>, py: Python, vocab: &str, merges: &str, kwargs: Option<&Bound<'_, PyDict>>, ) -> PyResult<Py<Self>> { let (vocab, merges) = BPE::read_file(vocab, merges).map_err(|e| { exceptions::PyException::new_err(format!("Error while reading BPE files: {e}")) })?; let vocab = vocab.into_iter().collect(); Py::new( py, PyBPE::new( py, Some(PyVocab::Vocab(vocab)), Some(PyMerges::Merges(merges)), kwargs, )?, ) } /// Clears the internal cache #[pyo3(signature = ())] #[pyo3(text_signature = "(self)")] fn _clear_cache(self_: PyRef<Self>) -> PyResult<()> { let super_ = self_.as_ref(); let mut model = super_.model.write().map_err(|e| { exceptions::PyException::new_err(format!("Error while clearing BPE cache: {e}")) })?; model.clear_cache(); Ok(()) } /// Resize the internal cache #[pyo3(signature = (capacity))] #[pyo3(text_signature = "(self, capacity)")] fn _resize_cache(self_: PyRef<Self>, capacity: usize) -> PyResult<()> { let super_ = self_.as_ref(); let mut model = super_.model.write().map_err(|e| { exceptions::PyException::new_err(format!("Error while resizing BPE cache: {e}")) })?; model.resize_cache(capacity); Ok(()) } } /// An implementation of the WordPiece algorithm /// /// Args: /// vocab (:obj:`Dict[str, int]`, `optional`): /// A dictionary of string keys and their ids :obj:`{"am": 0,...}` /// /// unk_token (:obj:`str`, `optional`): /// The unknown token to be used by the model. /// /// max_input_chars_per_word (:obj:`int`, `optional`): /// The maximum number of characters to allow in a single word.
#[pyclass(extends=PyModel, module = "tokenizers.models", name = "WordPiece")] pub struct PyWordPiece {} impl PyWordPiece { fn with_builder( mut builder: WordPieceBuilder, kwargs: Option<&Bound<'_, PyDict>>, ) -> PyResult<(Self, PyModel)> { if let Some(kwargs) = kwargs { for (key, val) in kwargs { let key: String = key.extract()?; match key.as_ref() { "unk_token" => { builder = builder.unk_token(val.extract()?); } "max_input_chars_per_word" => { builder = builder.max_input_chars_per_word(val.extract()?); } "continuing_subword_prefix" => { builder = builder.continuing_subword_prefix(val.extract()?); } _ => println!("Ignored unknown kwargs option {key}"), } } } match builder.build() { Err(e) => Err(exceptions::PyException::new_err(format!( "Error while initializing WordPiece: {e}" ))), Ok(wordpiece) => Ok((PyWordPiece {}, wordpiece.into())), } } } #[pymethods] impl PyWordPiece { #[getter] fn get_unk_token(self_: PyRef<Self>) -> String { getter!(self_, WordPiece, unk_token.clone()) } #[setter] fn set_unk_token(self_: PyRef<Self>, unk_token: String) { setter!(self_, WordPiece, unk_token, unk_token); } #[getter] fn get_continuing_subword_prefix(self_: PyRef<Self>) -> String { getter!(self_, WordPiece, continuing_subword_prefix.clone()) } #[setter] fn set_continuing_subword_prefix(self_: PyRef<Self>, continuing_subword_prefix: String) { setter!( self_, WordPiece, continuing_subword_prefix, continuing_subword_prefix ); } #[getter] fn get_max_input_chars_per_word(self_: PyRef<Self>) -> usize { getter!(self_, WordPiece, max_input_chars_per_word) } #[setter] fn set_max_input_chars_per_word(self_: PyRef<Self>, max: usize) { setter!(self_, WordPiece, max_input_chars_per_word, max); } #[new] #[pyo3(signature = (vocab=None, **kwargs), text_signature = "(self, vocab, unk_token, max_input_chars_per_word)")] fn new( py: Python<'_>, vocab: Option<PyVocab>, kwargs: Option<&Bound<'_, PyDict>>, ) -> PyResult<(Self, PyModel)> { let mut builder = WordPiece::builder(); if let Some(vocab) = vocab { match vocab { PyVocab::Vocab(vocab) => { let vocab: AHashMap<_, _> = vocab.into_iter().collect(); builder = builder.vocab(vocab); } PyVocab::Filename(vocab_filename) => { deprecation_warning( py, "0.9.0", "WordPiece.__init__ will not create from files anymore, try `WordPiece.from_file` instead", )?; builder = builder.files(vocab_filename.to_string()); } } } PyWordPiece::with_builder(builder, kwargs) } /// Read a :obj:`vocab.txt` file /// /// This method provides a way to read and parse the content of a standard `vocab.txt` /// file as used by the WordPiece Model, returning the relevant data structures. If you /// want to instantiate some WordPiece models from memory, this method gives you the /// expected input from the standard files. 
/// /// Args: /// vocab (:obj:`str`): /// The path to a :obj:`vocab.txt` file /// /// Returns: /// :obj:`Dict[str, int]`: The vocabulary as a :obj:`dict` #[staticmethod] #[pyo3(text_signature = "(vocab)")] fn read_file(vocab: &str) -> PyResult<HashMap<String, u32>> { let vocab = WordPiece::read_file(vocab).map_err(|e| { exceptions::PyException::new_err(format!("Error while reading WordPiece file: {e}")) })?; Ok(vocab.into_iter().collect()) } /// Instantiate a WordPiece model from the given file /// /// This method is roughly equivalent to doing:: /// /// vocab = WordPiece.read_file(vocab_filename) /// wordpiece = WordPiece(vocab) /// /// If you don't need to keep the :obj:`vocab` values lying around, this method is /// more optimized than manually calling :meth:`~tokenizers.models.WordPiece.read_file` to /// initialize a :class:`~tokenizers.models.WordPiece` /// /// Args: /// vocab (:obj:`str`): /// The path to a :obj:`vocab.txt` file /// /// Returns: /// :class:`~tokenizers.models.WordPiece`: An instance of WordPiece loaded from file #[classmethod] #[pyo3(signature = (vocab, **kwargs))] #[pyo3(text_signature = "(vocab, **kwargs)")] fn from_file( _cls: &Bound<'_, PyType>, py: Python, vocab: &str, kwargs: Option<&Bound<'_, PyDict>>, ) -> PyResult<Py<Self>> { let vocab = WordPiece::read_file(vocab).map_err(|e| { exceptions::PyException::new_err(format!("Error while reading WordPiece file: {e}")) })?; let vocab = vocab.into_iter().collect(); Py::new( py, PyWordPiece::new(py, Some(PyVocab::Vocab(vocab)), kwargs)?, ) } } /// An implementation of the WordLevel algorithm /// /// The simplest tokenizer model, based on mapping tokens to their corresponding id. /// /// Args: /// vocab (:obj:`Dict[str, int]`, `optional`): /// A dictionary of string keys and their ids :obj:`{"am": 0,...}` /// /// unk_token (:obj:`str`, `optional`): /// The unknown token to be used by the model. #[pyclass(extends=PyModel, module = "tokenizers.models", name = "WordLevel")] pub struct PyWordLevel {} #[pymethods] impl PyWordLevel { #[getter] fn get_unk_token(self_: PyRef<Self>) -> String { getter!(self_, WordLevel, unk_token.clone()) } #[setter] fn set_unk_token(self_: PyRef<Self>, unk_token: String) { setter!(self_, WordLevel, unk_token, unk_token); } #[new] #[pyo3(signature = (vocab=None, unk_token = None), text_signature = "(self, vocab, unk_token)")] fn new( py: Python<'_>, vocab: Option<PyVocab>, unk_token: Option<String>, ) -> PyResult<(Self, PyModel)> { let mut builder = WordLevel::builder(); if let Some(vocab) = vocab { match vocab { PyVocab::Vocab(vocab) => { let vocab = vocab.into_iter().collect(); builder = builder.vocab(vocab); } PyVocab::Filename(vocab_filename) => { deprecation_warning( py, "0.9.0", "WordLevel.__init__ will not create from files anymore, \ try `WordLevel.from_file` instead", )?; builder = builder.files(vocab_filename.to_string()); } }; } if let Some(unk_token) = unk_token { builder = builder.unk_token(unk_token); } Ok(( PyWordLevel {}, builder .build() .map_err(|e| exceptions::PyException::new_err(e.to_string()))? .into(), )) } /// Read a :obj:`vocab.json` /// /// This method provides a way to read and parse the content of a vocabulary file, /// returning the relevant data structures. If you want to instantiate some WordLevel models /// from memory, this method gives you the expected input from the standard files.
/// /// Args: /// vocab (:obj:`str`): /// The path to a :obj:`vocab.json` file /// /// Returns: /// :obj:`Dict[str, int]`: The vocabulary as a :obj:`dict` #[staticmethod] #[pyo3(text_signature = "(vocab)")] fn read_file(vocab: &str) -> PyResult<HashMap<String, u32>> { let vocab = WordLevel::read_file(vocab).map_err(|e| { exceptions::PyException::new_err(format!("Error while reading WordLevel file: {e}")) })?; let vocab: HashMap<_, _> = vocab.into_iter().collect(); Ok(vocab) } /// Instantiate a WordLevel model from the given file /// /// This method is roughly equivalent to doing:: /// /// vocab = WordLevel.read_file(vocab_filename) /// wordlevel = WordLevel(vocab) /// /// If you don't need to keep the :obj:`vocab` values lying around, this method is /// more optimized than manually calling :meth:`~tokenizers.models.WordLevel.read_file` to /// initialize a :class:`~tokenizers.models.WordLevel` /// /// Args: /// vocab (:obj:`str`): /// The path to a :obj:`vocab.json` file /// /// Returns: /// :class:`~tokenizers.models.WordLevel`: An instance of WordLevel loaded from file #[classmethod] #[pyo3(signature = (vocab, unk_token = None))] #[pyo3(text_signature = "(vocab, unk_token)")] fn from_file( _cls: &Bound<'_, PyType>, py: Python, vocab: &str, unk_token: Option<String>, ) -> PyResult<Py<Self>> { let vocab = WordLevel::read_file(vocab).map_err(|e| { exceptions::PyException::new_err(format!("Error while reading WordLevel file: {e}")) })?; let vocab = vocab.into_iter().collect(); Py::new( py, PyWordLevel::new(py, Some(PyVocab::Vocab(vocab)), unk_token)?, ) } } /// An implementation of the Unigram algorithm /// /// Args: /// vocab (:obj:`List[Tuple[str, float]]`, `optional`): /// A list of vocabulary items and their relative score [("am", -0.2442),...]
#[pyclass(extends=PyModel, module = "tokenizers.models", name = "Unigram")] pub struct PyUnigram {} #[pymethods] impl PyUnigram { #[new] #[pyo3(signature = (vocab=None, unk_id=None, byte_fallback=None), text_signature = "(self, vocab, unk_id, byte_fallback)")] fn new( vocab: Option<Vec<(String, f64)>>, unk_id: Option<usize>, byte_fallback: Option<bool>, ) -> PyResult<(Self, PyModel)> { match (vocab, unk_id, byte_fallback) { (Some(vocab), unk_id, byte_fallback) => { let model = Unigram::from(vocab, unk_id, byte_fallback.unwrap_or(false)).map_err(|e| { exceptions::PyException::new_err(format!( "Error while loading Unigram: {e}" )) })?; Ok((PyUnigram {}, model.into())) } (None, None, _) => Ok((PyUnigram {}, Unigram::default().into())), _ => Err(exceptions::PyValueError::new_err( "`vocab` and `unk_id` must be both specified", )), } } /// Clears the internal cache #[pyo3(signature = ())] #[pyo3(text_signature = "(self)")] fn _clear_cache(self_: PyRef<Self>) -> PyResult<()> { let super_ = self_.as_ref(); let mut model = super_.model.write().map_err(|e| { exceptions::PyException::new_err(format!("Error while clearing Unigram cache: {e}")) })?; model.clear_cache(); Ok(()) } /// Resize the internal cache #[pyo3(signature = (capacity))] #[pyo3(text_signature = "(self, capacity)")] fn _resize_cache(self_: PyRef<Self>, capacity: usize) -> PyResult<()> { let super_ = self_.as_ref(); let mut model = super_.model.write().map_err(|e| { exceptions::PyException::new_err(format!("Error while resizing Unigram cache: {e}")) })?; model.resize_cache(capacity); Ok(()) } } /// Models Module #[pymodule] pub fn models(m: &Bound<'_, PyModule>) -> PyResult<()> { m.add_class::<PyModel>()?; m.add_class::<PyBPE>()?; m.add_class::<PyWordPiece>()?; m.add_class::<PyWordLevel>()?; m.add_class::<PyUnigram>()?; Ok(()) } #[cfg(test)] mod test { use crate::models::PyModel; use pyo3::prelude::*; use tk::models::bpe::BPE; use tk::models::ModelWrapper; #[test] fn get_subtype() { Python::with_gil(|py| { let py_model = PyModel::from(BPE::default()); let py_bpe = py_model.get_as_subtype(py).unwrap(); assert_eq!("BPE", py_bpe.bind(py).get_type().qualname().unwrap()); }) } #[test] fn serialize() { let rs_bpe = BPE::default(); let rs_bpe_ser = serde_json::to_string(&rs_bpe).unwrap(); let rs_wrapper: ModelWrapper = rs_bpe.into(); let rs_wrapper_ser = serde_json::to_string(&rs_wrapper).unwrap(); let py_model = PyModel::from(rs_wrapper); let py_ser = serde_json::to_string(&py_model).unwrap(); assert_eq!(py_ser, rs_bpe_ser); assert_eq!(py_ser, rs_wrapper_ser); let py_model: PyModel = serde_json::from_str(&rs_bpe_ser).unwrap(); match *py_model.model.as_ref().read().unwrap() { ModelWrapper::BPE(_) => (), _ => panic!("Expected BPE model."), }; let py_model: PyModel = serde_json::from_str(&rs_wrapper_ser).unwrap(); match *py_model.model.as_ref().read().unwrap() { ModelWrapper::BPE(_) => (), _ => panic!("Expected BPE model."), }; } }
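From the Python side, the bindings above boil down to calls like the following sketch (the file names are illustrative)::

    from tokenizers.models import BPE

    # Parse standard vocab/merges files into in-memory structures first...
    vocab, merges = BPE.read_file("vocab.json", "merges.txt")
    bpe = BPE(vocab, merges, unk_token="[UNK]")

    # ...or load directly via the classmethod, which skips keeping the
    # intermediate values around (as documented in `from_file` above).
    bpe = BPE.from_file("vocab.json", "merges.txt", unk_token="[UNK]")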
tokenizers/bindings/python/src/models.rs/0
{ "file_path": "tokenizers/bindings/python/src/models.rs", "repo_id": "tokenizers", "token_count": 16127 }
332
from tokenizers import ByteLevelBPETokenizer from ..utils import data_dir, multiprocessing_with_parallelism, roberta_files class TestByteLevelBPE: def test_basic_encode(self, roberta_files): tokenizer = ByteLevelBPETokenizer.from_file(roberta_files["vocab"], roberta_files["merges"]) output = tokenizer.encode("The quick brown fox jumps over the lazy dog") assert output.ids == [133, 2119, 6219, 23602, 13855, 81, 5, 22414, 2335] assert output.tokens == [ "The", "Ġquick", "Ġbrown", "Ġfox", "Ġjumps", "Ġover", "Ġthe", "Ġlazy", "Ġdog", ] assert output.offsets == [ (0, 3), (3, 9), (9, 15), (15, 19), (19, 25), (25, 30), (30, 34), (34, 39), (39, 43), ] def test_add_prefix_space(self, roberta_files): tokenizer = ByteLevelBPETokenizer.from_file( roberta_files["vocab"], roberta_files["merges"], add_prefix_space=True ) output = tokenizer.encode("The quick brown fox jumps over the lazy dog") assert output.ids == [20, 2119, 6219, 23602, 13855, 81, 5, 22414, 2335] assert output.tokens == [ "ĠThe", "Ġquick", "Ġbrown", "Ġfox", "Ġjumps", "Ġover", "Ġthe", "Ġlazy", "Ġdog", ] assert output.offsets == [ (0, 3), (3, 9), (9, 15), (15, 19), (19, 25), (25, 30), (30, 34), (34, 39), (39, 43), ] def test_lowerspace(self, roberta_files): tokenizer = ByteLevelBPETokenizer.from_file( roberta_files["vocab"], roberta_files["merges"], add_prefix_space=True, lowercase=True, ) output = tokenizer.encode("The Quick Brown Fox Jumps Over The Lazy Dog") assert output.ids == [5, 2119, 6219, 23602, 13855, 81, 5, 22414, 2335] assert output.tokens == [ "Ġthe", "Ġquick", "Ġbrown", "Ġfox", "Ġjumps", "Ġover", "Ġthe", "Ġlazy", "Ġdog", ] def test_multiprocessing_with_parallelism(self, roberta_files): tokenizer = ByteLevelBPETokenizer.from_file(roberta_files["vocab"], roberta_files["merges"]) multiprocessing_with_parallelism(tokenizer, False) multiprocessing_with_parallelism(tokenizer, True) def test_train_from_iterator(self): text = ["A first sentence", "Another sentence", "And a last one"] tokenizer = ByteLevelBPETokenizer() tokenizer.train_from_iterator(text, show_progress=False) output = tokenizer.encode("A sentence") assert output.tokens == ["A", "Ġsentence"]
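A small round-trip sketch in the same spirit as the tests above; since ByteLevelBPETokenizer wires up a ByteLevel decoder by default, the "Ġ" space markers are stripped back out on decode::

    from tokenizers import ByteLevelBPETokenizer

    tokenizer = ByteLevelBPETokenizer()
    tokenizer.train_from_iterator(["A first sentence", "Another sentence"], show_progress=False)

    ids = tokenizer.encode("A first sentence").ids
    assert tokenizer.decode(ids) == "A first sentence"  # lossless round trip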
tokenizers/bindings/python/tests/implementations/test_byte_level_bpe.py/0
{ "file_path": "tokenizers/bindings/python/tests/implementations/test_byte_level_bpe.py", "repo_id": "tokenizers", "token_count": 1653 }
333
# Pre-tokenizers <tokenizerslangcontent> <python> ## BertPreTokenizer [[autodoc]] tokenizers.pre_tokenizers.BertPreTokenizer ## ByteLevel [[autodoc]] tokenizers.pre_tokenizers.ByteLevel ## CharDelimiterSplit [[autodoc]] tokenizers.pre_tokenizers.CharDelimiterSplit ## Digits [[autodoc]] tokenizers.pre_tokenizers.Digits ## Metaspace [[autodoc]] tokenizers.pre_tokenizers.Metaspace ## PreTokenizer [[autodoc]] tokenizers.pre_tokenizers.PreTokenizer ## Punctuation [[autodoc]] tokenizers.pre_tokenizers.Punctuation ## Sequence [[autodoc]] tokenizers.pre_tokenizers.Sequence ## Split [[autodoc]] tokenizers.pre_tokenizers.Split ## UnicodeScripts [[autodoc]] tokenizers.pre_tokenizers.UnicodeScripts ## Whitespace [[autodoc]] tokenizers.pre_tokenizers.Whitespace ## WhitespaceSplit [[autodoc]] tokenizers.pre_tokenizers.WhitespaceSplit </python> <rust> The Rust API Reference is available directly on the [Docs.rs](https://docs.rs/tokenizers/latest/tokenizers/) website. </rust> <node> The node API has not been documented yet. </node> </tokenizerslangcontent>
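As a quick orientation for the classes listed above, here is a minimal sketch of what a pre-tokenizer produces (output shown as a comment):

```python
from tokenizers.pre_tokenizers import Whitespace

# Returns the pre-tokens together with their offsets in the original string.
Whitespace().pre_tokenize_str("Hello there!")
# [("Hello", (0, 5)), ("there", (6, 11)), ("!", (11, 12))]
```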
tokenizers/docs/source-doc-builder/api/pre-tokenizers.mdx/0
{ "file_path": "tokenizers/docs/source-doc-builder/api/pre-tokenizers.mdx", "repo_id": "tokenizers", "token_count": 371 }
334
The tokenization pipeline ==================================================================================================== When calling :entity:`Tokenizer.encode` or :entity:`Tokenizer.encode_batch`, the input text(s) go through the following pipeline: - :ref:`normalization` - :ref:`pre-tokenization` - :ref:`model` - :ref:`post-processing` We'll see in detail what happens during each of those steps, as well as what happens when you want to :ref:`decode <decoding>` some token ids, and how the 🤗 Tokenizers library allows you to customize each of those steps to your needs. If you're already familiar with those steps and want to learn by seeing some code, jump to :ref:`our BERT from scratch example <example>`. For the examples that require a :entity:`Tokenizer`, we will use the tokenizer we trained in the :doc:`quicktour`, which you can load with: .. only:: python .. literalinclude:: ../../bindings/python/tests/documentation/test_pipeline.py :language: python :start-after: START reload_tokenizer :end-before: END reload_tokenizer :dedent: 8 .. only:: rust .. literalinclude:: ../../tokenizers/tests/documentation.rs :language: rust :start-after: START pipeline_reload_tokenizer :end-before: END pipeline_reload_tokenizer :dedent: 4 .. only:: node .. literalinclude:: ../../bindings/node/examples/documentation/pipeline.test.ts :language: javascript :start-after: START reload_tokenizer :end-before: END reload_tokenizer :dedent: 8 .. _normalization: Normalization ---------------------------------------------------------------------------------------------------- Normalization is, in a nutshell, a set of operations you apply to a raw string to make it less random or "cleaner". Common operations include stripping whitespace, removing accented characters or lowercasing all text. If you're familiar with `Unicode normalization <https://unicode.org/reports/tr15>`__, it is also a very common normalization operation applied in most tokenizers. Each normalization operation is represented in the 🤗 Tokenizers library by a :entity:`Normalizer`, and you can combine several of those by using a :entity:`normalizers.Sequence`. Here is a normalizer applying NFD Unicode normalization and removing accents as an example: .. only:: python .. literalinclude:: ../../bindings/python/tests/documentation/test_pipeline.py :language: python :start-after: START setup_normalizer :end-before: END setup_normalizer :dedent: 8 .. only:: rust .. literalinclude:: ../../tokenizers/tests/documentation.rs :language: rust :start-after: START pipeline_setup_normalizer :end-before: END pipeline_setup_normalizer :dedent: 4 .. only:: node .. literalinclude:: ../../bindings/node/examples/documentation/pipeline.test.ts :language: javascript :start-after: START setup_normalizer :end-before: END setup_normalizer :dedent: 8 You can manually test that normalizer by applying it to any string: .. only:: python .. literalinclude:: ../../bindings/python/tests/documentation/test_pipeline.py :language: python :start-after: START test_normalizer :end-before: END test_normalizer :dedent: 8 .. only:: rust .. literalinclude:: ../../tokenizers/tests/documentation.rs :language: rust :start-after: START pipeline_test_normalizer :end-before: END pipeline_test_normalizer :dedent: 4 .. only:: node ..
literalinclude:: ../../bindings/node/examples/documentation/pipeline.test.ts :language: javascript :start-after: START test_normalizer :end-before: END test_normalizer :dedent: 8 When building a :entity:`Tokenizer`, you can customize its normalizer by just changing the corresponding attribute: .. only:: python .. literalinclude:: ../../bindings/python/tests/documentation/test_pipeline.py :language: python :start-after: START replace_normalizer :end-before: END replace_normalizer :dedent: 8 .. only:: rust .. literalinclude:: ../../tokenizers/tests/documentation.rs :language: rust :start-after: START pipeline_replace_normalizer :end-before: END pipeline_replace_normalizer :dedent: 4 .. only:: node .. literalinclude:: ../../bindings/node/examples/documentation/pipeline.test.ts :language: javascript :start-after: START replace_normalizer :end-before: END replace_normalizer :dedent: 8 Of course, if you change the way a tokenizer applies normalization, you should probably retrain it from scratch afterward. .. _pre-tokenization: Pre-Tokenization ---------------------------------------------------------------------------------------------------- Pre-tokenization is the act of splitting a text into smaller objects that give an upper bound to what your tokens will be at the end of training. A good way to think of this is that the pre-tokenizer will split your text into "words", and then your final tokens will be parts of those words. An easy way to pre-tokenize inputs is to split on spaces and punctuation, which is done by the :entity:`pre_tokenizers.Whitespace` pre-tokenizer: .. only:: python .. literalinclude:: ../../bindings/python/tests/documentation/test_pipeline.py :language: python :start-after: START setup_pre_tokenizer :end-before: END setup_pre_tokenizer :dedent: 8 .. only:: rust .. literalinclude:: ../../tokenizers/tests/documentation.rs :language: rust :start-after: START pipeline_setup_pre_tokenizer :end-before: END pipeline_setup_pre_tokenizer :dedent: 4 .. only:: node .. literalinclude:: ../../bindings/node/examples/documentation/pipeline.test.ts :language: javascript :start-after: START setup_pre_tokenizer :end-before: END setup_pre_tokenizer :dedent: 8 The output is a list of tuples, with each tuple containing one word and its span in the original sentence (which is used to determine the final :obj:`offsets` of our :entity:`Encoding`). Note that splitting on punctuation will split contractions like :obj:`"I'm"` in this example. You can combine any number of :entity:`PreTokenizer` instances together. For instance, here is a pre-tokenizer that will split on spaces, punctuation, and digits, separating numbers into their individual digits: .. only:: python .. literalinclude:: ../../bindings/python/tests/documentation/test_pipeline.py :language: python :start-after: START combine_pre_tokenizer :end-before: END combine_pre_tokenizer :dedent: 8 .. only:: rust .. literalinclude:: ../../tokenizers/tests/documentation.rs :language: rust :start-after: START pipeline_combine_pre_tokenizer :end-before: END pipeline_combine_pre_tokenizer :dedent: 4 .. only:: node .. literalinclude:: ../../bindings/node/examples/documentation/pipeline.test.ts :language: javascript :start-after: START combine_pre_tokenizer :end-before: END combine_pre_tokenizer :dedent: 8 As we saw in the :doc:`quicktour`, you can customize the pre-tokenizer of a :entity:`Tokenizer` by just changing the corresponding attribute: .. only:: python ..
literalinclude:: ../../bindings/python/tests/documentation/test_pipeline.py :language: python :start-after: START replace_pre_tokenizer :end-before: END replace_pre_tokenizer :dedent: 8 .. only:: rust .. literalinclude:: ../../tokenizers/tests/documentation.rs :language: rust :start-after: START pipeline_replace_pre_tokenizer :end-before: END pipeline_replace_pre_tokenizer :dedent: 4 .. only:: node .. literalinclude:: ../../bindings/node/examples/documentation/pipeline.test.ts :language: javascript :start-after: START replace_pre_tokenizer :end-before: END replace_pre_tokenizer :dedent: 8 Of course, if you change the pre-tokenizer, you should probably retrain your tokenizer from scratch afterward. .. _model: The Model ---------------------------------------------------------------------------------------------------- Once the input texts are normalized and pre-tokenized, the :entity:`Tokenizer` applies the model on the pre-tokens. This is the part of the pipeline that needs training on your corpus (or that has been trained if you are using a pretrained tokenizer). The role of the model is to split your "words" into tokens, using the rules it has learned. It's also responsible for mapping those tokens to their corresponding IDs in the vocabulary of the model. This model is passed along when initializing the :entity:`Tokenizer` so you already know how to customize this part. Currently, the 🤗 Tokenizers library supports: - :entity:`models.BPE` - :entity:`models.Unigram` - :entity:`models.WordLevel` - :entity:`models.WordPiece` For more details about each model and its behavior, you can check `here <components#models>`__. .. _post-processing: Post-Processing ---------------------------------------------------------------------------------------------------- Post-processing is the last step of the tokenization pipeline; it performs any additional transformations on the :entity:`Encoding` before it's returned, like adding potential special tokens. As we saw in the quick tour, we can customize the post-processor of a :entity:`Tokenizer` by setting the corresponding attribute. For instance, here is how we can post-process to make the inputs suitable for the BERT model: .. only:: python .. literalinclude:: ../../bindings/python/tests/documentation/test_pipeline.py :language: python :start-after: START setup_processor :end-before: END setup_processor :dedent: 8 .. only:: rust .. literalinclude:: ../../tokenizers/tests/documentation.rs :language: rust :start-after: START pipeline_setup_processor :end-before: END pipeline_setup_processor :dedent: 4 .. only:: node .. literalinclude:: ../../bindings/node/examples/documentation/pipeline.test.ts :language: javascript :start-after: START setup_processor :end-before: END setup_processor :dedent: 8 Note that, unlike the pre-tokenizer or the normalizer, you don't need to retrain a tokenizer after changing its post-processor. .. _example: All together: a BERT tokenizer from scratch ---------------------------------------------------------------------------------------------------- Let's put all those pieces together to build a BERT tokenizer. First, BERT relies on WordPiece, so we instantiate a new :entity:`Tokenizer` with this model: .. only:: python .. literalinclude:: ../../bindings/python/tests/documentation/test_pipeline.py :language: python :start-after: START bert_setup_tokenizer :end-before: END bert_setup_tokenizer :dedent: 8 .. only:: rust ..
literalinclude:: ../../tokenizers/tests/documentation.rs :language: rust :start-after: START bert_setup_tokenizer :end-before: END bert_setup_tokenizer :dedent: 4 .. only:: node .. literalinclude:: ../../bindings/node/examples/documentation/pipeline.test.ts :language: javascript :start-after: START bert_setup_tokenizer :end-before: END bert_setup_tokenizer :dedent: 8 Next, we know that BERT preprocesses texts by removing accents and lowercasing, so we also use a Unicode normalizer: .. only:: python .. literalinclude:: ../../bindings/python/tests/documentation/test_pipeline.py :language: python :start-after: START bert_setup_normalizer :end-before: END bert_setup_normalizer :dedent: 8 .. only:: rust .. literalinclude:: ../../tokenizers/tests/documentation.rs :language: rust :start-after: START bert_setup_normalizer :end-before: END bert_setup_normalizer :dedent: 4 .. only:: node .. literalinclude:: ../../bindings/node/examples/documentation/pipeline.test.ts :language: javascript :start-after: START bert_setup_normalizer :end-before: END bert_setup_normalizer :dedent: 8 The pre-tokenizer is just splitting on whitespace and punctuation: .. only:: python .. literalinclude:: ../../bindings/python/tests/documentation/test_pipeline.py :language: python :start-after: START bert_setup_pre_tokenizer :end-before: END bert_setup_pre_tokenizer :dedent: 8 .. only:: rust .. literalinclude:: ../../tokenizers/tests/documentation.rs :language: rust :start-after: START bert_setup_pre_tokenizer :end-before: END bert_setup_pre_tokenizer :dedent: 4 .. only:: node .. literalinclude:: ../../bindings/node/examples/documentation/pipeline.test.ts :language: javascript :start-after: START bert_setup_pre_tokenizer :end-before: END bert_setup_pre_tokenizer :dedent: 8 And the post-processing uses the template we saw in the previous section: .. only:: python .. literalinclude:: ../../bindings/python/tests/documentation/test_pipeline.py :language: python :start-after: START bert_setup_processor :end-before: END bert_setup_processor :dedent: 8 .. only:: rust .. literalinclude:: ../../tokenizers/tests/documentation.rs :language: rust :start-after: START bert_setup_processor :end-before: END bert_setup_processor :dedent: 4 .. only:: node .. literalinclude:: ../../bindings/node/examples/documentation/pipeline.test.ts :language: javascript :start-after: START bert_setup_processor :end-before: END bert_setup_processor :dedent: 8 We can now use this tokenizer and train it on wikitext as in the :doc:`quicktour`: .. only:: python .. literalinclude:: ../../bindings/python/tests/documentation/test_pipeline.py :language: python :start-after: START bert_train_tokenizer :end-before: END bert_train_tokenizer :dedent: 8 .. only:: rust .. literalinclude:: ../../tokenizers/tests/documentation.rs :language: rust :start-after: START bert_train_tokenizer :end-before: END bert_train_tokenizer :dedent: 4 .. only:: node .. literalinclude:: ../../bindings/node/examples/documentation/pipeline.test.ts :language: javascript :start-after: START bert_train_tokenizer :end-before: END bert_train_tokenizer :dedent: 8 .. _decoding: Decoding ---------------------------------------------------------------------------------------------------- .. entities:: python bert_tokenizer :obj:`bert_tokenizer` .. entities:: rust bert_tokenizer :obj:`bert_tokenizer` .. entities:: node bert_tokenizer :obj:`bertTokenizer` On top of encoding the input texts, a :entity:`Tokenizer` also has an API for decoding, that is, converting IDs generated by your model back to text.
This is done by the methods :entity:`Tokenizer.decode` (for one predicted text) and :entity:`Tokenizer.decode_batch` (for a batch of predictions). The `decoder` will first convert the IDs back to tokens (using the tokenizer's vocabulary) and remove all special tokens, then join those tokens with spaces: .. only:: python .. literalinclude:: ../../bindings/python/tests/documentation/test_pipeline.py :language: python :start-after: START test_decoding :end-before: END test_decoding :dedent: 8 .. only:: rust .. literalinclude:: ../../tokenizers/tests/documentation.rs :language: rust :start-after: START pipeline_test_decoding :end-before: END pipeline_test_decoding :dedent: 4 .. only:: node .. literalinclude:: ../../bindings/node/examples/documentation/pipeline.test.ts :language: javascript :start-after: START test_decoding :end-before: END test_decoding :dedent: 8 If you used a model that added special characters to represent subtokens of a given "word" (like the :obj:`"##"` in WordPiece), you will need to customize the `decoder` to treat them properly. If we take our previous :entity:`bert_tokenizer`, for instance, the default decoding will give: .. only:: python .. literalinclude:: ../../bindings/python/tests/documentation/test_pipeline.py :language: python :start-after: START bert_test_decoding :end-before: END bert_test_decoding :dedent: 8 .. only:: rust .. literalinclude:: ../../tokenizers/tests/documentation.rs :language: rust :start-after: START bert_test_decoding :end-before: END bert_test_decoding :dedent: 4 .. only:: node .. literalinclude:: ../../bindings/node/examples/documentation/pipeline.test.ts :language: javascript :start-after: START bert_test_decoding :end-before: END bert_test_decoding :dedent: 8 But by changing it to a proper decoder, we get: .. only:: python .. literalinclude:: ../../bindings/python/tests/documentation/test_pipeline.py :language: python :start-after: START bert_proper_decoding :end-before: END bert_proper_decoding :dedent: 8 .. only:: rust .. literalinclude:: ../../tokenizers/tests/documentation.rs :language: rust :start-after: START bert_proper_decoding :end-before: END bert_proper_decoding :dedent: 4 .. only:: node .. literalinclude:: ../../bindings/node/examples/documentation/pipeline.test.ts :language: javascript :start-after: START bert_proper_decoding :end-before: END bert_proper_decoding :dedent: 8
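For reference, the decoder swap described above boils down to a sketch like this (the file name is illustrative, and the printed strings depend on the trained vocabulary):

.. code-block:: python

    from tokenizers import Tokenizer, decoders

    bert_tokenizer = Tokenizer.from_file("bert-wiki.json")
    output = bert_tokenizer.encode("Welcome to the 🤗 Tokenizers library.")

    print(bert_tokenizer.decode(output.ids))
    # default decoding, roughly: "welcome to the tok ##eni ##zer ##s library ."

    bert_tokenizer.decoder = decoders.WordPiece()
    print(bert_tokenizer.decode(output.ids))
    # "welcome to the tokenizers library."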
tokenizers/docs/source/pipeline.rst/0
{ "file_path": "tokenizers/docs/source/pipeline.rst", "repo_id": "tokenizers", "token_count": 6322 }
335
use tokenizers::models::wordpiece::WordPiece; use tokenizers::{AddedToken, Tokenizer}; fn main() { let start = std::time::Instant::now(); let mut tokenizer = Tokenizer::new(WordPiece::default()); // Mix special and not special // You can make sure ids are in order, and special status is correct. let tokens: Vec<_> = (0..120_000) .map(|i| AddedToken::from(format!("[SPECIAL_{i}]"), i % 2 == 0)) .collect(); tokenizer.add_tokens(&tokens); tokenizer.save("_tok.json", true).unwrap(); println!("Save took {:?}", start.elapsed()); let start = std::time::Instant::now(); let _tok = Tokenizer::from_file("_tok.json").unwrap(); println!("Took {:?}", start.elapsed()); std::fs::remove_file("_tok.json").unwrap(); }
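The same save/load round trip can be exercised from the Python bindings with a sketch along these lines (token count reduced for brevity, and the special/non-special tokens added in two batches since the Python API exposes them as separate calls)::

    from tokenizers import Tokenizer
    from tokenizers.models import WordPiece

    tok = Tokenizer(WordPiece())
    tok.add_special_tokens([f"[SPECIAL_{i}]" for i in range(0, 1_000, 2)])
    tok.add_tokens([f"[SPECIAL_{i}]" for i in range(1, 1_000, 2)])

    tok.save("_tok.json")
    reloaded = Tokenizer.from_file("_tok.json")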
tokenizers/tokenizers/examples/serialization.rs/0
{ "file_path": "tokenizers/tokenizers/examples/serialization.rs", "repo_id": "tokenizers", "token_count": 299 }
336
{ "name": "create-wasm-app", "version": "0.1.0", "lockfileVersion": 2, "requires": true, "packages": { "": { "name": "create-wasm-app", "version": "0.1.0", "license": "(MIT OR Apache-2.0)", "dependencies": { "unstable_wasm": "file:../pkg" }, "bin": { "create-wasm-app": ".bin/create-wasm-app.js" }, "devDependencies": { "copy-webpack-plugin": "^11.0.0", "webpack": "^5.75.0", "webpack-cli": "^5.0.1", "webpack-dev-server": "^5.2.1" } }, "../pkg": { "name": "unstable_wasm", "version": "0.0.1" }, "node_modules/@discoveryjs/json-ext": { "version": "0.5.7", "resolved": "https://registry.npmjs.org/@discoveryjs/json-ext/-/json-ext-0.5.7.tgz", "integrity": "sha512-dBVuXR082gk3jsFp7Rd/JI4kytwGHecnCoTtXFb7DB6CNHp4rg5k1bhg0nWdLGLnOV71lmDzGQaLMy8iPLY0pw==", "dev": true, "engines": { "node": ">=10.0.0" } }, "node_modules/@jridgewell/gen-mapping": { "version": "0.3.5", "resolved": "https://registry.npmjs.org/@jridgewell/gen-mapping/-/gen-mapping-0.3.5.tgz", "integrity": "sha512-IzL8ZoEDIBRWEzlCcRhOaCupYyN5gdIK+Q6fbFdPDg6HqX6jpkItn7DFIpW9LQzXG6Df9sA7+OKnq0qlz/GaQg==", "dev": true, "dependencies": { "@jridgewell/set-array": "^1.2.1", "@jridgewell/sourcemap-codec": "^1.4.10", "@jridgewell/trace-mapping": "^0.3.24" }, "engines": { "node": ">=6.0.0" } }, "node_modules/@jridgewell/resolve-uri": { "version": "3.1.2", "resolved": "https://registry.npmjs.org/@jridgewell/resolve-uri/-/resolve-uri-3.1.2.tgz", "integrity": "sha512-bRISgCIjP20/tbWSPWMEi54QVPRZExkuD9lJL+UIxUKtwVJA8wW1Trb1jMs1RFXo1CBTNZ/5hpC9QvmKWdopKw==", "dev": true, "engines": { "node": ">=6.0.0" } }, "node_modules/@jridgewell/set-array": { "version": "1.2.1", "resolved": "https://registry.npmjs.org/@jridgewell/set-array/-/set-array-1.2.1.tgz", "integrity": "sha512-R8gLRTZeyp03ymzP/6Lil/28tGeGEzhx1q2k703KGWRAI1VdvPIXdG70VJc2pAMw3NA6JKL5hhFu1sJX0Mnn/A==", "dev": true, "engines": { "node": ">=6.0.0" } }, "node_modules/@jridgewell/source-map": { "version": "0.3.6", "resolved": "https://registry.npmjs.org/@jridgewell/source-map/-/source-map-0.3.6.tgz", "integrity": "sha512-1ZJTZebgqllO79ue2bm3rIGud/bOe0pP5BjSRCRxxYkEZS8STV7zN84UBbiYu7jy+eCKSnVIUgoWWE/tt+shMQ==", "dev": true, "dependencies": { "@jridgewell/gen-mapping": "^0.3.5", "@jridgewell/trace-mapping": "^0.3.25" } }, "node_modules/@jridgewell/sourcemap-codec": { "version": "1.5.0", "resolved": "https://registry.npmjs.org/@jridgewell/sourcemap-codec/-/sourcemap-codec-1.5.0.tgz", "integrity": "sha512-gv3ZRaISU3fjPAgNsriBRqGWQL6quFx04YMPW/zD8XMLsU32mhCCbfbO6KZFLjvYpCZ8zyDEgqsgf+PwPaM7GQ==", "dev": true }, "node_modules/@jridgewell/trace-mapping": { "version": "0.3.25", "resolved": "https://registry.npmjs.org/@jridgewell/trace-mapping/-/trace-mapping-0.3.25.tgz", "integrity": "sha512-vNk6aEwybGtawWmy/PzwnGDOjCkLWSD2wqvjGGAgOAwCGWySYXfYoxt00IJkTF+8Lb57DwOb3Aa0o9CApepiYQ==", "dev": true, "dependencies": { "@jridgewell/resolve-uri": "^3.1.0", "@jridgewell/sourcemap-codec": "^1.4.14" } }, "node_modules/@jsonjoy.com/base64": { "version": "1.1.2", "resolved": "https://registry.npmjs.org/@jsonjoy.com/base64/-/base64-1.1.2.tgz", "integrity": "sha512-q6XAnWQDIMA3+FTiOYajoYqySkO+JSat0ytXGSuRdq9uXE7o92gzuQwQM14xaCRlBLGq3v5miDGC4vkVTn54xA==", "dev": true, "license": "Apache-2.0", "engines": { "node": ">=10.0" }, "funding": { "type": "github", "url": "https://github.com/sponsors/streamich" }, "peerDependencies": { "tslib": "2" } }, "node_modules/@jsonjoy.com/json-pack": { "version": "1.2.0", "resolved": "https://registry.npmjs.org/@jsonjoy.com/json-pack/-/json-pack-1.2.0.tgz", "integrity": 
"sha512-io1zEbbYcElht3tdlqEOFxZ0dMTYrHz9iMf0gqn1pPjZFTCgM5R4R5IMA20Chb2UPYYsxjzs8CgZ7Nb5n2K2rA==", "dev": true, "license": "Apache-2.0", "dependencies": { "@jsonjoy.com/base64": "^1.1.1", "@jsonjoy.com/util": "^1.1.2", "hyperdyperid": "^1.2.0", "thingies": "^1.20.0" }, "engines": { "node": ">=10.0" }, "funding": { "type": "github", "url": "https://github.com/sponsors/streamich" }, "peerDependencies": { "tslib": "2" } }, "node_modules/@jsonjoy.com/util": { "version": "1.6.0", "resolved": "https://registry.npmjs.org/@jsonjoy.com/util/-/util-1.6.0.tgz", "integrity": "sha512-sw/RMbehRhN68WRtcKCpQOPfnH6lLP4GJfqzi3iYej8tnzpZUDr6UkZYJjcjjC0FWEJOJbyM3PTIwxucUmDG2A==", "dev": true, "license": "Apache-2.0", "engines": { "node": ">=10.0" }, "funding": { "type": "github", "url": "https://github.com/sponsors/streamich" }, "peerDependencies": { "tslib": "2" } }, "node_modules/@leichtgewicht/ip-codec": { "version": "2.0.5", "resolved": "https://registry.npmjs.org/@leichtgewicht/ip-codec/-/ip-codec-2.0.5.tgz", "integrity": "sha512-Vo+PSpZG2/fmgmiNzYK9qWRh8h/CHrwD0mo1h1DzL4yzHNSfWYujGTYsWGreD000gcgmZ7K4Ys6Tx9TxtsKdDw==", "dev": true, "license": "MIT" }, "node_modules/@nodelib/fs.scandir": { "version": "2.1.5", "resolved": "https://registry.npmjs.org/@nodelib/fs.scandir/-/fs.scandir-2.1.5.tgz", "integrity": "sha512-vq24Bq3ym5HEQm2NKCr3yXDwjc7vTsEThRDnkp2DK9p1uqLR+DHurm/NOTo0KG7HYHU7eppKZj3MyqYuMBf62g==", "dev": true, "dependencies": { "@nodelib/fs.stat": "2.0.5", "run-parallel": "^1.1.9" }, "engines": { "node": ">= 8" } }, "node_modules/@nodelib/fs.stat": { "version": "2.0.5", "resolved": "https://registry.npmjs.org/@nodelib/fs.stat/-/fs.stat-2.0.5.tgz", "integrity": "sha512-RkhPPp2zrqDAQA/2jNhnztcPAlv64XdhIp7a7454A5ovI7Bukxgt7MX7udwAu3zg1DcpPU0rz3VV1SeaqvY4+A==", "dev": true, "engines": { "node": ">= 8" } }, "node_modules/@nodelib/fs.walk": { "version": "1.2.8", "resolved": "https://registry.npmjs.org/@nodelib/fs.walk/-/fs.walk-1.2.8.tgz", "integrity": "sha512-oGB+UxlgWcgQkgwo8GcEGwemoTFt3FIO9ababBmaGwXIoBKZ+GTy0pP185beGg7Llih/NSHSV2XAs1lnznocSg==", "dev": true, "dependencies": { "@nodelib/fs.scandir": "2.1.5", "fastq": "^1.6.0" }, "engines": { "node": ">= 8" } }, "node_modules/@types/body-parser": { "version": "1.19.2", "resolved": "https://registry.npmjs.org/@types/body-parser/-/body-parser-1.19.2.tgz", "integrity": "sha512-ALYone6pm6QmwZoAgeyNksccT9Q4AWZQ6PvfwR37GT6r6FWUPguq6sUmNGSMV2Wr761oQoBxwGGa6DR5o1DC9g==", "dev": true, "dependencies": { "@types/connect": "*", "@types/node": "*" } }, "node_modules/@types/bonjour": { "version": "3.5.13", "resolved": "https://registry.npmjs.org/@types/bonjour/-/bonjour-3.5.13.tgz", "integrity": "sha512-z9fJ5Im06zvUL548KvYNecEVlA7cVDkGUi6kZusb04mpyEFKCIZJvloCcmpmLaIahDpOQGHaHmG6imtPMmPXGQ==", "dev": true, "license": "MIT", "dependencies": { "@types/node": "*" } }, "node_modules/@types/connect": { "version": "3.4.35", "resolved": "https://registry.npmjs.org/@types/connect/-/connect-3.4.35.tgz", "integrity": "sha512-cdeYyv4KWoEgpBISTxWvqYsVy444DOqehiF3fM3ne10AmJ62RSyNkUnxMJXHQWRQQX2eR94m5y1IZyDwBjV9FQ==", "dev": true, "dependencies": { "@types/node": "*" } }, "node_modules/@types/connect-history-api-fallback": { "version": "1.5.4", "resolved": "https://registry.npmjs.org/@types/connect-history-api-fallback/-/connect-history-api-fallback-1.5.4.tgz", "integrity": "sha512-n6Cr2xS1h4uAulPRdlw6Jl6s1oG8KrVilPN2yUITEs+K48EzMJJ3W1xy8K5eWuFvjp3R74AOIGSmp2UfBJ8HFw==", "dev": true, "license": "MIT", "dependencies": { "@types/express-serve-static-core": "*", "@types/node": "*" } }, 
"node_modules/@types/estree": { "version": "1.0.6", "resolved": "https://registry.npmjs.org/@types/estree/-/estree-1.0.6.tgz", "integrity": "sha512-AYnb1nQyY49te+VRAVgmzfcgjYS91mY5P0TKUDCLEM+gNnA+3T6rWITXRLYCpahpqSQbN5cE+gHpnPyXjHWxcw==", "dev": true }, "node_modules/@types/express": { "version": "4.17.22", "resolved": "https://registry.npmjs.org/@types/express/-/express-4.17.22.tgz", "integrity": "sha512-eZUmSnhRX9YRSkplpz0N+k6NljUUn5l3EWZIKZvYzhvMphEuNiyyy1viH/ejgt66JWgALwC/gtSUAeQKtSwW/w==", "dev": true, "license": "MIT", "dependencies": { "@types/body-parser": "*", "@types/express-serve-static-core": "^4.17.33", "@types/qs": "*", "@types/serve-static": "*" } }, "node_modules/@types/express-serve-static-core": { "version": "4.19.6", "resolved": "https://registry.npmjs.org/@types/express-serve-static-core/-/express-serve-static-core-4.19.6.tgz", "integrity": "sha512-N4LZ2xG7DatVqhCZzOGb1Yi5lMbXSZcmdLDe9EzSndPV2HpWYWzRbaerl2n27irrm94EPpprqa8KpskPT085+A==", "dev": true, "license": "MIT", "dependencies": { "@types/node": "*", "@types/qs": "*", "@types/range-parser": "*", "@types/send": "*" } }, "node_modules/@types/http-errors": { "version": "2.0.4", "resolved": "https://registry.npmjs.org/@types/http-errors/-/http-errors-2.0.4.tgz", "integrity": "sha512-D0CFMMtydbJAegzOyHjtiKPLlvnm3iTZyZRSZoLq2mRhDdmLfIWOCYPfQJ4cu2erKghU++QvjcUjp/5h7hESpA==", "dev": true, "license": "MIT" }, "node_modules/@types/http-proxy": { "version": "1.17.9", "resolved": "https://registry.npmjs.org/@types/http-proxy/-/http-proxy-1.17.9.tgz", "integrity": "sha512-QsbSjA/fSk7xB+UXlCT3wHBy5ai9wOcNDWwZAtud+jXhwOM3l+EYZh8Lng4+/6n8uar0J7xILzqftJdJ/Wdfkw==", "dev": true, "dependencies": { "@types/node": "*" } }, "node_modules/@types/json-schema": { "version": "7.0.11", "resolved": "https://registry.npmjs.org/@types/json-schema/-/json-schema-7.0.11.tgz", "integrity": "sha512-wOuvG1SN4Us4rez+tylwwwCV1psiNVOkJeM3AUWUNWg/jDQY2+HE/444y5gc+jBmRqASOm2Oeh5c1axHobwRKQ==", "dev": true }, "node_modules/@types/mime": { "version": "1.3.5", "resolved": "https://registry.npmjs.org/@types/mime/-/mime-1.3.5.tgz", "integrity": "sha512-/pyBZWSLD2n0dcHE3hq8s8ZvcETHtEuF+3E7XVt0Ig2nvsVQXdghHVcEkIWjy9A0wKfTn97a/PSDYohKIlnP/w==", "dev": true, "license": "MIT" }, "node_modules/@types/node": { "version": "18.7.13", "resolved": "https://registry.npmjs.org/@types/node/-/node-18.7.13.tgz", "integrity": "sha512-46yIhxSe5xEaJZXWdIBP7GU4HDTG8/eo0qd9atdiL+lFpA03y8KS+lkTN834TWJj5767GbWv4n/P6efyTFt1Dw==", "dev": true }, "node_modules/@types/node-forge": { "version": "1.3.11", "resolved": "https://registry.npmjs.org/@types/node-forge/-/node-forge-1.3.11.tgz", "integrity": "sha512-FQx220y22OKNTqaByeBGqHWYz4cl94tpcxeFdvBo3wjG6XPBuZ0BNgNZRV5J5TFmmcsJ4IzsLkmGRiQbnYsBEQ==", "dev": true, "license": "MIT", "dependencies": { "@types/node": "*" } }, "node_modules/@types/qs": { "version": "6.9.7", "resolved": "https://registry.npmjs.org/@types/qs/-/qs-6.9.7.tgz", "integrity": "sha512-FGa1F62FT09qcrueBA6qYTrJPVDzah9a+493+o2PCXsesWHIn27G98TsSMs3WPNbZIEj4+VJf6saSFpvD+3Zsw==", "dev": true }, "node_modules/@types/range-parser": { "version": "1.2.4", "resolved": "https://registry.npmjs.org/@types/range-parser/-/range-parser-1.2.4.tgz", "integrity": "sha512-EEhsLsD6UsDM1yFhAvy0Cjr6VwmpMWqFBCb9w07wVugF7w9nfajxLuVmngTIpgS6svCnm6Vaw+MZhoDCKnOfsw==", "dev": true }, "node_modules/@types/retry": { "version": "0.12.2", "resolved": "https://registry.npmjs.org/@types/retry/-/retry-0.12.2.tgz", "integrity": 
"sha512-XISRgDJ2Tc5q4TRqvgJtzsRkFYNJzZrhTdtMoGVBttwzzQJkPnS3WWTFc7kuDRoPtPakl+T+OfdEUjYJj7Jbow==", "dev": true, "license": "MIT" }, "node_modules/@types/send": { "version": "0.17.4", "resolved": "https://registry.npmjs.org/@types/send/-/send-0.17.4.tgz", "integrity": "sha512-x2EM6TJOybec7c52BX0ZspPodMsQUd5L6PRwOunVyVUhXiBSKf3AezDL8Dgvgt5o0UfKNfuA0eMLr2wLT4AiBA==", "dev": true, "license": "MIT", "dependencies": { "@types/mime": "^1", "@types/node": "*" } }, "node_modules/@types/serve-index": { "version": "1.9.4", "resolved": "https://registry.npmjs.org/@types/serve-index/-/serve-index-1.9.4.tgz", "integrity": "sha512-qLpGZ/c2fhSs5gnYsQxtDEq3Oy8SXPClIXkW5ghvAvsNuVSA8k+gCONcUCS/UjLEYvYps+e8uBtfgXgvhwfNug==", "dev": true, "license": "MIT", "dependencies": { "@types/express": "*" } }, "node_modules/@types/serve-static": { "version": "1.15.7", "resolved": "https://registry.npmjs.org/@types/serve-static/-/serve-static-1.15.7.tgz", "integrity": "sha512-W8Ym+h8nhuRwaKPaDw34QUkwsGi6Rc4yYqvKFo5rm2FUEhCFbzVWrxXUxuKK8TASjWsysJY0nsmNCGhCOIsrOw==", "dev": true, "license": "MIT", "dependencies": { "@types/http-errors": "*", "@types/node": "*", "@types/send": "*" } }, "node_modules/@types/sockjs": { "version": "0.3.36", "resolved": "https://registry.npmjs.org/@types/sockjs/-/sockjs-0.3.36.tgz", "integrity": "sha512-MK9V6NzAS1+Ud7JV9lJLFqW85VbC9dq3LmwZCuBe4wBDgKC0Kj/jd8Xl+nSviU+Qc3+m7umHHyHg//2KSa0a0Q==", "dev": true, "license": "MIT", "dependencies": { "@types/node": "*" } }, "node_modules/@types/ws": { "version": "8.18.1", "resolved": "https://registry.npmjs.org/@types/ws/-/ws-8.18.1.tgz", "integrity": "sha512-ThVF6DCVhA8kUGy+aazFQ4kXQ7E1Ty7A3ypFOe0IcJV8O/M511G99AW24irKrW56Wt44yG9+ij8FaqoBGkuBXg==", "dev": true, "license": "MIT", "dependencies": { "@types/node": "*" } }, "node_modules/@webassemblyjs/ast": { "version": "1.12.1", "resolved": "https://registry.npmjs.org/@webassemblyjs/ast/-/ast-1.12.1.tgz", "integrity": "sha512-EKfMUOPRRUTy5UII4qJDGPpqfwjOmZ5jeGFwid9mnoqIFK+e0vqoi1qH56JpmZSzEL53jKnNzScdmftJyG5xWg==", "dev": true, "dependencies": { "@webassemblyjs/helper-numbers": "1.11.6", "@webassemblyjs/helper-wasm-bytecode": "1.11.6" } }, "node_modules/@webassemblyjs/floating-point-hex-parser": { "version": "1.11.6", "resolved": "https://registry.npmjs.org/@webassemblyjs/floating-point-hex-parser/-/floating-point-hex-parser-1.11.6.tgz", "integrity": "sha512-ejAj9hfRJ2XMsNHk/v6Fu2dGS+i4UaXBXGemOfQ/JfQ6mdQg/WXtwleQRLLS4OvfDhv8rYnVwH27YJLMyYsxhw==", "dev": true }, "node_modules/@webassemblyjs/helper-api-error": { "version": "1.11.6", "resolved": "https://registry.npmjs.org/@webassemblyjs/helper-api-error/-/helper-api-error-1.11.6.tgz", "integrity": "sha512-o0YkoP4pVu4rN8aTJgAyj9hC2Sv5UlkzCHhxqWj8butaLvnpdc2jOwh4ewE6CX0txSfLn/UYaV/pheS2Txg//Q==", "dev": true }, "node_modules/@webassemblyjs/helper-buffer": { "version": "1.12.1", "resolved": "https://registry.npmjs.org/@webassemblyjs/helper-buffer/-/helper-buffer-1.12.1.tgz", "integrity": "sha512-nzJwQw99DNDKr9BVCOZcLuJJUlqkJh+kVzVl6Fmq/tI5ZtEyWT1KZMyOXltXLZJmDtvLCDgwsyrkohEtopTXCw==", "dev": true }, "node_modules/@webassemblyjs/helper-numbers": { "version": "1.11.6", "resolved": "https://registry.npmjs.org/@webassemblyjs/helper-numbers/-/helper-numbers-1.11.6.tgz", "integrity": "sha512-vUIhZ8LZoIWHBohiEObxVm6hwP034jwmc9kuq5GdHZH0wiLVLIPcMCdpJzG4C11cHoQ25TFIQj9kaVADVX7N3g==", "dev": true, "dependencies": { "@webassemblyjs/floating-point-hex-parser": "1.11.6", "@webassemblyjs/helper-api-error": "1.11.6", "@xtuc/long": "4.2.2" } }, 
"node_modules/@webassemblyjs/helper-wasm-bytecode": { "version": "1.11.6", "resolved": "https://registry.npmjs.org/@webassemblyjs/helper-wasm-bytecode/-/helper-wasm-bytecode-1.11.6.tgz", "integrity": "sha512-sFFHKwcmBprO9e7Icf0+gddyWYDViL8bpPjJJl0WHxCdETktXdmtWLGVzoHbqUcY4Be1LkNfwTmXOJUFZYSJdA==", "dev": true }, "node_modules/@webassemblyjs/helper-wasm-section": { "version": "1.12.1", "resolved": "https://registry.npmjs.org/@webassemblyjs/helper-wasm-section/-/helper-wasm-section-1.12.1.tgz", "integrity": "sha512-Jif4vfB6FJlUlSbgEMHUyk1j234GTNG9dBJ4XJdOySoj518Xj0oGsNi59cUQF4RRMS9ouBUxDDdyBVfPTypa5g==", "dev": true, "dependencies": { "@webassemblyjs/ast": "1.12.1", "@webassemblyjs/helper-buffer": "1.12.1", "@webassemblyjs/helper-wasm-bytecode": "1.11.6", "@webassemblyjs/wasm-gen": "1.12.1" } }, "node_modules/@webassemblyjs/ieee754": { "version": "1.11.6", "resolved": "https://registry.npmjs.org/@webassemblyjs/ieee754/-/ieee754-1.11.6.tgz", "integrity": "sha512-LM4p2csPNvbij6U1f19v6WR56QZ8JcHg3QIJTlSwzFcmx6WSORicYj6I63f9yU1kEUtrpG+kjkiIAkevHpDXrg==", "dev": true, "dependencies": { "@xtuc/ieee754": "^1.2.0" } }, "node_modules/@webassemblyjs/leb128": { "version": "1.11.6", "resolved": "https://registry.npmjs.org/@webassemblyjs/leb128/-/leb128-1.11.6.tgz", "integrity": "sha512-m7a0FhE67DQXgouf1tbN5XQcdWoNgaAuoULHIfGFIEVKA6tu/edls6XnIlkmS6FrXAquJRPni3ZZKjw6FSPjPQ==", "dev": true, "dependencies": { "@xtuc/long": "4.2.2" } }, "node_modules/@webassemblyjs/utf8": { "version": "1.11.6", "resolved": "https://registry.npmjs.org/@webassemblyjs/utf8/-/utf8-1.11.6.tgz", "integrity": "sha512-vtXf2wTQ3+up9Zsg8sa2yWiQpzSsMyXj0qViVP6xKGCUT8p8YJ6HqI7l5eCnWx1T/FYdsv07HQs2wTFbbof/RA==", "dev": true }, "node_modules/@webassemblyjs/wasm-edit": { "version": "1.12.1", "resolved": "https://registry.npmjs.org/@webassemblyjs/wasm-edit/-/wasm-edit-1.12.1.tgz", "integrity": "sha512-1DuwbVvADvS5mGnXbE+c9NfA8QRcZ6iKquqjjmR10k6o+zzsRVesil54DKexiowcFCPdr/Q0qaMgB01+SQ1u6g==", "dev": true, "dependencies": { "@webassemblyjs/ast": "1.12.1", "@webassemblyjs/helper-buffer": "1.12.1", "@webassemblyjs/helper-wasm-bytecode": "1.11.6", "@webassemblyjs/helper-wasm-section": "1.12.1", "@webassemblyjs/wasm-gen": "1.12.1", "@webassemblyjs/wasm-opt": "1.12.1", "@webassemblyjs/wasm-parser": "1.12.1", "@webassemblyjs/wast-printer": "1.12.1" } }, "node_modules/@webassemblyjs/wasm-gen": { "version": "1.12.1", "resolved": "https://registry.npmjs.org/@webassemblyjs/wasm-gen/-/wasm-gen-1.12.1.tgz", "integrity": "sha512-TDq4Ojh9fcohAw6OIMXqiIcTq5KUXTGRkVxbSo1hQnSy6lAM5GSdfwWeSxpAo0YzgsgF182E/U0mDNhuA0tW7w==", "dev": true, "dependencies": { "@webassemblyjs/ast": "1.12.1", "@webassemblyjs/helper-wasm-bytecode": "1.11.6", "@webassemblyjs/ieee754": "1.11.6", "@webassemblyjs/leb128": "1.11.6", "@webassemblyjs/utf8": "1.11.6" } }, "node_modules/@webassemblyjs/wasm-opt": { "version": "1.12.1", "resolved": "https://registry.npmjs.org/@webassemblyjs/wasm-opt/-/wasm-opt-1.12.1.tgz", "integrity": "sha512-Jg99j/2gG2iaz3hijw857AVYekZe2SAskcqlWIZXjji5WStnOpVoat3gQfT/Q5tb2djnCjBtMocY/Su1GfxPBg==", "dev": true, "dependencies": { "@webassemblyjs/ast": "1.12.1", "@webassemblyjs/helper-buffer": "1.12.1", "@webassemblyjs/wasm-gen": "1.12.1", "@webassemblyjs/wasm-parser": "1.12.1" } }, "node_modules/@webassemblyjs/wasm-parser": { "version": "1.12.1", "resolved": "https://registry.npmjs.org/@webassemblyjs/wasm-parser/-/wasm-parser-1.12.1.tgz", "integrity": "sha512-xikIi7c2FHXysxXe3COrVUPSheuBtpcfhbpFj4gmu7KRLYOzANztwUU0IbsqvMqzuNK2+glRGWCEqZo1WCLyAQ==", "dev": true, 
"dependencies": { "@webassemblyjs/ast": "1.12.1", "@webassemblyjs/helper-api-error": "1.11.6", "@webassemblyjs/helper-wasm-bytecode": "1.11.6", "@webassemblyjs/ieee754": "1.11.6", "@webassemblyjs/leb128": "1.11.6", "@webassemblyjs/utf8": "1.11.6" } }, "node_modules/@webassemblyjs/wast-printer": { "version": "1.12.1", "resolved": "https://registry.npmjs.org/@webassemblyjs/wast-printer/-/wast-printer-1.12.1.tgz", "integrity": "sha512-+X4WAlOisVWQMikjbcvY2e0rwPsKQ9F688lksZhBcPycBBuii3O7m8FACbDMWDojpAqvjIncrG8J0XHKyQfVeA==", "dev": true, "dependencies": { "@webassemblyjs/ast": "1.12.1", "@xtuc/long": "4.2.2" } }, "node_modules/@webpack-cli/configtest": { "version": "2.0.1", "resolved": "https://registry.npmjs.org/@webpack-cli/configtest/-/configtest-2.0.1.tgz", "integrity": "sha512-njsdJXJSiS2iNbQVS0eT8A/KPnmyH4pv1APj2K0d1wrZcBLw+yppxOy4CGqa0OxDJkzfL/XELDhD8rocnIwB5A==", "dev": true, "engines": { "node": ">=14.15.0" }, "peerDependencies": { "webpack": "5.x.x", "webpack-cli": "5.x.x" } }, "node_modules/@webpack-cli/info": { "version": "2.0.1", "resolved": "https://registry.npmjs.org/@webpack-cli/info/-/info-2.0.1.tgz", "integrity": "sha512-fE1UEWTwsAxRhrJNikE7v4EotYflkEhBL7EbajfkPlf6E37/2QshOy/D48Mw8G5XMFlQtS6YV42vtbG9zBpIQA==", "dev": true, "engines": { "node": ">=14.15.0" }, "peerDependencies": { "webpack": "5.x.x", "webpack-cli": "5.x.x" } }, "node_modules/@webpack-cli/serve": { "version": "2.0.1", "resolved": "https://registry.npmjs.org/@webpack-cli/serve/-/serve-2.0.1.tgz", "integrity": "sha512-0G7tNyS+yW8TdgHwZKlDWYXFA6OJQnoLCQvYKkQP0Q2X205PSQ6RNUj0M+1OB/9gRQaUZ/ccYfaxd0nhaWKfjw==", "dev": true, "engines": { "node": ">=14.15.0" }, "peerDependencies": { "webpack": "5.x.x", "webpack-cli": "5.x.x" }, "peerDependenciesMeta": { "webpack-dev-server": { "optional": true } } }, "node_modules/@xtuc/ieee754": { "version": "1.2.0", "resolved": "https://registry.npmjs.org/@xtuc/ieee754/-/ieee754-1.2.0.tgz", "integrity": "sha512-DX8nKgqcGwsc0eJSqYt5lwP4DH5FlHnmuWWBRy7X0NcaGR0ZtuyeESgMwTYVEtxmsNGY+qit4QYT/MIYTOTPeA==", "dev": true }, "node_modules/@xtuc/long": { "version": "4.2.2", "resolved": "https://registry.npmjs.org/@xtuc/long/-/long-4.2.2.tgz", "integrity": "sha512-NuHqBY1PB/D8xU6s/thBgOAiAP7HOYDQ32+BFZILJ8ivkUkAHQnWfn6WhL79Owj1qmUnoN/YPhktdIoucipkAQ==", "dev": true }, "node_modules/accepts": { "version": "1.3.8", "resolved": "https://registry.npmjs.org/accepts/-/accepts-1.3.8.tgz", "integrity": "sha512-PYAthTa2m2VKxuvSD3DPC/Gy+U+sOA1LAuT8mkmRuvw+NACSaeXEQ+NHcVF7rONl6qcaxV3Uuemwawk+7+SJLw==", "dev": true, "dependencies": { "mime-types": "~2.1.34", "negotiator": "0.6.3" }, "engines": { "node": ">= 0.6" } }, "node_modules/acorn": { "version": "8.12.1", "resolved": "https://registry.npmjs.org/acorn/-/acorn-8.12.1.tgz", "integrity": "sha512-tcpGyI9zbizT9JbV6oYE477V6mTlXvvi0T0G3SNIYE2apm/G5huBa1+K89VGeovbg+jycCrfhl3ADxErOuO6Jg==", "dev": true, "bin": { "acorn": "bin/acorn" }, "engines": { "node": ">=0.4.0" } }, "node_modules/acorn-import-attributes": { "version": "1.9.5", "resolved": "https://registry.npmjs.org/acorn-import-attributes/-/acorn-import-attributes-1.9.5.tgz", "integrity": "sha512-n02Vykv5uA3eHGM/Z2dQrcD56kL8TyDb2p1+0P83PClMnC/nc+anbQRhIOWnSq4Ke/KvDPrY3C9hDtC/A3eHnQ==", "dev": true, "peerDependencies": { "acorn": "^8" } }, "node_modules/ajv": { "version": "8.11.2", "resolved": "https://registry.npmjs.org/ajv/-/ajv-8.11.2.tgz", "integrity": "sha512-E4bfmKAhGiSTvMfL1Myyycaub+cUEU2/IvpylXkUu7CHBkBj1f/ikdzbD7YQ6FKUbixDxeYvB/xY4fvyroDlQg==", "dev": true, "dependencies": { "fast-deep-equal": 
"^3.1.1", "json-schema-traverse": "^1.0.0", "require-from-string": "^2.0.2", "uri-js": "^4.2.2" }, "funding": { "type": "github", "url": "https://github.com/sponsors/epoberezkin" } }, "node_modules/ajv-formats": { "version": "2.1.1", "resolved": "https://registry.npmjs.org/ajv-formats/-/ajv-formats-2.1.1.tgz", "integrity": "sha512-Wx0Kx52hxE7C18hkMEggYlEifqWZtYaRgouJor+WMdPnQyEK13vgEWyVNup7SoeeoLMsr4kf5h6dOW11I15MUA==", "dev": true, "dependencies": { "ajv": "^8.0.0" }, "peerDependencies": { "ajv": "^8.0.0" }, "peerDependenciesMeta": { "ajv": { "optional": true } } }, "node_modules/ajv-keywords": { "version": "5.1.0", "resolved": "https://registry.npmjs.org/ajv-keywords/-/ajv-keywords-5.1.0.tgz", "integrity": "sha512-YCS/JNFAUyr5vAuhk1DWm1CBxRHW9LbJ2ozWeemrIqpbsqKjHVxYPyi5GC0rjZIT5JxJ3virVTS8wk4i/Z+krw==", "dev": true, "dependencies": { "fast-deep-equal": "^3.1.3" }, "peerDependencies": { "ajv": "^8.8.2" } }, "node_modules/ansi-html-community": { "version": "0.0.8", "resolved": "https://registry.npmjs.org/ansi-html-community/-/ansi-html-community-0.0.8.tgz", "integrity": "sha512-1APHAyr3+PCamwNw3bXCPp4HFLONZt/yIH0sZp0/469KWNTEy+qN5jQ3GVX6DMZ1UXAi34yVwtTeaG/HpBuuzw==", "dev": true, "engines": [ "node >= 0.8.0" ], "bin": { "ansi-html": "bin/ansi-html" } }, "node_modules/anymatch": { "version": "3.1.3", "resolved": "https://registry.npmjs.org/anymatch/-/anymatch-3.1.3.tgz", "integrity": "sha512-KMReFUr0B4t+D+OBkjR3KYqvocp2XaSzO55UcB6mgQMd3KbcE+mWTyvVV7D/zsdEbNnV6acZUutkiHQXvTr1Rw==", "dev": true, "license": "ISC", "dependencies": { "normalize-path": "^3.0.0", "picomatch": "^2.0.4" }, "engines": { "node": ">= 8" } }, "node_modules/array-flatten": { "version": "1.1.1", "resolved": "https://registry.npmjs.org/array-flatten/-/array-flatten-1.1.1.tgz", "integrity": "sha512-PCVAQswWemu6UdxsDFFX/+gVeYqKAod3D3UVm91jHwynguOwAvYPhx8nNlM++NqRcK6CxxpUafjmhIdKiHibqg==", "dev": true, "license": "MIT" }, "node_modules/batch": { "version": "0.6.1", "resolved": "https://registry.npmjs.org/batch/-/batch-0.6.1.tgz", "integrity": "sha512-x+VAiMRL6UPkx+kudNvxTl6hB2XNNCG2r+7wixVfIYwu/2HKRXimwQyaumLjMveWvT2Hkd/cAJw+QBMfJ/EKVw==", "dev": true }, "node_modules/binary-extensions": { "version": "2.3.0", "resolved": "https://registry.npmjs.org/binary-extensions/-/binary-extensions-2.3.0.tgz", "integrity": "sha512-Ceh+7ox5qe7LJuLHoY0feh3pHuUDHAcRUeyL2VYghZwfpkNIy/+8Ocg0a3UuSoYzavmylwuLWQOf3hl0jjMMIw==", "dev": true, "license": "MIT", "engines": { "node": ">=8" }, "funding": { "url": "https://github.com/sponsors/sindresorhus" } }, "node_modules/body-parser": { "version": "1.20.3", "resolved": "https://registry.npmjs.org/body-parser/-/body-parser-1.20.3.tgz", "integrity": "sha512-7rAxByjUMqQ3/bHJy7D6OGXvx/MMc4IqBn/X0fcM1QUcAItpZrBEYhWGem+tzXH90c+G01ypMcYJBO9Y30203g==", "dev": true, "license": "MIT", "dependencies": { "bytes": "3.1.2", "content-type": "~1.0.5", "debug": "2.6.9", "depd": "2.0.0", "destroy": "1.2.0", "http-errors": "2.0.0", "iconv-lite": "0.4.24", "on-finished": "2.4.1", "qs": "6.13.0", "raw-body": "2.5.2", "type-is": "~1.6.18", "unpipe": "1.0.0" }, "engines": { "node": ">= 0.8", "npm": "1.2.8000 || >= 1.4.16" } }, "node_modules/bonjour-service": { "version": "1.3.0", "resolved": "https://registry.npmjs.org/bonjour-service/-/bonjour-service-1.3.0.tgz", "integrity": "sha512-3YuAUiSkWykd+2Azjgyxei8OWf8thdn8AITIog2M4UICzoqfjlqr64WIjEXZllf/W6vK1goqleSR6brGomxQqA==", "dev": true, "license": "MIT", "dependencies": { "fast-deep-equal": "^3.1.3", "multicast-dns": "^7.2.5" } }, "node_modules/braces": { "version": 
"3.0.2", "resolved": "https://registry.npmjs.org/braces/-/braces-3.0.2.tgz", "integrity": "sha512-b8um+L1RzM3WDSzvhm6gIz1yfTbBt6YTlcEKAvsmqCZZFw46z626lVj9j1yEPW33H5H+lBQpZMP1k8l+78Ha0A==", "dev": true, "dependencies": { "fill-range": "^7.0.1" }, "engines": { "node": ">=8" } }, "node_modules/browserslist": { "version": "4.24.0", "resolved": "https://registry.npmjs.org/browserslist/-/browserslist-4.24.0.tgz", "integrity": "sha512-Rmb62sR1Zpjql25eSanFGEhAxcFwfA1K0GuQcLoaJBAcENegrQut3hYdhXFF1obQfiDyqIW/cLM5HSJ/9k884A==", "dev": true, "funding": [ { "type": "opencollective", "url": "https://opencollective.com/browserslist" }, { "type": "tidelift", "url": "https://tidelift.com/funding/github/npm/browserslist" }, { "type": "github", "url": "https://github.com/sponsors/ai" } ], "dependencies": { "caniuse-lite": "^1.0.30001663", "electron-to-chromium": "^1.5.28", "node-releases": "^2.0.18", "update-browserslist-db": "^1.1.0" }, "bin": { "browserslist": "cli.js" }, "engines": { "node": "^6 || ^7 || ^8 || ^9 || ^10 || ^11 || ^12 || >=13.7" } }, "node_modules/buffer-from": { "version": "1.1.2", "resolved": "https://registry.npmjs.org/buffer-from/-/buffer-from-1.1.2.tgz", "integrity": "sha512-E+XQCRwSbaaiChtv6k6Dwgc+bx+Bs6vuKJHHl5kox/BaKbhiXzqQOwK4cO22yElGp2OCmjwVhT3HmxgyPGnJfQ==", "dev": true }, "node_modules/bundle-name": { "version": "4.1.0", "resolved": "https://registry.npmjs.org/bundle-name/-/bundle-name-4.1.0.tgz", "integrity": "sha512-tjwM5exMg6BGRI+kNmTntNsvdZS1X8BFYS6tnJ2hdH0kVxM6/eVZ2xy+FqStSWvYmtfFMDLIxurorHwDKfDz5Q==", "dev": true, "license": "MIT", "dependencies": { "run-applescript": "^7.0.0" }, "engines": { "node": ">=18" }, "funding": { "url": "https://github.com/sponsors/sindresorhus" } }, "node_modules/bytes": { "version": "3.1.2", "resolved": "https://registry.npmjs.org/bytes/-/bytes-3.1.2.tgz", "integrity": "sha512-/Nf7TyzTx6S3yRJObOAV7956r8cr2+Oj8AC5dt8wSP3BQAoeX58NoHyCU8P8zGkNXStjTSi6fzO6F0pBdcYbEg==", "dev": true, "engines": { "node": ">= 0.8" } }, "node_modules/call-bind-apply-helpers": { "version": "1.0.2", "resolved": "https://registry.npmjs.org/call-bind-apply-helpers/-/call-bind-apply-helpers-1.0.2.tgz", "integrity": "sha512-Sp1ablJ0ivDkSzjcaJdxEunN5/XvksFJ2sMBFfq6x0ryhQV/2b/KwFe21cMpmHtPOSij8K99/wSfoEuTObmuMQ==", "dev": true, "license": "MIT", "dependencies": { "es-errors": "^1.3.0", "function-bind": "^1.1.2" }, "engines": { "node": ">= 0.4" } }, "node_modules/call-bound": { "version": "1.0.4", "resolved": "https://registry.npmjs.org/call-bound/-/call-bound-1.0.4.tgz", "integrity": "sha512-+ys997U96po4Kx/ABpBCqhA9EuxJaQWDQg7295H4hBphv3IZg0boBKuwYpt4YXp6MZ5AmZQnU/tyMTlRpaSejg==", "dev": true, "license": "MIT", "dependencies": { "call-bind-apply-helpers": "^1.0.2", "get-intrinsic": "^1.3.0" }, "engines": { "node": ">= 0.4" }, "funding": { "url": "https://github.com/sponsors/ljharb" } }, "node_modules/caniuse-lite": { "version": "1.0.30001664", "resolved": "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30001664.tgz", "integrity": "sha512-AmE7k4dXiNKQipgn7a2xg558IRqPN3jMQY/rOsbxDhrd0tyChwbITBfiwtnqz8bi2M5mIWbxAYBvk7W7QBUS2g==", "dev": true, "funding": [ { "type": "opencollective", "url": "https://opencollective.com/browserslist" }, { "type": "tidelift", "url": "https://tidelift.com/funding/github/npm/caniuse-lite" }, { "type": "github", "url": "https://github.com/sponsors/ai" } ] }, "node_modules/chokidar": { "version": "3.6.0", "resolved": "https://registry.npmjs.org/chokidar/-/chokidar-3.6.0.tgz", "integrity": 
"sha512-7VT13fmjotKpGipCW9JEQAusEPE+Ei8nl6/g4FBAmIm0GOOLMua9NDDo/DWp0ZAxCr3cPq5ZpBqmPAQgDda2Pw==", "dev": true, "license": "MIT", "dependencies": { "anymatch": "~3.1.2", "braces": "~3.0.2", "glob-parent": "~5.1.2", "is-binary-path": "~2.1.0", "is-glob": "~4.0.1", "normalize-path": "~3.0.0", "readdirp": "~3.6.0" }, "engines": { "node": ">= 8.10.0" }, "funding": { "url": "https://paulmillr.com/funding/" }, "optionalDependencies": { "fsevents": "~2.3.2" } }, "node_modules/chokidar/node_modules/glob-parent": { "version": "5.1.2", "resolved": "https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.2.tgz", "integrity": "sha512-AOIgSQCepiJYwP3ARnGx+5VnTu2HBYdzbGP45eLw1vr3zB3vZLeyed1sC9hnbcOc9/SrMyM5RPQrkGz4aS9Zow==", "dev": true, "license": "ISC", "dependencies": { "is-glob": "^4.0.1" }, "engines": { "node": ">= 6" } }, "node_modules/chrome-trace-event": { "version": "1.0.3", "resolved": "https://registry.npmjs.org/chrome-trace-event/-/chrome-trace-event-1.0.3.tgz", "integrity": "sha512-p3KULyQg4S7NIHixdwbGX+nFHkoBiA4YQmyWtjb8XngSKV124nJmRysgAeujbUVb15vh+RvFUfCPqU7rXk+hZg==", "dev": true, "engines": { "node": ">=6.0" } }, "node_modules/clone-deep": { "version": "4.0.1", "resolved": "https://registry.npmjs.org/clone-deep/-/clone-deep-4.0.1.tgz", "integrity": "sha512-neHB9xuzh/wk0dIHweyAXv2aPGZIVk3pLMe+/RNzINf17fe0OG96QroktYAUm7SM1PBnzTabaLboqqxDyMU+SQ==", "dev": true, "dependencies": { "is-plain-object": "^2.0.4", "kind-of": "^6.0.2", "shallow-clone": "^3.0.0" }, "engines": { "node": ">=6" } }, "node_modules/colorette": { "version": "2.0.19", "resolved": "https://registry.npmjs.org/colorette/-/colorette-2.0.19.tgz", "integrity": "sha512-3tlv/dIP7FWvj3BsbHrGLJ6l/oKh1O3TcgBqMn+yyCagOxc23fyzDS6HypQbgxWbkpDnf52p1LuR4eWDQ/K9WQ==", "dev": true }, "node_modules/commander": { "version": "2.20.3", "resolved": "https://registry.npmjs.org/commander/-/commander-2.20.3.tgz", "integrity": "sha512-GpVkmM8vF2vQUkj2LvZmD35JxeJOLCwJ9cUkugyk2nuhbv3+mJvpLYYt+0+USMxE+oj+ey/lJEnhZw75x/OMcQ==", "dev": true }, "node_modules/compressible": { "version": "2.0.18", "resolved": "https://registry.npmjs.org/compressible/-/compressible-2.0.18.tgz", "integrity": "sha512-AF3r7P5dWxL8MxyITRMlORQNaOA2IkAFaTr4k7BUumjPtRpGDTZpl0Pb1XCO6JeDCBdp126Cgs9sMxqSjgYyRg==", "dev": true, "dependencies": { "mime-db": ">= 1.43.0 < 2" }, "engines": { "node": ">= 0.6" } }, "node_modules/compression": { "version": "1.8.1", "resolved": "https://registry.npmjs.org/compression/-/compression-1.8.1.tgz", "integrity": "sha512-9mAqGPHLakhCLeNyxPkK4xVo746zQ/czLH1Ky+vkitMnWfWZps8r0qXuwhwizagCRttsL4lfG4pIOvaWLpAP0w==", "dev": true, "dependencies": { "bytes": "3.1.2", "compressible": "~2.0.18", "debug": "2.6.9", "negotiator": "~0.6.4", "on-headers": "~1.1.0", "safe-buffer": "5.2.1", "vary": "~1.1.2" }, "engines": { "node": ">= 0.8.0" } }, "node_modules/compression/node_modules/negotiator": { "version": "0.6.4", "resolved": "https://registry.npmjs.org/negotiator/-/negotiator-0.6.4.tgz", "integrity": "sha512-myRT3DiWPHqho5PrJaIRyaMv2kgYf0mUVgBNOYMuCH5Ki1yEiQaf/ZJuQ62nvpc44wL5WDbTX7yGJi1Neevw8w==", "dev": true, "engines": { "node": ">= 0.6" } }, "node_modules/compression/node_modules/safe-buffer": { "version": "5.2.1", "resolved": "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.2.1.tgz", "integrity": "sha512-rp3So07KcdmmKbGvgaNxQSJr7bGVSVk5S9Eq1F+ppbRo70+YeaDxkw5Dd8NPN+GD6bjnYm2VuPuCXmpuYvmCXQ==", "dev": true, "funding": [ { "type": "github", "url": "https://github.com/sponsors/feross" }, { "type": "patreon", "url": "https://www.patreon.com/feross" }, { 
"type": "consulting", "url": "https://feross.org/support" } ] }, "node_modules/connect-history-api-fallback": { "version": "2.0.0", "resolved": "https://registry.npmjs.org/connect-history-api-fallback/-/connect-history-api-fallback-2.0.0.tgz", "integrity": "sha512-U73+6lQFmfiNPrYbXqr6kZ1i1wiRqXnp2nhMsINseWXO8lDau0LGEffJ8kQi4EjLZympVgRdvqjAgiZ1tgzDDA==", "dev": true, "engines": { "node": ">=0.8" } }, "node_modules/content-disposition": { "version": "0.5.4", "resolved": "https://registry.npmjs.org/content-disposition/-/content-disposition-0.5.4.tgz", "integrity": "sha512-FveZTNuGw04cxlAiWbzi6zTAL/lhehaWbTtgluJh4/E95DqMwTmha3KZN1aAWA8cFIhHzMZUvLevkw5Rqk+tSQ==", "dev": true, "license": "MIT", "dependencies": { "safe-buffer": "5.2.1" }, "engines": { "node": ">= 0.6" } }, "node_modules/content-disposition/node_modules/safe-buffer": { "version": "5.2.1", "resolved": "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.2.1.tgz", "integrity": "sha512-rp3So07KcdmmKbGvgaNxQSJr7bGVSVk5S9Eq1F+ppbRo70+YeaDxkw5Dd8NPN+GD6bjnYm2VuPuCXmpuYvmCXQ==", "dev": true, "funding": [ { "type": "github", "url": "https://github.com/sponsors/feross" }, { "type": "patreon", "url": "https://www.patreon.com/feross" }, { "type": "consulting", "url": "https://feross.org/support" } ], "license": "MIT" }, "node_modules/content-type": { "version": "1.0.5", "resolved": "https://registry.npmjs.org/content-type/-/content-type-1.0.5.tgz", "integrity": "sha512-nTjqfcBFEipKdXCv4YDQWCfmcLZKm81ldF0pAopTvyrFGVbcR6P/VAAd5G7N+0tTr8QqiU0tFadD6FK4NtJwOA==", "dev": true, "license": "MIT", "engines": { "node": ">= 0.6" } }, "node_modules/cookie": { "version": "0.7.1", "resolved": "https://registry.npmjs.org/cookie/-/cookie-0.7.1.tgz", "integrity": "sha512-6DnInpx7SJ2AK3+CTUE/ZM0vWTUboZCegxhC2xiIydHR9jNuTAASBrfEpHhiGOZw/nX51bHt6YQl8jsGo4y/0w==", "dev": true, "license": "MIT", "engines": { "node": ">= 0.6" } }, "node_modules/cookie-signature": { "version": "1.0.6", "resolved": "https://registry.npmjs.org/cookie-signature/-/cookie-signature-1.0.6.tgz", "integrity": "sha512-QADzlaHc8icV8I7vbaJXJwod9HWYp8uCqf1xa4OfNu1T7JVxQIrUgOWtHdNDtPiywmFbiS12VjotIXLrKM3orQ==", "dev": true, "license": "MIT" }, "node_modules/copy-webpack-plugin": { "version": "11.0.0", "resolved": "https://registry.npmjs.org/copy-webpack-plugin/-/copy-webpack-plugin-11.0.0.tgz", "integrity": "sha512-fX2MWpamkW0hZxMEg0+mYnA40LTosOSa5TqZ9GYIBzyJa9C3QUaMPSE2xAi/buNr8u89SfD9wHSQVBzrRa/SOQ==", "dev": true, "dependencies": { "fast-glob": "^3.2.11", "glob-parent": "^6.0.1", "globby": "^13.1.1", "normalize-path": "^3.0.0", "schema-utils": "^4.0.0", "serialize-javascript": "^6.0.0" }, "engines": { "node": ">= 14.15.0" }, "funding": { "type": "opencollective", "url": "https://opencollective.com/webpack" }, "peerDependencies": { "webpack": "^5.1.0" } }, "node_modules/core-util-is": { "version": "1.0.3", "resolved": "https://registry.npmjs.org/core-util-is/-/core-util-is-1.0.3.tgz", "integrity": "sha512-ZQBvi1DcpJ4GDqanjucZ2Hj3wEO5pZDS89BWbkcrvdxksJorwUDDZamX9ldFkp9aw2lmBDLgkObEA4DWNJ9FYQ==", "dev": true }, "node_modules/cross-spawn": { "version": "7.0.3", "resolved": "https://registry.npmjs.org/cross-spawn/-/cross-spawn-7.0.3.tgz", "integrity": "sha512-iRDPJKUPVEND7dHPO8rkbOnPpyDygcDFtWjpeWNCgy8WP2rXcxXL8TskReQl6OrB2G7+UJrags1q15Fudc7G6w==", "dev": true, "dependencies": { "path-key": "^3.1.0", "shebang-command": "^2.0.0", "which": "^2.0.1" }, "engines": { "node": ">= 8" } }, "node_modules/debug": { "version": "2.6.9", "resolved": "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz", 
"integrity": "sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA==", "dev": true, "dependencies": { "ms": "2.0.0" } }, "node_modules/default-browser": { "version": "5.2.1", "resolved": "https://registry.npmjs.org/default-browser/-/default-browser-5.2.1.tgz", "integrity": "sha512-WY/3TUME0x3KPYdRRxEJJvXRHV4PyPoUsxtZa78lwItwRQRHhd2U9xOscaT/YTf8uCXIAjeJOFBVEh/7FtD8Xg==", "dev": true, "license": "MIT", "dependencies": { "bundle-name": "^4.1.0", "default-browser-id": "^5.0.0" }, "engines": { "node": ">=18" }, "funding": { "url": "https://github.com/sponsors/sindresorhus" } }, "node_modules/default-browser-id": { "version": "5.0.0", "resolved": "https://registry.npmjs.org/default-browser-id/-/default-browser-id-5.0.0.tgz", "integrity": "sha512-A6p/pu/6fyBcA1TRz/GqWYPViplrftcW2gZC9q79ngNCKAeR/X3gcEdXQHl4KNXV+3wgIJ1CPkJQ3IHM6lcsyA==", "dev": true, "license": "MIT", "engines": { "node": ">=18" }, "funding": { "url": "https://github.com/sponsors/sindresorhus" } }, "node_modules/define-lazy-prop": { "version": "3.0.0", "resolved": "https://registry.npmjs.org/define-lazy-prop/-/define-lazy-prop-3.0.0.tgz", "integrity": "sha512-N+MeXYoqr3pOgn8xfyRPREN7gHakLYjhsHhWGT3fWAiL4IkAt0iDw14QiiEm2bE30c5XX5q0FtAA3CK5f9/BUg==", "dev": true, "license": "MIT", "engines": { "node": ">=12" }, "funding": { "url": "https://github.com/sponsors/sindresorhus" } }, "node_modules/depd": { "version": "2.0.0", "resolved": "https://registry.npmjs.org/depd/-/depd-2.0.0.tgz", "integrity": "sha512-g7nH6P6dyDioJogAAGprGpCtVImJhpPk/roCzdb3fIh61/s/nPsfR6onyMwkCAR/OlC3yBC0lESvUoQEAssIrw==", "dev": true, "license": "MIT", "engines": { "node": ">= 0.8" } }, "node_modules/destroy": { "version": "1.2.0", "resolved": "https://registry.npmjs.org/destroy/-/destroy-1.2.0.tgz", "integrity": "sha512-2sJGJTaXIIaR1w4iJSNoN0hnMY7Gpc/n8D4qSCJw8QqFWXf7cuAgnEHxBpweaVcPevC2l3KpjYCx3NypQQgaJg==", "dev": true, "license": "MIT", "engines": { "node": ">= 0.8", "npm": "1.2.8000 || >= 1.4.16" } }, "node_modules/detect-node": { "version": "2.1.0", "resolved": "https://registry.npmjs.org/detect-node/-/detect-node-2.1.0.tgz", "integrity": "sha512-T0NIuQpnTvFDATNuHN5roPwSBG83rFsuO+MXXH9/3N1eFbn4wcPjttvjMLEPWJ0RGUYgQE7cGgS3tNxbqCGM7g==", "dev": true }, "node_modules/dir-glob": { "version": "3.0.1", "resolved": "https://registry.npmjs.org/dir-glob/-/dir-glob-3.0.1.tgz", "integrity": "sha512-WkrWp9GR4KXfKGYzOLmTuGVi1UWFfws377n9cc55/tb6DuqyF6pcQ5AbiHEshaDpY9v6oaSr2XCDidGmMwdzIA==", "dev": true, "dependencies": { "path-type": "^4.0.0" }, "engines": { "node": ">=8" } }, "node_modules/dns-packet": { "version": "5.6.1", "resolved": "https://registry.npmjs.org/dns-packet/-/dns-packet-5.6.1.tgz", "integrity": "sha512-l4gcSouhcgIKRvyy99RNVOgxXiicE+2jZoNmaNmZ6JXiGajBOJAesk1OBlJuM5k2c+eudGdLxDqXuPCKIj6kpw==", "dev": true, "license": "MIT", "dependencies": { "@leichtgewicht/ip-codec": "^2.0.1" }, "engines": { "node": ">=6" } }, "node_modules/dunder-proto": { "version": "1.0.1", "resolved": "https://registry.npmjs.org/dunder-proto/-/dunder-proto-1.0.1.tgz", "integrity": "sha512-KIN/nDJBQRcXw0MLVhZE9iQHmG68qAVIBg9CqmUYjmQIhgij9U5MFvrqkUL5FbtyyzZuOeOt0zdeRe4UY7ct+A==", "dev": true, "license": "MIT", "dependencies": { "call-bind-apply-helpers": "^1.0.1", "es-errors": "^1.3.0", "gopd": "^1.2.0" }, "engines": { "node": ">= 0.4" } }, "node_modules/ee-first": { "version": "1.1.1", "resolved": "https://registry.npmjs.org/ee-first/-/ee-first-1.1.1.tgz", "integrity": 
"sha512-WMwm9LhRUo+WUaRN+vRuETqG89IgZphVSNkdFgeb6sS/E4OrDIN7t48CAewSHXc6C8lefD8KKfr5vY61brQlow==", "dev": true, "license": "MIT" }, "node_modules/electron-to-chromium": { "version": "1.5.29", "resolved": "https://registry.npmjs.org/electron-to-chromium/-/electron-to-chromium-1.5.29.tgz", "integrity": "sha512-PF8n2AlIhCKXQ+gTpiJi0VhcHDb69kYX4MtCiivctc2QD3XuNZ/XIOlbGzt7WAjjEev0TtaH6Cu3arZExm5DOw==", "dev": true }, "node_modules/encodeurl": { "version": "2.0.0", "resolved": "https://registry.npmjs.org/encodeurl/-/encodeurl-2.0.0.tgz", "integrity": "sha512-Q0n9HRi4m6JuGIV1eFlmvJB7ZEVxu93IrMyiMsGC0lrMJMWzRgx6WGquyfQgZVb31vhGgXnfmPNNXmxnOkRBrg==", "dev": true, "license": "MIT", "engines": { "node": ">= 0.8" } }, "node_modules/enhanced-resolve": { "version": "5.17.1", "resolved": "https://registry.npmjs.org/enhanced-resolve/-/enhanced-resolve-5.17.1.tgz", "integrity": "sha512-LMHl3dXhTcfv8gM4kEzIUeTQ+7fpdA0l2tUf34BddXPkz2A5xJ5L/Pchd5BL6rdccM9QGvu0sWZzK1Z1t4wwyg==", "dev": true, "dependencies": { "graceful-fs": "^4.2.4", "tapable": "^2.2.0" }, "engines": { "node": ">=10.13.0" } }, "node_modules/envinfo": { "version": "7.8.1", "resolved": "https://registry.npmjs.org/envinfo/-/envinfo-7.8.1.tgz", "integrity": "sha512-/o+BXHmB7ocbHEAs6F2EnG0ogybVVUdkRunTT2glZU9XAaGmhqskrvKwqXuDfNjEO0LZKWdejEEpnq8aM0tOaw==", "dev": true, "bin": { "envinfo": "dist/cli.js" }, "engines": { "node": ">=4" } }, "node_modules/es-define-property": { "version": "1.0.1", "resolved": "https://registry.npmjs.org/es-define-property/-/es-define-property-1.0.1.tgz", "integrity": "sha512-e3nRfgfUZ4rNGL232gUgX06QNyyez04KdjFrF+LTRoOXmrOgFKDg4BCdsjW8EnT69eqdYGmRpJwiPVYNrCaW3g==", "dev": true, "license": "MIT", "engines": { "node": ">= 0.4" } }, "node_modules/es-errors": { "version": "1.3.0", "resolved": "https://registry.npmjs.org/es-errors/-/es-errors-1.3.0.tgz", "integrity": "sha512-Zf5H2Kxt2xjTvbJvP2ZWLEICxA6j+hAmMzIlypy4xcBg1vKVnx89Wy0GbS+kf5cwCVFFzdCFh2XSCFNULS6csw==", "dev": true, "license": "MIT", "engines": { "node": ">= 0.4" } }, "node_modules/es-module-lexer": { "version": "1.5.4", "resolved": "https://registry.npmjs.org/es-module-lexer/-/es-module-lexer-1.5.4.tgz", "integrity": "sha512-MVNK56NiMrOwitFB7cqDwq0CQutbw+0BvLshJSse0MUNU+y1FC3bUS/AQg7oUng+/wKrrki7JfmwtVHkVfPLlw==", "dev": true }, "node_modules/es-object-atoms": { "version": "1.1.1", "resolved": "https://registry.npmjs.org/es-object-atoms/-/es-object-atoms-1.1.1.tgz", "integrity": "sha512-FGgH2h8zKNim9ljj7dankFPcICIK9Cp5bm+c2gQSYePhpaG5+esrLODihIorn+Pe6FGJzWhXQotPv73jTaldXA==", "dev": true, "license": "MIT", "dependencies": { "es-errors": "^1.3.0" }, "engines": { "node": ">= 0.4" } }, "node_modules/escalade": { "version": "3.2.0", "resolved": "https://registry.npmjs.org/escalade/-/escalade-3.2.0.tgz", "integrity": "sha512-WUj2qlxaQtO4g6Pq5c29GTcWGDyd8itL8zTlipgECz3JesAiiOKotd8JU6otB3PACgG6xkJUyVhboMS+bje/jA==", "dev": true, "engines": { "node": ">=6" } }, "node_modules/escape-html": { "version": "1.0.3", "resolved": "https://registry.npmjs.org/escape-html/-/escape-html-1.0.3.tgz", "integrity": "sha512-NiSupZ4OeuGwr68lGIeym/ksIZMJodUGOSCZ/FSnTxcrekbvqrgdUxlJOMpijaKZVjAJrWrGs/6Jy8OMuyj9ow==", "dev": true }, "node_modules/eslint-scope": { "version": "5.1.1", "resolved": "https://registry.npmjs.org/eslint-scope/-/eslint-scope-5.1.1.tgz", "integrity": "sha512-2NxwbF/hZ0KpepYN0cNbo+FN6XoK7GaHlQhgx/hIZl6Va0bF45RQOOwhLIy8lQDbuCiadSLCBnH2CFYquit5bw==", "dev": true, "dependencies": { "esrecurse": "^4.3.0", "estraverse": "^4.1.1" }, "engines": { "node": ">=8.0.0" } }, 
"node_modules/esrecurse": { "version": "4.3.0", "resolved": "https://registry.npmjs.org/esrecurse/-/esrecurse-4.3.0.tgz", "integrity": "sha512-KmfKL3b6G+RXvP8N1vr3Tq1kL/oCFgn2NYXEtqP8/L3pKapUA4G8cFVaoF3SU323CD4XypR/ffioHmkti6/Tag==", "dev": true, "dependencies": { "estraverse": "^5.2.0" }, "engines": { "node": ">=4.0" } }, "node_modules/esrecurse/node_modules/estraverse": { "version": "5.3.0", "resolved": "https://registry.npmjs.org/estraverse/-/estraverse-5.3.0.tgz", "integrity": "sha512-MMdARuVEQziNTeJD8DgMqmhwR11BRQ/cBP+pLtYdSTnf3MIO8fFeiINEbX36ZdNlfU/7A9f3gUw49B3oQsvwBA==", "dev": true, "engines": { "node": ">=4.0" } }, "node_modules/estraverse": { "version": "4.3.0", "resolved": "https://registry.npmjs.org/estraverse/-/estraverse-4.3.0.tgz", "integrity": "sha512-39nnKffWz8xN1BU/2c79n9nB9HDzo0niYUqx6xyqUnyoAnQyyWpOTdZEeiCch8BBu515t4wp9ZmgVfVhn9EBpw==", "dev": true, "engines": { "node": ">=4.0" } }, "node_modules/etag": { "version": "1.8.1", "resolved": "https://registry.npmjs.org/etag/-/etag-1.8.1.tgz", "integrity": "sha512-aIL5Fx7mawVa300al2BnEE4iNvo1qETxLrPI/o05L7z6go7fCw1J6EQmbK4FmJ2AS7kgVF/KEZWufBfdClMcPg==", "dev": true, "license": "MIT", "engines": { "node": ">= 0.6" } }, "node_modules/eventemitter3": { "version": "4.0.7", "resolved": "https://registry.npmjs.org/eventemitter3/-/eventemitter3-4.0.7.tgz", "integrity": "sha512-8guHBZCwKnFhYdHr2ysuRWErTwhoN2X8XELRlrRwpmfeY2jjuUN4taQMsULKUVo1K4DvZl+0pgfyoysHxvmvEw==", "dev": true }, "node_modules/events": { "version": "3.3.0", "resolved": "https://registry.npmjs.org/events/-/events-3.3.0.tgz", "integrity": "sha512-mQw+2fkQbALzQ7V0MY0IqdnXNOeTtP4r0lN9z7AAawCXgqea7bDii20AYrIBrFd/Hx0M2Ocz6S111CaFkUcb0Q==", "dev": true, "engines": { "node": ">=0.8.x" } }, "node_modules/express": { "version": "4.21.2", "resolved": "https://registry.npmjs.org/express/-/express-4.21.2.tgz", "integrity": "sha512-28HqgMZAmih1Czt9ny7qr6ek2qddF4FclbMzwhCREB6OFfH+rXAnuNCwo1/wFvrtbgsQDb4kSbX9de9lFbrXnA==", "dev": true, "license": "MIT", "dependencies": { "accepts": "~1.3.8", "array-flatten": "1.1.1", "body-parser": "1.20.3", "content-disposition": "0.5.4", "content-type": "~1.0.4", "cookie": "0.7.1", "cookie-signature": "1.0.6", "debug": "2.6.9", "depd": "2.0.0", "encodeurl": "~2.0.0", "escape-html": "~1.0.3", "etag": "~1.8.1", "finalhandler": "1.3.1", "fresh": "0.5.2", "http-errors": "2.0.0", "merge-descriptors": "1.0.3", "methods": "~1.1.2", "on-finished": "2.4.1", "parseurl": "~1.3.3", "path-to-regexp": "0.1.12", "proxy-addr": "~2.0.7", "qs": "6.13.0", "range-parser": "~1.2.1", "safe-buffer": "5.2.1", "send": "0.19.0", "serve-static": "1.16.2", "setprototypeof": "1.2.0", "statuses": "2.0.1", "type-is": "~1.6.18", "utils-merge": "1.0.1", "vary": "~1.1.2" }, "engines": { "node": ">= 0.10.0" }, "funding": { "type": "opencollective", "url": "https://opencollective.com/express" } }, "node_modules/express/node_modules/safe-buffer": { "version": "5.2.1", "resolved": "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.2.1.tgz", "integrity": "sha512-rp3So07KcdmmKbGvgaNxQSJr7bGVSVk5S9Eq1F+ppbRo70+YeaDxkw5Dd8NPN+GD6bjnYm2VuPuCXmpuYvmCXQ==", "dev": true, "funding": [ { "type": "github", "url": "https://github.com/sponsors/feross" }, { "type": "patreon", "url": "https://www.patreon.com/feross" }, { "type": "consulting", "url": "https://feross.org/support" } ], "license": "MIT" }, "node_modules/fast-deep-equal": { "version": "3.1.3", "resolved": "https://registry.npmjs.org/fast-deep-equal/-/fast-deep-equal-3.1.3.tgz", "integrity": 
"sha512-f3qQ9oQy9j2AhBe/H9VC91wLmKBCCU/gDOnKNAYG5hswO7BLKj09Hc5HYNz9cGI++xlpDCIgDaitVs03ATR84Q==", "dev": true }, "node_modules/fast-glob": { "version": "3.2.12", "resolved": "https://registry.npmjs.org/fast-glob/-/fast-glob-3.2.12.tgz", "integrity": "sha512-DVj4CQIYYow0BlaelwK1pHl5n5cRSJfM60UA0zK891sVInoPri2Ekj7+e1CT3/3qxXenpI+nBBmQAcJPJgaj4w==", "dev": true, "dependencies": { "@nodelib/fs.stat": "^2.0.2", "@nodelib/fs.walk": "^1.2.3", "glob-parent": "^5.1.2", "merge2": "^1.3.0", "micromatch": "^4.0.4" }, "engines": { "node": ">=8.6.0" } }, "node_modules/fast-glob/node_modules/glob-parent": { "version": "5.1.2", "resolved": "https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.2.tgz", "integrity": "sha512-AOIgSQCepiJYwP3ARnGx+5VnTu2HBYdzbGP45eLw1vr3zB3vZLeyed1sC9hnbcOc9/SrMyM5RPQrkGz4aS9Zow==", "dev": true, "dependencies": { "is-glob": "^4.0.1" }, "engines": { "node": ">= 6" } }, "node_modules/fast-json-stable-stringify": { "version": "2.1.0", "resolved": "https://registry.npmjs.org/fast-json-stable-stringify/-/fast-json-stable-stringify-2.1.0.tgz", "integrity": "sha512-lhd/wF+Lk98HZoTCtlVraHtfh5XYijIjalXck7saUtuanSDyLMxnHhSXEDJqHxD7msR8D0uCmqlkwjCV8xvwHw==", "dev": true }, "node_modules/fastest-levenshtein": { "version": "1.0.16", "resolved": "https://registry.npmjs.org/fastest-levenshtein/-/fastest-levenshtein-1.0.16.tgz", "integrity": "sha512-eRnCtTTtGZFpQCwhJiUOuxPQWRXVKYDn0b2PeHfXL6/Zi53SLAzAHfVhVWK2AryC/WH05kGfxhFIPvTF0SXQzg==", "dev": true, "engines": { "node": ">= 4.9.1" } }, "node_modules/fastq": { "version": "1.15.0", "resolved": "https://registry.npmjs.org/fastq/-/fastq-1.15.0.tgz", "integrity": "sha512-wBrocU2LCXXa+lWBt8RoIRD89Fi8OdABODa/kEnyeyjS5aZO5/GNvI5sEINADqP/h8M29UHTHUb53sUu5Ihqdw==", "dev": true, "dependencies": { "reusify": "^1.0.4" } }, "node_modules/faye-websocket": { "version": "0.11.4", "resolved": "https://registry.npmjs.org/faye-websocket/-/faye-websocket-0.11.4.tgz", "integrity": "sha512-CzbClwlXAuiRQAlUyfqPgvPoNKTckTPGfwZV4ZdAhVcP2lh9KUxJg2b5GkE7XbjKQ3YJnQ9z6D9ntLAlB+tP8g==", "dev": true, "dependencies": { "websocket-driver": ">=0.5.1" }, "engines": { "node": ">=0.8.0" } }, "node_modules/fill-range": { "version": "7.0.1", "resolved": "https://registry.npmjs.org/fill-range/-/fill-range-7.0.1.tgz", "integrity": "sha512-qOo9F+dMUmC2Lcb4BbVvnKJxTPjCm+RRpe4gDuGrzkL7mEVl/djYSu2OdQ2Pa302N4oqkSg9ir6jaLWJ2USVpQ==", "dev": true, "dependencies": { "to-regex-range": "^5.0.1" }, "engines": { "node": ">=8" } }, "node_modules/finalhandler": { "version": "1.3.1", "resolved": "https://registry.npmjs.org/finalhandler/-/finalhandler-1.3.1.tgz", "integrity": "sha512-6BN9trH7bp3qvnrRyzsBz+g3lZxTNZTbVO2EV1CS0WIcDbawYVdYvGflME/9QP0h0pYlCDBCTjYa9nZzMDpyxQ==", "dev": true, "license": "MIT", "dependencies": { "debug": "2.6.9", "encodeurl": "~2.0.0", "escape-html": "~1.0.3", "on-finished": "2.4.1", "parseurl": "~1.3.3", "statuses": "2.0.1", "unpipe": "~1.0.0" }, "engines": { "node": ">= 0.8" } }, "node_modules/find-up": { "version": "4.1.0", "resolved": "https://registry.npmjs.org/find-up/-/find-up-4.1.0.tgz", "integrity": "sha512-PpOwAdQ/YlXQ2vj8a3h8IipDuYRi3wceVQQGYWxNINccq40Anw7BlsEXCMbt1Zt+OLA6Fq9suIpIWD0OsnISlw==", "dev": true, "dependencies": { "locate-path": "^5.0.0", "path-exists": "^4.0.0" }, "engines": { "node": ">=8" } }, "node_modules/follow-redirects": { "version": "1.15.4", "resolved": "https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.15.4.tgz", "integrity": 
"sha512-Cr4D/5wlrb0z9dgERpUL3LrmPKVDsETIJhaCMeDfuFYcqa5bldGV6wBsAN6X/vxlXQtFBMrXdXxdL8CbDTGniw==", "dev": true, "funding": [ { "type": "individual", "url": "https://github.com/sponsors/RubenVerborgh" } ], "engines": { "node": ">=4.0" }, "peerDependenciesMeta": { "debug": { "optional": true } } }, "node_modules/forwarded": { "version": "0.2.0", "resolved": "https://registry.npmjs.org/forwarded/-/forwarded-0.2.0.tgz", "integrity": "sha512-buRG0fpBtRHSTCOASe6hD258tEubFoRLb4ZNA6NxMVHNw2gOcwHo9wyablzMzOA5z9xA9L1KNjk/Nt6MT9aYow==", "dev": true, "license": "MIT", "engines": { "node": ">= 0.6" } }, "node_modules/fresh": { "version": "0.5.2", "resolved": "https://registry.npmjs.org/fresh/-/fresh-0.5.2.tgz", "integrity": "sha512-zJ2mQYM18rEFOudeV4GShTGIQ7RbzA7ozbU9I/XBpm7kqgMywgmylMwXHxZJmkVoYkna9d2pVXVXPdYTP9ej8Q==", "dev": true, "license": "MIT", "engines": { "node": ">= 0.6" } }, "node_modules/fsevents": { "version": "2.3.3", "resolved": "https://registry.npmjs.org/fsevents/-/fsevents-2.3.3.tgz", "integrity": "sha512-5xoDfX+fL7faATnagmWPpbFtwh/R77WmMMqqHGS65C3vvB0YHrgF+B1YmZ3441tMj5n63k0212XNoJwzlhffQw==", "dev": true, "hasInstallScript": true, "license": "MIT", "optional": true, "os": [ "darwin" ], "engines": { "node": "^8.16.0 || ^10.6.0 || >=11.0.0" } }, "node_modules/function-bind": { "version": "1.1.2", "resolved": "https://registry.npmjs.org/function-bind/-/function-bind-1.1.2.tgz", "integrity": "sha512-7XHNxH7qX9xG5mIwxkhumTox/MIRNcOgDrxWsMt2pAr23WHp6MrRlN7FBSFpCpr+oVO0F744iUgR82nJMfG2SA==", "dev": true, "funding": { "url": "https://github.com/sponsors/ljharb" } }, "node_modules/get-intrinsic": { "version": "1.3.0", "resolved": "https://registry.npmjs.org/get-intrinsic/-/get-intrinsic-1.3.0.tgz", "integrity": "sha512-9fSjSaos/fRIVIp+xSJlE6lfwhES7LNtKaCBIamHsjr2na1BiABJPo0mOjjz8GJDURarmCPGqaiVg5mfjb98CQ==", "dev": true, "license": "MIT", "dependencies": { "call-bind-apply-helpers": "^1.0.2", "es-define-property": "^1.0.1", "es-errors": "^1.3.0", "es-object-atoms": "^1.1.1", "function-bind": "^1.1.2", "get-proto": "^1.0.1", "gopd": "^1.2.0", "has-symbols": "^1.1.0", "hasown": "^2.0.2", "math-intrinsics": "^1.1.0" }, "engines": { "node": ">= 0.4" }, "funding": { "url": "https://github.com/sponsors/ljharb" } }, "node_modules/get-proto": { "version": "1.0.1", "resolved": "https://registry.npmjs.org/get-proto/-/get-proto-1.0.1.tgz", "integrity": "sha512-sTSfBjoXBp89JvIKIefqw7U2CCebsc74kiY6awiGogKtoSGbgjYE/G/+l9sF3MWFPNc9IcoOC4ODfKHfxFmp0g==", "dev": true, "license": "MIT", "dependencies": { "dunder-proto": "^1.0.1", "es-object-atoms": "^1.0.0" }, "engines": { "node": ">= 0.4" } }, "node_modules/glob-parent": { "version": "6.0.2", "resolved": "https://registry.npmjs.org/glob-parent/-/glob-parent-6.0.2.tgz", "integrity": "sha512-XxwI8EOhVQgWp6iDL+3b0r86f4d6AX6zSU55HfB4ydCEuXLXc5FcYeOu+nnGftS4TEju/11rt4KJPTMgbfmv4A==", "dev": true, "dependencies": { "is-glob": "^4.0.3" }, "engines": { "node": ">=10.13.0" } }, "node_modules/glob-to-regexp": { "version": "0.4.1", "resolved": "https://registry.npmjs.org/glob-to-regexp/-/glob-to-regexp-0.4.1.tgz", "integrity": "sha512-lkX1HJXwyMcprw/5YUZc2s7DrpAiHB21/V+E1rHUrVNokkvB6bqMzT0VfV6/86ZNabt1k14YOIaT7nDvOX3Iiw==", "dev": true }, "node_modules/globby": { "version": "13.1.3", "resolved": "https://registry.npmjs.org/globby/-/globby-13.1.3.tgz", "integrity": "sha512-8krCNHXvlCgHDpegPzleMq07yMYTO2sXKASmZmquEYWEmCx6J5UTRbp5RwMJkTJGtcQ44YpiUYUiN0b9mzy8Bw==", "dev": true, "dependencies": { "dir-glob": "^3.0.1", "fast-glob": "^3.2.11", "ignore": "^5.2.0", "merge2": 
"^1.4.1", "slash": "^4.0.0" }, "engines": { "node": "^12.20.0 || ^14.13.1 || >=16.0.0" }, "funding": { "url": "https://github.com/sponsors/sindresorhus" } }, "node_modules/gopd": { "version": "1.2.0", "resolved": "https://registry.npmjs.org/gopd/-/gopd-1.2.0.tgz", "integrity": "sha512-ZUKRh6/kUFoAiTAtTYPZJ3hw9wNxx+BIBOijnlG9PnrJsCcSjs1wyyD6vJpaYtgnzDrKYRSqf3OO6Rfa93xsRg==", "dev": true, "license": "MIT", "engines": { "node": ">= 0.4" }, "funding": { "url": "https://github.com/sponsors/ljharb" } }, "node_modules/graceful-fs": { "version": "4.2.11", "resolved": "https://registry.npmjs.org/graceful-fs/-/graceful-fs-4.2.11.tgz", "integrity": "sha512-RbJ5/jmFcNNCcDV5o9eTnBLJ/HszWV0P73bc+Ff4nS/rJj+YaS6IGyiOL0VoBYX+l1Wrl3k63h/KrH+nhJ0XvQ==", "dev": true }, "node_modules/handle-thing": { "version": "2.0.1", "resolved": "https://registry.npmjs.org/handle-thing/-/handle-thing-2.0.1.tgz", "integrity": "sha512-9Qn4yBxelxoh2Ow62nP+Ka/kMnOXRi8BXnRaUwezLNhqelnN49xKz4F/dPP8OYLxLxq6JDtZb2i9XznUQbNPTg==", "dev": true }, "node_modules/has": { "version": "1.0.3", "resolved": "https://registry.npmjs.org/has/-/has-1.0.3.tgz", "integrity": "sha512-f2dvO0VU6Oej7RkWJGrehjbzMAjFp5/VKPp5tTpWIV4JHHZK1/BxbFRtf/siA2SWTe09caDmVtYYzWEIbBS4zw==", "dev": true, "dependencies": { "function-bind": "^1.1.1" }, "engines": { "node": ">= 0.4.0" } }, "node_modules/has-flag": { "version": "4.0.0", "resolved": "https://registry.npmjs.org/has-flag/-/has-flag-4.0.0.tgz", "integrity": "sha512-EykJT/Q1KjTWctppgIAgfSO0tKVuZUjhgMr17kqTumMl6Afv3EISleU7qZUzoXDFTAHTDC4NOoG/ZxU3EvlMPQ==", "dev": true, "engines": { "node": ">=8" } }, "node_modules/has-symbols": { "version": "1.1.0", "resolved": "https://registry.npmjs.org/has-symbols/-/has-symbols-1.1.0.tgz", "integrity": "sha512-1cDNdwJ2Jaohmb3sg4OmKaMBwuC48sYni5HUw2DvsC8LjGTLK9h+eb1X6RyuOHe4hT0ULCW68iomhjUoKUqlPQ==", "dev": true, "license": "MIT", "engines": { "node": ">= 0.4" }, "funding": { "url": "https://github.com/sponsors/ljharb" } }, "node_modules/hasown": { "version": "2.0.2", "resolved": "https://registry.npmjs.org/hasown/-/hasown-2.0.2.tgz", "integrity": "sha512-0hJU9SCPvmMzIBdZFqNPXWa6dqh7WdH0cII9y+CyS8rG3nL48Bclra9HmKhVVUHyPWNH5Y7xDwAB7bfgSjkUMQ==", "dev": true, "license": "MIT", "dependencies": { "function-bind": "^1.1.2" }, "engines": { "node": ">= 0.4" } }, "node_modules/hpack.js": { "version": "2.1.6", "resolved": "https://registry.npmjs.org/hpack.js/-/hpack.js-2.1.6.tgz", "integrity": "sha512-zJxVehUdMGIKsRaNt7apO2Gqp0BdqW5yaiGHXXmbpvxgBYVZnAql+BJb4RO5ad2MgpbZKn5G6nMnegrH1FcNYQ==", "dev": true, "dependencies": { "inherits": "^2.0.1", "obuf": "^1.0.0", "readable-stream": "^2.0.1", "wbuf": "^1.1.0" } }, "node_modules/http-deceiver": { "version": "1.2.7", "resolved": "https://registry.npmjs.org/http-deceiver/-/http-deceiver-1.2.7.tgz", "integrity": "sha512-LmpOGxTfbpgtGVxJrj5k7asXHCgNZp5nLfp+hWc8QQRqtb7fUy6kRY3BO1h9ddF6yIPYUARgxGOwB42DnxIaNw==", "dev": true }, "node_modules/http-errors": { "version": "2.0.0", "resolved": "https://registry.npmjs.org/http-errors/-/http-errors-2.0.0.tgz", "integrity": "sha512-FtwrG/euBzaEjYeRqOgly7G0qviiXoJWnvEH2Z1plBdXgbyjv34pHTSb9zoeHMyDy33+DWy5Wt9Wo+TURtOYSQ==", "dev": true, "license": "MIT", "dependencies": { "depd": "2.0.0", "inherits": "2.0.4", "setprototypeof": "1.2.0", "statuses": "2.0.1", "toidentifier": "1.0.1" }, "engines": { "node": ">= 0.8" } }, "node_modules/http-parser-js": { "version": "0.5.6", "resolved": "https://registry.npmjs.org/http-parser-js/-/http-parser-js-0.5.6.tgz", "integrity": 
"sha512-vDlkRPDJn93swjcjqMSaGSPABbIarsr1TLAui/gLDXzV5VsJNdXNzMYDyNBLQkjWQCJ1uizu8T2oDMhmGt0PRA==", "dev": true }, "node_modules/http-proxy": { "version": "1.18.1", "resolved": "https://registry.npmjs.org/http-proxy/-/http-proxy-1.18.1.tgz", "integrity": "sha512-7mz/721AbnJwIVbnaSv1Cz3Am0ZLT/UBwkC92VlxhXv/k/BBQfM2fXElQNC27BVGr0uwUpplYPQM9LnaBMR5NQ==", "dev": true, "dependencies": { "eventemitter3": "^4.0.0", "follow-redirects": "^1.0.0", "requires-port": "^1.0.0" }, "engines": { "node": ">=8.0.0" } }, "node_modules/http-proxy-middleware": { "version": "2.0.9", "resolved": "https://registry.npmjs.org/http-proxy-middleware/-/http-proxy-middleware-2.0.9.tgz", "integrity": "sha512-c1IyJYLYppU574+YI7R4QyX2ystMtVXZwIdzazUIPIJsHuWNd+mho2j+bKoHftndicGj9yh+xjd+l0yj7VeT1Q==", "dev": true, "license": "MIT", "dependencies": { "@types/http-proxy": "^1.17.8", "http-proxy": "^1.18.1", "is-glob": "^4.0.1", "is-plain-obj": "^3.0.0", "micromatch": "^4.0.2" }, "engines": { "node": ">=12.0.0" }, "peerDependencies": { "@types/express": "^4.17.13" }, "peerDependenciesMeta": { "@types/express": { "optional": true } } }, "node_modules/hyperdyperid": { "version": "1.2.0", "resolved": "https://registry.npmjs.org/hyperdyperid/-/hyperdyperid-1.2.0.tgz", "integrity": "sha512-Y93lCzHYgGWdrJ66yIktxiaGULYc6oGiABxhcO5AufBeOyoIdZF7bIfLaOrbM0iGIOXQQgxxRrFEnb+Y6w1n4A==", "dev": true, "license": "MIT", "engines": { "node": ">=10.18" } }, "node_modules/iconv-lite": { "version": "0.4.24", "resolved": "https://registry.npmjs.org/iconv-lite/-/iconv-lite-0.4.24.tgz", "integrity": "sha512-v3MXnZAcvnywkTUEZomIActle7RXXeedOR31wwl7VlyoXO4Qi9arvSenNQWne1TcRwhCL1HwLI21bEqdpj8/rA==", "dev": true, "license": "MIT", "dependencies": { "safer-buffer": ">= 2.1.2 < 3" }, "engines": { "node": ">=0.10.0" } }, "node_modules/ignore": { "version": "5.2.4", "resolved": "https://registry.npmjs.org/ignore/-/ignore-5.2.4.tgz", "integrity": "sha512-MAb38BcSbH0eHNBxn7ql2NH/kX33OkB3lZ1BNdh7ENeRChHTYsTvWrMubiIAMNS2llXEEgZ1MUOBtXChP3kaFQ==", "dev": true, "engines": { "node": ">= 4" } }, "node_modules/import-local": { "version": "3.1.0", "resolved": "https://registry.npmjs.org/import-local/-/import-local-3.1.0.tgz", "integrity": "sha512-ASB07uLtnDs1o6EHjKpX34BKYDSqnFerfTOJL2HvMqF70LnxpjkzDB8J44oT9pu4AMPkQwf8jl6szgvNd2tRIg==", "dev": true, "dependencies": { "pkg-dir": "^4.2.0", "resolve-cwd": "^3.0.0" }, "bin": { "import-local-fixture": "fixtures/cli.js" }, "engines": { "node": ">=8" }, "funding": { "url": "https://github.com/sponsors/sindresorhus" } }, "node_modules/inherits": { "version": "2.0.4", "resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.4.tgz", "integrity": "sha512-k/vGaX4/Yla3WzyMCvTQOXYeIHvqOKtnqBduzTHpzpQZzAskKMhZ2K+EnBiSM9zGSoIFeMpXKxa4dYeZIQqewQ==", "dev": true }, "node_modules/interpret": { "version": "3.1.1", "resolved": "https://registry.npmjs.org/interpret/-/interpret-3.1.1.tgz", "integrity": "sha512-6xwYfHbajpoF0xLW+iwLkhwgvLoZDfjYfoFNu8ftMoXINzwuymNLd9u/KmwtdT2GbR+/Cz66otEGEVVUHX9QLQ==", "dev": true, "engines": { "node": ">=10.13.0" } }, "node_modules/ipaddr.js": { "version": "2.2.0", "resolved": "https://registry.npmjs.org/ipaddr.js/-/ipaddr.js-2.2.0.tgz", "integrity": "sha512-Ag3wB2o37wslZS19hZqorUnrnzSkpOVy+IiiDEiTqNubEYpYuHWIf6K4psgN2ZWKExS4xhVCrRVfb/wfW8fWJA==", "dev": true, "license": "MIT", "engines": { "node": ">= 10" } }, "node_modules/is-binary-path": { "version": "2.1.0", "resolved": "https://registry.npmjs.org/is-binary-path/-/is-binary-path-2.1.0.tgz", "integrity": 
"sha512-ZMERYes6pDydyuGidse7OsHxtbI7WVeUEozgR/g7rd0xUimYNlvZRE/K2MgZTjWy725IfelLeVcEM97mmtRGXw==", "dev": true, "license": "MIT", "dependencies": { "binary-extensions": "^2.0.0" }, "engines": { "node": ">=8" } }, "node_modules/is-core-module": { "version": "2.11.0", "resolved": "https://registry.npmjs.org/is-core-module/-/is-core-module-2.11.0.tgz", "integrity": "sha512-RRjxlvLDkD1YJwDbroBHMb+cukurkDWNyHx7D3oNB5x9rb5ogcksMC5wHCadcXoo67gVr/+3GFySh3134zi6rw==", "dev": true, "dependencies": { "has": "^1.0.3" }, "funding": { "url": "https://github.com/sponsors/ljharb" } }, "node_modules/is-docker": { "version": "3.0.0", "resolved": "https://registry.npmjs.org/is-docker/-/is-docker-3.0.0.tgz", "integrity": "sha512-eljcgEDlEns/7AXFosB5K/2nCM4P7FQPkGc/DWLy5rmFEWvZayGrik1d9/QIY5nJ4f9YsVvBkA6kJpHn9rISdQ==", "dev": true, "license": "MIT", "bin": { "is-docker": "cli.js" }, "engines": { "node": "^12.20.0 || ^14.13.1 || >=16.0.0" }, "funding": { "url": "https://github.com/sponsors/sindresorhus" } }, "node_modules/is-extglob": { "version": "2.1.1", "resolved": "https://registry.npmjs.org/is-extglob/-/is-extglob-2.1.1.tgz", "integrity": "sha512-SbKbANkN603Vi4jEZv49LeVJMn4yGwsbzZworEoyEiutsN3nJYdbO36zfhGJ6QEDpOZIFkDtnq5JRxmvl3jsoQ==", "dev": true, "engines": { "node": ">=0.10.0" } }, "node_modules/is-glob": { "version": "4.0.3", "resolved": "https://registry.npmjs.org/is-glob/-/is-glob-4.0.3.tgz", "integrity": "sha512-xelSayHH36ZgE7ZWhli7pW34hNbNl8Ojv5KVmkJD4hBdD3th8Tfk9vYasLM+mXWOZhFkgZfxhLSnrwRr4elSSg==", "dev": true, "dependencies": { "is-extglob": "^2.1.1" }, "engines": { "node": ">=0.10.0" } }, "node_modules/is-inside-container": { "version": "1.0.0", "resolved": "https://registry.npmjs.org/is-inside-container/-/is-inside-container-1.0.0.tgz", "integrity": "sha512-KIYLCCJghfHZxqjYBE7rEy0OBuTd5xCHS7tHVgvCLkx7StIoaxwNW3hCALgEUjFfeRk+MG/Qxmp/vtETEF3tRA==", "dev": true, "license": "MIT", "dependencies": { "is-docker": "^3.0.0" }, "bin": { "is-inside-container": "cli.js" }, "engines": { "node": ">=14.16" }, "funding": { "url": "https://github.com/sponsors/sindresorhus" } }, "node_modules/is-network-error": { "version": "1.1.0", "resolved": "https://registry.npmjs.org/is-network-error/-/is-network-error-1.1.0.tgz", "integrity": "sha512-tUdRRAnhT+OtCZR/LxZelH/C7QtjtFrTu5tXCA8pl55eTUElUHT+GPYV8MBMBvea/j+NxQqVt3LbWMRir7Gx9g==", "dev": true, "license": "MIT", "engines": { "node": ">=16" }, "funding": { "url": "https://github.com/sponsors/sindresorhus" } }, "node_modules/is-number": { "version": "7.0.0", "resolved": "https://registry.npmjs.org/is-number/-/is-number-7.0.0.tgz", "integrity": "sha512-41Cifkg6e8TylSpdtTpeLVMqvSBEVzTttHvERD741+pnZ8ANv0004MRL43QKPDlK9cGvNp6NZWZUBlbGXYxxng==", "dev": true, "engines": { "node": ">=0.12.0" } }, "node_modules/is-plain-obj": { "version": "3.0.0", "resolved": "https://registry.npmjs.org/is-plain-obj/-/is-plain-obj-3.0.0.tgz", "integrity": "sha512-gwsOE28k+23GP1B6vFl1oVh/WOzmawBrKwo5Ev6wMKzPkaXaCDIQKzLnvsA42DRlbVTWorkgTKIviAKCWkfUwA==", "dev": true, "engines": { "node": ">=10" }, "funding": { "url": "https://github.com/sponsors/sindresorhus" } }, "node_modules/is-plain-object": { "version": "2.0.4", "resolved": "https://registry.npmjs.org/is-plain-object/-/is-plain-object-2.0.4.tgz", "integrity": "sha512-h5PpgXkWitc38BBMYawTYMWJHFZJVnBquFE57xFpjB8pJFiF6gZ+bU+WyI/yqXiFR5mdLsgYNaPe8uao6Uv9Og==", "dev": true, "dependencies": { "isobject": "^3.0.1" }, "engines": { "node": ">=0.10.0" } }, "node_modules/is-wsl": { "version": "3.1.0", "resolved": 
"https://registry.npmjs.org/is-wsl/-/is-wsl-3.1.0.tgz", "integrity": "sha512-UcVfVfaK4Sc4m7X3dUSoHoozQGBEFeDC+zVo06t98xe8CzHSZZBekNXH+tu0NalHolcJ/QAGqS46Hef7QXBIMw==", "dev": true, "license": "MIT", "dependencies": { "is-inside-container": "^1.0.0" }, "engines": { "node": ">=16" }, "funding": { "url": "https://github.com/sponsors/sindresorhus" } }, "node_modules/isarray": { "version": "1.0.0", "resolved": "https://registry.npmjs.org/isarray/-/isarray-1.0.0.tgz", "integrity": "sha512-VLghIWNM6ELQzo7zwmcg0NmTVyWKYjvIeM83yjp0wRDTmUnrM678fQbcKBo6n2CJEF0szoG//ytg+TKla89ALQ==", "dev": true }, "node_modules/isexe": { "version": "2.0.0", "resolved": "https://registry.npmjs.org/isexe/-/isexe-2.0.0.tgz", "integrity": "sha512-RHxMLp9lnKHGHRng9QFhRCMbYAcVpn69smSGcq3f36xjgVVWThj4qqLbTLlq7Ssj8B+fIQ1EuCEGI2lKsyQeIw==", "dev": true }, "node_modules/isobject": { "version": "3.0.1", "resolved": "https://registry.npmjs.org/isobject/-/isobject-3.0.1.tgz", "integrity": "sha512-WhB9zCku7EGTj/HQQRz5aUQEUeoQZH2bWcltRErOpymJ4boYE6wL9Tbr23krRPSZ+C5zqNSrSw+Cc7sZZ4b7vg==", "dev": true, "engines": { "node": ">=0.10.0" } }, "node_modules/jest-worker": { "version": "27.5.1", "resolved": "https://registry.npmjs.org/jest-worker/-/jest-worker-27.5.1.tgz", "integrity": "sha512-7vuh85V5cdDofPyxn58nrPjBktZo0u9x1g8WtjQol+jZDaE+fhN+cIvTj11GndBnMnyfrUOG1sZQxCdjKh+DKg==", "dev": true, "dependencies": { "@types/node": "*", "merge-stream": "^2.0.0", "supports-color": "^8.0.0" }, "engines": { "node": ">= 10.13.0" } }, "node_modules/json-parse-even-better-errors": { "version": "2.3.1", "resolved": "https://registry.npmjs.org/json-parse-even-better-errors/-/json-parse-even-better-errors-2.3.1.tgz", "integrity": "sha512-xyFwyhro/JEof6Ghe2iz2NcXoj2sloNsWr/XsERDK/oiPCfaNhl5ONfp+jQdAZRQQ0IJWNzH9zIZF7li91kh2w==", "dev": true }, "node_modules/json-schema-traverse": { "version": "1.0.0", "resolved": "https://registry.npmjs.org/json-schema-traverse/-/json-schema-traverse-1.0.0.tgz", "integrity": "sha512-NM8/P9n3XjXhIZn1lLhkFaACTOURQXjWhV4BA/RnOv8xvgqtqpAX9IO4mRQxSx1Rlo4tqzeqb0sOlruaOy3dug==", "dev": true }, "node_modules/kind-of": { "version": "6.0.3", "resolved": "https://registry.npmjs.org/kind-of/-/kind-of-6.0.3.tgz", "integrity": "sha512-dcS1ul+9tmeD95T+x28/ehLgd9mENa3LsvDTtzm3vyBEO7RPptvAD+t44WVXaUjTBRcrpFeFlC8WCruUR456hw==", "dev": true, "engines": { "node": ">=0.10.0" } }, "node_modules/launch-editor": { "version": "2.10.0", "resolved": "https://registry.npmjs.org/launch-editor/-/launch-editor-2.10.0.tgz", "integrity": "sha512-D7dBRJo/qcGX9xlvt/6wUYzQxjh5G1RvZPgPv8vi4KRU99DVQL/oW7tnVOCCTm2HGeo3C5HvGE5Yrh6UBoZ0vA==", "dev": true, "license": "MIT", "dependencies": { "picocolors": "^1.0.0", "shell-quote": "^1.8.1" } }, "node_modules/loader-runner": { "version": "4.3.0", "resolved": "https://registry.npmjs.org/loader-runner/-/loader-runner-4.3.0.tgz", "integrity": "sha512-3R/1M+yS3j5ou80Me59j7F9IMs4PXs3VqRrm0TU3AbKPxlmpoY1TNscJV/oGJXo8qCatFGTfDbY6W6ipGOYXfg==", "dev": true, "engines": { "node": ">=6.11.5" } }, "node_modules/locate-path": { "version": "5.0.0", "resolved": "https://registry.npmjs.org/locate-path/-/locate-path-5.0.0.tgz", "integrity": "sha512-t7hw9pI+WvuwNJXwk5zVHpyhIqzg2qTlklJOf0mVxGSbe3Fp2VieZcduNYjaLDoy6p9uGpQEGWG87WpMKlNq8g==", "dev": true, "dependencies": { "p-locate": "^4.1.0" }, "engines": { "node": ">=8" } }, "node_modules/math-intrinsics": { "version": "1.1.0", "resolved": "https://registry.npmjs.org/math-intrinsics/-/math-intrinsics-1.1.0.tgz", "integrity": 
"sha512-/IXtbwEk5HTPyEwyKX6hGkYXxM9nbj64B+ilVJnC/R6B0pH5G4V3b0pVbL7DBj4tkhBAppbQUlf6F6Xl9LHu1g==", "dev": true, "license": "MIT", "engines": { "node": ">= 0.4" } }, "node_modules/media-typer": { "version": "0.3.0", "resolved": "https://registry.npmjs.org/media-typer/-/media-typer-0.3.0.tgz", "integrity": "sha512-dq+qelQ9akHpcOl/gUVRTxVIOkAJ1wR3QAvb4RsVjS8oVoFjDGTc679wJYmUmknUF5HwMLOgb5O+a3KxfWapPQ==", "dev": true, "license": "MIT", "engines": { "node": ">= 0.6" } }, "node_modules/memfs": { "version": "4.17.2", "resolved": "https://registry.npmjs.org/memfs/-/memfs-4.17.2.tgz", "integrity": "sha512-NgYhCOWgovOXSzvYgUW0LQ7Qy72rWQMGGFJDoWg4G30RHd3z77VbYdtJ4fembJXBy8pMIUA31XNAupobOQlwdg==", "dev": true, "license": "Apache-2.0", "dependencies": { "@jsonjoy.com/json-pack": "^1.0.3", "@jsonjoy.com/util": "^1.3.0", "tree-dump": "^1.0.1", "tslib": "^2.0.0" }, "engines": { "node": ">= 4.0.0" }, "funding": { "type": "github", "url": "https://github.com/sponsors/streamich" } }, "node_modules/merge-descriptors": { "version": "1.0.3", "resolved": "https://registry.npmjs.org/merge-descriptors/-/merge-descriptors-1.0.3.tgz", "integrity": "sha512-gaNvAS7TZ897/rVaZ0nMtAyxNyi/pdbjbAwUpFQpN70GqnVfOiXpeUUMKRBmzXaSQ8DdTX4/0ms62r2K+hE6mQ==", "dev": true, "license": "MIT", "funding": { "url": "https://github.com/sponsors/sindresorhus" } }, "node_modules/merge-stream": { "version": "2.0.0", "resolved": "https://registry.npmjs.org/merge-stream/-/merge-stream-2.0.0.tgz", "integrity": "sha512-abv/qOcuPfk3URPfDzmZU1LKmuw8kT+0nIHvKrKgFrwifol/doWcdA4ZqsWQ8ENrFKkd67Mfpo/LovbIUsbt3w==", "dev": true }, "node_modules/merge2": { "version": "1.4.1", "resolved": "https://registry.npmjs.org/merge2/-/merge2-1.4.1.tgz", "integrity": "sha512-8q7VEgMJW4J8tcfVPy8g09NcQwZdbwFEqhe/WZkoIzjn/3TGDwtOCYtXGxA3O8tPzpczCCDgv+P2P5y00ZJOOg==", "dev": true, "engines": { "node": ">= 8" } }, "node_modules/methods": { "version": "1.1.2", "resolved": "https://registry.npmjs.org/methods/-/methods-1.1.2.tgz", "integrity": "sha512-iclAHeNqNm68zFtnZ0e+1L2yUIdvzNoauKU4WBA3VvH/vPFieF7qfRlwUZU+DA9P9bPXIS90ulxoUoCH23sV2w==", "dev": true, "license": "MIT", "engines": { "node": ">= 0.6" } }, "node_modules/micromatch": { "version": "4.0.5", "resolved": "https://registry.npmjs.org/micromatch/-/micromatch-4.0.5.tgz", "integrity": "sha512-DMy+ERcEW2q8Z2Po+WNXuw3c5YaUSFjAO5GsJqfEl7UjvtIuFKO6ZrKvcItdy98dwFI2N1tg3zNIdKaQT+aNdA==", "dev": true, "dependencies": { "braces": "^3.0.2", "picomatch": "^2.3.1" }, "engines": { "node": ">=8.6" } }, "node_modules/mime": { "version": "1.6.0", "resolved": "https://registry.npmjs.org/mime/-/mime-1.6.0.tgz", "integrity": "sha512-x0Vn8spI+wuJ1O6S7gnbaQg8Pxh4NNHb7KSINmEWKiPE4RKOplvijn+NkmYmmRgP68mc70j2EbeTFRsrswaQeg==", "dev": true, "license": "MIT", "bin": { "mime": "cli.js" }, "engines": { "node": ">=4" } }, "node_modules/mime-db": { "version": "1.52.0", "resolved": "https://registry.npmjs.org/mime-db/-/mime-db-1.52.0.tgz", "integrity": "sha512-sPU4uV7dYlvtWJxwwxHD0PuihVNiE7TyAbQ5SWxDCB9mUYvOgroQOwYQQOKPJ8CIbE+1ETVlOoK1UC2nU3gYvg==", "dev": true, "engines": { "node": ">= 0.6" } }, "node_modules/mime-types": { "version": "2.1.35", "resolved": "https://registry.npmjs.org/mime-types/-/mime-types-2.1.35.tgz", "integrity": "sha512-ZDY+bPm5zTTF+YpCrAU9nK0UgICYPT0QtT1NZWFv4s++TNkcgVaT0g6+4R2uI4MjQjzysHB1zxuWL50hzaeXiw==", "dev": true, "dependencies": { "mime-db": "1.52.0" }, "engines": { "node": ">= 0.6" } }, "node_modules/minimalistic-assert": { "version": "1.0.1", "resolved": 
"https://registry.npmjs.org/minimalistic-assert/-/minimalistic-assert-1.0.1.tgz", "integrity": "sha512-UtJcAD4yEaGtjPezWuO9wC4nwUnVH/8/Im3yEHQP4b67cXlD/Qr9hdITCU1xDbSEXg2XKNaP8jsReV7vQd00/A==", "dev": true }, "node_modules/ms": { "version": "2.0.0", "resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz", "integrity": "sha512-Tpp60P6IUJDTuOq/5Z8cdskzJujfwqfOTkrwIwj7IRISpnkJnT6SyJ4PCPnGMoFjC9ddhal5KVIYtAt97ix05A==", "dev": true }, "node_modules/multicast-dns": { "version": "7.2.5", "resolved": "https://registry.npmjs.org/multicast-dns/-/multicast-dns-7.2.5.tgz", "integrity": "sha512-2eznPJP8z2BFLX50tf0LuODrpINqP1RVIm/CObbTcBRITQgmC/TjcREF1NeTBzIcR5XO/ukWo+YHOjBbFwIupg==", "dev": true, "license": "MIT", "dependencies": { "dns-packet": "^5.2.2", "thunky": "^1.0.2" }, "bin": { "multicast-dns": "cli.js" } }, "node_modules/negotiator": { "version": "0.6.3", "resolved": "https://registry.npmjs.org/negotiator/-/negotiator-0.6.3.tgz", "integrity": "sha512-+EUsqGPLsM+j/zdChZjsnX51g4XrHFOIXwfnCVPGlQk/k5giakcKsuxCObBRu6DSm9opw/O6slWbJdghQM4bBg==", "dev": true, "engines": { "node": ">= 0.6" } }, "node_modules/neo-async": { "version": "2.6.2", "resolved": "https://registry.npmjs.org/neo-async/-/neo-async-2.6.2.tgz", "integrity": "sha512-Yd3UES5mWCSqR+qNT93S3UoYUkqAZ9lLg8a7g9rimsWmYGK8cVToA4/sF3RrshdyV3sAGMXVUmpMYOw+dLpOuw==", "dev": true }, "node_modules/node-forge": { "version": "1.3.1", "resolved": "https://registry.npmjs.org/node-forge/-/node-forge-1.3.1.tgz", "integrity": "sha512-dPEtOeMvF9VMcYV/1Wb8CPoVAXtp6MKMlcbAt4ddqmGqUJ6fQZFXkNZNkNlfevtNkGtaSoXf/vNNNSvgrdXwtA==", "dev": true, "license": "(BSD-3-Clause OR GPL-2.0)", "engines": { "node": ">= 6.13.0" } }, "node_modules/node-releases": { "version": "2.0.18", "resolved": "https://registry.npmjs.org/node-releases/-/node-releases-2.0.18.tgz", "integrity": "sha512-d9VeXT4SJ7ZeOqGX6R5EM022wpL+eWPooLI+5UpWn2jCT1aosUQEhQP214x33Wkwx3JQMvIm+tIoVOdodFS40g==", "dev": true }, "node_modules/normalize-path": { "version": "3.0.0", "resolved": "https://registry.npmjs.org/normalize-path/-/normalize-path-3.0.0.tgz", "integrity": "sha512-6eZs5Ls3WtCisHWp9S2GUy8dqkpGi4BVSz3GaqiE6ezub0512ESztXUwUB6C6IKbQkY2Pnb/mD4WYojCRwcwLA==", "dev": true, "engines": { "node": ">=0.10.0" } }, "node_modules/object-inspect": { "version": "1.13.4", "resolved": "https://registry.npmjs.org/object-inspect/-/object-inspect-1.13.4.tgz", "integrity": "sha512-W67iLl4J2EXEGTbfeHCffrjDfitvLANg0UlX3wFUUSTx92KXRFegMHUVgSqE+wvhAbi4WqjGg9czysTV2Epbew==", "dev": true, "license": "MIT", "engines": { "node": ">= 0.4" }, "funding": { "url": "https://github.com/sponsors/ljharb" } }, "node_modules/obuf": { "version": "1.1.2", "resolved": "https://registry.npmjs.org/obuf/-/obuf-1.1.2.tgz", "integrity": "sha512-PX1wu0AmAdPqOL1mWhqmlOd8kOIZQwGZw6rh7uby9fTc5lhaOWFLX3I6R1hrF9k3zUY40e6igsLGkDXK92LJNg==", "dev": true }, "node_modules/on-finished": { "version": "2.4.1", "resolved": "https://registry.npmjs.org/on-finished/-/on-finished-2.4.1.tgz", "integrity": "sha512-oVlzkg3ENAhCk2zdv7IJwd/QUD4z2RxRwpkcGY8psCVcCYZNq4wYnVWALHM+brtuJjePWiYF/ClmuDr8Ch5+kg==", "dev": true, "license": "MIT", "dependencies": { "ee-first": "1.1.1" }, "engines": { "node": ">= 0.8" } }, "node_modules/on-headers": { "version": "1.1.0", "resolved": "https://registry.npmjs.org/on-headers/-/on-headers-1.1.0.tgz", "integrity": "sha512-737ZY3yNnXy37FHkQxPzt4UZ2UWPWiCZWLvFZ4fu5cueciegX0zGPnrlY6bwRg4FdQOe9YU8MkmJwGhoMybl8A==", "dev": true, "engines": { "node": ">= 0.8" } }, "node_modules/open": { "version": "10.1.2", "resolved": 
"https://registry.npmjs.org/open/-/open-10.1.2.tgz", "integrity": "sha512-cxN6aIDPz6rm8hbebcP7vrQNhvRcveZoJU72Y7vskh4oIm+BZwBECnx5nTmrlres1Qapvx27Qo1Auukpf8PKXw==", "dev": true, "license": "MIT", "dependencies": { "default-browser": "^5.2.1", "define-lazy-prop": "^3.0.0", "is-inside-container": "^1.0.0", "is-wsl": "^3.1.0" }, "engines": { "node": ">=18" }, "funding": { "url": "https://github.com/sponsors/sindresorhus" } }, "node_modules/p-limit": { "version": "2.3.0", "resolved": "https://registry.npmjs.org/p-limit/-/p-limit-2.3.0.tgz", "integrity": "sha512-//88mFWSJx8lxCzwdAABTJL2MyWB12+eIY7MDL2SqLmAkeKU9qxRvWuSyTjm3FUmpBEMuFfckAIqEaVGUDxb6w==", "dev": true, "dependencies": { "p-try": "^2.0.0" }, "engines": { "node": ">=6" }, "funding": { "url": "https://github.com/sponsors/sindresorhus" } }, "node_modules/p-locate": { "version": "4.1.0", "resolved": "https://registry.npmjs.org/p-locate/-/p-locate-4.1.0.tgz", "integrity": "sha512-R79ZZ/0wAxKGu3oYMlz8jy/kbhsNrS7SKZ7PxEHBgJ5+F2mtFW2fK2cOtBh1cHYkQsbzFV7I+EoRKe6Yt0oK7A==", "dev": true, "dependencies": { "p-limit": "^2.2.0" }, "engines": { "node": ">=8" } }, "node_modules/p-retry": { "version": "6.2.1", "resolved": "https://registry.npmjs.org/p-retry/-/p-retry-6.2.1.tgz", "integrity": "sha512-hEt02O4hUct5wtwg4H4KcWgDdm+l1bOaEy/hWzd8xtXB9BqxTWBBhb+2ImAtH4Cv4rPjV76xN3Zumqk3k3AhhQ==", "dev": true, "license": "MIT", "dependencies": { "@types/retry": "0.12.2", "is-network-error": "^1.0.0", "retry": "^0.13.1" }, "engines": { "node": ">=16.17" }, "funding": { "url": "https://github.com/sponsors/sindresorhus" } }, "node_modules/p-try": { "version": "2.2.0", "resolved": "https://registry.npmjs.org/p-try/-/p-try-2.2.0.tgz", "integrity": "sha512-R4nPAVTAU0B9D35/Gk3uJf/7XYbQcyohSKdvAxIRSNghFl4e71hVoGnBNQz9cWaXxO2I10KTC+3jMdvvoKw6dQ==", "dev": true, "engines": { "node": ">=6" } }, "node_modules/parseurl": { "version": "1.3.3", "resolved": "https://registry.npmjs.org/parseurl/-/parseurl-1.3.3.tgz", "integrity": "sha512-CiyeOxFT/JZyN5m0z9PfXw4SCBJ6Sygz1Dpl0wqjlhDEGGBP1GnsUVEL0p63hoG1fcj3fHynXi9NYO4nWOL+qQ==", "dev": true, "engines": { "node": ">= 0.8" } }, "node_modules/path-exists": { "version": "4.0.0", "resolved": "https://registry.npmjs.org/path-exists/-/path-exists-4.0.0.tgz", "integrity": "sha512-ak9Qy5Q7jYb2Wwcey5Fpvg2KoAc/ZIhLSLOSBmRmygPsGwkVVt0fZa0qrtMz+m6tJTAHfZQ8FnmB4MG4LWy7/w==", "dev": true, "engines": { "node": ">=8" } }, "node_modules/path-key": { "version": "3.1.1", "resolved": "https://registry.npmjs.org/path-key/-/path-key-3.1.1.tgz", "integrity": "sha512-ojmeN0qd+y0jszEtoY48r0Peq5dwMEkIlCOu6Q5f41lfkswXuKtYrhgoTpLnyIcHm24Uhqx+5Tqm2InSwLhE6Q==", "dev": true, "engines": { "node": ">=8" } }, "node_modules/path-parse": { "version": "1.0.7", "resolved": "https://registry.npmjs.org/path-parse/-/path-parse-1.0.7.tgz", "integrity": "sha512-LDJzPVEEEPR+y48z93A0Ed0yXb8pAByGWo/k5YYdYgpY2/2EsOsksJrq7lOHxryrVOn1ejG6oAp8ahvOIQD8sw==", "dev": true }, "node_modules/path-to-regexp": { "version": "0.1.12", "resolved": "https://registry.npmjs.org/path-to-regexp/-/path-to-regexp-0.1.12.tgz", "integrity": "sha512-RA1GjUVMnvYFxuqovrEqZoxxW5NUZqbwKtYz/Tt7nXerk0LbLblQmrsgdeOxV5SFHf0UDggjS/bSeOZwt1pmEQ==", "dev": true, "license": "MIT" }, "node_modules/path-type": { "version": "4.0.0", "resolved": "https://registry.npmjs.org/path-type/-/path-type-4.0.0.tgz", "integrity": "sha512-gDKb8aZMDeD/tZWs9P6+q0J9Mwkdl6xMV8TjnGP3qJVJ06bdMgkbBlLU8IdfOsIsFz2BW1rNVT3XuNEl8zPAvw==", "dev": true, "engines": { "node": ">=8" } }, "node_modules/picocolors": { "version": "1.1.0", 
"resolved": "https://registry.npmjs.org/picocolors/-/picocolors-1.1.0.tgz", "integrity": "sha512-TQ92mBOW0l3LeMeyLV6mzy/kWr8lkd/hp3mTg7wYK7zJhuBStmGMBG0BdeDZS/dZx1IukaX6Bk11zcln25o1Aw==", "dev": true }, "node_modules/picomatch": { "version": "2.3.1", "resolved": "https://registry.npmjs.org/picomatch/-/picomatch-2.3.1.tgz", "integrity": "sha512-JU3teHTNjmE2VCGFzuY8EXzCDVwEqB2a8fsIvwaStHhAWJEeVd1o1QD80CU6+ZdEXXSLbSsuLwJjkCBWqRQUVA==", "dev": true, "engines": { "node": ">=8.6" }, "funding": { "url": "https://github.com/sponsors/jonschlinkert" } }, "node_modules/pkg-dir": { "version": "4.2.0", "resolved": "https://registry.npmjs.org/pkg-dir/-/pkg-dir-4.2.0.tgz", "integrity": "sha512-HRDzbaKjC+AOWVXxAU/x54COGeIv9eb+6CkDSQoNTt4XyWoIJvuPsXizxu/Fr23EiekbtZwmh1IcIG/l/a10GQ==", "dev": true, "dependencies": { "find-up": "^4.0.0" }, "engines": { "node": ">=8" } }, "node_modules/process-nextick-args": { "version": "2.0.1", "resolved": "https://registry.npmjs.org/process-nextick-args/-/process-nextick-args-2.0.1.tgz", "integrity": "sha512-3ouUOpQhtgrbOa17J7+uxOTpITYWaGP7/AhoR3+A+/1e9skrzelGi/dXzEYyvbxubEF6Wn2ypscTKiKJFFn1ag==", "dev": true }, "node_modules/proxy-addr": { "version": "2.0.7", "resolved": "https://registry.npmjs.org/proxy-addr/-/proxy-addr-2.0.7.tgz", "integrity": "sha512-llQsMLSUDUPT44jdrU/O37qlnifitDP+ZwrmmZcoSKyLKvtZxpyV0n2/bD/N4tBAAZ/gJEdZU7KMraoK1+XYAg==", "dev": true, "license": "MIT", "dependencies": { "forwarded": "0.2.0", "ipaddr.js": "1.9.1" }, "engines": { "node": ">= 0.10" } }, "node_modules/proxy-addr/node_modules/ipaddr.js": { "version": "1.9.1", "resolved": "https://registry.npmjs.org/ipaddr.js/-/ipaddr.js-1.9.1.tgz", "integrity": "sha512-0KI/607xoxSToH7GjN1FfSbLoU0+btTicjsQSWQlh/hZykN8KpmMf7uYwPW3R+akZ6R/w18ZlXSHBYXiYUPO3g==", "dev": true, "license": "MIT", "engines": { "node": ">= 0.10" } }, "node_modules/punycode": { "version": "2.1.1", "resolved": "https://registry.npmjs.org/punycode/-/punycode-2.1.1.tgz", "integrity": "sha512-XRsRjdf+j5ml+y/6GKHPZbrF/8p2Yga0JPtdqTIY2Xe5ohJPD9saDJJLPvp9+NSBprVvevdXZybnj2cv8OEd0A==", "dev": true, "engines": { "node": ">=6" } }, "node_modules/qs": { "version": "6.13.0", "resolved": "https://registry.npmjs.org/qs/-/qs-6.13.0.tgz", "integrity": "sha512-+38qI9SOr8tfZ4QmJNplMUxqjbe7LKvvZgWdExBOmd+egZTtjLB67Gu0HRX3u/XOq7UU2Nx6nsjvS16Z9uwfpg==", "dev": true, "license": "BSD-3-Clause", "dependencies": { "side-channel": "^1.0.6" }, "engines": { "node": ">=0.6" }, "funding": { "url": "https://github.com/sponsors/ljharb" } }, "node_modules/queue-microtask": { "version": "1.2.3", "resolved": "https://registry.npmjs.org/queue-microtask/-/queue-microtask-1.2.3.tgz", "integrity": "sha512-NuaNSa6flKT5JaSYQzJok04JzTL1CA6aGhv5rfLW3PgqA+M2ChpZQnAC8h8i4ZFkBS8X5RqkDBHA7r4hej3K9A==", "dev": true, "funding": [ { "type": "github", "url": "https://github.com/sponsors/feross" }, { "type": "patreon", "url": "https://www.patreon.com/feross" }, { "type": "consulting", "url": "https://feross.org/support" } ] }, "node_modules/randombytes": { "version": "2.1.0", "resolved": "https://registry.npmjs.org/randombytes/-/randombytes-2.1.0.tgz", "integrity": "sha512-vYl3iOX+4CKUWuxGi9Ukhie6fsqXqS9FE2Zaic4tNFD2N2QQaXOMFbuKK4QmDHC0JO6B1Zp41J0LpT0oR68amQ==", "dev": true, "dependencies": { "safe-buffer": "^5.1.0" } }, "node_modules/range-parser": { "version": "1.2.1", "resolved": "https://registry.npmjs.org/range-parser/-/range-parser-1.2.1.tgz", "integrity": "sha512-Hrgsx+orqoygnmhFbKaHE6c296J+HTAQXoxEF6gNupROmmGJRoyzfG3ccAveqCBrwr/2yxQ5BVd/GTl5agOwSg==", "dev": true, "license": 
"MIT", "engines": { "node": ">= 0.6" } }, "node_modules/raw-body": { "version": "2.5.2", "resolved": "https://registry.npmjs.org/raw-body/-/raw-body-2.5.2.tgz", "integrity": "sha512-8zGqypfENjCIqGhgXToC8aB2r7YrBX+AQAfIPs/Mlk+BtPTztOvTS01NRW/3Eh60J+a48lt8qsCzirQ6loCVfA==", "dev": true, "license": "MIT", "dependencies": { "bytes": "3.1.2", "http-errors": "2.0.0", "iconv-lite": "0.4.24", "unpipe": "1.0.0" }, "engines": { "node": ">= 0.8" } }, "node_modules/readable-stream": { "version": "2.3.7", "resolved": "https://registry.npmjs.org/readable-stream/-/readable-stream-2.3.7.tgz", "integrity": "sha512-Ebho8K4jIbHAxnuxi7o42OrZgF/ZTNcsZj6nRKyUmkhLFq8CHItp/fy6hQZuZmP/n3yZ9VBUbp4zz/mX8hmYPw==", "dev": true, "dependencies": { "core-util-is": "~1.0.0", "inherits": "~2.0.3", "isarray": "~1.0.0", "process-nextick-args": "~2.0.0", "safe-buffer": "~5.1.1", "string_decoder": "~1.1.1", "util-deprecate": "~1.0.1" } }, "node_modules/readdirp": { "version": "3.6.0", "resolved": "https://registry.npmjs.org/readdirp/-/readdirp-3.6.0.tgz", "integrity": "sha512-hOS089on8RduqdbhvQ5Z37A0ESjsqz6qnRcffsMU3495FuTdqSm+7bhJ29JvIOsBDEEnan5DPu9t3To9VRlMzA==", "dev": true, "license": "MIT", "dependencies": { "picomatch": "^2.2.1" }, "engines": { "node": ">=8.10.0" } }, "node_modules/rechoir": { "version": "0.8.0", "resolved": "https://registry.npmjs.org/rechoir/-/rechoir-0.8.0.tgz", "integrity": "sha512-/vxpCXddiX8NGfGO/mTafwjq4aFa/71pvamip0++IQk3zG8cbCj0fifNPrjjF1XMXUne91jL9OoxmdykoEtifQ==", "dev": true, "dependencies": { "resolve": "^1.20.0" }, "engines": { "node": ">= 10.13.0" } }, "node_modules/require-from-string": { "version": "2.0.2", "resolved": "https://registry.npmjs.org/require-from-string/-/require-from-string-2.0.2.tgz", "integrity": "sha512-Xf0nWe6RseziFMu+Ap9biiUbmplq6S9/p+7w7YXP/JBHhrUDDUhwa+vANyubuqfZWTveU//DYVGsDG7RKL/vEw==", "dev": true, "engines": { "node": ">=0.10.0" } }, "node_modules/requires-port": { "version": "1.0.0", "resolved": "https://registry.npmjs.org/requires-port/-/requires-port-1.0.0.tgz", "integrity": "sha512-KigOCHcocU3XODJxsu8i/j8T9tzT4adHiecwORRQ0ZZFcp7ahwXuRU1m+yuO90C5ZUyGeGfocHDI14M3L3yDAQ==", "dev": true }, "node_modules/resolve": { "version": "1.22.1", "resolved": "https://registry.npmjs.org/resolve/-/resolve-1.22.1.tgz", "integrity": "sha512-nBpuuYuY5jFsli/JIs1oldw6fOQCBioohqWZg/2hiaOybXOft4lonv85uDOKXdf8rhyK159cxU5cDcK/NKk8zw==", "dev": true, "dependencies": { "is-core-module": "^2.9.0", "path-parse": "^1.0.7", "supports-preserve-symlinks-flag": "^1.0.0" }, "bin": { "resolve": "bin/resolve" }, "funding": { "url": "https://github.com/sponsors/ljharb" } }, "node_modules/resolve-cwd": { "version": "3.0.0", "resolved": "https://registry.npmjs.org/resolve-cwd/-/resolve-cwd-3.0.0.tgz", "integrity": "sha512-OrZaX2Mb+rJCpH/6CpSqt9xFVpN++x01XnN2ie9g6P5/3xelLAkXWVADpdz1IHD/KFfEXyE6V0U01OQ3UO2rEg==", "dev": true, "dependencies": { "resolve-from": "^5.0.0" }, "engines": { "node": ">=8" } }, "node_modules/resolve-from": { "version": "5.0.0", "resolved": "https://registry.npmjs.org/resolve-from/-/resolve-from-5.0.0.tgz", "integrity": "sha512-qYg9KP24dD5qka9J47d0aVky0N+b4fTU89LN9iDnjB5waksiC49rvMB0PrUJQGoTmH50XPiqOvAjDfaijGxYZw==", "dev": true, "engines": { "node": ">=8" } }, "node_modules/retry": { "version": "0.13.1", "resolved": "https://registry.npmjs.org/retry/-/retry-0.13.1.tgz", "integrity": "sha512-XQBQ3I8W1Cge0Seh+6gjj03LbmRFWuoszgK9ooCpwYIrhhoO80pfq4cUkU5DkknwfOfFteRwlZ56PYOGYyFWdg==", "dev": true, "license": "MIT", "engines": { "node": ">= 4" } }, "node_modules/reusify": { 
"version": "1.0.4", "resolved": "https://registry.npmjs.org/reusify/-/reusify-1.0.4.tgz", "integrity": "sha512-U9nH88a3fc/ekCF1l0/UP1IosiuIjyTh7hBvXVMHYgVcfGvt897Xguj2UOLDeI5BG2m7/uwyaLVT6fbtCwTyzw==", "dev": true, "engines": { "iojs": ">=1.0.0", "node": ">=0.10.0" } }, "node_modules/run-applescript": { "version": "7.0.0", "resolved": "https://registry.npmjs.org/run-applescript/-/run-applescript-7.0.0.tgz", "integrity": "sha512-9by4Ij99JUr/MCFBUkDKLWK3G9HVXmabKz9U5MlIAIuvuzkiOicRYs8XJLxX+xahD+mLiiCYDqF9dKAgtzKP1A==", "dev": true, "license": "MIT", "engines": { "node": ">=18" }, "funding": { "url": "https://github.com/sponsors/sindresorhus" } }, "node_modules/run-parallel": { "version": "1.2.0", "resolved": "https://registry.npmjs.org/run-parallel/-/run-parallel-1.2.0.tgz", "integrity": "sha512-5l4VyZR86LZ/lDxZTR6jqL8AFE2S0IFLMP26AbjsLVADxHdhB/c0GUsH+y39UfCi3dzz8OlQuPmnaJOMoDHQBA==", "dev": true, "funding": [ { "type": "github", "url": "https://github.com/sponsors/feross" }, { "type": "patreon", "url": "https://www.patreon.com/feross" }, { "type": "consulting", "url": "https://feross.org/support" } ], "dependencies": { "queue-microtask": "^1.2.2" } }, "node_modules/safe-buffer": { "version": "5.1.2", "resolved": "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.1.2.tgz", "integrity": "sha512-Gd2UZBJDkXlY7GbJxfsE8/nvKkUEU1G38c1siN6QP6a9PT9MmHB8GnpscSmMJSoF8LOIrt8ud/wPtojys4G6+g==", "dev": true }, "node_modules/safer-buffer": { "version": "2.1.2", "resolved": "https://registry.npmjs.org/safer-buffer/-/safer-buffer-2.1.2.tgz", "integrity": "sha512-YZo3K82SD7Riyi0E1EQPojLz7kpepnSQI9IyPbHHg1XXXevb5dJI7tpyN2ADxGcQbHG7vcyRHk0cbwqcQriUtg==", "dev": true, "license": "MIT" }, "node_modules/schema-utils": { "version": "4.3.2", "resolved": "https://registry.npmjs.org/schema-utils/-/schema-utils-4.3.2.tgz", "integrity": "sha512-Gn/JaSk/Mt9gYubxTtSn/QCV4em9mpAPiR1rqy/Ocu19u/G9J5WWdNoUT4SiV6mFC3y6cxyFcFwdzPM3FgxGAQ==", "dev": true, "license": "MIT", "dependencies": { "@types/json-schema": "^7.0.9", "ajv": "^8.9.0", "ajv-formats": "^2.1.1", "ajv-keywords": "^5.1.0" }, "engines": { "node": ">= 10.13.0" }, "funding": { "type": "opencollective", "url": "https://opencollective.com/webpack" } }, "node_modules/select-hose": { "version": "2.0.0", "resolved": "https://registry.npmjs.org/select-hose/-/select-hose-2.0.0.tgz", "integrity": "sha1-Yl2GWPhlr0Psliv8N2o3NZpJlMo=", "dev": true }, "node_modules/selfsigned": { "version": "2.4.1", "resolved": "https://registry.npmjs.org/selfsigned/-/selfsigned-2.4.1.tgz", "integrity": "sha512-th5B4L2U+eGLq1TVh7zNRGBapioSORUeymIydxgFpwww9d2qyKvtuPU2jJuHvYAwwqi2Y596QBL3eEqcPEYL8Q==", "dev": true, "license": "MIT", "dependencies": { "@types/node-forge": "^1.3.0", "node-forge": "^1" }, "engines": { "node": ">=10" } }, "node_modules/send": { "version": "0.19.0", "resolved": "https://registry.npmjs.org/send/-/send-0.19.0.tgz", "integrity": "sha512-dW41u5VfLXu8SJh5bwRmyYUbAoSB3c9uQh6L8h/KtsFREPWpbX1lrljJo186Jc4nmci/sGUZ9a0a0J2zgfq2hw==", "dev": true, "license": "MIT", "dependencies": { "debug": "2.6.9", "depd": "2.0.0", "destroy": "1.2.0", "encodeurl": "~1.0.2", "escape-html": "~1.0.3", "etag": "~1.8.1", "fresh": "0.5.2", "http-errors": "2.0.0", "mime": "1.6.0", "ms": "2.1.3", "on-finished": "2.4.1", "range-parser": "~1.2.1", "statuses": "2.0.1" }, "engines": { "node": ">= 0.8.0" } }, "node_modules/send/node_modules/encodeurl": { "version": "1.0.2", "resolved": "https://registry.npmjs.org/encodeurl/-/encodeurl-1.0.2.tgz", "integrity": 
"sha512-TPJXq8JqFaVYm2CWmPvnP2Iyo4ZSM7/QKcSmuMLDObfpH5fi7RUGmd/rTDf+rut/saiDiQEeVTNgAmJEdAOx0w==", "dev": true, "license": "MIT", "engines": { "node": ">= 0.8" } }, "node_modules/send/node_modules/ms": { "version": "2.1.3", "resolved": "https://registry.npmjs.org/ms/-/ms-2.1.3.tgz", "integrity": "sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA==", "dev": true, "license": "MIT" }, "node_modules/serialize-javascript": { "version": "6.0.2", "resolved": "https://registry.npmjs.org/serialize-javascript/-/serialize-javascript-6.0.2.tgz", "integrity": "sha512-Saa1xPByTTq2gdeFZYLLo+RFE35NHZkAbqZeWNd3BpzppeVisAqpDjcp8dyf6uIvEqJRd46jemmyA4iFIeVk8g==", "dev": true, "dependencies": { "randombytes": "^2.1.0" } }, "node_modules/serve-index": { "version": "1.9.1", "resolved": "https://registry.npmjs.org/serve-index/-/serve-index-1.9.1.tgz", "integrity": "sha1-03aNabHn2C5c4FD/9bRTvqEqkjk=", "dev": true, "dependencies": { "accepts": "~1.3.4", "batch": "0.6.1", "debug": "2.6.9", "escape-html": "~1.0.3", "http-errors": "~1.6.2", "mime-types": "~2.1.17", "parseurl": "~1.3.2" }, "engines": { "node": ">= 0.8.0" } }, "node_modules/serve-index/node_modules/depd": { "version": "1.1.2", "resolved": "https://registry.npmjs.org/depd/-/depd-1.1.2.tgz", "integrity": "sha512-7emPTl6Dpo6JRXOXjLRxck+FlLRX5847cLKEn00PLAgc3g2hTZZgr+e4c2v6QpSmLeFP3n5yUo7ft6avBK/5jQ==", "dev": true, "engines": { "node": ">= 0.6" } }, "node_modules/serve-index/node_modules/http-errors": { "version": "1.6.3", "resolved": "https://registry.npmjs.org/http-errors/-/http-errors-1.6.3.tgz", "integrity": "sha512-lks+lVC8dgGyh97jxvxeYTWQFvh4uw4yC12gVl63Cg30sjPX4wuGcdkICVXDAESr6OJGjqGA8Iz5mkeN6zlD7A==", "dev": true, "dependencies": { "depd": "~1.1.2", "inherits": "2.0.3", "setprototypeof": "1.1.0", "statuses": ">= 1.4.0 < 2" }, "engines": { "node": ">= 0.6" } }, "node_modules/serve-index/node_modules/inherits": { "version": "2.0.3", "resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.3.tgz", "integrity": "sha512-x00IRNXNy63jwGkJmzPigoySHbaqpNuzKbBOmzK+g2OdZpQ9w+sxCN+VSB3ja7IAge2OP2qpfxTjeNcyjmW1uw==", "dev": true }, "node_modules/serve-index/node_modules/setprototypeof": { "version": "1.1.0", "resolved": "https://registry.npmjs.org/setprototypeof/-/setprototypeof-1.1.0.tgz", "integrity": "sha512-BvE/TwpZX4FXExxOxZyRGQQv651MSwmWKZGqvmPcRIjDqWub67kTKuIMx43cZZrS/cBBzwBcNDWoFxt2XEFIpQ==", "dev": true }, "node_modules/serve-index/node_modules/statuses": { "version": "1.5.0", "resolved": "https://registry.npmjs.org/statuses/-/statuses-1.5.0.tgz", "integrity": "sha1-Fhx9rBd2Wf2YEfQ3cfqZOBR4Yow=", "dev": true, "engines": { "node": ">= 0.6" } }, "node_modules/serve-static": { "version": "1.16.2", "resolved": "https://registry.npmjs.org/serve-static/-/serve-static-1.16.2.tgz", "integrity": "sha512-VqpjJZKadQB/PEbEwvFdO43Ax5dFBZ2UECszz8bQ7pi7wt//PWe1P6MN7eCnjsatYtBT6EuiClbjSWP2WrIoTw==", "dev": true, "license": "MIT", "dependencies": { "encodeurl": "~2.0.0", "escape-html": "~1.0.3", "parseurl": "~1.3.3", "send": "0.19.0" }, "engines": { "node": ">= 0.8.0" } }, "node_modules/setprototypeof": { "version": "1.2.0", "resolved": "https://registry.npmjs.org/setprototypeof/-/setprototypeof-1.2.0.tgz", "integrity": "sha512-E5LDX7Wrp85Kil5bhZv46j8jOeboKq5JMmYM3gVGdGH8xFpPWXUMsNrlODCrkoxMEeNi/XZIwuRvY4XNwYMJpw==", "dev": true, "license": "ISC" }, "node_modules/shallow-clone": { "version": "3.0.1", "resolved": "https://registry.npmjs.org/shallow-clone/-/shallow-clone-3.0.1.tgz", "integrity": 
"sha512-/6KqX+GVUdqPuPPd2LxDDxzX6CAbjJehAAOKlNpqqUpAqPM6HeL8f+o3a+JsyGjn2lv0WY8UsTgUJjU9Ok55NA==", "dev": true, "dependencies": { "kind-of": "^6.0.2" }, "engines": { "node": ">=8" } }, "node_modules/shebang-command": { "version": "2.0.0", "resolved": "https://registry.npmjs.org/shebang-command/-/shebang-command-2.0.0.tgz", "integrity": "sha512-kHxr2zZpYtdmrN1qDjrrX/Z1rR1kG8Dx+gkpK1G4eXmvXswmcE1hTWBWYUzlraYw1/yZp6YuDY77YtvbN0dmDA==", "dev": true, "dependencies": { "shebang-regex": "^3.0.0" }, "engines": { "node": ">=8" } }, "node_modules/shebang-regex": { "version": "3.0.0", "resolved": "https://registry.npmjs.org/shebang-regex/-/shebang-regex-3.0.0.tgz", "integrity": "sha512-7++dFhtcx3353uBaq8DDR4NuxBetBzC7ZQOhmTQInHEd6bSrXdiEyzCvG07Z44UYdLShWUyXt5M/yhz8ekcb1A==", "dev": true, "engines": { "node": ">=8" } }, "node_modules/shell-quote": { "version": "1.8.3", "resolved": "https://registry.npmjs.org/shell-quote/-/shell-quote-1.8.3.tgz", "integrity": "sha512-ObmnIF4hXNg1BqhnHmgbDETF8dLPCggZWBjkQfhZpbszZnYur5DUljTcCHii5LC3J5E0yeO/1LIMyH+UvHQgyw==", "dev": true, "license": "MIT", "engines": { "node": ">= 0.4" }, "funding": { "url": "https://github.com/sponsors/ljharb" } }, "node_modules/side-channel": { "version": "1.1.0", "resolved": "https://registry.npmjs.org/side-channel/-/side-channel-1.1.0.tgz", "integrity": "sha512-ZX99e6tRweoUXqR+VBrslhda51Nh5MTQwou5tnUDgbtyM0dBgmhEDtWGP/xbKn6hqfPRHujUNwz5fy/wbbhnpw==", "dev": true, "license": "MIT", "dependencies": { "es-errors": "^1.3.0", "object-inspect": "^1.13.3", "side-channel-list": "^1.0.0", "side-channel-map": "^1.0.1", "side-channel-weakmap": "^1.0.2" }, "engines": { "node": ">= 0.4" }, "funding": { "url": "https://github.com/sponsors/ljharb" } }, "node_modules/side-channel-list": { "version": "1.0.0", "resolved": "https://registry.npmjs.org/side-channel-list/-/side-channel-list-1.0.0.tgz", "integrity": "sha512-FCLHtRD/gnpCiCHEiJLOwdmFP+wzCmDEkc9y7NsYxeF4u7Btsn1ZuwgwJGxImImHicJArLP4R0yX4c2KCrMrTA==", "dev": true, "license": "MIT", "dependencies": { "es-errors": "^1.3.0", "object-inspect": "^1.13.3" }, "engines": { "node": ">= 0.4" }, "funding": { "url": "https://github.com/sponsors/ljharb" } }, "node_modules/side-channel-map": { "version": "1.0.1", "resolved": "https://registry.npmjs.org/side-channel-map/-/side-channel-map-1.0.1.tgz", "integrity": "sha512-VCjCNfgMsby3tTdo02nbjtM/ewra6jPHmpThenkTYh8pG9ucZ/1P8So4u4FGBek/BjpOVsDCMoLA/iuBKIFXRA==", "dev": true, "license": "MIT", "dependencies": { "call-bound": "^1.0.2", "es-errors": "^1.3.0", "get-intrinsic": "^1.2.5", "object-inspect": "^1.13.3" }, "engines": { "node": ">= 0.4" }, "funding": { "url": "https://github.com/sponsors/ljharb" } }, "node_modules/side-channel-weakmap": { "version": "1.0.2", "resolved": "https://registry.npmjs.org/side-channel-weakmap/-/side-channel-weakmap-1.0.2.tgz", "integrity": "sha512-WPS/HvHQTYnHisLo9McqBHOJk2FkHO/tlpvldyrnem4aeQp4hai3gythswg6p01oSoTl58rcpiFAjF2br2Ak2A==", "dev": true, "license": "MIT", "dependencies": { "call-bound": "^1.0.2", "es-errors": "^1.3.0", "get-intrinsic": "^1.2.5", "object-inspect": "^1.13.3", "side-channel-map": "^1.0.1" }, "engines": { "node": ">= 0.4" }, "funding": { "url": "https://github.com/sponsors/ljharb" } }, "node_modules/slash": { "version": "4.0.0", "resolved": "https://registry.npmjs.org/slash/-/slash-4.0.0.tgz", "integrity": "sha512-3dOsAHXXUkQTpOYcoAxLIorMTp4gIQr5IW3iVb7A7lFIp0VHhnynm9izx6TssdrIcVIESAlVjtnO2K8bg+Coew==", "dev": true, "engines": { "node": ">=12" }, "funding": { "url": 
"https://github.com/sponsors/sindresorhus" } }, "node_modules/sockjs": { "version": "0.3.24", "resolved": "https://registry.npmjs.org/sockjs/-/sockjs-0.3.24.tgz", "integrity": "sha512-GJgLTZ7vYb/JtPSSZ10hsOYIvEYsjbNU+zPdIHcUaWVNUEPivzxku31865sSSud0Da0W4lEeOPlmw93zLQchuQ==", "dev": true, "dependencies": { "faye-websocket": "^0.11.3", "uuid": "^8.3.2", "websocket-driver": "^0.7.4" } }, "node_modules/source-map": { "version": "0.6.1", "resolved": "https://registry.npmjs.org/source-map/-/source-map-0.6.1.tgz", "integrity": "sha512-UjgapumWlbMhkBgzT7Ykc5YXUT46F0iKu8SGXq0bcwP5dz/h0Plj6enJqjz1Zbq2l5WaqYnrVbwWOWMyF3F47g==", "dev": true, "engines": { "node": ">=0.10.0" } }, "node_modules/source-map-support": { "version": "0.5.21", "resolved": "https://registry.npmjs.org/source-map-support/-/source-map-support-0.5.21.tgz", "integrity": "sha512-uBHU3L3czsIyYXKX88fdrGovxdSCoTGDRZ6SYXtSRxLZUzHg5P/66Ht6uoUlHu9EZod+inXhKo3qQgwXUT/y1w==", "dev": true, "dependencies": { "buffer-from": "^1.0.0", "source-map": "^0.6.0" } }, "node_modules/spdy": { "version": "4.0.2", "resolved": "https://registry.npmjs.org/spdy/-/spdy-4.0.2.tgz", "integrity": "sha512-r46gZQZQV+Kl9oItvl1JZZqJKGr+oEkB08A6BzkiR7593/7IbtuncXHd2YoYeTsG4157ZssMu9KYvUHLcjcDoA==", "dev": true, "dependencies": { "debug": "^4.1.0", "handle-thing": "^2.0.0", "http-deceiver": "^1.2.7", "select-hose": "^2.0.0", "spdy-transport": "^3.0.0" }, "engines": { "node": ">=6.0.0" } }, "node_modules/spdy-transport": { "version": "3.0.0", "resolved": "https://registry.npmjs.org/spdy-transport/-/spdy-transport-3.0.0.tgz", "integrity": "sha512-hsLVFE5SjA6TCisWeJXFKniGGOpBgMLmerfO2aCyCU5s7nJ/rpAepqmFifv/GCbSbueEeAJJnmSQ2rKC/g8Fcw==", "dev": true, "dependencies": { "debug": "^4.1.0", "detect-node": "^2.0.4", "hpack.js": "^2.1.6", "obuf": "^1.1.2", "readable-stream": "^3.0.6", "wbuf": "^1.7.3" } }, "node_modules/spdy-transport/node_modules/debug": { "version": "4.3.4", "resolved": "https://registry.npmjs.org/debug/-/debug-4.3.4.tgz", "integrity": "sha512-PRWFHuSU3eDtQJPvnNY7Jcket1j0t5OuOsFzPPzsekD52Zl8qUfFIPEiswXqIvHWGVHOgX+7G/vCNNhehwxfkQ==", "dev": true, "dependencies": { "ms": "2.1.2" }, "engines": { "node": ">=6.0" }, "peerDependenciesMeta": { "supports-color": { "optional": true } } }, "node_modules/spdy-transport/node_modules/ms": { "version": "2.1.2", "resolved": "https://registry.npmjs.org/ms/-/ms-2.1.2.tgz", "integrity": "sha512-sGkPx+VjMtmA6MX27oA4FBFELFCZZ4S4XqeGOXCv68tT+jb3vk/RyaKWP0PTKyWtmLSM0b+adUTEvbs1PEaH2w==", "dev": true }, "node_modules/spdy-transport/node_modules/readable-stream": { "version": "3.6.0", "resolved": "https://registry.npmjs.org/readable-stream/-/readable-stream-3.6.0.tgz", "integrity": "sha512-BViHy7LKeTz4oNnkcLJ+lVSL6vpiFeX6/d3oSH8zCW7UxP2onchk+vTGB143xuFjHS3deTgkKoXXymXqymiIdA==", "dev": true, "dependencies": { "inherits": "^2.0.3", "string_decoder": "^1.1.1", "util-deprecate": "^1.0.1" }, "engines": { "node": ">= 6" } }, "node_modules/spdy/node_modules/debug": { "version": "4.3.4", "resolved": "https://registry.npmjs.org/debug/-/debug-4.3.4.tgz", "integrity": "sha512-PRWFHuSU3eDtQJPvnNY7Jcket1j0t5OuOsFzPPzsekD52Zl8qUfFIPEiswXqIvHWGVHOgX+7G/vCNNhehwxfkQ==", "dev": true, "dependencies": { "ms": "2.1.2" }, "engines": { "node": ">=6.0" }, "peerDependenciesMeta": { "supports-color": { "optional": true } } }, "node_modules/spdy/node_modules/ms": { "version": "2.1.2", "resolved": "https://registry.npmjs.org/ms/-/ms-2.1.2.tgz", "integrity": "sha512-sGkPx+VjMtmA6MX27oA4FBFELFCZZ4S4XqeGOXCv68tT+jb3vk/RyaKWP0PTKyWtmLSM0b+adUTEvbs1PEaH2w==", 
"dev": true }, "node_modules/statuses": { "version": "2.0.1", "resolved": "https://registry.npmjs.org/statuses/-/statuses-2.0.1.tgz", "integrity": "sha512-RwNA9Z/7PrK06rYLIzFMlaF+l73iwpzsqRIFgbMLbTcLD6cOao82TaWefPXQvB2fOC4AjuYSEndS7N/mTCbkdQ==", "dev": true, "license": "MIT", "engines": { "node": ">= 0.8" } }, "node_modules/string_decoder": { "version": "1.1.1", "resolved": "https://registry.npmjs.org/string_decoder/-/string_decoder-1.1.1.tgz", "integrity": "sha512-n/ShnvDi6FHbbVfviro+WojiFzv+s8MPMHBczVePfUpDJLwoLT0ht1l4YwBCbi8pJAveEEdnkHyPyTP/mzRfwg==", "dev": true, "dependencies": { "safe-buffer": "~5.1.0" } }, "node_modules/supports-color": { "version": "8.1.1", "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-8.1.1.tgz", "integrity": "sha512-MpUEN2OodtUzxvKQl72cUF7RQ5EiHsGvSsVG0ia9c5RbWGL2CI4C7EpPS8UTBIplnlzZiNuV56w+FuNxy3ty2Q==", "dev": true, "dependencies": { "has-flag": "^4.0.0" }, "engines": { "node": ">=10" }, "funding": { "url": "https://github.com/chalk/supports-color?sponsor=1" } }, "node_modules/supports-preserve-symlinks-flag": { "version": "1.0.0", "resolved": "https://registry.npmjs.org/supports-preserve-symlinks-flag/-/supports-preserve-symlinks-flag-1.0.0.tgz", "integrity": "sha512-ot0WnXS9fgdkgIcePe6RHNk1WA8+muPa6cSjeR3V8K27q9BB1rTE3R1p7Hv0z1ZyAc8s6Vvv8DIyWf681MAt0w==", "dev": true, "engines": { "node": ">= 0.4" }, "funding": { "url": "https://github.com/sponsors/ljharb" } }, "node_modules/tapable": { "version": "2.2.1", "resolved": "https://registry.npmjs.org/tapable/-/tapable-2.2.1.tgz", "integrity": "sha512-GNzQvQTOIP6RyTfE2Qxb8ZVlNmw0n88vp1szwWRimP02mnTsx3Wtn5qRdqY9w2XduFNUgvOwhNnQsjwCp+kqaQ==", "dev": true, "engines": { "node": ">=6" } }, "node_modules/terser": { "version": "5.34.1", "resolved": "https://registry.npmjs.org/terser/-/terser-5.34.1.tgz", "integrity": "sha512-FsJZ7iZLd/BXkz+4xrRTGJ26o/6VTjQytUk8b8OxkwcD2I+79VPJlz7qss1+zE7h8GNIScFqXcDyJ/KqBYZFVA==", "dev": true, "dependencies": { "@jridgewell/source-map": "^0.3.3", "acorn": "^8.8.2", "commander": "^2.20.0", "source-map-support": "~0.5.20" }, "bin": { "terser": "bin/terser" }, "engines": { "node": ">=10" } }, "node_modules/terser-webpack-plugin": { "version": "5.3.10", "resolved": "https://registry.npmjs.org/terser-webpack-plugin/-/terser-webpack-plugin-5.3.10.tgz", "integrity": "sha512-BKFPWlPDndPs+NGGCr1U59t0XScL5317Y0UReNrHaw9/FwhPENlq6bfgs+4yPfyP51vqC1bQ4rp1EfXW5ZSH9w==", "dev": true, "dependencies": { "@jridgewell/trace-mapping": "^0.3.20", "jest-worker": "^27.4.5", "schema-utils": "^3.1.1", "serialize-javascript": "^6.0.1", "terser": "^5.26.0" }, "engines": { "node": ">= 10.13.0" }, "funding": { "type": "opencollective", "url": "https://opencollective.com/webpack" }, "peerDependencies": { "webpack": "^5.1.0" }, "peerDependenciesMeta": { "@swc/core": { "optional": true }, "esbuild": { "optional": true }, "uglify-js": { "optional": true } } }, "node_modules/terser-webpack-plugin/node_modules/ajv": { "version": "6.12.6", "resolved": "https://registry.npmjs.org/ajv/-/ajv-6.12.6.tgz", "integrity": "sha512-j3fVLgvTo527anyYyJOGTYJbG+vnnQYvE0m5mmkc1TK+nxAppkCLMIL0aZ4dblVCNoGShhm+kzE4ZUykBoMg4g==", "dev": true, "dependencies": { "fast-deep-equal": "^3.1.1", "fast-json-stable-stringify": "^2.0.0", "json-schema-traverse": "^0.4.1", "uri-js": "^4.2.2" }, "funding": { "type": "github", "url": "https://github.com/sponsors/epoberezkin" } }, "node_modules/terser-webpack-plugin/node_modules/ajv-keywords": { "version": "3.5.2", "resolved": 
"https://registry.npmjs.org/ajv-keywords/-/ajv-keywords-3.5.2.tgz", "integrity": "sha512-5p6WTN0DdTGVQk6VjcEju19IgaHudalcfabD7yhDGeA6bcQnmL+CpveLJq/3hvfwd1aof6L386Ougkx6RfyMIQ==", "dev": true, "peerDependencies": { "ajv": "^6.9.1" } }, "node_modules/terser-webpack-plugin/node_modules/json-schema-traverse": { "version": "0.4.1", "resolved": "https://registry.npmjs.org/json-schema-traverse/-/json-schema-traverse-0.4.1.tgz", "integrity": "sha512-xbbCH5dCYU5T8LcEhhuh7HJ88HXuW3qsI3Y0zOZFKfZEHcpWiHU/Jxzk629Brsab/mMiHQti9wMP+845RPe3Vg==", "dev": true }, "node_modules/terser-webpack-plugin/node_modules/schema-utils": { "version": "3.3.0", "resolved": "https://registry.npmjs.org/schema-utils/-/schema-utils-3.3.0.tgz", "integrity": "sha512-pN/yOAvcC+5rQ5nERGuwrjLlYvLTbCibnZ1I7B1LaiAz9BRBlE9GMgE/eqV30P7aJQUf7Ddimy/RsbYO/GrVGg==", "dev": true, "dependencies": { "@types/json-schema": "^7.0.8", "ajv": "^6.12.5", "ajv-keywords": "^3.5.2" }, "engines": { "node": ">= 10.13.0" }, "funding": { "type": "opencollective", "url": "https://opencollective.com/webpack" } }, "node_modules/thingies": { "version": "1.21.0", "resolved": "https://registry.npmjs.org/thingies/-/thingies-1.21.0.tgz", "integrity": "sha512-hsqsJsFMsV+aD4s3CWKk85ep/3I9XzYV/IXaSouJMYIoDlgyi11cBhsqYe9/geRfB0YIikBQg6raRaM+nIMP9g==", "dev": true, "license": "Unlicense", "engines": { "node": ">=10.18" }, "peerDependencies": { "tslib": "^2" } }, "node_modules/thunky": { "version": "1.1.0", "resolved": "https://registry.npmjs.org/thunky/-/thunky-1.1.0.tgz", "integrity": "sha512-eHY7nBftgThBqOyHGVN+l8gF0BucP09fMo0oO/Lb0w1OF80dJv+lDVpXG60WMQvkcxAkNybKsrEIE3ZtKGmPrA==", "dev": true, "license": "MIT" }, "node_modules/to-regex-range": { "version": "5.0.1", "resolved": "https://registry.npmjs.org/to-regex-range/-/to-regex-range-5.0.1.tgz", "integrity": "sha512-65P7iz6X5yEr1cwcgvQxbbIw7Uk3gOy5dIdtZ4rDveLqhrdJP+Li/Hx6tyK0NEb+2GCyneCMJiGqrADCSNk8sQ==", "dev": true, "dependencies": { "is-number": "^7.0.0" }, "engines": { "node": ">=8.0" } }, "node_modules/toidentifier": { "version": "1.0.1", "resolved": "https://registry.npmjs.org/toidentifier/-/toidentifier-1.0.1.tgz", "integrity": "sha512-o5sSPKEkg/DIQNmH43V0/uerLrpzVedkUh8tGNvaeXpfpuwjKenlSox/2O/BTlZUtEe+JG7s5YhEz608PlAHRA==", "dev": true, "license": "MIT", "engines": { "node": ">=0.6" } }, "node_modules/tree-dump": { "version": "1.0.3", "resolved": "https://registry.npmjs.org/tree-dump/-/tree-dump-1.0.3.tgz", "integrity": "sha512-il+Cv80yVHFBwokQSfd4bldvr1Md951DpgAGfmhydt04L+YzHgubm2tQ7zueWDcGENKHq0ZvGFR/hjvNXilHEg==", "dev": true, "license": "Apache-2.0", "engines": { "node": ">=10.0" }, "funding": { "type": "github", "url": "https://github.com/sponsors/streamich" }, "peerDependencies": { "tslib": "2" } }, "node_modules/tslib": { "version": "2.8.1", "resolved": "https://registry.npmjs.org/tslib/-/tslib-2.8.1.tgz", "integrity": "sha512-oJFu94HQb+KVduSUQL7wnpmqnfmLsOA/nAh6b6EH0wCEoK0/mPeXU6c3wKDV83MkOuHPRHtSXKKU99IBazS/2w==", "dev": true, "license": "0BSD" }, "node_modules/type-is": { "version": "1.6.18", "resolved": "https://registry.npmjs.org/type-is/-/type-is-1.6.18.tgz", "integrity": "sha512-TkRKr9sUTxEH8MdfuCSP7VizJyzRNMjj2J2do2Jr3Kym598JVdEksuzPQCnlFPW4ky9Q+iA+ma9BGm06XQBy8g==", "dev": true, "license": "MIT", "dependencies": { "media-typer": "0.3.0", "mime-types": "~2.1.24" }, "engines": { "node": ">= 0.6" } }, "node_modules/unpipe": { "version": "1.0.0", "resolved": "https://registry.npmjs.org/unpipe/-/unpipe-1.0.0.tgz", "integrity": 
"sha512-pjy2bYhSsufwWlKwPc+l3cN7+wuJlK6uz0YdJEOlQDbl6jo/YlPi4mb8agUkVC8BF7V8NuzeyPNqRksA3hztKQ==", "dev": true, "license": "MIT", "engines": { "node": ">= 0.8" } }, "node_modules/unstable_wasm": { "resolved": "../pkg", "link": true }, "node_modules/update-browserslist-db": { "version": "1.1.1", "resolved": "https://registry.npmjs.org/update-browserslist-db/-/update-browserslist-db-1.1.1.tgz", "integrity": "sha512-R8UzCaa9Az+38REPiJ1tXlImTJXlVfgHZsglwBD/k6nj76ctsH1E3q4doGrukiLQd3sGQYu56r5+lo5r94l29A==", "dev": true, "funding": [ { "type": "opencollective", "url": "https://opencollective.com/browserslist" }, { "type": "tidelift", "url": "https://tidelift.com/funding/github/npm/browserslist" }, { "type": "github", "url": "https://github.com/sponsors/ai" } ], "dependencies": { "escalade": "^3.2.0", "picocolors": "^1.1.0" }, "bin": { "update-browserslist-db": "cli.js" }, "peerDependencies": { "browserslist": ">= 4.21.0" } }, "node_modules/uri-js": { "version": "4.4.1", "resolved": "https://registry.npmjs.org/uri-js/-/uri-js-4.4.1.tgz", "integrity": "sha512-7rKUyy33Q1yc98pQ1DAmLtwX109F7TIfWlW1Ydo8Wl1ii1SeHieeh0HHfPeL2fMXK6z0s8ecKs9frCuLJvndBg==", "dev": true, "dependencies": { "punycode": "^2.1.0" } }, "node_modules/util-deprecate": { "version": "1.0.2", "resolved": "https://registry.npmjs.org/util-deprecate/-/util-deprecate-1.0.2.tgz", "integrity": "sha1-RQ1Nyfpw3nMnYvvS1KKJgUGaDM8=", "dev": true }, "node_modules/utils-merge": { "version": "1.0.1", "resolved": "https://registry.npmjs.org/utils-merge/-/utils-merge-1.0.1.tgz", "integrity": "sha512-pMZTvIkT1d+TFGvDOqodOclx0QWkkgi6Tdoa8gC8ffGAAqz9pzPTZWAybbsHHoED/ztMtkv/VoYTYyShUn81hA==", "dev": true, "license": "MIT", "engines": { "node": ">= 0.4.0" } }, "node_modules/uuid": { "version": "8.3.2", "resolved": "https://registry.npmjs.org/uuid/-/uuid-8.3.2.tgz", "integrity": "sha512-+NYs2QeMWy+GWFOEm9xnn6HCDp0l7QBD7ml8zLUmJ+93Q5NF0NocErnwkTkXVFNiX3/fpC6afS8Dhb/gz7R7eg==", "dev": true, "bin": { "uuid": "dist/bin/uuid" } }, "node_modules/vary": { "version": "1.1.2", "resolved": "https://registry.npmjs.org/vary/-/vary-1.1.2.tgz", "integrity": "sha1-IpnwLG3tMNSllhsLn3RSShj2NPw=", "dev": true, "engines": { "node": ">= 0.8" } }, "node_modules/watchpack": { "version": "2.4.2", "resolved": "https://registry.npmjs.org/watchpack/-/watchpack-2.4.2.tgz", "integrity": "sha512-TnbFSbcOCcDgjZ4piURLCbJ3nJhznVh9kw6F6iokjiFPl8ONxe9A6nMDVXDiNbrSfLILs6vB07F7wLBrwPYzJw==", "dev": true, "dependencies": { "glob-to-regexp": "^0.4.1", "graceful-fs": "^4.1.2" }, "engines": { "node": ">=10.13.0" } }, "node_modules/wbuf": { "version": "1.7.3", "resolved": "https://registry.npmjs.org/wbuf/-/wbuf-1.7.3.tgz", "integrity": "sha512-O84QOnr0icsbFGLS0O3bI5FswxzRr8/gHwWkDlQFskhSPryQXvrTMxjxGP4+iWYoauLoBvfDpkrOauZ+0iZpDA==", "dev": true, "dependencies": { "minimalistic-assert": "^1.0.0" } }, "node_modules/webpack": { "version": "5.95.0", "resolved": "https://registry.npmjs.org/webpack/-/webpack-5.95.0.tgz", "integrity": "sha512-2t3XstrKULz41MNMBF+cJ97TyHdyQ8HCt//pqErqDvNjU9YQBnZxIHa11VXsi7F3mb5/aO2tuDxdeTPdU7xu9Q==", "dev": true, "dependencies": { "@types/estree": "^1.0.5", "@webassemblyjs/ast": "^1.12.1", "@webassemblyjs/wasm-edit": "^1.12.1", "@webassemblyjs/wasm-parser": "^1.12.1", "acorn": "^8.7.1", "acorn-import-attributes": "^1.9.5", "browserslist": "^4.21.10", "chrome-trace-event": "^1.0.2", "enhanced-resolve": "^5.17.1", "es-module-lexer": "^1.2.1", "eslint-scope": "5.1.1", "events": "^3.2.0", "glob-to-regexp": "^0.4.1", "graceful-fs": "^4.2.11", "json-parse-even-better-errors": 
"^2.3.1", "loader-runner": "^4.2.0", "mime-types": "^2.1.27", "neo-async": "^2.6.2", "schema-utils": "^3.2.0", "tapable": "^2.1.1", "terser-webpack-plugin": "^5.3.10", "watchpack": "^2.4.1", "webpack-sources": "^3.2.3" }, "bin": { "webpack": "bin/webpack.js" }, "engines": { "node": ">=10.13.0" }, "funding": { "type": "opencollective", "url": "https://opencollective.com/webpack" }, "peerDependenciesMeta": { "webpack-cli": { "optional": true } } }, "node_modules/webpack-cli": { "version": "5.0.1", "resolved": "https://registry.npmjs.org/webpack-cli/-/webpack-cli-5.0.1.tgz", "integrity": "sha512-S3KVAyfwUqr0Mo/ur3NzIp6jnerNpo7GUO6so51mxLi1spqsA17YcMXy0WOIJtBSnj748lthxC6XLbNKh/ZC+A==", "dev": true, "dependencies": { "@discoveryjs/json-ext": "^0.5.0", "@webpack-cli/configtest": "^2.0.1", "@webpack-cli/info": "^2.0.1", "@webpack-cli/serve": "^2.0.1", "colorette": "^2.0.14", "commander": "^9.4.1", "cross-spawn": "^7.0.3", "envinfo": "^7.7.3", "fastest-levenshtein": "^1.0.12", "import-local": "^3.0.2", "interpret": "^3.1.1", "rechoir": "^0.8.0", "webpack-merge": "^5.7.3" }, "bin": { "webpack-cli": "bin/cli.js" }, "engines": { "node": ">=14.15.0" }, "funding": { "type": "opencollective", "url": "https://opencollective.com/webpack" }, "peerDependencies": { "webpack": "5.x.x" }, "peerDependenciesMeta": { "@webpack-cli/generators": { "optional": true }, "webpack-bundle-analyzer": { "optional": true }, "webpack-dev-server": { "optional": true } } }, "node_modules/webpack-cli/node_modules/commander": { "version": "9.4.1", "resolved": "https://registry.npmjs.org/commander/-/commander-9.4.1.tgz", "integrity": "sha512-5EEkTNyHNGFPD2H+c/dXXfQZYa/scCKasxWcXJaWnNJ99pnQN9Vnmqow+p+PlFPE63Q6mThaZws1T+HxfpgtPw==", "dev": true, "engines": { "node": "^12.20.0 || >=14" } }, "node_modules/webpack-dev-middleware": { "version": "7.4.2", "resolved": "https://registry.npmjs.org/webpack-dev-middleware/-/webpack-dev-middleware-7.4.2.tgz", "integrity": "sha512-xOO8n6eggxnwYpy1NlzUKpvrjfJTvae5/D6WOK0S2LSo7vjmo5gCM1DbLUmFqrMTJP+W/0YZNctm7jasWvLuBA==", "dev": true, "license": "MIT", "dependencies": { "colorette": "^2.0.10", "memfs": "^4.6.0", "mime-types": "^2.1.31", "on-finished": "^2.4.1", "range-parser": "^1.2.1", "schema-utils": "^4.0.0" }, "engines": { "node": ">= 18.12.0" }, "funding": { "type": "opencollective", "url": "https://opencollective.com/webpack" }, "peerDependencies": { "webpack": "^5.0.0" }, "peerDependenciesMeta": { "webpack": { "optional": true } } }, "node_modules/webpack-dev-server": { "version": "5.2.1", "resolved": "https://registry.npmjs.org/webpack-dev-server/-/webpack-dev-server-5.2.1.tgz", "integrity": "sha512-ml/0HIj9NLpVKOMq+SuBPLHcmbG+TGIjXRHsYfZwocUBIqEvws8NnS/V9AFQ5FKP+tgn5adwVwRrTEpGL33QFQ==", "dev": true, "license": "MIT", "dependencies": { "@types/bonjour": "^3.5.13", "@types/connect-history-api-fallback": "^1.5.4", "@types/express": "^4.17.21", "@types/express-serve-static-core": "^4.17.21", "@types/serve-index": "^1.9.4", "@types/serve-static": "^1.15.5", "@types/sockjs": "^0.3.36", "@types/ws": "^8.5.10", "ansi-html-community": "^0.0.8", "bonjour-service": "^1.2.1", "chokidar": "^3.6.0", "colorette": "^2.0.10", "compression": "^1.7.4", "connect-history-api-fallback": "^2.0.0", "express": "^4.21.2", "graceful-fs": "^4.2.6", "http-proxy-middleware": "^2.0.7", "ipaddr.js": "^2.1.0", "launch-editor": "^2.6.1", "open": "^10.0.3", "p-retry": "^6.2.0", "schema-utils": "^4.2.0", "selfsigned": "^2.4.1", "serve-index": "^1.9.1", "sockjs": "^0.3.24", "spdy": "^4.0.2", "webpack-dev-middleware": 
"^7.4.2", "ws": "^8.18.0" }, "bin": { "webpack-dev-server": "bin/webpack-dev-server.js" }, "engines": { "node": ">= 18.12.0" }, "funding": { "type": "opencollective", "url": "https://opencollective.com/webpack" }, "peerDependencies": { "webpack": "^5.0.0" }, "peerDependenciesMeta": { "webpack": { "optional": true }, "webpack-cli": { "optional": true } } }, "node_modules/webpack-merge": { "version": "5.8.0", "resolved": "https://registry.npmjs.org/webpack-merge/-/webpack-merge-5.8.0.tgz", "integrity": "sha512-/SaI7xY0831XwP6kzuwhKWVKDP9t1QY1h65lAFLbZqMPIuYcD9QAW4u9STIbU9kaJbPBB/geU/gLr1wDjOhQ+Q==", "dev": true, "dependencies": { "clone-deep": "^4.0.1", "wildcard": "^2.0.0" }, "engines": { "node": ">=10.0.0" } }, "node_modules/webpack-sources": { "version": "3.2.3", "resolved": "https://registry.npmjs.org/webpack-sources/-/webpack-sources-3.2.3.tgz", "integrity": "sha512-/DyMEOrDgLKKIG0fmvtz+4dUX/3Ghozwgm6iPp8KRhvn+eQf9+Q7GWxVNMk3+uCPWfdXYC4ExGBckIXdFEfH1w==", "dev": true, "engines": { "node": ">=10.13.0" } }, "node_modules/webpack/node_modules/ajv": { "version": "6.12.6", "resolved": "https://registry.npmjs.org/ajv/-/ajv-6.12.6.tgz", "integrity": "sha512-j3fVLgvTo527anyYyJOGTYJbG+vnnQYvE0m5mmkc1TK+nxAppkCLMIL0aZ4dblVCNoGShhm+kzE4ZUykBoMg4g==", "dev": true, "dependencies": { "fast-deep-equal": "^3.1.1", "fast-json-stable-stringify": "^2.0.0", "json-schema-traverse": "^0.4.1", "uri-js": "^4.2.2" }, "funding": { "type": "github", "url": "https://github.com/sponsors/epoberezkin" } }, "node_modules/webpack/node_modules/ajv-keywords": { "version": "3.5.2", "resolved": "https://registry.npmjs.org/ajv-keywords/-/ajv-keywords-3.5.2.tgz", "integrity": "sha512-5p6WTN0DdTGVQk6VjcEju19IgaHudalcfabD7yhDGeA6bcQnmL+CpveLJq/3hvfwd1aof6L386Ougkx6RfyMIQ==", "dev": true, "peerDependencies": { "ajv": "^6.9.1" } }, "node_modules/webpack/node_modules/json-schema-traverse": { "version": "0.4.1", "resolved": "https://registry.npmjs.org/json-schema-traverse/-/json-schema-traverse-0.4.1.tgz", "integrity": "sha512-xbbCH5dCYU5T8LcEhhuh7HJ88HXuW3qsI3Y0zOZFKfZEHcpWiHU/Jxzk629Brsab/mMiHQti9wMP+845RPe3Vg==", "dev": true }, "node_modules/webpack/node_modules/schema-utils": { "version": "3.3.0", "resolved": "https://registry.npmjs.org/schema-utils/-/schema-utils-3.3.0.tgz", "integrity": "sha512-pN/yOAvcC+5rQ5nERGuwrjLlYvLTbCibnZ1I7B1LaiAz9BRBlE9GMgE/eqV30P7aJQUf7Ddimy/RsbYO/GrVGg==", "dev": true, "dependencies": { "@types/json-schema": "^7.0.8", "ajv": "^6.12.5", "ajv-keywords": "^3.5.2" }, "engines": { "node": ">= 10.13.0" }, "funding": { "type": "opencollective", "url": "https://opencollective.com/webpack" } }, "node_modules/websocket-driver": { "version": "0.7.4", "resolved": "https://registry.npmjs.org/websocket-driver/-/websocket-driver-0.7.4.tgz", "integrity": "sha512-b17KeDIQVjvb0ssuSDF2cYXSg2iztliJ4B9WdsuB6J952qCPKmnVq4DyW5motImXHDC1cBT/1UezrJVsKw5zjg==", "dev": true, "dependencies": { "http-parser-js": ">=0.5.1", "safe-buffer": ">=5.1.0", "websocket-extensions": ">=0.1.1" }, "engines": { "node": ">=0.8.0" } }, "node_modules/websocket-extensions": { "version": "0.1.4", "resolved": "https://registry.npmjs.org/websocket-extensions/-/websocket-extensions-0.1.4.tgz", "integrity": "sha512-OqedPIGOfsDlo31UNwYbCFMSaO9m9G/0faIHj5/dZFDMFqPTcx6UwqyOy3COEaEOg/9VsGIpdqn62W5KhoKSpg==", "dev": true, "engines": { "node": ">=0.8.0" } }, "node_modules/which": { "version": "2.0.2", "resolved": "https://registry.npmjs.org/which/-/which-2.0.2.tgz", "integrity": 
"sha512-BLI3Tl1TW3Pvl70l3yq3Y64i+awpwXqsGBYWkkqMtnbXgrMD+yj7rhW0kuEDxzJaYXGjEW5ogapKNMEKNMjibA==", "dev": true, "dependencies": { "isexe": "^2.0.0" }, "bin": { "node-which": "bin/node-which" }, "engines": { "node": ">= 8" } }, "node_modules/wildcard": { "version": "2.0.0", "resolved": "https://registry.npmjs.org/wildcard/-/wildcard-2.0.0.tgz", "integrity": "sha512-JcKqAHLPxcdb9KM49dufGXn2x3ssnfjbcaQdLlfZsL9rH9wgDQjUtDxbo8NE0F6SFvydeu1VhZe7hZuHsB2/pw==", "dev": true }, "node_modules/ws": { "version": "8.18.2", "resolved": "https://registry.npmjs.org/ws/-/ws-8.18.2.tgz", "integrity": "sha512-DMricUmwGZUVr++AEAe2uiVM7UoO9MAVZMDu05UQOaUII0lp+zOzLLU4Xqh/JvTqklB1T4uELaaPBKyjE1r4fQ==", "dev": true, "license": "MIT", "engines": { "node": ">=10.0.0" }, "peerDependencies": { "bufferutil": "^4.0.1", "utf-8-validate": ">=5.0.2" }, "peerDependenciesMeta": { "bufferutil": { "optional": true }, "utf-8-validate": { "optional": true } } } }, "dependencies": { "@discoveryjs/json-ext": { "version": "0.5.7", "resolved": "https://registry.npmjs.org/@discoveryjs/json-ext/-/json-ext-0.5.7.tgz", "integrity": "sha512-dBVuXR082gk3jsFp7Rd/JI4kytwGHecnCoTtXFb7DB6CNHp4rg5k1bhg0nWdLGLnOV71lmDzGQaLMy8iPLY0pw==", "dev": true }, "@jridgewell/gen-mapping": { "version": "0.3.5", "resolved": "https://registry.npmjs.org/@jridgewell/gen-mapping/-/gen-mapping-0.3.5.tgz", "integrity": "sha512-IzL8ZoEDIBRWEzlCcRhOaCupYyN5gdIK+Q6fbFdPDg6HqX6jpkItn7DFIpW9LQzXG6Df9sA7+OKnq0qlz/GaQg==", "dev": true, "requires": { "@jridgewell/set-array": "^1.2.1", "@jridgewell/sourcemap-codec": "^1.4.10", "@jridgewell/trace-mapping": "^0.3.24" } }, "@jridgewell/resolve-uri": { "version": "3.1.2", "resolved": "https://registry.npmjs.org/@jridgewell/resolve-uri/-/resolve-uri-3.1.2.tgz", "integrity": "sha512-bRISgCIjP20/tbWSPWMEi54QVPRZExkuD9lJL+UIxUKtwVJA8wW1Trb1jMs1RFXo1CBTNZ/5hpC9QvmKWdopKw==", "dev": true }, "@jridgewell/set-array": { "version": "1.2.1", "resolved": "https://registry.npmjs.org/@jridgewell/set-array/-/set-array-1.2.1.tgz", "integrity": "sha512-R8gLRTZeyp03ymzP/6Lil/28tGeGEzhx1q2k703KGWRAI1VdvPIXdG70VJc2pAMw3NA6JKL5hhFu1sJX0Mnn/A==", "dev": true }, "@jridgewell/source-map": { "version": "0.3.6", "resolved": "https://registry.npmjs.org/@jridgewell/source-map/-/source-map-0.3.6.tgz", "integrity": "sha512-1ZJTZebgqllO79ue2bm3rIGud/bOe0pP5BjSRCRxxYkEZS8STV7zN84UBbiYu7jy+eCKSnVIUgoWWE/tt+shMQ==", "dev": true, "requires": { "@jridgewell/gen-mapping": "^0.3.5", "@jridgewell/trace-mapping": "^0.3.25" } }, "@jridgewell/sourcemap-codec": { "version": "1.5.0", "resolved": "https://registry.npmjs.org/@jridgewell/sourcemap-codec/-/sourcemap-codec-1.5.0.tgz", "integrity": "sha512-gv3ZRaISU3fjPAgNsriBRqGWQL6quFx04YMPW/zD8XMLsU32mhCCbfbO6KZFLjvYpCZ8zyDEgqsgf+PwPaM7GQ==", "dev": true }, "@jridgewell/trace-mapping": { "version": "0.3.25", "resolved": "https://registry.npmjs.org/@jridgewell/trace-mapping/-/trace-mapping-0.3.25.tgz", "integrity": "sha512-vNk6aEwybGtawWmy/PzwnGDOjCkLWSD2wqvjGGAgOAwCGWySYXfYoxt00IJkTF+8Lb57DwOb3Aa0o9CApepiYQ==", "dev": true, "requires": { "@jridgewell/resolve-uri": "^3.1.0", "@jridgewell/sourcemap-codec": "^1.4.14" } }, "@jsonjoy.com/base64": { "version": "1.1.2", "resolved": "https://registry.npmjs.org/@jsonjoy.com/base64/-/base64-1.1.2.tgz", "integrity": "sha512-q6XAnWQDIMA3+FTiOYajoYqySkO+JSat0ytXGSuRdq9uXE7o92gzuQwQM14xaCRlBLGq3v5miDGC4vkVTn54xA==", "dev": true, "requires": {} }, "@jsonjoy.com/json-pack": { "version": "1.2.0", "resolved": "https://registry.npmjs.org/@jsonjoy.com/json-pack/-/json-pack-1.2.0.tgz", 
"integrity": "sha512-io1zEbbYcElht3tdlqEOFxZ0dMTYrHz9iMf0gqn1pPjZFTCgM5R4R5IMA20Chb2UPYYsxjzs8CgZ7Nb5n2K2rA==", "dev": true, "requires": { "@jsonjoy.com/base64": "^1.1.1", "@jsonjoy.com/util": "^1.1.2", "hyperdyperid": "^1.2.0", "thingies": "^1.20.0" } }, "@jsonjoy.com/util": { "version": "1.6.0", "resolved": "https://registry.npmjs.org/@jsonjoy.com/util/-/util-1.6.0.tgz", "integrity": "sha512-sw/RMbehRhN68WRtcKCpQOPfnH6lLP4GJfqzi3iYej8tnzpZUDr6UkZYJjcjjC0FWEJOJbyM3PTIwxucUmDG2A==", "dev": true, "requires": {} }, "@leichtgewicht/ip-codec": { "version": "2.0.5", "resolved": "https://registry.npmjs.org/@leichtgewicht/ip-codec/-/ip-codec-2.0.5.tgz", "integrity": "sha512-Vo+PSpZG2/fmgmiNzYK9qWRh8h/CHrwD0mo1h1DzL4yzHNSfWYujGTYsWGreD000gcgmZ7K4Ys6Tx9TxtsKdDw==", "dev": true }, "@nodelib/fs.scandir": { "version": "2.1.5", "resolved": "https://registry.npmjs.org/@nodelib/fs.scandir/-/fs.scandir-2.1.5.tgz", "integrity": "sha512-vq24Bq3ym5HEQm2NKCr3yXDwjc7vTsEThRDnkp2DK9p1uqLR+DHurm/NOTo0KG7HYHU7eppKZj3MyqYuMBf62g==", "dev": true, "requires": { "@nodelib/fs.stat": "2.0.5", "run-parallel": "^1.1.9" } }, "@nodelib/fs.stat": { "version": "2.0.5", "resolved": "https://registry.npmjs.org/@nodelib/fs.stat/-/fs.stat-2.0.5.tgz", "integrity": "sha512-RkhPPp2zrqDAQA/2jNhnztcPAlv64XdhIp7a7454A5ovI7Bukxgt7MX7udwAu3zg1DcpPU0rz3VV1SeaqvY4+A==", "dev": true }, "@nodelib/fs.walk": { "version": "1.2.8", "resolved": "https://registry.npmjs.org/@nodelib/fs.walk/-/fs.walk-1.2.8.tgz", "integrity": "sha512-oGB+UxlgWcgQkgwo8GcEGwemoTFt3FIO9ababBmaGwXIoBKZ+GTy0pP185beGg7Llih/NSHSV2XAs1lnznocSg==", "dev": true, "requires": { "@nodelib/fs.scandir": "2.1.5", "fastq": "^1.6.0" } }, "@types/body-parser": { "version": "1.19.2", "resolved": "https://registry.npmjs.org/@types/body-parser/-/body-parser-1.19.2.tgz", "integrity": "sha512-ALYone6pm6QmwZoAgeyNksccT9Q4AWZQ6PvfwR37GT6r6FWUPguq6sUmNGSMV2Wr761oQoBxwGGa6DR5o1DC9g==", "dev": true, "requires": { "@types/connect": "*", "@types/node": "*" } }, "@types/bonjour": { "version": "3.5.13", "resolved": "https://registry.npmjs.org/@types/bonjour/-/bonjour-3.5.13.tgz", "integrity": "sha512-z9fJ5Im06zvUL548KvYNecEVlA7cVDkGUi6kZusb04mpyEFKCIZJvloCcmpmLaIahDpOQGHaHmG6imtPMmPXGQ==", "dev": true, "requires": { "@types/node": "*" } }, "@types/connect": { "version": "3.4.35", "resolved": "https://registry.npmjs.org/@types/connect/-/connect-3.4.35.tgz", "integrity": "sha512-cdeYyv4KWoEgpBISTxWvqYsVy444DOqehiF3fM3ne10AmJ62RSyNkUnxMJXHQWRQQX2eR94m5y1IZyDwBjV9FQ==", "dev": true, "requires": { "@types/node": "*" } }, "@types/connect-history-api-fallback": { "version": "1.5.4", "resolved": "https://registry.npmjs.org/@types/connect-history-api-fallback/-/connect-history-api-fallback-1.5.4.tgz", "integrity": "sha512-n6Cr2xS1h4uAulPRdlw6Jl6s1oG8KrVilPN2yUITEs+K48EzMJJ3W1xy8K5eWuFvjp3R74AOIGSmp2UfBJ8HFw==", "dev": true, "requires": { "@types/express-serve-static-core": "*", "@types/node": "*" } }, "@types/estree": { "version": "1.0.6", "resolved": "https://registry.npmjs.org/@types/estree/-/estree-1.0.6.tgz", "integrity": "sha512-AYnb1nQyY49te+VRAVgmzfcgjYS91mY5P0TKUDCLEM+gNnA+3T6rWITXRLYCpahpqSQbN5cE+gHpnPyXjHWxcw==", "dev": true }, "@types/express": { "version": "4.17.22", "resolved": "https://registry.npmjs.org/@types/express/-/express-4.17.22.tgz", "integrity": "sha512-eZUmSnhRX9YRSkplpz0N+k6NljUUn5l3EWZIKZvYzhvMphEuNiyyy1viH/ejgt66JWgALwC/gtSUAeQKtSwW/w==", "dev": true, "requires": { "@types/body-parser": "*", "@types/express-serve-static-core": "^4.17.33", "@types/qs": "*", "@types/serve-static": 
"*" } }, "@types/express-serve-static-core": { "version": "4.19.6", "resolved": "https://registry.npmjs.org/@types/express-serve-static-core/-/express-serve-static-core-4.19.6.tgz", "integrity": "sha512-N4LZ2xG7DatVqhCZzOGb1Yi5lMbXSZcmdLDe9EzSndPV2HpWYWzRbaerl2n27irrm94EPpprqa8KpskPT085+A==", "dev": true, "requires": { "@types/node": "*", "@types/qs": "*", "@types/range-parser": "*", "@types/send": "*" } }, "@types/http-errors": { "version": "2.0.4", "resolved": "https://registry.npmjs.org/@types/http-errors/-/http-errors-2.0.4.tgz", "integrity": "sha512-D0CFMMtydbJAegzOyHjtiKPLlvnm3iTZyZRSZoLq2mRhDdmLfIWOCYPfQJ4cu2erKghU++QvjcUjp/5h7hESpA==", "dev": true }, "@types/http-proxy": { "version": "1.17.9", "resolved": "https://registry.npmjs.org/@types/http-proxy/-/http-proxy-1.17.9.tgz", "integrity": "sha512-QsbSjA/fSk7xB+UXlCT3wHBy5ai9wOcNDWwZAtud+jXhwOM3l+EYZh8Lng4+/6n8uar0J7xILzqftJdJ/Wdfkw==", "dev": true, "requires": { "@types/node": "*" } }, "@types/json-schema": { "version": "7.0.11", "resolved": "https://registry.npmjs.org/@types/json-schema/-/json-schema-7.0.11.tgz", "integrity": "sha512-wOuvG1SN4Us4rez+tylwwwCV1psiNVOkJeM3AUWUNWg/jDQY2+HE/444y5gc+jBmRqASOm2Oeh5c1axHobwRKQ==", "dev": true }, "@types/mime": { "version": "1.3.5", "resolved": "https://registry.npmjs.org/@types/mime/-/mime-1.3.5.tgz", "integrity": "sha512-/pyBZWSLD2n0dcHE3hq8s8ZvcETHtEuF+3E7XVt0Ig2nvsVQXdghHVcEkIWjy9A0wKfTn97a/PSDYohKIlnP/w==", "dev": true }, "@types/node": { "version": "18.7.13", "resolved": "https://registry.npmjs.org/@types/node/-/node-18.7.13.tgz", "integrity": "sha512-46yIhxSe5xEaJZXWdIBP7GU4HDTG8/eo0qd9atdiL+lFpA03y8KS+lkTN834TWJj5767GbWv4n/P6efyTFt1Dw==", "dev": true }, "@types/node-forge": { "version": "1.3.11", "resolved": "https://registry.npmjs.org/@types/node-forge/-/node-forge-1.3.11.tgz", "integrity": "sha512-FQx220y22OKNTqaByeBGqHWYz4cl94tpcxeFdvBo3wjG6XPBuZ0BNgNZRV5J5TFmmcsJ4IzsLkmGRiQbnYsBEQ==", "dev": true, "requires": { "@types/node": "*" } }, "@types/qs": { "version": "6.9.7", "resolved": "https://registry.npmjs.org/@types/qs/-/qs-6.9.7.tgz", "integrity": "sha512-FGa1F62FT09qcrueBA6qYTrJPVDzah9a+493+o2PCXsesWHIn27G98TsSMs3WPNbZIEj4+VJf6saSFpvD+3Zsw==", "dev": true }, "@types/range-parser": { "version": "1.2.4", "resolved": "https://registry.npmjs.org/@types/range-parser/-/range-parser-1.2.4.tgz", "integrity": "sha512-EEhsLsD6UsDM1yFhAvy0Cjr6VwmpMWqFBCb9w07wVugF7w9nfajxLuVmngTIpgS6svCnm6Vaw+MZhoDCKnOfsw==", "dev": true }, "@types/retry": { "version": "0.12.2", "resolved": "https://registry.npmjs.org/@types/retry/-/retry-0.12.2.tgz", "integrity": "sha512-XISRgDJ2Tc5q4TRqvgJtzsRkFYNJzZrhTdtMoGVBttwzzQJkPnS3WWTFc7kuDRoPtPakl+T+OfdEUjYJj7Jbow==", "dev": true }, "@types/send": { "version": "0.17.4", "resolved": "https://registry.npmjs.org/@types/send/-/send-0.17.4.tgz", "integrity": "sha512-x2EM6TJOybec7c52BX0ZspPodMsQUd5L6PRwOunVyVUhXiBSKf3AezDL8Dgvgt5o0UfKNfuA0eMLr2wLT4AiBA==", "dev": true, "requires": { "@types/mime": "^1", "@types/node": "*" } }, "@types/serve-index": { "version": "1.9.4", "resolved": "https://registry.npmjs.org/@types/serve-index/-/serve-index-1.9.4.tgz", "integrity": "sha512-qLpGZ/c2fhSs5gnYsQxtDEq3Oy8SXPClIXkW5ghvAvsNuVSA8k+gCONcUCS/UjLEYvYps+e8uBtfgXgvhwfNug==", "dev": true, "requires": { "@types/express": "*" } }, "@types/serve-static": { "version": "1.15.7", "resolved": "https://registry.npmjs.org/@types/serve-static/-/serve-static-1.15.7.tgz", "integrity": "sha512-W8Ym+h8nhuRwaKPaDw34QUkwsGi6Rc4yYqvKFo5rm2FUEhCFbzVWrxXUxuKK8TASjWsysJY0nsmNCGhCOIsrOw==", "dev": 
true, "requires": { "@types/http-errors": "*", "@types/node": "*", "@types/send": "*" } }, "@types/sockjs": { "version": "0.3.36", "resolved": "https://registry.npmjs.org/@types/sockjs/-/sockjs-0.3.36.tgz", "integrity": "sha512-MK9V6NzAS1+Ud7JV9lJLFqW85VbC9dq3LmwZCuBe4wBDgKC0Kj/jd8Xl+nSviU+Qc3+m7umHHyHg//2KSa0a0Q==", "dev": true, "requires": { "@types/node": "*" } }, "@types/ws": { "version": "8.18.1", "resolved": "https://registry.npmjs.org/@types/ws/-/ws-8.18.1.tgz", "integrity": "sha512-ThVF6DCVhA8kUGy+aazFQ4kXQ7E1Ty7A3ypFOe0IcJV8O/M511G99AW24irKrW56Wt44yG9+ij8FaqoBGkuBXg==", "dev": true, "requires": { "@types/node": "*" } }, "@webassemblyjs/ast": { "version": "1.12.1", "resolved": "https://registry.npmjs.org/@webassemblyjs/ast/-/ast-1.12.1.tgz", "integrity": "sha512-EKfMUOPRRUTy5UII4qJDGPpqfwjOmZ5jeGFwid9mnoqIFK+e0vqoi1qH56JpmZSzEL53jKnNzScdmftJyG5xWg==", "dev": true, "requires": { "@webassemblyjs/helper-numbers": "1.11.6", "@webassemblyjs/helper-wasm-bytecode": "1.11.6" } }, "@webassemblyjs/floating-point-hex-parser": { "version": "1.11.6", "resolved": "https://registry.npmjs.org/@webassemblyjs/floating-point-hex-parser/-/floating-point-hex-parser-1.11.6.tgz", "integrity": "sha512-ejAj9hfRJ2XMsNHk/v6Fu2dGS+i4UaXBXGemOfQ/JfQ6mdQg/WXtwleQRLLS4OvfDhv8rYnVwH27YJLMyYsxhw==", "dev": true }, "@webassemblyjs/helper-api-error": { "version": "1.11.6", "resolved": "https://registry.npmjs.org/@webassemblyjs/helper-api-error/-/helper-api-error-1.11.6.tgz", "integrity": "sha512-o0YkoP4pVu4rN8aTJgAyj9hC2Sv5UlkzCHhxqWj8butaLvnpdc2jOwh4ewE6CX0txSfLn/UYaV/pheS2Txg//Q==", "dev": true }, "@webassemblyjs/helper-buffer": { "version": "1.12.1", "resolved": "https://registry.npmjs.org/@webassemblyjs/helper-buffer/-/helper-buffer-1.12.1.tgz", "integrity": "sha512-nzJwQw99DNDKr9BVCOZcLuJJUlqkJh+kVzVl6Fmq/tI5ZtEyWT1KZMyOXltXLZJmDtvLCDgwsyrkohEtopTXCw==", "dev": true }, "@webassemblyjs/helper-numbers": { "version": "1.11.6", "resolved": "https://registry.npmjs.org/@webassemblyjs/helper-numbers/-/helper-numbers-1.11.6.tgz", "integrity": "sha512-vUIhZ8LZoIWHBohiEObxVm6hwP034jwmc9kuq5GdHZH0wiLVLIPcMCdpJzG4C11cHoQ25TFIQj9kaVADVX7N3g==", "dev": true, "requires": { "@webassemblyjs/floating-point-hex-parser": "1.11.6", "@webassemblyjs/helper-api-error": "1.11.6", "@xtuc/long": "4.2.2" } }, "@webassemblyjs/helper-wasm-bytecode": { "version": "1.11.6", "resolved": "https://registry.npmjs.org/@webassemblyjs/helper-wasm-bytecode/-/helper-wasm-bytecode-1.11.6.tgz", "integrity": "sha512-sFFHKwcmBprO9e7Icf0+gddyWYDViL8bpPjJJl0WHxCdETktXdmtWLGVzoHbqUcY4Be1LkNfwTmXOJUFZYSJdA==", "dev": true }, "@webassemblyjs/helper-wasm-section": { "version": "1.12.1", "resolved": "https://registry.npmjs.org/@webassemblyjs/helper-wasm-section/-/helper-wasm-section-1.12.1.tgz", "integrity": "sha512-Jif4vfB6FJlUlSbgEMHUyk1j234GTNG9dBJ4XJdOySoj518Xj0oGsNi59cUQF4RRMS9ouBUxDDdyBVfPTypa5g==", "dev": true, "requires": { "@webassemblyjs/ast": "1.12.1", "@webassemblyjs/helper-buffer": "1.12.1", "@webassemblyjs/helper-wasm-bytecode": "1.11.6", "@webassemblyjs/wasm-gen": "1.12.1" } }, "@webassemblyjs/ieee754": { "version": "1.11.6", "resolved": "https://registry.npmjs.org/@webassemblyjs/ieee754/-/ieee754-1.11.6.tgz", "integrity": "sha512-LM4p2csPNvbij6U1f19v6WR56QZ8JcHg3QIJTlSwzFcmx6WSORicYj6I63f9yU1kEUtrpG+kjkiIAkevHpDXrg==", "dev": true, "requires": { "@xtuc/ieee754": "^1.2.0" } }, "@webassemblyjs/leb128": { "version": "1.11.6", "resolved": "https://registry.npmjs.org/@webassemblyjs/leb128/-/leb128-1.11.6.tgz", "integrity": 
"sha512-m7a0FhE67DQXgouf1tbN5XQcdWoNgaAuoULHIfGFIEVKA6tu/edls6XnIlkmS6FrXAquJRPni3ZZKjw6FSPjPQ==", "dev": true, "requires": { "@xtuc/long": "4.2.2" } }, "@webassemblyjs/utf8": { "version": "1.11.6", "resolved": "https://registry.npmjs.org/@webassemblyjs/utf8/-/utf8-1.11.6.tgz", "integrity": "sha512-vtXf2wTQ3+up9Zsg8sa2yWiQpzSsMyXj0qViVP6xKGCUT8p8YJ6HqI7l5eCnWx1T/FYdsv07HQs2wTFbbof/RA==", "dev": true }, "@webassemblyjs/wasm-edit": { "version": "1.12.1", "resolved": "https://registry.npmjs.org/@webassemblyjs/wasm-edit/-/wasm-edit-1.12.1.tgz", "integrity": "sha512-1DuwbVvADvS5mGnXbE+c9NfA8QRcZ6iKquqjjmR10k6o+zzsRVesil54DKexiowcFCPdr/Q0qaMgB01+SQ1u6g==", "dev": true, "requires": { "@webassemblyjs/ast": "1.12.1", "@webassemblyjs/helper-buffer": "1.12.1", "@webassemblyjs/helper-wasm-bytecode": "1.11.6", "@webassemblyjs/helper-wasm-section": "1.12.1", "@webassemblyjs/wasm-gen": "1.12.1", "@webassemblyjs/wasm-opt": "1.12.1", "@webassemblyjs/wasm-parser": "1.12.1", "@webassemblyjs/wast-printer": "1.12.1" } }, "@webassemblyjs/wasm-gen": { "version": "1.12.1", "resolved": "https://registry.npmjs.org/@webassemblyjs/wasm-gen/-/wasm-gen-1.12.1.tgz", "integrity": "sha512-TDq4Ojh9fcohAw6OIMXqiIcTq5KUXTGRkVxbSo1hQnSy6lAM5GSdfwWeSxpAo0YzgsgF182E/U0mDNhuA0tW7w==", "dev": true, "requires": { "@webassemblyjs/ast": "1.12.1", "@webassemblyjs/helper-wasm-bytecode": "1.11.6", "@webassemblyjs/ieee754": "1.11.6", "@webassemblyjs/leb128": "1.11.6", "@webassemblyjs/utf8": "1.11.6" } }, "@webassemblyjs/wasm-opt": { "version": "1.12.1", "resolved": "https://registry.npmjs.org/@webassemblyjs/wasm-opt/-/wasm-opt-1.12.1.tgz", "integrity": "sha512-Jg99j/2gG2iaz3hijw857AVYekZe2SAskcqlWIZXjji5WStnOpVoat3gQfT/Q5tb2djnCjBtMocY/Su1GfxPBg==", "dev": true, "requires": { "@webassemblyjs/ast": "1.12.1", "@webassemblyjs/helper-buffer": "1.12.1", "@webassemblyjs/wasm-gen": "1.12.1", "@webassemblyjs/wasm-parser": "1.12.1" } }, "@webassemblyjs/wasm-parser": { "version": "1.12.1", "resolved": "https://registry.npmjs.org/@webassemblyjs/wasm-parser/-/wasm-parser-1.12.1.tgz", "integrity": "sha512-xikIi7c2FHXysxXe3COrVUPSheuBtpcfhbpFj4gmu7KRLYOzANztwUU0IbsqvMqzuNK2+glRGWCEqZo1WCLyAQ==", "dev": true, "requires": { "@webassemblyjs/ast": "1.12.1", "@webassemblyjs/helper-api-error": "1.11.6", "@webassemblyjs/helper-wasm-bytecode": "1.11.6", "@webassemblyjs/ieee754": "1.11.6", "@webassemblyjs/leb128": "1.11.6", "@webassemblyjs/utf8": "1.11.6" } }, "@webassemblyjs/wast-printer": { "version": "1.12.1", "resolved": "https://registry.npmjs.org/@webassemblyjs/wast-printer/-/wast-printer-1.12.1.tgz", "integrity": "sha512-+X4WAlOisVWQMikjbcvY2e0rwPsKQ9F688lksZhBcPycBBuii3O7m8FACbDMWDojpAqvjIncrG8J0XHKyQfVeA==", "dev": true, "requires": { "@webassemblyjs/ast": "1.12.1", "@xtuc/long": "4.2.2" } }, "@webpack-cli/configtest": { "version": "2.0.1", "resolved": "https://registry.npmjs.org/@webpack-cli/configtest/-/configtest-2.0.1.tgz", "integrity": "sha512-njsdJXJSiS2iNbQVS0eT8A/KPnmyH4pv1APj2K0d1wrZcBLw+yppxOy4CGqa0OxDJkzfL/XELDhD8rocnIwB5A==", "dev": true, "requires": {} }, "@webpack-cli/info": { "version": "2.0.1", "resolved": "https://registry.npmjs.org/@webpack-cli/info/-/info-2.0.1.tgz", "integrity": "sha512-fE1UEWTwsAxRhrJNikE7v4EotYflkEhBL7EbajfkPlf6E37/2QshOy/D48Mw8G5XMFlQtS6YV42vtbG9zBpIQA==", "dev": true, "requires": {} }, "@webpack-cli/serve": { "version": "2.0.1", "resolved": "https://registry.npmjs.org/@webpack-cli/serve/-/serve-2.0.1.tgz", "integrity": 
"sha512-0G7tNyS+yW8TdgHwZKlDWYXFA6OJQnoLCQvYKkQP0Q2X205PSQ6RNUj0M+1OB/9gRQaUZ/ccYfaxd0nhaWKfjw==", "dev": true, "requires": {} }, "@xtuc/ieee754": { "version": "1.2.0", "resolved": "https://registry.npmjs.org/@xtuc/ieee754/-/ieee754-1.2.0.tgz", "integrity": "sha512-DX8nKgqcGwsc0eJSqYt5lwP4DH5FlHnmuWWBRy7X0NcaGR0ZtuyeESgMwTYVEtxmsNGY+qit4QYT/MIYTOTPeA==", "dev": true }, "@xtuc/long": { "version": "4.2.2", "resolved": "https://registry.npmjs.org/@xtuc/long/-/long-4.2.2.tgz", "integrity": "sha512-NuHqBY1PB/D8xU6s/thBgOAiAP7HOYDQ32+BFZILJ8ivkUkAHQnWfn6WhL79Owj1qmUnoN/YPhktdIoucipkAQ==", "dev": true }, "accepts": { "version": "1.3.8", "resolved": "https://registry.npmjs.org/accepts/-/accepts-1.3.8.tgz", "integrity": "sha512-PYAthTa2m2VKxuvSD3DPC/Gy+U+sOA1LAuT8mkmRuvw+NACSaeXEQ+NHcVF7rONl6qcaxV3Uuemwawk+7+SJLw==", "dev": true, "requires": { "mime-types": "~2.1.34", "negotiator": "0.6.3" } }, "acorn": { "version": "8.12.1", "resolved": "https://registry.npmjs.org/acorn/-/acorn-8.12.1.tgz", "integrity": "sha512-tcpGyI9zbizT9JbV6oYE477V6mTlXvvi0T0G3SNIYE2apm/G5huBa1+K89VGeovbg+jycCrfhl3ADxErOuO6Jg==", "dev": true }, "acorn-import-attributes": { "version": "1.9.5", "resolved": "https://registry.npmjs.org/acorn-import-attributes/-/acorn-import-attributes-1.9.5.tgz", "integrity": "sha512-n02Vykv5uA3eHGM/Z2dQrcD56kL8TyDb2p1+0P83PClMnC/nc+anbQRhIOWnSq4Ke/KvDPrY3C9hDtC/A3eHnQ==", "dev": true, "requires": {} }, "ajv": { "version": "8.11.2", "resolved": "https://registry.npmjs.org/ajv/-/ajv-8.11.2.tgz", "integrity": "sha512-E4bfmKAhGiSTvMfL1Myyycaub+cUEU2/IvpylXkUu7CHBkBj1f/ikdzbD7YQ6FKUbixDxeYvB/xY4fvyroDlQg==", "dev": true, "requires": { "fast-deep-equal": "^3.1.1", "json-schema-traverse": "^1.0.0", "require-from-string": "^2.0.2", "uri-js": "^4.2.2" } }, "ajv-formats": { "version": "2.1.1", "resolved": "https://registry.npmjs.org/ajv-formats/-/ajv-formats-2.1.1.tgz", "integrity": "sha512-Wx0Kx52hxE7C18hkMEggYlEifqWZtYaRgouJor+WMdPnQyEK13vgEWyVNup7SoeeoLMsr4kf5h6dOW11I15MUA==", "dev": true, "requires": { "ajv": "^8.0.0" } }, "ajv-keywords": { "version": "5.1.0", "resolved": "https://registry.npmjs.org/ajv-keywords/-/ajv-keywords-5.1.0.tgz", "integrity": "sha512-YCS/JNFAUyr5vAuhk1DWm1CBxRHW9LbJ2ozWeemrIqpbsqKjHVxYPyi5GC0rjZIT5JxJ3virVTS8wk4i/Z+krw==", "dev": true, "requires": { "fast-deep-equal": "^3.1.3" } }, "ansi-html-community": { "version": "0.0.8", "resolved": "https://registry.npmjs.org/ansi-html-community/-/ansi-html-community-0.0.8.tgz", "integrity": "sha512-1APHAyr3+PCamwNw3bXCPp4HFLONZt/yIH0sZp0/469KWNTEy+qN5jQ3GVX6DMZ1UXAi34yVwtTeaG/HpBuuzw==", "dev": true }, "anymatch": { "version": "3.1.3", "resolved": "https://registry.npmjs.org/anymatch/-/anymatch-3.1.3.tgz", "integrity": "sha512-KMReFUr0B4t+D+OBkjR3KYqvocp2XaSzO55UcB6mgQMd3KbcE+mWTyvVV7D/zsdEbNnV6acZUutkiHQXvTr1Rw==", "dev": true, "requires": { "normalize-path": "^3.0.0", "picomatch": "^2.0.4" } }, "array-flatten": { "version": "1.1.1", "resolved": "https://registry.npmjs.org/array-flatten/-/array-flatten-1.1.1.tgz", "integrity": "sha512-PCVAQswWemu6UdxsDFFX/+gVeYqKAod3D3UVm91jHwynguOwAvYPhx8nNlM++NqRcK6CxxpUafjmhIdKiHibqg==", "dev": true }, "batch": { "version": "0.6.1", "resolved": "https://registry.npmjs.org/batch/-/batch-0.6.1.tgz", "integrity": "sha512-x+VAiMRL6UPkx+kudNvxTl6hB2XNNCG2r+7wixVfIYwu/2HKRXimwQyaumLjMveWvT2Hkd/cAJw+QBMfJ/EKVw==", "dev": true }, "binary-extensions": { "version": "2.3.0", "resolved": "https://registry.npmjs.org/binary-extensions/-/binary-extensions-2.3.0.tgz", "integrity": 
"sha512-Ceh+7ox5qe7LJuLHoY0feh3pHuUDHAcRUeyL2VYghZwfpkNIy/+8Ocg0a3UuSoYzavmylwuLWQOf3hl0jjMMIw==", "dev": true }, "body-parser": { "version": "1.20.3", "resolved": "https://registry.npmjs.org/body-parser/-/body-parser-1.20.3.tgz", "integrity": "sha512-7rAxByjUMqQ3/bHJy7D6OGXvx/MMc4IqBn/X0fcM1QUcAItpZrBEYhWGem+tzXH90c+G01ypMcYJBO9Y30203g==", "dev": true, "requires": { "bytes": "3.1.2", "content-type": "~1.0.5", "debug": "2.6.9", "depd": "2.0.0", "destroy": "1.2.0", "http-errors": "2.0.0", "iconv-lite": "0.4.24", "on-finished": "2.4.1", "qs": "6.13.0", "raw-body": "2.5.2", "type-is": "~1.6.18", "unpipe": "1.0.0" } }, "bonjour-service": { "version": "1.3.0", "resolved": "https://registry.npmjs.org/bonjour-service/-/bonjour-service-1.3.0.tgz", "integrity": "sha512-3YuAUiSkWykd+2Azjgyxei8OWf8thdn8AITIog2M4UICzoqfjlqr64WIjEXZllf/W6vK1goqleSR6brGomxQqA==", "dev": true, "requires": { "fast-deep-equal": "^3.1.3", "multicast-dns": "^7.2.5" } }, "braces": { "version": "3.0.2", "resolved": "https://registry.npmjs.org/braces/-/braces-3.0.2.tgz", "integrity": "sha512-b8um+L1RzM3WDSzvhm6gIz1yfTbBt6YTlcEKAvsmqCZZFw46z626lVj9j1yEPW33H5H+lBQpZMP1k8l+78Ha0A==", "dev": true, "requires": { "fill-range": "^7.0.1" } }, "browserslist": { "version": "4.24.0", "resolved": "https://registry.npmjs.org/browserslist/-/browserslist-4.24.0.tgz", "integrity": "sha512-Rmb62sR1Zpjql25eSanFGEhAxcFwfA1K0GuQcLoaJBAcENegrQut3hYdhXFF1obQfiDyqIW/cLM5HSJ/9k884A==", "dev": true, "requires": { "caniuse-lite": "^1.0.30001663", "electron-to-chromium": "^1.5.28", "node-releases": "^2.0.18", "update-browserslist-db": "^1.1.0" } }, "buffer-from": { "version": "1.1.2", "resolved": "https://registry.npmjs.org/buffer-from/-/buffer-from-1.1.2.tgz", "integrity": "sha512-E+XQCRwSbaaiChtv6k6Dwgc+bx+Bs6vuKJHHl5kox/BaKbhiXzqQOwK4cO22yElGp2OCmjwVhT3HmxgyPGnJfQ==", "dev": true }, "bundle-name": { "version": "4.1.0", "resolved": "https://registry.npmjs.org/bundle-name/-/bundle-name-4.1.0.tgz", "integrity": "sha512-tjwM5exMg6BGRI+kNmTntNsvdZS1X8BFYS6tnJ2hdH0kVxM6/eVZ2xy+FqStSWvYmtfFMDLIxurorHwDKfDz5Q==", "dev": true, "requires": { "run-applescript": "^7.0.0" } }, "bytes": { "version": "3.1.2", "resolved": "https://registry.npmjs.org/bytes/-/bytes-3.1.2.tgz", "integrity": "sha512-/Nf7TyzTx6S3yRJObOAV7956r8cr2+Oj8AC5dt8wSP3BQAoeX58NoHyCU8P8zGkNXStjTSi6fzO6F0pBdcYbEg==", "dev": true }, "call-bind-apply-helpers": { "version": "1.0.2", "resolved": "https://registry.npmjs.org/call-bind-apply-helpers/-/call-bind-apply-helpers-1.0.2.tgz", "integrity": "sha512-Sp1ablJ0ivDkSzjcaJdxEunN5/XvksFJ2sMBFfq6x0ryhQV/2b/KwFe21cMpmHtPOSij8K99/wSfoEuTObmuMQ==", "dev": true, "requires": { "es-errors": "^1.3.0", "function-bind": "^1.1.2" } }, "call-bound": { "version": "1.0.4", "resolved": "https://registry.npmjs.org/call-bound/-/call-bound-1.0.4.tgz", "integrity": "sha512-+ys997U96po4Kx/ABpBCqhA9EuxJaQWDQg7295H4hBphv3IZg0boBKuwYpt4YXp6MZ5AmZQnU/tyMTlRpaSejg==", "dev": true, "requires": { "call-bind-apply-helpers": "^1.0.2", "get-intrinsic": "^1.3.0" } }, "caniuse-lite": { "version": "1.0.30001664", "resolved": "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30001664.tgz", "integrity": "sha512-AmE7k4dXiNKQipgn7a2xg558IRqPN3jMQY/rOsbxDhrd0tyChwbITBfiwtnqz8bi2M5mIWbxAYBvk7W7QBUS2g==", "dev": true }, "chokidar": { "version": "3.6.0", "resolved": "https://registry.npmjs.org/chokidar/-/chokidar-3.6.0.tgz", "integrity": "sha512-7VT13fmjotKpGipCW9JEQAusEPE+Ei8nl6/g4FBAmIm0GOOLMua9NDDo/DWp0ZAxCr3cPq5ZpBqmPAQgDda2Pw==", "dev": true, "requires": { "anymatch": "~3.1.2", 
"braces": "~3.0.2", "fsevents": "~2.3.2", "glob-parent": "~5.1.2", "is-binary-path": "~2.1.0", "is-glob": "~4.0.1", "normalize-path": "~3.0.0", "readdirp": "~3.6.0" }, "dependencies": { "glob-parent": { "version": "5.1.2", "resolved": "https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.2.tgz", "integrity": "sha512-AOIgSQCepiJYwP3ARnGx+5VnTu2HBYdzbGP45eLw1vr3zB3vZLeyed1sC9hnbcOc9/SrMyM5RPQrkGz4aS9Zow==", "dev": true, "requires": { "is-glob": "^4.0.1" } } } }, "chrome-trace-event": { "version": "1.0.3", "resolved": "https://registry.npmjs.org/chrome-trace-event/-/chrome-trace-event-1.0.3.tgz", "integrity": "sha512-p3KULyQg4S7NIHixdwbGX+nFHkoBiA4YQmyWtjb8XngSKV124nJmRysgAeujbUVb15vh+RvFUfCPqU7rXk+hZg==", "dev": true }, "clone-deep": { "version": "4.0.1", "resolved": "https://registry.npmjs.org/clone-deep/-/clone-deep-4.0.1.tgz", "integrity": "sha512-neHB9xuzh/wk0dIHweyAXv2aPGZIVk3pLMe+/RNzINf17fe0OG96QroktYAUm7SM1PBnzTabaLboqqxDyMU+SQ==", "dev": true, "requires": { "is-plain-object": "^2.0.4", "kind-of": "^6.0.2", "shallow-clone": "^3.0.0" } }, "colorette": { "version": "2.0.19", "resolved": "https://registry.npmjs.org/colorette/-/colorette-2.0.19.tgz", "integrity": "sha512-3tlv/dIP7FWvj3BsbHrGLJ6l/oKh1O3TcgBqMn+yyCagOxc23fyzDS6HypQbgxWbkpDnf52p1LuR4eWDQ/K9WQ==", "dev": true }, "commander": { "version": "2.20.3", "resolved": "https://registry.npmjs.org/commander/-/commander-2.20.3.tgz", "integrity": "sha512-GpVkmM8vF2vQUkj2LvZmD35JxeJOLCwJ9cUkugyk2nuhbv3+mJvpLYYt+0+USMxE+oj+ey/lJEnhZw75x/OMcQ==", "dev": true }, "compressible": { "version": "2.0.18", "resolved": "https://registry.npmjs.org/compressible/-/compressible-2.0.18.tgz", "integrity": "sha512-AF3r7P5dWxL8MxyITRMlORQNaOA2IkAFaTr4k7BUumjPtRpGDTZpl0Pb1XCO6JeDCBdp126Cgs9sMxqSjgYyRg==", "dev": true, "requires": { "mime-db": ">= 1.43.0 < 2" } }, "compression": { "version": "1.8.1", "resolved": "https://registry.npmjs.org/compression/-/compression-1.8.1.tgz", "integrity": "sha512-9mAqGPHLakhCLeNyxPkK4xVo746zQ/czLH1Ky+vkitMnWfWZps8r0qXuwhwizagCRttsL4lfG4pIOvaWLpAP0w==", "dev": true, "requires": { "bytes": "3.1.2", "compressible": "~2.0.18", "debug": "2.6.9", "negotiator": "~0.6.4", "on-headers": "~1.1.0", "safe-buffer": "5.2.1", "vary": "~1.1.2" }, "dependencies": { "negotiator": { "version": "0.6.4", "resolved": "https://registry.npmjs.org/negotiator/-/negotiator-0.6.4.tgz", "integrity": "sha512-myRT3DiWPHqho5PrJaIRyaMv2kgYf0mUVgBNOYMuCH5Ki1yEiQaf/ZJuQ62nvpc44wL5WDbTX7yGJi1Neevw8w==", "dev": true }, "safe-buffer": { "version": "5.2.1", "resolved": "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.2.1.tgz", "integrity": "sha512-rp3So07KcdmmKbGvgaNxQSJr7bGVSVk5S9Eq1F+ppbRo70+YeaDxkw5Dd8NPN+GD6bjnYm2VuPuCXmpuYvmCXQ==", "dev": true } } }, "connect-history-api-fallback": { "version": "2.0.0", "resolved": "https://registry.npmjs.org/connect-history-api-fallback/-/connect-history-api-fallback-2.0.0.tgz", "integrity": "sha512-U73+6lQFmfiNPrYbXqr6kZ1i1wiRqXnp2nhMsINseWXO8lDau0LGEffJ8kQi4EjLZympVgRdvqjAgiZ1tgzDDA==", "dev": true }, "content-disposition": { "version": "0.5.4", "resolved": "https://registry.npmjs.org/content-disposition/-/content-disposition-0.5.4.tgz", "integrity": "sha512-FveZTNuGw04cxlAiWbzi6zTAL/lhehaWbTtgluJh4/E95DqMwTmha3KZN1aAWA8cFIhHzMZUvLevkw5Rqk+tSQ==", "dev": true, "requires": { "safe-buffer": "5.2.1" }, "dependencies": { "safe-buffer": { "version": "5.2.1", "resolved": "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.2.1.tgz", "integrity": 
"sha512-rp3So07KcdmmKbGvgaNxQSJr7bGVSVk5S9Eq1F+ppbRo70+YeaDxkw5Dd8NPN+GD6bjnYm2VuPuCXmpuYvmCXQ==", "dev": true } } }, "content-type": { "version": "1.0.5", "resolved": "https://registry.npmjs.org/content-type/-/content-type-1.0.5.tgz", "integrity": "sha512-nTjqfcBFEipKdXCv4YDQWCfmcLZKm81ldF0pAopTvyrFGVbcR6P/VAAd5G7N+0tTr8QqiU0tFadD6FK4NtJwOA==", "dev": true }, "cookie": { "version": "0.7.1", "resolved": "https://registry.npmjs.org/cookie/-/cookie-0.7.1.tgz", "integrity": "sha512-6DnInpx7SJ2AK3+CTUE/ZM0vWTUboZCegxhC2xiIydHR9jNuTAASBrfEpHhiGOZw/nX51bHt6YQl8jsGo4y/0w==", "dev": true }, "cookie-signature": { "version": "1.0.6", "resolved": "https://registry.npmjs.org/cookie-signature/-/cookie-signature-1.0.6.tgz", "integrity": "sha512-QADzlaHc8icV8I7vbaJXJwod9HWYp8uCqf1xa4OfNu1T7JVxQIrUgOWtHdNDtPiywmFbiS12VjotIXLrKM3orQ==", "dev": true }, "copy-webpack-plugin": { "version": "11.0.0", "resolved": "https://registry.npmjs.org/copy-webpack-plugin/-/copy-webpack-plugin-11.0.0.tgz", "integrity": "sha512-fX2MWpamkW0hZxMEg0+mYnA40LTosOSa5TqZ9GYIBzyJa9C3QUaMPSE2xAi/buNr8u89SfD9wHSQVBzrRa/SOQ==", "dev": true, "requires": { "fast-glob": "^3.2.11", "glob-parent": "^6.0.1", "globby": "^13.1.1", "normalize-path": "^3.0.0", "schema-utils": "^4.0.0", "serialize-javascript": "^6.0.0" } }, "core-util-is": { "version": "1.0.3", "resolved": "https://registry.npmjs.org/core-util-is/-/core-util-is-1.0.3.tgz", "integrity": "sha512-ZQBvi1DcpJ4GDqanjucZ2Hj3wEO5pZDS89BWbkcrvdxksJorwUDDZamX9ldFkp9aw2lmBDLgkObEA4DWNJ9FYQ==", "dev": true }, "cross-spawn": { "version": "7.0.3", "resolved": "https://registry.npmjs.org/cross-spawn/-/cross-spawn-7.0.3.tgz", "integrity": "sha512-iRDPJKUPVEND7dHPO8rkbOnPpyDygcDFtWjpeWNCgy8WP2rXcxXL8TskReQl6OrB2G7+UJrags1q15Fudc7G6w==", "dev": true, "requires": { "path-key": "^3.1.0", "shebang-command": "^2.0.0", "which": "^2.0.1" } }, "debug": { "version": "2.6.9", "resolved": "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz", "integrity": "sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA==", "dev": true, "requires": { "ms": "2.0.0" } }, "default-browser": { "version": "5.2.1", "resolved": "https://registry.npmjs.org/default-browser/-/default-browser-5.2.1.tgz", "integrity": "sha512-WY/3TUME0x3KPYdRRxEJJvXRHV4PyPoUsxtZa78lwItwRQRHhd2U9xOscaT/YTf8uCXIAjeJOFBVEh/7FtD8Xg==", "dev": true, "requires": { "bundle-name": "^4.1.0", "default-browser-id": "^5.0.0" } }, "default-browser-id": { "version": "5.0.0", "resolved": "https://registry.npmjs.org/default-browser-id/-/default-browser-id-5.0.0.tgz", "integrity": "sha512-A6p/pu/6fyBcA1TRz/GqWYPViplrftcW2gZC9q79ngNCKAeR/X3gcEdXQHl4KNXV+3wgIJ1CPkJQ3IHM6lcsyA==", "dev": true }, "define-lazy-prop": { "version": "3.0.0", "resolved": "https://registry.npmjs.org/define-lazy-prop/-/define-lazy-prop-3.0.0.tgz", "integrity": "sha512-N+MeXYoqr3pOgn8xfyRPREN7gHakLYjhsHhWGT3fWAiL4IkAt0iDw14QiiEm2bE30c5XX5q0FtAA3CK5f9/BUg==", "dev": true }, "depd": { "version": "2.0.0", "resolved": "https://registry.npmjs.org/depd/-/depd-2.0.0.tgz", "integrity": "sha512-g7nH6P6dyDioJogAAGprGpCtVImJhpPk/roCzdb3fIh61/s/nPsfR6onyMwkCAR/OlC3yBC0lESvUoQEAssIrw==", "dev": true }, "destroy": { "version": "1.2.0", "resolved": "https://registry.npmjs.org/destroy/-/destroy-1.2.0.tgz", "integrity": "sha512-2sJGJTaXIIaR1w4iJSNoN0hnMY7Gpc/n8D4qSCJw8QqFWXf7cuAgnEHxBpweaVcPevC2l3KpjYCx3NypQQgaJg==", "dev": true }, "detect-node": { "version": "2.1.0", "resolved": "https://registry.npmjs.org/detect-node/-/detect-node-2.1.0.tgz", "integrity": 
"sha512-T0NIuQpnTvFDATNuHN5roPwSBG83rFsuO+MXXH9/3N1eFbn4wcPjttvjMLEPWJ0RGUYgQE7cGgS3tNxbqCGM7g==", "dev": true }, "dir-glob": { "version": "3.0.1", "resolved": "https://registry.npmjs.org/dir-glob/-/dir-glob-3.0.1.tgz", "integrity": "sha512-WkrWp9GR4KXfKGYzOLmTuGVi1UWFfws377n9cc55/tb6DuqyF6pcQ5AbiHEshaDpY9v6oaSr2XCDidGmMwdzIA==", "dev": true, "requires": { "path-type": "^4.0.0" } }, "dns-packet": { "version": "5.6.1", "resolved": "https://registry.npmjs.org/dns-packet/-/dns-packet-5.6.1.tgz", "integrity": "sha512-l4gcSouhcgIKRvyy99RNVOgxXiicE+2jZoNmaNmZ6JXiGajBOJAesk1OBlJuM5k2c+eudGdLxDqXuPCKIj6kpw==", "dev": true, "requires": { "@leichtgewicht/ip-codec": "^2.0.1" } }, "dunder-proto": { "version": "1.0.1", "resolved": "https://registry.npmjs.org/dunder-proto/-/dunder-proto-1.0.1.tgz", "integrity": "sha512-KIN/nDJBQRcXw0MLVhZE9iQHmG68qAVIBg9CqmUYjmQIhgij9U5MFvrqkUL5FbtyyzZuOeOt0zdeRe4UY7ct+A==", "dev": true, "requires": { "call-bind-apply-helpers": "^1.0.1", "es-errors": "^1.3.0", "gopd": "^1.2.0" } }, "ee-first": { "version": "1.1.1", "resolved": "https://registry.npmjs.org/ee-first/-/ee-first-1.1.1.tgz", "integrity": "sha512-WMwm9LhRUo+WUaRN+vRuETqG89IgZphVSNkdFgeb6sS/E4OrDIN7t48CAewSHXc6C8lefD8KKfr5vY61brQlow==", "dev": true }, "electron-to-chromium": { "version": "1.5.29", "resolved": "https://registry.npmjs.org/electron-to-chromium/-/electron-to-chromium-1.5.29.tgz", "integrity": "sha512-PF8n2AlIhCKXQ+gTpiJi0VhcHDb69kYX4MtCiivctc2QD3XuNZ/XIOlbGzt7WAjjEev0TtaH6Cu3arZExm5DOw==", "dev": true }, "encodeurl": { "version": "2.0.0", "resolved": "https://registry.npmjs.org/encodeurl/-/encodeurl-2.0.0.tgz", "integrity": "sha512-Q0n9HRi4m6JuGIV1eFlmvJB7ZEVxu93IrMyiMsGC0lrMJMWzRgx6WGquyfQgZVb31vhGgXnfmPNNXmxnOkRBrg==", "dev": true }, "enhanced-resolve": { "version": "5.17.1", "resolved": "https://registry.npmjs.org/enhanced-resolve/-/enhanced-resolve-5.17.1.tgz", "integrity": "sha512-LMHl3dXhTcfv8gM4kEzIUeTQ+7fpdA0l2tUf34BddXPkz2A5xJ5L/Pchd5BL6rdccM9QGvu0sWZzK1Z1t4wwyg==", "dev": true, "requires": { "graceful-fs": "^4.2.4", "tapable": "^2.2.0" } }, "envinfo": { "version": "7.8.1", "resolved": "https://registry.npmjs.org/envinfo/-/envinfo-7.8.1.tgz", "integrity": "sha512-/o+BXHmB7ocbHEAs6F2EnG0ogybVVUdkRunTT2glZU9XAaGmhqskrvKwqXuDfNjEO0LZKWdejEEpnq8aM0tOaw==", "dev": true }, "es-define-property": { "version": "1.0.1", "resolved": "https://registry.npmjs.org/es-define-property/-/es-define-property-1.0.1.tgz", "integrity": "sha512-e3nRfgfUZ4rNGL232gUgX06QNyyez04KdjFrF+LTRoOXmrOgFKDg4BCdsjW8EnT69eqdYGmRpJwiPVYNrCaW3g==", "dev": true }, "es-errors": { "version": "1.3.0", "resolved": "https://registry.npmjs.org/es-errors/-/es-errors-1.3.0.tgz", "integrity": "sha512-Zf5H2Kxt2xjTvbJvP2ZWLEICxA6j+hAmMzIlypy4xcBg1vKVnx89Wy0GbS+kf5cwCVFFzdCFh2XSCFNULS6csw==", "dev": true }, "es-module-lexer": { "version": "1.5.4", "resolved": "https://registry.npmjs.org/es-module-lexer/-/es-module-lexer-1.5.4.tgz", "integrity": "sha512-MVNK56NiMrOwitFB7cqDwq0CQutbw+0BvLshJSse0MUNU+y1FC3bUS/AQg7oUng+/wKrrki7JfmwtVHkVfPLlw==", "dev": true }, "es-object-atoms": { "version": "1.1.1", "resolved": "https://registry.npmjs.org/es-object-atoms/-/es-object-atoms-1.1.1.tgz", "integrity": "sha512-FGgH2h8zKNim9ljj7dankFPcICIK9Cp5bm+c2gQSYePhpaG5+esrLODihIorn+Pe6FGJzWhXQotPv73jTaldXA==", "dev": true, "requires": { "es-errors": "^1.3.0" } }, "escalade": { "version": "3.2.0", "resolved": "https://registry.npmjs.org/escalade/-/escalade-3.2.0.tgz", "integrity": 
"sha512-WUj2qlxaQtO4g6Pq5c29GTcWGDyd8itL8zTlipgECz3JesAiiOKotd8JU6otB3PACgG6xkJUyVhboMS+bje/jA==", "dev": true }, "escape-html": { "version": "1.0.3", "resolved": "https://registry.npmjs.org/escape-html/-/escape-html-1.0.3.tgz", "integrity": "sha512-NiSupZ4OeuGwr68lGIeym/ksIZMJodUGOSCZ/FSnTxcrekbvqrgdUxlJOMpijaKZVjAJrWrGs/6Jy8OMuyj9ow==", "dev": true }, "eslint-scope": { "version": "5.1.1", "resolved": "https://registry.npmjs.org/eslint-scope/-/eslint-scope-5.1.1.tgz", "integrity": "sha512-2NxwbF/hZ0KpepYN0cNbo+FN6XoK7GaHlQhgx/hIZl6Va0bF45RQOOwhLIy8lQDbuCiadSLCBnH2CFYquit5bw==", "dev": true, "requires": { "esrecurse": "^4.3.0", "estraverse": "^4.1.1" } }, "esrecurse": { "version": "4.3.0", "resolved": "https://registry.npmjs.org/esrecurse/-/esrecurse-4.3.0.tgz", "integrity": "sha512-KmfKL3b6G+RXvP8N1vr3Tq1kL/oCFgn2NYXEtqP8/L3pKapUA4G8cFVaoF3SU323CD4XypR/ffioHmkti6/Tag==", "dev": true, "requires": { "estraverse": "^5.2.0" }, "dependencies": { "estraverse": { "version": "5.3.0", "resolved": "https://registry.npmjs.org/estraverse/-/estraverse-5.3.0.tgz", "integrity": "sha512-MMdARuVEQziNTeJD8DgMqmhwR11BRQ/cBP+pLtYdSTnf3MIO8fFeiINEbX36ZdNlfU/7A9f3gUw49B3oQsvwBA==", "dev": true } } }, "estraverse": { "version": "4.3.0", "resolved": "https://registry.npmjs.org/estraverse/-/estraverse-4.3.0.tgz", "integrity": "sha512-39nnKffWz8xN1BU/2c79n9nB9HDzo0niYUqx6xyqUnyoAnQyyWpOTdZEeiCch8BBu515t4wp9ZmgVfVhn9EBpw==", "dev": true }, "etag": { "version": "1.8.1", "resolved": "https://registry.npmjs.org/etag/-/etag-1.8.1.tgz", "integrity": "sha512-aIL5Fx7mawVa300al2BnEE4iNvo1qETxLrPI/o05L7z6go7fCw1J6EQmbK4FmJ2AS7kgVF/KEZWufBfdClMcPg==", "dev": true }, "eventemitter3": { "version": "4.0.7", "resolved": "https://registry.npmjs.org/eventemitter3/-/eventemitter3-4.0.7.tgz", "integrity": "sha512-8guHBZCwKnFhYdHr2ysuRWErTwhoN2X8XELRlrRwpmfeY2jjuUN4taQMsULKUVo1K4DvZl+0pgfyoysHxvmvEw==", "dev": true }, "events": { "version": "3.3.0", "resolved": "https://registry.npmjs.org/events/-/events-3.3.0.tgz", "integrity": "sha512-mQw+2fkQbALzQ7V0MY0IqdnXNOeTtP4r0lN9z7AAawCXgqea7bDii20AYrIBrFd/Hx0M2Ocz6S111CaFkUcb0Q==", "dev": true }, "express": { "version": "4.21.2", "resolved": "https://registry.npmjs.org/express/-/express-4.21.2.tgz", "integrity": "sha512-28HqgMZAmih1Czt9ny7qr6ek2qddF4FclbMzwhCREB6OFfH+rXAnuNCwo1/wFvrtbgsQDb4kSbX9de9lFbrXnA==", "dev": true, "requires": { "accepts": "~1.3.8", "array-flatten": "1.1.1", "body-parser": "1.20.3", "content-disposition": "0.5.4", "content-type": "~1.0.4", "cookie": "0.7.1", "cookie-signature": "1.0.6", "debug": "2.6.9", "depd": "2.0.0", "encodeurl": "~2.0.0", "escape-html": "~1.0.3", "etag": "~1.8.1", "finalhandler": "1.3.1", "fresh": "0.5.2", "http-errors": "2.0.0", "merge-descriptors": "1.0.3", "methods": "~1.1.2", "on-finished": "2.4.1", "parseurl": "~1.3.3", "path-to-regexp": "0.1.12", "proxy-addr": "~2.0.7", "qs": "6.13.0", "range-parser": "~1.2.1", "safe-buffer": "5.2.1", "send": "0.19.0", "serve-static": "1.16.2", "setprototypeof": "1.2.0", "statuses": "2.0.1", "type-is": "~1.6.18", "utils-merge": "1.0.1", "vary": "~1.1.2" }, "dependencies": { "safe-buffer": { "version": "5.2.1", "resolved": "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.2.1.tgz", "integrity": "sha512-rp3So07KcdmmKbGvgaNxQSJr7bGVSVk5S9Eq1F+ppbRo70+YeaDxkw5Dd8NPN+GD6bjnYm2VuPuCXmpuYvmCXQ==", "dev": true } } }, "fast-deep-equal": { "version": "3.1.3", "resolved": "https://registry.npmjs.org/fast-deep-equal/-/fast-deep-equal-3.1.3.tgz", "integrity": 
"sha512-f3qQ9oQy9j2AhBe/H9VC91wLmKBCCU/gDOnKNAYG5hswO7BLKj09Hc5HYNz9cGI++xlpDCIgDaitVs03ATR84Q==", "dev": true }, "fast-glob": { "version": "3.2.12", "resolved": "https://registry.npmjs.org/fast-glob/-/fast-glob-3.2.12.tgz", "integrity": "sha512-DVj4CQIYYow0BlaelwK1pHl5n5cRSJfM60UA0zK891sVInoPri2Ekj7+e1CT3/3qxXenpI+nBBmQAcJPJgaj4w==", "dev": true, "requires": { "@nodelib/fs.stat": "^2.0.2", "@nodelib/fs.walk": "^1.2.3", "glob-parent": "^5.1.2", "merge2": "^1.3.0", "micromatch": "^4.0.4" }, "dependencies": { "glob-parent": { "version": "5.1.2", "resolved": "https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.2.tgz", "integrity": "sha512-AOIgSQCepiJYwP3ARnGx+5VnTu2HBYdzbGP45eLw1vr3zB3vZLeyed1sC9hnbcOc9/SrMyM5RPQrkGz4aS9Zow==", "dev": true, "requires": { "is-glob": "^4.0.1" } } } }, "fast-json-stable-stringify": { "version": "2.1.0", "resolved": "https://registry.npmjs.org/fast-json-stable-stringify/-/fast-json-stable-stringify-2.1.0.tgz", "integrity": "sha512-lhd/wF+Lk98HZoTCtlVraHtfh5XYijIjalXck7saUtuanSDyLMxnHhSXEDJqHxD7msR8D0uCmqlkwjCV8xvwHw==", "dev": true }, "fastest-levenshtein": { "version": "1.0.16", "resolved": "https://registry.npmjs.org/fastest-levenshtein/-/fastest-levenshtein-1.0.16.tgz", "integrity": "sha512-eRnCtTTtGZFpQCwhJiUOuxPQWRXVKYDn0b2PeHfXL6/Zi53SLAzAHfVhVWK2AryC/WH05kGfxhFIPvTF0SXQzg==", "dev": true }, "fastq": { "version": "1.15.0", "resolved": "https://registry.npmjs.org/fastq/-/fastq-1.15.0.tgz", "integrity": "sha512-wBrocU2LCXXa+lWBt8RoIRD89Fi8OdABODa/kEnyeyjS5aZO5/GNvI5sEINADqP/h8M29UHTHUb53sUu5Ihqdw==", "dev": true, "requires": { "reusify": "^1.0.4" } }, "faye-websocket": { "version": "0.11.4", "resolved": "https://registry.npmjs.org/faye-websocket/-/faye-websocket-0.11.4.tgz", "integrity": "sha512-CzbClwlXAuiRQAlUyfqPgvPoNKTckTPGfwZV4ZdAhVcP2lh9KUxJg2b5GkE7XbjKQ3YJnQ9z6D9ntLAlB+tP8g==", "dev": true, "requires": { "websocket-driver": ">=0.5.1" } }, "fill-range": { "version": "7.0.1", "resolved": "https://registry.npmjs.org/fill-range/-/fill-range-7.0.1.tgz", "integrity": "sha512-qOo9F+dMUmC2Lcb4BbVvnKJxTPjCm+RRpe4gDuGrzkL7mEVl/djYSu2OdQ2Pa302N4oqkSg9ir6jaLWJ2USVpQ==", "dev": true, "requires": { "to-regex-range": "^5.0.1" } }, "finalhandler": { "version": "1.3.1", "resolved": "https://registry.npmjs.org/finalhandler/-/finalhandler-1.3.1.tgz", "integrity": "sha512-6BN9trH7bp3qvnrRyzsBz+g3lZxTNZTbVO2EV1CS0WIcDbawYVdYvGflME/9QP0h0pYlCDBCTjYa9nZzMDpyxQ==", "dev": true, "requires": { "debug": "2.6.9", "encodeurl": "~2.0.0", "escape-html": "~1.0.3", "on-finished": "2.4.1", "parseurl": "~1.3.3", "statuses": "2.0.1", "unpipe": "~1.0.0" } }, "find-up": { "version": "4.1.0", "resolved": "https://registry.npmjs.org/find-up/-/find-up-4.1.0.tgz", "integrity": "sha512-PpOwAdQ/YlXQ2vj8a3h8IipDuYRi3wceVQQGYWxNINccq40Anw7BlsEXCMbt1Zt+OLA6Fq9suIpIWD0OsnISlw==", "dev": true, "requires": { "locate-path": "^5.0.0", "path-exists": "^4.0.0" } }, "follow-redirects": { "version": "1.15.4", "resolved": "https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.15.4.tgz", "integrity": "sha512-Cr4D/5wlrb0z9dgERpUL3LrmPKVDsETIJhaCMeDfuFYcqa5bldGV6wBsAN6X/vxlXQtFBMrXdXxdL8CbDTGniw==", "dev": true }, "forwarded": { "version": "0.2.0", "resolved": "https://registry.npmjs.org/forwarded/-/forwarded-0.2.0.tgz", "integrity": "sha512-buRG0fpBtRHSTCOASe6hD258tEubFoRLb4ZNA6NxMVHNw2gOcwHo9wyablzMzOA5z9xA9L1KNjk/Nt6MT9aYow==", "dev": true }, "fresh": { "version": "0.5.2", "resolved": "https://registry.npmjs.org/fresh/-/fresh-0.5.2.tgz", "integrity": 
"sha512-zJ2mQYM18rEFOudeV4GShTGIQ7RbzA7ozbU9I/XBpm7kqgMywgmylMwXHxZJmkVoYkna9d2pVXVXPdYTP9ej8Q==", "dev": true }, "fsevents": { "version": "2.3.3", "resolved": "https://registry.npmjs.org/fsevents/-/fsevents-2.3.3.tgz", "integrity": "sha512-5xoDfX+fL7faATnagmWPpbFtwh/R77WmMMqqHGS65C3vvB0YHrgF+B1YmZ3441tMj5n63k0212XNoJwzlhffQw==", "dev": true, "optional": true }, "function-bind": { "version": "1.1.2", "resolved": "https://registry.npmjs.org/function-bind/-/function-bind-1.1.2.tgz", "integrity": "sha512-7XHNxH7qX9xG5mIwxkhumTox/MIRNcOgDrxWsMt2pAr23WHp6MrRlN7FBSFpCpr+oVO0F744iUgR82nJMfG2SA==", "dev": true }, "get-intrinsic": { "version": "1.3.0", "resolved": "https://registry.npmjs.org/get-intrinsic/-/get-intrinsic-1.3.0.tgz", "integrity": "sha512-9fSjSaos/fRIVIp+xSJlE6lfwhES7LNtKaCBIamHsjr2na1BiABJPo0mOjjz8GJDURarmCPGqaiVg5mfjb98CQ==", "dev": true, "requires": { "call-bind-apply-helpers": "^1.0.2", "es-define-property": "^1.0.1", "es-errors": "^1.3.0", "es-object-atoms": "^1.1.1", "function-bind": "^1.1.2", "get-proto": "^1.0.1", "gopd": "^1.2.0", "has-symbols": "^1.1.0", "hasown": "^2.0.2", "math-intrinsics": "^1.1.0" } }, "get-proto": { "version": "1.0.1", "resolved": "https://registry.npmjs.org/get-proto/-/get-proto-1.0.1.tgz", "integrity": "sha512-sTSfBjoXBp89JvIKIefqw7U2CCebsc74kiY6awiGogKtoSGbgjYE/G/+l9sF3MWFPNc9IcoOC4ODfKHfxFmp0g==", "dev": true, "requires": { "dunder-proto": "^1.0.1", "es-object-atoms": "^1.0.0" } }, "glob-parent": { "version": "6.0.2", "resolved": "https://registry.npmjs.org/glob-parent/-/glob-parent-6.0.2.tgz", "integrity": "sha512-XxwI8EOhVQgWp6iDL+3b0r86f4d6AX6zSU55HfB4ydCEuXLXc5FcYeOu+nnGftS4TEju/11rt4KJPTMgbfmv4A==", "dev": true, "requires": { "is-glob": "^4.0.3" } }, "glob-to-regexp": { "version": "0.4.1", "resolved": "https://registry.npmjs.org/glob-to-regexp/-/glob-to-regexp-0.4.1.tgz", "integrity": "sha512-lkX1HJXwyMcprw/5YUZc2s7DrpAiHB21/V+E1rHUrVNokkvB6bqMzT0VfV6/86ZNabt1k14YOIaT7nDvOX3Iiw==", "dev": true }, "globby": { "version": "13.1.3", "resolved": "https://registry.npmjs.org/globby/-/globby-13.1.3.tgz", "integrity": "sha512-8krCNHXvlCgHDpegPzleMq07yMYTO2sXKASmZmquEYWEmCx6J5UTRbp5RwMJkTJGtcQ44YpiUYUiN0b9mzy8Bw==", "dev": true, "requires": { "dir-glob": "^3.0.1", "fast-glob": "^3.2.11", "ignore": "^5.2.0", "merge2": "^1.4.1", "slash": "^4.0.0" } }, "gopd": { "version": "1.2.0", "resolved": "https://registry.npmjs.org/gopd/-/gopd-1.2.0.tgz", "integrity": "sha512-ZUKRh6/kUFoAiTAtTYPZJ3hw9wNxx+BIBOijnlG9PnrJsCcSjs1wyyD6vJpaYtgnzDrKYRSqf3OO6Rfa93xsRg==", "dev": true }, "graceful-fs": { "version": "4.2.11", "resolved": "https://registry.npmjs.org/graceful-fs/-/graceful-fs-4.2.11.tgz", "integrity": "sha512-RbJ5/jmFcNNCcDV5o9eTnBLJ/HszWV0P73bc+Ff4nS/rJj+YaS6IGyiOL0VoBYX+l1Wrl3k63h/KrH+nhJ0XvQ==", "dev": true }, "handle-thing": { "version": "2.0.1", "resolved": "https://registry.npmjs.org/handle-thing/-/handle-thing-2.0.1.tgz", "integrity": "sha512-9Qn4yBxelxoh2Ow62nP+Ka/kMnOXRi8BXnRaUwezLNhqelnN49xKz4F/dPP8OYLxLxq6JDtZb2i9XznUQbNPTg==", "dev": true }, "has": { "version": "1.0.3", "resolved": "https://registry.npmjs.org/has/-/has-1.0.3.tgz", "integrity": "sha512-f2dvO0VU6Oej7RkWJGrehjbzMAjFp5/VKPp5tTpWIV4JHHZK1/BxbFRtf/siA2SWTe09caDmVtYYzWEIbBS4zw==", "dev": true, "requires": { "function-bind": "^1.1.1" } }, "has-flag": { "version": "4.0.0", "resolved": "https://registry.npmjs.org/has-flag/-/has-flag-4.0.0.tgz", "integrity": "sha512-EykJT/Q1KjTWctppgIAgfSO0tKVuZUjhgMr17kqTumMl6Afv3EISleU7qZUzoXDFTAHTDC4NOoG/ZxU3EvlMPQ==", "dev": true }, "has-symbols": { 
"version": "1.1.0", "resolved": "https://registry.npmjs.org/has-symbols/-/has-symbols-1.1.0.tgz", "integrity": "sha512-1cDNdwJ2Jaohmb3sg4OmKaMBwuC48sYni5HUw2DvsC8LjGTLK9h+eb1X6RyuOHe4hT0ULCW68iomhjUoKUqlPQ==", "dev": true }, "hasown": { "version": "2.0.2", "resolved": "https://registry.npmjs.org/hasown/-/hasown-2.0.2.tgz", "integrity": "sha512-0hJU9SCPvmMzIBdZFqNPXWa6dqh7WdH0cII9y+CyS8rG3nL48Bclra9HmKhVVUHyPWNH5Y7xDwAB7bfgSjkUMQ==", "dev": true, "requires": { "function-bind": "^1.1.2" } }, "hpack.js": { "version": "2.1.6", "resolved": "https://registry.npmjs.org/hpack.js/-/hpack.js-2.1.6.tgz", "integrity": "sha512-zJxVehUdMGIKsRaNt7apO2Gqp0BdqW5yaiGHXXmbpvxgBYVZnAql+BJb4RO5ad2MgpbZKn5G6nMnegrH1FcNYQ==", "dev": true, "requires": { "inherits": "^2.0.1", "obuf": "^1.0.0", "readable-stream": "^2.0.1", "wbuf": "^1.1.0" } }, "http-deceiver": { "version": "1.2.7", "resolved": "https://registry.npmjs.org/http-deceiver/-/http-deceiver-1.2.7.tgz", "integrity": "sha512-LmpOGxTfbpgtGVxJrj5k7asXHCgNZp5nLfp+hWc8QQRqtb7fUy6kRY3BO1h9ddF6yIPYUARgxGOwB42DnxIaNw==", "dev": true }, "http-errors": { "version": "2.0.0", "resolved": "https://registry.npmjs.org/http-errors/-/http-errors-2.0.0.tgz", "integrity": "sha512-FtwrG/euBzaEjYeRqOgly7G0qviiXoJWnvEH2Z1plBdXgbyjv34pHTSb9zoeHMyDy33+DWy5Wt9Wo+TURtOYSQ==", "dev": true, "requires": { "depd": "2.0.0", "inherits": "2.0.4", "setprototypeof": "1.2.0", "statuses": "2.0.1", "toidentifier": "1.0.1" } }, "http-parser-js": { "version": "0.5.6", "resolved": "https://registry.npmjs.org/http-parser-js/-/http-parser-js-0.5.6.tgz", "integrity": "sha512-vDlkRPDJn93swjcjqMSaGSPABbIarsr1TLAui/gLDXzV5VsJNdXNzMYDyNBLQkjWQCJ1uizu8T2oDMhmGt0PRA==", "dev": true }, "http-proxy": { "version": "1.18.1", "resolved": "https://registry.npmjs.org/http-proxy/-/http-proxy-1.18.1.tgz", "integrity": "sha512-7mz/721AbnJwIVbnaSv1Cz3Am0ZLT/UBwkC92VlxhXv/k/BBQfM2fXElQNC27BVGr0uwUpplYPQM9LnaBMR5NQ==", "dev": true, "requires": { "eventemitter3": "^4.0.0", "follow-redirects": "^1.0.0", "requires-port": "^1.0.0" } }, "http-proxy-middleware": { "version": "2.0.9", "resolved": "https://registry.npmjs.org/http-proxy-middleware/-/http-proxy-middleware-2.0.9.tgz", "integrity": "sha512-c1IyJYLYppU574+YI7R4QyX2ystMtVXZwIdzazUIPIJsHuWNd+mho2j+bKoHftndicGj9yh+xjd+l0yj7VeT1Q==", "dev": true, "requires": { "@types/http-proxy": "^1.17.8", "http-proxy": "^1.18.1", "is-glob": "^4.0.1", "is-plain-obj": "^3.0.0", "micromatch": "^4.0.2" } }, "hyperdyperid": { "version": "1.2.0", "resolved": "https://registry.npmjs.org/hyperdyperid/-/hyperdyperid-1.2.0.tgz", "integrity": "sha512-Y93lCzHYgGWdrJ66yIktxiaGULYc6oGiABxhcO5AufBeOyoIdZF7bIfLaOrbM0iGIOXQQgxxRrFEnb+Y6w1n4A==", "dev": true }, "iconv-lite": { "version": "0.4.24", "resolved": "https://registry.npmjs.org/iconv-lite/-/iconv-lite-0.4.24.tgz", "integrity": "sha512-v3MXnZAcvnywkTUEZomIActle7RXXeedOR31wwl7VlyoXO4Qi9arvSenNQWne1TcRwhCL1HwLI21bEqdpj8/rA==", "dev": true, "requires": { "safer-buffer": ">= 2.1.2 < 3" } }, "ignore": { "version": "5.2.4", "resolved": "https://registry.npmjs.org/ignore/-/ignore-5.2.4.tgz", "integrity": "sha512-MAb38BcSbH0eHNBxn7ql2NH/kX33OkB3lZ1BNdh7ENeRChHTYsTvWrMubiIAMNS2llXEEgZ1MUOBtXChP3kaFQ==", "dev": true }, "import-local": { "version": "3.1.0", "resolved": "https://registry.npmjs.org/import-local/-/import-local-3.1.0.tgz", "integrity": "sha512-ASB07uLtnDs1o6EHjKpX34BKYDSqnFerfTOJL2HvMqF70LnxpjkzDB8J44oT9pu4AMPkQwf8jl6szgvNd2tRIg==", "dev": true, "requires": { "pkg-dir": "^4.2.0", "resolve-cwd": "^3.0.0" } }, "inherits": { "version": 
"2.0.4", "resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.4.tgz", "integrity": "sha512-k/vGaX4/Yla3WzyMCvTQOXYeIHvqOKtnqBduzTHpzpQZzAskKMhZ2K+EnBiSM9zGSoIFeMpXKxa4dYeZIQqewQ==", "dev": true }, "interpret": { "version": "3.1.1", "resolved": "https://registry.npmjs.org/interpret/-/interpret-3.1.1.tgz", "integrity": "sha512-6xwYfHbajpoF0xLW+iwLkhwgvLoZDfjYfoFNu8ftMoXINzwuymNLd9u/KmwtdT2GbR+/Cz66otEGEVVUHX9QLQ==", "dev": true }, "ipaddr.js": { "version": "2.2.0", "resolved": "https://registry.npmjs.org/ipaddr.js/-/ipaddr.js-2.2.0.tgz", "integrity": "sha512-Ag3wB2o37wslZS19hZqorUnrnzSkpOVy+IiiDEiTqNubEYpYuHWIf6K4psgN2ZWKExS4xhVCrRVfb/wfW8fWJA==", "dev": true }, "is-binary-path": { "version": "2.1.0", "resolved": "https://registry.npmjs.org/is-binary-path/-/is-binary-path-2.1.0.tgz", "integrity": "sha512-ZMERYes6pDydyuGidse7OsHxtbI7WVeUEozgR/g7rd0xUimYNlvZRE/K2MgZTjWy725IfelLeVcEM97mmtRGXw==", "dev": true, "requires": { "binary-extensions": "^2.0.0" } }, "is-core-module": { "version": "2.11.0", "resolved": "https://registry.npmjs.org/is-core-module/-/is-core-module-2.11.0.tgz", "integrity": "sha512-RRjxlvLDkD1YJwDbroBHMb+cukurkDWNyHx7D3oNB5x9rb5ogcksMC5wHCadcXoo67gVr/+3GFySh3134zi6rw==", "dev": true, "requires": { "has": "^1.0.3" } }, "is-docker": { "version": "3.0.0", "resolved": "https://registry.npmjs.org/is-docker/-/is-docker-3.0.0.tgz", "integrity": "sha512-eljcgEDlEns/7AXFosB5K/2nCM4P7FQPkGc/DWLy5rmFEWvZayGrik1d9/QIY5nJ4f9YsVvBkA6kJpHn9rISdQ==", "dev": true }, "is-extglob": { "version": "2.1.1", "resolved": "https://registry.npmjs.org/is-extglob/-/is-extglob-2.1.1.tgz", "integrity": "sha512-SbKbANkN603Vi4jEZv49LeVJMn4yGwsbzZworEoyEiutsN3nJYdbO36zfhGJ6QEDpOZIFkDtnq5JRxmvl3jsoQ==", "dev": true }, "is-glob": { "version": "4.0.3", "resolved": "https://registry.npmjs.org/is-glob/-/is-glob-4.0.3.tgz", "integrity": "sha512-xelSayHH36ZgE7ZWhli7pW34hNbNl8Ojv5KVmkJD4hBdD3th8Tfk9vYasLM+mXWOZhFkgZfxhLSnrwRr4elSSg==", "dev": true, "requires": { "is-extglob": "^2.1.1" } }, "is-inside-container": { "version": "1.0.0", "resolved": "https://registry.npmjs.org/is-inside-container/-/is-inside-container-1.0.0.tgz", "integrity": "sha512-KIYLCCJghfHZxqjYBE7rEy0OBuTd5xCHS7tHVgvCLkx7StIoaxwNW3hCALgEUjFfeRk+MG/Qxmp/vtETEF3tRA==", "dev": true, "requires": { "is-docker": "^3.0.0" } }, "is-network-error": { "version": "1.1.0", "resolved": "https://registry.npmjs.org/is-network-error/-/is-network-error-1.1.0.tgz", "integrity": "sha512-tUdRRAnhT+OtCZR/LxZelH/C7QtjtFrTu5tXCA8pl55eTUElUHT+GPYV8MBMBvea/j+NxQqVt3LbWMRir7Gx9g==", "dev": true }, "is-number": { "version": "7.0.0", "resolved": "https://registry.npmjs.org/is-number/-/is-number-7.0.0.tgz", "integrity": "sha512-41Cifkg6e8TylSpdtTpeLVMqvSBEVzTttHvERD741+pnZ8ANv0004MRL43QKPDlK9cGvNp6NZWZUBlbGXYxxng==", "dev": true }, "is-plain-obj": { "version": "3.0.0", "resolved": "https://registry.npmjs.org/is-plain-obj/-/is-plain-obj-3.0.0.tgz", "integrity": "sha512-gwsOE28k+23GP1B6vFl1oVh/WOzmawBrKwo5Ev6wMKzPkaXaCDIQKzLnvsA42DRlbVTWorkgTKIviAKCWkfUwA==", "dev": true }, "is-plain-object": { "version": "2.0.4", "resolved": "https://registry.npmjs.org/is-plain-object/-/is-plain-object-2.0.4.tgz", "integrity": "sha512-h5PpgXkWitc38BBMYawTYMWJHFZJVnBquFE57xFpjB8pJFiF6gZ+bU+WyI/yqXiFR5mdLsgYNaPe8uao6Uv9Og==", "dev": true, "requires": { "isobject": "^3.0.1" } }, "is-wsl": { "version": "3.1.0", "resolved": "https://registry.npmjs.org/is-wsl/-/is-wsl-3.1.0.tgz", "integrity": 
"sha512-UcVfVfaK4Sc4m7X3dUSoHoozQGBEFeDC+zVo06t98xe8CzHSZZBekNXH+tu0NalHolcJ/QAGqS46Hef7QXBIMw==", "dev": true, "requires": { "is-inside-container": "^1.0.0" } }, "isarray": { "version": "1.0.0", "resolved": "https://registry.npmjs.org/isarray/-/isarray-1.0.0.tgz", "integrity": "sha512-VLghIWNM6ELQzo7zwmcg0NmTVyWKYjvIeM83yjp0wRDTmUnrM678fQbcKBo6n2CJEF0szoG//ytg+TKla89ALQ==", "dev": true }, "isexe": { "version": "2.0.0", "resolved": "https://registry.npmjs.org/isexe/-/isexe-2.0.0.tgz", "integrity": "sha512-RHxMLp9lnKHGHRng9QFhRCMbYAcVpn69smSGcq3f36xjgVVWThj4qqLbTLlq7Ssj8B+fIQ1EuCEGI2lKsyQeIw==", "dev": true }, "isobject": { "version": "3.0.1", "resolved": "https://registry.npmjs.org/isobject/-/isobject-3.0.1.tgz", "integrity": "sha512-WhB9zCku7EGTj/HQQRz5aUQEUeoQZH2bWcltRErOpymJ4boYE6wL9Tbr23krRPSZ+C5zqNSrSw+Cc7sZZ4b7vg==", "dev": true }, "jest-worker": { "version": "27.5.1", "resolved": "https://registry.npmjs.org/jest-worker/-/jest-worker-27.5.1.tgz", "integrity": "sha512-7vuh85V5cdDofPyxn58nrPjBktZo0u9x1g8WtjQol+jZDaE+fhN+cIvTj11GndBnMnyfrUOG1sZQxCdjKh+DKg==", "dev": true, "requires": { "@types/node": "*", "merge-stream": "^2.0.0", "supports-color": "^8.0.0" } }, "json-parse-even-better-errors": { "version": "2.3.1", "resolved": "https://registry.npmjs.org/json-parse-even-better-errors/-/json-parse-even-better-errors-2.3.1.tgz", "integrity": "sha512-xyFwyhro/JEof6Ghe2iz2NcXoj2sloNsWr/XsERDK/oiPCfaNhl5ONfp+jQdAZRQQ0IJWNzH9zIZF7li91kh2w==", "dev": true }, "json-schema-traverse": { "version": "1.0.0", "resolved": "https://registry.npmjs.org/json-schema-traverse/-/json-schema-traverse-1.0.0.tgz", "integrity": "sha512-NM8/P9n3XjXhIZn1lLhkFaACTOURQXjWhV4BA/RnOv8xvgqtqpAX9IO4mRQxSx1Rlo4tqzeqb0sOlruaOy3dug==", "dev": true }, "kind-of": { "version": "6.0.3", "resolved": "https://registry.npmjs.org/kind-of/-/kind-of-6.0.3.tgz", "integrity": "sha512-dcS1ul+9tmeD95T+x28/ehLgd9mENa3LsvDTtzm3vyBEO7RPptvAD+t44WVXaUjTBRcrpFeFlC8WCruUR456hw==", "dev": true }, "launch-editor": { "version": "2.10.0", "resolved": "https://registry.npmjs.org/launch-editor/-/launch-editor-2.10.0.tgz", "integrity": "sha512-D7dBRJo/qcGX9xlvt/6wUYzQxjh5G1RvZPgPv8vi4KRU99DVQL/oW7tnVOCCTm2HGeo3C5HvGE5Yrh6UBoZ0vA==", "dev": true, "requires": { "picocolors": "^1.0.0", "shell-quote": "^1.8.1" } }, "loader-runner": { "version": "4.3.0", "resolved": "https://registry.npmjs.org/loader-runner/-/loader-runner-4.3.0.tgz", "integrity": "sha512-3R/1M+yS3j5ou80Me59j7F9IMs4PXs3VqRrm0TU3AbKPxlmpoY1TNscJV/oGJXo8qCatFGTfDbY6W6ipGOYXfg==", "dev": true }, "locate-path": { "version": "5.0.0", "resolved": "https://registry.npmjs.org/locate-path/-/locate-path-5.0.0.tgz", "integrity": "sha512-t7hw9pI+WvuwNJXwk5zVHpyhIqzg2qTlklJOf0mVxGSbe3Fp2VieZcduNYjaLDoy6p9uGpQEGWG87WpMKlNq8g==", "dev": true, "requires": { "p-locate": "^4.1.0" } }, "math-intrinsics": { "version": "1.1.0", "resolved": "https://registry.npmjs.org/math-intrinsics/-/math-intrinsics-1.1.0.tgz", "integrity": "sha512-/IXtbwEk5HTPyEwyKX6hGkYXxM9nbj64B+ilVJnC/R6B0pH5G4V3b0pVbL7DBj4tkhBAppbQUlf6F6Xl9LHu1g==", "dev": true }, "media-typer": { "version": "0.3.0", "resolved": "https://registry.npmjs.org/media-typer/-/media-typer-0.3.0.tgz", "integrity": "sha512-dq+qelQ9akHpcOl/gUVRTxVIOkAJ1wR3QAvb4RsVjS8oVoFjDGTc679wJYmUmknUF5HwMLOgb5O+a3KxfWapPQ==", "dev": true }, "memfs": { "version": "4.17.2", "resolved": "https://registry.npmjs.org/memfs/-/memfs-4.17.2.tgz", "integrity": "sha512-NgYhCOWgovOXSzvYgUW0LQ7Qy72rWQMGGFJDoWg4G30RHd3z77VbYdtJ4fembJXBy8pMIUA31XNAupobOQlwdg==", "dev": true, "requires": 
{ "@jsonjoy.com/json-pack": "^1.0.3", "@jsonjoy.com/util": "^1.3.0", "tree-dump": "^1.0.1", "tslib": "^2.0.0" } }, "merge-descriptors": { "version": "1.0.3", "resolved": "https://registry.npmjs.org/merge-descriptors/-/merge-descriptors-1.0.3.tgz", "integrity": "sha512-gaNvAS7TZ897/rVaZ0nMtAyxNyi/pdbjbAwUpFQpN70GqnVfOiXpeUUMKRBmzXaSQ8DdTX4/0ms62r2K+hE6mQ==", "dev": true }, "merge-stream": { "version": "2.0.0", "resolved": "https://registry.npmjs.org/merge-stream/-/merge-stream-2.0.0.tgz", "integrity": "sha512-abv/qOcuPfk3URPfDzmZU1LKmuw8kT+0nIHvKrKgFrwifol/doWcdA4ZqsWQ8ENrFKkd67Mfpo/LovbIUsbt3w==", "dev": true }, "merge2": { "version": "1.4.1", "resolved": "https://registry.npmjs.org/merge2/-/merge2-1.4.1.tgz", "integrity": "sha512-8q7VEgMJW4J8tcfVPy8g09NcQwZdbwFEqhe/WZkoIzjn/3TGDwtOCYtXGxA3O8tPzpczCCDgv+P2P5y00ZJOOg==", "dev": true }, "methods": { "version": "1.1.2", "resolved": "https://registry.npmjs.org/methods/-/methods-1.1.2.tgz", "integrity": "sha512-iclAHeNqNm68zFtnZ0e+1L2yUIdvzNoauKU4WBA3VvH/vPFieF7qfRlwUZU+DA9P9bPXIS90ulxoUoCH23sV2w==", "dev": true }, "micromatch": { "version": "4.0.5", "resolved": "https://registry.npmjs.org/micromatch/-/micromatch-4.0.5.tgz", "integrity": "sha512-DMy+ERcEW2q8Z2Po+WNXuw3c5YaUSFjAO5GsJqfEl7UjvtIuFKO6ZrKvcItdy98dwFI2N1tg3zNIdKaQT+aNdA==", "dev": true, "requires": { "braces": "^3.0.2", "picomatch": "^2.3.1" } }, "mime": { "version": "1.6.0", "resolved": "https://registry.npmjs.org/mime/-/mime-1.6.0.tgz", "integrity": "sha512-x0Vn8spI+wuJ1O6S7gnbaQg8Pxh4NNHb7KSINmEWKiPE4RKOplvijn+NkmYmmRgP68mc70j2EbeTFRsrswaQeg==", "dev": true }, "mime-db": { "version": "1.52.0", "resolved": "https://registry.npmjs.org/mime-db/-/mime-db-1.52.0.tgz", "integrity": "sha512-sPU4uV7dYlvtWJxwwxHD0PuihVNiE7TyAbQ5SWxDCB9mUYvOgroQOwYQQOKPJ8CIbE+1ETVlOoK1UC2nU3gYvg==", "dev": true }, "mime-types": { "version": "2.1.35", "resolved": "https://registry.npmjs.org/mime-types/-/mime-types-2.1.35.tgz", "integrity": "sha512-ZDY+bPm5zTTF+YpCrAU9nK0UgICYPT0QtT1NZWFv4s++TNkcgVaT0g6+4R2uI4MjQjzysHB1zxuWL50hzaeXiw==", "dev": true, "requires": { "mime-db": "1.52.0" } }, "minimalistic-assert": { "version": "1.0.1", "resolved": "https://registry.npmjs.org/minimalistic-assert/-/minimalistic-assert-1.0.1.tgz", "integrity": "sha512-UtJcAD4yEaGtjPezWuO9wC4nwUnVH/8/Im3yEHQP4b67cXlD/Qr9hdITCU1xDbSEXg2XKNaP8jsReV7vQd00/A==", "dev": true }, "ms": { "version": "2.0.0", "resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz", "integrity": "sha512-Tpp60P6IUJDTuOq/5Z8cdskzJujfwqfOTkrwIwj7IRISpnkJnT6SyJ4PCPnGMoFjC9ddhal5KVIYtAt97ix05A==", "dev": true }, "multicast-dns": { "version": "7.2.5", "resolved": "https://registry.npmjs.org/multicast-dns/-/multicast-dns-7.2.5.tgz", "integrity": "sha512-2eznPJP8z2BFLX50tf0LuODrpINqP1RVIm/CObbTcBRITQgmC/TjcREF1NeTBzIcR5XO/ukWo+YHOjBbFwIupg==", "dev": true, "requires": { "dns-packet": "^5.2.2", "thunky": "^1.0.2" } }, "negotiator": { "version": "0.6.3", "resolved": "https://registry.npmjs.org/negotiator/-/negotiator-0.6.3.tgz", "integrity": "sha512-+EUsqGPLsM+j/zdChZjsnX51g4XrHFOIXwfnCVPGlQk/k5giakcKsuxCObBRu6DSm9opw/O6slWbJdghQM4bBg==", "dev": true }, "neo-async": { "version": "2.6.2", "resolved": "https://registry.npmjs.org/neo-async/-/neo-async-2.6.2.tgz", "integrity": "sha512-Yd3UES5mWCSqR+qNT93S3UoYUkqAZ9lLg8a7g9rimsWmYGK8cVToA4/sF3RrshdyV3sAGMXVUmpMYOw+dLpOuw==", "dev": true }, "node-forge": { "version": "1.3.1", "resolved": "https://registry.npmjs.org/node-forge/-/node-forge-1.3.1.tgz", "integrity": 
"sha512-dPEtOeMvF9VMcYV/1Wb8CPoVAXtp6MKMlcbAt4ddqmGqUJ6fQZFXkNZNkNlfevtNkGtaSoXf/vNNNSvgrdXwtA==", "dev": true }, "node-releases": { "version": "2.0.18", "resolved": "https://registry.npmjs.org/node-releases/-/node-releases-2.0.18.tgz", "integrity": "sha512-d9VeXT4SJ7ZeOqGX6R5EM022wpL+eWPooLI+5UpWn2jCT1aosUQEhQP214x33Wkwx3JQMvIm+tIoVOdodFS40g==", "dev": true }, "normalize-path": { "version": "3.0.0", "resolved": "https://registry.npmjs.org/normalize-path/-/normalize-path-3.0.0.tgz", "integrity": "sha512-6eZs5Ls3WtCisHWp9S2GUy8dqkpGi4BVSz3GaqiE6ezub0512ESztXUwUB6C6IKbQkY2Pnb/mD4WYojCRwcwLA==", "dev": true }, "object-inspect": { "version": "1.13.4", "resolved": "https://registry.npmjs.org/object-inspect/-/object-inspect-1.13.4.tgz", "integrity": "sha512-W67iLl4J2EXEGTbfeHCffrjDfitvLANg0UlX3wFUUSTx92KXRFegMHUVgSqE+wvhAbi4WqjGg9czysTV2Epbew==", "dev": true }, "obuf": { "version": "1.1.2", "resolved": "https://registry.npmjs.org/obuf/-/obuf-1.1.2.tgz", "integrity": "sha512-PX1wu0AmAdPqOL1mWhqmlOd8kOIZQwGZw6rh7uby9fTc5lhaOWFLX3I6R1hrF9k3zUY40e6igsLGkDXK92LJNg==", "dev": true }, "on-finished": { "version": "2.4.1", "resolved": "https://registry.npmjs.org/on-finished/-/on-finished-2.4.1.tgz", "integrity": "sha512-oVlzkg3ENAhCk2zdv7IJwd/QUD4z2RxRwpkcGY8psCVcCYZNq4wYnVWALHM+brtuJjePWiYF/ClmuDr8Ch5+kg==", "dev": true, "requires": { "ee-first": "1.1.1" } }, "on-headers": { "version": "1.1.0", "resolved": "https://registry.npmjs.org/on-headers/-/on-headers-1.1.0.tgz", "integrity": "sha512-737ZY3yNnXy37FHkQxPzt4UZ2UWPWiCZWLvFZ4fu5cueciegX0zGPnrlY6bwRg4FdQOe9YU8MkmJwGhoMybl8A==", "dev": true }, "open": { "version": "10.1.2", "resolved": "https://registry.npmjs.org/open/-/open-10.1.2.tgz", "integrity": "sha512-cxN6aIDPz6rm8hbebcP7vrQNhvRcveZoJU72Y7vskh4oIm+BZwBECnx5nTmrlres1Qapvx27Qo1Auukpf8PKXw==", "dev": true, "requires": { "default-browser": "^5.2.1", "define-lazy-prop": "^3.0.0", "is-inside-container": "^1.0.0", "is-wsl": "^3.1.0" } }, "p-limit": { "version": "2.3.0", "resolved": "https://registry.npmjs.org/p-limit/-/p-limit-2.3.0.tgz", "integrity": "sha512-//88mFWSJx8lxCzwdAABTJL2MyWB12+eIY7MDL2SqLmAkeKU9qxRvWuSyTjm3FUmpBEMuFfckAIqEaVGUDxb6w==", "dev": true, "requires": { "p-try": "^2.0.0" } }, "p-locate": { "version": "4.1.0", "resolved": "https://registry.npmjs.org/p-locate/-/p-locate-4.1.0.tgz", "integrity": "sha512-R79ZZ/0wAxKGu3oYMlz8jy/kbhsNrS7SKZ7PxEHBgJ5+F2mtFW2fK2cOtBh1cHYkQsbzFV7I+EoRKe6Yt0oK7A==", "dev": true, "requires": { "p-limit": "^2.2.0" } }, "p-retry": { "version": "6.2.1", "resolved": "https://registry.npmjs.org/p-retry/-/p-retry-6.2.1.tgz", "integrity": "sha512-hEt02O4hUct5wtwg4H4KcWgDdm+l1bOaEy/hWzd8xtXB9BqxTWBBhb+2ImAtH4Cv4rPjV76xN3Zumqk3k3AhhQ==", "dev": true, "requires": { "@types/retry": "0.12.2", "is-network-error": "^1.0.0", "retry": "^0.13.1" } }, "p-try": { "version": "2.2.0", "resolved": "https://registry.npmjs.org/p-try/-/p-try-2.2.0.tgz", "integrity": "sha512-R4nPAVTAU0B9D35/Gk3uJf/7XYbQcyohSKdvAxIRSNghFl4e71hVoGnBNQz9cWaXxO2I10KTC+3jMdvvoKw6dQ==", "dev": true }, "parseurl": { "version": "1.3.3", "resolved": "https://registry.npmjs.org/parseurl/-/parseurl-1.3.3.tgz", "integrity": "sha512-CiyeOxFT/JZyN5m0z9PfXw4SCBJ6Sygz1Dpl0wqjlhDEGGBP1GnsUVEL0p63hoG1fcj3fHynXi9NYO4nWOL+qQ==", "dev": true }, "path-exists": { "version": "4.0.0", "resolved": "https://registry.npmjs.org/path-exists/-/path-exists-4.0.0.tgz", "integrity": "sha512-ak9Qy5Q7jYb2Wwcey5Fpvg2KoAc/ZIhLSLOSBmRmygPsGwkVVt0fZa0qrtMz+m6tJTAHfZQ8FnmB4MG4LWy7/w==", "dev": true }, "path-key": { "version": "3.1.1", 
"resolved": "https://registry.npmjs.org/path-key/-/path-key-3.1.1.tgz", "integrity": "sha512-ojmeN0qd+y0jszEtoY48r0Peq5dwMEkIlCOu6Q5f41lfkswXuKtYrhgoTpLnyIcHm24Uhqx+5Tqm2InSwLhE6Q==", "dev": true }, "path-parse": { "version": "1.0.7", "resolved": "https://registry.npmjs.org/path-parse/-/path-parse-1.0.7.tgz", "integrity": "sha512-LDJzPVEEEPR+y48z93A0Ed0yXb8pAByGWo/k5YYdYgpY2/2EsOsksJrq7lOHxryrVOn1ejG6oAp8ahvOIQD8sw==", "dev": true }, "path-to-regexp": { "version": "0.1.12", "resolved": "https://registry.npmjs.org/path-to-regexp/-/path-to-regexp-0.1.12.tgz", "integrity": "sha512-RA1GjUVMnvYFxuqovrEqZoxxW5NUZqbwKtYz/Tt7nXerk0LbLblQmrsgdeOxV5SFHf0UDggjS/bSeOZwt1pmEQ==", "dev": true }, "path-type": { "version": "4.0.0", "resolved": "https://registry.npmjs.org/path-type/-/path-type-4.0.0.tgz", "integrity": "sha512-gDKb8aZMDeD/tZWs9P6+q0J9Mwkdl6xMV8TjnGP3qJVJ06bdMgkbBlLU8IdfOsIsFz2BW1rNVT3XuNEl8zPAvw==", "dev": true }, "picocolors": { "version": "1.1.0", "resolved": "https://registry.npmjs.org/picocolors/-/picocolors-1.1.0.tgz", "integrity": "sha512-TQ92mBOW0l3LeMeyLV6mzy/kWr8lkd/hp3mTg7wYK7zJhuBStmGMBG0BdeDZS/dZx1IukaX6Bk11zcln25o1Aw==", "dev": true }, "picomatch": { "version": "2.3.1", "resolved": "https://registry.npmjs.org/picomatch/-/picomatch-2.3.1.tgz", "integrity": "sha512-JU3teHTNjmE2VCGFzuY8EXzCDVwEqB2a8fsIvwaStHhAWJEeVd1o1QD80CU6+ZdEXXSLbSsuLwJjkCBWqRQUVA==", "dev": true }, "pkg-dir": { "version": "4.2.0", "resolved": "https://registry.npmjs.org/pkg-dir/-/pkg-dir-4.2.0.tgz", "integrity": "sha512-HRDzbaKjC+AOWVXxAU/x54COGeIv9eb+6CkDSQoNTt4XyWoIJvuPsXizxu/Fr23EiekbtZwmh1IcIG/l/a10GQ==", "dev": true, "requires": { "find-up": "^4.0.0" } }, "process-nextick-args": { "version": "2.0.1", "resolved": "https://registry.npmjs.org/process-nextick-args/-/process-nextick-args-2.0.1.tgz", "integrity": "sha512-3ouUOpQhtgrbOa17J7+uxOTpITYWaGP7/AhoR3+A+/1e9skrzelGi/dXzEYyvbxubEF6Wn2ypscTKiKJFFn1ag==", "dev": true }, "proxy-addr": { "version": "2.0.7", "resolved": "https://registry.npmjs.org/proxy-addr/-/proxy-addr-2.0.7.tgz", "integrity": "sha512-llQsMLSUDUPT44jdrU/O37qlnifitDP+ZwrmmZcoSKyLKvtZxpyV0n2/bD/N4tBAAZ/gJEdZU7KMraoK1+XYAg==", "dev": true, "requires": { "forwarded": "0.2.0", "ipaddr.js": "1.9.1" }, "dependencies": { "ipaddr.js": { "version": "1.9.1", "resolved": "https://registry.npmjs.org/ipaddr.js/-/ipaddr.js-1.9.1.tgz", "integrity": "sha512-0KI/607xoxSToH7GjN1FfSbLoU0+btTicjsQSWQlh/hZykN8KpmMf7uYwPW3R+akZ6R/w18ZlXSHBYXiYUPO3g==", "dev": true } } }, "punycode": { "version": "2.1.1", "resolved": "https://registry.npmjs.org/punycode/-/punycode-2.1.1.tgz", "integrity": "sha512-XRsRjdf+j5ml+y/6GKHPZbrF/8p2Yga0JPtdqTIY2Xe5ohJPD9saDJJLPvp9+NSBprVvevdXZybnj2cv8OEd0A==", "dev": true }, "qs": { "version": "6.13.0", "resolved": "https://registry.npmjs.org/qs/-/qs-6.13.0.tgz", "integrity": "sha512-+38qI9SOr8tfZ4QmJNplMUxqjbe7LKvvZgWdExBOmd+egZTtjLB67Gu0HRX3u/XOq7UU2Nx6nsjvS16Z9uwfpg==", "dev": true, "requires": { "side-channel": "^1.0.6" } }, "queue-microtask": { "version": "1.2.3", "resolved": "https://registry.npmjs.org/queue-microtask/-/queue-microtask-1.2.3.tgz", "integrity": "sha512-NuaNSa6flKT5JaSYQzJok04JzTL1CA6aGhv5rfLW3PgqA+M2ChpZQnAC8h8i4ZFkBS8X5RqkDBHA7r4hej3K9A==", "dev": true }, "randombytes": { "version": "2.1.0", "resolved": "https://registry.npmjs.org/randombytes/-/randombytes-2.1.0.tgz", "integrity": "sha512-vYl3iOX+4CKUWuxGi9Ukhie6fsqXqS9FE2Zaic4tNFD2N2QQaXOMFbuKK4QmDHC0JO6B1Zp41J0LpT0oR68amQ==", "dev": true, "requires": { "safe-buffer": "^5.1.0" } }, "range-parser": { "version": 
"1.2.1", "resolved": "https://registry.npmjs.org/range-parser/-/range-parser-1.2.1.tgz", "integrity": "sha512-Hrgsx+orqoygnmhFbKaHE6c296J+HTAQXoxEF6gNupROmmGJRoyzfG3ccAveqCBrwr/2yxQ5BVd/GTl5agOwSg==", "dev": true }, "raw-body": { "version": "2.5.2", "resolved": "https://registry.npmjs.org/raw-body/-/raw-body-2.5.2.tgz", "integrity": "sha512-8zGqypfENjCIqGhgXToC8aB2r7YrBX+AQAfIPs/Mlk+BtPTztOvTS01NRW/3Eh60J+a48lt8qsCzirQ6loCVfA==", "dev": true, "requires": { "bytes": "3.1.2", "http-errors": "2.0.0", "iconv-lite": "0.4.24", "unpipe": "1.0.0" } }, "readable-stream": { "version": "2.3.7", "resolved": "https://registry.npmjs.org/readable-stream/-/readable-stream-2.3.7.tgz", "integrity": "sha512-Ebho8K4jIbHAxnuxi7o42OrZgF/ZTNcsZj6nRKyUmkhLFq8CHItp/fy6hQZuZmP/n3yZ9VBUbp4zz/mX8hmYPw==", "dev": true, "requires": { "core-util-is": "~1.0.0", "inherits": "~2.0.3", "isarray": "~1.0.0", "process-nextick-args": "~2.0.0", "safe-buffer": "~5.1.1", "string_decoder": "~1.1.1", "util-deprecate": "~1.0.1" } }, "readdirp": { "version": "3.6.0", "resolved": "https://registry.npmjs.org/readdirp/-/readdirp-3.6.0.tgz", "integrity": "sha512-hOS089on8RduqdbhvQ5Z37A0ESjsqz6qnRcffsMU3495FuTdqSm+7bhJ29JvIOsBDEEnan5DPu9t3To9VRlMzA==", "dev": true, "requires": { "picomatch": "^2.2.1" } }, "rechoir": { "version": "0.8.0", "resolved": "https://registry.npmjs.org/rechoir/-/rechoir-0.8.0.tgz", "integrity": "sha512-/vxpCXddiX8NGfGO/mTafwjq4aFa/71pvamip0++IQk3zG8cbCj0fifNPrjjF1XMXUne91jL9OoxmdykoEtifQ==", "dev": true, "requires": { "resolve": "^1.20.0" } }, "require-from-string": { "version": "2.0.2", "resolved": "https://registry.npmjs.org/require-from-string/-/require-from-string-2.0.2.tgz", "integrity": "sha512-Xf0nWe6RseziFMu+Ap9biiUbmplq6S9/p+7w7YXP/JBHhrUDDUhwa+vANyubuqfZWTveU//DYVGsDG7RKL/vEw==", "dev": true }, "requires-port": { "version": "1.0.0", "resolved": "https://registry.npmjs.org/requires-port/-/requires-port-1.0.0.tgz", "integrity": "sha512-KigOCHcocU3XODJxsu8i/j8T9tzT4adHiecwORRQ0ZZFcp7ahwXuRU1m+yuO90C5ZUyGeGfocHDI14M3L3yDAQ==", "dev": true }, "resolve": { "version": "1.22.1", "resolved": "https://registry.npmjs.org/resolve/-/resolve-1.22.1.tgz", "integrity": "sha512-nBpuuYuY5jFsli/JIs1oldw6fOQCBioohqWZg/2hiaOybXOft4lonv85uDOKXdf8rhyK159cxU5cDcK/NKk8zw==", "dev": true, "requires": { "is-core-module": "^2.9.0", "path-parse": "^1.0.7", "supports-preserve-symlinks-flag": "^1.0.0" } }, "resolve-cwd": { "version": "3.0.0", "resolved": "https://registry.npmjs.org/resolve-cwd/-/resolve-cwd-3.0.0.tgz", "integrity": "sha512-OrZaX2Mb+rJCpH/6CpSqt9xFVpN++x01XnN2ie9g6P5/3xelLAkXWVADpdz1IHD/KFfEXyE6V0U01OQ3UO2rEg==", "dev": true, "requires": { "resolve-from": "^5.0.0" } }, "resolve-from": { "version": "5.0.0", "resolved": "https://registry.npmjs.org/resolve-from/-/resolve-from-5.0.0.tgz", "integrity": "sha512-qYg9KP24dD5qka9J47d0aVky0N+b4fTU89LN9iDnjB5waksiC49rvMB0PrUJQGoTmH50XPiqOvAjDfaijGxYZw==", "dev": true }, "retry": { "version": "0.13.1", "resolved": "https://registry.npmjs.org/retry/-/retry-0.13.1.tgz", "integrity": "sha512-XQBQ3I8W1Cge0Seh+6gjj03LbmRFWuoszgK9ooCpwYIrhhoO80pfq4cUkU5DkknwfOfFteRwlZ56PYOGYyFWdg==", "dev": true }, "reusify": { "version": "1.0.4", "resolved": "https://registry.npmjs.org/reusify/-/reusify-1.0.4.tgz", "integrity": "sha512-U9nH88a3fc/ekCF1l0/UP1IosiuIjyTh7hBvXVMHYgVcfGvt897Xguj2UOLDeI5BG2m7/uwyaLVT6fbtCwTyzw==", "dev": true }, "run-applescript": { "version": "7.0.0", "resolved": "https://registry.npmjs.org/run-applescript/-/run-applescript-7.0.0.tgz", "integrity": 
"sha512-9by4Ij99JUr/MCFBUkDKLWK3G9HVXmabKz9U5MlIAIuvuzkiOicRYs8XJLxX+xahD+mLiiCYDqF9dKAgtzKP1A==", "dev": true }, "run-parallel": { "version": "1.2.0", "resolved": "https://registry.npmjs.org/run-parallel/-/run-parallel-1.2.0.tgz", "integrity": "sha512-5l4VyZR86LZ/lDxZTR6jqL8AFE2S0IFLMP26AbjsLVADxHdhB/c0GUsH+y39UfCi3dzz8OlQuPmnaJOMoDHQBA==", "dev": true, "requires": { "queue-microtask": "^1.2.2" } }, "safe-buffer": { "version": "5.1.2", "resolved": "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.1.2.tgz", "integrity": "sha512-Gd2UZBJDkXlY7GbJxfsE8/nvKkUEU1G38c1siN6QP6a9PT9MmHB8GnpscSmMJSoF8LOIrt8ud/wPtojys4G6+g==", "dev": true }, "safer-buffer": { "version": "2.1.2", "resolved": "https://registry.npmjs.org/safer-buffer/-/safer-buffer-2.1.2.tgz", "integrity": "sha512-YZo3K82SD7Riyi0E1EQPojLz7kpepnSQI9IyPbHHg1XXXevb5dJI7tpyN2ADxGcQbHG7vcyRHk0cbwqcQriUtg==", "dev": true }, "schema-utils": { "version": "4.3.2", "resolved": "https://registry.npmjs.org/schema-utils/-/schema-utils-4.3.2.tgz", "integrity": "sha512-Gn/JaSk/Mt9gYubxTtSn/QCV4em9mpAPiR1rqy/Ocu19u/G9J5WWdNoUT4SiV6mFC3y6cxyFcFwdzPM3FgxGAQ==", "dev": true, "requires": { "@types/json-schema": "^7.0.9", "ajv": "^8.9.0", "ajv-formats": "^2.1.1", "ajv-keywords": "^5.1.0" } }, "select-hose": { "version": "2.0.0", "resolved": "https://registry.npmjs.org/select-hose/-/select-hose-2.0.0.tgz", "integrity": "sha1-Yl2GWPhlr0Psliv8N2o3NZpJlMo=", "dev": true }, "selfsigned": { "version": "2.4.1", "resolved": "https://registry.npmjs.org/selfsigned/-/selfsigned-2.4.1.tgz", "integrity": "sha512-th5B4L2U+eGLq1TVh7zNRGBapioSORUeymIydxgFpwww9d2qyKvtuPU2jJuHvYAwwqi2Y596QBL3eEqcPEYL8Q==", "dev": true, "requires": { "@types/node-forge": "^1.3.0", "node-forge": "^1" } }, "send": { "version": "0.19.0", "resolved": "https://registry.npmjs.org/send/-/send-0.19.0.tgz", "integrity": "sha512-dW41u5VfLXu8SJh5bwRmyYUbAoSB3c9uQh6L8h/KtsFREPWpbX1lrljJo186Jc4nmci/sGUZ9a0a0J2zgfq2hw==", "dev": true, "requires": { "debug": "2.6.9", "depd": "2.0.0", "destroy": "1.2.0", "encodeurl": "~1.0.2", "escape-html": "~1.0.3", "etag": "~1.8.1", "fresh": "0.5.2", "http-errors": "2.0.0", "mime": "1.6.0", "ms": "2.1.3", "on-finished": "2.4.1", "range-parser": "~1.2.1", "statuses": "2.0.1" }, "dependencies": { "encodeurl": { "version": "1.0.2", "resolved": "https://registry.npmjs.org/encodeurl/-/encodeurl-1.0.2.tgz", "integrity": "sha512-TPJXq8JqFaVYm2CWmPvnP2Iyo4ZSM7/QKcSmuMLDObfpH5fi7RUGmd/rTDf+rut/saiDiQEeVTNgAmJEdAOx0w==", "dev": true }, "ms": { "version": "2.1.3", "resolved": "https://registry.npmjs.org/ms/-/ms-2.1.3.tgz", "integrity": "sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA==", "dev": true } } }, "serialize-javascript": { "version": "6.0.2", "resolved": "https://registry.npmjs.org/serialize-javascript/-/serialize-javascript-6.0.2.tgz", "integrity": "sha512-Saa1xPByTTq2gdeFZYLLo+RFE35NHZkAbqZeWNd3BpzppeVisAqpDjcp8dyf6uIvEqJRd46jemmyA4iFIeVk8g==", "dev": true, "requires": { "randombytes": "^2.1.0" } }, "serve-index": { "version": "1.9.1", "resolved": "https://registry.npmjs.org/serve-index/-/serve-index-1.9.1.tgz", "integrity": "sha1-03aNabHn2C5c4FD/9bRTvqEqkjk=", "dev": true, "requires": { "accepts": "~1.3.4", "batch": "0.6.1", "debug": "2.6.9", "escape-html": "~1.0.3", "http-errors": "~1.6.2", "mime-types": "~2.1.17", "parseurl": "~1.3.2" }, "dependencies": { "depd": { "version": "1.1.2", "resolved": "https://registry.npmjs.org/depd/-/depd-1.1.2.tgz", "integrity": 
"sha512-7emPTl6Dpo6JRXOXjLRxck+FlLRX5847cLKEn00PLAgc3g2hTZZgr+e4c2v6QpSmLeFP3n5yUo7ft6avBK/5jQ==", "dev": true }, "http-errors": { "version": "1.6.3", "resolved": "https://registry.npmjs.org/http-errors/-/http-errors-1.6.3.tgz", "integrity": "sha512-lks+lVC8dgGyh97jxvxeYTWQFvh4uw4yC12gVl63Cg30sjPX4wuGcdkICVXDAESr6OJGjqGA8Iz5mkeN6zlD7A==", "dev": true, "requires": { "depd": "~1.1.2", "inherits": "2.0.3", "setprototypeof": "1.1.0", "statuses": ">= 1.4.0 < 2" } }, "inherits": { "version": "2.0.3", "resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.3.tgz", "integrity": "sha512-x00IRNXNy63jwGkJmzPigoySHbaqpNuzKbBOmzK+g2OdZpQ9w+sxCN+VSB3ja7IAge2OP2qpfxTjeNcyjmW1uw==", "dev": true }, "setprototypeof": { "version": "1.1.0", "resolved": "https://registry.npmjs.org/setprototypeof/-/setprototypeof-1.1.0.tgz", "integrity": "sha512-BvE/TwpZX4FXExxOxZyRGQQv651MSwmWKZGqvmPcRIjDqWub67kTKuIMx43cZZrS/cBBzwBcNDWoFxt2XEFIpQ==", "dev": true }, "statuses": { "version": "1.5.0", "resolved": "https://registry.npmjs.org/statuses/-/statuses-1.5.0.tgz", "integrity": "sha1-Fhx9rBd2Wf2YEfQ3cfqZOBR4Yow=", "dev": true } } }, "serve-static": { "version": "1.16.2", "resolved": "https://registry.npmjs.org/serve-static/-/serve-static-1.16.2.tgz", "integrity": "sha512-VqpjJZKadQB/PEbEwvFdO43Ax5dFBZ2UECszz8bQ7pi7wt//PWe1P6MN7eCnjsatYtBT6EuiClbjSWP2WrIoTw==", "dev": true, "requires": { "encodeurl": "~2.0.0", "escape-html": "~1.0.3", "parseurl": "~1.3.3", "send": "0.19.0" } }, "setprototypeof": { "version": "1.2.0", "resolved": "https://registry.npmjs.org/setprototypeof/-/setprototypeof-1.2.0.tgz", "integrity": "sha512-E5LDX7Wrp85Kil5bhZv46j8jOeboKq5JMmYM3gVGdGH8xFpPWXUMsNrlODCrkoxMEeNi/XZIwuRvY4XNwYMJpw==", "dev": true }, "shallow-clone": { "version": "3.0.1", "resolved": "https://registry.npmjs.org/shallow-clone/-/shallow-clone-3.0.1.tgz", "integrity": "sha512-/6KqX+GVUdqPuPPd2LxDDxzX6CAbjJehAAOKlNpqqUpAqPM6HeL8f+o3a+JsyGjn2lv0WY8UsTgUJjU9Ok55NA==", "dev": true, "requires": { "kind-of": "^6.0.2" } }, "shebang-command": { "version": "2.0.0", "resolved": "https://registry.npmjs.org/shebang-command/-/shebang-command-2.0.0.tgz", "integrity": "sha512-kHxr2zZpYtdmrN1qDjrrX/Z1rR1kG8Dx+gkpK1G4eXmvXswmcE1hTWBWYUzlraYw1/yZp6YuDY77YtvbN0dmDA==", "dev": true, "requires": { "shebang-regex": "^3.0.0" } }, "shebang-regex": { "version": "3.0.0", "resolved": "https://registry.npmjs.org/shebang-regex/-/shebang-regex-3.0.0.tgz", "integrity": "sha512-7++dFhtcx3353uBaq8DDR4NuxBetBzC7ZQOhmTQInHEd6bSrXdiEyzCvG07Z44UYdLShWUyXt5M/yhz8ekcb1A==", "dev": true }, "shell-quote": { "version": "1.8.3", "resolved": "https://registry.npmjs.org/shell-quote/-/shell-quote-1.8.3.tgz", "integrity": "sha512-ObmnIF4hXNg1BqhnHmgbDETF8dLPCggZWBjkQfhZpbszZnYur5DUljTcCHii5LC3J5E0yeO/1LIMyH+UvHQgyw==", "dev": true }, "side-channel": { "version": "1.1.0", "resolved": "https://registry.npmjs.org/side-channel/-/side-channel-1.1.0.tgz", "integrity": "sha512-ZX99e6tRweoUXqR+VBrslhda51Nh5MTQwou5tnUDgbtyM0dBgmhEDtWGP/xbKn6hqfPRHujUNwz5fy/wbbhnpw==", "dev": true, "requires": { "es-errors": "^1.3.0", "object-inspect": "^1.13.3", "side-channel-list": "^1.0.0", "side-channel-map": "^1.0.1", "side-channel-weakmap": "^1.0.2" } }, "side-channel-list": { "version": "1.0.0", "resolved": "https://registry.npmjs.org/side-channel-list/-/side-channel-list-1.0.0.tgz", "integrity": "sha512-FCLHtRD/gnpCiCHEiJLOwdmFP+wzCmDEkc9y7NsYxeF4u7Btsn1ZuwgwJGxImImHicJArLP4R0yX4c2KCrMrTA==", "dev": true, "requires": { "es-errors": "^1.3.0", "object-inspect": "^1.13.3" } }, "side-channel-map": 
{ "version": "1.0.1", "resolved": "https://registry.npmjs.org/side-channel-map/-/side-channel-map-1.0.1.tgz", "integrity": "sha512-VCjCNfgMsby3tTdo02nbjtM/ewra6jPHmpThenkTYh8pG9ucZ/1P8So4u4FGBek/BjpOVsDCMoLA/iuBKIFXRA==", "dev": true, "requires": { "call-bound": "^1.0.2", "es-errors": "^1.3.0", "get-intrinsic": "^1.2.5", "object-inspect": "^1.13.3" } }, "side-channel-weakmap": { "version": "1.0.2", "resolved": "https://registry.npmjs.org/side-channel-weakmap/-/side-channel-weakmap-1.0.2.tgz", "integrity": "sha512-WPS/HvHQTYnHisLo9McqBHOJk2FkHO/tlpvldyrnem4aeQp4hai3gythswg6p01oSoTl58rcpiFAjF2br2Ak2A==", "dev": true, "requires": { "call-bound": "^1.0.2", "es-errors": "^1.3.0", "get-intrinsic": "^1.2.5", "object-inspect": "^1.13.3", "side-channel-map": "^1.0.1" } }, "slash": { "version": "4.0.0", "resolved": "https://registry.npmjs.org/slash/-/slash-4.0.0.tgz", "integrity": "sha512-3dOsAHXXUkQTpOYcoAxLIorMTp4gIQr5IW3iVb7A7lFIp0VHhnynm9izx6TssdrIcVIESAlVjtnO2K8bg+Coew==", "dev": true }, "sockjs": { "version": "0.3.24", "resolved": "https://registry.npmjs.org/sockjs/-/sockjs-0.3.24.tgz", "integrity": "sha512-GJgLTZ7vYb/JtPSSZ10hsOYIvEYsjbNU+zPdIHcUaWVNUEPivzxku31865sSSud0Da0W4lEeOPlmw93zLQchuQ==", "dev": true, "requires": { "faye-websocket": "^0.11.3", "uuid": "^8.3.2", "websocket-driver": "^0.7.4" } }, "source-map": { "version": "0.6.1", "resolved": "https://registry.npmjs.org/source-map/-/source-map-0.6.1.tgz", "integrity": "sha512-UjgapumWlbMhkBgzT7Ykc5YXUT46F0iKu8SGXq0bcwP5dz/h0Plj6enJqjz1Zbq2l5WaqYnrVbwWOWMyF3F47g==", "dev": true }, "source-map-support": { "version": "0.5.21", "resolved": "https://registry.npmjs.org/source-map-support/-/source-map-support-0.5.21.tgz", "integrity": "sha512-uBHU3L3czsIyYXKX88fdrGovxdSCoTGDRZ6SYXtSRxLZUzHg5P/66Ht6uoUlHu9EZod+inXhKo3qQgwXUT/y1w==", "dev": true, "requires": { "buffer-from": "^1.0.0", "source-map": "^0.6.0" } }, "spdy": { "version": "4.0.2", "resolved": "https://registry.npmjs.org/spdy/-/spdy-4.0.2.tgz", "integrity": "sha512-r46gZQZQV+Kl9oItvl1JZZqJKGr+oEkB08A6BzkiR7593/7IbtuncXHd2YoYeTsG4157ZssMu9KYvUHLcjcDoA==", "dev": true, "requires": { "debug": "^4.1.0", "handle-thing": "^2.0.0", "http-deceiver": "^1.2.7", "select-hose": "^2.0.0", "spdy-transport": "^3.0.0" }, "dependencies": { "debug": { "version": "4.3.4", "resolved": "https://registry.npmjs.org/debug/-/debug-4.3.4.tgz", "integrity": "sha512-PRWFHuSU3eDtQJPvnNY7Jcket1j0t5OuOsFzPPzsekD52Zl8qUfFIPEiswXqIvHWGVHOgX+7G/vCNNhehwxfkQ==", "dev": true, "requires": { "ms": "2.1.2" } }, "ms": { "version": "2.1.2", "resolved": "https://registry.npmjs.org/ms/-/ms-2.1.2.tgz", "integrity": "sha512-sGkPx+VjMtmA6MX27oA4FBFELFCZZ4S4XqeGOXCv68tT+jb3vk/RyaKWP0PTKyWtmLSM0b+adUTEvbs1PEaH2w==", "dev": true } } }, "spdy-transport": { "version": "3.0.0", "resolved": "https://registry.npmjs.org/spdy-transport/-/spdy-transport-3.0.0.tgz", "integrity": "sha512-hsLVFE5SjA6TCisWeJXFKniGGOpBgMLmerfO2aCyCU5s7nJ/rpAepqmFifv/GCbSbueEeAJJnmSQ2rKC/g8Fcw==", "dev": true, "requires": { "debug": "^4.1.0", "detect-node": "^2.0.4", "hpack.js": "^2.1.6", "obuf": "^1.1.2", "readable-stream": "^3.0.6", "wbuf": "^1.7.3" }, "dependencies": { "debug": { "version": "4.3.4", "resolved": "https://registry.npmjs.org/debug/-/debug-4.3.4.tgz", "integrity": "sha512-PRWFHuSU3eDtQJPvnNY7Jcket1j0t5OuOsFzPPzsekD52Zl8qUfFIPEiswXqIvHWGVHOgX+7G/vCNNhehwxfkQ==", "dev": true, "requires": { "ms": "2.1.2" } }, "ms": { "version": "2.1.2", "resolved": "https://registry.npmjs.org/ms/-/ms-2.1.2.tgz", "integrity": 
"sha512-sGkPx+VjMtmA6MX27oA4FBFELFCZZ4S4XqeGOXCv68tT+jb3vk/RyaKWP0PTKyWtmLSM0b+adUTEvbs1PEaH2w==", "dev": true }, "readable-stream": { "version": "3.6.0", "resolved": "https://registry.npmjs.org/readable-stream/-/readable-stream-3.6.0.tgz", "integrity": "sha512-BViHy7LKeTz4oNnkcLJ+lVSL6vpiFeX6/d3oSH8zCW7UxP2onchk+vTGB143xuFjHS3deTgkKoXXymXqymiIdA==", "dev": true, "requires": { "inherits": "^2.0.3", "string_decoder": "^1.1.1", "util-deprecate": "^1.0.1" } } } }, "statuses": { "version": "2.0.1", "resolved": "https://registry.npmjs.org/statuses/-/statuses-2.0.1.tgz", "integrity": "sha512-RwNA9Z/7PrK06rYLIzFMlaF+l73iwpzsqRIFgbMLbTcLD6cOao82TaWefPXQvB2fOC4AjuYSEndS7N/mTCbkdQ==", "dev": true }, "string_decoder": { "version": "1.1.1", "resolved": "https://registry.npmjs.org/string_decoder/-/string_decoder-1.1.1.tgz", "integrity": "sha512-n/ShnvDi6FHbbVfviro+WojiFzv+s8MPMHBczVePfUpDJLwoLT0ht1l4YwBCbi8pJAveEEdnkHyPyTP/mzRfwg==", "dev": true, "requires": { "safe-buffer": "~5.1.0" } }, "supports-color": { "version": "8.1.1", "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-8.1.1.tgz", "integrity": "sha512-MpUEN2OodtUzxvKQl72cUF7RQ5EiHsGvSsVG0ia9c5RbWGL2CI4C7EpPS8UTBIplnlzZiNuV56w+FuNxy3ty2Q==", "dev": true, "requires": { "has-flag": "^4.0.0" } }, "supports-preserve-symlinks-flag": { "version": "1.0.0", "resolved": "https://registry.npmjs.org/supports-preserve-symlinks-flag/-/supports-preserve-symlinks-flag-1.0.0.tgz", "integrity": "sha512-ot0WnXS9fgdkgIcePe6RHNk1WA8+muPa6cSjeR3V8K27q9BB1rTE3R1p7Hv0z1ZyAc8s6Vvv8DIyWf681MAt0w==", "dev": true }, "tapable": { "version": "2.2.1", "resolved": "https://registry.npmjs.org/tapable/-/tapable-2.2.1.tgz", "integrity": "sha512-GNzQvQTOIP6RyTfE2Qxb8ZVlNmw0n88vp1szwWRimP02mnTsx3Wtn5qRdqY9w2XduFNUgvOwhNnQsjwCp+kqaQ==", "dev": true }, "terser": { "version": "5.34.1", "resolved": "https://registry.npmjs.org/terser/-/terser-5.34.1.tgz", "integrity": "sha512-FsJZ7iZLd/BXkz+4xrRTGJ26o/6VTjQytUk8b8OxkwcD2I+79VPJlz7qss1+zE7h8GNIScFqXcDyJ/KqBYZFVA==", "dev": true, "requires": { "@jridgewell/source-map": "^0.3.3", "acorn": "^8.8.2", "commander": "^2.20.0", "source-map-support": "~0.5.20" } }, "terser-webpack-plugin": { "version": "5.3.10", "resolved": "https://registry.npmjs.org/terser-webpack-plugin/-/terser-webpack-plugin-5.3.10.tgz", "integrity": "sha512-BKFPWlPDndPs+NGGCr1U59t0XScL5317Y0UReNrHaw9/FwhPENlq6bfgs+4yPfyP51vqC1bQ4rp1EfXW5ZSH9w==", "dev": true, "requires": { "@jridgewell/trace-mapping": "^0.3.20", "jest-worker": "^27.4.5", "schema-utils": "^3.1.1", "serialize-javascript": "^6.0.1", "terser": "^5.26.0" }, "dependencies": { "ajv": { "version": "6.12.6", "resolved": "https://registry.npmjs.org/ajv/-/ajv-6.12.6.tgz", "integrity": "sha512-j3fVLgvTo527anyYyJOGTYJbG+vnnQYvE0m5mmkc1TK+nxAppkCLMIL0aZ4dblVCNoGShhm+kzE4ZUykBoMg4g==", "dev": true, "requires": { "fast-deep-equal": "^3.1.1", "fast-json-stable-stringify": "^2.0.0", "json-schema-traverse": "^0.4.1", "uri-js": "^4.2.2" } }, "ajv-keywords": { "version": "3.5.2", "resolved": "https://registry.npmjs.org/ajv-keywords/-/ajv-keywords-3.5.2.tgz", "integrity": "sha512-5p6WTN0DdTGVQk6VjcEju19IgaHudalcfabD7yhDGeA6bcQnmL+CpveLJq/3hvfwd1aof6L386Ougkx6RfyMIQ==", "dev": true, "requires": {} }, "json-schema-traverse": { "version": "0.4.1", "resolved": "https://registry.npmjs.org/json-schema-traverse/-/json-schema-traverse-0.4.1.tgz", "integrity": "sha512-xbbCH5dCYU5T8LcEhhuh7HJ88HXuW3qsI3Y0zOZFKfZEHcpWiHU/Jxzk629Brsab/mMiHQti9wMP+845RPe3Vg==", "dev": true }, "schema-utils": { "version": "3.3.0", 
"resolved": "https://registry.npmjs.org/schema-utils/-/schema-utils-3.3.0.tgz", "integrity": "sha512-pN/yOAvcC+5rQ5nERGuwrjLlYvLTbCibnZ1I7B1LaiAz9BRBlE9GMgE/eqV30P7aJQUf7Ddimy/RsbYO/GrVGg==", "dev": true, "requires": { "@types/json-schema": "^7.0.8", "ajv": "^6.12.5", "ajv-keywords": "^3.5.2" } } } }, "thingies": { "version": "1.21.0", "resolved": "https://registry.npmjs.org/thingies/-/thingies-1.21.0.tgz", "integrity": "sha512-hsqsJsFMsV+aD4s3CWKk85ep/3I9XzYV/IXaSouJMYIoDlgyi11cBhsqYe9/geRfB0YIikBQg6raRaM+nIMP9g==", "dev": true, "requires": {} }, "thunky": { "version": "1.1.0", "resolved": "https://registry.npmjs.org/thunky/-/thunky-1.1.0.tgz", "integrity": "sha512-eHY7nBftgThBqOyHGVN+l8gF0BucP09fMo0oO/Lb0w1OF80dJv+lDVpXG60WMQvkcxAkNybKsrEIE3ZtKGmPrA==", "dev": true }, "to-regex-range": { "version": "5.0.1", "resolved": "https://registry.npmjs.org/to-regex-range/-/to-regex-range-5.0.1.tgz", "integrity": "sha512-65P7iz6X5yEr1cwcgvQxbbIw7Uk3gOy5dIdtZ4rDveLqhrdJP+Li/Hx6tyK0NEb+2GCyneCMJiGqrADCSNk8sQ==", "dev": true, "requires": { "is-number": "^7.0.0" } }, "toidentifier": { "version": "1.0.1", "resolved": "https://registry.npmjs.org/toidentifier/-/toidentifier-1.0.1.tgz", "integrity": "sha512-o5sSPKEkg/DIQNmH43V0/uerLrpzVedkUh8tGNvaeXpfpuwjKenlSox/2O/BTlZUtEe+JG7s5YhEz608PlAHRA==", "dev": true }, "tree-dump": { "version": "1.0.3", "resolved": "https://registry.npmjs.org/tree-dump/-/tree-dump-1.0.3.tgz", "integrity": "sha512-il+Cv80yVHFBwokQSfd4bldvr1Md951DpgAGfmhydt04L+YzHgubm2tQ7zueWDcGENKHq0ZvGFR/hjvNXilHEg==", "dev": true, "requires": {} }, "tslib": { "version": "2.8.1", "resolved": "https://registry.npmjs.org/tslib/-/tslib-2.8.1.tgz", "integrity": "sha512-oJFu94HQb+KVduSUQL7wnpmqnfmLsOA/nAh6b6EH0wCEoK0/mPeXU6c3wKDV83MkOuHPRHtSXKKU99IBazS/2w==", "dev": true }, "type-is": { "version": "1.6.18", "resolved": "https://registry.npmjs.org/type-is/-/type-is-1.6.18.tgz", "integrity": "sha512-TkRKr9sUTxEH8MdfuCSP7VizJyzRNMjj2J2do2Jr3Kym598JVdEksuzPQCnlFPW4ky9Q+iA+ma9BGm06XQBy8g==", "dev": true, "requires": { "media-typer": "0.3.0", "mime-types": "~2.1.24" } }, "unpipe": { "version": "1.0.0", "resolved": "https://registry.npmjs.org/unpipe/-/unpipe-1.0.0.tgz", "integrity": "sha512-pjy2bYhSsufwWlKwPc+l3cN7+wuJlK6uz0YdJEOlQDbl6jo/YlPi4mb8agUkVC8BF7V8NuzeyPNqRksA3hztKQ==", "dev": true }, "unstable_wasm": { "version": "file:../pkg" }, "update-browserslist-db": { "version": "1.1.1", "resolved": "https://registry.npmjs.org/update-browserslist-db/-/update-browserslist-db-1.1.1.tgz", "integrity": "sha512-R8UzCaa9Az+38REPiJ1tXlImTJXlVfgHZsglwBD/k6nj76ctsH1E3q4doGrukiLQd3sGQYu56r5+lo5r94l29A==", "dev": true, "requires": { "escalade": "^3.2.0", "picocolors": "^1.1.0" } }, "uri-js": { "version": "4.4.1", "resolved": "https://registry.npmjs.org/uri-js/-/uri-js-4.4.1.tgz", "integrity": "sha512-7rKUyy33Q1yc98pQ1DAmLtwX109F7TIfWlW1Ydo8Wl1ii1SeHieeh0HHfPeL2fMXK6z0s8ecKs9frCuLJvndBg==", "dev": true, "requires": { "punycode": "^2.1.0" } }, "util-deprecate": { "version": "1.0.2", "resolved": "https://registry.npmjs.org/util-deprecate/-/util-deprecate-1.0.2.tgz", "integrity": "sha1-RQ1Nyfpw3nMnYvvS1KKJgUGaDM8=", "dev": true }, "utils-merge": { "version": "1.0.1", "resolved": "https://registry.npmjs.org/utils-merge/-/utils-merge-1.0.1.tgz", "integrity": "sha512-pMZTvIkT1d+TFGvDOqodOclx0QWkkgi6Tdoa8gC8ffGAAqz9pzPTZWAybbsHHoED/ztMtkv/VoYTYyShUn81hA==", "dev": true }, "uuid": { "version": "8.3.2", "resolved": "https://registry.npmjs.org/uuid/-/uuid-8.3.2.tgz", "integrity": 
"sha512-+NYs2QeMWy+GWFOEm9xnn6HCDp0l7QBD7ml8zLUmJ+93Q5NF0NocErnwkTkXVFNiX3/fpC6afS8Dhb/gz7R7eg==", "dev": true }, "vary": { "version": "1.1.2", "resolved": "https://registry.npmjs.org/vary/-/vary-1.1.2.tgz", "integrity": "sha1-IpnwLG3tMNSllhsLn3RSShj2NPw=", "dev": true }, "watchpack": { "version": "2.4.2", "resolved": "https://registry.npmjs.org/watchpack/-/watchpack-2.4.2.tgz", "integrity": "sha512-TnbFSbcOCcDgjZ4piURLCbJ3nJhznVh9kw6F6iokjiFPl8ONxe9A6nMDVXDiNbrSfLILs6vB07F7wLBrwPYzJw==", "dev": true, "requires": { "glob-to-regexp": "^0.4.1", "graceful-fs": "^4.1.2" } }, "wbuf": { "version": "1.7.3", "resolved": "https://registry.npmjs.org/wbuf/-/wbuf-1.7.3.tgz", "integrity": "sha512-O84QOnr0icsbFGLS0O3bI5FswxzRr8/gHwWkDlQFskhSPryQXvrTMxjxGP4+iWYoauLoBvfDpkrOauZ+0iZpDA==", "dev": true, "requires": { "minimalistic-assert": "^1.0.0" } }, "webpack": { "version": "5.95.0", "resolved": "https://registry.npmjs.org/webpack/-/webpack-5.95.0.tgz", "integrity": "sha512-2t3XstrKULz41MNMBF+cJ97TyHdyQ8HCt//pqErqDvNjU9YQBnZxIHa11VXsi7F3mb5/aO2tuDxdeTPdU7xu9Q==", "dev": true, "requires": { "@types/estree": "^1.0.5", "@webassemblyjs/ast": "^1.12.1", "@webassemblyjs/wasm-edit": "^1.12.1", "@webassemblyjs/wasm-parser": "^1.12.1", "acorn": "^8.7.1", "acorn-import-attributes": "^1.9.5", "browserslist": "^4.21.10", "chrome-trace-event": "^1.0.2", "enhanced-resolve": "^5.17.1", "es-module-lexer": "^1.2.1", "eslint-scope": "5.1.1", "events": "^3.2.0", "glob-to-regexp": "^0.4.1", "graceful-fs": "^4.2.11", "json-parse-even-better-errors": "^2.3.1", "loader-runner": "^4.2.0", "mime-types": "^2.1.27", "neo-async": "^2.6.2", "schema-utils": "^3.2.0", "tapable": "^2.1.1", "terser-webpack-plugin": "^5.3.10", "watchpack": "^2.4.1", "webpack-sources": "^3.2.3" }, "dependencies": { "ajv": { "version": "6.12.6", "resolved": "https://registry.npmjs.org/ajv/-/ajv-6.12.6.tgz", "integrity": "sha512-j3fVLgvTo527anyYyJOGTYJbG+vnnQYvE0m5mmkc1TK+nxAppkCLMIL0aZ4dblVCNoGShhm+kzE4ZUykBoMg4g==", "dev": true, "requires": { "fast-deep-equal": "^3.1.1", "fast-json-stable-stringify": "^2.0.0", "json-schema-traverse": "^0.4.1", "uri-js": "^4.2.2" } }, "ajv-keywords": { "version": "3.5.2", "resolved": "https://registry.npmjs.org/ajv-keywords/-/ajv-keywords-3.5.2.tgz", "integrity": "sha512-5p6WTN0DdTGVQk6VjcEju19IgaHudalcfabD7yhDGeA6bcQnmL+CpveLJq/3hvfwd1aof6L386Ougkx6RfyMIQ==", "dev": true, "requires": {} }, "json-schema-traverse": { "version": "0.4.1", "resolved": "https://registry.npmjs.org/json-schema-traverse/-/json-schema-traverse-0.4.1.tgz", "integrity": "sha512-xbbCH5dCYU5T8LcEhhuh7HJ88HXuW3qsI3Y0zOZFKfZEHcpWiHU/Jxzk629Brsab/mMiHQti9wMP+845RPe3Vg==", "dev": true }, "schema-utils": { "version": "3.3.0", "resolved": "https://registry.npmjs.org/schema-utils/-/schema-utils-3.3.0.tgz", "integrity": "sha512-pN/yOAvcC+5rQ5nERGuwrjLlYvLTbCibnZ1I7B1LaiAz9BRBlE9GMgE/eqV30P7aJQUf7Ddimy/RsbYO/GrVGg==", "dev": true, "requires": { "@types/json-schema": "^7.0.8", "ajv": "^6.12.5", "ajv-keywords": "^3.5.2" } } } }, "webpack-cli": { "version": "5.0.1", "resolved": "https://registry.npmjs.org/webpack-cli/-/webpack-cli-5.0.1.tgz", "integrity": "sha512-S3KVAyfwUqr0Mo/ur3NzIp6jnerNpo7GUO6so51mxLi1spqsA17YcMXy0WOIJtBSnj748lthxC6XLbNKh/ZC+A==", "dev": true, "requires": { "@discoveryjs/json-ext": "^0.5.0", "@webpack-cli/configtest": "^2.0.1", "@webpack-cli/info": "^2.0.1", "@webpack-cli/serve": "^2.0.1", "colorette": "^2.0.14", "commander": "^9.4.1", "cross-spawn": "^7.0.3", "envinfo": "^7.7.3", "fastest-levenshtein": "^1.0.12", "import-local": "^3.0.2", 
"interpret": "^3.1.1", "rechoir": "^0.8.0", "webpack-merge": "^5.7.3" }, "dependencies": { "commander": { "version": "9.4.1", "resolved": "https://registry.npmjs.org/commander/-/commander-9.4.1.tgz", "integrity": "sha512-5EEkTNyHNGFPD2H+c/dXXfQZYa/scCKasxWcXJaWnNJ99pnQN9Vnmqow+p+PlFPE63Q6mThaZws1T+HxfpgtPw==", "dev": true } } }, "webpack-dev-middleware": { "version": "7.4.2", "resolved": "https://registry.npmjs.org/webpack-dev-middleware/-/webpack-dev-middleware-7.4.2.tgz", "integrity": "sha512-xOO8n6eggxnwYpy1NlzUKpvrjfJTvae5/D6WOK0S2LSo7vjmo5gCM1DbLUmFqrMTJP+W/0YZNctm7jasWvLuBA==", "dev": true, "requires": { "colorette": "^2.0.10", "memfs": "^4.6.0", "mime-types": "^2.1.31", "on-finished": "^2.4.1", "range-parser": "^1.2.1", "schema-utils": "^4.0.0" } }, "webpack-dev-server": { "version": "5.2.1", "resolved": "https://registry.npmjs.org/webpack-dev-server/-/webpack-dev-server-5.2.1.tgz", "integrity": "sha512-ml/0HIj9NLpVKOMq+SuBPLHcmbG+TGIjXRHsYfZwocUBIqEvws8NnS/V9AFQ5FKP+tgn5adwVwRrTEpGL33QFQ==", "dev": true, "requires": { "@types/bonjour": "^3.5.13", "@types/connect-history-api-fallback": "^1.5.4", "@types/express": "^4.17.21", "@types/express-serve-static-core": "^4.17.21", "@types/serve-index": "^1.9.4", "@types/serve-static": "^1.15.5", "@types/sockjs": "^0.3.36", "@types/ws": "^8.5.10", "ansi-html-community": "^0.0.8", "bonjour-service": "^1.2.1", "chokidar": "^3.6.0", "colorette": "^2.0.10", "compression": "^1.7.4", "connect-history-api-fallback": "^2.0.0", "express": "^4.21.2", "graceful-fs": "^4.2.6", "http-proxy-middleware": "^2.0.7", "ipaddr.js": "^2.1.0", "launch-editor": "^2.6.1", "open": "^10.0.3", "p-retry": "^6.2.0", "schema-utils": "^4.2.0", "selfsigned": "^2.4.1", "serve-index": "^1.9.1", "sockjs": "^0.3.24", "spdy": "^4.0.2", "webpack-dev-middleware": "^7.4.2", "ws": "^8.18.0" } }, "webpack-merge": { "version": "5.8.0", "resolved": "https://registry.npmjs.org/webpack-merge/-/webpack-merge-5.8.0.tgz", "integrity": "sha512-/SaI7xY0831XwP6kzuwhKWVKDP9t1QY1h65lAFLbZqMPIuYcD9QAW4u9STIbU9kaJbPBB/geU/gLr1wDjOhQ+Q==", "dev": true, "requires": { "clone-deep": "^4.0.1", "wildcard": "^2.0.0" } }, "webpack-sources": { "version": "3.2.3", "resolved": "https://registry.npmjs.org/webpack-sources/-/webpack-sources-3.2.3.tgz", "integrity": "sha512-/DyMEOrDgLKKIG0fmvtz+4dUX/3Ghozwgm6iPp8KRhvn+eQf9+Q7GWxVNMk3+uCPWfdXYC4ExGBckIXdFEfH1w==", "dev": true }, "websocket-driver": { "version": "0.7.4", "resolved": "https://registry.npmjs.org/websocket-driver/-/websocket-driver-0.7.4.tgz", "integrity": "sha512-b17KeDIQVjvb0ssuSDF2cYXSg2iztliJ4B9WdsuB6J952qCPKmnVq4DyW5motImXHDC1cBT/1UezrJVsKw5zjg==", "dev": true, "requires": { "http-parser-js": ">=0.5.1", "safe-buffer": ">=5.1.0", "websocket-extensions": ">=0.1.1" } }, "websocket-extensions": { "version": "0.1.4", "resolved": "https://registry.npmjs.org/websocket-extensions/-/websocket-extensions-0.1.4.tgz", "integrity": "sha512-OqedPIGOfsDlo31UNwYbCFMSaO9m9G/0faIHj5/dZFDMFqPTcx6UwqyOy3COEaEOg/9VsGIpdqn62W5KhoKSpg==", "dev": true }, "which": { "version": "2.0.2", "resolved": "https://registry.npmjs.org/which/-/which-2.0.2.tgz", "integrity": "sha512-BLI3Tl1TW3Pvl70l3yq3Y64i+awpwXqsGBYWkkqMtnbXgrMD+yj7rhW0kuEDxzJaYXGjEW5ogapKNMEKNMjibA==", "dev": true, "requires": { "isexe": "^2.0.0" } }, "wildcard": { "version": "2.0.0", "resolved": "https://registry.npmjs.org/wildcard/-/wildcard-2.0.0.tgz", "integrity": "sha512-JcKqAHLPxcdb9KM49dufGXn2x3ssnfjbcaQdLlfZsL9rH9wgDQjUtDxbo8NE0F6SFvydeu1VhZe7hZuHsB2/pw==", "dev": true }, "ws": { "version": "8.18.2", 
"resolved": "https://registry.npmjs.org/ws/-/ws-8.18.2.tgz", "integrity": "sha512-DMricUmwGZUVr++AEAe2uiVM7UoO9MAVZMDu05UQOaUII0lp+zOzLLU4Xqh/JvTqklB1T4uELaaPBKyjE1r4fQ==", "dev": true, "requires": {} } } }
tokenizers/tokenizers/examples/unstable_wasm/www/package-lock.json/0
{ "file_path": "tokenizers/tokenizers/examples/unstable_wasm/www/package-lock.json", "repo_id": "tokenizers", "token_count": 193670 }
337
#![allow(clippy::map_entry)] use super::{Pair, WithFirstLastIterator, Word, BPE}; use crate::parallelism::*; use crate::tokenizer::{AddedToken, Result, Trainer}; use crate::utils::progress::{ProgressBar, ProgressStyle}; use ahash::{AHashMap, AHashSet}; use compact_str::CompactString; use dary_heap::OctonaryHeap; use serde::{Deserialize, Serialize}; use std::cmp::Ordering; use std::collections::HashSet; #[derive(Debug, Eq)] struct Merge { pair: Pair, count: u64, pos: AHashSet<usize>, } impl PartialEq for Merge { fn eq(&self, other: &Self) -> bool { self.count == other.count && self.pair == other.pair } } impl PartialOrd for Merge { fn partial_cmp(&self, other: &Self) -> Option<Ordering> { Some(self.cmp(other)) } } impl Ord for Merge { fn cmp(&self, other: &Self) -> Ordering { if self.count != other.count { self.count.cmp(&other.count) } else { // Here we want ascending order other.pair.cmp(&self.pair) } } } struct Config { min_frequency: u64, vocab_size: usize, show_progress: bool, special_tokens: Vec<AddedToken>, limit_alphabet: Option<usize>, initial_alphabet: AHashSet<char>, continuing_subword_prefix: Option<String>, end_of_word_suffix: Option<String>, max_token_length: Option<usize>, } /// A `BpeTrainerBuilder` can be used to create a `BpeTrainer` with a custom /// configuration. pub struct BpeTrainerBuilder { config: Config, } impl Default for BpeTrainerBuilder { fn default() -> Self { Self { config: Config { min_frequency: 0, vocab_size: 30000, show_progress: true, special_tokens: vec![], limit_alphabet: None, initial_alphabet: AHashSet::new(), continuing_subword_prefix: None, end_of_word_suffix: None, max_token_length: None, }, } } } impl BpeTrainerBuilder { /// Constructs a new `BpeTrainerBuilder` pub fn new() -> Self { Self::default() } /// Set the expected minimum frequency #[must_use] pub fn min_frequency(mut self, frequency: u64) -> Self { self.config.min_frequency = frequency; self } /// Set the vocabulary size #[must_use] pub fn vocab_size(mut self, size: usize) -> Self { self.config.vocab_size = size; self } /// Set whether to show progress #[must_use] pub fn show_progress(mut self, show: bool) -> Self { self.config.show_progress = show; self } /// Set the special tokens #[must_use] pub fn special_tokens(mut self, tokens: Vec<AddedToken>) -> Self { self.config.special_tokens = tokens; self } /// Set whether to limit the alphabet #[must_use] pub fn limit_alphabet(mut self, limit: usize) -> Self { self.config.limit_alphabet = Some(limit); self } /// Set the initial alphabet #[must_use] pub fn initial_alphabet(mut self, alphabet: HashSet<char>) -> Self { let mut initial_alphabet = AHashSet::with_capacity(alphabet.len()); initial_alphabet.extend(alphabet); self.config.initial_alphabet = initial_alphabet; self } /// Set the continuing_subword_prefix #[must_use] pub fn continuing_subword_prefix(mut self, prefix: String) -> Self { self.config.continuing_subword_prefix = Some(prefix); self } /// Set the end_of_word_suffix #[must_use] pub fn end_of_word_suffix(mut self, suffix: String) -> Self { self.config.end_of_word_suffix = Some(suffix); self } /// Set max_token_length #[must_use] pub fn max_token_length(mut self, max_token_length: Option<usize>) -> Self { self.config.max_token_length = max_token_length; self } /// Constructs the final BpeTrainer pub fn build(self) -> BpeTrainer { BpeTrainer { min_frequency: self.config.min_frequency, vocab_size: self.config.vocab_size, show_progress: self.config.show_progress, special_tokens: self.config.special_tokens, limit_alphabet: 
self.config.limit_alphabet, initial_alphabet: self.config.initial_alphabet, continuing_subword_prefix: self.config.continuing_subword_prefix, end_of_word_suffix: self.config.end_of_word_suffix, max_token_length: self.config.max_token_length, words: AHashMap::new(), } } } /// In charge of training a `BPE` model /// /// # Examples /// /// ``` /// use tokenizers::tokenizer::Trainer; /// use tokenizers::models::bpe::{BPE, BpeTrainer}; /// /// let sequences = vec![ "Hello", "World" ]; /// /// let mut trainer = BpeTrainer::default(); /// trainer.feed(sequences.iter(), |s| Ok(vec![s.to_owned()])); /// /// let mut model = BPE::default(); /// let special_tokens = trainer.train(&mut model).unwrap(); /// ``` #[non_exhaustive] #[derive(Debug, Clone, PartialEq, Serialize, Deserialize, Eq)] pub struct BpeTrainer { /// The minimum frequency a pair must have to produce a merge operation pub min_frequency: u64, /// The target vocabulary size pub vocab_size: usize, /// Whether to show progress while training pub show_progress: bool, /// A list of special tokens that the model should know of pub special_tokens: Vec<AddedToken>, /// Whether to limit the number of initial tokens that can be kept before computing merges pub limit_alphabet: Option<usize>, /// The initial alphabet we absolutely want to include. This allows us to cover /// some characters that are not necessarily in the training set pub initial_alphabet: AHashSet<char>, /// An optional prefix to use on any subword that exists only behind another one pub continuing_subword_prefix: Option<String>, /// An optional suffix to characterize an end-of-word subword pub end_of_word_suffix: Option<String>, /// An optional parameter to limit the max length of any single token pub max_token_length: Option<usize>, words: AHashMap<CompactString, u64>, } impl Default for BpeTrainer { fn default() -> Self { Self::builder().build() } } impl BpeTrainer { pub fn new(min_frequency: u64, vocab_size: usize) -> Self { Self { min_frequency, vocab_size, ..Default::default() } } pub fn builder() -> BpeTrainerBuilder { BpeTrainerBuilder::new() } /// Set up a progress bar if asked to show progress fn setup_progress(&self) -> Option<ProgressBar> { if self.show_progress { let p = ProgressBar::new(0); p.set_style( ProgressStyle::default_bar() .template("[{elapsed_precise}] {msg:<30!} {wide_bar} {pos:<9!}/{len:>9!}") .expect("Invalid progress template"), ); Some(p) } else { None } } /// Set the progress bar in the finish state fn finalize_progress(&self, p: &Option<ProgressBar>, final_len: usize) { if let Some(p) = p { p.set_length(final_len as u64); p.finish(); println!(); } } /// Update the progress bar with the new provided length and message fn update_progress(&self, p: &Option<ProgressBar>, len: usize, message: &'static str) { if let Some(p) = p { p.set_message(message); p.set_length(len as u64); p.reset(); } } /// Add the provided special tokens to the initial vocabulary fn add_special_tokens( &self, w2id: &mut AHashMap<CompactString, u32>, id2w: &mut Vec<CompactString>, ) { for token in &self.special_tokens { // get hash of content if !w2id.contains_key(&CompactString::from(&token.content)) { id2w.push(CompactString::from(&token.content)); w2id.insert(CompactString::from(&token.content), (id2w.len() - 1) as u32); } } } /// Compute the initial alphabet and limit it if relevant fn compute_alphabet( &self, wc: &AHashMap<CompactString, u64>, w2id: &mut AHashMap<CompactString, u32>, id2w: &mut Vec<CompactString>, ) { // Compute the alphabet from seen words let mut alphabet:
AHashMap<char, usize> = AHashMap::new(); for (word, count) in wc { for c in word.chars() { *alphabet.entry(c).or_default() += *count as usize; } } // Also include anything from the provided initial alphabet for c in &self.initial_alphabet { *alphabet.entry(*c).or_default() = usize::MAX; } let mut kept = alphabet.iter().collect::<Vec<_>>(); // Compute the number of chars to remove from the alphabet // If `limit_alphabet < initial_alphabet.len()`, some of these initial characters // will be removed let to_remove = self .limit_alphabet .map(|limit| alphabet.len().saturating_sub(limit)) .unwrap_or(0); // Remove the unwanted chars if to_remove > 0 { kept.sort_unstable_by_key(|k| *k.1); kept.drain(..to_remove); } // Keep the initial alphabet (sorted for determinism) kept.sort_unstable_by_key(|k| *k.0 as u32); kept.into_iter().for_each(|(c, _)| { let s = c.to_string(); /* if !w2id.contains_key(&s) { id2w.push(s.clone()); w2id.insert(s, (id2w.len() - 1) as u32); } */ // u64 hash version if !w2id.contains_key(&CompactString::from(&s)) { id2w.push(CompactString::from(&s)); w2id.insert(CompactString::from(&s), (id2w.len() - 1) as u32); } }); } /// Tokenize words and add subwords to the vocabulary when relevant fn tokenize_words( &self, wc: &AHashMap<CompactString, u64>, w2id: &mut AHashMap<CompactString, u32>, id2w: &mut Vec<CompactString>, p: &Option<ProgressBar>, ) -> (Vec<Word>, Vec<u64>) { let mut words: Vec<Word> = Vec::with_capacity(wc.len()); let mut counts: Vec<u64> = Vec::with_capacity(wc.len()); for (word, count) in wc { let mut current_word = Word::new(); counts.push(*count); for (is_first, is_last, c) in word.chars().with_first_and_last() { let mut s = c.to_string(); if w2id.contains_key(&CompactString::from(&s)) { // Found the initial char in the authorized alphabet // Add the `continuing_subword_prefix` if relevant if !is_first { if let Some(prefix) = &self.continuing_subword_prefix { s.insert_str(0, prefix); } } // Add the `end_of_word_suffix` if relevant if is_last { if let Some(suffix) = &self.end_of_word_suffix { s.push_str(suffix); } } // Insert the new formed string if necessary if !w2id.contains_key(&CompactString::from(&s)) { id2w.push(CompactString::from(&s)); w2id.insert(CompactString::from(&s), (id2w.len() - 1) as u32); } current_word.add(w2id[&CompactString::from(&s)], 1); // We do not care about the len here } } words.push(current_word); if let Some(p) = p { p.inc(1); } } (words, counts) } fn count_pairs( &self, words: &[Word], counts: &[u64], p: &Option<ProgressBar>, ) -> (AHashMap<Pair, i32>, AHashMap<Pair, AHashSet<usize>>) { words .maybe_par_iter() .enumerate() .map(|(i, word)| { let mut pair_counts = AHashMap::new(); let mut where_to_update: AHashMap<Pair, AHashSet<usize>> = AHashMap::new(); for window in word.get_chars().windows(2) { let cur_pair: Pair = (window[0], window[1]); // Initialize pair_counts and where_to_update for this pair if we just saw it // Then update counts *pair_counts.entry(cur_pair).or_default() += counts[i] as i32; where_to_update.entry(cur_pair).or_default().insert(i); } if let Some(p) = &p { p.inc(1); } (pair_counts, where_to_update) }) .reduce( || (AHashMap::new(), AHashMap::new()), |(mut pair_counts, mut where_to_update), (pc, wtu)| { for (k, v) in pc { *pair_counts.entry(k).or_default() += v; } for (k, v) in wtu { where_to_update.entry(k).or_default().extend(v); } (pair_counts, where_to_update) }, ) } pub fn do_train( &self, word_counts: &AHashMap<CompactString, u64>, model: &mut BPE, ) -> Result<Vec<AddedToken>> { let mut word_to_id: 
AHashMap<CompactString, u32> = AHashMap::with_capacity(self.vocab_size); let mut id_to_word: Vec<CompactString> = Vec::with_capacity(self.vocab_size); let max_token_length: usize = self.max_token_length.unwrap_or(usize::MAX); let progress = self.setup_progress(); // // 1. Add all special tokens to the vocabulary // self.add_special_tokens(&mut word_to_id, &mut id_to_word); // // 2. Compute the initial alphabet // self.compute_alphabet(word_counts, &mut word_to_id, &mut id_to_word); // // 3. Tokenize words // self.update_progress(&progress, word_counts.len(), "Tokenize words"); let (mut words, counts) = self.tokenize_words(word_counts, &mut word_to_id, &mut id_to_word, &progress); self.finalize_progress(&progress, words.len()); // // 4. Count pairs in words // self.update_progress(&progress, words.len(), "Count pairs"); let (mut pair_counts, mut where_to_update) = self.count_pairs(&words, &counts, &progress); // Insert them in the queue let mut queue = OctonaryHeap::with_capacity(pair_counts.len()); where_to_update.drain().for_each(|(pair, pos)| { let count = pair_counts[&pair]; if count > 0 { queue.push(Merge { pair, count: count as u64, pos, }); } }); self.finalize_progress(&progress, words.len()); // // 5. Do merges // self.update_progress(&progress, self.vocab_size, "Compute merges"); let mut merges: Vec<(Pair, u32)> = vec![]; loop { // Stop as soon as we have a big enough vocabulary if word_to_id.len() >= self.vocab_size { break; } let Some(mut top) = queue.pop() else { break; }; if top.count != pair_counts[&top.pair] as u64 { top.count = pair_counts[&top.pair] as u64; queue.push(top); continue; } if top.count < 1 || self.min_frequency > top.count { break; } let part_a = &id_to_word[top.pair.0 as usize]; let mut part_b = id_to_word[top.pair.1 as usize].as_str(); // Build new token if let Some(prefix) = &self.continuing_subword_prefix { if let Some(rest) = part_b.strip_prefix(prefix) { part_b = rest; } } let new_token = format!("{part_a}{part_b}"); // implement sentencepiece-like merge. // if this code were to be merged, integrate a way in the python bindings to communicate this variable // default should be 0/None to maintain previous behavior. 16 is the spm default. // Insert new token if it does not already exist let new_token_id = word_to_id .get(&CompactString::from(&new_token)) .copied() .unwrap_or(id_to_word.len() as u32); if !word_to_id.contains_key(&CompactString::from(&new_token)) { id_to_word.push(CompactString::from(&new_token)); word_to_id.insert(CompactString::from(&new_token), new_token_id); } merges.push((top.pair, new_token_id)); // Merge the new pair in every words // Safety: This is just a type assertion, the code below may no longer be safe // if the type of `pos` changes let pos: &AHashSet<usize> = &top.pos; let words_len = words.len(); struct WordPtr(*mut Word); // Safety: We do not actually use this for concurrent access to the same memory, // only to different chunks within the same allocation. unsafe impl Sync for WordPtr {} let word_start = WordPtr(words.as_mut_ptr()); let changes = pos .maybe_par_iter() .flat_map(|&i| { // We can merge each of these words in parallel here because each position // can be there only once (AHashSet). So this is safe. 
unsafe { assert!(i < words_len); // This is words[i], but avoids needing to go through &T (which triggers UB) let word = word_start.0.add(i); // let word: &mut Word = &mut (*word); (*word) .merge(top.pair.0, top.pair.1, new_token_id, max_token_length) .into_iter() .map(|c| (c, i)) .collect::<Vec<_>>() } }) .collect::<Vec<_>>(); // Introduce new formed pairs for ((pair, change), iw) in changes { let count = change * counts[iw] as i32; *pair_counts.entry(pair).or_default() += count; if change > 0 { where_to_update.entry(pair).or_default().insert(iw); } } where_to_update.drain().for_each(|(pair, pos)| { let count = pair_counts[&pair]; if count > 0 { queue.push(Merge { pair, count: count as u64, pos, }); } }); if let Some(p) = &progress { p.inc(1); } } self.finalize_progress(&progress, merges.len()); // Transfer new vocab & options to model //model.vocab = word_to_id; model.vocab = word_to_id .into_iter() // we have to look up the string in id_to_word because the key in word_to_id is a hash .map(|(_key, val)| (id_to_word[val as usize].to_string(), val)) .collect(); model.vocab_r = model .vocab .iter() .map(|(key, val)| (*val, key.to_owned())) .collect(); model.merges = merges .into_iter() .enumerate() .map(|(i, (pair, new_token_id))| (pair, (i as u32, new_token_id))) .collect(); model.continuing_subword_prefix = self.continuing_subword_prefix.clone(); model.end_of_word_suffix = self.end_of_word_suffix.clone(); Ok(self.special_tokens.clone()) } } impl Trainer for BpeTrainer { type Model = BPE; /// Train a BPE model fn train(&self, model: &mut BPE) -> Result<Vec<AddedToken>> { self.do_train(&self.words, model) } /// Whether we should show progress fn should_show_progress(&self) -> bool { self.show_progress } fn feed<I, S, F>(&mut self, iterator: I, process: F) -> Result<()> where I: Iterator<Item = S> + Send, S: AsRef<str> + Send, F: Fn(&str) -> Result<Vec<String>> + Sync, { let words: Result<AHashMap<CompactString, u64>> = iterator .maybe_par_bridge() .map(|sequence| { let words = process(sequence.as_ref())?; let mut map = AHashMap::new(); for word in words { *map.entry(CompactString::from(word)).or_default() += 1; } Ok(map) }) .reduce( || Ok(AHashMap::new()), |acc, ws| { let mut acc = acc?; for (k, v) in ws? { *acc.entry(k).or_default() += v; } Ok(acc) }, ); self.words = words?; Ok(()) } } #[cfg(test)] mod tests { use super::{BpeTrainer, Pair, BPE}; use ahash::AHashMap; use compact_str::CompactString; #[test] fn test_train() { let word_counts: AHashMap<CompactString, u64> = [ ("roses".into(), 1), ("are".into(), 2), ("red".into(), 1), ("voilets".into(), 1), ("blue".into(), 1), ("BERT".into(), 1), ("is".into(), 2), ("big".into(), 1), ("and".into(), 1), ("so".into(), 1), ("GPT-2".into(), 1), ] .iter() .cloned() .collect(); let trainer = BpeTrainer::builder() .show_progress(false) .min_frequency(2) .build(); let mut model = BPE::default(); trainer.do_train(&word_counts, &mut model).unwrap(); // Vocab should contain all of the characters from the `word_counts` mapping // as well as three merges: 're', 'are', and 'is'. 
let expected_vocab: AHashMap<String, u32> = [ ("-".into(), 0), ("2".into(), 1), ("B".into(), 2), ("E".into(), 3), ("G".into(), 4), ("P".into(), 5), ("R".into(), 6), ("T".into(), 7), ("a".into(), 8), ("b".into(), 9), ("d".into(), 10), ("e".into(), 11), ("g".into(), 12), ("i".into(), 13), ("l".into(), 14), ("n".into(), 15), ("o".into(), 16), ("r".into(), 17), ("s".into(), 18), ("t".into(), 19), ("u".into(), 20), ("v".into(), 21), ("re".into(), 22), ("are".into(), 23), ("is".into(), 24), ] .iter() .cloned() .collect(); assert_eq!(model.vocab, expected_vocab); // The keys in `merges` are pairs of symbols, the values are tuples of (rank, id), // where 'rank' determines the order in which this merge will be applied during // tokenization, and 'id' is the vocab id of the symbol resulting from merging // the pair of symbols in the corresponding key. let expected_merges: AHashMap<Pair, (u32, u32)> = [ ((17, 11), (0, 22)), // 'r' + 'e' -> 're' ((8, 22), (1, 23)), // 'a' + 're' -> 'are' ((13, 18), (2, 24)), // 'i' + 's' -> 'is' ] .iter() .cloned() .collect(); assert_eq!(model.merges, expected_merges); } #[test] fn bpe_test_max_token_length_16() { /* bpe_test_max_token_length series of tests test the max_token_length flag of bpetrainer // this is the more robust version that only tests max length of learned tokens // (pre) tokenizer settings or vocab can be easily modified when necessary */ let max_token_length = 16; let long_word_counts: AHashMap<CompactString, u64> = [ ("singlelongtokenwithoutcasechange", 2), ("singleLongTokenWithCamelCaseChange", 2), ("Longsingletokenwithpunctu@t!onwithin", 2), ("Anotherlongsingletokenwithnumberw1th1n", 2), ("짧은한글문자열짧은한", 2), // korean 10 char ("긴한글문자열긴한글문자열긴한글문", 2), // korean 16 char ("短字符串短字符串短字", 2), //simplified chinese 10 char ("长字符串长字符串长字符串长字符串", 2), // simp. chinese 16 char ("短い文字列短い文字列", 2), // japanese 10 char ("長い文字列長い文字列長い文字列長", 2), // japanese 16 char ("so", 2), ("GPT-2", 2), ] .iter() .map(|(key, value)| (CompactString::from(key.to_string()), *value)) .collect(); let trainer = BpeTrainer::builder() .max_token_length(Some(max_token_length)) .show_progress(false) .min_frequency(0) .build(); let mut model = BPE::default(); trainer.do_train(&long_word_counts, &mut model).unwrap(); let vocab = model.get_vocab(); for token in vocab.keys() { assert!( token.chars().count() <= max_token_length, "token too long : {} , chars().count() = {}", token, token.chars().count() ) } } #[test] fn bpe_test_max_token_length_direct_assert() { /* more direct version of bpe_test_max_token_length test // directly compares tokens with known expected values. // maybe unstable depending on specific settings or changes. 
*/ let long_word_counts: AHashMap<CompactString, u64> = [ ("sin", 2), ("Sin", 2), ("Lon", 2), ("Ano", 2), ("짧은한", 2), ("긴한글", 2), ("短字符", 2), ("长字符", 2), ("短い文", 2), ("長い文", 2), ("so", 2), ("GP", 2), ] .iter() .map(|(key, value)| (CompactString::from(key.to_string()), *value)) .collect(); let trainer = BpeTrainer::builder() .max_token_length(Some(2)) .show_progress(false) .min_frequency(0) .build(); let mut model = BPE::default(); trainer.do_train(&long_word_counts, &mut model).unwrap(); let trained_vocab: AHashMap<String, u32> = model.get_vocab().into_iter().collect(); let expected_vocab: AHashMap<String, u32> = [ ("短", 12), ("n", 6), ("i", 5), ("s", 8), ("字符", 23), ("長", 14), ("긴", 17), ("い文", 22), ("L", 2), ("in", 21), ("o", 7), ("은한", 29), ("S", 4), ("P", 3), ("so", 27), ("符", 13), ("文", 11), ("字", 10), ("짧", 19), ("GP", 25), ("글", 16), ("G", 1), ("An", 24), ("长", 15), ("A", 0), ("Lo", 26), ("긴한", 28), ("い", 9), ("한", 20), ("은", 18), ] .iter() .cloned() .map(|(k, v)| (k.to_string(), v)) .collect(); assert_eq!(trained_vocab, expected_vocab) } }
tokenizers/tokenizers/src/models/bpe/trainer.rs/0
{ "file_path": "tokenizers/tokenizers/src/models/bpe/trainer.rs", "repo_id": "tokenizers", "token_count": 14629 }
338
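A minimal sketch of driving the trainer above, patterned on its own doc-comment example; the corpus, vocabulary size, and [UNK] token are illustrative assumptions rather than values taken from the source:

use tokenizers::models::bpe::{BpeTrainer, BPE};
use tokenizers::tokenizer::{AddedToken, Result, Trainer};

fn main() -> Result<()> {
    // Configure the trainer through its builder (mirrors the Config struct above).
    let mut trainer = BpeTrainer::builder()
        .show_progress(false)
        .min_frequency(2)
        .vocab_size(1_000)
        .special_tokens(vec![AddedToken::from("[UNK]", true)])
        .build();

    // `feed` accumulates word counts; the closure stands in for a real pre-tokenizer.
    let corpus = vec!["roses are red", "violets are blue"];
    trainer.feed(corpus.iter(), |s| {
        Ok(s.split_whitespace().map(str::to_owned).collect())
    })?;

    // `train` fills the model in place and returns the special tokens,
    // exactly as `do_train` does at the end of its merge loop.
    let mut model = BPE::default();
    let special_tokens = trainer.train(&mut model)?;
    assert_eq!(special_tokens.len(), 1);
    Ok(())
}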
use crate::processors::byte_level::bytes_char; use crate::tokenizer::{NormalizedString, Normalizer, Result}; use crate::utils::macro_rules_attribute; use ahash::{AHashMap, AHashSet}; use std::sync::LazyLock; #[derive(Clone, Debug)] #[macro_rules_attribute(impl_serde_type!)] pub struct ByteLevel; static BYTES_CHAR: LazyLock<AHashMap<u8, char>> = LazyLock::new(bytes_char); impl Default for ByteLevel { fn default() -> Self { Self::new() } } impl ByteLevel { pub fn new() -> Self { Self {} } pub fn alphabet() -> AHashSet<char> { BYTES_CHAR.values().copied().collect() } } impl Normalizer for ByteLevel { /// Strip the normalized string inplace fn normalize(&self, normalized: &mut NormalizedString) -> Result<()> { if !normalized.is_empty() { let s = normalized.get(); let mut transformations: Vec<(char, isize)> = Vec::with_capacity(s.len()); for (i, cur_char) in s.char_indices() { let size = cur_char.len_utf8(); transformations.extend( s.as_bytes()[i..i + size] .iter() .enumerate() .map(|(i, b)| (BYTES_CHAR[b], isize::from(i > 0))), ); } normalized.transform(transformations, 0); } Ok(()) } } #[cfg(test)] mod tests { use super::*; #[test] fn test_byte_level_normalize() { let original = "Hello 我今天能为你做什么"; let normalized = "HelloĠæĪijä»Ĭ天èĥ½ä¸ºä½łåģļä»Ģä¹Ī"; assert_ne!(original, normalized); let mut n = NormalizedString::from(original); let byte_level = ByteLevel::new(); byte_level.normalize(&mut n).unwrap(); assert_eq!(&n.get(), &normalized); assert_eq!( n, NormalizedString::new( original.to_string(), normalized.to_string(), vec![ (0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (5, 6), (6, 9), (6, 9), (6, 9), (6, 9), (6, 9), (6, 9), (9, 12), (9, 12), (9, 12), (9, 12), (9, 12), (9, 12), (12, 15), (12, 15), (12, 15), (12, 15), (12, 15), (12, 15), (15, 18), (15, 18), (15, 18), (15, 18), (15, 18), (15, 18), (18, 21), (18, 21), (18, 21), (18, 21), (18, 21), (18, 21), (21, 24), (21, 24), (21, 24), (21, 24), (21, 24), (21, 24), (24, 27), (24, 27), (24, 27), (24, 27), (24, 27), (24, 27), (27, 30), (27, 30), (27, 30), (27, 30), (27, 30), (27, 30), (30, 33), (30, 33), (30, 33), (30, 33), (30, 33), (30, 33) ], 0 ) ); assert_eq!( n.alignments_original(), vec![ (0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 7), (7, 13), (7, 13), (7, 13), (13, 19), (13, 19), (13, 19), (19, 25), (19, 25), (19, 25), (25, 31), (25, 31), (25, 31), (31, 37), (31, 37), (31, 37), (37, 43), (37, 43), (37, 43), (43, 49), (43, 49), (43, 49), (49, 55), (49, 55), (49, 55), (55, 61), (55, 61), (55, 61) ] ); } }
tokenizers/tokenizers/src/normalizers/byte_level.rs/0
{ "file_path": "tokenizers/tokenizers/src/normalizers/byte_level.rs", "repo_id": "tokenizers", "token_count": 3351 }
339
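A short sketch of applying the byte-level normalizer above from outside the crate; the `tokenizers::normalizers::ByteLevel` path is an assumption based on the module layout, and the sample string is arbitrary:

use tokenizers::normalizers::ByteLevel;
use tokenizers::tokenizer::{NormalizedString, Normalizer, Result};

fn main() -> Result<()> {
    // Every byte is remapped to a printable char, so multi-byte characters
    // expand while the alignments still point back at the original bytes.
    let mut n = NormalizedString::from("Hello 今天");
    ByteLevel::new().normalize(&mut n)?;
    assert_ne!(n.get(), "Hello 今天");

    // The byte-level alphabet covers all 256 byte values, one char each.
    assert_eq!(ByteLevel::alphabet().len(), 256);
    Ok(())
}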
use crate::pre_tokenizers::PreTokenizerWrapper;
use crate::tokenizer::{PreTokenizedString, PreTokenizer, Result};
use crate::utils::macro_rules_attribute;
use serde::{Deserialize, Serialize};

#[derive(Clone, Debug, PartialEq)]
#[macro_rules_attribute(impl_serde_type!)]
pub struct Sequence {
    pretokenizers: Vec<PreTokenizerWrapper>,
}

impl Sequence {
    pub fn new(pretokenizers: Vec<PreTokenizerWrapper>) -> Self {
        Self { pretokenizers }
    }
}

impl AsRef<[PreTokenizerWrapper]> for Sequence {
    fn as_ref(&self) -> &[PreTokenizerWrapper] {
        &self.pretokenizers
    }
}

impl AsMut<[PreTokenizerWrapper]> for Sequence {
    fn as_mut(&mut self) -> &mut [PreTokenizerWrapper] {
        &mut self.pretokenizers
    }
}

impl IntoIterator for Sequence {
    type Item = PreTokenizerWrapper;
    type IntoIter = std::vec::IntoIter<Self::Item>;
    fn into_iter(self) -> Self::IntoIter {
        self.pretokenizers.into_iter()
    }
}

impl PreTokenizer for Sequence {
    fn pre_tokenize(&self, pretokenized: &mut PreTokenizedString) -> Result<()> {
        for pretokenizer in &self.pretokenizers {
            pretokenizer.pre_tokenize(pretokenized)?;
        }
        Ok(())
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use crate::pre_tokenizers::{punctuation::Punctuation, whitespace::WhitespaceSplit};
    use crate::{OffsetReferential, OffsetType};

    #[test]
    fn sequence_basic() {
        let pretokenizers = vec![
            PreTokenizerWrapper::WhitespaceSplit(WhitespaceSplit),
            PreTokenizerWrapper::Punctuation(Punctuation::default()),
        ];
        let pretok = Sequence::new(pretokenizers);
        let mut pretokenized: PreTokenizedString = "Hey friend!     How are you?!?".into();
        pretok.pre_tokenize(&mut pretokenized).unwrap();
        assert_eq!(
            pretokenized
                .get_splits(OffsetReferential::Original, OffsetType::Byte)
                .into_iter()
                .map(|(s, o, _)| (s, o))
                .collect::<Vec<_>>(),
            vec![
                ("Hey", (0, 3)),
                ("friend", (4, 10)),
                ("!", (10, 11)),
                ("How", (16, 19)),
                ("are", (20, 23)),
                ("you", (24, 27)),
                ("?", (27, 28)),
                ("!", (28, 29)),
                ("?", (29, 30)),
            ]
        );
    }
}
tokenizers/tokenizers/src/pre_tokenizers/sequence.rs/0
{ "file_path": "tokenizers/tokenizers/src/pre_tokenizers/sequence.rs", "repo_id": "tokenizers", "token_count": 1124 }
340
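A sketch of composing the `Sequence` wrapper above, mirroring its `sequence_basic` test; the external `tokenizers::...` paths are assumed from the crate-internal imports:

use tokenizers::pre_tokenizers::punctuation::Punctuation;
use tokenizers::pre_tokenizers::sequence::Sequence;
use tokenizers::pre_tokenizers::whitespace::WhitespaceSplit;
use tokenizers::pre_tokenizers::PreTokenizerWrapper;
use tokenizers::tokenizer::{PreTokenizedString, PreTokenizer, Result};
use tokenizers::{OffsetReferential, OffsetType};

fn main() -> Result<()> {
    // Pre-tokenizers run in order: whitespace splitting first, then punctuation.
    let pretok = Sequence::new(vec![
        PreTokenizerWrapper::WhitespaceSplit(WhitespaceSplit),
        PreTokenizerWrapper::Punctuation(Punctuation::default()),
    ]);
    let mut pretokenized: PreTokenizedString = "Hey friend!".into();
    pretok.pre_tokenize(&mut pretokenized)?;
    for (slice, offsets, _) in
        pretokenized.get_splits(OffsetReferential::Original, OffsetType::Byte)
    {
        // Prints "Hey" @ (0, 3), "friend" @ (4, 10), "!" @ (10, 11)
        println!("{slice:?} @ {offsets:?}");
    }
    Ok(())
}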
use crate::{ normalizer::Range, Encoding, NormalizedString, OffsetReferential, Offsets, Result, Token, }; use std::collections::HashMap; /// Various possible types of offsets #[derive(Debug, Clone, Copy, PartialEq, Eq)] pub enum OffsetType { Byte, Char, None, } /// Wrapper for a subpart of a `NormalizedString`. /// /// This Split contains the underlying `NormalizedString` as well as its offsets /// in the original string. These offsets are in the `original` referential. /// It also contains any `Token` associated with the current split #[derive(Debug, Clone, PartialEq, Eq)] pub struct Split { /// The underlying `NormalizedString`. Each SubString is represented by a `NormalizedString` /// and in the end we might be carrying a lot of SubString representing various parts of the /// original input string. normalized: NormalizedString, /// Optional Tokens associated with this Split tokens: Option<Vec<Token>>, } impl From<NormalizedString> for Split { fn from(n: NormalizedString) -> Self { Self { normalized: n, tokens: None, } } } impl From<(NormalizedString, Option<Vec<Token>>)> for Split { fn from(f: (NormalizedString, Option<Vec<Token>>)) -> Self { Self { normalized: f.0, tokens: f.1, } } } /// The `PreTokenizedString` is in charge of splitting an underlying string, /// making sure everything is fine while doing so, and providing ways to normalize /// and tokenize these splits. /// Once everything has been normalized and tokenized, the `PreTokenizedString` is able /// to build an `Encoding` with all the relevant offsets and word ids, relative to the /// original string. #[derive(Debug, Clone, PartialEq, Eq)] pub struct PreTokenizedString { original: String, splits: Vec<Split>, } impl PreTokenizedString { /// Split the `PreTokenizedString` by providing a `split_fn` in charge of splitting /// each substring (`NormalizedString`) into multiple parts. /// /// `split_fn` takes a `NormalizedString` and is in charge of returning an iterator /// over the produced `NormalizedString`. `split_fn` is free to modify these /// `NormalizedString` as relevant, as long as it respects the constraint stated below. /// /// There is only one constraint that *MUST* be respected: /// > The produced `NormalizedString`, if combined back together, must have the /// > same `original` string as the original one given to `split_fn`. This concretely /// > means that for the offset tracking to work as expected, `split_fn` must produce /// > "splits" of the original string. pub fn split<F, U, R>(&mut self, mut split_fn: F) -> Result<()> where F: FnMut(usize, NormalizedString) -> Result<U>, U: IntoIterator<Item = R>, R: Into<Split>, { // new_splits is at least as big as self.splits let mut new_splits = Vec::with_capacity(self.splits.len()); for (i, original_split) in self.splits.drain(..).enumerate() { if original_split.tokens.is_some() { new_splits.push(original_split); continue; } new_splits.extend( split_fn(i, original_split.normalized)? .into_iter() .filter_map(|split| { let split: Split = split.into(); if split.normalized.is_empty() { None } else { Some(split) } }), ); } self.splits = new_splits; Ok(()) } /// Normalize all the splits that do not have attached `Tokens`, using the provided /// `normalize` function.
pub fn normalize<F>(&mut self, normalize: F) -> Result<()> where F: Fn(&mut NormalizedString) -> Result<()>, { for split in self.splits.iter_mut().filter(|s| s.tokens.is_none()) { normalize(&mut split.normalized)?; } Ok(()) } /// Tokenize all the splits that do not have attached `Tokens`, using the provided /// `tokenize` function pub fn tokenize<F>(&mut self, tokenize: F) -> Result<()> where F: Fn(&NormalizedString) -> Result<Vec<Token>>, { for split in self.splits.iter_mut().filter(|s| s.tokens.is_none()) { split.tokens = Some(tokenize(&split.normalized)?); } Ok(()) } /// Transform the current `PreTokenizedString` into an `Encoding`. /// /// If a `word_idx` is provided, any word in the generated `Encoding` /// will be set to this value. This is generally used with pre-tokenized /// input that does not need the `PreTokenizedString` to generate word ids. /// /// This method will fail if some splits do not have associated `Token`. pub fn into_encoding( self, word_idx: Option<u32>, type_id: u32, offset_type: OffsetType, ) -> Result<Encoding> { if self.splits.is_empty() { Ok(Encoding::default()) } else if !self.splits.iter().all(|split| split.tokens.is_some()) { Err("Split has not been tokenized, call `PreTokenizedString::tokenize` first".into()) } else { let offset_converter = match offset_type { OffsetType::Char => Some(BytesToCharOffsetConverter::new(&self.original)), OffsetType::Byte => None, OffsetType::None => { let tokens = self .splits .into_iter() .flat_map(|split| { split.tokens.unwrap().into_iter().map(|token| { // Replace this with the actual fields you need for the Encoding type (token.id, String::with_capacity(0), (0, 0), None, 0) }) }) .collect(); return Ok(tokens); } }; Ok(self .splits .into_iter() .enumerate() .flat_map(|(idx, split)| { let normalized = split.normalized; let offsets = normalized.offsets_original(); let offset_converter = &offset_converter; split.tokens.unwrap().into_iter().map(move |token| { let mut offsets = normalized .convert_offsets(Range::Normalized(token.offsets.0..token.offsets.1)) .map_or(token.offsets, |range| { (offsets.0 + range.start, offsets.0 + range.end) }); // Convert to char offsets if relevant if let Some(converter) = offset_converter { offsets = converter.convert(offsets).unwrap_or(offsets); } ( token.id, token.value, offsets, if word_idx.is_some() { word_idx } else { Some(idx as u32) }, type_id, ) }) }) .collect()) } } /// Returns a list of splits, each of them being a slice of the normalized /// string, the associated offsets either in the original or normalized /// referential, as well as the potential tokens pub fn get_splits( &self, offset_ref: OffsetReferential, offset_type: OffsetType, ) -> Vec<(&str, Offsets, &Option<Vec<Token>>)> { let offset_converter = match offset_type { OffsetType::Char => Some(BytesToCharOffsetConverter::new(&self.original)), OffsetType::Byte => None, OffsetType::None => None, }; let mut offset = 0; self.splits .iter() .map(|split| { let mut offsets = match offset_ref { OffsetReferential::Original => split.normalized.offsets_original(), OffsetReferential::Normalized => { let len = split.normalized.len(); offset += len; (offset - len, offset) } }; // Convert to char offsets if relevant if let Some(ref converter) = offset_converter { offsets = converter.convert(offsets).unwrap_or(offsets); } (split.normalized.get(), offsets, &split.tokens) }) .collect() } } impl From<NormalizedString> for PreTokenizedString { fn from(s: NormalizedString) -> Self { Self { original: s.get_original().to_owned(), splits: vec![Split {
normalized: s, tokens: None, }], } } } impl From<&str> for PreTokenizedString { fn from(s: &str) -> Self { let normalized: NormalizedString = s.into(); normalized.into() } } impl From<String> for PreTokenizedString { fn from(s: String) -> Self { let normalized: NormalizedString = s.into(); normalized.into() } } struct BytesToCharOffsetConverter { map: HashMap<usize, usize>, } impl BytesToCharOffsetConverter { pub fn new(sequence: &str) -> Self { Self { map: sequence .char_indices() .enumerate() .flat_map(|(i, (b, c))| { let mut n = 0; std::iter::repeat_with(move || { let o = (b + n, i); n += 1; o }) .take(c.len_utf8()) }) .collect(), } } pub fn convert(&self, offsets: Offsets) -> Option<Offsets> { match (self.map.get(&offsets.0), self.map.get(&offsets.1)) { (Some(start), Some(end)) => Some((*start, *end)), // If we reached the end, `end` is not in the map (Some(start), None) => { // But the one just before should be let last = self.map.get(&(offsets.1 - 1)).copied().unwrap_or(start + 1); Some((*start, last + 1)) } _ => None, } } }
tokenizers/tokenizers/src/tokenizer/pre_tokenizer.rs/0
{ "file_path": "tokenizers/tokenizers/src/tokenizer/pre_tokenizer.rs", "repo_id": "tokenizers", "token_count": 5310 }
341
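A compact sketch of the tokenize-then-encode flow defined above, using a toy tokenizer that emits one token per split with offsets in the normalized referential, as `into_encoding` expects; `Token::new` and the re-export paths are assumptions:

use tokenizers::tokenizer::{PreTokenizedString, Result, Token};
use tokenizers::OffsetType;

fn main() -> Result<()> {
    let mut pretok: PreTokenizedString = "hello".into();

    // Attach tokens to every split; `into_encoding` errors out (see the
    // error branch above) if any split is still missing its tokens.
    pretok.tokenize(|normalized| {
        Ok(vec![Token::new(
            0,
            normalized.get().to_owned(),
            (0, normalized.len()),
        )])
    })?;

    // word_idx = None: each split gets its own word id (here just one).
    let encoding = pretok.into_encoding(None, 0, OffsetType::Byte)?;
    assert_eq!(encoding.get_tokens(), &["hello"]);
    assert_eq!(encoding.get_offsets(), &[(0, 5)]);
    Ok(())
}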
mod common; use common::*; use tokenizers::tokenizer::AddedToken; macro_rules! check_offsets { ($input: expr, $output:expr, $offset:expr, $result:expr) => { let offsets = $output.get_offsets()[$offset]; assert_eq!(&$input[offsets.0..offsets.1], $result); }; } #[test] fn byte_level_basic() { // Without trimming offsets let tokenizer = get_byte_level(true, false); let input = "Hello there, how are you?"; let output = tokenizer.encode(input, false).unwrap(); check_offsets!(input, output, 0, "Hello"); check_offsets!(input, output, 1, " there"); check_offsets!(input, output, 2, ","); check_offsets!(input, output, 3, " how"); check_offsets!(input, output, 4, " are"); check_offsets!(input, output, 5, " you"); check_offsets!(input, output, 6, "?"); // And when trimming offsets: let tokenizer = get_byte_level(true, true); let input = "Hello there, how are you?"; let output = tokenizer.encode(input, false).unwrap(); check_offsets!(input, output, 0, "Hello"); check_offsets!(input, output, 1, "there"); check_offsets!(input, output, 2, ","); check_offsets!(input, output, 3, "how"); check_offsets!(input, output, 4, "are"); check_offsets!(input, output, 5, "you"); check_offsets!(input, output, 6, "?"); } #[test] fn byte_level_unicode() { let tokenizer = get_byte_level(true, false); let input = "i⭢j"; let output = tokenizer.encode(input, false).unwrap(); check_offsets!(input, output, 1, "⭢"); check_offsets!(input, output, 2, "⭢"); check_offsets!(input, output, 3, "⭢"); } #[test] fn byte_level_double_sequence() { let input_a = "My name is Anthony"; let input_b = "What is my name?"; // Without trimming offsets let tokenizer = get_byte_level(true, false); let output = tokenizer.encode((input_a, input_b), false).unwrap(); let offsets = output.get_offsets(); assert_eq!( offsets, &[ (0, 2), (2, 7), (7, 10), (10, 18), (0, 4), (4, 7), (7, 10), (10, 15), (15, 16) ] ); assert_eq!( output.get_word_ids(), &[ Some(0), Some(1), Some(2), Some(3), Some(0), Some(1), Some(2), Some(3), Some(4) ] ); assert_eq!(output.get_type_ids(), &[0, 0, 0, 0, 1, 1, 1, 1, 1]); // When trimming offsets let tokenizer = get_byte_level(true, true); let output = tokenizer.encode((input_a, input_b), false).unwrap(); let offsets = output.get_offsets(); assert_eq!( offsets, &[ (0, 2), (3, 7), (8, 10), (11, 18), (0, 4), (5, 7), (8, 10), (11, 15), (15, 16) ] ); } #[test] fn byte_level_pre_tokenized_sequence() { let input = ["My", "name", "is", "Anthonino"]; // Without trimming offsets let tokenizer = get_byte_level(true, false); let output = tokenizer.encode(&input[..], false).unwrap(); assert_eq!( output.get_tokens(), &["ĠMy", "Ġname", "Ġis", "ĠAnth", "on", "ino"] ); assert_eq!( output.get_word_ids(), &[Some(0), Some(1), Some(2), Some(3), Some(3), Some(3)] ); assert_eq!( output.get_offsets(), &[(0, 2), (0, 4), (0, 2), (0, 4), (4, 6), (6, 9)] ); } #[test] #[ignore] fn byte_level_pre_tokenized_sequence_with_trimming() { let input = ["My", "name", "is", "Anthonino"]; // When trimming offsets (expect same result) let tokenizer = get_byte_level(true, true); let output = tokenizer.encode(&input[..], false).unwrap(); assert_eq!( output.get_word_ids(), &[Some(0), Some(1), Some(2), Some(3), Some(3), Some(3)] ); assert_eq!( output.get_offsets(), &[(0, 2), (0, 4), (0, 2), (0, 4), (4, 6), (6, 9)] ); } #[test] fn split_on_added_tokens_bert() { let input = "Yesterday I saw a [MASK] far away"; let mut tokenizer = get_bert(); tokenizer.add_special_tokens(&[AddedToken::from("[MASK]", true)]); let output = tokenizer.encode(input, false).unwrap(); assert_eq!( 
output.get_offsets(), &[ (0, 9), (10, 11), (12, 15), (16, 17), (18, 24), (25, 28), (29, 33) ] ); assert_eq!( output.get_tokens(), &["yesterday", "i", "saw", "a", "[MASK]", "far", "away"] ); assert_eq!( output.get_word_ids(), &[ Some(0), Some(1), Some(2), Some(3), Some(4), Some(5), Some(6) ] ); }
tokenizers/tokenizers/tests/offsets.rs/0
{ "file_path": "tokenizers/tokenizers/tests/offsets.rs", "repo_id": "tokenizers", "token_count": 2497 }
342
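A note on what the offset tests above actually assert: each token's (start, end) pair must slice the original input back out, and enabling trim_offsets shrinks each span so the ByteLevel space marker is excluded. A toy illustration of that contract in plain JavaScript (not the tokenizers Rust API; the numbers come from the first test case above):

const input = "Hello there, how are you?";

// Without trimming, token 1's span keeps the leading space...
console.log(input.slice(5, 11)); // " there"

// ...with trim_offsets enabled, the span narrows to the visible word.
console.log(input.slice(6, 11)); // "there"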
{ "overrides": [ { "files": ["tests/**/*.js"], "options": { "printWidth": 10000000 } } ] }
transformers.js/.prettierrc/0
{ "file_path": "transformers.js/.prettierrc", "repo_id": "transformers.js", "token_count": 108 }
343
# Use custom models <include> { "path": "../snippets/4_custom-usage.snippet" } </include>
transformers.js/docs/source/custom_usage.md/0
{ "file_path": "transformers.js/docs/source/custom_usage.md", "repo_id": "transformers.js", "token_count": 40 }
344
#root { height: 100vh; width: 100vw; padding: 1rem; }
transformers.js/examples/cross-encoder/src/App.css/0
{ "file_path": "transformers.js/examples/cross-encoder/src/App.css", "repo_id": "transformers.js", "token_count": 29 }
345
import './style.css'; import * as THREE from 'three'; import { OrbitControls } from 'three/addons/controls/OrbitControls.js'; import { pipeline, env, RawImage } from '@xenova/transformers'; // Since we will download the model from the Hugging Face Hub, we can skip the local model check env.allowLocalModels = false; // Proxy the WASM backend to prevent the UI from freezing env.backends.onnx.wasm.proxy = true; // Constants const EXAMPLE_URL = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/bread_small.png'; const DEFAULT_SCALE = 0.75; // Reference the elements that we will need const status = document.getElementById('status'); const fileUpload = document.getElementById('upload'); const imageContainer = document.getElementById('container'); const example = document.getElementById('example'); // Create a new depth-estimation pipeline status.textContent = 'Loading model...'; const depth_estimator = await pipeline('depth-estimation', 'Xenova/depth-anything-small-hf'); status.textContent = 'Ready'; example.addEventListener('click', (e) => { e.preventDefault(); predict(EXAMPLE_URL); }); fileUpload.addEventListener('change', function (e) { const file = e.target.files[0]; if (!file) { return; } const reader = new FileReader(); // Set up a callback when the file is loaded reader.onload = e2 => predict(e2.target.result); reader.readAsDataURL(file); }); let onSliderChange; // Predict depth map for the given image async function predict(url) { imageContainer.innerHTML = ''; const image = await RawImage.fromURL(url); // Set up scene and slider controls const { canvas, setDisplacementMap } = setupScene(url, image.width, image.height); imageContainer.append(canvas); status.textContent = 'Analysing...'; const { depth } = await depth_estimator(image); setDisplacementMap(depth.toCanvas()); status.textContent = ''; // Add slider control const slider = document.createElement('input'); slider.type = 'range'; slider.min = 0; slider.max = 1; slider.step = 0.01; slider.addEventListener('input', (e) => { onSliderChange(parseFloat(e.target.value)); }); slider.defaultValue = DEFAULT_SCALE; imageContainer.append(slider); } function setupScene(url, w, h) { // Create new scene const canvas = document.createElement('canvas'); const width = canvas.width = imageContainer.offsetWidth; const height = canvas.height = imageContainer.offsetHeight; const scene = new THREE.Scene(); // Create camera and add it to the scene const camera = new THREE.PerspectiveCamera(30, width / height, 0.01, 10); camera.position.z = 2; scene.add(camera); const renderer = new THREE.WebGLRenderer({ canvas, antialias: true }); renderer.setSize(width, height); renderer.setPixelRatio(window.devicePixelRatio); // Add ambient light const light = new THREE.AmbientLight(0xffffff, 2); scene.add(light); // Load depth texture const image = new THREE.TextureLoader().load(url); image.colorSpace = THREE.SRGBColorSpace; const material = new THREE.MeshStandardMaterial({ map: image, side: THREE.DoubleSide, }); material.displacementScale = DEFAULT_SCALE; const setDisplacementMap = (canvas) => { material.displacementMap = new THREE.CanvasTexture(canvas); material.needsUpdate = true; } const setDisplacementScale = (scale) => { material.displacementScale = scale; material.needsUpdate = true; } onSliderChange = setDisplacementScale; // Create plane and rescale it so that max(w, h) = 1 const [pw, ph] = w > h ? 
[1, h / w] : [w / h, 1]; const geometry = new THREE.PlaneGeometry(pw, ph, w, h); const plane = new THREE.Mesh(geometry, material); scene.add(plane); // Add orbit controls const controls = new OrbitControls(camera, renderer.domElement); controls.enableDamping = true; renderer.setAnimationLoop(() => { renderer.render(scene, camera); controls.update(); }); window.addEventListener('resize', () => { const width = imageContainer.offsetWidth; const height = imageContainer.offsetHeight; camera.aspect = width / height; camera.updateProjectionMatrix(); renderer.setSize(width, height); }, false); return { canvas: renderer.domElement, setDisplacementMap, }; }
transformers.js/examples/depth-anything-client/main.js/0
{ "file_path": "transformers.js/examples/depth-anything-client/main.js", "repo_id": "transformers.js", "token_count": 1584 }
346
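Stripped of the Three.js scaffolding, the model interaction in the demo above is only a few lines. A minimal sketch of that core (same model id and helpers as the file above; the rendering step is replaced by a comment):

import { pipeline, RawImage } from '@xenova/transformers';

// Same depth-estimation pipeline the demo loads.
const depth_estimator = await pipeline('depth-estimation', 'Xenova/depth-anything-small-hf');

// Run it on an image and pull out the predicted depth map.
const image = await RawImage.fromURL('https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/bread_small.png');
const { depth } = await depth_estimator(image);

// The demo hands this canvas to THREE.CanvasTexture as a displacement map.
const depthCanvas = depth.toCanvas();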
#root { max-width: 960px; height: 100vh; margin: 0 auto; text-align: center; display: flex; justify-content: center; align-items: center; }
transformers.js/examples/musicgen-web/src/App.css/0
{ "file_path": "transformers.js/examples/musicgen-web/src/App.css", "repo_id": "transformers.js", "token_count": 60 }
347
* { box-sizing: border-box; padding: 0; margin: 0; font-family: sans-serif; } html, body { height: 100%; } body { padding: 16px 32px; } body, #container, #upload-button { display: flex; flex-direction: column; justify-content: center; align-items: center; } h1, h4 { text-align: center; } h4 { margin-top: 0.5rem; } #container { position: relative; width: 720px; height: 480px; max-width: 100%; max-height: 100%; border: 2px dashed #D1D5DB; border-radius: 0.75rem; overflow: hidden; margin-top: 1rem; background-size: 100% 100%; background-position: center; background-repeat: no-repeat; } #upload-button { gap: 0.4rem; font-size: 18px; cursor: pointer; } #upload { display: none; } svg { pointer-events: none; } #example { font-size: 14px; text-decoration: underline; cursor: pointer; } #example:hover { color: #2563EB; } canvas { position: absolute; width: 100%; height: 100%; } #status { min-height: 16px; margin: 8px 0; }
transformers.js/examples/remove-background-client/style.css/0
{ "file_path": "transformers.js/examples/remove-background-client/style.css", "repo_id": "transformers.js", "token_count": 422 }
348
import { env, AutoTokenizer, ClapTextModelWithProjection } from '@xenova/transformers'; import { getCachedFile } from './utils'; // Skip local model check env.allowLocalModels = false; class ApplicationSingleton { static model_id = 'Xenova/larger_clap_music_and_speech'; static BASE_URL = 'https://huggingface.co/datasets/Xenova/MusicBenchEmbedded/resolve/main/'; static tokenizer = null; static text_model = null; static embeddings = null; static async getInstance(progress_callback = null) { this.tokenizer ??= AutoTokenizer.from_pretrained(this.model_id, { progress_callback }); this.text_model ??= ClapTextModelWithProjection.from_pretrained(this.model_id, { progress_callback, quantized: true, // TODO allow user to select quantized or not }); this.embeddings ??= new Promise( (resolve, reject) => { getCachedFile(this.BASE_URL + 'audio-embeddings_52768-512_32bit.bin') .then((buffer) => { resolve(new Float32Array(buffer)); }) .catch(reject); } ); return Promise.all([this.tokenizer, this.text_model, this.embeddings]); } } function cosineSimilarity(query_embeds, database_embeds) { const EMBED_DIM = 512; const numDB = database_embeds.length / EMBED_DIM; const similarityScores = new Array(numDB); for (let i = 0; i < numDB; ++i) { const startOffset = i * EMBED_DIM; const dbVector = database_embeds.slice(startOffset, startOffset + EMBED_DIM); let dotProduct = 0; let normEmbeds = 0; let normDB = 0; for (let j = 0; j < EMBED_DIM; ++j) { const embedValue = query_embeds[j]; const dbValue = dbVector[j]; dotProduct += embedValue * dbValue; normEmbeds += embedValue * embedValue; normDB += dbValue * dbValue; } similarityScores[i] = dotProduct / (Math.sqrt(normEmbeds) * Math.sqrt(normDB)); } return similarityScores; } // Listen for messages from the main thread self.addEventListener('message', async (event) => { // Get the tokenizer, model, and embeddings. When called for the first time, // this will load the files and cache them for future use. const [tokenizer, text_model, embeddings] = await ApplicationSingleton.getInstance(self.postMessage); // Send the output back to the main thread self.postMessage({ status: 'ready' }); // Run tokenization const text_inputs = tokenizer(event.data.query, { padding: true, truncation: true }); // Compute embeddings const { text_embeds } = await text_model(text_inputs); // Compute similarity scores const scores = cosineSimilarity(text_embeds.data, embeddings); const output = scores .map((score, i) => [score, i]) // Save index .sort((a, b) => b[0] - a[0]) // Sort by scores .slice(0, 100); // Get top 100 results // Send the output back to the main thread self.postMessage({ status: 'complete', output: output, }); });
transformers.js/examples/semantic-audio-search/worker.js/0
{ "file_path": "transformers.js/examples/semantic-audio-search/worker.js", "repo_id": "transformers.js", "token_count": 1294 }
349
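The worker above inlines cosine similarity over one flat Float32Array so no per-row vectors need to be allocated up front. The formula it implements is cos(a, b) = dot(a, b) / (|a| · |b|); a toy check with tiny vectors (dimension shrunk from the real EMBED_DIM of 512 purely for illustration):

// Same arithmetic as the inner loop of cosineSimilarity above.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; ++i) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

console.log(cosine([1, 0], [1, 0])); // 1  (same direction)
console.log(cosine([1, 0], [0, 1])); // 0  (orthogonal)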
import './globals.css' import { Inter } from 'next/font/google' const inter = Inter({ subsets: ['latin'] }) export const metadata = { title: 'In-browser Semantic Image Search', description: 'Search for images using text (built w/ Transformers.js)', } export default function RootLayout({ children }) { return ( <html lang="en"> <body className={inter.className}>{children}</body> </html> ) }
transformers.js/examples/semantic-image-search-client/src/app/layout.js/0
{ "file_path": "transformers.js/examples/semantic-image-search-client/src/app/layout.js", "repo_id": "transformers.js", "token_count": 139 }
350
<!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8" /> <meta name="viewport" content="width=device-width, initial-scale=1.0" /> <title>Transformers.js - Text-to-speech demo</title> </head> <body> <div id="root"></div> <script type="module" src="/src/main.jsx"></script> </body> </html>
transformers.js/examples/text-to-speech-client/index.html/0
{ "file_path": "transformers.js/examples/text-to-speech-client/index.html", "repo_id": "transformers.js", "token_count": 136 }
351
import { marked } from 'marked'; import DOMPurify from 'dompurify'; import BotIcon from './icons/BotIcon'; import UserIcon from './icons/UserIcon'; import './Chat.css'; export default function Chat({ messages }) { const empty = messages.length === 0; return (<div className={`flex-1 p-6 max-w-[960px] w-full ${empty ? 'flex flex-col items-center justify-end' : 'space-y-4'}`}> {empty ? <div className="text-xl">Ready!</div> : messages.map((msg, i) => ( <div key={`message-${i}`} className="flex items-start space-x-4"> {msg.role === 'assistant' ? (<> <BotIcon className="h-6 w-6 min-h-6 min-w-6 my-3 text-gray-500 dark:text-gray-300" /> <div className="bg-gray-200 dark:bg-gray-700 rounded-lg p-4"> <p className="min-h-6 text-gray-800 dark:text-gray-200 overflow-wrap-anywhere">{ msg.content.length > 0 ? <span className="markdown" dangerouslySetInnerHTML={{ __html: DOMPurify.sanitize(marked.parse(msg.content)) }} /> : (<span className="h-6 flex items-center gap-1"> <span className="w-2.5 h-2.5 bg-gray-600 dark:bg-gray-300 rounded-full animate-pulse"></span> <span className="w-2.5 h-2.5 bg-gray-600 dark:bg-gray-300 rounded-full animate-pulse animation-delay-200"></span> <span className="w-2.5 h-2.5 bg-gray-600 dark:bg-gray-300 rounded-full animate-pulse animation-delay-400"></span> </span>) }</p> </div> </> ) : (<> <UserIcon className="h-6 w-6 min-h-6 min-w-6 my-3 text-gray-500 dark:text-gray-300" /> <div className="bg-blue-500 text-white rounded-lg p-4"> <p className="min-h-6 overflow-wrap-anywhere">{msg.content}</p> </div> </>) } </div> ))} </div>) }
transformers.js/examples/webgpu-chat/src/components/Chat.jsx/0
{ "file_path": "transformers.js/examples/webgpu-chat/src/components/Chat.jsx", "repo_id": "transformers.js", "token_count": 1362 }
352
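The assistant bubble above never injects raw model output: it parses the markdown with marked, then sanitizes the resulting HTML with DOMPurify before handing it to dangerouslySetInnerHTML. The same two-step pattern in isolation:

import { marked } from 'marked';
import DOMPurify from 'dompurify';

// Markdown from an untrusted source (here, a model) may smuggle in HTML.
const markdown = 'Hello **world** <img src=x onerror=alert(1)>';

// Parse first, then strip anything executable from the generated HTML.
const safeHtml = DOMPurify.sanitize(marked.parse(markdown));
// safeHtml keeps the <strong> tag but drops the onerror handler.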
import { defineConfig } from 'vite'; export default defineConfig({ build: { target: 'esnext' } });
transformers.js/examples/webgpu-clip/vite.config.js/0
{ "file_path": "transformers.js/examples/webgpu-clip/vite.config.js", "repo_id": "transformers.js", "token_count": 37 }
353
{ "name": "webgpu-video-depth-estimation", "private": true, "version": "0.0.0", "type": "module", "scripts": { "dev": "vite", "build": "vite build", "preview": "vite preview" }, "devDependencies": { "vite": "^5.2.0" }, "dependencies": { "@xenova/transformers": "github:xenova/transformers.js#v3" } }
transformers.js/examples/webgpu-video-depth-estimation/package.json/0
{ "file_path": "transformers.js/examples/webgpu-video-depth-estimation/package.json", "repo_id": "transformers.js", "token_count": 157 }
354
export default function BotIcon(props) { return ( <svg {...props} xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" strokeWidth="2" strokeLinecap="round" strokeLinejoin="round" > <path d="M12 8V4H8" /> <rect width="16" height="12" x="4" y="8" rx="2" /> <path d="M2 14h2" /> <path d="M20 14h2" /> <path d="M15 13v2" /> <path d="M9 13v2" /> </svg> ) }
transformers.js/examples/webgpu-vlm/src/components/icons/BotIcon.jsx/0
{ "file_path": "transformers.js/examples/webgpu-vlm/src/components/icons/BotIcon.jsx", "repo_id": "transformers.js", "token_count": 392 }
355
import random from typing import Optional, Tuple from optimum.exporters.onnx.config import TextDecoderOnnxConfig from optimum.utils import NormalizedTextConfig, DummyInputGenerator, DEFAULT_DUMMY_SHAPES, DummyTextInputGenerator, NormalizedConfig class OpenElmDummyPastKeyValuesGenerator(DummyInputGenerator): SUPPORTED_INPUT_NAMES = ("past_key_values", ) def __init__( self, task: str, normalized_config: NormalizedTextConfig, batch_size: int = DEFAULT_DUMMY_SHAPES["batch_size"], sequence_length: int = DEFAULT_DUMMY_SHAPES["sequence_length"], random_batch_size_range: Optional[Tuple[int, int]] = None, random_sequence_length_range: Optional[Tuple[int, int]] = None, **kwargs, ): self.num_layers = normalized_config.num_layers self.num_kv_heads = normalized_config.num_kv_heads self.num_query_heads = normalized_config.num_query_heads self.head_dim = normalized_config.head_dim self.hidden_size = normalized_config.model_dim if random_batch_size_range: low, high = random_batch_size_range self.batch_size = random.randint(low, high) else: self.batch_size = batch_size if random_sequence_length_range: low, high = random_sequence_length_range self.sequence_length = random.randint(low, high) else: self.sequence_length = sequence_length def generate(self, input_name: str, framework: str = "pt", int_dtype: str = "int64", float_dtype: str = "fp32"): data = [] for i in range(self.num_layers): kv_shape = ( self.batch_size, self.num_kv_heads[i], self.sequence_length, self.head_dim, ) data.append(( self.random_float_tensor(kv_shape, framework=framework, dtype=float_dtype), self.random_float_tensor(kv_shape, framework=framework, dtype=float_dtype), )) return data class OpenElmOnnxConfig(TextDecoderOnnxConfig): DEFAULT_ONNX_OPSET = 14 DUMMY_INPUT_GENERATOR_CLASSES = (DummyTextInputGenerator, OpenElmDummyPastKeyValuesGenerator) DUMMY_PKV_GENERATOR_CLASS = OpenElmDummyPastKeyValuesGenerator NORMALIZED_CONFIG_CLASS = NormalizedConfig.with_args( num_kv_heads="num_kv_heads", num_query_heads="num_query_heads", num_layers="num_transformer_layers", allow_new=True, )
transformers.js/scripts/extra/openelm.py/0
{ "file_path": "transformers.js/scripts/extra/openelm.py", "repo_id": "transformers.js", "token_count": 1129 }
356
/** * @module generation/configuration_utils */ import { pick } from "../utils/core.js"; /** * Class that holds a configuration for a generation task. */ export class GenerationConfig { // Parameters that control the length of the output /** * The maximum length the generated tokens can have. * Corresponds to the length of the input prompt + `max_new_tokens`. * Its effect is overridden by `max_new_tokens`, if also set. * @type {number} * @default 20 */ max_length = 20; /** * The maximum numbers of tokens to generate, ignoring the number of tokens in the prompt. * @type {number} * @default null */ max_new_tokens = null; /** * The minimum length of the sequence to be generated. * Corresponds to the length of the input prompt + `min_new_tokens`. * Its effect is overridden by `min_new_tokens`, if also set. * @type {number} * @default 0 */ min_length = 0; /** * The minimum numbers of tokens to generate, ignoring the number of tokens in the prompt. * @type {number} * @default null */ min_new_tokens = null; /** * Controls the stopping condition for beam-based methods, like beam-search. It accepts the following values: * - `true`, where the generation stops as soon as there are `num_beams` complete candidates; * - `false`, where an heuristic is applied and the generation stops when is it very unlikely to find better candidates; * - `"never"`, where the beam search procedure only stops when there cannot be better candidates (canonical beam search algorithm). * @type {boolean|"never"} * @default false */ early_stopping = false; /** * The maximum amount of time you allow the computation to run for in seconds. * Generation will still finish the current pass after allocated time has been passed. * @type {number} * @default null */ max_time = null; // Parameters that control the generation strategy used /** * Whether or not to use sampling; use greedy decoding otherwise. * @type {boolean} * @default false */ do_sample = false; /** * Number of beams for beam search. 1 means no beam search. * @type {number} * @default 1 */ num_beams = 1; /** * Number of groups to divide `num_beams` into in order to ensure diversity among different groups of beams. * See [this paper](https://huggingface.co/papers/1610.02424) for more details. * @type {number} * @default 1 */ num_beam_groups = 1; /** * The values balance the model confidence and the degeneration penalty in contrastive search decoding. * @type {number} * @default null */ penalty_alpha = null; /** * Whether or not the model should use the past last key/values attentions (if applicable to the model) to speed up decoding. * @type {boolean} * @default true */ use_cache = true; // Parameters for manipulation of the model output logits /** * The value used to modulate the next token probabilities. * @type {number} * @default 1.0 */ temperature = 1.0; /** * The number of highest probability vocabulary tokens to keep for top-k-filtering. * @type {number} * @default 50 */ top_k = 50; /** * If set to float < 1, only the smallest set of most probable tokens with probabilities that add up to `top_p` or higher are kept for generation. * @type {number} * @default 1.0 */ top_p = 1.0; /** * Local typicality measures how similar the conditional probability of predicting a target token next is to the expected conditional probability of predicting a random token next, given the partial text already generated. * If set to float < 1, the smallest set of the most locally typical tokens with probabilities that add up to `typical_p` or higher are kept for generation. 
* See [this paper](https://huggingface.co/papers/2202.00666) for more details. * @type {number} * @default 1.0 */ typical_p = 1.0; /** * If set to float strictly between 0 and 1, only tokens with a conditional probability greater than `epsilon_cutoff` will be sampled. * In the paper, suggested values range from 3e-4 to 9e-4, depending on the size of the model. * See [Truncation Sampling as Language Model Desmoothing](https://huggingface.co/papers/2210.15191) for more details. * @type {number} * @default 0.0 */ epsilon_cutoff = 0.0; /** * Eta sampling is a hybrid of locally typical sampling and epsilon sampling. * If set to float strictly between 0 and 1, a token is only considered if it is greater than either `eta_cutoff` or `sqrt(eta_cutoff) * exp(-entropy(softmax(next_token_logits)))`. * The latter term is intuitively the expected next token probability, scaled by `sqrt(eta_cutoff)`. In the paper, suggested values range from 3e-4 to 2e-3, depending on the size of the model. * See [Truncation Sampling as Language Model Desmoothing](https://huggingface.co/papers/2210.15191) for more details. * @type {number} * @default 0.0 */ eta_cutoff = 0.0; /** * This value is subtracted from a beam's score if it generates a token that is the same as any beam from another group at a particular time. * Note that `diversity_penalty` is only effective if `group beam search` is enabled. * @type {number} * @default 0.0 */ diversity_penalty = 0.0; /** * The parameter for repetition penalty. 1.0 means no penalty. * See [this paper](https://huggingface.co/papers/1909.05858) for more details. * @type {number} * @default 1.0 */ repetition_penalty = 1.0; /** * The parameter for encoder_repetition_penalty. * An exponential penalty on sequences that are not in the original input. * 1.0 means no penalty. * @type {number} * @default 1.0 */ encoder_repetition_penalty = 1.0; /** * Exponential penalty to the length that is used with beam-based generation. * It is applied as an exponent to the sequence length, which in turn is used to divide the score of the sequence. * Since the score is the log likelihood of the sequence (i.e. negative), `length_penalty` > 0.0 promotes longer sequences, while `length_penalty` < 0.0 encourages shorter sequences. * @type {number} * @default 1.0 */ length_penalty = 1.0; /** * If set to int > 0, all ngrams of that size can only occur once. * @type {number} * @default 0 */ no_repeat_ngram_size = 0; /** * List of token ids that are not allowed to be generated. * In order to get the token ids of the words that should not appear in the generated text, use * `tokenizer(bad_words, { add_prefix_space: true, add_special_tokens: false }).input_ids`. * @type {number[][]} * @default null */ bad_words_ids = null; /** * List of token ids that must be generated. * If given a `number[][]`, this is treated as a simple list of words that must be included, the opposite to `bad_words_ids`. * If given `number[][][]`, this triggers a [disjunctive constraint](https://github.com/huggingface/transformers/issues/14081), where one can allow different forms of each word. * @type {number[][]|number[][][]} * @default null */ force_words_ids = null; /** * Whether to renormalize the logits after applying all the logits processors or warpers (including the custom ones). * It's highly recommended to set this flag to `true` as the search algorithms assume the score logits are normalized but some logit processors or warpers break the normalization.
* @type {boolean} * @default false */ renormalize_logits = false; /** * Custom constraints that can be added to the generation to ensure that the output will contain the use of certain tokens as defined by `Constraint` objects, in the most sensible way possible. * @type {Object[]} * @default null */ constraints = null; /** * The id of the token to force as the first generated token after the `decoder_start_token_id`. * Useful for multilingual models like mBART where the first generated token needs to be the target language token. * @type {number} * @default null */ forced_bos_token_id = null; /** * The id of the token to force as the last generated token when `max_length` is reached. * Optionally, use a list to set multiple *end-of-sequence* tokens. * @type {number|number[]} * @default null */ forced_eos_token_id = null; /** * Whether to remove possible *nan* and *inf* outputs of the model to prevent the generation method from crashing. Note that using `remove_invalid_values` can slow down generation. * @type {boolean} */ remove_invalid_values = false; /** * This tuple adds an exponentially increasing length penalty after a certain number of tokens have been generated. * The tuple shall consist of: `(start_index, decay_factor)` where `start_index` indicates where the penalty starts and `decay_factor` represents the factor of exponential decay. * @type {[number, number]} * @default null */ exponential_decay_length_penalty = null; /** * A list of tokens that will be suppressed at generation. * The `SuppressTokens` logit processor will set their log probs to `-inf` so that they are not sampled. * @type {number[]} * @default null */ suppress_tokens = null; /** * A streamer that will be used to stream the generation. * @type {import('./streamers.js').TextStreamer} * @default null */ streamer = null; /** * A list of tokens that will be suppressed at the beginning of the generation. * The `SuppressBeginTokens` logit processor will set their log probs to `-inf` so that they are not sampled. * @type {number[]} * @default null */ begin_suppress_tokens = null; /** * A list of pairs of integers which indicates a mapping from generation indices to token indices that will be forced before sampling. * For example, `[[1, 123]]` means the second generated token will always be a token of index 123. * @type {[number, number][]} * @default null */ forced_decoder_ids = null; /** * The guidance scale for classifier free guidance (CFG). CFG is enabled by setting `guidance_scale > 1`. * Higher guidance scale encourages the model to generate samples that are more closely linked to the input * prompt, usually at the expense of poorer quality. * @type {number} * @default null */ guidance_scale = null; // Parameters that define the output variables of `generate` /** * The number of independently computed returned sequences for each element in the batch. * @type {number} * @default 1 */ num_return_sequences = 1; /** * Whether or not to return the attention tensors of all attention layers. * See `attentions` under returned tensors for more details. * @type {boolean} * @default false */ output_attentions = false; /** * Whether or not to return the hidden states of all layers. * See `hidden_states` under returned tensors for more details. * @type {boolean} * @default false */ output_hidden_states = false; /** * Whether or not to return the prediction scores. * See `scores` under returned tensors for more details.
* @type {boolean} * @default false */ output_scores = false; /** * Whether or not to return a `ModelOutput` instead of a plain tuple. * @type {boolean} * @default false */ return_dict_in_generate = false; // Special tokens that can be used at generation time /** * The id of the *padding* token. * @type {number} * @default null */ pad_token_id = null; /** * The id of the *beginning-of-sequence* token. * @type {number} * @default null */ bos_token_id = null; /** * The id of the *end-of-sequence* token. * Optionally, use a list to set multiple *end-of-sequence* tokens. * @type {number|number[]} * @default null */ eos_token_id = null; // Generation parameters exclusive to encoder-decoder models /** * If set to int > 0, all ngrams of that size that occur in the `encoder_input_ids` cannot occur in the `decoder_input_ids`. * @type {number} * @default 0 */ encoder_no_repeat_ngram_size = 0; /** * If an encoder-decoder model starts decoding with a different token than *bos*, the id of that token. * @type {number} * @default null */ decoder_start_token_id = null; // Wild card /** * Additional generation kwargs will be forwarded to the `generate` function of the model. * Kwargs that are not present in `generate`'s signature will be used in the model forward pass. * @type {Object} * @default {} */ generation_kwargs = {}; /** * * @param {GenerationConfig|import('../configs.js').PretrainedConfig} config */ constructor(config) { Object.assign(this, pick(config, Object.getOwnPropertyNames(this))); } }
transformers.js/src/generation/configuration_utils.js/0
{ "file_path": "transformers.js/src/generation/configuration_utils.js", "repo_id": "transformers.js", "token_count": 4706 }
357
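Most of the fields documented above are never constructed by hand; they are merged from the model's generation_config.json and per-call options. A minimal sketch of supplying a few of them through the transformers.js pipeline API (model id chosen for illustration):

import { pipeline } from '@xenova/transformers';

const generator = await pipeline('text-generation', 'Xenova/gpt2');

// Each option maps onto the identically named GenerationConfig field above.
const output = await generator('Once upon a time', {
  max_new_tokens: 32,     // overrides max_length
  do_sample: true,        // sample instead of greedy decoding
  temperature: 0.7,
  top_k: 50,
  repetition_penalty: 1.1,
});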
import { ImageProcessor, } from "../../base/image_processors_utils.js"; export class ConvNextImageProcessor extends ImageProcessor { constructor(config) { super(config); /** * Percentage of the image to crop. Only has an effect if `size.shortest_edge` < 384. */ // @ts-expect-error TS2339 this.crop_pct = this.config.crop_pct ?? (224 / 256); } async resize(image) { const shortest_edge = this.size?.shortest_edge; if (shortest_edge === undefined) { throw new Error(`Size dictionary must contain 'shortest_edge' key.`); } if (shortest_edge < 384) { // maintain the same aspect ratio, resizing the shortest edge to shortest_edge/crop_pct const resize_shortest_edge = Math.floor(shortest_edge / this.crop_pct); const [newWidth, newHeight] = this.get_resize_output_image_size(image, { shortest_edge: resize_shortest_edge, }); image = await image.resize(newWidth, newHeight, { resample: this.resample, }); // then crop to (shortest_edge, shortest_edge) image = await image.center_crop(shortest_edge, shortest_edge); } else { // warping (no cropping) when evaluated at 384 or larger image = await image.resize(shortest_edge, shortest_edge, { resample: this.resample, }); } return image; } } export class ConvNextFeatureExtractor extends ConvNextImageProcessor { }
transformers.js/src/models/convnext/image_processing_convnext.js/0
{ "file_path": "transformers.js/src/models/convnext/image_processing_convnext.js", "repo_id": "transformers.js", "token_count": 683 }
358
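The crop_pct branch above is easiest to see with the default numbers: for shortest_edge = 224 and crop_pct = 224/256, the short side is first resized to floor(224 / (224/256)) = 256 and the image is then center-cropped back to 224. Just that arithmetic:

// Defaults from the processor above.
const shortest_edge = 224;
const crop_pct = 224 / 256; // 0.875

// Short-side target before the center crop.
const resize_shortest_edge = Math.floor(shortest_edge / crop_pct);
console.log(resize_shortest_edge); // 256

// Resize so min(h, w) === 256, then center-crop to 224x224;
// at 384 or larger, the image is warped directly with no crop.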
import { ImageProcessor, } from "../../base/image_processors_utils.js"; import { cat, full, interpolate_4d, slice, stack } from "../../utils/tensor.js"; export class Idefics3ImageProcessor extends ImageProcessor { constructor(config) { super(config); this.do_image_splitting = config.do_image_splitting ?? true; this.max_image_size = config.max_image_size; } /** * @typedef {import('../../utils/image.js').RawImage} RawImage * @typedef {import('../../utils/tensor.js').Tensor} Tensor */ /** * Calculate size to resize images to, to be multiples of `vision_encoder_max_size` while preserving the aspect ratio. * @param {Tensor} pixel_values Tensor of the image to resize. * @param {number} vision_encoder_max_size Maximum size of the output image. If the image is larger than this size, * it will be split into patches of this size, and the original image will be concatenated with the patches, resized to max_size. */ get_resize_for_vision_encoder(pixel_values, vision_encoder_max_size) { let [height, width] = pixel_values.dims.slice(-2); const aspect_ratio = width / height; if (width >= height) { width = Math.ceil(width / vision_encoder_max_size) * vision_encoder_max_size; height = Math.floor(width / aspect_ratio); height = Math.ceil(height / vision_encoder_max_size) * vision_encoder_max_size; } else { height = Math.ceil(height / vision_encoder_max_size) * vision_encoder_max_size; width = Math.floor(height * aspect_ratio); width = Math.ceil(width / vision_encoder_max_size) * vision_encoder_max_size; } return { height, width }; } /** @param {RawImage|RawImage[]|RawImage[][]} images */ async _call(images, { do_image_splitting = null, return_row_col_info = false, } = {}) { /** @type {RawImage[][]} */ let batched_2d_images; if (!Array.isArray(images)) { batched_2d_images = [[images]]; } else { if (images.length === 0 || !images[0]) { throw new Error("No images provided."); } if (!Array.isArray(images[0])) { batched_2d_images = [/** @type {RawImage[]} */(images)]; } else { batched_2d_images = /** @type {RawImage[][]} */(images); } } // List of tensors, each with shape [patches, channels, height, width] let all_pixel_values = []; let images_list_rows = []; let images_list_cols = []; const original_sizes = []; const reshaped_input_sizes = []; for (const image_batch of batched_2d_images) { let images_list = await Promise.all(image_batch.map(x => this.preprocess(x))); // Original sizes of images original_sizes.push(...images_list.map(x => x.original_size)); // Reshaped sizes of images, before padding or cropping reshaped_input_sizes.push(...images_list.map(x => x.reshaped_input_size)); // Convert images to 4D tensors for easier processing images_list.forEach(x => x.pixel_values.unsqueeze_(0)); const { longest_edge } = this.max_image_size; /** @type {Tensor[]} */ let images_tensor; if (do_image_splitting ?? 
this.do_image_splitting) { let image_rows = new Array(images_list.length); let image_cols = new Array(images_list.length); // We first resize both height and width of each image to the nearest max_image_size multiple, disregarding the aspect ratio images_tensor = await Promise.all(images_list.map(async (x, i) => { const new_size = this.get_resize_for_vision_encoder(x.pixel_values, longest_edge); const resized = await interpolate_4d(x.pixel_values, { size: [new_size.height, new_size.width], }); const { frames, num_splits_h, num_splits_w } = await this.split_image(resized, this.max_image_size); image_rows[i] = num_splits_h; image_cols[i] = num_splits_w; return cat(frames, 0); })); images_list_rows.push(image_rows); images_list_cols.push(image_cols); } else { /** @type {[number, number]} */ const size = [longest_edge, longest_edge]; images_tensor = await Promise.all( images_list.map(x => interpolate_4d(x.pixel_values, { size })) ); images_list_rows.push(new Array(images_list.length).fill(0)); images_list_cols.push(new Array(images_list.length).fill(0)); } all_pixel_values.push(cat(images_tensor, 0)); } const batch_size = all_pixel_values.length; const [n, c, h, w] = all_pixel_values[0].dims; // Stack pixel values let pixel_values; let pixel_attention_mask; if (batch_size === 1) { pixel_values = all_pixel_values[0].unsqueeze_(0); pixel_attention_mask = full([batch_size, n, h, w], true); } else { // Add padding (if necessary) to images with less patches than the maximum number of patches const max_num_patches = Math.max(...all_pixel_values.map(x => x.dims.at(0))); pixel_attention_mask = full([batch_size, max_num_patches, h, w], true); const pixel_attention_mask_data = pixel_attention_mask.data; const pixel_attention_mask_stride = max_num_patches * h * w; for (let i = 0; i < batch_size; ++i) { const num_patches = all_pixel_values[i].dims[0]; if (num_patches < max_num_patches) { all_pixel_values[i] = cat([ all_pixel_values[i], full([max_num_patches - num_patches, c, h, w], 0), ], 0); const start_offset = i * pixel_attention_mask_stride + num_patches * h * w; const end_offset = (i + 1) * pixel_attention_mask_stride; // @ts-ignore pixel_attention_mask_data.fill(false, start_offset, end_offset); } } pixel_values = stack(all_pixel_values, 0); } return { pixel_values, pixel_attention_mask, original_sizes, reshaped_input_sizes, ...( return_row_col_info ? 
{ rows: images_list_rows, cols: images_list_cols } : {} ), } } async split_image(pixel_values, { longest_edge }) { const max_height = longest_edge; const max_width = longest_edge; const frames = []; const [height, width] = pixel_values.dims.slice(-2); let num_splits_h = 0, num_splits_w = 0; if (height > max_height || width > max_width) { // Calculate the number of splits num_splits_h = Math.ceil(height / max_height); num_splits_w = Math.ceil(width / max_width); // Calculate the optimal width and height for the sub-images const optimal_height = Math.ceil(height / num_splits_h); const optimal_width = Math.ceil(width / num_splits_w); // Iterate through each row and column for (let r = 0; r < num_splits_h; ++r) { for (let c = 0; c < num_splits_w; ++c) { let start_x, start_y, end_x, end_y; if (r === num_splits_h - 1) { // At bottom start_y = height - optimal_height; end_y = height; } else { start_y = r * optimal_height; end_y = (r + 1) * optimal_height; } if (c === num_splits_w - 1) { // At right start_x = width - optimal_width; end_x = width; } else { start_x = c * optimal_width; end_x = (c + 1) * optimal_width; } const starts = [start_y, start_x]; const ends = [end_y, end_x]; const patch = await slice(pixel_values, starts, ends, [2, 3]); frames.push(patch); } } // Resize the global image to match max dimensions for memory efficiency const global_image_height = max_height; const global_image_width = max_width; if (height !== global_image_height || width !== global_image_width) { pixel_values = await interpolate_4d(pixel_values, { size: [global_image_height, global_image_width], }) } } frames.push(pixel_values); return { frames, num_splits_h, num_splits_w }; } }
transformers.js/src/models/idefics3/image_processing_idefics3.js/0
{ "file_path": "transformers.js/src/models/idefics3/image_processing_idefics3.js", "repo_id": "transformers.js", "token_count": 4603 }
359
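The rounding in get_resize_for_vision_encoder above guarantees both dimensions split evenly into encoder-sized patches: the longer side is rounded up to a multiple of the max size, the shorter side is rescaled by the aspect ratio, then rounded up too. A worked run of the same arithmetic (364 is a typical Idefics3 longest_edge; treat it and the image size as illustrative assumptions):

const max = 364;             // assumed max_image_size.longest_edge
let width = 1000, height = 500;
const aspect_ratio = width / height; // 2

// width >= height branch from the method above.
width = Math.ceil(width / max) * max;       // 1092
height = Math.floor(width / aspect_ratio);  // 546
height = Math.ceil(height / max) * max;     // 728

console.log(width / max, height / max); // 3 x 2 grid of patches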
import { ImageProcessor, } from "../../base/image_processors_utils.js"; export class MobileViTImageProcessor extends ImageProcessor { } export class MobileViTFeatureExtractor extends MobileViTImageProcessor { }
transformers.js/src/models/mobilevit/image_processing_mobilevit.js/0
{ "file_path": "transformers.js/src/models/mobilevit/image_processing_mobilevit.js", "repo_id": "transformers.js", "token_count": 64 }
360
import { ImageProcessor, post_process_object_detection, } from "../../base/image_processors_utils.js"; export class RTDetrImageProcessor extends ImageProcessor { /** @type {typeof post_process_object_detection} */ post_process_object_detection(...args) { return post_process_object_detection(...args); } }
transformers.js/src/models/rt_detr/image_processing_rt_detr.js/0
{ "file_path": "transformers.js/src/models/rt_detr/image_processing_rt_detr.js", "repo_id": "transformers.js", "token_count": 122 }
361
import { ImageProcessor, } from "../../base/image_processors_utils.js"; export class VitPoseImageProcessor extends ImageProcessor { /** * Transform the heatmaps into keypoint predictions and transform them back to the image. * NOTE: This is a naive implementation and does not include advanced post-processing techniques, * so the results may not be as accurate as the original implementation. * @param {import('../../utils/tensor.js').Tensor} outputs The model outputs. * @param {[number, number, number, number][][]} boxes List or array of bounding boxes for each image. * Each box should be a list of 4 floats representing the bounding box coordinates in COCO format (top_left_x, top_left_y, width, height). * @returns {{ * bbox: [number, number, number, number], * scores: number[], * labels: number[], * keypoints: [number, number][] * }[][]} List of keypoints predictions for each image. */ post_process_pose_estimation(outputs, boxes, { threshold = null, // TODO: // kernel_size = 11, // target_sizes = null, } = {}) { // NOTE: boxes are 3D (batch_size, num_boxes, 4) const heatmaps = outputs.tolist(); const [batch_size, num_classes, height, width] = outputs.dims; const results = []; for (let b = 0; b < batch_size; ++b) { const heatmap = heatmaps[b]; const bboxes = boxes[b]; const batch_results = []; for (let n = 0; n < bboxes.length; ++n) { const bbox = bboxes[n]; const keypoints = []; const scores = []; const labels = []; const xScale = bbox.at(-2) / width; const yScale = bbox.at(-1) / height; for (let c = 0; c < heatmap.length; ++c) { let [xWeightedSum, yWeightedSum] = [0, 0]; let sum = 0; let score = -Infinity; const row = heatmap[c]; for (let y = 0; y < row.length; ++y) { const col = row[y]; for (let x = 0; x < col.length; ++x) { const value = col[x]; sum += value; score = Math.max(score, value); // Get weighted sum of positions // TODO: Determine best offsets xWeightedSum += (x + 0.5) * value; yWeightedSum += (y) * value; } } // Ignore low scores, if threshold is set if (threshold != null && score < threshold) continue; /** @type {[number, number]} */ const keypoint = [ xScale * xWeightedSum / sum, yScale * yWeightedSum / sum, ] keypoints.push(keypoint); labels.push(c); scores.push(score); } batch_results.push({ bbox, scores, labels, keypoints, }); } results.push(batch_results); } return results; } }
transformers.js/src/models/vitpose/image_processing_vitpose.js/0
{ "file_path": "transformers.js/src/models/vitpose/image_processing_vitpose.js", "repo_id": "transformers.js", "token_count": 1796 }
362
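Each keypoint above is decoded as the intensity-weighted centroid of one heatmap channel, with the channel's peak reused as its confidence score. The same decoding applied to a toy 2x3 heatmap:

// Toy single-channel heatmap: 2 rows (y) x 3 columns (x).
const heatmap = [
  [0.0, 0.2, 0.0],
  [0.0, 0.8, 0.0],
];

let xSum = 0, ySum = 0, sum = 0, score = -Infinity;
for (let y = 0; y < heatmap.length; ++y) {
  for (let x = 0; x < heatmap[y].length; ++x) {
    const v = heatmap[y][x];
    sum += v;
    score = Math.max(score, v);
    xSum += (x + 0.5) * v; // same half-pixel offset as the processor above
    ySum += y * v;
  }
}
console.log([xSum / sum, ySum / sum], score); // [1.5, 0.8] 0.8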
export const GITHUB_ISSUE_URL = 'https://github.com/huggingface/transformers.js/issues/new/choose'; export const CONFIG_NAME = "config.json" export const FEATURE_EXTRACTOR_NAME = "preprocessor_config.json" export const IMAGE_PROCESSOR_NAME = FEATURE_EXTRACTOR_NAME export const PROCESSOR_NAME = "processor_config.json" export const CHAT_TEMPLATE_NAME = "chat_template.jinja" export const GENERATION_CONFIG_NAME = "generation_config.json"
transformers.js/src/utils/constants.js/0
{ "file_path": "transformers.js/src/utils/constants.js", "repo_id": "transformers.js", "token_count": 149 }
363
// Helper functions used when initialising the testing environment. // Import Node typing utilities import * as types from "node:util/types"; // Import onnxruntime-node's default backend import { onnxruntimeBackend } from "onnxruntime-node/dist/backend"; import * as ONNX_COMMON from "onnxruntime-common"; /** * A workaround to define a new backend for onnxruntime, which * will not throw an error when running tests with jest. * For more information, see: https://github.com/jestjs/jest/issues/11864#issuecomment-1261468011 */ export function init() { // In rare cases (specifically when running unit tests with GitHub actions), possibly due to // a large number of concurrent executions, onnxruntime might fall back to using the WASM backend. // In this case, we set the number of threads to 1 to avoid errors like: // - `TypeError: The worker script or module filename must be an absolute path or a relative path starting with './' or '../'. Received "blob:nodedata:..."` ONNX_COMMON.env.wasm.numThreads = 1; let registerBackend = ONNX_COMMON.registerBackend; // Define the constructors to monkey-patch const TYPED_ARRAYS_CONSTRUCTOR_NAMES = ["Int8Array", "Int16Array", "Int32Array", "BigInt64Array", "Uint8Array", "Uint8ClampedArray", "Uint16Array", "Uint32Array", "BigUint64Array", "Float16Array", "Float32Array", "Float64Array"]; // Keep a reference to the original initialization method const originalMethod = onnxruntimeBackend.init; // Monkey-patch the initialization function onnxruntimeBackend.init = function (...args) { // There is probably a better way to do this Array.isArray = (x) => typeof x === "object" && x !== null && typeof x.length === "number" && x?.constructor.toString() === Array.toString(); // For each typed array constructor for (const ctorName of TYPED_ARRAYS_CONSTRUCTOR_NAMES) { // Get the constructor from the current context const ctor = globalThis[ctorName]; if (ctor === undefined) continue; // If unavailable, skip the patching // Get the corresponding test function from the `util` module const value = types[`is${ctorName}`].bind(types); // Monkey-patch the constructor so "x instanceof ctor" returns "types[`is${ctorName}`](x)" Object.defineProperty(ctor, Symbol.hasInstance, { value, writable: true, // writable=true is necessary to overwrite the default implementation (and allow subsequent overwrites) configurable: false, enumerable: false, }); } // Call the original method return originalMethod.apply(this, args); }; // Register the backend with the highest priority, so it is used instead of the default one registerBackend("test", onnxruntimeBackend, Number.POSITIVE_INFINITY); } export const MAX_TOKENIZER_LOAD_TIME = 10_000; // 10 seconds export const MAX_FEATURE_EXTRACTOR_LOAD_TIME = 10_000; // 10 seconds export const MAX_PROCESSOR_LOAD_TIME = 10_000; // 10 seconds export const MAX_MODEL_LOAD_TIME = 15_000; // 15 seconds export const MAX_TEST_EXECUTION_TIME = 60_000; // 60 seconds export const MAX_MODEL_DISPOSE_TIME = 1_000; // 1 second export const MAX_TEST_TIME = MAX_MODEL_LOAD_TIME + MAX_TEST_EXECUTION_TIME + MAX_MODEL_DISPOSE_TIME; export const DEFAULT_MODEL_OPTIONS = Object.freeze({ dtype: "fp32", }); expect.extend({ toBeCloseToNested(received, expected, numDigits = 2) { const compare = (received, expected, path = "") => { if (typeof received === "number" && typeof expected === "number" && !Number.isInteger(received) && !Number.isInteger(expected)) { const pass = Math.abs(received - expected) < Math.pow(10, -numDigits); return { pass, message: () => (pass ?
`✓ At path '${path}': expected ${received} not to be close to ${expected} with tolerance of ${numDigits} decimal places` : `✗ At path '${path}': expected ${received} to be close to ${expected} with tolerance of ${numDigits} decimal places`), }; } else if (Array.isArray(received) && Array.isArray(expected)) { if (received.length !== expected.length) { return { pass: false, message: () => `✗ At path '${path}': array lengths differ. Received length ${received.length}, expected length ${expected.length}`, }; } for (let i = 0; i < received.length; i++) { const result = compare(received[i], expected[i], `${path}[${i}]`); if (!result.pass) return result; } } else if (typeof received === "object" && typeof expected === "object" && received !== null && expected !== null) { const receivedKeys = Object.keys(received); const expectedKeys = Object.keys(expected); if (receivedKeys.length !== expectedKeys.length) { return { pass: false, message: () => `✗ At path '${path}': object keys length differ. Received keys: ${JSON.stringify(receivedKeys)}, expected keys: ${JSON.stringify(expectedKeys)}`, }; } for (const key of receivedKeys) { if (!expected.hasOwnProperty(key)) { return { pass: false, message: () => `✗ At path '${path}': key '${key}' found in received but not in expected`, }; } const result = compare(received[key], expected[key], `${path}.${key}`); if (!result.pass) return result; } } else { const pass = received === expected; return { pass, message: () => (pass ? `✓ At path '${path}': expected ${JSON.stringify(received)} not to equal ${JSON.stringify(expected)}` : `✗ At path '${path}': expected ${JSON.stringify(received)} to equal ${JSON.stringify(expected)}`), }; } return { pass: true }; }; return compare(received, expected); }, });
transformers.js/tests/init.js/0
{ "file_path": "transformers.js/tests/init.js", "repo_id": "transformers.js", "token_count": 2091 }
364
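The Symbol.hasInstance patch above works because `x instanceof C` consults `C[Symbol.hasInstance]` before the default prototype-chain walk, and it is the prototype walk that fails for typed arrays crossing Jest's vm-context boundary. The mechanism in isolation, for a single constructor:

import * as types from "node:util/types";

// Replace the prototype-chain check with a realm-independent brand check.
Object.defineProperty(Float32Array, Symbol.hasInstance, {
  value: types.isFloat32Array.bind(types),
  writable: true, // allow the property to be overwritten again later
});

console.log(new Float32Array(2) instanceof Float32Array); // true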
import { CodeGenTokenizer, CodeGenForCausalLM } from "../../../src/transformers.js"; import { MAX_MODEL_LOAD_TIME, MAX_TEST_EXECUTION_TIME, MAX_MODEL_DISPOSE_TIME, DEFAULT_MODEL_OPTIONS } from "../../init.js"; export default () => { describe("CodeGenForCausalLM", () => { const model_id = "hf-internal-testing/tiny-random-CodeGenForCausalLM"; /** @type {CodeGenForCausalLM} */ let model; /** @type {CodeGenTokenizer} */ let tokenizer; beforeAll(async () => { model = await CodeGenForCausalLM.from_pretrained(model_id, DEFAULT_MODEL_OPTIONS); tokenizer = await CodeGenTokenizer.from_pretrained(model_id); tokenizer.padding_side = "left"; }, MAX_MODEL_LOAD_TIME); it( "batch_size=1", async () => { const inputs = tokenizer("hello"); const outputs = await model.generate({ ...inputs, max_length: 10, }); expect(outputs.tolist()).toEqual([[258n, 863n, 79n, 437n, 334n, 450n, 294n, 621n, 375n, 385n]]); }, MAX_TEST_EXECUTION_TIME, ); it( "batch_size>1", async () => { const inputs = tokenizer(["hello", "hello world"], { padding: true }); const outputs = await model.generate({ ...inputs, max_length: 10, }); expect(outputs.tolist()).toEqual([ [0n, 0n, 258n, 863n, 79n, 437n, 334n, 450n, 294n, 621n], [258n, 863n, 79n, 269n, 813n, 759n, 113n, 295n, 574n, 987n], ]); }, MAX_TEST_EXECUTION_TIME, ); afterAll(async () => { await model?.dispose(); }, MAX_MODEL_DISPOSE_TIME); }); };
transformers.js/tests/models/codegen/test_modeling_codegen.js/0
{ "file_path": "transformers.js/tests/models/codegen/test_modeling_codegen.js", "repo_id": "transformers.js", "token_count": 768 }
365
import { PreTrainedTokenizer, JAISLMHeadModel } from "../../../src/transformers.js"; import { MAX_MODEL_LOAD_TIME, MAX_TEST_EXECUTION_TIME, MAX_MODEL_DISPOSE_TIME, DEFAULT_MODEL_OPTIONS } from "../../init.js"; export default () => { describe("JAISLMHeadModel", () => { const model_id = "onnx-community/tiny-random-jais"; /** @type {JAISLMHeadModel} */ let model; /** @type {PreTrainedTokenizer} */ let tokenizer; beforeAll(async () => { model = await JAISLMHeadModel.from_pretrained(model_id, DEFAULT_MODEL_OPTIONS); tokenizer = await PreTrainedTokenizer.from_pretrained(model_id); tokenizer.padding_side = "left"; }, MAX_MODEL_LOAD_TIME); it( "batch_size=1", async () => { const inputs = tokenizer("hello"); const outputs = await model.generate({ ...inputs, max_length: 10, }); expect(outputs.tolist()).toEqual([[55422n, 55422n, 55422n, 55422n, 55422n, 55422n, 55422n, 55422n, 55422n, 55422n]]); }, MAX_TEST_EXECUTION_TIME, ); it( "batch_size>1", async () => { const inputs = tokenizer(["hello", "hello world"], { padding: true }); const outputs = await model.generate({ ...inputs, max_length: 10, }); expect(outputs.tolist()).toEqual([ [0n, 55422n, 55422n, 55422n, 55422n, 55422n, 55422n, 55422n, 55422n, 55422n], [55422n, 2838n, 2838n, 2838n, 2838n, 2838n, 2838n, 2838n, 2838n, 2838n], ]); }, MAX_TEST_EXECUTION_TIME, ); afterAll(async () => { await model?.dispose(); }, MAX_MODEL_DISPOSE_TIME); }); };
transformers.js/tests/models/jais/test_modeling_jais.js/0
{ "file_path": "transformers.js/tests/models/jais/test_modeling_jais.js", "repo_id": "transformers.js", "token_count": 798 }
366
import { Wav2Vec2Processor, MoonshineForConditionalGeneration, full, ones } from "../../../src/transformers.js"; import { MAX_MODEL_LOAD_TIME, MAX_TEST_EXECUTION_TIME, MAX_MODEL_DISPOSE_TIME, DEFAULT_MODEL_OPTIONS } from "../../init.js"; export default () => { describe("MoonshineForConditionalGeneration", () => { const model_id = "hf-internal-testing/tiny-random-MoonshineForConditionalGeneration"; /** @type {MoonshineForConditionalGeneration} */ let model; /** @type {Wav2Vec2Processor} */ let processor; beforeAll(async () => { model = await MoonshineForConditionalGeneration.from_pretrained(model_id, DEFAULT_MODEL_OPTIONS); processor = await Wav2Vec2Processor.from_pretrained(model_id); }, MAX_MODEL_LOAD_TIME); const input_values = new Float32Array(16000); it( "forward", async () => { const inputs = await processor(input_values); const { logits } = await model({ ...inputs, decoder_input_ids: ones([1, 1]), }); expect(logits.dims).toEqual([1, 1, 32768]); expect(logits.mean().item()).toBeCloseTo(0.016709428280591965, 6); }, MAX_TEST_EXECUTION_TIME, ); it( "batch_size=1", async () => { const inputs = await processor(input_values); const generate_ids = await model.generate({ ...inputs, max_new_tokens: 3 }); const new_tokens = generate_ids; expect(new_tokens.tolist()).toEqual([[/* Decoder start token */ 1n, /* Generated */ 6891n, 21892n, 14850n]]); }, MAX_TEST_EXECUTION_TIME, ); afterAll(async () => { await model?.dispose(); }, MAX_MODEL_DISPOSE_TIME); }); };
transformers.js/tests/models/moonshine/test_modeling_moonshine.js/0
{ "file_path": "transformers.js/tests/models/moonshine/test_modeling_moonshine.js", "repo_id": "transformers.js", "token_count": 724 }
367
import { AutoProcessor, AutoModelForAudioFrameClassification } from "../../../src/transformers.js"; import { MAX_TEST_EXECUTION_TIME, DEFAULT_MODEL_OPTIONS } from "../../init.js"; import { compare } from "../../test_utils.js"; export default () => { const models_to_test = ["onnx-community/pyannote-segmentation-3.0"]; let audio; beforeAll(async () => { const url = "https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/mlk.npy"; const buffer = await (await fetch(url)).arrayBuffer(); audio = Float32Array.from(new Float64Array(buffer)); }); it( `PyAnnoteForAudioFrameClassification`, async () => { const model_id = models_to_test[0]; // Load model and processor const model = await AutoModelForAudioFrameClassification.from_pretrained(model_id, DEFAULT_MODEL_OPTIONS); const processor = await AutoProcessor.from_pretrained(model_id); // Check processor config expect(processor.sampling_rate).toEqual(16000); // Preprocess audio const inputs = await processor(audio); // Run model with inputs const { logits } = await model(inputs); compare(logits.dims, [1, 767, 7]); compare(logits.mean().item(), -4.822614669799805, 6); const result = processor.post_process_speaker_diarization(logits, audio.length); const target = [ [ { id: 0, start: 0, end: 1.0512535626298245, confidence: 0.7898106738171984 }, { id: 2, start: 1.0512535626298245, end: 2.373798367228636, confidence: 0.8923380609065887 }, { id: 0, start: 2.373798367228636, end: 3.5776532534660155, confidence: 0.6920057005438546 }, { id: 2, start: 3.5776532534660155, end: 4.578039708226655, confidence: 0.8169249580865657 }, { id: 3, start: 4.578039708226655, end: 6.2396985652867, confidence: 0.6921662061495533 }, { id: 2, start: 6.2396985652867, end: 8.664364040384521, confidence: 0.705263573835628 }, { id: 0, start: 8.664364040384521, end: 10.071687358098641, confidence: 0.6650650397924295 }, { id: 2, start: 10.071687358098641, end: 12.598087048934833, confidence: 0.8999033333468749 }, { id: 0, start: 12.598087048934833, end: 13.005023911888312, confidence: 0.37838892004965197 }, ], ]; compare(result, target); await model.dispose(); }, MAX_TEST_EXECUTION_TIME, ); };
transformers.js/tests/models/pyannote/test_modeling_pyannote.js/0
{ "file_path": "transformers.js/tests/models/pyannote/test_modeling_pyannote.js", "repo_id": "transformers.js", "token_count": 1018 }
368
import { AutoImageProcessor, rand, Tensor, VitPoseImageProcessor } from "../../../src/transformers.js"; import { load_cached_image } from "../../asset_cache.js"; import { MAX_PROCESSOR_LOAD_TIME, MAX_TEST_EXECUTION_TIME } from "../../init.js"; export default () => { describe("VitPoseImageProcessor", () => { const model_id = "onnx-community/vitpose-base-simple"; /** @type {VitPoseImageProcessor} */ let processor; beforeAll(async () => { processor = await AutoImageProcessor.from_pretrained(model_id); }, MAX_PROCESSOR_LOAD_TIME); it( "default", async () => { const image = await load_cached_image("tiger"); const { pixel_values, original_sizes, reshaped_input_sizes } = await processor(image); expect(pixel_values.dims).toEqual([1, 3, 256, 192]); expect(pixel_values.mean().item()).toBeCloseTo(-0.2771204710006714, 6); expect(original_sizes).toEqual([[408, 612]]); expect(reshaped_input_sizes).toEqual([[256, 192]]); }, MAX_TEST_EXECUTION_TIME, ); it( "post_process_pose_estimation", async () => { const num_classes = 17; const size = [0, 0, 1000, 1500]; const heatmaps = rand([1, num_classes, 64, 48]); const boxes = [[size]]; const { bbox, scores, labels, keypoints } = processor.post_process_pose_estimation(heatmaps, boxes, { threshold: null })[0][0]; expect(bbox).toEqual(size); expect(scores).toHaveLength(num_classes); expect(labels).toHaveLength(num_classes); expect(keypoints).toHaveLength(num_classes); expect(keypoints[0]).toHaveLength(2); }, MAX_TEST_EXECUTION_TIME, ); }); };
transformers.js/tests/models/vitpose/test_image_processing_vitpose.js/0
{ "file_path": "transformers.js/tests/models/vitpose/test_image_processing_vitpose.js", "repo_id": "transformers.js", "token_count": 725 }
369
import { pipeline, FillMaskPipeline } from "../../src/transformers.js";

import { MAX_MODEL_LOAD_TIME, MAX_TEST_EXECUTION_TIME, MAX_MODEL_DISPOSE_TIME, DEFAULT_MODEL_OPTIONS } from "../init.js";

const PIPELINE_ID = "fill-mask";

export default () => {
  describe("Fill Mask", () => {
    describe("Standard", () => {
      const model_id = "hf-internal-testing/tiny-random-BertForMaskedLM";

      /** @type {FillMaskPipeline} */
      let pipe;
      beforeAll(async () => {
        pipe = await pipeline(PIPELINE_ID, model_id, DEFAULT_MODEL_OPTIONS);
      }, MAX_MODEL_LOAD_TIME);

      it("should be an instance of FillMaskPipeline", () => {
        expect(pipe).toBeInstanceOf(FillMaskPipeline);
      });

      describe("batch_size=1", () => {
        it(
          "default (top_k=5)",
          async () => {
            const output = await pipe("a [MASK] c");
            const target = [
              { score: 0.0013377574505284429, token: 854, token_str: "##ο", sequence: "aο c" },
              { score: 0.001248967950232327, token: 962, token_str: "##ち", sequence: "aち c" },
              { score: 0.0012304208939895034, token: 933, token_str: "##ع", sequence: "aع c" },
              { score: 0.0012301815440878272, token: 313, token_str: "ფ", sequence: "a ფ c" },
              { score: 0.001222139224410057, token: 624, token_str: "未", sequence: "a 未 c" },
            ];
            expect(output).toBeCloseToNested(target, 5);
          },
          MAX_TEST_EXECUTION_TIME,
        );
        it(
          "custom (top_k=2)",
          async () => {
            const output = await pipe("a [MASK] c", { top_k: 2 });
            const target = [
              { score: 0.0013377574505284429, token: 854, token_str: "##ο", sequence: "aο c" },
              { score: 0.001248967950232327, token: 962, token_str: "##ち", sequence: "aち c" },
            ];
            expect(output).toBeCloseToNested(target, 5);
          },
          MAX_TEST_EXECUTION_TIME,
        );
      });

      describe("batch_size>1", () => {
        it(
          "default (top_k=5)",
          async () => {
            const output = await pipe(["a [MASK] c", "a b [MASK] c"]);
            const target = [
              [
                { score: 0.0013377574505284429, token: 854, token_str: "##ο", sequence: "aο c" },
                { score: 0.001248967950232327, token: 962, token_str: "##ち", sequence: "aち c" },
                { score: 0.0012304208939895034, token: 933, token_str: "##ع", sequence: "aع c" },
                { score: 0.0012301815440878272, token: 313, token_str: "ფ", sequence: "a ფ c" },
                { score: 0.001222139224410057, token: 624, token_str: "未", sequence: "a 未 c" },
              ],
              [
                { score: 0.0013287801994010806, token: 962, token_str: "##ち", sequence: "a bち c" },
                { score: 0.0012486606137827039, token: 823, token_str: "##ن", sequence: "a bن c" },
                { score: 0.0012320734094828367, token: 1032, token_str: "##ც", sequence: "a bც c" },
                { score: 0.0012295148335397243, token: 854, token_str: "##ο", sequence: "a bο c" },
                { score: 0.0012277684872969985, token: 624, token_str: "未", sequence: "a b 未 c" },
              ],
            ];
            expect(output).toBeCloseToNested(target, 5);
          },
          MAX_TEST_EXECUTION_TIME,
        );
        it(
          "custom (top_k=2)",
          async () => {
            const output = await pipe(["a [MASK] c", "a b [MASK] c"], { top_k: 2 });
            const target = [
              [
                { score: 0.0013377574505284429, token: 854, token_str: "##ο", sequence: "aο c" },
                { score: 0.001248967950232327, token: 962, token_str: "##ち", sequence: "aち c" },
              ],
              [
                { score: 0.0013287801994010806, token: 962, token_str: "##ち", sequence: "a bち c" },
                { score: 0.0012486606137827039, token: 823, token_str: "##ن", sequence: "a bن c" },
              ],
            ];
            expect(output).toBeCloseToNested(target, 5);
          },
          MAX_TEST_EXECUTION_TIME,
        );
      });

      afterAll(async () => {
        await pipe.dispose();
      }, MAX_MODEL_DISPOSE_TIME);
    });

    describe("Custom tokenizer", () => {
      const model_id = "hf-internal-testing/tiny-random-ModernBertForMaskedLM";

      /** @type {FillMaskPipeline} */
      let pipe;
      beforeAll(async () => {
        pipe = await pipeline(PIPELINE_ID, model_id, DEFAULT_MODEL_OPTIONS);
      }, MAX_MODEL_LOAD_TIME);

      it("should be an instance of FillMaskPipeline", () => {
        expect(pipe).toBeInstanceOf(FillMaskPipeline);
      });

      describe("batch_size=1", () => {
        it(
          "default (top_k=5)",
          async () => {
            const output = await pipe("The capital of France is [MASK].");
            const target = [
              { score: 0.2106737643480301, sequence: "The capital of France isả.", token: 35165, token_str: "ả" },
              { score: 0.18418768048286438, sequence: "The capital of France isDispatch.", token: 48010, token_str: "Dispatch" },
              { score: 0.16561225056648254, sequence: "The capital of France is Ther.", token: 20763, token_str: " Ther" },
              { score: 0.07070659101009369, sequence: "The capital of France isschild.", token: 50040, token_str: "schild" },
              { score: 0.029540402814745903, sequence: "The capital of France isbles.", token: 9143, token_str: "bles" },
            ];
            expect(output).toBeCloseToNested(target, 5);
          },
          MAX_TEST_EXECUTION_TIME,
        );
      });

      describe("batch_size>1", () => {
        it(
          "default (top_k=5)",
          async () => {
            const output = await pipe(["a [MASK] c", "a b [MASK] c"]);
            const target = [
              [
                { score: 0.06699250638484955, sequence: "a oocytes c", token: 36805, token_str: " oocytes" },
                { score: 0.05928678810596466, sequence: "ancia c", token: 19003, token_str: "ncia" },
                { score: 0.057058464735746384, sequence: "aả c", token: 35165, token_str: "ả" },
                { score: 0.04978331923484802, sequence: "amq c", token: 37365, token_str: "mq" },
                { score: 0.04839889705181122, sequence: "a1371 c", token: 26088, token_str: "1371" },
              ],
              [
                { score: 0.06364646553993225, sequence: "a b oocytes c", token: 36805, token_str: " oocytes" },
                { score: 0.03993292525410652, sequence: "a bectin c", token: 41105, token_str: "ectin" },
                { score: 0.03932870551943779, sequence: "a bả c", token: 35165, token_str: "ả" },
                { score: 0.037771403789520264, sequence: "a boplastic c", token: 21945, token_str: "oplastic" },
                { score: 0.03748754784464836, sequence: "a b Ther c", token: 20763, token_str: " Ther" },
              ],
            ];
            expect(output).toBeCloseToNested(target, 5);
          },
          MAX_TEST_EXECUTION_TIME,
        );
      });

      afterAll(async () => {
        await pipe.dispose();
      }, MAX_MODEL_DISPOSE_TIME);
    });
  });
};
transformers.js/tests/pipelines/test_pipelines_fill_mask.js/0
{ "file_path": "transformers.js/tests/pipelines/test_pipelines_fill_mask.js", "repo_id": "transformers.js", "token_count": 3763 }
370
import { pipeline, ZeroShotAudioClassificationPipeline } from "../../src/transformers.js";

import { MAX_MODEL_LOAD_TIME, MAX_TEST_EXECUTION_TIME, MAX_MODEL_DISPOSE_TIME, DEFAULT_MODEL_OPTIONS } from "../init.js";
import { load_cached_audio } from "../asset_cache.js";

const PIPELINE_ID = "zero-shot-audio-classification";

export default () => {
  describe("Zero-shot Audio Classification", () => {
    const model_id = "hf-internal-testing/tiny-clap-htsat-unfused";

    const labels = ["cat", "dog"];
    const hypothesis_template = "sound of a {}";

    /** @type {ZeroShotAudioClassificationPipeline} */
    let pipe;
    let audio;
    beforeAll(async () => {
      pipe = await pipeline(PIPELINE_ID, model_id, DEFAULT_MODEL_OPTIONS);
      audio = await load_cached_audio("mlk");
    }, MAX_MODEL_LOAD_TIME);

    it("should be an instance of ZeroShotAudioClassificationPipeline", () => {
      expect(pipe).toBeInstanceOf(ZeroShotAudioClassificationPipeline);
    });

    describe("batch_size=1", () => {
      it(
        "default",
        async () => {
          const output = await pipe(audio, labels);
          const target = [
            { score: 0.4990939795970917, label: "cat" },
            { score: 0.5009059906005859, label: "dog" },
          ];
          expect(output).toBeCloseToNested(target, 5);
        },
        MAX_TEST_EXECUTION_TIME,
      );
      it(
        "custom (w/ hypothesis_template)",
        async () => {
          const output = await pipe(audio, labels, { hypothesis_template });
          const target = [
            { score: 0.4987950325012207, label: "cat" },
            { score: 0.5012049674987793, label: "dog" },
          ];
          expect(output).toBeCloseToNested(target, 5);
        },
        MAX_TEST_EXECUTION_TIME,
      );
    });

    afterAll(async () => {
      await pipe.dispose();
    }, MAX_MODEL_DISPOSE_TIME);
  });
};
transformers.js/tests/pipelines/test_pipelines_zero_shot_audio_classification.js/0
{ "file_path": "transformers.js/tests/pipelines/test_pipelines_zero_shot_audio_classification.js", "repo_id": "transformers.js", "token_count": 827 }
371
import { fileURLToPath } from "node:url";
import path from "node:path";
import fs from "node:fs";
import webpack from "webpack";
import TerserPlugin from "terser-webpack-plugin";

const __dirname = path.dirname(fileURLToPath(import.meta.url));

/**
 * Plugin to strip the "node:" prefix from module requests.
 *
 * This is necessary to ensure both web and node builds work correctly,
 * otherwise we would get an error like:
 * ```
 * Module build failed: UnhandledSchemeError: Reading from "node:path" is not handled by plugins (Unhandled scheme).
 * Webpack supports "data:" and "file:" URIs by default.
 * You may need an additional plugin to handle "node:" URIs.
 * ```
 *
 * NOTE: We then do not need to use the `node:` prefix in the resolve.alias configuration.
 */
class StripNodePrefixPlugin extends webpack.NormalModuleReplacementPlugin {
  constructor() {
    super(
      /^node:(.+)$/,
      resource => {
        resource.request = resource.request.replace(/^node:/, '');
      }
    );
  }
}

/**
 * Plugin to post-process build files. Required to solve certain issues with ESM module output.
 * See https://github.com/webpack/webpack/issues/17121 for more information.
 *
 * @see https://webpack.js.org/contribute/writing-a-plugin/
 */
class PostBuildPlugin {
  apply(compiler) {
    compiler.hooks.done.tap('PostBuildPlugin', () => {
      const dist = path.join(__dirname, 'dist');
      const ORT_JSEP_FILE = 'ort-wasm-simd-threaded.jsep.mjs';
      const ORT_BUNDLE_FILE = 'ort.bundle.min.mjs';

      // 1. Remove unnecessary files
      {
        const file = path.join(dist, ORT_BUNDLE_FILE);
        if (fs.existsSync(file)) fs.unlinkSync(file);
      }

      // 2. Copy unbundled JSEP file
      {
        const src = path.join(__dirname, 'node_modules/onnxruntime-web/dist', ORT_JSEP_FILE);
        const dest = path.join(dist, ORT_JSEP_FILE);
        fs.copyFileSync(src, dest);
      }
    });
  }
}

/**
 * Helper function to create webpack configurations.
 * @param {Object} options Options for creating a webpack target.
 * @param {string} options.name Name of output file.
 * @param {string} options.suffix Suffix of output file.
 * @param {string} options.type Type of library.
 * @param {string} options.ignoreModules The list of modules to ignore.
 * @param {string} options.externalModules The list of modules to set as external.
 * @param {Object[]} options.plugins List of plugins to use.
 * @returns {import('webpack').Configuration} One webpack target.
 */
function buildConfig({
  name = "",
  suffix = ".js",
  type = "module", // 'module' | 'commonjs'
  ignoreModules = [],
  externalModules = [],
  plugins = [],
} = {}) {
  const outputModule = type === "module";

  const alias = Object.fromEntries(
    ignoreModules.map((module) => [module, false]),
  );

  /** @type {import('webpack').Configuration} */
  const config = {
    mode: "development",
    devtool: "source-map",
    entry: {
      [`transformers${name}`]: "./src/transformers.js",
      [`transformers${name}.min`]: "./src/transformers.js",
    },
    output: {
      filename: `[name]${suffix}`,
      path: path.join(__dirname, "dist"),
      library: {
        type,
      },
      assetModuleFilename: "[name][ext]",
      chunkFormat: false,
    },
    optimization: {
      minimize: true,
      minimizer: [
        new TerserPlugin({
          test: new RegExp(`\\.min\\${suffix}$`),
          // Do not bundle with comments.
          // See https://webpack.js.org/plugins/terser-webpack-plugin/#remove-comments for more information.
          terserOptions: {
            output: {
              comments: false,
            },
          },
          extractComments: false,
        }),
      ],
    },
    experiments: {
      outputModule,
    },
    resolve: { alias },
    externals: externalModules,
    // Development server
    devServer: {
      static: {
        directory: __dirname,
      },
      port: 8080,
    },
    plugins,
  };

  if (outputModule) {
    config.module = {
      parser: {
        javascript: {
          importMeta: false,
        },
      },
    };
  } else {
    config.externalsType = "commonjs";
  }

  return config;
}

// Do not bundle onnxruntime-web when packaging for Node.js.
// Instead, we use the native library (onnxruntime-node).
const NODE_IGNORE_MODULES = ["onnxruntime-web"];

// Do not bundle the following modules with webpack (mark as external)
// NOTE: This is necessary for both type="module" and type="commonjs",
// and will be ignored when building for web (only used for node/deno)
const NODE_EXTERNAL_MODULES = [
  "onnxruntime-common",
  "onnxruntime-node",
  "sharp",
  "node:fs",
  "node:path",
  "node:url",
];

// Do not bundle node-only packages when bundling for the web.
// NOTE: We can exclude the "node:" prefix for built-in modules here,
// since we apply the `StripNodePrefixPlugin` to strip it.
const WEB_IGNORE_MODULES = ["onnxruntime-node", "sharp", "fs", "path", "url"];

// Do not bundle the following modules with webpack (mark as external)
const WEB_EXTERNAL_MODULES = [
  "onnxruntime-common",
  "onnxruntime-web",
];

// Web-only build
const WEB_BUILD = buildConfig({
  name: ".web",
  type: "module",
  ignoreModules: WEB_IGNORE_MODULES,
  externalModules: WEB_EXTERNAL_MODULES,
  plugins: [
    new StripNodePrefixPlugin()
  ]
});

// Web-only build, bundled with onnxruntime-web
const BUNDLE_BUILD = buildConfig({
  type: "module",
  ignoreModules: WEB_IGNORE_MODULES,
  plugins: [
    new StripNodePrefixPlugin(),
    new PostBuildPlugin(),
  ],
});

// Node-compatible builds
const NODE_BUILDS = [
  buildConfig({
    name: ".node",
    suffix: ".mjs",
    type: "module",
    ignoreModules: NODE_IGNORE_MODULES,
    externalModules: NODE_EXTERNAL_MODULES,
  }),
  buildConfig({
    name: ".node",
    suffix: ".cjs",
    type: "commonjs",
    ignoreModules: NODE_IGNORE_MODULES,
    externalModules: NODE_EXTERNAL_MODULES,
  }),
];

// When running with `webpack serve`, only build the web target.
const BUILDS = process.env.WEBPACK_SERVE
  ? [BUNDLE_BUILD]
  : [BUNDLE_BUILD, WEB_BUILD, ...NODE_BUILDS];

export default BUILDS;
transformers.js/webpack.config.js/0
{ "file_path": "transformers.js/webpack.config.js", "repo_id": "transformers.js", "token_count": 2352 }
372
# Contributor Covenant Code of Conduct

## Our Pledge

We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, or sexual identity and orientation.

We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community.

## Our Standards

Examples of behavior that contributes to a positive environment for our community include:

* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience
* Focusing on what is best not just for us as individuals, but for the overall community

Examples of unacceptable behavior include:

* The use of sexualized language or imagery, and sexual attention or advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a professional setting

## Enforcement Responsibilities

Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful.

Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate.

## Scope

This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event.

## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at feedback@huggingface.co. All complaints will be reviewed and investigated promptly and fairly.

All community leaders are obligated to respect the privacy and security of the reporter of any incident.

## Enforcement Guidelines

Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct:

### 1. Correction

**Community Impact**: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community.

**Consequence**: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested.

### 2. Warning

**Community Impact**: A violation through a single incident or series of actions.

**Consequence**: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban.

### 3. Temporary Ban

**Community Impact**: A serious violation of community standards, including sustained inappropriate behavior.

**Consequence**: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban.

### 4. Permanent Ban

**Community Impact**: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals.

**Consequence**: A permanent ban from any sort of public interaction within the community.

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 2.1, available at [https://www.contributor-covenant.org/version/2/1/code_of_conduct.html][v2.1].

Community Impact Guidelines were inspired by [Mozilla's code of conduct enforcement ladder][Mozilla CoC].

For answers to common questions about this code of conduct, see the FAQ at [https://www.contributor-covenant.org/faq][FAQ]. Translations are available at [https://www.contributor-covenant.org/translations][translations].

[homepage]: https://www.contributor-covenant.org
[v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html
[Mozilla CoC]: https://github.com/mozilla/diversity
[FAQ]: https://www.contributor-covenant.org/faq
[translations]: https://www.contributor-covenant.org/translations
transformers/CODE_OF_CONDUCT.md/0
{ "file_path": "transformers/CODE_OF_CONDUCT.md", "repo_id": "transformers", "token_count": 1206 }
373
{ "annotations": { "list": [ { "builtIn": 1, "datasource": { "type": "grafana", "uid": "-- Grafana --" }, "enable": true, "hide": true, "iconColor": "rgba(0, 211, 255, 1)", "name": "Annotations & Alerts", "type": "dashboard" } ] }, "editable": true, "fiscalYearStartMonth": 0, "graphTooltip": 0, "id": 1, "links": [ { "asDropdown": false, "icon": "external link", "includeVars": false, "keepTime": false, "tags": [], "targetBlank": false, "title": "Go to data", "tooltip": "Go to data", "type": "link", "url": "http://transformers-benchmarks.hf.co/d/fdz33iyzln9c0a/transformers-benchmarks?orgId=1&from=${StartTime}&to=${EndTime}" } ], "liveNow": true, "panels": [ { "datasource": { "default": true, "type": "grafana-postgresql-datasource", "uid": "be28nkzirtb0gd" }, "fieldConfig": { "defaults": { "color": { "mode": "thresholds" }, "custom": { "align": "left", "cellOptions": { "type": "auto" }, "inspect": false }, "mappings": [], "thresholds": { "mode": "absolute", "steps": [ { "color": "green", "value": null }, { "color": "red", "value": 80 } ] } }, "overrides": [ { "matcher": { "id": "byName", "options": "gpu_name" }, "properties": [ { "id": "custom.width", "value": 202 } ] }, { "matcher": { "id": "byName", "options": "left" }, "properties": [ { "id": "custom.width", "value": 407 } ] }, { "matcher": { "id": "byName", "options": "commit_message" }, "properties": [ { "id": "custom.width", "value": 524 } ] }, { "matcher": { "id": "byName", "options": "commit_id" }, "properties": [ { "id": "custom.width", "value": 353 } ] }, { "matcher": { "id": "byName", "options": "model_id" }, "properties": [ { "id": "custom.width", "value": 216 } ] } ] }, "gridPos": { "h": 6, "w": 24, "x": 0, "y": 0 }, "id": 5, "options": { "cellHeight": "sm", "footer": { "countRows": false, "fields": "", "reducer": [ "sum" ], "show": false }, "showHeader": true, "sortBy": [] }, "pluginVersion": "11.2.2", "targets": [ { "datasource": { "default": true, "type": "grafana-postgresql-datasource", "uid": "be28nkzirtb0gd" }, "editorMode": "code", "format": "table", "rawQuery": true, "rawSql": "SELECT commit_id, commit_message, metadata->>'gpu_name' as gpu_name, metadata->>'model_id' as model_id, created_at AS date FROM benchmarks WHERE branch = '${branch}' AND metadata->>'gpu_name' = '${gpu_name}' ORDER BY benchmark_id DESC LIMIT ${last_n_commits};", "refId": "A", "sql": { "columns": [ { "parameters": [ { "name": "commit_id", "type": "functionParameter" } ], "type": "function" }, { "parameters": [ { "name": "gpu_name", "type": "functionParameter" } ], "type": "function" } ], "groupBy": [ { "property": { "type": "string" }, "type": "groupBy" } ], "limit": 50, "whereJsonTree": { "children1": [ { "id": "baaa8aaa-89ab-4cde-b012-31922f96de3f", "properties": { "field": "commit_id", "fieldSrc": "field", "operator": "equal", "value": [ "${commit}" ], "valueError": [ null ], "valueSrc": [ "value" ], "valueType": [ "text" ] }, "type": "rule" } ], "id": "bab88a98-0123-4456-b89a-b1922f7d4f11", "type": "group" }, "whereString": "commit_id = '${commit}'" }, "table": "benchmarks" } ], "transparent": true, "type": "table" }, { "collapsed": false, "gridPos": { "h": 1, "w": 24, "x": 0, "y": 6 }, "id": 13, "panels": [], "title": "Eager Forward Pass", "type": "row" }, { "datasource": { "default": true, "type": "grafana-postgresql-datasource", "uid": "be28nkzirtb0gd" }, "fieldConfig": { "defaults": { "color": { "mode": "continuous-YlBl" }, "custom": { "axisBorderShow": false, "axisCenteredZero": false, "axisColorMode": "text", "axisLabel": "", 
"axisPlacement": "auto", "fillOpacity": 80, "gradientMode": "scheme", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, "lineWidth": 0, "scaleDistribution": { "type": "linear" }, "thresholdsStyle": { "mode": "off" } }, "mappings": [], "thresholds": { "mode": "absolute", "steps": [ { "color": "green", "value": null } ] }, "unit": "s" }, "overrides": [] }, "gridPos": { "h": 11, "w": 12, "x": 0, "y": 7 }, "id": 7, "options": { "barRadius": 0.05, "barWidth": 0.8, "fullHighlight": false, "groupWidth": 0.7, "legend": { "calcs": [], "displayMode": "list", "placement": "bottom", "showLegend": false }, "orientation": "auto", "showValue": "auto", "stacking": "none", "tooltip": { "mode": "single", "sort": "none" }, "xTickLabelRotation": 0, "xTickLabelSpacing": 0 }, "pluginVersion": "11.2.2", "targets": [ { "datasource": { "default": true, "type": "grafana-postgresql-datasource", "uid": "be28nkzirtb0gd" }, "editorMode": "code", "format": "table", "rawQuery": true, "rawSql": "SELECT CAST(m.measurements->'first_eager_forward_pass_time_secs' AS double precision) AS first_eager_forward_pass_time_secs, left(b.commit_id, 7), m.time FROM benchmarks as b JOIN model_measurements AS m ON b.benchmark_id = m.benchmark_id WHERE b.branch = '${branch}' AND b.metadata->>'gpu_name' = '${gpu_name}' ORDER BY b.benchmark_id DESC LIMIT ${last_n_commits};", "refId": "A", "sql": { "columns": [ { "parameters": [], "type": "function" } ], "groupBy": [ { "property": { "type": "string" }, "type": "groupBy" } ], "limit": 50 } } ], "title": "First eager forward pass", "transformations": [ { "id": "sortBy", "options": { "fields": {}, "sort": [ { "field": "time" } ] } } ], "transparent": true, "type": "barchart" }, { "datasource": { "default": true, "type": "grafana-postgresql-datasource", "uid": "be28nkzirtb0gd" }, "fieldConfig": { "defaults": { "color": { "mode": "continuous-YlBl" }, "custom": { "axisBorderShow": false, "axisCenteredZero": false, "axisColorMode": "text", "axisLabel": "", "axisPlacement": "auto", "fillOpacity": 80, "gradientMode": "scheme", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, "lineWidth": 0, "scaleDistribution": { "type": "linear" }, "thresholdsStyle": { "mode": "off" } }, "mappings": [], "thresholds": { "mode": "absolute", "steps": [ { "color": "green", "value": null }, { "color": "red", "value": 80 } ] }, "unit": "s" }, "overrides": [] }, "gridPos": { "h": 11, "w": 12, "x": 12, "y": 7 }, "id": 9, "options": { "barRadius": 0.05, "barWidth": 0.8, "fullHighlight": false, "groupWidth": 0.7, "legend": { "calcs": [], "displayMode": "list", "placement": "bottom", "showLegend": false }, "orientation": "auto", "showValue": "auto", "stacking": "none", "tooltip": { "mode": "single", "sort": "none" }, "xTickLabelRotation": 0, "xTickLabelSpacing": 0 }, "targets": [ { "datasource": { "default": true, "type": "grafana-postgresql-datasource", "uid": "be28nkzirtb0gd" }, "editorMode": "code", "format": "table", "rawQuery": true, "rawSql": "SELECT CAST(m.measurements->'second_eager_forward_pass_time_secs' AS double precision) AS second_eager_forward_pass_time_secs, left(b.commit_id, 7), m.time FROM benchmarks as b JOIN model_measurements AS m ON b.benchmark_id = m.benchmark_id WHERE b.branch = '${branch}' AND b.metadata->>'gpu_name' = '${gpu_name}' ORDER BY b.benchmark_id DESC LIMIT ${last_n_commits};", "refId": "A", "sql": { "columns": [ { "parameters": [], "type": "function" } ], "groupBy": [ { "property": { "type": "string" }, "type": "groupBy" } ], "limit": 50 } } ], "title": "Second 
eager forward pass", "transformations": [ { "id": "sortBy", "options": { "fields": {}, "sort": [ { "field": "time" } ] } } ], "transparent": true, "type": "barchart" }, { "collapsed": false, "gridPos": { "h": 1, "w": 24, "x": 0, "y": 18 }, "id": 16, "panels": [], "title": "Time to next token", "type": "row" }, { "datasource": { "default": true, "type": "grafana-postgresql-datasource", "uid": "be28nkzirtb0gd" }, "fieldConfig": { "defaults": { "color": { "mode": "continuous-YlBl" }, "custom": { "axisBorderShow": false, "axisCenteredZero": false, "axisColorMode": "text", "axisLabel": "", "axisPlacement": "auto", "fillOpacity": 80, "gradientMode": "scheme", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, "lineWidth": 0, "scaleDistribution": { "type": "linear" }, "thresholdsStyle": { "mode": "off" } }, "mappings": [], "thresholds": { "mode": "absolute", "steps": [ { "color": "green", "value": null } ] }, "unit": "s" }, "overrides": [] }, "gridPos": { "h": 11, "w": 12, "x": 0, "y": 19 }, "id": 17, "options": { "barRadius": 0.05, "barWidth": 0.8, "fullHighlight": false, "groupWidth": 0.7, "legend": { "calcs": [], "displayMode": "list", "placement": "bottom", "showLegend": false }, "orientation": "auto", "showValue": "always", "stacking": "none", "tooltip": { "mode": "single", "sort": "none" }, "xTickLabelRotation": 0, "xTickLabelSpacing": 0 }, "targets": [ { "datasource": { "default": true, "type": "grafana-postgresql-datasource", "uid": "be28nkzirtb0gd" }, "editorMode": "code", "format": "table", "rawQuery": true, "rawSql": "SELECT CAST(m.measurements->'time_to_first_token_secs' AS double precision) AS time_to_first_token_secs, left(b.commit_id, 7), m.time FROM benchmarks as b JOIN model_measurements AS m ON b.benchmark_id = m.benchmark_id WHERE b.branch = '${branch}' AND b.metadata->>'gpu_name' = '${gpu_name}' ORDER BY b.benchmark_id DESC LIMIT ${last_n_commits};", "refId": "A", "sql": { "columns": [ { "parameters": [], "type": "function" } ], "groupBy": [ { "property": { "type": "string" }, "type": "groupBy" } ], "limit": 50 } } ], "title": "Time to first token", "transformations": [ { "id": "sortBy", "options": { "fields": {}, "sort": [ { "field": "time" } ] } } ], "transparent": true, "type": "barchart" }, { "datasource": { "default": true, "type": "grafana-postgresql-datasource", "uid": "be28nkzirtb0gd" }, "fieldConfig": { "defaults": { "color": { "mode": "continuous-YlBl" }, "custom": { "axisBorderShow": false, "axisCenteredZero": false, "axisColorMode": "text", "axisLabel": "", "axisPlacement": "auto", "fillOpacity": 80, "gradientMode": "scheme", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, "lineWidth": 0, "scaleDistribution": { "type": "linear" }, "thresholdsStyle": { "mode": "off" } }, "mappings": [], "thresholds": { "mode": "absolute", "steps": [ { "color": "green", "value": null } ] }, "unit": "s" }, "overrides": [] }, "gridPos": { "h": 11, "w": 12, "x": 12, "y": 19 }, "id": 18, "options": { "barRadius": 0.05, "barWidth": 0.8, "fullHighlight": false, "groupWidth": 0.7, "legend": { "calcs": [], "displayMode": "list", "placement": "bottom", "showLegend": false }, "orientation": "auto", "showValue": "always", "stacking": "none", "tooltip": { "mode": "single", "sort": "none" }, "xTickLabelRotation": 0, "xTickLabelSpacing": 0 }, "targets": [ { "datasource": { "default": true, "type": "grafana-postgresql-datasource", "uid": "be28nkzirtb0gd" }, "editorMode": "code", "format": "table", "rawQuery": true, "rawSql": "SELECT 
CAST(m.measurements->'time_to_second_token_secs' AS double precision) AS time_to_second_token_secs, left(b.commit_id, 7), m.time FROM benchmarks as b JOIN model_measurements AS m ON b.benchmark_id = m.benchmark_id WHERE b.branch = '${branch}' AND b.metadata->>'gpu_name' = '${gpu_name}' ORDER BY b.benchmark_id DESC LIMIT ${last_n_commits};", "refId": "A", "sql": { "columns": [ { "parameters": [], "type": "function" } ], "groupBy": [ { "property": { "type": "string" }, "type": "groupBy" } ], "limit": 50 } } ], "title": "Time to second token", "transformations": [ { "id": "sortBy", "options": { "fields": {}, "sort": [ { "field": "time" } ] } } ], "transparent": true, "type": "barchart" }, { "datasource": { "default": true, "type": "grafana-postgresql-datasource", "uid": "be28nkzirtb0gd" }, "fieldConfig": { "defaults": { "color": { "mode": "continuous-YlBl" }, "custom": { "axisBorderShow": false, "axisCenteredZero": false, "axisColorMode": "text", "axisLabel": "", "axisPlacement": "auto", "fillOpacity": 80, "gradientMode": "scheme", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, "lineWidth": 0, "scaleDistribution": { "type": "linear" }, "thresholdsStyle": { "mode": "off" } }, "mappings": [], "thresholds": { "mode": "absolute", "steps": [ { "color": "green", "value": null } ] }, "unit": "s" }, "overrides": [] }, "gridPos": { "h": 11, "w": 12, "x": 0, "y": 30 }, "id": 19, "options": { "barRadius": 0.05, "barWidth": 0.8, "fullHighlight": false, "groupWidth": 0.7, "legend": { "calcs": [], "displayMode": "list", "placement": "bottom", "showLegend": false }, "orientation": "auto", "showValue": "always", "stacking": "none", "tooltip": { "mode": "single", "sort": "none" }, "xTickLabelRotation": 0, "xTickLabelSpacing": 0 }, "targets": [ { "datasource": { "default": true, "type": "grafana-postgresql-datasource", "uid": "be28nkzirtb0gd" }, "editorMode": "code", "format": "table", "rawQuery": true, "rawSql": "SELECT CAST(m.measurements->'time_to_third_token_secs' AS double precision) AS time_to_third_token_secs, left(b.commit_id, 7), m.time FROM benchmarks as b JOIN model_measurements AS m ON b.benchmark_id = m.benchmark_id WHERE b.branch = '${branch}' AND b.metadata->>'gpu_name' = '${gpu_name}' ORDER BY b.benchmark_id DESC LIMIT ${last_n_commits};", "refId": "A", "sql": { "columns": [ { "parameters": [], "type": "function" } ], "groupBy": [ { "property": { "type": "string" }, "type": "groupBy" } ], "limit": 50 } } ], "title": "Time to third token", "transformations": [ { "id": "sortBy", "options": { "fields": {}, "sort": [ { "field": "time" } ] } } ], "transparent": true, "type": "barchart" }, { "datasource": { "default": true, "type": "grafana-postgresql-datasource", "uid": "be28nkzirtb0gd" }, "fieldConfig": { "defaults": { "color": { "mode": "continuous-YlBl" }, "custom": { "axisBorderShow": false, "axisCenteredZero": false, "axisColorMode": "text", "axisLabel": "", "axisPlacement": "auto", "fillOpacity": 80, "gradientMode": "scheme", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, "lineWidth": 0, "scaleDistribution": { "type": "linear" }, "thresholdsStyle": { "mode": "off" } }, "mappings": [], "thresholds": { "mode": "absolute", "steps": [ { "color": "green", "value": null } ] }, "unit": "s" }, "overrides": [] }, "gridPos": { "h": 11, "w": 12, "x": 12, "y": 30 }, "id": 20, "options": { "barRadius": 0.05, "barWidth": 0.8, "fullHighlight": false, "groupWidth": 0.7, "legend": { "calcs": [], "displayMode": "list", "placement": "bottom", "showLegend": false }, 
"orientation": "auto", "showValue": "always", "stacking": "none", "tooltip": { "mode": "single", "sort": "none" }, "xTickLabelRotation": 0, "xTickLabelSpacing": 0 }, "targets": [ { "datasource": { "default": true, "type": "grafana-postgresql-datasource", "uid": "be28nkzirtb0gd" }, "editorMode": "code", "format": "table", "rawQuery": true, "rawSql": "SELECT CAST(m.measurements->'time_to_next_token_mean_secs' AS double precision) AS time_to_next_token_mean_secs, left(b.commit_id, 7), m.time FROM benchmarks as b JOIN model_measurements AS m ON b.benchmark_id = m.benchmark_id WHERE b.branch = '${branch}' AND b.metadata->>'gpu_name' = '${gpu_name}' ORDER BY b.benchmark_id DESC LIMIT ${last_n_commits};", "refId": "A", "sql": { "columns": [ { "parameters": [], "type": "function" } ], "groupBy": [ { "property": { "type": "string" }, "type": "groupBy" } ], "limit": 50 } } ], "title": "Time to subsequent next tokens mean", "transformations": [ { "id": "sortBy", "options": { "fields": {}, "sort": [ { "field": "time" } ] } } ], "transparent": true, "type": "barchart" }, { "collapsed": false, "gridPos": { "h": 1, "w": 24, "x": 0, "y": 41 }, "id": 14, "panels": [], "title": "Compiled Generate", "type": "row" }, { "datasource": { "default": true, "type": "grafana-postgresql-datasource", "uid": "be28nkzirtb0gd" }, "fieldConfig": { "defaults": { "color": { "mode": "continuous-YlBl" }, "custom": { "axisBorderShow": false, "axisCenteredZero": false, "axisColorMode": "text", "axisLabel": "", "axisPlacement": "auto", "fillOpacity": 80, "gradientMode": "scheme", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, "lineWidth": 0, "scaleDistribution": { "type": "linear" }, "thresholdsStyle": { "mode": "off" } }, "mappings": [], "thresholds": { "mode": "absolute", "steps": [ { "color": "green", "value": null } ] }, "unit": "s" }, "overrides": [] }, "gridPos": { "h": 11, "w": 12, "x": 0, "y": 42 }, "id": 8, "options": { "barRadius": 0.05, "barWidth": 0.8, "fullHighlight": false, "groupWidth": 0.7, "legend": { "calcs": [], "displayMode": "list", "placement": "bottom", "showLegend": false }, "orientation": "auto", "showValue": "always", "stacking": "none", "tooltip": { "mode": "single", "sort": "none" }, "xTickLabelRotation": 0, "xTickLabelSpacing": 0 }, "targets": [ { "datasource": { "default": true, "type": "grafana-postgresql-datasource", "uid": "be28nkzirtb0gd" }, "editorMode": "code", "format": "table", "rawQuery": true, "rawSql": "SELECT CAST(m.measurements->'first_compile_generate_time_secs' AS double precision) AS first_compile_generate_time_secs, left(b.commit_id, 7), m.time FROM benchmarks as b JOIN model_measurements AS m ON b.benchmark_id = m.benchmark_id WHERE b.branch = '${branch}' AND b.metadata->>'gpu_name' = '${gpu_name}' ORDER BY b.benchmark_id DESC LIMIT ${last_n_commits};", "refId": "A", "sql": { "columns": [ { "parameters": [], "type": "function" } ], "groupBy": [ { "property": { "type": "string" }, "type": "groupBy" } ], "limit": 50 } } ], "title": "First compile generate", "transformations": [ { "id": "sortBy", "options": { "fields": {}, "sort": [ { "field": "time" } ] } } ], "transparent": true, "type": "barchart" }, { "datasource": { "default": true, "type": "grafana-postgresql-datasource", "uid": "be28nkzirtb0gd" }, "fieldConfig": { "defaults": { "color": { "mode": "continuous-YlBl" }, "custom": { "axisBorderShow": false, "axisCenteredZero": false, "axisColorMode": "text", "axisLabel": "", "axisPlacement": "auto", "fillOpacity": 80, "gradientMode": "scheme", "hideFrom": { 
"legend": false, "tooltip": false, "viz": false }, "lineWidth": 0, "scaleDistribution": { "type": "linear" }, "thresholdsStyle": { "mode": "off" } }, "mappings": [], "thresholds": { "mode": "absolute", "steps": [ { "color": "green", "value": null } ] }, "unit": "s" }, "overrides": [] }, "gridPos": { "h": 11, "w": 12, "x": 12, "y": 42 }, "id": 10, "options": { "barRadius": 0.05, "barWidth": 0.8, "fullHighlight": false, "groupWidth": 0.7, "legend": { "calcs": [], "displayMode": "list", "placement": "bottom", "showLegend": false }, "orientation": "auto", "showValue": "auto", "stacking": "none", "tooltip": { "mode": "single", "sort": "none" }, "xTickLabelRotation": 0, "xTickLabelSpacing": 0 }, "targets": [ { "datasource": { "default": true, "type": "grafana-postgresql-datasource", "uid": "be28nkzirtb0gd" }, "editorMode": "code", "format": "table", "rawQuery": true, "rawSql": "SELECT CAST(m.measurements->'second_compile_generate_time_secs' AS double precision) AS second_compile_generate_time_secs, left(b.commit_id, 7), m.time FROM benchmarks as b JOIN model_measurements AS m ON b.benchmark_id = m.benchmark_id WHERE b.branch = '${branch}' AND b.metadata->>'gpu_name' = '${gpu_name}' ORDER BY b.benchmark_id DESC LIMIT ${last_n_commits};", "refId": "A", "sql": { "columns": [ { "parameters": [], "type": "function" } ], "groupBy": [ { "property": { "type": "string" }, "type": "groupBy" } ], "limit": 50 } } ], "title": "Second compile generate", "transformations": [ { "id": "sortBy", "options": { "fields": {}, "sort": [ { "field": "time" } ] } } ], "transparent": true, "type": "barchart" }, { "datasource": { "default": true, "type": "grafana-postgresql-datasource", "uid": "be28nkzirtb0gd" }, "fieldConfig": { "defaults": { "color": { "mode": "continuous-YlBl" }, "custom": { "axisBorderShow": false, "axisCenteredZero": false, "axisColorMode": "text", "axisLabel": "", "axisPlacement": "auto", "fillOpacity": 80, "gradientMode": "scheme", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, "lineWidth": 0, "scaleDistribution": { "type": "linear" }, "thresholdsStyle": { "mode": "off" } }, "mappings": [], "thresholds": { "mode": "absolute", "steps": [ { "color": "green", "value": null } ] }, "unit": "s" }, "overrides": [] }, "gridPos": { "h": 11, "w": 12, "x": 0, "y": 53 }, "id": 11, "options": { "barRadius": 0.05, "barWidth": 0.8, "fullHighlight": false, "groupWidth": 0.7, "legend": { "calcs": [], "displayMode": "list", "placement": "bottom", "showLegend": false }, "orientation": "auto", "showValue": "auto", "stacking": "none", "tooltip": { "mode": "single", "sort": "none" }, "xTickLabelRotation": 0, "xTickLabelSpacing": 0 }, "targets": [ { "datasource": { "default": true, "type": "grafana-postgresql-datasource", "uid": "be28nkzirtb0gd" }, "editorMode": "code", "format": "table", "rawQuery": true, "rawSql": "SELECT CAST(m.measurements->'third_compile_generate_time_secs' AS double precision) AS third_compile_generate_time_secs, left(b.commit_id, 7), m.time FROM benchmarks as b JOIN model_measurements AS m ON b.benchmark_id = m.benchmark_id WHERE b.branch = '${branch}' AND b.metadata->>'gpu_name' = '${gpu_name}' ORDER BY b.benchmark_id DESC LIMIT ${last_n_commits};", "refId": "A", "sql": { "columns": [ { "parameters": [], "type": "function" } ], "groupBy": [ { "property": { "type": "string" }, "type": "groupBy" } ], "limit": 50 } } ], "title": "Third compile generate", "transformations": [ { "id": "sortBy", "options": { "fields": {}, "sort": [ { "field": "time" } ] } } ], "transparent": true, 
"type": "barchart" }, { "datasource": { "default": true, "type": "grafana-postgresql-datasource", "uid": "be28nkzirtb0gd" }, "fieldConfig": { "defaults": { "color": { "mode": "continuous-YlBl" }, "custom": { "axisBorderShow": false, "axisCenteredZero": false, "axisColorMode": "text", "axisLabel": "", "axisPlacement": "auto", "fillOpacity": 80, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, "lineWidth": 0, "scaleDistribution": { "type": "linear" }, "thresholdsStyle": { "mode": "off" } }, "mappings": [], "thresholds": { "mode": "absolute", "steps": [ { "color": "green", "value": null } ] }, "unit": "s" }, "overrides": [] }, "gridPos": { "h": 11, "w": 12, "x": 12, "y": 53 }, "id": 12, "options": { "barRadius": 0.05, "barWidth": 0.8, "fullHighlight": false, "groupWidth": 0.7, "legend": { "calcs": [], "displayMode": "list", "placement": "bottom", "showLegend": false }, "orientation": "auto", "showValue": "auto", "stacking": "none", "tooltip": { "mode": "single", "sort": "none" }, "xTickLabelRotation": 0, "xTickLabelSpacing": 0 }, "targets": [ { "datasource": { "default": true, "type": "grafana-postgresql-datasource", "uid": "be28nkzirtb0gd" }, "editorMode": "code", "format": "table", "rawQuery": true, "rawSql": "SELECT CAST(m.measurements->'fourth_compile_generate_time_secs' AS double precision) AS fourth_compile_generate_time_secs, left(b.commit_id, 7), m.time FROM benchmarks as b JOIN model_measurements AS m ON b.benchmark_id = m.benchmark_id WHERE b.branch = '${branch}' AND b.metadata->>'gpu_name' = '${gpu_name}' ORDER BY b.benchmark_id DESC LIMIT ${last_n_commits};", "refId": "A", "sql": { "columns": [ { "parameters": [], "type": "function" } ], "groupBy": [ { "property": { "type": "string" }, "type": "groupBy" } ], "limit": 50 } } ], "title": "Fourth compile generate", "transformations": [ { "id": "sortBy", "options": { "fields": {}, "sort": [ { "field": "time" } ] } } ], "transparent": true, "type": "barchart" }, { "collapsed": true, "gridPos": { "h": 1, "w": 24, "x": 0, "y": 64 }, "id": 15, "panels": [ { "datasource": {}, "fieldConfig": { "defaults": { "color": { "mode": "palette-classic" }, "custom": { "axisBorderShow": false, "axisCenteredZero": false, "axisColorMode": "text", "axisLabel": "", "axisPlacement": "auto", "barAlignment": 0, "barWidthFactor": 0.6, "drawStyle": "line", "fillOpacity": 0, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, "insertNulls": 60000, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, "scaleDistribution": { "type": "linear" }, "showPoints": "auto", "spanNulls": false, "stacking": { "group": "A", "mode": "none" }, "thresholdsStyle": { "mode": "off" } }, "mappings": [], "thresholds": { "mode": "absolute", "steps": [ { "color": "green" }, { "color": "red", "value": 80 } ] }, "unit": "percent" }, "overrides": [] }, "gridPos": { "h": 9, "w": 12, "x": 0, "y": 65 }, "id": 1, "options": { "legend": { "calcs": [], "displayMode": "list", "placement": "bottom", "showLegend": true }, "tooltip": { "mode": "single", "sort": "none" } }, "targets": [ { "datasource": { "default": true, "type": "grafana-postgresql-datasource", "uid": "be28nkzirtb0gd" }, "editorMode": "code", "format": "table", "rawQuery": true, "rawSql": "SELECT\n d.cpu_util,\n d.time\nFROM\n benchmarks AS b\n JOIN device_measurements AS d ON b.benchmark_id = d.benchmark_id\nWHERE\n branch = '${branch}';", "refId": "A", "sql": { "columns": [ { "parameters": [ { "name": "cpu_util", "type": "functionParameter" } 
], "type": "function" }, { "parameters": [ { "name": "mem_megabytes", "type": "functionParameter" } ], "type": "function" }, { "parameters": [ { "name": "gpu_util", "type": "functionParameter" } ], "type": "function" }, { "parameters": [ { "name": "gpu_mem_megabytes", "type": "functionParameter" } ], "type": "function" }, { "parameters": [ { "name": "\"time\"", "type": "functionParameter" } ], "type": "function" } ], "groupBy": [ { "property": { "type": "string" }, "type": "groupBy" } ], "limit": 50, "whereJsonTree": { "children1": [ { "id": "baa888b8-89ab-4cde-b012-31922f8671e9", "properties": { "field": "commit_id", "fieldSrc": "field", "operator": "equal", "value": [ "${commit}" ], "valueError": [ null ], "valueSrc": [ "value" ], "valueType": [ "text" ] }, "type": "rule" } ], "id": "bab88a98-0123-4456-b89a-b1922f7d4f11", "type": "group" }, "whereString": "commit_id = '${commit}'" }, "table": "measurements" } ], "title": "CPU Utilization", "transparent": true, "type": "timeseries" }, { "datasource": {}, "fieldConfig": { "defaults": { "color": { "mode": "palette-classic" }, "custom": { "axisBorderShow": false, "axisCenteredZero": false, "axisColorMode": "text", "axisLabel": "", "axisPlacement": "auto", "barAlignment": 0, "barWidthFactor": 0.6, "drawStyle": "line", "fillOpacity": 0, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, "insertNulls": 60000, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, "scaleDistribution": { "type": "linear" }, "showPoints": "auto", "spanNulls": false, "stacking": { "group": "A", "mode": "none" }, "thresholdsStyle": { "mode": "off" } }, "mappings": [], "thresholds": { "mode": "absolute", "steps": [ { "color": "green" }, { "color": "red", "value": 80 } ] }, "unit": "percent" }, "overrides": [] }, "gridPos": { "h": 9, "w": 12, "x": 12, "y": 65 }, "id": 4, "options": { "legend": { "calcs": [], "displayMode": "list", "placement": "bottom", "showLegend": true }, "tooltip": { "mode": "single", "sort": "none" } }, "targets": [ { "datasource": { "default": true, "type": "grafana-postgresql-datasource", "uid": "be28nkzirtb0gd" }, "editorMode": "code", "format": "table", "rawQuery": true, "rawSql": "SELECT\n b.commit_id,\n d.gpu_util,\n d.time\nFROM\n benchmarks AS b\n JOIN device_measurements AS d ON b.benchmark_id = d.benchmark_id\nWHERE\n branch = '${branch}';", "refId": "A", "sql": { "columns": [ { "parameters": [ { "name": "cpu_util", "type": "functionParameter" } ], "type": "function" }, { "parameters": [ { "name": "mem_megabytes", "type": "functionParameter" } ], "type": "function" }, { "parameters": [ { "name": "gpu_util", "type": "functionParameter" } ], "type": "function" }, { "parameters": [ { "name": "gpu_mem_megabytes", "type": "functionParameter" } ], "type": "function" }, { "parameters": [ { "name": "\"time\"", "type": "functionParameter" } ], "type": "function" } ], "groupBy": [ { "property": { "type": "string" }, "type": "groupBy" } ], "limit": 50, "whereJsonTree": { "children1": [ { "id": "baa888b8-89ab-4cde-b012-31922f8671e9", "properties": { "field": "commit_id", "fieldSrc": "field", "operator": "equal", "value": [ "${commit}" ], "valueError": [ null ], "valueSrc": [ "value" ], "valueType": [ "text" ] }, "type": "rule" } ], "id": "bab88a98-0123-4456-b89a-b1922f7d4f11", "type": "group" }, "whereString": "commit_id = '${commit}'" }, "table": "measurements" } ], "title": "GPU Utilization", "transparent": true, "type": "timeseries" }, { "datasource": {}, "fieldConfig": { "defaults": { "color": { 
"mode": "palette-classic" }, "custom": { "axisBorderShow": false, "axisCenteredZero": false, "axisColorMode": "text", "axisLabel": "", "axisPlacement": "auto", "barAlignment": 0, "barWidthFactor": 0.6, "drawStyle": "line", "fillOpacity": 0, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, "insertNulls": 60000, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, "scaleDistribution": { "type": "linear" }, "showPoints": "auto", "spanNulls": false, "stacking": { "group": "A", "mode": "none" }, "thresholdsStyle": { "mode": "off" } }, "mappings": [], "thresholds": { "mode": "absolute", "steps": [ { "color": "green" }, { "color": "red", "value": 80 } ] }, "unit": "decmbytes" }, "overrides": [] }, "gridPos": { "h": 9, "w": 12, "x": 0, "y": 74 }, "id": 2, "options": { "legend": { "calcs": [], "displayMode": "list", "placement": "bottom", "showLegend": true }, "tooltip": { "mode": "single", "sort": "none" } }, "targets": [ { "datasource": { "default": true, "type": "grafana-postgresql-datasource", "uid": "be28nkzirtb0gd" }, "editorMode": "code", "format": "table", "rawQuery": true, "rawSql": "SELECT d.mem_megabytes, d.time FROM benchmarks AS b JOIN device_measurements AS d ON b.benchmark_id = d.benchmark_id WHERE branch = '${branch}';", "refId": "A", "sql": { "columns": [ { "parameters": [ { "name": "cpu_util", "type": "functionParameter" } ], "type": "function" }, { "parameters": [ { "name": "mem_megabytes", "type": "functionParameter" } ], "type": "function" }, { "parameters": [ { "name": "gpu_util", "type": "functionParameter" } ], "type": "function" }, { "parameters": [ { "name": "gpu_mem_megabytes", "type": "functionParameter" } ], "type": "function" }, { "parameters": [ { "name": "\"time\"", "type": "functionParameter" } ], "type": "function" } ], "groupBy": [ { "property": { "type": "string" }, "type": "groupBy" } ], "limit": 50, "whereJsonTree": { "children1": [ { "id": "baa888b8-89ab-4cde-b012-31922f8671e9", "properties": { "field": "commit_id", "fieldSrc": "field", "operator": "equal", "value": [ "${commit}" ], "valueError": [ null ], "valueSrc": [ "value" ], "valueType": [ "text" ] }, "type": "rule" } ], "id": "bab88a98-0123-4456-b89a-b1922f7d4f11", "type": "group" }, "whereString": "commit_id = '${commit}'" }, "table": "measurements" } ], "title": "Memory usage", "transparent": true, "type": "timeseries" }, { "datasource": {}, "fieldConfig": { "defaults": { "color": { "mode": "palette-classic" }, "custom": { "axisBorderShow": false, "axisCenteredZero": false, "axisColorMode": "text", "axisLabel": "", "axisPlacement": "auto", "barAlignment": 0, "barWidthFactor": 0.6, "drawStyle": "line", "fillOpacity": 0, "gradientMode": "none", "hideFrom": { "legend": false, "tooltip": false, "viz": false }, "insertNulls": 60000, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, "scaleDistribution": { "type": "linear" }, "showPoints": "auto", "spanNulls": false, "stacking": { "group": "A", "mode": "none" }, "thresholdsStyle": { "mode": "off" } }, "mappings": [], "thresholds": { "mode": "absolute", "steps": [ { "color": "green" }, { "color": "red", "value": 80 } ] }, "unit": "decmbytes" }, "overrides": [] }, "gridPos": { "h": 9, "w": 12, "x": 12, "y": 74 }, "id": 3, "options": { "legend": { "calcs": [], "displayMode": "list", "placement": "bottom", "showLegend": true }, "tooltip": { "mode": "single", "sort": "none" } }, "targets": [ { "datasource": { "default": true, "type": "grafana-postgresql-datasource", "uid": "be28nkzirtb0gd" }, 
"editorMode": "code", "format": "table", "rawQuery": true, "rawSql": "SELECT\n d.gpu_mem_megabytes,\n d.time\nFROM\n benchmarks AS b\n JOIN device_measurements AS d ON b.benchmark_id = d.benchmark_id\nWHERE\n branch = '${branch}';", "refId": "A", "sql": { "columns": [ { "parameters": [ { "name": "cpu_util", "type": "functionParameter" } ], "type": "function" }, { "parameters": [ { "name": "mem_megabytes", "type": "functionParameter" } ], "type": "function" }, { "parameters": [ { "name": "gpu_util", "type": "functionParameter" } ], "type": "function" }, { "parameters": [ { "name": "gpu_mem_megabytes", "type": "functionParameter" } ], "type": "function" }, { "parameters": [ { "name": "\"time\"", "type": "functionParameter" } ], "type": "function" } ], "groupBy": [ { "property": { "type": "string" }, "type": "groupBy" } ], "limit": 50, "whereJsonTree": { "children1": [ { "id": "baa888b8-89ab-4cde-b012-31922f8671e9", "properties": { "field": "commit_id", "fieldSrc": "field", "operator": "equal", "value": [ "${commit}" ], "valueError": [ null ], "valueSrc": [ "value" ], "valueType": [ "text" ] }, "type": "rule" } ], "id": "bab88a98-0123-4456-b89a-b1922f7d4f11", "type": "group" }, "whereString": "commit_id = '${commit}'" }, "table": "measurements" } ], "title": "GPU memory usage", "transparent": true, "type": "timeseries" } ], "title": "Usage metrics", "type": "row" } ], "schemaVersion": 39, "tags": [], "templating": { "list": [ { "current": { "selected": false, "text": "main", "value": "main" }, "datasource": { "default": true, "type": "grafana-postgresql-datasource", "uid": "be28nkzirtb0gd" }, "definition": "SELECT DISTINCT branch FROM benchmarks;", "description": "", "hide": 0, "includeAll": false, "label": "branch", "multi": false, "name": "branch", "options": [], "query": "SELECT DISTINCT branch FROM benchmarks;", "refresh": 1, "regex": "", "skipUrlSync": false, "sort": 0, "type": "query" }, { "current": { "selected": false, "text": "1729701492845", "value": "1729701492845" }, "datasource": { "default": true, "type": "grafana-postgresql-datasource", "uid": "be28nkzirtb0gd" }, "definition": "SELECT created_at - INTERVAL '5 secs' FROM benchmarks WHERE branch = '${branch}' ORDER BY benchmark_id ASC LIMIT 1;", "description": "", "hide": 2, "includeAll": false, "multi": false, "name": "StartTime", "options": [], "query": "SELECT created_at - INTERVAL '5 secs' FROM benchmarks WHERE branch = '${branch}' ORDER BY benchmark_id ASC LIMIT 1;", "refresh": 2, "regex": "", "skipUrlSync": false, "sort": 0, "type": "query" }, { "current": { "selected": false, "text": "1730393397577", "value": "1730393397577" }, "datasource": { "default": true, "type": "grafana-postgresql-datasource", "uid": "be28nkzirtb0gd" }, "definition": "SELECT time + INTERVAL '5 secs' FROM benchmarks AS b JOIN device_measurements AS d ON b.benchmark_id = d.benchmark_id WHERE branch = '${branch}' ORDER BY b.benchmark_id DESC, d.measurement_id DESC LIMIT 1;", "description": "", "hide": 2, "includeAll": false, "multi": false, "name": "EndTime", "options": [], "query": "SELECT time + INTERVAL '5 secs' FROM benchmarks AS b JOIN device_measurements AS d ON b.benchmark_id = d.benchmark_id WHERE branch = '${branch}' ORDER BY b.benchmark_id DESC, d.measurement_id DESC LIMIT 1;", "refresh": 1, "regex": "", "skipUrlSync": false, "sort": 0, "type": "query" }, { "current": { "selected": false, "text": "NVIDIA A10G", "value": "NVIDIA A10G" }, "datasource": { "type": "grafana-postgresql-datasource", "uid": "be28nkzirtb0gd" }, "definition": "SELECT 
DISTINCT metadata->>'gpu_name' FROM benchmarks;", "description": "", "hide": 0, "includeAll": false, "label": "GPU", "multi": false, "name": "gpu_name", "options": [], "query": "SELECT DISTINCT metadata->>'gpu_name' FROM benchmarks;", "refresh": 1, "regex": "", "skipUrlSync": false, "sort": 0, "type": "query" }, { "current": { "selected": true, "text": "10", "value": "10" }, "description": "The number of commits to display, going from most recent to the nth commit.", "hide": 0, "label": "Last # of commits", "name": "last_n_commits", "options": [ { "selected": true, "text": "10", "value": "10" } ], "query": "10", "skipUrlSync": false, "type": "textbox" } ] }, "time": { "from": "now-1h", "to": "now" }, "timepicker": { "hidden": false }, "timezone": "browser", "title": "Transformers benchmarks", "uid": "fdz33iyzln9c0a", "version": 10, "weekStart": "" }
transformers/benchmark/grafana_dashboard.json/0
{ "file_path": "transformers/benchmark/grafana_dashboard.json", "repo_id": "transformers", "token_count": 42595 }
374
#!/bin/bash
source ~/.bashrc
echo "running docker-entrypoint.sh"
conda activate container
echo $KUBE_GOOGLE_CLOUD_TPU_ENDPOINTS
echo "printed TPU info"
# ${KUBE_GOOGLE_CLOUD_TPU_ENDPOINTS:7} drops the first 7 characters of the
# endpoint (the "grpc://" scheme), leaving the bare host:port for XRT.
export XRT_TPU_CONFIG="tpu_worker;0;${KUBE_GOOGLE_CLOUD_TPU_ENDPOINTS:7}"
exec "$@"
transformers/docker/transformers-pytorch-tpu/docker-entrypoint.sh/0
{ "file_path": "transformers/docker/transformers-pytorch-tpu/docker-entrypoint.sh", "repo_id": "transformers", "token_count": 112 }
375
# Create a custom architecture

An [`AutoClass`](model_doc/auto) automatically infers the model architecture and downloads pretrained configuration and weights. Generally, we recommend using an `AutoClass` to produce checkpoint-agnostic code. But users who want more control over specific model parameters can create a custom 🤗 Transformers model from just a few base classes. This could be particularly useful for anyone interested in studying, training, or experimenting with a 🤗 Transformers model. In this guide, we dive deeper into creating a custom model without an `AutoClass`. Learn how to:

- Load and customize a model configuration.
- Create a model architecture.
- Create a slow and fast tokenizer for text.
- Create an image processor for vision tasks.
- Create a feature extractor for audio tasks.
- Create a processor for multimodal tasks.

## Configuration

A [configuration](main_classes/configuration) refers to a model's specific attributes. Each model configuration has its own attributes; for instance, all NLP models share the `hidden_size`, `num_attention_heads`, `num_hidden_layers`, and `vocab_size` attributes. These specify the number of attention heads or hidden layers to build a model with.

Take a closer look at [DistilBERT](model_doc/distilbert) through [`DistilBertConfig`] to inspect its attributes:

```py
>>> from transformers import DistilBertConfig

>>> config = DistilBertConfig()
>>> print(config)
DistilBertConfig {
  "activation": "gelu",
  "attention_dropout": 0.1,
  "dim": 768,
  "dropout": 0.1,
  "hidden_dim": 3072,
  "initializer_range": 0.02,
  "max_position_embeddings": 512,
  "model_type": "distilbert",
  "n_heads": 12,
  "n_layers": 6,
  "pad_token_id": 0,
  "qa_dropout": 0.1,
  "seq_classif_dropout": 0.2,
  "sinusoidal_pos_embds": false,
  "transformers_version": "4.16.2",
  "vocab_size": 30522
}
```

[`DistilBertConfig`] displays all the default attributes used to build a base [`DistilBertModel`]. All attributes are customizable, which leaves room for experimentation. For example, you can customize a default model to:

- Try a different activation function with the `activation` parameter.
- Use a higher dropout ratio for the attention probabilities with the `attention_dropout` parameter.

```py
>>> my_config = DistilBertConfig(activation="relu", attention_dropout=0.4)
>>> print(my_config)
DistilBertConfig {
  "activation": "relu",
  "attention_dropout": 0.4,
```

Pretrained model attributes can be modified in the [`~PretrainedConfig.from_pretrained`] function:

```py
>>> my_config = DistilBertConfig.from_pretrained("distilbert/distilbert-base-uncased", activation="relu", attention_dropout=0.4)
```

Once you are satisfied with your model configuration, you can save it with [`~PretrainedConfig.save_pretrained`]. Your configuration file is stored as a JSON file in the specified save directory:

```py
>>> my_config.save_pretrained(save_directory="./your_model_save_path")
```

To reuse the configuration file, load it with [`~PretrainedConfig.from_pretrained`]:

```py
>>> my_config = DistilBertConfig.from_pretrained("./your_model_save_path/config.json")
```

<Tip>

You can also save your configuration file as a dictionary, or even as just the difference between your customized attributes and the default configuration attributes! See the [configuration](main_classes/configuration) documentation for more details.

</Tip>

## Model

The next step is to create a [model](main_classes/models). The model - sometimes also referred to as the architecture - defines what each layer does and which operations take place. Attributes like `num_hidden_layers` from the configuration are used to define the architecture. Every model shares the single base class [`PreTrainedModel`] and a few common methods, such as resizing the input embeddings and pruning the self-attention heads.
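Both of those shared methods can be called directly on any model instance. A minimal sketch (the new vocabulary size below is arbitrary, and the printed shape assumes DistilBERT's default embedding dimension of 768):

```py
>>> from transformers import DistilBertConfig, DistilBertModel

>>> model = DistilBertModel(DistilBertConfig())
>>> # Grow the input embeddings from the default 30522 rows to 30524
>>> model.resize_token_embeddings(30524)
Embedding(30524, 768)
>>> # Remove attention heads 0 and 1 of the first transformer layer
>>> model.prune_heads({0: [0, 1]})
```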
In addition, all models are also a subclass of either [`torch.nn.Module`](https://pytorch.org/docs/stable/generated/torch.nn.Module.html), [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) or [`flax.linen.Module`](https://flax.readthedocs.io/en/latest/api_reference/flax.linen/module.html). This means models are compatible with each of their respective framework's usage.

<frameworkcontent>
<pt>
Load your custom configuration attributes into the model:

```py
>>> from transformers import DistilBertModel

>>> my_config = DistilBertConfig.from_pretrained("./your_model_save_path/config.json")
>>> model = DistilBertModel(my_config)
```

This creates a model with random values instead of pretrained weights. You won't be able to use this model for anything useful until you train it. Training is a costly and time-consuming process. It is generally better to use a pretrained model to obtain better results faster, while using only a fraction of the resources required for training.

Create a pretrained model with [`~PreTrainedModel.from_pretrained`]:

```py
>>> model = DistilBertModel.from_pretrained("distilbert/distilbert-base-uncased")
```

When you load pretrained weights, the default model configuration is automatically loaded if the model is provided by 🤗 Transformers. However, you can still replace - some or all of - the default model configuration attributes with your own:

```py
>>> model = DistilBertModel.from_pretrained("distilbert/distilbert-base-uncased", config=my_config)
```
</pt>
<tf>
Load your custom configuration attributes into the model:

```py
>>> from transformers import TFDistilBertModel

>>> my_config = DistilBertConfig.from_pretrained("./your_model_save_path/my_config.json")
>>> tf_model = TFDistilBertModel(my_config)
```

This creates a model with random values instead of pretrained weights. You won't be able to use this model for anything useful until you train it. Training is a costly and time-consuming process. It is generally better to use a pretrained model to obtain better results faster, while using only a fraction of the resources required for training.

Create a pretrained model with [`~TFPreTrainedModel.from_pretrained`]:

```py
>>> tf_model = TFDistilBertModel.from_pretrained("distilbert/distilbert-base-uncased")
```

When you load pretrained weights, the default model configuration is automatically loaded if the model is provided by 🤗 Transformers. However, you can still replace - some or all of - the default model configuration attributes with your own:

```py
>>> tf_model = TFDistilBertModel.from_pretrained("distilbert/distilbert-base-uncased", config=my_config)
```
</tf>
</frameworkcontent>

### Model heads

At this point, you have a base DistilBERT model that outputs the *hidden states*. The hidden states are passed as inputs to a model head to produce the final output. 🤗 Transformers provides a different model head for each task, as long as the model supports the task (i.e., you can't use DistilBERT for a sequence-to-sequence task like translation).

<frameworkcontent>
<pt>
For example, [`DistilBertForSequenceClassification`] is a base DistilBERT model with a sequence classification head. The sequence classification head is a linear layer on top of the pooled outputs.

```py
>>> from transformers import DistilBertForSequenceClassification

>>> model = DistilBertForSequenceClassification.from_pretrained("distilbert/distilbert-base-uncased")
```

Easily reuse this checkpoint for another task by switching to a different model head. For a question answering task, you would use the [`DistilBertForQuestionAnswering`] model head. The question answering head is similar to the sequence classification head, except it is a linear layer on top of the hidden states output.
```py
>>> from transformers import DistilBertForQuestionAnswering

>>> model = DistilBertForQuestionAnswering.from_pretrained("distilbert/distilbert-base-uncased")
```
</pt>
<tf>
For example, [`TFDistilBertForSequenceClassification`] is a base DistilBERT model with a sequence classification head. The sequence classification head is a linear layer on top of the pooled outputs.

```py
>>> from transformers import TFDistilBertForSequenceClassification

>>> tf_model = TFDistilBertForSequenceClassification.from_pretrained("distilbert/distilbert-base-uncased")
```

Easily reuse this checkpoint for another task by switching to a different model head. For a question answering task, you would use the [`TFDistilBertForQuestionAnswering`] model head. The question answering head is similar to the sequence classification head, except it is a linear layer on top of the hidden states output.

```py
>>> from transformers import TFDistilBertForQuestionAnswering

>>> tf_model = TFDistilBertForQuestionAnswering.from_pretrained("distilbert/distilbert-base-uncased")
```
</tf>
</frameworkcontent>

## Tokenizer

The last base class you need before using a model for textual data is a [tokenizer](main_classes/tokenizer) to convert raw text to tensors. There are two types of tokenizers you can use with 🤗 Transformers:

- [`PreTrainedTokenizer`]: a Python implementation of a tokenizer.
- [`PreTrainedTokenizerFast`]: a tokenizer from the Rust-based [🤗 Tokenizers](https://huggingface.co/docs/tokenizers/python/latest/) library. This tokenizer type is significantly faster, especially during batch tokenization, thanks to its Rust implementation. The fast tokenizer also offers additional methods like *offset mapping*, which maps tokens to their original words or characters.

Both tokenizer types support common methods such as encoding and decoding, adding new tokens, and managing special tokens.

<Tip warning={true}>

Not every model supports a fast tokenizer. Take a look at this [table](index#supported-frameworks) to check if a model has fast tokenizer support.

</Tip>

If you trained your own tokenizer, you can create one from your *vocabulary* file:

```py
>>> from transformers import DistilBertTokenizer

>>> my_tokenizer = DistilBertTokenizer(vocab_file="my_vocab_file.txt", do_lower_case=False, padding_side="left")
```

It is important to remember that the vocabulary of a custom tokenizer will be different from the vocabulary generated by a pretrained model's tokenizer. You need to use a pretrained model's vocabulary if you are using a pretrained model, otherwise the inputs won't make sense.

Create a tokenizer with a pretrained model's vocabulary with the [`DistilBertTokenizer`] class:

```py
>>> from transformers import DistilBertTokenizer

>>> slow_tokenizer = DistilBertTokenizer.from_pretrained("distilbert/distilbert-base-uncased")
```

Create a fast tokenizer with the [`DistilBertTokenizerFast`] class:

```py
>>> from transformers import DistilBertTokenizerFast

>>> fast_tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert/distilbert-base-uncased")
```

<Tip>

By default, [`AutoTokenizer`] will try to load a fast tokenizer. You can disable this behavior by setting `use_fast=False` in `from_pretrained`.

</Tip>
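To illustrate the offset mapping mentioned above, here is a short sketch using the `fast_tokenizer` created in the previous example; `return_offsets_mapping` is only available on fast tokenizers.

```py
# A short sketch, assuming the `fast_tokenizer` created above.
encoding = fast_tokenizer("Custom architectures are fun!", return_offsets_mapping=True)
print(encoding.tokens())           # the subword tokens
print(encoding["offset_mapping"])  # (start, end) character spans for each token
```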
## Image processor

An image processor processes vision inputs. It inherits from the base [`~image_processing_utils.ImageProcessingMixin`] class.

To use, create an image processor associated with the model you're using. For example, create a default [`ViTImageProcessor`] if you are using [ViT](model_doc/vit) for image classification:

```py
>>> from transformers import ViTImageProcessor

>>> vit_extractor = ViTImageProcessor()
>>> print(vit_extractor)
ViTImageProcessor {
  "do_normalize": true,
  "do_resize": true,
  "image_processor_type": "ViTImageProcessor",
  "image_mean": [
    0.5,
    0.5,
    0.5
  ],
  "image_std": [
    0.5,
    0.5,
    0.5
  ],
  "resample": 2,
  "size": 224
}
```

<Tip>

If you aren't looking for any customization, just use the `from_pretrained` method to load a model's default image processor parameters.

</Tip>

Modify any of the [`ViTImageProcessor`] parameters to create your custom image processor:

```py
>>> from transformers import ViTImageProcessor

>>> my_vit_extractor = ViTImageProcessor(resample="PIL.Image.BOX", do_normalize=False, image_mean=[0.3, 0.3, 0.3])
>>> print(my_vit_extractor)
ViTImageProcessor {
  "do_normalize": false,
  "do_resize": true,
  "image_processor_type": "ViTImageProcessor",
  "image_mean": [
    0.3,
    0.3,
    0.3
  ],
  "image_std": [
    0.5,
    0.5,
    0.5
  ],
  "resample": "PIL.Image.BOX",
  "size": 224
}
```

## Backbone

<div style="text-align: center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/Backbone.png">
</div>

Computer vision models consist of a backbone, a neck, and a head. The backbone extracts features from an input image, the neck combines and enhances the extracted features, and the head is used for the main task (such as object detection). Start by initializing a backbone in the model config, and specify whether you want to load pretrained weights or randomly initialized weights. Then you can pass the model config to the model head.

For example, to load a [ResNet](../model_doc/resnet) backbone into a [MaskFormer](../model_doc/maskformer) model with an instance segmentation head:

<hfoptions id="backbone">
<hfoption id="pretrained weights">

Set `use_pretrained_backbone=True` to load pretrained ResNet weights for the backbone.

```py
from transformers import MaskFormerConfig, MaskFormerForInstanceSegmentation

config = MaskFormerConfig(backbone="microsoft/resnet-50", use_pretrained_backbone=True)  # backbone and neck config
model = MaskFormerForInstanceSegmentation(config)  # head
```

</hfoption>
<hfoption id="random weights">

Set `use_pretrained_backbone=False` to randomly initialize the ResNet backbone.

```py
from transformers import MaskFormerConfig, MaskFormerForInstanceSegmentation

config = MaskFormerConfig(backbone="microsoft/resnet-50", use_pretrained_backbone=False)  # backbone and neck config
model = MaskFormerForInstanceSegmentation(config)  # head
```

You could also load the backbone config separately and then pass it to the model config.

```py
from transformers import MaskFormerConfig, MaskFormerForInstanceSegmentation, ResNetConfig

backbone_config = ResNetConfig()
config = MaskFormerConfig(backbone_config=backbone_config)
model = MaskFormerForInstanceSegmentation(config)
```

</hfoption>
<hfoption id="timm backbone">

[timm](https://hf.co/docs/timm/index) models are loaded within a model with `use_timm_backbone=True` or with [`TimmBackbone`] and [`TimmBackboneConfig`].

Use `use_timm_backbone=True` and `use_pretrained_backbone=True` to load pretrained timm weights for the backbone.
```python
from transformers import MaskFormerConfig, MaskFormerForInstanceSegmentation

config = MaskFormerConfig(backbone="resnet50", use_pretrained_backbone=True, use_timm_backbone=True)  # backbone and neck config
model = MaskFormerForInstanceSegmentation(config)  # head
```

Set `use_timm_backbone=True` and `use_pretrained_backbone=False` to load a randomly initialized timm backbone.

```python
from transformers import MaskFormerConfig, MaskFormerForInstanceSegmentation

config = MaskFormerConfig(backbone="resnet50", use_pretrained_backbone=False, use_timm_backbone=True)  # backbone and neck config
model = MaskFormerForInstanceSegmentation(config)  # head
```

You could also load the backbone config and use it to create a `TimmBackbone`, or pass it to the model config. Pretrained timm backbone weights are loaded by default. Set `use_pretrained_backbone=False` to load randomly initialized weights.

```python
from transformers import TimmBackboneConfig, TimmBackbone

backbone_config = TimmBackboneConfig("resnet50", use_pretrained_backbone=False)

# instantiate the backbone
backbone = TimmBackbone(config=backbone_config)

# create a model with a timm backbone
from transformers import MaskFormerConfig, MaskFormerForInstanceSegmentation

config = MaskFormerConfig(backbone_config=backbone_config)
model = MaskFormerForInstanceSegmentation(config)
```

</hfoption>
</hfoptions>

## Feature extractor

A feature extractor processes audio inputs. It inherits from the base [`~feature_extraction_utils.FeatureExtractionMixin`] class, and may also inherit from the [`SequenceFeatureExtractor`] class for processing audio inputs.

To use, create a feature extractor associated with the model you're using. For example, create the default [`Wav2Vec2FeatureExtractor`] if you are using [Wav2Vec2](model_doc/wav2vec2) for audio classification:

```py
>>> from transformers import Wav2Vec2FeatureExtractor

>>> w2v2_extractor = Wav2Vec2FeatureExtractor()
>>> print(w2v2_extractor)
Wav2Vec2FeatureExtractor {
  "do_normalize": true,
  "feature_extractor_type": "Wav2Vec2FeatureExtractor",
  "feature_size": 1,
  "padding_side": "right",
  "padding_value": 0.0,
  "return_attention_mask": false,
  "sampling_rate": 16000
}
```

<Tip>

If you aren't looking for any customization, just use the `from_pretrained` method to load a model's default feature extractor parameters.

</Tip>

Modify any of the [`Wav2Vec2FeatureExtractor`] parameters to create your custom feature extractor:

```py
>>> from transformers import Wav2Vec2FeatureExtractor

>>> w2v2_extractor = Wav2Vec2FeatureExtractor(sampling_rate=8000, do_normalize=False)
>>> print(w2v2_extractor)
Wav2Vec2FeatureExtractor {
  "do_normalize": false,
  "feature_extractor_type": "Wav2Vec2FeatureExtractor",
  "feature_size": 1,
  "padding_side": "right",
  "padding_value": 0.0,
  "return_attention_mask": false,
  "sampling_rate": 8000
}
```
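As a quick sketch of how a feature extractor is applied, the example below processes one second of synthetic audio with the customized extractor from above; the random waveform is only a stand-in for real speech.

```py
import numpy as np

# A minimal sketch using the customized `w2v2_extractor` from above.
dummy_audio = np.random.randn(8000).astype(np.float32)  # 1 second of audio at 8 kHz
inputs = w2v2_extractor(dummy_audio, sampling_rate=8000, return_tensors="pt")
print(inputs["input_values"].shape)  # torch.Size([1, 8000])
```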
## Processor

For models that support multimodal tasks, 🤗 Transformers offers a processor class that conveniently wraps processing classes such as a feature extractor and a tokenizer into a single object. For example, let's use the [`Wav2Vec2Processor`] for an automatic speech recognition (ASR) task. ASR transcribes audio to text, so you will need a feature extractor and a tokenizer.

Create a feature extractor to handle the audio inputs:

```py
>>> from transformers import Wav2Vec2FeatureExtractor

>>> feature_extractor = Wav2Vec2FeatureExtractor(padding_value=1.0, do_normalize=True)
```

Create a tokenizer to handle the text inputs:

```py
>>> from transformers import Wav2Vec2CTCTokenizer

>>> tokenizer = Wav2Vec2CTCTokenizer(vocab_file="my_vocab_file.txt")
```

Combine the feature extractor and tokenizer in [`Wav2Vec2Processor`]:

```py
>>> from transformers import Wav2Vec2Processor

>>> processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)
```

With two basic classes, the configuration and the model, plus an additional preprocessing class (a tokenizer, image processor, feature extractor, or processor), you can create any of the models supported by 🤗 Transformers. Each of these base classes is configurable, allowing you to use the specific attributes you want. You can easily set up a model for training, or modify an existing pretrained model to fine-tune it.
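To tie the pieces together, here is a hedged end-to-end sketch: a customized configuration, a randomly initialized model built from it, and a pretrained tokenizer used to run a forward pass. It reuses names from this guide and is meant as an illustration rather than a training-ready setup.

```py
import torch
from transformers import DistilBertConfig, DistilBertForSequenceClassification, DistilBertTokenizerFast

# Customized configuration -> randomly initialized model with a task head.
my_config = DistilBertConfig(activation="relu", attention_dropout=0.4, num_labels=2)
model = DistilBertForSequenceClassification(my_config)

# A pretrained vocabulary keeps the tokenized inputs meaningful even though the weights are random.
tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert/distilbert-base-uncased")

inputs = tokenizer("Custom architectures are fun!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.shape)  # torch.Size([1, 2]); untrained, so the values are arbitrary
```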
transformers/docs/source/ar/create_a_model.md/0
{ "file_path": "transformers/docs/source/ar/create_a_model.md", "repo_id": "transformers", "token_count": 12069 }
376
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# How to create a custom pipeline?

In this guide, we see how to create a custom pipeline and share it on the [Hub](https://hf.co/models) or add it to the 🤗 Transformers library.

First and foremost, you need to decide which raw inputs the pipeline will be able to handle. They can be strings, raw bytes, dictionaries, or whatever else seems to be the most likely desired input. Try to keep these inputs as pure Python as possible, as that makes compatibility easier (even with other languages via JSON). Those will be the `inputs` of the pipeline (`preprocess`).

Then define the `outputs`. Same policy as for the inputs. The simpler, the better. Those will be the outputs of the `postprocess` method.

Start by inheriting from the base class `Pipeline` with the 4 methods needed to implement `preprocess`, `_forward`, `postprocess`, and `_sanitize_parameters`.

```python
from transformers import Pipeline


class MyPipeline(Pipeline):
    def _sanitize_parameters(self, **kwargs):
        preprocess_kwargs = {}
        if "maybe_arg" in kwargs:
            preprocess_kwargs["maybe_arg"] = kwargs["maybe_arg"]
        return preprocess_kwargs, {}, {}

    def preprocess(self, inputs, maybe_arg=2):
        model_input = Tensor(inputs["input_ids"])
        return {"model_input": model_input}

    def _forward(self, model_inputs):
        # model_inputs == {"model_input": model_input}
        outputs = self.model(**model_inputs)
        # Maybe {"logits": Tensor(...)}
        return outputs

    def postprocess(self, model_outputs):
        best_class = model_outputs["logits"].softmax(-1)
        return best_class
```

The structure of this breakdown is meant to support relatively seamless CPU/GPU usage, while also allowing pre/postprocessing to run on the CPU in different threads.

`preprocess` takes the originally defined inputs and turns them into something that can be fed to the model. It may contain more information and is usually a `Dict`.

`_forward` is an implementation detail and is not meant to be called directly. `forward` is the preferred method to call, as it contains safeguards that make sure everything works on the expected device. If anything is linked to a real model, it belongs in the `_forward` method; anything else goes in the preprocess/postprocess methods.

`postprocess` takes the output of `_forward` and turns it into the final output that was decided on earlier.

`_sanitize_parameters` exists to allow users to pass any parameters whenever they wish, be it at initialization time, `pipeline(...., maybe_arg=4)`, or at call time, `pipe = pipeline(...); output = pipe(...., maybe_arg=4)`.
The return value of `_sanitize_parameters` is the 3 dicts of kwargs that will be passed directly to `preprocess`, `_forward`, and `postprocess`. Don't fill anything in if the caller didn't specify any extra parameter. That keeps the default arguments in the function definition, which is always more "natural".

A classic example would be a `top_k` argument in the postprocessing of classification tasks.

```python
>>> pipe = pipeline("my-new-task")
>>> pipe("This is a test")
[{"label": "1-star", "score": 0.8}, {"label": "2-star", "score": 0.1}, {"label": "3-star", "score": 0.05}
{"label": "4-star", "score": 0.025}, {"label": "5-star", "score": 0.025}]

>>> pipe("This is a test", top_k=2)
[{"label": "1-star", "score": 0.8}, {"label": "2-star", "score": 0.1}]
```

In order to achieve that, we'll update our `postprocess` method with a default parameter of `5`, and edit `_sanitize_parameters` to allow this new parameter.

```python
def postprocess(self, model_outputs, top_k=5):
    best_class = model_outputs["logits"].softmax(-1)
    # Add logic to handle top_k
    return best_class


def _sanitize_parameters(self, **kwargs):
    preprocess_kwargs = {}
    if "maybe_arg" in kwargs:
        preprocess_kwargs["maybe_arg"] = kwargs["maybe_arg"]

    postprocess_kwargs = {}
    if "top_k" in kwargs:
        postprocess_kwargs["top_k"] = kwargs["top_k"]
    return preprocess_kwargs, {}, postprocess_kwargs
```

Try to keep the inputs/outputs very simple and ideally JSON-serializable, as that makes the pipeline very easy to use without requiring users to understand new kinds of objects. It is also relatively common to support many different types of arguments for ease of use (for example, audio files can be filenames, URLs, or raw bytes).

## Adding it to the list of supported tasks

To register your `new-task` to the list of supported tasks, you have to add it to the `PIPELINE_REGISTRY`:

```python
from transformers.pipelines import PIPELINE_REGISTRY

PIPELINE_REGISTRY.register_pipeline(
    "new-task",
    pipeline_class=MyPipeline,
    pt_model=AutoModelForSequenceClassification,
)
```

You can specify a default model if you want. In that case, it should come with a specific revision (which can be the name of a branch or a commit hash, here we took `"abcdef"`) as well as the type:

```python
PIPELINE_REGISTRY.register_pipeline(
    "new-task",
    pipeline_class=MyPipeline,
    pt_model=AutoModelForSequenceClassification,
    default={"pt": ("user/awesome_model", "abcdef")},
    type="text",  # current support type: text, audio, image, multimodal
)
```
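As a hypothetical sketch of the calling convention (the `MyPipeline` class above is itself schematic, and the model id here is only illustrative), the registered task can then be instantiated through the regular `pipeline` factory:

```python
from transformers import pipeline

# "new-task" is resolved through PIPELINE_REGISTRY; any extra kwargs flow
# through _sanitize_parameters to preprocess/_forward/postprocess.
my_pipe = pipeline("new-task", model="distilbert/distilbert-base-uncased-finetuned-sst-2-english")
outputs = my_pipe("This is a test", maybe_arg=4)
```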
## Share your pipeline on the Hub

To share your custom pipeline on the Hub, you only need to save the custom code of your `Pipeline` subclass in a Python file. For example, let's say we want to use a custom pipeline for sentence pair classification like this:

```py
import numpy as np

from transformers import Pipeline


def softmax(outputs):
    maxes = np.max(outputs, axis=-1, keepdims=True)
    shifted_exp = np.exp(outputs - maxes)
    return shifted_exp / shifted_exp.sum(axis=-1, keepdims=True)


class PairClassificationPipeline(Pipeline):
    def _sanitize_parameters(self, **kwargs):
        preprocess_kwargs = {}
        if "second_text" in kwargs:
            preprocess_kwargs["second_text"] = kwargs["second_text"]
        return preprocess_kwargs, {}, {}

    def preprocess(self, text, second_text=None):
        return self.tokenizer(text, text_pair=second_text, return_tensors=self.framework)

    def _forward(self, model_inputs):
        return self.model(**model_inputs)

    def postprocess(self, model_outputs):
        logits = model_outputs.logits[0].numpy()
        probabilities = softmax(logits)

        best_class = np.argmax(probabilities)
        label = self.model.config.id2label[best_class]
        score = probabilities[best_class].item()
        logits = logits.tolist()
        return {"label": label, "score": score, "logits": logits}
```

The implementation is framework-agnostic and will work for PyTorch and TensorFlow models. If we have saved this in a file named `pair_classification.py`, we can then import it and register it like this:

```py
from pair_classification import PairClassificationPipeline
from transformers.pipelines import PIPELINE_REGISTRY
from transformers import AutoModelForSequenceClassification, TFAutoModelForSequenceClassification

PIPELINE_REGISTRY.register_pipeline(
    "pair-classification",
    pipeline_class=PairClassificationPipeline,
    pt_model=AutoModelForSequenceClassification,
    tf_model=TFAutoModelForSequenceClassification,
)
```

Once this is done, we can use it with a pretrained model. For instance, `sgugger/finetuned-bert-mrpc` has been fine-tuned on the MRPC dataset, which classifies pairs of sentences as paraphrases or not.

```py
from transformers import pipeline

classifier = pipeline("pair-classification", model="sgugger/finetuned-bert-mrpc")
```

Then we can share it on the Hub with the `push_to_hub` method:

```py
classifier.push_to_hub("test-dynamic-pipeline")
```

This copies the file where you defined `PairClassificationPipeline` into the folder `"test-dynamic-pipeline"`, and saves the model and tokenizer of the pipeline, before pushing everything into the repository `{your_username}/test-dynamic-pipeline`. After that, anyone can use the pipeline as long as they provide the option `trust_remote_code=True`:

```py
from transformers import pipeline

classifier = pipeline(model="{your_username}/test-dynamic-pipeline", trust_remote_code=True)
```

## Adding the pipeline to 🤗 Transformers

If you want to contribute your pipeline to 🤗 Transformers, you need to add a new module in the `pipelines` submodule with the code of your pipeline. Then add it to the list of tasks defined in `pipelines/__init__.py`.

Then you need to add tests. Create a new file `tests/test_pipelines_MY_PIPELINE.py` with examples of the other tests. The `run_pipeline_test` function is very generic and runs on small random models on every possible architecture, as defined by `model_mapping` and `tf_model_mapping`. This is very important for testing future compatibility, meaning
that if someone adds a new model for `XXXForQuestionAnswering`, the pipeline test will attempt to run on it. Because the models are random, it is impossible to check actual values; that is why there is a helper `ANY` that simply tries to match the TYPE of the pipeline output.

You also *must* implement 2 (ideally 4) tests.

- `test_small_model_pt`: Define 1 small model for this pipeline (it doesn't matter if the results make no sense) and test the pipeline outputs. The results should be the same as those of `test_small_model_tf`.
- `test_small_model_tf`: Define 1 small model for this pipeline (it doesn't matter if the results make no sense) and test the pipeline outputs. The results should be the same as those of `test_small_model_pt`.
- `test_large_model_pt` (`optional`): Tests the pipeline on a real pipeline where the results are supposed to make sense. These tests are slow and should be marked as such. The goal here is to showcase the pipeline and to make sure there is no drift in future releases.
- `test_large_model_tf` (`optional`): Tests the pipeline on a real pipeline where the results are supposed to make sense. These tests are slow and should be marked as such. The goal here is to showcase the pipeline and to make sure there is no drift in future releases.
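As a hedged illustration of what such a small-model test might look like, the sketch below assumes the `pair-classification` pipeline from the example above has been registered and that a tiny random checkpoint with the name used here exists; it checks only the output structure, since the scores of a random model are meaningless.

```python
import unittest

from transformers import pipeline


class PairClassificationPipelineTests(unittest.TestCase):
    def test_small_model_pt(self):
        # A tiny random checkpoint keeps the test fast; the model id is an assumption.
        pipe = pipeline(
            "pair-classification",
            model="hf-internal-testing/tiny-random-DistilBertForSequenceClassification",
        )
        output = pipe("I like pizza", second_text="I enjoy pizza")
        # Only the shape of the output is stable for a random model.
        self.assertIsInstance(output["label"], str)
        self.assertIsInstance(output["score"], float)
        self.assertIsInstance(output["logits"], list)
```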
transformers/docs/source/de/add_new_pipeline.md/0
{ "file_path": "transformers/docs/source/de/add_new_pipeline.md", "repo_id": "transformers", "token_count": 4535 }
377
# Optimizing inference

perf_infer_gpu_many: perf_infer_gpu_one
transformers_agents: agents
quantization: quantization/overview
transformers/docs/source/en/_redirects.yml/0
{ "file_path": "transformers/docs/source/en/_redirects.yml", "repo_id": "transformers", "token_count": 41 }
378
<!---
Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# Contribute to 🤗 Transformers

Everyone is welcome to contribute, and we value everybody's contribution. Code contributions are not the only way to help the community. Answering questions, helping others, and improving the documentation are also immensely valuable.

It also helps us if you spread the word! Reference the library in blog posts about the awesome projects it made possible, shout out on Twitter every time it has helped you, or simply ⭐️ the repository to say thank you.

However you choose to contribute, please be mindful and respect our [code of conduct](https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md).

**This guide was heavily inspired by the awesome [scikit-learn guide to contributing](https://github.com/scikit-learn/scikit-learn/blob/main/CONTRIBUTING.md).**

## Ways to contribute

There are several ways you can contribute to 🤗 Transformers:

* Fix outstanding issues with the existing code.
* Submit issues related to bugs or desired new features.
* Implement new models.
* Contribute to the examples or to the documentation.

If you don't know where to start, there is a special [Good First Issue](https://github.com/huggingface/transformers/contribute) listing. It will give you a list of open issues that are beginner-friendly and help you start contributing to open-source. The best way to do that is to open a Pull Request and link it to the issue that you'd like to work on. We try to give priority to opened PRs as we can easily track the progress of the fix, and if the contributor does not have time anymore, someone else can take the PR over.

For something slightly more challenging, you can also take a look at the [Good Second Issue](https://github.com/huggingface/transformers/labels/Good%20Second%20Issue) list. In general though, if you feel like you know what you're doing, go for it and we'll help you get there! 🚀

> All contributions are equally valuable to the community. 🥰

## Fixing outstanding issues

If you notice an issue with the existing code and have a fix in mind, feel free to [start contributing](#create-a-pull-request) and open a Pull Request!

## Submitting a bug-related issue or feature request

Do your best to follow these guidelines when submitting a bug-related issue or a feature request. It will make it easier for us to come back to you quickly and with good feedback.

### Did you find a bug?

The 🤗 Transformers library is robust and reliable thanks to users who report the problems they encounter.

Before you report an issue, we would really appreciate it if you could **make sure the bug was not already reported** (use the search bar on GitHub under Issues). Your issue should also be related to bugs in the library itself, and not your code. If you're unsure whether the bug is in your code or the library, please ask in the [forum](https://discuss.huggingface.co/) or on our [discord](https://discord.com/invite/hugging-face-879548962464493619) first.
This helps us respond quicker to fixing issues related to the library versus general questions.

> [!TIP]
> We have a [docs bot](https://huggingface.co/spaces/huggingchat/hf-docs-chat), and we highly encourage you to ask all your questions there. There is always a chance your bug can be fixed with a simple flag 👾🔫

Once you've confirmed the bug hasn't already been reported, please include the following information in your issue so we can quickly resolve it:

* Your **OS type and version** and **Python**, **PyTorch** and **TensorFlow** versions when applicable.
* A short, self-contained, code snippet that allows us to reproduce the bug in less than 30s.
* The *full* traceback if an exception is raised.
* Attach any other additional information, like screenshots, you think may help.

To get the OS and software versions automatically, run the following command:

```bash
transformers env
```

You can also run the same command from the root of the repository:

```bash
python src/transformers/commands/transformers_cli.py env
```

### Do you want a new feature?

If there is a new feature you'd like to see in 🤗 Transformers, please open an issue and describe:

1. What is the *motivation* behind this feature? Is it related to a problem or frustration with the library? Is it a feature related to something you need for a project? Is it something you worked on and think it could benefit the community?

   Whatever it is, we'd love to hear about it!

2. Describe your requested feature in as much detail as possible. The more you can tell us about it, the better we'll be able to help you.
3. Provide a *code snippet* that demonstrates the feature's usage.
4. If the feature is related to a paper, please include a link.

If your issue is well written, we're already 80% of the way there by the time you create it.

We have added [templates](https://github.com/huggingface/transformers/tree/main/templates) to help you get started with your issue.

## Do you want to implement a new model?

New models are constantly released and if you want to implement a new model, please provide the following information:

* A short description of the model and a link to the paper.
* Link to the implementation if it is open-sourced.
* Link to the model weights if they are available.

If you are willing to contribute the model yourself, let us know so we can help you add it to 🤗 Transformers!

We have a technical guide for [how to add a model to 🤗 Transformers](https://huggingface.co/docs/transformers/add_new_model).

## Do you want to add documentation?

We're always looking for improvements to the documentation that make it more clear and accurate. Please let us know how the documentation can be improved, such as typos and any content that is missing, unclear or inaccurate. We'll be happy to make the changes or help you make a contribution if you're interested!

For more details about how to generate, build, and write the documentation, take a look at the documentation [README](https://github.com/huggingface/transformers/tree/main/docs).

## Create a Pull Request

Before writing any code, we strongly advise you to search through the existing PRs or issues to make sure nobody is already working on the same thing. If you are unsure, it is always a good idea to open an issue to get some feedback.

You will need basic `git` proficiency to contribute to 🤗 Transformers. While `git` is not the easiest tool to use, it has the greatest manual. Type `git --help` in a shell and enjoy!
If you prefer books, [Pro Git](https://git-scm.com/book/en/v2) is a very good reference.

You'll need **[Python 3.9](https://github.com/huggingface/transformers/blob/main/setup.py#L449)** or above to contribute to 🤗 Transformers. Follow the steps below to start contributing:

1. Fork the [repository](https://github.com/huggingface/transformers) by clicking on the **[Fork](https://github.com/huggingface/transformers/fork)** button on the repository's page. This creates a copy of the code under your GitHub user account.

2. Clone your fork to your local disk, and add the base repository as a remote:

   ```bash
   git clone git@github.com:<your Github handle>/transformers.git
   cd transformers
   git remote add upstream https://github.com/huggingface/transformers.git
   ```

3. Create a new branch to hold your development changes:

   ```bash
   git checkout -b a-descriptive-name-for-my-changes
   ```

   🚨 **Do not** work on the `main` branch!

4. Set up a development environment by running the following command in a virtual environment:

   ```bash
   pip install -e ".[dev]"
   ```

   If 🤗 Transformers was already installed in the virtual environment, remove it with `pip uninstall transformers` before reinstalling it in editable mode with the `-e` flag.

   Depending on your OS, and since the number of optional dependencies of Transformers is growing, you might get a failure with this command. If that's the case, make sure to install the Deep Learning framework you are working with (PyTorch, TensorFlow and/or Flax) then do:

   ```bash
   pip install -e ".[quality]"
   ```

   which should be enough for most use cases.

5. Develop the features in your branch.

   As you work on your code, you should make sure the test suite passes. Run the tests impacted by your changes like this:

   ```bash
   pytest tests/<TEST_TO_RUN>.py
   ```

   For more information about tests, check out the [Testing](https://huggingface.co/docs/transformers/testing) guide.

   🤗 Transformers relies on `black` and `ruff` to format its source code consistently. After you make changes, apply automatic style corrections and code verifications that can't be automated in one go with:

   ```bash
   make fixup
   ```

   This target is also optimized to only work with files modified by the PR you're working on.

   If you prefer to run the checks one after the other, the following command applies the style corrections:

   ```bash
   make style
   ```

   🤗 Transformers also uses `ruff` and a few custom scripts to check for coding mistakes. Quality controls are run by the CI, but you can run the same checks with:

   ```bash
   make quality
   ```

   Finally, we have a lot of scripts to make sure we don't forget to update some files when adding a new model. You can run these scripts with:

   ```bash
   make repo-consistency
   ```

   To learn more about those checks and how to fix any issues with them, check out the [Checks on a Pull Request](https://huggingface.co/docs/transformers/pr_checks) guide.

   If you're modifying documents under the `docs/source` directory, make sure the documentation can still be built. This check will also run in the CI when you open a pull request. To run a local check, make sure you install the [documentation builder](https://github.com/huggingface/doc-builder).

   ```bash
   pip install hf-doc-builder
   ```

   Run the following command from the root of the repository:

   ```bash
   doc-builder build transformers docs/source/en --build_dir ~/tmp/test-build
   ```

   This will build the documentation in the `~/tmp/test-build` folder where you can inspect the generated Markdown files with your favorite editor.
   You can also preview the docs on GitHub when you open a pull request.

   Once you're happy with your changes, add the changed files with `git add` and record your changes locally with `git commit`:

   ```bash
   git add modified_file.py
   git commit
   ```

   Please remember to write [good commit messages](https://chris.beams.io/posts/git-commit/) to clearly communicate the changes you made!

   To keep your copy of the code up to date with the original repository, rebase your branch on `upstream/branch` *before* you open a pull request or if requested by a maintainer:

   ```bash
   git fetch upstream
   git rebase upstream/main
   ```

   Push your changes to your branch:

   ```bash
   git push -u origin a-descriptive-name-for-my-changes
   ```

   If you've already opened a pull request, you'll need to force push with the `--force` flag. Otherwise, if the pull request hasn't been opened yet, you can just push your changes normally.

6. Now you can go to your fork of the repository on GitHub and click on **Pull Request** to open a pull request. Make sure you tick off all the boxes on our [checklist](#pull-request-checklist) below. When you're ready, you can send your changes to the project maintainers for review.

7. It's ok if maintainers request changes, it happens to our core contributors too! So everyone can see the changes in the pull request, work in your local branch and push the changes to your fork. They will automatically appear in the pull request.

### Pull request checklist

☐ The pull request title should summarize your contribution.<br>
☐ If your pull request addresses an issue, please mention the issue number in the pull request description to make sure they are linked (and people viewing the issue know you are working on it).<br>
☐ To indicate a work in progress please prefix the title with `[WIP]`. These are useful to avoid duplicated work, and to differentiate it from PRs ready to be merged.<br>
☐ Make sure existing tests pass.<br>
☐ If adding a new feature, also add tests for it.<br>
   - If you are adding a new model, make sure you use `ModelTester.all_model_classes = (MyModel, MyModelWithLMHead,...)` to trigger the common tests.
   - If you are adding new `@slow` tests, make sure they pass using `RUN_SLOW=1 python -m pytest tests/models/my_new_model/test_my_new_model.py`.
   - If you are adding a new tokenizer, write tests and make sure `RUN_SLOW=1 python -m pytest tests/models/{your_model_name}/test_tokenization_{your_model_name}.py` passes.
   - CircleCI does not run the slow tests, but GitHub Actions does every night!<br>

☐ All public methods must have informative docstrings (see [`modeling_bert.py`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/modeling_bert.py) for an example).<br>
☐ Due to the rapidly growing repository, don't add any images, videos and other non-text files that'll significantly weigh down the repository. Instead, use a Hub repository such as [`hf-internal-testing`](https://huggingface.co/hf-internal-testing) to host these files and reference them by URL. We recommend placing documentation related images in the following repository: [huggingface/documentation-images](https://huggingface.co/datasets/huggingface/documentation-images). You can open a PR on this dataset repository and ask a Hugging Face member to merge it.

For more information about the checks run on a pull request, take a look at our [Checks on a Pull Request](https://huggingface.co/docs/transformers/pr_checks) guide.
### Tests

An extensive test suite is included to test the library behavior and several examples. Library tests can be found in the [tests](https://github.com/huggingface/transformers/tree/main/tests) folder and examples tests in the [examples](https://github.com/huggingface/transformers/tree/main/examples) folder.

We like `pytest` and `pytest-xdist` because it's faster. From the root of the repository, specify a *path to a subfolder or a test file* to run the test:

```bash
python -m pytest -n auto --dist=loadfile -s -v ./tests/models/my_new_model
```

Similarly, for the `examples` directory, specify a *path to a subfolder or test file* to run the test. For example, the following command tests the text classification subfolder in the PyTorch `examples` directory:

```bash
pip install -r examples/xxx/requirements.txt  # only needed the first time
python -m pytest -n auto --dist=loadfile -s -v ./examples/pytorch/text-classification
```

In fact, this is actually how our `make test` and `make test-examples` commands are implemented (not including the `pip install`)!

You can also specify a smaller set of tests in order to test only the feature you're working on.

By default, slow tests are skipped but you can set the `RUN_SLOW` environment variable to `yes` to run them. This will download many gigabytes of models so make sure you have enough disk space, a good internet connection or a lot of patience!

<Tip warning={true}>

Remember to specify a *path to a subfolder or a test file* to run the test. Otherwise, you'll run all the tests in the `tests` or `examples` folder, which will take a very long time!

</Tip>

```bash
RUN_SLOW=yes python -m pytest -n auto --dist=loadfile -s -v ./tests/models/my_new_model
RUN_SLOW=yes python -m pytest -n auto --dist=loadfile -s -v ./examples/pytorch/text-classification
```

Like the slow tests, there are other environment variables available which are not enabled by default during testing:

- `RUN_CUSTOM_TOKENIZERS`: Enables tests for custom tokenizers.

More environment variables and additional information can be found in the [testing_utils.py](https://github.com/huggingface/transformers/blob/main/src/transformers/testing_utils.py).

🤗 Transformers uses `pytest` as a test runner only. It doesn't use any `pytest`-specific features in the test suite itself. This means `unittest` is fully supported. Here's how to run tests with `unittest`:

```bash
python -m unittest discover -s tests -t . -v
python -m unittest discover -s examples -t examples -v
```

### Style guide

For documentation strings, 🤗 Transformers follows the [Google Python Style Guide](https://google.github.io/styleguide/pyguide.html). Check our [documentation writing guide](https://github.com/huggingface/transformers/tree/main/docs#writing-documentation---specification) for more information.

### Develop on Windows

On Windows (unless you're working in [Windows Subsystem for Linux](https://learn.microsoft.com/en-us/windows/wsl/) or WSL), you need to configure git to transform Windows `CRLF` line endings to Linux `LF` line endings:

```bash
git config core.autocrlf input
```

One way to run the `make` command on Windows is with MSYS2:

1. [Download MSYS2](https://www.msys2.org/), and we assume it's installed in `C:\msys64`.
2. Open the command line `C:\msys64\msys2.exe` (it should be available from the **Start** menu).
3. Run in the shell: `pacman -Syu` and install `make` with `pacman -S make`.
4. Add `C:\msys64\usr\bin` to your PATH environment variable.
You can now use `make` from any terminal (PowerShell, cmd.exe, etc.)! 🎉

### Sync a forked repository with upstream main (the Hugging Face repository)

When updating the main branch of a forked repository, please follow these steps to avoid pinging the upstream repository, which adds reference notes to each upstream PR and sends unnecessary notifications to the developers involved in these PRs.

1. When possible, avoid syncing with the upstream using a branch and PR on the forked repository. Instead, merge directly into the forked main.
2. If a PR is absolutely necessary, use the following steps after checking out your branch:

   ```bash
   git checkout -b your-branch-for-syncing
   git pull --squash --no-commit upstream main
   git commit -m '<your message without GitHub references>'
   git push --set-upstream origin your-branch-for-syncing
   ```
transformers/docs/source/en/contributing.md/0
{ "file_path": "transformers/docs/source/en/contributing.md", "repo_id": "transformers", "token_count": 5163 }
379
<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Image processors

Image processors convert images into pixel values, the tensors that represent image colors and size. The pixel values are inputs to a vision model. To ensure a pretrained model receives the correct input, an image processor can perform the following operations to make sure an image is exactly like the images a model was pretrained on.

- [`~BaseImageProcessor.center_crop`] to resize an image
- [`~BaseImageProcessor.normalize`] or [`~BaseImageProcessor.rescale`] pixel values

Use [`~ImageProcessingMixin.from_pretrained`] to load an image processor's configuration (image size, whether to normalize and rescale, etc.) from a vision model on the Hugging Face [Hub](https://hf.co) or a local directory. The configuration for each pretrained model is saved in a [preprocessor_config.json](https://huggingface.co/google/vit-base-patch16-224/blob/main/preprocessor_config.json) file.

```py
from transformers import AutoImageProcessor

image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
```

Pass an image to the image processor to transform it into pixel values, and set `return_tensors="pt"` to return PyTorch tensors. Feel free to print out the inputs to see what the image looks like as a tensor.

```py
from PIL import Image
import requests

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/image_processor_example.png"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
inputs = image_processor(image, return_tensors="pt")
```

This guide covers the image processor class and how to preprocess images for vision models.

## Image processor classes

Image processors inherit from the [`BaseImageProcessor`] class, which provides the [`~BaseImageProcessor.center_crop`], [`~BaseImageProcessor.normalize`], and [`~BaseImageProcessor.rescale`] functions. There are two types of image processors.

- [`BaseImageProcessor`] is a Python implementation.
- [`BaseImageProcessorFast`] is a faster [torchvision-backed](https://pytorch.org/vision/stable/index.html) version. For a batch of [torch.Tensor](https://pytorch.org/docs/stable/tensors.html) inputs, this can be up to 33x faster. [`BaseImageProcessorFast`] is not available for all vision models at the moment. Refer to a model's API documentation to check if it is supported.

Each image processor subclasses the [`ImageProcessingMixin`] class, which provides the [`~ImageProcessingMixin.from_pretrained`] and [`~ImageProcessingMixin.save_pretrained`] methods for loading and saving image processors.

There are two ways you can load an image processor, with [`AutoImageProcessor`] or a model-specific image processor.
<hfoptions id="image-processor-classes">
<hfoption id="AutoImageProcessor">

The [AutoClass](./model_doc/auto) API provides a convenient method to load an image processor without directly specifying the model the image processor is associated with.

Use [`~AutoImageProcessor.from_pretrained`] to load an image processor, and set `use_fast=True` to load a fast image processor if it's supported.

```py
from transformers import AutoImageProcessor

image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224", use_fast=True)
```

</hfoption>
<hfoption id="model-specific image processor">

Each image processor is associated with a specific pretrained vision model, and the image processor's configuration contains the model's expected size and whether to normalize and resize.

The image processor can be loaded directly from the model-specific class. Check a model's API documentation to see whether it supports a fast image processor.

```py
from transformers import ViTImageProcessor

image_processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224")
```

To load a fast image processor, use the fast implementation class.

```py
from transformers import ViTImageProcessorFast

image_processor = ViTImageProcessorFast.from_pretrained("google/vit-base-patch16-224")
```

</hfoption>
</hfoptions>

## Fast image processors

[`BaseImageProcessorFast`] is based on [torchvision](https://pytorch.org/vision/stable/index.html) and is significantly faster, especially when processing on a GPU. This class can be used as a drop-in replacement for [`BaseImageProcessor`] if it's available for a model, because it has the same design. Make sure [torchvision](https://pytorch.org/get-started/locally/#mac-installation) is installed, and set the `use_fast` parameter to `True`.

```py
from transformers import AutoImageProcessor

processor = AutoImageProcessor.from_pretrained("facebook/detr-resnet-50", use_fast=True)
```

Control which device processing is performed on with the `device` parameter. Processing is performed on the same device as the input by default if the inputs are tensors, otherwise they are processed on the CPU. The example below places the fast processor on a GPU.

```py
from torchvision.io import read_image
from transformers import DetrImageProcessorFast

images = read_image("image.jpg")
processor = DetrImageProcessorFast.from_pretrained("facebook/detr-resnet-50")
images_processed = processor(images, return_tensors="pt", device="cuda")
```

<details>
<summary>Benchmarks</summary>

The benchmarks are obtained from an [AWS EC2 g5.2xlarge](https://aws.amazon.com/ec2/instance-types/g5/) instance with a NVIDIA A10G Tensor Core GPU.

<div class="flex">
   <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/benchmark_results_full_pipeline_detr_fast_padded.png" />
</div>
<div class="flex">
   <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/benchmark_results_full_pipeline_detr_fast_batched_compiled.png" />
</div>
<div class="flex">
   <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/benchmark_results_full_pipeline_rt_detr_fast_single.png" />
</div>
<div class="flex">
   <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/benchmark_results_full_pipeline_rt_detr_fast_batched.png" />
</div>

</details>
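To get a rough feel for the difference on your own machine, a simple (and admittedly unscientific) timing sketch could look like the one below; the synthetic image and the repetition count are arbitrary choices, and CPU-only timings will understate the GPU speedup.

```py
import time
from PIL import Image
from transformers import AutoImageProcessor

image = Image.new("RGB", (640, 480))  # synthetic stand-in for a real photo

slow = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224", use_fast=False)
fast = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224", use_fast=True)

for name, proc in [("slow", slow), ("fast", fast)]:
    start = time.perf_counter()
    for _ in range(100):
        proc(image, return_tensors="pt")
    print(name, time.perf_counter() - start)
```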
## Preprocess

Transformers' vision models expect the input as PyTorch tensors of pixel values. An image processor handles the conversion of images to pixel values, which are represented by the batch size, number of channels, height, and width. To achieve this, an image is resized (center cropped) and the pixel values are normalized and rescaled to the model's expected values.

Image preprocessing is not the same as *image augmentation*. Image augmentation makes changes (brightness, colors, rotation, etc.) to an image for the purpose of either creating new training examples or preventing overfitting. Image preprocessing makes changes to an image for the purpose of matching a pretrained model's expected input format.

Typically, images are augmented (to increase performance) and then preprocessed before being passed to a model. You can use any library ([Albumentations](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification_albumentations.ipynb), [Kornia](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification_kornia.ipynb)) for augmentation and an image processor for preprocessing. This guide uses the torchvision [transforms](https://pytorch.org/vision/stable/transforms.html) module for augmentation.

Start by loading a small sample of the [food101](https://hf.co/datasets/food101) dataset.

```py
from datasets import load_dataset

dataset = load_dataset("food101", split="train[:100]")
```

From the [transforms](https://pytorch.org/vision/stable/transforms.html) module, use the [Compose](https://pytorch.org/vision/master/generated/torchvision.transforms.Compose.html) API to chain together [RandomResizedCrop](https://pytorch.org/vision/main/generated/torchvision.transforms.RandomResizedCrop.html) and [ColorJitter](https://pytorch.org/vision/main/generated/torchvision.transforms.ColorJitter.html). These transforms randomly crop and resize an image, and randomly adjust an image's colors.

The image size to randomly crop to can be retrieved from the image processor. For some models, an exact height and width are expected, while for others only the `shortest_edge` is required.

```py
from torchvision.transforms import RandomResizedCrop, ColorJitter, Compose

size = (
    image_processor.size["shortest_edge"]
    if "shortest_edge" in image_processor.size
    else (image_processor.size["height"], image_processor.size["width"])
)
_transforms = Compose([RandomResizedCrop(size), ColorJitter(brightness=0.5, hue=0.5)])
```

Apply the transforms to the images and convert them to the RGB format. Then pass the augmented images to the image processor to return the pixel values.

The `do_resize` parameter is set to `False` because the images have already been resized in the augmentation step by [RandomResizedCrop](https://pytorch.org/vision/main/generated/torchvision.transforms.RandomResizedCrop.html). If you don't augment the images, then the image processor automatically resizes and normalizes the images with the `image_mean` and `image_std` values. These values are found in the preprocessor configuration file.

```py
def transforms(examples):
    images = [_transforms(img.convert("RGB")) for img in examples["image"]]
    examples["pixel_values"] = image_processor(images, do_resize=False, return_tensors="pt")["pixel_values"]
    return examples
```

Apply the combined augmentation and preprocessing function to the entire dataset on the fly with [`~datasets.Dataset.set_transform`].

```py
dataset.set_transform(transforms)
```

Convert the pixel values back into an image to see how the image has been augmented and preprocessed.
```py
import numpy as np
import matplotlib.pyplot as plt

img = dataset[0]["pixel_values"]
plt.imshow(img.permute(1, 2, 0))
```

<div class="flex gap-4">
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/vision-preprocess-tutorial.png" />
    <figcaption class="mt-2 text-center text-sm text-gray-500">before</figcaption>
  </div>
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/preprocessed_image.png" />
    <figcaption class="mt-2 text-center text-sm text-gray-500">after</figcaption>
  </div>
</div>

For other vision tasks like object detection or segmentation, the image processor includes post-processing methods to convert a model's raw output into meaningful predictions like bounding boxes or segmentation maps.

### Padding

Some models, like [DETR](./model_doc/detr), apply [scale augmentation](https://paperswithcode.com/method/image-scale-augmentation) during training, which can cause images in a batch to have different sizes. Images with different sizes can't be batched together.

To fix this, pad the images with the special padding token `0`. Use the [pad](https://github.com/huggingface/transformers/blob/9578c2597e2d88b6f0b304b5a05864fd613ddcc1/src/transformers/models/detr/image_processing_detr.py#L1151) method to pad the images, and define a custom collate function to batch them together.

```py
def collate_fn(batch):
    pixel_values = [item["pixel_values"] for item in batch]
    encoding = image_processor.pad(pixel_values, return_tensors="pt")
    labels = [item["labels"] for item in batch]
    batch = {}
    batch["pixel_values"] = encoding["pixel_values"]
    batch["pixel_mask"] = encoding["pixel_mask"]
    batch["labels"] = labels
    return batch
```
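The collate function can then be plugged into a PyTorch `DataLoader`; the sketch below assumes a `dataset` whose items are dicts with `pixel_values` and `labels` keys, as produced by a detection preprocessing step.

```py
from torch.utils.data import DataLoader

# Each batch now contains padded pixel values, a pixel mask, and the labels.
dataloader = DataLoader(dataset, batch_size=4, shuffle=True, collate_fn=collate_fn)
batch = next(iter(dataloader))
print(batch["pixel_values"].shape, batch["pixel_mask"].shape)
```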
transformers/docs/source/en/image_processors.md/0
{ "file_path": "transformers/docs/source/en/image_processors.md", "repo_id": "transformers", "token_count": 3596 }
380
<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Optimizing inference

Inference with large language models (LLMs) can be challenging because they have to store and handle billions of parameters. Loading a 70B parameter [Llama 2](https://hf.co/meta-llama/Llama-2-70b-hf) model requires 256GB of memory for full precision weights and 128GB of memory for half-precision weights. The most powerful GPUs today - the A100 and H100 - only have 80GB of memory.

On top of the memory requirements, inference is slow because LLMs are called repeatedly to generate the next token. The input sequence grows as generation progresses, which takes longer and longer to process.

This guide will show you how to optimize LLM inference to accelerate generation and reduce memory usage.

> [!TIP]
> Try out [Text Generation Inference (TGI)](https://hf.co/docs/text-generation-inference), a Hugging Face library dedicated to deploying and serving highly optimized LLMs for inference.

## Static kv-cache and torch.compile

LLMs compute key-value (kv) pairs for each input token, and they perform the same kv computation each time because the generated output becomes part of the input. However, performing the same kv computation every time is not very efficient.

A *kv-cache* stores the past keys and values instead of recomputing them each time. As a result, the kv-cache is dynamic and it grows with each generation step, which prevents you from taking advantage of [torch.compile](./perf_torch_compile), a powerful optimization method that fuses PyTorch code into optimized kernels.

The *static kv-cache* solves this issue by pre-allocating the kv-cache size to a maximum value, so you can combine it with [torch.compile](./perf_torch_compile) for up to a 4x speed up. Your speed up may vary depending on the model size (larger models have a smaller speed up) and hardware.

> [!WARNING]
> Follow this [issue](https://github.com/huggingface/transformers/issues/28981) to track which models (Llama, Gemma, Mistral, etc.) support a static kv-cache and torch.compile.

Depending on your task, there are several ways you can use the static kv-cache.

1. For basic use cases, set [cache_implementation](https://hf.co/docs/transformers/main_classes/text_generation#transformers.GenerationConfig.cache_implementation) to `"static"` (recommended).
2. For multi-turn generation or a custom generation loop, initialize and handle [`StaticCache`] directly.
3. For more unique hardware or use cases, it may be better to compile the entire [`~GenerationMixin.generate`] function into a single graph.
> [!TIP]
> Regardless of how you use the static kv-cache and torch.compile, left-pad your inputs with [pad_to_multiple_of](https://hf.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizer.__call__.pad_to_multiple_of) to a limited set of values to avoid shape-related recompilations.

<hfoptions id="static-kv">
<hfoption id="1. cache_implementation">

1. Set the [cache_implementation](https://hf.co/docs/transformers/main_classes/text_generation#transformers.GenerationConfig.cache_implementation) to `"static"` in a model's [`GenerationConfig`].
2. Call [torch.compile](./perf_torch_compile) to compile the forward pass with the static kv-cache.

```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
import os

os.environ["TOKENIZERS_PARALLELISM"] = "false"  # To prevent long warnings :)

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", dtype="auto", device_map="auto")

model.generation_config.cache_implementation = "static"

model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True)
input_text = "The theory of special relativity states "
input_ids = tokenizer(input_text, return_tensors="pt").to(model.device.type)

outputs = model.generate(**input_ids)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
['The theory of special relativity states 1. The speed of light is constant in all inertial reference']
```

Under the hood, [`~GenerationMixin.generate`] attempts to reuse the same cache object to avoid recompilation at each call, which is critical to get the most out of [torch.compile](./perf_torch_compile). Be aware of the following to avoid triggering recompilation or if generation is slower than expected.

1. If the batch size changes or the maximum output length increases between calls, the cache is reinitialized and recompiled.
2. The first several calls of the compiled function are slower because it is being compiled.

</hfoption>
<hfoption id="2. StaticCache">

Directly initialize a [`StaticCache`] object and pass it to the `past_key_values` parameter in [`~GenerationMixin.generate`]. The [`StaticCache`] keeps the cache contents, so you can pass it to a new [`~GenerationMixin.generate`] call to continue generation, similar to a dynamic cache.

```py
from transformers import AutoTokenizer, AutoModelForCausalLM, StaticCache
import torch
import os

os.environ["TOKENIZERS_PARALLELISM"] = "false"  # To prevent long warnings :)

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", dtype="auto", device_map="auto")

model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True)
input_text = "The theory of special relativity states "
input_ids = tokenizer(input_text, return_tensors="pt").to(model.device.type)
prompt_length = input_ids.input_ids.shape[1]
model.generation_config.max_new_tokens = 16

past_key_values = StaticCache(
    config=model.config,
    # If you plan to reuse the cache, make sure the cache length is large enough for all cases
    max_cache_len=prompt_length+(model.generation_config.max_new_tokens*2),
)
outputs = model.generate(**input_ids, past_key_values=past_key_values)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
['The theory of special relativity states 1. The speed of light is constant in all inertial reference frames. 2']

# pass in the generated text and the same cache object to continue generation from where it left off. Optionally, in a
# Optionally, in a multi-turn conversation, append the new user input to the generated text.
new_input_ids = outputs
outputs = model.generate(new_input_ids, past_key_values=past_key_values)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
['The theory of special relativity states 1. The speed of light is constant in all inertial reference frames. 2. The speed of light is constant in all inertial reference frames. 3.']
```

> [!TIP]
> To reuse [`StaticCache`] on a new prompt, use [`~StaticCache.reset`] to reset the cache contents between calls.

Another option for using [`StaticCache`] is to pass it to a model's forward pass using the same `past_key_values` argument. This allows you to write your own custom decoding function to decode the next token given the current token, position, and cache position of previously generated tokens.

```py
from transformers import LlamaTokenizer, LlamaForCausalLM, StaticCache, infer_device
import torch

prompts = [
    "Simply put, the theory of relativity states that ",
    "My favorite all time favorite condiment is ketchup.",
]

NUM_TOKENS_TO_GENERATE = 40
torch_device = infer_device()

tokenizer = LlamaTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf", pad_token="</s>", padding_side="right")
model = LlamaForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf", device_map="sequential")
inputs = tokenizer(prompts, return_tensors="pt", padding=True).to(model.device)

def decode_one_tokens(model, cur_token, input_pos, cache_position, past_key_values):
    logits = model(
        cur_token,
        position_ids=input_pos,
        cache_position=cache_position,
        past_key_values=past_key_values,
        return_dict=False,
        use_cache=True
    )[0]
    new_token = torch.argmax(logits[:, -1], dim=-1)[:, None]
    return new_token
```

To enable static kv-cache and [torch.compile](./perf_torch_compile) with [`StaticCache`], follow the steps below.

1. Initialize [`StaticCache`] before using the model for inference to configure parameters like the maximum batch size and sequence length.
2. Call [torch.compile](./perf_torch_compile) on the model to compile the forward pass with the static kv-cache.
3. Use SDPBackend.MATH in the [torch.nn.attention.sdpa_kernel](https://pytorch.org/docs/stable/generated/torch.nn.attention.sdpa_kernel.html) context manager to enable the native PyTorch C++ implementation of scaled dot product attention to speed up inference even more.
```py
from torch.nn.attention import SDPBackend, sdpa_kernel

batch_size, seq_length = inputs["input_ids"].shape
with torch.no_grad():
    past_key_values = StaticCache(
        config=model.config, max_cache_len=4096
    )
    cache_position = torch.arange(seq_length, device=torch_device)
    generated_ids = torch.zeros(
        batch_size, seq_length + NUM_TOKENS_TO_GENERATE + 1, dtype=torch.int, device=torch_device
    )
    generated_ids[:, cache_position] = inputs["input_ids"].to(torch_device).to(torch.int)

    logits = model(
        **inputs, cache_position=cache_position, past_key_values=past_key_values, return_dict=False, use_cache=True
    )[0]
    next_token = torch.argmax(logits[:, -1], dim=-1)[:, None]
    generated_ids[:, seq_length] = next_token[:, 0]

    decode_one_tokens = torch.compile(decode_one_tokens, mode="reduce-overhead", fullgraph=True)
    cache_position = torch.tensor([seq_length + 1], device=torch_device)
    for _ in range(1, NUM_TOKENS_TO_GENERATE):
        with sdpa_kernel(SDPBackend.MATH):
            next_token = decode_one_tokens(model, next_token.clone(), None, cache_position, past_key_values)
            generated_ids[:, cache_position] = next_token.int()
        cache_position += 1

text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
text
['Simply put, the theory of relativity states that 1) the speed of light is constant, 2) the speed of light is the same for all observers, and 3) the laws of physics are the same for all observers.', 'My favorite all time favorite condiment is ketchup. I love it on everything. I love it on my eggs, my fries, my chicken, my burgers, my hot dogs, my sandwiches, my salads, my p']
```

</hfoption>
<hfoption id="3. compile entire generate function">

Compiling the entire [`~GenerationMixin.generate`] function also compiles the input preparation, logit processor operations, and more, in addition to the forward pass. With this approach, you don't need to initialize [`StaticCache`] or set the [cache_implementation](https://hf.co/docs/transformers/main_classes/text_generation#transformers.GenerationConfig.cache_implementation) parameter.

```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
import os
os.environ["TOKENIZERS_PARALLELISM"] = "false"  # To prevent long warnings :)

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", dtype="auto", device_map="auto")

model.generate = torch.compile(model.generate, mode="reduce-overhead", fullgraph=True)
input_text = "The theory of special relativity states "
input_ids = tokenizer(input_text, return_tensors="pt").to(model.device.type)

outputs = model.generate(**input_ids)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
['The theory of special relativity states 1. The speed of light is constant in all inertial reference']
```

This usage pattern is more appropriate for unique hardware or use cases, but there are several drawbacks to consider.

1. Compilation is much slower.
2. Parameters must be configured through [`GenerationConfig`].
3. Many warnings and exceptions are suppressed. We recommend testing the uncompiled model first.
4. Many features are unavailable at the moment. For example, generation does not stop if an `EOS` token is selected.

</hfoption>
</hfoptions>

## Decoding strategies

Decoding can also be optimized to accelerate generation. You can use a lightweight assistant model to generate candidate tokens faster than the LLM itself, or you can use a variant of this decoding strategy that works especially well for input-grounded tasks.
### Speculative decoding

> [!TIP]
> For a more in-depth explanation, take a look at the [Assisted Generation: a new direction toward low-latency text generation](https://hf.co/blog/assisted-generation) blog post!

For each generated token, the full model weights must be loaded during the forward pass, which is slow and cumbersome when a model has billions of parameters. Speculative decoding alleviates this slowdown by using a second, smaller and faster assistant model to generate candidate tokens that are verified by the larger model in a single forward pass. If the verified tokens are correct, the LLM essentially gets them for "free" without having to generate them itself. There is no degradation in accuracy because the verification forward pass ensures the same outputs are generated as if the LLM had generated them on its own.

To get the largest speed up, the assistant model should be a lot smaller than the LLM so that it can generate tokens quickly. The assistant model and the LLM must also share the same tokenizer to avoid re-encoding and decoding tokens.

> [!WARNING]
> Speculative decoding is only supported for the greedy search and sampling decoding strategies, and it doesn't support batched inputs.

Enable speculative decoding by loading an assistant model and passing it to [`~GenerationMixin.generate`].

<hfoptions id="spec-decoding">
<hfoption id="greedy search">

```py
from transformers import AutoModelForCausalLM, AutoTokenizer, infer_device
import torch

device = infer_device()

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
inputs = tokenizer("Einstein's theory of relativity states", return_tensors="pt").to(device)

model = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b", dtype="auto").to(device)
assistant_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m").to(device)
outputs = model.generate(**inputs, assistant_model=assistant_model)
tokenizer.batch_decode(outputs, skip_special_tokens=True)
["Einstein's theory of relativity states that the speed of light is constant. "]
```

</hfoption>
<hfoption id="sampling">

For speculative sampling decoding, add the [do_sample](https://hf.co/docs/transformers/main/en/main_classes/text_generation#transformers.GenerationConfig.do_sample) and [temperature](https://hf.co/docs/transformers/main/en/main_classes/text_generation#transformers.GenerationConfig.temperature) parameters to [`~GenerationMixin.generate`].

```py
from transformers import AutoModelForCausalLM, AutoTokenizer, infer_device
import torch

device = infer_device()

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
inputs = tokenizer("Einstein's theory of relativity states", return_tensors="pt").to(device)

model = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b", dtype="auto").to(device)
assistant_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m").to(device)
outputs = model.generate(**inputs, assistant_model=assistant_model, do_sample=True, temperature=0.7)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
["Einstein's theory of relativity states that motion in the universe is not a straight line.\n"]
```

</hfoption>
</hfoptions>

### Prompt lookup decoding

Prompt lookup decoding is a variant of speculative decoding that is also compatible with greedy search and sampling. Prompt lookup works especially well for input-grounded tasks - such as summarization - where there are often overlapping words between the prompt and the output. These overlapping n-grams are used as the LLM candidate tokens.
To enable prompt lookup decoding, specify the number of tokens that should be overlapping in the [prompt_lookup_num_tokens](https://hf.co/docs/transformers/main/en/main_classes/text_generation#transformers.GenerationConfig.prompt_lookup_num_tokens) parameter. Then pass this parameter to [`~GenerationMixin.generate`].

<hfoptions id="pld">
<hfoption id="greedy decoding">

```py
from transformers import AutoModelForCausalLM, AutoTokenizer, infer_device
import torch

device = infer_device()

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
inputs = tokenizer("The second law of thermodynamics states", return_tensors="pt").to(device)

model = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b", dtype="auto").to(device)
outputs = model.generate(**inputs, prompt_lookup_num_tokens=3)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
['The second law of thermodynamics states that entropy increases with temperature. ']
```

</hfoption>
<hfoption id="sampling">

For prompt lookup decoding with sampling, add the [do_sample](https://hf.co/docs/transformers/main/en/main_classes/text_generation#transformers.GenerationConfig.do_sample) and [temperature](https://hf.co/docs/transformers/main/en/main_classes/text_generation#transformers.GenerationConfig.temperature) parameters to [`~GenerationMixin.generate`].

```py
from transformers import AutoModelForCausalLM, AutoTokenizer, infer_device
import torch

device = infer_device()

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
inputs = tokenizer("The second law of thermodynamics states", return_tensors="pt").to(device)

model = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b", dtype="auto").to(device)
outputs = model.generate(**inputs, prompt_lookup_num_tokens=3, do_sample=True, temperature=0.7)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
["The second law of thermodynamics states that energy cannot be created nor destroyed. It's not a"]
```

</hfoption>
</hfoptions>

## Attention

A known issue with transformer models is that the self-attention mechanism grows quadratically in compute and memory with the number of input tokens. This limitation is only magnified in LLMs, which handle much longer sequences. To address this, try FlashAttention-2 or PyTorch's scaled dot product attention (SDPA), which are more memory-efficient attention implementations.

### FlashAttention-2

FlashAttention and [FlashAttention-2](./perf_infer_gpu_one#flashattention-2) break up the attention computation into smaller chunks and reduce the number of intermediate read/write operations to GPU memory to speed up inference. FlashAttention-2 improves on the original FlashAttention algorithm by also parallelizing over the sequence length dimension and better partitioning work on the hardware to reduce synchronization and communication overhead.

To use FlashAttention-2, set [attn_implementation](https://hf.co/docs/transformers/main/en/main_classes/text_generation#transformers.PreTrainedModel.from_pretrained.attn_implementation) to `"flash_attention_2"` in [`~PreTrainedModel.from_pretrained`], or call `model.set_attention_implementation("flash_attention_2")` to dynamically update the [attention interface](./attention_interface) after the model is loaded.
```py
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2b",
    quantization_config=quant_config,
    dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
)

# Change the model's attention dynamically after loading
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2b",
    quantization_config=quant_config,
    dtype=torch.bfloat16
)
model.set_attention_implementation("flash_attention_2")
```

### PyTorch scaled dot product attention

Scaled dot product attention (SDPA) is automatically enabled in PyTorch 2.0 and it supports FlashAttention, xFormers, and PyTorch's C++ implementation. SDPA chooses the most performant attention algorithm if you're using a CUDA backend. For other backends, SDPA defaults to the PyTorch C++ implementation.

> [!TIP]
> SDPA automatically supports FlashAttention-2 as long as you have the latest PyTorch version installed.

Use the [torch.nn.attention.sdpa_kernel](https://pytorch.org/docs/stable/generated/torch.nn.attention.sdpa_kernel.html) context manager to explicitly enable or disable any of the four attention algorithms. For example, use `SDPBackend.FLASH_ATTENTION` to enable FlashAttention.

```py
import torch
from torch.nn.attention import SDPBackend, sdpa_kernel
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2b",
    dtype=torch.bfloat16,
)
inputs = tokenizer("The theory of special relativity states ", return_tensors="pt").to(model.device)

with sdpa_kernel(SDPBackend.FLASH_ATTENTION):
    outputs = model.generate(**inputs)
```

## Quantization

Quantization reduces the size of model weights by storing them in a lower precision. This translates to lower memory usage and makes loading LLMs for inference more accessible if you're constrained by GPU memory. If you aren't limited by your GPU, you don't necessarily need to quantize your model because it can increase latency slightly (except for AWQ and fused AWQ modules) due to the extra step required to quantize and dequantize the weights.

> [!TIP]
> There are many quantization libraries (see the [Quantization](./quantization) guide for more details) available, such as Quanto, AQLM, VPTQ, AWQ, and AutoGPTQ. Feel free to try them out and see which one works best for your use case. We also recommend reading the [Overview of natively supported quantization schemes in 🤗 Transformers](https://hf.co/blog/overview-quantization-transformers) blog post which compares AutoGPTQ and bitsandbytes.

Use the Model Memory Calculator below to estimate and compare how much memory is required to load a model. For example, try estimating the memory required to load [Mistral-7B-v0.1](https://hf.co/mistralai/Mistral-7B-v0.1).

<iframe
	src="https://hf-accelerate-model-memory-usage.hf.space"
	frameborder="0"
	width="850"
	height="450"
></iframe>

To load a model in half-precision, set the [dtype](https://hf.co/docs/transformers/main/en/main_classes/text_generation#transformers.PreTrainedModel.from_pretrained.dtype) parameter in [`~transformers.AutoModelForCausalLM.from_pretrained`] to `torch.bfloat16`. This requires 13.74GB of memory.
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    dtype=torch.bfloat16,
    device_map="auto",
)
```

To load a quantized model (8-bit or 4-bit), try [bitsandbytes](https://hf.co/docs/bitsandbytes) and set the [load_in_4bit](https://hf.co/docs/transformers/main/en/main_classes/text_generation#transformers.BitsAndBytesConfig.load_in_4bit) or [load_in_8bit](https://hf.co/docs/transformers/main/en/main_classes/text_generation#transformers.BitsAndBytesConfig.load_in_8bit) parameters to `True`. Loading the model in 8-bit precision only requires 6.87GB of memory.

```py
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
import torch

quant_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    quantization_config=quant_config,
    device_map="auto"
)
```
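Loading in 4-bit follows the same pattern; as a minimal sketch, only the `BitsAndBytesConfig` flag changes (4-bit weights occupy roughly half the space of 8-bit weights, plus a small overhead for quantization metadata, so expect further memory savings):

```py
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
import torch

# load_in_4bit stores the weights in 4-bit precision and dequantizes them
# on the fly during the forward pass
quant_config = BitsAndBytesConfig(load_in_4bit=True)
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    quantization_config=quant_config,
    device_map="auto"
)
```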
transformers/docs/source/en/llm_optims.md/0
{ "file_path": "transformers/docs/source/en/llm_optims.md", "repo_id": "transformers", "token_count": 7181 }
381
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Model outputs

All models have outputs that are instances of subclasses of [`~utils.ModelOutput`]. Those are data structures containing all the information returned by the model, but they can also be used as tuples or dictionaries.

Let's see how this looks in an example:

```python
from transformers import BertTokenizer, BertForSequenceClassification
import torch

tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("google-bert/bert-base-uncased")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
labels = torch.tensor([1]).unsqueeze(0)  # Batch size 1
outputs = model(**inputs, labels=labels)
```

The `outputs` object is a [`~modeling_outputs.SequenceClassifierOutput`]. As we can see in the documentation of that class below, this means it has an optional `loss`, a `logits`, an optional `hidden_states`, and an optional `attentions` attribute. Here we have the `loss` since we passed along `labels`, but we don't have `hidden_states` and `attentions` because we didn't pass `output_hidden_states=True` or `output_attentions=True`.

<Tip>

When passing `output_hidden_states=True` you may expect the `outputs.hidden_states[-1]` to match `outputs.last_hidden_state` exactly. However, this is not always the case. Some models apply normalization or subsequent processing to the last hidden state when it's returned.

</Tip>

You can access each attribute as you would usually do, and if that attribute has not been returned by the model, you will get `None`. Here, for instance, `outputs.loss` is the loss computed by the model, and `outputs.attentions` is `None`.

When considering our `outputs` object as a tuple, it only considers the attributes that don't have `None` values. Here, for instance, it has two elements, `loss` then `logits`, so

```python
outputs[:2]
```

will return the tuple `(outputs.loss, outputs.logits)`.

When considering our `outputs` object as a dictionary, it only considers the attributes that don't have `None` values. Here, for instance, it has two keys that are `loss` and `logits`.

We document here the generic model outputs that are used by more than one model type. Specific output types are documented on their corresponding model page.
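To make the tuple and dictionary behavior concrete, the snippet below revisits the `outputs` object from the example above. Attribute, index, and key access all refer to the same underlying values:

```python
# continuing from the example above
loss = outputs.loss       # attribute access
loss = outputs[0]         # index access; None-valued attributes are skipped
loss = outputs["loss"]    # key access

print(outputs.keys())         # odict_keys(['loss', 'logits'])
print(outputs.hidden_states)  # None, because output_hidden_states=True was not passed

# to_tuple() converts the output to a plain tuple of the non-None values
loss, logits = outputs.to_tuple()
```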
## ModelOutput [[autodoc]] utils.ModelOutput - to_tuple ## BaseModelOutput [[autodoc]] modeling_outputs.BaseModelOutput ## BaseModelOutputWithPooling [[autodoc]] modeling_outputs.BaseModelOutputWithPooling ## BaseModelOutputWithCrossAttentions [[autodoc]] modeling_outputs.BaseModelOutputWithCrossAttentions ## BaseModelOutputWithPoolingAndCrossAttentions [[autodoc]] modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions ## BaseModelOutputWithPast [[autodoc]] modeling_outputs.BaseModelOutputWithPast ## BaseModelOutputWithPastAndCrossAttentions [[autodoc]] modeling_outputs.BaseModelOutputWithPastAndCrossAttentions ## Seq2SeqModelOutput [[autodoc]] modeling_outputs.Seq2SeqModelOutput ## CausalLMOutput [[autodoc]] modeling_outputs.CausalLMOutput ## CausalLMOutputWithCrossAttentions [[autodoc]] modeling_outputs.CausalLMOutputWithCrossAttentions ## CausalLMOutputWithPast [[autodoc]] modeling_outputs.CausalLMOutputWithPast ## MaskedLMOutput [[autodoc]] modeling_outputs.MaskedLMOutput ## Seq2SeqLMOutput [[autodoc]] modeling_outputs.Seq2SeqLMOutput ## NextSentencePredictorOutput [[autodoc]] modeling_outputs.NextSentencePredictorOutput ## SequenceClassifierOutput [[autodoc]] modeling_outputs.SequenceClassifierOutput ## Seq2SeqSequenceClassifierOutput [[autodoc]] modeling_outputs.Seq2SeqSequenceClassifierOutput ## MultipleChoiceModelOutput [[autodoc]] modeling_outputs.MultipleChoiceModelOutput ## TokenClassifierOutput [[autodoc]] modeling_outputs.TokenClassifierOutput ## QuestionAnsweringModelOutput [[autodoc]] modeling_outputs.QuestionAnsweringModelOutput ## Seq2SeqQuestionAnsweringModelOutput [[autodoc]] modeling_outputs.Seq2SeqQuestionAnsweringModelOutput ## Seq2SeqSpectrogramOutput [[autodoc]] modeling_outputs.Seq2SeqSpectrogramOutput ## SemanticSegmenterOutput [[autodoc]] modeling_outputs.SemanticSegmenterOutput ## ImageClassifierOutput [[autodoc]] modeling_outputs.ImageClassifierOutput ## ImageClassifierOutputWithNoAttention [[autodoc]] modeling_outputs.ImageClassifierOutputWithNoAttention ## DepthEstimatorOutput [[autodoc]] modeling_outputs.DepthEstimatorOutput ## Wav2Vec2BaseModelOutput [[autodoc]] modeling_outputs.Wav2Vec2BaseModelOutput ## XVectorOutput [[autodoc]] modeling_outputs.XVectorOutput ## Seq2SeqTSModelOutput [[autodoc]] modeling_outputs.Seq2SeqTSModelOutput ## Seq2SeqTSPredictionOutput [[autodoc]] modeling_outputs.Seq2SeqTSPredictionOutput ## SampleTSPredictionOutput [[autodoc]] modeling_outputs.SampleTSPredictionOutput
transformers/docs/source/en/main_classes/output.md/0
{ "file_path": "transformers/docs/source/en/main_classes/output.md", "repo_id": "transformers", "token_count": 1670 }
382
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Auto Classes

In many cases, the architecture you want to use can be guessed from the name or the path of the pretrained model you are supplying to the `from_pretrained()` method. AutoClasses are here to do this job for you so that you automatically retrieve the relevant model given the name/path to the pretrained weights/config/vocabulary.

Instantiating one of [`AutoConfig`], [`AutoModel`], and [`AutoTokenizer`] will directly create a class of the relevant architecture. For instance

```python
model = AutoModel.from_pretrained("google-bert/bert-base-cased")
```

will create a model that is an instance of [`BertModel`].

There is one class of `AutoModel` for each task.

## Extending the Auto Classes

Each of the auto classes has a method to be extended with your custom classes. For instance, if you have defined a custom class of model `NewModel`, make sure you have a `NewModelConfig`, then you can add those to the auto classes like this:

```python
from transformers import AutoConfig, AutoModel

AutoConfig.register("new-model", NewModelConfig)
AutoModel.register(NewModelConfig, NewModel)
```

You will then be able to use the auto classes like you would usually do!

<Tip warning={true}>

If your `NewModelConfig` is a subclass of [`~transformers.PretrainedConfig`], make sure its `model_type` attribute is set to the same key you use when registering the config (here `"new-model"`).

Likewise, if your `NewModel` is a subclass of [`PreTrainedModel`], make sure its `config_class` attribute is set to the same class you use when registering the model (here `NewModelConfig`).

</Tip>

## AutoConfig

[[autodoc]] AutoConfig

## AutoTokenizer

[[autodoc]] AutoTokenizer

## AutoFeatureExtractor

[[autodoc]] AutoFeatureExtractor

## AutoImageProcessor

[[autodoc]] AutoImageProcessor

## AutoVideoProcessor

[[autodoc]] AutoVideoProcessor

## AutoProcessor

[[autodoc]] AutoProcessor

## Generic model classes

The following auto classes are available for instantiating a base model class without a specific head.

### AutoModel

[[autodoc]] AutoModel

## Generic pretraining classes

The following auto classes are available for instantiating a model with a pretraining head.

### AutoModelForPreTraining

[[autodoc]] AutoModelForPreTraining

## Natural Language Processing

The following auto classes are available for the following natural language processing tasks.
### AutoModelForCausalLM [[autodoc]] AutoModelForCausalLM ### AutoModelForMaskedLM [[autodoc]] AutoModelForMaskedLM ### AutoModelForMaskGeneration [[autodoc]] AutoModelForMaskGeneration ### AutoModelForSeq2SeqLM [[autodoc]] AutoModelForSeq2SeqLM ### AutoModelForSequenceClassification [[autodoc]] AutoModelForSequenceClassification ### AutoModelForMultipleChoice [[autodoc]] AutoModelForMultipleChoice ### AutoModelForNextSentencePrediction [[autodoc]] AutoModelForNextSentencePrediction ### AutoModelForTokenClassification [[autodoc]] AutoModelForTokenClassification ### AutoModelForQuestionAnswering [[autodoc]] AutoModelForQuestionAnswering ### AutoModelForTextEncoding [[autodoc]] AutoModelForTextEncoding ## Computer vision The following auto classes are available for the following computer vision tasks. ### AutoModelForDepthEstimation [[autodoc]] AutoModelForDepthEstimation ### AutoModelForImageClassification [[autodoc]] AutoModelForImageClassification ### AutoModelForVideoClassification [[autodoc]] AutoModelForVideoClassification ### AutoModelForKeypointDetection [[autodoc]] AutoModelForKeypointDetection ### AutoModelForKeypointMatching [[autodoc]] AutoModelForKeypointMatching ### AutoModelForMaskedImageModeling [[autodoc]] AutoModelForMaskedImageModeling ### AutoModelForObjectDetection [[autodoc]] AutoModelForObjectDetection ### AutoModelForImageSegmentation [[autodoc]] AutoModelForImageSegmentation ### AutoModelForImageToImage [[autodoc]] AutoModelForImageToImage ### AutoModelForSemanticSegmentation [[autodoc]] AutoModelForSemanticSegmentation ### AutoModelForInstanceSegmentation [[autodoc]] AutoModelForInstanceSegmentation ### AutoModelForUniversalSegmentation [[autodoc]] AutoModelForUniversalSegmentation ### AutoModelForZeroShotImageClassification [[autodoc]] AutoModelForZeroShotImageClassification ### AutoModelForZeroShotObjectDetection [[autodoc]] AutoModelForZeroShotObjectDetection ## Audio The following auto classes are available for the following audio tasks. ### AutoModelForAudioClassification [[autodoc]] AutoModelForAudioClassification ### AutoModelForAudioFrameClassification [[autodoc]] AutoModelForAudioFrameClassification ### AutoModelForCTC [[autodoc]] AutoModelForCTC ### AutoModelForSpeechSeq2Seq [[autodoc]] AutoModelForSpeechSeq2Seq ### AutoModelForAudioXVector [[autodoc]] AutoModelForAudioXVector ### AutoModelForTextToSpectrogram [[autodoc]] AutoModelForTextToSpectrogram ### AutoModelForTextToWaveform [[autodoc]] AutoModelForTextToWaveform ### AutoModelForAudioTokenization [[autodoc]] AutoModelForAudioTokenization ## Multimodal The following auto classes are available for the following multimodal tasks. ### AutoModelForTableQuestionAnswering [[autodoc]] AutoModelForTableQuestionAnswering ### AutoModelForDocumentQuestionAnswering [[autodoc]] AutoModelForDocumentQuestionAnswering ### AutoModelForVisualQuestionAnswering [[autodoc]] AutoModelForVisualQuestionAnswering ### AutoModelForVision2Seq [[autodoc]] AutoModelForVision2Seq ### AutoModelForImageTextToText [[autodoc]] AutoModelForImageTextToText ## Time Series ### AutoModelForTimeSeriesPrediction [[autodoc]] AutoModelForTimeSeriesPrediction
transformers/docs/source/en/model_doc/auto.md/0
{ "file_path": "transformers/docs/source/en/model_doc/auto.md", "repo_id": "transformers", "token_count": 1795 }
383
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

*This model was released on 2019-12-24 and added to Hugging Face Transformers on 2022-12-07.*

# Big Transfer (BiT)

<div class="flex flex-wrap space-x-1">
<img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
</div>

## Overview

The BiT model was proposed in [Big Transfer (BiT): General Visual Representation Learning](https://huggingface.co/papers/1912.11370) by Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, Neil Houlsby.
BiT is a simple recipe for scaling up pre-training of [ResNet](resnet)-like architectures (specifically, ResNetv2). The method results in significant improvements for transfer learning.

The abstract from the paper is the following:

*Transfer of pre-trained representations improves sample efficiency and simplifies hyperparameter tuning when training deep neural networks for vision. We revisit the paradigm of pre-training on large supervised datasets and fine-tuning the model on a target task. We scale up pre-training, and propose a simple recipe that we call Big Transfer (BiT). By combining a few carefully selected components, and transferring using a simple heuristic, we achieve strong performance on over 20 datasets. BiT performs well across a surprisingly wide range of data regimes -- from 1 example per class to 1M total examples. BiT achieves 87.5% top-1 accuracy on ILSVRC-2012, 99.4% on CIFAR-10, and 76.3% on the 19 task Visual Task Adaptation Benchmark (VTAB). On small datasets, BiT attains 76.8% on ILSVRC-2012 with 10 examples per class, and 97.0% on CIFAR-10 with 10 examples per class. We conduct detailed analysis of the main components that lead to high transfer performance.*

This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/google-research/big_transfer).

## Usage tips

- BiT models are equivalent to ResNetv2 in terms of architecture, except that: 1) all batch normalization layers are replaced by [group normalization](https://huggingface.co/papers/1803.08494), 2) [weight standardization](https://huggingface.co/papers/1903.10520) is used for convolutional layers. The authors show that the combination of both is useful for training with large batch sizes, and has a significant impact on transfer learning.

## Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with BiT.

<PipelineTag pipeline="image-classification"/>

- [`BitForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
- See also: [Image classification task guide](../tasks/image_classification) If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. ## BitConfig [[autodoc]] BitConfig ## BitImageProcessor [[autodoc]] BitImageProcessor - preprocess ## BitImageProcessorFast [[autodoc]] BitImageProcessorFast - preprocess ## BitModel [[autodoc]] BitModel - forward ## BitForImageClassification [[autodoc]] BitForImageClassification - forward
transformers/docs/source/en/model_doc/bit.md/0
{ "file_path": "transformers/docs/source/en/model_doc/bit.md", "repo_id": "transformers", "token_count": 1129 }
384
<!--Copyright 2021 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

*This model was released on 2021-02-26 and added to Hugging Face Transformers on 2021-05-12.*

<div style="float: right;">
    <div class="flex flex-wrap space-x-1">
        <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
        <img alt="FlashAttention" src="https://img.shields.io/badge/%E2%9A%A1%EF%B8%8E%20FlashAttention-eae0c8?style=flat">
        <img alt="SDPA" src="https://img.shields.io/badge/SDPA-DE3412?style=flat&logo=pytorch&logoColor=white">
    </div>
</div>

# CLIP

[CLIP](https://huggingface.co/papers/2103.00020) is a multimodal vision and language model motivated by overcoming the fixed number of object categories when training a computer vision model. CLIP learns about images directly from raw text by jointly training on 400M (image, text) pairs. Pretraining on this scale enables zero-shot transfer to downstream tasks. CLIP uses an image encoder and text encoder to get visual features and text features. Both features are projected to a latent space with the same number of dimensions and their dot product gives a similarity score.

You can find all the original CLIP checkpoints under the [OpenAI](https://huggingface.co/openai?search_models=clip) organization.

> [!TIP]
> Click on the CLIP models in the right sidebar for more examples of how to apply CLIP to different image and language tasks.

The example below demonstrates how to calculate similarity scores between multiple text descriptions and an image with [`Pipeline`] or the [`AutoModel`] class.
<hfoptions id="usage">
<hfoption id="Pipeline">

```py
import torch
from transformers import pipeline

clip = pipeline(
    task="zero-shot-image-classification",
    model="openai/clip-vit-base-patch32",
    dtype=torch.bfloat16,
    device=0
)
labels = ["a photo of a cat", "a photo of a dog", "a photo of a car"]
clip("http://images.cocodataset.org/val2017/000000039769.jpg", candidate_labels=labels)
```

</hfoption>
<hfoption id="AutoModel">

```py
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModel

model = AutoModel.from_pretrained("openai/clip-vit-base-patch32", dtype=torch.bfloat16, attn_implementation="sdpa")
processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
labels = ["a photo of a cat", "a photo of a dog", "a photo of a car"]
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)

outputs = model(**inputs)
logits_per_image = outputs.logits_per_image
probs = logits_per_image.softmax(dim=1)
most_likely_idx = probs.argmax(dim=1).item()
most_likely_label = labels[most_likely_idx]
print(f"Most likely label: {most_likely_label} with probability: {probs[0][most_likely_idx].item():.3f}")
```

</hfoption>
</hfoptions>

## Notes

- Use [`CLIPImageProcessor`] to resize (or rescale) and normalize images for the model.

## CLIPConfig

[[autodoc]] CLIPConfig
    - from_text_vision_configs

## CLIPTextConfig

[[autodoc]] CLIPTextConfig

## CLIPVisionConfig

[[autodoc]] CLIPVisionConfig

## CLIPTokenizer

[[autodoc]] CLIPTokenizer
    - build_inputs_with_special_tokens
    - get_special_tokens_mask
    - create_token_type_ids_from_sequences
    - save_vocabulary

## CLIPTokenizerFast

[[autodoc]] CLIPTokenizerFast

## CLIPImageProcessor

[[autodoc]] CLIPImageProcessor
    - preprocess

## CLIPImageProcessorFast

[[autodoc]] CLIPImageProcessorFast
    - preprocess

## CLIPFeatureExtractor

[[autodoc]] CLIPFeatureExtractor

## CLIPProcessor

[[autodoc]] CLIPProcessor

## CLIPModel

[[autodoc]] CLIPModel
    - forward
    - get_text_features
    - get_image_features

## CLIPTextModel

[[autodoc]] CLIPTextModel
    - forward

## CLIPTextModelWithProjection

[[autodoc]] CLIPTextModelWithProjection
    - forward

## CLIPVisionModelWithProjection

[[autodoc]] CLIPVisionModelWithProjection
    - forward

## CLIPVisionModel

[[autodoc]] CLIPVisionModel
    - forward

## CLIPForImageClassification

[[autodoc]] CLIPForImageClassification
    - forward
transformers/docs/source/en/model_doc/clip.md/0
{ "file_path": "transformers/docs/source/en/model_doc/clip.md", "repo_id": "transformers", "token_count": 1632 }
385
<!--Copyright 2025 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

*This model was released on 2025-02-27 and added to Hugging Face Transformers on 2025-05-07.*

# Csm

## Overview

The Conversational Speech Model (CSM) is the first open-source contextual text-to-speech model [released by Sesame](https://www.sesame.com/research/crossing_the_uncanny_valley_of_voice). It is designed to generate natural-sounding speech with or without conversational context. This context typically consists of multi-turn dialogue between speakers, represented as sequences of text and corresponding spoken audio.

**Model Architecture:**
CSM is composed of two LLaMA-style auto-regressive transformer decoders: a backbone decoder that predicts the first codebook token and a depth decoder that generates the remaining tokens. It uses the pretrained codec model [Mimi](./mimi), introduced by Kyutai, to encode speech into discrete codebook tokens and decode them back into audio.

The original csm-1b checkpoint is available under the [Sesame](https://huggingface.co/sesame/csm-1b) organization on Hugging Face.

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/eustlb/documentation-images/resolve/main/csm_architecture.png"/>
</div>

## Usage Tips

### Without Conversational Context

CSM can be used to generate speech directly from a text prompt:

```python
import torch
from transformers import CsmForConditionalGeneration, AutoProcessor, infer_device

model_id = "sesame/csm-1b"
device = infer_device()

# load the model and the processor
processor = AutoProcessor.from_pretrained(model_id)
model = CsmForConditionalGeneration.from_pretrained(model_id, device_map=device)

# prepare the inputs
text = "[0]The past is just a story we tell ourselves."
# `[0]` for speaker id 0 inputs = processor(text, add_special_tokens=True).to(device) # another equivalent way to prepare the inputs conversation = [ {"role": "0", "content": [{"type": "text", "text": "The past is just a story we tell ourselves."}]}, ] inputs = processor.apply_chat_template( conversation, tokenize=True, return_dict=True, ).to(model.device) # infer the model audio = model.generate(**inputs, output_audio=True) processor.save_audio(audio, "example_without_context.wav") ``` ### With Conversational Context CSM can be used to generate speech given a conversation, allowing consistency in the voices and content-aware generation: ```python import torch from transformers import CsmForConditionalGeneration, AutoProcessor, infer_device from datasets import load_dataset, Audio model_id = "sesame/csm-1b" device = infer_device() # load the model and the processor processor = AutoProcessor.from_pretrained(model_id) model = CsmForConditionalGeneration.from_pretrained(model_id, device_map=device) # prepare the inputs ds = load_dataset("hf-internal-testing/dailytalk-dummy", split="train") # ensure the audio is 24kHz ds = ds.cast_column("audio", Audio(sampling_rate=24000)) conversation = [] # 1. context for text, audio, speaker_id in zip(ds[:4]["text"], ds[:4]["audio"], ds[:4]["speaker_id"]): conversation.append( { "role": f"{speaker_id}", "content": [{"type": "text", "text": text}, {"type": "audio", "path": audio["array"]}], } ) # 2. text prompt conversation.append({"role": f"{ds[4]['speaker_id']}", "content": [{"type": "text", "text": ds[4]["text"]}]}) inputs = processor.apply_chat_template( conversation, tokenize=True, return_dict=True, ).to(model.device) # infer the model audio = model.generate(**inputs, output_audio=True) processor.save_audio(audio, "example_with_context.wav") ``` ### Batched Inference CSM supports batched inference! ```python import torch from transformers import CsmForConditionalGeneration, AutoProcessor, infer_device from datasets import load_dataset, Audio model_id = "sesame/csm-1b" device = infer_device() # load the model and the processor processor = AutoProcessor.from_pretrained(model_id) model = CsmForConditionalGeneration.from_pretrained(model_id, device_map=device) # prepare the inputs ds = load_dataset("hf-internal-testing/dailytalk-dummy", split="train") # ensure the audio is 24kHz ds = ds.cast_column("audio", Audio(sampling_rate=24000)) # here a batch with two prompts conversation = [ [ { "role": f"{ds[0]['speaker_id']}", "content": [ {"type": "text", "text": ds[0]["text"]}, {"type": "audio", "path": ds[0]["audio"]["array"]}, ], }, { "role": f"{ds[1]['speaker_id']}", "content": [ {"type": "text", "text": ds[1]["text"]}, ], }, ], [ { "role": f"{ds[0]['speaker_id']}", "content": [ {"type": "text", "text": ds[0]["text"]}, ], } ], ] inputs = processor.apply_chat_template( conversation, tokenize=True, return_dict=True, ).to(model.device) audio = model.generate(**inputs, output_audio=True) processor.save_audio(audio, [f"speech_batch_idx_{i}.wav" for i in range(len(audio))]) ``` ### Making The Model Go Brrr CSM supports full-graph compilation with CUDA graphs! 
```python import torch import copy from transformers import CsmForConditionalGeneration, AutoProcessor from datasets import load_dataset model_id = "sesame/csm-1b" device = "cuda" # set logs to ensure no recompilation and graph breaks torch._logging.set_logs(graph_breaks=True, recompiles=True, cudagraphs=True) # load the model and the processor processor = AutoProcessor.from_pretrained(model_id) model = CsmForConditionalGeneration.from_pretrained(model_id, device_map=device) # use static cache, enabling automatically torch compile with fullgraph and reduce-overhead model.generation_config.max_length = 250 # big enough to avoid recompilation model.generation_config.max_new_tokens = None # would take precedence over max_length model.generation_config.cache_implementation = "static" model.depth_decoder.generation_config.cache_implementation = "static" # generation kwargs gen_kwargs = { "do_sample": False, "depth_decoder_do_sample": False, "temperature": 1.0, "depth_decoder_temperature": 1.0, } # Define a timing decorator class TimerContext: def __init__(self, name="Execution"): self.name = name self.start_event = None self.end_event = None def __enter__(self): # Use CUDA events for more accurate GPU timing self.start_event = torch.cuda.Event(enable_timing=True) self.end_event = torch.cuda.Event(enable_timing=True) self.start_event.record() return self def __exit__(self, *args): self.end_event.record() torch.cuda.synchronize() elapsed_time = self.start_event.elapsed_time(self.end_event) / 1000.0 print(f"{self.name} time: {elapsed_time:.4f} seconds") # prepare the inputs ds = load_dataset("hf-internal-testing/dailytalk-dummy", split="train") conversation = [ { "role": f"{ds[0]['speaker_id']}", "content": [ {"type": "text", "text": ds[0]["text"]}, {"type": "audio", "path": ds[0]["audio"]["array"]}, ], }, { "role": f"{ds[1]['speaker_id']}", "content": [ {"type": "text", "text": ds[1]["text"]}, {"type": "audio", "path": ds[1]["audio"]["array"]}, ], }, { "role": f"{ds[2]['speaker_id']}", "content": [ {"type": "text", "text": ds[2]["text"]}, ], }, ] padded_inputs_1 = processor.apply_chat_template( conversation, tokenize=True, return_dict=True, ).to(model.device) print("\n" + "="*50) print("First generation - compiling and recording CUDA graphs...") with TimerContext("First generation"): _ = model.generate(**padded_inputs_1, **gen_kwargs) print("="*50) print("\n" + "="*50) print("Second generation - fast !!!") with TimerContext("Second generation"): _ = model.generate(**padded_inputs_1, **gen_kwargs) print("="*50) # now with different inputs conversation = [ { "role": f"{ds[0]['speaker_id']}", "content": [ {"type": "text", "text": ds[2]["text"]}, {"type": "audio", "path": ds[2]["audio"]["array"]}, ], }, { "role": f"{ds[1]['speaker_id']}", "content": [ {"type": "text", "text": ds[3]["text"]}, {"type": "audio", "path": ds[3]["audio"]["array"]}, ], }, { "role": f"{ds[2]['speaker_id']}", "content": [ {"type": "text", "text": ds[4]["text"]}, ], }, ] padded_inputs_2 = processor.apply_chat_template( conversation, tokenize=True, return_dict=True, ).to(model.device) print("\n" + "="*50) print("Generation with other inputs!") with TimerContext("Generation with different inputs"): _ = model.generate(**padded_inputs_2, **gen_kwargs) print("="*50) ``` ### Training CSM Transformers integration supports training! 
```python from transformers import CsmForConditionalGeneration, AutoProcessor, infer_device from datasets import load_dataset, Audio model_id = "sesame/csm-1b" device = infer_device() # load the model and the processor processor = AutoProcessor.from_pretrained(model_id) model = CsmForConditionalGeneration.from_pretrained(model_id, device_map=device) model.train() model.codec_model.eval() ds = load_dataset("hf-internal-testing/dailytalk-dummy", split="train") # ensure the audio is 24kHz ds = ds.cast_column("audio", Audio(sampling_rate=24000)) conversation = [] # context for text, audio, speaker_id in zip(ds[:4]["text"], ds[:4]["audio"], ds[:4]["speaker_id"]): conversation.append( { "role": f"{speaker_id}", "content": [{"type": "text", "text": text}, {"type": "audio", "path": audio["array"]}], } ) inputs = processor.apply_chat_template( conversation, tokenize=True, return_dict=True, output_labels=True, ).to(model.device) out = model(**inputs) out.loss.backward() ``` This model was contributed by [Eustache Le Bihan](https://huggingface.co/eustlb). The original code can be found [here](https://github.com/SesameAILabs/csm). ## CsmConfig [[autodoc]] CsmConfig ## CsmDepthDecoderConfig [[autodoc]] CsmDepthDecoderConfig ## CsmProcessor <div class="flex justify-center"> <img src="https://huggingface.co/datasets/eustlb/documentation-images/resolve/main/fig1.jpg"/> </div> [[autodoc]] CsmProcessor - __call__ ## CsmForConditionalGeneration [[autodoc]] CsmForConditionalGeneration - forward - generate ## CsmDepthDecoderForCausalLM [[autodoc]] CsmDepthDecoderForCausalLM ## CsmDepthDecoderModel [[autodoc]] CsmDepthDecoderModel ## CsmBackboneModel [[autodoc]] CsmBackboneModel
transformers/docs/source/en/model_doc/csm.md/0
{ "file_path": "transformers/docs/source/en/model_doc/csm.md", "repo_id": "transformers", "token_count": 4409 }
386
<!--Copyright 2021 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

*This model was released on 2020-12-23 and added to Hugging Face Transformers on 2021-04-13.*

# DeiT

<div class="flex flex-wrap space-x-1">
<img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
<img alt="FlashAttention" src="https://img.shields.io/badge/%E2%9A%A1%EF%B8%8E%20FlashAttention-eae0c8?style=flat">
<img alt="SDPA" src="https://img.shields.io/badge/SDPA-DE3412?style=flat&logo=pytorch&logoColor=white">
</div>

## Overview

The DeiT model was proposed in [Training data-efficient image transformers & distillation through attention](https://huggingface.co/papers/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou. The [Vision Transformer (ViT)](vit) introduced in [Dosovitskiy et al., 2020](https://huggingface.co/papers/2010.11929) has shown that one can match or even outperform existing convolutional neural networks using a Transformer encoder (BERT-like). However, the ViT models introduced in that paper required training on expensive infrastructure for multiple weeks, using external data. DeiT (data-efficient image transformers) are more efficiently trained transformers for image classification, requiring far less data and far less computing resources compared to the original ViT models.

The abstract from the paper is the following:

*Recently, neural networks purely based on attention were shown to address image understanding tasks such as image classification. However, these visual transformers are pre-trained with hundreds of millions of images using an expensive infrastructure, thereby limiting their adoption. In this work, we produce a competitive convolution-free transformer by training on Imagenet only. We train them on a single computer in less than 3 days. Our reference vision transformer (86M parameters) achieves top-1 accuracy of 83.1% (single-crop evaluation) on ImageNet with no external data. More importantly, we introduce a teacher-student strategy specific to transformers. It relies on a distillation token ensuring that the student learns from the teacher through attention. We show the interest of this token-based distillation, especially when using a convnet as a teacher. This leads us to report results competitive with convnets for both Imagenet (where we obtain up to 85.2% accuracy) and when transferring to other tasks. We share our code and models.*

This model was contributed by [nielsr](https://huggingface.co/nielsr).

## Usage tips

- Compared to ViT, DeiT models use a so-called distillation token to effectively learn from a teacher (which, in the DeiT paper, is a ResNet-like model).
  The distillation token is learned through backpropagation, by interacting with the class ([CLS]) and patch tokens through the self-attention layers.
- There are 2 ways to fine-tune distilled models, either (1) in a classic way, by only placing a prediction head on top of the final hidden state of the class token and not using the distillation signal, or (2) by placing both a prediction head on top of the class token and on top of the distillation token. In that case, the [CLS] prediction head is trained using regular cross-entropy between the prediction of the head and the ground-truth label, while the distillation prediction head is trained using hard distillation (cross-entropy between the prediction of the distillation head and the label predicted by the teacher). At inference time, one takes the average prediction between both heads as final prediction. (2) is also called "fine-tuning with distillation", because one relies on a teacher that has already been fine-tuned on the downstream dataset. In terms of models, (1) corresponds to [`DeiTForImageClassification`] and (2) corresponds to [`DeiTForImageClassificationWithTeacher`].
- Note that the authors also did try soft distillation for (2) (in which case the distillation prediction head is trained using KL divergence to match the softmax output of the teacher), but hard distillation gave the best results.
- All released checkpoints were pre-trained and fine-tuned on ImageNet-1k only. No external data was used. This is in contrast with the original ViT model, which used external data like the JFT-300M dataset/Imagenet-21k for pre-training.
- The authors of DeiT also released more efficiently trained ViT models, which you can directly plug into [`ViTModel`] or [`ViTForImageClassification`]. Techniques like data augmentation, optimization, and regularization were used in order to simulate training on a much larger dataset (while only using ImageNet-1k for pre-training). There are 4 variants available (in 3 different sizes): *facebook/deit-tiny-patch16-224*, *facebook/deit-small-patch16-224*, *facebook/deit-base-patch16-224* and *facebook/deit-base-patch16-384*. Note that one should use [`DeiTImageProcessor`] in order to prepare images for the model.

### Using Scaled Dot Product Attention (SDPA)

PyTorch includes a native scaled dot-product attention (SDPA) operator as part of `torch.nn.functional`. This function encompasses several implementations that can be applied depending on the inputs and the hardware in use. See the [official documentation](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html) or the [GPU Inference](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#pytorch-scaled-dot-product-attention) page for more information.

SDPA is used by default for `torch>=2.1.1` when an implementation is available, but you may also set `attn_implementation="sdpa"` in `from_pretrained()` to explicitly request SDPA to be used.

```py
import torch
from transformers import DeiTForImageClassification

model = DeiTForImageClassification.from_pretrained("facebook/deit-base-distilled-patch16-224", attn_implementation="sdpa", dtype=torch.float16)
...
```

For the best speedups, we recommend loading the model in half-precision (e.g. `torch.float16` or `torch.bfloat16`).

On a local benchmark (A100-40GB, PyTorch 2.3.0, OS Ubuntu 22.04) with `float32` and `facebook/deit-base-distilled-patch16-224` model, we saw the following speedups during inference.
| Batch size | Average inference time (ms), eager mode | Average inference time (ms), SDPA mode | Speedup, SDPA / eager (x) |
|------------|------------------------------------------|-----------------------------------------|----------------------------|
| 1          | 8                                        | 6                                       | 1.33                       |
| 2          | 9                                        | 6                                       | 1.5                        |
| 4          | 9                                        | 6                                       | 1.5                        |
| 8          | 8                                        | 6                                       | 1.33                       |

## Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DeiT.

<PipelineTag pipeline="image-classification"/>

- [`DeiTForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
- See also: [Image classification task guide](../tasks/image_classification)

Besides that:

- [`DeiTForMaskedImageModeling`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-pretraining).

If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.

## DeiTConfig

[[autodoc]] DeiTConfig

## DeiTFeatureExtractor

[[autodoc]] DeiTFeatureExtractor
    - __call__

## DeiTImageProcessor

[[autodoc]] DeiTImageProcessor
    - preprocess

## DeiTImageProcessorFast

[[autodoc]] DeiTImageProcessorFast
    - preprocess

## DeiTModel

[[autodoc]] DeiTModel
    - forward

## DeiTForMaskedImageModeling

[[autodoc]] DeiTForMaskedImageModeling
    - forward

## DeiTForImageClassification

[[autodoc]] DeiTForImageClassification
    - forward

## DeiTForImageClassificationWithTeacher

[[autodoc]] DeiTForImageClassificationWithTeacher
    - forward
<!--Copyright 2025 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

*This model was released on 2024-12-27 and added to Hugging Face Transformers on 2025-07-08.*

# Doge

## Overview

Doge is a series of small language models based on the [Doge](https://github.com/SmallDoges/small-doge) architecture. It aims to combine the advantages of state-space and self-attention algorithms, calculating dynamic masks from cached value states using the zero-order hold method to address the problem of existing mainstream language models getting lost in context. It uses the `wsd_scheduler` scheduler to pre-train on the `smollm-corpus`, and can continue training on new datasets or add sparse activation feedforward networks from stable-stage checkpoints.

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/refs%2Fpr%2F426/transformers/model_doc/doge_architecture.png" alt="drawing" width="600"/>

As shown in the figure above, the sequence transformation part of the Doge architecture uses `Dynamic Mask Attention`, which can be understood as using self-attention related to value states during training, and using state-space without past state decay during inference, to solve the problem of existing Transformers or SSMs getting lost in long text. The state transformation part of Doge uses `Cross Domain Mixture of Experts`, which consists of dense linear layers and sparse embedding layers, and can additionally increase sparse parameters to continue training from dense weight checkpoints without retraining the entire model, thereby reducing the cost of continuous iteration of the model. In addition, Doge also uses `RMSNorm` and `Residual` with learnable parameters to adapt the gradient range of deep models.

Check out all Doge model checkpoints [here](https://huggingface.co/collections/SmallDoge/doge-slm-679cc991f027c4a3abbded4a).
## Usage

<details>
<summary>Using Doge-Base for text generation</summary>

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("SmallDoge/Doge-20M")
model = AutoModelForCausalLM.from_pretrained("SmallDoge/Doge-20M")
inputs = tokenizer("Hey how are you doing?", return_tensors="pt")

outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.batch_decode(outputs))
```

</details>

<details>
<summary>Using Doge-Instruct for question answering</summary>

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig, TextStreamer

tokenizer = AutoTokenizer.from_pretrained("SmallDoge/Doge-20M-Instruct")
model = AutoModelForCausalLM.from_pretrained("SmallDoge/Doge-20M-Instruct")

generation_config = GenerationConfig(
    max_new_tokens=100,
    use_cache=True,
    do_sample=True,
    temperature=0.8,
    top_p=0.9,
    repetition_penalty=1.0
)
streamer = TextStreamer(tokenizer=tokenizer, skip_prompt=True)

prompt = "Hi, how are you doing today?"
conversation = [
    {"role": "user", "content": prompt}
]
inputs = tokenizer.apply_chat_template(
    conversation=conversation,
    tokenize=True,
    return_tensors="pt",
)

outputs = model.generate(
    inputs,
    tokenizer=tokenizer,
    generation_config=generation_config,
    streamer=streamer
)
```

</details>

## DogeConfig

[[autodoc]] DogeConfig

## DogeModel

[[autodoc]] DogeModel
    - forward

## DogeForCausalLM

[[autodoc]] DogeForCausalLM
    - forward

## DogeForSequenceClassification

[[autodoc]] DogeForSequenceClassification
    - forward
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

*This model was released on 2019-07-15 and added to Hugging Face Transformers on 2020-11-16.*

# FSMT

## Overview

FSMT (FairSeq Machine Translation) models were introduced in [Facebook FAIR's WMT19 News Translation Task Submission](https://huggingface.co/papers/1907.06616) by Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, Sergey Edunov.

The abstract of the paper is the following:

*This paper describes Facebook FAIR's submission to the WMT19 shared news translation task. We participate in two language pairs and four language directions, English <-> German and English <-> Russian. Following our submission from last year, our baseline systems are large BPE-based transformer models trained with the Fairseq sequence modeling toolkit which rely on sampled back-translations. This year we experiment with different bitext data filtering schemes, as well as with adding filtered back-translated data. We also ensemble and fine-tune our models on domain-specific data, then decode using noisy channel model reranking. Our submissions are ranked first in all four directions of the human evaluation campaign. On En->De, our system significantly outperforms other systems as well as human translations. This system improves upon our WMT'18 submission by 4.5 BLEU points.*

This model was contributed by [stas](https://huggingface.co/stas). The original code can be found [here](https://github.com/pytorch/fairseq/tree/master/examples/wmt19).

## Implementation Notes

- FSMT uses source and target vocabulary pairs that aren't combined into one. It doesn't share the embedding tokens either. Its tokenizer is very similar to [`XLMTokenizer`] and the main model is derived from [`BartModel`]. A minimal translation sketch is given at the end of this page.

## FSMTConfig

[[autodoc]] FSMTConfig

## FSMTTokenizer

[[autodoc]] FSMTTokenizer
    - build_inputs_with_special_tokens
    - get_special_tokens_mask
    - create_token_type_ids_from_sequences
    - save_vocabulary

## FSMTModel

[[autodoc]] FSMTModel
    - forward

## FSMTForConditionalGeneration

[[autodoc]] FSMTForConditionalGeneration
    - forward
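For illustration, here is a minimal translation sketch with [`FSMTForConditionalGeneration`]; it assumes the released WMT19 checkpoint `facebook/wmt19-en-de`:

```python
from transformers import FSMTForConditionalGeneration, FSMTTokenizer

tokenizer = FSMTTokenizer.from_pretrained("facebook/wmt19-en-de")
model = FSMTForConditionalGeneration.from_pretrained("facebook/wmt19-en-de")

# encode the English source sentence and decode the German beam-search output
inputs = tokenizer("Machine learning is great!", return_tensors="pt")
generated = model.generate(**inputs, num_beams=5)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```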
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> *This model was released on 2019-02-14 and added to Hugging Face Transformers on 2020-11-16.* <div style="float: right;"> <div class="flex flex-wrap space-x-1"> <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white"> <img alt="FlashAttention" src="https://img.shields.io/badge/%E2%9A%A1%EF%B8%8E%20FlashAttention-eae0c8?style=flat"> <img alt="SDPA" src="https://img.shields.io/badge/SDPA-DE3412?style=flat&logo=pytorch&logoColor=white"> </div> </div> # GPT-2 [GPT-2](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) is a scaled up version of GPT, a causal transformer language model, with 10x more parameters and training data. The model was pretrained on a 40GB dataset to predict the next word in a sequence based on all the previous words. This approach enabled the model to perform many downstream tasks in a zero-shot setting. The blog post released by OpenAI can be found [here](https://openai.com/index/better-language-models/). The model architecture uses a unidirectional (causal) attention mechanism where each token can only attend to previous tokens, making it particularly effective for text generation tasks. You can find all the original GPT-2 checkpoints under the [OpenAI community](https://huggingface.co/openai-community?search_models=gpt) organization. > [!TIP] > Click on the GPT-2 models in the right sidebar for more examples of how to apply GPT-2 to different language tasks. The example below demonstrates how to generate text with [`Pipeline`] or the [`AutoModel`], and from the command line. <hfoptions id="usage"> <hfoption id="Pipeline"> ```py import torch from transformers import pipeline pipeline = pipeline(task="text-generation", model="openai-community/gpt2", dtype=torch.float16, device=0) pipeline("Hello, I'm a language model") ``` </hfoption> <hfoption id="AutoModel"> ```py import torch from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2", dtype=torch.float16, device_map="auto", attn_implementation="sdpa") tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2") input_ids = tokenizer("Hello, I'm a language model", return_tensors="pt").to(model.device) output = model.generate(**input_ids, cache_implementation="static") print(tokenizer.decode(output[0], skip_special_tokens=True)) ``` </hfoption> <hfoption id="transformers CLI"> ```bash echo -e "Hello, I'm a language model" | transformers run --task text-generation --model openai-community/gpt2 --device 0 ``` </hfoption> </hfoptions> One can also serve the model using vLLM with the `transformers backend`. 
```bash
vllm serve openai-community/gpt2 --model-impl transformers
```

Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the [Quantization](../quantization/overview) overview for more available quantization backends.

The example below uses [bitsandbytes](../quantization/bitsandbytes) to only quantize the weights to 4-bits.

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype="float16",
    bnb_4bit_use_double_quant=True
)

model = AutoModelForCausalLM.from_pretrained(
    "openai-community/gpt2-xl",
    quantization_config=quantization_config,
    device_map="auto"
)

tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2-xl")
inputs = tokenizer("Once upon a time, there was a magical forest", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Notes

- Pad inputs on the right because GPT-2 uses absolute position embeddings.
- GPT-2 can reuse previously computed key-value attention pairs. Access this feature with the [past_key_values](https://huggingface.co/docs/transformers/en/model_doc/gpt2#transformers.GPT2Model.forward.past_key_values) parameter in [`GPT2Model.forward`] (see the sketch at the end of this page).
- Enable the [scale_attn_by_inverse_layer_idx](https://huggingface.co/docs/transformers/en/model_doc/gpt2#transformers.GPT2Config.scale_attn_by_inverse_layer_idx) and [reorder_and_upcast_attn](https://huggingface.co/docs/transformers/en/model_doc/gpt2#transformers.GPT2Config.reorder_and_upcast_attn) parameters to apply the training stability improvements from [Mistral](./mistral).

## GPT2Config

[[autodoc]] GPT2Config

## GPT2Tokenizer

[[autodoc]] GPT2Tokenizer
    - save_vocabulary

## GPT2TokenizerFast

[[autodoc]] GPT2TokenizerFast

## GPT2 specific outputs

[[autodoc]] models.gpt2.modeling_gpt2.GPT2DoubleHeadsModelOutput

## GPT2Model

[[autodoc]] GPT2Model
    - forward

## GPT2LMHeadModel

[[autodoc]] GPT2LMHeadModel
    - forward

## GPT2DoubleHeadsModel

[[autodoc]] GPT2DoubleHeadsModel
    - forward

## GPT2ForQuestionAnswering

[[autodoc]] GPT2ForQuestionAnswering
    - forward

## GPT2ForSequenceClassification

[[autodoc]] GPT2ForSequenceClassification
    - forward

## GPT2ForTokenClassification

[[autodoc]] GPT2ForTokenClassification
    - forward
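As a closing illustration of the `past_key_values` note above, here is a minimal sketch of incremental decoding; the greedy next-token choice is arbitrary and only meant to show how the cache is passed back in:

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")

inputs = tokenizer("Hello, I'm a language model", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, use_cache=True)
past = out.past_key_values  # cached key/value pairs for every layer

# on the next step, feed only the new token; the cache supplies the rest
next_token = out.logits[:, -1].argmax(-1, keepdim=True)
with torch.no_grad():
    out = model(input_ids=next_token, past_key_values=past)
```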
<!--Copyright 2025 The HuggingFace Team. All rights reserved. Licensed under the MIT License; you may not use this file except in compliance with the License. Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> *This model was released on 2023-06-23 and added to Hugging Face Transformers on 2025-06-17.* <div style="float: right;"> <div class="flex flex-wrap space-x-1"> <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white" > </div> </div> # LightGlue [LightGlue](https://huggingface.co/papers/2306.13643) is a deep neural network that learns to match local features across images. It revisits multiple design decisions of SuperGlue and derives simple but effective improvements. Cumulatively, these improvements make LightGlue more efficient - in terms of both memory and computation, more accurate, and much easier to train. Similar to [SuperGlue](https://huggingface.co/magic-leap-community/superglue_outdoor), this model consists of matching two sets of local features extracted from two images, with the goal of being faster than SuperGlue. Paired with the [SuperPoint model](https://huggingface.co/magic-leap-community/superpoint), it can be used to match two images and estimate the pose between them. You can find all the original LightGlue checkpoints under the [ETH-CVG](https://huggingface.co/ETH-CVG) organization. > [!TIP] > This model was contributed by [stevenbucaille](https://huggingface.co/stevenbucaille). > > Click on the LightGlue models in the right sidebar for more examples of how to apply LightGlue to different computer vision tasks. The example below demonstrates how to match keypoints between two images with the [`AutoModel`] class. <hfoptions id="usage"> <hfoption id="AutoModel"> ```py from transformers import AutoImageProcessor, AutoModel import torch from PIL import Image import requests url_image1 = "https://raw.githubusercontent.com/magicleap/SuperGluePretrainedNetwork/refs/heads/master/assets/phototourism_sample_images/united_states_capitol_98169888_3347710852.jpg" image1 = Image.open(requests.get(url_image1, stream=True).raw) url_image2 = "https://raw.githubusercontent.com/magicleap/SuperGluePretrainedNetwork/refs/heads/master/assets/phototourism_sample_images/united_states_capitol_26757027_6717084061.jpg" image2 = Image.open(requests.get(url_image2, stream=True).raw) images = [image1, image2] processor = AutoImageProcessor.from_pretrained("ETH-CVG/lightglue_superpoint") model = AutoModel.from_pretrained("ETH-CVG/lightglue_superpoint") inputs = processor(images, return_tensors="pt") with torch.no_grad(): outputs = model(**inputs) # Post-process to get keypoints and matches image_sizes = [[(image.height, image.width) for image in images]] processed_outputs = processor.post_process_keypoint_matching(outputs, image_sizes, threshold=0.2) ``` </hfoption> </hfoptions> ## Notes - LightGlue is adaptive to the task difficulty. Inference is much faster on image pairs that are intuitively easy to match, for example, because of a larger visual overlap or limited appearance change. 
```py
from transformers import AutoImageProcessor, AutoModel
import torch
from PIL import Image
import requests

processor = AutoImageProcessor.from_pretrained("ETH-CVG/lightglue_superpoint")
model = AutoModel.from_pretrained("ETH-CVG/lightglue_superpoint")

# load the same image pair as in the example above
url_image1 = "https://raw.githubusercontent.com/magicleap/SuperGluePretrainedNetwork/refs/heads/master/assets/phototourism_sample_images/united_states_capitol_98169888_3347710852.jpg"
image1 = Image.open(requests.get(url_image1, stream=True).raw)
url_image2 = "https://raw.githubusercontent.com/magicleap/SuperGluePretrainedNetwork/refs/heads/master/assets/phototourism_sample_images/united_states_capitol_26757027_6717084061.jpg"
image2 = Image.open(requests.get(url_image2, stream=True).raw)

# LightGlue requires pairs of images
images = [image1, image2]

inputs = processor(images, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Extract matching information
keypoints0 = outputs.keypoints0            # Keypoints in first image
keypoints1 = outputs.keypoints1            # Keypoints in second image
matches = outputs.matches                  # Matching indices
matching_scores = outputs.matching_scores  # Confidence scores
```

- The model outputs matching indices, keypoints, and confidence scores for each match, similar to SuperGlue but with improved efficiency.
- For better visualization and analysis, use the [`LightGlueImageProcessor.post_process_keypoint_matching`] method to get matches in a more readable format.

```py
# Process outputs for visualization
image_sizes = [[(image.height, image.width) for image in images]]
processed_outputs = processor.post_process_keypoint_matching(outputs, image_sizes, threshold=0.2)

for i, output in enumerate(processed_outputs):
    print(f"For the image pair {i}")
    for keypoint0, keypoint1, matching_score in zip(
        output["keypoints0"], output["keypoints1"], output["matching_scores"]
    ):
        print(f"Keypoint at {keypoint0.numpy()} matches with keypoint at {keypoint1.numpy()} with score {matching_score}")
```

- Visualize the matches between the images using the built-in plotting functionality.

```py
# Easy visualization using the built-in plotting method
processor.visualize_keypoint_matching(images, processed_outputs)
```

<div class="flex justify-center">
    <img src="https://cdn-uploads.huggingface.co/production/uploads/632885ba1558dac67c440aa8/duPp09ty8NRZlMZS18ccP.png">
</div>

## Resources

- Refer to the [original LightGlue repository](https://github.com/cvg/LightGlue) for more examples and implementation details.

## LightGlueConfig

[[autodoc]] LightGlueConfig

## LightGlueImageProcessor

[[autodoc]] LightGlueImageProcessor
    - preprocess
    - post_process_keypoint_matching
    - visualize_keypoint_matching

<frameworkcontent>
<pt>

## LightGlueForKeypointMatching

[[autodoc]] LightGlueForKeypointMatching
    - forward

</pt>
</frameworkcontent>
<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

*This model was released on 2023-12-01 and added to Hugging Face Transformers on 2024-03-05.*

<div style="float: right;">
    <div class="flex flex-wrap space-x-1">
        <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
    </div>
</div>

# Mamba

[Mamba](https://huggingface.co/papers/2312.00752) is a selective structured state space model (SSM) designed to work around Transformers' computational inefficiency when dealing with long sequences. It is a completely attention-free architecture, composed of a combination of H3 and gated MLP blocks (the Mamba block). Mamba's "content-based reasoning" allows it to focus on specific parts of an input depending on the current token. Mamba also uses a new hardware-aware parallel algorithm to compensate for the lack of convolutional operations. As a result, Mamba has fast inference and can scale to very long sequences.

You can find all the original Mamba checkpoints under the [State Space Models](https://huggingface.co/state-spaces) organization.

> [!TIP]
> This model was contributed by [Molbap](https://huggingface.co/Molbap) and [AntonV](https://huggingface.co/AntonV).
> Click on the Mamba models in the right sidebar for more examples of how to apply Mamba to different language tasks.

The example below demonstrates how to generate text with [`Pipeline`], [`AutoModel`], and from the command line.

<hfoptions id="usage">
<hfoption id="Pipeline">

```py
import torch
from transformers import pipeline

pipeline = pipeline(
    task="text-generation",
    model="state-spaces/mamba-130m-hf",
    dtype=torch.float16,
    device=0
)
pipeline("Plants create energy through a process known as")
```

</hfoption>
<hfoption id="AutoModel">

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("state-spaces/mamba-130m-hf")
model = AutoModelForCausalLM.from_pretrained("state-spaces/mamba-130m-hf", dtype=torch.float16, device_map="auto")

input_ids = tokenizer("Plants create energy through a process known as", return_tensors="pt").to(model.device)

output = model.generate(**input_ids)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

</hfoption>
<hfoption id="transformers CLI">

```bash
echo -e "Plants create energy through a process known as" | transformers run --task text-generation --model state-spaces/mamba-130m-hf --device 0
```

</hfoption>
</hfoptions>

Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the [Quantization](../quantization/overview) overview for more available quantization backends.

The example below uses [torchao](../quantization/torchao) to only quantize the weights to 4-bit integers.
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TorchAoConfig
from torchao.quantization import Int4WeightOnlyConfig

quant_config = Int4WeightOnlyConfig(group_size=128)
quantization_config = TorchAoConfig(quant_type=quant_config)

tokenizer = AutoTokenizer.from_pretrained("state-spaces/mamba-2.8b-hf")
model = AutoModelForCausalLM.from_pretrained("state-spaces/mamba-2.8b-hf", dtype=torch.bfloat16, quantization_config=quantization_config, device_map="auto")

input_ids = tokenizer("Plants create energy through a process known as", return_tensors="pt").to(model.device)

output = model.generate(**input_ids)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

## Notes

- The current implementation uses the original CUDA kernels. The FlashAttention-equivalent implementation is hosted in the [mamba-ssm](https://github.com/state-spaces/mamba) and [causal_conv1d](https://github.com/Dao-AILab/causal-conv1d) repositories. Make sure to install them if your hardware supports them!
- Mamba stacks `mixer` layers which are equivalent to `Attention` layers. You can find the main logic of Mamba in the `MambaMixer` class.
- The example below demonstrates how to fine-tune Mamba with [PEFT](https://huggingface.co/docs/peft).

```py
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer
from peft import LoraConfig

model_id = "state-spaces/mamba-130m-hf"
dataset = load_dataset("Abirate/english_quotes", split="train")
training_args = SFTConfig(dataset_text_field="quote")
lora_config = LoraConfig(target_modules=["x_proj", "embeddings", "in_proj", "out_proj"])
trainer = SFTTrainer(
    model=model_id,
    args=training_args,
    train_dataset=dataset,
    peft_config=lora_config,
)
trainer.train()
```

## MambaCache

[[autodoc]] MambaCache
    - update_conv_state
    - update_ssm_state
    - reset

## MambaConfig

[[autodoc]] MambaConfig

## MambaModel

[[autodoc]] MambaModel
    - forward

## MambaForCausalLM

[[autodoc]] MambaForCausalLM
    - forward
<!--Copyright 2023 Mistral AI and The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

*This model was released on 2023-10-10 and added to Hugging Face Transformers on 2023-09-27.*

<div style="float: right;">
    <div class="flex flex-wrap space-x-1">
        <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
        <img alt="FlashAttention" src="https://img.shields.io/badge/%E2%9A%A1%EF%B8%8E%20FlashAttention-eae0c8?style=flat">
        <img alt="SDPA" src="https://img.shields.io/badge/SDPA-DE3412?style=flat&logo=pytorch&logoColor=white">
        <img alt="Tensor parallelism" src="https://img.shields.io/badge/Tensor%20parallelism-06b6d4?style=flat&logoColor=white">
    </div>
</div>

# Mistral

[Mistral](https://huggingface.co/papers/2310.06825) is a 7B parameter language model, available as a pretrained and instruction-tuned variant, focused on balancing the scaling costs of large models with performance and efficient inference. This model uses sliding window attention (SWA) trained with an 8K context length and a fixed cache size to handle longer sequences more effectively. Grouped-query attention (GQA) speeds up inference and reduces memory requirements. Mistral also features a byte-fallback BPE tokenizer to improve token handling and efficiency by ensuring characters are never mapped to out-of-vocabulary tokens.

You can find all the original Mistral checkpoints under the [Mistral AI](https://huggingface.co/mistralai) organization.

> [!TIP]
> Click on the Mistral models in the right sidebar for more examples of how to apply Mistral to different language tasks.

The example below demonstrates how to chat with [`Pipeline`] or the [`AutoModel`] class, and from the command line.

<hfoptions id="usage">
<hfoption id="Pipeline">

```python
>>> import torch
>>> from transformers import pipeline

>>> messages = [
...     {"role": "user", "content": "What is your favourite condiment?"},
...     {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
...     {"role": "user", "content": "Do you have mayonnaise recipes?"}
... ]

>>> chatbot = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.3", dtype=torch.bfloat16, device=0)
>>> chatbot(messages)
```

</hfoption>
<hfoption id="AutoModel">

```python
>>> import torch
>>> from transformers import AutoModelForCausalLM, AutoTokenizer

>>> model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3", dtype=torch.bfloat16, attn_implementation="sdpa", device_map="auto")
>>> tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3")

>>> messages = [
...     {"role": "user", "content": "What is your favourite condiment?"},
{"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"}, ... {"role": "user", "content": "Do you have mayonnaise recipes?"} ... ] >>> model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device) >>> generated_ids = model.generate(model_inputs, max_new_tokens=100, do_sample=True) >>> tokenizer.batch_decode(generated_ids)[0] "Mayonnaise can be made as follows: (...)" ``` </hfoption> <hfoption id="transformers CLI"> ```python echo -e "My favorite condiment is" | transformers chat mistralai/Mistral-7B-v0.3 --dtype auto --device 0 --attn_implementation flash_attention_2 ``` </hfoption> </hfoptions> Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the [Quantization](../quantization/overview) overview for more available quantization backends. The example below uses [bitsandbytes](../quantization/bitsandbytes) to only quantize the weights to 4-bits. ```python >>> import torch >>> from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig >>> # specify how to quantize the model >>> quantization_config = BitsAndBytesConfig( ... load_in_4bit=True, ... bnb_4bit_quant_type="nf4", ... bnb_4bit_compute_dtype="torch.float16", ... ) >>> model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3", quantization_config=True, dtype=torch.bfloat16, device_map="auto") >>> tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3") >>> prompt = "My favourite condiment is" >>> messages = [ ... {"role": "user", "content": "What is your favourite condiment?"}, ... {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"}, ... {"role": "user", "content": "Do you have mayonnaise recipes?"} ... ] >>> model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device) >>> generated_ids = model.generate(model_inputs, max_new_tokens=100, do_sample=True) >>> tokenizer.batch_decode(generated_ids)[0] "The expected output" ``` Use the [AttentionMaskVisualizer](https://github.com/huggingface/transformers/blob/beb9b5b02246b9b7ee81ddf938f93f44cfeaad19/src/transformers/utils/attention_visualizer.py#L139) to better understand what tokens the model can and cannot attend to. ```py >>> from transformers.utils.attention_visualizer import AttentionMaskVisualizer >>> visualizer = AttentionMaskVisualizer("mistralai/Mistral-7B-Instruct-v0.3") >>> visualizer("Do you have mayonnaise recipes?") ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/mistral-attn-mask.png"/> </div> ## MistralConfig [[autodoc]] MistralConfig ## MistralCommonTokenizer [[autodoc]] MistralCommonTokenizer ## MistralModel [[autodoc]] MistralModel - forward ## MistralForCausalLM [[autodoc]] MistralForCausalLM - forward ## MistralForSequenceClassification [[autodoc]] MistralForSequenceClassification - forward ## MistralForTokenClassification [[autodoc]] MistralForTokenClassification - forward ## MistralForQuestionAnswering [[autodoc]] MistralForQuestionAnswering - forward
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> *This model was released on 2024-02-01 and added to Hugging Face Transformers on 2024-04-17.* <div style="float: right;"> <div class="flex flex-wrap space-x-1"> <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white"> <img alt="FlashAttention" src="https://img.shields.io/badge/%E2%9A%A1%EF%B8%8E%20FlashAttention-eae0c8?style=flat"> <img alt="SDPA" src="https://img.shields.io/badge/SDPA-DE3412?style=flat&logo=pytorch&logoColor=white"> <img alt="Tensor parallelism" src="https://img.shields.io/badge/Tensor%20parallelism-06b6d4?style=flat&logoColor=white"> </div> </div> # OLMo [OLMo](https://huggingface.co/papers/2402.00838) is a 7B-parameter dense language model. It uses SwiGLU activations, non-parametric layer normalization, rotary positional embeddings, and a BPE tokenizer that masks personally identifiable information. It is pretrained on [Dolma](https://huggingface.co/datasets/allenai/dolma), a 3T-token dataset. OLMo was released to provide complete transparency of not just the model weights but the training data, training code, and evaluation code to enable more research on language models. You can find all the original OLMo checkpoints under the [OLMo](https://huggingface.co/collections/allenai/olmo-suite-65aeaae8fe5b6b2122b46778) collection. > [!TIP] > This model was contributed by [shanearora](https://huggingface.co/shanearora). > > Click on the OLMo models in the right sidebar for more examples of how to apply OLMo to different language tasks. The example below demonstrates how to generate text with [`Pipeline`] or the [`AutoModel`] class. 
<hfoptions id="usage"> <hfoption id="Pipeline"> ```py import torch from transformers import pipeline pipe = pipeline( task="text-generation", model="allenai/OLMo-7B-hf", dtype=torch.float16, device=0, ) result = pipe("Plants create energy through a process known as") print(result) ``` </hfoption> <hfoption id="AutoModel"> ```py import torch from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained( "allenai/OLMo-7B-hf" ) model = AutoModelForCausalLM.from_pretrained( "allenai/OLMo-7B-hf", dtype=torch.float16, device_map="auto", attn_implementation="sdpa" ) input_ids = tokenizer("Plants create energy through a process known as", return_tensors="pt").to(model.device) output = model.generate(**input_ids, max_length=50, cache_implementation="static") print(tokenizer.decode(output[0], skip_special_tokens=True)) ``` </hfoption> <hfoption id="transformers CLI"> ```bash echo -e "Plants create energy through a process known as" | transformers run --task text-generation --model allenai/OLMo-7B-hf --device 0 ``` </hfoption> </hfoptions> Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the [Quantization](../quantization/overview) overview for more available quantization backends. The example below uses [bitsandbytes](../quantization/bitsandbytes) to only quantize the weights to 4-bits. ```py import torch from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig quantization_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16, bnb_4bit_use_double_quant=True, bnb_4bit_quant_type="nf4" ) model = AutoModelForCausalLM.from_pretrained( "allenai/OLMo-7B-hf", attn_implementation="sdpa", dtype=torch.float16, device_map="auto", quantization_config=quantization_config ) tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-7B-hf") inputs = tokenizer("Bitcoin is", return_tensors="pt") inputs = {k: v.to(model.device) for k, v in inputs.items()} output = model.generate(**inputs, max_length=64) print(tokenizer.decode(output[0])) ``` ## OlmoConfig [[autodoc]] OlmoConfig ## OlmoModel [[autodoc]] OlmoModel - forward ## OlmoForCausalLM [[autodoc]] OlmoForCausalLM - forward
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> *This model was released on 2021-07-30 and added to Hugging Face Transformers on 2021-12-08.* # Perceiver <div class="flex flex-wrap space-x-1"> <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white"> </div> ## Overview The Perceiver IO model was proposed in [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://huggingface.co/papers/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira. Perceiver IO is a generalization of [Perceiver](https://huggingface.co/papers/2103.03206) to handle arbitrary outputs in addition to arbitrary inputs. The original Perceiver only produced a single classification label. In addition to classification labels, Perceiver IO can produce (for example) language, optical flow, and multimodal videos with audio. This is done using the same building blocks as the original Perceiver. The computational complexity of Perceiver IO is linear in the input and output size and the bulk of the processing occurs in the latent space, allowing us to process inputs and outputs that are much larger than can be handled by standard Transformers. This means, for example, Perceiver IO can do BERT-style masked language modeling directly using bytes instead of tokenized inputs. The abstract from the paper is the following: *The recently-proposed Perceiver model obtains good results on several domains (images, audio, multimodal, point clouds) while scaling linearly in compute and memory with the input size. While the Perceiver supports many kinds of inputs, it can only produce very simple outputs such as class scores. Perceiver IO overcomes this limitation without sacrificing the original's appealing properties by learning to flexibly query the model's latent space to produce outputs of arbitrary size and semantics. Perceiver IO still decouples model depth from data size and still scales linearly with data size, but now with respect to both input and output sizes. The full Perceiver IO model achieves strong results on tasks with highly structured output spaces, such as natural language and visual understanding, StarCraft II, and multi-task and multi-modal domains. As highlights, Perceiver IO matches a Transformer-based BERT baseline on the GLUE language benchmark without the need for input tokenization and achieves state-of-the-art performance on Sintel optical flow estimation.* Here's a TLDR explaining how Perceiver works: The main problem with the self-attention mechanism of the Transformer is that the time and memory requirements scale quadratically with the sequence length. 
Hence, models like BERT and RoBERTa are limited to a max sequence length of 512 tokens. Perceiver aims to solve this issue by, instead of performing self-attention on the inputs, performing it on a set of latent variables, and only using the inputs for cross-attention. In this way, the time and memory requirements don't depend on the length of the inputs anymore, as one uses a fixed number of latent variables, like 256 or 512. These are randomly initialized, after which they are trained end-to-end using backpropagation.

Internally, [`PerceiverModel`] will create the latents, which is a tensor of shape `(batch_size, num_latents, d_latents)`. One must provide `inputs` (which could be text, images, audio, you name it!) to the model, which it will use to perform cross-attention with the latents. The output of the Perceiver encoder is a tensor of the same shape. One can then, similar to BERT, convert the last hidden states of the latents to classification logits by averaging along the sequence dimension, and placing a linear layer on top of that to project the `d_latents` to `num_labels`.

This was the idea of the original Perceiver paper. However, it could only output classification logits. In a follow-up work, PerceiverIO, they generalized it to let the model also produce outputs of arbitrary size. How, you might ask? The idea is actually relatively simple: one defines outputs of an arbitrary size, and then applies cross-attention with the last hidden states of the latents, using the outputs as queries, and the latents as keys and values.

So let's say one wants to perform masked language modeling (BERT-style) with the Perceiver. As the Perceiver's input length will not have an impact on the computation time of the self-attention layers, one can provide raw bytes, providing `inputs` of length 2048 to the model. If one now masks out some of these 2048 tokens, one can define the `outputs` as being of shape `(batch_size, 2048, 768)`. Next, one performs cross-attention with the final hidden states of the latents to update the `outputs` tensor. After cross-attention, one still has a tensor of shape `(batch_size, 2048, 768)`. One can then place a regular language modeling head on top, to project the last dimension to the vocabulary size of the model, i.e. creating logits of shape `(batch_size, 2048, 262)` (as Perceiver uses a vocabulary size of 262 byte IDs).
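To make this concrete, here is a small sketch with [`PerceiverForMaskedLM`]; it assumes the `deepmind/language-perceiver` checkpoint, whose tokenizer pads to the model's 2048-byte input length:

```py
import torch
from transformers import PerceiverTokenizer, PerceiverForMaskedLM

tokenizer = PerceiverTokenizer.from_pretrained("deepmind/language-perceiver")
model = PerceiverForMaskedLM.from_pretrained("deepmind/language-perceiver")

# the tokenizer operates on raw UTF-8 bytes, padded to length 2048
inputs = tokenizer("This is an incomplete sentence where some words are missing.", padding="max_length", return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# one 262-way byte prediction per position, as described above
print(outputs.logits.shape)  # torch.Size([1, 2048, 262])
```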
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/perceiver_architecture.jpg" alt="drawing" width="600"/>

<small> Perceiver IO architecture. Taken from the <a href="https://huggingface.co/papers/2107.14795">original paper</a> </small>

This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/deepmind/deepmind-research/tree/master/perceiver).

<Tip warning={true}>

Perceiver does **not** work with `torch.nn.DataParallel` due to a bug in PyTorch, see [issue #36035](https://github.com/pytorch/pytorch/issues/36035)

</Tip>

## Resources

- The quickest way to get started with the Perceiver is by checking the [tutorial notebooks](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/Perceiver).
- Refer to the [blog post](https://huggingface.co/blog/perceiver) if you want to fully understand how the model works and is implemented in the library. Note that the models available in the library only showcase some examples of what you can do with the Perceiver. There are many more use cases, including question answering, named-entity recognition, object detection, audio classification, video classification, etc.
- [Text classification task guide](../tasks/sequence_classification)
- [Masked language modeling task guide](../tasks/masked_language_modeling)
- [Image classification task guide](../tasks/image_classification)

## Perceiver specific outputs

[[autodoc]] models.perceiver.modeling_perceiver.PerceiverModelOutput

[[autodoc]] models.perceiver.modeling_perceiver.PerceiverDecoderOutput

[[autodoc]] models.perceiver.modeling_perceiver.PerceiverMaskedLMOutput

[[autodoc]] models.perceiver.modeling_perceiver.PerceiverClassifierOutput

## PerceiverConfig

[[autodoc]] PerceiverConfig

## PerceiverTokenizer

[[autodoc]] PerceiverTokenizer
    - __call__

## PerceiverFeatureExtractor

[[autodoc]] PerceiverFeatureExtractor
    - __call__

## PerceiverImageProcessor

[[autodoc]] PerceiverImageProcessor
    - preprocess

## PerceiverImageProcessorFast

[[autodoc]] PerceiverImageProcessorFast
    - preprocess

## PerceiverTextPreprocessor

[[autodoc]] models.perceiver.modeling_perceiver.PerceiverTextPreprocessor

## PerceiverImagePreprocessor

[[autodoc]] models.perceiver.modeling_perceiver.PerceiverImagePreprocessor

## PerceiverOneHotPreprocessor

[[autodoc]] models.perceiver.modeling_perceiver.PerceiverOneHotPreprocessor

## PerceiverAudioPreprocessor

[[autodoc]] models.perceiver.modeling_perceiver.PerceiverAudioPreprocessor

## PerceiverMultimodalPreprocessor

[[autodoc]] models.perceiver.modeling_perceiver.PerceiverMultimodalPreprocessor

## PerceiverProjectionDecoder

[[autodoc]] models.perceiver.modeling_perceiver.PerceiverProjectionDecoder

## PerceiverBasicDecoder

[[autodoc]] models.perceiver.modeling_perceiver.PerceiverBasicDecoder

## PerceiverClassificationDecoder

[[autodoc]] models.perceiver.modeling_perceiver.PerceiverClassificationDecoder

## PerceiverOpticalFlowDecoder

[[autodoc]] models.perceiver.modeling_perceiver.PerceiverOpticalFlowDecoder

## PerceiverBasicVideoAutoencodingDecoder

[[autodoc]] models.perceiver.modeling_perceiver.PerceiverBasicVideoAutoencodingDecoder

## PerceiverMultimodalDecoder

[[autodoc]] models.perceiver.modeling_perceiver.PerceiverMultimodalDecoder

## PerceiverProjectionPostprocessor

[[autodoc]] models.perceiver.modeling_perceiver.PerceiverProjectionPostprocessor

## PerceiverAudioPostprocessor

[[autodoc]] models.perceiver.modeling_perceiver.PerceiverAudioPostprocessor

## PerceiverClassificationPostprocessor

[[autodoc]] models.perceiver.modeling_perceiver.PerceiverClassificationPostprocessor

## PerceiverMultimodalPostprocessor

[[autodoc]] models.perceiver.modeling_perceiver.PerceiverMultimodalPostprocessor

## PerceiverModel

[[autodoc]] PerceiverModel
    - forward

## PerceiverForMaskedLM

[[autodoc]] PerceiverForMaskedLM
    - forward

## PerceiverForSequenceClassification

[[autodoc]] PerceiverForSequenceClassification
    - forward

## PerceiverForImageClassificationLearned

[[autodoc]] PerceiverForImageClassificationLearned
    - forward

## PerceiverForImageClassificationFourier

[[autodoc]] PerceiverForImageClassificationFourier
    - forward

## PerceiverForImageClassificationConvProcessing

[[autodoc]] PerceiverForImageClassificationConvProcessing
    - forward

## PerceiverForOpticalFlow

[[autodoc]] PerceiverForOpticalFlow
    - forward

## PerceiverForMultimodalAutoencoding

[[autodoc]] PerceiverForMultimodalAutoencoding
    - forward
<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

-->

*This model was released on 2021-06-25 and added to Hugging Face Transformers on 2024-03-13.*

# Pyramid Vision Transformer V2 (PVTv2)

<div class="flex flex-wrap space-x-1">
    <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
</div>

## Overview

The PVTv2 model was proposed in [PVT v2: Improved Baselines with Pyramid Vision Transformer](https://huggingface.co/papers/2106.13797) by Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, and Ling Shao. As an improved variant of PVT, it eschews position embeddings, relying instead on positional information encoded through zero-padding and overlapping patch embeddings. This lack of reliance on position embeddings simplifies the architecture and enables running inference at any resolution without needing to interpolate them.

The PVTv2 encoder structure has been successfully deployed to achieve state-of-the-art scores in [Segformer](https://huggingface.co/papers/2105.15203) for semantic segmentation, [GLPN](https://huggingface.co/papers/2201.07436) for monocular depth, and [Panoptic Segformer](https://huggingface.co/papers/2109.03814) for panoptic segmentation.

PVTv2 belongs to a family of models called [hierarchical transformers](https://natecibik.medium.com/the-rise-of-vision-transformers-f623c980419f), which make adaptations to transformer layers in order to generate multi-scale feature maps. Unlike the columnal structure of Vision Transformer ([ViT](https://huggingface.co/papers/2010.11929)), which loses fine-grained detail, multi-scale feature maps are known to preserve this detail and aid performance in dense prediction tasks. In the case of PVTv2, this is achieved by generating image patch tokens using 2D convolution with overlapping kernels in each encoder layer.

The multi-scale features of hierarchical transformers allow them to be easily swapped in for traditional workhorse computer vision backbone models like ResNet in larger architectures. Both Segformer and Panoptic Segformer demonstrated that configurations using PVTv2 for a backbone consistently outperformed those with similarly sized ResNet backbones.

Another powerful feature of the PVTv2 is the complexity reduction in the self-attention layers called Spatial Reduction Attention (SRA), which uses 2D convolution layers to project hidden states to a smaller resolution before attending to them with the queries, improving the $O(n^2)$ complexity of self-attention to $O(n^2/R^2)$, with $R$ being the spatial reduction ratio (`sr_ratio`, aka kernel size and stride in the 2D convolution).

SRA was introduced in PVT, and is the default attention complexity reduction method used in PVTv2. However, PVTv2 also introduced the option of using a self-attention mechanism with linear complexity related to image size, which they called "Linear SRA".
This method uses average pooling to reduce the hidden states to a fixed size that is invariant to their original resolution (although this is inherently more lossy than regular SRA). This option can be enabled by setting `linear_attention` to `True` in the `PvtV2Config`.

### Abstract from the paper:

*Transformer recently has presented encouraging progress in computer vision. In this work, we present new baselines by improving the original Pyramid Vision Transformer (PVT v1) by adding three designs, including (1) linear complexity attention layer, (2) overlapping patch embedding, and (3) convolutional feed-forward network. With these modifications, PVT v2 reduces the computational complexity of PVT v1 to linear and achieves significant improvements on fundamental vision tasks such as classification, detection, and segmentation. Notably, the proposed PVT v2 achieves comparable or better performances than recent works such as Swin Transformer. We hope this work will facilitate state-of-the-art Transformer researches in computer vision. Code is available at https://github.com/whai362/PVT.*

This model was contributed by [FoamoftheSea](https://huggingface.co/FoamoftheSea). The original code can be found [here](https://github.com/whai362/PVT).

## Usage tips

- [PVTv2](https://huggingface.co/papers/2106.13797) is a hierarchical transformer model which has demonstrated powerful performance in image classification and multiple other tasks, used as a backbone for semantic segmentation in [Segformer](https://huggingface.co/papers/2105.15203), monocular depth estimation in [GLPN](https://huggingface.co/papers/2201.07436), and panoptic segmentation in [Panoptic Segformer](https://huggingface.co/papers/2109.03814), consistently showing higher performance than similar ResNet configurations.
- Hierarchical transformers like PVTv2 achieve superior data and parameter efficiency on image data compared with pure transformer architectures by incorporating design elements of convolutional neural networks (CNNs) into their encoders. This creates a best-of-both-worlds architecture that infuses the useful inductive biases of CNNs like translation equivariance and locality into the network while still enjoying the benefits of dynamic data response and global relationship modeling provided by the self-attention mechanism of [transformers](https://huggingface.co/papers/1706.03762).
- PVTv2 uses overlapping patch embeddings to create multi-scale feature maps, which are infused with location information using zero-padding and depth-wise convolutions.
- To reduce the complexity in the attention layers, PVTv2 performs a spatial reduction on the hidden states using either strided 2D convolution (SRA) or fixed-size average pooling (Linear SRA). Although inherently more lossy, Linear SRA provides impressive performance with a linear complexity with respect to image size. To use Linear SRA in the self-attention layers, set `linear_attention=True` in the `PvtV2Config` (see the short sketch after this list).
- [`PvtV2Model`] is the hierarchical transformer encoder (which is also often referred to as Mix Transformer or MiT in the literature). [`PvtV2ForImageClassification`] adds a simple classifier head on top to perform Image Classification. [`PvtV2Backbone`] can be used with the [`AutoBackbone`] system in larger architectures like Deformable DETR.
- ImageNet pretrained weights for all model sizes can be found on the [hub](https://huggingface.co/models?other=pvt_v2).
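Building on the Linear SRA tip above, here is a short configuration sketch; it builds a randomly initialized model, so it is only meant to show where the flag lives:

```python
from transformers import PvtV2Config, PvtV2Model

# enable fixed-size average-pooling attention (Linear SRA) in the self-attention layers
config = PvtV2Config(linear_attention=True)
model = PvtV2Model(config)
print(model.config.linear_attention)  # True
```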
The best way to get started with the PVTv2 is to load the pretrained checkpoint with the size of your choosing using `AutoModelForImageClassification`:

```python
import requests
from transformers import AutoModelForImageClassification, AutoImageProcessor
from PIL import Image

model = AutoModelForImageClassification.from_pretrained("OpenGVLab/pvt_v2_b0")
image_processor = AutoImageProcessor.from_pretrained("OpenGVLab/pvt_v2_b0")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
processed = image_processor(image, return_tensors="pt")
outputs = model(**processed)
```

To use the PVTv2 as a backbone for more complex architectures like DeformableDETR, you can use AutoBackbone (this model would need fine-tuning as you're replacing the backbone in the pretrained model):

```python
import requests
from transformers import AutoConfig, AutoModelForObjectDetection, AutoImageProcessor
from PIL import Image

model = AutoModelForObjectDetection.from_config(
    config=AutoConfig.from_pretrained(
        "SenseTime/deformable-detr",
        backbone_config=AutoConfig.from_pretrained("OpenGVLab/pvt_v2_b5"),
        use_timm_backbone=False
    ),
)

image_processor = AutoImageProcessor.from_pretrained("SenseTime/deformable-detr")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
processed = image_processor(image, return_tensors="pt")
outputs = model(**processed)
```

[PVTv2](https://github.com/whai362/PVT/tree/v2) performance on ImageNet-1K by model size (B0-B5):

| Method           | Size | Acc@1 | #Params (M) |
|------------------|:----:|:-----:|:-----------:|
| PVT-V2-B0        | 224  | 70.5  | 3.7         |
| PVT-V2-B1        | 224  | 78.7  | 14.0        |
| PVT-V2-B2-Linear | 224  | 82.1  | 22.6        |
| PVT-V2-B2        | 224  | 82.0  | 25.4        |
| PVT-V2-B3        | 224  | 83.1  | 45.2        |
| PVT-V2-B4        | 224  | 83.6  | 62.6        |
| PVT-V2-B5        | 224  | 83.8  | 82.0        |

## PvtV2Config

[[autodoc]] PvtV2Config

## PvtV2ForImageClassification

[[autodoc]] PvtV2ForImageClassification
    - forward

## PvtV2Model

[[autodoc]] PvtV2Model
    - forward
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

*This model was released on 2015-12-10 and added to Hugging Face Transformers on 2022-03-14.*

# ResNet

<div class="flex flex-wrap space-x-1">
    <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
</div>

## Overview

The ResNet model was proposed in [Deep Residual Learning for Image Recognition](https://huggingface.co/papers/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren and Jian Sun. Our implementation follows the small changes made by [Nvidia](https://catalog.ngc.nvidia.com/orgs/nvidia/resources/resnet_50_v1_5_for_pytorch): we apply `stride=2` for downsampling in the bottleneck's `3x3` conv and not in the first `1x1`. This is generally known as "ResNet v1.5".

ResNet introduced residual connections, which make it possible to train networks with an unprecedented number of layers (up to 1,000). ResNet won the 2015 ILSVRC & COCO competitions, an important milestone in deep computer vision.

The abstract from the paper is the following:

*Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.*

The figure below illustrates the architecture of ResNet. Taken from the [original paper](https://huggingface.co/papers/1512.03385).

<img width="600" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/resnet_architecture.png"/>

This model was contributed by [Francesco](https://huggingface.co/Francesco). The original code can be found [here](https://github.com/KaimingHe/deep-residual-networks).
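Before diving into the resources below, here is a minimal classification sketch; `microsoft/resnet-50` is a commonly used ImageNet-1k checkpoint, and the image URL is just an example input:

```py
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, ResNetForImageClassification

image_processor = AutoImageProcessor.from_pretrained("microsoft/resnet-50")
model = ResNetForImageClassification.from_pretrained("microsoft/resnet-50")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# map the top class index back to a human-readable label
print(model.config.id2label[logits.argmax(-1).item()])
```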
## Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ResNet.

<PipelineTag pipeline="image-classification"/>

- [`ResNetForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
- See also: [Image classification task guide](../tasks/image_classification)

If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.

## ResNetConfig

[[autodoc]] ResNetConfig

## ResNetModel

[[autodoc]] ResNetModel
    - forward

## ResNetForImageClassification

[[autodoc]] ResNetForImageClassification
    - forward
transformers/docs/source/en/model_doc/resnet.md/0
{ "file_path": "transformers/docs/source/en/model_doc/resnet.md", "repo_id": "transformers", "token_count": 1162 }
397
<!--Copyright 2021 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

*This model was released on 2021-05-31 and added to Hugging Face Transformers on 2021-10-28.*

# SegFormer

<div class="flex flex-wrap space-x-1">
<img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
</div>

## Overview

The SegFormer model was proposed in [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://huggingface.co/papers/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo. The model consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head, and achieves great results on image segmentation benchmarks such as ADE20K and Cityscapes.

The abstract from the paper is the following:

*We present SegFormer, a simple, efficient yet powerful semantic segmentation framework which unifies Transformers with lightweight multilayer perception (MLP) decoders. SegFormer has two appealing features: 1) SegFormer comprises a novel hierarchically structured Transformer encoder which outputs multiscale features. It does not need positional encoding, thereby avoiding the interpolation of positional codes which leads to decreased performance when the testing resolution differs from training. 2) SegFormer avoids complex decoders. The proposed MLP decoder aggregates information from different layers, and thus combining both local attention and global attention to render powerful representations. We show that this simple and lightweight design is the key to efficient segmentation on Transformers. We scale our approach up to obtain a series of models from SegFormer-B0 to SegFormer-B5, reaching significantly better performance and efficiency than previous counterparts. For example, SegFormer-B4 achieves 50.3% mIoU on ADE20K with 64M parameters, being 5x smaller and 2.2% better than the previous best method. Our best model, SegFormer-B5, achieves 84.0% mIoU on Cityscapes validation set and shows excellent zero-shot robustness on Cityscapes-C.*

The figure below illustrates the architecture of SegFormer. Taken from the [original paper](https://huggingface.co/papers/2105.15203).

<img width="600" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/segformer_architecture.png"/>

This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/NVlabs/SegFormer).

## Usage tips

- SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decoder head. [`SegformerModel`] is the hierarchical Transformer encoder (which in the paper is also referred to as Mix Transformer or MiT). [`SegformerForSemanticSegmentation`] adds the all-MLP decoder head on top to perform semantic segmentation of images.
  In addition, there's [`SegformerForImageClassification`], which can be used to - you guessed it - classify images. The authors of SegFormer first pre-trained the Transformer encoder on ImageNet-1k to classify images. Next, they throw away the classification head and replace it with the all-MLP decode head. They then fine-tune the model end-to-end on ADE20K, Cityscapes and COCO-stuff, which are important benchmarks for semantic segmentation. All checkpoints can be found on the [hub](https://huggingface.co/models?other=segformer).
- The quickest way to get started with SegFormer is by checking the [example notebooks](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/SegFormer) (which showcase both inference and fine-tuning on custom data). One can also check out the [blog post](https://huggingface.co/blog/fine-tune-segformer) introducing SegFormer and illustrating how it can be fine-tuned on custom data.
- One can also check out [this interactive demo on Hugging Face Spaces](https://huggingface.co/spaces/chansung/segformer-tf-transformers) to try out a SegFormer model on custom images.
- SegFormer works on any input size, as it pads the input to be divisible by `config.patch_sizes`.
- One can use [`SegformerImageProcessor`] to prepare images and corresponding segmentation maps for the model. Note that this image processor is fairly basic and does not include all the data augmentations used in the original paper. The original preprocessing pipelines (for the ADE20k dataset, for instance) can be found [here](https://github.com/NVlabs/SegFormer/blob/master/local_configs/_base_/datasets/ade20k_repeat.py). The most important preprocessing step is that images and segmentation maps are randomly cropped and padded to the same size, such as 512x512 or 640x640, after which they are normalized. A minimal end-to-end inference sketch is included at the end of this page.
- One additional thing to keep in mind is that one can initialize [`SegformerImageProcessor`] with `do_reduce_labels` set to `True` or `False`. In some datasets (like ADE20k), the 0 index is used in the annotated segmentation maps for the background, yet ADE20k doesn't include a "background" class in its 150 labels. Therefore, `do_reduce_labels` is used to reduce all labels by 1 and to make sure no loss is computed for the background class (i.e. it replaces 0 in the annotated maps by 255, which is the *ignore_index* of the loss function used by [`SegformerForSemanticSegmentation`]). However, other datasets use the 0 index as the background class and include this class as part of all labels. In that case, `do_reduce_labels` should be set to `False`, as loss should also be computed for the background class.
- Like most models, SegFormer comes in different sizes, the details of which can be found in the table below (taken from Table 7 of the [original paper](https://huggingface.co/papers/2105.15203)).
| **Model variant** | **Depths**    | **Hidden sizes**    | **Decoder hidden size** | **Params (M)** | **ImageNet-1k Top 1** |
| :---------------: | ------------- | ------------------- | :---------------------: | :------------: | :-------------------: |
| MiT-b0            | [2, 2, 2, 2]  | [32, 64, 160, 256]  | 256                     | 3.7            | 70.5                  |
| MiT-b1            | [2, 2, 2, 2]  | [64, 128, 320, 512] | 256                     | 14.0           | 78.7                  |
| MiT-b2            | [3, 4, 6, 3]  | [64, 128, 320, 512] | 768                     | 25.4           | 81.6                  |
| MiT-b3            | [3, 4, 18, 3] | [64, 128, 320, 512] | 768                     | 45.2           | 83.1                  |
| MiT-b4            | [3, 8, 27, 3] | [64, 128, 320, 512] | 768                     | 62.6           | 83.6                  |
| MiT-b5            | [3, 6, 40, 3] | [64, 128, 320, 512] | 768                     | 82.0           | 83.8                  |

Note that MiT in the above table refers to the Mix Transformer encoder backbone introduced in SegFormer. For SegFormer's results on segmentation datasets such as ADE20k, refer to the [paper](https://huggingface.co/papers/2105.15203).

## Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with SegFormer.

<PipelineTag pipeline="image-classification"/>

- [`SegformerForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
- [Image classification task guide](../tasks/image_classification)

Semantic segmentation:

- [`SegformerForSemanticSegmentation`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/semantic-segmentation).
- A blog on fine-tuning SegFormer on a custom dataset can be found [here](https://huggingface.co/blog/fine-tune-segformer).
- More demo notebooks on SegFormer (both inference + fine-tuning on a custom dataset) can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/SegFormer).
- [Semantic segmentation task guide](../tasks/semantic_segmentation)

If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.

## SegformerConfig

[[autodoc]] SegformerConfig

## SegformerFeatureExtractor

[[autodoc]] SegformerFeatureExtractor
    - __call__
    - post_process_semantic_segmentation

## SegformerImageProcessor

[[autodoc]] SegformerImageProcessor
    - preprocess
    - post_process_semantic_segmentation

## SegformerImageProcessorFast

[[autodoc]] SegformerImageProcessorFast
    - preprocess
    - post_process_semantic_segmentation

## SegformerModel

[[autodoc]] SegformerModel
    - forward

## SegformerDecodeHead

[[autodoc]] SegformerDecodeHead
    - forward

## SegformerForImageClassification

[[autodoc]] SegformerForImageClassification
    - forward

## SegformerForSemanticSegmentation

[[autodoc]] SegformerForSemanticSegmentation
    - forward
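As referenced in the usage tips above, here is a minimal end-to-end semantic-segmentation sketch. It assumes the publicly available `nvidia/segformer-b0-finetuned-ade-512-512` checkpoint (fine-tuned on ADE20k); the same pattern applies to any other SegFormer checkpoint.

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

processor = AutoImageProcessor.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)  # logits are downsampled relative to the input

# Upscale the logits to the original image size and take the per-pixel argmax
segmentation = processor.post_process_semantic_segmentation(
    outputs, target_sizes=[image.size[::-1]]  # (height, width)
)[0]
print(segmentation.shape)  # e.g. torch.Size([480, 640]): per-pixel ADE20k class indices
```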
transformers/docs/source/en/model_doc/segformer.md/0
{ "file_path": "transformers/docs/source/en/model_doc/segformer.md", "repo_id": "transformers", "token_count": 3038 }
398
<!--Copyright 2025 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

*This model was released on 2023-10-14 and added to Hugging Face Transformers on 2025-04-16.*

# TimesFM

<div class="flex flex-wrap space-x-1">
<img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
</div>

## Overview

TimesFM (Time Series Foundation Model) is a pretrained time-series foundation model proposed in [A decoder-only foundation model for time-series forecasting](https://huggingface.co/papers/2310.10688) by Abhimanyu Das, Weihao Kong, Rajat Sen, and Yichen Zhou. It is a decoder-only model that takes non-overlapping patches of time-series data as input and predicts output patches of a configured length in an autoregressive fashion.

The abstract from the paper is the following:

*Motivated by recent advances in large language models for Natural Language Processing (NLP), we design a time-series foundation model for forecasting whose out-of-the-box zero-shot performance on a variety of public datasets comes close to the accuracy of state-of-the-art supervised forecasting models for each individual dataset. Our model is based on pretraining a patched-decoder style attention model on a large time-series corpus, and can work well across different forecasting history lengths, prediction lengths and temporal granularities.*

This model was contributed by [kashif](https://huggingface.co/kashif). The original code can be found [here](https://github.com/google-research/timesfm).

To use the model:

```python
import numpy as np
import torch
from transformers import TimesFmModelForPrediction

model = TimesFmModelForPrediction.from_pretrained(
    "google/timesfm-2.0-500m-pytorch",
    dtype=torch.bfloat16,
    attn_implementation="sdpa",
    device_map="auto",
)

# Create dummy inputs
forecast_input = [
    np.sin(np.linspace(0, 20, 100)),
    np.sin(np.linspace(0, 20, 200)),
    np.sin(np.linspace(0, 20, 400)),
]
frequency_input = [0, 1, 2]

# Convert inputs to sequence of tensors
forecast_input_tensor = [
    torch.tensor(ts, dtype=torch.bfloat16).to(model.device)
    for ts in forecast_input
]
frequency_input_tensor = torch.tensor(frequency_input, dtype=torch.long).to(model.device)

# Get predictions from the pre-trained model
with torch.no_grad():
    outputs = model(past_values=forecast_input_tensor, freq=frequency_input_tensor, return_dict=True)
    point_forecast_conv = outputs.mean_predictions.float().cpu().numpy()
    quantile_forecast_conv = outputs.full_predictions.float().cpu().numpy()
```

## TimesFmConfig

[[autodoc]] TimesFmConfig

## TimesFmModel

[[autodoc]] TimesFmModel
    - forward

## TimesFmModelForPrediction

[[autodoc]] TimesFmModelForPrediction
    - forward
transformers/docs/source/en/model_doc/timesfm.md/0
{ "file_path": "transformers/docs/source/en/model_doc/timesfm.md", "repo_id": "transformers", "token_count": 1031 }
399
<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

*This model was released on 2023-11-16 and added to Hugging Face Transformers on 2024-05-15.*

# Video-LLaVA

<div class="flex flex-wrap space-x-1">
<img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
<img alt="FlashAttention" src="https://img.shields.io/badge/%E2%9A%A1%EF%B8%8E%20FlashAttention-eae0c8?style=flat">
<img alt="SDPA" src="https://img.shields.io/badge/SDPA-DE3412?style=flat&logo=pytorch&logoColor=white">
</div>

## Overview

Video-LLaVA is an open-source multimodal LLM trained by fine-tuning LLaMA/Vicuna on multimodal instruction-following data generated by LLaVA-1.5 and VideoChat. It is an auto-regressive language model based on the transformer architecture. Video-LLaVA unifies visual representations in the language feature space, enabling an LLM to perform visual reasoning on both images and videos simultaneously.

The Video-LLaVA model was proposed in [Video-LLaVA: Learning United Visual Representation by Alignment Before Projection](https://huggingface.co/papers/2311.10122) by Bin Lin, Yang Ye, Bin Zhu, Jiaxi Cui, Munan Ning, Peng Jin, Li Yuan.

The abstract from the paper is the following:

*The Large Vision-Language Model (LVLM) has enhanced the performance of various downstream tasks in visual-language understanding. Most existing approaches encode images and videos into separate feature spaces, which are then fed as inputs to large language models. However, due to the lack of unified tokenization for images and videos, namely misalignment before projection, it becomes challenging for a Large Language Model (LLM) to learn multi-modal interactions from several poor projection layers. In this work, we unify visual representation into the language feature space to advance the foundational LLM towards a unified LVLM. As a result, we establish a simple but robust LVLM baseline, Video-LLaVA, which learns from a mixed dataset of images and videos, mutually enhancing each other. Video-LLaVA achieves superior performances on a broad range of 9 image benchmarks across 5 image question-answering datasets and 4 image benchmark toolkits. Additionally, our Video-LLaVA also outperforms Video-ChatGPT by 5.8%, 9.9%, 18.6%, and 10.1% on MSRVTT, MSVD, TGIF, and ActivityNet, respectively. Notably, extensive experiments demonstrate that Video-LLaVA mutually benefits images and videos within a unified visual representation, outperforming models designed specifically for images or videos. We aim for this work to provide modest insights into the multi-modal inputs for the LLM.*

## Usage tips

- We advise users to use `padding_side="left"` when computing batched generation, as it leads to more accurate results. Simply make sure to set `processor.tokenizer.padding_side = "left"` before generating.
- Note that the model has not been explicitly trained to process multiple images/videos in the same prompt; although this is technically possible, you may experience inaccurate results.
- Note that video inputs should contain exactly 8 frames, since the models were trained in that setting.

This model was contributed by [RaushanTurganbay](https://huggingface.co/RaushanTurganbay). The original code can be found [here](https://github.com/PKU-YuanGroup/Video-LLaVA).

> [!NOTE]
> LLaVA models after release v4.46 will raise warnings about adding `processor.patch_size = {{patch_size}}`, `processor.num_additional_image_tokens = {{num_additional_image_tokens}}` and `processor.vision_feature_select_strategy = {{vision_feature_select_strategy}}`. It is strongly recommended to add the attributes to the processor if you own the model checkpoint, or open a PR if it is not owned by you. Adding these attributes means that LLaVA will try to infer the number of image tokens required per image and expand the text with as many `<image>` placeholders as there will be tokens. Usually it is around 500 tokens per image, so make sure that the text is not truncated, as otherwise merging the embeddings will fail. The attributes can be obtained from the model config, as `model.config.vision_config.patch_size` or `model.config.vision_feature_select_strategy`. The `num_additional_image_tokens` should be `1` if the vision backbone adds a CLS token or `0` if nothing extra is added to the vision patches.

## Usage example

### Single Media Mode

The model can accept both images and videos as input. Here's an example code for inference in half-precision (`torch.float16`):

```python
import av
import torch
import numpy as np
from huggingface_hub import hf_hub_download
from transformers import VideoLlavaForConditionalGeneration, VideoLlavaProcessor


def read_video_pyav(container, indices):
    '''
    Decode the video with PyAV decoder.

    Args:
        container (`av.container.input.InputContainer`): PyAV container.
        indices (`list[int]`): List of frame indices to decode.

    Returns:
        result (np.ndarray): np array of decoded frames of shape (num_frames, height, width, 3).
    '''
    frames = []
    container.seek(0)
    start_index = indices[0]
    end_index = indices[-1]
    for i, frame in enumerate(container.decode(video=0)):
        if i > end_index:
            break
        if i >= start_index and i in indices:
            frames.append(frame)
    return np.stack([x.to_ndarray(format="rgb24") for x in frames])


# Load the model in half-precision
model = VideoLlavaForConditionalGeneration.from_pretrained("LanguageBind/Video-LLaVA-7B-hf", dtype=torch.float16, device_map="auto")
processor = VideoLlavaProcessor.from_pretrained("LanguageBind/Video-LLaVA-7B-hf")

# Load the video as an np.array, sampling uniformly 8 frames
video_path = hf_hub_download(repo_id="raushan-testing-hf/videos-test", filename="sample_demo_1.mp4", repo_type="dataset")
container = av.open(video_path)
total_frames = container.streams.video[0].frames
indices = np.arange(0, total_frames, total_frames / 8).astype(int)
video = read_video_pyav(container, indices)

# For better results, we recommend prompting the model in the following format
prompt = "USER: <video>\nWhy is this funny? ASSISTANT:"
inputs = processor(text=prompt, videos=video, return_tensors="pt")

out = model.generate(**inputs, max_new_tokens=60)
processor.batch_decode(out, skip_special_tokens=True, clean_up_tokenization_spaces=True)
```

For a multi-turn conversation, change the prompt format to:

```bash
"USER: <video>\nWhat do you see in this video? ASSISTANT: A baby reading a book. USER: Why is it funny? ASSISTANT:"
```
### Mixed Media Mode

The model can also generate from interleaved image-video inputs. However, note that it was not trained in an interleaved image-video setting, which might affect performance. Below is an example usage for mixed media input; add the following lines to the above code snippet:

```python
from PIL import Image
import requests

# Generate from image and video mixed inputs
# Load an image and write a new prompt
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
prompt = "USER: <image>\nHow many cats are there in the image? ASSISTANT: There are two cats. USER: <video>\nWhy is this video funny? ASSISTANT:"

# `video` is the frame array prepared in the previous snippet
inputs = processor(text=prompt, images=image, videos=video, padding=True, return_tensors="pt")

# Generate
generate_ids = model.generate(**inputs, max_length=50)
processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True)
```

## Model optimization

### Quantization using Bitsandbytes for memory efficiency

The model can be loaded in lower bits, significantly reducing memory burden while maintaining the performance of the original model. This allows for efficient deployment in resource-constrained cases.

First make sure to install bitsandbytes by running `pip install bitsandbytes`, and to have access to a GPU/accelerator that is supported by the library.

<Tip>

bitsandbytes is being refactored to support multiple backends beyond CUDA. Currently, ROCm (AMD GPU) and Intel CPU implementations are mature, with Intel XPU in progress and Apple Silicon support expected by Q4/Q1. For installation instructions and the latest backend updates, visit [this link](https://huggingface.co/docs/bitsandbytes/main/en/installation#multi-backend).

We value your feedback to help identify bugs before the full release! Check out [these docs](https://huggingface.co/docs/bitsandbytes/main/en/non_cuda_backends) for more details and feedback links.

</Tip>

Load the quantized model by simply adding [`BitsAndBytesConfig`](../main_classes/quantization#transformers.BitsAndBytesConfig) as shown below:

```python
import torch
from transformers import VideoLlavaForConditionalGeneration, BitsAndBytesConfig

# specify how to quantize the model
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = VideoLlavaForConditionalGeneration.from_pretrained("LanguageBind/Video-LLaVA-7B-hf", quantization_config=quantization_config, device_map="auto")
```

### Flash-Attention 2 to speed-up generation

Additionally, we can greatly speed up model inference by using [Flash Attention](../perf_train_gpu_one#flash-attention-2), which is a faster implementation of the attention mechanism used inside the model.

First, make sure to install the latest version of Flash Attention 2:

```bash
pip install -U flash-attn --no-build-isolation
```

Also, your hardware should be compatible with Flash Attention 2. Read more about it in the official documentation of the [flash attention repository](https://github.com/Dao-AILab/flash-attention). FlashAttention-2 can only be used when a model is loaded in `torch.float16` or `torch.bfloat16`.
To load and run a model using Flash Attention 2, simply add `attn_implementation="flash_attention_2"` when loading the model as follows:

```python
import torch
from transformers import VideoLlavaForConditionalGeneration

model = VideoLlavaForConditionalGeneration.from_pretrained(
    "LanguageBind/Video-LLaVA-7B-hf",
    dtype=torch.float16,
    attn_implementation="flash_attention_2",
).to(0)
```

## VideoLlavaConfig

[[autodoc]] VideoLlavaConfig

## VideoLlavaImageProcessor

[[autodoc]] VideoLlavaImageProcessor

## VideoLlavaVideoProcessor

[[autodoc]] VideoLlavaVideoProcessor

## VideoLlavaProcessor

[[autodoc]] VideoLlavaProcessor

## VideoLlavaModel

[[autodoc]] VideoLlavaModel

## VideoLlavaForConditionalGeneration

[[autodoc]] VideoLlavaForConditionalGeneration
    - forward
transformers/docs/source/en/model_doc/video_llava.md/0
{ "file_path": "transformers/docs/source/en/model_doc/video_llava.md", "repo_id": "transformers", "token_count": 3315 }
400
<!--Copyright 2025 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

*This model was released on 2025-06-11 and added to Hugging Face Transformers on 2025-06-11.*

<div style="float: right;">
    <div class="flex flex-wrap space-x-1">
        <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
        <img alt="SDPA" src="https://img.shields.io/badge/SDPA-DE3412?style=flat&logo=pytorch&logoColor=white">
        <img alt="FlashAttention" src="https://img.shields.io/badge/%E2%9A%A1%EF%B8%8E%20FlashAttention-eae0c8?style=flat">
    </div>
</div>

# V-JEPA 2

[V-JEPA 2](https://huggingface.co/papers/2506.09985) ([blog post](https://ai.meta.com/blog/v-jepa-2-world-model-benchmarks/)) is a self-supervised approach to training video encoders developed by FAIR, Meta. Using internet-scale video data, V-JEPA 2 attains state-of-the-art performance on motion understanding and human action anticipation tasks. V-JEPA 2-AC is a latent action-conditioned world model post-trained from V-JEPA 2 (using a small amount of robot trajectory interaction data) that solves robot manipulation tasks without environment-specific data collection or task-specific training or calibration.

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/vjepa.gif" alt="drawing" width="600"/>
</div>

You can find all original V-JEPA 2 checkpoints under the [V-JEPA 2](https://huggingface.co/collections/facebook/v-jepa-2-6841bad8413014e185b497a6) collection.

This model was contributed by [koustuvs](https://huggingface.co/koustuvs), [yonigozlan](https://huggingface.co/yonigozlan) and [qubvel](https://huggingface.co/qubvel-hf). The original code can be found [here](https://github.com/facebookresearch/vjepa2).

## Usage example

The snippet below shows how to load the V-JEPA 2 model for feature extraction using the `AutoModel` class.

```py
import torch
import numpy as np
from torchcodec.decoders import VideoDecoder
from transformers import AutoModel, AutoVideoProcessor

processor = AutoVideoProcessor.from_pretrained("facebook/vjepa2-vitl-fpc64-256")
model = AutoModel.from_pretrained(
    "facebook/vjepa2-vitl-fpc64-256",
    dtype=torch.float16,
    device_map="auto",
    attn_implementation="sdpa"
)

video_url = "https://huggingface.co/datasets/nateraw/kinetics-mini/resolve/main/val/archery/-Qz25rXdMjE_000014_000024.mp4"
vr = VideoDecoder(video_url)
frame_idx = np.arange(0, 64)  # choose some frames; here you can define a more complex sampling strategy
video = vr.get_frames_at(indices=frame_idx).data  # T x C x H x W
video = processor(video, return_tensors="pt").to(model.device)
outputs = model(**video)

# V-JEPA 2 encoder outputs, same as calling `model.get_vision_features()`
encoder_outputs = outputs.last_hidden_state

# V-JEPA 2 predictor outputs
predictor_outputs = outputs.predictor_output.last_hidden_state
```

V-JEPA 2 can also be fine-tuned for video classification. In the following snippet, we show how to use a model fine-tuned on Something-Something-V2 for video classification.

```python
import torch
import numpy as np

from torchcodec.decoders import VideoDecoder
from transformers import AutoVideoProcessor, AutoModelForVideoClassification, infer_device

device = infer_device()

# Load model and video preprocessor
hf_repo = "facebook/vjepa2-vitl-fpc16-256-ssv2"

model = AutoModelForVideoClassification.from_pretrained(hf_repo).to(device)
processor = AutoVideoProcessor.from_pretrained(hf_repo)

# To load a video, sample the number of frames according to the model.
video_url = "https://huggingface.co/datasets/nateraw/kinetics-mini/resolve/main/val/bowling/-WH-lxmGJVY_000005_000015.mp4"
vr = VideoDecoder(video_url)
frame_idx = np.arange(0, model.config.frames_per_clip, 8)  # you can define more complex sampling strategy
video = vr.get_frames_at(indices=frame_idx).data  # frames x channels x height x width

# Preprocess and run inference
inputs = processor(video, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model(**inputs)

logits = outputs.logits

print("Top 5 predicted class names:")
top5_indices = logits.topk(5).indices[0]
top5_probs = torch.softmax(logits, dim=-1).topk(5).values[0]
for idx, prob in zip(top5_indices, top5_probs):
    text_label = model.config.id2label[idx.item()]
    print(f" - {text_label}: {prob:.2f}")
```

## VJEPA2Config

[[autodoc]] VJEPA2Config

## VJEPA2Model

[[autodoc]] VJEPA2Model
    - forward

## VJEPA2ForVideoClassification

[[autodoc]] VJEPA2ForVideoClassification
    - forward

## VJEPA2VideoProcessor

[[autodoc]] VJEPA2VideoProcessor
transformers/docs/source/en/model_doc/vjepa2.md/0
{ "file_path": "transformers/docs/source/en/model_doc/vjepa2.md", "repo_id": "transformers", "token_count": 1829 }
401
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

*This model was released on 2019-06-19 and added to Hugging Face Transformers on 2020-11-16.*

# XLNet

<div class="flex flex-wrap space-x-1">
<img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
</div>

## Overview

The XLNet model was proposed in [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://huggingface.co/papers/1906.08237) by Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le. XLNet is an extension of the Transformer-XL model, pre-trained using an autoregressive method to learn bidirectional contexts by maximizing the expected likelihood over all permutations of the input sequence factorization order.

The abstract from the paper is the following:

*With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling. However, relying on corrupting the input with masks, BERT neglects dependency between the masked positions and suffers from a pretrain-finetune discrepancy. In light of these pros and cons, we propose XLNet, a generalized autoregressive pretraining method that (1) enables learning bidirectional contexts by maximizing the expected likelihood over all permutations of the factorization order and (2) overcomes the limitations of BERT thanks to its autoregressive formulation. Furthermore, XLNet integrates ideas from Transformer-XL, the state-of-the-art autoregressive model, into pretraining. Empirically, under comparable experiment settings, XLNet outperforms BERT on 20 tasks, often by a large margin, including question answering, natural language inference, sentiment analysis, and document ranking.*

This model was contributed by [thomwolf](https://huggingface.co/thomwolf). The original code can be found [here](https://github.com/zihangdai/xlnet/).

## Usage tips

- The specific attention pattern can be controlled at training and test time using the `perm_mask` input.
- Due to the difficulty of training a fully autoregressive model over various factorization orders, XLNet is pretrained using only a subset of the output tokens as targets, selected with the `target_mapping` input.
- To use XLNet for sequential decoding (i.e. not in a fully bidirectional setting), use the `perm_mask` and `target_mapping` inputs to control the attention span and outputs (see examples in *examples/pytorch/text-generation/run_generation.py*). A minimal sketch of these inputs is also included at the end of this page.
- XLNet is one of the few models that has no sequence length limit.
- XLNet is not a traditional autoregressive model but uses a training strategy that builds on that.
  It permutes the tokens in the sentence and then allows the model to use the last *n* tokens to predict token *n+1*. Since this is all done with a mask, the sentence is actually fed into the model in the right order; but instead of masking the first *n* tokens for *n+1*, XLNet uses a mask that hides the previous tokens in some given permutation of 1,…,sequence length.
- XLNet also uses the same recurrence mechanism as Transformer-XL to build long-term dependencies.

## Resources

- [Text classification task guide](../tasks/sequence_classification)
- [Token classification task guide](../tasks/token_classification)
- [Question answering task guide](../tasks/question_answering)
- [Causal language modeling task guide](../tasks/language_modeling)
- [Multiple choice task guide](../tasks/multiple_choice)

## XLNetConfig

[[autodoc]] XLNetConfig

## XLNetTokenizer

[[autodoc]] XLNetTokenizer
    - build_inputs_with_special_tokens
    - get_special_tokens_mask
    - create_token_type_ids_from_sequences
    - save_vocabulary

## XLNetTokenizerFast

[[autodoc]] XLNetTokenizerFast

## XLNet specific outputs

[[autodoc]] models.xlnet.modeling_xlnet.XLNetModelOutput

[[autodoc]] models.xlnet.modeling_xlnet.XLNetLMHeadModelOutput

[[autodoc]] models.xlnet.modeling_xlnet.XLNetForSequenceClassificationOutput

[[autodoc]] models.xlnet.modeling_xlnet.XLNetForMultipleChoiceOutput

[[autodoc]] models.xlnet.modeling_xlnet.XLNetForTokenClassificationOutput

[[autodoc]] models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringSimpleOutput

[[autodoc]] models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringOutput

## XLNetModel

[[autodoc]] XLNetModel
    - forward

## XLNetLMHeadModel

[[autodoc]] XLNetLMHeadModel
    - forward

## XLNetForSequenceClassification

[[autodoc]] XLNetForSequenceClassification
    - forward

## XLNetForMultipleChoice

[[autodoc]] XLNetForMultipleChoice
    - forward

## XLNetForTokenClassification

[[autodoc]] XLNetForTokenClassification
    - forward

## XLNetForQuestionAnsweringSimple

[[autodoc]] XLNetForQuestionAnsweringSimple
    - forward

## XLNetForQuestionAnswering

[[autodoc]] XLNetForQuestionAnswering
    - forward
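To make the `perm_mask`/`target_mapping` mechanics from the usage tips concrete, here is a minimal sketch of predicting a single token with a bidirectional context. It assumes the publicly available `xlnet/xlnet-base-cased` checkpoint.

```python
import torch
from transformers import AutoTokenizer, XLNetLMHeadModel

tokenizer = AutoTokenizer.from_pretrained("xlnet/xlnet-base-cased")
model = XLNetLMHeadModel.from_pretrained("xlnet/xlnet-base-cased")

input_ids = torch.tensor(
    tokenizer.encode("Hello, my dog is very <mask>", add_special_tokens=False)
).unsqueeze(0)  # batch size of 1

# perm_mask[b, i, j] = 1.0 means token i cannot attend to token j
perm_mask = torch.zeros((1, input_ids.shape[1], input_ids.shape[1]), dtype=torch.float)
perm_mask[:, :, -1] = 1.0  # no token may see the last (masked) token

# target_mapping selects which positions to predict: here, only the last token
target_mapping = torch.zeros((1, 1, input_ids.shape[1]), dtype=torch.float)
target_mapping[0, 0, -1] = 1.0

outputs = model(input_ids, perm_mask=perm_mask, target_mapping=target_mapping)
next_token_logits = outputs.logits  # shape (1, 1, vocab_size): logits for the masked position
```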
transformers/docs/source/en/model_doc/xlnet.md/0
{ "file_path": "transformers/docs/source/en/model_doc/xlnet.md", "repo_id": "transformers", "token_count": 1581 }
402