# Models

<Tip warning={true}>

Smolagents is an experimental API which is subject to change at any time. The results returned by the agents can vary as the APIs or underlying models are prone to change.

</Tip>

To learn more about agents and tools, make sure to read the [intro guide](../index). This page contains the API docs for the underlying classes.

## Models

You're free to create and use your own models to power your agent.

You can use any `model` callable for your agent, as long as:
1. It follows the [messages format](./chat_templating) (`List[Dict[str, str]]`) for its input `messages`, and it returns a `str`.
2. It stops generating outputs before the sequences passed in the `stop_sequences` argument.

For defining your LLM, you can make a `custom_model` method which accepts a list of [messages](./chat_templating) and returns an object with a `.content` attribute containing the generated text. This callable also needs to accept a `stop_sequences` argument that indicates when to stop generating.

```python
from huggingface_hub import login, InferenceClient

login("<YOUR_HUGGINGFACEHUB_API_TOKEN>")

model_id = "meta-llama/Llama-3.3-70B-Instruct"

client = InferenceClient(model=model_id)

def custom_model(messages, stop_sequences=["Task"]):
    response = client.chat_completion(messages, stop=stop_sequences, max_tokens=1000)
    answer = response.choices[0].message
    return answer
```

Additionally, `custom_model` can also take a `grammar` argument. If you specify a `grammar` upon agent initialization, this argument will be passed to the calls to the model to allow [constrained generation](https://huggingface.co/docs/text-generation-inference/conceptual/guidance), in order to force properly-formatted agent outputs.

### TransformersModel

For convenience, we have added a `TransformersModel` that implements the points above by building a local `transformers` pipeline for the `model_id` given at initialization.

```python
from smolagents import TransformersModel

model = TransformersModel(model_id="HuggingFaceTB/SmolLM-135M-Instruct")

print(model([{"role": "user", "content": [{"type": "text", "text": "Ok!"}]}], stop_sequences=["great"]))
```
```text
>>> What a
```

> [!TIP]
> You must have `transformers` and `torch` installed on your machine. Please run `pip install smolagents[transformers]` if it's not the case.

[[autodoc]] TransformersModel

### InferenceClientModel

The `InferenceClientModel` wraps huggingface_hub's [InferenceClient](https://huggingface.co/docs/huggingface_hub/main/en/guides/inference) for the execution of the LLM. It supports HF's [Inference API](https://huggingface.co/docs/api-inference/index) as well as all [Inference Providers](https://huggingface.co/blog/inference-providers) available on the Hub.

```python
from smolagents import InferenceClientModel

messages = [
  {"role": "user", "content": [{"type": "text", "text": "Hello, how are you?"}]}
]

model = InferenceClientModel()
print(model(messages))
```
```text
>>> Of course! If you change your mind, feel free to reach out. Take care!
```

[[autodoc]] InferenceClientModel

### LiteLLMModel

The `LiteLLMModel` leverages [LiteLLM](https://www.litellm.ai/) to support 100+ LLMs from various providers. You can pass `kwargs` upon model initialization that will then be used whenever the model is called, for instance below we pass `temperature`.

```python
from smolagents import LiteLLMModel

messages = [
  {"role": "user", "content": [{"type": "text", "text": "Hello, how are you?"}]}
]

model = LiteLLMModel(model_id="anthropic/claude-3-5-sonnet-latest", temperature=0.2, max_tokens=10)
print(model(messages))
```

[[autodoc]] LiteLLMModel

### OpenAIServerModel

This class lets you call any OpenAIServer compatible model.
Here's how you can set it up (you can customise the `api_base` url to point to another server):

```py
import os

from smolagents import OpenAIServerModel

model = OpenAIServerModel(
    model_id="gpt-4o",
    api_base="https://api.openai.com/v1",
    api_key=os.environ["OPENAI_API_KEY"],
)
```

[[autodoc]] OpenAIServerModel

### AzureOpenAIServerModel

`AzureOpenAIServerModel` allows you to connect to any Azure OpenAI deployment.

Below you can find an example of how to set it up; note that you can omit the `azure_endpoint`, `api_key`, and `api_version` arguments, provided you've set the corresponding environment variables: `AZURE_OPENAI_ENDPOINT`, `AZURE_OPENAI_API_KEY`, and `OPENAI_API_VERSION`.

Pay attention to the lack of an `AZURE_` prefix for `OPENAI_API_VERSION`; this is due to the way the underlying [openai](https://github.com/openai/openai-python) package is designed.

```py
import os

from smolagents import AzureOpenAIServerModel

model = AzureOpenAIServerModel(
    model_id = os.environ.get("AZURE_OPENAI_MODEL"),
    azure_endpoint=os.environ.get("AZURE_OPENAI_ENDPOINT"),
    api_key=os.environ.get("AZURE_OPENAI_API_KEY"),
    api_version=os.environ.get("OPENAI_API_VERSION")
)
```

[[autodoc]] AzureOpenAIServerModel

### MLXModel

```python
from smolagents import MLXModel

model = MLXModel(model_id="HuggingFaceTB/SmolLM-135M-Instruct")

print(model([{"role": "user", "content": "Ok!"}], stop_sequences=["great"]))
```
```text
>>> What a
```

> [!TIP]
> You must have `mlx-lm` installed on your machine. Please run `pip install smolagents[mlx-lm]` if it's not the case.

[[autodoc]] MLXModel
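To make the custom `model` callable contract described at the top of this page concrete, here is a minimal, offline sketch. Everything in it is illustrative: the `FakeResponse` class and the echo behavior are not part of smolagents, and the message `content` is simplified to a plain string. It only demonstrates the shape a callable must have: a list of message dicts in, an object with a `.content` attribute out, with `stop_sequences` honored.

```python
from dataclasses import dataclass


@dataclass
class FakeResponse:
    # Any object exposing a `.content` attribute satisfies the contract.
    content: str


def custom_model(messages, stop_sequences=None):
    # Take the last message's text and produce a deterministic reply.
    last_user_text = messages[-1]["content"]
    text = f"You said: {last_user_text}"
    # Honor stop_sequences by truncating at the first stop string found.
    for stop in stop_sequences or []:
        if stop in text:
            text = text[: text.index(stop)]
    return FakeResponse(content=text)


reply = custom_model([{"role": "user", "content": "Hello"}], stop_sequences=["!"])
print(reply.content)  # You said: Hello
```

A stub like this is also handy for unit-testing agent logic without network calls, before swapping in a real `InferenceClient`-backed implementation.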
smolagents/docs/source/zh/reference/models.md
# Open Deep Research

Welcome to this open replication of [OpenAI's Deep Research](https://openai.com/index/introducing-deep-research/)! This agent attempts to replicate OpenAI's model and achieve similar performance on research tasks.

Read more about this implementation's goal and methods in our [blog post](https://huggingface.co/blog/open-deep-research).

This agent achieves **55% pass@1** on the GAIA validation set, compared to **67%** for the original Deep Research.

## Setup

To get started, follow the steps below:

### Clone the repository

```bash
git clone https://github.com/huggingface/smolagents.git
cd smolagents/examples/open_deep_research
```

### Install dependencies

Run the following command to install the required dependencies from the `requirements.txt` file:

```bash
pip install -r requirements.txt
```

### Install the development version of `smolagents`

```bash
pip install -e ../../.[dev]
```

### Set up environment variables

The agent uses the `GoogleSearchTool` for web search, which requires an environment variable with the corresponding API key, based on the selected provider:
- `SERPAPI_API_KEY` for SerpApi: [Sign up here to get a key](https://serpapi.com/users/sign_up)
- `SERPER_API_KEY` for Serper: [Sign up here to get a key](https://serper.dev/signup)

Depending on the model you want to use, you may need to set environment variables. For example, to use the default `o1` model, you need to set the `OPENAI_API_KEY` environment variable. [Sign up here to get a key](https://platform.openai.com/signup).

> [!WARNING]
> The use of the default `o1` model is restricted to tier-3 access: https://help.openai.com/en/articles/10362446-api-access-to-o1-and-o3-mini

## Usage

Then you're good to go! Run the `run.py` script, as in:

```bash
python run.py --model-id "o1" "Your question here!"
```

## Full reproducibility of results

The data used in our submissions to GAIA was augmented in this way:
- For each single-page .pdf or .xls file, it was opened in a file reader (MacOS Sonoma Numbers or Preview), and a ".png" screenshot was taken and added to the folder.
- Then for any file used in a question, the file loading system checks if there is a ".png" extension version of the file, and loads it instead of the original if it exists.

This process was done manually but could be automated. After processing, the annotated dataset was uploaded to a [new dataset](https://huggingface.co/datasets/smolagents/GAIA-annotated). You need to request access (granted instantly).
smolagents/examples/open_deep_research/README.md
""" Plan Customization Example This example demonstrates how to use step callbacks to interrupt the agent after plan creation, allow user interaction to approve or modify the plan, and then resume execution while preserving agent memory. Key concepts demonstrated: 1. Step callbacks to interrupt after PlanningStep 2. Extracting and modifying the current plan 3. Resuming execution with reset=False to preserve memory 4. User interaction for plan approval/modification """ from smolagents import CodeAgent, DuckDuckGoSearchTool, InferenceClientModel, PlanningStep def display_plan(plan_content): """Display the plan in a formatted way""" print("\n" + "=" * 60) print("🤖 AGENT PLAN CREATED") print("=" * 60) print(plan_content) print("=" * 60) def get_user_choice(): """Get user's choice for plan approval""" while True: choice = input("\nChoose an option:\n1. Approve plan\n2. Modify plan\n3. Cancel\nYour choice (1-3): ").strip() if choice in ["1", "2", "3"]: return int(choice) print("Invalid choice. Please enter 1, 2, or 3.") def get_modified_plan(original_plan): """Allow user to modify the plan""" print("\n" + "-" * 40) print("MODIFY PLAN") print("-" * 40) print("Current plan:") print(original_plan) print("-" * 40) print("Enter your modified plan (press Enter twice to finish):") lines = [] empty_line_count = 0 while empty_line_count < 2: line = input() if line.strip() == "": empty_line_count += 1 else: empty_line_count = 0 lines.append(line) # Remove the last two empty lines modified_plan = "\n".join(lines[:-2]) return modified_plan if modified_plan.strip() else original_plan def interrupt_after_plan(memory_step, agent): """ Step callback that interrupts the agent after a planning step is created. This allows for user interaction to review and potentially modify the plan. 
""" if isinstance(memory_step, PlanningStep): print("\n🛑 Agent interrupted after plan creation...") # Display the created plan display_plan(memory_step.plan) # Get user choice choice = get_user_choice() if choice == 1: # Approve plan print("✅ Plan approved! Continuing execution...") # Don't interrupt - let the agent continue return elif choice == 2: # Modify plan # Get modified plan from user modified_plan = get_modified_plan(memory_step.plan) # Update the plan in the memory step memory_step.plan = modified_plan print("\nPlan updated!") display_plan(modified_plan) print("✅ Continuing with modified plan...") # Don't interrupt - let the agent continue with modified plan return elif choice == 3: # Cancel print("❌ Execution cancelled by user.") agent.interrupt() return def main(): """Run the complete plan customization example""" print("🚀 Starting Plan Customization Example") print("=" * 60) # Create agent with planning enabled and step callback agent = CodeAgent( model=InferenceClientModel(), tools=[DuckDuckGoSearchTool()], # Add a search tool for more interesting plans planning_interval=5, # Plan every 5 steps for demonstration step_callbacks={PlanningStep: interrupt_after_plan}, max_steps=10, verbosity_level=1, # Show agent thoughts ) # Define a task that will benefit from planning task = """Search for recent developments in artificial intelligence and provide a summary of the top 3 most significant breakthroughs in 2024. 
Include the source of each breakthrough.""" try: print(f"\n📋 Task: {task}") print("\n🤖 Agent starting execution...") # First run - will create plan and potentially get interrupted result = agent.run(task) # If we get here, the plan was approved or execution completed print("\n✅ Task completed successfully!") print("\n📄 Final Result:") print("-" * 40) print(result) except Exception as e: if "interrupted" in str(e).lower(): print("\n🛑 Agent execution was cancelled by user.") print("\nTo resume execution later, you could call:") print("agent.run(task, reset=False) # This preserves the agent's memory") # Demonstrate resuming with reset=False print("\n" + "=" * 60) print("DEMONSTRATION: Resuming with reset=False") print("=" * 60) # Show current memory state print(f"\n📚 Current memory contains {len(agent.memory.steps)} steps:") for i, step in enumerate(agent.memory.steps): step_type = type(step).__name__ print(f" {i + 1}. {step_type}") # Ask if user wants to see resume demonstration resume_choice = input("\nWould you like to see resume demonstration? (y/n): ").strip().lower() if resume_choice == "y": print("\n🔄 Resuming execution...") try: # Resume without resetting - preserves memory agent.run(task, reset=False) print("\n✅ Task completed after resume!") print("\n📄 Final Result:") print("-" * 40) except Exception as resume_error: print(f"\n❌ Error during resume: {resume_error}") else: print(f"\n❌ An error occurred: {e}") if __name__ == "__main__": # Run the main example main()
smolagents/examples/plan_customization/plan_customization.py
#!/usr/bin/env python
# coding=utf-8

# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from dataclasses import dataclass
from typing import Any

from .local_python_executor import (
    BASE_BUILTIN_MODULES,
    BASE_PYTHON_TOOLS,
    evaluate_python_code,
)
from .tools import PipelineTool, Tool


@dataclass
class PreTool:
    name: str
    inputs: dict[str, str]
    output_type: type
    task: str
    description: str
    repo_id: str


class PythonInterpreterTool(Tool):
    name = "python_interpreter"
    description = "This is a tool that evaluates python code. It can be used to perform calculations."
    inputs = {
        "code": {
            "type": "string",
            "description": "The python code to run in interpreter",
        }
    }
    output_type = "string"

    def __init__(self, *args, authorized_imports=None, **kwargs):
        if authorized_imports is None:
            self.authorized_imports = list(set(BASE_BUILTIN_MODULES))
        else:
            self.authorized_imports = list(set(BASE_BUILTIN_MODULES) | set(authorized_imports))
        self.inputs = {
            "code": {
                "type": "string",
                "description": (
                    "The code snippet to evaluate. All variables used in this snippet must be defined in this same snippet, "
                    f"else you will get an error. This code can only import the following python libraries: {self.authorized_imports}."
                ),
            }
        }
        self.base_python_tools = BASE_PYTHON_TOOLS
        self.python_evaluator = evaluate_python_code
        super().__init__(*args, **kwargs)

    def forward(self, code: str) -> str:
        state = {}
        output = str(
            self.python_evaluator(
                code,
                state=state,
                static_tools=self.base_python_tools,
                authorized_imports=self.authorized_imports,
            )[0]  # The second element is boolean is_final_answer
        )
        return f"Stdout:\n{str(state['_print_outputs'])}\nOutput: {output}"


class FinalAnswerTool(Tool):
    name = "final_answer"
    description = "Provides a final answer to the given problem."
    inputs = {"answer": {"type": "any", "description": "The final answer to the problem"}}
    output_type = "any"

    def forward(self, answer: Any) -> Any:
        return answer


class UserInputTool(Tool):
    name = "user_input"
    description = "Asks for user's input on a specific question"
    inputs = {"question": {"type": "string", "description": "The question to ask the user"}}
    output_type = "string"

    def forward(self, question):
        user_input = input(f"{question} => Type your answer here:")
        return user_input


class DuckDuckGoSearchTool(Tool):
    """Web search tool that performs searches using the DuckDuckGo search engine.

    Args:
        max_results (`int`, default `10`): Maximum number of search results to return.
        rate_limit (`float`, default `1.0`): Maximum queries per second. Set to `None` to disable rate limiting.
        **kwargs: Additional keyword arguments for the `DDGS` client.

    Examples:
        ```python
        >>> from smolagents import DuckDuckGoSearchTool

        >>> web_search_tool = DuckDuckGoSearchTool(max_results=5, rate_limit=2.0)
        >>> results = web_search_tool("Hugging Face")
        >>> print(results)
        ```
    """

    name = "web_search"
    description = """Performs a duckduckgo web search based on your query (think a Google search) then returns the top search results."""
    inputs = {"query": {"type": "string", "description": "The search query to perform."}}
    output_type = "string"

    def __init__(self, max_results: int = 10, rate_limit: float | None = 1.0, **kwargs):
        super().__init__()
        self.max_results = max_results
        self.rate_limit = rate_limit
        self._min_interval = 1.0 / rate_limit if rate_limit else 0.0
        self._last_request_time = 0.0
        try:
            from ddgs import DDGS
        except ImportError as e:
            raise ImportError(
                "You must install package `ddgs` to run this tool: for instance run `pip install ddgs`."
            ) from e
        self.ddgs = DDGS(**kwargs)

    def forward(self, query: str) -> str:
        self._enforce_rate_limit()
        results = self.ddgs.text(query, max_results=self.max_results)
        if len(results) == 0:
            raise Exception("No results found! Try a less restrictive/shorter query.")
        postprocessed_results = [f"[{result['title']}]({result['href']})\n{result['body']}" for result in results]
        return "## Search Results\n\n" + "\n\n".join(postprocessed_results)

    def _enforce_rate_limit(self) -> None:
        import time

        # No rate limit enforced
        if not self.rate_limit:
            return
        now = time.time()
        elapsed = now - self._last_request_time
        if elapsed < self._min_interval:
            time.sleep(self._min_interval - elapsed)
        self._last_request_time = time.time()


class GoogleSearchTool(Tool):
    name = "web_search"
    description = """Performs a google web search for your query then returns a string of the top search results."""
    inputs = {
        "query": {"type": "string", "description": "The search query to perform."},
        "filter_year": {
            "type": "integer",
            "description": "Optionally restrict results to a certain year",
            "nullable": True,
        },
    }
    output_type = "string"

    def __init__(self, provider: str = "serpapi"):
        super().__init__()
        import os

        self.provider = provider
        if provider == "serpapi":
            self.organic_key = "organic_results"
            api_key_env_name = "SERPAPI_API_KEY"
        else:
            self.organic_key = "organic"
            api_key_env_name = "SERPER_API_KEY"
        self.api_key = os.getenv(api_key_env_name)
        if self.api_key is None:
            raise ValueError(f"Missing API key. Make sure you have '{api_key_env_name}' in your env variables.")

    def forward(self, query: str, filter_year: int | None = None) -> str:
        import requests

        if self.provider == "serpapi":
            params = {
                "q": query,
                "api_key": self.api_key,
                "engine": "google",
                "google_domain": "google.com",
            }
            base_url = "https://serpapi.com/search.json"
        else:
            params = {
                "q": query,
                "api_key": self.api_key,
            }
            base_url = "https://google.serper.dev/search"
        if filter_year is not None:
            params["tbs"] = f"cdr:1,cd_min:01/01/{filter_year},cd_max:12/31/{filter_year}"

        response = requests.get(base_url, params=params)

        if response.status_code == 200:
            results = response.json()
        else:
            raise ValueError(response.json())

        if self.organic_key not in results.keys():
            if filter_year is not None:
                raise Exception(
                    f"No results found for query: '{query}' with filtering on year={filter_year}. Use a less restrictive query or do not filter on year."
                )
            else:
                raise Exception(f"No results found for query: '{query}'. Use a less restrictive query.")
        if len(results[self.organic_key]) == 0:
            year_filter_message = f" with filter year={filter_year}" if filter_year is not None else ""
            return f"No results found for '{query}'{year_filter_message}. Try with a more general query, or remove the year filter."

        web_snippets = []
        if self.organic_key in results:
            for idx, page in enumerate(results[self.organic_key]):
                date_published = ""
                if "date" in page:
                    date_published = "\nDate published: " + page["date"]

                source = ""
                if "source" in page:
                    source = "\nSource: " + page["source"]

                snippet = ""
                if "snippet" in page:
                    snippet = "\n" + page["snippet"]

                redacted_version = f"{idx}. [{page['title']}]({page['link']}){date_published}{source}\n{snippet}"
                web_snippets.append(redacted_version)

        return "## Search Results\n" + "\n\n".join(web_snippets)


class ApiWebSearchTool(Tool):
    """Web search tool that performs API-based searches. By default, it uses the Brave Search API.

    This tool implements a rate limiting mechanism to ensure compliance with API usage policies.
    By default, it limits requests to 1 query per second.

    Args:
        endpoint (`str`): API endpoint URL. Defaults to Brave Search API.
        api_key (`str`): API key for authentication.
        api_key_name (`str`): Environment variable name containing the API key. Defaults to "BRAVE_API_KEY".
        headers (`dict`, *optional*): Headers for API requests.
        params (`dict`, *optional*): Parameters for API requests.
        rate_limit (`float`, default `1.0`): Maximum queries per second. Set to `None` to disable rate limiting.

    Examples:
        ```python
        >>> from smolagents import ApiWebSearchTool

        >>> web_search_tool = ApiWebSearchTool(rate_limit=50.0)
        >>> results = web_search_tool("Hugging Face")
        >>> print(results)
        ```
    """

    name = "web_search"
    description = "Performs a web search for a query and returns a string of the top search results formatted as markdown with titles, URLs, and descriptions."
    inputs = {"query": {"type": "string", "description": "The search query to perform."}}
    output_type = "string"

    def __init__(
        self,
        endpoint: str = "",
        api_key: str = "",
        api_key_name: str = "",
        headers: dict = None,
        params: dict = None,
        rate_limit: float | None = 1.0,
    ):
        import os

        super().__init__()
        self.endpoint = endpoint or "https://api.search.brave.com/res/v1/web/search"
        self.api_key_name = api_key_name or "BRAVE_API_KEY"
        self.api_key = api_key or os.getenv(self.api_key_name)
        self.headers = headers or {"X-Subscription-Token": self.api_key}
        self.params = params or {"count": 10}
        self.rate_limit = rate_limit
        self._min_interval = 1.0 / rate_limit if rate_limit else 0.0
        self._last_request_time = 0.0

    def _enforce_rate_limit(self) -> None:
        import time

        # No rate limit enforced
        if not self.rate_limit:
            return
        now = time.time()
        elapsed = now - self._last_request_time
        if elapsed < self._min_interval:
            time.sleep(self._min_interval - elapsed)
        self._last_request_time = time.time()

    def forward(self, query: str) -> str:
        import requests

        self._enforce_rate_limit()
        params = {**self.params, "q": query}
        response = requests.get(self.endpoint, headers=self.headers, params=params)
        response.raise_for_status()
        data = response.json()
        results = self.extract_results(data)
        return self.format_markdown(results)

    def extract_results(self, data: dict) -> list:
        results = []
        for result in data.get("web", {}).get("results", []):
            results.append(
                {"title": result["title"], "url": result["url"], "description": result.get("description", "")}
            )
        return results

    def format_markdown(self, results: list) -> str:
        if not results:
            return "No results found."
        return "## Search Results\n\n" + "\n\n".join(
            [
                f"{idx}. [{result['title']}]({result['url']})\n{result['description']}"
                for idx, result in enumerate(results, start=1)
            ]
        )


class WebSearchTool(Tool):
    name = "web_search"
    description = "Performs a web search for a query and returns a string of the top search results formatted as markdown with titles, links, and descriptions."
    inputs = {"query": {"type": "string", "description": "The search query to perform."}}
    output_type = "string"

    def __init__(self, max_results: int = 10, engine: str = "duckduckgo"):
        super().__init__()
        self.max_results = max_results
        self.engine = engine

    def forward(self, query: str) -> str:
        results = self.search(query)
        if len(results) == 0:
            raise Exception("No results found! Try a less restrictive/shorter query.")
        return self.parse_results(results)

    def search(self, query: str) -> list:
        if self.engine == "duckduckgo":
            return self.search_duckduckgo(query)
        elif self.engine == "bing":
            return self.search_bing(query)
        else:
            raise ValueError(f"Unsupported engine: {self.engine}")

    def parse_results(self, results: list) -> str:
        return "## Search Results\n\n" + "\n\n".join(
            [f"[{result['title']}]({result['link']})\n{result['description']}" for result in results]
        )

    def search_duckduckgo(self, query: str) -> list:
        import requests

        response = requests.get(
            "https://lite.duckduckgo.com/lite/",
            params={"q": query},
            headers={"User-Agent": "Mozilla/5.0"},
        )
        response.raise_for_status()
        parser = self._create_duckduckgo_parser()
        parser.feed(response.text)
        return parser.results

    def _create_duckduckgo_parser(self):
        from html.parser import HTMLParser

        class SimpleResultParser(HTMLParser):
            def __init__(self):
                super().__init__()
                self.results = []
                self.current = {}
                self.capture_title = False
                self.capture_description = False
                self.capture_link = False

            def handle_starttag(self, tag, attrs):
                attrs = dict(attrs)
                if tag == "a" and attrs.get("class") == "result-link":
                    self.capture_title = True
                elif tag == "td" and attrs.get("class") == "result-snippet":
                    self.capture_description = True
                elif tag == "span" and attrs.get("class") == "link-text":
                    self.capture_link = True

            def handle_endtag(self, tag):
                if tag == "a" and self.capture_title:
                    self.capture_title = False
                elif tag == "td" and self.capture_description:
                    self.capture_description = False
                elif tag == "span" and self.capture_link:
                    self.capture_link = False
                elif tag == "tr":
                    # Store current result if all parts are present
                    if {"title", "description", "link"} <= self.current.keys():
                        self.current["description"] = " ".join(self.current["description"])
                        self.results.append(self.current)
                    self.current = {}

            def handle_data(self, data):
                if self.capture_title:
                    self.current["title"] = data.strip()
                elif self.capture_description:
                    self.current.setdefault("description", [])
                    self.current["description"].append(data.strip())
                elif self.capture_link:
                    self.current["link"] = "https://" + data.strip()

        return SimpleResultParser()

    def search_bing(self, query: str) -> list:
        import xml.etree.ElementTree as ET

        import requests

        response = requests.get(
            "https://www.bing.com/search",
            params={"q": query, "format": "rss"},
        )
        response.raise_for_status()
        root = ET.fromstring(response.text)
        items = root.findall(".//item")
        results = [
            {
                "title": item.findtext("title"),
                "link": item.findtext("link"),
                "description": item.findtext("description"),
            }
            for item in items[: self.max_results]
        ]
        return results


class VisitWebpageTool(Tool):
    name = "visit_webpage"
    description = (
        "Visits a webpage at the given url and reads its content as a markdown string. Use this to browse webpages."
    )
    inputs = {
        "url": {
            "type": "string",
            "description": "The url of the webpage to visit.",
        }
    }
    output_type = "string"

    def __init__(self, max_output_length: int = 40000):
        super().__init__()
        self.max_output_length = max_output_length

    def _truncate_content(self, content: str, max_length: int) -> str:
        if len(content) <= max_length:
            return content
        return (
            content[:max_length]
            + f"\n..._This content has been truncated to stay below {max_length} characters_...\n"
        )

    def forward(self, url: str) -> str:
        try:
            import re

            import requests
            from markdownify import markdownify
            from requests.exceptions import RequestException
        except ImportError as e:
            raise ImportError(
                "You must install packages `markdownify` and `requests` to run this tool: for instance run `pip install markdownify requests`."
            ) from e
        try:
            # Send a GET request to the URL with a 20-second timeout
            response = requests.get(url, timeout=20)
            response.raise_for_status()  # Raise an exception for bad status codes

            # Convert the HTML content to Markdown
            markdown_content = markdownify(response.text).strip()

            # Remove multiple line breaks
            markdown_content = re.sub(r"\n{3,}", "\n\n", markdown_content)

            return self._truncate_content(markdown_content, self.max_output_length)

        except requests.exceptions.Timeout:
            return "The request timed out. Please try again later or check the URL."
        except RequestException as e:
            return f"Error fetching the webpage: {str(e)}"
        except Exception as e:
            return f"An unexpected error occurred: {str(e)}"


class WikipediaSearchTool(Tool):
    """
    Search Wikipedia and return the summary or full text of the requested article, along with the page URL.

    Attributes:
        user_agent (`str`): Custom user-agent string to identify the project. This is required as per Wikipedia API policies.
            See: https://foundation.wikimedia.org/wiki/Policy:Wikimedia_Foundation_User-Agent_Policy
        language (`str`, default `"en"`): Language in which to retrieve Wikipedia article.
            See: http://meta.wikimedia.org/wiki/List_of_Wikipedias
        content_type (`Literal["summary", "text"]`, default `"text"`): Type of content to fetch. Can be "summary" for a short summary or "text" for the full article.
        extract_format (`Literal["HTML", "WIKI"]`, default `"WIKI"`): Extraction format of the output. Can be `"WIKI"` or `"HTML"`.

    Example:
        ```python
        >>> from smolagents import CodeAgent, InferenceClientModel, WikipediaSearchTool

        >>> agent = CodeAgent(
        >>>     tools=[
        >>>         WikipediaSearchTool(
        >>>             user_agent="MyResearchBot (myemail@example.com)",
        >>>             language="en",
        >>>             content_type="summary",  # or "text"
        >>>             extract_format="WIKI",
        >>>         )
        >>>     ],
        >>>     model=InferenceClientModel(),
        >>> )

        >>> agent.run("Python_(programming_language)")
        ```
    """

    name = "wikipedia_search"
    description = "Searches Wikipedia and returns a summary or full text of the given topic, along with the page URL."
    inputs = {
        "query": {
            "type": "string",
            "description": "The topic to search on Wikipedia.",
        }
    }
    output_type = "string"

    def __init__(
        self,
        user_agent: str = "Smolagents (myemail@example.com)",
        language: str = "en",
        content_type: str = "text",
        extract_format: str = "WIKI",
    ):
        super().__init__()
        try:
            import wikipediaapi
        except ImportError as e:
            raise ImportError(
                "You must install `wikipedia-api` to run this tool: for instance run `pip install wikipedia-api`"
            ) from e
        if not user_agent:
            raise ValueError("User-agent is required. Provide a meaningful identifier for your project.")

        self.user_agent = user_agent
        self.language = language
        self.content_type = content_type

        # Map string format to wikipediaapi.ExtractFormat
        extract_format_map = {
            "WIKI": wikipediaapi.ExtractFormat.WIKI,
            "HTML": wikipediaapi.ExtractFormat.HTML,
        }

        if extract_format not in extract_format_map:
            raise ValueError("Invalid extract_format. Choose between 'WIKI' or 'HTML'.")

        self.extract_format = extract_format_map[extract_format]

        self.wiki = wikipediaapi.Wikipedia(
            user_agent=self.user_agent, language=self.language, extract_format=self.extract_format
        )

    def forward(self, query: str) -> str:
        try:
            page = self.wiki.page(query)

            if not page.exists():
                return f"No Wikipedia page found for '{query}'. Try a different query."

            title = page.title
            url = page.fullurl

            if self.content_type == "summary":
                text = page.summary
            elif self.content_type == "text":
                text = page.text
            else:
                return "⚠️ Invalid `content_type`. Use either 'summary' or 'text'."

            return f"✅ **Wikipedia Page:** {title}\n\n**Content:** {text}\n\n🔗 **Read more:** {url}"

        except Exception as e:
            return f"Error fetching Wikipedia summary: {str(e)}"


class SpeechToTextTool(PipelineTool):
    default_checkpoint = "openai/whisper-large-v3-turbo"
    description = "This is a tool that transcribes an audio into text. It returns the transcribed text."
    name = "transcriber"
    inputs = {
        "audio": {
            "type": "audio",
            "description": "The audio to transcribe. Can be a local path, an url, or a tensor.",
        }
    }
    output_type = "string"

    def __new__(cls, *args, **kwargs):
        from transformers.models.whisper import WhisperForConditionalGeneration, WhisperProcessor

        cls.pre_processor_class = WhisperProcessor
        cls.model_class = WhisperForConditionalGeneration
        return super().__new__(cls)

    def encode(self, audio):
        from .agent_types import AgentAudio

        audio = AgentAudio(audio).to_raw()
        return self.pre_processor(audio, return_tensors="pt")

    def forward(self, inputs):
        return self.model.generate(inputs["input_features"])

    def decode(self, outputs):
        return self.pre_processor.batch_decode(outputs, skip_special_tokens=True)[0]


TOOL_MAPPING = {
    tool_class.name: tool_class
    for tool_class in [
        PythonInterpreterTool,
        DuckDuckGoSearchTool,
        VisitWebpageTool,
    ]
}

__all__ = [
    "ApiWebSearchTool",
    "PythonInterpreterTool",
    "FinalAnswerTool",
    "UserInputTool",
    "WebSearchTool",
    "DuckDuckGoSearchTool",
    "GoogleSearchTool",
    "VisitWebpageTool",
    "WikipediaSearchTool",
    "SpeechToTextTool",
]
smolagents/src/smolagents/default_tools.py
from unittest.mock import patch

import pytest

from smolagents.agents import MultiStepAgent
from smolagents.monitoring import LogLevel


# Import fixture modules as plugins
pytest_plugins = ["tests.fixtures.agents", "tests.fixtures.tools"]

original_multi_step_agent_init = MultiStepAgent.__init__


@pytest.fixture(autouse=True)
def patch_multi_step_agent_with_suppressed_logging():
    with patch.object(MultiStepAgent, "__init__", autospec=True) as mock_init:

        def init_with_suppressed_logging(self, *args, verbosity_level=LogLevel.OFF, **kwargs):
            original_multi_step_agent_init(self, *args, verbosity_level=verbosity_level, **kwargs)

        mock_init.side_effect = init_with_suppressed_logging
        yield
smolagents/tests/conftest.py/0
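The conftest fixture above wraps `__init__` with `patch.object(..., autospec=True)` and a `side_effect` that calls the saved original with a quieter default, so every test constructed inside the fixture is silenced without touching test code. The pattern can be reproduced on any class; the `Agent` class below is a hypothetical stand-in, not the real `MultiStepAgent`:

```python
from unittest.mock import patch


class Agent:
    def __init__(self, verbosity_level=2):
        self.verbosity_level = verbosity_level


# Save the real constructor before patching, exactly as conftest.py does.
original_init = Agent.__init__

with patch.object(Agent, "__init__", autospec=True) as mock_init:
    # Force a quiet default while still running the real constructor.
    def init_quiet(self, *args, verbosity_level=0, **kwargs):
        original_init(self, *args, verbosity_level=verbosity_level, **kwargs)

    mock_init.side_effect = init_quiet
    quiet_agent = Agent()
    assert quiet_agent.verbosity_level == 0  # default overridden inside the patch

agent = Agent()
assert agent.verbosity_level == 2  # original behavior restored outside
```

`autospec=True` keeps the mock's signature compatible with the real `__init__` (including `self`), and leaving the patch context restores the class untouched.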
```python
# coding=utf-8
# Copyright 2024 HuggingFace Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import unittest

import PIL.Image
import pytest

from smolagents import (
    CodeAgent,
    ToolCallingAgent,
    stream_to_gradio,
)
from smolagents.models import (
    ChatMessage,
    ChatMessageToolCall,
    ChatMessageToolCallFunction,
    MessageRole,
    Model,
    TokenUsage,
)


class FakeLLMModel(Model):
    def generate(self, prompt, tools_to_call_from=None, **kwargs):
        if tools_to_call_from is not None:
            return ChatMessage(
                role=MessageRole.ASSISTANT,
                content="I will call the final_answer tool.",
                tool_calls=[
                    ChatMessageToolCall(
                        id="fake_id",
                        type="function",
                        function=ChatMessageToolCallFunction(
                            name="final_answer", arguments={"answer": "This is the final answer."}
                        ),
                    )
                ],
                token_usage=TokenUsage(input_tokens=10, output_tokens=20),
            )
        else:
            return ChatMessage(
                role=MessageRole.ASSISTANT,
                content="""<code>
final_answer('This is the final answer.')
</code>""",
                token_usage=TokenUsage(input_tokens=10, output_tokens=20),
            )


class MonitoringTester(unittest.TestCase):
    def test_code_agent_metrics_max_steps(self):
        class FakeLLMModelMalformedAnswer(Model):
            def generate(self, prompt, **kwargs):
                return ChatMessage(
                    role=MessageRole.ASSISTANT,
                    content="Malformed answer",
                    token_usage=TokenUsage(input_tokens=10, output_tokens=20),
                )

        agent = CodeAgent(
            tools=[],
            model=FakeLLMModelMalformedAnswer(),
            max_steps=1,
        )

        agent.run("Fake task")

        self.assertEqual(agent.monitor.total_input_token_count, 20)
        self.assertEqual(agent.monitor.total_output_token_count, 40)

    def test_code_agent_metrics_generation_error(self):
        class FakeLLMModelGenerationException(Model):
            def generate(self, prompt, **kwargs):
                raise Exception("Cannot generate")

        agent = CodeAgent(
            tools=[],
            model=FakeLLMModelGenerationException(),
            max_steps=1,
        )
        with pytest.raises(Exception) as e:
            agent.run("Fake task")
        assert "Cannot generate" in str(e.value)

    def test_streaming_agent_text_output(self):
        agent = CodeAgent(
            tools=[],
            model=FakeLLMModel(),
            max_steps=1,
            planning_interval=2,
        )

        # Use stream_to_gradio to capture the output
        outputs = list(stream_to_gradio(agent, task="Test task"))

        self.assertEqual(len(outputs), 11)
        plan_message = outputs[1]
        self.assertEqual(plan_message.role, "assistant")
        self.assertIn("<code>", plan_message.content)
        final_message = outputs[-1]
        self.assertEqual(final_message.role, "assistant")
        self.assertIn("This is the final answer.", final_message.content)

    def test_streaming_agent_image_output(self):
        class FakeLLMModelImage(Model):
            def generate(self, prompt, **kwargs):
                return ChatMessage(
                    role=MessageRole.ASSISTANT,
                    content="I will call the final_answer tool.",
                    tool_calls=[
                        ChatMessageToolCall(
                            id="fake_id",
                            type="function",
                            function=ChatMessageToolCallFunction(name="final_answer", arguments={"answer": "image"}),
                        )
                    ],
                )

        agent = ToolCallingAgent(
            tools=[],
            model=FakeLLMModelImage(),
            max_steps=1,
            verbosity_level=100,
        )

        # Use stream_to_gradio to capture the output
        outputs = list(
            stream_to_gradio(
                agent,
                task="Test task",
                additional_args=dict(image=PIL.Image.new("RGB", (100, 100))),
            )
        )

        self.assertEqual(len(outputs), 7)
        final_message = outputs[-1]
        self.assertEqual(final_message.role, "assistant")
        self.assertIsInstance(final_message.content, dict)
        self.assertEqual(final_message.content["mime_type"], "image/png")

    def test_streaming_with_agent_error(self):
        class DummyModel(Model):
            def generate(self, prompt, **kwargs):
                return ChatMessage(role=MessageRole.ASSISTANT, content="Malformed call")

        agent = CodeAgent(
            tools=[],
            model=DummyModel(),
            max_steps=1,
        )

        # Use stream_to_gradio to capture the output
        outputs = list(stream_to_gradio(agent, task="Test task"))

        self.assertEqual(len(outputs), 11)
        final_message = outputs[-1]
        self.assertEqual(final_message.role, "assistant")
        self.assertIn("Malformed call", final_message.content)


@pytest.mark.parametrize("agent_class", [CodeAgent, ToolCallingAgent])
def test_code_agent_metrics(agent_class):
    agent = agent_class(
        tools=[],
        model=FakeLLMModel(),
        max_steps=1,
    )
    agent.run("Fake task")

    assert agent.monitor.total_input_token_count == 10
    assert agent.monitor.total_output_token_count == 20
```
smolagents/tests/test_monitoring.py/0
```toml
[workspace]
members = [
    "benchmark",
    "backends/v2",
    "backends/v3",
    "backends/grpc-metadata",
    "backends/trtllm",
    "backends/llamacpp",
    "launcher",
    "router"
]
default-members = [
    "benchmark",
    "backends/v2",
    "backends/v3",
    "backends/grpc-metadata",
    # "backends/trtllm",
    "launcher",
    "router"
]
resolver = "2"

[workspace.package]
version = "3.3.4-dev0"
edition = "2021"
authors = ["Olivier Dehaene"]
homepage = "https://github.com/huggingface/text-generation-inference"

[workspace.dependencies]
base64 = "0.22.0"
tokenizers = { version = "0.20.0", features = ["http"] }
hf-hub = { version = "0.4.2", features = ["tokio"] }
metrics = { version = "0.23.0" }
metrics-exporter-prometheus = { version = "0.15.1", features = [] }
minijinja = { version = "2.2.0", features = ["json"] }
minijinja-contrib = { version = "2.0.2", features = ["pycompat"] }
pyo3 = { version = "0.22.2", features = ["auto-initialize"] }

[profile.release]
incremental = true

[profile.release-binary]
inherits = "release"
debug = 1
incremental = true
panic = "abort"

[profile.release-opt]
inherits = "release"
debug = 0
incremental = false
lto = "fat"
opt-level = 3
codegen-units = 1
```
text-generation-inference/Cargo.toml/0
```toml
[package]
name = "text-generation-client"
version.workspace = true
edition.workspace = true
authors.workspace = true
homepage.workspace = true

[dependencies]
async-trait = "^0.1"
base64 = { workspace = true }
futures = "^0.3"
grpc-metadata = { path = "../grpc-metadata" }
prost = "^0.12"
thiserror = "^1.0"
tokio = { version = "^1.32", features = ["sync"] }
tonic = "^0.10"
tower = "^0.4"
tracing = "^0.1"

[build-dependencies]
tonic-build = "0.10.1"
prost-build = "0.12.1"
```
text-generation-inference/backends/client/Cargo.toml/0
```makefile
fbgemm_commit := v0.8.0

build-fbgemm:
	@if [ ! -d "fbgemm" ]; then \
		git clone https://github.com/pytorch/FBGEMM.git fbgemm; \
	fi
	cd fbgemm && git fetch && git checkout $(fbgemm_commit) && \
	git submodule update --init --recursive && \
	cd fbgemm_gpu && \
	pip install -r requirements.txt && \
	CUDA_ARCH_LIST="8.0;9.0a" NVCC_GENCODE="-gencode=arch=compute_80,code=sm_80 -gencode=arch=compute_90a,code=sm_90a" TORCH_CUDA_ARCH_LIST="8.0;9.0a" python setup.py --package_variant genai build

install-fbgemm: build-fbgemm
	cd fbgemm/fbgemm_gpu && \
	CUDA_ARCH_LIST="8.0;9.0a" NVCC_GENCODE="-gencode=arch=compute_80,code=sm_80 -gencode=arch=compute_90a,code=sm_90a" TORCH_CUDA_ARCH_LIST="8.0;9.0a" python setup.py --package_variant genai install
```
text-generation-inference/backends/gaudi/server/Makefile-fbgemm/0
```python
import torch

from typing import Dict, Optional, TypeVar

from text_generation_server.models.types import Batch

B = TypeVar("B", bound=Batch)


class Cache:
    def __init__(self):
        self.cache: Dict[int, B] = {}

    def pop(self, batch_id: int) -> Optional[B]:
        return self.cache.pop(batch_id, None)

    def set(self, entry: B):
        if entry is not None:
            self.cache[entry.batch_id] = entry

    def delete(self, batch_id: int):
        batch = self.pop(batch_id)
        if batch is not None:
            del batch
            if torch.cuda.is_available():
                torch.cuda.empty_cache()

    def clear(self):
        keys = list(self.cache.keys())
        for k in keys:
            self.delete(k)

    def __len__(self):
        return len(self.cache.keys())
```
text-generation-inference/backends/gaudi/server/text_generation_server/cache.py/0
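The `Cache` above is a plain dict keyed by `batch_id`, with `delete` additionally freeing CUDA memory. A minimal torch-free sketch of the same interface, using a hypothetical `DummyBatch` in place of the server's `Batch` type:

```python
from dataclasses import dataclass
from typing import Dict, Optional


@dataclass
class DummyBatch:
    # Stand-in for text_generation_server.models.types.Batch (assumption).
    batch_id: int
    tokens: list


class SimpleCache:
    """Same interface as the server's Cache, minus the CUDA cleanup."""

    def __init__(self):
        self.cache: Dict[int, DummyBatch] = {}

    def set(self, entry: Optional[DummyBatch]):
        if entry is not None:
            self.cache[entry.batch_id] = entry

    def pop(self, batch_id: int) -> Optional[DummyBatch]:
        # dict.pop with a default means a missing id is not an error.
        return self.cache.pop(batch_id, None)

    def clear(self):
        for k in list(self.cache.keys()):
            self.pop(k)

    def __len__(self):
        return len(self.cache)


cache = SimpleCache()
cache.set(DummyBatch(batch_id=1, tokens=[1, 2, 3]))
cache.set(DummyBatch(batch_id=2, tokens=[4]))
assert len(cache) == 2
assert cache.pop(1).tokens == [1, 2, 3]
assert cache.pop(1) is None  # popping a missing id returns None
cache.clear()
assert len(cache) == 0
```

Returning `None` for a missing id (rather than raising) is what lets the server treat "delete a batch that was never cached" as a no-op.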
```python
from dataclasses import dataclass
from typing import List, Union

import torch

from text_generation_server.utils.weights import Weight, Weights, WeightsLoader


@dataclass
class Exl2Weight(Weight):
    """
    Exllama2 exl2 quantized weights.
    """

    q_weight: torch.Tensor
    q_scale: torch.Tensor
    q_invperm: torch.Tensor
    q_scale_max: torch.Tensor
    q_groups: torch.Tensor

    def __post_init__(self):
        self.q_scale_max /= 256
        self.q_invperm = self.q_invperm.short()

    @property
    def device(self) -> torch.device:
        return self.q_weight.device

    def get_linear(self, bias: torch.Tensor):
        from text_generation_server.layers.gptq import ExllamaQuantLinear

        return ExllamaQuantLinear(self, bias)


class Exl2WeightsLoader(WeightsLoader):
    """Loader for exl2-quantized weights."""

    def get_weights(self, weights: "Weights", prefix: str):
        """
        Get weights at the given prefix and apply without tensor parallelism.
        """
        try:
            q_weight = weights.get_tensor(f"{prefix}.q_weight")
        except RuntimeError:
            raise RuntimeError(
                "Cannot load `exl2`-quantized weight, make sure the model is already quantized."
            )

        q_scale = weights.get_tensor(f"{prefix}.q_scale")
        q_invperm = weights.get_tensor(f"{prefix}.q_invperm")
        q_scale_max = weights.get_tensor(f"{prefix}.q_scale_max")
        q_groups = weights.get_tensor(f"{prefix}.q_groups")

        return Exl2Weight(
            q_weight=q_weight,
            q_scale=q_scale,
            q_invperm=q_invperm,
            q_scale_max=q_scale_max,
            q_groups=q_groups,
        )

    def get_weights_col_packed(
        self,
        weights: Weights,
        prefix: str,
        block_sizes: Union[int, List[int]],
    ):
        raise RuntimeError("Column-packed weights are not supported for exl2")

    def get_weights_col(self, weights: Weights, prefix: str):
        # Sharding is not yet supported, so we return the weights as-is.
        return self.get_weights(weights, prefix)

    def get_multi_weights_col(self, weights: Weights, prefixes: List[str], dim: int):
        raise ValueError("get_multi_weights_col is not supported for exl2")

    def get_weights_row(self, weights: Weights, prefix: str):
        # Sharding is not yet supported, so we return the weights as-is.
        return self.get_weights(weights, prefix)
```
text-generation-inference/backends/gaudi/server/text_generation_server/layers/exl2.py/0
```python
import torch
import json
from typing import Tuple, Optional

from text_generation_server.layers.tensor_parallel import TensorParallelHead
from text_generation_server.layers.medusa import MedusaHeadV1, MedusaHeadV2
from text_generation_server.layers.mlp import MLPSpeculatorHead


class SpeculativeHead(torch.nn.Module):
    def __init__(self, lm_head, speculator):
        super().__init__()
        self.head = lm_head
        self.speculator = speculator

    @staticmethod
    def load(config, prefix: str, weights):
        speculator = config.speculator
        if speculator:
            speculator_path = config.speculator["path"]
            speculator_config = str(speculator_path / "config.json")

            with open(speculator_config, "r") as f:
                speculator_config = json.load(f)

            config.speculator_config = speculator_config
            try:
                architecture = speculator_config["architectures"][0]

                if architecture == "MLPSpeculatorPreTrainedModel":
                    speculator = MLPSpeculatorHead.load(config, prefix, weights)
                else:
                    speculator = None
            except KeyError:
                try:
                    speculator = MedusaHeadV1.load(config, prefix, weights)
                except Exception:
                    speculator = MedusaHeadV2(config, prefix, weights)
            lm_head = None
        else:
            lm_head = TensorParallelHead.load(config, prefix, weights)
            speculator = None
        return SpeculativeHead(lm_head, speculator)

    def forward(
        self, input: torch.Tensor
    ) -> Tuple[torch.Tensor, Optional[torch.Tensor]]:
        if self.speculator is not None:
            return self.speculator(input)

        assert self.head is not None
        logits = self.head(input)
        return logits, None
```
text-generation-inference/backends/gaudi/server/text_generation_server/layers/speculative.py/0
```python
# coding=utf-8
# Copyright 2022 EleutherAI and the HuggingFace Inc. team. All rights reserved.
#
# This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX
# and OPT implementations in this library. It has been modified from its
# original forms to accommodate minor architectural differences compared
# to GPT-NeoX and OPT used by the Meta AI team that trained the model.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from contextlib import contextmanager
from typing import List, Optional, Tuple, Type

import torch
import torch.distributed

from torch import nn
from transformers.activations import ACT2FN

import habana_frameworks.torch as htorch
from text_generation_server.layers.attention import (
    KVCache,
    get_kv_scales,
)
from text_generation_server.layers.moe import DenseMoELayer, MoELayer, SparseMoELayer
from text_generation_server.layers.attention import (
    paged_attention,
    attention,
    set_block_mapping,
    Seqlen,
    HPUPagedAttentionMetadata,
)
from text_generation_server.layers import (
    TensorParallelRowLinear,
    TensorParallelColumnLinear,
    TensorParallelEmbedding,
    SpeculativeHead,
    TensorParallelMultiAdapterLinear,
    TensorParallelAdapterRowLinear,
)
from text_generation_server.layers.rotary import PositionRotaryEmbedding
from text_generation_server.layers.layernorm import (
    FastRMSNorm,
    FastLayerNorm,
)
from text_generation_server.layers import (
    FastLinear,
)
from text_generation_server.utils.weights import (
    Weights,
)
from text_generation_server.layers.fp8 import HybridFP8UnquantLoader


def load_attention(config, prefix: str, weights, layer_id):
    # Only defined in granite.
    bias = getattr(config, "attention_bias", False)
    head_size = config.hidden_size // config.num_attention_heads
    sizes = None
    prefixes = None

    if config.model_type == "phi3":
        base_layer = TensorParallelColumnLinear.load_qkv(
            config,
            prefix=f"{prefix}.qkv_proj",
            weights=weights,
            bias=bias,
            num_heads=config.num_attention_heads,
            num_key_value_heads=config.num_key_value_heads,
        )
        prefixes = ["qkv_proj"]
    elif config.model_type == "baichuan":
        prefix = f"{prefix}.W_pack"
        base_layer = TensorParallelColumnLinear.load_qkv(
            config,
            prefix=prefix,
            weights=weights,
            bias=bias,
            num_heads=config.num_attention_heads,
            num_key_value_heads=config.num_key_value_heads,
        )
        prefixes = [prefix]
    else:
        prefixes = ["q_proj", "k_proj", "v_proj"]
        sizes = [
            head_size * config.num_attention_heads,
            head_size * config.num_key_value_heads,
            head_size * config.num_key_value_heads,
        ]
        base_layer = TensorParallelColumnLinear.load_multi(
            config,
            prefixes=[f"{prefix}.q_proj", f"{prefix}.k_proj", f"{prefix}.v_proj"],
            dim=0,
            weights=weights,
            bias=bias,
        )

    return TensorParallelMultiAdapterLinear.load(
        base_layer=base_layer,
        layer_id=layer_id,
        layer_names=prefixes,
        sizes=sizes,
        process_group=weights.process_group,
    )


@contextmanager
def no_fp8(weights: Weights):
    """De-activate fp8 auto conversion for the duration of this context manager"""
    weights_loader = weights.weights_loader
    if isinstance(weights_loader, HybridFP8UnquantLoader) and weights_loader.to_fp8:
        weights_loader = HybridFP8UnquantLoader(
            weights_loader.activation_scale_ub, to_fp8=False
        )

    with weights.use_loader(weights_loader):
        yield


class FlashLlamaAttention(torch.nn.Module):
    def __init__(
        self,
        index: int,
        prefix: str,
        config,
        weights,
        rotary_emb,
    ):
        super().__init__()
        self.num_heads = config.num_attention_heads
        self.hidden_size = config.hidden_size
        self.head_size = self.hidden_size // self.num_heads
        self.rotary_emb = rotary_emb

        # `config.attention_multiplier` is used in Granite
        self.softmax_scale = getattr(
            config, "attention_multiplier", self.head_size**-0.5
        )

        if self.num_heads % weights.process_group.size() != 0:
            raise ValueError(
                f"`num_heads` must be divisible by `num_shards` (got `num_heads`: {self.num_heads} "
                f"and `num_shards`: {weights.process_group.size()}"
            )
        if config.num_key_value_heads % weights.process_group.size() != 0:
            raise ValueError(
                f"`num_key_value_heads` must be divisible by `num_shards` (got `num_key_value_heads`: {config.num_key_value_heads} "
                f"and `num_shards`: {weights.process_group.size()}"
            )
        self.num_heads = self.num_heads // weights.process_group.size()
        self.num_key_value_heads = (
            config.num_key_value_heads // weights.process_group.size()
        )

        self.query_key_value = load_attention(config, prefix, weights, index)
        self.index = index

        self.kv_scales = get_kv_scales(weights, f"{prefix}")

        o_proj = TensorParallelRowLinear.load(
            config,
            prefix=f"{prefix}.o_proj",
            weights=weights,
            bias=getattr(config, "attention_bias", False),
        )

        self.o_proj = TensorParallelAdapterRowLinear.load(
            o_proj,
            index,
            "o_proj",
            process_group=weights.process_group,
        )

        self.num_groups = self.num_heads // self.num_key_value_heads
        self.kv_head_mapping = torch.arange(
            0, self.num_key_value_heads, dtype=torch.int32, device=weights.device
        ).repeat_interleave(self.num_groups)

    def forward(
        self,
        hidden_states,
        cos,
        sin,
        cu_seqlen_prefill,
        kv_cache: KVCache,
        slots,
        seqlen,
        adapter_data,
        hpu_attention_meta: Optional[HPUPagedAttentionMetadata],
    ):
        qkv = self.query_key_value(hidden_states, adapter_data)
        query, kv = qkv.split(
            [
                self.head_size * self.num_heads,
                2 * self.head_size * self.num_key_value_heads,
            ],
            dim=1,
        )
        query = query.view(-1, self.num_heads, self.head_size)
        kv = kv.view(-1, 2, self.num_key_value_heads, self.head_size)

        self.rotary_emb(query, torch.select(kv, dim=1, index=0), cos, sin)

        kv_cache.store(
            key=kv[:, 0],
            value=kv[:, 1],
            slots=slots,
            kv_scales=self.kv_scales,
        )

        # Prefill
        if cu_seqlen_prefill is not None:
            # sdpa
            attn_output = attention(
                query=query,
                key=kv[:, 0],
                value=kv[:, 1],
                kv_scales=self.kv_scales,
                kv_cache=kv_cache,
                seqlen=seqlen,
                softmax_scale=self.softmax_scale,
            )
        # Decode
        else:
            attn_output = paged_attention(
                query,
                kv_cache,
                self.kv_head_mapping,
                self.softmax_scale,
                seqlen,
                kv_scales=self.kv_scales,
                hpu_attention_meta=hpu_attention_meta,
            )

        return self.o_proj(
            attn_output.view(-1, self.num_heads * self.head_size), adapter_data
        )


class Phi3MoE(nn.Module):
    def __init__(
        self, prefix: str, config, moe_layer_cls: Type[MoELayer], weights: Weights
    ):
        super().__init__()

        # gating
        self.gate = FastLinear.load(config, f"{prefix}.gate", weights, bias=False)

        self.moe = moe_layer_cls(
            prefix=f"{prefix}.experts",
            n_experts=config.num_local_experts,
            n_expert_group=None,
            renormalize=True,
            topk=config.num_experts_per_tok,
            topk_group=None,
            weights=weights,
            gate_proj_name="w1",
            up_proj_name="w3",
            down_proj_name="w2",
        )

        self.process_group = weights.process_group

    def forward(self, x, adapter_data) -> torch.Tensor:
        # router_logits: (num_tokens, n_experts)
        router_logits = self.gate(x)
        out = self.moe(x, gating_output=router_logits)

        # Reduce sum
        if self.process_group.size() > 1:
            torch.distributed.all_reduce(out, group=self.process_group)

        return out.view(*x.shape)


class LlamaMLP(nn.Module):
    def __init__(self, prefix, config, weights, index):
        super().__init__()
        self.hidden_act = config.hidden_act
        self.act = (
            ACT2FN[self.hidden_act]
            if "gelu" not in self.hidden_act
            else lambda x: torch.nn.functional.gelu(
                x,
                approximate=(
                    "tanh"
                    if self.hidden_act in ["gelu_fast", "gelu_pytorch_tanh"]
                    else "none"
                ),
            )
        )
        prefixes = None
        sizes = None

        # Fuse gate and up proj
        bias = getattr(config, "mlp_bias", False)
        if config.model_type == "phi3":
            gate_up_proj = TensorParallelColumnLinear.load_gate_up(
                config,
                prefix=f"{prefix}.gate_up_proj",
                weights=weights,
                bias=bias,
            )
        else:
            prefixes = ["gate_proj", "up_proj"]
            sizes = [
                config.intermediate_size,
                config.intermediate_size,
            ]
            gate_up_proj = TensorParallelColumnLinear.load_multi(
                config,
                prefixes=[f"{prefix}.gate_proj", f"{prefix}.up_proj"],
                weights=weights,
                dim=0,
                bias=bias,
            )

        self.gate_up_proj = TensorParallelMultiAdapterLinear.load(
            gate_up_proj,
            index,
            layer_names=prefixes,
            sizes=sizes,
            process_group=weights.process_group,
        )

        down_proj = TensorParallelRowLinear.load(
            config,
            prefix=f"{prefix}.down_proj",
            weights=weights,
            bias=bias,
        )

        self.down_proj = TensorParallelAdapterRowLinear.load(
            down_proj,
            index,
            "down_proj",
            process_group=weights.process_group,
        )

        self.intermediate_size = (
            config.intermediate_size // weights.process_group.size()
        )

        # TODO: This is a hotfix to be removed & properly refactored.
        self.quantize = config.quantize

        self.hidden_size = config.hidden_size

    def forward(self, hidden_states, adapter_data):
        gate_up_states = self.gate_up_proj(hidden_states, adapter_data)
        gate_up_states = gate_up_states.view(-1, 2, self.intermediate_size)
        return self.down_proj(
            self.act(gate_up_states[:, 0]) * gate_up_states[:, 1], adapter_data
        )


class FlashLlamaLayer(nn.Module):
    def __init__(self, index, prefix, config, weights, rotary_emb):
        super().__init__()

        with no_fp8(weights):
            self.self_attn = FlashLlamaAttention(
                index=index,
                prefix=f"{prefix}.self_attn",
                config=config,
                weights=weights,
                rotary_emb=rotary_emb,
            )

        if config.model_type == "phimoe":
            moe_layer_cls = (
                SparseMoELayer
                if SparseMoELayer.is_supported(weights)
                else DenseMoELayer
            )
            self.mlp = Phi3MoE(
                f"{prefix}.block_sparse_moe", config, moe_layer_cls, weights
            )
            # with moe the layernorms are not rmsnorms and they have bias
            self.input_layernorm = FastLayerNorm.load(
                prefix=f"{prefix}.input_layernorm",
                weights=weights,
                eps=config.rms_norm_eps,
            )
            self.post_attention_layernorm = FastLayerNorm.load(
                prefix=f"{prefix}.post_attention_layernorm",
                weights=weights,
                eps=config.rms_norm_eps,
            )
        else:
            self.mlp = LlamaMLP(
                prefix=f"{prefix}.mlp", config=config, weights=weights, index=index
            )
            self.input_layernorm = FastRMSNorm.load(
                prefix=f"{prefix}.input_layernorm",
                weights=weights,
                eps=config.rms_norm_eps,
            )
            self.post_attention_layernorm = FastRMSNorm.load(
                prefix=f"{prefix}.post_attention_layernorm",
                weights=weights,
                eps=config.rms_norm_eps,
            )

        # Used in Granite
        # This could eventually be baked into the weights like we do for the embeddings/lm_head
        # but this would mean modifying the lora code
        self.residual_multiplier = getattr(config, "residual_multiplier", None)

    def forward(
        self,
        hidden_states,
        residual,
        cos,
        sin,
        cu_seqlen_prefill,
        kv_cache,
        slots,
        seqlen,
        adapter_data,
        cross_attention_states,
        hpu_attention_meta: Optional[HPUPagedAttentionMetadata],
    ):
        normed_hidden_states, res = self.input_layernorm(hidden_states, residual)

        # Self Attention
        attn_output = self.self_attn(
            normed_hidden_states,
            cos,
            sin,
            cu_seqlen_prefill,
            kv_cache,
            slots,
            seqlen,
            adapter_data,
            hpu_attention_meta=hpu_attention_meta,
        )
        if self.residual_multiplier is not None:
            attn_output *= self.residual_multiplier

        normed_attn_res_output, attn_res = self.post_attention_layernorm(
            attn_output, res
        )

        mlp_output = self.mlp(normed_attn_res_output, adapter_data)
        if self.residual_multiplier is not None:
            mlp_output *= self.residual_multiplier

        return mlp_output, attn_res


class FlashLlamaModel(torch.nn.Module):
    def __init__(self, prefix, config, weights):
        super().__init__()

        process_group = weights.process_group
        self.tp_rank = process_group.rank()
        self.tp_world_size = process_group.size()

        # Skip fp8 quant for first and last layers
        self.layers = nn.ModuleList()
        self.cross_attention_layers = getattr(config, "cross_attention_layers", [])

        # Setting defaults for baichuan custom config which doesn't apply them.
        config.rope_theta = getattr(config, "rope_theta", 10000)
        config.num_key_value_heads = getattr(
            config, "num_key_value_heads", config.num_attention_heads
        )
        rotary_emb = PositionRotaryEmbedding.static(
            config=config,
            dim=config.hidden_size // config.num_attention_heads,
            base=config.rope_theta,
            device=weights.device,
        )
        with no_fp8(weights):
            self.layers.append(
                FlashLlamaLayer(
                    index=0,
                    prefix=f"{prefix}.layers.0",
                    config=config,
                    weights=weights,
                    rotary_emb=rotary_emb,
                )
            )

        # Skip first and last layers
        for layer_id in range(1, config.num_hidden_layers - 1):
            if layer_id in self.cross_attention_layers:
                from text_generation_server.models.custom_modeling.flash_mllama import (
                    FlashLlamaCrossLayer,
                )

                self.layers.append(
                    FlashLlamaCrossLayer(
                        index=layer_id,
                        prefix=(f"{prefix}.layers.{layer_id}"),
                        config=config,
                        weights=weights,
                    )
                )
            else:
                self.layers.append(
                    FlashLlamaLayer(
                        index=layer_id,
                        prefix=(f"{prefix}.layers.{layer_id}"),
                        config=config,
                        weights=weights,
                        rotary_emb=rotary_emb,
                    )
                )

        with no_fp8(weights):
            last_layer_id = config.num_hidden_layers - 1
            self.layers.append(
                FlashLlamaLayer(
                    index=last_layer_id,
                    prefix=(f"{prefix}.layers.{last_layer_id}"),
                    config=config,
                    weights=weights,
                    rotary_emb=rotary_emb,
                )
            )

        self.norm = FastRMSNorm.load(
            prefix=f"{prefix}.norm",
            weights=weights,
            eps=config.rms_norm_eps,
        )

        self.gradient_checkpointing = False

        self.head_size = self.layers[0].self_attn.head_size
        self.num_heads = self.layers[0].self_attn.num_heads
        self.num_key_value_heads = self.layers[0].self_attn.num_key_value_heads

    def forward(
        self,
        inputs_embeds: torch.Tensor,
        position_ids: torch.Tensor,
        cu_seqlen_prefill: Optional[torch.Tensor],
        kv_cache: List[Tuple[torch.Tensor, torch.Tensor]],
        slots: torch.Tensor,
        seqlen: Seqlen,
        adapter_data,
        hpu_attention_meta: Optional[HPUPagedAttentionMetadata],
        cross_attention_states=None,
    ) -> torch.Tensor:
        if hpu_attention_meta is not None:
            hpu_attention_meta = set_block_mapping(
                hpu_attention_meta, inputs_embeds.shape[0]
            )

        hidden_states = inputs_embeds

        # Get rotary cos and sin for this forward
        # Avoid to index in each layer
        cos, sin = self.layers[0].self_attn.rotary_emb.get_cos_sin(position_ids)

        residual = None
        lazy_mode = htorch.utils.internal.is_lazy()
        if lazy_mode:
            htorch.core.mark_step()
        for i, layer in enumerate(self.layers):
            hidden_states, residual = layer(
                hidden_states,
                residual,
                cos,
                sin,
                cu_seqlen_prefill,
                kv_cache[i],
                slots,
                seqlen,
                adapter_data,
                cross_attention_states,
                hpu_attention_meta=hpu_attention_meta,
            )
            if lazy_mode:
                htorch.core.mark_step()

        hidden_states, _ = self.norm(hidden_states, residual)

        return hidden_states


class FlashLlamaForCausalLM(torch.nn.Module):
    def __init__(self, prefix: str, config, weights, name=None):
        if name is None:
            name = "model"
        super().__init__()

        with no_fp8(weights):
            self.embed_tokens = TensorParallelEmbedding(
                prefix=(
                    f"{name}.embed_tokens"
                    if not prefix
                    else f"{prefix}.{name}.embed_tokens"
                ),
                weights=weights,
            )
        self.model = FlashLlamaModel(
            prefix=name if not prefix else f"{prefix}.{name}",
            config=config,
            weights=weights,
        )
        if config.tie_word_embeddings:
            suffix = "model.embed_tokens"
        else:
            suffix = "lm_head"

        # Used in Granite
        embedding_multiplier = getattr(config, "embedding_multiplier", None)
        if embedding_multiplier is not None:
            self.embed_tokens.weight.data *= embedding_multiplier

        prefix = suffix if not prefix or name != "model" else f"{prefix}.{suffix}"

        with no_fp8(weights):
            self.lm_head = SpeculativeHead.load(
                config,
                prefix,
                weights,
            )

        # Used in Granite
        self.logits_scaling = getattr(config, "logits_scaling", None)
        if self.logits_scaling is not None and self.lm_head.head is not None:
            try:
                # Scale the weights directly
                self.lm_head.head.linear.weight.data /= self.logits_scaling
                self.logits_scaled = True
            except Exception:
                self.logits_scaled = False

    def forward(
        self,
        input_ids: torch.Tensor,
        position_ids: torch.Tensor,
        cu_seqlen_prefill: Optional[torch.Tensor],
        kv_cache: List[Tuple[torch.Tensor, torch.Tensor]],
        slots: torch.Tensor,
        seqlen: Seqlen,
        hpu_attention_meta: Optional[HPUPagedAttentionMetadata],
        lm_head_indices: Optional[torch.Tensor] = None,
        adapter_data: Optional[torch.Tensor] = None,
        cross_attention_states=None,
    ) -> Tuple[torch.Tensor, Optional[torch.Tensor]]:
        inputs_embeds = self.embed_tokens(input_ids)
        hidden_states = self.model(
            inputs_embeds,
            position_ids,
            cu_seqlen_prefill,
            kv_cache,
            slots,
            seqlen,
            adapter_data=adapter_data,
            cross_attention_states=cross_attention_states,
            hpu_attention_meta=hpu_attention_meta,
        )
        if lm_head_indices is not None:
            hidden_states = hidden_states[lm_head_indices]
        logits, speculative_logits = self.lm_head(hidden_states)

        # Used in Granite
        if self.logits_scaling is not None and not self.logits_scaled:
            logits /= self.logits_scaling
            if speculative_logits is not None:
                speculative_logits /= self.logits_scaling

        return logits, speculative_logits
```
text-generation-inference/backends/gaudi/server/text_generation_server/models/custom_modeling/flash_llama_modeling.py/0
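In `FlashLlamaAttention`, `kv_head_mapping` implements grouped-query attention: each query head is routed to the KV head it shares. The same mapping in plain Python (no torch), assuming 8 query heads and 2 KV heads for illustration:

```python
num_heads = 8
num_key_value_heads = 2
num_groups = num_heads // num_key_value_heads  # 4 query heads share each KV head

# Equivalent of torch.arange(num_key_value_heads).repeat_interleave(num_groups):
kv_head_mapping = [kv for kv in range(num_key_value_heads) for _ in range(num_groups)]

# Entry i gives the KV head used by query head i.
assert kv_head_mapping == [0, 0, 0, 0, 1, 1, 1, 1]
assert len(kv_head_mapping) == num_heads
```

With tensor parallelism, the same construction runs on the per-shard head counts, which is why both counts must be divisible by the shard count.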
```python
# Copyright (C) 2024 Habana Labs, Ltd. an Intel Company.

from text_generation_server.utils.convert import convert_file, convert_files
from text_generation_server.utils.dist import initialize_torch_distributed
from text_generation_server.utils.weights import Weights
from text_generation_server.utils.peft import download_and_unload_peft
from text_generation_server.utils.hub import (
    weight_files,
    weight_hub_files,
    download_weights,
    EntryNotFoundError,
    LocalEntryNotFoundError,
    RevisionNotFoundError,
)
from text_generation_server.utils.tokens import (
    NextTokenChooser,
    HeterogeneousNextTokenChooser,
    StoppingCriteria,
    StopSequenceCriteria,
    FinishReason,
    Sampling,
    Greedy,
    make_tokenizer_optional,
    is_tokenizer_transparent,
    pad_next_token_chooser_parameters,
)

__all__ = [
    "convert_file",
    "convert_files",
    "initialize_torch_distributed",
    "weight_files",
    "weight_hub_files",
    "download_weights",
    "download_and_unload_peft",
    "EntryNotFoundError",
    "HeterogeneousNextTokenChooser",
    "LocalEntryNotFoundError",
    "RevisionNotFoundError",
    "Greedy",
    "NextTokenChooser",
    "Sampling",
    "StoppingCriteria",
    "StopSequenceCriteria",
    "FinishReason",
    "Weights",
    "make_tokenizer_optional",
    "is_tokenizer_transparent",
    "pad_next_token_chooser_parameters",
]
```
text-generation-inference/backends/gaudi/server/text_generation_server/utils/__init__.py/0
# Origin:  https://github.com/predibase/lorax
# Path:    lorax/server/lorax_server/utils/segments.py
# License: Apache License Version 2.0, January 2004

from typing import List, Tuple, Union

import torch


def find_segments(
    adapter_indices: Union[torch.Tensor, List[int]],
) -> Tuple[List[int], List[int]]:
    segments = [0]
    segment_indices = []

    if isinstance(adapter_indices, torch.Tensor):
        # Calling .item() repeatedly on CUDA tensor is very slow, so we move it to CPU first
        adapter_indices = adapter_indices.cpu().tolist()

    start_index = 0
    for i in range(1, len(adapter_indices)):
        if adapter_indices[i] != adapter_indices[i - 1]:
            segments.append(i)
            segment_indices.append(adapter_indices[i - 1])
            start_index = i

    # Handle the last segment
    if start_index < len(adapter_indices):
        segments.append(len(adapter_indices))
        segment_indices.append(adapter_indices[-1])

    return segments, segment_indices


class SegmentConcatBuilder:
    def __init__(self):
        self.adapter_segment_indices = []
        self.adapter_segment_tensors = []

    def concat(self, adapter_segments: torch.Tensor, segment_indices: List[int]):
        # Update adapter segments
        if self.adapter_segment_tensors:
            # Because we have already processed at least one batch, remove the 0 start index
            # from this batch denoting the beginning of the segment, then offset all segment
            # positions by the value of the last segment in the previous batch to account for
            # the concatenation.
            adapter_segments = (
                adapter_segments[1:] + self.adapter_segment_tensors[-1][-1]
            )

        if (
            self.adapter_segment_indices
            and self.adapter_segment_indices[-1] == segment_indices[0]
        ):
            # If the last segment in the previous batch is the same as the first segment in this batch,
            # then we merge them together into a single segment. In effect, this means removing it from
            # the segment indices of this batch, and extending the segment span by removing the segment
            # end index from the previous batch.
            segment_indices = segment_indices[1:]
            self.adapter_segment_tensors[-1] = self.adapter_segment_tensors[-1][:-1]

        self.adapter_segment_indices.extend(segment_indices)
        self.adapter_segment_tensors.append(adapter_segments)

    def build(self) -> Tuple[torch.Tensor, List[int]]:
        return torch.concat(self.adapter_segment_tensors), self.adapter_segment_indices
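As a purely illustrative addition (not part of the original file), the segmentation logic above can be exercised without the torch dependency. `find_segments_py` below is an assumed pure-Python mirror of `find_segments`: it splits a run-length list of adapter indices into contiguous segment boundaries plus the adapter index of each run.

```python
from typing import List, Tuple


def find_segments_py(adapter_indices: List[int]) -> Tuple[List[int], List[int]]:
    # Pure-Python mirror of find_segments above, without torch.
    segments = [0]
    segment_indices = []
    start_index = 0
    for i in range(1, len(adapter_indices)):
        if adapter_indices[i] != adapter_indices[i - 1]:
            # A new run starts at position i; close the previous one.
            segments.append(i)
            segment_indices.append(adapter_indices[i - 1])
            start_index = i
    # Handle the last segment
    if start_index < len(adapter_indices):
        segments.append(len(adapter_indices))
        segment_indices.append(adapter_indices[-1])
    return segments, segment_indices


# Indices [0, 0, 1, 1, 1, 2] contain three runs: [0, 2), [2, 5), [5, 6).
print(find_segments_py([0, 0, 1, 1, 1, 2]))  # ([0, 2, 5, 6], [0, 1, 2])
```

Each entry of the first list is a run boundary; the second list gives the adapter used inside each run.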
text-generation-inference/backends/gaudi/server/text_generation_server/utils/segments.py/0
{ "file_path": "text-generation-inference/backends/gaudi/server/text_generation_server/utils/segments.py", "repo_id": "text-generation-inference", "token_count": 1082 }
287
mod backend;
mod llamacpp;
mod quantize;

use quantize::QuantizeType;

use backend::{
    BackendError, LlamacppBackend, LlamacppConfig, LlamacppGGMLType, LlamacppNuma,
    LlamacppSplitMode,
};
use clap::Parser;
use hf_hub::api::tokio::ApiBuilder;
use hf_hub::{Repo, RepoType};
use std::path::Path;
use text_generation_router::{logging, server, usage_stats};
use thiserror::Error;
use tokenizers::Tokenizer;
use tokio::process::Command;
use tokio::sync::oneshot::error::RecvError;
use tracing::{error, warn};

/// Backend Configuration
#[derive(Parser, Debug)]
#[clap(author, version, about, long_about = None)]
struct Args {
    /// Name of the model to load.
    #[clap(long, env)]
    model_id: String,

    /// Revision of the model.
    #[clap(default_value = "main", long, env)]
    revision: String,

    /// Path to the GGUF model file for inference.
    #[clap(long, env)]
    model_gguf: Option<String>,

    /// Number of threads to use for generation.
    #[clap(long, env)]
    n_threads: Option<usize>,

    /// Number of threads to use for batch processing.
    #[clap(long, env)]
    n_threads_batch: Option<usize>,

    /// Number of layers to store in VRAM.
    #[clap(default_value = "0", long, env)]
    n_gpu_layers: usize,

    /// Split the model across multiple GPUs.
    #[clap(default_value = "layer", long, env)]
    split_mode: LlamacppSplitMode,

    /// Defragment the KV cache if holes/size > threshold.
    #[clap(default_value = "-1.0", long, env)]
    defrag_threshold: f32,

    /// Enable NUMA optimizations.
    #[clap(default_value = "disabled", value_enum, long, env)]
    numa: LlamacppNuma,

    /// Use memory mapping for the model.
    #[clap(long, env)]
    disable_mmap: bool,

    /// Use memory locking to prevent swapping.
    #[clap(long, env)]
    use_mlock: bool,

    /// Enable offloading of KQV operations to the GPU.
    #[clap(long, env)]
    disable_offload_kqv: bool,

    /// Enable flash attention for faster inference. (EXPERIMENTAL)
    #[clap(long, env)]
    disable_flash_attention: bool,

    /// Data type used for K cache.
    #[clap(default_value = "f16", value_enum, long, env)]
    type_k: LlamacppGGMLType,

    /// Data type used for V cache.
    #[clap(default_value = "f16", value_enum, long, env)]
    type_v: LlamacppGGMLType,

    /// Number of tokenizer workers used for payload validation and truncation.
    #[clap(default_value = "2", long, env)]
    validation_workers: usize,

    /// Maximum number of concurrent requests.
    #[clap(long, env)]
    max_concurrent_requests: Option<usize>,

    /// Maximum number of input tokens per request.
    #[clap(default_value = "1024", long, env)]
    max_input_tokens: usize,

    /// Maximum number of total tokens (input + output) per request.
    #[clap(default_value = "2048", long, env)]
    max_total_tokens: usize,

    /// Maximum number of tokens in a batch.
    #[clap(long, env)]
    max_batch_total_tokens: Option<usize>,

    /// Maximum number of tokens in a physical batch.
    #[clap(long, env)]
    max_physical_batch_total_tokens: Option<usize>,

    /// Maximum number of requests per batch.
    #[clap(long, env)]
    max_batch_size: Option<usize>,

    /// IP address to listen on.
    #[clap(default_value = "0.0.0.0", long)]
    hostname: String,

    /// Port to listen on.
    #[clap(default_value = "3000", long, short, env)]
    port: u16,

    #[clap(default_value = "9000", long, short, env)]
    prometheus_port: u16,

    /// Enable JSON output format.
    #[clap(long, env)]
    json_output: bool,

    /// OTLP endpoint for telemetry data.
    #[clap(long, env)]
    otlp_endpoint: Option<String>,

    /// Service name for OTLP telemetry.
    #[clap(default_value = "text-generation-inference.router", long, env)]
    otlp_service_name: String,

    /// Allowed origins for CORS.
    #[clap(long, env)]
    cors_allow_origin: Option<Vec<String>>,

    /// Path to the tokenizer configuration file.
    #[clap(long, env)]
    tokenizer_config_path: Option<String>,

    /// Disable grammar support.
    #[clap(long, env)]
    disable_grammar_support: bool,

    /// Maximum number of inputs per request.
    #[clap(default_value = "4", long, env)]
    max_client_batch_size: usize,

    /// Level of usage statistics collection.
    #[clap(default_value = "on", long, env)]
    usage_stats: usage_stats::UsageStatsLevel,

    /// Maximum payload size in bytes.
    #[clap(default_value = "2000000", long, env)]
    payload_limit: usize,
}

#[tokio::main]
async fn main() -> Result<(), RouterError> {
    let args = Args::parse();

    logging::init_logging(args.otlp_endpoint, args.otlp_service_name, args.json_output);

    let n_threads = match args.n_threads {
        Some(0) | None => num_cpus::get(),
        Some(threads) => threads,
    };
    let n_threads_batch = match args.n_threads_batch {
        Some(0) | None => n_threads,
        Some(threads) => threads,
    };
    let max_batch_size = match args.max_batch_size {
        Some(0) | None => n_threads_batch,
        Some(threads) => threads,
    };
    let max_batch_total_tokens = match args.max_batch_total_tokens {
        None => max_batch_size * args.max_total_tokens,
        Some(size) => size,
    };
    let max_physical_batch_total_tokens = match args.max_physical_batch_total_tokens {
        None => max_batch_total_tokens,
        Some(size) => size,
    };
    let max_concurrent_requests = match args.max_concurrent_requests {
        None => max_batch_size * 2,
        Some(size) => size,
    };
    if args.max_input_tokens >= args.max_total_tokens {
        return Err(RouterError::ArgumentValidation(
            "`max_input_tokens` must be < `max_total_tokens`".to_string(),
        ));
    }
    if args.max_total_tokens > max_batch_total_tokens {
        return Err(RouterError::ArgumentValidation(
            "`max_total_tokens` must be <= `max_batch_total_tokens`".to_string(),
        ));
    }
    if max_batch_size * args.max_total_tokens > max_batch_total_tokens {
        return Err(RouterError::ArgumentValidation(
            "`max_batch_size` * `max_total_tokens` must be <= `max_batch_total_tokens`".to_string(),
        ));
    }

    let api_builder = || {
        let mut builder = ApiBuilder::new().with_progress(true);
        if let Ok(cache_dir) = std::env::var("HUGGINGFACE_HUB_CACHE") {
            builder = builder.with_cache_dir(cache_dir.into());
        }
        if let Ok(token) = std::env::var("HF_TOKEN") {
            builder = builder.with_token(token.into());
        }
        if let Ok(origin) = std::env::var("HF_HUB_USER_AGENT_ORIGIN") {
            builder = builder.with_user_agent("origin", origin.as_str());
        }
        builder
    };
    let api_repo = api_builder().build()?.repo(Repo::with_revision(
        args.model_id.clone(),
        RepoType::Model,
        args.revision.clone(),
    ));

    let tokenizer_path = api_repo.get("tokenizer.json").await?;
    let tokenizer = Tokenizer::from_file(&tokenizer_path)?;

    let model_gguf = if let Some(model_gguf) = args.model_gguf {
        model_gguf
    } else {
        let model_gguf = format!("models/{}/model.gguf", args.model_id);
        let model_gguf_path = Path::new(&model_gguf);

        if !model_gguf_path.exists() {
            let tmp_gguf = "models/tmp.gguf";

            if let Some(parent) = Path::new(model_gguf_path).parent() {
                std::fs::create_dir_all(parent)?;
            }
            let cache_path = tokenizer_path.parent().unwrap();

            for sibling in api_repo.info().await?.siblings {
                let _ = api_repo.get(&sibling.rfilename).await?;
            }
            let status = Command::new("convert_hf_to_gguf.py")
                .arg("--outfile")
                .arg(tmp_gguf)
                .arg(cache_path)
                .spawn()?
                .wait()
                .await?;

            if !status.success() {
                let exit_code = status.code().unwrap_or(-1);
                error!("Failed to generate GGUF, exit code: {}", exit_code);
                return Err(RouterError::CommandError(exit_code));
            }
            quantize::model(tmp_gguf, &model_gguf, QuantizeType::MostlyQ4_0, n_threads)
                .map_err(RouterError::QuantizeError)?;
        }
        model_gguf
    };

    let (backend, ok, shutdown) = LlamacppBackend::new(
        LlamacppConfig {
            model_gguf,
            n_threads,
            n_threads_batch,
            n_gpu_layers: args.n_gpu_layers,
            split_mode: args.split_mode,
            defrag_threshold: args.defrag_threshold,
            numa: args.numa,
            use_mmap: !args.disable_mmap,
            use_mlock: args.use_mlock,
            flash_attention: !args.disable_flash_attention,
            type_k: args.type_k,
            type_v: args.type_v,
            offload_kqv: !args.disable_offload_kqv,
            max_batch_total_tokens,
            max_physical_batch_total_tokens,
            max_batch_size,
            batch_timeout: tokio::time::Duration::from_millis(5),
        },
        tokenizer,
    );
    ok.await??;

    if cfg!(debug_assertions) {
        warn!("Graceful shutdown disabled!");
        let _ = tokio::task::spawn(async move {
            let _ = tokio::signal::ctrl_c().await;
            let _ = shutdown.send(true);
        });
    }

    server::run(
        backend,
        max_concurrent_requests,
        0, // max_best_of
        0, // max_stop_sequences
        0, // max_top_n_tokens
        args.max_input_tokens,
        args.max_total_tokens,
        args.validation_workers,
        None, // api_key
        args.model_id, // tokenizer_name
        args.tokenizer_config_path,
        Some(args.revision),
        false, // trust_remote_code
        args.hostname,
        args.port,
        args.cors_allow_origin,
        false, // ngrok,
        None,  // ngrok_authtoken,
        None,  // ngrok_edge,
        args.disable_grammar_support,
        args.max_client_batch_size,
        args.usage_stats,
        args.payload_limit,
        args.prometheus_port,
    )
    .await?;
    Ok(())
}

#[derive(Debug, Error)]
enum RouterError {
    #[error("Argument validation error: {0}")]
    ArgumentValidation(String),
    #[error("Tokenizer error: {0}")]
    Tokenizer(#[from] tokenizers::Error),
    #[error("Backend error: {0}")]
    Backend(#[from] BackendError),
    #[error("WebServer error: {0}")]
    WebServer(#[from] server::WebServerError),
    #[error("Recv error: {0}")]
    RecvError(#[from] RecvError),
    #[error("Io error: {0}")]
    IoError(#[from] std::io::Error),
    #[error("Var error: {0}")]
    VarError(#[from] std::env::VarError),
    #[error("Quantize error: {0}")]
    QuantizeError(String),
    #[error("Command error: {0}")]
    CommandError(i32),
    #[error("HF hub error: {0}")]
    HubError(#[from] hf_hub::api::tokio::ApiError),
}
text-generation-inference/backends/llamacpp/src/main.rs/0
{ "file_path": "text-generation-inference/backends/llamacpp/src/main.rs", "repo_id": "text-generation-inference", "token_count": 4967 }
288
import copy
import logging
import subprocess
import sys
from tempfile import TemporaryDirectory
import os

import pytest
from transformers import AutoTokenizer

from optimum.neuron.cache import synchronize_hub_cache


logging.basicConfig(
    level=logging.INFO,
    format="[%(asctime)s] %(levelname)s [%(filename)s.%(funcName)s:%(lineno)d] %(message)s",
    stream=sys.stdout,
)
logger = logging.getLogger(__file__)

OPTIMUM_CACHE_REPO_ID = "optimum-internal-testing/neuron-testing-cache"

# All model configurations below will be added to the neuron_model_config fixture
MODEL_CONFIGURATIONS = {
    "llama": {
        "model_id": "unsloth/Llama-3.2-1B-Instruct",
        "export_kwargs": {
            "batch_size": 4,
            "sequence_length": 4096,
            "num_cores": 2,
            "auto_cast_type": "bf16",
        },
    },
    "qwen2": {
        "model_id": "Qwen/Qwen2.5-0.5B",
        "export_kwargs": {
            "batch_size": 4,
            "sequence_length": 4096,
            "num_cores": 2,
            "auto_cast_type": "bf16",
        },
    },
    "granite": {
        "model_id": "ibm-granite/granite-3.1-2b-instruct",
        "export_kwargs": {
            "batch_size": 4,
            "sequence_length": 4096,
            "num_cores": 2,
            "auto_cast_type": "bf16",
        },
    },
}


def export_model(model_id, export_kwargs, neuron_model_path):
    export_command = [
        "optimum-cli",
        "export",
        "neuron",
        "-m",
        model_id,
        "--task",
        "text-generation",
    ]
    for kwarg, value in export_kwargs.items():
        export_command.append(f"--{kwarg}")
        export_command.append(str(value))
    export_command.append(neuron_model_path)
    logger.info(f"Exporting {model_id} with {export_kwargs}")
    try:
        subprocess.run(export_command, check=True)
    except subprocess.CalledProcessError as e:
        raise ValueError(f"Failed to export model: {e}")


@pytest.fixture(scope="session", params=MODEL_CONFIGURATIONS.keys())
def neuron_model_config(request):
    """Expose a pre-trained neuron model

    The fixture exports a model locally and returns a dictionary containing:
    - a configuration name,
    - the original model id,
    - the export parameters,
    - the neuron model local path.

    For each exposed model, the local directory is maintained for the duration of the
    test session and cleaned up afterwards.
    """
    config_name = request.param
    model_config = copy.deepcopy(MODEL_CONFIGURATIONS[request.param])
    model_id = model_config["model_id"]
    export_kwargs = model_config["export_kwargs"]
    with TemporaryDirectory() as neuron_model_path:
        export_model(model_id, export_kwargs, neuron_model_path)
        synchronize_hub_cache(cache_repo_id=OPTIMUM_CACHE_REPO_ID)
        tokenizer = AutoTokenizer.from_pretrained(model_id)
        tokenizer.save_pretrained(neuron_model_path)
        del tokenizer
        # Add dynamic parameters to the model configuration
        model_config["neuron_model_path"] = neuron_model_path
        # Also add model configuration name to allow tests to adapt their expectations
        model_config["name"] = config_name
        # Yield instead of returning to keep a reference to the temporary directory.
        # It will go out of scope and be released only once all tests needing the fixture
        # have been completed.
        logger.info(f"{config_name} ready for testing ...")
        os.environ["CUSTOM_CACHE_REPO"] = OPTIMUM_CACHE_REPO_ID
        yield model_config
        logger.info(f"Done with {config_name}")


@pytest.fixture(scope="module")
def neuron_model_path(neuron_model_config):
    yield neuron_model_config["neuron_model_path"]
text-generation-inference/backends/neuron/tests/fixtures/model.py/0
{ "file_path": "text-generation-inference/backends/neuron/tests/fixtures/model.py", "repo_id": "text-generation-inference", "token_count": 1570 }
289
# Text Generation Inference - TensorRT-LLM Backend Implementation

## Description

This folder provides the sources of the TensorRT-LLM backend implementation powered by TensorRT-LLM Executor new API

## Simplified Request Sequence

```mermaid
sequenceDiagram
    actor User
    participant TextGenerationInference.HttpServer
    participant TextGenerationInference.TensorRtLlmBackend
    participant TextGenerationInference.TensorRtLlmWorkerThread
    participant TensorRtLlm.Executor
    participant Nvidia.Gpu

    User ->> TextGenerationInference.HttpServer: POST /generate
    TextGenerationInference.HttpServer ->> TextGenerationInference.TensorRtLlmBackend: Validate and forward inputs & parameters
    TextGenerationInference.TensorRtLlmBackend ->> TextGenerationInference.TensorRtLlmWorkerThread: Allocate a new context and spawn a new thread to handle the request
    TextGenerationInference.TensorRtLlmWorkerThread ->> TensorRtLlm.Executor: Submit the request to the In-Flight Batcher
    activate Nvidia.Gpu
    TensorRtLlm.Executor ->> Nvidia.Gpu: Add the request to the poll for execution
    TensorRtLlm.Executor -->> TextGenerationInference.TensorRtLlmWorkerThread: Response with an unique request identifier
    rect rgb(10, 92, 54)
        loop every 100us
            rect rgb(15, 81, 50)
                alt Acquire lock to query executor
                    TextGenerationInference.TensorRtLlmWorkerThread ->> TensorRtLlm.Executor: Poll request number of new token(s) generated
                else There are new generated tokens
                    TextGenerationInference.TensorRtLlmWorkerThread ->> TensorRtLlm.Executor: Retrieve newly generated tokens
                    TensorRtLlm.Executor -->> TextGenerationInference.TensorRtLlmWorkerThread: Return decoded token information and potential error (omitted)
                    rect rgb(11, 110, 79)
                        alt Generated token is final
                            TensorRtLlm.Executor ->> Nvidia.Gpu: Remove request from the scheduler and from the GPU
                            TextGenerationInference.TensorRtLlmWorkerThread -->> User: Stream the remaining decoded tokens and flush the connection
                        else Generated token is not final
                            TextGenerationInference.TensorRtLlmWorkerThread -->> User: Stream token back to the user as they get decoded
                        end
                    end
                end
            end
            deactivate Nvidia.Gpu
        end
    end
```
text-generation-inference/backends/trtllm/README.md/0
{ "file_path": "text-generation-inference/backends/trtllm/README.md", "repo_id": "text-generation-inference", "token_count": 1019 }
290
///
/// Extract the first line of the provided string reference.
/// If there are no lines in the buffer, it returns a string
/// whose content is defined by the content of `fail`
///
/// # Arguments
///
/// * `s`: The string buffer to extract the first line from
/// * `fail`: A string content which is returned if no lines are
///   present in `s`
///
/// returns: String
///
/// # Examples
///
/// ```
/// let s = "My name is Morgan.\n I'm working at Hugging Face.";
/// first_line(s, "No line in string");
/// ```
#[inline]
pub(crate) fn first_line(s: &str, fail: &str) -> String {
    s.lines().next().unwrap_or(fail).to_string()
}
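As a hedged, purely illustrative aside (not part of the Rust source above), the same first-line semantics can be mirrored in Python: return the first line of the input, or the fallback when the string contains no lines at all.

```python
def first_line(s: str, fail: str) -> str:
    # Mirrors the Rust helper: str.splitlines(), like Rust's str::lines(),
    # yields nothing for an empty string, in which case `fail` is returned.
    lines = s.splitlines()
    return lines[0] if lines else fail


print(first_line("My name is Morgan.\nI'm working at Hugging Face.", "No line"))
# My name is Morgan.
print(first_line("", "No line in string"))
# No line in string
```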
text-generation-inference/backends/trtllm/src/utils.rs/0
{ "file_path": "text-generation-inference/backends/trtllm/src/utils.rs", "repo_id": "text-generation-inference", "token_count": 201 }
291
use std::sync::Arc;
use tokio::sync::{mpsc, oneshot};

use crate::radix::RadixAllocator;
use text_generation_router::usage_stats::Env;

#[derive(Debug, Clone)]
pub struct BlockAllocation {
    pub allocation_id: u64,
    pub blocks: Vec<u32>,
    pub slots: Vec<u32>,

    /// Prefix that was cached and for which the KV does not have to
    /// be recomputed.
    pub prefix_len: u32,

    pub(crate) block_allocator: Option<BlockAllocator>,
}

impl Drop for BlockAllocation {
    fn drop(&mut self) {
        if let Some(block_allocator) = self.block_allocator.as_mut() {
            block_allocator.free(self.blocks.clone(), self.allocation_id)
        }
    }
}

#[derive(Debug, Clone)]
pub struct BlockAllocator {
    /// Channel to communicate with the background task
    block_allocator: mpsc::UnboundedSender<BlockAllocatorCommand>,
}

impl BlockAllocator {
    pub(crate) fn new(
        max_batch_total_tokens: u32,
        block_size: u32,
        prefix_caching: bool,
        window_size: Option<u32>,
    ) -> Self {
        // Create channel
        let (sender, receiver) = mpsc::unbounded_channel();

        // Launch background queue task
        tokio::spawn(block_allocator_task(
            max_batch_total_tokens / block_size,
            block_size,
            prefix_caching,
            window_size,
            receiver,
        ));

        Self {
            block_allocator: sender,
        }
    }

    pub(crate) async fn allocate(
        &self,
        tokens: u32,
        prefill_tokens: Option<Arc<Vec<u32>>>,
    ) -> Option<BlockAllocation> {
        let (response_sender, response_receiver) = oneshot::channel();
        self.block_allocator
            .send(BlockAllocatorCommand::Allocate {
                tokens,
                prefill_tokens,
                response_sender,
            })
            .unwrap();

        response_receiver.await.unwrap().map(|mut allocation| {
            allocation.block_allocator = Some(self.clone());
            allocation
        })
    }

    pub(crate) fn free(&self, blocks: Vec<u32>, allocation_id: u64) {
        self.block_allocator
            .send(BlockAllocatorCommand::Free {
                allocation_id,
                blocks,
            })
            .unwrap();
    }
}

async fn block_allocator_task(
    blocks: u32,
    block_size: u32,
    prefix_caching: bool,
    window_size: Option<u32>,
    mut receiver: mpsc::UnboundedReceiver<BlockAllocatorCommand>,
) {
    let mut allocator: Box<dyn Allocator + Send> = if prefix_caching {
        Box::new(RadixAllocator::new(block_size, blocks, window_size))
    } else {
        Box::new(SimpleAllocator::new(blocks, block_size, window_size))
    };
    while let Some(cmd) = receiver.recv().await {
        match cmd {
            BlockAllocatorCommand::Free {
                blocks,
                allocation_id,
            } => allocator.free(blocks, allocation_id),
            BlockAllocatorCommand::Allocate {
                tokens,
                prefill_tokens,
                response_sender,
            } => {
                response_sender
                    .send(allocator.allocate(tokens, prefill_tokens))
                    .unwrap();
            }
        }
    }
}

#[derive(Debug)]
enum BlockAllocatorCommand {
    Free {
        blocks: Vec<u32>,
        allocation_id: u64,
    },
    Allocate {
        tokens: u32,
        prefill_tokens: Option<Arc<Vec<u32>>>,
        response_sender: oneshot::Sender<Option<BlockAllocation>>,
    },
}

pub trait Allocator {
    fn allocate(
        &mut self,
        tokens: u32,
        prefill_tokens: Option<Arc<Vec<u32>>>,
    ) -> Option<BlockAllocation>;

    fn free(&mut self, blocks: Vec<u32>, allocation_id: u64);
}

pub struct SimpleAllocator {
    free_blocks: Vec<u32>,
    block_size: u32,
    window_size: Option<u32>,
    is_hpu_device: bool,
}

impl SimpleAllocator {
    fn new(blocks: u32, block_size: u32, window_size: Option<u32>) -> Self {
        SimpleAllocator {
            block_size,
            // Block 0 is reserved for health checks
            free_blocks: (1..blocks).collect(),
            window_size,
            is_hpu_device: Env::new().is_hpu_device(),
        }
    }
}

impl Allocator for SimpleAllocator {
    fn allocate(
        &mut self,
        tokens: u32,
        _prefill_tokens: Option<Arc<Vec<u32>>>,
    ) -> Option<BlockAllocation> {
        let mut tokens = tokens;
        if self.is_hpu_device {
            // need 1 slot for ping-pong optimization
            tokens += 1;
        }
        // Apply window size
        let (required_blocks, repeats) = {
            let (tokens, repeats) = match self.window_size {
                None => (tokens, 1),
                Some(window_size) => {
                    let repeats = tokens.div_ceil(window_size);
                    let tokens = core::cmp::min(tokens, window_size);
                    (tokens, repeats as usize)
                }
            };
            // Pad to a multiple of block size
            let required_blocks = tokens.div_ceil(self.block_size);
            (required_blocks, repeats)
        };
        let tokens = tokens as usize;
        if required_blocks > self.free_blocks.len() as u32 {
            None
        } else {
            if self.is_hpu_device {
                self.free_blocks.sort_by(|a, b| b.cmp(a));
            }
            let mut blocks = self
                .free_blocks
                .split_off(self.free_blocks.len() - required_blocks as usize);
            if self.is_hpu_device {
                blocks.sort();
            }
            let mut slots =
                Vec::with_capacity((required_blocks * self.block_size * repeats as u32) as usize);
            'slots: for block_id in blocks.repeat(repeats).iter() {
                for s in (block_id * self.block_size)..((block_id + 1) * self.block_size) {
                    slots.push(s);
                    if slots.len() == tokens {
                        break 'slots;
                    }
                }
            }
            Some(BlockAllocation {
                allocation_id: 0,
                blocks,
                slots,
                prefix_len: 0,
                block_allocator: None,
            })
        }
    }

    fn free(&mut self, blocks: Vec<u32>, _allocation_id: u64) {
        self.free_blocks.extend(blocks)
    }
}
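Purely as an illustration (an assumption-laden sketch, not the project's API), the core of `SimpleAllocator` — pop enough blocks off a free list and expand each block id into its contiguous per-token slot ids — can be written in a few lines of Python. The windowing, HPU, and prefix-caching details above are deliberately omitted.

```python
def allocate(free_blocks, tokens, block_size):
    # Pop enough blocks off the free list to cover `tokens` token slots,
    # then expand each block id into its contiguous slot ids.
    required = -(-tokens // block_size)  # ceiling division
    if required > len(free_blocks):
        return None  # out of memory: the caller must retry later
    blocks = [free_blocks.pop() for _ in range(required)]
    slots = []
    for b in blocks:
        for s in range(b * block_size, (b + 1) * block_size):
            slots.append(s)
            if len(slots) == tokens:
                return blocks, slots
    return blocks, slots


free = list(range(1, 5))     # block 0 reserved, as in the Rust code
print(allocate(free, 5, 4))  # ([4, 3], [16, 17, 18, 19, 12])
```

Returning `None` on exhaustion mirrors the `Option<BlockAllocation>` signature: scheduling backs off instead of panicking when the KV cache is full.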
text-generation-inference/backends/v3/src/block_allocator.rs/0
{ "file_path": "text-generation-inference/backends/v3/src/block_allocator.rs", "repo_id": "text-generation-inference", "token_count": 3274 }
292
/// MIT License
//
// Copyright (c) 2020 hatoo
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in all
// copies or substantial portions of the Software.

use std::collections::BTreeMap;

pub(crate) fn histogram(values: &[f64], bins: usize) -> Vec<(f64, usize)> {
    assert!(bins >= 2);
    let mut bucket: Vec<usize> = vec![0; bins];
    let min = values.iter().collect::<average::Min>().min();
    let max = values.iter().collect::<average::Max>().max();
    let step = (max - min) / (bins - 1) as f64;

    for &v in values {
        let i = std::cmp::min(((v - min) / step).ceil() as usize, bins - 1);
        bucket[i] += 1;
    }

    bucket
        .into_iter()
        .enumerate()
        .map(|(i, v)| (min + step * i as f64, v))
        .collect()
}

pub(crate) fn percentiles(values: &[f64], pecents: &[i32]) -> BTreeMap<String, f64> {
    pecents
        .iter()
        .map(|&p| {
            let i = (f64::from(p) / 100.0 * values.len() as f64) as usize;
            (format!("p{p}"), *values.get(i).unwrap_or(&f64::NAN))
        })
        .collect()
}
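As a hedged illustration (not part of the Rust file above), the same bucketing scheme — one bucket per bin edge, each value assigned by the ceiling of its scaled distance from the minimum — can be sketched in Python. The `percentiles` sketch assumes a sorted input, as the Rust version's indexing does.

```python
import math


def histogram(values, bins):
    # One bucket per bin edge; values land in the bucket given by
    # ceil((v - min) / step), clamped to the last bin.
    assert bins >= 2
    lo, hi = min(values), max(values)
    step = (hi - lo) / (bins - 1)
    bucket = [0] * bins
    for v in values:
        i = min(math.ceil((v - lo) / step), bins - 1)
        bucket[i] += 1
    return [(lo + step * i, n) for i, n in enumerate(bucket)]


def percentiles(values, percents):
    # `values` is assumed sorted; out-of-range indices map to NaN,
    # mirroring the Rust `get(i).unwrap_or(&f64::NAN)`.
    out = {}
    for p in percents:
        i = int(p / 100 * len(values))
        out[f"p{p}"] = values[i] if i < len(values) else float("nan")
    return out


print(histogram([0.0, 1.0, 2.0, 3.0], 3))  # [(0.0, 1), (1.5, 1), (3.0, 2)]
print(percentiles([1.0, 2.0, 3.0, 4.0], [50, 90]))
```

Note that the ceiling-based assignment puts the minimum value in the first bucket and biases boundary values upward, which matches the Rust implementation rather than a conventional half-open-interval histogram.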
text-generation-inference/benchmark/src/utils.rs/0
{ "file_path": "text-generation-inference/benchmark/src/utils.rs", "repo_id": "text-generation-inference", "token_count": 598 }
293
{
  "git+https://github.com/dottxt-ai/outlines-core.git?rev=ba10c619fc9bf3c487e43f49bdecb95a24bb465c#outlines-core@0.1.0": "1j9dcd831b0bmmjk2n4aag3x47qnqmkpg4gqpvwwyic7744llbfm"
}
text-generation-inference/crate-hashes.json/0
{ "file_path": "text-generation-inference/crate-hashes.json", "repo_id": "text-generation-inference", "token_count": 106 }
294
# Train Medusa

This tutorial will show you how to train a Medusa model on a dataset of your choice. Please check out the [speculation documentation](../conceptual/speculation) for more information on how Medusa works and speculation in general.

## What are the benefits of training a Medusa model?

Training Medusa heads can greatly improve the speed of generation. Medusa adds extra "heads" to LLMs to predict multiple future tokens simultaneously. When augmenting a model with Medusa, the original model stays untouched, and only the new heads are fine-tuned during training.

One of the most important things is to have a good dataset (with similar data to what will be used in production) because Medusa has a much higher hit-rate when the generation is in-domain. If you train Medusa on a dataset that is very different from the one you will use in production then the model will not be able to predict the future tokens accurately and consequently the speedup will be minimal or non-existent.

## Self-distillation (Generating data for training)

There are many methods for preparing data for training, but one of the easiest and most effective ways is to "self-distill" the data. This means that you can use the same model to generate the data that you will use to train the model.

Essentially, you prompt the model with a similar input to what you will use in production and the model will generate the output.

We'll use this output to help train the medusa heads to predict the `n+1`, `n+2`, `n+3`, etc tokens in the sequence.

## Training

The original implementation of Medusa is available at [https://github.com/FasterDecoding/Medusa](https://github.com/FasterDecoding/Medusa) and we'll follow a very similar process to train the model as described on the original repository.
### Getting Started

There are two methods for training the model:

- `torchrun` that is a wrapper around `torch.distributed.launch`
- a forked version of `axlotl` that supports Medusa

In this tutorial we'll use `torchrun` to train the model as it is the most straightforward way to train the model, but similar steps can be followed to train the model using `axlotl` if you prefer.

### Training with `torchrun`

```bash
mkdir medusa-training
cd medusa-training

pyenv install 3.10
pyenv local 3.10

uv venv -p 3.10
source .venv/bin/activate
```

Now let's clone the original `Medusa` repository and install the library.

```bash
git clone https://github.com/FasterDecoding/Medusa.git
cd Medusa
pip install -e .
```

Next we'll need some data to train on; we can use the `ShareGPT_Vicuna_unfiltered` dataset that is available on the Hugging Face Hub.

```bash
apt install git-lfs
git lfs install
git clone https://huggingface.co/datasets/Aeala/ShareGPT_Vicuna_unfiltered
```

Currently our directory structure looks like this:

```bash
.
├── assets
├── CITATION.cff
├── create_data.py
├── data_generation
├── deepspeed.json
├── last_run_prepared
├── LICENSE
├── llm_judge
├── medusa
├── medusa_llm.egg-info
├── mistral.json
├── notebooks
├── pyproject.toml
├── README.md
├── ROADMAP.md
├── scripts
├── ShareGPT_Vicuna_unfiltered
│   ├── README.md
│   ├── ShareGPT_2023.05.04v0_Wasteland_Edition.json
│   └── ShareGPT_V4.3_unfiltered_cleaned_split.json
├── simple_gradio_interface.py
├── tiny-llama.json
└── vicuna_7b_qlora_stage1
```

## Start Training

Now let's generate the data and start training the model. This process will take a while since we are generating data from the model.

First make sure you have an instance of TGI running with the model you want to use for self-distillation.
```bash
model=HuggingFaceH4/zephyr-7b-beta
volume=/home/ubuntu/.cache/huggingface/hub/

docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:latest --model-id $model
```

Now we can generate the data using the `create_data.py` script.

```bash
python create_data.py \
    --input-filename ShareGPT_Vicuna_unfiltered/ShareGPT_V4.3_unfiltered_cleaned_split.json \
    --output-filename zephyr_self_distill.json
```

At this point our terminal should look like this:

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/tgi/medusa-train-large.gif" width="550" />
</div>

> Note: In the screenshot above we are only using the first 500 examples from the dataset to speed up the process; you should have a much larger dataset for training.

Now we can finally get to the fun part and start training the model!

Using `torchrun` we can easily launch the `medusa` training script with the `zephyr_self_distill.json` configuration file.

> NOTE: If you just self-distilled you may still have the model running, make sure to stop it before starting the training in order to allow all of the resources to be used for training.
```bash
WANDB_MODE=offline torchrun --nproc_per_node=4 medusa/train/train_legacy.py \
    --model_name_or_path HuggingFaceH4/zephyr-7b-beta \
    --data_path zephyr_self_distill.json \
    --bf16 True \
    --output_dir zephyr_out \
    --num_train_epochs 5 \
    --per_device_train_batch_size 4 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 4 \
    --evaluation_strategy "no" \
    --save_strategy "no" \
    --learning_rate 1e-3 \
    --weight_decay 0.0 \
    --warmup_ratio 0.1 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --tf32 True \
    --model_max_length 2048 \
    --lazy_preprocess True \
    --medusa_num_heads 3 \
    --medusa_num_layers 1 \
    --deepspeed deepspeed.json
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/tgi/medusa-train-heads-large.gif" width="550" />
</div>

If successful, you should see output similar to the one below:

```bash
wandb: Run history:
wandb:                     train/epoch ▁▁▁▁▁▂▂▂▂▂▃▃▃▃▃▄▄▄▄▄▅▅▅▅▅▅▅▆▆▆▆▆▇▇▇▇▇███
wandb:               train/global_step ▁▁▁▁▁▂▂▂▂▂▃▃▃▃▃▄▄▄▄▄▅▅▅▅▅▅▅▆▆▆▆▆▇▇▇▇▇███
wandb:             train/learning_rate ▅███▇▇▆▅▅▄▃▂▂▁▁▁
wandb:                      train/loss ██▆▄▄▃▃▂▂▃▁▁▂▁▁▁
wandb:              train/medusa0_loss ▆▆▇▆▆▅▄▅▃▃▃▃▂▂▂▂▂▃▂▂▂▁▁▁▂▁▁▁▁▁█▁▁▁▂▁▁▁▁▁
wandb:              train/medusa0_top1 ▁▁▁▁▁▁▁▁▃▂▃▃▄▄▄▃▄▃▄▄▅▅▆▅▆▆▇▅▇▇▄▇█▇▅▇█▆▇▇
wandb:              train/medusa1_loss ▇▇█▇▇▆▅▅▃▄▃▃▃▃▃▃▃▃▃▃▂▁▂▂▂▁▁▂▁▁▇▁▁▁▂▁▁▁▁▁
wandb:              train/medusa1_top1 ▁▁▁▁▁▁▁▁▃▂▃▃▃▄▄▃▃▂▃▃▅▅▆▄█▆▇▅▇▇▅█▇▇▅▇█▆▆▇
wandb:              train/medusa2_loss ▃▃▄▄▄▃▃▃▂▂▂▂▂▂▂▂▂▂▂▂▁▁▁▁▁▁▁▁▁▁█▁▁▁▂▁▁▁▁▁
wandb:              train/medusa2_top1 ▁▁▁▂▁▁▁▁▂▂▃▃▃▄▄▃▃▂▃▃▅▆▅▄█▆▆▅▆▆▄█▇▇▄▇█▆▆▇
wandb:                train/total_flos ▁
wandb:                train/train_loss ▁
wandb:             train/train_runtime ▁
wandb:  train/train_samples_per_second ▁
wandb:    train/train_steps_per_second ▁
wandb:
wandb: Run summary:
wandb:                     train/epoch 2.0
wandb:               train/global_step 16
wandb:             train/learning_rate 0.0
wandb:                      train/loss 14.8906
wandb:              train/medusa0_loss 4.25
wandb:              train/medusa0_top1 0.28809
wandb:              train/medusa1_loss 4.8125
wandb:              train/medusa1_top1 0.22727
wandb:              train/medusa2_loss 5.5
wandb:              train/medusa2_top1 0.17293
wandb:                train/total_flos 0.0
wandb:                train/train_loss 23.98242
wandb:             train/train_runtime 396.9266
wandb:  train/train_samples_per_second 2.519
wandb:    train/train_steps_per_second 0.04
```

Last but most importantly, don't forget to push this model to the Hugging Face Hub so you can use it in your projects.

```bash
python -m medusa.hf_utils \
    --folder zephyr_out_medusa_mlp_zephyr-7b-beta_medusa_3_lr_0.001_layers_1 \
    --repo drbh/zephyr_medusa_demo
```

Woo, we've successfully trained a Medusa model and pushed it to the Hugging Face Hub! 🎉
text-generation-inference/docs/source/basic_tutorials/train_medusa.md/0
{ "file_path": "text-generation-inference/docs/source/basic_tutorials/train_medusa.md", "repo_id": "text-generation-inference", "token_count": 3478 }
295
# Installation from source

<Tip warning={true}>

Installing TGI from source is not the recommended usage. We strongly recommend using TGI through Docker; check the [Quick Tour](./quicktour), [Installation for Nvidia GPUs](./installation_nvidia) and [Installation for AMD GPUs](./installation_amd) to learn how to use TGI with Docker.

</Tip>

## Install CLI

You can use the TGI command-line interface (CLI) to download weights, serve and quantize models, or get information on serving parameters.

To install the CLI, you need to first clone the TGI repository and then run `make`.

```bash
git clone https://github.com/huggingface/text-generation-inference.git && cd text-generation-inference
make install
```

If you would like to serve models with custom kernels, run

```bash
BUILD_EXTENSIONS=True make install
```

## Local Installation from Source

Before you start, you will need to set up your environment and install Text Generation Inference. Text Generation Inference is tested on **Python 3.9+**.

Text Generation Inference is available on pypi, conda and GitHub.

To install and launch locally, first [install Rust](https://rustup.rs/) and create a Python virtual environment with at least Python 3.9, e.g. using conda:

```bash
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

conda create -n text-generation-inference python=3.9
conda activate text-generation-inference
```

You may also need to install Protoc.
On Linux:

```bash
PROTOC_ZIP=protoc-21.12-linux-x86_64.zip
curl -OL https://github.com/protocolbuffers/protobuf/releases/download/v21.12/$PROTOC_ZIP
sudo unzip -o $PROTOC_ZIP -d /usr/local bin/protoc
sudo unzip -o $PROTOC_ZIP -d /usr/local 'include/*'
rm -f $PROTOC_ZIP
```

On MacOS, using Homebrew:

```bash
brew install protobuf
```

Then run the following to install Text Generation Inference:

```bash
git clone https://github.com/huggingface/text-generation-inference.git && cd text-generation-inference
BUILD_EXTENSIONS=True make install
```

<Tip warning={true}>

On some machines, you may also need the OpenSSL libraries and gcc. On Linux machines, run:

```bash
sudo apt-get install libssl-dev gcc -y
```

</Tip>

Once installation is done, simply run:

```bash
make run-falcon-7b-instruct
```

This will serve the Falcon 7B Instruct model on port 8080, which we can query.
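As a sketch of such a query (this assumes the server above is listening on the default port 8080; the payload shape follows TGI's `/generate` route):

```shell
# Query the locally served model over TGI's /generate endpoint.
curl 127.0.0.1:8080/generate \
    -X POST \
    -d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":20}}' \
    -H 'Content-Type: application/json'
```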
text-generation-inference/docs/source/installation.md/0
{ "file_path": "text-generation-inference/docs/source/installation.md", "repo_id": "text-generation-inference", "token_count": 727 }
296
pytest_plugins = [ "fixtures.neuron.service", "fixtures.neuron.export_models", "fixtures.gaudi.service", ] # ruff: noqa: E402 from _pytest.fixtures import SubRequest from huggingface_hub.inference._generated.types.chat_completion import ( ChatCompletionStreamOutput, ChatCompletionOutput, ) from openai.types.chat.chat_completion_chunk import ( ChatCompletionChunk as OAIChatCompletionChunk, ) from openai.types.completion import Completion as OAICompletion import requests class SessionTimeoutFix(requests.Session): def request(self, *args, **kwargs): timeout = kwargs.pop("timeout", 120) return super().request(*args, **kwargs, timeout=timeout) requests.sessions.Session = SessionTimeoutFix import warnings import asyncio import contextlib import json import math import os import random import subprocess import sys import tempfile import time import docker import pytest import base64 from pathlib import Path from typing import Dict, List, Optional from aiohttp import ClientConnectorError, ClientOSError, ServerDisconnectedError from docker.errors import NotFound from syrupy.extensions.json import JSONSnapshotExtension from text_generation import AsyncClient from text_generation.types import ( BestOfSequence, Message, ChatComplete, ChatCompletionChunk, ChatCompletionComplete, Completion, Details, Grammar, InputToken, Response, Token, ) DOCKER_IMAGE = os.getenv("DOCKER_IMAGE", None) HF_TOKEN = os.getenv("HF_TOKEN", None) DOCKER_VOLUME = os.getenv("DOCKER_VOLUME", "/data") DOCKER_DEVICES = os.getenv("DOCKER_DEVICES") def pytest_addoption(parser): parser.addoption( "--release", action="store_true", default=False, help="run release tests" ) parser.addoption( "--neuron", action="store_true", default=False, help="run neuron tests" ) parser.addoption( "--gaudi", action="store_true", default=False, help="run gaudi tests" ) parser.addoption( "--gaudi-all-models", action="store_true", default=False, help="Run tests for all models instead of just the default subset", ) def 
pytest_configure(config): config.addinivalue_line("markers", "release: mark test as a release-only test") config.addinivalue_line("markers", "neuron: mark test as a neuron test") def pytest_collection_modifyitems(config, items): selectors = [] if not config.getoption("--release"): # --release not given in cli: skip release tests def skip_release(item): if "release" in item.keywords: item.add_marker(pytest.mark.skip(reason="need --release option to run")) selectors.append(skip_release) if config.getoption("--gaudi"): def skip_not_gaudi(item): if "gaudi" not in item.keywords: item.add_marker(pytest.mark.skip(reason="requires --gaudi to run")) selectors.append(skip_not_gaudi) else: def skip_gaudi(item): if "gaudi" in item.keywords: item.add_marker(pytest.mark.skip(reason="requires --gaudi to run")) selectors.append(skip_gaudi) if config.getoption("--neuron"): def skip_not_neuron(item): if "neuron" not in item.keywords: item.add_marker( pytest.mark.skip(reason="incompatible with --neuron option") ) selectors.append(skip_not_neuron) else: def skip_neuron(item): if "neuron" in item.keywords: item.add_marker(pytest.mark.skip(reason="requires --neuron to run")) selectors.append(skip_neuron) for item in items: for selector in selectors: selector(item) @pytest.fixture(autouse=True, scope="module") def container_log(request: SubRequest): error_log = request.getfixturevalue("error_log") assert error_log is not None yield if request.session.testsfailed: error_log.seek(0) print(error_log.read(), file=sys.stderr) else: error_log.truncate(0) error_log.seek(0) class ResponseComparator(JSONSnapshotExtension): rtol = 0.2 ignore_logprob = False def _serialize( self, data, ): if ( isinstance(data, Response) or isinstance(data, ChatComplete) or isinstance(data, ChatCompletionChunk) or isinstance(data, ChatCompletionComplete) or isinstance(data, Completion) or isinstance(data, OAIChatCompletionChunk) or isinstance(data, OAICompletion) ): data = data.model_dump() elif isinstance(data, 
ChatCompletionStreamOutput) or isinstance( data, ChatCompletionOutput ): data = dict(data) elif isinstance(data, List): data = [self._serialize(d) for d in data] elif isinstance(data, dict): return data else: raise RuntimeError(f"Unexpected data {type(data)} : {data}") return data def serialize( self, data, *, include=None, exclude=None, matcher=None, ): data = self._serialize(data) data = self._filter( data=data, depth=0, path=(), exclude=exclude, include=include, matcher=matcher, ) data = json.dumps(data, indent=2, ensure_ascii=False, sort_keys=False) + "\n" return data def matches( self, *, serialized_data, snapshot_data, ) -> bool: def convert_data(data): data = json.loads(data) return _convert_data(data) def _convert_data(data): if isinstance(data, Dict): if "choices" in data: data["choices"] = list( sorted(data["choices"], key=lambda x: int(x["index"])) ) choices = data["choices"] if isinstance(choices, List) and len(choices) >= 1: if "delta" in choices[0]: return ChatCompletionChunk(**data) if "text" in choices[0]: return Completion(**data) return ChatComplete(**data) else: return Response(**data) if isinstance(data, List): return [_convert_data(d) for d in data] raise NotImplementedError(f"Data: {data}") def eq_token(token: Token, other: Token) -> bool: return ( token.id == other.id and token.text == other.text and ( self.ignore_logprob or (token.logprob == other.logprob and token.logprob is None) or math.isclose(token.logprob, other.logprob, rel_tol=self.rtol) ) and token.special == other.special ) def eq_prefill_token(prefill_token: InputToken, other: InputToken) -> bool: try: return ( prefill_token.id == other.id and prefill_token.text == other.text and ( self.ignore_logprob or math.isclose( prefill_token.logprob, other.logprob, rel_tol=self.rtol, ) if prefill_token.logprob is not None else prefill_token.logprob == other.logprob ) ) except TypeError: return False def eq_best_of(details: BestOfSequence, other: BestOfSequence) -> bool: return ( 
details.finish_reason == other.finish_reason and details.generated_tokens == other.generated_tokens and details.seed == other.seed and len(details.prefill) == len(other.prefill) and all( [ eq_prefill_token(d, o) for d, o in zip(details.prefill, other.prefill) ] ) and len(details.tokens) == len(other.tokens) and all([eq_token(d, o) for d, o in zip(details.tokens, other.tokens)]) ) def eq_details(details: Details, other: Details) -> bool: return ( details.finish_reason == other.finish_reason and details.generated_tokens == other.generated_tokens and details.seed == other.seed and len(details.prefill) == len(other.prefill) and all( [ eq_prefill_token(d, o) for d, o in zip(details.prefill, other.prefill) ] ) and len(details.tokens) == len(other.tokens) and all([eq_token(d, o) for d, o in zip(details.tokens, other.tokens)]) and ( len(details.best_of_sequences) if details.best_of_sequences is not None else 0 ) == ( len(other.best_of_sequences) if other.best_of_sequences is not None else 0 ) and ( all( [ eq_best_of(d, o) for d, o in zip( details.best_of_sequences, other.best_of_sequences ) ] ) if details.best_of_sequences is not None else details.best_of_sequences == other.best_of_sequences ) ) def eq_completion(response: Completion, other: Completion) -> bool: return response.choices[0].text == other.choices[0].text def eq_chat_complete(response: ChatComplete, other: ChatComplete) -> bool: return ( response.choices[0].message.content == other.choices[0].message.content ) def eq_chat_complete_chunk( response: ChatCompletionChunk, other: ChatCompletionChunk ) -> bool: if response.choices: if response.choices[0].delta.content is not None: return ( response.choices[0].delta.content == other.choices[0].delta.content ) elif response.choices[0].delta.tool_calls is not None: return ( response.choices[0].delta.tool_calls == other.choices[0].delta.tool_calls ) else: raise RuntimeError( f"Invalid empty chat chunk {response} vs {other}" ) elif response.usage is not None: return 
response.usage == other.usage else: raise RuntimeError(f"Invalid empty chat {response} vs {other}") def eq_response(response: Response, other: Response) -> bool: return response.generated_text == other.generated_text and eq_details( response.details, other.details ) serialized_data = convert_data(serialized_data) snapshot_data = convert_data(snapshot_data) if not isinstance(serialized_data, List): serialized_data = [serialized_data] if not isinstance(snapshot_data, List): snapshot_data = [snapshot_data] if len(serialized_data) == 0: return len(snapshot_data) == len(serialized_data) if isinstance(serialized_data[0], Completion): return len(snapshot_data) == len(serialized_data) and all( [eq_completion(r, o) for r, o in zip(serialized_data, snapshot_data)] ) if isinstance(serialized_data[0], ChatComplete): return len(snapshot_data) == len(serialized_data) and all( [eq_chat_complete(r, o) for r, o in zip(serialized_data, snapshot_data)] ) if isinstance(serialized_data[0], ChatCompletionChunk): return len(snapshot_data) == len(serialized_data) and all( [ eq_chat_complete_chunk(r, o) for r, o in zip(serialized_data, snapshot_data) ] ) return len(snapshot_data) == len(serialized_data) and all( [eq_response(r, o) for r, o in zip(serialized_data, snapshot_data)] ) class GenerousResponseComparator(ResponseComparator): # Needed for GPTQ with exllama which has serious numerical fluctuations. 
rtol = 0.75 class IgnoreLogProbResponseComparator(ResponseComparator): ignore_logprob = True class LauncherHandle: def __init__(self, port: int, error_log): with warnings.catch_warnings(action="ignore"): self.client = AsyncClient(f"http://localhost:{port}", timeout=30) self.error_log = error_log def _inner_health(self): raise NotImplementedError async def health(self, timeout: int = 60): assert timeout > 0 for _ in range(timeout): if not self._inner_health(): self.error_log.seek(0) print(self.error_log.read(), file=sys.stderr) raise RuntimeError("Launcher crashed") try: await self.client.generate("test") return except (ClientConnectorError, ClientOSError, ServerDisconnectedError): time.sleep(1) self.error_log.seek(0) print(self.error_log.read(), file=sys.stderr) raise RuntimeError("Health check failed") class ContainerLauncherHandle(LauncherHandle): def __init__(self, docker_client, container_name, port: int, error_log): super().__init__(port, error_log) self.docker_client = docker_client self.container_name = container_name def _inner_health(self) -> bool: container = self.docker_client.containers.get(self.container_name) return container.status in ["running", "created"] class ProcessLauncherHandle(LauncherHandle): def __init__(self, process, port: int, error_log): super().__init__(port, error_log) self.process = process def _inner_health(self) -> bool: return self.process.poll() is None @pytest.fixture def response_snapshot(snapshot): return snapshot.use_extension(ResponseComparator) @pytest.fixture def generous_response_snapshot(snapshot): return snapshot.use_extension(GenerousResponseComparator) @pytest.fixture def ignore_logprob_response_snapshot(snapshot): return snapshot.use_extension(IgnoreLogProbResponseComparator) @pytest.fixture(scope="session") def error_log(): with tempfile.TemporaryFile("w+") as tmp: yield tmp @pytest.fixture(scope="session") async def launcher(error_log): @contextlib.contextmanager def local_launcher( model_id: str, num_shard: 
Optional[int] = None, quantize: Optional[str] = None, trust_remote_code: bool = False, use_flash_attention: bool = True, disable_grammar_support: bool = False, dtype: Optional[str] = None, kv_cache_dtype: Optional[str] = None, revision: Optional[str] = None, max_input_length: Optional[int] = None, max_input_tokens: Optional[int] = None, max_batch_prefill_tokens: Optional[int] = None, max_total_tokens: Optional[int] = None, lora_adapters: Optional[List[str]] = None, cuda_graphs: Optional[List[int]] = None, attention: Optional[str] = None, ): port = random.randint(8000, 10_000) master_port = random.randint(10_000, 20_000) shard_uds_path = ( f"/tmp/tgi-tests-{model_id.split('/')[-1]}-{num_shard}-{quantize}-server" ) args = [ "text-generation-launcher", "--model-id", model_id, "--port", str(port), "--master-port", str(master_port), "--shard-uds-path", shard_uds_path, ] env = os.environ if disable_grammar_support: args.append("--disable-grammar-support") if num_shard is not None: args.extend(["--num-shard", str(num_shard)]) if quantize is not None: args.append("--quantize") args.append(quantize) if dtype is not None: args.append("--dtype") args.append(dtype) if kv_cache_dtype is not None: args.append("--kv-cache-dtype") args.append(kv_cache_dtype) if revision is not None: args.append("--revision") args.append(revision) if trust_remote_code: args.append("--trust-remote-code") if max_input_length: args.append("--max-input-length") args.append(str(max_input_length)) if max_input_tokens: args.append("--max-input-tokens") args.append(str(max_input_tokens)) if max_batch_prefill_tokens: args.append("--max-batch-prefill-tokens") args.append(str(max_batch_prefill_tokens)) if max_total_tokens: args.append("--max-total-tokens") args.append(str(max_total_tokens)) if lora_adapters: args.append("--lora-adapters") args.append(",".join(lora_adapters)) if cuda_graphs: args.append("--cuda-graphs") args.append(",".join(map(str, cuda_graphs))) print(" ".join(args), file=sys.stderr) 
env["LOG_LEVEL"] = "info,text_generation_router=debug" env["PREFILL_CHUNKING"] = "1" if not use_flash_attention: env["USE_FLASH_ATTENTION"] = "false" if attention is not None: env["ATTENTION"] = attention # with tempfile.TemporaryFile("w+") as tmp: # We'll output stdout/stderr to a temporary file. Using a pipe # cause the process to block until stdout is read. with subprocess.Popen( args, stdout=error_log, stderr=subprocess.STDOUT, env=env, ) as process: yield ProcessLauncherHandle(process, port, error_log=error_log) process.terminate() process.wait(60) if not use_flash_attention: del env["USE_FLASH_ATTENTION"] @contextlib.contextmanager def docker_launcher( model_id: str, num_shard: Optional[int] = None, quantize: Optional[str] = None, trust_remote_code: bool = False, use_flash_attention: bool = True, disable_grammar_support: bool = False, dtype: Optional[str] = None, kv_cache_dtype: Optional[str] = None, revision: Optional[str] = None, max_input_length: Optional[int] = None, max_batch_prefill_tokens: Optional[int] = None, max_total_tokens: Optional[int] = None, lora_adapters: Optional[List[str]] = None, cuda_graphs: Optional[List[int]] = None, attention: Optional[str] = None, ): port = random.randint(8000, 10_000) args = ["--model-id", model_id, "--env"] if disable_grammar_support: args.append("--disable-grammar-support") if num_shard is not None: args.extend(["--num-shard", str(num_shard)]) if quantize is not None: args.append("--quantize") args.append(quantize) if dtype is not None: args.append("--dtype") args.append(dtype) if kv_cache_dtype is not None: args.append("--kv-cache-dtype") args.append(kv_cache_dtype) if revision is not None: args.append("--revision") args.append(revision) if trust_remote_code: args.append("--trust-remote-code") if max_input_length: args.append("--max-input-length") args.append(str(max_input_length)) if max_batch_prefill_tokens: args.append("--max-batch-prefill-tokens") args.append(str(max_batch_prefill_tokens)) if max_total_tokens: 
args.append("--max-total-tokens") args.append(str(max_total_tokens)) if lora_adapters: args.append("--lora-adapters") args.append(",".join(lora_adapters)) if cuda_graphs: args.append("--cuda-graphs") args.append(",".join(map(str, cuda_graphs))) client = docker.from_env() container_name = f"tgi-tests-{model_id.split('/')[-1]}-{num_shard}-{quantize}" try: container = client.containers.get(container_name) container.stop() container.remove() container.wait() except NotFound: pass gpu_count = num_shard if num_shard is not None else 1 env = { "LOG_LEVEL": "info,text_generation_router=debug", "PREFILL_CHUNKING": "1", } if not use_flash_attention: env["USE_FLASH_ATTENTION"] = "false" if attention is not None: env["ATTENTION"] = attention if HF_TOKEN is not None: env["HF_TOKEN"] = HF_TOKEN volumes = [] if DOCKER_VOLUME: volumes = [f"{DOCKER_VOLUME}:/data"] if DOCKER_DEVICES: if DOCKER_DEVICES.lower() == "none": devices = [] else: devices = DOCKER_DEVICES.strip().split(",") visible = os.getenv("ROCR_VISIBLE_DEVICES") if visible: env["ROCR_VISIBLE_DEVICES"] = visible device_requests = [] if not devices: devices = None elif devices == ["nvidia.com/gpu=all"]: devices = None device_requests = [ docker.types.DeviceRequest( driver="cdi", # count=gpu_count, device_ids=[f"nvidia.com/gpu={i}"], ) for i in range(gpu_count) ] else: devices = None device_requests = [ docker.types.DeviceRequest(count=gpu_count, capabilities=[["gpu"]]) ] client.api.timeout = 1000 container = client.containers.run( DOCKER_IMAGE, command=args, name=container_name, environment=env, auto_remove=False, detach=True, device_requests=device_requests, devices=devices, volumes=volumes, ports={"80/tcp": port}, healthcheck={"timeout": int(180 * 1e9), "retries": 2}, # 60s shm_size="1G", ) def pipe(): for log in container.logs(stream=True): log = log.decode("utf-8") error_log.write(log) # Start looping to pipe the logs import threading t = threading.Thread(target=pipe, args=()) t.start() try: yield 
ContainerLauncherHandle( client, container.name, port, error_log=error_log ) if not use_flash_attention: del env["USE_FLASH_ATTENTION"] try: container.stop() container.wait() except NotFound: pass finally: try: container.remove() except Exception: pass if DOCKER_IMAGE is not None: return docker_launcher return local_launcher @pytest.fixture(scope="module") def generate_load(): async def generate_load_inner( client: AsyncClient, prompt: str, max_new_tokens: int, n: int, seed: Optional[int] = None, grammar: Optional[Grammar] = None, stop_sequences: Optional[List[str]] = None, ) -> List[Response]: futures = [ client.generate( prompt, max_new_tokens=max_new_tokens, decoder_input_details=True, seed=seed, grammar=grammar, stop_sequences=stop_sequences, ) for _ in range(n) ] return await asyncio.gather(*futures) return generate_load_inner @pytest.fixture(scope="module") def generate_multi(): async def generate_load_inner( client: AsyncClient, prompts: List[str], max_new_tokens: int, seed: Optional[int] = None, ) -> List[Response]: import numpy as np arange = np.arange(len(prompts)) perm = np.random.permutation(arange) rperm = [-1] * len(perm) for i, p in enumerate(perm): rperm[p] = i shuffled_prompts = [prompts[p] for p in perm] futures = [ client.chat( messages=[Message(role="user", content=prompt)], max_tokens=max_new_tokens, temperature=0, seed=seed, ) for prompt in shuffled_prompts ] shuffled_responses = await asyncio.gather(*futures) responses = [shuffled_responses[p] for p in rperm] return responses return generate_load_inner # TODO fix the server parsser to count inline image tokens correctly @pytest.fixture def chicken(): path = Path(__file__).parent / "images" / "chicken_on_money.png" with open(path, "rb") as image_file: encoded_string = base64.b64encode(image_file.read()) return f"data:image/png;base64,{encoded_string.decode('utf-8')}" @pytest.fixture def cow_beach(): path = Path(__file__).parent / "images" / "cow_beach.png" with open(path, "rb") as image_file: 
encoded_string = base64.b64encode(image_file.read()) return f"data:image/png;base64,{encoded_string.decode('utf-8')}"
text-generation-inference/integration-tests/conftest.py/0
{ "file_path": "text-generation-inference/integration-tests/conftest.py", "repo_id": "text-generation-inference", "token_count": 13377 }
297
[
  {
    "choices": [
      {
        "delta": {
          "content": "OK",
          "role": "assistant",
          "tool_calls": null
        },
        "finish_reason": null,
        "index": 0,
        "logprobs": null
      }
    ],
    "created": 1741266005,
    "id": "",
    "model": "meta-llama/Llama-3.1-8B-Instruct",
    "object": "chat.completion.chunk",
    "system_fingerprint": "3.1.2-dev0-native",
    "usage": null
  },
  {
    "choices": [
      {
        "delta": {
          "content": "!",
          "role": "assistant",
          "tool_calls": null
        },
        "finish_reason": null,
        "index": 0,
        "logprobs": null
      }
    ],
    "created": 1741266005,
    "id": "",
    "model": "meta-llama/Llama-3.1-8B-Instruct",
    "object": "chat.completion.chunk",
    "system_fingerprint": "3.1.2-dev0-native",
    "usage": null
  },
  {
    "choices": [
      {
        "delta": {
          "content": "",
          "role": "assistant",
          "tool_calls": null
        },
        "finish_reason": "stop",
        "index": 0,
        "logprobs": null
      }
    ],
    "created": 1741266005,
    "id": "",
    "model": "meta-llama/Llama-3.1-8B-Instruct",
    "object": "chat.completion.chunk",
    "system_fingerprint": "3.1.2-dev0-native",
    "usage": null
  },
  {
    "choices": [],
    "created": 1741266005,
    "id": "",
    "model": "meta-llama/Llama-3.1-8B-Instruct",
    "object": "chat.completion.chunk",
    "system_fingerprint": "3.1.2-dev0-native",
    "usage": {
      "completion_tokens": 3,
      "prompt_tokens": 39,
      "total_tokens": 42
    }
  }
]
text-generation-inference/integration-tests/models/__snapshots__/test_completion_prompts/test_chat_hfhub_usage.json/0
{ "file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_completion_prompts/test_chat_hfhub_usage.json", "repo_id": "text-generation-inference", "token_count": 889 }
298
{ "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [], "seed": 0, "tokens": [ { "id": 604, "logprob": -0.28271484, "special": false, "text": " for" }, { "id": 573, "logprob": -0.19030762, "special": false, "text": " the" }, { "id": 16819, "logprob": -1.4863281, "special": false, "text": " detection" }, { "id": 576, "logprob": -0.7089844, "special": false, "text": " of" }, { "id": 573, "logprob": -2.0410156, "special": false, "text": " the" }, { "id": 8566, "logprob": 0.0, "special": false, "text": " presence" }, { "id": 689, "logprob": -0.16491699, "special": false, "text": " or" }, { "id": 14862, "logprob": 0.0, "special": false, "text": " absence" }, { "id": 576, "logprob": -0.9970703, "special": false, "text": " of" }, { "id": 671, "logprob": -0.5292969, "special": false, "text": " an" } ], "top_tokens": null }, "generated_text": "Test request for the detection of the presence or absence of an" }
text-generation-inference/integration-tests/models/__snapshots__/test_flash_gemma_gptq/test_flash_gemma_gptq_all_params.json/0
{ "file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_flash_gemma_gptq/test_flash_gemma_gptq_all_params.json", "repo_id": "text-generation-inference", "token_count": 867 }
299
{ "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [], "seed": 0, "tokens": [ { "id": 25, "logprob": -0.88183594, "special": false, "text": ":" }, { "id": 2209, "logprob": -2.6699219, "special": false, "text": " Is" }, { "id": 279, "logprob": -0.61083984, "special": false, "text": " the" }, { "id": 734, "logprob": -2.6660156, "special": false, "text": " function" }, { "id": 330, "logprob": -0.35498047, "special": false, "text": " \"" }, { "id": 4110, "logprob": -2.4101562, "special": false, "text": "Create" }, { "id": 7575, "logprob": -2.2304688, "special": false, "text": "Process" }, { "id": 1, "logprob": -0.080078125, "special": false, "text": "\"" }, { "id": 304, "logprob": -0.75439453, "special": false, "text": " in" }, { "id": 12468, "logprob": -1.8769531, "special": false, "text": " Win" } ], "top_tokens": null }, "generated_text": "Test request: Is the function \"CreateProcess\" in Win" }
text-generation-inference/integration-tests/models/__snapshots__/test_flash_llama_fp8/test_flash_llama_fp8_all_params.json/0
{ "file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_flash_llama_fp8/test_flash_llama_fp8_all_params.json", "repo_id": "text-generation-inference", "token_count": 868 }
300
{ "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 10, "prefill": [], "seed": 0, "tokens": [ { "id": 13, "logprob": -1.1582031, "special": false, "text": "\n" }, { "id": 2772, "logprob": -0.23083496, "special": false, "text": "De" }, { "id": 1022, "logprob": 0.0, "special": false, "text": "ep" }, { "id": 6509, "logprob": 0.0, "special": false, "text": " learning" }, { "id": 29892, "logprob": -0.61816406, "special": false, "text": "," }, { "id": 607, "logprob": -0.7089844, "special": false, "text": " which" }, { "id": 508, "logprob": -1.7724609, "special": false, "text": " can" }, { "id": 367, "logprob": 0.0, "special": false, "text": " be" }, { "id": 5545, "logprob": 0.0, "special": false, "text": " considered" }, { "id": 408, "logprob": -0.3869629, "special": false, "text": " as" } ] }, "generated_text": "What is Deep Learning?\nDeep learning, which can be considered as" }
text-generation-inference/integration-tests/models/__snapshots__/test_flash_medusa/test_flash_medusa_all_params.json/0
{ "file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_flash_medusa/test_flash_medusa_all_params.json", "repo_id": "text-generation-inference", "token_count": 847 }
301
{ "details": { "best_of_sequences": null, "finish_reason": "length", "generated_tokens": 60, "prefill": [], "seed": 0, "tokens": [ { "id": 222, "logprob": 0.0, "special": false, "text": "\n" }, { "id": 222, "logprob": 0.0, "special": false, "text": "\n" }, { "id": 40, "logprob": -0.7944336, "special": false, "text": "#" }, { "id": 494, "logprob": 0.0, "special": false, "text": " +" }, { "id": 447, "logprob": -0.1796875, "special": false, "text": " [" }, { "id": 9009, "logprob": 0.0, "special": false, "text": "markdown" }, { "id": 98, "logprob": 0.0, "special": false, "text": "]" }, { "id": 37402, "logprob": 0.0, "special": false, "text": " slideshow" }, { "id": 8492, "logprob": 0.0, "special": false, "text": "={\"" }, { "id": 7277, "logprob": 0.0, "special": false, "text": "slide" }, { "id": 100, "logprob": 0.0, "special": false, "text": "_" }, { "id": 700, "logprob": 0.0, "special": false, "text": "type" }, { "id": 582, "logprob": 0.0, "special": false, "text": "\":" }, { "id": 332, "logprob": 0.0, "special": false, "text": " \"" }, { "id": 7277, "logprob": -0.06994629, "special": false, "text": "slide" }, { "id": 3667, "logprob": 0.0, "special": false, "text": "\"}" }, { "id": 222, "logprob": 0.0, "special": false, "text": "\n" }, { "id": 40, "logprob": 0.0, "special": false, "text": "#" }, { "id": 607, "logprob": -0.8261719, "special": false, "text": " #" }, { "id": 244, "logprob": -1.8574219, "special": false, "text": " " }, { "id": 55, "logprob": -1.4541016, "special": false, "text": "2" }, { "id": 51, "logprob": 0.0, "special": false, "text": "." 
}, { "id": 6208, "logprob": -0.9794922, "special": false, "text": " What" }, { "id": 458, "logprob": 0.0, "special": false, "text": " is" }, { "id": 341, "logprob": 0.0, "special": false, "text": " the" }, { "id": 10609, "logprob": -0.69189453, "special": false, "text": " difference" }, { "id": 3761, "logprob": 0.0, "special": false, "text": " between" }, { "id": 331, "logprob": 0.0, "special": false, "text": " a" }, { "id": 1168, "logprob": -0.27172852, "special": false, "text": " list" }, { "id": 480, "logprob": 0.0, "special": false, "text": " and" }, { "id": 331, "logprob": 0.0, "special": false, "text": " a" }, { "id": 8871, "logprob": 0.0, "special": false, "text": " tuple" }, { "id": 68, "logprob": 0.0, "special": false, "text": "?" }, { "id": 222, "logprob": 0.0, "special": false, "text": "\n" }, { "id": 40, "logprob": -1.3359375, "special": false, "text": "#" }, { "id": 222, "logprob": 0.0, "special": false, "text": "\n" }, { "id": 40, "logprob": 0.0, "special": false, "text": "#" }, { "id": 449, "logprob": -0.03164673, "special": false, "text": " -" }, { "id": 418, "logprob": -1.0947266, "special": false, "text": " A" }, { "id": 1168, "logprob": 0.0, "special": false, "text": " list" }, { "id": 458, "logprob": 0.0, "special": false, "text": " is" }, { "id": 331, "logprob": -0.3305664, "special": false, "text": " a" }, { "id": 14792, "logprob": 0.0, "special": false, "text": " mutable" }, { "id": 6645, "logprob": -0.40478516, "special": false, "text": " sequence" }, { "id": 451, "logprob": 0.0, "special": false, "text": " of" }, { "id": 4725, "logprob": -0.50390625, "special": false, "text": " elements" }, { "id": 49, "logprob": -2.1269531, "special": false, "text": "," }, { "id": 2236, "logprob": -0.1427002, "special": false, "text": " while" }, { "id": 331, "logprob": 0.0, "special": false, "text": " a" }, { "id": 8871, "logprob": 0.0, "special": false, "text": " tuple" }, { "id": 458, "logprob": 0.0, "special": false, "text": " is" }, { "id": 619, 
"logprob": 0.0, "special": false, "text": " an" }, { "id": 26079, "logprob": 0.0, "special": false, "text": " immutable" }, { "id": 6645, "logprob": 0.0, "special": false, "text": " sequence" }, { "id": 451, "logprob": 0.0, "special": false, "text": " of" }, { "id": 4725, "logprob": 0.0, "special": false, "text": " elements" }, { "id": 51, "logprob": 0.0, "special": false, "text": "." }, { "id": 222, "logprob": 0.0, "special": false, "text": "\n" }, { "id": 40, "logprob": 0.0, "special": false, "text": "#" }, { "id": 449, "logprob": 0.0, "special": false, "text": " -" } ], "top_tokens": null }, "generated_text": "\n\n# + [markdown] slideshow={\"slide_type\": \"slide\"}\n# # 2. What is the difference between a list and a tuple?\n#\n# - A list is a mutable sequence of elements, while a tuple is an immutable sequence of elements.\n# -" }
text-generation-inference/integration-tests/models/__snapshots__/test_flash_starcoder2_lora/test_flash_starcoder2_default_params.json/0
{ "file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_flash_starcoder2_lora/test_flash_starcoder2_default_params.json", "repo_id": "text-generation-inference", "token_count": 4513 }
302
{ "details": { "best_of_sequences": null, "finish_reason": "eos_token", "generated_tokens": 19, "prefill": [], "seed": null, "tokens": [ { "id": 415, "logprob": -0.03665161, "special": false, "text": " The" }, { "id": 12072, "logprob": -0.13549805, "special": false, "text": " cow" }, { "id": 349, "logprob": -0.05819702, "special": false, "text": " is" }, { "id": 6328, "logprob": -0.6826172, "special": false, "text": " standing" }, { "id": 356, "logprob": -0.1607666, "special": false, "text": " on" }, { "id": 272, "logprob": -0.5073242, "special": false, "text": " the" }, { "id": 10305, "logprob": -0.016418457, "special": false, "text": " beach" }, { "id": 304, "logprob": -1.3916016, "special": false, "text": " and" }, { "id": 272, "logprob": -0.020217896, "special": false, "text": " the" }, { "id": 13088, "logprob": -0.0028133392, "special": false, "text": " chicken" }, { "id": 349, "logprob": -0.003145218, "special": false, "text": " is" }, { "id": 6398, "logprob": -0.37060547, "special": false, "text": " sitting" }, { "id": 356, "logprob": -0.034851074, "special": false, "text": " on" }, { "id": 264, "logprob": -0.2878418, "special": false, "text": " a" }, { "id": 17972, "logprob": -0.046051025, "special": false, "text": " pile" }, { "id": 302, "logprob": -0.00028848648, "special": false, "text": " of" }, { "id": 2445, "logprob": -0.025772095, "special": false, "text": " money" }, { "id": 28723, "logprob": -0.018127441, "special": false, "text": "." }, { "id": 32002, "logprob": -0.00019824505, "special": true, "text": "<end_of_utterance>" } ], "top_tokens": null }, "generated_text": " The cow is standing on the beach and the chicken is sitting on a pile of money." }
text-generation-inference/integration-tests/models/__snapshots__/test_idefics2/test_flash_idefics2_two_images.json/0
{ "file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_idefics2/test_flash_idefics2_two_images.json", "repo_id": "text-generation-inference", "token_count": 1559 }
303
{ "choices": [ { "finish_reason": "stop", "index": 0, "logprobs": null, "message": { "content": null, "role": "assistant", "tool_calls": [ { "function": { "arguments": "{\"location\":\"Brooklyn, NY\",\"format\":\"fahrenheit\"}", "description": null, "name": "get_current_weather" }, "id": "0", "type": "function" } ] } } ], "created": 1741372434, "id": "", "model": "meta-llama/Llama-3.1-8B-Instruct", "object": "chat.completion", "system_fingerprint": "3.1.2-dev0-native", "usage": { "completion_tokens": 29, "prompt_tokens": 501, "total_tokens": 530 } }
text-generation-inference/integration-tests/models/__snapshots__/test_tools_llama/test_flash_llama_grammar_tools_auto_nostream.json/0
{ "file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_tools_llama/test_flash_llama_grammar_tools_auto_nostream.json", "repo_id": "text-generation-inference", "token_count": 421 }
304
{ "choices": [ { "finish_reason": "stop", "index": 0, "logprobs": null, "message": { "content": "The image shows a brown cow standing on the beach with a white face and black and white marking on its ears. The cow has a white patch around its nose and mouth. The ocean and blue sky are in the background.", "name": null, "role": "assistant", "tool_calls": null }, "usage": null } ], "created": 1743863057, "id": "", "model": "ll-re/Llama-4-Scout-17B-16E-Instruct", "object": "chat.completion", "system_fingerprint": "3.2.1-dev0-native", "usage": { "completion_tokens": 46, "prompt_tokens": 164, "total_tokens": 210 } }
text-generation-inference/integration-tests/models/__snapshots__/test_transformers_llama4/test_flash_llama4_image_cow.json/0
{ "file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_transformers_llama4/test_flash_llama4_image_cow.json", "repo_id": "text-generation-inference", "token_count": 314 }
305
import pytest


@pytest.fixture(scope="module")
def flash_llama_awq_handle_sharded(launcher):
    with launcher(
        "abhinavkulkarni/codellama-CodeLlama-7b-Python-hf-w4-g128-awq",
        num_shard=2,
        quantize="awq",
    ) as handle:
        yield handle


@pytest.fixture(scope="module")
async def flash_llama_awq_sharded(flash_llama_awq_handle_sharded):
    await flash_llama_awq_handle_sharded.health(300)
    return flash_llama_awq_handle_sharded.client


@pytest.mark.release
@pytest.mark.asyncio
async def test_flash_llama_awq_sharded(flash_llama_awq_sharded, response_snapshot):
    response = await flash_llama_awq_sharded.generate(
        "What is Deep Learning?", max_new_tokens=10, decoder_input_details=True
    )

    assert response.details.generated_tokens == 10
    assert (
        response.generated_text
        == "\nWhat is the difference between Deep Learning and Machine"
    )
    assert response == response_snapshot


@pytest.mark.release
@pytest.mark.asyncio
async def test_flash_llama_awq_load_sharded(
    flash_llama_awq_sharded, generate_load, response_snapshot
):
    responses = await generate_load(
        flash_llama_awq_sharded, "What is Deep Learning?", max_new_tokens=10, n=4
    )

    assert len(responses) == 4
    assert all(
        [
            r.generated_text
            == "\nWhat is the difference between Deep Learning and Machine"
            for r in responses
        ]
    )

    assert responses == response_snapshot
text-generation-inference/integration-tests/models/test_flash_awq_sharded.py/0
{ "file_path": "text-generation-inference/integration-tests/models/test_flash_awq_sharded.py", "repo_id": "text-generation-inference", "token_count": 624 }
306
import pytest


@pytest.fixture(scope="module")
def flash_santacoder_handle(launcher):
    with launcher("bigcode/santacoder") as handle:
        yield handle


@pytest.fixture(scope="module")
async def flash_santacoder(flash_santacoder_handle):
    await flash_santacoder_handle.health(300)
    return flash_santacoder_handle.client


@pytest.mark.release
@pytest.mark.asyncio
async def test_flash_santacoder(flash_santacoder, response_snapshot):
    response = await flash_santacoder.generate(
        "def print_hello", max_new_tokens=10, decoder_input_details=True
    )

    assert response.details.generated_tokens == 10
    assert response == response_snapshot


@pytest.mark.release
@pytest.mark.asyncio
async def test_flash_santacoder_load(
    flash_santacoder, generate_load, response_snapshot
):
    responses = await generate_load(
        flash_santacoder, "def print_hello", max_new_tokens=10, n=4
    )

    assert len(responses) == 4
    assert all([r.generated_text == responses[0].generated_text for r in responses])

    assert responses == response_snapshot
text-generation-inference/integration-tests/models/test_flash_santacoder.py/0
{ "file_path": "text-generation-inference/integration-tests/models/test_flash_santacoder.py", "repo_id": "text-generation-inference", "token_count": 403 }
307
import pytest


@pytest.fixture(scope="module")
def mt0_base_handle(launcher):
    with launcher("bigscience/mt0-base") as handle:
        yield handle


@pytest.fixture(scope="module")
async def mt0_base(mt0_base_handle):
    await mt0_base_handle.health(300)
    return mt0_base_handle.client


@pytest.mark.release
@pytest.mark.asyncio
async def test_mt0_base(mt0_base, response_snapshot):
    response = await mt0_base.generate(
        "Why is the sky blue?",
        max_new_tokens=10,
        top_p=0.9,
        decoder_input_details=True,
        seed=0,
    )

    assert response.details.generated_tokens == 5
    assert response == response_snapshot


@pytest.mark.release
@pytest.mark.asyncio
async def test_mt0_base_all_params(mt0_base, response_snapshot):
    response = await mt0_base.generate(
        "Why is the sky blue?",
        max_new_tokens=10,
        repetition_penalty=1.2,
        return_full_text=True,
        stop_sequences=["test"],
        temperature=0.5,
        top_p=0.9,
        top_k=10,
        truncate=5,
        typical_p=0.9,
        watermark=True,
        decoder_input_details=True,
        seed=0,
    )

    assert response.details.generated_tokens == 10
    assert response == response_snapshot


@pytest.mark.release
@pytest.mark.asyncio
async def test_mt0_base_load(mt0_base, generate_load, response_snapshot):
    responses = await generate_load(
        mt0_base,
        "Why is the sky blue?",
        max_new_tokens=10,
        n=4,
    )

    assert len(responses) == 4
    assert all([r.generated_text == responses[0].generated_text for r in responses])

    assert responses == response_snapshot
text-generation-inference/integration-tests/models/test_mt0_base.py/0
{ "file_path": "text-generation-inference/integration-tests/models/test_mt0_base.py", "repo_id": "text-generation-inference", "token_count": 737 }
308
use std::error::Error;
use vergen::EmitBuilder;

fn main() -> Result<(), Box<dyn Error>> {
    // Emit cargo and rustc compile time values
    EmitBuilder::builder().all_cargo().all_rustc().emit()?;

    // Try to get the git sha from the local git repository
    if EmitBuilder::builder()
        .fail_on_error()
        .git_sha(false)
        .emit()
        .is_err()
    {
        // Unable to get the git sha
        if let Ok(sha) = std::env::var("GIT_SHA") {
            // Set it from an env var
            println!("cargo:rustc-env=VERGEN_GIT_SHA={sha}");
        }
    }

    // Set docker label if present
    if let Ok(label) = std::env::var("DOCKER_LABEL") {
        // Set it from an env var
        println!("cargo:rustc-env=DOCKER_LABEL={label}");
    }

    Ok(())
}
text-generation-inference/launcher/build.rs/0
{ "file_path": "text-generation-inference/launcher/build.rs", "repo_id": "text-generation-inference", "token_count": 363 }
309
{
  stdenv,
  dockerTools,
  cacert,
  text-generation-inference,
  stream ? false,
}:

let
  build = if stream then dockerTools.streamLayeredImage else dockerTools.buildLayeredImage;
in
build {
  name = "tgi-docker";
  tag = "latest";
  compressor = "zstd";
  config = {
    EntryPoint = [ "${text-generation-inference}/bin/text-generation-inference" ];
    Env = [
      "HF_HOME=/data"
      "PORT=80"
      # The CUDA container toolkit will mount the driver shim into the
      # container. We just have to ensure that the dynamic loader finds
      # the libraries.
      "LD_LIBRARY_PATH=/usr/lib64"
    ];
  };
  extraCommands = ''
    mkdir -p tmp
    chmod -R 1777 tmp
  '';
  contents = [
    cacert
    stdenv.cc
  ];
}
text-generation-inference/nix/docker.nix/0
{ "file_path": "text-generation-inference/nix/docker.nix", "repo_id": "text-generation-inference", "token_count": 290 }
310
use axum::{extract::Request, middleware::Next, response::Response};
use opentelemetry::sdk::propagation::TraceContextPropagator;
use opentelemetry::sdk::trace;
use opentelemetry::sdk::trace::Sampler;
use opentelemetry::sdk::Resource;
use opentelemetry::trace::{SpanContext, SpanId, TraceContextExt, TraceFlags, TraceId};
use opentelemetry::Context;
use opentelemetry::{global, KeyValue};
use opentelemetry_otlp::WithExportConfig;
use tracing_subscriber::layer::SubscriberExt;
use tracing_subscriber::util::SubscriberInitExt;
use tracing_subscriber::{filter::LevelFilter, EnvFilter, Layer};

struct TraceParent {
    #[allow(dead_code)]
    version: u8,
    trace_id: TraceId,
    parent_id: SpanId,
    trace_flags: TraceFlags,
}

fn parse_traceparent(header_value: &str) -> Option<TraceParent> {
    let parts: Vec<&str> = header_value.split('-').collect();
    if parts.len() != 4 {
        return None;
    }

    let version = u8::from_str_radix(parts[0], 16).ok()?;
    if version == 0xff {
        return None;
    }

    let trace_id = TraceId::from_hex(parts[1]).ok()?;
    let parent_id = SpanId::from_hex(parts[2]).ok()?;
    let trace_flags = u8::from_str_radix(parts[3], 16).ok()?;

    Some(TraceParent {
        version,
        trace_id,
        parent_id,
        trace_flags: TraceFlags::new(trace_flags),
    })
}

pub async fn trace_context_middleware(mut request: Request, next: Next) -> Response {
    let context = request
        .headers()
        .get("traceparent")
        .and_then(|v| v.to_str().ok())
        .and_then(parse_traceparent)
        .map(|traceparent| {
            Context::new().with_remote_span_context(SpanContext::new(
                traceparent.trace_id,
                traceparent.parent_id,
                traceparent.trace_flags,
                true,
                Default::default(),
            ))
        });

    request.extensions_mut().insert(context);

    next.run(request).await
}

/// Init logging using env variables LOG_LEVEL and LOG_FORMAT:
///     - otlp_endpoint is an optional URL to an Open Telemetry collector
///     - otlp_service_name service name to appear in APM
///     - LOG_LEVEL may be TRACE, DEBUG, INFO, WARN or ERROR (default to INFO)
///     - LOG_FORMAT may be TEXT or JSON (default to TEXT)
///     - LOG_COLORIZE may be "false" or "true" (default to "true" on ansi supported platforms)
pub fn init_logging(otlp_endpoint: Option<String>, otlp_service_name: String, json_output: bool) {
    let mut layers = Vec::new();

    // STDOUT/STDERR layer
    let ansi = std::env::var("LOG_COLORIZE") != Ok("1".to_string());
    let fmt_layer = tracing_subscriber::fmt::layer()
        .with_file(true)
        .with_ansi(ansi)
        .with_line_number(true);

    let fmt_layer = match json_output {
        true => fmt_layer.json().flatten_event(true).boxed(),
        false => fmt_layer.boxed(),
    };
    layers.push(fmt_layer);

    // OpenTelemetry tracing layer
    if let Some(otlp_endpoint) = otlp_endpoint {
        global::set_text_map_propagator(TraceContextPropagator::new());

        let tracer = opentelemetry_otlp::new_pipeline()
            .tracing()
            .with_exporter(
                opentelemetry_otlp::new_exporter()
                    .tonic()
                    .with_endpoint(otlp_endpoint),
            )
            .with_trace_config(
                trace::config()
                    .with_resource(Resource::new(vec![KeyValue::new(
                        "service.name",
                        otlp_service_name,
                    )]))
                    .with_sampler(Sampler::AlwaysOn),
            )
            .install_batch(opentelemetry::runtime::Tokio);

        if let Ok(tracer) = tracer {
            layers.push(tracing_opentelemetry::layer().with_tracer(tracer).boxed());
            init_tracing_opentelemetry::init_propagator().unwrap();
        };
    }

    // Filter events with LOG_LEVEL
    let varname = "LOG_LEVEL";
    let env_filter = if let Ok(log_level) = std::env::var(varname) {
        // Override to avoid simple logs to be spammed with tokio level informations
        let log_level = match &log_level[..] {
            "warn" => "text_generation_launcher=warn,text_generation_router=warn",
            "info" => "text_generation_launcher=info,text_generation_router=info",
            "debug" => "text_generation_launcher=debug,text_generation_router=debug",
            log_level => log_level,
        };
        EnvFilter::builder()
            .with_default_directive(LevelFilter::INFO.into())
            .parse_lossy(log_level)
    } else {
        EnvFilter::new("info")
    };

    tracing_subscriber::registry()
        .with(env_filter)
        .with(layers)
        .init();
}
text-generation-inference/router/src/logging.rs/0
{ "file_path": "text-generation-inference/router/src/logging.rs", "repo_id": "text-generation-inference", "token_count": 2156 }
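The middleware in logging.rs above parses a W3C `traceparent` header into version, trace id, parent span id and flags, rejecting malformed headers and the reserved version `0xff`. A minimal Python sketch of the same parsing rules follows; the dict return type and the explicit length checks (standing in for `TraceId::from_hex`/`SpanId::from_hex` validation) are illustrative assumptions, not part of the router.

```python
# Illustrative re-implementation of parse_traceparent from logging.rs.
# Field names mirror the Rust TraceParent struct; the length checks
# approximate what TraceId::from_hex / SpanId::from_hex enforce.
def parse_traceparent(header_value):
    parts = header_value.split("-")
    if len(parts) != 4:
        return None
    try:
        version = int(parts[0], 16)
        trace_flags = int(parts[3], 16)
    except ValueError:
        return None
    if version == 0xFF:  # 0xff is forbidden by the Trace Context spec
        return None
    trace_id, parent_id = parts[1], parts[2]
    # 128-bit trace id and 64-bit span id, hex-encoded
    if len(trace_id) != 32 or len(parent_id) != 16:
        return None
    return {
        "version": version,
        "trace_id": trace_id,
        "parent_id": parent_id,
        "trace_flags": trace_flags,
    }


ctx = parse_traceparent("00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01")
```

A valid header yields a populated dict (here version 0, sampled flag 1); anything with the wrong field count or a reserved version yields `None`, which the Rust middleware maps to "no remote span context".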
311
selective_scan_commit := 2a3704fd47ba817b415627b06fd796b971fdc137

causal-conv1d:
	rm -rf causal-conv1d
	git clone https://github.com/Dao-AILab/causal-conv1d.git

build-causal-conv1d: causal-conv1d
	cd causal-conv1d/ && git checkout v1.1.1 # known latest working version tag
	cd causal-conv1d/ && CAUSAL_CONV1D_FORCE_BUILD=TRUE python setup.py build

install-causal-conv1d: build-causal-conv1d
	pip uninstall causal-conv1d -y || true
	cd causal-conv1d/ && pip install .

# selective-scan depends on causal-conv1d
selective-scan:
	rm -rf mamba
	git clone https://github.com/state-spaces/mamba.git mamba

build-selective-scan: selective-scan
	cd mamba/ && git fetch && git checkout $(selective_scan_commit)
	cd mamba && python setup.py build

install-selective-scan: install-causal-conv1d build-selective-scan
	pip uninstall selective-scan-cuda -y || true
	cd mamba && pip install .

build-all: build-causal-conv1d build-selective-scan
text-generation-inference/server/Makefile-selective-scan/0
{ "file_path": "text-generation-inference/server/Makefile-selective-scan", "repo_id": "text-generation-inference", "token_count": 351 }
312
// Adapted from turboderp exllama: https://github.com/turboderp/exllama

#include <torch/extension.h>
#include <c10/cuda/CUDAGuard.h>
#include <ATen/cuda/CUDAContext.h>
#include <cuda_runtime.h>
#include <cuda_fp16.h>
#include <cstdint>
#include <cstdio>

#include "util.cuh"
#include "tuning.h"
#include "cuda_buffers.cuh"
#include "cuda_func/q4_matrix.cuh"
#include "cuda_func/q4_matmul.cuh"
#include "cuda_func/column_remap.cuh"

// Check CUDA return code. We don't want to include Torch headers in the .cu files because parsing them adds almost a
// minute to the compile time on a 12900K. Also passing exceptions back to Python is super tricky, so in place of
// exceptions, CUDA functions return with a cudaError_t which we can parse and dump to the console.

void check_cuda(cudaError_t ret)
{
    switch (ret)
    {
        case cudaSuccess:
            break;

        case cudaUnspecified:
            printf(" **** Unspecified error\n");
            TORCH_CHECK(false, "CUDA error");
            break;

        default:
            printf(" **** CUDA error\n");
            printf(" **** %s\n", cudaGetErrorString(ret));
            TORCH_CHECK(false, "CUDA error");
            break;
    }
}

// Some decluttering macros

#define STRINGIFY_(__x) #__x
#define STRINGIFY(__x) STRINGIFY_(__x)

#define TORCH_CHECK_DTYPE(__x, __dtype) TORCH_CHECK((__x).dtype() == torch::__dtype, #__x " is incorrect datatype, must be " #__dtype)
#define TORCH_CHECK_DTYPE_OPT(__x, __dtype) TORCH_CHECK((__x).device().is_meta() || (__x).dtype() == torch::__dtype, #__x " is incorrect datatype, must be " #__dtype)
#define TORCH_CHECK_SHAPES(__x, __dim_x, __y, __dim_y, __scale_y) TORCH_CHECK((__x).size(__dim_x) == (__y).size(__dim_y) * __scale_y, #__x " and " #__y " have incompatible shapes")
#define TORCH_CHECK_SHAPES_OPT(__x, __dim_x, __y, __dim_y, __scale_y) TORCH_CHECK((__x).device().is_meta() || (__x).size(__dim_x) == (__y).size(__dim_y) * __scale_y, #__x " and " #__y " have incompatible shapes")
#define TORCH_CHECK_SHAPE_MOD(__x, __dim_x, __mod) TORCH_CHECK((__x).size(__dim_x) % __mod == 0, #__x ".shape[" STRINGIFY(__dim_x) "] must be a multiple of " STRINGIFY(__mod))

#define TORCH_CHECK_DEVICE_INDEX(__index) \
do { \
    TORCH_CHECK(__index >= 0, "no device index"); \
    TORCH_CHECK(__index < CUDA_MAX_DEVICES, "invalid device index"); \
} while(0)

#define TORCH_CHECK_QUANT(__w, __w_scales, __w_zeros, __seq_g_idx, __x_map) \
do { \
    TORCH_CHECK_DTYPE(__w, kInt); \
    TORCH_CHECK_DTYPE(__w_scales, kHalf); \
    TORCH_CHECK_DTYPE(__w_zeros, kInt); \
    TORCH_CHECK_DTYPE_OPT(__seq_g_idx, kShort); \
    TORCH_CHECK_DTYPE_OPT(__x_map, kInt); \
    TORCH_CHECK_SHAPES_OPT(__seq_g_idx, 0, __w, 0, 2 * 8); \
    TORCH_CHECK_SHAPES_OPT(__x_map, 0, __w, 0, 8); \
} while(0)

int get_groupsize(torch::Tensor w, torch::Tensor w_zeros)
{
    int groupsize = w.size(0) * 8 / w_zeros.size(0);
    TORCH_CHECK(groupsize * w_zeros.size(0) == w.size(0) * 8, "w.shape[-2] must be a multiple of zeros.shape[-2]")
    return groupsize;
}

// Tuning parameters

ExLlamaTuning tuningParams;

void set_tuning_params
(
    int matmul_recons_thd,
    bool matmul_fused_remap,
    bool matmul_no_half2
)
{
    tuningParams.matmul_recons_thd = matmul_recons_thd;
    tuningParams.matmul_fused_remap = matmul_fused_remap;
    tuningParams.matmul_no_half2 = matmul_no_half2;
}

// Release all unmanaged objects allocated by the extension

void cleanup()
{
    cleanup_buffers_cuda();
    g_q4_free_matrices();
}

// Prepare buffers for forward pass

void prepare_buffers
(
    torch::Device device,
    torch::Tensor temp_state,
    torch::Tensor temp_dq
)
{
    int device_index = device.index();
    TORCH_CHECK_DEVICE_INDEX(device_index);
    const at::cuda::OptionalCUDAGuard device_guard(device);

    prepare_buffers_cuda
    (
        device_index,
        (half*) temp_state.data_ptr(),
        (half*) temp_dq.data_ptr()
    );
}

// Create Q4Matrix, return handle

uintptr_t make_q4
(
    torch::Tensor qweight,
    torch::Tensor qzeros,
    torch::Tensor scales,
    torch::Tensor g_idx,
    int device
)
{
    TORCH_CHECK_DTYPE(qweight, kInt);
    TORCH_CHECK_DTYPE(qzeros, kInt);
    TORCH_CHECK_DTYPE(scales, kHalf);
    TORCH_CHECK_DTYPE_OPT(g_idx, kInt);
    TORCH_CHECK_SHAPES(qweight, 1, qzeros, 1, 8);
    TORCH_CHECK_SHAPES(scales, 1, qweight, 1, 1);
    TORCH_CHECK_SHAPES(qzeros, 0, scales, 0, 1);

    int width = qweight.size(1);
    int height = qweight.size(0) * 8;
    int groups = qzeros.size(0);

    Q4Matrix* m = new Q4Matrix
    (
        height,
        width,
        groups,
        (uint32_t*) qweight.data_ptr(),
        (uint32_t*) qzeros.data_ptr(),
        (half*) scales.data_ptr(),
        g_idx.device().is_meta() ? NULL : (uint32_t*) g_idx.data_ptr(),
        device
    );

    g_q4_keep_matrix(m);
    return reinterpret_cast<uintptr_t> (m);
}

// Matmul half @ quant -> half

void q4_matmul
(
    torch::Tensor x,
    uintptr_t w,
    torch::Tensor out
)
{
    Q4Matrix* wm = reinterpret_cast<Q4Matrix*> (w);

    TORCH_CHECK_DTYPE(x, kHalf);
    TORCH_CHECK_DTYPE(out, kHalf);
    TORCH_CHECK_SHAPES(x, 0, out, 0, 1);
    TORCH_CHECK(wm->height == x.size(-1), "x and w have incompatible shapes")

    const at::cuda::OptionalCUDAGuard device_guard(device_of(x));

    int x_height = x.size(0);
    const cudaStream_t stream = at::cuda::getCurrentCUDAStream();

    if (tuningParams.matmul_recons_thd == 0 || x_height < tuningParams.matmul_recons_thd)
    {
        q4_matmul_cuda
        (
            &tuningParams,
            (half*) x.data_ptr(),
            x_height,
            wm,
            (half*) out.data_ptr(),
            false,
            stream
        );
    }
    else
    {
        q4_matmul_recons_cuda
        (
            &tuningParams,
            (half*) x.data_ptr(),
            x_height,
            wm,
            (half*) out.data_ptr(),
            false,
            at::cuda::getCurrentCUDABlasHandle()
        );
    }
}

// Remap columns in half tensor

void column_remap
(
    torch::Tensor x,
    torch::Tensor x_new,
    torch::Tensor x_map
)
{
    TORCH_CHECK_DTYPE(x, kHalf);
    TORCH_CHECK_DTYPE(x_new, kHalf);
    TORCH_CHECK_DTYPE(x_map, kInt);
    TORCH_CHECK_SHAPES(x_map, 0, x, 1, 1);

    int height = x.size(0);
    int width = x.size(1);

    const at::cuda::OptionalCUDAGuard device_guard(device_of(x));

    column_remap_cuda
    (
        (half*) x.data_ptr(),
        (half*) x_new.data_ptr(),
        height,
        width,
        (uint32_t*) x_map.data_ptr()
    );
}

PYBIND11_MODULE(TORCH_EXTENSION_NAME, m)
{
    m.def("set_tuning_params", &set_tuning_params, "set_tuning_params");
    m.def("prepare_buffers", &prepare_buffers, "prepare_buffers");
    m.def("cleanup", &cleanup, "cleanup");
    m.def("make_q4", &make_q4, "make_q4");
    m.def("q4_matmul", &q4_matmul, "q4_matmul");
}
text-generation-inference/server/exllama_kernels/exllama_kernels/exllama_ext.cpp/0
{ "file_path": "text-generation-inference/server/exllama_kernels/exllama_kernels/exllama_ext.cpp", "repo_id": "text-generation-inference", "token_count": 3279 }
313
#ifndef _qdq_2_cuh
#define _qdq_2_cuh

#include "qdq_util.cuh"
#include "../../config.h"

#if QMODE_2BIT == 1

// Permutation:
//
// ffddbb99 77553311  eeccaa88 66442200

__forceinline__ __device__ void shuffle_2bit_16
(
    uint32_t* q,
    int stride
)
{
    uint32_t qa = q[0];
    uint32_t qb = 0;

    #pragma unroll
    for (int i = 0; i < 8; i++)
    {
        uint32_t qa0 = qa & 0x03;
        uint32_t qa1 = (qa & 0x0c) >> 2;
        qa >>= 4;
        qb |= (qa1 << (i * 2 + 16));
        qb |= (qa0 << (i * 2));
    }
    q[0] = qb;
}

__forceinline__ __device__ void dequant_2bit_16
(
    const uint32_t q_0,
    half2 (&dq)[8],
    int stride
)
{
    const uint32_t c0 = 0x64006400;
    const half y4_  = __float2half_rn(1.0f /  4.0f);
    const half y16_ = __float2half_rn(1.0f / 16.0f);
    const half y64_ = __float2half_rn(1.0f / 64.0f);
    const half2 y4  = __halves2half2(y4_,  y4_);
    const half2 y16 = __halves2half2(y16_, y16_);
    const half2 y64 = __halves2half2(y64_, y64_);
    const half z1_  = __float2half_rn(-1024.0f         - 2.0f);
    const half z4_  = __float2half_rn(-1024.0f /  4.0f - 2.0f);
    const half z16_ = __float2half_rn(-1024.0f / 16.0f - 2.0f);
    const half z64_ = __float2half_rn(-1024.0f / 64.0f - 2.0f);
    const half2 z1  = __halves2half2(z1_,  z1_);
    const half2 z4  = __halves2half2(z4_,  z4_);
    const half2 z16 = __halves2half2(z16_, z16_);
    const half2 z64 = __halves2half2(z64_, z64_);

    uint32_t qa = q_0;
    half2_uint32 q0((qa & 0x00030003) | c0); // half2(q[ 0], q[ 1])      + 1024
    half2_uint32 q1((qa & 0x000c000c) | c0); // half2(q[ 2], q[ 3]) *  4 + 1024
    half2_uint32 q2((qa & 0x00300030) | c0); // half2(q[ 4], q[ 5]) * 16 + 1024
    half2_uint32 q3((qa & 0x00c000c0) | c0); // half2(q[ 6], q[ 7]) * 64 + 1024
    qa >>= 8;
    half2_uint32 q4((qa & 0x00030003) | c0); // half2(q[ 8], q[ 9])      + 1024
    half2_uint32 q5((qa & 0x000c000c) | c0); // half2(q[10], q[11]) *  4 + 1024
    half2_uint32 q6((qa & 0x00300030) | c0); // half2(q[12], q[13]) * 16 + 1024
    half2_uint32 q7((qa & 0x00c000c0) | c0); // half2(q[14], q[15]) * 64 + 1024

    dq[0] = __hadd2(q0.as_half2, z1);
    dq[1] = __hfma2(q1.as_half2, y4,  z4);
    dq[2] = __hfma2(q2.as_half2, y16, z16);
    dq[3] = __hfma2(q3.as_half2, y64, z64);
    dq[4] = __hadd2(q4.as_half2, z1);
    dq[5] = __hfma2(q5.as_half2, y4,  z4);
    dq[6] = __hfma2(q6.as_half2, y16, z16);
    dq[7] = __hfma2(q7.as_half2, y64, z64);
}

#else

__forceinline__ __device__ void shuffle_2bit_16
(
    uint32_t* q,
    int stride
)
{
}

__forceinline__ __device__ void dequant_2bit_16
(
    const uint32_t q_0,
    half2 (&dq)[8],
    int stride
)
{
    half dqh[16];
    for (int i = 0; i < 16; i++) dqh[i] = dq_ns(exb(q_0, i * 2, 0x03), 2);

    for (int i = 0; i < 8; i++) dq[i] = __halves2half2(dqh[i * 2], dqh[i * 2 + 1]);
}

#endif

#endif
text-generation-inference/server/exllamav2_kernels/exllamav2_kernels/cuda/quant/qdq_2.cuh/0
{ "file_path": "text-generation-inference/server/exllamav2_kernels/exllamav2_kernels/cuda/quant/qdq_2.cuh", "repo_id": "text-generation-inference", "token_count": 1589 }
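The `shuffle_2bit_16` kernel in qdq_2.cuh permutes the sixteen 2-bit weights inside one 32-bit word so that `dequant_2bit_16` can peel value pairs off with the `0x00030003`-style masks: after shuffling, value 2i sits at bit 2i and value 2i+1 at bit 2i+16. A quick Python model of just the bit permutation, as a sanity check of that layout claim (not the CUDA code itself):

```python
def shuffle_2bit_16(qa):
    # Model of the CUDA shuffle: before shuffling, nibble i holds values
    # (2i, 2i+1); afterwards, value 2i is at bit 2i and value 2i+1 at bit 2i+16.
    qb = 0
    for i in range(8):
        qa0 = qa & 0x03
        qa1 = (qa & 0x0C) >> 2
        qa >>= 4
        qb |= qa1 << (i * 2 + 16)
        qb |= qa0 << (i * 2)
    return qb


def dequant_order(q):
    # Read values back in the order dequant_2bit_16 consumes them: each mask
    # 0x00030003 << 2i grabs the pair (bit 2i, bit 2i+16) = (q[2i], q[2i+1]).
    out = []
    for i in range(8):
        out.append((q >> (i * 2)) & 0x03)        # low half-word: even-indexed
        out.append((q >> (i * 2 + 16)) & 0x03)   # high half-word: odd-indexed
    return out


values = [v % 4 for v in range(16)]              # 16 arbitrary 2-bit weights
packed = sum(v << (2 * i) for i, v in enumerate(values))
assert dequant_order(shuffle_2bit_16(packed)) == values
```

The round-trip assertion confirms why the dequant path can produce `half2(q[0], q[1])`, `half2(q[2], q[3])`, and so on with nothing more than masks and shifts.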
314
# Origin:   https://github.com/predibase/lorax
# Path:     lorax/server/lorax_server/adapters/config.py
# License:  Apache License Version 2.0, January 2004

from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Dict, Set, Tuple

import torch

from text_generation_server.adapters.weights import AdapterWeights


@dataclass
class ModuleMap:
    module_name: str
    module_weights: Dict[str, Tuple[torch.Tensor, str]]


@dataclass
class AdapterConfig(ABC):
    base_model_name_or_path: str

    @abstractmethod
    def map_weights_for_model(
        self,
        adapter_weights: Dict[int, AdapterWeights],
        weight_names: Tuple[str],
    ) -> Tuple[ModuleMap, Set[str]]:
        pass
text-generation-inference/server/text_generation_server/adapters/config.py/0
{ "file_path": "text-generation-inference/server/text_generation_server/adapters/config.py", "repo_id": "text-generation-inference", "token_count": 275 }
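`AdapterConfig` above is an abstract dataclass: a concrete adapter type supplies `map_weights_for_model` to translate raw adapter weights into a `ModuleMap` plus the set of module names it touches. A standalone sketch of what such a subclass could look like; `ToyAdapterConfig`, its name-based filtering, and the simplified argument types are hypothetical stand-ins (the real signature takes `Dict[int, AdapterWeights]` and uses torch tensors):

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Dict, Tuple


# Minimal stand-ins for the repo types so the sketch runs without torch.
@dataclass
class ModuleMap:
    module_name: str
    module_weights: Dict[str, Tuple[object, str]]


@dataclass
class AdapterConfig(ABC):
    base_model_name_or_path: str

    @abstractmethod
    def map_weights_for_model(self, adapter_weights, weight_names):
        ...


@dataclass
class ToyAdapterConfig(AdapterConfig):
    """Hypothetical adapter: keeps every weight whose name is requested."""

    def map_weights_for_model(self, adapter_weights, weight_names):
        kept = {
            name: (w, name)
            for name, w in adapter_weights.items()
            if name in weight_names
        }
        module_map = ModuleMap(module_name="toy", module_weights=kept)
        return module_map, set(kept)


cfg = ToyAdapterConfig(base_model_name_or_path="base/model")
mm, names = cfg.map_weights_for_model({"q_proj": [1.0], "skip": [2.0]}, ("q_proj",))
```

The split return value mirrors the ABC's contract: the `ModuleMap` carries the weights themselves, while the name set lets the caller know which target modules the adapter covers.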
315
from text_generation_server.utils.import_utils import SYSTEM

if SYSTEM == "ipex":
    from .ipex import WQLinear
elif SYSTEM == "cuda":
    from .cuda import WQLinear

__all__ = ["WQLinear"]
text-generation-inference/server/text_generation_server/layers/awq/quantize/__init__.py/0
{ "file_path": "text-generation-inference/server/text_generation_server/layers/awq/quantize/__init__.py", "repo_id": "text-generation-inference", "token_count": 71 }
316
from text_generation_server.layers.gptq import GPTQWeight
import torch

from exllama_kernels import make_q4, q4_matmul, prepare_buffers, set_tuning_params

# Dummy tensor to pass instead of g_idx since there is no way to pass "None" to a C++ extension
none_tensor = torch.empty((1, 1), device="meta")


def ext_make_q4(qweight, qzeros, scales, g_idx, device):
    """Construct Q4Matrix, return handle"""
    return make_q4(
        qweight, qzeros, scales, g_idx if g_idx is not None else none_tensor, device
    )


def ext_q4_matmul(x, q4, q4_width):
    """Matrix multiplication, returns x @ q4"""
    outshape = x.shape[:-1] + (q4_width,)
    x = x.view(-1, x.shape[-1])
    output = torch.empty((x.shape[0], q4_width), dtype=torch.float16, device=x.device)

    q4_matmul(x, q4, output)

    return output.view(outshape)


MAX_DQ = 1
MAX_INNER = 1
ACT_ORDER = False
DEVICE = None

TEMP_STATE = None
TEMP_DQ = None


def set_device(device):
    global DEVICE
    DEVICE = device


def create_exllama_buffers(max_total_tokens: int):
    global MAX_DQ, MAX_INNER, ACT_ORDER, DEVICE, TEMP_STATE, TEMP_DQ

    assert DEVICE is not None, "call set_device first"

    if not ACT_ORDER:
        max_total_tokens = 1

    # This temp_state buffer is required to reorder X in the act-order case.
    temp_state = torch.zeros(
        (max_total_tokens, MAX_INNER), dtype=torch.float16, device=DEVICE
    )
    # This temp_dq buffer is required to dequantize weights when using cuBLAS, typically for the prefill.
    temp_dq = torch.zeros((1, MAX_DQ), dtype=torch.float16, device=DEVICE)

    prepare_buffers(DEVICE, temp_state, temp_dq)

    matmul_recons_thd = 8
    matmul_fused_remap = False
    matmul_no_half2 = False
    set_tuning_params(matmul_recons_thd, matmul_fused_remap, matmul_no_half2)

    TEMP_STATE, TEMP_DQ = temp_state, temp_dq


class Ex4bitLinear(torch.nn.Module):
    """Linear layer implementation with per-group 4-bit quantization of the weights"""

    def __init__(self, weight: GPTQWeight, bias):
        super().__init__()
        global MAX_DQ, MAX_INNER, ACT_ORDER, DEVICE

        assert weight.bits == 4

        self.device = weight.qweight.device
        self.qweight = weight.qweight
        self.qzeros = weight.qzeros
        self.scales = weight.scales
        self.g_idx = weight.g_idx.cpu() if weight.g_idx is not None else None
        self.bias = bias if bias is not None else None

        if self.g_idx is not None and (
            (self.g_idx == 0).all()
            or torch.equal(
                weight.g_idx.cpu(),
                torch.tensor(
                    [i // weight.groupsize for i in range(weight.g_idx.shape[0])],
                    dtype=torch.int32,
                ),
            )
        ):
            self.empty_g_idx = True
            self.g_idx = None

        assert self.device.type == "cuda"
        assert self.device.index is not None

        self.q4 = ext_make_q4(
            self.qweight, self.qzeros, self.scales, self.g_idx, self.device.index
        )

        self.height = weight.qweight.shape[0] * 8
        self.width = weight.qweight.shape[1]

        # Infer groupsize from height of qzeros
        self.groupsize = None
        if self.qzeros.shape[0] > 1:
            self.groupsize = (self.qweight.shape[0] * 8) // (self.qzeros.shape[0])

        if self.groupsize is not None:
            assert weight.groupsize == self.groupsize

        # Handle act-order matrix
        if self.g_idx is not None:
            if self.groupsize is None:
                raise ValueError("Found group index but no groupsize. What do?")
            self.act_order = True
        else:
            self.act_order = False

        DEVICE = self.qweight.device

        MAX_DQ = max(MAX_DQ, self.qweight.numel() * 8)

        if self.act_order:
            MAX_INNER = max(MAX_INNER, self.height, self.width)
            ACT_ORDER = True

    def forward(self, x):
        out = ext_q4_matmul(x, self.q4, self.width)

        if self.bias is not None:
            out.add_(self.bias)
        return out
text-generation-inference/server/text_generation_server/layers/gptq/exllama.py/0
{ "file_path": "text-generation-inference/server/text_generation_server/layers/gptq/exllama.py", "repo_id": "text-generation-inference", "token_count": 1888 }
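`Ex4bitLinear` infers the quantization group size from tensor shapes alone: each int32 row of `qweight` packs eight 4-bit weights, so the input dimension is `qweight.shape[0] * 8`, and dividing that by the number of zero-point rows yields the group size. The same arithmetic in plain Python (the shapes below are made up for illustration):

```python
def infer_groupsize(qweight_rows, qzeros_rows):
    # Mirrors get_groupsize / Ex4bitLinear: 8 packed 4-bit values per int32 row.
    in_features = qweight_rows * 8
    groupsize = in_features // qzeros_rows
    # The original code asserts the division is exact:
    assert groupsize * qzeros_rows == in_features, "incompatible shapes"
    return groupsize


# e.g. a layer with 4096 input features quantized in groups of 128:
# qweight has 4096 / 8 = 512 int32 rows, qzeros has 4096 / 128 = 32 rows.
assert infer_groupsize(512, 32) == 128
```

This is why the module only trusts the inferred value when `qzeros.shape[0] > 1` and then cross-checks it against `weight.groupsize`: with a single zero-point row the division would just return the full input width.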
317
from typing import Optional, Protocol, runtime_checkable import torch import torch.nn as nn from loguru import logger from transformers.activations import ACT2FN from text_generation_server.layers import ( TensorParallelColumnLinear, TensorParallelRowLinear, ) from text_generation_server.layers.fp8 import HybridFP8UnquantLoader from text_generation_server.layers.marlin import GPTQMarlinWeightsLoader from text_generation_server.layers.moe.gptq_marlin import ( GPTQMarlinSparseMoELayer, can_use_marlin_moe_gemm, ) from text_generation_server.layers.moe.unquantized import UnquantizedSparseMoELayer from text_generation_server.layers.moe.fp8 import FP8SparseMoELayer from text_generation_server.utils.import_utils import SYSTEM from text_generation_server.utils.kernels import load_kernel from text_generation_server.utils.log import log_once from text_generation_server.utils.weights import ( DefaultWeightsLoader, Weights, UnquantizedWeight, ) if SYSTEM == "ipex": from .fused_moe_ipex import fused_topk, grouped_topk elif SYSTEM == "cuda": moe_kernels = load_kernel(module="moe", repo_id="kernels-community/moe") fused_topk = moe_kernels.fused_topk grouped_topk = moe_kernels.grouped_topk else: from moe_kernels.fused_moe import fused_topk, grouped_topk # NOTE: we are using a protocol here, because multiple inherance is not nice. # We need `Module`, and `Module` -> some abstract class -> some concrete # class inheritance is whacky. @runtime_checkable class MoELayer(Protocol): def __init__( self, *, n_expert_group: Optional[int], n_experts: int, prefix: str, renormalize: bool, topk: int, topk_group: Optional[int], weights: Weights, gate_proj_name: str = "gate_proj", up_proj_name: str = "up_proj", down_proj_name: str = "down_proj", hidden_act: str = "silu", scoring_func: Optional[str] = None, e_score_correction_bias: Optional[float] = None, ): ... def forward( self, x: torch.Tensor, *, gating_output: torch.Tensor ) -> torch.Tensor: ... 
class DenseMoELayer(nn.Module): """ Layer for MoE that applies *all* experts to each tokens and then weights their outputs based on the calculated routing. This layer is much slower than `SparseMoELayer` and should only be used when no fused kernels are available (e.g. for unsupported quantizers). """ def __init__( self, *, n_expert_group: Optional[int], n_experts: int, prefix: str, renormalize: bool, topk: int, topk_group: Optional[int], weights: Weights, gate_proj_name: str = "gate_proj", up_proj_name: str = "up_proj", down_proj_name: str = "down_proj", hidden_act: str = "silu", scoring_func: Optional[str] = None, e_score_correction_bias: Optional[float] = None, ): super().__init__() assert scoring_func is None, "scoring func is not handled" assert e_score_correction_bias is None, "scoring correction bias is not handled" log_once( logger.info, "No fused layers are available for this model type, using (slower) dense MoE layer", ) assert (n_expert_group is None) == ( topk_group is None ), "n_expert_group and topk_group must both be None or have some value" self.n_expert_group = n_expert_group self.n_experts = n_experts self.renormalize = renormalize self.topk = topk self.topk_group = topk_group if "gelu" in hidden_act: self.act = lambda x: torch.nn.functional.gelu( x, approximate=( "tanh" if hidden_act in ["gelu_fast", "gelu_pytorch_tanh"] else "none" ), ) elif "silu" in hidden_act: self.act = torch.nn.functional.silu else: self.act = ACT2FN[hidden_act] self.gate_proj = [ TensorParallelColumnLinear.load( None, prefix=f"{prefix}.{i}.{gate_proj_name}", weights=weights, bias=False, ) for i in range(self.n_experts) ] self.up_proj = [ TensorParallelColumnLinear.load( None, prefix=f"{prefix}.{i}.{up_proj_name}", weights=weights, bias=False, ) for i in range(self.n_experts) ] self.down_proj = [ TensorParallelRowLinear.load( None, prefix=f"{prefix}.{i}.{down_proj_name}", weights=weights, bias=False, ) for i in range(self.n_experts) ] self.process_group = 
weights.process_group def forward(self, x: torch.Tensor, *, gating_output: torch.Tensor) -> torch.Tensor: """ x: (sequence_length, model_dim) gating_output: (sequence_length, n_experts) """ # optional reshape input_shape = x.shape x = x.view(-1, input_shape[-1]) if self.n_expert_group is not None and self.topk_group is not None: topk_weights, topk_ids = grouped_topk( x, gating_output, self.topk, renormalize=self.renormalize, num_expert_group=self.n_expert_group, topk_group=self.topk_group, ) else: topk_weights, topk_ids = fused_topk( x, gating_output, self.topk, self.renormalize ) topk_weights = topk_weights.to(x.dtype) weights = torch.zeros( topk_ids.shape[0], self.n_experts, dtype=x.dtype, device=x.device ) weights.scatter_(1, topk_ids.long(), topk_weights.to(weights.dtype)) out = torch.zeros_like(x) for i in range(self.n_experts): h = self.act(self.gate_proj[i](x)) * self.up_proj[i](x) h = self.down_proj[i](h, reduce=False) out += h * weights[:, i].view(-1, 1) return out class SparseMoELayer(nn.Module): """ Layer for MoE that uses fused kernels to only apply the active experts for each token (rather than applying all experts and selecting the outputs of active experts). 
""" def __init__( self, *, n_expert_group: Optional[int], n_experts: int, prefix: str, renormalize: bool, topk: int, topk_group: Optional[int], weights: Weights, scoring_func: Optional[str] = "softmax", e_score_correction_bias: Optional[float] = None, gate_proj_name: str = "gate_proj", up_proj_name: str = "up_proj", down_proj_name: str = "down_proj", ): super().__init__() if ( isinstance(weights.loader, DefaultWeightsLoader) and isinstance(weights.loader.weight_class, UnquantizedWeight) ) or isinstance(weights.loader, HybridFP8UnquantLoader): if ( isinstance(weights.loader, HybridFP8UnquantLoader) and weights.loader.to_fp8 ): cls = FP8SparseMoELayer else: cls = UnquantizedSparseMoELayer elif isinstance( weights.loader, GPTQMarlinWeightsLoader ) and can_use_marlin_moe_gemm( quant_method=weights.loader.quant_method, quantize=weights.loader.quantize, sym=weights.loader.sym, ): cls = GPTQMarlinSparseMoELayer else: raise ValueError( f"Unsupported weights loader: {type(weights.loader)}, sparse MoE is only supported for unquantized, AWQ, and GPTQ weights" ) log_once( logger.info, "Using MoE layer wih fused gemm", ) self.moe = cls( n_expert_group=n_expert_group, n_experts=n_experts, prefix=prefix, renormalize=renormalize, topk=topk, topk_group=topk_group, weights=weights, scoring_func=scoring_func, e_score_correction_bias=e_score_correction_bias, gate_proj_name=gate_proj_name, up_proj_name=up_proj_name, down_proj_name=down_proj_name, ) def forward(self, x: torch.Tensor, *, gating_output: torch.Tensor) -> torch.Tensor: return self.moe(x, gating_output=gating_output) @staticmethod def is_supported(weights: Weights) -> bool: return ( ( isinstance(weights.loader, DefaultWeightsLoader) and isinstance(weights.loader.weight_class, UnquantizedWeight) ) or isinstance(weights.loader, HybridFP8UnquantLoader) or ( isinstance(weights.loader, GPTQMarlinWeightsLoader) and can_use_marlin_moe_gemm( quant_method=weights.loader.quant_method, quantize=weights.loader.quantize, 
sym=weights.loader.sym, ) ) )
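The dense and sparse layers above share the same routing contract: compute per-token top-k expert weights from the router logits (`gating_output`), optionally renormalize them, and use them to combine expert outputs. A minimal, dependency-free sketch of that routing step, using `torch.topk` in place of the fused `fused_topk`/`grouped_topk` kernels (the function name and shapes here are illustrative, not part of the library API):

```python
import torch


def topk_routing(gating_output: torch.Tensor, topk: int, renormalize: bool):
    """Select top-k experts per token from router logits.

    gating_output: (seq_len, n_experts) router logits.
    Returns (weights, ids), each of shape (seq_len, topk).
    """
    scores = torch.softmax(gating_output, dim=-1)
    topk_weights, topk_ids = torch.topk(scores, topk, dim=-1)
    if renormalize:
        # Make the selected expert weights sum to 1 per token.
        topk_weights = topk_weights / topk_weights.sum(dim=-1, keepdim=True)
    return topk_weights, topk_ids


logits = torch.tensor([[2.0, 0.5, -1.0, 0.0]])  # one token, four experts
w, ids = topk_routing(logits, topk=2, renormalize=True)
```

With `renormalize=True`, the two selected weights sum to 1, matching what `DenseMoELayer.forward` scatters into its per-expert weight matrix before mixing expert outputs.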
# text-generation-inference/server/text_generation_server/layers/moe/__init__.py
# coding=utf-8 # Copyright 2023, 2024 DeepSeek-AI and The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from typing import List, Optional, Tuple, Type import torch import torch.distributed from torch import nn from transformers.activations import ACT2FN from transformers.configuration_utils import PretrainedConfig from text_generation_server.layers import ( FastLinear, SpeculativeHead, TensorParallelColumnLinear, TensorParallelEmbedding, TensorParallelRowLinear, get_linear, ) from text_generation_server.layers.attention import ( Seqlen, attention, paged_attention, ) from text_generation_server.layers.attention.kv_cache import KVCache, get_kv_scales from text_generation_server.layers.layernorm import FastRMSNorm from text_generation_server.layers.moe import DenseMoELayer, MoELayer, SparseMoELayer from text_generation_server.layers.rotary import PositionRotaryEmbedding, get_mscale from text_generation_server.utils.import_utils import SYSTEM from text_generation_server.utils.weights import Weights if SYSTEM == "rocm": try: import vllm._custom_ops as ops except Exception as e: raise ImportError(f"Could not load `vllm._custom_ops`. 
Full error: {e}") class DeepseekV2Config(PretrainedConfig): def __init__( self, vocab_size=102400, hidden_size=4096, intermediate_size=11008, moe_intermediate_size=1407, num_hidden_layers=30, num_attention_heads=32, num_key_value_heads=32, n_shared_experts=2, n_routed_experts=160, ep_size=1, routed_scaling_factor=1.0, kv_lora_rank=512, q_lora_rank=1536, qk_rope_head_dim=64, v_head_dim=128, qk_nope_head_dim=128, topk_method="gready", n_group=8, topk_group=3, num_experts_per_tok=6, moe_layer_freq=1, first_k_dense_replace=0, norm_topk_prob=False, scoring_func="softmax", aux_loss_alpha=0.001, seq_aux=True, hidden_act="silu", max_position_embeddings=2048, initializer_range=0.02, rms_norm_eps=1e-6, use_cache=True, pad_token_id=None, bos_token_id=100000, eos_token_id=100001, pretraining_tp=1, tie_word_embeddings=False, rope_theta=10000.0, rope_scaling=None, attention_bias=False, attention_dropout=0.0, **kwargs, ): self.vocab_size = vocab_size self.max_position_embeddings = max_position_embeddings self.hidden_size = hidden_size self.intermediate_size = intermediate_size self.moe_intermediate_size = moe_intermediate_size self.num_hidden_layers = num_hidden_layers self.num_attention_heads = num_attention_heads self.n_shared_experts = n_shared_experts self.n_routed_experts = n_routed_experts self.ep_size = ep_size self.routed_scaling_factor = routed_scaling_factor self.kv_lora_rank = kv_lora_rank self.q_lora_rank = q_lora_rank self.qk_rope_head_dim = qk_rope_head_dim self.v_head_dim = v_head_dim self.qk_nope_head_dim = qk_nope_head_dim self.topk_method = topk_method self.n_group = n_group self.topk_group = topk_group self.num_experts_per_tok = num_experts_per_tok self.moe_layer_freq = moe_layer_freq self.first_k_dense_replace = first_k_dense_replace self.norm_topk_prob = norm_topk_prob self.scoring_func = scoring_func self.aux_loss_alpha = aux_loss_alpha self.seq_aux = seq_aux # for backward compatibility if num_key_value_heads is None: num_key_value_heads = 
num_attention_heads self.num_key_value_heads = num_key_value_heads self.hidden_act = hidden_act self.initializer_range = initializer_range self.rms_norm_eps = rms_norm_eps self.pretraining_tp = pretraining_tp self.use_cache = use_cache self.rope_theta = rope_theta self.rope_scaling = rope_scaling self.attention_bias = attention_bias self.attention_dropout = attention_dropout tie_word_embeddings = kwargs.pop("tie_word_embeddings", False) if tie_word_embeddings: raise ValueError( "tie_word_embeddings is not supported for Deepseek V2 models." ) if ep_size != 1: raise ValueError( f"Currently only ep_size == 1 is supported for Deepseek V2 models, was {ep_size}" ) super().__init__( pad_token_id=pad_token_id, bos_token_id=bos_token_id, eos_token_id=eos_token_id, tie_word_embeddings=tie_word_embeddings, **kwargs, ) class DeepseekV2Attention(torch.nn.Module): def __init__( self, prefix: str, config, weights: Weights, ): super().__init__() self.num_heads = config.num_attention_heads self.hidden_size = config.hidden_size self.kv_lora_rank = config.kv_lora_rank self.q_lora_rank = config.q_lora_rank self.qk_nope_head_dim = config.qk_nope_head_dim self.qk_rope_head_dim = config.qk_rope_head_dim self.head_size = config.qk_nope_head_dim + config.qk_rope_head_dim self.value_head_size = config.v_head_dim self.head_pad_size = max(self.head_size, self.value_head_size) self.rotary_emb = PositionRotaryEmbedding.static( config=config, dim=self.qk_rope_head_dim, base=config.rope_theta, device=weights.device, ) mscale = get_mscale( self.rotary_emb.scaling_factor, self.rotary_emb.mscale_all_dim ) self.softmax_scale = self.head_size**-0.5 * mscale * mscale if self.num_heads % weights.process_group.size() != 0: raise ValueError( f"`num_heads` must be divisible by `num_shards` (got `num_heads`: {self.num_heads} " f"and `num_shards`: {weights.process_group.size()}" ) self.num_heads = self.num_heads // weights.process_group.size() self.num_key_value_heads = ( config.num_key_value_heads // 
weights.process_group.size() ) if self.q_lora_rank is None: self.q_proj = TensorParallelColumnLinear.load( config, prefix=f"{prefix}.q_proj", weights=weights, bias=config.attention_bias, ) else: self.q_a_proj = get_linear( weight=weights.get_weights(f"{prefix}.q_a_proj"), bias=( weights.get_tensor(f"{prefix}.q_a_proj.bias") if config.attention_bias else None ), ) self.q_a_layernorm = FastRMSNorm.load( prefix=f"{prefix}.q_a_layernorm", weights=weights, eps=config.rms_norm_eps, ) self.q_b_proj = TensorParallelColumnLinear.load( config, prefix=f"{prefix}.q_b_proj", weights=weights, bias=config.attention_bias, ) self.kv_a_proj_with_mqa = get_linear( weight=weights.get_weights(f"{prefix}.kv_a_proj_with_mqa"), bias=( weights.get_tensor(f"{prefix}.kv_a_proj_with_mqa.bias") if config.attention_bias else None ), ) self.kv_scales = get_kv_scales(weights, f"{prefix}") self.kv_a_layernorm = FastRMSNorm.load( prefix=f"{prefix}.kv_a_layernorm", weights=weights, eps=config.rms_norm_eps ) self.kv_b_proj = TensorParallelColumnLinear.load( config, prefix=f"{prefix}.kv_b_proj", weights=weights, bias=config.attention_bias, ) self.o_proj = TensorParallelRowLinear.load( config, prefix=f"{prefix}.o_proj", weights=weights, bias=False, ) self.num_groups = self.num_heads // self.num_key_value_heads self.kv_head_mapping = torch.arange( 0, self.num_key_value_heads, dtype=torch.int32, device=weights.device ).repeat_interleave(self.num_groups) def forward( self, hidden_states: torch.Tensor, cos: torch.Tensor, sin: torch.Tensor, cu_seqlen_prefill: torch.Tensor, kv_cache: KVCache, block_tables: torch.Tensor, slots: torch.Tensor, seqlen: Seqlen, max_s: int, ): if self.q_lora_rank is None: query = self.q_proj(hidden_states) else: query = self.q_b_proj(self.q_a_layernorm(self.q_a_proj(hidden_states))[0]) query = query.view(-1, self.num_heads, self.head_size) _, query_pe = torch.split( query, [self.qk_nope_head_dim, self.qk_rope_head_dim], dim=-1 ) compressed_kv = 
self.kv_a_proj_with_mqa(hidden_states) compressed_kv, key_pe = torch.split( compressed_kv, [self.kv_lora_rank, self.qk_rope_head_dim], dim=-1 ) key_pe = key_pe.view(-1, 1, self.qk_rope_head_dim) kv = self.kv_b_proj(self.kv_a_layernorm(compressed_kv.contiguous())[0]).view( -1, self.num_key_value_heads, self.qk_nope_head_dim + self.value_head_size ) key_nope, value = torch.split( kv, [self.qk_nope_head_dim, self.value_head_size], dim=-1 ) batch_size, heads, head_dim = query_pe.shape query_pe = ( query_pe.view(batch_size, heads, head_dim // 2, 2) .transpose(2, 3) .reshape(batch_size, heads, head_dim) ) batch_size, heads, head_dim = key_pe.shape key_pe = ( key_pe.view(batch_size, heads, head_dim // 2, 2) .transpose(2, 3) .reshape(batch_size, heads, head_dim) ) self.rotary_emb(query_pe, key_pe, cos, sin) query[..., self.qk_nope_head_dim :] = query_pe key = torch.empty_like(query) key[..., : self.qk_nope_head_dim] = key_nope key[..., self.qk_nope_head_dim :] = key_pe # We need to pad the heads because Flash Attention does not support # qk and v with different head sizes. query = torch.nn.functional.pad( query, (0, self.head_pad_size - self.head_size), value=0 ) key = torch.nn.functional.pad( key, (0, self.head_pad_size - self.head_size), value=0 ) value = torch.nn.functional.pad( value, (0, self.head_pad_size - self.value_head_size), value=0 ) kv_cache.store( key=key, value=value, slots=slots, kv_scales=self.kv_scales, ) # Prefill if cu_seqlen_prefill is not None: # flash attention attn_output = attention( query=query, key=key, value=value, kv_cache=kv_cache, kv_scales=self.kv_scales, seqlen=seqlen, block_tables=block_tables, softmax_scale=self.softmax_scale, ) # Decode else: attn_output = paged_attention( query, kv_cache, self.kv_head_mapping, self.softmax_scale, block_tables, seqlen, max_s, kv_scales=self.kv_scales, ) # Remove padding. 
attn_output = attn_output[..., : self.value_head_size] return self.o_proj( attn_output.reshape(-1, self.num_heads * self.value_head_size) ) class DeepseekV2MLP(nn.Module): def __init__(self, prefix: str, config, weights, intermediate_size: int): super().__init__() self.hidden_act = config.hidden_act if self.hidden_act != "silu": # Bail out because MoE only supports silu. raise NotImplementedError( "Currently only `silu` is supported as an activation for Deepseek V2." ) self.act = ACT2FN[self.hidden_act] self.gate_up_proj = TensorParallelColumnLinear.load_multi( config, prefixes=[f"{prefix}.gate_proj", f"{prefix}.up_proj"], weights=weights, dim=0, bias=False, ) self.down_proj = TensorParallelRowLinear.load( config, prefix=f"{prefix}.down_proj", weights=weights, bias=False, ) self.intermediate_size = intermediate_size // weights.process_group.size() # TODO: This is a hotfix to be removed & properly refactored. self.quantize = config.quantize def forward(self, hidden_states: torch.Tensor, reduce: bool = True): if ( SYSTEM == "rocm" and self.hidden_act == "silu" and hidden_states.dtype == torch.float16 and hidden_states.shape[0] == 1 and not self.quantize ): out = torch.empty( hidden_states.shape[0], self.intermediate_size, dtype=hidden_states.dtype, device="cuda", ) ops.LLMM_Silu(self.gate_up_proj.linear.weight, hidden_states, out, 8) return self.down_proj(out, reduce=reduce) else: gate_up_states = self.gate_up_proj(hidden_states) gate_up_states = gate_up_states.view(-1, 2, self.intermediate_size) return self.down_proj( self.act(gate_up_states[:, 0]) * gate_up_states[:, 1], reduce=reduce ) class DeepseekV2MoE(nn.Module): def __init__( self, prefix, config: DeepseekV2Config, moe_layer_cls: Type[MoELayer], weights, ): super().__init__() self.hidden_dim = config.hidden_size self.moe_intermediate_size = ( config.moe_intermediate_size // weights.process_group.size() ) self.routed_scaling_factor = config.routed_scaling_factor # Gating self.gate = FastLinear.load(config, 
f"{prefix}.gate", weights, bias=False) self.moe_layer = moe_layer_cls( prefix=f"{prefix}.experts", n_experts=config.n_routed_experts, n_expert_group=config.n_group, renormalize=config.norm_topk_prob, topk=config.num_experts_per_tok, topk_group=config.topk_group, weights=weights, ) assert isinstance(self.moe_layer, MoELayer) if config.n_shared_experts is not None: self.shared_experts = DeepseekV2MLP( prefix=f"{prefix}.shared_experts", config=config, weights=weights, intermediate_size=config.moe_intermediate_size * config.n_shared_experts, ) else: self.shared_experts = None self.process_group = weights.process_group def forward(self, x: torch.Tensor) -> torch.Tensor: if self.shared_experts is not None: shared_output = self.shared_experts(x, reduce=False) else: shared_output = None router_logits = self.gate(x) out = self.moe_layer(x, gating_output=router_logits) if shared_output is not None: out = out + shared_output # Reduce sum if self.process_group.size() > 1: torch.distributed.all_reduce(out, group=self.process_group) return out.view(*x.shape) class DeepseekV2Layer(nn.Module): def __init__(self, prefix, layer_id, config, weights): super().__init__() prefix = f"{prefix}.layers.{layer_id}" self.self_attn = DeepseekV2Attention( prefix=f"{prefix}.self_attn", config=config, weights=weights, ) if ( config.n_routed_experts is not None and layer_id >= config.first_k_dense_replace and layer_id % config.moe_layer_freq == 0 ): moe_layer_cls = ( SparseMoELayer if SparseMoELayer.is_supported(weights) else DenseMoELayer ) self.mlp = DeepseekV2MoE(f"{prefix}.mlp", config, moe_layer_cls, weights) else: self.mlp = DeepseekV2MLP( prefix=f"{prefix}.mlp", config=config, weights=weights, intermediate_size=config.intermediate_size, ) self.input_layernorm = FastRMSNorm.load( prefix=f"{prefix}.input_layernorm", weights=weights, eps=config.rms_norm_eps ) self.post_attention_layernorm = FastRMSNorm.load( prefix=f"{prefix}.post_attention_layernorm", weights=weights, eps=config.rms_norm_eps, 
) def forward( self, hidden_states: torch.Tensor, residual: torch.Tensor, cos: torch.Tensor, sin: torch.Tensor, cu_seqlen_prefill: torch.Tensor, kv_cache, block_tables: torch.Tensor, slots: torch.Tensor, seqlen: Seqlen, max_s: int, ): normed_hidden_states, residual = self.input_layernorm(hidden_states, residual) # Self Attention attn_output = self.self_attn( normed_hidden_states, cos, sin, cu_seqlen_prefill, kv_cache, block_tables, slots, seqlen, max_s, ) # faster post attention rms norm normed_attn_res_output, residual = self.post_attention_layernorm( attn_output, residual ) output = self.mlp(normed_attn_res_output) return output, residual class DeepseekV2Model(torch.nn.Module): def __init__(self, prefix: str, config, weights: Weights): super().__init__() self.embed_tokens = TensorParallelEmbedding( prefix=f"{prefix}.embed_tokens", weights=weights ) self.layers = nn.ModuleList( [ DeepseekV2Layer( prefix, layer_id, config, weights, ) for layer_id in range(config.num_hidden_layers) ] ) self.norm = FastRMSNorm.load( prefix=f"{prefix}.norm", weights=weights, eps=config.rms_norm_eps ) self.head_size = self.layers[0].self_attn.head_size self.num_heads = self.layers[0].self_attn.num_heads self.num_key_value_heads = self.layers[0].self_attn.num_key_value_heads def forward( self, input_ids: torch.Tensor, position_ids: torch.Tensor, cu_seqlen_prefill: Optional[torch.Tensor], kv_cache: List[Tuple[torch.Tensor, torch.Tensor]], block_tables: torch.Tensor, slots: torch.Tensor, seqlen: Seqlen, max_s: int, ) -> torch.Tensor: hidden_states = self.embed_tokens(input_ids) # Get rotary cos and sin for this forward # Avoid to index in each layer cos, sin = self.layers[0].self_attn.rotary_emb.get_cos_sin( position_ids, max_s, hidden_states.dtype ) residual = None for i, layer in enumerate(self.layers): hidden_states, residual = layer( hidden_states, residual, cos, sin, cu_seqlen_prefill, kv_cache[i], block_tables, slots, seqlen, max_s, ) hidden_states, _ = self.norm(hidden_states, 
residual) return hidden_states class FlashDeepseekV2ForCausalLM(torch.nn.Module): def __init__(self, prefix: str, config, weights: Weights): super().__init__() self.model = DeepseekV2Model( "model" if not prefix else f"{prefix}.model", config, weights ) self.lm_head = SpeculativeHead.load( config, prefix="lm_head" if not prefix else f"{prefix}.lm_head", weights=weights, ) def forward( self, input_ids: torch.Tensor, position_ids: torch.Tensor, cu_seqlen_prefill: Optional[torch.Tensor], kv_cache: List[Tuple[torch.Tensor, torch.Tensor]], block_tables: torch.Tensor, slots: torch.Tensor, seqlen: Seqlen, max_s: int, prefill_cache_indices: Optional[torch.Tensor], lm_head_indices: Optional[torch.Tensor] = None, adapter_data: Optional[torch.Tensor] = None, ) -> Tuple[torch.Tensor, Optional[torch.Tensor]]: hidden_states = self.model( input_ids, position_ids, cu_seqlen_prefill, kv_cache, block_tables, slots, seqlen, max_s, ) if lm_head_indices is not None: hidden_states = hidden_states[lm_head_indices] logits, speculative_logits = self.lm_head(hidden_states) return logits, speculative_logits
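`DeepseekV2Attention` pads query/key (head dim `qk_nope_head_dim + qk_rope_head_dim`) and value (head dim `v_head_dim`) up to a common `head_pad_size` before calling the attention kernel, because Flash Attention requires q/k/v to share one head size; the padding is sliced off afterwards. A toy round-trip of that pad/unpad step, using the default config dimensions (no real weights involved):

```python
import torch
import torch.nn.functional as F

qk_head_dim = 128 + 64      # qk_nope_head_dim + qk_rope_head_dim
v_head_dim = 128            # v_head_dim from the default config
head_pad_size = max(qk_head_dim, v_head_dim)  # 192

value = torch.randn(4, 8, v_head_dim)  # (tokens, heads, v_head_dim)

# Right-pad the last dimension with zeros so q, k and v share a head size.
value_padded = F.pad(value, (0, head_pad_size - v_head_dim), value=0)

# After attention, the padding is removed again, exactly recovering the
# original value width (mirrors `attn_output[..., : self.value_head_size]`).
attn_output = value_padded[..., :v_head_dim]
```

The zero padding is safe because padded value columns contribute nothing to the attention-weighted sum, and the padded key columns add zero to every dot product.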
# text-generation-inference/server/text_generation_server/models/custom_modeling/flash_deepseek_v2_modeling.py
import torch import torch.distributed from torch import nn from transformers.activations import ACT2FN from typing import Optional, List, Tuple from text_generation_server.layers.attention import ( paged_attention, attention, Seqlen, ) from text_generation_server.layers import ( TensorParallelRowLinear, TensorParallelColumnLinear, SpeculativeHead, TensorParallelEmbedding, get_linear, ) from text_generation_server.layers.attention.kv_cache import get_kv_scales from text_generation_server.layers.gptq import GPTQWeightsLoader from text_generation_server.layers.layernorm import ( FastLayerNorm, ) def load_multi_mqa( config, prefix: str, weights, bias: bool, head_size, num_heads, hidden_size ): if config.quantize == "gptq": return _load_multi_mqa_gptq( config, prefix, weights, bias, head_size, num_heads, hidden_size ) elif config.quantize == "marlin": raise RuntimeError( "santacoder models with marlin quantization are not yet supported" ) else: return _load_multi_mqa( config, prefix, weights, bias, head_size, num_heads, hidden_size ) def _load_multi_mqa_gptq( config, prefix: str, weights, bias: bool, head_size, num_heads, hidden_size ): from text_generation_server.layers.gptq import GPTQWeight if any("c_attn" in k for k in weights.routing.keys()) and not config.transpose: world_size = weights.process_group.size() rank = weights.process_group.rank() slice_ = weights._get_slice(f"{prefix}.c_attn.qweight") shape = slice_.get_shape() block_size = (shape[1] - 2 * head_size) // world_size start = rank * block_size stop = (rank + 1) * block_size assert (shape[1] - 2 * head_size) % world_size == 0 q_tensor = slice_[:, start:stop] kv_tensor = slice_[:, -2 * head_size :] qweight = torch.cat([q_tensor, kv_tensor], dim=1) qweight = qweight.to(device=weights.device) slice_ = weights._get_slice(f"{prefix}.c_attn.scales") shape = slice_.get_shape() block_size = (shape[1] - 2 * head_size) // world_size start = rank * block_size stop = (rank + 1) * block_size assert (shape[1] - 2 * 
head_size) % world_size == 0
        q_tensor = slice_[:, start:stop]
        kv_tensor = slice_[:, -2 * head_size :]
        scales = torch.cat([q_tensor, kv_tensor], dim=1)
        scales = scales.to(device=weights.device)

        slice_ = weights._get_slice(f"{prefix}.c_attn.qzeros")
        shape = slice_.get_shape()
        block_size = (shape[1] - (2 * head_size) * 4 // 32) // world_size
        start = rank * block_size
        stop = (rank + 1) * block_size
        assert 2 * head_size % (32 // 4) == 0
        q_tensor = slice_[:, start:stop]
        kv_tensor = slice_[:, -2 * head_size * 4 // 32 :]
        qzeros = torch.cat([q_tensor, kv_tensor], dim=1)
        qzeros = qzeros.to(device=weights.device)

        loader = weights.weights_loader
        assert isinstance(loader, GPTQWeightsLoader)
        loader._get_gptq_params(weights)
        if loader.quant_method == "gptq":
            g_idx = weights.get_tensor(f"{prefix}.c_attn.g_idx")
            g_idx = g_idx.to(device=weights.device)
        elif loader.quant_method == "awq":
            g_idx = None
            from text_generation_server.layers.awq.conversion_utils import (
                fast_awq_to_gptq,
            )

            qweight, qzeros = fast_awq_to_gptq(qweight, qzeros)

        from text_generation_server.layers.gptq import HAS_EXLLAMA

        weight = GPTQWeight(
            qweight=qweight,
            qzeros=qzeros,
            scales=scales,
            g_idx=g_idx,
            bits=loader.bits,
            groupsize=loader.groupsize,
            use_awq_kernel=loader.quantize == "awq",
            use_exllama=HAS_EXLLAMA,
        )

        if bias:
            slice_ = weights._get_slice(f"{prefix}.c_attn.bias")
            shape = slice_.get_shape()
            block_size = (shape[0] - 2 * head_size) // world_size
            assert (shape[0] - 2 * head_size) % world_size == 0
            start = rank * block_size
            stop = (rank + 1) * block_size
            q_tensor = slice_[start:stop]
            kv_tensor = slice_[-2 * head_size :]
            bias = torch.cat([q_tensor, kv_tensor], dim=0)
            bias = bias.to(device=weights.device)

        return TensorParallelColumnLinear(get_linear(weight, bias))
    else:
        raise NotImplementedError("Gptq loading with santacoder is not implemented")


def _load_multi_mqa(
    config, prefix: str, weights, bias: bool, head_size, num_heads, hidden_size
):

    if any("c_attn" in k for k in
weights.routing.keys()): slice_ = weights._get_slice(f"{prefix}.c_attn.weight") shape = slice_.get_shape() world_size = weights.process_group.size() rank = weights.process_group.rank() if config.transpose: block_size = (shape[1] - 2 * head_size) // world_size start = rank * block_size stop = (rank + 1) * block_size assert (shape[1] - 2 * head_size) % world_size == 0 q_tensor = slice_[:, start:stop] kv_tensor = slice_[:, -2 * head_size :] weight = torch.cat([q_tensor, kv_tensor], dim=1).T else: block_size = (shape[0] - 2 * head_size) // world_size start = rank * block_size stop = (rank + 1) * block_size assert (shape[0] - 2 * head_size) % world_size == 0 q_tensor = slice_[start:stop] kv_tensor = slice_[-2 * head_size :] weight = torch.cat([q_tensor, kv_tensor], dim=0) if bias: slice_ = weights._get_slice(f"{prefix}.c_attn.bias") shape = slice_.get_shape() block_size = (shape[0] - 2 * head_size) // world_size assert (shape[0] - 2 * head_size) % world_size == 0 start = rank * block_size stop = (rank + 1) * block_size q_tensor = slice_[start:stop] kv_tensor = slice_[-2 * head_size :] bias = torch.cat([q_tensor, kv_tensor], dim=0) else: if config.transpose: w = [ weights.get_sharded(f"{prefix}.q_attn.weight", dim=1).T, weights.get_tensor(f"{prefix}.kv_attn.weight").T, ] weight = torch.cat(w, dim=0) else: w = [ weights.get_sharded(f"{prefix}.q_attn.weight", dim=0), weights.get_tensor(f"{prefix}.kv_attn.weight"), ] weight = torch.cat(w, dim=1) if bias: b = [ weights.get_sharded(f"{prefix}.q_attn.bias", dim=0), weights.get_tensor(f"{prefix}.kv_attn.bias"), ] bias = torch.cat(b, dim=0) else: bias = None weight = weight.to(dtype=weights.dtype).to(device=weights.device) assert list(weight.shape) == [ (num_heads + 2) * head_size, hidden_size, ], f"{weight.shape} != {[(num_heads + 2) * head_size, hidden_size]}" if bias is not None: bias = bias.to(dtype=weights.dtype).to(device=weights.device) assert list(bias.shape) == [ (num_heads + 2) * head_size ], f"{weight.shape} != 
{[(num_heads + 2) * head_size]}" return TensorParallelColumnLinear(get_linear(weight, bias)) def load_col(config, prefix: str, weights, bias: bool): if config.transpose: weight = weights.get_sharded(f"{prefix}.weight", dim=1).T else: weight = weights.get_multi_weights_col([prefix], dim=0) if bias: bias = weights.get_sharded(f"{prefix}.bias", dim=0) else: bias = None return TensorParallelColumnLinear(get_linear(weight, bias)) def load_row(config, prefix: str, weights, bias: bool): if config.transpose: weight = weights.get_sharded(f"{prefix}.weight", dim=0).T else: weight = weights.get_weights_row(prefix) if bias and weights.process_group.rank() == 0: # Rank is only on the first rank process bias = weights.get_tensor(f"{prefix}.bias") else: bias = None return TensorParallelRowLinear( get_linear(weight, bias), process_group=weights.process_group ) class FlashMQAttention(torch.nn.Module): def __init__(self, prefix, config, weights): super().__init__() num_heads = config.num_attention_heads hidden_size = config.hidden_size self.num_heads = num_heads self.hidden_size = hidden_size self.head_size = hidden_size // num_heads if self.num_heads % weights.process_group.size() != 0: raise ValueError( f"`num_heads` must be divisible by `num_shards` (got `num_heads`: {self.num_heads} " f"and `num_shards`: {weights.process_group.size()}" ) self.num_heads = self.num_heads // weights.process_group.size() self.softmax_scale = self.head_size ** (-0.5) self.c_attn = load_multi_mqa( config, prefix=prefix, weights=weights, bias=True, head_size=self.head_size, hidden_size=hidden_size, num_heads=self.num_heads, ) self.c_proj = load_row( config, prefix=f"{prefix}.c_proj", weights=weights, bias=True ) self.kv_scales = get_kv_scales(weights, f"{prefix}") self.kv_head_mapping = torch.zeros( self.num_heads, dtype=torch.int32, device=weights.device ) def forward( self, hidden_states, cu_seqlen_prefill, kv_cache, block_tables, slots, seqlen, max_s, ): qkv = self.c_attn(hidden_states) # Split 
query from key_value query, key_value = qkv.split( [self.head_size * self.num_heads, 2 * self.head_size], dim=1 ) # Prepare query and key_value for indexing query = query.view(-1, self.num_heads, self.head_size) key_value = key_value.view(-1, 2, 1, self.head_size) kv_cache.store( key=key_value[:, 0], value=key_value[:, 1], slots=slots, kv_scales=self.kv_scales, ) # Prefill if cu_seqlen_prefill is not None: # flash attention attn_output = attention( query=query, key=key_value[:, 0], value=key_value[:, 1], kv_cache=kv_cache, kv_scales=self.kv_scales, seqlen=seqlen, block_tables=block_tables, softmax_scale=self.softmax_scale, ) # Decode else: attn_output = paged_attention( query, kv_cache, self.kv_head_mapping, self.softmax_scale, block_tables, seqlen, max_s, kv_scales=self.kv_scales, ) return self.c_proj(attn_output.view(-1, self.num_heads * self.head_size)) class MLP(nn.Module): def __init__(self, prefix, config, weights): super().__init__() act = config.activation_function self.act = ( ACT2FN[act] if "gelu" not in act else lambda x: torch.nn.functional.gelu( x, approximate=( "tanh" if act in ["gelu_fast", "gelu_pytorch_tanh"] else "none" ), ) ) self.c_fc = load_col( config, prefix=f"{prefix}.c_fc", weights=weights, bias=True ) self.c_proj = load_row( config, prefix=f"{prefix}.c_proj", weights=weights, bias=True ) def forward(self, hidden_states): hidden_states = self.c_fc(hidden_states) hidden_states = self.act(hidden_states) hidden_states = self.c_proj(hidden_states) return hidden_states class Block(nn.Module): def __init__(self, prefix: str, layer_id, config, weights): super().__init__() prefix = f"{prefix}.h.{layer_id}" self.ln_1 = FastLayerNorm.load( prefix=f"{prefix}.ln_1", weights=weights, eps=config.layer_norm_epsilon ) self.ln_2 = FastLayerNorm.load( prefix=f"{prefix}.ln_2", weights=weights, eps=config.layer_norm_epsilon ) self.self_attn = FlashMQAttention( prefix=f"{prefix}.attn", config=config, weights=weights, ) self.mlp = MLP( prefix=f"{prefix}.mlp", 
config=config, weights=weights, ) def forward( self, hidden_states, residual, cu_seqlen_prefill, kv_cache, block_tables, slots, seqlen, max_s, ): hidden_states, residual = self.ln_1(hidden_states, residual) hidden_states = self.self_attn( hidden_states, cu_seqlen_prefill, kv_cache, block_tables, slots, seqlen, max_s, ) hidden_states, residual = self.ln_2(hidden_states, residual) mlp_output = self.mlp(hidden_states) return mlp_output, residual class FlashSantacoderModel(nn.Module): def __init__(self, prefix: str, config, weights): super().__init__() self.config = config self.process_group = weights.process_group self.wte = TensorParallelEmbedding( prefix=f"{prefix}.wte", weights=weights, reduce=False, ) self.wpe = TensorParallelEmbedding( prefix=f"{prefix}.wpe", weights=weights, reduce=False, ) self.layers = nn.ModuleList( [ Block( prefix, layer_id, config, weights, ) for layer_id in range(config.num_hidden_layers) ] ) self.ln_f = FastLayerNorm.load( prefix="transformer.ln_f", weights=weights, eps=config.layer_norm_epsilon ) self.head_size = self.layers[0].self_attn.head_size self.num_heads = self.layers[0].self_attn.num_heads def forward( self, input_ids: torch.Tensor, position_ids: torch.Tensor, cu_seqlen_prefill: Optional[torch.Tensor], kv_cache: List[Tuple[torch.Tensor, torch.Tensor]], block_tables: torch.Tensor, slots: torch.Tensor, seqlen: Seqlen, max_s: int, ) -> torch.Tensor: hidden_states = self.wte(input_ids) + self.wpe(position_ids) if self.process_group.size() > 1: torch.distributed.all_reduce(hidden_states, group=self.process_group) residual = None for i, layer in enumerate(self.layers): hidden_states, residual = layer( hidden_states, residual, cu_seqlen_prefill, kv_cache[i], block_tables, slots, seqlen, max_s, ) hidden_states, _ = self.ln_f(hidden_states, residual) return hidden_states class FlashSantacoderForCausalLM(nn.Module): def __init__(self, prefix, config, weights): super().__init__() if not prefix: prefix = "transformer" else: prefix = 
f"{prefix}.transformer" config.transpose = config.architectures[0].startswith("GPT2") self.model = FlashSantacoderModel(prefix, config, weights) self.lm_head = SpeculativeHead.load( config, prefix=f"{prefix}.wte", weights=weights ) def forward( self, input_ids: torch.Tensor, position_ids: torch.Tensor, cu_seqlen_prefill: Optional[torch.Tensor], kv_cache: List[Tuple[torch.Tensor, torch.Tensor]], block_tables: torch.Tensor, slots: torch.Tensor, seqlen: Seqlen, max_s: int, prefill_cache_indices: Optional[torch.Tensor], lm_head_indices: Optional[torch.Tensor] = None, adapter_data: Optional[torch.Tensor] = None, ) -> torch.Tensor: hidden_states = self.model( input_ids, position_ids, cu_seqlen_prefill, kv_cache, block_tables, slots, seqlen, max_s, ) if lm_head_indices is not None: hidden_states = hidden_states[lm_head_indices] logits = self.lm_head(hidden_states) return logits
text-generation-inference/server/text_generation_server/models/custom_modeling/flash_santacoder_modeling.py
# coding=utf-8 # Copyright 2024 the HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """PyTorch Mllama model.""" from typing import Optional, Tuple, List import torch import torch.utils.checkpoint from torch import nn from text_generation_server.utils.import_utils import SYSTEM if SYSTEM == "ipex": import intel_extension_for_pytorch as ipex else: import flash_attn_2_cuda from transformers.activations import ACT2FN import torch.nn.functional as F from text_generation_server.layers import ( TensorParallelColumnLinear, TensorParallelEmbedding, TensorParallelRowLinear, FastLinear, ) from text_generation_server.layers.attention import ( Seqlen, ) from text_generation_server.models.custom_modeling.flash_llama_modeling import ( FlashLlamaForCausalLM, ) def _prepare_aspect_ratio_attention_mask( aspect_ratio_mask: torch.Tensor, num_patches: int, target_length: int, dtype: torch.dtype, ) -> torch.Tensor: # Expand aspect ratio mask to target_length batch_size, max_num_tiles = aspect_ratio_mask.shape attention_mask = aspect_ratio_mask.view(batch_size, max_num_tiles, 1, 1).to(dtype) attention_mask = attention_mask.repeat(1, 1, target_length, 1) # Mask padding patches pad_patches = target_length - num_patches attention_mask[:, :, -pad_patches:] = 0 # Invert the mask (0 -> 1, 1 -> 0) attention_mask = 1 - attention_mask # Reshape to 2D and create 4D attention mask # (batch_size, 1, max_num_tiles * target_length, max_num_tiles * target_length) 
attention_mask = attention_mask.reshape( batch_size, max_num_tiles * target_length, 1 ) attention_mask = ( attention_mask @ attention_mask.transpose(-1, -2) * torch.finfo(dtype).min ) attention_mask = attention_mask.unsqueeze(1) return attention_mask # Copied from transformers.models.llama.modeling_llama._prepare_4d_causal_attention_mask_with_cache_position def _prepare_4d_causal_attention_mask_with_cache_position( attention_mask: torch.Tensor, sequence_length: int, target_length: int, dtype: torch.dtype, device: torch.device, min_dtype: float, cache_position: torch.Tensor, batch_size: int, ): """ Creates a causal 4D mask of shape `(batch_size, 1, query_length, key_value_length)` from a 2D mask of shape `(batch_size, key_value_length)`, or if the input `attention_mask` is already 4D, do nothing. Args: attention_mask (`torch.Tensor`): A 2D attention mask of shape `(batch_size, key_value_length)` or a 4D attention mask of shape `(batch_size, 1, query_length, key_value_length)`. sequence_length (`int`): The sequence length being processed. target_length (`int`): The target length: when generating with static cache, the mask should be as long as the static cache, to account for the 0 padding, the part of the cache that is not filled yet. dtype (`torch.dtype`): The dtype to use for the 4D attention mask. device (`torch.device`): The device to place the 4D attention mask on. min_dtype (`float`): The minimum value representable with the dtype `dtype`. cache_position (`torch.Tensor`): Indices depicting the position of the input sequence tokens in the sequence. batch_size (`int`): Batch size. """ if attention_mask is not None and attention_mask.dim() == 4: # In this case we assume that the mask comes already in inverted form and requires no inversion or slicing. 
causal_mask = attention_mask else: causal_mask = torch.full( (sequence_length, target_length), fill_value=min_dtype, dtype=dtype, device=device, ) if sequence_length != 1: causal_mask = torch.triu(causal_mask, diagonal=1) causal_mask *= torch.arange( target_length, device=device ) > cache_position.reshape(-1, 1) causal_mask = causal_mask[None, None, :, :].expand(batch_size, 1, -1, -1) if attention_mask is not None: causal_mask = ( causal_mask.clone() ) # copy to contiguous memory for in-place edit mask_length = attention_mask.shape[-1] padding_mask = ( causal_mask[:, :, :, :mask_length] + attention_mask[:, None, None, :] ) padding_mask = padding_mask == 0 causal_mask[:, :, :, :mask_length] = causal_mask[ :, :, :, :mask_length ].masked_fill(padding_mask, min_dtype) return causal_mask def _prepare_cross_attention_mask( cross_attention_mask: torch.Tensor, num_vision_tokens: int, dtype: torch.dtype, ) -> Tuple[torch.Tensor, torch.Tensor]: # reshape so it can be used by attn module batch_size, text_total_length, *_ = cross_attention_mask.shape cross_attention_mask = cross_attention_mask.repeat_interleave( num_vision_tokens, dim=3 ) cross_attention_mask = cross_attention_mask.view(batch_size, text_total_length, -1) cross_attention_mask = cross_attention_mask.unsqueeze(1) # invert the mask inverted_cross_attn_mask = (1.0 - cross_attention_mask).to(dtype) cross_attention_mask = inverted_cross_attn_mask.masked_fill( inverted_cross_attn_mask.to(torch.bool), torch.finfo(dtype).min ) # apply full-row bias, which returns a 4D tensor of shape [B, H, S1, 1] whose value is 0 if a full row in the cross attn mask's # last dimension contains negative infinity values, otherwise it's 1 negative_inf_value = torch.finfo(dtype).min full_text_row_masked_out_mask = ( (cross_attention_mask != negative_inf_value) .any(dim=-1) .type_as(cross_attention_mask)[..., None] ) cross_attention_mask *= full_text_row_masked_out_mask return cross_attention_mask, full_text_row_masked_out_mask # Copied from 
transformers.models.clip.modeling_clip.CLIPMLP with CLIP->MllamaVision class MllamaVisionMLP(nn.Module): def __init__(self, *, prefix, config, weights): super().__init__() self.config = config self.activation_fn = ACT2FN[config.hidden_act] self.fc1 = TensorParallelColumnLinear.load( prefix=f"{prefix}.fc1", weights=weights, config=config, bias=True ) self.fc2 = TensorParallelRowLinear.load( prefix=f"{prefix}.fc2", weights=weights, config=config, bias=True ) def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: hidden_states = self.fc1(hidden_states) hidden_states = self.activation_fn(hidden_states) hidden_states = self.fc2(hidden_states) return hidden_states class MllamaVisionSdpaAttention(nn.Module): def __init__(self, *, prefix, config, weights): super().__init__() self.embed_dim = config.hidden_size self.head_dim = config.hidden_size // config.attention_heads self.num_heads = config.attention_heads // weights.process_group.size() self.qkv_proj = TensorParallelColumnLinear.load_multi( config, prefixes=[f"{prefix}.q_proj", f"{prefix}.k_proj", f"{prefix}.v_proj"], dim=0, weights=weights, bias=False, ) self.o_proj = TensorParallelRowLinear.load( config, prefix=f"{prefix}.o_proj", weights=weights, bias=False, ) def forward( self, hidden_state: torch.Tensor, attention_mask: Optional[torch.Tensor] = None, ) -> torch.Tensor: qkv = self.qkv_proj(hidden_state) query, key, value = qkv.split( [ self.head_dim * self.num_heads, self.head_dim * self.num_heads, self.head_dim * self.num_heads, ], dim=2, ) batch_size, q_seq_len, _ = query.shape _, kv_seq_len, _ = key.shape query = query.view(batch_size, q_seq_len, self.num_heads, self.head_dim) key = key.view(batch_size, kv_seq_len, self.num_heads, self.head_dim) value = value.view(batch_size, kv_seq_len, self.num_heads, self.head_dim) query = query.transpose(1, 2) key = key.transpose(1, 2) value = value.transpose(1, 2) attn_output = F.scaled_dot_product_attention( query, key, value, attn_mask=attention_mask ) 
attn_output = attn_output.transpose(1, 2).contiguous() attn_output = attn_output.reshape(batch_size, q_seq_len, -1) output = self.o_proj(attn_output) return output class MllamaVisionEncoderLayer(nn.Module): def __init__(self, *, prefix, config, weights, is_gated: bool): super().__init__() self.hidden_size = config.hidden_size self.num_attention_heads = config.attention_heads self.is_gated = is_gated self.intermediate_size = config.intermediate_size self.self_attn = MllamaVisionSdpaAttention( prefix=f"{prefix}.self_attn", config=config, weights=weights ) self.mlp = MllamaVisionMLP( prefix=f"{prefix}.mlp", config=config, weights=weights ) self.input_layernorm = nn.LayerNorm.load( prefix=f"{prefix}.input_layernorm", weights=weights, eps=1e-05 ) self.post_attention_layernorm = nn.LayerNorm.load( prefix=f"{prefix}.post_attention_layernorm", weights=weights, eps=1e-05 ) # there used to be an if else here, no code path if is_gated: self.gate_attn = nn.Parameter( weights.get_tensor(f"{prefix}.gate_attn"), requires_grad=False ) self.gate_ffn = nn.Parameter( weights.get_tensor(f"{prefix}.gate_ffn"), requires_grad=False ) def forward( self, hidden_state: torch.Tensor, attention_mask: Optional[torch.Tensor] = None, ): # Self Attention residual = hidden_state hidden_state = self.input_layernorm(hidden_state) hidden_state = self.self_attn(hidden_state, attention_mask=attention_mask) gate_attn = 1 if not self.is_gated else self.gate_attn.tanh() hidden_state = residual + gate_attn * hidden_state # Feed forward residual = hidden_state hidden_state = self.post_attention_layernorm(hidden_state) hidden_state = self.mlp(hidden_state) gate_ffn = 1 if not self.is_gated else self.gate_ffn.tanh() hidden_state = residual + gate_ffn * hidden_state return hidden_state class MllamaVisionEncoder(nn.Module): def __init__(self, *, prefix, config, weights, is_gated: bool, num_layers: int): super().__init__() self.config = config self.layers = [ MllamaVisionEncoderLayer( 
prefix=f"{prefix}.layers.{i}", config=config, weights=weights, is_gated=is_gated, ) for i in range(num_layers) ] def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.Tensor] = None, ): encoder_states = [hidden_states] for encoder_layer in self.layers: layer_outputs = encoder_layer( hidden_states, attention_mask, ) hidden_states = layer_outputs encoder_states.append(hidden_states) return hidden_states, encoder_states class MllamaPrecomputedAspectRatioEmbedding(nn.Module): def __init__(self, *, prefix, config, weights): super().__init__() self.max_num_tiles = config.max_num_tiles self.hidden_size = config.hidden_size self.max_aspect_ratio_id = config.max_aspect_ratio_id self.embedding = TensorParallelEmbedding( prefix=f"{prefix}.embedding", weights=weights ) self.gate = nn.Parameter( weights.get_tensor(f"{prefix}.gate"), requires_grad=False ) def forward( self, hidden_state: torch.Tensor, aspect_ratio_ids: torch.Tensor ) -> torch.Tensor: embeddings = self.embedding(aspect_ratio_ids) embeddings = embeddings.reshape(-1, self.max_num_tiles, 1, self.hidden_size) # Always gated. 
embeddings = embeddings * self.gate.tanh() hidden_state = hidden_state + embeddings return hidden_state class MllamaPrecomputedPositionEmbedding(nn.Module): def __init__(self, *, prefix, config, weights): super().__init__() self.max_num_tiles = config.max_num_tiles self.max_aspect_ratio_id = config.max_aspect_ratio_id self.num_patches = (config.image_size // config.patch_size) ** 2 + 1 self.hidden_size = config.hidden_size self.scale = config.hidden_size**-0.5 self.gate = nn.Parameter( weights.get_tensor(f"{prefix}.gate"), requires_grad=False ) # position embedding embedding = nn.Parameter( weights.get_tensor(f"{prefix}.embedding"), requires_grad=False ) self.gated_position_embedding = (1 - self.gate.tanh()) * embedding self.tile_embedding = TensorParallelEmbedding( prefix=f"{prefix}.tile_embedding", weights=weights ) def forward( self, hidden_state: torch.Tensor, aspect_ratio_ids: torch.Tensor ) -> torch.Tensor: # position embeddings hidden_state = hidden_state + self.gated_position_embedding.view( 1, 1, self.num_patches, self.hidden_size ) # precomputed tile position embeddings tile_position_embedding = self.tile_embedding(aspect_ratio_ids) batch_size = hidden_state.shape[0] tile_position_embedding = tile_position_embedding.reshape( batch_size, self.max_num_tiles, self.num_patches, self.hidden_size ) gated_tile_position_embedding = self.gate.tanh() * tile_position_embedding hidden_state = hidden_state + gated_tile_position_embedding return hidden_state class MllamaVisionModel(nn.Module): def __init__(self, *, prefix, config, weights): super().__init__() self.image_size = config.image_size self.patch_size = config.patch_size self.max_num_tiles = config.max_num_tiles self.hidden_size = config.hidden_size self.num_channels = config.num_channels self.intermediate_layers_indices = config.intermediate_layers_indices self.num_patches = (self.image_size // self.patch_size) ** 2 + 1 self.scale = config.hidden_size**-0.5 self.dtype = weights.dtype self.patch_embedding = 
nn.Conv2d( in_channels=config.num_channels, out_channels=self.hidden_size, kernel_size=self.patch_size, stride=self.patch_size, padding="valid", bias=False, ) self.patch_embedding.weight = nn.Parameter( weights.get_tensor(f"{prefix}.patch_embedding.weight"), requires_grad=False ) self.class_embedding = nn.Parameter( weights.get_tensor(f"{prefix}.class_embedding"), requires_grad=False ) self.gated_positional_embedding = MllamaPrecomputedPositionEmbedding( prefix=f"{prefix}.gated_positional_embedding", config=config, weights=weights, ) self.pre_tile_positional_embedding = MllamaPrecomputedAspectRatioEmbedding( prefix=f"{prefix}.pre_tile_positional_embedding", config=config, weights=weights, ) self.post_tile_positional_embedding = MllamaPrecomputedAspectRatioEmbedding( prefix=f"{prefix}.post_tile_positional_embedding", config=config, weights=weights, ) ## layer norms self.layernorm_pre = nn.LayerNorm.load( prefix=f"{prefix}.layernorm_pre", weights=weights, # torch default eps=1e-05, ) self.layernorm_post = nn.LayerNorm.load( prefix=f"{prefix}.layernorm_post", weights=weights, # torch default eps=1e-05, ) ## encoders self.transformer = MllamaVisionEncoder( prefix=f"{prefix}.transformer", config=config, weights=weights, is_gated=False, num_layers=config.num_hidden_layers, ) self.global_transformer = MllamaVisionEncoder( prefix=f"{prefix}.global_transformer", config=config, weights=weights, is_gated=True, num_layers=config.num_global_layers, ) def apply_class_embedding(self, hidden_state: torch.Tensor) -> torch.Tensor: batch_size, _, hidden_size = hidden_state.shape class_embedding = self.class_embedding.expand(batch_size, 1, hidden_size) hidden_state = torch.cat([class_embedding, hidden_state], dim=1) return hidden_state def forward( self, pixel_values: torch.Tensor, aspect_ratio_ids: torch.Tensor, attention_mask: torch.Tensor, ) -> torch.Tensor: ( batch_size, num_concurrent_media, num_tiles, num_channels, height, width, ) = pixel_values.shape pixel_values = 
pixel_values.reshape( batch_size * num_concurrent_media * num_tiles, num_channels, height, width ) aspect_ratio_ids = aspect_ratio_ids.reshape( batch_size * num_concurrent_media, -1 ) # patch embedding patch_embeds = self.patch_embedding(pixel_values) hidden_state = patch_embeds.flatten(2).transpose(1, 2) # tile embeddings _, num_patches, dim = hidden_state.shape hidden_state = hidden_state.reshape( batch_size * num_concurrent_media, num_tiles, -1, dim ) hidden_state = self.pre_tile_positional_embedding( hidden_state, aspect_ratio_ids ) # apply cls token hidden_state = hidden_state.reshape( batch_size * num_concurrent_media * num_tiles, num_patches, dim ) hidden_state = self.apply_class_embedding(hidden_state) num_patches += 1 # apply position embeddings hidden_state = hidden_state.reshape( batch_size * num_concurrent_media, num_tiles, num_patches, dim ) hidden_state = self.gated_positional_embedding(hidden_state, aspect_ratio_ids) # apply encoder hidden_state = self.layernorm_pre(hidden_state) # Compute the number of tokens to pad num_padding_patches = (8 - (hidden_state.shape[-2] % 8)) % 8 # Compute padding tuple for pad function padding = ( 0, 0, 0, num_padding_patches, ) # (pad_left, pad_right, pad_left for dim -2, pad_right for dim -2) # Pad the tensor hidden_state = F.pad(hidden_state, padding, mode="constant", value=0) slice_index = -num_padding_patches if num_padding_patches > 0 else None if attention_mask is not None: attention_mask = attention_mask.reshape( batch_size * num_concurrent_media, -1 ) attention_mask = _prepare_aspect_ratio_attention_mask( aspect_ratio_mask=attention_mask, num_patches=self.num_patches, target_length=hidden_state.shape[2], dtype=self.dtype, ) hidden_state = hidden_state.view(batch_size * num_concurrent_media, -1, dim) hidden_state, all_intermediate_hidden_states = self.transformer( hidden_state, attention_mask=attention_mask, ) intermediate_hidden_states = [ hidden_state for idx, hidden_state in 
enumerate(all_intermediate_hidden_states) if idx in self.intermediate_layers_indices ] intermediate_hidden_states = torch.stack(intermediate_hidden_states, dim=-1) # apply global encoder hidden_state = self.layernorm_post(hidden_state) hidden_state = hidden_state.reshape( batch_size * num_concurrent_media, num_tiles, num_patches + num_padding_patches, dim, ) hidden_state = self.post_tile_positional_embedding( hidden_state, aspect_ratio_ids ) hidden_state = hidden_state.reshape( batch_size * num_concurrent_media, num_tiles * (num_patches + num_padding_patches), dim, ) hidden_state, _ = self.global_transformer( hidden_state, attention_mask=attention_mask ) hidden_state = hidden_state.reshape( batch_size * num_concurrent_media, num_tiles, num_patches + num_padding_patches, dim, ) hidden_state = hidden_state[:, :, :slice_index] # adding intermediate layer outputs hidden_state = hidden_state.reshape( batch_size, num_concurrent_media, num_tiles, num_patches, dim ) intermediate_hidden_states = intermediate_hidden_states.reshape( batch_size * num_concurrent_media, num_tiles, num_patches + num_padding_patches, -1, ) intermediate_hidden_states = intermediate_hidden_states[:, :, :slice_index] intermediate_hidden_states = intermediate_hidden_states.reshape( batch_size, num_concurrent_media, num_tiles, num_patches, -1 ) hidden_state = torch.cat([hidden_state, intermediate_hidden_states], dim=-1) return hidden_state class MllamaTextCrossAttention(nn.Module): """Multi-headed attention from 'Attention Is All You Need' paper""" def __init__(self, *, prefix, config, weights, layer_idx): super().__init__() self.config = config self.num_heads = self.config.num_attention_heads self.num_key_value_heads = self.config.num_key_value_heads self.dropout = config.dropout self.hidden_size = config.hidden_size self.head_size = config.hidden_size // self.num_heads self.num_key_value_groups = self.num_heads // self.num_key_value_heads self.layer_idx = layer_idx self.num_heads = self.num_heads // 
weights.process_group.size() self.num_key_value_heads = ( self.num_key_value_heads // weights.process_group.size() ) self.q_proj = TensorParallelColumnLinear.load( config, prefix=f"{prefix}.q_proj", weights=weights, bias=False, ) self.k_proj = TensorParallelColumnLinear.load( config, prefix=f"{prefix}.k_proj", weights=weights, bias=False, ) self.v_proj = TensorParallelColumnLinear.load( config, prefix=f"{prefix}.v_proj", weights=weights, bias=False, ) self.o_proj = TensorParallelRowLinear.load( config, prefix=f"{prefix}.o_proj", weights=weights, bias=False, ) self.q_norm = MllamaTextRMSNorm.load( prefix=f"{prefix}.q_norm", weights=weights, eps=config.rms_norm_eps ) self.k_norm = MllamaTextRMSNorm.load( prefix=f"{prefix}.k_norm", weights=weights, eps=config.rms_norm_eps ) self.softmax_scale = self.head_size**-0.5 def forward( self, hidden_states: torch.Tensor, cross_attention_states: Optional[torch.Tensor] = None, # past_key_value=None, # attention_mask: Optional[torch.Tensor] = None, # cache_position: Optional[torch.LongTensor] = None, ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]: """Input shape: Batch x Time x Channel""" # hidden_states = hidden_states.unsqueeze(0) # bsz, q_len, _ = hidden_states.size() query_states = self.q_proj(hidden_states) query_states = query_states.view(-1, self.num_heads, self.head_size) query_states = self.q_norm(query_states) ( cross_attention_states, cu_seqlen_q, cu_seqlen_k, max_q, max_k, indices, ) = cross_attention_states key_states = self.k_proj(cross_attention_states) value_states = self.v_proj(cross_attention_states) key_states = key_states.view(-1, self.num_key_value_heads, self.head_size) value_states = value_states.view(-1, self.num_key_value_heads, self.head_size) key_states = self.k_norm(key_states) # key_states = key_states.repeat(1, self.num_key_value_groups, 1) # value_states = value_states.repeat(1, self.num_key_value_groups, 1) causal = False # logger.info( # f"Q: {query_states.shape} 
-K {key_states.shape} - V{value_states.shape}" # ) if SYSTEM == "ipex": attn_output = torch.empty_like(query_states) if query_states.device.type == "xpu": ipex.llm.functional.varlen_attention( query_states.contiguous(), key_states.contiguous(), value_states.contiguous(), attn_output, cu_seqlen_q, cu_seqlen_k, None, max_q, max_k, 0.0, self.softmax_scale, False, causal, False, None, ) else: ipex.llm.functional.varlen_attention( query_states, key_states, value_states, attn_output, cu_seqlen_q, cu_seqlen_k, max_q, max_k, 0.0, self.softmax_scale, False, causal, False, None, ) else: attn_output = flash_attn_2_cuda.varlen_fwd( query_states, key_states, value_states, None, cu_seqlen_q, cu_seqlen_k, None, None, None, # block_tables None, max_q, max_k, 0.0, self.softmax_scale, False, causal, # Causal -1, # window_size_left, -1, 0.0, # softcap False, None, )[0] attn_output = self.o_proj(attn_output.view(-1, self.num_heads * self.head_size)) return attn_output # Copied from transformers.models.gemma2.modeling_gemma2.Gemma2MLP with Gemma2->MllamaText class MllamaTextMLP(nn.Module): def __init__(self, *, prefix, config, weights): super().__init__() self.config = config self.hidden_size = config.hidden_size self.intermediate_size = ( config.intermediate_size // weights.process_group.size() ) self.gate_up_proj = TensorParallelColumnLinear.load_multi( config, prefixes=[f"{prefix}.gate_proj", f"{prefix}.up_proj"], weights=weights, dim=0, bias=False, ) self.down_proj = TensorParallelRowLinear.load( config, prefix=f"{prefix}.down_proj", weights=weights, bias=False, ) self.act_fn = ACT2FN[config.hidden_act] def forward(self, x): shape = x.shape gate_up_states = self.gate_up_proj(x) gate_up_states = gate_up_states.view(*shape[:-1], 2, self.intermediate_size) result = self.down_proj( self.act_fn(gate_up_states[:, 0]) * gate_up_states[:, 1] ) return result class FlashLlamaCrossLayer(torch.nn.Module): """Cross-attention transformer block with tanh-gated attention and feedforward.""" def 
__init__(self, *, prefix, config, weights, index) -> None: layer_idx = index super().__init__() self.cross_attn = MllamaTextCrossAttention( prefix=f"{prefix}.cross_attn", config=config, weights=weights, layer_idx=layer_idx, ) self.input_layernorm = MllamaTextRMSNorm.load( prefix=f"{prefix}.input_layernorm", weights=weights, eps=config.rms_norm_eps ) self.cross_attn_attn_gate = torch.nn.Parameter( weights.get_tensor(f"{prefix}.cross_attn_attn_gate"), requires_grad=False ) self.mlp = MllamaTextMLP(prefix=f"{prefix}.mlp", config=config, weights=weights) self.post_attention_layernorm = MllamaTextRMSNorm.load( prefix=f"{prefix}.post_attention_layernorm", weights=weights, eps=config.rms_norm_eps, ) self.cross_attn_mlp_gate = torch.nn.Parameter( weights.get_tensor(f"{prefix}.cross_attn_mlp_gate"), requires_grad=False ) self.layer_idx = layer_idx def forward( self, hidden_states, residual, cos, sin, cu_seqlen_prefill, kv_cache, block_tables, slots, seqlen, max_s, adapter_data, cross_attention_states, # [ IB, ...] 
) -> Tuple[torch.Tensor, torch.Tensor]: if cross_attention_states is None: return hidden_states, residual if residual is not None: hidden_states += residual indices = cross_attention_states[-1] out_hidden_states = hidden_states[:] if len(indices) > 0: assert max(indices) < hidden_states.shape[0] hidden_states = hidden_states[indices] residual = hidden_states hidden_states = self.input_layernorm(hidden_states) hidden_states = self.cross_attn( hidden_states=hidden_states, # attention_mask=cross_attention_mask, cross_attention_states=cross_attention_states, ) hidden_states = residual + self.cross_attn_attn_gate.tanh() * hidden_states residual = hidden_states hidden_states = self.post_attention_layernorm(hidden_states) hidden_states = self.mlp(hidden_states) hidden_states = residual + self.cross_attn_mlp_gate.tanh() * hidden_states out_hidden_states[indices] = hidden_states hidden_states = out_hidden_states return hidden_states, None # Copied from transformers.models.llama.modeling_llama.LlamaRMSNorm with Llama->MllamaText class MllamaTextRMSNorm(nn.Module): def __init__(self, weight, eps): super().__init__() self.weight = weight self.variance_epsilon = eps @classmethod def load(cls, *, prefix, weights, eps): weight = nn.Parameter( weights.get_tensor(f"{prefix}.weight"), requires_grad=False ) return cls(weight=weight, eps=eps) def forward(self, hidden_states): input_dtype = hidden_states.dtype hidden_states = hidden_states.to(torch.float32) variance = hidden_states.pow(2).mean(-1, keepdim=True) hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon) return self.weight * hidden_states.to(input_dtype) def extra_repr(self): return f"{tuple(self.weight.shape)}, eps={self.variance_epsilon}" class MllamaForConditionalGeneration(nn.Module): def __init__(self, prefix, config, weights): super().__init__() config.vision_config.quantize = None config.vision_config.speculator = config.speculator config.text_config.quantize = config.quantize 
config.text_config.speculator = config.speculator config.text_config._attn_implementation = "sdpa" self.hidden_size = config.text_config.hidden_size self.vision_model = MllamaVisionModel( prefix="vision_model", config=config.vision_config, weights=weights ) self.multi_modal_projector = FastLinear.load( prefix="multi_modal_projector", config=config, weights=weights, bias=True ) self.text_model = FlashLlamaForCausalLM( prefix="language_model", config=config.text_config, weights=weights ) self.config = config self.dtype = weights.dtype self.device = weights.device def vision_forward(self, pixel_values, aspect_ratio_ids, aspect_ratio_mask): if aspect_ratio_ids is None: raise ValueError( "`aspect_ratio_ids` must be provided if `pixel_values` is provided" ) # logger.info(f"PIxel values {pixel_values.shape}") batch_size = pixel_values.shape[0] vision_states = self.vision_model( pixel_values, aspect_ratio_ids, aspect_ratio_mask ) cross_attention_states = self.multi_modal_projector(vision_states).reshape( -1, vision_states.shape[-2], self.hidden_size ) _, _, h = cross_attention_states.shape cross_attention_states = cross_attention_states.view(batch_size, -1, h) # logger.info(f"cross {cross_attention_states.shape}") return cross_attention_states def forward( self, input_ids: torch.Tensor, position_ids: torch.Tensor, cu_seqlen_prefill: Optional[torch.Tensor], kv_cache: List[Tuple[torch.Tensor, torch.Tensor]], block_tables: torch.Tensor, slots: torch.Tensor, seqlen: Seqlen, max_s: int, prefill_cache_indices: Optional[torch.Tensor], lm_head_indices: Optional[torch.Tensor], adapter_data: Optional[torch.Tensor] = None, # XXX: Putting these as optional so that the cuda warmup calls can go through. 
cross_attention_states: Optional[torch.Tensor] = None, image_indices=None, ): if cross_attention_states is not None: seqlen_q = len(image_indices) n_images = cross_attention_states.shape[0] seqlen_k = cross_attention_states.shape[1] device = cross_attention_states.device if cu_seqlen_prefill is not None: offset = 0 cu_q = [] indices = [] for index in image_indices: cu_q.append(offset) length = seqlen.input_lengths[index].item() assert index < seqlen.cu_seqlen_q.shape[0] input_ids_offset = seqlen.cu_seqlen_q[index] indices.extend(range(input_ids_offset, input_ids_offset + length)) offset += length cu_q.append(offset) cu_seqlen_q = torch.Tensor(cu_q).to(device=device, dtype=torch.int32) assert max(indices) < input_ids.shape[0] cu_seqlen_k = ( torch.arange( n_images + 1, device=device, dtype=torch.int32, ) * seqlen_k ) max_q = cu_seqlen_q[-1].item() max_k = seqlen_k else: cu_seqlen_q = torch.arange( seqlen_q + 1, device=device, dtype=torch.int32 ) seqlen_k = cross_attention_states.shape[1] n_images = cross_attention_states.shape[0] cu_seqlen_k = ( torch.arange( n_images + 1, device=device, dtype=torch.int32, ) * seqlen_k ) max_q = seqlen_q max_k = seqlen_k indices = image_indices[:] cross_attention_states = ( cross_attention_states, cu_seqlen_q, cu_seqlen_k, max_q, max_k, indices, ) outputs = self.text_model( input_ids=input_ids, position_ids=position_ids, cu_seqlen_prefill=cu_seqlen_prefill, kv_cache=kv_cache, block_tables=block_tables, slots=slots, seqlen=seqlen, max_s=max_s, prefill_cache_indices=prefill_cache_indices, lm_head_indices=lm_head_indices, adapter_data=adapter_data, cross_attention_states=cross_attention_states, ) return outputs
text-generation-inference/server/text_generation_server/models/custom_modeling/mllama.py
import torch import numpy as np from typing import Iterable, Optional, Tuple, List, Dict from text_generation_server.pb.generate_pb2 import Request from io import BytesIO from PIL import Image from dataclasses import dataclass from opentelemetry import trace from transformers import ( PreTrainedTokenizerBase, ) from text_generation_server.models.vlm_causal_lm import VlmCausalLMBatch, VlmCausalLM from text_generation_server.pb import generate_pb2 from text_generation_server.models.globals import PREFIX_CACHING, ATTENTION from text_generation_server.layers.attention import Seqlen from text_generation_server.models.metadata_kernels import block_tables_to_ragged tracer = trace.get_tracer(__name__) @dataclass class MllamaCausalLMBatch(VlmCausalLMBatch): image_indices: List[int] = 42 aspect_ratio_ids: Optional[torch.Tensor] = None aspect_ratio_mask: Optional[torch.Tensor] = None cross_attention_states: Optional[torch.Tensor] = None def prepare_for_prefill(self): super(VlmCausalLMBatch, self).prepare_for_prefill() @classmethod @tracer.start_as_current_span("concatenate") def concatenate(cls, batches): batch = super(VlmCausalLMBatch, cls).concatenate(batches) batch.pixel_values = None batch.pixel_attention_mask = None offset = 0 image_indices = [] attention_states = [] for b in batches: if b.cross_attention_states is not None: attention_states.append(b.cross_attention_states) image_indices.extend([i + offset for i in b.image_indices]) offset += len(b.image_indices) if len(attention_states) > 0: assert len(image_indices) > 0 batch.cross_attention_states = torch.cat(attention_states, dim=0) batch.image_indices = image_indices else: batch.cross_attention_states = None batch.image_indices = [] return batch @tracer.start_as_current_span("filter") def filter(self, request_ids: List[int]): assert self.image_indices is not None batch = super(VlmCausalLMBatch, self).filter(request_ids) assert self.image_indices is not None indices = [] for i, request_id in enumerate(request_ids): 
idx = self.requests_idx_mapping[request_id] indices.append(idx) offset = 0 new_image_indices = [] prev_i = None for i in self.image_indices: if i in indices: new_image_indices.append(offset) if i != prev_i: offset += 1 prev_i = i batch.image_indices = new_image_indices if len(new_image_indices) > 0: assert max(new_image_indices) < self.cross_attention_states.shape[0] assert offset <= self.cross_attention_states.shape[0] batch.cross_attention_states = self.cross_attention_states[ new_image_indices ] else: batch.cross_attention_states = None batch.pixel_values = None return batch @classmethod def batch_tokenized_inputs( cls, requests: Iterable[Request], tokenizer, processor, config ): image_inputs = [] texts = [] image_indices = [] batch_tokenized_inputs = [] for i, r in enumerate(requests): # Each input is encoded into a list, where each element of this input list is either a string or a URL curr_text = "" curr_image = None curr_i = None for chunk in r.input_chunks.chunks: chunk_type = chunk.WhichOneof("chunk") if chunk_type == "text": curr_text += chunk.text elif chunk_type == "image": image = Image.open(BytesIO(chunk.image.data)) # TODO unsure about BOS curr_text += "<|image|>" image_input = processor.image_processor(image, return_tensors="pt") curr_image = image_input curr_i = i # image_inputs.append(image_input) # image_indices.append(i) else: raise RuntimeError(f"Invalid chunk type {chunk_type}") texts.append(curr_text) if curr_image is not None: image_inputs.append(curr_image) image_indices.append(curr_i) input_ids = tokenizer( curr_text, truncation=True, max_length=r.truncate, add_special_tokens=r.add_special_tokens, )["input_ids"] batch_tokenized_inputs.append(input_ids) if image_inputs: image_input = image_inputs[0] new_image_inputs = { "pixel_values": torch.cat( [img["pixel_values"] for img in image_inputs], dim=0 ), } if "aspect_ratio_ids" in image_input: new_image_inputs["aspect_ratio_ids"] = torch.cat( [img["aspect_ratio_ids"] for img in image_inputs], 
dim=0 ) if "aspect_ratio_mask" in image_input: new_image_inputs["aspect_ratio_mask"] = torch.cat( [img["aspect_ratio_mask"] for img in image_inputs], dim=0 ) image_inputs = new_image_inputs image_inputs["image_indices"] = image_indices else: image_inputs = None if image_inputs is not None: assert len(image_indices) == image_inputs["pixel_values"].shape[0] return batch_tokenized_inputs, image_inputs @classmethod def from_pb_processor( cls, pb: generate_pb2.Batch, tokenizer: PreTrainedTokenizerBase, processor, config, dtype: torch.dtype, device: torch.device, ) -> "VlmCausalLMBatch": batch_tokenized_inputs, image_inputs = cls.batch_tokenized_inputs( pb.requests, tokenizer, processor, config ) batch = cls.from_tokenized(pb, tokenizer, batch_tokenized_inputs, dtype, device) # XXX: <|image|> token is actually out of bounds and bugs out the logit processors. batch.all_input_ids_tensor = batch.all_input_ids_tensor.clamp( max=config.text_config.vocab_size - 1 ) if isinstance(batch.input_ids, list): if len(batch) > 1: input_ids = np.concatenate(batch.input_ids, dtype=np.int64) else: input_ids = batch.input_ids[0] batch.input_ids = torch.tensor(input_ids, dtype=torch.int64, device=device) batch.input_ids = batch.input_ids.clamp(max=config.text_config.vocab_size - 1) if image_inputs is not None: batch.pixel_values = image_inputs["pixel_values"].to( device=device, dtype=dtype ) batch.aspect_ratio_ids = image_inputs["aspect_ratio_ids"].to(device=device) batch.aspect_ratio_mask = image_inputs["aspect_ratio_mask"].to( device=device ) batch.image_indices = image_inputs["image_indices"] else: batch.pixel_values = None batch.aspect_ratio_ids = None batch.aspect_ratio_mask = None batch.image_indices = [] assert batch.image_indices is not None return batch class MllamaCausalLM(VlmCausalLM): def set_inputs_embeds(self, batch): # Set the input embeddings to None, as we are using the input_ids for the model batch.inputs_embeds = None def cuda_graph_warmup(self, bs: int, max_s: int, 
max_bt: int): super(VlmCausalLM, self).cuda_graph_warmup(bs, max_s, max_bt) def forward( self, batch: MllamaCausalLMBatch, adapter_data: Optional[Dict[str, torch.Tensor]] = None, ) -> Tuple[torch.Tensor, Optional[torch.Tensor]]: # Model Forward if batch.speculative_ids is not None: input_ids = batch.input_ids position_ids = batch.position_ids cu_seqlen_prefill = batch.cu_seqlen_prefill kv_cache = self.kv_cache block_tables = batch.block_tables_tensor slots = batch.slots[batch.slot_indices] input_lengths = batch.input_lengths_tensor max_s = batch.max_current_length lm_head_indices = batch.prefill_head_indices speculative_ids = batch.speculative_ids B, speculative_length = speculative_ids.shape new_length = speculative_length + 1 new_input_ids = torch.cat( [input_ids.unsqueeze(-1), speculative_ids], dim=1 ).reshape(-1) arange = torch.arange(new_length, device=position_ids.device).unsqueeze(0) arange_int = arange.to(dtype=torch.int32) new_position_ids = ( position_ids.unsqueeze(-1).expand(B, new_length) + arange ).view(-1) slots = (slots.unsqueeze(-1).expand(B, new_length) + arange_int).view(-1) input_lengths = ( input_lengths.unsqueeze(-1).expand(B, new_length) + arange_int ).view(-1) cache_lengths_tensor = ( batch.cache_lengths_tensor.unsqueeze(-1).expand(B, new_length) ).reshape(-1) # Add Copy the block tables for all members block_tables = ( block_tables.unsqueeze(1) .expand(B, new_length, -1) .reshape(B * new_length, -1) .contiguous() ) max_s = max_s + speculative_length input_ids = new_input_ids position_ids = new_position_ids else: input_ids = batch.input_ids position_ids = batch.position_ids cu_seqlen_prefill = batch.cu_seqlen_prefill kv_cache = self.kv_cache block_tables = batch.block_tables_tensor slots = batch.slots[batch.slot_indices] input_lengths = batch.input_lengths_tensor cache_lengths_tensor = batch.cache_lengths_tensor max_s = batch.max_current_length lm_head_indices = batch.prefill_head_indices # Try to find an associated cuda graph bs = 
input_ids.shape[0] sorted_padded_bs = sorted([k for k in self.cuda_graphs.keys() if k >= bs]) if sorted_padded_bs: # Get associated cuda graph cuda_graph = self.cuda_graphs[sorted_padded_bs[0]] else: cuda_graph = None if ( cu_seqlen_prefill is not None or cuda_graph is None # Only run cuda graphs when there's no images. or batch.cross_attention_states is not None ): if PREFIX_CACHING: block_tables = block_tables_to_ragged( block_tables=block_tables, input_lengths=batch.input_lengths, cache_lengths=batch.cache_lengths, input_lengths_tensor=batch.input_lengths_tensor, cache_lengths_tensor=batch.cache_lengths_tensor, max_current_length=batch.max_current_length, ) with self._forward_context( block_tables=block_tables, cu_seqlen_prefill=cu_seqlen_prefill, input_lengths_tensor=input_lengths, cache_lengths_tensor=cache_lengths_tensor, ): seqlen = Seqlen( input_lengths=input_lengths, cache_lengths=cache_lengths_tensor, cu_seqlen_q=cu_seqlen_prefill, max_q=batch.max_input_length, max_k=batch.max_current_length, ) if batch.pixel_values is not None: cross_attention_states = self.model.vision_forward( pixel_values=batch.pixel_values, aspect_ratio_ids=batch.aspect_ratio_ids, aspect_ratio_mask=batch.aspect_ratio_mask, ) batch.cross_attention_states = cross_attention_states cross_attention_states = batch.cross_attention_states logits, speculative_logits = self.model.forward( input_ids=input_ids, position_ids=position_ids, cu_seqlen_prefill=cu_seqlen_prefill, kv_cache=kv_cache, block_tables=block_tables, slots=slots, seqlen=seqlen, max_s=max_s, prefill_cache_indices=batch.prefill_cache_indices, lm_head_indices=lm_head_indices, cross_attention_states=cross_attention_states, adapter_data=adapter_data, image_indices=batch.image_indices[:], ) if batch.prefill_cache_indices is not None: batch.prefill_cache_indices = None if batch.pixel_values is not None: batch.pixel_values = None return logits, speculative_logits # Copy inputs to the static inputs of the cuda graph # Static inputs are 
potentially padded cuda_graph["input_ids"][: input_ids.shape[0]] = input_ids cuda_graph["position_ids"][: position_ids.shape[0]] = position_ids if ATTENTION == "flashinfer": block_tables = block_tables_to_ragged( block_tables=block_tables, input_lengths=batch.input_lengths, cache_lengths=batch.cache_lengths, input_lengths_tensor=batch.input_lengths_tensor, cache_lengths_tensor=batch.cache_lengths_tensor, max_current_length=batch.max_current_length, ) cuda_graph["block_tables"][: block_tables.shape[0]] = block_tables else: cuda_graph["block_tables"][ : block_tables.shape[0], : block_tables.shape[1] ] = block_tables # XXX: This is working only because block 0 is reserved for the healthcheck # so it doesn't matter if we override it with bogus values. cuda_graph["slots"].fill_(0) cuda_graph["slots"][: slots.shape[0]] = slots cuda_graph["input_lengths"].zero_() cuda_graph["input_lengths"][: input_lengths.shape[0]] = input_lengths cuda_graph["cache_lengths"].zero_() cuda_graph["cache_lengths"][ : cache_lengths_tensor.shape[0] ] = cache_lengths_tensor with self._forward_context( block_tables=cuda_graph["block_tables"], cu_seqlen_prefill=None, input_lengths_tensor=cuda_graph["input_lengths"], cache_lengths_tensor=cuda_graph["cache_lengths"], state=cuda_graph["state"], ): # Replay the graph cuda_graph["graph"].replay() # Slice output to the correct shape speculative_logits = ( cuda_graph["speculative_logits"][:bs] if cuda_graph["speculative_logits"] is not None else None ) logits = cuda_graph["logits"][:bs] return logits, speculative_logits
text-generation-inference/server/text_generation_server/models/mllama_causal_lm.py/0
{ "file_path": "text-generation-inference/server/text_generation_server/models/mllama_causal_lm.py", "repo_id": "text-generation-inference", "token_count": 7966 }
322
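The speculative-decoding branch in `MllamaCausalLM.forward` above expands each batch row's last token together with its `speculative_length` candidate tokens into a flat `(B * (speculative_length + 1),)` sequence, with consecutive position ids per row. A minimal pure-Python sketch of that index expansion, using plain lists instead of torch tensors (function and variable names are illustrative, not from the file):

```python
# Sketch of the speculative-decoding expansion done with torch.cat /
# arange in the forward pass above, on plain Python lists.

def expand_for_speculation(input_ids, speculative_ids, position_ids):
    """For each row, interleave the last real token with its speculative
    candidates and assign consecutive positions."""
    new_input_ids = []
    new_position_ids = []
    new_length = len(speculative_ids[0]) + 1  # speculative_length + 1
    for tok, spec, pos in zip(input_ids, speculative_ids, position_ids):
        new_input_ids.extend([tok] + list(spec))          # cat + reshape(-1)
        new_position_ids.extend(pos + i for i in range(new_length))
    return new_input_ids, new_position_ids

ids, positions = expand_for_speculation(
    input_ids=[10, 20],
    speculative_ids=[[11, 12], [21, 22]],
    position_ids=[5, 7],
)
print(ids)        # [10, 11, 12, 20, 21, 22]
print(positions)  # [5, 6, 7, 7, 8, 9]
```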
import torch
from loguru import logger
import os
import importlib.util


def is_ipex_available():
    return importlib.util.find_spec("intel_extension_for_pytorch") is not None


def get_cuda_free_memory(device, memory_fraction):
    total_free_memory, _ = torch.cuda.mem_get_info(device)
    total_gpu_memory = torch.cuda.get_device_properties(device).total_memory
    free_memory = max(0, total_free_memory - (1 - memory_fraction) * total_gpu_memory)
    return free_memory


def get_xpu_free_memory(device, memory_fraction):
    total_free_memory, total_xpu_memory = torch.xpu.mem_get_info(device)
    memory_fraction = float(os.getenv("XPU_MEMORY_FRACTION", "0.9"))
    free_memory = max(
        0, int(total_free_memory - (1 - memory_fraction) * total_xpu_memory)
    )
    return free_memory


def get_cpu_free_memory(device, memory_fraction):
    import psutil
    from text_generation_server.utils.dist import WORLD_SIZE

    mem = psutil.virtual_memory()
    free_memory = int(mem.available * 0.95 / WORLD_SIZE)
    return free_memory


def noop(*args, **kwargs):
    pass


SYSTEM = None
if torch.version.hip is not None:
    SYSTEM = "rocm"
    empty_cache = torch.cuda.empty_cache
    synchronize = torch.cuda.synchronize
    get_free_memory = get_cuda_free_memory
elif torch.version.cuda is not None and torch.cuda.is_available():
    SYSTEM = "cuda"
    empty_cache = torch.cuda.empty_cache
    synchronize = torch.cuda.synchronize
    get_free_memory = get_cuda_free_memory
elif is_ipex_available():
    SYSTEM = "ipex"
    import intel_extension_for_pytorch  # noqa: F401

    if hasattr(torch, "xpu") and torch.xpu.is_available():
        empty_cache = torch.xpu.empty_cache
        synchronize = torch.xpu.synchronize
        get_free_memory = get_xpu_free_memory
    else:
        empty_cache = noop
        synchronize = noop
        get_free_memory = get_cpu_free_memory
elif hasattr(torch, "xpu") and torch.xpu.is_available():
    SYSTEM = "xpu"
    empty_cache = torch.xpu.empty_cache
    synchronize = torch.xpu.synchronize
    get_free_memory = get_xpu_free_memory
else:
    SYSTEM = "cpu"
    empty_cache = noop
    synchronize = noop
    get_free_memory = get_cpu_free_memory
logger.info(f"Detected system {SYSTEM}")
text-generation-inference/server/text_generation_server/utils/import_utils.py/0
{ "file_path": "text-generation-inference/server/text_generation_server/utils/import_utils.py", "repo_id": "text-generation-inference", "token_count": 893 }
323
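The `get_cuda_free_memory` helper in the record above reserves `(1 - memory_fraction)` of the device's total memory and reports the rest of the currently free memory as usable, clamped at zero. A torch-free sketch of that arithmetic (the function name and the byte values below are made up for illustration):

```python
def usable_free_memory(total_free, total, memory_fraction):
    # Mirrors get_cuda_free_memory: keep (1 - memory_fraction) of the
    # device's total memory in reserve, never going below zero.
    return max(0, total_free - (1 - memory_fraction) * total)

# A 24 GiB card with 20 GiB currently free, targeting 90% utilisation:
total = 24 * 1024**3
free = usable_free_memory(20 * 1024**3, total, 0.9)
print(int(free / 1024**3))  # 17 — 20 GiB free minus a ~2.4 GiB reserve, floored to GiB
```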
import subprocess
import argparse
import ast
import json
import os

TEMPLATE = """
# Supported Models

Text Generation Inference enables serving optimized models. The following sections list which models (VLMs & LLMs) are supported.

SUPPORTED_MODELS

If the above list lacks the model you would like to serve, depending on the model's pipeline type, you can try to initialize and serve the model anyways to see how well it performs, but performance isn't guaranteed for non-optimized models:

```python
# for causal LMs/text-generation models
AutoModelForCausalLM.from_pretrained(<model>, device_map="auto")

# or, for text-to-text generation models
AutoModelForSeq2SeqLM.from_pretrained(<model>, device_map="auto")
```

If you wish to serve a supported model that already exists on a local folder, just point to the local folder.

```bash
text-generation-launcher --model-id <PATH-TO-LOCAL-BLOOM>
```
"""


def check_cli(check: bool):
    output = subprocess.check_output(["text-generation-launcher", "--help"]).decode(
        "utf-8"
    )

    wrap_code_blocks_flag = "<!-- WRAP CODE BLOCKS -->"
    final_doc = f"# Text-generation-launcher arguments\n\n{wrap_code_blocks_flag}\n\n"

    lines = output.split("\n")

    header = ""
    block = []
    for line in lines:
        if line.startswith("  -") or line.startswith("      -"):
            rendered_block = "\n".join(block)
            if header:
                final_doc += f"## {header}\n```shell\n{rendered_block}\n```\n"
            else:
                final_doc += f"```shell\n{rendered_block}\n```\n"
            block = []
            tokens = line.split("<")
            if len(tokens) > 1:
                header = tokens[-1][:-1]
            else:
                header = line.split("--")[-1]
            header = header.upper().replace("-", "_")

        block.append(line)

    rendered_block = "\n".join(block)
    final_doc += f"## {header}\n```shell\n{rendered_block}\n```\n"
    block = []

    filename = "docs/source/reference/launcher.md"
    if check:
        with open(filename, "r") as f:
            doc = f.read()
        if doc != final_doc:
            tmp = "launcher.md"
            with open(tmp, "w") as g:
                g.write(final_doc)
            diff = subprocess.run(
                ["diff", tmp, filename], capture_output=True
            ).stdout.decode("utf-8")
            print(diff)
            raise Exception(
                "Cli arguments Doc is not up-to-date, run `python update_doc.py` in order to update it"
            )
    else:
        with open(filename, "w") as f:
            f.write(final_doc)


def check_supported_models(check: bool):
    filename = "server/text_generation_server/models/__init__.py"
    with open(filename, "r") as f:
        tree = ast.parse(f.read())

    enum_def = [
        x for x in tree.body if isinstance(x, ast.ClassDef) and x.name == "ModelType"
    ][0]
    _locals = {}
    _globals = {}
    exec(f"import enum\n{ast.unparse(enum_def)}", _globals, _locals)
    ModelType = _locals["ModelType"]
    list_string = ""
    for data in ModelType:
        list_string += f"- [{data.value['name']}]({data.value['url']})"
        if data.value.get("multimodal", None):
            list_string += " (Multimodal)"
        list_string += "\n"

    final_doc = TEMPLATE.replace("SUPPORTED_MODELS", list_string)

    filename = "docs/source/supported_models.md"
    if check:
        with open(filename, "r") as f:
            doc = f.read()
        if doc != final_doc:
            tmp = "supported.md"
            with open(tmp, "w") as g:
                g.write(final_doc)
            diff = subprocess.run(
                ["diff", tmp, filename], capture_output=True
            ).stdout.decode("utf-8")
            print(diff)
            raise Exception(
                "Supported models is not up-to-date, run `python update_doc.py` in order to update it"
            )
    else:
        with open(filename, "w") as f:
            f.write(final_doc)


def get_openapi_schema():
    try:
        output = subprocess.check_output(["text-generation-router", "print-schema"])
        return json.loads(output)
    except subprocess.CalledProcessError as e:
        print(f"Error running text-generation-router print-schema: {e}")
        raise SystemExit(1)
    except json.JSONDecodeError:
        print("Error: Invalid JSON received from text-generation-router print-schema")
        raise SystemExit(1)


def check_openapi(check: bool):
    new_openapi_data = get_openapi_schema()
    filename = "docs/openapi.json"
    tmp_filename = "openapi_tmp.json"

    with open(tmp_filename, "w") as f:
        json.dump(new_openapi_data, f, indent=2)
        f.write("\n")

    if check:
        diff = subprocess.run(
            [
                "diff",
                tmp_filename,
                filename,
            ],
            capture_output=True,
        ).stdout.decode("utf-8")
        os.remove(tmp_filename)

        if diff:
            print(diff)
            raise Exception(
                "OpenAPI documentation is not up-to-date, run `python update_doc.py` in order to update it"
            )

    else:
        os.rename(tmp_filename, filename)
        print("OpenAPI documentation updated.")
    p = subprocess.run(
        [
            "redocly",
            # allow for trailing whitespace since it's not significant
            # and the precommit hook will remove it
            "lint",
            "--skip-rule",
            "security-defined",
            filename,
        ],
        capture_output=True,
    )
    errors = p.stderr.decode("utf-8")
    # The openapi specs fails on `exclusive_minimum` which is expected to be a boolean where
    # utoipa outputs a value instead: https://github.com/juhaku/utoipa/issues/969
    print(errors)
    if p.returncode != 0:
        print(errors)
        raise Exception(
            f"OpenAPI documentation is invalid, `redocly lint {filename}` showed some error:\n {errors}"
        )
    return True


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--check", action="store_true")

    args = parser.parse_args()

    check_cli(args.check)
    check_supported_models(args.check)
    check_openapi(args.check)


if __name__ == "__main__":
    main()
text-generation-inference/update_doc.py/0
{ "file_path": "text-generation-inference/update_doc.py", "repo_id": "text-generation-inference", "token_count": 2925 }
324
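`check_cli` and `check_supported_models` in the record above share one pattern: regenerate the document, and in `--check` mode fail with a diff if it no longer matches the committed file. A minimal sketch of that freshness check, using `difflib` instead of spawning `diff` (the function name is illustrative, not from the file):

```python
import difflib

def check_doc(generated: str, on_disk: str, name: str) -> None:
    # Mirrors the check branches above: print a diff and raise when the
    # regenerated document no longer matches the committed file.
    if generated != on_disk:
        diff = "\n".join(
            difflib.unified_diff(
                on_disk.splitlines(), generated.splitlines(), lineterm=""
            )
        )
        print(diff)
        raise RuntimeError(f"{name} is not up-to-date, run `python update_doc.py`")

check_doc("a\nb\n", "a\nb\n", "launcher.md")  # identical: passes silently
```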
.PHONY: style check-style test

DATA_DIR = data

dir_guard=@mkdir -p $(@D)

# Format source code automatically
style:
	npm run lint

# Check the source code is formatted correctly
check-style:
	npm run lint-check

TESTS_RESOURCES = $(DATA_DIR)/small.txt $(DATA_DIR)/roberta.json $(DATA_DIR)/tokenizer-wiki.json $(DATA_DIR)/bert-wiki.json

# Launch the test suite
test: $(TESTS_RESOURCES)
	npm run test

$(DATA_DIR)/big.txt :
	$(dir_guard)
	wget https://norvig.com/big.txt -O $@

$(DATA_DIR)/small.txt : $(DATA_DIR)/big.txt
	head -100 $(DATA_DIR)/big.txt > $@

$(DATA_DIR)/roberta.json :
	$(dir_guard)
	wget https://huggingface.co/roberta-large/raw/main/tokenizer.json -O $@

$(DATA_DIR)/tokenizer-wiki.json :
	$(dir_guard)
	wget https://s3.amazonaws.com/models.huggingface.co/bert/anthony/doc-quicktour/tokenizer.json -O $@

$(DATA_DIR)/bert-wiki.json :
	$(dir_guard)
	wget https://s3.amazonaws.com/models.huggingface.co/bert/anthony/doc-pipeline/tokenizer.json -O $@
tokenizers/bindings/node/Makefile/0
{ "file_path": "tokenizers/bindings/node/Makefile", "repo_id": "tokenizers", "token_count": 406 }
325
import {
  byteLevelPreTokenizer,
  metaspacePreTokenizer,
  punctuationPreTokenizer,
  sequencePreTokenizer,
  splitPreTokenizer,
  whitespaceSplitPreTokenizer,
} from '../../'

describe('byteLevelPreTokenizer', () => {
  it('instantiates correctly', () => {
    const processor = byteLevelPreTokenizer()
    expect(processor.constructor.name).toEqual('PreTokenizer')
  })
})

describe('metaspacePreTokenizer', () => {
  it('instantiates correctly without any parameter', () => {
    const processor = metaspacePreTokenizer()
    expect(processor.constructor.name).toEqual('PreTokenizer')
  })

  it('accepts `undefined` as first parameter', () => {
    expect(metaspacePreTokenizer(undefined)).toBeDefined()
  })

  it('accepts `undefined` as second parameter', () => {
    expect(metaspacePreTokenizer('t', undefined)).toBeDefined()
  })

  it('can pre-tokenize strings', () => {
    const pretok = metaspacePreTokenizer()
    expect(pretok.preTokenizeString('Hello there friend')).toEqual([
      ['▁Hello', [0, 5]],
      ['▁there', [5, 11]],
      ['▁friend', [11, 18]],
    ])
  })
})

describe('punctuationPreTokenizer', () => {
  it('instantiates correctly without any parameter', () => {
    const processor = punctuationPreTokenizer()
    expect(processor.constructor.name).toEqual('PreTokenizer')
  })

  it('instantiates correctly with non-default split delimeter', () => {
    const processor = punctuationPreTokenizer('removed')
    expect(processor.constructor.name).toEqual('PreTokenizer')
  })
})

describe('splitPreTokenizer', () => {
  it('instantiates correctly with invert parameter', () => {
    const processor = splitPreTokenizer(' ', 'mergedWithPrevious', false)
    expect(processor.constructor.name).toEqual('PreTokenizer')
  })
})

describe('sequencePreTokenizer', () => {
  it('instantiates correctly', () => {
    const punctuation = punctuationPreTokenizer()
    const whitespace = whitespaceSplitPreTokenizer()
    const sequence2 = sequencePreTokenizer([])
    expect(sequence2.constructor.name).toEqual('PreTokenizer')
    const sequence3 = sequencePreTokenizer([punctuation, whitespace])
    expect(sequence3.constructor.name).toEqual('PreTokenizer')
  })
})
tokenizers/bindings/node/lib/bindings/pre-tokenizers.test.ts/0
{ "file_path": "tokenizers/bindings/node/lib/bindings/pre-tokenizers.test.ts", "repo_id": "tokenizers", "token_count": 728 }
326
{
  "name": "tokenizers-linux-arm64-gnu",
  "version": "0.13.4-rc1",
  "os": [
    "linux"
  ],
  "cpu": [
    "arm64"
  ],
  "main": "tokenizers.linux-arm64-gnu.node",
  "files": [
    "tokenizers.linux-arm64-gnu.node"
  ],
  "description": "Tokenizers platform specific bindings",
  "keywords": [
    "napi-rs",
    "NAPI",
    "N-API",
    "Rust",
    "node-addon",
    "node-addon-api"
  ],
  "license": "MIT",
  "engines": {
    "node": ">= 10"
  },
  "publishConfig": {
    "registry": "https://registry.npmjs.org/",
    "access": "public"
  },
  "repository": "tokenizers",
  "libc": [
    "glibc"
  ]
}
tokenizers/bindings/node/npm/linux-arm64-gnu/package.json/0
{ "file_path": "tokenizers/bindings/node/npm/linux-arm64-gnu/package.json", "repo_id": "tokenizers", "token_count": 289 }
327
use crate::arc_rwlock_serde;
use serde::{Deserialize, Serialize};
extern crate tokenizers as tk;
use napi::bindgen_prelude::*;
use napi_derive::napi;
use std::sync::{Arc, RwLock};
use tk::decoders::DecoderWrapper;

/// Decoder
#[derive(Clone, Serialize, Deserialize)]
#[napi]
pub struct Decoder {
  #[serde(flatten, with = "arc_rwlock_serde")]
  decoder: Option<Arc<RwLock<DecoderWrapper>>>,
}

#[napi]
impl Decoder {
  #[napi]
  pub fn decode(&self, tokens: Vec<String>) -> Result<String> {
    use tk::Decoder;
    self
      .decoder
      .as_ref()
      .unwrap()
      .read()
      .unwrap()
      .decode(tokens)
      .map_err(|e| Error::from_reason(format!("{e}")))
  }
}

impl tk::Decoder for Decoder {
  fn decode_chain(&self, tokens: Vec<String>) -> tk::Result<Vec<String>> {
    self
      .decoder
      .as_ref()
      .ok_or("Uninitialized Decoder")?
      .read()
      .unwrap()
      .decode_chain(tokens)
  }
}

#[napi]
pub fn bpe_decoder(suffix: Option<String>) -> Decoder {
  let suffix = suffix.unwrap_or("</w>".to_string());
  let decoder = Some(Arc::new(RwLock::new(
    tk::decoders::bpe::BPEDecoder::new(suffix).into(),
  )));
  Decoder { decoder }
}

#[napi]
pub fn byte_fallback_decoder() -> Decoder {
  Decoder {
    decoder: Some(Arc::new(RwLock::new(
      tk::decoders::byte_fallback::ByteFallback::new().into(),
    ))),
  }
}

#[napi]
pub fn ctc_decoder(
  #[napi(ts_arg_type = "string = '<pad>'")] pad_token: Option<String>,
  word_delimiter_token: Option<String>,
  cleanup: Option<bool>,
) -> Decoder {
  let pad_token = pad_token.unwrap_or("<pad>".to_string());
  let word_delimiter_token = word_delimiter_token.unwrap_or("|".to_string());
  let cleanup = cleanup.unwrap_or(true);
  let decoder = Some(Arc::new(RwLock::new(
    tk::decoders::ctc::CTC::new(pad_token, word_delimiter_token, cleanup).into(),
  )));
  Decoder { decoder }
}

#[napi]
pub fn fuse_decoder() -> Decoder {
  Decoder {
    decoder: Some(Arc::new(RwLock::new(
      tk::decoders::fuse::Fuse::new().into(),
    ))),
  }
}

#[napi]
pub fn metaspace_decoder(
  #[napi(ts_arg_type = "string = '▁'")] replacement: Option<String>,
  #[napi(ts_arg_type = "prepend_scheme = 'always'")] prepend_scheme: Option<String>,
  #[napi(ts_arg_type = "split = true")] split: Option<bool>,
) -> Result<Decoder> {
  use tk::pre_tokenizers::metaspace::PrependScheme;
  let split = split.unwrap_or(true);
  let replacement = replacement.unwrap_or("▁".to_string());
  if replacement.chars().count() != 1 {
    return Err(Error::from_reason(
      "replacement is supposed to be a single char",
    ));
  }
  let replacement = replacement.chars().next().unwrap();
  let prepend_scheme: PrependScheme =
    match prepend_scheme.unwrap_or(String::from("always")).as_str() {
      "always" => PrependScheme::Always,
      "first" => PrependScheme::First,
      "never" => PrependScheme::Never,
      _ => {
        return Err(Error::from_reason(
          "prepend_scheme is supposed to be either 'always', 'first' or 'never'",
        ));
      }
    };
  Ok(Decoder {
    decoder: Some(Arc::new(RwLock::new(
      tk::decoders::metaspace::Metaspace::new(replacement, prepend_scheme, split).into(),
    ))),
  })
}

#[napi]
pub fn replace_decoder(pattern: String, content: String) -> Result<Decoder> {
  Ok(Decoder {
    decoder: Some(Arc::new(RwLock::new(
      tk::normalizers::replace::Replace::new(pattern, content)
        .map_err(|e| Error::from_reason(e.to_string()))?
        .into(),
    ))),
  })
}

#[napi]
pub fn sequence_decoder(decoders: Vec<&Decoder>) -> Decoder {
  let sequence: Vec<tk::DecoderWrapper> = decoders
    .into_iter()
    .filter_map(|decoder| {
      decoder
        .decoder
        .as_ref()
        .map(|decoder| (**decoder).read().unwrap().clone())
    })
    .clone()
    .collect();
  Decoder {
    decoder: Some(Arc::new(RwLock::new(tk::DecoderWrapper::Sequence(
      tk::decoders::sequence::Sequence::new(sequence),
    )))),
  }
}

#[napi]
pub fn strip_decoder(content: String, left: u32, right: u32) -> Result<Decoder> {
  let content: char = content.chars().next().ok_or(Error::from_reason(
    "Expected non empty string for strip pattern",
  ))?;
  Ok(Decoder {
    decoder: Some(Arc::new(RwLock::new(
      tk::decoders::strip::Strip::new(content, left as usize, right as usize).into(),
    ))),
  })
}

#[napi]
pub fn word_piece_decoder(
  #[napi(ts_arg_type = "string = '##'")] prefix: Option<String>,
  #[napi(ts_arg_type = "bool = true")] cleanup: Option<bool>,
) -> Decoder {
  let prefix = prefix.unwrap_or("##".to_string());
  let cleanup = cleanup.unwrap_or(true);
  Decoder {
    decoder: Some(Arc::new(RwLock::new(
      tk::decoders::wordpiece::WordPiece::new(prefix, cleanup).into(),
    ))),
  }
}
tokenizers/bindings/node/src/decoders.rs/0
{ "file_path": "tokenizers/bindings/node/src/decoders.rs", "repo_id": "tokenizers", "token_count": 2037 }
328
[target.x86_64-apple-darwin]
rustflags = [
  "-C", "link-arg=-undefined",
  "-C", "link-arg=dynamic_lookup",
  "-C", "link-arg=-mmacosx-version-min=10.11",
]

[target.aarch64-apple-darwin]
rustflags = [
  "-C", "link-arg=-undefined",
  "-C", "link-arg=dynamic_lookup",
  "-C", "link-arg=-mmacosx-version-min=10.11",
]
tokenizers/bindings/python/.cargo/config.toml/0
{ "file_path": "tokenizers/bindings/python/.cargo/config.toml", "repo_id": "tokenizers", "token_count": 146 }
329
# Generated content DO NOT EDIT class AddedToken: """ Represents a token that can be be added to a :class:`~tokenizers.Tokenizer`. It can have special options that defines the way it should behave. Args: content (:obj:`str`): The content of the token single_word (:obj:`bool`, defaults to :obj:`False`): Defines whether this token should only match single words. If :obj:`True`, this token will never match inside of a word. For example the token ``ing`` would match on ``tokenizing`` if this option is :obj:`False`, but not if it is :obj:`True`. The notion of "`inside of a word`" is defined by the word boundaries pattern in regular expressions (ie. the token should start and end with word boundaries). lstrip (:obj:`bool`, defaults to :obj:`False`): Defines whether this token should strip all potential whitespaces on its left side. If :obj:`True`, this token will greedily match any whitespace on its left. For example if we try to match the token ``[MASK]`` with ``lstrip=True``, in the text ``"I saw a [MASK]"``, we would match on ``" [MASK]"``. (Note the space on the left). rstrip (:obj:`bool`, defaults to :obj:`False`): Defines whether this token should strip all potential whitespaces on its right side. If :obj:`True`, this token will greedily match any whitespace on its right. It works just like :obj:`lstrip` but on the right. normalized (:obj:`bool`, defaults to :obj:`True` with :meth:`~tokenizers.Tokenizer.add_tokens` and :obj:`False` with :meth:`~tokenizers.Tokenizer.add_special_tokens`): Defines whether this token should match against the normalized version of the input text. For example, with the added token ``"yesterday"``, and a normalizer in charge of lowercasing the text, the token could be extract from the input ``"I saw a lion Yesterday"``. 
special (:obj:`bool`, defaults to :obj:`False` with :meth:`~tokenizers.Tokenizer.add_tokens` and :obj:`False` with :meth:`~tokenizers.Tokenizer.add_special_tokens`): Defines whether this token should be skipped when decoding. """ def __init__(self, content, single_word=False, lstrip=False, rstrip=False, normalized=True, special=False): pass @property def content(self): """ Get the content of this :obj:`AddedToken` """ pass @property def lstrip(self): """ Get the value of the :obj:`lstrip` option """ pass @property def normalized(self): """ Get the value of the :obj:`normalized` option """ pass @property def rstrip(self): """ Get the value of the :obj:`rstrip` option """ pass @property def single_word(self): """ Get the value of the :obj:`single_word` option """ pass @property def special(self): """ Get the value of the :obj:`special` option """ pass class Encoding: """ The :class:`~tokenizers.Encoding` represents the output of a :class:`~tokenizers.Tokenizer`. """ @property def attention_mask(self): """ The attention mask This indicates to the LM which tokens should be attended to, and which should not. This is especially important when batching sequences, where we need to applying padding. Returns: :obj:`List[int]`: The attention mask """ pass def char_to_token(self, char_pos, sequence_index=0): """ Get the token that contains the char at the given position in the input sequence. Args: char_pos (:obj:`int`): The position of a char in the input string sequence_index (:obj:`int`, defaults to :obj:`0`): The index of the sequence that contains the target char Returns: :obj:`int`: The index of the token that contains this char in the encoded sequence """ pass def char_to_word(self, char_pos, sequence_index=0): """ Get the word that contains the char at the given position in the input sequence. 
Args: char_pos (:obj:`int`): The position of a char in the input string sequence_index (:obj:`int`, defaults to :obj:`0`): The index of the sequence that contains the target char Returns: :obj:`int`: The index of the word that contains this char in the input sequence """ pass @property def ids(self): """ The generated IDs The IDs are the main input to a Language Model. They are the token indices, the numerical representations that a LM understands. Returns: :obj:`List[int]`: The list of IDs """ pass @staticmethod def merge(encodings, growing_offsets=True): """ Merge the list of encodings into one final :class:`~tokenizers.Encoding` Args: encodings (A :obj:`List` of :class:`~tokenizers.Encoding`): The list of encodings that should be merged in one growing_offsets (:obj:`bool`, defaults to :obj:`True`): Whether the offsets should accumulate while merging Returns: :class:`~tokenizers.Encoding`: The resulting Encoding """ pass @property def n_sequences(self): """ The number of sequences represented Returns: :obj:`int`: The number of sequences in this :class:`~tokenizers.Encoding` """ pass @property def offsets(self): """ The offsets associated to each token These offsets let's you slice the input string, and thus retrieve the original part that led to producing the corresponding token. Returns: A :obj:`List` of :obj:`Tuple[int, int]`: The list of offsets """ pass @property def overflowing(self): """ A :obj:`List` of overflowing :class:`~tokenizers.Encoding` When using truncation, the :class:`~tokenizers.Tokenizer` takes care of splitting the output into as many pieces as required to match the specified maximum length. This field lets you retrieve all the subsequent pieces. When you use pairs of sequences, the overflowing pieces will contain enough variations to cover all the possible combinations, while respecting the provided maximum length. 
""" pass def pad(self, length, direction="right", pad_id=0, pad_type_id=0, pad_token="[PAD]"): """ Pad the :class:`~tokenizers.Encoding` at the given length Args: length (:obj:`int`): The desired length direction: (:obj:`str`, defaults to :obj:`right`): The expected padding direction. Can be either :obj:`right` or :obj:`left` pad_id (:obj:`int`, defaults to :obj:`0`): The ID corresponding to the padding token pad_type_id (:obj:`int`, defaults to :obj:`0`): The type ID corresponding to the padding token pad_token (:obj:`str`, defaults to `[PAD]`): The pad token to use """ pass @property def sequence_ids(self): """ The generated sequence indices. They represent the index of the input sequence associated to each token. The sequence id can be None if the token is not related to any input sequence, like for example with special tokens. Returns: A :obj:`List` of :obj:`Optional[int]`: A list of optional sequence index. """ pass def set_sequence_id(self, sequence_id): """ Set the given sequence index Set the given sequence index for the whole range of tokens contained in this :class:`~tokenizers.Encoding`. """ pass @property def special_tokens_mask(self): """ The special token mask This indicates which tokens are special tokens, and which are not. Returns: :obj:`List[int]`: The special tokens mask """ pass def token_to_chars(self, token_index): """ Get the offsets of the token at the given index. The returned offsets are related to the input sequence that contains the token. In order to determine in which input sequence it belongs, you must call :meth:`~tokenizers.Encoding.token_to_sequence()`. Args: token_index (:obj:`int`): The index of a token in the encoded sequence. Returns: :obj:`Tuple[int, int]`: The token offsets :obj:`(first, last + 1)` """ pass def token_to_sequence(self, token_index): """ Get the index of the sequence represented by the given token. 
In the general use case, this method returns :obj:`0` for a single sequence or the first sequence of a pair, and :obj:`1` for the second sequence of a pair Args: token_index (:obj:`int`): The index of a token in the encoded sequence. Returns: :obj:`int`: The sequence id of the given token """ pass def token_to_word(self, token_index): """ Get the index of the word that contains the token in one of the input sequences. The returned word index is related to the input sequence that contains the token. In order to determine in which input sequence it belongs, you must call :meth:`~tokenizers.Encoding.token_to_sequence()`. Args: token_index (:obj:`int`): The index of a token in the encoded sequence. Returns: :obj:`int`: The index of the word in the relevant input sequence. """ pass @property def tokens(self): """ The generated tokens They are the string representation of the IDs. Returns: :obj:`List[str]`: The list of tokens """ pass def truncate(self, max_length, stride=0, direction="right"): """ Truncate the :class:`~tokenizers.Encoding` at the given length If this :class:`~tokenizers.Encoding` represents multiple sequences, when truncating this information is lost. It will be considered as representing a single sequence. Args: max_length (:obj:`int`): The desired length stride (:obj:`int`, defaults to :obj:`0`): The length of previous content to be included in each overflowing piece direction (:obj:`str`, defaults to :obj:`right`): Truncate direction """ pass @property def type_ids(self): """ The generated type IDs Generally used for tasks like sequence classification or question answering, these tokens let the LM know which input sequence corresponds to each tokens. Returns: :obj:`List[int]`: The list of type ids """ pass @property def word_ids(self): """ The generated word indices. They represent the index of the word associated to each token. 
When the input is pre-tokenized, they correspond to the ID of the given input label, otherwise they correspond to the words indices as defined by the :class:`~tokenizers.pre_tokenizers.PreTokenizer` that was used. For special tokens and such (any token that was generated from something that was not part of the input), the output is :obj:`None` Returns: A :obj:`List` of :obj:`Optional[int]`: A list of optional word index. """ pass def word_to_chars(self, word_index, sequence_index=0): """ Get the offsets of the word at the given index in one of the input sequences. Args: word_index (:obj:`int`): The index of a word in one of the input sequences. sequence_index (:obj:`int`, defaults to :obj:`0`): The index of the sequence that contains the target word Returns: :obj:`Tuple[int, int]`: The range of characters (span) :obj:`(first, last + 1)` """ pass def word_to_tokens(self, word_index, sequence_index=0): """ Get the encoded tokens corresponding to the word at the given index in one of the input sequences. Args: word_index (:obj:`int`): The index of a word in one of the input sequences. sequence_index (:obj:`int`, defaults to :obj:`0`): The index of the sequence that contains the target word Returns: :obj:`Tuple[int, int]`: The range of tokens: :obj:`(first, last + 1)` """ pass @property def words(self): """ The generated word indices. .. warning:: This is deprecated and will be removed in a future version. Please use :obj:`~tokenizers.Encoding.word_ids` instead. They represent the index of the word associated to each token. When the input is pre-tokenized, they correspond to the ID of the given input label, otherwise they correspond to the words indices as defined by the :class:`~tokenizers.pre_tokenizers.PreTokenizer` that was used. For special tokens and such (any token that was generated from something that was not part of the input), the output is :obj:`None` Returns: A :obj:`List` of :obj:`Optional[int]`: A list of optional word index. 
""" pass class NormalizedString: """ NormalizedString A NormalizedString takes care of modifying an "original" string, to obtain a "normalized" one. While making all the requested modifications, it keeps track of the alignment information between the two versions of the string. Args: sequence: str: The string sequence used to initialize this NormalizedString """ def append(self, s): """ Append the given sequence to the string """ pass def clear(self): """ Clears the string """ pass def filter(self, func): """ Filter each character of the string using the given func """ pass def for_each(self, func): """ Calls the given function for each character of the string """ pass def lowercase(self): """ Lowercase the string """ pass def lstrip(self): """ Strip the left of the string """ pass def map(self, func): """ Calls the given function for each character of the string Replaces each character of the string using the returned value. Each returned value **must** be a str of length 1 (ie a character). """ pass def nfc(self): """ Runs the NFC normalization """ pass def nfd(self): """ Runs the NFD normalization """ pass def nfkc(self): """ Runs the NFKC normalization """ pass def nfkd(self): """ Runs the NFKD normalization """ pass @property def normalized(self): """ The normalized part of the string """ pass def prepend(self, s): """ Prepend the given sequence to the string """ pass def replace(self, pattern, content): """ Replace the content of the given pattern with the provided content Args: pattern: Pattern: A pattern used to match the string. Usually a string or a Regex content: str: The content to be used as replacement """ pass def rstrip(self): """ Strip the right of the string """ pass def slice(self, range): """ Slice the string using the given range """ pass def split(self, pattern, behavior): """ Split the NormalizedString using the given pattern and the specified behavior Args: pattern: Pattern: A pattern used to split the string. 
Usually a string or a regex built with `tokenizers.Regex`

            behavior: SplitDelimiterBehavior:
                The behavior to use when splitting.
                Choices: "removed", "isolated", "merged_with_previous", "merged_with_next",
                "contiguous"

        Returns:
            A list of NormalizedString, representing each split
        """
        pass

    def strip(self):
        """
        Strip both ends of the string
        """
        pass

    def uppercase(self):
        """
        Uppercase the string
        """
        pass

class PreTokenizedString:
    """
    PreTokenizedString

    Wrapper over a string, that provides a way to normalize, pre-tokenize, tokenize
    the underlying string, while keeping track of the alignment information (offsets).

    The PreTokenizedString manages what we call `splits`. Each split represents a substring
    which is a subpart of the original string, with the relevant offsets and tokens.

    When calling one of the methods used to modify the PreTokenizedString (namely one of
    `split`, `normalize` or `tokenize`), only the `splits` that don't have any associated
    tokens will get modified.

    Args:
        sequence: str:
            The string sequence used to initialize this PreTokenizedString
    """
    def __init__(self, sequence):
        pass

    def get_splits(self, offset_referential="original", offset_type="char"):
        """
        Get the splits currently managed by the PreTokenizedString

        Args:
            offset_referential: :obj:`str`
                Whether the returned splits should have offsets expressed relative
                to the original string, or the normalized one. choices: "original", "normalized".

            offset_type: :obj:`str`
                Whether the returned splits should have offsets expressed in bytes or chars.
                When slicing an str, we usually want to use chars, which is the default value.
                Now in some cases it might be interesting to get these offsets expressed in bytes,
                so it is possible to change this here.
                choices: "char", "bytes"

        Returns:
            A list of splits
        """
        pass

    def normalize(self, func):
        """
        Normalize each split of the `PreTokenizedString` using the given `func`

        Args:
            func: Callable[[NormalizedString], None]:
                The function used to normalize each underlying split.
This function does not need to return anything; just calling the methods on the
                provided NormalizedString allows its modification.
        """
        pass

    def split(self, func):
        """
        Split the PreTokenizedString using the given `func`

        Args:
            func: Callable[[index, NormalizedString], List[NormalizedString]]:
                The function used to split each underlying split.
                It is expected to return a list of `NormalizedString`, that represent the new
                splits. If the given `NormalizedString` does not need any splitting, we can
                just return it directly.
                In order for the offsets to be tracked accurately, any returned `NormalizedString`
                should come from calling either `.split` or `.slice` on the received one.
        """
        pass

    def to_encoding(self, type_id=0, word_idx=None):
        """
        Return an Encoding generated from this PreTokenizedString

        Args:
            type_id: int = 0:
                The type_id to be used on the generated Encoding.

            word_idx: Optional[int] = None:
                An optional word index to be used for each token of this Encoding. If provided,
                all the word indices in the generated Encoding will use this value, instead
                of the one automatically tracked during pre-tokenization.

        Returns:
            An Encoding
        """
        pass

    def tokenize(self, func):
        """
        Tokenize each split of the `PreTokenizedString` using the given `func`

        Args:
            func: Callable[[str], List[Token]]:
                The function used to tokenize each underlying split. This function must return
                a list of Token generated from the input str.
        """
        pass

class Regex:
    """
    Instantiate a new Regex with the given pattern
    """
    def __init__(self, pattern):
        pass

class Token:
    pass

class Tokenizer:
    """
    A :obj:`Tokenizer` works as a pipeline. It processes some raw text as input
    and outputs an :class:`~tokenizers.Encoding`.

    Args:
        model (:class:`~tokenizers.models.Model`):
            The core algorithm that this :obj:`Tokenizer` should be using.
    """
    def __init__(self, model):
        pass

    def add_special_tokens(self, tokens):
        """
        Add the given special tokens to the Tokenizer.
If these tokens are already part of the vocabulary, it just lets the Tokenizer know about
        them. If they don't exist, the Tokenizer creates them, giving them a new id.

        These special tokens will never be processed by the model (ie won't be split into
        multiple tokens), and they can be removed from the output when decoding.

        Args:
            tokens (A :obj:`List` of :class:`~tokenizers.AddedToken` or :obj:`str`):
                The list of special tokens we want to add to the vocabulary. Each token can either
                be a string or an instance of :class:`~tokenizers.AddedToken` for more
                customization.

        Returns:
            :obj:`int`: The number of tokens that were created in the vocabulary
        """
        pass

    def add_tokens(self, tokens):
        """
        Add the given tokens to the vocabulary

        The given tokens are added only if they don't already exist in the vocabulary.
        Each token then gets a newly attributed id.

        Args:
            tokens (A :obj:`List` of :class:`~tokenizers.AddedToken` or :obj:`str`):
                The list of tokens we want to add to the vocabulary. Each token can be either a
                string or an instance of :class:`~tokenizers.AddedToken` for more customization.
Returns:
            :obj:`int`: The number of tokens that were created in the vocabulary
        """
        pass

    def decode(self, ids, skip_special_tokens=True):
        """
        Decode the given list of ids back to a string

        This is used to decode anything coming back from a Language Model

        Args:
            ids (A :obj:`List/Tuple` of :obj:`int`):
                The list of ids that we want to decode

            skip_special_tokens (:obj:`bool`, defaults to :obj:`True`):
                Whether the special tokens should be removed from the decoded string

        Returns:
            :obj:`str`: The decoded string
        """
        pass

    def decode_batch(self, sequences, skip_special_tokens=True):
        """
        Decode a batch of ids back to their corresponding string

        Args:
            sequences (:obj:`List` of :obj:`List[int]`):
                The batch of sequences we want to decode

            skip_special_tokens (:obj:`bool`, defaults to :obj:`True`):
                Whether the special tokens should be removed from the decoded strings

        Returns:
            :obj:`List[str]`: A list of decoded strings
        """
        pass

    @property
    def decoder(self):
        """
        The `optional` :class:`~tokenizers.decoders.Decoder` in use by the Tokenizer
        """
        pass

    def enable_padding(
        self, direction="right", pad_id=0, pad_type_id=0, pad_token="[PAD]", length=None, pad_to_multiple_of=None
    ):
        """
        Enable the padding

        Args:
            direction (:obj:`str`, `optional`, defaults to :obj:`right`):
                The direction in which to pad. Can be either ``right`` or ``left``

            pad_to_multiple_of (:obj:`int`, `optional`):
                If specified, the padding length should always snap to the next multiple of the
                given value. For example if we were going to pad with a length of 250 but
                ``pad_to_multiple_of=8`` then we will pad to 256.

            pad_id (:obj:`int`, defaults to 0):
                The id to be used when padding

            pad_type_id (:obj:`int`, defaults to 0):
                The type id to be used when padding

            pad_token (:obj:`str`, defaults to :obj:`[PAD]`):
                The pad token to be used when padding

            length (:obj:`int`, `optional`):
                If specified, the length at which to pad. If not specified we pad using the size of
                the longest sequence in a batch.
""" pass def enable_truncation(self, max_length, stride=0, strategy="longest_first", direction="right"): """ Enable truncation Args: max_length (:obj:`int`): The max length at which to truncate stride (:obj:`int`, `optional`): The length of the previous first sequence to be included in the overflowing sequence strategy (:obj:`str`, `optional`, defaults to :obj:`longest_first`): The strategy used to truncation. Can be one of ``longest_first``, ``only_first`` or ``only_second``. direction (:obj:`str`, defaults to :obj:`right`): Truncate direction """ pass def encode(self, sequence, pair=None, is_pretokenized=False, add_special_tokens=True): """ Encode the given sequence and pair. This method can process raw text sequences as well as already pre-tokenized sequences. Example: Here are some examples of the inputs that are accepted:: encode("A single sequence")` encode("A sequence", "And its pair")` encode([ "A", "pre", "tokenized", "sequence" ], is_pretokenized=True)` encode( [ "A", "pre", "tokenized", "sequence" ], [ "And", "its", "pair" ], is_pretokenized=True ) Args: sequence (:obj:`~tokenizers.InputSequence`): The main input sequence we want to encode. This sequence can be either raw text or pre-tokenized, according to the ``is_pretokenized`` argument: - If ``is_pretokenized=False``: :class:`~tokenizers.TextInputSequence` - If ``is_pretokenized=True``: :class:`~tokenizers.PreTokenizedInputSequence` pair (:obj:`~tokenizers.InputSequence`, `optional`): An optional input sequence. The expected format is the same that for ``sequence``. is_pretokenized (:obj:`bool`, defaults to :obj:`False`): Whether the input is already pre-tokenized add_special_tokens (:obj:`bool`, defaults to :obj:`True`): Whether to add the special tokens Returns: :class:`~tokenizers.Encoding`: The encoded result """ pass def encode_batch(self, input, is_pretokenized=False, add_special_tokens=True): """ Encode the given batch of inputs. 
This method accepts both raw text sequences and already pre-tokenized sequences.
        The reason we use `PySequence` is that it allows zero-cost type checking
        (according to PyO3), as we don't have to convert to check.

        Example:
            Here are some examples of the inputs that are accepted::

                encode_batch([
                    "A single sequence",
                    ("A tuple with a sequence", "And its pair"),
                    [ "A", "pre", "tokenized", "sequence" ],
                    ([ "A", "pre", "tokenized", "sequence" ], "And its pair")
                ])

        Args:
            input (A :obj:`List`/:obj:`Tuple` of :obj:`~tokenizers.EncodeInput`):
                A list of single sequences or pair sequences to encode. Each sequence
                can be either raw text or pre-tokenized, according to the ``is_pretokenized``
                argument:

                - If ``is_pretokenized=False``: :class:`~tokenizers.TextEncodeInput`
                - If ``is_pretokenized=True``: :class:`~tokenizers.PreTokenizedEncodeInput`

            is_pretokenized (:obj:`bool`, defaults to :obj:`False`):
                Whether the input is already pre-tokenized

            add_special_tokens (:obj:`bool`, defaults to :obj:`True`):
                Whether to add the special tokens

        Returns:
            A :obj:`List` of :class:`~tokenizers.Encoding`: The encoded batch
        """
        pass

    def encode_batch_fast(self, input, is_pretokenized=False, add_special_tokens=True):
        """
        Encode the given batch of inputs.

        This method is faster than `encode_batch` because it doesn't keep track of offsets;
        they will all be zero.

        Example:
            Here are some examples of the inputs that are accepted::

                encode_batch_fast([
                    "A single sequence",
                    ("A tuple with a sequence", "And its pair"),
                    [ "A", "pre", "tokenized", "sequence" ],
                    ([ "A", "pre", "tokenized", "sequence" ], "And its pair")
                ])

        Args:
            input (A :obj:`List`/:obj:`Tuple` of :obj:`~tokenizers.EncodeInput`):
                A list of single sequences or pair sequences to encode.
Each sequence can be either raw text or pre-tokenized, according to the ``is_pretokenized`` argument: - If ``is_pretokenized=False``: :class:`~tokenizers.TextEncodeInput` - If ``is_pretokenized=True``: :class:`~tokenizers.PreTokenizedEncodeInput` is_pretokenized (:obj:`bool`, defaults to :obj:`False`): Whether the input is already pre-tokenized add_special_tokens (:obj:`bool`, defaults to :obj:`True`): Whether to add the special tokens Returns: A :obj:`List` of :class:`~tokenizers.Encoding`: The encoded batch """ pass @property def encode_special_tokens(self): """ Modifies the tokenizer in order to use or not the special tokens during encoding. Args: value (:obj:`bool`): Whether to use the special tokens or not """ pass @staticmethod def from_buffer(buffer): """ Instantiate a new :class:`~tokenizers.Tokenizer` from the given buffer. Args: buffer (:obj:`bytes`): A buffer containing a previously serialized :class:`~tokenizers.Tokenizer` Returns: :class:`~tokenizers.Tokenizer`: The new tokenizer """ pass @staticmethod def from_file(path): """ Instantiate a new :class:`~tokenizers.Tokenizer` from the file at the given path. Args: path (:obj:`str`): A path to a local JSON file representing a previously serialized :class:`~tokenizers.Tokenizer` Returns: :class:`~tokenizers.Tokenizer`: The new tokenizer """ pass @staticmethod def from_pretrained(identifier, revision="main", token=None): """ Instantiate a new :class:`~tokenizers.Tokenizer` from an existing file on the Hugging Face Hub. 
Args: identifier (:obj:`str`): The identifier of a Model on the Hugging Face Hub, that contains a tokenizer.json file revision (:obj:`str`, defaults to `main`): A branch or commit id token (:obj:`str`, `optional`, defaults to `None`): An optional auth token used to access private repositories on the Hugging Face Hub Returns: :class:`~tokenizers.Tokenizer`: The new tokenizer """ pass @staticmethod def from_str(json): """ Instantiate a new :class:`~tokenizers.Tokenizer` from the given JSON string. Args: json (:obj:`str`): A valid JSON string representing a previously serialized :class:`~tokenizers.Tokenizer` Returns: :class:`~tokenizers.Tokenizer`: The new tokenizer """ pass def get_added_tokens_decoder(self): """ Get the underlying vocabulary Returns: :obj:`Dict[int, AddedToken]`: The vocabulary """ pass def get_vocab(self, with_added_tokens=True): """ Get the underlying vocabulary Args: with_added_tokens (:obj:`bool`, defaults to :obj:`True`): Whether to include the added tokens Returns: :obj:`Dict[str, int]`: The vocabulary """ pass def get_vocab_size(self, with_added_tokens=True): """ Get the size of the underlying vocabulary Args: with_added_tokens (:obj:`bool`, defaults to :obj:`True`): Whether to include the added tokens Returns: :obj:`int`: The size of the vocabulary """ pass def id_to_token(self, id): """ Convert the given id to its corresponding token if it exists Args: id (:obj:`int`): The id to convert Returns: :obj:`Optional[str]`: An optional token, :obj:`None` if out of vocabulary """ pass @property def model(self): """ The :class:`~tokenizers.models.Model` in use by the Tokenizer """ pass def no_padding(self): """ Disable padding """ pass def no_truncation(self): """ Disable truncation """ pass @property def normalizer(self): """ The `optional` :class:`~tokenizers.normalizers.Normalizer` in use by the Tokenizer """ pass def num_special_tokens_to_add(self, is_pair): """ Return the number of special tokens that would be added for single/pair sentences. 
:param is_pair: Boolean indicating if the input would be a single sentence or a pair :return: """ pass @property def padding(self): """ Get the current padding parameters `Cannot be set, use` :meth:`~tokenizers.Tokenizer.enable_padding` `instead` Returns: (:obj:`dict`, `optional`): A dict with the current padding parameters if padding is enabled """ pass def post_process(self, encoding, pair=None, add_special_tokens=True): """ Apply all the post-processing steps to the given encodings. The various steps are: 1. Truncate according to the set truncation params (provided with :meth:`~tokenizers.Tokenizer.enable_truncation`) 2. Apply the :class:`~tokenizers.processors.PostProcessor` 3. Pad according to the set padding params (provided with :meth:`~tokenizers.Tokenizer.enable_padding`) Args: encoding (:class:`~tokenizers.Encoding`): The :class:`~tokenizers.Encoding` corresponding to the main sequence. pair (:class:`~tokenizers.Encoding`, `optional`): An optional :class:`~tokenizers.Encoding` corresponding to the pair sequence. add_special_tokens (:obj:`bool`): Whether to add the special tokens Returns: :class:`~tokenizers.Encoding`: The final post-processed encoding """ pass @property def post_processor(self): """ The `optional` :class:`~tokenizers.processors.PostProcessor` in use by the Tokenizer """ pass @property def pre_tokenizer(self): """ The `optional` :class:`~tokenizers.pre_tokenizers.PreTokenizer` in use by the Tokenizer """ pass def save(self, path, pretty=True): """ Save the :class:`~tokenizers.Tokenizer` to the file at the given path. Args: path (:obj:`str`): A path to a file in which to save the serialized tokenizer. pretty (:obj:`bool`, defaults to :obj:`True`): Whether the JSON file should be pretty formatted. """ pass def to_str(self, pretty=False): """ Gets a serialized string representing this :class:`~tokenizers.Tokenizer`. Args: pretty (:obj:`bool`, defaults to :obj:`False`): Whether the JSON string should be pretty formatted. 
Returns:
            :obj:`str`: A string representing the serialized Tokenizer
        """
        pass

    def token_to_id(self, token):
        """
        Convert the given token to its corresponding id if it exists

        Args:
            token (:obj:`str`):
                The token to convert

        Returns:
            :obj:`Optional[int]`: An optional id, :obj:`None` if out of vocabulary
        """
        pass

    def train(self, files, trainer=None):
        """
        Train the Tokenizer using the given files.

        Reads the files line by line, while keeping all the whitespace, even new lines.
        If you want to train from data stored in memory, you can check
        :meth:`~tokenizers.Tokenizer.train_from_iterator`

        Args:
            files (:obj:`List[str]`):
                A list of paths to the files that we should use for training

            trainer (:obj:`~tokenizers.trainers.Trainer`, `optional`):
                An optional trainer that should be used to train our Model
        """
        pass

    def train_from_iterator(self, iterator, trainer=None, length=None):
        """
        Train the Tokenizer using the provided iterator.

        You can provide anything that is a Python Iterator:

            * A list of sequences :obj:`List[str]`
            * A generator that yields :obj:`str` or :obj:`List[str]`
            * A Numpy array of strings
            * ...

        Args:
            iterator (:obj:`Iterator`):
                Any iterator over strings or list of strings

            trainer (:obj:`~tokenizers.trainers.Trainer`, `optional`):
                An optional trainer that should be used to train our Model

            length (:obj:`int`, `optional`):
                The total number of sequences in the iterator. This is used to
                provide meaningful progress tracking
        """
        pass

    @property
    def truncation(self):
        """
        Get the currently set truncation parameters

        `Cannot be set, use` :meth:`~tokenizers.Tokenizer.enable_truncation` `instead`

        Returns:
            (:obj:`dict`, `optional`):
                A dict with the current truncation parameters if truncation is enabled
        """
        pass
tokenizers/bindings/python/py_src/tokenizers/__init__.pyi
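The `enable_padding` docstring above describes how `length` and `pad_to_multiple_of` interact (pad to 250, snap up to 256 when the multiple is 8). A minimal pure-Python sketch of that length computation — `padded_length` is a hypothetical helper for illustration, not part of the `tokenizers` API:

```python
def padded_length(current, length=None, pad_to_multiple_of=None):
    """Compute the target padded length, mimicking enable_padding semantics."""
    # Pad at least to `length` (or to the sequence's own size if unspecified)...
    target = max(current, length if length is not None else current)
    # ...then snap up to the next multiple of `pad_to_multiple_of`, if given.
    if pad_to_multiple_of:
        remainder = target % pad_to_multiple_of
        if remainder:
            target += pad_to_multiple_of - remainder
    return target

print(padded_length(250, length=250, pad_to_multiple_of=8))  # 256
```

The snapping only ever rounds up, so a sequence longer than `length` keeps its own size (possibly rounded up to the multiple).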
# Generated content DO NOT EDIT from .. import processors PostProcessor = processors.PostProcessor BertProcessing = processors.BertProcessing ByteLevel = processors.ByteLevel RobertaProcessing = processors.RobertaProcessing Sequence = processors.Sequence TemplateProcessing = processors.TemplateProcessing
tokenizers/bindings/python/py_src/tokenizers/processors/__init__.py
#![warn(clippy::all)] #![allow(clippy::upper_case_acronyms)] // Many false positives with pyo3 it seems &str, and &PyAny get flagged #![allow(clippy::borrow_deref_ref)] extern crate tokenizers as tk; mod decoders; mod encoding; mod error; mod models; mod normalizers; mod pre_tokenizers; mod processors; mod token; mod tokenizer; mod trainers; mod utils; use pyo3::prelude::*; use pyo3::wrap_pymodule; pub const VERSION: &str = env!("CARGO_PKG_VERSION"); // For users using multiprocessing in python, it is quite easy to fork the process running // tokenizers, ending up with a deadlock because we internally make use of multithreading. So // we register a callback to be called in the event of a fork so that we can warn the user. #[cfg(target_family = "unix")] static mut REGISTERED_FORK_CALLBACK: bool = false; #[cfg(target_family = "unix")] extern "C" fn child_after_fork() { use tk::parallelism::*; if has_parallelism_been_used() && !is_parallelism_configured() { eprintln!( "huggingface/tokenizers: The current process just got forked, after parallelism has \ already been used. Disabling parallelism to avoid deadlocks..." 
); eprintln!("To disable this warning, you can either:"); eprintln!( "\t- Avoid using `tokenizers` before the fork if possible\n\ \t- Explicitly set the environment variable {ENV_VARIABLE}=(true | false)" ); set_parallelism(false); } } /// Tokenizers Module #[pymodule] pub fn tokenizers(m: &Bound<'_, PyModule>) -> PyResult<()> { let _ = env_logger::try_init_from_env("TOKENIZERS_LOG"); // Register the fork callback #[cfg(target_family = "unix")] unsafe { if !REGISTERED_FORK_CALLBACK { libc::pthread_atfork(None, None, Some(child_after_fork)); REGISTERED_FORK_CALLBACK = true; } } m.add_class::<tokenizer::PyTokenizer>()?; m.add_class::<tokenizer::PyAddedToken>()?; m.add_class::<token::PyToken>()?; m.add_class::<encoding::PyEncoding>()?; m.add_class::<utils::PyRegex>()?; m.add_class::<utils::PyNormalizedString>()?; m.add_class::<utils::PyPreTokenizedString>()?; m.add_wrapped(wrap_pymodule!(models::models))?; m.add_wrapped(wrap_pymodule!(pre_tokenizers::pre_tokenizers))?; m.add_wrapped(wrap_pymodule!(decoders::decoders))?; m.add_wrapped(wrap_pymodule!(processors::processors))?; m.add_wrapped(wrap_pymodule!(normalizers::normalizers))?; m.add_wrapped(wrap_pymodule!(trainers::trainers))?; m.add("__version__", env!("CARGO_PKG_VERSION"))?; Ok(()) }
tokenizers/bindings/python/src/lib.rs
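The fork callback in `lib.rs` only fires when parallelism was used but never explicitly configured. From Python, the warning can be avoided by setting the environment variable up front — a sketch assuming the variable is named `TOKENIZERS_PARALLELISM` (the Rust side reads its name via `ENV_VARIABLE`); the helper itself is hypothetical, not a library function:

```python
import os

def set_tokenizers_parallelism(enabled: bool) -> None:
    # Setting this before the first encode makes the choice explicit, so the
    # post-fork callback never needs to disable parallelism on our behalf.
    os.environ["TOKENIZERS_PARALLELISM"] = "true" if enabled else "false"

# e.g. before spawning workers with multiprocessing:
set_tokenizers_parallelism(False)
print(os.environ["TOKENIZERS_PARALLELISM"])  # false
```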
from tokenizers import BertWordPieceTokenizer from ..utils import bert_files, data_dir, multiprocessing_with_parallelism class TestBertWordPieceTokenizer: def test_basic_encode(self, bert_files): tokenizer = BertWordPieceTokenizer.from_file(bert_files["vocab"]) # Encode with special tokens by default output = tokenizer.encode("My name is John", "pair") assert output.ids == [101, 2026, 2171, 2003, 2198, 102, 3940, 102] assert output.tokens == [ "[CLS]", "my", "name", "is", "john", "[SEP]", "pair", "[SEP]", ] assert output.offsets == [ (0, 0), (0, 2), (3, 7), (8, 10), (11, 15), (0, 0), (0, 4), (0, 0), ] assert output.type_ids == [0, 0, 0, 0, 0, 0, 1, 1] # Can encode without the special tokens output = tokenizer.encode("My name is John", "pair", add_special_tokens=False) assert output.ids == [2026, 2171, 2003, 2198, 3940] assert output.tokens == ["my", "name", "is", "john", "pair"] assert output.offsets == [(0, 2), (3, 7), (8, 10), (11, 15), (0, 4)] assert output.type_ids == [0, 0, 0, 0, 1] def test_multiprocessing_with_parallelism(self, bert_files): tokenizer = BertWordPieceTokenizer.from_file(bert_files["vocab"]) multiprocessing_with_parallelism(tokenizer, False) multiprocessing_with_parallelism(tokenizer, True) def test_train_from_iterator(self): text = ["A first sentence", "Another sentence", "And a last one"] tokenizer = BertWordPieceTokenizer() tokenizer.train_from_iterator(text, show_progress=False) output = tokenizer.encode("A sentence") assert output.tokens == ["a", "sentence"]
tokenizers/bindings/python/tests/implementations/test_bert_wordpiece.py
# Post-processors <tokenizerslangcontent> <python> ## BertProcessing [[autodoc]] tokenizers.processors.BertProcessing ## ByteLevel [[autodoc]] tokenizers.processors.ByteLevel ## RobertaProcessing [[autodoc]] tokenizers.processors.RobertaProcessing ## TemplateProcessing [[autodoc]] tokenizers.processors.TemplateProcessing </python> <rust> The Rust API Reference is available directly on the [Docs.rs](https://docs.rs/tokenizers/latest/tokenizers/) website. </rust> <node> The node API has not been documented yet. </node> </tokenizerslangcontent>
tokenizers/docs/source-doc-builder/api/post-processors.mdx
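`TemplateProcessing` assembles the final sequence from a template of special tokens and `$A`/`$B` placeholders, each optionally suffixed with `:type_id`. A pure-Python sketch of that template language (illustrative only — the real post-processor operates on `Encoding` objects and resolves special-token ids, which this toy version skips):

```python
def expand_template(template, seq_a, seq_b=None):
    """Expand e.g. "[CLS] $A [SEP] $B:1 [SEP]:1" into (tokens, type_ids)."""
    tokens, type_ids = [], []
    for piece in template.split():
        name, _, tid = piece.partition(":")
        type_id = int(tid) if tid else 0  # default type_id is 0
        if name == "$A":
            fill = seq_a
        elif name == "$B":
            fill = seq_b or []
        else:
            fill = [name]  # a literal special token such as [CLS]
        tokens.extend(fill)
        type_ids.extend([type_id] * len(fill))
    return tokens, type_ids

print(expand_template("[CLS] $A [SEP] $B:1 [SEP]:1", ["my", "name"], ["pair"]))
```

The output mirrors the BERT pair layout seen in the WordPiece test above: everything up to the first `[SEP]` gets type id 0, the pair and trailing `[SEP]` get type id 1.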
Crates.io ---------------------------------------------------------------------------------------------------- 🤗 Tokenizers is available on `crates.io <https://crates.io/crates/tokenizers>`__. You just need to add it to your :obj:`Cargo.toml`:: tokenizers = "0.10"
tokenizers/docs/source/installation/rust.inc
use tokenizers::Tokenizer; fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> { let tokenizer = Tokenizer::from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct", None)?; let data = std::fs::read_to_string("data/big.txt")?; let data: Vec<_> = data.lines().collect(); let add_special_tokens = false; tokenizer.encode_batch_char_offsets(data, add_special_tokens)?; Ok(()) }
tokenizers/tokenizers/examples/encode_batch.rs
import * as wasm from "unstable_wasm"; console.log(wasm.tokenize("ab")); console.log(wasm.tokenize("abc"));
tokenizers/tokenizers/examples/unstable_wasm/www/index.js
use super::{super::OrderedVocabIter, convert_merges_to_hashmap, BpeBuilder, Pair, BPE}; use ahash::AHashMap; use serde::{ de::{Error, MapAccess, Visitor}, ser::SerializeStruct, Deserialize, Deserializer, Serialize, Serializer, }; impl Serialize for BPE { fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error> where S: Serializer, { let mut model = serializer.serialize_struct("BPE", 8)?; // Start by small fields model.serialize_field("type", "BPE")?; model.serialize_field("dropout", &self.dropout)?; model.serialize_field("unk_token", &self.unk_token)?; model.serialize_field("continuing_subword_prefix", &self.continuing_subword_prefix)?; model.serialize_field("end_of_word_suffix", &self.end_of_word_suffix)?; model.serialize_field("fuse_unk", &self.fuse_unk)?; model.serialize_field("byte_fallback", &self.byte_fallback)?; model.serialize_field("ignore_merges", &self.ignore_merges)?; // Then the large ones let mut merges: Vec<(&Pair, &u32)> = self .merges .iter() .map(|(pair, (rank, _))| (pair, rank)) .collect(); merges.sort_unstable_by_key(|k| *k.1); let merges = merges .into_iter() .map(|(pair, _)| (self.vocab_r[&pair.0].clone(), self.vocab_r[&pair.1].clone())) .collect::<Vec<_>>(); let ordered_vocab = OrderedVocabIter::new(&self.vocab_r); model.serialize_field("vocab", &ordered_vocab)?; model.serialize_field("merges", &merges)?; model.end() } } impl<'de> Deserialize<'de> for BPE { fn deserialize<D>(deserializer: D) -> Result<Self, D::Error> where D: Deserializer<'de>, { deserializer.deserialize_struct( "BPE", &[ "type", "dropout", "unk_token", "continuing_subword_prefix", "end_of_word_suffix", "fuse_unk", "byte_fallback", "ignore_merges", "vocab", "merges", ], BPEVisitor, ) } } struct BPEVisitor; impl<'de> Visitor<'de> for BPEVisitor { type Value = BPE; fn expecting(&self, fmt: &mut std::fmt::Formatter) -> std::fmt::Result { write!(fmt, "struct BPE") } fn visit_map<V>(self, mut map: V) -> std::result::Result<Self::Value, V::Error> where V: MapAccess<'de>, { 
let mut builder = BpeBuilder::new(); let mut vocab: Option<AHashMap<String, u32>> = None; #[derive(Debug, Deserialize)] #[serde(untagged)] enum MergeType { Tuple(Vec<(String, String)>), Legacy(Vec<String>), } let mut merges: Option<MergeType> = None; while let Some(key) = map.next_key::<String>()? { match key.as_ref() { "dropout" => { if let Some(dropout) = map.next_value()? { builder = builder.dropout(dropout); } } "unk_token" => { if let Some(unk) = map.next_value()? { builder = builder.unk_token(unk); } } "continuing_subword_prefix" => { if let Some(prefix) = map.next_value()? { builder = builder.continuing_subword_prefix(prefix); } } "end_of_word_suffix" => { if let Some(suffix) = map.next_value()? { builder = builder.end_of_word_suffix(suffix); } } "fuse_unk" => { if let Some(suffix) = map.next_value()? { builder = builder.fuse_unk(suffix); } } "byte_fallback" => { if let Some(suffix) = map.next_value()? { builder = builder.byte_fallback(suffix); } } "ignore_merges" => { if let Some(suffix) = map.next_value()? { builder = builder.ignore_merges(suffix); } } "vocab" => vocab = Some(map.next_value()?), "merges" => merges = Some(map.next_value()?), "type" => match map.next_value()? { "BPE" => {} u => { return Err(serde::de::Error::invalid_value( serde::de::Unexpected::Str(u), &"BPE", )) } }, _ => {} } } if let (Some(vocab), Some(merges)) = (vocab, merges) { let merges = match merges { MergeType::Tuple(merges) => merges, MergeType::Legacy(merges) => { convert_merges_to_hashmap(merges.into_iter(), &vocab).map_err(Error::custom)? } }; builder = builder.vocab_and_merges(vocab, merges); Ok(builder.build().map_err(Error::custom)?) 
} else { Err(Error::custom("Missing vocab/merges")) } } } #[cfg(test)] mod test { use super::*; use crate::models::bpe::Vocab; #[test] fn test_serialization() { let vocab: Vocab = [ ("<unk>".into(), 0), ("a".into(), 1), ("b".into(), 2), ("ab".into(), 3), ] .iter() .cloned() .collect(); let bpe = BpeBuilder::default() .vocab_and_merges(vocab, vec![("a".to_string(), "b".to_string())]) .unk_token("<unk>".to_string()) .ignore_merges(true) .build() .unwrap(); let legacy = r#"{"type":"BPE","dropout":null,"unk_token":"<unk>","continuing_subword_prefix":null,"end_of_word_suffix":null,"fuse_unk":false,"byte_fallback":false,"ignore_merges":true,"vocab":{"<unk>":0,"a":1,"b":2,"ab":3},"merges":["a b"]}"#; let legacy = serde_json::from_str(legacy).unwrap(); assert_eq!(bpe, legacy); let data = serde_json::to_string(&bpe).unwrap(); assert_eq!( data, r#"{"type":"BPE","dropout":null,"unk_token":"<unk>","continuing_subword_prefix":null,"end_of_word_suffix":null,"fuse_unk":false,"byte_fallback":false,"ignore_merges":true,"vocab":{"<unk>":0,"a":1,"b":2,"ab":3},"merges":[["a","b"]]}"# ); let reconstructed = serde_json::from_str(&data).unwrap(); assert_eq!(bpe, reconstructed); // With a space in the token let vocab: Vocab = [ ("<unk>".into(), 0), ("a".into(), 1), ("b c d".into(), 2), ("ab c d".into(), 3), ] .iter() .cloned() .collect(); let bpe = BpeBuilder::default() .vocab_and_merges(vocab, vec![("a".to_string(), "b c d".to_string())]) .unk_token("<unk>".to_string()) .ignore_merges(true) .build() .unwrap(); let data = serde_json::to_string(&bpe).unwrap(); assert_eq!( data, r#"{"type":"BPE","dropout":null,"unk_token":"<unk>","continuing_subword_prefix":null,"end_of_word_suffix":null,"fuse_unk":false,"byte_fallback":false,"ignore_merges":true,"vocab":{"<unk>":0,"a":1,"b c d":2,"ab c d":3},"merges":[["a","b c d"]]}"# ); let reconstructed = serde_json::from_str(&data).unwrap(); assert_eq!(bpe, reconstructed); } #[test] fn test_serialization_ignore_merges() { let vocab: Vocab = 
[("<unk>".into(), 0), ("a".into(), 1), ("b".into(), 2)] .iter() .cloned() .collect(); let mut bpe = BpeBuilder::default() .vocab_and_merges(vocab, vec![]) .unk_token("<unk>".to_string()) .ignore_merges(true) .build() .unwrap(); let bpe_string = r#"{"type":"BPE","dropout":null,"unk_token":"<unk>","continuing_subword_prefix":null,"end_of_word_suffix":null,"fuse_unk":false,"byte_fallback":false,"ignore_merges":true,"vocab":{"<unk>":0,"a":1,"b":2},"merges":[]}"#; assert_eq!(serde_json::from_str::<BPE>(bpe_string).unwrap(), bpe); bpe.ignore_merges = false; let bpe_string = r#"{"type":"BPE","dropout":null,"unk_token":"<unk>","continuing_subword_prefix":null,"end_of_word_suffix":null,"fuse_unk":false,"byte_fallback":false,"vocab":{"<unk>":0,"a":1,"b":2},"merges":[]}"#; assert_eq!(serde_json::from_str::<BPE>(bpe_string).unwrap(), bpe); } }
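The deserializer above accepts `merges` in two shapes: the modern list of `[left, right]` tuples, and the legacy list of space-separated `"left right"` strings that `convert_merges_to_hashmap` turns back into pairs. The legacy encoding cannot unambiguously represent tokens that themselves contain a space, which is what the tuple format (and the `"b c d"` test above) exists for. Below is a minimal Python sketch of one vocabulary-guided way to split such strings; it is illustrative only, not the crate's algorithm.

```python
def convert_legacy_merges(merges, vocab):
    """Split each legacy "left right" merge string into a (left, right) pair.

    A token containing a space makes the split ambiguous; here we try every
    split point and keep the first one where both halves are in the vocab.
    (Hypothetical helper, not the crate's actual API.)
    """
    out = []
    for merge in merges:
        candidates = [
            (merge[:i], merge[i + 1:])
            for i, ch in enumerate(merge)
            if ch == " "
        ]
        pair = next(
            ((l, r) for l, r in candidates if l in vocab and r in vocab),
            None,
        )
        if pair is None:
            raise ValueError(f"cannot split merge {merge!r} against the vocab")
        out.append(pair)
    return out
```

For plain tokens there is a single split point, so this reduces to the common case directly.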
tokenizers/tokenizers/src/models/bpe/serialization.rs
use crate::tokenizer::{NormalizedString, Normalizer, Result}; use serde::{Deserialize, Serialize}; use unicode_categories::UnicodeCategories; /// Checks whether a character is whitespace fn is_whitespace(c: char) -> bool { // These are technically control characters but we count them as whitespace match c { '\t' | '\n' | '\r' => true, _ => c.is_whitespace(), } } /// Checks whether a character is a control character fn is_control(c: char) -> bool { // These are technically control characters but we count them as whitespace match c { '\t' | '\n' | '\r' => false, // The definition of `is_control` here is quite large and contains also // Cc, Cf, Cn or Co // cf. https://unicode.org/reports/tr44/ (Table 12) _ => c.is_other(), } } /// Checks whether a character is chinese /// This defines a "chinese character" as anything in the CJK Unicode block: /// https://en.wikipedia.org/wiki/CJK_Unified_Ideographs_(Unicode_block) /// /// Note that the CJK Unicode block is NOT all Japanese and Korean characters, /// despite its name. The modern Korean Hangul alphabet is a different block, /// as is Japanese Hiragana and Katakana. Those alphabets are used to write /// space-separated words, so they are not treated specially and handled /// like for all of the other languages. fn is_chinese_char(c: char) -> bool { matches!( c as usize, 0x4E00..=0x9FFF | 0x3400..=0x4DBF | 0x20000..=0x2A6DF | 0x2A700..=0x2B73F | 0x2B740..=0x2B81F | 0x2B920..=0x2CEAF | 0xF900..=0xFAFF | 0x2F800..=0x2FA1F ) } #[derive(Copy, Clone, Debug, Deserialize, Serialize)] #[serde(tag = "type")] #[non_exhaustive] pub struct BertNormalizer { /// Whether to do the bert basic cleaning: /// 1. Remove any control characters /// 2. 
Replace all sorts of whitespace by the classic one ` ` pub clean_text: bool, /// Whether to put spaces around chinese characters so they get split pub handle_chinese_chars: bool, /// Whether to strip accents pub strip_accents: Option<bool>, /// Whether to lowercase the input pub lowercase: bool, } impl Default for BertNormalizer { fn default() -> Self { Self { clean_text: true, handle_chinese_chars: true, strip_accents: None, lowercase: true, } } } impl BertNormalizer { pub fn new( clean_text: bool, handle_chinese_chars: bool, strip_accents: Option<bool>, lowercase: bool, ) -> Self { Self { clean_text, handle_chinese_chars, strip_accents, lowercase, } } fn do_clean_text(&self, normalized: &mut NormalizedString) { normalized .filter(|c| !(c as usize == 0 || c as usize == 0xfffd || is_control(c))) .map(|c| if is_whitespace(c) { ' ' } else { c }); } fn do_handle_chinese_chars(&self, normalized: &mut NormalizedString) { let mut new_chars: Vec<(char, isize)> = vec![]; normalized.for_each(|c| { if is_chinese_char(c) { new_chars.extend([(' ', 0), (c, 1), (' ', 1)]); } else { new_chars.push((c, 0)); } }); normalized.transform(new_chars, 0); } fn do_strip_accents(&self, normalized: &mut NormalizedString) { normalized.nfd().filter(|c| !c.is_mark_nonspacing()); } fn do_lowercase(&self, normalized: &mut NormalizedString) { normalized.lowercase(); } } impl Normalizer for BertNormalizer { fn normalize(&self, normalized: &mut NormalizedString) -> Result<()> { if self.clean_text { self.do_clean_text(normalized); } if self.handle_chinese_chars { self.do_handle_chinese_chars(normalized); } let strip_accents = self.strip_accents.unwrap_or(self.lowercase); if strip_accents { self.do_strip_accents(normalized); } if self.lowercase { self.do_lowercase(normalized); } Ok(()) } }
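`normalize` applies its steps in a fixed order: clean text, pad CJK characters, strip accents (defaulting to the `lowercase` flag when unset), then lowercase. Here is a rough Python equivalent using `unicodedata`; it is a sketch that covers only the two main CJK blocks, not a port of the crate.

```python
import unicodedata


def bert_normalize(text, lowercase=True, strip_accents=None, handle_chinese_chars=True):
    # 1. Clean text: drop NUL / replacement / control chars, map any whitespace to ' '.
    def is_control(c):
        # \t, \n, \r are category Cc but are treated as whitespace instead.
        return c not in "\t\n\r" and unicodedata.category(c).startswith("C")

    text = "".join(
        " " if c.isspace() else c
        for c in text
        if c not in ("\x00", "\ufffd") and not is_control(c)
    )
    # 2. Pad CJK ideographs with spaces so they later split one per character
    #    (only the two main CJK blocks here, for brevity).
    if handle_chinese_chars:
        text = "".join(
            f" {c} " if 0x4E00 <= ord(c) <= 0x9FFF or 0x3400 <= ord(c) <= 0x4DBF else c
            for c in text
        )
    # 3. Accent stripping defaults to the lowercasing setting, as in the Rust impl.
    if strip_accents is None:
        strip_accents = lowercase
    if strip_accents:
        text = "".join(
            c for c in unicodedata.normalize("NFD", text)
            if unicodedata.category(c) != "Mn"
        )
    if lowercase:
        text = text.lower()
    return text
```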
tokenizers/tokenizers/src/normalizers/bert.rs
use serde::{Deserialize, Serialize};

use crate::tokenizer::{PreTokenizedString, PreTokenizer, Result, SplitDelimiterBehavior};
use crate::utils::macro_rules_attribute;
use unicode_categories::UnicodeCategories;

fn is_punc(x: char) -> bool {
    char::is_ascii_punctuation(&x) || x.is_punctuation()
}

#[derive(Copy, Clone, Debug, PartialEq, Eq)]
#[macro_rules_attribute(impl_serde_type!)]
pub struct Punctuation {
    #[serde(default = "default_split")]
    pub behavior: SplitDelimiterBehavior,
}

fn default_split() -> SplitDelimiterBehavior {
    SplitDelimiterBehavior::Isolated
}

impl Punctuation {
    pub fn new(behavior: SplitDelimiterBehavior) -> Self {
        Self { behavior }
    }
}

impl Default for Punctuation {
    fn default() -> Self {
        Self::new(SplitDelimiterBehavior::Isolated)
    }
}

impl PreTokenizer for Punctuation {
    fn pre_tokenize(&self, pretokenized: &mut PreTokenizedString) -> Result<()> {
        pretokenized.split(|_, s| s.split(is_punc, self.behavior))
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use crate::{OffsetReferential, OffsetType};

    #[test]
    fn punctuation_basic() {
        let pretok = Punctuation::default();
        let mut pretokenized: PreTokenizedString = "Hey friend!     How are you?!?".into();
        pretok.pre_tokenize(&mut pretokenized).unwrap();
        assert_eq!(
            pretokenized
                .get_splits(OffsetReferential::Original, OffsetType::Byte)
                .into_iter()
                .map(|(s, o, _)| (s, o))
                .collect::<Vec<_>>(),
            vec![
                ("Hey friend", (0, 10)),
                ("!", (10, 11)),
                ("     How are you", (11, 27)),
                ("?", (27, 28)),
                ("!", (28, 29)),
                ("?", (29, 30)),
            ]
        );
    }

    #[test]
    fn deserialization() {
        let punctuation: Punctuation = serde_json::from_str(r#"{"type": "Punctuation"}"#).unwrap();
        assert_eq!(punctuation, Punctuation::default());
        assert_eq!(
            punctuation,
            Punctuation::new(SplitDelimiterBehavior::Isolated)
        );
    }

    #[test]
    #[should_panic]
    fn deserialization_erroneous() {
        let _punctuation: Punctuation =
            serde_json::from_str(r#"{"type": "WhitespaceSplit"}"#).unwrap();
    }
}
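The `Isolated` behavior exercised in the test keeps each punctuation character as its own piece while runs of other characters stay together, and every piece carries its offsets. The same idea in a few lines of Python (character offsets for simplicity; the crate's test checks byte offsets):

```python
import string
import unicodedata


def is_punct(c):
    # Mirrors `is_ascii_punctuation(&x) || x.is_punctuation()`.
    return c in string.punctuation or unicodedata.category(c).startswith("P")


def split_punctuation(text):
    """Isolated split: each punctuation char is its own (piece, (start, end))."""
    pieces = []
    start = 0
    for i, c in enumerate(text):
        if is_punct(c):
            if start < i:
                pieces.append((text[start:i], (start, i)))
            pieces.append((c, (i, i + 1)))
            start = i + 1
    if start < len(text):
        pieces.append((text[start:], (start, len(text))))
    return pieces
```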
tokenizers/tokenizers/src/pre_tokenizers/punctuation.rs
use crate::utils::SysRegex; use crate::{Offsets, Result}; use regex::Regex; /// Pattern used to split a NormalizedString pub trait Pattern { /// Slice the given string in a list of pattern match positions, with /// a boolean indicating whether this is a match or not. /// /// This method *must* cover the whole string in its outputs, with /// contiguous ordered slices. fn find_matches(&self, inside: &str) -> Result<Vec<(Offsets, bool)>>; } impl Pattern for char { fn find_matches(&self, inside: &str) -> Result<Vec<(Offsets, bool)>> { let is_char = |c: char| -> bool { c == *self }; is_char.find_matches(inside) } } impl Pattern for &str { fn find_matches(&self, inside: &str) -> Result<Vec<(Offsets, bool)>> { if self.is_empty() { // If we try to find the matches with an empty string, just don't match anything return Ok(vec![((0, inside.chars().count()), false)]); } let re = Regex::new(&regex::escape(self))?; (&re).find_matches(inside) } } impl Pattern for &String { fn find_matches(&self, inside: &str) -> Result<Vec<(Offsets, bool)>> { let s: &str = self; s.find_matches(inside) } } impl Pattern for &Regex { fn find_matches(&self, inside: &str) -> Result<Vec<(Offsets, bool)>> { if inside.is_empty() { return Ok(vec![((0, 0), false)]); } let mut prev = 0; let mut splits = Vec::with_capacity(inside.len()); for m in self.find_iter(inside) { if prev != m.start() { splits.push(((prev, m.start()), false)); } splits.push(((m.start(), m.end()), true)); prev = m.end(); } if prev != inside.len() { splits.push(((prev, inside.len()), false)) } Ok(splits) } } impl Pattern for &SysRegex { fn find_matches(&self, inside: &str) -> Result<Vec<(Offsets, bool)>> { if inside.is_empty() { return Ok(vec![((0, 0), false)]); } let mut prev = 0; let mut splits = Vec::with_capacity(inside.len()); for (start, end) in self.find_iter(inside) { if prev != start { splits.push(((prev, start), false)); } splits.push(((start, end), true)); prev = end; } if prev != inside.len() { splits.push(((prev, 
inside.len()), false)) } Ok(splits) } } impl<F> Pattern for F where F: Fn(char) -> bool, { fn find_matches(&self, inside: &str) -> Result<Vec<(Offsets, bool)>> { if inside.is_empty() { return Ok(vec![((0, 0), false)]); } let mut last_offset = 0; let mut last_seen = 0; let mut matches = inside .char_indices() .flat_map(|(b, c)| { last_seen = b + c.len_utf8(); if self(c) { let mut events = Vec::with_capacity(2); if last_offset < b { // We need to emit what was before this match events.push(((last_offset, b), false)); } events.push(((b, b + c.len_utf8()), true)); last_offset = b + c.len_utf8(); events } else { vec![] } }) .collect::<Vec<_>>(); // Do not forget the last potential split if last_seen > last_offset { matches.push(((last_offset, last_seen), false)); } Ok(matches) } } /// Invert the `is_match` flags for the wrapped Pattern. This is useful /// for example when we use a regex that matches words instead of a delimiter, /// and we want to match the delimiter. pub struct Invert<P: Pattern>(pub P); impl<P: Pattern> Pattern for Invert<P> { fn find_matches(&self, inside: &str) -> Result<Vec<(Offsets, bool)>> { Ok(self .0 .find_matches(inside)? .into_iter() .map(|(offsets, flag)| (offsets, !flag)) .collect()) } } #[cfg(test)] mod tests { use super::*; use regex::Regex; macro_rules! 
do_test { ($inside: expr, $pattern: expr => @ERROR) => { assert!($pattern.find_matches($inside).is_err()); }; ($inside: expr, $pattern: expr => $result: expr) => { assert_eq!($pattern.find_matches($inside).unwrap(), $result); assert_eq!( Invert($pattern).find_matches($inside).unwrap(), $result .into_iter() .map(|v: (Offsets, bool)| (v.0, !v.1)) .collect::<Vec<_>>() ); }; } #[test] fn char() { do_test!("aba", 'a' => vec![((0, 1), true), ((1, 2), false), ((2, 3), true)]); do_test!("bbbba", 'a' => vec![((0, 4), false), ((4, 5), true)]); do_test!("aabbb", 'a' => vec![((0, 1), true), ((1, 2), true), ((2, 5), false)]); do_test!("", 'a' => vec![((0, 0), false)]); do_test!("aaa", 'b' => vec![((0, 3), false)]); } #[test] fn str() { do_test!("aba", "a" => vec![((0, 1), true), ((1, 2), false), ((2, 3), true)]); do_test!("bbbba", "a" => vec![((0, 4), false), ((4, 5), true)]); do_test!("aabbb", "a" => vec![((0, 1), true), ((1, 2), true), ((2, 5), false)]); do_test!("aabbb", "ab" => vec![((0, 1), false), ((1, 3), true), ((3, 5), false)]); do_test!("aabbab", "ab" => vec![((0, 1), false), ((1, 3), true), ((3, 4), false), ((4, 6), true)] ); do_test!("", "" => vec![((0, 0), false)]); do_test!("aaa", "" => vec![((0, 3), false)]); do_test!("aaa", "b" => vec![((0, 3), false)]); } #[test] fn functions() { let is_b = |c| c == 'b'; do_test!("aba", is_b => vec![((0, 1), false), ((1, 2), true), ((2, 3), false)]); do_test!("aaaab", is_b => vec![((0, 4), false), ((4, 5), true)]); do_test!("bbaaa", is_b => vec![((0, 1), true), ((1, 2), true), ((2, 5), false)]); do_test!("", is_b => vec![((0, 0), false)]); do_test!("aaa", is_b => vec![((0, 3), false)]); } #[test] fn regex() { let is_whitespace = Regex::new(r"\s+").unwrap(); do_test!("a b", &is_whitespace => vec![((0, 1), false), ((1, 4), true), ((4, 5), false)]); do_test!(" a b ", &is_whitespace => vec![((0, 3), true), ((3, 4), false), ((4, 7), true), ((7, 8), false), ((8, 11), true)] ); do_test!("", &is_whitespace => vec![((0, 0), false)]); 
do_test!("𝔾𝕠𝕠𝕕 𝕞𝕠𝕣𝕟𝕚𝕟𝕘", &is_whitespace => vec![((0, 16), false), ((16, 17), true), ((17, 45), false)] ); do_test!("aaa", &is_whitespace => vec![((0, 3), false)]); } #[test] fn sys_regex() { let is_whitespace = SysRegex::new(r"\s+").unwrap(); do_test!("a b", &is_whitespace => vec![((0, 1), false), ((1, 4), true), ((4, 5), false)]); do_test!(" a b ", &is_whitespace => vec![((0, 3), true), ((3, 4), false), ((4, 7), true), ((7, 8), false), ((8, 11), true)] ); do_test!("", &is_whitespace => vec![((0, 0), false)]); do_test!("𝔾𝕠𝕠𝕕 𝕞𝕠𝕣𝕟𝕚𝕟𝕘", &is_whitespace => vec![((0, 16), false), ((16, 17), true), ((17, 45), false)] ); do_test!("aaa", &is_whitespace => vec![((0, 3), false)]); } }
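A `Pattern` implementation must tile the whole input with contiguous `((start, end), is_match)` slices, which is what makes `Invert` a pure flag flip. A compact Python sketch of the single-character case plus the inversion (character offsets here; the crate reports byte offsets):

```python
def find_char_matches(text, ch):
    """Cover `text` with contiguous ((start, end), is_match) slices, where
    is_match marks occurrences of `ch`; non-matching runs are merged."""
    if not text:
        return [((0, 0), False)]
    slices = []
    prev = 0
    for i, c in enumerate(text):
        if c == ch:
            if prev < i:
                slices.append(((prev, i), False))
            slices.append(((i, i + 1), True))
            prev = i + 1
    if prev < len(text):
        slices.append(((prev, len(text)), False))
    return slices


def invert(matches):
    """Flip is_match, e.g. to turn a word pattern into a delimiter pattern."""
    return [(span, not flag) for span, flag in matches]
```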
tokenizers/tokenizers/src/tokenizer/pattern.rs
#![cfg(feature = "http")] use tokenizers::{FromPretrainedParameters, Result, Tokenizer}; #[test] fn test_from_pretrained() -> Result<()> { let tokenizer = Tokenizer::from_pretrained("bert-base-cased", None)?; let encoding = tokenizer.encode("Hey there dear friend!", false)?; assert_eq!( encoding.get_tokens(), &["Hey", "there", "dear", "friend", "!"] ); Ok(()) } #[test] fn test_from_pretrained_revision() -> Result<()> { let tokenizer = Tokenizer::from_pretrained("anthony/tokenizers-test", None)?; let encoding = tokenizer.encode("Hey there dear friend!", false)?; assert_eq!( encoding.get_tokens(), &["hey", "there", "dear", "friend", "!"] ); let tokenizer = Tokenizer::from_pretrained( "anthony/tokenizers-test", Some(FromPretrainedParameters { revision: "gpt-2".to_string(), ..Default::default() }), )?; let encoding = tokenizer.encode("Hey there dear friend!", false)?; assert_eq!( encoding.get_tokens(), &["Hey", "Ġthere", "Ġdear", "Ġfriend", "!"] ); Ok(()) } #[test] fn test_from_pretrained_invalid_model() { let tokenizer = Tokenizer::from_pretrained("docs?", None); assert!(tokenizer.is_err()); } #[test] fn test_from_pretrained_invalid_revision() { let tokenizer = Tokenizer::from_pretrained( "bert-base-cased", Some(FromPretrainedParameters { revision: "gpt?".to_string(), ..Default::default() }), ); assert!(tokenizer.is_err()); }
tokenizers/tokenizers/tests/from_pretrained.rs
# Ignore artifacts: .github dist docs examples scripts types *.md
transformers.js/.prettierignore
export default { plugins: { tailwindcss: {}, autoprefixer: {}, }, }
transformers.js/examples/cross-encoder/postcss.config.js
<!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8" /> <meta name="viewport" content="width=device-width, initial-scale=1.0" /> <title>Transformers.js - Depth Anything</title> </head> <body> <h1>Depth Anything w/ 🤗 Transformers.js</h1> <div id="container"> <label id="upload-button" for="upload"> <svg width="25" height="25" viewBox="0 0 25 25" fill="none" xmlns="http://www.w3.org/2000/svg"> <path fill="#000" d="M3.5 24.3a3 3 0 0 1-1.9-.8c-.5-.5-.8-1.2-.8-1.9V2.9c0-.7.3-1.3.8-1.9.6-.5 1.2-.7 2-.7h18.6c.7 0 1.3.2 1.9.7.5.6.7 1.2.7 2v18.6c0 .7-.2 1.4-.7 1.9a3 3 0 0 1-2 .8H3.6Zm0-2.7h18.7V2.9H3.5v18.7Zm2.7-2.7h13.3c.3 0 .5 0 .6-.3v-.7l-3.7-5a.6.6 0 0 0-.6-.2c-.2 0-.4 0-.5.3l-3.5 4.6-2.4-3.3a.6.6 0 0 0-.6-.3c-.2 0-.4.1-.5.3l-2.7 3.6c-.1.2-.2.4 0 .7.1.2.3.3.6.3Z"> </path> </svg> Click to upload image <label id="example">(or try example)</label> </label> </div> <label id="status"></label> <input id="upload" type="file" accept="image/*" /> <script type="module" src="/main.js"></script> </body> </html>
transformers.js/examples/depth-anything-client/index.html
// See the Electron documentation for details on how to use preload scripts: // https://www.electronjs.org/docs/latest/tutorial/process-model#preload-scripts const { contextBridge, ipcRenderer } = require('electron'); // Here, we use the `contextBridge` API to expose a custom API to the renderer process. // This API allows the renderer process to invoke the `transformers:run` event in the main process. contextBridge.exposeInMainWorld('electronAPI', { run: (text) => ipcRenderer.invoke('transformers:run', text) });
transformers.js/examples/electron/src/preload.js
module.exports = { env: { browser: true, es2020: true, 'node': true }, extends: [ 'eslint:recommended', 'plugin:react/recommended', 'plugin:react/jsx-runtime', 'plugin:react-hooks/recommended', ], parserOptions: { ecmaVersion: 'latest', sourceType: 'module' }, settings: { react: { version: '18.2' } }, plugins: ['react-refresh'], rules: { 'react-refresh/only-export-components': 'warn', 'react/prop-types': 'off', }, }
transformers.js/examples/react-translator/.eslintrc.cjs
// Adapted from https://github.com/xenova/transformers.js/blob/c367f9d68b809bbbf81049c808bf6d219d761d23/src/utils/hub.js#L330 export async function getCachedFile(url) { let cache; try { cache = await caches.open('semantic-audio-search'); const cachedResponse = await cache.match(url); if (cachedResponse) { return await cachedResponse.arrayBuffer(); } } catch (e) { console.warn('Unable to open cache', e); } // No cache, or cache failed to open. Fetch the file. const response = await fetch(url); const buffer = await response.arrayBuffer(); if (cache) { try { // NOTE: We use `new Response(buffer, ...)` instead of `response.clone()` to handle LFS files await cache.put(url, new Response(buffer, { headers: response.headers, })); } catch (e) { console.warn('Unable to cache file', e); } } return buffer; } export async function getCachedJSON(url) { let buffer = await getCachedFile(url); let decoder = new TextDecoder('utf-8'); let jsonData = decoder.decode(buffer); return JSON.parse(jsonData); }
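Both helpers follow a read-through cache pattern: serve from the cache when possible, otherwise fetch, then store best-effort, where a failed cache open or write only warns and never fails the request. The control flow reduced to a Python sketch with injected `cache` and `fetch` stand-ins (hypothetical signatures):

```python
def get_cached_file(url, cache, fetch):
    """Return cached bytes for `url` if present; otherwise fetch and
    best-effort store. Caching is an optimization, never a requirement."""
    try:
        if url in cache:
            return cache[url]
    except Exception:
        pass  # cache unavailable: fall through to a plain fetch
    data = fetch(url)
    try:
        cache[url] = data
    except Exception:
        pass  # a failed write must not fail the request
    return data
```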
transformers.js/examples/semantic-audio-search/utils.js
@tailwind base; @tailwind components; @tailwind utilities; :root { --foreground-rgb: 255, 255, 255; --background-start-rgb: 0, 0, 0; --background-end-rgb: 0, 0, 0; } body { color: rgb(var(--foreground-rgb)); background: linear-gradient( to bottom, transparent, rgb(var(--background-end-rgb)) ) rgb(var(--background-start-rgb)); }
transformers.js/examples/semantic-image-search-client/src/app/globals.css
html, body { font-family: Arial, Helvetica, sans-serif; } .container { margin: 40px auto; width: max(50vw, 400px); display: flex; flex-direction: column; align-items: center; } .custom-file-upload { display: flex; align-items: center; cursor: pointer; gap: 10px; border: 2px solid black; padding: 8px 16px; cursor: pointer; border-radius: 6px; } #file-upload { display: none; } .upload-icon { width: 30px; } #image-container { width: 100%; margin-top: 20px; position: relative; } #image-container>img { width: 100%; } .bounding-box { position: absolute; box-sizing: border-box; border-width: 2px; border-style: solid; } .bounding-box-label { color: white; position: absolute; font-size: 12px; margin-top: -16px; margin-left: -2px; padding: 1px; }
transformers.js/examples/vanilla-js/style.css
@scope (.markdown) { /* Code blocks */ pre { margin: 0.5rem 0; white-space: break-spaces; } code { padding: 0.2em 0.4em; border-radius: 4px; font-family: Consolas, Monaco, 'Andale Mono', 'Ubuntu Mono', monospace; font-size: 0.9em; } pre, code { background-color: #f2f2f2; } @media (prefers-color-scheme: dark) { pre, code { background-color: #333; } } pre:has(code) { padding: 1rem 0.5rem; } pre>code { padding: 0; } /* Headings */ h1, h2, h3, h4, h5, h6 { font-weight: 600; line-height: 1.2; } h1 { font-size: 2em; margin: 1rem 0; } h2 { font-size: 1.5em; margin: 0.83rem 0; } h3 { font-size: 1.25em; margin: 0.67rem 0; } h4 { font-size: 1em; margin: 0.5rem 0; } h5 { font-size: 0.875em; margin: 0.33rem 0; } h6 { font-size: 0.75em; margin: 0.25rem 0; } h1, h2, h3, h4, h5, h6:first-child { margin-top: 0; } /* Unordered List */ ul { list-style-type: disc; margin-left: 1.5rem; } /* Ordered List */ ol { list-style-type: decimal; margin-left: 1.5rem; } /* List Items */ li { margin: 0.25rem 0; } p:not(:first-child) { margin-top: 0.75rem; } p:not(:last-child) { margin-bottom: 0.75rem; } }
transformers.js/examples/webgpu-chat/src/components/Chat.css
* { box-sizing: border-box; padding: 0; margin: 0; font-family: sans-serif; } html, body { height: 100%; } body { padding: 16px 32px; } body, #container { display: flex; flex-direction: column; justify-content: center; align-items: center; } #controls { display: flex; padding: 1rem; gap: 1rem; } #controls>div { text-align: center; } h1, h3 { text-align: center; } h3 { margin-top: 0.5rem; } #container { position: relative; width: 720px; height: 405px; max-width: 100%; max-height: 100%; border: 2px dashed #D1D5DB; border-radius: 0.75rem; overflow: hidden; margin-top: 1rem; background-size: 100% 100%; background-position: center; background-repeat: no-repeat; } #status { min-height: 16px; margin: 8px 0; } video { width: 100%; height: 100%; } input[type="text"] { padding: 0.25rem 0.5rem; border: 1px solid #D1D5DB; border-radius: 0.25rem; margin-top: 2px; } input[type="range"] { margin-top: 6px; } #overlay { position: absolute; top: 0; left: 0; background-color: rgba(255, 255, 255, 0.9); font-size: 1.25rem; border-radius: 2px; } #overlay:not(:empty) { padding: 0.5rem; }
transformers.js/examples/webgpu-clip/style.css
import './style.css'; import { AutoModel, AutoProcessor, RawImage } from '@xenova/transformers'; async function hasFp16() { try { const adapter = await navigator.gpu.requestAdapter() return adapter.features.has('shader-f16') } catch (e) { return false } } // Reference the elements that we will need const status = document.getElementById('status'); const canvas = document.createElement('canvas'); const outputCanvas = document.getElementById('output-canvas'); const video = document.getElementById('video'); const sizeSlider = document.getElementById('size'); const sizeLabel = document.getElementById('size-value'); const scaleSlider = document.getElementById('scale'); const scaleLabel = document.getElementById('scale-value'); function setStreamSize(width, height) { video.width = outputCanvas.width = canvas.width = Math.round(width); video.height = outputCanvas.height = canvas.height = Math.round(height); } status.textContent = 'Loading model...'; // Load model and processor const model_id = 'onnx-community/depth-anything-v2-small'; let model; try { model = await AutoModel.from_pretrained(model_id, { device: 'webgpu', // Use fp16 if available, otherwise use fp32 dtype: (await hasFp16()) ? 
'fp16' : 'fp32', }); } catch (err) { status.textContent = err.message; alert(err.message) throw err; } const processor = await AutoProcessor.from_pretrained(model_id); // Set up controls let size = 504; processor.feature_extractor.size = { width: size, height: size }; sizeSlider.addEventListener('input', () => { size = Number(sizeSlider.value); processor.feature_extractor.size = { width: size, height: size }; sizeLabel.textContent = size; }); sizeSlider.disabled = false; let scale = 0.4; scaleSlider.addEventListener('input', () => { scale = Number(scaleSlider.value); setStreamSize(video.videoWidth * scale, video.videoHeight * scale); scaleLabel.textContent = scale; }); scaleSlider.disabled = false; status.textContent = 'Ready'; let isProcessing = false; let previousTime; const context = canvas.getContext('2d', { willReadFrequently: true }); const outputContext = outputCanvas.getContext('2d', { willReadFrequently: true }); function updateCanvas() { const { width, height } = canvas; if (!isProcessing) { isProcessing = true; (async function () { // Read the current frame from the video context.drawImage(video, 0, 0, width, height); const currentFrame = context.getImageData(0, 0, width, height); const image = new RawImage(currentFrame.data, width, height, 4); // Pre-process image const inputs = await processor(image); // Predict depth map const { predicted_depth } = await model(inputs); const data = predicted_depth.data; const [bs, oh, ow] = predicted_depth.dims; // Normalize the depth map let min = Infinity; let max = -Infinity; outputCanvas.width = ow; outputCanvas.height = oh; for (let i = 0; i < data.length; ++i) { const v = data[i]; if (v < min) min = v; if (v > max) max = v; } const range = max - min; const imageData = new Uint8ClampedArray(4 * data.length); for (let i = 0; i < data.length; ++i) { const offset = 4 * i; imageData[offset] = 255; // Set base color to red // Set alpha to normalized depth value imageData[offset + 3] = 255 * (1 - (data[i] - min) / 
range); } const outPixelData = new ImageData(imageData, ow, oh); outputContext.putImageData(outPixelData, 0, 0); if (previousTime !== undefined) { const fps = 1000 / (performance.now() - previousTime); status.textContent = `FPS: ${fps.toFixed(2)}`; } previousTime = performance.now(); isProcessing = false; })(); } window.requestAnimationFrame(updateCanvas); } // Start the video stream navigator.mediaDevices.getUserMedia( { video: { width: 720, height: 720 } }, // Ask for square video ).then((stream) => { // Set up the video and canvas elements. video.srcObject = stream; video.play(); const videoTrack = stream.getVideoTracks()[0]; const { width, height } = videoTrack.getSettings(); setStreamSize(width * scale, height * scale); // Start the animation loop setTimeout(updateCanvas, 50); }).catch((error) => { alert(error); });
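The post-processing loop min-max normalizes the raw depth values and writes them into the alpha channel over a red base (green and blue stay at the `Uint8ClampedArray` default of 0). The same arithmetic as a standalone Python sketch; `depth_to_rgba` is a hypothetical helper over a flat list, and the zero-range guard is an addition not present in the demo.

```python
def depth_to_rgba(depth):
    """Min-max normalize a flat depth array and pack it as RGBA pixels:
    red base color, alpha = 255 * (1 - normalized depth)."""
    lo, hi = min(depth), max(depth)
    rng = hi - lo or 1  # guard: avoid 0/0 on a constant depth map
    pixels = []
    for v in depth:
        # round() models the clamping/rounding a Uint8ClampedArray performs.
        alpha = round(255 * (1 - (v - lo) / rng))
        pixels.extend([255, 0, 0, alpha])
    return pixels
```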
transformers.js/examples/webgpu-video-depth-estimation/main.js
export default function ArrowRightIcon(props) { return ( <svg {...props} xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" strokeWidth="2" strokeLinecap="round" strokeLinejoin="round" > <path d="M5 12h14" /> <path d="m12 5 7 7-7 7" /> </svg> ) }
transformers.js/examples/webgpu-vlm/src/components/icons/ArrowRightIcon.jsx
import json from transformers.utils import cached_file def generate_tokenizer_json(model_path, tokenizer): # Marian models use two separate tokenizers for source and target languages. # So, we merge them into a single tokenizer. vocab_file = cached_file(model_path, 'vocab.json') with open(vocab_file) as fp: vocab = json.load(fp) added_tokens = [ dict( id=vocab.get(x), special=True, content=x, single_word=False, lstrip=False, rstrip=False, normalized=False ) for x in tokenizer.all_special_tokens ] tokenizer_json = { 'version': '1.0', 'truncation': None, 'padding': None, 'added_tokens': added_tokens, 'normalizer': { 'type': 'Precompiled', 'precompiled_charsmap': None # TODO add this }, 'pre_tokenizer': { 'type': 'Sequence', 'pretokenizers': [ { 'type': 'WhitespaceSplit' }, { 'type': 'Metaspace', 'replacement': '\u2581', 'add_prefix_space': True } ] }, 'post_processor': { 'type': 'TemplateProcessing', 'single': [ {'Sequence': {'id': 'A', 'type_id': 0}}, {'SpecialToken': {'id': tokenizer.eos_token, 'type_id': 0}} ], 'pair': [ {'Sequence': {'id': 'A', 'type_id': 0}}, {'SpecialToken': {'id': tokenizer.eos_token, 'type_id': 0}}, {'Sequence': {'id': 'B', 'type_id': 0}}, {'SpecialToken': {'id': tokenizer.eos_token, 'type_id': 0}} ], 'special_tokens': { tokenizer.eos_token: { 'id': tokenizer.eos_token, 'ids': [tokenizer.eos_token_id], 'tokens': [tokenizer.eos_token] } } }, 'decoder': { 'type': 'Metaspace', 'replacement': '\u2581', 'add_prefix_space': True }, 'model': { 'type': 'Unigram', 'unk_id': 2, } } # NOTE: Must have sentencepiece installed spm_source = tokenizer.spm_source spm_target = tokenizer.spm_target src_vocab_dict = { spm_source.IdToPiece(i): spm_source.GetScore(i) for i in range(spm_source.GetPieceSize()) } tgt_vocab_dict = { spm_target.IdToPiece(i): spm_target.GetScore(i) for i in range(spm_target.GetPieceSize()) } tokenizer_json['model']['vocab'] = [ [ k, 0.0 if k in tokenizer.all_special_tokens else max( src_vocab_dict.get(k, -100), tgt_vocab_dict.get(k, 
-100)) ] for k in vocab ] return tokenizer_json
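The final `vocab` entry list gives special tokens a score of 0.0 and every other token the higher of its source and target SentencePiece scores, with -100 as the missing-piece floor. The same rule as a standalone sketch (hypothetical function name):

```python
def merged_unigram_vocab(vocab, special_tokens, src_scores, tgt_scores):
    """For each token in the shared vocab, take the higher of the source and
    target SentencePiece scores (fallback -100); special tokens score 0.0."""
    return [
        [
            tok,
            0.0 if tok in special_tokens
            else max(src_scores.get(tok, -100), tgt_scores.get(tok, -100)),
        ]
        for tok in vocab
    ]
```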
transformers.js/scripts/extra/marian.py
/**
 * @file Module used to configure Transformers.js.
 *
 * **Example:** Disable remote models.
 * ```javascript
 * import { env } from '@huggingface/transformers';
 * env.allowRemoteModels = false;
 * ```
 *
 * **Example:** Set local model path.
 * ```javascript
 * import { env } from '@huggingface/transformers';
 * env.localModelPath = '/path/to/local/models/';
 * ```
 *
 * **Example:** Set cache directory.
 * ```javascript
 * import { env } from '@huggingface/transformers';
 * env.cacheDir = '/path/to/cache/directory/';
 * ```
 *
 * @module env
 */

import fs from 'node:fs';
import path from 'node:path';
import url from 'node:url';

const VERSION = '3.7.2';

// Check if various APIs are available (depends on environment)
const IS_BROWSER_ENV = typeof window !== "undefined" && typeof window.document !== "undefined";
const IS_WEBWORKER_ENV = typeof self !== "undefined"
    && (['DedicatedWorkerGlobalScope', 'ServiceWorkerGlobalScope', 'SharedWorkerGlobalScope'].includes(self.constructor?.name));
const IS_WEB_CACHE_AVAILABLE = typeof self !== "undefined" && 'caches' in self;
const IS_WEBGPU_AVAILABLE = typeof navigator !== 'undefined' && 'gpu' in navigator;
const IS_WEBNN_AVAILABLE = typeof navigator !== 'undefined' && 'ml' in navigator;

const IS_PROCESS_AVAILABLE = typeof process !== 'undefined';
const IS_NODE_ENV = IS_PROCESS_AVAILABLE && process?.release?.name === 'node';
const IS_FS_AVAILABLE = !isEmpty(fs);
const IS_PATH_AVAILABLE = !isEmpty(path);

// Runtime detection
const IS_DENO_RUNTIME = typeof globalThis.Deno !== 'undefined';
const IS_BUN_RUNTIME = typeof globalThis.Bun !== 'undefined';

/**
 * A read-only object containing information about the APIs available in the current environment.
 */
export const apis = Object.freeze({
    /** Whether we are running in a browser environment (and not a web worker) */
    IS_BROWSER_ENV,

    /** Whether we are running in a web worker environment */
    IS_WEBWORKER_ENV,

    /** Whether the Cache API is available */
    IS_WEB_CACHE_AVAILABLE,

    /** Whether the WebGPU API is available */
    IS_WEBGPU_AVAILABLE,

    /** Whether the WebNN API is available */
    IS_WEBNN_AVAILABLE,

    /** Whether the Node.js process API is available */
    IS_PROCESS_AVAILABLE,

    /** Whether we are running in a Node.js-like environment (node, deno, bun) */
    IS_NODE_ENV,

    /** Whether the filesystem API is available */
    IS_FS_AVAILABLE,

    /** Whether the path API is available */
    IS_PATH_AVAILABLE,
});

const RUNNING_LOCALLY = IS_FS_AVAILABLE && IS_PATH_AVAILABLE;

let dirname__ = './';
if (RUNNING_LOCALLY) {
    // NOTE: We wrap `import.meta` in a call to `Object` to prevent Webpack from trying to bundle it in CommonJS.
    // Although we get the warning: "Accessing import.meta directly is unsupported (only property access or destructuring is supported)",
    // it is safe to ignore since the bundled value (`{}`) isn't used for CommonJS environments (we use __dirname instead).
    const _import_meta_url = Object(import.meta).url;
    if (_import_meta_url) {
        dirname__ = path.dirname(path.dirname(url.fileURLToPath(_import_meta_url))) // ESM
    } else if (typeof __dirname !== 'undefined') {
        dirname__ = path.dirname(__dirname) // CommonJS
    }
}

// Only used for environments with access to file system
const DEFAULT_CACHE_DIR = RUNNING_LOCALLY
    ? path.join(dirname__, '/.cache/')
    : null;

// Set local model path, based on available APIs
const DEFAULT_LOCAL_MODEL_PATH = '/models/';
const localModelPath = RUNNING_LOCALLY
    ? path.join(dirname__, DEFAULT_LOCAL_MODEL_PATH)
    : DEFAULT_LOCAL_MODEL_PATH;

/**
 * Global variable given visible to users to control execution. This provides users a simple way to configure Transformers.js.
 * @typedef {Object} TransformersEnvironment
 * @property {string} version This version of Transformers.js.
 * @property {{onnx: Partial<import('onnxruntime-common').Env>}} backends Expose environment variables of different backends,
 * allowing users to set these variables if they want to.
 * @property {boolean} allowRemoteModels Whether to allow loading of remote files, defaults to `true`.
 * If set to `false`, it will have the same effect as setting `local_files_only=true` when loading pipelines, models, tokenizers, processors, etc.
 * @property {string} remoteHost Host URL to load models from. Defaults to the Hugging Face Hub.
 * @property {string} remotePathTemplate Path template to fill in and append to `remoteHost` when loading models.
 * @property {boolean} allowLocalModels Whether to allow loading of local files, defaults to `false` if running in-browser, and `true` otherwise.
 * If set to `false`, it will skip the local file check and try to load the model from the remote host.
 * @property {string} localModelPath Path to load local models from. Defaults to `/models/`.
 * @property {boolean} useFS Whether to use the file system to load files. By default, it is `true` if available.
 * @property {boolean} useBrowserCache Whether to use Cache API to cache models. By default, it is `true` if available.
 * @property {boolean} useFSCache Whether to use the file system to cache files. By default, it is `true` if available.
 * @property {string} cacheDir The directory to use for caching files with the file system. By default, it is `./.cache`.
 * @property {boolean} useCustomCache Whether to use a custom cache system (defined by `customCache`), defaults to `false`.
 * @property {Object} customCache The custom cache to use. Defaults to `null`. Note: this must be an object which
 * implements the `match` and `put` functions of the Web Cache API. For more information, see https://developer.mozilla.org/en-US/docs/Web/API/Cache.
 * If you wish, you may also return a `Promise<string>` from the `match` function if you'd like to use a file path instead of `Promise<Response>`.
 */

/** @type {TransformersEnvironment} */
export const env = {
    version: VERSION,

    /////////////////// Backends settings ///////////////////
    // NOTE: These will be populated later by the backends themselves.
    backends: {
        // onnxruntime-web/onnxruntime-node
        onnx: {},
    },

    /////////////////// Model settings ///////////////////
    allowRemoteModels: true,
    remoteHost: 'https://huggingface.co/',
    remotePathTemplate: '{model}/resolve/{revision}/',

    allowLocalModels: !(IS_BROWSER_ENV || IS_WEBWORKER_ENV),
    localModelPath: localModelPath,
    useFS: IS_FS_AVAILABLE,

    /////////////////// Cache settings ///////////////////
    useBrowserCache: IS_WEB_CACHE_AVAILABLE && !IS_DENO_RUNTIME,
    useFSCache: IS_FS_AVAILABLE,
    cacheDir: DEFAULT_CACHE_DIR,
    useCustomCache: false,
    customCache: null,
    //////////////////////////////////////////////////////
}


/**
 * @param {Object} obj
 * @private
 */
function isEmpty(obj) {
    return Object.keys(obj).length === 0;
}
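Per the doc comment at the top of the file, the intended usage pattern for all of these settings is plain property assignment on the exported `env` object; for example, forcing fully local, filesystem-cached inference (the paths below are placeholders):

```javascript
import { env } from '@huggingface/transformers';

// Disable remote models and point the loader at a local directory.
env.allowRemoteModels = false;
env.localModelPath = '/path/to/local/models/';

// Cache downloaded files on the local filesystem instead of the Cache API.
env.useBrowserCache = false;
env.cacheDir = '/path/to/cache/directory/';
```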
transformers.js/src/env.js
import {
    ImageProcessor,
} from "../../base/image_processors_utils.js"

export class CLIPImageProcessor extends ImageProcessor { }
export class CLIPFeatureExtractor extends CLIPImageProcessor { }
transformers.js/src/models/clip/image_processing_clip.js
import { Processor } from "../../base/processing_utils.js";
import { AutoImageProcessor } from "../auto/image_processing_auto.js";
import { AutoTokenizer } from "../../tokenizers.js";
import { center_to_corners_format } from "../../base/image_processors_utils.js";

/**
 * Get token ids of phrases from posmaps and input_ids.
 * @param {import('../../utils/tensor.js').Tensor} posmaps A boolean tensor of unbatched text-thresholded logits related to the detected bounding boxes of shape `(hidden_size, )`.
 * @param {import('../../utils/tensor.js').Tensor} input_ids A tensor of token ids of shape `(sequence_length, )`.
 */
function get_phrases_from_posmap(posmaps, input_ids) {

    const left_idx = 0;
    const right_idx = posmaps.dims.at(-1) - 1;

    const posmaps_list = posmaps.tolist();
    posmaps_list.fill(false, 0, left_idx + 1);
    posmaps_list.fill(false, right_idx);

    const input_ids_list = input_ids.tolist();
    return posmaps_list
        .map((val, idx) => val ? idx : null)
        .filter(idx => idx !== null)
        .map(i => input_ids_list[i]);
}

export class GroundingDinoProcessor extends Processor {
    static tokenizer_class = AutoTokenizer
    static image_processor_class = AutoImageProcessor

    /**
     * @typedef {import('../../utils/image.js').RawImage} RawImage
     */

    /**
     *
     * @param {RawImage|RawImage[]|RawImage[][]} images
     * @param {string|string[]} text
     * @returns {Promise<any>}
     */
    async _call(images, text, options = {}) {

        const image_inputs = images ? await this.image_processor(images, options) : {};
        const text_inputs = text ? this.tokenizer(text, options) : {};

        return {
            ...text_inputs,
            ...image_inputs,
        }
    }

    post_process_grounded_object_detection(outputs, input_ids, {
        box_threshold = 0.25,
        text_threshold = 0.25,
        target_sizes = null
    } = {}) {
        const { logits, pred_boxes } = outputs;
        const batch_size = logits.dims[0];

        if (target_sizes !== null && target_sizes.length !== batch_size) {
            throw Error("Make sure that you pass in as many target sizes as the batch dimension of the logits")
        }
        const num_queries = logits.dims.at(1);

        const probs = logits.sigmoid(); // (batch_size, num_queries, 256)
        const scores = probs.max(-1).tolist(); // (batch_size, num_queries)

        // Convert to [x0, y0, x1, y1] format
        const boxes = pred_boxes.tolist() // (batch_size, num_queries, 4)
            .map(batch => batch.map(box => center_to_corners_format(box)));

        const results = [];
        for (let i = 0; i < batch_size; ++i) {
            const target_size = target_sizes !== null ? target_sizes[i] : null;

            // Convert from relative [0, 1] to absolute [0, height] coordinates
            if (target_size !== null) {
                boxes[i] = boxes[i].map(box => box.map((x, j) => x * target_size[(j + 1) % 2]));
            }

            const batch_scores = scores[i];
            const final_scores = [];
            const final_phrases = [];
            const final_boxes = [];
            for (let j = 0; j < num_queries; ++j) {
                const score = batch_scores[j];
                if (score <= box_threshold) {
                    continue;
                }
                const box = boxes[i][j];
                const prob = probs[i][j];

                final_scores.push(score);
                final_boxes.push(box);

                const phrases = get_phrases_from_posmap(prob.gt(text_threshold), input_ids[i]);
                final_phrases.push(phrases);
            }
            results.push({
                scores: final_scores,
                boxes: final_boxes,
                labels: this.batch_decode(final_phrases)
            });
        }
        return results;
    }
}
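`center_to_corners_format` (imported from `image_processors_utils.js`) and the inline relative-to-absolute scaling in the post-processor are small enough to check numerically. The functions below are hedged stand-ins for illustration, not the library's exports:

```javascript
// Stand-in for center_to_corners_format: [cx, cy, w, h] -> [x0, y0, x1, y1].
function centerToCorners([cx, cy, w, h]) {
    return [cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2];
}

// Mirrors `x * target_size[(j + 1) % 2]` with target_size = [height, width]:
// even indices (x-coordinates) scale by width, odd indices (y) by height.
function scaleToAbsolute(box, [height, width]) {
    return box.map((x, j) => x * (j % 2 === 0 ? width : height));
}
```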
transformers.js/src/models/grounding_dino/processing_grounding_dino.js
import { Processor } from "../../base/processing_utils.js";
import { AutoImageProcessor } from "../auto/image_processing_auto.js";
import { AutoTokenizer } from "../../tokenizers.js";
import { RawImage } from "../../utils/image.js";

export class Qwen2VLProcessor extends Processor {
    static image_processor_class = AutoImageProcessor
    static tokenizer_class = AutoTokenizer

    /**
     *
     * @param {string|string[]} text
     * @param {RawImage|RawImage[]} images
     * @param {...any} args
     * @returns {Promise<any>}
     */
    async _call(text, images = null, ...args) {

        if (!Array.isArray(text)) {
            text = [text];
        }

        let image_inputs, image_grid_thw;
        if (images) {
            image_inputs = await this.image_processor(images);
            image_grid_thw = image_inputs.image_grid_thw;
        }

        if (image_grid_thw) {
            // @ts-expect-error TS2551
            let merge_length = this.image_processor.config.merge_size ** 2;
            let index = 0;

            const image_grid_thw_list = image_grid_thw.tolist();
            text = text.map(t => {
                while (t.includes("<|image_pad|>")) {
                    const prod = Number(image_grid_thw_list[index++].reduce((a, b) => a * b, 1n));
                    t = t.replace("<|image_pad|>", "<|placeholder|>".repeat(Math.floor(prod / merge_length)));
                }
                return t.replaceAll("<|placeholder|>", "<|image_pad|>");
            });
        }

        const text_inputs = this.tokenizer(text);

        return {
            ...text_inputs,
            ...image_inputs,
            // TODO: ...videos_inputs,
        }
    }
}
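The pad-token expansion in `_call` can be exercised in isolation: each `[t, h, w]` grid contributes `t*h*w / merge_size**2` image-pad tokens, and the placeholder swap prevents freshly inserted pads from being re-matched by the `while` loop. A minimal re-implementation for checking the arithmetic (the BigInt grid entries mimic the int64 `tolist()` output of the tensor):

```javascript
// Expand each <|image_pad|> into prod(grid) / merge_size**2 copies.
function expandImagePads(text, gridList, mergeSize) {
    const mergeLength = mergeSize ** 2;
    let index = 0;
    while (text.includes("<|image_pad|>")) {
        const prod = Number(gridList[index++].reduce((a, b) => a * b, 1n));
        text = text.replace("<|image_pad|>", "<|placeholder|>".repeat(Math.floor(prod / mergeLength)));
    }
    return text.replaceAll("<|placeholder|>", "<|image_pad|>");
}
```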
transformers.js/src/models/qwen2_vl/processing_qwen2_vl.js
import {
    ImageProcessor,
} from "../../base/image_processors_utils.js";
import {
    stack,
    cat,
} from "../../utils/tensor.js";

export class VitMatteImageProcessor extends ImageProcessor {
    /**
     * Calls the feature extraction process on an array of images, preprocesses
     * each image, and concatenates the resulting features into a single Tensor.
     * @param {import("../../utils/image.js").RawImage[]} images The image(s) to extract features from.
     * @param {import("../../utils/image.js").RawImage[]} trimaps The trimap(s) to extract features from.
     * @returns {Promise<import("../../base/image_processors_utils.js").ImageProcessorResult>} An object containing the concatenated pixel values of the preprocessed images.
     */
    async _call(images, trimaps) {
        if (!Array.isArray(images)) {
            images = [images];
        }
        if (!Array.isArray(trimaps)) {
            trimaps = [trimaps];
        }

        const imageData = await Promise.all(images.map(x => this.preprocess(x)));
        const trimapData = await Promise.all(trimaps.map(x => this.preprocess(x, {
            do_normalize: false,
            do_convert_rgb: false,
            do_convert_grayscale: true,
        })));

        // Stack pixel values
        const pixel_values = stack(imageData.map(
            // Concatenate images and trimaps
            (x, i) => cat([x.pixel_values, trimapData[i].pixel_values], 0)
        ), 0);

        return {
            pixel_values,

            // Original sizes of images
            original_sizes: imageData.map(x => x.original_size),

            // Reshaped sizes of images, before padding or cropping
            reshaped_input_sizes: imageData.map(x => x.reshaped_input_size),
        }
    }
}
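With plain nested arrays standing in for tensors, the shape bookkeeping of the `cat` + `stack` step above looks like this: each sample's 3-channel image is concatenated with its 1-channel trimap along the channel axis, and the samples are then stacked into a batch. This is a sketch of the shapes only, not the actual Tensor API:

```javascript
// Per sample: [3, H, W] image + [1, H, W] trimap -> [4, H, W].
// Over the batch: n samples -> [n, 4, H, W].
function assembleBatch(images, trimaps) {
    return images.map((img, i) => [...img, ...trimaps[i]]);
}
```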
transformers.js/src/models/vitmatte/image_processing_vitmatte.js
/** * @file Helper module for audio processing. * * These functions and classes are only used internally, * meaning an end-user shouldn't need to access anything here. * * @module utils/audio */ import { getFile, } from './hub.js'; import { FFT, max } from './maths.js'; import { calculateReflectOffset, saveBlob, } from './core.js'; import { apis } from '../env.js'; import { Tensor, matmul } from './tensor.js'; import fs from 'node:fs'; /** * Helper function to read audio from a path/URL. * @param {string|URL} url The path/URL to load the audio from. * @param {number} sampling_rate The sampling rate to use when decoding the audio. * @returns {Promise<Float32Array>} The decoded audio as a `Float32Array`. */ export async function read_audio(url, sampling_rate) { if (typeof AudioContext === 'undefined') { // Running in node or an environment without AudioContext throw Error( "Unable to load audio from path/URL since `AudioContext` is not available in your environment. " + "Instead, audio data should be passed directly to the pipeline/processor. " + "For more information and some example code, see https://huggingface.co/docs/transformers.js/guides/node-audio-processing." ) } const response = await (await getFile(url)).arrayBuffer(); const audioCTX = new AudioContext({ sampleRate: sampling_rate }); if (typeof sampling_rate === 'undefined') { console.warn(`No sampling rate provided, using default of ${audioCTX.sampleRate}Hz.`) } const decoded = await audioCTX.decodeAudioData(response); /** @type {Float32Array} */ let audio; // We now replicate HuggingFace's `ffmpeg_read` method: if (decoded.numberOfChannels === 2) { // When downmixing a stereo audio file to mono using the -ac 1 option in FFmpeg, // the audio signal is summed across both channels to create a single mono channel. // However, if the audio is at full scale (i.e. the highest possible volume level), // the summing of the two channels can cause the audio signal to clip or distort. 
// To prevent this clipping, FFmpeg applies a scaling factor of 1/sqrt(2) (~ 0.707) // to the audio signal before summing the two channels. This scaling factor ensures // that the combined audio signal will not exceed the maximum possible level, even // if both channels are at full scale. // After applying this scaling factor, the audio signal from both channels is summed // to create a single mono channel. It's worth noting that this scaling factor is // only applied when downmixing stereo audio to mono using the -ac 1 option in FFmpeg. // If you're using a different downmixing method, or if you're not downmixing the // audio at all, this scaling factor may not be needed. const SCALING_FACTOR = Math.sqrt(2); const left = decoded.getChannelData(0); const right = decoded.getChannelData(1); audio = new Float32Array(left.length); for (let i = 0; i < decoded.length; ++i) { audio[i] = SCALING_FACTOR * (left[i] + right[i]) / 2; } } else { // If the audio is not stereo, we can just use the first channel: audio = decoded.getChannelData(0); } return audio; } /** * Helper function to generate windows that are special cases of the generalized cosine window. * See https://www.mathworks.com/help/signal/ug/generalized-cosine-windows.html for more information. * @param {number} M Number of points in the output window. If zero or less, an empty array is returned. * @param {number} a_0 Offset for the generalized cosine window. * @returns {Float64Array} The generated window. */ function generalized_cosine_window(M, a_0) { if (M < 1) { return new Float64Array(); } if (M === 1) { return new Float64Array([1]); } const a_1 = 1 - a_0; const factor = 2 * Math.PI / (M - 1); const cos_vals = new Float64Array(M); for (let i = 0; i < M; ++i) { cos_vals[i] = a_0 - a_1 * Math.cos(i * factor); } return cos_vals; } /** * Generates a Hanning window of length M. * See https://numpy.org/doc/stable/reference/generated/numpy.hanning.html for more information. 
* * @param {number} M The length of the Hanning window to generate. * @returns {Float64Array} The generated Hanning window. */ export function hanning(M) { return generalized_cosine_window(M, 0.5); } /** * Generates a Hamming window of length M. * See https://numpy.org/doc/stable/reference/generated/numpy.hamming.html for more information. * * @param {number} M The length of the Hamming window to generate. * @returns {Float64Array} The generated Hamming window. */ export function hamming(M) { return generalized_cosine_window(M, 0.54); } const HERTZ_TO_MEL_MAPPING = { "htk": (/** @type {number} */ freq) => 2595.0 * Math.log10(1.0 + (freq / 700.0)), "kaldi": (/** @type {number} */ freq) => 1127.0 * Math.log(1.0 + (freq / 700.0)), "slaney": (/** @type {number} */ freq, min_log_hertz = 1000.0, min_log_mel = 15.0, logstep = 27.0 / Math.log(6.4)) => freq >= min_log_hertz ? min_log_mel + Math.log(freq / min_log_hertz) * logstep : 3.0 * freq / 200.0, } /** * @template {Float32Array|Float64Array|number} T * @param {T} freq * @param {string} [mel_scale] * @returns {T} */ function hertz_to_mel(freq, mel_scale = "htk") { const fn = HERTZ_TO_MEL_MAPPING[mel_scale]; if (!fn) { throw new Error('mel_scale should be one of "htk", "slaney" or "kaldi".'); } // @ts-expect-error ts(2322) return typeof freq === 'number' ? fn(freq) : freq.map(x => fn(x)); } const MEL_TO_HERTZ_MAPPING = { "htk": (/** @type {number} */ mels) => 700.0 * (10.0 ** (mels / 2595.0) - 1.0), "kaldi": (/** @type {number} */ mels) => 700.0 * (Math.exp(mels / 1127.0) - 1.0), "slaney": (/** @type {number} */ mels, min_log_hertz = 1000.0, min_log_mel = 15.0, logstep = Math.log(6.4) / 27.0) => mels >= min_log_mel ? 
min_log_hertz * Math.exp(logstep * (mels - min_log_mel)) : 200.0 * mels / 3.0, } /** * @template {Float32Array|Float64Array|number} T * @param {T} mels * @param {string} [mel_scale] * @returns {T} */ function mel_to_hertz(mels, mel_scale = "htk") { const fn = MEL_TO_HERTZ_MAPPING[mel_scale]; if (!fn) { throw new Error('mel_scale should be one of "htk", "slaney" or "kaldi".'); } // @ts-expect-error ts(2322) return typeof mels === 'number' ? fn(mels) : mels.map(x => fn(x)); } /** * Creates a triangular filter bank. * * Adapted from torchaudio and librosa. * * @param {Float64Array} fft_freqs Discrete frequencies of the FFT bins in Hz, of shape `(num_frequency_bins,)`. * @param {Float64Array} filter_freqs Center frequencies of the triangular filters to create, in Hz, of shape `(num_mel_filters,)`. * @returns {number[][]} of shape `(num_frequency_bins, num_mel_filters)`. */ function _create_triangular_filter_bank(fft_freqs, filter_freqs) { const filter_diff = Float64Array.from( { length: filter_freqs.length - 1 }, (_, i) => filter_freqs[i + 1] - filter_freqs[i] ); const slopes = Array.from({ length: fft_freqs.length }, () => new Array(filter_freqs.length)); for (let j = 0; j < fft_freqs.length; ++j) { const slope = slopes[j]; for (let i = 0; i < filter_freqs.length; ++i) { slope[i] = filter_freqs[i] - fft_freqs[j]; } } const numFreqs = filter_freqs.length - 2; const ret = Array.from({ length: numFreqs }, () => new Array(fft_freqs.length)); for (let j = 0; j < fft_freqs.length; ++j) { // 201 const slope = slopes[j]; for (let i = 0; i < numFreqs; ++i) { // 80 const down = -slope[i] / filter_diff[i]; const up = slope[i + 2] / filter_diff[i + 1]; ret[i][j] = Math.max(0, Math.min(down, up)); } } return ret; } /** * Return evenly spaced numbers over a specified interval. * @param {number} start The starting value of the sequence. * @param {number} end The end value of the sequence. * @param {number} num Number of samples to generate. 
* @returns `num` evenly spaced samples, calculated over the interval `[start, stop]`. */ function linspace(start, end, num) { const step = (end - start) / (num - 1); return Float64Array.from({ length: num }, (_, i) => start + step * i); } /** * Creates a frequency bin conversion matrix used to obtain a mel spectrogram. This is called a *mel filter bank*, and * various implementation exist, which differ in the number of filters, the shape of the filters, the way the filters * are spaced, the bandwidth of the filters, and the manner in which the spectrum is warped. The goal of these * features is to approximate the non-linear human perception of the variation in pitch with respect to the frequency. * @param {number} num_frequency_bins Number of frequency bins (should be the same as `n_fft // 2 + 1` * where `n_fft` is the size of the Fourier Transform used to compute the spectrogram). * @param {number} num_mel_filters Number of mel filters to generate. * @param {number} min_frequency Lowest frequency of interest in Hz. * @param {number} max_frequency Highest frequency of interest in Hz. This should not exceed `sampling_rate / 2`. * @param {number} sampling_rate Sample rate of the audio waveform. * @param {string} [norm] If `"slaney"`, divide the triangular mel weights by the width of the mel band (area normalization). * @param {string} [mel_scale] The mel frequency scale to use, `"htk"` or `"slaney"`. * @param {boolean} [triangularize_in_mel_space] If this option is enabled, the triangular filter is applied in mel space rather than frequency space. * This should be set to `true` in order to get the same results as `torchaudio` when computing mel filters. * @returns {number[][]} Triangular filter bank matrix, which is a 2D array of shape (`num_frequency_bins`, `num_mel_filters`). * This is a projection matrix to go from a spectrogram to a mel spectrogram. 
*/ export function mel_filter_bank( num_frequency_bins, num_mel_filters, min_frequency, max_frequency, sampling_rate, norm = null, mel_scale = "htk", triangularize_in_mel_space = false, ) { if (norm !== null && norm !== "slaney") { throw new Error('norm must be one of null or "slaney"'); } if (num_frequency_bins < 2) { throw new Error(`Require num_frequency_bins: ${num_frequency_bins} >= 2`); } if (min_frequency > max_frequency) { throw new Error(`Require min_frequency: ${min_frequency} <= max_frequency: ${max_frequency}`); } const mel_min = hertz_to_mel(min_frequency, mel_scale); const mel_max = hertz_to_mel(max_frequency, mel_scale); const mel_freqs = linspace(mel_min, mel_max, num_mel_filters + 2); let filter_freqs = mel_to_hertz(mel_freqs, mel_scale); let fft_freqs; // frequencies of FFT bins in Hz if (triangularize_in_mel_space) { const fft_bin_width = sampling_rate / ((num_frequency_bins - 1) * 2); fft_freqs = hertz_to_mel(Float64Array.from({ length: num_frequency_bins }, (_, i) => i * fft_bin_width), mel_scale); filter_freqs = mel_freqs; } else { fft_freqs = linspace(0, Math.floor(sampling_rate / 2), num_frequency_bins); } const mel_filters = _create_triangular_filter_bank(fft_freqs, filter_freqs); if (norm !== null && norm === "slaney") { // Slaney-style mel is scaled to be approx constant energy per channel for (let i = 0; i < num_mel_filters; ++i) { const filter = mel_filters[i]; const enorm = 2.0 / (filter_freqs[i + 2] - filter_freqs[i]); for (let j = 0; j < num_frequency_bins; ++j) { // Apply this enorm to all frequency bins filter[j] *= enorm; } } } // TODO warn if there is a zero row return mel_filters; } /** * @template {Float32Array|Float64Array} T * Pads an array with a reflected version of itself on both ends. * @param {T} array The array to pad. * @param {number} left The amount of padding to add to the left. * @param {number} right The amount of padding to add to the right. * @returns {T} The padded array. 
*/ function padReflect(array, left, right) { // @ts-ignore const padded = new array.constructor(array.length + left + right); const w = array.length - 1; for (let i = 0; i < array.length; ++i) { padded[left + i] = array[i]; } for (let i = 1; i <= left; ++i) { padded[left - i] = array[calculateReflectOffset(i, w)]; } for (let i = 1; i <= right; ++i) { padded[w + left + i] = array[calculateReflectOffset(w - i, w)]; } return padded; } /** * Helper function to compute `amplitude_to_db` and `power_to_db`. * @template {Float32Array|Float64Array} T * @param {T} spectrogram * @param {number} factor * @param {number} reference * @param {number} min_value * @param {number} db_range * @returns {T} */ function _db_conversion_helper(spectrogram, factor, reference, min_value, db_range) { if (reference <= 0) { throw new Error('reference must be greater than zero'); } if (min_value <= 0) { throw new Error('min_value must be greater than zero'); } reference = Math.max(min_value, reference); const logReference = Math.log10(reference); for (let i = 0; i < spectrogram.length; ++i) { spectrogram[i] = factor * Math.log10(Math.max(min_value, spectrogram[i]) - logReference) } if (db_range !== null) { if (db_range <= 0) { throw new Error('db_range must be greater than zero'); } const maxValue = max(spectrogram)[0] - db_range; for (let i = 0; i < spectrogram.length; ++i) { spectrogram[i] = Math.max(spectrogram[i], maxValue); } } return spectrogram; } /** * Converts an amplitude spectrogram to the decibel scale. This computes `20 * log10(spectrogram / reference)`, * using basic logarithm properties for numerical stability. NOTE: Operates in-place. * * The motivation behind applying the log function on the (mel) spectrogram is that humans do not hear loudness on a * linear scale. Generally to double the perceived volume of a sound we need to put 8 times as much energy into it. * This means that large variations in energy may not sound all that different if the sound is loud to begin with. 
* This compression operation makes the (mel) spectrogram features match more closely what humans actually hear. * * @template {Float32Array|Float64Array} T * @param {T} spectrogram The input amplitude (mel) spectrogram. * @param {number} [reference=1.0] Sets the input spectrogram value that corresponds to 0 dB. * For example, use `np.max(spectrogram)` to set the loudest part to 0 dB. Must be greater than zero. * @param {number} [min_value=1e-5] The spectrogram will be clipped to this minimum value before conversion to decibels, * to avoid taking `log(0)`. The default of `1e-5` corresponds to a minimum of -100 dB. Must be greater than zero. * @param {number} [db_range=null] Sets the maximum dynamic range in decibels. For example, if `db_range = 80`, the * difference between the peak value and the smallest value will never be more than 80 dB. Must be greater than zero. * @returns {T} The modified spectrogram in decibels. */ function amplitude_to_db(spectrogram, reference = 1.0, min_value = 1e-5, db_range = null) { return _db_conversion_helper(spectrogram, 20.0, reference, min_value, db_range); } /** * Converts a power spectrogram to the decibel scale. This computes `10 * log10(spectrogram / reference)`, * using basic logarithm properties for numerical stability. NOTE: Operates in-place. * * The motivation behind applying the log function on the (mel) spectrogram is that humans do not hear loudness on a * linear scale. Generally to double the perceived volume of a sound we need to put 8 times as much energy into it. * This means that large variations in energy may not sound all that different if the sound is loud to begin with. * This compression operation makes the (mel) spectrogram features match more closely what humans actually hear. * * Based on the implementation of `librosa.power_to_db`. * * @template {Float32Array|Float64Array} T * @param {T} spectrogram The input power (mel) spectrogram. Note that a power spectrogram has the amplitudes squared! 
* @param {number} [reference=1.0] Sets the input spectrogram value that corresponds to 0 dB. * For example, use `np.max(spectrogram)` to set the loudest part to 0 dB. Must be greater than zero. * @param {number} [min_value=1e-10] The spectrogram will be clipped to this minimum value before conversion to decibels, * to avoid taking `log(0)`. The default of `1e-10` corresponds to a minimum of -100 dB. Must be greater than zero. * @param {number} [db_range=null] Sets the maximum dynamic range in decibels. For example, if `db_range = 80`, the * difference between the peak value and the smallest value will never be more than 80 dB. Must be greater than zero. * @returns {T} The modified spectrogram in decibels. */ function power_to_db(spectrogram, reference = 1.0, min_value = 1e-10, db_range = null) { return _db_conversion_helper(spectrogram, 10.0, reference, min_value, db_range); } /** * Calculates a spectrogram over one waveform using the Short-Time Fourier Transform. * * This function can create the following kinds of spectrograms: * - amplitude spectrogram (`power = 1.0`) * - power spectrogram (`power = 2.0`) * - complex-valued spectrogram (`power = None`) * - log spectrogram (use `log_mel` argument) * - mel spectrogram (provide `mel_filters`) * - log-mel spectrogram (provide `mel_filters` and `log_mel`) * * In this implementation, the window is assumed to be zero-padded to have the same size as the analysis frame. * A padded window can be obtained from `window_function()`. The FFT input buffer may be larger than the analysis frame, * typically the next power of two. * * @param {Float32Array|Float64Array} waveform The input waveform of shape `(length,)`. This must be a single real-valued, mono waveform. * @param {Float32Array|Float64Array} window The windowing function to apply of shape `(frame_length,)`, including zero-padding if necessary. The actual window length may be * shorter than `frame_length`, but we're assuming the array has already been zero-padded. 
* @param {number} frame_length The length of the analysis frames in samples (a.k.a., `fft_length`). * @param {number} hop_length The stride between successive analysis frames in samples. * @param {Object} options * @param {number} [options.fft_length=null] The size of the FFT buffer in samples. This determines how many frequency bins the spectrogram will have. * For optimal speed, this should be a power of two. If `null`, uses `frame_length`. * @param {number} [options.power=1.0] If 1.0, returns the amplitude spectrogram. If 2.0, returns the power spectrogram. If `null`, returns complex numbers. * @param {boolean} [options.center=true] Whether to pad the waveform so that frame `t` is centered around time `t * hop_length`. If `false`, frame * `t` will start at time `t * hop_length`. * @param {string} [options.pad_mode="reflect"] Padding mode used when `center` is `true`. Possible values are: `"constant"` (pad with zeros), * `"edge"` (pad with edge values), `"reflect"` (pads with mirrored values). * @param {boolean} [options.onesided=true] If `true`, only computes the positive frequencies and returns a spectrogram containing `fft_length // 2 + 1` * frequency bins. If `false`, also computes the negative frequencies and returns `fft_length` frequency bins. * @param {number} [options.preemphasis=null] Coefficient for a low-pass filter that applies pre-emphasis before the DFT. * @param {boolean} [options.preemphasis_htk_flavor=true] Whether to apply the pre-emphasis filter in the HTK flavor. * @param {number[][]} [options.mel_filters=null] The mel filter bank of shape `(num_freq_bins, num_mel_filters)`. * If supplied, applies this filter bank to create a mel spectrogram. * @param {number} [options.mel_floor=1e-10] Minimum value of mel frequency banks. * @param {string} [options.log_mel=null] How to convert the spectrogram to log scale. 
Possible options are: * `null` (don't convert), `"log"` (take the natural logarithm) `"log10"` (take the base-10 logarithm), `"dB"` (convert to decibels). * Can only be used when `power` is not `null`. * @param {number} [options.reference=1.0] Sets the input spectrogram value that corresponds to 0 dB. For example, use `max(spectrogram)[0]` to set * the loudest part to 0 dB. Must be greater than zero. * @param {number} [options.min_value=1e-10] The spectrogram will be clipped to this minimum value before conversion to decibels, to avoid taking `log(0)`. * For a power spectrogram, the default of `1e-10` corresponds to a minimum of -100 dB. For an amplitude spectrogram, the value `1e-5` corresponds to -100 dB. * Must be greater than zero. * @param {number} [options.db_range=null] Sets the maximum dynamic range in decibels. For example, if `db_range = 80`, the difference between the * peak value and the smallest value will never be more than 80 dB. Must be greater than zero. * @param {boolean} [options.remove_dc_offset=null] Subtract mean from waveform on each frame, applied before pre-emphasis. This should be set to `true` in * order to get the same results as `torchaudio.compliance.kaldi.fbank` when computing mel filters. * @param {number} [options.max_num_frames=null] If provided, limits the number of frames to compute to this value. * @param {number} [options.min_num_frames=null] If provided, ensures the number of frames to compute is at least this value. * @param {boolean} [options.do_pad=true] If `true`, pads the output spectrogram to have `max_num_frames` frames. * @param {boolean} [options.transpose=false] If `true`, the returned spectrogram will have shape `(num_frames, num_frequency_bins/num_mel_filters)`. If `false`, the returned spectrogram will have shape `(num_frequency_bins/num_mel_filters, num_frames)`. 
* @returns {Promise<Tensor>} Spectrogram of shape `(num_frequency_bins, length)` (regular spectrogram) or shape `(num_mel_filters, length)` (mel spectrogram). */ export async function spectrogram( waveform, window, frame_length, hop_length, { fft_length = null, power = 1.0, center = true, pad_mode = "reflect", onesided = true, preemphasis = null, preemphasis_htk_flavor = true, mel_filters = null, mel_floor = 1e-10, log_mel = null, reference = 1.0, min_value = 1e-10, db_range = null, remove_dc_offset = null, // Custom parameters for efficiency reasons min_num_frames = null, max_num_frames = null, do_pad = true, transpose = false, } = {} ) { const window_length = window.length; if (fft_length === null) { fft_length = frame_length; } if (frame_length > fft_length) { throw Error(`frame_length (${frame_length}) may not be larger than fft_length (${fft_length})`) } if (window_length !== frame_length) { throw new Error(`Length of the window (${window_length}) must equal frame_length (${frame_length})`); } if (hop_length <= 0) { throw new Error("hop_length must be greater than zero"); } if (power === null && mel_filters !== null) { throw new Error( "You have provided `mel_filters` but `power` is `None`. Mel spectrogram computation is not yet supported for complex-valued spectrogram. " + "Specify `power` to fix this issue." ); } if (!preemphasis_htk_flavor) { throw new Error( "`preemphasis_htk_flavor=false` is not currently supported." ); } if (center) { if (pad_mode !== 'reflect') { throw new Error(`pad_mode="${pad_mode}" not implemented yet.`) } const half_window = Math.floor((fft_length - 1) / 2) + 1; waveform = padReflect(waveform, half_window, half_window); } // split waveform into frames of frame_length size let num_frames = Math.floor(1 + Math.floor((waveform.length - frame_length) / hop_length)) if (min_num_frames !== null && num_frames < min_num_frames) { num_frames = min_num_frames } const num_frequency_bins = onesided ? 
Math.floor(fft_length / 2) + 1 : fft_length let d1 = num_frames; let d1Max = num_frames; // If maximum number of frames is provided, we must either pad or truncate if (max_num_frames !== null) { if (max_num_frames > num_frames) { // input is too short, so we pad if (do_pad) { d1Max = max_num_frames; } } else { // input is too long, so we truncate d1Max = d1 = max_num_frames; } } // Preallocate arrays to store output. const fft = new FFT(fft_length); const inputBuffer = new Float64Array(fft_length); const outputBuffer = new Float64Array(fft.outputBufferSize); const transposedMagnitudeData = new Float32Array(num_frequency_bins * d1Max); for (let i = 0; i < d1; ++i) { // Populate buffer with waveform data const offset = i * hop_length; const buffer_size = Math.min(waveform.length - offset, frame_length); if (buffer_size !== frame_length) { // The full buffer is not needed, so we need to reset it (avoid overflow from previous iterations) // NOTE: We don't need to reset the buffer if it's full since we overwrite the first // `frame_length` values and the rest (`fft_length - frame_length`) remains zero. 
inputBuffer.fill(0, 0, frame_length); } for (let j = 0; j < buffer_size; ++j) { inputBuffer[j] = waveform[offset + j]; } if (remove_dc_offset) { let sum = 0; for (let j = 0; j < buffer_size; ++j) { sum += inputBuffer[j]; } const mean = sum / buffer_size; for (let j = 0; j < buffer_size; ++j) { inputBuffer[j] -= mean; } } if (preemphasis !== null) { // Done in reverse to avoid copies and destructive modification for (let j = buffer_size - 1; j >= 1; --j) { inputBuffer[j] -= preemphasis * inputBuffer[j - 1]; } inputBuffer[0] *= 1 - preemphasis; } // Apply window function for (let j = 0; j < window.length; ++j) { inputBuffer[j] *= window[j]; } fft.realTransform(outputBuffer, inputBuffer); // compute magnitudes for (let j = 0; j < num_frequency_bins; ++j) { const j2 = j << 1; // NOTE: We transpose the data here to avoid doing it later transposedMagnitudeData[j * d1Max + i] = outputBuffer[j2] ** 2 + outputBuffer[j2 + 1] ** 2; } } if (power !== null && power !== 2) { // slight optimization to not sqrt const pow = power / 2; // we use 2 since we already squared for (let i = 0; i < transposedMagnitudeData.length; ++i) { transposedMagnitudeData[i] **= pow; } } // TODO: What if `mel_filters` is null? 
    const num_mel_filters = mel_filters.length;

    // Perform matrix multiplication:
    // mel_spec = mel_filters @ magnitudes.T
    //  - mel_filters.shape=(80, 201)
    //  - magnitudes.shape=(3000, 201) => magnitudes.T.shape=(201, 3000)
    //  - mel_spec.shape=(80, 3000)
    let mel_spec = await matmul(
        // TODO: Make `mel_filters` a Tensor during initialization
        new Tensor('float32', mel_filters.flat(), [num_mel_filters, num_frequency_bins]),
        new Tensor('float32', transposedMagnitudeData, [num_frequency_bins, d1Max]),
    );
    if (transpose) {
        mel_spec = mel_spec.transpose(1, 0);
    }

    const mel_spec_data = /** @type {Float32Array} */(mel_spec.data);
    for (let i = 0; i < mel_spec_data.length; ++i) {
        mel_spec_data[i] = Math.max(mel_floor, mel_spec_data[i]);
    }

    if (power !== null && log_mel !== null) {
        const o = Math.min(mel_spec_data.length, d1 * num_mel_filters);
        // NOTE: operates in-place
        switch (log_mel) {
            case 'log':
                for (let i = 0; i < o; ++i) {
                    mel_spec_data[i] = Math.log(mel_spec_data[i]);
                }
                break;
            case 'log10':
                for (let i = 0; i < o; ++i) {
                    mel_spec_data[i] = Math.log10(mel_spec_data[i]);
                }
                break;
            case 'dB':
                if (power === 1.0) {
                    amplitude_to_db(mel_spec_data, reference, min_value, db_range);
                } else if (power === 2.0) {
                    power_to_db(mel_spec_data, reference, min_value, db_range);
                } else {
                    throw new Error(`Cannot use log_mel option '${log_mel}' with power ${power}`);
                }
                break;
            default:
                throw new Error(`log_mel must be one of null, 'log', 'log10' or 'dB'. Got '${log_mel}'`);
        }
    }

    return mel_spec;
}

/**
 * Returns an array containing the specified window.
 * @param {number} window_length The length of the window in samples.
 * @param {string} name The name of the window function.
 * @param {Object} options Additional options.
 * @param {boolean} [options.periodic=true] Whether the window is periodic or symmetric.
 * @param {number} [options.frame_length=null] The length of the analysis frames in samples.
* Provide a value for `frame_length` if the window is smaller than the frame length, so that it will be zero-padded. * @param {boolean} [options.center=true] Whether to center the window inside the FFT buffer. Only used when `frame_length` is provided. * @returns {Float64Array} The window of shape `(window_length,)` or `(frame_length,)`. */ export function window_function(window_length, name, { periodic = true, frame_length = null, center = true, } = {}) { const length = periodic ? window_length + 1 : window_length; let window; switch (name) { case 'boxcar': window = new Float64Array(length).fill(1.0); break; case 'hann': case 'hann_window': window = hanning(length); break; case 'hamming': window = hamming(length); break; case 'povey': window = hanning(length).map(x => Math.pow(x, 0.85)); break; default: throw new Error(`Unknown window type ${name}.`); } if (periodic) { window = window.subarray(0, window_length); } if (frame_length === null) { return window; } if (window_length > frame_length) { throw new Error(`Length of the window (${window_length}) may not be larger than frame_length (${frame_length})`); } return window; } /** * Encode audio data to a WAV file. * WAV file specs : https://en.wikipedia.org/wiki/WAV#WAV_File_header * * Adapted from https://www.npmjs.com/package/audiobuffer-to-wav * @param {Float32Array} samples The audio samples. * @param {number} rate The sample rate. * @returns {ArrayBuffer} The WAV audio buffer. 
*/ function encodeWAV(samples, rate) { let offset = 44; const buffer = new ArrayBuffer(offset + samples.length * 4); const view = new DataView(buffer); /* RIFF identifier */ writeString(view, 0, "RIFF"); /* RIFF chunk length */ view.setUint32(4, 36 + samples.length * 4, true); /* RIFF type */ writeString(view, 8, "WAVE"); /* format chunk identifier */ writeString(view, 12, "fmt "); /* format chunk length */ view.setUint32(16, 16, true); /* sample format (raw) */ view.setUint16(20, 3, true); /* channel count */ view.setUint16(22, 1, true); /* sample rate */ view.setUint32(24, rate, true); /* byte rate (sample rate * block align) */ view.setUint32(28, rate * 4, true); /* block align (channel count * bytes per sample) */ view.setUint16(32, 4, true); /* bits per sample */ view.setUint16(34, 32, true); /* data chunk identifier */ writeString(view, 36, "data"); /* data chunk length */ view.setUint32(40, samples.length * 4, true); for (let i = 0; i < samples.length; ++i, offset += 4) { view.setFloat32(offset, samples[i], true); } return buffer; } function writeString(view, offset, string) { for (let i = 0; i < string.length; ++i) { view.setUint8(offset + i, string.charCodeAt(i)); } } export class RawAudio { /** * Create a new `RawAudio` object. * @param {Float32Array} audio Audio data * @param {number} sampling_rate Sampling rate of the audio data */ constructor(audio, sampling_rate) { this.audio = audio this.sampling_rate = sampling_rate } /** * Convert the audio to a wav file buffer. * @returns {ArrayBuffer} The WAV file. */ toWav() { return encodeWAV(this.audio, this.sampling_rate) } /** * Convert the audio to a blob. * @returns {Blob} */ toBlob() { const wav = this.toWav(); const blob = new Blob([wav], { type: 'audio/wav' }); return blob; } /** * Save the audio to a wav file. 
* @param {string} path */ async save(path) { let fn; if (apis.IS_BROWSER_ENV) { if (apis.IS_WEBWORKER_ENV) { throw new Error('Unable to save a file from a Web Worker.') } fn = saveBlob; } else if (apis.IS_FS_AVAILABLE) { fn = async (/** @type {string} */ path, /** @type {Blob} */ blob) => { let buffer = await blob.arrayBuffer(); fs.writeFileSync(path, Buffer.from(buffer)); } } else { throw new Error('Unable to save because filesystem is disabled in this environment.') } await fn(path, this.toBlob()) } }
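The shape bookkeeping inside `spectrogram` above (reflect padding when `center=true`, frame count, one-sided frequency bins) can be sketched independently of the FFT itself. The helper names below are illustrative, not part of the library:

```javascript
// With `center=true`, the waveform is reflect-padded by half the FFT
// window on each side before framing.
function paddedLength(waveformLength, fft_length) {
    const half_window = Math.floor((fft_length - 1) / 2) + 1;
    return waveformLength + 2 * half_window;
}

// Number of analysis frames that fit in the (padded) waveform.
function numFrames(waveformLength, frame_length, hop_length) {
    return 1 + Math.floor((waveformLength - frame_length) / hop_length);
}

// Number of frequency bins in the output spectrogram.
function numFrequencyBins(fft_length, onesided = true) {
    return onesided ? Math.floor(fft_length / 2) + 1 : fft_length;
}

// Example: Whisper-style settings (30 s of 16 kHz audio, 25 ms window, 10 ms hop).
const padded = paddedLength(480000, 400); // 480400
const frames = numFrames(padded, 400, 160); // 3001
const bins = numFrequencyBins(400); // 201
```

Note that `numFrequencyBins` mirrors the `onesided` ternary in the function body: real-valued input makes the negative-frequency half redundant, so only `fft_length // 2 + 1` bins are kept by default.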
transformers.js/src/utils/audio.js
import { GemmaTokenizer, GemmaForCausalLM } from "../../../src/transformers.js"; import { MAX_MODEL_LOAD_TIME, MAX_TEST_EXECUTION_TIME, MAX_MODEL_DISPOSE_TIME, DEFAULT_MODEL_OPTIONS } from "../../init.js"; export default () => { describe("GemmaForCausalLM", () => { const model_id = "Xenova/tiny-random-GemmaForCausalLM"; /** @type {GemmaForCausalLM} */ let model; /** @type {GemmaTokenizer} */ let tokenizer; beforeAll(async () => { model = await GemmaForCausalLM.from_pretrained(model_id, DEFAULT_MODEL_OPTIONS); tokenizer = await GemmaTokenizer.from_pretrained(model_id); tokenizer.padding_side = "left"; }, MAX_MODEL_LOAD_TIME); it( "batch_size=1", async () => { const inputs = tokenizer("hello"); const outputs = await model.generate({ ...inputs, max_length: 10, }); expect(outputs.tolist()).toEqual([[2n, 17534n, 254059n, 254059n, 254059n, 254059n, 254059n, 254059n, 254059n, 254059n]]); }, MAX_TEST_EXECUTION_TIME, ); it( "batch_size>1", async () => { const inputs = tokenizer(["hello", "hello world"], { padding: true }); const outputs = await model.generate({ ...inputs, max_length: 10, }); expect(outputs.tolist()).toEqual([ [0n, 2n, 17534n, 254059n, 254059n, 254059n, 254059n, 254059n, 254059n, 254059n], [2n, 17534n, 2134n, 71055n, 71055n, 71055n, 71055n, 71055n, 71055n, 71055n], ]); }, MAX_TEST_EXECUTION_TIME, ); afterAll(async () => { await model?.dispose(); }, MAX_MODEL_DISPOSE_TIME); }); };
transformers.js/tests/models/gemma/test_modeling_gemma.js
import { Idefics3Processor, Idefics3ForConditionalGeneration, RawImage } from "../../../src/transformers.js"; import { MAX_MODEL_LOAD_TIME, MAX_TEST_EXECUTION_TIME, MAX_MODEL_DISPOSE_TIME, DEFAULT_MODEL_OPTIONS } from "../../init.js"; export default () => { const conversation = [ { role: "user", content: [{ type: "image" }, { type: "text", text: "Can you describe this image?" }], }, ]; // Empty white and black images const white_image_dims = [224, 224, 3]; const white_image = new RawImage(new Uint8ClampedArray(white_image_dims[0] * white_image_dims[1] * white_image_dims[2]).fill(255), ...white_image_dims); const black_image_dims = [720, 360, 3]; const black_image = new RawImage(new Uint8ClampedArray(black_image_dims[0] * black_image_dims[1] * black_image_dims[2]).fill(0), ...black_image_dims); describe("Idefics3ForConditionalGeneration", () => { const model_id = "hf-internal-testing/tiny-random-Idefics3ForConditionalGeneration"; /** @type {Idefics3ForConditionalGeneration} */ let model; /** @type {Idefics3Processor} */ let processor; /** @type {string} */ let text; beforeAll(async () => { model = await Idefics3ForConditionalGeneration.from_pretrained(model_id, DEFAULT_MODEL_OPTIONS); processor = await Idefics3Processor.from_pretrained(model_id); text = processor.apply_chat_template(conversation, { add_generation_prompt: true, }); }, MAX_MODEL_LOAD_TIME); it( "forward w/ image splitting (default)", async () => { const inputs = await processor(text, white_image, { do_image_splitting: true, }); const { logits } = await model(inputs); expect(logits.dims).toEqual([1, 3041, 128259]); expect(logits.mean().item()).toBeCloseTo(-0.0002692154666874558, 6); }, MAX_TEST_EXECUTION_TIME, ); it( "forward w/o image splitting", async () => { const inputs = await processor(text, white_image, { do_image_splitting: false, }); const { logits } = await model(inputs); expect(logits.dims).toEqual([1, 189, 128259]); expect(logits.mean().item()).toBeCloseTo(-0.00019743280427064747, 6); }, 
MAX_TEST_EXECUTION_TIME, ); it( "batch_size=1 w/ image splitting", async () => { const inputs = await processor(text, white_image, { do_image_splitting: true, }); const generate_ids = await model.generate({ ...inputs, max_new_tokens: 10, // To obtain unique output tokens, deterministically repetition_penalty: 2.0, }); expect(generate_ids.dims).toEqual([1, 3051]); const new_tokens = generate_ids.slice(null, [inputs.input_ids.dims.at(-1), null]); expect(new_tokens.tolist()).toEqual([[64531n, 121777n, 70370n, 105334n, 12720n, 113356n, 47739n, 59240n, 102001n, 60344n]]); }, MAX_TEST_EXECUTION_TIME, ); it( "batch_size=1 w/o image splitting", async () => { const inputs = await processor(text, white_image, { do_image_splitting: false, }); const generate_ids = await model.generate({ ...inputs, max_new_tokens: 10, // To obtain unique output tokens, deterministically repetition_penalty: 2.0, }); expect(generate_ids.dims).toEqual([1, 199]); const new_tokens = generate_ids.slice(null, [inputs.input_ids.dims.at(-1), null]); expect(new_tokens.tolist()).toEqual([[64531n, 121777n, 70370n, 105334n, 12720n, 113356n, 47739n, 59240n, 59697n, 65246n]]); }, MAX_TEST_EXECUTION_TIME, ); it( "batch_size=1 multi-image w/o image splitting", async () => { const multi_image_conversation = [ { role: "user", content: [{ type: "image" }, { type: "image" }, { type: "text", text: "Can you describe these images?" 
}], }, ]; const multi_image_text = processor.apply_chat_template(multi_image_conversation, { add_generation_prompt: true, }); const inputs = await processor(multi_image_text, [white_image, black_image], { do_image_splitting: false, }); const generate_ids = await model.generate({ ...inputs, max_new_tokens: 10, // To obtain unique output tokens, deterministically repetition_penalty: 2.0, }); expect(generate_ids.dims).toEqual([1, 374]); const new_tokens = generate_ids.slice(null, [inputs.input_ids.dims.at(-1), null]); expect(new_tokens.tolist()).toEqual([[73189n, 99346n, 113252n, 51743n, 33499n, 66430n, 78739n, 89539n, 121023n, 14474n]]); }, MAX_TEST_EXECUTION_TIME, ); afterAll(async () => { await model?.dispose(); }, MAX_MODEL_DISPOSE_TIME); }); };
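The generation tests above pass `repetition_penalty: 2.0` specifically to make greedy decoding emit distinct tokens. A minimal sketch of the standard (CTRL-style) repetition-penalty rule that setting invokes — the helper name is illustrative:

```javascript
// Apply a CTRL-style repetition penalty in place: every previously
// generated token id has its logit pushed down (positive logits are
// divided by the penalty, negative ones multiplied), so greedy
// decoding is discouraged from repeating itself.
function applyRepetitionPenalty(logits, previousTokenIds, penalty) {
    for (const id of new Set(previousTokenIds)) {
        logits[id] = logits[id] > 0 ? logits[id] / penalty : logits[id] * penalty;
    }
    return logits;
}

// With penalty=2.0, a token just emitted with logit 3.0 drops to 1.5,
// letting a close runner-up overtake it on the next step.
applyRepetitionPenalty([3.0, 2.0, -1.0], [0, 2], 2.0); // [1.5, 2.0, -2.0]
```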
transformers.js/tests/models/idefics3/test_modeling_idefics3.js
import { AutoFeatureExtractor, MoonshineFeatureExtractor } from "../../../src/transformers.js"; import { load_cached_audio } from "../../asset_cache.js"; import { MAX_FEATURE_EXTRACTOR_LOAD_TIME, MAX_TEST_EXECUTION_TIME } from "../../init.js"; export default () => { // MoonshineFeatureExtractor describe("MoonshineFeatureExtractor", () => { const model_id = "onnx-community/moonshine-tiny-ONNX"; /** @type {MoonshineFeatureExtractor} */ let feature_extractor; beforeAll(async () => { feature_extractor = await AutoFeatureExtractor.from_pretrained(model_id); }, MAX_FEATURE_EXTRACTOR_LOAD_TIME); it( "default", async () => { const audio = await load_cached_audio("mlk"); const { input_values } = await feature_extractor(audio); expect(input_values.dims).toEqual([1, 208000]); expect(input_values.mean().item()).toBeCloseTo(-1.5654930507480458e-7, 6); expect(input_values.data[0]).toBeCloseTo(0.0067138671875, 6); expect(input_values.data.at(-1)).toBeCloseTo(-0.013427734375, 6); }, MAX_TEST_EXECUTION_TIME, ); }); };
transformers.js/tests/models/moonshine/test_feature_extraction_moonshine.js
import { AutoProcessor, Phi3VProcessor } from "../../../src/transformers.js"; import { load_cached_image } from "../../asset_cache.js"; import { MAX_PROCESSOR_LOAD_TIME, MAX_TEST_EXECUTION_TIME } from "../../init.js"; export default () => { const model_id = "onnx-community/Phi-3.5-vision-instruct"; describe("Phi3VProcessor", () => { /** @type {Phi3VProcessor} */ let processor; let images = {}; beforeAll(async () => { processor = await AutoProcessor.from_pretrained(model_id, { // Use legacy to match python version legacy: true, }); images = { white_image: await load_cached_image("white_image"), }; }, MAX_PROCESSOR_LOAD_TIME); const create_prompt = (text, images = []) => { const placeholder = images.map((_, i) => `<|image_${i + 1}|>\n`).join(""); const messages = [{ role: "user", content: placeholder + text }]; const prompt = processor.tokenizer.apply_chat_template(messages, { tokenize: false, add_generation_prompt: true }); return prompt; }; it( "Text-only", async () => { const prompt = create_prompt("Hi there."); const { input_ids, pixel_values } = await processor(prompt); expect(input_ids.dims).toEqual([1, 11]); expect(pixel_values).toBeUndefined(); }, MAX_TEST_EXECUTION_TIME, ); it( "Single image & text", async () => { const imgs = [images.white_image]; const prompt = create_prompt("Describe this image.", imgs); const { input_ids, attention_mask, pixel_values, image_sizes } = await processor(prompt, imgs); expect(input_ids.dims).toEqual([1, /* 773 */ 770]); expect(attention_mask.dims).toEqual(input_ids.dims); expect(pixel_values.dims).toEqual([1, 5, 3, 336, 336]); expect(image_sizes.tolist()).toEqual([[672n, 672n]]); }, MAX_TEST_EXECUTION_TIME, ); it( "Single image (num_crops=16) & text", async () => { const imgs = [images.white_image]; const prompt = create_prompt("Describe this image.", imgs); const { input_ids, attention_mask, pixel_values, image_sizes } = await processor(prompt, imgs, { num_crops: 16 }); expect(input_ids.dims).toEqual([1, /* 2525 */ 2522]); 
expect(attention_mask.dims).toEqual(input_ids.dims); expect(pixel_values.dims).toEqual([1, 17, 3, 336, 336]); expect(image_sizes.tolist()).toEqual([[1344n, 1344n]]); }, MAX_TEST_EXECUTION_TIME, ); it( "Multiple images & text", async () => { const imgs = [images.white_image, images.white_image]; const prompt = create_prompt("Describe these images.", imgs); const { input_ids, attention_mask, pixel_values, image_sizes } = await processor(prompt, imgs); expect(input_ids.dims).toEqual([1, /* 1533 */ 1527]); expect(attention_mask.dims).toEqual(input_ids.dims); expect(pixel_values.dims).toEqual([2, 5, 3, 336, 336]); expect(image_sizes.tolist()).toEqual([ [672n, 672n], [672n, 672n], ]); }, MAX_TEST_EXECUTION_TIME, ); }); };
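The `pixel_values` and `image_sizes` expectations above are consistent with Phi-3-vision-style cropping for square inputs: the image is resized to a `sqrt(num_crops) x sqrt(num_crops)` grid of 336x336 tiles, plus one global 336x336 thumbnail. A rough sketch of that relationship (hypothetical helper, square images only — this is my reading of the test values, not library code):

```javascript
// Square-image case: num_crops=4 gives a 2x2 grid (672x672) plus a
// global thumbnail, i.e. 5 tiles; num_crops=16 gives 4x4 (1344x1344)
// plus the thumbnail, i.e. 17 tiles.
function cropLayout(num_crops, tile = 336) {
    const side = Math.sqrt(num_crops) * tile;
    return {
        image_size: [side, side], // cf. `image_sizes` in the tests
        num_tiles: num_crops + 1, // grid tiles + global thumbnail
    };
}

cropLayout(4); // { image_size: [672, 672], num_tiles: 5 }
cropLayout(16); // { image_size: [1344, 1344], num_tiles: 17 }
```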
transformers.js/tests/models/phi3_v/test_processor_phi3_v.js
import { AutoImageProcessor, VitMatteImageProcessor } from "../../../src/transformers.js"; import { load_cached_image } from "../../asset_cache.js"; import { MAX_PROCESSOR_LOAD_TIME, MAX_TEST_EXECUTION_TIME } from "../../init.js"; export default () => { // VitMatteImageProcessor // - tests custom overrides // - tests multiple inputs // - tests `size_divisibility` and no size (size_divisibility=32) // - tests do_pad and `size_divisibility` describe("VitMatteImageProcessor", () => { const model_id = "Xenova/vitmatte-small-distinctions-646"; /** @type {VitMatteImageProcessor} */ let processor; beforeAll(async () => { processor = await AutoImageProcessor.from_pretrained(model_id); }, MAX_PROCESSOR_LOAD_TIME); it( "w/o resize", async () => { const image = await load_cached_image("vitmatte_image"); const image2 = await load_cached_image("vitmatte_trimap"); const { pixel_values, original_sizes, reshaped_input_sizes } = await processor(image, image2); const { data, dims } = pixel_values; expect(dims).toEqual([1, 4, 640, 960]); expect(pixel_values.mean().item()).toBeCloseTo(-0.4028555154800415); expect(data[0]).toBeCloseTo(-0.9921568632125854); expect(data[1]).toBeCloseTo(-0.9921568632125854); expect(data[5]).toBeCloseTo(-1.0); expect(data[640]).toBeCloseTo(-0.6784313917160034); expect(data[641]).toBeCloseTo(-0.6705882549285889); expect(data[640 * 960]).toBeCloseTo(-1.0); expect(data[640 * 960 + 1]).toBeCloseTo(-1.0); expect(data.at(-1)).toBeCloseTo(0.0); expect(original_sizes).toEqual([[640, 960]]); expect(reshaped_input_sizes).toEqual([[640, 960]]); }, MAX_TEST_EXECUTION_TIME, ); it( "w/ resize", async () => { const image = await load_cached_image("pattern_3x5"); const image2 = await load_cached_image("pattern_3x5"); const { pixel_values, original_sizes, reshaped_input_sizes } = await processor(image, image2); const { data, dims } = pixel_values; expect(dims).toEqual([1, 4, 32, 32]); expect(pixel_values.mean().item()).toBeCloseTo(-0.00867417361587286); 
expect(data[0]).toBeCloseTo(-0.9921568632125854); expect(data[1]).toBeCloseTo(-0.9686274528503418); expect(data[5]).toBeCloseTo(0.0); expect(data[32]).toBeCloseTo(-0.9215686321258545); expect(data[33]).toBeCloseTo(-0.8980392217636108); expect(data.at(-1)).toBeCloseTo(0.0); expect(original_sizes).toEqual([[5, 3]]); expect(reshaped_input_sizes).toEqual([[5, 3]]); }, MAX_TEST_EXECUTION_TIME, ); }); };
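The pixel values checked above are consistent with the usual mean=0.5, std=0.5 image normalization, which maps 8-bit pixels into [-1, 1]. A sketch, assuming that preprocessing configuration (helper name illustrative):

```javascript
// Map an 8-bit pixel value into [-1, 1]: (x / 255 - mean) / std.
function normalizePixel(value, mean = 0.5, std = 0.5) {
    return (value / 255 - mean) / std;
}

normalizePixel(0); // -1 (pure black)
normalizePixel(255); // 1 (pure white)
normalizePixel(1); // ~ -0.99215686, matching data[0] in the test above
```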
transformers.js/tests/models/vitmatte/test_image_processing_vitmatte.js
import { pipeline, FeatureExtractionPipeline } from "../../src/transformers.js"; import { MAX_MODEL_LOAD_TIME, MAX_TEST_EXECUTION_TIME, MAX_MODEL_DISPOSE_TIME, DEFAULT_MODEL_OPTIONS } from "../init.js"; const PIPELINE_ID = "feature-extraction"; export default () => { describe("Feature Extraction", () => { const model_id = "hf-internal-testing/tiny-random-BertModel"; const texts = ["This is a simple test.", "Hello world"]; /** @type {FeatureExtractionPipeline} */ let pipe; beforeAll(async () => { pipe = await pipeline(PIPELINE_ID, model_id, DEFAULT_MODEL_OPTIONS); }, MAX_MODEL_LOAD_TIME); it("should be an instance of FeatureExtractionPipeline ", () => { expect(pipe).toBeInstanceOf(FeatureExtractionPipeline); }); describe("batch_size=1", () => { it( "default", async () => { const output = await pipe(texts[0]); expect(output.dims).toEqual([1, 20, 32]); expect(output.type).toEqual("float32"); expect(output.mean().item()).toBeCloseTo(-1.538501215314625e-9, 6); }, MAX_TEST_EXECUTION_TIME, ); it( "w/ cls pooling", async () => { const output = await pipe(texts[0], { pooling: "cls" }); expect(output.dims).toEqual([1, 32]); expect(output.type).toEqual("float32"); expect(output.mean().item()).toBeCloseTo(2.491287887096405e-8, 6); }, MAX_TEST_EXECUTION_TIME, ); it( "w/ mean pooling & normalization", async () => { const output = await pipe(texts[0], { pooling: "mean", normalize: true }); expect(output.dims).toEqual([1, 32]); expect(output.type).toEqual("float32"); expect(output.mean().item()).toBeCloseTo(-2.0245352061465383e-9, 6); }, MAX_TEST_EXECUTION_TIME, ); it( "w/ mean pooling & binary quantization", async () => { const output = await pipe(texts[0], { pooling: "mean", quantize: true, precision: "binary" }); expect(output.dims).toEqual([1, 32 / 8]); expect(output.type).toEqual("int8"); expect(output.mean().item()).toEqual(-15); }, MAX_TEST_EXECUTION_TIME, ); it("w/ cls pooling & ubinary quantization", async () => { const output = await pipe(texts[0], { pooling: "cls", 
quantize: true, precision: "ubinary" }); expect(output.dims).toEqual([1, 32 / 8]); expect(output.type).toEqual("uint8"); expect(output.mean().item()).toEqual(140); }); }); describe("batch_size>1", () => { it( "default", async () => { const output = await pipe(texts); expect(output.dims).toEqual([texts.length, 20, 32]); expect(output.type).toEqual("float32"); expect(output.mean().item()).toBeCloseTo(2.345950544935249e-9, 6); }, MAX_TEST_EXECUTION_TIME, ); it( "w/ cls pooling", async () => { const output = await pipe(texts, { pooling: "cls" }); expect(output.dims).toEqual([texts.length, 32]); expect(output.type).toEqual("float32"); expect(output.mean().item()).toBeCloseTo(1.6298145055770874e-8, 6); }, MAX_TEST_EXECUTION_TIME, ); it( "w/ mean pooling & normalization", async () => { const output = await pipe(texts, { pooling: "mean", normalize: true }); expect(output.dims).toEqual([texts.length, 32]); expect(output.type).toEqual("float32"); expect(output.mean().item()).toBeCloseTo(-1.538609240014921e-10, 6); }, MAX_TEST_EXECUTION_TIME, ); it("w/ mean pooling & binary quantization", async () => { const output = await pipe(texts, { pooling: "mean", quantize: true, precision: "binary" }); expect(output.dims).toEqual([texts.length, 32 / 8]); expect(output.type).toEqual("int8"); expect(output.mean().item()).toEqual(-14); }); it("w/ cls pooling & ubinary quantization", async () => { const output = await pipe(texts, { pooling: "cls", quantize: true, precision: "ubinary" }); expect(output.dims).toEqual([texts.length, 32 / 8]); expect(output.type).toEqual("uint8"); expect(output.mean().item()).toEqual(140); }); }); afterAll(async () => { await pipe.dispose(); }, MAX_MODEL_DISPOSE_TIME); }); };
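The `precision: "binary"` / `"ubinary"` cases above pack each group of 8 embedding dimensions into one byte, which is why a 32-dim embedding shrinks to `32 / 8 = 4` values. A sketch of that thresholding-and-packing step, assuming the common sign-bit convention in which `binary` is the `ubinary` byte shifted by 128 into int8 range (helper names illustrative):

```javascript
// Quantize an embedding to 1 bit per dimension: bit = (value > 0),
// packed 8 dimensions per byte, most significant bit first.
function ubinaryQuantize(embedding) {
    const packed = new Uint8Array(Math.ceil(embedding.length / 8));
    for (let i = 0; i < embedding.length; ++i) {
        if (embedding[i] > 0) {
            packed[i >> 3] |= 0x80 >> (i & 7);
        }
    }
    return packed;
}

// Signed variant: same bits, shifted from [0, 255] into [-128, 127].
function binaryQuantize(embedding) {
    return Int8Array.from(ubinaryQuantize(embedding), (b) => b - 128);
}

// A 32-dim embedding with alternating signs collapses to 4 bytes.
const emb = Float32Array.from({ length: 32 }, (_, i) => (i % 2 === 0 ? 1 : -1));
ubinaryQuantize(emb); // Uint8Array [170, 170, 170, 170] (0b10101010 each)
```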
transformers.js/tests/pipelines/test_pipelines_feature_extraction.js
import { pipeline, ZeroShotClassificationPipeline } from "../../src/transformers.js"; import { MAX_MODEL_LOAD_TIME, MAX_TEST_EXECUTION_TIME, MAX_MODEL_DISPOSE_TIME, DEFAULT_MODEL_OPTIONS } from "../init.js"; const PIPELINE_ID = "zero-shot-classification"; export default () => { describe("Zero-shot Classification", () => { const model_id = "hf-internal-testing/tiny-random-BertForSequenceClassification"; /** @type {ZeroShotClassificationPipeline} */ let pipe; beforeAll(async () => { pipe = await pipeline(PIPELINE_ID, model_id, { ...DEFAULT_MODEL_OPTIONS, // The model isn't designed for zero-shot classification, so we set the config config: { model_type: "bert", id2label: { 0: "contradiction", 1: "entailment", }, label2id: { contradiction: 0, entailment: 1, }, }, }); }, MAX_MODEL_LOAD_TIME); it("should be an instance of ZeroShotClassificationPipeline", () => { expect(pipe).toBeInstanceOf(ZeroShotClassificationPipeline); }); const sequences_to_classify = ["one day I will see the world", "I love making pizza"]; const candidate_labels = ["travel", "cooking", "dancing"]; it( "Single sequence classification", async () => { const output = await pipe(sequences_to_classify[0], candidate_labels); const target = { sequence: "one day I will see the world", labels: ["dancing", "cooking", "travel"], scores: [0.3333353410546293, 0.3333348269618681, 0.3333298319835025], }; expect(output).toBeCloseToNested(target, 5); }, MAX_TEST_EXECUTION_TIME, ); it( "Batched classification", async () => { const output = await pipe(sequences_to_classify, candidate_labels); const target = [ { sequence: "one day I will see the world", labels: ["dancing", "cooking", "travel"], scores: [0.3333353410546293, 0.3333348269618681, 0.3333298319835025], }, { sequence: "I love making pizza", labels: ["dancing", "cooking", "travel"], scores: [0.3333347058960895, 0.3333337292465588, 0.3333315648573516], }, ]; expect(output).toBeCloseToNested(target, 5); }, MAX_TEST_EXECUTION_TIME, ); it( "Batched + multilabel 
classification", async () => { const candidate_labels = ["travel", "cooking", "dancing"]; const output = await pipe(sequences_to_classify, candidate_labels, { multi_label: true }); const target = [ { sequence: "one day I will see the world", labels: ["dancing", "cooking", "travel"], scores: [0.49231469615364476, 0.4923134953805702, 0.4923094795142658], }, { sequence: "I love making pizza", labels: ["dancing", "cooking", "travel"], scores: [0.49230751217535645, 0.49230615475943956, 0.4923042569480609], }, ]; expect(output).toBeCloseToNested(target, 5); }, MAX_TEST_EXECUTION_TIME, ); afterAll(async () => { await pipe.dispose(); }, MAX_MODEL_DISPOSE_TIME); }); };
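The near-uniform scores in the expectations above follow from how zero-shot NLI scoring normalizes the entailment logits: a single softmax across all candidate labels in the default mode, versus an independent softmax over each label's own [contradiction, entailment] pair when `multi_label: true`. A rough sketch of the two normalizations (helper names illustrative):

```javascript
// Numerically stable softmax.
function softmax(logits) {
    const max = Math.max(...logits);
    const exps = logits.map((x) => Math.exp(x - max));
    const sum = exps.reduce((a, b) => a + b, 0);
    return exps.map((x) => x / sum);
}

// Single-label mode: softmax over the entailment logits of all labels,
// so scores sum to 1.
function singleLabelScores(entailmentLogits) {
    return softmax(entailmentLogits);
}

// multi_label mode: each label is scored independently against its own
// contradiction logit, so scores need not sum to 1.
function multiLabelScores(pairs) {
    return pairs.map(([contradiction, entailment]) => softmax([contradiction, entailment])[1]);
}

// A randomly initialized model emits near-identical logits, hence the
// ~1/3 (single-label) and ~0.49 (multi-label) scores in the tests above.
singleLabelScores([0.1, 0.1, 0.1]); // [0.333..., 0.333..., 0.333...]
```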
transformers.js/tests/pipelines/test_pipelines_zero_shot.js
{ // Only include files in the src directory "include": ["src/**/*"], "compilerOptions": { // Tells the compiler to check JS files "checkJs": true, "target": "esnext", "module": "nodenext", "moduleResolution": "nodenext", "outDir": "types", "strict": false, "skipLibCheck": true, "declaration": true, "declarationMap": true, "noEmit": false, "emitDeclarationOnly": true }, "typeAcquisition": { "include": ["jest"] } }
transformers.js/tsconfig.json
cff-version: "1.2.0"
date-released: 2020-10
message: "If you use this software, please cite it using these metadata."
title: "Transformers: State-of-the-Art Natural Language Processing"
url: "https://github.com/huggingface/transformers"
authors:
- family-names: Wolf
  given-names: Thomas
- family-names: Debut
  given-names: Lysandre
- family-names: Sanh
  given-names: Victor
- family-names: Chaumond
  given-names: Julien
- family-names: Delangue
  given-names: Clement
- family-names: Moi
  given-names: Anthony
- family-names: Cistac
  given-names: Pierric
- family-names: Ma
  given-names: Clara
- family-names: Jernite
  given-names: Yacine
- family-names: Plu
  given-names: Julien
- family-names: Xu
  given-names: Canwen
- family-names: "Le Scao"
  given-names: Teven
- family-names: Gugger
  given-names: Sylvain
- family-names: Drame
  given-names: Mariama
- family-names: Lhoest
  given-names: Quentin
- family-names: Rush
  given-names: "Alexander M."
preferred-citation:
  type: conference-paper
  authors:
  - family-names: Wolf
    given-names: Thomas
  - family-names: Debut
    given-names: Lysandre
  - family-names: Sanh
    given-names: Victor
  - family-names: Chaumond
    given-names: Julien
  - family-names: Delangue
    given-names: Clement
  - family-names: Moi
    given-names: Anthony
  - family-names: Cistac
    given-names: Pierric
  - family-names: Ma
    given-names: Clara
  - family-names: Jernite
    given-names: Yacine
  - family-names: Plu
    given-names: Julien
  - family-names: Xu
    given-names: Canwen
  - family-names: "Le Scao"
    given-names: Teven
  - family-names: Gugger
    given-names: Sylvain
  - family-names: Drame
    given-names: Mariama
  - family-names: Lhoest
    given-names: Quentin
  - family-names: Rush
    given-names: "Alexander M."
booktitle: "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations" month: 10 start: 38 end: 45 title: "Transformers: State-of-the-Art Natural Language Processing" year: 2020 publisher: "Association for Computational Linguistics" url: "https://www.aclweb.org/anthology/2020.emnlp-demos.6" address: "Online"
transformers/CITATION.cff
apiVersion: 1 providers: - name: 'Transformers Benchmarks' orgId: 1 type: file updateIntervalSeconds: 10 allowUiUpdates: true options: path: /etc/grafana/dashboards
transformers/benchmark/default.yml
FROM python:3.9-slim ENV PYTHONDONTWRITEBYTECODE=1 ARG REF=main USER root RUN apt-get update && apt-get install -y time git ENV UV_PYTHON=/usr/local/bin/python RUN pip install uv RUN uv pip install --no-cache-dir -U pip setuptools GitPython "git+https://github.com/huggingface/transformers.git@${REF}#egg=transformers[ruff]" urllib3 RUN apt-get install -y jq curl && apt-get clean && rm -rf /var/lib/apt/lists/*
transformers/docker/quality.dockerfile