text
stringlengths
5
631k
id
stringlengths
14
178
metadata
dict
__index_level_0__
int64
0
647
<CourseFloatingBanner classNames="absolute z-10 right-0 top-0" notebooks={[ {label: "Google Colab", value: "https://colab.research.google.com/#fileId=https://huggingface.co/agents-course/notebooks/blob/main/unit2/smolagents/code_agents.ipynb"}, ]} askForHelpUrl="http://hf.co/join/discord" /> # Building Agents That Use Code Code agents are the default agent type in `smolagents`. They generate Python tool calls to perform actions, achieving action representations that are efficient, expressive, and accurate. Their streamlined approach reduces the number of required actions, simplifies complex operations, and enables reuse of existing code functions. `smolagents` provides a lightweight framework for building code agents, implemented in approximately 1,000 lines of code. ![Code vs JSON Actions](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/code_vs_json_actions.png) Graphic from the paper [Executable Code Actions Elicit Better LLM Agents](https://huggingface.co/papers/2402.01030) <Tip> If you want to learn more about why code agents are effective, check out <a href="https://huggingface.co/docs/smolagents/en/conceptual_guides/intro_agents#code-agents" target="_blank">this guide</a> from the smolagents documentation. </Tip> ## Why Code Agents? In a multi-step agent process, the LLM writes and executes actions, typically involving external tool calls. Traditional approaches use a JSON format to specify tool names and arguments as strings, **which the system must parse to determine which tool to execute**. However, research shows that **tool-calling LLMs work more effectively with code directly**. This is a core principle of `smolagents`, as shown in the diagram above from [Executable Code Actions Elicit Better LLM Agents](https://huggingface.co/papers/2402.01030). Writing actions in code rather than JSON offers several key advantages: * **Composability**: Easily combine and reuse actions * **Object Management**: Work directly with complex structures like images * **Generality**: Express any computationally possible task * **Natural for LLMs**: High-quality code is already present in LLM training data ## How Does a Code Agent Work? ![From https://huggingface.co/docs/smolagents/conceptual_guides/react](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/smolagents/codeagent_docs.png) The diagram above illustrates how `CodeAgent.run()` operates, following the ReAct framework we mentioned in Unit 1. The main abstraction for agents in `smolagents` is a `MultiStepAgent`, which serves as the core building block. `CodeAgent` is a special kind of `MultiStepAgent`, as we will see in an example below. A `CodeAgent` performs actions through a cycle of steps, with existing variables and knowledge being incorporated into the agent's context, which is kept in an execution log: 1. The system prompt is stored in a `SystemPromptStep`, and the user query is logged in a `TaskStep`. 2. Then, the following while loop is executed: 2.1 Method `agent.write_memory_to_messages()` writes the agent's logs into a list of LLM-readable [chat messages](https://huggingface.co/docs/transformers/main/en/chat_templating). 2.2 These messages are sent to a `Model`, which generates a completion. 2.3 The completion is parsed to extract the action, which, in our case, should be a code snippet since we're working with a `CodeAgent`. 2.4 The action is executed. 2.5 The results are logged into memory in an `ActionStep`. At the end of each step, if the agent includes any function calls (in `agent.step_callback`), they are executed. ## Let's See Some Examples <Tip> You can follow the code in <a href="https://huggingface.co/agents-course/notebooks/blob/main/unit2/smolagents/code_agents.ipynb" target="_blank">this notebook</a> that you can run using Google Colab. </Tip> Alfred is planning a party at the Wayne family mansion and needs your help to ensure everything goes smoothly. To assist him, we'll apply what we've learned about how a multi-step `CodeAgent` operates. <img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit2/smolagents/alfred-party.jpg" alt="Alfred Party"/> If you haven't installed `smolagents` yet, you can do so by running the following command: ```bash pip install smolagents -U ``` Let's also login to the Hugging Face Hub to have access to the Serverless Inference API. ```python from huggingface_hub import login login() ``` ### Selecting a Playlist for the Party Using `smolagents` Music is an essential part of a successful party! Alfred needs some help selecting the playlist. Luckily, `smolagents` has got us covered! We can build an agent capable of searching the web using DuckDuckGo. To give the agent access to this tool, we include it in the tool list when creating the agent. <img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit2/smolagents/alfred-playlist.jpg" alt="Alfred Playlist"/> For the model, we'll rely on `InferenceClientModel`, which provides access to Hugging Face's [Serverless Inference API](https://huggingface.co/docs/api-inference/index). The default model is `"Qwen/Qwen2.5-Coder-32B-Instruct"`, which is performant and available for fast inference, but you can select any compatible model from the Hub. Running an agent is quite straightforward: ```python from smolagents import CodeAgent, DuckDuckGoSearchTool, InferenceClientModel agent = CodeAgent(tools=[DuckDuckGoSearchTool()], model=InferenceClientModel()) agent.run("Search for the best music recommendations for a party at the Wayne's mansion.") ``` When you run this example, the output will **display a trace of the workflow steps being executed**. It will also print the corresponding Python code with the message: ```python ─ Executing parsed code: ──────────────────────────────────────────────────────────────────────────────────────── results = web_search(query="best music for a Batman party") print(results) ───────────────────────────────────────────────────────────────────────────────────────────────────────────────── ``` After a few steps, you'll see the generated playlist that Alfred can use for the party! 🎵 ### Using a Custom Tool to Prepare the Menu <img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit2/smolagents/alfred-menu.jpg" alt="Alfred Menu"/> Now that we have selected a playlist, we need to organize the menu for the guests. Again, Alfred can take advantage of `smolagents` to do so. Here, we use the `@tool` decorator to define a custom function that acts as a tool. We'll cover tool creation in more detail later, so for now, we can simply run the code. As you can see in the example below, we will create a tool using the `@tool` decorator and include it in the `tools` list. ```python from smolagents import CodeAgent, tool, InferenceClientModel # Tool to suggest a menu based on the occasion @tool def suggest_menu(occasion: str) -> str: """ Suggests a menu based on the occasion. Args: occasion (str): The type of occasion for the party. Allowed values are: - "casual": Menu for casual party. - "formal": Menu for formal party. - "superhero": Menu for superhero party. - "custom": Custom menu. """ if occasion == "casual": return "Pizza, snacks, and drinks." elif occasion == "formal": return "3-course dinner with wine and dessert." elif occasion == "superhero": return "Buffet with high-energy and healthy food." else: return "Custom menu for the butler." # Alfred, the butler, preparing the menu for the party agent = CodeAgent(tools=[suggest_menu], model=InferenceClientModel()) # Preparing the menu for the party agent.run("Prepare a formal menu for the party.") ``` The agent will run for a few steps until finding the answer. Precising allowed values in the docstring helps direct agent to `occasion` argument values which exist and limit hallucinations. The menu is ready! 🥗 ### Using Python Imports Inside the Agent We have the playlist and menu ready, but we need to check one more crucial detail: preparation time! Alfred needs to calculate when everything would be ready if he started preparing now, in case they need assistance from other superheroes. `smolagents` specializes in agents that write and execute Python code snippets, offering sandboxed execution for security. **Code execution has strict security measures** - imports outside a predefined safe list are blocked by default. However, you can authorize additional imports by passing them as strings in `additional_authorized_imports`. For more details on secure code execution, see the official [guide](https://huggingface.co/docs/smolagents/tutorials/secure_code_execution). When creating the agent, we'll use `additional_authorized_imports` to allow for importing the `datetime` module. ```python from smolagents import CodeAgent, InferenceClientModel import numpy as np import time import datetime agent = CodeAgent(tools=[], model=InferenceClientModel(), additional_authorized_imports=['datetime']) agent.run( """ Alfred needs to prepare for the party. Here are the tasks: 1. Prepare the drinks - 30 minutes 2. Decorate the mansion - 60 minutes 3. Set up the menu - 45 minutes 4. Prepare the music and playlist - 45 minutes If we start right now, at what time will the party be ready? """ ) ``` These examples are just the beginning of what you can do with code agents, and we're already starting to see their utility for preparing the party. You can learn more about how to build code agents in the [smolagents documentation](https://huggingface.co/docs/smolagents). In summary, `smolagents` specializes in agents that write and execute Python code snippets, offering sandboxed execution for security. It supports both local and API-based language models, making it adaptable to various development environments. ### Sharing Our Custom Party Preparator Agent to the Hub Wouldn't it be **amazing to share our very own Alfred agent with the community**? By doing so, anyone can easily download and use the agent directly from the Hub, bringing the ultimate party planner of Gotham to their fingertips! Let's make it happen! 🎉 The `smolagents` library makes this possible by allowing you to share a complete agent with the community and download others for immediate use. It's as simple as the following: ```python # Change to your username and repo name agent.push_to_hub('sergiopaniego/AlfredAgent') ``` To download the agent again, use the code below: ```python # Change to your username and repo name alfred_agent = agent.from_hub('sergiopaniego/AlfredAgent', trust_remote_code=True) alfred_agent.run("Give me the best playlist for a party at Wayne's mansion. The party idea is a 'villain masquerade' theme") ``` What's also exciting is that shared agents are directly available as Hugging Face Spaces, allowing you to interact with them in real-time. You can explore other agents [here](https://huggingface.co/spaces/davidberenstein1957/smolagents-and-tools). For example, the _AlfredAgent_ is available [here](https://huggingface.co/spaces/sergiopaniego/AlfredAgent). You can try it out directly below: <iframe src="https://sergiopaniego-alfredagent.hf.space/" frameborder="0" width="850" height="450" ></iframe> You may be wondering—how did Alfred build such an agent using `smolagents`? By integrating several tools, he can generate an agent as follows. Don't worry about the tools for now, as we'll have a dedicated section later in this unit to explore that in detail: ```python from smolagents import CodeAgent, DuckDuckGoSearchTool, FinalAnswerTool, InferenceClientModel, Tool, tool, VisitWebpageTool @tool def suggest_menu(occasion: str) -> str: """ Suggests a menu based on the occasion. Args: occasion: The type of occasion for the party. """ if occasion == "casual": return "Pizza, snacks, and drinks." elif occasion == "formal": return "3-course dinner with wine and dessert." elif occasion == "superhero": return "Buffet with high-energy and healthy food." else: return "Custom menu for the butler." @tool def catering_service_tool(query: str) -> str: """ This tool returns the highest-rated catering service in Gotham City. Args: query: A search term for finding catering services. """ # Example list of catering services and their ratings services = { "Gotham Catering Co.": 4.9, "Wayne Manor Catering": 4.8, "Gotham City Events": 4.7, } # Find the highest rated catering service (simulating search query filtering) best_service = max(services, key=services.get) return best_service class SuperheroPartyThemeTool(Tool): name = "superhero_party_theme_generator" description = """ This tool suggests creative superhero-themed party ideas based on a category. It returns a unique party theme idea.""" inputs = { "category": { "type": "string", "description": "The type of superhero party (e.g., 'classic heroes', 'villain masquerade', 'futuristic Gotham').", } } output_type = "string" def forward(self, category: str): themes = { "classic heroes": "Justice League Gala: Guests come dressed as their favorite DC heroes with themed cocktails like 'The Kryptonite Punch'.", "villain masquerade": "Gotham Rogues' Ball: A mysterious masquerade where guests dress as classic Batman villains.", "futuristic Gotham": "Neo-Gotham Night: A cyberpunk-style party inspired by Batman Beyond, with neon decorations and futuristic gadgets." } return themes.get(category.lower(), "Themed party idea not found. Try 'classic heroes', 'villain masquerade', or 'futuristic Gotham'.") # Alfred, the butler, preparing the menu for the party agent = CodeAgent( tools=[ DuckDuckGoSearchTool(), VisitWebpageTool(), suggest_menu, catering_service_tool, SuperheroPartyThemeTool(), FinalAnswerTool() ], model=InferenceClientModel(), max_steps=10, verbosity_level=2 ) agent.run("Give me the best playlist for a party at the Wayne's mansion. The party idea is a 'villain masquerade' theme") ``` As you can see, we've created a `CodeAgent` with several tools that enhance the agent's functionality, turning it into the ultimate party planner ready to share with the community! 🎉 Now, it's your turn: build your very own agent and share it with the community using the knowledge we've just learned! 🕵️‍♂️💡 <Tip> If you would like to share your agent project, then make a space and tag the <a href="https://huggingface.co/agents-course">agents-course</a> on the Hugging Face Hub. We'd love to see what you've created! </Tip> ### Inspecting Our Party Preparator Agent with OpenTelemetry and Langfuse 📡 As Alfred fine-tunes the Party Preparator Agent, he's growing weary of debugging its runs. Agents, by nature, are unpredictable and difficult to inspect. But since he aims to build the ultimate Party Preparator Agent and deploy it in production, he needs robust traceability for future monitoring and analysis. Once again, `smolagents` comes to the rescue! It embraces the [OpenTelemetry](https://opentelemetry.io/) standard for instrumenting agent runs, allowing seamless inspection and logging. With the help of [Langfuse](https://langfuse.com/) and the `SmolagentsInstrumentor`, Alfred can easily track and analyze his agent’s behavior. Setting it up is straightforward! First, we need to install the necessary dependencies: ```bash pip install opentelemetry-sdk opentelemetry-exporter-otlp openinference-instrumentation-smolagents langfuse ``` Next, Alfred has already created an account on Langfuse and has his API keys ready. If you haven’t done so yet, you can sign up for Langfuse Cloud [here](https://cloud.langfuse.com/) or explore [alternatives](https://huggingface.co/docs/smolagents/tutorials/inspect_runs). Once you have your API keys, they need to be properly configured as follows: ```python import os # Get keys for your project from the project settings page: https://cloud.langfuse.com os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-lf-..." os.environ["LANGFUSE_SECRET_KEY"] = "sk-lf-..." os.environ["LANGFUSE_HOST"] = "https://cloud.langfuse.com" # 🇪🇺 EU region # os.environ["LANGFUSE_HOST"] = "https://us.cloud.langfuse.com" # 🇺🇸 US region ``` With the environment variables set, we can now initialize the Langfuse client. get_client() initializes the Langfuse client using the credentials provided in the environment variables. ```python from langfuse import get_client langfuse = get_client() # Verify connection if langfuse.auth_check(): print("Langfuse client is authenticated and ready!") else: print("Authentication failed. Please check your credentials and host.") ``` Finally, Alfred is ready to initialize the `SmolagentsInstrumentor` and start tracking his agent's performance. ```python from openinference.instrumentation.smolagents import SmolagentsInstrumentor SmolagentsInstrumentor().instrument() ``` Alfred is now connected 🔌! The runs from `smolagents` are being logged in Langfuse, giving him full visibility into the agent's behavior. With this setup, he's ready to revisit previous runs and refine his Party Preparator Agent even further. <Tip>To learn more about tracing your agents and using the collected data to evaluate their performance, check out <a href="https://huggingface.co/learn/agents-course/bonus-unit2/introduction">Bonus Unit 2</a>.</Tip> ```python from smolagents import CodeAgent, InferenceClientModel agent = CodeAgent(tools=[], model=InferenceClientModel()) alfred_agent = agent.from_hub('sergiopaniego/AlfredAgent', trust_remote_code=True) alfred_agent.run("Give me the best playlist for a party at Wayne's mansion. The party idea is a 'villain masquerade' theme") ``` Alfred can now access these logs [here](https://cloud.langfuse.com/project/cm7bq0abj025rad078ak3luwi/traces/995fc019255528e4f48cf6770b0ce27b?timestamp=2025-02-19T10%3A28%3A36.929Z) to review and analyze them. <Tip> Actually, a minor error occurred during execution. Can you spot it in the logs? Try to track how the agent handles it and still returns a valid answer. <a href="https://cloud.langfuse.com/project/cm7bq0abj025rad078ak3luwi/traces/995fc019255528e4f48cf6770b0ce27b?timestamp=2025-02-19T10%3A28%3A36.929Z&observation=80ca57ace4f69b52">Here</a> is the direct link to the error if you want to verify your answer. Of course the error has been fixed in the meantime, more details can be found in this <a href="https://github.com/huggingface/smolagents/issues/838">issue</a>. </Tip> Meanwhile, the [suggested playlist](https://open.spotify.com/playlist/0gZMMHjuxMrrybQ7wTMTpw) sets the perfect vibe for the party preparations. Cool, right? 🎶 --- Now that we have created our first Code Agent, let's **learn how we can create Tool Calling Agents**, the second type of agent available in `smolagents`. ## Resources - [smolagents Blog](https://huggingface.co/blog/smolagents) - Introduction to smolagents and code interactions - [smolagents: Building Good Agents](https://huggingface.co/docs/smolagents/tutorials/building_good_agents) - Best practices for reliable agents - [Building Effective Agents - Anthropic](https://www.anthropic.com/research/building-effective-agents) - Agent design principles - [Sharing runs with OpenTelemetry](https://huggingface.co/docs/smolagents/tutorials/inspect_runs) - Details about how to setup OpenTelemetry for tracking your agents.
agents-course/units/en/unit2/smolagents/code_agents.mdx/0
{ "file_path": "agents-course/units/en/unit2/smolagents/code_agents.mdx", "repo_id": "agents-course", "token_count": 6112 }
3
# Introduction to Use Case for Agentic RAG ![Agentic RAG banner](https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit3/agentic-rag/thumbnail.jpg) In this unit, we will help Alfred, our friendly agent who is hosting the gala, by using Agentic RAG to create a tool that can be used to answer questions about the guests at the gala. <Tip> This is a 'real-world' use case for Agentic RAG, that you could use in your own projects or workplaces. If you want to get more out of this project, why not try it out on your own use case and share in Discord? </Tip> You can choose any of the frameworks discussed in the course for this use case. We provide code samples for each in separate tabs. ## A Gala to Remember Now, it's time to get our hands dirty with an actual use case. Let's set the stage! **You decided to host the most extravagant and opulent party of the century.** This means lavish feasts, enchanting dancers, renowned DJs, exquisite drinks, a breathtaking fireworks display, and much more. Alfred, your friendly neighbourhood agent, is getting ready to watch over all of your needs for this party, and **Alfred is going to manage everything himself**. To do so, he needs to have access to all of the information about the party, including the menu, the guests, the schedule, weather forecasts, and much more! Not only that, but he also needs to make sure that the party is going to be a success, so **he needs to be able to answer any questions about the party during the party**, whilst handling unexpected situations that may arise. He can't do this alone, so we need to make sure that Alfred has access to all of the information and tools he needs. First, let's give him a list of hard requirements for the gala. ## The Gala Requirements A properly educated person in the age of the **Renaissance** needs to have three main traits. He or she needed to be profound in the **knowledge of sports, culture, and science**. So, we need to make sure we can impress our guests with our knowledge and provide them with a truly unforgettable gala. However, to avoid any conflicts, there are some **topics, like politics and religion, that are to be avoided at a gala.** It needs to be a fun party without conflicts related to beliefs and ideals. According to etiquette, **a good host should be aware of guests' backgrounds**, including their interests and endeavours. A good host also gossips and shares stories about the guests with one another. Lastly, we need to make sure that we've got **some general knowledge about the weather** to ensure we can continuously find a real-time update to ensure perfect timing to launch the fireworks and end the gala with a bang! 🎆 As you can see, Alfred needs a lot of information to host the gala. Luckily, we can help and prepare Alfred by giving him some **Retrieval Augmented Generation (RAG) training!** Let's start by creating the tools that Alfred needs to be able to host the gala!
agents-course/units/en/unit3/agentic-rag/introduction.mdx/0
{ "file_path": "agents-course/units/en/unit3/agentic-rag/introduction.mdx", "repo_id": "agents-course", "token_count": 729 }
4
# Cuestionario: Evaluación de Agentes de IA Vamos a evaluar tu comprensión de los conceptos de rastreo y evaluación de agentes cubiertos en esta unidad extra. Este cuestionario es opcional y no está calificado. ### Q1: ¿A qué se refiere principalmente la observabilidad en los agentes de IA? ¿Qué afirmación describe con precisión el propósito de la observabilidad para los agentes de IA? <Question choices={[ { text: "Implica rastrear operaciones internas a través de registros, métricas y spans para entender el comportamiento del agente.", explain: "¡Correcto! La observabilidad significa usar registros, métricas y spans para arrojar luz sobre el funcionamiento interno del agente.", correct: true }, { text: "Está únicamente enfocada en reducir el costo financiero de ejecutar el agente.", explain: "La observabilidad cubre el costo pero no se limita a ello." }, { text: "Se refiere solo a la apariencia externa y la interfaz de usuario del agente.", explain: "La observabilidad trata sobre los procesos internos, no la interfaz de usuario." }, { text: "Se ocupa únicamente del estilo de codificación y la estética del código.", explain: "El estilo de código no está relacionado con la observabilidad en este contexto." } ]} /> ### Q2: ¿Cuál de las siguientes NO es una métrica común monitoreada en la observabilidad de agentes? Selecciona la métrica que normalmente no cae bajo el paraguas de la observabilidad. <Question choices={[ { text: "Latencia", explain: "La latencia se rastrea comúnmente para evaluar la capacidad de respuesta del agente." }, { text: "Costo por Ejecución del Agente", explain: "Monitorear el costo es un aspecto clave de la observabilidad." }, { text: "Retroalimentación y Calificaciones de Usuarios", explain: "La retroalimentación del usuario es crucial para evaluar el rendimiento del agente." }, { text: "Líneas de Código del Agente", explain: "El número de líneas de código no es una métrica típica de observabilidad.", correct: true } ]} /> ### Q3: ¿Qué describe mejor la evaluación fuera de línea de un agente de IA? Determina la afirmación que captura correctamente la esencia de la evaluación fuera de línea. <Question choices={[ { text: "Evaluar el agente utilizando interacciones reales de usuarios en un entorno en vivo.", explain: "Esto describe la evaluación en línea en lugar de fuera de línea." }, { text: "Evaluar el rendimiento del agente utilizando conjuntos de datos curados con verdad fundamental conocida.", explain: "¡Correcto! La evaluación fuera de línea utiliza conjuntos de datos de prueba para medir el rendimiento contra respuestas conocidas.", correct: true }, { text: "Monitorear los registros internos del agente en tiempo real.", explain: "Esto está más relacionado con la observabilidad que con la evaluación." }, { text: "Ejecutar el agente sin ninguna métrica de evaluación.", explain: "Este enfoque no proporciona información significativa." } ]} /> ### Q4: ¿Qué ventaja ofrece la evaluación en línea de agentes? Elige la afirmación que mejor refleja el beneficio de la evaluación en línea. <Question choices={[ { text: "Proporciona escenarios de prueba controlados utilizando conjuntos de datos predefinidos.", explain: "Las pruebas controladas son un beneficio de la evaluación fuera de línea, no en línea." }, { text: "Captura interacciones de usuarios en vivo y datos de rendimiento del mundo real.", explain: "¡Correcto! La evaluación en línea ofrece información al monitorear el agente en un entorno en vivo.", correct: true }, { text: "Elimina la necesidad de cualquier prueba fuera de línea y puntos de referencia.", explain: "Tanto las evaluaciones fuera de línea como en línea son importantes y complementarias." }, { text: "Se enfoca únicamente en reducir el costo computacional del agente.", explain: "El monitoreo de costos es parte de la observabilidad, no la ventaja principal de la evaluación en línea." } ]} /> ### Q5: ¿Qué papel juega OpenTelemetry en la observabilidad y evaluación de agentes de IA? ¿Qué afirmación describe mejor el papel de OpenTelemetry en el monitoreo de agentes de IA? <Question choices={[ { text: "Proporciona un marco estandarizado para instrumentar código, permitiendo la recopilación de rastros, métricas y registros para la observabilidad.", explain: "¡Correcto! OpenTelemetry estandariza la instrumentación para datos de telemetría, lo cual es crucial para monitorear y diagnosticar el comportamiento del agente.", correct: true }, { text: "Actúa como un reemplazo para la depuración manual al corregir automáticamente problemas de código.", explain: "Incorrecto. OpenTelemetry se utiliza para recopilar datos de telemetría, no para depurar problemas de código." }, { text: "Sirve principalmente como una base de datos para almacenar registros históricos sin capacidades en tiempo real.", explain: "Incorrecto. OpenTelemetry se enfoca en la recopilación de datos de telemetría en tiempo real y la exportación de datos a herramientas de análisis." }, { text: "Se utiliza para optimizar el rendimiento computacional del agente de IA mediante el ajuste automático de parámetros del modelo.", explain: "Incorrecto. OpenTelemetry se centra en la observabilidad más que en el ajuste de rendimiento." } ]} /> ¡Felicidades por completar este cuestionario! 🎉 Si te equivocaste en alguna pregunta, considera revisar el contenido de esta unidad extra para una comprensión más profunda. Si te fue bien, ¡estás listo para explorar temas más avanzados en observabilidad y evaluación de agentes!
agents-course/units/es/bonus-unit2/quiz.mdx/0
{ "file_path": "agents-course/units/es/bonus-unit2/quiz.mdx", "repo_id": "agents-course", "token_count": 2117 }
5
# Biblioteca de Agente de Prueba <img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/whiteboard-unit1sub3DONE.jpg" alt="Unit 1 planning"/> Este curso es agnóstico en cuanto al framework porque queremos **centrarnos en los conceptos de agentes de IA y evitar perdernos en los detalles específicos de un framework particular**. Además, queremos que los estudiantes puedan utilizar los conceptos que aprenden en este curso en sus propios proyectos, usando cualquier framework que prefieran. Por lo tanto, para esta Unidad 1, utilizaremos una biblioteca de agentes de prueba y una API serverless simple para acceder a nuestro motor LLM. Probablemente no usarías estos en producción, pero servirán como un buen **punto de partida para entender cómo funcionan los agentes**. Después de esta sección, estarás listo para **crear un Agente simple** usando `smolagents` Y en las siguientes Unidades también utilizaremos otras bibliotecas de Agentes de IA como `LangGraph` y `LlamaIndex`. Para mantener las cosas simples, utilizaremos una función simple de Python como Herramienta y Agente. Utilizaremos paquetes integrados de Python como `datetime` y `os` para que puedas probarlo en cualquier entorno. Puedes seguir el proceso [en este notebook](https://huggingface.co/agents-course/notebooks/blob/main/unit1/dummy_agent_library.ipynb) y **ejecutar el código tú mismo**. ## API Serverless En el ecosistema de Hugging Face, hay una característica conveniente llamada API Serverless que te permite ejecutar fácilmente inferencia en muchos modelos. No se requiere instalación ni despliegue. ```python import os from huggingface_hub import InferenceClient ## Necesitas un token de https://hf.co/settings/tokens, asegúrate de seleccionar 'read' como tipo de token. Si ejecutas esto en Google Colab, puedes configurarlo en la pestaña "settings" bajo "secrets". Asegúrate de llamarlo "HF_TOKEN" os.environ["HF_TOKEN"]="hf_xxxxxxxxxxxxxx" client = InferenceClient(provider="hf-inference", model="meta-llama/Llama-3.3-70B-Instruct") # si las salidas para las siguientes celdas son incorrectas, el modelo gratuito puede estar sobrecargado. También puedes usar este endpoint público que contiene Llama-3.2-3B-Instruct # client = InferenceClient("https://jc26mwg228mkj8dw.us-east-1.aws.endpoints.huggingface.cloud") ``` ```python output = client.text_generation( "The capital of France is", max_new_tokens=100, ) print(output) ``` output: ``` Paris. The capital of France is Paris. The capital of France is Paris. The capital of France is Paris. The capital of France is Paris. The capital of France is Paris. The capital of France is Paris. The capital of France is Paris. The capital of France is Paris. The capital of France is Paris. The capital of France is Paris. The capital of France is Paris. The capital of France is Paris. The capital of France is Paris. The capital of France is Paris. ``` Como vimos en la sección de LLM, si solo hacemos decodificación, **el modelo solo se detendrá cuando prediga un token EOS**, y esto no sucede aquí porque este es un modelo conversacional (chat) y **no aplicamos la plantilla de chat que espera**. Si ahora agregamos los tokens especiales relacionados con el <a href="https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct">modelo Llama-3.2-3B-Instruct</a> que estamos usando, el comportamiento cambia y ahora produce el EOS esperado. ```python prompt="""<|begin_of_text|><|start_header_id|>user<|end_header_id|> The capital of France is<|eot_id|><|start_header_id|>assistant<|end_header_id|>""" output = client.text_generation( prompt, max_new_tokens=100, ) print(output) ``` output: ``` The capital of France is Paris. ``` Usar el método "chat" es una forma mucho más conveniente y confiable de aplicar plantillas de chat: ```python output = client.chat.completions.create( messages=[ {"role": "user", "content": "The capital of France is"}, ], stream=False, max_tokens=1024, ) print(output.choices[0].message.content) ``` output: ``` Paris. ``` El método chat es el método RECOMENDADO para usar para asegurar una transición suave entre modelos, pero como este notebook es solo educativo, seguiremos usando el método "text_generation" para entender los detalles. ## Agente de Prueba En las secciones anteriores, vimos que el núcleo de una biblioteca de agentes es agregar información en el prompt del sistema. Este prompt del sistema es un poco más complejo que el que vimos anteriormente, pero ya contiene: 1. **Información sobre las herramientas** 2. **Instrucciones del ciclo** (Pensamiento → Acción → Observación) ``` Responde las siguientes preguntas lo mejor que puedas. Tienes acceso a las siguientes herramientas: get_weather: Obtener el clima actual en una ubicación dada La forma en que usas las herramientas es especificando un blob json. Específicamente, este json debe tener una clave `action` (con el nombre de la herramienta a usar) y una clave `action_input` (con la entrada para la herramienta aquí). Los únicos valores que deberían estar en el campo "action" son: get_weather: Obtener el clima actual en una ubicación dada, args: {"location": {"type": "string"}} ejemplo de uso: {{ "action": "get_weather", "action_input": {"location": "New York"} }} SIEMPRE usa el siguiente formato: Question: la pregunta de entrada que debes responder Thought: siempre debes pensar en una acción a tomar. Solo una acción a la vez en este formato: Action: $JSON_BLOB (dentro de una celda markdown) Observation: el resultado de la acción. Esta Observación es única, completa y la fuente de la verdad. ... (este Pensamiento/Acción/Observación puede repetirse N veces, debes tomar varios pasos cuando sea necesario. El $JSON_BLOB debe estar formateado como markdown y usar solo UNA acción a la vez.) Siempre debes terminar tu salida con el siguiente formato: Thought: Ahora sé la respuesta final Final Answer: la respuesta final a la pregunta de entrada original ¡Comienza ahora! Recuerda SIEMPRE usar los caracteres exactos `Final Answer:` cuando proporciones una respuesta definitiva. ``` Ya que estamos ejecutando el método "text_generation", necesitamos aplicar el prompt manualmente: ```python prompt=f"""<|begin_of_text|><|start_header_id|>system<|end_header_id|> {SYSTEM_PROMPT} <|eot_id|><|start_header_id|>user<|end_header_id|> What's the weather in London ? <|eot_id|><|start_header_id|>assistant<|end_header_id|> """ ``` También podemos hacerlo así, que es lo que sucede dentro del método `chat`: ```python messages=[ {"role": "system", "content": SYSTEM_PROMPT}, {"role": "user", "content": "What's the weather in London ?"}, ] from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-3B-Instruct") tokenizer.apply_chat_template(messages, tokenize=False,add_generation_prompt=True) ``` El prompt ahora es: ``` <|begin_of_text|><|start_header_id|>system<|end_header_id|> Responde las siguientes preguntas lo mejor que puedas. Tienes acceso a las siguientes herramientas: get_weather: Obtener el clima actual en una ubicación dada La forma en que usas las herramientas es especificando un blob json. Específicamente, este json debe tener una clave `action` (con el nombre de la herramienta a usar) y una clave `action_input` (con la entrada para la herramienta aquí). Los únicos valores que deberían estar en el campo "action" son: get_weather: Obtener el clima actual en una ubicación dada, args: {"location": {"type": "string"}} ejemplo de uso: {{ "action": "get_weather", "action_input": {"location": "New York"} }} SIEMPRE usa el siguiente formato: Question: la pregunta de entrada que debes responder Thought: siempre debes pensar en una acción a tomar. Solo una acción a la vez en este formato: Action: $JSON_BLOB (dentro de una celda markdown) Observation: el resultado de la acción. Esta Observación es única, completa y la fuente de la verdad. ... (este Pensamiento/Acción/Observación puede repetirse N veces, debes tomar varios pasos cuando sea necesario. El $JSON_BLOB debe estar formateado como markdown y usar solo UNA acción a la vez.) Siempre debes terminar tu salida con el siguiente formato: Thought: Ahora sé la respuesta final Final Answer: la respuesta final a la pregunta de entrada original ¡Comienza ahora! Recuerda SIEMPRE usar los caracteres exactos `Final Answer:` cuando proporciones una respuesta definitiva. <|eot_id|><|start_header_id|>user<|end_header_id|> What's the weather in London ? <|eot_id|><|start_header_id|>assistant<|end_header_id|> ``` ¡Vamos a decodificar! ```python output = client.text_generation( prompt, max_new_tokens=200, ) print(output) ``` output: ```` Action: ``` { "action": "get_weather", "action_input": {"location": "London"} } ``` Thought: I will check the weather in London. Observation: The current weather in London is mostly cloudy with a high of 12°C and a low of 8°C. ```` ¿Ves el problema? >¡La respuesta fue alucinada por el modelo. Necesitamos detenerlo para ejecutar realmente la función! Ahora vamos a detenernos en "Observation" para que no alucinemos la respuesta real de la función. ```python output = client.text_generation( prompt, max_new_tokens=200, stop=["Observation:"] # Detengámonos antes de que se llame a cualquier función ) print(output) ``` output: ```` Action: ``` { "action": "get_weather", "action_input": {"location": "London"} } ``` Thought: I will check the weather in London. Observation: ```` ¡Mucho mejor! Ahora vamos a crear una función dummy para el clima. En una situación real, probablemente llamarías a una API. ```python # Función dummy def get_weather(location): return f"the weather in {location} is sunny with low temperatures. \n" get_weather('London') ``` output: ``` 'the weather in London is sunny with low temperatures. \n' ``` Vamos a concatenar el prompt base, la completación hasta la ejecución de la función y el resultado de la función como una Observación y reanudar la generación. ```python new_prompt = prompt + output + get_weather('London') final_output = client.text_generation( new_prompt, max_new_tokens=200, ) print(final_output) ``` Aquí está el nuevo prompt: ```` <|begin_of_text|><|start_header_id|>system<|end_header_id|> Responde las siguientes preguntas lo mejor que puedas. Tienes acceso a las siguientes herramientas: get_weather: Obtener el clima actual en una ubicación dada La forma en que usas las herramientas es especificando un blob json. Specifically, this json should have an `action` key (with the name of the tool to use) and a `action_input` key (with the input to the tool going here). The only values that should be in the "action" field are: get_weather: Obtener el clima actual en una ubicación dada, args: {"location": {"type": "string"}} example use : {{ "action": "get_weather", "action_input": {"location": "New York"} }} ALWAYS use the following format: Question: the input question you must answer Thought: you should always think about one action to take. Only one action at a time in this format: Action: $JSON_BLOB (inside markdown cell) Observation: the result of the action. This Observation is unique, complete, and the source of truth. ... (this Thought/Action/Observation can repeat N times, you should take several steps when needed. The $JSON_BLOB must be formatted as markdown and only use a SINGLE action at a time.) You must always end your output with the following format: Thought: I now know the final answer Final Answer: the final answer to the original input question Now begin! Reminder to ALWAYS use the exact characters `Final Answer:` when you provide a definitive answer. <|eot_id|><|start_header_id|>user<|end_header_id|> What's the weather in London ? <|eot_id|><|start_header_id|>assistant<|end_header_id|> Action: ``` { "action": "get_weather", "action_input": {"location": {"type": "string", "value": "London"} } ``` Thought: I will check the weather in London. Observation:the weather in London is sunny with low temperatures. ```` Output: ``` Final Answer: The weather in London is sunny with low temperatures. ``` --- Aprendimos cómo podemos crear Agentes desde cero usando código Python, y **vimos lo tedioso que puede ser ese proceso**. Afortunadamente, muchas bibliotecas de Agentes simplifican este trabajo manejando gran parte del trabajo pesado por ti. Ahora, estamos listos **para crear nuestro primer Agente real** usando la biblioteca `smolagents`.
agents-course/units/es/unit1/dummy-agent-library.mdx/0
{ "file_path": "agents-course/units/es/unit1/dummy-agent-library.mdx", "repo_id": "agents-course", "token_count": 4505 }
6
# Construyendo Tu Primer LangGraph Ahora que entendemos los componentes básicos, vamos a ponerlos en práctica construyendo nuestro primer grafo funcional. Implementaremos el sistema de procesamiento de correos electrónicos de Alfred, donde necesita: 1. Leer correos electrónicos entrantes 2. Clasificarlos como spam o legítimos 3. Redactar una respuesta preliminar para correos legítimos 4. Enviar información al Sr. Wayne cuando son legítimos (solo impresión) Este ejemplo demuestra cómo estructurar un flujo de trabajo con LangGraph que involucra toma de decisiones basada en LLM. Aunque esto no puede considerarse un Agente ya que no se involucra ninguna herramienta, esta sección se enfoca más en aprender el marco de trabajo de LangGraph que en Agentes. <Tip> Puedes seguir el código en <a href="https://huggingface.co/agents-course/notebooks/resolve/main/unit2/langgraph/mail_sorting.ipynb" target="_blank">este notebook</a> que puedes ejecutar usando Google Colab. </Tip> ## Nuestro Flujo de Trabajo Aquí está el flujo de trabajo que construiremos: <img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit2/LangGraph/first_graph.png" alt="First LangGraph"/> ## Configurando Nuestro Entorno Primero, instalemos los paquetes necesarios: ```python %pip install langgraph langchain_openai ``` A continuación, importemos los módulos necesarios: ```python import os from typing import TypedDict, List, Dict, Any, Optional from langgraph.graph import StateGraph, END from langchain_openai import ChatOpenAI from langchain_core.messages import HumanMessage, AIMessage ``` ## Paso 1: Definir Nuestro Estado Definamos qué información necesita rastrear Alfred durante el flujo de trabajo de procesamiento de correos electrónicos: ```python class EmailState(TypedDict): # El correo electrónico que se está procesando email: Dict[str, Any] # Contiene asunto, remitente, cuerpo, etc. # Análisis y decisiones is_spam: Optional[bool] # Generación de respuesta draft_response: Optional[str] # Metadatos de procesamiento messages: List[Dict[str, Any]] # Rastrea la conversación con LLM para análisis ``` > 💡 **Tip:** Haz que tu estado sea lo suficientemente completo para rastrear toda la información importante, pero evita sobrecargarlo con detalles innecesarios. ## Paso 2: Definir Nuestros Nodos Ahora, creemos las funciones de procesamiento que formarán nuestros nodos: ```python # Inicializar nuestro LLM model = ChatOpenAI(temperature=0) def read_email(state: EmailState): """Alfred lee y registra el correo electrónico entrante""" email = state["email"] # Aquí podríamos hacer algún preprocesamiento inicial print(f"Alfred está procesando un correo electrónico de {email['sender']} con asunto: {email['subject']}") # No se necesitan cambios de estado aquí return {} def classify_email(state: EmailState): """Alfred usa un LLM para determinar si el correo es spam o legítimo""" email = state["email"] # Preparar nuestro prompt para el LLM prompt = f""" Como Alfred el mayordomo, analiza este correo electrónico y determina si es spam o legítimo. Email: De: {email['sender']} Asunto: {email['subject']} Body: {email['body']} Primero, determina si este correo es spam. Si es spam, explica por qué. Si es legítimo, categorízalo (consulta, queja, agradecimiento, etc.). """ # Llamar al LLM messages = [HumanMessage(content=prompt)] response = model.invoke(messages) # Lógica simple para analizar la respuesta (en una aplicación real, querrías un análisis más robusto) response_text = response.content.lower() is_spam = "spam" in response_text and "not spam" not in response_text # Extraer una razón si es spam spam_reason = None if is_spam and "reason:" in response_text: spam_reason = response_text.split("reason:")[1].strip() # Determinar categoría si es legítimo email_category = None if not is_spam: categories = ["consulta", "queja", "agradecimiento", "solicitud", "información"] for category in categories: if category in response_text: email_category = category break # Actualizar mensajes para seguimiento new_messages = state.get("messages", []) + [ {"role": "user", "content": prompt}, {"role": "assistant", "content": response.content} ] # Regresar actualizaciones de estado return { "is_spam": is_spam, "spam_reason": spam_reason, "email_category": email_category, "messages": new_messages } def handle_spam(state: EmailState): """Alfred descarta el correo spam con una nota""" print(f"Alfred ha marcado el correo como spam. Razón: {state['spam_reason']}") print("El correo ha sido movido a la carpeta de spam.") # Hemos terminado de procesar este correo return {} def draft_response(state: EmailState): """Alfred redacta una respuesta preliminar para correos legítimos""" email = state["email"] category = state["email_category"] or "general" # Preparar nuestro prompt para el LLM prompt = f""" Como Alfred el mayordomo, redacta una respuesta preliminar cortés a este correo. Email: From: {email['sender']} Subject: {email['subject']} Body: {email['body']} ste correo ha sido categorizado como: {category} Redacta una respuesta breve y profesional que el Sr. Hugg pueda revisar y personalizar antes de enviar. """ # Llamar al LLM messages = [HumanMessage(content=prompt)] response = model.invoke(messages) # Actualizar mensajes para seguimiento new_messages = state.get("messages", []) + [ {"role": "user", "content": prompt}, {"role": "assistant", "content": response.content} ] # Regresar actualizaciones de estado return { "draft_response": response.content, "messages": new_messages } def notify_mr_hugg(state: EmailState): """Alfred notifica al Sr. Hugg sobre el correo y presenta el borrador de respuesta""" email = state["email"] print("\n" + "="*50) print(f"Señor, ha recibido un correo electrónico de {email['sender']}.") print(f"Subject: {email['subject']}") print(f"Categoría: {state['email_category']}") print("\nHe preparado un borrador de respuesta para su revisión:") print("-"*50) print(state["draft_response"]) print("="*50 + "\n") # Hemos terminado de procesar este correo return {} ``` ## Paso 3: Definir Nuestra Lógica de Enrutamiento Necesitamos una función para determinar qué camino tomar después de la clasificación: ```python def route_email(state: EmailState) -> str: """Determinar el siguiente paso basado en la clasificación de spam""" if state["is_spam"]: return "spam" else: return "legitimate" ``` > 💡 **Nota:** Esta función de enrutamiento es llamada por LangGraph para determinar qué arista(edge) seguir después del nodo de clasificación. El valor de retorno debe coincidir con una de las claves en nuestro mapeo de aristas(edges) condicionales. ## Paso 4: Crear el StateGraph y Definir Aristas(Edges) Ahora conectamos todo: ```python # Crear el grafo email_graph = StateGraph(EmailState) # Añadir nodos email_graph.add_node("read_email", read_email) email_graph.add_node("classify_email", classify_email) email_graph.add_node("handle_spam", handle_spam) email_graph.add_node("draft_response", draft_response) email_graph.add_node("notify_mr_hugg", notify_mr_hugg) # Añadir aristas(edges) - definiendo el flujo email_graph.add_edge("read_email", "classify_email") # Añadir ramificación condicional desde classify_email email_graph.add_conditional_edges( "classify_email", route_email, { "spam": "handle_spam", "legitimate": "draft_response" } ) # Añadir las aristas(edges) finales email_graph.add_edge("handle_spam", END) email_graph.add_edge("draft_response", "notify_mr_hugg") email_graph.add_edge("notify_mr_hugg", END) # Compilar el grafo compiled_graph = email_graph.compile() ``` Observa cómo usamos el nodo especial `END` proporcionado por LangGraph. Esto indica estados terminales donde el flujo de trabajo se completa. ## Paso 5: Ejecutar la Aplicación Probemos nuestro grafo con un correo legítimo y un correo spam: ```python # Ejemplo de correo legítimo legitimate_email = { "sender": "john.smith@example.com", "subject": "Pregunta sobre sus servicios", "body": "Estimado Sr. Hugg, un colega me recomendó contactarle y estoy interesado en conocer más sobre sus servicios de consultoría. ¿Podríamos programar una llamada la próxima semana? Saludos cordiales, John Smith" } # Ejemplo de correo spam spam_email = { "sender": "winner@lottery-intl.com", "subject": ¡¡¡HAS GANADO $5,000,000!!!", "body": "¡FELICIDADES! ¡Has sido seleccionado como el ganador de nuestra lotería internacional! Para reclamar tu premio de $5,000,000, por favor envíanos tus datos bancarios y una tarifa de procesamiento de $100." } # Procesar el correo legítimo print("\nProcesando correo legítimo...") legitimate_result = compiled_graph.invoke({ "email": legitimate_email, "is_spam": None, "spam_reason": None, "email_category": None, "draft_response": None, "messages": [] }) # Procesar el correo spam print("\nProcesando correo spam...") spam_result = compiled_graph.invoke({ "email": spam_email, "is_spam": None, "spam_reason": None, "email_category": None, "draft_response": None, "messages": [] }) ``` ## Paso 6: Inspeccionando Nuestro Agente de Clasificación de Correo con Langfuse 📡 Mientras Alfred perfecciona el Agente de Clasificación de Correo, se está cansando de depurar sus ejecuciones. Los agentes, por naturaleza, son impredecibles y difíciles de inspeccionar. Pero como su objetivo es construir el mejor Agente de Detección de Spam y desplegarlo en producción, necesita una trazabilidad robusta para el monitoreo y análisis futuros. Para hacer esto, Alfred puede usar una herramienta de observabilidad como [Langfuse](https://langfuse.com/) para rastrear y monitorear el agente. Primero, instalamos Langfuse con pip: ```python %pip install -q langfuse ``` Luego, agregamos las claves API de Langfuse y la dirección del host como variables de entorno. Puedes obtener tus credenciales de Langfuse registrándote en [Langfuse Cloud](https://cloud.langfuse.com) o [self-host Langfuse](https://langfuse.com/self-hosting). ```python import os # Obtén las claves para tu proyecto desde la página de configuración del proyecto: https://cloud.langfuse.com os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-lf-..." os.environ["LANGFUSE_SECRET_KEY"] = "sk-lf-..." os.environ["LANGFUSE_HOST"] = "https://cloud.langfuse.com" # 🇪🇺 región de la UE # os.environ["LANGFUSE_HOST"] = "https://us.cloud.langfuse.com" # 🇺🇸 región de EE.U ``` Luego, configuramos el [Langfuse `callback_handler`](https://langfuse.com/docs/integrations/langchain/tracing#add-langfuse-to-your-langchain-application) instrumentamos el agente añadiendo el `langfuse_callback` a la invocación del grafo: `config={"callbacks": [langfuse_handler]}`. ```python from langfuse.callback import CallbackHandler # Inicializar CallbackHandler de Langfuse para LangGraph/Langchain (trazado) langfuse_handler = CallbackHandler() # Procesar correo legítimo legitimate_result = compiled_graph.invoke( input={"email": legitimate_email, "is_spam": None, "spam_reason": None, "email_category": None, "draft_response": None, "messages": []}, config={"callbacks": [langfuse_handler]} ) ``` ¡Alfred está ahora conectado 🔌! Las ejecuciones de LangGraph se están registrando en Langfuse, dándole visibilidad completa del comportamiento del agente. Con esta configuración, está listo para revisar ejecuciones anteriores y refinar aún más su Agente de Clasificación de Correo. ![Example trace in Langfuse](https://langfuse.com/images/cookbook/huggingface-agent-course/langgraph-trace-legit.png) _[Public link to the trace with the legit email](https://cloud.langfuse.com/project/cloramnkj0002jz088vzn1ja4/traces/f5d6d72e-20af-4357-b232-af44c3728a7b?timestamp=2025-03-17T10%3A13%3A28.413Z&observation=6997ba69-043f-4f77-9445-700a033afba1)_ Visualizando Nuestro Grafo LangGraph nos permite visualizar nuestro flujo de trabajo para entender y depurar mejor su estructura: ```python compiled_graph.get_graph().draw_mermaid_png() ``` <img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit2/LangGraph/mail_flow.png" alt="Mail LangGraph"/> Esto produce una representación visual que muestra cómo están conectados nuestros nodos y los caminos condicionales que se pueden tomar. ## Lo Que Hemos Construido Hemos creado un flujo de trabajo completo de procesamiento de correos electrónicos que: 1. Toma un correo electrónico entrante 2. Usa un LLM para clasificarlo como spam o legítimo 3. Maneja el spam descartándolo 4. Para correos legítimos, redacta una respuesta y notifica al Sr. Hugg Esto demuestra el poder de LangGraph para orquestar flujos de trabajo complejos con LLMs mientras mantiene un flujo claro y estructurado. ## Puntos Clave - **Gestión de Estado**: Definimos un estado completo para rastrear todos los aspectos del procesamiento de correos electrónicos - **Implementación de Nodos:**: Creamos nodos funcionales que interactúan con un LLM - **Enrutamiento Condicional**: Implementamos lógica de ramificación basada en la clasificación de correos - **Estados Terminales:**: Usamos el nodo END para marcar puntos de finalización en nuestro flujo de trabajo ## ¿Qué Sigue? En la siguiente sección, exploraremos características más avanzadas de LangGraph, incluyendo el manejo de interacción humana en el flujo de trabajo y la implementación de lógica de ramificación más compleja basada en múltiples condiciones.
agents-course/units/es/unit2/langgraph/first_graph.mdx/0
{ "file_path": "agents-course/units/es/unit2/langgraph/first_graph.mdx", "repo_id": "agents-course", "token_count": 5532 }
7
# ¡Hora del Examen! ¡Buen trabajo al estudiar el material sobre `smolagents`! Ya has logrado mucho. Ahora, es momento de poner a prueba tus conocimientos con un cuestionario. 🧠 ## Instrucciones - El cuestionario consiste en preguntas de código. - Se te darán instrucciones para completar fragmentos de código. - Lee las instrucciones cuidadosamente y completa los fragmentos de código según corresponda. - Para cada pregunta, recibirás el resultado y algunos comentarios. 🧘 **Este cuestionario no está calificado ni certificado**. Se trata de que comprendas la biblioteca `smolagents` y sepas si deberías dedicar más tiempo al material escrito. En las próximas unidades pondrás este conocimiento a prueba en casos de uso y proyectos. ¡Comencemos! ## Cuestionario 🚀 <iframe src="https://agents-course-unit2-smolagents-quiz.hf.space" frameborder="0" width="850" height="450" ></iframe> También puedes acceder al cuestionario 👉 [aquí](https://huggingface.co/spaces/agents-course/unit2_smolagents_quiz)
agents-course/units/es/unit2/smolagents/final_quiz.mdx/0
{ "file_path": "agents-course/units/es/unit2/smolagents/final_quiz.mdx", "repo_id": "agents-course", "token_count": 403 }
8
# Construyendo e Integrando Herramientas para Tu Agente En esta sección, le daremos a Alfred acceso a la web, permitiéndole encontrar las últimas noticias y actualizaciones globales. Además, tendrá acceso a datos meteorológicos y estadísticas de descargas de modelos de Hugging Face Hub, para que pueda mantener conversaciones relevantes sobre temas actuales. ## Dale a Tu Agente Acceso a la Web Recuerda que queremos que Alfred establezca su presencia como un verdadero anfitrión renacentista, con un profundo conocimiento del mundo. Para lograrlo, necesitamos asegurarnos de que Alfred tenga acceso a las últimas noticias e información sobre el mundo. ¡Comencemos creando una herramienta de búsqueda web para Alfred! <hfoptions id="agents-frameworks"> <hfoption id="smolagents"> ```python from smolagents import DuckDuckGoSearchTool # Inicializar la herramienta de búsqueda DuckDuckGo search_tool = DuckDuckGoSearchTool() # Ejemplo de uso results = search_tool("¿Quién es el actual Presidente de Francia?") print(results) ``` Salida esperada: ``` The current President of France in Emmanuel Macron. ``` </hfoption> <hfoption id="llama-index"> ```python from llama_index.tools.duckduckgo import DuckDuckGoSearchToolSpec from llama_index.core.tools import FunctionTool # Inicializar la herramienta de búsqueda DuckDuckGo tool_spec = DuckDuckGoSearchToolSpec() search_tool = FunctionTool.from_defaults(tool_spec.duckduckgo_full_search) # Ejemplo de uso response = search_tool("¿Quién es el actual Presidente de Francia?") print(response.raw_output[-1]['body']) ``` Expected output: ``` El Presidente de la República Francesa es el jefe de estado de Francia. El actual Presidente es Emmanuel Macron desde el 14 de mayo de 2017 tras derrotar a Marine Le Pen en la segunda vuelta de las elecciones presidenciales el 7 de mayo de 2017. Lista de Presidentes franceses (Quinta República) N° Nombre ... ``` </hfoption> <hfoption id="langgraph"> ```python from langchain_community.tools import DuckDuckGoSearchRun search_tool = DuckDuckGoSearchRun() results = search_tool.invoke("¿Quién es el actual Presidente de Francia?") print(results) ``` Salida esperada: ``` Emmanuel Macron (born December 21, 1977, Amiens, France) is a French banker and politician who was elected president of France in 2017... ``` </hfoption> </hfoptions> ## Creando una Herramienta Personalizada para Información Meteorológica para Programar los Fuegos Artificiales La gala perfecta tendría fuegos artificiales bajo un cielo despejado, necesitamos asegurarnos de que los fuegos artificiales no sean cancelados debido al mal tiempo. Vamos a crear una herramienta personalizada que pueda usarse para llamar a una API meteorológica externa y obtener la información del clima para una ubicación determinada. <Tip> Por simplicidad, estamos usando una API meteorológica ficticia para este ejemplo. Si quieres usar una API meteorológica real, podrías implementar una herramienta meteorológica que use la API de OpenWeatherMap, como en la <a href="../unit1/tutorial">Unidad 1</a>. </Tip> <hfoptions id="agents-frameworks"> <hfoption id="smolagents"> ```python from smolagents import Tool import random class WeatherInfoTool(Tool): name = "weather_info" description = "Obtiene información meteorológica ficticia para una ubicación dada." inputs = { "location": { "type": "string", "description": "La ubicación para la que obtener información meteorológica." } } output_type = "string" def forward(self, location: str): # Datos meteorológicos ficticios weather_conditions = [ {"condition": "Lluvioso", "temp_c": 15}, {"condition": "Despejado", "temp_c": 25}, {"condition": "Ventoso", "temp_c": 20} ] # Seleccionar aleatoriamente una condición meteorológica data = random.choice(weather_conditions) return f"Clima en {location}: {data['condition']}, {data['temp_c']}°C" # Inicializar la herramienta weather_info_tool = WeatherInfoTool() ``` </hfoption> <hfoption id="llama-index"> ```python import random from llama_index.core.tools import FunctionTool def get_weather_info(location: str) -> str: """Obtiene información meteorológica ficticia para una ubicación dada.""" # Datos meteorológicos ficticios weather_conditions = [ {"condition": "Lluvioso", "temp_c": 15}, {"condition": "Despejado", "temp_c": 25}, {"condition": "Ventoso", "temp_c": 20} ] # Seleccionar aleatoriamente una condición meteorológica data = random.choice(weather_conditions) return f"Clima en {location}: {data['condition']}, {data['temp_c']}°C" # Inicializar la herramienta weather_info_tool = FunctionTool.from_defaults(get_weather_info) ``` </hfoption> <hfoption id="langgraph"> ```python from langchain.tools import Tool import random def get_weather_info(location: str) -> str: """Obtiene información meteorológica ficticia para una ubicación dada.""" # Datos meteorológicos ficticios weather_conditions = [ {"condition": "Lluvioso", "temp_c": 15}, {"condition": "Despejado", "temp_c": 25}, {"condition": "Ventoso", "temp_c": 20} ] # Seleccionar aleatoriamente una condición meteorológica data = random.choice(weather_conditions) return f"Clima en {location}: {data['condition']}, {data['temp_c']}°C" # Inicializar la herramienta weather_info_tool = Tool( name="get_weather_info", func=get_weather_info, description="Obtiene información meteorológica ficticia para una ubicación dada." ) ``` </hfoption> </hfoptions> ## Creando una Herramienta de Estadísticas de Hub para Influyentes Creadores de IA Entre los asistentes a la gala están los más destacados creadores de IA. Alfred quiere impresionarlos discutiendo sus modelos, conjuntos de datos y espacios más populares. Crearemos una herramienta para obtener estadísticas de modelos desde Hugging Face Hub basadas en un nombre de usuario. <hfoptions id="agents-frameworks"> <hfoption id="smolagents"> ```python from smolagents import Tool from huggingface_hub import list_models class HubStatsTool(Tool): name = "hub_stats" description = "Obtiene el modelo más descargado de un autor específico en Hugging Face Hub." inputs = { "author": { "type": "string", "description": "El nombre de usuario del autor/organización del modelo para encontrar modelos." } } output_type = "string" def forward(self, author: str): try: # Listar modelos del autor especificado, ordenados por descargas models = list(list_models(author=author, sort="downloads", direction=-1, limit=1)) if models: model = models[0] return f"El modelo más descargado de {author} es {model.id} con {model.downloads:,} descargas." else: return f"No se encontraron modelos para el autor {author}." except Exception as e: return f"Error al obtener modelos para {author}: {str(e)}" # Inicializar la herramienta hub_stats_tool = HubStatsTool() # Ejemplo de uso print(hub_stats_tool("facebook")) # Ejemplo: Obtener el modelo más descargado de Facebook ``` Salida esperada: ``` The most downloaded model by facebook is facebook/esmfold_v1 with 12,544,550 downloads. ``` </hfoption> <hfoption id="llama-index"> ```python import random from llama_index.core.tools import FunctionTool from huggingface_hub import list_models def get_hub_stats(author: str) -> str: """Obtiene el modelo más descargado de un autor específico en Hugging Face Hub.""" try: # Listar modelos del autor especificado, ordenados por descargas models = list(list_models(author=author, sort="downloads", direction=-1, limit=1)) if models: model = models[0] return f"El modelo más descargado de {author} es {model.id} con {model.downloads:,} descargas." else: return f"No se encontraron modelos para el autor {author}." except Exception as e: return f"Error al obtener modelos para {author}: {str(e)}" # Inicializar la herramienta hub_stats_tool = FunctionTool.from_defaults(get_hub_stats) # Ejemplo de uso print(hub_stats_tool("facebook")) # Ejemplo: Obtener el modelo más descargado de Facebook ``` Salida esperada: ``` The most downloaded model by facebook is facebook/esmfold_v1 with 12,544,550 downloads. ``` </hfoption> <hfoption id="langgraph"> ```python from langchain.tools import Tool from huggingface_hub import list_models def get_hub_stats(author: str) -> str: """Obtiene el modelo más descargado de un autor específico en Hugging Face Hub.""" try: # Listar modelos del autor especificado, ordenados por descargas models = list(list_models(author=author, sort="downloads", direction=-1, limit=1)) if models: model = models[0] return f"El modelo más descargado de {author} es {model.id} con {model.downloads:,} descargas." else: return f"No se encontraron modelos para el autor {author}." except Exception as e: return f"Error al obtener modelos para {author}: {str(e)}" # Inicializar la herramienta hub_stats_tool = Tool( name="get_hub_stats", func=get_hub_stats, description="Obtiene el modelo más descargado de un autor específico en Hugging Face Hub." ) # Ejemplo de uso print(hub_stats_tool.invoke("facebook")) # Ejemplo: Obtener el modelo más descargado de Facebook ``` Salida esperada: ``` The most downloaded model by facebook is facebook/esmfold_v1 with 13,109,861 downloads. ``` </hfoption> </hfoptions> Con la Herramienta de Estadísticas de Hub, Alfred ahora puede impresionar a influyentes creadores de IA discutiendo sus modelos más populares. ## Integrando Herramientas con Alfred Ahora que tenemos todas las herramientas, vamos a integrarlas en el agente de Alfred: <hfoptions id="agents-frameworks"> <hfoption id="smolagents"> ```python from smolagents import CodeAgent, InferenceClientModel # Inicializar el modelo de Hugging Face model = InferenceClientModel() # Crear Alfred con todas las herramientas alfred = CodeAgent( tools=[search_tool, weather_info_tool, hub_stats_tool], model=model ) # Ejemplo de consulta que Alfred podría recibir durante la gala response = alfred.run("¿Qué es Facebook y cuál es su modelo más popular?") print("🎩 Respuesta de Alfred:") print(response) ``` Salida esperada: ``` 🎩 Respuesta de Alfred: Facebook es un sitio web de redes sociales donde los usuarios pueden conectarse, compartir información e interactuar con otros. El modelo más descargado de Facebook en Hugging Face Hub es ESMFold_v1. ``` </hfoption> <hfoption id="llama-index"> ```python from llama_index.core.agent.workflow import AgentWorkflow from llama_index.llms.huggingface_api import HuggingFaceInferenceAPI # Inicializar el modelo de Hugging Face llm = HuggingFaceInferenceAPI(model_name="Qwen/Qwen2.5-Coder-32B-Instruct") # Crear Alfred con todas las herramientas alfred = AgentWorkflow.from_tools_or_functions( [search_tool, weather_info_tool, hub_stats_tool], llm=llm ) # Ejemplo de consulta que Alfred podría recibir durante la gala response = await alfred.run("¿Qué es Facebook y cuál es su modelo más popular?") print("🎩 Respuesta de Alfred:") print(response) ``` Salida esperada: ``` 🎩 Respuesta de Alfred: Facebook es un servicio de redes sociales y una empresa tecnológica con sede en Menlo Park, California. Fue fundada por Mark Zuckerberg y permite a las personas crear perfiles, conectarse con amigos y familiares, compartir fotos y videos, y unirse a grupos basados en intereses compartidos. El modelo más popular de Facebook en Hugging Face Hub es `facebook/esmfold_v1` con 13,109,861 descargas. ``` </hfoption> <hfoption id="langgraph"> ```python from typing import TypedDict, Annotated from langgraph.graph.message import add_messages from langchain_core.messages import AnyMessage, HumanMessage, AIMessage from langgraph.prebuilt import ToolNode from langgraph.graph import START, StateGraph from langgraph.prebuilt import tools_condition from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace # Generar la interfaz de chat, incluyendo las herramientas llm = HuggingFaceEndpoint( repo_id="Qwen/Qwen2.5-Coder-32B-Instruct", huggingfacehub_api_token=HUGGINGFACEHUB_API_TOKEN, ) chat = ChatHuggingFace(llm=llm, verbose=True) tools = [search_tool, weather_info_tool, hub_stats_tool] chat_with_tools = chat.bind_tools(tools) # Generar el AgentState y el grafo del Agente class AgentState(TypedDict): messages: Annotated[list[AnyMessage], add_messages] def assistant(state: AgentState): return { "messages": [chat_with_tools.invoke(state["messages"])], } ## El grafo builder = StateGraph(AgentState) # Definir nodos: estos hacen el trabajo builder.add_node("assistant", assistant) builder.add_node("tools", ToolNode(tools)) # Definir bordes: estos determinan cómo se mueve el flujo de control builder.add_edge(START, "assistant") builder.add_conditional_edges( "assistant", # Si el último mensaje requiere una herramienta, enrutar a herramientas # De lo contrario, proporcionar una respuesta directa tools_condition, ) builder.add_edge("tools", "assistant") alfred = builder.compile() messages = [HumanMessage(content="¿Quién es Facebook y cuál es su modelo más popular?")] response = alfred.invoke({"messages": messages}) print("🎩 Respuesta de Alfred:") print(response['messages'][-1].content) ``` Salida esperada: ``` 🎩 Respuesta de Alfred: Facebook es una empresa de redes sociales conocida por su sitio de redes sociales, Facebook, así como otros servicios como Instagram y WhatsApp. El modelo más descargado de Facebook en Hugging Face Hub es facebook/esmfold_v1 con 13,202,321 descargas. ``` </hfoption> </hfoptions> ## Conclusión Al integrar estas herramientas, Alfred ahora está equipado para manejar una variedad de tareas, desde búsquedas web hasta actualizaciones meteorológicas y estadísticas de modelos. Esto asegura que se mantenga como el anfitrión más informado y atractivo en la gala. <Tip> Intenta implementar una herramienta que pueda usarse para obtener las últimas noticias sobre un tema específico. Cuando hayas terminado, implementa tus herramientas personalizadas en el archivo <code>tools.py</code>. </Tip>
agents-course/units/es/unit3/agentic-rag/tools.mdx/0
{ "file_path": "agents-course/units/es/unit3/agentic-rag/tools.mdx", "repo_id": "agents-course", "token_count": 5452 }
9
# Quiz final de l'Unité 1 <img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/whiteboard-unit1sub4DONE.jpg" alt="Planification de l'Unité 1"/> Bravo d'avoir terminé la première unité ! Testons maintenant votre compréhension des concepts clés abordés jusqu'à présent. Une fois que vous aurez réussi le quiz, passez à la section suivante pour réclamer votre certificat. Bonne chance ! ## Quiz Voici le quiz interactif hébergé dans un *Space*. Il vous guidera à travers une série de questions à choix multiples afin de tester votre compréhension des concepts clés abordés dans cette unité. Une fois le quiz terminé, vous pourrez voir votre score et une répartition des réponses correctes. Un point important : **n'oubliez pas de cliquer sur *Submit* après avoir réussi, sinon votre note d'examen ne sera pas sauvegardée !** <iframe src="https://agents-course-unit-1-quiz.hf.space" frameborder="0" width="850" height="450" ></iframe> Vous pouvez également accéder au quiz 👉 [ici](https://huggingface.co/spaces/agents-course/unit_1_quiz). ## Certificat Lorsque vous aurez terminé le quiz, vous aurez accès à un certificat d'achèvement pour cette unité. Vous pouvez télécharger et partager ce certificat pour mettre en valeur vos progrès dans le cours. <img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/whiteboard-unit1sub5DONE.jpg" alt="Unit 1 planning"/> Une fois que vous l'avez reçu, vous pouvez l'ajouter à votre LinkedIn 🧑‍💼 ou le partager sur X, Bluesky, etc. **Nous serions super fiers et aimerions vous féliciter si vous mentionnez @huggingface** ! 🤗
agents-course/units/fr/unit1/final-quiz.mdx/0
{ "file_path": "agents-course/units/fr/unit1/final-quiz.mdx", "repo_id": "agents-course", "token_count": 627 }
10
# Introduction à LangGraph <img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit2/LangGraph/LangGraph.png" alt="Unit 2.3 Thumbnail"/> Bienvenue dans cette nouvelle partie de notre voyage, où vous allez apprendre **comment créer des applications** en utilisant le *framework* [`LangGraph`](https://github.com/langchain-ai/langgraph) conçu pour vous aider à structurer et orchestrer des *workflows* complexes avec des LLM. `LangGraph` est un framework qui vous permet de créer des applications **prêtes pour la production** en vous donnant des outils de **contrôle** sur le flux de votre agent. ## Aperçu du module Dans cette unité, vous découvrirez : ### 1️⃣ [Qu'est-ce que LangGraph et quand l'utiliser ?](./when_to_use_langgraph) ### 2️⃣ [Les composants de base de LangGraph](./building_blocks) ### 3️⃣ [Alfred, le majordome trieur de courrier](./first_graph) ### 4️⃣ [Alfred, l'agent d'analyse de documents](./document_analysis_agent) ### 5️⃣ [Quiz](./quiz1) <Tip warning={true}> Les exemples de cette section nécessitent l'accès à un modèle LLM/VLM puissant. Nous les avons exécutés en utilisant l'API GPT-4o car elle offre la meilleure compatibilité avec LangGraph. </Tip> À la fin de cette unité, vous serez en mesure de créer des applications robustes, organisées et prêtes pour la production ! Cela étant dit, cette section est une introduction à LangGraph et des sujets plus avancés peuvent être découverts dans le cours gratuit de la *LangChain academy* : [*Introduction to LangGraph*](https://academy.langchain.com/courses/intro-to-langgraph). Commençons ! ## Ressources - [*LangGraph Agents*](https://langchain-ai.github.io/langgraph/) - Exemples d'agents LangGraph - [*LangChain academy*](https://academy.langchain.com/courses/intro-to-langgraph) - Cours complet sur LangGraph de LangChain
agents-course/units/fr/unit2/langgraph/introduction.mdx/0
{ "file_path": "agents-course/units/fr/unit2/langgraph/introduction.mdx", "repo_id": "agents-course", "token_count": 659 }
11
# Et maintenant ? Quels sujets devrais-je apprendre ? L'IA agentique est un domaine en évolution rapide, et comprendre les protocoles fondamentaux est essentiel pour construire des systèmes intelligents et autonomes. Deux standards importants avec lesquels vous devriez vous familiariser sont : - Le ***Model Context Protocol* (MCP)** - Le ***Agent-to-Agent Protocol* (A2A)** ## 🔌 *Model Context Protocol* (MCP) Le ***Model Context Protocol* (MCP)** d'Anthropic est un standard ouvert qui permet aux modèles de se connecter de manière sécurisée et transparente **avec des outils externes, des sources de données et des applications**, rendant les agents plus compétents et autonomes. Pensez à MCP comme un **adaptateur universel**, comme un port USB-C, qui permet aux modèles de se connecter à divers environnements numériques **sans avoir besoin d'intégration personnalisée pour chacun**. MCP gagne rapidement en popularité dans l'industrie, avec des entreprises majeures comme *OpenAI* et *Google* qui commencent à l'adopter. 📚 En savoir plus : - [Annonce officielle et documentation d'Anthropic](https://www.anthropic.com/news/model-context-protocol) - [MCP sur Wikipedia](https://en.wikipedia.org/wiki/Model_Context_Protocol) - [Un article de blog sur MCP](https://huggingface.co/blog/Kseniase/mcp) - [Le cours d'Hugging Face sur le sujet](https://huggingface.co/learn/mcp-course/unit0/introduction) ## 🤝 Protocole *Agent-to-Agent* (A2A) Google a développé le **protocole *Agent-to-Agent* (A2A)** comme un complément au *Model Context Protocol* (MCP) d'Anthropic. Alors que MCP connecte les agents aux outils externes, **A2A connecte les agents entre eux**, ouvrant la voie à des systèmes multi-agents coopératifs qui peuvent travailler ensemble pour résoudre des problèmes complexes. 📚 En savoir plus : - [L'annonce de Google](https://developers.googleblog.com/en/a2a-a-new-era-of-agent-interoperability/)
agents-course/units/fr/unit4/additional-readings.mdx/0
{ "file_path": "agents-course/units/fr/unit4/additional-readings.mdx", "repo_id": "agents-course", "token_count": 675 }
12
# 에이전트 소개 [[introduction-to-agents]] <img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/thumbnail.jpg" alt="Thumbnail"/> 첫 번째 단원에 오신 것을 환영합니다! 이번 단원에서 여러분을 **AI 에이전트의 기초**를 탄탄히 다질 것이며, 다룰 주요 내용은 다음과 같습니다 : - **에이전트 이해하기** - 에이전트란 무엇이며, 어떻게 작동하는가? - 에이전트는 어떻게 추론과 계획을 통해 의사 결정을 내리는가? - **에이전트 내 LLM(대형 언어 모델)의 역할** - LLM이 에이전트의 "두뇌" 역할을 하는 방식 - LLM이 메시지 시스템을 통해 대화를 구조화하는 방법. - **도구(Tool)와 행동(Action)** - 에이전트가 외부 도구를 활용하여 환경과 상호작용하는 방식 - 에이전트에 도구를 구축하고 통합하는 방법 - **에이전트 워크플로우:** - *생각(Think)* → *행동(Act)* → *관찰(Observe)*. After exploring these topics, **you’ll build your first Agent** using `smolagents`! 이 개념들을 살펴본 후, 여러분은 `smolagents`를 사용하여 **첫 번째 에이전트**를 직접 구현해 볼 것입니다! Alfred라는 에이전트를 통해, 배운 개념들을 적용할 수 있는 간단한 작업 수행 방법을 배울 것입니다. 또한 **Hugging Face Spaces에 여러분이 만든 에이전트를 게시하는 방법**도 배워, 동료들과 친구들에게 공유할 수 있습니다. 마지막으로, 이 단원의 끝에서 퀴즈를 통과하게 되면 **🎓 Certificate of Fundamentals of Agents** 인증서를 획득할 수 있습니다! <img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/certificate-example.jpg" alt="Certificate Example"/> 이 단원은 **에이전트 학습의 필수 시작점**으로, 이후 더 고급 개념을 배우기 위한 기초를 다집니다. <img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/whiteboard-no-check.jpg" alt="Unit 1 planning"/> 단원의 내용이 많으니, **천천히** 학습하며 필요할 때 다시 돌아와 복습하세요! 준비되셨나요? 함께 시작해 봅시다! 🚀
agents-course/units/ko/unit1/introduction.mdx/0
{ "file_path": "agents-course/units/ko/unit1/introduction.mdx", "repo_id": "agents-course", "token_count": 1633 }
13
# Когда будут опубликованы следующие разделы? Вот график публикации: <img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/communication/next-units.jpg" alt="Следующие разделы" width="100%"/> Не забудьте <a href="https://bit.ly/hf-learn-agents">записаться на курс</a>! Подписавшись, **мы сможем присылать вам ссылки по мере публикации каждого раздела, а также обновления и подробности о предстоящих соревнованиях**. Продолжайте учиться, оставайтесь потрясающими 🤗
agents-course/units/ru-RU/communication/next-units.mdx/0
{ "file_path": "agents-course/units/ru-RU/communication/next-units.mdx", "repo_id": "agents-course", "token_count": 417 }
14
# Мысль: Внутреннее Рассуждение и Re-Act подход <Tip> В этом разделе мы погрузимся во внутреннюю работу AI агента - его способность рассуждать и планировать. Мы рассмотрим, как агент использует свой внутренний диалог для анализа информации, разбиения комплексных проблем на управляемые шаги и принятия решения о том, какие действия следует предпринять дальше. Кроме того, мы представим подход Re-Act - технику подсказок, которая побуждает модель думать «шаг за шагом», прежде чем действовать. </Tip> Мысли представляют собой внутренние процессы **рассуждения и планирования** агента для решения задачи. При этом используется способность Большой Языковой Модели (Large Language Model, LLM) агента **анализировать информацию, представленную в подсказке**. Считайте это внутренним диалогом агента, в ходе которого он обдумывает поставленную задачу и разрабатывает стратегию действий. Мысли Агента отвечают за доступ к текущим наблюдениям и решение о том, каким должно быть следующее действие (действия). Благодаря этому процессу агент может **разбивать сложные проблемы на более мелкие и управляемые шаги**, рефлексировать над прошлым опытом и постоянно корректировать свои планы основываясь на новой информации. Вот несколько примеров общих мыслей: | Тип мышления | Пример | |----------------|---------| | Планирование | "Мне нужно разбить эту задачу на три этапа: 1) собрать данные, 2) проанализировать тенденции, 3) создать отчет"| | Анализ | "Судя по сообщению об ошибке, проблема заключается в параметрах подключения к базе данных" | | Принятие решений | "Учитывая бюджетные ограничения пользователя, я должен рекомендовать вариант среднего уровня"| | Решение проблем | "Чтобы оптимизировать этот код, я должен сначала профилировать его, чтобы выявить узкие места" | | Интеграция памяти | "Пользователь ранее упоминал, что предпочитает Python, поэтому я приведу примеры на Python"| | Саморефлексия | "Мой последний подход не сработал, я должен попробовать другую стратегию"| | Постановка цели | "Чтобы выполнить эту задачу, мне нужно сначала установить критерии приемки" | | Приоритизация | "Уязвимость безопасности должна быть устранена до добавления новых функций" | > **Примечание:** В случае дообучения LLM вызову функций, процесс мышления необязателен. > *Если вы не знакомы с вызовом функций, более подробно об этом будет рассказано в разделе Действия.* ## Подход Re-Act Ключевым методом является **ReAct подход**, который представляет собой конкатенацию " Рассуждения (Reasoning)" (Мысли) и "Действия (Acting)". ReAct - это простая техника подсказки, которая добавляет «Давайте думать шаг за шагом», прежде чем позволить LLM декодировать следующие токены. Действительно, побуждение модели думать "шаг за шагом" стимулирует процесс декодирования следующих токенов **которые генерируют план**, а не окончательное решение, поскольку модель поощряется **декомпозировать** проблему на *подзадачи*. Это позволяет модели рассматривать подзадачи более детально, что в целом приводит к меньшему количеству ошибок, чем при попытке непосредственно сгенерировать окончательное решение. <figure> <img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/ReAct.png" alt="ReAct"/> <figcaption>(d) - это пример подхода Re-Act, когда мы подсказываем: " Давай думать шаг за шагом". </figcaption> </figure> <Tip> В последнее время мы наблюдаем большой интерес к стратегиям рассуждений. Именно это лежит в основе таких моделей, как Deepseek R1 или OpenAI o1, которые были дообучены "думать перед ответом". Эти модели были обучены всегда включать определенные секции _размышлений_ (заключенные между специальными токенами `<think>` и `</think>`). Это не просто техника подсказки, как в ReAct, а метод обучения, при котором модель учится генерировать эти секции после анализа тысяч примеров, которые показывают, чего мы от нее ожидаем. </Tip> --- Теперь, когда мы лучше понимаем процесс Мышления, давайте углубимся во вторую часть процесса: Действие.
agents-course/units/ru-RU/unit1/thoughts.mdx/0
{ "file_path": "agents-course/units/ru-RU/unit1/thoughts.mdx", "repo_id": "agents-course", "token_count": 4051 }
15
# Hiểu về AI agent thông qua chu kỳ Thought-Action-Observation <img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/whiteboard-check-3.jpg" alt="Kế hoạch chương 1"/> Trong các phần trước, ta đã học: - **Cách các công cụ được cung cấp cho agent trong system prompt** - **Cách AI agent là hệ thống có thể 'lập luận', lên kế hoạch và tương tác với môi trường** Trong phần này, **chúng ta sẽ khám phá Quy trình AI agent hoàn chỉnh** - chu kỳ được định nghĩa là Thought-Action-Observation (Tư duy - Hành động - Quan sát). Sau đó, ta sẽ đi sâu vào từng bước trong chu kỳ này. ## Thành phần cốt lõi Agent hoạt động theo chu kỳ liên tục: **tư duy (Thought) → hành động (Act) → quan sát (Observe)**. Cùng phân tích từng hành động: 1. **Thought (Tư duy)**: Phần LLM của Agent quyết định bước tiếp theo cần làm 2. **Action (Hành động)**: Agent thực hiện hành động bằng cách gọi các công cụ với tham số liên quan 3. **Observation (Quan sát)**: Mô hình phản ánh lại phản hồi từ công cụ ## Chu kỳ Thought-Action-Observation Ba thành phần này kết hợp với nhau trong một vòng lặp liên tục. Dùng phép so sánh từ lập trình, agent sử dụng **vòng lặp while**: lặp lại cho đến khi hoàn thành mục tiêu. Trực quan, quy trình trông như thế này: Chú thích hình ảnh: - Query: truy vấn từ phía người dùng - Think: lập luận bởi Agent - Act: hành động của Agent - Observe: phản hồi từ môi trường - END: kết thúc chu kỳ - Finish or Another Action needed?: xác định xem chu kỳ đã hoàn thành hay cần thêm hành động <img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/AgentCycle.gif" alt="Chu kỳ Think, Act, Observe"/> Trong nhiều framework Agent, **các quy tắc và hướng dẫn được nhúng trực tiếp vào system prompt**, đảm bảo mọi chu kỳ tuân theo logic định sẵn. Phiên bản đơn giản hóa của system prompt có thể như sau: <img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/system_prompt_cycle.png" alt="Chu kỳ Think, Act, Observe"/> > Bạn là một trợ lý AI được thiết kế để giúp người dùng một cách hiệu quả và chính xác. Mục tiêu chính của bạn là cung cấp các câu trả lời hữu ích, chính xác và rõ ràng. > > Bạn có quyền truy cập vào các công cụ sau: > Tên công cụ: calculator, Mô tả: Nhân hai số nguyên., Tham số: a: int, b: int, Đầu ra: int > > Bạn nên tư duy từng bước để hoàn thành mục tiêu với lập luận được chia thành các phần Tư duy/Hành động/Quan sát > có thể lặp lại nhiều lần nếu cần thiết. > > Trước tiên, bạn nên phản ánh bằng ‘Tư duy: {your_thoughts}’ về tình huống hiện tại, sau đó (nếu cần), gọi một công cụ với định dạng JSON thích hợp ‘Hành động: {JSON_BLOB}’, hoặc in câu trả lời cuối cùng của bạn bắt đầu với tiền tố ‘Câu trả lời cuối cùng:’ Ta thấy ở System Message đã định nghĩa: - *Hành vi của Agent* - *Các công cụ Agent có quyền truy cập* như đã mô tả ở phần trước - *Chu kỳ Thought-Action-Observation* được tích hợp vào hướng dẫn cho LLM Hãy xem một ví dụ nhỏ để hiểu quy trình trước khi đi sâu vào từng bước. ## Alfred - Agent thời tiết Chúng mình tạo ra Alfred, Agent thời tiết. Người dùng hỏi Alfred: "Hôm nay thời tiết ở New York thế nào?" <img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/alfred-agent.jpg" alt="Agent Alfred"/> Nhiệm vụ của Alfred là trả lời câu hỏi này bằng công cụ API thời tiết. Đây là cách chu kỳ diễn ra: ### Tư duy **Lập luận nội bộ:** Khi nhận câu hỏi, Alfred tự độc thoại: *"Người dùng cần thông tin thời tiết hiện tại ở New York. Mình có công cụ lấy dữ liệu thời tiết. Đầu tiên cần gọi API thời tiết để lấy thông tin mới nhất."* Bước này cho thấy agent phân tách vấn đề thành các bước: đầu tiên là thu thập dữ liệu cần thiết. <img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/alfred-agent-1.jpg" alt="Agent Alfred"/> ### Hành động **Sử dụng công cụ:** Dựa trên lập luận và biết về công cụ `get_weather`, Alfred chuẩn bị lệnh định dạng JSON để gọi API thời tiết. Ví dụ: Thought: Tôi cần kiểm tra thời tiết hiện tại ở New York. ``` { "action": "get_weather", "action_input": { "location": "New York" } } ``` Ở đây, hành động chỉ rõ công cụ cần gọi (get_weather) và tham số truyền vào ("location": "New York"). <img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/alfred-agent-2.jpg" alt="Agent Alfred"/> ### Quan sát **Phản hồi từ môi trường:** Sau khi gọi công cụ, Alfred nhận được quan sát. Đây có thể là dữ liệu thời tiết thô từ API như: *"Thời tiết hiện tại tại New York: nhiều mây, 15°C, độ ẩm 60%."* <img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/alfred-agent-3.jpg" alt="Agent Alfred"/> Quan sát này được thêm vào prompt như ngữ cảnh bổ sung. Nó đóng vai trò phản hồi thực tế, xác nhận hành động thành công và cung cấp thông tin cần thiết. ### Cập nhật tư duy **Phản ánh:** Với quan sát mới, Alfred cập nhật lập luận nội bộ: *"Giờ mình đã có dữ liệu thời tiết New York, có thể tổng hợp câu trả lời cho người dùng."* <img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/alfred-agent-4.jpg" alt="Agent Alfred"/> ### Hành động cuối cùng Alfred tạo phản hồi cuối cùng theo định dạng đã hướng dẫn: Thought: Tôi đã có dữ liệu thời tiết. Thời tiết hiện tại ở New York nhiều mây với nhiệt độ 15°C và độ ẩm 60%. Final answer: Thời tiết hiện tại ở New York nhiều mây với nhiệt độ 15°C và độ ẩm 60%. Hành động cuối này gửi câu trả lời về người dùng, khép vòng lặp. <img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/alfred-agent-5.jpg" alt="Agent Alfred"/> Những gì ta thấy trong ví dụ: - **Agent lặp qua vòng lặp đến khi hoàn thành mục tiêu:** **Quy trình của Alfred mang tính chu kỳ**. Bắt đầu bằng tư duy, hành động gọi công cụ, sau đó quan sát kết quả. Nếu quan sát cho thấy lỗi hoặc thiếu dữ liệu, Alfred có thể vào lại chu kỳ để điều chỉnh. - **Tích hợp công cụ:** Khả năng gọi công cụ (như API thời tiết) giúp Alfred vượt **khỏi kiến thức tĩnh để truy xuất dữ liệu thời gian thực** - yếu tố thiết yếu của nhiều AI agent. - **Thích ứng linh hoạt:** Mỗi chu kỳ cho phép agent kết hợp thông tin mới (quan sát) vào lập luận (tư duy), đảm bảo câu trả lời cuối chính xác và đầy đủ. Ví dụ này minh họa khái niệm cốt lõi của *ReAct cycle* (khái niệm sẽ được phát triển ở phần sau): **sự tương tác giữa Tư duy, Hành động và Quan sát trao quyền cho AI agent giải quyết tác vụ phức tạp một cách lặp**. Bằng cách hiểu và áp dụng các nguyên tắc này, bạn có thể thiết kế agent không chỉ lập luận về tác vụ mà còn **sử dụng hiệu quả công cụ bên ngoài để hoàn thành chúng**, đồng thời liên tục tinh chỉnh đầu ra dựa trên phản hồi môi trường. --- Giờ hãy cùng đi sâu vào từng bước Tư duy, Hành động và Quan sát trong quy trình.
agents-course/units/vi/unit1/agent-steps-and-structure.mdx/0
{ "file_path": "agents-course/units/vi/unit1/agent-steps-and-structure.mdx", "repo_id": "agents-course", "token_count": 5166 }
16
# 动作:使智能体能够与环境交互 <Tip> 在本节中,我们将探讨 AI 智能体 (AI agent) 与其环境交互的具体步骤。 我们将介绍动作 (actions) 如何被表示(使用 JSON 或代码),停止和解析方法 (stop and parse approach) 的重要性,以及不同类型的智能体。 </Tip> 动作是**AI 智能体 (AI agent) 与其环境交互的具体步骤**。 无论是浏览网络获取信息还是控制物理设备,每个动作都是智能体执行的一个特定操作。 例如,一个协助客户服务的智能体可能会检索客户数据、提供支持文章或将问题转交给人工代表。 ## 智能体动作的类型 (Types of Agent Actions) 有多种类型的智能体采用不同的方式执行动作: | 智能体类型 | 描述 | |------------------------|--------------------------------------------------------------------------------------------------| | JSON 智能体 (JSON Agent) | 要执行的动作以 JSON 格式指定。 | | 代码智能体 (Code Agent) | 智能体编写代码块,由外部解释执行。 | | 函数调用智能体 (Function-calling Agent) | 这是 JSON 智能体的一个子类别,经过微调以为每个动作生成新消息。 | 动作本身可以服务于多种目的: | 动作类型 | 描述 | |--------------------------|------------------------------------------------------------------------------------------| | 信息收集 (Information Gathering) | 执行网络搜索、查询数据库或检索文档。 | | 工具使用 (Tool Usage) | 进行 API 调用、运行计算和执行代码。 | | 环境交互 (Environment Interaction) | 操作数字界面或控制物理设备。 | | 通信 (Communication) | 通过聊天与用户互动或与其他智能体协作。 | 智能体的一个关键部分是**在动作完成时能够停止生成新的标记 (tokens)**,这对所有格式的智能体都适用:JSON、代码或函数调用。这可以防止意外输出并确保智能体的响应清晰准确。 大语言模型 (LLM) 只处理文本,并使用它来描述它想要采取的动作以及要提供给工具的参数。 ## 停止和解析方法 (The Stop and Parse Approach) 实现动作的一个关键方法是**停止和解析方法**。这种方法确保智能体的输出具有结构性和可预测性: 1. **以结构化格式生成 (Generation in a Structured Format)**: 智能体以清晰、预定义的格式(JSON或代码)输出其预期动作。 2. **停止进一步生成 (Halting Further Generation)**: 一旦动作完成,**智能体停止生成额外的标记**。这可以防止额外或错误的输出。 3. **解析输出 (Parsing the Output)**: 外部解析器读取格式化的动作,确定要调用哪个工具,并提取所需的参数。 例如,需要检查天气的智能体可能输出: ```json Thought: I need to check the current weather for New York. Action : { "action": "get_weather", "action_input": {"location": "New York"} } ``` 然后框架可以轻松解析要调用的函数名称和要应用的参数。 这种清晰的、机器可读的格式最大限度地减少了错误,并使外部工具能够准确处理智能体的命令。 注意:函数调用智能体的操作方式类似,通过构造每个动作,使指定的函数能够使用正确的参数被调用。 我们将在未来的单元中深入探讨这些类型的智能体。 ## 代码智能体 (Code Agents) 另一种方法是使用*代码智能体*。 这个想法是:**代码智能体不是输出简单的 JSON 对象**,而是生成一个**可执行的代码块——通常使用 Python 等高级语言**。 <img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/code-vs-json-actions.png" alt="Code Agents" /> 这种方法提供了几个优势: - **表达能力 (Expressiveness):** 代码可以自然地表示复杂的逻辑,包括循环、条件和嵌套函数,提供比 JSON 更大的灵活性。 - **模块化和可重用性 (Modularity and Reusability):** 生成的代码可以包含在不同动作或任务中可重用的函数和模块。 - **增强的可调试性 (Enhanced Debuggability):** 使用明确定义的编程语法,代码错误通常更容易检测和纠正。 - **直接集成 (Direct Integration):** 代码智能体可以直接与外部库和 API 集成,实现更复杂的操作,如数据处理或实时决策。 例如,一个负责获取天气的代码智能体可能生成以下 Python 代码片段: ```python # Code Agent Example: Retrieve Weather Information def get_weather(city): import requests api_url = f"https://api.weather.com/v1/location/{city}?apiKey=YOUR_API_KEY" response = requests.get(api_url) if response.status_code == 200: data = response.json() return data.get("weather", "No weather information available") else: return "Error: Unable to fetch weather data." # Execute the function and prepare the final answer result = get_weather("New York") final_answer = f"The current weather in New York is: {result}" print(final_answer) ``` 在这个例子中,代码智能体: - **通过API调用**获取天气数据, - 处理响应, - 并使用print()函数输出最终答案。 这种方法**也遵循停止和解析方法**,通过明确划定代码块并表明执行完成的时间(在这里,通过打印 final_answer)。 --- 我们了解到动作通过执行清晰、结构化的任务(无论是通过 JSON、代码还是函数调用)来连接智能体的内部推理和其现实世界的交互。 这种深思熟虑的执行确保每个动作都是精确的,并通过停止和解析方法准备好进行外部处理。在下一节中,我们将探索观察 (Observations),看看智能体如何捕获和整合来自其环境的反馈。 在此之后,我们将**最终准备好构建我们的第一个智能体!**
agents-course/units/zh-CN/unit1/actions.mdx/0
{ "file_path": "agents-course/units/zh-CN/unit1/actions.mdx", "repo_id": "agents-course", "token_count": 3502 }
17
# LangGraph 的核心构建模块 要使用 LangGraph 构建应用程序,需要理解其核心组件。让我们探索构成 LangGraph 应用程序的基础构建模块。 <img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit2/LangGraph/Building_blocks.png" alt="Building Blocks" width="70%"/> LangGraph 应用程序从 **entrypoint** 开始,根据执行情况,流程可能流向不同的函数直到抵达 END。 <img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit2/LangGraph/application.png" alt="Application"/> ## 1. 状态(State) **State** 是 LangGraph 中的核心概念,表示流经应用程序的所有信息。 ```python from typing_extensions import TypedDict class State(TypedDict): graph_state: str ``` 状态是 **用户自定义** 的,因此需要仔细设计字段以包含决策过程所需的所有数据! > 💡 **提示:** 仔细考虑应用程序需要在步骤之间跟踪哪些信息。 ## 2. 节点(Nodes) **Nodes** 是 Python 函数。每个节点: - 接收状态作为输入 - 执行某些操作 - 返回状态更新 ```python def node_1(state): print("---Node 1---") return {"graph_state": state['graph_state'] +" I am"} def node_2(state): print("---Node 2---") return {"graph_state": state['graph_state'] +" happy!"} def node_3(state): print("---Node 3---") return {"graph_state": state['graph_state'] +" sad!"} ``` 举例, 节点可以包含: - **LLM 调用**: 生成文本或做出决策 - **工具调用**: 与外部系统交互 - **条件逻辑**: 决定后续步骤 - **人工干预**: 获取用户输入 > 💡 **信息:** 像 START 和 END 这样的必要节点已直接包含在 LangGraph 中。 ## 3. 边(Edges) **Edges** 连接节点并定义图中的可能路径: ```python import random from typing import Literal def decide_mood(state) -> Literal["node_2", "node_3"]: # 通常我们会根据状态决定下一个节点 user_input = state['graph_state'] # 这里我们在节点2和节点3之间简单实现 50/50 的概率分配 if random.random() < 0.5: # 50% 时间, 我们返回节点2 return "node_2" # 50% 时间, 我们返回节点3 return "node_3" ``` 边可以是: - **直接边**: 始终从节点 A 到节点 B - **条件边**: 根据当前状态选择下一个节点 ## 4. 状态图(StateGraph) **StateGraph** 是包含整个 agent 工作流的容器: ```python from IPython.display import Image, display from langgraph.graph import StateGraph, START, END # 构建图表 builder = StateGraph(State) builder.add_node("node_1", node_1) builder.add_node("node_2", node_2) builder.add_node("node_3", node_3) # 连接逻辑 builder.add_edge(START, "node_1") builder.add_conditional_edges("node_1", decide_mood) builder.add_edge("node_2", END) builder.add_edge("node_3", END) # 编译 graph = builder.compile() ``` 可以可视化图表: ```python # 可视化 display(Image(graph.get_graph().draw_mermaid_png())) ``` <img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit2/LangGraph/basic_graph.jpeg" alt="图可视化"/> 最重要的是可以调用: ```python graph.invoke({"graph_state" : "Hi, this is Lance."}) ``` output : ``` ---Node 1--- ---Node 3--- {'graph_state': 'Hi, this is Lance. I am sad!'} ``` ## 下一步? 下一节我们将通过构建第一个图表来实践这些概念。该图表将让 Alfred 能够处理电子邮件,进行分类,并在邮件真实时起草初步回复。
agents-course/units/zh-CN/unit2/langgraph/building_blocks.mdx/0
{ "file_path": "agents-course/units/zh-CN/unit2/langgraph/building_blocks.mdx", "repo_id": "agents-course", "token_count": 2010 }
18
# 在 LlamaIndex 中创建智能工作流 LlamaIndex 中的工作流提供了一种结构化方式来将代码组织成可管理的顺序步骤。 这种工作流通过定义由`事件(Events)`触发的`步骤(Steps)`来创建,这些步骤本身也会发出`事件`来触发后续步骤。 让我们看看 Alfred 展示的用于 RAG 任务的 LlamaIndex 工作流。 ![工作流示意图](https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit2/llama-index/workflows.png) **工作流具有以下关键优势:** - 将代码清晰地组织为离散步骤 - 事件驱动架构实现灵活控制流 - 步骤间类型安全的通信 - 内置状态管理 - 支持简单和复杂的智能体交互 正如您可能猜到的,**工作流在保持对整体流程控制的同时,实现了智能体的自主性之间的完美平衡。** 现在让我们学习如何自己创建工作流! ## 创建工作流 <Tip> 您可以通过 <a href="https://huggingface.co/agents-course/notebooks/blob/main/unit2/llama-index/workflows.ipynb" target="_blank">这个笔记本</a> 中的代码进行实践,可使用 Google Colab 运行。 </Tip> ### 基础工作流创建 <details> <summary>安装工作流包</summary> 如 [LlamaHub 章节](llama-hub) 介绍的,我们可以通过以下命令安装工作流包: ```python pip install llama-index-utils-workflow ``` </details> 我们可以通过定义一个继承自 `Workflow` 的类并用 `@step` 装饰你的函数来创建一个单步工作流。 我们还需要添加 `StartEvent` 和 `StopEvent`,它们是用于指示工作流开始和结束的特殊事件。 ```python from llama_index.core.workflow import StartEvent, StopEvent, Workflow, step class MyWorkflow(Workflow): @step async def my_step(self, ev: StartEvent) -> StopEvent: # do something here return StopEvent(result="Hello, world!") w = MyWorkflow(timeout=10, verbose=False) result = await w.run() ``` 如您所见,我们现在可以通过调用“w.run()”来运行工作流程。 ### 连接多个步骤 为了连接多个步骤,我们**创建在步骤之间传输数据的自定义事件**。 为此,我们需要添加一个在步骤之间传递的“事件”,并将第一步的输出传输到第二步。 ```python from llama_index.core.workflow import Event class ProcessingEvent(Event): intermediate_result: str class MultiStepWorkflow(Workflow): @step async def step_one(self, ev: StartEvent) -> ProcessingEvent: # Process initial data return ProcessingEvent(intermediate_result="Step 1 complete") @step async def step_two(self, ev: ProcessingEvent) -> StopEvent: # Use the intermediate result final_result = f"Finished processing: {ev.intermediate_result}" return StopEvent(result=final_result) w = MultiStepWorkflow(timeout=10, verbose=False) result = await w.run() result ``` 类型提示在这里很重要,因为它可以确保工作流正确执行。让我们把事情复杂化一点吧! ### 循环和分支 类型提示是工作流中最强大的部分,因为它允许我们创建分支、循环和连接以促进更复杂的工作流。 让我们展示一个使用联合运算符 `|` **创建循环** 的示例。 在下面的示例中,我们看到 `LoopEvent` 被作为步骤的输入,也可以作为输出返回。 ```python from llama_index.core.workflow import Event import random class ProcessingEvent(Event): intermediate_result: str class LoopEvent(Event): loop_output: str class MultiStepWorkflow(Workflow): @step async def step_one(self, ev: StartEvent | LoopEvent) -> ProcessingEvent | LoopEvent: if random.randint(0, 1) == 0: print("Bad thing happened") return LoopEvent(loop_output="Back to step one.") else: print("Good thing happened") return ProcessingEvent(intermediate_result="First step complete.") @step async def step_two(self, ev: ProcessingEvent) -> StopEvent: # Use the intermediate result final_result = f"Finished processing: {ev.intermediate_result}" return StopEvent(result=final_result) w = MultiStepWorkflow(verbose=False) result = await w.run() result ``` ### 绘制工作流程 我们还可以绘制工作流程。让我们使用 `draw_all_possible_flows` 函数来绘制工作流程。这会将工作流程存储在 HTML 文件中。 ```python from llama_index.utils.workflow import draw_all_possible_flows w = ... # as defined in the previous section draw_all_possible_flows(w, "flow.html") ``` ![工作流程图](https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit2/llama-index/workflow-draw.png) 课程中我们将介绍最后一个很酷的技巧,即向工作流添加状态的能力。 ### 状态管理 当您想要跟踪工作流的状态时,状态管理非常有用,这样每个步骤都可以访问相同的状态。 我们可以在步骤函数中的参数上使用“上下文”类型提示来实现这一点。 ```python from llama_index.core.workflow import Context, StartEvent, StopEvent @step async def query(self, ctx: Context, ev: StartEvent) -> StopEvent: # 存储在上下文中 await ctx.store.set("query", "What is the capital of France?") # 根据上下文和事件做某事 val = ... # 从上下文中检索 query = await ctx.store.get("query") return StopEvent(result=result) ``` 太棒了!现在您知道如何在 LlamaIndex 中创建基本工作流了! <Tip>工作流还有一些更复杂的细微差别,您可以在<a href="https://docs.llamaindex.ai/en/stable/understanding/workflows/">LlamaIndex 文档</a>中了解。</Tip> 但是,还有另一种创建工作流的方法,它依赖于 `AgentWorkflow` 类。让我们看看如何使用它来创建多智能体工作流。 ## 使用多智能体工作流自动化工作流 我们可以使用**`AgentWorkflow` 类来创建多智能体工作流**,而无需手动创建工作流。 `AgentWorkflow` 使用工作流智能体,允许您创建一个或多个智能体的系统,这些智能体可以根据其专门功能进行协作并相互交接任务。 这可以构建复杂的智能体系统,其中不同的智能体处理任务的不同方面。 我们将从`llama_index.core.agent.workflow` 导入智能体类,而不是从`llama_index.core.agent` 导入类。 在`AgentWorkflow` 构造函数中,必须将一个智能体指定为根智能体。 当用户消息传入时,它首先被路由到根智能体。 然后每个智能体可以: - 使用他们的工具直接处理请求 - 交接给更适合该任务的另一个智能体 - 向用户返回响应 让我们看看如何创建多智能体工作流。 ```python from llama_index.core.agent.workflow import AgentWorkflow, ReActAgent from llama_index.llms.huggingface_api import HuggingFaceInferenceAPI # 定义一些工具 def add(a: int, b: int) -> int: """Add two numbers.""" return a + b def multiply(a: int, b: int) -> int: """Multiply two numbers.""" return a * b llm = HuggingFaceInferenceAPI(model_name="Qwen/Qwen2.5-Coder-32B-Instruct") # 我们可以直接传递函数,而无需 FunctionTool——fn/docstring 会被解析为名称/描述 multiply_agent = ReActAgent( name="multiply_agent", description="Is able to multiply two integers", system_prompt="A helpful assistant that can use a tool to multiply numbers.", tools=[multiply], llm=llm, ) addition_agent = ReActAgent( name="add_agent", description="Is able to add two integers", system_prompt="A helpful assistant that can use a tool to add numbers.", tools=[add], llm=llm, ) # 创建工作流 workflow = AgentWorkflow( agents=[multiply_agent, addition_agent], root_agent="multiply_agent", ) # 运行系统 response = await workflow.run(user_msg="Can you add 5 and 3?") ``` 智能体工具还可以修改我们前面提到的工作流状态。在启动工作流之前,我们可以提供一个可供所有智能体使用的初始状态字典。 状态存储在工作流上下文的 state 键中。它将被注入到 state_prompt 中,以增强每个新用户消息。 让我们通过修改前面的示例来注入一个计数器来计数函数调用: ```python from llama_index.core.workflow import Context # 定义一些工具 async def add(ctx: Context, a: int, b: int) -> int: """Add two numbers.""" # update our count cur_state = await ctx.store.get("state") cur_state["num_fn_calls"] += 1 await ctx.store.set("state", cur_state) return a + b async def multiply(ctx: Context, a: int, b: int) -> int: """Multiply two numbers.""" # update our count cur_state = await ctx.store.get("state") cur_state["num_fn_calls"] += 1 await ctx.store.set("state", cur_state) return a * b ... workflow = AgentWorkflow( agents=[multiply_agent, addition_agent], root_agent="multiply_agent" initial_state={"num_fn_calls": 0}, state_prompt="Current state: {state}. User message: {msg}", ) # 使用上下文运行工作流程 ctx = Context(workflow) response = await workflow.run(user_msg="Can you add 5 and 3?", ctx=ctx) # 拉出并检查状态 state = await ctx.store.get("state") print(state["num_fn_calls"]) ``` 恭喜!您现在已经掌握了 LlamaIndex 中 Agent 的基础知识!🎉 让我们继续进行最后一次测验来巩固您的知识!🚀
agents-course/units/zh-CN/unit2/llama-index/workflows.mdx/0
{ "file_path": "agents-course/units/zh-CN/unit2/llama-index/workflows.mdx", "repo_id": "agents-course", "token_count": 5010 }
19
# 总结 在本单元中,我们学习了如何创建智能体增强的检索生成(RAG)系统,帮助我们友好的智能体 Alfred 筹备并管理一场盛大的晚会。 RAG 与智能体能力的结合展示了当 AI 助手具备以下能力时的强大潜力: - 访问结构化知识(宾客信息) - 获取实时信息(网络搜索) - 领域专用工具(天气信息、Hub 统计) - 历史交互记忆 凭借这些能力,Alfred 现已具备完美主持者的素质,能够回答宾客问题、提供最新信息、确保晚会顺利进行——甚至能精准控制烟花表演的时机! <Tip> 完成智能体构建后,您可以进一步探索: - 为特定用例创建定制化工具 - 使用嵌入技术实现更复杂的 RAG 系统 - 构建可协作的多智能体系统 - 将智能体部署为可交互的服务 </Tip>
agents-course/units/zh-CN/unit3/agentic-rag/conclusion.mdx/0
{ "file_path": "agents-course/units/zh-CN/unit3/agentic-rag/conclusion.mdx", "repo_id": "agents-course", "token_count": 632 }
20
{ "[python]": { "editor.defaultFormatter": "ms-python.black-formatter" }, "python.formatting.provider": "none", "python.testing.pytestArgs": [ "candle-pyo3" ], "python.testing.unittestEnabled": false, "python.testing.pytestEnabled": true }
candle/.vscode/settings.json/0
{ "file_path": "candle/.vscode/settings.json", "repo_id": "candle", "token_count": 123 }
21
# Creating a REST api webserver
candle/candle-book/src/apps/rest.md/0
{ "file_path": "candle/candle-book/src/apps/rest.md", "repo_id": "candle", "token_count": 8 }
22
# Writing a custom kernel
candle/candle-book/src/inference/cuda/writing.md/0
{ "file_path": "candle/candle-book/src/inference/cuda/writing.md", "repo_id": "candle", "token_count": 6 }
23
use crate::benchmarks::{BenchDevice, BenchDeviceHandler}; use candle_core::{DType, Device, Tensor}; use criterion::{black_box, criterion_group, Criterion, Throughput}; use std::time::Instant; fn run( x: &Tensor, k: &Tensor, padding: usize, output_padding: usize, stride: usize, dilation: usize, ) { x.conv_transpose2d(k, padding, output_padding, stride, dilation) .unwrap(); } fn run_benchmark(c: &mut Criterion, device: &Device, dtype: DType, name: &str) { let t = Tensor::arange(0.0f32, 10000.0, device) .unwrap() .reshape((1, 4, 50, 50)) .unwrap() .to_dtype(dtype) .unwrap(); let kernel = Tensor::arange(0.0f32, 100.0, device) .unwrap() .reshape((4, 1, 5, 5)) .unwrap() .to_dtype(dtype) .unwrap(); let flops = t.dims().iter().product::<usize>() * dtype.size_in_bytes(); let mut group = c.benchmark_group(device.bench_name(name)); group.throughput(Throughput::Bytes(flops as u64)); group.bench_function("iter", move |b| { b.iter_custom(|iters| { let start = Instant::now(); for _i in 0..iters { run(black_box(&t), black_box(&kernel), 1, 0, 1, 2); } device.sync().unwrap(); start.elapsed() }) }); group.finish(); } fn criterion_benchmark(c: &mut Criterion) { let handler = BenchDeviceHandler::new().unwrap(); for device in handler.devices { run_benchmark(c, &device, DType::F32, "conv_transpose2d_f32"); run_benchmark(c, &device, DType::F16, "conv_transpose2d_f16"); run_benchmark(c, &device, DType::BF16, "conv_transpose2d_bf16"); } } criterion_group!(benches, criterion_benchmark);
candle/candle-core/benches/benchmarks/conv_transpose2d.rs/0
{ "file_path": "candle/candle-core/benches/benchmarks/conv_transpose2d.rs", "repo_id": "candle", "token_count": 826 }
24
//! 1D and 2D Convolutions //! use crate::{op::BackpropOp, op::Op, Error, Result, Tensor}; #[derive(Debug, Clone, PartialEq, Eq)] pub struct ParamsConv1D { pub(crate) b_size: usize, // Maybe we should have a version without l_in as this bit depends on the input and not only on // the weights. pub(crate) l_in: usize, pub(crate) c_out: usize, pub(crate) c_in: usize, pub(crate) k_size: usize, pub(crate) padding: usize, pub(crate) stride: usize, pub(crate) dilation: usize, pub(crate) cudnn_fwd_algo: Option<CudnnFwdAlgo>, } impl ParamsConv1D { pub(crate) fn l_out(&self) -> usize { (self.l_in + 2 * self.padding - self.dilation * (self.k_size - 1) - 1) / self.stride + 1 } pub(crate) fn out_dims(&self) -> Vec<usize> { let l_out = self.l_out(); vec![self.b_size, self.c_out, l_out] } } #[derive(Debug, Clone, PartialEq, Eq)] pub struct ParamsConvTranspose1D { pub(crate) b_size: usize, pub(crate) l_in: usize, pub(crate) c_out: usize, pub(crate) c_in: usize, pub(crate) k_size: usize, pub(crate) padding: usize, pub(crate) output_padding: usize, pub(crate) stride: usize, pub(crate) dilation: usize, } impl ParamsConvTranspose1D { pub(crate) fn l_out(&self) -> usize { (self.l_in - 1) * self.stride - 2 * self.padding + self.dilation * (self.k_size - 1) + self.output_padding + 1 } pub(crate) fn out_dims(&self) -> Vec<usize> { let l_out = self.l_out(); vec![self.b_size, self.c_out, l_out] } } #[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)] pub enum CudnnFwdAlgo { ImplicitGemm, ImplicitPrecompGemm, Gemm, Direct, Fft, FftTiling, Winograd, WinogradNonFused, Count, } #[derive(Debug, Clone, PartialEq, Eq)] pub struct ParamsConv2D { pub(crate) b_size: usize, pub(crate) i_h: usize, pub(crate) i_w: usize, pub(crate) k_h: usize, pub(crate) k_w: usize, pub(crate) c_out: usize, pub(crate) c_in: usize, pub(crate) padding: usize, pub(crate) stride: usize, pub(crate) dilation: usize, pub cudnn_fwd_algo: Option<CudnnFwdAlgo>, } impl ParamsConv2D { pub(crate) fn out_h(&self) -> usize { (self.i_h + 2 * self.padding - self.dilation * (self.k_h - 1) - 1) / self.stride + 1 } pub(crate) fn out_w(&self) -> usize { (self.i_w + 2 * self.padding - self.dilation * (self.k_w - 1) - 1) / self.stride + 1 } pub(crate) fn out_dims(&self) -> Vec<usize> { vec![self.b_size, self.c_out, self.out_h(), self.out_w()] } } #[derive(Debug, Clone, PartialEq, Eq)] pub struct ParamsConvTranspose2D { pub(crate) b_size: usize, pub(crate) i_h: usize, pub(crate) i_w: usize, pub(crate) k_h: usize, pub(crate) k_w: usize, pub(crate) c_out: usize, pub(crate) c_in: usize, pub(crate) padding: usize, pub(crate) output_padding: usize, pub(crate) stride: usize, pub(crate) dilation: usize, } impl ParamsConvTranspose2D { pub(crate) fn out_h(&self) -> usize { (self.i_h - 1) * self.stride + self.dilation * (self.k_h - 1) + self.output_padding + 1 - 2 * self.padding } pub(crate) fn out_w(&self) -> usize { (self.i_w - 1) * self.stride + self.dilation * (self.k_w - 1) + self.output_padding + 1 - 2 * self.padding } pub(crate) fn out_dims(&self) -> Vec<usize> { vec![self.b_size, self.c_out, self.out_h(), self.out_w()] } } impl Tensor { fn conv1d_single_group(&self, kernel: &Self, params: &ParamsConv1D) -> Result<Self> { let storage = self.storage() .conv1d(self.layout(), &kernel.storage(), kernel.layout(), params)?; let op = BackpropOp::new2(self, kernel, |arg, kernel| Op::Conv1D { arg, kernel, padding: params.padding, stride: params.stride, dilation: params.dilation, }); let out_dims = params.out_dims(); Ok(crate::tensor::from_storage(storage, out_dims, op, false)) } /// Applies a 1D convolution over the input tensor. pub fn conv1d( &self, kernel: &Self, padding: usize, stride: usize, dilation: usize, groups: usize, ) -> Result<Self> { self.conv1d_with_algo(kernel, padding, stride, dilation, groups, None) } /// Applies a 1D convolution over the input tensor. pub fn conv1d_with_algo( &self, kernel: &Self, padding: usize, stride: usize, dilation: usize, groups: usize, cudnn_fwd_algo: Option<CudnnFwdAlgo>, ) -> Result<Self> { let (c_out, c_in_k, k_size) = kernel.dims3()?; let (b_size, c_in, l_in) = self.dims3()?; if c_in != c_in_k * groups { Err(Error::Conv1dInvalidArgs { inp_shape: self.shape().clone(), k_shape: kernel.shape().clone(), padding, stride, msg: "the number of in-channels on the input doesn't match the kernel size", } .bt())? } let params = ParamsConv1D { b_size, l_in, c_out: c_out / groups, c_in: c_in / groups, k_size, padding, stride, dilation, cudnn_fwd_algo, }; if groups == 1 { self.conv1d_single_group(kernel, &params) } else { let blocks = self.chunk(groups, 1)?; let kernel = kernel.chunk(groups, 0)?; let blocks = blocks .iter() .zip(&kernel) .map(|(block, kernel)| block.conv1d_single_group(kernel, &params)) .collect::<Result<Vec<_>>>()?; Tensor::cat(&blocks, 1) } } fn conv_transpose1d_single_group( &self, kernel: &Self, params: &ParamsConvTranspose1D, ) -> Result<Self> { let storage = self.storage().conv_transpose1d( self.layout(), &kernel.storage(), kernel.layout(), params, )?; let op = BackpropOp::new2(self, kernel, |arg, kernel| Op::ConvTranspose1D { arg, kernel, padding: params.padding, output_padding: params.output_padding, stride: params.stride, dilation: params.dilation, }); let out_dims = params.out_dims(); Ok(crate::tensor::from_storage(storage, out_dims, op, false)) } /// Applies a 1D transposed convolution over the input tensor. pub fn conv_transpose1d( &self, kernel: &Self, padding: usize, output_padding: usize, stride: usize, dilation: usize, groups: usize, ) -> Result<Self> { let (c_in_k, c_out, k_size) = kernel.dims3()?; let (b_size, c_in, l_in) = self.dims3()?; if c_in != c_in_k { crate::bail!("in_channel mismatch between input ({c_in}) and kernel ({c_in_k})") } if c_in % groups != 0 { crate::bail!("in_channel {c_in} is not divisible by the number of groups") } let params = ParamsConvTranspose1D { b_size, l_in, k_size, c_out, c_in: c_in / groups, padding, output_padding, stride, dilation, }; if groups == 1 { self.conv_transpose1d_single_group(kernel, &params) } else { let blocks = self.chunk(groups, 1)?; let kernel = kernel.chunk(groups, 0)?; let blocks = blocks .iter() .zip(&kernel) .map(|(block, kernel)| block.conv_transpose1d_single_group(kernel, &params)) .collect::<Result<Vec<_>>>()?; Tensor::cat(&blocks, 1) } } fn conv2d_single_group(&self, kernel: &Self, params: &ParamsConv2D) -> Result<Self> { let storage = self.storage() .conv2d(self.layout(), &kernel.storage(), kernel.layout(), params)?; let op = BackpropOp::new2(self, kernel, |arg, kernel| Op::Conv2D { arg, kernel, padding: params.padding, stride: params.stride, dilation: params.dilation, }); let out_dims = params.out_dims(); Ok(crate::tensor::from_storage(storage, out_dims, op, false)) } /// Applies a 2D convolution over the input tensor. pub fn conv2d( &self, kernel: &Self, padding: usize, stride: usize, dilation: usize, groups: usize, ) -> Result<Self> { self.conv2d_with_algo(kernel, padding, stride, dilation, groups, None) } pub fn conv2d_with_algo( &self, kernel: &Self, padding: usize, stride: usize, dilation: usize, groups: usize, cudnn_fwd_algo: Option<CudnnFwdAlgo>, ) -> Result<Self> { let (b_size, c_in, i_h, i_w) = self.dims4()?; let (c_out, c_in_k, k_h, k_w) = kernel.dims4()?; if c_in != c_in_k * groups { crate::bail!( "in_channel mismatch between input ({c_in}, groups {groups}) and kernel ({c_in_k})" ) } let params = ParamsConv2D { b_size, i_h, i_w, k_h, k_w, c_out: c_out / groups, c_in: c_in / groups, padding, stride, dilation, cudnn_fwd_algo, }; if groups == 1 { self.conv2d_single_group(kernel, &params) } else { let blocks = self.chunk(groups, 1)?; let kernel = kernel.chunk(groups, 0)?; let blocks = blocks .iter() .zip(&kernel) .map(|(block, kernel)| block.conv2d_single_group(kernel, &params)) .collect::<Result<Vec<_>>>()?; Tensor::cat(&blocks, 1) } } /// Applies a 2D transposed convolution over the input tensor. pub fn conv_transpose2d( &self, kernel: &Self, padding: usize, output_padding: usize, stride: usize, dilation: usize, ) -> Result<Self> { let (b_size, c_in, i_h, i_w) = self.dims4()?; let (c_in_k, c_out, k_h, k_w) = kernel.dims4()?; if c_in != c_in_k { crate::bail!("in_channel mismatch between input ({c_in}) and kernel ({c_in_k})") } let params = ParamsConvTranspose2D { b_size, i_h, i_w, k_h, k_w, c_out, c_in, padding, output_padding, stride, dilation, }; let storage = self.storage().conv_transpose2d( self.layout(), &kernel.storage(), kernel.layout(), &params, )?; let op = BackpropOp::new2(self, kernel, |arg, kernel| Op::ConvTranspose2D { arg, kernel, padding: params.padding, output_padding: params.output_padding, stride: params.stride, dilation: params.dilation, }); let out_dims = params.out_dims(); Ok(crate::tensor::from_storage(storage, out_dims, op, false)) } }
candle/candle-core/src/conv.rs/0
{ "file_path": "candle/candle-core/src/conv.rs", "repo_id": "candle", "token_count": 6226 }
25
use crate::backend::BackendDevice; use crate::cpu_backend::CpuDevice; use crate::{CpuStorage, DType, Result, Shape, Storage, WithDType}; /// A `DeviceLocation` represents a physical device whereas multiple `Device` /// can live on the same location (typically for cuda devices). #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)] pub enum DeviceLocation { Cpu, Cuda { gpu_id: usize }, Metal { gpu_id: usize }, } /// Cpu, Cuda, or Metal #[derive(Debug, Clone)] pub enum Device { Cpu, Cuda(crate::CudaDevice), Metal(crate::MetalDevice), } pub trait NdArray { fn shape(&self) -> Result<Shape>; fn to_cpu_storage(&self) -> CpuStorage; } impl<S: WithDType> NdArray for S { fn shape(&self) -> Result<Shape> { Ok(Shape::from(())) } fn to_cpu_storage(&self) -> CpuStorage { S::to_cpu_storage(&[*self]) } } impl<S: WithDType, const N: usize> NdArray for &[S; N] { fn shape(&self) -> Result<Shape> { Ok(Shape::from(self.len())) } fn to_cpu_storage(&self) -> CpuStorage { S::to_cpu_storage(self.as_slice()) } } impl<S: WithDType> NdArray for &[S] { fn shape(&self) -> Result<Shape> { Ok(Shape::from(self.len())) } fn to_cpu_storage(&self) -> CpuStorage { S::to_cpu_storage(self) } } impl<S: WithDType, const N: usize, const M: usize> NdArray for &[[S; N]; M] { fn shape(&self) -> Result<Shape> { Ok(Shape::from((M, N))) } fn to_cpu_storage(&self) -> CpuStorage { S::to_cpu_storage_owned(self.concat()) } } impl<S: WithDType, const N1: usize, const N2: usize, const N3: usize> NdArray for &[[[S; N3]; N2]; N1] { fn shape(&self) -> Result<Shape> { Ok(Shape::from((N1, N2, N3))) } fn to_cpu_storage(&self) -> CpuStorage { let mut vec = Vec::with_capacity(N1 * N2 * N3); for i1 in 0..N1 { for i2 in 0..N2 { vec.extend(self[i1][i2]) } } S::to_cpu_storage_owned(vec) } } impl<S: WithDType, const N1: usize, const N2: usize, const N3: usize, const N4: usize> NdArray for &[[[[S; N4]; N3]; N2]; N1] { fn shape(&self) -> Result<Shape> { Ok(Shape::from((N1, N2, N3, N4))) } fn to_cpu_storage(&self) -> CpuStorage { let mut vec = Vec::with_capacity(N1 * N2 * N3 * N4); for i1 in 0..N1 { for i2 in 0..N2 { for i3 in 0..N3 { vec.extend(self[i1][i2][i3]) } } } S::to_cpu_storage_owned(vec) } } impl<S: WithDType> NdArray for Vec<S> { fn shape(&self) -> Result<Shape> { Ok(Shape::from(self.len())) } fn to_cpu_storage(&self) -> CpuStorage { S::to_cpu_storage(self.as_slice()) } } impl<S: WithDType> NdArray for Vec<&[S]> { fn shape(&self) -> Result<Shape> { if self.is_empty() { crate::bail!("empty array") } let n = self.len(); let m = self[0].len(); for v in self.iter() { if v.len() != m { crate::bail!("two elements have different len {m} {}", v.len()) } } Ok(Shape::from((n, m))) } fn to_cpu_storage(&self) -> CpuStorage { let data = self.iter().copied().flatten().copied().collect::<Vec<_>>(); S::to_cpu_storage_owned(data) } } impl<S: WithDType> NdArray for Vec<Vec<S>> { fn shape(&self) -> Result<Shape> { if self.is_empty() { crate::bail!("empty array") } let n = self.len(); let m = self[0].len(); for v in self.iter() { if v.len() != m { crate::bail!("two elements have different len {m} {}", v.len()) } } Ok(Shape::from((n, m))) } fn to_cpu_storage(&self) -> CpuStorage { let len: usize = self.iter().map(|v| v.len()).sum(); let mut dst = Vec::with_capacity(len); for v in self.iter() { dst.extend(v.iter().copied()); } S::to_cpu_storage_owned(dst) } } impl<S: WithDType> NdArray for Vec<Vec<Vec<S>>> { fn shape(&self) -> Result<Shape> { if self.is_empty() { crate::bail!("empty array") } let shape0 = self[0].shape()?; let n = self.len(); for v in self.iter() { let shape = v.shape()?; if shape != shape0 { crate::bail!("two elements have different shapes {shape:?} {shape0:?}") } } Ok(Shape::from([[n].as_slice(), shape0.dims()].concat())) } fn to_cpu_storage(&self) -> CpuStorage { if self.is_empty() { return S::to_cpu_storage_owned(vec![]); } let len: usize = self .iter() .map(|v| v.iter().map(|v| v.len()).sum::<usize>()) .sum(); let mut dst = Vec::with_capacity(len); for v1 in self.iter() { for v2 in v1.iter() { dst.extend(v2.iter().copied()); } } S::to_cpu_storage_owned(dst) } } impl<S: WithDType> NdArray for Vec<Vec<Vec<Vec<S>>>> { fn shape(&self) -> Result<Shape> { if self.is_empty() { crate::bail!("empty array") } let shape0 = self[0].shape()?; let n = self.len(); for v in self.iter() { let shape = v.shape()?; if shape != shape0 { crate::bail!("two elements have different shapes {shape:?} {shape0:?}") } } Ok(Shape::from([[n].as_slice(), shape0.dims()].concat())) } fn to_cpu_storage(&self) -> CpuStorage { let len: usize = self .iter() .map(|v| { v.iter() .map(|v| v.iter().map(|v| v.len()).sum::<usize>()) .sum::<usize>() }) .sum(); let mut dst = Vec::with_capacity(len); for v1 in self.iter() { for v2 in v1.iter() { for v3 in v2.iter() { dst.extend(v3.iter().copied()); } } } S::to_cpu_storage_owned(dst) } } impl Device { pub fn new_cuda(ordinal: usize) -> Result<Self> { Ok(Self::Cuda(crate::CudaDevice::new(ordinal)?)) } pub fn as_cuda_device(&self) -> Result<&crate::CudaDevice> { match self { Self::Cuda(d) => Ok(d), Self::Cpu => crate::bail!("expected a cuda device, got cpu"), Self::Metal(_) => crate::bail!("expected a cuda device, got Metal"), } } pub fn as_metal_device(&self) -> Result<&crate::MetalDevice> { match self { Self::Cuda(_) => crate::bail!("expected a metal device, got cuda"), Self::Cpu => crate::bail!("expected a metal device, got cpu"), Self::Metal(d) => Ok(d), } } pub fn new_cuda_with_stream(ordinal: usize) -> Result<Self> { Ok(Self::Cuda(crate::CudaDevice::new_with_stream(ordinal)?)) } pub fn new_metal(ordinal: usize) -> Result<Self> { Ok(Self::Metal(crate::MetalDevice::new(ordinal)?)) } pub fn set_seed(&self, seed: u64) -> Result<()> { match self { Self::Cpu => CpuDevice.set_seed(seed), Self::Cuda(c) => c.set_seed(seed), Self::Metal(m) => m.set_seed(seed), } } pub fn same_device(&self, rhs: &Self) -> bool { match (self, rhs) { (Self::Cpu, Self::Cpu) => true, (Self::Cuda(lhs), Self::Cuda(rhs)) => lhs.same_device(rhs), (Self::Metal(lhs), Self::Metal(rhs)) => lhs.same_device(rhs), _ => false, } } pub fn location(&self) -> DeviceLocation { match self { Self::Cpu => DeviceLocation::Cpu, Self::Cuda(device) => device.location(), Device::Metal(device) => device.location(), } } pub fn is_cpu(&self) -> bool { matches!(self, Self::Cpu) } pub fn is_cuda(&self) -> bool { matches!(self, Self::Cuda(_)) } pub fn is_metal(&self) -> bool { matches!(self, Self::Metal(_)) } pub fn supports_bf16(&self) -> bool { match self { Self::Cuda(_) | Self::Metal(_) => true, Self::Cpu => false, } } /// Return `BF16` for devices that support it, otherwise default to `F32`. pub fn bf16_default_to_f32(&self) -> DType { if self.supports_bf16() { DType::BF16 } else { DType::F32 } } pub fn cuda_if_available(ordinal: usize) -> Result<Self> { if crate::utils::cuda_is_available() { Self::new_cuda(ordinal) } else { Ok(Self::Cpu) } } pub(crate) fn rand_uniform_f64( &self, lo: f64, up: f64, shape: &Shape, dtype: DType, ) -> Result<Storage> { match self { Device::Cpu => { let storage = CpuDevice.rand_uniform(shape, dtype, lo, up)?; Ok(Storage::Cpu(storage)) } Device::Cuda(device) => { // TODO: Remove the special case if we start supporting generating f16/bf16 directly. if dtype == DType::F16 || dtype == DType::BF16 { let storage = device.rand_uniform(shape, DType::F32, lo, up)?; Storage::Cuda(storage).to_dtype(&crate::Layout::contiguous(shape), dtype) } else { let storage = device.rand_uniform(shape, dtype, lo, up)?; Ok(Storage::Cuda(storage)) } } Device::Metal(device) => { let storage = device.rand_uniform(shape, dtype, lo, up)?; Ok(Storage::Metal(storage)) } } } pub(crate) fn rand_uniform<T: crate::FloatDType>( &self, lo: T, up: T, shape: &Shape, ) -> Result<Storage> { self.rand_uniform_f64(lo.to_f64(), up.to_f64(), shape, T::DTYPE) } pub(crate) fn rand_normal_f64( &self, mean: f64, std: f64, shape: &Shape, dtype: DType, ) -> Result<Storage> { match self { Device::Cpu => { let storage = CpuDevice.rand_normal(shape, dtype, mean, std)?; Ok(Storage::Cpu(storage)) } Device::Cuda(device) => { // TODO: Remove the special case if we start supporting generating f16/bf16 directly. if dtype == DType::F16 || dtype == DType::BF16 { let storage = device.rand_normal(shape, DType::F32, mean, std)?; Storage::Cuda(storage).to_dtype(&crate::Layout::contiguous(shape), dtype) } else { let storage = device.rand_normal(shape, dtype, mean, std)?; Ok(Storage::Cuda(storage)) } } Device::Metal(device) => { let storage = device.rand_normal(shape, dtype, mean, std)?; Ok(Storage::Metal(storage)) } } } pub(crate) fn rand_normal<T: crate::FloatDType>( &self, mean: T, std: T, shape: &Shape, ) -> Result<Storage> { self.rand_normal_f64(mean.to_f64(), std.to_f64(), shape, T::DTYPE) } pub(crate) fn zeros(&self, shape: &Shape, dtype: DType) -> Result<Storage> { match self { Device::Cpu => { let storage = CpuDevice.zeros_impl(shape, dtype)?; Ok(Storage::Cpu(storage)) } Device::Cuda(device) => { let storage = device.zeros_impl(shape, dtype)?; Ok(Storage::Cuda(storage)) } Device::Metal(device) => { let storage = device.zeros_impl(shape, dtype)?; Ok(Storage::Metal(storage)) } } } pub(crate) unsafe fn alloc_uninit(&self, shape: &Shape, dtype: DType) -> Result<Storage> { match self { Device::Cpu => { let storage = CpuDevice.alloc_uninit(shape, dtype)?; Ok(Storage::Cpu(storage)) } Device::Cuda(device) => { let storage = device.alloc_uninit(shape, dtype)?; Ok(Storage::Cuda(storage)) } Device::Metal(device) => { let storage = device.alloc_uninit(shape, dtype)?; Ok(Storage::Metal(storage)) } } } pub(crate) fn storage_from_slice<D: WithDType>(&self, data: &[D]) -> Result<Storage> { match self { Device::Cpu => Ok(Storage::Cpu(data.to_cpu_storage())), Device::Cuda(device) => { let storage = device.storage_from_slice(data)?; Ok(Storage::Cuda(storage)) } Device::Metal(device) => { let storage = device.storage_from_slice(data)?; Ok(Storage::Metal(storage)) } } } pub(crate) fn storage<A: NdArray>(&self, array: A) -> Result<Storage> { match self { Device::Cpu => Ok(Storage::Cpu(array.to_cpu_storage())), Device::Cuda(device) => { let storage = array.to_cpu_storage(); let storage = device.storage_from_cpu_storage_owned(storage)?; Ok(Storage::Cuda(storage)) } Device::Metal(device) => { let storage = array.to_cpu_storage(); let storage = device.storage_from_cpu_storage_owned(storage)?; Ok(Storage::Metal(storage)) } } } pub(crate) fn storage_owned<S: WithDType>(&self, data: Vec<S>) -> Result<Storage> { match self { Device::Cpu => Ok(Storage::Cpu(S::to_cpu_storage_owned(data))), Device::Cuda(device) => { let storage = S::to_cpu_storage_owned(data); let storage = device.storage_from_cpu_storage_owned(storage)?; Ok(Storage::Cuda(storage)) } Device::Metal(device) => { let storage = S::to_cpu_storage_owned(data); let storage = device.storage_from_cpu_storage_owned(storage)?; Ok(Storage::Metal(storage)) } } } pub fn synchronize(&self) -> Result<()> { match self { Self::Cpu => Ok(()), Self::Cuda(d) => d.synchronize(), Self::Metal(d) => d.synchronize(), } } }
candle/candle-core/src/device.rs/0
{ "file_path": "candle/candle-core/src/device.rs", "repo_id": "candle", "token_count": 7742 }
26
use super::{GgmlDType, QStorage}; use crate::quantized::k_quants::GgmlType; use crate::{backend::BackendDevice, cuda_backend::WrapErr}; use crate::{builder_arg as barg, CudaDevice, CudaStorage, Result}; use half::f16; use cudarc::driver::{CudaSlice, CudaView, PushKernelArg}; #[derive(Clone, Debug)] struct PaddedCudaSlice { inner: CudaSlice<u8>, len: usize, } #[derive(Clone, Debug)] pub struct QCudaStorage { data: PaddedCudaSlice, dtype: GgmlDType, device: CudaDevice, } static FORCE_DMMV: std::sync::atomic::AtomicBool = std::sync::atomic::AtomicBool::new(false); pub fn set_force_dmmv(f: bool) { FORCE_DMMV.store(f, std::sync::atomic::Ordering::Relaxed) } pub const WARP_SIZE: usize = 32; pub const MMQ_X_Q4_0_AMPERE: usize = 4; pub const MMQ_Y_Q4_0_AMPERE: usize = 32; pub const NWARPS_Q4_0_AMPERE: usize = 4; pub const GGML_CUDA_MMV_X: usize = 32; pub const GGML_CUDA_MMV_Y: usize = 1; pub const CUDA_QUANTIZE_BLOCK_SIZE: usize = 256; pub const CUDA_DEQUANTIZE_BLOCK_SIZE: usize = 256; pub const MATRIX_ROW_PADDING: usize = 512; fn ceil_div(p: usize, q: usize) -> usize { p.div_ceil(q) } fn pad(p: usize, q: usize) -> usize { ceil_div(p, q) * q } fn quantize_q8_1( src: &CudaView<f32>, dst: &mut CudaSlice<u8>, elem_count: usize, ky: usize, dev: &CudaDevice, ) -> Result<()> { let kx = elem_count; let kx_padded = pad(kx, MATRIX_ROW_PADDING); let num_blocks = ceil_div(kx_padded, CUDA_QUANTIZE_BLOCK_SIZE); let func = dev.get_or_load_func("quantize_q8_1", &candle_kernels::QUANTIZED)?; let cfg = cudarc::driver::LaunchConfig { grid_dim: (num_blocks as u32, ky as u32, 1), block_dim: (CUDA_QUANTIZE_BLOCK_SIZE as u32, 1, 1), shared_mem_bytes: 0, }; let mut builder = func.builder(); builder.arg(src); builder.arg(dst); barg!(builder, kx as i32, kx_padded as i32); unsafe { builder.launch(cfg) }.w()?; Ok(()) } fn dequantize_f32( data: &PaddedCudaSlice, dtype: GgmlDType, elem_count: usize, dev: &CudaDevice, ) -> Result<CudaStorage> { let nb = elem_count.div_ceil(256); let (kernel_name, is_k, block_dim, num_blocks) = match dtype { GgmlDType::Q4_0 => ("dequantize_block_q4_0_f32", false, 32, nb), GgmlDType::Q4_1 => ("dequantize_block_q4_1_f32", false, 32, nb), GgmlDType::Q5_0 => ( "dequantize_block_q5_0_f32", false, CUDA_DEQUANTIZE_BLOCK_SIZE, ceil_div(elem_count, 2 * CUDA_DEQUANTIZE_BLOCK_SIZE), ), GgmlDType::Q5_1 => ( "dequantize_block_q5_1_f32", false, CUDA_DEQUANTIZE_BLOCK_SIZE, ceil_div(elem_count, 2 * CUDA_DEQUANTIZE_BLOCK_SIZE), ), GgmlDType::Q8_0 => ("dequantize_block_q8_0_f32", false, 32, nb), GgmlDType::Q2K => ("dequantize_block_q2_K_f32", true, 64, nb), GgmlDType::Q3K => ("dequantize_block_q3_K_f32", true, 64, nb), GgmlDType::Q4K => ("dequantize_block_q4_K_f32", true, 32, nb), GgmlDType::Q5K => ("dequantize_block_q5_K_f32", true, 64, nb), GgmlDType::Q6K => ("dequantize_block_q6_K_f32", true, 64, nb), GgmlDType::Q8K => ("dequantize_block_q8_K_f32", true, 32, nb), _ => crate::bail!("unsupported dtype for dequantize {dtype:?}"), }; let func = dev.get_or_load_func(kernel_name, &candle_kernels::QUANTIZED)?; let dst = unsafe { dev.alloc::<f32>(elem_count)? }; // See e.g. // https://github.com/ggerganov/llama.cpp/blob/cbbd1efa06f8c09f9dff58ff9d9af509cc4c152b/ggml-cuda.cu#L7270 let cfg = cudarc::driver::LaunchConfig { grid_dim: (num_blocks as u32, 1, 1), block_dim: (block_dim as u32, 1, 1), shared_mem_bytes: 0, }; if is_k { let mut builder = func.builder(); builder.arg(&data.inner); builder.arg(&dst); unsafe { builder.launch(cfg) }.w()?; } else { let nb32 = match dtype { GgmlDType::Q5_0 | GgmlDType::Q5_1 => elem_count, _ => elem_count / 32, }; let mut builder = func.builder(); builder.arg(&data.inner); builder.arg(&dst); barg!(builder, nb32 as i32); unsafe { builder.launch(cfg) }.w()?; } Ok(CudaStorage::wrap_cuda_slice(dst, dev.clone())) } fn dequantize_f16( data: &PaddedCudaSlice, dtype: GgmlDType, elem_count: usize, dev: &CudaDevice, ) -> Result<CudaStorage> { let nb = elem_count.div_ceil(256); let (kernel_name, is_k, block_dim, num_blocks) = match dtype { GgmlDType::Q4_0 => ("dequantize_block_q4_0_f16", false, 32, nb), GgmlDType::Q4_1 => ("dequantize_block_q4_1_f16", false, 32, nb), GgmlDType::Q5_0 => ( "dequantize_block_q5_0_f16", false, CUDA_DEQUANTIZE_BLOCK_SIZE, ceil_div(elem_count, 2 * CUDA_DEQUANTIZE_BLOCK_SIZE), ), GgmlDType::Q5_1 => ( "dequantize_block_q5_1_f16", false, CUDA_DEQUANTIZE_BLOCK_SIZE, ceil_div(elem_count, 2 * CUDA_DEQUANTIZE_BLOCK_SIZE), ), GgmlDType::Q8_0 => ("dequantize_block_q8_0_f16", false, 32, nb), GgmlDType::Q2K => ("dequantize_block_q2_K_f16", true, 64, nb), GgmlDType::Q3K => ("dequantize_block_q3_K_f16", true, 64, nb), GgmlDType::Q4K => ("dequantize_block_q4_K_f16", true, 32, nb), GgmlDType::Q5K => ("dequantize_block_q5_K_f16", true, 64, nb), GgmlDType::Q6K => ("dequantize_block_q6_K_f16", true, 64, nb), GgmlDType::Q8K => ("dequantize_block_q8_K_f16", true, 32, nb), _ => crate::bail!("unsupported dtype for dequantize {dtype:?}"), }; let func = dev.get_or_load_func(kernel_name, &candle_kernels::QUANTIZED)?; let dst = unsafe { dev.alloc::<f16>(elem_count)? }; // See e.g. // https://github.com/ggerganov/llama.cpp/blob/cbbd1efa06f8c09f9dff58ff9d9af509cc4c152b/ggml-cuda.cu#L7270 let cfg = cudarc::driver::LaunchConfig { grid_dim: (num_blocks as u32, 1, 1), block_dim: (block_dim as u32, 1, 1), shared_mem_bytes: 0, }; if is_k { let mut builder = func.builder(); builder.arg(&data.inner); builder.arg(&dst); unsafe { builder.launch(cfg) }.w()?; } else { let nb32 = match dtype { GgmlDType::Q5_0 | GgmlDType::Q5_1 => elem_count, _ => elem_count / 32, }; let mut builder = func.builder(); builder.arg(&data.inner); builder.arg(&dst); barg!(builder, nb32 as i32); unsafe { builder.launch(cfg) }.w()?; } Ok(CudaStorage::wrap_cuda_slice(dst, dev.clone())) } fn dequantize_mul_mat_vec( data: &PaddedCudaSlice, y: &CudaView<f32>, dtype: GgmlDType, ncols: usize, nrows: usize, dev: &CudaDevice, ) -> Result<CudaStorage> { let data_elems = data.len / dtype.type_size() * dtype.block_size(); if data_elems < ncols * nrows { crate::bail!("unexpected data size {}, ncols {ncols} {nrows}", data_elems) } if y.len() != ncols { crate::bail!("unexpected y size {}, ncols {ncols} {nrows}", y.len()) } let kernel_name = match dtype { GgmlDType::Q4_0 => "dequantize_mul_mat_vec_q4_0_cuda", GgmlDType::Q4_1 => "dequantize_mul_mat_vec_q4_1_cuda", GgmlDType::Q5_0 => "dequantize_mul_mat_vec_q5_0_cuda", GgmlDType::Q5_1 => "dequantize_mul_mat_vec_q5_1_cuda", GgmlDType::Q8_0 => "dequantize_mul_mat_vec_q8_0_cuda", GgmlDType::Q2K => "dequantize_mul_mat_vec_q2_k", GgmlDType::Q3K => "dequantize_mul_mat_vec_q3_k", GgmlDType::Q4K => "dequantize_mul_mat_vec_q4_k", GgmlDType::Q5K => "dequantize_mul_mat_vec_q5_k", GgmlDType::Q6K => "dequantize_mul_mat_vec_q6_k", _ => crate::bail!("unsupported dtype for quantized matmul {dtype:?}"), }; let func = dev.get_or_load_func(kernel_name, &candle_kernels::QUANTIZED)?; let dst = unsafe { dev.alloc::<f32>(nrows)? }; let block_num_y = ceil_div(nrows, GGML_CUDA_MMV_Y); let cfg = cudarc::driver::LaunchConfig { grid_dim: (block_num_y as u32, 1, 1), block_dim: (WARP_SIZE as u32, GGML_CUDA_MMV_Y as u32, 1), shared_mem_bytes: 0, }; let mut builder = func.builder(); builder.arg(&data.inner); builder.arg(y); builder.arg(&dst); barg!(builder, ncols as i32, nrows as i32); unsafe { builder.launch(cfg) }.w()?; Ok(CudaStorage::wrap_cuda_slice(dst, dev.clone())) } fn mul_mat_vec_via_q8_1( data: &PaddedCudaSlice, y: &CudaView<f32>, dtype: GgmlDType, ncols: usize, nrows: usize, b_size: usize, dev: &CudaDevice, ) -> Result<CudaStorage> { let data_elems = data.len / dtype.type_size() * dtype.block_size(); if data_elems < ncols * nrows { crate::bail!("unexpected data size {}, ncols {ncols} {nrows}", data_elems) } if y.len() != ncols * b_size { crate::bail!("unexpected y size {}, ncols {ncols} {nrows}", y.len()) } if b_size == 0 || b_size > 8 { crate::bail!("only bsize between 1 and 8 are supported, got {b_size}") } // Start by quantizing y let ncols_padded = pad(ncols, MATRIX_ROW_PADDING); let y_size_in_bytes = b_size * ncols_padded * GgmlDType::Q8_1.type_size() / GgmlDType::Q8_1.block_size(); let mut y_q8_1 = unsafe { dev.alloc::<u8>(y_size_in_bytes)? }; quantize_q8_1(y, &mut y_q8_1, ncols, b_size, dev)?; let kernel_name = match dtype { GgmlDType::Q4_0 => "mul_mat_vec_q4_0_q8_1_cuda", GgmlDType::Q4_1 => "mul_mat_vec_q4_1_q8_1_cuda", GgmlDType::Q5_0 => "mul_mat_vec_q5_0_q8_1_cuda", GgmlDType::Q5_1 => "mul_mat_vec_q5_1_q8_1_cuda", GgmlDType::Q8_0 => "mul_mat_vec_q8_0_q8_1_cuda", GgmlDType::Q2K => "mul_mat_vec_q2_K_q8_1_cuda", GgmlDType::Q3K => "mul_mat_vec_q3_K_q8_1_cuda", GgmlDType::Q4K => "mul_mat_vec_q4_K_q8_1_cuda", GgmlDType::Q5K => "mul_mat_vec_q5_K_q8_1_cuda", GgmlDType::Q6K => "mul_mat_vec_q6_K_q8_1_cuda", _ => crate::bail!("unsupported dtype for quantized matmul {dtype:?}"), }; let kernel_name = format!("{kernel_name}{b_size}"); let func = dev.get_or_load_func(&kernel_name, &candle_kernels::QUANTIZED)?; let dst = unsafe { dev.alloc::<f32>(nrows * b_size)? }; // https://github.com/ggerganov/llama.cpp/blob/facb8b56f8fd3bb10a693bf0943ae9d69d0828ef/ggml-cuda/mmvq.cu#L98 let (nblocks, nwarps) = match b_size { 1 => (nrows as u32, 4), 2..=4 => ((nrows as u32).div_ceil(2), 4), 5..=8 => ((nrows as u32).div_ceil(2), 2), _ => crate::bail!("unexpected bsize {b_size}"), }; let cfg = cudarc::driver::LaunchConfig { grid_dim: (nblocks, 1, 1), block_dim: (WARP_SIZE as u32, nwarps, 1), shared_mem_bytes: 0, }; let mut builder = func.builder(); builder.arg(&data.inner); builder.arg(&y_q8_1); builder.arg(&dst); barg!( builder, /* ncols_x */ ncols as i32, /* nrows_x */ nrows as i32, /* nrows_y */ ncols_padded as i32, /* nrows_dst */ nrows as i32 ); unsafe { builder.launch(cfg) }.w()?; Ok(CudaStorage::wrap_cuda_slice(dst, dev.clone())) } #[allow(clippy::too_many_arguments)] fn mul_mat_via_q8_1( data: &PaddedCudaSlice, y: &CudaView<f32>, dtype: GgmlDType, x_rows: usize, x_cols: usize, y_rows: usize, y_cols: usize, dev: &CudaDevice, ) -> Result<CudaStorage> { let data_elems = data.len / dtype.type_size() * dtype.block_size(); if data_elems < x_rows * x_cols { crate::bail!("unexpected lhs size {}, {x_rows} {x_cols}", data_elems) } if y.len() != y_rows * y_cols { crate::bail!("unexpected y size {}, {y_rows} {y_cols}", y.len()) } if x_cols != y_rows { crate::bail!("unexpected x/y size {x_rows} {x_cols} {y_rows} {y_cols}") } let k = x_cols; // Start by quantizing y let k_padded = pad(k, MATRIX_ROW_PADDING); let y_size_in_bytes = k_padded * y_cols * GgmlDType::Q8_1.type_size() / GgmlDType::Q8_1.block_size(); let mut y_q8_1 = unsafe { dev.alloc::<u8>(y_size_in_bytes)? }; quantize_q8_1(y, &mut y_q8_1, k, y_cols, dev)?; let (kernel_name, mmq_x, mmq_y) = match dtype { GgmlDType::Q4_0 => ("mul_mat_q4_0", 64, 128), GgmlDType::Q4_1 => ("mul_mat_q4_1", 64, 128), GgmlDType::Q5_0 => ("mul_mat_q5_0", 128, 64), GgmlDType::Q5_1 => ("mul_mat_q5_1", 128, 64), GgmlDType::Q8_0 => ("mul_mat_q8_0", 128, 64), GgmlDType::Q2K => ("mul_mat_q2_K", 64, 128), GgmlDType::Q3K => ("mul_mat_q3_K", 128, 128), GgmlDType::Q4K => ("mul_mat_q4_K", 64, 128), GgmlDType::Q5K => ("mul_mat_q5_K", 64, 128), GgmlDType::Q6K => ("mul_mat_q6_K", 64, 64), _ => crate::bail!("unsupported dtype for quantized matmul {dtype:?}"), }; let func = dev.get_or_load_func(kernel_name, &candle_kernels::QUANTIZED)?; let dst = unsafe { dev.alloc::<f32>(x_rows * y_cols)? }; let cfg = cudarc::driver::LaunchConfig { grid_dim: ( ceil_div(x_rows, mmq_y) as u32, ceil_div(y_cols, mmq_x) as u32, 1, ), block_dim: (WARP_SIZE as u32, 4, 1), shared_mem_bytes: 0, }; let mut builder = func.builder(); builder.arg(/* vx */ &data.inner); builder.arg(/* vy */ &y_q8_1); builder.arg(/* dst */ &dst); barg!( builder, /* ncols_x */ x_cols as i32, /* nrows_x */ x_rows as i32, /* ncols_y */ y_cols as i32, /* nrows_y */ k_padded as i32, /* nrows_dst */ x_rows as i32 ); unsafe { builder.launch(cfg) }.w()?; Ok(CudaStorage::wrap_cuda_slice(dst, dev.clone())) } impl QCudaStorage { pub fn zeros(device: &CudaDevice, el_count: usize, dtype: GgmlDType) -> Result<Self> { let size_in_bytes = ceil_div(el_count, dtype.block_size()) * dtype.type_size(); let padded_size_in_bytes = ceil_div(el_count + MATRIX_ROW_PADDING, dtype.block_size()) * dtype.type_size(); let inner = device.alloc_zeros::<u8>(padded_size_in_bytes)?; Ok(QCudaStorage { data: PaddedCudaSlice { inner, len: size_in_bytes, }, device: device.clone(), dtype, }) } pub fn dtype(&self) -> GgmlDType { self.dtype } pub fn device(&self) -> &CudaDevice { &self.device } pub fn dequantize(&self, elem_count: usize) -> Result<CudaStorage> { fn deq<T: GgmlType>(buffer: &[u8], n: usize, dst: &mut [f32]) -> Result<()> { let slice = unsafe { std::slice::from_raw_parts(buffer.as_ptr() as *const T, n) }; let vec = slice.to_vec(); T::to_float(&vec, dst) } let fast_kernel = matches!( self.dtype, GgmlDType::Q4_0 | GgmlDType::Q4_1 | GgmlDType::Q5_0 | GgmlDType::Q5_1 | GgmlDType::Q8_0 | GgmlDType::Q2K | GgmlDType::Q3K | GgmlDType::Q4K | GgmlDType::Q5K | GgmlDType::Q6K | GgmlDType::Q8K ); if fast_kernel { return dequantize_f32(&self.data, self.dtype, elem_count, self.device()); } // Run the dequantization on cpu. let buffer = self .device .memcpy_dtov(&self.data.inner.slice(..self.data.len))?; let mut out = vec![0.0; elem_count]; let block_len = elem_count / self.dtype.block_size(); match self.dtype { GgmlDType::F32 => deq::<f32>(&buffer, block_len, &mut out)?, GgmlDType::F16 => deq::<half::f16>(&buffer, block_len, &mut out)?, GgmlDType::BF16 => deq::<half::bf16>(&buffer, block_len, &mut out)?, GgmlDType::Q4_0 => deq::<crate::quantized::BlockQ4_0>(&buffer, block_len, &mut out)?, GgmlDType::Q4_1 => deq::<crate::quantized::BlockQ4_1>(&buffer, block_len, &mut out)?, GgmlDType::Q5_0 => deq::<crate::quantized::BlockQ5_0>(&buffer, block_len, &mut out)?, GgmlDType::Q5_1 => deq::<crate::quantized::BlockQ5_1>(&buffer, block_len, &mut out)?, GgmlDType::Q8_0 => deq::<crate::quantized::BlockQ8_0>(&buffer, block_len, &mut out)?, GgmlDType::Q8_1 => deq::<crate::quantized::BlockQ8_1>(&buffer, block_len, &mut out)?, GgmlDType::Q2K => deq::<crate::quantized::BlockQ2K>(&buffer, block_len, &mut out)?, GgmlDType::Q3K => deq::<crate::quantized::BlockQ3K>(&buffer, block_len, &mut out)?, GgmlDType::Q4K => deq::<crate::quantized::BlockQ4K>(&buffer, block_len, &mut out)?, GgmlDType::Q5K => deq::<crate::quantized::BlockQ5K>(&buffer, block_len, &mut out)?, GgmlDType::Q6K => deq::<crate::quantized::BlockQ6K>(&buffer, block_len, &mut out)?, GgmlDType::Q8K => deq::<crate::quantized::BlockQ8K>(&buffer, block_len, &mut out)?, } self.device .storage_from_cpu_storage(&crate::CpuStorage::F32(out)) } pub fn dequantize_f16(&self, elem_count: usize) -> Result<CudaStorage> { dequantize_f16(&self.data, self.dtype, elem_count, self.device()) } pub fn quantize(&mut self, src: &CudaStorage) -> Result<()> { // Run the quantization on cpu. let src = match &src.slice { crate::cuda_backend::CudaStorageSlice::F32(data) => self.device.memcpy_dtov(data)?, _ => crate::bail!("only f32 can be quantized"), }; let src_len = src.len(); let src = crate::Storage::Cpu(crate::CpuStorage::F32(src)); let mut qcpu_storage = crate::Device::Cpu.qzeros(src_len, self.dtype)?; qcpu_storage.quantize(&src)?; let data = qcpu_storage.data()?; let padded_len = data.len() + MATRIX_ROW_PADDING * self.dtype.type_size() / self.dtype.block_size(); let mut inner = unsafe { self.device.alloc::<u8>(padded_len)? }; self.device .memcpy_htod(data.as_ref(), &mut inner.slice_mut(..data.len()))?; self.data = PaddedCudaSlice { inner, len: data.len(), }; Ok(()) } pub fn storage_size_in_bytes(&self) -> usize { self.data.len } pub fn fwd( &self, self_shape: &crate::Shape, storage: &CudaStorage, layout: &crate::Layout, ) -> Result<(CudaStorage, crate::Shape)> { let max_bm = if FORCE_DMMV.load(std::sync::atomic::Ordering::Relaxed) { 1 } else { 8 }; let use_vec_kernel = match layout.shape().dims() { [b, m, _k] => b * m <= max_bm, [b, _k] => *b <= max_bm, _ => false, }; if use_vec_kernel { self.dequantize_matmul_vec(self_shape, storage, layout) } else { self.dequantize_matmul(self_shape, storage, layout) } } } impl QCudaStorage { fn dequantize_matmul_vec( &self, self_shape: &crate::Shape, rhs: &CudaStorage, rhs_l: &crate::Layout, ) -> Result<(CudaStorage, crate::Shape)> { let (nrows, ncols) = self_shape.dims2()?; let rhs = rhs.as_cuda_slice::<f32>()?; let rhs = match rhs_l.contiguous_offsets() { Some((o1, o2)) => rhs.slice(o1..o2), None => Err(crate::Error::RequiresContiguous { op: "dmmv" }.bt())?, }; let (b_size, k) = match rhs_l.shape().dims() { [b, m, k] => (b * m, *k), [b, k] => (*b, *k), _ => crate::bail!("unexpected rhs shape in dmmv {:?}", rhs_l.shape()), }; if ncols != k { crate::bail!("mismatch on matmul dim {self_shape:?} {:?}", rhs_l.shape()) } let out = if FORCE_DMMV.load(std::sync::atomic::Ordering::Relaxed) { dequantize_mul_mat_vec(&self.data, &rhs, self.dtype, ncols, nrows, self.device())? } else { mul_mat_vec_via_q8_1( &self.data, &rhs, self.dtype, ncols, nrows, b_size, self.device(), )? }; let mut out_shape = rhs_l.shape().dims().to_vec(); out_shape.pop(); out_shape.push(nrows); Ok((out, out_shape.into())) } fn dequantize_matmul( &self, self_shape: &crate::Shape, storage: &CudaStorage, layout: &crate::Layout, ) -> Result<(CudaStorage, crate::Shape)> { use crate::backend::BackendStorage; let (n, k) = self_shape.dims2()?; let (b, m, k2) = match layout.shape().dims() { &[b, m, k2] => (b, m, k2), &[m, k2] => (1, m, k2), s => crate::bail!("unexpected shape for input {s:?}"), }; if k2 != k { crate::bail!("mismatch on matmul dim {self_shape:?} {:?}", layout.shape()) } let out = if FORCE_DMMV.load(std::sync::atomic::Ordering::Relaxed) { let data_f32 = self.dequantize(n * k)?; let rhs_l = crate::Layout::new((k, n).into(), vec![1, k], 0).broadcast_as((b, k, n))?; storage.matmul(&data_f32, (b, m, n, k), layout, &rhs_l)? } else { let storage = storage.as_cuda_slice::<f32>()?; let storage = match layout.contiguous_offsets() { Some((o1, o2)) => storage.slice(o1..o2), None => Err(crate::Error::RequiresContiguous { op: "quantized-matmul", } .bt())?, }; mul_mat_via_q8_1( &self.data, &storage, self.dtype, /* x_rows */ n, /* x_cols */ k, /* y_rows */ k, /* y_cols */ b * m, self.device(), )? }; let mut out_shape = layout.shape().dims().to_vec(); out_shape.pop(); out_shape.push(n); Ok((out, out_shape.into())) } } pub fn load_quantized<T: super::GgmlType + Send + Sync + 'static>( device: &CudaDevice, data: &[T], ) -> Result<super::QStorage> { let data = unsafe { std::slice::from_raw_parts(data.as_ptr() as *const u8, core::mem::size_of_val(data)) }; let dtype = T::DTYPE; let padded_len = data.len() + MATRIX_ROW_PADDING * dtype.type_size() / dtype.block_size(); let mut inner = unsafe { device.alloc::<u8>(padded_len)? }; device.memcpy_htod(data, &mut inner.slice_mut(..data.len()))?; Ok(QStorage::Cuda(QCudaStorage { data: PaddedCudaSlice { inner, len: data.len(), }, device: device.clone(), dtype, })) } #[cfg(test)] mod test { use super::*; #[test] fn cuda_quantize_q8_1() -> Result<()> { let dev = CudaDevice::new(0)?; let el = 256; let el_padded = pad(el, MATRIX_ROW_PADDING); let y_size_in_bytes = el_padded * GgmlDType::Q8_1.type_size() / GgmlDType::Q8_1.block_size(); let mut y_q8_1 = unsafe { dev.alloc::<u8>(y_size_in_bytes)? }; let vs: Vec<f32> = (0..el).map(|v| v as f32).collect(); let y = dev.memcpy_stod(&vs)?; quantize_q8_1(&y.slice(..), &mut y_q8_1, el, 1, &dev)?; Ok(()) } #[test] fn cuda_mmv_q8_1() -> Result<()> { let dev = CudaDevice::new(0)?; let ncols = 256; let vs: Vec<f32> = (0..ncols).map(|v| v as f32).collect(); let y = dev.memcpy_stod(&vs)?; let mut xs = QCudaStorage::zeros(&dev, ncols, GgmlDType::Q4_0)?; xs.quantize(&CudaStorage::wrap_cuda_slice(y.clone(), dev.clone()))?; let cuda_storage = mul_mat_vec_via_q8_1( &xs.data, &y.slice(..), /* dtype */ GgmlDType::Q4_0, /* ncols */ ncols, /* nrows */ 1, /* b_size */ 1, &dev, )?; let vs = cuda_storage.as_cuda_slice::<f32>()?; let vs = dev.memcpy_dtov(&vs.slice(..))?; assert_eq!(vs.len(), 1); // for n = 255, n.(n+1).(2n+1) / 6 = 5559680 // Q8 means 1/256 precision. assert_eq!(vs[0], 5561664.5); let cuda_storage = dequantize_mul_mat_vec( &xs.data, &y.slice(..), /* dtype */ GgmlDType::Q4_0, /* ncols */ ncols, /* nrows */ 1, &dev, )?; let vs = cuda_storage.as_cuda_slice::<f32>()?; let vs = dev.memcpy_dtov(&vs.slice(..))?; assert_eq!(vs.len(), 1); assert_eq!(vs[0], 5561851.0); Ok(()) } #[test] fn cuda_mm_q8_1() -> Result<()> { let dev = CudaDevice::new(0)?; let ncols = 256; let vs: Vec<f32> = (0..ncols * 4).map(|v| v as f32 / 4.).collect(); let y = dev.memcpy_stod(&vs)?; let mut xs = QCudaStorage::zeros(&dev, ncols * 4, GgmlDType::Q4_0)?; xs.quantize(&CudaStorage::wrap_cuda_slice(y.clone(), dev.clone()))?; let cuda_storage = mul_mat_via_q8_1( &xs.data, &y.slice(..), /* dtype */ GgmlDType::Q4_0, /* x_rows */ 4, /* x_cols */ ncols, /* y_rows */ ncols, /* y_cols */ 4, &dev, )?; let vs = cuda_storage.as_cuda_slice::<f32>()?; let vs = dev.memcpy_dtov(&vs.slice(..))?; /* x = torch.tensor([float(v) for v in range(1024)]).reshape(4, 256) x @ x.t() / 16 tensor([[ 347480.0000, 869720.0000, 1391960.0000, 1914200.0000], [ 869720.0000, 2440536.0000, 4011352.0000, 5582166.5000], [ 1391960.0000, 4011352.0000, 6630742.0000, 9250132.0000], [ 1914200.0000, 5582166.5000, 9250132.0000, 12918099.0000]]) */ assert_eq!(vs.len(), 16); assert_eq!(vs[0], 347604.0); assert_eq!(vs[1], 888153.06); assert_eq!(vs[4], 869780.7); assert_eq!(vs[5], 2483145.0); assert_eq!(vs[11], 9407368.0); assert_eq!(vs[14], 9470856.0); assert_eq!(vs[15], 13138824.0); Ok(()) } // The following test used to fail under compute-sanitizer until #2526. #[test] fn cuda_mm_q8_1_pad() -> Result<()> { let dev = CudaDevice::new(0)?; let (x_rows, ncols, y_cols) = (4, 16, 2048); let vs: Vec<f32> = (0..ncols * y_cols).map(|v| v as f32 / 256.).collect(); let y = dev.memcpy_stod(&vs)?; let mut xs = QCudaStorage::zeros(&dev, ncols * x_rows, GgmlDType::Q4_0)?; xs.quantize(&CudaStorage::wrap_cuda_slice(y.clone(), dev.clone()))?; let cuda_storage = mul_mat_via_q8_1( &xs.data, &y.slice(..), /* dtype */ GgmlDType::Q4_0, /* x_rows */ x_rows, /* x_cols */ ncols, /* y_rows */ ncols, /* y_cols */ y_cols, &dev, )?; let vs = cuda_storage.as_cuda_slice::<f32>()?; let _vs = dev.memcpy_dtov(&vs.slice(..))?; Ok(()) } }
candle/candle-core/src/quantized/cuda.rs/0
{ "file_path": "candle/candle-core/src/quantized/cuda.rs", "repo_id": "candle", "token_count": 14859 }
27
//! StreamTensror useful for streaming ops. //! use crate::{Result, Shape, Tensor}; pub trait Dim: crate::shape::Dim + Copy {} impl<T: crate::shape::Dim + Copy> Dim for T {} /// A stream tensor is used in streaming module. It can either contain an actual tensor or be /// empty. #[derive(Clone)] pub struct StreamTensor(Option<Tensor>); impl std::fmt::Debug for StreamTensor { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { match &self.0 { Some(t) => write!(f, "{:?}", t.shape()), None => write!(f, "Empty"), } } } impl std::convert::From<Option<Tensor>> for StreamTensor { fn from(value: Option<Tensor>) -> Self { Self(value) } } impl std::convert::From<Tensor> for StreamTensor { fn from(value: Tensor) -> Self { Self(Some(value)) } } impl std::convert::From<()> for StreamTensor { fn from(_value: ()) -> Self { Self(None) } } impl StreamTensor { pub fn empty() -> Self { Self(None) } pub fn from_tensor(tensor: Tensor) -> Self { Self(Some(tensor)) } pub fn shape(&self) -> Option<&Shape> { self.0.as_ref().map(|t| t.shape()) } pub fn cat2<D: Dim>(&self, rhs: &Self, dim: D) -> Result<Self> { let xs = match (&self.0, &rhs.0) { (Some(lhs), Some(rhs)) => { let xs = Tensor::cat(&[lhs, rhs], dim)?; Some(xs) } (Some(xs), None) | (None, Some(xs)) => Some(xs.clone()), (None, None) => None, }; Ok(Self(xs)) } pub fn seq_len<D: Dim>(&self, dim: D) -> Result<usize> { match &self.0 { None => Ok(0), Some(v) => v.dim(dim), } } pub fn reset(&mut self) { self.0 = None } pub fn narrow<D: Dim>(&self, dim: D, offset: usize, len: usize) -> Result<StreamTensor> { let t = match &self.0 { None => None, Some(t) => { let seq_len = t.dim(dim)?; if seq_len <= offset { None } else { let t = t.narrow(dim, offset, usize::min(len, seq_len - offset))?; Some(t) } } }; Ok(Self(t)) } /// Splits the Streaming Tensor on the time axis `dim` with the first `lhs_len` elements /// returned in the first output and the remaining in the second output. pub fn split<D: Dim>(&self, dim: D, lhs_len: usize) -> Result<(Self, Self)> { match &self.0 { None => Ok((Self::empty(), Self::empty())), Some(t) => { let seq_len = t.dim(dim)?; let lhs_len = usize::min(seq_len, lhs_len); if lhs_len == 0 { Ok((Self::empty(), t.clone().into())) } else { let lhs = Self::from_tensor(t.narrow(dim, 0, lhs_len)?); let rhs_len = seq_len - lhs_len; let rhs = if rhs_len == 0 { Self::empty() } else { Self::from_tensor(t.narrow(dim, lhs_len, rhs_len)?) }; Ok((lhs, rhs)) } } } } pub fn as_option(&self) -> Option<&Tensor> { self.0.as_ref() } pub fn apply<M: crate::Module>(&self, m: &M) -> Result<Self> { match &self.0 { None => Ok(Self::empty()), Some(t) => Ok(Self::from_tensor(t.apply(m)?)), } } } /// Streaming modules take as input a stream tensor and return a stream tensor. They may perform /// some internal buffering so that enough data has been received for the module to be able to /// perform some operations. pub trait StreamingModule { // TODO: Should we also have a flush method? fn step(&mut self, xs: &StreamTensor) -> Result<StreamTensor>; fn reset_state(&mut self); } #[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)] pub enum BinOp { Add, Mul, Sub, Div, } #[derive(Debug, Clone)] pub struct StreamingBinOp { prev_lhs: StreamTensor, prev_rhs: StreamTensor, pub op: BinOp, pub dim: crate::D, } impl StreamingBinOp { pub fn new(op: BinOp, dim: crate::D) -> Self { Self { prev_lhs: StreamTensor::empty(), prev_rhs: StreamTensor::empty(), op, dim, } } pub fn reset_state(&mut self) { self.prev_lhs.reset(); self.prev_rhs.reset(); } pub fn forward(&self, lhs: &Tensor, rhs: &Tensor) -> Result<Tensor> { match self.op { BinOp::Add => Tensor::add(lhs, rhs), BinOp::Mul => Tensor::mul(lhs, rhs), BinOp::Sub => Tensor::sub(lhs, rhs), BinOp::Div => Tensor::div(lhs, rhs), } } pub fn step(&mut self, lhs: &StreamTensor, rhs: &StreamTensor) -> Result<StreamTensor> { let lhs = StreamTensor::cat2(&self.prev_lhs, lhs, self.dim)?; let rhs = StreamTensor::cat2(&self.prev_rhs, rhs, self.dim)?; let lhs_len = lhs.seq_len(self.dim)?; let rhs_len = rhs.seq_len(self.dim)?; let common_len = usize::min(lhs_len, rhs_len); let (lhs, prev_lhs) = lhs.split(self.dim, common_len)?; let (rhs, prev_rhs) = rhs.split(self.dim, common_len)?; let ys = match (lhs.0, rhs.0) { (Some(lhs), Some(rhs)) => { let ys = self.forward(&lhs, &rhs)?; StreamTensor::from_tensor(ys) } (None, None) => StreamTensor::empty(), (lhs, rhs) => crate::bail!("INTERNAL ERROR inconsistent lhs and rhs {lhs:?} {rhs:?}"), }; self.prev_lhs = prev_lhs; self.prev_rhs = prev_rhs; Ok(ys) } } /// Simple wrapper that doesn't do any buffering. pub struct Map<T: crate::Module>(T); impl<T: crate::Module> StreamingModule for Map<T> { fn reset_state(&mut self) {} fn step(&mut self, xs: &StreamTensor) -> Result<StreamTensor> { xs.apply(&self.0) } }
candle/candle-core/src/streaming.rs/0
{ "file_path": "candle/candle-core/src/streaming.rs", "repo_id": "candle", "token_count": 3130 }
28
use candle_core::{test_device, test_utils, Device, IndexOp, Result, Tensor}; // https://github.com/huggingface/candle/issues/364 fn avg_pool2d(dev: &Device) -> Result<()> { let data: Vec<f32> = vec![ 1., 1., 1., 1., 0., 0., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., ]; let t = Tensor::from_vec(data, (1, 1, 4, 4), dev)?; let pool = t.avg_pool2d(2)?.squeeze(0)?.squeeze(0)?; assert_eq!(pool.to_vec2::<f32>()?, [[0.5f32, 1.], [1., 1.]]); let data: Vec<f32> = vec![ 1., 2., 1., 3., 0., 0., 1., 1., 1., 1., 1., 1., 5., 1., 1., 1., ]; let t = Tensor::from_vec(data, (1, 1, 2, 8), dev)?; let pool = t.avg_pool2d(2)?.squeeze(0)?.squeeze(0)?; assert_eq!(pool.to_vec2::<f32>()?, [[5. / 4., 6. / 4., 6. / 4., 1.]]); Ok(()) } fn max_pool2d(dev: &Device) -> Result<()> { let data: Vec<f32> = vec![ 1., 2., 1., 3., 0., 0., 1., 1., 1., 1., 1., 1., 5., 1., 1., 1., ]; let t = Tensor::from_vec(data, (1, 1, 4, 4), dev)?; let pool = t.max_pool2d(2)?.squeeze(0)?.squeeze(0)?; assert_eq!(pool.to_vec2::<f32>()?, [[2f32, 3.], [5., 1.]]); let t = t.reshape((1, 1, 2, 8))?; let pool = t.max_pool2d(2)?.squeeze(0)?.squeeze(0)?; assert_eq!(pool.to_vec2::<f32>()?, [[2.0, 3.0, 5.0, 1.0]]); Ok(()) } /* This test corresponds to the following PyTorch script. import torch torch.manual_seed(4242) t = torch.randn((1, 2, 4, 4)) print(t.flatten()) res = torch.nn.functional.avg_pool2d(t, 2) print(res) */ fn avg_pool2d_pytorch(dev: &Device) -> Result<()> { if dev.is_metal() { return Ok(()); } let t = Tensor::new( &[ 0.4056f32, -0.8689, -0.0773, -1.5630, -2.8012, -1.5059, 0.3972, 1.0852, 0.4997, 3.0616, 1.6541, 0.0964, -0.8338, -1.6523, -0.8323, -0.1699, 0.0823, 0.3526, 0.6843, 0.2395, 1.2279, -0.9287, -1.7030, 0.1370, 0.6047, 0.3770, -0.6266, 0.3529, 2.2013, -0.6836, 0.2477, 1.3127, ], dev, )? .reshape((1, 2, 4, 4))?; let pool = t.avg_pool2d(2)?.squeeze(0)?; assert_eq!( test_utils::to_vec3_round(&pool, 4)?, [ [[-1.1926, -0.0395], [0.2688, 0.1871]], [[0.1835, -0.1606], [0.6249, 0.3217]] ] ); let pool = t.avg_pool2d(3)?.squeeze(0)?; assert_eq!( test_utils::to_vec3_round(&pool, 4)?, [[[0.085]], [[0.0078]]] ); let t = t.reshape((1, 1, 4, 8))?; let pool = t.avg_pool2d(2)?.squeeze(0)?.squeeze(0)?; assert_eq!( test_utils::to_vec2_round(&pool, 4)?, [ [0.7745, 0.0276, -1.6983, 0.12], [0.3542, 0.1625, 0.4542, -0.0014] ] ); Ok(()) } fn upsample_nearest2d(dev: &Device) -> Result<()> { let t = Tensor::arange(0f32, 6f32, dev)?.reshape((1, 1, 2, 3))?; let upsampled = t.upsample_nearest2d(4, 6)?.i(0)?.i(0)?; assert_eq!( t.i(0)?.i(0)?.to_vec2::<f32>()?, [[0.0, 1.0, 2.0], [3.0, 4.0, 5.0]] ); assert_eq!( upsampled.to_vec2::<f32>()?, [ [0.0, 0.0, 1.0, 1.0, 2.0, 2.0], [0.0, 0.0, 1.0, 1.0, 2.0, 2.0], [3.0, 3.0, 4.0, 4.0, 5.0, 5.0], [3.0, 3.0, 4.0, 4.0, 5.0, 5.0] ] ); Ok(()) } test_device!(avg_pool2d, avg_pool2d_cpu, avg_pool2d_gpu, avg_pool2d_metal); test_device!( avg_pool2d_pytorch, avg_pool2d_pytorch_cpu, avg_pool2d_pytorch_gpu, avg_pool2d_pytorch_metal ); test_device!(max_pool2d, max_pool2d_cpu, max_pool2d_gpu, max_pool2d_metal); test_device!( upsample_nearest2d, upsample_nearest2d_cpu, upsample_nearest2d_gpu, upsample_nearest2d_metal );
candle/candle-core/tests/pool_tests.rs/0
{ "file_path": "candle/candle-core/tests/pool_tests.rs", "repo_id": "candle", "token_count": 2112 }
29
//! Helper functions for the tinystories dataset. This uses the pre-tokenized version as generated //! by the tools from https://github.com/karpathy/llama2.c use candle::{Device, Result, Tensor}; pub struct Dataset { valid_tokens: Vec<memmap2::Mmap>, train_tokens: Vec<memmap2::Mmap>, } fn mmap_file(p: &std::path::PathBuf) -> Result<memmap2::Mmap> { let file = std::fs::File::open(p)?; let mmap = unsafe { memmap2::MmapOptions::new().map(&file)? }; Ok(mmap) } impl Dataset { pub fn new<P: AsRef<std::path::Path>>(dir: P) -> Result<Self> { let dir = dir.as_ref(); let mut bin_files = vec![]; for file in std::fs::read_dir(dir)?.flatten() { let file = file.path(); if let Some(extension) = file.extension() { if extension == "bin" { bin_files.push(file) } } } if bin_files.len() < 2 { candle::bail!("found less than two bin files in {:?}", dir) } bin_files.sort(); let valid_tokens = mmap_file(&bin_files[0])?; let train_tokens = bin_files[1..] .iter() .map(mmap_file) .collect::<Result<Vec<_>>>()?; Ok(Self { valid_tokens: vec![valid_tokens], train_tokens, }) } pub fn train_tokens(&self) -> usize { self.train_tokens.len() } pub fn valid_tokens(&self) -> usize { self.valid_tokens.len() } } pub struct DatasetRandomIter<'a> { all_tokens: &'a [memmap2::Mmap], tokens: Vec<&'a memmap2::Mmap>, current_tokens: &'a memmap2::Mmap, indexes_in_bytes: Vec<usize>, seq_len: usize, device: Device, } impl<'a> DatasetRandomIter<'a> { pub fn new(ds: &'a Dataset, valid: bool, seq_len: usize, device: Device) -> Self { use rand::rng; use rand::seq::SliceRandom; let all_tokens = if valid { &ds.valid_tokens } else { &ds.train_tokens }; let mut tokens = all_tokens.iter().collect::<Vec<_>>(); tokens.shuffle(&mut rng()); let current_tokens = tokens.pop().unwrap(); let seq_len_in_bytes = seq_len * 2; let mut indexes_in_bytes = (0..current_tokens.len() - seq_len_in_bytes) .step_by(seq_len_in_bytes) .collect::<Vec<_>>(); indexes_in_bytes.shuffle(&mut rng()); Self { all_tokens, tokens, current_tokens, indexes_in_bytes, seq_len, device, } } } impl Iterator for DatasetRandomIter<'_> { type Item = Result<(Tensor, Tensor)>; fn next(&mut self) -> Option<Self::Item> { use byteorder::{LittleEndian, ReadBytesExt}; use rand::rng; use rand::seq::SliceRandom; let seq_len = self.seq_len; if self.indexes_in_bytes.is_empty() { if self.tokens.is_empty() { self.tokens = self.all_tokens.iter().collect(); self.tokens.shuffle(&mut rng()); } self.current_tokens = self.tokens.pop().unwrap(); let seq_len_in_bytes = self.seq_len * 2; self.indexes_in_bytes = (0..self.current_tokens.len() - seq_len_in_bytes) .step_by(seq_len_in_bytes) .collect::<Vec<_>>(); self.indexes_in_bytes.shuffle(&mut rng()); } let start_idx = self.indexes_in_bytes.pop().unwrap(); let bytes = &self.current_tokens[start_idx..start_idx + 2 * (seq_len + 1)]; let mut tokens = vec![0u16; bytes.len() / 2]; if let Err(err) = std::io::Cursor::new(bytes).read_u16_into::<LittleEndian>(&mut tokens) { return Some(Err(err.into())); } let tokens = tokens.into_iter().map(|v| v as u32).collect::<Vec<_>>(); let inputs = Tensor::new(&tokens[..seq_len], &self.device); let targets = Tensor::new(&tokens[1..], &self.device); Some(candle::error::zip(inputs, targets)) } }
candle/candle-datasets/src/nlp/tinystories.rs/0
{ "file_path": "candle/candle-datasets/src/nlp/tinystories.rs", "repo_id": "candle", "token_count": 2080 }
30
# candle-blip The [blip-image-captioning](https://huggingface.co/Salesforce/blip-image-captioning-base) model can generate captions for an input image. ## Running on an example ```bash cargo run --example blip --release -- --image candle-examples/examples/yolo-v8/assets/bike.jpg ``` ``` Running on CPU, to run on GPU, build this example with `--features cuda` loaded image Tensor[dims 3, 384, 384; f32] model built several cyclists are riding down a road with cars behind them% ``` ![Leading group, Giro d'Italia 2021](../yolo-v8/assets/bike.jpg)
candle/candle-examples/examples/blip/README.md/0
{ "file_path": "candle/candle-examples/examples/blip/README.md", "repo_id": "candle", "token_count": 190 }
31
# Conversational Speech Model (CSM) CSM is a speech generation model from Sesame, [SesameAILabs/csm](https://github.com/SesameAILabs/csm). It can generate a conversational speech between two different speakers. The speakers turn are delimited by the `|` character in the prompt. ```bash cargo run --example csm --features cuda -r -- \ --voices candle-examples/examples/csm/voices.safetensors \ --prompt "Hey how are you doing?|Pretty good, pretty good. How about you?" ```
candle/candle-examples/examples/csm/README.md/0
{ "file_path": "candle/candle-examples/examples/csm/README.md", "repo_id": "candle", "token_count": 156 }
32
// TODO: Add an offline mode. #[cfg(feature = "accelerate")] extern crate accelerate_src; #[cfg(feature = "mkl")] extern crate intel_mkl_src; use anyhow::{Error as E, Result}; use candle::{DType, Device, Tensor}; use candle_nn::VarBuilder; use candle_transformers::generation::LogitsProcessor; use clap::Parser; use hf_hub::{api::sync::Api, Repo, RepoType}; use tokenizers::Tokenizer; use candle_transformers::models::falcon::{Config, Falcon}; struct TextGeneration { model: Falcon, device: Device, tokenizer: Tokenizer, logits_processor: LogitsProcessor, repeat_penalty: f32, repeat_last_n: usize, } struct GenerationOptions { temp: Option<f64>, top_p: Option<f64>, repeat_penalty: f32, repeat_last_n: usize, } impl TextGeneration { fn new( model: Falcon, tokenizer: Tokenizer, generation_options: GenerationOptions, seed: u64, device: &Device, ) -> Self { let logits_processor = LogitsProcessor::new(seed, generation_options.temp, generation_options.top_p); let repeat_penalty = generation_options.repeat_penalty; let repeat_last_n = generation_options.repeat_last_n; Self { model, tokenizer, logits_processor, device: device.clone(), repeat_penalty, repeat_last_n, } } fn run(&mut self, prompt: &str, sample_len: usize) -> Result<()> { println!("starting the inference loop"); let mut tokens = self .tokenizer .encode(prompt, true) .map_err(E::msg)? .get_ids() .to_vec(); let mut new_tokens = vec![]; let start_gen = std::time::Instant::now(); for index in 0..sample_len { let start_gen = std::time::Instant::now(); let context_size = if self.model.config().use_cache && index > 0 { 1 } else { tokens.len() }; let ctxt = &tokens[tokens.len().saturating_sub(context_size)..]; let input = Tensor::new(ctxt, &self.device)?.unsqueeze(0)?; let logits = self.model.forward(&input)?; let logits = logits.squeeze(0)?.to_dtype(DType::F32)?; let logits = if self.repeat_penalty == 1. { logits } else { let start_at = tokens.len().saturating_sub(self.repeat_last_n); candle_transformers::utils::apply_repeat_penalty( &logits, self.repeat_penalty, &tokens[start_at..], )? }; let next_token = self.logits_processor.sample(&logits)?; tokens.push(next_token); new_tokens.push(next_token); println!("> {:?}", start_gen.elapsed()); println!( "{} token: {} '{}'", index + 1, next_token, self.tokenizer.decode(&[next_token], true).map_err(E::msg)? ); } let dt = start_gen.elapsed(); println!( "{sample_len} tokens generated ({} token/s)\n----\n{}\n----", sample_len as f64 / dt.as_secs_f64(), self.tokenizer.decode(&new_tokens, true).map_err(E::msg)? ); Ok(()) } } #[derive(Parser, Debug)] #[command(author, version, about, long_about = None)] struct Args { /// Run on CPU rather than on GPU. #[arg(long)] cpu: bool, #[arg(long)] prompt: String, /// Use f32 computations rather than bf16. #[arg(long)] use_f32: bool, /// The temperature used to generate samples. #[arg(long)] temperature: Option<f64>, /// Nucleus sampling probability cutoff. #[arg(long)] top_p: Option<f64>, /// The seed to use when generating random samples. #[arg(long, default_value_t = 299792458)] seed: u64, /// The length of the sample to generate (in tokens). #[arg(long, default_value_t = 100)] sample_len: usize, #[arg(long, default_value = "tiiuae/falcon-7b")] model_id: String, #[arg(long, default_value = "refs/pr/43")] revision: String, /// Penalty to be applied for repeating tokens, 1. means no penalty. #[arg(long, default_value_t = 1.0)] repeat_penalty: f32, /// The context size to consider for the repeat penalty. #[arg(long, default_value_t = 64)] repeat_last_n: usize, } fn main() -> Result<()> { let args = Args::parse(); let device = candle_examples::device(args.cpu)?; let start = std::time::Instant::now(); let api = Api::new()?; let repo = api.repo(Repo::with_revision( args.model_id, RepoType::Model, args.revision, )); let tokenizer_filename = repo.get("tokenizer.json")?; let filenames = candle_examples::hub_load_safetensors(&repo, "model.safetensors.index.json")?; println!("retrieved the files in {:?}", start.elapsed()); let tokenizer = Tokenizer::from_file(tokenizer_filename).map_err(E::msg)?; let start = std::time::Instant::now(); let dtype = if args.use_f32 { DType::F32 } else { DType::BF16 }; let vb = unsafe { VarBuilder::from_mmaped_safetensors(&filenames, dtype, &device)? }; let config = Config::falcon7b(); config.validate()?; let model = Falcon::load(vb, config)?; println!("loaded the model in {:?}", start.elapsed()); let generation_options = GenerationOptions { temp: args.temperature, top_p: args.top_p, repeat_penalty: args.repeat_penalty, repeat_last_n: args.repeat_last_n, }; let mut pipeline = TextGeneration::new(model, tokenizer, generation_options, args.seed, &device); pipeline.run(&args.prompt, args.sample_len)?; Ok(()) }
candle/candle-examples/examples/falcon/main.rs/0
{ "file_path": "candle/candle-examples/examples/falcon/main.rs", "repo_id": "candle", "token_count": 2723 }
33
#[cfg(feature = "mkl")] extern crate intel_mkl_src; #[cfg(feature = "accelerate")] extern crate accelerate_src; use anyhow::{Error as E, Result}; use clap::Parser; use candle_transformers::models::helium::{Config as ConfigPreview, Model as ModelPreview}; use candle_transformers::models::llama::{ Cache as CacheV1, Llama as ModelV1, LlamaConfig as ConfigV1, LlamaEosToks, }; use candle::{DType, Device, Tensor}; use candle_examples::token_output_stream::TokenOutputStream; use candle_nn::VarBuilder; use candle_transformers::generation::{LogitsProcessor, Sampling}; use hf_hub::{api::sync::Api, Repo, RepoType}; use tokenizers::Tokenizer; #[derive(Debug, Clone)] enum Model { V1 { model: ModelV1, cache: CacheV1 }, Preview(ModelPreview), } impl Model { fn forward(&mut self, input: &Tensor, start_pos: usize) -> Result<Tensor> { let model = match self { Model::V1 { model, cache } => model.forward(input, start_pos, cache)?, Model::Preview(m) => m.forward(input, start_pos)?, }; Ok(model) } } #[derive(Debug, Clone)] enum Config { V1(ConfigV1), Preview(ConfigPreview), } impl Config { fn bos_token_id(&self) -> Option<u32> { match self { Config::V1(c) => c.bos_token_id, Config::Preview(c) => Some(c.bos_token_id), } } fn eos_token_id(&self) -> Option<LlamaEosToks> { match self { Config::V1(c) => c.eos_token_id.clone(), Config::Preview(c) => Some(LlamaEosToks::Single(c.eos_token_id)), } } } struct TextGeneration { model: Model, device: Device, tokenizer: TokenOutputStream, logits_processor: LogitsProcessor, repeat_penalty: f32, repeat_last_n: usize, config: Config, } impl TextGeneration { #[allow(clippy::too_many_arguments)] fn new( model: Model, tokenizer: Tokenizer, seed: u64, temp: Option<f64>, top_p: Option<f64>, top_k: Option<usize>, repeat_penalty: f32, repeat_last_n: usize, config: Config, device: &Device, ) -> Self { let logits_processor = { let temperature = temp.unwrap_or(0.); let sampling = if temperature <= 0. { Sampling::ArgMax } else { match (top_k, top_p) { (None, None) => Sampling::GumbelSoftmax { temperature }, (Some(k), None) => Sampling::TopK { k, temperature }, (None, Some(p)) => Sampling::TopP { p, temperature }, (Some(k), Some(p)) => Sampling::TopKThenTopP { k, p, temperature }, } }; LogitsProcessor::from_sampling(seed, sampling) }; Self { model, tokenizer: TokenOutputStream::new(tokenizer), logits_processor, repeat_penalty, repeat_last_n, device: device.clone(), config, } } fn run(&mut self, prompt: &str, sample_len: usize) -> Result<()> { use std::io::Write; self.tokenizer.clear(); let mut tokens = self .tokenizer .tokenizer() .encode(prompt, true) .map_err(E::msg)? .get_ids() .to_vec(); for &t in tokens.iter() { if let Some(t) = self.tokenizer.next_token(t)? { print!("{t}") } } std::io::stdout().flush()?; let mut generated_tokens = 0usize; let start_gen = std::time::Instant::now(); for index in 0..sample_len { let context_size = if index > 0 { 1 } else { tokens.len() }; let start_pos = tokens.len().saturating_sub(context_size); let ctxt = &tokens[start_pos..]; let input = Tensor::new(ctxt, &self.device)?.unsqueeze(0)?; let logits = self.model.forward(&input, start_pos)?; let logits = logits.squeeze(0)?.squeeze(0)?.to_dtype(DType::F32)?; let logits = if self.repeat_penalty == 1. { logits } else { let start_at = tokens.len().saturating_sub(self.repeat_last_n); candle_transformers::utils::apply_repeat_penalty( &logits, self.repeat_penalty, &tokens[start_at..], )? }; let next_token = self.logits_processor.sample(&logits)?; tokens.push(next_token); generated_tokens += 1; let is_eos = self .config .eos_token_id() .as_ref() .is_some_and(|v| match v { LlamaEosToks::Single(eos) => *eos == next_token, LlamaEosToks::Multiple(eos) => eos.contains(&next_token), }); if Some(next_token) == self.config.bos_token_id() || is_eos { break; } if let Some(t) = self.tokenizer.next_token(next_token)? { print!("{t}"); std::io::stdout().flush()?; } } let dt = start_gen.elapsed(); if let Some(rest) = self.tokenizer.decode_rest().map_err(E::msg)? { print!("{rest}"); } std::io::stdout().flush()?; println!( "\n{generated_tokens} tokens generated ({:.2} token/s)", generated_tokens as f64 / dt.as_secs_f64(), ); Ok(()) } } #[derive(Clone, Debug, Copy, PartialEq, Eq, clap::ValueEnum)] enum Which { #[value(name = "v1-preview")] V1Preview, #[value(name = "v1")] V1, } #[derive(Parser, Debug)] #[command(author, version, about, long_about = None)] struct Args { /// Run on CPU rather than on GPU. #[arg(long)] cpu: bool, /// Enable tracing (generates a trace-timestamp.json file). #[arg(long)] tracing: bool, #[arg(long)] prompt: String, /// The temperature used to generate samples. #[arg(long, default_value_t = 0.7)] temperature: f64, /// Nucleus sampling probability cutoff. #[arg(long)] top_p: Option<f64>, /// Only sample among the top K samples. #[arg(long)] top_k: Option<usize>, /// The seed to use when generating random samples. #[arg(long, default_value_t = 299792458)] seed: u64, /// The length of the sample to generate (in tokens). #[arg(long, short = 'n', default_value_t = 10000)] sample_len: usize, /// The model size to use. #[arg(long, default_value = "v1")] which: Which, #[arg(long)] model_id: Option<String>, #[arg(long, default_value = "main")] revision: String, #[arg(long)] tokenizer: Option<String>, #[arg(long)] config: Option<String>, #[arg(long)] weights: Option<String>, /// Penalty to be applied for repeating tokens, 1. means no penalty. #[arg(long, default_value_t = 1.1)] repeat_penalty: f32, /// The context size to consider for the repeat penalty. #[arg(long, default_value_t = 64)] repeat_last_n: usize, } fn main() -> Result<()> { use tracing_chrome::ChromeLayerBuilder; use tracing_subscriber::prelude::*; let args = Args::parse(); let _guard = if args.tracing { let (chrome_layer, guard) = ChromeLayerBuilder::new().build(); tracing_subscriber::registry().with(chrome_layer).init(); Some(guard) } else { None }; println!( "avx: {}, neon: {}, simd128: {}, f16c: {}", candle::utils::with_avx(), candle::utils::with_neon(), candle::utils::with_simd128(), candle::utils::with_f16c() ); println!( "temp: {:.2} repeat-penalty: {:.2} repeat-last-n: {}", args.temperature, args.repeat_penalty, args.repeat_last_n ); let start = std::time::Instant::now(); let api = Api::new()?; let model_id = match args.model_id { Some(model_id) => model_id, None => { let name = match args.which { Which::V1Preview => "kyutai/helium-1-preview-2b", Which::V1 => "kyutai/helium-1-2b", }; name.to_string() } }; let repo = api.repo(Repo::with_revision( model_id, RepoType::Model, args.revision, )); let tokenizer_filename = match args.tokenizer { Some(file) => std::path::PathBuf::from(file), None => repo.get("tokenizer.json")?, }; let filenames = match args.weights { Some(files) => files .split(',') .map(std::path::PathBuf::from) .collect::<Vec<_>>(), None => vec![repo.get("model.safetensors")?], }; println!("retrieved the files in {:?}", start.elapsed()); let tokenizer = Tokenizer::from_file(tokenizer_filename).map_err(E::msg)?; let start = std::time::Instant::now(); let config_file = match args.config { Some(config_file) => std::path::PathBuf::from(config_file), None => repo.get("config.json")?, }; let config = match args.which { Which::V1Preview => Config::Preview(serde_json::from_slice(&std::fs::read(config_file)?)?), Which::V1 => Config::V1(serde_json::from_slice(&std::fs::read(config_file)?)?), }; let device = candle_examples::device(args.cpu)?; let (model, device) = { let dtype = device.bf16_default_to_f32(); let vb = unsafe { VarBuilder::from_mmaped_safetensors(&filenames, dtype, &device)? }; let model = match &config { Config::V1(c) => { let c = c.clone().into_config(false); let model = ModelV1::load(vb, &c)?; let cache = CacheV1::new(true, dtype, &c, &device)?; Model::V1 { model, cache } } Config::Preview(c) => Model::Preview(ModelPreview::new(c, vb)?), }; (model, device) }; println!("loaded the model in {:?}", start.elapsed()); let mut pipeline = TextGeneration::new( model, tokenizer, args.seed, Some(args.temperature), args.top_p, args.top_k, args.repeat_penalty, args.repeat_last_n, config, &device, ); pipeline.run(&args.prompt, args.sample_len)?; Ok(()) }
candle/candle-examples/examples/helium/main.rs/0
{ "file_path": "candle/candle-examples/examples/helium/main.rs", "repo_id": "candle", "token_count": 5087 }
34
# candle-mamba-minimal: minimal implementation of Mamba This is based on [mamba-minimal](https://github.com/johnma2006/mamba-minimal). Compared to the mamba example, this version can handle training but is much slower. ## Running the example ```bash $ cargo run --example mamba-minimal --release -- --prompt "Mamba is the" Mamba is the most popular and best-selling game in the world. It has been downloaded more than 1,000 times by over 1 million people worldwide since its release on March 18th 2016. The Mamba series of games are a collection that combines elements from all genres including action, adventure, strategy & puzzle games with some unique gameplay features such as stealth and survival. The game is also known for its innovative graphics and the ability to play in a variety of different modes like single player or multiplayer. ```
candle/candle-examples/examples/mamba-minimal/README.md/0
{ "file_path": "candle/candle-examples/examples/mamba-minimal/README.md", "repo_id": "candle", "token_count": 206 }
35
# candle-mixtral: 8x7b LLM using a sparse mixture of experts. Mixtral-8x7B-v0.1 is a pretrained generative LLM with 56 billion parameters. - [Blog post](https://mistral.ai/news/mixtral-of-experts/) from Mistral announcing the model release. - [Model card](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) on the HuggingFace Hub. ## Running the example ```bash $ cargo run --example mixtral --release -- --prompt "def print_prime(n): " def print_prime(n): # n is the number of prime numbers to be printed i = 2 count = 0 while (count < n): if (isPrime(i)): print(i) count += 1 i += 1 def isPrime(n): for x in range(2, int(n**0.5)+1): if (n % x == 0): ... ```
candle/candle-examples/examples/mixtral/README.md/0
{ "file_path": "candle/candle-examples/examples/mixtral/README.md", "repo_id": "candle", "token_count": 322 }
36
use candle::{DType, Device, Result, Tensor, D}; use candle_nn::{ embedding, layer_norm, linear_no_bias, Activation, Embedding, LayerNorm, Linear, Module, VarBuilder, }; use candle_transformers::models::{encodec, t5}; // https://github.com/huggingface/transformers/blob/cd4584e3c809bb9e1392ccd3fe38b40daba5519a/src/transformers/models/musicgen/configuration_musicgen.py#L83 #[derive(Debug, Clone, PartialEq)] pub struct Config { vocab_size: usize, max_position_embeddings: usize, num_hidden_layers: usize, ffn_dim: usize, num_attention_heads: usize, layerdrop: f64, use_cache: bool, activation_function: Activation, hidden_size: usize, dropout: f64, attention_dropout: f64, activation_dropout: f64, initializer_factor: f64, scale_embedding: bool, num_codebooks: usize, pad_token_id: usize, bos_token_id: usize, eos_token_id: Option<usize>, tie_word_embeddings: bool, } impl Default for Config { fn default() -> Self { Self { vocab_size: 2048, max_position_embeddings: 2048, num_hidden_layers: 24, ffn_dim: 4096, num_attention_heads: 16, layerdrop: 0.0, use_cache: true, activation_function: Activation::Gelu, hidden_size: 1024, dropout: 0.1, attention_dropout: 0.0, activation_dropout: 0.0, initializer_factor: 0.02, scale_embedding: false, num_codebooks: 4, pad_token_id: 2048, bos_token_id: 2048, eos_token_id: None, tie_word_embeddings: false, } } } impl Config { fn musicgen_small() -> Self { Self { vocab_size: 2048, max_position_embeddings: 2048, num_hidden_layers: 24, ffn_dim: 4096, num_attention_heads: 16, layerdrop: 0.0, use_cache: true, activation_function: Activation::Gelu, hidden_size: 1024, dropout: 0.1, attention_dropout: 0.0, activation_dropout: 0.0, initializer_factor: 0.02, scale_embedding: false, num_codebooks: 4, pad_token_id: 2048, bos_token_id: 2048, eos_token_id: None, tie_word_embeddings: false, } } } fn get_embedding(num_embeddings: usize, embedding_dim: usize) -> Result<Tensor> { let half_dim = embedding_dim / 2; let emb = f64::ln(10000.) / (half_dim - 1) as f64; let xs: Vec<_> = (0..num_embeddings).map(|v| v as f32).collect(); let xs = Tensor::from_vec(xs, (num_embeddings, 1), &Device::Cpu)?; let ys: Vec<_> = (0..half_dim) .map(|v| f64::exp(v as f64 * -emb) as f32) .collect(); let ys = Tensor::from_vec(ys, (1, half_dim), &Device::Cpu)?; let shape = (num_embeddings, half_dim); let emb = (xs.broadcast_as(shape)? * ys.broadcast_as(shape)?)?; let emb = Tensor::cat(&[&emb.cos()?, &emb.sin()?], 1)?.reshape((num_embeddings, 2 * half_dim))?; let emb = if embedding_dim % 2 == 1 { let zeros = Tensor::zeros((num_embeddings, 1), DType::F32, &Device::Cpu)?; Tensor::cat(&[&emb, &zeros], 1)? } else { emb }; Ok(emb) } #[derive(Debug)] struct MusicgenSinusoidalPositionalEmbedding { num_positions: usize, embedding_dim: usize, weights: Tensor, } impl MusicgenSinusoidalPositionalEmbedding { fn load(_vb: VarBuilder, cfg: &Config) -> Result<Self> { let num_positions = cfg.max_position_embeddings; let embedding_dim = cfg.hidden_size; let weights = get_embedding(num_positions, embedding_dim)?; Ok(Self { num_positions, embedding_dim, weights, }) } fn forward(&mut self, input_ids: &Tensor) -> Result<Tensor> { let (_b_sz, _codebooks, seq_len) = input_ids.dims3()?; if seq_len > self.weights.dim(0)? { self.weights = get_embedding(seq_len, self.embedding_dim)? } self.weights.narrow(0, 0, seq_len) } } #[derive(Debug)] struct MusicgenAttention { scaling: f64, is_decoder: bool, num_heads: usize, head_dim: usize, k_proj: Linear, v_proj: Linear, q_proj: Linear, out_proj: Linear, } impl MusicgenAttention { fn load(vb: VarBuilder, cfg: &Config) -> Result<Self> { let h = cfg.hidden_size; let num_heads = cfg.num_attention_heads; let head_dim = h / num_heads; let k_proj = linear_no_bias(h, h, vb.pp("k_proj"))?; let v_proj = linear_no_bias(h, h, vb.pp("v_proj"))?; let q_proj = linear_no_bias(h, h, vb.pp("q_proj"))?; let out_proj = linear_no_bias(h, h, vb.pp("out_proj"))?; Ok(Self { scaling: 1. / (head_dim as f64).sqrt(), is_decoder: true, num_heads, head_dim, k_proj, v_proj, q_proj, out_proj, }) } fn forward( &mut self, xs: &Tensor, kv_states: Option<&Tensor>, attention_mask: &Tensor, ) -> Result<Tensor> { let (b_sz, tgt_len, _) = xs.dims3()?; let query_states = (self.q_proj.forward(xs)? * self.scaling)?; let kv_states = kv_states.unwrap_or(xs); let key_states = self.k_proj.forward(kv_states)?; let value_states = self.v_proj.forward(kv_states)?; let tgt = (b_sz, tgt_len, self.num_heads, self.head_dim); let query_states = query_states.reshape(tgt)?.transpose(1, 2)?.contiguous()?; let key_states = key_states.reshape(tgt)?.transpose(1, 2)?.contiguous()?; let value_states = value_states.reshape(tgt)?.transpose(1, 2)?.contiguous()?; let src_len = key_states.dim(1)?; let attn_weights = query_states.matmul(&key_states.transpose(1, 2)?)?; let attn_weights = attn_weights .reshape((b_sz, self.num_heads, tgt_len, src_len))? .broadcast_add(attention_mask)?; let attn_weights = candle_nn::ops::softmax(&attn_weights, D::Minus1)?; // TODO: layer_head_mask? let attn_output = attn_weights .matmul(&value_states)? .reshape((b_sz, self.num_heads, tgt_len, self.head_dim))? .transpose(1, 2)? .reshape((b_sz, tgt_len, self.num_heads * self.head_dim))?; let attn_output = self.out_proj.forward(&attn_output)?; Ok(attn_output) } } #[derive(Debug)] struct MusicgenDecoderLayer { self_attn: MusicgenAttention, self_attn_layer_norm: LayerNorm, encoder_attn: MusicgenAttention, encoder_attn_layer_norm: LayerNorm, fc1: Linear, fc2: Linear, final_layer_norm: LayerNorm, activation_fn: Activation, } impl MusicgenDecoderLayer { fn load(vb: VarBuilder, cfg: &Config) -> Result<Self> { let h = cfg.hidden_size; let self_attn = MusicgenAttention::load(vb.pp("self_attn"), cfg)?; let self_attn_layer_norm = layer_norm(h, 1e-5, vb.pp("self_attn_layer_norm"))?; let encoder_attn = MusicgenAttention::load(vb.pp("encoder_attn"), cfg)?; let encoder_attn_layer_norm = layer_norm(h, 1e-5, vb.pp("encoder_attn_layer_norm"))?; let fc1 = linear_no_bias(h, cfg.ffn_dim, vb.pp("fc1"))?; let fc2 = linear_no_bias(cfg.ffn_dim, h, vb.pp("fc2"))?; let final_layer_norm = layer_norm(h, 1e-5, vb.pp("final_layer_norm"))?; Ok(Self { self_attn, self_attn_layer_norm, encoder_attn, encoder_attn_layer_norm, fc1, fc2, final_layer_norm, activation_fn: cfg.activation_function, }) } fn forward( &mut self, xs: &Tensor, attention_mask: &Tensor, encoder_hidden_states: Option<&Tensor>, ) -> Result<Tensor> { let residual = xs.clone(); let xs = self.self_attn_layer_norm.forward(xs)?; let xs = self.self_attn.forward(&xs, None, attention_mask)?; let mut xs = (xs + residual)?; if let Some(encoder_hidden_states) = &encoder_hidden_states { let residual = xs.clone(); let encoder_attention_mask = attention_mask.clone(); // TODO xs = self.encoder_attn.forward( &xs, Some(encoder_hidden_states), &encoder_attention_mask, )?; xs = (xs + residual)? } let residual = xs.clone(); let xs = self.final_layer_norm.forward(&xs)?; let xs = self.fc1.forward(&xs)?; let xs = self.activation_fn.forward(&xs)?; let xs = self.fc2.forward(&xs)?; let xs = (xs + residual)?; Ok(xs) } } #[derive(Debug)] struct MusicgenDecoder { embed_tokens: Vec<Embedding>, embed_positions: MusicgenSinusoidalPositionalEmbedding, layers: Vec<MusicgenDecoderLayer>, layer_norm: LayerNorm, embed_scale: f64, num_codebooks: usize, d_model: usize, } impl MusicgenDecoder { fn load(vb: VarBuilder, cfg: &Config) -> Result<Self> { let h = cfg.hidden_size; let embed_scale = if cfg.scale_embedding { (h as f64).sqrt() } else { 1. }; let embed_dim = cfg.vocab_size + 1; let embed_tokens = (0..cfg.num_codebooks) .map(|i| embedding(embed_dim, h, vb.pp(format!("embed_tokens.{i}")))) .collect::<Result<Vec<_>>>()?; let embed_positions = MusicgenSinusoidalPositionalEmbedding::load(vb.clone(), cfg)?; let layers = (0..cfg.num_hidden_layers) .map(|i| MusicgenDecoderLayer::load(vb.pp(format!("layers.{i}")), cfg)) .collect::<Result<Vec<_>>>()?; let layer_norm = layer_norm(h, 1e-5, vb.pp("layer_norm"))?; Ok(Self { embed_tokens, embed_positions, layers, layer_norm, embed_scale, num_codebooks: cfg.num_codebooks, d_model: cfg.hidden_size, }) } fn prepare_decoder_attention_mask(&self, _b_sz: usize, _seq_len: usize) -> Result<Tensor> { todo!() } fn forward(&mut self, input_ids: &Tensor) -> Result<Tensor> { let dev = input_ids.device(); let (b_sz_times_codebooks, seq_len) = input_ids.dims2()?; let b_sz = b_sz_times_codebooks / self.num_codebooks; let input = input_ids.reshape((b_sz, self.num_codebooks, seq_len))?; let mut inputs_embeds = Tensor::zeros((b_sz, seq_len, self.d_model), DType::F32, dev)?; for (idx, codebook) in self.embed_tokens.iter().enumerate() { let inp = input.narrow(1, idx, 1)?.squeeze(1)?; inputs_embeds = (inputs_embeds + codebook.forward(&inp)?)? } let inputs_embeds = inputs_embeds; let positions = self.embed_positions.forward(&input)?.to_device(dev)?; let mut xs = inputs_embeds.broadcast_add(&positions)?; let attention_mask = self.prepare_decoder_attention_mask(b_sz, seq_len)?; for decoder_layer in self.layers.iter_mut() { xs = decoder_layer.forward(&xs, &attention_mask, None)?; } let xs = self.layer_norm.forward(&xs)?; Ok(xs) } } #[derive(Debug)] pub struct MusicgenForCausalLM { decoder: MusicgenDecoder, lm_heads: Vec<Linear>, num_codebooks: usize, vocab_size: usize, } impl MusicgenForCausalLM { pub fn load(vb: VarBuilder, cfg: &Config) -> Result<Self> { let h = cfg.hidden_size; let decoder = MusicgenDecoder::load(vb.pp("model.decoder"), cfg)?; let lm_heads = (0..cfg.num_codebooks) .map(|i| linear_no_bias(h, cfg.vocab_size, vb.pp(format!("lm_heads.{i}")))) .collect::<Result<Vec<_>>>()?; Ok(Self { decoder, lm_heads, num_codebooks: cfg.num_codebooks, vocab_size: cfg.vocab_size, }) } pub fn forward(&mut self, input_ids: &Tensor) -> Result<Tensor> { let (b_sz, seq_len) = input_ids.dims2()?; let hidden_states = self.decoder.forward(input_ids)?; let lm_logits = self .lm_heads .iter() .map(|h| h.forward(&hidden_states)) .collect::<Result<Vec<_>>>()?; let lm_logits = Tensor::stack(&lm_logits, 1)?.reshape(( b_sz * self.num_codebooks, seq_len, self.vocab_size, ))?; Ok(lm_logits) } } #[derive(Debug)] pub struct MusicgenForConditionalGeneration { pub text_encoder: t5::T5EncoderModel, pub audio_encoder: encodec::Model, pub decoder: MusicgenForCausalLM, cfg: GenConfig, } #[derive(Debug, Clone, PartialEq)] pub struct GenConfig { musicgen: Config, t5: t5::Config, encodec: encodec::Config, } impl GenConfig { pub fn small() -> Self { // https://huggingface.co/facebook/musicgen-small/blob/495da4ad086b3416a27c6187f9239f9fd96f3962/config.json#L6 let encodec = encodec::Config { audio_channels: 1, chunk_length_s: None, codebook_dim: Some(128), codebook_size: 2048, compress: 2, dilation_growth_rate: 2, hidden_size: 128, kernel_size: 7, last_kernel_size: 7, norm_type: encodec::NormType::WeightNorm, normalize: false, num_filters: 64, num_lstm_layers: 2, num_residual_layers: 1, overlap: None, // This should be Reflect and not Replicate but Reflect does not work yet. pad_mode: encodec::PadMode::Replicate, residual_kernel_size: 3, sampling_rate: 32_000, target_bandwidths: vec![2.2], trim_right_ratio: 1.0, upsampling_ratios: vec![8, 5, 4, 4], use_causal_conv: false, use_conv_shortcut: false, }; Self { musicgen: Config::musicgen_small(), t5: t5::Config::musicgen_small(), encodec, } } } impl MusicgenForConditionalGeneration { pub fn config(&self) -> &GenConfig { &self.cfg } pub fn load(vb: VarBuilder, cfg: GenConfig) -> Result<Self> { let text_encoder = t5::T5EncoderModel::load(vb.pp("text_encoder"), &cfg.t5)?; let audio_encoder = encodec::Model::new(&cfg.encodec, vb.pp("audio_encoder"))?; let decoder = MusicgenForCausalLM::load(vb.pp("decoder"), &cfg.musicgen)?; Ok(Self { text_encoder, audio_encoder, decoder, cfg, }) } }
candle/candle-examples/examples/musicgen/musicgen_model.rs/0
{ "file_path": "candle/candle-examples/examples/musicgen/musicgen_model.rs", "repo_id": "candle", "token_count": 7592 }
37
#[cfg(feature = "mkl")] extern crate intel_mkl_src; #[cfg(feature = "accelerate")] extern crate accelerate_src; use anyhow::Error as E; use clap::Parser; use candle::{DType, IndexOp, Tensor}; use candle_nn::VarBuilder; use candle_transformers::models::parler_tts::{Config, Model}; use tokenizers::Tokenizer; #[derive(Parser)] struct Args { /// Run on CPU rather than on GPU. #[arg(long)] cpu: bool, /// Enable tracing (generates a trace-timestamp.json file). #[arg(long)] tracing: bool, /// Display the token for the specified prompt. #[arg(long)] verbose_prompt: bool, #[arg(long, default_value = "Hey, how are you doing today?")] prompt: String, #[arg( long, default_value = "A female speaker delivers a slightly expressive and animated speech with a moderate speed and pitch. The recording is of very high quality, with the speaker's voice sounding clear and very close up." )] description: String, /// The temperature used to generate samples. #[arg(long, default_value_t = 0.0)] temperature: f64, /// Nucleus sampling probability cutoff. #[arg(long)] top_p: Option<f64>, /// The seed to use when generating random samples. #[arg(long, default_value_t = 0)] seed: u64, #[arg(long, default_value_t = 5000)] sample_len: usize, /// Penalty to be applied for repeating tokens, 1. means no penalty. #[arg(long, default_value_t = 1.0)] repeat_penalty: f32, /// The context size to consider for the repeat penalty. #[arg(long, default_value_t = 64)] repeat_last_n: usize, #[arg(long)] model_id: Option<String>, #[arg(long)] revision: Option<String>, #[arg(long)] quantized: bool, /// Use f16 precision for all the computations rather than f32. #[arg(long)] f16: bool, #[arg(long)] model_file: Option<String>, #[arg(long)] tokenizer_file: Option<String>, #[arg(long)] config_file: Option<String>, #[arg(long, default_value_t = 512)] max_steps: usize, /// The output wav file. #[arg(long, default_value = "out.wav")] out_file: String, #[arg(long, default_value = "large-v1")] which: Which, } #[derive(Clone, Debug, Copy, PartialEq, Eq, clap::ValueEnum)] enum Which { #[value(name = "large-v1")] LargeV1, #[value(name = "mini-v1")] MiniV1, } fn main() -> anyhow::Result<()> { use tracing_chrome::ChromeLayerBuilder; use tracing_subscriber::prelude::*; let args = Args::parse(); let _guard = if args.tracing { let (chrome_layer, guard) = ChromeLayerBuilder::new().build(); tracing_subscriber::registry().with(chrome_layer).init(); Some(guard) } else { None }; println!( "avx: {}, neon: {}, simd128: {}, f16c: {}", candle::utils::with_avx(), candle::utils::with_neon(), candle::utils::with_simd128(), candle::utils::with_f16c() ); println!( "temp: {:.2} repeat-penalty: {:.2} repeat-last-n: {}", args.temperature, args.repeat_penalty, args.repeat_last_n ); let start = std::time::Instant::now(); let api = hf_hub::api::sync::Api::new()?; let model_id = match args.model_id { Some(model_id) => model_id.to_string(), None => match args.which { Which::LargeV1 => "parler-tts/parler-tts-large-v1".to_string(), Which::MiniV1 => "parler-tts/parler-tts-mini-v1".to_string(), }, }; let revision = match args.revision { Some(r) => r, None => "main".to_string(), }; let repo = api.repo(hf_hub::Repo::with_revision( model_id, hf_hub::RepoType::Model, revision, )); let model_files = match args.model_file { Some(m) => vec![m.into()], None => match args.which { Which::MiniV1 => vec![repo.get("model.safetensors")?], Which::LargeV1 => { candle_examples::hub_load_safetensors(&repo, "model.safetensors.index.json")? } }, }; let config = match args.config_file { Some(m) => m.into(), None => repo.get("config.json")?, }; let tokenizer = match args.tokenizer_file { Some(m) => m.into(), None => repo.get("tokenizer.json")?, }; println!("retrieved the files in {:?}", start.elapsed()); let tokenizer = Tokenizer::from_file(tokenizer).map_err(E::msg)?; let start = std::time::Instant::now(); let device = candle_examples::device(args.cpu)?; let vb = unsafe { VarBuilder::from_mmaped_safetensors(&model_files, DType::F32, &device)? }; let config: Config = serde_json::from_reader(std::fs::File::open(config)?)?; let mut model = Model::new(&config, vb)?; println!("loaded the model in {:?}", start.elapsed()); let description_tokens = tokenizer .encode(args.description, true) .map_err(E::msg)? .get_ids() .to_vec(); let description_tokens = Tensor::new(description_tokens, &device)?.unsqueeze(0)?; let prompt_tokens = tokenizer .encode(args.prompt, true) .map_err(E::msg)? .get_ids() .to_vec(); let prompt_tokens = Tensor::new(prompt_tokens, &device)?.unsqueeze(0)?; let lp = candle_transformers::generation::LogitsProcessor::new( args.seed, Some(args.temperature), args.top_p, ); println!("starting generation..."); let codes = model.generate(&prompt_tokens, &description_tokens, lp, args.max_steps)?; println!("generated codes\n{codes}"); let codes = codes.to_dtype(DType::I64)?; codes.save_safetensors("codes", "out.safetensors")?; let codes = codes.unsqueeze(0)?; let pcm = model .audio_encoder .decode_codes(&codes.to_device(&device)?)?; println!("{pcm}"); let pcm = pcm.i((0, 0))?; let pcm = candle_examples::audio::normalize_loudness(&pcm, 24_000, true)?; let pcm = pcm.to_vec1::<f32>()?; let mut output = std::fs::File::create(&args.out_file)?; candle_examples::wav::write_pcm_as_wav(&mut output, &pcm, config.audio_encoder.sampling_rate)?; Ok(()) }
candle/candle-examples/examples/parler-tts/main.rs/0
{ "file_path": "candle/candle-examples/examples/parler-tts/main.rs", "repo_id": "candle", "token_count": 2678 }
38
# candle-repvgg [RepVGG: Making VGG-style ConvNets Great Again](https://arxiv.org/abs/2101.03697). This candle implementation uses a pre-trained RepVGG network for inference. The classification head has been trained on the ImageNet dataset and returns the probabilities for the top-5 classes. ## Running an example ``` $ cargo run --example repvgg --release -- --image candle-examples/examples/yolo-v8/assets/bike.jpg loaded image Tensor[dims 3, 224, 224; f32] model built mountain bike, all-terrain bike, off-roader: 61.70% bicycle-built-for-two, tandem bicycle, tandem: 33.14% unicycle, monocycle : 4.88% crash helmet : 0.15% moped : 0.04% ```
candle/candle-examples/examples/repvgg/README.md/0
{ "file_path": "candle/candle-examples/examples/repvgg/README.md", "repo_id": "candle", "token_count": 254 }
39
#[cfg(feature = "mkl")] extern crate intel_mkl_src; #[cfg(feature = "accelerate")] extern crate accelerate_src; use anyhow::Error as E; use clap::Parser; use candle::{DType, Device, Tensor}; use candle_nn::{ops::softmax, VarBuilder}; use candle_transformers::models::siglip; use tokenizers::Tokenizer; #[derive(Clone, Copy, Debug, clap::ValueEnum, PartialEq, Eq)] enum Which { #[value(name = "v1-base-patch16-224")] V1BasePatch16_224, #[value(name = "v2-base-patch16-224")] V2BasePatch16_224, #[value(name = "v2-base-patch16-256")] V2BasePatch16_256, #[value(name = "v2-base-patch16-384")] V2BasePatch16_384, #[value(name = "v2-base-patch16-512")] V2BasePatch16_512, #[value(name = "v2-large-patch16-256")] V2LargePatch16_256, #[value(name = "v2-large-patch16-384")] V2LargePatch16_384, #[value(name = "v2-large-patch16-512")] V2LargePatch16_512, } #[derive(Parser)] struct Args { #[arg(long)] model: Option<String>, #[arg(long)] config: Option<String>, #[arg(long)] hf_repo: Option<String>, #[arg(long, default_value = "v1-base-patch16-224")] which: Which, #[arg(long)] tokenizer: Option<String>, #[arg(long, use_value_delimiter = true)] images: Option<Vec<String>>, #[arg(long)] cpu: bool, #[arg(long, use_value_delimiter = true)] sequences: Option<Vec<String>>, #[arg(short, long)] image_size: Option<usize>, } fn load_image<T: AsRef<std::path::Path>>(path: T, image_size: usize) -> anyhow::Result<Tensor> { let img = image::ImageReader::open(path)?.decode()?; let (height, width) = (image_size, image_size); let img = img.resize_to_fill( width as u32, height as u32, image::imageops::FilterType::Triangle, ); let img = img.to_rgb8(); let img = img.into_raw(); let img = Tensor::from_vec(img, (height, width, 3), &Device::Cpu)? .permute((2, 0, 1))? .to_dtype(DType::F32)? .affine(2. / 255., -1.)?; Ok(img) } fn load_images<T: AsRef<std::path::Path>>( paths: &Vec<T>, image_size: usize, ) -> anyhow::Result<Tensor> { let mut images = vec![]; for path in paths { let tensor = load_image(path, image_size)?; images.push(tensor); } let images = Tensor::stack(&images, 0)?; Ok(images) } pub fn main() -> anyhow::Result<()> { let args = Args::parse(); let hf_repo = match args.hf_repo.as_ref() { Some(hf_repo) => hf_repo, None => match args.which { Which::V1BasePatch16_224 => "google/siglip-base-patch16-224", Which::V2BasePatch16_224 => "google/siglip2-base-patch16-224", Which::V2BasePatch16_256 => "google/siglip2-base-patch16-256", Which::V2BasePatch16_384 => "google/siglip2-base-patch16-384", Which::V2BasePatch16_512 => "google/siglip2-base-patch16-512", Which::V2LargePatch16_256 => "google/siglip2-large-patch16-256", Which::V2LargePatch16_384 => "google/siglip2-large-patch16-384", Which::V2LargePatch16_512 => "google/siglip2-large-patch16-512", }, }; let model_file = match args.model { None => { let api = hf_hub::api::sync::Api::new()?; let api = api.model(hf_repo.to_string()); api.get("model.safetensors")? } Some(model) => model.into(), }; let config_file = match args.config { None => { let api = hf_hub::api::sync::Api::new()?; let api = api.model(hf_repo.to_string()); api.get("config.json")? } Some(config) => config.into(), }; let tokenizer = get_tokenizer(hf_repo, args.tokenizer)?; let config: siglip::Config = serde_json::from_slice(&std::fs::read(config_file)?)?; let device = candle_examples::device(args.cpu)?; let vec_imgs = match args.images { Some(imgs) => imgs, None => vec![ "candle-examples/examples/stable-diffusion/assets/stable-diffusion-xl.jpg".to_string(), "candle-examples/examples/yolo-v8/assets/bike.jpg".to_string(), ], }; let images = load_images( &vec_imgs, args.image_size.unwrap_or(config.vision_config.image_size), )? .to_device(&device)?; let vb = unsafe { VarBuilder::from_mmaped_safetensors(std::slice::from_ref(&model_file), DType::F32, &device)? }; let model = siglip::Model::new(&config, vb)?; let (input_ids, vec_seq) = tokenize_sequences(&config, args.sequences, &tokenizer, &device)?; let (_logits_per_text, logits_per_image) = model.forward(&images, &input_ids)?; let softmax_image = softmax(&logits_per_image, 1)?; let softmax_image_vec = softmax_image.flatten_all()?.to_vec1::<f32>()?; println!("softmax_image_vec: {softmax_image_vec:?}"); let probability_vec = softmax_image_vec .iter() .map(|v| v * 100.0) .collect::<Vec<f32>>(); let probability_per_image = probability_vec.len() / vec_imgs.len(); for (i, img) in vec_imgs.iter().enumerate() { let start = i * probability_per_image; let end = start + probability_per_image; let prob = &probability_vec[start..end]; println!("\n\nResults for image: {img}\n"); for (i, p) in prob.iter().enumerate() { println!("Probability: {:.4}% Text: {} ", p, vec_seq[i]); } } Ok(()) } pub fn get_tokenizer(hf_repo: &str, tokenizer: Option<String>) -> anyhow::Result<Tokenizer> { let tokenizer = match tokenizer { None => { let api = hf_hub::api::sync::Api::new()?; let api = api.model(hf_repo.to_string()); api.get("tokenizer.json")? } Some(file) => file.into(), }; Tokenizer::from_file(tokenizer).map_err(E::msg) } pub fn tokenize_sequences( config: &siglip::Config, sequences: Option<Vec<String>>, tokenizer: &Tokenizer, device: &Device, ) -> anyhow::Result<(Tensor, Vec<String>)> { let pad_id = config.text_config.pad_token_id; let vec_seq = match sequences { Some(seq) => seq, None => vec![ "a cycling race".to_string(), "a photo of two cats".to_string(), "a robot holding a candle".to_string(), ], }; let mut tokens = vec![]; for seq in vec_seq.clone() { let encoding = tokenizer.encode(seq, true).map_err(E::msg)?; tokens.push(encoding.get_ids().to_vec()); } let max_len = config.text_config.max_position_embeddings; // Pad the sequences to have the same length for token_vec in tokens.iter_mut() { let len_diff = max_len - token_vec.len(); if len_diff > 0 { token_vec.extend(vec![pad_id; len_diff]); } } let input_ids = Tensor::new(tokens, device)?; Ok((input_ids, vec_seq)) }
candle/candle-examples/examples/siglip/main.rs/0
{ "file_path": "candle/candle-examples/examples/siglip/main.rs", "repo_id": "candle", "token_count": 3231 }
40
# candle-stable-lm StableLM-3B-4E1T is a 3 billion parameter decoder-only language model pre-trained on 1 trillion tokens of diverse English and code datasets for 4 epochs. See the [HuggingFace Hub Model Card](https://huggingface.co/stabilityai/stablelm-3b-4e1t). Note that this model is gated so you will have to request access on the Hub in order to be able to use it. Other available models are Stable-Code-3B, StableLM-2 and Zephyr variants. ## Running some example ```bash $ cargo run --example stable-lm --release --features cuda -- --prompt 'What is the most efficient programming language in use?' --sample-len 150 avx: true, neon: false, simd128: false, f16c: true temp: 0.00 repeat-penalty: 1.10 repeat-last-n: 64 retrieved the files in 126.593µs loaded the model in 3.474148965s What is the most efficient programming language in use? The answer to this question depends on what you mean by "efficient". If you're talking about speed, then C++ and Java are probably your best bets. But if you're talking about ease of development, then Python is probably the way to go. Python is a high-level, interpreted language that is easy to learn and use. It has a large community of developers who are always working on new features and improvements. C++ is a low-level, compiled language that can be used for both desktop applications and web development. It's more difficult to learn than Python but offers greater control over the code. Java is another high-level language that is popular with programmers because it runs on many different platforms (including Android phones 150 tokens generated (37.61 token/s) ```
candle/candle-examples/examples/stable-lm/README.md/0
{ "file_path": "candle/candle-examples/examples/stable-lm/README.md", "repo_id": "candle", "token_count": 432 }
41
#[cfg(feature = "mkl")] extern crate intel_mkl_src; #[cfg(feature = "accelerate")] extern crate accelerate_src; use clap::Parser; use candle::{DType, IndexOp, D}; use candle_nn::VarBuilder; use candle_transformers::models::vit; #[derive(Parser)] struct Args { #[arg(long)] model: Option<String>, #[arg(long)] image: String, /// Run on CPU rather than on GPU. #[arg(long)] cpu: bool, } pub fn main() -> anyhow::Result<()> { let args = Args::parse(); let device = candle_examples::device(args.cpu)?; let image = candle_examples::imagenet::load_image224(args.image)?.to_device(&device)?; println!("loaded image {image:?}"); let model_file = match args.model { None => { let api = hf_hub::api::sync::Api::new()?; let api = api.model("google/vit-base-patch16-224".into()); api.get("model.safetensors")? } Some(model) => model.into(), }; let vb = unsafe { VarBuilder::from_mmaped_safetensors(&[model_file], DType::F32, &device)? }; let model = vit::Model::new(&vit::Config::vit_base_patch16_224(), 1000, vb)?; println!("model built"); let logits = model.forward(&image.unsqueeze(0)?)?; let prs = candle_nn::ops::softmax(&logits, D::Minus1)? .i(0)? .to_vec1::<f32>()?; let mut prs = prs.iter().enumerate().collect::<Vec<_>>(); prs.sort_by(|(_, p1), (_, p2)| p2.total_cmp(p1)); for &(category_idx, pr) in prs.iter().take(5) { println!( "{:24}: {:.2}%", candle_examples::imagenet::CLASSES[category_idx], 100. * pr ); } Ok(()) }
candle/candle-examples/examples/vit/main.rs/0
{ "file_path": "candle/candle-examples/examples/vit/main.rs", "repo_id": "candle", "token_count": 762 }
42
#[cfg(feature = "mkl")] extern crate intel_mkl_src; #[cfg(feature = "accelerate")] extern crate accelerate_src; mod model; use model::{Multiples, YoloV8, YoloV8Pose}; use candle::{DType, Device, IndexOp, Result, Tensor}; use candle_nn::{Module, VarBuilder}; use candle_transformers::object_detection::{non_maximum_suppression, Bbox, KeyPoint}; use clap::{Parser, ValueEnum}; use image::DynamicImage; // Keypoints as reported by ChatGPT :) // Nose // Left Eye // Right Eye // Left Ear // Right Ear // Left Shoulder // Right Shoulder // Left Elbow // Right Elbow // Left Wrist // Right Wrist // Left Hip // Right Hip // Left Knee // Right Knee // Left Ankle // Right Ankle const KP_CONNECTIONS: [(usize, usize); 16] = [ (0, 1), (0, 2), (1, 3), (2, 4), (5, 6), (5, 11), (6, 12), (11, 12), (5, 7), (6, 8), (7, 9), (8, 10), (11, 13), (12, 14), (13, 15), (14, 16), ]; // Model architecture from https://github.com/ultralytics/ultralytics/issues/189 // https://github.com/tinygrad/tinygrad/blob/master/examples/yolov8.py pub fn report_detect( pred: &Tensor, img: DynamicImage, w: usize, h: usize, confidence_threshold: f32, nms_threshold: f32, legend_size: u32, ) -> Result<DynamicImage> { let pred = pred.to_device(&Device::Cpu)?; let (pred_size, npreds) = pred.dims2()?; let nclasses = pred_size - 4; // The bounding boxes grouped by (maximum) class index. let mut bboxes: Vec<Vec<Bbox<Vec<KeyPoint>>>> = (0..nclasses).map(|_| vec![]).collect(); // Extract the bounding boxes for which confidence is above the threshold. for index in 0..npreds { let pred = Vec::<f32>::try_from(pred.i((.., index))?)?; let confidence = *pred[4..].iter().max_by(|x, y| x.total_cmp(y)).unwrap(); if confidence > confidence_threshold { let mut class_index = 0; for i in 0..nclasses { if pred[4 + i] > pred[4 + class_index] { class_index = i } } if pred[class_index + 4] > 0. { let bbox = Bbox { xmin: pred[0] - pred[2] / 2., ymin: pred[1] - pred[3] / 2., xmax: pred[0] + pred[2] / 2., ymax: pred[1] + pred[3] / 2., confidence, data: vec![], }; bboxes[class_index].push(bbox) } } } non_maximum_suppression(&mut bboxes, nms_threshold); // Annotate the original image and print boxes information. let (initial_h, initial_w) = (img.height(), img.width()); let w_ratio = initial_w as f32 / w as f32; let h_ratio = initial_h as f32 / h as f32; let mut img = img.to_rgb8(); let font = Vec::from(include_bytes!("roboto-mono-stripped.ttf") as &[u8]); let font = ab_glyph::FontRef::try_from_slice(&font).map_err(candle::Error::wrap)?; for (class_index, bboxes_for_class) in bboxes.iter().enumerate() { for b in bboxes_for_class.iter() { println!( "{}: {:?}", candle_examples::coco_classes::NAMES[class_index], b ); let xmin = (b.xmin * w_ratio) as i32; let ymin = (b.ymin * h_ratio) as i32; let dx = (b.xmax - b.xmin) * w_ratio; let dy = (b.ymax - b.ymin) * h_ratio; if dx >= 0. && dy >= 0. { imageproc::drawing::draw_hollow_rect_mut( &mut img, imageproc::rect::Rect::at(xmin, ymin).of_size(dx as u32, dy as u32), image::Rgb([255, 0, 0]), ); } if legend_size > 0 { imageproc::drawing::draw_filled_rect_mut( &mut img, imageproc::rect::Rect::at(xmin, ymin).of_size(dx as u32, legend_size), image::Rgb([170, 0, 0]), ); let legend = format!( "{} {:.0}%", candle_examples::coco_classes::NAMES[class_index], 100. * b.confidence ); imageproc::drawing::draw_text_mut( &mut img, image::Rgb([255, 255, 255]), xmin, ymin, ab_glyph::PxScale { x: legend_size as f32 - 1., y: legend_size as f32 - 1., }, &font, &legend, ) } } } Ok(DynamicImage::ImageRgb8(img)) } pub fn report_pose( pred: &Tensor, img: DynamicImage, w: usize, h: usize, confidence_threshold: f32, nms_threshold: f32, ) -> Result<DynamicImage> { let pred = pred.to_device(&Device::Cpu)?; let (pred_size, npreds) = pred.dims2()?; if pred_size != 17 * 3 + 4 + 1 { candle::bail!("unexpected pred-size {pred_size}"); } let mut bboxes = vec![]; // Extract the bounding boxes for which confidence is above the threshold. for index in 0..npreds { let pred = Vec::<f32>::try_from(pred.i((.., index))?)?; let confidence = pred[4]; if confidence > confidence_threshold { let keypoints = (0..17) .map(|i| KeyPoint { x: pred[3 * i + 5], y: pred[3 * i + 6], mask: pred[3 * i + 7], }) .collect::<Vec<_>>(); let bbox = Bbox { xmin: pred[0] - pred[2] / 2., ymin: pred[1] - pred[3] / 2., xmax: pred[0] + pred[2] / 2., ymax: pred[1] + pred[3] / 2., confidence, data: keypoints, }; bboxes.push(bbox) } } let mut bboxes = vec![bboxes]; non_maximum_suppression(&mut bboxes, nms_threshold); let bboxes = &bboxes[0]; // Annotate the original image and print boxes information. let (initial_h, initial_w) = (img.height(), img.width()); let w_ratio = initial_w as f32 / w as f32; let h_ratio = initial_h as f32 / h as f32; let mut img = img.to_rgb8(); for b in bboxes.iter() { println!("{b:?}"); let xmin = (b.xmin * w_ratio) as i32; let ymin = (b.ymin * h_ratio) as i32; let dx = (b.xmax - b.xmin) * w_ratio; let dy = (b.ymax - b.ymin) * h_ratio; if dx >= 0. && dy >= 0. { imageproc::drawing::draw_hollow_rect_mut( &mut img, imageproc::rect::Rect::at(xmin, ymin).of_size(dx as u32, dy as u32), image::Rgb([255, 0, 0]), ); } for kp in b.data.iter() { if kp.mask < 0.6 { continue; } let x = (kp.x * w_ratio) as i32; let y = (kp.y * h_ratio) as i32; imageproc::drawing::draw_filled_circle_mut( &mut img, (x, y), 2, image::Rgb([0, 255, 0]), ); } for &(idx1, idx2) in KP_CONNECTIONS.iter() { let kp1 = &b.data[idx1]; let kp2 = &b.data[idx2]; if kp1.mask < 0.6 || kp2.mask < 0.6 { continue; } imageproc::drawing::draw_line_segment_mut( &mut img, (kp1.x * w_ratio, kp1.y * h_ratio), (kp2.x * w_ratio, kp2.y * h_ratio), image::Rgb([255, 255, 0]), ); } } Ok(DynamicImage::ImageRgb8(img)) } #[derive(Clone, Copy, ValueEnum, Debug)] enum Which { N, S, M, L, X, } #[derive(Clone, Copy, ValueEnum, Debug)] enum YoloTask { Detect, Pose, } #[derive(Parser, Debug)] #[command(author, version, about, long_about = None)] pub struct Args { /// Run on CPU rather than on GPU. #[arg(long)] cpu: bool, /// Enable tracing (generates a trace-timestamp.json file). #[arg(long)] tracing: bool, /// Model weights, in safetensors format. #[arg(long)] model: Option<String>, /// Which model variant to use. #[arg(long, value_enum, default_value_t = Which::S)] which: Which, images: Vec<String>, /// Threshold for the model confidence level. #[arg(long, default_value_t = 0.25)] confidence_threshold: f32, /// Threshold for non-maximum suppression. #[arg(long, default_value_t = 0.45)] nms_threshold: f32, /// The task to be run. #[arg(long, default_value = "detect")] task: YoloTask, /// The size for the legend, 0 means no legend. #[arg(long, default_value_t = 14)] legend_size: u32, } impl Args { fn model(&self) -> anyhow::Result<std::path::PathBuf> { let path = match &self.model { Some(model) => std::path::PathBuf::from(model), None => { let api = hf_hub::api::sync::Api::new()?; let api = api.model("lmz/candle-yolo-v8".to_string()); let size = match self.which { Which::N => "n", Which::S => "s", Which::M => "m", Which::L => "l", Which::X => "x", }; let task = match self.task { YoloTask::Pose => "-pose", YoloTask::Detect => "", }; api.get(&format!("yolov8{size}{task}.safetensors"))? } }; Ok(path) } } pub trait Task: Module + Sized { fn load(vb: VarBuilder, multiples: Multiples) -> Result<Self>; fn report( pred: &Tensor, img: DynamicImage, w: usize, h: usize, confidence_threshold: f32, nms_threshold: f32, legend_size: u32, ) -> Result<DynamicImage>; } impl Task for YoloV8 { fn load(vb: VarBuilder, multiples: Multiples) -> Result<Self> { YoloV8::load(vb, multiples, /* num_classes=*/ 80) } fn report( pred: &Tensor, img: DynamicImage, w: usize, h: usize, confidence_threshold: f32, nms_threshold: f32, legend_size: u32, ) -> Result<DynamicImage> { report_detect( pred, img, w, h, confidence_threshold, nms_threshold, legend_size, ) } } impl Task for YoloV8Pose { fn load(vb: VarBuilder, multiples: Multiples) -> Result<Self> { YoloV8Pose::load(vb, multiples, /* num_classes=*/ 1, (17, 3)) } fn report( pred: &Tensor, img: DynamicImage, w: usize, h: usize, confidence_threshold: f32, nms_threshold: f32, _legend_size: u32, ) -> Result<DynamicImage> { report_pose(pred, img, w, h, confidence_threshold, nms_threshold) } } pub fn run<T: Task>(args: Args) -> anyhow::Result<()> { let device = candle_examples::device(args.cpu)?; // Create the model and load the weights from the file. let multiples = match args.which { Which::N => Multiples::n(), Which::S => Multiples::s(), Which::M => Multiples::m(), Which::L => Multiples::l(), Which::X => Multiples::x(), }; let model = args.model()?; let vb = unsafe { VarBuilder::from_mmaped_safetensors(&[model], DType::F32, &device)? }; let model = T::load(vb, multiples)?; println!("model loaded"); for image_name in args.images.iter() { println!("processing {image_name}"); let mut image_name = std::path::PathBuf::from(image_name); let original_image = image::ImageReader::open(&image_name)? .decode() .map_err(candle::Error::wrap)?; let (width, height) = { let w = original_image.width() as usize; let h = original_image.height() as usize; if w < h { let w = w * 640 / h; // Sizes have to be divisible by 32. (w / 32 * 32, 640) } else { let h = h * 640 / w; (640, h / 32 * 32) } }; let image_t = { let img = original_image.resize_exact( width as u32, height as u32, image::imageops::FilterType::CatmullRom, ); let data = img.to_rgb8().into_raw(); Tensor::from_vec( data, (img.height() as usize, img.width() as usize, 3), &device, )? .permute((2, 0, 1))? }; let image_t = (image_t.unsqueeze(0)?.to_dtype(DType::F32)? * (1. / 255.))?; let predictions = model.forward(&image_t)?.squeeze(0)?; println!("generated predictions {predictions:?}"); let image_t = T::report( &predictions, original_image, width, height, args.confidence_threshold, args.nms_threshold, args.legend_size, )?; image_name.set_extension("pp.jpg"); println!("writing {image_name:?}"); image_t.save(image_name)? } Ok(()) } pub fn main() -> anyhow::Result<()> { use tracing_chrome::ChromeLayerBuilder; use tracing_subscriber::prelude::*; let args = Args::parse(); let _guard = if args.tracing { let (chrome_layer, guard) = ChromeLayerBuilder::new().build(); tracing_subscriber::registry().with(chrome_layer).init(); Some(guard) } else { None }; match args.task { YoloTask::Detect => run::<YoloV8>(args)?, YoloTask::Pose => run::<YoloV8Pose>(args)?, } Ok(()) }
candle/candle-examples/examples/yolo-v8/main.rs/0
{ "file_path": "candle/candle-examples/examples/yolo-v8/main.rs", "repo_id": "candle", "token_count": 7410 }
43
#pragma once #define C10_CUDA_CHECK(EXPR) \ do { \ const cudaError_t __err = EXPR; \ } while (0) #define C10_CUDA_KERNEL_LAUNCH_CHECK() C10_CUDA_CHECK(cudaGetLastError())
candle/candle-flash-attn/kernels/error.h/0
{ "file_path": "candle/candle-flash-attn/kernels/error.h", "repo_id": "candle", "token_count": 216 }
44
use core::ffi::{c_int, c_void}; extern "C" { pub(crate) fn run_mha( q_ptr: *const c_void, k_ptr: *const c_void, v_ptr: *const c_void, o_ptr: *const c_void, softmax_lse_ptr: *const c_void, alibi_slopes_ptr: *const c_void, cu_seqlens_q_ptr: *const i32, cu_seqlens_k_ptr: *const i32, q_batch_stride: u32, k_batch_stride: u32, v_batch_stride: u32, o_batch_stride: u32, alibi_slopes_batch_stride: u32, q_row_stride: u32, k_row_stride: u32, v_row_stride: u32, o_row_stride: u32, q_head_stride: u32, k_head_stride: u32, v_head_stride: u32, o_head_stride: u32, b: u32, h: u32, h_k: u32, d: u32, d_rounded: u32, softmax_scale: f32, seqlen_q: u32, seqlen_k: u32, seqlen_q_rounded: u32, seqlen_k_rounded: u32, is_bf16: c_int, is_causal: c_int, unpadded_lse: c_int, window_size_left: c_int, window_size_right: c_int, softcap: f32, ); }
candle/candle-flash-attn/src/ffi.rs/0
{ "file_path": "candle/candle-flash-attn/src/ffi.rs", "repo_id": "candle", "token_count": 702 }
45
pub const AFFINE: &str = include_str!(concat!(env!("OUT_DIR"), "/affine.ptx")); pub const BINARY: &str = include_str!(concat!(env!("OUT_DIR"), "/binary.ptx")); pub const CAST: &str = include_str!(concat!(env!("OUT_DIR"), "/cast.ptx")); pub const CONV: &str = include_str!(concat!(env!("OUT_DIR"), "/conv.ptx")); pub const FILL: &str = include_str!(concat!(env!("OUT_DIR"), "/fill.ptx")); pub const INDEXING: &str = include_str!(concat!(env!("OUT_DIR"), "/indexing.ptx")); pub const QUANTIZED: &str = include_str!(concat!(env!("OUT_DIR"), "/quantized.ptx")); pub const REDUCE: &str = include_str!(concat!(env!("OUT_DIR"), "/reduce.ptx")); pub const SORT: &str = include_str!(concat!(env!("OUT_DIR"), "/sort.ptx")); pub const TERNARY: &str = include_str!(concat!(env!("OUT_DIR"), "/ternary.ptx")); pub const UNARY: &str = include_str!(concat!(env!("OUT_DIR"), "/unary.ptx"));
candle/candle-kernels/src/ptx.rs/0
{ "file_path": "candle/candle-kernels/src/ptx.rs", "repo_id": "candle", "token_count": 365 }
46
// MLX Kernel extracted from: // https://github.com/ml-explore/mlx/blob/main/mlx/backend/metal/kernels/steel/gemm // Copyright © 2024 Apple Inc. #include <metal_simdgroup> #include <metal_simdgroup_matrix> #include <metal_stdlib> #define STEEL_CONST static constant constexpr const #define STEEL_PRAGMA_UNROLL _Pragma("clang loop unroll(full)") using namespace metal; // https://github.com/ml-explore/mlx/blob/02efb310cac667bc547d1b96f21596c221f84fe7/mlx/backend/metal/kernels/steel/gemm/params.h#L1 /////////////////////////////////////////////////////////////////////////////// // GEMM param classes /////////////////////////////////////////////////////////////////////////////// struct GEMMParams { const int M; const int N; const int K; const int lda; const int ldb; const int ldd; const int tiles_n; const int tiles_m; const size_t batch_stride_a; const size_t batch_stride_b; const size_t batch_stride_d; const int swizzle_log; const int gemm_k_iterations_aligned; const int batch_ndim; }; struct GEMMSpiltKParams { const int M; const int N; const int K; const int lda; const int ldb; const int ldc; const int tiles_n; const int tiles_m; const int split_k_partitions; const int split_k_partition_stride; const int split_k_partition_size; const int gemm_k_iterations_aligned; }; struct GEMMAddMMParams { const int ldc; const int fdc; const size_t batch_stride_c; const float alpha; const float beta; }; // https://github.com/ml-explore/mlx/blob/02efb310cac667bc547d1b96f21596c221f84fe7/mlx/backend/metal/kernels/steel/gemm/loader.h#L1 /////////////////////////////////////////////////////////////////////////////// // Loading helper /////////////////////////////////////////////////////////////////////////////// template < typename T, short BROWS, short BCOLS, short dst_ld, short reduction_dim, short tgp_size, short alignment = 1, short n_reads = (BCOLS * BROWS) / (tgp_size), short TCOLS = BCOLS / n_reads, short TROWS = tgp_size / TCOLS> struct BlockLoader { STEEL_CONST short n_rows = (BROWS + TROWS - 1) / TROWS; STEEL_CONST short vec_size = n_reads; // Leading dimension for src const int src_ld; const int tile_stride; // Thread location indices const short thread_idx; const short bi; const short bj; // threadgroup and device memory threadgroup T* dst; const device T* src; struct alignas(alignment * sizeof(T)) ReadVector { uint8_t v[sizeof(T) * vec_size]; }; /* Constructor */ METAL_FUNC BlockLoader( const device T* src_, const int src_ld_, threadgroup T* dst_, ushort simd_group_id [[simdgroup_index_in_threadgroup]], ushort simd_lane_id [[thread_index_in_simdgroup]]) : src_ld(src_ld_), tile_stride(reduction_dim ? BCOLS : BROWS * src_ld), thread_idx(simd_group_id * 32 + simd_lane_id), bi(thread_idx / TCOLS), bj(vec_size * (thread_idx % TCOLS)), dst(dst_ + bi * dst_ld + bj), src(src_ + bi * src_ld + bj) {} /* Apply operation to threadgroup without bound checking */ template <typename UnaryOp> METAL_FUNC void apply_inplace_op(thread const UnaryOp& op) const { STEEL_PRAGMA_UNROLL for (short i = 0; i < BROWS; i += TROWS) { STEEL_PRAGMA_UNROLL for (short j = 0; j < vec_size; j++) { dst[i * dst_ld + j] = op.apply(dst[i * dst_ld + j]); } } } /* Load from device memory into threadgroup memory - without bound checking */ METAL_FUNC void load_unsafe() const { STEEL_PRAGMA_UNROLL for (short i = 0; i < BROWS; i += TROWS) { *((threadgroup ReadVector*)(&dst[i * dst_ld])) = *((const device ReadVector*)(&src[i * src_ld])); } } /* Load from device memory into threadgroup memory - with bound checking */ METAL_FUNC void load_safe(short2 src_tile_dim) const { src_tile_dim = src_tile_dim - short2(bj, bi); // Skip loading if thread has no valid reads if (src_tile_dim.x <= 0 || src_tile_dim.y <= 0) { STEEL_PRAGMA_UNROLL for (short i = 0; i < BROWS; i += TROWS) { STEEL_PRAGMA_UNROLL for (short j = 0; j < vec_size; j++) { dst[i * dst_ld + j] = T(0); } } return; } // Use fast thread memory for bound checks bool tmp_idx[vec_size]; T tmp_val[vec_size]; STEEL_PRAGMA_UNROLL for (short i = 0; i < BROWS; i += TROWS) { // Make sure tmp_idx only contains valid indices STEEL_PRAGMA_UNROLL for (short j = 0; j < vec_size; j++) { tmp_idx[j] = (i < src_tile_dim.y) && (j < src_tile_dim.x); } // Read valid indices into tmp_val STEEL_PRAGMA_UNROLL for (short j = 0; j < vec_size; j++) { tmp_val[j] = src[(tmp_idx[j] ? i * src_ld + j : 0)]; } // Zero out uneeded values STEEL_PRAGMA_UNROLL for (short j = 0; j < vec_size; j++) { tmp_val[j] = tmp_idx[j] ? tmp_val[j] : T(0); } // Copy values to threadgroup memory STEEL_PRAGMA_UNROLL for (short j = 0; j < vec_size; j++) { dst[i * dst_ld + j] = tmp_val[j]; } } } /* Iteration helper */ METAL_FUNC void next() { src += tile_stride; } }; // https://github.com/ml-explore/mlx/blob/02efb310cac667bc547d1b96f21596c221f84fe7/mlx/backend/metal/kernels/steel/gemm/transforms.h#L1 /////////////////////////////////////////////////////////////////////////////// // Transforms and Epilogues /////////////////////////////////////////////////////////////////////////////// template <typename OutT, typename InT> struct TransformNone { static METAL_FUNC OutT apply(InT x) { return static_cast<OutT>(x); } static METAL_FUNC OutT apply(InT x, OutT) { return static_cast<OutT>(x); } }; template <typename OutT, typename InT> struct TransformAdd { TransformAdd(const float, const float) {} static METAL_FUNC OutT apply(InT x) { return static_cast<OutT>(x); } static METAL_FUNC OutT apply(InT x, OutT c) { return static_cast<OutT>(x) + c; } }; template <typename OutT, typename InT> struct TransformAxpby { const float alpha; const float beta; TransformAxpby(const float alpha_, const float beta_) : alpha(alpha_), beta(beta_) {} static METAL_FUNC OutT apply(InT x) { return static_cast<OutT>(x); } METAL_FUNC OutT apply(InT x, OutT c) const { return static_cast<OutT>(x * alpha + (beta * c)); } }; template <typename T> struct AccumHelper { typedef float accum_type; }; struct BlockSwizzle { static METAL_FUNC int2 swizzle(uint3 tid [[threadgroup_position_in_grid]], const int swizzle_log) { const int tid_x = (tid.x) >> swizzle_log; const int tid_y = ((tid.y) << swizzle_log) + ((tid.x) & ((1 << swizzle_log) - 1)); return int2(tid_x, tid_y); } }; // https://github.com/ml-explore/mlx/blob/02efb310cac667bc547d1b96f21596c221f84fe7/mlx/backend/metal/kernels/steel/gemm/mma.h#L1 /////////////////////////////////////////////////////////////////////////////// // MMA helper /////////////////////////////////////////////////////////////////////////////// template < typename T, typename U, int BM, int BN, int BK, int WM, int WN, bool transpose_a, bool transpose_b, short lda_tgp, short ldb_tgp, typename AccumType = float, typename Epilogue = TransformNone<U, AccumType>> struct BlockMMA { // Warp tile simdgroup matrix strides along M STEEL_CONST short TM_stride = 8 * WM; // Warp tile simdgroup matrix strides along M STEEL_CONST short TN_stride = 8 * WN; // Warp tile size along M STEEL_CONST short TM = BM / TM_stride; // Warp tile size along N STEEL_CONST short TN = BN / TN_stride; // Strides of A, B along reduction axis STEEL_CONST short simd_stride_a = { transpose_a ? TM_stride : TM_stride * lda_tgp}; STEEL_CONST short simd_stride_b = { transpose_b ? TN_stride * ldb_tgp : TN_stride}; // Jump between elements STEEL_CONST short jump_a = {transpose_a ? lda_tgp : 1}; STEEL_CONST short jump_b = {transpose_b ? ldb_tgp : 1}; STEEL_CONST short tile_stride_a = {transpose_a ? 8 * lda_tgp : 8}; STEEL_CONST short tile_stride_b = {transpose_b ? 8 : 8 * ldb_tgp}; // Simdgroup matrices simdgroup_matrix<AccumType, 8, 8> Asimd[TM]; simdgroup_matrix<AccumType, 8, 8> Bsimd[TN]; simdgroup_matrix<AccumType, 8, 8> results[TM * TN] = { simdgroup_matrix<AccumType, 8, 8>(0)}; // Offsets within threadgroup const short tm; const short tn; short sm; short sn; short As_offset; short Bs_offset; /* Constructor */ METAL_FUNC BlockMMA( ushort simd_group_id [[simdgroup_index_in_threadgroup]], ushort simd_lane_id [[thread_index_in_simdgroup]]) : tm(8 * (simd_group_id / WN)), tn(8 * (simd_group_id % WN)) { // Determine thread position in simdgroup matrix short qid = simd_lane_id / 4; sm = (qid & 4) + (simd_lane_id / 2) % 4; sn = (qid & 2) * 2 + (simd_lane_id % 2) * 2; // Determine thread and simdgroup offset As_offset = transpose_a ? ((sn)*lda_tgp + (tm + sm)) : ((sn) + (tm + sm) * lda_tgp); Bs_offset = transpose_b ? ((tn + sn) * ldb_tgp + (sm)) : ((sm)*ldb_tgp + (tn + sn)); } /* (BM, BK) X (BK, BN) multiply accumulate function */ METAL_FUNC void mma(const threadgroup T* As, const threadgroup T* Bs) { // Adjust for simdgroup and thread location As += As_offset; Bs += Bs_offset; // Iterate over BK in blocks of 8 STEEL_PRAGMA_UNROLL for (short kk = 0; kk < BK; kk += 8) { simdgroup_barrier(mem_flags::mem_none); // Load elements from threadgroup A as simdgroup matrices STEEL_PRAGMA_UNROLL for (short i = 0; i < TM; i++) { Asimd[i].thread_elements()[0] = static_cast<AccumType>(As[i * simd_stride_a + 0]); Asimd[i].thread_elements()[1] = static_cast<AccumType>(As[i * simd_stride_a + jump_a]); } simdgroup_barrier(mem_flags::mem_none); // Load elements from threadgroup B as simdgroup matrices STEEL_PRAGMA_UNROLL for (short j = 0; j < TN; j++) { Bsimd[j].thread_elements()[0] = static_cast<AccumType>(Bs[j * simd_stride_b + 0]); Bsimd[j].thread_elements()[1] = static_cast<AccumType>(Bs[j * simd_stride_b + jump_b]); } simdgroup_barrier(mem_flags::mem_none); // Multiply and accumulate into result simdgroup matrices STEEL_PRAGMA_UNROLL for (short i = 0; i < TM; i++) { STEEL_PRAGMA_UNROLL for (short j = 0; j < TN; j++) { short j_serp = (i % 2) ? (TN - 1 - j) : j; simdgroup_multiply_accumulate( results[i * TN + j_serp], Asimd[i], Bsimd[j_serp], results[i * TN + j_serp]); } } // Progress to next simdgroup tile As += tile_stride_a; Bs += tile_stride_b; } } /* Store results from simdgroup_matrix results into device memory */ METAL_FUNC void store_result(device U* D, const int ldd) const { // Adjust for simdgroup and thread location D += (sm + tm) * ldd + tn + sn; // Loop over all simdgroup tiles STEEL_PRAGMA_UNROLL for (short i = 0; i < TM; i++) { STEEL_PRAGMA_UNROLL for (short j = 0; j < TN; j++) { // Get accumulated result and associated offset in C thread const auto& accum = results[i * TN + j].thread_elements(); int offset = (i * TM_stride) * ldd + (j * TN_stride); // Apply epilogue U outs[2] = {Epilogue::apply(accum[0]), Epilogue::apply(accum[1])}; // Write out D D[offset] = outs[0]; D[offset + 1] = outs[1]; } } } METAL_FUNC void store_result_safe(device U* D, const int ldd, short2 dst_tile_dims) const { // Adjust for simdgroup and thread location D += (sm + tm) * ldd + (tn + sn); dst_tile_dims -= short2(tn + sn, sm + tm); if (dst_tile_dims.x <= 0 || dst_tile_dims.y <= 0) return; STEEL_PRAGMA_UNROLL for (int i = 0; i < TM; i++) { if (i * TM_stride < dst_tile_dims.y) { STEEL_PRAGMA_UNROLL for (int j = 0; j < TN; j++) { // Get accumulated result and associated offset in C thread const auto& accum = results[i * TN + j].thread_elements(); int offset = (i * TM_stride) * ldd + (j * TN_stride); // Apply epilogue and output C if (j * TN_stride < dst_tile_dims.x) { D[offset] = Epilogue::apply(accum[0]); } if (j * TN_stride + 1 < dst_tile_dims.x) { D[offset + 1] = Epilogue::apply(accum[1]); } } } } } /* Apply epilogue */ template <typename UnaryEpilogue> METAL_FUNC void apply_epilogue(thread const UnaryEpilogue& epilogue_op) { // Loop over all simdgroup tiles STEEL_PRAGMA_UNROLL for (short i = 0; i < TM; i++) { STEEL_PRAGMA_UNROLL for (short j = 0; j < TN; j++) { // Get accumulated result and associated offset in C thread auto& accum = results[i * TN + j].thread_elements(); // Apply epilogue accum[0] = epilogue_op.apply(accum[0]); accum[1] = epilogue_op.apply(accum[1]); } } } /* Apply epilogue */ template <typename BinaryEpilogue> METAL_FUNC void apply_epilogue( const device U* C, const int ldc, const int fdc, thread const BinaryEpilogue& epilogue_op) { // Adjust for simdgroup and thread location C += (sm + tm) * ldc + (tn + sn) * fdc; // Loop over all simdgroup tiles STEEL_PRAGMA_UNROLL for (short i = 0; i < TM; i++) { STEEL_PRAGMA_UNROLL for (short j = 0; j < TN; j++) { // Get accumulated result and associated offset in C thread auto& accum = results[i * TN + j].thread_elements(); int offset_c = (i * TM_stride) * ldc + (j * TN_stride) * fdc; // Apply epilogue accum[0] = epilogue_op.apply(accum[0], C[offset_c]); accum[1] = epilogue_op.apply(accum[1], C[offset_c + fdc]); } } } /* Apply epilogue */ template <typename BinaryEpilogue> METAL_FUNC void apply_epilogue_safe( const device U* C, const int ldc, const int fdc, short2 dst_tile_dims, thread const BinaryEpilogue& epilogue_op) { // Adjust for simdgroup and thread location C += (sm + tm) * ldc + (tn + sn) * fdc; dst_tile_dims -= short2(tn + sn, sm + tm); if (dst_tile_dims.x <= 0 || dst_tile_dims.y <= 0) return; // Loop over all simdgroup tiles STEEL_PRAGMA_UNROLL for (short i = 0; i < TM; i++) { STEEL_PRAGMA_UNROLL for (short j = 0; j < TN; j++) { // Get accumulated result and associated offset in C thread auto& accum = results[i * TN + j].thread_elements(); int offset_c = (i * TM_stride) * ldc + (j * TN_stride) * fdc; // Read C U c_elems[2] = {0}; if ((j * TN_stride + 1) < dst_tile_dims.x) { c_elems[0] = C[offset_c]; c_elems[1] = C[offset_c + fdc]; } else if ((j * TN_stride) < dst_tile_dims.x) { c_elems[0] = C[offset_c]; } // Apply epilogue accum[0] = epilogue_op.apply(accum[0], c_elems[0]); accum[1] = epilogue_op.apply(accum[1], c_elems[1]); } } } /* Store results from simdgroup_matrix results into device memory */ METAL_FUNC void store_result( device U* D, const int ldd, const device U* C, const int ldc, const int fdc, thread const Epilogue& epilogue_op) const { // Adjust for simdgroup and thread location C += (sm + tm) * ldc + (tn + sn) * fdc; D += (sm + tm) * ldd + tn + sn; // Loop over all simdgroup tiles STEEL_PRAGMA_UNROLL for (short i = 0; i < TM; i++) { STEEL_PRAGMA_UNROLL for (short j = 0; j < TN; j++) { // Get accumulated result and associated offset in C thread const auto& accum = results[i * TN + j].thread_elements(); int offset_c = (i * TM_stride) * ldc + (j * TN_stride) * fdc; int offset_d = (i * TM_stride) * ldd + (j * TN_stride); // Apply epilogue U outs[2] = { epilogue_op.apply(accum[0], C[offset_c]), epilogue_op.apply(accum[1], C[offset_c + fdc])}; // Write out D D[offset_d] = outs[0]; D[offset_d + 1] = outs[1]; } } } METAL_FUNC void store_result_safe( device U* D, const int ldd, const device U* C, const int ldc, const int fdc, short2 dst_tile_dims, thread const Epilogue& epilogue_op) const { // Adjust for simdgroup and thread location C += (sm + tm) * ldc + (tn + sn) * fdc; D += (sm + tm) * ldd + tn + sn; dst_tile_dims -= short2(tn + sn, sm + tm); if (dst_tile_dims.x <= 0 || dst_tile_dims.y <= 0) return; STEEL_PRAGMA_UNROLL for (int i = 0; i < TM; i++) { if (i * TM_stride < dst_tile_dims.y) { STEEL_PRAGMA_UNROLL for (int j = 0; j < TN; j++) { // Get accumulated result and associated offset in C thread const auto& accum = results[i * TN + j].thread_elements(); int offset_c = (i * TM_stride) * ldc + (j * TN_stride) * fdc; int offset_d = (i * TM_stride) * ldd + (j * TN_stride); // Apply epilogue and output C if (j * TN_stride < dst_tile_dims.x) { D[offset_d] = epilogue_op.apply(accum[0], C[offset_c]); } if (j * TN_stride + 1 < dst_tile_dims.x) { D[offset_d + 1] = epilogue_op.apply(accum[1], C[offset_c + fdc]); } } } } } }; // https://github.com/ml-explore/mlx/blob/02efb310cac667bc547d1b96f21596c221f84fe7/mlx/backend/metal/kernels/steel/gemm/gemm.h#L1 /////////////////////////////////////////////////////////////////////////////// // GEMM kernel class /////////////////////////////////////////////////////////////////////////////// template <bool M_aligned, bool N_aligned, bool K_aligned> struct LoopAlignment {}; template < typename T, typename U, int BM, int BN, int BK, int WM, int WN, bool transpose_a, bool transpose_b, bool MN_aligned, bool K_aligned, typename AccumType = typename AccumHelper<T>::accum_type, typename Epilogue = TransformNone<U, AccumType>> struct GEMMKernel { STEEL_CONST short tgp_padding_a = 16 / sizeof(T); STEEL_CONST short tgp_padding_b = 16 / sizeof(T); STEEL_CONST short tgp_mem_size_a = transpose_a ? BK * (BM + tgp_padding_a) : BM * (BK + tgp_padding_a); STEEL_CONST short tgp_mem_size_b = transpose_b ? BN * (BK + tgp_padding_b) : BK * (BN + tgp_padding_b); STEEL_CONST short tgp_mem_size = tgp_mem_size_a + tgp_mem_size_b; STEEL_CONST short tgp_size = WM * WN * 32; using loader_a_t = BlockLoader< T, transpose_a ? BK : BM, transpose_a ? BM : BK, transpose_a ? BM + tgp_padding_a : BK + tgp_padding_a, !transpose_a, tgp_size>; using loader_b_t = BlockLoader< T, transpose_b ? BN : BK, transpose_b ? BK : BN, transpose_b ? BK + tgp_padding_b : BN + tgp_padding_b, transpose_b, tgp_size>; using mma_t = BlockMMA< T, U, BM, BN, BK, WM, WN, transpose_a, transpose_b, transpose_a ? BM + tgp_padding_a : BK + tgp_padding_a, transpose_b ? BK + tgp_padding_b : BN + tgp_padding_b, AccumType, Epilogue>; /* Main kernel function */ template <bool M_aligned, bool N_aligned, bool K_aligned_> static METAL_FUNC void gemm_loop( threadgroup T* As [[threadgroup(0)]], threadgroup T* Bs [[threadgroup(1)]], const int gemm_k_iterations, thread loader_a_t& loader_a, thread loader_b_t& loader_b, thread mma_t& mma_op, thread const short& tgp_bm, thread const short& tgp_bn, thread const short& lbk, LoopAlignment<M_aligned, N_aligned, K_aligned_> l = {}) { // Appease the compiler (void)l; short2 tile_dims_A = transpose_a ? short2(tgp_bm, BK) : short2(BK, tgp_bm); short2 tile_dims_B = transpose_b ? short2(BK, tgp_bn) : short2(tgp_bn, BK); for (int k = 0; k < gemm_k_iterations; k++) { threadgroup_barrier(mem_flags::mem_threadgroup); // Load elements into threadgroup if (M_aligned) { loader_a.load_unsafe(); } else { loader_a.load_safe(tile_dims_A); } if (N_aligned) { loader_b.load_unsafe(); } else { loader_b.load_safe(tile_dims_B); } threadgroup_barrier(mem_flags::mem_threadgroup); // Multiply and accumulate threadgroup elements mma_op.mma(As, Bs); // Prepare for next iteration loader_a.next(); loader_b.next(); } if (!K_aligned_) { threadgroup_barrier(mem_flags::mem_threadgroup); short2 tile_dims_A_last = transpose_a ? short2(tgp_bm, lbk) : short2(lbk, tgp_bm); short2 tile_dims_B_last = transpose_b ? short2(lbk, tgp_bn) : short2(tgp_bn, lbk); loader_a.load_safe(tile_dims_A_last); loader_b.load_safe(tile_dims_B_last); threadgroup_barrier(mem_flags::mem_threadgroup); mma_op.mma(As, Bs); } } /* Main kernel function */ static METAL_FUNC void run( const device T* A [[buffer(0)]], const device T* B [[buffer(1)]], device U* D [[buffer(2)]], const constant GEMMParams* params [[buffer(3)]], threadgroup T* As [[threadgroup(0)]], threadgroup T* Bs [[threadgroup(1)]], uint simd_lane_id [[thread_index_in_simdgroup]], uint simd_group_id [[simdgroup_index_in_threadgroup]], uint3 tid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) { // Pacifying compiler (void)lid; const int tid_y = ((tid.y) << params->swizzle_log) + ((tid.x) & ((1 << params->swizzle_log) - 1)); const int tid_x = (tid.x) >> params->swizzle_log; if (params->tiles_n <= tid_x || params->tiles_m <= tid_y) { return; } threadgroup_barrier(mem_flags::mem_none); // Find block in A, B, C const int c_row = tid_y * BM; const int c_col = tid_x * BN; const size_t c_row_long = size_t(c_row); const size_t c_col_long = size_t(c_col); A += transpose_a ? c_row_long : c_row_long * params->lda; B += transpose_b ? c_col_long * params->ldb : c_col_long; D += c_row_long * params->ldd + c_col_long; // Prepare threadgroup loading operations thread loader_a_t loader_a(A, params->lda, As, simd_group_id, simd_lane_id); thread loader_b_t loader_b(B, params->ldb, Bs, simd_group_id, simd_lane_id); // Prepare threadgroup mma operation thread mma_t mma_op(simd_group_id, simd_lane_id); int gemm_k_iterations = params->gemm_k_iterations_aligned; /////////////////////////////////////////////////////////////////////////////// // MNK aligned loop if (MN_aligned) { for (int k = 0; k < gemm_k_iterations; k++) { threadgroup_barrier(mem_flags::mem_threadgroup); // Load elements into threadgroup loader_a.load_unsafe(); loader_b.load_unsafe(); threadgroup_barrier(mem_flags::mem_threadgroup); // Multiply and accumulate threadgroup elements mma_op.mma(As, Bs); // Prepare for next iteration loader_a.next(); loader_b.next(); } threadgroup_barrier(mem_flags::mem_none); // Loop tail if (!K_aligned) { int lbk = params->K - params->gemm_k_iterations_aligned * BK; short2 tile_dims_A = transpose_a ? short2(BM, lbk) : short2(lbk, BM); short2 tile_dims_B = transpose_b ? short2(lbk, BN) : short2(BN, lbk); loader_a.load_safe(tile_dims_A); loader_b.load_safe(tile_dims_B); threadgroup_barrier(mem_flags::mem_threadgroup); mma_op.mma(As, Bs); } // Store results to device memory mma_op.store_result(D, params->ldd); return; } /////////////////////////////////////////////////////////////////////////////// // MN unaligned loop else { // Loop over K - unaligned case short tgp_bm = min(BM, params->M - c_row); short tgp_bn = min(BN, params->N - c_col); short leftover_bk = params->K - params->gemm_k_iterations_aligned * BK; if (tgp_bm == BM && tgp_bn == BN) { gemm_loop<true, true, K_aligned>( As, Bs, gemm_k_iterations, loader_a, loader_b, mma_op, tgp_bm, tgp_bn, leftover_bk); mma_op.store_result(D, params->ldd); return; } else if (tgp_bn == BN) { gemm_loop<false, true, K_aligned>( As, Bs, gemm_k_iterations, loader_a, loader_b, mma_op, tgp_bm, tgp_bn, leftover_bk); mma_op.store_result_safe(D, params->ldd, short2(tgp_bn, tgp_bm)); return; } else if (tgp_bm == BM) { gemm_loop<true, false, K_aligned>( As, Bs, gemm_k_iterations, loader_a, loader_b, mma_op, tgp_bm, tgp_bn, leftover_bk); mma_op.store_result_safe(D, params->ldd, short2(tgp_bn, tgp_bm)); return; } else { gemm_loop<false, false, K_aligned>( As, Bs, gemm_k_iterations, loader_a, loader_b, mma_op, tgp_bm, tgp_bn, leftover_bk); mma_op.store_result_safe(D, params->ldd, short2(tgp_bn, tgp_bm)); return; } } } }; // utils.h /////////////////////////////////////////////////////////////////////////////// // Single Array with generic dims template <typename stride_t> METAL_FUNC stride_t elem_to_loc( uint elem, device const int* shape, device const stride_t* strides, int ndim) { stride_t loc = 0; for (int i = ndim - 1; i >= 0 && elem > 0; --i) { loc += (elem % shape[i]) * strides[i]; elem /= shape[i]; } return loc; } template <typename stride_t> METAL_FUNC stride_t elem_to_loc( uint elem, constant const int* shape, constant const stride_t* strides, int ndim) { stride_t loc = 0; for (int i = ndim - 1; i >= 0 && elem > 0; --i) { loc += (elem % shape[i]) * strides[i]; elem /= shape[i]; } return loc; } template <typename stride_t> METAL_FUNC stride_t elem_to_loc( stride_t elem, device const int* shape, device const stride_t* strides, int ndim) { stride_t loc = 0; for (int i = ndim - 1; i >= 0 && elem > 0; --i) { loc += (elem % shape[i]) * strides[i]; elem /= shape[i]; } return loc; } template <typename stride_t> METAL_FUNC stride_t elem_to_loc( stride_t elem, constant const int* shape, constant const stride_t* strides, int ndim) { stride_t loc = 0; for (int i = ndim - 1; i >= 0 && elem > 0; --i) { loc += (elem % shape[i]) * strides[i]; elem /= shape[i]; } return loc; } // Non templated version to handle arbitrary dims template <typename stride_t> METAL_FUNC stride_t elem_to_loc( uint3 elem, constant const int* shape, constant const stride_t* strides, int ndim) { stride_t loc = elem.x * strides[ndim - 1] + elem.y * strides[ndim - 2]; for (int d = ndim - 3; d >= 0; --d) { loc += (elem.z % shape[d]) * strides[d]; elem.z /= shape[d]; } return loc; } METAL_FUNC ulong2 elem_to_loc_broadcast( uint elem, constant const int* shape, constant const size_t* a_strides, constant const size_t* b_strides, int ndim) { ulong loc_a{0}; ulong loc_b{0}; for (int i = ndim - 1; i >= 0 && elem > 0; --i) { int pos_in_dim = (elem % shape[i]); elem /= shape[i]; loc_a += pos_in_dim * a_strides[i]; loc_b += pos_in_dim * b_strides[i]; } return ulong2(loc_a, loc_b); } METAL_FUNC ulong3 elem_to_loc_broadcast( uint elem, constant const int* shape, constant const size_t* a_strides, constant const size_t* b_strides, constant const size_t* c_strides, int ndim) { ulong loc_a{0}; ulong loc_b{0}; ulong loc_c{0}; for (int i = ndim - 1; i >= 0 && elem > 0; --i) { int pos_in_dim = (elem % shape[i]); elem /= shape[i]; loc_a += pos_in_dim * a_strides[i]; loc_b += pos_in_dim * b_strides[i]; loc_c += pos_in_dim * c_strides[i]; } return ulong3(loc_a, loc_b, loc_c); } // https://github.com/ml-explore/mlx/blob/02efb310cac667bc547d1b96f21596c221f84fe7/mlx/backend/metal/kernels/steel/gemm/kernels/steel_gemm_fused.h#L1 /////////////////////////////////////////////////////////////////////////////// // GEMM kernels /////////////////////////////////////////////////////////////////////////////// constant bool has_batch [[function_constant(10)]]; constant bool use_out_source [[function_constant(100)]]; constant bool do_axpby [[function_constant(110)]]; constant bool align_M [[function_constant(200)]]; constant bool align_N [[function_constant(201)]]; constant bool align_K [[function_constant(202)]]; constant bool do_gather [[function_constant(300)]]; constant bool gather_bias = do_gather && use_out_source; // clang-format off template < typename T, int BM, int BN, int BK, int WM, int WN, bool transpose_a, bool transpose_b, typename AccumType = float> [[kernel, max_total_threads_per_threadgroup(WM* WN * 32)]] void gemm( const device T* A [[buffer(0)]], const device T* B [[buffer(1)]], const device T* C [[buffer(2), function_constant(use_out_source)]], device T* D [[buffer(3)]], const constant GEMMParams* params [[buffer(4)]], const constant GEMMAddMMParams* addmm_params [[buffer(5), function_constant(use_out_source)]], const constant int* batch_shape [[buffer(6)]], const constant size_t* batch_strides [[buffer(7)]], const constant uint32_t* lhs_indices [[buffer(10), function_constant(do_gather)]], const constant uint32_t* rhs_indices [[buffer(11), function_constant(do_gather)]], const constant uint32_t* C_indices [[buffer(12), function_constant(gather_bias)]], const constant int* operand_shape [[buffer(13), function_constant(do_gather)]], const constant size_t* operand_strides [[buffer(14), function_constant(do_gather)]], const constant packed_int3& operand_batch_ndim [[buffer(15), function_constant(do_gather)]], uint simd_lane_id [[thread_index_in_simdgroup]], uint simd_group_id [[simdgroup_index_in_threadgroup]], uint3 tid [[threadgroup_position_in_grid]], uint3 lid [[thread_position_in_threadgroup]]) { // clang-format on // Pacifying compiler (void)lid; using gemm_kernel = GEMMKernel< T, T, BM, BN, BK, WM, WN, transpose_a, transpose_b, true, true, AccumType>; using loader_a_t = typename gemm_kernel::loader_a_t; using loader_b_t = typename gemm_kernel::loader_b_t; using mma_t = typename gemm_kernel::mma_t; // Find block const int tid_y = ((tid.y) << params->swizzle_log) + ((tid.x) & ((1 << params->swizzle_log) - 1)); const int tid_x = (tid.x) >> params->swizzle_log; // Exit early if out of bounds if (params->tiles_n <= tid_x || params->tiles_m <= tid_y) { return; } // Adjust for batch // Handle gather if (do_gather) { // Read indices uint32_t indx_A, indx_B, indx_C; if (has_batch) { const constant size_t* indx_A_bstrides = batch_strides; const constant size_t* indx_B_bstrides = batch_strides + params->batch_ndim; ulong2 indx_offsets = elem_to_loc_broadcast( tid.z, batch_shape, indx_A_bstrides, indx_B_bstrides, params->batch_ndim); indx_A = lhs_indices[indx_offsets.x]; indx_B = rhs_indices[indx_offsets.y]; if (use_out_source) { const constant size_t* indx_C_bstrides = indx_B_bstrides + params->batch_ndim; auto indx_offset_C = elem_to_loc( tid.z, batch_shape, indx_C_bstrides, params->batch_ndim); indx_C = C_indices[indx_offset_C]; } } else { indx_A = lhs_indices[params->batch_stride_a * tid.z]; indx_B = rhs_indices[params->batch_stride_b * tid.z]; if (use_out_source) { indx_C = C_indices[addmm_params->batch_stride_c * tid.z]; } } // Translate indices to offsets int batch_ndim_A = operand_batch_ndim.x; const constant int* batch_shape_A = operand_shape; const constant size_t* batch_strides_A = operand_strides; A += elem_to_loc(indx_A, batch_shape_A, batch_strides_A, batch_ndim_A); int batch_ndim_B = operand_batch_ndim.y; const constant int* batch_shape_B = batch_shape_A + batch_ndim_A; const constant size_t* batch_strides_B = batch_strides_A + batch_ndim_A; B += elem_to_loc(indx_B, batch_shape_B, batch_strides_B, batch_ndim_B); if (use_out_source) { int batch_ndim_C = operand_batch_ndim.z; const constant int* batch_shape_C = batch_shape_B + batch_ndim_B; const constant size_t* batch_strides_C = batch_strides_B + batch_ndim_B; C += elem_to_loc(indx_C, batch_shape_C, batch_strides_C, batch_ndim_C); } } // Handle regular batch else { if (has_batch) { const constant size_t* A_bstrides = batch_strides; const constant size_t* B_bstrides = batch_strides + params->batch_ndim; ulong2 batch_offsets = elem_to_loc_broadcast( tid.z, batch_shape, A_bstrides, B_bstrides, params->batch_ndim); A += batch_offsets.x; B += batch_offsets.y; if (use_out_source) { const constant size_t* C_bstrides = B_bstrides + params->batch_ndim; C += elem_to_loc(tid.z, batch_shape, C_bstrides, params->batch_ndim); } } else { A += params->batch_stride_a * tid.z; B += params->batch_stride_b * tid.z; if (use_out_source) { C += addmm_params->batch_stride_c * tid.z; } } } D += params->batch_stride_d * tid.z; // Prepare threadgroup memory threadgroup T As[gemm_kernel::tgp_mem_size_a]; threadgroup T Bs[gemm_kernel::tgp_mem_size_b]; threadgroup_barrier(mem_flags::mem_none); // Find block in A, B, C const int c_row = tid_y * BM; const int c_col = tid_x * BN; const size_t c_row_long = size_t(c_row); const size_t c_col_long = size_t(c_col); A += transpose_a ? c_row_long : c_row_long * params->lda; B += transpose_b ? c_col_long * params->ldb : c_col_long; D += c_row_long * params->ldd + c_col_long; if (use_out_source) { C += c_row_long * addmm_params->ldc + c_col_long * addmm_params->fdc; } // Prepare threadgroup mma operation thread mma_t mma_op(simd_group_id, simd_lane_id); // Prepare threadgroup loading operations thread loader_a_t loader_a(A, params->lda, As, simd_group_id, simd_lane_id); thread loader_b_t loader_b(B, params->ldb, Bs, simd_group_id, simd_lane_id); // Prepare threadgroup bounds const short tgp_bm = align_M ? BM : short(min(BM, params->M - c_row)); const short tgp_bn = align_N ? BN : short(min(BN, params->N - c_col)); // Prepare iterations int gemm_k_iterations = params->gemm_k_iterations_aligned; // Do unaligned K iterations first if (!align_K) { const int k_last = params->gemm_k_iterations_aligned * BK; const int k_remain = params->K - k_last; const size_t k_jump_a = transpose_a ? params->lda * size_t(k_last) : size_t(k_last); const size_t k_jump_b = transpose_b ? size_t(k_last) : params->ldb * size_t(k_last); // Move loader source ahead to end loader_a.src += k_jump_a; loader_b.src += k_jump_b; // Load tile const short2 tile_dims_A = transpose_a ? short2(tgp_bm, k_remain) : short2(k_remain, tgp_bm); const short2 tile_dims_B = transpose_b ? short2(k_remain, tgp_bn) : short2(tgp_bn, k_remain); loader_a.load_safe(tile_dims_A); loader_b.load_safe(tile_dims_B); threadgroup_barrier(mem_flags::mem_threadgroup); // Do matmul mma_op.mma(As, Bs); // Reset source back to start loader_a.src -= k_jump_a; loader_b.src -= k_jump_b; } const TransformAdd<AccumType, AccumType> epilogue_op_add( addmm_params->alpha, addmm_params->beta); const TransformAxpby<AccumType, AccumType> epilogue_op_axpby( addmm_params->alpha, addmm_params->beta); /////////////////////////////////////////////////////////////////////////////// // MNK aligned loop if (align_M && align_N) { // Do gemm for (int k = 0; k < gemm_k_iterations; k++) { threadgroup_barrier(mem_flags::mem_threadgroup); // Load elements into threadgroup loader_a.load_unsafe(); loader_b.load_unsafe(); threadgroup_barrier(mem_flags::mem_threadgroup); // Multiply and accumulate threadgroup elements mma_op.mma(As, Bs); // Prepare for next iteration loader_a.next(); loader_b.next(); } threadgroup_barrier(mem_flags::mem_none); // Do epilogue if (use_out_source) { if (do_axpby) { mma_op.apply_epilogue( C, addmm_params->ldc, addmm_params->fdc, epilogue_op_axpby); } else { mma_op.apply_epilogue( C, addmm_params->ldc, addmm_params->fdc, epilogue_op_add); } } // Store results to device memory return mma_op.store_result(D, params->ldd); } /////////////////////////////////////////////////////////////////////////////// // MN unaligned loop else { // Loop over K - unaligned case const int leftover_bk = 0; if ((align_M || tgp_bm == BM) && (align_N || tgp_bn == BN)) { // Do gemm gemm_kernel::gemm_loop( As, Bs, gemm_k_iterations, loader_a, loader_b, mma_op, tgp_bm, tgp_bn, leftover_bk, LoopAlignment<true, true, true>{}); // Do epilogue if (use_out_source) { if (do_axpby) { mma_op.apply_epilogue( C, addmm_params->ldc, addmm_params->fdc, epilogue_op_axpby); } else { mma_op.apply_epilogue( C, addmm_params->ldc, addmm_params->fdc, epilogue_op_add); } } // Store results to device memory return mma_op.store_result(D, params->ldd); } else if (align_N || tgp_bn == BN) { gemm_kernel::gemm_loop( As, Bs, gemm_k_iterations, loader_a, loader_b, mma_op, tgp_bm, tgp_bn, leftover_bk, LoopAlignment<false, true, true>{}); // Do epilogue if (use_out_source) { if (do_axpby) { mma_op.apply_epilogue_safe( C, addmm_params->ldc, addmm_params->fdc, short2(tgp_bn, tgp_bm), epilogue_op_axpby); } else { mma_op.apply_epilogue_safe( C, addmm_params->ldc, addmm_params->fdc, short2(tgp_bn, tgp_bm), epilogue_op_add); } } // Store results to device memory return mma_op.store_result_safe(D, params->ldd, short2(tgp_bn, tgp_bm)); } else if (align_M || tgp_bm == BM) { gemm_kernel::gemm_loop( As, Bs, gemm_k_iterations, loader_a, loader_b, mma_op, tgp_bm, tgp_bn, leftover_bk, LoopAlignment<true, false, true>{}); // Do epilogue if (use_out_source) { if (do_axpby) { mma_op.apply_epilogue_safe( C, addmm_params->ldc, addmm_params->fdc, short2(tgp_bn, tgp_bm), epilogue_op_axpby); } else { mma_op.apply_epilogue_safe( C, addmm_params->ldc, addmm_params->fdc, short2(tgp_bn, tgp_bm), epilogue_op_add); } } // Store results to device memory return mma_op.store_result_safe(D, params->ldd, short2(tgp_bn, tgp_bm)); } else { gemm_kernel::gemm_loop( As, Bs, gemm_k_iterations, loader_a, loader_b, mma_op, tgp_bm, tgp_bn, leftover_bk, LoopAlignment<false, false, true>{}); // Do epilogue if (use_out_source) { if (do_axpby) { mma_op.apply_epilogue_safe( C, addmm_params->ldc, addmm_params->fdc, short2(tgp_bn, tgp_bm), epilogue_op_axpby); } else { mma_op.apply_epilogue_safe( C, addmm_params->ldc, addmm_params->fdc, short2(tgp_bn, tgp_bm), epilogue_op_add); } } // Store results to device memory return mma_op.store_result_safe(D, params->ldd, short2(tgp_bn, tgp_bm)); } } } #define instantiate_gemm(tname, trans_a, trans_b, iname, itype, oname, otype, bm, bn, bk, wm, wn) \ template [[host_name("gemm_" #tname "_" #iname "_" #oname "_" #bm "_" #bn "_" #bk "_" #wm "_" #wn)]] \ [[kernel]] void gemm<itype, bm, bn, bk, wm, wn, trans_a, trans_b, float>( \ const device itype *A [[buffer(0)]], \ const device itype *B [[buffer(1)]], \ const device itype *C [[buffer(2), function_constant(use_out_source)]], \ device itype *D [[buffer(3)]], \ const constant GEMMParams* params [[buffer(4)]], \ const constant GEMMAddMMParams* addmm_params [[buffer(5), function_constant(use_out_source)]], \ const constant int* batch_shape [[buffer(6)]], \ const constant size_t* batch_strides [[buffer(7)]], \ const constant uint32_t* lhs_indices [[buffer(10), function_constant(do_gather)]], \ const constant uint32_t* rhs_indices [[buffer(11), function_constant(do_gather)]], \ const constant uint32_t* C_indices [[buffer(12), function_constant(gather_bias)]], \ const constant int* operand_shape [[buffer(13), function_constant(do_gather)]], \ const constant size_t* operand_strides [[buffer(14), function_constant(do_gather)]], \ const constant packed_int3& operand_batch_ndim [[buffer(15), function_constant(do_gather)]], \ uint simd_lane_id [[thread_index_in_simdgroup]], \ uint simd_group_id [[simdgroup_index_in_threadgroup]], \ uint3 tid [[threadgroup_position_in_grid]], \ uint3 lid [[thread_position_in_threadgroup]]); #define instantiate_gemm_transpose_helper(iname, itype, oname, otype, bm, bn, bk, wm, wn) \ instantiate_gemm(nn, false, false, iname, itype, oname, otype, bm, bn, bk, wm, wn) \ instantiate_gemm(nt, false, true , iname, itype, oname, otype, bm, bn, bk, wm, wn) \ instantiate_gemm(tn, true , false, iname, itype, oname, otype, bm, bn, bk, wm, wn) \ instantiate_gemm(tt, true , true , iname, itype, oname, otype, bm, bn, bk, wm, wn) instantiate_gemm_transpose_helper(f32, float, f32, float, 32, 32, 16, 2, 2) instantiate_gemm_transpose_helper(f16, half, f16, half, 32, 32, 16, 2, 2) #if defined(__HAVE_BFLOAT__) instantiate_gemm_transpose_helper(bf16, bfloat, bf16, bfloat, 32, 32, 16, 2, 2) #endif
candle/candle-metal-kernels/src/mlx_gemm.metal/0
{ "file_path": "candle/candle-metal-kernels/src/mlx_gemm.metal", "repo_id": "candle", "token_count": 20231 }
47
use candle_metal_kernels::{call_cast_contiguous, Kernels}; use metal::objc::rc::autoreleasepool; use metal::{Device, MTLResourceOptions}; use rand; use std::any::type_name; use std::time::Instant; fn main() { let device = Device::system_default().unwrap(); let kernels = Kernels::new(); let f32_1k = (0..1000).map(|_| rand::random::<f32>()).collect::<Vec<_>>(); let f32_10k = (0..10000) .map(|_| rand::random::<f32>()) .collect::<Vec<_>>(); let f32_100k = (0..100000) .map(|_| rand::random::<f32>()) .collect::<Vec<_>>(); let contiguous_kernels = ["cast_u32_f32"]; println!( "{0: <5} | {1: <19} | {2: <6} | {3: <5} | {4: <11} | {5: <11}", "dtype", "kernel", "size", "runs", "total time", "avg time" ); // f32 run_cast_bench(&device, &kernels, &f32_1k, &contiguous_kernels); run_cast_bench(&device, &kernels, &f32_10k, &contiguous_kernels); run_cast_bench(&device, &kernels, &f32_100k, &contiguous_kernels); } fn run_cast_bench<T: Clone>( device: &Device, kernels: &Kernels, v: &[T], contiguous: &[&'static str], ) { let command_queue = device.new_command_queue(); let options = MTLResourceOptions::StorageModeManaged; let iterations = 1000; let input = device.new_buffer_with_data( v.as_ptr() as *const core::ffi::c_void, core::mem::size_of_val(v) as u64, options, ); let mut output = device.new_buffer(core::mem::size_of_val(v) as u64, options); // Contiguous for kernel_name in contiguous { let total_time = autoreleasepool(|| { let command_buffer = command_queue.new_command_buffer(); let start = Instant::now(); for _ in 0..iterations { call_cast_contiguous( device, &command_buffer, kernels, kernel_name, v.len(), &input, &mut output, ) .unwrap(); } command_buffer.commit(); command_buffer.wait_until_completed(); start.elapsed() }); println!( "{0: <5} | {1: <19} | {2: <6} | {3: <5} | {4: <11?} | {5: <11?}", type_name::<T>().split("::").last().unwrap(), kernel_name.to_string(), v.len(), iterations, total_time, total_time / iterations ); } // Strided? }
candle/candle-metal-kernels/tmp/cast.rs/0
{ "file_path": "candle/candle-metal-kernels/tmp/cast.rs", "repo_id": "candle", "token_count": 1299 }
48
//! Layers defined by closures. use candle::{Result, Tensor}; use std::sync::Arc; /// A layer defined by a simple closure. #[derive(Clone)] pub struct Func<'a> { #[allow(clippy::type_complexity)] f: Arc<dyn 'a + Fn(&Tensor) -> Result<Tensor> + Send + Sync>, } impl std::fmt::Debug for Func<'_> { fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result { write!(f, "func") } } pub fn func<'a, F>(f: F) -> Func<'a> where F: 'a + Fn(&Tensor) -> Result<Tensor> + Send + Sync, { Func { f: Arc::new(f) } } impl super::Module for Func<'_> { fn forward(&self, xs: &Tensor) -> Result<Tensor> { (*self.f)(xs) } } impl<'a> Func<'a> { pub fn new<F>(f: F) -> Self where F: 'a + Fn(&Tensor) -> Result<Tensor> + Send + Sync, { Self { f: Arc::new(f) } } } /// A layer defined by a simple closure. #[derive(Clone)] pub struct FuncT<'a> { #[allow(clippy::type_complexity)] f: Arc<dyn 'a + Fn(&Tensor, bool) -> Result<Tensor> + Send + Sync>, } impl std::fmt::Debug for FuncT<'_> { fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result { write!(f, "func") } } pub fn func_t<'a, F>(f: F) -> FuncT<'a> where F: 'a + Fn(&Tensor, bool) -> Result<Tensor> + Send + Sync, { FuncT { f: Arc::new(f) } } impl super::ModuleT for FuncT<'_> { fn forward_t(&self, xs: &Tensor, train: bool) -> Result<Tensor> { (*self.f)(xs, train) } } impl<'a> FuncT<'a> { pub fn new<F>(f: F) -> Self where F: 'a + Fn(&Tensor, bool) -> Result<Tensor> + Send + Sync, { Self { f: Arc::new(f) } } }
candle/candle-nn/src/func.rs/0
{ "file_path": "candle/candle-nn/src/func.rs", "repo_id": "candle", "token_count": 784 }
49
#[cfg(feature = "mkl")] extern crate intel_mkl_src; #[cfg(feature = "accelerate")] extern crate accelerate_src; use anyhow::Result; use candle::{test_utils, DType, Device, Tensor}; use candle_nn::{batch_norm, BatchNorm, BatchNormConfig, VarBuilder, VarMap}; /* The test below has been generated using the following PyTorch code: import torch torch.manual_seed(19551105) m = torch.nn.BatchNorm2d(5, affine=False) input = torch.randn(2, 5, 3, 4) output = m(input) print(input.flatten()) print(output.flatten()) print(m.running_mean) print(m.running_var) */ #[test] fn batch_norm_test() -> Result<()> { let running_mean = Tensor::zeros(5, DType::F32, &Device::Cpu)?; let running_var = Tensor::ones(5, DType::F32, &Device::Cpu)?; let bn = BatchNorm::new_no_bias(5, running_mean.clone(), running_var.clone(), 1e-8)?; let input: [f32; 120] = [ -0.7493, -1.0410, 1.6977, -0.6579, 1.7982, -0.0087, 0.2812, -0.1190, 0.2908, -0.5975, -0.0278, -0.2138, -1.3130, -1.6048, -2.2028, 0.9452, 0.4002, 0.0831, 1.0004, 0.1860, 0.5004, 0.5539, 0.9991, -0.2540, -0.0703, -0.3752, -0.1096, -0.2374, 1.0258, -2.2208, -0.0257, 0.6073, -1.1627, -0.0964, -1.9718, 1.6577, 0.1931, -0.3692, -0.8011, 0.9059, 0.4797, 0.6521, -0.0165, -0.6683, -0.4148, 2.0649, -0.8276, 1.7947, -0.2061, 0.5812, -1.3598, 1.6192, 1.0466, -0.4423, 0.4202, 0.1749, 0.6969, 0.2616, -0.0369, -1.4951, -0.0814, -0.1877, 0.0267, 0.6150, 0.2402, -1.1440, -2.0068, 0.6032, -2.6639, 0.8260, 0.1085, -0.1693, 1.2805, 0.7654, -0.4930, 0.3770, 1.1309, 0.2303, 0.2949, -0.2634, -0.5225, 0.4269, 0.6341, 1.5736, 0.9827, -1.2499, 0.3509, -1.6243, -0.8123, 0.7634, -0.3047, 0.0143, -0.4032, 0.0537, 0.7022, 0.8405, -1.2221, -1.6847, -0.0714, -0.1608, 0.5579, -1.5858, 0.4617, -0.6480, 0.1332, 0.0419, -0.9784, 0.4173, 1.2313, -1.9046, -0.1656, 0.1259, 0.0763, 1.4252, -0.9115, -0.1093, -0.3100, -0.6734, -1.4357, 0.9205, ]; let input = Tensor::new(&input, &Device::Cpu)?.reshape((2, 5, 3, 4))?; let output = bn.forward_train(&input)?; assert_eq!(output.dims(), &[2, 5, 3, 4]); let output = output.flatten_all()?; assert_eq!( test_utils::to_vec1_round(&output, 4)?, &[ -0.6391, -0.9414, 1.8965, -0.5444, 2.0007, 0.1283, 0.4287, 0.014, 0.4387, -0.4818, 0.1085, -0.0842, -1.6809, -2.0057, -2.6714, 0.8328, 0.2262, -0.1268, 0.8943, -0.0123, 0.3377, 0.3973, 0.8928, -0.5021, 0.0861, -0.2324, 0.0451, -0.0884, 1.2311, -2.1603, 0.1327, 0.7939, -1.055, 0.0589, -1.9002, 1.8912, 0.2918, -0.3253, -0.7993, 1.0741, 0.6063, 0.7955, 0.0617, -0.6536, -0.3754, 2.3461, -0.8284, 2.0495, -0.201, 0.6476, -1.4446, 1.7665, 1.1493, -0.4556, 0.4741, 0.2097, 0.7723, 0.3031, -0.0186, -1.5905, 0.053, -0.0572, 0.165, 0.7746, 0.3862, -1.0481, -1.9422, 0.7624, -2.6231, 0.9933, 0.2498, -0.0381, 1.2061, 0.6327, -0.7681, 0.2004, 1.0396, 0.037, 0.109, -0.5125, -0.8009, 0.2559, 0.4865, 1.5324, 1.1861, -1.1461, 0.5261, -1.5372, -0.689, 0.957, -0.1587, 0.1745, -0.2616, 0.2156, 0.8931, 1.0375, -1.2614, -1.7691, 0.0015, -0.0966, 0.6921, -1.6605, 0.5866, -0.6313, 0.226, 0.1258, -0.9939, 0.5378, 1.3484, -2.0319, -0.1574, 0.1568, 0.1034, 1.5574, -0.9614, -0.0967, -0.313, -0.7047, -1.5264, 1.0134 ] ); let bn2 = BatchNorm::new( 5, running_mean, running_var, Tensor::new(&[0.5f32], &Device::Cpu)?.broadcast_as(5)?, Tensor::new(&[-1.5f32], &Device::Cpu)?.broadcast_as(5)?, 1e-8, )?; let output2 = bn2.forward_train(&input)?; assert_eq!(output2.dims(), &[2, 5, 3, 4]); let output2 = output2.flatten_all()?; let diff2 = ((output2 - (output * 0.5)?)? + 1.5)?.sqr()?; let sum_diff2 = diff2.sum_keepdim(0)?; assert_eq!(test_utils::to_vec1_round(&sum_diff2, 4)?, &[0f32]); assert_eq!( test_utils::to_vec1_round(bn.running_mean(), 4)?, &[-0.0133, 0.0197, -0.0153, -0.0073, -0.0020] ); assert_eq!( test_utils::to_vec1_round(bn.running_var(), 4)?, &[0.9972, 0.9842, 0.9956, 0.9866, 0.9898] ); Ok(()) } // This test makes sure that we can train a batch norm layer using a VarMap. #[test] fn train_batch_norm() -> Result<()> { let vm = VarMap::new(); let vb = VarBuilder::from_varmap(&vm, DType::F32, &Device::Cpu); let bn = batch_norm(1, BatchNormConfig::default(), vb)?; // Get a copy of the original mean to ensure it is being updated. let original_mean = bn.running_mean().detach().copy()?; let var_map_mean = { vm.data() .lock() .unwrap() .get("running_mean") .unwrap() .clone() }; // Ensure the var map mean is the same as the running mean. assert_eq!( test_utils::to_vec1_round(bn.running_mean(), 4)?, test_utils::to_vec1_round(var_map_mean.as_tensor(), 4)?, ); // Train with a something guaranteed to be different from the running mean. let mean_plus_one = { let one = original_mean.ones_like()?; original_mean.add(&one)?.reshape((1, 1))? }; bn.forward_train(&mean_plus_one)?; // Assert that the running mean has been updated. assert_ne!( test_utils::to_vec1_round(bn.running_mean(), 4)?, test_utils::to_vec1_round(&original_mean, 4)?, ); // Assert that the var map mean has been updated. assert_eq!( test_utils::to_vec1_round(bn.running_mean(), 4)?, test_utils::to_vec1_round(var_map_mean.as_tensor(), 4)?, ); Ok(()) }
candle/candle-nn/tests/batch_norm.rs/0
{ "file_path": "candle/candle-nn/tests/batch_norm.rs", "repo_id": "candle", "token_count": 3126 }
50
use candle::test_utils::to_vec2_round; use candle::{DType, Device, NdArray, Result, Tensor}; use candle_onnx::onnx::attribute_proto::AttributeType; use candle_onnx::onnx::tensor_proto::DataType; use candle_onnx::onnx::tensor_shape_proto::{dimension, Dimension}; use candle_onnx::onnx::{type_proto, TensorProto, TensorShapeProto, TypeProto}; use candle_onnx::onnx::{AttributeProto, GraphProto, ModelProto, NodeProto, ValueInfoProto}; use candle_onnx::simple_eval; use std::collections::HashMap; const INPUT_X: &str = "x"; const INPUT_Y: &str = "y"; const INPUT_A: &str = "a"; const OUTPUT_Z: &str = "z"; fn create_model_proto_with_graph(graph: Option<GraphProto>) -> ModelProto { ModelProto { metadata_props: vec![], training_info: vec![], functions: vec![], ir_version: 0, opset_import: vec![], producer_name: "".to_string(), producer_version: "".to_string(), domain: "".to_string(), model_version: 0, doc_string: "".to_string(), graph, } } #[test] fn test_evaluation_fails_without_defined_graph() -> Result<()> { let manual_graph = create_model_proto_with_graph(None); let inputs: HashMap<String, Tensor> = HashMap::new(); match candle_onnx::simple_eval(&manual_graph, inputs) { Err(err) => assert_eq!(err.to_string(), "no graph defined in proto"), Ok(_) => panic!("Expected an error due to undefined graph"), } Ok(()) } // "Add" #[test] fn test_add_operation() -> Result<()> { let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "Add".to_string(), domain: "".to_string(), attribute: vec![], input: vec![INPUT_X.to_string(), INPUT_Y.to_string()], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let mut inputs: HashMap<String, Tensor> = HashMap::new(); inputs.insert(INPUT_X.to_string(), Tensor::new(&[2.], &Device::Cpu)?); inputs.insert(INPUT_Y.to_string(), Tensor::new(&[2.], &Device::Cpu)?); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); let first = z.to_vec1::<f64>()?[0]; assert_eq!(first, 4.0f64); Ok(()) } // "Sub" #[test] fn test_sub_operation() -> Result<()> { let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "Sub".to_string(), domain: "".to_string(), attribute: vec![], input: vec![INPUT_X.to_string(), INPUT_Y.to_string()], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let mut inputs: HashMap<String, Tensor> = HashMap::new(); inputs.insert(INPUT_X.to_string(), Tensor::new(&[2.], &Device::Cpu)?); inputs.insert(INPUT_Y.to_string(), Tensor::new(&[2.], &Device::Cpu)?); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); let first = z.to_vec1::<f64>()?[0]; assert_eq!(first, 0.0f64); Ok(()) } // "Mul" #[test] fn test_mul_operation() -> Result<()> { let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "Mul".to_string(), domain: "".to_string(), attribute: vec![], input: vec![INPUT_X.to_string(), INPUT_Y.to_string()], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let mut inputs: HashMap<String, Tensor> = HashMap::new(); inputs.insert(INPUT_X.to_string(), Tensor::new(&[2.], &Device::Cpu)?); inputs.insert(INPUT_Y.to_string(), Tensor::new(&[2.], &Device::Cpu)?); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); let first = z.to_vec1::<f64>()?[0]; assert_eq!(first, 4.0f64); Ok(()) } // "Div" #[test] fn test_div_operation() -> Result<()> { let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "Div".to_string(), domain: "".to_string(), attribute: vec![], input: vec![INPUT_X.to_string(), INPUT_Y.to_string()], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let mut inputs: HashMap<String, Tensor> = HashMap::new(); inputs.insert(INPUT_X.to_string(), Tensor::new(&[2.], &Device::Cpu)?); inputs.insert(INPUT_Y.to_string(), Tensor::new(&[2.], &Device::Cpu)?); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); let first = z.to_vec1::<f64>()?[0]; assert_eq!(first, 1.0f64); Ok(()) } // "Exp" #[test] fn test_exp_operation() -> Result<()> { let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "Exp".to_string(), domain: "".to_string(), attribute: vec![], input: vec![INPUT_X.to_string()], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let x = Tensor::from_vec(vec![-1.0f32, 0.0f32, 1.0f32, 2.0f32], &[2, 2], &Device::Cpu)?; let mut inputs: HashMap<String, Tensor> = HashMap::new(); inputs.insert(INPUT_X.to_string(), x); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); let results = z.to_vec2::<f32>()?; assert_eq!(results[0][0], 0.36787944f32); assert_eq!(results[0][1], 1.0f32); assert_eq!(results[1], vec![std::f32::consts::E, 7.389056f32]); Ok(()) } // "Equal" #[test] fn test_equal_operation() -> Result<()> { let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "Equal".to_string(), domain: "".to_string(), attribute: vec![], input: vec![INPUT_X.to_string(), INPUT_Y.to_string()], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let mut inputs: HashMap<String, Tensor> = HashMap::new(); inputs.insert(INPUT_X.to_string(), Tensor::new(&[2.], &Device::Cpu)?); inputs.insert(INPUT_Y.to_string(), Tensor::new(&[2.], &Device::Cpu)?); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); let first = z.to_dtype(candle::DType::U8)?.to_vec1::<u8>()?.to_vec()[0]; assert_eq!(first, 1); Ok(()) } // "Not" #[test] fn test_not_operation() -> Result<()> { let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "Not".to_string(), domain: "".to_string(), attribute: vec![], input: vec![INPUT_X.to_string()], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let mut inputs: HashMap<String, Tensor> = HashMap::new(); inputs.insert(INPUT_X.to_string(), Tensor::new(&[0.], &Device::Cpu)?); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); let first = z.to_dtype(candle::DType::U8)?.to_vec1::<u8>()?.to_vec()[0]; assert_eq!(first, 1); Ok(()) } // "MatMul" #[test] fn test_matmul_operation() -> Result<()> { let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "MatMul".to_string(), domain: "".to_string(), attribute: vec![], input: vec![INPUT_X.to_string(), INPUT_Y.to_string()], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let mut inputs: HashMap<String, Tensor> = HashMap::new(); inputs.insert( INPUT_X.to_string(), Tensor::from_vec( // vec![1.0f32, 2.0f32, 3.0f32, 4.0f32], &[2, 2], &Device::Cpu, )?, ); inputs.insert( INPUT_Y.to_string(), Tensor::from_vec( // vec![5.0f32, 6.0f32, 7.0f32, 8.0f32], &[2, 2], &Device::Cpu, )?, ); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); let results = z.to_vec2::<f32>()?; assert_eq!(results, vec![vec![19.0, 22.0], vec![43.0, 50.0]]); Ok(()) } // "Reshape" #[test] fn test_reshape_operation() -> Result<()> { let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "Reshape".to_string(), domain: "".to_string(), attribute: vec![], input: vec![INPUT_X.to_string(), INPUT_Y.to_string()], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![ ValueInfoProto { name: INPUT_X.to_string(), doc_string: "".to_string(), r#type: None, }, ValueInfoProto { name: INPUT_Y.to_string(), doc_string: "".to_string(), r#type: None, }, ], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let x = Tensor::from_vec( // vec![1.0f32, 2.0f32, 3.0f32, 4.0f32], &[2, 2], &Device::Cpu, )?; let y = Tensor::from_vec( // vec![4i64], &[1], &Device::Cpu, )?; let mut inputs: HashMap<String, Tensor> = HashMap::new(); inputs.insert(INPUT_X.to_string(), x); inputs.insert(INPUT_Y.to_string(), y); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); let results = z.to_vec1::<f32>()?; assert_eq!(results, vec![1.0, 2.0, 3.0, 4.0]); Ok(()) } // "LogSoftmax" #[test] fn test_logsoftmax_operation() -> Result<()> { let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "LogSoftmax".to_string(), domain: "".to_string(), attribute: vec![], input: vec![INPUT_X.to_string()], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![ ValueInfoProto { name: INPUT_X.to_string(), doc_string: "".to_string(), r#type: None, }, ValueInfoProto { name: INPUT_Y.to_string(), doc_string: "".to_string(), r#type: None, }, ], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let x = Tensor::from_vec( // vec![1.0f32, 2.0f32, 3.0f32, 4.0f32], &[2, 2], &Device::Cpu, )?; let mut inputs: HashMap<String, Tensor> = HashMap::new(); inputs.insert(INPUT_X.to_string(), x); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); let results = z.to_vec2::<f32>()?; assert_eq!( results, vec![vec![0.26894143, 0.7310586], vec![0.26894143, 0.7310586]] ); Ok(()) } // "Softmax" #[test] fn test_softmax_operation() -> Result<()> { let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "Softmax".to_string(), domain: "".to_string(), attribute: vec![], input: vec![INPUT_X.to_string()], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![ ValueInfoProto { name: INPUT_X.to_string(), doc_string: "".to_string(), r#type: None, }, ValueInfoProto { name: INPUT_Y.to_string(), doc_string: "".to_string(), r#type: None, }, ], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let x = Tensor::from_vec( // vec![1.0f32, 2.0f32, 3.0f32, 4.0f32], &[2, 2], &Device::Cpu, )?; let mut inputs: HashMap<String, Tensor> = HashMap::new(); inputs.insert(INPUT_X.to_string(), x); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); let results = z.to_vec2::<f32>()?; assert_eq!( results, vec![vec![0.26894143, 0.7310586], vec![0.26894143, 0.7310586]] ); Ok(()) } // "Transpose" #[test] fn test_transpose_operation() -> Result<()> { let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "Transpose".to_string(), domain: "".to_string(), attribute: vec![], input: vec![INPUT_X.to_string()], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![ ValueInfoProto { name: INPUT_X.to_string(), doc_string: "".to_string(), r#type: None, }, ValueInfoProto { name: INPUT_Y.to_string(), doc_string: "".to_string(), r#type: None, }, ], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let x = Tensor::from_vec( // vec![1.0f32, 2.0f32, 3.0f32, 4.0f32], &[2, 2], &Device::Cpu, )?; let mut inputs: HashMap<String, Tensor> = HashMap::new(); inputs.insert(INPUT_X.to_string(), x); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); let results = z.to_vec2::<f32>()?; assert_eq!(results, vec![vec![1.0, 3.0], vec![2.0, 4.0]]); Ok(()) } // "Dropout" #[test] fn test_dropout_operation() -> Result<()> { let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "Dropout".to_string(), domain: "".to_string(), attribute: vec![], input: vec![INPUT_X.to_string()], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![ ValueInfoProto { name: INPUT_X.to_string(), doc_string: "".to_string(), r#type: None, }, ValueInfoProto { name: INPUT_Y.to_string(), doc_string: "".to_string(), r#type: None, }, ], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let x = Tensor::from_vec( // vec![1.0f32, 2.0f32, 3.0f32, 4.0f32], &[2, 2], &Device::Cpu, )?; let mut inputs: HashMap<String, Tensor> = HashMap::new(); inputs.insert(INPUT_X.to_string(), x); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); let results = z.to_vec2::<f32>()?; assert_eq!(results, vec![vec![1.0, 2.0], vec![3.0, 4.0]]); Ok(()) } // "Flatten" #[test] fn test_flatten_operation() -> Result<()> { let mut att_axis = AttributeProto { name: "axis".to_string(), ref_attr_name: "axis".to_string(), i: 0, doc_string: "axis".to_string(), r#type: 2, f: 0.0, s: vec![], t: None, g: None, sparse_tensor: None, tp: None, floats: vec![], ints: vec![], strings: vec![], tensors: vec![], graphs: vec![], sparse_tensors: vec![], type_protos: vec![], }; let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "Flatten".to_string(), domain: "".to_string(), attribute: vec![att_axis.clone()], input: vec![INPUT_X.to_string()], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![ ValueInfoProto { name: INPUT_X.to_string(), doc_string: "".to_string(), r#type: None, }, ValueInfoProto { name: INPUT_Y.to_string(), doc_string: "".to_string(), r#type: None, }, ], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let x = Tensor::from_vec( vec![ 1.0f32, 2.0f32, 3.0f32, 4.0f32, 5.0f32, 6.0f32, 7.0f32, 8.0f32, ], &[2, 2, 2], &Device::Cpu, )?; let mut inputs: HashMap<String, Tensor> = HashMap::new(); inputs.insert(INPUT_X.to_string(), x); let eval = candle_onnx::simple_eval(&manual_graph, inputs.clone())?; assert_eq!(eval.len(), 1); let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); let results = z.to_vec2::<f32>()?; assert_eq!(results, vec![vec![1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]]); att_axis.i = 1; let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "Flatten".to_string(), domain: "".to_string(), attribute: vec![att_axis.clone()], input: vec![INPUT_X.to_string()], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![ ValueInfoProto { name: INPUT_X.to_string(), doc_string: "".to_string(), r#type: None, }, ValueInfoProto { name: INPUT_Y.to_string(), doc_string: "".to_string(), r#type: None, }, ], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); let results = z.to_vec2::<f32>()?; assert_eq!( results, vec![vec![1.0, 2.0, 3.0, 4.0], vec![5.0, 6.0, 7.0, 8.0]] ); Ok(()) } // Below are ops that are implemented but not tested yet // "MaxPool" // #[test] // "AveragePool" // #[test] // "BatchNormalization" // #[test] // "Squeeze" // #[test] // "ConstantOfShape" #[test] fn test_constant_of_shape() -> Result<()> { // https://github.com/onnx/onnx/blob/main/docs/Operators.md#examples-31 test( &[4i64, 3, 2], Some(1.), &[ [[1., 1.], [1., 1.], [1., 1.]], [[1., 1.], [1., 1.], [1., 1.]], [[1., 1.], [1., 1.], [1., 1.]], [[1., 1.], [1., 1.], [1., 1.]], ], )?; // https://github.com/onnx/onnx/blob/main/docs/Operators.md#examples-31 test(&[1i64], Some(0i64), &[0i64])?; // "value" defaults to 0 f32 test(&[4i64], None as Option<i64>, &[0., 0., 0., 0.])?; fn test( input: impl NdArray, value: Option<impl NdArray>, expected: impl NdArray, ) -> Result<()> { let mut attribute = vec![]; if let Some(value) = value { let tensor = Tensor::new(value, &Device::Cpu)?; let (value, data_type) = match tensor.dtype() { DType::U8 => ( tensor.to_vec0::<u8>()?.to_le_bytes().to_vec(), DataType::Uint8, ), DType::U32 => ( tensor.to_vec0::<u32>()?.to_le_bytes().to_vec(), DataType::Uint32, ), DType::I64 => ( tensor.to_vec0::<i64>()?.to_le_bytes().to_vec(), DataType::Int64, ), DType::F32 => ( tensor.to_vec0::<f32>()?.to_le_bytes().to_vec(), DataType::Float, ), DType::F64 => ( tensor.to_vec0::<f64>()?.to_le_bytes().to_vec(), DataType::Double, ), _ => panic!("unsupported DType in test"), }; let tensor = TensorProto { data_type: data_type.into(), dims: tensor.dims().iter().map(|v| *v as i64).collect(), raw_data: value, segment: None, float_data: vec![], int32_data: vec![], string_data: vec![], int64_data: vec![], name: "".to_string(), doc_string: "".to_string(), external_data: vec![], data_location: 0, double_data: vec![], uint64_data: vec![], }; attribute.push(AttributeProto { name: "value".to_string(), ref_attr_name: "value".to_string(), i: 0, doc_string: "value".to_string(), r#type: AttributeType::Tensor.into(), f: 0.0, s: vec![], t: Some(tensor), g: None, sparse_tensor: None, tp: None, floats: vec![], ints: vec![], strings: vec![], tensors: vec![], graphs: vec![], sparse_tensors: vec![], type_protos: vec![], }) } let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "ConstantOfShape".to_string(), domain: "".to_string(), attribute, input: vec![INPUT_X.to_string()], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let mut inputs: HashMap<String, Tensor> = HashMap::new(); inputs.insert(INPUT_X.to_string(), Tensor::new(input, &Device::Cpu)?); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let z = eval .get(OUTPUT_Z) .expect("Output 'z' not found") .to_dtype(DType::F64)?; let expected = Tensor::new(expected, &Device::Cpu)?.to_dtype(DType::F64)?; match expected.dims().len() { 0 => assert_eq!(z.to_vec0::<f64>()?, expected.to_vec0::<f64>()?), 1 => assert_eq!(z.to_vec1::<f64>()?, expected.to_vec1::<f64>()?), 2 => assert_eq!(z.to_vec2::<f64>()?, expected.to_vec2::<f64>()?), 3 => assert_eq!(z.to_vec3::<f64>()?, expected.to_vec3::<f64>()?), _ => unreachable!(), }; Ok(()) } Ok(()) } // "Unsqueeze" #[test] fn test_unsqueeze() -> Result<()> { let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "Unsqueeze".to_string(), domain: "".to_string(), attribute: vec![], input: vec![INPUT_X.to_string(), INPUT_Y.to_string()], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![ValueInfoProto { name: INPUT_X.to_string(), doc_string: "".to_string(), r#type: None, }], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let x = Tensor::from_vec( vec![ 1.0f32, 2.0f32, // 3.0f32, 4.0f32, // ], &[2, 2], &Device::Cpu, )?; let y = Tensor::from_vec(vec![-1i64], &[1], &Device::Cpu)?; let inputs = HashMap::from_iter([(INPUT_X.to_string(), x.clone()), (INPUT_Y.to_string(), y)]); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); assert_eq!(z.dims(), &[2, 2, 1]); assert_eq!( z.flatten_all()?.to_vec1::<f32>()?, x.flatten_all()?.to_vec1::<f32>()? ); Ok(()) } // "Clip" // #[test] // "Gather" #[test] fn test_gather_operation() -> Result<()> { // test taken from https://onnx.ai/onnx/operators/onnx__Gather.html#summary. test( &[[1.0, 1.2], [2.3, 3.4], [4.5, 5.7]], &[[0i64, 1], [1, 2]], 0, &[[[1.0, 1.2], [2.3, 3.4]], [[2.3, 3.4], [4.5, 5.7]]], )?; // test taken from https://onnx.ai/onnx/operators/onnx__Gather.html#summary. test( &[[1.0, 1.2, 1.9], [2.3, 3.4, 3.9], [4.5, 5.7, 5.9]], &[[0i64, 2]], 1, &[[[1.0, 1.9]], [[2.3, 3.9]], [[4.5, 5.9]]], )?; // all the tests below are generated from numpy.take, which works like // onnx's Gather operation. test(&[1.0, 2.0, 3.0, 4.0], 3i64, 0, 4.0)?; test(&[[1.0, 2.0, 3.0, 4.0]], 3i64, 1, &[4.0])?; test( &[[1.0], [2.0], [3.0], [4.0]], &[3i64, 2], 0, &[[4.0], [3.0]], )?; test( &[ [[1.0, 2.0], [3.0, 4.0]], [[5.0, 6.0], [7.0, 8.0]], [[9.0, 10.0], [11.0, 12.0]], [[13.0, 14.0], [15.0, 16.0]], ], 1i64, 0, &[[5.0, 6.0], [7.0, 8.0]], )?; test( &[ [[1.0, 2.0], [3.0, 4.0]], [[5.0, 6.0], [7.0, 8.0]], [[9.0, 10.0], [11.0, 12.0]], [[13.0, 14.0], [15.0, 16.0]], ], &[1i64, 0], 0, &[[[5.0, 6.0], [7.0, 8.0]], [[1.0, 2.0], [3.0, 4.0]]], )?; fn test( data: impl NdArray, indices: impl NdArray, axis: i64, expected: impl NdArray, ) -> Result<()> { let att_axis = AttributeProto { name: "axis".to_string(), ref_attr_name: "axis".to_string(), i: axis, doc_string: "axis".to_string(), r#type: 2, f: 0.0, s: vec![], t: None, g: None, sparse_tensor: None, tp: None, floats: vec![], ints: vec![], strings: vec![], tensors: vec![], graphs: vec![], sparse_tensors: vec![], type_protos: vec![], }; let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "Gather".to_string(), domain: "".to_string(), attribute: vec![att_axis], input: vec![INPUT_X.to_string(), INPUT_Y.to_string()], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let mut inputs: HashMap<String, Tensor> = HashMap::new(); inputs.insert(INPUT_X.to_string(), Tensor::new(data, &Device::Cpu)?); inputs.insert(INPUT_Y.to_string(), Tensor::new(indices, &Device::Cpu)?); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); let expected = Tensor::new(expected, &Device::Cpu)?; match expected.dims().len() { 0 => assert_eq!(z.to_vec0::<f64>()?, expected.to_vec0::<f64>()?), 1 => assert_eq!(z.to_vec1::<f64>()?, expected.to_vec1::<f64>()?), 2 => assert_eq!(z.to_vec2::<f64>()?, expected.to_vec2::<f64>()?), 3 => assert_eq!(z.to_vec3::<f64>()?, expected.to_vec3::<f64>()?), _ => unreachable!(), }; Ok(()) } Ok(()) } // GatherElements #[test] fn test_gather_elements() -> Result<()> { // all the tests below are verified against `torch.gather()` // Rank 1 index test(&[1.0, 2.0, 3.0, 4.0], &[3i64], 0, &[4.0])?; // Rank 2 index test(&[[1.0, 2.0, 3.0, 4.0]], &[[3i64]], 1, &[[4.0]])?; // https://github.com/onnx/onnx/blob/main/docs/Operators.md#examples-57 gather_elements_0 test( &[[1., 2.], [3., 4.]], &[[0i64, 0], [1, 0]], 1, &[[1., 1.], [4., 3.]], )?; // https://github.com/onnx/onnx/blob/main/docs/Operators.md#examples-57 gather_elements_1 test( &[[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]], &[[1i64, 2, 0], [2, 0, 0]], 0, &[[4., 8., 3.], [7., 2., 3.]], )?; // https://github.com/onnx/onnx/blob/main/docs/Operators.md#examples-57 gather_elements_negative_indices test( &[[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]], &[[-1_i64, -2, 0], [-2, 0, 0]], 0, &[[7., 5., 3.], [4., 2., 3.]], )?; test( &[[1.0], [2.0], [3.0], [4.0]], &[[3i64], [2]], 0, &[[4.], [3.]], )?; // Rank 3 test( &[ [[1.0, 2.0], [3.0, 4.0]], [[5.0, 6.0], [7.0, 8.0]], [[9.0, 10.0], [11.0, 12.0]], [[13.0, 14.0], [15.0, 16.0]], ], &[[[1i64]]], 0, &[[[5.]]], )?; test( &[ [[1.0, 2.0], [3.0, 4.0]], [[5.0, 6.0], [7.0, 8.0]], [[9.0, 10.0], [11.0, 12.0]], [[13.0, 14.0], [15.0, 16.0]], ], &[[[1i64]]], 1, &[[[3.]]], )?; test( &[ [[1.0, 2.0], [3.0, 4.0]], [[5.0, 6.0], [7.0, 8.0]], [[9.0, 10.0], [11.0, 12.0]], [[13.0, 14.0], [15.0, 16.0]], ], &[[[1i64], [0]]], 2, &[[[2.], [3.]]], )?; // Error cases // Invalid index assert!(test(&[[1.0, 2.0, 3.0, 4.0]], &[[3i64]], 0, &[[1., 2., 3., 4.]]).is_err()); // Invalid axis/ dim assert!(test(&[[1.0, 2.0, 3.0, 4.0]], &[[3i64]], 2, &[[1., 2., 3., 4.]]).is_err()); // Invalid rank assert!(test(&[[1.0, 2.0, 3.0, 4.0]], &[3i64], 0, &[[1.]]).is_err()); fn test( data: impl NdArray, indices: impl NdArray, axis: i64, expected: impl NdArray, ) -> Result<()> { let att_axis = AttributeProto { name: "axis".to_string(), ref_attr_name: "axis".to_string(), i: axis, doc_string: "axis".to_string(), r#type: 2, f: 0.0, s: vec![], t: None, g: None, sparse_tensor: None, tp: None, floats: vec![], ints: vec![], strings: vec![], tensors: vec![], graphs: vec![], sparse_tensors: vec![], type_protos: vec![], }; let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "GatherElements".to_string(), domain: "".to_string(), attribute: vec![att_axis], input: vec![INPUT_X.to_string(), INPUT_Y.to_string()], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let mut inputs: HashMap<String, Tensor> = HashMap::new(); inputs.insert(INPUT_X.to_string(), Tensor::new(data, &Device::Cpu)?); inputs.insert(INPUT_Y.to_string(), Tensor::new(indices, &Device::Cpu)?); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); let expected = Tensor::new(expected, &Device::Cpu)?; match expected.dims().len() { 0 => assert_eq!(z.to_vec0::<f64>()?, expected.to_vec0::<f64>()?), 1 => assert_eq!(z.to_vec1::<f64>()?, expected.to_vec1::<f64>()?), 2 => assert_eq!(z.to_vec2::<f64>()?, expected.to_vec2::<f64>()?), 3 => assert_eq!(z.to_vec3::<f64>()?, expected.to_vec3::<f64>()?), _ => unreachable!(), }; Ok(()) } Ok(()) } // "Size" #[test] fn test_size_operation() -> Result<()> { let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "Size".to_string(), domain: "".to_string(), attribute: vec![], input: vec![INPUT_X.to_string()], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![ValueInfoProto { name: INPUT_X.to_string(), doc_string: "".to_string(), r#type: None, }], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let x = Tensor::from_vec(vec![1.0f32, 2.0f32, 3.0f32, 4.0f32], &[2, 2], &Device::Cpu)?; let mut inputs: HashMap<String, Tensor> = HashMap::new(); inputs.insert(INPUT_X.to_string(), x); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); let results = z.to_scalar::<i64>()?; assert_eq!(results, 4); Ok(()) } // "Shape" #[test] fn test_shape_operation() -> Result<()> { let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "Shape".to_string(), domain: "".to_string(), attribute: vec![], input: vec![INPUT_X.to_string()], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![ValueInfoProto { name: INPUT_X.to_string(), doc_string: "".to_string(), r#type: None, }], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let x = Tensor::from_vec(vec![1.0f32, 2.0f32, 3.0f32, 4.0f32], &[2, 2], &Device::Cpu)?; let mut inputs: HashMap<String, Tensor> = HashMap::new(); inputs.insert(INPUT_X.to_string(), x); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); let results = z.to_vec1::<i64>()?; assert_eq!(results, vec![2, 2]); Ok(()) } // "Conv" // #[test] // "Concat" // #[test] // "Abs" #[test] fn test_abs_operation() -> Result<()> { let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "Abs".to_string(), domain: "".to_string(), attribute: vec![], input: vec![INPUT_X.to_string()], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![ ValueInfoProto { name: INPUT_X.to_string(), doc_string: "".to_string(), r#type: None, }, ValueInfoProto { name: INPUT_Y.to_string(), doc_string: "".to_string(), r#type: None, }, ], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let x = Tensor::from_vec( vec![-1.0f32, 2.0f32, -3.0f32, 4.0f32], &[2, 2], &Device::Cpu, )?; let mut inputs: HashMap<String, Tensor> = HashMap::new(); inputs.insert(INPUT_X.to_string(), x); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); let results = z.to_vec2::<f32>()?; assert_eq!(results, vec![vec![1.0, 2.0], vec![3.0, 4.0]]); Ok(()) } // "Cos" #[test] fn test_cos_operation() -> Result<()> { let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "Cos".to_string(), domain: "".to_string(), attribute: vec![], input: vec![INPUT_X.to_string()], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![ ValueInfoProto { name: INPUT_X.to_string(), doc_string: "".to_string(), r#type: None, }, ValueInfoProto { name: INPUT_Y.to_string(), doc_string: "".to_string(), r#type: None, }, ], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let x = Tensor::from_vec(vec![0.0f32, 1.0f32, 2.0f32, 3.0f32], &[2, 2], &Device::Cpu)?; let mut inputs: HashMap<String, Tensor> = HashMap::new(); inputs.insert(INPUT_X.to_string(), x); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); assert_eq!(to_vec2_round(z, 4)?, [[1.0, 0.5403], [-0.4161, -0.99]]); Ok(()) } // "Sin" #[test] fn test_sin_operation() -> Result<()> { let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "Sin".to_string(), domain: "".to_string(), attribute: vec![], input: vec![INPUT_X.to_string()], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![ ValueInfoProto { name: INPUT_X.to_string(), doc_string: "".to_string(), r#type: None, }, ValueInfoProto { name: INPUT_Y.to_string(), doc_string: "".to_string(), r#type: None, }, ], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let x = Tensor::from_vec(vec![0.0f32, 1.0f32, 2.0f32, 3.0f32], &[2, 2], &Device::Cpu)?; let mut inputs: HashMap<String, Tensor> = HashMap::new(); inputs.insert(INPUT_X.to_string(), x); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); assert_eq!(to_vec2_round(z, 4)?, [[0.0, 0.8415], [0.9093, 0.1411]]); Ok(()) } // "Neg" #[test] fn test_neg_operation() -> Result<()> { let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "Neg".to_string(), domain: "".to_string(), attribute: vec![], input: vec![INPUT_X.to_string()], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![ ValueInfoProto { name: INPUT_X.to_string(), doc_string: "".to_string(), r#type: None, }, ValueInfoProto { name: INPUT_Y.to_string(), doc_string: "".to_string(), r#type: None, }, ], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let x = Tensor::from_vec(vec![1.0f32, 2.0f32, 3.0f32, 4.0f32], &[2, 2], &Device::Cpu)?; let mut inputs: HashMap<String, Tensor> = HashMap::new(); inputs.insert(INPUT_X.to_string(), x); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); let results = z.to_vec2::<f32>()?; assert_eq!(results, vec![vec![-1.0, -2.0], vec![-3.0, -4.0]]); Ok(()) } // "Erf" // #[test] // "Tanh" #[test] fn test_tanh_operation() -> Result<()> { let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "Tanh".to_string(), domain: "".to_string(), attribute: vec![], input: vec![INPUT_X.to_string()], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![ ValueInfoProto { name: INPUT_X.to_string(), doc_string: "".to_string(), r#type: None, }, ValueInfoProto { name: INPUT_Y.to_string(), doc_string: "".to_string(), r#type: None, }, ], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let x = Tensor::from_vec(vec![0.0f32, 1.0f32, 2.0f32, 3.0f32], &[2, 2], &Device::Cpu)?; let mut inputs: HashMap<String, Tensor> = HashMap::new(); inputs.insert(INPUT_X.to_string(), x); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); let results = z.to_vec2::<f32>()?; assert_eq!( results, vec![vec![0.0, 0.7615942], vec![0.9640276, 0.9950548]] ); Ok(()) } // "Sigmoid" #[test] fn test_sigmoid_operation() -> Result<()> { let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "Sigmoid".to_string(), domain: "".to_string(), attribute: vec![], input: vec![INPUT_X.to_string()], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![ ValueInfoProto { name: INPUT_X.to_string(), doc_string: "".to_string(), r#type: None, }, ValueInfoProto { name: INPUT_Y.to_string(), doc_string: "".to_string(), r#type: None, }, ], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let x = Tensor::from_vec(vec![0.0f32, 1.0f32, 2.0f32, 3.0f32], &[2, 2], &Device::Cpu)?; let mut inputs: HashMap<String, Tensor> = HashMap::new(); inputs.insert(INPUT_X.to_string(), x); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); let results = z.to_vec2::<f32>()?; assert_eq!( results, vec![vec![0.5, 0.7310586], vec![0.880797, 0.95257413]] ); Ok(()) } // "Gelu" #[test] fn test_gelu_operation() -> Result<()> { let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "Gelu".to_string(), domain: "".to_string(), attribute: vec![], input: vec![INPUT_X.to_string()], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![ ValueInfoProto { name: INPUT_X.to_string(), doc_string: "".to_string(), r#type: None, }, ValueInfoProto { name: INPUT_Y.to_string(), doc_string: "".to_string(), r#type: None, }, ], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let x = Tensor::from_vec(vec![0.0f32, 1.0f32, 2.0f32, 3.0f32], &[2, 2], &Device::Cpu)?; let mut inputs: HashMap<String, Tensor> = HashMap::new(); inputs.insert(INPUT_X.to_string(), x); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); let results = z.to_vec2::<f32>()?; assert_eq!( results, vec![vec![0.0, 0.8413448], vec![1.9544997, 2.9959502]] ); Ok(()) } // "Relu" #[test] fn test_relu_operation() -> Result<()> { let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "Relu".to_string(), domain: "".to_string(), attribute: vec![], input: vec![INPUT_X.to_string()], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![ValueInfoProto { name: INPUT_X.to_string(), doc_string: "".to_string(), r#type: None, }], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let x = Tensor::from_vec( vec![-1.0f32, 1.0f32, -2.0f32, 3.0f32], &[2, 2], &Device::Cpu, )?; let mut inputs: HashMap<String, Tensor> = HashMap::new(); inputs.insert(INPUT_X.to_string(), x); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); let results = z.to_vec2::<f32>()?; assert_eq!(results, vec![vec![0.0, 1.0], vec![0.0, 3.0]]); Ok(()) } // "PRelu" #[test] fn test_prelu_operation() -> Result<()> { let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "PRelu".to_string(), domain: "".to_string(), attribute: vec![], input: vec![INPUT_X.to_string(), INPUT_Y.to_string()], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![ ValueInfoProto { name: INPUT_X.to_string(), doc_string: "".to_string(), r#type: None, }, ValueInfoProto { name: INPUT_Y.to_string(), doc_string: "".to_string(), r#type: None, }, ], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let x: Tensor = Tensor::from_vec( vec![-1.0f32, 1.0f32, -2.0f32, 3.0f32], &[2, 2], &Device::Cpu, )?; let y: Tensor = Tensor::from_vec(vec![1.0f32, 1.1f32, 1.2f32, 1.3f32], &[2, 2], &Device::Cpu)?; let mut inputs: HashMap<String, Tensor> = HashMap::new(); inputs.insert(INPUT_X.to_string(), x); inputs.insert(INPUT_Y.to_string(), y); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); let results = z.to_vec2::<f32>()?; assert_eq!(results, vec![vec![-1.0, 1.0], vec![-2.4, 3.0]]); Ok(()) } // "Constant" // #[test] // "Cast" // #[test] // "ReduceMax" #[test] fn test_reduce_max() -> Result<()> { // Tests with random data generated with `np.random.uniform` // https://github.com/onnx/onnx/blob/main/docs/Operators.md#examples-119 bool_inputs // No special treatment reqired for bool // `np.maximum.reduce(data, axis=axes, keepdims=True)` test( &[[1_u8, 1], [1, 0], [0, 1], [0, 0]], Some(vec![1]), 1, None, &[[1_u8], [1], [1], [0]], false, )?; // https://github.com/onnx/onnx/blob/main/docs/Operators.md#examples-119 default_axes_keepdims // `np.maximum.reduce(data, axis=None, keepdims=True)` test( &[ [[5., 1.], [20., 2.]], [[30., 1.], [40., 2.]], [[55., 1.], [60., 2.]], ], None, 1, None, &[[[60.]]], false, )?; // same as above but with random test( &[ [[-7.648377, -5.4018507], [-7.318765, 7.2374434]], [[6.304022, 4.939862], [4.5435624, 3.072864]], [[-2.5058026, 8.008944], [9.587318, -8.794852]], ], None, 1, None, &[[[9.587318]]], false, )?; // https://github.com/onnx/onnx/blob/main/docs/Operators.md#examples-119 default_axes_donot_keep_dims // `np.maximum.reduce(data, axis=None, keepdims=False)` test( &[ [[5., 1.], [20., 2.]], [[30., 1.], [40., 2.]], [[55., 1.], [60., 2.]], ], None, 0, None, 60., false, )?; // same as above but with random // `np.maximum.reduce(data, axis=None, keepdims=False)` test( &[ [[-7.648377, -5.4018507], [-7.318765, 7.2374434]], [[6.304022, 4.939862], [4.5435624, 3.072864]], [[-2.5058026, 8.008944], [9.587318, -8.794852]], ], None, 0, None, 9.587318, false, )?; // https://github.com/onnx/onnx/blob/main/docs/Operators.md#examples-119 keepdims // `np.maximum.reduce(data, axis=tuple(axes), keepdims=True)` test( &[ [[5., 1.], [20., 2.]], [[30., 1.], [40., 2.]], [[55., 1.], [60., 2.]], ], Some(vec![1]), 1, None, &[[[20., 2.]], [[40., 2.]], [[60., 2.]]], false, )?; // keepdims with random data // `np.maximum.reduce(data, axis=tuple(axes), keepdims=True)` test( &[ [[-7.648377, -5.4018507], [-7.318765, 7.2374434]], [[6.304022, 4.939862], [4.5435624, 3.072864]], [[-2.5058026, 8.008944], [9.587318, -8.794852]], ], Some(vec![1]), 1, None, &[ [[-7.318765, 7.2374434]], [[6.304022, 4.939862]], [[9.587318, 8.008944]], ], false, )?; // https://github.com/onnx/onnx/blob/main/docs/Operators.md#examples-119 negative_axes_keepdims // axes = np.array([-1], dtype=np.int64) // `np.maximum.reduce(data, axis=tuple(axes), keepdims=True)` test( &[ [[5., 1.], [20., 2.]], [[30., 1.], [40., 2.]], [[55., 1.], [60., 2.]], ], Some(vec![-1]), 1, None, &[[[5.], [20.]], [[30.], [40.]], [[55.], [60.]]], false, )?; // axes = np.array([-2], dtype=np.int64) test( &[ [[5., 1.], [20., 2.]], [[30., 1.], [40., 2.]], [[55., 1.], [60., 2.]], ], Some(vec![-2]), 1, None, &[[[20., 2.]], [[40., 2.]], [[60., 2.]]], false, )?; // with random test( &[ [[-4.1676497, -2.7603748], [-4.5138783, -0.762791]], [[-6.3792877, 7.1619177], [-9.958144, 6.3753467]], [[9.046973, 3.4554052], [-5.4674335, 5.4642754]], ], Some(vec![-2]), 1, None, &[ [[-4.1676497, -0.762791]], [[-6.3792877, 7.1619177]], [[9.046973, 5.4642754]], ], false, )?; // Multiple axes - keepdims=1 (true) // axes = np.array([0, 1], dtype=np.int64) // np.maximum.reduce(data, axis=tuple(axes), keepdims=True) test( &[ [[5., 1.], [20., 2.]], [[30., 1.], [40., 2.]], [[55., 1.], [60., 2.]], ], Some(vec![0, 1]), 1, None, &[[[60., 2.]]], false, )?; // axes = np.array([0, 2], dtype=np.int64) // np.maximum.reduce(data, axis=tuple(axes), keepdims=True) test( &[ [[5., 1.], [20., 2.]], [[30., 1.], [40., 2.]], [[55., 1.], [60., 2.]], ], Some(vec![0, 2]), 1, None, &[[[55.], [60.]]], false, )?; // axes = np.array([2, 1], dtype=np.int64) // np.maximum.reduce(data, axis=tuple(axes), keepdims=True) test( &[ [[5., 1.], [20., 2.]], [[30., 1.], [40., 2.]], [[55., 1.], [60., 2.]], ], Some(vec![2, 1]), 1, None, &[[[20.]], [[40.]], [[60.]]], false, )?; // axes = np.array([2, 0, 1], dtype=np.int64) // np.maximum.reduce(data, axis=tuple(axes), keepdims=True) test( &[ [[5., 1.], [20., 2.]], [[30., 1.], [40., 2.]], [[55., 1.], [60., 2.]], ], Some(vec![2, 0, 1]), 1, None, &[[[60.]]], false, )?; // Multiple axes - keepdims=0 (false) // axes = np.array([0, 1], dtype=np.int64) // np.maximum.reduce(data, axis=tuple(axes), keepdims=False) test( &[ [[5., 1.], [20., 2.]], [[30., 1.], [40., 2.]], [[55., 1.], [60., 2.]], ], Some(vec![0, 1]), 0, None, &[60., 2.], false, )?; // axes = np.array([0, 2], dtype=np.int64) // np.maximum.reduce(data, axis=tuple(axes), keepdims=False) test( &[ [[5., 1.], [20., 2.]], [[30., 1.], [40., 2.]], [[55., 1.], [60., 2.]], ], Some(vec![0, 2]), 0, None, &[55., 60.], false, )?; // axes = np.array([2, 1], dtype=np.int64) // np.maximum.reduce(data, axis=tuple(axes), keepdims=False) test( &[ [[5., 1.], [20., 2.]], [[30., 1.], [40., 2.]], [[55., 1.], [60., 2.]], ], Some(vec![2, 1]), 0, None, &[20., 40., 60.], false, )?; // axes = np.array([2, 0, 1], dtype=np.int64) // np.maximum.reduce(data, axis=tuple(axes), keepdims=False) test( &[ [[5., 1.], [20., 2.]], [[30., 1.], [40., 2.]], [[55., 1.], [60., 2.]], ], Some(vec![2, 0, 1]), 0, None, 60., false, )?; // Multiple axes - negative `axes` - keepdims=1 (true) // axes = np.array([-1, 0, 1], dtype=np.int64) // np.maximum.reduce(data, axis=tuple(axes), keepdims=True) test( &[ [[5., 1.], [20., 2.]], [[30., 1.], [40., 2.]], [[55., 1.], [60., 2.]], ], Some(vec![-1, 0, 1]), 1, None, &[[[60.]]], false, )?; // Multiple axes - negative `axes` - keepdims=0 (false) // axes = np.array([-1, 0, 1], dtype=np.int64) // np.maximum.reduce(data, axis=tuple(axes), keepdims=True) test( &[ [[5., 1.], [20., 2.]], [[30., 1.], [40., 2.]], [[55., 1.], [60., 2.]], ], Some(vec![-1, 0, 1]), 0, None, 60., false, )?; // `noop_with_empty_axes = true (1)` should yield tensor equivallent to the input tensor test( &[ [[-7.648377, -5.4018507], [-7.318765, 7.2374434]], [[6.304022, 4.939862], [4.5435624, 3.072864]], [[-2.5058026, 8.008944], [9.587318, -8.794852]], ], None, 0, Some(1), &[ [[-7.648377, -5.4018507], [-7.318765, 7.2374434]], [[6.304022, 4.939862], [4.5435624, 3.072864]], [[-2.5058026, 8.008944], [9.587318, -8.794852]], ], false, )?; // Rank-0 arrays are also valid test(42., None, 0, None, 42., false)?; test(42., None, 1, None, 42., false)?; // Negative test - expect error // axes = np.array([-2, 0, 1], dtype=np.int64) // np.maximum.reduce(data, axis=tuple(axes), keepdims=True) // Should error out with `duplicate value in "axes"` assert!(test( &[ [[5., 1.], [20., 2.]], [[30., 1.], [40., 2.]], [[55., 1.], [60., 2.]], ], Some(vec![-2, 0, 1]), 1, None, &[[[60.]]], false ) .is_err()); // Negative test - expect error // Should error out on empty set assert!(test(&[[1_u8; 0]], Some(vec![-2, 0, 1]), 1, None, &[0.], false).is_err()); // Backward compatibility test( &[ [[5., 1.], [20., 2.]], [[30., 1.], [40., 2.]], [[55., 1.], [60., 2.]], ], Some(vec![-1, 0, 1]), 0, None, 60., true, )?; fn test( data: impl NdArray, axes: Option<Vec<i64>>, keepdims: i64, noop_with_empty_axes: Option<i64>, expected: impl NdArray, backward_comp: bool, ) -> Result<()> { let has_axes = axes.is_some(); let att_keepdims = AttributeProto { name: "keepdims".to_string(), ref_attr_name: "keepdims".to_string(), i: keepdims, doc_string: "keepdims".to_string(), r#type: 2, f: 0.0, s: vec![], t: None, g: None, sparse_tensor: None, tp: None, floats: vec![], ints: vec![], strings: vec![], tensors: vec![], graphs: vec![], sparse_tensors: vec![], type_protos: vec![], }; let mut attribute = vec![att_keepdims]; if let Some(noop) = noop_with_empty_axes { if !has_axes { let att_no_op_empty_axes = AttributeProto { name: "noop_with_empty_axes".to_string(), ref_attr_name: "noop_with_empty_axes".to_string(), i: noop, doc_string: "noop_with_empty_axes".to_string(), r#type: 2, f: 0.0, s: vec![], t: None, g: None, sparse_tensor: None, tp: None, floats: vec![], ints: vec![], strings: vec![], tensors: vec![], graphs: vec![], sparse_tensors: vec![], type_protos: vec![], }; attribute.push(att_no_op_empty_axes); } } if has_axes && backward_comp { attribute.push(AttributeProto { name: "axes".to_string(), ref_attr_name: "axes".to_string(), i: 0, doc_string: "axes".to_string(), r#type: 7, f: 0.0, s: vec![], t: None, g: None, sparse_tensor: None, tp: None, floats: vec![], ints: axes.clone().unwrap_or_default(), strings: vec![], tensors: vec![], graphs: vec![], sparse_tensors: vec![], type_protos: vec![], }); } let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "ReduceMax".to_string(), domain: "".to_string(), attribute, input: if has_axes && !backward_comp { vec![INPUT_X.to_string(), INPUT_Y.to_string()] } else { vec![INPUT_X.to_string()] }, output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let mut inputs: HashMap<String, Tensor> = HashMap::new(); let input_tensor = Tensor::new(data, &Device::Cpu)?; let input_dtype = input_tensor.dtype(); inputs.insert(INPUT_X.to_string(), input_tensor); if !backward_comp { if let Some(a) = axes { inputs.insert(INPUT_Y.to_string(), Tensor::new(a, &Device::Cpu)?); } } let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); let expected = Tensor::new(expected, &Device::Cpu)?; match expected.dims().len() { 0 => { if input_dtype == DType::U8 { assert_eq!(z.to_vec0::<u8>()?, expected.to_vec0::<u8>()?) } else { assert_eq!(z.to_vec0::<f64>()?, expected.to_vec0::<f64>()?) } } 1 => { if input_dtype == DType::U8 { assert_eq!(z.to_vec1::<u8>()?, expected.to_vec1::<u8>()?) } else { assert_eq!(z.to_vec1::<f64>()?, expected.to_vec1::<f64>()?) } } 2 => { if input_dtype == DType::U8 { assert_eq!(z.to_vec2::<u8>()?, expected.to_vec2::<u8>()?) } else { assert_eq!(z.to_vec2::<f64>()?, expected.to_vec2::<f64>()?) } } 3 => { if input_dtype == DType::U8 { assert_eq!(z.to_vec3::<u8>()?, expected.to_vec3::<u8>()?) } else { assert_eq!(z.to_vec3::<f64>()?, expected.to_vec3::<f64>()?) } } _ => unreachable!(), }; Ok(()) } Ok(()) } // "ReduceMin" #[test] fn test_reduce_min() -> Result<()> { // Tests with random data generated with `np.random.uniform` // https://github.com/onnx/onnx/blob/main/docs/Operators.md#examples-121 bool_inputs // No special treatment reqired for bool // `np.minimum.reduce(data, axis=axes, keepdims=True)` test( &[[1_u8, 1], [1, 0], [0, 1], [0, 0]], Some(vec![1]), 1, None, &[[1_u8], [0], [0], [0]], false, )?; // https://github.com/onnx/onnx/blob/main/docs/Operators.md#examples-121 default_axes_keepdims // `np.minimum.reduce(data, axis=None, keepdims=True)` test( &[ [[5., 1.], [20., 2.]], [[30., 1.], [40., 2.]], [[55., 1.], [60., 2.]], ], None, 1, None, &[[[1.]]], false, )?; // same as above but with random test( &[ [[-7.648377, -5.4018507], [-7.318765, 7.2374434]], [[6.304022, 4.939862], [4.5435624, 3.072864]], [[-2.5058026, 8.008944], [9.587318, -8.794852]], ], None, 1, None, &[[[-8.794852]]], false, )?; // https://github.com/onnx/onnx/blob/main/docs/Operators.md#examples-121 default_axes_donot_keep_dims // `np.minimum.reduce(data, axis=None, keepdims=False)` test( &[ [[5., 1.], [20., 2.]], [[30., 1.], [40., 2.]], [[55., 1.], [60., 2.]], ], None, 0, None, 1., false, )?; // same as above but with random // `np.minimum.reduce(data, axis=None, keepdims=False)` test( &[ [[-7.648377, -5.4018507], [-7.318765, 7.2374434]], [[6.304022, 4.939862], [4.5435624, 3.072864]], [[-2.5058026, 8.008944], [9.587318, -8.794852]], ], None, 0, None, -8.794852, false, )?; // https://github.com/onnx/onnx/blob/main/docs/Operators.md#examples-121 keepdims // `np.minimum.reduce(data, axis=tuple(axes), keepdims=True)` test( &[ [[5., 1.], [20., 2.]], [[30., 1.], [40., 2.]], [[55., 1.], [60., 2.]], ], Some(vec![1]), 1, None, &[[[5., 1.]], [[30., 1.]], [[55., 1.]]], false, )?; // keepdims with random data // `np.minimum.reduce(data, axis=tuple(axes), keepdims=True)` test( &[ [[-7.648377, -5.4018507], [-7.318765, 7.2374434]], [[6.304022, 4.939862], [4.5435624, 3.072864]], [[-2.5058026, 8.008944], [9.587318, -8.794852]], ], Some(vec![1]), 1, None, &[ [[-7.648377, -5.4018507]], [[4.5435624, 3.072864]], [[-2.5058026, -8.794852]], ], false, )?; // https://github.com/onnx/onnx/blob/main/docs/Operators.md#examples-121 negative_axes_keepdims // axes = np.array([-1], dtype=np.int64) // `np.minimum.reduce(data, axis=tuple(axes), keepdims=True)` test( &[ [[5., 1.], [20., 2.]], [[30., 1.], [40., 2.]], [[55., 1.], [60., 2.]], ], Some(vec![-1]), 1, None, &[[[1.], [2.]], [[1.], [2.]], [[1.], [2.]]], false, )?; // axes = np.array([-2], dtype=np.int64) test( &[ [[5., 1.], [20., 2.]], [[30., 1.], [40., 2.]], [[55., 1.], [60., 2.]], ], Some(vec![-2]), 1, None, &[[[5., 1.]], [[30., 1.]], [[55., 1.]]], false, )?; // with random test( &[ [[-4.1676497, -2.7603748], [-4.5138783, -0.762791]], [[-6.3792877, 7.1619177], [-9.958144, 6.3753467]], [[9.046973, 3.4554052], [-5.4674335, 5.4642754]], ], Some(vec![-2]), 1, None, &[ [[-4.5138783, -2.7603748]], [[-9.958144, 6.3753467]], [[-5.4674335, 3.4554052]], ], false, )?; // Multiple axes - keepdims=1 (true) // axes = np.array([0, 1], dtype=np.int64) // np.minimum.reduce(data, axis=tuple(axes), keepdims=True) test( &[ [[5., 1.], [20., 2.]], [[30., 1.], [40., 2.]], [[55., 1.], [60., 2.]], ], Some(vec![0, 1]), 1, None, &[[[5., 1.]]], false, )?; // axes = np.array([0, 2], dtype=np.int64) // np.minimum.reduce(data, axis=tuple(axes), keepdims=True) test( &[ [[5., 1.], [20., 2.]], [[30., 1.], [40., 2.]], [[55., 1.], [60., 2.]], ], Some(vec![0, 2]), 1, None, &[[[1.], [2.]]], false, )?; // axes = np.array([2, 1], dtype=np.int64) // np.minimum.reduce(data, axis=tuple(axes), keepdims=True) test( &[ [[5., 1.], [20., 2.]], [[30., 1.], [40., 2.]], [[55., 1.], [60., 2.]], ], Some(vec![2, 1]), 1, None, &[[[1.]], [[1.]], [[1.]]], false, )?; // axes = np.array([2, 0, 1], dtype=np.int64) // np.minimum.reduce(data, axis=tuple(axes), keepdims=True) test( &[ [[5., 1.], [20., 2.]], [[30., 1.], [40., 2.]], [[55., 1.], [60., 2.]], ], Some(vec![2, 0, 1]), 1, None, &[[[1.]]], false, )?; // Multiple axes - keepdims=0 (false) // axes = np.array([0, 1], dtype=np.int64) // np.minimum.reduce(data, axis=tuple(axes), keepdims=False) test( &[ [[5., 1.], [20., 2.]], [[30., 1.], [40., 2.]], [[55., 1.], [60., 2.]], ], Some(vec![0, 1]), 0, None, &[5., 1.], false, )?; // axes = np.array([0, 2], dtype=np.int64) // np.minimum.reduce(data, axis=tuple(axes), keepdims=False) test( &[ [[5., 1.], [20., 2.]], [[30., 1.], [40., 2.]], [[55., 1.], [60., 2.]], ], Some(vec![0, 2]), 0, None, &[1., 2.], false, )?; // axes = np.array([2, 1], dtype=np.int64) // np.minimum.reduce(data, axis=tuple(axes), keepdims=False) test( &[ [[5., 1.], [20., 2.]], [[30., 1.], [40., 2.]], [[55., 1.], [60., 2.]], ], Some(vec![2, 1]), 0, None, &[1., 1., 1.], false, )?; // axes = np.array([2, 0, 1], dtype=np.int64) // np.minimum.reduce(data, axis=tuple(axes), keepdims=False) test( &[ [[5., 1.], [20., 2.]], [[30., 1.], [40., 2.]], [[55., 1.], [60., 2.]], ], Some(vec![2, 0, 1]), 0, None, 1., false, )?; // Multiple axes - negative `axes` - keepdims=1 (true) // axes = np.array([-1, 0, 1], dtype=np.int64) // np.minimum.reduce(data, axis=tuple(axes), keepdims=True) test( &[ [[5., 1.], [20., 2.]], [[30., 1.], [40., 2.]], [[55., 1.], [60., 2.]], ], Some(vec![-1, 0, 1]), 1, None, &[[[1.]]], false, )?; // Multiple axes - negative `axes` - keepdims=0 (false) // axes = np.array([-1, 0, 1], dtype=np.int64) // np.minimum.reduce(data, axis=tuple(axes), keepdims=True) test( &[ [[5., 1.], [20., 2.]], [[30., 1.], [40., 2.]], [[55., 1.], [60., 2.]], ], Some(vec![-1, 0, 1]), 0, None, 1., false, )?; // `noop_with_empty_axes = true (1)` should yield tensor equivallent to the input tensor test( &[ [[-7.648377, -5.4018507], [-7.318765, 7.2374434]], [[6.304022, 4.939862], [4.5435624, 3.072864]], [[-2.5058026, 8.008944], [9.587318, -8.794852]], ], None, 0, Some(1), &[ [[-7.648377, -5.4018507], [-7.318765, 7.2374434]], [[6.304022, 4.939862], [4.5435624, 3.072864]], [[-2.5058026, 8.008944], [9.587318, -8.794852]], ], false, )?; // Rank-0 tensors are also valid test(42., None, 0, None, 42., false)?; test(42., None, 1, None, 42., false)?; // Negative test - expect error // axes = np.array([-2, 0, 1], dtype=np.int64) // np.minimum.reduce(data, axis=tuple(axes), keepdims=True) // Should error out with `duplicate value in "axes"` assert!(test( &[ [[5., 1.], [20., 2.]], [[30., 1.], [40., 2.]], [[55., 1.], [60., 2.]], ], Some(vec![-2, 0, 1]), 1, None, &[0.], false ) .is_err()); // Negative test - expect error // Should error out on empty set assert!(test(&[[1_u8; 0]], Some(vec![-2, 0, 1]), 1, None, &[0.], false).is_err()); // Backward compatibility test( &[ [[5., 1.], [20., 2.]], [[30., 1.], [40., 2.]], [[55., 1.], [60., 2.]], ], Some(vec![-1, 0, 1]), 0, None, 1., true, )?; fn test( data: impl NdArray, axes: Option<Vec<i64>>, keepdims: i64, noop_with_empty_axes: Option<i64>, expected: impl NdArray, backward_comp: bool, ) -> Result<()> { let has_axes = axes.is_some(); let att_keepdims = AttributeProto { name: "keepdims".to_string(), ref_attr_name: "keepdims".to_string(), i: keepdims, doc_string: "keepdims".to_string(), r#type: 2, f: 0.0, s: vec![], t: None, g: None, sparse_tensor: None, tp: None, floats: vec![], ints: vec![], strings: vec![], tensors: vec![], graphs: vec![], sparse_tensors: vec![], type_protos: vec![], }; let mut attribute = vec![att_keepdims]; if let Some(noop) = noop_with_empty_axes { if !has_axes { let att_no_op_empty_axes = AttributeProto { name: "noop_with_empty_axes".to_string(), ref_attr_name: "noop_with_empty_axes".to_string(), i: noop, doc_string: "noop_with_empty_axes".to_string(), r#type: 2, f: 0.0, s: vec![], t: None, g: None, sparse_tensor: None, tp: None, floats: vec![], ints: vec![], strings: vec![], tensors: vec![], graphs: vec![], sparse_tensors: vec![], type_protos: vec![], }; attribute.push(att_no_op_empty_axes); } } if has_axes && backward_comp { attribute.push(AttributeProto { name: "axes".to_string(), ref_attr_name: "axes".to_string(), i: 0, doc_string: "axes".to_string(), r#type: 7, f: 0.0, s: vec![], t: None, g: None, sparse_tensor: None, tp: None, floats: vec![], ints: axes.clone().unwrap_or_default(), strings: vec![], tensors: vec![], graphs: vec![], sparse_tensors: vec![], type_protos: vec![], }); } let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "ReduceMin".to_string(), domain: "".to_string(), attribute, input: if has_axes && !backward_comp { vec![INPUT_X.to_string(), INPUT_Y.to_string()] } else { vec![INPUT_X.to_string()] }, output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let mut inputs: HashMap<String, Tensor> = HashMap::new(); let input_tensor = Tensor::new(data, &Device::Cpu)?; let input_dtype = input_tensor.dtype(); inputs.insert(INPUT_X.to_string(), input_tensor); if !backward_comp { if let Some(a) = axes { inputs.insert(INPUT_Y.to_string(), Tensor::new(a, &Device::Cpu)?); } } let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); let expected = Tensor::new(expected, &Device::Cpu)?; match expected.dims().len() { 0 => { if input_dtype == DType::U8 { assert_eq!(z.to_vec0::<u8>()?, expected.to_vec0::<u8>()?) } else { assert_eq!(z.to_vec0::<f64>()?, expected.to_vec0::<f64>()?) } } 1 => { if input_dtype == DType::U8 { assert_eq!(z.to_vec1::<u8>()?, expected.to_vec1::<u8>()?) } else { assert_eq!(z.to_vec1::<f64>()?, expected.to_vec1::<f64>()?) } } 2 => { if input_dtype == DType::U8 { assert_eq!(z.to_vec2::<u8>()?, expected.to_vec2::<u8>()?) } else { assert_eq!(z.to_vec2::<f64>()?, expected.to_vec2::<f64>()?) } } 3 => { if input_dtype == DType::U8 { assert_eq!(z.to_vec3::<u8>()?, expected.to_vec3::<u8>()?) } else { assert_eq!(z.to_vec3::<f64>()?, expected.to_vec3::<f64>()?) } } _ => unreachable!(), }; Ok(()) } Ok(()) } // "ReduceMean" #[test] fn test_reduce_mean() -> Result<()> { // https://github.com/onnx/onnx/blob/main/docs/Operators.md#examples-120 default_axes_keepdims test( &[ [[5., 1.], [20., 2.]], [[30., 1.], [40., 2.]], [[55., 1.], [60., 2.]], ], None, 1, &[[[18.25]]], )?; // https://github.com/onnx/onnx/blob/main/docs/Operators.md#examples-120 do_no_keepdims test( &[ [[5., 1.], [20., 2.]], [[30., 1.], [40., 2.]], [[55., 1.], [60., 2.]], ], Some(vec![1]), 0, &[[12.5, 1.5], [35.0, 1.5], [57.5, 1.5]], )?; // https://github.com/onnx/onnx/blob/main/docs/Operators.md#examples-120 keepdims test( &[ [[5., 1.], [20., 2.]], [[30., 1.], [40., 2.]], [[55., 1.], [60., 2.]], ], Some(vec![1]), 1, &[[[12.5, 1.5]], [[35.0, 1.5]], [[57.5, 1.5]]], )?; // https://github.com/onnx/onnx/blob/main/docs/Operators.md#examples-120 negative_axes_keepdims test( &[ [[5., 1.], [20., 2.]], [[30., 1.], [40., 2.]], [[55., 1.], [60., 2.]], ], Some(vec![-2]), 1, &[[[12.5, 1.5]], [[35.0, 1.5]], [[57.5, 1.5]]], )?; // All the test data below was generated based on numpy's np.mean test( &[ [[5., 1.], [20., 2.]], [[30., 1.], [40., 2.]], [[55., 1.], [60., 2.]], ], Some(vec![1, 2]), 0, &[7.0, 18.25, 29.5], )?; test( &[ [[5., 1.], [20., 2.]], [[30., 1.], [40., 2.]], [[55., 1.], [60., 2.]], ], Some(vec![1, 2]), 1, &[[[7.0]], [[18.25]], [[29.5]]], )?; test(&[1., 2., 3.], None, 1, &[2.0])?; fn test( data: impl NdArray, axes: Option<Vec<i64>>, keepdims: i64, expected: impl NdArray, ) -> Result<()> { let has_axes = axes.is_some(); let att_axes = AttributeProto { name: "axes".to_string(), ref_attr_name: "axes".to_string(), i: 0, doc_string: "axes".to_string(), r#type: 7, f: 0.0, s: vec![], t: None, g: None, sparse_tensor: None, tp: None, floats: vec![], ints: axes.unwrap_or_default(), strings: vec![], tensors: vec![], graphs: vec![], sparse_tensors: vec![], type_protos: vec![], }; let att_keepdims = AttributeProto { name: "keepdims".to_string(), ref_attr_name: "keepdims".to_string(), i: keepdims, doc_string: "keepdims".to_string(), r#type: 2, f: 0.0, s: vec![], t: None, g: None, sparse_tensor: None, tp: None, floats: vec![], ints: vec![], strings: vec![], tensors: vec![], graphs: vec![], sparse_tensors: vec![], type_protos: vec![], }; let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "ReduceMean".to_string(), domain: "".to_string(), attribute: if has_axes { vec![att_axes, att_keepdims] } else { vec![att_keepdims] }, input: vec![INPUT_X.to_string()], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let mut inputs: HashMap<String, Tensor> = HashMap::new(); inputs.insert(INPUT_X.to_string(), Tensor::new(data, &Device::Cpu)?); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); let expected = Tensor::new(expected, &Device::Cpu)?; match expected.dims().len() { 0 => assert_eq!(z.to_vec0::<f64>()?, expected.to_vec0::<f64>()?), 1 => assert_eq!(z.to_vec1::<f64>()?, expected.to_vec1::<f64>()?), 2 => assert_eq!(z.to_vec2::<f64>()?, expected.to_vec2::<f64>()?), 3 => assert_eq!(z.to_vec3::<f64>()?, expected.to_vec3::<f64>()?), _ => unreachable!(), }; Ok(()) } Ok(()) } // "Sqrt" #[test] fn test_sqrt() -> Result<()> { // https://github.com/onnx/onnx/blob/main/docs/Operators.md#examples-155 test(&[1., 4., 9.], &[1., 2., 3.])?; fn test(data: impl NdArray, expected: impl NdArray) -> Result<()> { let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "Sqrt".to_string(), domain: "".to_string(), attribute: vec![], input: vec![INPUT_X.to_string()], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let mut inputs: HashMap<String, Tensor> = HashMap::new(); inputs.insert(INPUT_X.to_string(), Tensor::new(data, &Device::Cpu)?); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); let expected = Tensor::new(expected, &Device::Cpu)?; match expected.dims().len() { 0 => assert_eq!(z.to_vec0::<f64>()?, expected.to_vec0::<f64>()?), 1 => assert_eq!(z.to_vec1::<f64>()?, expected.to_vec1::<f64>()?), 2 => assert_eq!(z.to_vec2::<f64>()?, expected.to_vec2::<f64>()?), 3 => assert_eq!(z.to_vec3::<f64>()?, expected.to_vec3::<f64>()?), _ => unreachable!(), }; Ok(()) } Ok(()) } // "RandomUniform" #[test] fn test_random_uniform() -> Result<()> { test(vec![3, 2, 1, 4], None, None)?; test(vec![2, 2, 2, 2], Some(-10.0), None)?; test(vec![2, 2, 2, 2], None, Some(10.0))?; test(vec![1, 2, 3, 4], Some(-10.0), Some(10.0))?; fn test(shape: Vec<i64>, low: Option<f32>, high: Option<f32>) -> Result<()> { let att_low = AttributeProto { name: "low".to_string(), ref_attr_name: "low".to_string(), i: 0, doc_string: "low".to_string(), r#type: 1, // FLOAT f: low.unwrap_or(0.0), s: vec![], t: None, g: None, sparse_tensor: None, tp: None, floats: vec![], ints: vec![], strings: vec![], tensors: vec![], graphs: vec![], sparse_tensors: vec![], type_protos: vec![], }; let att_high = AttributeProto { name: "high".to_string(), ref_attr_name: "high".to_string(), i: 0, doc_string: "high".to_string(), r#type: 1, // FLOAT f: high.unwrap_or(1.0), s: vec![], t: None, g: None, sparse_tensor: None, tp: None, floats: vec![], ints: vec![], strings: vec![], tensors: vec![], graphs: vec![], sparse_tensors: vec![], type_protos: vec![], }; let att_shape = AttributeProto { name: "shape".to_string(), ref_attr_name: "shape".to_string(), i: 0, doc_string: "shape".to_string(), r#type: 7, // INTS f: 0.0, s: vec![], t: None, g: None, sparse_tensor: None, tp: None, floats: vec![], ints: shape, strings: vec![], tensors: vec![], graphs: vec![], sparse_tensors: vec![], type_protos: vec![], }; let att_dtype = AttributeProto { name: "dtype".to_string(), ref_attr_name: "dtype".to_string(), i: 11, // DOUBLE doc_string: "dtype".to_string(), r#type: 2, // INT f: 0.0, s: vec![], t: None, g: None, sparse_tensor: None, tp: None, floats: vec![], ints: vec![], strings: vec![], tensors: vec![], graphs: vec![], sparse_tensors: vec![], type_protos: vec![], }; let attrs = { let mut mut_attrs = vec![att_shape, att_dtype]; if low.is_some() { mut_attrs.push(att_low); } if high.is_some() { mut_attrs.push(att_high); } mut_attrs }; let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "RandomUniform".to_string(), domain: "".to_string(), attribute: attrs, input: vec![], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let eval = candle_onnx::simple_eval(&manual_graph, HashMap::new())?; assert_eq!(eval.len(), 1); let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); let min = z .flatten_all()? .to_vec1()? .into_iter() .reduce(f64::min) .unwrap(); let max = z .flatten_all()? .to_vec1()? .into_iter() .reduce(f64::max) .unwrap(); assert!(min >= low.unwrap_or(0.0).into()); assert!(max <= high.unwrap_or(1.0).into()); assert_ne!(min, max); Ok(()) } Ok(()) } // "RandomNormal" #[test] fn test_random_normal() -> Result<()> { test(vec![3, 2, 1, 4], None, None)?; test(vec![2, 2, 2, 2], Some(-10.0), None)?; test(vec![2, 2, 2, 2], None, Some(10.0))?; test(vec![1, 2, 3, 4], Some(-10.0), Some(10.0))?; fn test(shape: Vec<i64>, mean: Option<f32>, scale: Option<f32>) -> Result<()> { let att_mean = AttributeProto { name: "mean".to_string(), ref_attr_name: "mean".to_string(), i: 0, doc_string: "mean".to_string(), r#type: 1, // FLOAT f: mean.unwrap_or(0.0), s: vec![], t: None, g: None, sparse_tensor: None, tp: None, floats: vec![], ints: vec![], strings: vec![], tensors: vec![], graphs: vec![], sparse_tensors: vec![], type_protos: vec![], }; let att_scale = AttributeProto { name: "scale".to_string(), ref_attr_name: "scale".to_string(), i: 0, doc_string: "scale".to_string(), r#type: 1, // FLOAT f: scale.unwrap_or(1.0), s: vec![], t: None, g: None, sparse_tensor: None, tp: None, floats: vec![], ints: vec![], strings: vec![], tensors: vec![], graphs: vec![], sparse_tensors: vec![], type_protos: vec![], }; let att_shape = AttributeProto { name: "shape".to_string(), ref_attr_name: "shape".to_string(), i: 0, doc_string: "shape".to_string(), r#type: 7, // INTS f: 0.0, s: vec![], t: None, g: None, sparse_tensor: None, tp: None, floats: vec![], ints: shape, strings: vec![], tensors: vec![], graphs: vec![], sparse_tensors: vec![], type_protos: vec![], }; let att_dtype = AttributeProto { name: "dtype".to_string(), ref_attr_name: "dtype".to_string(), i: 11, // DOUBLE doc_string: "dtype".to_string(), r#type: 2, // INT f: 0.0, s: vec![], t: None, g: None, sparse_tensor: None, tp: None, floats: vec![], ints: vec![], strings: vec![], tensors: vec![], graphs: vec![], sparse_tensors: vec![], type_protos: vec![], }; let attrs = { let mut mut_attrs = vec![att_shape, att_dtype]; if mean.is_some() { mut_attrs.push(att_mean); } if scale.is_some() { mut_attrs.push(att_scale); } mut_attrs }; let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "RandomNormal".to_string(), domain: "".to_string(), attribute: attrs, input: vec![], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let eval = candle_onnx::simple_eval(&manual_graph, HashMap::new())?; assert_eq!(eval.len(), 1); let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); let data = z.flatten_all()?.to_vec1::<f64>()?; // test if values are unique for (i, a) in data.iter().enumerate() { for (j, b) in data.iter().enumerate() { if i == j { continue; }; assert_ne!(a, b); } } Ok(()) } Ok(()) } // "Range" #[test] fn test_range() -> Result<()> { // https://github.com/onnx/onnx/blob/main/docs/Operators.md#examples-113 test(1., 5., 2., &[1., 3.])?; // https://github.com/onnx/onnx/blob/main/docs/Operators.md#examples-113 test(10i64, 6i64, -3i64, &[10i64, 7i64])?; fn test( start: impl NdArray, limit: impl NdArray, delta: impl NdArray, expected: impl NdArray, ) -> Result<()> { let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "Range".to_string(), domain: "".to_string(), attribute: vec![], input: vec![ INPUT_X.to_string(), INPUT_Y.to_string(), INPUT_A.to_string(), ], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let mut inputs: HashMap<String, Tensor> = HashMap::new(); inputs.insert(INPUT_X.to_string(), Tensor::new(start, &Device::Cpu)?); inputs.insert(INPUT_Y.to_string(), Tensor::new(limit, &Device::Cpu)?); inputs.insert(INPUT_A.to_string(), Tensor::new(delta, &Device::Cpu)?); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let z = eval .get(OUTPUT_Z) .expect("Output 'z' not found") .to_dtype(DType::F64)?; let expected = Tensor::new(expected, &Device::Cpu)?.to_dtype(DType::F64)?; match expected.dims().len() { 0 => assert_eq!(z.to_vec0::<f64>()?, expected.to_vec0::<f64>()?), 1 => assert_eq!(z.to_vec1::<f64>()?, expected.to_vec1::<f64>()?), 2 => assert_eq!(z.to_vec2::<f64>()?, expected.to_vec2::<f64>()?), 3 => assert_eq!(z.to_vec3::<f64>()?, expected.to_vec3::<f64>()?), _ => unreachable!(), }; Ok(()) } Ok(()) } // "Greater" #[test] fn test_greater() -> Result<()> { // https://github.com/onnx/onnx/blob/main/docs/Operators.md#examples-63 test(&[1., 2., 3.], &[3., 2., 1.], &[0u8, 0, 1])?; // https://github.com/onnx/onnx/blob/main/docs/Operators.md#examples-63 test(&[1., 2., 3.], 2., &[0u8, 0, 1])?; fn test(a: impl NdArray, b: impl NdArray, expected: impl NdArray) -> Result<()> { let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "Greater".to_string(), domain: "".to_string(), attribute: vec![], input: vec![INPUT_X.to_string(), INPUT_Y.to_string()], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let mut inputs: HashMap<String, Tensor> = HashMap::new(); inputs.insert(INPUT_X.to_string(), Tensor::new(a, &Device::Cpu)?); inputs.insert(INPUT_Y.to_string(), Tensor::new(b, &Device::Cpu)?); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let z = eval .get(OUTPUT_Z) .expect("Output 'z' not found") .to_dtype(DType::F64)?; let expected = Tensor::new(expected, &Device::Cpu)?.to_dtype(DType::F64)?; match expected.dims().len() { 0 => assert_eq!(z.to_vec0::<f64>()?, expected.to_vec0::<f64>()?), 1 => assert_eq!(z.to_vec1::<f64>()?, expected.to_vec1::<f64>()?), 2 => assert_eq!(z.to_vec2::<f64>()?, expected.to_vec2::<f64>()?), 3 => assert_eq!(z.to_vec3::<f64>()?, expected.to_vec3::<f64>()?), _ => unreachable!(), }; Ok(()) } Ok(()) } // "Less" #[test] fn test_less() -> Result<()> { // https://github.com/onnx/onnx/blob/main/docs/Operators.md#examples-81 test(&[1., 2., 3.], &[3., 2., 1.], &[1u8, 0, 0])?; // https://github.com/onnx/onnx/blob/main/docs/Operators.md#examples-81 test(&[1., 2., 3.], 2., &[1u8, 0, 0])?; fn test(a: impl NdArray, b: impl NdArray, expected: impl NdArray) -> Result<()> { let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "Less".to_string(), domain: "".to_string(), attribute: vec![], input: vec![INPUT_X.to_string(), INPUT_Y.to_string()], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let mut inputs: HashMap<String, Tensor> = HashMap::new(); inputs.insert(INPUT_X.to_string(), Tensor::new(a, &Device::Cpu)?); inputs.insert(INPUT_Y.to_string(), Tensor::new(b, &Device::Cpu)?); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let z = eval .get(OUTPUT_Z) .expect("Output 'z' not found") .to_dtype(DType::F64)?; let expected = Tensor::new(expected, &Device::Cpu)?.to_dtype(DType::F64)?; match expected.dims().len() { 0 => assert_eq!(z.to_vec0::<f64>()?, expected.to_vec0::<f64>()?), 1 => assert_eq!(z.to_vec1::<f64>()?, expected.to_vec1::<f64>()?), 2 => assert_eq!(z.to_vec2::<f64>()?, expected.to_vec2::<f64>()?), 3 => assert_eq!(z.to_vec3::<f64>()?, expected.to_vec3::<f64>()?), _ => unreachable!(), }; Ok(()) } Ok(()) } // "Log" #[test] fn test_log() -> Result<()> { // https://github.com/onnx/onnx/blob/main/docs/Operators.md#examples-82 test(&[1., 10.], &[0., std::f64::consts::LN_10])?; fn test(data: impl NdArray, expected: impl NdArray) -> Result<()> { let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "Log".to_string(), domain: "".to_string(), attribute: vec![], input: vec![INPUT_X.to_string()], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let mut inputs: HashMap<String, Tensor> = HashMap::new(); inputs.insert(INPUT_X.to_string(), Tensor::new(data, &Device::Cpu)?); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); let expected = Tensor::new(expected, &Device::Cpu)?; match expected.dims().len() { 0 => assert_eq!(z.to_vec0::<f64>()?, expected.to_vec0::<f64>()?), 1 => assert_eq!(z.to_vec1::<f64>()?, expected.to_vec1::<f64>()?), 2 => assert_eq!(z.to_vec2::<f64>()?, expected.to_vec2::<f64>()?), 3 => assert_eq!(z.to_vec3::<f64>()?, expected.to_vec3::<f64>()?), _ => unreachable!(), }; Ok(()) } Ok(()) } // "Min" #[test] fn test_min() -> Result<()> { // https://github.com/onnx/onnx/blob/main/docs/Operators.md#examples-94 test(&[3., 2., 1.], &[1., 4., 4.], &[2., 5., 0.], &[1., 2., 0.])?; fn test( a: impl NdArray, b: impl NdArray, c: impl NdArray, expected: impl NdArray, ) -> Result<()> { let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "Min".to_string(), domain: "".to_string(), attribute: vec![], input: vec![ INPUT_X.to_string(), INPUT_Y.to_string(), INPUT_A.to_string(), ], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let mut inputs: HashMap<String, Tensor> = HashMap::new(); inputs.insert(INPUT_X.to_string(), Tensor::new(a, &Device::Cpu)?); inputs.insert(INPUT_Y.to_string(), Tensor::new(b, &Device::Cpu)?); inputs.insert(INPUT_A.to_string(), Tensor::new(c, &Device::Cpu)?); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); let expected = Tensor::new(expected, &Device::Cpu)?; match expected.dims().len() { 0 => assert_eq!(z.to_vec0::<f64>()?, expected.to_vec0::<f64>()?), 1 => assert_eq!(z.to_vec1::<f64>()?, expected.to_vec1::<f64>()?), 2 => assert_eq!(z.to_vec2::<f64>()?, expected.to_vec2::<f64>()?), 3 => assert_eq!(z.to_vec3::<f64>()?, expected.to_vec3::<f64>()?), _ => unreachable!(), }; Ok(()) } Ok(()) } // "Where" #[test] fn test_where() -> Result<()> { // https://github.com/onnx/onnx/blob/main/docs/Operators.md#examples-173 test( &[[1u8, 0], [1, 1]], &[[1i64, 2], [3, 4]], &[[9i64, 8], [7, 6]], &[[1i64, 8], [3, 4]], )?; // https://github.com/onnx/onnx/blob/main/docs/Operators.md#examples-173 test( &[[1u8, 0], [1, 1]], &[[1., 2.], [3., 4.]], &[[9., 8.], [7., 6.]], &[[1., 8.], [3., 4.]], )?; fn test( condition: impl NdArray, x: impl NdArray, y: impl NdArray, expected: impl NdArray, ) -> Result<()> { let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "Where".to_string(), domain: "".to_string(), attribute: vec![], input: vec![ INPUT_X.to_string(), INPUT_Y.to_string(), INPUT_A.to_string(), ], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let mut inputs: HashMap<String, Tensor> = HashMap::new(); inputs.insert(INPUT_X.to_string(), Tensor::new(condition, &Device::Cpu)?); inputs.insert(INPUT_Y.to_string(), Tensor::new(x, &Device::Cpu)?); inputs.insert(INPUT_A.to_string(), Tensor::new(y, &Device::Cpu)?); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let z = eval .get(OUTPUT_Z) .expect("Output 'z' not found") .to_dtype(DType::F64)?; let expected = Tensor::new(expected, &Device::Cpu)?.to_dtype(DType::F64)?; match expected.dims().len() { 0 => assert_eq!(z.to_vec0::<f64>()?, expected.to_vec0::<f64>()?), 1 => assert_eq!(z.to_vec1::<f64>()?, expected.to_vec1::<f64>()?), 2 => assert_eq!(z.to_vec2::<f64>()?, expected.to_vec2::<f64>()?), 3 => assert_eq!(z.to_vec3::<f64>()?, expected.to_vec3::<f64>()?), _ => unreachable!(), }; Ok(()) } Ok(()) } #[test] fn test_floor() -> Result<()> { let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "Floor".to_string(), domain: "".to_string(), attribute: vec![], input: vec![INPUT_X.to_string()], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![ValueInfoProto { name: INPUT_X.to_string(), doc_string: "".to_string(), r#type: None, }], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let x = Tensor::from_vec( // some values taken from https://numpy.org/doc/stable/reference/generated/numpy.floor.html vec![ f64::NAN, f64::INFINITY, f64::NEG_INFINITY, -1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0, ], &[10], &Device::Cpu, )?; let mut inputs: HashMap<String, Tensor> = HashMap::new(); inputs.insert(INPUT_X.to_string(), x); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); let results = z.to_vec1::<f64>()?; assert!(results[0].is_nan()); assert_eq!( results[1..], vec![ f64::INFINITY, f64::NEG_INFINITY, -2., -2., -1., 0., 1., 1., 2. ] ); Ok(()) } #[test] fn test_ceil() -> Result<()> { let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "Ceil".to_string(), domain: "".to_string(), attribute: vec![], input: vec![INPUT_X.to_string()], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![ValueInfoProto { name: INPUT_X.to_string(), doc_string: "".to_string(), r#type: None, }], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let x = Tensor::from_vec( // some values taken from https://numpy.org/doc/stable/reference/generated/numpy.ceil.html vec![ f64::NAN, f64::INFINITY, f64::NEG_INFINITY, -1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0, ], &[10], &Device::Cpu, )?; let mut inputs: HashMap<String, Tensor> = HashMap::new(); inputs.insert(INPUT_X.to_string(), x); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); let results = z.to_vec1::<f64>()?; assert!(results[0].is_nan()); assert_eq!( results[1..], vec![ f64::INFINITY, f64::NEG_INFINITY, -1., -1., -0., 1., 2., 2., 2. ] ); Ok(()) } // "ArgMin" #[test] fn test_argmin() -> Result<()> { // tests from https://github.com/onnx/onnx/blob/main/docs/Operators.md#examples-7 // default_axes_keepdims test( &[[2u32, 1u32], [3u32, 10u32]], None, Some(1), None, &[[0i64, 0i64]], )?; // keepdims test( &[[2u32, 1u32], [3u32, 10u32]], Some(1), Some(1), None, &[[1i64], [0i64]], )?; // // negative_axis_keepdims test( &[[2u32, 1u32], [3u32, 10u32]], Some(-1), Some(1), None, &[[1i64], [0i64]], )?; // no_keepdims test( &[[2u32, 1u32], [3u32, 10u32]], None, Some(0), None, &[0i64, 0i64], )?; // tests from https://pytorch.org/docs/stable/generated/torch.argmin.html#torch.argmin test( &[ [0.1139, 0.2254, -0.1381, 0.3687], [1.0100, -1.1975, -0.0102, -0.4732], [-0.9240, 0.1207, -0.7506, -1.0213], [1.7809, -1.2960, 0.9384, 0.1438], ], Some(1), Some(0), None, &[2i64, 1i64, 3i64, 1i64], )?; test( &[ [0.1139, 0.2254, -0.1381, 0.3687], [1.0100, -1.1975, -0.0102, -0.4732], [-0.9240, 0.1207, -0.7506, -1.0213], [1.7809, -1.2960, 0.9384, 0.1438], ], Some(1), None, None, &[[2i64], [1i64], [3i64], [1i64]], )?; fn test( data: impl NdArray, axis: Option<i64>, keepdims: Option<i64>, select_last_index: Option<i64>, expected: impl NdArray, ) -> Result<()> { let att_axis = AttributeProto { name: "axis".to_string(), ref_attr_name: "axis".to_string(), i: axis.unwrap_or(0), doc_string: "axis".to_string(), r#type: 2, // INT f: 0.0, s: vec![], t: None, g: None, sparse_tensor: None, tp: None, floats: vec![], ints: vec![], strings: vec![], tensors: vec![], graphs: vec![], sparse_tensors: vec![], type_protos: vec![], }; let att_keepdims = AttributeProto { name: "keepdims".to_string(), ref_attr_name: "keepdims".to_string(), i: keepdims.unwrap_or(1), doc_string: "keepdims".to_string(), r#type: 2, // INT f: 0.0, s: vec![], t: None, g: None, sparse_tensor: None, tp: None, floats: vec![], ints: vec![], strings: vec![], tensors: vec![], graphs: vec![], sparse_tensors: vec![], type_protos: vec![], }; let att_select_last_index = AttributeProto { name: "select_last_index".to_string(), ref_attr_name: "select_last_index".to_string(), i: select_last_index.unwrap_or(0), doc_string: "select_last_index".to_string(), r#type: 2, // INT f: 0.0, s: vec![], t: None, g: None, sparse_tensor: None, tp: None, floats: vec![], ints: vec![], strings: vec![], tensors: vec![], graphs: vec![], sparse_tensors: vec![], type_protos: vec![], }; let attrs = { let mut mut_attrs = vec![]; if axis.is_some() { mut_attrs.push(att_axis); } if keepdims.is_some() { mut_attrs.push(att_keepdims); } if select_last_index.is_some() { mut_attrs.push(att_select_last_index); } mut_attrs }; let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "ArgMin".to_string(), domain: "".to_string(), attribute: attrs, input: vec![INPUT_X.to_string()], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let mut inputs: HashMap<String, Tensor> = HashMap::new(); inputs.insert(INPUT_X.to_string(), Tensor::new(data, &Device::Cpu)?); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); let expected = Tensor::new(expected, &Device::Cpu)?; match expected.dims().len() { 1 => assert_eq!(z.to_vec1::<i64>()?, expected.to_vec1::<i64>()?), 2 => assert_eq!(z.to_vec2::<i64>()?, expected.to_vec2::<i64>()?), _ => unreachable!(), }; Ok(()) } Ok(()) } // "ArgMax" #[test] fn test_argmax() -> Result<()> { // tests from https://github.com/onnx/onnx/blob/main/docs/Operators.md#examples-6 // default_axes_keepdims test( &[[2u32, 1u32], [3u32, 10u32]], None, Some(1), None, &[[1i64, 1i64]], )?; // keepdims test( &[[2u32, 1u32], [3u32, 10u32]], Some(1), Some(1), None, &[[0i64], [1i64]], )?; // // negative_axis_keepdims test( &[[2u32, 1u32], [3u32, 10u32]], Some(-1), Some(1), None, &[[0i64], [1i64]], )?; // no_keepdims test( &[[2u32, 1u32], [3u32, 10u32]], None, Some(0), None, &[1i64, 1i64], )?; // tests from https://pytorch.org/docs/stable/generated/torch.argmax.html test( &[ [1.3398, 0.2663, -0.2686, 0.2450], [-0.7401, -0.8805, -0.3402, -1.1936], [0.4907, -1.3948, -1.0691, -0.3132], [-1.6092, 0.5419, -0.2993, 0.3195], ], Some(1), Some(0), None, &[0i64, 2i64, 0i64, 1i64], )?; test( &[ [1.3398, 0.2663, -0.2686, 0.2450], [-0.7401, -0.8805, -0.3402, -1.1936], [0.4907, -1.3948, -1.0691, -0.3132], [-1.6092, 0.5419, -0.2993, 0.3195], ], Some(1), None, None, &[[0i64], [2i64], [0i64], [1i64]], )?; fn test( data: impl NdArray, axis: Option<i64>, keepdims: Option<i64>, select_last_index: Option<i64>, expected: impl NdArray, ) -> Result<()> { let att_axis = AttributeProto { name: "axis".to_string(), ref_attr_name: "axis".to_string(), i: axis.unwrap_or(0), doc_string: "axis".to_string(), r#type: 2, // INT f: 0.0, s: vec![], t: None, g: None, sparse_tensor: None, tp: None, floats: vec![], ints: vec![], strings: vec![], tensors: vec![], graphs: vec![], sparse_tensors: vec![], type_protos: vec![], }; let att_keepdims = AttributeProto { name: "keepdims".to_string(), ref_attr_name: "keepdims".to_string(), i: keepdims.unwrap_or(1), doc_string: "keepdims".to_string(), r#type: 2, // INT f: 0.0, s: vec![], t: None, g: None, sparse_tensor: None, tp: None, floats: vec![], ints: vec![], strings: vec![], tensors: vec![], graphs: vec![], sparse_tensors: vec![], type_protos: vec![], }; let att_select_last_index = AttributeProto { name: "select_last_index".to_string(), ref_attr_name: "select_last_index".to_string(), i: select_last_index.unwrap_or(0), doc_string: "select_last_index".to_string(), r#type: 2, // INT f: 0.0, s: vec![], t: None, g: None, sparse_tensor: None, tp: None, floats: vec![], ints: vec![], strings: vec![], tensors: vec![], graphs: vec![], sparse_tensors: vec![], type_protos: vec![], }; let attrs = { let mut mut_attrs = vec![]; if axis.is_some() { mut_attrs.push(att_axis); } if keepdims.is_some() { mut_attrs.push(att_keepdims); } if select_last_index.is_some() { mut_attrs.push(att_select_last_index); } mut_attrs }; let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "ArgMax".to_string(), domain: "".to_string(), attribute: attrs, input: vec![INPUT_X.to_string()], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let mut inputs: HashMap<String, Tensor> = HashMap::new(); inputs.insert(INPUT_X.to_string(), Tensor::new(data, &Device::Cpu)?); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); let expected = Tensor::new(expected, &Device::Cpu)?; match expected.dims().len() { 1 => assert_eq!(z.to_vec1::<i64>()?, expected.to_vec1::<i64>()?), 2 => assert_eq!(z.to_vec2::<i64>()?, expected.to_vec2::<i64>()?), _ => unreachable!(), }; Ok(()) } Ok(()) } // "LeakyRelu" #[test] fn test_leakyrelu() -> Result<()> { // tests from https://github.com/onnx/onnx/blob/main/docs/Operators.md#examples-80 // leakyrelu test(&[-1.0, 0.0, 1.0], Some(0.1), &[-0.1, 0.0, 1.0])?; fn test(data: impl NdArray, alpha: Option<f32>, expected: impl NdArray) -> Result<()> { let att_alpha = AttributeProto { name: "alpha".to_string(), ref_attr_name: "alpha".to_string(), i: 0, doc_string: "alpha".to_string(), r#type: 1, // FLOAT f: alpha.unwrap_or(0.01), s: vec![], t: None, g: None, sparse_tensor: None, tp: None, floats: vec![], ints: vec![], strings: vec![], tensors: vec![], graphs: vec![], sparse_tensors: vec![], type_protos: vec![], }; let attrs = { let mut mut_attrs = vec![]; if alpha.is_some() { mut_attrs.push(att_alpha); } mut_attrs }; let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "LeakyRelu".to_string(), domain: "".to_string(), attribute: attrs, input: vec![INPUT_X.to_string()], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let mut inputs: HashMap<String, Tensor> = HashMap::new(); inputs.insert(INPUT_X.to_string(), Tensor::new(data, &Device::Cpu)?); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); let expected = Tensor::new(expected, &Device::Cpu)?; for both in z .to_vec1::<f64>()? .iter() .zip(expected.to_vec1::<f64>()?.iter()) { let (act, exp) = both; assert!(f64::abs(act - exp) < f32::EPSILON.into()); } Ok(()) } Ok(()) } // "If" #[test] fn test_if() -> Result<()> { let x = vec![1.0, 2.0, 3.0, 4.0, 5.0]; let y = vec![5.0, 4.0, 3.0, 2.0, 1.0]; let output_type_proto = Some(TypeProto { value: Some(type_proto::Value::TensorType(type_proto::Tensor { elem_type: DataType::Float.into(), shape: Some(TensorShapeProto { dim: vec![Dimension { denotation: "".to_string(), value: Some(dimension::Value::DimValue(5)), }], }), })), denotation: "".to_string(), }); let then_branch = GraphProto { output: vec![ValueInfoProto { name: "then_out".to_string(), r#type: output_type_proto.clone(), doc_string: "".to_string(), }], node: vec![NodeProto { op_type: "Constant".to_string(), input: vec![], output: vec!["then_out".to_string()], attribute: vec![AttributeProto { name: "value".to_string(), r#type: AttributeType::Tensor.into(), t: Some(TensorProto { dims: vec![x.len() as i64], float_data: x.clone(), data_type: DataType::Float.into(), ..TensorProto::default() }), ..AttributeProto::default() }], ..NodeProto::default() }], ..GraphProto::default() }; let else_branch = GraphProto { output: vec![ValueInfoProto { name: "else_out".to_string(), r#type: output_type_proto.clone(), doc_string: "".to_string(), }], node: vec![NodeProto { op_type: "Constant".to_string(), input: vec![], output: vec!["else_out".to_string()], attribute: vec![AttributeProto { name: "value".to_string(), r#type: AttributeType::Tensor.into(), t: Some(TensorProto { dims: vec![y.len() as i64], float_data: y.clone(), data_type: DataType::Float.into(), ..TensorProto::default() }), ..AttributeProto::default() }], ..NodeProto::default() }], ..GraphProto::default() }; let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "If".to_string(), attribute: vec![ AttributeProto { name: "then_branch".to_string(), r#type: AttributeType::Graph.into(), g: Some(then_branch), ..AttributeProto::default() }, AttributeProto { name: "else_branch".to_string(), r#type: AttributeType::Graph.into(), g: Some(else_branch), ..AttributeProto::default() }, ], input: vec!["cond".to_string()], output: vec!["res".to_string()], ..NodeProto::default() }], input: vec![], output: vec![ValueInfoProto { name: "res".to_string(), doc_string: "".to_string(), r#type: output_type_proto.clone(), }], ..GraphProto::default() })); for cond in [1u8, 0] { let inputs = HashMap::from_iter([("cond".to_string(), Tensor::full(cond, (1,), &Device::Cpu)?)]); let outputs = candle_onnx::simple_eval(&manual_graph, inputs)?; let expected = if cond != 0 { &x } else { &y }; let Some(res) = outputs.get("res") else { candle::bail!("outputs didn't contain expected key `res`: {outputs:?}"); }; assert_eq!(&res.to_vec1::<f32>()?, expected); } Ok(()) } #[test] fn test_pad() -> Result<()> { let data = Tensor::from_vec( vec![ 1.0, 2.0, 3.0, // 4.0, 5.0, 6.0, // ], (2, 3), &Device::Cpu, )?; let pads = Tensor::from_vec(vec![0i64, 1, 0, 0], (4,), &Device::Cpu)?; let mode = "reflect"; let expected = Tensor::from_vec( vec![ 2.0, 1.0, 2.0, 3.0, // 5.0, 4.0, 5.0, 6.0, // ], (2, 4), &Device::Cpu, )?; let model = create_model_proto_with_graph(Some(GraphProto { input: vec![ ValueInfoProto { name: "data".to_string(), ..ValueInfoProto::default() }, ValueInfoProto { name: "pads".to_string(), ..ValueInfoProto::default() }, ], output: vec![ValueInfoProto { name: "output".to_string(), ..ValueInfoProto::default() }], node: vec![NodeProto { op_type: "Pad".to_string(), input: vec!["data".to_string(), "pads".to_string()], output: vec!["output".to_string()], attribute: vec![AttributeProto { name: "mode".to_string(), r#type: AttributeType::String.into(), s: mode.as_bytes().to_vec(), ..AttributeProto::default() }], ..NodeProto::default() }], ..GraphProto::default() })); let inputs = HashMap::from_iter([("data".to_string(), data), ("pads".to_string(), pads)]); let res = candle_onnx::simple_eval(&model, inputs)?; let Some(actual) = res.get("output") else { candle::bail!("outputs didn't contain expected key `output`: {res:?}"); }; assert_eq!(actual.to_vec2::<f64>()?, expected.to_vec2::<f64>()?); Ok(()) } #[test] fn test_slice() -> Result<()> { let model = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "Slice".to_string(), input: vec![ "data".to_string(), "starts".to_string(), "ends".to_string(), "axes".to_string(), "steps".to_string(), ], output: vec!["result".to_string()], ..NodeProto::default() }], input: ["data", "starts", "ends", "axes", "steps"] .into_iter() .map(|name| ValueInfoProto { name: name.to_string(), r#type: None, doc_string: "".to_string(), }) .collect(), output: ["result"] .into_iter() .map(|name| ValueInfoProto { name: name.to_string(), r#type: None, doc_string: "".to_string(), }) .collect(), ..GraphProto::default() })); /* data = [ [1, 2, 3, 4], [5, 6, 7, 8], ] axes = [0, 1] starts = [1, 0] ends = [2, 3] steps = [1, 2] result = [ [5, 7], ] */ let outputs = candle_onnx::simple_eval( &model, HashMap::from_iter([ ( "data".to_string(), Tensor::from_vec(vec![1i64, 2, 3, 4, 5, 6, 7, 8], (2, 4), &Device::Cpu)?, ), ( "starts".to_string(), Tensor::from_vec(vec![1i64, 0], (2,), &Device::Cpu)?, ), ( "ends".to_string(), Tensor::from_vec(vec![2i64, 3], (2,), &Device::Cpu)?, ), ( "axes".to_string(), Tensor::from_vec(vec![0i64, 1], (2,), &Device::Cpu)?, ), ( "steps".to_string(), Tensor::from_vec(vec![1i64, 2], (2,), &Device::Cpu)?, ), ]), )?; let actual = outputs.get("result").unwrap().to_vec2::<i64>()?; assert_eq!(actual, vec![vec![5i64, 7]]); /* data = [ [1, 2, 3, 4], [5, 6, 7, 8], ] starts = [0, 1] ends = [-1, 1000] result = [ [2, 3, 4], ] */ let model = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "Slice".to_string(), input: vec!["data".to_string(), "starts".to_string(), "ends".to_string()], output: vec!["result".to_string()], ..NodeProto::default() }], input: ["data", "starts", "ends"] .into_iter() .map(|name| ValueInfoProto { name: name.to_string(), r#type: None, doc_string: "".to_string(), }) .collect(), output: ["result"] .into_iter() .map(|name| ValueInfoProto { name: name.to_string(), r#type: None, doc_string: "".to_string(), }) .collect(), ..GraphProto::default() })); let outputs = candle_onnx::simple_eval( &model, HashMap::from_iter([ ( "data".to_string(), Tensor::from_vec(vec![1i64, 2, 3, 4, 5, 6, 7, 8], (2, 4), &Device::Cpu)?, ), ( "starts".to_string(), Tensor::from_vec(vec![0i64, 1], (2,), &Device::Cpu)?, ), ( "ends".to_string(), Tensor::from_vec(vec![-1i64, 1000], (2,), &Device::Cpu)?, ), ]), )?; let actual = outputs.get("result").unwrap().to_vec2::<i64>()?; assert_eq!(actual, vec![vec![2i64, 3, 4]]); Ok(()) } #[test] fn test_lstm() -> Result<()> { // values generated from pytorch, so at least it's close enough to what pytorch does /* #!/usr/bin/env python3 # torch.nn.LSTM(input_size, hidden_size, num_layers=1, bias=True, batch_first=False, dropout=0.0, bidirectional=False, proj_size=0, device=None, dtype=None) import torch rand_gen = torch.Generator() rand_gen.manual_seed(1) input_size = 3 hidden_size = 5 batch_size = 1 sequence_length = 4 number_directions = 1 rnn = torch.nn.LSTM(input_size,hidden_size) weight_ih_l0 = torch.randn(rnn.weight_ih_l0.shape, generator=rand_gen) weight_hh_l0 = torch.randn(rnn.weight_hh_l0.shape, generator=rand_gen) bias_ih_l0 = torch.randn(rnn.bias_ih_l0.shape, generator=rand_gen) bias_hh_l0 = torch.randn(rnn.bias_hh_l0.shape, generator=rand_gen) rnn.weight_ih_l0 = torch.nn.Parameter(weight_ih_l0) rnn.weight_hh_l0 = torch.nn.Parameter(weight_hh_l0) rnn.bias_ih_l0 = torch.nn.Parameter(bias_ih_l0) rnn.bias_hh_l0 = torch.nn.Parameter(bias_hh_l0) input = torch.randn(sequence_length, batch_size, input_size, generator=rand_gen) h0 = torch.randn(number_directions, batch_size, hidden_size, generator=rand_gen) c0 = torch.randn(number_directions, batch_size, hidden_size, generator=rand_gen) output, (hn, cn) = rnn(input, (h0, c0)) def fmt_tensor(t): return "Tensor::from_vec::<_, f32>(vec!"+ str(t.flatten().tolist()) + ", (" + "".join([str(n)+"," for n in t.shape])+"), &Device::Cpu)?" print("let input_size = ", input_size, ";") print("let hidden_size = ", hidden_size, ";") print("let batch_size = ", batch_size, ";") print("let sequence_length = ", sequence_length, ";") print("let number_directions = ", number_directions, ";") print("let weight_ih_l0 = ", fmt_tensor(rnn.weight_ih_l0), ";") print("let weight_hh_l0 = ", fmt_tensor(rnn.weight_hh_l0), ";") print("let bias_ih_l0 = ", fmt_tensor(rnn.bias_ih_l0), ";") print("let bias_hh_l0 = ", fmt_tensor(rnn.bias_hh_l0), ";") print("let input = ", fmt_tensor(input), ";") print("let h0 = ", fmt_tensor(h0), ";") print("let c0 = ", fmt_tensor(c0), ";") print("let output = ", fmt_tensor(output), ";") print("let hn = ", fmt_tensor(hn), ";") print("let cn = ", fmt_tensor(cn), ";") */ let input_size = 3; let hidden_size = 5; let batch_size = 1; let sequence_length = 4; let number_directions = 1; let weight_ih_l0 = Tensor::from_vec::<_, f32>( vec![ -1.525_595_9, -0.750_231_8, -0.653_980_9, -1.609_484_8, -0.100_167_18, -0.609_188_9, -0.979_772_27, -1.609_096_3, -0.712_144_6, 0.303_722, -0.777_314_3, -0.251_455_25, -0.222_270_49, 1.687_113_4, 0.228_425_17, 0.467_635_5, -0.696_972_4, -1.160_761_5, 0.699_542_4, 0.199_081_63, 0.865_692_4, 0.244_403_9, -0.662_911_36, 0.807_308_26, 1.101_680_6, -0.175_936_04, -2.245_557_8, -1.446_458, 0.061_155_282, -0.617_744_45, -0.798_069_83, -0.131_623_21, 1.879_345_8, -0.072_131_78, 0.157_770_6, -0.773_454_9, 0.199_056_5, 0.045_702_778, 0.152_956_92, -0.475_678_8, -0.111_019_83, 0.292_735_25, -0.157_845_15, -0.028_787_14, 0.453_254_58, 1.142_161_1, 0.248_610_7, -1.775_400_8, -0.025_502_462, -1.023_330_6, -0.596_185_15, -1.005_530_7, 0.428_542_3, 1.476_077_8, -1.786_867_9, 1.610_317_6, -0.703_956_66, -0.185_265_8, -0.996_235_1, -0.831_255_26, ], (20, 3), &Device::Cpu, )?; let weight_hh_l0 = Tensor::from_vec::<_, f32>( vec![ 0.409_972_43, 0.408_450_66, 0.257_865_4, 1.095_021_4, -0.506_486_6, 0.099_775_404, -0.653_973_4, 0.731_693_7, -1.456_733, 1.608_935_4, 0.093_769_975, -1.259_749, 0.254_633_5, -0.501_957_3, -1.041_2, 0.732_267_2, 1.307_535_5, -1.162_798_8, 0.119_636_11, -0.163_135_33, 0.661_445_3, 1.189_920_5, 0.816_533_9, -0.913_523_6, -0.353_806_53, 0.763_927_04, -0.588_950_7, -0.763_597_37, 1.335_205_7, 0.604_273_6, -0.103_442_08, -0.151_216_92, 1.246_568_3, 0.505_721_4, 0.950_511_2, 1.296_648_3, 0.873_796_3, -0.560_259_4, 1.285_784_5, 0.816_823_84, -1.464_799_4, -1.262_928_4, 1.122_018_8, 1.566_334_1, 2.558_138_4, -0.233_363_88, -0.013_472_13, 1.860_634_8, 1.549_620_5, 0.347_629_25, 0.093_008_03, 0.614_740_3, 0.712_364_55, -1.776_507_3, 0.353_864_58, 1.199_613_2, -0.712_258_93, -0.620_034_4, -0.228_134_95, -0.789_274_63, -1.611_111_8, -1.871_612_9, 0.543_083_6, 0.660_678_6, 0.270_527_72, 0.559_691_97, -0.318_396_3, 1.511_720_7, -1.363_267_2, -0.983_219_6, 1.511_266_7, 0.641_870_74, -0.747_445_9, -0.923_438_55, 0.573_398_4, -0.109_299_51, 0.518_112_1, 0.106_535_35, 0.269_240_77, 1.324_768, 0.037_456_9, -0.637_839_3, -0.814_755_44, -0.689_506_53, 0.843_654_3, 1.165_701_3, 0.526_932_2, 1.619_253_3, -0.963_976_26, 0.141_520_38, -0.163_660_96, -0.358_222_57, 1.722_279_3, -0.303_575_6, 0.238_874_2, 1.344_001_2, 0.103_225_69, 1.100_354_2, -0.341_680_2, 0.947_338_9, ], (20, 5), &Device::Cpu, )?; let bias_ih_l0 = Tensor::from_vec::<_, f32>( vec![ -0.568_515_96, 0.837_596_2, 1.783_660_7, -0.195_424_66, 0.235_193_13, 1.914_243_3, 1.836_411_1, 1.324_532_4, -0.070_514_58, 0.346_979_4, -0.653_679_6, 1.558_620_2, 0.218_566_15, -0.574_307_26, 1.457_125_1, 1.770_955_7, -2.017_3, 0.423_503_2, 0.573_022, -1.796_243, ], (20,), &Device::Cpu, )?; let bias_hh_l0 = Tensor::from_vec::<_, f32>( vec![ 1.247_040_4, 1.273_851_2, 0.390_949_25, 0.387_210_5, 0.144_403_95, 0.777_168_45, -2.338_112_6, -0.829_120_4, 1.166_139_1, 1.478_657_5, 0.267_608_73, 0.756_119_85, -0.587_336_1, -2.061_920_6, 0.430_473_48, 0.337_656_62, -0.343_785_35, -0.617_226_06, 1.252_969_3, -0.051_417_42, ], (20,), &Device::Cpu, )?; let input = Tensor::from_vec::<_, f32>( vec![ 0.647_212_8, -0.041_167_17, -0.177_493_08, -0.500_039_3, 0.867_274_94, -0.273_192_23, -0.460_768_13, -0.099_093_71, 0.472_844_8, 1.004_948_5, -0.287_142_04, -1.161_862_1, ], (4, 1, 3), &Device::Cpu, )?; let h0 = Tensor::from_vec::<_, f32>( vec![ 0.027_581_785, 0.565_238_24, -0.011_487_379, 0.670_640_05, -0.492_925_05, ], (1, 1, 5), &Device::Cpu, )?; let c0 = Tensor::from_vec::<_, f32>( vec![ 1.505_028_5, -2.326_355, 1.616_89, -0.902_623_8, 0.173_668_24, ], (1, 1, 5), &Device::Cpu, )?; let output = Tensor::from_vec::<_, f32>( vec![ 0.595_601_7, -0.017_232_792, 0.110_355_72, -0.493_231_74, 0.047_632_16, 0.635_845_2, 0.040_328_12, -0.378_861_16, -0.746_434, 0.200_809_09, 0.584_026_5, 0.145_328_82, -0.734_529_85, -0.521_430_43, 0.219_038_17, 0.742_045_16, 0.319_438_8, -0.047_266_465, -0.282_384_96, 0.271_313_4, ], (4, 1, 5), &Device::Cpu, )?; let hn = Tensor::from_vec::<_, f32>( vec![ 0.742_045_16, 0.319_438_8, -0.047_266_465, -0.282_384_96, 0.271_313_4, ], (1, 1, 5), &Device::Cpu, )?; let cn = Tensor::from_vec::<_, f32>( vec![ 0.963_055_85, 1.003_307, -1.754_899, -1.596_712_2, 0.825_292_47, ], (1, 1, 5), &Device::Cpu, )?; // end of generated values let model = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "LSTM".to_string(), name: "LSTM_test".to_string(), attribute: vec![AttributeProto { name: "hidden_size".to_string(), r#type: AttributeType::Int.into(), i: hidden_size as i64, ..AttributeProto::default() }], input: vec![ "input".to_string(), "w".to_string(), "r".to_string(), "b".to_string(), // b "".to_string(), // seq_lens "h".to_string(), "c".to_string(), ], output: vec!["output".to_string(), "hn".to_string(), "cn".to_string()], ..NodeProto::default() }], input: ["input", "w", "r", "b", "h", "c"] .into_iter() .map(|name| ValueInfoProto { name: name.to_string(), ..ValueInfoProto::default() }) .collect(), output: ["output", "hn", "cn"] .into_iter() .map(|name| ValueInfoProto { name: name.to_string(), ..ValueInfoProto::default() }) .collect(), ..GraphProto::default() })); // pytorch stores weight and bias as [ifco] but we want it as [iofc] // so we need to re-arrange the tensors a bit let idx_iofc = { let stride = hidden_size as i64; let dev = weight_ih_l0.device(); let idx_i = Tensor::arange(0, stride, dev)?; let idx_f = Tensor::arange(stride, 2 * stride, dev)?; let idx_g = Tensor::arange(2 * stride, 3 * stride, dev)?; let idx_o = Tensor::arange(3 * stride, 4 * stride, dev)?; Tensor::cat(&[&idx_i, &idx_o, &idx_f, &idx_g], 0)? }; let w = weight_ih_l0.index_select(&idx_iofc, 0)?; let w = w.reshape((number_directions, 4 * hidden_size, input_size))?; let r = weight_hh_l0.index_select(&idx_iofc, 0)?; let r = r.reshape((number_directions, 4 * hidden_size, hidden_size))?; let wb = bias_ih_l0.index_select(&idx_iofc, 0)?; let rb = bias_hh_l0.index_select(&idx_iofc, 0)?; let b = Tensor::cat(&[wb, rb], 0)?.reshape((number_directions, 8 * hidden_size))?; let output = output.reshape((sequence_length, number_directions, batch_size, hidden_size))?; let result = simple_eval( &model, HashMap::from_iter([ ("input".to_string(), input), ("w".to_string(), w), ("r".to_string(), r), ("b".to_string(), b), ("h".to_string(), h0), ("c".to_string(), c0), ]), )?; let actual_output = result.get("output").unwrap(); assert_eq!(output.dims(), actual_output.dims()); let actual_hn = result.get("hn").unwrap(); assert_eq!(hn.dims(), actual_hn.dims()); let actual_cn = result.get("cn").unwrap(); assert_eq!(cn.dims(), actual_cn.dims()); let diff_close_enough = |a: &Tensor, b| -> Result<_> { let diffs = a.sub(b)?.flatten_all()?.to_vec1::<f32>()?; Ok(diffs.iter().all(|f| f.abs() < 0.0001)) }; assert!( diff_close_enough(&output, actual_output)?, "output did not match expected\n{actual_output}\n{output}", ); assert!( diff_close_enough(&hn, actual_hn)?, "hn did not match expected\n{actual_hn}\n{hn}", ); assert!( diff_close_enough(&cn, actual_cn)?, "cn did not match expected\n{actual_cn}\n{cn}", ); Ok(()) } #[test] fn test_rnn() -> Result<()> { // values generated from pytorch, so at least it's close enough to what pytorch does /* #!/usr/bin/env python3 import torch rand_gen = torch.Generator() rand_gen.manual_seed(42) input_size = 3 hidden_size = 5 batch_size = 1 sequence_length = 4 number_directions = 1 rnn = torch.nn.RNN(input_size,hidden_size) weight_ih_l0 = torch.randn(rnn.weight_ih_l0.shape, generator=rand_gen) weight_hh_l0 = torch.randn(rnn.weight_hh_l0.shape, generator=rand_gen) bias_ih_l0 = torch.randn(rnn.bias_ih_l0.shape, generator=rand_gen) bias_hh_l0 = torch.randn(rnn.bias_hh_l0.shape, generator=rand_gen) rnn.weight_ih_l0 = torch.nn.Parameter(weight_ih_l0) rnn.weight_hh_l0 = torch.nn.Parameter(weight_hh_l0) rnn.bias_ih_l0 = torch.nn.Parameter(bias_ih_l0) rnn.bias_hh_l0 = torch.nn.Parameter(bias_hh_l0) input = torch.randn(sequence_length, batch_size, input_size, generator=rand_gen) hx = torch.randn(number_directions, batch_size, hidden_size, generator=rand_gen) output, hn = rnn(input, hx) def fmt_tensor(t): return "Tensor::from_vec::<_, f32>(vec!"+ str(t.flatten().tolist()) + ", (" + "".join([str(n)+"," for n in t.shape])+"), &Device::Cpu)?" print("let input_size = ", input_size, ";") print("let hidden_size = ", hidden_size, ";") print("let batch_size = ", batch_size, ";") print("let sequence_length = ", sequence_length, ";") print("let number_directions = ", number_directions, ";") print("let weight_ih_l0 = ", fmt_tensor(rnn.weight_ih_l0), ";") print("let weight_hh_l0 = ", fmt_tensor(rnn.weight_hh_l0), ";") print("let bias_ih_l0 = ", fmt_tensor(rnn.bias_ih_l0), ";") print("let bias_hh_l0 = ", fmt_tensor(rnn.bias_hh_l0), ";") print("let input = ", fmt_tensor(input), ";") print("let hx = ", fmt_tensor(hx), ";") print("let output = ", fmt_tensor(output), ";") print("let hn = ", fmt_tensor(hn), ";") */ // https://github.com/onnx/onnx/blob/main/docs/Operators.md#RNN let model = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "RNN".to_string(), name: "RNN_test".to_string(), attribute: vec![AttributeProto { name: "hidden_size".to_string(), r#type: AttributeType::Int.into(), i: 5, ..AttributeProto::default() }], input: vec![ "input".to_string(), "w".to_string(), "r".to_string(), "b".to_string(), // b "".to_string(), // seq_lens "h".to_string(), ], output: vec!["output".to_string(), "hn".to_string()], ..NodeProto::default() }], input: ["input", "w", "r", "b", "h"] .into_iter() .map(|name| ValueInfoProto { name: name.to_string(), ..ValueInfoProto::default() }) .collect(), output: ["output", "hn"] .into_iter() .map(|name| ValueInfoProto { name: name.to_string(), ..ValueInfoProto::default() }) .collect(), ..GraphProto::default() })); let input_size = 3; let hidden_size = 5; let batch_size = 1; let sequence_length = 4; let number_directions = 1; let weight_ih_l0 = Tensor::from_vec::<_, f32>( vec![ 0.33669036626815796, 0.12880940735340118, 0.23446236550807953, 0.23033303022384644, -1.1228563785552979, -0.18632829189300537, 2.2082014083862305, -0.637997031211853, 0.46165722608566284, 0.2673508822917938, 0.5349046587944031, 0.809357225894928, 1.110290288925171, -1.6897989511489868, -0.9889599084854126, ], (5, 3), &Device::Cpu, )?; let weight_hh_l0 = Tensor::from_vec::<_, f32>( vec![ -1.3846737146377563, -0.8712361454963684, -0.223365917801857, 1.7173614501953125, 0.3188803195953369, -0.42451897263526917, 0.3057209253311157, -0.7745925188064575, -1.5575724840164185, -0.9223900437355042, 1.811317801475525, 0.16056492924690247, 0.36724865436553955, 0.17541083693504333, 1.3851605653762817, -0.44585201144218445, 1.4451338052749634, 0.7078122496604919, -1.0758858919143677, 0.5356546640396118, 1.1753677129745483, 0.5611738562583923, -0.45274803042411804, -0.771777868270874, -0.1721901297569275, ], (5, 5), &Device::Cpu, )?; let bias_ih_l0 = Tensor::from_vec::<_, f32>( vec![ 0.9579718112945557, -0.6381967663764954, -1.9187371730804443, -0.6441153287887573, -0.6060903072357178, ], (5,), &Device::Cpu, )?; let bias_hh_l0 = Tensor::from_vec::<_, f32>( vec![ -0.1425034999847412, 0.972653865814209, 2.0037777423858643, 0.6621911525726318, 0.5332217216491699, ], (5,), &Device::Cpu, )?; let input = Tensor::from_vec::<_, f32>( vec![ 2.748873233795166, -0.3840780258178711, -1.962258219718933, -0.30899786949157715, -0.4268203377723694, 0.4503966271877289, -0.0022214562632143497, -0.19801591336727142, 1.775763750076294, -1.6059082746505737, 0.48799338936805725, -0.17943637073040009, ], (4, 1, 3), &Device::Cpu, )?; let hx = Tensor::from_vec::<_, f32>( vec![ 1.4753035306930542, -1.353177547454834, 0.16822677850723267, -0.8245629668235779, -0.060138583183288574, ], (1, 1, 5), &Device::Cpu, )?; let output = Tensor::from_vec::<_, f32>( vec![ -0.8023818135261536, 0.9590549468994141, 0.9999996423721313, -0.9906406402587891, 0.9999986886978149, -0.5140700936317444, 0.8138962388038635, 0.16080257296562195, 0.9994772672653198, -0.38456836342811584, 0.992118239402771, -0.5608834624290466, -0.07238662987947464, 0.9196381568908691, -0.9843823313713074, 0.5993185043334961, -0.9232994914054871, -0.9976708292961121, -0.9960790276527405, -0.973706841468811, ], (4, 1, 5), &Device::Cpu, )?; let hn = Tensor::from_vec::<_, f32>( vec![ 0.5993185043334961, -0.9232994914054871, -0.9976708292961121, -0.9960790276527405, -0.973706841468811, ], (1, 1, 5), &Device::Cpu, )?; let w = weight_ih_l0.reshape((number_directions, hidden_size, input_size))?; let r = weight_hh_l0.reshape((number_directions, hidden_size, hidden_size))?; let wb = bias_ih_l0.reshape((number_directions, hidden_size))?; let rb = bias_hh_l0.reshape((number_directions, hidden_size))?; let b = Tensor::cat(&[wb, rb], 0)?.reshape((number_directions, 2 * hidden_size))?; let h = hx.reshape((number_directions, batch_size, hidden_size))?; let output = output.reshape((sequence_length, number_directions, batch_size, hidden_size))?; let hn = hn.reshape((number_directions, batch_size, hidden_size))?; let diff_close_enough = |a: &Tensor, b| -> Result<_> { let diffs = a.sub(b)?.flatten_all()?.to_vec1::<f32>()?; Ok(diffs.iter().all(|f| f.abs() < 0.0001)) }; let result = simple_eval( &model, HashMap::from_iter([ ("input".to_string(), input), ("w".to_string(), w), ("r".to_string(), r), ("b".to_string(), b), ("h".to_string(), h), ]), )?; let actual_output = result.get("output").unwrap(); assert_eq!(output.dims(), actual_output.dims()); let actual_hn = result.get("hn").unwrap(); assert_eq!(hn.dims(), actual_hn.dims()); assert!( diff_close_enough(&output, actual_output)?, "output did not match expected\n{actual_output}\n{output}", ); assert!( diff_close_enough(&hn, actual_hn)?, "hn did not match expected\n{actual_hn}\n{hn}", ); Ok(()) } #[test] fn test_expand_dim_changed() -> Result<()> { // Create a manual graph for the Expand operation let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "Expand".to_string(), domain: "".to_string(), attribute: vec![], input: vec!["data".to_string(), "new_shape".to_string()], output: vec!["expanded".to_string()], name: "".to_string(), doc_string: "".to_string(), }], input: vec![ ValueInfoProto { name: "data".to_string(), doc_string: "".to_string(), r#type: None, }, ValueInfoProto { name: "new_shape".to_string(), doc_string: "".to_string(), r#type: None, }, ], output: vec![ValueInfoProto { name: "expanded".to_string(), doc_string: "".to_string(), r#type: None, }], ..GraphProto::default() })); // Input tensor with shape [3, 1] let data = Tensor::from_vec(vec![1.0f32, 2.0f32, 3.0f32], (3, 1), &Device::Cpu)?; // New shape tensor: [2, 1, 6] let new_shape = Tensor::from_vec(vec![2i64, 1, 6], (3,), &Device::Cpu)?; // Expected output after expansion let expected = Tensor::from_vec( vec![ 1.0f32, 1.0f32, 1.0f32, 1.0f32, 1.0f32, 1.0f32, 2.0f32, 2.0f32, 2.0f32, 2.0f32, 2.0f32, 2.0f32, 3.0f32, 3.0f32, 3.0f32, 3.0f32, 3.0f32, 3.0f32, 1.0f32, 1.0f32, 1.0f32, 1.0f32, 1.0f32, 1.0f32, 2.0f32, 2.0f32, 2.0f32, 2.0f32, 2.0f32, 2.0f32, 3.0f32, 3.0f32, 3.0f32, 3.0f32, 3.0f32, 3.0f32, ], (2, 3, 6), &Device::Cpu, )?; // Execute the model evaluation let inputs = HashMap::from_iter([ ("data".to_string(), data), ("new_shape".to_string(), new_shape), ]); let result = candle_onnx::simple_eval(&manual_graph, inputs)?; // Retrieve and compare the result let expanded = result.get("expanded").expect("Output 'expanded' not found"); assert_eq!(expanded.to_vec3::<f32>()?, expected.to_vec3::<f32>()?); Ok(()) } fn make_graph_helper( op_name: &str, inputs: &[&str], outputs: &[&str], attribs: Vec<AttributeProto>, ) -> ModelProto { create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: op_name.to_string(), domain: "".to_string(), attribute: attribs, input: inputs.iter().map(|s| s.to_string()).collect(), output: outputs.iter().map(|s| s.to_string()).collect(), name: "".to_string(), doc_string: "".to_string(), }], input: inputs .iter() .map(|name| ValueInfoProto { name: name.to_string(), ..ValueInfoProto::default() }) .collect(), output: outputs .iter() .map(|name| ValueInfoProto { name: name.to_string(), ..ValueInfoProto::default() }) .collect(), ..GraphProto::default() })) } #[test] fn test_expand_dim_unchanged() -> Result<()> { // Create a manual graph for the Expand operation let manual_graph = make_graph_helper("Expand", &["data", "new_shape"], &["expanded"], vec![]); // Input tensor with shape [3, 1] and dtype f32 let data = Tensor::from_vec(vec![1.0f32, 2.0f32, 3.0f32], (3, 1), &Device::Cpu)?; // New shape tensor: [3, 4] let new_shape = Tensor::from_vec(vec![3i64, 4], (2,), &Device::Cpu)?; // Expected output after expansion, dtype f32 let expected = Tensor::from_vec( vec![ 1.0f32, 1.0f32, 1.0f32, 1.0f32, 2.0f32, 2.0f32, 2.0f32, 2.0f32, 3.0f32, 3.0f32, 3.0f32, 3.0f32, ], (3, 4), &Device::Cpu, )?; // Execute the model evaluation let inputs = HashMap::from_iter([ ("data".to_string(), data), ("new_shape".to_string(), new_shape), ]); let result = candle_onnx::simple_eval(&manual_graph, inputs)?; // Retrieve and compare the result let expanded = result.get("expanded").expect("Output 'expanded' not found"); assert_eq!(expanded.to_vec2::<f32>()?, expected.to_vec2::<f32>()?); Ok(()) } fn make_split_graph_helper(inputs: &[&str], outputs: &[&str], axis: i64) -> ModelProto { let attribs = vec![AttributeProto { name: "axis".to_string(), r#type: AttributeType::Int.into(), i: axis, ..AttributeProto::default() }]; make_graph_helper("Split", inputs, outputs, attribs) } #[test] fn test_split_equal_parts_1d_opset13() -> Result<()> { let input = Tensor::from_vec( vec![1.0f32, 2.0f32, 3.0f32, 4.0f32, 5.0f32, 6.0f32], (6,), &Device::Cpu, )?; let mut inputs = HashMap::new(); inputs.insert("input".to_string(), input); { let manual_graph = make_split_graph_helper(&["input"], &["output_1", "output_2", "output_3"], 0); let eval = candle_onnx::simple_eval(&manual_graph, inputs.clone())?; assert_eq!(eval.len(), 3); let out1 = eval.get("output_1").expect("Output 'output_1' not found"); let out2 = eval.get("output_2").expect("Output 'output_2' not found"); let out3 = eval.get("output_3").expect("Output 'output_3' not found"); assert_eq!(out1.to_vec1::<f32>()?, vec![1.0f32, 2.0f32]); assert_eq!(out2.to_vec1::<f32>()?, vec![3.0f32, 4.0f32]); assert_eq!(out3.to_vec1::<f32>()?, vec![5.0f32, 6.0f32]); } { let splits = Tensor::from_vec(vec![2i64, 4], (2,), &Device::Cpu)?; inputs.insert("split".to_string(), splits); let manual_graph = make_split_graph_helper(&["input", "split"], &["output_1", "output_2"], 0); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 2); let out1 = eval.get("output_1").expect("Output 'output_1' not found"); let out2 = eval.get("output_2").expect("Output 'output_2' not found"); assert_eq!(out1.to_vec1::<f32>()?, vec![1.0f32, 2.0f32]); assert_eq!(out2.to_vec1::<f32>()?, vec![3.0f32, 4.0f32, 5.0f32, 6.0f32]); } Ok(()) } fn make_reduce_sum_graph_helper( inputs: &[&str], outputs: &[&str], keepdims: Option<i64>, noop_with_empty_axes: Option<i64>, ) -> ModelProto { let mut attribs = vec![]; if let Some(keepdims) = keepdims { attribs.push(AttributeProto { name: "keepdims".to_string(), r#type: AttributeType::Int.into(), i: keepdims, ..AttributeProto::default() }); } if let Some(noop_with_empty_axes) = noop_with_empty_axes { attribs.push(AttributeProto { name: "noop_with_empty_axes".to_string(), r#type: AttributeType::Ints.into(), i: noop_with_empty_axes, ..AttributeProto::default() }); } make_graph_helper("ReduceSum", inputs, outputs, attribs) } #[test] fn test_reduce_sum_default_axes_keepdims() -> Result<()> { let manual_graph = make_reduce_sum_graph_helper(&["data", "axes"], &["reduced"], Some(1), None); // Test with example data { let data = Tensor::from_vec( vec![ 1.0f32, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, ], (3, 2, 2), &Device::Cpu, )?; // let axes = Tensor::from_vec(Vec::<i64>::new(), (0,), &Device::Cpu)?; let mut inputs = HashMap::new(); inputs.insert("data".to_string(), data); // inputs.insert("axes".to_string(), axes); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let reduced = eval.get("reduced").expect("Output 'reduced' not found"); let expected = Tensor::from_vec(vec![78.0f32], (1, 1, 1), &Device::Cpu)?; assert_eq!(reduced.to_vec3::<f32>()?, expected.to_vec3::<f32>()?); } { let data = Tensor::from_vec( vec![ -5.2f32, 7.8, -3.1, 9.4, 2.6, -8.7, 4.3, -1.9, 6.5, -0.8, -7.2, 3.6, ], (3, 2, 2), &Device::Cpu, )?; let mut inputs = HashMap::new(); inputs.insert("data".to_string(), data.clone()); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let reduced = eval.get("reduced").expect("Output 'reduced' not found"); let expected = data.sum_all()?.reshape((1, 1, 1))?; assert_eq!(reduced.to_vec3::<f32>()?, expected.to_vec3::<f32>()?); } Ok(()) } #[test] fn test_reduce_sum_do_not_keep_dims() -> Result<()> { let manual_graph = make_reduce_sum_graph_helper(&["data", "axes"], &["reduced"], Some(0), None); // Test with example data { let data = Tensor::from_vec( vec![ 1.0f32, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, ], (3, 2, 2), &Device::Cpu, )?; let axes = Tensor::from_vec(vec![1i64], (1,), &Device::Cpu)?; let mut inputs = HashMap::new(); inputs.insert("data".to_string(), data); inputs.insert("axes".to_string(), axes); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let reduced = eval.get("reduced").expect("Output 'reduced' not found"); let expected = Tensor::from_vec( vec![4.0f32, 6.0, 12.0, 14.0, 20.0, 22.0], (3, 2), &Device::Cpu, )?; assert_eq!(reduced.to_vec2::<f32>()?, expected.to_vec2::<f32>()?); } // Test with random data { let _shape = (3, 2, 2); let data = Tensor::from_vec( vec![ -5.2f32, 7.8, -3.1, 9.4, 2.6, -8.7, 4.3, -1.9, 6.5, -0.8, -7.2, 3.6, ], (3, 2, 2), &Device::Cpu, )?; let axes = Tensor::from_vec(vec![1i64], (1,), &Device::Cpu)?; let mut inputs = HashMap::new(); inputs.insert("data".to_string(), data.clone()); inputs.insert("axes".to_string(), axes); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let reduced = eval.get("reduced").expect("Output 'reduced' not found"); // Calculate expected result let expected = data.sum(1)?; assert_eq!(reduced.to_vec2::<f32>()?, expected.to_vec2::<f32>()?); } Ok(()) } // Xor #[test] fn test_xor() -> Result<()> { // tests based on: https://github.com/onnx/onnx/blob/main/docs/Operators.md#Xor xor // 2d test( &[[0_u8, 1, 0, 0], [0, 0, 1, 1], [0, 1, 1, 1]], &[[1_u8, 1, 0, 0], [1, 0, 0, 1], [1, 1, 1, 0]], &[[1_u8, 0, 0, 0], [1, 0, 1, 0], [1, 0, 0, 1]], )?; // 3d test( &[ [ [0_u8, 1, 1, 1, 1], [0, 1, 1, 0, 0], [1, 1, 1, 1, 1], [0, 0, 0, 0, 1], ], [ [0, 0, 1, 1, 1], [1, 0, 1, 1, 1], [1, 1, 0, 0, 1], [1, 0, 0, 1, 0], ], [ [1, 0, 0, 1, 1], [1, 1, 1, 0, 0], [1, 1, 0, 0, 1], [1, 0, 0, 0, 1], ], ], &[ [ [1_u8, 0, 0, 1, 1], [0, 0, 1, 0, 1], [1, 0, 0, 1, 0], [0, 0, 0, 0, 0], ], [ [1, 0, 0, 1, 1], [1, 0, 1, 1, 1], [0, 1, 0, 1, 1], [1, 1, 1, 0, 0], ], [ [0, 1, 1, 1, 0], [1, 1, 0, 1, 0], [0, 1, 1, 1, 0], [1, 1, 0, 1, 0], ], ], &[ [ [1_u8, 1, 1, 0, 0], [0, 1, 0, 0, 1], [0, 1, 1, 0, 1], [0, 0, 0, 0, 1], ], [ [1, 0, 1, 0, 0], [0, 0, 0, 0, 0], [1, 0, 0, 1, 0], [0, 1, 1, 1, 0], ], [ [1, 1, 1, 0, 1], [0, 0, 1, 1, 0], [1, 0, 1, 1, 1], [0, 1, 0, 1, 1], ], ], )?; // 4d test( &[ [ [[0_u8, 1, 1, 0], [1, 0, 0, 0], [1, 1, 0, 1]], [[1, 1, 0, 1], [0, 0, 0, 1], [0, 0, 0, 1]], ], [ [[1, 1, 0, 0], [1, 0, 1, 0], [1, 0, 0, 0]], [[1, 0, 0, 1], [1, 0, 1, 1], [1, 1, 0, 1]], ], ], &[ [ [[1_u8, 0, 1, 0], [0, 0, 1, 1], [1, 0, 1, 0]], [[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1]], ], [ [[1, 1, 1, 0], [0, 0, 0, 1], [0, 0, 1, 0]], [[0, 0, 0, 0], [1, 0, 0, 0], [1, 1, 1, 1]], ], ], &[ [ [[1_u8, 1, 0, 0], [1, 0, 1, 1], [0, 1, 1, 1]], [[1, 0, 0, 1], [1, 0, 0, 1], [0, 0, 0, 0]], ], [ [[0, 0, 1, 0], [1, 0, 1, 1], [1, 0, 1, 0]], [[1, 0, 0, 1], [0, 0, 1, 1], [0, 0, 1, 0]], ], ], )?; // tests based on: https://github.com/onnx/onnx/blob/main/docs/Operators.md#Xor xor_broadcast // 3d vs 1d test( // Shape (3, 4, 5) &[ [ [0_u8, 0, 0, 0, 1], [0, 1, 0, 1, 1], [1, 0, 0, 1, 1], [0, 0, 1, 0, 1], ], [ [0, 1, 0, 1, 1], [1, 1, 0, 0, 1], [0, 1, 1, 1, 0], [0, 0, 0, 0, 1], ], [ [1, 1, 0, 1, 1], [0, 0, 0, 1, 1], [0, 1, 1, 0, 1], [1, 1, 0, 1, 1], ], ], // shape (5) &[1_u8, 0, 0, 1, 1], // shape (3, 4, 5) &[ [ [1_u8, 0, 0, 1, 0], [1, 1, 0, 0, 0], [0, 0, 0, 0, 0], [1, 0, 1, 1, 0], ], [ [1, 1, 0, 0, 0], [0, 1, 0, 1, 0], [1, 1, 1, 0, 1], [1, 0, 0, 1, 0], ], [ [0, 1, 0, 0, 0], [1, 0, 0, 0, 0], [1, 1, 1, 1, 0], [0, 1, 0, 0, 0], ], ], )?; // 3d vs 2d test( // Shape (3, 4, 5) &[ [ [0_u8, 0, 0, 0, 1], [0, 1, 0, 1, 1], [1, 0, 0, 1, 1], [0, 0, 1, 0, 1], ], [ [0, 1, 0, 1, 1], [1, 1, 0, 0, 1], [0, 1, 1, 1, 0], [0, 0, 0, 0, 1], ], [ [1, 1, 0, 1, 1], [0, 0, 0, 1, 1], [0, 1, 1, 0, 1], [1, 1, 0, 1, 1], ], ], // shape (4, 5) &[ [0_u8, 1, 0, 1, 0], [0, 0, 1, 0, 0], [1, 1, 0, 1, 1], [1, 1, 0, 1, 0], ], // shape (3, 4, 5) &[ [ [0_u8, 1, 0, 1, 1], [0, 1, 1, 1, 1], [0, 1, 0, 0, 0], [1, 1, 1, 1, 1], ], [ [0, 0, 0, 0, 1], [1, 1, 1, 0, 1], [1, 0, 1, 0, 1], [1, 1, 0, 1, 1], ], [ [1, 0, 0, 0, 1], [0, 0, 1, 1, 1], [1, 0, 1, 1, 0], [0, 0, 0, 0, 1], ], ], )?; // 4d vs 2d test( // Shape (2, 3, 3, 4) &[ [ [[1_u8, 0, 0, 1], [1, 1, 0, 0], [0, 1, 0, 0]], [[1, 1, 0, 0], [0, 1, 0, 0], [1, 0, 0, 1]], [[1, 0, 0, 0], [1, 1, 1, 0], [0, 0, 1, 1]], ], [ [[0, 1, 0, 1], [1, 1, 0, 1], [1, 0, 1, 1]], [[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 1, 1]], [[1, 0, 0, 0], [1, 1, 0, 0], [0, 1, 0, 1]], ], ], // shape (3, 4) &[[0_u8, 0, 1, 1], [1, 1, 1, 1], [0, 1, 0, 1]], // shape (2, 3, 3, 4) &[ [ [[1_u8, 0, 1, 0], [0, 0, 1, 1], [0, 0, 0, 1]], [[1, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 0]], [[1, 0, 1, 1], [0, 0, 0, 1], [0, 1, 1, 0]], ], [ [[0, 1, 1, 0], [0, 0, 1, 0], [1, 1, 1, 0]], [[1, 1, 1, 1], [0, 1, 1, 1], [0, 1, 1, 0]], [[1, 0, 1, 1], [0, 0, 1, 1], [0, 0, 0, 0]], ], ], )?; // 4d vs 3d test( // Shape (2, 3, 3, 4) &[ [ [[1_u8, 0, 0, 1], [1, 1, 0, 0], [0, 1, 0, 0]], [[1, 1, 0, 0], [0, 1, 0, 0], [1, 0, 0, 1]], [[1, 0, 0, 0], [1, 1, 1, 0], [0, 0, 1, 1]], ], [ [[0, 1, 0, 1], [1, 1, 0, 1], [1, 0, 1, 1]], [[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 1, 1]], [[1, 0, 0, 0], [1, 1, 0, 0], [0, 1, 0, 1]], ], ], // shape (3, 3, 4) &[ [[1_u8, 1, 0, 0], [0, 0, 1, 1], [0, 1, 0, 0]], [[0, 1, 0, 1], [0, 0, 0, 0], [0, 1, 0, 1]], [[0, 1, 1, 0], [1, 0, 1, 1], [1, 1, 0, 1]], ], // shape (2, 3, 3, 4) &[ [ [[0_u8, 1, 0, 1], [1, 1, 1, 1], [0, 0, 0, 0]], [[1, 0, 0, 1], [0, 1, 0, 0], [1, 1, 0, 0]], [[1, 1, 1, 0], [0, 1, 0, 1], [1, 1, 1, 0]], ], [ [[1, 0, 0, 1], [1, 1, 1, 0], [1, 1, 1, 1]], [[1, 0, 0, 1], [1, 0, 0, 0], [0, 1, 1, 0]], [[1, 1, 1, 0], [0, 1, 1, 1], [1, 0, 0, 0]], ], ], )?; // 4d vs 4d test( // Shape (1, 4, 1, 2) &[[[[1_u8, 0]], [[1, 0]], [[1, 0]], [[1, 1]]]], // shape (2, 1, 4, 2) &[ [[[0_u8, 0], [1, 1], [1, 1], [1, 1]]], [[[0, 1], [1, 0], [0, 1], [0, 0]]], ], // shape (2, 4, 4, 2) &[ [ [[1_u8, 0], [0, 1], [0, 1], [0, 1]], [[1, 0], [0, 1], [0, 1], [0, 1]], [[1, 0], [0, 1], [0, 1], [0, 1]], [[1, 1], [0, 0], [0, 0], [0, 0]], ], [ [[1, 1], [0, 0], [1, 1], [1, 0]], [[1, 1], [0, 0], [1, 1], [1, 0]], [[1, 1], [0, 0], [1, 1], [1, 0]], [[1, 0], [0, 1], [1, 0], [1, 1]], ], ], )?; fn test(input: impl NdArray, other: impl NdArray, expected: impl NdArray) -> Result<()> { let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "Xor".to_string(), domain: "".to_string(), attribute: vec![], input: vec![INPUT_X.to_string(), INPUT_Y.to_string()], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let inputs: HashMap<String, Tensor> = HashMap::from([ (INPUT_X.to_string(), Tensor::new(input, &Device::Cpu)?), (INPUT_Y.to_string(), Tensor::new(other, &Device::Cpu)?), ]); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); let expected = Tensor::new(expected, &Device::Cpu)?; match expected.dims().len() { 0 => { assert_eq!(z.to_vec0::<u8>()?, expected.to_vec0::<u8>()?) } 1 => { assert_eq!(z.to_vec1::<u8>()?, expected.to_vec1::<u8>()?) } 2 => { assert_eq!(z.to_vec2::<u8>()?, expected.to_vec2::<u8>()?) } 3 => { assert_eq!(z.to_vec3::<u8>()?, expected.to_vec3::<u8>()?) } 4 => { // Candle has no method equivallent to `to_vec4()` // So, as a hack, we flatten it to a single dim vec to test the results assert_eq!( z.flatten_all()?.to_vec1::<u8>()?, expected.flatten_all()?.to_vec1::<u8>()? ) } _ => unreachable!(), }; Ok(()) } Ok(()) } #[test] fn test_sign_operation() -> Result<()> { let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "Sign".to_string(), domain: "".to_string(), attribute: vec![], input: vec![INPUT_X.to_string()], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let mut inputs: HashMap<String, Tensor> = HashMap::new(); inputs.insert( INPUT_X.to_string(), Tensor::new(vec![-2f32, -1., 0., 1., 2.], &Device::Cpu)?, ); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); assert_eq!( z.to_dtype(candle::DType::I64)?.to_vec1::<i64>()?.to_vec(), vec![-1, -1, 0, 1, 1] ); Ok(()) } #[test] fn test_selu_operator() -> Result<()> { { // Test 1: Default alpha and gamma let default_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "Selu".to_string(), domain: "".to_string(), input: vec!["input".to_string()], output: vec!["output".to_string()], ..Default::default() }], input: vec![ValueInfoProto { name: "input".to_string(), ..Default::default() }], output: vec![ValueInfoProto { name: "output".to_string(), r#type: None, ..Default::default() }], ..Default::default() })); let input = Tensor::from_vec(vec![-1.0f32, 0.0, 1.0, 2.0], (2, 2), &Device::Cpu)?; let mut inputs = HashMap::new(); inputs.insert("input".to_string(), input); let eval = simple_eval(&default_graph, inputs)?; let output = eval.get("output").unwrap(); let out_vec = to_vec2_round(output, 4)?; assert_eq!(out_vec, vec![vec![-1.1113, 0.0], vec![1.0507, 2.1014]]); } { // Test 2: Change alpha and gamma let custom_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "Selu".to_string(), attribute: vec![ AttributeProto { name: "alpha".to_string(), r#type: AttributeType::Float as i32, f: 2.0, ..Default::default() }, AttributeProto { name: "gamma".to_string(), r#type: AttributeType::Float as i32, f: 0.5, ..Default::default() }, ], input: vec!["input".to_string()], output: vec!["output".to_string()], ..Default::default() }], input: vec![ValueInfoProto { name: "input".to_string(), ..Default::default() }], output: vec![ValueInfoProto { name: "output".to_string(), ..Default::default() }], ..Default::default() })); let input = Tensor::from_vec(vec![-1.0f32, 0.0, 1.0, 2.0], (2, 2), &Device::Cpu)?; let mut inputs = HashMap::new(); inputs.insert("input".to_string(), input); let eval = simple_eval(&custom_graph, inputs)?; let output = eval.get("output").unwrap(); let out_vec = to_vec2_round(output, 4)?; assert_eq!(out_vec, vec![vec![-0.6321, 0.0], vec![0.5, 1.0]]); } { // Test 3: Different input values let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "Selu".to_string(), domain: "".to_string(), input: vec!["input".to_string()], output: vec!["output".to_string()], ..Default::default() }], input: vec![ValueInfoProto { name: "input".to_string(), ..Default::default() }], output: vec![ValueInfoProto { name: "output".to_string(), ..Default::default() }], ..Default::default() })); let expected = vec![-1.758, -1.7463, 0.0, 10.507]; let input = Tensor::from_vec(vec![-10.0f32, -5.0, 0.0, 10.0], (2, 2), &Device::Cpu)?; let mut inputs = HashMap::new(); inputs.insert("input".to_string(), input); let eval = simple_eval(&manual_graph, inputs)?; let output = eval.get("output").unwrap(); let out_vec = to_vec2_round(output, 4)?; assert_eq!( out_vec, vec![ vec![expected[0], expected[1]], vec![expected[2], expected[3]] ] ); } { // Test 4: Test based on https://github.com/onnx/onnx/blob/main/docs/Operators.md#Selu let graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "Selu".to_string(), input: vec!["input".to_string()], output: vec!["output".to_string()], attribute: vec![ AttributeProto { name: "alpha".to_string(), r#type: AttributeType::Float as i32, f: 2.0, ..Default::default() }, AttributeProto { name: "gamma".to_string(), r#type: AttributeType::Float as i32, f: 3.0, ..Default::default() }, ], ..Default::default() }], input: vec![ValueInfoProto { name: "input".to_string(), ..Default::default() }], output: vec![ValueInfoProto { name: "output".to_string(), ..Default::default() }], ..Default::default() })); let input = Tensor::from_vec(vec![-1.0f32, 0.0, 1.0], (3,), &Device::Cpu)?; let mut inputs = HashMap::new(); inputs.insert("input".to_string(), input); let eval = simple_eval(&graph, inputs)?; let output = eval.get("output").unwrap(); let out_vec = output.to_vec1::<f32>()?; let expected = vec![-3.7927232, 0.0, 3.0]; for (o, e) in out_vec.iter().zip(expected.iter()) { assert!((o - e).abs() < 1e-5, "Got {o}, expected {e}"); } } { // Test 5: Empty tensor let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "Selu".to_string(), domain: "".to_string(), input: vec!["input".to_string()], output: vec!["output".to_string()], ..Default::default() }], input: vec![ValueInfoProto { name: "input".to_string(), ..Default::default() }], output: vec![ValueInfoProto { name: "output".to_string(), ..Default::default() }], ..Default::default() })); let input = Tensor::from_vec(vec![] as Vec<f32>, (0, 2), &Device::Cpu)?; let mut inputs = HashMap::new(); inputs.insert("input".to_string(), input); let eval = simple_eval(&manual_graph, inputs)?; let output = eval.get("output").unwrap(); assert_eq!(output.dims(), &[0, 2]); } Ok(()) } fn test_hard_swish() -> candle::Result<()> { { let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "HardSwish".to_string(), input: vec![INPUT_X.to_string()], output: vec![OUTPUT_Z.to_string()], ..Default::default() }], input: vec![ValueInfoProto { name: INPUT_X.to_string(), ..Default::default() }], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), ..Default::default() }], ..Default::default() })); let input_data = vec![-4.0f32, -3.0, 0.0, 2.0, 3.0, 5.0]; let input_tensor = Tensor::from_vec(input_data.clone(), (input_data.len(),), &Device::Cpu)?; let mut inputs = HashMap::new(); inputs.insert(INPUT_X.to_string(), input_tensor); let outputs = simple_eval(&manual_graph, inputs)?; let output = outputs.get(OUTPUT_Z).expect("missing output Z"); let output_vec = output.to_vec1::<f32>()?; let expected = vec![0.0, 0.0, 0.0, 1.6666666, 3.0, 5.0]; for (i, (got, exp)) in output_vec.iter().zip(expected.iter()).enumerate() { let diff = (got - exp).abs(); assert!( diff < 1e-4, "Mismatch at index {i}: got {got}, expected {exp}, diff={diff}" ); } } { let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "HardSwish".to_string(), input: vec![INPUT_X.to_string()], output: vec![OUTPUT_Z.to_string()], ..Default::default() }], input: vec![ValueInfoProto { name: INPUT_X.to_string(), ..Default::default() }], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), ..Default::default() }], ..Default::default() })); let input_data = vec![-4.0f32, -2.0, 0.0, 2.0, 4.0]; let input_tensor = Tensor::from_vec(input_data.clone(), (input_data.len(),), &Device::Cpu)?; let mut inputs = HashMap::new(); inputs.insert(INPUT_X.to_string(), input_tensor); let outputs = simple_eval(&manual_graph, inputs)?; let output = outputs.get(OUTPUT_Z).expect("missing output Z"); let output_vec = output.to_vec1::<f32>()?; let expected = vec![0.0, -0.33333334, 0.0, 1.6666667, 4.0]; for (i, (got, exp)) in output_vec.iter().zip(expected.iter()).enumerate() { let diff = (got - exp).abs(); assert!( diff < 1e-4, "Mismatch at index {i}: got {got}, expected {exp}, diff={diff}" ); } } Ok(()) } #[test] fn test_scatternd_operation() -> Result<()> { // Example 1 based on ONNX documentation test( &[1.0f32, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0], &[[4i64], [3], [1], [7]], &[9.0f32, 10.0, 11.0, 12.0], &[1.0f32, 11.0, 3.0, 10.0, 9.0, 6.0, 7.0, 12.0], )?; // A more complex example with 2D data test( &[[1.0f32, 2.0], [3.0, 4.0], [5.0, 6.0]], &[[0i64, 1], [1, 0]], &[10.0f32, 20.0], &[[1.0f32, 10.0], [20.0, 4.0], [5.0, 6.0]], )?; // 3D example with indices pointing to specific locations test( &[ [[1.0f32, 2.0], [3.0, 4.0]], [[5.0, 6.0], [7.0, 8.0]], [[9.0, 10.0], [11.0, 12.0]], ], &[[0i64, 0, 1], [1, 1, 0]], &[100.0f32, 200.0], &[ [[1.0f32, 100.0], [3.0, 4.0]], [[5.0, 6.0], [200.0, 8.0]], [[9.0, 10.0], [11.0, 12.0]], ], )?; fn test( data: impl NdArray, indices: impl NdArray, updates: impl NdArray, expected: impl NdArray, ) -> Result<()> { let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "ScatterND".to_string(), domain: "".to_string(), attribute: vec![], input: vec![ INPUT_X.to_string(), INPUT_Y.to_string(), INPUT_A.to_string(), ], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let mut inputs: HashMap<String, Tensor> = HashMap::new(); inputs.insert(INPUT_X.to_string(), Tensor::new(data, &Device::Cpu)?); inputs.insert(INPUT_Y.to_string(), Tensor::new(indices, &Device::Cpu)?); inputs.insert(INPUT_A.to_string(), Tensor::new(updates, &Device::Cpu)?); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); let expected = Tensor::new(expected, &Device::Cpu)?; match expected.dims().len() { 1 => assert_eq!(z.to_vec1::<f32>()?, expected.to_vec1::<f32>()?), 2 => assert_eq!(z.to_vec2::<f32>()?, expected.to_vec2::<f32>()?), 3 => assert_eq!(z.to_vec3::<f32>()?, expected.to_vec3::<f32>()?), _ => unreachable!(), }; Ok(()) } Ok(()) } #[test] fn test_trilu_operation() -> Result<()> { // Test 1: Upper triangular matrix (default behavior with upper=true) { let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "Trilu".to_string(), domain: "".to_string(), attribute: vec![], // empty attribute means default upper=true input: vec![INPUT_X.to_string()], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![ValueInfoProto { name: INPUT_X.to_string(), doc_string: "".to_string(), r#type: None, }], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let x = Tensor::from_vec( vec![ 4i64, 7, 3, 7, 9, 1, 2, 8, 6, 9, 9, 4, 0, 8, 7, 4, 3, 4, 2, 4, ], &[4, 5], &Device::Cpu, )?; let mut inputs: HashMap<String, Tensor> = HashMap::new(); inputs.insert(INPUT_X.to_string(), x); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); let results = z.to_vec2::<i64>()?; assert_eq!( results, vec![ vec![4, 7, 3, 7, 9], vec![0, 2, 8, 6, 9], vec![0, 0, 0, 8, 7], vec![0, 0, 0, 2, 4] ] ); } // Test 2: Upper triangular with positive k=1 (diagonal above main) { let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "Trilu".to_string(), domain: "".to_string(), attribute: vec![], input: vec![INPUT_X.to_string(), INPUT_Y.to_string()], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![ ValueInfoProto { name: INPUT_X.to_string(), doc_string: "".to_string(), r#type: None, }, ValueInfoProto { name: INPUT_Y.to_string(), doc_string: "".to_string(), r#type: None, }, ], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let x = Tensor::from_vec( vec![1i64, 4, 9, 7, 1, 9, 2, 8, 8, 4, 3, 9, 7, 4, 2], &[3, 5], &Device::Cpu, )?; let k = Tensor::from_vec(vec![1i64], (), &Device::Cpu)?; let mut inputs: HashMap<String, Tensor> = HashMap::new(); inputs.insert(INPUT_X.to_string(), x); inputs.insert(INPUT_Y.to_string(), k); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); let results = z.to_vec2::<i64>()?; assert_eq!( results, vec![ vec![0, 4, 9, 7, 1], vec![0, 0, 8, 8, 4], vec![0, 0, 0, 4, 2] ] ); } // Test 3: Upper triangular with negative k=-1 (one diagonal below main) { let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "Trilu".to_string(), domain: "".to_string(), attribute: vec![], input: vec![INPUT_X.to_string(), INPUT_Y.to_string()], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let x = Tensor::from_vec( vec![ 4i64, 7, 3, 7, 9, 1, 2, 8, 6, 9, 9, 4, 0, 8, 7, 4, 3, 4, 2, 4, ], &[4, 5], &Device::Cpu, )?; let k = Tensor::from_vec(vec![-1i64], (), &Device::Cpu)?; let mut inputs: HashMap<String, Tensor> = HashMap::new(); inputs.insert(INPUT_X.to_string(), x); inputs.insert(INPUT_Y.to_string(), k); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); let results = z.to_vec2::<i64>()?; assert_eq!( results, vec![ vec![4, 7, 3, 7, 9], vec![1, 2, 8, 6, 9], vec![0, 4, 0, 8, 7], vec![0, 0, 4, 2, 4] ] ); } // Test 4: Lower triangular matrix (upper=0) { let att_upper = AttributeProto { name: "upper".to_string(), ref_attr_name: "upper".to_string(), i: 0, // 0 means false, use lower triangular doc_string: "upper".to_string(), r#type: 2, f: 0.0, s: vec![], t: None, g: None, sparse_tensor: None, tp: None, floats: vec![], ints: vec![], strings: vec![], tensors: vec![], graphs: vec![], sparse_tensors: vec![], type_protos: vec![], }; let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "Trilu".to_string(), domain: "".to_string(), attribute: vec![att_upper], input: vec![INPUT_X.to_string()], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let x = Tensor::from_vec( vec![ 4i64, 7, 3, 7, 9, 1, 2, 8, 6, 9, 9, 4, 1, 8, 7, 4, 3, 4, 2, 4, ], &[4, 5], &Device::Cpu, )?; let mut inputs: HashMap<String, Tensor> = HashMap::new(); inputs.insert(INPUT_X.to_string(), x); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); let results = z.to_vec2::<i64>()?; // Lower triangular matrix (default k=0) assert_eq!( results, vec![ vec![4, 0, 0, 0, 0], vec![1, 2, 0, 0, 0], vec![9, 4, 1, 0, 0], vec![4, 3, 4, 2, 0] ] ); } // Test 5: Lower triangular with negative k=-1 { let att_upper = AttributeProto { name: "upper".to_string(), ref_attr_name: "upper".to_string(), i: 0, doc_string: "upper".to_string(), r#type: 2, f: 0.0, s: vec![], t: None, g: None, sparse_tensor: None, tp: None, floats: vec![], ints: vec![], strings: vec![], tensors: vec![], graphs: vec![], sparse_tensors: vec![], type_protos: vec![], }; let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "Trilu".to_string(), domain: "".to_string(), attribute: vec![att_upper], input: vec![INPUT_X.to_string(), INPUT_Y.to_string()], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let x = Tensor::from_vec( vec![ 4i64, 7, 3, 7, 9, 1, 2, 8, 6, 9, 9, 4, 1, 8, 7, 4, 3, 4, 2, 4, ], &[4, 5], &Device::Cpu, )?; let k = Tensor::from_vec(vec![-1i64], (), &Device::Cpu)?; let mut inputs: HashMap<String, Tensor> = HashMap::new(); inputs.insert(INPUT_X.to_string(), x); inputs.insert(INPUT_Y.to_string(), k); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); let results = z.to_vec2::<i64>()?; assert_eq!( results, vec![ vec![0, 0, 0, 0, 0], vec![1, 0, 0, 0, 0], vec![9, 4, 0, 0, 0], vec![4, 3, 4, 0, 0] ] ); } // Test 6: Lower triangular with positive k=2 { let att_upper = AttributeProto { name: "upper".to_string(), ref_attr_name: "upper".to_string(), i: 0, doc_string: "upper".to_string(), r#type: 2, f: 0.0, s: vec![], t: None, g: None, sparse_tensor: None, tp: None, floats: vec![], ints: vec![], strings: vec![], tensors: vec![], graphs: vec![], sparse_tensors: vec![], type_protos: vec![], }; let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "Trilu".to_string(), domain: "".to_string(), attribute: vec![att_upper], input: vec![INPUT_X.to_string(), INPUT_Y.to_string()], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let x = Tensor::from_vec( vec![ 4i64, 7, 3, 7, 9, 1, 2, 8, 6, 9, 9, 4, 1, 8, 7, 4, 3, 4, 2, 4, ], &[4, 5], &Device::Cpu, )?; let k = Tensor::from_vec(vec![2i64], (), &Device::Cpu)?; let mut inputs: HashMap<String, Tensor> = HashMap::new(); inputs.insert(INPUT_X.to_string(), x); inputs.insert(INPUT_Y.to_string(), k); let eval = candle_onnx::simple_eval(&manual_graph, inputs)?; assert_eq!(eval.len(), 1); let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); let results = z.to_vec2::<i64>()?; assert_eq!( results, vec![ vec![4, 7, 3, 0, 0], vec![1, 2, 8, 6, 0], vec![9, 4, 1, 8, 7], vec![4, 3, 4, 2, 4] ] ); } Ok(()) } #[test] fn test_one_hot() -> Result<()> { // Tests based on: https://github.com/onnx/onnx/blob/main/docs/Operators.md#OneHot { let depth_value = Tensor::new(3i64, &Device::Cpu)?; // depth = 3 let values_tensor = Tensor::from_vec(vec![0.0f32, 1.0], (2,), &Device::Cpu)?; // off = 0.0, on = 1.0 let manual_graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "OneHot".to_string(), domain: "".to_string(), attribute: vec![AttributeProto { name: "axis".to_string(), r#type: AttributeType::Int as i32, i: -1, ..Default::default() }], input: vec![ INPUT_X.to_string(), // indices "depth".to_string(), // depth "values".to_string(), // values ], output: vec![OUTPUT_Z.to_string()], name: "".to_string(), doc_string: "".to_string(), }], name: "".to_string(), initializer: vec![], input: vec![], output: vec![ValueInfoProto { name: OUTPUT_Z.to_string(), doc_string: "".to_string(), r#type: None, }], value_info: vec![], doc_string: "".to_string(), sparse_initializer: vec![], quantization_annotation: vec![], })); let mut inputs: HashMap<String, Tensor> = HashMap::new(); inputs.insert( INPUT_X.to_string(), Tensor::new(vec![0i64, 1, 2], &Device::Cpu)?, ); inputs.insert("depth".to_string(), depth_value); inputs.insert("values".to_string(), values_tensor); let eval = simple_eval(&manual_graph, inputs)?; let z = eval.get(OUTPUT_Z).expect("Output 'z' not found"); let expected = vec![ vec![1.0, 0.0, 0.0], vec![0.0, 1.0, 0.0], vec![0.0, 0.0, 1.0], ]; let z_reshaped = z.to_dtype(DType::F32)?.reshape((3, 3))?.to_vec2::<f32>()?; assert_eq!(z_reshaped, expected); } { // Test with axis let indices = Tensor::from_vec(vec![1i64, 9, 2, 4], (2, 2), &Device::Cpu)?; let depth = Tensor::new(10i64, &Device::Cpu)?; let values = Tensor::from_vec(vec![1.0f32, 3.0], (2,), &Device::Cpu)?; let graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "OneHot".to_string(), input: vec!["indices".into(), "depth".into(), "values".into()], output: vec!["y".into()], attribute: vec![AttributeProto { name: "axis".into(), r#type: AttributeType::Int as i32, i: 1, ..Default::default() }], ..Default::default() }], output: vec![ValueInfoProto { name: "y".into(), ..Default::default() }], ..Default::default() })); let mut inputs = HashMap::new(); inputs.insert("indices".into(), indices); inputs.insert("depth".into(), depth); inputs.insert("values".into(), values); let eval = simple_eval(&graph, inputs)?; let y = eval.get("y").unwrap(); assert_eq!(y.dims(), &[2, 10, 2]); } { // Test with negative axis let indices = Tensor::from_vec(vec![1i64, 9, 2, 4], (2, 2), &Device::Cpu)?; let depth = Tensor::new(10i64, &Device::Cpu)?; let values = Tensor::from_vec(vec![1.0f32, 3.0], (2,), &Device::Cpu)?; let graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "OneHot".to_string(), input: vec!["indices".into(), "depth".into(), "values".into()], output: vec!["y".into()], attribute: vec![AttributeProto { name: "axis".into(), r#type: AttributeType::Int as i32, i: -2, ..Default::default() }], ..Default::default() }], output: vec![ValueInfoProto { name: "y".into(), ..Default::default() }], ..Default::default() })); let mut inputs = HashMap::new(); inputs.insert("indices".into(), indices); inputs.insert("depth".into(), depth); inputs.insert("values".into(), values); let eval = simple_eval(&graph, inputs)?; let y = eval.get("y").unwrap(); assert_eq!(y.dims(), &[2, 10, 2]); } { // Test with negative indices let indices = Tensor::from_vec(vec![0i64, -7, -8], (3,), &Device::Cpu)?; let depth = Tensor::new(10i64, &Device::Cpu)?; let values = Tensor::from_vec(vec![1.0f32, 3.0], (2,), &Device::Cpu)?; let graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "OneHot".to_string(), input: vec!["indices".into(), "depth".into(), "values".into()], output: vec!["y".into()], attribute: vec![AttributeProto { name: "axis".into(), r#type: AttributeType::Int as i32, i: 1, ..Default::default() }], ..Default::default() }], output: vec![ValueInfoProto { name: "y".into(), ..Default::default() }], ..Default::default() })); let mut inputs = HashMap::new(); inputs.insert("indices".into(), indices); inputs.insert("depth".into(), depth); inputs.insert("values".into(), values); let eval = simple_eval(&graph, inputs)?; let y = eval.get("y").unwrap(); assert_eq!(y.dims(), &[3, 10]); } { // Test without axis let indices = Tensor::from_vec(vec![0i64, 7, 8], (3,), &Device::Cpu)?; let depth = Tensor::new(12i64, &Device::Cpu)?; let values = Tensor::from_vec(vec![2f32, 5.0], (2,), &Device::Cpu)?; let graph = create_model_proto_with_graph(Some(GraphProto { node: vec![NodeProto { op_type: "OneHot".to_string(), input: vec!["indices".into(), "depth".into(), "values".into()], output: vec!["y".into()], ..Default::default() }], output: vec![ValueInfoProto { name: "y".into(), ..Default::default() }], ..Default::default() })); let mut inputs = HashMap::new(); inputs.insert("indices".into(), indices); inputs.insert("depth".into(), depth); inputs.insert("values".into(), values); let eval = simple_eval(&graph, inputs)?; let y = eval.get("y").unwrap(); assert_eq!(y.dims(), &[3, 12]); } Ok(()) }
candle/candle-onnx/tests/ops.rs/0
{ "file_path": "candle/candle-onnx/tests/ops.rs", "repo_id": "candle", "token_count": 131067 }
51
# see https://github.com/pytorch/pytorch/blob/main/torch/nn/modules/container.py from .module import Module from typing import ( Any, Dict, Iterable, Iterator, Mapping, Optional, overload, Tuple, TypeVar, Union, ) from collections import OrderedDict, abc as container_abcs import operator from itertools import chain, islice __all__ = ["Sequential", "ModuleList", "ModuleDict"] T = TypeVar("T", bound=Module) def _addindent(s_: str, numSpaces: int): s = s_.split("\n") # don't do anything for single-line stuff if len(s) == 1: return s_ first = s.pop(0) s = [(numSpaces * " ") + line for line in s] s = "\n".join(s) s = first + "\n" + s return s class Sequential(Module): r"""A sequential container. Modules will be added to it in the order they are passed in the constructor. Alternatively, an ``OrderedDict`` of modules can be passed in. The ``forward()`` method of ``Sequential`` accepts any input and forwards it to the first module it contains. It then "chains" outputs to inputs sequentially for each subsequent module, finally returning the output of the last module. The value a ``Sequential`` provides over manually calling a sequence of modules is that it allows treating the whole container as a single module, such that performing a transformation on the ``Sequential`` applies to each of the modules it stores (which are each a registered submodule of the ``Sequential``). What's the difference between a ``Sequential`` and a :class:`candle.nn.ModuleList`? A ``ModuleList`` is exactly what it sounds like--a list for storing ``Module`` s! On the other hand, the layers in a ``Sequential`` are connected in a cascading way. """ _modules: Dict[str, Module] # type: ignore[assignment] @overload def __init__(self, *args: Module) -> None: ... @overload def __init__(self, arg: "OrderedDict[str, Module]") -> None: ... def __init__(self, *args): super().__init__() if len(args) == 1 and isinstance(args[0], OrderedDict): for key, module in args[0].items(): self.add_module(key, module) else: for idx, module in enumerate(args): self.add_module(str(idx), module) def _get_item_by_idx(self, iterator, idx) -> T: """Get the idx-th item of the iterator""" size = len(self) idx = operator.index(idx) if not -size <= idx < size: raise IndexError("index {} is out of range".format(idx)) idx %= size return next(islice(iterator, idx, None)) def __getitem__(self, idx: Union[slice, int]) -> Union["Sequential", T]: if isinstance(idx, slice): return self.__class__(OrderedDict(list(self._modules.items())[idx])) else: return self._get_item_by_idx(self._modules.values(), idx) def __setitem__(self, idx: int, module: Module) -> None: key: str = self._get_item_by_idx(self._modules.keys(), idx) return setattr(self, key, module) def __delitem__(self, idx: Union[slice, int]) -> None: if isinstance(idx, slice): for key in list(self._modules.keys())[idx]: delattr(self, key) else: key = self._get_item_by_idx(self._modules.keys(), idx) delattr(self, key) # To preserve numbering str_indices = [str(i) for i in range(len(self._modules))] self._modules = OrderedDict(list(zip(str_indices, self._modules.values()))) def __len__(self) -> int: return len(self._modules) def __add__(self, other) -> "Sequential": if isinstance(other, Sequential): ret = Sequential() for layer in self: ret.append(layer) for layer in other: ret.append(layer) return ret else: raise ValueError( "add operator supports only objects " "of Sequential class, but {} is given.".format(str(type(other))) ) def pop(self, key: Union[int, slice]) -> Module: v = self[key] del self[key] return v def __iadd__(self, other) -> "Sequential": if isinstance(other, Sequential): offset = len(self) for i, module in enumerate(other): self.add_module(str(i + offset), module) return self else: raise ValueError( "add operator supports only objects " "of Sequential class, but {} is given.".format(str(type(other))) ) def __mul__(self, other: int) -> "Sequential": if not isinstance(other, int): raise TypeError(f"unsupported operand type(s) for *: {type(self)} and {type(other)}") elif other <= 0: raise ValueError(f"Non-positive multiplication factor {other} for {type(self)}") else: combined = Sequential() offset = 0 for _ in range(other): for module in self: combined.add_module(str(offset), module) offset += 1 return combined def __rmul__(self, other: int) -> "Sequential": return self.__mul__(other) def __imul__(self, other: int) -> "Sequential": if not isinstance(other, int): raise TypeError(f"unsupported operand type(s) for *: {type(self)} and {type(other)}") elif other <= 0: raise ValueError(f"Non-positive multiplication factor {other} for {type(self)}") else: len_original = len(self) offset = len(self) for _ in range(other - 1): for i in range(len_original): self.add_module(str(i + offset), self._modules[str(i)]) offset += len_original return self def __dir__(self): keys = super().__dir__() keys = [key for key in keys if not key.isdigit()] return keys def __iter__(self) -> Iterator[Module]: return iter(self._modules.values()) # NB: We can't really type check this function as the type of input # may change dynamically (as is tested in # TestScript.test_sequential_intermediary_types). Cannot annotate # with Any as TorchScript expects a more precise type def forward(self, input): for module in self: input = module(input) return input def append(self, module: Module) -> "Sequential": r"""Appends a given module to the end. Args: module (nn.Module): module to append """ self.add_module(str(len(self)), module) return self def insert(self, index: int, module: Module) -> "Sequential": if not isinstance(module, Module): raise AssertionError("module should be of type: {}".format(Module)) n = len(self._modules) if not (-n <= index <= n): raise IndexError("Index out of range: {}".format(index)) if index < 0: index += n for i in range(n, index, -1): self._modules[str(i)] = self._modules[str(i - 1)] self._modules[str(index)] = module return self def extend(self, sequential) -> "Sequential": for layer in sequential: self.append(layer) return self class ModuleList(Module): r"""Holds submodules in a list. :class:`~candle.nn.ModuleList` can be indexed like a regular Python list, but modules it contains are properly registered, and will be visible by all :class:`~candle.nn.Module` methods. Args: modules (iterable, optional): an iterable of modules to add Example:: class MyModule(nn.Module): def __init__(self): super().__init__() self.linears = nn.ModuleList([nn.Linear(10, 10) for i in range(10)]) def forward(self, x): # ModuleList can act as an iterable, or be indexed using ints for i, l in enumerate(self.linears): x = self.linears[i // 2](x) + l(x) return x """ _modules: Dict[str, Module] # type: ignore[assignment] def __init__(self, modules: Optional[Iterable[Module]] = None) -> None: super().__init__() if modules is not None: self += modules def _get_abs_string_index(self, idx): """Get the absolute index for the list of modules""" idx = operator.index(idx) if not (-len(self) <= idx < len(self)): raise IndexError("index {} is out of range".format(idx)) if idx < 0: idx += len(self) return str(idx) def __getitem__(self, idx: Union[int, slice]) -> Union[Module, "ModuleList"]: if isinstance(idx, slice): return self.__class__(list(self._modules.values())[idx]) else: return self._modules[self._get_abs_string_index(idx)] def __setitem__(self, idx: int, module: Module) -> None: idx = self._get_abs_string_index(idx) return setattr(self, str(idx), module) def __delitem__(self, idx: Union[int, slice]) -> None: if isinstance(idx, slice): for k in range(len(self._modules))[idx]: delattr(self, str(k)) else: delattr(self, self._get_abs_string_index(idx)) # To preserve numbering, self._modules is being reconstructed with modules after deletion str_indices = [str(i) for i in range(len(self._modules))] self._modules = OrderedDict(list(zip(str_indices, self._modules.values()))) def __len__(self) -> int: return len(self._modules) def __iter__(self) -> Iterator[Module]: return iter(self._modules.values()) def __iadd__(self, modules: Iterable[Module]) -> "ModuleList": return self.extend(modules) def __add__(self, other: Iterable[Module]) -> "ModuleList": combined = ModuleList() for i, module in enumerate(chain(self, other)): combined.add_module(str(i), module) return combined def __repr__(self): """A custom repr for ModuleList that compresses repeated module representations""" list_of_reprs = [repr(item) for item in self] if len(list_of_reprs) == 0: return self._get_name() + "()" start_end_indices = [[0, 0]] repeated_blocks = [list_of_reprs[0]] for i, r in enumerate(list_of_reprs[1:], 1): if r == repeated_blocks[-1]: start_end_indices[-1][1] += 1 continue start_end_indices.append([i, i]) repeated_blocks.append(r) lines = [] main_str = self._get_name() + "(" for (start_id, end_id), b in zip(start_end_indices, repeated_blocks): local_repr = f"({start_id}): {b}" # default repr if start_id != end_id: n = end_id - start_id + 1 local_repr = f"({start_id}-{end_id}): {n} x {b}" local_repr = _addindent(local_repr, 2) lines.append(local_repr) main_str += "\n " + "\n ".join(lines) + "\n" main_str += ")" return main_str def __dir__(self): keys = super().__dir__() keys = [key for key in keys if not key.isdigit()] return keys def insert(self, index: int, module: Module) -> None: r"""Insert a given module before a given index in the list. Args: index (int): index to insert. module (nn.Module): module to insert """ for i in range(len(self._modules), index, -1): self._modules[str(i)] = self._modules[str(i - 1)] self._modules[str(index)] = module def append(self, module: Module) -> "ModuleList": r"""Appends a given module to the end of the list. Args: module (nn.Module): module to append """ self.add_module(str(len(self)), module) return self def pop(self, key: Union[int, slice]) -> Module: v = self[key] del self[key] return v def extend(self, modules: Iterable[Module]) -> "ModuleList": r"""Appends modules from a Python iterable to the end of the list. Args: modules (iterable): iterable of modules to append """ if not isinstance(modules, container_abcs.Iterable): raise TypeError( "ModuleList.extend should be called with an " "iterable, but got " + type(modules).__name__ ) offset = len(self) for i, module in enumerate(modules): self.add_module(str(offset + i), module) return self # remove forward altogether to fallback on Module's _forward_unimplemented class ModuleDict(Module): r"""Holds submodules in a dictionary. :class:`~candle.nn.ModuleDict` can be indexed like a regular Python dictionary, but modules it contains are properly registered, and will be visible by all :class:`~candle.nn.Module` methods. :class:`~candle.nn.ModuleDict` is an **ordered** dictionary that respects * the order of insertion, and * in :meth:`~candle.nn.ModuleDict.update`, the order of the merged ``OrderedDict``, ``dict`` (started from Python 3.6) or another :class:`~candle.nn.ModuleDict` (the argument to :meth:`~candle.nn.ModuleDict.update`). Note that :meth:`~candle.nn.ModuleDict.update` with other unordered mapping types (e.g., Python's plain ``dict`` before Python version 3.6) does not preserve the order of the merged mapping. Args: modules (iterable, optional): a mapping (dictionary) of (string: module) or an iterable of key-value pairs of type (string, module) """ _modules: Dict[str, Module] # type: ignore[assignment] def __init__(self, modules: Optional[Mapping[str, Module]] = None) -> None: super().__init__() if modules is not None: self.update(modules) def __getitem__(self, key: str) -> Module: return self._modules[key] def __setitem__(self, key: str, module: Module) -> None: self.add_module(key, module) def __delitem__(self, key: str) -> None: del self._modules[key] def __len__(self) -> int: return len(self._modules) def __iter__(self) -> Iterator[str]: return iter(self._modules) def __contains__(self, key: str) -> bool: return key in self._modules def clear(self) -> None: """Remove all items from the ModuleDict.""" self._modules.clear() def pop(self, key: str) -> Module: r"""Remove key from the ModuleDict and return its module. Args: key (str): key to pop from the ModuleDict """ v = self[key] del self[key] return v def keys(self) -> Iterable[str]: r"""Return an iterable of the ModuleDict keys.""" return self._modules.keys() def items(self) -> Iterable[Tuple[str, Module]]: r"""Return an iterable of the ModuleDict key/value pairs.""" return self._modules.items() def values(self) -> Iterable[Module]: r"""Return an iterable of the ModuleDict values.""" return self._modules.values() def update(self, modules: Mapping[str, Module]) -> None: r"""Update the :class:`~candle.nn.ModuleDict` with the key-value pairs from a mapping or an iterable, overwriting existing keys. .. note:: If :attr:`modules` is an ``OrderedDict``, a :class:`~candle.nn.ModuleDict`, or an iterable of key-value pairs, the order of new elements in it is preserved. Args: modules (iterable): a mapping (dictionary) from string to :class:`~candle.nn.Module`, or an iterable of key-value pairs of type (string, :class:`~candle.nn.Module`) """ if not isinstance(modules, container_abcs.Iterable): raise TypeError( "ModuleDict.update should be called with an " "iterable of key/value pairs, but got " + type(modules).__name__ ) if isinstance(modules, (OrderedDict, ModuleDict, container_abcs.Mapping)): for key, module in modules.items(): self[key] = module else: # modules here can be a list with two items for j, m in enumerate(modules): if not isinstance(m, container_abcs.Iterable): raise TypeError( "ModuleDict update sequence element " "#" + str(j) + " should be Iterable; is" + type(m).__name__ ) if not len(m) == 2: raise ValueError( "ModuleDict update sequence element " "#" + str(j) + " has length " + str(len(m)) + "; 2 is required" ) # modules can be Mapping (what it's typed at), or a list: [(name1, module1), (name2, module2)] # that's too cumbersome to type correctly with overloads, so we add an ignore here self[m[0]] = m[1] # type: ignore[assignment] # remove forward altogether to fallback on Module's _forward_unimplemented
candle/candle-pyo3/py_src/candle/nn/container.py/0
{ "file_path": "candle/candle-pyo3/py_src/candle/nn/container.py", "repo_id": "candle", "token_count": 7602 }
52
use pyo3::exceptions::PyValueError; use pyo3::prelude::*; pub fn wrap_err(err: ::candle::Error) -> PyErr { PyErr::new::<PyValueError, _>(format!("{err:?}")) }
candle/candle-pyo3/src/utils.rs/0
{ "file_path": "candle/candle-pyo3/src/utils.rs", "repo_id": "candle", "token_count": 74 }
53
//! Based on the BEIT vision-language model. //! //! See "BEIT: BERT Pre-Training of Image Transformers", Bao et al. 2021 //! - [Arxiv](https://arxiv.org/abs/2106.08254) //! - [Github](https://github.com/microsoft/unilm/tree/master/beit) //! use candle::{DType, Device, IndexOp, Result, Tensor, D}; use candle_nn::{layer_norm, LayerNorm, Linear, Module, VarBuilder}; const IMG_SIZE: usize = 384; const PATCH_SIZE: usize = 16; const NUM_CLASSES: usize = 1000; const WINDOW_SIZE: usize = IMG_SIZE / PATCH_SIZE; // 384 / 16 = 24 const NB_TOKENS: usize = WINDOW_SIZE * WINDOW_SIZE + 1; // 24 * 24 + 1 = 577 fn linear(vb: VarBuilder, in_dim: usize, out_dim: usize, bias: bool) -> Result<Linear> { if bias { candle_nn::linear(in_dim, out_dim, vb) } else { candle_nn::linear_no_bias(in_dim, out_dim, vb) } } #[derive(Debug)] struct Attention { qkv: Linear, proj: Linear, relative_position_bias_table: Tensor, relative_position_index: Tensor, num_heads: usize, scale: f64, } impl Attention { fn new( vb: VarBuilder, dim: usize, num_heads: usize, qkv_bias: bool, proj_bias: bool, ) -> Result<Self> { let qkv = linear(vb.pp("qkv"), dim, dim * 3, qkv_bias)?; let proj = linear(vb.pp("proj"), dim, dim, proj_bias)?; // num_relative_distance = token-token(47x47) + token-CLS(1) + CLS-token(1) + CLS-CLS(1) = 2212 let num_relative_distance = (2 * WINDOW_SIZE - 1) * (2 * WINDOW_SIZE - 1) + 3; let relative_position_bias_table = vb.get( (num_relative_distance, num_heads), "relative_position_bias_table", )?; let relative_position_index = Self::gen_relative_position_index(relative_position_bias_table.device())?; let scale = 1. / ((dim / num_heads) as f64).sqrt(); Ok(Self { qkv, proj, relative_position_bias_table, relative_position_index, num_heads, scale, }) } } impl Attention { // See: https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/beit.py#L61 fn gen_relative_position_index(device: &Device) -> Result<Tensor> { let num_relative_distance = (2 * WINDOW_SIZE - 1) * (2 * WINDOW_SIZE - 1) + 3; let w_area = WINDOW_SIZE * WINDOW_SIZE; let t_arange: Tensor = Tensor::arange(0, WINDOW_SIZE as u32, device)?; let t_ndgrid = Tensor::meshgrid(&[&t_arange, &t_arange], false)?; let coords_flatten = Tensor::stack(&t_ndgrid, 0)?.flatten(1, 2)?; let tmp1 = coords_flatten .unsqueeze(2)? .broadcast_as((2, w_area, w_area))? .to_dtype(DType::I64)?; let tmp2 = coords_flatten .unsqueeze(1)? .broadcast_as((2, w_area, w_area))? .to_dtype(DType::I64)?; let relative_coords = (tmp1 - tmp2)? .transpose(0, 1)? // 102 .transpose(1, 2)? // 120 .contiguous()?; let relative_coords = relative_coords.slice_assign( &[0..w_area, 0..w_area, 0..1], &(relative_coords.i((0..w_area, 0..w_area, 0..1))? + (WINDOW_SIZE - 1) as f64)?, )?; let relative_coords = relative_coords.slice_assign( &[0..w_area, 0..w_area, 1..2], &(relative_coords.i((0..w_area, 0..w_area, 1..2))? + (WINDOW_SIZE - 1) as f64)?, )?; let relative_coords = relative_coords.slice_assign( &[0..w_area, 0..w_area, 0..1], &(relative_coords.i((.., .., 0..1))? * (2. * (WINDOW_SIZE as f64) - 1.))?, )?; Tensor::zeros((w_area + 1, w_area + 1), DType::I64, device)? .slice_assign(&[1.., 1..], &relative_coords.sum(2)?)? .slice_assign( &[0..1, 0..(w_area + 1)], &(Tensor::ones((1, w_area + 1), DType::I64, device)? * ((num_relative_distance - 3) as f64))? .to_dtype(DType::I64)?, )? .slice_assign( &[0..(w_area + 1), 0..1], &(Tensor::ones((w_area + 1, 1), DType::I64, device)? * ((num_relative_distance - 2) as f64))? .to_dtype(DType::I64)?, )? .slice_assign( &[0..1, 0..1], &(Tensor::ones((1, 1), DType::I64, device)? * ((num_relative_distance - 1) as f64))? .to_dtype(DType::I64)?, ) } fn _get_rel_pos_bias(&self) -> Result<Tensor> { self.relative_position_bias_table .index_select( &self .relative_position_index .flatten_all()? .to_dtype(DType::U32)?, 0, )? .reshape((NB_TOKENS, NB_TOKENS, ()))? .transpose(0, 1)? // 102 .transpose(0, 2)? // 201 .contiguous()? .unsqueeze(0) } } impl Module for Attention { fn forward(&self, xs: &Tensor) -> Result<Tensor> { let (b, n, c) = xs.dims3()?; let qkv = self .qkv .forward(xs)? .reshape((b, n, 3, self.num_heads, c / self.num_heads))? .transpose(1, 2)? // 02134 .transpose(0, 1)? // 20134 .transpose(2, 3)?; // 20314 let q = (qkv.i(0)? * self.scale)?; let k = qkv.i(1)?.contiguous()?; let v = qkv.i(2)?.contiguous()?; let attn = (&q.matmul(&k.t()?)? + self._get_rel_pos_bias())?; let attn = candle_nn::ops::softmax(&attn, D::Minus1)?; let attn = attn.matmul(&v)?.transpose(1, 2)?.reshape((b, n, c))?; self.proj.forward(&attn) } } #[derive(Debug)] struct LayerScale { gamma: Tensor, } impl LayerScale { fn new(vb: VarBuilder, dim: usize) -> Result<Self> { let gamma = vb.get(dim, "gamma")?; Ok(Self { gamma }) } } impl Module for LayerScale { fn forward(&self, xs: &Tensor) -> Result<Tensor> { xs.broadcast_mul(&self.gamma) } } #[derive(Debug)] struct Mlp { fc1: Linear, fc2: Linear, } impl Mlp { fn new(vb: VarBuilder, in_features: usize, hidden_features: usize, bias: bool) -> Result<Self> { let out_features = in_features; let fc1 = linear(vb.pp("fc1"), in_features, hidden_features, bias)?; let fc2 = linear(vb.pp("fc2"), hidden_features, out_features, bias)?; Ok(Self { fc1, fc2 }) } } impl Module for Mlp { fn forward(&self, xs: &Tensor) -> Result<Tensor> { let xs = self.fc1.forward(xs)?.gelu()?; self.fc2.forward(&xs) } } #[derive(Debug)] struct Block { norm1: LayerNorm, attn: Attention, ls1: LayerScale, norm2: LayerNorm, mlp: Mlp, ls2: LayerScale, } impl Block { fn new(vb: VarBuilder, dim: usize, num_heads: usize) -> Result<Self> { let norm1 = layer_norm(dim, 1e-6, vb.pp("norm1"))?; let attn = Attention::new(vb.pp("attn"), dim, num_heads, true, true)?; let ls1 = LayerScale::new(vb.pp("ls1"), dim)?; let norm2 = layer_norm(dim, 1e-6, vb.pp("norm2"))?; let mlp = Mlp::new(vb.pp("mlp"), dim, dim * 4, true)?; let ls2 = LayerScale::new(vb.pp("ls2"), dim)?; Ok(Self { norm1, attn, ls1, norm2, mlp, ls2, }) } } impl Module for Block { fn forward(&self, xs: &Tensor) -> Result<Tensor> { let residual = xs; let xs = self .ls1 .forward(&self.attn.forward(&self.norm1.forward(xs)?)?)?; let xs = (xs + residual)?; let residual = &xs; let xs = self .ls2 .forward(&self.mlp.forward(&self.norm2.forward(&xs)?)?)?; xs + residual } } #[derive(Debug)] struct PatchEmbed { proj: candle_nn::Conv2d, patch_size: (usize, usize), } impl PatchEmbed { fn new(vb: VarBuilder, patch_size: usize, in_chans: usize, embed_dim: usize) -> Result<Self> { let config = candle_nn::Conv2dConfig { stride: patch_size, ..Default::default() }; let proj = candle_nn::conv2d(in_chans, embed_dim, patch_size, config, vb.pp("proj"))?; Ok(Self { proj, patch_size: (patch_size, patch_size), }) } } impl Module for PatchEmbed { fn forward(&self, xs: &Tensor) -> Result<Tensor> { let (_b, _c, h, w) = xs.dims4()?; let (patch_h, patch_w) = self.patch_size; if (h % patch_h) != 0 { candle::bail!("image height {h} is not a multiple of patch height {patch_h}") } if (w % patch_w) != 0 { candle::bail!("image width {w} is not a multiple of patch width {patch_w}") } let xs = self.proj.forward(xs)?; let (b, c, h, w) = xs.dims4()?; // flatten embeddings. xs.reshape((b, c, h * w))?.transpose(1, 2) } } #[derive(Debug)] pub struct BeitVisionTransformer { patch_embed: PatchEmbed, cls_token: Tensor, blocks: Vec<Block>, norm: LayerNorm, head: Linear, } impl BeitVisionTransformer { pub fn new(vb: VarBuilder, depth: usize, embed_dim: usize, num_heads: usize) -> Result<Self> { let patch_embed = PatchEmbed::new(vb.pp("patch_embed"), PATCH_SIZE, 3, embed_dim)?; let cls_token = vb.get((1, 1, embed_dim), "cls_token")?; let head = linear(vb.pp("head"), embed_dim, NUM_CLASSES, true)?; let norm = layer_norm(embed_dim, 1e-6, vb.pp("norm"))?; let vb_b = vb.pp("blocks"); let blocks = (0..depth) .map(|i| Block::new(vb_b.pp(i.to_string()), embed_dim, num_heads)) .collect::<Result<Vec<_>>>()?; Ok(Self { patch_embed, cls_token, blocks, norm, head, }) } fn prepare_tokens_with_mask(&self, xs: &Tensor) -> Result<Tensor> { let xs = self.patch_embed.forward(xs)?; Tensor::cat(&[&self.cls_token, &xs], 1) } fn get_intermediate_layers_not_chunked( &self, xs: &Tensor, blocks_to_take: &[usize], ) -> Result<Vec<Tensor>> { let mut xs = self.prepare_tokens_with_mask(xs)?; let mut output = Vec::new(); for (i, blk) in self.blocks.iter().enumerate() { xs = blk.forward(&xs)?; if blocks_to_take.contains(&i) { output.push(xs.clone()); } } if output.len() != blocks_to_take.len() { candle::bail!( "only {} / {} blocks found", output.len(), blocks_to_take.len() ); } Ok(output) } pub fn get_intermediate_layers( &self, xs: &Tensor, blocks_to_take: &[usize], reshape: bool, return_class_token: bool, norm: bool, ) -> Result<Tensor> { let outputs = self.get_intermediate_layers_not_chunked(xs, blocks_to_take)?; let outputs = if norm { outputs .iter() .map(|out| self.norm.forward(out)) .collect::<Result<Vec<_>>>()? } else { outputs }; let class_tokens = outputs .iter() .map(|out| out.i((.., 0))) .collect::<Result<Vec<_>>>()?; let outputs = outputs .iter() .map(|out| out.i((.., 1..))) .collect::<Result<Vec<_>>>()?; let outputs = if reshape { let (b, _c, w, h) = xs.dims4()?; let patch_size = self.patch_embed.patch_size.0; let num_channels = outputs[0].elem_count() / (b * (w / patch_size) * (h / patch_size)); outputs .iter() .map(|out| { out.reshape((b, w / patch_size, h / patch_size, num_channels))? .transpose(2, 3)? .transpose(1, 2) }) .collect::<Result<Vec<_>>>()? } else { outputs }; let outputs = if return_class_token { outputs .iter() .zip(class_tokens.iter()) .map(|(out, class_token)| Tensor::cat(&[out, class_token], D::Minus1)) .collect::<Result<Vec<_>>>()? } else { outputs }; Tensor::stack(&outputs[..], 0) } } impl Module for BeitVisionTransformer { fn forward(&self, xs: &Tensor) -> Result<Tensor> { let mut xs = self.prepare_tokens_with_mask(xs)?; for blk in self.blocks.iter() { xs = blk.forward(&xs)? } let xs_moy_local_tokens = xs.i((.., 1..))?.mean(1)?; let xs_norm = self.norm.forward(&xs_moy_local_tokens)?; self.head.forward(&xs_norm) } } pub fn vit_base(vb: VarBuilder) -> Result<BeitVisionTransformer> { BeitVisionTransformer::new(vb, 12, 768, 12) } pub fn vit_large(vb: VarBuilder) -> Result<BeitVisionTransformer> { BeitVisionTransformer::new(vb, 24, 1024, 16) }
candle/candle-transformers/src/models/beit.rs/0
{ "file_path": "candle/candle-transformers/src/models/beit.rs", "repo_id": "candle", "token_count": 7083 }
54
//! Implementation of the Conversational Speech Model (CSM) from Sesame //! //! See: [CSM](Conversational Speech Model) //! /// CSM (Conversational Speech Model) is a speech generation model from Sesame that generates RVQ /// audio codes from text and audio inputs. The model architecture employs a Llama backbone and a /// smaller audio decoder that produces Mimi audio codes. /// use crate::generation::LogitsProcessor; use candle::{DType, Device, IndexOp, Module, Result, Tensor, D}; use candle_nn::{embedding, linear_b, Embedding, Linear, RmsNorm, VarBuilder}; use std::sync::Arc; #[derive(serde::Deserialize, Debug, Clone, Copy, PartialEq, Eq)] pub enum Flavor { #[serde(rename = "llama-1B")] Llama1B, #[serde(rename = "llama-100M")] Llama100M, } #[derive(serde::Deserialize, Debug, Clone)] pub struct Config { pub audio_num_codebooks: usize, pub audio_vocab_size: usize, pub backbone_flavor: Flavor, pub decoder_flavor: Flavor, pub text_vocab_size: usize, } #[allow(unused)] #[derive(Debug, Clone)] pub struct LlamaConfig { vocab_size: usize, num_layers: usize, num_heads: usize, num_kv_heads: usize, embed_dim: usize, max_seq_len: usize, intermediate_dim: usize, norm_eps: f64, rope_base: f32, scale_factor: usize, } impl LlamaConfig { pub fn from_flavor(flavor: Flavor) -> Self { match flavor { Flavor::Llama1B => Self { vocab_size: 128256, num_layers: 16, num_heads: 32, num_kv_heads: 8, embed_dim: 2048, max_seq_len: 2048, intermediate_dim: 8192, norm_eps: 1e-5, rope_base: 500_000., scale_factor: 32, }, Flavor::Llama100M => Self { vocab_size: 128256, num_layers: 4, num_heads: 8, num_kv_heads: 2, embed_dim: 1024, max_seq_len: 2048, intermediate_dim: 8192, norm_eps: 1e-5, rope_base: 500_000., scale_factor: 32, }, } } } #[derive(Debug, Clone)] struct RotaryEmbedding { sin: Tensor, cos: Tensor, } fn calculate_default_inv_freq(cfg: &LlamaConfig) -> Vec<f32> { let head_dim = cfg.embed_dim / cfg.num_heads; (0..head_dim) .step_by(2) .map(|i| 1f32 / cfg.rope_base.powf(i as f32 / head_dim as f32)) .collect() } impl RotaryEmbedding { fn new(dtype: DType, cfg: &LlamaConfig, dev: &Device) -> Result<Self> { let low_freq_factor = 1.0; let high_freq_factor = 4.0; let original_max_position_embeddings = 8192; let scale_factor = cfg.scale_factor as f32; let theta = { let low_freq_wavelen = original_max_position_embeddings as f32 / low_freq_factor; let high_freq_wavelen = original_max_position_embeddings as f32 / high_freq_factor; calculate_default_inv_freq(cfg) .into_iter() .map(|freq| { let wavelen = 2. * std::f32::consts::PI / freq; if wavelen < high_freq_wavelen { freq } else if wavelen > low_freq_wavelen { freq / scale_factor } else { let smooth = (original_max_position_embeddings as f32 / wavelen - low_freq_factor) / (high_freq_factor - low_freq_factor); (1. - smooth) * freq / scale_factor + smooth * freq } }) .collect::<Vec<_>>() }; let theta = Tensor::new(theta, dev)?; let idx_theta = Tensor::arange(0, cfg.max_seq_len as u32, dev)? .to_dtype(DType::F32)? .reshape((cfg.max_seq_len, 1))? .matmul(&theta.reshape((1, theta.elem_count()))?)?; // This is different from the paper, see: // https://github.com/huggingface/transformers/blob/6112b1c6442aaf7affd2b0676a1cd4eee30c45cf/src/transformers/models/llama/modeling_llama.py#L112 let cos = idx_theta.cos()?.to_dtype(dtype)?; let sin = idx_theta.sin()?.to_dtype(dtype)?; Ok(Self { cos, sin }) } fn apply_rotary_emb_qkv( &self, q: &Tensor, k: &Tensor, seqlen_offset: usize, ) -> Result<(Tensor, Tensor)> { let (_b_sz, _h, seq_len, _n_embd) = q.dims4()?; let cos = self.cos.narrow(0, seqlen_offset, seq_len)?; let sin = self.sin.narrow(0, seqlen_offset, seq_len)?; let q_embed = candle_nn::rotary_emb::rope_i(q, &cos, &sin)?; let k_embed = candle_nn::rotary_emb::rope_i(k, &cos, &sin)?; Ok((q_embed, k_embed)) } } fn rms_norm(hidden_size: usize, eps: f64, vb: VarBuilder) -> Result<RmsNorm> { let weight = vb.get((hidden_size,), "scale")?; Ok(RmsNorm::new(weight, eps)) } #[derive(Debug, Clone)] struct Attention { q_proj: Linear, k_proj: Linear, v_proj: Linear, o_proj: Linear, rotary_emb: Arc<RotaryEmbedding>, kv_cache: Option<(Tensor, Tensor)>, num_heads: usize, head_dim: usize, num_kv_heads: usize, num_kv_groups: usize, } impl Attention { fn new(cfg: &LlamaConfig, rotary_emb: Arc<RotaryEmbedding>, vb: VarBuilder) -> Result<Self> { let head_dim = cfg.embed_dim / cfg.num_heads; let kv_dim = cfg.num_kv_heads * head_dim; let q_proj = linear_b(cfg.embed_dim, cfg.embed_dim, false, vb.pp("q_proj"))?; let k_proj = linear_b(cfg.embed_dim, kv_dim, false, vb.pp("k_proj"))?; let v_proj = linear_b(cfg.embed_dim, kv_dim, false, vb.pp("v_proj"))?; let o_proj = linear_b(cfg.embed_dim, cfg.embed_dim, false, vb.pp("output_proj"))?; Ok(Self { q_proj, k_proj, v_proj, o_proj, rotary_emb, kv_cache: None, num_heads: cfg.num_heads, num_kv_heads: cfg.num_kv_heads, num_kv_groups: cfg.num_heads / cfg.num_kv_heads, head_dim, }) } fn forward( &mut self, xs: &Tensor, attention_mask: Option<&Tensor>, seqlen_offset: usize, ) -> Result<Tensor> { let (b_sz, q_len, _) = xs.dims3()?; let query_states = self.q_proj.forward(xs)?; let key_states = self.k_proj.forward(xs)?; let value_states = self.v_proj.forward(xs)?; let query_states = query_states .reshape((b_sz, q_len, self.num_heads, self.head_dim))? .transpose(1, 2)? .contiguous()?; let key_states = key_states .reshape((b_sz, q_len, self.num_kv_heads, self.head_dim))? .transpose(1, 2)? .contiguous()?; let value_states = value_states .reshape((b_sz, q_len, self.num_kv_heads, self.head_dim))? .transpose(1, 2)? .contiguous()?; let (query_states, key_states) = self.rotary_emb .apply_rotary_emb_qkv(&query_states, &key_states, seqlen_offset)?; let (key_states, value_states) = match &self.kv_cache { None => (key_states, value_states), Some((prev_k, prev_v)) => { let key_states = Tensor::cat(&[prev_k, &key_states], 2)?; let value_states = Tensor::cat(&[prev_v, &value_states], 2)?; (key_states, value_states) } }; self.kv_cache = Some((key_states.clone(), value_states.clone())); let key_states = crate::utils::repeat_kv(key_states, self.num_kv_groups)?; let value_states = crate::utils::repeat_kv(value_states, self.num_kv_groups)?; let attn_output = { let scale = 1f64 / f64::sqrt(self.head_dim as f64); let attn_weights = (query_states.matmul(&key_states.transpose(2, 3)?)? * scale)?; let attn_weights = match attention_mask { None => attn_weights, Some(mask) => attn_weights.broadcast_add(mask)?, }; let attn_weights = candle_nn::ops::softmax_last_dim(&attn_weights)?; attn_weights.matmul(&value_states)? }; attn_output .transpose(1, 2)? .reshape((b_sz, q_len, self.num_heads * self.head_dim))? .apply(&self.o_proj) } fn clear_kv_cache(&mut self) { self.kv_cache = None } } #[derive(Debug, Clone)] struct Mlp { w1: Linear, w2: Linear, w3: Linear, } impl Mlp { fn new(cfg: &LlamaConfig, vb: VarBuilder) -> Result<Self> { let w1 = linear_b(cfg.embed_dim, cfg.intermediate_dim, false, vb.pp("w1"))?; let w2 = linear_b(cfg.intermediate_dim, cfg.embed_dim, false, vb.pp("w2"))?; let w3 = linear_b(cfg.embed_dim, cfg.intermediate_dim, false, vb.pp("w3"))?; Ok(Self { w1, w2, w3 }) } } impl Module for Mlp { fn forward(&self, xs: &Tensor) -> Result<Tensor> { let lhs = xs.apply(&self.w1)?.silu()?; let rhs = xs.apply(&self.w3)?; (lhs * rhs)?.apply(&self.w2) } } #[derive(Debug, Clone)] struct Layer { mlp_norm: RmsNorm, sa_norm: RmsNorm, attn: Attention, mlp: Mlp, } impl Layer { fn new(cfg: &LlamaConfig, rotary_emb: Arc<RotaryEmbedding>, vb: VarBuilder) -> Result<Self> { let mlp_norm = rms_norm(cfg.embed_dim, cfg.norm_eps, vb.pp("mlp_norm"))?; let sa_norm = rms_norm(cfg.embed_dim, cfg.norm_eps, vb.pp("sa_norm"))?; let attn = Attention::new(cfg, rotary_emb, vb.pp("attn"))?; let mlp = Mlp::new(cfg, vb.pp("mlp"))?; Ok(Self { mlp_norm, sa_norm, attn, mlp, }) } fn forward( &mut self, xs: &Tensor, attention_mask: Option<&Tensor>, seqlen_offset: usize, ) -> Result<Tensor> { let residual = xs; let xs = self.sa_norm.forward(xs)?; let xs = self.attn.forward(&xs, attention_mask, seqlen_offset)?; let xs = (xs + residual)?; let residual = &xs; let xs = xs.apply(&self.mlp_norm)?.apply(&self.mlp)?; residual + xs } fn clear_kv_cache(&mut self) { self.attn.clear_kv_cache() } } #[derive(Debug, Clone)] pub struct LlamaModel { layers: Vec<Layer>, norm: RmsNorm, device: Device, dtype: DType, } impl LlamaModel { pub fn new(cfg: &LlamaConfig, vb: VarBuilder) -> Result<Self> { let rotary_emb = Arc::new(RotaryEmbedding::new(vb.dtype(), cfg, vb.device())?); let mut layers = Vec::with_capacity(cfg.num_layers); let vb_l = vb.pp("layers"); for layer_idx in 0..cfg.num_layers { let layer = Layer::new(cfg, rotary_emb.clone(), vb_l.pp(layer_idx))?; layers.push(layer); } let norm = rms_norm(cfg.embed_dim, cfg.norm_eps, vb.pp("norm"))?; Ok(Self { layers, norm, device: vb.device().clone(), dtype: vb.dtype(), }) } pub fn clear_kv_cache(&mut self) { for layer in self.layers.iter_mut() { layer.clear_kv_cache() } } fn prepare_decoder_attention_mask( &self, tgt_len: usize, seqlen_offset: usize, ) -> Result<Tensor> { let mask: Vec<_> = (0..tgt_len) .flat_map(|i| (0..tgt_len).map(move |j| if i < j { f32::NEG_INFINITY } else { 0. })) .collect(); let mask = Tensor::from_slice(&mask, (tgt_len, tgt_len), &self.device)?; let mask = if seqlen_offset > 0 { let mask0 = Tensor::zeros((tgt_len, seqlen_offset), DType::F32, &self.device)?; Tensor::cat(&[&mask0, &mask], D::Minus1)? } else { mask }; mask.expand((1, 1, tgt_len, tgt_len + seqlen_offset))? .to_dtype(self.dtype) } pub fn forward(&mut self, xs: &Tensor, seqlen_offset: usize) -> Result<Tensor> { let (_b_size, seq_len, _embed_dim) = xs.dims3()?; let attention_mask = if seq_len <= 1 { None } else { let mask = self.prepare_decoder_attention_mask(seq_len, seqlen_offset)?; Some(mask) }; let mut xs = xs.clone(); for layer in self.layers.iter_mut() { xs = layer.forward(&xs, attention_mask.as_ref(), seqlen_offset)?; } let ys = xs.narrow(1, seq_len - 1, 1)?.apply(&self.norm)?; Ok(ys) } } #[derive(Debug, Clone)] pub struct Model { backbone: LlamaModel, decoder: LlamaModel, codebook0_head: Linear, audio_embeddings: Embedding, text_embeddings: Embedding, projection: Linear, audio_head: Tensor, config: Config, } impl Model { pub fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let backbone_cfg = LlamaConfig::from_flavor(cfg.backbone_flavor); let backbone = LlamaModel::new(&backbone_cfg, vb.pp("backbone"))?; let decoder_cfg = LlamaConfig::from_flavor(cfg.decoder_flavor); let decoder = LlamaModel::new(&decoder_cfg, vb.pp("decoder"))?; let backbone_dim = backbone_cfg.embed_dim; let decoder_dim = decoder_cfg.embed_dim; let audio_embeddings = embedding( cfg.audio_vocab_size * cfg.audio_num_codebooks, backbone_dim, vb.pp("audio_embeddings"), )?; let text_embeddings = embedding(cfg.text_vocab_size, backbone_dim, vb.pp("text_embeddings"))?; let projection = linear_b(backbone_dim, decoder_dim, false, vb.pp("projection"))?; let codebook0_head = linear_b( backbone_dim, cfg.audio_vocab_size, false, vb.pp("codebook0_head"), )?; let audio_head = vb.get( ( cfg.audio_num_codebooks - 1, decoder_dim, cfg.audio_vocab_size, ), "audio_head", )?; Ok(Self { backbone, decoder, codebook0_head, audio_embeddings, text_embeddings, projection, audio_head, config: cfg.clone(), }) } pub fn clear_kv_cache(&mut self) { self.backbone.clear_kv_cache(); self.decoder.clear_kv_cache(); } pub fn generate_frame( &mut self, tokens: &Tensor, tokens_mask: &Tensor, input_pos: usize, lp: &mut LogitsProcessor, ) -> Result<Vec<u32>> { let (b_sz, seq_len, _cb_plus_one) = tokens.dims3()?; let audio_tokens = tokens.narrow(2, 0, self.config.audio_num_codebooks)?; let text_tokens = tokens.narrow(2, self.config.audio_num_codebooks, 1)?; let text_embeds = self.text_embeddings.forward(&text_tokens)?; let arange = (Tensor::arange( 0u32, self.config.audio_num_codebooks as u32, &self.decoder.device, )? * self.config.audio_vocab_size as f64)?; let audio_tokens = audio_tokens.broadcast_add(&arange.reshape((1, 1, ()))?)?; let audio_embeds = self.audio_embeddings.forward(&audio_tokens)?.reshape(( b_sz, seq_len, self.config.audio_num_codebooks, (), ))?; let embeds = Tensor::cat(&[&audio_embeds, &text_embeds], D::Minus2)?; let embeds = embeds.broadcast_mul( &tokens_mask .to_dtype(self.backbone.dtype)? .unsqueeze(D::Minus1)?, )?; let embeds = embeds.sum(2)?; let h = self.backbone.forward(&embeds, input_pos)?; let c0_logits = h.apply(&self.codebook0_head)?; let c0_sample = lp.sample(&c0_logits.i((0, 0))?)?; let mut all_samples = vec![c0_sample]; let c0_sample = Tensor::from_slice(&[c0_sample], (1, 1), &self.decoder.device)?; let c0_embed = self.audio_embeddings.forward(&c0_sample)?; let mut curr_h = Tensor::cat(&[h, c0_embed], 1)?; self.decoder.clear_kv_cache(); let mut decoder_pos = 0; for i in 1..self.config.audio_num_codebooks { let proj_h = curr_h.apply(&self.projection)?; let decoder_h = self.decoder.forward(&proj_h, decoder_pos)?; decoder_pos += curr_h.dim(1)?; let ci_logits = decoder_h.broadcast_matmul(&self.audio_head.get(i - 1)?)?; let ci_sample = lp.sample(&ci_logits.i((0, 0))?)?; all_samples.push(ci_sample); let ci_sample = Tensor::from_slice( &[ci_sample + (i * self.config.audio_vocab_size) as u32], (1, 1), &self.decoder.device, )?; let ci_embed = self.audio_embeddings.forward(&ci_sample)?; curr_h = ci_embed } Ok(all_samples) } pub fn audio_tokens_and_mask(&self, mut frame: Vec<u32>) -> Result<(Tensor, Tensor)> { let cb = self.config.audio_num_codebooks; let device = &self.backbone.device; let mut mask = vec![1u8; cb]; mask.push(0); let mask = Tensor::from_vec(mask, (1, 1, cb + 1), device)?; frame.push(0); let tokens = Tensor::from_vec(frame, (1, 1, cb + 1), device)?; Ok((tokens, mask)) } pub fn text_tokens_and_mask(&self, ids: &[u32]) -> Result<(Tensor, Tensor)> { let cb = self.config.audio_num_codebooks; let device = &self.backbone.device; let mut tokens = vec![]; let mut mask = vec![]; for &v in ids.iter() { let mut token = vec![0; cb]; token.push(v); let token = Tensor::from_vec(token, (1, 1, cb + 1), device)?; tokens.push(token); let mut m = vec![0u8; cb]; m.push(1); let m = Tensor::from_vec(m, (1, 1, cb + 1), device)?; mask.push(m); } let tokens = Tensor::cat(&tokens, 1)?; let mask = Tensor::cat(&mask, 1)?; Ok((tokens, mask)) } }
candle/candle-transformers/src/models/csm.rs/0
{ "file_path": "candle/candle-transformers/src/models/csm.rs", "repo_id": "candle", "token_count": 9633 }
55
use candle::{DType, IndexOp, Result, Tensor, D}; use candle_nn::{LayerNorm, Linear, RmsNorm, VarBuilder}; // https://github.com/black-forest-labs/flux/blob/727e3a71faf37390f318cf9434f0939653302b60/src/flux/model.py#L12 #[derive(Debug, Clone)] pub struct Config { pub in_channels: usize, pub vec_in_dim: usize, pub context_in_dim: usize, pub hidden_size: usize, pub mlp_ratio: f64, pub num_heads: usize, pub depth: usize, pub depth_single_blocks: usize, pub axes_dim: Vec<usize>, pub theta: usize, pub qkv_bias: bool, pub guidance_embed: bool, } impl Config { // https://github.com/black-forest-labs/flux/blob/727e3a71faf37390f318cf9434f0939653302b60/src/flux/util.py#L32 pub fn dev() -> Self { Self { in_channels: 64, vec_in_dim: 768, context_in_dim: 4096, hidden_size: 3072, mlp_ratio: 4.0, num_heads: 24, depth: 19, depth_single_blocks: 38, axes_dim: vec![16, 56, 56], theta: 10_000, qkv_bias: true, guidance_embed: true, } } // https://github.com/black-forest-labs/flux/blob/727e3a71faf37390f318cf9434f0939653302b60/src/flux/util.py#L64 pub fn schnell() -> Self { Self { in_channels: 64, vec_in_dim: 768, context_in_dim: 4096, hidden_size: 3072, mlp_ratio: 4.0, num_heads: 24, depth: 19, depth_single_blocks: 38, axes_dim: vec![16, 56, 56], theta: 10_000, qkv_bias: true, guidance_embed: false, } } } fn layer_norm(dim: usize, vb: VarBuilder) -> Result<LayerNorm> { let ws = Tensor::ones(dim, vb.dtype(), vb.device())?; Ok(LayerNorm::new_no_bias(ws, 1e-6)) } fn scaled_dot_product_attention(q: &Tensor, k: &Tensor, v: &Tensor) -> Result<Tensor> { let dim = q.dim(D::Minus1)?; let scale_factor = 1.0 / (dim as f64).sqrt(); let mut batch_dims = q.dims().to_vec(); batch_dims.pop(); batch_dims.pop(); let q = q.flatten_to(batch_dims.len() - 1)?; let k = k.flatten_to(batch_dims.len() - 1)?; let v = v.flatten_to(batch_dims.len() - 1)?; let attn_weights = (q.matmul(&k.t()?)? * scale_factor)?; let attn_scores = candle_nn::ops::softmax_last_dim(&attn_weights)?.matmul(&v)?; batch_dims.push(attn_scores.dim(D::Minus2)?); batch_dims.push(attn_scores.dim(D::Minus1)?); attn_scores.reshape(batch_dims) } fn rope(pos: &Tensor, dim: usize, theta: usize) -> Result<Tensor> { if dim % 2 == 1 { candle::bail!("dim {dim} is odd") } let dev = pos.device(); let theta = theta as f64; let inv_freq: Vec<_> = (0..dim) .step_by(2) .map(|i| 1f32 / theta.powf(i as f64 / dim as f64) as f32) .collect(); let inv_freq_len = inv_freq.len(); let inv_freq = Tensor::from_vec(inv_freq, (1, 1, inv_freq_len), dev)?; let inv_freq = inv_freq.to_dtype(pos.dtype())?; let freqs = pos.unsqueeze(2)?.broadcast_mul(&inv_freq)?; let cos = freqs.cos()?; let sin = freqs.sin()?; let out = Tensor::stack(&[&cos, &sin.neg()?, &sin, &cos], 3)?; let (b, n, d, _ij) = out.dims4()?; out.reshape((b, n, d, 2, 2)) } fn apply_rope(x: &Tensor, freq_cis: &Tensor) -> Result<Tensor> { let dims = x.dims(); let (b_sz, n_head, seq_len, n_embd) = x.dims4()?; let x = x.reshape((b_sz, n_head, seq_len, n_embd / 2, 2))?; let x0 = x.narrow(D::Minus1, 0, 1)?; let x1 = x.narrow(D::Minus1, 1, 1)?; let fr0 = freq_cis.get_on_dim(D::Minus1, 0)?; let fr1 = freq_cis.get_on_dim(D::Minus1, 1)?; (fr0.broadcast_mul(&x0)? + fr1.broadcast_mul(&x1)?)?.reshape(dims.to_vec()) } pub(crate) fn attention(q: &Tensor, k: &Tensor, v: &Tensor, pe: &Tensor) -> Result<Tensor> { let q = apply_rope(q, pe)?.contiguous()?; let k = apply_rope(k, pe)?.contiguous()?; let x = scaled_dot_product_attention(&q, &k, v)?; x.transpose(1, 2)?.flatten_from(2) } pub(crate) fn timestep_embedding(t: &Tensor, dim: usize, dtype: DType) -> Result<Tensor> { const TIME_FACTOR: f64 = 1000.; const MAX_PERIOD: f64 = 10000.; if dim % 2 == 1 { candle::bail!("{dim} is odd") } let dev = t.device(); let half = dim / 2; let t = (t * TIME_FACTOR)?; let arange = Tensor::arange(0, half as u32, dev)?.to_dtype(candle::DType::F32)?; let freqs = (arange * (-MAX_PERIOD.ln() / half as f64))?.exp()?; let args = t .unsqueeze(1)? .to_dtype(candle::DType::F32)? .broadcast_mul(&freqs.unsqueeze(0)?)?; let emb = Tensor::cat(&[args.cos()?, args.sin()?], D::Minus1)?.to_dtype(dtype)?; Ok(emb) } #[derive(Debug, Clone)] pub struct EmbedNd { #[allow(unused)] dim: usize, theta: usize, axes_dim: Vec<usize>, } impl EmbedNd { pub fn new(dim: usize, theta: usize, axes_dim: Vec<usize>) -> Self { Self { dim, theta, axes_dim, } } } impl candle::Module for EmbedNd { fn forward(&self, ids: &Tensor) -> Result<Tensor> { let n_axes = ids.dim(D::Minus1)?; let mut emb = Vec::with_capacity(n_axes); for idx in 0..n_axes { let r = rope( &ids.get_on_dim(D::Minus1, idx)?, self.axes_dim[idx], self.theta, )?; emb.push(r) } let emb = Tensor::cat(&emb, 2)?; emb.unsqueeze(1) } } #[derive(Debug, Clone)] pub struct MlpEmbedder { in_layer: Linear, out_layer: Linear, } impl MlpEmbedder { fn new(in_sz: usize, h_sz: usize, vb: VarBuilder) -> Result<Self> { let in_layer = candle_nn::linear(in_sz, h_sz, vb.pp("in_layer"))?; let out_layer = candle_nn::linear(h_sz, h_sz, vb.pp("out_layer"))?; Ok(Self { in_layer, out_layer, }) } } impl candle::Module for MlpEmbedder { fn forward(&self, xs: &Tensor) -> Result<Tensor> { xs.apply(&self.in_layer)?.silu()?.apply(&self.out_layer) } } #[derive(Debug, Clone)] pub struct QkNorm { query_norm: RmsNorm, key_norm: RmsNorm, } impl QkNorm { fn new(dim: usize, vb: VarBuilder) -> Result<Self> { let query_norm = vb.get(dim, "query_norm.scale")?; let query_norm = RmsNorm::new(query_norm, 1e-6); let key_norm = vb.get(dim, "key_norm.scale")?; let key_norm = RmsNorm::new(key_norm, 1e-6); Ok(Self { query_norm, key_norm, }) } } struct ModulationOut { shift: Tensor, scale: Tensor, gate: Tensor, } impl ModulationOut { fn scale_shift(&self, xs: &Tensor) -> Result<Tensor> { xs.broadcast_mul(&(&self.scale + 1.)?)? .broadcast_add(&self.shift) } fn gate(&self, xs: &Tensor) -> Result<Tensor> { self.gate.broadcast_mul(xs) } } #[derive(Debug, Clone)] struct Modulation1 { lin: Linear, } impl Modulation1 { fn new(dim: usize, vb: VarBuilder) -> Result<Self> { let lin = candle_nn::linear(dim, 3 * dim, vb.pp("lin"))?; Ok(Self { lin }) } fn forward(&self, vec_: &Tensor) -> Result<ModulationOut> { let ys = vec_ .silu()? .apply(&self.lin)? .unsqueeze(1)? .chunk(3, D::Minus1)?; if ys.len() != 3 { candle::bail!("unexpected len from chunk {ys:?}") } Ok(ModulationOut { shift: ys[0].clone(), scale: ys[1].clone(), gate: ys[2].clone(), }) } } #[derive(Debug, Clone)] struct Modulation2 { lin: Linear, } impl Modulation2 { fn new(dim: usize, vb: VarBuilder) -> Result<Self> { let lin = candle_nn::linear(dim, 6 * dim, vb.pp("lin"))?; Ok(Self { lin }) } fn forward(&self, vec_: &Tensor) -> Result<(ModulationOut, ModulationOut)> { let ys = vec_ .silu()? .apply(&self.lin)? .unsqueeze(1)? .chunk(6, D::Minus1)?; if ys.len() != 6 { candle::bail!("unexpected len from chunk {ys:?}") } let mod1 = ModulationOut { shift: ys[0].clone(), scale: ys[1].clone(), gate: ys[2].clone(), }; let mod2 = ModulationOut { shift: ys[3].clone(), scale: ys[4].clone(), gate: ys[5].clone(), }; Ok((mod1, mod2)) } } #[derive(Debug, Clone)] pub struct SelfAttention { qkv: Linear, norm: QkNorm, proj: Linear, num_heads: usize, } impl SelfAttention { fn new(dim: usize, num_heads: usize, qkv_bias: bool, vb: VarBuilder) -> Result<Self> { let head_dim = dim / num_heads; let qkv = candle_nn::linear_b(dim, dim * 3, qkv_bias, vb.pp("qkv"))?; let norm = QkNorm::new(head_dim, vb.pp("norm"))?; let proj = candle_nn::linear(dim, dim, vb.pp("proj"))?; Ok(Self { qkv, norm, proj, num_heads, }) } fn qkv(&self, xs: &Tensor) -> Result<(Tensor, Tensor, Tensor)> { let qkv = xs.apply(&self.qkv)?; let (b, l, _khd) = qkv.dims3()?; let qkv = qkv.reshape((b, l, 3, self.num_heads, ()))?; let q = qkv.i((.., .., 0))?.transpose(1, 2)?; let k = qkv.i((.., .., 1))?.transpose(1, 2)?; let v = qkv.i((.., .., 2))?.transpose(1, 2)?; let q = q.apply(&self.norm.query_norm)?; let k = k.apply(&self.norm.key_norm)?; Ok((q, k, v)) } #[allow(unused)] fn forward(&self, xs: &Tensor, pe: &Tensor) -> Result<Tensor> { let (q, k, v) = self.qkv(xs)?; attention(&q, &k, &v, pe)?.apply(&self.proj) } } #[derive(Debug, Clone)] struct Mlp { lin1: Linear, lin2: Linear, } impl Mlp { fn new(in_sz: usize, mlp_sz: usize, vb: VarBuilder) -> Result<Self> { let lin1 = candle_nn::linear(in_sz, mlp_sz, vb.pp("0"))?; let lin2 = candle_nn::linear(mlp_sz, in_sz, vb.pp("2"))?; Ok(Self { lin1, lin2 }) } } impl candle::Module for Mlp { fn forward(&self, xs: &Tensor) -> Result<Tensor> { xs.apply(&self.lin1)?.gelu()?.apply(&self.lin2) } } #[derive(Debug, Clone)] pub struct DoubleStreamBlock { img_mod: Modulation2, img_norm1: LayerNorm, img_attn: SelfAttention, img_norm2: LayerNorm, img_mlp: Mlp, txt_mod: Modulation2, txt_norm1: LayerNorm, txt_attn: SelfAttention, txt_norm2: LayerNorm, txt_mlp: Mlp, } impl DoubleStreamBlock { fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let h_sz = cfg.hidden_size; let mlp_sz = (h_sz as f64 * cfg.mlp_ratio) as usize; let img_mod = Modulation2::new(h_sz, vb.pp("img_mod"))?; let img_norm1 = layer_norm(h_sz, vb.pp("img_norm1"))?; let img_attn = SelfAttention::new(h_sz, cfg.num_heads, cfg.qkv_bias, vb.pp("img_attn"))?; let img_norm2 = layer_norm(h_sz, vb.pp("img_norm2"))?; let img_mlp = Mlp::new(h_sz, mlp_sz, vb.pp("img_mlp"))?; let txt_mod = Modulation2::new(h_sz, vb.pp("txt_mod"))?; let txt_norm1 = layer_norm(h_sz, vb.pp("txt_norm1"))?; let txt_attn = SelfAttention::new(h_sz, cfg.num_heads, cfg.qkv_bias, vb.pp("txt_attn"))?; let txt_norm2 = layer_norm(h_sz, vb.pp("txt_norm2"))?; let txt_mlp = Mlp::new(h_sz, mlp_sz, vb.pp("txt_mlp"))?; Ok(Self { img_mod, img_norm1, img_attn, img_norm2, img_mlp, txt_mod, txt_norm1, txt_attn, txt_norm2, txt_mlp, }) } fn forward( &self, img: &Tensor, txt: &Tensor, vec_: &Tensor, pe: &Tensor, ) -> Result<(Tensor, Tensor)> { let (img_mod1, img_mod2) = self.img_mod.forward(vec_)?; // shift, scale, gate let (txt_mod1, txt_mod2) = self.txt_mod.forward(vec_)?; // shift, scale, gate let img_modulated = img.apply(&self.img_norm1)?; let img_modulated = img_mod1.scale_shift(&img_modulated)?; let (img_q, img_k, img_v) = self.img_attn.qkv(&img_modulated)?; let txt_modulated = txt.apply(&self.txt_norm1)?; let txt_modulated = txt_mod1.scale_shift(&txt_modulated)?; let (txt_q, txt_k, txt_v) = self.txt_attn.qkv(&txt_modulated)?; let q = Tensor::cat(&[txt_q, img_q], 2)?; let k = Tensor::cat(&[txt_k, img_k], 2)?; let v = Tensor::cat(&[txt_v, img_v], 2)?; let attn = attention(&q, &k, &v, pe)?; let txt_attn = attn.narrow(1, 0, txt.dim(1)?)?; let img_attn = attn.narrow(1, txt.dim(1)?, attn.dim(1)? - txt.dim(1)?)?; let img = (img + img_mod1.gate(&img_attn.apply(&self.img_attn.proj)?))?; let img = (&img + img_mod2.gate( &img_mod2 .scale_shift(&img.apply(&self.img_norm2)?)? .apply(&self.img_mlp)?, )?)?; let txt = (txt + txt_mod1.gate(&txt_attn.apply(&self.txt_attn.proj)?))?; let txt = (&txt + txt_mod2.gate( &txt_mod2 .scale_shift(&txt.apply(&self.txt_norm2)?)? .apply(&self.txt_mlp)?, )?)?; Ok((img, txt)) } } #[derive(Debug, Clone)] pub struct SingleStreamBlock { linear1: Linear, linear2: Linear, norm: QkNorm, pre_norm: LayerNorm, modulation: Modulation1, h_sz: usize, mlp_sz: usize, num_heads: usize, } impl SingleStreamBlock { fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let h_sz = cfg.hidden_size; let mlp_sz = (h_sz as f64 * cfg.mlp_ratio) as usize; let head_dim = h_sz / cfg.num_heads; let linear1 = candle_nn::linear(h_sz, h_sz * 3 + mlp_sz, vb.pp("linear1"))?; let linear2 = candle_nn::linear(h_sz + mlp_sz, h_sz, vb.pp("linear2"))?; let norm = QkNorm::new(head_dim, vb.pp("norm"))?; let pre_norm = layer_norm(h_sz, vb.pp("pre_norm"))?; let modulation = Modulation1::new(h_sz, vb.pp("modulation"))?; Ok(Self { linear1, linear2, norm, pre_norm, modulation, h_sz, mlp_sz, num_heads: cfg.num_heads, }) } fn forward(&self, xs: &Tensor, vec_: &Tensor, pe: &Tensor) -> Result<Tensor> { let mod_ = self.modulation.forward(vec_)?; let x_mod = mod_.scale_shift(&xs.apply(&self.pre_norm)?)?; let x_mod = x_mod.apply(&self.linear1)?; let qkv = x_mod.narrow(D::Minus1, 0, 3 * self.h_sz)?; let (b, l, _khd) = qkv.dims3()?; let qkv = qkv.reshape((b, l, 3, self.num_heads, ()))?; let q = qkv.i((.., .., 0))?.transpose(1, 2)?; let k = qkv.i((.., .., 1))?.transpose(1, 2)?; let v = qkv.i((.., .., 2))?.transpose(1, 2)?; let mlp = x_mod.narrow(D::Minus1, 3 * self.h_sz, self.mlp_sz)?; let q = q.apply(&self.norm.query_norm)?; let k = k.apply(&self.norm.key_norm)?; let attn = attention(&q, &k, &v, pe)?; let output = Tensor::cat(&[attn, mlp.gelu()?], 2)?.apply(&self.linear2)?; xs + mod_.gate(&output) } } #[derive(Debug, Clone)] pub struct LastLayer { norm_final: LayerNorm, linear: Linear, ada_ln_modulation: Linear, } impl LastLayer { fn new(h_sz: usize, p_sz: usize, out_c: usize, vb: VarBuilder) -> Result<Self> { let norm_final = layer_norm(h_sz, vb.pp("norm_final"))?; let linear = candle_nn::linear(h_sz, p_sz * p_sz * out_c, vb.pp("linear"))?; let ada_ln_modulation = candle_nn::linear(h_sz, 2 * h_sz, vb.pp("adaLN_modulation.1"))?; Ok(Self { norm_final, linear, ada_ln_modulation, }) } fn forward(&self, xs: &Tensor, vec: &Tensor) -> Result<Tensor> { let chunks = vec.silu()?.apply(&self.ada_ln_modulation)?.chunk(2, 1)?; let (shift, scale) = (&chunks[0], &chunks[1]); let xs = xs .apply(&self.norm_final)? .broadcast_mul(&(scale.unsqueeze(1)? + 1.0)?)? .broadcast_add(&shift.unsqueeze(1)?)?; xs.apply(&self.linear) } } #[derive(Debug, Clone)] pub struct Flux { img_in: Linear, txt_in: Linear, time_in: MlpEmbedder, vector_in: MlpEmbedder, guidance_in: Option<MlpEmbedder>, pe_embedder: EmbedNd, double_blocks: Vec<DoubleStreamBlock>, single_blocks: Vec<SingleStreamBlock>, final_layer: LastLayer, } impl Flux { pub fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let img_in = candle_nn::linear(cfg.in_channels, cfg.hidden_size, vb.pp("img_in"))?; let txt_in = candle_nn::linear(cfg.context_in_dim, cfg.hidden_size, vb.pp("txt_in"))?; let mut double_blocks = Vec::with_capacity(cfg.depth); let vb_d = vb.pp("double_blocks"); for idx in 0..cfg.depth { let db = DoubleStreamBlock::new(cfg, vb_d.pp(idx))?; double_blocks.push(db) } let mut single_blocks = Vec::with_capacity(cfg.depth_single_blocks); let vb_s = vb.pp("single_blocks"); for idx in 0..cfg.depth_single_blocks { let sb = SingleStreamBlock::new(cfg, vb_s.pp(idx))?; single_blocks.push(sb) } let time_in = MlpEmbedder::new(256, cfg.hidden_size, vb.pp("time_in"))?; let vector_in = MlpEmbedder::new(cfg.vec_in_dim, cfg.hidden_size, vb.pp("vector_in"))?; let guidance_in = if cfg.guidance_embed { let mlp = MlpEmbedder::new(256, cfg.hidden_size, vb.pp("guidance_in"))?; Some(mlp) } else { None }; let final_layer = LastLayer::new(cfg.hidden_size, 1, cfg.in_channels, vb.pp("final_layer"))?; let pe_dim = cfg.hidden_size / cfg.num_heads; let pe_embedder = EmbedNd::new(pe_dim, cfg.theta, cfg.axes_dim.to_vec()); Ok(Self { img_in, txt_in, time_in, vector_in, guidance_in, pe_embedder, double_blocks, single_blocks, final_layer, }) } } impl super::WithForward for Flux { #[allow(clippy::too_many_arguments)] fn forward( &self, img: &Tensor, img_ids: &Tensor, txt: &Tensor, txt_ids: &Tensor, timesteps: &Tensor, y: &Tensor, guidance: Option<&Tensor>, ) -> Result<Tensor> { if txt.rank() != 3 { candle::bail!("unexpected shape for txt {:?}", txt.shape()) } if img.rank() != 3 { candle::bail!("unexpected shape for img {:?}", img.shape()) } let dtype = img.dtype(); let pe = { let ids = Tensor::cat(&[txt_ids, img_ids], 1)?; ids.apply(&self.pe_embedder)? }; let mut txt = txt.apply(&self.txt_in)?; let mut img = img.apply(&self.img_in)?; let vec_ = timestep_embedding(timesteps, 256, dtype)?.apply(&self.time_in)?; let vec_ = match (self.guidance_in.as_ref(), guidance) { (Some(g_in), Some(guidance)) => { (vec_ + timestep_embedding(guidance, 256, dtype)?.apply(g_in))? } _ => vec_, }; let vec_ = (vec_ + y.apply(&self.vector_in))?; // Double blocks for block in self.double_blocks.iter() { (img, txt) = block.forward(&img, &txt, &vec_, &pe)? } // Single blocks let mut img = Tensor::cat(&[&txt, &img], 1)?; for block in self.single_blocks.iter() { img = block.forward(&img, &vec_, &pe)?; } let img = img.i((.., txt.dim(1)?..))?; self.final_layer.forward(&img, &vec_) } }
candle/candle-transformers/src/models/flux/model.rs/0
{ "file_path": "candle/candle-transformers/src/models/flux/model.rs", "repo_id": "candle", "token_count": 10740 }
56
//! The LLaVA (Large Language and Vision Assistant) model. //! //! This provides the main model implementation combining a vision tower (CLIP) with //! language model (Llama) for multimodal capabilities. The architecture implements the training-free projection technique. //! //! - 💻[GH Link](https://github.com/haotian-liu/LLaVA/tree/main) //! - 📝 [Paper](https://arxiv.org/abs/2304.08485)/ Visual Instruction Tuning //! pub mod config; pub mod utils; use crate::models::clip::vision_model::{ClipVisionConfig, ClipVisionTransformer}; use crate::models::llama::{Cache, Llama}; use crate::models::with_tracing::linear; use candle::{bail, Context, Device, IndexOp, Result, Tensor}; use candle_nn::{seq, Activation, Module, Sequential, VarBuilder}; use fancy_regex::Regex; use utils::get_anyres_image_grid_shape; use config::LLaVAConfig; fn mlp_gelu_match(mm_projector_type: &str) -> Option<usize> { let mlp_gelu_regex = Regex::new(r"^mlp(\d+)x_gelu$").unwrap(); if let Ok(Some(captures)) = mlp_gelu_regex.captures(mm_projector_type) { if let Some(match_str) = captures.get(1) { let match_str = match_str.as_str(); match_str.parse::<usize>().ok() } else { None } } else { None } } fn unpad_image(tensor: &Tensor, original_size: &(u32, u32)) -> Result<Tensor> { assert_eq!(tensor.dims().len(), 3); let (original_width, original_height) = *original_size; let tensor_dims = tensor.dims(); let current_height = tensor_dims[1]; let current_width = tensor_dims[2]; let original_aspect_ratio = (original_width as f32) / (original_height as f32); let current_aspect_ratio = (current_width as f32) / (current_height as f32); if original_aspect_ratio > current_aspect_ratio { let scale_factor = (current_width as f32) / (original_width as f32); let new_height = (original_height as f32 * scale_factor).floor() as usize; let padding = (current_height - new_height) / 2; tensor.i((.., padding..current_width - padding, ..)) } else { let scale_factor = (current_height as f32) / (original_height as f32); let new_width = (original_width as f32 * scale_factor).floor() as usize; let padding = (current_width - new_width) / 2; tensor.i((.., .., padding..current_width - padding)) } } pub struct IdentityMap {} impl Module for IdentityMap { fn forward(&self, x: &Tensor) -> Result<Tensor> { Ok(x.clone()) } } pub struct MMProjector { pub modules: Sequential, } impl MMProjector { pub fn load(vb: &VarBuilder, config: &LLaVAConfig) -> Result<Self> { if config.mm_projector_type == "linear" { let vb_prefix = if config.hf { "multi_modal_projector.linear_1" } else { "model.mm_projector.0" }; let linear = linear(config.mm_hidden_size, config.hidden_size, vb.pp(vb_prefix))?; let modules = seq().add(linear); Ok(Self { modules }) } else if let Some(mlp_depth) = mlp_gelu_match(&config.mm_projector_type) { let modules = if config.hf { let mut modules = seq().add(linear( config.mm_hidden_size, config.hidden_size, vb.pp("multi_modal_projector.linear_1"), )?); for i in 1..mlp_depth { modules = modules.add(Activation::Gelu).add(linear( config.hidden_size, config.hidden_size, vb.pp(format!("multi_modal_projector.linear_{}", i + 1)), )?); } modules } else { let mut modules = seq().add(linear( config.mm_hidden_size, config.hidden_size, vb.pp("model.mm_projector.0"), )?); for i in 1..mlp_depth { modules = modules.add(Activation::Gelu).add(linear( config.hidden_size, config.hidden_size, vb.pp(format!("model.mm_projector.{}", i * 2)), )?); } modules }; Ok(Self { modules }) } else if config.mm_projector_type == "identity" { Ok(Self { modules: seq().add(IdentityMap {}), }) } else { bail!( "Unsupported MM projector type: {}", config.mm_projector_type ) } } pub fn forward(&self, x: &Tensor) -> Result<Tensor> { self.modules.forward(x) } } pub struct ClipVisionTower { model: ClipVisionTransformer, select_layer: isize, select_feature_method: String, pub config: ClipVisionConfig, } impl ClipVisionTower { pub fn new( vb: VarBuilder, select_layer: isize, select_feature_method: &str, config: &Option<ClipVisionConfig>, ) -> Result<Self> { let config = if config.is_none() { ClipVisionConfig::clip_vit_large_patch14_336() } else { config.clone().context("no config")? }; let select_layer = match select_layer { -1 | -2 => select_layer, _ => bail!("Unsupported select layer: {}", select_layer), }; let model = ClipVisionTransformer::new(vb, &config)?; Ok(Self { model, select_layer, select_feature_method: select_feature_method.to_string(), config, }) } pub fn forward(&self, x: &Tensor) -> Result<Tensor> { let result = self.model.output_hidden_states(x)?; let index = result.len() as isize + self.select_layer; let result = result[index as usize].clone(); if self.select_feature_method == "cls_patch" { Ok(result) } else { result.i((.., 1..)) } } pub fn num_patches_per_side(&self) -> usize { self.config.image_size / self.config.patch_size } } pub struct LLaVA { pub clip_vision_tower: ClipVisionTower, pub image_newline: Tensor, pub mm_projector: MMProjector, pub llama: Llama, config: LLaVAConfig, device: Device, } impl LLaVA { pub fn load( vb: VarBuilder, config: &LLaVAConfig, clip_vision_config: Option<ClipVisionConfig>, ) -> Result<Self> { let device = vb.device().clone(); let llama_config = config.to_llama_config(); let mm_projector = MMProjector::load(&vb, config)?; let (clip_vision_tower, image_newline, llama) = if config.hf { ( ClipVisionTower::new( vb.pp("vision_tower.vision_model"), config.mm_vision_select_layer, &config.mm_vision_select_feature, &clip_vision_config, )?, vb.get(&[config.hidden_size], "image_newline")? .to_device(&device)?, Llama::load(vb.pp("language_model"), &llama_config)?, ) } else { ( ClipVisionTower::new( vb.pp("model.vision_tower.vision_tower.vision_model"), config.mm_vision_select_layer, &config.mm_vision_select_feature, &clip_vision_config, )?, vb.get(&[config.hidden_size], "model.image_newline")? .to_device(&device)?, Llama::load(vb, &llama_config)?, ) }; Ok(Self { clip_vision_tower, image_newline, mm_projector, llama, config: (*config).clone(), device, }) } pub fn encode_images(&self, x: &Tensor) -> Result<Tensor> { let image_features = self.clip_vision_tower.forward(x)?; let image_features = self.mm_projector.forward(&image_features)?; Ok(image_features) } // currently only for single image, 4 dim tensor pub fn prepare_inputs_labels_for_multimodal( &self, input_ids: &Tensor, images: &[Tensor], image_sizes: &[(u32, u32)], ) -> Result<Tensor> { //TODO: process of multiple images/ new line // 576: 336(input size)/14(patch size)=24 24*24+1(class)=577 577-1=576 let concat_images = Tensor::cat(images, 0)?; let image_features_together = self.encode_images(&concat_images)?; let split_sizes = images .iter() .map(|x| x.shape().dims()[0]) .collect::<Vec<usize>>(); // can be replaced by split let mut index_pos = 0; let mut image_features = Vec::new(); for split_size in split_sizes.iter() { image_features.push(image_features_together.i(index_pos..index_pos + (*split_size))?); index_pos += *split_size; } let mm_patch_merge_type = &self.config.mm_patch_merge_type; let image_aspect_ratio = &self.config.image_aspect_ratio; let image_features = if mm_patch_merge_type == "flat" { image_features .iter() .map(|x| x.flatten(0, 1)) .collect::<Result<Vec<Tensor>>>()? } else if mm_patch_merge_type.starts_with("spatial") { let mut new_image_features = Vec::new(); for (image_idx, image_feature) in image_features.iter().enumerate() { let new_image_feature = if image_feature.dims()[0] > 1 { let base_image_feature = image_feature.get(0)?; let patch_image_feature = image_feature.i(1..)?; let height = self.clip_vision_tower.num_patches_per_side(); let width = height; assert_eq!(height * width, base_image_feature.dims()[0]); let image_size = image_sizes[image_idx]; let new_image_feature = if image_aspect_ratio == "anyres" { let (num_patch_width, num_patch_height) = get_anyres_image_grid_shape( image_size, &self.config.image_grid_pinpoints, self.clip_vision_tower.config.image_size as u32, ); patch_image_feature.reshape(( num_patch_height as usize, num_patch_width as usize, height, width, (), ))? } else { bail!("not implemented in original python LLaVA yet") }; let new_image_feature = if mm_patch_merge_type.contains("unpad") { let new_image_feature = new_image_feature .permute((4, 0, 2, 1, 3))? .flatten(1, 2)? .flatten(2, 3)?; let new_image_feature = unpad_image(&new_image_feature, &image_size)?; let new_image_feature_dims = new_image_feature.dims(); let image_new_line = self .image_newline .reshape((self.config.hidden_size, 1, 1))? .broadcast_as(( new_image_feature_dims[0], new_image_feature_dims[1], 1, ))?; let new_image_feature = Tensor::cat(&[new_image_feature, image_new_line], 2)?; new_image_feature.flatten(1, 2)?.transpose(0, 1)? } else { new_image_feature.permute((0, 2, 1, 3, 4))?.flatten(0, 3)? }; Tensor::cat(&[base_image_feature, new_image_feature], 0)? } else { let new_image_feature = image_feature.get(0)?; if mm_patch_merge_type.contains("unpad") { Tensor::cat( &[new_image_feature, self.image_newline.clone().unsqueeze(0)?], 0, )? } else { new_image_feature } }; new_image_features.push(new_image_feature); } new_image_features } else { bail!("Unexpected mm_patch_merge_type: {mm_patch_merge_type}") }; // can easily be replaced by nonzero if it is implemented in candle let input_ids_vec = input_ids.squeeze(0)?.to_vec1::<i64>()?; let mut image_indices = { let mut image_indices = vec![0_i64]; image_indices.extend( input_ids_vec .iter() .enumerate() .filter_map(|(i, x)| { if *x == self.config.image_token_index as i64 { Some(i as i64) } else { None } }) .collect::<Vec<i64>>(), ); image_indices }; if image_indices.len() == 1 { //no image, only [0], return self.llama.embed(input_ids); } let input_ids_noim = input_ids_vec .iter() .filter_map(|x| { if *x != self.config.image_token_index as i64 { Some(*x) } else { None } }) .collect::<Vec<i64>>(); let input_ids_noim_len = input_ids_noim.len(); image_indices.push((input_ids_noim_len) as i64); let input_ids_noim = Tensor::from_vec(input_ids_noim, input_ids_noim_len, &self.device)?; let cur_input_embeds = self.llama.embed(&input_ids_noim)?; // can be replace by split if it is implemented in candle let input_embed_no_ims = { let mut input_embeds = Vec::new(); for i in 0..image_indices.len() - 1 { let start = (image_indices[i]) as usize; let end = image_indices[i + 1] as usize; input_embeds.push(cur_input_embeds.i((start..end, ..))?) } input_embeds }; let mut cur_new_input_embeds = Vec::new(); for (i, image_feature) in image_features.iter().enumerate() { cur_new_input_embeds.push(input_embed_no_ims[i].clone()); cur_new_input_embeds.push(image_feature.clone()); } cur_new_input_embeds.push(input_embed_no_ims[image_features.len()].clone()); let new_input_embeds = Tensor::cat(&cur_new_input_embeds, 0)?; //trancate let new_input_embeds = if let Some(tokenizer_model_max_length) = self.config.tokenizer_model_max_length { let (new_input_embeds_length, _) = new_input_embeds.shape().dims2()?; if new_input_embeds_length > tokenizer_model_max_length { new_input_embeds.i((..tokenizer_model_max_length, ..))? } else { new_input_embeds } } else { new_input_embeds }; new_input_embeds.unsqueeze(0) } pub fn forward( &self, input_embeds: &Tensor, position_id: usize, cache: &mut Cache, ) -> Result<Tensor> { self.llama .forward_input_embed(input_embeds, position_id, cache) } }
candle/candle-transformers/src/models/llava/mod.rs/0
{ "file_path": "candle/candle-transformers/src/models/llava/mod.rs", "repo_id": "candle", "token_count": 8610 }
57
//! Mix of Multi-scale Dilated and Traditional Convolutions //! //! Mix of Multi-scale Dilated and Traditional Convolutions (MMDiT) is an architecture //! introduced for Stable Diffusion 3, with the MMDiT-X variant used in Stable Diffusion 3.5. //! //! - 📝 [Research Paper](https://arxiv.org/abs/2403.03206) //! - 💻 ComfyUI [reference implementation](https://github.com/comfyanonymous/ComfyUI/blob/78e133d0415784924cd2674e2ee48f3eeca8a2aa/comfy/ldm/modules/diffusionmodules/mmdit.py) //! - 💻 Stability-AI [MMDiT-X implementation](https://github.com/Stability-AI/sd3.5/blob/4e484e05308d83fb77ae6f680028e6c313f9da54/mmditx.py) //! - ⚡ [Interactive Wasm Example](https://huggingface.co/spaces/radames/Candle-BLIP-Image-Captioning) //! - 💻 [GH Link](https://github.com/salesforce/BLIP) //! - 🤗 [HF Link](https://huggingface.co/Salesforce/blip-image-captioning-base) //! - 📝 [Paper](https://arxiv.org/abs/2201.12086) //! pub mod blocks; pub mod embedding; pub mod model; pub mod projections;
candle/candle-transformers/src/models/mmdit/mod.rs/0
{ "file_path": "candle/candle-transformers/src/models/mmdit/mod.rs", "repo_id": "candle", "token_count": 395 }
58
//! Text encoder as used in most OpenCLIP pretrained models //! https://github.com/mlfoundations/open_clip use candle::{DType, IndexOp, Result, Tensor, D}; use candle_nn::{ embedding, layer_norm, linear, ops::softmax_last_dim, Embedding, LayerNorm, Linear, Module, VarBuilder, }; #[derive(Debug, Clone)] pub struct Config { pub vocab_size: usize, pub embed_dim: usize, pub intermediate_size: usize, pub max_position_embeddings: usize, pub pad_with: Option<String>, pub num_hidden_layers: usize, pub num_attention_heads: usize, pub projection_dim: usize, } impl Config { pub fn vit_base_patch32() -> Self { Self { vocab_size: 49408, embed_dim: 512, intermediate_size: 2048, max_position_embeddings: 77, pad_with: None, num_hidden_layers: 12, num_attention_heads: 8, projection_dim: 512, } } } #[derive(Clone, Debug)] struct TextEmbeddings { token_embedding: Embedding, position_embedding: Tensor, } impl TextEmbeddings { fn new(vs: VarBuilder, c: &Config) -> Result<Self> { let token_embedding = embedding(c.vocab_size, c.embed_dim, vs.pp("token_embedding"))?; let position_embedding = vs.get( (c.max_position_embeddings, c.embed_dim), "positional_embedding", )?; Ok(TextEmbeddings { token_embedding, position_embedding, }) } } impl Module for TextEmbeddings { fn forward(&self, input_ids: &Tensor) -> Result<Tensor> { let seq_length = input_ids.dim(D::Minus1)?; let inputs_embeds = self.token_embedding.forward(input_ids)?; let position_embedding = self.position_embedding.narrow(0, 0, seq_length)?; inputs_embeds.broadcast_add(&position_embedding) } } #[derive(Clone, Debug)] struct Attention { k_proj: candle_nn::Linear, v_proj: candle_nn::Linear, q_proj: candle_nn::Linear, out_proj: Linear, head_dim: usize, scale: f64, num_attention_heads: usize, } impl Attention { fn new(vs: candle_nn::VarBuilder, c: &Config) -> Result<Self> { let embed_dim = c.embed_dim; let num_attention_heads = c.num_attention_heads; let in_proj_weights = vs .get((embed_dim * 3, embed_dim), "in_proj_weight")? .chunk(3, 0)?; let (q_w, k_w, v_w) = ( &in_proj_weights[0], &in_proj_weights[1], &in_proj_weights[2], ); let in_proj_biases = vs.get(embed_dim * 3, "in_proj_bias")?.chunk(3, 0)?; let (q_b, k_b, v_b) = (&in_proj_biases[0], &in_proj_biases[1], &in_proj_biases[2]); let q_proj = Linear::new(q_w.clone(), Some(q_b.clone())); let k_proj = Linear::new(k_w.clone(), Some(k_b.clone())); let v_proj = Linear::new(v_w.clone(), Some(v_b.clone())); let out_proj = candle_nn::linear(embed_dim, embed_dim, vs.pp("out_proj"))?; let head_dim = embed_dim / num_attention_heads; let scale = (head_dim as f64).powf(-0.5); Ok(Attention { k_proj, v_proj, q_proj, out_proj, head_dim, scale, num_attention_heads, }) } fn shape_multihead(&self, xs: &Tensor, bsz: usize, seq_len: usize) -> Result<Tensor> { xs.reshape((bsz, seq_len, self.num_attention_heads, self.head_dim))? .transpose(1, 2)? .contiguous()? .to_dtype(DType::F32) } fn forward(&self, xs: &Tensor) -> Result<Tensor> { let in_dtype = xs.dtype(); let (bsz, seq_len, embed_dim) = xs.dims3()?; let q = self.shape_multihead(&self.q_proj.forward(xs)?, bsz, seq_len)?; let k = self.shape_multihead(&self.k_proj.forward(xs)?, bsz, seq_len)?; let v = self.shape_multihead(&self.v_proj.forward(xs)?, bsz, seq_len)?; let q = (q * self.scale)?; let attn_weights = q.matmul(&k.transpose(D::Minus1, D::Minus2)?)?; let attn_weights = softmax_last_dim(&attn_weights)?; let attn_output = attn_weights.matmul(&v)?.to_dtype(in_dtype)?; let attn_output = attn_output .transpose(1, 2)? .contiguous()? .reshape((bsz, seq_len, embed_dim))?; let out = self.out_proj.forward(&attn_output)?; Ok(out) } } #[derive(Clone, Debug)] struct Mlp { fc1: Linear, fc2: Linear, } impl Mlp { fn new(vs: VarBuilder, c: &Config) -> Result<Self> { let fc1 = linear(c.embed_dim, c.intermediate_size, vs.pp("c_fc"))?; let fc2 = linear(c.intermediate_size, c.embed_dim, vs.pp("c_proj"))?; Ok(Mlp { fc1, fc2 }) } } impl Mlp { fn forward(&self, xs: &Tensor) -> Result<Tensor> { let xs = self.fc1.forward(xs)?; self.fc2.forward(&xs.gelu_erf()?) } } #[derive(Clone, Debug)] struct EncoderLayer { self_attn: Attention, layer_norm1: LayerNorm, mlp: Mlp, layer_norm2: LayerNorm, } impl EncoderLayer { fn new(vs: VarBuilder, c: &Config) -> Result<Self> { let self_attn = Attention::new(vs.pp("attn"), c)?; let layer_norm1 = layer_norm(c.embed_dim, 1e-5, vs.pp("ln_1"))?; let mlp = Mlp::new(vs.pp("mlp"), c)?; let layer_norm2 = layer_norm(c.embed_dim, 1e-5, vs.pp("ln_2"))?; Ok(EncoderLayer { self_attn, layer_norm1, mlp, layer_norm2, }) } fn forward(&self, xs: &Tensor) -> Result<Tensor> { let residual = xs; let xs = self.layer_norm1.forward(xs)?; let xs = self.self_attn.forward(&xs)?; let xs = (xs + residual)?; let residual = &xs; let xs = self.layer_norm2.forward(&xs)?; let xs = self.mlp.forward(&xs)?; let out = (xs + residual)?; Ok(out) } } #[derive(Clone, Debug)] pub struct Encoder { layers: Vec<EncoderLayer>, } impl Encoder { pub fn new(vs: VarBuilder, c: &Config) -> Result<Self> { let vs = vs.pp("resblocks"); let mut layers: Vec<EncoderLayer> = Vec::new(); for index in 0..c.num_hidden_layers { let layer = EncoderLayer::new(vs.pp(index.to_string()), c)?; layers.push(layer) } Ok(Encoder { layers }) } pub fn forward(&self, xs: &Tensor) -> Result<Tensor> { let mut xs = xs.clone(); for layer in self.layers.iter() { xs = layer.forward(&xs)?; } Ok(xs) } } /// A text transformer as used in CLIP variants. #[derive(Clone, Debug)] pub struct OpenClipTextTransformer { embeddings: TextEmbeddings, encoder: Encoder, final_layer_norm: LayerNorm, } impl OpenClipTextTransformer { pub fn new(vs: VarBuilder, c: &Config) -> Result<Self> { let embeddings = TextEmbeddings::new(vs.clone(), c)?; let final_layer_norm = layer_norm(c.embed_dim, 1e-5, vs.pp("ln_final"))?; let encoder = Encoder::new(vs.pp("transformer"), c)?; Ok(OpenClipTextTransformer { embeddings, encoder, final_layer_norm, }) } pub fn forward(&self, input_ids: &Tensor) -> Result<Tensor> { let input_ids = self.embeddings.forward(input_ids)?; let input_ids = self.encoder.forward(&input_ids)?; self.final_layer_norm.forward(&input_ids) } } impl Module for OpenClipTextTransformer { fn forward(&self, input_ids: &Tensor) -> Result<Tensor> { let output = self.forward(input_ids)?; let sequence_max_indices = input_ids.argmax(D::Minus1)?.to_dtype(DType::I64)?; let mut indices = Vec::new(); for (batch_idx, &seq_idx) in sequence_max_indices.to_vec1::<i64>()?.iter().enumerate() { let index = output.i((batch_idx, seq_idx as usize))?.unsqueeze(0)?; indices.push(index); } Tensor::cat(&indices, 0) } }
candle/candle-transformers/src/models/openclip/text_model.rs/0
{ "file_path": "candle/candle-transformers/src/models/openclip/text_model.rs", "repo_id": "candle", "token_count": 3955 }
59
//! Module containing quantized MixFormer model implementation. //! //! MixFormer is an efficient transformer variant for text generation that uses //! mixture-of-experts and parallel attention/feed-forward blocks. //! This implementation provides quantization for reduced memory usage. //! //! Key features: //! - Parallel attention and feed-forward computation //! - Rotary positional embeddings //! - Optional key-value caching //! - Support for 8-bit quantization //! use crate::quantized_nn::{layer_norm, linear, Linear}; pub use crate::quantized_var_builder::VarBuilder; use candle::{DType, Device, IndexOp, Module, Result, Tensor, D}; use candle_nn::Activation; pub use crate::models::mixformer::Config; const MAX_SEQ_LEN: usize = 4096; #[derive(Debug, Clone)] struct Embedding { wte: crate::quantized_nn::Embedding, } impl Embedding { fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let wte = crate::quantized_nn::Embedding::new(cfg.vocab_size, cfg.n_embd, vb.pp("wte"))?; Ok(Self { wte }) } } impl Module for Embedding { fn forward(&self, xs: &Tensor) -> Result<Tensor> { self.wte.forward(xs) } } fn get_mask(size: usize, device: &Device) -> Result<Tensor> { let mask: Vec<_> = (0..size) .flat_map(|i| (0..size).map(move |j| u8::from(j > i))) .collect(); Tensor::from_slice(&mask, (size, size), device) } fn masked_fill(on_false: &Tensor, mask: &Tensor, on_true: f32) -> Result<Tensor> { let shape = mask.shape(); let on_true = Tensor::new(on_true, on_false.device())?.broadcast_as(shape.dims())?; let m = mask.where_cond(&on_true, on_false)?; Ok(m) } #[derive(Debug, Clone)] struct RotaryEmbedding { sin: Tensor, cos: Tensor, } impl RotaryEmbedding { fn new(dim: usize, max_seq_len: usize, dev: &Device) -> Result<Self> { let inv_freq: Vec<_> = (0..dim) .step_by(2) .map(|i| 1f32 / 10000f32.powf(i as f32 / dim as f32)) .collect(); let inv_freq_len = inv_freq.len(); let inv_freq = Tensor::from_vec(inv_freq, (1, inv_freq_len), dev)?; let t = Tensor::arange(0u32, max_seq_len as u32, dev)? .to_dtype(DType::F32)? .reshape((max_seq_len, 1))?; let freqs = t.matmul(&inv_freq)?; Ok(Self { sin: freqs.sin()?, cos: freqs.cos()?, }) } fn apply_rotary_emb_qkv( &self, qkv: &Tensor, seqlen_offset: usize, ) -> Result<(Tensor, Tensor, Tensor)> { let (_b_size, seqlen, three, _, _headdim) = qkv.dims5()?; if three != 3 { candle::bail!("unexpected shape for qkv {:?}", qkv.shape()) } let (_rotary_seqlen, rotary_dim) = self.cos.dims2()?; let rotary_dim = rotary_dim * 2; let q_rot = qkv.i((.., .., 0, .., ..rotary_dim))?; let q_pass = qkv.i((.., .., 0, .., rotary_dim..))?; let k_rot = qkv.i((.., .., 1, .., ..rotary_dim))?; let k_pass = qkv.i((.., .., 1, .., rotary_dim..))?; let q12 = q_rot.chunk(2, D::Minus1)?; let k12 = k_rot.chunk(2, D::Minus1)?; let (q1, q2) = (&q12[0], &q12[1]); let (k1, k2) = (&k12[0], &k12[1]); let c = self.cos.narrow(0, seqlen_offset, seqlen)?.unsqueeze(1)?; let s = self.sin.narrow(0, seqlen_offset, seqlen)?.unsqueeze(1)?; let q_rot = Tensor::cat( &[ (q1.broadcast_mul(&c)? - q2.broadcast_mul(&s)?)?, (q1.broadcast_mul(&s)? + q2.broadcast_mul(&c)?)?, ], D::Minus1, )?; let k_rot = Tensor::cat( &[ (k1.broadcast_mul(&c)? - k2.broadcast_mul(&s)?)?, (k1.broadcast_mul(&s)? + k2.broadcast_mul(&c)?)?, ], D::Minus1, )?; let q = Tensor::cat(&[&q_rot, &q_pass], D::Minus1)?; let k = Tensor::cat(&[&k_rot, &k_pass], D::Minus1)?; let v = qkv.i((.., .., 2))?; Ok((q, k, v)) } } #[derive(Debug, Clone)] #[allow(clippy::upper_case_acronyms)] struct MLP { fc1: Linear, fc2: Linear, act: Activation, } impl MLP { fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let n_inner = cfg.n_inner.unwrap_or(4 * cfg.n_embd); let fc1 = linear(cfg.n_embd, n_inner, vb.pp("fc1"))?; let fc2 = linear(n_inner, cfg.n_embd, vb.pp("fc2"))?; Ok(Self { fc1, fc2, act: cfg.activation_function, }) } } impl Module for MLP { fn forward(&self, xs: &Tensor) -> Result<Tensor> { xs.apply(&self.fc1)?.apply(&self.act)?.apply(&self.fc2) } } #[derive(Debug, Clone)] struct CausalLMHead { ln: candle_nn::LayerNorm, linear: Linear, } impl CausalLMHead { fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let ln = layer_norm(cfg.n_embd, cfg.layer_norm_epsilon, vb.pp("ln"))?; let linear = linear(cfg.n_embd, cfg.vocab_size, vb.pp("linear"))?; Ok(Self { ln, linear }) } } impl Module for CausalLMHead { fn forward(&self, xs: &Tensor) -> Result<Tensor> { xs.apply(&self.ln)? .apply(&self.linear)? .to_dtype(DType::F32) } } #[derive(Debug, Clone)] #[allow(clippy::upper_case_acronyms)] struct MHA { wqkv: Linear, out_proj: Linear, rotary_emb: RotaryEmbedding, kv_cache: Option<(Tensor, Tensor)>, head_dim: usize, n_head: usize, softmax_scale: f64, span: tracing::Span, } impl MHA { fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let head_dim = cfg.n_embd / cfg.n_head; let op_size = cfg.n_embd; let wqkv = linear(cfg.n_embd, 3 * op_size, vb.pp("Wqkv"))?; let out_proj = linear(op_size, cfg.n_embd, vb.pp("out_proj"))?; let rotary_emb = RotaryEmbedding::new(cfg.rotary_dim, MAX_SEQ_LEN, vb.device())?; let softmax_scale = 1f64 / (head_dim as f64).sqrt(); Ok(Self { wqkv, out_proj, head_dim, n_head: cfg.n_head, kv_cache: None, rotary_emb, softmax_scale, span: tracing::span!(tracing::Level::TRACE, "mha"), }) } fn forward(&mut self, xs: &Tensor, mask: Option<&Tensor>) -> Result<Tensor> { let _enter = self.span.enter(); let (b_size, seq_len, _n_embd) = xs.dims3()?; let qkv = self .wqkv .forward(xs)? .reshape((b_size, seq_len, 3, (), self.head_dim))?; let seqlen_offset = match &self.kv_cache { None => 0, Some((prev_k, _)) => prev_k.dim(1)?, }; // In the python implementation, a single tensor is returned with the third axis of size 3. let (q, k, v) = self.rotary_emb.apply_rotary_emb_qkv(&qkv, seqlen_offset)?; let (k, v) = match &self.kv_cache { None => (k, v), Some((prev_k, prev_v)) => { let k = Tensor::cat(&[prev_k, &k], 1)?; let v = Tensor::cat(&[prev_v, &v], 1)?; (k, v) } }; self.kv_cache = Some((k.clone(), v.clone())); // scores = torch.einsum('bthd,bshd->bhts', q, k * softmax_scale) let q = q.transpose(1, 2)?.flatten_to(1)?; // b*h, t, d let k = k.transpose(1, 2)?.flatten_to(1)?; // b*h, s, d let v = v.transpose(1, 2)?.flatten_to(1)?; // b*h, s, d let attn_weights = (q.matmul(&k.t()?)? * self.softmax_scale)?; // b*h, t, s // causal_mask = torch.triu(torch.full((seqlen_q, seqlen_k), -10000.0, device=scores.device), 1) // scores = scores + causal_mask.to(dtype=scores.dtype) let attn_weights = match mask { None => attn_weights, Some(mask) => masked_fill( &attn_weights, &mask.broadcast_left(b_size * self.n_head)?, f32::NEG_INFINITY, )?, }; let attn_weights = candle_nn::ops::softmax_last_dim(&attn_weights)?; // output = torch.einsum('bhts,bshd->bthd', attention_drop, v) // attn_weights: b*h,t,s, v: b*h,s,d let attn_output = attn_weights.matmul(&v)?; // b*h,t,d let attn_output = attn_output .reshape((b_size, (), seq_len, self.head_dim))? .transpose(1, 2)? .flatten_from(D::Minus2)?; attn_output.apply(&self.out_proj) } fn clear_kv_cache(&mut self) { self.kv_cache = None } } #[derive(Debug, Clone)] struct ParallelBlock { ln: candle_nn::LayerNorm, mixer: MHA, mlp: MLP, span: tracing::Span, } impl ParallelBlock { fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let ln = layer_norm(cfg.n_embd, cfg.layer_norm_epsilon, vb.pp("ln"))?; let mixer = MHA::new(cfg, vb.pp("mixer"))?; let mlp = MLP::new(cfg, vb.pp("mlp"))?; Ok(Self { ln, mixer, mlp, span: tracing::span!(tracing::Level::TRACE, "block"), }) } fn forward(&mut self, xs: &Tensor, mask: Option<&Tensor>) -> Result<Tensor> { let _enter = self.span.enter(); let residual = xs; let xs = xs.apply(&self.ln)?; let attn_outputs = self.mixer.forward(&xs, mask)?; let feed_forward_hidden_states = self.mlp.forward(&xs)?; attn_outputs + feed_forward_hidden_states + residual } fn clear_kv_cache(&mut self) { self.mixer.clear_kv_cache() } } #[derive(Debug, Clone)] pub struct MixFormerSequentialForCausalLM { embedding: Embedding, blocks: Vec<ParallelBlock>, head: CausalLMHead, span: tracing::Span, } impl MixFormerSequentialForCausalLM { pub fn new_v2(cfg: &Config, vb: VarBuilder) -> Result<Self> { let vb_head = vb.pp("lm_head"); let vb = vb.pp("transformer"); let embedding = Embedding::new(cfg, vb.pp("embd"))?; let mut blocks = Vec::new(); for i in 0..cfg.n_layer { let block = ParallelBlock::new(cfg, vb.pp("h").pp(i))?; blocks.push(block) } let head = CausalLMHead::new(cfg, vb_head)?; Ok(Self { embedding, blocks, head, span: tracing::span!(tracing::Level::TRACE, "mixformer"), }) } pub fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let vb = vb.pp("layers"); let embedding = Embedding::new(cfg, vb.pp(0))?; let mut blocks = Vec::new(); for i in 0..cfg.n_layer { let block = ParallelBlock::new(cfg, vb.pp(i + 1))?; blocks.push(block); } let head = CausalLMHead::new(cfg, vb.pp(cfg.n_layer + 1))?; Ok(Self { embedding, blocks, head, span: tracing::span!(tracing::Level::TRACE, "mixformer"), }) } pub fn forward(&mut self, xs: &Tensor) -> Result<Tensor> { let _enter = self.span.enter(); let (_b_size, seq_len) = xs.dims2()?; let mut xs = xs.apply(&self.embedding)?; let mask = if seq_len <= 1 { None } else { Some(get_mask(seq_len, xs.device())?) }; for block in self.blocks.iter_mut() { xs = block.forward(&xs, mask.as_ref())?; } xs.narrow(1, seq_len - 1, 1)?.apply(&self.head)?.squeeze(1) } pub fn forward_with_img( &mut self, bos_token: &Tensor, xs: &Tensor, img_embeds: &Tensor, ) -> Result<Tensor> { let _enter = self.span.enter(); let xs = xs.apply(&self.embedding)?; let bos_token = bos_token.apply(&self.embedding)?; // Python implementation sequence order is <bos token embedding><img embedding><rest of text embedding> // https://github.com/vikhyat/moondream/blob/a9d788a20d1543fb1479edc54106e88cff7759d3/moondream/moondream.py#L43-L56 let mut xs = Tensor::cat(&[bos_token, img_embeds.clone(), xs], 1)?; let (_b_size, seq_len, _embds) = xs.dims3()?; let mask = Some(get_mask(seq_len, xs.device())?); for block in self.blocks.iter_mut() { xs = block.forward(&xs, mask.as_ref())? } let xs = xs .narrow(1, seq_len - 1, 1)? .apply(&self.head)? .squeeze(1)?; Ok(xs) } pub fn clear_kv_cache(&mut self) { self.blocks.iter_mut().for_each(|b| b.clear_kv_cache()) } }
candle/candle-transformers/src/models/quantized_mixformer.rs/0
{ "file_path": "candle/candle-transformers/src/models/quantized_mixformer.rs", "repo_id": "candle", "token_count": 6503 }
60
//! Recurrent Gemma model implementation //! //! Recurrent Gemma is a version of the Gemma language model that incorporates recurrent memory. //! This allows the model to maintain state between predictions and have longer-range memory. //! //! Key characteristics: //! - Real-gated linear recurrent units (RGLRU) //! - 1D convolution for local context //! - RMSNorm for layer normalization //! - Rotary positional embeddings (RoPE) //! - Grouped query attention //! //! References: //! - [Gemma: Open Models Based on Gemini Technology](https://blog.google/technology/developers/gemma-open-models/) //! - [Recurrent Memory model architecture](https://arxiv.org/abs/2402.00441) //! //! This implementation is based on the python version from huggingface/transformers. //! https://github.com/huggingface/transformers/blob/b109257f4fb8b1166e7c53cc5418632014ed53a5/src/transformers/models/recurrent_gemma/modeling_recurrent_gemma.py#L2 //! use candle::{DType, Device, IndexOp, Module, Result, Tensor, D}; use candle_nn::{linear_b as linear, Linear, VarBuilder}; use std::sync::Arc; #[derive(serde::Deserialize, Debug, Clone, Copy)] #[serde(rename_all = "snake_case")] pub enum TemporalBlockType { Attention, Recurrent, } #[derive(serde::Deserialize, Debug, Clone)] pub struct Config { pub num_hidden_layers: usize, pub vocab_size: usize, pub hidden_size: usize, pub intermediate_size: usize, pub num_attention_heads: usize, pub num_key_value_heads: usize, pub head_dim: usize, pub lru_width: Option<usize>, pub attention_window_size: usize, pub conv1d_width: usize, pub logits_soft_cap: f64, pub hidden_activation: candle_nn::Activation, pub partial_rotary_factor: f64, pub rms_norm_eps: f64, pub rope_theta: f64, #[serde(alias = "_block_types")] pub block_types: Vec<TemporalBlockType>, pub attention_bias: bool, #[serde(default = "default_max_seq_len")] pub max_seq_len: usize, } fn default_max_seq_len() -> usize { 8192 } #[derive(Debug, Clone)] pub(crate) struct RmsNorm { weight: Tensor, eps: f64, } impl RmsNorm { pub(crate) fn new(dim: usize, eps: f64, vb: VarBuilder) -> Result<Self> { let weight = vb.get(dim, "weight")?; Ok(Self { weight, eps }) } pub(crate) fn from_weight(weight: Tensor, eps: f64) -> Self { Self { weight, eps } } } impl Module for RmsNorm { fn forward(&self, x: &Tensor) -> Result<Tensor> { let x_dtype = x.dtype(); let internal_dtype = match x_dtype { DType::F16 | DType::BF16 => DType::F32, d => d, }; let hidden_size = x.dim(D::Minus1)?; let x = x.to_dtype(internal_dtype)?; let norm_x = (x.sqr()?.sum_keepdim(D::Minus1)? / hidden_size as f64)?; let x_normed = x.broadcast_div(&(norm_x + self.eps)?.sqrt()?)?; x_normed .to_dtype(x_dtype)? .broadcast_mul(&(&self.weight + 1.0)?) } } #[derive(Debug, Clone)] pub(crate) struct RotaryEmbedding { sin: Tensor, cos: Tensor, } fn rotate_half(xs: &Tensor) -> Result<Tensor> { let last_dim = xs.dim(D::Minus1)?; let xs1 = xs.narrow(D::Minus1, 0, last_dim / 2)?; let xs2 = xs.narrow(D::Minus1, last_dim / 2, last_dim - last_dim / 2)?; Tensor::cat(&[&xs2.neg()?, &xs1], D::Minus1) } impl RotaryEmbedding { pub(crate) fn new(dtype: DType, cfg: &Config, dev: &Device) -> Result<Self> { if cfg.partial_rotary_factor != 0.5 { candle::bail!("partial-rotary-factor {} <> 0.5", cfg.partial_rotary_factor) } let dim = cfg.head_dim / 2; let max_seq_len = cfg.max_seq_len; let inv_freq: Vec<_> = (0..dim) .step_by(2) .map(|i| 1f32 / cfg.rope_theta.powf(i as f64 / dim as f64) as f32) .collect(); let inv_freq_len = inv_freq.len(); let inv_freq = Tensor::from_vec(inv_freq, (1, inv_freq_len), dev)?.to_dtype(dtype)?; let t = Tensor::arange(0u32, max_seq_len as u32, dev)? .to_dtype(dtype)? .reshape((max_seq_len, 1))?; let freqs = t.matmul(&inv_freq)?; let freqs = Tensor::cat(&[&freqs, &freqs], D::Minus1)?; Ok(Self { sin: freqs.sin()?, cos: freqs.cos()?, }) } pub(crate) fn apply_rotary_emb_qkv( &self, q: &Tensor, k: &Tensor, seqlen_offset: usize, ) -> Result<(Tensor, Tensor)> { let (_b_sz, _h, seq_len, _n_embd) = q.dims4()?; let cos = self.cos.narrow(0, seqlen_offset, seq_len)?; let sin = self.sin.narrow(0, seqlen_offset, seq_len)?; let cos = cos.unsqueeze(0)?.unsqueeze(0)?; // (1, 1, seq_len, dim) let sin = sin.unsqueeze(0)?.unsqueeze(0)?; // (1, 1, seq_len, dim) let q_embed = (q.broadcast_mul(&cos)? + rotate_half(q)?.broadcast_mul(&sin))?; let k_embed = (k.broadcast_mul(&cos)? + rotate_half(k)?.broadcast_mul(&sin))?; Ok((q_embed, k_embed)) } } #[derive(Debug, Clone)] struct Mlp { gate_proj: Linear, up_proj: Linear, down_proj: Linear, act_fn: candle_nn::Activation, } impl Mlp { fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let h = cfg.hidden_size; let intermediate_size = cfg.intermediate_size / 2; let gate_proj = linear(h, intermediate_size, true, vb.pp("gate_proj"))?; let up_proj = linear(h, intermediate_size, true, vb.pp("up_proj"))?; let down_proj = linear(intermediate_size, h, true, vb.pp("down_proj"))?; Ok(Self { gate_proj, up_proj, down_proj, act_fn: cfg.hidden_activation, }) } } impl Module for Mlp { fn forward(&self, xs: &Tensor) -> Result<Tensor> { let gate = xs.apply(&self.gate_proj)?.apply(&self.act_fn)?; (gate * xs.apply(&self.up_proj))?.apply(&self.down_proj) } } // Real-Gated Linear Recurrent Unit #[derive(Debug, Clone)] pub(crate) struct Rglru { pub(crate) recurrent_param: Tensor, pub(crate) input_gate_weight: Tensor, pub(crate) input_gate_bias: Tensor, pub(crate) recurrent_gate_weight: Tensor, pub(crate) recurrent_gate_bias: Tensor, pub(crate) block_width: usize, pub(crate) n_heads: usize, pub(crate) recurrent_states: Option<Tensor>, } fn baddbmm(a: &Tensor, b: &Tensor, c: &Tensor) -> Result<Tensor> { a.broadcast_add(&b.matmul(c)?) } fn softplus(xs: &Tensor) -> Result<Tensor> { (xs.exp()? + 1.0)?.log() } impl Rglru { fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let h = cfg.hidden_size; let lru_width = cfg.lru_width.unwrap_or(h); let n_heads = cfg.num_attention_heads; let block_width = lru_width / n_heads; let recurrent_param = vb.get((lru_width,), "recurrent_param")?; let input_gate_weight = vb.get((n_heads, block_width, block_width), "input_gate_weight")?; let input_gate_bias = vb.get((n_heads, block_width), "input_gate_bias")?; let recurrent_gate_weight = vb.get((n_heads, block_width, block_width), "recurrent_gate_weight")?; let recurrent_gate_bias = vb.get((n_heads, block_width), "recurrent_gate_bias")?; Ok(Self { recurrent_param, input_gate_bias, input_gate_weight, recurrent_gate_bias, recurrent_gate_weight, block_width, n_heads, recurrent_states: None, }) } // https://github.com/huggingface/transformers/blob/0bd58f1ce0573c0e3269de4215a17d318add49b9/src/transformers/models/recurrent_gemma/modeling_recurrent_gemma.py#L303 pub(crate) fn forward(&mut self, xs: &Tensor, pos: usize) -> Result<Tensor> { let (b_sz, seq_len, lru_width) = xs.dims3()?; let pos = Tensor::arange(pos as u32, (pos + seq_len) as u32, xs.device())?; let reset = pos.eq(0u32)?.unsqueeze(1)?.unsqueeze(0)?; let reshape_act = xs .reshape((b_sz * seq_len, self.n_heads, self.block_width))? .permute((1, 0, 2))? .contiguous()?; let res = baddbmm( &self.input_gate_bias.unsqueeze(1)?, &reshape_act, &self.input_gate_weight, )?; let input_gate = res.transpose(0, 1)?.reshape((b_sz, seq_len, lru_width))?; let input_gate = candle_nn::ops::sigmoid(&input_gate)?; let res = baddbmm( &self.recurrent_gate_bias.unsqueeze(1)?, &reshape_act, &self.recurrent_gate_weight, )?; let recurrent_gate = res.transpose(0, 1)?.reshape((b_sz, seq_len, lru_width))?; let recurrent_gate = candle_nn::ops::sigmoid(&recurrent_gate)?; let log_recurrent_gate = (recurrent_gate * (-8.0))?.broadcast_mul(&softplus(&self.recurrent_param)?)?; let recurrent_gate = log_recurrent_gate.exp()?; let a_square = (log_recurrent_gate * 2.)?.exp()?; // Gate the input. let gated_inputs = (xs * input_gate)?; let reset = reset.to_dtype(a_square.dtype())?; let multiplier = reset.broadcast_add(&((1.0 - &reset)?.broadcast_mul(&(1.0 - a_square)?.sqrt()?))?)?; let normalized_x = (gated_inputs * multiplier.to_dtype(xs.dtype()))?; let (hidden_states, recurrent_states) = rnn_scan( &normalized_x, &recurrent_gate, &reset, self.recurrent_states.as_ref(), )?; self.recurrent_states = Some(recurrent_states); Ok(hidden_states) } } fn rnn_scan( hidden_states: &Tensor, recurrent_gate: &Tensor, reset: &Tensor, recurrent_states: Option<&Tensor>, ) -> Result<(Tensor, Tensor)> { let acc_dtype = DType::F32; let dev = hidden_states.device(); let in_dtype = hidden_states.dtype(); let inv_reset = (1.0 - reset)?.to_dtype(recurrent_gate.dtype())?; let recurrent_gate = recurrent_gate.broadcast_mul(&inv_reset)?; let (c, r) = if hidden_states.dim(1)? == 1 { match recurrent_states { None => { let next_state = hidden_states.i((.., 0))?.to_dtype(acc_dtype)?; (hidden_states.clone(), next_state) } Some(recurrent_states) => { let contextualized_states = recurrent_gate.to_dtype(acc_dtype)? * recurrent_states.unsqueeze(1)?; let contextualized_states = (contextualized_states + hidden_states.to_dtype(acc_dtype)?)?; let c = contextualized_states.to_dtype(in_dtype)?; let l = contextualized_states.dim(1)?; let r = contextualized_states.i((.., l - 1))?; (c, r) } } } else { let mut recurrent_states = match recurrent_states { None => Tensor::zeros(hidden_states.i((.., 0))?.shape(), acc_dtype, dev)?, Some(r) => r.clone(), }; let mut contextualized_states = vec![]; for t in 0..hidden_states.dim(1)? { recurrent_states = (recurrent_gate.i((.., t))?.to_dtype(acc_dtype)? * recurrent_states)?; recurrent_states = (recurrent_states + hidden_states.i((.., t))?.to_dtype(acc_dtype)?)?; contextualized_states.push(recurrent_states.to_dtype(in_dtype)?) } let contextualized_states = Tensor::stack(&contextualized_states, 1)?; (contextualized_states, recurrent_states) }; Ok((c, r)) } #[derive(Debug, Clone)] struct RecurrentBlock { linear_y: Linear, linear_x: Linear, linear_out: Linear, conv_1d: candle_nn::Conv1d, conv1d_state: Option<Tensor>, conv1d_width: usize, rg_lru: Rglru, act_fn: candle_nn::Activation, } impl RecurrentBlock { fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let h = cfg.hidden_size; let lru_width = cfg.lru_width.unwrap_or(h); let linear_y = linear(h, lru_width, true, vb.pp("linear_y"))?; let linear_x = linear(h, lru_width, true, vb.pp("linear_x"))?; let linear_out = linear(lru_width, h, true, vb.pp("linear_out"))?; let conv_1d = candle_nn::conv1d( lru_width, lru_width, cfg.conv1d_width, candle_nn::Conv1dConfig { groups: lru_width, padding: cfg.conv1d_width - 1, ..Default::default() }, vb.pp("conv_1d"), )?; let rg_lru = Rglru::new(cfg, vb.pp("rg_lru"))?; Ok(Self { linear_y, linear_x, linear_out, conv_1d, conv1d_state: None, conv1d_width: cfg.conv1d_width, rg_lru, act_fn: cfg.hidden_activation, }) } pub fn forward(&mut self, xs: &Tensor, pos: usize) -> Result<Tensor> { let (_b_sz, seq_len, _) = xs.dims3()?; let y_branch = xs.apply(&self.linear_y)?.apply(&self.act_fn)?; let x_branch = xs.apply(&self.linear_x)?.transpose(1, 2)?; let x_branch = if pos == 0 { let x_len = x_branch.dim(D::Minus1)?; let pad = self.conv1d_width as i64 - x_len as i64 - 1; let padded = match pad.cmp(&0) { std::cmp::Ordering::Equal => x_branch.clone(), std::cmp::Ordering::Less => { let rev_pad = (-pad) as usize; x_branch.narrow(D::Minus1, rev_pad, x_len - rev_pad)? } std::cmp::Ordering::Greater => { x_branch.pad_with_zeros(D::Minus1, pad as usize, 0)? } }; self.conv1d_state = Some(padded); x_branch .apply(&self.conv_1d)? .narrow(D::Minus1, 0, seq_len)? } else { let conv_state = match self.conv1d_state.as_ref() { None => candle::bail!("empty cache despite pos > 0"), Some(s) => Tensor::cat(&[s, &x_branch], D::Minus1)?, }; let w = self.conv_1d.weight().i((.., 0, ..))?; let x_branch = conv_state.broadcast_mul(&w)?.sum(D::Minus1)?; let x_branch = match self.conv_1d.bias() { None => x_branch, Some(b) => x_branch.broadcast_add(b)?, }; let x_branch = x_branch.unsqueeze(D::Minus1)?; self.conv1d_state = Some(conv_state.i((.., .., 1..))?); x_branch }; let x_branch = x_branch.transpose(1, 2)?; let x_branch = self.rg_lru.forward(&x_branch, pos)?; (x_branch * y_branch)?.apply(&self.linear_out) } } #[derive(Debug, Clone)] struct SdpaAttention { q_proj: Linear, k_proj: Linear, v_proj: Linear, o_proj: Linear, n_heads: usize, n_kv_heads: usize, head_dim: usize, hidden_size: usize, kv_cache: Option<(Tensor, Tensor)>, rotary_emb: Arc<RotaryEmbedding>, } impl SdpaAttention { fn new(rotary_emb: Arc<RotaryEmbedding>, cfg: &Config, vb: VarBuilder) -> Result<Self> { let h = cfg.hidden_size; let n_heads = cfg.num_attention_heads; let n_kv_heads = cfg.num_key_value_heads; let hd = cfg.head_dim; let q_proj = linear(h, n_heads * hd, cfg.attention_bias, vb.pp("q_proj"))?; let k_proj = linear(h, n_kv_heads * hd, cfg.attention_bias, vb.pp("k_proj"))?; let v_proj = linear(h, n_kv_heads * hd, cfg.attention_bias, vb.pp("v_proj"))?; let o_proj = linear(n_heads * hd, h, true, vb.pp("o_proj"))?; Ok(Self { q_proj, k_proj, v_proj, o_proj, n_heads, n_kv_heads, head_dim: hd, hidden_size: h, kv_cache: None, rotary_emb, }) } fn repeat_kv(&self, x: Tensor) -> Result<Tensor> { let n_rep = self.n_heads / self.n_kv_heads; crate::utils::repeat_kv(x, n_rep) } fn forward( &mut self, xs: &Tensor, attention_mask: Option<&Tensor>, pos: usize, ) -> Result<Tensor> { let (bsz, q_len, _) = xs.dims3()?; let query_states = xs.apply(&self.q_proj)?; let key_states = xs.apply(&self.k_proj)?; let value_states = xs.apply(&self.v_proj)?; let query_states = query_states .reshape((bsz, q_len, self.n_heads, self.head_dim))? .transpose(1, 2)?; let key_states = key_states .reshape((bsz, q_len, self.n_kv_heads, self.head_dim))? .transpose(1, 2)?; let value_states = value_states .reshape((bsz, q_len, self.n_kv_heads, self.head_dim))? .transpose(1, 2)?; let query_states = query_states.chunk(2, D::Minus1)?; let key_states = key_states.chunk(2, D::Minus1)?; let (query_rot, key_rot) = self.rotary_emb .apply_rotary_emb_qkv(&query_states[0], &key_states[0], pos)?; let query_states = Tensor::cat(&[&query_rot, &query_states[1]], D::Minus1)?.contiguous()?; let key_states = Tensor::cat(&[&key_rot, &key_states[1]], D::Minus1)?.contiguous()?; let (key_states, value_states) = match &self.kv_cache { None => (key_states, value_states), Some((prev_k, prev_v)) => { let key_states = Tensor::cat(&[prev_k, &key_states], 2)?; let value_states = Tensor::cat(&[prev_v, &value_states], 2)?; (key_states, value_states) } }; self.kv_cache = Some((key_states.clone(), value_states.clone())); let key_states = self.repeat_kv(key_states)?; let value_states = self.repeat_kv(value_states)?; let xs = { let att = (query_states.matmul(&key_states.t()?)? / (self.head_dim as f64).sqrt())?; let att = if q_len == 1 { att } else { match attention_mask { None => att, Some(mask) => att.broadcast_add(mask)?, } }; let att = candle_nn::ops::softmax_last_dim(&att)?; att.matmul(&value_states.contiguous()?)? }; let xs = xs .transpose(1, 2)? .reshape((bsz, q_len, self.hidden_size))?; self.o_proj.forward(&xs) } } #[derive(Debug, Clone)] enum TemporalBlock { Recurrent(RecurrentBlock), Attention(SdpaAttention), } impl TemporalBlock { fn forward( &mut self, xs: &Tensor, attention_mask: Option<&Tensor>, pos: usize, ) -> Result<Tensor> { match self { Self::Recurrent(b) => b.forward(xs, pos), Self::Attention(b) => b.forward(xs, attention_mask, pos), } } } #[derive(Debug, Clone)] struct DecoderLayer { temporal_pre_norm: RmsNorm, channel_pre_norm: RmsNorm, temporal_block: TemporalBlock, mlp_block: Mlp, } impl DecoderLayer { fn new( block_idx: usize, rotary_emb: Arc<RotaryEmbedding>, cfg: &Config, vb: VarBuilder, ) -> Result<Self> { let h = cfg.hidden_size; let temporal_pre_norm = RmsNorm::new(h, cfg.rms_norm_eps, vb.pp("temporal_pre_norm"))?; let channel_pre_norm = RmsNorm::new(h, cfg.rms_norm_eps, vb.pp("channel_pre_norm"))?; let temporal_block = match cfg.block_types[block_idx % cfg.block_types.len()] { TemporalBlockType::Recurrent => { let block = RecurrentBlock::new(cfg, vb.pp("temporal_block"))?; TemporalBlock::Recurrent(block) } TemporalBlockType::Attention => { let block = SdpaAttention::new(rotary_emb, cfg, vb.pp("temporal_block"))?; TemporalBlock::Attention(block) } }; let mlp_block = Mlp::new(cfg, vb.pp("mlp_block"))?; Ok(Self { temporal_pre_norm, channel_pre_norm, temporal_block, mlp_block, }) } fn forward( &mut self, xs: &Tensor, attention_mask: Option<&Tensor>, pos: usize, ) -> Result<Tensor> { let residual = xs; let xs = xs.apply(&self.temporal_pre_norm)?; let xs = self.temporal_block.forward(&xs, attention_mask, pos)?; let xs = (xs + residual)?; let residual = &xs; let xs = xs.apply(&self.channel_pre_norm)?.apply(&self.mlp_block)?; xs + residual } } #[derive(Debug, Clone)] pub struct Model { embed_tokens: candle_nn::Embedding, layers: Vec<DecoderLayer>, final_norm: RmsNorm, lm_head: Linear, hidden_size: usize, logits_soft_cap: f64, dtype: DType, device: Device, } impl Model { pub fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let embed_tokens = candle_nn::embedding(cfg.vocab_size, cfg.hidden_size, vb.pp("embed_tokens"))?; let rotary_emb = Arc::new(RotaryEmbedding::new(vb.dtype(), cfg, vb.device())?); let vb_b = vb.pp("layers"); let mut layers = Vec::with_capacity(cfg.num_hidden_layers); for idx in 0..cfg.num_hidden_layers { let layer = DecoderLayer::new(idx, rotary_emb.clone(), cfg, vb_b.pp(idx))?; layers.push(layer) } let final_norm = RmsNorm::new(cfg.hidden_size, cfg.rms_norm_eps, vb.pp("final_norm"))?; let lm_head = Linear::new(embed_tokens.embeddings().clone(), None); Ok(Self { embed_tokens, layers, final_norm, lm_head, hidden_size: cfg.hidden_size, logits_soft_cap: cfg.logits_soft_cap, dtype: vb.dtype(), device: vb.device().clone(), }) } fn prepare_decoder_attention_mask( &self, b_size: usize, tgt_len: usize, seqlen_offset: usize, ) -> Result<Tensor> { let mask: Vec<_> = (0..tgt_len) .flat_map(|i| (0..tgt_len).map(move |j| if i < j { f32::NEG_INFINITY } else { 0. })) .collect(); let mask = Tensor::from_slice(&mask, (tgt_len, tgt_len), &self.device)?; let mask = if seqlen_offset > 0 { let mask0 = Tensor::zeros((tgt_len, seqlen_offset), DType::F32, &self.device)?; Tensor::cat(&[&mask0, &mask], D::Minus1)? } else { mask }; mask.expand((b_size, 1, tgt_len, tgt_len + seqlen_offset))? .to_dtype(self.dtype) } pub fn forward(&mut self, xs: &Tensor, pos: usize) -> Result<Tensor> { let (b_size, seq_len) = xs.dims2()?; let attention_mask = if seq_len <= 1 { None } else { let mask = self.prepare_decoder_attention_mask(b_size, seq_len, pos)?; Some(mask) }; let xs = xs.apply(&self.embed_tokens)?; let mut xs = (xs * (self.hidden_size as f64).sqrt())?; for layer in self.layers.iter_mut() { xs = layer.forward(&xs, attention_mask.as_ref(), pos)?; } let logits = xs .narrow(1, seq_len - 1, 1)? .apply(&self.final_norm)? .apply(&self.lm_head)?; let logits = ((logits / self.logits_soft_cap)?.tanh()? * self.logits_soft_cap)?; Ok(logits) } }
candle/candle-transformers/src/models/recurrent_gemma.rs/0
{ "file_path": "candle/candle-transformers/src/models/recurrent_gemma.rs", "repo_id": "candle", "token_count": 12053 }
61
//! Contrastive Language-Image Pre-Training //! //! Contrastive Language-Image Pre-Training (CLIP) is an architecture trained on //! pairs of images with related texts. //! //! - [CLIP](https://github.com/openai/CLIP) use candle::{DType, Device, Result, Tensor, D}; use candle_nn as nn; use candle_nn::Module; #[derive(Debug, Clone, Copy)] pub enum Activation { QuickGelu, Gelu, GeluErf, } impl Module for Activation { fn forward(&self, xs: &Tensor) -> Result<Tensor> { match self { Activation::QuickGelu => xs * nn::ops::sigmoid(&(xs * 1.702f64)?)?, Activation::Gelu => xs.gelu(), Activation::GeluErf => xs.gelu_erf(), } } } #[derive(Debug, Clone)] pub struct Config { vocab_size: usize, embed_dim: usize, // aka config.hidden_size activation: Activation, // aka config.hidden_act intermediate_size: usize, pub max_position_embeddings: usize, // The character to use for padding, use EOS when not set. pub pad_with: Option<String>, num_hidden_layers: usize, num_attention_heads: usize, #[allow(dead_code)] projection_dim: usize, } impl Config { // The config details can be found in the "text_config" section of this json file: // https://huggingface.co/openai/clip-vit-large-patch14/blob/main/config.json pub fn v1_5() -> Self { Self { vocab_size: 49408, embed_dim: 768, intermediate_size: 3072, max_position_embeddings: 77, pad_with: None, num_hidden_layers: 12, num_attention_heads: 12, projection_dim: 768, activation: Activation::QuickGelu, } } // https://huggingface.co/stabilityai/stable-diffusion-2-1/blob/main/text_encoder/config.json pub fn v2_1() -> Self { Self { vocab_size: 49408, embed_dim: 1024, intermediate_size: 4096, max_position_embeddings: 77, pad_with: Some("!".to_string()), num_hidden_layers: 23, num_attention_heads: 16, projection_dim: 512, activation: Activation::Gelu, } } // https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/text_encoder/config.json pub fn sdxl() -> Self { Self { vocab_size: 49408, embed_dim: 768, intermediate_size: 3072, max_position_embeddings: 77, pad_with: Some("!".to_string()), num_hidden_layers: 12, num_attention_heads: 12, projection_dim: 768, activation: Activation::QuickGelu, } } // https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/text_encoder_2/config.json pub fn sdxl2() -> Self { Self { vocab_size: 49408, embed_dim: 1280, intermediate_size: 5120, max_position_embeddings: 77, pad_with: Some("!".to_string()), num_hidden_layers: 32, num_attention_heads: 20, projection_dim: 1280, activation: Activation::Gelu, } } pub fn ssd1b() -> Self { Self::sdxl() } pub fn ssd1b2() -> Self { Self::sdxl2() } // https://huggingface.co/warp-ai/wuerstchen/blob/main/text_encoder/config.json pub fn wuerstchen() -> Self { Self { vocab_size: 49408, embed_dim: 1024, intermediate_size: 4096, max_position_embeddings: 77, pad_with: None, num_hidden_layers: 24, num_attention_heads: 16, projection_dim: 1024, activation: Activation::GeluErf, } } // https://huggingface.co/warp-ai/wuerstchen-prior/blob/main/text_encoder/config.json pub fn wuerstchen_prior() -> Self { Self { vocab_size: 49408, embed_dim: 1280, intermediate_size: 5120, max_position_embeddings: 77, pad_with: None, num_hidden_layers: 32, num_attention_heads: 20, projection_dim: 512, activation: Activation::GeluErf, } } } // CLIP Text Model // https://github.com/huggingface/transformers/blob/674f750a57431222fa2832503a108df3badf1564/src/transformers/models/clip/modeling_clip.py #[derive(Debug)] struct ClipTextEmbeddings { token_embedding: candle_nn::Embedding, position_embedding: candle_nn::Embedding, position_ids: Tensor, } impl ClipTextEmbeddings { fn new(vs: candle_nn::VarBuilder, c: &Config) -> Result<Self> { let token_embedding = candle_nn::embedding(c.vocab_size, c.embed_dim, vs.pp("token_embedding"))?; let position_embedding = candle_nn::embedding( c.max_position_embeddings, c.embed_dim, vs.pp("position_embedding"), )?; let position_ids = Tensor::arange(0u32, c.max_position_embeddings as u32, vs.device())?.unsqueeze(0)?; Ok(ClipTextEmbeddings { token_embedding, position_embedding, position_ids, }) } } impl Module for ClipTextEmbeddings { fn forward(&self, xs: &Tensor) -> Result<Tensor> { let token_embedding = self.token_embedding.forward(xs)?; let position_embedding = self.position_embedding.forward(&self.position_ids)?; token_embedding.broadcast_add(&position_embedding) } } #[derive(Debug)] struct ClipAttention { k_proj: candle_nn::Linear, v_proj: candle_nn::Linear, q_proj: candle_nn::Linear, out_proj: candle_nn::Linear, head_dim: usize, scale: f64, num_attention_heads: usize, } impl ClipAttention { fn new(vs: candle_nn::VarBuilder, c: &Config) -> Result<Self> { let embed_dim = c.embed_dim; let num_attention_heads = c.num_attention_heads; let k_proj = candle_nn::linear(embed_dim, embed_dim, vs.pp("k_proj"))?; let v_proj = candle_nn::linear(embed_dim, embed_dim, vs.pp("v_proj"))?; let q_proj = candle_nn::linear(embed_dim, embed_dim, vs.pp("q_proj"))?; let out_proj = candle_nn::linear(embed_dim, embed_dim, vs.pp("out_proj"))?; let head_dim = embed_dim / num_attention_heads; let scale = (head_dim as f64).powf(-0.5); Ok(ClipAttention { k_proj, v_proj, q_proj, out_proj, head_dim, scale, num_attention_heads, }) } fn shape(&self, xs: &Tensor, seq_len: usize, bsz: usize) -> Result<Tensor> { xs.reshape((bsz, seq_len, self.num_attention_heads, self.head_dim))? .transpose(1, 2)? .contiguous() } fn forward(&self, xs: &Tensor, causal_attention_mask: &Tensor) -> Result<Tensor> { let in_dtype = xs.dtype(); let (bsz, seq_len, embed_dim) = xs.dims3()?; let query_states = (self.q_proj.forward(xs)? * self.scale)?; let proj_shape = (bsz * self.num_attention_heads, seq_len, self.head_dim); let query_states = self .shape(&query_states, seq_len, bsz)? .reshape(proj_shape)? .to_dtype(DType::F32)?; let key_states = self .shape(&self.k_proj.forward(xs)?, seq_len, bsz)? .reshape(proj_shape)? .to_dtype(DType::F32)?; let value_states = self .shape(&self.v_proj.forward(xs)?, seq_len, bsz)? .reshape(proj_shape)? .to_dtype(DType::F32)?; let attn_weights = query_states.matmul(&key_states.transpose(1, 2)?)?; let src_len = key_states.dim(1)?; let attn_weights = attn_weights .reshape((bsz, self.num_attention_heads, seq_len, src_len))? .broadcast_add(causal_attention_mask)?; let attn_weights = attn_weights.reshape((bsz * self.num_attention_heads, seq_len, src_len))?; let attn_weights = candle_nn::ops::softmax(&attn_weights, D::Minus1)?; let attn_output = attn_weights.matmul(&value_states)?.to_dtype(in_dtype)?; let attn_output = attn_output .reshape((bsz, self.num_attention_heads, seq_len, self.head_dim))? .transpose(1, 2)? .reshape((bsz, seq_len, embed_dim))?; self.out_proj.forward(&attn_output) } } #[derive(Debug)] struct ClipMlp { fc1: candle_nn::Linear, fc2: candle_nn::Linear, activation: Activation, } impl ClipMlp { fn new(vs: candle_nn::VarBuilder, c: &Config) -> Result<Self> { let fc1 = candle_nn::linear(c.embed_dim, c.intermediate_size, vs.pp("fc1"))?; let fc2 = candle_nn::linear(c.intermediate_size, c.embed_dim, vs.pp("fc2"))?; Ok(ClipMlp { fc1, fc2, activation: c.activation, }) } } impl ClipMlp { fn forward(&self, xs: &Tensor) -> Result<Tensor> { let xs = self.fc1.forward(xs)?; self.fc2.forward(&self.activation.forward(&xs)?) } } #[derive(Debug)] struct ClipEncoderLayer { self_attn: ClipAttention, layer_norm1: candle_nn::LayerNorm, mlp: ClipMlp, layer_norm2: candle_nn::LayerNorm, } impl ClipEncoderLayer { fn new(vs: candle_nn::VarBuilder, c: &Config) -> Result<Self> { let self_attn = ClipAttention::new(vs.pp("self_attn"), c)?; let layer_norm1 = candle_nn::layer_norm(c.embed_dim, 1e-5, vs.pp("layer_norm1"))?; let mlp = ClipMlp::new(vs.pp("mlp"), c)?; let layer_norm2 = candle_nn::layer_norm(c.embed_dim, 1e-5, vs.pp("layer_norm2"))?; Ok(ClipEncoderLayer { self_attn, layer_norm1, mlp, layer_norm2, }) } fn forward(&self, xs: &Tensor, causal_attention_mask: &Tensor) -> Result<Tensor> { let residual = xs; let xs = self.layer_norm1.forward(xs)?; let xs = self.self_attn.forward(&xs, causal_attention_mask)?; let xs = (xs + residual)?; let residual = &xs; let xs = self.layer_norm2.forward(&xs)?; let xs = self.mlp.forward(&xs)?; xs + residual } } #[derive(Debug)] struct ClipEncoder { layers: Vec<ClipEncoderLayer>, } impl ClipEncoder { fn new(vs: candle_nn::VarBuilder, c: &Config) -> Result<Self> { let vs = vs.pp("layers"); let mut layers: Vec<ClipEncoderLayer> = Vec::new(); for index in 0..c.num_hidden_layers { let layer = ClipEncoderLayer::new(vs.pp(index.to_string()), c)?; layers.push(layer) } Ok(ClipEncoder { layers }) } fn forward(&self, xs: &Tensor, causal_attention_mask: &Tensor) -> Result<Tensor> { let mut xs = xs.clone(); for layer in self.layers.iter() { xs = layer.forward(&xs, causal_attention_mask)?; } Ok(xs) } } /// A CLIP transformer based model. #[derive(Debug)] pub struct ClipTextTransformer { embeddings: ClipTextEmbeddings, encoder: ClipEncoder, final_layer_norm: candle_nn::LayerNorm, } impl ClipTextTransformer { pub fn new(vs: candle_nn::VarBuilder, c: &Config) -> Result<Self> { let vs = vs.pp("text_model"); let embeddings = ClipTextEmbeddings::new(vs.pp("embeddings"), c)?; let encoder = ClipEncoder::new(vs.pp("encoder"), c)?; let final_layer_norm = candle_nn::layer_norm(c.embed_dim, 1e-5, vs.pp("final_layer_norm"))?; Ok(ClipTextTransformer { embeddings, encoder, final_layer_norm, }) } // https://github.com/huggingface/transformers/blob/674f750a57431222fa2832503a108df3badf1564/src/transformers/models/clip/modeling_clip.py#L678 fn build_causal_attention_mask( bsz: usize, seq_len: usize, mask_after: usize, device: &Device, ) -> Result<Tensor> { let mask: Vec<_> = (0..seq_len) .flat_map(|i| { (0..seq_len).map(move |j| { if j > i || j > mask_after { f32::MIN } else { 0. } }) }) .collect(); let mask = Tensor::from_slice(&mask, (seq_len, seq_len), device)?; mask.broadcast_as((bsz, seq_len, seq_len)) } pub fn forward_with_mask(&self, xs: &Tensor, mask_after: usize) -> Result<Tensor> { let (bsz, seq_len) = xs.dims2()?; let xs = self.embeddings.forward(xs)?; let causal_attention_mask = Self::build_causal_attention_mask(bsz, seq_len, mask_after, xs.device())?; let xs = self.encoder.forward(&xs, &causal_attention_mask)?; self.final_layer_norm.forward(&xs) } pub fn forward_until_encoder_layer( &self, xs: &Tensor, mask_after: usize, until_layer: isize, ) -> Result<(Tensor, Tensor)> { let (bsz, seq_len) = xs.dims2()?; let xs = self.embeddings.forward(xs)?; let causal_attention_mask = Self::build_causal_attention_mask(bsz, seq_len, mask_after, xs.device())?; let mut xs = xs.clone(); let mut intermediate = xs.clone(); // Modified encoder.forward that returns the intermediate tensor along with final output. let until_layer = if until_layer < 0 { self.encoder.layers.len() as isize + until_layer } else { until_layer } as usize; for (layer_id, layer) in self.encoder.layers.iter().enumerate() { xs = layer.forward(&xs, &causal_attention_mask)?; if layer_id == until_layer { intermediate = xs.clone(); } } Ok((self.final_layer_norm.forward(&xs)?, intermediate)) } } impl Module for ClipTextTransformer { fn forward(&self, xs: &Tensor) -> Result<Tensor> { self.forward_with_mask(xs, usize::MAX) } }
candle/candle-transformers/src/models/stable_diffusion/clip.rs/0
{ "file_path": "candle/candle-transformers/src/models/stable_diffusion/clip.rs", "repo_id": "candle", "token_count": 6983 }
62
//! T5 model implementation. //! //! T5 (Text-to-Text Transfer Transformer) is a unified text-to-text transformer model. //! This implementation follows the original model architecture. //! //! Key characteristics: //! - Text-to-text framework //! - Relative positional embeddings //! - T5-specific layer normalization //! - Encoder-decoder architecture //! - Support for sequence-to-sequence tasks //! //! References: //! - ⚡ [Interactive Wasm Example](https://huggingface.co/spaces/radames/Candle-T5-Generation-Wasm) //! - 💻[GH Model](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/modeling_t5.py) //! - 🤗 [HF Link](https://huggingface.co/docs/transformers/model_doc/t5) //! - 📝 [T5 Paper](https://arxiv.org/abs/1910.10683) //! //! # Encoder-decoder example: //! //! ```bash //! cargo run --example t5 --release -- \ //! --model-id "t5-small" \ //! --prompt "translate to German: A beautiful candle." \ //! --decode //! > ... //! > Eine schöne Kerze. //! > 9 tokens generated (2.42 token/s) //! ``` //! //! Variants such as [flan-t5](https://huggingface.co/google/flan-t5-small), [flan-ul2](https://huggingface.co/google/flan-ul2) (with `--revision "refs/pr/25"`), and [Co-EdIT](https://huggingface.co/grammarly/coedit-large) are also supported. //! //! # Translation with MADLAD //! //! //! [MADLAD-400](https://arxiv.org/abs/2309.04662) is a series of multilingual machine translation T5 models trained on 250 billion tokens covering over 450 languages using publicly available data. These models are competitive with significantly larger models. //! //! ```bash //! cargo run --example t5 --release -- \ //! --model-id "jbochi/madlad400-3b-mt" \ //! --prompt "<2de> How are you, my friend?" \ //! --decode --temperature 0 //! ... //! Wie geht es dir, mein Freund? //! ``` //! //! ## Sentence embedding example //! //! ```bash //! cargo run --example t5 --release -- \ //! --model-id "t5-small" --prompt "A beautiful candle." //! ... //! [[[ 0.0515, -0.0541, -0.0761, ..., -0.0392, 0.1511, -0.0265], //! [-0.0974, 0.0998, -0.1659, ..., -0.2450, 0.1738, -0.0164], //! [ 0.0624, -0.1024, 0.0430, ..., -0.1388, 0.0564, -0.2962], //! [-0.0389, -0.1173, 0.0026, ..., 0.1064, -0.1065, 0.0990], //! [ 0.1300, 0.0027, -0.0326, ..., 0.0026, -0.0317, 0.0851]]] //! Tensor[[1, 5, 512], f32] //! Took 303.766583ms //! ``` use crate::models::with_tracing::Embedding; use candle::{DType, Device, Module, Result, Tensor, D}; use candle_nn::{Activation, VarBuilder}; use serde::Deserialize; use std::sync::Arc; #[derive(Debug, Clone)] pub struct Linear { weight: Tensor, span: tracing::Span, } pub fn linear_no_bias(d1: usize, d2: usize, vb: VarBuilder) -> Result<Linear> { let init_ws = candle_nn::init::DEFAULT_KAIMING_NORMAL; let weight = vb.get_with_hints((d2, d1), "weight", init_ws)?; let span = tracing::span!(tracing::Level::TRACE, "linear"); Ok(Linear { weight, span }) } impl Module for Linear { fn forward(&self, xs: &Tensor) -> Result<Tensor> { let _enter = self.span.enter(); let weight = self.weight.to_dtype(xs.dtype())?; let w = match *xs.dims() { [b1, b2, _, _] => weight.broadcast_left((b1, b2))?.t()?, [bsize, _, _] => weight.broadcast_left(bsize)?.t()?, _ => weight.t()?, }; xs.matmul(&w) } } fn default_relative_attention_max_distance() -> usize { 128 } fn default_is_decoder() -> bool { false } fn default_use_cache() -> bool { true } fn default_tie_word_embeddings() -> bool { true } fn get_mask(size: usize, device: &Device) -> Result<Tensor> { let mask: Vec<_> = (0..size) .flat_map(|i| (0..size).map(move |j| u8::from(j > i))) .collect(); Tensor::from_slice(&mask, (size, size), device) } fn masked_fill(on_false: &Tensor, mask: &Tensor, on_true: f32) -> Result<Tensor> { let shape = mask.shape(); let on_true = Tensor::new(on_true, on_false.device())?.broadcast_as(shape.dims())?; let m = mask.where_cond(&on_true, on_false)?; Ok(m) } #[derive(Debug, Deserialize, Default, Clone, PartialEq)] pub struct ActivationWithOptionalGating { pub gated: bool, pub activation: candle_nn::Activation, } pub fn deserialize_feed_forward_proj_activation<'de, D>( deserializer: D, ) -> std::result::Result<ActivationWithOptionalGating, D::Error> where D: serde::de::Deserializer<'de>, { match String::deserialize(deserializer)?.as_str() { "gated-gelu" => Ok(ActivationWithOptionalGating { gated: true, activation: candle_nn::Activation::NewGelu, }), "gated-silu" => Ok(ActivationWithOptionalGating { gated: true, activation: candle_nn::Activation::Silu, }), buf => { let activation = serde_plain::from_str(buf).map_err(serde::de::Error::custom)?; Ok(ActivationWithOptionalGating { gated: false, activation, }) } } } #[derive(Debug, Clone, PartialEq, Deserialize)] pub struct Config { pub vocab_size: usize, pub d_model: usize, pub d_kv: usize, pub d_ff: usize, pub num_layers: usize, pub num_decoder_layers: Option<usize>, pub num_heads: usize, pub relative_attention_num_buckets: usize, #[serde(default = "default_relative_attention_max_distance")] pub relative_attention_max_distance: usize, pub dropout_rate: f64, pub layer_norm_epsilon: f64, pub initializer_factor: f64, #[serde(default, deserialize_with = "deserialize_feed_forward_proj_activation")] pub feed_forward_proj: ActivationWithOptionalGating, #[serde(default = "default_tie_word_embeddings")] pub tie_word_embeddings: bool, #[serde(default = "default_is_decoder")] pub is_decoder: bool, pub is_encoder_decoder: bool, #[serde(default = "default_use_cache")] pub use_cache: bool, pub pad_token_id: usize, pub eos_token_id: usize, pub decoder_start_token_id: Option<usize>, } impl Default for Config { fn default() -> Self { Self { vocab_size: 32128, d_model: 512, d_kv: 64, d_ff: 2048, num_layers: 6, num_decoder_layers: None, num_heads: 8, relative_attention_num_buckets: 32, relative_attention_max_distance: 128, dropout_rate: 0.1, layer_norm_epsilon: 1e-6, initializer_factor: 1.0, feed_forward_proj: ActivationWithOptionalGating { gated: false, activation: Activation::Relu, }, tie_word_embeddings: true, is_decoder: false, is_encoder_decoder: true, use_cache: true, pad_token_id: 0, eos_token_id: 1, decoder_start_token_id: Some(0), } } } impl Config { // https://huggingface.co/facebook/musicgen-small/blob/495da4ad086b3416a27c6187f9239f9fd96f3962/config.json#L184 pub fn musicgen_small() -> Self { Self { d_ff: 3072, d_kv: 64, d_model: 768, dropout_rate: 0.1, eos_token_id: 1, feed_forward_proj: ActivationWithOptionalGating { gated: false, activation: Activation::Relu, }, tie_word_embeddings: true, initializer_factor: 1.0, is_decoder: false, is_encoder_decoder: true, layer_norm_epsilon: 1e-6, num_decoder_layers: Some(12), num_heads: 12, num_layers: 12, pad_token_id: 0, decoder_start_token_id: Some(0), relative_attention_max_distance: 128, relative_attention_num_buckets: 32, use_cache: true, vocab_size: 32128, } } } #[derive(Debug, Clone)] struct T5LayerNorm { weight: Tensor, variance_epsilon: f64, span: tracing::Span, } impl T5LayerNorm { fn load(h: usize, eps: f64, vb: VarBuilder) -> Result<Self> { let weight = vb.get(h, "weight")?; Ok(Self { weight, variance_epsilon: eps, span: tracing::span!(tracing::Level::TRACE, "layer-norm"), }) } } impl Module for T5LayerNorm { fn forward(&self, xs: &Tensor) -> Result<Tensor> { let _enter = self.span.enter(); let dtype = xs.dtype(); let xs_f32 = xs.to_dtype(DType::F32)?; // variance = hidden_states.to(torch.float32).pow(2).mean(-1, keepdim=True) let variance = xs_f32.sqr()?.mean_keepdim(D::Minus1)?; let xs = xs_f32.broadcast_div(&(variance + self.variance_epsilon)?.sqrt()?)?; let xs = xs.to_dtype(dtype)?; let xs = xs.broadcast_mul(&self.weight.to_dtype(dtype)?)?; Ok(xs) } } #[derive(Debug, Clone)] struct T5DenseActDense { wi: Linear, wo: Linear, act: Activation, span: tracing::Span, } impl T5DenseActDense { fn load(vb: VarBuilder, cfg: &Config) -> Result<Self> { let wi = linear_no_bias(cfg.d_model, cfg.d_ff, vb.pp("wi"))?; let wo = linear_no_bias(cfg.d_ff, cfg.d_model, vb.pp("wo"))?; Ok(Self { wi, wo, act: Activation::Relu, span: tracing::span!(tracing::Level::TRACE, "dense-act-dense"), }) } } impl Module for T5DenseActDense { fn forward(&self, xs: &Tensor) -> Result<Tensor> { let _enter = self.span.enter(); let xs = self.wi.forward(xs)?; let xs = self.act.forward(&xs)?; let xs = self.wo.forward(&xs)?; Ok(xs) } } #[derive(Debug, Clone)] struct T5DenseGatedActDense { wi_0: Linear, wi_1: Linear, wo: Linear, act: Activation, span: tracing::Span, } impl T5DenseGatedActDense { fn load(vb: VarBuilder, cfg: &Config) -> Result<Self> { let wi_0 = linear_no_bias(cfg.d_model, cfg.d_ff, vb.pp("wi_0"))?; let wi_1 = linear_no_bias(cfg.d_model, cfg.d_ff, vb.pp("wi_1"))?; let wo = linear_no_bias(cfg.d_ff, cfg.d_model, vb.pp("wo"))?; Ok(Self { wi_0, wi_1, wo, act: cfg.feed_forward_proj.activation, span: tracing::span!(tracing::Level::TRACE, "dense-gated-act-dense"), }) } } impl Module for T5DenseGatedActDense { fn forward(&self, xs: &Tensor) -> Result<Tensor> { let _enter = self.span.enter(); let hidden_gelu = self.act.forward(&self.wi_0.forward(xs)?)?; let hidden_linear = self.wi_1.forward(xs)?; let xs = hidden_gelu.broadcast_mul(&hidden_linear)?; let xs = self.wo.forward(&xs)?; Ok(xs) } } #[derive(Debug, Clone)] struct T5LayerFF { dense_act: Option<T5DenseActDense>, gated_dense_act: Option<T5DenseGatedActDense>, layer_norm: T5LayerNorm, span: tracing::Span, } impl T5LayerFF { fn load(vb: VarBuilder, cfg: &Config) -> Result<Self> { let layer_norm = T5LayerNorm::load(cfg.d_model, cfg.layer_norm_epsilon, vb.pp("layer_norm"))?; let (dense_act, gated_dense_act) = if cfg.feed_forward_proj.gated { ( None, Some(T5DenseGatedActDense::load(vb.pp("DenseReluDense"), cfg)?), ) } else { ( Some(T5DenseActDense::load(vb.pp("DenseReluDense"), cfg)?), None, ) }; Ok(Self { dense_act, gated_dense_act, layer_norm, span: tracing::span!(tracing::Level::TRACE, "layer-ff"), }) } } impl Module for T5LayerFF { fn forward(&self, xs: &Tensor) -> Result<Tensor> { let _enter = self.span.enter(); let ys = self.layer_norm.forward(xs)?; let ys = match &self.dense_act { Some(dense_act) => dense_act.forward(&ys)?, None => self.gated_dense_act.as_ref().unwrap().forward(&ys)?, }; let xs = (xs + ys)?; Ok(xs) } } #[derive(Debug, Clone)] struct T5Attention { q: Linear, k: Linear, v: Linear, o: Linear, n_heads: usize, d_kv: usize, relative_attention_bias: Option<Embedding>, relative_attention_num_buckets: usize, relative_attention_max_distance: usize, inner_dim: usize, use_cache: bool, kv_cache: Option<(Tensor, Tensor)>, span: tracing::Span, span_cache: tracing::Span, span_mm: tracing::Span, span_sm: tracing::Span, } impl T5Attention { fn load( has_relative_attention_bias: bool, decoder: bool, vb: VarBuilder, cfg: &Config, ) -> Result<Self> { let inner_dim = cfg.num_heads * cfg.d_kv; let q = linear_no_bias(cfg.d_model, inner_dim, vb.pp("q"))?; let k = linear_no_bias(cfg.d_model, inner_dim, vb.pp("k"))?; let v = linear_no_bias(cfg.d_model, inner_dim, vb.pp("v"))?; let o = linear_no_bias(inner_dim, cfg.d_model, vb.pp("o"))?; let relative_attention_bias = if has_relative_attention_bias { let emb = Embedding::new( cfg.relative_attention_num_buckets, cfg.num_heads, vb.pp("relative_attention_bias"), )?; Some(emb) } else { None }; Ok(Self { q, k, v, o, n_heads: cfg.num_heads, d_kv: cfg.d_kv, relative_attention_bias, relative_attention_num_buckets: cfg.relative_attention_num_buckets, relative_attention_max_distance: cfg.relative_attention_max_distance, inner_dim, use_cache: cfg.use_cache && decoder, kv_cache: None, span: tracing::span!(tracing::Level::TRACE, "attention"), span_cache: tracing::span!(tracing::Level::TRACE, "attention-cache"), span_mm: tracing::span!(tracing::Level::TRACE, "attention-mm"), span_sm: tracing::span!(tracing::Level::TRACE, "attention-sm"), }) } fn forward( &mut self, xs: &Tensor, position_bias: Option<&Tensor>, key_value_states: Option<&Tensor>, mask: Option<&Tensor>, ) -> Result<(Tensor, Option<Tensor>)> { // Performs Self-attention (if key_value_states is None) or attention // over source sentence (provided by key_value_states). let _enter = self.span.enter(); let kv_input = match key_value_states { None => xs, Some(key_value_states) => key_value_states, }; let (b_sz, q_len) = (xs.dim(0)?, xs.dim(1)?); let kv_len = kv_input.dim(1)?; let q = self.q.forward(xs)?; let k = self.k.forward(kv_input)?; let v = self.v.forward(kv_input)?; let q = q .reshape((b_sz, q_len, self.n_heads, self.d_kv))? .transpose(1, 2)? .contiguous()?; let mut k = k .reshape((b_sz, kv_len, self.n_heads, self.d_kv))? .transpose(1, 2)?; let mut v = v .reshape((b_sz, kv_len, self.n_heads, self.d_kv))? .transpose(1, 2)?; if self.use_cache && key_value_states.is_none() { let _enter = self.span_cache.enter(); if let Some((kv_cache_k, kv_cache_v)) = &self.kv_cache { k = Tensor::cat(&[kv_cache_k, &k], 2)?; v = Tensor::cat(&[kv_cache_v, &v], 2)?; }; self.kv_cache = Some((k.clone(), v.clone())); }; let k = k.contiguous()?; let v = v.contiguous()?; // TODO: Use flash_attn. let scores = { let _enter = self.span_mm.enter(); q.matmul(&k.t()?)? }; let scores = match mask { None => scores, Some(mask) => masked_fill( &scores, &mask .unsqueeze(0)? .unsqueeze(0)? .repeat((b_sz, self.n_heads))?, f32::NEG_INFINITY, )?, }; let (scores, position_bias) = match position_bias { Some(position_bias) => ( scores.broadcast_add(position_bias)?, Some(position_bias.clone()), ), None => match &self.relative_attention_bias { None => (scores, None), Some(relative_attention_bias) => { // This only handles the bidirectional case. let kv_len = k.dim(2)?; let (q_start, q_end) = match self.use_cache { true => ((kv_len - q_len) as u32, kv_len as u32), false => (0_u32, kv_len as u32), }; let num_buckets = self.relative_attention_num_buckets as u32 / 2; let max_exact = num_buckets / 2; let relative_position = (q_start..q_end) .map(|i| { (0..kv_len as u32) .map(|j| { if i < j { if j - i < max_exact { j - i + num_buckets } else { let b = f32::log( (j - i) as f32 / max_exact as f32, self.relative_attention_max_distance as f32 / max_exact as f32, ) * (num_buckets - max_exact) as f32; u32::min( max_exact + num_buckets + b as u32, self.relative_attention_num_buckets as u32 - 1, ) } } else if i - j < max_exact { i - j } else { let b = f32::log( (i - j) as f32 / max_exact as f32, self.relative_attention_max_distance as f32 / max_exact as f32, ) * (num_buckets - max_exact) as f32; u32::min(max_exact + b as u32, num_buckets - 1) } }) .collect::<Vec<u32>>() }) .collect::<Vec<Vec<_>>>(); let relative_buckets = Tensor::new(relative_position, q.device())?; let position_bias = relative_attention_bias .forward(&relative_buckets)? .permute((2, 0, 1))? .unsqueeze(0)? .to_dtype(scores.dtype())?; (scores.broadcast_add(&position_bias)?, Some(position_bias)) // TODO: position_bias_masked? } }, }; let attn_weights = { let _enter = self.span_sm.enter(); candle_nn::ops::softmax_last_dim(&scores)? }; let attn_output = attn_weights.matmul(&v)?; let attn_output = attn_output .transpose(1, 2)? .reshape((b_sz, q_len, self.inner_dim))?; let attn_output = self.o.forward(&attn_output)?; Ok((attn_output, position_bias)) } fn clear_kv_cache(&mut self) { self.kv_cache = None } } #[derive(Debug, Clone)] struct T5LayerSelfAttention { self_attention: T5Attention, layer_norm: T5LayerNorm, span: tracing::Span, } impl T5LayerSelfAttention { fn load(h: bool, d: bool, vb: VarBuilder, cfg: &Config) -> Result<Self> { let self_attention = T5Attention::load(h, d, vb.pp("SelfAttention"), cfg)?; let layer_norm = T5LayerNorm::load(cfg.d_model, cfg.layer_norm_epsilon, vb.pp("layer_norm"))?; Ok(Self { self_attention, layer_norm, span: tracing::span!(tracing::Level::TRACE, "self-attn"), }) } fn forward( &mut self, xs: &Tensor, position_bias: Option<&Tensor>, mask: Option<&Tensor>, ) -> Result<(Tensor, Option<Tensor>)> { let _enter = self.span.enter(); let normed_xs = self.layer_norm.forward(xs)?; let (ys, position_bias) = self.self_attention .forward(&normed_xs, position_bias, None, mask)?; let ys = (xs + ys)?; Ok((ys, position_bias)) } fn clear_kv_cache(&mut self) { self.self_attention.clear_kv_cache() } } #[derive(Debug, Clone)] struct T5LayerCrossAttention { cross_attention: T5Attention, layer_norm: T5LayerNorm, span: tracing::Span, } impl T5LayerCrossAttention { fn load(decoder: bool, vb: VarBuilder, cfg: &Config) -> Result<Self> { let cross_attention = T5Attention::load(false, decoder, vb.pp("EncDecAttention"), cfg)?; let layer_norm = T5LayerNorm::load(cfg.d_model, cfg.layer_norm_epsilon, vb.pp("layer_norm"))?; Ok(Self { cross_attention, layer_norm, span: tracing::span!(tracing::Level::TRACE, "cross-attn"), }) } fn forward( &mut self, hidden_states: &Tensor, position_bias: Option<&Tensor>, key_value_states: &Tensor, ) -> Result<(Tensor, Option<Tensor>)> { let _enter = self.span.enter(); let normed_hidden_states = self.layer_norm.forward(hidden_states)?; let (ys, position_bias) = self.cross_attention.forward( &normed_hidden_states, position_bias, Some(key_value_states), None, )?; let ys = (hidden_states + ys)?; Ok((ys, position_bias)) } fn clear_kv_cache(&mut self) { self.cross_attention.clear_kv_cache() } } #[derive(Debug, Clone)] struct T5Block { self_attn: T5LayerSelfAttention, cross_attn: Option<T5LayerCrossAttention>, ff: T5LayerFF, span: tracing::Span, } impl T5Block { fn load( has_relative_attention_bias: bool, decoder: bool, vb: VarBuilder, cfg: &Config, ) -> Result<Self> { let vb = vb.pp("layer"); let self_attn = T5LayerSelfAttention::load(has_relative_attention_bias, decoder, vb.pp("0"), cfg)?; let cross_attn = if cfg.is_decoder { Some(T5LayerCrossAttention::load(decoder, vb.pp("1"), cfg)?) } else { None }; let ff_i = if cross_attn.is_some() { 2 } else { 1 }; let ff = T5LayerFF::load(vb.pp(ff_i.to_string()), cfg)?; Ok(Self { self_attn, cross_attn, ff, span: tracing::span!(tracing::Level::TRACE, "block"), }) } fn forward( &mut self, xs: &Tensor, position_bias: Option<&Tensor>, encoder_hidden_states: Option<&Tensor>, ) -> Result<(Tensor, Option<Tensor>)> { let _enter = self.span.enter(); // TODO: Cache masks let mask = match self.cross_attn.is_some() { true => { let mask_len = xs.dim(1)?; // If the input seq length is 1, no need for a mask, this is also helpful to avoid shape // issues when using the KV cache in the decoder. if mask_len <= 1 { None } else { Some(get_mask(mask_len, xs.device())?) } } false => None, }; let (mut xs, position_bias) = self.self_attn.forward(xs, position_bias, mask.as_ref())?; // TODO: clamp for f16? if let Some(cross_attn) = &mut self.cross_attn { (xs, _) = cross_attn.forward(&xs, None, encoder_hidden_states.unwrap())?; // TODO: clamp for f16? } let xs = self.ff.forward(&xs)?; // TODO: clamp for f16? Ok((xs, position_bias)) } fn clear_kv_cache(&mut self) { self.self_attn.clear_kv_cache(); self.cross_attn.iter_mut().for_each(|c| c.clear_kv_cache()); } } #[derive(Debug, Clone)] struct T5Stack { block: Vec<T5Block>, shared: Arc<Embedding>, final_layer_norm: T5LayerNorm, span: tracing::Span, } impl T5Stack { fn load(decoder: bool, vb: VarBuilder, shared: &Arc<Embedding>, cfg: &Config) -> Result<Self> { let block = (0..cfg.num_layers) .map(|i| T5Block::load(i == 0, decoder, vb.pp(format!("block.{i}")), cfg)) .collect::<Result<Vec<_>>>()?; let final_layer_norm = T5LayerNorm::load( cfg.d_model, cfg.layer_norm_epsilon, vb.pp("final_layer_norm"), )?; Ok(Self { block, shared: shared.clone(), final_layer_norm, span: tracing::span!(tracing::Level::TRACE, "stack"), }) } fn forward( &mut self, input_ids: &Tensor, encoder_hidden_states: Option<&Tensor>, ) -> Result<Tensor> { self.forward_dt(input_ids, encoder_hidden_states, None) } fn forward_dt( &mut self, input_ids: &Tensor, encoder_hidden_states: Option<&Tensor>, dtype: Option<DType>, ) -> Result<Tensor> { let _enter = self.span.enter(); let input_embeds = self.shared.as_ref().forward(input_ids)?; let input_embeds = match dtype { None => input_embeds, Some(dtype) => input_embeds.to_dtype(dtype)?, }; let mut hidden_states = input_embeds; let mut position_bias = None; for block in self.block.iter_mut() { (hidden_states, position_bias) = block.forward( &hidden_states, position_bias.as_ref(), encoder_hidden_states, )? } self.final_layer_norm.forward(&hidden_states) } fn clear_kv_cache(&mut self) { self.block.iter_mut().for_each(|b| b.clear_kv_cache()) } } #[derive(Debug, Clone)] pub struct T5EncoderModel { encoder: T5Stack, device: Device, span: tracing::Span, } impl T5EncoderModel { pub fn load(vb: VarBuilder, cfg: &Config) -> Result<Self> { let shared_vb = if vb.contains_tensor("shared.weight") { vb.pp("shared") } else if vb.contains_tensor("decoder.embed_tokens") { vb.pp("decoder").pp("embed_tokens") } else { vb.pp("encoder").pp("embed_tokens") }; let shared = Embedding::new(cfg.vocab_size, cfg.d_model, shared_vb)?; let shared = Arc::new(shared); let encoder = T5Stack::load(false, vb.pp("encoder"), &shared, cfg)?; Ok(Self { encoder, device: vb.device().clone(), span: tracing::span!(tracing::Level::TRACE, "encoder"), }) } pub fn forward(&mut self, input_ids: &Tensor) -> Result<Tensor> { let _enter = self.span.enter(); self.encoder.forward(input_ids, None) } pub fn forward_dt(&mut self, input_ids: &Tensor, dtype: Option<DType>) -> Result<Tensor> { let _enter = self.span.enter(); self.encoder.forward_dt(input_ids, None, dtype) } pub fn device(&self) -> &Device { &self.device } pub fn clear_kv_cache(&mut self) { self.encoder.clear_kv_cache() } } #[derive(Debug, Clone)] pub struct T5ForConditionalGeneration { encoder: T5Stack, decoder: T5Stack, d_model: usize, tie_word_embeddings: bool, lm_head: Option<Linear>, shared: Arc<Embedding>, device: Device, span_decode: tracing::Span, span_decode_head: tracing::Span, } impl T5ForConditionalGeneration { pub fn load(vb: VarBuilder, cfg: &Config) -> Result<Self> { assert!(cfg.is_encoder_decoder); let d_model = cfg.d_model; let shared_vb = if vb.contains_tensor("shared.weight") { vb.pp("shared") } else { vb.pp("decoder").pp("embed_tokens") }; let shared = Embedding::new(cfg.vocab_size, cfg.d_model, shared_vb)?; let shared = Arc::new(shared); let mut encoder_cfg = cfg.clone(); encoder_cfg.is_decoder = false; encoder_cfg.use_cache = false; encoder_cfg.is_encoder_decoder = false; let encoder = T5Stack::load(false, vb.pp("encoder"), &shared, &encoder_cfg)?; let mut decoder_cfg = cfg.clone(); decoder_cfg.is_decoder = true; decoder_cfg.is_encoder_decoder = false; decoder_cfg.num_layers = cfg.num_decoder_layers.unwrap_or(cfg.num_layers); let decoder = T5Stack::load(true, vb.pp("decoder"), &shared, &decoder_cfg)?; let tie_word_embeddings = cfg.tie_word_embeddings; let lm_head = if tie_word_embeddings { None } else { Some(linear_no_bias( cfg.d_model, cfg.vocab_size, vb.pp("lm_head"), )?) }; Ok(Self { encoder, decoder, d_model, tie_word_embeddings, lm_head, shared, device: vb.device().clone(), span_decode: tracing::span!(tracing::Level::TRACE, "decode"), span_decode_head: tracing::span!(tracing::Level::TRACE, "decode-head"), }) } pub fn encode(&mut self, input_ids: &Tensor) -> Result<Tensor> { self.encoder.forward(input_ids, None) } pub fn decode( &mut self, decoder_input_ids: &Tensor, encoder_output: &Tensor, ) -> Result<Tensor> { let _enter = self.span_decode.enter(); let decoder_output = self .decoder .forward(decoder_input_ids, Some(encoder_output))?; let scaling_factor = if self.tie_word_embeddings { // Rescale output before projecting on vocab // See https://github.com/tensorflow/mesh/blob/fa19d69eafc9a482aff0b59ddd96b025c0cb207d/mesh_tensorflow/transformer/transformer.py#L586 (self.d_model as f64).sqrt() } else { 1.0 }; let sequence_output = ((decoder_output .narrow(1, decoder_output.dim(1)? - 1, 1)? .squeeze(1)?) * scaling_factor)?; let output = { let _enter = self.span_decode_head.enter(); match self.lm_head { None => sequence_output.matmul(&self.shared.embeddings().t()?)?, Some(ref lm_head) => lm_head.forward(&sequence_output)?, } }; Ok(output) } pub fn forward(&mut self, input_ids: &Tensor, decoder_input_ids: &Tensor) -> Result<Tensor> { let encoder_output = self.encode(input_ids)?; self.decode(decoder_input_ids, &encoder_output) } pub fn device(&self) -> &Device { &self.device } pub fn clear_kv_cache(&mut self) { self.encoder.clear_kv_cache(); self.decoder.clear_kv_cache(); } }
candle/candle-transformers/src/models/t5.rs/0
{ "file_path": "candle/candle-transformers/src/models/t5.rs", "repo_id": "candle", "token_count": 16735 }
63
use super::common::{AttnBlock, GlobalResponseNorm, LayerNormNoWeights, TimestepBlock, WLayerNorm}; use candle::{DType, Module, Result, Tensor, D}; use candle_nn::VarBuilder; #[derive(Debug)] pub struct ResBlockStageB { depthwise: candle_nn::Conv2d, norm: WLayerNorm, channelwise_lin1: candle_nn::Linear, channelwise_grn: GlobalResponseNorm, channelwise_lin2: candle_nn::Linear, } impl ResBlockStageB { pub fn new(c: usize, c_skip: usize, ksize: usize, vb: VarBuilder) -> Result<Self> { let cfg = candle_nn::Conv2dConfig { groups: c, padding: ksize / 2, ..Default::default() }; let depthwise = candle_nn::conv2d(c, c, ksize, cfg, vb.pp("depthwise"))?; let norm = WLayerNorm::new(c)?; let channelwise_lin1 = candle_nn::linear(c + c_skip, c * 4, vb.pp("channelwise.0"))?; let channelwise_grn = GlobalResponseNorm::new(4 * c, vb.pp("channelwise.2"))?; let channelwise_lin2 = candle_nn::linear(c * 4, c, vb.pp("channelwise.4"))?; Ok(Self { depthwise, norm, channelwise_lin1, channelwise_grn, channelwise_lin2, }) } pub fn forward(&self, xs: &Tensor, x_skip: Option<&Tensor>) -> Result<Tensor> { let x_res = xs; let xs = xs.apply(&self.depthwise)?.apply(&self.norm)?; let xs = match x_skip { None => xs.clone(), Some(x_skip) => Tensor::cat(&[&xs, x_skip], 1)?, }; let xs = xs .permute((0, 2, 3, 1))? .contiguous()? .apply(&self.channelwise_lin1)? .gelu()? .apply(&self.channelwise_grn)? .apply(&self.channelwise_lin2)? .permute((0, 3, 1, 2))?; xs + x_res } } #[derive(Debug)] struct SubBlock { res_block: ResBlockStageB, ts_block: TimestepBlock, attn_block: Option<AttnBlock>, } #[derive(Debug)] struct DownBlock { layer_norm: Option<WLayerNorm>, conv: Option<candle_nn::Conv2d>, sub_blocks: Vec<SubBlock>, } #[derive(Debug)] struct UpBlock { sub_blocks: Vec<SubBlock>, layer_norm: Option<WLayerNorm>, conv: Option<candle_nn::ConvTranspose2d>, } #[derive(Debug)] pub struct WDiffNeXt { clip_mapper: candle_nn::Linear, effnet_mappers: Vec<Option<candle_nn::Conv2d>>, seq_norm: LayerNormNoWeights, embedding_conv: candle_nn::Conv2d, embedding_ln: WLayerNorm, down_blocks: Vec<DownBlock>, up_blocks: Vec<UpBlock>, clf_ln: WLayerNorm, clf_conv: candle_nn::Conv2d, c_r: usize, patch_size: usize, } impl WDiffNeXt { #[allow(clippy::too_many_arguments)] pub fn new( c_in: usize, c_out: usize, c_r: usize, c_cond: usize, clip_embd: usize, patch_size: usize, use_flash_attn: bool, vb: VarBuilder, ) -> Result<Self> { const C_HIDDEN: [usize; 4] = [320, 640, 1280, 1280]; const BLOCKS: [usize; 4] = [4, 4, 14, 4]; const NHEAD: [usize; 4] = [1, 10, 20, 20]; const INJECT_EFFNET: [bool; 4] = [false, true, true, true]; const EFFNET_EMBD: usize = 16; let clip_mapper = candle_nn::linear(clip_embd, c_cond, vb.pp("clip_mapper"))?; let mut effnet_mappers = Vec::with_capacity(2 * INJECT_EFFNET.len()); let vb_e = vb.pp("effnet_mappers"); for (i, &inject) in INJECT_EFFNET.iter().enumerate() { let c = if inject { Some(candle_nn::conv2d( EFFNET_EMBD, c_cond, 1, Default::default(), vb_e.pp(i), )?) } else { None }; effnet_mappers.push(c) } for (i, &inject) in INJECT_EFFNET.iter().rev().enumerate() { let c = if inject { Some(candle_nn::conv2d( EFFNET_EMBD, c_cond, 1, Default::default(), vb_e.pp(i + INJECT_EFFNET.len()), )?) } else { None }; effnet_mappers.push(c) } let seq_norm = LayerNormNoWeights::new(c_cond)?; let embedding_ln = WLayerNorm::new(C_HIDDEN[0])?; let embedding_conv = candle_nn::conv2d( c_in * patch_size * patch_size, C_HIDDEN[0], 1, Default::default(), vb.pp("embedding.1"), )?; let mut down_blocks = Vec::with_capacity(C_HIDDEN.len()); for (i, &c_hidden) in C_HIDDEN.iter().enumerate() { let vb = vb.pp("down_blocks").pp(i); let (layer_norm, conv, start_layer_i) = if i > 0 { let layer_norm = WLayerNorm::new(C_HIDDEN[i - 1])?; let cfg = candle_nn::Conv2dConfig { stride: 2, ..Default::default() }; let conv = candle_nn::conv2d(C_HIDDEN[i - 1], c_hidden, 2, cfg, vb.pp("0.1"))?; (Some(layer_norm), Some(conv), 1) } else { (None, None, 0) }; let mut sub_blocks = Vec::with_capacity(BLOCKS[i]); let mut layer_i = start_layer_i; for _j in 0..BLOCKS[i] { let c_skip = if INJECT_EFFNET[i] { c_cond } else { 0 }; let res_block = ResBlockStageB::new(c_hidden, c_skip, 3, vb.pp(layer_i))?; layer_i += 1; let ts_block = TimestepBlock::new(c_hidden, c_r, vb.pp(layer_i))?; layer_i += 1; let attn_block = if i == 0 { None } else { let attn_block = AttnBlock::new( c_hidden, c_cond, NHEAD[i], true, use_flash_attn, vb.pp(layer_i), )?; layer_i += 1; Some(attn_block) }; let sub_block = SubBlock { res_block, ts_block, attn_block, }; sub_blocks.push(sub_block) } let down_block = DownBlock { layer_norm, conv, sub_blocks, }; down_blocks.push(down_block) } let mut up_blocks = Vec::with_capacity(C_HIDDEN.len()); for (i, &c_hidden) in C_HIDDEN.iter().enumerate().rev() { let vb = vb.pp("up_blocks").pp(C_HIDDEN.len() - 1 - i); let mut sub_blocks = Vec::with_capacity(BLOCKS[i]); let mut layer_i = 0; for j in 0..BLOCKS[i] { let c_skip = if INJECT_EFFNET[i] { c_cond } else { 0 }; let c_skip_res = if i < BLOCKS.len() - 1 && j == 0 { c_hidden + c_skip } else { c_skip }; let res_block = ResBlockStageB::new(c_hidden, c_skip_res, 3, vb.pp(layer_i))?; layer_i += 1; let ts_block = TimestepBlock::new(c_hidden, c_r, vb.pp(layer_i))?; layer_i += 1; let attn_block = if i == 0 { None } else { let attn_block = AttnBlock::new( c_hidden, c_cond, NHEAD[i], true, use_flash_attn, vb.pp(layer_i), )?; layer_i += 1; Some(attn_block) }; let sub_block = SubBlock { res_block, ts_block, attn_block, }; sub_blocks.push(sub_block) } let (layer_norm, conv) = if i > 0 { let layer_norm = WLayerNorm::new(C_HIDDEN[i - 1])?; let cfg = candle_nn::ConvTranspose2dConfig { stride: 2, ..Default::default() }; let conv = candle_nn::conv_transpose2d( c_hidden, C_HIDDEN[i - 1], 2, cfg, vb.pp(layer_i).pp(1), )?; (Some(layer_norm), Some(conv)) } else { (None, None) }; let up_block = UpBlock { layer_norm, conv, sub_blocks, }; up_blocks.push(up_block) } let clf_ln = WLayerNorm::new(C_HIDDEN[0])?; let clf_conv = candle_nn::conv2d( C_HIDDEN[0], 2 * c_out * patch_size * patch_size, 1, Default::default(), vb.pp("clf.1"), )?; Ok(Self { clip_mapper, effnet_mappers, seq_norm, embedding_conv, embedding_ln, down_blocks, up_blocks, clf_ln, clf_conv, c_r, patch_size, }) } fn gen_r_embedding(&self, r: &Tensor) -> Result<Tensor> { const MAX_POSITIONS: usize = 10000; let r = (r * MAX_POSITIONS as f64)?; let half_dim = self.c_r / 2; let emb = (MAX_POSITIONS as f64).ln() / (half_dim - 1) as f64; let emb = (Tensor::arange(0u32, half_dim as u32, r.device())?.to_dtype(DType::F32)? * -emb)? .exp()?; let emb = r.unsqueeze(1)?.broadcast_mul(&emb.unsqueeze(0)?)?; let emb = Tensor::cat(&[emb.sin()?, emb.cos()?], 1)?; let emb = if self.c_r % 2 == 1 { emb.pad_with_zeros(D::Minus1, 0, 1)? } else { emb }; emb.to_dtype(r.dtype()) } fn gen_c_embeddings(&self, clip: &Tensor) -> Result<Tensor> { clip.apply(&self.clip_mapper)?.apply(&self.seq_norm) } pub fn forward( &self, xs: &Tensor, r: &Tensor, effnet: &Tensor, clip: Option<&Tensor>, ) -> Result<Tensor> { const EPS: f64 = 1e-3; let r_embed = self.gen_r_embedding(r)?; let clip = match clip { None => None, Some(clip) => Some(self.gen_c_embeddings(clip)?), }; let x_in = xs; let mut xs = xs .apply(&|xs: &_| candle_nn::ops::pixel_unshuffle(xs, self.patch_size))? .apply(&self.embedding_conv)? .apply(&self.embedding_ln)?; let mut level_outputs = Vec::new(); for (i, down_block) in self.down_blocks.iter().enumerate() { if let Some(ln) = &down_block.layer_norm { xs = xs.apply(ln)? } if let Some(conv) = &down_block.conv { xs = xs.apply(conv)? } let skip = match &self.effnet_mappers[i] { None => None, Some(m) => { let effnet = effnet.interpolate2d(xs.dim(D::Minus2)?, xs.dim(D::Minus1)?)?; Some(m.forward(&effnet)?) } }; for block in down_block.sub_blocks.iter() { xs = block.res_block.forward(&xs, skip.as_ref())?; xs = block.ts_block.forward(&xs, &r_embed)?; if let Some(attn_block) = &block.attn_block { xs = attn_block.forward(&xs, clip.as_ref().unwrap())?; } } level_outputs.push(xs.clone()) } level_outputs.reverse(); let mut xs = level_outputs[0].clone(); for (i, up_block) in self.up_blocks.iter().enumerate() { let effnet_c = match &self.effnet_mappers[self.down_blocks.len() + i] { None => None, Some(m) => { let effnet = effnet.interpolate2d(xs.dim(D::Minus2)?, xs.dim(D::Minus1)?)?; Some(m.forward(&effnet)?) } }; for (j, block) in up_block.sub_blocks.iter().enumerate() { let skip = if j == 0 && i > 0 { Some(&level_outputs[i]) } else { None }; let skip = match (skip, effnet_c.as_ref()) { (Some(skip), Some(effnet_c)) => Some(Tensor::cat(&[skip, effnet_c], 1)?), (None, Some(skip)) | (Some(skip), None) => Some(skip.clone()), (None, None) => None, }; xs = block.res_block.forward(&xs, skip.as_ref())?; xs = block.ts_block.forward(&xs, &r_embed)?; if let Some(attn_block) = &block.attn_block { xs = attn_block.forward(&xs, clip.as_ref().unwrap())?; } } if let Some(ln) = &up_block.layer_norm { xs = xs.apply(ln)? } if let Some(conv) = &up_block.conv { xs = xs.apply(conv)? } } let ab = xs .apply(&self.clf_ln)? .apply(&self.clf_conv)? .apply(&|xs: &_| candle_nn::ops::pixel_shuffle(xs, self.patch_size))? .chunk(2, 1)?; let b = ((candle_nn::ops::sigmoid(&ab[1])? * (1. - EPS * 2.))? + EPS)?; (x_in - &ab[0])? / b } }
candle/candle-transformers/src/models/wuerstchen/diffnext.rs/0
{ "file_path": "candle/candle-transformers/src/models/wuerstchen/diffnext.rs", "repo_id": "candle", "token_count": 8148 }
64
//load Candle Bert Module wasm module import init, { Model } from "./build/m.js"; async function fetchArrayBuffer(url) { const cacheName = "bert-candle-cache"; const cache = await caches.open(cacheName); const cachedResponse = await cache.match(url); if (cachedResponse) { const data = await cachedResponse.arrayBuffer(); return new Uint8Array(data); } const res = await fetch(url, { cache: "force-cache" }); cache.put(url, res.clone()); return new Uint8Array(await res.arrayBuffer()); } class Bert { static instance = {}; static async getInstance(weightsURL, tokenizerURL, configURL, modelID) { if (!this.instance[modelID]) { await init(); self.postMessage({ status: "loading", message: "Loading Model" }); const [weightsArrayU8, tokenizerArrayU8, mel_filtersArrayU8] = await Promise.all([ fetchArrayBuffer(weightsURL), fetchArrayBuffer(tokenizerURL), fetchArrayBuffer(configURL), ]); this.instance[modelID] = new Model( weightsArrayU8, tokenizerArrayU8, mel_filtersArrayU8 ); } else { self.postMessage({ status: "ready", message: "Model Already Loaded" }); } return this.instance[modelID]; } } self.addEventListener("message", async (event) => { const { weightsURL, tokenizerURL, configURL, modelID, sentences, normalize = true, } = event.data; try { self.postMessage({ status: "ready", message: "Starting Bert Model" }); const model = await Bert.getInstance( weightsURL, tokenizerURL, configURL, modelID ); self.postMessage({ status: "embedding", message: "Calculating Embeddings", }); const output = model.get_embeddings({ sentences: sentences, normalize_embeddings: normalize, }); self.postMessage({ status: "complete", message: "complete", output: output.data, }); } catch (e) { self.postMessage({ error: e }); } });
candle/candle-wasm-examples/bert/bertWorker.js/0
{ "file_path": "candle/candle-wasm-examples/bert/bertWorker.js", "repo_id": "candle", "token_count": 779 }
65
import init, { Model } from "./build/m.js"; async function fetchArrayBuffer(url, cacheModel = true) { if (!cacheModel) return new Uint8Array(await (await fetch(url)).arrayBuffer()); const cacheName = "moondream-candle-cache"; const cache = await caches.open(cacheName); const cachedResponse = await cache.match(url); if (cachedResponse) { const data = await cachedResponse.arrayBuffer(); return new Uint8Array(data); } const res = await fetch(url, { cache: "force-cache" }); cache.put(url, res.clone()); return new Uint8Array(await res.arrayBuffer()); } async function concatenateArrayBuffers(urls) { const arrayBuffers = await Promise.all( urls.map((url) => fetchArrayBuffer(url)) ); let totalLength = arrayBuffers.reduce( (acc, arrayBuffer) => acc + arrayBuffer.byteLength, 0 ); let concatenatedBuffer = new Uint8Array(totalLength); let offset = 0; arrayBuffers.forEach((buffer) => { concatenatedBuffer.set(new Uint8Array(buffer), offset); offset += buffer.byteLength; }); return concatenatedBuffer; } class Moondream { static imageArrayHash = {}; static instance = {}; static currentModelID = null; static async getInstance(weightsURL, modelID, tokenizerURL, quantized) { // load individual modelID only once if (!this.instance[modelID]) { await init(); self.postMessage({ status: "loading", message: "Loading Model" }); const [weightsArrayU8, tokenizerArrayU8] = await Promise.all([ weightsURL instanceof Array ? concatenateArrayBuffers(weightsURL) : fetchArrayBuffer(weightsURL), fetchArrayBuffer(tokenizerURL), ]); this.instance[modelID] = new Model( weightsArrayU8, tokenizerArrayU8, quantized ); } this.currentModelID = modelID; return this.instance[modelID]; } // Remove the modelID parameter from setImageEmbeddings static setImageEmbeddings(imageArrayU8) { // check if image embeddings are already set for this image and model const imageArrayHash = this.getSimpleHash(imageArrayU8); if ( this.imageArrayHash[this.currentModelID] === imageArrayHash && this.instance[this.currentModelID] ) { self.postMessage({ status: "embedding", message: "Embeddings Already Set", }); return; } this.imageArrayHash[this.currentModelID] = imageArrayHash; this.instance[this.currentModelID].set_image_embeddings(imageArrayU8); self.postMessage({ status: "embedding", message: "Embeddings Set" }); } static getSimpleHash(imageArrayU8) { // get simple hash of imageArrayU8 let imageArrayHash = 0; for (let i = 0; i < imageArrayU8.length; i += 100) { imageArrayHash ^= imageArrayU8[i]; } return imageArrayHash.toString(16); } } let controller = null; self.addEventListener("message", (event) => { if (event.data.command === "start") { controller = new AbortController(); generate(event.data); } else if (event.data.command === "abort") { controller.abort(); } }); async function generate(data) { const { weightsURL, modelID, tokenizerURL, quantized, imageURL, prompt, seed, temp, top_p, repeatPenalty, maxSeqLen, verbose_prompt, } = data; try { self.postMessage({ status: "loading", message: "Starting Moondream" }); const model = await Moondream.getInstance( weightsURL, modelID, tokenizerURL, quantized ); self.postMessage({ status: "loading", message: "Initializing model" }); self.postMessage({ status: "loading", message: "Loading Image" }); const imageArrayU8 = await fetchArrayBuffer(imageURL, false); self.postMessage({ status: "embedding", message: "Creating Embeddings" }); Moondream.setImageEmbeddings(imageArrayU8); self.postMessage({ status: "complete-embedding", message: "Embeddings Complete", }); const { token, token_id } = model.init_with_image_prompt({ prompt, seed: BigInt(seed), temp: parseFloat(temp), top_p: parseFloat(top_p), repeat_penalty: parseFloat(repeatPenalty), repeat_last_n: 64, verbose_prompt, }); const seq_len = 2048; let sentence = token; let maxTokens = maxSeqLen ? maxSeqLen : seq_len - prompt.length - 1; let startTime = performance.now(); let tokensCount = 0; while (tokensCount < maxTokens) { await new Promise(async (resolve) => { if (controller && controller.signal.aborted) { console.log("Aborted"); self.postMessage({ status: "aborted", message: "Aborted", output: prompt + sentence, }); return; } const { token, token_id } = await model.next_token(); if (token_id === 50256) { // <|endoftext|> self.postMessage({ status: "complete", message: "complete", output: prompt + sentence, }); return; } const tokensSec = ((tokensCount + 1) / (performance.now() - startTime)) * 1000; sentence += token; self.postMessage({ status: "generating", message: "Generating token", token: token, sentence: sentence, totalTime: performance.now() - startTime, tokensSec, prompt: prompt, }); setTimeout(resolve, 0); }); tokensCount++; } self.postMessage({ status: "complete", message: "complete", output: prompt + sentence, }); } catch (e) { self.postMessage({ error: e }); } }
candle/candle-wasm-examples/moondream/moondreamWorker.js/0
{ "file_path": "candle/candle-wasm-examples/moondream/moondreamWorker.js", "repo_id": "candle", "token_count": 2273 }
66
use candle_transformers::models::segment_anything::sam; use wasm_bindgen::prelude::*; pub use sam::{Sam, IMAGE_SIZE}; #[wasm_bindgen] extern "C" { // Use `js_namespace` here to bind `console.log(..)` instead of just // `log(..)` #[wasm_bindgen(js_namespace = console)] pub fn log(s: &str); } #[macro_export] macro_rules! console_log { // Note that this is using the `log` function imported above during // `bare_bones` ($($t:tt)*) => ($crate::log(&format_args!($($t)*).to_string())) }
candle/candle-wasm-examples/segment-anything/src/lib.rs/0
{ "file_path": "candle/candle-wasm-examples/segment-anything/src/lib.rs", "repo_id": "candle", "token_count": 213 }
67
import init, { run_app } from './pkg/candle_wasm_example_whisper.js'; async function main() { await init('/pkg/candle_wasm_example_whisper_bg.wasm'); run_app(); } main()
candle/candle-wasm-examples/whisper/main.js/0
{ "file_path": "candle/candle-wasm-examples/whisper/main.js", "repo_id": "candle", "token_count": 73 }
68
fn main() { wasm_logger::init(wasm_logger::Config::new(log::Level::Trace)); console_error_panic_hook::set_once(); yew::Renderer::<candle_wasm_example_yolo::App>::new().render(); }
candle/candle-wasm-examples/yolo/src/bin/app.rs/0
{ "file_path": "candle/candle-wasm-examples/yolo/src/bin/app.rs", "repo_id": "candle", "token_count": 82 }
69
Dockerfile .vscode/ .idea .gitignore LICENSE README.md node_modules/ .svelte-kit/ .env* !.env .env.local db models/**
chat-ui/.dockerignore/0
{ "file_path": "chat-ui/.dockerignore", "repo_id": "chat-ui", "token_count": 56 }
70
engine-strict=true
chat-ui/.npmrc/0
{ "file_path": "chat-ui/.npmrc", "repo_id": "chat-ui", "token_count": 7 }
71
{{- if .Values.infisical.enabled }} apiVersion: secrets.infisical.com/v1alpha1 kind: InfisicalSecret metadata: name: {{ include "name" $ }}-infisical-secret namespace: {{ $.Release.Namespace }} spec: authentication: universalAuth: credentialsRef: secretName: {{ .Values.infisical.operatorSecretName | quote }} secretNamespace: {{ .Values.infisical.operatorSecretNamespace | quote }} secretsScope: envSlug: {{ .Values.infisical.env | quote }} projectSlug: {{ .Values.infisical.project | quote }} secretsPath: / hostAPI: {{ .Values.infisical.url | quote }} managedSecretReference: creationPolicy: Owner secretName: {{ include "name" $ }}-secs secretNamespace: {{ .Release.Namespace | quote }} secretType: Opaque resyncInterval: {{ .Values.infisical.resyncInterval }} {{- end }}
chat-ui/chart/templates/infisical.yaml/0
{ "file_path": "chat-ui/chart/templates/infisical.yaml", "repo_id": "chat-ui", "token_count": 311 }
72
# Cloudflare | Feature | Available | | ------------------------------ | --------- | | [Tools](../tools.md) | No | | [Multimodal](../multimodal.md) | No | You may use Cloudflare Workers AI to run your own models with serverless inference. You will need to have a Cloudflare account, then get your [account ID](https://developers.cloudflare.com/fundamentals/setup/find-account-and-zone-ids/) as well as your [API token](https://developers.cloudflare.com/workers-ai/get-started/rest-api/#1-get-an-api-token) for Workers AI. You can either specify them directly in your `.env.local` using the `CLOUDFLARE_ACCOUNT_ID` and `CLOUDFLARE_API_TOKEN` variables, or you can set them directly in the endpoint config. You can find the list of models available on Cloudflare [here](https://developers.cloudflare.com/workers-ai/models/#text-generation). ```ini MODELS=`[ { "name" : "nousresearch/hermes-2-pro-mistral-7b", "tokenizer": "nousresearch/hermes-2-pro-mistral-7b", "parameters": { "stop": ["<|im_end|>"] }, "endpoints" : [ { "type" : "cloudflare" <!-- optionally specify these "accountId": "your-account-id", "authToken": "your-api-token" --> } ] } ]` ```
chat-ui/docs/source/configuration/models/providers/cloudflare.md/0
{ "file_path": "chat-ui/docs/source/configuration/models/providers/cloudflare.md", "repo_id": "chat-ui", "token_count": 516 }
73
# Running on Docker Pre-built docker images are provided with and without MongoDB built in. Refer to the [configuration section](../configuration/overview) for env variables that must be provided. We recommend using the `--env-file` option to avoid leaking secrets into your shell history. ```bash # Without built-in DB docker run -p 3000:3000 --env-file .env.local --name chat-ui ghcr.io/huggingface/chat-ui # With built-in DB docker run -p 3000:3000 --env-file .env.local -v chat-ui:/data --name chat-ui ghcr.io/huggingface/chat-ui-db ```
chat-ui/docs/source/installation/docker.md/0
{ "file_path": "chat-ui/docs/source/installation/docker.md", "repo_id": "chat-ui", "token_count": 165 }
74
/// <reference types="@sveltejs/kit" /> /// <reference types="unplugin-icons/types/svelte" /> import type { User } from "$lib/types/User"; // See https://kit.svelte.dev/docs/types#app // for information about these interfaces declare global { namespace App { // interface Error {} interface Locals { sessionId: string; user?: User & { logoutDisabled?: boolean }; isAdmin: boolean; } interface Error { message: string; errorId?: ReturnType<typeof crypto.randomUUID>; } // interface PageData {} // interface Platform {} } } export {};
chat-ui/src/app.d.ts/0
{ "file_path": "chat-ui/src/app.d.ts", "repo_id": "chat-ui", "token_count": 199 }
75
<script lang="ts"> interface Props { label?: string; position?: "top" | "bottom" | "left" | "right"; TooltipClassNames?: string; children?: import("svelte").Snippet; } let { label = "", position = "bottom", TooltipClassNames = "", children }: Props = $props(); const positionClasses = { top: "bottom-full mb-2", bottom: "top-full mt-2", left: "right-full mr-2 top-1/2 -translate-y-1/2", right: "left-full ml-2 top-1/2 -translate-y-1/2", }; </script> <div class="group/tooltip inline-block md:relative"> {@render children?.()} <div class=" invisible absolute z-10 w-64 whitespace-normal rounded-md bg-black p-2 text-center text-white group-hover/tooltip:visible group-active/tooltip:visible max-sm:left-1/2 max-sm:-translate-x-1/2 {positionClasses[position]} {TooltipClassNames} " > {label} </div> </div>
chat-ui/src/lib/components/HoverTooltip.svelte/0
{ "file_path": "chat-ui/src/lib/components/HoverTooltip.svelte", "repo_id": "chat-ui", "token_count": 380 }
76
<script lang="ts"> import CarbonStopFilledAlt from "~icons/carbon/stop-filled-alt"; interface Props { classNames?: string; onClick?: () => void; } let { classNames = "", onClick }: Props = $props(); </script> <button type="button" onclick={onClick} class="btn flex h-8 rounded-lg border bg-white px-3 py-1 shadow-sm transition-all hover:bg-gray-100 dark:border-gray-600 dark:bg-gray-700 dark:hover:bg-gray-600 {classNames}" > <CarbonStopFilledAlt class="-ml-1 mr-1 h-[1.25rem] w-[1.1875rem] text-gray-300" /> Stop generating </button>
chat-ui/src/lib/components/StopGeneratingBtn.svelte/0
{ "file_path": "chat-ui/src/lib/components/StopGeneratingBtn.svelte", "repo_id": "chat-ui", "token_count": 211 }
77
<script lang="ts"> import { createBubbler } from "svelte/legacy"; const bubble = createBubbler(); import type { Message, MessageFile } from "$lib/types/Message"; import { createEventDispatcher, onDestroy, tick } from "svelte"; import CarbonExport from "~icons/carbon/export"; import CarbonCheckmark from "~icons/carbon/checkmark"; import CarbonCaretDown from "~icons/carbon/caret-down"; import EosIconsLoading from "~icons/eos-icons/loading"; import ChatInput from "./ChatInput.svelte"; import StopGeneratingBtn from "../StopGeneratingBtn.svelte"; import type { Model } from "$lib/types/Model"; import { page } from "$app/state"; import FileDropzone from "./FileDropzone.svelte"; import RetryBtn from "../RetryBtn.svelte"; import file2base64 from "$lib/utils/file2base64"; import type { Assistant } from "$lib/types/Assistant"; import { base } from "$app/paths"; import ContinueBtn from "../ContinueBtn.svelte"; import AssistantIntroduction from "./AssistantIntroduction.svelte"; import ChatMessage from "./ChatMessage.svelte"; import ScrollToBottomBtn from "../ScrollToBottomBtn.svelte"; import ScrollToPreviousBtn from "../ScrollToPreviousBtn.svelte"; import { browser } from "$app/environment"; import { snapScrollToBottom } from "$lib/actions/snapScrollToBottom"; import SystemPromptModal from "../SystemPromptModal.svelte"; import ChatIntroduction from "./ChatIntroduction.svelte"; import UploadedFile from "./UploadedFile.svelte"; import { useSettingsStore } from "$lib/stores/settings"; import ModelSwitch from "./ModelSwitch.svelte"; import { fly } from "svelte/transition"; import { cubicInOut } from "svelte/easing"; import type { ToolFront } from "$lib/types/Tool"; import { loginModalOpen } from "$lib/stores/loginModal"; import { beforeNavigate } from "$app/navigation"; import { isVirtualKeyboard } from "$lib/utils/isVirtualKeyboard"; interface Props { messages?: Message[]; messagesAlternatives?: Message["id"][][]; loading?: boolean; pending?: boolean; shared?: boolean; currentModel: Model; models: Model[]; assistant?: Assistant | undefined; preprompt?: string | undefined; files?: File[]; } let { messages = [], messagesAlternatives = [], loading = false, pending = false, shared = false, currentModel, models, assistant = undefined, preprompt = undefined, files = $bindable([]), }: Props = $props(); let isReadOnly = $derived(!models.some((model) => model.id === currentModel.id)); let message: string = $state(""); let timeout: ReturnType<typeof setTimeout>; let isSharedRecently = $state(false); let editMsdgId: Message["id"] | null = $state(null); let pastedLongContent = $state(false); beforeNavigate(() => { if (page.params.id) { isSharedRecently = false; } }); const dispatch = createEventDispatcher<{ message: string; share: void; stop: void; retry: { id: Message["id"]; content?: string }; continue: { id: Message["id"] }; }>(); const handleSubmit = () => { if (loading) return; dispatch("message", message); message = ""; }; let lastTarget: EventTarget | null = null; let onDrag = $state(false); const onDragEnter = (e: DragEvent) => { lastTarget = e.target; onDrag = true; }; const onDragLeave = (e: DragEvent) => { if (e.target === lastTarget) { onDrag = false; } }; const onPaste = (e: ClipboardEvent) => { const textContent = e.clipboardData?.getData("text"); if (!$settings.directPaste && textContent && textContent.length >= 3984) { e.preventDefault(); pastedLongContent = true; setTimeout(() => { pastedLongContent = false; }, 1000); const pastedFile = new File([textContent], "Pasted Content", { type: "application/vnd.chatui.clipboard", }); files = [...files, pastedFile]; } if (!e.clipboardData) { return; } // paste of files const pastedFiles = Array.from(e.clipboardData.files); if (pastedFiles.length !== 0) { e.preventDefault(); // filter based on activeMimeTypes, including wildcards const filteredFiles = pastedFiles.filter((file) => { return activeMimeTypes.some((mimeType: string) => { const [type, subtype] = mimeType.split("/"); const [fileType, fileSubtype] = file.type.split("/"); return ( (type === "*" || fileType === type) && (subtype === "*" || fileSubtype === subtype) ); }); }); files = [...files, ...filteredFiles]; } }; let lastMessage = $derived(browser && (messages.at(-1) as Message)); let lastIsError = $derived( lastMessage && !loading && (lastMessage.from === "user" || lastMessage.updates?.findIndex((u) => u.type === "status" && u.status === "error") !== -1) ); let sources = $derived( files?.map<Promise<MessageFile>>((file) => file2base64(file).then((value) => ({ type: "base64", value, mime: file.type, name: file.name, })) ) ); function onShare() { if (!confirm("Are you sure you want to share this conversation? This cannot be undone.")) { return; } dispatch("share"); isSharedRecently = true; if (timeout) { clearTimeout(timeout); } timeout = setTimeout(() => { isSharedRecently = false; }, 2000); } onDestroy(() => { if (timeout) { clearTimeout(timeout); } }); let chatContainer: HTMLElement | undefined = $state(); async function scrollToBottom() { await tick(); if (!chatContainer) return; chatContainer.scrollTop = chatContainer.scrollHeight; } // If last message is from user, scroll to bottom $effect(() => { if (lastMessage && lastMessage.from === "user") { scrollToBottom(); } }); const settings = useSettingsStore(); let mimeTypesFromActiveTools = $derived( page.data.tools .filter((tool: ToolFront) => { if (assistant) { return assistant.tools?.includes(tool._id); } if (currentModel.tools) { return $settings?.tools?.includes(tool._id) ?? tool.isOnByDefault; } return false; }) .flatMap((tool: ToolFront) => tool.mimeTypes ?? []) ); let activeMimeTypes = $derived( Array.from( new Set([ ...mimeTypesFromActiveTools, // fetch mime types from active tools either from tool settings or active assistant ...(currentModel.tools && !assistant ? ["application/pdf"] : []), // if its a tool model, we can always enable document parser so we always accept pdfs ...(currentModel.multimodal ? (currentModel.multimodalAcceptedMimetypes ?? ["image/*"]) : []), // if its a multimodal model, we always accept images ]) ) ); let isFileUploadEnabled = $derived(activeMimeTypes.length > 0); let focused = $state(false); </script> <svelte:window ondragenter={onDragEnter} ondragleave={onDragLeave} ondragover={(e) => { e.preventDefault(); bubble("dragover"); }} ondrop={(e) => { e.preventDefault(); onDrag = false; }} /> <div class="relative z-[-1] min-h-0 min-w-0"> <div class="scrollbar-custom h-full overflow-y-auto" use:snapScrollToBottom={messages.map((message) => message.content)} bind:this={chatContainer} > <div class="mx-auto flex h-full max-w-3xl flex-col gap-6 px-5 pt-6 sm:gap-8 xl:max-w-4xl xl:pt-10" > {#if assistant && !!messages.length} <a class="mx-auto flex items-center gap-1.5 rounded-full border border-gray-100 bg-gray-50 py-1 pl-1 pr-3 text-sm text-gray-800 hover:bg-gray-100 dark:border-gray-800 dark:bg-gray-800 dark:text-gray-200 dark:hover:bg-gray-700" href="{base}/assistant/{assistant._id}" > {#if assistant.avatar} <img src="{base}/settings/assistants/{assistant._id.toString()}/avatar.jpg?hash=${assistant.avatar}" alt="Avatar" class="size-5 rounded-full object-cover" /> {:else} <div class="flex size-6 items-center justify-center rounded-full bg-gray-300 font-bold uppercase text-gray-500" > {assistant.name[0]} </div> {/if} {assistant.name} </a> {:else if preprompt && preprompt != currentModel.preprompt} <SystemPromptModal preprompt={preprompt ?? ""} /> {/if} {#if messages.length > 0} <div class="flex h-max flex-col gap-8 pb-52"> {#each messages as message, idx (message.id)} <ChatMessage {loading} {message} alternatives={messagesAlternatives.find((a) => a.includes(message.id)) ?? []} isAuthor={!shared} readOnly={isReadOnly} isLast={idx === messages.length - 1} bind:editMsdgId on:retry on:vote on:continue on:showAlternateMsg /> {/each} {#if isReadOnly} <ModelSwitch {models} {currentModel} /> {/if} </div> {:else if pending} <ChatMessage loading={true} message={{ id: "0-0-0-0-0", content: "", from: "assistant", children: [], }} isAuthor={!shared} readOnly={isReadOnly} /> {:else if !assistant} <ChatIntroduction {currentModel} on:message={(ev) => { if (page.data.loginRequired) { ev.preventDefault(); $loginModalOpen = true; } else { dispatch("message", ev.detail); } }} /> {:else} <AssistantIntroduction {models} {assistant} on:message={(ev) => { if (page.data.loginRequired) { ev.preventDefault(); $loginModalOpen = true; } else { dispatch("message", ev.detail); } }} /> {/if} </div> <ScrollToPreviousBtn class="fixed right-4 max-md:bottom-[calc(50%+26px)] md:bottom-48 lg:right-10" scrollNode={chatContainer} /> <ScrollToBottomBtn class="fixed right-4 max-md:bottom-[calc(50%-26px)] md:bottom-36 lg:right-10" scrollNode={chatContainer} /> </div> <div class="pointer-events-none absolute inset-x-0 bottom-0 z-0 mx-auto flex w-full max-w-3xl flex-col items-center justify-center bg-gradient-to-t from-white via-white/100 to-white/0 px-3.5 pt-2 dark:border-gray-800 dark:from-gray-900 dark:via-gray-900/100 dark:to-gray-900/0 max-sm:py-0 sm:px-5 md:pb-4 xl:max-w-4xl [&>*]:pointer-events-auto" > {#if sources?.length && !loading} <div in:fly|local={sources.length === 1 ? { y: -20, easing: cubicInOut } : undefined} class="flex flex-row flex-wrap justify-center gap-2.5 rounded-xl pb-3" > {#each sources as source, index} {#await source then src} <UploadedFile file={src} on:close={() => { files = files.filter((_, i) => i !== index); }} /> {/await} {/each} </div> {/if} <div class="w-full"> <div class="flex w-full *:mb-3"> {#if loading} <StopGeneratingBtn classNames="ml-auto" onClick={() => dispatch("stop")} /> {:else if lastIsError} <RetryBtn classNames="ml-auto" onClick={() => { if (lastMessage && lastMessage.ancestors) { dispatch("retry", { id: lastMessage.id, }); } }} /> {:else if messages && lastMessage && lastMessage.interrupted && !isReadOnly} <div class="ml-auto gap-2"> <ContinueBtn onClick={() => { if (lastMessage && lastMessage.ancestors) { dispatch("continue", { id: lastMessage?.id, }); } }} /> </div> {/if} </div> <form tabindex="-1" aria-label={isFileUploadEnabled ? "file dropzone" : undefined} onsubmit={(e) => { e.preventDefault(); handleSubmit(); }} class={{ "relative flex w-full max-w-4xl flex-1 items-center rounded-xl border bg-gray-100 dark:border-gray-600 dark:bg-gray-700": true, "opacity-30": isReadOnly, "max-sm:mb-4": focused && isVirtualKeyboard(), }} > {#if onDrag && isFileUploadEnabled} <FileDropzone bind:files bind:onDrag mimeTypes={activeMimeTypes} /> {:else} <div class="flex w-full flex-1 rounded-xl border-none bg-transparent" class:paste-glow={pastedLongContent} > {#if lastIsError} <ChatInput value="Sorry, something went wrong. Please try again." disabled={true} /> {:else} <ChatInput {assistant} placeholder={isReadOnly ? "This conversation is read-only." : "Ask anything"} {loading} bind:value={message} bind:files mimeTypes={activeMimeTypes} on:submit={handleSubmit} {onPaste} disabled={isReadOnly || lastIsError} modelHasTools={currentModel.tools} modelIsMultimodal={currentModel.multimodal} bind:focused /> {/if} {#if loading} <button disabled class="btn absolute bottom-1 right-0.5 size-10 self-end rounded-lg bg-transparent text-gray-400" > <EosIconsLoading /> </button> {:else} <button class="btn absolute bottom-2 right-2 size-7 self-end rounded-full border bg-white text-black shadow transition-none enabled:hover:bg-white enabled:hover:shadow-inner disabled:text-gray-400/50 disabled:opacity-60 dark:border-gray-600 dark:bg-gray-900 dark:text-white dark:hover:enabled:bg-black dark:disabled:text-gray-600/50" disabled={!message || isReadOnly} type="submit" aria-label="Send message" name="submit" > <svg width="1em" height="1em" viewBox="0 0 32 32" fill="none" xmlns="http://www.w3.org/2000/svg" > <path fill-rule="evenodd" clip-rule="evenodd" d="M17.0606 4.23197C16.4748 3.64618 15.525 3.64618 14.9393 4.23197L5.68412 13.4871C5.09833 14.0729 5.09833 15.0226 5.68412 15.6084C6.2699 16.1942 7.21965 16.1942 7.80544 15.6084L14.4999 8.91395V26.7074C14.4999 27.5359 15.1715 28.2074 15.9999 28.2074C16.8283 28.2074 17.4999 27.5359 17.4999 26.7074V8.91395L24.1944 15.6084C24.7802 16.1942 25.7299 16.1942 26.3157 15.6084C26.9015 15.0226 26.9015 14.0729 26.3157 13.4871L17.0606 4.23197Z" fill="currentColor" /> </svg> </button> {/if} </div> {/if} </form> <div class={{ "mt-2 flex justify-between self-stretch px-1 text-xs text-gray-400/90 max-md:mb-2 max-sm:gap-2": true, "max-sm:hidden": focused && isVirtualKeyboard(), }} > <p> Model: {#if !assistant} {#if models.find((m) => m.id === currentModel.id)} <a href="{base}/settings/{currentModel.id}" class="inline-flex items-center hover:underline" >{currentModel.displayName}<CarbonCaretDown class="text-xxs" /></a > {:else} <span class="inline-flex items-center line-through dark:border-gray-700"> {currentModel.id} </span> {/if} {:else} {@const model = models.find((m) => m.id === assistant?.modelId)} {#if model} <a href="{base}/settings/assistants/{assistant._id}" class="inline-flex items-center border-b hover:text-gray-600 dark:border-gray-700 dark:hover:text-gray-300" >{model?.displayName}<CarbonCaretDown class="text-xxs" /></a > {:else} <span class="inline-flex items-center line-through dark:border-gray-700"> {currentModel.id} </span> {/if} {/if} <span class="max-sm:hidden">·</span><br class="sm:hidden" /> Generated content may be inaccurate or false. </p> {#if messages.length} <button class="flex flex-none items-center hover:text-gray-400 max-sm:rounded-lg max-sm:bg-gray-50 max-sm:px-2.5 dark:max-sm:bg-gray-800" type="button" class:hover:underline={!isSharedRecently} onclick={onShare} disabled={isSharedRecently} > {#if isSharedRecently} <CarbonCheckmark class="text-[.6rem] sm:mr-1.5 sm:text-green-600" /> <div class="text-green-600 max-sm:hidden">Link copied to clipboard</div> {:else} <CarbonExport class="sm:text-primary-500 text-[.6rem] sm:mr-1.5" /> <div class="max-sm:hidden">Share this conversation</div> {/if} </button> {/if} </div> </div> </div> </div> <style lang="postcss"> .paste-glow { animation: glow 1s cubic-bezier(0.4, 0, 0.2, 1) forwards; will-change: box-shadow; } @keyframes glow { 0% { box-shadow: 0 0 0 0 rgba(59, 130, 246, 0.8); } 50% { box-shadow: 0 0 20px 4px rgba(59, 130, 246, 0.6); } 100% { box-shadow: 0 0 0 0 rgba(59, 130, 246, 0); } } </style>
chat-ui/src/lib/components/chat/ChatWindow.svelte/0
{ "file_path": "chat-ui/src/lib/components/chat/ChatWindow.svelte", "repo_id": "chat-ui", "token_count": 7253 }
78
<script lang="ts"> interface Props { classNames?: string; } let { classNames = "" }: Props = $props(); </script> <svg xmlns="http://www.w3.org/2000/svg" class={classNames} width="1em" height="1em" fill="none" viewBox="0 0 32 32" ><path fill="currentColor" fill-rule="evenodd" d="M3.143 20.286h4.286v2.142H3.143A2.143 2.143 0 0 1 1 20.287V3.143A2.143 2.143 0 0 1 3.143 1h17.143a2.143 2.143 0 0 1 2.142 2.143v4.286h-2.142V3.143H3.143v17.143Zm9.643-12.857v3.214H16v2.143h-3.214V16h-2.143v-3.214H7.429v-2.143h3.214V7.429h2.143Zm14.185 2.639 3.533 3.532a1.7 1.7 0 0 1 0 2.4L15.5 31H9.57v-5.928l15-15.004a1.7 1.7 0 0 1 2.4 0Zm-15.257 18.79h2.897l10.116-10.116-2.899-2.897L11.714 25.96v2.897ZM23.346 14.33l2.897 2.897 2.429-2.43-2.897-2.896-2.43 2.429Z" clip-rule="evenodd" /></svg >
chat-ui/src/lib/components/icons/IconNew.svelte/0
{ "file_path": "chat-ui/src/lib/components/icons/IconNew.svelte", "repo_id": "chat-ui", "token_count": 451 }
79
import type { Migration } from "."; import { collections } from "$lib/server/database"; import { ObjectId } from "mongodb"; import { logger } from "$lib/server/logger"; const addToolsToSettings: Migration = { _id: new ObjectId("5c9c4c4c4c4c4c4c4c4c4c4c"), name: "Add empty 'tools' record in settings", up: async () => { const { settings } = collections; // Find all assistants whose modelId is not in modelIds, and update it to use defaultModelId await settings.updateMany( { tools: { $exists: false }, }, { $set: { tools: [] } } ); settings .createIndex({ tools: 1 }) .catch((e) => logger.error(e, "Error creating index during tools migration")); return true; }, runEveryTime: false, }; export default addToolsToSettings;
chat-ui/src/lib/migrations/routines/03-add-tools-in-settings.ts/0
{ "file_path": "chat-ui/src/lib/migrations/routines/03-add-tools-in-settings.ts", "repo_id": "chat-ui", "token_count": 272 }
80
import { Elysia } from "elysia"; import { authPlugin } from "../../authPlugin"; import { requiresUser } from "$lib/server/auth"; import { collections } from "$lib/server/database"; import { authCondition } from "$lib/server/auth"; import { config } from "$lib/server/config"; import { Client } from "@gradio/client"; import yazl from "yazl"; import { downloadFile } from "$lib/server/files/downloadFile"; import mimeTypes from "mime-types"; import { logger } from "$lib/server/logger"; export interface FeatureFlags { searchEnabled: boolean; enableAssistants: boolean; enableAssistantsRAG: boolean; enableCommunityTools: boolean; loginEnabled: boolean; loginRequired: boolean; guestMode: boolean; isAdmin: boolean; } export type ApiReturnType = Awaited<ReturnType<typeof Client.prototype.view_api>>; export const misc = new Elysia() .use(authPlugin) .get("/public-config", async () => config.getPublicConfig()) .get("/feature-flags", async ({ locals }) => { let loginRequired = false; const messagesBeforeLogin = config.MESSAGES_BEFORE_LOGIN ? parseInt(config.MESSAGES_BEFORE_LOGIN) : 0; const nConversations = await collections.conversations.countDocuments(authCondition(locals)); if (requiresUser && !locals.user) { if (messagesBeforeLogin === 0) { loginRequired = true; } else if (nConversations >= messagesBeforeLogin) { loginRequired = true; } else { // get the number of messages where `from === "assistant"` across all conversations. const totalMessages = ( await collections.conversations .aggregate([ { $match: { ...authCondition(locals), "messages.from": "assistant" } }, { $project: { messages: 1 } }, { $limit: messagesBeforeLogin + 1 }, { $unwind: "$messages" }, { $match: { "messages.from": "assistant" } }, { $count: "messages" }, ]) .toArray() )[0]?.messages ?? 0; loginRequired = totalMessages >= messagesBeforeLogin; } } return { searchEnabled: !!( config.SERPAPI_KEY || config.SERPER_API_KEY || config.SERPSTACK_API_KEY || config.SEARCHAPI_KEY || config.YDC_API_KEY || config.USE_LOCAL_WEBSEARCH || config.SEARXNG_QUERY_URL || config.BING_SUBSCRIPTION_KEY ), enableAssistants: config.ENABLE_ASSISTANTS === "true", enableAssistantsRAG: config.ENABLE_ASSISTANTS_RAG === "true", enableCommunityTools: config.COMMUNITY_TOOLS === "true", loginEnabled: requiresUser, // misnomer, this is actually whether the feature is available, not required loginRequired, guestMode: requiresUser && messagesBeforeLogin > 0, isAdmin: locals.isAdmin, } satisfies FeatureFlags; }) .get("/spaces-config", async ({ query }) => { if (config.COMMUNITY_TOOLS !== "true") { throw new Error("Community tools are not enabled"); } const space = query.space; if (!space) { throw new Error("Missing space"); } // Extract namespace from space URL or use as-is if it's already in namespace format let namespace = null; if (space.startsWith("https://huggingface.co/spaces/")) { namespace = space.split("/").slice(-2).join("/"); } else if (space.match(/^[^/]+\/[^/]+$/)) { namespace = space; } if (!namespace) { throw new Error("Invalid space name. Specify a namespace or a full URL on huggingface.co."); } try { const api = await (await Client.connect(namespace)).view_api(); return api as ApiReturnType; } catch (e) { throw new Error("Error fetching space API. Is the name correct?"); } }) .get("/export", async ({ locals }) => { if (!locals.user) { throw new Error("Not logged in"); } if (!locals.isAdmin) { throw new Error("Not admin"); } if (config.ENABLE_DATA_EXPORT !== "true") { throw new Error("Data export is not enabled"); } const nExports = await collections.messageEvents.countDocuments({ userId: locals.user._id, type: "export", expiresAt: { $gt: new Date() }, }); if (nExports >= 1) { throw new Error( "You have already exported your data recently. Please wait 1 hour before exporting again." ); } const stats: { nConversations: number; nMessages: number; nAssistants: number; nAvatars: number; nFiles: number; } = { nConversations: 0, nMessages: 0, nFiles: 0, nAssistants: 0, nAvatars: 0, }; const zipfile = new yazl.ZipFile(); const promises = [ collections.conversations .find({ ...authCondition(locals) }) .toArray() .then(async (conversations) => { const formattedConversations = await Promise.all( conversations.map(async (conversation) => { stats.nConversations++; const hashes: string[] = []; conversation.messages.forEach(async (message) => { stats.nMessages++; if (message.files) { message.files.forEach((file) => { hashes.push(file.value); }); } }); const files = await Promise.all( hashes.map(async (hash) => { try { const fileData = await downloadFile(hash, conversation._id); return fileData; } catch { return null; } }) ); const filenames: string[] = []; files.forEach((file) => { if (!file) return; const extension = mimeTypes.extension(file.mime) || null; const convId = conversation._id.toString(); const fileId = file.name.split("-")[1].slice(0, 8); const fileName = `file-${convId}-${fileId}` + (extension ? `.${extension}` : ""); filenames.push(fileName); zipfile.addBuffer(Buffer.from(file.value, "base64"), fileName); stats.nFiles++; }); return { ...conversation, messages: conversation.messages.map((message) => { return { ...message, webSearch: message.webSearch ? { prompt: message.webSearch?.prompt, searchQuery: message.webSearch?.searchQuery, results: message.webSearch?.results.map((result) => result.link), } : undefined, files: filenames, updates: undefined, }; }), }; }) ); zipfile.addBuffer( Buffer.from(JSON.stringify(formattedConversations, null, 2)), "conversations.json" ); }), collections.assistants .find({ createdById: locals.user._id }) .toArray() .then(async (assistants) => { const formattedAssistants = await Promise.all( assistants.map(async (assistant) => { if (assistant.avatar) { const fileId = collections.bucket.find({ filename: assistant._id.toString() }); const content = await fileId.next().then(async (file) => { if (!file?._id) return; const fileStream = collections.bucket.openDownloadStream(file?._id); const fileBuffer = await new Promise<Buffer>((resolve, reject) => { const chunks: Uint8Array[] = []; fileStream.on("data", (chunk) => chunks.push(chunk)); fileStream.on("error", reject); fileStream.on("end", () => resolve(Buffer.concat(chunks))); }); return fileBuffer; }); if (!content) return; zipfile.addBuffer(content, `avatar-${assistant._id.toString()}.jpg`); stats.nAvatars++; } stats.nAssistants++; return { _id: assistant._id.toString(), name: assistant.name, createdById: assistant.createdById.toString(), createdByName: assistant.createdByName, avatar: `avatar-${assistant._id.toString()}.jpg`, modelId: assistant.modelId, preprompt: assistant.preprompt, description: assistant.description, dynamicPrompt: assistant.dynamicPrompt, exampleInputs: assistant.exampleInputs, rag: assistant.rag, tools: assistant.tools, generateSettings: assistant.generateSettings, createdAt: assistant.createdAt.toISOString(), updatedAt: assistant.updatedAt.toISOString(), }; }) ); zipfile.addBuffer( Buffer.from(JSON.stringify(formattedAssistants, null, 2)), "assistants.json" ); }), ]; Promise.all(promises).then(async () => { logger.info( { userId: locals.user?._id, ...stats, }, "Exported user data" ); zipfile.end(); if (locals.user?._id) { await collections.messageEvents.insertOne({ userId: locals.user?._id, type: "export", createdAt: new Date(), expiresAt: new Date(Date.now() + 1000 * 60 * 60), // 1 hour }); } }); // @ts-expect-error - zipfile.outputStream is not typed correctly return new Response(zipfile.outputStream, { headers: { "Content-Type": "application/zip", "Content-Disposition": 'attachment; filename="export.zip"', }, }); });
chat-ui/src/lib/server/api/routes/groups/misc.ts/0
{ "file_path": "chat-ui/src/lib/server/api/routes/groups/misc.ts", "repo_id": "chat-ui", "token_count": 3810 }
81
import { buildPrompt } from "$lib/buildPrompt"; import { textGenerationStream } from "@huggingface/inference"; import { z } from "zod"; import type { Endpoint } from "../endpoints"; export const endpointAwsParametersSchema = z.object({ weight: z.number().int().positive().default(1), model: z.any(), type: z.literal("aws"), url: z.string().url(), accessKey: z .string({ description: "An AWS Access Key ID. If not provided, the default AWS identity resolution will be used", }) .min(1) .optional(), secretKey: z .string({ description: "An AWS Access Key Secret. If not provided, the default AWS identity resolution will be used", }) .min(1) .optional(), sessionToken: z.string().optional(), service: z.union([z.literal("sagemaker"), z.literal("lambda")]).default("sagemaker"), region: z.string().optional(), }); export async function endpointAws( input: z.input<typeof endpointAwsParametersSchema> ): Promise<Endpoint> { let createSignedFetcher; try { createSignedFetcher = (await import("aws-sigv4-fetch")).createSignedFetcher; } catch (e) { throw new Error("Failed to import aws-sigv4-fetch"); } const { url, accessKey, secretKey, sessionToken, model, region, service } = endpointAwsParametersSchema.parse(input); const signedFetch = createSignedFetcher({ service, region, credentials: accessKey && secretKey ? { accessKeyId: accessKey, secretAccessKey: secretKey, sessionToken } : undefined, }); return async ({ messages, preprompt, continueMessage, generateSettings }) => { const prompt = await buildPrompt({ messages, continueMessage, preprompt, model, }); return textGenerationStream( { parameters: { ...model.parameters, ...generateSettings, return_full_text: false }, model: url, inputs: prompt, }, { fetch: signedFetch, } ); }; } export default endpointAws;
chat-ui/src/lib/server/endpoints/aws/endpointAws.ts/0
{ "file_path": "chat-ui/src/lib/server/endpoints/aws/endpointAws.ts", "repo_id": "chat-ui", "token_count": 684 }
82
import type { TextGenerationStreamOutput } from "@huggingface/inference"; import type OpenAI from "openai"; import type { Stream } from "openai/streaming"; /** * Transform a stream of OpenAI.Completions.Completion into a stream of TextGenerationStreamOutput */ export async function* openAICompletionToTextGenerationStream( completionStream: Stream<OpenAI.Completions.Completion> ) { let generatedText = ""; let tokenId = 0; for await (const completion of completionStream) { const { choices } = completion; const text = choices[0]?.text ?? ""; const last = choices[0]?.finish_reason === "stop" || choices[0]?.finish_reason === "length"; if (text) { generatedText = generatedText + text; } const output: TextGenerationStreamOutput = { token: { id: tokenId++, text, logprob: 0, special: last, }, generated_text: last ? generatedText : null, details: null, }; yield output; } }
chat-ui/src/lib/server/endpoints/openai/openAICompletionToTextGenerationStream.ts/0
{ "file_path": "chat-ui/src/lib/server/endpoints/openai/openAICompletionToTextGenerationStream.ts", "repo_id": "chat-ui", "token_count": 325 }
83
import { taskModel } from "$lib/server/models"; import { MessageUpdateType, type MessageUpdate } from "$lib/types/MessageUpdate"; import type { EndpointMessage } from "./endpoints/endpoints"; export async function* generateFromDefaultEndpoint({ messages, preprompt, generateSettings, }: { messages: EndpointMessage[]; preprompt?: string; generateSettings?: Record<string, unknown>; }): AsyncGenerator<MessageUpdate, string, undefined> { try { const endpoint = await taskModel.getEndpoint(); const tokenStream = await endpoint({ messages, preprompt, generateSettings }); for await (const output of tokenStream) { // if not generated_text is here it means the generation is not done if (output.generated_text) { let generated_text = output.generated_text; for (const stop of [...(taskModel.parameters?.stop ?? []), "<|endoftext|>"]) { if (generated_text.endsWith(stop)) { generated_text = generated_text.slice(0, -stop.length).trimEnd(); } } return generated_text; } yield { type: MessageUpdateType.Stream, token: output.token.text, }; } } catch (error) { return ""; } return ""; }
chat-ui/src/lib/server/generateFromDefaultEndpoint.ts/0
{ "file_path": "chat-ui/src/lib/server/generateFromDefaultEndpoint.ts", "repo_id": "chat-ui", "token_count": 396 }
84
import type { ConfigTool } from "$lib/types/Tool"; import { ObjectId } from "mongodb"; const directlyAnswer: ConfigTool = { _id: new ObjectId("00000000000000000000000D"), type: "config", description: "Answer the user's query directly. You must use this tool before you can answer the user's query.", color: "blue", icon: "chat", displayName: "Directly Answer", isOnByDefault: true, isLocked: true, isHidden: true, name: "directlyAnswer", endpoint: null, inputs: [], outputComponent: null, outputComponentIdx: null, showOutput: false, async *call() { return { outputs: [], display: false, }; }, }; export default directlyAnswer;
chat-ui/src/lib/server/tools/directlyAnswer.ts/0
{ "file_path": "chat-ui/src/lib/server/tools/directlyAnswer.ts", "repo_id": "chat-ui", "token_count": 226 }
85
import type { SerializedHTMLElement } from "../../scrape/types"; import { MarkdownElementType, type MarkdownElement } from "../types"; // --- Markdown Elements --- /** Converts markdown element to a string with formatting */ export function stringifyMarkdownElement(elem: MarkdownElement): string { const content = elem.content.trim(); if (elem.type === MarkdownElementType.Header) return `${"#".repeat(elem.level)} ${content}\n\n`; if (elem.type === MarkdownElementType.BlockQuote) { return `${"> ".repeat(elem.depth)}${content}\n\n`; } if (elem.type === MarkdownElementType.CodeBlock) return `\`\`\`\n${content}\n\`\`\`\n\n`; if (elem.type === MarkdownElementType.UnorderedListItem) return `- ${content}\n`; if (elem.type === MarkdownElementType.OrderedListItem) { const siblings = elem.parent?.children ?? [elem]; const currentIndex = siblings.indexOf(elem); const lastAdjacentIndex = siblings .slice(currentIndex + 1) .findLastIndex((child) => child.type === MarkdownElementType.OrderedListItem); const order = currentIndex - lastAdjacentIndex + 1; return `${order}. ${content}\n`; } return `${content}\n\n`; } /** Converts a tree of markdown elements to a string with formatting */ export function stringifyMarkdownElementTree(elem: MarkdownElement): string { const stringified = stringifyMarkdownElement(elem); if (!("children" in elem)) return stringified; return stringified + elem.children.map(stringifyMarkdownElementTree).join(""); } // ----- HTML Elements ----- /** Ignores all non-inline tag types and grabs their text. Converts inline tags to markdown */ export function stringifyHTMLElements(elems: (SerializedHTMLElement | string)[]): string { return elems.map(stringifyHTMLElement).join("").trim(); } /** Ignores all non-inline tag types and grabs their text. Converts inline tags to markdown */ export function stringifyHTMLElement(elem: SerializedHTMLElement | string): string { if (typeof elem === "string") return elem; if (elem.tagName === "br") return "\n"; const content = elem.content.map(stringifyHTMLElement).join(""); if (content.length === 0) return content; if (elem.tagName === "strong" || elem.tagName === "b") return `**${content}**`; if (elem.tagName === "em" || elem.tagName === "i") return `*${content}*`; if (elem.tagName === "s" || elem.tagName === "strike") return `~~${content}~~`; if (elem.tagName === "code" || elem.tagName === "var" || elem.tagName === "tt") { return `\`${content}\``; } if (elem.tagName === "sup") return `<sup>${content}</sup>`; if (elem.tagName === "sub") return `<sub>${content}</sub>`; if (elem.tagName === "a" && content.trim().length > 0) { const href = elem.attributes.href; if (!href) return elem.content.map(stringifyHTMLElement).join(""); return `[${elem.content.map(stringifyHTMLElement).join("")}](${href})`; } return elem.content.map(stringifyHTMLElement).join(""); } /** Grabs all text content directly, ignoring HTML tags */ export function stringifyHTMLElementsUnformatted( elems: (SerializedHTMLElement | string)[] ): string { return elems.map(stringifyHTMLElementUnformatted).join(""); } /** Grabs all text content directly, ignoring HTML tags */ function stringifyHTMLElementUnformatted(elem: SerializedHTMLElement | string): string { if (typeof elem === "string") return elem; return elem.content.map(stringifyHTMLElementUnformatted).join(""); }
chat-ui/src/lib/server/websearch/markdown/utils/stringify.ts/0
{ "file_path": "chat-ui/src/lib/server/websearch/markdown/utils/stringify.ts", "repo_id": "chat-ui", "token_count": 1149 }
86
import type { WebSearchSource } from "$lib/types/WebSearch"; import type { Message } from "$lib/types/Message"; import type { Assistant } from "$lib/types/Assistant"; import { getWebSearchProvider, searchWeb } from "./endpoints"; import { generateQuery } from "./generateQuery"; import { isURLStringLocal } from "$lib/server/isURLLocal"; import { isURL } from "$lib/utils/isUrl"; import z from "zod"; import JSON5 from "json5"; import { config } from "$lib/server/config"; import { makeGeneralUpdate } from "../update"; import type { MessageWebSearchUpdate } from "$lib/types/MessageUpdate"; const listSchema = z.array(z.string()).default([]); const allowList = listSchema.parse(JSON5.parse(config.WEBSEARCH_ALLOWLIST)); const blockList = listSchema.parse(JSON5.parse(config.WEBSEARCH_BLOCKLIST)); export async function* search( messages: Message[], ragSettings?: Assistant["rag"], query?: string ): AsyncGenerator< MessageWebSearchUpdate, { searchQuery: string; pages: WebSearchSource[] }, undefined > { if (ragSettings && ragSettings?.allowedLinks.length > 0) { yield makeGeneralUpdate({ message: "Using links specified in Assistant" }); return { searchQuery: "", pages: await directLinksToSource(ragSettings.allowedLinks).then(filterByBlockList), }; } const searchQuery = query ?? (await generateQuery(messages)); yield makeGeneralUpdate({ message: `Searching ${getWebSearchProvider()}`, args: [searchQuery] }); // handle the global and (optional) rag lists if (ragSettings && ragSettings?.allowedDomains.length > 0) { yield makeGeneralUpdate({ message: "Filtering on specified domains" }); } const filters = buildQueryFromSiteFilters( [...(ragSettings?.allowedDomains ?? []), ...allowList], blockList ); const searchQueryWithFilters = `${filters} ${searchQuery}`; const searchResults = await searchWeb(searchQueryWithFilters).then(filterByBlockList); return { searchQuery: searchQueryWithFilters, pages: searchResults, }; } // ---------- // Utils function filterByBlockList(results: WebSearchSource[]): WebSearchSource[] { return results.filter((result) => !blockList.some((blocked) => result.link.includes(blocked))); } function buildQueryFromSiteFilters(allow: string[], block: string[]) { return ( allow.map((item) => "site:" + item).join(" OR ") + " " + block.map((item) => "-site:" + item).join(" ") ); } async function directLinksToSource(links: string[]): Promise<WebSearchSource[]> { if (config.ENABLE_LOCAL_FETCH !== "true") { const localLinks = await Promise.all(links.map(isURLStringLocal)); links = links.filter((_, index) => !localLinks[index]); } return links.filter(isURL).map((link) => ({ link, title: "", text: [""], })); }
chat-ui/src/lib/server/websearch/search/search.ts/0
{ "file_path": "chat-ui/src/lib/server/websearch/search/search.ts", "repo_id": "chat-ui", "token_count": 872 }
87
import type { ObjectId } from "mongodb"; import type { Message } from "./Message"; import type { Timestamps } from "./Timestamps"; import type { User } from "./User"; import type { Assistant } from "./Assistant"; export interface Conversation extends Timestamps { _id: ObjectId; sessionId?: string; userId?: User["_id"]; model: string; embeddingModel: string; title: string; rootMessageId?: Message["id"]; messages: Message[]; meta?: { fromShareId?: string; }; preprompt?: string; assistantId?: Assistant["_id"]; userAgent?: string; }
chat-ui/src/lib/types/Conversation.ts/0
{ "file_path": "chat-ui/src/lib/types/Conversation.ts", "repo_id": "chat-ui", "token_count": 182 }
88
import type { ObjectId } from "mongodb"; import type { User } from "./User"; import type { Timestamps } from "./Timestamps"; import type { BackendToolContext } from "$lib/server/tools"; import type { MessageUpdate } from "./MessageUpdate"; import { z } from "zod"; import type { ReviewStatus } from "./Review"; export const ToolColor = z.union([ z.literal("purple"), z.literal("blue"), z.literal("green"), z.literal("yellow"), z.literal("red"), ]); export const ToolIcon = z.union([ z.literal("wikis"), z.literal("tools"), z.literal("camera"), z.literal("code"), z.literal("email"), z.literal("cloud"), z.literal("terminal"), z.literal("game"), z.literal("chat"), z.literal("speaker"), z.literal("video"), ]); export const ToolOutputComponents = z .string() .toLowerCase() .pipe( z.union([ z.literal("textbox"), z.literal("markdown"), z.literal("image"), z.literal("gallery"), z.literal("number"), z.literal("audio"), z.literal("video"), z.literal("file"), z.literal("json"), ]) ); export type ToolOutputComponents = z.infer<typeof ToolOutputComponents>; export type ToolLogoColor = z.infer<typeof ToolColor>; export type ToolLogoIcon = z.infer<typeof ToolIcon>; export type ToolIOType = "str" | "int" | "float" | "bool" | "file"; export type ToolInputRequired = { paramType: "required"; name: string; description?: string; }; export type ToolInputOptional = { paramType: "optional"; name: string; description?: string; default: string | number | boolean; }; export type ToolInputFixed = { paramType: "fixed"; name: string; value: string | number | boolean; }; type ToolInputBase = ToolInputRequired | ToolInputOptional | ToolInputFixed; export type ToolInputFile = ToolInputBase & { type: "file"; mimeTypes: string; }; export type ToolInputSimple = ToolInputBase & { type: Exclude<ToolIOType, "file">; }; export type ToolInput = ToolInputFile | ToolInputSimple; export interface BaseTool { _id: ObjectId; name: string; // name that will be shown to the AI baseUrl?: string; // namespace for the tool endpoint: string | null; // endpoint to call in gradio, if null we expect to override this function in code outputComponent: string | null; // Gradio component type to use for the output outputComponentIdx: number | null; // index of the output component inputs: Array<ToolInput>; showOutput: boolean; // show output in chat or not call: BackendCall; // for displaying in the UI displayName: string; color: ToolLogoColor; icon: ToolLogoIcon; description: string; } export interface ConfigTool extends BaseTool { type: "config"; isOnByDefault?: true; isLocked?: true; isHidden?: true; } export interface CommunityTool extends BaseTool, Timestamps { type: "community"; createdById: User["_id"] | string; // user id or session createdByName?: User["username"]; // used to compute popular & trending useCount: number; last24HoursUseCount: number; review: ReviewStatus; searchTokens: string[]; } // no call function in db export type CommunityToolDB = Omit<CommunityTool, "call">; export type CommunityToolEditable = Omit< CommunityToolDB, | "_id" | "useCount" | "last24HoursUseCount" | "createdById" | "createdByName" | "review" | "searchTokens" | "type" | "createdAt" | "updatedAt" >; export type Tool = ConfigTool | CommunityTool; export type ToolFront = Pick< Tool, "type" | "name" | "displayName" | "description" | "color" | "icon" > & { _id: string; isOnByDefault: boolean; isLocked: boolean; mimeTypes: string[]; timeToUseMS?: number; }; export enum ToolResultStatus { Success = "success", Error = "error", } export interface ToolResultSuccess { status: ToolResultStatus.Success; call: ToolCall; outputs: Record<string, unknown>[]; display?: boolean; } export interface ToolResultError { status: ToolResultStatus.Error; call: ToolCall; message: string; display?: boolean; } export type ToolResult = ToolResultSuccess | ToolResultError; export interface ToolCall { name: string; parameters: Record<string, string | number | boolean>; toolId?: string; } export type BackendCall = ( params: Record<string, string | number | boolean>, context: BackendToolContext, uuid: string ) => AsyncGenerator<MessageUpdate, Omit<ToolResultSuccess, "status" | "call" | "type">, undefined>;
chat-ui/src/lib/types/Tool.ts/0
{ "file_path": "chat-ui/src/lib/types/Tool.ts", "repo_id": "chat-ui", "token_count": 1458 }
89
// Approximate width from which we disable autofocus const TABLET_VIEWPORT_WIDTH = 768; export function isDesktop(window: Window) { const { innerWidth } = window; return innerWidth > TABLET_VIEWPORT_WIDTH; }
chat-ui/src/lib/utils/isDesktop.ts/0
{ "file_path": "chat-ui/src/lib/utils/isDesktop.ts", "repo_id": "chat-ui", "token_count": 67 }
90
import type { Message } from "$lib/types/Message"; import Handlebars from "handlebars"; import { Template } from "@huggingface/jinja"; import { logger } from "$lib/server/logger"; // Register Handlebars helpers Handlebars.registerHelper("ifUser", function (this: Pick<Message, "from" | "content">, options) { if (this.from == "user") return options.fn(this); }); Handlebars.registerHelper( "ifAssistant", function (this: Pick<Message, "from" | "content">, options) { if (this.from == "assistant") return options.fn(this); } ); // Updated compileTemplate to try Jinja and fallback to Handlebars if Jinja fails export function compileTemplate<T>( input: string, model: { preprompt: string; templateEngine?: string } ) { let jinjaTemplate: Template | undefined; try { // Try to compile with Jinja jinjaTemplate = new Template(input); } catch (e) { // logger.error(e, "Could not compile with Jinja"); // Could not compile with Jinja jinjaTemplate = undefined; } const hbTemplate = Handlebars.compile<T>(input, { knownHelpers: { ifUser: true, ifAssistant: true }, knownHelpersOnly: true, noEscape: true, strict: true, preventIndent: true, }); return function render(inputs: T) { if (jinjaTemplate) { try { return jinjaTemplate.render({ ...model, ...inputs }); } catch (e) { logger.error(e, "Could not render with Jinja"); // Fallback to Handlebars if Jinja rendering fails return hbTemplate({ ...model, ...inputs }); } } return hbTemplate({ ...model, ...inputs }); }; }
chat-ui/src/lib/utils/template.ts/0
{ "file_path": "chat-ui/src/lib/utils/template.ts", "repo_id": "chat-ui", "token_count": 522 }
91
// This is a debouncer for the updates from the server to the client // It is used to prevent the client from being overloaded with too many updates // It works by keeping track of the time it takes to render the updates // and adding a safety margin to it, to find the debounce time. class UpdateDebouncer { private renderStartedAt: Date | null = null; private lastRenderTimes: number[] = []; get maxUpdateTime() { if (this.lastRenderTimes.length === 0) { return 50; } const averageTime = this.lastRenderTimes.reduce((acc, time) => acc + time, 0) / this.lastRenderTimes.length; return Math.min(averageTime * 3, 500); } public startRender() { this.renderStartedAt = new Date(); } public endRender() { if (!this.renderStartedAt) { return; } const timeSinceRenderStarted = new Date().getTime() - this.renderStartedAt.getTime(); this.lastRenderTimes.push(timeSinceRenderStarted); if (this.lastRenderTimes.length > 10) { this.lastRenderTimes.shift(); } this.renderStartedAt = null; } } export const updateDebouncer = new UpdateDebouncer();
chat-ui/src/lib/utils/updates.ts/0
{ "file_path": "chat-ui/src/lib/utils/updates.ts", "repo_id": "chat-ui", "token_count": 353 }
92
import { authCondition } from "$lib/server/auth"; import { collections } from "$lib/server/database"; import { error } from "@sveltejs/kit"; import { ObjectId } from "mongodb"; export async function DELETE({ locals, params }) { const messageId = params.messageId; if (!messageId || typeof messageId !== "string") { error(400, "Invalid message id"); } const conversation = await collections.conversations.findOne({ ...authCondition(locals), _id: new ObjectId(params.id), }); if (!conversation) { error(404, "Conversation not found"); } const filteredMessages = conversation.messages .filter( (message) => // not the message AND the message is not in ancestors !(message.id === messageId) && message.ancestors && !message.ancestors.includes(messageId) ) .map((message) => { // remove the message from children if it's there if (message.children && message.children.includes(messageId)) { message.children = message.children.filter((child) => child !== messageId); } return message; }); await collections.conversations.updateOne( { _id: conversation._id, ...authCondition(locals) }, { $set: { messages: filteredMessages } } ); return new Response(); }
chat-ui/src/routes/api/conversation/[id]/message/[messageId]/+server.ts/0
{ "file_path": "chat-ui/src/routes/api/conversation/[id]/message/[messageId]/+server.ts", "repo_id": "chat-ui", "token_count": 398 }
93
<script lang="ts"> import type { PageData } from "./$types"; import { usePublicConfig } from "$lib/utils/PublicConfig.svelte"; import { goto } from "$app/navigation"; import { base } from "$app/paths"; import { page } from "$app/state"; import CarbonAdd from "~icons/carbon/add"; import CarbonHelpFilled from "~icons/carbon/help-filled"; import CarbonClose from "~icons/carbon/close"; import CarbonArrowUpRight from "~icons/carbon/arrow-up-right"; import CarbonEarthAmerica from "~icons/carbon/earth-americas-filled"; import CarbonUserMultiple from "~icons/carbon/user-multiple"; import CarbonSearch from "~icons/carbon/search"; import CarbonTools from "~icons/carbon/tools"; import Pagination from "$lib/components/Pagination.svelte"; import { formatUserCount } from "$lib/utils/formatUserCount"; import { getHref } from "$lib/utils/getHref"; import { debounce } from "$lib/utils/debounce"; import { useSettingsStore } from "$lib/stores/settings"; import IconInternet from "$lib/components/icons/IconInternet.svelte"; import { isDesktop } from "$lib/utils/isDesktop"; import { SortKey } from "$lib/types/Assistant"; import { ReviewStatus } from "$lib/types/Review"; import { loginModalOpen } from "$lib/stores/loginModal"; interface Props { data: PageData; } let { data = $bindable() }: Props = $props(); const publicConfig = usePublicConfig(); let assistantsCreator = $derived(page.url.searchParams.get("user")); let createdByMe = $derived(data.user?.username && data.user.username === assistantsCreator); const SEARCH_DEBOUNCE_DELAY = 400; let filterInputEl: HTMLInputElement | undefined = $state(); let filterValue = $state(data.query); let isFilterInPorgress = false; let sortValue = $state(data.sort as SortKey); let showUnfeatured = $state(data.showUnfeatured); const toggleShowUnfeatured = () => { showUnfeatured = !showUnfeatured; const newUrl = getHref(page.url, { newKeys: { showUnfeatured: showUnfeatured ? "true" : undefined }, existingKeys: { behaviour: "delete", keys: [] }, }); goto(newUrl); }; const onModelChange = (e: Event) => { const newUrl = getHref(page.url, { newKeys: { modelId: (e.target as HTMLSelectElement).value }, existingKeys: { behaviour: "delete_except", keys: ["user"] }, }); resetFilter(); goto(newUrl); }; const resetFilter = () => { filterValue = ""; isFilterInPorgress = false; }; const filterOnName = debounce(async (value: string) => { filterValue = value; if (isFilterInPorgress) { return; } isFilterInPorgress = true; const newUrl = getHref(page.url, { newKeys: { q: value }, existingKeys: { behaviour: "delete", keys: ["p"] }, }); await goto(newUrl); if (isDesktop(window)) { setTimeout(() => filterInputEl?.focus(), 0); } isFilterInPorgress = false; // there was a new filter query before server returned response if (filterValue !== value) { filterOnName(filterValue); } }, SEARCH_DEBOUNCE_DELAY); const sortAssistants = () => { const newUrl = getHref(page.url, { newKeys: { sort: sortValue }, existingKeys: { behaviour: "delete", keys: ["p"] }, }); goto(newUrl); }; const settings = useSettingsStore(); </script> <svelte:head> {#if publicConfig.isHuggingChat} <title>HuggingChat - Assistants</title> <meta property="og:title" content="HuggingChat - Assistants" /> <meta property="og:type" content="link" /> <meta property="og:description" content="Browse HuggingChat assistants made by the community." /> <meta property="og:image" content="{publicConfig.assetPath}/assistants-thumbnail.png" /> <meta property="og:url" content={page.url.href} /> {/if} </svelte:head> <div class="scrollbar-custom h-full overflow-y-auto py-12 max-sm:pt-8 md:py-24"> <div class="pt-42 mx-auto flex flex-col px-5 xl:w-[60rem] 2xl:w-[64rem]"> <div class="flex items-center"> <h1 class="text-2xl font-bold">Assistants</h1> {#if publicConfig.isHuggingChat} <div class="5 ml-1.5 rounded-lg text-xxs uppercase text-gray-500 dark:text-gray-500"> beta </div> <a href="https://huggingface.co/spaces/huggingchat/chat-ui/discussions/357" class="ml-auto dark:text-gray-400 dark:hover:text-gray-300" target="_blank" aria-label="Hub discussion about assistants" > <CarbonHelpFilled /> </a> {/if} </div> <h2 class="text-gray-500">Popular assistants made by the community</h2> <div class="mt-6 flex justify-between gap-2 max-sm:flex-col sm:items-center"> <select class="mt-1 h-[34px] rounded-lg border border-gray-300 bg-gray-50 px-2 text-sm text-gray-900 focus:border-blue-700 focus:ring-blue-700 dark:border-gray-600 dark:bg-gray-700 dark:text-white dark:placeholder-gray-400" bind:value={data.selectedModel} onchange={onModelChange} aria-label="Filter assistants by model" > <option value="">All models</option> {#each data.models.filter((model) => !model.unlisted) as model} <option value={model.name}>{model.name}</option> {/each} </select> {#if data.isAdmin} <label class="mr-auto flex items-center gap-1 text-red-500" title="Admin only feature"> <input type="checkbox" checked={showUnfeatured} onchange={toggleShowUnfeatured} /> Show unfeatured assistants </label> {/if} {#if page.data.loginRequired && !data.user} <button onclick={() => { $loginModalOpen = true; }} class="flex items-center gap-1 whitespace-nowrap rounded-lg border bg-white py-1 pl-1.5 pr-2.5 shadow-sm hover:bg-gray-50 hover:shadow-none dark:border-gray-600 dark:bg-gray-700 dark:hover:bg-gray-700" > <CarbonAdd />Create new assistant </button> {:else} <a href={`${base}/settings/assistants/new`} class="flex items-center gap-1 whitespace-nowrap rounded-lg border bg-white py-1 pl-1.5 pr-2.5 shadow-sm hover:bg-gray-50 hover:shadow-none dark:border-gray-600 dark:bg-gray-700 dark:hover:bg-gray-700" > <CarbonAdd />Create new assistant </a> {/if} </div> <div class="mt-7 flex flex-wrap items-center gap-x-2 gap-y-3 text-sm"> {#if assistantsCreator && !createdByMe} <div class="flex items-center gap-1.5 rounded-full border border-gray-300 bg-gray-50 px-3 py-1 dark:border-gray-600 dark:bg-gray-700 dark:text-white" > {assistantsCreator}'s Assistants <a href={getHref(page.url, { existingKeys: { behaviour: "delete", keys: ["user", "modelId", "p", "q"] }, })} onclick={resetFilter} class="group" ><CarbonClose class="text-xs group-hover:text-gray-800 dark:group-hover:text-gray-300" /></a > </div> {#if publicConfig.isHuggingChat} <a href="https://hf.co/{assistantsCreator}" target="_blank" class="ml-auto flex items-center text-xs text-gray-500 underline hover:text-gray-800 dark:text-gray-400 dark:hover:text-gray-300" ><CarbonArrowUpRight class="mr-1 flex-none text-[0.58rem]" target="_blank" />View {assistantsCreator} on HF</a > {/if} {:else} <a href={getHref(page.url, { existingKeys: { behaviour: "delete", keys: ["user", "modelId", "p", "q"] }, })} onclick={resetFilter} class="flex items-center gap-1.5 rounded-full border px-3 py-1 {!assistantsCreator ? 'border-gray-300 bg-gray-50 dark:border-gray-600 dark:bg-gray-700 dark:text-white' : 'border-transparent text-gray-400 hover:text-gray-800 dark:hover:text-gray-300'}" > <CarbonEarthAmerica class="text-xs" /> Community </a> {#if data.user?.username} <a href={getHref(page.url, { newKeys: { user: data.user.username }, existingKeys: { behaviour: "delete", keys: ["modelId", "p", "q"] }, })} onclick={resetFilter} class="flex items-center gap-1.5 truncate rounded-full border px-3 py-1 {assistantsCreator && createdByMe ? 'border-gray-300 bg-gray-50 dark:border-gray-600 dark:bg-gray-700 dark:text-white' : 'border-transparent text-gray-400 hover:text-gray-800 dark:hover:text-gray-300'}" >{data.user.username} </a> {/if} {/if} <div class="relative ml-auto flex h-[30px] w-40 items-center rounded-full border px-2 has-[:focus]:border-gray-400 dark:border-gray-600 sm:w-64" > <CarbonSearch class="pointer-events-none absolute left-2 text-xs text-gray-400" /> <input class="h-[30px] w-full bg-transparent pl-5 focus:outline-none" placeholder="Filter by name" value={filterValue} oninput={(e) => filterOnName(e.currentTarget.value)} bind:this={filterInputEl} maxlength="150" type="search" aria-label="Filter assistants by name" /> </div> <select bind:value={sortValue} onchange={sortAssistants} aria-label="Sort assistants" class="rounded-lg border border-gray-300 bg-gray-50 px-2 py-1 text-sm text-gray-900 focus:border-blue-700 focus:ring-blue-700 dark:border-gray-600 dark:bg-gray-700 dark:text-white dark:placeholder-gray-400" > <option value={SortKey.TRENDING}>{SortKey.TRENDING}</option> <option value={SortKey.POPULAR}>{SortKey.POPULAR}</option> </select> </div> <div class="mt-8 grid grid-cols-2 gap-3 sm:gap-5 md:grid-cols-3 lg:grid-cols-4"> {#each data.assistants as assistant (assistant._id)} {@const hasRag = assistant?.rag?.allowAllDomains || !!assistant?.rag?.allowedDomains?.length || !!assistant?.rag?.allowedLinks?.length || !!assistant?.dynamicPrompt} <button class="relative flex flex-col items-center justify-center overflow-hidden text-balance rounded-xl border bg-gray-50/50 px-4 py-6 text-center shadow hover:bg-gray-50 hover:shadow-inner dark:border-gray-800/70 dark:bg-gray-950/20 dark:hover:bg-gray-950/40 max-sm:px-4 sm:h-64 sm:pb-4 xl:pt-8 {!(assistant.review === ReviewStatus.APPROVED) && !createdByMe && data.isAdmin ? 'border !border-red-500/30' : ''}" onclick={() => { if (data.settings.assistants.includes(assistant._id.toString())) { settings.instantSet({ activeModel: assistant._id.toString() }); goto(`${base}` || "/"); } else { goto(`${base}/assistant/${assistant._id}`); } }} > {#if assistant.userCount && assistant.userCount > 1} <div class="absolute right-3 top-3 flex items-center gap-1 text-xs text-gray-400" title="Number of users" > <CarbonUserMultiple class="text-xxs" />{formatUserCount(assistant.userCount)} </div> {/if} <div class="absolute left-3 top-3 flex items-center gap-1 text-xs text-gray-400"> {#if assistant.tools?.length} <div class="grid size-5 place-items-center rounded-full bg-purple-500/10" title="This assistant can use tools" > <CarbonTools class="text-xs text-purple-600" /> </div> {/if} {#if hasRag} <div class="grid size-5 place-items-center rounded-full bg-blue-500/10" title="This assistant uses the websearch." > <IconInternet classNames="text-sm text-blue-600" /> </div> {/if} </div> {#if assistant.avatar} <img src="{base}/settings/assistants/{assistant._id}/avatar.jpg" alt="Avatar" class="mb-2 aspect-square size-12 flex-none rounded-full object-cover sm:mb-6 sm:size-20" /> {:else} <div class="mb-2 flex aspect-square size-12 flex-none items-center justify-center rounded-full bg-gray-300 text-2xl font-bold uppercase text-gray-500 dark:bg-gray-800 sm:mb-6 sm:size-20" > {assistant.name[0]} </div> {/if} <h3 class="mb-2 line-clamp-2 max-w-full break-words text-center text-[.8rem] font-semibold leading-snug sm:text-sm" > {assistant.name} </h3> <p class="line-clamp-4 text-xs text-gray-700 dark:text-gray-400 sm:line-clamp-2"> {assistant.description} </p> {#if assistant.createdByName} <p class="mt-auto pt-2 text-xs text-gray-400 dark:text-gray-500"> Created by <a class="hover:underline" href="{base}/assistants?user={assistant.createdByName}" > {assistant.createdByName} </a> </p> {/if} </button> {:else} No assistants found {/each} </div> <Pagination classNames="w-full flex justify-center mt-14 mb-4" numItemsPerPage={data.numItemsPerPage} numTotalItems={data.numTotalItems} /> </div> </div>
chat-ui/src/routes/assistants/+page.svelte/0
{ "file_path": "chat-ui/src/routes/assistants/+page.svelte", "repo_id": "chat-ui", "token_count": 5324 }
94
import { dev } from "$app/environment"; import { base } from "$app/paths"; import { collections } from "$lib/server/database"; import { redirect } from "@sveltejs/kit"; import { config } from "$lib/server/config"; export async function POST({ locals, cookies }) { await collections.sessions.deleteOne({ sessionId: locals.sessionId }); cookies.delete(config.COOKIE_NAME, { path: "/", // So that it works inside the space's iframe sameSite: dev || config.ALLOW_INSECURE_COOKIES === "true" ? "lax" : "none", secure: !dev && !(config.ALLOW_INSECURE_COOKIES === "true"), httpOnly: true, }); return redirect(302, `${base}/`); }
chat-ui/src/routes/logout/+server.ts/0
{ "file_path": "chat-ui/src/routes/logout/+server.ts", "repo_id": "chat-ui", "token_count": 218 }
95
<script lang="ts"> import { invalidateAll } from "$app/navigation"; import Modal from "$lib/components/Modal.svelte"; import { createEventDispatcher } from "svelte"; const dispatch = createEventDispatcher<{ close: void }>(); let reason = $state(""); interface Props { reportUrl: string; } let { reportUrl }: Props = $props(); </script> <Modal on:close> <form onsubmit={() => { fetch(`${reportUrl}`, { method: "POST", body: JSON.stringify({ reason }), }).then(() => { dispatch("close"); invalidateAll(); }); }} class="w-full min-w-64 p-4" > <span class="mb-1 text-sm font-semibold">Report content</span> <p class="text-sm text-gray-500"> Please provide a brief description of why you are reporting this content. </p> <textarea name="reportReason" class="mt-6 max-h-48 w-full resize-y rounded-lg border-2 border-gray-200 bg-gray-100 p-2 text-smd" placeholder="Reason(s) for the report" maxlength="128" bind:value={reason} ></textarea> <div class="flex w-full flex-row justify-between px-2 pt-4"> <button type="button" class="text-sm text-gray-700 hover:underline" onclick={() => dispatch("close")}>Cancel</button > <button type="submit" class="rounded-full bg-black px-4 py-2 text-sm font-semibold text-white md:px-8" disabled={!reason} class:bg-gray-200={!reason} class:!text-gray-400={!reason} > Submit report </button> </div> </form> </Modal>
chat-ui/src/routes/settings/(nav)/assistants/[assistantId]/ReportModal.svelte/0
{ "file_path": "chat-ui/src/routes/settings/(nav)/assistants/[assistantId]/ReportModal.svelte", "repo_id": "chat-ui", "token_count": 623 }
96
@import "highlight.js/styles/atom-one-dark";
chat-ui/src/styles/highlight-js.css/0
{ "file_path": "chat-ui/src/styles/highlight-js.css", "repo_id": "chat-ui", "token_count": 17 }
97
const defaultTheme = require("tailwindcss/defaultTheme"); const colors = require("tailwindcss/colors"); /** @type {import('tailwindcss').Config} */ export default { darkMode: "class", mode: "jit", content: ["./src/**/*.{html,js,svelte,ts}"], theme: { extend: { colors: { primary: colors[process.env.PUBLIC_APP_COLOR], }, fontSize: { xxs: "0.625rem", smd: "0.94rem", }, }, }, plugins: [ require("tailwind-scrollbar")({ nocompatible: true }), require("@tailwindcss/typography"), ], };
chat-ui/tailwind.config.cjs/0
{ "file_path": "chat-ui/tailwind.config.cjs", "repo_id": "chat-ui", "token_count": 220 }
98
import json import os import tempfile import datasets from utils import generate_example_dataset, get_duration SPEED_TEST_N_EXAMPLES = 500_000 RESULTS_BASEPATH, RESULTS_FILENAME = os.path.split(__file__) RESULTS_FILE_PATH = os.path.join(RESULTS_BASEPATH, "results", RESULTS_FILENAME.replace(".py", ".json")) @get_duration def select(dataset: datasets.Dataset): _ = dataset.select(range(0, len(dataset), 2)) @get_duration def sort(dataset: datasets.Dataset): _ = dataset.sort("numbers") @get_duration def shuffle(dataset: datasets.Dataset): _ = dataset.shuffle() @get_duration def train_test_split(dataset: datasets.Dataset): _ = dataset.train_test_split(0.1) @get_duration def shard(dataset: datasets.Dataset, num_shards=10): for shard_id in range(num_shards): _ = dataset.shard(num_shards, shard_id) def benchmark_indices_mapping(): times = {"num examples": SPEED_TEST_N_EXAMPLES} functions = (select, sort, shuffle, train_test_split, shard) with tempfile.TemporaryDirectory() as tmp_dir: print("generating dataset") features = datasets.Features({"text": datasets.Value("string"), "numbers": datasets.Value("float32")}) dataset = generate_example_dataset( os.path.join(tmp_dir, "dataset.arrow"), features, num_examples=SPEED_TEST_N_EXAMPLES ) print("Functions") for func in functions: print(func.__name__) times[func.__name__] = func(dataset) with open(RESULTS_FILE_PATH, "wb") as f: f.write(json.dumps(times).encode("utf-8")) if __name__ == "__main__": # useful to run the profiler benchmark_indices_mapping()
datasets/benchmarks/benchmark_indices_mapping.py/0
{ "file_path": "datasets/benchmarks/benchmark_indices_mapping.py", "repo_id": "datasets", "token_count": 677 }
99
# The cache The cache is one of the reasons why 🤗 Datasets is so efficient. It stores previously downloaded and processed datasets so when you need to use them again, they are reloaded directly from the cache. This avoids having to download a dataset all over again, or reapplying processing functions. Even after you close and start another Python session, 🤗 Datasets will reload your dataset directly from the cache! ## Fingerprint How does the cache keeps track of what transforms are applied to a dataset? Well, 🤗 Datasets assigns a fingerprint to the cache file. A fingerprint keeps track of the current state of a dataset. The initial fingerprint is computed using a hash from the Arrow table, or a hash of the Arrow files if the dataset is on disk. Subsequent fingerprints are computed by combining the fingerprint of the previous state, and a hash of the latest transform applied. <Tip> Transforms are any of the processing methods from the [How-to Process](./process) guides such as [`Dataset.map`] or [`Dataset.shuffle`]. </Tip> Here are what the actual fingerprints look like: ```py >>> from datasets import Dataset >>> dataset1 = Dataset.from_dict({"a": [0, 1, 2]}) >>> dataset2 = dataset1.map(lambda x: {"a": x["a"] + 1}) >>> print(dataset1._fingerprint, dataset2._fingerprint) d19493523d95e2dc 5b86abacd4b42434 ``` In order for a transform to be hashable, it needs to be picklable by [dill](https://dill.readthedocs.io/en/latest/) or [pickle](https://docs.python.org/3/library/pickle). When you use a non-hashable transform, 🤗 Datasets uses a random fingerprint instead and raises a warning. The non-hashable transform is considered different from the previous transforms. As a result, 🤗 Datasets will recompute all the transforms. Make sure your transforms are serializable with pickle or dill to avoid this! An example of when 🤗 Datasets recomputes everything is when caching is disabled. When this happens, the cache files are generated every time and they get written to a temporary directory. Once your Python session ends, the cache files in the temporary directory are deleted. A random hash is assigned to these cache files, instead of a fingerprint. <Tip> When caching is disabled, use [`Dataset.save_to_disk`] to save your transformed dataset or it will be deleted once the session ends. </Tip> ## Hashing The fingerprint of a dataset is updated by hashing the function passed to `map` as well as the `map` parameters (`batch_size`, `remove_columns`, etc.). You can check the hash of any Python object using the [`fingerprint.Hasher`]: ```py >>> from datasets.fingerprint import Hasher >>> my_func = lambda example: {"length": len(example["text"])} >>> print(Hasher.hash(my_func)) '3d35e2b3e94c81d6' ``` The hash is computed by dumping the object using a `dill` pickler and hashing the dumped bytes. The pickler recursively dumps all the variables used in your function, so any change you do to an object that is used in your function, will cause the hash to change. If one of your functions doesn't seem to have the same hash across sessions, it means at least one of its variables contains a Python object that is not deterministic. When this happens, feel free to hash any object you find suspicious to try to find the object that caused the hash to change. For example, if you use a list for which the order of its elements is not deterministic across sessions, then the hash won't be the same across sessions either.
datasets/docs/source/about_cache.mdx/0
{ "file_path": "datasets/docs/source/about_cache.mdx", "repo_id": "datasets", "token_count": 909 }
100
# Search index [FAISS](https://github.com/facebookresearch/faiss) and [Elasticsearch](https://www.elastic.co/elasticsearch/) enables searching for examples in a dataset. This can be useful when you want to retrieve specific examples from a dataset that are relevant to your NLP task. For example, if you are working on an Open Domain Question Answering task, you may want to only return examples that are relevant to answering your question. This guide will show you how to build an index for your dataset that will allow you to search it. ## FAISS FAISS retrieves documents based on the similarity of their vector representations. In this example, you will generate the vector representations with the [DPR](https://huggingface.co/transformers/model_doc/dpr.html) model. 1. Download the DPR model from 🤗 Transformers: ```py >>> from transformers import DPRContextEncoder, DPRContextEncoderTokenizer >>> import torch >>> torch.set_grad_enabled(False) >>> ctx_encoder = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base") >>> ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base") ``` 2. Load your dataset and compute the vector representations: ```py >>> from datasets import load_dataset >>> ds = load_dataset('crime_and_punish', split='train[:100]') >>> ds_with_embeddings = ds.map(lambda example: {'embeddings': ctx_encoder(**ctx_tokenizer(example["line"], return_tensors="pt"))[0][0].numpy()}) ``` 3. Create the index with [`Dataset.add_faiss_index`]: ```py >>> ds_with_embeddings.add_faiss_index(column='embeddings') ``` 4. Now you can query your dataset with the `embeddings` index. Load the DPR Question Encoder, and search for a question with [`Dataset.get_nearest_examples`]: ```py >>> from transformers import DPRQuestionEncoder, DPRQuestionEncoderTokenizer >>> q_encoder = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-single-nq-base") >>> q_tokenizer = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base") >>> question = "Is it serious ?" >>> question_embedding = q_encoder(**q_tokenizer(question, return_tensors="pt"))[0][0].numpy() >>> scores, retrieved_examples = ds_with_embeddings.get_nearest_examples('embeddings', question_embedding, k=10) >>> retrieved_examples["line"][0] '_that_ serious? It is not serious at all. It’s simply a fantasy to amuse\r\n' ``` 5. You can access the index with [`Dataset.get_index`] and use it for special operations, e.g. query it using `range_search`: ```py >>> faiss_index = ds_with_embeddings.get_index('embeddings').faiss_index >>> limits, distances, indices = faiss_index.range_search(x=question_embedding.reshape(1, -1), thresh=0.95) ``` 6. When you are done querying, save the index on disk with [`Dataset.save_faiss_index`]: ```py >>> ds_with_embeddings.save_faiss_index('embeddings', 'my_index.faiss') ``` 7. Reload it at a later time with [`Dataset.load_faiss_index`]: ```py >>> ds = load_dataset('crime_and_punish', split='train[:100]') >>> ds.load_faiss_index('embeddings', 'my_index.faiss') ``` ## Elasticsearch Unlike FAISS, Elasticsearch retrieves documents based on exact matches. Start Elasticsearch on your machine, or see the [Elasticsearch installation guide](https://www.elastic.co/guide/en/elasticsearch/reference/current/setup.html) if you don't already have it installed. 1. Load the dataset you want to index: ```py >>> from datasets import load_dataset >>> squad = load_dataset('rajpurkar/squad', split='validation') ``` 2. Build the index with [`Dataset.add_elasticsearch_index`]: ```py >>> squad.add_elasticsearch_index("context", host="localhost", port="9200") ``` 3. Then you can query the `context` index with [`Dataset.get_nearest_examples`]: ```py >>> query = "machine" >>> scores, retrieved_examples = squad.get_nearest_examples("context", query, k=10) >>> retrieved_examples["title"][0] 'Computational_complexity_theory' ``` 4. If you want to reuse the index, define the `es_index_name` parameter when you build the index: ```py >>> from datasets import load_dataset >>> squad = load_dataset('rajpurkar/squad', split='validation') >>> squad.add_elasticsearch_index("context", host="localhost", port="9200", es_index_name="hf_squad_val_context") >>> squad.get_index("context").es_index_name hf_squad_val_context ``` 5. Reload it later with the index name when you call [`Dataset.load_elasticsearch_index`]: ```py >>> from datasets import load_dataset >>> squad = load_dataset('rajpurkar/squad', split='validation') >>> squad.load_elasticsearch_index("context", host="localhost", port="9200", es_index_name="hf_squad_val_context") >>> query = "machine" >>> scores, retrieved_examples = squad.get_nearest_examples("context", query, k=10) ``` For more advanced Elasticsearch usage, you can specify your own configuration with custom settings: ```py >>> import elasticsearch as es >>> import elasticsearch.helpers >>> from elasticsearch import Elasticsearch >>> es_client = Elasticsearch([{"host": "localhost", "port": "9200"}]) # default client >>> es_config = { ... "settings": { ... "number_of_shards": 1, ... "analysis": {"analyzer": {"stop_standard": {"type": "standard", " stopwords": "_english_"}}}, ... }, ... "mappings": {"properties": {"text": {"type": "text", "analyzer": "standard", "similarity": "BM25"}}}, ... } # default config >>> es_index_name = "hf_squad_context" # name of the index in Elasticsearch >>> squad.add_elasticsearch_index("context", es_client=es_client, es_config=es_config, es_index_name=es_index_name) ```
datasets/docs/source/faiss_es.mdx/0
{ "file_path": "datasets/docs/source/faiss_es.mdx", "repo_id": "datasets", "token_count": 1845 }
101
# Builder classes ## Builders 🤗 Datasets relies on two main classes during the dataset building process: [`DatasetBuilder`] and [`BuilderConfig`]. [[autodoc]] datasets.DatasetBuilder [[autodoc]] datasets.GeneratorBasedBuilder [[autodoc]] datasets.ArrowBasedBuilder [[autodoc]] datasets.BuilderConfig ## Download [[autodoc]] datasets.DownloadManager [[autodoc]] datasets.StreamingDownloadManager [[autodoc]] datasets.DownloadConfig [[autodoc]] datasets.DownloadMode ## Verification [[autodoc]] datasets.VerificationMode ## Splits [[autodoc]] datasets.SplitGenerator [[autodoc]] datasets.Split [[autodoc]] datasets.NamedSplit [[autodoc]] datasets.NamedSplitAll [[autodoc]] datasets.ReadInstruction ## Version [[autodoc]] datasets.utils.Version
datasets/docs/source/package_reference/builder_classes.mdx/0
{ "file_path": "datasets/docs/source/package_reference/builder_classes.mdx", "repo_id": "datasets", "token_count": 240 }
102