# HopeJR
## Prerequisites
- [Hardware Setup](https://github.com/TheRobotStudio/HOPEJr)
## Install LeRobot
Follow the [installation instructions](https://github.com/huggingface/lerobot#installation) to install LeRobot.
Install LeRobot with HopeJR dependencies:
```bash
pip install -e ".[hopejr]"
```
## Device Configuration
Before starting calibration and operation, you need to identify the USB ports for each HopeJR component. Run this script to find the USB ports for the arm, hand, glove, and exoskeleton:
```bash
lerobot-find-port
```
This will display the available USB ports and their associated devices. Make note of the port paths (e.g., `/dev/tty.usbmodem58760433331`, `/dev/tty.usbmodem11301`) as you'll need to specify them in the `--robot.port` and `--teleop.port` parameters when recording data, replaying episodes, or running teleoperation scripts.
## Step 1: Calibration
Before performing teleoperation, HopeJR's limbs need to be calibrated. Calibration files will be saved in `~/.cache/huggingface/lerobot/calibration`.
### 1.1 Calibrate Robot Hand
```bash
lerobot-calibrate \
--robot.type=hope_jr_hand \
--robot.port=/dev/tty.usbmodem58760432281 \
--robot.id=blue \
--robot.side=right
```
When running the calibration script, a calibration GUI will pop up. Finger joints are named as follows:
**Thumb**:
- **CMC**: base joint connecting thumb to hand
- **MCP**: knuckle joint
- **PIP**: first finger joint
- **DIP**: fingertip joint
**Index, Middle, Ring, and Pinky fingers**:
- **Radial flexor**: Moves base of finger towards the thumb
- **Ulnar flexor**: Moves base of finger towards the pinky
- **PIP/DIP**: Flexes the distal and proximal phalanx of the finger
Each one of these will need to be calibrated individually via the GUI.
Note that ulnar and radial flexors should have ranges of the same size (but with different offsets) in order to get symmetric movement.
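You can sanity-check the symmetry numerically: two ranges of equal size map the same relative position to the same normalized value regardless of offset. A minimal sketch with hypothetical tick values (not from a real calibration):

```python
def normalize(raw: int, lo: int, hi: int) -> float:
    """Map a raw motor reading in [lo, hi] to [-1.0, 1.0]."""
    return 2.0 * (raw - lo) / (hi - lo) - 1.0

# Hypothetical flexor ranges: same span (300 ticks), different offsets.
radial_lo, radial_hi = 1200, 1500
ulnar_lo, ulnar_hi = 1850, 2150

assert (radial_hi - radial_lo) == (ulnar_hi - ulnar_lo)  # equal span -> symmetric movement

# The midpoint of each range maps to the same normalized position.
print(normalize(1350, radial_lo, radial_hi))  # 0.0
print(normalize(2000, ulnar_lo, ulnar_hi))    # 0.0
```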
<p align="center">
<img
src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/calibration_gui_1.png"
alt="Setting boundaries in the hand calibration GUI"
title="Setting boundaries in the hand calibration GUI"
width="100%"
></img>
</p>
Use the calibration interface to set the range boundaries for each joint as shown above.
<p align="center">
<img
src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/calibration_gui_2.png"
alt="Saving calibration values"
title="Saving calibration values"
width="100%"
></img>
</p>
Once you have set the appropriate boundaries for all joints, click "Save" to save the calibration values to the motors.
### 1.2 Calibrate Teleoperator Glove
```bash
lerobot-calibrate \
--teleop.type=homunculus_glove \
--teleop.port=/dev/tty.usbmodem11201 \
--teleop.id=red \
--teleop.side=right
```
Move each finger through its full range of motion, starting from the thumb.
```
Move thumb through its entire range of motion.
Recording positions. Press ENTER to stop...
-------------------------------------------
NAME | MIN | POS | MAX
thumb_cmc | 1790 | 1831 | 1853
thumb_mcp | 1497 | 1514 | 1528
thumb_pip | 1466 | 1496 | 1515
thumb_dip | 1463 | 1484 | 1514
```
Continue with each finger:
```
Move middle through its entire range of motion.
Recording positions. Press ENTER to stop...
-------------------------------------------
NAME | MIN | POS | MAX
middle_mcp_abduction | 1598 | 1718 | 1820
middle_mcp_flexion | 1512 | 1658 | 2136
middle_dip | 1484 | 1500 | 1547
```
Once calibration is complete, the system will save the calibration to `/Users/your_username/.cache/huggingface/lerobot/calibration/teleoperators/homunculus_glove/red.json`
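The saved calibration is plain JSON, so you can inspect or load it directly. A sketch assuming a per-joint min/pos/max layout like the printout above (the real schema may differ between LeRobot versions; a temporary path stands in for the cache path here):

```python
import json
import tempfile
from pathlib import Path

# Hypothetical per-joint schema mirroring the calibration printout above.
calib = {
    "thumb_cmc": {"min": 1790, "pos": 1831, "max": 1853},
    "thumb_mcp": {"min": 1497, "pos": 1514, "max": 1528},
}

# The real file lives under ~/.cache/huggingface/lerobot/calibration/...
path = Path(tempfile.mkdtemp()) / "red.json"
path.write_text(json.dumps(calib, indent=2))

loaded = json.loads(path.read_text())
for joint, rng in loaded.items():
    # Sanity check: the resting position should sit inside the recorded range.
    assert rng["min"] <= rng["pos"] <= rng["max"], f"bad range for {joint}"
```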
### 1.3 Calibrate Robot Arm
```bash
lerobot-calibrate \
--robot.type=hope_jr_arm \
--robot.port=/dev/tty.usbserial-1110 \
--robot.id=white
```
This will open a calibration GUI where you can set the range limits for each motor. The arm motions are organized as follows:
- **Shoulder**: pitch, yaw, and roll
- **Elbow**: flex
- **Wrist**: pitch, yaw, and roll
<p align="center">
<img
src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/calibration_gui_2.png"
alt="Setting boundaries in the arm calibration GUI"
title="Setting boundaries in the arm calibration GUI"
width="100%"
></img>
</p>
Use the calibration interface to set the range boundaries for each joint. Move each joint through its full range of motion and adjust the minimum and maximum values accordingly. Once you have set the appropriate boundaries for all joints, save the calibration.
### 1.4 Calibrate Teleoperator Exoskeleton
```bash
lerobot-calibrate \
--teleop.type=homunculus_arm \
--teleop.port=/dev/tty.usbmodem11201 \
--teleop.id=black
```
The exoskeleton allows one to control the robot arm. During calibration, you'll be prompted to move all joints through their full range of motion:
```
Move all joints through their entire range of motion.
Recording positions. Press ENTER to stop...
-------------------------------------------
NAME | MIN | POS | MAX
shoulder_pitch | 586 | 736 | 895
shoulder_yaw | 1257 | 1374 | 1390
shoulder_roll | 449 | 1034 | 2564
elbow_flex | 3023 | 3117 | 3134
wrist_roll | 3073 | 3096 | 3147
wrist_yaw | 2143 | 2171 | 2185
wrist_pitch | 1975 | 1993 | 2074
Calibration saved to /Users/your_username/.cache/huggingface/lerobot/calibration/teleoperators/homunculus_arm/black.json
```
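Under the hood, this step just tracks the running min and max of each joint while you move it. A minimal sketch of that bookkeeping, with simulated readings:

```python
def record_ranges(samples):
    """Track running (min, last_pos, max) per joint from a stream of position dicts."""
    ranges = {}
    for sample in samples:
        for joint, pos in sample.items():
            lo, _, hi = ranges.get(joint, (pos, pos, pos))
            ranges[joint] = (min(lo, pos), pos, max(hi, pos))
    return ranges

# Simulated readings while the shoulder is moved through its range.
stream = [{"shoulder_pitch": p} for p in (736, 650, 586, 800, 895, 736)]
print(record_ranges(stream))  # {'shoulder_pitch': (586, 736, 895)}
```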
## Step 2: Teleoperation
Due to global variable conflicts in the Feetech middleware, teleoperation for arm and hand must run in separate shell sessions:
### Hand
```bash
lerobot-teleoperate \
--robot.type=hope_jr_hand \
--robot.port=/dev/tty.usbmodem58760432281 \
--robot.id=blue \
--robot.side=right \
--teleop.type=homunculus_glove \
--teleop.port=/dev/tty.usbmodem11201 \
--teleop.id=red \
--teleop.side=right \
--display_data=true \
--fps=30
```
### Arm
```bash
lerobot-teleoperate \
--robot.type=hope_jr_arm \
--robot.port=/dev/tty.usbserial-1110 \
--robot.id=white \
--teleop.type=homunculus_arm \
--teleop.port=/dev/tty.usbmodem11201 \
--teleop.id=black \
--display_data=true \
--fps=30
```
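Conceptually, each of these commands runs a loop that reads an action from the teleoperator and forwards it to the robot at the requested rate. A rough sketch with stand-in devices (the `get_action`/`send_action` names and the fake classes are illustrative):

```python
import time

def teleop_loop(teleop, robot, fps=30, steps=3):
    """Minimal teleoperation loop: read an action, send it, hold the target rate."""
    period = 1.0 / fps
    for _ in range(steps):
        t0 = time.perf_counter()
        action = teleop.get_action()   # e.g. glove/exoskeleton joint positions
        robot.send_action(action)      # mirrored onto the hand/arm motors
        dt = time.perf_counter() - t0
        if dt < period:
            time.sleep(period - dt)    # keep the loop close to the requested fps

# Stand-in devices for illustration only.
class FakeTeleop:
    def get_action(self):
        return {"wrist_pitch": 0.1}

class FakeRobot:
    def __init__(self):
        self.sent = []
    def send_action(self, action):
        self.sent.append(action)

robot = FakeRobot()
teleop_loop(FakeTeleop(), robot, fps=30, steps=3)
print(len(robot.sent))  # 3
```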
## Step 3: Record, Replay, Train
Recording, replaying, and training with HopeJR are still experimental.
### Record
This step records a dataset; an example can be seen [here](https://huggingface.co/datasets/nepyope/hand_record_test_with_video_data/settings).
```bash
lerobot-record \
--robot.type=hope_jr_hand \
--robot.port=/dev/tty.usbmodem58760432281 \
--robot.id=right \
--robot.side=right \
--robot.cameras='{"main": {"type": "opencv", "index_or_path": 0, "width": 640, "height": 480, "fps": 30}}' \
--teleop.type=homunculus_glove \
--teleop.port=/dev/tty.usbmodem1201 \
--teleop.id=right \
--teleop.side=right \
--dataset.repo_id=nepyope/hand_record_test_with_video_data \
--dataset.single_task="Hand recording test with video data" \
--dataset.num_episodes=1 \
--dataset.episode_time_s=5 \
--dataset.push_to_hub=true \
--dataset.private=true \
--display_data=true
```
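As a quick sanity check on the flags above, the expected number of recorded frames follows from the episode count, episode length, and camera fps (actual counts can differ slightly due to timing):

```python
def expected_frames(num_episodes: int, episode_time_s: float, fps: int) -> int:
    """Approximate number of frames a recording run will produce."""
    return int(num_episodes * episode_time_s * fps)

# With the flags above: 1 episode x 5 s x 30 fps.
print(expected_frames(1, 5, 30))  # 150
```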
### Replay
```bash
lerobot-replay \
--robot.type=hope_jr_hand \
--robot.port=/dev/tty.usbmodem58760432281 \
--robot.id=right \
--robot.side=right \
--dataset.repo_id=nepyope/hand_record_test_with_camera \
--dataset.episode=0
```
### Train
```bash
lerobot-train \
--dataset.repo_id=nepyope/hand_record_test_with_video_data \
--policy.type=act \
--output_dir=outputs/train/hopejr_hand \
--job_name=hopejr \
--policy.device=mps \
--wandb.enable=true \
--policy.repo_id=nepyope/hand_test_policy
```
### Evaluate
This training run can be viewed as an example [here](https://wandb.ai/tino/lerobot/runs/rp0k8zvw?nw=nwusertino).
```bash
lerobot-record \
--robot.type=hope_jr_hand \
--robot.port=/dev/tty.usbmodem58760432281 \
--robot.id=right \
--robot.side=right \
--robot.cameras='{"main": {"type": "opencv", "index_or_path": 0, "width": 640, "height": 480, "fps": 30}}' \
--display_data=false \
--dataset.repo_id=nepyope/eval_hopejr \
--dataset.single_task="Evaluate hopejr hand policy" \
--dataset.num_episodes=10 \
--policy.path=outputs/train/hopejr_hand/checkpoints/last/pretrained_model
```
# SO-101
In the steps below, we explain how to assemble our flagship robot, the SO-101.
## Source the parts
Follow this [README](https://github.com/TheRobotStudio/SO-ARM100). It contains the bill of materials, with a link to source the parts, as well as the instructions to 3D print the parts.
It also offers advice if it's your first time printing or if you don't own a 3D printer.
## Install LeRobot 🤗
To install LeRobot, follow our [Installation Guide](./installation)
In addition to these instructions, you need to install the Feetech SDK:
```bash
pip install -e ".[feetech]"
```
## Step-by-Step Assembly Instructions
The follower arm uses 6x STS3215 motors with 1/345 gearing. The leader, however, uses three differently geared motors to make sure it can both sustain its own weight and be moved without requiring much force. The table below shows which motor is needed for which joint.
| Leader-Arm Axis | Motor | Gear Ratio |
| ------------------- | :---: | :--------: |
| Base / Shoulder Pan | 1 | 1 / 191 |
| Shoulder Lift | 2 | 1 / 345 |
| Elbow Flex | 3 | 1 / 191 |
| Wrist Flex | 4 | 1 / 147 |
| Wrist Roll | 5 | 1 / 147 |
| Gripper | 6 | 1 / 147 |
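When sorting motors before assembly, it can help to encode the table in code. A small lookup using the motor ids and gear ratios from the table above:

```python
def gearing_for_motor(motor_id: int) -> str:
    """Return the gear ratio required at a given leader-arm motor id (per the table above)."""
    table = {1: "1/191", 2: "1/345", 3: "1/191", 4: "1/147", 5: "1/147", 6: "1/147"}
    return table[motor_id]

# Motor 2 drives the shoulder lift, which carries the most weight.
print(gearing_for_motor(2))  # 1/345
```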
### Clean Parts
Remove all support material from the 3D-printed parts. The easiest way to do this is using a small screwdriver to get underneath the support material.
It is advisable to plug one 3-pin cable into each motor after placing it, before continuing assembly.
### Joint 1
- Place the first motor into the base.
- Fasten the motor with 4 M2x6mm screws (smallest screws). Two from the top and two from the bottom.
- Slide over the first motor holder and fasten it using two M2x6mm screws (one on each side).
- Install both motor horns, securing the top horn with a M3x6mm screw.
- Attach the shoulder part.
- Tighten the shoulder part with 4 M3x6mm screws on top and 4 M3x6mm screws on the bottom
- Add the shoulder motor holder.
<div class="video-container">
<video controls width="600">
<source
src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/Joint1_v2.mp4"
type="video/mp4"
/>
</video>
</div>
### Joint 2
- Slide the second motor in from the top.
- Fasten the second motor with 4 M2x6mm screws.
- Attach both motor horns to motor 2, again use the M3x6mm horn screw.
- Attach the upper arm with 4 M3x6mm screws on each side.
<div class="video-container">
<video controls width="600">
<source
src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/Joint2_v2.mp4"
type="video/mp4"
/>
</video>
</div>
### Joint 3
- Insert motor 3 and fasten using 4 M2x6mm screws
- Attach both motor horns to motor 3 and secure one again with a M3x6mm horn screw.
- Connect the forearm to motor 3 using 4 M3x6mm screws on each side.
<div class="video-container">
<video controls width="600">
<source
src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/Joint3_v2.mp4"
type="video/mp4"
/>
</video>
</div>
### Joint 4
- Slide over motor holder 4.
- Slide in motor 4.
- Fasten motor 4 with 4 M2x6mm screws and attach its motor horns, use a M3x6mm horn screw.
<div class="video-container">
<video controls width="600">
<source
src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/Joint4_v2.mp4"
type="video/mp4"
/>
</video>
</div>
### Joint 5
- Insert motor 5 into the wrist holder and secure it with 2 M2x6mm front screws.
- Install only one motor horn on the wrist motor and secure it with a M3x6mm horn screw.
- Secure the wrist to motor 4 using 4 M3x6mm screws on both sides.
<div class="video-container">
<video controls width="600">
<source
src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/Joint5_v2.mp4"
type="video/mp4"
/>
</video>
</div>
### Gripper / Handle
<hfoptions id="assembly">
<hfoption id="Follower">
- Attach the gripper to motor 5 by fastening it to the motor horn on the wrist using 4 M3x6mm screws.
- Insert the gripper motor and secure it with 2 M2x6mm screws on each side.
- Attach the motor horns and again use a M3x6mm horn screw.
- Install the gripper claw and secure it with 4 M3x6mm screws on both sides.
<div class="video-container">
<video controls width="600">
<source
src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/Gripper_v2.mp4"
type="video/mp4"
/>
</video>
</div>
</hfoption>
<hfoption id="Leader">
- Mount the leader holder onto the wrist and secure it with 4 M3x6mm screws.
- Attach the handle to motor 5 using 1 M2x6mm screw.
- Insert the gripper motor, secure it with 2 M2x6mm screws on each side, attach a motor horn using a M3x6mm horn screw.
- Attach the follower trigger with 4 M3x6mm screws.
<div class="video-container">
<video controls width="600">
<source
src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/Leader_v2.mp4"
type="video/mp4"
/>
</video>
</div>
</hfoption>
</hfoptions>
## Configure the motors
### 1. Find the USB ports associated with each arm
To find the port for each bus servo adapter, connect the MotorBus to your computer via USB and power it on. Then run the following script and disconnect the MotorBus when prompted:
```bash
lerobot-find-port
```
<hfoptions id="example">
<hfoption id="Mac">
Example output:
```
Finding all available ports for the MotorBus.
['/dev/tty.usbmodem575E0032081', '/dev/tty.usbmodem575E0031751']
Remove the USB cable from your MotorsBus and press Enter when done.
[...Disconnect corresponding leader or follower arm and press Enter...]
The port of this MotorsBus is /dev/tty.usbmodem575E0032081
Reconnect the USB cable.
```
Here the found port is `/dev/tty.usbmodem575E0032081`, corresponding to your leader or follower arm.
</hfoption>
<hfoption id="Linux">
On Linux, you might need to give access to the USB ports by running:
```bash
sudo chmod 666 /dev/ttyACM0
sudo chmod 666 /dev/ttyACM1
```
Example output:
```
Finding all available ports for the MotorBus.
['/dev/ttyACM0', '/dev/ttyACM1']
Remove the usb cable from your MotorsBus and press Enter when done.
[...Disconnect corresponding leader or follower arm and press Enter...]
The port of this MotorsBus is /dev/ttyACM1
Reconnect the USB cable.
```
Here the found port is `/dev/ttyACM1`, corresponding to your leader or follower arm.
</hfoption>
</hfoptions>
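The script identifies the port by diffing the list of serial ports before and after you unplug the adapter. The core logic amounts to a set difference:

```python
def identify_port(before: list[str], after: list[str]) -> str:
    """The port that disappeared after unplugging is the arm's port."""
    removed = set(before) - set(after)
    if len(removed) != 1:
        raise RuntimeError(f"Expected exactly one port to disappear, got {sorted(removed)}")
    return removed.pop()

before = ["/dev/tty.usbmodem575E0032081", "/dev/tty.usbmodem575E0031751"]
after = ["/dev/tty.usbmodem575E0031751"]  # after unplugging one arm
print(identify_port(before, after))  # /dev/tty.usbmodem575E0032081
```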
### 2. Set the motor ids and baudrates
Each motor is identified by a unique id on the bus. When brand new, motors usually come with a default id of `1`. For the communication to work properly between the motors and the controller, we first need to set a unique, different id to each motor. Additionally, the speed at which data is transmitted on the bus is determined by the baudrate. In order to talk to each other, the controller and all the motors need to be configured with the same baudrate.
To that end, we first need to connect to each motor individually with the controller in order to set these. Since we will write these parameters in the non-volatile section of the motors' internal memory (EEPROM), we'll only need to do this once.
If you are repurposing motors from another robot, you will probably also need to perform this step as the ids and baudrate likely won't match.
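The per-motor flow can be sketched as follows. The `FakeBus` class, the baudrate value, and the method names are illustrative stand-ins, not the real Feetech API:

```python
class FakeBus:
    """Stand-in for the servo bus: records id/baudrate writes to the one connected motor."""
    def __init__(self):
        self.configured = []
    def write_id_and_baudrate(self, motor_id, baudrate):
        # On real hardware this writes to the motor's EEPROM, so it persists.
        self.configured.append((motor_id, baudrate))

# The script asks for motors in this order, assigning ids 6 down to 1.
MOTOR_ORDER = ["gripper", "wrist_roll", "wrist_flex", "elbow_flex", "shoulder_lift", "shoulder_pan"]
BAUDRATE = 1_000_000  # assumed common bus baudrate

def setup_motors(bus):
    for motor_id, name in zip(range(len(MOTOR_ORDER), 0, -1), MOTOR_ORDER):
        # Real flow: prompt "Connect the controller board to the '<name>' motor only",
        # wait for Enter, then write the id and baudrate to that single motor.
        bus.write_id_and_baudrate(motor_id, BAUDRATE)

bus = FakeBus()
setup_motors(bus)
print(bus.configured[0])  # (6, 1000000) -> gripper gets id 6 first
```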
The video below shows the sequence of steps for setting the motor ids.
##### Setup motors video
<div class="video-container">
<video controls width="600">
<source
src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/setup_motors_so101_2.mp4"
type="video/mp4"
/>
</video>
</div>
#### Follower
Connect the USB cable from your computer and the power supply to the follower arm's controller board. Then, run the following command or run the API example with the port you got from the previous step. You'll also need to give your follower arm a name with the `id` parameter.
<hfoptions id="setup_motors">
<hfoption id="Command">
```bash
lerobot-setup-motors \
--robot.type=so101_follower \
--robot.port=/dev/tty.usbmodem585A0076841 # <- paste here the port found at previous step
```
</hfoption>
<hfoption id="API example">
<!-- prettier-ignore-start -->
```python
from lerobot.robots.so101_follower import SO101Follower, SO101FollowerConfig
config = SO101FollowerConfig(
port="/dev/tty.usbmodem585A0076841",
id="my_awesome_follower_arm",
)
follower = SO101Follower(config)
follower.setup_motors()
```
<!-- prettier-ignore-end -->
</hfoption>
</hfoptions>
You should see the following instruction:
```bash
Connect the controller board to the 'gripper' motor only and press enter.
```
As instructed, plug in the gripper's motor. Make sure it's the only motor connected to the board, and that the motor itself is not yet daisy-chained to any other motor. When you press `[Enter]`, the script will automatically set the id and baudrate for that motor.
<details>
<summary>Troubleshooting</summary>
If you get an error at that point, check your cables and make sure they are plugged in properly:
<ul>
<li>Power supply</li>
<li>USB cable between your computer and the controller board</li>
<li>The 3-pin cable from the controller board to the motor</li>
</ul>
If you are using a Waveshare controller board, make sure that the two jumpers are set on the `B` channel (USB).
</details>
You should then see the following message:
```bash
'gripper' motor id set to 6
```
Followed by the next instruction:
```bash
Connect the controller board to the 'wrist_roll' motor only and press enter.
```
You can disconnect the 3-pin cable from the controller board, but you can leave it connected to the gripper motor on the other end, as it will already be in the right place. Now, plug in another 3-pin cable to the wrist roll motor and connect it to the controller board. As with the previous motor, make sure it is the only motor connected to the board and that the motor itself isn't connected to any other one.
Repeat the operation for each motor as instructed.
> [!TIP]
> Check your cabling at each step before pressing Enter. For instance, the power supply cable might disconnect as you manipulate the board.
When you are done, the script will simply finish, at which point the motors are ready to be used. You can now plug the 3-pin cable from each motor to the next one, and the cable from the first motor (the 'shoulder pan' with id=1) to the controller board, which can now be attached to the base of the arm.
#### Leader
Do the same steps for the leader arm.
<hfoptions id="setup_motors">
<hfoption id="Command">
```bash
lerobot-setup-motors \
--teleop.type=so101_leader \
--teleop.port=/dev/tty.usbmodem575E0031751 # <- paste here the port found at previous step
```
</hfoption>
<hfoption id="API example">
<!-- prettier-ignore-start -->
```python
from lerobot.teleoperators.so101_leader import SO101Leader, SO101LeaderConfig
config = SO101LeaderConfig(
port="/dev/tty.usbmodem585A0076841",
id="my_awesome_leader_arm",
)
leader = SO101Leader(config)
leader.setup_motors()
```
<!-- prettier-ignore-end -->
</hfoption>
</hfoptions>
## Calibrate
Next, you'll need to calibrate your robot to ensure that the leader and follower arms have the same position values when they are in the same physical position.
The calibration process is very important because it allows a neural network trained on one robot to work on another.
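The idea is that calibration maps each joint's raw tick range onto a shared normalized range, so the same normalized action means the same physical pose on any calibrated arm. A sketch with hypothetical tick ranges:

```python
def to_norm(raw: int, lo: int, hi: int) -> float:
    """Map a raw tick value in [lo, hi] to [0.0, 1.0]."""
    return (raw - lo) / (hi - lo)

def to_raw(norm: float, lo: int, hi: int) -> int:
    """Map a normalized position back into a joint's raw tick range."""
    return round(lo + norm * (hi - lo))

# Hypothetical calibrated ranges for the same physical joint on two arms.
arm_a = (586, 895)
arm_b = (1257, 1390)

norm = to_norm(740, *arm_a)   # position read on arm A
print(round(norm, 3))          # 0.498
print(to_raw(norm, *arm_b))    # the same physical pose expressed in arm B's ticks
```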
#### Follower
Run the following command or API example to calibrate the follower arm:
<hfoptions id="calibrate_follower">
<hfoption id="Command">
```bash
lerobot-calibrate \
--robot.type=so101_follower \
--robot.port=/dev/tty.usbmodem58760431551 \ # <- The port of your robot
--robot.id=my_awesome_follower_arm # <- Give the robot a unique name
```
</hfoption>
<hfoption id="API example">
<!-- prettier-ignore-start -->
```python
from lerobot.robots.so101_follower import SO101FollowerConfig, SO101Follower
config = SO101FollowerConfig(
port="/dev/tty.usbmodem585A0076891",
id="my_awesome_follower_arm",
)
follower = SO101Follower(config)
follower.connect(calibrate=False)
follower.calibrate()
follower.disconnect()
```
<!-- prettier-ignore-end -->
</hfoption>
</hfoptions>
The video below shows how to perform the calibration. First, move the robot to the position where all joints are in the middle of their ranges. Then, after pressing Enter, move each joint through its full range of motion.
##### Calibration video
<div class="video-container">
<video controls width="600">
<source
src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/calibrate_so101_2.mp4"
type="video/mp4"
/>
</video>
</div>
#### Leader
Do the same steps to calibrate the leader arm by running the following command or API example:
<hfoptions id="calibrate_leader">
<hfoption id="Command">
```bash
lerobot-calibrate \
--teleop.type=so101_leader \
--teleop.port=/dev/tty.usbmodem58760431551 \ # <- The port of your robot
--teleop.id=my_awesome_leader_arm # <- Give the robot a unique name
```
</hfoption>
<hfoption id="API example">
<!-- prettier-ignore-start -->
```python
from lerobot.teleoperators.so101_leader import SO101LeaderConfig, SO101Leader
config = SO101LeaderConfig(
port="/dev/tty.usbmodem58760431551",
id="my_awesome_leader_arm",
)
leader = SO101Leader(config)
leader.connect(calibrate=False)
leader.calibrate()
leader.disconnect()
```
<!-- prettier-ignore-end -->
</hfoption>
</hfoptions>
Congrats 🎉, your robot is all set to learn a task on its own. Start training it by following this tutorial: [Getting started with real-world robots](./getting_started_real_world_robot)
> [!TIP]
> If you have any questions or need help, please reach out on [Discord](https://discord.com/invite/s3KuuzsPFb).
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Provides the OpenCVCamera class for capturing frames from cameras using OpenCV.
"""
import logging
import math
import os
import platform
import time
from pathlib import Path
from threading import Event, Lock, Thread
from typing import Any
# Fix MSMF hardware transform compatibility for Windows before importing cv2
if platform.system() == "Windows" and "OPENCV_VIDEOIO_MSMF_ENABLE_HW_TRANSFORMS" not in os.environ:
os.environ["OPENCV_VIDEOIO_MSMF_ENABLE_HW_TRANSFORMS"] = "0"
import cv2
import numpy as np
from lerobot.errors import DeviceAlreadyConnectedError, DeviceNotConnectedError
from ..camera import Camera
from ..utils import get_cv2_backend, get_cv2_rotation
from .configuration_opencv import ColorMode, OpenCVCameraConfig
# NOTE(Steven): The maximum opencv device index depends on your operating system. For instance,
# if you have 3 cameras, they should be associated to index 0, 1, and 2. This is the case
# on macOS. However, on Ubuntu the indices can be arbitrary, e.g. 6, 16, 23.
# When you change the USB port or reboot the computer, the operating system might
# treat the same cameras as new devices. Thus we select a higher bound to search indices.
MAX_OPENCV_INDEX = 60
logger = logging.getLogger(__name__)
class OpenCVCamera(Camera):
"""
Manages camera interactions using OpenCV for efficient frame recording.
This class provides a high-level interface to connect to, configure, and read
frames from cameras compatible with OpenCV's VideoCapture. It supports both
synchronous and asynchronous frame reading.
An OpenCVCamera instance requires a camera index (e.g., 0) or a device path
(e.g., '/dev/video0' on Linux). Camera indices can be unstable across reboots
or port changes, especially on Linux. Use the provided utility script to find
available camera indices or paths:
```bash
lerobot-find-cameras opencv
```
The camera's default settings (FPS, resolution, color mode) are used unless
overridden in the configuration.
Example:
```python
from lerobot.cameras.opencv import OpenCVCamera
from lerobot.cameras.configuration_opencv import OpenCVCameraConfig, ColorMode, Cv2Rotation
# Basic usage with camera index 0
config = OpenCVCameraConfig(index_or_path=0)
camera = OpenCVCamera(config)
camera.connect()
# Read 1 frame synchronously
color_image = camera.read()
print(color_image.shape)
# Read 1 frame asynchronously
async_image = camera.async_read()
# When done, properly disconnect the camera using
camera.disconnect()
# Example with custom settings
custom_config = OpenCVCameraConfig(
index_or_path='/dev/video0', # Or use an index
fps=30,
width=1280,
height=720,
color_mode=ColorMode.RGB,
rotation=Cv2Rotation.ROTATE_90
)
custom_camera = OpenCVCamera(custom_config)
# ... connect, read, disconnect ...
```
"""
def __init__(self, config: OpenCVCameraConfig):
"""
Initializes the OpenCVCamera instance.
Args:
config: The configuration settings for the camera.
"""
super().__init__(config)
self.config = config
self.index_or_path = config.index_or_path
self.fps = config.fps
self.color_mode = config.color_mode
self.warmup_s = config.warmup_s
self.videocapture: cv2.VideoCapture | None = None
self.thread: Thread | None = None
self.stop_event: Event | None = None
self.frame_lock: Lock = Lock()
self.latest_frame: np.ndarray | None = None
self.new_frame_event: Event = Event()
self.rotation: int | None = get_cv2_rotation(config.rotation)
self.backend: int = get_cv2_backend()
if self.height and self.width:
self.capture_width, self.capture_height = self.width, self.height
if self.rotation in [cv2.ROTATE_90_CLOCKWISE, cv2.ROTATE_90_COUNTERCLOCKWISE]:
self.capture_width, self.capture_height = self.height, self.width
def __str__(self) -> str:
return f"{self.__class__.__name__}({self.index_or_path})"
@property
def is_connected(self) -> bool:
"""Checks if the camera is currently connected and opened."""
return isinstance(self.videocapture, cv2.VideoCapture) and self.videocapture.isOpened()
def connect(self, warmup: bool = True):
"""
Connects to the OpenCV camera specified in the configuration.
Initializes the OpenCV VideoCapture object, sets desired camera properties
(FPS, width, height), and performs initial checks.
Raises:
DeviceAlreadyConnectedError: If the camera is already connected.
ConnectionError: If the specified camera index/path is not found or the camera is found but fails to open.
RuntimeError: If the camera opens but fails to apply requested FPS/resolution settings.
"""
if self.is_connected:
raise DeviceAlreadyConnectedError(f"{self} is already connected.")
# Use 1 thread for OpenCV operations to avoid potential conflicts or
# blocking in multi-threaded applications, especially during data collection.
cv2.setNumThreads(1)
self.videocapture = cv2.VideoCapture(self.index_or_path, self.backend)
if not self.videocapture.isOpened():
self.videocapture.release()
self.videocapture = None
raise ConnectionError(
                f"Failed to open {self}. Run `lerobot-find-cameras opencv` to find available cameras."
)
self._configure_capture_settings()
if warmup:
start_time = time.time()
while time.time() - start_time < self.warmup_s:
self.read()
time.sleep(0.1)
logger.info(f"{self} connected.")
def _configure_capture_settings(self) -> None:
"""
Applies the specified FPS, width, and height settings to the connected camera.
This method attempts to set the camera properties via OpenCV. It checks if
the camera successfully applied the settings and raises an error if not.
        This method takes no arguments; it applies the instance's `fps`, `width`,
        and `height` settings. Any of them left as None falls back to the
        camera's current default for that property.
Raises:
RuntimeError: If the camera fails to set any of the specified properties
to the requested value.
DeviceNotConnectedError: If the camera is not connected when attempting
to configure settings.
"""
if not self.is_connected:
raise DeviceNotConnectedError(f"Cannot configure settings for {self} as it is not connected.")
if self.fps is None:
self.fps = self.videocapture.get(cv2.CAP_PROP_FPS)
else:
self._validate_fps()
default_width = int(round(self.videocapture.get(cv2.CAP_PROP_FRAME_WIDTH)))
default_height = int(round(self.videocapture.get(cv2.CAP_PROP_FRAME_HEIGHT)))
if self.width is None or self.height is None:
self.width, self.height = default_width, default_height
self.capture_width, self.capture_height = default_width, default_height
if self.rotation in [cv2.ROTATE_90_CLOCKWISE, cv2.ROTATE_90_COUNTERCLOCKWISE]:
self.width, self.height = default_height, default_width
self.capture_width, self.capture_height = default_width, default_height
else:
self._validate_width_and_height()
def _validate_fps(self) -> None:
"""Validates and sets the camera's frames per second (FPS)."""
success = self.videocapture.set(cv2.CAP_PROP_FPS, float(self.fps))
actual_fps = self.videocapture.get(cv2.CAP_PROP_FPS)
# Use math.isclose for robust float comparison
if not success or not math.isclose(self.fps, actual_fps, rel_tol=1e-3):
raise RuntimeError(f"{self} failed to set fps={self.fps} ({actual_fps=}).")
def _validate_width_and_height(self) -> None:
"""Validates and sets the camera's frame capture width and height."""
width_success = self.videocapture.set(cv2.CAP_PROP_FRAME_WIDTH, float(self.capture_width))
height_success = self.videocapture.set(cv2.CAP_PROP_FRAME_HEIGHT, float(self.capture_height))
actual_width = int(round(self.videocapture.get(cv2.CAP_PROP_FRAME_WIDTH)))
if not width_success or self.capture_width != actual_width:
raise RuntimeError(
f"{self} failed to set capture_width={self.capture_width} ({actual_width=}, {width_success=})."
)
actual_height = int(round(self.videocapture.get(cv2.CAP_PROP_FRAME_HEIGHT)))
if not height_success or self.capture_height != actual_height:
raise RuntimeError(
f"{self} failed to set capture_height={self.capture_height} ({actual_height=}, {height_success=})."
)
@staticmethod
def find_cameras() -> list[dict[str, Any]]:
"""
Detects available OpenCV cameras connected to the system.
On Linux, it scans '/dev/video*' paths. On other systems (like macOS, Windows),
it checks indices from 0 up to `MAX_OPENCV_INDEX`.
Returns:
List[Dict[str, Any]]: A list of dictionaries,
where each dictionary contains 'type', 'id' (port index or path),
and the default profile properties (width, height, fps, format).
"""
found_cameras_info = []
if platform.system() == "Linux":
possible_paths = sorted(Path("/dev").glob("video*"), key=lambda p: p.name)
targets_to_scan = [str(p) for p in possible_paths]
else:
targets_to_scan = list(range(MAX_OPENCV_INDEX))
for target in targets_to_scan:
camera = cv2.VideoCapture(target)
if camera.isOpened():
default_width = int(camera.get(cv2.CAP_PROP_FRAME_WIDTH))
default_height = int(camera.get(cv2.CAP_PROP_FRAME_HEIGHT))
default_fps = camera.get(cv2.CAP_PROP_FPS)
default_format = camera.get(cv2.CAP_PROP_FORMAT)
camera_info = {
"name": f"OpenCV Camera @ {target}",
"type": "OpenCV",
"id": target,
"backend_api": camera.getBackendName(),
"default_stream_profile": {
"format": default_format,
"width": default_width,
"height": default_height,
"fps": default_fps,
},
}
found_cameras_info.append(camera_info)
camera.release()
return found_cameras_info
def read(self, color_mode: ColorMode | None = None) -> np.ndarray:
"""
Reads a single frame synchronously from the camera.
This is a blocking call. It waits for the next available frame from the
camera hardware via OpenCV.
Args:
color_mode (Optional[ColorMode]): If specified, overrides the default
color mode (`self.color_mode`) for this read operation (e.g.,
request RGB even if default is BGR).
Returns:
np.ndarray: The captured frame as a NumPy array in the format
(height, width, channels), using the specified or default
color mode and applying any configured rotation.
Raises:
DeviceNotConnectedError: If the camera is not connected.
RuntimeError: If reading the frame from the camera fails or if the
received frame dimensions don't match expectations before rotation.
ValueError: If an invalid `color_mode` is requested.
"""
if not self.is_connected:
raise DeviceNotConnectedError(f"{self} is not connected.")
start_time = time.perf_counter()
ret, frame = self.videocapture.read()
if not ret or frame is None:
raise RuntimeError(f"{self} read failed (status={ret}).")
processed_frame = self._postprocess_image(frame, color_mode)
read_duration_ms = (time.perf_counter() - start_time) * 1e3
logger.debug(f"{self} read took: {read_duration_ms:.1f}ms")
return processed_frame
def _postprocess_image(self, image: np.ndarray, color_mode: ColorMode | None = None) -> np.ndarray:
"""
Applies color conversion, dimension validation, and rotation to a raw frame.
Args:
image (np.ndarray): The raw image frame (expected BGR format from OpenCV).
color_mode (Optional[ColorMode]): The target color mode (RGB or BGR). If None,
uses the instance's default `self.color_mode`.
Returns:
np.ndarray: The processed image frame.
Raises:
ValueError: If the requested `color_mode` is invalid.
RuntimeError: If the raw frame dimensions do not match the configured
`width` and `height`.
"""
requested_color_mode = self.color_mode if color_mode is None else color_mode
if requested_color_mode not in (ColorMode.RGB, ColorMode.BGR):
raise ValueError(
f"Invalid color mode '{requested_color_mode}'. Expected {ColorMode.RGB} or {ColorMode.BGR}."
)
h, w, c = image.shape
if h != self.capture_height or w != self.capture_width:
raise RuntimeError(
f"{self} frame width={w} or height={h} do not match configured width={self.capture_width} or height={self.capture_height}."
)
if c != 3:
raise RuntimeError(f"{self} frame channels={c} do not match expected 3 channels (RGB/BGR).")
processed_image = image
if requested_color_mode == ColorMode.RGB:
processed_image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
if self.rotation in [cv2.ROTATE_90_CLOCKWISE, cv2.ROTATE_90_COUNTERCLOCKWISE, cv2.ROTATE_180]:
processed_image = cv2.rotate(processed_image, self.rotation)
return processed_image
def _read_loop(self):
"""
Internal loop run by the background thread for asynchronous reading.
On each iteration:
1. Reads a color frame
2. Stores result in latest_frame (thread-safe)
3. Sets new_frame_event to notify listeners
Stops on DeviceNotConnectedError, logs other errors and continues.
"""
while not self.stop_event.is_set():
try:
color_image = self.read()
with self.frame_lock:
self.latest_frame = color_image
self.new_frame_event.set()
except DeviceNotConnectedError:
break
except Exception as e:
logger.warning(f"Error reading frame in background thread for {self}: {e}")
def _start_read_thread(self) -> None:
"""Starts or restarts the background read thread if it's not running."""
if self.thread is not None and self.thread.is_alive():
self.thread.join(timeout=0.1)
if self.stop_event is not None:
self.stop_event.set()
self.stop_event = Event()
self.thread = Thread(target=self._read_loop, args=(), name=f"{self}_read_loop")
self.thread.daemon = True
self.thread.start()
def _stop_read_thread(self) -> None:
"""Signals the background read thread to stop and waits for it to join."""
if self.stop_event is not None:
self.stop_event.set()
if self.thread is not None and self.thread.is_alive():
self.thread.join(timeout=2.0)
self.thread = None
self.stop_event = None
def async_read(self, timeout_ms: float = 200) -> np.ndarray:
"""
Reads the latest available frame asynchronously.
This method retrieves the most recent frame captured by the background
read thread. It does not block waiting for the camera hardware directly,
but may wait up to timeout_ms for the background thread to provide a frame.
Args:
timeout_ms (float): Maximum time in milliseconds to wait for a frame
to become available. Defaults to 200ms (0.2 seconds).
Returns:
np.ndarray: The latest captured frame as a NumPy array in the format
(height, width, channels), processed according to configuration.
Raises:
DeviceNotConnectedError: If the camera is not connected.
TimeoutError: If no frame becomes available within the specified timeout.
RuntimeError: If an unexpected error occurs.
"""
if not self.is_connected:
raise DeviceNotConnectedError(f"{self} is not connected.")
if self.thread is None or not self.thread.is_alive():
self._start_read_thread()
if not self.new_frame_event.wait(timeout=timeout_ms / 1000.0):
thread_alive = self.thread is not None and self.thread.is_alive()
raise TimeoutError(
f"Timed out waiting for frame from camera {self} after {timeout_ms} ms. "
f"Read thread alive: {thread_alive}."
)
with self.frame_lock:
frame = self.latest_frame
self.new_frame_event.clear()
if frame is None:
raise RuntimeError(f"Internal error: Event set but no frame available for {self}.")
return frame
def disconnect(self):
"""
Disconnects from the camera and cleans up resources.
Stops the background read thread (if running) and releases the OpenCV
VideoCapture object.
Raises:
DeviceNotConnectedError: If the camera is already disconnected.
"""
if not self.is_connected and self.thread is None:
raise DeviceNotConnectedError(f"{self} not connected.")
if self.thread is not None:
self._stop_read_thread()
if self.videocapture is not None:
self.videocapture.release()
self.videocapture = None
logger.info(f"{self} disconnected.")
[source file: lerobot/src/lerobot/cameras/opencv/camera_opencv.py]
#!/usr/bin/env python
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from pprint import pformat
import torch
from lerobot.configs.policies import PreTrainedConfig
from lerobot.configs.train import TrainPipelineConfig
from lerobot.datasets.lerobot_dataset import (
LeRobotDataset,
LeRobotDatasetMetadata,
MultiLeRobotDataset,
)
from lerobot.datasets.transforms import ImageTransforms
IMAGENET_STATS = {
"mean": [[[0.485]], [[0.456]], [[0.406]]], # (c,1,1)
"std": [[[0.229]], [[0.224]], [[0.225]]], # (c,1,1)
}
def resolve_delta_timestamps(
cfg: PreTrainedConfig, ds_meta: LeRobotDatasetMetadata
) -> dict[str, list] | None:
"""Resolves delta_timestamps by reading from the 'delta_indices' properties of the PreTrainedConfig.
Args:
cfg (PreTrainedConfig): The PreTrainedConfig to read delta_indices from.
ds_meta (LeRobotDatasetMetadata): The dataset from which features and fps are used to build
delta_timestamps against.
Returns:
dict[str, list] | None: A dictionary of delta_timestamps, e.g.:
{
"observation.state": [-0.04, -0.02, 0]
"observation.action": [-0.02, 0, 0.02]
}
returns `None` if the resulting dict is empty.
"""
delta_timestamps = {}
for key in ds_meta.features:
if key == "next.reward" and cfg.reward_delta_indices is not None:
delta_timestamps[key] = [i / ds_meta.fps for i in cfg.reward_delta_indices]
if key == "action" and cfg.action_delta_indices is not None:
delta_timestamps[key] = [i / ds_meta.fps for i in cfg.action_delta_indices]
if key.startswith("observation.") and cfg.observation_delta_indices is not None:
delta_timestamps[key] = [i / ds_meta.fps for i in cfg.observation_delta_indices]
if len(delta_timestamps) == 0:
delta_timestamps = None
return delta_timestamps
def make_dataset(cfg: TrainPipelineConfig) -> LeRobotDataset | MultiLeRobotDataset:
"""Handles the logic of setting up delta timestamps and image transforms before creating a dataset.
Args:
cfg (TrainPipelineConfig): A TrainPipelineConfig config which contains a DatasetConfig and a PreTrainedConfig.
Raises:
NotImplementedError: The MultiLeRobotDataset is currently deactivated.
Returns:
LeRobotDataset | MultiLeRobotDataset
"""
image_transforms = (
ImageTransforms(cfg.dataset.image_transforms) if cfg.dataset.image_transforms.enable else None
)
if isinstance(cfg.dataset.repo_id, str):
ds_meta = LeRobotDatasetMetadata(
cfg.dataset.repo_id, root=cfg.dataset.root, revision=cfg.dataset.revision
)
delta_timestamps = resolve_delta_timestamps(cfg.policy, ds_meta)
dataset = LeRobotDataset(
cfg.dataset.repo_id,
root=cfg.dataset.root,
episodes=cfg.dataset.episodes,
delta_timestamps=delta_timestamps,
image_transforms=image_transforms,
revision=cfg.dataset.revision,
video_backend=cfg.dataset.video_backend,
)
else:
raise NotImplementedError("The MultiLeRobotDataset isn't supported for now.")
dataset = MultiLeRobotDataset(
cfg.dataset.repo_id,
# TODO(aliberts): add proper support for multi dataset
# delta_timestamps=delta_timestamps,
image_transforms=image_transforms,
video_backend=cfg.dataset.video_backend,
)
logging.info(
"Multiple datasets were provided. Applied the following index mapping to the provided datasets: "
f"{pformat(dataset.repo_id_to_index, indent=2)}"
)
if cfg.dataset.use_imagenet_stats:
for key in dataset.meta.camera_keys:
for stats_type, stats in IMAGENET_STATS.items():
dataset.meta.stats[key][stats_type] = torch.tensor(stats, dtype=torch.float32)
return dataset
[source file: lerobot/src/lerobot/datasets/factory.py]
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import abc
from dataclasses import dataclass, field
from typing import Any
import draccus
from lerobot.configs.types import FeatureType, PolicyFeature
from lerobot.constants import ACTION, OBS_ENV_STATE, OBS_IMAGE, OBS_IMAGES, OBS_STATE
from lerobot.robots import RobotConfig
from lerobot.teleoperators.config import TeleoperatorConfig
@dataclass
class EnvConfig(draccus.ChoiceRegistry, abc.ABC):
task: str | None = None
fps: int = 30
features: dict[str, PolicyFeature] = field(default_factory=dict)
features_map: dict[str, str] = field(default_factory=dict)
@property
def type(self) -> str:
return self.get_choice_name(self.__class__)
@property
@abc.abstractmethod
def gym_kwargs(self) -> dict:
raise NotImplementedError()
@EnvConfig.register_subclass("aloha")
@dataclass
class AlohaEnv(EnvConfig):
task: str | None = "AlohaInsertion-v0"
fps: int = 50
episode_length: int = 400
obs_type: str = "pixels_agent_pos"
render_mode: str = "rgb_array"
features: dict[str, PolicyFeature] = field(
default_factory=lambda: {
"action": PolicyFeature(type=FeatureType.ACTION, shape=(14,)),
}
)
features_map: dict[str, str] = field(
default_factory=lambda: {
"action": ACTION,
"agent_pos": OBS_STATE,
"top": f"{OBS_IMAGE}.top",
"pixels/top": f"{OBS_IMAGES}.top",
}
)
def __post_init__(self):
if self.obs_type == "pixels":
self.features["top"] = PolicyFeature(type=FeatureType.VISUAL, shape=(480, 640, 3))
elif self.obs_type == "pixels_agent_pos":
self.features["agent_pos"] = PolicyFeature(type=FeatureType.STATE, shape=(14,))
self.features["pixels/top"] = PolicyFeature(type=FeatureType.VISUAL, shape=(480, 640, 3))
@property
def gym_kwargs(self) -> dict:
return {
"obs_type": self.obs_type,
"render_mode": self.render_mode,
"max_episode_steps": self.episode_length,
}
@EnvConfig.register_subclass("pusht")
@dataclass
class PushtEnv(EnvConfig):
task: str | None = "PushT-v0"
fps: int = 10
episode_length: int = 300
obs_type: str = "pixels_agent_pos"
render_mode: str = "rgb_array"
visualization_width: int = 384
visualization_height: int = 384
features: dict[str, PolicyFeature] = field(
default_factory=lambda: {
"action": PolicyFeature(type=FeatureType.ACTION, shape=(2,)),
"agent_pos": PolicyFeature(type=FeatureType.STATE, shape=(2,)),
}
)
features_map: dict[str, str] = field(
default_factory=lambda: {
"action": ACTION,
"agent_pos": OBS_STATE,
"environment_state": OBS_ENV_STATE,
"pixels": OBS_IMAGE,
}
)
def __post_init__(self):
if self.obs_type == "pixels_agent_pos":
self.features["pixels"] = PolicyFeature(type=FeatureType.VISUAL, shape=(384, 384, 3))
elif self.obs_type == "environment_state_agent_pos":
self.features["environment_state"] = PolicyFeature(type=FeatureType.ENV, shape=(16,))
@property
def gym_kwargs(self) -> dict:
return {
"obs_type": self.obs_type,
"render_mode": self.render_mode,
"visualization_width": self.visualization_width,
"visualization_height": self.visualization_height,
"max_episode_steps": self.episode_length,
}
@EnvConfig.register_subclass("xarm")
@dataclass
class XarmEnv(EnvConfig):
task: str | None = "XarmLift-v0"
fps: int = 15
episode_length: int = 200
obs_type: str = "pixels_agent_pos"
render_mode: str = "rgb_array"
visualization_width: int = 384
visualization_height: int = 384
features: dict[str, PolicyFeature] = field(
default_factory=lambda: {
"action": PolicyFeature(type=FeatureType.ACTION, shape=(4,)),
"pixels": PolicyFeature(type=FeatureType.VISUAL, shape=(84, 84, 3)),
}
)
features_map: dict[str, str] = field(
default_factory=lambda: {
"action": ACTION,
"agent_pos": OBS_STATE,
"pixels": OBS_IMAGE,
}
)
def __post_init__(self):
if self.obs_type == "pixels_agent_pos":
self.features["agent_pos"] = PolicyFeature(type=FeatureType.STATE, shape=(4,))
@property
def gym_kwargs(self) -> dict:
return {
"obs_type": self.obs_type,
"render_mode": self.render_mode,
"visualization_width": self.visualization_width,
"visualization_height": self.visualization_height,
"max_episode_steps": self.episode_length,
}
@dataclass
class VideoRecordConfig:
"""Configuration for video recording in ManiSkill environments."""
enabled: bool = False
record_dir: str = "videos"
trajectory_name: str = "trajectory"
@dataclass
class EnvTransformConfig:
"""Configuration for environment wrappers."""
# ee_action_space_params: EEActionSpaceConfig = field(default_factory=EEActionSpaceConfig)
control_mode: str = "gamepad"
display_cameras: bool = False
add_joint_velocity_to_observation: bool = False
add_current_to_observation: bool = False
add_ee_pose_to_observation: bool = False
crop_params_dict: dict[str, tuple[int, int, int, int]] | None = None
resize_size: tuple[int, int] | None = None
control_time_s: float = 20.0
fixed_reset_joint_positions: Any | None = None
reset_time_s: float = 5.0
use_gripper: bool = True
gripper_quantization_threshold: float | None = 0.8
gripper_penalty: float = 0.0
gripper_penalty_in_reward: bool = False
@EnvConfig.register_subclass(name="gym_manipulator")
@dataclass
class HILSerlRobotEnvConfig(EnvConfig):
"""Configuration for the HILSerlRobotEnv environment."""
robot: RobotConfig | None = None
teleop: TeleoperatorConfig | None = None
wrapper: EnvTransformConfig | None = None
fps: int = 10
name: str = "real_robot"
mode: str | None = None # Either "record", "replay", None
repo_id: str | None = None
dataset_root: str | None = None
task: str | None = ""
num_episodes: int = 10 # only for record mode
episode: int = 0
device: str = "cuda"
push_to_hub: bool = True
pretrained_policy_name_or_path: str | None = None
reward_classifier_pretrained_path: str | None = None
# For the reward classifier, to record more positive examples after a success
number_of_steps_after_success: int = 0
@property
def gym_kwargs(self) -> dict:
return {}
@EnvConfig.register_subclass("hil")
@dataclass
class HILEnvConfig(EnvConfig):
"""Configuration for the HIL environment."""
name: str = "PandaPickCube"
task: str | None = "PandaPickCubeKeyboard-v0"
use_viewer: bool = True
gripper_penalty: float = 0.0
use_gamepad: bool = True
state_dim: int = 18
action_dim: int = 4
fps: int = 100
episode_length: int = 100
video_record: VideoRecordConfig = field(default_factory=VideoRecordConfig)
features: dict[str, PolicyFeature] = field(
default_factory=lambda: {
"action": PolicyFeature(type=FeatureType.ACTION, shape=(4,)),
"observation.image": PolicyFeature(type=FeatureType.VISUAL, shape=(3, 128, 128)),
"observation.state": PolicyFeature(type=FeatureType.STATE, shape=(18,)),
}
)
features_map: dict[str, str] = field(
default_factory=lambda: {
"action": ACTION,
"observation.image": OBS_IMAGE,
"observation.state": OBS_STATE,
}
)
################# args from hilserlrobotenv
reward_classifier_pretrained_path: str | None = None
robot_config: RobotConfig | None = None
teleop_config: TeleoperatorConfig | None = None
wrapper: EnvTransformConfig | None = None
mode: str | None = None # Either "record", "replay", None
repo_id: str | None = None
dataset_root: str | None = None
num_episodes: int = 10 # only for record mode
episode: int = 0
device: str = "cuda"
push_to_hub: bool = True
pretrained_policy_name_or_path: str | None = None
# For the reward classifier, to record more positive examples after a success
number_of_steps_after_success: int = 0
############################
@property
def gym_kwargs(self) -> dict:
return {
"use_viewer": self.use_viewer,
"use_gamepad": self.use_gamepad,
"gripper_penalty": self.gripper_penalty,
}
[source file: lerobot/src/lerobot/envs/configs.py]
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from transformers import GemmaConfig, PaliGemmaConfig
def get_paligemma_config(precision: str):
config = {
"image_token_index": None,
"pad_token_id": 0,
"bos_token_id": 2,
"eos_token_id": 1,
}
# image_sizes = {"2b-test": 224, "3b-224px": 224, "3b-448px": 448, "3b-896px": 896}
image_size = 224 # image_sizes[variant]
patch_size = 14
num_image_tokens = (image_size**2) // (patch_size**2)
config["image_token_index"] = 257152
text_config = {
"vocab_size": 257152,
"num_hidden_layers": 18,
"num_key_value_heads": 1,
"head_dim": 256,
"torch_dtype": precision,
"hidden_size": 2048,
"hidden_activation": "gelu_pytorch_tanh",
"num_attention_heads": 8,
"intermediate_size": 16384,
"is_encoder_decoder": False,
}
vision_config = {
"torch_dtype": precision,
"image_size": image_size,
"patch_size": patch_size,
"num_image_tokens": num_image_tokens,
"hidden_size": 1152,
"intermediate_size": 4304,
"num_hidden_layers": 27,
"num_attention_heads": 16,
"projector_hidden_act": "gelu_fast",
"vision_use_head": False,
}
final_config = PaliGemmaConfig(text_config=text_config, vision_config=vision_config, **config)
return final_config
def get_gemma_config(precision: str):
config = {
"image_token_index": None,
"pad_token_id": 0,
"bos_token_id": 2,
"eos_token_id": 1,
}
config["image_token_index"] = 257152
text_config = {
"vocab_size": 257152,
"num_hidden_layers": 18,
"num_key_value_heads": 1,
"head_dim": 256,
"torch_dtype": precision,
"hidden_size": 1024,
"hidden_activation": "gelu_pytorch_tanh",
"num_attention_heads": 8,
"intermediate_size": 4096,
"is_encoder_decoder": False,
}
final_config = GemmaConfig()
final_config.update(text_config)
return final_config
[source file: lerobot/src/lerobot/policies/pi0/conversion_scripts/conversion_utils.py]
## Paper
https://www.nicklashansen.com/td-mpc/
## Citation
```bibtex
@inproceedings{Hansen2022tdmpc,
title={Temporal Difference Learning for Model Predictive Control},
author={Nicklas Hansen and Xiaolong Wang and Hao Su},
booktitle={ICML},
year={2022}
}
```
[source file: lerobot/src/lerobot/policies/tdmpc/README.md]
from .config import RobotConfig
from .robot import Robot
from .utils import make_robot_from_config
[source file: lerobot/src/lerobot/robots/__init__.py]
# LeKiwi
In the steps below, we explain how to assemble the LeKiwi mobile robot.
## Source the parts
Follow this [README](https://github.com/SIGRobotics-UIUC/LeKiwi). It contains the bill of materials, with a link to source the parts, as well as the instructions to 3D print the parts.
It also contains advice in case it's your first time printing or you don't own a 3D printer.
### Wired version
If you have the **wired** LeKiwi version, you can skip the installation of the Raspberry Pi and setting up SSH. You can also run all commands directly on your PC for both the LeKiwi scripts and the leader arm scripts for teleoperating.
## Install software on Pi
Now we have to set up the remote PC that runs on the LeKiwi robot. This is normally a Raspberry Pi, but it can be any PC that runs on 5V and has enough USB ports (2 or more) for the cameras and the motor control board.
### Install OS
For setting up the Raspberry Pi and its SD-card see: [Setup PI](https://www.raspberrypi.com/documentation/computers/getting-started.html). Here is explained how to download the [Imager](https://www.raspberrypi.com/software/) to install Raspberry Pi OS or Ubuntu.
### Setup SSH
After setting up your Pi, you should enable and set up [SSH](https://www.raspberrypi.com/news/coding-on-raspberry-pi-remotely-with-visual-studio-code/) (Secure Shell Protocol) so you can log in to the Pi from your laptop without requiring a screen, keyboard, and mouse on the Pi. A great tutorial on how to do this can be found [here](https://www.raspberrypi.com/documentation/computers/remote-access.html#ssh). Logging into your Pi can be done in your Command Prompt (cmd) or, if you use VSCode you can use [this](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-ssh) extension.
### Install LeRobot on Pi 🤗
On your Raspberry Pi install LeRobot using our [Installation Guide](./installation)
In addition to these instructions, you need to install the Feetech SDK & ZeroMQ on your Pi:
```bash
pip install -e ".[lekiwi]"
```
## Install LeRobot locally
If you have already installed LeRobot on your laptop/pc you can skip this step; otherwise, please follow along as we do the same steps we did on the Pi.
Follow our [Installation Guide](./installation)
In addition to these instructions, you need to install the Feetech SDK & ZeroMQ on your laptop/pc:
```bash
pip install -e ".[lekiwi]"
```
Great :hugs:! You are now done installing LeRobot, and we can begin assembling the SO100/SO101 arms and the mobile base :robot:.
Every time you now want to use LeRobot, you can go to the `~/lerobot` folder where we installed LeRobot and run one of the commands.
# Step-by-Step Assembly Instructions
First, we will assemble the two SO100/SO101 arms. One to attach to the mobile base and one for teleoperation. Then we will assemble the mobile base. The instructions for assembling can be found on these two pages:
- [Assemble SO101](./so101#step-by-step-assembly-instructions)
- [Assemble LeKiwi](https://github.com/SIGRobotics-UIUC/LeKiwi/blob/main/Assembly.md)
### Find the USB ports associated with motor board
To find the port for each bus servo adapter, run this script:
```bash
lerobot-find-port
```
<hfoptions id="example">
<hfoption id="Mac">
Example output:
```
Finding all available ports for the MotorBus.
['/dev/tty.usbmodem575E0032081']
Remove the USB cable from your MotorsBus and press Enter when done.
[...Disconnect corresponding leader or follower arm and press Enter...]
The port of this MotorsBus is /dev/tty.usbmodem575E0032081
Reconnect the USB cable.
```
Where the found port is: `/dev/tty.usbmodem575E0032081` corresponding to your board.
</hfoption>
<hfoption id="Linux">
On Linux, you might need to give access to the USB ports by running:
```bash
sudo chmod 666 /dev/ttyACM0
sudo chmod 666 /dev/ttyACM1
```
Example output:
```
Finding all available ports for the MotorBus.
['/dev/ttyACM0']
Remove the usb cable from your MotorsBus and press Enter when done.
[...Disconnect corresponding leader or follower arm and press Enter...]
The port of this MotorsBus is /dev/ttyACM0
Reconnect the USB cable.
```
Where the found port is: `/dev/ttyACM0` corresponding to your board.
</hfoption>
</hfoptions>
### Configure motors
The instructions for configuring the motors can be found in the SO101 [docs](./so101#configure-the-motors). Besides the IDs for the arm motors, we also need to set the motor IDs for the mobile base. These need to be in a specific order to work. Below is an image of the motor IDs and motor mounting positions for the mobile base. Note that we only use one motor control board on LeKiwi, which means the motor IDs for the wheels are 7, 8 and 9.
You can run this command to set up the motors for LeKiwi. It will first set up the arm motors (IDs 6 down to 1) and then the wheel motors (IDs 9, 8, 7):
```bash
lerobot-setup-motors \
--robot.type=lekiwi \
--robot.port=/dev/tty.usbmodem58760431551 # <- paste here the port found at previous step
```
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/motor_ids.webp" alt="Motor ID's for mobile robot" title="Motor ID's for mobile robot" width="60%">
### Troubleshoot communication
If you are having trouble connecting to the Mobile SO100, follow these steps to diagnose and resolve the issue.
#### 1. Verify IP Address Configuration
Make sure that the correct IP for the Pi is used in the commands or in your code. To check the Raspberry Pi's IP address, run (on the Pi command line):
```bash
hostname -I
```
#### 2. Check if Pi is reachable from laptop/pc
Try pinging the Raspberry Pi from your laptop:
```bash
ping <your_pi_ip_address>
```
If the ping fails:
- Ensure the Pi is powered on and connected to the same network.
- Check if SSH is enabled on the Pi.
#### 3. Try SSH connection
If you can't SSH into the Pi, it might not be properly connected. Use:
```bash
ssh <your_pi_user_name>@<your_pi_ip_address>
```
If you get a connection error:
- Ensure SSH is enabled on the Pi by running:
```bash
sudo raspi-config
```
Then navigate to: **Interfacing Options -> SSH** and enable it.
### Calibration
Now we have to calibrate the leader arm and the follower arm. The wheel motors don't have to be calibrated.
The calibration process is very important because it allows a neural network trained on one robot to work on another.
### Calibrate follower arm (on mobile base)
Make sure the arm is connected to the Raspberry Pi and run this script or API example (on the Raspberry Pi via SSH) to launch calibration of the follower arm:
```bash
lerobot-calibrate \
--robot.type=lekiwi \
--robot.id=my_awesome_kiwi # <- Give the robot a unique name
```
We unified the calibration method for most robots, so the calibration steps for this SO100 arm are the same as for the Koch and SO101. First, move the robot to the position where each joint is in the middle of its range, then press `Enter`. Second, move all joints through their full range of motion. A video of this same process for the SO101 can be found [here](https://huggingface.co/docs/lerobot/en/so101#calibration-video) as a reference.
### Wired version
If you have the **wired** LeKiwi version, please run all commands on your laptop.
### Calibrate leader arm
Then calibrate the leader arm, which is attached to your laptop/pc. Run the following command or API example on your laptop:
<hfoptions id="calibrate_leader">
<hfoption id="Command">
```bash
lerobot-calibrate \
--teleop.type=so100_leader \
--teleop.port=/dev/tty.usbmodem58760431551 \ # <- The port of your robot
--teleop.id=my_awesome_leader_arm # <- Give the robot a unique name
```
</hfoption>
<hfoption id="API example">
<!-- prettier-ignore-start -->
```python
from lerobot.teleoperators.so100_leader import SO100LeaderConfig, SO100Leader
config = SO100LeaderConfig(
port="/dev/tty.usbmodem58760431551",
id="my_awesome_leader_arm",
)
leader = SO100Leader(config)
leader.connect(calibrate=False)
leader.calibrate()
leader.disconnect()
```
<!-- prettier-ignore-end -->
</hfoption>
</hfoptions>
## Teleoperate LeKiwi
> [!TIP]
> If you're using a Mac, you might need to give Terminal permission to access your keyboard for teleoperation. Go to System Preferences > Security & Privacy > Input Monitoring and check the box for Terminal.
To teleoperate, SSH into your Raspberry Pi, and run `conda activate lerobot` and this command:
```bash
python -m lerobot.robots.lekiwi.lekiwi_host --robot.id=my_awesome_kiwi
```
Then on your laptop, also run `conda activate lerobot` and run the API example; make sure you set the correct `remote_ip` and `port` in `examples/lekiwi/teleoperate.py`.
```bash
python examples/lekiwi/teleoperate.py
```
You should see something like this on your laptop: `[INFO] Connected to remote robot at tcp://172.17.133.91:5555 and video stream at tcp://172.17.133.91:5556.` Now you can move the leader arm and use the keyboard (w,a,s,d) to drive forward, left, backward and right, and use (z,x) to turn left or right. You can use (r,f) to increase and decrease the speed of the mobile robot. There are three speed modes, see the table below:
| Speed Mode | Linear Speed (m/s) | Rotation Speed (deg/s) |
| ---------- | ------------------ | ---------------------- |
| Fast | 0.4 | 90 |
| Medium | 0.25 | 60 |
| Slow | 0.1 | 30 |
| Key | Action |
| --- | -------------- |
| W | Move forward |
| A | Move left |
| S | Move backward |
| D | Move right |
| Z | Turn left |
| X | Turn right |
| R | Increase speed |
| F | Decrease speed |
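As a rough sketch (not the actual LeRobot implementation — the real mapping lives in `LeKiwiClientConfig`), the key bindings and speed modes above could be modeled like this; all names here are illustrative:

```python
# Hypothetical model of the LeKiwi teleop key bindings and speed modes
# described in the tables above. Index 0 is the fastest mode.
SPEED_MODES = [
    {"name": "fast", "linear_m_s": 0.4, "rotation_deg_s": 90},
    {"name": "medium", "linear_m_s": 0.25, "rotation_deg_s": 60},
    {"name": "slow", "linear_m_s": 0.1, "rotation_deg_s": 30},
]

KEY_TO_ACTION = {
    "w": "forward", "a": "left", "s": "backward", "d": "right",
    "z": "turn_left", "x": "turn_right",
    "r": "speed_up", "f": "speed_down",
}

def step_speed(index: int, key: str) -> int:
    """Return the new speed-mode index after a key press (0 = fastest)."""
    action = KEY_TO_ACTION.get(key)
    if action == "speed_up":
        return max(index - 1, 0)
    if action == "speed_down":
        return min(index + 1, len(SPEED_MODES) - 1)
    return index
```

Pressing `r` repeatedly then moves you toward the fast mode and `f` toward the slow mode, clamping at either end.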
> [!TIP]
> If you use a different keyboard, you can change the keys for each command in the [`LeKiwiClientConfig`](https://github.com/huggingface/lerobot/blob/main/src/lerobot/robots/lekiwi/config_lekiwi.py).
### Wired version
If you have the **wired** LeKiwi version, please run all commands on your laptop.
## Record a dataset
Once you're familiar with teleoperation, you can record your first dataset.
We use the Hugging Face Hub features for uploading your dataset. If you haven't used the Hub before, make sure you can log in via the CLI using a write-access token; this token can be generated from the [Hugging Face settings](https://huggingface.co/settings/tokens).
Add your token to the CLI by running this command:
```bash
huggingface-cli login --token ${HUGGINGFACE_TOKEN} --add-to-git-credential
```
Then store your Hugging Face repository name in a variable:
```bash
HF_USER=$(huggingface-cli whoami | head -n 1)
echo $HF_USER
```
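The record script then expects a `repo_id` of the form `<hub-username>/<dataset-name>`. A minimal illustrative sketch (the username and dataset name below are placeholders):

```shell
# Illustrative only: compose the dataset repo_id used when recording.
# In practice HF_USER comes from `huggingface-cli whoami` as shown above.
HF_USER="cadene"                 # placeholder username
REPO_ID="${HF_USER}/lekiwi_test" # placeholder dataset name
echo "$REPO_ID"                  # prints cadene/lekiwi_test
```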
Now you can record a dataset. To record episodes and upload your dataset to the hub, execute this API example tailored for LeKiwi. Make sure to first adapt the `remote_ip`, `repo_id`, `port` and `task` in the script. If you would like to run the script for longer you can increase `NB_CYCLES_CLIENT_CONNECTION`.
```bash
python examples/lekiwi/record.py
```
#### Dataset upload
Locally, your dataset is stored in this folder: `~/.cache/huggingface/lerobot/{repo-id}`. At the end of data recording, your dataset will be uploaded to your Hugging Face page (e.g. https://huggingface.co/datasets/cadene/so101_test), whose URL you can obtain by running:
```bash
echo https://huggingface.co/datasets/${HF_USER}/so101_test
```
Your dataset will be automatically tagged with `LeRobot` for the community to find it easily, and you can also add custom tags (in this case `tutorial` for example).
You can look for other LeRobot datasets on the hub by searching for `LeRobot` [tags](https://huggingface.co/datasets?other=LeRobot).
#### Tips for gathering data
Once you're comfortable with data recording, you can create a larger dataset for training. A good starting task is grasping an object at different locations and placing it in a bin. We suggest recording at least 50 episodes, with 10 episodes per location. Keep the cameras fixed and maintain consistent grasping behavior throughout the recordings. Also make sure the object you are manipulating is visible on the cameras. A good rule of thumb is that you should be able to do the task yourself by only looking at the camera images.
In the following sections, you'll train your neural network. After achieving reliable grasping performance, you can start introducing more variations during data collection, such as additional grasp locations, different grasping techniques, and altering camera positions.
Avoid adding too much variation too quickly, as it may hinder your results.
If you want to dive deeper into this important topic, you can check out the [blog post](https://huggingface.co/blog/lerobot-datasets#what-makes-a-good-dataset) we wrote on what makes a good dataset.
#### Troubleshooting:
- On Linux, if the left and right arrow keys and escape key don't have any effect during data recording, make sure you've set the `$DISPLAY` environment variable. See [pynput limitations](https://pynput.readthedocs.io/en/latest/limitations.html#linux).
## Replay an episode
To replay an episode, run the API example below. Make sure to change `remote_ip`, `port`, the `LeRobotDataset` repo id, and the episode index.
```bash
python examples/lekiwi/replay.py
```
Congrats 🎉, your robot is all set to learn a task on its own. Start training it by following the training part of this tutorial: [Getting started with real-world robots](./getting_started_real_world_robot)
## Evaluate your policy
To evaluate your policy, run the `evaluate.py` API example; make sure to change `remote_ip`, `port`, and the model path.
```bash
python examples/lekiwi/evaluate.py
```
> [!TIP]
> If you have any questions or need help, please reach out on [Discord](https://discord.com/invite/s3KuuzsPFb).
(Source: lerobot/src/lerobot/robots/lekiwi/lekiwi.mdx)
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from dataclasses import dataclass, field
from lerobot.cameras import CameraConfig
from lerobot.cameras.opencv import OpenCVCameraConfig
from lerobot.cameras.realsense import RealSenseCameraConfig
from ..config import RobotConfig
@RobotConfig.register_subclass("stretch3")
@dataclass
class Stretch3RobotConfig(RobotConfig):
# `max_relative_target` limits the magnitude of the relative positional target vector for safety purposes.
# Set this to a positive scalar to have the same value for all motors, or a list that is the same length as
# the number of motors in your follower arms.
max_relative_target: int | None = None
# cameras
cameras: dict[str, CameraConfig] = field(
default_factory=lambda: {
"navigation": OpenCVCameraConfig(
index_or_path="/dev/hello-nav-head-camera",
fps=10,
width=1280,
height=720,
rotation=-90,
),
"head": RealSenseCameraConfig(
name="Intel RealSense D435I",
fps=30,
width=640,
height=480,
rotation=90,
),
"wrist": RealSenseCameraConfig(
name="Intel RealSense D405",
fps=30,
width=640,
height=480,
),
}
)
mock: bool = False
(Source: lerobot/src/lerobot/robots/stretch3/configuration_stretch3.py)
# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from collections.abc import Callable
from dataclasses import dataclass, field
import torch
from lerobot.robots.config import RobotConfig
from lerobot.scripts.server.constants import (
DEFAULT_FPS,
DEFAULT_INFERENCE_LATENCY,
DEFAULT_OBS_QUEUE_TIMEOUT,
)
# Aggregate function registry for CLI usage
AGGREGATE_FUNCTIONS = {
"weighted_average": lambda old, new: 0.3 * old + 0.7 * new,
"latest_only": lambda old, new: new,
"average": lambda old, new: 0.5 * old + 0.5 * new,
"conservative": lambda old, new: 0.7 * old + 0.3 * new,
}
def get_aggregate_function(name: str) -> Callable[[torch.Tensor, torch.Tensor], torch.Tensor]:
"""Get aggregate function by name from registry."""
if name not in AGGREGATE_FUNCTIONS:
available = list(AGGREGATE_FUNCTIONS.keys())
raise ValueError(f"Unknown aggregate function '{name}'. Available: {available}")
return AGGREGATE_FUNCTIONS[name]
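For intuition, here is the registry applied to plain floats. The real functions operate on `torch.Tensor` action chunks, but the arithmetic is identical:

```python
# Same lambdas as the registry above, applied to scalars for illustration.
AGGREGATE_FUNCTIONS = {
    "weighted_average": lambda old, new: 0.3 * old + 0.7 * new,
    "latest_only": lambda old, new: new,
    "average": lambda old, new: 0.5 * old + 0.5 * new,
    "conservative": lambda old, new: 0.7 * old + 0.3 * new,
}

old, new = 1.0, 3.0
# "weighted_average" favors the fresh prediction: 0.3 * 1.0 + 0.7 * 3.0 ~ 2.4
blended = AGGREGATE_FUNCTIONS["weighted_average"](old, new)
# "conservative" keeps more of the old action: 0.7 * 1.0 + 0.3 * 3.0 ~ 1.6
kept = AGGREGATE_FUNCTIONS["conservative"](old, new)
```

When two action chunks overlap in time, the client aggregates the overlapping steps with the function selected by `aggregate_fn_name` rather than discarding either prediction.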
@dataclass
class PolicyServerConfig:
"""Configuration for PolicyServer.
This class defines all configurable parameters for the PolicyServer,
including networking settings and action chunking specifications.
"""
# Networking configuration
host: str = field(default="localhost", metadata={"help": "Host address to bind the server to"})
port: int = field(default=8080, metadata={"help": "Port number to bind the server to"})
# Timing configuration
fps: int = field(default=DEFAULT_FPS, metadata={"help": "Frames per second"})
inference_latency: float = field(
default=DEFAULT_INFERENCE_LATENCY, metadata={"help": "Target inference latency in seconds"}
)
obs_queue_timeout: float = field(
default=DEFAULT_OBS_QUEUE_TIMEOUT, metadata={"help": "Timeout for observation queue in seconds"}
)
def __post_init__(self):
"""Validate configuration after initialization."""
if self.port < 1 or self.port > 65535:
raise ValueError(f"Port must be between 1 and 65535, got {self.port}")
if self.environment_dt <= 0:
raise ValueError(f"environment_dt must be positive, got {self.environment_dt}")
if self.inference_latency < 0:
raise ValueError(f"inference_latency must be non-negative, got {self.inference_latency}")
if self.obs_queue_timeout < 0:
raise ValueError(f"obs_queue_timeout must be non-negative, got {self.obs_queue_timeout}")
@classmethod
def from_dict(cls, config_dict: dict) -> "PolicyServerConfig":
"""Create a PolicyServerConfig from a dictionary."""
return cls(**config_dict)
@property
def environment_dt(self) -> float:
"""Environment time step, in seconds"""
return 1 / self.fps
def to_dict(self) -> dict:
"""Convert the configuration to a dictionary."""
return {
"host": self.host,
"port": self.port,
"fps": self.fps,
"environment_dt": self.environment_dt,
"inference_latency": self.inference_latency,
}
@dataclass
class RobotClientConfig:
"""Configuration for RobotClient.
This class defines all configurable parameters for the RobotClient,
including network connection, policy settings, and control behavior.
"""
# Policy configuration
policy_type: str = field(metadata={"help": "Type of policy to use"})
pretrained_name_or_path: str = field(metadata={"help": "Pretrained model name or path"})
# Robot configuration (for CLI usage - robot instance will be created from this)
robot: RobotConfig = field(metadata={"help": "Robot configuration"})
# Policies typically output K actions at max, but we can use less to avoid wasting bandwidth (as actions
# would be aggregated on the client side anyway, depending on the value of `chunk_size_threshold`)
actions_per_chunk: int = field(metadata={"help": "Number of actions per chunk"})
# Task instruction for the robot to execute (e.g., 'fold my tshirt')
task: str = field(default="", metadata={"help": "Task instruction for the robot to execute"})
# Network configuration
server_address: str = field(default="localhost:8080", metadata={"help": "Server address to connect to"})
# Device configuration
policy_device: str = field(default="cpu", metadata={"help": "Device for policy inference"})
# Control behavior configuration
chunk_size_threshold: float = field(default=0.5, metadata={"help": "Threshold for chunk size control"})
fps: int = field(default=DEFAULT_FPS, metadata={"help": "Frames per second"})
# Aggregate function configuration (CLI-compatible)
aggregate_fn_name: str = field(
default="weighted_average",
metadata={"help": f"Name of aggregate function to use. Options: {list(AGGREGATE_FUNCTIONS.keys())}"},
)
# Debug configuration
debug_visualize_queue_size: bool = field(
default=False, metadata={"help": "Visualize the action queue size"}
)
# Verification configuration
verify_robot_cameras: bool = field(
default=True, metadata={"help": "Verify that the robot cameras match the policy cameras"}
)
@property
def environment_dt(self) -> float:
"""Environment time step, in seconds"""
return 1 / self.fps
def __post_init__(self):
"""Validate configuration after initialization."""
if not self.server_address:
raise ValueError("server_address cannot be empty")
if not self.policy_type:
raise ValueError("policy_type cannot be empty")
if not self.pretrained_name_or_path:
raise ValueError("pretrained_name_or_path cannot be empty")
if not self.policy_device:
raise ValueError("policy_device cannot be empty")
if self.chunk_size_threshold < 0 or self.chunk_size_threshold > 1:
raise ValueError(f"chunk_size_threshold must be between 0 and 1, got {self.chunk_size_threshold}")
if self.fps <= 0:
raise ValueError(f"fps must be positive, got {self.fps}")
if self.actions_per_chunk <= 0:
raise ValueError(f"actions_per_chunk must be positive, got {self.actions_per_chunk}")
self.aggregate_fn = get_aggregate_function(self.aggregate_fn_name)
@classmethod
def from_dict(cls, config_dict: dict) -> "RobotClientConfig":
"""Create a RobotClientConfig from a dictionary."""
return cls(**config_dict)
def to_dict(self) -> dict:
"""Convert the configuration to a dictionary."""
return {
"server_address": self.server_address,
"policy_type": self.policy_type,
"pretrained_name_or_path": self.pretrained_name_or_path,
"policy_device": self.policy_device,
"chunk_size_threshold": self.chunk_size_threshold,
"fps": self.fps,
"actions_per_chunk": self.actions_per_chunk,
"task": self.task,
"debug_visualize_queue_size": self.debug_visualize_queue_size,
"aggregate_fn_name": self.aggregate_fn_name,
}
(Source: lerobot/src/lerobot/scripts/server/configs.py)
// Copyright 2024 The HuggingFace Inc. team.
// All rights reserved.
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
// http://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// To generate the classes for the transport part (services_pb2.py and services_pb2_grpc.py) use the following command:
//
// python -m grpc_tools.protoc -I src --python_out=src --grpc_python_out=src src/lerobot/transport/services.proto
//
// The command should be launched from the root of the project.
syntax = "proto3";
package transport;
// LearnerService: the Actor calls this to push transitions.
// The Learner implements this service.
service LearnerService {
// Actor -> Learner to store transitions
rpc StreamParameters(Empty) returns (stream Parameters);
rpc SendTransitions(stream Transition) returns (Empty);
rpc SendInteractions(stream InteractionMessage) returns (Empty);
rpc Ready(Empty) returns (Empty);
}
// AsyncInference: from Robot perspective
// Robot send observations to & executes action received from a remote Policy server
service AsyncInference {
// Robot -> Policy to share observations with a remote inference server
// Policy -> Robot to share actions predicted for given observations
rpc SendObservations(stream Observation) returns (Empty);
rpc GetActions(Empty) returns (Actions);
rpc SendPolicyInstructions(PolicySetup) returns (Empty);
rpc Ready(Empty) returns (Empty);
}
enum TransferState {
TRANSFER_UNKNOWN = 0;
TRANSFER_BEGIN = 1;
TRANSFER_MIDDLE = 2;
TRANSFER_END = 3;
}
// Messages
message Transition {
TransferState transfer_state = 1;
bytes data = 2;
}
message Parameters {
TransferState transfer_state = 1;
bytes data = 2;
}
message InteractionMessage {
TransferState transfer_state = 1;
bytes data = 2;
}
// Messages
message Observation {
// sent by Robot, to remote Policy
TransferState transfer_state = 1; // observations may exceed gRPC's 4 MB default message size, so they are streamed in chunks
bytes data = 2;
}
message Actions {
// sent by remote Policy, to Robot
bytes data = 1;
}
message PolicySetup {
// sent by Robot to remote server, to init Policy
bytes data = 1;
}
message Empty {}
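The `TransferState` enum exists because a single observation can exceed gRPC's 4 MB default message limit, so a payload is streamed as BEGIN/MIDDLE/END chunks. A hedged Python sketch of such chunking follows; the chunk size and the single-chunk convention (a lone chunk is tagged END) are illustrative assumptions, not taken from LeRobot's transport code:

```python
TRANSFER_BEGIN, TRANSFER_MIDDLE, TRANSFER_END = 1, 2, 3  # values from the enum above

def chunk_payload(data: bytes, chunk_size: int = 3) -> list[tuple[int, bytes]]:
    """Split a payload into (transfer_state, chunk) pairs for streaming."""
    chunks = [data[i : i + chunk_size] for i in range(0, len(data), chunk_size)] or [b""]
    tagged = []
    for i, chunk in enumerate(chunks):
        if i == len(chunks) - 1:
            state = TRANSFER_END  # the last chunk always closes the transfer
        elif i == 0:
            state = TRANSFER_BEGIN
        else:
            state = TRANSFER_MIDDLE
        tagged.append((state, chunk))
    return tagged

chunk_payload(b"abcdefg")  # [(1, b'abc'), (2, b'def'), (3, b'g')]
```

The receiver can reassemble by buffering until it sees `TRANSFER_END`, which is why the `Observation` message carries a `transfer_state` alongside its `data` bytes.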
(Source: lerobot/src/lerobot/transport/services.proto)
#!/usr/bin/env python
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from pathlib import Path
from termcolor import colored
from torch.optim import Optimizer
from torch.optim.lr_scheduler import LRScheduler
from lerobot.configs.train import TrainPipelineConfig
from lerobot.constants import (
CHECKPOINTS_DIR,
LAST_CHECKPOINT_LINK,
PRETRAINED_MODEL_DIR,
TRAINING_STATE_DIR,
TRAINING_STEP,
)
from lerobot.datasets.utils import load_json, write_json
from lerobot.optim.optimizers import load_optimizer_state, save_optimizer_state
from lerobot.optim.schedulers import load_scheduler_state, save_scheduler_state
from lerobot.policies.pretrained import PreTrainedPolicy
from lerobot.utils.random_utils import load_rng_state, save_rng_state
def log_output_dir(out_dir):
logging.info(colored("Output dir:", "yellow", attrs=["bold"]) + f" {out_dir}")
def get_step_identifier(step: int, total_steps: int) -> str:
num_digits = max(6, len(str(total_steps)))
return f"{step:0{num_digits}d}"
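`get_step_identifier` zero-pads the step number so checkpoint directory names sort lexicographically in step order; here it is again with a couple of illustrative calls:

```python
def get_step_identifier(step: int, total_steps: int) -> str:
    # At least 6 digits, widened if total_steps needs more.
    num_digits = max(6, len(str(total_steps)))
    return f"{step:0{num_digits}d}"

get_step_identifier(5000, 100_000)     # '005000'
get_step_identifier(5000, 10_000_000)  # '00005000' (8 digits, since total_steps has 8)
```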
def get_step_checkpoint_dir(output_dir: Path, total_steps: int, step: int) -> Path:
"""Returns the checkpoint sub-directory corresponding to the step number."""
step_identifier = get_step_identifier(step, total_steps)
return output_dir / CHECKPOINTS_DIR / step_identifier
def save_training_step(step: int, save_dir: Path) -> None:
write_json({"step": step}, save_dir / TRAINING_STEP)
def load_training_step(save_dir: Path) -> int:
training_step = load_json(save_dir / TRAINING_STEP)
return training_step["step"]
def update_last_checkpoint(checkpoint_dir: Path) -> None:
last_checkpoint_dir = checkpoint_dir.parent / LAST_CHECKPOINT_LINK
if last_checkpoint_dir.is_symlink():
last_checkpoint_dir.unlink()
relative_target = checkpoint_dir.relative_to(checkpoint_dir.parent)
last_checkpoint_dir.symlink_to(relative_target)
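`update_last_checkpoint` uses a *relative* symlink target so the checkpoints tree can be moved or copied without breaking the `last` link. A small standalone reproduction of that pattern (requires a platform with symlink support; the directory names are illustrative):

```python
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    step_dir = root / "005000"
    step_dir.mkdir()
    last = root / "last"
    if last.is_symlink():
        last.unlink()
    # Target is just "005000", not an absolute path.
    last.symlink_to(step_dir.relative_to(root))
    resolved = last.resolve().name

# resolved == "005000"
```

Because the link stores only `005000`, renaming or relocating the parent directory keeps `last` pointing at the right checkpoint.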
def save_checkpoint(
checkpoint_dir: Path,
step: int,
cfg: TrainPipelineConfig,
policy: PreTrainedPolicy,
optimizer: Optimizer,
scheduler: LRScheduler | None = None,
) -> None:
"""This function creates the following directory structure:
005000/  # training step at checkpoint
├── pretrained_model/
│   ├── config.json  # policy config
│   ├── model.safetensors  # policy weights
│   └── train_config.json  # train config
└── training_state/
    ├── optimizer_param_groups.json  # optimizer param groups
    ├── optimizer_state.safetensors  # optimizer state
    ├── rng_state.safetensors  # rng states
    ├── scheduler_state.json  # scheduler state
    └── training_step.json  # training step
Args:
cfg (TrainPipelineConfig): The training config used for this run.
step (int): The training step at that checkpoint.
policy (PreTrainedPolicy): The policy to save.
optimizer (Optimizer): The optimizer to save the state from.
scheduler (LRScheduler | None, optional): The scheduler to save the state from. Defaults to None.
"""
pretrained_dir = checkpoint_dir / PRETRAINED_MODEL_DIR
policy.save_pretrained(pretrained_dir)
cfg.save_pretrained(pretrained_dir)
save_training_state(checkpoint_dir, step, optimizer, scheduler)
def save_training_state(
checkpoint_dir: Path,
train_step: int,
optimizer: Optimizer | None = None,
scheduler: LRScheduler | None = None,
) -> None:
"""
Saves the training step, optimizer state, scheduler state, and rng state.
Args:
checkpoint_dir (Path): The checkpoint directory; artifacts are saved under its 'training_state' subdirectory.
train_step (int): Current training step.
optimizer (Optimizer | None, optional): The optimizer from which to save the state_dict.
Defaults to None.
scheduler (LRScheduler | None, optional): The scheduler from which to save the state_dict.
Defaults to None.
"""
save_dir = checkpoint_dir / TRAINING_STATE_DIR
save_dir.mkdir(parents=True, exist_ok=True)
save_training_step(train_step, save_dir)
save_rng_state(save_dir)
if optimizer is not None:
save_optimizer_state(optimizer, save_dir)
if scheduler is not None:
save_scheduler_state(scheduler, save_dir)
def load_training_state(
checkpoint_dir: Path, optimizer: Optimizer, scheduler: LRScheduler | None
) -> tuple[int, Optimizer, LRScheduler | None]:
"""
Loads the training step, optimizer state, scheduler state, and rng state.
This is used to resume a training run.
Args:
checkpoint_dir (Path): The checkpoint directory. Should contain a 'training_state' dir.
optimizer (Optimizer): The optimizer to load the state_dict to.
scheduler (LRScheduler | None): The scheduler to load the state_dict to (can be None).
Raises:
NotADirectoryError: If 'checkpoint_dir' doesn't contain a 'training_state' dir
Returns:
tuple[int, Optimizer, LRScheduler | None]: training step, optimizer and scheduler with their
state_dict loaded.
"""
training_state_dir = checkpoint_dir / TRAINING_STATE_DIR
if not training_state_dir.is_dir():
raise NotADirectoryError(training_state_dir)
load_rng_state(training_state_dir)
step = load_training_step(training_state_dir)
optimizer = load_optimizer_state(optimizer, training_state_dir)
if scheduler is not None:
scheduler = load_scheduler_state(scheduler, training_state_dir)
return step, optimizer, scheduler
(Source: lerobot/src/lerobot/utils/train_utils.py)
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from itertools import accumulate
import datasets
import numpy as np
import pyarrow.compute as pc
import pytest
import torch
from lerobot.datasets.utils import (
check_delta_timestamps,
check_timestamps_sync,
get_delta_indices,
)
from tests.fixtures.constants import DUMMY_MOTOR_FEATURES
def calculate_total_episode(
hf_dataset: datasets.Dataset, raise_if_not_contiguous: bool = True
) -> int:
episode_indices = sorted(hf_dataset.unique("episode_index"))
total_episodes = len(episode_indices)
if raise_if_not_contiguous and episode_indices != list(range(total_episodes)):
raise ValueError("episode_index values are not sorted and contiguous.")
return total_episodes
def calculate_episode_data_index(hf_dataset: datasets.Dataset) -> dict[str, np.ndarray]:
episode_lengths = []
table = hf_dataset.data.table
total_episodes = calculate_total_episode(hf_dataset)
for ep_idx in range(total_episodes):
ep_table = table.filter(pc.equal(table["episode_index"], ep_idx))
episode_lengths.insert(ep_idx, len(ep_table))
cumulative_lengths = list(accumulate(episode_lengths))
return {
"from": np.array([0] + cumulative_lengths[:-1], dtype=np.int64),
"to": np.array(cumulative_lengths, dtype=np.int64),
}
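`calculate_episode_data_index` turns per-episode lengths into half-open `[from, to)` frame ranges via a cumulative sum. The same logic on plain lists (NumPy arrays omitted for brevity; the episode lengths are hypothetical):

```python
from itertools import accumulate

# Hypothetical episode lengths; the real function derives them from the dataset.
episode_lengths = [3, 5, 2]
cumulative = list(accumulate(episode_lengths))  # [3, 8, 10]
from_idx = [0] + cumulative[:-1]                # [0, 3, 8]
to_idx = cumulative                             # [3, 8, 10]
# Episode 1 occupies frames from_idx[1]:to_idx[1], i.e. the half-open range 3:8.
```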
@pytest.fixture(scope="module")
def synced_timestamps_factory(hf_dataset_factory):
def _create_synced_timestamps(fps: int = 30) -> tuple[np.ndarray, np.ndarray, np.ndarray]:
hf_dataset = hf_dataset_factory(fps=fps)
timestamps = torch.stack(hf_dataset["timestamp"]).numpy()
episode_indices = torch.stack(hf_dataset["episode_index"]).numpy()
episode_data_index = calculate_episode_data_index(hf_dataset)
return timestamps, episode_indices, episode_data_index
return _create_synced_timestamps
@pytest.fixture(scope="module")
def unsynced_timestamps_factory(synced_timestamps_factory):
def _create_unsynced_timestamps(
fps: int = 30, tolerance_s: float = 1e-4
) -> tuple[np.ndarray, np.ndarray, np.ndarray]:
timestamps, episode_indices, episode_data_index = synced_timestamps_factory(fps=fps)
timestamps[30] += tolerance_s * 1.1 # Modify a single timestamp just outside tolerance
return timestamps, episode_indices, episode_data_index
return _create_unsynced_timestamps
@pytest.fixture(scope="module")
def slightly_off_timestamps_factory(synced_timestamps_factory):
def _create_slightly_off_timestamps(
fps: int = 30, tolerance_s: float = 1e-4
) -> tuple[np.ndarray, np.ndarray, np.ndarray]:
timestamps, episode_indices, episode_data_index = synced_timestamps_factory(fps=fps)
timestamps[30] += tolerance_s * 0.9 # Modify a single timestamp just inside tolerance
return timestamps, episode_indices, episode_data_index
return _create_slightly_off_timestamps
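The three timestamp fixtures above differ only in whether a single timestamp is nudged inside or outside the tolerance. The rule being tested can be sketched as follows; this is a simplified single-episode version, and the real `check_timestamps_sync` in `lerobot.datasets.utils` also handles episode boundaries:

```python
def timestamps_synced(timestamps: list[float], fps: int, tolerance_s: float) -> bool:
    """True if consecutive timestamps differ from 1/fps by at most tolerance_s."""
    dt = 1 / fps
    return all(abs((b - a) - dt) <= tolerance_s for a, b in zip(timestamps, timestamps[1:]))

fps, tol = 30, 1e-4
good = [i / fps for i in range(5)]
bad = list(good)
bad[3] += tol * 1.1  # just outside tolerance, like the unsynced fixture
timestamps_synced(good, fps, tol)  # True
timestamps_synced(bad, fps, tol)   # False
```

Nudging by `tol * 0.9` instead, as the slightly-off fixture does, keeps the deviation within tolerance and the check still passes.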
@pytest.fixture(scope="module")
def valid_delta_timestamps_factory():
def _create_valid_delta_timestamps(
fps: int = 30, keys: list = DUMMY_MOTOR_FEATURES, min_max_range: tuple[int, int] = (-10, 10)
) -> dict:
delta_timestamps = {key: [i * (1 / fps) for i in range(*min_max_range)] for key in keys}
return delta_timestamps
return _create_valid_delta_timestamps
@pytest.fixture(scope="module")
def invalid_delta_timestamps_factory(valid_delta_timestamps_factory):
def _create_invalid_delta_timestamps(
fps: int = 30, tolerance_s: float = 1e-4, keys: list = DUMMY_MOTOR_FEATURES
) -> dict:
delta_timestamps = valid_delta_timestamps_factory(fps, keys)
# Modify a single timestamp just outside tolerance
for key in keys:
delta_timestamps[key][3] += tolerance_s * 1.1
return delta_timestamps
return _create_invalid_delta_timestamps
@pytest.fixture(scope="module")
def slightly_off_delta_timestamps_factory(valid_delta_timestamps_factory):
def _create_slightly_off_delta_timestamps(
fps: int = 30, tolerance_s: float = 1e-4, keys: list = DUMMY_MOTOR_FEATURES
) -> dict:
delta_timestamps = valid_delta_timestamps_factory(fps, keys)
# Modify a single timestamp just inside tolerance
for key in delta_timestamps:
delta_timestamps[key][3] += tolerance_s * 0.9
delta_timestamps[key][-3] += tolerance_s * 0.9
return delta_timestamps
return _create_slightly_off_delta_timestamps
@pytest.fixture(scope="module")
def delta_indices_factory():
def _delta_indices(keys: list = DUMMY_MOTOR_FEATURES, min_max_range: tuple[int, int] = (-10, 10)) -> dict:
return {key: list(range(*min_max_range)) for key in keys}
return _delta_indices
def test_check_timestamps_sync_synced(synced_timestamps_factory):
fps = 30
tolerance_s = 1e-4
timestamps, ep_idx, ep_data_index = synced_timestamps_factory(fps)
result = check_timestamps_sync(
timestamps=timestamps,
episode_indices=ep_idx,
episode_data_index=ep_data_index,
fps=fps,
tolerance_s=tolerance_s,
)
assert result is True
def test_check_timestamps_sync_unsynced(unsynced_timestamps_factory):
fps = 30
tolerance_s = 1e-4
timestamps, ep_idx, ep_data_index = unsynced_timestamps_factory(fps, tolerance_s)
with pytest.raises(ValueError):
check_timestamps_sync(
timestamps=timestamps,
episode_indices=ep_idx,
episode_data_index=ep_data_index,
fps=fps,
tolerance_s=tolerance_s,
)
def test_check_timestamps_sync_unsynced_no_exception(unsynced_timestamps_factory):
fps = 30
tolerance_s = 1e-4
timestamps, ep_idx, ep_data_index = unsynced_timestamps_factory(fps, tolerance_s)
result = check_timestamps_sync(
timestamps=timestamps,
episode_indices=ep_idx,
episode_data_index=ep_data_index,
fps=fps,
tolerance_s=tolerance_s,
raise_value_error=False,
)
assert result is False
def test_check_timestamps_sync_slightly_off(slightly_off_timestamps_factory):
fps = 30
tolerance_s = 1e-4
timestamps, ep_idx, ep_data_index = slightly_off_timestamps_factory(fps, tolerance_s)
result = check_timestamps_sync(
timestamps=timestamps,
episode_indices=ep_idx,
episode_data_index=ep_data_index,
fps=fps,
tolerance_s=tolerance_s,
)
assert result is True
def test_check_timestamps_sync_single_timestamp():
fps = 30
tolerance_s = 1e-4
timestamps, ep_idx = np.array([0.0]), np.array([0])
episode_data_index = {"to": np.array([1]), "from": np.array([0])}
result = check_timestamps_sync(
timestamps=timestamps,
episode_indices=ep_idx,
episode_data_index=episode_data_index,
fps=fps,
tolerance_s=tolerance_s,
)
assert result is True
def test_check_delta_timestamps_valid(valid_delta_timestamps_factory):
fps = 30
tolerance_s = 1e-4
valid_delta_timestamps = valid_delta_timestamps_factory(fps)
result = check_delta_timestamps(
delta_timestamps=valid_delta_timestamps,
fps=fps,
tolerance_s=tolerance_s,
)
assert result is True
def test_check_delta_timestamps_slightly_off(slightly_off_delta_timestamps_factory):
fps = 30
tolerance_s = 1e-4
slightly_off_delta_timestamps = slightly_off_delta_timestamps_factory(fps, tolerance_s)
result = check_delta_timestamps(
delta_timestamps=slightly_off_delta_timestamps,
fps=fps,
tolerance_s=tolerance_s,
)
assert result is True
def test_check_delta_timestamps_invalid(invalid_delta_timestamps_factory):
fps = 30
tolerance_s = 1e-4
invalid_delta_timestamps = invalid_delta_timestamps_factory(fps, tolerance_s)
with pytest.raises(ValueError):
check_delta_timestamps(
delta_timestamps=invalid_delta_timestamps,
fps=fps,
tolerance_s=tolerance_s,
)
def test_check_delta_timestamps_invalid_no_exception(invalid_delta_timestamps_factory):
fps = 30
tolerance_s = 1e-4
invalid_delta_timestamps = invalid_delta_timestamps_factory(fps, tolerance_s)
result = check_delta_timestamps(
delta_timestamps=invalid_delta_timestamps,
fps=fps,
tolerance_s=tolerance_s,
raise_value_error=False,
)
assert result is False
def test_check_delta_timestamps_empty():
delta_timestamps = {}
fps = 30
tolerance_s = 1e-4
result = check_delta_timestamps(
delta_timestamps=delta_timestamps,
fps=fps,
tolerance_s=tolerance_s,
)
assert result is True
def test_delta_indices(valid_delta_timestamps_factory, delta_indices_factory):
fps = 50
min_max_range = (-100, 100)
delta_timestamps = valid_delta_timestamps_factory(fps, min_max_range=min_max_range)
expected_delta_indices = delta_indices_factory(min_max_range=min_max_range)
actual_delta_indices = get_delta_indices(delta_timestamps, fps)
assert expected_delta_indices == actual_delta_indices
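The last test checks that second-offsets convert back to integer frame indices. A hedged re-implementation of that conversion (the real `get_delta_indices` lives in `lerobot.datasets.utils` and may differ in details):

```python
def get_delta_indices(delta_timestamps: dict[str, list[float]], fps: int) -> dict[str, list[int]]:
    # Invert ts = i / fps back to the integer frame offset i.
    return {k: [round(ts * fps) for ts in ts_list] for k, ts_list in delta_timestamps.items()}

fps = 50
delta_timestamps = {"state": [i / fps for i in range(-2, 3)]}
get_delta_indices(delta_timestamps, fps)  # {'state': [-2, -1, 0, 1, 2]}
```

This mirrors how the valid-delta-timestamps fixture is built (`i * (1 / fps)` per key), which is why the expected and actual index dictionaries in the test match exactly.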
(Source: lerobot/tests/datasets/test_delta_timestamps.py)
# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ruff: noqa: N802
from lerobot.motors.motors_bus import (
Motor,
MotorsBus,
)
DUMMY_CTRL_TABLE_1 = {
"Firmware_Version": (0, 1),
"Model_Number": (1, 2),
"Present_Position": (3, 4),
"Goal_Position": (11, 2),
}
DUMMY_CTRL_TABLE_2 = {
"Model_Number": (0, 2),
"Firmware_Version": (2, 1),
"Present_Position": (3, 4),
"Present_Velocity": (7, 4),
"Goal_Position": (11, 4),
"Goal_Velocity": (15, 4),
"Lock": (19, 1),
}
DUMMY_MODEL_CTRL_TABLE = {
"model_1": DUMMY_CTRL_TABLE_1,
"model_2": DUMMY_CTRL_TABLE_2,
"model_3": DUMMY_CTRL_TABLE_2,
}
DUMMY_BAUDRATE_TABLE = {
0: 1_000_000,
1: 500_000,
2: 250_000,
}
DUMMY_MODEL_BAUDRATE_TABLE = {
"model_1": DUMMY_BAUDRATE_TABLE,
"model_2": DUMMY_BAUDRATE_TABLE,
"model_3": DUMMY_BAUDRATE_TABLE,
}
DUMMY_ENCODING_TABLE = {
"Present_Position": 8,
"Goal_Position": 10,
}
DUMMY_MODEL_ENCODING_TABLE = {
"model_1": DUMMY_ENCODING_TABLE,
"model_2": DUMMY_ENCODING_TABLE,
"model_3": DUMMY_ENCODING_TABLE,
}
DUMMY_MODEL_NUMBER_TABLE = {
"model_1": 1234,
"model_2": 5678,
"model_3": 5799,
}
DUMMY_MODEL_RESOLUTION_TABLE = {
"model_1": 4096,
"model_2": 1024,
"model_3": 4096,
}
class MockPortHandler:
def __init__(self, port_name):
self.is_open: bool = False
self.baudrate: int
self.packet_start_time: float
self.packet_timeout: float
self.tx_time_per_byte: float
self.is_using: bool = False
self.port_name: str = port_name
self.ser = None
def openPort(self):
self.is_open = True
return self.is_open
def closePort(self):
self.is_open = False
def clearPort(self): ...
def setPortName(self, port_name):
self.port_name = port_name
def getPortName(self):
return self.port_name
def setBaudRate(self, baudrate):
self.baudrate = baudrate
def getBaudRate(self):
return self.baudrate
def getBytesAvailable(self): ...
def readPort(self, length): ...
def writePort(self, packet): ...
def setPacketTimeout(self, packet_length): ...
def setPacketTimeoutMillis(self, msec): ...
def isPacketTimeout(self): ...
def getCurrentTime(self): ...
def getTimeSinceStart(self): ...
def setupPort(self, cflag_baud): ...
def getCFlagBaud(self, baudrate): ...
class MockMotorsBus(MotorsBus):
available_baudrates = [500_000, 1_000_000]
default_timeout = 1000
model_baudrate_table = DUMMY_MODEL_BAUDRATE_TABLE
model_ctrl_table = DUMMY_MODEL_CTRL_TABLE
model_encoding_table = DUMMY_MODEL_ENCODING_TABLE
model_number_table = DUMMY_MODEL_NUMBER_TABLE
model_resolution_table = DUMMY_MODEL_RESOLUTION_TABLE
normalized_data = ["Present_Position", "Goal_Position"]
def __init__(self, port: str, motors: dict[str, Motor]):
super().__init__(port, motors)
self.port_handler = MockPortHandler(port)
def _assert_protocol_is_compatible(self, instruction_name): ...
def _handshake(self): ...
def _find_single_motor(self, motor, initial_baudrate): ...
def configure_motors(self): ...
def is_calibrated(self): ...
def read_calibration(self): ...
def write_calibration(self, calibration_dict): ...
def disable_torque(self, motors, num_retry): ...
def _disable_torque(self, motor, model, num_retry): ...
def enable_torque(self, motors, num_retry): ...
def _get_half_turn_homings(self, positions): ...
def _encode_sign(self, data_name, ids_values): ...
def _decode_sign(self, data_name, ids_values): ...
def _split_into_byte_chunks(self, value, length): ...
def broadcast_ping(self, num_retry, raise_on_error): ...
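MockMotorsBus satisfies its abstract base purely with `...` stub bodies. A minimal, self-contained sketch of that pattern (`SerialBus`/`StubBus` are illustrative names, not lerobot classes):

```python
from abc import ABC, abstractmethod

class SerialBus(ABC):
    """Illustrative abstract bus interface (not lerobot's MotorsBus)."""

    @abstractmethod
    def read(self, length): ...

    @abstractmethod
    def write(self, packet): ...

class StubBus(SerialBus):
    # A bare `...` body is enough to override each abstractmethod and
    # make the subclass instantiable, just like the stubs above.
    def read(self, length): ...
    def write(self, packet): ...

bus = StubBus()
assert bus.read(4) is None  # stub methods fall through to returning None
```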
# --- end of lerobot/tests/mocks/mock_motors_bus.py ---
#!/usr/bin/env python
# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import tempfile
from collections.abc import Callable
from dataclasses import dataclass
from pathlib import Path
from typing import Any
import pytest
import torch
import torch.nn as nn
from lerobot.configs.types import FeatureType, PolicyFeature
from lerobot.processor import EnvTransition, ProcessorStepRegistry, RobotProcessor
from lerobot.processor.pipeline import TransitionKey
from tests.conftest import assert_contract_is_typed
def create_transition(
observation=None, action=None, reward=0.0, done=False, truncated=False, info=None, complementary_data=None
):
"""Helper to create an EnvTransition dictionary."""
return {
TransitionKey.OBSERVATION: observation,
TransitionKey.ACTION: action,
TransitionKey.REWARD: reward,
TransitionKey.DONE: done,
TransitionKey.TRUNCATED: truncated,
TransitionKey.INFO: info if info is not None else {},
TransitionKey.COMPLEMENTARY_DATA: complementary_data if complementary_data is not None else {},
}
@dataclass
class MockStep:
"""Mock pipeline step for testing - demonstrates best practices.
This example shows the proper separation:
- JSON-serializable attributes (name, counter) go in get_config()
- Only torch tensors go in state_dict()
Note: The counter is part of the configuration, so it will be restored
when the step is recreated from config during loading.
"""
name: str = "mock_step"
counter: int = 0
def __call__(self, transition: EnvTransition) -> EnvTransition:
"""Add a counter to the complementary_data."""
comp_data = transition.get(TransitionKey.COMPLEMENTARY_DATA, {})
comp_data = {} if comp_data is None else dict(comp_data) # Make a copy
comp_data[f"{self.name}_counter"] = self.counter
self.counter += 1
# Create a new transition with updated complementary_data
new_transition = transition.copy()
new_transition[TransitionKey.COMPLEMENTARY_DATA] = comp_data
return new_transition
def get_config(self) -> dict[str, Any]:
# Return all JSON-serializable attributes that should be persisted
# These will be passed to __init__ when loading
return {"name": self.name, "counter": self.counter}
def state_dict(self) -> dict[str, torch.Tensor]:
# Only return torch tensors (empty in this case since we have no tensor state)
return {}
def load_state_dict(self, state: dict[str, torch.Tensor]) -> None:
# No tensor state to load
pass
def reset(self) -> None:
self.counter = 0
def feature_contract(self, features: dict[str, PolicyFeature]) -> dict[str, PolicyFeature]:
# We do not test feature_contract here
return features
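The separation MockStep's docstring describes can be reduced to a self-contained sketch (`CounterStep` is illustrative, not part of lerobot): everything returned by `get_config()` must survive a JSON round-trip, because it is fed back to `__init__` on load.

```python
import json

class CounterStep:
    """Illustrative step: all persisted non-tensor state is JSON-serializable."""

    def __init__(self, name="counter", counter=0):
        self.name = name
        self.counter = counter

    def get_config(self):
        # Everything returned here must survive a JSON round-trip,
        # because it is passed back to __init__ when loading.
        return {"name": self.name, "counter": self.counter}

# "Saving" dumps the config; "loading" rebuilds the step from it.
saved = json.dumps(CounterStep("demo", counter=5).get_config())
restored = CounterStep(**json.loads(saved))
assert restored.counter == 5 and restored.name == "demo"
```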
@dataclass
class MockStepWithoutOptionalMethods:
"""Mock step that only implements the required __call__ method."""
multiplier: float = 2.0
def __call__(self, transition: EnvTransition) -> EnvTransition:
"""Multiply reward by multiplier."""
reward = transition.get(TransitionKey.REWARD)
if reward is not None:
new_transition = transition.copy()
new_transition[TransitionKey.REWARD] = reward * self.multiplier
return new_transition
return transition
def feature_contract(self, features: dict[str, PolicyFeature]) -> dict[str, PolicyFeature]:
# We do not test feature_contract here
return features
@dataclass
class MockStepWithTensorState:
"""Mock step demonstrating mixed JSON attributes and tensor state."""
name: str = "tensor_step"
learning_rate: float = 0.01
window_size: int = 10
def __init__(self, name: str = "tensor_step", learning_rate: float = 0.01, window_size: int = 10):
self.name = name
self.learning_rate = learning_rate
self.window_size = window_size
# Tensor state
self.running_mean = torch.zeros(window_size)
self.running_count = torch.tensor(0)
def __call__(self, transition: EnvTransition) -> EnvTransition:
"""Update running statistics."""
reward = transition.get(TransitionKey.REWARD)
if reward is not None:
# Update running mean
idx = self.running_count % self.window_size
self.running_mean[idx] = reward
self.running_count += 1
return transition
def get_config(self) -> dict[str, Any]:
# Only JSON-serializable attributes
return {
"name": self.name,
"learning_rate": self.learning_rate,
"window_size": self.window_size,
}
def state_dict(self) -> dict[str, torch.Tensor]:
# Only tensor state
return {
"running_mean": self.running_mean,
"running_count": self.running_count,
}
def load_state_dict(self, state: dict[str, torch.Tensor]) -> None:
self.running_mean = state["running_mean"]
self.running_count = state["running_count"]
def reset(self) -> None:
self.running_mean.zero_()
self.running_count.zero_()
def feature_contract(self, features: dict[str, PolicyFeature]) -> dict[str, PolicyFeature]:
# We do not test feature_contract here
return features
def test_empty_pipeline():
"""Test pipeline with no steps."""
pipeline = RobotProcessor()
transition = create_transition()
result = pipeline(transition)
assert result == transition
assert len(pipeline) == 0
def test_single_step_pipeline():
"""Test pipeline with a single step."""
step = MockStep("test_step")
pipeline = RobotProcessor([step])
transition = create_transition()
result = pipeline(transition)
assert len(pipeline) == 1
assert result[TransitionKey.COMPLEMENTARY_DATA]["test_step_counter"] == 0
# Call again to test counter increment
result = pipeline(transition)
assert result[TransitionKey.COMPLEMENTARY_DATA]["test_step_counter"] == 1
def test_multiple_steps_pipeline():
"""Test pipeline with multiple steps."""
step1 = MockStep("step1")
step2 = MockStep("step2")
pipeline = RobotProcessor([step1, step2])
transition = create_transition()
result = pipeline(transition)
assert len(pipeline) == 2
assert result[TransitionKey.COMPLEMENTARY_DATA]["step1_counter"] == 0
assert result[TransitionKey.COMPLEMENTARY_DATA]["step2_counter"] == 0
def test_invalid_transition_format():
"""Test pipeline with invalid transition format."""
pipeline = RobotProcessor([MockStep()])
# Test with wrong type (tuple instead of dict)
with pytest.raises(ValueError, match="EnvTransition must be a dictionary"):
pipeline((None, None, 0.0, False, False, {}, {})) # Tuple instead of dict
# Test with wrong type (string)
with pytest.raises(ValueError, match="EnvTransition must be a dictionary"):
pipeline("not a dict")
def test_step_through():
"""Test step_through method with dict input."""
step1 = MockStep("step1")
step2 = MockStep("step2")
pipeline = RobotProcessor([step1, step2])
transition = create_transition()
results = list(pipeline.step_through(transition))
assert len(results) == 3 # Original + 2 steps
assert results[0] == transition # Original
assert "step1_counter" in results[1][TransitionKey.COMPLEMENTARY_DATA] # After step1
assert "step2_counter" in results[2][TransitionKey.COMPLEMENTARY_DATA] # After step2
# Ensure all results are dicts (same format as input)
for result in results:
assert isinstance(result, dict)
assert all(isinstance(k, TransitionKey) for k in result.keys())
def test_step_through_with_dict():
"""Test step_through method with dict input."""
step1 = MockStep("step1")
step2 = MockStep("step2")
pipeline = RobotProcessor([step1, step2])
batch = {
"observation.image": None,
"action": None,
"next.reward": 0.0,
"next.done": False,
"next.truncated": False,
"info": {},
}
results = list(pipeline.step_through(batch))
assert len(results) == 3 # Original + 2 steps
# Ensure all results are EnvTransition dicts (regardless of input format)
for result in results:
assert isinstance(result, dict)
# Check that keys are TransitionKey enums or at least valid transition keys
for key in result:
assert key in [
TransitionKey.OBSERVATION,
TransitionKey.ACTION,
TransitionKey.REWARD,
TransitionKey.DONE,
TransitionKey.TRUNCATED,
TransitionKey.INFO,
TransitionKey.COMPLEMENTARY_DATA,
]
# Check that the processing worked - verify step counters in complementary_data
assert results[1].get(TransitionKey.COMPLEMENTARY_DATA, {}).get("step1_counter") == 0
assert results[2].get(TransitionKey.COMPLEMENTARY_DATA, {}).get("step1_counter") == 0
assert results[2].get(TransitionKey.COMPLEMENTARY_DATA, {}).get("step2_counter") == 0
def test_step_through_no_hooks():
"""Test that step_through doesn't execute hooks."""
step = MockStep("test_step")
pipeline = RobotProcessor([step])
hook_calls = []
def tracking_hook(idx: int, transition: EnvTransition):
hook_calls.append(f"hook_called_step_{idx}")
# Register hooks
pipeline.register_before_step_hook(tracking_hook)
pipeline.register_after_step_hook(tracking_hook)
# Use step_through
transition = create_transition()
results = list(pipeline.step_through(transition))
# Verify step was executed (counter should increment)
assert len(results) == 2 # Initial + 1 step
assert results[1][TransitionKey.COMPLEMENTARY_DATA]["test_step_counter"] == 0
# Verify hooks were NOT called
assert len(hook_calls) == 0
# Now use __call__ to verify hooks ARE called there
hook_calls.clear()
pipeline(transition)
# Verify hooks were called (before and after for 1 step = 2 calls)
assert len(hook_calls) == 2
assert hook_calls == ["hook_called_step_0", "hook_called_step_0"]
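The behavior these step_through tests pin down — yield the untouched input first, then the value after each step, with no hooks fired — can be sketched with a plain generator (illustrative, not the real implementation):

```python
def step_through(steps, value):
    """Yield the original input, then the value after each step.

    Hooks are deliberately absent: this path is for inspecting intermediate
    results, not for triggering execution side effects.
    """
    yield value
    for step in steps:
        value = step(value)
        yield value

stages = list(step_through([lambda x: x + 1, lambda x: x * 2], 3))
assert stages == [3, 4, 8]  # original + one entry per step
```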
def test_indexing():
"""Test pipeline indexing."""
step1 = MockStep("step1")
step2 = MockStep("step2")
pipeline = RobotProcessor([step1, step2])
# Test integer indexing
assert pipeline[0] is step1
assert pipeline[1] is step2
# Test slice indexing
sub_pipeline = pipeline[0:1]
assert isinstance(sub_pipeline, RobotProcessor)
assert len(sub_pipeline) == 1
assert sub_pipeline[0] is step1
def test_hooks():
"""Test before/after step hooks."""
step = MockStep("test_step")
pipeline = RobotProcessor([step])
before_calls = []
after_calls = []
def before_hook(idx: int, transition: EnvTransition):
before_calls.append(idx)
def after_hook(idx: int, transition: EnvTransition):
after_calls.append(idx)
pipeline.register_before_step_hook(before_hook)
pipeline.register_after_step_hook(after_hook)
transition = create_transition()
pipeline(transition)
assert before_calls == [0]
assert after_calls == [0]
def test_unregister_hooks():
"""Test unregistering hooks from the pipeline."""
step = MockStep("test_step")
pipeline = RobotProcessor([step])
# Test before_step_hook
before_calls = []
def before_hook(idx: int, transition: EnvTransition):
before_calls.append(idx)
pipeline.register_before_step_hook(before_hook)
# Verify hook is registered
transition = create_transition()
pipeline(transition)
assert len(before_calls) == 1
# Unregister and verify it's no longer called
pipeline.unregister_before_step_hook(before_hook)
before_calls.clear()
pipeline(transition)
assert len(before_calls) == 0
# Test after_step_hook
after_calls = []
def after_hook(idx: int, transition: EnvTransition):
after_calls.append(idx)
pipeline.register_after_step_hook(after_hook)
pipeline(transition)
assert len(after_calls) == 1
pipeline.unregister_after_step_hook(after_hook)
after_calls.clear()
pipeline(transition)
assert len(after_calls) == 0
def test_unregister_nonexistent_hook():
"""Test error handling when unregistering hooks that don't exist."""
pipeline = RobotProcessor([MockStep()])
def some_hook(idx: int, transition: EnvTransition):
pass
def reset_hook():
pass
# Test unregistering hooks that were never registered
with pytest.raises(ValueError, match="not found in before_step_hooks"):
pipeline.unregister_before_step_hook(some_hook)
with pytest.raises(ValueError, match="not found in after_step_hooks"):
pipeline.unregister_after_step_hook(some_hook)
def test_multiple_hooks_and_selective_unregister():
"""Test registering multiple hooks and selectively unregistering them."""
pipeline = RobotProcessor([MockStep("step1"), MockStep("step2")])
calls_1 = []
calls_2 = []
calls_3 = []
def hook1(idx: int, transition: EnvTransition):
calls_1.append(f"hook1_step{idx}")
def hook2(idx: int, transition: EnvTransition):
calls_2.append(f"hook2_step{idx}")
def hook3(idx: int, transition: EnvTransition):
calls_3.append(f"hook3_step{idx}")
# Register multiple hooks
pipeline.register_before_step_hook(hook1)
pipeline.register_before_step_hook(hook2)
pipeline.register_before_step_hook(hook3)
# Run pipeline - all hooks should be called for both steps
transition = create_transition()
pipeline(transition)
assert calls_1 == ["hook1_step0", "hook1_step1"]
assert calls_2 == ["hook2_step0", "hook2_step1"]
assert calls_3 == ["hook3_step0", "hook3_step1"]
# Clear calls
calls_1.clear()
calls_2.clear()
calls_3.clear()
# Unregister middle hook
pipeline.unregister_before_step_hook(hook2)
# Run again - only hook1 and hook3 should be called
pipeline(transition)
assert calls_1 == ["hook1_step0", "hook1_step1"]
assert calls_2 == [] # hook2 was unregistered
assert calls_3 == ["hook3_step0", "hook3_step1"]
def test_hook_execution_order_documentation():
"""Test and document that hooks are executed sequentially in registration order."""
pipeline = RobotProcessor([MockStep("step")])
execution_order = []
def hook_a(idx: int, transition: EnvTransition):
execution_order.append("A")
def hook_b(idx: int, transition: EnvTransition):
execution_order.append("B")
def hook_c(idx: int, transition: EnvTransition):
execution_order.append("C")
# Register in specific order: A, B, C
pipeline.register_before_step_hook(hook_a)
pipeline.register_before_step_hook(hook_b)
pipeline.register_before_step_hook(hook_c)
transition = create_transition()
pipeline(transition)
# Verify execution order matches registration order
assert execution_order == ["A", "B", "C"]
# Test that after unregistering B and re-registering it, it goes to the end
pipeline.unregister_before_step_hook(hook_b)
execution_order.clear()
pipeline(transition)
assert execution_order == ["A", "C"] # B is gone
# Re-register B - it should now be at the end
pipeline.register_before_step_hook(hook_b)
execution_order.clear()
pipeline(transition)
assert execution_order == ["A", "C", "B"] # B is now last
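The ordering contract this test documents boils down to a list of callables (a minimal sketch, not RobotProcessor's actual hook storage):

```python
class HookList:
    """Hooks fire in registration order; re-registering moves a hook to the end."""

    def __init__(self):
        self._hooks = []

    def register(self, fn):
        self._hooks.append(fn)

    def unregister(self, fn):
        self._hooks.remove(fn)  # ValueError if fn was never registered

    def fire(self):
        for fn in self._hooks:
            fn()

order = []
hooks = HookList()
hook_a = lambda: order.append("A")
hook_b = lambda: order.append("B")
hooks.register(hook_a)
hooks.register(hook_b)
hooks.unregister(hook_a)
hooks.register(hook_a)  # A is re-registered, so it now runs after B
hooks.fire()
assert order == ["B", "A"]
```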
def test_save_and_load_pretrained():
"""Test saving and loading pipeline.
This test demonstrates that JSON-serializable attributes (like counter)
are saved in the config and restored when the step is recreated.
"""
step1 = MockStep("step1")
step2 = MockStep("step2")
# Increment counters to have some state
step1.counter = 5
step2.counter = 10
pipeline = RobotProcessor([step1, step2], name="TestPipeline")
with tempfile.TemporaryDirectory() as tmp_dir:
# Save pipeline
pipeline.save_pretrained(tmp_dir)
# Check files were created
config_path = Path(tmp_dir) / "testpipeline.json" # Based on name="TestPipeline"
assert config_path.exists()
# Check config content
with open(config_path) as f:
config = json.load(f)
assert config["name"] == "TestPipeline"
assert len(config["steps"]) == 2
# Verify counters are saved in config, not in separate state files
assert config["steps"][0]["config"]["counter"] == 5
assert config["steps"][1]["config"]["counter"] == 10
# Load pipeline
loaded_pipeline = RobotProcessor.from_pretrained(tmp_dir)
assert loaded_pipeline.name == "TestPipeline"
assert len(loaded_pipeline) == 2
# Check that counter was restored from config
assert loaded_pipeline.steps[0].counter == 5
assert loaded_pipeline.steps[1].counter == 10
def test_step_without_optional_methods():
"""Test pipeline with steps that don't implement optional methods."""
step = MockStepWithoutOptionalMethods(multiplier=3.0)
pipeline = RobotProcessor([step])
transition = create_transition(reward=2.0)
result = pipeline(transition)
assert result[TransitionKey.REWARD] == 6.0 # 2.0 * 3.0
# Reset should work even if step doesn't implement reset
pipeline.reset()
# Save/load should work even without optional methods
with tempfile.TemporaryDirectory() as tmp_dir:
pipeline.save_pretrained(tmp_dir)
loaded_pipeline = RobotProcessor.from_pretrained(tmp_dir)
assert len(loaded_pipeline) == 1
def test_mixed_json_and_tensor_state():
"""Test step with both JSON attributes and tensor state."""
step = MockStepWithTensorState(name="stats", learning_rate=0.05, window_size=5)
pipeline = RobotProcessor([step])
# Process some transitions with rewards
for i in range(10):
transition = create_transition(reward=float(i))
pipeline(transition)
# Check state
assert step.running_count.item() == 10
assert step.learning_rate == 0.05
# Save and load
with tempfile.TemporaryDirectory() as tmp_dir:
pipeline.save_pretrained(tmp_dir)
# Check that both config and state files were created
config_path = Path(tmp_dir) / "robotprocessor.json" # Default name is "RobotProcessor"
state_path = Path(tmp_dir) / "robotprocessor_step_0.safetensors"
assert config_path.exists()
assert state_path.exists()
# Load and verify
loaded_pipeline = RobotProcessor.from_pretrained(tmp_dir)
loaded_step = loaded_pipeline.steps[0]
# Check JSON attributes were restored
assert loaded_step.name == "stats"
assert loaded_step.learning_rate == 0.05
assert loaded_step.window_size == 5
# Check tensor state was restored
assert loaded_step.running_count.item() == 10
assert torch.allclose(loaded_step.running_mean, step.running_mean)
class MockModuleStep(nn.Module):
"""Mock step that inherits from nn.Module to test state_dict handling of module parameters."""
def __init__(self, input_dim: int = 10, hidden_dim: int = 5):
super().__init__()
self.input_dim = input_dim
self.hidden_dim = hidden_dim
self.linear = nn.Linear(input_dim, hidden_dim)
self.running_mean = nn.Parameter(torch.zeros(hidden_dim), requires_grad=False)
self.counter = 0 # Non-tensor state
def forward(self, x: torch.Tensor) -> torch.Tensor:
return self.linear(x)
def __call__(self, transition: EnvTransition) -> EnvTransition:
"""Process transition and update running mean."""
obs = transition.get(TransitionKey.OBSERVATION)
if obs is not None and isinstance(obs, torch.Tensor):
# Process observation through linear layer
processed = self.forward(obs[:, : self.input_dim])
# Update running mean in-place (don't reassign the parameter)
with torch.no_grad():
self.running_mean.mul_(0.9).add_(processed.mean(dim=0), alpha=0.1)
self.counter += 1
return transition
def get_config(self) -> dict[str, Any]:
return {
"input_dim": self.input_dim,
"hidden_dim": self.hidden_dim,
"counter": self.counter,
}
def state_dict(self) -> dict[str, torch.Tensor]:
"""Override to return all module parameters and buffers."""
# Get the module's state dict (includes all parameters and buffers)
return super().state_dict()
def load_state_dict(self, state: dict[str, torch.Tensor]) -> None:
"""Override to load all module parameters and buffers."""
# Use the module's load_state_dict
super().load_state_dict(state)
def reset(self) -> None:
self.running_mean.zero_()
self.counter = 0
def feature_contract(self, features: dict[str, PolicyFeature]) -> dict[str, PolicyFeature]:
# We do not test feature_contract here
return features
class MockNonModuleStepWithState:
"""Mock step that explicitly does NOT inherit from nn.Module but has tensor state.
This tests the state_dict/load_state_dict path for regular classes.
"""
def __init__(self, name: str = "non_module_step", feature_dim: int = 10):
self.name = name
self.feature_dim = feature_dim
# Initialize tensor state - these are regular tensors, not nn.Parameters
self.weights = torch.randn(feature_dim, feature_dim)
self.bias = torch.zeros(feature_dim)
self.running_stats = torch.zeros(feature_dim)
self.step_count = torch.tensor(0)
# Non-tensor state
self.config_value = 42
self.history = []
def __call__(self, transition: EnvTransition) -> EnvTransition:
"""Process transition using tensor operations."""
obs = transition.get(TransitionKey.OBSERVATION)
comp_data = transition.get(TransitionKey.COMPLEMENTARY_DATA, {})
if obs is not None and isinstance(obs, torch.Tensor) and obs.numel() >= self.feature_dim:
# Perform some tensor operations
flat_obs = obs.flatten()[: self.feature_dim]
# Simple linear transformation (ensure dimensions match for matmul)
output = torch.matmul(self.weights.T, flat_obs) + self.bias
# Update running stats
self.running_stats = 0.9 * self.running_stats + 0.1 * output
self.step_count += 1
# Add to complementary data
comp_data = {} if comp_data is None else dict(comp_data)
comp_data[f"{self.name}_mean_output"] = output.mean().item()
comp_data[f"{self.name}_steps"] = self.step_count.item()
# Return updated transition
new_transition = transition.copy()
new_transition[TransitionKey.COMPLEMENTARY_DATA] = comp_data
return new_transition
return transition
def get_config(self) -> dict[str, Any]:
return {
"name": self.name,
"feature_dim": self.feature_dim,
"config_value": self.config_value,
}
def state_dict(self) -> dict[str, torch.Tensor]:
"""Return only tensor state."""
return {
"weights": self.weights,
"bias": self.bias,
"running_stats": self.running_stats,
"step_count": self.step_count,
}
def load_state_dict(self, state: dict[str, torch.Tensor]) -> None:
"""Load tensor state."""
self.weights = state["weights"]
self.bias = state["bias"]
self.running_stats = state["running_stats"]
self.step_count = state["step_count"]
def reset(self) -> None:
"""Reset statistics but keep learned parameters."""
self.running_stats.zero_()
self.step_count.zero_()
self.history.clear()
def feature_contract(self, features: dict[str, PolicyFeature]) -> dict[str, PolicyFeature]:
# We do not test feature_contract here
return features
# Tests for overrides functionality
@dataclass
class MockStepWithNonSerializableParam:
"""Mock step that requires a non-serializable parameter."""
def __init__(self, name: str = "mock_env_step", multiplier: float = 1.0, env: Any = None):
self.name = name
# Add type validation for multiplier
if isinstance(multiplier, str):
raise ValueError(f"multiplier must be a number, got string '{multiplier}'")
if not isinstance(multiplier, (int, float)):
raise TypeError(f"multiplier must be a number, got {type(multiplier).__name__}")
self.multiplier = float(multiplier)
self.env = env # Non-serializable parameter (like gym.Env)
def __call__(self, transition: EnvTransition) -> EnvTransition:
reward = transition.get(TransitionKey.REWARD)
comp_data = transition.get(TransitionKey.COMPLEMENTARY_DATA, {})
# Use the env parameter if provided
if self.env is not None:
comp_data = {} if comp_data is None else dict(comp_data)
comp_data[f"{self.name}_env_info"] = str(self.env)
# Apply multiplier to reward
new_transition = transition.copy()
if reward is not None:
new_transition[TransitionKey.REWARD] = reward * self.multiplier
if comp_data:
new_transition[TransitionKey.COMPLEMENTARY_DATA] = comp_data
return new_transition
def get_config(self) -> dict[str, Any]:
# Note: env is intentionally NOT included here as it's not serializable
return {
"name": self.name,
"multiplier": self.multiplier,
}
def state_dict(self) -> dict[str, torch.Tensor]:
return {}
def load_state_dict(self, state: dict[str, torch.Tensor]) -> None:
pass
def reset(self) -> None:
pass
def feature_contract(self, features: dict[str, PolicyFeature]) -> dict[str, PolicyFeature]:
# We do not test feature_contract here
return features
@ProcessorStepRegistry.register("registered_mock_step")
@dataclass
class RegisteredMockStep:
"""Mock step registered in the registry."""
value: int = 42
device: str = "cpu"
def __call__(self, transition: EnvTransition) -> EnvTransition:
comp_data = transition.get(TransitionKey.COMPLEMENTARY_DATA, {})
comp_data = {} if comp_data is None else dict(comp_data)
comp_data["registered_step_value"] = self.value
comp_data["registered_step_device"] = self.device
new_transition = transition.copy()
new_transition[TransitionKey.COMPLEMENTARY_DATA] = comp_data
return new_transition
def get_config(self) -> dict[str, Any]:
return {
"value": self.value,
"device": self.device,
}
def state_dict(self) -> dict[str, torch.Tensor]:
return {}
def load_state_dict(self, state: dict[str, torch.Tensor]) -> None:
pass
def reset(self) -> None:
pass
def feature_contract(self, features: dict[str, PolicyFeature]) -> dict[str, PolicyFeature]:
# We do not test feature_contract here
return features
class MockEnvironment:
"""Mock environment for testing non-serializable parameters."""
def __init__(self, name: str):
self.name = name
def __str__(self):
return f"MockEnvironment({self.name})"
def test_from_pretrained_with_overrides():
"""Test loading processor with parameter overrides."""
# Create a processor with steps that need overrides
env_step = MockStepWithNonSerializableParam(name="env_step", multiplier=2.0)
registered_step = RegisteredMockStep(value=100, device="cpu")
pipeline = RobotProcessor([env_step, registered_step], name="TestOverrides")
with tempfile.TemporaryDirectory() as tmp_dir:
# Save the pipeline
pipeline.save_pretrained(tmp_dir)
# Create a mock environment for override
mock_env = MockEnvironment("test_env")
# Load with overrides
overrides = {
"MockStepWithNonSerializableParam": {
"env": mock_env,
"multiplier": 3.0, # Override the multiplier too
},
"registered_mock_step": {"device": "cuda", "value": 200},
}
loaded_pipeline = RobotProcessor.from_pretrained(tmp_dir, overrides=overrides)
# Verify the pipeline was loaded correctly
assert len(loaded_pipeline) == 2
assert loaded_pipeline.name == "TestOverrides"
# Test the loaded steps
transition = create_transition(reward=1.0)
result = loaded_pipeline(transition)
# Check that overrides were applied
comp_data = result[TransitionKey.COMPLEMENTARY_DATA]
assert "env_step_env_info" in comp_data
assert comp_data["env_step_env_info"] == "MockEnvironment(test_env)"
assert comp_data["registered_step_value"] == 200
assert comp_data["registered_step_device"] == "cuda"
# Check that multiplier override was applied
assert result[TransitionKey.REWARD] == 3.0 # 1.0 * 3.0 (overridden multiplier)
def test_from_pretrained_with_partial_overrides():
"""Test loading processor with overrides for only some steps."""
step1 = MockStepWithNonSerializableParam(name="step1", multiplier=1.0)
step2 = MockStepWithNonSerializableParam(name="step2", multiplier=2.0)
pipeline = RobotProcessor([step1, step2])
with tempfile.TemporaryDirectory() as tmp_dir:
pipeline.save_pretrained(tmp_dir)
# Override only one step
overrides = {"MockStepWithNonSerializableParam": {"multiplier": 5.0}}
# The current implementation applies overrides to ALL steps with the same class name
# Both steps will get the override
loaded_pipeline = RobotProcessor.from_pretrained(tmp_dir, overrides=overrides)
transition = create_transition(reward=1.0)
result = loaded_pipeline(transition)
# The reward should be affected by both steps, both getting the override
# First step: 1.0 * 5.0 = 5.0 (overridden)
# Second step: 5.0 * 5.0 = 25.0 (also overridden)
assert result[TransitionKey.REWARD] == 25.0
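The override semantics noted in the comments above — one override entry per class name, applied to every instance of that class — can be sketched as a config merge (all names here are illustrative, not lerobot's loader):

```python
def rebuild_steps(entries, overrides):
    """Recreate steps from saved (class, config) pairs, merging per-class overrides."""
    steps = []
    for cls, config in entries:
        merged = {**config, **overrides.get(cls.__name__, {})}
        steps.append(cls(**merged))
    return steps

class Scale:
    def __init__(self, factor=1.0):
        self.factor = factor

entries = [(Scale, {"factor": 1.0}), (Scale, {"factor": 2.0})]
loaded = rebuild_steps(entries, {"Scale": {"factor": 5.0}})
# The same override hits every instance of the class, not just the first one.
assert [s.factor for s in loaded] == [5.0, 5.0]
```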
def test_from_pretrained_invalid_override_key():
"""Test that invalid override keys raise KeyError."""
step = MockStepWithNonSerializableParam()
pipeline = RobotProcessor([step])
with tempfile.TemporaryDirectory() as tmp_dir:
pipeline.save_pretrained(tmp_dir)
# Try to override a non-existent step
overrides = {"NonExistentStep": {"param": "value"}}
with pytest.raises(KeyError, match="Override keys.*do not match any step"):
RobotProcessor.from_pretrained(tmp_dir, overrides=overrides)
def test_from_pretrained_multiple_invalid_override_keys():
"""Test that multiple invalid override keys are reported."""
step = MockStepWithNonSerializableParam()
pipeline = RobotProcessor([step])
with tempfile.TemporaryDirectory() as tmp_dir:
pipeline.save_pretrained(tmp_dir)
# Try to override multiple non-existent steps
overrides = {"NonExistentStep1": {"param": "value1"}, "NonExistentStep2": {"param": "value2"}}
with pytest.raises(KeyError) as exc_info:
RobotProcessor.from_pretrained(tmp_dir, overrides=overrides)
error_msg = str(exc_info.value)
assert "NonExistentStep1" in error_msg
assert "NonExistentStep2" in error_msg
assert "Available step keys" in error_msg
def test_from_pretrained_registered_step_override():
"""Test overriding registered steps using registry names."""
registered_step = RegisteredMockStep(value=50, device="cpu")
pipeline = RobotProcessor([registered_step])
with tempfile.TemporaryDirectory() as tmp_dir:
pipeline.save_pretrained(tmp_dir)
# Override using registry name
overrides = {"registered_mock_step": {"value": 999, "device": "cuda"}}
loaded_pipeline = RobotProcessor.from_pretrained(tmp_dir, overrides=overrides)
# Test that overrides were applied
transition = create_transition()
result = loaded_pipeline(transition)
comp_data = result[TransitionKey.COMPLEMENTARY_DATA]
assert comp_data["registered_step_value"] == 999
assert comp_data["registered_step_device"] == "cuda"
def test_from_pretrained_mixed_registered_and_unregistered():
"""Test overriding both registered and unregistered steps."""
unregistered_step = MockStepWithNonSerializableParam(name="unregistered", multiplier=1.0)
registered_step = RegisteredMockStep(value=10, device="cpu")
pipeline = RobotProcessor([unregistered_step, registered_step])
with tempfile.TemporaryDirectory() as tmp_dir:
pipeline.save_pretrained(tmp_dir)
mock_env = MockEnvironment("mixed_test")
overrides = {
"MockStepWithNonSerializableParam": {"env": mock_env, "multiplier": 4.0},
"registered_mock_step": {"value": 777},
}
loaded_pipeline = RobotProcessor.from_pretrained(tmp_dir, overrides=overrides)
# Test both steps
transition = create_transition(reward=2.0)
result = loaded_pipeline(transition)
comp_data = result[TransitionKey.COMPLEMENTARY_DATA]
assert comp_data["unregistered_env_info"] == "MockEnvironment(mixed_test)"
assert comp_data["registered_step_value"] == 777
assert result[TransitionKey.REWARD] == 8.0 # 2.0 * 4.0
def test_from_pretrained_no_overrides():
"""Test that from_pretrained works without overrides (backward compatibility)."""
step = MockStepWithNonSerializableParam(name="no_override", multiplier=3.0)
pipeline = RobotProcessor([step])
with tempfile.TemporaryDirectory() as tmp_dir:
pipeline.save_pretrained(tmp_dir)
# Load without overrides
loaded_pipeline = RobotProcessor.from_pretrained(tmp_dir)
assert len(loaded_pipeline) == 1
# Test that the step works (env will be None)
transition = create_transition(reward=1.0)
result = loaded_pipeline(transition)
assert result[TransitionKey.REWARD] == 3.0 # 1.0 * 3.0
def test_from_pretrained_empty_overrides():
"""Test that from_pretrained works with empty overrides dict."""
step = MockStepWithNonSerializableParam(multiplier=2.0)
pipeline = RobotProcessor([step])
with tempfile.TemporaryDirectory() as tmp_dir:
pipeline.save_pretrained(tmp_dir)
# Load with empty overrides
loaded_pipeline = RobotProcessor.from_pretrained(tmp_dir, overrides={})
assert len(loaded_pipeline) == 1
# Test that the step works normally
transition = create_transition(reward=1.0)
result = loaded_pipeline(transition)
assert result[TransitionKey.REWARD] == 2.0
def test_from_pretrained_override_instantiation_error():
"""Test that instantiation errors with overrides are properly reported."""
step = MockStepWithNonSerializableParam(multiplier=1.0)
pipeline = RobotProcessor([step])
with tempfile.TemporaryDirectory() as tmp_dir:
pipeline.save_pretrained(tmp_dir)
# Try to override with invalid parameter type
overrides = {
"MockStepWithNonSerializableParam": {
"multiplier": "invalid_type" # Should be float, not string
}
}
with pytest.raises(ValueError, match="Failed to instantiate processor step"):
RobotProcessor.from_pretrained(tmp_dir, overrides=overrides)
def test_from_pretrained_with_state_and_overrides():
"""Test that overrides work correctly with steps that have tensor state."""
step = MockStepWithTensorState(name="tensor_step", learning_rate=0.01, window_size=5)
pipeline = RobotProcessor([step])
# Process some data to create state
for i in range(10):
transition = create_transition(reward=float(i))
pipeline(transition)
with tempfile.TemporaryDirectory() as tmp_dir:
pipeline.save_pretrained(tmp_dir)
# Load with overrides
overrides = {
"MockStepWithTensorState": {
"learning_rate": 0.05, # Override learning rate
"window_size": 3, # Override window size
}
}
loaded_pipeline = RobotProcessor.from_pretrained(tmp_dir, overrides=overrides)
loaded_step = loaded_pipeline.steps[0]
# Check that config overrides were applied
assert loaded_step.learning_rate == 0.05
assert loaded_step.window_size == 3
# Check that tensor state was preserved
assert loaded_step.running_count.item() == 10
# The running_mean should still have the original window_size (5) from saved state
# but the new step will use window_size=3 for future operations
assert loaded_step.running_mean.shape[0] == 5 # From saved state
def test_from_pretrained_override_error_messages():
"""Test that error messages for override failures are helpful."""
step1 = MockStepWithNonSerializableParam(name="step1")
step2 = RegisteredMockStep()
pipeline = RobotProcessor([step1, step2])
with tempfile.TemporaryDirectory() as tmp_dir:
pipeline.save_pretrained(tmp_dir)
# Test with invalid override key
overrides = {"WrongStepName": {"param": "value"}}
with pytest.raises(KeyError) as exc_info:
RobotProcessor.from_pretrained(tmp_dir, overrides=overrides)
error_msg = str(exc_info.value)
assert "WrongStepName" in error_msg
assert "Available step keys" in error_msg
assert "MockStepWithNonSerializableParam" in error_msg
assert "registered_mock_step" in error_msg
def test_repr_empty_processor():
"""Test __repr__ with empty processor."""
pipeline = RobotProcessor()
repr_str = repr(pipeline)
expected = "RobotProcessor(name='RobotProcessor', steps=0: [])"
assert repr_str == expected
def test_repr_single_step():
"""Test __repr__ with single step."""
step = MockStep("test_step")
pipeline = RobotProcessor([step])
repr_str = repr(pipeline)
expected = "RobotProcessor(name='RobotProcessor', steps=1: [MockStep])"
assert repr_str == expected
def test_repr_multiple_steps_under_limit():
"""Test __repr__ with 2-3 steps (all shown)."""
step1 = MockStep("step1")
step2 = MockStepWithoutOptionalMethods()
pipeline = RobotProcessor([step1, step2])
repr_str = repr(pipeline)
expected = "RobotProcessor(name='RobotProcessor', steps=2: [MockStep, MockStepWithoutOptionalMethods])"
assert repr_str == expected
# Test with 3 steps (boundary case)
step3 = MockStepWithTensorState()
pipeline = RobotProcessor([step1, step2, step3])
repr_str = repr(pipeline)
expected = "RobotProcessor(name='RobotProcessor', steps=3: [MockStep, MockStepWithoutOptionalMethods, MockStepWithTensorState])"
assert repr_str == expected
def test_repr_many_steps_truncated():
"""Test __repr__ with more than 3 steps (truncated with ellipsis)."""
step1 = MockStep("step1")
step2 = MockStepWithoutOptionalMethods()
step3 = MockStepWithTensorState()
step4 = MockModuleStep()
step5 = MockNonModuleStepWithState()
pipeline = RobotProcessor([step1, step2, step3, step4, step5])
repr_str = repr(pipeline)
expected = "RobotProcessor(name='RobotProcessor', steps=5: [MockStep, MockStepWithoutOptionalMethods, ..., MockNonModuleStepWithState])"
assert repr_str == expected
def test_repr_with_custom_name():
"""Test __repr__ with custom processor name."""
step = MockStep("test_step")
pipeline = RobotProcessor([step], name="CustomProcessor")
repr_str = repr(pipeline)
expected = "RobotProcessor(name='CustomProcessor', steps=1: [MockStep])"
assert repr_str == expected
def test_repr_with_seed():
"""Test __repr__ with seed parameter."""
step = MockStep("test_step")
pipeline = RobotProcessor([step])
repr_str = repr(pipeline)
expected = "RobotProcessor(name='RobotProcessor', steps=1: [MockStep])"
assert repr_str == expected
def test_repr_with_custom_name_and_seed():
"""Test __repr__ with both custom name and seed."""
step1 = MockStep("step1")
step2 = MockStepWithoutOptionalMethods()
pipeline = RobotProcessor([step1, step2], name="MyProcessor")
repr_str = repr(pipeline)
expected = "RobotProcessor(name='MyProcessor', steps=2: [MockStep, MockStepWithoutOptionalMethods])"
assert repr_str == expected
def test_repr_without_seed():
"""Test __repr__ when seed is explicitly None (should not show seed)."""
step = MockStep("test_step")
pipeline = RobotProcessor([step], name="TestProcessor")
repr_str = repr(pipeline)
expected = "RobotProcessor(name='TestProcessor', steps=1: [MockStep])"
assert repr_str == expected
def test_repr_various_step_types():
"""Test __repr__ with different types of steps to verify class name extraction."""
step1 = MockStep()
step2 = MockStepWithTensorState()
step3 = MockModuleStep()
step4 = MockNonModuleStepWithState()
pipeline = RobotProcessor([step1, step2, step3, step4], name="MixedSteps")
repr_str = repr(pipeline)
expected = "RobotProcessor(name='MixedSteps', steps=4: [MockStep, MockStepWithTensorState, ..., MockNonModuleStepWithState])"
assert repr_str == expected
def test_repr_edge_case_long_names():
"""Test __repr__ handles steps with long class names properly."""
step1 = MockStepWithNonSerializableParam()
step2 = MockStepWithoutOptionalMethods()
step3 = MockStepWithTensorState()
step4 = MockNonModuleStepWithState()
pipeline = RobotProcessor([step1, step2, step3, step4], name="LongNames")
repr_str = repr(pipeline)
expected = "RobotProcessor(name='LongNames', steps=4: [MockStepWithNonSerializableParam, MockStepWithoutOptionalMethods, ..., MockNonModuleStepWithState])"
assert repr_str == expected
# Tests for config filename features and multiple processors
def test_save_with_custom_config_filename():
"""Test saving processor with custom config filename."""
step = MockStep("test")
pipeline = RobotProcessor([step], name="TestProcessor")
with tempfile.TemporaryDirectory() as tmp_dir:
# Save with custom filename
pipeline.save_pretrained(tmp_dir, config_filename="my_custom_config.json")
# Check file exists
config_path = Path(tmp_dir) / "my_custom_config.json"
assert config_path.exists()
# Check content
with open(config_path) as f:
config = json.load(f)
assert config["name"] == "TestProcessor"
# Load with specific filename
loaded = RobotProcessor.from_pretrained(tmp_dir, config_filename="my_custom_config.json")
assert loaded.name == "TestProcessor"
def test_multiple_processors_same_directory():
"""Test saving multiple processors to the same directory with different config files."""
# Create different processors
preprocessor = RobotProcessor([MockStep("pre1"), MockStep("pre2")], name="preprocessor")
postprocessor = RobotProcessor([MockStepWithoutOptionalMethods(multiplier=0.5)], name="postprocessor")
with tempfile.TemporaryDirectory() as tmp_dir:
# Save both to same directory
preprocessor.save_pretrained(tmp_dir)
postprocessor.save_pretrained(tmp_dir)
# Check both config files exist
assert (Path(tmp_dir) / "preprocessor.json").exists()
assert (Path(tmp_dir) / "postprocessor.json").exists()
# Load them back
loaded_pre = RobotProcessor.from_pretrained(tmp_dir, config_filename="preprocessor.json")
loaded_post = RobotProcessor.from_pretrained(tmp_dir, config_filename="postprocessor.json")
assert loaded_pre.name == "preprocessor"
assert loaded_post.name == "postprocessor"
assert len(loaded_pre) == 2
assert len(loaded_post) == 1
def test_auto_detect_single_config():
"""Test automatic config detection when there's only one JSON file."""
step = MockStepWithTensorState()
pipeline = RobotProcessor([step], name="SingleConfig")
with tempfile.TemporaryDirectory() as tmp_dir:
pipeline.save_pretrained(tmp_dir)
# Load without specifying config_filename
loaded = RobotProcessor.from_pretrained(tmp_dir)
assert loaded.name == "SingleConfig"
def test_error_multiple_configs_no_filename():
"""Test error when multiple configs exist and no filename specified."""
proc1 = RobotProcessor([MockStep()], name="processor1")
proc2 = RobotProcessor([MockStep()], name="processor2")
with tempfile.TemporaryDirectory() as tmp_dir:
proc1.save_pretrained(tmp_dir)
proc2.save_pretrained(tmp_dir)
# Should raise error
with pytest.raises(ValueError, match="Multiple .json files found"):
RobotProcessor.from_pretrained(tmp_dir)
def test_state_file_naming_with_indices():
"""Test that state files include pipeline name and step indices to avoid conflicts."""
# Create multiple steps of same type with state
step1 = MockStepWithTensorState(name="norm1", window_size=5)
step2 = MockStepWithTensorState(name="norm2", window_size=10)
step3 = MockModuleStep(input_dim=5)
pipeline = RobotProcessor([step1, step2, step3])
# Process some data to create state
for i in range(5):
transition = create_transition(observation=torch.randn(2, 5), reward=float(i))
pipeline(transition)
with tempfile.TemporaryDirectory() as tmp_dir:
pipeline.save_pretrained(tmp_dir)
# Check state files have indices
state_files = sorted(Path(tmp_dir).glob("*.safetensors"))
assert len(state_files) == 3
# Files should be named with pipeline name prefix and indices
expected_names = [
"robotprocessor_step_0.safetensors",
"robotprocessor_step_1.safetensors",
"robotprocessor_step_2.safetensors",
]
actual_names = [f.name for f in state_files]
assert actual_names == expected_names
def test_state_file_naming_with_registry():
"""Test that state files for registered steps are named with the pipeline name, step index, and registry name."""
# Register a test step
@ProcessorStepRegistry.register("test_stateful_step")
@dataclass
class TestStatefulStep:
value: int = 0
def __init__(self, value: int = 0):
self.value = value
self.state_tensor = torch.randn(3, 3)
def __call__(self, transition: EnvTransition) -> EnvTransition:
return transition
def get_config(self):
return {"value": self.value}
def state_dict(self):
return {"state_tensor": self.state_tensor}
def load_state_dict(self, state):
self.state_tensor = state["state_tensor"]
def feature_contract(self, features: dict[str, PolicyFeature]) -> dict[str, PolicyFeature]:
# We do not test feature_contract here
return features
try:
# Create pipeline with registered steps
step1 = TestStatefulStep(1)
step2 = TestStatefulStep(2)
pipeline = RobotProcessor([step1, step2])
with tempfile.TemporaryDirectory() as tmp_dir:
pipeline.save_pretrained(tmp_dir)
# Check state files
state_files = sorted(Path(tmp_dir).glob("*.safetensors"))
assert len(state_files) == 2
# Should include pipeline name, index and registry name
expected_names = [
"robotprocessor_step_0_test_stateful_step.safetensors",
"robotprocessor_step_1_test_stateful_step.safetensors",
]
actual_names = [f.name for f in state_files]
assert actual_names == expected_names
finally:
# Cleanup registry
ProcessorStepRegistry.unregister("test_stateful_step")
# More comprehensive override tests
def test_override_with_nested_config():
"""Test overrides with nested configuration dictionaries."""
@ProcessorStepRegistry.register("complex_config_step")
@dataclass
class ComplexConfigStep:
name: str = "complex"
simple_param: int = 42
nested_config: dict | None = None
def __post_init__(self):
if self.nested_config is None:
self.nested_config = {"level1": {"level2": "default"}}
def __call__(self, transition: EnvTransition) -> EnvTransition:
comp_data = transition.get(TransitionKey.COMPLEMENTARY_DATA, {})
comp_data = dict(comp_data)
comp_data["config_value"] = self.nested_config.get("level1", {}).get("level2", "missing")
new_transition = transition.copy()
new_transition[TransitionKey.COMPLEMENTARY_DATA] = comp_data
return new_transition
def get_config(self):
return {"name": self.name, "simple_param": self.simple_param, "nested_config": self.nested_config}
def feature_contract(self, features: dict[str, PolicyFeature]) -> dict[str, PolicyFeature]:
# We do not test feature_contract here
return features
try:
step = ComplexConfigStep()
pipeline = RobotProcessor([step])
with tempfile.TemporaryDirectory() as tmp_dir:
pipeline.save_pretrained(tmp_dir)
# Load with nested override
loaded = RobotProcessor.from_pretrained(
tmp_dir,
overrides={"complex_config_step": {"nested_config": {"level1": {"level2": "overridden"}}}},
)
# Test that override worked
transition = create_transition()
result = loaded(transition)
assert result[TransitionKey.COMPLEMENTARY_DATA]["config_value"] == "overridden"
finally:
ProcessorStepRegistry.unregister("complex_config_step")
def test_override_preserves_defaults():
"""Test that overrides only affect specified parameters."""
step = MockStepWithNonSerializableParam(name="test", multiplier=2.0)
pipeline = RobotProcessor([step])
with tempfile.TemporaryDirectory() as tmp_dir:
pipeline.save_pretrained(tmp_dir)
# Override only one parameter
loaded = RobotProcessor.from_pretrained(
tmp_dir,
overrides={
"MockStepWithNonSerializableParam": {
"multiplier": 5.0 # Only override multiplier
}
},
)
# Check that name was preserved from saved config
loaded_step = loaded.steps[0]
assert loaded_step.name == "test" # Original value
assert loaded_step.multiplier == 5.0 # Overridden value
def test_override_type_validation():
"""Test that type errors in overrides are caught properly."""
step = MockStepWithTensorState(learning_rate=0.01)
pipeline = RobotProcessor([step])
with tempfile.TemporaryDirectory() as tmp_dir:
pipeline.save_pretrained(tmp_dir)
# Try to override with wrong type
overrides = {
"MockStepWithTensorState": {
"window_size": "not_an_int" # Should be int
}
}
with pytest.raises(ValueError, match="Failed to instantiate"):
RobotProcessor.from_pretrained(tmp_dir, overrides=overrides)
def test_override_with_callables():
"""Test overriding with callable objects."""
@ProcessorStepRegistry.register("callable_step")
@dataclass
class CallableStep:
name: str = "callable_step"
transform_fn: Any = None
def __call__(self, transition: EnvTransition) -> EnvTransition:
obs = transition.get(TransitionKey.OBSERVATION)
if obs is not None and self.transform_fn is not None:
processed_obs = {}
for k, v in obs.items():
processed_obs[k] = self.transform_fn(v)
new_transition = transition.copy()
new_transition[TransitionKey.OBSERVATION] = processed_obs
return new_transition
return transition
def get_config(self):
return {"name": self.name}
def feature_contract(self, features: dict[str, PolicyFeature]) -> dict[str, PolicyFeature]:
# We do not test feature_contract here
return features
try:
step = CallableStep()
pipeline = RobotProcessor([step])
with tempfile.TemporaryDirectory() as tmp_dir:
pipeline.save_pretrained(tmp_dir)
# Define a transform function
def double_values(x):
if isinstance(x, (int, float)):
return x * 2
elif isinstance(x, torch.Tensor):
return x * 2
return x
# Load with callable override
loaded = RobotProcessor.from_pretrained(
tmp_dir, overrides={"callable_step": {"transform_fn": double_values}}
)
# Test it works
transition = create_transition(observation={"value": torch.tensor(5.0)})
result = loaded(transition)
assert result[TransitionKey.OBSERVATION]["value"].item() == 10.0
finally:
ProcessorStepRegistry.unregister("callable_step")
def test_override_multiple_same_class_warning():
"""Test behavior when multiple steps of the same class exist."""
step1 = MockStepWithNonSerializableParam(name="step1", multiplier=1.0)
step2 = MockStepWithNonSerializableParam(name="step2", multiplier=2.0)
pipeline = RobotProcessor([step1, step2])
with tempfile.TemporaryDirectory() as tmp_dir:
pipeline.save_pretrained(tmp_dir)
# Override affects all instances of the class
loaded = RobotProcessor.from_pretrained(
tmp_dir, overrides={"MockStepWithNonSerializableParam": {"multiplier": 10.0}}
)
# Both steps get the same override
assert loaded.steps[0].multiplier == 10.0
assert loaded.steps[1].multiplier == 10.0
# But original names are preserved
assert loaded.steps[0].name == "step1"
assert loaded.steps[1].name == "step2"
def test_config_filename_special_characters():
"""Test config filenames with special characters are sanitized."""
# Processor name with special characters
pipeline = RobotProcessor([MockStep()], name="My/Processor\\With:Special*Chars")
with tempfile.TemporaryDirectory() as tmp_dir:
pipeline.save_pretrained(tmp_dir)
# Check that filename was sanitized
json_files = list(Path(tmp_dir).glob("*.json"))
assert len(json_files) == 1
# Should have replaced special chars with underscores
expected_name = "my_processor_with_special_chars.json"
assert json_files[0].name == expected_name
def test_state_file_naming_with_multiple_processors():
"""Test that state files are properly prefixed with pipeline names to avoid conflicts."""
# Create two processors with state
step1 = MockStepWithTensorState(name="norm", window_size=5)
preprocessor = RobotProcessor([step1], name="PreProcessor")
step2 = MockStepWithTensorState(name="norm", window_size=10)
postprocessor = RobotProcessor([step2], name="PostProcessor")
# Process some data to create state
for i in range(3):
transition = create_transition(reward=float(i))
preprocessor(transition)
postprocessor(transition)
with tempfile.TemporaryDirectory() as tmp_dir:
# Save both processors to the same directory
preprocessor.save_pretrained(tmp_dir)
postprocessor.save_pretrained(tmp_dir)
# Check that all files exist and are distinct
assert (Path(tmp_dir) / "preprocessor.json").exists()
assert (Path(tmp_dir) / "postprocessor.json").exists()
assert (Path(tmp_dir) / "preprocessor_step_0.safetensors").exists()
assert (Path(tmp_dir) / "postprocessor_step_0.safetensors").exists()
# Load both back and verify they work correctly
loaded_pre = RobotProcessor.from_pretrained(tmp_dir, config_filename="preprocessor.json")
loaded_post = RobotProcessor.from_pretrained(tmp_dir, config_filename="postprocessor.json")
assert loaded_pre.name == "PreProcessor"
assert loaded_post.name == "PostProcessor"
assert loaded_pre.steps[0].window_size == 5
assert loaded_post.steps[0].window_size == 10
def test_override_with_device_strings():
"""Test overriding device parameters with string values."""
@ProcessorStepRegistry.register("device_aware_step")
@dataclass
class DeviceAwareStep:
device: str = "cpu"
def __init__(self, device: str = "cpu"):
self.device = device
self.buffer = torch.zeros(10, device=device)
def __call__(self, transition: EnvTransition) -> EnvTransition:
return transition
def get_config(self):
return {"device": str(self.device)}
def state_dict(self):
return {"buffer": self.buffer}
def load_state_dict(self, state):
self.buffer = state["buffer"]
def feature_contract(self, features: dict[str, PolicyFeature]) -> dict[str, PolicyFeature]:
# We do not test feature_contract here
return features
try:
step = DeviceAwareStep(device="cpu")
pipeline = RobotProcessor([step])
with tempfile.TemporaryDirectory() as tmp_dir:
pipeline.save_pretrained(tmp_dir)
# Override device
if torch.cuda.is_available():
loaded = RobotProcessor.from_pretrained(
tmp_dir, overrides={"device_aware_step": {"device": "cuda:0"}}
)
loaded_step = loaded.steps[0]
assert loaded_step.device == "cuda:0"
# Note: buffer will still be on CPU from saved state
# until .to() is called on the processor
finally:
ProcessorStepRegistry.unregister("device_aware_step")
def test_from_pretrained_nonexistent_path():
"""Test error handling when loading from non-existent sources."""
from huggingface_hub.errors import HfHubHTTPError, HFValidationError
# Test with an invalid repo ID (too many slashes) - caught by HF validation
with pytest.raises(HFValidationError):
RobotProcessor.from_pretrained("/path/that/does/not/exist")
# Test with a non-existent but valid Hub repo format
with pytest.raises((FileNotFoundError, HfHubHTTPError)):
RobotProcessor.from_pretrained("nonexistent-user/nonexistent-repo")
# Test with a local directory that exists but has no config files
with tempfile.TemporaryDirectory() as tmp_dir:
with pytest.raises(FileNotFoundError, match="No .json configuration files found"):
RobotProcessor.from_pretrained(tmp_dir)
def test_save_load_with_custom_converter_functions():
"""Test that custom to_transition and to_output functions are NOT saved."""
def custom_to_transition(batch):
# Custom conversion logic
return {
TransitionKey.OBSERVATION: batch.get("obs"),
TransitionKey.ACTION: batch.get("act"),
TransitionKey.REWARD: batch.get("rew", 0.0),
TransitionKey.DONE: batch.get("done", False),
TransitionKey.TRUNCATED: batch.get("truncated", False),
TransitionKey.INFO: {},
TransitionKey.COMPLEMENTARY_DATA: {},
}
def custom_to_output(transition):
# Custom output format
return {
"obs": transition.get(TransitionKey.OBSERVATION),
"act": transition.get(TransitionKey.ACTION),
"rew": transition.get(TransitionKey.REWARD),
"done": transition.get(TransitionKey.DONE),
"truncated": transition.get(TransitionKey.TRUNCATED),
}
# Create processor with custom converters
pipeline = RobotProcessor([MockStep()], to_transition=custom_to_transition, to_output=custom_to_output)
with tempfile.TemporaryDirectory() as tmp_dir:
pipeline.save_pretrained(tmp_dir)
# Load - should use default converters
loaded = RobotProcessor.from_pretrained(tmp_dir)
# Verify it uses default converters by checking with standard batch format
batch = {
"observation.image": torch.randn(1, 3, 32, 32),
"action": torch.randn(1, 7),
"next.reward": torch.tensor([1.0]),
"next.done": torch.tensor([False]),
"next.truncated": torch.tensor([False]),
"info": {},
}
# Should work with standard format (wouldn't work with custom converter)
result = loaded(batch)
assert "observation.image" in result # Standard format preserved
class NonCompliantStep:
"""Intentionally non-compliant: missing feature_contract."""
def __call__(self, transition: EnvTransition) -> EnvTransition:
return transition
def test_construction_rejects_step_without_feature_contract():
with pytest.raises(TypeError, match=r"must define feature_contract\(features\) -> dict\[str, Any\]"):
RobotProcessor([NonCompliantStep()])
class NonCallableStep:
"""Intentionally non-compliant: missing __call__."""
def feature_contract(self, features: dict[str, PolicyFeature]) -> dict[str, PolicyFeature]:
return features
def test_construction_rejects_step_without_call():
with pytest.raises(TypeError, match=r"must define __call__"):
RobotProcessor([NonCallableStep()])
@dataclass
class FeatureContractAddStep:
"""Adds a PolicyFeature"""
key: str = "a"
value: PolicyFeature = PolicyFeature(type=FeatureType.STATE, shape=(1,))
def __call__(self, transition: EnvTransition) -> EnvTransition:
return transition
def feature_contract(self, features: dict[str, PolicyFeature]) -> dict[str, PolicyFeature]:
features[self.key] = self.value
return features
@dataclass
class FeatureContractMutateStep:
"""Mutates a PolicyFeature"""
key: str = "a"
fn: Callable[[PolicyFeature | None], PolicyFeature] = lambda x: x # noqa: E731
def __call__(self, transition: EnvTransition) -> EnvTransition:
return transition
def feature_contract(self, features: dict[str, PolicyFeature]) -> dict[str, PolicyFeature]:
features[self.key] = self.fn(features.get(self.key))
return features
@dataclass
class FeatureContractBadReturnStep:
"""Returns a non-dict"""
def __call__(self, transition: EnvTransition) -> EnvTransition:
return transition
def feature_contract(self, features: dict[str, PolicyFeature]) -> dict[str, PolicyFeature]:
return ["not-a-dict"]
@dataclass
class FeatureContractRemoveStep:
"""Removes a PolicyFeature"""
key: str
def __call__(self, transition: EnvTransition) -> EnvTransition:
return transition
def feature_contract(self, features: dict[str, PolicyFeature]) -> dict[str, PolicyFeature]:
features.pop(self.key, None)
return features
def test_feature_contract_orders_and_merges(policy_feature_factory):
p = RobotProcessor(
[
FeatureContractAddStep("a", policy_feature_factory(FeatureType.STATE, (1,))),
FeatureContractMutateStep("a", lambda v: PolicyFeature(type=v.type, shape=(3,))),
FeatureContractAddStep("b", policy_feature_factory(FeatureType.ENV, (2,))),
]
)
out = p.feature_contract({})
assert out["a"].type == FeatureType.STATE and out["a"].shape == (3,)
assert out["b"].type == FeatureType.ENV and out["b"].shape == (2,)
assert_contract_is_typed(out)
def test_feature_contract_respects_initial_without_mutation(policy_feature_factory):
initial = {
"seed": policy_feature_factory(FeatureType.STATE, (7,)),
"nested": policy_feature_factory(FeatureType.ENV, (0,)),
}
p = RobotProcessor(
[
FeatureContractMutateStep("seed", lambda v: PolicyFeature(type=v.type, shape=(v.shape[0] + 1,))),
FeatureContractMutateStep(
"nested", lambda v: PolicyFeature(type=v.type, shape=(v.shape[0] + 5,))
),
]
)
out = p.feature_contract(initial_features=initial)
assert out["seed"].shape == (8,)
assert out["nested"].shape == (5,)
# Initial dict must be preserved
assert initial["seed"].shape == (7,)
assert initial["nested"].shape == (0,)
assert_contract_is_typed(out)
def test_feature_contract_type_error_on_bad_step():
p = RobotProcessor([FeatureContractAddStep(), FeatureContractBadReturnStep()])
with pytest.raises(TypeError, match=r"\w+\.feature_contract must return dict\[str, Any\]"):
_ = p.feature_contract({})
def test_feature_contract_execution_order_tracking():
class Track:
def __init__(self, label):
self.label = label
def __call__(self, transition: EnvTransition) -> EnvTransition:
return transition
def feature_contract(self, features: dict[str, PolicyFeature]) -> dict[str, PolicyFeature]:
code = {"A": 1, "B": 2, "C": 3}[self.label]
pf = features.get("order", PolicyFeature(type=FeatureType.ENV, shape=()))
features["order"] = PolicyFeature(type=pf.type, shape=pf.shape + (code,))
return features
out = RobotProcessor([Track("A"), Track("B"), Track("C")]).feature_contract({})
assert out["order"].shape == (1, 2, 3)
def test_feature_contract_remove_key(policy_feature_factory):
p = RobotProcessor(
[
FeatureContractAddStep("a", policy_feature_factory(FeatureType.STATE, (1,))),
FeatureContractRemoveStep("a"),
]
)
out = p.feature_contract({})
assert "a" not in out
def test_feature_contract_remove_from_initial(policy_feature_factory):
initial = {
"keep": policy_feature_factory(FeatureType.STATE, (1,)),
"drop": policy_feature_factory(FeatureType.STATE, (1,)),
}
p = RobotProcessor([FeatureContractRemoveStep("drop")])
out = p.feature_contract(initial_features=initial)
assert "drop" not in out and out["keep"] == initial["keep"]
#!/usr/bin/env python
# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
from collections.abc import Callable
import pytest
import torch
from lerobot.datasets.lerobot_dataset import LeRobotDataset
from lerobot.utils.buffer import BatchTransition, ReplayBuffer, random_crop_vectorized
from tests.fixtures.constants import DUMMY_REPO_ID
def state_dims() -> list[str]:
return ["observation.image", "observation.state"]
@pytest.fixture
def replay_buffer() -> ReplayBuffer:
return create_empty_replay_buffer()
def clone_state(state: dict) -> dict:
return {k: v.clone() for k, v in state.items()}
def create_empty_replay_buffer(
optimize_memory: bool = False,
use_drq: bool = False,
image_augmentation_function: Callable | None = None,
) -> ReplayBuffer:
buffer_capacity = 10
device = "cpu"
return ReplayBuffer(
buffer_capacity,
device,
state_dims(),
optimize_memory=optimize_memory,
use_drq=use_drq,
image_augmentation_function=image_augmentation_function,
)
def create_random_image() -> torch.Tensor:
return torch.rand(3, 84, 84)
def create_dummy_transition() -> dict:
return {
"observation.image": create_random_image(),
"action": torch.randn(4),
"reward": torch.tensor(1.0),
"observation.state": torch.randn(
10,
),
"done": torch.tensor(False),
"truncated": torch.tensor(False),
"complementary_info": {},
}
def create_dataset_from_replay_buffer(tmp_path) -> tuple[LeRobotDataset, ReplayBuffer]:
dummy_state_1 = create_dummy_state()
dummy_action_1 = create_dummy_action()
dummy_state_2 = create_dummy_state()
dummy_action_2 = create_dummy_action()
dummy_state_3 = create_dummy_state()
dummy_action_3 = create_dummy_action()
dummy_state_4 = create_dummy_state()
dummy_action_4 = create_dummy_action()
replay_buffer = create_empty_replay_buffer()
replay_buffer.add(dummy_state_1, dummy_action_1, 1.0, dummy_state_1, False, False)
replay_buffer.add(dummy_state_2, dummy_action_2, 1.0, dummy_state_2, False, False)
replay_buffer.add(dummy_state_3, dummy_action_3, 1.0, dummy_state_3, True, True)
replay_buffer.add(dummy_state_4, dummy_action_4, 1.0, dummy_state_4, True, True)
root = tmp_path / "test"
return (replay_buffer.to_lerobot_dataset(DUMMY_REPO_ID, root=root), replay_buffer)
def create_dummy_state() -> dict:
return {
"observation.image": create_random_image(),
"observation.state": torch.randn(
10,
),
}
def get_tensor_memory_consumption(tensor):
return tensor.nelement() * tensor.element_size()
def get_tensors_memory_consumption(obj, visited_addresses):
total_size = 0
address = id(obj)
if address in visited_addresses:
return 0
visited_addresses.add(address)
if isinstance(obj, torch.Tensor):
return get_tensor_memory_consumption(obj)
elif isinstance(obj, (list, tuple)):
for item in obj:
total_size += get_tensors_memory_consumption(item, visited_addresses)
elif isinstance(obj, dict):
for value in obj.values():
total_size += get_tensors_memory_consumption(value, visited_addresses)
elif hasattr(obj, "__dict__"):
# It's an object, we need to get the size of the attributes
for _, attr in vars(obj).items():
total_size += get_tensors_memory_consumption(attr, visited_addresses)
return total_size
def get_object_memory(obj):
# Track visited addresses to avoid infinite loops
# and cases where two properties point to the same object
visited_addresses = set()
# Get the size of the object in bytes
total_size = sys.getsizeof(obj)
# Get the size of the tensor attributes
total_size += get_tensors_memory_consumption(obj, visited_addresses)
return total_size
def create_dummy_action() -> torch.Tensor:
return torch.randn(4)
def dict_properties() -> list:
return ["state", "next_state"]
@pytest.fixture
def dummy_state() -> dict:
return create_dummy_state()
@pytest.fixture
def next_dummy_state() -> dict:
return create_dummy_state()
@pytest.fixture
def dummy_action() -> torch.Tensor:
return torch.randn(4)
def test_empty_buffer_sample_raises_error(replay_buffer):
assert len(replay_buffer) == 0, "Replay buffer should be empty."
assert replay_buffer.capacity == 10, "Replay buffer capacity should be 10."
with pytest.raises(RuntimeError, match="Cannot sample from an empty buffer"):
replay_buffer.sample(1)
def test_zero_capacity_buffer_raises_error():
with pytest.raises(ValueError, match="Capacity must be greater than 0."):
ReplayBuffer(0, "cpu", ["observation", "next_observation"])
def test_add_transition(replay_buffer, dummy_state, dummy_action):
replay_buffer.add(dummy_state, dummy_action, 1.0, dummy_state, False, False)
assert len(replay_buffer) == 1, "Replay buffer should have one transition after adding."
assert torch.equal(replay_buffer.actions[0], dummy_action), (
"Action should be equal to the first transition."
)
assert replay_buffer.rewards[0] == 1.0, "Reward should be equal to the first transition."
assert not replay_buffer.dones[0], "Done should be False for the first transition."
assert not replay_buffer.truncateds[0], "Truncated should be False for the first transition."
for dim in state_dims():
assert torch.equal(replay_buffer.states[dim][0], dummy_state[dim]), (
"Observation should be equal to the first transition."
)
assert torch.equal(replay_buffer.next_states[dim][0], dummy_state[dim]), (
"Next observation should be equal to the first transition."
)
def test_add_over_capacity():
replay_buffer = ReplayBuffer(2, "cpu", ["observation", "next_observation"])
dummy_state_1 = create_dummy_state()
dummy_action_1 = create_dummy_action()
dummy_state_2 = create_dummy_state()
dummy_action_2 = create_dummy_action()
dummy_state_3 = create_dummy_state()
dummy_action_3 = create_dummy_action()
replay_buffer.add(dummy_state_1, dummy_action_1, 1.0, dummy_state_1, False, False)
replay_buffer.add(dummy_state_2, dummy_action_2, 1.0, dummy_state_2, False, False)
replay_buffer.add(dummy_state_3, dummy_action_3, 1.0, dummy_state_3, True, True)
assert len(replay_buffer) == 2, "Replay buffer should have 2 transitions after adding 3."
for dim in state_dims():
        assert torch.equal(replay_buffer.states[dim][0], dummy_state_3[dim]), (
            "Observation should be equal to the last transition."
        )
        assert torch.equal(replay_buffer.next_states[dim][0], dummy_state_3[dim]), (
            "Next observation should be equal to the last transition."
        )
assert torch.equal(replay_buffer.actions[0], dummy_action_3), (
"Action should be equal to the last transition."
)
assert replay_buffer.rewards[0] == 1.0, "Reward should be equal to the last transition."
    assert replay_buffer.dones[0], "Done should be True for the last transition."
    assert replay_buffer.truncateds[0], "Truncated should be True for the last transition."
def test_sample_from_empty_buffer(replay_buffer):
with pytest.raises(RuntimeError, match="Cannot sample from an empty buffer"):
replay_buffer.sample(1)
def test_sample_with_1_transition(replay_buffer, dummy_state, next_dummy_state, dummy_action):
replay_buffer.add(dummy_state, dummy_action, 1.0, next_dummy_state, False, False)
got_batch_transition = replay_buffer.sample(1)
expected_batch_transition = BatchTransition(
state=clone_state(dummy_state),
action=dummy_action.clone(),
reward=1.0,
next_state=clone_state(next_dummy_state),
done=False,
truncated=False,
)
for buffer_property in dict_properties():
for k, v in expected_batch_transition[buffer_property].items():
got_state = got_batch_transition[buffer_property][k]
assert got_state.shape[0] == 1, f"{k} should have 1 transition."
assert got_state.device.type == "cpu", f"{k} should be on cpu."
assert torch.equal(got_state[0], v), f"{k} should be equal to the expected batch transition."
for key, _value in expected_batch_transition.items():
if key in dict_properties():
continue
got_value = got_batch_transition[key]
v_tensor = expected_batch_transition[key]
if not isinstance(v_tensor, torch.Tensor):
v_tensor = torch.tensor(v_tensor)
assert got_value.shape[0] == 1, f"{key} should have 1 transition."
assert got_value.device.type == "cpu", f"{key} should be on cpu."
assert torch.equal(got_value[0], v_tensor), f"{key} should be equal to the expected batch transition."
def test_sample_with_batch_bigger_than_buffer_size(
replay_buffer, dummy_state, next_dummy_state, dummy_action
):
replay_buffer.add(dummy_state, dummy_action, 1.0, next_dummy_state, False, False)
got_batch_transition = replay_buffer.sample(10)
expected_batch_transition = BatchTransition(
state=dummy_state,
action=dummy_action,
reward=1.0,
next_state=next_dummy_state,
done=False,
truncated=False,
)
for buffer_property in dict_properties():
for k in expected_batch_transition[buffer_property]:
got_state = got_batch_transition[buffer_property][k]
assert got_state.shape[0] == 1, f"{k} should have 1 transition."
for key in expected_batch_transition:
if key in dict_properties():
continue
got_value = got_batch_transition[key]
assert got_value.shape[0] == 1, f"{key} should have 1 transition."
def test_sample_batch(replay_buffer):
dummy_state_1 = create_dummy_state()
dummy_action_1 = create_dummy_action()
dummy_state_2 = create_dummy_state()
dummy_action_2 = create_dummy_action()
dummy_state_3 = create_dummy_state()
dummy_action_3 = create_dummy_action()
dummy_state_4 = create_dummy_state()
dummy_action_4 = create_dummy_action()
replay_buffer.add(dummy_state_1, dummy_action_1, 1.0, dummy_state_1, False, False)
replay_buffer.add(dummy_state_2, dummy_action_2, 2.0, dummy_state_2, False, False)
replay_buffer.add(dummy_state_3, dummy_action_3, 3.0, dummy_state_3, True, True)
replay_buffer.add(dummy_state_4, dummy_action_4, 4.0, dummy_state_4, True, True)
dummy_states = [dummy_state_1, dummy_state_2, dummy_state_3, dummy_state_4]
dummy_actions = [dummy_action_1, dummy_action_2, dummy_action_3, dummy_action_4]
got_batch_transition = replay_buffer.sample(3)
for buffer_property in dict_properties():
for k in got_batch_transition[buffer_property]:
got_state = got_batch_transition[buffer_property][k]
            assert got_state.shape[0] == 3, f"{k} should have 3 transitions."
for got_state_item in got_state:
assert any(torch.equal(got_state_item, dummy_state[k]) for dummy_state in dummy_states), (
f"{k} should be equal to one of the dummy states."
)
for got_action_item in got_batch_transition["action"]:
assert any(torch.equal(got_action_item, dummy_action) for dummy_action in dummy_actions), (
"Actions should be equal to the dummy actions."
)
for k in got_batch_transition:
if k in dict_properties() or k == "complementary_info":
continue
got_value = got_batch_transition[k]
        assert got_value.shape[0] == 3, f"{k} should have 3 transitions."
def test_to_lerobot_dataset_with_empty_buffer(replay_buffer):
with pytest.raises(ValueError, match="The replay buffer is empty. Cannot convert to a dataset."):
replay_buffer.to_lerobot_dataset("dummy_repo")
def test_to_lerobot_dataset(tmp_path):
ds, buffer = create_dataset_from_replay_buffer(tmp_path)
assert len(ds) == len(buffer), "Dataset should have the same size as the Replay Buffer"
assert ds.fps == 1, "FPS should be 1"
assert ds.repo_id == "dummy/repo", "The dataset should have `dummy/repo` repo id"
for dim in state_dims():
assert dim in ds.features
assert ds.features[dim]["shape"] == buffer.states[dim][0].shape
assert ds.num_episodes == 2
assert ds.num_frames == 4
for i in range(len(ds)):
for feature, value in ds[i].items():
if feature == "action":
assert torch.equal(value, buffer.actions[i])
elif feature == "next.reward":
assert torch.equal(value, buffer.rewards[i])
elif feature == "next.done":
assert torch.equal(value, buffer.dones[i])
elif feature == "observation.image":
                # Tensor -> numpy conversion is not exact, so small differences remain
# TODO: Check and fix it
torch.testing.assert_close(value, buffer.states["observation.image"][i], rtol=0.3, atol=0.003)
elif feature == "observation.state":
assert torch.equal(value, buffer.states["observation.state"][i])
def test_from_lerobot_dataset(tmp_path):
dummy_state_1 = create_dummy_state()
dummy_action_1 = create_dummy_action()
dummy_state_2 = create_dummy_state()
dummy_action_2 = create_dummy_action()
dummy_state_3 = create_dummy_state()
dummy_action_3 = create_dummy_action()
dummy_state_4 = create_dummy_state()
dummy_action_4 = create_dummy_action()
replay_buffer = create_empty_replay_buffer()
replay_buffer.add(dummy_state_1, dummy_action_1, 1.0, dummy_state_1, False, False)
replay_buffer.add(dummy_state_2, dummy_action_2, 1.0, dummy_state_2, False, False)
replay_buffer.add(dummy_state_3, dummy_action_3, 1.0, dummy_state_3, True, True)
replay_buffer.add(dummy_state_4, dummy_action_4, 1.0, dummy_state_4, True, True)
root = tmp_path / "test"
ds = replay_buffer.to_lerobot_dataset(DUMMY_REPO_ID, root=root)
reconverted_buffer = ReplayBuffer.from_lerobot_dataset(
ds, state_keys=list(state_dims()), device="cpu", capacity=replay_buffer.capacity, use_drq=False
)
# Check only the part of the buffer that's actually filled with data
assert torch.equal(
reconverted_buffer.actions[: len(replay_buffer)],
replay_buffer.actions[: len(replay_buffer)],
), "Actions from converted buffer should be equal to the original replay buffer."
assert torch.equal(
reconverted_buffer.rewards[: len(replay_buffer)], replay_buffer.rewards[: len(replay_buffer)]
), "Rewards from converted buffer should be equal to the original replay buffer."
assert torch.equal(
reconverted_buffer.dones[: len(replay_buffer)], replay_buffer.dones[: len(replay_buffer)]
), "Dones from converted buffer should be equal to the original replay buffer."
    # LeRobot datasets do not support truncateds yet
expected_truncateds = torch.zeros(len(replay_buffer)).bool()
assert torch.equal(reconverted_buffer.truncateds[: len(replay_buffer)], expected_truncateds), (
"Truncateds from converted buffer should be equal False"
)
assert torch.equal(
replay_buffer.states["observation.state"][: len(replay_buffer)],
reconverted_buffer.states["observation.state"][: len(replay_buffer)],
), "State should be the same after converting to dataset and return back"
for i in range(4):
torch.testing.assert_close(
replay_buffer.states["observation.image"][i],
reconverted_buffer.states["observation.image"][i],
rtol=0.4,
atol=0.004,
)
    # Frames 2 and 3 have the done flag set, so their next-state values equal the current state
for i in range(2):
# In the current implementation we take the next state from the `states` and ignore `next_states`
next_index = (i + 1) % 4
torch.testing.assert_close(
replay_buffer.states["observation.image"][next_index],
reconverted_buffer.next_states["observation.image"][i],
rtol=0.4,
atol=0.004,
)
for i in range(2, 4):
assert torch.equal(
replay_buffer.states["observation.state"][i],
reconverted_buffer.next_states["observation.state"][i],
)
def test_buffer_sample_alignment():
# Initialize buffer
buffer = ReplayBuffer(capacity=100, device="cpu", state_keys=["state_value"], storage_device="cpu")
# Fill buffer with patterned data
for i in range(100):
signature = float(i) / 100.0
state = {"state_value": torch.tensor([[signature]]).float()}
action = torch.tensor([[2.0 * signature]]).float()
reward = 3.0 * signature
is_end = (i + 1) % 10 == 0
if is_end:
next_state = {"state_value": torch.tensor([[signature]]).float()}
done = True
else:
next_signature = float(i + 1) / 100.0
next_state = {"state_value": torch.tensor([[next_signature]]).float()}
done = False
buffer.add(state, action, reward, next_state, done, False)
# Sample and verify
batch = buffer.sample(50)
for i in range(50):
state_sig = batch["state"]["state_value"][i].item()
action_val = batch["action"][i].item()
reward_val = batch["reward"][i].item()
next_state_sig = batch["next_state"]["state_value"][i].item()
is_done = batch["done"][i].item() > 0.5
# Verify relationships
assert abs(action_val - 2.0 * state_sig) < 1e-4, (
f"Action {action_val} should be 2x state signature {state_sig}"
)
assert abs(reward_val - 3.0 * state_sig) < 1e-4, (
f"Reward {reward_val} should be 3x state signature {state_sig}"
)
if is_done:
assert abs(next_state_sig - state_sig) < 1e-4, (
f"For done states, next_state {next_state_sig} should equal state {state_sig}"
)
else:
# Either it's the next sequential state (+0.01) or same state (for episode boundaries)
valid_next = (
abs(next_state_sig - state_sig - 0.01) < 1e-4 or abs(next_state_sig - state_sig) < 1e-4
)
assert valid_next, (
f"Next state {next_state_sig} should be either state+0.01 or same as state {state_sig}"
)
def test_memory_optimization():
dummy_state_1 = create_dummy_state()
dummy_action_1 = create_dummy_action()
dummy_state_2 = create_dummy_state()
dummy_action_2 = create_dummy_action()
dummy_state_3 = create_dummy_state()
dummy_action_3 = create_dummy_action()
dummy_state_4 = create_dummy_state()
dummy_action_4 = create_dummy_action()
replay_buffer = create_empty_replay_buffer()
replay_buffer.add(dummy_state_1, dummy_action_1, 1.0, dummy_state_2, False, False)
replay_buffer.add(dummy_state_2, dummy_action_2, 1.0, dummy_state_3, False, False)
replay_buffer.add(dummy_state_3, dummy_action_3, 1.0, dummy_state_4, False, False)
replay_buffer.add(dummy_state_4, dummy_action_4, 1.0, dummy_state_4, True, True)
optimized_replay_buffer = create_empty_replay_buffer(True)
optimized_replay_buffer.add(dummy_state_1, dummy_action_1, 1.0, dummy_state_2, False, False)
optimized_replay_buffer.add(dummy_state_2, dummy_action_2, 1.0, dummy_state_3, False, False)
optimized_replay_buffer.add(dummy_state_3, dummy_action_3, 1.0, dummy_state_4, False, False)
optimized_replay_buffer.add(dummy_state_4, dummy_action_4, 1.0, None, True, True)
assert get_object_memory(optimized_replay_buffer) < get_object_memory(replay_buffer), (
"Optimized replay buffer should be smaller than the original replay buffer"
)
def test_check_image_augmentations_with_drq_and_dummy_image_augmentation_function(dummy_state, dummy_action):
def dummy_image_augmentation_function(x):
return torch.ones_like(x) * 10
replay_buffer = create_empty_replay_buffer(
use_drq=True, image_augmentation_function=dummy_image_augmentation_function
)
replay_buffer.add(dummy_state, dummy_action, 1.0, dummy_state, False, False)
sampled_transitions = replay_buffer.sample(1)
assert torch.all(sampled_transitions["state"]["observation.image"] == 10), (
"Image augmentations should be applied"
)
assert torch.all(sampled_transitions["next_state"]["observation.image"] == 10), (
"Image augmentations should be applied"
)
def test_check_image_augmentations_with_drq_and_default_image_augmentation_function(
dummy_state, dummy_action
):
replay_buffer = create_empty_replay_buffer(use_drq=True)
replay_buffer.add(dummy_state, dummy_action, 1.0, dummy_state, False, False)
# Let's check that it doesn't fail and shapes are correct
sampled_transitions = replay_buffer.sample(1)
assert sampled_transitions["state"]["observation.image"].shape == (1, 3, 84, 84)
assert sampled_transitions["next_state"]["observation.image"].shape == (1, 3, 84, 84)
def test_random_crop_vectorized_basic():
# Create a batch of 2 images with known patterns
batch_size, channels, height, width = 2, 3, 10, 8
images = torch.zeros((batch_size, channels, height, width))
# Fill with unique values for testing
for b in range(batch_size):
images[b] = b + 1
crop_size = (6, 4) # Smaller than original
cropped = random_crop_vectorized(images, crop_size)
# Check output shape
assert cropped.shape == (batch_size, channels, *crop_size)
# Check that values are preserved (should be either 1s or 2s for respective batches)
assert torch.all(cropped[0] == 1)
assert torch.all(cropped[1] == 2)
def test_random_crop_vectorized_invalid_size():
images = torch.zeros((2, 3, 10, 8))
# Test crop size larger than image
with pytest.raises(ValueError, match="Requested crop size .* is bigger than the image size"):
random_crop_vectorized(images, (12, 8))
with pytest.raises(ValueError, match="Requested crop size .* is bigger than the image size"):
random_crop_vectorized(images, (10, 10))
def _populate_buffer_for_async_test(capacity: int = 10) -> ReplayBuffer:
"""Create a small buffer with deterministic 3ร128ร128 images and 11-D state."""
buffer = ReplayBuffer(
capacity=capacity,
device="cpu",
state_keys=["observation.image", "observation.state"],
storage_device="cpu",
)
for i in range(capacity):
img = torch.ones(3, 128, 128) * i
state_vec = torch.arange(11).float() + i
state = {
"observation.image": img,
"observation.state": state_vec,
}
buffer.add(
state=state,
action=torch.tensor([0.0]),
reward=0.0,
next_state=state,
done=False,
truncated=False,
)
return buffer
def test_async_iterator_shapes_basic():
buffer = _populate_buffer_for_async_test()
batch_size = 2
iterator = buffer.get_iterator(batch_size=batch_size, async_prefetch=True, queue_size=1)
batch = next(iterator)
images = batch["state"]["observation.image"]
states = batch["state"]["observation.state"]
assert images.shape == (batch_size, 3, 128, 128)
assert states.shape == (batch_size, 11)
next_images = batch["next_state"]["observation.image"]
next_states = batch["next_state"]["observation.state"]
assert next_images.shape == (batch_size, 3, 128, 128)
assert next_states.shape == (batch_size, 11)
def test_async_iterator_multiple_iterations():
buffer = _populate_buffer_for_async_test()
batch_size = 2
iterator = buffer.get_iterator(batch_size=batch_size, async_prefetch=True, queue_size=2)
for _ in range(5):
batch = next(iterator)
images = batch["state"]["observation.image"]
states = batch["state"]["observation.state"]
assert images.shape == (batch_size, 3, 128, 128)
assert states.shape == (batch_size, 11)
next_images = batch["next_state"]["observation.image"]
next_states = batch["next_state"]["observation.state"]
assert next_images.shape == (batch_size, 3, 128, 128)
assert next_states.shape == (batch_size, 11)
# Ensure iterator can be disposed without blocking
del iterator
| lerobot/tests/utils/test_replay_buffer.py |
# coding=utf-8
# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Push the details from a LightEval run to the Hub.
Usage:
python src/open_r1/utils/upload_details.py \
--data_files {path_to_parquet_file} \
--hub_repo_id {hub_repo_id} \
--config_name {config_name}
"""
from dataclasses import dataclass, field
from typing import List, Optional
from datasets import load_dataset
from transformers import HfArgumentParser
@dataclass
class ScriptArguments:
    data_files: List[str] = field(default_factory=list)
    hub_repo_id: Optional[str] = None
    config_name: Optional[str] = None
def main():
parser = HfArgumentParser(ScriptArguments)
args = parser.parse_args_into_dataclasses()[0]
if all(file.endswith(".json") for file in args.data_files):
ds = load_dataset("json", data_files=args.data_files)
elif all(file.endswith(".jsonl") for file in args.data_files):
ds = load_dataset("json", data_files=args.data_files)
else:
ds = load_dataset("parquet", data_files=args.data_files)
url = ds.push_to_hub(args.hub_repo_id, config_name=args.config_name, private=True)
print(f"Dataset available at: {url}")
if __name__ == "__main__":
main()
| open-r1/scripts/upload_details.py |
from itertools import islice
def batched(iterable, n):
"Batch data into lists of length n. The last batch may be shorter."
# batched('ABCDEFG', 3) --> ABC DEF G
    if n < 1:
        raise ValueError("n must be at least one")
it = iter(iterable)
while batch := list(islice(it, n)):
yield batch
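A self-contained usage check of the recipe (the function is restated here so the snippet runs on its own; the `ValueError` guard for `n < 1` mirrors the itertools documentation's recipe):

```python
from itertools import islice

# Self-contained copy of the batched recipe, with a quick usage check.
def batched(iterable, n):
    "Batch data into lists of length n. The last batch may be shorter."
    if n < 1:
        raise ValueError("n must be at least one")
    it = iter(iterable)
    while batch := list(islice(it, n)):
        yield batch

# The final batch is shorter when the input length is not a multiple of n.
assert list(batched("ABCDEFG", 3)) == [["A", "B", "C"], ["D", "E", "F"], ["G"]]
```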
| open-r1/src/open_r1/utils/competitive_programming/utils.py |
# PEFT Docker images
Here we store all PEFT Docker images used in our testing infrastructure. We currently use Python 3.11 on all our images.
- `peft-cpu`: PEFT compiled on CPU with all other HF libraries installed from the main branch
- `peft-gpu`: PEFT compiled for NVIDIA GPUs with all other HF libraries installed from the main branch
- `peft-gpu-bnb-source`: PEFT compiled for NVIDIA GPUs with `bitsandbytes` and all other HF libraries installed from the main branch
- `peft-gpu-bnb-latest`: PEFT compiled for NVIDIA GPUs with `bitsandbytes` compiled from main and all other HF libraries installed from the latest PyPI release
| peft/docker/README.md |
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.
-->
# Contribute to PEFT
We are happy to accept contributions to PEFT. If you plan to contribute, please read this to make the process as smooth as possible.
## Installation
For code contributions to PEFT, you should choose the ["source"](../install#source) installation method.
If you are new to creating a pull request, follow the [Creating a pull request](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request) guide by GitHub.
## Tests and code quality checks
Regardless of the contribution type (unless it's only about the docs), you should run tests and code quality checks before creating a PR to ensure your contribution doesn't break anything and follows the project standards.
We provide a Makefile to execute the necessary tests. Run the code below for the unit test:
```sh
make test
```
Run one of the following to either only check or check and fix code quality and style:
```sh
make quality # just check
make style # check and fix
```
You can also set up [`pre-commit`](https://pre-commit.com/) to run these fixes
automatically as Git commit hooks.
```bash
$ pip install pre-commit
$ pre-commit install
```
Running all the tests can take a while, so during development it can be more efficient to only [run tests specific to your change](https://docs.pytest.org/en/6.2.x/usage.html#specifying-tests-selecting-tests), e.g. via:
```sh
pytest tests/<test-file-name> -k <name-of-test>
```
This should finish much more quickly and allow for faster iteration.
If your change is specific to a hardware setting (e.g., it requires CUDA), take a look at [tests/test_gpu_examples.py](https://github.com/huggingface/peft/blob/1c1c7fdaa6e6abaa53939b865dee1eded82ad032/tests/test_gpu_examples.py) and [tests/test_common_gpu.py](https://github.com/huggingface/peft/blob/1c1c7fdaa6e6abaa53939b865dee1eded82ad032/tests/test_common_gpu.py) to see if it makes sense to add tests there. If your change could have an effect on saving and loading models, please run the tests with the `--regression` flag to trigger regression tests.
It can happen that while you're working on your PR, the underlying code base changes due to other changes being merged. If that happens (especially when there is a merge conflict), please update your branch with the latest changes. This can be a merge or a rebase, and we'll squash and merge the PR once it's ready. If possible, avoid force pushes to make reviews easier.
## PR description
When opening a PR, please provide a nice description of the change you're proposing. If it relates to other issues or PRs, please reference them. Providing a good description not only helps the reviewers review your code better and faster, it can also be used later (as a basis) for the commit message which helps with long term maintenance of the project.
If your code makes some non-trivial changes, it may also be a good idea to add comments to the code to explain those changes. For example, if you had to iterate on your implementation multiple times because the most obvious way didn't work, it's a good indication that a code comment is needed.
## Bugfixes
Please give a description of the circumstances that led to the bug. If there is an existing issue, please link to it (e.g., "Resolves #12345").
Ideally when a bugfix is provided, it should be accompanied by a test for the bug. The test should fail with the current code and pass with the bugfix. Add a comment to the test that references the issue or PR. Without a test, it is more difficult to prevent regressions in the future.
## Add a new fine-tuning method
New parameter-efficient fine-tuning methods are developed all the time. If you would like to add a new and promising method to PEFT, please follow these steps.
1. Before you start to implement the new method, please open a [GitHub issue](https://github.com/huggingface/peft/issues) with your proposal. This way, the maintainers can give you some early feedback.
2. Please add a link to the source (usually a paper) of the method. The paper should be in a final state to avoid changing requirements during development (e.g. due to reviewer feedback).
3. When implementing the method, it makes sense to look for existing implementations to use as a guide. Moreover, when you structure your code, please take inspiration from the other PEFT methods. For example, if your method is similar to LoRA, it makes sense to structure your code similarly or even reuse some functions or classes where appropriate (some code duplication is okay, but don't overdo it).
4. Ideally, in addition to the implementation of the new method, there should also be
- [examples](https://github.com/huggingface/peft/tree/main/examples) (notebooks, scripts)
- [documentation](https://github.com/huggingface/peft/tree/main/docs/source)
- [extensive test suite](https://github.com/huggingface/peft/tree/main/tests) that proves the method correctly integrates with PEFT
- [experimental setup](https://github.com/huggingface/peft/tree/main/method_comparison#creating-new-experiments) to run benchmarks
5. Once you have something that seems to be working, don't hesitate to create a draft PR even if it's not in a mergeable state yet. The maintainers are happy to give you feedback and guidance along the way.
## Add other features
It is best if you first open an issue on GitHub with a proposal to add the new feature. This way, you can discuss with the maintainers if it makes sense to add the feature before spending too much time on implementing it.
New features should generally be accompanied by tests and documentation or examples. Without the latter, users will have a hard time discovering your cool new feature.
Changes to the code should be implemented in a backward-compatible way. For example, existing code should continue to work the same way after the feature is merged.
| peft/docs/source/developer_guides/contributing.md |
<!--Copyright 2025 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.
-->
# C3A: Parameter-Efficient Fine-Tuning via Circular Convolution
[C3A](https://huggingface.co/papers/2407.19342) is a parameter-efficient fine-tuning technique that leverages Circular Convolution to achieve high rank adaptation within reasonable resource limits.
Note that you should use a much larger learning rate (LR) for C3A than for other methods; an LR of 1e-1 is a good starting point. You should also use a much smaller weight decay. You can refer to the `method_comparison` folder for more details.
The `block_size` affects both the number of tunable parameters and performance. As a starting point, you can choose a common divisor of $d_1$ and $d_2$ (i.e., a divisor of $\mathrm{gcd}(d_1,d_2)$) near $\frac{\sqrt{d_1\times d_2}}{r}$, where $r$ is the rank for LoRA you would use for this task.
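The heuristic above can be sketched as a small helper. Note that `suggest_block_size` is a hypothetical name for illustration only, not part of PEFT:

```python
from math import gcd

# Hypothetical helper (not part of PEFT) sketching the block-size heuristic:
# among the common divisors of d1 and d2, pick the one closest to
# sqrt(d1 * d2) / r, where r is the LoRA rank you would otherwise use.
def suggest_block_size(d1: int, d2: int, r: int) -> int:
    target = (d1 * d2) ** 0.5 / r
    g = gcd(d1, d2)
    common_divisors = [d for d in range(1, g + 1) if g % d == 0]
    return min(common_divisors, key=lambda d: abs(d - target))
```

For example, for a hypothetical 4096x11008 linear layer and a LoRA-equivalent rank of 16, this picks 256: the target is roughly 420, but $\mathrm{gcd}(4096, 11008) = 256$, so the largest common divisor is the closest available choice.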
C3A currently has the following constraints:
- Only `nn.Linear` layers are supported.
- Quantized layers are not supported.
- The block size should be a common divisor of both the input and output sizes of target layers.
If these constraints don't work for your use case, consider other methods instead.
The abstract from the paper is:
> Low-Rank Adaptation (LoRA) has gained popularity for fine-tuning large foundation models, leveraging low-rank matrices $\mathbf{A}$ and $\mathbf{B}$ to represent weight changes (i.e., $\Delta \mathbf{W} = \mathbf{B} \mathbf{A}$). This method reduces trainable parameters and mitigates heavy memory consumption associated with full delta matrices by sequentially multiplying $\mathbf{A}$ and $\mathbf{B}$ with the activation. Despite its success, the intrinsic low-rank characteristic may limit its performance. Although several variants have been proposed to address this issue, they often overlook the crucial computational and memory efficiency brought by LoRA. In this paper, we propose Circular Convolution Adaptation (C3A), which not only achieves high-rank adaptation with enhanced performance but also excels in both computational power and memory utilization. Extensive experiments demonstrate that C3A consistently outperforms LoRA and its variants across various fine-tuning tasks.
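To make the core idea concrete, here is a minimal pure-Python sketch (an illustration, not the PEFT implementation) of a circular convolution, the operation C3A uses within each block in place of a dense weight multiply:

```python
# Minimal sketch of circular convolution y = w (*) x: each output element is a
# weighted sum of the input with indices wrapping around modulo n. A length-n
# kernel thus parameterizes an n x n circulant matrix with only n values.
def circular_convolve(w, x):
    n = len(x)
    assert len(w) == n, "kernel and input must have the same length"
    return [sum(w[k] * x[(i - k) % n] for k in range(n)) for i in range(n)]
```

For instance, a kernel of `[1, 0, 0]` acts as the identity, and `[0, 1, 0]` cyclically shifts the input by one position.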
## C3AConfig
[[autodoc]] tuners.c3a.config.C3AConfig
## C3AModel
[[autodoc]] tuners.c3a.model.C3AModel
| peft/docs/source/package_reference/c3a.md |
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.
-->
# IA3
[IA3](../conceptual_guides/ia3) multiplies the model's activations (the keys and values in the self-attention and encoder-decoder attention blocks, and the intermediate activation of the position-wise feedforward network) by three learned vectors. This PEFT method introduces an even smaller number of trainable parameters than LoRA which introduces weight matrices instead of vectors. The original model's parameters are kept frozen and only these vectors are updated. As a result, it is faster, cheaper and more efficient to finetune for a new downstream task.
This guide will show you how to train a sequence-to-sequence model with IA3 to *generate a sentiment* given some financial news.
<Tip>
Some familiarity with the general process of training a sequence-to-sequence model would be really helpful and allow you to focus on how to apply IA3. If you're new, we recommend taking a look at the [Translation](https://huggingface.co/docs/transformers/tasks/translation) and [Summarization](https://huggingface.co/docs/transformers/tasks/summarization) guides first from the Transformers documentation. When you're ready, come back and see how easy it is to drop PEFT into your training!
</Tip>
## Dataset
You'll use the sentences_allagree subset of the [financial_phrasebank](https://huggingface.co/datasets/financial_phrasebank) dataset. This subset contains financial news with 100% annotator agreement on the sentiment label. Take a look at the [dataset viewer](https://huggingface.co/datasets/financial_phrasebank/viewer/sentences_allagree) for a better idea of the data and sentences you'll be working with.
Load the dataset with the [`~datasets.load_dataset`] function. This subset of the dataset only contains a train split, so use the [`~datasets.train_test_split`] function to create a train and validation split. Create a new `text_label` column so it is easier to understand what the `label` values `0`, `1`, and `2` mean.
```py
from datasets import load_dataset
ds = load_dataset("financial_phrasebank", "sentences_allagree")
ds = ds["train"].train_test_split(test_size=0.1)
ds["validation"] = ds["test"]
del ds["test"]
classes = ds["train"].features["label"].names
ds = ds.map(
lambda x: {"text_label": [classes[label] for label in x["label"]]},
batched=True,
num_proc=1,
)
ds["train"][0]
{'sentence': 'It will be operated by Nokia , and supported by its Nokia NetAct network and service management system .',
'label': 1,
'text_label': 'neutral'}
```
Load a tokenizer and create a preprocessing function that:
1. tokenizes the inputs, pads and truncates the sequence to the `max_length`
2. applies the same tokenizer to the labels but with a shorter `max_length` that corresponds to the label
3. masks the padding tokens
```py
from transformers import AutoTokenizer
text_column = "sentence"
label_column = "text_label"
max_length = 128
tokenizer = AutoTokenizer.from_pretrained("bigscience/mt0-large")
def preprocess_function(examples):
inputs = examples[text_column]
targets = examples[label_column]
model_inputs = tokenizer(inputs, max_length=max_length, padding="max_length", truncation=True, return_tensors="pt")
labels = tokenizer(targets, max_length=3, padding="max_length", truncation=True, return_tensors="pt")
labels = labels["input_ids"]
labels[labels == tokenizer.pad_token_id] = -100
model_inputs["labels"] = labels
return model_inputs
```
Use the [`~datasets.Dataset.map`] function to apply the preprocessing function to the entire dataset.
```py
processed_ds = ds.map(
preprocess_function,
batched=True,
num_proc=1,
remove_columns=ds["train"].column_names,
load_from_cache_file=False,
desc="Running tokenizer on dataset",
)
```
Create a training and evaluation [`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader), and set `pin_memory=True` to speed up data transfer to the accelerator during training if your dataset samples are on a CPU.
```py
from torch.utils.data import DataLoader
from transformers import default_data_collator
train_ds = processed_ds["train"]
eval_ds = processed_ds["validation"]
batch_size = 8
train_dataloader = DataLoader(
train_ds, shuffle=True, collate_fn=default_data_collator, batch_size=batch_size, pin_memory=True
)
eval_dataloader = DataLoader(eval_ds, collate_fn=default_data_collator, batch_size=batch_size, pin_memory=True)
```
## Model
Now you can load a pretrained model to use as the base model for IA3. This guide uses the [bigscience/mt0-large](https://huggingface.co/bigscience/mt0-large) model, but you can use any sequence-to-sequence model you like.
```py
from transformers import AutoModelForSeq2SeqLM
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/mt0-large")
```
### PEFT configuration and model
All PEFT methods need a configuration that specifies all the parameters for how the PEFT method should be applied. Create an [`IA3Config`] with the task type and set the inference mode to `False`. You can find additional parameters for this configuration in the [API reference](../package_reference/ia3#ia3config).
<Tip>
Call the [`~PeftModel.print_trainable_parameters`] method to compare the number of trainable parameters of [`PeftModel`] versus the number of parameters in the base model!
</Tip>
Once the configuration is set up, pass it to the [`get_peft_model`] function along with the base model to create a trainable [`PeftModel`].
```py
from peft import IA3Config, get_peft_model
peft_config = IA3Config(task_type="SEQ_2_SEQ_LM")
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
"trainable params: 282,624 || all params: 1,229,863,936 || trainable%: 0.022980103060766553"
```
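The default `IA3Config` for `SEQ_2_SEQ_LM` picks target modules automatically for known architectures, but you can also specify them explicitly. A hedged sketch (the module names below are assumptions for a T5-style model such as mt0; adjust them for your base model):

```py
from peft import IA3Config

# Explicitly choose which modules receive the learned (IA)^3 vectors;
# feedforward_modules must be a subset of target_modules.
custom_config = IA3Config(
    task_type="SEQ_2_SEQ_LM",
    target_modules=["k", "v", "wi_1"],  # assumed T5-style module names
    feedforward_modules=["wi_1"],
)
```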
### Training
Set up an optimizer and learning rate scheduler.
```py
import torch
from transformers import get_linear_schedule_with_warmup
lr = 8e-3
num_epochs = 3
optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
lr_scheduler = get_linear_schedule_with_warmup(
optimizer=optimizer,
num_warmup_steps=0,
num_training_steps=(len(train_dataloader) * num_epochs),
)
```
Move the model to the accelerator and create a training loop that reports the loss and perplexity for each epoch.
```py
from tqdm import tqdm
device = torch.accelerator.current_accelerator().type if hasattr(torch, "accelerator") else "cuda"
model = model.to(device)
for epoch in range(num_epochs):
model.train()
total_loss = 0
for step, batch in enumerate(tqdm(train_dataloader)):
batch = {k: v.to(device) for k, v in batch.items()}
outputs = model(**batch)
loss = outputs.loss
total_loss += loss.detach().float()
loss.backward()
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
model.eval()
eval_loss = 0
eval_preds = []
for step, batch in enumerate(tqdm(eval_dataloader)):
batch = {k: v.to(device) for k, v in batch.items()}
with torch.no_grad():
outputs = model(**batch)
loss = outputs.loss
eval_loss += loss.detach().float()
eval_preds.extend(
tokenizer.batch_decode(torch.argmax(outputs.logits, -1).detach().cpu().numpy(), skip_special_tokens=True)
)
eval_epoch_loss = eval_loss / len(eval_dataloader)
eval_ppl = torch.exp(eval_epoch_loss)
train_epoch_loss = total_loss / len(train_dataloader)
train_ppl = torch.exp(train_epoch_loss)
print(f"{epoch=}: {train_ppl=} {train_epoch_loss=} {eval_ppl=} {eval_epoch_loss=}")
```
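The perplexity reported above is simply the exponential of the mean cross-entropy loss over the dataloader; as a minimal standalone sketch (the numbers are made up for illustration):

```py
import math

def perplexity(total_loss, num_batches):
    # ppl = exp(mean loss); lower is better, 1.0 would be a perfect model
    return math.exp(total_loss / num_batches)

print(round(perplexity(13.8629, 10), 2))  # mean loss ~1.386 -> ppl ~4.0
```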
## Share your model
After training is complete, you can upload your model to the Hub with the [`~transformers.PreTrainedModel.push_to_hub`] method. You'll need to log in to your Hugging Face account first and enter your token when prompted.
```py
from huggingface_hub import notebook_login

notebook_login()
account = "<your-hf-account-name>"
peft_model_id = f"{account}/mt0-large-ia3"
model.push_to_hub(peft_model_id)
```
## Inference
To load the model for inference, use the [`~AutoPeftModelForSeq2SeqLM.from_pretrained`] method. Let's also load a sentence of financial news from the dataset to generate a sentiment for.
```py
from peft import AutoPeftModelForSeq2SeqLM
device = torch.accelerator.current_accelerator().type if hasattr(torch, "accelerator") else "cuda"
model = AutoPeftModelForSeq2SeqLM.from_pretrained("<your-hf-account-name>/mt0-large-ia3").to(device)
tokenizer = AutoTokenizer.from_pretrained("bigscience/mt0-large")
i = 15
inputs = tokenizer(ds["validation"][text_column][i], return_tensors="pt")
print(ds["validation"][text_column][i])
"The robust growth was the result of the inclusion of clothing chain Lindex in the Group in December 2007 ."
```
Call the [`~transformers.GenerationMixin.generate`] method to generate the predicted sentiment label.
```py
with torch.no_grad():
inputs = {k: v.to(device) for k, v in inputs.items()}
outputs = model.generate(input_ids=inputs["input_ids"], max_new_tokens=10)
print(tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True))
['positive']
```
import random
import numpy as np
import torch
import wandb
from datasets import load_dataset
from diffusers import DDIMScheduler
from PIL import Image
from torchvision import transforms
from utils.pipeline_controlnet import LightControlNetPipeline
def image_grid(imgs, rows, cols):
assert len(imgs) == rows * cols
w, h = imgs[0].size
grid = Image.new("RGB", size=(cols * w, rows * h))
for i, img in enumerate(imgs):
grid.paste(img, box=(i % cols * w, i // cols * h))
return grid
def log_validation(val_dataset, text_encoder, unet, controlnet, args, accelerator):
pipeline = LightControlNetPipeline.from_pretrained(
args.pretrained_model_name_or_path,
controlnet=accelerator.unwrap_model(controlnet, keep_fp32_wrapper=True),
unet=accelerator.unwrap_model(unet, keep_fp32_wrapper=True).model,
text_encoder=accelerator.unwrap_model(text_encoder, keep_fp32_wrapper=True),
safety_checker=None,
revision=args.revision,
)
pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)
pipeline = pipeline.to(accelerator.device)
pipeline.set_progress_bar_config(disable=True)
generator = torch.Generator(device=accelerator.device).manual_seed(args.seed)
image_logs = []
for idx in range(args.num_validation_images):
data = val_dataset[idx]
validation_prompt = data["text"]
validation_image = data["conditioning_pixel_values"]
image = pipeline(
validation_prompt,
[validation_image],
num_inference_steps=50,
generator=generator,
)[0][0]
image_logs.append(
{
"validation_image": validation_image,
"image": image,
"validation_prompt": validation_prompt,
}
)
for tracker in accelerator.trackers:
formatted_images = []
for log in image_logs:
image = log["image"]
validation_prompt = log["validation_prompt"]
validation_image = log["validation_image"]
formatted_images.append(wandb.Image(validation_image, caption="Controlnet conditioning"))
image = wandb.Image(image, caption=validation_prompt)
formatted_images.append(image)
tracker.log({"validation": formatted_images})
del pipeline
torch.cuda.empty_cache()
def make_dataset(args, tokenizer, accelerator, split="train"):
# Get the datasets: you can either provide your own training and evaluation files (see below)
# or specify a Dataset from the hub (the dataset will be downloaded automatically from the datasets Hub).
# In distributed training, the load_dataset function guarantees that only one local process can concurrently
# download the dataset.
if args.dataset_name is not None:
# Downloading and loading a dataset from the hub.
dataset = load_dataset(
args.dataset_name,
args.dataset_config_name,
cache_dir=args.cache_dir,
)
else:
if args.train_data_dir is not None:
dataset = load_dataset(
args.train_data_dir,
cache_dir=args.cache_dir,
)
# See more about loading custom images at
# https://huggingface.co/docs/datasets/v2.0.0/en/dataset_script
# Preprocessing the datasets.
# We need to tokenize inputs and targets.
column_names = dataset[split].column_names
# Get the column names for input/target.
if args.image_column is None:
image_column = column_names[0]
else:
image_column = args.image_column
if image_column not in column_names:
raise ValueError(
f"`--image_column` value '{args.image_column}' not found in dataset columns. Dataset columns are: {', '.join(column_names)}"
)
if args.caption_column is None:
caption_column = column_names[1]
else:
caption_column = args.caption_column
if caption_column not in column_names:
raise ValueError(
f"`--caption_column` value '{args.caption_column}' not found in dataset columns. Dataset columns are: {', '.join(column_names)}"
)
if args.conditioning_image_column is None:
conditioning_image_column = column_names[2]
else:
conditioning_image_column = args.conditioning_image_column
if conditioning_image_column not in column_names:
raise ValueError(
f"`--conditioning_image_column` value '{args.conditioning_image_column}' not found in dataset columns. Dataset columns are: {', '.join(column_names)}"
)
def tokenize_captions(examples, is_train=True):
captions = []
for caption in examples[caption_column]:
if random.random() < args.proportion_empty_prompts:
captions.append("")
elif isinstance(caption, str):
captions.append(caption)
elif isinstance(caption, (list, np.ndarray)):
# take a random caption if there are multiple
captions.append(random.choice(caption) if is_train else caption[0])
else:
raise ValueError(
f"Caption column `{caption_column}` should contain either strings or lists of strings."
)
inputs = tokenizer(
captions, max_length=tokenizer.model_max_length, padding="max_length", truncation=True, return_tensors="pt"
)
return inputs.input_ids
image_transforms = transforms.Compose(
[
transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR),
transforms.CenterCrop(args.resolution),
transforms.ToTensor(),
transforms.Normalize([0.5], [0.5]),
]
)
conditioning_image_transforms = transforms.Compose(
[
transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR),
transforms.CenterCrop(args.resolution),
transforms.ToTensor(),
]
)
def preprocess_train(examples):
images = [image.convert("RGB") for image in examples[image_column]]
images = [image_transforms(image) for image in images]
conditioning_images = [image.convert("RGB") for image in examples[conditioning_image_column]]
conditioning_images = [conditioning_image_transforms(image) for image in conditioning_images]
examples["pixel_values"] = images
examples["conditioning_pixel_values"] = conditioning_images
examples["input_ids"] = tokenize_captions(examples)
return examples
with accelerator.main_process_first():
if args.max_train_samples is not None:
dataset[split] = dataset[split].shuffle(seed=args.seed).select(range(args.max_train_samples))
# Set the training transforms
split_dataset = dataset[split].with_transform(preprocess_train)
return split_dataset
def collate_fn(examples):
pixel_values = torch.stack([example["pixel_values"] for example in examples])
pixel_values = pixel_values.to(memory_format=torch.contiguous_format).float()
conditioning_pixel_values = torch.stack([example["conditioning_pixel_values"] for example in examples])
conditioning_pixel_values = conditioning_pixel_values.to(memory_format=torch.contiguous_format).float()
input_ids = torch.stack([example["input_ids"] for example in examples])
return {
"pixel_values": pixel_values,
"conditioning_pixel_values": conditioning_pixel_values,
"input_ids": input_ids,
}
<jupyter_start><jupyter_code>from transformers import AutoModelForSeq2SeqLM
from peft import get_peft_config, get_peft_model, get_peft_model_state_dict, PrefixTuningConfig, TaskType
import torch
from datasets import load_dataset
import os
os.environ["TOKENIZERS_PARALLELISM"] = "false"
from transformers import AutoTokenizer
from torch.utils.data import DataLoader
from transformers import default_data_collator, get_linear_schedule_with_warmup
from tqdm import tqdm
device = torch.accelerator.current_accelerator().type if hasattr(torch, "accelerator") else "cuda"
model_name_or_path = "t5-large"
tokenizer_name_or_path = "t5-large"
checkpoint_name = "financial_sentiment_analysis_prefix_tuning_v1.pt"
text_column = "sentence"
label_column = "text_label"
max_length = 128
lr = 1e-2
num_epochs = 5
batch_size = 8
# creating model
peft_config = PrefixTuningConfig(task_type=TaskType.SEQ_2_SEQ_LM, inference_mode=False, num_virtual_tokens=20)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name_or_path)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
model
# loading dataset
dataset = load_dataset("financial_phrasebank", "sentences_allagree")
dataset = dataset["train"].train_test_split(test_size=0.1)
dataset["validation"] = dataset["test"]
del dataset["test"]
classes = dataset["train"].features["label"].names
dataset = dataset.map(
lambda x: {"text_label": [classes[label] for label in x["label"]]},
batched=True,
num_proc=1,
)
dataset["train"][0]
# data preprocessing
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
def preprocess_function(examples):
inputs = examples[text_column]
targets = examples[label_column]
model_inputs = tokenizer(inputs, max_length=max_length, padding="max_length", truncation=True, return_tensors="pt")
labels = tokenizer(targets, max_length=2, padding="max_length", truncation=True, return_tensors="pt")
labels = labels["input_ids"]
labels[labels == tokenizer.pad_token_id] = -100
model_inputs["labels"] = labels
return model_inputs
processed_datasets = dataset.map(
preprocess_function,
batched=True,
num_proc=1,
remove_columns=dataset["train"].column_names,
load_from_cache_file=False,
desc="Running tokenizer on dataset",
)
train_dataset = processed_datasets["train"]
eval_dataset = processed_datasets["validation"]
train_dataloader = DataLoader(
train_dataset, shuffle=True, collate_fn=default_data_collator, batch_size=batch_size, pin_memory=True
)
eval_dataloader = DataLoader(eval_dataset, collate_fn=default_data_collator, batch_size=batch_size, pin_memory=True)
# optimizer and lr scheduler
optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
lr_scheduler = get_linear_schedule_with_warmup(
optimizer=optimizer,
num_warmup_steps=0,
num_training_steps=(len(train_dataloader) * num_epochs),
)
# training and evaluation
model = model.to(device)
for epoch in range(num_epochs):
model.train()
total_loss = 0
for step, batch in enumerate(tqdm(train_dataloader)):
batch = {k: v.to(device) for k, v in batch.items()}
outputs = model(**batch)
loss = outputs.loss
total_loss += loss.detach().float()
loss.backward()
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
model.eval()
eval_loss = 0
eval_preds = []
for step, batch in enumerate(tqdm(eval_dataloader)):
batch = {k: v.to(device) for k, v in batch.items()}
with torch.no_grad():
outputs = model(**batch)
loss = outputs.loss
eval_loss += loss.detach().float()
eval_preds.extend(
tokenizer.batch_decode(torch.argmax(outputs.logits, -1).detach().cpu().numpy(), skip_special_tokens=True)
)
eval_epoch_loss = eval_loss / len(eval_dataloader)
eval_ppl = torch.exp(eval_epoch_loss)
train_epoch_loss = total_loss / len(train_dataloader)
train_ppl = torch.exp(train_epoch_loss)
print(f"{epoch=}: {train_ppl=} {train_epoch_loss=} {eval_ppl=} {eval_epoch_loss=}")
# print accuracy
correct = 0
total = 0
for pred, true in zip(eval_preds, dataset["validation"]["text_label"]):
if pred.strip() == true.strip():
correct += 1
total += 1
accuracy = correct / total * 100
print(f"{accuracy=} % on the evaluation dataset")
print(f"{eval_preds[:10]=}")
print(f"{dataset['validation']['text_label'][:10]=}")
# saving model
peft_model_id = f"{model_name_or_path}_{peft_config.peft_type}_{peft_config.task_type}"
model.save_pretrained(peft_model_id)
ckpt = f"{peft_model_id}/adapter_model.safetensors"
!du -h $ckpt
from peft import PeftModel, PeftConfig
peft_model_id = f"{model_name_or_path}_{peft_config.peft_type}_{peft_config.task_type}"
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(model, peft_model_id)
model.eval()
i = 107
inputs = tokenizer(dataset["validation"][text_column][i], return_tensors="pt")
print(dataset["validation"][text_column][i])
print(inputs)
with torch.no_grad():
outputs = model.generate(input_ids=inputs["input_ids"], max_new_tokens=10)
print(outputs)
print(tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True))<jupyter_output>Acando AB ( ACANB SS ) fell 8.9 percent to 13.35 kronor , the lowest close since Dec. 11 .
{'input_ids': tensor([[ 4292, 232, 32, 3, 5359, 41, 3, 22029, 14972, 3,
4256, 3, 61, 4728, 4848, 1298, 1093, 12, 8808, 2469,
3, 22318, 29, 127, 3, 6, 8, 7402, 885, 437,
4451, 5, 850, 3, 5, 1]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])}
tensor([[ 0, 2841, 1]])
['negative']
# Copyright 2024-present the HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import torch
from datasets import load_dataset
from torch.utils.data import DataLoader
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments
from utils import DataCollator, TokenizerMetaMath
from peft import EvaConfig, LoraConfig, get_peft_model, initialize_lora_eva_weights
DEVICE = torch.accelerator.current_accelerator().type if hasattr(torch, "accelerator") else "cuda"
# config
model_name = "meta-llama/Llama-3.1-8B"
max_seq_len = 512
rank = 16
alpha = 1
rho = 2.0
target_modules = ["q_proj", "k_proj", "v_proj", "o_proj"]
svd_batch_size = 4 # can be different from the batch size used in finetuning
batch_size = 4
learning_rate = 5e-4
gradient_accumulation_steps = 8
num_epochs = 1
output_dir = "outputs"
bf16 = True
# load model and tokenizer
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# load dataset
dataset = load_dataset("meta-math/MetaMathQA")
dataset = dataset.map(
TokenizerMetaMath(model_name),
batched=True,
remove_columns=dataset["train"].column_names,
)
dataset.set_format(type="torch")
# data collator
data_collator = DataCollator(tokenizer.eos_token_id, max_length=max_seq_len)
# dataloader
dataloader = DataLoader(
dataset["train"],
batch_size=svd_batch_size,
collate_fn=data_collator,
)
# setup peft config
eva_config = EvaConfig(rho=rho)
peft_config = LoraConfig(
r=rank, lora_alpha=alpha, target_modules=target_modules, init_lora_weights="eva", eva_config=eva_config
)
# move model to accelerator
model = model.to(DEVICE)
# to optimize memory usage during eva initialization, set low_cpu_mem_usage=True
peft_model = get_peft_model(model, peft_config, low_cpu_mem_usage=True)
initialize_lora_eva_weights(peft_model, dataloader)
# setup training arguments
training_args = TrainingArguments(
per_device_train_batch_size=batch_size,
learning_rate=learning_rate,
gradient_accumulation_steps=gradient_accumulation_steps,
num_train_epochs=num_epochs,
output_dir=output_dir,
remove_unused_columns=False,
bf16=bf16,
)
# continue with standard finetuning
trainer = Trainer(
model=peft_model,
args=training_args,
train_dataset=dataset["train"],
data_collator=data_collator,
)
trainer.train()
# adapted from [peft's boft_dreambooth](https://github.com/huggingface/peft/tree/main/examples/boft_dreambooth)
from pathlib import Path
import torch
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms
class DreamBoothDataset(Dataset):
"""
A dataset to prepare the instance and class images with the prompts for fine-tuning the model.
    It pre-processes the images and tokenizes the prompts.
"""
def __init__(
self,
instance_data_root,
instance_prompt,
tokenizer,
class_data_root=None,
class_prompt=None,
size=512,
center_crop=False,
):
self.size = size
self.center_crop = center_crop
self.tokenizer = tokenizer
self.instance_data_root = Path(instance_data_root)
if not self.instance_data_root.exists():
raise ValueError("Instance images root doesn't exists.")
self.instance_images_path = list(Path(instance_data_root).iterdir())
self.num_instance_images = len(self.instance_images_path)
self.instance_prompt = instance_prompt
self._length = self.num_instance_images
if class_data_root is not None:
self.class_data_root = Path(class_data_root)
self.class_data_root.mkdir(parents=True, exist_ok=True)
self.class_images_path = list(self.class_data_root.iterdir())
self.num_class_images = len(self.class_images_path)
self._length = max(self.num_class_images, self.num_instance_images)
self.class_prompt = class_prompt
else:
self.class_data_root = None
self.image_transforms = transforms.Compose(
[
transforms.Resize(size, interpolation=transforms.InterpolationMode.BILINEAR),
transforms.CenterCrop(size) if center_crop else transforms.RandomCrop(size),
transforms.ToTensor(),
transforms.Normalize([0.5], [0.5]),
]
)
def __len__(self):
return self._length
def __getitem__(self, index):
example = {}
instance_image = Image.open(self.instance_images_path[index % self.num_instance_images])
if not instance_image.mode == "RGB":
instance_image = instance_image.convert("RGB")
example["instance_images"] = self.image_transforms(instance_image)
example["instance_prompt_ids"] = self.tokenizer(
self.instance_prompt,
truncation=True,
padding="max_length",
max_length=self.tokenizer.model_max_length,
return_tensors="pt",
).input_ids
if self.class_data_root:
class_image = Image.open(self.class_images_path[index % self.num_class_images])
if not class_image.mode == "RGB":
class_image = class_image.convert("RGB")
example["class_images"] = self.image_transforms(class_image)
example["class_prompt_ids"] = self.tokenizer(
self.class_prompt,
truncation=True,
padding="max_length",
max_length=self.tokenizer.model_max_length,
return_tensors="pt",
).input_ids
return example
def collate_fn(examples, with_prior_preservation=False):
input_ids = [example["instance_prompt_ids"] for example in examples]
pixel_values = [example["instance_images"] for example in examples]
# Concat class and instance examples for prior preservation.
# We do this to avoid doing two forward passes.
if with_prior_preservation:
input_ids += [example["class_prompt_ids"] for example in examples]
pixel_values += [example["class_images"] for example in examples]
pixel_values = torch.stack(pixel_values)
pixel_values = pixel_values.to(memory_format=torch.contiguous_format).float()
input_ids = torch.cat(input_ids, dim=0)
batch = {
"input_ids": input_ids,
"pixel_values": pixel_values,
}
return batch
class PromptDataset(Dataset):
"A simple dataset to prepare the prompts to generate class images on multiple GPUs."
def __init__(self, prompt, num_samples):
self.prompt = prompt
self.num_samples = num_samples
def __len__(self):
return self.num_samples
def __getitem__(self, index):
example = {}
example["prompt"] = self.prompt
example["index"] = index
return example
# Copyright 2023-present the HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import copy
import logging
import math
import os
import random
import re
from pathlib import Path
import datasets
import torch
import transformers
from accelerate import Accelerator, DistributedType
from accelerate.logging import get_logger
from accelerate.utils import set_seed
from datasets import load_dataset
from huggingface_hub import HfApi
from torch.utils.data import DataLoader
from tqdm.auto import tqdm
from transformers import (
CONFIG_MAPPING,
MODEL_MAPPING,
AutoConfig,
AutoModelForCausalLM,
AutoTokenizer,
BitsAndBytesConfig,
SchedulerType,
default_data_collator,
get_scheduler,
)
from transformers.utils import send_example_telemetry
from transformers.utils.versions import require_version
from peft import PeftModel
# Will error if the minimal version of Transformers is not installed. Remove at your own risks.
# check_min_version("4.32.0.dev0")
logger = get_logger(__name__)
require_version("datasets>=1.8.0", "To fix: pip install -r examples/pytorch/language-modeling/requirements.txt")
MODEL_CONFIG_CLASSES = list(MODEL_MAPPING.keys())
MODEL_TYPES = tuple(conf.model_type for conf in MODEL_CONFIG_CLASSES)
def parse_args():
parser = argparse.ArgumentParser(description="Finetune a transformers model on a causal language modeling task")
parser.add_argument(
"--dataset_name",
type=str,
default=None,
help="The name of the dataset to use (via the datasets library).",
)
parser.add_argument(
"--dataset_config_name",
type=str,
default=None,
help="The configuration name of the dataset to use (via the datasets library).",
)
parser.add_argument(
"--train_file", type=str, default=None, help="A csv, txt or a json file containing the training data."
)
parser.add_argument(
"--validation_file", type=str, default=None, help="A csv, txt or a json file containing the validation data."
)
parser.add_argument(
"--validation_split_percentage",
default=5,
help="The percentage of the train set used as validation set in case there's no validation split",
)
parser.add_argument(
"--model_name_or_path",
type=str,
help="Path to pretrained model or model identifier from huggingface.co/models.",
required=False,
)
parser.add_argument(
"--config_name",
type=str,
default=None,
help="Pretrained config name or path if not the same as model_name",
)
parser.add_argument(
"--tokenizer_name",
type=str,
default=None,
help="Pretrained tokenizer name or path if not the same as model_name",
)
parser.add_argument(
"--use_slow_tokenizer",
action="store_true",
help="If passed, will use a slow tokenizer (not backed by the ๐ค Tokenizers library).",
)
parser.add_argument(
"--per_device_train_batch_size",
type=int,
default=8,
help="Batch size (per device) for the training dataloader.",
)
parser.add_argument(
"--per_device_eval_batch_size",
type=int,
default=8,
help="Batch size (per device) for the evaluation dataloader.",
)
parser.add_argument(
"--learning_rate",
type=float,
default=5e-5,
help="Initial learning rate (after the potential warmup period) to use.",
)
parser.add_argument("--weight_decay", type=float, default=0.0, help="Weight decay to use.")
parser.add_argument("--num_train_epochs", type=int, default=3, help="Total number of training epochs to perform.")
parser.add_argument(
"--max_train_steps",
type=int,
default=None,
help="Total number of training steps to perform. If provided, overrides num_train_epochs.",
)
parser.add_argument(
"--gradient_accumulation_steps",
type=int,
default=1,
help="Number of updates steps to accumulate before performing a backward/update pass.",
)
parser.add_argument(
"--lr_scheduler_type",
type=SchedulerType,
default="linear",
help="The scheduler type to use.",
choices=["linear", "cosine", "cosine_with_restarts", "polynomial", "constant", "constant_with_warmup"],
)
parser.add_argument(
"--num_warmup_steps", type=int, default=0, help="Number of steps for the warmup in the lr scheduler."
)
parser.add_argument("--output_dir", type=str, default=None, help="Where to store the final model.")
parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.")
parser.add_argument(
"--model_type",
type=str,
default=None,
help="Model type to use if training from scratch.",
choices=MODEL_TYPES,
)
parser.add_argument(
"--ignore_pad_token_for_loss",
type=bool,
default=True,
help="Whether to ignore the tokens corresponding to padded labels in the loss computation or not.",
)
parser.add_argument(
"--max_source_length",
type=int,
default=128,
help=(
"The maximum total input sequence length after "
"tokenization.Sequences longer than this will be truncated, sequences shorter will be padded."
),
)
parser.add_argument(
"--max_target_length",
type=int,
default=128,
help=(
"The maximum total sequence length for target text after "
"tokenization. Sequences longer than this will be truncated, sequences shorter will be padded."
"during ``evaluate`` and ``predict``."
),
)
parser.add_argument(
"--pad_to_max_length",
action="store_true",
help="If passed, pad all samples to `max_length`. Otherwise, dynamic padding is used.",
)
parser.add_argument(
"--preprocessing_num_workers",
type=int,
default=None,
help="The number of processes to use for the preprocessing.",
)
parser.add_argument(
"--overwrite_cache", action="store_true", help="Overwrite the cached training and evaluation sets"
)
parser.add_argument(
"--no_keep_linebreaks", action="store_true", help="Do not keep line breaks when using TXT files."
)
parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.")
parser.add_argument(
"--hub_model_id", type=str, help="The name of the repository to keep in sync with the local `output_dir`."
)
parser.add_argument("--hub_token", type=str, help="The token to use to push to the Model Hub.")
parser.add_argument(
"--trust_remote_code",
type=bool,
default=False,
help=(
"Whether or not to allow for custom models defined on the Hub in their own modeling files. This option"
"should only be set to `True` for repositories you trust and in which you have read the code, as it will"
"execute code present on the Hub on your local machine."
),
)
parser.add_argument(
"--checkpointing_steps",
type=str,
default=None,
help="Whether the various states should be saved at the end of every n steps, or 'epoch' for each epoch.",
)
parser.add_argument(
"--resume_from_checkpoint",
type=str,
default=None,
help="If the training should continue from a checkpoint folder.",
)
parser.add_argument(
"--with_tracking",
action="store_true",
help="Whether to enable experiment trackers for logging.",
)
parser.add_argument(
"--report_to",
type=str,
default="tensorboard",
help=(
            'The integration to report the results and logs to. Supported platforms are `"tensorboard"`,'
            ' `"wandb"`, `"comet_ml"` and `"clearml"`. Use `"all"` to report to all integrations. '
            "Only applicable when `--with_tracking` is passed."
),
)
parser.add_argument(
"--low_cpu_mem_usage",
action="store_true",
help=(
"It is an option to create the model as an empty shell, then only materialize its parameters when the pretrained weights are loaded."
"If passed, LLM loading time and RAM consumption will be benefited."
),
)
##########################
# Generation Config #
##########################
parser.add_argument(
"--temperature",
type=float,
default=0.8,
help="temperature of 1.0 has no effect, lower tend toward greedy sampling",
)
parser.add_argument("--k", type=int, default=40, help="Choose k candidate words")
parser.add_argument("--p", type=float, default=0.95, help="The sum of probability of candidate words is 0.9 ")
##########################
# Exp Args #
##########################
parser.add_argument(
"--adapter_name_or_path",
type=str,
default=None,
help=(
"The LoRA adapter checkpoint. Set None if you want to fine-tune from LoftQ."
"Specify a path if you want to evaluate."
),
)
args = parser.parse_args()
# Sanity checks
if args.dataset_name is None and args.train_file is None and args.validation_file is None:
raise ValueError("Need either a dataset name or a training/validation file.")
else:
if args.train_file is not None:
extension = args.train_file.split(".")[-1]
assert extension in ["csv", "json", "txt"], "`train_file` should be a csv, json or txt file."
if args.validation_file is not None:
extension = args.validation_file.split(".")[-1]
assert extension in ["csv", "json", "txt"], "`validation_file` should be a csv, json or txt file."
if args.push_to_hub:
assert args.output_dir is not None, "Need an `output_dir` to create a repo when `--push_to_hub` is passed."
return args
def main():
args = parse_args()
# Sending telemetry. Tracking the example usage helps us better allocate resources to maintain them. The
# information sent is the one passed as arguments along with your Python/PyTorch versions.
send_example_telemetry("run_clm_no_trainer", args)
# Initialize the accelerator. We will let the accelerator handle device placement for us in this example.
# If we're using tracking, we also need to initialize it here and it will by default pick up all supported trackers
# in the environment
accelerator_log_kwargs = {}
if args.with_tracking:
accelerator_log_kwargs["log_with"] = args.report_to
accelerator_log_kwargs["project_dir"] = args.output_dir
accelerator = Accelerator(gradient_accumulation_steps=args.gradient_accumulation_steps, **accelerator_log_kwargs)
# Make one log on every process with the configuration for debugging.
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%m/%d/%Y %H:%M:%S",
level=logging.INFO,
)
logger.info(accelerator.state, main_process_only=False)
if accelerator.is_local_main_process:
datasets.utils.logging.set_verbosity_warning()
transformers.utils.logging.set_verbosity_info()
else:
datasets.utils.logging.set_verbosity_error()
transformers.utils.logging.set_verbosity_error()
# If passed along, set the training seed now.
if args.seed is not None:
set_seed(args.seed)
# Handle the repository creation
if accelerator.is_main_process:
if args.push_to_hub:
api = HfApi(token=args.hub_token)
# Create repo (repo_name from args or inferred)
repo_name = args.hub_model_id
if repo_name is None:
repo_name = Path(args.output_dir).absolute().name
repo_id = api.create_repo(repo_name, exist_ok=True).repo_id
with open(os.path.join(args.output_dir, ".gitignore"), "w+") as gitignore:
if "step_*" not in gitignore:
gitignore.write("step_*\n")
if "epoch_*" not in gitignore:
gitignore.write("epoch_*\n")
elif args.output_dir is not None:
os.makedirs(args.output_dir, exist_ok=True)
accelerator.wait_for_everyone()
# Get the datasets: you can either provide your own CSV/JSON/TXT training and evaluation files (see below)
# or just provide the name of one of the public datasets available on the hub at https://huggingface.co/datasets/
# (the dataset will be downloaded automatically from the datasets Hub).
#
# For CSV/JSON files, this script will use the column called 'text' or the first column if no column called
# 'text' is found. You can easily tweak this behavior (see below).
#
# In distributed training, the load_dataset function guarantee that only one local process can concurrently
# download the dataset.
if args.dataset_name is not None:
# Downloading and loading a dataset from the hub.
raw_datasets = load_dataset(args.dataset_name, args.dataset_config_name)
if "validation" not in raw_datasets.keys():
raw_datasets["validation"] = load_dataset(
args.dataset_name,
args.dataset_config_name,
split=f"train[:{args.validation_split_percentage}%]",
)
raw_datasets["train"] = load_dataset(
args.dataset_name,
args.dataset_config_name,
split=f"train[{args.validation_split_percentage}%:]",
)
else:
data_files = {}
dataset_args = {}
if args.train_file is not None:
data_files["train"] = args.train_file
if args.validation_file is not None:
data_files["validation"] = args.validation_file
extension = args.train_file.split(".")[-1]
if extension == "txt":
extension = "text"
dataset_args["keep_linebreaks"] = not args.no_keep_linebreaks
raw_datasets = load_dataset(extension, data_files=data_files, **dataset_args)
# If no validation data is there, validation_split_percentage will be used to divide the dataset.
if "validation" not in raw_datasets.keys():
raw_datasets["validation"] = load_dataset(
extension,
data_files=data_files,
split=f"train[:{args.validation_split_percentage}%]",
**dataset_args,
)
raw_datasets["train"] = load_dataset(
extension,
data_files=data_files,
split=f"train[{args.validation_split_percentage}%:]",
**dataset_args,
)
# See more about loading any type of standard or custom dataset (from files, python dict, pandas DataFrame, etc) at
# https://huggingface.co/docs/datasets/loading_datasets.html.
# Load pretrained model and tokenizer
#
# In distributed training, the .from_pretrained methods guarantee that only one local process can concurrently
# download model & vocab.
if args.config_name:
config = AutoConfig.from_pretrained(
args.config_name,
trust_remote_code=args.trust_remote_code,
)
elif args.model_name_or_path:
config = AutoConfig.from_pretrained(
args.model_name_or_path,
trust_remote_code=args.trust_remote_code,
)
else:
config = CONFIG_MAPPING[args.model_type]()
logger.warning("You are instantiating a new config instance from scratch.")
if args.tokenizer_name:
tokenizer = AutoTokenizer.from_pretrained(
args.tokenizer_name, use_fast=not args.use_slow_tokenizer, trust_remote_code=args.trust_remote_code
)
elif args.model_name_or_path:
tokenizer = AutoTokenizer.from_pretrained(
args.model_name_or_path,
use_fast=not args.use_slow_tokenizer,
trust_remote_code=args.trust_remote_code,
)
else:
raise ValueError(
"You are instantiating a new tokenizer from scratch. This is not supported by this script."
"You can do it from another script, save it, and load it from here, using --tokenizer_name."
)
##########################
# Tokenizer #
##########################
    tokenizer.pad_token_id = 0  # use the <unk> token id for padding; it must be different from the eos token
tokenizer.padding_side = "left" # Allow batched inference
tokenizer.truncation_side = "left"
if args.model_name_or_path:
model = AutoModelForCausalLM.from_pretrained(
args.model_name_or_path,
from_tf=bool(".ckpt" in args.model_name_or_path),
config=config,
low_cpu_mem_usage=True,
quantization_config=BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=False,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=config.torch_dtype,
),
)
else:
logger.info("Training new model from scratch")
model = AutoModelForCausalLM.from_config(config, trust_remote_code=args.trust_remote_code)
##########################
# Peft Model #
##########################
if args.adapter_name_or_path is None:
model = PeftModel.from_pretrained(model, args.model_name_or_path, subfolder="loftq_init", is_trainable=True)
else:
model = PeftModel.from_pretrained(model, args.adapter_name_or_path, is_trainable=True)
model.print_trainable_parameters()
# We resize the embeddings only when necessary to avoid index errors. If you are creating a model from scratch
# on a small vocab and want a smaller embedding size, remove this test.
embedding_size = model.get_input_embeddings().weight.shape[0]
if len(tokenizer) > embedding_size:
model.resize_token_embeddings(len(tokenizer))
# Preprocessing the datasets.
# First we tokenize all the texts.
##########################
# GSM8K dataset #
##########################
# Preprocessing the datasets.
# First we tokenize all the texts.
column_names = raw_datasets["train"].column_names
# Get the column names for source/target.
source_column, target_column = "question", "answer"
# Temporarily set max_target_length for training.
padding = "max_length" if args.pad_to_max_length else False
task_prompt = "\nAnswer the above question. First think step by step and then answer the final number.\n"
def prompt_process(sent_1, sent_2, prompt_1="", prompt_2="", prompt_3=""):
sent_2 = sent_2.replace("####", "The final answer is")
return prompt_1 + sent_1 + prompt_2 + sent_2 + prompt_3
def preprocess_function_train(examples):
sources = examples[source_column]
targets = examples[target_column]
inputs = [prompt_process(source, target, prompt_2=task_prompt) for (source, target) in zip(sources, targets)]
model_inputs = tokenizer(
inputs,
max_length=args.max_source_length + args.max_target_length,
padding=padding,
truncation=True,
return_tensors="pt",
)
labels = copy.deepcopy(model_inputs)
# If we are padding here, replace all tokenizer.pad_token_id in the labels by -100 when we want to ignore
# padding in the loss.
if padding == "max_length" and args.ignore_pad_token_for_loss:
# get the length of the target tokens. -1 to kick out the <BOS> token
target_tokens = tokenizer(targets, padding=False)
target_len = [len(label) - 1 for label in target_tokens["input_ids"]]
# don't calculate the loss from source and padding (left padding)
for i in range(len(labels["input_ids"])):
labels["input_ids"][i, : -target_len[i]] = -100
model_inputs["labels"] = labels["input_ids"]
return model_inputs
def preprocess_function_test(examples):
sources = examples[source_column]
labels = examples[target_column]
inputs = [source + task_prompt for source in sources]
model_inputs = tokenizer(inputs, max_length=args.max_source_length, padding=padding, truncation=True)
labels = tokenizer(labels, max_length=args.max_target_length, padding=padding, truncation=True)
model_inputs["labels"] = labels["input_ids"]
return model_inputs
with accelerator.main_process_first():
train_dataset = raw_datasets["train"].map(
preprocess_function_train,
batched=True,
num_proc=args.preprocessing_num_workers,
remove_columns=column_names,
load_from_cache_file=not args.overwrite_cache,
desc="Running tokenizer on training dataset",
)
eval_dataset = raw_datasets["test"].map(
preprocess_function_test,
batched=True,
num_proc=args.preprocessing_num_workers,
remove_columns=column_names,
load_from_cache_file=not args.overwrite_cache,
desc="Running tokenizer on test dataset",
)
# Log a few random samples from the set:
for index in random.sample(range(len(train_dataset)), 2):
logger.info(f"Sample {index} of the training set: {train_dataset[index]}.")
for index in random.sample(range(len(eval_dataset)), 2):
logger.info(f"Sample {index} of the validation set: {eval_dataset[index]}.")
# DataLoaders creation:
train_dataloader = DataLoader(
train_dataset, shuffle=True, collate_fn=default_data_collator, batch_size=args.per_device_train_batch_size
)
eval_dataloader = DataLoader(
eval_dataset, collate_fn=default_data_collator, batch_size=args.per_device_eval_batch_size
)
# Optimizer
# Split weights in two groups, one with weight decay and the other not.
no_decay = ["bias", "layer_norm.weight"]
optimizer_grouped_parameters = [
{
"params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay) and "lora" in n],
"weight_decay": args.weight_decay,
},
{
"params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)],
"weight_decay": 0.0,
},
]
optimizer = torch.optim.AdamW(optimizer_grouped_parameters, lr=args.learning_rate)
# Scheduler and math around the number of training steps.
overrode_max_train_steps = False
num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
if args.max_train_steps is None:
args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
overrode_max_train_steps = True
lr_scheduler = get_scheduler(
name=args.lr_scheduler_type,
optimizer=optimizer,
num_warmup_steps=args.num_warmup_steps * args.gradient_accumulation_steps,
num_training_steps=args.max_train_steps * args.gradient_accumulation_steps,
)
# Prepare everything with our `accelerator`.
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler
)
# On TPU, the tie weights in our model have been disconnected, so we need to restore the ties.
if accelerator.distributed_type == DistributedType.TPU:
model.tie_weights()
# We need to recalculate our total training steps as the size of the training dataloader may have changed.
num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
if overrode_max_train_steps:
args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
# Afterwards we recalculate our number of training epochs
args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch)
# Figure out how many steps we should save the Accelerator states
checkpointing_steps = args.checkpointing_steps
if checkpointing_steps is not None and checkpointing_steps.isdigit():
checkpointing_steps = int(checkpointing_steps)
# We need to initialize the trackers we use, and also store our configuration.
# The trackers initializes automatically on the main process.
if args.with_tracking:
experiment_config = vars(args)
# TensorBoard cannot log Enums, need the raw value
experiment_config["lr_scheduler_type"] = experiment_config["lr_scheduler_type"].value
accelerator.init_trackers("clm_no_trainer", experiment_config)
# Train!
total_batch_size = args.per_device_train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps
logger.info("***** Running training *****")
logger.info(f" Num examples = {len(train_dataset)}")
logger.info(f" Num Epochs = {args.num_train_epochs}")
logger.info(f" Instantaneous batch size per device = {args.per_device_train_batch_size}")
logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}")
logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}")
logger.info(f" Total optimization steps = {args.max_train_steps}")
# Only show the progress bar once on each machine.
progress_bar = tqdm(range(args.max_train_steps), disable=not accelerator.is_local_main_process)
completed_steps = 0
starting_epoch = 0
# Potentially load in the weights and states from a previous save
if args.resume_from_checkpoint:
        if args.resume_from_checkpoint is not None and args.resume_from_checkpoint != "":
checkpoint_path = args.resume_from_checkpoint
path = os.path.basename(args.resume_from_checkpoint)
else:
# Get the most recent checkpoint
dirs = [f.name for f in os.scandir(os.getcwd()) if f.is_dir()]
dirs.sort(key=os.path.getctime)
path = dirs[-1] # Sorts folders by date modified, most recent checkpoint is the last
checkpoint_path = path
path = os.path.basename(checkpoint_path)
accelerator.print(f"Resumed from checkpoint: {checkpoint_path}")
        accelerator.load_state(checkpoint_path)
# Extract `epoch_{i}` or `step_{i}`
training_difference = os.path.splitext(path)[0]
if "epoch" in training_difference:
starting_epoch = int(training_difference.replace("epoch_", "")) + 1
resume_step = None
completed_steps = starting_epoch * num_update_steps_per_epoch
else:
# need to multiply `gradient_accumulation_steps` to reflect real steps
resume_step = int(training_difference.replace("step_", "")) * args.gradient_accumulation_steps
starting_epoch = resume_step // len(train_dataloader)
resume_step -= starting_epoch * len(train_dataloader)
completed_steps = resume_step // args.gradient_accumulation_steps
# update the progress_bar if load from checkpoint
progress_bar.update(completed_steps)
for epoch in range(starting_epoch, args.num_train_epochs):
model.train()
if args.with_tracking:
total_loss = 0
if args.resume_from_checkpoint and epoch == starting_epoch and resume_step is not None:
# We skip the first `n` batches in the dataloader when resuming from a checkpoint
active_dataloader = accelerator.skip_first_batches(train_dataloader, resume_step)
else:
active_dataloader = train_dataloader
for step, batch in enumerate(active_dataloader):
with accelerator.accumulate(model):
outputs = model(**batch)
loss = outputs.loss
# We keep track of the loss at each epoch
if args.with_tracking:
total_loss += loss.detach().float()
accelerator.backward(loss)
                if completed_steps % 50 == 0:
accelerator.print(f"Epoch: {epoch} | Step: {completed_steps} | Loss: {loss}")
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
# Checks if the accelerator has performed an optimization step behind the scenes
if accelerator.sync_gradients:
progress_bar.update(1)
completed_steps += 1
if isinstance(checkpointing_steps, int):
if completed_steps % checkpointing_steps == 0:
output_dir = f"step_{completed_steps}"
if args.output_dir is not None:
output_dir = os.path.join(args.output_dir, output_dir)
accelerator.save_state(output_dir)
if completed_steps >= args.max_train_steps:
break
model.eval()
gen_kwargs = {
"max_new_tokens": args.max_target_length,
"temperature": args.temperature,
"top_k": args.k,
"top_p": args.p,
"do_sample": True,
}
ans_pred_list = []
ans_gold_list = []
for step, batch in enumerate(eval_dataloader):
with torch.no_grad():
gen_kwargs["input_ids"] = batch["input_ids"]
gen_kwargs["attention_mask"] = batch["attention_mask"]
generated_tokens = accelerator.unwrap_model(model).generate(**gen_kwargs)
pred_tokens = generated_tokens[:, args.max_source_length :]
pred_tokens = accelerator.pad_across_processes(pred_tokens, dim=1, pad_index=tokenizer.pad_token_id)
gold_tokens = batch["labels"]
if not args.pad_to_max_length:
# If we did not pad to max length, we need to pad the labels too
gold_tokens = accelerator.pad_across_processes(
batch["labels"], dim=1, pad_index=tokenizer.pad_token_id
)
pred_tokens, gold_tokens = accelerator.gather_for_metrics((pred_tokens, gold_tokens))
pred_tokens, gold_tokens = pred_tokens.cpu().numpy(), gold_tokens.cpu().numpy()
if isinstance(pred_tokens, tuple):
pred_tokens = pred_tokens[0]
decoded_pred = tokenizer.batch_decode(pred_tokens, skip_special_tokens=True)
decoded_gold = tokenizer.batch_decode(gold_tokens, skip_special_tokens=True)
# Extract the numbers in sentences
accelerator.print(decoded_pred)
ans_pred_list += [extract_answer_number(sentence_pred) for sentence_pred in decoded_pred]
ans_gold_list += [extract_answer_number(sentence_gold) for sentence_gold in decoded_gold]
accelerator.print(ans_pred_list)
accelerator.print(ans_gold_list)
accuracy = compute_accuracy(ans_gold_list, ans_pred_list)
logger.info(f"epoch {epoch}: accuracy: {accuracy}")
if args.with_tracking:
accelerator.log(
{
"accuracy": accuracy,
"train_loss": total_loss.item() / len(train_dataloader),
"epoch": epoch,
"step": completed_steps,
},
step=completed_steps,
)
if args.push_to_hub and epoch < args.num_train_epochs - 1:
accelerator.wait_for_everyone()
unwrapped_model = accelerator.unwrap_model(model)
unwrapped_model.save_pretrained(
args.output_dir, is_main_process=accelerator.is_main_process, save_function=accelerator.save
)
if accelerator.is_main_process:
tokenizer.save_pretrained(args.output_dir)
api.upload_folder(
repo_id=repo_id,
folder_path=args.output_dir,
commit_message=f"Training in progress epoch {epoch}",
run_as_future=True,
)
if args.checkpointing_steps == "epoch":
output_dir = f"epoch_{epoch}"
if args.output_dir is not None:
output_dir = os.path.join(args.output_dir, output_dir)
accelerator.save_state(output_dir)
if args.with_tracking:
accelerator.end_training()
if args.output_dir is not None:
accelerator.wait_for_everyone()
unwrapped_model = accelerator.unwrap_model(model)
unwrapped_model.save_pretrained(
args.output_dir, is_main_process=accelerator.is_main_process, save_function=accelerator.save
)
if accelerator.is_main_process:
tokenizer.save_pretrained(args.output_dir)
if args.push_to_hub:
api.upload_folder(
repo_id=repo_id,
folder_path=args.output_dir,
commit_message="End of training",
)
PATTERN_NUMBER = re.compile(r"-?\d+\.?\d*")
def extract_answer_number(sentence: str) -> float:
sentence = sentence.replace(",", "")
pred = PATTERN_NUMBER.findall(sentence)
if not pred:
return float("inf")
segment = sentence.split("The final answer is ")
if len(segment) > 1:
pred_answer = segment[1]
pred_answer = PATTERN_NUMBER.findall(pred_answer)
if len(pred_answer) > 0:
pred_answer = pred_answer[0]
else:
pred_answer = float(pred[-1])
else:
pred_answer = float(pred[-1])
if isinstance(pred_answer, str):
try:
pred_answer = float(pred_answer)
except ValueError:
pred_answer = float("inf")
return pred_answer
def compute_accuracy(pred: list, gold: list):
acc = 0.0
for p, g in zip(pred, gold):
if p == g:
acc += 1
return acc / len(pred)
if __name__ == "__main__":
main()
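The left-padding label masking performed in `preprocess_function_train` above can be sketched in isolation. This is a minimal illustration with plain Python lists rather than tensors; the helper name `mask_source_tokens` is illustrative and not part of the script:

```python
IGNORE_INDEX = -100  # the loss-ignore value understood by PyTorch's cross entropy


def mask_source_tokens(input_ids, target_len, ignore_index=IGNORE_INDEX):
    # With left padding, the last `target_len` positions hold the target tokens;
    # everything before them (padding + source prompt) is masked out of the loss.
    labels = list(input_ids)
    for i in range(len(labels) - target_len):
        labels[i] = ignore_index
    return labels


# A sequence of length 6 whose last 2 tokens are the answer:
print(mask_source_tokens([0, 0, 11, 12, 21, 22], target_len=2))
# -> [-100, -100, -100, -100, 21, 22]
```

Only the trailing answer tokens contribute to the loss, which is exactly what the `labels["input_ids"][i, : -target_len[i]] = -100` line achieves on the batched tensors.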
| peft/examples/loftq_finetuning/train_gsm8k_llama.py/0 | {
"file_path": "peft/examples/loftq_finetuning/train_gsm8k_llama.py",
"repo_id": "peft",
"token_count": 14677
} | 240 |
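The answer-extraction heuristic used by `extract_answer_number` in the script above can be sketched as a simplified, self-contained version. The helper name `extract_final_number` is illustrative; the regex is the one the script uses:

```python
import re

PATTERN_NUMBER = re.compile(r"-?\d+\.?\d*")


def extract_final_number(sentence: str) -> float:
    # Prefer the first number after the "The final answer is " marker;
    # otherwise fall back to the last number found anywhere in the sentence.
    sentence = sentence.replace(",", "")
    numbers = PATTERN_NUMBER.findall(sentence)
    if not numbers:
        return float("inf")
    _, _, tail = sentence.partition("The final answer is ")
    tail_numbers = PATTERN_NUMBER.findall(tail)
    return float(tail_numbers[0]) if tail_numbers else float(numbers[-1])


print(extract_final_number("6 + 12 = 18. The final answer is 18"))  # -> 18.0
print(extract_final_number("no marker here, just 7 apples"))  # -> 7.0
```

This mirrors why the preprocessing replaces GSM8K's `####` marker with "The final answer is": it gives the extractor a stable anchor in the generated text.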
<jupyter_start><jupyter_text>Dreambooth with OFTThis Notebook assumes that you already ran the train_dreambooth.py script to create your own adapter.<jupyter_code>from diffusers import DiffusionPipeline
from diffusers.utils import check_min_version, get_logger
from peft import PeftModel
# Will error if the minimal version of diffusers is not installed. Remove at your own risk.
check_min_version("0.10.0.dev0")
logger = get_logger(__name__)
BASE_MODEL_NAME = "stabilityai/stable-diffusion-2-1-base"
ADAPTER_MODEL_PATH = "INSERT MODEL PATH HERE"
import torch
device = torch.accelerator.current_accelerator().type if hasattr(torch, "accelerator") else "cuda"
pipe = DiffusionPipeline.from_pretrained(
BASE_MODEL_NAME,
)
pipe.to(device)
pipe.unet = PeftModel.from_pretrained(pipe.unet, ADAPTER_MODEL_PATH + "/unet", adapter_name="default")
pipe.text_encoder = PeftModel.from_pretrained(
pipe.text_encoder, ADAPTER_MODEL_PATH + "/text_encoder", adapter_name="default"
)
prompt = "A photo of a sks dog"
image = pipe(
prompt,
num_inference_steps=50,
height=512,
width=512,
).images[0]
image<jupyter_output>100%|██████████| 50/50 [00:11<00:00, 4.46it/s] | peft/examples/oft_dreambooth/oft_dreambooth_inference.ipynb/0 | {
"file_path": "peft/examples/oft_dreambooth/oft_dreambooth_inference.ipynb",
"repo_id": "peft",
"token_count": 433
} | 241 |
<jupyter_start><jupyter_text>IntroductionIn this notebook, we will learn how to use [LoRA](https://huggingface.co/papers/2106.09685) from 🤗 PEFT to fine-tune a SegFormer model variant for semantic segmentation by ONLY using **14%** of the original trainable parameters of the model. LoRA adds low-rank "update matrices" to certain blocks in the underlying model (in this case the attention blocks) and ONLY trains those matrices during fine-tuning. During inference, these update matrices are _merged_ with the original model parameters. For more details, check out the [original LoRA paper](https://huggingface.co/papers/2106.09685). Let's get started by installing the dependencies. Install dependenciesHere we're installing `peft` from source to ensure we have access to all the bleeding edge features of `peft`.<jupyter_code>!pip install transformers accelerate evaluate datasets==3.6.0 git+https://github.com/huggingface/peft -q<jupyter_output><empty_output><jupyter_text>AuthenticationWe will share our fine-tuned model at the end of training. So, to do that we just authenticate using our 🤗 token. This token is available from [here](https://huggingface.co/settings/tokens). If you don't have a 🤗 account already, we highly encourage you to do so; it's free!<jupyter_code>from huggingface_hub import notebook_login
notebook_login()<jupyter_output><empty_output><jupyter_text>Load a datasetWe're only loading the first 150 instances from the training set of the [SceneParse150 dataset](https://huggingface.co/datasets/scene_parse_150) to keep this example runtime short.<jupyter_code>from datasets import load_dataset
ds = load_dataset("scene_parse_150", split="train[:150]")<jupyter_output><empty_output><jupyter_text>Prepare train and test splits<jupyter_code>ds = ds.train_test_split(test_size=0.1)
train_ds = ds["train"]
test_ds = ds["test"]<jupyter_output><empty_output><jupyter_text>Prepare label mappersWe create two dictionaries:* `label2id`: maps the semantic classes of the dataset to integer ids.* `id2label`: `label2id` reversed.<jupyter_code>import json
from huggingface_hub import hf_hub_download
repo_id = "huggingface/label-files"
filename = "ade20k-id2label.json"
id2label = json.load(open(hf_hub_download(repo_id=repo_id, filename=filename, repo_type="dataset"), "r"))
id2label = {int(k): v for k, v in id2label.items()}
label2id = {v: k for k, v in id2label.items()}
num_labels = len(id2label)<jupyter_output><empty_output><jupyter_text>Prepare datasets for training and evaluation<jupyter_code>from transformers import AutoImageProcessor
checkpoint = "nvidia/mit-b0"
image_processor = AutoImageProcessor.from_pretrained(checkpoint, do_reduce_labels=True)
from torchvision.transforms import ColorJitter
jitter = ColorJitter(brightness=0.25, contrast=0.25, saturation=0.25, hue=0.1)
from PIL import Image
import numpy as np
def handle_grayscale_image(image):
np_image = np.array(image)
if np_image.ndim == 2:
tiled_image = np.tile(np.expand_dims(np_image, -1), 3)
return Image.fromarray(tiled_image)
else:
return Image.fromarray(np_image)
def train_transforms(example_batch):
images = [jitter(handle_grayscale_image(x)) for x in example_batch["image"]]
labels = [x for x in example_batch["annotation"]]
inputs = image_processor(images, labels)
return inputs
def val_transforms(example_batch):
images = [handle_grayscale_image(x) for x in example_batch["image"]]
labels = [x for x in example_batch["annotation"]]
inputs = image_processor(images, labels)
return inputs
train_ds.set_transform(train_transforms)
test_ds.set_transform(val_transforms)<jupyter_output><empty_output><jupyter_text>Evaluation functionIncluding a metric during training is often helpful for evaluating your model's performance. You can quickly load an evaluation method with the [🤗 Evaluate](https://huggingface.co/docs/evaluate/index) library. For this task, load the [mean Intersection over Union (IoU)](https://huggingface.co/spaces/evaluate-metric/mean_iou) metric (see the 🤗 Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour) to learn more about how to load and compute a metric):<jupyter_code>import torch
from torch import nn
import evaluate
metric = evaluate.load("mean_iou")
def compute_metrics(eval_pred):
with torch.no_grad():
logits, labels = eval_pred
logits_tensor = torch.from_numpy(logits)
# scale the logits to the size of the label
logits_tensor = nn.functional.interpolate(
logits_tensor,
size=labels.shape[-2:],
mode="bilinear",
align_corners=False,
).argmax(dim=1)
pred_labels = logits_tensor.detach().cpu().numpy()
# currently using _compute instead of compute
# see this issue for more info: https://github.com/huggingface/evaluate/pull/328#issuecomment-1286866576
metrics = metric._compute(
predictions=pred_labels,
references=labels,
num_labels=len(id2label),
ignore_index=0,
reduce_labels=image_processor.do_reduce_labels,
)
# add per category metrics as individual key-value pairs
per_category_accuracy = metrics.pop("per_category_accuracy").tolist()
per_category_iou = metrics.pop("per_category_iou").tolist()
metrics.update({f"accuracy_{id2label[i]}": v for i, v in enumerate(per_category_accuracy)})
metrics.update({f"iou_{id2label[i]}": v for i, v in enumerate(per_category_iou)})
return metrics<jupyter_output>Downloading builder script: 12.9kB [00:00, 34.2MB/s]<jupyter_text>Load a base modelFor this example, we use the [SegFormer B0 variant](https://huggingface.co/nvidia/mit-b0).<jupyter_code>def print_trainable_parameters(model):
"""
Prints the number of trainable parameters in the model.
"""
trainable_params = 0
all_param = 0
for _, param in model.named_parameters():
all_param += param.numel()
if param.requires_grad:
trainable_params += param.numel()
print(
f"trainable params: {trainable_params} || all params: {all_param} || trainable%: {100 * trainable_params / all_param:.2f}"
)<jupyter_output><empty_output><jupyter_text>We pass the `label2id` and `id2label` dictionaries to let the `AutoModelForSemanticSegmentation` class know that we're interested in a custom base model where the decoder head should be randomly initialized w.r.t our custom dataset. Note, however, that the rest of the model parameters are pre-trained and will be fine-tuned in a regular transfer learning setup.We also notice that the 100% parameters in the `model` are trainable.<jupyter_code>from transformers import AutoModelForSemanticSegmentation, TrainingArguments, Trainer
model = AutoModelForSemanticSegmentation.from_pretrained(
checkpoint, id2label=id2label, label2id=label2id, ignore_mismatched_sizes=True
)
print_trainable_parameters(model)<jupyter_output>Some weights of SegformerForSemanticSegmentation were not initialized from the model checkpoint at nvidia/mit-b0 and are newly initialized: ['decode_head.batch_norm.bias', 'decode_head.batch_norm.num_batches_tracked', 'decode_head.batch_norm.running_mean', 'decode_head.batch_norm.running_var', 'decode_head.batch_norm.weight', 'decode_head.classifier.bias', 'decode_head.classifier.weight', 'decode_head.linear_c.0.proj.bias', 'decode_head.linear_c.0.proj.weight', 'decode_head.linear_c.1.proj.bias', 'decode_head.linear_c.1.proj.weight', 'decode_head.linear_c.2.proj.bias', 'decode_head.linear_c.2.proj.weight', 'decode_head.linear_c.3.proj.bias', 'decode_head.linear_c.3.proj.weight', 'decode_head.linear_fuse.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.<jupyter_text>Wrap `model` as a `PeftModel` for LoRA trainingThis involves two steps:* Defining a config with `LoraConfig`* Wrapping the original `model` with `get_peft_model()` with the config defined in the step above.<jupyter_code>from peft import LoraConfig, get_peft_model
config = LoraConfig(
r=32,
lora_alpha=32,
target_modules=["query", "value"],
lora_dropout=0.1,
bias="lora_only",
modules_to_save=["decode_head"],
)
lora_model = get_peft_model(model, config)
print_trainable_parameters(lora_model)<jupyter_output>trainable params: 566422 || all params: 4317068 || trainable%: 13.12<jupyter_text>Let's unpack what's going on here. In order for LoRA to take effect, we need to specify the target modules to `LoraConfig` so that `PeftModel` knows which modules inside our model need to be amended with LoRA matrices. In this case, we're only interested in targeting the query and value matrices of the attention blocks of the base model. Since the parameters corresponding to these matrices are "named" with `query` and `value` respectively, we specify them accordingly in the `target_modules` argument of `LoraConfig`. We also specify `modules_to_save`. After we wrap our base model `model` with `PeftModel` along with the `config`, we get a new model where only the LoRA parameters are trainable (the so-called "update matrices") while the pre-trained parameters are kept frozen. These frozen parameters include the randomly initialized classifier parameters too. This is NOT what we want when fine-tuning the base model on our custom dataset. To ensure that the classifier parameters are also trained, we specify `modules_to_save`. This also ensures that these modules are serialized alongside the LoRA trainable parameters when using utilities like `save_pretrained()` and `push_to_hub()`. Regarding the other parameters:* `r`: The dimension used by the LoRA update matrices.* `alpha`: Scaling factor.* `bias`: Specifies if the `bias` parameters should be trained. `lora_only` denotes that only the LoRA `bias` parameters will be trained. `r` and `alpha` together control the total number of final trainable parameters when using LoRA, giving us the flexibility to balance a trade-off between end performance and compute efficiency. We can also check how many parameters we're actually training. 
Since we're interested in performing **parameter-efficient fine-tuning**, we should expect to see far fewer trainable parameters in the `lora_model` than in the original `model`, which is indeed the case here. For sanity, let's also manually verify the modules that are actually trainable in `lora_model`.<jupyter_code>for name, param in lora_model.named_parameters():
if param.requires_grad:
        print(name, param.shape)<jupyter_output><empty_output><jupyter_text>We can confirm that only the LoRA parameters appended to the attention blocks and the `decode_head` parameters are trainable. Train!This is a three-step process: 1. Define your training hyperparameters in [TrainingArguments](https://huggingface.co/docs/transformers/v4.26.0/en/main_classes/trainertransformers.TrainingArguments). It is important you don't remove unused columns because this'll drop the image column. Without the image column, you can't create `pixel_values`. Set `remove_unused_columns=False` to prevent this behavior! The only other required parameter is `output_dir`, which specifies where to save your model. At the end of each epoch, the `Trainer` will evaluate the IoU metric and save the training checkpoint.2. Pass the training arguments to [Trainer](https://huggingface.co/docs/transformers/v4.26.0/en/main_classes/trainertransformers.Trainer) along with the model, dataset, tokenizer, data collator, and `compute_metrics` function.3. Call `train()` to finetune your model.**Note** that this example is meant to walk you through the workflow when using PEFT for semantic segmentation. We didn't perform extensive hyperparameter tuning to achieve optimal results.<jupyter_code>model_name = checkpoint.split("/")[-1]
training_args = TrainingArguments(
output_dir=f"{model_name}-scene-parse-150-lora",
learning_rate=5e-4,
num_train_epochs=50,
per_device_train_batch_size=4,
per_device_eval_batch_size=2,
save_total_limit=3,
eval_strategy="epoch",
save_strategy="epoch",
logging_steps=5,
remove_unused_columns=False,
push_to_hub=True,
label_names=["labels"],
)
trainer = Trainer(
model=lora_model,
args=training_args,
train_dataset=train_ds,
eval_dataset=test_ds,
compute_metrics=compute_metrics,
)
trainer.train()<jupyter_output><empty_output><jupyter_text>Saving the model and inference Here we use the `save_pretrained()` method of the `lora_model` to save the *LoRA-only parameters* locally. However, you can also use the `push_to_hub()` method to upload these parameters directly to the Hugging Face Hub (as shown [here](https://colab.research.google.com/github/huggingface/peft/blob/main/examples/image_classification/image_classification_peft_lora.ipynb)).<jupyter_code>model_id = "segformer-scene-parse-150-lora"
lora_model.save_pretrained(model_id)<jupyter_output><empty_output><jupyter_text>We can see that the LoRA-only parameters are just **2.2 MB in size**! This greatly improves the portability when using very large models.<jupyter_code>!ls -lh {model_id}<jupyter_output><empty_output><jupyter_text>Let's now prepare our `inference_model` and run an inference.<jupyter_code>from peft import PeftConfig, PeftModel
config = PeftConfig.from_pretrained(model_id)
model = AutoModelForSemanticSegmentation.from_pretrained(
checkpoint, id2label=id2label, label2id=label2id, ignore_mismatched_sizes=True
)
# Load the Lora model
inference_model = PeftModel.from_pretrained(model, model_id)<jupyter_output><empty_output><jupyter_text>Fetch an image.<jupyter_code>import requests
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/semantic-seg-image.png"
image = Image.open(requests.get(url, stream=True).raw)
image<jupyter_output><empty_output><jupyter_text>Preprocess the image.<jupyter_code># prepare image for the model
encoding = image_processor(image.convert("RGB"), return_tensors="pt")
print(encoding.pixel_values.shape)<jupyter_output><empty_output><jupyter_text>Run an inference.<jupyter_code>with torch.no_grad():
outputs = inference_model(pixel_values=encoding.pixel_values)
logits = outputs.logits
upsampled_logits = nn.functional.interpolate(
logits,
size=image.size[::-1],
mode="bilinear",
align_corners=False,
)
pred_seg = upsampled_logits.argmax(dim=1)[0]<jupyter_output><empty_output><jupyter_text>Visualize the results.We need a color palette to visualize the results. Here, we use [one provided by the TensorFlow Model Garden repository](https://github.com/tensorflow/models/blob/3f1ca33afe3c1631b733ea7e40c294273b9e406d/research/deeplab/utils/get_dataset_colormap.pyL51).<jupyter_code>def ade_palette():
"""Creates a label colormap used in ADE20K segmentation benchmark.
Returns:
A colormap for visualizing segmentation results.
"""
return np.asarray(
[
[0, 0, 0],
[120, 120, 120],
[180, 120, 120],
[6, 230, 230],
[80, 50, 50],
[4, 200, 3],
[120, 120, 80],
[140, 140, 140],
[204, 5, 255],
[230, 230, 230],
[4, 250, 7],
[224, 5, 255],
[235, 255, 7],
[150, 5, 61],
[120, 120, 70],
[8, 255, 51],
[255, 6, 82],
[143, 255, 140],
[204, 255, 4],
[255, 51, 7],
[204, 70, 3],
[0, 102, 200],
[61, 230, 250],
[255, 6, 51],
[11, 102, 255],
[255, 7, 71],
[255, 9, 224],
[9, 7, 230],
[220, 220, 220],
[255, 9, 92],
[112, 9, 255],
[8, 255, 214],
[7, 255, 224],
[255, 184, 6],
[10, 255, 71],
[255, 41, 10],
[7, 255, 255],
[224, 255, 8],
[102, 8, 255],
[255, 61, 6],
[255, 194, 7],
[255, 122, 8],
[0, 255, 20],
[255, 8, 41],
[255, 5, 153],
[6, 51, 255],
[235, 12, 255],
[160, 150, 20],
[0, 163, 255],
[140, 140, 140],
[250, 10, 15],
[20, 255, 0],
[31, 255, 0],
[255, 31, 0],
[255, 224, 0],
[153, 255, 0],
[0, 0, 255],
[255, 71, 0],
[0, 235, 255],
[0, 173, 255],
[31, 0, 255],
[11, 200, 200],
[255, 82, 0],
[0, 255, 245],
[0, 61, 255],
[0, 255, 112],
[0, 255, 133],
[255, 0, 0],
[255, 163, 0],
[255, 102, 0],
[194, 255, 0],
[0, 143, 255],
[51, 255, 0],
[0, 82, 255],
[0, 255, 41],
[0, 255, 173],
[10, 0, 255],
[173, 255, 0],
[0, 255, 153],
[255, 92, 0],
[255, 0, 255],
[255, 0, 245],
[255, 0, 102],
[255, 173, 0],
[255, 0, 20],
[255, 184, 184],
[0, 31, 255],
[0, 255, 61],
[0, 71, 255],
[255, 0, 204],
[0, 255, 194],
[0, 255, 82],
[0, 10, 255],
[0, 112, 255],
[51, 0, 255],
[0, 194, 255],
[0, 122, 255],
[0, 255, 163],
[255, 153, 0],
[0, 255, 10],
[255, 112, 0],
[143, 255, 0],
[82, 0, 255],
[163, 255, 0],
[255, 235, 0],
[8, 184, 170],
[133, 0, 255],
[0, 255, 92],
[184, 0, 255],
[255, 0, 31],
[0, 184, 255],
[0, 214, 255],
[255, 0, 112],
[92, 255, 0],
[0, 224, 255],
[112, 224, 255],
[70, 184, 160],
[163, 0, 255],
[153, 0, 255],
[71, 255, 0],
[255, 0, 163],
[255, 204, 0],
[255, 0, 143],
[0, 255, 235],
[133, 255, 0],
[255, 0, 235],
[245, 0, 255],
[255, 0, 122],
[255, 245, 0],
[10, 190, 212],
[214, 255, 0],
[0, 204, 255],
[20, 0, 255],
[255, 255, 0],
[0, 153, 255],
[0, 41, 255],
[0, 255, 204],
[41, 0, 255],
[41, 255, 0],
[173, 0, 255],
[0, 245, 255],
[71, 0, 255],
[122, 0, 255],
[0, 255, 184],
[0, 92, 255],
[184, 255, 0],
[0, 133, 255],
[255, 214, 0],
[25, 194, 194],
[102, 255, 0],
[92, 0, 255],
]
)
import matplotlib.pyplot as plt
color_seg = np.zeros((pred_seg.shape[0], pred_seg.shape[1], 3), dtype=np.uint8)
palette = np.array(ade_palette())
for label, color in enumerate(palette):
color_seg[pred_seg == label, :] = color
color_seg = color_seg[..., ::-1] # convert to BGR
img = np.array(image) * 0.5 + color_seg * 0.5 # plot the image with the segmentation map
img = img.astype(np.uint8)
plt.figure(figsize=(15, 10))
plt.imshow(img)
plt.show()<jupyter_output><empty_output> | peft/examples/semantic_segmentation/semantic_segmentation_peft_lora.ipynb/0 | {
"file_path": "peft/examples/semantic_segmentation/semantic_segmentation_peft_lora.ipynb",
"repo_id": "peft",
"token_count": 8138
} | 242 |
compute_environment: LOCAL_MACHINE
debug: false
deepspeed_config:
deepspeed_multinode_launcher: standard
offload_optimizer_device: none
offload_param_device: none
zero3_init_flag: true
zero3_save_16bit_model: true
zero_stage: 3
distributed_type: DEEPSPEED
downcast_bf16: 'no'
machine_rank: 0
main_training_function: main
mixed_precision: bf16
num_machines: 1
num_processes: 2
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false | peft/examples/sft/configs/deepspeed_config_z3_qlora.yaml/0 | {
"file_path": "peft/examples/sft/configs/deepspeed_config_z3_qlora.yaml",
"repo_id": "peft",
"token_count": 331
} | 243 |
# Sparse High Rank Adapters
## Introduction
Sparse High Rank Adapters, or [SHiRA](https://arxiv.org/abs/2406.13175), are an alternative type of adapter that has been found to have significant advantages over low rank adapters. Specifically, SHiRA achieves better accuracy than LoRA for a variety of vision and language tasks. It also offers simpler and higher quality multi-adapter fusion by significantly reducing concept loss, a common problem faced by low rank adapters. SHiRA directly fine-tunes a small number of the base model's parameters to adapt the model to any downstream task.
## Quick start
```python
import torch
from peft import ShiraConfig, get_peft_model
from transformers import AutoTokenizer, AutoModelForCausalLM
from trl import SFTConfig, SFTTrainer
from datasets import load_dataset
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", torch_dtype=torch.bfloat16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
dataset = load_dataset("imdb", split="train[:1%]")
shira_config = ShiraConfig(
r=32,
)
peft_model = get_peft_model(model, shira_config)
training_args = SFTConfig(dataset_text_field="text", max_seq_length=128)
trainer = SFTTrainer(
model=peft_model,
train_dataset=dataset,
processing_class=tokenizer,
)
trainer.train()
peft_model.save_pretrained("shira-opt-350m")
```
For more options and a more detailed example, you can refer to the SHiRA fine-tuning script.
Run the script with:
```bash
python3 examples/shira_finetuning/shira_finetuning.py --base_model facebook/opt-350m
```
If you want to run DDP with [accelerate](https://huggingface.co/docs/accelerate/en/index), run `accelerate config` to set up your DDP configuration, then run:
```bash
accelerate launch examples/shira_finetuning/shira_finetuning.py --base_model facebook/opt-350m
```
Add `--device_map cpu` if you want to run finetuning on the CPU.
If you want to train SHiRA with a custom sparse mask function that requires custom keyword arguments, please see the definition of the `custom_random_mask_function_with_custom_kwargs` function provided in the `shira_finetuning.py` script. You can run this code using the `--use_custom_random_mask_function_with_custom_kwargs` argument. Without this argument, SHiRA defaults to a random sparse mask. Run the code as follows:
```bash
python3 examples/shira_finetuning/shira_finetuning.py --base_model facebook/opt-350m --use_custom_random_mask_function_with_custom_kwargs
```
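The `custom_random_mask_function_with_custom_kwargs` definition in the script is the reference; as a rough sketch of what such a mask function can look like (the function name, signature, and the `r * (m + n)` nonzero budget are illustrative assumptions here — the budget is chosen so the sparse mask matches the parameter count of a rank-`r` LoRA on an `m x n` weight):

```python
import torch


def custom_random_mask(weight: torch.Tensor, r: int, random_seed: int = 42) -> torch.Tensor:
    """Return a {0, 1} mask over `weight` with r * (m + n) nonzero entries."""
    m, n = weight.shape
    num_ones = r * (m + n)  # same trainable-parameter budget as a rank-r LoRA
    gen = torch.Generator().manual_seed(random_seed)
    # Pick num_ones flat positions uniformly at random, without replacement.
    idx = torch.randperm(m * n, generator=gen)[:num_ones]
    mask = torch.zeros(m * n)
    mask[idx] = 1.0
    return mask.view(m, n)
```

Any deterministic function returning a mask of the same shape as the weight could be substituted, e.g. one that selects positions by gradient magnitude instead of at random.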
## Use the model
You can load and use the model like any other 🤗 PEFT model
```python
from peft import PeftModel
from transformers import AutoTokenizer, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
shira_model = PeftModel.from_pretrained(model, "shira-opt-350m")
```
## Citation
```
@inproceedings{NEURIPS2024_18c0102c,
author = {Bhardwaj, Kartikeya and Pandey, Nilesh Prasad and Priyadarshi, Sweta and Ganapathy, Viswanath and Kadambi, Shreya and Esteves, Rafael and Borse, Shubhankar and Whatmough, Paul and Garrepalli, Risheek and Van Baalen, Mart and Teague, Harris and Nagel, Markus},
booktitle = {Advances in Neural Information Processing Systems},
editor = {A. Globerson and L. Mackey and D. Belgrave and A. Fan and U. Paquet and J. Tomczak and C. Zhang},
pages = {13685--13715},
publisher = {Curran Associates, Inc.},
title = {Sparse High Rank Adapters},
url = {https://proceedings.neurips.cc/paper_files/paper/2024/file/18c0102cb7f1a02c14f0929089b2e576-Paper-Conference.pdf},
volume = {37},
year = {2024}
}
```
| peft/examples/shira_finetuning/README.md/0 | {
"file_path": "peft/examples/shira_finetuning/README.md",
"repo_id": "peft",
"token_count": 1165
} | 244 |
{
"adapter_layers": 28,
"adapter_len": 100,
"auto_mapping": null,
"base_model_name_or_path": null,
"inference_mode": false,
"peft_type": "ADAPTION_PROMPT",
"revision": null,
"target_modules": null,
"task_type": "CAUSAL_LM"
} | peft/method_comparison/MetaMathQA/experiments/adaptionprompt/llama-3.2-3B-lr_0.0005/adapter_config.json/0 | {
"file_path": "peft/method_comparison/MetaMathQA/experiments/adaptionprompt/llama-3.2-3B-lr_0.0005/adapter_config.json",
"repo_id": "peft",
"token_count": 107
} | 245 |
{
"auto_mapping": null,
"base_model_name_or_path": null,
"fan_in_fan_out": false,
"inference_mode": false,
"init_weights": true,
"mask_type": "random",
"modules_to_save": null,
"peft_type": "SHIRA",
"r": 32,
"random_seed": 42,
"revision": null,
"target_modules": null,
"task_type": null
} | peft/method_comparison/MetaMathQA/experiments/shira/llama-3.2-3B-lr_0.0003-random_seed_42/adapter_config.json/0 | {
"file_path": "peft/method_comparison/MetaMathQA/experiments/shira/llama-3.2-3B-lr_0.0003-random_seed_42/adapter_config.json",
"repo_id": "peft",
"token_count": 135
} | 246 |
## Base Model Inference Caching
The benchmarking suite uses a separate script, `run_base.py`, to measure base model inference times and save results for reuse. This should be run once per model configuration to avoid redundant computations and ensure consistent baseline metrics for all PEFT experiments.
**Usage:**
```bash
python run_base.py
```
This will cache the base model inference results for the specified configuration. Subsequent runs of `run.py` will automatically load these cached results.
# PEFT Benchmarking Suite
This directory contains a comprehensive benchmarking framework for Parameter-Efficient Fine-Tuning (PEFT) methods. For the task of text generation, the suite measures inference performance, memory usage, and other key metrics across different PEFT configurations.
## Overview
The benchmarking suite provides:
- **Inference time measurement** across different prompt categories
- **Memory usage during inference** (RAM and GPU)
- **Parameter efficiency metrics** (trainable vs total parameters)
- **Time per token analysis** for fair comparison across different generation lengths
- **Structured result logging** with detailed metadata
## Architecture
The suite follows a clean separation between:
1. **Default benchmark configuration** - shared settings for consistent comparison
2. **Individual adapter configurations** - PEFT-specific parameters for each experiment
This ensures that all experiments are comparable while allowing flexibility in adapter parameters.
## Quick Start
### Running a Single Experiment
```bash
# From the peft_bench directory
python run.py experiments/lora/lora_r8 --verbose
```
## Configuration Structure
The benchmarking suite uses a hierarchical configuration system:
1. **Default benchmark parameters** (`default_benchmark_params.json`) - Base configuration shared by all experiments
2. **Experiment-specific overrides** (`benchmark_params.json` in each experiment) - Optional overrides for specific experiments
3. **Adapter configuration** (`adapter_config.json` in each experiment) - PEFT method parameters
This structure ensures consistent comparison while allowing flexibility where needed.
### Default Configuration (`default_benchmark_params.json`)
Contains shared benchmark settings that apply to all experiments. Here are the key configuration fields:
- `model_id`: The Hugging Face model ID to use as the base model (e.g., "facebook/opt-350m")
- `dtype`: Model precision ("float16", "float32", or "bfloat16")
- `seed`: Random seed for reproducibility
- `max_new_tokens`: Maximum number of tokens to generate during inference
- `num_inference_runs`: Number of inference runs per prompt for statistical reliability
- `use_4bit`: Whether to use 4-bit quantization (bool)
- `use_8bit`: Whether to use 8-bit quantization (bool)
Each experiment can override these settings by providing its own `benchmark_params.json` file.
### Experiment Structure
Each experiment directory should contain:
1. `adapter_config.json`: PEFT adapter configuration. For details on available parameters and their meanings, refer to the [PEFT documentation](https://huggingface.co/docs/peft/main/en/developer_guides/adapters).
2. (Optional) `benchmark_params.json`: Override specific benchmark parameters for this experiment.
Example directory structure:
```
experiments/
โโโ lora/
โโโ lora_r8/ # LoRA rank 8 experiment
โ โโโ adapter_config.json # PEFT adapter configuration
โ โโโ benchmark_params.json # Optional benchmark overrides
โโโ lora_r16/ # LoRA rank 16 experiment
โโโ adapter_config.json
```
### Experiment-Specific Overrides Example
If an experiment needs different benchmark settings, create `benchmark_params.json`:
```json
{
"_comment": "Override settings for this specific experiment",
"max_new_tokens": 50,
"num_inference_runs": 15,
"num_prompt_samples": 2
}
```
These parameters will override the defaults from `default_benchmark_params.json`. However, the defaults should generally not be changed to keep the results from the individual experiments comparable.
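This defaults-plus-overrides merge amounts to a shallow dictionary update. A minimal sketch (the function name and the `defaults_path` default are illustrative; the actual loader lives in the suite's own code):

```python
import json
from pathlib import Path


def load_benchmark_params(experiment_dir, defaults_path="default_benchmark_params.json"):
    """Merge shared defaults with optional per-experiment overrides."""
    # Start from the shared defaults...
    params = json.loads(Path(defaults_path).read_text())
    # ...then let the experiment's benchmark_params.json (if present) win.
    override_file = Path(experiment_dir) / "benchmark_params.json"
    if override_file.exists():
        params.update(json.loads(override_file.read_text()))
    return params
```

With this scheme, an experiment that only overrides `max_new_tokens` still inherits `model_id`, `seed`, and every other default unchanged.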
### Create a New Experiment Adapter Configuration
To create a new experiment, follow these steps:
1. **Create the experiment directory**
```bash
mkdir -p experiments/lora/lora_r8
```
2. **Generate the adapter configuration programmatically**
Use the PEFT library to create and save your adapter config:
```python
from peft import LoraConfig
config = LoraConfig(
lora_alpha=16,
lora_dropout=0.1,
r=8,
target_modules=["q_proj", "v_proj"],
task_type="CAUSAL_LM"
)
config.save_pretrained("experiments/lora/lora_r8")
```
This will create an `adapter_config.json` in your experiment directory. Adjust parameters as needed for your experiment.
3. **(Optional) Add benchmark overrides**
If you need to override default benchmark settings, create a `benchmark_params.json` in the same directory.
4. **Run the benchmark**
```bash
python run.py experiments/lora/lora_r8 --verbose
```
## Prompt Categories
The benchmark automatically runs across all prompt categories for consistent comparison:
- **short** - Brief prompts (1-2 sentences)
- **medium** - Moderate length prompts (paragraph-level)
- **long** - Extended prompts (multiple paragraphs)
Results are tracked separately for each category, allowing analysis of how different PEFT methods perform across varying input lengths.
## Results Structure
Results are saved in a structured JSON format with three main sections:
### `run_info`
- Execution metadata (timestamp, duration, status)
- Hardware information (GPU type, CUDA version, etc.)
- Error information (if applicable)
- PEFT and benchmark configurations
### `generation_info`
- Memory usage logs at different stages
- Per-category metrics (inference time, time per token, etc.)
- Overall aggregated metrics
- Individual sample results for detailed analysis
### `meta_info`
- Model information (ID, PEFT method)
- Parameter counts (adapter, total, ratio)
- Model size information (base model, adapter)
- System and package information
## Key Metrics
### Inference Performance
- **Inference Time**: Total time for generation per category
- **Time Per Token**: Normalized time accounting for different generation lengths
- **Inference Overhead**: Percentage increase compared to base model
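These two derived metrics are simple normalizations of the raw timings; a sketch (function names are illustrative, not the suite's API):

```python
def time_per_token(total_time_s: float, num_generated_tokens: int) -> float:
    """Seconds per generated token, for fair comparison across generation lengths."""
    if num_generated_tokens <= 0:
        raise ValueError("need at least one generated token")
    return total_time_s / num_generated_tokens


def inference_overhead_pct(peft_time_s: float, base_time_s: float) -> float:
    """Percentage slowdown relative to the cached base-model timing from run_base.py."""
    return 100.0 * (peft_time_s - base_time_s) / base_time_s
```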
### Memory Usage
- **Peak GPU Memory**: Maximum GPU memory during benchmark
- **Peak RAM Memory**: Maximum RAM usage
- **Memory Logs**: Detailed tracking at each stage
### Parameter Efficiency
- **Adapter Parameters**: Number of parameters in the PEFT adapter
- **Parameter Ratio**: Percentage of total model parameters that are in the adapter
- **Adapter Size**: Memory footprint of the adapter in MB
| peft/method_comparison/text_generation_benchmark/README.md/0 | {
"file_path": "peft/method_comparison/text_generation_benchmark/README.md",
"repo_id": "peft",
"token_count": 1797
} | 247 |
import argparse
import json
import os
from datetime import date
from pathlib import Path
from tabulate import tabulate
MAX_LEN_MESSAGE = 2900 # slack endpoint has a limit of 3001 characters
parser = argparse.ArgumentParser()
parser.add_argument(
"--slack_channel_name",
default="peft-ci-daily",
)
def main(slack_channel_name=None):
failed = []
passed = []
group_info = []
total_num_failed = 0
    empty_file = len(list(Path().glob("*.log"))) == 0
total_empty_files = []
for log in Path().glob("*.log"):
section_num_failed = 0
i = 0
with open(log) as f:
for line in f:
line = json.loads(line)
i += 1
if line.get("nodeid", "") != "":
test = line["nodeid"]
if line.get("duration", None) is not None:
duration = f"{line['duration']:.4f}"
if line.get("outcome", "") == "failed":
section_num_failed += 1
failed.append([test, duration, log.name.split("_")[0]])
total_num_failed += 1
else:
passed.append([test, duration, log.name.split("_")[0]])
empty_file = i == 0
group_info.append([str(log), section_num_failed, failed])
total_empty_files.append(empty_file)
os.remove(log)
failed = []
text = (
"๐ There were no failures!"
if not any(total_empty_files)
else "Something went wrong there is at least one empty file - please check GH action results."
)
no_error_payload = {
"type": "section",
"text": {
"type": "plain_text",
"text": text,
"emoji": True,
},
}
message = ""
payload = [
{
"type": "header",
"text": {
"type": "plain_text",
"text": "๐ค Results of the {} PEFT scheduled tests.".format(os.environ.get("TEST_TYPE", "")),
},
},
]
if total_num_failed > 0:
for i, (name, num_failed, failed_tests) in enumerate(group_info):
if num_failed > 0:
if num_failed == 1:
message += f"*{name}: {num_failed} failed test*\n"
else:
message += f"*{name}: {num_failed} failed tests*\n"
failed_table = []
for test in failed_tests:
failed_table.append(test[0].split("::"))
failed_table = tabulate(
failed_table,
headers=["Test Location", "Test Case", "Test Name"],
showindex="always",
tablefmt="grid",
maxcolwidths=[12, 12, 12],
)
message += "\n```\n" + failed_table + "\n```"
if total_empty_files[i]:
message += f"\n*{name}: Warning! Empty file - please check the GitHub action job *\n"
print(f"### {message}")
else:
payload.append(no_error_payload)
if os.environ.get("TEST_TYPE", "") != "":
from slack_sdk import WebClient
if len(message) > MAX_LEN_MESSAGE:
print(f"Truncating long message from {len(message)} to {MAX_LEN_MESSAGE}")
message = message[:MAX_LEN_MESSAGE] + "..."
if len(message) != 0:
md_report = {
"type": "section",
"text": {"type": "mrkdwn", "text": message},
}
payload.append(md_report)
action_button = {
"type": "section",
"text": {"type": "mrkdwn", "text": "*For more details:*"},
"accessory": {
"type": "button",
"text": {"type": "plain_text", "text": "Check Action results", "emoji": True},
"url": f"https://github.com/huggingface/peft/actions/runs/{os.environ['GITHUB_RUN_ID']}",
},
}
payload.append(action_button)
date_report = {
"type": "context",
"elements": [
{
"type": "plain_text",
"text": f"Nightly {os.environ.get('TEST_TYPE')} test results for {date.today()}",
},
],
}
payload.append(date_report)
print(payload)
client = WebClient(token=os.environ.get("SLACK_API_TOKEN"))
client.chat_postMessage(channel=f"#{slack_channel_name}", text=message, blocks=payload)
if __name__ == "__main__":
args = parser.parse_args()
main(args.slack_channel_name)
| peft/scripts/log_reports.py/0 | {
"file_path": "peft/scripts/log_reports.py",
"repo_id": "peft",
"token_count": 2520
} | 248 |
# Copyright 2024-present the HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy
import torch
from torch.nn import CrossEntropyLoss
from peft.utils.integrations import gather_params_ctx
class CPTEmbedding(torch.nn.Module):
"""
CPTEmbedding is a custom embedding layer designed for Context-aware Prompt Tuning (CPT) in PEFT. It initializes
embeddings, applies prompt-specific projections, and computes loss using label masks.
"""
def __init__(self, config, word_embeddings):
"""
Initializes the CPTEmbedding module.
Args:
config (Namespace):
Configuration object containing model hyperparameters and CPT-specific settings.
word_embeddings (torch.nn.Embedding):
The base word embedding layer used to initialize CPT embeddings.
"""
super().__init__()
self.config = copy.deepcopy(config)
num_virtual_tokens = config.num_virtual_tokens
# Initialize embeddings with virtual token dimensions
self.embedding = torch.nn.Embedding(num_virtual_tokens, config.token_dim)
# Initialize embeddings using text-based prompt tuning, if configured
if not config.inference_mode:
assert config.num_virtual_tokens == len(config.cpt_token_ids)
init_token_ids = torch.LongTensor(config.cpt_token_ids).to(word_embeddings.weight.device)
with gather_params_ctx(word_embeddings.parameters()):
word_embedding_weights = word_embeddings(init_token_ids).detach().clone()
word_embedding_weights = word_embedding_weights.to(torch.float32)
self.embedding.weight = torch.nn.Parameter(word_embedding_weights)
# Initialize delta embedding with zero weights
self.delta_embedding = torch.nn.Embedding(num_virtual_tokens, config.token_dim)
self.delta_embedding.weight.data = torch.zeros_like(self.delta_embedding.weight).to(torch.float32)
# Apply hook for backward gradient updates
self.set_updated_tokens()
def forward(self, indices):
"""
Computes the prompt embeddings and applies delta adjustments.
Args:
indices (torch.Tensor):
Indices of the tokens to be embedded.
Returns:
torch.Tensor:
Sum of prompt embeddings and delta embeddings.
"""
with torch.no_grad():
prompt_embeddings = self.embedding(indices)
self.delta_embedding.weight.data = self.get_projection() # Apply epsilon-based projection
delta_prompt_embeddings = self.delta_embedding(indices)
return prompt_embeddings + delta_prompt_embeddings
def set_updated_tokens(self):
"""
Sets up a backward hook to selectively update token gradients based on the CPT token type mask.
"""
tensor_ICL_mask = torch.Tensor(self.config.cpt_tokens_type_mask).long()
mask_input_template = torch.remainder(tensor_ICL_mask, 4) == 1
mask_input = torch.remainder(tensor_ICL_mask, 4) == 2
mask_output_template = torch.remainder(tensor_ICL_mask, 4) == 3
mask = mask_input_template | mask_input | mask_output_template
mask = mask.view(-1, 1)
def backward_hook(grad):
grad = grad * mask.to(grad.device) # Apply mask to gradients
return grad
self.delta_embedding.weight.register_hook(backward_hook)
def get_epsilon(self):
cpt_tokens_type_mask = self.config.cpt_tokens_type_mask
MIN_VALUE = 1e-10
# Calculate normalized epsilon values for input, output, and format tokens
normalized_format_eps = self.config.opt_projection_format_epsilon * torch.sqrt(
torch.Tensor([self.config.token_dim / 2048])
)
normalized_input_eps = self.config.opt_projection_epsilon * torch.sqrt(
torch.Tensor([self.config.token_dim / 2048])
)
epsilon = torch.ones_like(torch.Tensor(cpt_tokens_type_mask)).to(torch.float32) * MIN_VALUE
cpt_tokens_type_mask = torch.Tensor(cpt_tokens_type_mask).long()
epsilon[(cpt_tokens_type_mask > 0) & (torch.remainder(cpt_tokens_type_mask, 4) == 1)] = normalized_format_eps
epsilon[(cpt_tokens_type_mask > 0) & (torch.remainder(cpt_tokens_type_mask, 4) == 3)] = normalized_format_eps
epsilon[(cpt_tokens_type_mask > 0) & (torch.remainder(cpt_tokens_type_mask, 4) == 2)] = normalized_input_eps
return epsilon
def get_projection(self):
"""
Applies epsilon-based projection to the delta embeddings to control their norm.
"""
# Apply projection to control delta embedding norm
with torch.no_grad():
new_embeddings_weights = self.delta_embedding.weight.clone().to(self.delta_embedding.weight.device)
token_norm = torch.norm(new_embeddings_weights, p=2, dim=1)
projection_mask = token_norm > 0
if torch.any(projection_mask):
epsilon = self.get_epsilon().to(self.delta_embedding.weight.device)
new_embeddings_weights[projection_mask] *= (
epsilon[projection_mask] / (token_norm[projection_mask].clamp(min=epsilon[projection_mask]))
).view(-1, 1)
return new_embeddings_weights
@staticmethod
def calculate_loss(base_model_output, labels, cpt_type_mask, config):
"""
Computes the loss for CPT models with optional exponential decay.
Args:
base_model_output (ModelOutput):
Output from the base model containing logits.
labels (torch.Tensor):
Ground-truth labels for the input tokens.
cpt_type_mask (torch.Tensor):
Token type mask used for filtering valid loss terms.
config (Namespace):
Configuration object containing loss-related hyperparameters.
Returns:
ModelOutput:
The base model output with computed loss.
"""
device = base_model_output.logits.device
lm_logits = base_model_output.logits
labels = labels.to(device)
# Shift logits and labels for token prediction
shift_logits = lm_logits[..., :-1, :].contiguous()
shift_labels = labels[..., 1:].contiguous()
shift_cpt_type_mask = cpt_type_mask[..., 1:].contiguous()
shift_labels_bool = (shift_labels.clone().detach() != -100).bool()
batch_size, seq_length, vocab_size = shift_logits.shape
# Compute cross-entropy loss
loss_fct = CrossEntropyLoss(reduction="none", ignore_index=-100)
loss = loss_fct(
shift_logits.view(batch_size * seq_length, vocab_size), shift_labels.view(batch_size * seq_length)
)
loss = loss.view(batch_size, seq_length)
# Apply exponential decay weights to the loss
shift_labels_weights = shift_labels_bool.clone().detach().float()
for i in range(batch_size):
idx_labels = (shift_cpt_type_mask[i] > 0) & (shift_cpt_type_mask[i] % 4 == 0)
labels_ids = shift_cpt_type_mask[i][idx_labels].unique()
exponential_decay = torch.ones_like(shift_cpt_type_mask[i]).to(device=device).float()
decay_value = 1
for label_mask_idx in torch.flip(labels_ids, [0]):
exponential_decay[shift_cpt_type_mask[i] == label_mask_idx] = decay_value
decay_value *= config.opt_loss_decay_factor
if config.opt_weighted_loss_type == "decay":
shift_labels_weights[i] *= exponential_decay
# Compute the weighted mean loss
loss = (loss[shift_labels_bool] * shift_labels_weights[shift_labels_bool]).mean()
base_model_output.loss = loss
return base_model_output
# --- source file: peft/src/peft/tuners/cpt/model.py ---
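The exponential-decay weighting in `calculate_loss` above walks the label segments from newest to oldest, giving the most recent segment weight 1 and multiplying by the decay factor once per older segment. A minimal standalone sketch of that loop (`decay_weights` is an illustrative helper, not part of the PEFT API):

```python
import torch

def decay_weights(type_mask: torch.Tensor, decay_factor: float) -> torch.Tensor:
    """Mirror the torch.flip loop in calculate_loss: the newest label
    segment gets weight 1, each older one is multiplied by
    decay_factor once more."""
    weights = torch.ones_like(type_mask, dtype=torch.float)
    # label tokens are those with a positive type id divisible by 4
    label_ids = type_mask[(type_mask > 0) & (type_mask % 4 == 0)].unique()
    decay = 1.0
    for label_id in torch.flip(label_ids, [0]):  # iterate newest -> oldest
        weights[type_mask == label_id] = decay
        decay *= decay_factor
    return weights

# Two label segments (type ids 4 and 8, both divisible by 4); id 8 is newer.
mask = torch.tensor([4, 4, 8, 8, 1])
w = decay_weights(mask, decay_factor=0.5)
# -> tensor([0.5000, 0.5000, 1.0000, 1.0000, 1.0000])
```

Positions whose type id is not a label (here the trailing `1`) keep weight 1; in the real loss they are filtered out separately via `shift_labels_bool`.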
# Copyright 2024-present the HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import warnings
from copy import deepcopy
from typing import Optional
import torch
import torch.nn as nn
from peft.tuners.tuners_utils import BaseTunerLayer, check_adapters_to_merge
class LNTuningLayer(nn.Module, BaseTunerLayer):
"""
Selects a layer from the model.
"""
adapter_layer_names = ("ln_tuning_layers",)
def __init__(self, base_layer: nn.Module, adapter_name: str):
super().__init__()
self.base_layer = base_layer
self.ln_tuning_layers = nn.ModuleDict({})
self.update_layer(self.base_layer, adapter_name)
self._active_adapter = adapter_name
self.merged_adapters = []
def update_layer(self, layer: nn.Module, adapter_name: str):
self.ln_tuning_layers[adapter_name] = deepcopy(layer)
def enable_adapters(self, enabled: bool) -> None:
"""Toggle the enabling and disabling of adapters
Takes care of setting the requires_grad flag for the adapter weights.
Args:
enabled (bool): True to enable adapters, False to disable adapters
"""
if enabled:
self.set_adapter(self.active_adapters)
self._disable_adapters = False
else:
if self.merged:
self.unmerge()
# disable grads on all adapter layers
for layer_name in self.adapter_layer_names:
layer = getattr(self, layer_name)
layer.requires_grad_(False)
self._disable_adapters = True
def merge(self, adapter_names: Optional[list[str]] = None, safe_merge: bool = False):
# note that there is no actual merging, so whether safe_merge is True or False is irrelevant
adapter_names = check_adapters_to_merge(self, adapter_names)
if not adapter_names:
# no adapter to merge
return
if len(adapter_names) > 1:
raise ValueError(
f"Trying to merge {len(adapter_names)} adapters, but LN "
f"tuning does not allow merging more than one adapter at a time"
)
merged_adapters = set(self.merged_adapters)
if merged_adapters:
warnings.warn(f"Already merged with {merged_adapters}. Unmerging first.")
self.unmerge()
self.base_layer, self.ln_tuning_layers[adapter_names[0]] = (
self.ln_tuning_layers[adapter_names[0]],
self.base_layer,
)
self.merged_adapters.append(adapter_names[0])
def unmerge(self):
if not self.merged:
warnings.warn("Already unmerged. Nothing to do.")
return
# popping one element is sufficient because LN
# tuning does not allow merging more than one adapter at a time.
merged_name = self.merged_adapters.pop()
self.base_layer, self.ln_tuning_layers[merged_name] = (
self.ln_tuning_layers[merged_name],
self.base_layer,
)
def forward(self, x: torch.Tensor, *args, **kwargs) -> torch.Tensor:
if self.disable_adapters:
if self.merged:
self.unmerge()
result = self.base_layer(x, *args, **kwargs)
elif self.merged:
result = self.base_layer(x, *args, **kwargs)
else:
if len(self.active_adapters) != 1:
raise ValueError(
f"Trying to run forward with {len(self.active_adapters)} active "
f"adapters, but LN tuning does not allow inference with more than one adapter at a time"
)
active_adapter = self.active_adapters[0]
result = self.ln_tuning_layers[active_adapter](x, *args, **kwargs)
return result
def __repr__(self) -> str:
rep = super().__repr__()
return "ln_tuning." + rep
# --- source file: peft/src/peft/tuners/ln_tuning/layer.py ---
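The merge/unmerge logic above never combines weights arithmetically: "merging" is just swapping the base module with the tuned copy, which is also why only one adapter can be merged at a time. A stripped-down sketch of that swap (`SwapMerge` is a hypothetical class, not the real `LNTuningLayer`):

```python
from copy import deepcopy

import torch.nn as nn

class SwapMerge:
    """Merging = exchanging base layer and tuned copy; unmerging = swapping back."""

    def __init__(self, base: nn.Module):
        self.base_layer = base
        self.adapters = {"default": deepcopy(base)}  # per-adapter trainable copy
        self.merged = []

    def merge(self, name: str = "default"):
        self.base_layer, self.adapters[name] = self.adapters[name], self.base_layer
        self.merged.append(name)

    def unmerge(self):
        name = self.merged.pop()
        self.base_layer, self.adapters[name] = self.adapters[name], self.base_layer

layer = SwapMerge(nn.LayerNorm(4))
nn.init.constant_(layer.adapters["default"].weight, 2.0)  # stand-in for training
layer.merge()
merged_w = layer.base_layer.weight[0].item()    # tuned copy is now the base (2.0)
layer.unmerge()
restored_w = layer.base_layer.weight[0].item()  # original LayerNorm weight (1.0)
```

Because the swap is lossless, popping a single element in `unmerge` is sufficient, exactly as the comment in the real layer notes.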
# Copyright 2024-present the HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from copy import deepcopy
import torch
import torch.nn.functional as F
from torch import nn
from peft.utils.integrations import dequantize_module_weight, gather_params_ctx
from peft.utils.other import transpose
class DoraLinearLayer(nn.Module):
def __init__(self, fan_in_fan_out):
super().__init__()
self.fan_in_fan_out = fan_in_fan_out
def get_weight_norm(self, weight, lora_weight, scaling) -> torch.Tensor:
# calculate L2 norm of weight matrix, column-wise
weight = transpose(weight, self.fan_in_fan_out)
weight = weight + scaling * lora_weight
weight_norm = torch.linalg.norm(weight, dim=1).to(weight.dtype)
return weight_norm
def update_layer(self, *, base_layer, lora_A, lora_B, scaling, place_on_cpu=False) -> None:
# temporarily convert fp16 to fp32, as fp16 can cause trouble on CPU with PyTorch < 2.2
dtype_is_fp16 = lora_A.dtype == torch.float16
if dtype_is_fp16:
lora_A = lora_A.float()
lora_B = lora_B.float()
with gather_params_ctx(base_layer.parameters()):
if base_layer.__class__.__name__ == "Linear4bit":
# We have to create a copy of the base layer, otherwise, FSDP will throw an error. 8bit does not work
# yet because Int8Params cannot be correctly deep-copied (attributes vanish)
base_layer = deepcopy(base_layer)
weight = dequantize_module_weight(base_layer)
if weight.data.ndim >= 3: # For handling LoRAs applied to Conv layers.
r = lora_A.shape[0]
lora_weight = torch.mm(lora_B.view([-1, r]), lora_A.view([r, -1]))
lora_weight = lora_weight.reshape(weight.shape)
else:
lora_weight = lora_B @ lora_A
if dtype_is_fp16:
lora_weight = lora_weight.half()
weight_norm = self.get_weight_norm(weight.to(lora_A.device), lora_weight, scaling)
if place_on_cpu:
weight_norm = weight_norm.to("cpu")
self.weight = nn.Parameter(weight_norm, requires_grad=True)
def forward(self, x, *, lora_A, lora_B, scaling, base_layer, base_result=None):
"""
For DoRA, calculate the extra output from LoRA with DoRA applied. This should be added on top of the base layer
output.
"""
# Don't use `lora_weight = lora_B.weight @ lora_A.weight` because this causes errors with FSDP. Instead,
# calculate the same but using forward.
x_eye = torch.eye(lora_A.weight.shape[1], device=lora_A.weight.device, dtype=x.dtype)
lora_weight = lora_B(lora_A(x_eye)).T
magnitude = self.weight
weight = dequantize_module_weight(base_layer)
weight = weight.to(x.dtype)
weight_norm = self.get_weight_norm(weight, lora_weight.detach(), scaling)
# see section 4.3 of DoRA (https://huggingface.co/papers/2402.09353)
# "[...] we suggest treating ||V +โV ||_c in
# Eq. (5) as a constant, thereby detaching it from the gradient
# graph. This means that while ||V + โV ||_c dynamically
# reflects the updates of โV , it wonโt receive any gradient
# during backpropagation"
weight_norm = weight_norm.detach()
mag_norm_scale = (magnitude / weight_norm).view(1, -1)
lora_result = lora_B(lora_A(x))
bias = None
if base_result is not None:
bias = base_layer.bias
if bias is not None:
base_result = base_result - bias
else:
base_result = F.linear(x, transpose(weight, self.fan_in_fan_out))
result_dora = (mag_norm_scale - 1) * base_result + mag_norm_scale * lora_result * scaling
return result_dora
def __repr__(self) -> str:
rep = super().__repr__()
return "lora.dora." + rep
class DoraEmbeddingLayer(DoraLinearLayer):
def forward(self, x, *, lora_A, lora_B, scaling, base_layer, embed_fn):
"""
For DoRA, calculate the extra output from LoRA with DoRA applied. This should be added on top of the base layer
output.
"""
lora_weight = (lora_A @ lora_B).T
magnitude = self.weight
weight = base_layer.weight
weight_norm = self.get_weight_norm(weight, lora_weight.detach(), scaling)
# see section 4.3 of DoRA (https://huggingface.co/papers/2402.09353)
# "[...] we suggest treating ||V +โV ||_c in
# Eq. (5) as a constant, thereby detaching it from the gradient
# graph. This means that while ||V + โV ||_c dynamically
# reflects the updates of โV , it wonโt receive any gradient
# during backpropagation"
weight_norm = weight_norm.detach()
mag_norm_scale = magnitude / weight_norm
result_dora = mag_norm_scale * (embed_fn(x, lora_A) @ lora_B) * scaling
return mag_norm_scale, result_dora
def __repr__(self) -> str:
rep = super().__repr__()
return "lora.dora." + rep
class _DoraConvNdLayer(DoraLinearLayer):
def get_weight_norm(self, weight, lora_weight, scaling) -> torch.Tensor:
# calculate L2 norm of weight matrix, column-wise
weight = weight + scaling * lora_weight
# the following is needed to have compatibility with the 4/5D weight tensors of Conv2D/3D
dim = tuple(range(1, weight.dim()))
weight_norm = weight.norm(p=2, dim=dim, keepdim=True).transpose(1, 0)
return weight_norm
def forward(self, x, *, lora_A, lora_B, scaling, base_layer, base_result=None):
"""
For DoRA, calculate the extra output from LoRA with DoRA applied. This should be added on top of the base layer
output.
"""
weight = base_layer.weight
r = lora_A.weight.shape[0]
lora_weight = torch.mm(lora_B.weight.view([-1, r]), lora_A.weight.view([r, -1]))
lora_weight = lora_weight.reshape(weight.shape)
magnitude = self.weight
weight_norm = self.get_weight_norm(weight, lora_weight.detach(), scaling)
# see section 4.3 of DoRA (https://huggingface.co/papers/2402.09353)
# "[...] we suggest treating ||V +โV ||_c in
# Eq. (5) as a constant, thereby detaching it from the gradient
# graph. This means that while ||V + โV ||_c dynamically
# reflects the updates of โV , it wonโt receive any gradient
# during backpropagation"
weight_norm = weight_norm.detach()
mag_norm_scale = magnitude / weight_norm
if base_result is None:
base_result = self.conv_fn(
x,
weight,
bias=None,
stride=base_layer.stride,
padding=base_layer.padding,
dilation=base_layer.dilation,
groups=base_layer.groups,
)
else:
bias = base_layer.bias
if bias is not None:
# reshape bias to (1, -1, 1, ...)
bias_shape = (1, -1) + (1,) * (base_result.dim() - 2)
base_result = base_result - bias.view(*bias_shape)
result_dora = (mag_norm_scale - 1) * base_result + mag_norm_scale * lora_B(lora_A(x)) * scaling
return result_dora
def __repr__(self) -> str:
rep = super().__repr__()
return "lora.dora." + rep
class DoraConv1dLayer(_DoraConvNdLayer):
def __init__(self, fan_in_fan_out):
super().__init__(fan_in_fan_out)
self.conv_fn = F.conv1d
class DoraConv2dLayer(_DoraConvNdLayer):
def __init__(self, fan_in_fan_out):
super().__init__(fan_in_fan_out)
self.conv_fn = F.conv2d
class DoraConv3dLayer(_DoraConvNdLayer):
def __init__(self, fan_in_fan_out):
super().__init__(fan_in_fan_out)
self.conv_fn = F.conv3d
# --- source file: peft/src/peft/tuners/lora/dora.py ---
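`DoraLinearLayer.forward` returns a *delta* which, added to the base output, is equivalent to applying the magnitude-rescaled weight `m · (W + s·BA) / ||W + s·BA||_c` directly. A small numeric check of that identity with plain tensors (not the layer API; the `||W||_c` magnitude init is one common choice, assumed here for illustration):

```python
import torch

torch.manual_seed(0)
out_f, in_f, r, scaling = 3, 5, 2, 0.5
W = torch.randn(out_f, in_f)
A = torch.randn(r, in_f)    # stands in for lora_A.weight
B = torch.randn(out_f, r)   # stands in for lora_B.weight
magnitude = torch.linalg.norm(W, dim=1)  # assumed magnitude init: m = ||W||_c

lora_weight = B @ A
weight_norm = torch.linalg.norm(W + scaling * lora_weight, dim=1)  # ||W + s*BA||_c
mag_norm_scale = (magnitude / weight_norm).view(1, -1)

x = torch.randn(4, in_f)
base_result = x @ W.T
lora_result = x @ lora_weight.T
# the delta returned by the layer's forward:
result_dora = (mag_norm_scale - 1) * base_result + mag_norm_scale * lora_result * scaling

# base output + DoRA delta == linear layer with the decomposed weight
W_dora = (magnitude / weight_norm).view(-1, 1) * (W + scaling * lora_weight)
full = base_result + result_dora
```

Expanding the delta formula gives `full = mag_norm_scale * (base + scaling * lora)`, i.e. exactly `x @ W_dora.T`, which is why the layer can subtract the bias from `base_result` before applying it.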
# Copyright 2025-present the HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import warnings
from dataclasses import dataclass, field
from typing import Optional, Union
from peft.config import PeftConfig
from peft.utils import PeftType
@dataclass
class RandLoraConfig(PeftConfig):
"""
This is the configuration class to store the configuration of a [`RandLoraModel`].
Paper: https://huggingface.co/papers/2502.00987.
Args:
r (`int`, *optional*, defaults to `32`):
RandLora's random basis rank dimension. Unlike LoRA, the number of trainable parameters is inversely
proportional to this rank: reducing `r` increases the number of trainable parameters.
target_modules (`Union[list[str], str]`):
The names of the modules to apply RandLora to. Only linear layers are supported.
projection_prng_key (`int`):
RandLora PRNG init key. Used for initialising basis_A and basis_B for new models or when loading a
checkpoint that did not include these projections. Defaults to `0`.
save_projection (`bool`):
Whether to save the global basis_A / basis_B random basis in the state dict alongside per layer lambda /
gamma diagonal matrices. This will increase the size of the checkpoint, but guarantee that we can reload
the checkpoint on all system configurations. Defaults to `True`.
sparse (`bool`):
Whether to use sparse random bases as described in the RandLora paper. The bases are ternary sparse bases
(only containing -1, 0 and 1) where the attribution probability is 1/6 for -1 and 1 and 2/3 for 0. These
sparse matrices aim to enable matmul-free computation in the future, see
https://huggingface.co/papers/2406.02528v1. The current implementation is, however, a proof of concept
where the sparseness is not used to improve speed or memory usage. Using sparse matrices typically does not
reduce performance and can even help reduce overfitting. Defaults to `False`.
very_sparse (`bool`):
Whether to use highly sparse random bases as described in the RandLora paper. The very sparse bases are
ternary sparse bases (only containing -1, 0 and 1) where, given a matrix with smallest dimension d, the
attribution probability is 1/√d for -1 and 1 and 1 - 2/√d for 0. Using these sparse matrices can further
reduce overfitting over the `sparse` alternative but will most likely decrease performance as a result.
Use carefully. Defaults to `False`.
randlora_dropout (`float`):
The dropout probability for RandLora layers.
randlora_alpha (`float`):
The scaling coefficient for RandLora layers, this would typically be 20 times the rank. Because the
`randlora_alpha` coefficient is large by default, it can lead to numerical instabilities especially when
learning rates are high. If training is unstable, consider reducing the learning rate or the
`randlora_alpha` coefficient.
fan_in_fan_out (`bool`):
Set this to True if the layer to replace stores weight like (fan_in, fan_out). For example, gpt-2 uses
`Conv1D` which stores weights like (fan_in, fan_out) and hence this should be set to `True`.
bias (`str`):
Bias type. Can be 'none', 'all' or 'randlora_only'. If 'all' or 'randlora_only', the corresponding biases
will be updated during training. Be aware that this means that, even when disabling the adapters, the model
will not produce the same output as the base model would have without adaptation.
modules_to_save (`list[str]`):
list of modules apart from RandLora layers to be set as trainable and saved in the final checkpoint.
init_weights (`bool`):
Whether to initialize the weights of the RandLora layers with their default initialization. Don't change
this setting, except if you know exactly what you're doing.
layers_to_transform (`Union[list[int],int]`):
The layer indexes to transform, if this argument is specified, it will apply the RandLora transformations
on the layer indexes that are specified in this list. If a single integer is passed, it will apply the
RandLora transformations on the layer at this index.
layers_pattern (`str`):
The layer pattern name, used only if `layers_to_transform` is different from `None` and if the layer
pattern is not in the common layers pattern.
"""
r: int = field(default=32, metadata={"help": "RandLora random basis rank"})
target_modules: Optional[Union[list[str], str]] = field(
default=None,
metadata={
"help": (
"list of module names or regex expression of the module names to replace with RandLora."
"For example, ['q', 'v'] or '.*decoder.*(SelfAttention|EncDecAttention).*(q|v)$'. "
"Only linear layers are supported."
)
},
)
projection_prng_key: int = field(
default=0,
metadata={
"help": (
"RandLora PRNG init key. Used for initialising basis_A and basis_B for new models or when loading a "
"checkpoint that did not include these projections."
)
},
)
save_projection: bool = field(
default=True,
metadata={
"help": (
"Whether to save the basis_A / basis_B projections in the state dict alongside per layer lambda / "
"gamma weights. This will increase the size of the checkpoint, but guarantee that we can reload "
"the checkpoint on all system configurations."
)
},
)
sparse: bool = field(
default=False,
metadata={
"help": (
"Whether to use sparse random bases as described in the RandLora paper."
"The current implementation is a proof of concept where the sparseness"
"is not used to improve speed or memory usage."
)
},
)
very_sparse: bool = field(
default=False,
metadata={
"help": (
"Whether to use very sparse random bases."
"The current implementation is a proof of concept where the sparseness"
"is not used to improve speed or memory usage."
)
},
)
randlora_dropout: float = field(default=0.0, metadata={"help": "Dropout in the adapter layers"})
fan_in_fan_out: bool = field(
default=False,
metadata={"help": "Set this to True if the layer to replace stores weight like (fan_in, fan_out)"},
)
randlora_alpha: int = field(
default=640,
metadata={
"help": "Scaling coefficient in the adapter layers, typically 20 times the rank of the random bases."
},
)
bias: str = field(
default="none", metadata={"help": "Bias type for RandLora. Can be 'none', 'all' or 'randlora_only'"}
)
modules_to_save: Optional[list[str]] = field(
default=None,
metadata={
"help": (
"list of modules apart from RandLora layers to be set as trainable and saved in the final checkpoint. For"
" example, in Sequence Classification or Token Classification tasks, the final layer"
" `classifier/score` are randomly initialized and as such need to be trainable and saved."
)
},
)
init_weights: bool = field(
default=True,
metadata={
"help": (
"Whether to initialize the weights of the RandLora layers with their default initialization. Don't change "
"this setting, except if you know exactly what you're doing."
),
},
)
layers_to_transform: Optional[Union[list[int], int]] = field(
default=None,
metadata={
"help": (
"The layer indexes to transform. If this argument is specified, PEFT will transform only the layer"
" indexes that are specified inside this list. If a single integer is passed, PEFT will transform only"
" the layer at this index."
)
},
)
layers_pattern: Optional[str] = field(
default=None,
metadata={
"help": (
"The layer pattern name, used only if `layers_to_transform` is different from None and if the layer"
" pattern is not in the common layers pattern."
)
},
)
def __post_init__(self):
self.peft_type = PeftType.RANDLORA
self.target_modules = (
set(self.target_modules) if isinstance(self.target_modules, list) else self.target_modules
)
if not self.save_projection:
warnings.warn(
"Specified to not save basis_A and basis_B within the state dictionary, instead they will be restored "
"using the PRNG key store in `config.projection_prng_key`. Consider setting `config.save_projection` "
"to `True` to guarantee restoring the checkpoint correctly on all system configurations."
)
# --- source file: peft/src/peft/tuners/randlora/config.py ---
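The `sparse` / `very_sparse` options describe ternary {-1, 0, 1} random bases. A hedged sketch of how such a basis could be sampled from the probabilities stated in the docstring (`ternary_basis` is an illustrative helper, not a PEFT function, and the real bases are also seeded by `projection_prng_key`):

```python
import torch

def ternary_basis(rows: int, cols: int, very_sparse: bool = False) -> torch.Tensor:
    """Sample a ternary basis with the docstring's probabilities."""
    if very_sparse:
        d = min(rows, cols)
        p = 1.0 / d ** 0.5          # P(-1) = P(1) = 1/sqrt(d)
    else:
        p = 1.0 / 6.0               # P(-1) = P(1) = 1/6, so P(0) = 2/3
    u = torch.rand(rows, cols)
    basis = torch.zeros(rows, cols)
    basis[u < p] = -1.0
    basis[u > 1.0 - p] = 1.0
    return basis

torch.manual_seed(0)
b = ternary_basis(1000, 1000)
frac_zero = (b == 0).float().mean().item()          # close to 2/3
b_vs = ternary_basis(1000, 1000, very_sparse=True)
frac_zero_vs = (b_vs == 0).float().mean().item()    # close to 1 - 2/sqrt(1000)
```

As the config notes, this sparseness is currently a proof of concept: the sampled matrix is still dense in memory, so no speed or memory benefit is realized yet.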
# Copyright 2025-present the HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import annotations
import warnings
from typing import Optional
import torch
import torch.nn as nn
from tqdm import tqdm
from peft.config import PeftConfig
from peft.tuners.tuners_utils import BaseTuner, BaseTunerLayer, check_target_module_exists, onload_layer
from peft.utils import AuxiliaryTrainingWrapper, _get_input_embeddings_name, _get_submodules
from .layer import TrainableTokensLayer
class TrainableTokensModel(BaseTuner):
prefix: str = "trainable_tokens_"
def __getattr__(self, name: str):
"""Forward missing attributes to the wrapped module."""
try:
return super().__getattr__(name) # defer to nn.Module's logic
except AttributeError:
return getattr(self.model, name)
def _prepare_adapter_config(self, peft_config, model_config):
# target_modules can be none which prompts us to infer the embedding layer name ourselves.
if peft_config.target_modules is None:
peft_config.target_modules = _get_input_embeddings_name(self.model, "embed_tokens")
return peft_config
def inject_adapter(
self,
model: nn.Module,
adapter_name: str,
autocast_adapter_dtype: bool = True,
low_cpu_mem_usage: bool = False,
**kwargs,
) -> None:
super().inject_adapter(
model=model,
adapter_name=adapter_name,
autocast_adapter_dtype=autocast_adapter_dtype,
low_cpu_mem_usage=low_cpu_mem_usage,
**kwargs,
)
model_config = self.get_model_config(self)
# In case of weight-tying we need to adapt the tied weights as well and tie them to the embedding adapter.
#
# The TrainableTokensLayer supports being tied to another TrainableTokensLayer meaning that the layer will
# not do any changes on its own but solely rely on the weights from the tied adapter. We will search for the
# tied weights and put tied TrainableTokensLayer adapters on them, all tied to the adapter of the embedding
# matrix.
if (
model_config.get("tie_word_embeddings", False)
# some models may be misconfigured to have weight tying enabled but don't define tied weights keys
and self.model._tied_weights_keys is not None
and isinstance(self.model.get_input_embeddings(), TrainableTokensLayer)
):
module_keys = [".".join(n.split(".")[:-1]) for n in self.model._tied_weights_keys]
# disable removing of duplicates since we're essentially only dealing with duplicates (i.e. tied weights)
for name, module in self.model.named_modules(remove_duplicate=False):
matched_keys = [target_key for target_key in module_keys if name.endswith(target_key)]
if matched_keys:
parent, target, target_name = _get_submodules(model, name)
peft_config = self.peft_config[adapter_name].to_dict()
peft_config["tied_adapter"] = self.model.get_input_embeddings()
self._create_and_replace_dict(
peft_config,
adapter_name,
target,
target_name,
parent,
matched_keys[0],
)
def _get_tied_target_modules(self, *args, **kwargs):
# Normally this method would return the layers that target tied layers.
#
# We override this method since we explicitly support tied weights tied to the embedding layer.
# Therefore, we don't need the warning issued by returning the modules here.
return []
def _create_and_replace_dict(
self,
peft_config: dict,
adapter_name: str,
target: nn.Module,
target_name: str,
parent: nn.Module,
current_key: str,
) -> None:
"""
The same as `_create_and_replace` but takes a dictionary instead of a peft config so that we can add keys that
are not present in the config, such as `tied_adapter`.
"""
kwargs = peft_config
if isinstance(target, TrainableTokensLayer):
target.update_layer(adapter_name, **kwargs)
else:
new_module = self._create_new_module(peft_config, adapter_name, target, **kwargs)
self._replace_module(parent, target_name, new_module, target)
def _create_and_replace(
self,
peft_config: PeftConfig,
adapter_name: str,
target: nn.Module,
target_name: str,
parent: nn.Module,
current_key: str,
) -> None:
"""
A private method to create and replace the target module with the adapter module.
"""
kwargs = peft_config.to_dict()
self._create_and_replace_dict(kwargs, adapter_name, target, target_name, parent, current_key)
def _check_target_module_exists(self, peft_config: PeftConfig, key: str) -> bool:
return check_target_module_exists(peft_config, key)
@staticmethod
def _create_new_module(peft_config, adapter_name, target, **kwargs):
new_module = TrainableTokensLayer(target, adapter_name, **kwargs)
new_module.update_layer(
adapter_name,
init_weights=kwargs["init_weights"],
token_indices=kwargs["token_indices"],
tied_adapter=kwargs.get("tied_adapter", None),
)
return new_module
def _replace_module(self, parent, child_name, new_module, child):
setattr(parent, child_name, new_module)
# It's not necessary to set requires_grad here, as that is handled by
# _mark_only_adapters_as_trainable
# child layer wraps the original module, unpack it
if hasattr(child, "base_layer"):
child = child.base_layer
if not hasattr(new_module, "base_layer"):
new_module.weight = child.weight
if hasattr(child, "bias"):
new_module.bias = child.bias
if getattr(child, "state", None) is not None:
if hasattr(new_module, "base_layer"):
new_module.base_layer.state = child.state
else:
new_module.state = child.state
new_module.to(child.weight.device)
meta = torch.device("meta")
# dispatch to correct device
for name, module in new_module.named_modules():
if self.prefix in name:
if not any(p.device == meta for p in module.parameters()):
module.to(child.weight.device)
def _mark_only_adapters_as_trainable(self, model: nn.Module) -> None:
for n, p in model.named_parameters():
if self.prefix not in n:
p.requires_grad = False
def _set_adapter_layers(self, enabled: bool = True) -> None:
for module in self.model.modules():
if isinstance(module, (BaseTunerLayer, AuxiliaryTrainingWrapper)):
module.enable_adapters(enabled)
def enable_adapter_layers(self) -> None:
"""Enable all adapters.
Call this if you have previously disabled all adapters and want to re-enable them.
"""
self._set_adapter_layers(enabled=True)
def disable_adapter_layers(self) -> None:
"""Disable all adapters.
When disabling all adapters, the model output corresponds to the output of the base model.
"""
self._set_adapter_layers(enabled=False)
def set_adapter(self, adapter_name: str | list[str]) -> None:
"""Set the active adapter(s).
Additionally, this function will set the specified adapters to trainable (i.e., requires_grad=True). If this is
not desired, use the following code.
```py
>>> for name, param in model_peft.named_parameters():
... if ...: # some check on name (ex. if 'lora' in name)
... param.requires_grad = False
```
Args:
adapter_name (`str` or `list[str]`): Name of the adapter(s) to be activated.
"""
for module in self.model.modules():
if isinstance(module, TrainableTokensLayer):
if module.merged:
warnings.warn("Adapter cannot be set when the model is merged. Unmerging the model first.")
module.unmerge()
module.set_adapter(adapter_name)
self.active_adapter = adapter_name
def unload(self) -> torch.nn.Module:
"""
Gets back the base model by removing all the trainable tokens modules without merging.
"""
return self._unload_and_optionally_merge(merge=False)
def merge_and_unload(
self, progressbar: bool = False, safe_merge: bool = False, adapter_names: Optional[list[str]] = None
) -> torch.nn.Module:
r"""
This method merges the trained tokens into the targeted embedding layer(s) of the base model. This is needed if
someone wants to use the base model as a standalone model.
Args:
progressbar (`bool`):
whether to show a progressbar indicating the unload and merge process
safe_merge (`bool`):
whether to activate the safe merging check to check if there is any potential Nan in the adapter
weights
adapter_names (`List[str]`, *optional*):
The list of adapter names that should be merged. If None, all active adapters will be merged. Defaults
to `None`.
"""
return self._unload_and_optionally_merge(
progressbar=progressbar, safe_merge=safe_merge, adapter_names=adapter_names
)
def _unload_and_optionally_merge(
self,
merge=True,
progressbar: bool = False,
safe_merge: bool = False,
adapter_names: Optional[list[str]] = None,
):
key_list = [key for key, _ in self.model.named_modules() if self.prefix not in key]
desc = "Unloading " + ("and merging " if merge else "") + "model"
for key in tqdm(key_list, disable=not progressbar, desc=desc):
try:
parent, target, target_name = _get_submodules(self.model, key)
except AttributeError:
continue
with onload_layer(target):
if hasattr(target, "unload_and_optionally_merge_module"):
# if layers have special unloading method, like MultiheadAttention, use that
unloaded_module = target.unload_and_optionally_merge_module(
merge=merge, safe_merge=safe_merge, adapter_names=adapter_names
)
self._replace_module(parent, target_name, unloaded_module, target)
elif hasattr(target, "base_layer"):
if merge:
target.merge(safe_merge=safe_merge, adapter_names=adapter_names)
self._replace_module(parent, target_name, target.get_base_layer(), target)
return self.model
# --- source file: peft/src/peft/tuners/trainable_tokens/model.py ---
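The `TrainableTokensLayer` that this model injects (defined in `layer.py`, not shown here) trains only selected rows of the embedding matrix. The core idea can be sketched as an embedding wrapper that adds a trainable delta for the chosen token ids only (`TokenDelta` is a hypothetical class, not the actual implementation):

```python
import torch
import torch.nn as nn

class TokenDelta(nn.Module):
    """Only the rows listed in token_indices receive a trainable delta;
    every other embedding row stays frozen."""

    def __init__(self, embedding: nn.Embedding, token_indices):
        super().__init__()
        self.base = embedding
        self.base.weight.requires_grad_(False)
        self.register_buffer("token_indices", torch.tensor(token_indices))
        self.delta = nn.Parameter(
            torch.zeros(len(token_indices), embedding.embedding_dim)
        )

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        out = self.base(input_ids)
        for row, tok in enumerate(self.token_indices.tolist()):
            out = torch.where(
                (input_ids == tok).unsqueeze(-1), out + self.delta[row], out
            )
        return out

emb = nn.Embedding(10, 4)
layer = TokenDelta(emb, token_indices=[3, 7])
with torch.no_grad():
    layer.delta[0] += 1.0  # pretend token id 3 was trained
out = layer(torch.tensor([[1, 3]]))
```

Weight tying is then handled as in `inject_adapter` above: the layers wrapping tied weights reuse the embedding adapter's parameters instead of allocating their own delta.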
# Copyright 2023-present the HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from .integrations import map_cache_to_layer_device_map
from .loftq_utils import replace_lora_weights_loftq
from .other import (
CONFIG_NAME,
INCLUDE_LINEAR_LAYERS_SHORTHAND,
SAFETENSORS_WEIGHTS_NAME,
TRANSFORMERS_MODELS_TO_ADALORA_TARGET_MODULES_MAPPING,
TRANSFORMERS_MODELS_TO_C3A_TARGET_MODULES_MAPPING,
TRANSFORMERS_MODELS_TO_FOURIERFT_TARGET_MODULES_MAPPING,
TRANSFORMERS_MODELS_TO_IA3_FEEDFORWARD_MODULES_MAPPING,
TRANSFORMERS_MODELS_TO_IA3_TARGET_MODULES_MAPPING,
TRANSFORMERS_MODELS_TO_LNTUNING_TARGET_MODULES_MAPPING,
TRANSFORMERS_MODELS_TO_LOHA_TARGET_MODULES_MAPPING,
TRANSFORMERS_MODELS_TO_LOKR_TARGET_MODULES_MAPPING,
TRANSFORMERS_MODELS_TO_LORA_TARGET_MODULES_MAPPING,
TRANSFORMERS_MODELS_TO_PREFIX_TUNING_POSTPROCESS_MAPPING,
TRANSFORMERS_MODELS_TO_RANDLORA_TARGET_MODULES_MAPPING,
TRANSFORMERS_MODELS_TO_SHIRA_TARGET_MODULES_MAPPING,
TRANSFORMERS_MODELS_TO_VBLORA_TARGET_MODULES_MAPPING,
TRANSFORMERS_MODELS_TO_VERA_TARGET_MODULES_MAPPING,
WEIGHTS_NAME,
AuxiliaryTrainingWrapper,
ModulesToSaveWrapper,
TrainableTokensWrapper,
_freeze_adapter,
_get_batch_size,
_get_input_embeddings_name,
_get_submodules,
_is_valid_match,
_prepare_prompt_learning_config,
_set_adapter,
_set_trainable,
bloom_model_postprocess_past_key_value,
cast_mixed_precision_params,
get_auto_gptq_quant_linear,
get_gptqmodel_quant_linear,
get_quantization_config,
id_tensor_storage,
infer_device,
prepare_model_for_kbit_training,
set_additional_trainable_modules,
shift_tokens_right,
transpose,
)
from .peft_types import PeftType, TaskType, register_peft_method
from .save_and_load import get_peft_model_state_dict, load_peft_weights, set_peft_model_state_dict
from .warning import PeftWarning
__all__ = [
"CONFIG_NAME",
"INCLUDE_LINEAR_LAYERS_SHORTHAND",
"SAFETENSORS_WEIGHTS_NAME",
"TRANSFORMERS_MODELS_TO_ADALORA_TARGET_MODULES_MAPPING",
"TRANSFORMERS_MODELS_TO_C3A_TARGET_MODULES_MAPPING",
"TRANSFORMERS_MODELS_TO_FOURIERFT_TARGET_MODULES_MAPPING",
"TRANSFORMERS_MODELS_TO_IA3_FEEDFORWARD_MODULES_MAPPING",
"TRANSFORMERS_MODELS_TO_IA3_TARGET_MODULES_MAPPING",
"TRANSFORMERS_MODELS_TO_LNTUNING_TARGET_MODULES_MAPPING",
"TRANSFORMERS_MODELS_TO_LOHA_TARGET_MODULES_MAPPING",
"TRANSFORMERS_MODELS_TO_LOKR_TARGET_MODULES_MAPPING",
"TRANSFORMERS_MODELS_TO_LORA_TARGET_MODULES_MAPPING",
"TRANSFORMERS_MODELS_TO_PREFIX_TUNING_POSTPROCESS_MAPPING",
"TRANSFORMERS_MODELS_TO_RANDLORA_TARGET_MODULES_MAPPING",
"TRANSFORMERS_MODELS_TO_SHIRA_TARGET_MODULES_MAPPING",
"TRANSFORMERS_MODELS_TO_VBLORA_TARGET_MODULES_MAPPING",
"TRANSFORMERS_MODELS_TO_VERA_TARGET_MODULES_MAPPING",
"WEIGHTS_NAME",
"AuxiliaryTrainingWrapper",
"ModulesToSaveWrapper",
"PeftType",
"PeftWarning",
"TaskType",
"TrainableTokensWrapper",
"_freeze_adapter",
"_get_batch_size",
"_get_input_embeddings_name",
"_get_submodules",
"_is_valid_match",
"_prepare_prompt_learning_config",
"_set_adapter",
"_set_trainable",
"bloom_model_postprocess_past_key_value",
"cast_mixed_precision_params",
"get_auto_gptq_quant_linear",
"get_gptqmodel_quant_linear",
"get_peft_model_state_dict",
"get_quantization_config",
"id_tensor_storage",
"infer_device",
"load_peft_weights",
"map_cache_to_layer_device_map",
"prepare_model_for_kbit_training",
"register_peft_method",
"replace_lora_weights_loftq",
"set_additional_trainable_modules",
"set_peft_model_state_dict",
"shift_tokens_right",
"transpose",
]
# --- file: peft/src/peft/utils/__init__.py (repo: peft) ---
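As context for the re-export module above: the `__all__` list is what makes `from peft.utils import *` expose exactly the listed names and nothing else. A minimal stdlib-only sketch of that mechanism (the module name `fake_utils` and its attributes are made up for illustration):

```python
import sys
import types

# Build a throwaway module that mimics the pattern above: it defines both
# public and private names, and __all__ whitelists only the public one.
utils = types.ModuleType("fake_utils")
utils.CONFIG_NAME = "adapter_config.json"
utils._private_helper = object()
utils.__all__ = ["CONFIG_NAME"]
sys.modules["fake_utils"] = utils

# Star-import honors __all__: whitelisted names come through, others do not.
ns = {}
exec("from fake_utils import *", ns)
print("CONFIG_NAME" in ns, "_private_helper" in ns)  # True False
```

This is why the module keeps its imports and `__all__` in sync: a name imported but not listed would still be reachable as `peft.utils.name`, but would not survive a star-import.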
import os
import tempfile
import pytest
import torch
from torch.testing import assert_close
from transformers import AutoModelForCausalLM
from peft import get_peft_model
from peft.peft_model import PeftModel
from peft.tuners.adaption_prompt import AdaptionPromptConfig
from peft.utils import infer_device
from peft.utils.other import prepare_model_for_kbit_training
from peft.utils.save_and_load import get_peft_model_state_dict
MODELS_TO_TEST = [
"hf-internal-testing/tiny-random-gpt2",
"trl-internal-testing/tiny-random-LlamaForCausalLM",
"hf-internal-testing/tiny-random-MistralForCausalLM",
]
class TestAdaptionPrompt:
"""
Tests for the AdaptionPrompt model.
Some of these tests were adapted from `test_peft_model.py` (which has been refactored since), but since we haven't
checked in the test checkpoints for Llama into `hf-internal-testing`, we separate them for now.
"""
transformers_class = AutoModelForCausalLM
torch_device = infer_device()
@pytest.mark.parametrize("model_id", MODELS_TO_TEST)
def test_attributes(self, model_id):
model = self.transformers_class.from_pretrained(model_id)
config = AdaptionPromptConfig(adapter_layers=1, adapter_len=4)
model = get_peft_model(model, config)
assert hasattr(model, "save_pretrained")
assert hasattr(model, "from_pretrained")
assert hasattr(model, "push_to_hub")
@pytest.mark.parametrize("model_id", MODELS_TO_TEST)
def test_prepare_for_training(self, model_id):
model = self.transformers_class.from_pretrained(model_id)
config = AdaptionPromptConfig(adapter_layers=1, adapter_len=4, task_type="CAUSAL_LM")
model = get_peft_model(model, config)
model = model.to(self.torch_device)
dummy_input = torch.LongTensor([[1, 1, 1]]).to(self.torch_device)
dummy_output = model.get_input_embeddings()(dummy_input)
assert not dummy_output.requires_grad
@pytest.mark.parametrize("model_id", MODELS_TO_TEST)
def test_prepare_for_int8_training(self, model_id):
model = self.transformers_class.from_pretrained(model_id)
model = prepare_model_for_kbit_training(model)
model = model.to(self.torch_device)
for param in model.parameters():
assert not param.requires_grad
config = AdaptionPromptConfig(adapter_layers=1, adapter_len=4, task_type="CAUSAL_LM")
model = get_peft_model(model, config)
# For backward compatibility
if hasattr(model, "enable_input_require_grads"):
model.enable_input_require_grads()
else:
def make_inputs_require_grad(module, input, output):
output.requires_grad_(True)
model.get_input_embeddings().register_forward_hook(make_inputs_require_grad)
dummy_input = torch.LongTensor([[1, 1, 1]]).to(self.torch_device)
dummy_output = model.get_input_embeddings()(dummy_input)
assert dummy_output.requires_grad
@pytest.mark.parametrize("model_id", MODELS_TO_TEST)
def test_save_pretrained_regression(self, model_id):
seed = 420
torch.manual_seed(seed)
model = self.transformers_class.from_pretrained(model_id)
config = AdaptionPromptConfig(adapter_layers=2, adapter_len=4, task_type="CAUSAL_LM")
model = get_peft_model(model, config)
model = model.to(self.torch_device)
with tempfile.TemporaryDirectory() as tmp_dirname:
model.save_pretrained(tmp_dirname, safe_serialization=False)
torch.manual_seed(seed)
model_from_pretrained = self.transformers_class.from_pretrained(model_id)
model_from_pretrained = PeftModel.from_pretrained(model_from_pretrained, tmp_dirname)
# check if the state dicts are equal
state_dict = get_peft_model_state_dict(model)
state_dict_from_pretrained = get_peft_model_state_dict(model_from_pretrained)
# check if same keys
assert state_dict.keys() == state_dict_from_pretrained.keys()
# Check that the number of saved parameters is 4 -- 2 layers of (tokens and gate).
assert len(state_dict) == 4
# check if tensors equal
for key in state_dict.keys():
assert torch.allclose(
state_dict[key].to(self.torch_device), state_dict_from_pretrained[key].to(self.torch_device)
)
# check if `adapter_model.bin` is present
assert os.path.exists(os.path.join(tmp_dirname, "adapter_model.bin"))
# check if `adapter_config.json` is present
assert os.path.exists(os.path.join(tmp_dirname, "adapter_config.json"))
# check if `model.safetensors` is not present
assert not os.path.exists(os.path.join(tmp_dirname, "model.safetensors"))
# check if `config.json` is not present
assert not os.path.exists(os.path.join(tmp_dirname, "config.json"))
@pytest.mark.parametrize("model_id", MODELS_TO_TEST)
def test_save_pretrained(self, model_id):
seed = 420
torch.manual_seed(seed)
model = self.transformers_class.from_pretrained(model_id)
config = AdaptionPromptConfig(adapter_layers=2, adapter_len=4, task_type="CAUSAL_LM")
model = get_peft_model(model, config)
model = model.to(self.torch_device)
with tempfile.TemporaryDirectory() as tmp_dirname:
model.save_pretrained(tmp_dirname)
torch.manual_seed(seed)
model_from_pretrained = self.transformers_class.from_pretrained(model_id)
model_from_pretrained = PeftModel.from_pretrained(model_from_pretrained, tmp_dirname)
# check if the state dicts are equal
state_dict = get_peft_model_state_dict(model)
state_dict_from_pretrained = get_peft_model_state_dict(model_from_pretrained)
# check if same keys
assert state_dict.keys() == state_dict_from_pretrained.keys()
# Check that the number of saved parameters is 4 -- 2 layers of (tokens and gate).
assert len(state_dict) == 4
# check if tensors equal
for key in state_dict.keys():
assert torch.allclose(
state_dict[key].to(self.torch_device), state_dict_from_pretrained[key].to(self.torch_device)
)
# check if `adapter_model.safetensors` is present
assert os.path.exists(os.path.join(tmp_dirname, "adapter_model.safetensors"))
# check if `adapter_config.json` is present
assert os.path.exists(os.path.join(tmp_dirname, "adapter_config.json"))
# check if `model.safetensors` is not present
assert not os.path.exists(os.path.join(tmp_dirname, "model.safetensors"))
# check if `config.json` is not present
assert not os.path.exists(os.path.join(tmp_dirname, "config.json"))
@pytest.mark.parametrize("model_id", MODELS_TO_TEST)
def test_save_pretrained_selected_adapters(self, model_id):
seed = 420
torch.manual_seed(seed)
model = self.transformers_class.from_pretrained(model_id)
config = AdaptionPromptConfig(adapter_layers=2, adapter_len=4, task_type="CAUSAL_LM")
model = get_peft_model(model, config)
model = model.to(self.torch_device)
new_adapter_config = AdaptionPromptConfig(adapter_layers=2, adapter_len=4, task_type="CAUSAL_LM")
model.add_adapter("new_adapter", new_adapter_config)
with tempfile.TemporaryDirectory() as tmp_dirname:
model.save_pretrained(tmp_dirname)
torch.manual_seed(seed)
model_from_pretrained = self.transformers_class.from_pretrained(model_id)
model_from_pretrained = PeftModel.from_pretrained(model_from_pretrained, tmp_dirname)
model_from_pretrained.load_adapter(tmp_dirname, "new_adapter")
# check if the state dicts are equal
state_dict = get_peft_model_state_dict(model)
state_dict_from_pretrained = get_peft_model_state_dict(model_from_pretrained)
# check if same keys
assert state_dict.keys() == state_dict_from_pretrained.keys()
# Check that the number of saved parameters is 4 -- 2 layers of (tokens and gate).
assert len(state_dict) == 4
# check if tensors equal
for key in state_dict.keys():
assert torch.allclose(
state_dict[key].to(self.torch_device), state_dict_from_pretrained[key].to(self.torch_device)
)
# check if `adapter_model.safetensors` is present
assert os.path.exists(os.path.join(tmp_dirname, "adapter_model.safetensors"))
# check if `adapter_config.json` is present
assert os.path.exists(os.path.join(tmp_dirname, "adapter_config.json"))
# check if `model.safetensors` is not present
assert not os.path.exists(os.path.join(tmp_dirname, "model.safetensors"))
# check if `config.json` is not present
assert not os.path.exists(os.path.join(tmp_dirname, "config.json"))
@pytest.mark.parametrize("model_id", MODELS_TO_TEST)
def test_generate(self, model_id):
model = self.transformers_class.from_pretrained(model_id)
config = AdaptionPromptConfig(adapter_layers=2, adapter_len=4, task_type="CAUSAL_LM")
model = get_peft_model(model, config)
model = model.to(self.torch_device)
input_ids = torch.LongTensor([[1, 1, 1], [2, 1, 2]]).to(self.torch_device)
attention_mask = torch.LongTensor([[1, 1, 1], [1, 0, 1]]).to(self.torch_device)
# check if `generate` works
_ = model.generate(input_ids=input_ids, attention_mask=attention_mask)
# check if `generate` works if positional arguments are passed
_ = model.generate(input_ids, attention_mask=attention_mask)
@pytest.mark.parametrize("model_id", MODELS_TO_TEST)
def test_sequence_adapter_ops(self, model_id):
"""Test sequence of adapter operations."""
# Test input data.
input_ids = torch.LongTensor([[1, 1, 1], [2, 1, 2]]).to(self.torch_device)
target_ids = torch.LongTensor([[0, 0, 0], [0, 0, 0]]).to(self.torch_device)
attention_mask = torch.LongTensor([[1, 1, 1], [1, 0, 1]]).to(self.torch_device)
# Create original llama model.
original = self.transformers_class.from_pretrained(model_id)
original = original.to(self.torch_device)
original_before = original(input_ids=input_ids, attention_mask=attention_mask)
# Get AdaptionPrompt model.
adapted = get_peft_model(
original, AdaptionPromptConfig(adapter_layers=2, adapter_len=4, task_type="CAUSAL_LM")
)
adapted = adapted.to(self.torch_device)
default_before = adapted(input_ids=input_ids, attention_mask=attention_mask, labels=target_ids)
# Test zero-init: The logits should be exactly the same.
assert_close(original_before.logits, default_before.logits, rtol=0, atol=0)
# Single fine-tuning step on "default" adapter.
optimizer = torch.optim.SGD(adapted.parameters(), lr=1)
optimizer.zero_grad()
default_before.loss.backward()
optimizer.step()
# Test that the output changed.
default_after = adapted(input_ids=input_ids, attention_mask=attention_mask, labels=target_ids)
assert not torch.allclose(default_before.logits, default_after.logits)
with adapted.disable_adapter():
# Test that the output is the same as the original output.
default_disabled = adapted(input_ids=input_ids, attention_mask=attention_mask, labels=target_ids)
assert_close(original_before.logits, default_disabled.logits, rtol=0, atol=0)
# Add new adapter 1.
adapted.add_adapter("adapter 1", AdaptionPromptConfig(adapter_layers=2, adapter_len=8, task_type="CAUSAL_LM"))
# Test zero-init
adapter_1_before = adapted(input_ids=input_ids, attention_mask=attention_mask, labels=target_ids)
assert_close(original_before.logits, adapter_1_before.logits, rtol=0, atol=0)
# Single fine-tuning step on adapter 1.
optimizer = torch.optim.SGD(adapted.parameters(), lr=1)
optimizer.zero_grad()
adapter_1_before.loss.backward()
optimizer.step()
# Test that adapter 1 output changed.
adapter_1_after = adapted(input_ids=input_ids, attention_mask=attention_mask, labels=target_ids)
assert not torch.allclose(adapter_1_before.logits, adapter_1_after.logits)
assert not torch.allclose(original_before.logits, adapter_1_after.logits)
assert not torch.allclose(default_after.logits, adapter_1_after.logits)
with adapted.disable_adapter():
# Test that the output is the same as the original output.
adapter_1_disabled = adapted(input_ids=input_ids, attention_mask=attention_mask, labels=target_ids)
assert_close(original_before.logits, adapter_1_disabled.logits, rtol=0, atol=0)
# Set adapter back to default.
adapted.set_adapter("default")
# Test that the output is the same as the default output after training.
default_after_set = adapted(input_ids=input_ids, attention_mask=attention_mask, labels=target_ids)
assert_close(default_after.logits, default_after_set.logits, rtol=0, atol=0)
assert not torch.allclose(original_before.logits, default_after_set.logits)
assert not torch.allclose(adapter_1_after.logits, default_after_set.logits)
@pytest.mark.parametrize("model_id", MODELS_TO_TEST)
def test_add_and_set_while_disabled(self, model_id):
"""Test that adding and setting adapters while disabled works as intended."""
# Test input data.
input_ids = torch.LongTensor([[1, 1, 1], [2, 1, 2]]).to(self.torch_device)
target_ids = torch.LongTensor([[0, 0, 0], [0, 0, 0]]).to(self.torch_device)
attention_mask = torch.LongTensor([[1, 1, 1], [1, 0, 1]]).to(self.torch_device)
# Create original llama model.
original = self.transformers_class.from_pretrained(model_id)
original = original.to(self.torch_device)
original_before = original(input_ids=input_ids, attention_mask=attention_mask)
# Get AdaptionPrompt model.
adapted = get_peft_model(
original, AdaptionPromptConfig(adapter_layers=2, adapter_len=4, task_type="CAUSAL_LM")
)
adapted = adapted.to(self.torch_device)
with adapted.disable_adapter():
adapted.add_adapter(
"adapter 1", AdaptionPromptConfig(adapter_layers=2, adapter_len=8, task_type="CAUSAL_LM")
)
# Test that the output is the same as the original output.
adapter_1_before = adapted(input_ids=input_ids, attention_mask=attention_mask, labels=target_ids)
assert_close(original_before.logits, adapter_1_before.logits, rtol=0, atol=0)
# Single fine-tuning step on adapter 1.
optimizer = torch.optim.SGD(adapted.parameters(), lr=1)
optimizer.zero_grad()
adapter_1_before.loss.backward()
optimizer.step()
# Test that adapter 1 output changed.
adapter_1_after = adapted(input_ids=input_ids, attention_mask=attention_mask, labels=target_ids)
assert not torch.allclose(original_before.logits, adapter_1_after.logits)
adapted.set_adapter("default")
with adapted.disable_adapter():
adapted.set_adapter("adapter 1")
# Test that adapter 1 is active again.
adapter_1_after_set = adapted(input_ids=input_ids, attention_mask=attention_mask, labels=target_ids)
assert_close(adapter_1_after.logits, adapter_1_after_set.logits, rtol=0, atol=0)
@pytest.mark.parametrize("model_id", MODELS_TO_TEST)
def test_use_cache(self, model_id):
"""Test that AdaptionPrompt works when Llama config use_cache=True."""
torch.manual_seed(0)
input_ids = torch.LongTensor([[1, 1, 1], [2, 1, 2]]).to(self.torch_device)
original = self.transformers_class.from_pretrained(model_id, use_cache=False)
adapted = get_peft_model(
original, AdaptionPromptConfig(adapter_layers=2, adapter_len=4, task_type="CAUSAL_LM")
)
adapted = adapted.to(self.torch_device)
expected = adapted.generate(input_ids=input_ids, max_length=8)
# Set use_cache = True and generate output again.
adapted.base_model.config.use_cache = True
actual = adapted.generate(input_ids=input_ids, max_length=8)
assert_close(expected, actual, rtol=0, atol=0)
@pytest.mark.parametrize("model_id", MODELS_TO_TEST)
def test_bf16_inference(self, model_id):
if self.torch_device == "mps":
return pytest.skip("Skipping bf16 test on MPS")
"""Test that AdaptionPrompt works when Llama using a half-precision model."""
input_ids = torch.LongTensor([[1, 1, 1], [2, 1, 2]]).to(self.torch_device)
original = self.transformers_class.from_pretrained(model_id, torch_dtype=torch.bfloat16)
adapted = get_peft_model(
original, AdaptionPromptConfig(adapter_layers=2, adapter_len=4, task_type="CAUSAL_LM")
)
adapted = adapted.to(self.torch_device)
adapted.generate(input_ids=input_ids) # does not raise
@pytest.mark.xfail(reason="currently this fails because scores are zeroed out", raises=AssertionError)
@pytest.mark.parametrize("model_id", MODELS_TO_TEST)
def test_disable_adapter(self, model_id):
model = self.transformers_class.from_pretrained(model_id).to(self.torch_device)
dummy_input = torch.LongTensor([[1, 1, 1]]).to(self.torch_device)
output_before = model(dummy_input).logits
config = AdaptionPromptConfig(adapter_layers=1, adapter_len=4, task_type="CAUSAL_LM")
model = get_peft_model(model, config).to(self.torch_device)
output_peft = model(dummy_input).logits
# TODO currently this fails because scores are zeroed out:
# https://github.com/huggingface/peft/blob/062d95a09eb5d1de35c0e5e23d4387daba99e2db/src/peft/tuners/adaption_prompt.py#L303
# This is fine for users but makes it difficult to test if anything happens. In the future, we will have a clean
# way to control initialization. Until then, this test is expected to fail.
assert not torch.allclose(output_before, output_peft)
with model.disable_adapter():
output_peft_disabled = model(dummy_input).logits
assert torch.allclose(output_before, output_peft_disabled)
# --- file: peft/tests/test_adaption_prompt.py (repo: peft) ---
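The save/load tests above repeat one pattern: compare two state dicts key-by-key within a tolerance. A stdlib-only sketch of that pattern on plain float dicts (the helper name and keys are made up; the real tests use `torch.allclose` on tensors):

```python
import math

def state_dicts_close(sd_a, sd_b, atol=1e-8):
    """True when both dicts have identical keys and values within atol."""
    if sd_a.keys() != sd_b.keys():
        return False
    return all(math.isclose(sd_a[k], sd_b[k], abs_tol=atol) for k in sd_a)

saved = {"adaption_prompt": 0.25, "adaption_gate": 0.0}
loaded = {"adaption_prompt": 0.25, "adaption_gate": 1e-9}  # tiny round-trip noise
print(state_dicts_close(saved, loaded))                     # True
print(state_dicts_close(saved, {"adaption_prompt": 0.25}))  # False: missing key
```

Checking keys first is what makes the tests fail fast when an adapter saves more (or fewer) parameters than expected, before any tensor comparison runs.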
import copy
import itertools
import platform
import re
import warnings
from collections import defaultdict
from contextlib import contextmanager
from copy import deepcopy
from unittest.mock import patch
import pytest
import torch
from datasets import Dataset
from huggingface_hub import snapshot_download
from huggingface_hub.errors import HfHubHTTPError, LocalEntryNotFoundError
from huggingface_hub.utils import reset_sessions
from safetensors.torch import load_file
from scipy import stats
from torch import nn
from torch.utils.data import DataLoader
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import (
AdaLoraConfig,
C3AConfig,
EvaConfig,
IA3Config,
LoftQConfig,
LoKrConfig,
LoraConfig,
PeftMixedModel,
PeftModel,
PeftModelForCausalLM,
PeftModelForFeatureExtraction,
PeftModelForQuestionAnswering,
PeftModelForSeq2SeqLM,
PeftModelForSequenceClassification,
PeftModelForTokenClassification,
PeftWarning,
PrefixTuningConfig,
PromptTuningConfig,
RoadConfig,
VBLoRAConfig,
VeraConfig,
get_eva_state_dict,
get_peft_model,
initialize_lora_eva_weights,
inject_adapter_in_model,
set_peft_model_state_dict,
)
from peft.mapping import PEFT_TYPE_TO_PREFIX_MAPPING
from peft.tuners.lora.config import CordaConfig
from peft.tuners.lora.corda import preprocess_corda
from peft.tuners.lora.layer import LoraLayer
from peft.utils import infer_device
from peft.utils.hotswap import hotswap_adapter, prepare_model_for_compiled_hotswap
from .testing_utils import load_dataset_english_quotes, require_deterministic_for_xpu
class TestLoraInitialization:
"""Test class to check the initialization of LoRA adapters."""
torch_device = infer_device()
def get_uniform(self, amin, amax, size=(10000,)):
unif = torch.distributions.uniform.Uniform(amin, amax)
samples = unif.sample(size)
return samples
def get_normal(self, mean, std, size=(10000,)):
normal = torch.distributions.normal.Normal(mean, std)
samples = normal.sample(size)
return samples
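These helpers draw reference samples that the KS tests below compare against. A stdlib analogue of the normal-sampling helper, using `random` instead of torch (the function name and seed are illustrative):

```python
import random
import statistics

def get_normal_samples(mean, std, n=10000):
    # Reference draw from N(mean, std), mirroring the torch helper above.
    rng = random.Random(0)  # fixed seed for reproducibility
    return [rng.gauss(mean, std) for _ in range(n)]

samples = get_normal_samples(0.0, 1.0)
# With 10k samples the empirical mean/std land close to the parameters,
# which is what makes the two-sample statistical tests below meaningful.
```

The tests then ask whether the observed weights and such a reference draw plausibly come from the same distribution (high p-value) or clearly do not (low p-value).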
def get_model(self, bias=True):
class MyModule(nn.Module):
def __init__(self):
super().__init__()
# use large layers so that empirical statistics are close to their expected values
self.linear = nn.Linear(1000, 1000, bias=bias)
self.embed = nn.Embedding(1000, 1000)
self.conv2d = nn.Conv2d(100, 100, 3, bias=bias)
def forward(self, x):
x_int = (100 * x).int()
x_4d = x.flatten().reshape(1, 100, 10, 10)
return self.linear(x), self.embed(x_int), self.conv2d(x_4d)
return MyModule().eval().to(self.torch_device)
@pytest.fixture
def data(self):
return torch.rand(10, 1000).to(self.torch_device)
def test_lora_linear_init_default(self):
# default is True
torch.manual_seed(0)
model = self.get_model()
config = LoraConfig(target_modules=["linear"])
model = get_peft_model(model, config)
weight_A = model.linear.lora_A["default"].weight
weight_B = model.linear.lora_B["default"].weight
# use statistical test to check if weight A is from a uniform distribution
unif = self.get_uniform(weight_A.min().item(), weight_A.max().item())
_, p_value = stats.kstest(weight_A.detach().flatten().cpu().numpy(), unif.flatten().cpu().numpy())
assert p_value > 0.5
# check that weight A is *not* from a normal distribution
normal = self.get_normal(weight_A.mean().item(), weight_A.std().item())
_, p_value = stats.kstest(weight_A.detach().flatten().cpu().numpy(), normal.flatten().cpu().numpy())
assert p_value < 0.05
# check that weight B is zero
assert (weight_B == 0.0).all()
def test_lora_linear_init_gaussian(self):
# use gaussian init
torch.manual_seed(0)
model = self.get_model()
config = LoraConfig(target_modules=["linear"], init_lora_weights="gaussian")
model = get_peft_model(model, config)
weight_A = model.linear.lora_A["default"].weight
weight_B = model.linear.lora_B["default"].weight
# use statistical test to check if weight A is from a normal distribution
normal = self.get_normal(0.0, 1 / config.r)
_, p_value = stats.kstest(weight_A.detach().flatten().cpu().numpy(), normal.flatten().cpu().numpy())
assert p_value > 0.5
# check that weight A is *not* from a uniform distribution
unif = self.get_uniform(weight_A.min().item(), weight_A.max().item())
_, p_value = stats.kstest(weight_A.detach().flatten().cpu().numpy(), unif.flatten().cpu().numpy())
assert p_value < 0.05
# check that weight B is zero
assert (weight_B == 0.0).all()
def test_lora_linear_false(self):
torch.manual_seed(0)
model = self.get_model()
config = LoraConfig(target_modules=["linear"], init_lora_weights=False)
model = get_peft_model(model, config)
weight_B = model.linear.lora_B["default"].weight
# with init_lora_weights=False, weight B should *not* be zero. We don't care so much about the actual values
# as long as they are not zero, in order to avoid identity transformation.
assert not torch.allclose(weight_B, torch.zeros_like(weight_B))
def test_lora_embedding_default(self):
# embedding is initialized as a normal distribution, not kaiming uniform
torch.manual_seed(0)
model = self.get_model()
config = LoraConfig(target_modules=["embed"])
model = get_peft_model(model, config)
weight_A = model.embed.lora_embedding_A["default"]
weight_B = model.embed.lora_embedding_B["default"]
# use statistical test to check if weight B is from a normal distribution
normal = self.get_normal(0.0, 1.0)
_, p_value = stats.kstest(weight_B.detach().flatten().cpu().numpy(), normal.flatten().cpu().numpy())
assert p_value > 0.5
# check that weight B is *not* from a uniform distribution
unif = self.get_uniform(weight_B.min().item(), weight_B.max().item())
_, p_value = stats.kstest(weight_B.detach().flatten().cpu().numpy(), unif.flatten().cpu().numpy())
assert p_value < 0.05
# check that weight A is zero
assert (weight_A == 0.0).all()
def test_lora_embedding_gaussian(self):
# embedding does not change with init_lora_weights="gaussian" vs True
torch.manual_seed(0)
model = self.get_model()
config = LoraConfig(target_modules=["embed"], init_lora_weights="gaussian")
model = get_peft_model(model, config)
weight_A = model.embed.lora_embedding_A["default"]
weight_B = model.embed.lora_embedding_B["default"]
# use statistical test to check if weight B is from a normal distribution
normal = self.get_normal(0.0, 1.0)
_, p_value = stats.kstest(weight_B.detach().flatten().cpu().numpy(), normal.flatten().cpu().numpy())
assert p_value > 0.5
# check that weight B is *not* from a uniform distribution
unif = self.get_uniform(weight_B.min().item(), weight_B.max().item())
_, p_value = stats.kstest(weight_B.detach().flatten().cpu().numpy(), unif.flatten().cpu().numpy())
assert p_value < 0.05
# check that weight A is zero
assert (weight_A == 0.0).all()
def test_lora_embedding_false(self):
torch.manual_seed(0)
model = self.get_model()
config = LoraConfig(target_modules=["embed"], init_lora_weights=False)
model = get_peft_model(model, config)
weight_A = model.embed.lora_embedding_A["default"]
# with init_lora_weights=False, weight A should *not* be zero. We don't care so much about the actual values
# as long as they are not zero, in order to avoid identity transformation.
assert not torch.allclose(weight_A, torch.zeros_like(weight_A))
def test_lora_conv2d_default(self):
# default is True
torch.manual_seed(0)
model = self.get_model()
config = LoraConfig(target_modules=["conv2d"])
model = get_peft_model(model, config)
weight_A = model.conv2d.lora_A["default"].weight
weight_B = model.conv2d.lora_B["default"].weight
# use statistical test to check if weight A is from a uniform distribution
unif = self.get_uniform(weight_A.min().item(), weight_A.max().item())
_, p_value = stats.kstest(weight_A.detach().flatten().cpu().numpy(), unif.flatten().cpu().numpy())
assert p_value > 0.5
# check that weight A is *not* from a normal distribution
normal = self.get_normal(weight_A.mean().item(), weight_A.std().item())
_, p_value = stats.kstest(weight_A.detach().flatten().cpu().numpy(), normal.flatten().cpu().numpy())
assert p_value < 0.05
# check that weight B is zero
assert (weight_B == 0.0).all()
def test_lora_conv2d_init_gaussian(self):
# use gaussian init
torch.manual_seed(0)
model = self.get_model()
config = LoraConfig(target_modules=["conv2d"], init_lora_weights="gaussian")
model = get_peft_model(model, config)
weight_A = model.conv2d.lora_A["default"].weight
weight_B = model.conv2d.lora_B["default"].weight
# use statistical test to check if weight A is from a normal distribution
normal = self.get_normal(0.0, 1 / config.r)
_, p_value = stats.kstest(weight_A.detach().flatten().cpu().numpy(), normal.flatten().cpu().numpy())
assert p_value > 0.5
# check that weight A is *not* from a uniform distribution
unif = self.get_uniform(weight_A.min().item(), weight_A.max().item())
_, p_value = stats.kstest(weight_A.detach().flatten().cpu().numpy(), unif.flatten().cpu().numpy())
assert p_value < 0.05
# check that weight B is zero
assert (weight_B == 0.0).all()
def test_lora_conv2d_false(self):
torch.manual_seed(0)
model = self.get_model()
config = LoraConfig(target_modules=["conv2d"], init_lora_weights=False)
model = get_peft_model(model, config)
weight_B = model.conv2d.lora_B["default"].weight
# with init_lora_weights=False, weight B should *not* be zero. We don't care so much about the actual values
# as long as they are not zero, in order to avoid identity transformation.
assert not torch.allclose(weight_B, torch.zeros_like(weight_B))
def test_lora_init_orthogonal(self):
torch.manual_seed(0)
model = self.get_model()
config = LoraConfig(target_modules=["linear"], init_lora_weights="orthogonal")
model = get_peft_model(model, config)
weight_A = model.linear.lora_A["default"].weight
weight_B = model.linear.lora_B["default"].weight
assert not torch.allclose(weight_A, torch.zeros_like(weight_A))
assert not torch.allclose(weight_B, torch.zeros_like(weight_B))
assert (weight_B @ weight_A).abs().max() < 1e-6
@pytest.mark.parametrize("dtype", [torch.float16, torch.bfloat16])
def test_lora_init_orthogonal_half_precision_dtype(self, dtype):
try:
torch.zeros(1, dtype=dtype)
except Exception:
pytest.skip(f"dtype {dtype} not supported on this system, skipping test")
torch.manual_seed(0)
model = self.get_model()
config = LoraConfig(target_modules=["linear"], init_lora_weights="orthogonal")
model = get_peft_model(model, config).to(dtype)
weight_A = model.linear.lora_A["default"].weight
weight_B = model.linear.lora_B["default"].weight
assert weight_A.dtype == dtype
assert weight_B.dtype == dtype
def test_lora_init_orthogonal_odd_rank_raises(self):
torch.manual_seed(0)
model = self.get_model()
config = LoraConfig(target_modules=["linear"], init_lora_weights="orthogonal", r=7)
msg = "Orthogonal initialization requires the LoRA rank to be even, got 7 instead."
with pytest.raises(ValueError, match=msg):
get_peft_model(model, config)
def test_lora_scaling_default(self):
# default is True
torch.manual_seed(0)
model = self.get_model()
# check scaling factor use_rslora=False
config = LoraConfig(target_modules=["linear", "embed", "conv2d"], lora_alpha=3, r=16, use_rslora=False)
model = get_peft_model(model, config)
expected_scaling = config.lora_alpha / config.r
assert model.linear.scaling["default"] == expected_scaling
assert model.embed.scaling["default"] == expected_scaling
assert model.conv2d.scaling["default"] == expected_scaling
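The expected scaling in the test above is `lora_alpha / r`; with `use_rslora=True`, PEFT instead uses `lora_alpha / sqrt(r)` (rank-stabilized LoRA). A small sketch of both formulas (the helper name is illustrative, not a PEFT API):

```python
import math

def lora_scaling(lora_alpha: float, r: int, use_rslora: bool = False) -> float:
    # Standard LoRA scales the BA update by alpha/r; rsLoRA scales by
    # alpha/sqrt(r), which keeps the update magnitude stable as rank grows.
    return lora_alpha / math.sqrt(r) if use_rslora else lora_alpha / r

print(lora_scaling(3, 16))        # 0.1875, matching the expectation above
print(lora_scaling(3, 16, True))  # 0.75
```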
# Regression test for the bugfix of issue 2194
def test_rank_alpha_pattern_override(self):
torch.manual_seed(0)
layer = self.get_model()
model = nn.Sequential(layer, layer)
config = LoraConfig(
target_modules=["linear"],
lora_alpha=1,
r=8,
use_rslora=False,
rank_pattern={"linear": 8},
alpha_pattern={"0.linear": 2},
)
model = get_peft_model(model, config)
scaling_with_rank_pattern = model.model[0].linear.scaling
layer = self.get_model()
model = nn.Sequential(layer, layer)
config = LoraConfig(
target_modules=["linear"], lora_alpha=1, r=8, use_rslora=False, alpha_pattern={"0.linear": 2}
)
model = get_peft_model(model, config)
scaling_without_rank_pattern = model.model[0].linear.scaling
assert scaling_with_rank_pattern == scaling_without_rank_pattern
def test_lora_pissa_linear_init_default(self, data):
model = self.get_model()
output = model(data)[0]
config = LoraConfig(init_lora_weights="pissa", target_modules=["linear"])
peft_model = get_peft_model(deepcopy(model), config)
assert torch.allclose(output, peft_model(data)[0], atol=1e-06)
config = LoraConfig(init_lora_weights="pissa_niter_16", target_modules=["linear"])
peft_model = get_peft_model(deepcopy(model), config)
assert torch.allclose(output, peft_model(data)[0], atol=1e-06)
def test_lora_olora_linear_init_default(self, data):
model = self.get_model()
output = model(data)[0]
# the capitalized spelling "OLoRA" should be accepted as well as "olora"
config = LoraConfig(init_lora_weights="OLoRA", target_modules=["linear"])
peft_model = get_peft_model(deepcopy(model), config)
assert torch.allclose(output, peft_model(data)[0], atol=1e-06)
def test_lora_pissa_conversion_same_output_after_loading(self, data, tmp_path):
model = self.get_model()
output_base = model(data)[0]
config = LoraConfig(init_lora_weights="pissa", target_modules=["linear"], r=8)
peft_model = get_peft_model(deepcopy(model), config)
# save the initial model
peft_model.peft_config["default"].init_lora_weights = True
peft_model.save_pretrained(tmp_path / "init-model")
peft_model.peft_config["default"].init_lora_weights = "pissa"
# modify the weights, or else the adapter performs an identity transformation
peft_model.base_model.linear.lora_B["default"].weight.data *= 2.0
output_pissa = peft_model(data)[0]
# sanity check
tol = 1e-06
assert not torch.allclose(output_base, output_pissa, atol=tol, rtol=tol)
# save the model normally
peft_model.save_pretrained(tmp_path / "pissa-model")
model_loaded = PeftModel.from_pretrained(deepcopy(model), tmp_path / "pissa-model")
output_loaded = model_loaded(data)[0]
assert torch.allclose(output_pissa, output_loaded, atol=tol, rtol=tol)
# sanity check: ranks should still be 8 as initially
assert model_loaded.peft_config["default"].r == 8
assert model_loaded.base_model.model.linear.lora_A["default"].weight.shape[0] == 8
# sanity check: the base model weights were indeed changed
assert not torch.allclose(
model.linear.weight, model_loaded.base_model.model.linear.base_layer.weight, atol=tol, rtol=tol
)
# save the model with conversion
peft_config_keys_before = list(peft_model.peft_config.keys())
peft_config_dict_before = peft_model.peft_config["default"].to_dict()
peft_model.save_pretrained(
tmp_path / "pissa-model-converted", path_initial_model_for_weight_conversion=tmp_path / "init-model"
)
peft_config_keys_after = list(peft_model.peft_config.keys())
peft_config_dict_after = peft_model.peft_config["default"].to_dict()
assert peft_config_keys_before == peft_config_keys_after
assert peft_config_dict_before == peft_config_dict_after
model_converted = PeftModel.from_pretrained(deepcopy(model), tmp_path / "pissa-model-converted")
output_converted = model_converted(data)[0]
assert torch.allclose(output_pissa, output_converted, atol=tol, rtol=tol)
# rank should be double what it was initially
assert model_converted.peft_config["default"].r == 16
assert model_converted.base_model.model.linear.lora_A["default"].weight.shape[0] == 16
# base model weights should be the same as the initial model
assert torch.allclose(
model.linear.weight, model_converted.base_model.model.linear.base_layer.weight, atol=tol, rtol=tol
)
def test_lora_pissa_conversion_same_output_after_loading_with_rank_pattern(self, data, tmp_path):
# same as above, but using rank_pattern
model = self.get_model()
output_base = model(data)[0]
# use rank_pattern here; note that since there is only a single linear layer, r is completely overridden
config = LoraConfig(init_lora_weights="pissa", target_modules=["linear"], r=8, rank_pattern={"linear": 32})
peft_model = get_peft_model(deepcopy(model), config)
# save the initial model
peft_model.peft_config["default"].init_lora_weights = True
peft_model.save_pretrained(tmp_path / "init-model")
peft_model.peft_config["default"].init_lora_weights = "pissa"
# modify the weights, or else the adapter performs an identity transformation
peft_model.base_model.linear.lora_B["default"].weight.data *= 2.0
output_pissa = peft_model(data)[0]
# sanity check
tol = 1e-06
assert not torch.allclose(output_base, output_pissa, atol=tol, rtol=tol)
# save the model normally
peft_model.save_pretrained(tmp_path / "pissa-model")
model_loaded = PeftModel.from_pretrained(deepcopy(model), tmp_path / "pissa-model")
output_loaded = model_loaded(data)[0]
assert torch.allclose(output_pissa, output_loaded, atol=tol, rtol=tol)
# sanity check: config r should still be 8, with rank_pattern overriding the layer rank to 32
assert model_loaded.peft_config["default"].r == 8
assert model_loaded.base_model.model.linear.lora_A["default"].weight.shape[0] == 32
# sanity check: the base model weights were indeed changed
assert not torch.allclose(
model.linear.weight, model_loaded.base_model.model.linear.base_layer.weight, atol=tol, rtol=tol
)
# save the model with conversion
peft_model.save_pretrained(
tmp_path / "pissa-model-converted", path_initial_model_for_weight_conversion=tmp_path / "init-model"
)
model_converted = PeftModel.from_pretrained(deepcopy(model), tmp_path / "pissa-model-converted")
output_converted = model_converted(data)[0]
assert torch.allclose(output_pissa, output_converted, atol=tol, rtol=tol)
# rank should be double what it was initially
assert model_converted.peft_config["default"].r == 16
assert model_converted.base_model.model.linear.lora_A["default"].weight.shape[0] == 64
# base model weights should be the same as the initial model
assert torch.allclose(
model.linear.weight, model_converted.base_model.model.linear.base_layer.weight, atol=tol, rtol=tol
)
def test_lora_pissa_conversion_same_output_after_loading_with_alpha_pattern(self, data, tmp_path):
# same as above, but using alpha_pattern
model = self.get_model()
output_base = model(data)[0]
# use alpha_pattern here; note that since there is only a single linear layer, lora_alpha is completely
# overridden
config = LoraConfig(init_lora_weights="pissa", target_modules=["linear"], alpha_pattern={"linear": 5})
peft_model = get_peft_model(deepcopy(model), config)
# save the initial model
peft_model.peft_config["default"].init_lora_weights = True
peft_model.save_pretrained(tmp_path / "init-model")
peft_model.peft_config["default"].init_lora_weights = "pissa"
# modify the weights, or else the adapter performs an identity transformation
peft_model.base_model.linear.lora_B["default"].weight.data *= 2.0
output_pissa = peft_model(data)[0]
# sanity check
tol = 1e-06
assert not torch.allclose(output_base, output_pissa, atol=tol, rtol=tol)
# save the model normally
peft_model.save_pretrained(tmp_path / "pissa-model")
model_loaded = PeftModel.from_pretrained(deepcopy(model), tmp_path / "pissa-model")
output_loaded = model_loaded(data)[0]
assert torch.allclose(output_pissa, output_loaded, atol=tol, rtol=tol)
# sanity check: ranks should still be 8 as initially
assert model_loaded.peft_config["default"].r == 8
assert model_loaded.base_model.model.linear.lora_A["default"].weight.shape[0] == 8
assert model_loaded.base_model.model.linear.scaling["default"] == 5 / 8
# sanity check: the base model weights were indeed changed
assert not torch.allclose(
model.linear.weight, model_loaded.base_model.model.linear.base_layer.weight, atol=tol, rtol=tol
)
# save the model with conversion
peft_model.save_pretrained(
tmp_path / "pissa-model-converted", path_initial_model_for_weight_conversion=tmp_path / "init-model"
)
model_converted = PeftModel.from_pretrained(deepcopy(model), tmp_path / "pissa-model-converted")
output_converted = model_converted(data)[0]
assert torch.allclose(output_pissa, output_converted, atol=tol, rtol=tol)
# rank should be double what it was initially
assert model_converted.peft_config["default"].r == 16
assert model_converted.base_model.model.linear.lora_A["default"].weight.shape[0] == 16
assert model_converted.base_model.model.linear.scaling["default"] == 10 / 16
# base model weights should be the same as the initial model
assert torch.allclose(
model.linear.weight, model_converted.base_model.model.linear.base_layer.weight, atol=tol, rtol=tol
)
def test_lora_pissa_conversion_same_output_after_loading_with_rslora(self, data, tmp_path):
model = self.get_model()
output_base = model(data)[0]
config = LoraConfig(init_lora_weights="pissa", target_modules=["linear"], r=8, use_rslora=True)
peft_model = get_peft_model(deepcopy(model), config)
# save the initial model
peft_model.peft_config["default"].init_lora_weights = True
peft_model.save_pretrained(tmp_path / "init-model")
peft_model.peft_config["default"].init_lora_weights = "pissa"
# modify the weights, or else the adapter performs an identity transformation
peft_model.base_model.linear.lora_B["default"].weight.data *= 2.0
output_pissa = peft_model(data)[0]
# sanity check
tol = 1e-06
assert not torch.allclose(output_base, output_pissa, atol=tol, rtol=tol)
# save the model normally
peft_model.save_pretrained(tmp_path / "pissa-model")
model_loaded = PeftModel.from_pretrained(deepcopy(model), tmp_path / "pissa-model")
output_loaded = model_loaded(data)[0]
assert torch.allclose(output_pissa, output_loaded, atol=tol, rtol=tol)
# sanity check: ranks should still be 8 as initially
assert model_loaded.peft_config["default"].r == 8
assert model_loaded.base_model.model.linear.lora_A["default"].weight.shape[0] == 8
assert model_loaded.base_model.model.linear.scaling["default"] == 8 / (8**0.5)
# sanity check: the base model weights were indeed changed
assert not torch.allclose(
model.linear.weight, model_loaded.base_model.model.linear.base_layer.weight, atol=tol, rtol=tol
)
# save the model with conversion
peft_model.save_pretrained(
tmp_path / "pissa-model-converted", path_initial_model_for_weight_conversion=tmp_path / "init-model"
)
model_converted = PeftModel.from_pretrained(deepcopy(model), tmp_path / "pissa-model-converted")
output_converted = model_converted(data)[0]
assert torch.allclose(output_pissa, output_converted, atol=tol, rtol=tol)
# rank should be double what it was initially
assert model_converted.peft_config["default"].r == 16
assert model_converted.base_model.model.linear.lora_A["default"].weight.shape[0] == 16
# same scale as before with a little bit of floating point imprecision
assert model_converted.base_model.model.linear.scaling["default"] == pytest.approx(8 / (8**0.5))
# base model weights should be the same as the initial model
assert torch.allclose(
model.linear.weight, model_converted.base_model.model.linear.base_layer.weight, atol=tol, rtol=tol
)
def test_pissa_rank_pattern_and_rslora_raises(self, tmp_path):
# it's not possible to determine the correct scale when using rslora with rank or alpha pattern, because the
# scale is not stored in the state_dict
model = self.get_model()
config = LoraConfig(
init_lora_weights="pissa", target_modules=["linear"], r=8, rank_pattern={"linear": 2}, use_rslora=True
)
peft_model = get_peft_model(model, config)
peft_model.save_pretrained(tmp_path / "init-model")
msg = re.escape("Passing `path_initial_model_for_weight_conversion` to `save_pretrained`")
with pytest.raises(ValueError, match=msg):
peft_model.save_pretrained(
tmp_path / "pissa-model", path_initial_model_for_weight_conversion=tmp_path / "init-model"
)
def test_pissa_alpha_pattern_and_rslora_raises(self, tmp_path):
# it's not possible to determine the correct scale when using rslora with rank or alpha pattern, because the
# scale is not stored in the state_dict
model = self.get_model()
config = LoraConfig(
init_lora_weights="pissa", target_modules=["linear"], r=8, alpha_pattern={"linear": 2}, use_rslora=True
)
peft_model = get_peft_model(model, config)
peft_model.save_pretrained(tmp_path / "init-model")
msg = re.escape("Passing `path_initial_model_for_weight_conversion` to `save_pretrained`")
with pytest.raises(ValueError, match=msg):
peft_model.save_pretrained(
tmp_path / "pissa-model", path_initial_model_for_weight_conversion=tmp_path / "init-model"
)
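A quick arithmetic sketch (my own illustration, not peft code) of why the rslora + rank/alpha pattern combination above must raise: the conversion doubles the adapter rank, and with rank-stabilized scaling the alpha would need an irrational sqrt(2) correction to keep the effective scale unchanged, which cannot be represented in the per-module integer rank_pattern/alpha_pattern entries of the saved config.

```python
import math

# with rslora, the effective scale is alpha / sqrt(r)
alpha, r = 8, 8
scale_before = alpha / math.sqrt(r)

# naively doubling both alpha and r (as works for plain scaling alpha / r)
# changes the rslora scale ...
assert not math.isclose(2 * alpha / math.sqrt(2 * r), scale_before)

# ... only an alpha factor of sqrt(2) would preserve it:
# (sqrt(2) * alpha) / sqrt(2 * r) == alpha / sqrt(r)
assert math.isclose((math.sqrt(2) * alpha) / math.sqrt(2 * r), scale_before)
```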
def test_olora_conversion_same_output_after_loading(self, data, tmp_path):
model = self.get_model()
output_base = model(data)[0]
config = LoraConfig(init_lora_weights="olora", target_modules=["linear"], r=8)
peft_model = get_peft_model(deepcopy(model), config)
# save the initial model
peft_model.save_pretrained(tmp_path / "init-model")
# modify the weights, or else the adapter performs an identity transformation
peft_model.base_model.linear.lora_B["default"].weight.data *= 2.0
output_olora = peft_model(data)[0]
# sanity check
tol = 1e-06
assert not torch.allclose(output_base, output_olora, atol=tol, rtol=tol)
# save the model normally
peft_model.save_pretrained(tmp_path / "olora-model")
model_loaded = PeftModel.from_pretrained(deepcopy(model), tmp_path / "olora-model")
output_loaded = model_loaded(data)[0]
assert torch.allclose(output_olora, output_loaded, atol=tol, rtol=tol)
# sanity check: ranks should still be 8 as initially
assert model_loaded.peft_config["default"].r == 8
assert model_loaded.base_model.model.linear.lora_A["default"].weight.shape[0] == 8
# sanity check: the base model weights were indeed changed
assert not torch.allclose(
model.linear.weight, model_loaded.base_model.model.linear.base_layer.weight, atol=tol, rtol=tol
)
# save the model with conversion
peft_config_keys_before = list(peft_model.peft_config.keys())
peft_config_dict_before = peft_model.peft_config["default"].to_dict()
peft_model.save_pretrained(
tmp_path / "olora-model-converted", path_initial_model_for_weight_conversion=tmp_path / "init-model"
)
peft_config_keys_after = list(peft_model.peft_config.keys())
peft_config_dict_after = peft_model.peft_config["default"].to_dict()
assert peft_config_keys_before == peft_config_keys_after
assert peft_config_dict_before == peft_config_dict_after
model_converted = PeftModel.from_pretrained(deepcopy(model), tmp_path / "olora-model-converted")
output_converted = model_converted(data)[0]
assert torch.allclose(output_olora, output_converted, atol=tol, rtol=tol)
# rank should be double what it was initially
assert model_converted.peft_config["default"].r == 16
assert model_converted.base_model.model.linear.lora_A["default"].weight.shape[0] == 16
# base model weights should be the same as the initial model
assert torch.allclose(
model.linear.weight, model_converted.base_model.model.linear.base_layer.weight, atol=tol, rtol=tol
)
def test_olora_conversion_same_output_after_loading_with_rank_pattern(self, data, tmp_path):
# same as above, but using rank_pattern
model = self.get_model()
output_base = model(data)[0]
# use rank_pattern here; note that since there is only a single linear layer, r is completely overridden
config = LoraConfig(init_lora_weights="olora", target_modules=["linear"], r=8, rank_pattern={"linear": 32})
peft_model = get_peft_model(deepcopy(model), config)
# save the initial model
peft_model.save_pretrained(tmp_path / "init-model")
# modify the weights, or else the adapter performs an identity transformation
peft_model.base_model.linear.lora_B["default"].weight.data *= 2.0
output_olora = peft_model(data)[0]
# sanity check
tol = 1e-06
assert not torch.allclose(output_base, output_olora, atol=tol, rtol=tol)
# save the model normally
peft_model.save_pretrained(tmp_path / "olora-model")
model_loaded = PeftModel.from_pretrained(deepcopy(model), tmp_path / "olora-model")
output_loaded = model_loaded(data)[0]
assert torch.allclose(output_olora, output_loaded, atol=tol, rtol=tol)
# sanity check: config r should still be 8, with rank_pattern overriding the layer rank to 32
assert model_loaded.peft_config["default"].r == 8
assert model_loaded.base_model.model.linear.lora_A["default"].weight.shape[0] == 32
# sanity check: the base model weights were indeed changed
assert not torch.allclose(
model.linear.weight, model_loaded.base_model.model.linear.base_layer.weight, atol=tol, rtol=tol
)
# save the model with conversion
peft_model.save_pretrained(
tmp_path / "olora-model-converted", path_initial_model_for_weight_conversion=tmp_path / "init-model"
)
model_converted = PeftModel.from_pretrained(deepcopy(model), tmp_path / "olora-model-converted")
output_converted = model_converted(data)[0]
assert torch.allclose(output_olora, output_converted, atol=tol, rtol=tol)
# rank should be double what it was initially
assert model_converted.peft_config["default"].r == 16
assert model_converted.base_model.model.linear.lora_A["default"].weight.shape[0] == 64
# base model weights should be the same as the initial model
assert torch.allclose(
model.linear.weight, model_converted.base_model.model.linear.base_layer.weight, atol=tol, rtol=tol
)
def test_olora_conversion_same_output_after_loading_with_alpha_pattern(self, data, tmp_path):
# same as above, but using alpha_pattern
model = self.get_model()
output_base = model(data)[0]
# use alpha_pattern here; note that since there is only a single linear layer, lora_alpha is completely
# overridden
config = LoraConfig(init_lora_weights="olora", target_modules=["linear"], alpha_pattern={"linear": 5})
peft_model = get_peft_model(deepcopy(model), config)
# save the initial model
peft_model.save_pretrained(tmp_path / "init-model")
# modify the weights, or else the adapter performs an identity transformation
peft_model.base_model.linear.lora_B["default"].weight.data *= 2.0
output_olora = peft_model(data)[0]
# sanity check
tol = 1e-06
assert not torch.allclose(output_base, output_olora, atol=tol, rtol=tol)
# save the model normally
peft_model.save_pretrained(tmp_path / "olora-model")
model_loaded = PeftModel.from_pretrained(deepcopy(model), tmp_path / "olora-model")
output_loaded = model_loaded(data)[0]
assert torch.allclose(output_olora, output_loaded, atol=tol, rtol=tol)
# sanity check: ranks should still be 8 as initially
assert model_loaded.peft_config["default"].r == 8
assert model_loaded.base_model.model.linear.lora_A["default"].weight.shape[0] == 8
assert model_loaded.base_model.model.linear.scaling["default"] == 5 / 8
# sanity check: the base model weights were indeed changed
assert not torch.allclose(
model.linear.weight, model_loaded.base_model.model.linear.base_layer.weight, atol=tol, rtol=tol
)
# save the model with conversion
peft_model.save_pretrained(
tmp_path / "olora-model-converted", path_initial_model_for_weight_conversion=tmp_path / "init-model"
)
model_converted = PeftModel.from_pretrained(deepcopy(model), tmp_path / "olora-model-converted")
output_converted = model_converted(data)[0]
assert torch.allclose(output_olora, output_converted, atol=tol, rtol=tol)
# rank should be double what it was initially
assert model_converted.peft_config["default"].r == 16
assert model_converted.base_model.model.linear.lora_A["default"].weight.shape[0] == 16
assert model_converted.base_model.model.linear.scaling["default"] == 10 / 16
# base model weights should be the same as the initial model
assert torch.allclose(
model.linear.weight, model_converted.base_model.model.linear.base_layer.weight, atol=tol, rtol=tol
)
def test_olora_conversion_same_output_after_loading_with_rslora(self, data, tmp_path):
# same as above, but using rslora
model = self.get_model()
output_base = model(data)[0]
config = LoraConfig(init_lora_weights="olora", target_modules=["linear"], r=8, use_rslora=True)
peft_model = get_peft_model(deepcopy(model), config)
# save the initial model
peft_model.save_pretrained(tmp_path / "init-model")
# modify the weights, or else the adapter performs an identity transformation
peft_model.base_model.linear.lora_B["default"].weight.data *= 2.0
output_olora = peft_model(data)[0]
# sanity check
tol = 1e-06
assert not torch.allclose(output_base, output_olora, atol=tol, rtol=tol)
# save the model normally
peft_model.save_pretrained(tmp_path / "olora-model")
model_loaded = PeftModel.from_pretrained(deepcopy(model), tmp_path / "olora-model")
output_loaded = model_loaded(data)[0]
assert torch.allclose(output_olora, output_loaded, atol=tol, rtol=tol)
# sanity check: ranks should still be 8 as initially
assert model_loaded.peft_config["default"].r == 8
assert model_loaded.base_model.model.linear.lora_A["default"].weight.shape[0] == 8
assert model_loaded.base_model.model.linear.scaling["default"] == 8 / (8**0.5)
# sanity check: the base model weights were indeed changed
assert not torch.allclose(
model.linear.weight, model_loaded.base_model.model.linear.base_layer.weight, atol=tol, rtol=tol
)
# save the model with conversion
peft_model.save_pretrained(
tmp_path / "olora-model-converted", path_initial_model_for_weight_conversion=tmp_path / "init-model"
)
model_converted = PeftModel.from_pretrained(deepcopy(model), tmp_path / "olora-model-converted")
output_converted = model_converted(data)[0]
assert torch.allclose(output_olora, output_converted, atol=tol, rtol=tol)
# rank should be double what it was initially
assert model_converted.peft_config["default"].r == 16
assert model_converted.base_model.model.linear.lora_A["default"].weight.shape[0] == 16
# same scale as before with a little bit of floating point imprecision
assert model_converted.base_model.model.linear.scaling["default"] == pytest.approx(8 / (8**0.5))
# base model weights should be the same as the initial model
assert torch.allclose(
model.linear.weight, model_converted.base_model.model.linear.base_layer.weight, atol=tol, rtol=tol
)
def test_olora_rank_pattern_and_rslora_raises(self, tmp_path):
# it's not possible to determine the correct scale when using rslora with rank or alpha pattern, because the
# scale is not stored in the state_dict
model = self.get_model()
config = LoraConfig(
init_lora_weights="olora", target_modules=["linear"], r=8, rank_pattern={"linear": 2}, use_rslora=True
)
peft_model = get_peft_model(model, config)
peft_model.save_pretrained(tmp_path / "init-model")
msg = re.escape("Passing `path_initial_model_for_weight_conversion` to `save_pretrained`")
with pytest.raises(ValueError, match=msg):
peft_model.save_pretrained(
tmp_path / "olora-model", path_initial_model_for_weight_conversion=tmp_path / "init-model"
)
def test_olora_alpha_pattern_and_rslora_raises(self, tmp_path):
# it's not possible to determine the correct scale when using rslora with rank or alpha pattern, because the
# scale is not stored in the state_dict
model = self.get_model()
config = LoraConfig(
init_lora_weights="olora", target_modules=["linear"], r=8, alpha_pattern={"linear": 2}, use_rslora=True
)
peft_model = get_peft_model(model, config)
peft_model.save_pretrained(tmp_path / "init-model")
msg = re.escape("Passing `path_initial_model_for_weight_conversion` to `save_pretrained`")
with pytest.raises(ValueError, match=msg):
peft_model.save_pretrained(
tmp_path / "olora-model", path_initial_model_for_weight_conversion=tmp_path / "init-model"
)
@pytest.mark.parametrize(
"config_kwargs, should_warn",
[
# no warning
({"init_lora_weights": "pissa", "target_modules": ["linear"]}, False),
({"init_lora_weights": "pissa_niter_3", "target_modules": ["linear"]}, False),
({"init_lora_weights": "olora", "target_modules": ["linear"]}, False),
({"init_lora_weights": "pissa", "target_modules": ["linear"], "use_rslora": True}, False),
({"init_lora_weights": "pissa_niter_3", "target_modules": ["linear"], "use_rslora": True}, False),
({"init_lora_weights": "olora", "target_modules": ["linear"], "use_rslora": True}, False),
({"init_lora_weights": "pissa", "target_modules": ["linear"], "rank_pattern": {"linear": 8}}, False),
(
{"init_lora_weights": "pissa_niter_3", "target_modules": ["linear"], "rank_pattern": {"linear": 8}},
False,
),
({"init_lora_weights": "olora", "target_modules": ["linear"], "rank_pattern": {"linear": 8}}, False),
({"init_lora_weights": "pissa", "target_modules": ["linear"], "alpha_pattern": {"linear": 8}}, False),
(
{"init_lora_weights": "pissa_niter_3", "target_modules": ["linear"], "alpha_pattern": {"linear": 8}},
False,
),
({"init_lora_weights": "olora", "target_modules": ["linear"], "alpha_pattern": {"linear": 8}}, False),
# warning
(
{
"init_lora_weights": "pissa",
"target_modules": ["linear"],
"use_rslora": True,
"rank_pattern": {"linear": 8},
},
True,
),
(
{
"init_lora_weights": "pissa_niter_3",
"target_modules": ["linear"],
"use_rslora": True,
"rank_pattern": {"linear": 8},
},
True,
),
(
{
"init_lora_weights": "olora",
"target_modules": ["linear"],
"use_rslora": True,
"rank_pattern": {"linear": 8},
},
True,
),
(
{
"init_lora_weights": "pissa",
"target_modules": ["linear"],
"use_rslora": True,
"alpha_pattern": {"linear": 8},
},
True,
),
(
{
"init_lora_weights": "pissa_niter_3",
"target_modules": ["linear"],
"use_rslora": True,
"alpha_pattern": {"linear": 8},
},
True,
),
(
{
"init_lora_weights": "olora",
"target_modules": ["linear"],
"use_rslora": True,
"alpha_pattern": {"linear": 8},
},
True,
),
(
{
"init_lora_weights": "pissa",
"target_modules": ["linear"],
"use_rslora": True,
"rank_pattern": {"linear": 8},
"alpha_pattern": {"linear": 8},
},
True,
),
(
{
"init_lora_weights": "pissa_niter_3",
"target_modules": ["linear"],
"use_rslora": True,
"rank_pattern": {"linear": 8},
"alpha_pattern": {"linear": 8},
},
True,
),
(
{
"init_lora_weights": "olora",
"target_modules": ["linear"],
"use_rslora": True,
"rank_pattern": {"linear": 8},
"alpha_pattern": {"linear": 8},
},
True,
),
],
)
def test_lora_config_pissa_olora_warns(self, config_kwargs, should_warn, recwarn):
# Post-training conversion of modified base weights back to their initial values (PiSSA, OLoRA) cannot be
# done correctly when using rslora + rank_pattern/alpha_pattern. We can't really know if the user intends
# this when they eventually call save_pretrained (i.e. whether they'll pass
# path_initial_model_for_weight_conversion). Therefore, we only warn but don't raise an error here.
msg = re.escape("Using Rank-Stabilized LoRA with rank_pattern/alpha_pattern and post-training conversion")
if should_warn:
LoraConfig(**config_kwargs)
assert len(recwarn.list) == 1
with pytest.warns(UserWarning, match=msg):
LoraConfig(**config_kwargs)
else:
LoraConfig(**config_kwargs)
assert not recwarn.list
@pytest.mark.parametrize("init_method", ["pissa", "olora"])
@pytest.mark.parametrize("pissa_olora_loaded_first", [False, True])
def test_load_pissa_olora_with_other_adapter_warns(self, init_method, pissa_olora_loaded_first, recwarn, tmp_path):
# Since PiSSA/OLoRA modifies the base weights, it should not be combined with other adapters. Check for a
# warning. See #2184.
# create an adapter without PiSSA/OLoRA
model_id = "hf-internal-testing/tiny-random-OPTForCausalLM"
model = AutoModelForCausalLM.from_pretrained(model_id)
model = get_peft_model(model, LoraConfig(init_lora_weights=True))
model.save_pretrained(tmp_path / "adapter0")
del model
# create a model with PiSSA/OLoRA
model = AutoModelForCausalLM.from_pretrained(model_id)
model = get_peft_model(model, LoraConfig(init_lora_weights=init_method))
model.save_pretrained(tmp_path / "adapter1")
del model
# load the model
if pissa_olora_loaded_first:
path0, path1 = tmp_path / "adapter1", tmp_path / "adapter0"
else:
path0, path1 = tmp_path / "adapter0", tmp_path / "adapter1"
model = AutoModelForCausalLM.from_pretrained(model_id)
model = PeftModel.from_pretrained(model, path0)
model = model.load_adapter(path1, adapter_name="other")
if init_method == "pissa":
msg = "PiSSA changes the base weights of the model and should thus not be used with other adapters"
else:
msg = "OLoRA changes the base weights of the model and should thus not be used with other adapters"
assert any(str(w.message).startswith(msg) for w in recwarn.list)
def test_lora_rslora_scaling(self):
# explicitly enable use_rslora (the default is False)
torch.manual_seed(0)
model = self.get_model()
# check scaling factor use_rslora=True
config = LoraConfig(target_modules=["linear", "embed", "conv2d"], lora_alpha=3, r=16, use_rslora=True)
model = get_peft_model(model, config)
expected_scaling = config.lora_alpha / (config.r**0.5)
assert model.linear.scaling["default"] == expected_scaling
assert model.embed.scaling["default"] == expected_scaling
assert model.conv2d.scaling["default"] == expected_scaling
def test_lora_default_scaling_pattern(self):
# use_rslora=False (the default)
torch.manual_seed(0)
model = self.get_model()
# check scaling factor use_rslora=False with rank and alpha pattern
config = LoraConfig(
target_modules=["linear", "embed", "conv2d"],
rank_pattern={"embed": 9, "conv2d": 16},
alpha_pattern={"linear": 11, "conv2d": 13},
lora_alpha=17,
r=25,
use_rslora=False,
)
model = get_peft_model(model, config)
expected_scaling = {
"linear": config.alpha_pattern["linear"] / config.r,
"embed": config.lora_alpha / config.rank_pattern["embed"],
"conv2d": config.alpha_pattern["conv2d"] / config.rank_pattern["conv2d"],
}
assert model.linear.scaling["default"] == expected_scaling["linear"]
assert model.embed.scaling["default"] == expected_scaling["embed"]
assert model.conv2d.scaling["default"] == expected_scaling["conv2d"]
def test_lora_rslora_scaling_pattern(self):
# explicitly enable use_rslora (the default is False)
torch.manual_seed(0)
model = self.get_model()
# check scaling factor use_rslora=True with rank and alpha pattern
config = LoraConfig(
target_modules=["linear", "embed", "conv2d"],
rank_pattern={"embed": 9, "conv2d": 16},
alpha_pattern={"linear": 11, "conv2d": 13},
lora_alpha=17,
r=25,
use_rslora=True,
)
model = get_peft_model(model, config)
expected_scaling = {
"linear": config.alpha_pattern["linear"] / (config.r**0.5),
"embed": config.lora_alpha / (config.rank_pattern["embed"] ** 0.5),
"conv2d": config.alpha_pattern["conv2d"] / (config.rank_pattern["conv2d"] ** 0.5),
}
assert model.linear.scaling["default"] == expected_scaling["linear"]
assert model.embed.scaling["default"] == expected_scaling["embed"]
assert model.conv2d.scaling["default"] == expected_scaling["conv2d"]
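A standalone sketch (not peft code; real pattern matching in peft is done against full module names, which this simplified dict lookup glosses over) of how rank_pattern/alpha_pattern take precedence over the global r/lora_alpha, reproducing the per-module expectations of the pattern tests above.

```python
def resolve_scaling(name, r, lora_alpha, rank_pattern, alpha_pattern, use_rslora):
    """Resolve the effective scaling for one module, patterns overriding globals."""
    r = rank_pattern.get(name, r)
    alpha = alpha_pattern.get(name, lora_alpha)
    return alpha / (r**0.5) if use_rslora else alpha / r


# mirrors test_lora_rslora_scaling_pattern (r=25, lora_alpha=17, use_rslora=True)
rank_pattern = {"embed": 9, "conv2d": 16}
alpha_pattern = {"linear": 11, "conv2d": 13}
assert resolve_scaling("linear", 25, 17, rank_pattern, alpha_pattern, True) == 11 / 25**0.5
assert resolve_scaling("embed", 25, 17, rank_pattern, alpha_pattern, True) == 17 / 3.0
assert resolve_scaling("conv2d", 25, 17, rank_pattern, alpha_pattern, True) == 13 / 4.0
```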
@require_deterministic_for_xpu
def test_lora_use_dora_linear(self, data):
# check that dora is a no-op when initialized
torch.manual_seed(0)
model = self.get_model()
output_base, _, _ = model(data)
# apply DoRA to the linear layer
config = LoraConfig(target_modules=["linear"], use_dora=True)
model = get_peft_model(model, config)
with model.disable_adapter():
output_disabled, _, _ = model(data)
output_dora, _, _ = model(data)
assert torch.allclose(output_base, output_disabled)
assert torch.allclose(output_base, output_dora)
@require_deterministic_for_xpu
def test_lora_use_dora_linear_init_false(self, data):
# with init_lora_weights=False, dora should not be a no-op
torch.manual_seed(0)
model = self.get_model()
output_base, _, _ = model(data)
# apply DoRA with init_lora_weights=False
config = LoraConfig(target_modules=["linear"], use_dora=True, init_lora_weights=False)
model = get_peft_model(model, config)
with model.disable_adapter():
output_disabled, _, _ = model(data)
output_dora, _, _ = model(data)
assert torch.allclose(output_base, output_disabled)
assert not torch.allclose(output_base, output_dora)
def test_lora_use_dora_with_megatron_core_raises(self):
megatron_config = {"does-not": "matter-here"}
with pytest.raises(ValueError, match="DoRA does not support megatron_core"):
LoraConfig(target_modules=["linear"], use_dora=True, megatron_config=megatron_config)
@pytest.fixture
def mha_cls(self):
class ModelMha(nn.Module):
def __init__(self, kdim=None, vdim=None):
super().__init__()
self.mha = nn.MultiheadAttention(10, 2, kdim=kdim, vdim=vdim)
self.lin0 = nn.Linear(10, 2)
self.sm = nn.LogSoftmax(dim=-1)
def forward(self, X):
X = X.float()
X, _ = self.mha(X, X, X)
X = self.lin0(X)
X = self.sm(X)
return X
return ModelMha
def test_mha_load_init_model_first(self, mha_cls):
# This test used to fail and require a workaround, for more context, see:
# https://github.com/huggingface/peft/pull/1324#issuecomment-2252473980
# The workaround was that _restore_weights had to be called manually on lora.MHA layers in order to make loading
# the state dict work. With recent changes, this workaround is no longer required, so that test has been
# deleted.
inputs = torch.rand(10, 10, 10)
model = mha_cls()
config = LoraConfig(target_modules=["mha"], init_lora_weights=False)
model = get_peft_model(model, config).eval()
restore_state_dict = {k: v.detach().cpu() for k, v in model.state_dict().items()}
del model
model = mha_cls()
model = get_peft_model(model, config)
# the workaround used to be:
# for module in model.modules():
# if isinstance(module, peft.tuners.lora.layer.MultiheadAttention):
# module._restore_weights()
model(inputs)
model.load_state_dict(restore_state_dict)
def test_mha_with_separate_qkv_embed_raises(self, mha_cls):
# passing different kdim and vdim results in separate parameters for q, k, v, which is not supported (yet)
model = mha_cls(kdim=20, vdim=30)
config = LoraConfig(target_modules=["mha"])
msg = "Only same embed for query/key/value is supported as of now for MultiheadAttention"
with pytest.raises(ValueError, match=msg):
get_peft_model(model, config)
def test_mha_with_dora_raises(self, mha_cls):
model = mha_cls()
config = LoraConfig(target_modules=["mha"], use_dora=True)
msg = re.escape("MultiheadAttention does not support DoRA (yet), please set use_dora to False")
with pytest.raises(ValueError, match=msg):
get_peft_model(model, config)
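A note on the `re.escape` calls used throughout these tests: `pytest.raises(match=...)` treats the pattern as a regular expression (matched via `re.search`), so error messages containing metacharacters such as parentheses must be escaped to match literally. A stdlib-only sketch of why this matters:

```python
import re

# The same message the test above matches against; "(yet)" contains regex
# metacharacters, so an unescaped pattern would treat it as a capturing group.
msg = "MultiheadAttention does not support DoRA (yet), please set use_dora to False"

literal = re.escape(msg)
# Escaped, the pattern matches the message verbatim inside a larger string.
assert re.search(literal, f"ValueError: {msg}") is not None
# Unescaped, "(yet)" is a group matching the bare text "yet", so the pattern
# also matches a string where the parentheses are missing.
assert re.search(msg, msg.replace("(yet)", "yet")) is not None
```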
def test_mha_exposes_attributes(self, mha_cls):
# MHA requires a bunch of attributes to be exposed, try to check them exhaustively here
model = mha_cls()
embed_dim = model.mha.embed_dim
kdim = model.mha.kdim
vdim = model.mha.vdim
qkv_same_embed_dim = model.mha._qkv_same_embed_dim
num_heads = model.mha.num_heads
dropout = model.mha.dropout
batch_first = model.mha.batch_first
head_dim = model.mha.head_dim
in_proj_weight = model.mha.in_proj_weight
in_proj_bias = model.mha.in_proj_bias
out_proj = model.mha.out_proj
bias_k = model.mha.bias_k
bias_v = model.mha.bias_v
add_zero_attn = model.mha.add_zero_attn
config = LoraConfig(target_modules=["mha"])
peft_model = get_peft_model(model, config)
assert peft_model.base_model.mha.embed_dim == embed_dim
assert peft_model.base_model.mha.kdim == kdim
assert peft_model.base_model.mha.vdim == vdim
assert peft_model.base_model.mha._qkv_same_embed_dim == qkv_same_embed_dim
assert peft_model.base_model.mha.num_heads == num_heads
assert peft_model.base_model.mha.dropout == dropout
assert peft_model.base_model.mha.batch_first == batch_first
assert peft_model.base_model.mha.head_dim == head_dim
if in_proj_weight is not None:
assert torch.allclose(peft_model.base_model.mha.in_proj_weight, in_proj_weight)
else:
assert peft_model.base_model.mha.in_proj_weight is None
if in_proj_bias is not None:
assert torch.allclose(peft_model.base_model.mha.in_proj_bias, in_proj_bias)
else:
assert peft_model.base_model.mha.in_proj_bias is None
assert peft_model.base_model.mha.out_proj is out_proj
if bias_k is not None:
assert torch.allclose(peft_model.base_model.mha.bias_k, bias_k)
else:
assert peft_model.base_model.mha.bias_k is None
if bias_v is not None:
assert torch.allclose(peft_model.base_model.mha.bias_v, bias_v)
else:
assert peft_model.base_model.mha.bias_v is None
assert peft_model.base_model.mha.add_zero_attn == add_zero_attn
def test_mha_merge_masks_method(self, mha_cls):
# MHA requires a merge_masks method to be exposed, check that it works
model = mha_cls()
config = LoraConfig(target_modules=["mha"])
peft_model = get_peft_model(model, config)
attn_mask = torch.randint(0, 2, (10, 10))
key_padding_mask = torch.randint(0, 2, (10, 10))
query = torch.rand(10, 10, 10)
merged_mask0, mask_type0 = model.mha.merge_masks(attn_mask, key_padding_mask, query)
merged_mask1, mask_type1 = peft_model.base_model.mha.merge_masks(attn_mask, key_padding_mask, query)
assert torch.allclose(merged_mask0, merged_mask1)
assert mask_type0 == mask_type1
def test_lora_with_bias_extra_params(self):
# lora with lora_bias=True
model = self.get_model()
config = LoraConfig(target_modules=["linear", "conv2d"], lora_bias=False)
model_no_bias = get_peft_model(model, config)
model = self.get_model()
config = LoraConfig(target_modules=["linear", "conv2d"], lora_bias=True)
model_bias = get_peft_model(model, config)
# check that bias for LoRA B is set
assert model_no_bias.base_model.model.linear.lora_B["default"].bias is None
assert model_bias.base_model.model.linear.lora_B["default"].bias.shape == (1000,)
assert model_no_bias.base_model.model.conv2d.lora_B["default"].bias is None
assert model_bias.base_model.model.conv2d.lora_B["default"].bias.shape == (100,)
# check that the same params are present except for the extra bias term
params_no_bias = {name for name, _ in model_no_bias.named_parameters()}
params_bias = {name for name, _ in model_bias.named_parameters()}
extra_params = {
"base_model.model.linear.lora_B.default.bias",
"base_model.model.conv2d.lora_B.default.bias",
}
assert params_bias - params_no_bias == extra_params
assert params_no_bias.issubset(params_bias)
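The set arithmetic above is a compact way to assert "same parameters plus exactly these extras". A minimal self-contained sketch (the parameter names are made up for illustration; real ones come from `model.named_parameters()`):

```python
# Hypothetical parameter-name sets standing in for two model variants.
params_no_bias = {
    "base_model.model.linear.lora_A.default.weight",
    "base_model.model.linear.lora_B.default.weight",
}
extra_params = {"base_model.model.linear.lora_B.default.bias"}
params_bias = params_no_bias | extra_params

# The set difference is exactly the extra bias term, and nothing was removed.
assert params_bias - params_no_bias == extra_params
assert params_no_bias.issubset(params_bias)
```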
def test_lora_with_bias_embedding_raises(self):
# lora with lora_bias=True is not supported for embedding layers
model = self.get_model()
config = LoraConfig(target_modules=["embed"], lora_bias=True)
msg = "lora_bias=True is not supported for Embedding"
with pytest.raises(ValueError, match=msg):
get_peft_model(model, config)
@pytest.mark.parametrize(
"extra_kwargs",
[
{"use_dora": True},
{"init_lora_weights": "eva"},
{"init_lora_weights": "gaussian"},
{"init_lora_weights": "loftq", "loftq_config": LoftQConfig()},
{"init_lora_weights": "olora"},
{"init_lora_weights": "pissa"},
{"init_lora_weights": "pissa_niter_3"},
{"init_lora_weights": "orthogonal"},
],
)
def test_lora_with_bias_incompatible_arguments(self, extra_kwargs):
# some arguments don't work in conjunction with lora_bias and should raise
# just check the common chunk of the error message
msg = "The argument lora_bias=True is"
with pytest.raises(ValueError, match=msg):
LoraConfig(target_modules=["linear"], lora_bias=True, **extra_kwargs)
def test_lora_linear_with_bias_when_base_layer_has_no_bias_warns(self):
model = self.get_model(bias=False)
config = LoraConfig(target_modules=["linear"], lora_bias=True)
msg = re.escape("`lora_bias=True` was passed but the targeted layer of type Linear has no bias")
with pytest.warns(PeftWarning, match=msg):
get_peft_model(model, config)
def test_lora_conv2d_with_bias_when_base_layer_has_no_bias_warns(self):
model = self.get_model(bias=False)
config = LoraConfig(target_modules=["conv2d"], lora_bias=True)
msg = re.escape("`lora_bias=True` was passed but the targeted layer of type Conv2d has no bias")
with pytest.warns(PeftWarning, match=msg):
get_peft_model(model, config)
def test_lora_incompatible_mamba_modules(self):
# Ensure LoRA raises an error when applying to forbidden modules
# ('out_proj', 'conv1d') in Mamba-based architectures like Falcon-Mamba tiny.
model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-mamba-tiny-dev")
config = LoraConfig(
task_type="CAUSAL_LM",
target_modules=["out_proj", "conv1d"], # Forbidden modules for Mamba-based models
)
msg = "is incompatible with Mamba-based models"
with pytest.raises(ValueError, match=msg):
get_peft_model(model, config)
def get_model_conv2d_groups(self):
class ModelConv2DGroups(nn.Module):
"""For testing when groups argument is used in conv layer"""
def __init__(self):
super().__init__()
self.conv2d = nn.Conv2d(16, 32, 3, padding=1, groups=2)
self.relu = nn.ReLU()
self.flat = nn.Flatten()
self.lin0 = nn.Linear(12800, 2)
self.sm = nn.LogSoftmax(dim=-1)
self.dtype = torch.float
def forward(self, X):
# The input is ignored; this model mainly serves to check that an error is raised when PEFT is applied
X = torch.arange(9 * 16 * 20 * 20).view([9, 16, 20, 20]).to(self.conv2d.weight.device)
X = X.to(self.dtype)
X = self.conv2d(X)
X = self.relu(X)
X = self.flat(X)
X = self.lin0(X)
X = self.sm(X)
return X
return ModelConv2DGroups().eval().to(self.torch_device)
@pytest.mark.parametrize(
"config_cls, config_kwargs",
[
pytest.param(LoraConfig, {"r": 8, "target_modules": ["conv2d"]}, id="lora with rank divisible by groups"),
pytest.param(LoraConfig, {"r": 2, "target_modules": ["conv2d"]}, id="lora with rank equal to groups"),
pytest.param(
LoraConfig, {"r": 1, "target_modules": ["conv2d"]}, id="lora with rank not divisible by groups"
),
pytest.param(
LoraConfig,
{"r": 8, "target_modules": ["conv2d"], "use_dora": True},
id="dora with rank divisible by groups",
),
pytest.param(
LoraConfig,
{"r": 2, "target_modules": ["conv2d"], "use_dora": True},
id="dora with rank equal to groups",
),
pytest.param(
LoraConfig,
{"r": 1, "target_modules": ["conv2d"], "use_dora": True},
id="dora with rank not divisible by groups",
),
],
)
def test_error_raised_if_rank_not_divisible_by_groups(self, config_cls, config_kwargs):
# Check that an error is raised when the rank is not divisible by groups for a conv layer, since
# LoRA and DoRA currently only support conv layers whose rank is divisible by groups
base_model = self.get_model_conv2d_groups()
peft_config = config_cls(**config_kwargs)
r = config_kwargs["r"]
base_layer = base_model.conv2d
groups = base_layer.groups
if r % groups != 0:
with pytest.raises(
ValueError,
match=(
f"Targeting a {base_layer.__class__.__name__} with groups={base_layer.groups} and rank {r}. "
"Currently, support is limited to conv layers where the rank is divisible by groups. "
"Either choose a different rank or do not target this specific layer."
),
):
get_peft_model(base_model, peft_config)
else:
# No error should be raised
get_peft_model(base_model, peft_config)
def test_target_module_and_target_parameter_on_same_layer(self):
# When targeting an nn.Parameter with LoRA using target_parameters, ensure that this is not already another LoRA
# layer (i.e. avoid double wrapping).
class MyModule(nn.Module):
def __init__(self):
super().__init__()
self.linear = nn.Linear(10, 10)
base_model = MyModule()
config = LoraConfig(target_modules=["linear"], target_parameters=["linear.weight"])
msg = "Trying to wrap an `nn.Parameter` of layer 'linear' of type Linear, which is not a valid target."
with pytest.raises(ValueError, match=msg):
get_peft_model(base_model, config)
@pytest.mark.parametrize("target_parameters", [["linear"], ["foobar"], ["foobar.weight"], ["foo", "bar"]])
@pytest.mark.parametrize("target_modules", [None, [], ""])
def test_valid_no_target_module_nor_target_parameter_match_raises(self, target_parameters, target_modules):
model = self.get_model()
config = LoraConfig(target_modules=target_modules, target_parameters=target_parameters)
msg = re.escape(
"No `target_modules` passed but also no `target_parameters` found. Please check the values for "
"these arguments."
)
with pytest.raises(ValueError, match=msg):
get_peft_model(model, config)
def test_target_parameters_wrong_type_raises(self):
# Check that target_parameters being a string raises a useful error message -- this is an easy mistake to make
# because strings are allowed for target_modules
model = self.get_model()
msg = "`target_parameters` must be a list of strings or None."
with pytest.raises(TypeError, match=msg):
LoraConfig(target_parameters="linear.weight")
def test_valid_target_parameters_invalid_target_modules_warns(self):
model = self.get_model()
config = LoraConfig(target_modules=["foobar"], target_parameters=["linear.weight"])
msg = re.escape("target_modules={'foobar'} were set but no module was matched.")
with pytest.warns(RuntimeWarning, match=msg):
get_peft_model(model, config)
def test_valid_target_modules_invalid_target_parameters_warns(self):
model = self.get_model()
config = LoraConfig(target_modules=["linear"], target_parameters=["foobar.weight"])
msg = re.escape("target_parameters=['foobar.weight'] were set but no parameter was matched.")
with pytest.warns(RuntimeWarning, match=msg):
get_peft_model(model, config)
def test_adding_multiple_adapters_with_target_parameters_raises(self):
model = self.get_model()
config = LoraConfig(target_modules=[], target_parameters=["linear.weight"])
model = get_peft_model(model, config)
msg = re.escape("only one LoRA adapter per model with `target_parameters` is allowed")
with pytest.raises(ValueError, match=msg):
model.add_adapter(adapter_name="other", peft_config=config)
def test_loading_adapters_with_target_parameters_raises(self, tmp_path):
model = self.get_model()
config = LoraConfig(target_modules=[], target_parameters=["linear.weight"])
model = get_peft_model(model, config)
model.save_pretrained(tmp_path)
model = self.get_model()
model = PeftModel.from_pretrained(model, tmp_path)
msg = re.escape("only one LoRA adapter per model with `target_parameters` is allowed")
with pytest.raises(ValueError, match=msg):
model.load_adapter(tmp_path, adapter_name="other")
class TestLokrInitialization:
torch_device = infer_device()
def get_model(self):
class MyModule(nn.Module):
def __init__(self):
super().__init__()
# Choose a large weight so that averages are close to expected values.
self.linear = nn.Linear(1000, 1000)
self.conv2d = nn.Conv2d(100, 100, 3)
def forward(self, x):
x_4d = x.flatten().reshape(1, 100, 10, 10)
return self.linear(x), self.conv2d(x_4d)
return MyModule().eval().to(self.torch_device)
@pytest.fixture
def data(self):
return torch.rand(10, 1000).to(self.torch_device)
@require_deterministic_for_xpu
def test_lokr_linear_init_default(self, data):
torch.manual_seed(0)
model = self.get_model()
output_before = model(data)[0]
config = LoKrConfig(target_modules=["linear"])
model = get_peft_model(model, config)
output_after = model(data)[0]
assert torch.allclose(output_before, output_after)
def test_lokr_linear_init_false(self, data):
torch.manual_seed(0)
model = self.get_model()
output_before = model(data)[0]
config = LoKrConfig(target_modules=["linear"], init_weights=False)
model = get_peft_model(model, config)
output_after = model(data)[0]
assert not torch.allclose(output_before, output_after)
@require_deterministic_for_xpu
def test_lokr_linear_init_lycoris(self, data):
torch.manual_seed(0)
model = self.get_model()
output_before = model(data)[0]
config = LoKrConfig(target_modules=["linear"], init_weights="lycoris")
model = get_peft_model(model, config)
output_after = model(data)[0]
assert torch.allclose(output_before, output_after)
def test_lokr_conv2d_init_default(self, data):
torch.manual_seed(0)
model = self.get_model()
output_before = model(data)[1]
config = LoKrConfig(target_modules=["conv2d"])
model = get_peft_model(model, config)
output_after = model(data)[1]
assert torch.allclose(output_before, output_after)
def test_lokr_conv2d_init_false(self, data):
torch.manual_seed(0)
model = self.get_model()
output_before = model(data)[1]
config = LoKrConfig(target_modules=["conv2d"], init_weights=False)
model = get_peft_model(model, config)
output_after = model(data)[1]
assert not torch.allclose(output_before, output_after)
def test_lokr_conv2d_init_lycoris(self, data):
torch.manual_seed(0)
model = self.get_model()
output_before = model(data)[1]
config = LoKrConfig(target_modules=["conv2d"], init_weights="lycoris")
model = get_peft_model(model, config)
output_after = model(data)[1]
assert torch.allclose(output_before, output_after)
class TestAdaLoraInitialization:
torch_device = infer_device()
def test_adalora_target_modules_set(self):
config = AdaLoraConfig(target_modules=["linear", "embed", "conv2d"], total_step=1)
assert config.target_modules == {"linear", "embed", "conv2d"}
def test_adalora_use_dora_raises(self):
with pytest.raises(ValueError, match="ADALORA does not support DoRA"):
AdaLoraConfig(use_dora=True, total_step=1)
def test_adalora_loftq_config_raises(self):
with pytest.raises(ValueError, match="ADALORA does not support LOFTQ"):
AdaLoraConfig(init_lora_weights="loftq", loftq_config={"loftq": "config"}, total_step=1)
def get_model(self):
class MyModule(nn.Module):
def __init__(self):
super().__init__()
# choose a large weight so that averages are close to expected values
self.linear = nn.Linear(1000, 1000)
def forward(self, x):
return self.linear(x)
return MyModule().eval().to(self.torch_device)
@pytest.fixture
def data(self):
return torch.rand(10, 1000).to(self.torch_device)
@require_deterministic_for_xpu
def test_adalora_default_init_identity(self, data):
# default is True
torch.manual_seed(0)
model = self.get_model()
output_before = model(data)
config = AdaLoraConfig(target_modules=["linear"], total_step=1)
model = get_peft_model(model, config)
output_after = model(data)
assert torch.allclose(output_before, output_after)
class TestPromptTuningInitialization:
torch_device = infer_device()
def get_model(self):
class MyModule(nn.Module):
def __init__(self):
super().__init__()
# choose a large weight so that averages are close to expected values
self.linear = nn.Linear(1000, 1000)
self.embed = nn.Embedding(1000, 1000)
self.conv2d = nn.Conv2d(100, 100, 3)
def forward(self, x):
x_int = (100 * x).int()
x_4d = x.flatten().reshape(1, 100, 10, 10)
return self.linear(x), self.embed(x_int), self.conv2d(x_4d)
return MyModule().eval().to(self.torch_device)
def test_use_prompt_tuning_init_text_raises(self):
with pytest.raises(ValueError, match="When prompt_tuning_init='TEXT', tokenizer_name_or_path can't be None"):
PromptTuningConfig(prompt_tuning_init="TEXT", prompt_tuning_init_text="prompt tuning init text")
with pytest.raises(ValueError, match="When prompt_tuning_init='TEXT', prompt_tuning_init_text can't be None"):
PromptTuningConfig(prompt_tuning_init="TEXT", tokenizer_name_or_path="t5-base")
class TestVeraInitialization:
torch_device = infer_device()
def get_model(self):
class MLP(nn.Module):
def __init__(self, bias=True):
super().__init__()
self.lin0 = nn.Linear(10, 20, bias=bias)
self.lin1 = nn.Linear(20, 2, bias=bias)
def forward(self, X):
X = self.lin0(X)
X = self.lin1(X)
return X
return MLP().to(self.torch_device)
def test_vera_mixing_save_projection_raises(self):
# It is unclear what the right behavior would be if some adapters save the projection weights
# and some don't, so we raise an error instead.
config0 = VeraConfig(target_modules=["lin0"], init_weights=False, save_projection=True)
model = self.get_model()
model = get_peft_model(model, config0)
config1 = VeraConfig(target_modules=["lin0"], init_weights=False, save_projection=False)
msg = re.escape(
"VeRA projection weights must be saved for all adapters or none, but got multiple different values: "
"[False, True]"
)
with pytest.raises(ValueError, match=msg):
model.add_adapter("other", config1)
def test_vera_add_second_adapter_with_incompatible_input_shape(self):
config0 = VeraConfig(target_modules=["lin0"], r=8)
config1 = VeraConfig(target_modules=["lin1"])
base_model = self.get_model()
lin0_in_feat = base_model.lin0.in_features
lin1_in_feat = base_model.lin1.in_features
model = get_peft_model(base_model, config0)
# not full message but enough to identify the error
msg = f"vera_A has a size of {lin0_in_feat} but {lin1_in_feat} or greater is required"
with pytest.raises(ValueError, match=msg):
model.add_adapter("other", config1)
def test_vera_add_second_adapter_with_higher_rank(self):
rank0 = 123
rank1 = 456
config0 = VeraConfig(target_modules=["lin0"], r=rank0)
# second adapter has higher rank
config1 = VeraConfig(target_modules=["lin0"], r=rank1)
model = get_peft_model(self.get_model(), config0)
# not full message but enough to identify the error
msg = f"vera_A has a size of {rank0} but {rank1} or greater is required"
with pytest.raises(ValueError, match=msg):
model.add_adapter("other", config1)
class TestVBLoraInitialization:
torch_device = infer_device()
def get_model(self):
class MLP(nn.Module):
def __init__(self, bias=True):
super().__init__()
self.lin0 = nn.Linear(10, 30, bias=bias)
self.lin1 = nn.Linear(30, 2, bias=bias)
def forward(self, X):
X = self.lin0(X)
X = self.lin1(X)
return X
return MLP().to(self.torch_device)
def test_vblora_with_incompatible_vector_length_with_in_features(self):
vector_length = 3
model = self.get_model()
config = VBLoRAConfig(target_modules=["lin0"], vector_length=vector_length)
msg = f"`in_features` {model.lin0.in_features} must be divisible by `vector_length` {vector_length}"
with pytest.raises(ValueError, match=msg):
get_peft_model(model, config)
def test_vblora_with_incompatible_vector_length_with_out_features(self):
vector_length = 3
model = self.get_model()
config = VBLoRAConfig(target_modules=["lin1"], vector_length=vector_length)
msg = f"`out_features` {model.lin1.out_features} must be divisible by `vector_length` {vector_length}"
with pytest.raises(ValueError, match=msg):
get_peft_model(model, config)
class TestC3AInitialization:
torch_device = infer_device()
def get_model(self):
class MLP(nn.Module):
def __init__(self, bias=True):
super().__init__()
self.lin0 = nn.Linear(10, 30, bias=bias)
self.lin1 = nn.Linear(30, 2, bias=bias)
def forward(self, X):
X = self.lin0(X)
X = self.lin1(X)
return X
return MLP().to(self.torch_device)
def test_c3a_with_incompatible_block_size_with_in_features(self):
block_size = 3
model = self.get_model()
config = C3AConfig(target_modules=["lin0"], block_size=block_size)
msg = f"The block size should be a factor of the input size. However, the input size is {model.lin0.in_features} and the block size is {block_size}"
with pytest.raises(ValueError, match=msg):
get_peft_model(model, config)
def test_c3a_with_incompatible_block_size_with_out_features(self):
block_size = 3
model = self.get_model()
config = C3AConfig(target_modules=["lin1"], block_size=block_size)
msg = f"The block size should be a factor of the output size. However, the output size is {model.lin1.out_features} and the block size is {block_size}"
with pytest.raises(ValueError, match=msg):
get_peft_model(model, config)
class TestRoadInitialization:
torch_device = infer_device()
def get_model(self):
class MLP(nn.Module):
def __init__(self, bias=True):
super().__init__()
self.lin0 = nn.Linear(10, 30, bias=bias)
self.lin1 = nn.Linear(30, 2, bias=bias)
def forward(self, X):
X = self.lin0(X)
X = self.lin1(X)
return X
return MLP().to(self.torch_device)
def get_conv2d_model(self):
class MyModule(nn.Module):
def __init__(self):
super().__init__()
# choose a large weight so that averages are close to expected values
self.linear = nn.Linear(1000, 1000)
self.embed = nn.Embedding(1000, 1000)
self.conv2d = nn.Conv2d(100, 100, 3)
def forward(self, x):
x_int = (100 * x).int()
x_4d = x.flatten().reshape(1, 100, 10, 10)
return self.linear(x), self.embed(x_int), self.conv2d(x_4d)
return MyModule().eval().to(self.torch_device)
def test_road_default_initialization(self):
torch.manual_seed(0)
model = self.get_model()
config = RoadConfig(target_modules=["lin0"], group_size=2)
model = get_peft_model(model, config)
weight_alpha = model.lin0.road_alpha["default"].data
weight_theta = model.lin0.road_theta["default"].data
assert torch.allclose(weight_alpha, torch.ones_like(weight_alpha))
assert torch.allclose(weight_theta, torch.zeros_like(weight_theta))
def test_road_with_odd_group_size(self):
group_size = 3 # odd values are not allowed
msg = f"The group_size must be divisible by 2 when using RoadLayer, but got {group_size}."
with pytest.raises(ValueError, match=re.escape(msg)):
RoadConfig(group_size=group_size)
def test_road_with_too_large_group_size(self):
group_size = 64 # larger than out_features
msg = (
f"The out_features of the base layer must be divisible by group_size ({group_size}) when using RoadLayer."
)
model = self.get_model()
config = RoadConfig(target_modules=["lin0"], group_size=group_size)
with pytest.raises(ValueError, match=re.escape(msg)):
get_peft_model(model, config)
def test_road_with_incompatible_group_size_with_out_features(self):
group_size = 4 # even, but 30 does not divide by 4
model = self.get_model()
config = RoadConfig(target_modules=["lin0"], group_size=group_size)
msg = (
f"The out_features of the base layer must be divisible by group_size ({group_size}) when using RoadLayer."
)
with pytest.raises(ValueError, match=re.escape(msg)):
get_peft_model(model, config)
def test_road_with_conv2d_layer(self):
model = self.get_conv2d_model()
config = RoadConfig(target_modules=["conv2d"], group_size=2)
msg = "Target module Conv2d(100, 100, kernel_size=(3, 3), stride=(1, 1)) is not supported. Currently, only the following modules are supported: `torch.nn.Linear`."
with pytest.raises(ValueError, match=re.escape(msg)):
get_peft_model(model, config)
class TestNoInfiniteRecursionDeepspeed:
# see #1892 for details
classes = [
PeftModel,
PeftMixedModel,
PeftModelForSequenceClassification,
PeftModelForQuestionAnswering,
PeftModelForTokenClassification,
PeftModelForCausalLM,
PeftModelForSeq2SeqLM,
PeftModelForFeatureExtraction,
]
@pytest.fixture
def wrap_init(self):
# emulates the wrapper from DeepSpeed
import functools
def decorator(f):
@functools.wraps(f)
def wrapper(self, *args, **kwargs):
hasattr(self, "abc") # any hasattr will do
f(self, *args, **kwargs)
return wrapper
return decorator
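The `wrap_init` fixture emulates how DeepSpeed decorates `__init__`: the key detail is that `hasattr` is called on a half-constructed instance, which used to recurse infinitely through PEFT's `__getattr__` (see #1892). A stdlib-only sketch of the wrapping mechanics, using an illustrative class in place of a PEFT model:

```python
import functools

def wrap_init(f):
    @functools.wraps(f)
    def wrapper(self, *args, **kwargs):
        # DeepSpeed probes the instance before the real __init__ has run;
        # any hasattr call will do, as in the fixture above.
        hasattr(self, "abc")
        f(self, *args, **kwargs)
    return wrapper

class Plain:
    def __init__(self, value):
        self.value = value

original_init = Plain.__init__
try:
    Plain.__init__ = wrap_init(Plain.__init__)
    obj = Plain(42)
finally:
    # restore the original __init__ to avoid side effects,
    # mirroring the try/finally in the test below
    Plain.__init__ = original_init

assert obj.value == 42
```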
@pytest.fixture
def model(self):
class MyModule(nn.Module):
def __init__(self):
super().__init__()
self.linear = nn.Linear(10, 10)
# to emulate LMs:
self.prepare_inputs_for_generation = None
self._prepare_encoder_decoder_kwargs_for_generation = None
return MyModule()
@pytest.mark.parametrize("cls", classes)
def test_no_infinite_recursion(self, cls, model, wrap_init):
original_init = cls.__init__
try:
cls.__init__ = wrap_init(cls.__init__)
# this would trigger an infinite loop before the fix in 1892
cls(model, LoraConfig(target_modules=["linear"]))
finally:
# ensure there are no side effects of this test
cls.__init__ = original_init
class TestLoadAdapterOfflineMode:
base_model = "hf-internal-testing/tiny-random-OPTForCausalLM"
peft_model_id = "peft-internal-testing/tiny-OPTForCausalLM-lora"
# make sure that PEFT honors offline mode
@contextmanager
def hub_offline_ctx(self):
# this is required to simulate offline mode, setting the env var dynamically inside the test does not work
# because the value is checked only once at the start of the session
with patch("huggingface_hub.constants.HF_HUB_OFFLINE", True):
reset_sessions()
yield
reset_sessions()
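The offline fixture works by patching a module-level constant for the duration of the context; `unittest.mock.patch` restores the original value on exit, even on error. A self-contained sketch using a stand-in namespace instead of the real `huggingface_hub.constants`:

```python
from contextlib import contextmanager
from types import SimpleNamespace
from unittest.mock import patch

# Stand-in for huggingface_hub.constants; the real module caches the flag
# at session start, which is why the fixture also calls reset_sessions().
constants = SimpleNamespace(HF_HUB_OFFLINE=False)

@contextmanager
def hub_offline_ctx():
    # patch.object swaps the attribute inside the block and restores it after
    with patch.object(constants, "HF_HUB_OFFLINE", True):
        yield

with hub_offline_ctx():
    inside = constants.HF_HUB_OFFLINE
after = constants.HF_HUB_OFFLINE

assert inside is True
assert after is False
```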
# TODO remove when/if Hub is more stable
@pytest.mark.xfail(reason="Test is flaky on CI", raises=HfHubHTTPError)
def test_load_from_hub_then_offline_mode(self):
# this uses LoRA but it's the same mechanism for other methods
base_model = AutoModelForCausalLM.from_pretrained(self.base_model)
# first ensure that the adapter model has been downloaded
PeftModel.from_pretrained(base_model, self.peft_model_id)
del base_model
base_model = AutoModelForCausalLM.from_pretrained(self.base_model)
with self.hub_offline_ctx():
# does not raise
PeftModel.from_pretrained(base_model, self.peft_model_id)
@pytest.fixture
def changed_default_cache_dir(self, tmp_path, monkeypatch):
# ensure that this test does not interact with other tests that may use the HF cache
monkeypatch.setattr("huggingface_hub.constants.HF_HOME", tmp_path)
monkeypatch.setattr("huggingface_hub.constants.HF_HUB_CACHE", tmp_path / "hub")
monkeypatch.setattr("huggingface_hub.constants.HF_TOKEN_PATH", tmp_path / "token")
def load_checkpoints(self, cache_dir):
# download model and lora checkpoint to a specific cache dir
snapshot_download(self.base_model, cache_dir=cache_dir)
snapshot_download(self.peft_model_id, cache_dir=cache_dir)
# TODO remove when/if Hub is more stable
@pytest.mark.xfail(reason="Test is flaky on CI", raises=LocalEntryNotFoundError)
def test_load_checkpoint_offline_non_default_cache_dir(self, changed_default_cache_dir, tmp_path):
# See #2373 for context
self.load_checkpoints(tmp_path)
with self.hub_offline_ctx():
base_model = AutoModelForCausalLM.from_pretrained(self.base_model, cache_dir=tmp_path)
PeftModel.from_pretrained(base_model, self.peft_model_id, cache_dir=tmp_path)
class TestCustomModelConfigWarning:
# Check potential warnings when the user provided base_model_name_or_path is overridden by PEFT. See #2001 for
# context. We use LoRA for this test but the same applies to other methods
@pytest.fixture
def custom_module(self):
class MyModule(nn.Module):
def __init__(self):
super().__init__()
self.lin = nn.Linear(10, 10)
return MyModule()
def test_no_warning_by_default_transformers_model(self, recwarn):
# first a sanity test that there is no warning by default when using a model from transformers
model = AutoModelForCausalLM.from_pretrained("hf-internal-testing/tiny-random-OPTForCausalLM")
get_peft_model(model, LoraConfig())
for warning in recwarn.list:
assert "renamed" not in str(warning.message)
def test_no_warning_by_default_custom_model(self, custom_module, recwarn):
# same as above but with a custom model
get_peft_model(custom_module, LoraConfig(target_modules=["lin"]))
for warning in recwarn.list:
assert "renamed" not in str(warning.message)
def test_warning_name_transformers_model(self, recwarn):
# The base_model_name_or_path provided by the user is overridden.
model = AutoModelForCausalLM.from_pretrained("hf-internal-testing/tiny-random-OPTForCausalLM")
custom_name = "custom_name"
get_peft_model(model, LoraConfig(base_model_name_or_path=custom_name))
msg = f"was renamed from '{custom_name}' to 'hf-internal-testing/tiny-random-OPTForCausalLM'"
assert any(msg in str(warning.message) for warning in recwarn.list)
def test_warning_name_custom_model(self, custom_module, recwarn):
custom_name = "custom_name"
get_peft_model(custom_module, LoraConfig(target_modules=["lin"], base_model_name_or_path=custom_name))
msg = f"was renamed from '{custom_name}' to 'None'"
assert any(msg in str(warning.message) for warning in recwarn.list)
def test_warning_name_custom_model_with_custom_name(self, custom_module, recwarn):
custom_name = "custom_name"
custom_module.name_or_path = "foobar"
get_peft_model(custom_module, LoraConfig(target_modules=["lin"], base_model_name_or_path=custom_name))
msg = f"was renamed from '{custom_name}' to 'foobar'"
assert any(msg in str(warning.message) for warning in recwarn.list)
class TestLowCpuMemUsage:
"""Test for the low CPU memory usage option for loading PEFT models.
Note that we have `test_load_model_low_cpu_mem_usage` in the custom model and stable diffusion tests. Those are
broad tests (i.e. testing all the supported PEFT methods) but not very deep (only testing if loading works and the
device is correctly set). The test class here goes deeper but only tests LoRA, as checking all PEFT methods would
be too much.
"""
# test on CPU and optionally on accelerator device
devices = ["cpu"]
_device = infer_device()
if _device != "cpu":
devices.append(_device)
model_id = "hf-internal-testing/tiny-random-OPTForCausalLM"
def get_model(self):
return AutoModelForCausalLM.from_pretrained(self.model_id)
@pytest.fixture(scope="class")
def lora_config(self):
return LoraConfig(init_lora_weights=False, target_modules="all-linear")
@pytest.fixture(scope="class")
def lora_path(self, tmp_path_factory, lora_config):
torch.manual_seed(0)
tmp_path = tmp_path_factory.mktemp("lora")
model = self.get_model()
model = get_peft_model(model, lora_config)
model.save_pretrained(tmp_path)
return tmp_path
@pytest.fixture(scope="class")
def inputs(self):
return {"input_ids": torch.randint(0, 100, (1, 10)), "attention_mask": torch.ones(1, 10)}
@pytest.mark.parametrize("device", devices)
def test_from_pretrained_low_cpu_mem_usage_works(self, device, inputs, lora_path):
model = self.get_model().to(device)
inputs = {k: v.to(device) for k, v in inputs.items()}
model = PeftModel.from_pretrained(model, lora_path, torch_device=device).eval()
device_set_not_low_cpu_mem = {p.device.type for p in model.parameters()}
logits_not_low_cpu_mem = model(**inputs).logits
del model
model = self.get_model().to(device)
model = PeftModel.from_pretrained(model, lora_path, low_cpu_mem_usage=True, torch_device=device).eval()
device_set_low_cpu_mem = {p.device.type for p in model.parameters()}
logits_low_cpu_mem = model(**inputs).logits
assert device_set_low_cpu_mem == device_set_not_low_cpu_mem
assert torch.allclose(logits_low_cpu_mem, logits_not_low_cpu_mem, atol=1e-6, rtol=1e-6)
@pytest.mark.parametrize("device", devices)
def test_load_adapter_low_cpu_mem_usage_works(self, device, inputs, lora_path, lora_config):
model = self.get_model().to(device)
inputs = {k: v.to(device) for k, v in inputs.items()}
torch.manual_seed(0)
model = get_peft_model(model, lora_config)
model.load_adapter(lora_path, adapter_name="other", torch_device=device)
model.set_adapter("other")
model.eval()
device_set_not_low_cpu_mem = {p.device.type for p in model.parameters()}
logits_not_low_cpu_mem = model(**inputs).logits
del model
model = self.get_model().to(device)
torch.manual_seed(0)
model = get_peft_model(model, lora_config)
model.load_adapter(lora_path, adapter_name="other", low_cpu_mem_usage=True, torch_device=device)
model.set_adapter("other")
model.eval()
device_set_low_cpu_mem = {p.device.type for p in model.parameters()}
logits_low_cpu_mem = model(**inputs).logits
assert device_set_low_cpu_mem == device_set_not_low_cpu_mem
assert torch.allclose(logits_low_cpu_mem, logits_not_low_cpu_mem, atol=1e-6, rtol=1e-6)
@pytest.mark.parametrize("device", devices)
def test_get_peft_model_low_cpu_mem_usage_works(self, device, inputs):
# when calling get_peft_model, the PEFT weights will not be initialized on device but remain on meta
model = self.get_model().to(device)
model = get_peft_model(model, LoraConfig(target_modules="all-linear"), low_cpu_mem_usage=True)
devices_lora_weights = {p.device for n, p in model.named_parameters() if "lora_" in n}
expected = {torch.device("meta")}
assert devices_lora_weights == expected
@pytest.mark.parametrize("device", devices)
def test_get_peft_model_with_task_type_low_cpu_mem_usage_works(self, device, inputs):
# same as the previous test, but pass the task_type argument
model = self.get_model().to(device)
model = get_peft_model(
model, LoraConfig(target_modules="all-linear", task_type="CAUSAL_LM"), low_cpu_mem_usage=True
)
devices_lora_weights = {p.device for n, p in model.named_parameters() if "lora_" in n}
expected = {torch.device("meta")}
assert devices_lora_weights == expected
@pytest.mark.parametrize("device", devices)
def test_inject_adapter_low_cpu_mem_usage_works(self, device, inputs, lora_path, lora_config):
# external libs like transformers and diffusers use inject_adapter_in_model, let's check that this also works
model = self.get_model().to(device)
inputs = {k: v.to(device) for k, v in inputs.items()}
torch.manual_seed(0)
model = get_peft_model(model, lora_config)
model.load_adapter(lora_path, adapter_name="other", torch_device=device)
model.set_adapter("other")
model.eval()
device_set_not_low_cpu_mem = {p.device.type for p in model.parameters()}
logits_not_low_cpu_mem = model(**inputs).logits
del model
torch.manual_seed(0)
model = self.get_model().to(device)
inject_adapter_in_model(lora_config, model, low_cpu_mem_usage=True)
device_set_before_loading = {p.device.type for p in model.parameters()}
# at this stage, lora weights are still on meta device
assert device_set_before_loading == {"meta", device}
state_dict = load_file(lora_path / "adapter_model.safetensors")
remapped_dict = {}
prefix = "base_model.model."
for key, val in state_dict.items():
new_key = key[len(prefix) :]
remapped_dict[new_key] = val.to(device)
errors = set_peft_model_state_dict(model, remapped_dict, low_cpu_mem_usage=True)
# sanity check: no unexpected keys
assert not errors.unexpected_keys
model.eval()
device_set_low_cpu_mem = {p.device.type for p in model.parameters()}
logits_low_cpu_mem = model(**inputs).logits
assert device_set_low_cpu_mem == device_set_not_low_cpu_mem
assert torch.allclose(logits_low_cpu_mem, logits_not_low_cpu_mem, atol=1e-6, rtol=1e-6)
############################
# tests for PeftMixedModel #
############################
@pytest.mark.parametrize("device", devices)
def test_mixed_model_from_pretrained_low_cpu_mem_usage_works(self, device, inputs, lora_path):
model = self.get_model().to(device)
inputs = {k: v.to(device) for k, v in inputs.items()}
model = PeftMixedModel.from_pretrained(model, lora_path, torch_device=device).eval()
device_set_not_low_cpu_mem = {p.device.type for p in model.parameters()}
logits_not_low_cpu_mem = model(**inputs).logits
del model
model = self.get_model().to(device)
model = PeftMixedModel.from_pretrained(model, lora_path, low_cpu_mem_usage=True, torch_device=device).eval()
device_set_low_cpu_mem = {p.device.type for p in model.parameters()}
logits_low_cpu_mem = model(**inputs).logits
assert device_set_low_cpu_mem == device_set_not_low_cpu_mem
assert torch.allclose(logits_low_cpu_mem, logits_not_low_cpu_mem, atol=1e-6, rtol=1e-6)
@pytest.mark.parametrize("device", devices)
def test_mixed_model_load_adapter_low_cpu_mem_usage_works(self, device, inputs, lora_path, lora_config):
model = self.get_model().to(device)
inputs = {k: v.to(device) for k, v in inputs.items()}
torch.manual_seed(0)
model = PeftMixedModel.from_pretrained(model, lora_path)
model.load_adapter(lora_path, adapter_name="other", torch_device=device)
model.set_adapter("other")
model.eval()
device_set_not_low_cpu_mem = {p.device.type for p in model.parameters()}
logits_not_low_cpu_mem = model(**inputs).logits
del model
model = self.get_model().to(device)
torch.manual_seed(0)
model = PeftMixedModel.from_pretrained(model, lora_path)
model.load_adapter(lora_path, adapter_name="other", low_cpu_mem_usage=True, torch_device=device)
model.set_adapter("other")
model.eval()
device_set_low_cpu_mem = {p.device.type for p in model.parameters()}
logits_low_cpu_mem = model(**inputs).logits
assert device_set_low_cpu_mem == device_set_not_low_cpu_mem
assert torch.allclose(logits_low_cpu_mem, logits_not_low_cpu_mem, atol=1e-6, rtol=1e-6)
def test_from_pretrained_missing_keys_warning(recwarn, tmp_path):
# For more context, see issue 2115
# When loading a PEFT adapter and we're missing a PEFT-specific weight, there should be a warning.
model = AutoModelForCausalLM.from_pretrained("hf-internal-testing/tiny-random-OPTForCausalLM")
config = LoraConfig()
model = get_peft_model(model, config)
state_dict = model.state_dict()
# first, sanity check that there are no warnings if no key is missing
model.save_pretrained(tmp_path)
del model
model = AutoModelForCausalLM.from_pretrained("hf-internal-testing/tiny-random-OPTForCausalLM")
model = PeftModel.from_pretrained(model, tmp_path)
msg = "Found missing adapter keys"
assert not any(msg in str(w.message) for w in recwarn.list)
# remove a key from the state_dict
missing_key = "base_model.model.model.decoder.layers.0.self_attn.v_proj.lora_A.default.weight"
def new_state_dict():
return {k: v for k, v in state_dict.items() if k != missing_key}
model.state_dict = new_state_dict
model.save_pretrained(tmp_path)
del model
model = AutoModelForCausalLM.from_pretrained("hf-internal-testing/tiny-random-OPTForCausalLM")
model = PeftModel.from_pretrained(model, tmp_path)
assert any(msg in str(w.message) for w in recwarn.list)
assert any(missing_key in str(w.message) for w in recwarn.list)
class TestNamingConflictWarning:
"""
Tests for warnings related to naming conflicts between adapter names and tuner prefixes. References: Issue 2252
"""
@pytest.fixture(autouse=True)
def setup(self):
self.peft_config = LoraConfig()
self.prefix = PEFT_TYPE_TO_PREFIX_MAPPING[self.peft_config.peft_type]
self.base_model = AutoModelForCausalLM.from_pretrained("hf-internal-testing/tiny-random-OPTForCausalLM")
def _save_and_reload_model(self, model, adapter_name, tmp_path):
# Helper method to save and reload the PEFT model
model.save_pretrained(tmp_path, selected_adapters=[adapter_name])
del model
reloaded_base_model = AutoModelForCausalLM.from_pretrained(tmp_path / adapter_name)
return PeftModel.from_pretrained(reloaded_base_model, tmp_path / adapter_name)
def test_no_warning_without_naming_conflict_get_peft_model(self, recwarn):
# No warning should be raised when there is no naming conflict during get_peft_model.
non_conflict_adapter = "adapter"
_ = get_peft_model(self.base_model, self.peft_config, adapter_name=non_conflict_adapter)
expected_msg = f"Adapter name {non_conflict_adapter} should not be contained in the prefix {self.prefix}."
assert not any(expected_msg in str(w.message) for w in recwarn.list)
def test_no_warning_without_naming_conflict_add_adapter(self, recwarn):
# No warning should be raised when adding an adapter without naming conflict.
non_conflict_adapter = "adapter"
other_non_conflict_adapter = "other_adapter"
model = get_peft_model(self.base_model, self.peft_config, adapter_name=non_conflict_adapter)
_ = model.add_adapter(other_non_conflict_adapter, self.peft_config)
expected_msg = (
f"Adapter name {other_non_conflict_adapter} should not be contained in the prefix {self.prefix}."
)
assert not any(expected_msg in str(w.message) for w in recwarn.list)
def test_no_warning_without_naming_conflict_save_and_load(self, recwarn, tmp_path):
# No warning should be raised when saving and loading the model without naming conflict.
non_conflict_adapter = "adapter"
model = get_peft_model(self.base_model, self.peft_config, adapter_name=non_conflict_adapter)
_ = self._save_and_reload_model(model, non_conflict_adapter, tmp_path)
expected_msg = f"Adapter name {non_conflict_adapter} should not be contained in the prefix {self.prefix}."
assert not any(expected_msg in str(w.message) for w in recwarn.list)
def test_warning_naming_conflict_get_peft_model(self, recwarn):
# Warning is raised when the adapter name conflicts with the prefix in get_peft_model.
conflicting_adapter_name = self.prefix[:-1]
_ = get_peft_model(self.base_model, self.peft_config, adapter_name=conflicting_adapter_name)
expected_msg = f"Adapter name {conflicting_adapter_name} should not be contained in the prefix {self.prefix}."
assert any(expected_msg in str(w.message) for w in recwarn.list)
def test_warning_naming_conflict_add_adapter(self, recwarn):
# Warning is raised when adding an adapter with a name that conflicts with the prefix.
conflicting_adapter = self.prefix[1:]
non_conflict_adapter = "adapter"
model = get_peft_model(self.base_model, self.peft_config, adapter_name=non_conflict_adapter)
_ = model.add_adapter(conflicting_adapter, self.peft_config)
expected_msg = f"Adapter name {conflicting_adapter} should not be contained in the prefix {self.prefix}."
assert any(expected_msg in str(w.message) for w in recwarn.list)
def test_warning_naming_conflict_save_and_load(self, recwarn, tmp_path):
# Warning is raised when saving and loading the model with a naming conflict.
conflicting_adapter = self.prefix[:-1]
model = get_peft_model(self.base_model, self.peft_config, adapter_name=conflicting_adapter)
_ = self._save_and_reload_model(model, conflicting_adapter, tmp_path)
expected_msg = f"Adapter name {conflicting_adapter} should not be contained in the prefix {self.prefix}."
assert any(expected_msg in str(w.message) for w in recwarn.list)
class TestCordaInitialization:
"""Test class to check the initialization of CorDA adapters."""
torch_device = infer_device()
def get_model(self):
class MyModule(nn.Module):
def __init__(self):
super().__init__()
# choose a large weight so that averages are close to expected values
self.linear = nn.Linear(1000, 1000)
def forward(self, x):
return self.linear(x)
return MyModule().eval().to(self.torch_device)
@pytest.fixture
def data(self):
# larger data is required to pass KPM test
torch.manual_seed(233)
return torch.rand(1000, 1000).to(self.torch_device)
@pytest.mark.parametrize("corda_method", ("ipm", "kpm"))
def test_lora_corda_no_redundant_fields(self, data, corda_method):
original_model = self.get_model()
model = deepcopy(original_model)
corda_config = CordaConfig(
corda_method=corda_method,
)
config = LoraConfig(
init_lora_weights="corda",
target_modules=["linear"],
corda_config=corda_config,
)
preprocess_corda(
model,
config,
run_model=lambda: model(data),
hooked_model=model,
)
peft_model = get_peft_model(model, config)
# check if the redundant fields are removed
assert not hasattr(peft_model.base_model.linear, "sample_count")
assert not hasattr(peft_model.base_model.linear, "covariance_matrix")
assert not hasattr(peft_model.base_model.linear, "corda_method")
assert not hasattr(peft_model.base_model.linear, "rank")
assert not hasattr(peft_model.base_model.linear, "eigens")
# legacy debug fields
assert not hasattr(peft_model.base_model.linear, "mean")
assert not hasattr(peft_model.base_model.linear, "std")
@pytest.mark.parametrize("corda_method", ("ipm", "kpm"))
def test_lora_corda_sample_count(self, data, corda_method):
original_model = self.get_model()
model = deepcopy(original_model)
corda_config = CordaConfig(
corda_method=corda_method,
prune_temporary_fields=False,
)
config = LoraConfig(
init_lora_weights="corda",
target_modules=["linear"],
corda_config=corda_config,
)
preprocess_corda(
model,
config,
run_model=lambda: [model(data), model(data)], # running model twice to test `sample_count`
hooked_model=model,
)
# covariance of linear should be data.T @ data
layer = model.linear
assert hasattr(layer, "covariance_matrix")
assert torch.allclose(layer.covariance_matrix, data.T @ data, atol=1e-06)
# sample count of linear should be 2
assert hasattr(layer, "sample_count")
assert layer.sample_count == 2
@pytest.mark.parametrize("corda_method", ("ipm", "kpm"))
def test_lora_corda_hook_unregister(self, data, corda_method):
original_model = self.get_model()
model = deepcopy(original_model)
hook_call_count = 0
def hook(*args):
nonlocal hook_call_count
hook_call_count += 1
model.linear.register_forward_hook(hook)
corda_config = CordaConfig(
corda_method=corda_method,
prune_temporary_fields=False,
)
config = LoraConfig(
init_lora_weights="corda",
target_modules=["linear"],
corda_config=corda_config,
)
preprocess_corda(
model,
config,
run_model=lambda: model(data),
hooked_model=model,
)
# after preprocessing, the external and internal hooks should each have run once
assert hook_call_count == 1
assert model.linear.sample_count == 1
# run preprocessed model once
model(data)[0]
# the external hook should be kept, but the internal hook should be gone
assert hook_call_count == 2
assert model.linear.sample_count == 1
@pytest.mark.parametrize("corda_method", ("ipm", "kpm"))
def test_lora_corda_linear_init_default(self, data, tmp_path, corda_method):
original_model = self.get_model()
model = deepcopy(original_model)
output_base = model(data)[0]
corda_config = CordaConfig(
cache_file=tmp_path / "corda_cache.pt",
covariance_file=tmp_path / "covariance_cache.pt",
corda_method=corda_method,
)
config = LoraConfig(
init_lora_weights="corda",
target_modules=["linear"],
corda_config=corda_config,
)
preprocess_corda(
model,
config,
run_model=lambda: model(data),
hooked_model=model,
)
peft_model = get_peft_model(model, config)
# check that the adapter performs an identity transformation
assert torch.allclose(output_base, peft_model(data)[0], atol=1e-06)
# modify the weights, or else the adapter performs an identity transformation
peft_model.base_model.linear.lora_B["default"].weight.data *= 2.0
output_corda = peft_model(data)[0]
# sanity check
tol = 1e-06
assert not torch.allclose(output_base, output_corda, atol=tol, rtol=tol)
# if the SVD result is loaded from cache, the output should be the same
model = deepcopy(original_model)
config = LoraConfig(
init_lora_weights="corda",
target_modules=["linear"],
corda_config=CordaConfig(cache_file=tmp_path / "corda_cache.pt", corda_method=corda_method),
)
preprocess_corda(model, config)
peft_model = get_peft_model(model, config)
peft_model.base_model.linear.lora_B["default"].weight.data *= 2.0
assert torch.allclose(output_corda, peft_model(data)[0], atol=1e-06)
# if the covariance is loaded from cache, the output should be the same
model = deepcopy(original_model)
config = LoraConfig(
init_lora_weights="corda",
target_modules=["linear"],
corda_config=CordaConfig(covariance_file=tmp_path / "covariance_cache.pt", corda_method=corda_method),
)
preprocess_corda(model, config)
peft_model = get_peft_model(model, config)
peft_model.base_model.linear.lora_B["default"].weight.data *= 2.0
assert torch.allclose(output_corda, peft_model(data)[0], atol=1e-06)
@pytest.mark.parametrize("corda_method", ("ipm", "kpm"))
def test_lora_corda_hooked_model_linear_init_default(self, data, tmp_path, corda_method):
original_model = self.get_model()
model = deepcopy(original_model)
hooked_model = deepcopy(model)
output_base = model(data)[0]
corda_config = CordaConfig(
cache_file=tmp_path / "corda_cache.pt",
covariance_file=tmp_path / "covariance_cache.pt",
corda_method=corda_method,
)
config = LoraConfig(
init_lora_weights="corda",
target_modules=["linear"],
corda_config=corda_config,
)
# difference from the above test: this test uses a copied model as hooked model
preprocess_corda(
model,
config,
run_model=lambda: hooked_model(data),
hooked_model=hooked_model,
)
peft_model = get_peft_model(model, config)
# check that the adapter performs an identity transformation
assert torch.allclose(output_base, peft_model(data)[0], atol=1e-06)
# modify the weights, or else the adapter performs an identity transformation
peft_model.base_model.linear.lora_B["default"].weight.data *= 2.0
output_corda = peft_model(data)[0]
# sanity check
tol = 1e-06
assert not torch.allclose(output_base, output_corda, atol=tol, rtol=tol)
# if the SVD result is loaded from cache, the output should be the same
model = deepcopy(original_model)
config = LoraConfig(
init_lora_weights="corda",
target_modules=["linear"],
corda_config=CordaConfig(cache_file=tmp_path / "corda_cache.pt", corda_method=corda_method),
)
preprocess_corda(model, config)
peft_model = get_peft_model(model, config)
peft_model.base_model.linear.lora_B["default"].weight.data *= 2.0
assert torch.allclose(output_corda, peft_model(data)[0], atol=1e-06)
# if the covariance is loaded from cache, the output should be the same
model = deepcopy(original_model)
config = LoraConfig(
init_lora_weights="corda",
target_modules=["linear"],
corda_config=CordaConfig(covariance_file=tmp_path / "covariance_cache.pt", corda_method=corda_method),
)
preprocess_corda(model, config)
peft_model = get_peft_model(model, config)
peft_model.base_model.linear.lora_B["default"].weight.data *= 2.0
assert torch.allclose(output_corda, peft_model(data)[0], atol=1e-06)
@pytest.mark.parametrize("corda_method", ("ipm", "kpm"))
def test_lora_corda_linear_init_default_with_rank_pattern(self, data, tmp_path, corda_method):
original_model = self.get_model()
model = deepcopy(original_model)
output_base = model(data)[0]
corda_config = CordaConfig(
cache_file=tmp_path / "corda_cache.pt",
covariance_file=tmp_path / "covariance_cache.pt",
corda_method=corda_method,
)
config = LoraConfig(
rank_pattern={"linear": 8, "embed": 16, "conv2d": 32},
init_lora_weights="corda",
target_modules=["linear"],
corda_config=corda_config,
)
preprocess_corda(
model,
config,
run_model=lambda: model(data),
)
peft_model = get_peft_model(model, config)
# check that the adapter performs an identity transformation
assert torch.allclose(output_base, peft_model(data)[0], atol=1e-06)
# modify the weights, or else the adapter performs an identity transformation
peft_model.base_model.linear.lora_B["default"].weight.data *= 2.0
output_corda = peft_model(data)[0]
# sanity check
tol = 1e-06
assert not torch.allclose(output_base, output_corda, atol=tol, rtol=tol)
# if the SVD result is loaded from cache, the output should be the same
model = deepcopy(original_model)
config = LoraConfig(
rank_pattern={"linear": 8, "embed": 16, "conv2d": 32},
init_lora_weights="corda",
target_modules=["linear"],
corda_config=CordaConfig(cache_file=tmp_path / "corda_cache.pt", corda_method=corda_method),
)
preprocess_corda(model, config)
peft_model = get_peft_model(model, config)
peft_model.base_model.linear.lora_B["default"].weight.data *= 2.0
assert torch.allclose(output_corda, peft_model(data)[0], atol=1e-06)
# if the covariance is loaded from cache, the output should be the same
model = deepcopy(original_model)
config = LoraConfig(
rank_pattern={"linear": 8, "embed": 16, "conv2d": 32},
init_lora_weights="corda",
target_modules=["linear"],
corda_config=CordaConfig(covariance_file=tmp_path / "covariance_cache.pt", corda_method=corda_method),
)
preprocess_corda(model, config)
peft_model = get_peft_model(model, config)
peft_model.base_model.linear.lora_B["default"].weight.data *= 2.0
assert torch.allclose(output_corda, peft_model(data)[0], atol=1e-06)
@pytest.mark.parametrize("corda_method", ("ipm", "kpm"))
def test_lora_corda_conversion_same_output_after_loading(self, data, tmp_path, corda_method):
model = self.get_model()
output_base = model(data)[0]
corda_config = CordaConfig(corda_method=corda_method)
config = LoraConfig(init_lora_weights="corda", target_modules=["linear"], r=8, corda_config=corda_config)
preprocess_corda(model, config, run_model=lambda: model(data), hooked_model=model)
peft_model = get_peft_model(deepcopy(model), config)
# save the initial model
peft_model.peft_config["default"].init_lora_weights = True
peft_model.save_pretrained(tmp_path / "init-model")
peft_model.peft_config["default"].init_lora_weights = "corda"
# modify the weights, or else the adapter performs an identity transformation
peft_model.base_model.linear.lora_B["default"].weight.data *= 2.0
output_corda = peft_model(data)[0]
# sanity check
tol = 1e-06
assert not torch.allclose(output_base, output_corda, atol=tol, rtol=tol)
# save the model normally
peft_model.save_pretrained(tmp_path / "corda-model")
model_loaded = PeftModel.from_pretrained(deepcopy(model), tmp_path / "corda-model")
output_loaded = model_loaded(data)[0]
assert torch.allclose(output_corda, output_loaded, atol=tol, rtol=tol)
# sanity check: ranks should still be 8 as initially
assert model_loaded.peft_config["default"].r == 8
assert model_loaded.base_model.model.linear.lora_A["default"].weight.shape[0] == 8
# sanity check: the base model weights were indeed changed
assert not torch.allclose(
model.linear.weight, model_loaded.base_model.model.linear.base_layer.weight, atol=tol, rtol=tol
)
# save the model with conversion
peft_config_keys_before = list(peft_model.peft_config.keys())
peft_config_dict_before = peft_model.peft_config["default"].to_dict()
peft_model.save_pretrained(
tmp_path / "corda-model-converted", path_initial_model_for_weight_conversion=tmp_path / "init-model"
)
peft_config_keys_after = list(peft_model.peft_config.keys())
peft_config_dict_after = peft_model.peft_config["default"].to_dict()
assert peft_config_keys_before == peft_config_keys_after
assert peft_config_dict_before == peft_config_dict_after
model_converted = PeftModel.from_pretrained(deepcopy(model), tmp_path / "corda-model-converted")
output_converted = model_converted(data)[0]
assert torch.allclose(output_corda, output_converted, atol=tol, rtol=tol)
# rank should be double what it was initially
assert model_converted.peft_config["default"].r == 16
assert model_converted.base_model.model.linear.lora_A["default"].weight.shape[0] == 16
# base model weights should be the same as the initial model
assert torch.allclose(
model.linear.weight, model_converted.base_model.model.linear.base_layer.weight, atol=tol, rtol=tol
)
@pytest.mark.parametrize("corda_method", ("ipm", "kpm"))
def test_lora_corda_conversion_same_output_after_loading_with_rank_pattern(self, data, tmp_path, corda_method):
# same as above, but using rank_pattern
model = self.get_model()
output_base = model(data)[0]
# use rank_pattern here; note that since there is only a single linear layer, r is completely overridden
corda_config = CordaConfig(corda_method=corda_method)
config = LoraConfig(
init_lora_weights="corda",
target_modules=["linear"],
r=8,
rank_pattern={"linear": 32},
corda_config=corda_config,
)
preprocess_corda(model, config, run_model=lambda: model(data), hooked_model=model)
peft_model = get_peft_model(deepcopy(model), config)
# save the initial model
peft_model.peft_config["default"].init_lora_weights = True
peft_model.save_pretrained(tmp_path / "init-model")
peft_model.peft_config["default"].init_lora_weights = "corda"
# modify the weights, or else the adapter performs an identity transformation
peft_model.base_model.linear.lora_B["default"].weight.data *= 2.0
output_corda = peft_model(data)[0]
# sanity check
tol = 1e-06
assert not torch.allclose(output_base, output_corda, atol=tol, rtol=tol)
# save the model normally
peft_model.save_pretrained(tmp_path / "corda-model")
model_loaded = PeftModel.from_pretrained(deepcopy(model), tmp_path / "corda-model")
output_loaded = model_loaded(data)[0]
assert torch.allclose(output_corda, output_loaded, atol=tol, rtol=tol)
# sanity check: config r is still 8, but rank_pattern overrides the actual rank to 32
assert model_loaded.peft_config["default"].r == 8
assert model_loaded.base_model.model.linear.lora_A["default"].weight.shape[0] == 32
# sanity check: the base model weights were indeed changed
assert not torch.allclose(
model.linear.weight, model_loaded.base_model.model.linear.base_layer.weight, atol=tol, rtol=tol
)
# save the model with conversion
peft_model.save_pretrained(
tmp_path / "corda-model-converted", path_initial_model_for_weight_conversion=tmp_path / "init-model"
)
model_converted = PeftModel.from_pretrained(deepcopy(model), tmp_path / "corda-model-converted")
output_converted = model_converted(data)[0]
assert torch.allclose(output_corda, output_converted, atol=tol, rtol=tol)
# rank should be double what it was initially
assert model_converted.peft_config["default"].r == 16
assert model_converted.base_model.model.linear.lora_A["default"].weight.shape[0] == 64
# base model weights should be the same as the initial model
assert torch.allclose(
model.linear.weight, model_converted.base_model.model.linear.base_layer.weight, atol=tol, rtol=tol
)
@pytest.mark.parametrize("corda_method", ("ipm", "kpm"))
def test_lora_corda_conversion_same_output_after_loading_with_alpha_pattern(self, data, tmp_path, corda_method):
# same as above, but using alpha_pattern
model = self.get_model()
output_base = model(data)[0]
# use alpha_pattern here; note that since there is only a single linear layer, lora_alpha is completely
# overridden
corda_config = CordaConfig(corda_method=corda_method)
config = LoraConfig(
init_lora_weights="corda",
target_modules=["linear"],
alpha_pattern={"linear": 5},
corda_config=corda_config,
)
preprocess_corda(model, config, run_model=lambda: model(data), hooked_model=model)
peft_model = get_peft_model(deepcopy(model), config)
# save the initial model
peft_model.peft_config["default"].init_lora_weights = True
peft_model.save_pretrained(tmp_path / "init-model")
peft_model.peft_config["default"].init_lora_weights = "corda"
# modify the weights, or else the adapter performs an identity transformation
peft_model.base_model.linear.lora_B["default"].weight.data *= 2.0
output_corda = peft_model(data)[0]
# sanity check
tol = 1e-06
assert not torch.allclose(output_base, output_corda, atol=tol, rtol=tol)
# save the model normally
peft_model.save_pretrained(tmp_path / "corda-model")
model_loaded = PeftModel.from_pretrained(deepcopy(model), tmp_path / "corda-model")
output_loaded = model_loaded(data)[0]
assert torch.allclose(output_corda, output_loaded, atol=tol, rtol=tol)
# sanity check: ranks should still be 8 as initially
assert model_loaded.peft_config["default"].r == 8
assert model_loaded.base_model.model.linear.lora_A["default"].weight.shape[0] == 8
assert model_loaded.base_model.model.linear.scaling["default"] == 5 / 8
# sanity check: the base model weights were indeed changed
assert not torch.allclose(
model.linear.weight, model_loaded.base_model.model.linear.base_layer.weight, atol=tol, rtol=tol
)
# save the model with conversion
peft_model.save_pretrained(
tmp_path / "corda-model-converted", path_initial_model_for_weight_conversion=tmp_path / "init-model"
)
model_converted = PeftModel.from_pretrained(deepcopy(model), tmp_path / "corda-model-converted")
output_converted = model_converted(data)[0]
assert torch.allclose(output_corda, output_converted, atol=tol, rtol=tol)
# rank should be double what it was initially
assert model_converted.peft_config["default"].r == 16
assert model_converted.base_model.model.linear.lora_A["default"].weight.shape[0] == 16
assert model_converted.base_model.model.linear.scaling["default"] == 10 / 16
# base model weights should be the same as the initial model
assert torch.allclose(
model.linear.weight, model_converted.base_model.model.linear.base_layer.weight, atol=tol, rtol=tol
)
@pytest.mark.parametrize("corda_method", ("ipm", "kpm"))
def test_lora_corda_conversion_same_output_after_loading_with_rslora(self, data, tmp_path, corda_method):
model = self.get_model()
output_base = model(data)[0]
corda_config = CordaConfig(corda_method=corda_method)
config = LoraConfig(
init_lora_weights="corda", target_modules=["linear"], r=8, use_rslora=True, corda_config=corda_config
)
preprocess_corda(model, config, run_model=lambda: model(data), hooked_model=model)
peft_model = get_peft_model(deepcopy(model), config)
# save the initial model
peft_model.peft_config["default"].init_lora_weights = True
peft_model.save_pretrained(tmp_path / "init-model")
peft_model.peft_config["default"].init_lora_weights = "corda"
# modify the weights, or else the adapter performs an identity transformation
peft_model.base_model.linear.lora_B["default"].weight.data *= 2.0
output_corda = peft_model(data)[0]
# sanity check
tol = 1e-06
assert not torch.allclose(output_base, output_corda, atol=tol, rtol=tol)
# save the model normally
peft_model.save_pretrained(tmp_path / "corda-model")
model_loaded = PeftModel.from_pretrained(deepcopy(model), tmp_path / "corda-model")
output_loaded = model_loaded(data)[0]
assert torch.allclose(output_corda, output_loaded, atol=tol, rtol=tol)
# sanity check: ranks should still be 8 as initially
assert model_loaded.peft_config["default"].r == 8
assert model_loaded.base_model.model.linear.lora_A["default"].weight.shape[0] == 8
assert model_loaded.base_model.model.linear.scaling["default"] == 8 / (8**0.5)
# sanity check: the base model weights were indeed changed
assert not torch.allclose(
model.linear.weight, model_loaded.base_model.model.linear.base_layer.weight, atol=tol, rtol=tol
)
# save the model with conversion
peft_model.save_pretrained(
tmp_path / "corda-model-converted", path_initial_model_for_weight_conversion=tmp_path / "init-model"
)
model_converted = PeftModel.from_pretrained(deepcopy(model), tmp_path / "corda-model-converted")
output_converted = model_converted(data)[0]
assert torch.allclose(output_corda, output_converted, atol=tol, rtol=tol)
# rank should be double what it was initially
assert model_converted.peft_config["default"].r == 16
assert model_converted.base_model.model.linear.lora_A["default"].weight.shape[0] == 16
# same scale as before with a little bit of floating point imprecision
assert model_converted.base_model.model.linear.scaling["default"] == pytest.approx(8 / (8**0.5))
# base model weights should be the same as the initial model
assert torch.allclose(
model.linear.weight, model_converted.base_model.model.linear.base_layer.weight, atol=tol, rtol=tol
)
@pytest.mark.parametrize("corda_method", ("ipm", "kpm"))
def test_lora_corda_rank_pattern_and_rslora_raises(self, data, tmp_path, corda_method):
# it's not possible to determine the correct scale when using rslora with rank or alpha pattern, because the
# scale is not stored in the state_dict
model = self.get_model()
corda_config = CordaConfig(corda_method=corda_method)
config = LoraConfig(
init_lora_weights="corda",
target_modules=["linear"],
r=8,
rank_pattern={"linear": 2},
use_rslora=True,
corda_config=corda_config,
)
preprocess_corda(model, config, run_model=lambda: model(data), hooked_model=model)
peft_model = get_peft_model(model, config)
peft_model.save_pretrained(tmp_path / "init-model")
msg = re.escape("Passing `path_initial_model_for_weight_conversion` to `save_pretrained`")
with pytest.raises(ValueError, match=msg):
peft_model.save_pretrained(
tmp_path / "corda-model", path_initial_model_for_weight_conversion=tmp_path / "init-model"
)
@pytest.mark.parametrize("corda_method", ("ipm", "kpm"))
def test_lora_corda_alpha_pattern_and_rslora_raises(self, data, tmp_path, corda_method):
# it's not possible to determine the correct scale when using rslora with rank or alpha pattern, because the
# scale is not stored in the state_dict
model = self.get_model()
corda_config = CordaConfig(corda_method=corda_method)
config = LoraConfig(
init_lora_weights="corda",
target_modules=["linear"],
r=8,
alpha_pattern={"linear": 2},
use_rslora=True,
corda_config=corda_config,
)
preprocess_corda(model, config, run_model=lambda: model(data), hooked_model=model)
peft_model = get_peft_model(model, config)
peft_model.save_pretrained(tmp_path / "init-model")
msg = re.escape("Passing `path_initial_model_for_weight_conversion` to `save_pretrained`")
with pytest.raises(ValueError, match=msg):
peft_model.save_pretrained(
tmp_path / "corda-model", path_initial_model_for_weight_conversion=tmp_path / "init-model"
)
class TestEvaInitialization:
"""Tests for the EVA (Explained Variance Adaptation) initialization method.
This test suite verifies:
1. Consistency of initialization across different seeds
2. Proper error handling for invalid inputs
3. Compatibility with different model architectures
4. Reproducibility of results
5. Proper handling of edge cases
"""
# Constants for test configuration
COSINE_SIMILARITY_THRESHOLD = 0.75
NUM_SEEDS = 2
BATCH_SIZE = 4
MAX_LENGTH = 256
LORA_DIM = 8
LORA_ALPHA = 1
DEVICE = infer_device()
# for caching purposes:
_dataset = load_dataset_english_quotes()["train"]
@pytest.fixture
def tokenizer(self):
tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
tokenizer.pad_token = tokenizer.eos_token
return tokenizer
@pytest.fixture
def dataset(self, tokenizer):
# concatenate examples
examples = []
example = ""
for data in self._dataset:
if len(example) >= self.MAX_LENGTH:
examples.append(example)
example = ""
example = example + " " + data["quote"]
dataset = Dataset.from_dict({"text": examples})
# tokenize
dataset = dataset.map(
lambda x: tokenizer(x["text"], padding="max_length", truncation=True, max_length=self.MAX_LENGTH),
batched=True,
remove_columns=dataset.column_names,
)
dataset.set_format(type="torch")
return dataset
@pytest.fixture
def model(self):
model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
model.transformer.h = model.transformer.h[:2] # truncate to 2 layers
return model.to(self.DEVICE)
@pytest.fixture
def peft_config(self):
return LoraConfig(
r=self.LORA_DIM,
lora_alpha=self.LORA_ALPHA,
target_modules=["c_attn"],
init_lora_weights="eva",
eva_config=EvaConfig(rho=2),
)
@staticmethod
def collate_fn(examples):
return {k: torch.stack([v[k] for v in examples], dim=0) for k in examples[0].keys()}
@staticmethod
def prepare_layer_inputs_fn(layer_input, model_input, layer_name):
return layer_input[0].view(-1, layer_input[0].size(-1))
def get_dataloader(self, dataset):
return DataLoader(
dataset,
batch_size=self.BATCH_SIZE,
collate_fn=self.collate_fn,
shuffle=False,
)
@pytest.mark.parametrize(
"prepare_layer_inputs_keys, expected_outcome",
[
(None, "success"),
(["transformer.h.0.attn.c_attn"], "success"),
(
["transformer.h.0.attn.c_attn", "transformer.h.1.attn.c_attn", "transformer.h.2.attn.c_attn"],
"value_error",
),
],
)
def test_eva_state_dict_prepare_inputs_mapping(
self, model, dataset, peft_config, prepare_layer_inputs_keys, expected_outcome
):
"""
Tests for cases where prepare_layer_inputs_fn is a mapping. Checks that if not all target modules are present,
the prepare_layer_inputs_fn for the remaining modules is set to None. Also checks that if more keys than target
modules are present, a ValueError is raised.
"""
def fn(x, *args):
return x[0].view(-1, x[0].size(-1))
if prepare_layer_inputs_keys is None:
prepare_layer_inputs_fn = fn
else:
prepare_layer_inputs_fn = {k: fn for k in prepare_layer_inputs_keys}
shuffled_dataset = dataset.shuffle(seed=0)
dataloader = self.get_dataloader(shuffled_dataset)
modified_peft_config = deepcopy(peft_config)
modified_peft_config.eva_config.tau = 0 # converge immediately
if expected_outcome == "success":
sd = get_eva_state_dict(
model,
dataloader,
modified_peft_config,
prepare_model_inputs_fn=None,
prepare_layer_inputs_fn=prepare_layer_inputs_fn,
)
assert len(sd) == 2
assert "transformer.h.0.attn.c_attn" in sd
assert "transformer.h.1.attn.c_attn" in sd
else:
with pytest.raises(
ValueError, match="prepare_layer_inputs_fn is a mapping but the following module names were not found"
):
get_eva_state_dict(
model,
dataloader,
modified_peft_config,
prepare_model_inputs_fn=None,
prepare_layer_inputs_fn=prepare_layer_inputs_fn,
)
@pytest.mark.parametrize(
"eva_config",
[EvaConfig(rho=2, adjust_scaling_factors=True)],
)
def test_eva_state_dict_adjust_scaling_factors(self, model, dataset, peft_config, eva_config):
"""
Tests that the scaling factors are adjusted so that all LoRA gradients have the same scale regardless of their
rank.
"""
modified_peft_config = deepcopy(peft_config)
modified_peft_config.eva_config = eva_config
dataloader = self.get_dataloader(dataset)
peft_model = get_peft_model(deepcopy(model), modified_peft_config)
scaling_factors_before = {}
for n, m in peft_model.named_modules():
if isinstance(m, LoraLayer):
scaling_factors_before[n] = m.scaling["default"]
initialize_lora_eva_weights(peft_model, dataloader)
for n, m in peft_model.named_modules():
if isinstance(m, LoraLayer):
assert m.scaling["default"] == scaling_factors_before[n]
@pytest.mark.parametrize(
"eva_config",
[
# note: lower tau to decrease number of iterations until convergence, as tests are slow on CPU
EvaConfig(rho=2, tau=0.9),
EvaConfig(rho=1, tau=0.9),
EvaConfig(rho=1, whiten=True, tau=0.9),
EvaConfig(rho=1.0001, tau=0.9),
],
)
def test_eva_initialization_consistency(self, model, dataset, peft_config, eva_config):
"""
Tests that the state dict returned by `get_eva_state_dict` is consistent across different seeds based on the
cosine similarity of the svd components.
"""
modified_peft_config = deepcopy(peft_config)
modified_peft_config.eva_config = eva_config
state_dicts = []
for seed in range(self.NUM_SEEDS):
shuffled_dataset = dataset.shuffle(seed=seed)
dataloader = self.get_dataloader(shuffled_dataset)
sd = get_eva_state_dict(model, dataloader, modified_peft_config, show_progress_bar=False)
state_dicts.append(sd)
cos_sims = defaultdict(list)
for i, j in itertools.combinations(range(self.NUM_SEEDS), 2):
for k, v1 in state_dicts[i].items():
v2 = state_dicts[j][k]
min_size = min(v1.size(0), v2.size(0))
cos_sims[k].extend(torch.cosine_similarity(v1[:min_size].abs(), v2[:min_size].abs(), dim=1).tolist())
mean_cosine_similarities = {k: torch.tensor(v).mean() for k, v in cos_sims.items()}
for layer_name, mean_cosine_similarity in mean_cosine_similarities.items():
assert mean_cosine_similarity > self.COSINE_SIMILARITY_THRESHOLD, (
f"Mean absolute cosine similarity {mean_cosine_similarity:.4f} "
f"is not greater than {self.COSINE_SIMILARITY_THRESHOLD}"
)
@pytest.mark.parametrize("has_rank_zero", [True, False])
def test_load_eva_state_dict(self, model, dataset, peft_config, tmp_path, has_rank_zero):
"""
Tests that the `eva_state_dict` argument in `initialize_lora_eva_weights` can be used to initialize a model
with EVA weights and that the initialized model can be saved and loaded correctly.
"""
dataloader = self.get_dataloader(dataset)
peft_model = get_peft_model(deepcopy(model), peft_config)
sd = get_eva_state_dict(peft_model, dataloader)
if has_rank_zero:
k = "base_model.model.transformer.h.0.attn.c_attn"
sd[k] = sd[k][:0]
initialize_lora_eva_weights(peft_model, eva_state_dict=sd)
if has_rank_zero:
assert not isinstance(peft_model.model.transformer.h[0].attn.c_attn, LoraLayer)
else:
assert isinstance(peft_model.model.transformer.h[0].attn.c_attn, LoraLayer)
peft_model.save_pretrained(tmp_path)
peft_model = PeftModel.from_pretrained(model, tmp_path, torch_device=self.DEVICE, low_cpu_mem_usage=True)
peft_model(**{k: v.to(self.DEVICE) for k, v in next(iter(dataloader)).items()})
def test_missing_eva_inits(self, model, dataset, peft_config):
"""
Tests that a warning is raised when some adapter modules were not initialized with EVA weights.
"""
modified_peft_config = deepcopy(peft_config)
modified_peft_config.target_modules = ["wte"]
dataloader = self.get_dataloader(dataset)
peft_model = get_peft_model(deepcopy(model), modified_peft_config)
with pytest.warns(
UserWarning,
match="the following layers were initialized with init_lora_weights=True because they were not found in the eva state_dict:*",
):
initialize_lora_eva_weights(peft_model, dataloader)
def test_load_eva_model(self, model, dataset, peft_config, tmp_path):
"""
Tests that a model initialized with EVA weights can be loaded correctly.
"""
dataloader = self.get_dataloader(dataset)
peft_model = get_peft_model(deepcopy(model), peft_config)
initialize_lora_eva_weights(peft_model, dataloader)
peft_model.save_pretrained(tmp_path)
peft_model = PeftModel.from_pretrained(model, tmp_path, torch_device=self.DEVICE, low_cpu_mem_usage=True)
peft_model(**{k: v.to(self.DEVICE) for k, v in next(iter(dataloader)).items()})
def test_eva_initialization_with_invalid_dataloader(self, model, peft_config):
"""Test that appropriate error is raised when dataloader is empty."""
empty_dataset = Dataset.from_dict({"text": []})
dataloader = self.get_dataloader(empty_dataset)
with pytest.raises(ValueError, match="dataloader is empty"):
get_eva_state_dict(model, dataloader, peft_config)
def test_eva_config_rho(self):
"""
Tests that EvaConfig.__init__ raises a ValueError when rho is negative.
"""
with pytest.raises(ValueError, match="`rho` must be >= 1.0"):
EvaConfig(rho=-1)
def test_eva_config_tau(self):
"""
Tests that EvaConfig.__init__ raises a ValueError when tau is not between 0.0 and 1.0.
"""
with pytest.raises(ValueError, match="`tau` must be between 0.0 and 1.0."):
EvaConfig(tau=-0.1)
with pytest.raises(ValueError, match="`tau` must be between 0.0 and 1.0."):
EvaConfig(tau=1.1)
def test_lora_config_raises_warning_with_eva_init_but_not_eva_config(self):
"""
Tests that LoraConfig.__init__ raises a warning when init_lora_weights='eva' but eva_config is not set.
"""
with pytest.warns(
UserWarning,
match="`init_lora_weights` is 'eva' but `eva_config` is not specified. Using default EVA config.",
):
LoraConfig(init_lora_weights="eva")
def test_lora_config_raises_warning_with_eva_config_but_not_eva_init(self):
"""
Tests that LoraConfig.__init__ raises a warning when init_lora_weights is not 'eva' but eva_config is set.
"""
with pytest.warns(
UserWarning, match="`eva_config` specified but will be ignored when `init_lora_weights` is not 'eva'."
):
LoraConfig(init_lora_weights=True, eva_config=EvaConfig())
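
# EVA ("explained variance adaptation", tested above) initializes lora_A from
# the directions of maximum explained variance of the layer inputs, obtained
# via SVD over activation batches. A toy power-iteration sketch of extracting
# one such direction from a small activation matrix (plain lists; this is not
# PEFT's incremental-SVD implementation):
def _top_right_singular_vector(X, iters=100):
    # Power iteration on X^T X: converges to the right singular vector of X
    # with the largest singular value, i.e. the direction along which the
    # rows of X (the activations) carry the most variance.
    n = len(X[0])
    v = [1.0] * n
    for _ in range(iters):
        Xv = [sum(x * vi for x, vi in zip(row, v)) for row in X]  # X @ v
        w = [sum(X[i][j] * Xv[i] for i in range(len(X))) for j in range(n)]  # X^T @ (X @ v)
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    return v


# These toy activations lie almost entirely along the first feature axis:
_eva_direction = _top_right_singular_vector([[3.0, 0.0], [0.0, 1.0], [2.0, 0.0]])
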
@pytest.mark.skipif(
    platform.system() != "Linux", reason="Out of the box, torch.compile does not work on Windows or macOS"
)
class TestHotSwapping:
"""Tests for the hotswapping function"""
torch_device = infer_device()
def compile(self, model, do_compile):
if not do_compile:
return model
return torch.compile(model)
def get_model(self):
class MLP(nn.Module):
def __init__(self, bias=True):
super().__init__()
                self.lin0 = nn.Linear(10, 20, bias=bias)
self.relu = nn.ReLU()
self.lin1 = nn.Linear(20, 5, bias=False)
def forward(self, X):
X = X.float()
X = self.lin0(X)
X = self.relu(X)
X = self.lin1(X)
return X
torch.manual_seed(0)
return MLP().to(self.torch_device)
def get_model_conv2d(self):
class ConvModel(nn.Module):
def __init__(self):
super().__init__()
self.conv = nn.Conv2d(3, 10, kernel_size=3)
def forward(self, X):
return self.conv(X)
torch.manual_seed(0)
return ConvModel().to(self.torch_device)
# this works with all adapters except prompt learning, but we don't test all
# as it is unnecessary and would be slow
@pytest.mark.parametrize(
"config",
[
LoraConfig(init_lora_weights=0, target_modules=["lin0"]),
LoraConfig(init_lora_weights=0, target_modules=["lin0", "lin1"]),
],
)
@pytest.mark.parametrize("do_compile", [False, True])
def test_hotswap_works(self, config, do_compile, tmp_path):
# Load 2 different adapters and check that we can hotswap between them, with the model optionally being
# compiled.
atol, rtol = 1e-4, 1e-4
inputs = torch.rand(3, 10).to(self.torch_device)
# create adapter 0
model = self.get_model()
torch.manual_seed(0)
model = get_peft_model(model, config)
model = self.compile(model, do_compile=do_compile)
model.eval()
with torch.inference_mode():
output0 = model(inputs)
model.save_pretrained(tmp_path / "adapter0")
del model
# create adapter 1
model = self.get_model()
torch.manual_seed(1)
model = get_peft_model(model, config)
model = self.compile(model, do_compile=do_compile)
model.eval()
with torch.inference_mode():
output1 = model(inputs)
model.save_pretrained(tmp_path / "adapter1")
# sanity check: they're not the same
assert not torch.allclose(output0, output1, atol=atol, rtol=rtol)
del model
# load adapter 0
model = self.get_model()
model = PeftModel.from_pretrained(model, tmp_path / "adapter0")
model = self.compile(model, do_compile=do_compile)
with torch.inference_mode():
output_loaded0 = model(inputs)
# sanity check: same output after loading for adapter 0
assert torch.allclose(output0, output_loaded0, atol=atol, rtol=rtol)
# hotswap with adapter 1
hotswap_adapter(model, tmp_path / "adapter1", adapter_name="default")
with torch.inference_mode():
output_loaded1 = model(inputs)
# real check: model now behaves like adapter 1
assert torch.allclose(output1, output_loaded1, atol=atol, rtol=rtol)
# hotswap back to adapter 0
hotswap_adapter(model, tmp_path / "adapter0", adapter_name="default")
with torch.inference_mode():
output_loaded_back0 = model(inputs)
# real check: model now behaves again like adapter 0
assert torch.allclose(output0, output_loaded_back0, atol=atol, rtol=rtol)
def test_hotswap_different_peft_types_raises(self, tmp_path):
# When the configs of the two adapters are different PEFT methods, raise
config0 = LoraConfig(target_modules=["lin0"])
config1 = IA3Config(target_modules=["lin0"], feedforward_modules=[])
model = self.get_model()
model = get_peft_model(model, config0)
model.save_pretrained(tmp_path / "adapter0")
del model
model = self.get_model()
model = get_peft_model(model, config1)
model.save_pretrained(tmp_path / "adapter1")
del model
# load adapter 0
model = self.get_model()
model = PeftModel.from_pretrained(model, tmp_path / "adapter0")
msg = r"Incompatible PEFT types found: LORA and IA3"
with pytest.raises(ValueError, match=msg):
hotswap_adapter(model, tmp_path / "adapter1", adapter_name="default")
def test_hotswap_wrong_peft_types_raises(self, tmp_path):
# Only LoRA is supported at the moment
config0 = IA3Config(target_modules=["lin0"], feedforward_modules=[])
config1 = IA3Config(target_modules=["lin0"], feedforward_modules=[])
model = self.get_model()
model = get_peft_model(model, config0)
model.save_pretrained(tmp_path / "adapter0")
del model
model = self.get_model()
model = get_peft_model(model, config1)
model.save_pretrained(tmp_path / "adapter1")
del model
# load adapter 0
model = self.get_model()
model = PeftModel.from_pretrained(model, tmp_path / "adapter0")
msg = r"Hotswapping only supports LORA but IA3 was passed"
with pytest.raises(ValueError, match=msg):
hotswap_adapter(model, tmp_path / "adapter1", adapter_name="default")
def test_hotswap_missing_key_works(self, tmp_path):
# When a key is missing, it is fine, the extra weight is zeroed out
config = LoraConfig(target_modules=["lin0", "lin1"])
model = self.get_model()
model = get_peft_model(model, config)
model.save_pretrained(tmp_path / "adapter0")
del model
model = self.get_model()
model = get_peft_model(model, config)
# remove one key from the state_dict
key = "base_model.model.lin1.lora_A.default.weight"
state_dict = model.state_dict()
del state_dict[key]
model.state_dict = lambda: state_dict
model.save_pretrained(tmp_path / "adapter1")
del model
# load adapter 0
model = self.get_model()
model = PeftModel.from_pretrained(model, tmp_path / "adapter0")
# sanity check: the missing weight is not already all zeros
assert not (model.base_model.model.lin1.lora_A["default"].weight == 0).all()
hotswap_adapter(model, tmp_path / "adapter1", adapter_name="default")
# after hotswapping, it is zeroed out
assert (model.base_model.model.lin1.lora_A["default"].weight == 0).all()
def test_hotswap_extra_key_raises(self, tmp_path):
# When there is an extra key, raise
config = LoraConfig(target_modules=["lin0"])
model = self.get_model()
model = get_peft_model(model, config)
model.save_pretrained(tmp_path / "adapter0")
del model
model = self.get_model()
model = get_peft_model(model, config)
# add an unexpected key
state_dict = model.state_dict()
new_key = "base_model.model.lin1.lora_A.default.weight"
state_dict[new_key] = torch.zeros(8, 20)
model.state_dict = lambda: state_dict
model.save_pretrained(tmp_path / "adapter1")
del model
# load adapter 0
model = self.get_model()
model = PeftModel.from_pretrained(model, tmp_path / "adapter0")
msg = f"Hot swapping the adapter did not succeed, unexpected keys found: {new_key}"
with pytest.raises(RuntimeError, match=msg):
hotswap_adapter(model, tmp_path / "adapter1", adapter_name="default")
@pytest.mark.parametrize("ranks", [(7, 13), (13, 7)])
def test_hotswap_works_different_ranks_alphas(self, ranks, tmp_path):
# same as test_hotswap_works but different rank and alpha
# Load 2 different adapters and check that we can hotswap between them, with the model optionally being
# compiled.
atol, rtol = 1e-4, 1e-4
inputs = torch.rand(3, 10).to(self.torch_device)
# create adapter 0
config0 = LoraConfig(target_modules=["lin0", "lin1"], r=ranks[0], lora_alpha=ranks[0], init_lora_weights=False)
model = self.get_model()
torch.manual_seed(0)
model = get_peft_model(model, config0)
model.eval()
with torch.inference_mode():
output0 = model(inputs)
model.save_pretrained(tmp_path / "adapter0")
del model
# create adapter 1
config1 = LoraConfig(target_modules=["lin0"], r=ranks[1], lora_alpha=ranks[1], init_lora_weights=False)
model = self.get_model()
torch.manual_seed(1)
model = get_peft_model(model, config1)
model.eval()
with torch.inference_mode():
output1 = model(inputs)
model.save_pretrained(tmp_path / "adapter1")
# sanity check: they're not the same
assert not torch.allclose(output0, output1, atol=atol, rtol=rtol)
del model
# load adapter 0
model = self.get_model()
model = PeftModel.from_pretrained(model, tmp_path / "adapter0")
with torch.inference_mode():
output_loaded0 = model(inputs)
# sanity check: same output after loading for adapter 0
assert torch.allclose(output0, output_loaded0, atol=atol, rtol=rtol)
# hotswap with adapter 1
hotswap_adapter(model, tmp_path / "adapter1", adapter_name="default")
with torch.inference_mode():
output_loaded1 = model(inputs)
# real check: model now behaves like adapter 1
assert torch.allclose(output1, output_loaded1, atol=atol, rtol=rtol)
# hotswap back to adapter 0
hotswap_adapter(model, tmp_path / "adapter0", adapter_name="default")
with torch.inference_mode():
output_loaded_back0 = model(inputs)
# real check: model now behaves again like adapter 0
assert torch.allclose(output0, output_loaded_back0, atol=atol, rtol=rtol)
@pytest.mark.parametrize("ranks", [(7, 13), (13, 7)])
def test_hotswap_works_different_ranks_alphas_conv2d(self, ranks, tmp_path):
# same as previous test, but for a Conv2d model
atol, rtol = 1e-4, 1e-4
inputs = torch.rand(3, 3, 10, 10).to(self.torch_device)
# create adapter 0
config0 = LoraConfig(target_modules=["conv"], r=ranks[0], init_lora_weights=False)
model = self.get_model_conv2d()
torch.manual_seed(0)
model = get_peft_model(model, config0)
model.eval()
with torch.inference_mode():
output0 = model(inputs)
model.save_pretrained(tmp_path / "adapter0")
del model
# create adapter 1
config1 = LoraConfig(target_modules=["conv"], r=ranks[1], init_lora_weights=False)
model = self.get_model_conv2d()
torch.manual_seed(1)
model = get_peft_model(model, config1)
model.eval()
with torch.inference_mode():
output1 = model(inputs)
model.save_pretrained(tmp_path / "adapter1")
# sanity check: they're not the same
assert not torch.allclose(output0, output1, atol=atol, rtol=rtol)
del model
# load adapter 0
model = self.get_model_conv2d()
model = PeftModel.from_pretrained(model, tmp_path / "adapter0")
with torch.inference_mode():
output_loaded0 = model(inputs)
# sanity check: same output after loading for adapter 0
assert torch.allclose(output0, output_loaded0, atol=atol, rtol=rtol)
# hotswap with adapter 1
hotswap_adapter(model, tmp_path / "adapter1", adapter_name="default")
with torch.inference_mode():
output_loaded1 = model(inputs)
# real check: model now behaves like adapter 1
assert torch.allclose(output1, output_loaded1, atol=atol, rtol=rtol)
# hotswap back to adapter 0
hotswap_adapter(model, tmp_path / "adapter0", adapter_name="default")
with torch.inference_mode():
output_loaded_back0 = model(inputs)
# real check: model now behaves again like adapter 0
assert torch.allclose(output0, output_loaded_back0, atol=atol, rtol=rtol)
def test_prepare_model_for_compiled_hotswap_scalings_are_tensors(self):
config = LoraConfig(target_modules=["lin0", "lin1"])
model = self.get_model()
model = get_peft_model(model, config)
# sanity check: all scalings are floats
scalings_before = {}
for name, module in model.named_modules():
if hasattr(module, "scaling"):
for key, val in module.scaling.items():
assert isinstance(val, float)
scalings_before[f"{name}.{key}"] = val
prepare_model_for_compiled_hotswap(model)
scalings_after = {}
for name, module in model.named_modules():
if hasattr(module, "scaling"):
for key, val in module.scaling.items():
assert isinstance(val, torch.Tensor)
scalings_after[f"{name}.{key}"] = val.item()
assert scalings_before == scalings_after
def test_prepare_model_for_compiled_hotswap_rank_padding_works(self):
old_rank = 8
config = LoraConfig(target_modules=["lin0", "lin1"], r=old_rank)
model = self.get_model()
model = get_peft_model(model, config)
# sanity check
for name, param in model.named_parameters():
if "lora_A" in name:
assert param.shape[0] == old_rank
elif "lora_B" in name:
assert param.shape[1] == old_rank
new_rank = 13
prepare_model_for_compiled_hotswap(model, target_rank=new_rank)
for name, param in model.named_parameters():
if "lora_A" in name:
assert param.shape[0] == new_rank
elif "lora_B" in name:
assert param.shape[1] == new_rank
def test_prepare_model_for_compiled_hotswap_same_rank_padding_works(self):
# same as previous test, but ensure there is no error if the rank to pad to is the same
old_rank = 8
config = LoraConfig(target_modules=["lin0", "lin1"], r=old_rank)
model = self.get_model()
model = get_peft_model(model, config)
prepare_model_for_compiled_hotswap(model, target_rank=old_rank)
for name, param in model.named_parameters():
if "lora_A" in name:
assert param.shape[0] == old_rank
elif "lora_B" in name:
assert param.shape[1] == old_rank
def test_prepare_model_for_compiled_hotswap_conv2d_rank_padding_works(self):
# same as previous test, but for a Conv2d model
old_rank = 8
config = LoraConfig(target_modules=["conv"], r=old_rank)
model = self.get_model_conv2d()
model = get_peft_model(model, config)
# sanity check
for name, param in model.named_parameters():
if "lora_A" in name:
assert param.shape[0] == old_rank
elif "lora_B" in name:
assert param.shape[1] == old_rank
new_rank = 13
prepare_model_for_compiled_hotswap(model, target_rank=new_rank)
for name, param in model.named_parameters():
if "lora_A" in name:
assert param.shape[0] == new_rank
elif "lora_B" in name:
assert param.shape[1] == new_rank
def test_prepare_model_for_compiled_hotswap_lower_rank_padding_raises(self):
# when trying to pad to a lower rank, raise an error
old_rank0 = 8
old_rank1 = 10
new_rank = 9
config = LoraConfig(target_modules=["lin0", "lin1"], r=old_rank0, rank_pattern={"lin1": old_rank1})
model = self.get_model()
model = get_peft_model(model, config)
msg = re.escape("Trying to pad the adapter to the target rank 9, but the original rank is larger (10)")
with pytest.raises(ValueError, match=msg):
prepare_model_for_compiled_hotswap(model, target_rank=new_rank)
def test_prepare_model_for_compiled_hotswap_with_rank_pattern(self):
old_rank0 = 8
old_rank1 = 9
config = LoraConfig(target_modules=["lin0", "lin1"], r=old_rank0, rank_pattern={"lin1": old_rank1})
model = self.get_model()
model = get_peft_model(model, config)
# sanity check
for name, param in model.named_parameters():
if "lora_A" in name:
if "lin0" in name:
assert param.shape[0] == old_rank0
else:
assert param.shape[0] == old_rank1
elif "lora_B" in name:
if "lin0" in name:
assert param.shape[1] == old_rank0
else:
assert param.shape[1] == old_rank1
new_rank = 13
prepare_model_for_compiled_hotswap(model, target_rank=new_rank)
for name, param in model.named_parameters():
if "lora_A" in name:
assert param.shape[0] == new_rank
elif "lora_B" in name:
assert param.shape[1] == new_rank
def test_prepare_model_for_compiled_hotswap_model_already_compiled_raises(self):
config = LoraConfig(target_modules=["lin0"])
model = self.get_model()
model = get_peft_model(model, config)
model = torch.compile(model, mode="reduce-overhead")
msg = re.escape("Call prepare_model_for_compiled_hotswap *before* compiling the model")
with pytest.raises(ValueError, match=msg):
prepare_model_for_compiled_hotswap(model)
def test_prepare_model_for_compiled_hotswap_model_already_compiled_warns(self, recwarn):
config = LoraConfig(target_modules=["lin0"])
model = self.get_model()
model = get_peft_model(model, config)
model = torch.compile(model, mode="reduce-overhead")
msg = "prepare_model_for_compiled_hotswap was called with a model that is already compiled"
prepare_model_for_compiled_hotswap(model, check_compiled="warn")
assert any(msg in str(w.message) for w in recwarn)
def test_prepare_model_for_compiled_hotswap_model_already_compiled_ignore(self, recwarn):
config = LoraConfig(target_modules=["lin0"])
model = self.get_model()
model = get_peft_model(model, config)
model = torch.compile(model, mode="reduce-overhead")
msg = "prepare_model_for_compiled_hotswap was called with a model that is already compiled"
prepare_model_for_compiled_hotswap(model, check_compiled="ignore")
# no error, no warning
assert not any(msg in str(w.message) for w in recwarn)
def test_prepare_model_for_compiled_hotswap_model_already_compiled_wrong_argument(self, recwarn):
config = LoraConfig(target_modules=["lin0"])
model = self.get_model()
model = get_peft_model(model, config)
model = torch.compile(model, mode="reduce-overhead")
msg = re.escape("check_compiles should be one of 'error', 'warn', or 'ignore', got 'wrong-option' instead.")
with pytest.raises(ValueError, match=msg):
prepare_model_for_compiled_hotswap(model, check_compiled="wrong-option")
def test_prepare_model_for_compiled_hotswap_model_no_adapter_raises(self):
model = self.get_model()
msg = re.escape("No adapter layers found on the model")
with pytest.raises(ValueError, match=msg):
prepare_model_for_compiled_hotswap(model)
def test_prepare_model_for_compiled_hotswap_does_not_change_output(self):
# preparing the model for hotswapping should not change the model output
inputs = torch.rand(3, 10).to(self.torch_device)
model = self.get_model().eval()
with torch.inference_mode():
output_base = model(inputs)
old_rank = 8
config = LoraConfig(target_modules=["lin0", "lin1"], r=old_rank, init_lora_weights=False)
model = get_peft_model(model, config).eval()
with torch.inference_mode():
output_before = model(inputs)
# sanity check: LoRA changed output
assert not torch.allclose(output_base, output_before)
new_rank = 13
prepare_model_for_compiled_hotswap(model, target_rank=new_rank)
with torch.inference_mode():
output_after = model(inputs)
assert torch.allclose(output_before, output_after)
def test_prepare_model_for_compiled_hotswap_does_not_change_output_conv2d(self):
# preparing the model for hotswapping should not change the model output
inputs = torch.rand(3, 3, 10, 10).to(self.torch_device)
model = self.get_model_conv2d().eval()
with torch.inference_mode():
output_base = model(inputs)
old_rank = 8
config = LoraConfig(target_modules=["conv"], r=old_rank, init_lora_weights=False)
model = get_peft_model(model, config).eval()
with torch.inference_mode():
output_before = model(inputs)
# sanity check: LoRA changed output
assert not torch.allclose(output_base, output_before)
new_rank = 13
prepare_model_for_compiled_hotswap(model, target_rank=new_rank)
with torch.inference_mode():
output_after = model(inputs)
assert torch.allclose(output_before, output_after)
def test_prepare_model_for_compiled_hotswap_scalings_update_config(self):
old_rank0 = 11
old_rank1 = 13
config = LoraConfig(target_modules=["lin0", "lin1"], r=old_rank0, rank_pattern={"lin1": old_rank1})
model = self.get_model()
model = get_peft_model(model, config)
new_rank = 15
prepare_model_for_compiled_hotswap(model, target_rank=new_rank, config=model.peft_config)
assert model.peft_config["default"].r == new_rank
assert model.peft_config["default"].rank_pattern == {"lin1": new_rank}
def test_prepare_model_for_compiled_hotswap_lora_bias(self):
# When setting lora_bias=True in the LoraConfig, the LoRA B parameter will have a bias term. Check that padding
# still works correctly. Note that the LoRA A parameter still won't have a bias term.
old_rank = 8
config = LoraConfig(target_modules=["lin0", "lin1"], r=old_rank, lora_bias=True)
model = self.get_model()
model = get_peft_model(model, config)
# sanity check
for name, param in model.named_parameters():
if "lora_A" in name and name.endswith(".weight"):
assert param.shape[0] == old_rank
elif "lora_B" in name and name.endswith(".weight"):
assert param.shape[1] == old_rank
elif "lora_A" in name and name.endswith(".bias"):
assert False, "LoRA A should not have a bias term"
elif "lora_B" in name and name.endswith(".bias"):
assert param.shape[0] in (5, 20) # output shapes of the 2 layers
new_rank = 13
prepare_model_for_compiled_hotswap(model, target_rank=new_rank)
for name, param in model.named_parameters():
if "lora_A" in name and name.endswith(".weight"):
assert param.shape[0] == new_rank
elif "lora_B" in name and name.endswith(".weight"):
assert param.shape[1] == new_rank
elif "lora_A" in name and name.endswith(".bias"):
assert False, "LoRA A should not have a bias term"
elif "lora_B" in name and name.endswith(".bias"):
assert param.shape[0] in (5, 20) # output shapes of the 2 layers
def test_prepare_model_for_compiled_hotswap_conv2d_lora_bias(self):
# same as previous test, but for a Conv2d model
old_rank = 8
config = LoraConfig(target_modules=["conv"], r=old_rank, lora_bias=True)
model = self.get_model_conv2d()
model = get_peft_model(model, config)
# sanity check
for name, param in model.named_parameters():
if "lora_A" in name and name.endswith(".weight"):
assert param.shape[0] == old_rank
elif "lora_B" in name and name.endswith(".weight"):
assert param.shape[1] == old_rank
elif "lora_A" in name and name.endswith(".bias"):
assert False, "LoRA A should not have a bias term"
elif "lora_B" in name and name.endswith(".bias"):
assert param.shape[0] == 10 # output shape of conv layer
new_rank = 13
prepare_model_for_compiled_hotswap(model, target_rank=new_rank)
for name, param in model.named_parameters():
if "lora_A" in name and name.endswith(".weight"):
assert param.shape[0] == new_rank
elif "lora_B" in name and name.endswith(".weight"):
assert param.shape[1] == new_rank
elif "lora_A" in name and name.endswith(".bias"):
assert False, "LoRA A should not have a bias term"
elif "lora_B" in name and name.endswith(".bias"):
assert param.shape[0] == 10 # output shape of conv layer
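
# The rank-padding trick exercised by the prepare_model_for_compiled_hotswap
# tests above rests on a simple identity: appending zero rows to lora_A and
# matching zero columns to lora_B leaves the product B @ A unchanged, so a
# compiled graph with one fixed rank can host adapters of smaller ranks.
# A minimal sketch with plain lists (not PEFT's implementation):
def _matmul(B, A):
    return [[sum(b * a for b, a in zip(row, col)) for col in zip(*A)] for row in B]


def _pad_lora(A, B, target_rank):
    # A has shape (r, in_features), B has shape (out_features, r)
    r = len(A)
    A_pad = A + [[0.0] * len(A[0]) for _ in range(target_rank - r)]
    B_pad = [row + [0.0] * (target_rank - r) for row in B]
    return A_pad, B_pad


_A = [[1.0, 2.0], [3.0, 4.0]]   # rank 2
_B = [[0.5, -1.0], [2.0, 0.0]]
_A_pad, _B_pad = _pad_lora(_A, _B, 4)  # pad to rank 4
assert _matmul(_B_pad, _A_pad) == _matmul(_B, _A)  # the adapter delta is unchanged
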
def test_import_peft_type_to_model_mapping_deprecation_warning(recwarn):
# This is for backwards compatibility: In #2282, PEFT_TYPE_TO_MODEL_MAPPING was removed as it was redundant with
# PEFT_TYPE_TO_TUNER_MAPPING. However, third party code could still use this mapping, e.g.:
# https://github.com/AutoGPTQ/AutoGPTQ/blob/6689349625de973b9ee3016c28c11f32acf7f02c/auto_gptq/utils/peft_utils.py#L8
# TODO: Remove after 2026-01
# first check that there is no warning under normal circumstances
from peft.peft_model import PeftModel # noqa
expected = (
"PEFT_TYPE_TO_MODEL_MAPPING is deprecated, please use `from peft import PEFT_TYPE_TO_TUNER_MAPPING` instead"
)
warnings = (w.message.args[0] for w in recwarn.list)
assert not any(w.startswith(expected) for w in warnings)
from peft.peft_model import PEFT_TYPE_TO_MODEL_MAPPING # noqa
# check that there is a warning with this message after importing the variable
warnings = (w.message.args[0] for w in recwarn.list)
assert any(w.startswith(expected) for w in warnings)
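
# The backward-compatibility shim tested above is typically built with a
# module-level __getattr__ (PEP 562): lookups of attributes missing from the
# module get a chance to warn and forward to the new name. A self-contained
# sketch with hypothetical names (not PEFT's actual module code):
import types
import warnings


def _make_legacy_module():
    mod = types.ModuleType("legacy_demo")
    mod.NEW_MAPPING = {"LORA": "LoraModel"}

    def __getattr__(name):
        if name == "OLD_MAPPING":
            warnings.warn(
                "OLD_MAPPING is deprecated, please use NEW_MAPPING instead",
                DeprecationWarning,
            )
            return mod.NEW_MAPPING
        raise AttributeError(f"module {mod.__name__!r} has no attribute {name!r}")

    mod.__getattr__ = __getattr__  # PEP 562 hook, consulted when normal lookup fails
    return mod


_legacy = _make_legacy_module()
with warnings.catch_warnings(record=True) as _caught:
    warnings.simplefilter("always")
    _value = _legacy.OLD_MAPPING  # triggers the deprecation warning, returns new mapping
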
class TestScaling:
"""Tests for scaling and unscaling
    These methods are currently only implemented for LoRA and were added for use in diffusers.
"""
@pytest.fixture
def model(self):
# tiny opt with 5 attention layers
model_id = "hf-internal-testing/tiny-random-OPTForCausalLM"
return AutoModelForCausalLM.from_pretrained(model_id)
def get_scalings(self, model, adapter_name="default"):
# helper function, returns the scalings of the 5 attention layers
return [m.scaling[adapter_name] for m in model.modules() if isinstance(m, LoraLayer)]
def set_scale(self, model, adapter_name, scale):
for module in model.modules():
if isinstance(module, LoraLayer):
module.set_scale(adapter_name, scale)
def scale_layer(self, model, scale):
for module in model.modules():
if isinstance(module, LoraLayer):
module.scale_layer(scale)
def unscale_layer(self, model, scale):
for module in model.modules():
if isinstance(module, LoraLayer):
module.unscale_layer(scale)
def test_scaling_simple(self, model):
n_layers = 5
rank, lora_alpha = 8, 16
config = LoraConfig(
r=rank,
lora_alpha=lora_alpha,
target_modules=["k_proj"],
)
model = get_peft_model(model, config)
scalings = self.get_scalings(model)
expected = [lora_alpha / rank] * n_layers
assert scalings == expected
# double
self.scale_layer(model, 2)
scalings = self.get_scalings(model)
expected = [4.0] * n_layers
assert scalings == expected
# back to original
self.unscale_layer(model, None)
scalings = self.get_scalings(model)
expected = [2.0] * n_layers
assert scalings == expected
# triple
self.set_scale(model, "default", 3)
scalings = self.get_scalings(model)
expected = [6.0] * n_layers
assert scalings == expected
# back to original
self.unscale_layer(model, 3)
scalings = self.get_scalings(model)
expected = [2.0] * n_layers
assert scalings == expected
def test_scaling_rank_pattern_alpha_pattern(self, model):
# layer 0: 8 / 8
# layer 1: 8 / 16
# layer 2: 4 / 32
# layer 3: 16 / 8
# layer 4: 8 / 8
config = LoraConfig(
r=8,
lora_alpha=8,
target_modules=["k_proj"],
rank_pattern={"layers.1.self_attn.k_proj": 16, "layers.2.self_attn.k_proj": 32},
alpha_pattern={"layers.2.self_attn.k_proj": 4, "layers.3.self_attn.k_proj": 16},
)
model = get_peft_model(model, config)
scalings = self.get_scalings(model)
expected = [1.0, 0.5, 0.125, 2.0, 1.0]
assert scalings == expected
# double
self.scale_layer(model, 2)
scalings = self.get_scalings(model)
expected = [2.0, 1.0, 0.25, 4.0, 2.0]
assert scalings == expected
# back to original
self.unscale_layer(model, None)
scalings = self.get_scalings(model)
expected = [1.0, 0.5, 0.125, 2.0, 1.0]
assert scalings == expected
# triple
self.set_scale(model, "default", 3)
scalings = self.get_scalings(model)
expected = [3.0, 1.5, 0.375, 6.0, 3.0]
assert scalings == expected
# back to original
self.unscale_layer(model, 3)
scalings = self.get_scalings(model)
expected = [1.0, 0.5, 0.125, 2.0, 1.0]
assert scalings == expected
def test_scaling_multiple_times(self, model):
# same as previous test, but scale and unscale multiple times in a row
# layer 0: 8 / 8
# layer 1: 8 / 16
# layer 2: 4 / 32
# layer 3: 16 / 8
# layer 4: 8 / 8
config = LoraConfig(
r=8,
lora_alpha=8,
target_modules=["k_proj"],
rank_pattern={"layers.1.self_attn.k_proj": 16, "layers.2.self_attn.k_proj": 32},
alpha_pattern={"layers.2.self_attn.k_proj": 4, "layers.3.self_attn.k_proj": 16},
)
model = get_peft_model(model, config)
scalings = self.get_scalings(model)
expected = [1.0, 0.5, 0.125, 2.0, 1.0]
assert scalings == expected
# scale of 1 makes no difference
        self.scale_layer(model, 1)
        scalings = self.get_scalings(model)
        expected = [1.0, 0.5, 0.125, 2.0, 1.0]
        assert scalings == expected
# double
self.scale_layer(model, 2)
scalings = self.get_scalings(model)
expected = [2.0, 1.0, 0.25, 4.0, 2.0]
assert scalings == expected
# triple, on top of previous double
self.scale_layer(model, 3)
scalings = self.get_scalings(model)
expected = [6.0, 3.0, 0.75, 12.0, 6.0]
assert scalings == expected
# half
self.unscale_layer(model, 2)
scalings = self.get_scalings(model)
expected = [3.0, 1.5, 0.375, 6.0, 3.0]
assert scalings == expected
# divide by 3, on top of previous half
self.unscale_layer(model, 3)
scalings = self.get_scalings(model)
expected = [1.0, 0.5, 0.125, 2.0, 1.0]
assert scalings == expected
# set scale to 2
self.set_scale(model, "default", 2)
scalings = self.get_scalings(model)
expected = [2.0, 1.0, 0.25, 4.0, 2.0]
assert scalings == expected
        # set scale to 3; set_scale is relative to the initial scaling, not the current one, so factor 3, not 6
self.set_scale(model, "default", 3)
scalings = self.get_scalings(model)
expected = [3.0, 1.5, 0.375, 6.0, 3.0]
assert scalings == expected
# back to original
self.unscale_layer(model, None)
scalings = self.get_scalings(model)
expected = [1.0, 0.5, 0.125, 2.0, 1.0]
assert scalings == expected
# back to original again
self.unscale_layer(model, None)
scalings = self.get_scalings(model)
expected = [1.0, 0.5, 0.125, 2.0, 1.0]
assert scalings == expected
def test_scaling_multiple_adapters(self, model):
# ensure that scaling works with multiple adapters
n_layers = 5
rank0, lora_alpha0 = 8, 16
config0 = LoraConfig(
r=rank0,
lora_alpha=lora_alpha0,
target_modules=["k_proj"],
)
rank1, lora_alpha1 = 16, 8
config1 = LoraConfig(
r=rank1,
lora_alpha=lora_alpha1,
target_modules=["k_proj"],
)
model = get_peft_model(model, config0)
model.add_adapter("other", config1)
scalings_default = self.get_scalings(model, "default")
scalings_other = self.get_scalings(model, "other")
expected_default = [lora_alpha0 / rank0] * n_layers
expected_other = [lora_alpha1 / rank1] * n_layers
assert scalings_default == expected_default
assert scalings_other == expected_other
# double the scale for other
self.set_scale(model, "other", 2)
scalings_default = self.get_scalings(model, "default")
scalings_other = self.get_scalings(model, "other")
expected_default = [lora_alpha0 / rank0] * n_layers
expected_other = [2 * lora_alpha1 / rank1] * n_layers
assert scalings_default == expected_default
assert scalings_other == expected_other
# quarter the scale for default
self.set_scale(model, "default", 0.25)
scalings_default = self.get_scalings(model, "default")
scalings_other = self.get_scalings(model, "other")
expected_default = [lora_alpha0 / rank0 / 4] * n_layers
expected_other = [2 * lora_alpha1 / rank1] * n_layers
assert scalings_default == expected_default
assert scalings_other == expected_other
# unscale resets for all *active* adapters
self.unscale_layer(model, None)
scalings_default = self.get_scalings(model, "default")
scalings_other = self.get_scalings(model, "other")
expected_default = [lora_alpha0 / rank0] * n_layers
expected_other = [2 * lora_alpha1 / rank1] * n_layers # stays the same as 'other' is not active
assert scalings_default == expected_default
assert scalings_other == expected_other
# scale all *active* adapters by 2
self.scale_layer(model, 2)
scalings_default = self.get_scalings(model, "default")
scalings_other = self.get_scalings(model, "other")
expected_default = [2 * lora_alpha0 / rank0] * n_layers
expected_other = [2 * lora_alpha1 / rank1] * n_layers # stays the same as 'other' is not active
assert scalings_default == expected_default
assert scalings_other == expected_other
# switch to 'other'
model.set_adapter("other")
# unscale, this time 'other'
self.unscale_layer(model, None)
scalings_default = self.get_scalings(model, "default")
scalings_other = self.get_scalings(model, "other")
        expected_default = [2 * lora_alpha0 / rank0] * n_layers  # stays the same as 'default' is not active
expected_other = [lora_alpha1 / rank1] * n_layers
assert scalings_default == expected_default
assert scalings_other == expected_other
# scale all *active* adapters by 3
self.scale_layer(model, 3)
scalings_default = self.get_scalings(model, "default")
scalings_other = self.get_scalings(model, "other")
        expected_default = [2 * lora_alpha0 / rank0] * n_layers  # stays the same as 'default' is not active
expected_other = [3 * lora_alpha1 / rank1] * n_layers
assert scalings_default == expected_default
assert scalings_other == expected_other
class TestLoadPeftKeyMapping:
# See discussion in https://github.com/huggingface/transformers/pull/38627
# transformers PR #37033 re-arranges the way visual language models are built by moving the LM head from the
# language model to the top-level VLM (among other things). A consequence of this is that the keys in the PEFT
# state_dict now also follow the new architecture. This test class serves to ensure that old checkpoints can be
    # loaded with the changed architecture. Unfortunately, new checkpoints cannot be loaded with the old architecture;
    # the corresponding tests are marked as xfail.
# Note: We only test prefix tuning (prompt learning method), LoRA (non-prompt learning method), and VBLoRA (shared
# parameters) as the other PEFT methods should work the same way. It would be excessive to test all of them here.
@pytest.fixture
def fake_model_config(self):
# mimics a transformers model config
class FakeConfig(dict):
def __init__(self):
self.vocab_size = 10
def __getattr__(self, item):
if item in self:
return self[item]
raise AttributeError(f"'{self.__class__.__name__}' object has no attribute '{item}'")
return FakeConfig()
@pytest.fixture
def old_model(self, fake_model_config):
# create a small model that mimics the old architecture of, for instance, Qwen/Qwen2-VL-2B-Instruct
# Qwen2VLForConditionalGeneration(
# (visual): Qwen2VisionTransformerPretrainedModel(
# (patch_embed): PatchEmbed(
# (proj): Conv3d(3, 1280, kernel_size=(2, 14, 14), stride=(2, 14, 14), bias=False)
# )
# (rotary_pos_emb): VisionRotaryEmbedding()
# (blocks): ModuleList(
# (0-31): 32 x Qwen2VLVisionBlock(
# (norm1): LayerNorm((1280,), eps=1e-06, elementwise_affine=True)
# (norm2): LayerNorm((1280,), eps=1e-06, elementwise_affine=True)
# (attn): VisionSdpaAttention(
# (qkv): Linear(in_features=1280, out_features=3840, bias=True)
# (proj): Linear(in_features=1280, out_features=1280, bias=True)
# )
# (mlp): VisionMlp(
# (fc1): Linear(in_features=1280, out_features=5120, bias=True)
# (act): QuickGELUActivation()
# (fc2): Linear(in_features=5120, out_features=1280, bias=True)
# )
# )
# )
# (merger): PatchMerger(
# (ln_q): LayerNorm((1280,), eps=1e-06, elementwise_affine=True)
# (mlp): Sequential(
# (0): Linear(in_features=5120, out_features=5120, bias=True)
# (1): GELU(approximate='none')
# (2): Linear(in_features=5120, out_features=1536, bias=True)
# )
# )
# )
# (model): Qwen2VLModel(
# (embed_tokens): Embedding(151936, 1536)
# (layers): ModuleList(
# (0-27): 28 x Qwen2VLDecoderLayer(
# (self_attn): Qwen2VLSdpaAttention(
# (q_proj): Linear(in_features=1536, out_features=1536, bias=True)
# (k_proj): Linear(in_features=1536, out_features=256, bias=True)
# (v_proj): Linear(in_features=1536, out_features=256, bias=True)
# (o_proj): Linear(in_features=1536, out_features=1536, bias=False)
# (rotary_emb): Qwen2VLRotaryEmbedding()
# )
# (mlp): Qwen2MLP(
# (gate_proj): Linear(in_features=1536, out_features=8960, bias=False)
# (up_proj): Linear(in_features=1536, out_features=8960, bias=False)
# (down_proj): Linear(in_features=8960, out_features=1536, bias=False)
# (act_fn): SiLU()
# )
# (input_layernorm): Qwen2RMSNorm((1536,), eps=1e-06)
# (post_attention_layernorm): Qwen2RMSNorm((1536,), eps=1e-06)
# )
# )
# (norm): Qwen2RMSNorm((1536,), eps=1e-06)
# (rotary_emb): Qwen2VLRotaryEmbedding()
# )
# (lm_head): Linear(in_features=1536, out_features=151936, bias=False)
# )
class Block(nn.Module):
def __init__(self):
super().__init__()
self.attn = nn.Linear(10, 10)
class OldModel(nn.Module):
def __init__(self):
super().__init__()
self.config = fake_model_config
self.device = "cpu"
self.proj = nn.Conv3d(3, 10, 3)
self.visual = nn.ModuleDict(
{
"blocks": nn.ModuleList([Block() for _ in range(2)]),
}
)
self.model = nn.ModuleDict(
{
"layers": nn.ModuleList([Block() for _ in range(2)]),
}
)
self.lm_head = nn.Linear(10, 10)
def prepare_inputs_for_generation(self):
return
model = OldModel()
return model
@pytest.fixture
def new_model(self, fake_model_config):
# create a small model that mimics the new architecture of, for instance, Qwen/Qwen2-VL-2B-Instruct
# Qwen2VLForConditionalGeneration(
# (model): Qwen2VLModel(
# (visual): Qwen2VisionTransformerPretrainedModel(
# (patch_embed): PatchEmbed(
# (proj): Conv3d(3, 1280, kernel_size=(2, 14, 14), stride=(2, 14, 14), bias=False)
# )
# (rotary_pos_emb): VisionRotaryEmbedding()
# (blocks): ModuleList(
# (0-31): 32 x Qwen2VLVisionBlock(
# (norm1): LayerNorm((1280,), eps=1e-06, elementwise_affine=True)
# (norm2): LayerNorm((1280,), eps=1e-06, elementwise_affine=True)
# (attn): VisionSdpaAttention(
# (qkv): Linear(in_features=1280, out_features=3840, bias=True)
# (proj): Linear(in_features=1280, out_features=1280, bias=True)
# )
# (mlp): VisionMlp(
# (fc1): Linear(in_features=1280, out_features=5120, bias=True)
# (act): QuickGELUActivation()
# (fc2): Linear(in_features=5120, out_features=1280, bias=True)
# )
# )
# )
# (merger): PatchMerger(
# (ln_q): LayerNorm((1280,), eps=1e-06, elementwise_affine=True)
# (mlp): Sequential(
# (0): Linear(in_features=5120, out_features=5120, bias=True)
# (1): GELU(approximate='none')
# (2): Linear(in_features=5120, out_features=1536, bias=True)
# )
# )
# )
# (language_model): Qwen2VLTextModel(
# (embed_tokens): Embedding(151936, 1536)
# (layers): ModuleList(
# (0-27): 28 x Qwen2VLDecoderLayer(
# (self_attn): Qwen2VLAttention(
# (q_proj): Linear(in_features=1536, out_features=1536, bias=True)
# (k_proj): Linear(in_features=1536, out_features=256, bias=True)
# (v_proj): Linear(in_features=1536, out_features=256, bias=True)
# (o_proj): Linear(in_features=1536, out_features=1536, bias=False)
# (rotary_emb): Qwen2VLRotaryEmbedding()
# )
# (mlp): Qwen2MLP(
# (gate_proj): Linear(in_features=1536, out_features=8960, bias=False)
# (up_proj): Linear(in_features=1536, out_features=8960, bias=False)
# (down_proj): Linear(in_features=8960, out_features=1536, bias=False)
# (act_fn): SiLU()
# )
# (input_layernorm): Qwen2RMSNorm((1536,), eps=1e-06)
# (post_attention_layernorm): Qwen2RMSNorm((1536,), eps=1e-06)
# )
# )
# (norm): Qwen2RMSNorm((1536,), eps=1e-06)
# (rotary_emb): Qwen2VLRotaryEmbedding()
# )
# )
# (lm_head): Linear(in_features=1536, out_features=151936, bias=False)
# )
class Block(nn.Module):
def __init__(self):
super().__init__()
self.attn = nn.Linear(10, 10)
class InnerModel(nn.Module):
def __init__(self):
super().__init__()
self.visual = nn.ModuleDict(
{
"blocks": nn.ModuleList([Block() for _ in range(2)]),
}
)
self.language_model = nn.ModuleDict(
{
"layers": nn.ModuleList([Block() for _ in range(2)]),
}
)
class NewModel(nn.Module):
def __init__(self):
super().__init__()
self.config = fake_model_config
self.device = "cpu"
self.model = InnerModel()
self.lm_head = nn.Linear(10, 10)
# new transformers models have this attribute to map old checkpoints to new ones:
self._checkpoint_conversion_mapping = {
"^visual": "model.visual",
"^model(?!\\.(language_model|visual))": "model.language_model",
}
def prepare_inputs_for_generation(self):
return
model = NewModel()
return model
def check_lora_load_no_warning(self, model1, model2, path):
# helper method: save with model1, load with model2, ensure that there is no warning about missing keys and that
# the parameters are loaded correctly
model1 = copy.deepcopy(model1)
model2 = copy.deepcopy(model2)
config = LoraConfig(target_modules=["attn"])
peft_model = get_peft_model(copy.deepcopy(model1), config)
# set all values to 1.0 or 2.0 so we can check that they are loaded correctly
for name, param in peft_model.named_parameters():
if name.endswith("lora_A.default.weight"):
param.data.fill_(1.0)
elif name.endswith("lora_B.default.weight"):
param.data.fill_(2.0)
peft_model.save_pretrained(path)
del peft_model
# ensure that there is no warning: UserWarning: Found missing adapter keys while loading the checkpoint
with warnings.catch_warnings(record=True) as w:
warnings.simplefilter("always")
loaded = PeftModel.from_pretrained(copy.deepcopy(model2), path)
assert not any("Found missing adapter keys" in str(warning.message) for warning in w)
# sanity check on parameter values to not only rely on the absence of warnings
for name, param in loaded.named_parameters():
if name.endswith("lora_A.default.weight"):
assert torch.allclose(param, torch.full_like(param, 1.0))
elif name.endswith("lora_B.default.weight"):
assert torch.allclose(param, torch.full_like(param, 2.0))
def check_prefix_tuning_load_no_warning(self, model1, model2, path):
# helper method: save with model1, load with model2, ensure that there is no warning about missing keys and that
# the parameters are loaded correctly.
model1 = copy.deepcopy(model1)
model2 = copy.deepcopy(model2)
config = PrefixTuningConfig(
task_type="CAUSAL_LM", num_virtual_tokens=5, num_layers=2, token_dim=10, num_attention_heads=2
)
peft_model = get_peft_model(copy.deepcopy(model1), config)
# set all values to 1.0 so we can check that they are loaded correctly
peft_model.prompt_encoder.default.embedding.weight.data.fill_(1.0)
peft_model.save_pretrained(path)
del peft_model
# ensure that there is no warning: UserWarning: Found missing adapter keys while loading the checkpoint
with warnings.catch_warnings(record=True) as w:
warnings.simplefilter("always")
loaded = PeftModel.from_pretrained(copy.deepcopy(model2), path)
assert not any("Found missing adapter keys" in str(warning.message) for warning in w)
# sanity check on parameter values to not only rely on the absence of warnings
weight = loaded.prompt_encoder.default.embedding.weight
assert torch.allclose(weight, torch.full_like(weight, 1.0))
def check_vblora_load_no_warning(self, model1, model2, path):
# helper method: save with model1, load with model2, ensure that there is no warning about missing keys and that
# the parameters are loaded correctly
model1 = copy.deepcopy(model1)
model2 = copy.deepcopy(model2)
config = VBLoRAConfig(target_modules=["attn"], vector_length=2, num_vectors=4)
peft_model = get_peft_model(copy.deepcopy(model1), config)
# set all values to 1.0 or 2.0 so we can check that they are loaded correctly
peft_model.base_model.vblora_vector_bank["default"].data.fill_(1.0)
for name, param in peft_model.named_parameters():
if "logits" in name:
param.data.fill_(2.0)
peft_model.save_pretrained(path)
del peft_model
# ensure that there is no warning: UserWarning: Found missing adapter keys while loading the checkpoint
with warnings.catch_warnings(record=True) as w:
warnings.simplefilter("always")
loaded = PeftModel.from_pretrained(copy.deepcopy(model2), path)
assert not any("Found missing adapter keys" in str(warning.message) for warning in w)
# sanity check on parameter values to not only rely on the absence of warnings
param = loaded.base_model.vblora_vector_bank["default"]
assert torch.allclose(param, torch.full_like(param, 1.0))
for name, param in loaded.named_parameters():
if "logits" in name:
assert torch.allclose(param, torch.full_like(param, 2.0))
def test_key_mapping_save_new_load_new_lora(self, new_model, tmp_path):
# save and load the new model, should work without issues
self.check_lora_load_no_warning(new_model, new_model, tmp_path)
def test_key_mapping_save_old_load_old_lora(self, old_model, tmp_path):
# save and load the old model, should work without issues
self.check_lora_load_no_warning(old_model, old_model, tmp_path)
def test_key_mapping_save_old_load_new_lora(self, old_model, new_model, tmp_path):
# save the old model, load it into the new model, should work without issues (backwards compatibility)
self.check_lora_load_no_warning(old_model, new_model, tmp_path)
@pytest.mark.xfail(reason="Loading new checkpoints with old transformers is not supported.", strict=True)
def test_key_mapping_save_new_load_old_lora(self, old_model, new_model, tmp_path):
# save the new model, load it into the old model, should work without issues (forwards compatibility)
self.check_lora_load_no_warning(new_model, old_model, tmp_path)
def test_key_mapping_save_new_load_new_prefix_tuning(self, new_model, tmp_path):
# save and load the new model, should work without issues
self.check_prefix_tuning_load_no_warning(new_model, new_model, tmp_path)
def test_key_mapping_save_old_load_old_prefix_tuning(self, old_model, tmp_path):
# save and load the old model, should work without issues
self.check_prefix_tuning_load_no_warning(old_model, old_model, tmp_path)
def test_key_mapping_save_old_load_new_prefix_tuning(self, old_model, new_model, tmp_path):
# save the old model, load it into the new model, should work without issues (backwards compatibility)
self.check_prefix_tuning_load_no_warning(old_model, new_model, tmp_path)
def test_key_mapping_save_new_load_old_prefix_tuning(self, old_model, new_model, tmp_path):
# save the new model, load it into the old model, should work without issues (forwards compatibility)
self.check_prefix_tuning_load_no_warning(new_model, old_model, tmp_path)
def test_key_mapping_save_new_load_new_vblora(self, new_model, tmp_path):
# save and load the new model, should work without issues
self.check_vblora_load_no_warning(new_model, new_model, tmp_path)
def test_key_mapping_save_old_load_old_vblora(self, old_model, tmp_path):
# save and load the old model, should work without issues
self.check_vblora_load_no_warning(old_model, old_model, tmp_path)
def test_key_mapping_save_old_load_new_vblora(self, old_model, new_model, tmp_path):
# save the old model, load it into the new model, should work without issues (backwards compatibility)
self.check_vblora_load_no_warning(old_model, new_model, tmp_path)
@pytest.mark.xfail(reason="Loading new checkpoints with old transformers is not supported.", strict=True)
def test_key_mapping_save_new_load_old_vblora(self, old_model, new_model, tmp_path):
# save the new model, load it into the old model, should work without issues (forwards compatibility)
self.check_vblora_load_no_warning(new_model, old_model, tmp_path)
# Copyright 2025-present the HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import pytest
import torch
from torch import nn
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model
from .testing_common import PeftCommonTester
from .testing_utils import hub_online_once, set_init_weights_false
ALL_CONFIGS = [
##########
# Llama4 #
##########
# target down_proj
(
"trl-internal-testing/tiny-Llama4ForCausalLM",
LoraConfig,
{
"task_type": TaskType.CAUSAL_LM,
"target_modules": [],
"lora_dropout": 0.0,
"target_parameters": [
"feed_forward.experts.down_proj",
],
},
),
# target gate_up_proj and down_proj, but not on the same module
(
"trl-internal-testing/tiny-Llama4ForCausalLM",
LoraConfig,
{
"task_type": TaskType.CAUSAL_LM,
"target_modules": [],
"lora_dropout": 0.0,
"target_parameters": [
"0.feed_forward.experts.gate_up_proj",
"1.feed_forward.experts.down_proj",
],
},
),
# target down_proj and gate_up_proj on the same module
(
"trl-internal-testing/tiny-Llama4ForCausalLM",
LoraConfig,
{
"task_type": "CAUSAL_LM",
"r": 8,
"lora_alpha": 32,
"target_modules": None,
"lora_dropout": 0.0,
"bias": "none",
"target_parameters": [
"feed_forward.experts.down_proj",
"feed_forward.experts.gate_up_proj",
],
},
),
# target q_proj, v_proj as modules, and down_proj as parameter
(
"trl-internal-testing/tiny-Llama4ForCausalLM",
LoraConfig,
{
"task_type": TaskType.CAUSAL_LM,
"target_modules": ["q_proj", "v_proj"],
"lora_dropout": 0.0,
"target_parameters": [
"feed_forward.experts.down_proj",
],
},
),
###########
# gpt-oss #
###########
# target down_proj
(
"trl-internal-testing/tiny-GptOssForCausalLM",
LoraConfig,
{
"task_type": TaskType.CAUSAL_LM,
"target_modules": [],
"lora_dropout": 0.0,
"target_parameters": [
"mlp.experts.down_proj",
],
},
),
# target gate_up_proj and down_proj, but not on the same module
(
"trl-internal-testing/tiny-GptOssForCausalLM",
LoraConfig,
{
"task_type": TaskType.CAUSAL_LM,
"target_modules": [],
"lora_dropout": 0.0,
"target_parameters": [
"0.mlp.experts.gate_up_proj",
"1.mlp.experts.down_proj",
],
},
),
# target down_proj and gate_up_proj on the same module
(
"trl-internal-testing/tiny-GptOssForCausalLM",
LoraConfig,
{
"task_type": "CAUSAL_LM",
"r": 8,
"lora_alpha": 32,
"target_modules": None,
"lora_dropout": 0.0,
"bias": "none",
"target_parameters": [
"mlp.experts.down_proj",
"mlp.experts.gate_up_proj",
],
},
),
# target q_proj, v_proj as modules, and down_proj as parameter
(
"trl-internal-testing/tiny-GptOssForCausalLM",
LoraConfig,
{
"task_type": TaskType.CAUSAL_LM,
"target_modules": ["q_proj", "v_proj"],
"lora_dropout": 0.0,
"target_parameters": [
"mlp.experts.down_proj",
],
},
),
]
class MyAutoModelForCausalLM(AutoModelForCausalLM):
@classmethod
def from_pretrained(cls, *args, **kwargs):
torch.manual_seed(0)
model = AutoModelForCausalLM.from_pretrained(*args, **kwargs)
# check that we load the original model, not, say, a trained checkpoint
if args[0] == "trl-internal-testing/tiny-Llama4ForCausalLM":
# model contains weights with values ~1e36 or nan, so we need to reinitialize with sane values
with torch.no_grad():
for param in model.parameters():
param.data = torch.randn(param.shape)
return model
class TestDecoderModelsTargetParameters(PeftCommonTester):
# This is more or less a copy of TestDecoderModels at the time of the PR being added. Unnecessary code is removed,
# like code required for testing non-LoRA methods. The tests being included are not selected to test specific
# functionality of targeting nn.Parameters, they (together with the tests in test_custom_models.py) just ensure that
# generally, nothing is broken.
transformers_class = MyAutoModelForCausalLM
def skipTest(self, reason=""):
# for backwards compatibility with unittest style test classes
pytest.skip(reason)
def prepare_inputs_for_testing(self):
input_ids = torch.tensor([[1, 1, 1], [1, 2, 1]]).to(self.torch_device)
attention_mask = torch.tensor([[1, 1, 1], [1, 0, 1]]).to(self.torch_device)
return {"input_ids": input_ids, "attention_mask": attention_mask}
@pytest.mark.parametrize("model_id,config_cls,config_kwargs", ALL_CONFIGS)
def test_attributes_parametrized(self, model_id, config_cls, config_kwargs):
self._test_model_attr(model_id, config_cls, config_kwargs.copy())
@pytest.mark.parametrize("model_id,config_cls,config_kwargs", ALL_CONFIGS)
def test_adapter_name(self, model_id, config_cls, config_kwargs):
self._test_adapter_name(model_id, config_cls, config_kwargs.copy())
@pytest.mark.parametrize("model_id,config_cls,config_kwargs", ALL_CONFIGS)
def test_prepare_for_training_parametrized(self, model_id, config_cls, config_kwargs):
self._test_prepare_for_training(model_id, config_cls, config_kwargs.copy())
@pytest.mark.parametrize("model_id,config_cls,config_kwargs", ALL_CONFIGS)
def test_save_pretrained(self, model_id, config_cls, config_kwargs):
self._test_save_pretrained(model_id, config_cls, config_kwargs.copy())
@pytest.mark.parametrize("model_id,config_cls,config_kwargs", ALL_CONFIGS)
def test_save_pretrained_pickle(self, model_id, config_cls, config_kwargs):
self._test_save_pretrained(model_id, config_cls, config_kwargs.copy(), safe_serialization=False)
@pytest.mark.skip(reason="Multiple adapters with target_parameters are not supported yet.")
@pytest.mark.parametrize("model_id,config_cls,config_kwargs", ALL_CONFIGS)
def test_save_pretrained_selected_adapters(self, model_id, config_cls, config_kwargs):
self._test_save_pretrained_selected_adapters(model_id, config_cls, config_kwargs.copy())
@pytest.mark.skip(reason="Multiple adapters with target_parameters are not supported yet.")
@pytest.mark.parametrize("model_id,config_cls,config_kwargs", ALL_CONFIGS)
def test_save_pretrained_selected_adapters_pickle(self, model_id, config_cls, config_kwargs):
self._test_save_pretrained_selected_adapters(
model_id, config_cls, config_kwargs.copy(), safe_serialization=False
)
@pytest.mark.parametrize("model_id,config_cls,config_kwargs", ALL_CONFIGS)
def test_from_pretrained_config_construction(self, model_id, config_cls, config_kwargs):
self._test_from_pretrained_config_construction(model_id, config_cls, config_kwargs.copy())
@pytest.mark.parametrize("model_id,config_cls,config_kwargs", ALL_CONFIGS)
def test_merge_layers(self, model_id, config_cls, config_kwargs):
config_kwargs = set_init_weights_false(config_cls, config_kwargs)
self._test_merge_layers(model_id, config_cls, config_kwargs.copy())
@pytest.mark.skip(reason="Multiple adapters with target_parameters are not supported yet.")
@pytest.mark.parametrize("model_id,config_cls,config_kwargs", ALL_CONFIGS)
def test_merge_layers_multi(self, model_id, config_cls, config_kwargs):
config_kwargs = set_init_weights_false(config_cls, config_kwargs)
self._test_merge_layers_multi(model_id, config_cls, config_kwargs.copy())
@pytest.mark.parametrize("model_id,config_cls,config_kwargs", ALL_CONFIGS)
def test_merge_layers_nan(self, model_id, config_cls, config_kwargs):
config_kwargs = set_init_weights_false(config_cls, config_kwargs)
self._test_merge_layers_nan(model_id, config_cls, config_kwargs.copy())
@pytest.mark.skip(reason="Multiple adapters with target_parameters are not supported yet.")
@pytest.mark.parametrize("model_id,config_cls,config_kwargs", ALL_CONFIGS)
def test_mixed_adapter_batches(self, model_id, config_cls, config_kwargs):
config_kwargs = set_init_weights_false(config_cls, config_kwargs)
msg = "lora.ParamWrapper does not support mixed adapter batches yet."
with pytest.raises(ValueError, match=msg):
self._test_mixed_adapter_batches(model_id, config_cls, config_kwargs.copy())
@pytest.mark.skip(reason="Multiple adapters with target_parameters are not supported yet.")
@pytest.mark.parametrize("model_id,config_cls,config_kwargs", ALL_CONFIGS)
def test_generate_with_mixed_adapter_batches(self, model_id, config_cls, config_kwargs):
config_kwargs = set_init_weights_false(config_cls, config_kwargs)
msg = "lora.ParamWrapper does not support mixed adapter batches yet."
with pytest.raises(ValueError, match=msg):
self._test_generate_with_mixed_adapter_batches_and_beam_search(model_id, config_cls, config_kwargs.copy())
@pytest.mark.parametrize("model_id,config_cls,config_kwargs", ALL_CONFIGS)
def test_generate(self, model_id, config_cls, config_kwargs):
self._test_generate(model_id, config_cls, config_kwargs.copy())
@pytest.mark.parametrize("model_id,config_cls,config_kwargs", ALL_CONFIGS)
def test_generate_pos_args(self, model_id, config_cls, config_kwargs):
self._test_generate_pos_args(model_id, config_cls, config_kwargs.copy(), raises_err=False)
@pytest.mark.parametrize("model_id,config_cls,config_kwargs", ALL_CONFIGS)
def test_merge_layers_fp16(self, model_id, config_cls, config_kwargs):
self._test_merge_layers_fp16(model_id, config_cls, config_kwargs.copy())
@pytest.mark.parametrize("model_id,config_cls,config_kwargs", ALL_CONFIGS)
def test_generate_half_prec(self, model_id, config_cls, config_kwargs):
self._test_generate_half_prec(model_id, config_cls, config_kwargs.copy())
@pytest.mark.parametrize("model_id,config_cls,config_kwargs", ALL_CONFIGS)
def test_training_decoders(self, model_id, config_cls, config_kwargs):
self._test_training(model_id, config_cls, config_kwargs.copy())
@pytest.mark.parametrize("model_id,config_cls,config_kwargs", ALL_CONFIGS)
def test_training_decoders_gradient_checkpointing(self, model_id, config_cls, config_kwargs):
self._test_training_gradient_checkpointing(model_id, config_cls, config_kwargs.copy())
@pytest.mark.parametrize("model_id,config_cls,config_kwargs", ALL_CONFIGS)
def test_inference_safetensors(self, model_id, config_cls, config_kwargs):
self._test_inference_safetensors(model_id, config_cls, config_kwargs.copy())
@pytest.mark.parametrize("model_id,config_cls,config_kwargs", ALL_CONFIGS)
def test_peft_model_device_map(self, model_id, config_cls, config_kwargs):
self._test_peft_model_device_map(model_id, config_cls, config_kwargs.copy())
@pytest.mark.skip(reason="Multiple adapters with target_parameters are not supported yet.")
@pytest.mark.parametrize("model_id,config_cls,config_kwargs", ALL_CONFIGS)
def test_delete_adapter(self, model_id, config_cls, config_kwargs):
self._test_delete_adapter(model_id, config_cls, config_kwargs.copy())
@pytest.mark.skip(reason="Multiple adapters with target_parameters are not supported yet.")
@pytest.mark.parametrize("model_id,config_cls,config_kwargs", ALL_CONFIGS)
def test_delete_inactive_adapter(self, model_id, config_cls, config_kwargs):
self._test_delete_inactive_adapter(model_id, config_cls, config_kwargs.copy())
@pytest.mark.parametrize("model_id,config_cls,config_kwargs", ALL_CONFIGS)
def test_adding_multiple_adapters_with_bias_raises(self, model_id, config_cls, config_kwargs):
self._test_adding_multiple_adapters_with_bias_raises(model_id, config_cls, config_kwargs.copy())
@pytest.mark.parametrize("model_id,config_cls,config_kwargs", ALL_CONFIGS)
def test_unload_adapter(self, model_id, config_cls, config_kwargs):
config_kwargs = set_init_weights_false(config_cls, config_kwargs)
self._test_unload_adapter(model_id, config_cls, config_kwargs.copy())
@pytest.mark.skip(reason="Multiple adapters with target_parameters are not supported yet.")
@pytest.mark.parametrize("model_id,config_cls,config_kwargs", ALL_CONFIGS)
def test_weighted_combination_of_adapters(self, model_id, config_cls, config_kwargs):
config_kwargs = set_init_weights_false(config_cls, config_kwargs)
msg = "add_weighted_adapter does not support targeting nn.Parameter"
with pytest.raises(ValueError, match=msg):
self._test_weighted_combination_of_adapters(model_id, config_cls, config_kwargs.copy())
@pytest.mark.parametrize("model_id,config_cls,config_kwargs", ALL_CONFIGS)
def test_training_prompt_learning_tasks(self, model_id, config_cls, config_kwargs):
self._test_training_prompt_learning_tasks(model_id, config_cls, config_kwargs.copy())
@pytest.mark.parametrize("model_id,config_cls,config_kwargs", ALL_CONFIGS)
def test_disable_adapter(self, model_id, config_cls, config_kwargs):
config_kwargs = set_init_weights_false(config_cls, config_kwargs)
self._test_disable_adapter(model_id, config_cls, config_kwargs.copy())
@pytest.mark.parametrize("model_id,config_cls,config_kwargs", ALL_CONFIGS)
def test_passing_input_embeds_works(self, model_id, config_cls, config_kwargs):
self._test_passing_input_embeds_works("", model_id, config_cls, config_kwargs.copy())
class TestTargetParameters:
# Tests specifically designed for target_parameters
def test_targeting_module_and_targeting_param_equivalent(self):
# Test that using LoRA with target_modules vs target_parameters yields identical results.
# note: we purposely target the gate_proj because its weight is not square (unlike q_proj, ...), this makes it
# easier to catch shape errors
torch.manual_seed(0)
model_id = "hf-internal-testing/tiny-random-LlamaForCausalLM"
with hub_online_once(model_id):
model0 = AutoModelForCausalLM.from_pretrained(model_id)
x = torch.arange(10).view(2, 5)
with torch.inference_mode():
out_base = model0(x, output_hidden_states=True).hidden_states[-1]
# targeting the module
config0 = LoraConfig(target_modules=["gate_proj"], init_lora_weights=False)
model0 = get_peft_model(model0, config0)
# targeting the parameter
model1 = AutoModelForCausalLM.from_pretrained("hf-internal-testing/tiny-random-LlamaForCausalLM")
config1 = LoraConfig(target_modules=[], target_parameters=["gate_proj.weight"], init_lora_weights=False)
model1 = get_peft_model(model1, config1)
gate_proj_0_0 = model0.base_model.model.model.layers[0].mlp.gate_proj
gate_proj_0_1 = model0.base_model.model.model.layers[1].mlp.gate_proj
gate_proj_1_0 = model1.base_model.model.model.layers[0].mlp.gate_proj
gate_proj_1_1 = model1.base_model.model.model.layers[1].mlp.gate_proj
# ensure that the randomly initialized LoRA weights are identical
gate_proj_1_0.lora_A.default.weight.data.copy_(gate_proj_0_0.lora_A.default.weight.data)
gate_proj_1_1.lora_A.default.weight.data.copy_(gate_proj_0_1.lora_A.default.weight.data)
gate_proj_1_0.lora_B.default.weight.data.copy_(gate_proj_0_0.lora_B.default.weight.data)
gate_proj_1_1.lora_B.default.weight.data.copy_(gate_proj_0_1.lora_B.default.weight.data)
with torch.inference_mode():
out_lora_0 = model0(x, output_hidden_states=True).hidden_states[-1]
out_lora_1 = model1(x, output_hidden_states=True).hidden_states[-1]
# sanity check: basemodel outputs should be different
atol, rtol = 1e-6, 1e-6
assert not torch.allclose(out_base, out_lora_0, atol=atol, rtol=rtol)
# LoRA outputs should be the same
assert torch.allclose(out_lora_0, out_lora_1, atol=atol, rtol=rtol)
def test_target_multiple_parameters_on_same_module(self, monkeypatch):
# test that if we target multiple nn.Parameters on the same module, all of them are being used during the
# forward pass
torch.manual_seed(0)
model_id = "trl-internal-testing/tiny-Llama4ForCausalLM"
with hub_online_once(model_id):
x = torch.arange(10).view(2, 5)
model = MyAutoModelForCausalLM.from_pretrained(model_id)
shape_gate_up_proj = model.model.layers[0].feed_forward.experts.gate_up_proj.shape
shape_down_proj = model.model.layers[0].feed_forward.experts.down_proj.shape
num_layers = len(model.model.layers)
target_parameters = ["feed_forward.experts.gate_up_proj", "feed_forward.experts.down_proj"]
num_params = len(target_parameters)
config = LoraConfig(target_parameters=target_parameters, init_lora_weights=False)
model = get_peft_model(model, config)
# CHECK FORWARD CALLS
# log the weights seen during the forward call
weights = []
def mock_forward(self, W):
weights.append(W)
return orig_forward(self, W)
from peft.tuners.lora.layer import _LoraParameterProxy
orig_forward = _LoraParameterProxy.forward
monkeypatch.setattr(_LoraParameterProxy, "forward", mock_forward)
num_steps = 3
with torch.inference_mode():
for _ in range(num_steps):
out_base = model(x, output_hidden_states=True).hidden_states[-1]
actual_call_count = len(weights)
# Note: We call forward twice per step, once to create the parametrization and once for the actual forward
# step. This may be a bit wasteful but it's not clear how to prevent this and overall is probably negligible
num_forward_per_step = 2
# Since https://github.com/huggingface/transformers/pull/39501, one of the parameters is accessed twice per
# forward call, so add +1.
expected_call_count = num_steps * num_layers * (1 + num_params * num_forward_per_step)
assert actual_call_count == expected_call_count
actual_shapes = {W.shape for W in weights}
expected_shapes = {shape_gate_up_proj, shape_down_proj}
assert actual_shapes == expected_shapes
# CHECK WEIGHT UPDATES
lora_weights_before = {
k: v.clone() for k, v in model.named_parameters() if "lora_A.default" in k or "lora_B.default" in k
}
# sanity check:
assert len(lora_weights_before) == 2 * num_layers * num_params
# train
optim = torch.optim.SGD(model.parameters(), lr=0.01)
for _ in range(10):
optim.zero_grad()
out = model(x)
loss = out.logits.sum()
loss.backward()
optim.step()
lora_weights_after = {
k: v for k, v in model.named_parameters() if "lora_A.default" in k or "lora_B.default" in k
}
assert lora_weights_before.keys() == lora_weights_after.keys()
atol, rtol = 0.1, 0.1
for key in lora_weights_before.keys():
assert not torch.allclose(lora_weights_before[key], lora_weights_after[key], atol=atol, rtol=rtol)
def test_target_parameters_works_with_existing_parametrization(self):
# When a parameter is already parametrized, we want the LoRA parametrization to work with it correctly.
class MyLinear(nn.Linear):
# For testing purposes, define a linear layer with 2 parameters: weight and other_weight.
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
nn.init.ones_(self.weight)
self.other_weight = nn.Parameter(torch.ones(self.weight.shape))
class MyModule(nn.Module):
def __init__(self):
super().__init__()
self.lin = MyLinear(2, 2, bias=False)
def forward(self, x):
return self.lin(x)
class MyParametrization(nn.Module):
def __init__(self):
super().__init__()
def forward(self, x):
return x + 1
# base model
model = MyModule()
x = torch.ones((2, 2))
# sanity check: result should be 1*1 + 1*1 == 2
output_base = model(x)
assert torch.all(output_base == 2)
# add parametrization to the weight
nn.utils.parametrize.register_parametrization(model.lin, "weight", MyParametrization())
# result should be (1+1)*1 + (1+1)*1 == 4
output_parametrized = model(x)
assert torch.all(output_parametrized == 4)
# add LoRA parametrization to the weight
config = LoraConfig(r=2, lora_alpha=6, target_parameters=["lin.weight"], init_lora_weights=False)
model = get_peft_model(model, config)
# manually set LoRA weights to ones
nn.init.ones_(model.base_model.model.lin.lora_A["default"].weight)
nn.init.ones_(model.base_model.model.lin.lora_B["default"].weight)
output_lora = model(x)
# delta_weight should be: (1+1) * lora_scale = (1+1) * (alpha / rank) = 2 * (6 / 2) = 6
# result should be: (1+1+6)*1 + (1+1+6)*1 == 8 + 8 == 16
assert torch.all(output_lora == 16)
# calling twice should yield the same result
output_lora2 = model(x)
assert torch.allclose(output_lora, output_lora2)
# add another LoRA parametrization to other_weight, should have no effect on the output
config = LoraConfig(r=2, lora_alpha=6, target_parameters=["lin.other_weight"], init_lora_weights=False)
model.add_adapter("other", config)
output_other_lora = model(x)
# delta_weight should be: (1+1) * lora_scale = (1+1) * (alpha / rank) = 2 * (6 / 2) = 6
# result should be: (1+1+6)*1 + (1+1+6)*1 == 8 + 8 == 16
assert torch.all(output_other_lora == output_lora)
# after unloading, the output should be the same as before LoRA was applied
unloaded = model.unload()
output_unloaded = unloaded(x)
assert torch.all(output_unloaded == output_parametrized)
- sections:
- local: index
title: Home
- local: quickstart
title: Quickstart
- local: installation
title: Installation
- local: changes
title: Changelog
title: Get started
- sections:
- local: feature_extraction
title: Using Pretrained Models as Feature Extractors
- local: training_script
title: Training With The Official Training Script
- local: hf_hub
  title: Share and Load Models from the 🤗 Hugging Face Hub
title: Tutorials
- sections:
- local: models
title: Model Summaries
- local: results
title: Results
- local: models/adversarial-inception-v3
title: Adversarial Inception v3
- local: models/advprop
title: AdvProp (EfficientNet)
- local: models/big-transfer
title: Big Transfer (BiT)
- local: models/csp-darknet
title: CSP-DarkNet
- local: models/csp-resnet
title: CSP-ResNet
- local: models/csp-resnext
title: CSP-ResNeXt
- local: models/densenet
title: DenseNet
- local: models/dla
title: Deep Layer Aggregation
- local: models/dpn
title: Dual Path Network (DPN)
- local: models/ecaresnet
title: ECA-ResNet
- local: models/efficientnet
title: EfficientNet
- local: models/efficientnet-pruned
title: EfficientNet (Knapsack Pruned)
- local: models/ensemble-adversarial
title: Ensemble Adversarial Inception ResNet v2
- local: models/ese-vovnet
title: ESE-VoVNet
- local: models/fbnet
title: FBNet
- local: models/gloun-inception-v3
title: (Gluon) Inception v3
- local: models/gloun-resnet
title: (Gluon) ResNet
- local: models/gloun-resnext
title: (Gluon) ResNeXt
- local: models/gloun-senet
title: (Gluon) SENet
- local: models/gloun-seresnext
title: (Gluon) SE-ResNeXt
- local: models/gloun-xception
title: (Gluon) Xception
- local: models/hrnet
title: HRNet
- local: models/ig-resnext
title: Instagram ResNeXt WSL
- local: models/inception-resnet-v2
title: Inception ResNet v2
- local: models/inception-v3
title: Inception v3
- local: models/inception-v4
title: Inception v4
- local: models/legacy-se-resnet
title: (Legacy) SE-ResNet
- local: models/legacy-se-resnext
title: (Legacy) SE-ResNeXt
- local: models/legacy-senet
title: (Legacy) SENet
- local: models/mixnet
title: MixNet
- local: models/mnasnet
title: MnasNet
- local: models/mobilenet-v2
title: MobileNet v2
- local: models/mobilenet-v3
title: MobileNet v3
- local: models/nasnet
title: NASNet
- local: models/noisy-student
title: Noisy Student (EfficientNet)
- local: models/pnasnet
title: PNASNet
- local: models/regnetx
title: RegNetX
- local: models/regnety
title: RegNetY
- local: models/res2net
title: Res2Net
- local: models/res2next
title: Res2NeXt
- local: models/resnest
title: ResNeSt
- local: models/resnet
title: ResNet
- local: models/resnet-d
title: ResNet-D
- local: models/resnext
title: ResNeXt
- local: models/rexnet
title: RexNet
- local: models/se-resnet
title: SE-ResNet
- local: models/selecsls
title: SelecSLS
- local: models/seresnext
title: SE-ResNeXt
- local: models/skresnet
title: SK-ResNet
- local: models/skresnext
title: SK-ResNeXt
- local: models/spnasnet
title: SPNASNet
- local: models/ssl-resnet
title: SSL ResNet
- local: models/swsl-resnet
title: SWSL ResNet
- local: models/swsl-resnext
title: SWSL ResNeXt
- local: models/tf-efficientnet
title: (Tensorflow) EfficientNet
- local: models/tf-efficientnet-condconv
title: (Tensorflow) EfficientNet CondConv
- local: models/tf-efficientnet-lite
title: (Tensorflow) EfficientNet Lite
- local: models/tf-inception-v3
title: (Tensorflow) Inception v3
- local: models/tf-mixnet
title: (Tensorflow) MixNet
- local: models/tf-mobilenet-v3
title: (Tensorflow) MobileNet v3
- local: models/tresnet
title: TResNet
- local: models/wide-resnet
title: Wide ResNet
- local: models/xception
title: Xception
title: Model Pages
isExpanded: false
- sections:
- local: reference/models
title: Models
- local: reference/data
title: Data
- local: reference/optimizers
title: Optimizers
- local: reference/schedulers
title: Learning Rate Schedulers
title: Reference
# ECA-ResNet
An **ECA ResNet** is a variant of [ResNet](https://paperswithcode.com/method/resnet) that incorporates an [Efficient Channel Attention module](https://paperswithcode.com/method/efficient-channel-attention). Efficient Channel Attention is an architectural unit, derived from [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block), that reduces model complexity by avoiding the dimensionality reduction used in the squeeze-and-excitation channel-attention path.
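The idea can be summarized in a few lines: pool each channel to a single descriptor, then apply a small 1D convolution *across* the channel axis to produce per-channel gates, with no fully-connected bottleneck. The sketch below is a minimal illustration of this mechanism, not timm's actual implementation (which differs in details such as adaptive kernel sizing); the `kernel_size` choice here is an assumption.

```py
import torch
import torch.nn as nn

class EcaSketch(nn.Module):
    """Minimal Efficient Channel Attention: global average pooling followed by
    a 1D conv over the channel axis -- no dimensionality-reducing FC layers."""
    def __init__(self, kernel_size=3):
        super().__init__()
        # A single shared 1D conv models local cross-channel interaction.
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.gate = nn.Sigmoid()

    def forward(self, x):                 # x: (B, C, H, W)
        y = x.mean(dim=(2, 3))            # squeeze to (B, C)
        y = self.conv(y.unsqueeze(1))     # conv across channels: (B, 1, C)
        y = self.gate(y).squeeze(1)       # per-channel weights in (0, 1)
        return x * y.view(x.size(0), -1, 1, 1)

x = torch.randn(2, 16, 8, 8)
out = EcaSketch()(x)
print(out.shape)  # torch.Size([2, 16, 8, 8]) -- attention preserves the shape
```

Because the gates are sigmoid outputs, the module only rescales channels; it never changes the tensor shape, so it can be dropped into any residual block.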
## How do I use this model on an image?
To load a pretrained model:
```py
>>> import timm
>>> model = timm.create_model('ecaresnet101d', pretrained=True)
>>> model.eval()
```
To load and preprocess the image:
```py
>>> import urllib
>>> from PIL import Image
>>> from timm.data import resolve_data_config
>>> from timm.data.transforms_factory import create_transform
>>> config = resolve_data_config({}, model=model)
>>> transform = create_transform(**config)
>>> url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg")
>>> urllib.request.urlretrieve(url, filename)
>>> img = Image.open(filename).convert('RGB')
>>> tensor = transform(img).unsqueeze(0) # transform and add batch dimension
```
To get the model predictions:
```py
>>> import torch
>>> with torch.inference_mode():
... out = model(tensor)
>>> probabilities = torch.nn.functional.softmax(out[0], dim=0)
>>> print(probabilities.shape)
>>> # prints: torch.Size([1000])
```
To get the top-5 predictions class names:
```py
>>> # Get imagenet class mappings
>>> url, filename = ("https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt", "imagenet_classes.txt")
>>> urllib.request.urlretrieve(url, filename)
>>> with open("imagenet_classes.txt", "r") as f:
... categories = [s.strip() for s in f.readlines()]
>>> # Print top categories per image
>>> top5_prob, top5_catid = torch.topk(probabilities, 5)
>>> for i in range(top5_prob.size(0)):
... print(categories[top5_catid[i]], top5_prob[i].item())
>>> # prints class names and probabilities like:
>>> # [('Samoyed', 0.6425196528434753), ('Pomeranian', 0.04062102362513542), ('keeshond', 0.03186424449086189), ('white wolf', 0.01739676296710968), ('Eskimo dog', 0.011717947199940681)]
```
Replace the model name with the variant you want to use, e.g. `ecaresnet101d`. You can find the IDs in the model summaries at the top of this page.
To extract image features with this model, follow the [timm feature extraction examples](../feature_extraction), just change the name of the model you want to use.
## How do I finetune this model?
You can finetune any of the pre-trained models just by changing the classifier (the last layer).
```py
>>> model = timm.create_model('ecaresnet101d', pretrained=True, num_classes=NUM_FINETUNE_CLASSES)
```
To finetune on your own dataset, you have to write a training loop or adapt [timm's training
script](https://github.com/rwightman/pytorch-image-models/blob/master/train.py) to use your dataset.
## How do I train this model?
You can follow the [timm recipe scripts](../training_script) for training a new model afresh.
## Citation
```BibTeX
@misc{wang2020ecanet,
title={ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks},
author={Qilong Wang and Banggu Wu and Pengfei Zhu and Peihua Li and Wangmeng Zuo and Qinghua Hu},
year={2020},
eprint={1910.03151},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
<!--
Type: model-index
Collections:
- Name: ECAResNet
Paper:
Title: 'ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks'
URL: https://paperswithcode.com/paper/eca-net-efficient-channel-attention-for-deep
Models:
- Name: ecaresnet101d
In Collection: ECAResNet
Metadata:
FLOPs: 10377193728
Parameters: 44570000
File Size: 178815067
Architecture:
- 1x1 Convolution
- Batch Normalization
- Bottleneck Residual Block
- Convolution
- Efficient Channel Attention
- Global Average Pooling
- Max Pooling
- ReLU
- Residual Block
- Residual Connection
- Softmax
- Squeeze-and-Excitation Block
Tasks:
- Image Classification
Training Techniques:
- SGD with Momentum
- Weight Decay
Training Data:
- ImageNet
Training Resources: 4x RTX 2080Ti GPUs
ID: ecaresnet101d
LR: 0.1
Epochs: 100
Layers: 101
Crop Pct: '0.875'
Batch Size: 256
Image Size: '224'
Weight Decay: 0.0001
Interpolation: bicubic
Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/resnet.py#L1087
Weights: https://imvl-automl-sh.oss-cn-shanghai.aliyuncs.com/darts/hyperml/hyperml/job_45402/outputs/ECAResNet101D_281c5844.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 82.18%
Top 5 Accuracy: 96.06%
- Name: ecaresnet101d_pruned
In Collection: ECAResNet
Metadata:
FLOPs: 4463972081
Parameters: 24880000
File Size: 99852736
Architecture:
- 1x1 Convolution
- Batch Normalization
- Bottleneck Residual Block
- Convolution
- Efficient Channel Attention
- Global Average Pooling
- Max Pooling
- ReLU
- Residual Block
- Residual Connection
- Softmax
- Squeeze-and-Excitation Block
Tasks:
- Image Classification
Training Techniques:
- SGD with Momentum
- Weight Decay
Training Data:
- ImageNet
ID: ecaresnet101d_pruned
Layers: 101
Crop Pct: '0.875'
Image Size: '224'
Interpolation: bicubic
Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/resnet.py#L1097
Weights: https://imvl-automl-sh.oss-cn-shanghai.aliyuncs.com/darts/hyperml/hyperml/job_45610/outputs/ECAResNet101D_P_75a3370e.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 80.82%
Top 5 Accuracy: 95.64%
- Name: ecaresnet50d
In Collection: ECAResNet
Metadata:
FLOPs: 5591090432
Parameters: 25580000
File Size: 102579290
Architecture:
- 1x1 Convolution
- Batch Normalization
- Bottleneck Residual Block
- Convolution
- Efficient Channel Attention
- Global Average Pooling
- Max Pooling
- ReLU
- Residual Block
- Residual Connection
- Softmax
- Squeeze-and-Excitation Block
Tasks:
- Image Classification
Training Techniques:
- SGD with Momentum
- Weight Decay
Training Data:
- ImageNet
Training Resources: 4x RTX 2080Ti GPUs
ID: ecaresnet50d
LR: 0.1
Epochs: 100
Layers: 50
Crop Pct: '0.875'
Batch Size: 256
Image Size: '224'
Weight Decay: 0.0001
Interpolation: bicubic
Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/resnet.py#L1045
Weights: https://imvl-automl-sh.oss-cn-shanghai.aliyuncs.com/darts/hyperml/hyperml/job_45402/outputs/ECAResNet50D_833caf58.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 80.61%
Top 5 Accuracy: 95.31%
- Name: ecaresnet50d_pruned
In Collection: ECAResNet
Metadata:
FLOPs: 3250730657
Parameters: 19940000
File Size: 79990436
Architecture:
- 1x1 Convolution
- Batch Normalization
- Bottleneck Residual Block
- Convolution
- Efficient Channel Attention
- Global Average Pooling
- Max Pooling
- ReLU
- Residual Block
- Residual Connection
- Softmax
- Squeeze-and-Excitation Block
Tasks:
- Image Classification
Training Techniques:
- SGD with Momentum
- Weight Decay
Training Data:
- ImageNet
ID: ecaresnet50d_pruned
Layers: 50
Crop Pct: '0.875'
Image Size: '224'
Interpolation: bicubic
Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/resnet.py#L1055
Weights: https://imvl-automl-sh.oss-cn-shanghai.aliyuncs.com/darts/hyperml/hyperml/job_45899/outputs/ECAResNet50D_P_9c67f710.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 79.71%
Top 5 Accuracy: 94.88%
- Name: ecaresnetlight
In Collection: ECAResNet
Metadata:
FLOPs: 5276118784
Parameters: 30160000
File Size: 120956612
Architecture:
- 1x1 Convolution
- Batch Normalization
- Bottleneck Residual Block
- Convolution
- Efficient Channel Attention
- Global Average Pooling
- Max Pooling
- ReLU
- Residual Block
- Residual Connection
- Softmax
- Squeeze-and-Excitation Block
Tasks:
- Image Classification
Training Techniques:
- SGD with Momentum
- Weight Decay
Training Data:
- ImageNet
ID: ecaresnetlight
Crop Pct: '0.875'
Image Size: '224'
Interpolation: bicubic
Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/resnet.py#L1077
Weights: https://imvl-automl-sh.oss-cn-shanghai.aliyuncs.com/darts/hyperml/hyperml/job_45402/outputs/ECAResNetLight_4f34b35b.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 80.46%
Top 5 Accuracy: 95.25%
-->
# Inception v4
**Inception-v4** is a convolutional neural network architecture that builds on previous iterations of the Inception family by simplifying the architecture and using more inception modules than [Inception-v3](https://paperswithcode.com/method/inception-v3).
## How do I use this model on an image?
To load a pretrained model:
```py
>>> import timm
>>> model = timm.create_model('inception_v4', pretrained=True)
>>> model.eval()
```
To load and preprocess the image:
```py
>>> import urllib
>>> from PIL import Image
>>> from timm.data import resolve_data_config
>>> from timm.data.transforms_factory import create_transform
>>> config = resolve_data_config({}, model=model)
>>> transform = create_transform(**config)
>>> url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg")
>>> urllib.request.urlretrieve(url, filename)
>>> img = Image.open(filename).convert('RGB')
>>> tensor = transform(img).unsqueeze(0) # transform and add batch dimension
```
To get the model predictions:
```py
>>> import torch
>>> with torch.inference_mode():
... out = model(tensor)
>>> probabilities = torch.nn.functional.softmax(out[0], dim=0)
>>> print(probabilities.shape)
>>> # prints: torch.Size([1000])
```
To get the top-5 predictions class names:
```py
>>> # Get imagenet class mappings
>>> url, filename = ("https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt", "imagenet_classes.txt")
>>> urllib.request.urlretrieve(url, filename)
>>> with open("imagenet_classes.txt", "r") as f:
... categories = [s.strip() for s in f.readlines()]
>>> # Print top categories per image
>>> top5_prob, top5_catid = torch.topk(probabilities, 5)
>>> for i in range(top5_prob.size(0)):
... print(categories[top5_catid[i]], top5_prob[i].item())
>>> # prints class names and probabilities like:
>>> # [('Samoyed', 0.6425196528434753), ('Pomeranian', 0.04062102362513542), ('keeshond', 0.03186424449086189), ('white wolf', 0.01739676296710968), ('Eskimo dog', 0.011717947199940681)]
```
Replace the model name with the variant you want to use, e.g. `inception_v4`. You can find the IDs in the model summaries at the top of this page.
To extract image features with this model, follow the [timm feature extraction examples](../feature_extraction), just change the name of the model you want to use.
## How do I finetune this model?
You can finetune any of the pre-trained models just by changing the classifier (the last layer).
```py
>>> model = timm.create_model('inception_v4', pretrained=True, num_classes=NUM_FINETUNE_CLASSES)
```
To finetune on your own dataset, you have to write a training loop or adapt [timm's training
script](https://github.com/rwightman/pytorch-image-models/blob/master/train.py) to use your dataset.
## How do I train this model?
You can follow the [timm recipe scripts](../training_script) for training a new model afresh.
## Citation
```BibTeX
@misc{szegedy2016inceptionv4,
title={Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning},
author={Christian Szegedy and Sergey Ioffe and Vincent Vanhoucke and Alex Alemi},
year={2016},
eprint={1602.07261},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
<!--
Type: model-index
Collections:
- Name: Inception v4
Paper:
Title: Inception-v4, Inception-ResNet and the Impact of Residual Connections on
Learning
URL: https://paperswithcode.com/paper/inception-v4-inception-resnet-and-the-impact
Models:
- Name: inception_v4
In Collection: Inception v4
Metadata:
FLOPs: 15806527936
Parameters: 42680000
File Size: 171082495
Architecture:
- Average Pooling
- Dropout
- Inception-A
- Inception-B
- Inception-C
- Reduction-A
- Reduction-B
- Softmax
Tasks:
- Image Classification
Training Techniques:
- Label Smoothing
- RMSProp
- Weight Decay
Training Data:
- ImageNet
Training Resources: 20x NVIDIA Kepler GPUs
ID: inception_v4
LR: 0.045
Dropout: 0.2
Crop Pct: '0.875'
Momentum: 0.9
Image Size: '299'
Interpolation: bicubic
Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/inception_v4.py#L313
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-cadene/inceptionv4-8e4777a0.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 1.01%
Top 5 Accuracy: 16.85%
-->
# ResNet-D
**ResNet-D** is a modification of the [ResNet](https://paperswithcode.com/method/resnet) architecture that uses an [average pooling](https://paperswithcode.com/method/average-pooling) tweak for downsampling. The motivation is that in the unmodified ResNet, the strided [1x1 convolution](https://paperswithcode.com/method/1x1-convolution) in the downsampling shortcut ignores 3/4 of the input feature map, so the shortcut is modified so that no information is discarded.
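The difference between the two shortcut paths can be shown in a few lines. This is an illustrative sketch (channel counts and sizes are arbitrary), not timm's code: the standard shortcut strides a 1x1 conv, sampling only one of every four spatial positions, while the ResNet-D shortcut averages over all positions first and then applies a stride-1 1x1 conv.

```py
import torch
import torch.nn as nn

# Standard ResNet shortcut: the stride-2 1x1 conv reads 1 of every 4
# spatial positions, discarding 3/4 of the input feature map.
downsample_b = nn.Conv2d(64, 128, kernel_size=1, stride=2, bias=False)

# ResNet-D shortcut: average pooling aggregates every position first,
# then a stride-1 1x1 conv adjusts the channel count -- nothing is skipped.
downsample_d = nn.Sequential(
    nn.AvgPool2d(kernel_size=2, stride=2),
    nn.Conv2d(64, 128, kernel_size=1, stride=1, bias=False),
)

x = torch.randn(1, 64, 56, 56)
print(downsample_b(x).shape, downsample_d(x).shape)
# both produce torch.Size([1, 128, 28, 28]); only the D path sees every input pixel
```

Both variants yield identically shaped outputs, so the tweak is a drop-in replacement for the original shortcut.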
## How do I use this model on an image?
To load a pretrained model:
```py
>>> import timm
>>> model = timm.create_model('resnet101d', pretrained=True)
>>> model.eval()
```
To load and preprocess the image:
```py
>>> import urllib
>>> from PIL import Image
>>> from timm.data import resolve_data_config
>>> from timm.data.transforms_factory import create_transform
>>> config = resolve_data_config({}, model=model)
>>> transform = create_transform(**config)
>>> url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg")
>>> urllib.request.urlretrieve(url, filename)
>>> img = Image.open(filename).convert('RGB')
>>> tensor = transform(img).unsqueeze(0) # transform and add batch dimension
```
To get the model predictions:
```py
>>> import torch
>>> with torch.inference_mode():
... out = model(tensor)
>>> probabilities = torch.nn.functional.softmax(out[0], dim=0)
>>> print(probabilities.shape)
>>> # prints: torch.Size([1000])
```
To get the top-5 predictions class names:
```py
>>> # Get imagenet class mappings
>>> url, filename = ("https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt", "imagenet_classes.txt")
>>> urllib.request.urlretrieve(url, filename)
>>> with open("imagenet_classes.txt", "r") as f:
... categories = [s.strip() for s in f.readlines()]
>>> # Print top categories per image
>>> top5_prob, top5_catid = torch.topk(probabilities, 5)
>>> for i in range(top5_prob.size(0)):
... print(categories[top5_catid[i]], top5_prob[i].item())
>>> # prints class names and probabilities like:
>>> # [('Samoyed', 0.6425196528434753), ('Pomeranian', 0.04062102362513542), ('keeshond', 0.03186424449086189), ('white wolf', 0.01739676296710968), ('Eskimo dog', 0.011717947199940681)]
```
Replace the model name with the variant you want to use, e.g. `resnet101d`. You can find the IDs in the model summaries at the top of this page.
To extract image features with this model, follow the [timm feature extraction examples](../feature_extraction), just change the name of the model you want to use.
## How do I finetune this model?
You can finetune any of the pre-trained models just by changing the classifier (the last layer).
```py
>>> model = timm.create_model('resnet101d', pretrained=True, num_classes=NUM_FINETUNE_CLASSES)
```
To finetune on your own dataset, you have to write a training loop or adapt [timm's training
script](https://github.com/rwightman/pytorch-image-models/blob/master/train.py) to use your dataset.
## How do I train this model?
You can follow the [timm recipe scripts](../training_script) for training a new model afresh.
## Citation
```BibTeX
@misc{he2018bag,
title={Bag of Tricks for Image Classification with Convolutional Neural Networks},
author={Tong He and Zhi Zhang and Hang Zhang and Zhongyue Zhang and Junyuan Xie and Mu Li},
year={2018},
eprint={1812.01187},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
<!--
Type: model-index
Collections:
- Name: ResNet-D
Paper:
Title: Bag of Tricks for Image Classification with Convolutional Neural Networks
URL: https://paperswithcode.com/paper/bag-of-tricks-for-image-classification-with
Models:
- Name: resnet101d
In Collection: ResNet-D
Metadata:
FLOPs: 13805639680
Parameters: 44570000
File Size: 178791263
Architecture:
- 1x1 Convolution
- Batch Normalization
- Bottleneck Residual Block
- Convolution
- Global Average Pooling
- Max Pooling
- ReLU
- Residual Block
- Residual Connection
- Softmax
Tasks:
- Image Classification
Training Data:
- ImageNet
ID: resnet101d
Crop Pct: '0.94'
Image Size: '256'
Interpolation: bicubic
Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnet.py#L716
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet101d_ra2-2803ffab.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 82.31%
Top 5 Accuracy: 96.06%
- Name: resnet152d
In Collection: ResNet-D
Metadata:
FLOPs: 20155275264
Parameters: 60210000
File Size: 241596837
Architecture:
- 1x1 Convolution
- Batch Normalization
- Bottleneck Residual Block
- Convolution
- Global Average Pooling
- Max Pooling
- ReLU
- Residual Block
- Residual Connection
- Softmax
Tasks:
- Image Classification
Training Data:
- ImageNet
ID: resnet152d
Crop Pct: '0.94'
Image Size: '256'
Interpolation: bicubic
Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnet.py#L724
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet152d_ra2-5cac0439.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 83.13%
Top 5 Accuracy: 96.35%
- Name: resnet18d
In Collection: ResNet-D
Metadata:
FLOPs: 2645205760
Parameters: 11710000
File Size: 46893231
Architecture:
- 1x1 Convolution
- Batch Normalization
- Bottleneck Residual Block
- Convolution
- Global Average Pooling
- Max Pooling
- ReLU
- Residual Block
- Residual Connection
- Softmax
Tasks:
- Image Classification
Training Data:
- ImageNet
ID: resnet18d
Crop Pct: '0.875'
Image Size: '224'
Interpolation: bicubic
Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnet.py#L649
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet18d_ra2-48a79e06.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 72.27%
Top 5 Accuracy: 90.69%
- Name: resnet200d
In Collection: ResNet-D
Metadata:
FLOPs: 26034378752
Parameters: 64690000
File Size: 259662933
Architecture:
- 1x1 Convolution
- Batch Normalization
- Bottleneck Residual Block
- Convolution
- Global Average Pooling
- Max Pooling
- ReLU
- Residual Block
- Residual Connection
- Softmax
Tasks:
- Image Classification
Training Data:
- ImageNet
ID: resnet200d
Crop Pct: '0.94'
Image Size: '256'
Interpolation: bicubic
Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnet.py#L749
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet200d_ra2-bdba9bf9.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 83.24%
Top 5 Accuracy: 96.49%
- Name: resnet26d
In Collection: ResNet-D
Metadata:
FLOPs: 3335276032
Parameters: 16010000
File Size: 64209122
Architecture:
- 1x1 Convolution
- Batch Normalization
- Bottleneck Residual Block
- Convolution
- Global Average Pooling
- Max Pooling
- ReLU
- Residual Block
- Residual Connection
- Softmax
Tasks:
- Image Classification
Training Data:
- ImageNet
ID: resnet26d
Crop Pct: '0.875'
Image Size: '224'
Interpolation: bicubic
Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnet.py#L683
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet26d-69e92c46.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 76.69%
Top 5 Accuracy: 93.15%
- Name: resnet34d
In Collection: ResNet-D
Metadata:
FLOPs: 5026601728
Parameters: 21820000
File Size: 87369807
Architecture:
- 1x1 Convolution
- Batch Normalization
- Bottleneck Residual Block
- Convolution
- Global Average Pooling
- Max Pooling
- ReLU
- Residual Block
- Residual Connection
- Softmax
Tasks:
- Image Classification
Training Data:
- ImageNet
ID: resnet34d
Crop Pct: '0.875'
Image Size: '224'
Interpolation: bicubic
Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnet.py#L666
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet34d_ra2-f8dcfcaf.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 77.11%
Top 5 Accuracy: 93.38%
- Name: resnet50d
In Collection: ResNet-D
Metadata:
FLOPs: 5591002624
Parameters: 25580000
File Size: 102567109
Architecture:
- 1x1 Convolution
- Batch Normalization
- Bottleneck Residual Block
- Convolution
- Global Average Pooling
- Max Pooling
- ReLU
- Residual Block
- Residual Connection
- Softmax
Tasks:
- Image Classification
Training Data:
- ImageNet
ID: resnet50d
Crop Pct: '0.875'
Image Size: '224'
Interpolation: bicubic
Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnet.py#L699
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet50d_ra2-464e36ba.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 80.55%
Top 5 Accuracy: 95.16%
-->
# (Tensorflow) Inception v3
**Inception v3** is a convolutional neural network architecture from the Inception family that makes several improvements including using [Label Smoothing](https://paperswithcode.com/method/label-smoothing), factorized 7 x 7 convolutions, and the use of an [auxiliary classifier](https://paperswithcode.com/method/auxiliary-classifier) to propagate label information lower down the network (along with the use of batch normalization for layers in the side head). The key building block is an [Inception Module](https://paperswithcode.com/method/inception-v3-module).
The weights from this model were ported from [Tensorflow/Models](https://github.com/tensorflow/models).
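The label smoothing trick mentioned above is easy to sketch directly: with smoothing factor `eps`, the one-hot target gives up `eps` of its mass, spread uniformly over all classes. A minimal illustration (the helper name `smoothed_targets` is ours; `eps=0.1` is a commonly used setting, not something fixed by this model):

```python
import torch

def smoothed_targets(labels: torch.Tensor, num_classes: int, eps: float = 0.1) -> torch.Tensor:
    """Soft targets: 1 - eps + eps/K on the true class, eps/K everywhere else."""
    off = eps / num_classes
    on = 1.0 - eps + off
    t = torch.full((labels.size(0), num_classes), off)
    t.scatter_(1, labels.unsqueeze(1), on)
    return t

targets = smoothed_targets(torch.tensor([2]), num_classes=4, eps=0.1)
print(targets)  # tensor([[0.0250, 0.0250, 0.9250, 0.0250]])
```

Each row still sums to 1, so the smoothed targets remain a valid distribution for cross-entropy.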
## How do I use this model on an image?
To load a pretrained model:
```py
>>> import timm
>>> model = timm.create_model('tf_inception_v3', pretrained=True)
>>> model.eval()
```
To load and preprocess the image:
```py
>>> import urllib
>>> from PIL import Image
>>> from timm.data import resolve_data_config
>>> from timm.data.transforms_factory import create_transform
>>> config = resolve_data_config({}, model=model)
>>> transform = create_transform(**config)
>>> url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg")
>>> urllib.request.urlretrieve(url, filename)
>>> img = Image.open(filename).convert('RGB')
>>> tensor = transform(img).unsqueeze(0) # transform and add batch dimension
```
To get the model predictions:
```py
>>> import torch
>>> with torch.inference_mode():
... out = model(tensor)
>>> probabilities = torch.nn.functional.softmax(out[0], dim=0)
>>> print(probabilities.shape)
>>> # prints: torch.Size([1000])
```
To get the top-5 predictions class names:
```py
>>> # Get imagenet class mappings
>>> url, filename = ("https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt", "imagenet_classes.txt")
>>> urllib.request.urlretrieve(url, filename)
>>> with open("imagenet_classes.txt", "r") as f:
... categories = [s.strip() for s in f.readlines()]
>>> # Print top categories per image
>>> top5_prob, top5_catid = torch.topk(probabilities, 5)
>>> for i in range(top5_prob.size(0)):
... print(categories[top5_catid[i]], top5_prob[i].item())
>>> # prints class names and probabilities like:
>>> # [('Samoyed', 0.6425196528434753), ('Pomeranian', 0.04062102362513542), ('keeshond', 0.03186424449086189), ('white wolf', 0.01739676296710968), ('Eskimo dog', 0.011717947199940681)]
```
Replace the model name with the variant you want to use, e.g. `tf_inception_v3`. You can find the IDs in the model summaries at the top of this page.
To extract image features with this model, follow the [timm feature extraction examples](../feature_extraction), just change the name of the model you want to use.
## How do I finetune this model?
You can finetune any of the pre-trained models just by changing the classifier (the last layer).
```py
>>> model = timm.create_model('tf_inception_v3', pretrained=True, num_classes=NUM_FINETUNE_CLASSES)
```
To finetune on your own dataset, you have to write a training loop or adapt [timm's training
script](https://github.com/rwightman/pytorch-image-models/blob/master/train.py) to use your dataset.
## How do I train this model?
You can follow the [timm recipe scripts](../training_script) for training a new model afresh.
## Citation
```BibTeX
@article{DBLP:journals/corr/SzegedyVISW15,
author = {Christian Szegedy and
Vincent Vanhoucke and
Sergey Ioffe and
Jonathon Shlens and
Zbigniew Wojna},
title = {Rethinking the Inception Architecture for Computer Vision},
journal = {CoRR},
volume = {abs/1512.00567},
year = {2015},
url = {http://arxiv.org/abs/1512.00567},
archivePrefix = {arXiv},
eprint = {1512.00567},
timestamp = {Mon, 13 Aug 2018 16:49:07 +0200},
biburl = {https://dblp.org/rec/journals/corr/SzegedyVISW15.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<!--
Type: model-index
Collections:
- Name: TF Inception v3
Paper:
Title: Rethinking the Inception Architecture for Computer Vision
URL: https://paperswithcode.com/paper/rethinking-the-inception-architecture-for
Models:
- Name: tf_inception_v3
In Collection: TF Inception v3
Metadata:
FLOPs: 7352418880
Parameters: 23830000
File Size: 95549439
Architecture:
- 1x1 Convolution
- Auxiliary Classifier
- Average Pooling
- Average Pooling
- Batch Normalization
- Convolution
- Dense Connections
- Dropout
- Inception-v3 Module
- Max Pooling
- ReLU
- Softmax
Tasks:
- Image Classification
Training Techniques:
- Gradient Clipping
- Label Smoothing
- RMSProp
- Weight Decay
Training Data:
- ImageNet
Training Resources: 50x NVIDIA Kepler GPUs
ID: tf_inception_v3
LR: 0.045
Dropout: 0.2
Crop Pct: '0.875'
Momentum: 0.9
Image Size: '299'
Interpolation: bicubic
Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/inception_v3.py#L449
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_inception_v3-e0069de4.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 77.87%
Top 5 Accuracy: 93.65%
-->
""" ONNX-runtime validation script
This script was created to verify accuracy and performance of exported ONNX
models running with the onnxruntime. It utilizes the PyTorch dataloader/processing
pipeline for a fair comparison against the originals.
Copyright 2020 Ross Wightman
"""
import argparse
import numpy as np
import onnxruntime
from timm.data import create_loader, resolve_data_config, create_dataset
from timm.utils import AverageMeter
import time
parser = argparse.ArgumentParser(description='ONNX Validation')
parser.add_argument('data', metavar='DIR',
help='path to dataset')
parser.add_argument('--onnx-input', default='', type=str, metavar='PATH',
help='path to onnx model/weights file')
parser.add_argument('--onnx-output-opt', default='', type=str, metavar='PATH',
help='path to output optimized onnx graph')
parser.add_argument('--profile', action='store_true', default=False,
help='Enable profiler output.')
parser.add_argument('-j', '--workers', default=2, type=int, metavar='N',
help='number of data loading workers (default: 2)')
parser.add_argument('-b', '--batch-size', default=256, type=int,
metavar='N', help='mini-batch size (default: 256)')
parser.add_argument('--img-size', default=None, type=int,
metavar='N', help='Input image dimension, uses model default if empty')
parser.add_argument('--mean', type=float, nargs='+', default=None, metavar='MEAN',
help='Override mean pixel value of dataset')
parser.add_argument('--std', type=float, nargs='+', default=None, metavar='STD',
help='Override std deviation of dataset')
parser.add_argument('--crop-pct', type=float, default=None, metavar='PCT',
help='Override default crop pct of 0.875')
parser.add_argument('--interpolation', default='', type=str, metavar='NAME',
help='Image resize interpolation type (overrides model)')
parser.add_argument('--print-freq', '-p', default=10, type=int,
metavar='N', help='print frequency (default: 10)')
def main():
args = parser.parse_args()
args.gpu_id = 0
# Set graph optimization level
sess_options = onnxruntime.SessionOptions()
sess_options.graph_optimization_level = onnxruntime.GraphOptimizationLevel.ORT_ENABLE_ALL
if args.profile:
sess_options.enable_profiling = True
if args.onnx_output_opt:
sess_options.optimized_model_filepath = args.onnx_output_opt
session = onnxruntime.InferenceSession(args.onnx_input, sess_options)
data_config = resolve_data_config(vars(args))
loader = create_loader(
create_dataset('', args.data),
input_size=data_config['input_size'],
batch_size=args.batch_size,
use_prefetcher=False,
interpolation=data_config['interpolation'],
mean=data_config['mean'],
std=data_config['std'],
num_workers=args.workers,
crop_pct=data_config['crop_pct']
)
input_name = session.get_inputs()[0].name
batch_time = AverageMeter()
top1 = AverageMeter()
top5 = AverageMeter()
end = time.time()
for i, (input, target) in enumerate(loader):
# run the net and return prediction
output = session.run([], {input_name: input.data.numpy()})
output = output[0]
# measure accuracy and record loss
prec1, prec5 = accuracy_np(output, target.numpy())
top1.update(prec1.item(), input.size(0))
top5.update(prec5.item(), input.size(0))
# measure elapsed time
batch_time.update(time.time() - end)
end = time.time()
if i % args.print_freq == 0:
print(
f'Test: [{i}/{len(loader)}]\t'
f'Time {batch_time.val:.3f} ({batch_time.avg:.3f}, {input.size(0) / batch_time.avg:.3f}/s, '
f'{1000 * batch_time.avg / input.size(0):.3f} ms/sample) \t'
f'Prec@1 {top1.val:.3f} ({top1.avg:.3f})\t'
f'Prec@5 {top5.val:.3f} ({top5.avg:.3f})'
)
print(f' * Prec@1 {top1.avg:.3f} ({100-top1.avg:.3f}) Prec@5 {top5.avg:.3f} ({100.-top5.avg:.3f})')
def accuracy_np(output, target):
max_indices = np.argsort(output, axis=1)[:, ::-1]
top5 = 100 * np.equal(max_indices[:, :5], target[:, np.newaxis]).sum(axis=1).mean()
top1 = 100 * np.equal(max_indices[:, 0], target).mean()
return top1, top5
if __name__ == '__main__':
main()
"""Run tests for all models
Tests that run on CI should have a specific marker, e.g. @pytest.mark.base. This
marker is used to parallelize the CI runs, with one runner for each marker.
If new tests are added, ensure that they use one of the existing markers
(documented in pyproject.toml > pytest > markers) or that a new marker is added
for this set of tests. If using a new marker, adjust the test matrix in
.github/workflows/tests.yml to run tests with this new marker, otherwise the
tests will be skipped on CI.
"""
import pytest
import torch
import platform
import os
import fnmatch
_IS_MAC = platform.system() == 'Darwin'
try:
from torchvision.models.feature_extraction import create_feature_extractor, get_graph_node_names, NodePathTracer
has_fx_feature_extraction = True
except ImportError:
has_fx_feature_extraction = False
import timm
from timm import list_models, list_pretrained, create_model, set_scriptable, get_pretrained_cfg_value
from timm.layers import Format, get_spatial_dim, get_channel_dim
from timm.models import get_notrace_modules, get_notrace_functions
import importlib
import os
torch_backend = os.environ.get('TORCH_BACKEND')
if torch_backend is not None:
importlib.import_module(torch_backend)
torch_device = os.environ.get('TORCH_DEVICE', 'cpu')
timeout = os.environ.get('TIMEOUT')
timeout120 = int(timeout) if timeout else 120
timeout240 = int(timeout) if timeout else 240
timeout360 = int(timeout) if timeout else 360
if hasattr(torch._C, '_jit_set_profiling_executor'):
# legacy executor is too slow to compile large models for unit tests
# no need for the fusion performance here
torch._C._jit_set_profiling_executor(True)
torch._C._jit_set_profiling_mode(False)
# models with forward_intermediates() and support for FeatureGetterNet features_only wrapper
FEAT_INTER_FILTERS = [
'vision_transformer', 'vision_transformer_sam', 'vision_transformer_hybrid', 'vision_transformer_relpos',
'beit', 'mvitv2', 'eva', 'cait', 'xcit', 'volo', 'twins', 'deit', 'swin_transformer', 'swin_transformer_v2',
'swin_transformer_v2_cr', 'maxxvit', 'efficientnet', 'mobilenetv3', 'levit', 'efficientformer', 'resnet',
'regnet', 'byobnet', 'byoanet', 'mlp_mixer', 'hiera', 'fastvit', 'hieradet_sam2', 'aimv2*', 'tnt',
'tiny_vit', 'vovnet', 'tresnet', 'rexnet', 'resnetv2', 'repghost', 'repvit', 'pvt_v2', 'nextvit', 'nest',
'mambaout', 'inception_next', 'inception_v4', 'hgnet', 'gcvit', 'focalnet', 'efficientformer_v2', 'edgenext',
'davit', 'rdnet', 'convnext', 'pit', 'starnet', 'shvit', 'fasternet', 'swiftformer', 'ghostnet', 'naflexvit'
]
# transformer / hybrid models don't support full set of spatial / feature APIs and/or have spatial output.
NON_STD_FILTERS = [
'vit_*', 'naflexvit*', 'tnt_*', 'pit_*', 'coat_*', 'cait_*', '*mixer_*', 'gmlp_*', 'resmlp_*', 'twins_*',
'convit_*', 'levit*', 'visformer*', 'deit*', 'xcit_*', 'crossvit_*', 'beit*', 'aimv2*', 'swiftformer_*',
'poolformer_*', 'volo_*', 'sequencer2d_*', 'mvitv2*', 'gcvit*', 'efficientformer*', 'sam_hiera*',
'eva_*', 'flexivit*', 'eva02*', 'samvit_*', 'efficientvit_m*', 'tiny_vit_*', 'hiera_*', 'vitamin*', 'test_vit*',
]
NUM_NON_STD = len(NON_STD_FILTERS)
# exclude models that cause specific test failures
if 'GITHUB_ACTIONS' in os.environ:
# GitHub Linux runner is slower and hits memory limits sooner than MacOS, exclude bigger models
EXCLUDE_FILTERS = [
'*efficientnet_l2*', '*resnext101_32x48d', '*in21k', '*152x4_bitm', '*101x3_bitm', '*50x3_bitm',
'*nfnet_f3*', '*nfnet_f4*', '*nfnet_f5*', '*nfnet_f6*', '*nfnet_f7*', '*efficientnetv2_xl*',
'*resnetrs350*', '*resnetrs420*', 'xcit_large_24_p8*', '*huge*', '*giant*', '*gigantic*',
'*enormous*', 'maxvit_xlarge*', 'regnet*1280', 'regnet*2560', '*_1b_*', '*_3b_*']
NON_STD_EXCLUDE_FILTERS = ['*huge*', '*giant*', '*gigantic*', '*enormous*', '*_1b_*', '*_3b_*']
else:
EXCLUDE_FILTERS = ['*enormous*']
NON_STD_EXCLUDE_FILTERS = ['*gigantic*', '*enormous*', '*_3b_*']
EXCLUDE_JIT_FILTERS = ['hiera_*', '*naflex*']
TARGET_FWD_SIZE = MAX_FWD_SIZE = 384
TARGET_BWD_SIZE = 128
MAX_BWD_SIZE = 320
MAX_FWD_OUT_SIZE = 448
TARGET_JIT_SIZE = 128
MAX_JIT_SIZE = 320
TARGET_FFEAT_SIZE = 96
MAX_FFEAT_SIZE = 256
TARGET_FWD_FX_SIZE = 128
MAX_FWD_FX_SIZE = 256
TARGET_BWD_FX_SIZE = 128
MAX_BWD_FX_SIZE = 224
def _get_input_size(model=None, model_name='', target=None):
if model is None:
assert model_name, "One of model or model_name must be provided"
input_size = get_pretrained_cfg_value(model_name, 'input_size')
fixed_input_size = get_pretrained_cfg_value(model_name, 'fixed_input_size')
min_input_size = get_pretrained_cfg_value(model_name, 'min_input_size')
else:
default_cfg = model.default_cfg
input_size = default_cfg['input_size']
fixed_input_size = default_cfg.get('fixed_input_size', None)
min_input_size = default_cfg.get('min_input_size', None)
assert input_size is not None
if fixed_input_size:
return input_size
if min_input_size:
if target and max(input_size) > target:
input_size = min_input_size
else:
if target and max(input_size) > target:
input_size = tuple([min(x, target) for x in input_size])
return input_size
@pytest.mark.base
@pytest.mark.timeout(timeout240)
@pytest.mark.parametrize('model_name', list_pretrained('test_*'))
@pytest.mark.parametrize('batch_size', [1])
def test_model_inference(model_name, batch_size):
"""Run a single forward pass with each model"""
from PIL import Image
from huggingface_hub import snapshot_download
import tempfile
import safetensors
model = create_model(model_name, pretrained=True)
model.eval()
pp = timm.data.create_transform(**timm.data.resolve_data_config(model=model))
with tempfile.TemporaryDirectory() as temp_dir:
snapshot_download(
repo_id='timm/' + model_name, repo_type='model', local_dir=temp_dir, allow_patterns='test/*'
)
rand_tensors = safetensors.torch.load_file(os.path.join(temp_dir, 'test', 'rand_tensors.safetensors'))
owl_tensors = safetensors.torch.load_file(os.path.join(temp_dir, 'test', 'owl_tensors.safetensors'))
test_owl = Image.open(os.path.join(temp_dir, 'test', 'test_owl.jpg'))
with torch.inference_mode():
rand_output = model(rand_tensors['input'])
rand_features = model.forward_features(rand_tensors['input'])
rand_pre_logits = model.forward_head(rand_features, pre_logits=True)
assert torch.allclose(rand_output, rand_tensors['output'], rtol=1e-3, atol=1e-4), 'rand output does not match'
assert torch.allclose(rand_features, rand_tensors['features'], rtol=1e-3, atol=1e-4), 'rand features do not match'
assert torch.allclose(rand_pre_logits, rand_tensors['pre_logits'], rtol=1e-3, atol=1e-4), 'rand pre_logits do not match'
def _test_owl(owl_input, tol=(1e-3, 1e-4)):
owl_output = model(owl_input)
owl_features = model.forward_features(owl_input)
owl_pre_logits = model.forward_head(owl_features.clone(), pre_logits=True)
assert owl_output.softmax(1).argmax(1) == 24 # owl
assert torch.allclose(owl_output, owl_tensors['output'], rtol=tol[0], atol=tol[1]), 'owl output does not match'
assert torch.allclose(owl_features, owl_tensors['features'], rtol=tol[0], atol=tol[1]), 'owl output does not match'
assert torch.allclose(owl_pre_logits, owl_tensors['pre_logits'], rtol=tol[0], atol=tol[1]), 'owl output does not match'
_test_owl(owl_tensors['input']) # test with original pp owl tensor
_test_owl(pp(test_owl).unsqueeze(0), tol=(1e-1, 1e-1)) # re-process from original jpg; Pillow output can vary a lot between versions
@pytest.mark.base
@pytest.mark.timeout(timeout120)
@pytest.mark.parametrize('model_name', list_models(exclude_filters=EXCLUDE_FILTERS))
@pytest.mark.parametrize('batch_size', [1])
def test_model_forward(model_name, batch_size):
"""Run a single forward pass with each model"""
model = create_model(model_name, pretrained=False)
model.eval()
input_size = _get_input_size(model=model, target=TARGET_FWD_SIZE)
if max(input_size) > MAX_FWD_SIZE:
pytest.skip("Fixed input size model > limit.")
inputs = torch.randn((batch_size, *input_size))
inputs = inputs.to(torch_device)
model.to(torch_device)
outputs = model(inputs)
assert outputs.shape[0] == batch_size
assert not torch.isnan(outputs).any(), 'Output included NaNs'
# Test that grad-checkpointing, if supported, doesn't cause model failures or change in output
try:
model.set_grad_checkpointing()
except:
# throws if not supported, that's fine
pass
else:
outputs2 = model(inputs)
if isinstance(outputs, tuple):
outputs2 = torch.cat(outputs2)
assert torch.allclose(outputs, outputs2, rtol=1e-4, atol=1e-5), 'Output does not match'
@pytest.mark.base
@pytest.mark.timeout(timeout120)
@pytest.mark.parametrize('model_name', list_models(exclude_filters=EXCLUDE_FILTERS, name_matches_cfg=True))
@pytest.mark.parametrize('batch_size', [2])
def test_model_backward(model_name, batch_size):
"""Run a single forward pass with each model"""
input_size = _get_input_size(model_name=model_name, target=TARGET_BWD_SIZE)
if max(input_size) > MAX_BWD_SIZE:
pytest.skip("Fixed input size model > limit.")
model = create_model(model_name, pretrained=False, num_classes=42)
encoder_only = model.num_classes == 0 # FIXME better approach?
num_params = sum([x.numel() for x in model.parameters()])
model.train()
inputs = torch.randn((batch_size, *input_size))
inputs = inputs.to(torch_device)
model.to(torch_device)
outputs = model(inputs)
if isinstance(outputs, tuple):
outputs = torch.cat(outputs)
outputs.mean().backward()
for n, x in model.named_parameters():
assert x.grad is not None, f'No gradient for {n}'
num_grad = sum([x.grad.numel() for x in model.parameters() if x.grad is not None])
if encoder_only:
output_fmt = getattr(model, 'output_fmt', 'NCHW')
feat_axis = get_channel_dim(output_fmt)
assert outputs.shape[feat_axis] == model.num_features, f'unpooled feature dim {outputs.shape[feat_axis]} != model.num_features {model.num_features}'
else:
assert outputs.shape[-1] == 42
assert num_params == num_grad, 'Some parameters are missing gradients'
assert not torch.isnan(outputs).any(), 'Output included NaNs'
# models with extra conv/linear layers after pooling
EARLY_POOL_MODELS = (
timm.models.EfficientVit,
timm.models.EfficientVitLarge,
timm.models.FasterNet,
timm.models.HighPerfGpuNet,
timm.models.GhostNet,
timm.models.MetaNeXt, # InceptionNeXt
timm.models.MobileNetV3,
timm.models.RepGhostNet,
timm.models.VGG,
)
@pytest.mark.cfg
@pytest.mark.timeout(timeout360)
@pytest.mark.parametrize('model_name', list_models(
exclude_filters=EXCLUDE_FILTERS + NON_STD_FILTERS, include_tags=True))
@pytest.mark.parametrize('batch_size', [1])
def test_model_default_cfgs(model_name, batch_size):
"""Run a single forward pass with each model"""
model = create_model(model_name, pretrained=False)
model.eval()
model.to(torch_device)
assert getattr(model, 'num_classes') >= 0
assert getattr(model, 'num_features') > 0
assert getattr(model, 'head_hidden_size') > 0
state_dict = model.state_dict()
cfg = model.default_cfg
pool_size = cfg['pool_size']
input_size = model.default_cfg['input_size']
output_fmt = getattr(model, 'output_fmt', 'NCHW')
spatial_axis = get_spatial_dim(output_fmt)
assert len(spatial_axis) == 2 # TODO add 1D sequence support
feat_axis = get_channel_dim(output_fmt)
if all([x <= MAX_FWD_OUT_SIZE for x in input_size]) and \
not any([fnmatch.fnmatch(model_name, x) for x in EXCLUDE_FILTERS]):
# output sizes only checked if default res <= 448 * 448 to keep resource down
input_size = tuple([min(x, MAX_FWD_OUT_SIZE) for x in input_size])
input_tensor = torch.randn((batch_size, *input_size), device=torch_device)
# test forward_features (always unpooled) & forward_head w/ pre_logits
outputs = model.forward_features(input_tensor)
outputs_pre = model.forward_head(outputs, pre_logits=True)
assert outputs.shape[spatial_axis[0]] == pool_size[0], f'unpooled feature shape {outputs.shape} != config'
assert outputs.shape[spatial_axis[1]] == pool_size[1], f'unpooled feature shape {outputs.shape} != config'
assert outputs.shape[feat_axis] == model.num_features, f'unpooled feature dim {outputs.shape[feat_axis]} != model.num_features {model.num_features}'
assert outputs_pre.shape[1] == model.head_hidden_size, f'pre_logits feature dim {outputs_pre.shape[1]} != model.head_hidden_size {model.head_hidden_size}'
# test forward after deleting the classifier, output should be pooled, size(-1) == model.num_features
model.reset_classifier(0)
assert model.num_classes == 0, f'Expected num_classes to be 0 after reset_classifier(0), but got {model.num_classes}'
model.to(torch_device)
outputs = model.forward(input_tensor)
assert len(outputs.shape) == 2
assert outputs.shape[1] == model.head_hidden_size, f'feature dim w/ removed classifier {outputs.shape[1]} != model.head_hidden_size {model.head_hidden_size}'
assert outputs.shape == outputs_pre.shape, f'output shape of pre_logits {outputs_pre.shape} does not match reset_head(0) {outputs.shape}'
# test model forward after removing pooling and classifier
if not isinstance(model, EARLY_POOL_MODELS):
model.reset_classifier(0, '') # reset classifier and disable global pooling
model.to(torch_device)
outputs = model.forward(input_tensor)
assert len(outputs.shape) == 4
assert outputs.shape[spatial_axis[0]] == pool_size[0] and outputs.shape[spatial_axis[1]] == pool_size[1]
# test classifier + global pool deletion via __init__
if 'pruned' not in model_name and not isinstance(model, EARLY_POOL_MODELS):
model = create_model(model_name, pretrained=False, num_classes=0, global_pool='').eval()
model.to(torch_device)
outputs = model.forward(input_tensor)
assert len(outputs.shape) == 4
assert outputs.shape[spatial_axis[0]] == pool_size[0] and outputs.shape[spatial_axis[1]] == pool_size[1]
# check classifier name matches default_cfg
if cfg.get('num_classes', None):
classifier = cfg['classifier']
if not isinstance(classifier, (tuple, list)):
classifier = classifier,
for c in classifier:
assert c + ".weight" in state_dict.keys(), f'{c} not in model params'
# check first conv(s) names match default_cfg
first_conv = cfg['first_conv']
if isinstance(first_conv, str):
first_conv = (first_conv,)
assert isinstance(first_conv, (tuple, list))
for fc in first_conv:
assert fc + ".weight" in state_dict.keys(), f'{fc} not in model params'
@pytest.mark.cfg
@pytest.mark.timeout(timeout360)
@pytest.mark.parametrize('model_name', list_models(filter=NON_STD_FILTERS, exclude_filters=NON_STD_EXCLUDE_FILTERS, include_tags=True))
@pytest.mark.parametrize('batch_size', [1])
def test_model_default_cfgs_non_std(model_name, batch_size):
"""Run a single forward pass with each model"""
model = create_model(model_name, pretrained=False)
model.eval()
model.to(torch_device)
assert getattr(model, 'num_classes') >= 0
assert getattr(model, 'num_features') > 0
assert getattr(model, 'head_hidden_size') > 0
state_dict = model.state_dict()
cfg = model.default_cfg
input_size = _get_input_size(model=model)
if max(input_size) > 320: # FIXME const
pytest.skip("Fixed input size model > limit.")
input_tensor = torch.randn((batch_size, *input_size), device=torch_device)
feat_dim = getattr(model, 'feature_dim', None)
outputs = model.forward_features(input_tensor)
outputs_pre = model.forward_head(outputs, pre_logits=True)
if isinstance(outputs, (tuple, list)):
# cannot currently verify multi-tensor output.
pass
else:
if feat_dim is None:
feat_dim = -1 if outputs.ndim == 3 else 1
assert outputs.shape[feat_dim] == model.num_features
assert outputs_pre.shape[1] == model.head_hidden_size
# test forward after deleting the classifier, output should be pooled, size(-1) == model.num_features
model.reset_classifier(0)
assert model.num_classes == 0, f'Expected num_classes to be 0 after reset_classifier(0), but got {model.num_classes}'
model.to(torch_device)
outputs = model.forward(input_tensor)
if isinstance(outputs, (tuple, list)):
outputs = outputs[0]
if feat_dim is None:
feat_dim = -1 if outputs.ndim == 3 else 1
assert outputs.shape[feat_dim] == model.head_hidden_size, 'pooled num_features != config'
assert outputs.shape == outputs_pre.shape
model = create_model(model_name, pretrained=False, num_classes=0).eval()
model.to(torch_device)
outputs = model.forward(input_tensor)
if isinstance(outputs, (tuple, list)):
outputs = outputs[0]
if feat_dim is None:
feat_dim = -1 if outputs.ndim == 3 else 1
assert outputs.shape[feat_dim] == model.num_features
# check classifier name matches default_cfg
if cfg.get('num_classes', None):
classifier = cfg['classifier']
if not isinstance(classifier, (tuple, list)):
classifier = classifier,
for c in classifier:
assert c + ".weight" in state_dict.keys(), f'{c} not in model params'
# check first conv(s) names match default_cfg
first_conv = cfg['first_conv']
if isinstance(first_conv, str):
first_conv = (first_conv,)
assert isinstance(first_conv, (tuple, list))
for fc in first_conv:
assert fc + ".weight" in state_dict.keys(), f'{fc} not in model params'
if 'GITHUB_ACTIONS' not in os.environ:
@pytest.mark.timeout(240)
@pytest.mark.parametrize('model_name', list_models(pretrained=True))
@pytest.mark.parametrize('batch_size', [1])
def test_model_load_pretrained(model_name, batch_size):
"""Create that pretrained weights load, verify support for in_chans != 3 while doing so."""
in_chans = 3 if 'pruned' in model_name else 1 # pruning not currently supported with in_chans change
create_model(model_name, pretrained=True, in_chans=in_chans, num_classes=5)
create_model(model_name, pretrained=True, in_chans=in_chans, num_classes=0)
@pytest.mark.timeout(240)
@pytest.mark.parametrize('model_name', list_models(pretrained=True, exclude_filters=NON_STD_FILTERS))
@pytest.mark.parametrize('batch_size', [1])
def test_model_features_pretrained(model_name, batch_size):
"""Create that pretrained weights load when features_only==True."""
create_model(model_name, pretrained=True, features_only=True)
@pytest.mark.torchscript
@pytest.mark.timeout(timeout120)
@pytest.mark.parametrize(
'model_name', list_models(exclude_filters=EXCLUDE_FILTERS + EXCLUDE_JIT_FILTERS, name_matches_cfg=True))
@pytest.mark.parametrize('batch_size', [1])
def test_model_forward_torchscript(model_name, batch_size):
"""Run a single forward pass with each model"""
input_size = _get_input_size(model_name=model_name, target=TARGET_JIT_SIZE)
if max(input_size) > MAX_JIT_SIZE:
pytest.skip("Fixed input size model > limit.")
with set_scriptable(True):
model = create_model(model_name, pretrained=False)
model.eval()
model = torch.jit.script(model)
model.to(torch_device)
outputs = model(torch.randn((batch_size, *input_size)))
assert outputs.shape[0] == batch_size
assert not torch.isnan(outputs).any(), 'Output included NaNs'
EXCLUDE_FEAT_FILTERS = [
'*pruned*', # hopefully fix at some point
] + NON_STD_FILTERS
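The exclude lists above (`'*pruned*'`, `'*resnext101_32x32d'`, ...) are shell-style glob patterns matched against model names. A minimal stand-in sketch of how such exclude filters prune a model list (the `filter_models` helper below is hypothetical, for illustration only, not timm's actual implementation):

```python
import fnmatch


def filter_models(names, exclude_filters):
    # drop any name that matches at least one shell-style exclude pattern
    return [n for n in names
            if not any(fnmatch.fnmatch(n, pat) for pat in exclude_filters)]


models = ['resnet50', 'resnext101_32x32d', 'vit_base_patch16_pruned']
print(filter_models(models, ['*pruned*', '*resnext101_32x32d']))  # ['resnet50']
```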
if 'GITHUB_ACTIONS' in os.environ: # and 'Linux' in platform.system():
# GitHub Linux runner is slower and hits memory limits sooner than MacOS, exclude bigger models
EXCLUDE_FEAT_FILTERS += ['*resnext101_32x32d', '*resnext101_32x16d']
@pytest.mark.features
@pytest.mark.timeout(120)
@pytest.mark.parametrize('model_name', list_models(exclude_filters=EXCLUDE_FILTERS + EXCLUDE_FEAT_FILTERS))
@pytest.mark.parametrize('batch_size', [1])
def test_model_forward_features(model_name, batch_size):
"""Run a single forward pass with each model in feature extraction mode"""
model = create_model(model_name, pretrained=False, features_only=True)
model.eval()
expected_channels = model.feature_info.channels()
expected_reduction = model.feature_info.reduction()
assert len(expected_channels) >= 3 # all models here should have at least 3 default feat levels
input_size = _get_input_size(model=model, target=TARGET_FFEAT_SIZE)
if max(input_size) > MAX_FFEAT_SIZE:
pytest.skip("Fixed input size model > limit.")
output_fmt = getattr(model, 'output_fmt', 'NCHW')
feat_axis = get_channel_dim(output_fmt)
spatial_axis = get_spatial_dim(output_fmt)
import math
outputs = model(torch.randn((batch_size, *input_size)))
assert len(expected_channels) == len(outputs)
spatial_size = input_size[-2:]
for e, r, o in zip(expected_channels, expected_reduction, outputs):
assert e == o.shape[feat_axis]
assert o.shape[spatial_axis[0]] <= math.ceil(spatial_size[0] / r) + 1
assert o.shape[spatial_axis[1]] <= math.ceil(spatial_size[1] / r) + 1
assert o.shape[0] == batch_size
assert not torch.isnan(o).any()
@pytest.mark.features
@pytest.mark.timeout(120)
@pytest.mark.parametrize('model_name', list_models(module=FEAT_INTER_FILTERS, exclude_filters=EXCLUDE_FILTERS + ['*pruned*']))
@pytest.mark.parametrize('batch_size', [1])
def test_model_forward_intermediates_features(model_name, batch_size):
"""Run a single forward pass with each model in feature extraction mode"""
model = create_model(model_name, pretrained=False, features_only=True, feature_cls='getter')
model.eval()
expected_channels = model.feature_info.channels()
expected_reduction = model.feature_info.reduction()
input_size = _get_input_size(model=model, target=TARGET_FFEAT_SIZE)
if max(input_size) > MAX_FFEAT_SIZE:
pytest.skip("Fixed input size model > limit.")
output_fmt = getattr(model, 'output_fmt', 'NCHW')
feat_axis = get_channel_dim(output_fmt)
spatial_axis = get_spatial_dim(output_fmt)
import math
outputs = model(torch.randn((batch_size, *input_size)))
assert len(expected_channels) == len(outputs)
spatial_size = input_size[-2:]
for e, r, o in zip(expected_channels, expected_reduction, outputs):
assert e == o.shape[feat_axis]
assert o.shape[spatial_axis[0]] <= math.ceil(spatial_size[0] / r) + 1
assert o.shape[spatial_axis[1]] <= math.ceil(spatial_size[1] / r) + 1
assert o.shape[0] == batch_size
assert not torch.isnan(o).any()
@pytest.mark.features
@pytest.mark.timeout(120)
@pytest.mark.parametrize('model_name', list_models(module=FEAT_INTER_FILTERS, exclude_filters=EXCLUDE_FILTERS + ['*pruned*']))
@pytest.mark.parametrize('batch_size', [1])
def test_model_forward_intermediates(model_name, batch_size):
    """Run a single forward pass with each model using forward_intermediates()."""
model = create_model(model_name, pretrained=False)
model.eval()
feature_info = timm.models.FeatureInfo(model.feature_info, len(model.feature_info))
expected_channels = feature_info.channels()
expected_reduction = feature_info.reduction()
assert len(expected_channels) >= 3 # all models here should have at least 3 feature levels
input_size = _get_input_size(model=model, target=TARGET_FFEAT_SIZE)
if max(input_size) > MAX_FFEAT_SIZE:
pytest.skip("Fixed input size model > limit.")
output_fmt = 'NCHW' # NOTE output_fmt determined by forward_intermediates() arg, not model attribute
feat_axis = get_channel_dim(output_fmt)
spatial_axis = get_spatial_dim(output_fmt)
import math
inpt = torch.randn((batch_size, *input_size))
output, intermediates = model.forward_intermediates(
inpt,
output_fmt=output_fmt,
)
assert len(expected_channels) == len(intermediates)
spatial_size = input_size[-2:]
for e, r, o in zip(expected_channels, expected_reduction, intermediates):
assert e == o.shape[feat_axis]
assert o.shape[spatial_axis[0]] <= math.ceil(spatial_size[0] / r) + 1
assert o.shape[spatial_axis[1]] <= math.ceil(spatial_size[1] / r) + 1
assert o.shape[0] == batch_size
assert not torch.isnan(o).any()
output2 = model.forward_features(inpt)
assert torch.allclose(output, output2)
    # Test grad-checkpointing, if supported
try:
model.set_grad_checkpointing()
    except Exception:
# throws if not supported, that's fine
pass
else:
output3, _ = model.forward_intermediates(
inpt,
output_fmt=output_fmt,
)
assert torch.allclose(output, output3, rtol=1e-4, atol=1e-5), 'Output does not match'
def _create_fx_model(model, train=False):
# This block of code does a bit of juggling to handle any case where there are multiple outputs in train mode
# So we trace once and look at the graph, and get the indices of the nodes that lead into the original fx output
# node. Then we use those indices to select from train_nodes returned by torchvision get_graph_node_names
tracer_kwargs = dict(
leaf_modules=get_notrace_modules(),
autowrap_functions=get_notrace_functions(),
#enable_cpatching=True,
param_shapes_constant=True
)
train_nodes, eval_nodes = get_graph_node_names(model, tracer_kwargs=tracer_kwargs)
eval_return_nodes = [eval_nodes[-1]]
train_return_nodes = [train_nodes[-1]]
if train:
tracer = NodePathTracer(**tracer_kwargs)
graph = tracer.trace(model)
graph_nodes = list(reversed(graph.nodes))
output_node_names = [n.name for n in graph_nodes[0]._input_nodes.keys()]
graph_node_names = [n.name for n in graph_nodes]
output_node_indices = [-graph_node_names.index(node_name) for node_name in output_node_names]
train_return_nodes = [train_nodes[ix] for ix in output_node_indices]
fx_model = create_feature_extractor(
model,
train_return_nodes=train_return_nodes,
eval_return_nodes=eval_return_nodes,
tracer_kwargs=tracer_kwargs,
)
return fx_model
EXCLUDE_FX_FILTERS = ['vit_gi*', 'hiera*']
# not enough memory to run fx on more models than other tests
if 'GITHUB_ACTIONS' in os.environ:
EXCLUDE_FX_FILTERS += [
'beit_large*',
'mixer_l*',
'*nfnet_f2*',
'*resnext101_32x32d',
'resnetv2_152x2*',
'resmlp_big*',
'resnetrs270',
'swin_large*',
'vgg*',
'vit_large*',
'vit_base_patch8*',
'xcit_large*',
]
@pytest.mark.fxforward
@pytest.mark.timeout(120)
@pytest.mark.parametrize('model_name', list_models(exclude_filters=EXCLUDE_FILTERS + EXCLUDE_FX_FILTERS))
@pytest.mark.parametrize('batch_size', [1])
def test_model_forward_fx(model_name, batch_size):
"""
Symbolically trace each model and run single forward pass through the resulting GraphModule
Also check that the output of a forward pass through the GraphModule is the same as that from the original Module
"""
if not has_fx_feature_extraction:
pytest.skip("Can't test FX. Torch >= 1.10 and Torchvision >= 0.11 are required.")
model = create_model(model_name, pretrained=False)
model.eval()
input_size = _get_input_size(model=model, target=TARGET_FWD_FX_SIZE)
if max(input_size) > MAX_FWD_FX_SIZE:
pytest.skip("Fixed input size model > limit.")
with torch.inference_mode():
inputs = torch.randn((batch_size, *input_size))
outputs = model(inputs)
if isinstance(outputs, tuple):
outputs = torch.cat(outputs)
model = _create_fx_model(model)
fx_outputs = tuple(model(inputs).values())
if isinstance(fx_outputs, tuple):
fx_outputs = torch.cat(fx_outputs)
assert torch.all(fx_outputs == outputs)
assert outputs.shape[0] == batch_size
assert not torch.isnan(outputs).any(), 'Output included NaNs'
@pytest.mark.fxbackward
@pytest.mark.timeout(120)
@pytest.mark.parametrize('model_name', list_models(
exclude_filters=EXCLUDE_FILTERS + EXCLUDE_FX_FILTERS, name_matches_cfg=True))
@pytest.mark.parametrize('batch_size', [2])
def test_model_backward_fx(model_name, batch_size):
"""Symbolically trace each model and run single backward pass through the resulting GraphModule"""
if not has_fx_feature_extraction:
pytest.skip("Can't test FX. Torch >= 1.10 and Torchvision >= 0.11 are required.")
input_size = _get_input_size(model_name=model_name, target=TARGET_BWD_FX_SIZE)
if max(input_size) > MAX_BWD_FX_SIZE:
pytest.skip("Fixed input size model > limit.")
model = create_model(model_name, pretrained=False, num_classes=42)
model.train()
num_params = sum([x.numel() for x in model.parameters()])
if 'GITHUB_ACTIONS' in os.environ and num_params > 100e6:
pytest.skip("Skipping FX backward test on model with more than 100M params.")
model = _create_fx_model(model, train=True)
outputs = tuple(model(torch.randn((batch_size, *input_size))).values())
if isinstance(outputs, tuple):
outputs = torch.cat(outputs)
outputs.mean().backward()
for n, x in model.named_parameters():
assert x.grad is not None, f'No gradient for {n}'
num_grad = sum([x.grad.numel() for x in model.parameters() if x.grad is not None])
assert outputs.shape[-1] == 42
assert num_params == num_grad, 'Some parameters are missing gradients'
assert not torch.isnan(outputs).any(), 'Output included NaNs'
if 'GITHUB_ACTIONS' not in os.environ:
# FIXME this test is causing GitHub actions to run out of RAM and abruptly kill the test process
# reason: model is scripted after fx tracing, but beit has torch.jit.is_scripting() control flow
EXCLUDE_FX_JIT_FILTERS = [
'deit_*_distilled_patch16_224',
'levit*',
'pit_*_distilled_224',
] + EXCLUDE_FX_FILTERS
@pytest.mark.timeout(120)
@pytest.mark.parametrize(
'model_name', list_models(
exclude_filters=EXCLUDE_FILTERS + EXCLUDE_JIT_FILTERS + EXCLUDE_FX_JIT_FILTERS, name_matches_cfg=True))
@pytest.mark.parametrize('batch_size', [1])
def test_model_forward_fx_torchscript(model_name, batch_size):
"""Symbolically trace each model, script it, and run single forward pass"""
if not has_fx_feature_extraction:
pytest.skip("Can't test FX. Torch >= 1.10 and Torchvision >= 0.11 are required.")
input_size = _get_input_size(model_name=model_name, target=TARGET_JIT_SIZE)
if max(input_size) > MAX_JIT_SIZE:
pytest.skip("Fixed input size model > limit.")
with set_scriptable(True):
model = create_model(model_name, pretrained=False)
model.eval()
model = torch.jit.script(_create_fx_model(model))
with torch.inference_mode():
outputs = tuple(model(torch.randn((batch_size, *input_size))).values())
if isinstance(outputs, tuple):
outputs = torch.cat(outputs)
assert outputs.shape[0] == batch_size
assert not torch.isnan(outputs).any(), 'Output included NaNs'
@pytest.mark.timeout(120)
@pytest.mark.parametrize('model_name', ["regnetx_002"])
@pytest.mark.parametrize('batch_size', [1])
def test_model_forward_torchscript_with_features_fx(model_name, batch_size):
"""Create a model with feature extraction based on fx, script it, and run
a single forward pass"""
if not has_fx_feature_extraction:
pytest.skip("Can't test FX. Torch >= 1.10 and Torchvision >= 0.11 are required.")
allowed_models = list_models(
exclude_filters=EXCLUDE_FILTERS + EXCLUDE_JIT_FILTERS + EXCLUDE_FX_JIT_FILTERS,
name_matches_cfg=True
)
assert model_name in allowed_models, f"{model_name=} not supported for this test"
input_size = _get_input_size(model_name=model_name, target=TARGET_JIT_SIZE)
assert max(input_size) <= MAX_JIT_SIZE, "Fixed input size model > limit. Pick a different model to run this test"
with set_scriptable(True):
model = create_model(model_name, pretrained=False, features_only=True, feature_cfg={"feature_cls": "fx"})
model.eval()
model = torch.jit.script(model)
with torch.inference_mode():
outputs = model(torch.randn((batch_size, *input_size)))
assert isinstance(outputs, list)
for tensor in outputs:
assert tensor.shape[0] == batch_size
assert not torch.isnan(tensor).any(), 'Output included NaNs'
# file: pytorch-image-models/tests/test_models.py
import math
import torch
from torch.utils.data import Sampler
import torch.distributed as dist
class OrderedDistributedSampler(Sampler):
"""Sampler that restricts data loading to a subset of the dataset.
It is especially useful in conjunction with
:class:`torch.nn.parallel.DistributedDataParallel`. In such case, each
process can pass a DistributedSampler instance as a DataLoader sampler,
and load a subset of the original dataset that is exclusive to it.
.. note::
Dataset is assumed to be of constant size.
Arguments:
dataset: Dataset used for sampling.
num_replicas (optional): Number of processes participating in
distributed training.
rank (optional): Rank of the current process within num_replicas.
"""
def __init__(self, dataset, num_replicas=None, rank=None):
if num_replicas is None:
if not dist.is_available():
raise RuntimeError("Requires distributed package to be available")
num_replicas = dist.get_world_size()
if rank is None:
if not dist.is_available():
raise RuntimeError("Requires distributed package to be available")
rank = dist.get_rank()
self.dataset = dataset
self.num_replicas = num_replicas
self.rank = rank
self.num_samples = int(math.ceil(len(self.dataset) * 1.0 / self.num_replicas))
self.total_size = self.num_samples * self.num_replicas
def __iter__(self):
indices = list(range(len(self.dataset)))
# add extra samples to make it evenly divisible
indices += indices[:(self.total_size - len(indices))]
assert len(indices) == self.total_size
# subsample
indices = indices[self.rank:self.total_size:self.num_replicas]
assert len(indices) == self.num_samples
return iter(indices)
def __len__(self):
return self.num_samples
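The padding-and-stride logic in `__iter__` above can be illustrated with a small pure-Python sketch (a hypothetical stand-in, not part of timm):

```python
import math


def ordered_indices(dataset_len, num_replicas, rank):
    # mirror OrderedDistributedSampler: pad indices to a multiple of
    # num_replicas, then take every num_replicas-th index starting at rank
    num_samples = math.ceil(dataset_len / num_replicas)
    total_size = num_samples * num_replicas
    indices = list(range(dataset_len))
    indices += indices[:total_size - len(indices)]  # wrap-around padding
    return indices[rank:total_size:num_replicas]


# 10 samples over 4 replicas -> 3 indices per rank; ranks 2 and 3 each see
# one wrapped duplicate (index 0 or 1) so all ranks yield equal counts
print(ordered_indices(10, 4, 0))  # [0, 4, 8]
print(ordered_indices(10, 4, 2))  # [2, 6, 0]
```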
class RepeatAugSampler(Sampler):
"""Sampler that restricts data loading to a subset of the dataset for distributed,
with repeated augmentation.
    It ensures that each augmented version of a sample will be visible to a
different process (GPU). Heavily based on torch.utils.data.DistributedSampler
This sampler was taken from https://github.com/facebookresearch/deit/blob/0c4b8f60/samplers.py
Used in
Copyright (c) 2015-present, Facebook, Inc.
"""
def __init__(
self,
dataset,
num_replicas=None,
rank=None,
shuffle=True,
num_repeats=3,
selected_round=256,
selected_ratio=0,
):
if num_replicas is None:
if not dist.is_available():
raise RuntimeError("Requires distributed package to be available")
num_replicas = dist.get_world_size()
if rank is None:
if not dist.is_available():
raise RuntimeError("Requires distributed package to be available")
rank = dist.get_rank()
self.dataset = dataset
self.num_replicas = num_replicas
self.rank = rank
self.shuffle = shuffle
self.num_repeats = num_repeats
self.epoch = 0
self.num_samples = int(math.ceil(len(self.dataset) * num_repeats / self.num_replicas))
self.total_size = self.num_samples * self.num_replicas
# Determine the number of samples to select per epoch for each rank.
# num_selected logic defaults to be the same as original RASampler impl, but this one can be tweaked
# via selected_ratio and selected_round args.
selected_ratio = selected_ratio or num_replicas # ratio to reduce selected samples by, num_replicas if 0
if selected_round:
self.num_selected_samples = int(math.floor(
len(self.dataset) // selected_round * selected_round / selected_ratio))
else:
self.num_selected_samples = int(math.ceil(len(self.dataset) / selected_ratio))
def __iter__(self):
# deterministically shuffle based on epoch
g = torch.Generator()
g.manual_seed(self.epoch)
if self.shuffle:
indices = torch.randperm(len(self.dataset), generator=g)
else:
indices = torch.arange(start=0, end=len(self.dataset))
# produce repeats e.g. [0, 0, 0, 1, 1, 1, 2, 2, 2....]
if isinstance(self.num_repeats, float) and not self.num_repeats.is_integer():
# resample for repeats w/ non-integer ratio
repeat_size = math.ceil(self.num_repeats * len(self.dataset))
indices = indices[torch.tensor([int(i // self.num_repeats) for i in range(repeat_size)])]
else:
indices = torch.repeat_interleave(indices, repeats=int(self.num_repeats), dim=0)
indices = indices.tolist() # leaving as tensor thrashes dataloader memory
# add extra samples to make it evenly divisible
padding_size = self.total_size - len(indices)
if padding_size > 0:
indices += indices[:padding_size]
assert len(indices) == self.total_size
# subsample per rank
indices = indices[self.rank:self.total_size:self.num_replicas]
assert len(indices) == self.num_samples
# return up to num selected samples
return iter(indices[:self.num_selected_samples])
def __len__(self):
return self.num_selected_samples
def set_epoch(self, epoch):
self.epoch = epoch
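The interplay of `num_repeats`, padding, per-rank striding, and the `selected_round` arithmetic can be traced with a small stand-in sketch (hypothetical helpers mirroring the class above, not timm API):

```python
import math


def repeat_aug_indices(dataset_len, num_repeats, num_replicas, rank):
    # repeats like [0, 0, 0, 1, 1, 1, ...], padded, then strided per rank
    indices = [i for i in range(dataset_len) for _ in range(num_repeats)]
    num_samples = math.ceil(dataset_len * num_repeats / num_replicas)
    total_size = num_samples * num_replicas
    indices += indices[:total_size - len(indices)]
    return indices[rank:total_size:num_replicas]


def num_selected(dataset_len, selected_round, selected_ratio):
    # number of samples actually consumed per epoch on each rank
    if selected_round:
        return int(math.floor(dataset_len // selected_round * selected_round / selected_ratio))
    return int(math.ceil(dataset_len / selected_ratio))


# With num_repeats == num_replicas == 3, every rank sees each sample once per
# epoch, but each GPU applies its own augmentation to its copy.
print(repeat_aug_indices(4, 3, 3, 0))  # [0, 1, 2, 3]
print(repeat_aug_indices(4, 3, 3, 1))  # [0, 1, 2, 3]
```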
# file: pytorch-image-models/timm/data/distributed_sampler.py
""" Dataset reader for HF IterableDataset
"""
import math
import os
from itertools import repeat, chain
from typing import Optional
import torch
import torch.distributed as dist
from PIL import Image
try:
import datasets
from datasets.distributed import split_dataset_by_node
from datasets.splits import SplitInfo
except ImportError as e:
print("Please install Hugging Face datasets package `pip install datasets`.")
raise e
from .class_map import load_class_map
from .reader import Reader
from .shared_count import SharedCount
SHUFFLE_SIZE = int(os.environ.get('HFIDS_SHUFFLE_SIZE', 4096))
class ReaderHfids(Reader):
def __init__(
self,
name: str,
root: Optional[str] = None,
split: str = 'train',
is_training: bool = False,
batch_size: int = 1,
download: bool = False,
repeats: int = 0,
seed: int = 42,
class_map: Optional[dict] = None,
input_key: str = 'image',
input_img_mode: str = 'RGB',
target_key: str = 'label',
target_img_mode: str = '',
shuffle_size: Optional[int] = None,
num_samples: Optional[int] = None,
trust_remote_code: bool = False
):
super().__init__()
self.root = root
self.split = split
self.is_training = is_training
self.batch_size = batch_size
self.download = download
self.repeats = repeats
self.common_seed = seed # a seed that's fixed across all worker / distributed instances
self.shuffle_size = shuffle_size or SHUFFLE_SIZE
self.input_key = input_key
self.input_img_mode = input_img_mode
self.target_key = target_key
self.target_img_mode = target_img_mode
self.builder = datasets.load_dataset_builder(
name,
cache_dir=root,
trust_remote_code=trust_remote_code,
)
if download:
self.builder.download_and_prepare()
split_info: Optional[SplitInfo] = None
if self.builder.info.splits and split in self.builder.info.splits:
if isinstance(self.builder.info.splits[split], SplitInfo):
split_info: Optional[SplitInfo] = self.builder.info.splits[split]
if num_samples:
self.num_samples = num_samples
elif split_info and split_info.num_examples:
self.num_samples = split_info.num_examples
else:
raise ValueError(
"Dataset length is unknown, please pass `num_samples` explicitly. "
"The number of steps needs to be known in advance for the learning rate scheduler."
)
self.remap_class = False
if class_map:
self.class_to_idx = load_class_map(class_map)
self.remap_class = True
else:
self.class_to_idx = {}
# Distributed world state
self.dist_rank = 0
self.dist_num_replicas = 1
if dist.is_available() and dist.is_initialized() and dist.get_world_size() > 1:
self.dist_rank = dist.get_rank()
self.dist_num_replicas = dist.get_world_size()
# Attributes that are updated in _lazy_init
self.worker_info = None
self.worker_id = 0
self.num_workers = 1
self.global_worker_id = 0
self.global_num_workers = 1
# Initialized lazily on each dataloader worker process
self.ds: Optional[datasets.IterableDataset] = None
self.epoch = SharedCount()
def set_epoch(self, count):
# to update the shuffling effective_seed = seed + epoch
self.epoch.value = count
def set_loader_cfg(
self,
num_workers: Optional[int] = None,
):
if self.ds is not None:
return
if num_workers is not None:
self.num_workers = num_workers
self.global_num_workers = self.dist_num_replicas * self.num_workers
def _lazy_init(self):
""" Lazily initialize worker (in worker processes)
"""
if self.worker_info is None:
worker_info = torch.utils.data.get_worker_info()
if worker_info is not None:
self.worker_info = worker_info
self.worker_id = worker_info.id
self.num_workers = worker_info.num_workers
self.global_num_workers = self.dist_num_replicas * self.num_workers
self.global_worker_id = self.dist_rank * self.num_workers + self.worker_id
if self.download:
dataset = self.builder.as_dataset(split=self.split)
# to distribute evenly to workers
ds = dataset.to_iterable_dataset(num_shards=self.global_num_workers)
else:
            # in this case the number of shards is determined by the number of remote files
ds = self.builder.as_streaming_dataset(split=self.split)
if self.is_training:
# will shuffle the list of shards and use a shuffle buffer
ds = ds.shuffle(seed=self.common_seed, buffer_size=self.shuffle_size)
# Distributed:
# The dataset has a number of shards that is a factor of `dist_num_replicas` (i.e. if `ds.n_shards % dist_num_replicas == 0`),
# so the shards are evenly assigned across the nodes.
# If it's not the case for dataset streaming, each node keeps 1 example out of `dist_num_replicas`, skipping the other examples.
# Workers:
# In a node, datasets.IterableDataset assigns the shards assigned to the node as evenly as possible to workers.
self.ds = split_dataset_by_node(ds, rank=self.dist_rank, world_size=self.dist_num_replicas)
def _num_samples_per_worker(self):
num_worker_samples = \
max(1, self.repeats) * self.num_samples / max(self.global_num_workers, self.dist_num_replicas)
if self.is_training or self.dist_num_replicas > 1:
num_worker_samples = math.ceil(num_worker_samples)
if self.is_training and self.batch_size is not None:
num_worker_samples = math.ceil(num_worker_samples / self.batch_size) * self.batch_size
return int(num_worker_samples)
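The rounding performed by `_num_samples_per_worker` can be checked with a pure-Python replica (a hypothetical stand-in mirroring the method above):

```python
import math


def num_samples_per_worker(num_samples, repeats, global_num_workers,
                           dist_num_replicas, is_training, batch_size):
    n = max(1, repeats) * num_samples / max(global_num_workers, dist_num_replicas)
    if is_training or dist_num_replicas > 1:
        n = math.ceil(n)  # never undershoot in training / distributed mode
    if is_training and batch_size is not None:
        n = math.ceil(n / batch_size) * batch_size  # round up to whole batches
    return int(n)


# 1000 samples, 2 ranks x 4 workers, batch 32: 1000 / 8 = 125 -> padded to 128
print(num_samples_per_worker(1000, 0, 8, 2, True, 32))    # 128
print(num_samples_per_worker(1000, 0, 8, 2, False, None))  # 125
```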
def __iter__(self):
if self.ds is None:
self._lazy_init()
self.ds.set_epoch(self.epoch.value)
target_sample_count = self._num_samples_per_worker()
sample_count = 0
if self.is_training:
ds_iter = chain.from_iterable(repeat(self.ds))
else:
ds_iter = iter(self.ds)
for sample in ds_iter:
input_data: Image.Image = sample[self.input_key]
if self.input_img_mode and input_data.mode != self.input_img_mode:
input_data = input_data.convert(self.input_img_mode)
target_data = sample[self.target_key]
if self.target_img_mode:
assert isinstance(target_data, Image.Image), "target_img_mode is specified but target is not an image"
if target_data.mode != self.target_img_mode:
target_data = target_data.convert(self.target_img_mode)
elif self.remap_class:
target_data = self.class_to_idx[target_data]
yield input_data, target_data
sample_count += 1
if self.is_training and sample_count >= target_sample_count:
break
def __len__(self):
num_samples = self._num_samples_per_worker() * self.num_workers
return num_samples
def _filename(self, index, basename=False, absolute=False):
assert False, "Not supported" # no random access to examples
def filenames(self, basename=False, absolute=False):
""" Return all filenames in dataset, overrides base"""
if self.ds is None:
self._lazy_init()
names = []
for sample in self.ds:
if 'file_name' in sample:
name = sample['file_name']
elif 'filename' in sample:
name = sample['filename']
elif 'id' in sample:
name = sample['id']
elif 'image_id' in sample:
name = sample['image_id']
else:
assert False, "No supported name field present"
names.append(name)
        return names

# file: pytorch-image-models/timm/data/readers/reader_hfids.py
from typing import Final, Optional, Type
import torch
from torch import nn as nn
from torch.nn import functional as F
from ._fx import register_notrace_function
from .config import use_fused_attn
from .pos_embed_sincos import apply_rot_embed_cat
@torch.fx.wrap
@register_notrace_function
def maybe_add_mask(scores: torch.Tensor, attn_mask: Optional[torch.Tensor] = None):
return scores if attn_mask is None else scores + attn_mask
class Attention(nn.Module):
"""Standard Multi-head Self Attention module with QKV projection.
This module implements the standard multi-head attention mechanism used in transformers.
It supports both the fused attention implementation (scaled_dot_product_attention) for
efficiency when available, and a manual implementation otherwise. The module includes
options for QK normalization, attention dropout, and projection dropout.
"""
fused_attn: Final[bool]
def __init__(
self,
dim: int,
num_heads: int = 8,
qkv_bias: bool = False,
qk_norm: bool = False,
scale_norm: bool = False,
proj_bias: bool = True,
attn_drop: float = 0.,
proj_drop: float = 0.,
norm_layer: Optional[Type[nn.Module]] = None,
) -> None:
"""Initialize the Attention module.
Args:
dim: Input dimension of the token embeddings
num_heads: Number of attention heads
qkv_bias: Whether to use bias in the query, key, value projections
qk_norm: Whether to apply normalization to query and key vectors
            scale_norm: Whether to apply normalization to the attention output
proj_bias: Whether to use bias in the output projection
attn_drop: Dropout rate applied to the attention weights
proj_drop: Dropout rate applied after the output projection
norm_layer: Normalization layer constructor for QK normalization if enabled
"""
super().__init__()
assert dim % num_heads == 0, 'dim should be divisible by num_heads'
if qk_norm or scale_norm:
assert norm_layer is not None, 'norm_layer must be provided if qk_norm or scale_norm is True'
self.num_heads = num_heads
self.head_dim = dim // num_heads
self.scale = self.head_dim ** -0.5
self.fused_attn = use_fused_attn()
self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
self.q_norm = norm_layer(self.head_dim) if qk_norm else nn.Identity()
self.k_norm = norm_layer(self.head_dim) if qk_norm else nn.Identity()
self.attn_drop = nn.Dropout(attn_drop)
self.norm = norm_layer(dim) if scale_norm else nn.Identity()
self.proj = nn.Linear(dim, dim, bias=proj_bias)
self.proj_drop = nn.Dropout(proj_drop)
def forward(
self,
x: torch.Tensor,
attn_mask: Optional[torch.Tensor] = None,
) -> torch.Tensor:
B, N, C = x.shape
qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim).permute(2, 0, 3, 1, 4)
q, k, v = qkv.unbind(0)
q, k = self.q_norm(q), self.k_norm(k)
if self.fused_attn:
x = F.scaled_dot_product_attention(
q, k, v,
attn_mask=attn_mask,
dropout_p=self.attn_drop.p if self.training else 0.,
)
else:
q = q * self.scale
attn = q @ k.transpose(-2, -1)
attn = maybe_add_mask(attn, attn_mask)
attn = attn.softmax(dim=-1)
attn = self.attn_drop(attn)
x = attn @ v
x = x.transpose(1, 2).reshape(B, N, C)
x = self.norm(x)
x = self.proj(x)
x = self.proj_drop(x)
return x
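The manual (non-fused) branch above computes `softmax(q @ k.T * scale) @ v` per head. A dependency-free single-head sketch of that math (a hypothetical helper using plain Python lists instead of tensors):

```python
import math


def attention_1head(q, k, v):
    # q, k, v: seq_len x head_dim lists of lists; returns seq_len x head_dim
    scale = len(q[0]) ** -0.5
    out = []
    for qi in q:
        scores = [scale * sum(a * b for a, b in zip(qi, kj)) for kj in k]
        m = max(scores)                      # subtract max for numerical stability
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]      # softmax over key positions
        out.append([sum(w * vj[t] for w, vj in zip(weights, v))
                    for t in range(len(v[0]))])
    return out


# a zero query scores every key equally, so it attends uniformly:
print(attention_1head([[0.0, 0.0]],
                      [[1.0, 0.0], [0.0, 1.0]],
                      [[1.0, 0.0], [0.0, 1.0]]))  # [[0.5, 0.5]]
```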
class AttentionRope(nn.Module):
""" A Self Attention module with ROPE support.
Includes options for:
* QK normalization option
* Attention output (scale) normalization
* Fused or unfused QKV projection support
"""
fused_attn: torch.jit.Final[bool]
def __init__(
self,
dim: int,
num_heads: int = 8,
qkv_bias: bool = True,
qkv_fused: bool = True,
num_prefix_tokens: int = 1,
attn_drop: float = 0.,
proj_drop: float = 0.,
attn_head_dim: Optional[int] = None,
            norm_layer: Optional[Type[nn.Module]] = None,
qk_norm: bool = False,
scale_norm: bool = False,
proj_bias: bool = True,
):
"""Initialize the Attention module.
Args:
dim: Input dimension of the token embeddings
num_heads: Number of attention heads
qkv_bias: Whether to add a bias term to the query, key, and value projections
            qkv_fused: Whether to use a single fused QKV projection rather than separate Q, K, V projections
num_prefix_tokens: Number of reg/cls tokens at the beginning of the sequence that
should not have position embeddings applied
attn_drop: Dropout rate for attention weights
proj_drop: Dropout rate for the output projection
attn_head_dim: Dimension of each attention head (if None, computed as dim // num_heads)
norm_layer: Normalization layer constructor to use for QK and scale normalization
qk_norm: Enable normalization of query (Q) and key (K) vectors with norm_layer
scale_norm: Enable normalization (scaling) of attention output with norm_layer
            proj_bias: Whether to use bias in the output projection
"""
super().__init__()
if scale_norm or qk_norm:
assert norm_layer is not None, 'norm_layer must be provided if qk_norm or scale_norm is True'
self.num_heads = num_heads
head_dim = dim // num_heads
if attn_head_dim is not None:
head_dim = attn_head_dim
attn_dim = head_dim * self.num_heads
self.scale = head_dim ** -0.5
self.num_prefix_tokens = num_prefix_tokens
self.fused_attn = use_fused_attn()
if qkv_fused:
self.qkv = nn.Linear(dim, attn_dim * 3, bias=qkv_bias)
self.q_proj = self.k_proj = self.v_proj = None
else:
self.qkv = None
self.q_proj = nn.Linear(dim, attn_dim, bias=qkv_bias)
self.k_proj = nn.Linear(dim, attn_dim, bias=qkv_bias)
self.v_proj = nn.Linear(dim, attn_dim, bias=qkv_bias)
self.q_norm = norm_layer(head_dim) if qk_norm else nn.Identity()
self.k_norm = norm_layer(head_dim) if qk_norm else nn.Identity()
self.attn_drop = nn.Dropout(attn_drop)
self.norm = norm_layer(attn_dim) if scale_norm else nn.Identity()
self.proj = nn.Linear(attn_dim, dim, bias=proj_bias)
self.proj_drop = nn.Dropout(proj_drop)
def forward(
self,
x,
rope: Optional[torch.Tensor] = None,
attn_mask: Optional[torch.Tensor] = None,
):
"""Forward pass for the attention module.
Args:
x: Input tensor of shape (batch_size, sequence_length, embedding_dim)
rope: Rotary position embeddings tensor for position-aware attention
attn_mask: Optional attention mask to apply during attention computation
Returns:
Tensor of shape (batch_size, sequence_length, embedding_dim)
"""
B, N, C = x.shape
if self.qkv is not None:
qkv = self.qkv(x)
qkv = qkv.reshape(B, N, 3, self.num_heads, -1).permute(2, 0, 3, 1, 4)
q, k, v = qkv.unbind(0) # B, num_heads, N, head_dim
else:
q = self.q_proj(x).reshape(B, N, self.num_heads, -1).transpose(1, 2) # B, num_heads, N, C
k = self.k_proj(x).reshape(B, N, self.num_heads, -1).transpose(1, 2)
v = self.v_proj(x).reshape(B, N, self.num_heads, -1).transpose(1, 2)
q, k = self.q_norm(q), self.k_norm(k)
if rope is not None:
npt = self.num_prefix_tokens
q = torch.cat([q[:, :, :npt, :], apply_rot_embed_cat(q[:, :, npt:, :], rope)], dim=2).type_as(v)
k = torch.cat([k[:, :, :npt, :], apply_rot_embed_cat(k[:, :, npt:, :], rope)], dim=2).type_as(v)
if self.fused_attn:
x = F.scaled_dot_product_attention(
q, k, v,
attn_mask=attn_mask,
dropout_p=self.attn_drop.p if self.training else 0.,
)
else:
q = q * self.scale
attn = (q @ k.transpose(-2, -1))
attn = maybe_add_mask(attn, attn_mask)
attn = attn.softmax(dim=-1)
attn = self.attn_drop(attn)
x = attn @ v
x = x.transpose(1, 2).reshape(B, N, C)
x = self.norm(x)
x = self.proj(x)
x = self.proj_drop(x)
return x
# file: pytorch-image-models/timm/layers/attention.py
""" NormAct (Normalization + Activation Layer) Factory
Create norm + act combo modules that attempt to be backwards compatible with separate norm + act
instances in models. Where these are used it will be possible to swap separate BN + act layers with
combined modules like IABN or EvoNorms.
Hacked together by / Copyright 2020 Ross Wightman
"""
import types
import functools
from typing import Optional
from .evo_norm import *
from .filter_response_norm import FilterResponseNormAct2d, FilterResponseNormTlu2d
from .norm_act import (
BatchNormAct2d,
GroupNormAct,
GroupNorm1Act,
LayerNormAct,
LayerNormActFp32,
LayerNormAct2d,
LayerNormAct2dFp32,
RmsNormAct,
RmsNormActFp32,
RmsNormAct2d,
RmsNormAct2dFp32,
)
from .inplace_abn import InplaceAbn
from .typing import LayerType
_NORM_ACT_MAP = dict(
batchnorm=BatchNormAct2d,
batchnorm2d=BatchNormAct2d,
groupnorm=GroupNormAct,
groupnorm1=GroupNorm1Act,
layernorm=LayerNormAct,
layernorm2d=LayerNormAct2d,
layernormfp32=LayerNormActFp32,
layernorm2dfp32=LayerNormAct2dFp32,
evonormb0=EvoNorm2dB0,
evonormb1=EvoNorm2dB1,
evonormb2=EvoNorm2dB2,
evonorms0=EvoNorm2dS0,
evonorms0a=EvoNorm2dS0a,
evonorms1=EvoNorm2dS1,
evonorms1a=EvoNorm2dS1a,
evonorms2=EvoNorm2dS2,
evonorms2a=EvoNorm2dS2a,
frn=FilterResponseNormAct2d,
frntlu=FilterResponseNormTlu2d,
inplaceabn=InplaceAbn,
iabn=InplaceAbn,
rmsnorm=RmsNormAct,
rmsnorm2d=RmsNormAct2d,
rmsnormfp32=RmsNormActFp32,
rmsnorm2dfp32=RmsNormAct2dFp32,
)
_NORM_ACT_TYPES = {m for n, m in _NORM_ACT_MAP.items()}
# Reverse map from base norm layer names to norm+act layer classes
_NORM_TO_NORM_ACT_MAP = dict(
batchnorm=BatchNormAct2d,
batchnorm2d=BatchNormAct2d,
groupnorm=GroupNormAct,
groupnorm1=GroupNorm1Act,
layernorm=LayerNormAct,
layernorm2d=LayerNormAct2d,
layernormfp32=LayerNormActFp32,
layernorm2dfp32=LayerNormAct2dFp32,
rmsnorm=RmsNormAct,
rmsnorm2d=RmsNormAct2d,
rmsnormfp32=RmsNormActFp32,
rmsnorm2dfp32=RmsNormAct2dFp32,
)
# has act_layer arg to define act type
_NORM_ACT_REQUIRES_ARG = {
BatchNormAct2d,
GroupNormAct,
GroupNorm1Act,
LayerNormAct,
LayerNormAct2d,
LayerNormActFp32,
LayerNormAct2dFp32,
FilterResponseNormAct2d,
InplaceAbn,
RmsNormAct,
RmsNormAct2d,
RmsNormActFp32,
RmsNormAct2dFp32,
}
def create_norm_act_layer(
layer_name: LayerType,
num_features: int,
act_layer: Optional[LayerType] = None,
apply_act: bool = True,
jit: bool = False,
**kwargs,
):
layer = get_norm_act_layer(layer_name, act_layer=act_layer)
layer_instance = layer(num_features, apply_act=apply_act, **kwargs)
if jit:
layer_instance = torch.jit.script(layer_instance)
return layer_instance
def get_norm_act_layer(
norm_layer: LayerType,
act_layer: Optional[LayerType] = None,
):
if norm_layer is None:
return None
assert isinstance(norm_layer, (type, str, types.FunctionType, functools.partial))
assert act_layer is None or isinstance(act_layer, (type, str, types.FunctionType, functools.partial))
norm_act_kwargs = {}
# unbind partial fn, so args can be rebound later
if isinstance(norm_layer, functools.partial):
norm_act_kwargs.update(norm_layer.keywords)
norm_layer = norm_layer.func
if isinstance(norm_layer, str):
if not norm_layer:
return None
layer_name = norm_layer.replace('_', '').lower().split('-')[0]
norm_act_layer = _NORM_ACT_MAP[layer_name]
elif norm_layer in _NORM_ACT_TYPES:
norm_act_layer = norm_layer
elif isinstance(norm_layer, types.FunctionType):
# if function type, must be a lambda/fn that creates a norm_act layer
norm_act_layer = norm_layer
else:
# Use reverse map to find the corresponding norm+act layer
type_name = norm_layer.__name__.lower()
norm_act_layer = _NORM_TO_NORM_ACT_MAP.get(type_name, None)
assert norm_act_layer is not None, f"No equivalent norm_act layer for {type_name}"
if norm_act_layer in _NORM_ACT_REQUIRES_ARG:
# pass `act_layer` through for backwards compat where `act_layer=None` implies no activation.
# In the future, may force use of `apply_act` with `act_layer` arg bound to relevant NormAct types
norm_act_kwargs.setdefault('act_layer', act_layer)
if norm_act_kwargs:
norm_act_layer = functools.partial(norm_act_layer, **norm_act_kwargs) # bind/rebind args
return norm_act_layer
# Source: pytorch-image-models/timm/layers/create_norm_act.py
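When `get_norm_act_layer` receives a string, it normalizes the name before the registry lookup: underscores are stripped, the name is lowercased, and only the part before the first `-` is kept. A standalone restatement of that one line (the function name is illustrative):

```python
def normalize_norm_name(norm_layer: str) -> str:
    # Same normalization as get_norm_act_layer above: e.g.
    # 'BatchNorm_2d-custom' -> 'batchnorm2d', a valid _NORM_ACT_MAP key.
    return norm_layer.replace('_', '').lower().split('-')[0]
```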
""" Lambda Layer
Paper: `LambdaNetworks: Modeling Long-Range Interactions Without Attention`
- https://arxiv.org/abs/2102.08602
@misc{2102.08602,
Author = {Irwan Bello},
Title = {LambdaNetworks: Modeling Long-Range Interactions Without Attention},
Year = {2021},
}
Status:
This impl is a WIP. Code snippets in the paper were used as reference but
good chance some details are missing/wrong.
I've only implemented local lambda conv based pos embeddings.
For a PyTorch impl that includes other embedding options checkout
https://github.com/lucidrains/lambda-networks
Hacked together by / Copyright 2021 Ross Wightman
"""
import torch
from torch import nn
import torch.nn.functional as F
from .grid import ndgrid
from .helpers import to_2tuple, make_divisible
from .weight_init import trunc_normal_
def rel_pos_indices(size):
size = to_2tuple(size)
pos = torch.stack(ndgrid(torch.arange(size[0]), torch.arange(size[1]))).flatten(1)
rel_pos = pos[:, None, :] - pos[:, :, None]
rel_pos[0] += size[0] - 1
rel_pos[1] += size[1] - 1
return rel_pos # 2, H * W, H * W
class LambdaLayer(nn.Module):
"""Lambda Layer
Paper: `LambdaNetworks: Modeling Long-Range Interactions Without Attention`
- https://arxiv.org/abs/2102.08602
NOTE: intra-depth parameter 'u' is fixed at 1. It did not appear worth the complexity to add.
The internal dimensions of the lambda module are controlled via the interaction of several arguments.
* the output dimension of the module is specified by dim_out, which falls back to input dim if not set
* the value (v) dimension is set to dim_out // num_heads, the v projection determines the output dim
* the query (q) and key (k) dimension are determined by
* dim_head = (dim_out * attn_ratio // num_heads) if dim_head is None
* q = num_heads * dim_head, k = dim_head
* as seen above, attn_ratio determines the ratio of q and k relative to the output if dim_head not set
Args:
dim (int): input dimension to the module
dim_out (int): output dimension of the module, same as dim if not set
feat_size (Tuple[int, int]): size of input feature_map for relative pos variant H, W
stride (int): output stride of the module, avg pool used if stride == 2
num_heads (int): parallel attention heads.
dim_head (int): dimension of query and key heads, calculated from dim_out * attn_ratio // num_heads if not set
r (int): local lambda convolution radius. Use lambda conv if set, else relative pos if not. (default: 9)
qk_ratio (float): ratio of q and k dimensions to output dimension when dim_head not set. (default: 1.0)
qkv_bias (bool): add bias to q, k, and v projections
"""
def __init__(
self, dim, dim_out=None, feat_size=None, stride=1, num_heads=4, dim_head=16, r=9,
qk_ratio=1.0, qkv_bias=False):
super().__init__()
dim_out = dim_out or dim
        assert dim_out % num_heads == 0, 'dim_out should be divisible by num_heads'
self.dim_qk = dim_head or make_divisible(dim_out * qk_ratio, divisor=8) // num_heads
self.num_heads = num_heads
self.dim_v = dim_out // num_heads
self.qkv = nn.Conv2d(
dim,
num_heads * self.dim_qk + self.dim_qk + self.dim_v,
kernel_size=1, bias=qkv_bias)
self.norm_q = nn.BatchNorm2d(num_heads * self.dim_qk)
self.norm_v = nn.BatchNorm2d(self.dim_v)
if r is not None:
# local lambda convolution for pos
self.conv_lambda = nn.Conv3d(1, self.dim_qk, (r, r, 1), padding=(r // 2, r // 2, 0))
self.pos_emb = None
self.rel_pos_indices = None
else:
# relative pos embedding
assert feat_size is not None
feat_size = to_2tuple(feat_size)
rel_size = [2 * s - 1 for s in feat_size]
self.conv_lambda = None
self.pos_emb = nn.Parameter(torch.zeros(rel_size[0], rel_size[1], self.dim_qk))
self.register_buffer('rel_pos_indices', rel_pos_indices(feat_size), persistent=False)
self.pool = nn.AvgPool2d(2, 2) if stride == 2 else nn.Identity()
self.reset_parameters()
def reset_parameters(self):
trunc_normal_(self.qkv.weight, std=self.qkv.weight.shape[1] ** -0.5) # fan-in
if self.conv_lambda is not None:
trunc_normal_(self.conv_lambda.weight, std=self.dim_qk ** -0.5)
if self.pos_emb is not None:
trunc_normal_(self.pos_emb, std=.02)
def forward(self, x):
B, C, H, W = x.shape
M = H * W
qkv = self.qkv(x)
q, k, v = torch.split(qkv, [
self.num_heads * self.dim_qk, self.dim_qk, self.dim_v], dim=1)
q = self.norm_q(q).reshape(B, self.num_heads, self.dim_qk, M).transpose(-1, -2) # B, num_heads, M, K
v = self.norm_v(v).reshape(B, self.dim_v, M).transpose(-1, -2) # B, M, V
k = F.softmax(k.reshape(B, self.dim_qk, M), dim=-1) # B, K, M
content_lam = k @ v # B, K, V
content_out = q @ content_lam.unsqueeze(1) # B, num_heads, M, V
if self.pos_emb is None:
position_lam = self.conv_lambda(v.reshape(B, 1, H, W, self.dim_v)) # B, H, W, V, K
position_lam = position_lam.reshape(B, 1, self.dim_qk, H * W, self.dim_v).transpose(2, 3) # B, 1, M, K, V
else:
# FIXME relative pos embedding path not fully verified
pos_emb = self.pos_emb[self.rel_pos_indices[0], self.rel_pos_indices[1]].expand(B, -1, -1, -1)
position_lam = (pos_emb.transpose(-1, -2) @ v.unsqueeze(1)).unsqueeze(1) # B, 1, M, K, V
position_out = (q.unsqueeze(-2) @ position_lam).squeeze(-2) # B, num_heads, M, V
out = (content_out + position_out).transpose(-1, -2).reshape(B, C, H, W) # B, C (num_heads * V), H, W
out = self.pool(out)
return out
# Source: pytorch-image-models/timm/layers/lambda_layer.py
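The `rel_pos_indices` helper above builds, for every pair of positions in an `H x W` grid, the relative row/column offsets shifted to start at 0 (so each axis ranges over `[0, 2*size - 2]`). A pure-Python sketch of the same computation without torch (function name is illustrative):

```python
def rel_pos_indices_py(h, w):
    """Relative position indices for an h x w grid, row-major flattening.

    Returns (rel_r, rel_c), each an (h*w) x (h*w) nested list where
    rel_r[i][j] = row(j) - row(i) + (h - 1), likewise for columns.
    """
    rows = [i for i in range(h) for _ in range(w)]
    cols = [j for _ in range(h) for j in range(w)]
    m = h * w
    rel_r = [[rows[j] - rows[i] + (h - 1) for j in range(m)] for i in range(m)]
    rel_c = [[cols[j] - cols[i] + (w - 1) for j in range(m)] for i in range(m)]
    return rel_r, rel_c
```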
""" Relative position embedding modules and functions
Hacked together by / Copyright 2022 Ross Wightman
"""
import math
import os
from typing import Optional, Tuple
import torch
import torch.nn as nn
import torch.nn.functional as F
from .grid import ndgrid
from .interpolate import RegularGridInterpolator
from .mlp import Mlp
from .weight_init import trunc_normal_
_USE_SCIPY = int(os.environ.get('TIMM_USE_SCIPY_INTERP', 0)) > 0
def gen_relative_position_index(
q_size: Tuple[int, int],
k_size: Optional[Tuple[int, int]] = None,
class_token: bool = False,
) -> torch.Tensor:
# Adapted with significant modifications from Swin / BeiT codebases
# get pair-wise relative position index for each token inside the window
assert k_size is None, 'Different q & k sizes not currently supported' # FIXME
coords = torch.stack(ndgrid(torch.arange(q_size[0]), torch.arange(q_size[1]))).flatten(1) # 2, Wh, Ww
relative_coords = coords[:, :, None] - coords[:, None, :] # 2, Wh*Ww, Wh*Ww
relative_coords = relative_coords.permute(1, 2, 0) # Qh*Qw, Kh*Kw, 2
relative_coords[:, :, 0] += q_size[0] - 1 # shift to start from 0
relative_coords[:, :, 1] += q_size[1] - 1
relative_coords[:, :, 0] *= 2 * q_size[1] - 1
num_relative_distance = (2 * q_size[0] - 1) * (2 * q_size[1] - 1)
# else:
# # FIXME different q vs k sizes is a WIP, need to better offset the two grids?
# q_coords = torch.stack(
# ndgrid(
# torch.arange(q_size[0]),
# torch.arange(q_size[1])
# )
# ).flatten(1) # 2, Wh, Ww
# k_coords = torch.stack(
# ndgrid(
# torch.arange(k_size[0]),
# torch.arange(k_size[1])
# )
# ).flatten(1)
# relative_coords = q_coords[:, :, None] - k_coords[:, None, :] # 2, Wh*Ww, Wh*Ww
# relative_coords = relative_coords.permute(1, 2, 0) # Qh*Qw, Kh*Kw, 2
# relative_coords[:, :, 0] += max(q_size[0], k_size[0]) - 1 # shift to start from 0
# relative_coords[:, :, 1] += max(q_size[1], k_size[1]) - 1
# relative_coords[:, :, 0] *= k_size[1] + q_size[1] - 1
# relative_position_index = relative_coords.sum(-1) # Qh*Qw, Kh*Kw
# num_relative_distance = (q_size[0] + k_size[0] - 1) * (q_size[1] + k_size[1] - 1) + 3
relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww
if class_token:
        # handle cls-to-token, token-to-cls, and cls-to-cls entries as per BEiT for rel pos bias
# NOTE not intended or tested with MLP log-coords
relative_position_index = F.pad(relative_position_index, [1, 0, 1, 0])
relative_position_index[0, 0:] = num_relative_distance
relative_position_index[0:, 0] = num_relative_distance + 1
relative_position_index[0, 0] = num_relative_distance + 2
return relative_position_index.contiguous()
def resize_rel_pos_bias_table_simple(
rel_pos_bias,
new_window_size: Tuple[int, int],
new_bias_shape: Tuple[int, ...],
):
dst_size = (new_window_size[0] * 2 - 1, new_window_size[1] * 2 - 1)
if rel_pos_bias.ndim == 3:
# TF maxvit style (num_heads, H, W) bias shape, no extra tokens currently supported
_, dst_h, dst_w = new_bias_shape
num_attn_heads, src_h, src_w = rel_pos_bias.shape
assert dst_h == dst_size[0] and dst_w == dst_size[1]
if src_h != dst_h or src_w != dst_w:
rel_pos_bias = torch.nn.functional.interpolate(
rel_pos_bias.unsqueeze(0),
size=dst_size,
mode="bicubic",
align_corners=False,
).squeeze(0)
else:
assert rel_pos_bias.ndim == 2
# (num_pos, num_heads) (aka flat) bias shape
dst_num_pos, _ = new_bias_shape
src_num_pos, num_attn_heads = rel_pos_bias.shape
num_extra_tokens = dst_num_pos - (dst_size[0] * dst_size[1])
src_size = int((src_num_pos - num_extra_tokens) ** 0.5)
src_size = (src_size, src_size) # FIXME could support non-equal src if argument passed
if src_size[0] != dst_size[0] or src_size[1] != dst_size[1]:
if num_extra_tokens:
extra_tokens = rel_pos_bias[-num_extra_tokens:, :]
rel_pos_bias = rel_pos_bias[:-num_extra_tokens, :]
else:
extra_tokens = None
rel_pos_bias = torch.nn.functional.interpolate(
rel_pos_bias.transpose(1, 0).reshape((1, -1, src_size[0], src_size[1])),
size=dst_size,
mode="bicubic",
align_corners=False,
).view(-1, dst_num_pos - num_extra_tokens).transpose(0, 1)
if extra_tokens is not None:
rel_pos_bias = torch.cat((rel_pos_bias, extra_tokens), dim=0)
return rel_pos_bias
def resize_rel_pos_bias_table_levit(
position_bias_table,
new_size,
interpolation: str = 'bicubic',
antialias: bool = True,
):
"""
Resample relative position bias table suggested in LeVit
Adapted from: https://github.com/microsoft/Cream/blob/main/TinyViT/utils.py
"""
L1, nH1 = position_bias_table.size()
L2, nH2 = new_size
assert nH1 == nH2
if L1 != L2:
orig_dtype = position_bias_table.dtype
position_bias_table = position_bias_table.float()
# bicubic interpolate relative_position_bias_table if not match
S1 = int(L1 ** 0.5)
S2 = int(L2 ** 0.5)
relative_position_bias_table_resized = F.interpolate(
position_bias_table.permute(1, 0).view(1, nH1, S1, S1),
size=(S2, S2),
mode=interpolation,
antialias=antialias)
relative_position_bias_table_resized = \
relative_position_bias_table_resized.view(nH2, L2).permute(1, 0)
        relative_position_bias_table_resized = relative_position_bias_table_resized.to(orig_dtype)
return relative_position_bias_table_resized
else:
return position_bias_table
def resize_rel_pos_bias_table(
rel_pos_bias,
new_window_size: Tuple[int, int],
new_bias_shape: Tuple[int, ...],
):
""" Resize relative position bias table using more advanced interpolation.
Modified from code in Microsoft Unilm (https://github.com/microsoft/unilm) repo (BeiT, BeiT-v2, etc).
https://github.com/microsoft/unilm/blob/5255d52de86dad642810f5849dd357769346c1d7/beit/run_class_finetuning.py#L351
Args:
rel_pos_bias:
new_window_size:
new_bias_shape:
Returns:
"""
if _USE_SCIPY:
from scipy import interpolate
dst_size = (new_window_size[0] * 2 - 1, new_window_size[1] * 2 - 1)
if rel_pos_bias.ndim == 3:
# TF maxvit style (num_heads, H, W) bias shape, no extra tokens currently supported
num_extra_tokens = 0
_, dst_h, dst_w = new_bias_shape
assert dst_h == dst_size[0] and dst_w == dst_size[1]
num_attn_heads, src_h, src_w = rel_pos_bias.shape
src_size = (src_h, src_w)
has_flat_shape = False
else:
assert rel_pos_bias.ndim == 2
# (num_pos, num_heads) (aka flat) bias shape
dst_num_pos, _ = new_bias_shape
src_num_pos, num_attn_heads = rel_pos_bias.shape
num_extra_tokens = dst_num_pos - (dst_size[0] * dst_size[1])
src_size = int((src_num_pos - num_extra_tokens) ** 0.5)
src_size = (src_size, src_size)
has_flat_shape = True
if src_size[0] != dst_size[0] or src_size[1] != dst_size[1]:
# print("Interpolating position from %dx%d to %dx%d" % (src_size[0], src_size[1], dst_size[0], dst_size[1]))
if num_extra_tokens:
extra_tokens = rel_pos_bias[-num_extra_tokens:, :]
rel_pos_bias = rel_pos_bias[:-num_extra_tokens, :]
else:
extra_tokens = None
def geometric_progression(a, r, n):
return a * (1.0 - r ** n) / (1.0 - r)
def _calc(src, dst):
left, right = 1.01, 1.5
while right - left > 1e-6:
q = (left + right) / 2.0
gp = geometric_progression(1, q, src // 2)
if gp > dst // 2:
right = q
else:
left = q
dis = []
cur = 1
for i in range(src // 2):
dis.append(cur)
cur += q ** (i + 1)
r_ids = [-_ for _ in reversed(dis)]
return r_ids + [0] + dis
y = _calc(src_size[0], dst_size[0])
x = _calc(src_size[1], dst_size[1])
yx = [torch.tensor(y), torch.tensor(x)]
# print("Original positions = %s" % str(x))
ty = dst_size[0] // 2.0
tx = dst_size[1] // 2.0
dy = torch.arange(-ty, ty + 0.1, 1.0)
dx = torch.arange(-tx, tx + 0.1, 1.0)
dyx = ndgrid(dy, dx)
# print("Target positions = %s" % str(dx))
all_rel_pos_bias = []
for i in range(num_attn_heads):
if has_flat_shape:
z = rel_pos_bias[:, i].view(src_size[0], src_size[1]).float()
else:
z = rel_pos_bias[i, :, :].float()
if _USE_SCIPY:
# Original beit code uses scipy w/ cubic interpolation
f = interpolate.interp2d(x, y, z.numpy(), kind='cubic')
r = torch.Tensor(f(dx, dy)).contiguous().to(rel_pos_bias.device)
else:
# Without scipy dependency, I've found a reasonably simple impl
# that supports uneven spaced interpolation pts with 'linear' interp.
# Results are comparable to scipy for model accuracy in most cases.
f = RegularGridInterpolator(yx, z)
r = f(dyx).contiguous().to(rel_pos_bias.device)
if has_flat_shape:
r = r.view(-1, 1)
all_rel_pos_bias.append(r)
if has_flat_shape:
rel_pos_bias = torch.cat(all_rel_pos_bias, dim=-1)
else:
rel_pos_bias = torch.cat(all_rel_pos_bias, dim=0)
if extra_tokens is not None:
assert has_flat_shape
rel_pos_bias = torch.cat((rel_pos_bias, extra_tokens), dim=0)
return rel_pos_bias
class RelPosBias(nn.Module):
""" Relative Position Bias
Adapted from Swin-V1 relative position bias impl, modularized.
"""
def __init__(self, window_size, num_heads, prefix_tokens=0):
super().__init__()
assert prefix_tokens <= 1
self.window_size = window_size
self.window_area = window_size[0] * window_size[1]
self.bias_shape = (self.window_area + prefix_tokens,) * 2 + (num_heads,)
num_relative_distance = (2 * window_size[0] - 1) * (2 * window_size[1] - 1) + 3 * prefix_tokens
self.relative_position_bias_table = nn.Parameter(torch.zeros(num_relative_distance, num_heads))
self.register_buffer(
"relative_position_index",
gen_relative_position_index(self.window_size, class_token=prefix_tokens > 0).view(-1),
persistent=False,
)
self.init_weights()
def init_weights(self):
trunc_normal_(self.relative_position_bias_table, std=.02)
def get_bias(self) -> torch.Tensor:
relative_position_bias = self.relative_position_bias_table[self.relative_position_index]
# win_h * win_w, win_h * win_w, num_heads
relative_position_bias = relative_position_bias.view(self.bias_shape).permute(2, 0, 1)
return relative_position_bias.unsqueeze(0).contiguous()
def forward(self, attn, shared_rel_pos: Optional[torch.Tensor] = None):
return attn + self.get_bias()
def gen_relative_log_coords(
win_size: Tuple[int, int],
pretrained_win_size: Tuple[int, int] = (0, 0),
mode='swin',
):
assert mode in ('swin', 'cr')
# as per official swin-v2 impl, supporting timm specific 'cr' log coords as well
relative_coords_h = torch.arange(-(win_size[0] - 1), win_size[0]).to(torch.float32)
relative_coords_w = torch.arange(-(win_size[1] - 1), win_size[1]).to(torch.float32)
relative_coords_table = torch.stack(ndgrid(relative_coords_h, relative_coords_w))
relative_coords_table = relative_coords_table.permute(1, 2, 0).contiguous() # 2*Wh-1, 2*Ww-1, 2
if mode == 'swin':
if pretrained_win_size[0] > 0:
relative_coords_table[:, :, 0] /= (pretrained_win_size[0] - 1)
relative_coords_table[:, :, 1] /= (pretrained_win_size[1] - 1)
else:
relative_coords_table[:, :, 0] /= (win_size[0] - 1)
relative_coords_table[:, :, 1] /= (win_size[1] - 1)
relative_coords_table *= 8 # normalize to -8, 8
relative_coords_table = torch.sign(relative_coords_table) * torch.log2(
1.0 + relative_coords_table.abs()) / math.log2(8)
else:
# mode == 'cr'
relative_coords_table = torch.sign(relative_coords_table) * torch.log(
1.0 + relative_coords_table.abs())
return relative_coords_table
class RelPosMlp(nn.Module):
""" Log-Coordinate Relative Position MLP
Based on ideas presented in Swin-V2 paper (https://arxiv.org/abs/2111.09883)
This impl covers the 'swin' implementation as well as two timm specific modes ('cr', and 'rw')
"""
def __init__(
self,
window_size,
num_heads=8,
hidden_dim=128,
prefix_tokens=0,
mode='cr',
pretrained_window_size=(0, 0)
):
super().__init__()
self.window_size = window_size
self.window_area = self.window_size[0] * self.window_size[1]
self.prefix_tokens = prefix_tokens
self.num_heads = num_heads
self.bias_shape = (self.window_area,) * 2 + (num_heads,)
if mode == 'swin':
self.bias_act = nn.Sigmoid()
self.bias_gain = 16
mlp_bias = (True, False)
else:
self.bias_act = nn.Identity()
self.bias_gain = None
mlp_bias = True
self.mlp = Mlp(
2, # x, y
hidden_features=hidden_dim,
out_features=num_heads,
act_layer=nn.ReLU,
bias=mlp_bias,
drop=(0.125, 0.)
)
self.register_buffer(
"relative_position_index",
gen_relative_position_index(window_size).view(-1),
persistent=False)
# get relative_coords_table
self.register_buffer(
"rel_coords_log",
gen_relative_log_coords(window_size, pretrained_window_size, mode=mode),
persistent=False)
def get_bias(self) -> torch.Tensor:
relative_position_bias = self.mlp(self.rel_coords_log)
if self.relative_position_index is not None:
relative_position_bias = relative_position_bias.view(-1, self.num_heads)[self.relative_position_index]
relative_position_bias = relative_position_bias.view(self.bias_shape)
relative_position_bias = relative_position_bias.permute(2, 0, 1)
relative_position_bias = self.bias_act(relative_position_bias)
if self.bias_gain is not None:
relative_position_bias = self.bias_gain * relative_position_bias
if self.prefix_tokens:
relative_position_bias = F.pad(relative_position_bias, [self.prefix_tokens, 0, self.prefix_tokens, 0])
return relative_position_bias.unsqueeze(0).contiguous()
def forward(self, attn, shared_rel_pos: Optional[torch.Tensor] = None):
return attn + self.get_bias()
def generate_lookup_tensor(
length: int,
max_relative_position: Optional[int] = None,
):
"""Generate a one_hot lookup tensor to reindex embeddings along one dimension.
Args:
length: the length to reindex to.
max_relative_position: the maximum relative position to consider.
Relative position embeddings for distances above this threshold
are zeroed out.
Returns:
a lookup Tensor of size [length, length, vocab_size] that satisfies
ret[n,m,v] = 1{m - n + max_relative_position = v}.
"""
if max_relative_position is None:
max_relative_position = length - 1
    # Build the one-hot lookup over the relative-position vocabulary.
vocab_size = 2 * max_relative_position + 1
ret = torch.zeros(length, length, vocab_size)
for i in range(length):
for x in range(length):
v = x - i + max_relative_position
if abs(x - i) > max_relative_position:
continue
ret[i, x, v] = 1
return ret
def reindex_2d_einsum_lookup(
relative_position_tensor,
height: int,
width: int,
height_lookup: torch.Tensor,
width_lookup: torch.Tensor,
) -> torch.Tensor:
"""Reindex 2d relative position bias with 2 independent einsum lookups.
Adapted from:
https://github.com/google-research/maxvit/blob/2e06a7f1f70c76e64cd3dabe5cd1b8c1a23c9fb7/maxvit/models/attention_utils.py
Args:
relative_position_tensor: tensor of shape
[..., vocab_height, vocab_width, ...].
height: height to reindex to.
width: width to reindex to.
height_lookup: one-hot height lookup
width_lookup: one-hot width lookup
Returns:
reindexed_tensor: a Tensor of shape
[..., height * width, height * width, ...]
"""
reindexed_tensor = torch.einsum('nhw,ixh->nixw', relative_position_tensor, height_lookup)
reindexed_tensor = torch.einsum('nixw,jyw->nijxy', reindexed_tensor, width_lookup)
area = height * width
return reindexed_tensor.reshape(relative_position_tensor.shape[0], area, area)
class RelPosBiasTf(nn.Module):
""" Relative Position Bias Impl (Compatible with Tensorflow MaxViT models)
Adapted from:
https://github.com/google-research/maxvit/blob/2e06a7f1f70c76e64cd3dabe5cd1b8c1a23c9fb7/maxvit/models/attention_utils.py
"""
def __init__(self, window_size, num_heads, prefix_tokens=0):
super().__init__()
assert prefix_tokens <= 1
self.window_size = window_size
self.window_area = window_size[0] * window_size[1]
self.num_heads = num_heads
vocab_height = 2 * window_size[0] - 1
vocab_width = 2 * window_size[1] - 1
self.bias_shape = (self.num_heads, vocab_height, vocab_width)
self.relative_position_bias_table = nn.Parameter(torch.zeros(self.bias_shape))
self.register_buffer('height_lookup', generate_lookup_tensor(window_size[0]), persistent=False)
self.register_buffer('width_lookup', generate_lookup_tensor(window_size[1]), persistent=False)
self.init_weights()
def init_weights(self):
nn.init.normal_(self.relative_position_bias_table, std=.02)
def get_bias(self) -> torch.Tensor:
# FIXME change to not use one-hot/einsum?
return reindex_2d_einsum_lookup(
self.relative_position_bias_table,
self.window_size[0],
self.window_size[1],
self.height_lookup,
self.width_lookup
)
def forward(self, attn, shared_rel_pos: Optional[torch.Tensor] = None):
return attn + self.get_bias()
# Source: pytorch-image-models/timm/layers/pos_embed_rel.py
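`generate_lookup_tensor` above promises `ret[n, m, v] = 1{m - n + max_relative_position = v}`, with entries beyond the threshold zeroed. A pure-Python sketch of that contract using nested lists instead of a tensor (function name is illustrative):

```python
def lookup_tensor_py(length, max_relative_position=None):
    """One-hot lookup: ret[i][x][v] = 1 iff v == x - i + max_relative_position."""
    if max_relative_position is None:
        max_relative_position = length - 1
    vocab = 2 * max_relative_position + 1
    ret = [[[0] * vocab for _ in range(length)] for _ in range(length)]
    for i in range(length):
        for x in range(length):
            # positions farther than max_relative_position stay all-zero
            if abs(x - i) <= max_relative_position:
                ret[i][x][x - i + max_relative_position] = 1
    return ret
```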
""" Cross Entropy w/ smoothing or soft targets
Hacked together by / Copyright 2021 Ross Wightman
"""
import torch
import torch.nn as nn
import torch.nn.functional as F
class LabelSmoothingCrossEntropy(nn.Module):
""" NLL loss with label smoothing.
"""
def __init__(self, smoothing=0.1):
super(LabelSmoothingCrossEntropy, self).__init__()
assert smoothing < 1.0
self.smoothing = smoothing
self.confidence = 1. - smoothing
def forward(self, x: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
logprobs = F.log_softmax(x, dim=-1)
nll_loss = -logprobs.gather(dim=-1, index=target.unsqueeze(1))
nll_loss = nll_loss.squeeze(1)
smooth_loss = -logprobs.mean(dim=-1)
loss = self.confidence * nll_loss + self.smoothing * smooth_loss
return loss.mean()
class SoftTargetCrossEntropy(nn.Module):
def __init__(self):
super(SoftTargetCrossEntropy, self).__init__()
def forward(self, x: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
loss = torch.sum(-target * F.log_softmax(x, dim=-1), dim=-1)
return loss.mean()
# Source: pytorch-image-models/timm/loss/cross_entropy.py
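`LabelSmoothingCrossEntropy` above blends the target's NLL with the mean negative log-probability over all classes: `loss = (1 - smoothing) * nll + smoothing * smooth`. A pure-Python, single-sample sketch of the same formula (name and scalar interface are illustrative, not the timm API):

```python
import math

def label_smoothing_nll(logits, target, smoothing=0.1):
    """Label-smoothed NLL for one sample; reduces to plain NLL at smoothing=0."""
    # log-softmax with max-subtraction for numerical stability
    m = max(logits)
    z = math.log(sum(math.exp(l - m) for l in logits)) + m
    logprobs = [l - z for l in logits]
    nll = -logprobs[target]
    smooth = -sum(logprobs) / len(logprobs)
    confidence = 1.0 - smoothing
    return confidence * nll + smoothing * smooth
```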
"""Pytorch Densenet implementation w/ tweaks
This file is a copy of https://github.com/pytorch/vision 'densenet.py' (BSD-3-Clause) with
fixed kwargs passthrough and addition of dynamic global avg/max pool.
"""
import re
from collections import OrderedDict
from typing import Any, Dict, Optional, Tuple, Union
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.jit.annotations import List
from timm.data import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD
from timm.layers import BatchNormAct2d, get_norm_act_layer, BlurPool2d, create_classifier
from ._builder import build_model_with_cfg
from ._manipulate import MATCH_PREV_GROUP, checkpoint
from ._registry import register_model, generate_default_cfgs, register_model_deprecations
__all__ = ['DenseNet']
class DenseLayer(nn.Module):
"""Dense layer for DenseNet.
Implements the bottleneck layer with 1x1 and 3x3 convolutions.
"""
def __init__(
self,
num_input_features: int,
growth_rate: int,
bn_size: int,
norm_layer: type = BatchNormAct2d,
drop_rate: float = 0.,
grad_checkpointing: bool = False,
) -> None:
"""Initialize DenseLayer.
Args:
num_input_features: Number of input features.
growth_rate: Growth rate (k) of the layer.
bn_size: Bottleneck size multiplier.
norm_layer: Normalization layer class.
drop_rate: Dropout rate.
grad_checkpointing: Use gradient checkpointing.
"""
super(DenseLayer, self).__init__()
        self.add_module('norm1', norm_layer(num_input_features))
        self.add_module('conv1', nn.Conv2d(
            num_input_features, bn_size * growth_rate, kernel_size=1, stride=1, bias=False))
        self.add_module('norm2', norm_layer(bn_size * growth_rate))
        self.add_module('conv2', nn.Conv2d(
            bn_size * growth_rate, growth_rate, kernel_size=3, stride=1, padding=1, bias=False))
self.drop_rate = float(drop_rate)
self.grad_checkpointing = grad_checkpointing
def bottleneck_fn(self, xs: List[torch.Tensor]) -> torch.Tensor:
"""Bottleneck function for concatenated features."""
concated_features = torch.cat(xs, 1)
bottleneck_output = self.conv1(self.norm1(concated_features)) # noqa: T484
return bottleneck_output
# todo: rewrite when torchscript supports any
def any_requires_grad(self, x: List[torch.Tensor]) -> bool:
"""Check if any tensor in list requires gradient."""
for tensor in x:
if tensor.requires_grad:
return True
return False
@torch.jit.unused # noqa: T484
def call_checkpoint_bottleneck(self, x: List[torch.Tensor]) -> torch.Tensor:
"""Call bottleneck function with gradient checkpointing."""
def closure(*xs):
return self.bottleneck_fn(xs)
return checkpoint(closure, *x)
@torch.jit._overload_method # noqa: F811
def forward(self, x):
# type: (List[torch.Tensor]) -> (torch.Tensor)
pass
@torch.jit._overload_method # noqa: F811
def forward(self, x):
# type: (torch.Tensor) -> (torch.Tensor)
pass
# torchscript does not yet support *args, so we overload method
# allowing it to take either a List[Tensor] or single Tensor
def forward(self, x: Union[torch.Tensor, List[torch.Tensor]]) -> torch.Tensor: # noqa: F811
"""Forward pass.
Args:
x: Input features (single tensor or list of tensors).
Returns:
New features to be concatenated.
"""
if isinstance(x, torch.Tensor):
prev_features = [x]
else:
prev_features = x
if self.grad_checkpointing and self.any_requires_grad(prev_features):
if torch.jit.is_scripting():
raise Exception("Memory Efficient not supported in JIT")
bottleneck_output = self.call_checkpoint_bottleneck(prev_features)
else:
bottleneck_output = self.bottleneck_fn(prev_features)
new_features = self.conv2(self.norm2(bottleneck_output))
if self.drop_rate > 0:
new_features = F.dropout(new_features, p=self.drop_rate, training=self.training)
return new_features
class DenseBlock(nn.ModuleDict):
"""DenseNet Block.
Contains multiple dense layers with concatenated features.
"""
_version = 2
def __init__(
self,
num_layers: int,
num_input_features: int,
bn_size: int,
growth_rate: int,
norm_layer: type = BatchNormAct2d,
drop_rate: float = 0.,
grad_checkpointing: bool = False,
) -> None:
"""Initialize DenseBlock.
Args:
num_layers: Number of layers in the block.
num_input_features: Number of input features.
bn_size: Bottleneck size multiplier.
growth_rate: Growth rate (k) for each layer.
norm_layer: Normalization layer class.
drop_rate: Dropout rate.
grad_checkpointing: Use gradient checkpointing.
"""
super(DenseBlock, self).__init__()
for i in range(num_layers):
layer = DenseLayer(
num_input_features + i * growth_rate,
growth_rate=growth_rate,
bn_size=bn_size,
norm_layer=norm_layer,
drop_rate=drop_rate,
grad_checkpointing=grad_checkpointing,
)
self.add_module('denselayer%d' % (i + 1), layer)
def forward(self, init_features: torch.Tensor) -> torch.Tensor:
"""Forward pass through all layers in the block.
Args:
init_features: Initial features from previous layer.
Returns:
Concatenated features from all layers.
"""
features = [init_features]
for name, layer in self.items():
new_features = layer(features)
features.append(new_features)
return torch.cat(features, 1)
class DenseTransition(nn.Sequential):
"""Transition layer between DenseNet blocks.
Reduces feature dimensions and spatial resolution.
"""
def __init__(
self,
num_input_features: int,
num_output_features: int,
norm_layer: type = BatchNormAct2d,
aa_layer: Optional[type] = None,
) -> None:
"""Initialize DenseTransition.
Args:
num_input_features: Number of input features.
num_output_features: Number of output features.
norm_layer: Normalization layer class.
aa_layer: Anti-aliasing layer class.
"""
super(DenseTransition, self).__init__()
self.add_module('norm', norm_layer(num_input_features))
self.add_module('conv', nn.Conv2d(
num_input_features, num_output_features, kernel_size=1, stride=1, bias=False))
if aa_layer is not None:
self.add_module('pool', aa_layer(num_output_features, stride=2))
else:
self.add_module('pool', nn.AvgPool2d(kernel_size=2, stride=2))
class DenseNet(nn.Module):
"""Densenet-BC model class.
Based on `"Densely Connected Convolutional Networks" <https://arxiv.org/pdf/1608.06993.pdf>`_
Args:
growth_rate: How many filters to add each layer (`k` in paper).
block_config: How many layers in each pooling block.
        bn_size: Multiplicative factor for number of bottleneck layers
            (i.e. bn_size * k features in the bottleneck layer).
drop_rate: Dropout rate before classifier layer.
proj_drop_rate: Dropout rate after each dense layer.
num_classes: Number of classification classes.
memory_efficient: If True, uses checkpointing. Much more memory efficient,
but slower. Default: *False*. See `"paper" <https://arxiv.org/pdf/1707.06990.pdf>`_.
"""
def __init__(
self,
growth_rate: int = 32,
block_config: Tuple[int, ...] = (6, 12, 24, 16),
num_classes: int = 1000,
in_chans: int = 3,
global_pool: str = 'avg',
bn_size: int = 4,
stem_type: str = '',
act_layer: str = 'relu',
norm_layer: str = 'batchnorm2d',
aa_layer: Optional[type] = None,
drop_rate: float = 0.,
proj_drop_rate: float = 0.,
memory_efficient: bool = False,
aa_stem_only: bool = True,
) -> None:
"""Initialize DenseNet.
Args:
growth_rate: How many filters to add each layer (k in paper).
block_config: How many layers in each pooling block.
num_classes: Number of classification classes.
in_chans: Number of input channels.
global_pool: Global pooling type.
            bn_size: Multiplicative factor for number of bottleneck layers.
stem_type: Type of stem ('', 'deep', 'deep_tiered').
act_layer: Activation layer.
norm_layer: Normalization layer.
aa_layer: Anti-aliasing layer.
drop_rate: Dropout rate before classifier layer.
proj_drop_rate: Dropout rate after each dense layer.
memory_efficient: If True, uses checkpointing for memory efficiency.
aa_stem_only: Apply anti-aliasing only to stem.
"""
        super(DenseNet, self).__init__()
        self.num_classes = num_classes
norm_layer = get_norm_act_layer(norm_layer, act_layer=act_layer)
# Stem
deep_stem = 'deep' in stem_type # 3x3 deep stem
num_init_features = growth_rate * 2
if aa_layer is None:
stem_pool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
else:
stem_pool = nn.Sequential(*[
nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
aa_layer(channels=num_init_features, stride=2)])
if deep_stem:
stem_chs_1 = stem_chs_2 = growth_rate
if 'tiered' in stem_type:
stem_chs_1 = 3 * (growth_rate // 4)
stem_chs_2 = num_init_features if 'narrow' in stem_type else 6 * (growth_rate // 4)
self.features = nn.Sequential(OrderedDict([
('conv0', nn.Conv2d(in_chans, stem_chs_1, 3, stride=2, padding=1, bias=False)),
('norm0', norm_layer(stem_chs_1)),
('conv1', nn.Conv2d(stem_chs_1, stem_chs_2, 3, stride=1, padding=1, bias=False)),
('norm1', norm_layer(stem_chs_2)),
('conv2', nn.Conv2d(stem_chs_2, num_init_features, 3, stride=1, padding=1, bias=False)),
('norm2', norm_layer(num_init_features)),
('pool0', stem_pool),
]))
else:
self.features = nn.Sequential(OrderedDict([
('conv0', nn.Conv2d(in_chans, num_init_features, kernel_size=7, stride=2, padding=3, bias=False)),
('norm0', norm_layer(num_init_features)),
('pool0', stem_pool),
]))
self.feature_info = [
dict(num_chs=num_init_features, reduction=2, module=f'features.norm{2 if deep_stem else 0}')]
current_stride = 4
# DenseBlocks
num_features = num_init_features
for i, num_layers in enumerate(block_config):
block = DenseBlock(
num_layers=num_layers,
num_input_features=num_features,
bn_size=bn_size,
growth_rate=growth_rate,
norm_layer=norm_layer,
drop_rate=proj_drop_rate,
grad_checkpointing=memory_efficient,
)
module_name = f'denseblock{(i + 1)}'
self.features.add_module(module_name, block)
num_features = num_features + num_layers * growth_rate
transition_aa_layer = None if aa_stem_only else aa_layer
if i != len(block_config) - 1:
self.feature_info += [
dict(num_chs=num_features, reduction=current_stride, module='features.' + module_name)]
current_stride *= 2
trans = DenseTransition(
num_input_features=num_features,
num_output_features=num_features // 2,
norm_layer=norm_layer,
aa_layer=transition_aa_layer,
)
self.features.add_module(f'transition{i + 1}', trans)
num_features = num_features // 2
# Final batch norm
self.features.add_module('norm5', norm_layer(num_features))
self.feature_info += [dict(num_chs=num_features, reduction=current_stride, module='features.norm5')]
self.num_features = self.head_hidden_size = num_features
# Linear layer
global_pool, classifier = create_classifier(
self.num_features,
self.num_classes,
pool_type=global_pool,
)
self.global_pool = global_pool
self.head_drop = nn.Dropout(drop_rate)
self.classifier = classifier
# Official init from torch repo.
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.kaiming_normal_(m.weight)
elif isinstance(m, nn.BatchNorm2d):
nn.init.constant_(m.weight, 1)
nn.init.constant_(m.bias, 0)
elif isinstance(m, nn.Linear):
nn.init.constant_(m.bias, 0)
@torch.jit.ignore
def group_matcher(self, coarse: bool = False) -> Dict[str, Any]:
"""Group parameters for optimization."""
matcher = dict(
stem=r'^features\.conv[012]|features\.norm[012]|features\.pool[012]',
blocks=r'^features\.(?:denseblock|transition)(\d+)' if coarse else [
(r'^features\.denseblock(\d+)\.denselayer(\d+)', None),
(r'^features\.transition(\d+)', MATCH_PREV_GROUP) # FIXME combine with previous denselayer
]
)
return matcher
@torch.jit.ignore
def set_grad_checkpointing(self, enable: bool = True) -> None:
"""Enable or disable gradient checkpointing."""
for b in self.features.modules():
if isinstance(b, DenseLayer):
b.grad_checkpointing = enable
@torch.jit.ignore
def get_classifier(self) -> nn.Module:
"""Get the classifier head."""
return self.classifier
def reset_classifier(self, num_classes: int, global_pool: str = 'avg') -> None:
"""Reset the classifier head.
Args:
num_classes: Number of classes for new classifier.
global_pool: Global pooling type.
"""
self.num_classes = num_classes
self.global_pool, self.classifier = create_classifier(
self.num_features, self.num_classes, pool_type=global_pool)
def forward_features(self, x: torch.Tensor) -> torch.Tensor:
"""Forward pass through feature extraction layers."""
return self.features(x)
def forward_head(self, x: torch.Tensor, pre_logits: bool = False) -> torch.Tensor:
"""Forward pass through classifier head.
Args:
x: Feature tensor.
pre_logits: Return features before final classifier.
Returns:
Output tensor.
"""
x = self.global_pool(x)
x = self.head_drop(x)
return x if pre_logits else self.classifier(x)
def forward(self, x: torch.Tensor) -> torch.Tensor:
"""Forward pass.
Args:
x: Input tensor.
Returns:
Output logits.
"""
x = self.forward_features(x)
x = self.forward_head(x)
return x
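Tracing the block/transition loop in `DenseNet.__init__` end to end gives the classifier input width: each DenseBlock adds `num_layers * growth_rate` channels, and every non-final transition halves them. A hedged standalone sketch of that arithmetic (helper name ours):

```python
def densenet_final_width(growth_rate: int = 32, block_config=(6, 12, 24, 16)) -> int:
    num_features = 2 * growth_rate  # num_init_features produced by the stem
    for i, num_layers in enumerate(block_config):
        num_features += num_layers * growth_rate      # DenseBlock growth
        if i != len(block_config) - 1:
            num_features //= 2                        # DenseTransition halves channels
    return num_features

print(densenet_final_width())  # 1024, the densenet121 classifier in_features
```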
def _filter_torchvision_pretrained(state_dict: dict) -> Dict[str, torch.Tensor]:
"""Filter torchvision pretrained state dict for compatibility.
Args:
state_dict: State dictionary from torchvision checkpoint.
Returns:
Filtered state dictionary.
"""
pattern = re.compile(
r'^(.*denselayer\d+\.(?:norm|relu|conv))\.((?:[12])\.(?:weight|bias|running_mean|running_var))$')
for key in list(state_dict.keys()):
res = pattern.match(key)
if res:
new_key = res.group(1) + res.group(2)
state_dict[new_key] = state_dict[key]
del state_dict[key]
return state_dict
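To illustrate the remap above: old torchvision checkpoints stored nested norm/conv indices (e.g. `norm.1.weight`) that this filter flattens into timm's layout. A quick standalone check of the regex, using an example key we assume is representative of those checkpoints:

```python
import re

pattern = re.compile(
    r'^(.*denselayer\d+\.(?:norm|relu|conv))\.((?:[12])\.(?:weight|bias|running_mean|running_var))$')

old_key = 'features.denseblock1.denselayer1.norm.1.weight'
m = pattern.match(old_key)
print(m.group(1) + m.group(2))  # features.denseblock1.denselayer1.norm1.weight
```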
def _create_densenet(
variant: str,
growth_rate: int,
block_config: Tuple[int, ...],
pretrained: bool,
**kwargs,
) -> DenseNet:
"""Create a DenseNet model.
Args:
variant: Model variant name.
growth_rate: Growth rate parameter.
block_config: Block configuration.
pretrained: Load pretrained weights.
**kwargs: Additional model arguments.
Returns:
DenseNet model instance.
"""
kwargs['growth_rate'] = growth_rate
kwargs['block_config'] = block_config
return build_model_with_cfg(
DenseNet,
variant,
pretrained,
feature_cfg=dict(flatten_sequential=True),
pretrained_filter_fn=_filter_torchvision_pretrained,
**kwargs,
)
def _cfg(url: str = '', **kwargs) -> Dict[str, Any]:
"""Create default configuration for DenseNet models."""
return {
'url': url, 'num_classes': 1000, 'input_size': (3, 224, 224), 'pool_size': (7, 7),
'crop_pct': 0.875, 'interpolation': 'bicubic',
'mean': IMAGENET_DEFAULT_MEAN, 'std': IMAGENET_DEFAULT_STD,
'first_conv': 'features.conv0', 'classifier': 'classifier', **kwargs,
}
default_cfgs = generate_default_cfgs({
'densenet121.ra_in1k': _cfg(
hf_hub_id='timm/',
test_input_size=(3, 288, 288), test_crop_pct=0.95),
'densenetblur121d.ra_in1k': _cfg(
hf_hub_id='timm/',
test_input_size=(3, 288, 288), test_crop_pct=0.95),
'densenet264d.untrained': _cfg(),
'densenet121.tv_in1k': _cfg(hf_hub_id='timm/'),
'densenet169.tv_in1k': _cfg(hf_hub_id='timm/'),
'densenet201.tv_in1k': _cfg(hf_hub_id='timm/'),
'densenet161.tv_in1k': _cfg(hf_hub_id='timm/'),
})
@register_model
def densenet121(pretrained=False, **kwargs) -> DenseNet:
r"""Densenet-121 model from
`"Densely Connected Convolutional Networks" <https://arxiv.org/pdf/1608.06993.pdf>`
"""
model_args = dict(growth_rate=32, block_config=(6, 12, 24, 16))
model = _create_densenet('densenet121', pretrained=pretrained, **dict(model_args, **kwargs))
return model
@register_model
def densenetblur121d(pretrained=False, **kwargs) -> DenseNet:
r"""Densenet-121 w/ blur-pooling & 3-layer 3x3 stem
`"Densely Connected Convolutional Networks" <https://arxiv.org/pdf/1608.06993.pdf>`
"""
model_args = dict(growth_rate=32, block_config=(6, 12, 24, 16), stem_type='deep', aa_layer=BlurPool2d)
model = _create_densenet('densenetblur121d', pretrained=pretrained, **dict(model_args, **kwargs))
return model
@register_model
def densenet169(pretrained=False, **kwargs) -> DenseNet:
r"""Densenet-169 model from
`"Densely Connected Convolutional Networks" <https://arxiv.org/pdf/1608.06993.pdf>`
"""
model_args = dict(growth_rate=32, block_config=(6, 12, 32, 32))
model = _create_densenet('densenet169', pretrained=pretrained, **dict(model_args, **kwargs))
return model
@register_model
def densenet201(pretrained=False, **kwargs) -> DenseNet:
r"""Densenet-201 model from
`"Densely Connected Convolutional Networks" <https://arxiv.org/pdf/1608.06993.pdf>`
"""
model_args = dict(growth_rate=32, block_config=(6, 12, 48, 32))
model = _create_densenet('densenet201', pretrained=pretrained, **dict(model_args, **kwargs))
return model
@register_model
def densenet161(pretrained=False, **kwargs) -> DenseNet:
r"""Densenet-161 model from
`"Densely Connected Convolutional Networks" <https://arxiv.org/pdf/1608.06993.pdf>`
"""
model_args = dict(growth_rate=48, block_config=(6, 12, 36, 24))
model = _create_densenet('densenet161', pretrained=pretrained, **dict(model_args, **kwargs))
return model
@register_model
def densenet264d(pretrained=False, **kwargs) -> DenseNet:
r"""Densenet-264 model from
`"Densely Connected Convolutional Networks" <https://arxiv.org/pdf/1608.06993.pdf>`
"""
model_args = dict(growth_rate=48, block_config=(6, 12, 64, 48), stem_type='deep')
model = _create_densenet('densenet264d', pretrained=pretrained, **dict(model_args, **kwargs))
return model
register_model_deprecations(__name__, {
'tv_densenet121': 'densenet121.tv_in1k',
})
""" Global Context ViT
From scratch implementation of GCViT in the style of timm swin_transformer_v2_cr.py
Global Context Vision Transformers - https://arxiv.org/abs/2206.09959
@article{hatamizadeh2022global,
title={Global Context Vision Transformers},
author={Hatamizadeh, Ali and Yin, Hongxu and Kautz, Jan and Molchanov, Pavlo},
journal={arXiv preprint arXiv:2206.09959},
year={2022}
}
Free of any code related to NVIDIA GCVit impl at https://github.com/NVlabs/GCVit.
The license for this code release is Apache 2.0 with no commercial restrictions.
However, weight files adapted from NVIDIA GCVit impl ARE under a non-commercial share-alike license
(https://creativecommons.org/licenses/by-nc-sa/4.0/) until I have a chance to train new ones...
Hacked together by / Copyright 2022, Ross Wightman
"""
import math
from functools import partial
from typing import Callable, List, Optional, Tuple, Union
import torch
import torch.nn as nn
from timm.data import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD
from timm.layers import DropPath, to_2tuple, to_ntuple, Mlp, ClassifierHead, LayerNorm2d, \
get_attn, get_act_layer, get_norm_layer, RelPosBias, _assert
from ._builder import build_model_with_cfg
from ._features import feature_take_indices
from ._features_fx import register_notrace_function
from ._manipulate import named_apply, checkpoint
from ._registry import register_model, generate_default_cfgs
__all__ = ['GlobalContextVit']
class MbConvBlock(nn.Module):
""" A depthwise separable / fused mbconv style residual block with SE, `no norm.
"""
def __init__(
self,
in_chs,
out_chs=None,
expand_ratio=1.0,
attn_layer='se',
bias=False,
act_layer=nn.GELU,
):
super().__init__()
attn_kwargs = dict(act_layer=act_layer)
        if isinstance(attn_layer, str) and attn_layer in ('se', 'eca'):
attn_kwargs['rd_ratio'] = 0.25
attn_kwargs['bias'] = False
attn_layer = get_attn(attn_layer)
out_chs = out_chs or in_chs
mid_chs = int(expand_ratio * in_chs)
self.conv_dw = nn.Conv2d(in_chs, mid_chs, 3, 1, 1, groups=in_chs, bias=bias)
self.act = act_layer()
self.se = attn_layer(mid_chs, **attn_kwargs)
self.conv_pw = nn.Conv2d(mid_chs, out_chs, 1, 1, 0, bias=bias)
def forward(self, x):
shortcut = x
x = self.conv_dw(x)
x = self.act(x)
x = self.se(x)
x = self.conv_pw(x)
x = x + shortcut
return x
class Downsample2d(nn.Module):
def __init__(
self,
dim,
dim_out=None,
reduction='conv',
act_layer=nn.GELU,
norm_layer=LayerNorm2d, # NOTE in NCHW
):
super().__init__()
dim_out = dim_out or dim
self.norm1 = norm_layer(dim) if norm_layer is not None else nn.Identity()
self.conv_block = MbConvBlock(dim, act_layer=act_layer)
assert reduction in ('conv', 'max', 'avg')
if reduction == 'conv':
self.reduction = nn.Conv2d(dim, dim_out, 3, 2, 1, bias=False)
elif reduction == 'max':
assert dim == dim_out
self.reduction = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
else:
assert dim == dim_out
self.reduction = nn.AvgPool2d(kernel_size=2)
self.norm2 = norm_layer(dim_out) if norm_layer is not None else nn.Identity()
def forward(self, x):
x = self.norm1(x)
x = self.conv_block(x)
x = self.reduction(x)
x = self.norm2(x)
return x
class FeatureBlock(nn.Module):
def __init__(
self,
dim,
levels=0,
reduction='max',
act_layer=nn.GELU,
):
super().__init__()
reductions = levels
levels = max(1, levels)
if reduction == 'avg':
pool_fn = partial(nn.AvgPool2d, kernel_size=2)
else:
pool_fn = partial(nn.MaxPool2d, kernel_size=3, stride=2, padding=1)
self.blocks = nn.Sequential()
for i in range(levels):
self.blocks.add_module(f'conv{i+1}', MbConvBlock(dim, act_layer=act_layer))
if reductions:
self.blocks.add_module(f'pool{i+1}', pool_fn())
reductions -= 1
def forward(self, x):
return self.blocks(x)
class Stem(nn.Module):
def __init__(
self,
in_chs: int = 3,
out_chs: int = 96,
act_layer: Callable = nn.GELU,
norm_layer: Callable = LayerNorm2d, # NOTE stem in NCHW
):
super().__init__()
self.conv1 = nn.Conv2d(in_chs, out_chs, kernel_size=3, stride=2, padding=1)
self.down = Downsample2d(out_chs, act_layer=act_layer, norm_layer=norm_layer)
def forward(self, x):
x = self.conv1(x)
x = self.down(x)
return x
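The stem above applies a stride-2 conv followed by a stride-2 `Downsample2d`, so spatial resolution drops by 4x before the first stage (matching the `feat_size = tuple(d // 4 ...)` computation later in the model). A trivial sketch of the resulting feature size (helper name ours):

```python
def stem_feat_size(img_size):
    # conv1 (stride 2) then Downsample2d (stride 2) -> overall reduction of 4
    return (img_size[0] // 4, img_size[1] // 4)

print(stem_feat_size((224, 224)))  # (56, 56)
```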
class WindowAttentionGlobal(nn.Module):
def __init__(
self,
dim: int,
num_heads: int,
window_size: Tuple[int, int],
use_global: bool = True,
qkv_bias: bool = True,
attn_drop: float = 0.,
proj_drop: float = 0.,
):
super().__init__()
window_size = to_2tuple(window_size)
self.window_size = window_size
self.num_heads = num_heads
self.head_dim = dim // num_heads
self.scale = self.head_dim ** -0.5
self.use_global = use_global
self.rel_pos = RelPosBias(window_size=window_size, num_heads=num_heads)
if self.use_global:
self.qkv = nn.Linear(dim, dim * 2, bias=qkv_bias)
else:
self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
self.attn_drop = nn.Dropout(attn_drop)
self.proj = nn.Linear(dim, dim)
self.proj_drop = nn.Dropout(proj_drop)
def forward(self, x, q_global: Optional[torch.Tensor] = None):
B, N, C = x.shape
if self.use_global and q_global is not None:
            _assert(x.shape[-1] == q_global.shape[-1], 'x and q_global channel dims should be equal')
kv = self.qkv(x)
kv = kv.reshape(B, N, 2, self.num_heads, self.head_dim).permute(2, 0, 3, 1, 4)
k, v = kv.unbind(0)
q = q_global.repeat(B // q_global.shape[0], 1, 1, 1)
q = q.reshape(B, N, self.num_heads, self.head_dim).permute(0, 2, 1, 3)
else:
qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim).permute(2, 0, 3, 1, 4)
q, k, v = qkv.unbind(0)
q = q * self.scale
attn = q @ k.transpose(-2, -1).contiguous() # NOTE contiguous() fixes an odd jit bug in PyTorch 2.0
attn = self.rel_pos(attn)
attn = attn.softmax(dim=-1)
attn = self.attn_drop(attn)
x = (attn @ v).transpose(1, 2).reshape(B, N, C)
x = self.proj(x)
x = self.proj_drop(x)
return x
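Note how `use_global` changes the projection width above: when a shared global query is supplied externally, the linear layer only needs to produce k and v, so it emits `2 * dim` instead of `3 * dim` features. A one-line sketch of that sizing rule (name ours):

```python
def qkv_proj_width(dim: int, use_global: bool) -> int:
    # global query is injected from the stage, so only k and v are projected
    return dim * (2 if use_global else 3)

print(qkv_proj_width(96, True), qkv_proj_width(96, False))  # 192 288
```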
def window_partition(x, window_size: Tuple[int, int]):
B, H, W, C = x.shape
x = x.view(B, H // window_size[0], window_size[0], W // window_size[1], window_size[1], C)
windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size[0], window_size[1], C)
return windows
@register_notrace_function # reason: int argument is a Proxy
def window_reverse(windows, window_size: Tuple[int, int], img_size: Tuple[int, int]):
H, W = img_size
C = windows.shape[-1]
x = windows.view(-1, H // window_size[0], W // window_size[1], window_size[0], window_size[1], C)
x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, H, W, C)
return x
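`window_partition` reshapes an NHWC map into `B * (H // wh) * (W // ww)` independent windows, and `window_reverse` inverts it exactly when H and W tile evenly by the window size. The window-count arithmetic, as a standalone sketch (helper name ours):

```python
def window_count(batch: int, img_size, window_size) -> int:
    H, W = img_size
    wh, ww = window_size
    assert H % wh == 0 and W % ww == 0, 'feature map must tile evenly into windows'
    return batch * (H // wh) * (W // ww)

print(window_count(2, (56, 56), (7, 7)))  # 128 windows of shape (7, 7, C)
```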
class LayerScale(nn.Module):
def __init__(self, dim, init_values=1e-5, inplace=False):
super().__init__()
self.inplace = inplace
self.gamma = nn.Parameter(init_values * torch.ones(dim))
def forward(self, x):
return x.mul_(self.gamma) if self.inplace else x * self.gamma
class GlobalContextVitBlock(nn.Module):
def __init__(
self,
dim: int,
feat_size: Tuple[int, int],
num_heads: int,
window_size: int = 7,
mlp_ratio: float = 4.,
use_global: bool = True,
qkv_bias: bool = True,
layer_scale: Optional[float] = None,
proj_drop: float = 0.,
attn_drop: float = 0.,
drop_path: float = 0.,
attn_layer: Callable = WindowAttentionGlobal,
act_layer: Callable = nn.GELU,
norm_layer: Callable = nn.LayerNorm,
):
super().__init__()
feat_size = to_2tuple(feat_size)
window_size = to_2tuple(window_size)
self.window_size = window_size
self.num_windows = int((feat_size[0] // window_size[0]) * (feat_size[1] // window_size[1]))
self.norm1 = norm_layer(dim)
self.attn = attn_layer(
dim,
num_heads=num_heads,
window_size=window_size,
use_global=use_global,
qkv_bias=qkv_bias,
attn_drop=attn_drop,
proj_drop=proj_drop,
)
self.ls1 = LayerScale(dim, layer_scale) if layer_scale is not None else nn.Identity()
self.drop_path1 = DropPath(drop_path) if drop_path > 0. else nn.Identity()
self.norm2 = norm_layer(dim)
self.mlp = Mlp(in_features=dim, hidden_features=int(dim * mlp_ratio), act_layer=act_layer, drop=proj_drop)
self.ls2 = LayerScale(dim, layer_scale) if layer_scale is not None else nn.Identity()
self.drop_path2 = DropPath(drop_path) if drop_path > 0. else nn.Identity()
def _window_attn(self, x, q_global: Optional[torch.Tensor] = None):
B, H, W, C = x.shape
x_win = window_partition(x, self.window_size)
x_win = x_win.view(-1, self.window_size[0] * self.window_size[1], C)
attn_win = self.attn(x_win, q_global)
x = window_reverse(attn_win, self.window_size, (H, W))
return x
def forward(self, x, q_global: Optional[torch.Tensor] = None):
x = x + self.drop_path1(self.ls1(self._window_attn(self.norm1(x), q_global)))
x = x + self.drop_path2(self.ls2(self.mlp(self.norm2(x))))
return x
class GlobalContextVitStage(nn.Module):
def __init__(
self,
dim,
depth: int,
num_heads: int,
feat_size: Tuple[int, int],
window_size: Tuple[int, int],
downsample: bool = True,
global_norm: bool = False,
stage_norm: bool = False,
mlp_ratio: float = 4.,
qkv_bias: bool = True,
layer_scale: Optional[float] = None,
proj_drop: float = 0.,
attn_drop: float = 0.,
drop_path: Union[List[float], float] = 0.0,
act_layer: Callable = nn.GELU,
norm_layer: Callable = nn.LayerNorm,
norm_layer_cl: Callable = LayerNorm2d,
):
super().__init__()
if downsample:
self.downsample = Downsample2d(
dim=dim,
dim_out=dim * 2,
norm_layer=norm_layer,
)
dim = dim * 2
feat_size = (feat_size[0] // 2, feat_size[1] // 2)
else:
self.downsample = nn.Identity()
self.feat_size = feat_size
window_size = to_2tuple(window_size)
feat_levels = int(math.log2(min(feat_size) / min(window_size)))
self.global_block = FeatureBlock(dim, feat_levels)
self.global_norm = norm_layer_cl(dim) if global_norm else nn.Identity()
self.blocks = nn.ModuleList([
GlobalContextVitBlock(
dim=dim,
num_heads=num_heads,
feat_size=feat_size,
window_size=window_size,
mlp_ratio=mlp_ratio,
qkv_bias=qkv_bias,
use_global=(i % 2 != 0),
layer_scale=layer_scale,
proj_drop=proj_drop,
attn_drop=attn_drop,
drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path,
act_layer=act_layer,
norm_layer=norm_layer_cl,
)
for i in range(depth)
])
self.norm = norm_layer_cl(dim) if stage_norm else nn.Identity()
self.dim = dim
self.feat_size = feat_size
self.grad_checkpointing = False
def forward(self, x):
# input NCHW, downsample & global block are 2d conv + pooling
x = self.downsample(x)
global_query = self.global_block(x)
# reshape NCHW --> NHWC for transformer blocks
x = x.permute(0, 2, 3, 1)
global_query = self.global_norm(global_query.permute(0, 2, 3, 1))
for blk in self.blocks:
if self.grad_checkpointing and not torch.jit.is_scripting():
x = checkpoint(blk, x, global_query)
else:
x = blk(x, global_query)
x = self.norm(x)
x = x.permute(0, 3, 1, 2).contiguous() # back to NCHW
return x
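The `feat_levels` computation in the stage constructor sizes the `FeatureBlock` so its chain of stride-2 pools shrinks the stage feature map down to the window resolution for the global query. Reproduced standalone (assuming evenly divisible sizes, as the model uses):

```python
import math

def global_query_levels(feat_size, window_size) -> int:
    # number of stride-2 pools needed to reach window resolution
    return int(math.log2(min(feat_size) / min(window_size)))

print(global_query_levels((56, 56), (7, 7)))  # 3
```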
class GlobalContextVit(nn.Module):
def __init__(
self,
in_chans: int = 3,
num_classes: int = 1000,
global_pool: str = 'avg',
img_size: Tuple[int, int] = 224,
window_ratio: Tuple[int, ...] = (32, 32, 16, 32),
            window_size: Optional[Tuple[int, ...]] = None,
embed_dim: int = 64,
depths: Tuple[int, ...] = (3, 4, 19, 5),
num_heads: Tuple[int, ...] = (2, 4, 8, 16),
mlp_ratio: float = 3.0,
qkv_bias: bool = True,
layer_scale: Optional[float] = None,
drop_rate: float = 0.,
proj_drop_rate: float = 0.,
attn_drop_rate: float = 0.,
drop_path_rate: float = 0.,
weight_init='',
act_layer: str = 'gelu',
norm_layer: str = 'layernorm2d',
norm_layer_cl: str = 'layernorm',
norm_eps: float = 1e-5,
):
super().__init__()
act_layer = get_act_layer(act_layer)
norm_layer = partial(get_norm_layer(norm_layer), eps=norm_eps)
norm_layer_cl = partial(get_norm_layer(norm_layer_cl), eps=norm_eps)
self.feature_info = []
img_size = to_2tuple(img_size)
feat_size = tuple(d // 4 for d in img_size) # stem reduction by 4
self.global_pool = global_pool
self.num_classes = num_classes
self.drop_rate = drop_rate
num_stages = len(depths)
self.num_features = self.head_hidden_size = int(embed_dim * 2 ** (num_stages - 1))
if window_size is not None:
window_size = to_ntuple(num_stages)(window_size)
else:
assert window_ratio is not None
window_size = tuple([(img_size[0] // r, img_size[1] // r) for r in to_ntuple(num_stages)(window_ratio)])
self.stem = Stem(
in_chs=in_chans,
out_chs=embed_dim,
act_layer=act_layer,
norm_layer=norm_layer
)
dpr = [x.tolist() for x in torch.linspace(0, drop_path_rate, sum(depths)).split(depths)]
stages = []
for i in range(num_stages):
last_stage = i == num_stages - 1
stage_scale = 2 ** max(i - 1, 0)
stages.append(GlobalContextVitStage(
dim=embed_dim * stage_scale,
depth=depths[i],
num_heads=num_heads[i],
feat_size=(feat_size[0] // stage_scale, feat_size[1] // stage_scale),
window_size=window_size[i],
downsample=i != 0,
stage_norm=last_stage,
mlp_ratio=mlp_ratio,
qkv_bias=qkv_bias,
layer_scale=layer_scale,
proj_drop=proj_drop_rate,
attn_drop=attn_drop_rate,
drop_path=dpr[i],
act_layer=act_layer,
norm_layer=norm_layer,
norm_layer_cl=norm_layer_cl,
))
self.feature_info += [dict(num_chs=stages[-1].dim, reduction=2**(i+2), module=f'stages.{i}')]
self.stages = nn.Sequential(*stages)
# Classifier head
self.head = ClassifierHead(self.num_features, num_classes, pool_type=global_pool, drop_rate=drop_rate)
if weight_init:
named_apply(partial(self._init_weights, scheme=weight_init), self)
def _init_weights(self, module, name, scheme='vit'):
# note Conv2d left as default init
if scheme == 'vit':
if isinstance(module, nn.Linear):
nn.init.xavier_uniform_(module.weight)
if module.bias is not None:
if 'mlp' in name:
nn.init.normal_(module.bias, std=1e-6)
else:
nn.init.zeros_(module.bias)
else:
if isinstance(module, nn.Linear):
nn.init.normal_(module.weight, std=.02)
if module.bias is not None:
nn.init.zeros_(module.bias)
@torch.jit.ignore
def no_weight_decay(self):
return {
k for k, _ in self.named_parameters()
if any(n in k for n in ["relative_position_bias_table", "rel_pos.mlp"])}
@torch.jit.ignore
def group_matcher(self, coarse=False):
matcher = dict(
stem=r'^stem', # stem and embed
blocks=r'^stages\.(\d+)'
)
return matcher
@torch.jit.ignore
def set_grad_checkpointing(self, enable=True):
for s in self.stages:
s.grad_checkpointing = enable
@torch.jit.ignore
def get_classifier(self) -> nn.Module:
return self.head.fc
def reset_classifier(self, num_classes: int, global_pool: Optional[str] = None):
self.num_classes = num_classes
if global_pool is None:
global_pool = self.head.global_pool.pool_type
self.head = ClassifierHead(self.num_features, num_classes, pool_type=global_pool, drop_rate=self.drop_rate)
def forward_intermediates(
self,
x: torch.Tensor,
indices: Optional[Union[int, List[int]]] = None,
norm: bool = False,
stop_early: bool = False,
output_fmt: str = 'NCHW',
intermediates_only: bool = False,
) -> Union[List[torch.Tensor], Tuple[torch.Tensor, List[torch.Tensor]]]:
""" Forward features that returns intermediates.
Args:
x: Input image tensor
indices: Take last n blocks if int, all if None, select matching indices if sequence
norm: Apply norm layer to compatible intermediates
stop_early: Stop iterating over blocks when last desired intermediate hit
output_fmt: Shape of intermediate feature outputs
intermediates_only: Only return intermediate features
        Returns:
            List of intermediate feature tensors, or a tuple of (final features, intermediates).
        """
assert output_fmt in ('NCHW',), 'Output shape must be NCHW.'
intermediates = []
take_indices, max_index = feature_take_indices(len(self.stages), indices)
# forward pass
x = self.stem(x)
if torch.jit.is_scripting() or not stop_early: # can't slice blocks in torchscript
stages = self.stages
else:
stages = self.stages[:max_index + 1]
for feat_idx, stage in enumerate(stages):
x = stage(x)
if feat_idx in take_indices:
intermediates.append(x)
if intermediates_only:
return intermediates
return x, intermediates
def prune_intermediate_layers(
self,
indices: Union[int, List[int]] = 1,
prune_norm: bool = False,
prune_head: bool = True,
):
""" Prune layers not required for specified intermediates.
"""
take_indices, max_index = feature_take_indices(len(self.stages), indices)
self.stages = self.stages[:max_index + 1] # truncate blocks w/ stem as idx 0
if prune_head:
self.reset_classifier(0, '')
return take_indices
def forward_features(self, x: torch.Tensor) -> torch.Tensor:
x = self.stem(x)
x = self.stages(x)
return x
def forward_head(self, x, pre_logits: bool = False):
return self.head(x, pre_logits=pre_logits) if pre_logits else self.head(x)
def forward(self, x: torch.Tensor) -> torch.Tensor:
x = self.forward_features(x)
x = self.forward_head(x)
return x
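For reference, the `window_ratio` path in `GlobalContextVit.__init__` derives per-stage window sizes by dividing the input resolution by each ratio. A hedged sketch with the default ratios (helper name ours, mirroring the list comprehension in the constructor):

```python
def stage_window_sizes(img_size=(224, 224), window_ratio=(32, 32, 16, 32)):
    # mirrors: window_size = tuple([(img_size[0] // r, img_size[1] // r) for r in ...])
    return [(img_size[0] // r, img_size[1] // r) for r in window_ratio]

print(stage_window_sizes())  # [(7, 7), (7, 7), (14, 14), (7, 7)]
```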
def _create_gcvit(variant, pretrained=False, **kwargs):
model = build_model_with_cfg(
GlobalContextVit, variant, pretrained,
feature_cfg=dict(out_indices=(0, 1, 2, 3), flatten_sequential=True),
**kwargs
)
return model
def _cfg(url='', **kwargs):
return {
'url': url, 'num_classes': 1000, 'input_size': (3, 224, 224), 'pool_size': (7, 7),
'crop_pct': 0.875, 'interpolation': 'bicubic',
'mean': IMAGENET_DEFAULT_MEAN, 'std': IMAGENET_DEFAULT_STD,
'first_conv': 'stem.conv1', 'classifier': 'head.fc',
'fixed_input_size': True,
**kwargs
}
default_cfgs = generate_default_cfgs({
'gcvit_xxtiny.in1k': _cfg(
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights-morevit/gcvit_xxtiny_224_nvidia-d1d86009.pth'),
'gcvit_xtiny.in1k': _cfg(
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights-morevit/gcvit_xtiny_224_nvidia-274b92b7.pth'),
'gcvit_tiny.in1k': _cfg(
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights-morevit/gcvit_tiny_224_nvidia-ac783954.pth'),
'gcvit_small.in1k': _cfg(
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights-morevit/gcvit_small_224_nvidia-4e98afa2.pth'),
'gcvit_base.in1k': _cfg(
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights-morevit/gcvit_base_224_nvidia-f009139b.pth'),
})
@register_model
def gcvit_xxtiny(pretrained=False, **kwargs) -> GlobalContextVit:
model_kwargs = dict(
depths=(2, 2, 6, 2),
num_heads=(2, 4, 8, 16),
**kwargs)
return _create_gcvit('gcvit_xxtiny', pretrained=pretrained, **model_kwargs)
@register_model
def gcvit_xtiny(pretrained=False, **kwargs) -> GlobalContextVit:
model_kwargs = dict(
depths=(3, 4, 6, 5),
num_heads=(2, 4, 8, 16),
**kwargs)
return _create_gcvit('gcvit_xtiny', pretrained=pretrained, **model_kwargs)
@register_model
def gcvit_tiny(pretrained=False, **kwargs) -> GlobalContextVit:
model_kwargs = dict(
depths=(3, 4, 19, 5),
num_heads=(2, 4, 8, 16),
**kwargs)
return _create_gcvit('gcvit_tiny', pretrained=pretrained, **model_kwargs)
@register_model
def gcvit_small(pretrained=False, **kwargs) -> GlobalContextVit:
model_kwargs = dict(
depths=(3, 4, 19, 5),
num_heads=(3, 6, 12, 24),
embed_dim=96,
mlp_ratio=2,
layer_scale=1e-5,
**kwargs)
return _create_gcvit('gcvit_small', pretrained=pretrained, **model_kwargs)
@register_model
def gcvit_base(pretrained=False, **kwargs) -> GlobalContextVit:
model_kwargs = dict(
depths=(3, 4, 19, 5),
num_heads=(4, 8, 16, 32),
embed_dim=128,
mlp_ratio=2,
layer_scale=1e-5,
**kwargs)
return _create_gcvit('gcvit_base', pretrained=pretrained, **model_kwargs)
""" MaxVit and CoAtNet Vision Transformer - CNN Hybrids in PyTorch
This is a from-scratch implementation of both CoAtNet and MaxVit in PyTorch.
99% of the implementation was done from papers, however last minute some adjustments were made
based on the (as yet unfinished?) public code release https://github.com/google-research/maxvit
There are multiple sets of models defined for both architectures. Typically, names with a
`_rw` suffix are my own original configs prior to referencing https://github.com/google-research/maxvit.
These configs work well and appear to be a bit faster / lower resource than the paper.
The models without an extra prefix / suffix (coatnet_0_224, maxvit_tiny_224, etc.) are intended to
match the paper, BUT, without any official pretrained weights it's difficult to confirm a 100% match.
Papers:
MaxViT: Multi-Axis Vision Transformer - https://arxiv.org/abs/2204.01697
@article{tu2022maxvit,
title={MaxViT: Multi-Axis Vision Transformer},
author={Tu, Zhengzhong and Talebi, Hossein and Zhang, Han and Yang, Feng and Milanfar, Peyman and Bovik, Alan and Li, Yinxiao},
journal={ECCV},
year={2022},
}
CoAtNet: Marrying Convolution and Attention for All Data Sizes - https://arxiv.org/abs/2106.04803
@article{DBLP:journals/corr/abs-2106-04803,
author = {Zihang Dai and Hanxiao Liu and Quoc V. Le and Mingxing Tan},
title = {CoAtNet: Marrying Convolution and Attention for All Data Sizes},
journal = {CoRR},
volume = {abs/2106.04803},
year = {2021}
}
Hacked together by / Copyright 2022, Ross Wightman
"""
import math
from collections import OrderedDict
from dataclasses import dataclass, replace, field
from functools import partial
from typing import Any, Callable, Dict, List, Optional, Set, Tuple, Union
import torch
from torch import nn
from torch.jit import Final
from timm.data import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD
from timm.layers import Mlp, ConvMlp, DropPath, LayerNorm, ClassifierHead, NormMlpClassifierHead
from timm.layers import create_attn, get_act_layer, get_norm_layer, get_norm_act_layer, create_conv2d, create_pool2d
from timm.layers import trunc_normal_tf_, to_2tuple, extend_tuple, make_divisible, _assert
from timm.layers import RelPosMlp, RelPosBias, RelPosBiasTf, use_fused_attn, resize_rel_pos_bias_table
from ._builder import build_model_with_cfg
from ._features import feature_take_indices
from ._features_fx import register_notrace_function
from ._manipulate import named_apply, checkpoint_seq
from ._registry import generate_default_cfgs, register_model
__all__ = ['MaxxVitCfg', 'MaxxVitConvCfg', 'MaxxVitTransformerCfg', 'MaxxVit']
@dataclass
class MaxxVitTransformerCfg:
"""Configuration for MaxxVit transformer blocks."""
dim_head: int = 32
head_first: bool = True # head ordering in qkv channel dim
expand_ratio: float = 4.0
expand_first: bool = True
shortcut_bias: bool = True
attn_bias: bool = True
attn_drop: float = 0.
proj_drop: float = 0.
pool_type: str = 'avg2'
rel_pos_type: str = 'bias'
rel_pos_dim: int = 512 # for relative position types w/ MLP
partition_ratio: int = 32
window_size: Optional[Tuple[int, int]] = None
grid_size: Optional[Tuple[int, int]] = None
no_block_attn: bool = False # disable window block attention for maxvit (ie only grid)
use_nchw_attn: bool = False # for MaxViT variants (not used for CoAt), keep tensors in NCHW order
init_values: Optional[float] = None
act_layer: str = 'gelu'
norm_layer: str = 'layernorm2d'
norm_layer_cl: str = 'layernorm'
norm_eps: float = 1e-6
def __post_init__(self):
if self.grid_size is not None:
self.grid_size = to_2tuple(self.grid_size)
if self.window_size is not None:
self.window_size = to_2tuple(self.window_size)
if self.grid_size is None:
self.grid_size = self.window_size
@dataclass
class MaxxVitConvCfg:
"""Configuration for MaxxVit convolution blocks."""
block_type: str = 'mbconv'
expand_ratio: float = 4.0
expand_output: bool = True # calculate expansion channels from output (vs input chs)
kernel_size: int = 3
group_size: int = 1 # 1 == depthwise
pre_norm_act: bool = False # activation after pre-norm
output_bias: bool = True # bias for shortcut + final 1x1 projection conv
stride_mode: str = 'dw' # stride done via one of 'pool', '1x1', 'dw'
pool_type: str = 'avg2'
downsample_pool_type: str = 'avg2'
padding: str = ''
attn_early: bool = False # apply attn between conv2 and norm2, instead of after norm2
attn_layer: str = 'se'
attn_act_layer: str = 'silu'
attn_ratio: float = 0.25
init_values: Optional[float] = 1e-6 # for ConvNeXt block, ignored by MBConv
act_layer: str = 'gelu'
norm_layer: str = ''
norm_layer_cl: str = ''
norm_eps: Optional[float] = None
def __post_init__(self):
# mbconv vs convnext blocks have different defaults, set in post_init to avoid explicit config args
assert self.block_type in ('mbconv', 'convnext')
use_mbconv = self.block_type == 'mbconv'
if not self.norm_layer:
self.norm_layer = 'batchnorm2d' if use_mbconv else 'layernorm2d'
if not self.norm_layer_cl and not use_mbconv:
self.norm_layer_cl = 'layernorm'
if self.norm_eps is None:
self.norm_eps = 1e-5 if use_mbconv else 1e-6
self.downsample_pool_type = self.downsample_pool_type or self.pool_type
@dataclass
class MaxxVitCfg:
"""Configuration for MaxxVit models."""
embed_dim: Tuple[int, ...] = (96, 192, 384, 768)
depths: Tuple[int, ...] = (2, 3, 5, 2)
block_type: Tuple[Union[str, Tuple[str, ...]], ...] = ('C', 'C', 'T', 'T')
stem_width: Union[int, Tuple[int, int]] = 64
stem_bias: bool = False
conv_cfg: MaxxVitConvCfg = field(default_factory=MaxxVitConvCfg)
transformer_cfg: MaxxVitTransformerCfg = field(default_factory=MaxxVitTransformerCfg)
head_hidden_size: Optional[int] = None
weight_init: str = 'vit_eff'
class Attention2d(nn.Module):
"""Multi-head attention for 2D NCHW tensors."""
fused_attn: Final[bool]
def __init__(
self,
dim: int,
dim_out: Optional[int] = None,
dim_head: int = 32,
bias: bool = True,
expand_first: bool = True,
head_first: bool = True,
rel_pos_cls: Optional[Callable] = None,
attn_drop: float = 0.,
proj_drop: float = 0.
):
"""
Args:
dim: Input dimension.
dim_out: Output dimension (defaults to input dimension).
dim_head: Dimension per attention head.
bias: Whether to use bias in qkv and projection.
expand_first: Whether to expand channels before or after qkv.
head_first: Whether heads are first in tensor layout.
rel_pos_cls: Relative position class to use.
attn_drop: Attention dropout rate.
proj_drop: Projection dropout rate.
"""
super().__init__()
dim_out = dim_out or dim
dim_attn = dim_out if expand_first else dim
self.num_heads = dim_attn // dim_head
self.dim_head = dim_head
self.head_first = head_first
self.scale = dim_head ** -0.5
self.fused_attn = use_fused_attn()
self.qkv = nn.Conv2d(dim, dim_attn * 3, 1, bias=bias)
self.rel_pos = rel_pos_cls(num_heads=self.num_heads) if rel_pos_cls else None
self.attn_drop = nn.Dropout(attn_drop)
self.proj = nn.Conv2d(dim_attn, dim_out, 1, bias=bias)
self.proj_drop = nn.Dropout(proj_drop)
def forward(self, x: torch.Tensor, shared_rel_pos: Optional[torch.Tensor] = None) -> torch.Tensor:
B, C, H, W = x.shape
if self.head_first:
q, k, v = self.qkv(x).view(B, self.num_heads, self.dim_head * 3, -1).chunk(3, dim=2)
else:
q, k, v = self.qkv(x).reshape(B, 3, self.num_heads, self.dim_head, -1).unbind(1)
if self.fused_attn:
attn_bias = None
if self.rel_pos is not None:
attn_bias = self.rel_pos.get_bias()
elif shared_rel_pos is not None:
attn_bias = shared_rel_pos
x = torch.nn.functional.scaled_dot_product_attention(
q.transpose(-1, -2).contiguous(),
k.transpose(-1, -2).contiguous(),
v.transpose(-1, -2).contiguous(),
attn_mask=attn_bias,
dropout_p=self.attn_drop.p if self.training else 0.,
).transpose(-1, -2).reshape(B, -1, H, W)
else:
q = q * self.scale
attn = q.transpose(-2, -1) @ k
if self.rel_pos is not None:
attn = self.rel_pos(attn)
elif shared_rel_pos is not None:
attn = attn + shared_rel_pos
attn = attn.softmax(dim=-1)
attn = self.attn_drop(attn)
x = (v @ attn.transpose(-2, -1)).view(B, -1, H, W)
x = self.proj(x)
x = self.proj_drop(x)
return x
class AttentionCl(nn.Module):
"""Channels-last multi-head attention (B, ..., C)."""
fused_attn: Final[bool]
def __init__(
self,
dim: int,
dim_out: Optional[int] = None,
dim_head: int = 32,
bias: bool = True,
expand_first: bool = True,
head_first: bool = True,
rel_pos_cls: Optional[Callable] = None,
attn_drop: float = 0.,
proj_drop: float = 0.
):
"""
Args:
dim: Input dimension.
dim_out: Output dimension (defaults to input dimension).
dim_head: Dimension per attention head.
bias: Whether to use bias in qkv and projection.
expand_first: Whether to expand channels before or after qkv.
head_first: Whether heads are first in tensor layout.
rel_pos_cls: Relative position class to use.
attn_drop: Attention dropout rate.
proj_drop: Projection dropout rate.
"""
super().__init__()
dim_out = dim_out or dim
dim_attn = dim_out if expand_first and dim_out > dim else dim
assert dim_attn % dim_head == 0, 'attn dim should be divisible by head_dim'
self.num_heads = dim_attn // dim_head
self.dim_head = dim_head
self.head_first = head_first
self.scale = dim_head ** -0.5
self.fused_attn = use_fused_attn()
self.qkv = nn.Linear(dim, dim_attn * 3, bias=bias)
self.rel_pos = rel_pos_cls(num_heads=self.num_heads) if rel_pos_cls else None
self.attn_drop = nn.Dropout(attn_drop)
self.proj = nn.Linear(dim_attn, dim_out, bias=bias)
self.proj_drop = nn.Dropout(proj_drop)
def forward(self, x: torch.Tensor, shared_rel_pos: Optional[torch.Tensor] = None) -> torch.Tensor:
B = x.shape[0]
restore_shape = x.shape[:-1]
if self.head_first:
q, k, v = self.qkv(x).view(B, -1, self.num_heads, self.dim_head * 3).transpose(1, 2).chunk(3, dim=3)
else:
q, k, v = self.qkv(x).reshape(B, -1, 3, self.num_heads, self.dim_head).transpose(1, 3).unbind(2)
if self.fused_attn:
attn_bias = None
if self.rel_pos is not None:
attn_bias = self.rel_pos.get_bias()
elif shared_rel_pos is not None:
attn_bias = shared_rel_pos
x = torch.nn.functional.scaled_dot_product_attention(
q, k, v,
attn_mask=attn_bias,
dropout_p=self.attn_drop.p if self.training else 0.,
)
else:
q = q * self.scale
attn = q @ k.transpose(-2, -1)
if self.rel_pos is not None:
attn = self.rel_pos(attn, shared_rel_pos=shared_rel_pos)
elif shared_rel_pos is not None:
attn = attn + shared_rel_pos
attn = attn.softmax(dim=-1)
attn = self.attn_drop(attn)
x = attn @ v
x = x.transpose(1, 2).reshape(restore_shape + (-1,))
x = self.proj(x)
x = self.proj_drop(x)
return x
class LayerScale(nn.Module):
"""Per-channel scaling layer."""
def __init__(self, dim: int, init_values: float = 1e-5, inplace: bool = False):
"""
Args:
dim: Number of channels.
init_values: Initial scaling value.
inplace: Whether to perform inplace operations.
"""
super().__init__()
self.inplace = inplace
self.gamma = nn.Parameter(init_values * torch.ones(dim))
def forward(self, x: torch.Tensor) -> torch.Tensor:
gamma = self.gamma
return x.mul_(gamma) if self.inplace else x * gamma
class LayerScale2d(nn.Module):
"""Per-channel scaling layer for 2D tensors."""
def __init__(self, dim: int, init_values: float = 1e-5, inplace: bool = False):
"""
Args:
dim: Number of channels.
init_values: Initial scaling value.
inplace: Whether to perform inplace operations.
"""
super().__init__()
self.inplace = inplace
self.gamma = nn.Parameter(init_values * torch.ones(dim))
def forward(self, x: torch.Tensor) -> torch.Tensor:
gamma = self.gamma.view(1, -1, 1, 1)
return x.mul_(gamma) if self.inplace else x * gamma
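For intuition, the two layer-scale classes above just multiply by a small learnable per-channel gamma so residual branches start close to the identity; a plain-tensor sketch of the broadcasting:

```python
import torch

# Per-channel scaling as done by LayerScale / LayerScale2d, with init_values=1e-5.
init_values = 1e-5
gamma = init_values * torch.ones(8)            # one gamma per channel

x_cl = torch.randn(2, 16, 8)                   # channels-last: (B, N, C)
x_2d = torch.randn(2, 8, 4, 4)                 # NCHW: (B, C, H, W)

y_cl = x_cl * gamma                            # LayerScale broadcast over last dim
y_2d = x_2d * gamma.view(1, -1, 1, 1)          # LayerScale2d broadcast over channel dim

assert torch.allclose(y_cl, x_cl * init_values)
assert torch.allclose(y_2d, x_2d * init_values)
```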
class Downsample2d(nn.Module):
"""A downsample pooling module supporting several maxpool and avgpool modes.
* 'max' - MaxPool2d w/ kernel_size 3, stride 2, padding 1
* 'max2' - MaxPool2d w/ kernel_size = stride = 2
* 'avg' - AvgPool2d w/ kernel_size 3, stride 2, padding 1
* 'avg2' - AvgPool2d w/ kernel_size = stride = 2
"""
def __init__(
self,
dim: int,
dim_out: int,
pool_type: str = 'avg2',
padding: str = '',
bias: bool = True,
):
"""
Args:
dim: Input dimension.
dim_out: Output dimension.
pool_type: Type of pooling operation.
padding: Padding mode.
bias: Whether to use bias in expansion conv.
"""
super().__init__()
assert pool_type in ('max', 'max2', 'avg', 'avg2')
if pool_type == 'max':
self.pool = create_pool2d('max', kernel_size=3, stride=2, padding=padding or 1)
elif pool_type == 'max2':
self.pool = create_pool2d('max', 2, padding=padding or 0) # kernel_size == stride == 2
elif pool_type == 'avg':
self.pool = create_pool2d(
'avg', kernel_size=3, stride=2, count_include_pad=False, padding=padding or 1)
else:
self.pool = create_pool2d('avg', 2, padding=padding or 0)
if dim != dim_out:
self.expand = nn.Conv2d(dim, dim_out, 1, bias=bias)
else:
self.expand = nn.Identity()
def forward(self, x: torch.Tensor) -> torch.Tensor:
x = self.pool(x) # spatial downsample
x = self.expand(x) # expand chs
return x
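A standalone sketch of the default `'avg2'` path in `Downsample2d` above, using plain `torch.nn` layers: a 2x2 stride-2 average pool, followed by a 1x1 conv only when the channel count changes.

```python
import torch
import torch.nn as nn

# 'avg2' mode: AvgPool2d with kernel_size == stride == 2, then a 1x1 expansion conv.
pool = nn.AvgPool2d(kernel_size=2, stride=2)
expand = nn.Conv2d(64, 128, kernel_size=1)

x = torch.randn(2, 64, 32, 32)
y = expand(pool(x))
assert y.shape == (2, 128, 16, 16)  # spatial dims halved, channels expanded
```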
def _init_transformer(module: nn.Module, name: str, scheme: str = '') -> None:
"""Initialize transformer module weights."""
if isinstance(module, (nn.Conv2d, nn.Linear)):
if scheme == 'normal':
nn.init.normal_(module.weight, std=.02)
if module.bias is not None:
nn.init.zeros_(module.bias)
elif scheme == 'trunc_normal':
trunc_normal_tf_(module.weight, std=.02)
if module.bias is not None:
nn.init.zeros_(module.bias)
elif scheme == 'xavier_normal':
nn.init.xavier_normal_(module.weight)
if module.bias is not None:
nn.init.zeros_(module.bias)
else:
# vit like
nn.init.xavier_uniform_(module.weight)
if module.bias is not None:
if 'mlp' in name:
nn.init.normal_(module.bias, std=1e-6)
else:
nn.init.zeros_(module.bias)
class TransformerBlock2d(nn.Module):
"""Transformer block with 2D downsampling.
'2D' NCHW tensor layout
    Some gains can be seen on GPU using a 1D / CL block, BUT w/ the need to switch back/forth to NCHW
    for spatial pooling, the benefit is minimal so I ended up using just this variant for the CoAt configs.
    This impl was faster on TPU w/ PT XLA than the 1D experiment.
"""
def __init__(
self,
dim: int,
dim_out: int,
stride: int = 1,
rel_pos_cls: Optional[Callable] = None,
cfg: MaxxVitTransformerCfg = MaxxVitTransformerCfg(),
drop_path: float = 0.,
):
"""
Args:
dim: Input dimension.
dim_out: Output dimension.
stride: Stride for downsampling.
rel_pos_cls: Relative position class.
cfg: Transformer block configuration.
drop_path: Drop path rate.
"""
super().__init__()
norm_layer = partial(get_norm_layer(cfg.norm_layer), eps=cfg.norm_eps)
act_layer = get_act_layer(cfg.act_layer)
if stride == 2:
self.shortcut = Downsample2d(dim, dim_out, pool_type=cfg.pool_type, bias=cfg.shortcut_bias)
self.norm1 = nn.Sequential(OrderedDict([
('norm', norm_layer(dim)),
('down', Downsample2d(dim, dim, pool_type=cfg.pool_type)),
]))
else:
assert dim == dim_out
self.shortcut = nn.Identity()
self.norm1 = norm_layer(dim)
self.attn = Attention2d(
dim,
dim_out,
dim_head=cfg.dim_head,
expand_first=cfg.expand_first,
bias=cfg.attn_bias,
rel_pos_cls=rel_pos_cls,
attn_drop=cfg.attn_drop,
proj_drop=cfg.proj_drop
)
self.ls1 = LayerScale2d(dim_out, init_values=cfg.init_values) if cfg.init_values else nn.Identity()
self.drop_path1 = DropPath(drop_path) if drop_path > 0. else nn.Identity()
self.norm2 = norm_layer(dim_out)
self.mlp = ConvMlp(
in_features=dim_out,
hidden_features=int(dim_out * cfg.expand_ratio),
act_layer=act_layer,
drop=cfg.proj_drop)
self.ls2 = LayerScale2d(dim_out, init_values=cfg.init_values) if cfg.init_values else nn.Identity()
self.drop_path2 = DropPath(drop_path) if drop_path > 0. else nn.Identity()
def init_weights(self, scheme: str = '') -> None:
named_apply(partial(_init_transformer, scheme=scheme), self)
def forward(self, x: torch.Tensor, shared_rel_pos: Optional[torch.Tensor] = None) -> torch.Tensor:
x = self.shortcut(x) + self.drop_path1(self.ls1(self.attn(self.norm1(x), shared_rel_pos=shared_rel_pos)))
x = x + self.drop_path2(self.ls2(self.mlp(self.norm2(x))))
return x
def _init_conv(module: nn.Module, name: str, scheme: str = '') -> None:
"""Initialize convolution module weights."""
if isinstance(module, nn.Conv2d):
if scheme == 'normal':
nn.init.normal_(module.weight, std=.02)
if module.bias is not None:
nn.init.zeros_(module.bias)
elif scheme == 'trunc_normal':
trunc_normal_tf_(module.weight, std=.02)
if module.bias is not None:
nn.init.zeros_(module.bias)
elif scheme == 'xavier_normal':
nn.init.xavier_normal_(module.weight)
if module.bias is not None:
nn.init.zeros_(module.bias)
else:
# efficientnet like
fan_out = module.kernel_size[0] * module.kernel_size[1] * module.out_channels
fan_out //= module.groups
nn.init.normal_(module.weight, 0, math.sqrt(2.0 / fan_out))
if module.bias is not None:
nn.init.zeros_(module.bias)
def num_groups(group_size: Optional[int], channels: int) -> int:
"""Calculate number of groups for grouped convolution."""
if not group_size: # 0 or None
return 1 # normal conv with 1 group
else:
# NOTE group_size == 1 -> depthwise conv
assert channels % group_size == 0
return channels // group_size
class MbConvBlock(nn.Module):
"""Pre-Norm Conv Block - 1x1 - kxk - 1x1, w/ inverted bottleneck (expand)."""
def __init__(
self,
in_chs: int,
out_chs: int,
stride: int = 1,
dilation: Tuple[int, int] = (1, 1),
cfg: MaxxVitConvCfg = MaxxVitConvCfg(),
drop_path: float = 0.
):
"""
Args:
in_chs: Input channels.
out_chs: Output channels.
stride: Stride for conv.
dilation: Dilation for conv.
cfg: Convolution block configuration.
drop_path: Drop path rate.
"""
super(MbConvBlock, self).__init__()
norm_act_layer = partial(get_norm_act_layer(cfg.norm_layer, cfg.act_layer), eps=cfg.norm_eps)
mid_chs = make_divisible((out_chs if cfg.expand_output else in_chs) * cfg.expand_ratio)
groups = num_groups(cfg.group_size, mid_chs)
if stride == 2:
self.shortcut = Downsample2d(
in_chs, out_chs, pool_type=cfg.pool_type, bias=cfg.output_bias, padding=cfg.padding)
else:
self.shortcut = nn.Identity()
assert cfg.stride_mode in ('pool', '1x1', 'dw')
stride_pool, stride_1, stride_2 = 1, 1, 1
if cfg.stride_mode == 'pool':
# NOTE this is not described in paper, experiment to find faster option that doesn't stride in 1x1
stride_pool, dilation_2 = stride, dilation[1]
# FIXME handle dilation of avg pool
elif cfg.stride_mode == '1x1':
# NOTE I don't like this option described in paper, 1x1 w/ stride throws info away
stride_1, dilation_2 = stride, dilation[1]
else:
stride_2, dilation_2 = stride, dilation[0]
self.pre_norm = norm_act_layer(in_chs, apply_act=cfg.pre_norm_act)
if stride_pool > 1:
self.down = Downsample2d(in_chs, in_chs, pool_type=cfg.downsample_pool_type, padding=cfg.padding)
else:
self.down = nn.Identity()
self.conv1_1x1 = create_conv2d(in_chs, mid_chs, 1, stride=stride_1)
self.norm1 = norm_act_layer(mid_chs)
self.conv2_kxk = create_conv2d(
mid_chs, mid_chs, cfg.kernel_size,
stride=stride_2, dilation=dilation_2, groups=groups, padding=cfg.padding)
attn_kwargs = {}
if isinstance(cfg.attn_layer, str):
if cfg.attn_layer == 'se' or cfg.attn_layer == 'eca':
attn_kwargs['act_layer'] = cfg.attn_act_layer
attn_kwargs['rd_channels'] = int(cfg.attn_ratio * (out_chs if cfg.expand_output else mid_chs))
# two different orderings for SE and norm2 (due to some weights and trials using SE before norm2)
if cfg.attn_early:
self.se_early = create_attn(cfg.attn_layer, mid_chs, **attn_kwargs)
self.norm2 = norm_act_layer(mid_chs)
self.se = None
else:
self.se_early = None
self.norm2 = norm_act_layer(mid_chs)
self.se = create_attn(cfg.attn_layer, mid_chs, **attn_kwargs)
self.conv3_1x1 = create_conv2d(mid_chs, out_chs, 1, bias=cfg.output_bias)
self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
def init_weights(self, scheme: str = '') -> None:
named_apply(partial(_init_conv, scheme=scheme), self)
def forward(self, x: torch.Tensor) -> torch.Tensor:
shortcut = self.shortcut(x)
x = self.pre_norm(x)
x = self.down(x)
# 1x1 expansion conv & norm-act
x = self.conv1_1x1(x)
x = self.norm1(x)
# depthwise / grouped 3x3 conv w/ SE (or other) channel attention & norm-act
x = self.conv2_kxk(x)
if self.se_early is not None:
x = self.se_early(x)
x = self.norm2(x)
if self.se is not None:
x = self.se(x)
# 1x1 linear projection to output width
x = self.conv3_1x1(x)
x = self.drop_path(x) + shortcut
return x
class ConvNeXtBlock(nn.Module):
"""ConvNeXt Block."""
def __init__(
self,
in_chs: int,
out_chs: Optional[int] = None,
kernel_size: int = 7,
stride: int = 1,
dilation: Tuple[int, int] = (1, 1),
cfg: MaxxVitConvCfg = MaxxVitConvCfg(),
conv_mlp: bool = True,
drop_path: float = 0.
):
"""
Args:
in_chs: Input channels.
out_chs: Output channels.
kernel_size: Kernel size for depthwise conv.
stride: Stride for conv.
dilation: Dilation for conv.
cfg: Convolution block configuration.
conv_mlp: Whether to use convolutional MLP.
drop_path: Drop path rate.
"""
super().__init__()
out_chs = out_chs or in_chs
act_layer = get_act_layer(cfg.act_layer)
if conv_mlp:
norm_layer = partial(get_norm_layer(cfg.norm_layer), eps=cfg.norm_eps)
mlp_layer = ConvMlp
else:
assert 'layernorm' in cfg.norm_layer
norm_layer = LayerNorm
mlp_layer = Mlp
self.use_conv_mlp = conv_mlp
if stride == 2:
self.shortcut = Downsample2d(in_chs, out_chs)
elif in_chs != out_chs:
self.shortcut = nn.Conv2d(in_chs, out_chs, kernel_size=1, bias=cfg.output_bias)
else:
self.shortcut = nn.Identity()
assert cfg.stride_mode in ('pool', 'dw')
stride_pool, stride_dw = 1, 1
# FIXME handle dilation?
if cfg.stride_mode == 'pool':
stride_pool = stride
else:
stride_dw = stride
if stride_pool == 2:
self.down = Downsample2d(in_chs, in_chs, pool_type=cfg.downsample_pool_type)
else:
self.down = nn.Identity()
self.conv_dw = create_conv2d(
in_chs, out_chs, kernel_size=kernel_size, stride=stride_dw, dilation=dilation[1],
depthwise=True, bias=cfg.output_bias)
self.norm = norm_layer(out_chs)
self.mlp = mlp_layer(out_chs, int(cfg.expand_ratio * out_chs), bias=cfg.output_bias, act_layer=act_layer)
if conv_mlp:
self.ls = LayerScale2d(out_chs, cfg.init_values) if cfg.init_values else nn.Identity()
else:
self.ls = LayerScale(out_chs, cfg.init_values) if cfg.init_values else nn.Identity()
self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
def forward(self, x: torch.Tensor) -> torch.Tensor:
shortcut = self.shortcut(x)
x = self.down(x)
x = self.conv_dw(x)
if self.use_conv_mlp:
x = self.norm(x)
x = self.mlp(x)
x = self.ls(x)
else:
x = x.permute(0, 2, 3, 1)
x = self.norm(x)
x = self.mlp(x)
x = self.ls(x)
x = x.permute(0, 3, 1, 2)
x = self.drop_path(x) + shortcut
return x
def window_partition(x: torch.Tensor, window_size: List[int]) -> torch.Tensor:
"""Partition into non-overlapping windows."""
B, H, W, C = x.shape
_assert(H % window_size[0] == 0, f'height ({H}) must be divisible by window ({window_size[0]})')
_assert(W % window_size[1] == 0, f'width ({W}) must be divisible by window ({window_size[1]})')
x = x.view(B, H // window_size[0], window_size[0], W // window_size[1], window_size[1], C)
windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size[0], window_size[1], C)
return windows
@register_notrace_function # reason: int argument is a Proxy
def window_reverse(windows: torch.Tensor, window_size: List[int], img_size: List[int]) -> torch.Tensor:
"""Reverse window partition."""
H, W = img_size
C = windows.shape[-1]
x = windows.view(-1, H // window_size[0], W // window_size[1], window_size[0], window_size[1], C)
x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, H, W, C)
return x
def grid_partition(x: torch.Tensor, grid_size: List[int]) -> torch.Tensor:
"""Partition into overlapping windows with grid striding."""
B, H, W, C = x.shape
_assert(H % grid_size[0] == 0, f'height {H} must be divisible by grid {grid_size[0]}')
_assert(W % grid_size[1] == 0, f'width {W} must be divisible by grid {grid_size[1]}')
x = x.view(B, grid_size[0], H // grid_size[0], grid_size[1], W // grid_size[1], C)
windows = x.permute(0, 2, 4, 1, 3, 5).contiguous().view(-1, grid_size[0], grid_size[1], C)
return windows
@register_notrace_function # reason: int argument is a Proxy
def grid_reverse(windows: torch.Tensor, grid_size: List[int], img_size: List[int]) -> torch.Tensor:
"""Reverse grid partition."""
H, W = img_size
C = windows.shape[-1]
x = windows.view(-1, H // grid_size[0], W // grid_size[1], grid_size[0], grid_size[1], C)
x = x.permute(0, 3, 1, 4, 2, 5).contiguous().view(-1, H, W, C)
return x
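A standalone illustration of the two partition schemes above on a 1x4x4x1 tensor with 2x2 partitions: window partition takes contiguous local patches, while grid partition gathers a strided grid spanning the whole image (the same view/permute logic, inlined for the demo).

```python
import torch

x = torch.arange(16).view(1, 4, 4, 1)  # (B, H, W, C), values 0..15 row-major

# window_partition: (B, H//ws, ws, W//ws, ws, C) -> contiguous patches
win = x.view(1, 2, 2, 2, 2, 1).permute(0, 1, 3, 2, 4, 5).reshape(-1, 2, 2, 1)
# grid_partition: (B, gs, H//gs, gs, W//gs, C) -> strided grid
grid = x.view(1, 2, 2, 2, 2, 1).permute(0, 2, 4, 1, 3, 5).reshape(-1, 2, 2, 1)

assert win[0].flatten().tolist() == [0, 1, 4, 5]    # top-left 2x2 patch
assert grid[0].flatten().tolist() == [0, 2, 8, 10]  # every other row/column
```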
def get_rel_pos_cls(cfg: MaxxVitTransformerCfg, window_size: Tuple[int, int]) -> Optional[Callable]:
"""Get relative position class based on config."""
rel_pos_cls = None
if cfg.rel_pos_type == 'mlp':
rel_pos_cls = partial(RelPosMlp, window_size=window_size, hidden_dim=cfg.rel_pos_dim)
elif cfg.rel_pos_type == 'bias':
rel_pos_cls = partial(RelPosBias, window_size=window_size)
elif cfg.rel_pos_type == 'bias_tf':
rel_pos_cls = partial(RelPosBiasTf, window_size=window_size)
return rel_pos_cls
class PartitionAttentionCl(nn.Module):
"""Grid or Block partition + Attn + FFN.
NxC 'channels last' tensor layout.
"""
def __init__(
self,
dim: int,
partition_type: str = 'block',
cfg: MaxxVitTransformerCfg = MaxxVitTransformerCfg(),
drop_path: float = 0.,
):
super().__init__()
norm_layer = partial(get_norm_layer(cfg.norm_layer_cl), eps=cfg.norm_eps) # NOTE this block is channels-last
act_layer = get_act_layer(cfg.act_layer)
self.partition_block = partition_type == 'block'
self.partition_size = to_2tuple(cfg.window_size if self.partition_block else cfg.grid_size)
rel_pos_cls = get_rel_pos_cls(cfg, self.partition_size)
self.norm1 = norm_layer(dim)
self.attn = AttentionCl(
dim,
dim,
dim_head=cfg.dim_head,
bias=cfg.attn_bias,
head_first=cfg.head_first,
rel_pos_cls=rel_pos_cls,
attn_drop=cfg.attn_drop,
proj_drop=cfg.proj_drop,
)
self.ls1 = LayerScale(dim, init_values=cfg.init_values) if cfg.init_values else nn.Identity()
self.drop_path1 = DropPath(drop_path) if drop_path > 0. else nn.Identity()
self.norm2 = norm_layer(dim)
self.mlp = Mlp(
in_features=dim,
hidden_features=int(dim * cfg.expand_ratio),
act_layer=act_layer,
drop=cfg.proj_drop)
self.ls2 = LayerScale(dim, init_values=cfg.init_values) if cfg.init_values else nn.Identity()
self.drop_path2 = DropPath(drop_path) if drop_path > 0. else nn.Identity()
def _partition_attn(self, x):
img_size = x.shape[1:3]
if self.partition_block:
partitioned = window_partition(x, self.partition_size)
else:
partitioned = grid_partition(x, self.partition_size)
partitioned = self.attn(partitioned)
if self.partition_block:
x = window_reverse(partitioned, self.partition_size, img_size)
else:
x = grid_reverse(partitioned, self.partition_size, img_size)
return x
def forward(self, x):
x = x + self.drop_path1(self.ls1(self._partition_attn(self.norm1(x))))
x = x + self.drop_path2(self.ls2(self.mlp(self.norm2(x))))
return x
class ParallelPartitionAttention(nn.Module):
"""Experimental. Grid and Block partition + single FFN.
NxC tensor layout.
"""
def __init__(
self,
dim: int,
cfg: MaxxVitTransformerCfg = MaxxVitTransformerCfg(),
drop_path: float = 0.,
):
"""
Args:
dim: Input dimension.
cfg: Transformer block configuration.
drop_path: Drop path rate.
"""
super().__init__()
assert dim % 2 == 0
norm_layer = partial(get_norm_layer(cfg.norm_layer_cl), eps=cfg.norm_eps) # NOTE this block is channels-last
act_layer = get_act_layer(cfg.act_layer)
assert cfg.window_size == cfg.grid_size
self.partition_size = to_2tuple(cfg.window_size)
rel_pos_cls = get_rel_pos_cls(cfg, self.partition_size)
self.norm1 = norm_layer(dim)
self.attn_block = AttentionCl(
dim,
dim // 2,
dim_head=cfg.dim_head,
bias=cfg.attn_bias,
head_first=cfg.head_first,
rel_pos_cls=rel_pos_cls,
attn_drop=cfg.attn_drop,
proj_drop=cfg.proj_drop,
)
self.attn_grid = AttentionCl(
dim,
dim // 2,
dim_head=cfg.dim_head,
bias=cfg.attn_bias,
head_first=cfg.head_first,
rel_pos_cls=rel_pos_cls,
attn_drop=cfg.attn_drop,
proj_drop=cfg.proj_drop,
)
self.ls1 = LayerScale(dim, init_values=cfg.init_values) if cfg.init_values else nn.Identity()
self.drop_path1 = DropPath(drop_path) if drop_path > 0. else nn.Identity()
self.norm2 = norm_layer(dim)
self.mlp = Mlp(
in_features=dim,
hidden_features=int(dim * cfg.expand_ratio),
out_features=dim,
act_layer=act_layer,
drop=cfg.proj_drop)
self.ls2 = LayerScale(dim, init_values=cfg.init_values) if cfg.init_values else nn.Identity()
self.drop_path2 = DropPath(drop_path) if drop_path > 0. else nn.Identity()
def _partition_attn(self, x: torch.Tensor) -> torch.Tensor:
img_size = x.shape[1:3]
partitioned_block = window_partition(x, self.partition_size)
partitioned_block = self.attn_block(partitioned_block)
x_window = window_reverse(partitioned_block, self.partition_size, img_size)
partitioned_grid = grid_partition(x, self.partition_size)
partitioned_grid = self.attn_grid(partitioned_grid)
x_grid = grid_reverse(partitioned_grid, self.partition_size, img_size)
return torch.cat([x_window, x_grid], dim=-1)
def forward(self, x: torch.Tensor) -> torch.Tensor:
x = x + self.drop_path1(self.ls1(self._partition_attn(self.norm1(x))))
x = x + self.drop_path2(self.ls2(self.mlp(self.norm2(x))))
return x
def window_partition_nchw(x: torch.Tensor, window_size: List[int]) -> torch.Tensor:
"""Partition windows for NCHW tensors."""
B, C, H, W = x.shape
_assert(H % window_size[0] == 0, f'height ({H}) must be divisible by window ({window_size[0]})')
_assert(W % window_size[1] == 0, f'width ({W}) must be divisible by window ({window_size[1]})')
x = x.view(B, C, H // window_size[0], window_size[0], W // window_size[1], window_size[1])
windows = x.permute(0, 2, 4, 1, 3, 5).contiguous().view(-1, C, window_size[0], window_size[1])
return windows
@register_notrace_function # reason: int argument is a Proxy
def window_reverse_nchw(windows: torch.Tensor, window_size: List[int], img_size: List[int]) -> torch.Tensor:
"""Reverse window partition for NCHW tensors."""
H, W = img_size
C = windows.shape[1]
x = windows.view(-1, H // window_size[0], W // window_size[1], C, window_size[0], window_size[1])
x = x.permute(0, 3, 1, 4, 2, 5).contiguous().view(-1, C, H, W)
return x
def grid_partition_nchw(x: torch.Tensor, grid_size: List[int]) -> torch.Tensor:
"""Grid partition for NCHW tensors."""
B, C, H, W = x.shape
_assert(H % grid_size[0] == 0, f'height {H} must be divisible by grid {grid_size[0]}')
_assert(W % grid_size[1] == 0, f'width {W} must be divisible by grid {grid_size[1]}')
x = x.view(B, C, grid_size[0], H // grid_size[0], grid_size[1], W // grid_size[1])
windows = x.permute(0, 3, 5, 1, 2, 4).contiguous().view(-1, C, grid_size[0], grid_size[1])
return windows
@register_notrace_function # reason: int argument is a Proxy
def grid_reverse_nchw(windows: torch.Tensor, grid_size: List[int], img_size: List[int]) -> torch.Tensor:
"""Reverse grid partition for NCHW tensors."""
H, W = img_size
C = windows.shape[1]
x = windows.view(-1, H // grid_size[0], W // grid_size[1], C, grid_size[0], grid_size[1])
x = x.permute(0, 3, 4, 1, 5, 2).contiguous().view(-1, C, H, W)
return x
class PartitionAttention2d(nn.Module):
"""Grid or Block partition + Attn + FFN.
'2D' NCHW tensor layout.
"""
def __init__(
self,
dim: int,
partition_type: str = 'block',
cfg: MaxxVitTransformerCfg = MaxxVitTransformerCfg(),
drop_path: float = 0.,
):
"""
Args:
dim: Input dimension.
partition_type: Partition type ('block' or 'grid').
cfg: Transformer block configuration.
drop_path: Drop path rate.
"""
super().__init__()
        norm_layer = partial(get_norm_layer(cfg.norm_layer), eps=cfg.norm_eps)  # NOTE this block is NCHW, not channels-last
act_layer = get_act_layer(cfg.act_layer)
self.partition_block = partition_type == 'block'
self.partition_size = to_2tuple(cfg.window_size if self.partition_block else cfg.grid_size)
rel_pos_cls = get_rel_pos_cls(cfg, self.partition_size)
self.norm1 = norm_layer(dim)
self.attn = Attention2d(
dim,
dim,
dim_head=cfg.dim_head,
bias=cfg.attn_bias,
head_first=cfg.head_first,
rel_pos_cls=rel_pos_cls,
attn_drop=cfg.attn_drop,
proj_drop=cfg.proj_drop,
)
self.ls1 = LayerScale2d(dim, init_values=cfg.init_values) if cfg.init_values else nn.Identity()
self.drop_path1 = DropPath(drop_path) if drop_path > 0. else nn.Identity()
self.norm2 = norm_layer(dim)
self.mlp = ConvMlp(
in_features=dim,
hidden_features=int(dim * cfg.expand_ratio),
act_layer=act_layer,
drop=cfg.proj_drop)
self.ls2 = LayerScale2d(dim, init_values=cfg.init_values) if cfg.init_values else nn.Identity()
self.drop_path2 = DropPath(drop_path) if drop_path > 0. else nn.Identity()
def _partition_attn(self, x: torch.Tensor) -> torch.Tensor:
img_size = x.shape[-2:]
if self.partition_block:
partitioned = window_partition_nchw(x, self.partition_size)
else:
partitioned = grid_partition_nchw(x, self.partition_size)
partitioned = self.attn(partitioned)
if self.partition_block:
x = window_reverse_nchw(partitioned, self.partition_size, img_size)
else:
x = grid_reverse_nchw(partitioned, self.partition_size, img_size)
return x
def forward(self, x: torch.Tensor) -> torch.Tensor:
x = x + self.drop_path1(self.ls1(self._partition_attn(self.norm1(x))))
x = x + self.drop_path2(self.ls2(self.mlp(self.norm2(x))))
return x
class MaxxVitBlock(nn.Module):
"""MaxVit conv, window partition + FFN , grid partition + FFN."""
def __init__(
self,
dim: int,
dim_out: int,
stride: int = 1,
conv_cfg: MaxxVitConvCfg = MaxxVitConvCfg(),
transformer_cfg: MaxxVitTransformerCfg = MaxxVitTransformerCfg(),
drop_path: float = 0.,
):
"""Initialize MaxxVitBlock.
Args:
dim: Input channel dimension.
dim_out: Output channel dimension.
stride: Stride for downsampling.
conv_cfg: Configuration for convolutional blocks.
transformer_cfg: Configuration for transformer blocks.
drop_path: Drop path rate.
"""
super().__init__()
self.nchw_attn = transformer_cfg.use_nchw_attn
conv_cls = ConvNeXtBlock if conv_cfg.block_type == 'convnext' else MbConvBlock
self.conv = conv_cls(dim, dim_out, stride=stride, cfg=conv_cfg, drop_path=drop_path)
attn_kwargs = dict(dim=dim_out, cfg=transformer_cfg, drop_path=drop_path)
partition_layer = PartitionAttention2d if self.nchw_attn else PartitionAttentionCl
self.attn_block = None if transformer_cfg.no_block_attn else partition_layer(**attn_kwargs)
self.attn_grid = partition_layer(partition_type='grid', **attn_kwargs)
def init_weights(self, scheme=''):
if self.attn_block is not None:
named_apply(partial(_init_transformer, scheme=scheme), self.attn_block)
named_apply(partial(_init_transformer, scheme=scheme), self.attn_grid)
named_apply(partial(_init_conv, scheme=scheme), self.conv)
def forward(self, x):
# NCHW format
x = self.conv(x)
if not self.nchw_attn:
x = x.permute(0, 2, 3, 1) # to NHWC (channels-last)
if self.attn_block is not None:
x = self.attn_block(x)
x = self.attn_grid(x)
if not self.nchw_attn:
x = x.permute(0, 3, 1, 2) # back to NCHW
return x
class ParallelMaxxVitBlock(nn.Module):
"""MaxVit block with parallel cat(window + grid), one FF.
Experimental timm block.
"""
def __init__(
self,
dim: int,
dim_out: int,
stride: int = 1,
num_conv: int = 2,
conv_cfg: MaxxVitConvCfg = MaxxVitConvCfg(),
transformer_cfg: MaxxVitTransformerCfg = MaxxVitTransformerCfg(),
drop_path: float = 0.,
):
"""
Args:
dim: Input dimension.
dim_out: Output dimension.
stride: Stride for first conv block.
num_conv: Number of convolution blocks.
conv_cfg: Convolution block configuration.
transformer_cfg: Transformer block configuration.
drop_path: Drop path rate.
"""
super().__init__()
conv_cls = ConvNeXtBlock if conv_cfg.block_type == 'convnext' else MbConvBlock
if num_conv > 1:
convs = [conv_cls(dim, dim_out, stride=stride, cfg=conv_cfg, drop_path=drop_path)]
# build distinct modules; list multiplication would alias one instance across blocks
convs += [conv_cls(dim_out, dim_out, cfg=conv_cfg, drop_path=drop_path) for _ in range(num_conv - 1)]
self.conv = nn.Sequential(*convs)
else:
self.conv = conv_cls(dim, dim_out, stride=stride, cfg=conv_cfg, drop_path=drop_path)
self.attn = ParallelPartitionAttention(dim=dim_out, cfg=transformer_cfg, drop_path=drop_path)
def init_weights(self, scheme: str = '') -> None:
named_apply(partial(_init_transformer, scheme=scheme), self.attn)
named_apply(partial(_init_conv, scheme=scheme), self.conv)
def forward(self, x: torch.Tensor) -> torch.Tensor:
x = self.conv(x)
x = x.permute(0, 2, 3, 1)
x = self.attn(x)
x = x.permute(0, 3, 1, 2)
return x
class MaxxVitStage(nn.Module):
"""MaxxVit stage consisting of mixed convolution and transformer blocks."""
def __init__(
self,
in_chs: int,
out_chs: int,
stride: int = 2,
depth: int = 4,
feat_size: Tuple[int, int] = (14, 14),
block_types: Union[str, Tuple[str]] = 'C',
transformer_cfg: MaxxVitTransformerCfg = MaxxVitTransformerCfg(),
conv_cfg: MaxxVitConvCfg = MaxxVitConvCfg(),
drop_path: Union[float, List[float]] = 0.,
):
"""
Args:
in_chs: Input channels.
out_chs: Output channels.
stride: Stride for first block.
depth: Number of blocks in stage.
feat_size: Feature map size.
block_types: Block types per block ('C' conv, 'T' transformer, 'M' MaxxVit, 'PM' parallel MaxxVit).
transformer_cfg: Transformer block configuration.
conv_cfg: Convolution block configuration.
drop_path: Drop path rate(s).
"""
super().__init__()
self.grad_checkpointing = False
block_types = extend_tuple(block_types, depth)
blocks = []
for i, t in enumerate(block_types):
block_stride = stride if i == 0 else 1
assert t in ('C', 'T', 'M', 'PM')
if t == 'C':
conv_cls = ConvNeXtBlock if conv_cfg.block_type == 'convnext' else MbConvBlock
blocks += [conv_cls(
in_chs,
out_chs,
stride=block_stride,
cfg=conv_cfg,
drop_path=drop_path[i],
)]
elif t == 'T':
rel_pos_cls = get_rel_pos_cls(transformer_cfg, feat_size)
blocks += [TransformerBlock2d(
in_chs,
out_chs,
stride=block_stride,
rel_pos_cls=rel_pos_cls,
cfg=transformer_cfg,
drop_path=drop_path[i],
)]
elif t == 'M':
blocks += [MaxxVitBlock(
in_chs,
out_chs,
stride=block_stride,
conv_cfg=conv_cfg,
transformer_cfg=transformer_cfg,
drop_path=drop_path[i],
)]
elif t == 'PM':
blocks += [ParallelMaxxVitBlock(
in_chs,
out_chs,
stride=block_stride,
conv_cfg=conv_cfg,
transformer_cfg=transformer_cfg,
drop_path=drop_path[i],
)]
in_chs = out_chs
self.blocks = nn.Sequential(*blocks)
def forward(self, x: torch.Tensor) -> torch.Tensor:
if self.grad_checkpointing and not torch.jit.is_scripting():
x = checkpoint_seq(self.blocks, x)
else:
x = self.blocks(x)
return x
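# The per-stage layout above is driven by the block_types spec, padded out to
# `depth` entries before the loop. A minimal standalone sketch of that padding
# (pad_types is a hypothetical stand-in for timm's extend_tuple helper):

```python
def pad_types(block_types, depth):
    # Normalize a single type string or a tuple, then repeat the final
    # entry until there is one block type per block in the stage.
    types = (block_types,) if isinstance(block_types, str) else tuple(block_types)
    return types + (types[-1],) * (depth - len(types))

print(pad_types('M', 4))         # a pure-MaxVit stage -> ('M', 'M', 'M', 'M')
print(pad_types(('C', 'T'), 4))  # convs first, then transformers -> ('C', 'T', 'T', 'T')
```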
class Stem(nn.Module):
"""Stem layer for feature extraction."""
def __init__(
self,
in_chs: int,
out_chs: int,
kernel_size: int = 3,
padding: str = '',
bias: bool = False,
act_layer: str = 'gelu',
norm_layer: str = 'batchnorm2d',
norm_eps: float = 1e-5,
):
"""
Args:
in_chs: Input channels.
out_chs: Output channels.
kernel_size: Kernel size for convolutions.
padding: Padding mode.
bias: Whether to use bias.
act_layer: Activation layer.
norm_layer: Normalization layer.
norm_eps: Normalization epsilon.
"""
super().__init__()
if not isinstance(out_chs, (list, tuple)):
out_chs = to_2tuple(out_chs)
norm_act_layer = partial(get_norm_act_layer(norm_layer, act_layer), eps=norm_eps)
self.out_chs = out_chs[-1]
self.stride = 2
self.conv1 = create_conv2d(in_chs, out_chs[0], kernel_size, stride=2, padding=padding, bias=bias)
self.norm1 = norm_act_layer(out_chs[0])
self.conv2 = create_conv2d(out_chs[0], out_chs[1], kernel_size, stride=1, padding=padding, bias=bias)
def init_weights(self, scheme: str = '') -> None:
named_apply(partial(_init_conv, scheme=scheme), self)
def forward(self, x: torch.Tensor) -> torch.Tensor:
x = self.conv1(x)
x = self.norm1(x)
x = self.conv2(x)
return x
def cfg_window_size(cfg: MaxxVitTransformerCfg, img_size: Tuple[int, int]) -> MaxxVitTransformerCfg:
"""Configure window size based on image size and partition ratio."""
if cfg.window_size is not None:
assert cfg.grid_size
return cfg
partition_size = img_size[0] // cfg.partition_ratio, img_size[1] // cfg.partition_ratio
cfg = replace(cfg, window_size=partition_size, grid_size=partition_size)
return cfg
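# Concrete check of the ratio used above: with the default partition_ratio of 32,
# a 224x224 input yields 7x7 windows/grids, and 384x384 yields 12x12. A
# standalone sketch of the same integer division:

```python
def partition_size(img_size, partition_ratio=32):
    # window/grid size is the image size divided by a fixed ratio,
    # so larger inputs get proportionally larger partitions
    return img_size[0] // partition_ratio, img_size[1] // partition_ratio

print(partition_size((224, 224)))  # -> (7, 7)
print(partition_size((384, 384)))  # -> (12, 12)
```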
def _overlay_kwargs(cfg: MaxxVitCfg, **kwargs: Any) -> MaxxVitCfg:
"""Overlay keyword arguments onto configuration."""
transformer_kwargs = {}
conv_kwargs = {}
base_kwargs = {}
for k, v in kwargs.items():
if k.startswith('transformer_'):
transformer_kwargs[k.replace('transformer_', '')] = v
elif k.startswith('conv_'):
conv_kwargs[k.replace('conv_', '')] = v
else:
base_kwargs[k] = v
cfg = replace(
cfg,
transformer_cfg=replace(cfg.transformer_cfg, **transformer_kwargs),
conv_cfg=replace(cfg.conv_cfg, **conv_kwargs),
**base_kwargs
)
return cfg
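# The prefix routing above can be shown without the config dataclasses;
# split_kwargs below is a hypothetical mirror of the same key-splitting logic:

```python
def split_kwargs(**kwargs):
    # Route keys by prefix, as _overlay_kwargs does: 'transformer_*' and
    # 'conv_*' keys go to their sub-configs, everything else to the base cfg.
    transformer_kwargs, conv_kwargs, base_kwargs = {}, {}, {}
    for k, v in kwargs.items():
        if k.startswith('transformer_'):
            transformer_kwargs[k[len('transformer_'):]] = v
        elif k.startswith('conv_'):
            conv_kwargs[k[len('conv_'):]] = v
        else:
            base_kwargs[k] = v
    return transformer_kwargs, conv_kwargs, base_kwargs

t, c, b = split_kwargs(transformer_dim_head=64, conv_act_layer='silu', depths=(2, 2))
print(t, c, b)  # -> {'dim_head': 64} {'act_layer': 'silu'} {'depths': (2, 2)}
```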
class MaxxVit(nn.Module):
"""CoaTNet + MaxVit base model.
Highly configurable for different block compositions, tensor layouts, pooling types.
"""
def __init__(
self,
cfg: MaxxVitCfg,
img_size: Union[int, Tuple[int, int]] = 224,
in_chans: int = 3,
num_classes: int = 1000,
global_pool: str = 'avg',
drop_rate: float = 0.,
drop_path_rate: float = 0.,
**kwargs: Any,
):
"""
Args:
cfg: Model configuration.
img_size: Input image size.
in_chans: Number of input channels.
num_classes: Number of classification classes.
global_pool: Global pooling type.
drop_rate: Dropout rate.
drop_path_rate: Drop path rate.
**kwargs: Additional keyword arguments to overlay on config.
"""
super().__init__()
img_size = to_2tuple(img_size)
if kwargs:
cfg = _overlay_kwargs(cfg, **kwargs)
transformer_cfg = cfg_window_size(cfg.transformer_cfg, img_size)
self.num_classes = num_classes
self.global_pool = global_pool
self.num_features = self.embed_dim = cfg.embed_dim[-1]
self.drop_rate = drop_rate
self.grad_checkpointing = False
self.feature_info = []
self.stem = Stem(
in_chs=in_chans,
out_chs=cfg.stem_width,
padding=cfg.conv_cfg.padding,
bias=cfg.stem_bias,
act_layer=cfg.conv_cfg.act_layer,
norm_layer=cfg.conv_cfg.norm_layer,
norm_eps=cfg.conv_cfg.norm_eps,
)
stride = self.stem.stride
self.feature_info += [dict(num_chs=self.stem.out_chs, reduction=2, module='stem')]
feat_size = tuple([i // s for i, s in zip(img_size, to_2tuple(stride))])
num_stages = len(cfg.embed_dim)
assert len(cfg.depths) == num_stages
dpr = [x.tolist() for x in torch.linspace(0, drop_path_rate, sum(cfg.depths)).split(cfg.depths)]
in_chs = self.stem.out_chs
stages = []
for i in range(num_stages):
stage_stride = 2
out_chs = cfg.embed_dim[i]
feat_size = tuple([(r - 1) // stage_stride + 1 for r in feat_size])
stages += [MaxxVitStage(
in_chs,
out_chs,
depth=cfg.depths[i],
block_types=cfg.block_type[i],
conv_cfg=cfg.conv_cfg,
transformer_cfg=transformer_cfg,
feat_size=feat_size,
drop_path=dpr[i],
)]
stride *= stage_stride
in_chs = out_chs
self.feature_info += [dict(num_chs=out_chs, reduction=stride, module=f'stages.{i}')]
self.stages = nn.Sequential(*stages)
final_norm_layer = partial(get_norm_layer(cfg.transformer_cfg.norm_layer), eps=cfg.transformer_cfg.norm_eps)
if cfg.head_hidden_size:
self.norm = nn.Identity()
self.head_hidden_size = cfg.head_hidden_size
self.head = NormMlpClassifierHead(
self.num_features,
num_classes,
hidden_size=self.head_hidden_size,
pool_type=global_pool,
drop_rate=drop_rate,
norm_layer=final_norm_layer,
)
else:
# standard classifier head w/ norm, pooling, fc classifier
self.head_hidden_size = self.num_features
self.norm = final_norm_layer(self.num_features)
self.head = ClassifierHead(self.num_features, num_classes, pool_type=global_pool, drop_rate=drop_rate)
# Weight init (default PyTorch init works well for AdamW if scheme not set)
assert cfg.weight_init in ('', 'normal', 'trunc_normal', 'xavier_normal', 'vit_eff')
if cfg.weight_init:
named_apply(partial(self._init_weights, scheme=cfg.weight_init), self)
def _init_weights(self, module: nn.Module, name: str, scheme: str = '') -> None:
if hasattr(module, 'init_weights'):
try:
module.init_weights(scheme=scheme)
except TypeError:
module.init_weights()
@torch.jit.ignore
def no_weight_decay(self) -> Set[str]:
return {
k for k, _ in self.named_parameters()
if any(n in k for n in ["relative_position_bias_table", "rel_pos.mlp"])}
@torch.jit.ignore
def group_matcher(self, coarse: bool = False) -> Dict[str, Any]:
matcher = dict(
stem=r'^stem', # stem and embed
blocks=[(r'^stages\.(\d+)', None), (r'^norm', (99999,))]
)
return matcher
@torch.jit.ignore
def set_grad_checkpointing(self, enable: bool = True) -> None:
for s in self.stages:
s.grad_checkpointing = enable
@torch.jit.ignore
def get_classifier(self) -> nn.Module:
return self.head.fc
def reset_classifier(self, num_classes: int, global_pool: Optional[str] = None) -> None:
self.num_classes = num_classes
self.head.reset(num_classes, global_pool)
def forward_intermediates(
self,
x: torch.Tensor,
indices: Optional[Union[int, List[int]]] = None,
norm: bool = False,
stop_early: bool = False,
output_fmt: str = 'NCHW',
intermediates_only: bool = False,
) -> Union[List[torch.Tensor], Tuple[torch.Tensor, List[torch.Tensor]]]:
""" Forward features that returns intermediates.
Args:
x: Input image tensor
indices: Take last n blocks if int, all if None, select matching indices if sequence
norm: Apply norm layer to compatible intermediates
stop_early: Stop iterating over blocks when last desired intermediate hit
output_fmt: Shape of intermediate feature outputs
intermediates_only: Only return intermediate features
Returns:
Intermediate features only, or a tuple of (final features, intermediate features).
"""
assert output_fmt in ('NCHW',), 'Output shape must be NCHW.'
intermediates = []
take_indices, max_index = feature_take_indices(len(self.stages) + 1, indices)
# forward pass
feat_idx = 0 # stem is index 0
x = self.stem(x)
if feat_idx in take_indices:
intermediates.append(x)
last_idx = len(self.stages)
if torch.jit.is_scripting() or not stop_early: # can't slice blocks in torchscript
stages = self.stages
else:
stages = self.stages[:max_index]
for stage in stages:
feat_idx += 1
x = stage(x)
if feat_idx in take_indices:
if norm and feat_idx == last_idx:
x_inter = self.norm(x) # applying final norm to last intermediate
else:
x_inter = x
intermediates.append(x_inter)
if intermediates_only:
return intermediates
if feat_idx == last_idx:
x = self.norm(x)
return x, intermediates
def prune_intermediate_layers(
self,
indices: Union[int, List[int]] = 1,
prune_norm: bool = False,
prune_head: bool = True,
) -> Tuple[int, ...]:
"""Prune layers not required for specified intermediates."""
take_indices, max_index = feature_take_indices(len(self.stages) + 1, indices)
self.stages = self.stages[:max_index] # truncate blocks w/ stem as idx 0
if prune_norm:
self.norm = nn.Identity()
if prune_head:
self.reset_classifier(0, '')  # resets head in place; its None return must not be assigned to self.head
return take_indices
def forward_features(self, x: torch.Tensor) -> torch.Tensor:
x = self.stem(x)
x = self.stages(x)
x = self.norm(x)
return x
def forward_head(self, x: torch.Tensor, pre_logits: bool = False) -> torch.Tensor:
return self.head(x, pre_logits=pre_logits) if pre_logits else self.head(x)
def forward(self, x: torch.Tensor) -> torch.Tensor:
x = self.forward_features(x)
x = self.forward_head(x)
return x
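# Tracing spatial reduction through the model: the stem halves resolution once
# and each of the (typically four) stages halves it again, for a total
# reduction of 2**5 = 32. A sketch of the (reduction, size) pairs that end up
# in feature_info, using the same ceil-division update as the stage loop:

```python
def feature_sizes(img_size=224, num_stages=4):
    # stem stride 2, then stride 2 at the start of every stage
    sizes = []
    stride = 2
    size = img_size // 2                  # stem
    sizes.append((stride, size))
    for _ in range(num_stages):
        size = (size - 1) // 2 + 1        # ceil(size / 2), as in the stage loop
        stride *= 2
        sizes.append((stride, size))
    return sizes

print(feature_sizes())  # -> [(2, 112), (4, 56), (8, 28), (16, 14), (32, 7)]
```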
def _rw_coat_cfg(
stride_mode: str = 'pool',
pool_type: str = 'avg2',
conv_output_bias: bool = False,
conv_attn_early: bool = False,
conv_attn_act_layer: str = 'relu',
conv_norm_layer: str = '',
transformer_shortcut_bias: bool = True,
transformer_norm_layer: str = 'layernorm2d',
transformer_norm_layer_cl: str = 'layernorm',
init_values: Optional[float] = None,
rel_pos_type: str = 'bias',
rel_pos_dim: int = 512,
) -> Dict[str, Any]:
"""RW variant configuration for CoAtNet models.
These models were created and trained before seeing https://github.com/google-research/maxvit
Common differences for initial timm models:
- pre-norm layer in MbConv included an activation after norm
- mbconv expansion calculated from input instead of output chs
- mbconv shortcut and final 1x1 conv did not have a bias
- SE act layer was relu, not silu
- mbconv uses silu in timm, not gelu
- expansion in attention block done via output proj, not input proj
Variable differences (evolved over training initial models):
- avg pool with kernel_size=2 favoured for downsampling (instead of max pool as in CoAtNet)
- SE attention was between conv2 and norm/act
- default to avg pool for mbconv downsample instead of 1x1 or dw conv
- transformer block shortcut has no bias
"""
return dict(
conv_cfg=MaxxVitConvCfg(
stride_mode=stride_mode,
pool_type=pool_type,
pre_norm_act=True,
expand_output=False,
output_bias=conv_output_bias,
attn_early=conv_attn_early,
attn_act_layer=conv_attn_act_layer,
act_layer='silu',
norm_layer=conv_norm_layer,
),
transformer_cfg=MaxxVitTransformerCfg(
expand_first=False,
shortcut_bias=transformer_shortcut_bias,
pool_type=pool_type,
init_values=init_values,
norm_layer=transformer_norm_layer,
norm_layer_cl=transformer_norm_layer_cl,
rel_pos_type=rel_pos_type,
rel_pos_dim=rel_pos_dim,
),
)
def _rw_max_cfg(
stride_mode: str = 'dw',
pool_type: str = 'avg2',
conv_output_bias: bool = False,
conv_attn_ratio: float = 1 / 16,
conv_norm_layer: str = '',
transformer_norm_layer: str = 'layernorm2d',
transformer_norm_layer_cl: str = 'layernorm',
window_size: Optional[Tuple[int, int]] = None,
dim_head: int = 32,
init_values: Optional[float] = None,
rel_pos_type: str = 'bias',
rel_pos_dim: int = 512,
) -> Dict[str, Any]:
"""RW variant configuration for MaxViT models.
These models were created and trained before seeing https://github.com/google-research/maxvit
Differences of initial timm models:
- mbconv expansion calculated from input instead of output chs
- mbconv shortcut and final 1x1 conv did not have a bias
- mbconv uses silu in timm, not gelu
- expansion in attention block done via output proj, not input proj
"""
return dict(
conv_cfg=MaxxVitConvCfg(
stride_mode=stride_mode,
pool_type=pool_type,
expand_output=False,
output_bias=conv_output_bias,
attn_ratio=conv_attn_ratio,
act_layer='silu',
norm_layer=conv_norm_layer,
),
transformer_cfg=MaxxVitTransformerCfg(
expand_first=False,
pool_type=pool_type,
dim_head=dim_head,
window_size=window_size,
init_values=init_values,
norm_layer=transformer_norm_layer,
norm_layer_cl=transformer_norm_layer_cl,
rel_pos_type=rel_pos_type,
rel_pos_dim=rel_pos_dim,
),
)
def _next_cfg(
stride_mode: str = 'dw',
pool_type: str = 'avg2',
conv_norm_layer: str = 'layernorm2d',
conv_norm_layer_cl: str = 'layernorm',
transformer_norm_layer: str = 'layernorm2d',
transformer_norm_layer_cl: str = 'layernorm',
window_size: Optional[Tuple[int, int]] = None,
no_block_attn: bool = False,
init_values: Union[float, Tuple[float, float]] = 1e-6,
rel_pos_type: str = 'mlp', # MLP by default for maxxvit
rel_pos_dim: int = 512,
) -> Dict[str, Any]:
"""Configuration for experimental ConvNeXt-based MaxxViT models."""
init_values = to_2tuple(init_values)
return dict(
conv_cfg=MaxxVitConvCfg(
block_type='convnext',
stride_mode=stride_mode,
pool_type=pool_type,
expand_output=False,
init_values=init_values[0],
norm_layer=conv_norm_layer,
norm_layer_cl=conv_norm_layer_cl,
),
transformer_cfg=MaxxVitTransformerCfg(
expand_first=False,
pool_type=pool_type,
window_size=window_size,
no_block_attn=no_block_attn, # enabled for MaxxViT-V2
init_values=init_values[1],
norm_layer=transformer_norm_layer,
norm_layer_cl=transformer_norm_layer_cl,
rel_pos_type=rel_pos_type,
rel_pos_dim=rel_pos_dim,
),
)
def _tf_cfg() -> Dict[str, Any]:
"""Configuration matching TensorFlow MaxViT models."""
return dict(
conv_cfg=MaxxVitConvCfg(
norm_eps=1e-3,
act_layer='gelu_tanh',
padding='same',
),
transformer_cfg=MaxxVitTransformerCfg(
norm_eps=1e-5,
act_layer='gelu_tanh',
head_first=False, # heads are interleaved (q_nh, q_hdim, k_nh, k_hdim, ....)
rel_pos_type='bias_tf',
),
)
model_cfgs = dict(
# timm specific CoAtNet configs
coatnet_pico_rw=MaxxVitCfg(
embed_dim=(64, 128, 256, 512),
depths=(2, 3, 5, 2),
stem_width=(32, 64),
**_rw_max_cfg( # using newer max defaults here
conv_output_bias=True,
conv_attn_ratio=0.25,
),
),
coatnet_nano_rw=MaxxVitCfg(
embed_dim=(64, 128, 256, 512),
depths=(3, 4, 6, 3),
stem_width=(32, 64),
**_rw_max_cfg( # using newer max defaults here
stride_mode='pool',
conv_output_bias=True,
conv_attn_ratio=0.25,
),
),
coatnet_0_rw=MaxxVitCfg(
embed_dim=(96, 192, 384, 768),
depths=(2, 3, 7, 2), # deeper than paper '0' model
stem_width=(32, 64),
**_rw_coat_cfg(
conv_attn_early=True,
transformer_shortcut_bias=False,
),
),
coatnet_1_rw=MaxxVitCfg(
embed_dim=(96, 192, 384, 768),
depths=(2, 6, 14, 2),
stem_width=(32, 64),
**_rw_coat_cfg(
stride_mode='dw',
conv_attn_early=True,
transformer_shortcut_bias=False,
)
),
coatnet_2_rw=MaxxVitCfg(
embed_dim=(128, 256, 512, 1024),
depths=(2, 6, 14, 2),
stem_width=(64, 128),
**_rw_coat_cfg(
stride_mode='dw',
conv_attn_act_layer='silu',
#init_values=1e-6,
),
),
coatnet_3_rw=MaxxVitCfg(
embed_dim=(192, 384, 768, 1536),
depths=(2, 6, 14, 2),
stem_width=(96, 192),
**_rw_coat_cfg(
stride_mode='dw',
conv_attn_act_layer='silu',
init_values=1e-6,
),
),
# Experimental CoAtNet configs w/ ImageNet-1k train (different norm layers, MLP rel-pos)
coatnet_bn_0_rw=MaxxVitCfg(
embed_dim=(96, 192, 384, 768),
depths=(2, 3, 7, 2), # deeper than paper '0' model
stem_width=(32, 64),
**_rw_coat_cfg(
stride_mode='dw',
conv_attn_early=True,
transformer_shortcut_bias=False,
transformer_norm_layer='batchnorm2d',
)
),
coatnet_rmlp_nano_rw=MaxxVitCfg(
embed_dim=(64, 128, 256, 512),
depths=(3, 4, 6, 3),
stem_width=(32, 64),
**_rw_max_cfg(
conv_output_bias=True,
conv_attn_ratio=0.25,
rel_pos_type='mlp',
rel_pos_dim=384,
),
),
coatnet_rmlp_0_rw=MaxxVitCfg(
embed_dim=(96, 192, 384, 768),
depths=(2, 3, 7, 2), # deeper than paper '0' model
stem_width=(32, 64),
**_rw_coat_cfg(
stride_mode='dw',
rel_pos_type='mlp',
),
),
coatnet_rmlp_1_rw=MaxxVitCfg(
embed_dim=(96, 192, 384, 768),
depths=(2, 6, 14, 2),
stem_width=(32, 64),
**_rw_coat_cfg(
pool_type='max',
conv_attn_early=True,
transformer_shortcut_bias=False,
rel_pos_type='mlp',
rel_pos_dim=384, # was supposed to be 512, woops
),
),
coatnet_rmlp_1_rw2=MaxxVitCfg(
embed_dim=(96, 192, 384, 768),
depths=(2, 6, 14, 2),
stem_width=(32, 64),
**_rw_coat_cfg(
stride_mode='dw',
rel_pos_type='mlp',
rel_pos_dim=512,
),
),
coatnet_rmlp_2_rw=MaxxVitCfg(
embed_dim=(128, 256, 512, 1024),
depths=(2, 6, 14, 2),
stem_width=(64, 128),
**_rw_coat_cfg(
stride_mode='dw',
conv_attn_act_layer='silu',
init_values=1e-6,
rel_pos_type='mlp'
),
),
coatnet_rmlp_3_rw=MaxxVitCfg(
embed_dim=(192, 384, 768, 1536),
depths=(2, 6, 14, 2),
stem_width=(96, 192),
**_rw_coat_cfg(
stride_mode='dw',
conv_attn_act_layer='silu',
init_values=1e-6,
rel_pos_type='mlp'
),
),
coatnet_nano_cc=MaxxVitCfg(
embed_dim=(64, 128, 256, 512),
depths=(3, 4, 6, 3),
stem_width=(32, 64),
block_type=('C', 'C', ('C', 'T'), ('C', 'T')),
**_rw_coat_cfg(),
),
coatnext_nano_rw=MaxxVitCfg(
embed_dim=(64, 128, 256, 512),
depths=(3, 4, 6, 3),
stem_width=(32, 64),
weight_init='normal',
**_next_cfg(
rel_pos_type='bias',
init_values=(1e-5, None)
),
),
# Trying to be like the CoAtNet paper configs
coatnet_0=MaxxVitCfg(
embed_dim=(96, 192, 384, 768),
depths=(2, 3, 5, 2),
stem_width=64,
head_hidden_size=768,
),
coatnet_1=MaxxVitCfg(
embed_dim=(96, 192, 384, 768),
depths=(2, 6, 14, 2),
stem_width=64,
head_hidden_size=768,
),
coatnet_2=MaxxVitCfg(
embed_dim=(128, 256, 512, 1024),
depths=(2, 6, 14, 2),
stem_width=128,
head_hidden_size=1024,
),
coatnet_3=MaxxVitCfg(
embed_dim=(192, 384, 768, 1536),
depths=(2, 6, 14, 2),
stem_width=192,
head_hidden_size=1536,
),
coatnet_4=MaxxVitCfg(
embed_dim=(192, 384, 768, 1536),
depths=(2, 12, 28, 2),
stem_width=192,
head_hidden_size=1536,
),
coatnet_5=MaxxVitCfg(
embed_dim=(256, 512, 1280, 2048),
depths=(2, 12, 28, 2),
stem_width=192,
head_hidden_size=2048,
),
# Experimental MaxVit configs
maxvit_pico_rw=MaxxVitCfg(
embed_dim=(32, 64, 128, 256),
depths=(2, 2, 5, 2),
block_type=('M',) * 4,
stem_width=(24, 32),
**_rw_max_cfg(),
),
maxvit_nano_rw=MaxxVitCfg(
embed_dim=(64, 128, 256, 512),
depths=(1, 2, 3, 1),
block_type=('M',) * 4,
stem_width=(32, 64),
**_rw_max_cfg(),
),
maxvit_tiny_rw=MaxxVitCfg(
embed_dim=(64, 128, 256, 512),
depths=(2, 2, 5, 2),
block_type=('M',) * 4,
stem_width=(32, 64),
**_rw_max_cfg(),
),
maxvit_tiny_pm=MaxxVitCfg(
embed_dim=(64, 128, 256, 512),
depths=(2, 2, 5, 2),
block_type=('PM',) * 4,
stem_width=(32, 64),
**_rw_max_cfg(),
),
maxvit_rmlp_pico_rw=MaxxVitCfg(
embed_dim=(32, 64, 128, 256),
depths=(2, 2, 5, 2),
block_type=('M',) * 4,
stem_width=(24, 32),
**_rw_max_cfg(rel_pos_type='mlp'),
),
maxvit_rmlp_nano_rw=MaxxVitCfg(
embed_dim=(64, 128, 256, 512),
depths=(1, 2, 3, 1),
block_type=('M',) * 4,
stem_width=(32, 64),
**_rw_max_cfg(rel_pos_type='mlp'),
),
maxvit_rmlp_tiny_rw=MaxxVitCfg(
embed_dim=(64, 128, 256, 512),
depths=(2, 2, 5, 2),
block_type=('M',) * 4,
stem_width=(32, 64),
**_rw_max_cfg(rel_pos_type='mlp'),
),
maxvit_rmlp_small_rw=MaxxVitCfg(
embed_dim=(96, 192, 384, 768),
depths=(2, 2, 5, 2),
block_type=('M',) * 4,
stem_width=(32, 64),
**_rw_max_cfg(
rel_pos_type='mlp',
init_values=1e-6,
),
),
maxvit_rmlp_base_rw=MaxxVitCfg(
embed_dim=(96, 192, 384, 768),
depths=(2, 6, 14, 2),
block_type=('M',) * 4,
stem_width=(32, 64),
head_hidden_size=768,
**_rw_max_cfg(
rel_pos_type='mlp',
),
),
maxxvit_rmlp_nano_rw=MaxxVitCfg(
embed_dim=(64, 128, 256, 512),
depths=(1, 2, 3, 1),
block_type=('M',) * 4,
stem_width=(32, 64),
weight_init='normal',
**_next_cfg(),
),
maxxvit_rmlp_tiny_rw=MaxxVitCfg(
embed_dim=(64, 128, 256, 512),
depths=(2, 2, 5, 2),
block_type=('M',) * 4,
stem_width=(32, 64),
**_next_cfg(),
),
maxxvit_rmlp_small_rw=MaxxVitCfg(
embed_dim=(96, 192, 384, 768),
depths=(2, 2, 5, 2),
block_type=('M',) * 4,
stem_width=(48, 96),
**_next_cfg(),
),
maxxvitv2_nano_rw=MaxxVitCfg(
embed_dim=(96, 192, 384, 768),
depths=(1, 2, 3, 1),
block_type=('M',) * 4,
stem_width=(48, 96),
weight_init='normal',
**_next_cfg(
no_block_attn=True,
rel_pos_type='bias',
),
),
maxxvitv2_rmlp_base_rw=MaxxVitCfg(
embed_dim=(128, 256, 512, 1024),
depths=(2, 6, 12, 2),
block_type=('M',) * 4,
stem_width=(64, 128),
**_next_cfg(
no_block_attn=True,
),
),
maxxvitv2_rmlp_large_rw=MaxxVitCfg(
embed_dim=(160, 320, 640, 1280),
depths=(2, 6, 16, 2),
block_type=('M',) * 4,
stem_width=(80, 160),
head_hidden_size=1280,
**_next_cfg(
no_block_attn=True,
),
),
# Trying to be like the MaxViT paper configs
maxvit_tiny_tf=MaxxVitCfg(
embed_dim=(64, 128, 256, 512),
depths=(2, 2, 5, 2),
block_type=('M',) * 4,
stem_width=64,
stem_bias=True,
head_hidden_size=512,
**_tf_cfg(),
),
maxvit_small_tf=MaxxVitCfg(
embed_dim=(96, 192, 384, 768),
depths=(2, 2, 5, 2),
block_type=('M',) * 4,
stem_width=64,
stem_bias=True,
head_hidden_size=768,
**_tf_cfg(),
),
maxvit_base_tf=MaxxVitCfg(
embed_dim=(96, 192, 384, 768),
depths=(2, 6, 14, 2),
block_type=('M',) * 4,
stem_width=64,
stem_bias=True,
head_hidden_size=768,
**_tf_cfg(),
),
maxvit_large_tf=MaxxVitCfg(
embed_dim=(128, 256, 512, 1024),
depths=(2, 6, 14, 2),
block_type=('M',) * 4,
stem_width=128,
stem_bias=True,
head_hidden_size=1024,
**_tf_cfg(),
),
maxvit_xlarge_tf=MaxxVitCfg(
embed_dim=(192, 384, 768, 1536),
depths=(2, 6, 14, 2),
block_type=('M',) * 4,
stem_width=192,
stem_bias=True,
head_hidden_size=1536,
**_tf_cfg(),
),
)
def checkpoint_filter_fn(state_dict: Dict[str, torch.Tensor], model: nn.Module) -> Dict[str, torch.Tensor]:
"""Filter checkpoint state dict for compatibility."""
model_state_dict = model.state_dict()
out_dict = {}
for k, v in state_dict.items():
if k.endswith('relative_position_bias_table'):
m = model.get_submodule(k[:-29])
if v.shape != m.relative_position_bias_table.shape or m.window_size[0] != m.window_size[1]:
v = resize_rel_pos_bias_table(
v,
new_window_size=m.window_size,
new_bias_shape=m.relative_position_bias_table.shape,
)
if k in model_state_dict and v.ndim != model_state_dict[k].ndim and v.numel() == model_state_dict[k].numel():
# adapt between conv2d / linear layers
assert v.ndim in (2, 4)
v = v.reshape(model_state_dict[k].shape)
out_dict[k] = v
return out_dict
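# The k[:-29] slice above recovers the owning module's path: 29 is the length
# of 'relative_position_bias_table' plus the joining dot. A small sketch of
# that parent-path computation (pure string handling, no model needed):

```python
SUFFIX = 'relative_position_bias_table'

def parent_module_path(key):
    # strip '.relative_position_bias_table' (29 chars) to get the module path
    assert key.endswith(SUFFIX)
    return key[:-(len(SUFFIX) + 1)]

print(len(SUFFIX) + 1)  # -> 29, matching the k[:-29] slice
print(parent_module_path('stages.0.blocks.1.attn.rel_pos.relative_position_bias_table'))
```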
def _create_maxxvit(variant: str, cfg_variant: Optional[str] = None, pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""Create a MaxxVit model variant."""
if cfg_variant is None:
if variant in model_cfgs:
cfg_variant = variant
else:
cfg_variant = '_'.join(variant.split('_')[:-1])
return build_model_with_cfg(
MaxxVit, variant, pretrained,
model_cfg=model_cfgs[cfg_variant],
feature_cfg=dict(flatten_sequential=True),
pretrained_filter_fn=checkpoint_filter_fn,
**kwargs)
def _cfg(url: str = '', **kwargs: Any) -> Dict[str, Any]:
"""Create a default configuration dict."""
return {
'url': url, 'num_classes': 1000, 'input_size': (3, 224, 224), 'pool_size': (7, 7),
'crop_pct': 0.95, 'interpolation': 'bicubic',
'mean': (0.5, 0.5, 0.5), 'std': (0.5, 0.5, 0.5),
'first_conv': 'stem.conv1', 'classifier': 'head.fc',
'fixed_input_size': True,
**kwargs
}
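# Per-weight tags (e.g. '.sw_in1k', '.sw_in12k') override these defaults via
# **kwargs. A standalone sketch of the merge behaviour (make_cfg mirrors the
# structure of _cfg above with a reduced key set):

```python
def make_cfg(url='', **kwargs):
    # later **kwargs entries override the defaults, exactly as in _cfg
    return {'url': url, 'num_classes': 1000, 'crop_pct': 0.95, **kwargs}

cfg = make_cfg(num_classes=11821, crop_pct=0.9)  # e.g. an in12k pretrain entry
print(cfg['num_classes'], cfg['crop_pct'])       # -> 11821 0.9
```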
default_cfgs = generate_default_cfgs({
# timm specific CoAtNet configs, ImageNet-1k pretrain, fixed rel-pos
'coatnet_pico_rw_224.untrained': _cfg(url=''),
'coatnet_nano_rw_224.sw_in1k': _cfg(
hf_hub_id='timm/',
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights-maxx/coatnet_nano_rw_224_sw-f53093b4.pth',
crop_pct=0.9),
'coatnet_0_rw_224.sw_in1k': _cfg(
hf_hub_id='timm/',
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights-maxx/coatnet_0_rw_224_sw-a6439706.pth'),
'coatnet_1_rw_224.sw_in1k': _cfg(
hf_hub_id='timm/',
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights-maxx/coatnet_1_rw_224_sw-5cae1ea8.pth'
),
# timm specific CoAtNet configs, ImageNet-12k pretrain w/ 1k fine-tune, fixed rel-pos
'coatnet_2_rw_224.sw_in12k_ft_in1k': _cfg(
hf_hub_id='timm/'),
#'coatnet_3_rw_224.untrained': _cfg(url=''),
# Experimental CoAtNet configs w/ ImageNet-12k pretrain -> 1k fine-tune (different norm layers, MLP rel-pos)
'coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k': _cfg(
hf_hub_id='timm/'),
'coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k': _cfg(
hf_hub_id='timm/'),
'coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k': _cfg(
hf_hub_id='timm/',
input_size=(3, 384, 384), pool_size=(12, 12), crop_pct=1.0, crop_mode='squash'),
# Experimental CoAtNet configs w/ ImageNet-1k train (different norm layers, MLP rel-pos)
'coatnet_bn_0_rw_224.sw_in1k': _cfg(
hf_hub_id='timm/',
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights-maxx/coatnet_bn_0_rw_224_sw-c228e218.pth',
mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD,
crop_pct=0.95),
'coatnet_rmlp_nano_rw_224.sw_in1k': _cfg(
hf_hub_id='timm/',
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights-maxx/coatnet_rmlp_nano_rw_224_sw-bd1d51b3.pth',
crop_pct=0.9),
'coatnet_rmlp_0_rw_224.untrained': _cfg(url=''),
'coatnet_rmlp_1_rw_224.sw_in1k': _cfg(
hf_hub_id='timm/',
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights-maxx/coatnet_rmlp_1_rw_224_sw-9051e6c3.pth'),
'coatnet_rmlp_2_rw_224.sw_in1k': _cfg(
hf_hub_id='timm/',
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights-maxx/coatnet_rmlp_2_rw_224_sw-5ccfac55.pth'),
'coatnet_rmlp_3_rw_224.untrained': _cfg(url=''),
'coatnet_nano_cc_224.untrained': _cfg(url=''),
'coatnext_nano_rw_224.sw_in1k': _cfg(
hf_hub_id='timm/',
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights-maxx/coatnext_nano_rw_224_ad-22cb71c2.pth',
crop_pct=0.9),
# ImageNet-12k pretrain CoAtNet
'coatnet_2_rw_224.sw_in12k': _cfg(
hf_hub_id='timm/',
num_classes=11821),
'coatnet_3_rw_224.sw_in12k': _cfg(
hf_hub_id='timm/',
num_classes=11821),
'coatnet_rmlp_1_rw2_224.sw_in12k': _cfg(
hf_hub_id='timm/',
num_classes=11821),
'coatnet_rmlp_2_rw_224.sw_in12k': _cfg(
hf_hub_id='timm/',
num_classes=11821),
# Trying to be like the CoAtNet paper configs (will adapt if 'tf' weights are ever released)
'coatnet_0_224.untrained': _cfg(url=''),
'coatnet_1_224.untrained': _cfg(url=''),
'coatnet_2_224.untrained': _cfg(url=''),
'coatnet_3_224.untrained': _cfg(url=''),
'coatnet_4_224.untrained': _cfg(url=''),
'coatnet_5_224.untrained': _cfg(url=''),
# timm specific MaxVit configs, ImageNet-1k pretrain or untrained
'maxvit_pico_rw_256.untrained': _cfg(url='', input_size=(3, 256, 256), pool_size=(8, 8)),
'maxvit_nano_rw_256.sw_in1k': _cfg(
hf_hub_id='timm/',
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights-maxx/maxvit_nano_rw_256_sw-fb127241.pth',
input_size=(3, 256, 256), pool_size=(8, 8)),
'maxvit_tiny_rw_224.sw_in1k': _cfg(
hf_hub_id='timm/',
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights-maxx/maxvit_tiny_rw_224_sw-7d0dffeb.pth'),
'maxvit_tiny_rw_256.untrained': _cfg(
url='',
input_size=(3, 256, 256), pool_size=(8, 8)),
'maxvit_tiny_pm_256.untrained': _cfg(url='', input_size=(3, 256, 256), pool_size=(8, 8)),
# timm specific MaxVit w/ MLP rel-pos, ImageNet-1k pretrain
'maxvit_rmlp_pico_rw_256.sw_in1k': _cfg(
hf_hub_id='timm/',
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights-maxx/maxvit_rmlp_pico_rw_256_sw-8d82f2c6.pth',
input_size=(3, 256, 256), pool_size=(8, 8)),
'maxvit_rmlp_nano_rw_256.sw_in1k': _cfg(
hf_hub_id='timm/',
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights-maxx/maxvit_rmlp_nano_rw_256_sw-c17bb0d6.pth',
input_size=(3, 256, 256), pool_size=(8, 8)),
'maxvit_rmlp_tiny_rw_256.sw_in1k': _cfg(
hf_hub_id='timm/',
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights-maxx/maxvit_rmlp_tiny_rw_256_sw-bbef0ff5.pth',
input_size=(3, 256, 256), pool_size=(8, 8)),
'maxvit_rmlp_small_rw_224.sw_in1k': _cfg(
hf_hub_id='timm/',
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights-maxx/maxvit_rmlp_small_rw_224_sw-6ef0ae4f.pth',
crop_pct=0.9,
),
'maxvit_rmlp_small_rw_256.untrained': _cfg(
url='',
input_size=(3, 256, 256), pool_size=(8, 8)),
# timm specific MaxVit w/ ImageNet-12k pretrain and 1k fine-tune
'maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k': _cfg(
hf_hub_id='timm/',
),
'maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k': _cfg(
hf_hub_id='timm/',
input_size=(3, 384, 384), pool_size=(12, 12), crop_pct=1.0, crop_mode='squash'),
# timm specific MaxVit w/ ImageNet-12k pretrain
'maxvit_rmlp_base_rw_224.sw_in12k': _cfg(
hf_hub_id='timm/',
num_classes=11821,
),
# timm MaxxViT configs (ConvNeXt conv blocks mixed with MaxVit transformer blocks)
'maxxvit_rmlp_nano_rw_256.sw_in1k': _cfg(
hf_hub_id='timm/',
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights-maxx/maxxvit_rmlp_nano_rw_256_sw-0325d459.pth',
input_size=(3, 256, 256), pool_size=(8, 8)),
'maxxvit_rmlp_tiny_rw_256.untrained': _cfg(url='', input_size=(3, 256, 256), pool_size=(8, 8)),
'maxxvit_rmlp_small_rw_256.sw_in1k': _cfg(
hf_hub_id='timm/',
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights-maxx/maxxvit_rmlp_small_rw_256_sw-37e217ff.pth',
input_size=(3, 256, 256), pool_size=(8, 8)),
# timm MaxxViT-V2 configs (ConvNeXt conv blocks mixed with MaxVit transformer blocks, more width, no block attn)
'maxxvitv2_nano_rw_256.sw_in1k': _cfg(
hf_hub_id='timm/',
input_size=(3, 256, 256), pool_size=(8, 8)),
'maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k': _cfg(
hf_hub_id='timm/'),
'maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k': _cfg(
hf_hub_id='timm/',
input_size=(3, 384, 384), pool_size=(12, 12), crop_pct=1.0, crop_mode='squash'),
'maxxvitv2_rmlp_large_rw_224.untrained': _cfg(url=''),
'maxxvitv2_rmlp_base_rw_224.sw_in12k': _cfg(
hf_hub_id='timm/',
num_classes=11821),
# MaxViT models ported from official Tensorflow impl
'maxvit_tiny_tf_224.in1k': _cfg(
hf_hub_id='timm/',
mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD),
'maxvit_tiny_tf_384.in1k': _cfg(
hf_hub_id='timm/',
input_size=(3, 384, 384), pool_size=(12, 12), crop_pct=1.0, crop_mode='squash'),
'maxvit_tiny_tf_512.in1k': _cfg(
hf_hub_id='timm/',
input_size=(3, 512, 512), pool_size=(16, 16), crop_pct=1.0, crop_mode='squash'),
'maxvit_small_tf_224.in1k': _cfg(
hf_hub_id='timm/',
mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD),
'maxvit_small_tf_384.in1k': _cfg(
hf_hub_id='timm/',
input_size=(3, 384, 384), pool_size=(12, 12), crop_pct=1.0, crop_mode='squash'),
'maxvit_small_tf_512.in1k': _cfg(
hf_hub_id='timm/',
input_size=(3, 512, 512), pool_size=(16, 16), crop_pct=1.0, crop_mode='squash'),
'maxvit_base_tf_224.in1k': _cfg(
hf_hub_id='timm/',
mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD),
'maxvit_base_tf_384.in1k': _cfg(
hf_hub_id='timm/',
input_size=(3, 384, 384), pool_size=(12, 12), crop_pct=1.0, crop_mode='squash'),
'maxvit_base_tf_512.in1k': _cfg(
hf_hub_id='timm/',
input_size=(3, 512, 512), pool_size=(16, 16), crop_pct=1.0, crop_mode='squash'),
'maxvit_large_tf_224.in1k': _cfg(
hf_hub_id='timm/',
mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD),
'maxvit_large_tf_384.in1k': _cfg(
hf_hub_id='timm/',
input_size=(3, 384, 384), pool_size=(12, 12), crop_pct=1.0, crop_mode='squash'),
'maxvit_large_tf_512.in1k': _cfg(
hf_hub_id='timm/',
input_size=(3, 512, 512), pool_size=(16, 16), crop_pct=1.0, crop_mode='squash'),
'maxvit_base_tf_224.in21k': _cfg(
hf_hub_id='timm/',
num_classes=21843),
'maxvit_base_tf_384.in21k_ft_in1k': _cfg(
hf_hub_id='timm/',
input_size=(3, 384, 384), pool_size=(12, 12), crop_pct=1.0, crop_mode='squash'),
'maxvit_base_tf_512.in21k_ft_in1k': _cfg(
hf_hub_id='timm/',
input_size=(3, 512, 512), pool_size=(16, 16), crop_pct=1.0, crop_mode='squash'),
'maxvit_large_tf_224.in21k': _cfg(
hf_hub_id='timm/',
num_classes=21843),
'maxvit_large_tf_384.in21k_ft_in1k': _cfg(
hf_hub_id='timm/',
input_size=(3, 384, 384), pool_size=(12, 12), crop_pct=1.0, crop_mode='squash'),
'maxvit_large_tf_512.in21k_ft_in1k': _cfg(
hf_hub_id='timm/',
input_size=(3, 512, 512), pool_size=(16, 16), crop_pct=1.0, crop_mode='squash'),
'maxvit_xlarge_tf_224.in21k': _cfg(
hf_hub_id='timm/',
num_classes=21843),
'maxvit_xlarge_tf_384.in21k_ft_in1k': _cfg(
hf_hub_id='timm/',
input_size=(3, 384, 384), pool_size=(12, 12), crop_pct=1.0, crop_mode='squash'),
'maxvit_xlarge_tf_512.in21k_ft_in1k': _cfg(
hf_hub_id='timm/',
input_size=(3, 512, 512), pool_size=(16, 16), crop_pct=1.0, crop_mode='squash'),
})
@register_model
def coatnet_pico_rw_224(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""CoatNet Pico model with RW configuration."""
return _create_maxxvit('coatnet_pico_rw_224', pretrained=pretrained, **kwargs)
@register_model
def coatnet_nano_rw_224(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""CoatNet Nano model with RW configuration."""
return _create_maxxvit('coatnet_nano_rw_224', pretrained=pretrained, **kwargs)
@register_model
def coatnet_0_rw_224(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""CoatNet-0 model with RW configuration."""
return _create_maxxvit('coatnet_0_rw_224', pretrained=pretrained, **kwargs)
@register_model
def coatnet_1_rw_224(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""CoatNet-1 model with RW configuration."""
return _create_maxxvit('coatnet_1_rw_224', pretrained=pretrained, **kwargs)
@register_model
def coatnet_2_rw_224(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""CoatNet-2 model with RW configuration."""
return _create_maxxvit('coatnet_2_rw_224', pretrained=pretrained, **kwargs)
@register_model
def coatnet_3_rw_224(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""CoatNet-3 model with RW configuration."""
return _create_maxxvit('coatnet_3_rw_224', pretrained=pretrained, **kwargs)
@register_model
def coatnet_bn_0_rw_224(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""CoatNet-0 model with BatchNorm and RW configuration."""
return _create_maxxvit('coatnet_bn_0_rw_224', pretrained=pretrained, **kwargs)
@register_model
def coatnet_rmlp_nano_rw_224(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""CoatNet Nano model with Relative Position MLP."""
return _create_maxxvit('coatnet_rmlp_nano_rw_224', pretrained=pretrained, **kwargs)
@register_model
def coatnet_rmlp_0_rw_224(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""CoatNet-0 model with Relative Position MLP."""
return _create_maxxvit('coatnet_rmlp_0_rw_224', pretrained=pretrained, **kwargs)
@register_model
def coatnet_rmlp_1_rw_224(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""CoatNet-1 model with Relative Position MLP."""
return _create_maxxvit('coatnet_rmlp_1_rw_224', pretrained=pretrained, **kwargs)
@register_model
def coatnet_rmlp_1_rw2_224(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""CoatNet-1 model with Relative Position MLP v2."""
return _create_maxxvit('coatnet_rmlp_1_rw2_224', pretrained=pretrained, **kwargs)
@register_model
def coatnet_rmlp_2_rw_224(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""CoatNet-2 model with Relative Position MLP."""
return _create_maxxvit('coatnet_rmlp_2_rw_224', pretrained=pretrained, **kwargs)
@register_model
def coatnet_rmlp_2_rw_384(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""CoatNet-2 model with Relative Position MLP at 384x384."""
return _create_maxxvit('coatnet_rmlp_2_rw_384', pretrained=pretrained, **kwargs)
@register_model
def coatnet_rmlp_3_rw_224(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""CoatNet-3 model with Relative Position MLP."""
return _create_maxxvit('coatnet_rmlp_3_rw_224', pretrained=pretrained, **kwargs)
@register_model
def coatnet_nano_cc_224(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""CoatNet Nano model with ConvNeXt blocks."""
return _create_maxxvit('coatnet_nano_cc_224', pretrained=pretrained, **kwargs)
@register_model
def coatnext_nano_rw_224(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""CoAtNeXt Nano model with RW configuration."""
return _create_maxxvit('coatnext_nano_rw_224', pretrained=pretrained, **kwargs)
@register_model
def coatnet_0_224(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""CoatNet-0 model."""
return _create_maxxvit('coatnet_0_224', pretrained=pretrained, **kwargs)
@register_model
def coatnet_1_224(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""CoatNet-1 model."""
return _create_maxxvit('coatnet_1_224', pretrained=pretrained, **kwargs)
@register_model
def coatnet_2_224(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""CoatNet-2 model."""
return _create_maxxvit('coatnet_2_224', pretrained=pretrained, **kwargs)
@register_model
def coatnet_3_224(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""CoatNet-3 model."""
return _create_maxxvit('coatnet_3_224', pretrained=pretrained, **kwargs)
@register_model
def coatnet_4_224(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""CoatNet-4 model."""
return _create_maxxvit('coatnet_4_224', pretrained=pretrained, **kwargs)
@register_model
def coatnet_5_224(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""CoatNet-5 model."""
return _create_maxxvit('coatnet_5_224', pretrained=pretrained, **kwargs)
@register_model
def maxvit_pico_rw_256(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""MaxViT Pico model with RW configuration."""
return _create_maxxvit('maxvit_pico_rw_256', pretrained=pretrained, **kwargs)
@register_model
def maxvit_nano_rw_256(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""MaxViT Nano model with RW configuration."""
return _create_maxxvit('maxvit_nano_rw_256', pretrained=pretrained, **kwargs)
@register_model
def maxvit_tiny_rw_224(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""MaxViT Tiny model with RW configuration."""
return _create_maxxvit('maxvit_tiny_rw_224', pretrained=pretrained, **kwargs)
@register_model
def maxvit_tiny_rw_256(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""MaxViT Tiny model with RW configuration at 256x256."""
return _create_maxxvit('maxvit_tiny_rw_256', pretrained=pretrained, **kwargs)
@register_model
def maxvit_rmlp_pico_rw_256(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""MaxViT Relative Position MLP Pico RW 256x256 model."""
return _create_maxxvit('maxvit_rmlp_pico_rw_256', pretrained=pretrained, **kwargs)
@register_model
def maxvit_rmlp_nano_rw_256(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""MaxViT Relative Position MLP Nano RW 256x256 model."""
return _create_maxxvit('maxvit_rmlp_nano_rw_256', pretrained=pretrained, **kwargs)
@register_model
def maxvit_rmlp_tiny_rw_256(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""MaxViT Relative Position MLP Tiny RW 256x256 model."""
return _create_maxxvit('maxvit_rmlp_tiny_rw_256', pretrained=pretrained, **kwargs)
@register_model
def maxvit_rmlp_small_rw_224(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""MaxViT Relative Position MLP Small RW 224x224 model."""
return _create_maxxvit('maxvit_rmlp_small_rw_224', pretrained=pretrained, **kwargs)
@register_model
def maxvit_rmlp_small_rw_256(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""MaxViT Small model with Relative Position MLP at 256x256."""
return _create_maxxvit('maxvit_rmlp_small_rw_256', pretrained=pretrained, **kwargs)
@register_model
def maxvit_rmlp_base_rw_224(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""MaxViT Base model with Relative Position MLP."""
return _create_maxxvit('maxvit_rmlp_base_rw_224', pretrained=pretrained, **kwargs)
@register_model
def maxvit_rmlp_base_rw_384(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""MaxViT Base model with Relative Position MLP at 384x384."""
return _create_maxxvit('maxvit_rmlp_base_rw_384', pretrained=pretrained, **kwargs)
@register_model
def maxvit_tiny_pm_256(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""MaxViT Tiny model with parallel blocks."""
return _create_maxxvit('maxvit_tiny_pm_256', pretrained=pretrained, **kwargs)
@register_model
def maxxvit_rmlp_nano_rw_256(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""MaxxViT Relative Position MLP Nano RW 256x256 model."""
return _create_maxxvit('maxxvit_rmlp_nano_rw_256', pretrained=pretrained, **kwargs)
@register_model
def maxxvit_rmlp_tiny_rw_256(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""MaxxViT Tiny model with Relative Position MLP."""
return _create_maxxvit('maxxvit_rmlp_tiny_rw_256', pretrained=pretrained, **kwargs)
@register_model
def maxxvit_rmlp_small_rw_256(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""MaxxViT Small model with Relative Position MLP."""
return _create_maxxvit('maxxvit_rmlp_small_rw_256', pretrained=pretrained, **kwargs)
@register_model
def maxxvitv2_nano_rw_256(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""MaxxViT-V2 Nano model."""
return _create_maxxvit('maxxvitv2_nano_rw_256', pretrained=pretrained, **kwargs)
@register_model
def maxxvitv2_rmlp_base_rw_224(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""MaxxViT-V2 Base model with Relative Position MLP."""
return _create_maxxvit('maxxvitv2_rmlp_base_rw_224', pretrained=pretrained, **kwargs)
@register_model
def maxxvitv2_rmlp_base_rw_384(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""MaxxViT-V2 Base model with Relative Position MLP at 384x384."""
return _create_maxxvit('maxxvitv2_rmlp_base_rw_384', pretrained=pretrained, **kwargs)
@register_model
def maxxvitv2_rmlp_large_rw_224(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""MaxxViT-V2 Large model with Relative Position MLP."""
return _create_maxxvit('maxxvitv2_rmlp_large_rw_224', pretrained=pretrained, **kwargs)
@register_model
def maxvit_tiny_tf_224(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""MaxViT Tiny model from TensorFlow."""
return _create_maxxvit('maxvit_tiny_tf_224', 'maxvit_tiny_tf', pretrained=pretrained, **kwargs)
@register_model
def maxvit_tiny_tf_384(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""MaxViT Tiny model from TensorFlow at 384x384."""
return _create_maxxvit('maxvit_tiny_tf_384', 'maxvit_tiny_tf', pretrained=pretrained, **kwargs)
@register_model
def maxvit_tiny_tf_512(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""MaxViT Tiny model from TensorFlow at 512x512."""
return _create_maxxvit('maxvit_tiny_tf_512', 'maxvit_tiny_tf', pretrained=pretrained, **kwargs)
@register_model
def maxvit_small_tf_224(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""MaxViT Small model from TensorFlow."""
return _create_maxxvit('maxvit_small_tf_224', 'maxvit_small_tf', pretrained=pretrained, **kwargs)
@register_model
def maxvit_small_tf_384(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""MaxViT Small model from TensorFlow at 384x384."""
return _create_maxxvit('maxvit_small_tf_384', 'maxvit_small_tf', pretrained=pretrained, **kwargs)
@register_model
def maxvit_small_tf_512(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""MaxViT Small model from TensorFlow at 512x512."""
return _create_maxxvit('maxvit_small_tf_512', 'maxvit_small_tf', pretrained=pretrained, **kwargs)
@register_model
def maxvit_base_tf_224(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""MaxViT Base model from TensorFlow."""
return _create_maxxvit('maxvit_base_tf_224', 'maxvit_base_tf', pretrained=pretrained, **kwargs)
@register_model
def maxvit_base_tf_384(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""MaxViT Base model from TensorFlow at 384x384."""
return _create_maxxvit('maxvit_base_tf_384', 'maxvit_base_tf', pretrained=pretrained, **kwargs)
@register_model
def maxvit_base_tf_512(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""MaxViT Base model from TensorFlow at 512x512."""
return _create_maxxvit('maxvit_base_tf_512', 'maxvit_base_tf', pretrained=pretrained, **kwargs)
@register_model
def maxvit_large_tf_224(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""MaxViT Large model from TensorFlow."""
return _create_maxxvit('maxvit_large_tf_224', 'maxvit_large_tf', pretrained=pretrained, **kwargs)
@register_model
def maxvit_large_tf_384(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""MaxViT Large model from TensorFlow at 384x384."""
return _create_maxxvit('maxvit_large_tf_384', 'maxvit_large_tf', pretrained=pretrained, **kwargs)
@register_model
def maxvit_large_tf_512(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""MaxViT Large model from TensorFlow at 512x512."""
return _create_maxxvit('maxvit_large_tf_512', 'maxvit_large_tf', pretrained=pretrained, **kwargs)
@register_model
def maxvit_xlarge_tf_224(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""MaxViT XLarge model from TensorFlow."""
return _create_maxxvit('maxvit_xlarge_tf_224', 'maxvit_xlarge_tf', pretrained=pretrained, **kwargs)
@register_model
def maxvit_xlarge_tf_384(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""MaxViT XLarge model from TensorFlow at 384x384."""
return _create_maxxvit('maxvit_xlarge_tf_384', 'maxvit_xlarge_tf', pretrained=pretrained, **kwargs)
@register_model
def maxvit_xlarge_tf_512(pretrained: bool = False, **kwargs: Any) -> MaxxVit:
"""MaxViT XLarge model from TensorFlow at 512x512."""
return _create_maxxvit('maxvit_xlarge_tf_512', 'maxvit_xlarge_tf', pretrained=pretrained, **kwargs)
# --- end of pytorch-image-models/timm/models/maxxvit.py ---
from ._registry import *
import warnings
warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.models", FutureWarning)
# --- end of pytorch-image-models/timm/models/registry.py ---
""" Swin Transformer
A PyTorch impl of: `Swin Transformer: Hierarchical Vision Transformer using Shifted Windows`
- https://arxiv.org/pdf/2103.14030
Code/weights from https://github.com/microsoft/Swin-Transformer, original copyright/license info below
S3 (AutoFormerV2, https://arxiv.org/abs/2111.14725) Swin weights from
- https://github.com/microsoft/Cream/tree/main/AutoFormerV2
Modifications and additions for timm hacked together by / Copyright 2021, Ross Wightman
"""
# --------------------------------------------------------
# Swin Transformer
# Copyright (c) 2021 Microsoft
# Licensed under The MIT License [see LICENSE for details]
# Written by Ze Liu
# --------------------------------------------------------
import logging
import math
from typing import Any, Dict, Callable, List, Optional, Set, Tuple, Union
import torch
import torch.nn as nn
from timm.data import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD
from timm.layers import PatchEmbed, Mlp, DropPath, ClassifierHead, to_2tuple, to_ntuple, trunc_normal_, \
use_fused_attn, resize_rel_pos_bias_table, resample_patch_embed, ndgrid
from ._builder import build_model_with_cfg
from ._features import feature_take_indices
from ._features_fx import register_notrace_function
from ._manipulate import checkpoint_seq, named_apply
from ._registry import generate_default_cfgs, register_model, register_model_deprecations
from .vision_transformer import get_init_weights_vit
__all__ = ['SwinTransformer'] # model_registry will add each entrypoint fn to this
_logger = logging.getLogger(__name__)
_int_or_tuple_2_t = Union[int, Tuple[int, int]]
def window_partition(
x: torch.Tensor,
window_size: Tuple[int, int],
) -> torch.Tensor:
"""Partition into non-overlapping windows.
Args:
x: Input tokens with shape [B, H, W, C].
window_size: Window size.
Returns:
Windows after partition with shape [B * num_windows, window_size, window_size, C].
"""
B, H, W, C = x.shape
x = x.view(B, H // window_size[0], window_size[0], W // window_size[1], window_size[1], C)
windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size[0], window_size[1], C)
return windows
@register_notrace_function # reason: int argument is a Proxy
def window_reverse(windows: torch.Tensor, window_size: Tuple[int, int], H: int, W: int) -> torch.Tensor:
"""Reverse window partition.
Args:
windows: Windows with shape (num_windows*B, window_size, window_size, C).
window_size: Window size.
H: Height of image.
W: Width of image.
Returns:
Tensor with shape (B, H, W, C).
"""
C = windows.shape[-1]
x = windows.view(-1, H // window_size[0], W // window_size[1], window_size[0], window_size[1], C)
x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, H, W, C)
return x
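The two functions above are exact reshape/permute inverses of each other. A minimal sketch of the same shape arithmetic, using NumPy purely as a stand-in for torch:

```python
# NumPy sketch of the window_partition / window_reverse round trip.
# Shapes follow the torch versions above; values are arbitrary.
import numpy as np

def np_window_partition(x, ws):
    B, H, W, C = x.shape
    x = x.reshape(B, H // ws[0], ws[0], W // ws[1], ws[1], C)
    return x.transpose(0, 1, 3, 2, 4, 5).reshape(-1, ws[0], ws[1], C)

def np_window_reverse(windows, ws, H, W):
    C = windows.shape[-1]
    x = windows.reshape(-1, H // ws[0], W // ws[1], ws[0], ws[1], C)
    return x.transpose(0, 1, 3, 2, 4, 5).reshape(-1, H, W, C)

x = np.arange(2 * 8 * 8 * 3).reshape(2, 8, 8, 3)
w = np_window_partition(x, (4, 4))
assert w.shape == (8, 4, 4, 3)          # B * num_windows, Wh, Ww, C
assert (np_window_reverse(w, (4, 4), 8, 8) == x).all()  # exact round trip
```

For H = W = 8 and a 4x4 window, each image splits into 2x2 = 4 windows, hence the leading B * 4 dimension.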
def get_relative_position_index(win_h: int, win_w: int) -> torch.Tensor:
"""Get pair-wise relative position index for each token inside the window.
Args:
win_h: Window height.
win_w: Window width.
Returns:
Relative position index tensor.
"""
# get pair-wise relative position index for each token inside the window
coords = torch.stack(ndgrid(torch.arange(win_h), torch.arange(win_w))) # 2, Wh, Ww
coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww
relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww
relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2
relative_coords[:, :, 0] += win_h - 1 # shift to start from 0
relative_coords[:, :, 1] += win_w - 1
relative_coords[:, :, 0] *= 2 * win_w - 1
return relative_coords.sum(-1) # Wh*Ww, Wh*Ww
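The index construction above can be checked by hand for a tiny window. A NumPy sketch mirroring the torch ops (with `ndgrid` replaced by `np.meshgrid(..., indexing='ij')`):

```python
# NumPy sketch of get_relative_position_index for a 2x2 window:
# each token pair maps into one of (2*Wh-1)*(2*Ww-1) bias-table rows.
import numpy as np

win_h, win_w = 2, 2
coords = np.stack(np.meshgrid(np.arange(win_h), np.arange(win_w), indexing='ij'))
flat = coords.reshape(2, -1)               # 2, Wh*Ww
rel = flat[:, :, None] - flat[:, None, :]  # 2, Wh*Ww, Wh*Ww
rel = rel.transpose(1, 2, 0)               # Wh*Ww, Wh*Ww, 2
rel[..., 0] += win_h - 1                   # shift to start from 0
rel[..., 1] += win_w - 1
rel[..., 0] *= 2 * win_w - 1
idx = rel.sum(-1)                          # Wh*Ww, Wh*Ww
assert idx.shape == (4, 4)
assert int(idx.max()) == 8                 # (2*Wh-1)*(2*Ww-1) - 1 rows used
assert (np.diag(idx) == 4).all()           # zero offset hits the center row
```

Each of the Wh*Ww x Wh*Ww token pairs selects one of the 9 relative-position bias rows; the diagonal (a token attending to itself) always lands on the center row.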
class WindowAttention(nn.Module):
"""Window based multi-head self attention (W-MSA) module with relative position bias.
Supports both shifted and non-shifted windows.
"""
fused_attn: torch.jit.Final[bool]
def __init__(
self,
dim: int,
num_heads: int,
head_dim: Optional[int] = None,
window_size: _int_or_tuple_2_t = 7,
qkv_bias: bool = True,
attn_drop: float = 0.,
proj_drop: float = 0.,
):
"""
Args:
dim: Number of input channels.
num_heads: Number of attention heads.
head_dim: Number of channels per head (dim // num_heads if not set)
window_size: The height and width of the window.
qkv_bias: If True, add a learnable bias to query, key, value.
attn_drop: Dropout ratio of attention weight.
proj_drop: Dropout ratio of output.
"""
super().__init__()
self.dim = dim
self.window_size = to_2tuple(window_size) # Wh, Ww
win_h, win_w = self.window_size
self.window_area = win_h * win_w
self.num_heads = num_heads
head_dim = head_dim or dim // num_heads
attn_dim = head_dim * num_heads
self.scale = head_dim ** -0.5
self.fused_attn = use_fused_attn(experimental=True) # NOTE not tested for prime-time yet
# define a parameter table of relative position bias, shape: 2*Wh-1 * 2*Ww-1, nH
self.relative_position_bias_table = nn.Parameter(torch.zeros((2 * win_h - 1) * (2 * win_w - 1), num_heads))
# get pair-wise relative position index for each token inside the window
self.register_buffer("relative_position_index", get_relative_position_index(win_h, win_w), persistent=False)
self.qkv = nn.Linear(dim, attn_dim * 3, bias=qkv_bias)
self.attn_drop = nn.Dropout(attn_drop)
self.proj = nn.Linear(attn_dim, dim)
self.proj_drop = nn.Dropout(proj_drop)
trunc_normal_(self.relative_position_bias_table, std=.02)
self.softmax = nn.Softmax(dim=-1)
def set_window_size(self, window_size: Tuple[int, int]) -> None:
"""Update window size & interpolate position embeddings
Args:
window_size (int): New window size
"""
window_size = to_2tuple(window_size)
if window_size == self.window_size:
return
self.window_size = window_size
win_h, win_w = self.window_size
self.window_area = win_h * win_w
with torch.no_grad():
new_bias_shape = (2 * win_h - 1) * (2 * win_w - 1), self.num_heads
self.relative_position_bias_table = nn.Parameter(
resize_rel_pos_bias_table(
self.relative_position_bias_table,
new_window_size=self.window_size,
new_bias_shape=new_bias_shape,
))
self.register_buffer("relative_position_index", get_relative_position_index(win_h, win_w), persistent=False)
def _get_rel_pos_bias(self) -> torch.Tensor:
relative_position_bias = self.relative_position_bias_table[
self.relative_position_index.view(-1)].view(self.window_area, self.window_area, -1) # Wh*Ww,Wh*Ww,nH
relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww
return relative_position_bias.unsqueeze(0)
def forward(self, x: torch.Tensor, mask: Optional[torch.Tensor] = None) -> torch.Tensor:
"""Forward pass.
Args:
x: Input features with shape of (num_windows*B, N, C).
mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None.
Returns:
Output features with shape of (num_windows*B, N, C).
"""
B_, N, C = x.shape
qkv = self.qkv(x).reshape(B_, N, 3, self.num_heads, -1).permute(2, 0, 3, 1, 4)
q, k, v = qkv.unbind(0)
if self.fused_attn:
attn_mask = self._get_rel_pos_bias()
if mask is not None:
num_win = mask.shape[0]
mask = mask.view(1, num_win, 1, N, N).expand(B_ // num_win, -1, self.num_heads, -1, -1)
attn_mask = attn_mask + mask.reshape(-1, self.num_heads, N, N)
x = torch.nn.functional.scaled_dot_product_attention(
q, k, v,
attn_mask=attn_mask,
dropout_p=self.attn_drop.p if self.training else 0.,
)
else:
q = q * self.scale
attn = q @ k.transpose(-2, -1)
attn = attn + self._get_rel_pos_bias()
if mask is not None:
num_win = mask.shape[0]
attn = attn.view(-1, num_win, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0)
attn = attn.view(-1, self.num_heads, N, N)
attn = self.softmax(attn)
attn = self.attn_drop(attn)
x = attn @ v
x = x.transpose(1, 2).reshape(B_, N, -1)
x = self.proj(x)
x = self.proj_drop(x)
return x
class SwinTransformerBlock(nn.Module):
"""Swin Transformer Block.
A transformer block with window-based self-attention and shifted windows.
"""
def __init__(
self,
dim: int,
input_resolution: _int_or_tuple_2_t,
num_heads: int = 4,
head_dim: Optional[int] = None,
window_size: _int_or_tuple_2_t = 7,
shift_size: int = 0,
always_partition: bool = False,
dynamic_mask: bool = False,
mlp_ratio: float = 4.,
qkv_bias: bool = True,
proj_drop: float = 0.,
attn_drop: float = 0.,
drop_path: float = 0.,
act_layer: Callable = nn.GELU,
norm_layer: Callable = nn.LayerNorm,
):
"""
Args:
dim: Number of input channels.
input_resolution: Input resolution.
window_size: Window size.
num_heads: Number of attention heads.
head_dim: Enforce the number of channels per head
shift_size: Shift size for SW-MSA.
always_partition: Always partition into full windows and shift
mlp_ratio: Ratio of mlp hidden dim to embedding dim.
qkv_bias: If True, add a learnable bias to query, key, value.
proj_drop: Dropout rate.
attn_drop: Attention dropout rate.
drop_path: Stochastic depth rate.
act_layer: Activation layer.
norm_layer: Normalization layer.
"""
super().__init__()
self.dim = dim
self.input_resolution = input_resolution
self.target_shift_size = to_2tuple(shift_size) # store for later resize
self.always_partition = always_partition
self.dynamic_mask = dynamic_mask
self.window_size, self.shift_size = self._calc_window_shift(window_size, shift_size)
self.window_area = self.window_size[0] * self.window_size[1]
self.mlp_ratio = mlp_ratio
self.norm1 = norm_layer(dim)
self.attn = WindowAttention(
dim,
num_heads=num_heads,
head_dim=head_dim,
window_size=self.window_size,
qkv_bias=qkv_bias,
attn_drop=attn_drop,
proj_drop=proj_drop,
)
self.drop_path1 = DropPath(drop_path) if drop_path > 0. else nn.Identity()
self.norm2 = norm_layer(dim)
self.mlp = Mlp(
in_features=dim,
hidden_features=int(dim * mlp_ratio),
act_layer=act_layer,
drop=proj_drop,
)
self.drop_path2 = DropPath(drop_path) if drop_path > 0. else nn.Identity()
self.register_buffer(
"attn_mask",
None if self.dynamic_mask else self.get_attn_mask(),
persistent=False,
)
def get_attn_mask(self, x: Optional[torch.Tensor] = None) -> Optional[torch.Tensor]:
if any(self.shift_size):
# calculate attention mask for SW-MSA
if x is not None:
H, W = x.shape[1], x.shape[2]
device = x.device
dtype = x.dtype
else:
H, W = self.input_resolution
device = None
dtype = None
H = math.ceil(H / self.window_size[0]) * self.window_size[0]
W = math.ceil(W / self.window_size[1]) * self.window_size[1]
img_mask = torch.zeros((1, H, W, 1), dtype=dtype, device=device) # 1 H W 1
cnt = 0
for h in (
(0, -self.window_size[0]),
(-self.window_size[0], -self.shift_size[0]),
(-self.shift_size[0], None),
):
for w in (
(0, -self.window_size[1]),
(-self.window_size[1], -self.shift_size[1]),
(-self.shift_size[1], None),
):
img_mask[:, h[0]:h[1], w[0]:w[1], :] = cnt
cnt += 1
mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1
mask_windows = mask_windows.view(-1, self.window_area)
attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0))
else:
attn_mask = None
return attn_mask
def _calc_window_shift(
self,
target_window_size: Union[int, Tuple[int, int]],
target_shift_size: Optional[Union[int, Tuple[int, int]]] = None,
) -> Tuple[Tuple[int, int], Tuple[int, int]]:
target_window_size = to_2tuple(target_window_size)
if target_shift_size is None:
# if passed value is None, recalculate from default window_size // 2 if it was previously non-zero
target_shift_size = self.target_shift_size
if any(target_shift_size):
target_shift_size = (target_window_size[0] // 2, target_window_size[1] // 2)
else:
target_shift_size = to_2tuple(target_shift_size)
if self.always_partition:
return target_window_size, target_shift_size
window_size = [r if r <= w else w for r, w in zip(self.input_resolution, target_window_size)]
shift_size = [0 if r <= w else s for r, w, s in zip(self.input_resolution, window_size, target_shift_size)]
return tuple(window_size), tuple(shift_size)
def set_input_size(
self,
feat_size: Tuple[int, int],
window_size: Tuple[int, int],
always_partition: Optional[bool] = None,
):
"""
Args:
feat_size: New input resolution
window_size: New window size
always_partition: Change always_partition attribute if not None
"""
self.input_resolution = feat_size
if always_partition is not None:
self.always_partition = always_partition
self.window_size, self.shift_size = self._calc_window_shift(window_size)
self.window_area = self.window_size[0] * self.window_size[1]
self.attn.set_window_size(self.window_size)
self.register_buffer(
"attn_mask",
None if self.dynamic_mask else self.get_attn_mask(),
persistent=False,
)
def _attn(self, x):
B, H, W, C = x.shape
# cyclic shift
has_shift = any(self.shift_size)
if has_shift:
shifted_x = torch.roll(x, shifts=(-self.shift_size[0], -self.shift_size[1]), dims=(1, 2))
else:
shifted_x = x
# pad for resolution not divisible by window size
pad_h = (self.window_size[0] - H % self.window_size[0]) % self.window_size[0]
pad_w = (self.window_size[1] - W % self.window_size[1]) % self.window_size[1]
shifted_x = torch.nn.functional.pad(shifted_x, (0, 0, 0, pad_w, 0, pad_h))
_, Hp, Wp, _ = shifted_x.shape
# partition windows
x_windows = window_partition(shifted_x, self.window_size) # nW*B, window_size, window_size, C
x_windows = x_windows.view(-1, self.window_area, C) # nW*B, window_size*window_size, C
# W-MSA/SW-MSA
if getattr(self, 'dynamic_mask', False):
attn_mask = self.get_attn_mask(shifted_x)
else:
attn_mask = self.attn_mask
attn_windows = self.attn(x_windows, mask=attn_mask) # nW*B, window_size*window_size, C
# merge windows
attn_windows = attn_windows.view(-1, self.window_size[0], self.window_size[1], C)
shifted_x = window_reverse(attn_windows, self.window_size, Hp, Wp) # B H' W' C
shifted_x = shifted_x[:, :H, :W, :].contiguous()
# reverse cyclic shift
if has_shift:
x = torch.roll(shifted_x, shifts=self.shift_size, dims=(1, 2))
else:
x = shifted_x
return x
def forward(self, x: torch.Tensor) -> torch.Tensor:
"""Forward pass.
Args:
x: Input features with shape (B, H, W, C).
Returns:
Output features with shape (B, H, W, C).
"""
B, H, W, C = x.shape
x = x + self.drop_path1(self._attn(self.norm1(x)))
x = x.reshape(B, -1, C)
x = x + self.drop_path2(self.mlp(self.norm2(x)))
x = x.reshape(B, H, W, C)
return x
class PatchMerging(nn.Module):
"""Patch Merging Layer.
Downsample features by merging 2x2 neighboring patches.
"""
def __init__(
self,
dim: int,
out_dim: Optional[int] = None,
norm_layer: Callable = nn.LayerNorm,
):
"""
Args:
dim: Number of input channels.
out_dim: Number of output channels (or 2 * dim if None)
norm_layer: Normalization layer.
"""
super().__init__()
self.dim = dim
self.out_dim = out_dim or 2 * dim
self.norm = norm_layer(4 * dim)
self.reduction = nn.Linear(4 * dim, self.out_dim, bias=False)
def forward(self, x: torch.Tensor) -> torch.Tensor:
"""Forward pass.
Args:
x: Input features with shape (B, H, W, C).
Returns:
Output features with shape (B, H//2, W//2, out_dim).
"""
B, H, W, C = x.shape
pad_values = (0, 0, 0, W % 2, 0, H % 2)
x = nn.functional.pad(x, pad_values)
_, H, W, _ = x.shape
x = x.reshape(B, H // 2, 2, W // 2, 2, C).permute(0, 1, 3, 4, 2, 5).flatten(3)
x = self.norm(x)
x = self.reduction(x)
return x
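The pad-then-reshape in PatchMerging.forward can be sketched with plain NumPy (shapes only; the LayerNorm and Linear reduction are omitted):

```python
# NumPy sketch of PatchMerging's 2x2 neighborhood flatten: odd spatial dims
# are padded to even, then each 2x2 patch is folded into the channel axis.
import numpy as np

B, H, W, C = 1, 5, 6, 8
x = np.zeros((B, H, W, C))
x = np.pad(x, ((0, 0), (0, H % 2), (0, W % 2), (0, 0)))  # pad odd dims
_, H, W, _ = x.shape
x = x.reshape(B, H // 2, 2, W // 2, 2, C).transpose(0, 1, 3, 4, 2, 5)
x = x.reshape(B, H // 2, W // 2, 4 * C)
assert x.shape == (1, 3, 3, 32)  # spatial halved, channels x4 pre-reduction
```

The real layer then normalizes the 4*C features and projects them to `out_dim` (2*C by default) with a bias-free Linear.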
class SwinTransformerStage(nn.Module):
"""A basic Swin Transformer layer for one stage.
Contains multiple Swin Transformer blocks and optional downsampling.
"""
def __init__(
self,
dim: int,
out_dim: int,
input_resolution: Tuple[int, int],
depth: int,
downsample: bool = True,
num_heads: int = 4,
head_dim: Optional[int] = None,
window_size: _int_or_tuple_2_t = 7,
always_partition: bool = False,
dynamic_mask: bool = False,
mlp_ratio: float = 4.,
qkv_bias: bool = True,
proj_drop: float = 0.,
attn_drop: float = 0.,
drop_path: Union[List[float], float] = 0.,
norm_layer: Callable = nn.LayerNorm,
):
"""
Args:
dim: Number of input channels.
out_dim: Number of output channels.
input_resolution: Input resolution.
depth: Number of blocks.
downsample: Downsample layer at the end of the layer.
num_heads: Number of attention heads.
head_dim: Channels per head (dim // num_heads if not set)
window_size: Local window size.
mlp_ratio: Ratio of mlp hidden dim to embedding dim.
qkv_bias: If True, add a learnable bias to query, key, value.
proj_drop: Projection dropout rate.
attn_drop: Attention dropout rate.
drop_path: Stochastic depth rate.
norm_layer: Normalization layer.
"""
super().__init__()
self.dim = dim
self.input_resolution = input_resolution
self.output_resolution = tuple(i // 2 for i in input_resolution) if downsample else input_resolution
self.depth = depth
self.grad_checkpointing = False
window_size = to_2tuple(window_size)
shift_size = tuple([w // 2 for w in window_size])
# patch merging layer
if downsample:
self.downsample = PatchMerging(
dim=dim,
out_dim=out_dim,
norm_layer=norm_layer,
)
else:
assert dim == out_dim
self.downsample = nn.Identity()
# build blocks
self.blocks = nn.Sequential(*[
SwinTransformerBlock(
dim=out_dim,
input_resolution=self.output_resolution,
num_heads=num_heads,
head_dim=head_dim,
window_size=window_size,
shift_size=0 if (i % 2 == 0) else shift_size,
always_partition=always_partition,
dynamic_mask=dynamic_mask,
mlp_ratio=mlp_ratio,
qkv_bias=qkv_bias,
proj_drop=proj_drop,
attn_drop=attn_drop,
drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path,
norm_layer=norm_layer,
)
for i in range(depth)])
def set_input_size(
self,
feat_size: Tuple[int, int],
window_size: int,
always_partition: Optional[bool] = None,
):
""" Updates the resolution, window size and so the pair-wise relative positions.
Args:
feat_size: New input (feature) resolution
window_size: New window size
always_partition: Always partition / shift the window
"""
self.input_resolution = feat_size
if isinstance(self.downsample, nn.Identity):
self.output_resolution = feat_size
else:
self.output_resolution = tuple(i // 2 for i in feat_size)
for block in self.blocks:
block.set_input_size(
feat_size=self.output_resolution,
window_size=window_size,
always_partition=always_partition,
)
def forward(self, x: torch.Tensor) -> torch.Tensor:
"""Forward pass.
Args:
x: Input features.
Returns:
Output features.
"""
x = self.downsample(x)
if self.grad_checkpointing and not torch.jit.is_scripting():
x = checkpoint_seq(self.blocks, x)
else:
x = self.blocks(x)
return x
class SwinTransformer(nn.Module):
"""Swin Transformer.
A PyTorch impl of: `Swin Transformer: Hierarchical Vision Transformer using Shifted Windows` -
https://arxiv.org/pdf/2103.14030
"""
def __init__(
self,
img_size: _int_or_tuple_2_t = 224,
patch_size: int = 4,
in_chans: int = 3,
num_classes: int = 1000,
global_pool: str = 'avg',
embed_dim: int = 96,
depths: Tuple[int, ...] = (2, 2, 6, 2),
num_heads: Tuple[int, ...] = (3, 6, 12, 24),
head_dim: Optional[int] = None,
window_size: _int_or_tuple_2_t = 7,
always_partition: bool = False,
strict_img_size: bool = True,
mlp_ratio: float = 4.,
qkv_bias: bool = True,
drop_rate: float = 0.,
proj_drop_rate: float = 0.,
attn_drop_rate: float = 0.,
drop_path_rate: float = 0.1,
embed_layer: Callable = PatchEmbed,
norm_layer: Union[str, Callable] = nn.LayerNorm,
weight_init: str = '',
**kwargs,
):
"""
Args:
img_size: Input image size.
patch_size: Patch size.
in_chans: Number of input image channels.
num_classes: Number of classes for classification head.
embed_dim: Patch embedding dimension.
depths: Depth of each Swin Transformer layer.
num_heads: Number of attention heads in different layers.
head_dim: Dimension of self-attention heads.
window_size: Window size.
mlp_ratio: Ratio of mlp hidden dim to embedding dim.
qkv_bias: If True, add a learnable bias to query, key, value.
drop_rate: Dropout rate.
attn_drop_rate: Attention dropout rate.
drop_path_rate: Stochastic depth rate.
embed_layer: Patch embedding layer.
norm_layer: Normalization layer.
"""
super().__init__()
assert global_pool in ('', 'avg')
self.num_classes = num_classes
self.global_pool = global_pool
self.output_fmt = 'NHWC'
self.num_layers = len(depths)
self.embed_dim = embed_dim
self.num_features = self.head_hidden_size = int(embed_dim * 2 ** (self.num_layers - 1))
self.feature_info = []
if not isinstance(embed_dim, (tuple, list)):
embed_dim = [int(embed_dim * 2 ** i) for i in range(self.num_layers)]
# split image into non-overlapping patches
self.patch_embed = embed_layer(
img_size=img_size,
patch_size=patch_size,
in_chans=in_chans,
embed_dim=embed_dim[0],
norm_layer=norm_layer,
strict_img_size=strict_img_size,
output_fmt='NHWC',
)
patch_grid = self.patch_embed.grid_size
# build layers
head_dim = to_ntuple(self.num_layers)(head_dim)
if not isinstance(window_size, (list, tuple)):
window_size = to_ntuple(self.num_layers)(window_size)
elif len(window_size) == 2:
window_size = (window_size,) * self.num_layers
assert len(window_size) == self.num_layers
mlp_ratio = to_ntuple(self.num_layers)(mlp_ratio)
dpr = [x.tolist() for x in torch.linspace(0, drop_path_rate, sum(depths)).split(depths)]
layers = []
in_dim = embed_dim[0]
scale = 1
for i in range(self.num_layers):
out_dim = embed_dim[i]
layers += [SwinTransformerStage(
dim=in_dim,
out_dim=out_dim,
input_resolution=(
patch_grid[0] // scale,
patch_grid[1] // scale
),
depth=depths[i],
downsample=i > 0,
num_heads=num_heads[i],
head_dim=head_dim[i],
window_size=window_size[i],
always_partition=always_partition,
dynamic_mask=not strict_img_size,
mlp_ratio=mlp_ratio[i],
qkv_bias=qkv_bias,
proj_drop=proj_drop_rate,
attn_drop=attn_drop_rate,
drop_path=dpr[i],
norm_layer=norm_layer,
)]
in_dim = out_dim
if i > 0:
scale *= 2
self.feature_info += [dict(num_chs=out_dim, reduction=patch_size * scale, module=f'layers.{i}')]
self.layers = nn.Sequential(*layers)
self.norm = norm_layer(self.num_features)
self.head = ClassifierHead(
self.num_features,
num_classes,
pool_type=global_pool,
drop_rate=drop_rate,
input_fmt=self.output_fmt,
)
if weight_init != 'skip':
self.init_weights(weight_init)
@torch.jit.ignore
def init_weights(self, mode: str = '') -> None:
"""Initialize model weights.
Args:
mode: Weight initialization mode ('jax', 'jax_nlhb', 'moco', or '').
"""
assert mode in ('jax', 'jax_nlhb', 'moco', '')
head_bias = -math.log(self.num_classes) if 'nlhb' in mode else 0.
named_apply(get_init_weights_vit(mode, head_bias=head_bias), self)
@torch.jit.ignore
def no_weight_decay(self) -> Set[str]:
"""Parameters that should not use weight decay."""
nwd = set()
for n, _ in self.named_parameters():
if 'relative_position_bias_table' in n:
nwd.add(n)
return nwd
def set_input_size(
self,
img_size: Optional[Tuple[int, int]] = None,
patch_size: Optional[Tuple[int, int]] = None,
window_size: Optional[Tuple[int, int]] = None,
window_ratio: int = 8,
always_partition: Optional[bool] = None,
) -> None:
"""Update the image resolution and window size.
Args:
img_size: New input resolution, if None current resolution is used.
patch_size: New patch size, if None use current patch size.
window_size: New window size, if None computed from the patch grid as grid_size // window_ratio.
window_ratio: Divisor for calculating window size from grid size.
always_partition: Always partition into windows and shift (even if window size < feat size).
"""
if img_size is not None or patch_size is not None:
self.patch_embed.set_input_size(img_size=img_size, patch_size=patch_size)
patch_grid = self.patch_embed.grid_size
if window_size is None:
window_size = tuple([pg // window_ratio for pg in patch_grid])
for index, stage in enumerate(self.layers):
stage_scale = 2 ** max(index - 1, 0)
stage.set_input_size(
feat_size=(patch_grid[0] // stage_scale, patch_grid[1] // stage_scale),
window_size=window_size,
always_partition=always_partition,
)
@torch.jit.ignore
def group_matcher(self, coarse: bool = False) -> Dict[str, Any]:
"""Group parameters for optimization."""
return dict(
stem=r'^patch_embed', # stem and embed
blocks=r'^layers\.(\d+)' if coarse else [
(r'^layers\.(\d+).downsample', (0,)),
(r'^layers\.(\d+)\.\w+\.(\d+)', None),
(r'^norm', (99999,)),
]
)
@torch.jit.ignore
def set_grad_checkpointing(self, enable: bool = True) -> None:
"""Enable or disable gradient checkpointing."""
for l in self.layers:
l.grad_checkpointing = enable
@torch.jit.ignore
def get_classifier(self) -> nn.Module:
"""Get the classifier head."""
return self.head.fc
def reset_classifier(self, num_classes: int, global_pool: Optional[str] = None) -> None:
"""Reset the classifier head.
Args:
num_classes: Number of classes for new classifier.
global_pool: Global pooling type.
"""
self.num_classes = num_classes
self.head.reset(num_classes, pool_type=global_pool)
def forward_intermediates(
self,
x: torch.Tensor,
indices: Optional[Union[int, List[int]]] = None,
norm: bool = False,
stop_early: bool = False,
output_fmt: str = 'NCHW',
intermediates_only: bool = False,
) -> Union[List[torch.Tensor], Tuple[torch.Tensor, List[torch.Tensor]]]:
"""Forward features that returns intermediates.
Args:
x: Input image tensor.
indices: Take last n blocks if int, all if None, select matching indices if sequence.
norm: Apply norm layer to compatible intermediates.
stop_early: Stop iterating over blocks when last desired intermediate hit.
output_fmt: Shape of intermediate feature outputs.
intermediates_only: Only return intermediate features.
Returns:
List of intermediate features or tuple of (final features, intermediates).
"""
assert output_fmt in ('NCHW',), 'Output shape must be NCHW.'
intermediates = []
take_indices, max_index = feature_take_indices(len(self.layers), indices)
# forward pass
x = self.patch_embed(x)
num_stages = len(self.layers)
if torch.jit.is_scripting() or not stop_early: # can't slice blocks in torchscript
stages = self.layers
else:
stages = self.layers[:max_index + 1]
for i, stage in enumerate(stages):
x = stage(x)
if i in take_indices:
if norm and i == num_stages - 1:
x_inter = self.norm(x)  # apply final norm to the last intermediate
else:
x_inter = x
x_inter = x_inter.permute(0, 3, 1, 2).contiguous()
intermediates.append(x_inter)
if intermediates_only:
return intermediates
x = self.norm(x)
return x, intermediates
def prune_intermediate_layers(
self,
indices: Union[int, List[int]] = 1,
prune_norm: bool = False,
prune_head: bool = True,
) -> List[int]:
"""Prune layers not required for specified intermediates.
Args:
indices: Indices of intermediate layers to keep.
prune_norm: Whether to prune normalization layer.
prune_head: Whether to prune the classifier head.
Returns:
List of indices that were kept.
"""
take_indices, max_index = feature_take_indices(len(self.layers), indices)
self.layers = self.layers[:max_index + 1] # truncate blocks
if prune_norm:
self.norm = nn.Identity()
if prune_head:
self.reset_classifier(0, '')
return take_indices
def forward_features(self, x: torch.Tensor) -> torch.Tensor:
"""Forward pass through feature extraction layers."""
x = self.patch_embed(x)
x = self.layers(x)
x = self.norm(x)
return x
def forward_head(self, x: torch.Tensor, pre_logits: bool = False) -> torch.Tensor:
"""Forward pass through classifier head.
Args:
x: Feature tensor.
pre_logits: Return features before final classifier.
Returns:
Output tensor.
"""
return self.head(x, pre_logits=True) if pre_logits else self.head(x)
def forward(self, x: torch.Tensor) -> torch.Tensor:
"""Forward pass.
Args:
x: Input tensor.
Returns:
Output logits.
"""
x = self.forward_features(x)
x = self.forward_head(x)
return x
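The constructor above doubles `embed_dim` per stage and doubles the reduction factor at every stage after the first (each later stage starts with a PatchMerging downsample). A minimal pure-Python sketch of that `feature_info` bookkeeping; `swin_feature_info` is a hypothetical helper, not a timm API:

```python
def swin_feature_info(embed_dim=96, num_layers=4, patch_size=4):
    """Mirror SwinTransformer.__init__'s per-stage channel/reduction bookkeeping."""
    dims = [embed_dim * 2 ** i for i in range(num_layers)]
    info, scale = [], 1
    for i, out_dim in enumerate(dims):
        if i > 0:
            scale *= 2  # stages after the first begin with a 2x downsample
        info.append({'num_chs': out_dim, 'reduction': patch_size * scale})
    return info

print(swin_feature_info())
# Swin-T defaults give channels (96, 192, 384, 768) at reductions (4, 8, 16, 32)
```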
def checkpoint_filter_fn(state_dict: dict, model: nn.Module) -> Dict[str, torch.Tensor]:
"""Convert patch embedding weight from manual patchify + linear proj to conv.
Args:
state_dict: State dictionary from checkpoint.
model: Model instance.
Returns:
Filtered state dictionary.
"""
old_weights = True
if 'head.fc.weight' in state_dict:
old_weights = False
import re
out_dict = {}
state_dict = state_dict.get('model', state_dict)
state_dict = state_dict.get('state_dict', state_dict)
for k, v in state_dict.items():
if any([n in k for n in ('relative_position_index', 'attn_mask')]):
continue # skip buffers that should not be persistent
if 'patch_embed.proj.weight' in k:
_, _, H, W = model.patch_embed.proj.weight.shape
if v.shape[-2] != H or v.shape[-1] != W:
v = resample_patch_embed(
v,
(H, W),
interpolation='bicubic',
antialias=True,
verbose=True,
)
if k.endswith('relative_position_bias_table'):
m = model.get_submodule(k[:-29])
if v.shape != m.relative_position_bias_table.shape or m.window_size[0] != m.window_size[1]:
v = resize_rel_pos_bias_table(
v,
new_window_size=m.window_size,
new_bias_shape=m.relative_position_bias_table.shape,
)
if old_weights:
k = re.sub(r'layers.(\d+).downsample', lambda x: f'layers.{int(x.group(1)) + 1}.downsample', k)
k = k.replace('head.', 'head.fc.')
out_dict[k] = v
return out_dict
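The `old_weights` remapping in `checkpoint_filter_fn` can be illustrated in isolation: old checkpoints attach the downsample to the end of stage N, while this implementation attaches it to the start of stage N+1, so the stage index is shifted and the classifier key is renamed. A small self-contained sketch (the helper name is illustrative):

```python
import re

def remap_old_key(k: str) -> str:
    """Shift layers.N.downsample -> layers.(N+1).downsample and head. -> head.fc."""
    k = re.sub(r'layers.(\d+).downsample', lambda m: f'layers.{int(m.group(1)) + 1}.downsample', k)
    return k.replace('head.', 'head.fc.')

print(remap_old_key('layers.0.downsample.reduction.weight'))  # -> layers.1.downsample.reduction.weight
print(remap_old_key('head.weight'))                           # -> head.fc.weight
```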
def _create_swin_transformer(variant: str, pretrained: bool = False, **kwargs) -> SwinTransformer:
"""Create a Swin Transformer model.
Args:
variant: Model variant name.
pretrained: Load pretrained weights.
**kwargs: Additional model arguments.
Returns:
SwinTransformer model instance.
"""
default_out_indices = tuple(i for i, _ in enumerate(kwargs.get('depths', (1, 1, 3, 1))))
out_indices = kwargs.pop('out_indices', default_out_indices)
model = build_model_with_cfg(
SwinTransformer, variant, pretrained,
pretrained_filter_fn=checkpoint_filter_fn,
feature_cfg=dict(flatten_sequential=True, out_indices=out_indices),
**kwargs)
return model
def _cfg(url: str = '', **kwargs) -> Dict[str, Any]:
"""Create default configuration for Swin Transformer models."""
return {
'url': url,
'num_classes': 1000, 'input_size': (3, 224, 224), 'pool_size': (7, 7),
'crop_pct': .9, 'interpolation': 'bicubic', 'fixed_input_size': True,
'mean': IMAGENET_DEFAULT_MEAN, 'std': IMAGENET_DEFAULT_STD,
'first_conv': 'patch_embed.proj', 'classifier': 'head.fc',
'license': 'mit', **kwargs
}
default_cfgs = generate_default_cfgs({
'swin_small_patch4_window7_224.ms_in22k_ft_in1k': _cfg(
hf_hub_id='timm/',
url='https://github.com/SwinTransformer/storage/releases/download/v1.0.8/swin_small_patch4_window7_224_22kto1k_finetune.pth', ),
'swin_base_patch4_window7_224.ms_in22k_ft_in1k': _cfg(
hf_hub_id='timm/',
url='https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_base_patch4_window7_224_22kto1k.pth',),
'swin_base_patch4_window12_384.ms_in22k_ft_in1k': _cfg(
hf_hub_id='timm/',
url='https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_base_patch4_window12_384_22kto1k.pth',
input_size=(3, 384, 384), pool_size=(12, 12), crop_pct=1.0),
'swin_large_patch4_window7_224.ms_in22k_ft_in1k': _cfg(
hf_hub_id='timm/',
url='https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_large_patch4_window7_224_22kto1k.pth',),
'swin_large_patch4_window12_384.ms_in22k_ft_in1k': _cfg(
hf_hub_id='timm/',
url='https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_large_patch4_window12_384_22kto1k.pth',
input_size=(3, 384, 384), pool_size=(12, 12), crop_pct=1.0),
'swin_tiny_patch4_window7_224.ms_in1k': _cfg(
hf_hub_id='timm/',
url='https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_tiny_patch4_window7_224.pth',),
'swin_small_patch4_window7_224.ms_in1k': _cfg(
hf_hub_id='timm/',
url='https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_small_patch4_window7_224.pth',),
'swin_base_patch4_window7_224.ms_in1k': _cfg(
hf_hub_id='timm/',
url='https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_base_patch4_window7_224.pth',),
'swin_base_patch4_window12_384.ms_in1k': _cfg(
hf_hub_id='timm/',
url='https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_base_patch4_window12_384.pth',
input_size=(3, 384, 384), pool_size=(12, 12), crop_pct=1.0),
# tiny 22k pretrain is worse than 1k, so moved after (untagged priority is based on order)
'swin_tiny_patch4_window7_224.ms_in22k_ft_in1k': _cfg(
hf_hub_id='timm/',
url='https://github.com/SwinTransformer/storage/releases/download/v1.0.8/swin_tiny_patch4_window7_224_22kto1k_finetune.pth',),
'swin_tiny_patch4_window7_224.ms_in22k': _cfg(
hf_hub_id='timm/',
url='https://github.com/SwinTransformer/storage/releases/download/v1.0.8/swin_tiny_patch4_window7_224_22k.pth',
num_classes=21841),
'swin_small_patch4_window7_224.ms_in22k': _cfg(
hf_hub_id='timm/',
url='https://github.com/SwinTransformer/storage/releases/download/v1.0.8/swin_small_patch4_window7_224_22k.pth',
num_classes=21841),
'swin_base_patch4_window7_224.ms_in22k': _cfg(
hf_hub_id='timm/',
url='https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_base_patch4_window7_224_22k.pth',
num_classes=21841),
'swin_base_patch4_window12_384.ms_in22k': _cfg(
hf_hub_id='timm/',
url='https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_base_patch4_window12_384_22k.pth',
input_size=(3, 384, 384), pool_size=(12, 12), crop_pct=1.0, num_classes=21841),
'swin_large_patch4_window7_224.ms_in22k': _cfg(
hf_hub_id='timm/',
url='https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_large_patch4_window7_224_22k.pth',
num_classes=21841),
'swin_large_patch4_window12_384.ms_in22k': _cfg(
hf_hub_id='timm/',
url='https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_large_patch4_window12_384_22k.pth',
input_size=(3, 384, 384), pool_size=(12, 12), crop_pct=1.0, num_classes=21841),
'swin_s3_tiny_224.ms_in1k': _cfg(
hf_hub_id='timm/',
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/s3_t-1d53f6a8.pth'),
'swin_s3_small_224.ms_in1k': _cfg(
hf_hub_id='timm/',
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/s3_s-3bb4c69d.pth'),
'swin_s3_base_224.ms_in1k': _cfg(
hf_hub_id='timm/',
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/s3_b-a1e95db4.pth'),
})
@register_model
def swin_tiny_patch4_window7_224(pretrained=False, **kwargs) -> SwinTransformer:
""" Swin-T @ 224x224, trained ImageNet-1k
"""
model_args = dict(patch_size=4, window_size=7, embed_dim=96, depths=(2, 2, 6, 2), num_heads=(3, 6, 12, 24))
return _create_swin_transformer(
'swin_tiny_patch4_window7_224', pretrained=pretrained, **dict(model_args, **kwargs))
@register_model
def swin_small_patch4_window7_224(pretrained=False, **kwargs) -> SwinTransformer:
""" Swin-S @ 224x224
"""
model_args = dict(patch_size=4, window_size=7, embed_dim=96, depths=(2, 2, 18, 2), num_heads=(3, 6, 12, 24))
return _create_swin_transformer(
'swin_small_patch4_window7_224', pretrained=pretrained, **dict(model_args, **kwargs))
@register_model
def swin_base_patch4_window7_224(pretrained=False, **kwargs) -> SwinTransformer:
""" Swin-B @ 224x224
"""
model_args = dict(patch_size=4, window_size=7, embed_dim=128, depths=(2, 2, 18, 2), num_heads=(4, 8, 16, 32))
return _create_swin_transformer(
'swin_base_patch4_window7_224', pretrained=pretrained, **dict(model_args, **kwargs))
@register_model
def swin_base_patch4_window12_384(pretrained=False, **kwargs) -> SwinTransformer:
""" Swin-B @ 384x384
"""
model_args = dict(patch_size=4, window_size=12, embed_dim=128, depths=(2, 2, 18, 2), num_heads=(4, 8, 16, 32))
return _create_swin_transformer(
'swin_base_patch4_window12_384', pretrained=pretrained, **dict(model_args, **kwargs))
@register_model
def swin_large_patch4_window7_224(pretrained=False, **kwargs) -> SwinTransformer:
""" Swin-L @ 224x224
"""
model_args = dict(patch_size=4, window_size=7, embed_dim=192, depths=(2, 2, 18, 2), num_heads=(6, 12, 24, 48))
return _create_swin_transformer(
'swin_large_patch4_window7_224', pretrained=pretrained, **dict(model_args, **kwargs))
@register_model
def swin_large_patch4_window12_384(pretrained=False, **kwargs) -> SwinTransformer:
""" Swin-L @ 384x384
"""
model_args = dict(patch_size=4, window_size=12, embed_dim=192, depths=(2, 2, 18, 2), num_heads=(6, 12, 24, 48))
return _create_swin_transformer(
'swin_large_patch4_window12_384', pretrained=pretrained, **dict(model_args, **kwargs))
@register_model
def swin_s3_tiny_224(pretrained=False, **kwargs) -> SwinTransformer:
""" Swin-S3-T @ 224x224, https://arxiv.org/abs/2111.14725
"""
model_args = dict(
patch_size=4, window_size=(7, 7, 14, 7), embed_dim=96, depths=(2, 2, 6, 2), num_heads=(3, 6, 12, 24))
return _create_swin_transformer('swin_s3_tiny_224', pretrained=pretrained, **dict(model_args, **kwargs))
@register_model
def swin_s3_small_224(pretrained=False, **kwargs) -> SwinTransformer:
""" Swin-S3-S @ 224x224, https://arxiv.org/abs/2111.14725
"""
model_args = dict(
patch_size=4, window_size=(14, 14, 14, 7), embed_dim=96, depths=(2, 2, 18, 2), num_heads=(3, 6, 12, 24))
return _create_swin_transformer('swin_s3_small_224', pretrained=pretrained, **dict(model_args, **kwargs))
@register_model
def swin_s3_base_224(pretrained=False, **kwargs) -> SwinTransformer:
""" Swin-S3-B @ 224x224, https://arxiv.org/abs/2111.14725
"""
model_args = dict(
patch_size=4, window_size=(7, 7, 14, 7), embed_dim=96, depths=(2, 2, 30, 2), num_heads=(3, 6, 12, 24))
return _create_swin_transformer('swin_s3_base_224', pretrained=pretrained, **dict(model_args, **kwargs))
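The S3 variants above pass per-stage window sizes such as `(7, 7, 14, 7)`. The constructor normalizes `window_size` before building stages: a scalar or a single `(h, w)` pair is broadcast to all stages, while a per-stage sequence is used as-is. A pure-Python sketch mirroring that normalization logic (hypothetical helper, not a timm API):

```python
def normalize_window_size(window_size, num_layers=4):
    """Mirror SwinTransformer.__init__'s window_size handling."""
    if not isinstance(window_size, (list, tuple)):
        window_size = (window_size,) * num_layers       # scalar -> one per stage
    elif len(window_size) == 2:
        window_size = (window_size,) * num_layers       # single (h, w) pair -> one per stage
    assert len(window_size) == num_layers
    return tuple(window_size)

print(normalize_window_size(7))             # -> (7, 7, 7, 7)
print(normalize_window_size((7, 7, 14, 7))) # per-stage sizes pass through unchanged
```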
register_model_deprecations(__name__, {
'swin_base_patch4_window7_224_in22k': 'swin_base_patch4_window7_224.ms_in22k',
'swin_base_patch4_window12_384_in22k': 'swin_base_patch4_window12_384.ms_in22k',
'swin_large_patch4_window7_224_in22k': 'swin_large_patch4_window7_224.ms_in22k',
'swin_large_patch4_window12_384_in22k': 'swin_large_patch4_window12_384.ms_in22k',
})
"""
Ported to pytorch thanks to [tstandley](https://github.com/tstandley/Xception-PyTorch)
@author: tstandley
Adapted by cadene
Creates an Xception Model as defined in:
Francois Chollet
Xception: Deep Learning with Depthwise Separable Convolutions
https://arxiv.org/pdf/1610.02357.pdf
These weights were ported from the Keras implementation and achieve the following performance on the validation set:
Loss:0.9173 Prec@1:78.892 Prec@5:94.292
REMEMBER to set your image size to 3x299x299 for both test and validation
normalize = transforms.Normalize(mean=[0.5, 0.5, 0.5],
std=[0.5, 0.5, 0.5])
The resize parameter of the validation transform should be 333, and make sure to center crop at 299x299
"""
import torch.jit
import torch.nn as nn
import torch.nn.functional as F
from timm.layers import create_classifier
from ._builder import build_model_with_cfg
from ._registry import register_model, generate_default_cfgs, register_model_deprecations
__all__ = ['Xception']
class SeparableConv2d(nn.Module):
def __init__(self, in_channels, out_channels, kernel_size=1, stride=1, padding=0, dilation=1):
super(SeparableConv2d, self).__init__()
self.conv1 = nn.Conv2d(
in_channels, in_channels, kernel_size, stride, padding, dilation, groups=in_channels, bias=False)
self.pointwise = nn.Conv2d(in_channels, out_channels, 1, 1, 0, 1, 1, bias=False)
def forward(self, x):
x = self.conv1(x)
x = self.pointwise(x)
return x
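The point of `SeparableConv2d` is the parameter saving: a depthwise conv (`groups=in_channels`) followed by a 1x1 pointwise conv replaces one dense KxK conv. A minimal pure-Python parameter count (bias-free, matching the module above; helper names are illustrative):

```python
def conv_params(cin, cout, k):
    return cin * cout * k * k          # standard KxK conv, no bias

def separable_params(cin, cout, k):
    return cin * k * k + cin * cout    # depthwise KxK + 1x1 pointwise, no bias

std = conv_params(728, 728, 3)         # 728 channels as in Xception's middle blocks
sep = separable_params(728, 728, 3)
print(std, sep, round(std / sep, 1))   # roughly a 8.9x parameter reduction
```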
class Block(nn.Module):
def __init__(self, in_channels, out_channels, reps, strides=1, start_with_relu=True, grow_first=True):
super(Block, self).__init__()
if out_channels != in_channels or strides != 1:
self.skip = nn.Conv2d(in_channels, out_channels, 1, stride=strides, bias=False)
self.skipbn = nn.BatchNorm2d(out_channels)
else:
self.skip = None
rep = []
for i in range(reps):
if grow_first:
inc = in_channels if i == 0 else out_channels
outc = out_channels
else:
inc = in_channels
outc = in_channels if i < (reps - 1) else out_channels
rep.append(nn.ReLU(inplace=True))
rep.append(SeparableConv2d(inc, outc, 3, stride=1, padding=1))
rep.append(nn.BatchNorm2d(outc))
if not start_with_relu:
rep = rep[1:]
else:
rep[0] = nn.ReLU(inplace=False)
if strides != 1:
rep.append(nn.MaxPool2d(3, strides, 1))
self.rep = nn.Sequential(*rep)
def forward(self, inp):
x = self.rep(inp)
if self.skip is not None:
skip = self.skip(inp)
skip = self.skipbn(skip)
else:
skip = inp
x += skip
return x
class Xception(nn.Module):
"""
Xception optimized for the ImageNet dataset, as specified in
https://arxiv.org/pdf/1610.02357.pdf
"""
def __init__(self, num_classes=1000, in_chans=3, drop_rate=0., global_pool='avg'):
""" Constructor
Args:
num_classes: number of classes
"""
super(Xception, self).__init__()
self.drop_rate = drop_rate
self.global_pool = global_pool
self.num_classes = num_classes
self.num_features = self.head_hidden_size = 2048
self.conv1 = nn.Conv2d(in_chans, 32, 3, 2, 0, bias=False)
self.bn1 = nn.BatchNorm2d(32)
self.act1 = nn.ReLU(inplace=True)
self.conv2 = nn.Conv2d(32, 64, 3, bias=False)
self.bn2 = nn.BatchNorm2d(64)
self.act2 = nn.ReLU(inplace=True)
self.block1 = Block(64, 128, 2, 2, start_with_relu=False)
self.block2 = Block(128, 256, 2, 2)
self.block3 = Block(256, 728, 2, 2)
self.block4 = Block(728, 728, 3, 1)
self.block5 = Block(728, 728, 3, 1)
self.block6 = Block(728, 728, 3, 1)
self.block7 = Block(728, 728, 3, 1)
self.block8 = Block(728, 728, 3, 1)
self.block9 = Block(728, 728, 3, 1)
self.block10 = Block(728, 728, 3, 1)
self.block11 = Block(728, 728, 3, 1)
self.block12 = Block(728, 1024, 2, 2, grow_first=False)
self.conv3 = SeparableConv2d(1024, 1536, 3, 1, 1)
self.bn3 = nn.BatchNorm2d(1536)
self.act3 = nn.ReLU(inplace=True)
self.conv4 = SeparableConv2d(1536, self.num_features, 3, 1, 1)
self.bn4 = nn.BatchNorm2d(self.num_features)
self.act4 = nn.ReLU(inplace=True)
self.feature_info = [
dict(num_chs=64, reduction=2, module='act2'),
dict(num_chs=128, reduction=4, module='block2.rep.0'),
dict(num_chs=256, reduction=8, module='block3.rep.0'),
dict(num_chs=728, reduction=16, module='block12.rep.0'),
dict(num_chs=2048, reduction=32, module='act4'),
]
self.global_pool, self.fc = create_classifier(self.num_features, self.num_classes, pool_type=global_pool)
# ------- init weights -------
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
elif isinstance(m, nn.BatchNorm2d):
m.weight.data.fill_(1)
m.bias.data.zero_()
@torch.jit.ignore
def group_matcher(self, coarse=False):
return dict(
stem=r'^conv[12]|bn[12]',
blocks=[
(r'^block(\d+)', None),
(r'^conv[34]|bn[34]', (99,)),
],
)
@torch.jit.ignore
def set_grad_checkpointing(self, enable=True):
assert not enable, "gradient checkpointing not supported"
@torch.jit.ignore
def get_classifier(self) -> nn.Module:
return self.fc
def reset_classifier(self, num_classes: int, global_pool: str = 'avg'):
self.num_classes = num_classes
self.global_pool, self.fc = create_classifier(self.num_features, self.num_classes, pool_type=global_pool)
def forward_features(self, x):
x = self.conv1(x)
x = self.bn1(x)
x = self.act1(x)
x = self.conv2(x)
x = self.bn2(x)
x = self.act2(x)
x = self.block1(x)
x = self.block2(x)
x = self.block3(x)
x = self.block4(x)
x = self.block5(x)
x = self.block6(x)
x = self.block7(x)
x = self.block8(x)
x = self.block9(x)
x = self.block10(x)
x = self.block11(x)
x = self.block12(x)
x = self.conv3(x)
x = self.bn3(x)
x = self.act3(x)
x = self.conv4(x)
x = self.bn4(x)
x = self.act4(x)
return x
def forward_head(self, x, pre_logits: bool = False):
x = self.global_pool(x)
if self.drop_rate:
x = F.dropout(x, self.drop_rate, training=self.training)
return x if pre_logits else self.fc(x)
def forward(self, x):
x = self.forward_features(x)
x = self.forward_head(x)
return x
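The default config for this model uses `pool_size=(10, 10)` at a 299x299 input. That follows from the stem (two valid 3x3 convs, the first strided) plus the four strided blocks, each of which ends with `MaxPool2d(3, 2, 1)`. A pure-Python check using the standard conv/pool output-size formula (helper name is illustrative):

```python
def conv_out(n, k, s, p):
    """Output size of a conv/pool layer: floor((n + 2p - k) / s) + 1."""
    return (n + 2 * p - k) // s + 1

n = 299
n = conv_out(n, 3, 2, 0)   # conv1, stride 2, no padding -> 149
n = conv_out(n, 3, 1, 0)   # conv2, stride 1, no padding -> 147
for _ in range(4):         # blocks 1, 2, 3, and 12 each end with MaxPool2d(3, 2, 1)
    n = conv_out(n, 3, 2, 1)
print(n)  # -> 10, matching 'pool_size': (10, 10) in the default config
```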
def _xception(variant, pretrained=False, **kwargs):
return build_model_with_cfg(
Xception, variant, pretrained,
feature_cfg=dict(feature_cls='hook'),
**kwargs)
default_cfgs = generate_default_cfgs({
'legacy_xception.tf_in1k': {
'url': 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-cadene/xception-43020ad28.pth',
'input_size': (3, 299, 299),
'pool_size': (10, 10),
'crop_pct': 0.8975,
'interpolation': 'bicubic',
'mean': (0.5, 0.5, 0.5),
'std': (0.5, 0.5, 0.5),
'num_classes': 1000,
'first_conv': 'conv1',
'classifier': 'fc'
# The resize parameter of the validation transform should be 333, and make sure to center crop at 299x299
}
})
@register_model
def legacy_xception(pretrained=False, **kwargs) -> Xception:
return _xception('legacy_xception', pretrained=pretrained, **kwargs)
register_model_deprecations(__name__, {
'xception': 'legacy_xception',
})
""" PyTorch Lamb optimizer w/ behaviour similar to NVIDIA FusedLamb
This optimizer code was adapted from the following (starting with latest)
* https://github.com/HabanaAI/Model-References/blob/2b435114fe8e31f159b1d3063b8280ae37af7423/PyTorch/nlp/bert/pretraining/lamb.py
* https://github.com/NVIDIA/DeepLearningExamples/blob/master/PyTorch/LanguageModeling/Transformer-XL/pytorch/lamb.py
* https://github.com/cybertronai/pytorch-lamb
Use FusedLamb if you can (GPU). The reason for including this variant of Lamb is to have a version that is
similar in behaviour to APEX FusedLamb if you aren't using NVIDIA GPUs or cannot install/use APEX.
In addition to some cleanup, this Lamb impl has been modified to support PyTorch XLA and has been tested on TPU.
References for added functionality:
Cautious Optimizers: https://arxiv.org/abs/2411.16085
Why Gradients Rapidly Increase Near the End of Training: https://arxiv.org/abs/2506.02285
Original copyrights for above sources are below.
Modifications Copyright 2021 Ross Wightman
"""
# Copyright (c) 2021, Habana Labs Ltd. All rights reserved.
# Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# MIT License
#
# Copyright (c) 2019 cybertronai
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
import math
from typing import Optional, Tuple
import torch
from torch.optim import Optimizer
from ._types import ParamsT
class Lamb(Optimizer):
"""Implements a pure pytorch variant of FuseLAMB (NvLamb variant) optimizer from apex.optimizers.FusedLAMB
reference: https://github.com/NVIDIA/DeepLearningExamples/blob/master/PyTorch/LanguageModeling/Transformer-XL/pytorch/lamb.py
LAMB was proposed in:
- Large Batch Optimization for Deep Learning - Training BERT in 76 minutes: https://arxiv.org/abs/1904.00962
- On the Convergence of Adam and Beyond: https://openreview.net/forum?id=ryQu7f-RZ
    Args:
        params: Iterable of parameters to optimize or dicts defining parameter groups.
        lr: Learning rate.
        bias_correction: Whether to apply bias correction to the first and second moment estimates.
        betas: Coefficients used for computing running averages of gradient and its norm.
        eps: Term added to the denominator to improve numerical stability.
        weight_decay: Weight decay.
        grad_averaging: Whether to apply (1 - beta1) to grad when calculating running averages of gradient.
        max_grad_norm: Value used to clip global grad norm.
        trust_clip: Enable LAMBC trust ratio clipping.
        always_adapt: Apply adaptive learning rate even to parameters with 0.0 weight decay.
        caution: Apply the caution update rule from 'Cautious Optimizers' (https://arxiv.org/abs/2411.16085).
        decoupled_decay: Apply decoupled weight decay.
        corrected_weight_decay: Apply corrected weight decay (lr**2 / max_lr) when using decoupled_decay.
    """
def __init__(
self,
params: ParamsT,
lr: float = 1e-3,
bias_correction: bool = True,
betas: Tuple[float, float] = (0.9, 0.999),
eps: float = 1e-6,
weight_decay: float = 0.01,
grad_averaging: bool = True,
max_grad_norm: Optional[float] = 1.0,
trust_clip: bool = False,
always_adapt: bool = False,
caution: bool = False,
decoupled_decay: bool = False,
corrected_weight_decay: bool = False,
):
defaults = dict(
lr=lr,
bias_correction=bias_correction,
betas=betas,
eps=eps,
weight_decay=weight_decay,
grad_averaging=grad_averaging,
max_grad_norm=max_grad_norm,
trust_clip=trust_clip,
always_adapt=always_adapt,
caution=caution,
decoupled_decay=decoupled_decay,
corrected_weight_decay=corrected_weight_decay,
)
super().__init__(params, defaults)
def __setstate__(self, state):
super().__setstate__(state)
for group in self.param_groups:
group.setdefault('caution', False)
group.setdefault('decoupled_decay', False)
group.setdefault('corrected_weight_decay', False)
def _get_clip_grad_norm(self):
max_grad_norm = self.defaults['max_grad_norm']
if max_grad_norm is None:
return None
norms = []
for group in self.param_groups:
for p in group['params']:
if p.grad is None:
continue
grad = p.grad
if grad.is_sparse:
raise RuntimeError('Lamb does not support sparse gradients, consider SparseAdam instead.')
norms.append(torch.linalg.vector_norm(grad))
global_norm = torch.linalg.vector_norm(torch.stack(norms))
clip_global_norm = (global_norm / max_grad_norm).clamp_(min=1.0)
return clip_global_norm
@torch.no_grad()
def step(self, closure=None):
"""Performs a single optimization step.
Arguments:
closure (callable, optional): A closure that reevaluates the model
and returns the loss.
"""
loss = None
if closure is not None:
with torch.enable_grad():
loss = closure()
clip_grad_norm = self._get_clip_grad_norm() # None if disabled
for group in self.param_groups:
bias_correction = 1 if group['bias_correction'] else 0
beta1, beta2 = group['betas']
grad_averaging = 1 if group['grad_averaging'] else 0
beta3 = 1 - beta1 if grad_averaging else 1.0
            # Assume the same step across the group for now to simplify things.
            # A per-parameter step could easily be supported by making it a tensor, or passing a list into the kernel.
if 'step' in group:
group['step'] += 1
else:
group['step'] = 1
if bias_correction:
bias_correction1 = 1 - beta1 ** group['step']
bias_correction2 = 1 - beta2 ** group['step']
else:
bias_correction1, bias_correction2 = 1.0, 1.0
for p in group['params']:
if p.grad is None:
continue
grad = p.grad
if clip_grad_norm is not None:
grad.div_(clip_grad_norm)
state = self.state[p]
# State initialization
if len(state) == 0:
                    # Exponential moving average of gradient values
state['exp_avg'] = torch.zeros_like(p)
# Exponential moving average of squared gradient values
state['exp_avg_sq'] = torch.zeros_like(p)
exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq']
# Decay the first and second moment running average coefficient
exp_avg.mul_(beta1).add_(grad, alpha=beta3) # m_t
exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2) # v_t
denom = (exp_avg_sq.sqrt() / math.sqrt(bias_correction2)).add_(group['eps'])
update = (exp_avg / bias_correction1).div_(denom)
if group['caution']:
# Apply caution as per 'Cautious Optimizers' - https://arxiv.org/abs/2411.16085
mask = (update * grad > 0).to(grad.dtype)
mask.div_(mask.mean().clamp_(min=1e-3))
update.mul_(mask)
weight_decay = group['weight_decay']
if weight_decay != 0:
if group.get('decoupled_decay', False):
if group['corrected_weight_decay']:
wd_scale = group['lr'] ** 2 / self.defaults['lr']
else:
wd_scale = group['lr']
p.add_(p, alpha=-wd_scale * weight_decay)
else:
update.add_(p, alpha=weight_decay)
if weight_decay != 0 or group['always_adapt']:
# Layer-wise LR adaptation. By default, skip adaptation on parameters that are
# excluded from weight decay, unless always_adapt == True, then always enabled.
w_norm = p.norm(2.0)
g_norm = update.norm(2.0)
trust_ratio = w_norm / g_norm
# FIXME nested where required since logical and/or not working in PT XLA
# Set the ratio to 1.0 (no change) if either weight norm or grad norm is zero
trust_ratio = torch.where(
w_norm > 0,
torch.where(g_norm > 0, trust_ratio, 1.0),
1.0,
)
if group['trust_clip']:
# LAMBC trust clipping, upper bound fixed at one
trust_ratio = torch.clamp(trust_ratio, max=1.0)
update.mul_(trust_ratio)
p.add_(update, alpha=-group['lr'])
return loss
Source: pytorch-image-models/timm/optim/lamb.py
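For intuition, the layer-wise trust-ratio update implemented by `Lamb.step` above can be sketched without torch. Plain Python lists stand in for tensors; the helper name `lamb_step` and its defaults are ours, mirroring the constructor (bias correction and non-decoupled weight decay, no caution or trust clipping):

```python
import math

def lamb_step(p, grad, exp_avg, exp_avg_sq, step, lr=1e-3,
              beta1=0.9, beta2=0.999, eps=1e-6, weight_decay=0.01):
    """One LAMB update for a single parameter vector, element by element."""
    bias_correction1 = 1 - beta1 ** step
    bias_correction2 = 1 - beta2 ** step
    update = []
    for i, g in enumerate(grad):
        exp_avg[i] = beta1 * exp_avg[i] + (1 - beta1) * g              # m_t
        exp_avg_sq[i] = beta2 * exp_avg_sq[i] + (1 - beta2) * g * g    # v_t
        denom = math.sqrt(exp_avg_sq[i]) / math.sqrt(bias_correction2) + eps
        u = (exp_avg[i] / bias_correction1) / denom
        update.append(u + weight_decay * p[i])                         # non-decoupled decay
    # Layer-wise LR adaptation: scale the update by ||w|| / ||update||.
    w_norm = math.sqrt(sum(x * x for x in p))
    g_norm = math.sqrt(sum(u * u for u in update))
    trust_ratio = w_norm / g_norm if w_norm > 0 and g_norm > 0 else 1.0
    return [x - lr * trust_ratio * u for x, u in zip(p, update)]

p = [1.0, -2.0, 3.0]
new_p = lamb_step(p, grad=[0.1, -0.2, 0.3],
                  exp_avg=[0.0] * 3, exp_avg_sq=[0.0] * 3, step=1)
```

Each parameter moves opposite its gradient, with the whole layer's step rescaled by the trust ratio rather than each element independently.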
from .cosine_lr import CosineLRScheduler
from .multistep_lr import MultiStepLRScheduler
from .plateau_lr import PlateauLRScheduler
from .poly_lr import PolyLRScheduler
from .step_lr import StepLRScheduler
from .tanh_lr import TanhLRScheduler
from .scheduler_factory import create_scheduler, create_scheduler_v2, scheduler_kwargs
Source: pytorch-image-models/timm/scheduler/__init__.py
""" Distributed training/validation utils
Hacked together by / Copyright 2020 Ross Wightman
"""
import logging
import os
from typing import Optional
import torch
from torch import distributed as dist
from .model import unwrap_model
_logger = logging.getLogger(__name__)
def reduce_tensor(tensor, n):
rt = tensor.clone()
dist.all_reduce(rt, op=dist.ReduceOp.SUM)
rt /= n
return rt
def distribute_bn(model, world_size, reduce=False):
# ensure every node has the same running bn stats
for bn_name, bn_buf in unwrap_model(model).named_buffers(recurse=True):
if ('running_mean' in bn_name) or ('running_var' in bn_name):
if reduce:
# average bn stats across whole group
torch.distributed.all_reduce(bn_buf, op=dist.ReduceOp.SUM)
bn_buf /= float(world_size)
else:
# broadcast bn stats from rank 0 to whole group
torch.distributed.broadcast(bn_buf, 0)
def is_global_primary(args):
return args.rank == 0
def is_local_primary(args):
return args.local_rank == 0
def is_primary(args, local=False):
return is_local_primary(args) if local else is_global_primary(args)
def is_distributed_env():
if 'WORLD_SIZE' in os.environ:
return int(os.environ['WORLD_SIZE']) > 1
if 'SLURM_NTASKS' in os.environ:
return int(os.environ['SLURM_NTASKS']) > 1
return False
def world_info_from_env():
local_rank = 0
for v in ('LOCAL_RANK', 'MPI_LOCALRANKID', 'SLURM_LOCALID', 'OMPI_COMM_WORLD_LOCAL_RANK'):
if v in os.environ:
local_rank = int(os.environ[v])
break
global_rank = 0
for v in ('RANK', 'PMI_RANK', 'SLURM_PROCID', 'OMPI_COMM_WORLD_RANK'):
if v in os.environ:
global_rank = int(os.environ[v])
break
world_size = 1
for v in ('WORLD_SIZE', 'PMI_SIZE', 'SLURM_NTASKS', 'OMPI_COMM_WORLD_SIZE'):
if v in os.environ:
world_size = int(os.environ[v])
break
return local_rank, global_rank, world_size
def init_distributed_device(args):
# Distributed training = training on more than one GPU.
# Works in both single and multi-node scenarios.
args.distributed = False
args.world_size = 1
args.rank = 0 # global rank
args.local_rank = 0
result = init_distributed_device_so(
device=getattr(args, 'device', 'cuda'),
dist_backend=getattr(args, 'dist_backend', None),
dist_url=getattr(args, 'dist_url', None),
)
args.device = result['device']
args.world_size = result['world_size']
args.rank = result['global_rank']
args.local_rank = result['local_rank']
args.distributed = result['distributed']
device = torch.device(args.device)
return device
def init_distributed_device_so(
device: str = 'cuda',
dist_backend: Optional[str] = None,
dist_url: Optional[str] = None,
):
# Distributed training = training on more than one GPU.
# Works in both single and multi-node scenarios.
distributed = False
world_size = 1
global_rank = 0
local_rank = 0
device_type, *device_idx = device.split(':', maxsplit=1)
if dist_backend is None:
        # FIXME: verify that ROCm transforms nccl to rccl
dist_backends = {
"xpu": "ccl",
"hpu": "hccl",
"cuda": "nccl",
"npu": "hccl",
}
dist_backend = dist_backends.get(device_type, 'gloo')
dist_url = dist_url or 'env://'
# TBD, support horovod?
# if args.horovod:
# import horovod.torch as hvd
# assert hvd is not None, "Horovod is not installed"
# hvd.init()
# args.local_rank = int(hvd.local_rank())
# args.rank = hvd.rank()
# args.world_size = hvd.size()
# args.distributed = True
# os.environ['LOCAL_RANK'] = str(args.local_rank)
# os.environ['RANK'] = str(args.rank)
# os.environ['WORLD_SIZE'] = str(args.world_size)
if is_distributed_env():
if 'SLURM_PROCID' in os.environ:
# DDP via SLURM
local_rank, global_rank, world_size = world_info_from_env()
# SLURM var -> torch.distributed vars in case needed
os.environ['LOCAL_RANK'] = str(local_rank)
os.environ['RANK'] = str(global_rank)
os.environ['WORLD_SIZE'] = str(world_size)
torch.distributed.init_process_group(
backend=dist_backend,
init_method=dist_url,
world_size=world_size,
rank=global_rank,
)
else:
# DDP via torchrun, torch.distributed.launch
local_rank, _, _ = world_info_from_env()
torch.distributed.init_process_group(
backend=dist_backend,
init_method=dist_url,
)
world_size = torch.distributed.get_world_size()
global_rank = torch.distributed.get_rank()
distributed = True
if device_type == 'cuda':
assert torch.cuda.is_available(), f'CUDA is not available but {device} was specified.'
if device_type == 'npu':
assert torch.npu.is_available(), f'Ascend NPU is not available but {device} was specified.'
if distributed and device != 'cpu':
# Ignore manually specified device index in distributed mode and
# override with resolved local rank, fewer headaches in most setups.
if device_idx:
_logger.warning(f'device index {device_idx[0]} removed from specified ({device}).')
device = f'{device_type}:{local_rank}'
if device.startswith('cuda:'):
torch.cuda.set_device(device)
return dict(
device=device,
global_rank=global_rank,
local_rank=local_rank,
world_size=world_size,
distributed=distributed,
)
Source: pytorch-image-models/timm/utils/distributed.py
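The environment-variable precedence used by `world_info_from_env` above can be exercised without torch. This sketch is a refactor of ours for testability: it takes the environment as an explicit mapping instead of reading `os.environ` directly:

```python
def world_info_from_env(env):
    """Resolve (local_rank, global_rank, world_size), first matching variable wins."""
    def first(names, default):
        for name in names:
            if name in env:
                return int(env[name])
        return default

    local_rank = first(
        ('LOCAL_RANK', 'MPI_LOCALRANKID', 'SLURM_LOCALID', 'OMPI_COMM_WORLD_LOCAL_RANK'), 0)
    global_rank = first(
        ('RANK', 'PMI_RANK', 'SLURM_PROCID', 'OMPI_COMM_WORLD_RANK'), 0)
    world_size = first(
        ('WORLD_SIZE', 'PMI_SIZE', 'SLURM_NTASKS', 'OMPI_COMM_WORLD_SIZE'), 1)
    return local_rank, global_rank, world_size

# A SLURM-style environment resolves through the SLURM_* fallbacks:
slurm_info = world_info_from_env({'SLURM_LOCALID': '1', 'SLURM_PROCID': '3', 'SLURM_NTASKS': '8'})
```

With no relevant variables set, the defaults `(0, 0, 1)` describe a single-process run.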
# Async Applications with Agents
This guide demonstrates how to integrate a synchronous agent from the `smolagents` library into an asynchronous Python web application using Starlette.
The example is designed to help users new to async Python and agent integration understand best practices for combining synchronous agent logic with async web servers.
## Overview
- **Starlette**: A lightweight ASGI framework for building asynchronous web applications in Python.
- **anyio.to_thread.run_sync**: Utility to run blocking (synchronous) code in a background thread, preventing it from blocking the async event loop.
- **CodeAgent**: An agent from the `smolagents` library capable of programmatically solving tasks.
## Why Use a Background Thread?
`CodeAgent.run()` executes Python code synchronously. If called directly in an async endpoint, it would block Starlette's event loop, reducing performance and scalability. By offloading this operation to a background thread with `anyio.to_thread.run_sync`, you keep the app responsive and efficient, even under high concurrency.
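The same offloading pattern can be demonstrated with only the standard library: `asyncio.to_thread` is the stdlib analogue of `anyio.to_thread.run_sync`, and `blocking_agent_run` below is a stand-in for `agent.run`:

```python
import asyncio
import time

def blocking_agent_run(task: str) -> str:
    # Stand-in for agent.run(): a synchronous, blocking call.
    time.sleep(0.1)
    return f"done: {task}"

async def main() -> str:
    # Offload the blocking call to a worker thread so the event loop
    # stays free to serve other requests meanwhile.
    return await asyncio.to_thread(blocking_agent_run, "demo")

result = asyncio.run(main())
```

Awaiting `asyncio.to_thread(...)` suspends only this coroutine; other coroutines keep running while the worker thread blocks.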
## Example Workflow
- The Starlette app exposes a `/run-agent` endpoint that accepts a JSON payload with a `task` string.
- When a request is received, the agent is run in a background thread using `anyio.to_thread.run_sync`.
- The result is returned as a JSON response.
## Building a Starlette App with a CodeAgent
### 1. Install Dependencies
```bash
pip install smolagents starlette anyio uvicorn
```
### 2. Application Code (`main.py`)
```python
import anyio.to_thread
from starlette.applications import Starlette
from starlette.requests import Request
from starlette.responses import JSONResponse
from starlette.routing import Route
from smolagents import CodeAgent, InferenceClientModel
agent = CodeAgent(
model=InferenceClientModel(model_id="Qwen/Qwen2.5-Coder-32B-Instruct"),
tools=[],
)
async def run_agent(request: Request):
data = await request.json()
task = data.get("task", "")
# Run the agent synchronously in a background thread
result = await anyio.to_thread.run_sync(agent.run, task)
return JSONResponse({"result": result})
app = Starlette(routes=[
Route("/run-agent", run_agent, methods=["POST"]),
])
```
### 3. Run the App
```bash
uvicorn async_agent.main:app --reload
```
### 4. Test the Endpoint
```bash
curl -X POST http://localhost:8000/run-agent -H 'Content-Type: application/json' -d '{"task": "What is 2+2?"}'
```
**Expected Response:**
```json
{"result": "4"}
```
## Further Reading
- [Starlette Documentation](https://www.starlette.io/)
- [anyio Documentation](https://anyio.readthedocs.io/)
---
For the full code, see [`examples/async_agent`](https://github.com/huggingface/smolagents/tree/main/examples/async_agent).
Source: smolagents/docs/source/en/examples/async_agent.md
# Manage your agent's memory
[[open-in-colab]]
In the end, an agent can be defined by simple components: it has tools and prompts.
Most importantly, it has a memory of past steps, drawing a history of planning, execution, and errors.
### Replay your agent's memory
We propose several features to inspect a past agent run.
You can instrument the agent's run to display it in a great UI that lets you zoom in/out on specific steps, as highlighted in the [instrumentation guide](./inspect_runs).
You can also use `agent.replay()`, as follows:
After the agent has run:
```py
from smolagents import InferenceClientModel, CodeAgent
agent = CodeAgent(tools=[], model=InferenceClientModel(), verbosity_level=0)
result = agent.run("What's the 20th Fibonacci number?")
```
If you want to replay this last run, just use:
```py
agent.replay()
```
### Dynamically change the agent's memory
Many advanced use cases require dynamic modification of the agent's memory.
You can access the agent's memory using:
```py
from smolagents import ActionStep
system_prompt_step = agent.memory.system_prompt
print("The system prompt given to the agent was:")
print(system_prompt_step.system_prompt)
task_step = agent.memory.steps[0]
print("\n\nThe first task step was:")
print(task_step.task)
for step in agent.memory.steps:
if isinstance(step, ActionStep):
if step.error is not None:
print(f"\nStep {step.step_number} got this error:\n{step.error}\n")
else:
print(f"\nStep {step.step_number} got these observations:\n{step.observations}\n")
```
Use `agent.memory.get_full_steps()` to get full steps as dictionaries.
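As a rough mental model (using a simplified stand-in dataclass, not the real smolagents types), `get_full_steps()` turns memory steps into plain serializable dictionaries you can filter or log:

```python
from dataclasses import dataclass, asdict

@dataclass
class ActionStep:
    # Simplified stand-in for smolagents' ActionStep, for illustration only.
    step_number: int
    observations: str
    error: str = ""

steps = [
    ActionStep(1, "searched the web"),
    ActionStep(2, "", error="timeout"),
]

# Conceptually what get_full_steps() returns: plain dictionaries.
full_steps = [asdict(s) for s in steps]
failed = [s["step_number"] for s in full_steps if s["error"]]
```

Because the result is plain dictionaries, it can be dumped to JSON, diffed between runs, or filtered for failing steps as above.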
You can also use step callbacks to dynamically change the agent's memory.
Step callbacks can access the `agent` itself in their arguments, so they can access any memory step as highlighted above, and change it if needed. For instance, let's say you are observing screenshots of each step performed by a web browser agent. You want to log the newest screenshot, and remove the images from ancient steps to save on token costs.
You could run something like the following.
_Note: this code is incomplete, some imports and object definitions have been removed for the sake of concision, visit [the original script](https://github.com/huggingface/smolagents/blob/main/src/smolagents/vision_web_browser.py) to get the full working code._
```py
import helium
from PIL import Image
from io import BytesIO
from time import sleep
def update_screenshot(memory_step: ActionStep, agent: CodeAgent) -> None:
sleep(1.0) # Let JavaScript animations happen before taking the screenshot
driver = helium.get_driver()
latest_step = memory_step.step_number
for previous_memory_step in agent.memory.steps: # Remove previous screenshots from logs for lean processing
if isinstance(previous_memory_step, ActionStep) and previous_memory_step.step_number <= latest_step - 2:
previous_memory_step.observations_images = None
png_bytes = driver.get_screenshot_as_png()
image = Image.open(BytesIO(png_bytes))
memory_step.observations_images = [image.copy()]
```
Then you should pass this function in the `step_callbacks` argument upon initialization of your agent:
```py
CodeAgent(
tools=[WebSearchTool(), go_back, close_popups, search_item_ctrl_f],
model=model,
additional_authorized_imports=["helium"],
step_callbacks=[update_screenshot],
max_steps=20,
verbosity_level=2,
)
```
Head to our [vision web browser code](https://github.com/huggingface/smolagents/blob/main/src/smolagents/vision_web_browser.py) to see the full working example.
### Run agents one step at a time
This can be useful in case you have tool calls that take days: you can just run your agents step by step.
This will also let you update the memory on each step.
```py
from smolagents import InferenceClientModel, CodeAgent, ActionStep, TaskStep
agent = CodeAgent(tools=[], model=InferenceClientModel(), verbosity_level=1)
agent.python_executor.send_tools({**agent.tools})
print(agent.memory.system_prompt)
task = "What is the 20th Fibonacci number?"
# You could modify the memory as needed here by inputting the memory of another agent.
# agent.memory.steps = previous_agent.memory.steps
# Let's start a new task!
agent.memory.steps.append(TaskStep(task=task, task_images=[]))
final_answer = None
step_number = 1
while final_answer is None and step_number <= 10:
memory_step = ActionStep(
step_number=step_number,
observations_images=[],
)
# Run one step.
final_answer = agent.step(memory_step)
agent.memory.steps.append(memory_step)
step_number += 1
# Change the memory as you please!
# For instance to update the latest step:
# agent.memory.steps[-1] = ...
print("The final answer is:", final_answer)
```
Source: smolagents/docs/source/en/tutorials/memory.md
# Secure code execution

[[open-in-colab]]

> [!TIP]
> If you're new to building agents, make sure to first read the [intro to agents](../conceptual_guides/intro_agents) and the [guided tour of smolagents](../guided_tour).

### Code agents

[Multiple](https://huggingface.co/papers/2402.01030) [research](https://huggingface.co/papers/2411.01747) [papers](https://huggingface.co/papers/2401.00812) have shown that having the LLM write its actions (tool calls) in code is much better than the current standard format for tool calling, which is, across the industry, different shades of "writing actions as a JSON of tool names and arguments".

Why is code better? Because we crafted our programming languages specifically to express actions performed by a computer. If JSON snippets were a better way, JSON snippets would have been used instead, and the devil would be laughing at us.

Code is simply a better way to express actions on a computer. It has better:
- **Composability:** could you nest JSON actions within each other, or define a set of JSON actions to re-use later, the same way you could just define a Python function?
- **Object management:** how do you store the output of an action like `generate_image` in JSON?
- **Generality:** code is built to express anything you can have a computer do.
- **Representation in LLM training corpora:** why not leverage the blessing that plenty of high-quality code examples are already included in LLM training data?

This is illustrated in the figure below, taken from [Executable Code Actions Elicit Better LLM Agents](https://huggingface.co/papers/2402.01030).

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/code_vs_json_actions.png">

This is why we put our emphasis on code agents, in this case Python agents, which meant putting more effort into building secure Python interpreters.
### Local Python interpreter

By default, the `CodeAgent` runs LLM-generated code in your environment.
This execution is not done by the vanilla Python interpreter: we re-built a more secure `LocalPythonExecutor` from the ground up.
This interpreter is designed for security:
- Imports are restricted to a list explicitly passed by the user
- The number of operations is capped to prevent infinite loops and resource bloat
- It will not perform any operation that is not pre-defined

We have used this in many use cases without ever observing any damage to the environment.
However, this solution is not fully secure: one could imagine occasions where LLMs fine-tuned for malicious actions could still harm your environment. For instance, if you have allowed an innocuous package like `Pillow` to process images, the LLM could generate thousands of image saves to bloat your hard drive.
This is certainly unlikely if you chose the LLM engine yourself, but it could happen.
So if you want to be extra cautious, you can use the remote code execution option described below.
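The import restriction idea can be illustrated with a toy allow-list check. This is our sketch for intuition only, not smolagents' actual implementation:

```python
AUTHORIZED_IMPORTS = {"math", "random"}

def safe_import(name: str):
    """Only allow imports from an explicit allow-list."""
    if name not in AUTHORIZED_IMPORTS:
        raise ImportError(f"Import of '{name}' is not authorized")
    return __import__(name)

math_mod = safe_import("math")   # allowed

try:
    safe_import("os")            # not on the list: rejected
    blocked = False
except ImportError:
    blocked = True
```

The real executor goes much further (it interprets the code's AST and also caps operation counts), but the principle is the same: nothing runs unless it was explicitly permitted.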
### E2B code executor

For maximum security, you can use our integration with E2B to run code in a sandboxed environment. This is a remote execution service that runs your code in an isolated container, making it impossible for the code to affect your local environment.

For this, you will need to set up your E2B account and set your `E2B_API_KEY` in your environment variables. Head to [E2B's quickstart documentation](https://e2b.dev/docs/quickstart) for more information.

Then you can install it with `pip install e2b-code-interpreter python-dotenv`.

Now you're set!

To set the code executor to E2B, simply pass the flag `executor_type="e2b"` when initializing your `CodeAgent`.
Note that you should add all the tool's dependencies in `additional_authorized_imports`, so that the executor installs them.
```py
from smolagents import CodeAgent, VisitWebpageTool, InferenceClientModel
agent = CodeAgent(
tools = [VisitWebpageTool()],
model=InferenceClientModel(),
additional_authorized_imports=["requests", "markdownify"],
executor_type="e2b"
)
agent.run("What was Abraham Lincoln's preferred pet?")
```
E2B code execution is not currently compatible with multi-agents, because having an agent call in a code blob that should be executed remotely is a mess. But we are working on adding it!
Source: smolagents/docs/source/hi/tutorials/secure_code_execution.md
# `smolagents`

This is the simplest framework for building powerful agents! By the way, what is an "agent"? We give our definition on [this page](conceptual_guides/intro_agents), where you will also find advice on when to use agents and when not to (spoiler: you are often better off without agents).

This library offers:

✨ **Simplicity**: the agent logic fits in about a thousand lines of code. We keep abstractions to their minimal shape above raw code!

🌐 **Support for any LLM**: it supports models hosted on the Hub, loaded in their `transformers` version or through our inference API, as well as models from OpenAI, Anthropic, and others. Powering an agent with any LLM is very easy.

🧑‍💻 **First-class support for code agents**, i.e. agents that write their actions in code (as opposed to "agents being used to write code"), [read more here](tutorials/secure_code_execution).

🤗 **Hub integration**: you can share and load tools to and from the Hub, and more is to come!

<div class="mt-10">
<div class="w-full flex flex-col space-y-4 md:space-y-0 md:grid md:grid-cols-2 md:gap-y-4 md:gap-x-5">
<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./guided_tour"
><div class="w-full text-center bg-gradient-to-br from-blue-400 to-blue-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Guided tour</div>
<p class="text-gray-700">Learn the basics and become familiar with using agents. Start here if you are using agents for the first time!</p>
</a>
<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./examples/text_to_sql"
><div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">How-to guides</div>
<p class="text-gray-700">Practical guides to help you achieve a specific goal: create an agent that generates and tests SQL queries!</p>
</a>
<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./conceptual_guides/intro_agents"
><div class="w-full text-center bg-gradient-to-br from-pink-400 to-pink-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Conceptual guides</div>
<p class="text-gray-700">High-level explanations for building a better understanding of important topics.</p>
</a>
<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./tutorials/building_good_agents"
><div class="w-full text-center bg-gradient-to-br from-purple-400 to-purple-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Tutorials</div>
<p class="text-gray-700">Horizontal tutorials that cover important aspects of building agents.</p>
</a>
</div>
</div>
Source: smolagents/docs/source/zh/index.md
import os
from smolagents import CodeAgent, LiteLLMRouterModel, WebSearchTool
# Make sure to setup the necessary environment variables!
llm_loadbalancer_model_list = [
{
"model_name": "model-group-1",
"litellm_params": {
"model": "gpt-4o-mini",
"api_key": os.getenv("OPENAI_API_KEY"),
},
},
{
"model_name": "model-group-1",
"litellm_params": {
"model": "bedrock/anthropic.claude-3-sonnet-20240229-v1:0",
"aws_access_key_id": os.getenv("AWS_ACCESS_KEY_ID"),
"aws_secret_access_key": os.getenv("AWS_SECRET_ACCESS_KEY"),
"aws_region_name": os.getenv("AWS_REGION"),
},
},
# {
# "model_name": "model-group-2",
# "litellm_params": {
# "model": "bedrock/anthropic.claude-3-sonnet-20240229-v1:0",
# "aws_access_key_id": os.getenv("AWS_ACCESS_KEY_ID"),
# "aws_secret_access_key": os.getenv("AWS_SECRET_ACCESS_KEY"),
# "aws_region_name": os.getenv("AWS_REGION"),
# },
# },
]
model = LiteLLMRouterModel(
model_id="model-group-1",
model_list=llm_loadbalancer_model_list,
client_kwargs={"routing_strategy": "simple-shuffle"},
)
agent = CodeAgent(tools=[WebSearchTool()], model=model, stream_outputs=True, return_full_result=True)
full_result = agent.run("How many seconds would it take for a leopard at full speed to run through Pont des Arts?")
print(full_result)
Source: smolagents/examples/multi_llm_agent.py
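The `"simple-shuffle"` routing strategy configured above can be pictured with a toy sketch (this is our illustration of the idea, not LiteLLM's implementation): all deployments sharing a `model_name` form one group, and each request picks one of them at random:

```python
import random

MODEL_LIST = [
    {"model_name": "model-group-1", "litellm_params": {"model": "gpt-4o-mini"}},
    {"model_name": "model-group-1",
     "litellm_params": {"model": "bedrock/anthropic.claude-3-sonnet-20240229-v1:0"}},
]

def pick_deployment(model_name: str, rng: random.Random) -> dict:
    """Randomly pick one deployment from the group sharing model_name."""
    candidates = [m for m in MODEL_LIST if m["model_name"] == model_name]
    if not candidates:
        raise ValueError(f"No deployments for {model_name!r}")
    return rng.choice(candidates)

deployment = pick_deployment("model-group-1", random.Random(0))
```

Requests addressed to `"model-group-1"` are thus spread across the underlying OpenAI and Bedrock deployments without the agent code knowing which one served each call.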
<jupyter_start><jupyter_text>Compare a text-based vs a vision-based browser. Warning: this notebook is experimental, it probably won't work out of the box!<jupyter_code>!pip install "smolagents[litellm,toolkit]" -q
import datasets
eval_ds = datasets.load_dataset("gaia-benchmark/GAIA", "2023_all")["validation"]
to_keep = [
"What's the last line of the rhyme under the flavor",
'Of the authors (First M. Last) that worked on the paper "Pie Menus or Linear Menus',
"In Series 9, Episode 11 of Doctor Who, the Doctor is trapped inside an ever-shifting maze. What is this location called in the official script for the episode? Give the setting exactly as it appears in the first scene heading.",
"Which contributor to the version of OpenCV where support was added for the Mask-RCNN model has the same name as a former Chinese head of government when the names are transliterated to the Latin alphabet?",
"The photograph in the Whitney Museum of American Art's collection with accession number 2022.128 shows a person holding a book. Which military unit did the author of this book join in 1813? Answer without using articles.",
"I went to Virtue restaurant & bar in Chicago for my birthday on March 22, 2021 and the main course I had was delicious! Unfortunately, when I went back about a month later on April 21, it was no longer on the dinner menu.",
"In Emily Midkiff's June 2014 article in a journal named for the one of Hreidmar's ",
"Under DDC 633 on Bielefeld University Library's BASE, as of 2020",
"In the 2018 VSCode blog post on replit.com, what was the command they clicked on in the last video to remove extra lines?",
"The Metropolitan Museum of Art has a portrait in its collection with an accession number of 29.100.5. Of the consecrators and co-consecrators",
"In Nature journal's Scientific Reports conference proceedings from 2012, in the article that did not mention plasmons or plasmonics, what nano-compound is studied?",
'In the year 2022, and before December, what does "R" stand for in the three core policies of the type of content',
"Who nominated the only Featured Article on English Wikipedia about a dinosaur that was promoted in November 2016?",
]
eval_ds = eval_ds.filter(lambda row: any([el in row["Question"] for el in to_keep]))
eval_ds = eval_ds.rename_columns({"Question": "question", "Final answer": "true_answer", "Level": "task"})
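The filter above keeps only the rows whose question contains one of the `to_keep` fragments. The same predicate on plain dicts, for illustration:

```python
# Same substring predicate as the datasets.filter call above, on plain dicts.
rows = [
    {"Question": "In the 2018 VSCode blog post on replit.com, what was the command?"},
    {"Question": "An unrelated question that should be dropped."},
]
keep_fragments = ["In the 2018 VSCode blog post on replit.com"]
kept = [row for row in rows if any(el in row["Question"] for el in keep_fragments)]
```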
import os
from dotenv import load_dotenv
from huggingface_hub import login
load_dotenv(override=True)
login(os.getenv("HF_TOKEN"))

### Text browser
from scripts.run_agents import answer_questions
from scripts.text_inspector_tool import TextInspectorTool
from scripts.text_web_browser import (
ArchiveSearchTool,
FinderTool,
FindNextTool,
NavigationalSearchTool,
PageDownTool,
PageUpTool,
SearchInformationTool,
VisitTool,
)
from scripts.visual_qa import VisualQAGPT4Tool
from smolagents import CodeAgent, LiteLLMModel
proprietary_model = LiteLLMModel(model_id="gpt-4o")
### BUILD AGENTS & TOOLS
WEB_TOOLS = [
SearchInformationTool(),
NavigationalSearchTool(),
VisitTool(),
PageUpTool(),
PageDownTool(),
FinderTool(),
FindNextTool(),
ArchiveSearchTool(),
]
surfer_agent = CodeAgent(
model=proprietary_model,
tools=WEB_TOOLS,
max_steps=20,
verbosity_level=2,
)
results_text = answer_questions(
eval_ds,
surfer_agent,
"code_gpt4o_27-01_text",
reformulation_model=proprietary_model,
output_folder="output_browsers",
visual_inspection_tool=VisualQAGPT4Tool(),
text_inspector_tool=TextInspectorTool(proprietary_model, 40000),
)

### Vision browser
!pip install helium -q
from scripts.visual_qa import VisualQAGPT4Tool
from smolagents import CodeAgent, LiteLLMModel, WebSearchTool
from smolagents.vision_web_browser import (
close_popups,
go_back,
helium_instructions,
initialize_agent,
save_screenshot,
search_item_ctrl_f,
)
proprietary_model = LiteLLMModel(model_id="gpt-4o")
vision_browser_agent = initialize_agent(proprietary_model)
### BUILD AGENTS & TOOLS
vision_browser_agent = CodeAgent(
tools=[WebSearchTool(), go_back, close_popups, search_item_ctrl_f],
model=proprietary_model,
additional_authorized_imports=["helium"],
step_callbacks=[save_screenshot],
max_steps=20,
verbosity_level=2,
)
results_vision = answer_questions(
eval_ds,
vision_browser_agent,
"code_gpt4o_27-01_vision",
reformulation_model=proprietary_model,
output_folder="output_browsers",
visual_inspection_tool=VisualQAGPT4Tool(),
text_inspector_tool=TextInspectorTool(proprietary_model, 40000),
postprompt=helium_instructions
+ "Any web browser controls won't work on .pdf urls, rather use the tool 'inspect_file_as_text' to read them",
)

### Browser-use browser
!pip install browser-use lxml_html_clean -q
!playwright install
import asyncio
import nest_asyncio
nest_asyncio.apply()
from browser_use import Agent
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
load_dotenv()
class BrowserUseAgent:
logs = []
def write_inner_memory_from_logs(self, summary_mode):
return self.results
def run(self, task, **kwargs):
agent = Agent(
task=task,
llm=ChatOpenAI(model="gpt-4o"),
)
self.results = asyncio.get_event_loop().run_until_complete(agent.run())
return self.results.history[-1].result[0].extracted_content
browser_use_agent = BrowserUseAgent()
results_browseruse = answer_questions(
eval_ds,
browser_use_agent,
"gpt-4o_27-01_browseruse",
reformulation_model=proprietary_model,
output_folder="output_browsers",
visual_inspection_tool=VisualQAGPT4Tool(),
text_inspector_tool=TextInspectorTool(proprietary_model, 40000),
postprompt="",
run_simple=True,
)

### Get results
import pandas as pd
from scripts.gaia_scorer import question_scorer
results_vision, results_text, results_browseruse = (
pd.DataFrame(results_vision),
pd.DataFrame(results_text),
pd.DataFrame(results_browseruse),
)
results_vision["is_correct"] = results_vision.apply(
lambda x: question_scorer(x["prediction"], x["true_answer"]), axis=1
)
results_text["is_correct"] = results_text.apply(lambda x: question_scorer(x["prediction"], x["true_answer"]), axis=1)
results_browseruse["is_correct"] = results_browseruse.apply(
lambda x: question_scorer(x["prediction"], x["true_answer"]), axis=1
)
results = pd.concat([results_vision, results_text, results_browseruse])
results.groupby("agent_name")["is_correct"].mean()
correct_vision_results = results_vision.loc[results_vision["is_correct"]]
correct_vision_results
false_text_results = results_text.loc[~results_text["is_correct"]]
false_text_results
# [source: smolagents/examples/open_deep_research/visual_vs_text_browser.ipynb]
#!/usr/bin/env python
# coding=utf-8
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import importlib
import json
import os
import re
import tempfile
import textwrap
import time
import warnings
from abc import ABC, abstractmethod
from collections.abc import Callable, Generator
from concurrent.futures import ThreadPoolExecutor, as_completed
from dataclasses import dataclass
from logging import getLogger
from pathlib import Path
from typing import TYPE_CHECKING, Any, Literal, Type, TypeAlias, TypedDict, Union
import yaml
from huggingface_hub import create_repo, metadata_update, snapshot_download, upload_folder
from jinja2 import StrictUndefined, Template
from rich.console import Group
from rich.live import Live
from rich.markdown import Markdown
from rich.panel import Panel
from rich.rule import Rule
from rich.text import Text
if TYPE_CHECKING:
import PIL.Image
from .agent_types import AgentAudio, AgentImage, handle_agent_output_types
from .default_tools import TOOL_MAPPING, FinalAnswerTool
from .local_python_executor import BASE_BUILTIN_MODULES, LocalPythonExecutor, PythonExecutor, fix_final_answer_code
from .memory import (
ActionStep,
AgentMemory,
CallbackRegistry,
FinalAnswerStep,
MemoryStep,
PlanningStep,
SystemPromptStep,
TaskStep,
Timing,
TokenUsage,
ToolCall,
)
from .models import (
CODEAGENT_RESPONSE_FORMAT,
ChatMessage,
ChatMessageStreamDelta,
ChatMessageToolCall,
MessageRole,
Model,
agglomerate_stream_deltas,
parse_json_if_needed,
)
from .monitoring import (
YELLOW_HEX,
AgentLogger,
LogLevel,
Monitor,
)
from .remote_executors import DockerExecutor, E2BExecutor, WasmExecutor
from .tools import BaseTool, Tool, validate_tool_arguments
from .utils import (
AgentError,
AgentExecutionError,
AgentGenerationError,
AgentMaxStepsError,
AgentParsingError,
AgentToolCallError,
AgentToolExecutionError,
create_agent_gradio_app_template,
extract_code_from_text,
is_valid_name,
make_init_file,
parse_code_blobs,
truncate_content,
)
logger = getLogger(__name__)
def get_variable_names(template: str) -> set[str]:
pattern = re.compile(r"\{\{([^{}]+)\}\}")
return {match.group(1).strip() for match in pattern.finditer(template)}
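For illustration, here is the same `{{ ... }}` pattern applied to a small hypothetical template:

```python
import re

# The pattern used by get_variable_names: capture names between {{ and }}.
pattern = re.compile(r"\{\{([^{}]+)\}\}")
template = "Solve {{ task }} with {{tools}} and report to {{ manager }}."
names = {match.group(1).strip() for match in pattern.finditer(template)}
```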
def populate_template(template: str, variables: dict[str, Any]) -> str:
compiled_template = Template(template, undefined=StrictUndefined)
try:
return compiled_template.render(**variables)
except Exception as e:
raise Exception(f"Error during jinja template rendering: {type(e).__name__}: {e}")
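`populate_template` relies on Jinja's `StrictUndefined` so that a missing variable raises an error instead of silently rendering as empty text. A tiny stand-in for that contract (not Jinja itself), for illustration:

```python
import re

def render_strict(template, variables):
    """Substitute {{ name }} placeholders; raise if a name is missing."""
    def replace(match):
        name = match.group(1).strip()
        if name not in variables:
            raise KeyError(f"Undefined template variable: {name}")
        return str(variables[name])
    return re.sub(r"\{\{([^{}]+)\}\}", replace, template)

rendered = render_strict("Task: {{ task }}", {"task": "summarize the report"})
try:
    render_strict("Task: {{ task }}", {})  # 'task' missing -> KeyError
    raised = False
except KeyError:
    raised = True
```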
@dataclass
class ActionOutput:
output: Any
is_final_answer: bool
@dataclass
class ToolOutput:
id: str
output: Any
is_final_answer: bool
observation: str
tool_call: ToolCall
class PlanningPromptTemplate(TypedDict):
"""
Prompt templates for the planning step.
Args:
        initial_plan (`str`): Initial plan prompt.
update_plan_pre_messages (`str`): Update plan pre-messages prompt.
update_plan_post_messages (`str`): Update plan post-messages prompt.
"""
initial_plan: str
update_plan_pre_messages: str
update_plan_post_messages: str
class ManagedAgentPromptTemplate(TypedDict):
"""
Prompt templates for the managed agent.
Args:
task (`str`): Task prompt.
report (`str`): Report prompt.
"""
task: str
report: str
class FinalAnswerPromptTemplate(TypedDict):
"""
Prompt templates for the final answer.
Args:
pre_messages (`str`): Pre-messages prompt.
post_messages (`str`): Post-messages prompt.
"""
pre_messages: str
post_messages: str
class PromptTemplates(TypedDict):
"""
Prompt templates for the agent.
Args:
system_prompt (`str`): System prompt.
planning ([`~agents.PlanningPromptTemplate`]): Planning prompt templates.
managed_agent ([`~agents.ManagedAgentPromptTemplate`]): Managed agent prompt templates.
final_answer ([`~agents.FinalAnswerPromptTemplate`]): Final answer prompt templates.
"""
system_prompt: str
planning: PlanningPromptTemplate
managed_agent: ManagedAgentPromptTemplate
final_answer: FinalAnswerPromptTemplate
EMPTY_PROMPT_TEMPLATES = PromptTemplates(
system_prompt="",
planning=PlanningPromptTemplate(
initial_plan="",
update_plan_pre_messages="",
update_plan_post_messages="",
),
managed_agent=ManagedAgentPromptTemplate(task="", report=""),
final_answer=FinalAnswerPromptTemplate(pre_messages="", post_messages=""),
)
@dataclass
class RunResult:
"""Holds extended information about an agent run.
Attributes:
output (Any | None): The final output of the agent run, if available.
state (Literal["success", "max_steps_error"]): The final state of the agent after the run.
steps (list[dict]): The agent's memory, as a list of steps.
token_usage (TokenUsage | None): Count of tokens used during the run.
timing (Timing): Timing details of the agent run: start time, end time, duration.
messages (list[dict]): The agent's memory, as a list of messages.
<Deprecated version="1.22.0">
Parameter 'messages' is deprecated and will be removed in version 1.25. Please use 'steps' instead.
</Deprecated>
"""
output: Any | None
state: Literal["success", "max_steps_error"]
steps: list[dict]
token_usage: TokenUsage | None
timing: Timing
def __init__(self, output=None, state=None, steps=None, token_usage=None, timing=None, messages=None):
# Handle deprecated 'messages' parameter
if messages is not None:
if steps is not None:
raise ValueError("Cannot specify both 'messages' and 'steps' parameters. Use 'steps' instead.")
warnings.warn(
"Parameter 'messages' is deprecated and will be removed in version 1.25. Please use 'steps' instead.",
FutureWarning,
stacklevel=2,
)
steps = messages
# Initialize with dataclass fields
self.output = output
self.state = state
self.steps = steps
self.token_usage = token_usage
self.timing = timing
@property
def messages(self):
"""Backward compatibility property that returns steps."""
warnings.warn(
"Parameter 'messages' is deprecated and will be removed in version 1.25. Please use 'steps' instead.",
FutureWarning,
stacklevel=2,
)
return self.steps
def dict(self):
return {
"output": self.output,
"state": self.state,
"steps": self.steps,
"token_usage": self.token_usage.dict() if self.token_usage is not None else None,
"timing": self.timing.dict(),
}
StreamEvent: TypeAlias = Union[
ChatMessageStreamDelta,
ChatMessageToolCall,
ActionOutput,
ToolCall,
ToolOutput,
PlanningStep,
ActionStep,
FinalAnswerStep,
]
class MultiStepAgent(ABC):
"""
Agent class that solves the given task step by step, using the ReAct framework:
While the objective is not reached, the agent will perform a cycle of action (given by the LLM) and observation (obtained from the environment).
Args:
tools (`list[Tool]`): [`Tool`]s that the agent can use.
model (`Callable[[list[dict[str, str]]], ChatMessage]`): Model that will generate the agent's actions.
prompt_templates ([`~agents.PromptTemplates`], *optional*): Prompt templates.
instructions (`str`, *optional*): Custom instructions for the agent, will be inserted in the system prompt.
max_steps (`int`, default `20`): Maximum number of steps the agent can take to solve the task.
add_base_tools (`bool`, default `False`): Whether to add the base tools to the agent's tools.
verbosity_level (`LogLevel`, default `LogLevel.INFO`): Level of verbosity of the agent's logs.
managed_agents (`list`, *optional*): Managed agents that the agent can call.
step_callbacks (`list[Callable]` | `dict[Type[MemoryStep], Callable | list[Callable]]`, *optional*): Callbacks that will be called at each step.
planning_interval (`int`, *optional*): Interval at which the agent will run a planning step.
name (`str`, *optional*): Necessary for a managed agent only - the name by which this agent can be called.
description (`str`, *optional*): Necessary for a managed agent only - the description of this agent.
provide_run_summary (`bool`, *optional*): Whether to provide a run summary when called as a managed agent.
final_answer_checks (`list[Callable]`, *optional*): List of validation functions to run before accepting a final answer.
Each function should:
- Take the final answer and the agent's memory as arguments.
- Return a boolean indicating whether the final answer is valid.
return_full_result (`bool`, default `False`): Whether to return the full [`RunResult`] object or just the final answer output from the agent run.
"""
def __init__(
self,
tools: list[Tool],
model: Model,
prompt_templates: PromptTemplates | None = None,
instructions: str | None = None,
max_steps: int = 20,
add_base_tools: bool = False,
verbosity_level: LogLevel = LogLevel.INFO,
managed_agents: list | None = None,
step_callbacks: list[Callable] | dict[Type[MemoryStep], Callable | list[Callable]] | None = None,
planning_interval: int | None = None,
name: str | None = None,
description: str | None = None,
provide_run_summary: bool = False,
final_answer_checks: list[Callable] | None = None,
return_full_result: bool = False,
logger: AgentLogger | None = None,
):
self.agent_name = self.__class__.__name__
self.model = model
self.prompt_templates = prompt_templates or EMPTY_PROMPT_TEMPLATES
if prompt_templates is not None:
missing_keys = set(EMPTY_PROMPT_TEMPLATES.keys()) - set(prompt_templates.keys())
assert not missing_keys, (
f"Some prompt templates are missing from your custom `prompt_templates`: {missing_keys}"
)
for key, value in EMPTY_PROMPT_TEMPLATES.items():
if isinstance(value, dict):
for subkey in value.keys():
assert key in prompt_templates.keys() and (subkey in prompt_templates[key].keys()), (
f"Some prompt templates are missing from your custom `prompt_templates`: {subkey} under {key}"
)
self.max_steps = max_steps
self.step_number = 0
self.planning_interval = planning_interval
self.state: dict[str, Any] = {}
self.name = self._validate_name(name)
self.description = description
self.provide_run_summary = provide_run_summary
self.final_answer_checks = final_answer_checks if final_answer_checks is not None else []
self.return_full_result = return_full_result
self.instructions = instructions
self._setup_managed_agents(managed_agents)
self._setup_tools(tools, add_base_tools)
self._validate_tools_and_managed_agents(tools, managed_agents)
self.task: str | None = None
self.memory = AgentMemory(self.system_prompt)
if logger is None:
self.logger = AgentLogger(level=verbosity_level)
else:
self.logger = logger
self.monitor = Monitor(self.model, self.logger)
self._setup_step_callbacks(step_callbacks)
self.stream_outputs = False
@property
def system_prompt(self) -> str:
return self.initialize_system_prompt()
@system_prompt.setter
def system_prompt(self, value: str):
raise AttributeError(
"""The 'system_prompt' property is read-only. Use 'self.prompt_templates["system_prompt"]' instead."""
)
def _validate_name(self, name: str | None) -> str | None:
if name is not None and not is_valid_name(name):
raise ValueError(f"Agent name '{name}' must be a valid Python identifier and not a reserved keyword.")
return name
def _setup_managed_agents(self, managed_agents: list | None = None) -> None:
"""Setup managed agents with proper logging."""
self.managed_agents = {}
if managed_agents:
assert all(agent.name and agent.description for agent in managed_agents), (
"All managed agents need both a name and a description!"
)
self.managed_agents = {agent.name: agent for agent in managed_agents}
# Ensure managed agents can be called as tools by the model: set their inputs and output_type
for agent in self.managed_agents.values():
agent.inputs = {
"task": {"type": "string", "description": "Long detailed description of the task."},
"additional_args": {
"type": "object",
"description": "Dictionary of extra inputs to pass to the managed agent, e.g. images, dataframes, or any other contextual data it may need.",
},
}
agent.output_type = "string"
def _setup_tools(self, tools, add_base_tools):
assert all(isinstance(tool, BaseTool) for tool in tools), (
"All elements must be instance of BaseTool (or a subclass)"
)
self.tools = {tool.name: tool for tool in tools}
if add_base_tools:
self.tools.update(
{
name: cls()
for name, cls in TOOL_MAPPING.items()
if name != "python_interpreter" or self.__class__.__name__ == "ToolCallingAgent"
}
)
self.tools.setdefault("final_answer", FinalAnswerTool())
def _validate_tools_and_managed_agents(self, tools, managed_agents):
tool_and_managed_agent_names = [tool.name for tool in tools]
if managed_agents is not None:
tool_and_managed_agent_names += [agent.name for agent in managed_agents]
if self.name:
tool_and_managed_agent_names.append(self.name)
if len(tool_and_managed_agent_names) != len(set(tool_and_managed_agent_names)):
raise ValueError(
"Each tool or managed_agent should have a unique name! You passed these duplicate names: "
f"{[name for name in tool_and_managed_agent_names if tool_and_managed_agent_names.count(name) > 1]}"
)
def _setup_step_callbacks(self, step_callbacks):
# Initialize step callbacks registry
self.step_callbacks = CallbackRegistry()
if step_callbacks:
# Register callbacks list only for ActionStep for backward compatibility
if isinstance(step_callbacks, list):
for callback in step_callbacks:
self.step_callbacks.register(ActionStep, callback)
# Register callbacks dict for specific step classes
elif isinstance(step_callbacks, dict):
for step_cls, callbacks in step_callbacks.items():
if not isinstance(callbacks, list):
callbacks = [callbacks]
for callback in callbacks:
self.step_callbacks.register(step_cls, callback)
else:
raise ValueError("step_callbacks must be a list or a dict")
# Register monitor update_metrics only for ActionStep for backward compatibility
self.step_callbacks.register(ActionStep, self.monitor.update_metrics)
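`_setup_step_callbacks` normalizes both accepted shapes (a plain list, or a dict keyed by step class) into a registry that dispatches callbacks by step type. A minimal hypothetical registry showing the dispatch idea (not the real `CallbackRegistry`):

```python
from collections import defaultdict

class MiniCallbackRegistry:
    """Hypothetical minimal registry: callbacks grouped per step class."""

    def __init__(self):
        self._callbacks = defaultdict(list)

    def register(self, step_cls, callback):
        self._callbacks[step_cls].append(callback)

    def callback(self, step, **kwargs):
        # Fire only the callbacks registered for this step's exact class.
        for cb in self._callbacks[type(step)]:
            cb(step, **kwargs)

class DemoActionStep: ...
class DemoPlanningStep: ...

fired = []
registry = MiniCallbackRegistry()
registry.register(DemoActionStep, lambda step, **kw: fired.append("action"))
registry.callback(DemoActionStep())
registry.callback(DemoPlanningStep())  # nothing registered for this class
```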
def run(
self,
task: str,
stream: bool = False,
reset: bool = True,
images: list["PIL.Image.Image"] | None = None,
additional_args: dict | None = None,
max_steps: int | None = None,
return_full_result: bool | None = None,
) -> Any | RunResult:
"""
Run the agent for the given task.
Args:
task (`str`): Task to perform.
stream (`bool`): Whether to run in streaming mode.
If `True`, returns a generator that yields each step as it is executed. You must iterate over this generator to process the individual steps (e.g., using a for loop or `next()`).
If `False`, executes all steps internally and returns only the final answer after completion.
reset (`bool`): Whether to reset the conversation or keep it going from previous run.
images (`list[PIL.Image.Image]`, *optional*): Image(s) objects.
additional_args (`dict`, *optional*): Any other variables that you want to pass to the agent run, for instance images or dataframes. Give them clear names!
max_steps (`int`, *optional*): Maximum number of steps the agent can take to solve the task. if not provided, will use the agent's default value.
return_full_result (`bool`, *optional*): Whether to return the full [`RunResult`] object or just the final answer output.
If `None` (default), the agent's `self.return_full_result` setting is used.
Example:
```py
from smolagents import CodeAgent
agent = CodeAgent(tools=[])
agent.run("What is the result of 2 power 3.7384?")
```
"""
max_steps = max_steps or self.max_steps
self.task = task
self.interrupt_switch = False
if additional_args:
self.state.update(additional_args)
self.task += f"""
You have been provided with these additional arguments, that you can access directly using the keys as variables:
{str(additional_args)}."""
self.memory.system_prompt = SystemPromptStep(system_prompt=self.system_prompt)
if reset:
self.memory.reset()
self.monitor.reset()
self.logger.log_task(
content=self.task.strip(),
subtitle=f"{type(self.model).__name__} - {(self.model.model_id if hasattr(self.model, 'model_id') else '')}",
level=LogLevel.INFO,
title=self.name if hasattr(self, "name") else None,
)
self.memory.steps.append(TaskStep(task=self.task, task_images=images))
if getattr(self, "python_executor", None):
self.python_executor.send_variables(variables=self.state)
self.python_executor.send_tools({**self.tools, **self.managed_agents})
if stream:
# The steps are returned as they are executed through a generator to iterate on.
return self._run_stream(task=self.task, max_steps=max_steps, images=images)
run_start_time = time.time()
steps = list(self._run_stream(task=self.task, max_steps=max_steps, images=images))
# Outputs are returned only at the end. We only look at the last step.
assert isinstance(steps[-1], FinalAnswerStep)
output = steps[-1].output
return_full_result = return_full_result if return_full_result is not None else self.return_full_result
if return_full_result:
total_input_tokens = 0
total_output_tokens = 0
correct_token_usage = True
for step in self.memory.steps:
if isinstance(step, (ActionStep, PlanningStep)):
if step.token_usage is None:
correct_token_usage = False
break
else:
total_input_tokens += step.token_usage.input_tokens
total_output_tokens += step.token_usage.output_tokens
if correct_token_usage:
token_usage = TokenUsage(input_tokens=total_input_tokens, output_tokens=total_output_tokens)
else:
token_usage = None
if self.memory.steps and isinstance(getattr(self.memory.steps[-1], "error", None), AgentMaxStepsError):
state = "max_steps_error"
else:
state = "success"
step_dicts = self.memory.get_full_steps()
return RunResult(
output=output,
token_usage=token_usage,
steps=step_dicts,
timing=Timing(start_time=run_start_time, end_time=time.time()),
state=state,
)
return output
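`run()` sums token usage across `ActionStep` and `PlanningStep` entries, but reports `None` as soon as any step lacks usage data. The same all-or-nothing aggregation on a toy usage type (hypothetical names):

```python
from dataclasses import dataclass

@dataclass
class DemoTokenUsage:
    input_tokens: int
    output_tokens: int

def aggregate_usage(usages):
    """Sum usage across steps; return None if any step is missing usage data."""
    total_in = total_out = 0
    for usage in usages:
        if usage is None:
            return None
        total_in += usage.input_tokens
        total_out += usage.output_tokens
    return DemoTokenUsage(total_in, total_out)

complete = aggregate_usage([DemoTokenUsage(10, 5), DemoTokenUsage(7, 3)])
incomplete = aggregate_usage([DemoTokenUsage(10, 5), None])
```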
def _run_stream(
self, task: str, max_steps: int, images: list["PIL.Image.Image"] | None = None
) -> Generator[ActionStep | PlanningStep | FinalAnswerStep | ChatMessageStreamDelta]:
self.step_number = 1
returned_final_answer = False
while not returned_final_answer and self.step_number <= max_steps:
if self.interrupt_switch:
raise AgentError("Agent interrupted.", self.logger)
# Run a planning step if scheduled
if self.planning_interval is not None and (
self.step_number == 1 or (self.step_number - 1) % self.planning_interval == 0
):
planning_start_time = time.time()
planning_step = None
for element in self._generate_planning_step(
task, is_first_step=len(self.memory.steps) == 1, step=self.step_number
): # Don't use the attribute step_number here, because there can be steps from previous runs
yield element
planning_step = element
assert isinstance(planning_step, PlanningStep) # Last yielded element should be a PlanningStep
planning_end_time = time.time()
planning_step.timing = Timing(
start_time=planning_start_time,
end_time=planning_end_time,
)
self._finalize_step(planning_step)
self.memory.steps.append(planning_step)
# Start action step!
action_step_start_time = time.time()
action_step = ActionStep(
step_number=self.step_number,
timing=Timing(start_time=action_step_start_time),
observations_images=images,
)
self.logger.log_rule(f"Step {self.step_number}", level=LogLevel.INFO)
try:
for output in self._step_stream(action_step):
# Yield all
yield output
if isinstance(output, ActionOutput) and output.is_final_answer:
final_answer = output.output
self.logger.log(
Text(f"Final answer: {final_answer}", style=f"bold {YELLOW_HEX}"),
level=LogLevel.INFO,
)
if self.final_answer_checks:
self._validate_final_answer(final_answer)
returned_final_answer = True
action_step.is_final_answer = True
except AgentGenerationError as e:
# Agent generation errors are not caused by a Model error but an implementation error: so we should raise them and exit.
raise e
except AgentError as e:
# Other AgentError types are caused by the Model, so we should log them and iterate.
action_step.error = e
finally:
self._finalize_step(action_step)
self.memory.steps.append(action_step)
yield action_step
self.step_number += 1
if not returned_final_answer and self.step_number == max_steps + 1:
final_answer = self._handle_max_steps_reached(task)
yield action_step
yield FinalAnswerStep(handle_agent_output_types(final_answer))
def _validate_final_answer(self, final_answer: Any):
for check_function in self.final_answer_checks:
try:
assert check_function(final_answer, self.memory)
except Exception as e:
raise AgentError(f"Check {check_function.__name__} failed with error: {e}", self.logger)
def _finalize_step(self, memory_step: ActionStep | PlanningStep):
memory_step.timing.end_time = time.time()
self.step_callbacks.callback(memory_step, agent=self)
def _handle_max_steps_reached(self, task: str) -> Any:
action_step_start_time = time.time()
final_answer = self.provide_final_answer(task)
final_memory_step = ActionStep(
step_number=self.step_number,
error=AgentMaxStepsError("Reached max steps.", self.logger),
timing=Timing(start_time=action_step_start_time, end_time=time.time()),
token_usage=final_answer.token_usage,
)
final_memory_step.action_output = final_answer.content
self._finalize_step(final_memory_step)
self.memory.steps.append(final_memory_step)
return final_answer.content
def _generate_planning_step(
self, task, is_first_step: bool, step: int
) -> Generator[ChatMessageStreamDelta | PlanningStep]:
start_time = time.time()
if is_first_step:
input_messages = [
ChatMessage(
role=MessageRole.USER,
content=[
{
"type": "text",
"text": populate_template(
self.prompt_templates["planning"]["initial_plan"],
variables={"task": task, "tools": self.tools, "managed_agents": self.managed_agents},
),
}
],
)
]
if self.stream_outputs and hasattr(self.model, "generate_stream"):
plan_message_content = ""
output_stream = self.model.generate_stream(input_messages, stop_sequences=["<end_plan>"]) # type: ignore
input_tokens, output_tokens = 0, 0
with Live("", console=self.logger.console, vertical_overflow="visible") as live:
for event in output_stream:
if event.content is not None:
plan_message_content += event.content
live.update(Markdown(plan_message_content))
if event.token_usage:
output_tokens += event.token_usage.output_tokens
input_tokens = event.token_usage.input_tokens
yield event
else:
plan_message = self.model.generate(input_messages, stop_sequences=["<end_plan>"])
plan_message_content = plan_message.content
input_tokens, output_tokens = (
(
plan_message.token_usage.input_tokens,
plan_message.token_usage.output_tokens,
)
if plan_message.token_usage
else (None, None)
)
plan = textwrap.dedent(
f"""Here are the facts I know and the plan of action that I will follow to solve the task:\n```\n{plan_message_content}\n```"""
)
else:
# Summary mode removes the system prompt and previous planning messages output by the model.
# Removing previous planning messages avoids influencing too much the new plan.
memory_messages = self.write_memory_to_messages(summary_mode=True)
plan_update_pre = ChatMessage(
role=MessageRole.SYSTEM,
content=[
{
"type": "text",
"text": populate_template(
self.prompt_templates["planning"]["update_plan_pre_messages"], variables={"task": task}
),
}
],
)
plan_update_post = ChatMessage(
role=MessageRole.USER,
content=[
{
"type": "text",
"text": populate_template(
self.prompt_templates["planning"]["update_plan_post_messages"],
variables={
"task": task,
"tools": self.tools,
"managed_agents": self.managed_agents,
"remaining_steps": (self.max_steps - step),
},
),
}
],
)
input_messages = [plan_update_pre] + memory_messages + [plan_update_post]
if self.stream_outputs and hasattr(self.model, "generate_stream"):
plan_message_content = ""
input_tokens, output_tokens = 0, 0
with Live("", console=self.logger.console, vertical_overflow="visible") as live:
for event in self.model.generate_stream(
input_messages,
stop_sequences=["<end_plan>"],
): # type: ignore
if event.content is not None:
plan_message_content += event.content
live.update(Markdown(plan_message_content))
if event.token_usage:
output_tokens += event.token_usage.output_tokens
input_tokens = event.token_usage.input_tokens
yield event
else:
plan_message = self.model.generate(input_messages, stop_sequences=["<end_plan>"])
plan_message_content = plan_message.content
if plan_message.token_usage is not None:
input_tokens, output_tokens = (
plan_message.token_usage.input_tokens,
plan_message.token_usage.output_tokens,
)
plan = textwrap.dedent(
f"""I still need to solve the task I was given:\n```\n{self.task}\n```\n\nHere are the facts I know and my new/updated plan of action to solve the task:\n```\n{plan_message_content}\n```"""
)
log_headline = "Initial plan" if is_first_step else "Updated plan"
self.logger.log(Rule(f"[bold]{log_headline}", style="orange"), Text(plan), level=LogLevel.INFO)
yield PlanningStep(
model_input_messages=input_messages,
plan=plan,
model_output_message=ChatMessage(role=MessageRole.ASSISTANT, content=plan_message_content),
token_usage=TokenUsage(input_tokens=input_tokens, output_tokens=output_tokens),
timing=Timing(start_time=start_time, end_time=time.time()),
)
@abstractmethod
def initialize_system_prompt(self) -> str:
"""To be implemented in child classes"""
...
def interrupt(self):
"""Interrupts the agent execution."""
self.interrupt_switch = True
def write_memory_to_messages(
self,
summary_mode: bool = False,
) -> list[ChatMessage]:
"""
Reads past llm_outputs, actions, and observations or errors from the memory into a series of messages
that can be used as input to the LLM. Adds a number of keywords (such as PLAN, error, etc) to help
the LLM.
"""
messages = self.memory.system_prompt.to_messages(summary_mode=summary_mode)
for memory_step in self.memory.steps:
messages.extend(memory_step.to_messages(summary_mode=summary_mode))
return messages
def _step_stream(
self, memory_step: ActionStep
) -> Generator[ChatMessageStreamDelta | ToolCall | ToolOutput | ActionOutput]:
"""
Perform one step in the ReAct framework: the agent thinks, acts, and observes the result.
Yields ChatMessageStreamDelta during the run if streaming is enabled.
At the end, yields either None if the step is not final, or the final answer.
"""
raise NotImplementedError("This method should be implemented in child classes")
def step(self, memory_step: ActionStep) -> Any:
"""
Perform one step in the ReAct framework: the agent thinks, acts, and observes the result.
Returns either None if the step is not final, or the final answer.
"""
return list(self._step_stream(memory_step))[-1]
def extract_action(self, model_output: str, split_token: str) -> tuple[str, str]:
"""
Parse action from the LLM output
Args:
model_output (`str`): Output of the LLM
split_token (`str`): Separator for the action. Should match the example in the system prompt.
"""
try:
split = model_output.split(split_token)
rationale, action = (
split[-2],
split[-1],
) # NOTE: using indexes starting from the end solves for when you have more than one split_token in the output
except Exception:
raise AgentParsingError(
f"No '{split_token}' token provided in your output.\nYour output:\n{model_output}\nBe sure to include an action, prefaced with '{split_token}'!",
self.logger,
)
return rationale.strip(), action.strip()
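# As a standalone illustration of the parsing rule above (negative indexing
# tolerates repeated split tokens), here is a minimal sketch with a
# hypothetical "Action:" marker:

```python
def split_rationale_action(model_output: str, split_token: str) -> tuple[str, str]:
    # Keep the last two segments: extra occurrences of the split token
    # earlier in the output then cannot shift which part is the action.
    parts = model_output.split(split_token)
    if len(parts) < 2:
        raise ValueError(f"No '{split_token}' token found in output")
    return parts[-2].strip(), parts[-1].strip()

text = "Thought: do X\nAction:\nsearch(query)\nAction:\nfinal(result)"
rationale, action = split_rationale_action(text, "Action:")
```

# With two "Action:" markers in the output, the parsed action is the text
# after the last one, matching the NOTE in the method above.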
def provide_final_answer(self, task: str) -> ChatMessage:
"""
Provide the final answer to the task, based on the logs of the agent's interactions.
Args:
task (`str`): Task to perform.
Returns:
`ChatMessage`: Final answer to the task.
"""
messages = [
ChatMessage(
role=MessageRole.SYSTEM,
content=[
{
"type": "text",
"text": self.prompt_templates["final_answer"]["pre_messages"],
}
],
)
]
messages += self.write_memory_to_messages()[1:]
messages.append(
ChatMessage(
role=MessageRole.USER,
content=[
{
"type": "text",
"text": populate_template(
self.prompt_templates["final_answer"]["post_messages"], variables={"task": task}
),
}
],
)
)
try:
chat_message: ChatMessage = self.model.generate(messages)
return chat_message
except Exception as e:
return ChatMessage(
role=MessageRole.ASSISTANT,
content=[{"type": "text", "text": f"Error in generating final LLM output: {e}"}],
)
def visualize(self):
"""Creates a rich tree visualization of the agent's structure."""
self.logger.visualize_agent_tree(self)
def replay(self, detailed: bool = False):
"""Prints a pretty replay of the agent's steps.
Args:
detailed (bool, optional): If True, also displays the memory at each step. Defaults to False.
Careful: will significantly increase log length. Use only for debugging.
"""
self.memory.replay(self.logger, detailed=detailed)
def __call__(self, task: str, **kwargs):
"""Adds additional prompting for the managed agent, runs it, and wraps the output.
This method is called only by a managed agent.
"""
full_task = populate_template(
self.prompt_templates["managed_agent"]["task"],
variables=dict(name=self.name, task=task),
)
result = self.run(full_task, **kwargs)
if isinstance(result, RunResult):
report = result.output
else:
report = result
answer = populate_template(
self.prompt_templates["managed_agent"]["report"], variables=dict(name=self.name, final_answer=report)
)
if self.provide_run_summary:
answer += "\n\nFor more detail, find below a summary of this agent's work:\n<summary_of_work>\n"
for message in self.write_memory_to_messages(summary_mode=True):
content = message.content
answer += "\n" + truncate_content(str(content)) + "\n---"
answer += "\n</summary_of_work>"
return answer
def save(self, output_dir: str | Path, relative_path: str | None = None):
"""
Saves the relevant code files for your agent. This will copy the code of your agent in `output_dir` as well as autogenerate:
- a `tools` folder containing the logic for each of the tools under `tools/{tool_name}.py`.
- a `managed_agents` folder containing the logic for each of the managed agents.
- an `agent.json` file containing a dictionary representing your agent.
- a `prompt.yaml` file containing the prompt templates used by your agent.
- an `app.py` file providing a UI for your agent when it is exported to a Space with `agent.push_to_hub()`
- a `requirements.txt` containing the names of the modules used by your tool (as detected when inspecting its
code)
Args:
output_dir (`str` or `Path`): The folder in which you want to save your agent.
"""
make_init_file(output_dir)
# Recursively save managed agents
if self.managed_agents:
make_init_file(os.path.join(output_dir, "managed_agents"))
for agent_name, agent in self.managed_agents.items():
agent_suffix = f"managed_agents.{agent_name}"
if relative_path:
agent_suffix = relative_path + "." + agent_suffix
agent.save(os.path.join(output_dir, "managed_agents", agent_name), relative_path=agent_suffix)
class_name = self.__class__.__name__
# Save tools to different .py files
for tool in self.tools.values():
make_init_file(os.path.join(output_dir, "tools"))
tool.save(os.path.join(output_dir, "tools"), tool_file_name=tool.name, make_gradio_app=False)
# Save prompts to yaml
yaml_prompts = yaml.safe_dump(
self.prompt_templates,
default_style="|", # This forces block literals for all strings
default_flow_style=False,
width=float("inf"),
sort_keys=False,
allow_unicode=True,
indent=2,
)
with open(os.path.join(output_dir, "prompts.yaml"), "w", encoding="utf-8") as f:
f.write(yaml_prompts)
# Save agent dictionary to json
agent_dict = self.to_dict()
agent_dict["tools"] = [tool.name for tool in self.tools.values()]
agent_dict["managed_agents"] = {agent.name: agent.__class__.__name__ for agent in self.managed_agents.values()}
with open(os.path.join(output_dir, "agent.json"), "w", encoding="utf-8") as f:
json.dump(agent_dict, f, indent=4)
# Save requirements
with open(os.path.join(output_dir, "requirements.txt"), "w", encoding="utf-8") as f:
f.writelines(f"{r}\n" for r in agent_dict["requirements"])
# Make app.py file with Gradio UI
agent_name = f"agent_{self.name}" if getattr(self, "name", None) else "agent"
managed_agent_relative_path = relative_path + "." if relative_path is not None else ""
app_template = create_agent_gradio_app_template()
# Render the app.py file from Jinja2 template
app_text = app_template.render(
{
"agent_name": agent_name,
"class_name": class_name,
"agent_dict": agent_dict,
"tools": self.tools,
"managed_agents": self.managed_agents,
"managed_agent_relative_path": managed_agent_relative_path,
}
)
with open(os.path.join(output_dir, "app.py"), "w", encoding="utf-8") as f:
f.write(app_text + "\n") # Append newline at the end
def to_dict(self) -> dict[str, Any]:
"""Convert the agent to a dictionary representation.
Returns:
`dict`: Dictionary representation of the agent.
"""
# TODO: handle serializing step_callbacks and final_answer_checks
for attr in ["final_answer_checks", "step_callbacks"]:
if getattr(self, attr, None):
self.logger.log(f"This agent has {attr}: they will be ignored by this method.", LogLevel.INFO)
tool_dicts = [tool.to_dict() for tool in self.tools.values()]
tool_requirements = {req for tool in self.tools.values() for req in tool.to_dict()["requirements"]}
managed_agents_requirements = {
req for managed_agent in self.managed_agents.values() for req in managed_agent.to_dict()["requirements"]
}
requirements = tool_requirements | managed_agents_requirements
if hasattr(self, "authorized_imports"):
requirements.update(
{package.split(".")[0] for package in self.authorized_imports if package not in BASE_BUILTIN_MODULES}
)
agent_dict = {
"class": self.__class__.__name__,
"tools": tool_dicts,
"model": {
"class": self.model.__class__.__name__,
"data": self.model.to_dict(),
},
"managed_agents": [managed_agent.to_dict() for managed_agent in self.managed_agents.values()],
"prompt_templates": self.prompt_templates,
"max_steps": self.max_steps,
"verbosity_level": int(self.logger.level),
"planning_interval": self.planning_interval,
"name": self.name,
"description": self.description,
"requirements": sorted(requirements),
}
return agent_dict
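# The requirements computation above (union of per-tool and per-managed-agent
# requirement sets) can be sketched in isolation with hypothetical tool dicts:

```python
def collect_requirements(tool_dicts: list[dict], agent_dicts: list[dict]) -> list[str]:
    # Set unions de-duplicate packages shared between tools and sub-agents
    tool_reqs = {req for tool in tool_dicts for req in tool["requirements"]}
    agent_reqs = {req for agent in agent_dicts for req in agent["requirements"]}
    return sorted(tool_reqs | agent_reqs)

reqs = collect_requirements(
    [{"requirements": ["requests", "numpy"]}],
    [{"requirements": ["numpy", "pandas"]}],
)
# → ["numpy", "pandas", "requests"]
```

# Sorting keeps the generated requirements.txt deterministic across runs.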
@classmethod
def from_dict(cls, agent_dict: dict[str, Any], **kwargs) -> "MultiStepAgent":
"""Create agent from a dictionary representation.
Args:
agent_dict (`dict[str, Any]`): Dictionary representation of the agent.
**kwargs: Additional keyword arguments that will override agent_dict values.
Returns:
`MultiStepAgent`: Instance of the agent class.
"""
# Load model
model_info = agent_dict["model"]
model_class = getattr(importlib.import_module("smolagents.models"), model_info["class"])
model = model_class.from_dict(model_info["data"])
# Load tools
tools = []
for tool_info in agent_dict["tools"]:
tools.append(Tool.from_code(tool_info["code"]))
# Load managed agents
managed_agents = []
for managed_agent_dict in agent_dict["managed_agents"]:
agent_class = getattr(importlib.import_module("smolagents.agents"), managed_agent_dict["class"])
managed_agent = agent_class.from_dict(managed_agent_dict, **kwargs)
managed_agents.append(managed_agent)
# Extract base agent parameters
agent_args = {
"model": model,
"tools": tools,
"managed_agents": managed_agents,
"prompt_templates": agent_dict.get("prompt_templates"),
"max_steps": agent_dict.get("max_steps"),
"verbosity_level": agent_dict.get("verbosity_level"),
"planning_interval": agent_dict.get("planning_interval"),
"name": agent_dict.get("name"),
"description": agent_dict.get("description"),
}
# Filter out None values to use defaults from __init__
agent_args = {k: v for k, v in agent_args.items() if v is not None}
# Update with any additional kwargs
agent_args.update(kwargs)
# Create agent instance
return cls(**agent_args)
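# The None-filtering step above (drop unset keys so __init__ defaults apply,
# then let caller kwargs win) can be sketched on its own with hypothetical keys:

```python
def build_init_args(serialized: dict, **overrides) -> dict:
    # Drop None values so the constructor's own defaults take effect,
    # then apply explicit overrides last so they always win.
    args = {k: v for k, v in serialized.items() if v is not None}
    args.update(overrides)
    return args

args = build_init_args({"max_steps": None, "name": "coder"}, verbosity_level=2)
# → {"name": "coder", "verbosity_level": 2}
```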
@classmethod
def from_hub(
cls,
repo_id: str,
token: str | None = None,
trust_remote_code: bool = False,
**kwargs,
):
"""
Loads an agent defined on the Hub.
<Tip warning={true}>
Loading an agent from the Hub means that you'll download the agent and execute it locally.
ALWAYS inspect the agent you're downloading before loading it within your runtime, as you would do when
installing a package using pip/npm/apt.
</Tip>
Args:
repo_id (`str`):
The name of the repo on the Hub where your agent is defined.
token (`str`, *optional*):
The token to identify you on hf.co. If unset, will use the token generated when running
`huggingface-cli login` (stored in `~/.huggingface`).
trust_remote_code (`bool`, *optional*, defaults to False):
This flag marks that you understand the risk of running remote code and that you trust this agent.
If not set to True, loading the agent from the Hub will fail.
kwargs (additional keyword arguments, *optional*):
Additional keyword arguments that will be split in two: all arguments relevant to the Hub (such as
`cache_dir`, `revision`, `subfolder`) will be used when downloading the files for your agent, and the
others will be passed along to its init.
"""
if not trust_remote_code:
raise ValueError(
"Loading an agent from the Hub requires you to acknowledge that you trust its code: to do so, pass `trust_remote_code=True`."
)
# Get the agent's Hub folder.
download_kwargs = {"token": token, "repo_type": "space"} | {
key: kwargs.pop(key)
for key in [
"cache_dir",
"force_download",
"proxies",
"revision",
"local_files_only",
]
if key in kwargs
}
download_folder = Path(snapshot_download(repo_id=repo_id, **download_kwargs))
return cls.from_folder(download_folder, **kwargs)
@classmethod
def from_folder(cls, folder: str | Path, **kwargs):
"""Loads an agent from a local folder.
Args:
folder (`str` or `Path`): The folder where the agent is saved.
**kwargs: Additional keyword arguments that will be passed to the agent's init.
"""
# Load agent.json
folder = Path(folder)
agent_dict = json.loads((folder / "agent.json").read_text())
# Load managed agents from their respective folders, recursively
managed_agents = []
for managed_agent_name, managed_agent_class_name in agent_dict["managed_agents"].items():
agent_cls = getattr(importlib.import_module("smolagents.agents"), managed_agent_class_name)
managed_agents.append(agent_cls.from_folder(folder / "managed_agents" / managed_agent_name))
agent_dict["managed_agents"] = {}
# Load tools
tools = []
for tool_name in agent_dict["tools"]:
tool_code = (folder / "tools" / f"{tool_name}.py").read_text()
tools.append({"name": tool_name, "code": tool_code})
agent_dict["tools"] = tools
# Add managed agents to kwargs to override the empty list in from_dict
if managed_agents:
kwargs["managed_agents"] = managed_agents
return cls.from_dict(agent_dict, **kwargs)
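# The folder layout consumed above (an agent.json plus one source file per tool
# under tools/) can be sketched with a throwaway directory and a hypothetical
# "echo" tool:

```python
import json
import tempfile
from pathlib import Path

def load_agent_folder(folder: Path) -> dict:
    # Read the serialized agent, then inline each tool's source file
    # so from_dict-style consumers receive {"name", "code"} entries.
    agent_dict = json.loads((folder / "agent.json").read_text())
    agent_dict["tools"] = [
        {"name": name, "code": (folder / "tools" / f"{name}.py").read_text()}
        for name in agent_dict["tools"]
    ]
    return agent_dict

with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "tools").mkdir()
    (root / "agent.json").write_text(json.dumps({"class": "CodeAgent", "tools": ["echo"]}))
    (root / "tools" / "echo.py").write_text("def echo(x):\n    return x\n")
    loaded = load_agent_folder(root)
```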
def push_to_hub(
self,
repo_id: str,
commit_message: str = "Upload agent",
private: bool | None = None,
token: bool | str | None = None,
create_pr: bool = False,
) -> str:
"""
Upload the agent to the Hub.
Parameters:
repo_id (`str`):
The name of the repository you want to push to. It should contain your organization name when
pushing to a given organization.
commit_message (`str`, *optional*, defaults to `"Upload agent"`):
Message to commit while pushing.
private (`bool`, *optional*, defaults to `None`):
Whether to make the repo private. If `None`, the repo will be public unless the organization's default is private. This value is ignored if the repo already exists.
token (`bool` or `str`, *optional*):
The token to use as HTTP bearer authorization for remote files. If unset, will use the token generated
when running `huggingface-cli login` (stored in `~/.huggingface`).
create_pr (`bool`, *optional*, defaults to `False`):
Whether to create a PR with the uploaded files or directly commit.
"""
repo_url = create_repo(
repo_id=repo_id,
token=token,
private=private,
exist_ok=True,
repo_type="space",
space_sdk="gradio",
)
repo_id = repo_url.repo_id
metadata_update(
repo_id,
{"tags": ["smolagents", "agent"]},
repo_type="space",
token=token,
overwrite=True,
)
with tempfile.TemporaryDirectory() as work_dir:
self.save(work_dir)
logger.info(f"Uploading the following files to {repo_id}: {','.join(os.listdir(work_dir))}")
return upload_folder(
repo_id=repo_id,
commit_message=commit_message,
folder_path=work_dir,
token=token,
create_pr=create_pr,
repo_type="space",
)
class ToolCallingAgent(MultiStepAgent):
"""
This agent uses JSON-like tool calls, leveraging the LLM engine's native tool-calling capabilities.
Args:
tools (`list[Tool]`): [`Tool`]s that the agent can use.
model (`Model`): Model that will generate the agent's actions.
prompt_templates ([`~agents.PromptTemplates`], *optional*): Prompt templates.
planning_interval (`int`, *optional*): Interval at which the agent will run a planning step.
stream_outputs (`bool`, *optional*, default `False`): Whether to stream outputs during execution.
max_tool_threads (`int`, *optional*): Maximum number of threads for parallel tool calls.
Higher values increase concurrency but resource usage as well.
Defaults to `ThreadPoolExecutor`'s default.
**kwargs: Additional keyword arguments.
"""
def __init__(
self,
tools: list[Tool],
model: Model,
prompt_templates: PromptTemplates | None = None,
planning_interval: int | None = None,
stream_outputs: bool = False,
max_tool_threads: int | None = None,
**kwargs,
):
prompt_templates = prompt_templates or yaml.safe_load(
importlib.resources.files("smolagents.prompts").joinpath("toolcalling_agent.yaml").read_text()
)
super().__init__(
tools=tools,
model=model,
prompt_templates=prompt_templates,
planning_interval=planning_interval,
**kwargs,
)
# Streaming setup
self.stream_outputs = stream_outputs
if self.stream_outputs and not hasattr(self.model, "generate_stream"):
raise ValueError(
"`stream_outputs` is set to True, but the model class implements no `generate_stream` method."
)
# Tool calling setup
self.max_tool_threads = max_tool_threads
@property
def tools_and_managed_agents(self):
"""Returns a combined list of tools and managed agents."""
return list(self.tools.values()) + list(self.managed_agents.values())
def initialize_system_prompt(self) -> str:
system_prompt = populate_template(
self.prompt_templates["system_prompt"],
variables={
"tools": self.tools,
"managed_agents": self.managed_agents,
"custom_instructions": self.instructions,
},
)
return system_prompt
def _step_stream(
self, memory_step: ActionStep
) -> Generator[ChatMessageStreamDelta | ToolCall | ToolOutput | ActionOutput]:
"""
Perform one step in the ReAct framework: the agent thinks, acts, and observes the result.
Yields ChatMessageStreamDelta during the run if streaming is enabled.
At the end, yields either None if the step is not final, or the final answer.
"""
memory_messages = self.write_memory_to_messages()
input_messages = memory_messages.copy()
# Add new step in logs
memory_step.model_input_messages = input_messages
try:
if self.stream_outputs and hasattr(self.model, "generate_stream"):
output_stream = self.model.generate_stream(
input_messages,
stop_sequences=["Observation:", "Calling tools:"],
tools_to_call_from=self.tools_and_managed_agents,
)
chat_message_stream_deltas: list[ChatMessageStreamDelta] = []
with Live("", console=self.logger.console, vertical_overflow="visible") as live:
for event in output_stream:
chat_message_stream_deltas.append(event)
live.update(
Markdown(agglomerate_stream_deltas(chat_message_stream_deltas).render_as_markdown())
)
yield event
chat_message = agglomerate_stream_deltas(chat_message_stream_deltas)
else:
chat_message: ChatMessage = self.model.generate(
input_messages,
stop_sequences=["Observation:", "Calling tools:"],
tools_to_call_from=self.tools_and_managed_agents,
)
if chat_message.content is None and chat_message.raw is not None:
log_content = str(chat_message.raw)
else:
log_content = str(chat_message.content or "")
self.logger.log_markdown(
content=log_content,
title="Output message of the LLM:",
level=LogLevel.DEBUG,
)
# Record model output
memory_step.model_output_message = chat_message
memory_step.model_output = chat_message.content
memory_step.token_usage = chat_message.token_usage
except Exception as e:
raise AgentGenerationError(f"Error while generating output:\n{e}", self.logger) from e
if chat_message.tool_calls is None or len(chat_message.tool_calls) == 0:
try:
chat_message = self.model.parse_tool_calls(chat_message)
except Exception as e:
raise AgentParsingError(f"Error while parsing tool call from model output: {e}", self.logger)
else:
for tool_call in chat_message.tool_calls:
tool_call.function.arguments = parse_json_if_needed(tool_call.function.arguments)
final_answer, got_final_answer = None, False
for output in self.process_tool_calls(chat_message, memory_step):
yield output
if isinstance(output, ToolOutput):
if output.is_final_answer:
if len(chat_message.tool_calls) > 1:
raise AgentExecutionError(
"If you want to return an answer, please do not perform any other tool calls than the final answer tool call!",
self.logger,
)
if got_final_answer:
raise AgentToolExecutionError(
"You returned multiple final answers. Please return only one single final answer!",
self.logger,
)
final_answer = output.output
got_final_answer = True
# Manage state variables
if isinstance(final_answer, str) and final_answer in self.state.keys():
final_answer = self.state[final_answer]
yield ActionOutput(
output=final_answer,
is_final_answer=got_final_answer,
)
def process_tool_calls(
self, chat_message: ChatMessage, memory_step: ActionStep
) -> Generator[ToolCall | ToolOutput]:
"""Process tool calls from the model output and update agent memory.
Args:
chat_message (`ChatMessage`): Chat message containing tool calls from the model.
memory_step (`ActionStep`): Memory ActionStep to update with results.
Yields:
`ToolCall | ToolOutput`: The tool call or tool output.
"""
parallel_calls: dict[str, ToolCall] = {}
assert chat_message.tool_calls is not None
for chat_tool_call in chat_message.tool_calls:
tool_call = ToolCall(
name=chat_tool_call.function.name, arguments=chat_tool_call.function.arguments, id=chat_tool_call.id
)
yield tool_call
parallel_calls[tool_call.id] = tool_call
# Helper function to process a single tool call
def process_single_tool_call(tool_call: ToolCall) -> ToolOutput:
tool_name = tool_call.name
tool_arguments = tool_call.arguments or {}
self.logger.log(
Panel(Text(f"Calling tool: '{tool_name}' with arguments: {tool_arguments}")),
level=LogLevel.INFO,
)
tool_call_result = self.execute_tool_call(tool_name, tool_arguments)
tool_call_result_type = type(tool_call_result)
if tool_call_result_type in [AgentImage, AgentAudio]:
if tool_call_result_type == AgentImage:
observation_name = "image.png"
elif tool_call_result_type == AgentAudio:
observation_name = "audio.mp3"
# TODO: tool_call_result naming could allow for different names of same type
self.state[observation_name] = tool_call_result
observation = f"Stored '{observation_name}' in memory."
else:
observation = str(tool_call_result).strip()
self.logger.log(
f"Observations: {observation.replace('[', '|')}", # escape potential rich-tag-like components
level=LogLevel.INFO,
)
is_final_answer = tool_name == "final_answer"
return ToolOutput(
id=tool_call.id,
output=tool_call_result,
is_final_answer=is_final_answer,
observation=observation,
tool_call=tool_call,
)
# Process tool calls in parallel
outputs = {}
if len(parallel_calls) == 1:
# If there's only one call, process it directly
tool_call = list(parallel_calls.values())[0]
tool_output = process_single_tool_call(tool_call)
outputs[tool_output.id] = tool_output
yield tool_output
else:
# If multiple tool calls, process them in parallel
with ThreadPoolExecutor(self.max_tool_threads) as executor:
futures = [
executor.submit(process_single_tool_call, tool_call) for tool_call in parallel_calls.values()
]
for future in as_completed(futures):
tool_output = future.result()
outputs[tool_output.id] = tool_output
yield tool_output
memory_step.tool_calls = [parallel_calls[k] for k in sorted(parallel_calls.keys())]
memory_step.observations = memory_step.observations or ""
for tool_output in [outputs[k] for k in sorted(outputs.keys())]:
memory_step.observations += tool_output.observation + "\n"
memory_step.observations = (
memory_step.observations.rstrip("\n") if memory_step.observations else memory_step.observations
)
def _substitute_state_variables(self, arguments: dict[str, str] | str) -> dict[str, Any] | str:
"""Replace string values in arguments with their corresponding state values if they exist."""
if isinstance(arguments, dict):
return {
key: self.state.get(value, value) if isinstance(value, str) else value
for key, value in arguments.items()
}
return arguments
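# A self-contained sketch of the substitution rule above, with a hypothetical
# state entry produced by an earlier image tool:

```python
def substitute_state_variables(arguments, state: dict):
    # String values naming a state entry are swapped for the stored object;
    # non-strings and unknown names pass through unchanged.
    if isinstance(arguments, dict):
        return {
            key: state.get(value, value) if isinstance(value, str) else value
            for key, value in arguments.items()
        }
    return arguments

state = {"image.png": b"<raw image bytes>"}
resolved = substitute_state_variables({"input": "image.png", "scale": 2}, state)
# resolved["input"] is the stored bytes object; "scale" is untouched
```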
def execute_tool_call(self, tool_name: str, arguments: dict[str, str] | str) -> Any:
"""
Execute a tool or managed agent with the provided arguments.
The arguments are replaced with the actual values from the state if they refer to state variables.
Args:
tool_name (`str`): Name of the tool or managed agent to execute.
arguments (dict[str, str] | str): Arguments passed to the tool call.
"""
# Check if the tool exists
available_tools = {**self.tools, **self.managed_agents}
if tool_name not in available_tools:
raise AgentToolExecutionError(
f"Unknown tool {tool_name}, should be one of: {', '.join(available_tools)}.", self.logger
)
# Get the tool and substitute state variables in arguments
tool = available_tools[tool_name]
arguments = self._substitute_state_variables(arguments)
is_managed_agent = tool_name in self.managed_agents
try:
validate_tool_arguments(tool, arguments)
except (ValueError, TypeError) as e:
raise AgentToolCallError(str(e), self.logger) from e
except Exception as e:
error_msg = f"Error executing tool '{tool_name}' with arguments {str(arguments)}: {type(e).__name__}: {e}"
raise AgentToolExecutionError(error_msg, self.logger) from e
try:
# Call tool with appropriate arguments
if isinstance(arguments, dict):
return tool(**arguments) if is_managed_agent else tool(**arguments, sanitize_inputs_outputs=True)
else:
return tool(arguments) if is_managed_agent else tool(arguments, sanitize_inputs_outputs=True)
except Exception as e:
# Handle execution errors
if is_managed_agent:
error_msg = (
f"Error executing request to team member '{tool_name}' with arguments {str(arguments)}: {e}\n"
"Please try again or request to another team member"
)
else:
error_msg = (
f"Error executing tool '{tool_name}' with arguments {str(arguments)}: {type(e).__name__}: {e}\n"
"Please try again or use another tool"
)
raise AgentToolExecutionError(error_msg, self.logger) from e
class CodeAgent(MultiStepAgent):
"""
In this agent, the tool calls will be formulated by the LLM in code format, then parsed and executed.
Args:
tools (`list[Tool]`): [`Tool`]s that the agent can use.
model (`Model`): Model that will generate the agent's actions.
prompt_templates ([`~agents.PromptTemplates`], *optional*): Prompt templates.
additional_authorized_imports (`list[str]`, *optional*): Additional authorized imports for the agent.
planning_interval (`int`, *optional*): Interval at which the agent will run a planning step.
executor_type (`Literal["local", "e2b", "docker", "wasm"]`, default `"local"`): Type of code executor.
executor_kwargs (`dict`, *optional*): Additional arguments to pass to initialize the executor.
max_print_outputs_length (`int`, *optional*): Maximum length of the print outputs.
stream_outputs (`bool`, *optional*, default `False`): Whether to stream outputs during execution.
use_structured_outputs_internally (`bool`, default `False`): Whether to use structured generation at each action step: improves performance for many models.
<Added version="1.17.0"/>
code_block_tags (`tuple[str, str]` | `Literal["markdown"]`, *optional*): Opening and closing tags for code blocks (regex strings). Pass a custom tuple, or pass 'markdown' to use ("```(?:python|py)", "\\n```"), leave empty to use ("<code>", "</code>").
**kwargs: Additional keyword arguments.
"""
def __init__(
self,
tools: list[Tool],
model: Model,
prompt_templates: PromptTemplates | None = None,
additional_authorized_imports: list[str] | None = None,
planning_interval: int | None = None,
executor_type: Literal["local", "e2b", "docker", "wasm"] = "local",
executor_kwargs: dict[str, Any] | None = None,
max_print_outputs_length: int | None = None,
stream_outputs: bool = False,
use_structured_outputs_internally: bool = False,
code_block_tags: str | tuple[str, str] | None = None,
**kwargs,
):
self.additional_authorized_imports = additional_authorized_imports if additional_authorized_imports else []
self.authorized_imports = sorted(set(BASE_BUILTIN_MODULES) | set(self.additional_authorized_imports))
self.max_print_outputs_length = max_print_outputs_length
self._use_structured_outputs_internally = use_structured_outputs_internally
if self._use_structured_outputs_internally:
prompt_templates = prompt_templates or yaml.safe_load(
importlib.resources.files("smolagents.prompts").joinpath("structured_code_agent.yaml").read_text()
)
else:
prompt_templates = prompt_templates or yaml.safe_load(
importlib.resources.files("smolagents.prompts").joinpath("code_agent.yaml").read_text()
)
if isinstance(code_block_tags, str) and code_block_tags != "markdown":
raise ValueError("Only 'markdown' is supported for a string argument to `code_block_tags`.")
self.code_block_tags = (
code_block_tags
if isinstance(code_block_tags, tuple)
else ("```python", "```")
if code_block_tags == "markdown"
else ("<code>", "</code>")
)
super().__init__(
tools=tools,
model=model,
prompt_templates=prompt_templates,
planning_interval=planning_interval,
**kwargs,
)
self.stream_outputs = stream_outputs
if self.stream_outputs and not hasattr(self.model, "generate_stream"):
raise ValueError(
"`stream_outputs` is set to True, but the model class implements no `generate_stream` method."
)
if "*" in self.additional_authorized_imports:
self.logger.log(
"Caution: you set an authorization for all imports, meaning your agent can decide to import any package it deems necessary. This might raise issues if the package is not installed in your environment.",
level=LogLevel.INFO,
)
if executor_type not in {"local", "e2b", "docker", "wasm"}:
raise ValueError(f"Unsupported executor type: {executor_type}")
self.executor_type = executor_type
self.executor_kwargs: dict[str, Any] = executor_kwargs or {}
self.python_executor = self.create_python_executor()
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, traceback):
self.cleanup()
def cleanup(self):
"""Clean up resources used by the agent, such as the remote Python executor."""
if hasattr(self.python_executor, "cleanup"):
self.python_executor.cleanup()
def create_python_executor(self) -> PythonExecutor:
if self.executor_type == "local":
return LocalPythonExecutor(
self.additional_authorized_imports,
**{"max_print_outputs_length": self.max_print_outputs_length} | self.executor_kwargs,
)
else:
if self.managed_agents:
raise Exception("Managed agents are not yet supported with remote code execution.")
remote_executors = {
"e2b": E2BExecutor,
"docker": DockerExecutor,
"wasm": WasmExecutor,
}
return remote_executors[self.executor_type](
self.additional_authorized_imports, self.logger, **self.executor_kwargs
)
def initialize_system_prompt(self) -> str:
system_prompt = populate_template(
self.prompt_templates["system_prompt"],
variables={
"tools": self.tools,
"managed_agents": self.managed_agents,
"authorized_imports": (
"You can import from any package you want."
if "*" in self.authorized_imports
else str(self.authorized_imports)
),
"custom_instructions": self.instructions,
"code_block_opening_tag": self.code_block_tags[0],
"code_block_closing_tag": self.code_block_tags[1],
},
)
return system_prompt
def _step_stream(
self, memory_step: ActionStep
) -> Generator[ChatMessageStreamDelta | ToolCall | ToolOutput | ActionOutput]:
"""
Perform one step in the ReAct framework: the agent thinks, acts, and observes the result.
Yields ChatMessageStreamDelta during the run if streaming is enabled.
At the end, yields either None if the step is not final, or the final answer.
"""
memory_messages = self.write_memory_to_messages()
input_messages = memory_messages.copy()
### Generate model output ###
memory_step.model_input_messages = input_messages
stop_sequences = ["Observation:", "Calling tools:"]
if self.code_block_tags[1] not in self.code_block_tags[0]:
# Only add the closing tag as a stop sequence if it does not also appear in the
# opening tag, since stopping on it there would cut any code generation short
stop_sequences.append(self.code_block_tags[1])
try:
additional_args: dict[str, Any] = {}
if self._use_structured_outputs_internally:
additional_args["response_format"] = CODEAGENT_RESPONSE_FORMAT
if self.stream_outputs:
output_stream = self.model.generate_stream(
input_messages,
stop_sequences=stop_sequences,
**additional_args,
)
chat_message_stream_deltas: list[ChatMessageStreamDelta] = []
with Live("", console=self.logger.console, vertical_overflow="visible") as live:
for event in output_stream:
chat_message_stream_deltas.append(event)
live.update(
Markdown(agglomerate_stream_deltas(chat_message_stream_deltas).render_as_markdown())
)
yield event
chat_message = agglomerate_stream_deltas(chat_message_stream_deltas)
memory_step.model_output_message = chat_message
output_text = chat_message.content
else:
chat_message: ChatMessage = self.model.generate(
input_messages,
stop_sequences=stop_sequences,
**additional_args,
)
memory_step.model_output_message = chat_message
output_text = chat_message.content
self.logger.log_markdown(
content=output_text,
title="Output message of the LLM:",
level=LogLevel.DEBUG,
)
if not self._use_structured_outputs_internally:
# This adds the end code sequence (i.e. the closing code block tag) to the history.
# This will nudge subsequent LLM calls to finish with this end code sequence, thus efficiently stopping generation.
if output_text and not output_text.strip().endswith(self.code_block_tags[1]):
output_text += self.code_block_tags[1]
memory_step.model_output_message.content = output_text
memory_step.token_usage = chat_message.token_usage
memory_step.model_output = output_text
except Exception as e:
raise AgentGenerationError(f"Error in generating model output:\n{e}", self.logger) from e
### Parse output ###
try:
if self._use_structured_outputs_internally:
code_action = json.loads(output_text)["code"]
code_action = extract_code_from_text(code_action, self.code_block_tags) or code_action
else:
code_action = parse_code_blobs(output_text, self.code_block_tags)
code_action = fix_final_answer_code(code_action)
memory_step.code_action = code_action
except Exception as e:
error_msg = f"Error in code parsing:\n{e}\nMake sure to provide correct code blobs."
raise AgentParsingError(error_msg, self.logger)
tool_call = ToolCall(
name="python_interpreter",
arguments=code_action,
id=f"call_{len(self.memory.steps)}",
)
yield tool_call
memory_step.tool_calls = [tool_call]
### Execute action ###
self.logger.log_code(title="Executing parsed code:", content=code_action, level=LogLevel.INFO)
try:
code_output = self.python_executor(code_action)
execution_outputs_console = []
if len(code_output.logs) > 0:
execution_outputs_console += [
Text("Execution logs:", style="bold"),
Text(code_output.logs),
]
observation = "Execution logs:\n" + code_output.logs
except Exception as e:
if hasattr(self.python_executor, "state") and "_print_outputs" in self.python_executor.state:
execution_logs = str(self.python_executor.state["_print_outputs"])
if len(execution_logs) > 0:
execution_outputs_console = [
Text("Execution logs:", style="bold"),
Text(execution_logs),
]
memory_step.observations = "Execution logs:\n" + execution_logs
self.logger.log(Group(*execution_outputs_console), level=LogLevel.INFO)
error_msg = str(e)
if "Import of " in error_msg and " is not allowed" in error_msg:
self.logger.log(
"[bold red]Warning to user: Code execution failed due to an unauthorized import - Consider passing said import under `additional_authorized_imports` when initializing your CodeAgent.",
level=LogLevel.INFO,
)
raise AgentExecutionError(error_msg, self.logger)
truncated_output = truncate_content(str(code_output.output))
observation += "Last output from code snippet:\n" + truncated_output
memory_step.observations = observation
if not code_output.is_final_answer:
execution_outputs_console += [
Text(
f"Out: {truncated_output}",
),
]
self.logger.log(Group(*execution_outputs_console), level=LogLevel.INFO)
memory_step.action_output = code_output.output
yield ActionOutput(output=code_output.output, is_final_answer=code_output.is_final_answer)
def to_dict(self) -> dict[str, Any]:
"""Convert the agent to a dictionary representation.
Returns:
`dict`: Dictionary representation of the agent.
"""
agent_dict = super().to_dict()
agent_dict["authorized_imports"] = self.authorized_imports
agent_dict["executor_type"] = self.executor_type
agent_dict["executor_kwargs"] = self.executor_kwargs
agent_dict["max_print_outputs_length"] = self.max_print_outputs_length
return agent_dict
@classmethod
def from_dict(cls, agent_dict: dict[str, Any], **kwargs) -> "CodeAgent":
"""Create CodeAgent from a dictionary representation.
Args:
agent_dict (`dict[str, Any]`): Dictionary representation of the agent.
**kwargs: Additional keyword arguments that will override agent_dict values.
Returns:
`CodeAgent`: Instance of the CodeAgent class.
"""
# Add CodeAgent-specific parameters to kwargs
code_agent_kwargs = {
"additional_authorized_imports": agent_dict.get("authorized_imports"),
"executor_type": agent_dict.get("executor_type"),
"executor_kwargs": agent_dict.get("executor_kwargs"),
"max_print_outputs_length": agent_dict.get("max_print_outputs_length"),
"code_block_tags": agent_dict.get("code_block_tags"),
}
# Filter out None values
code_agent_kwargs = {k: v for k, v in code_agent_kwargs.items() if v is not None}
# Update with any additional kwargs
code_agent_kwargs.update(kwargs)
# Call the parent class's from_dict method
return super().from_dict(agent_dict, **code_agent_kwargs)
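The `from_dict` method above relies on a common pattern: build a kwargs dict from the serialized form, drop `None` entries, then let caller-supplied kwargs win. A minimal standalone sketch of that merge logic (function and key names here are illustrative, not part of the smolagents API):

```python
def merge_kwargs(serialized: dict, **overrides) -> dict:
    """Drop None values from a serialized config, then apply explicit overrides."""
    merged = {k: v for k, v in serialized.items() if v is not None}
    merged.update(overrides)
    return merged

# executor_kwargs is None in the saved dict, so it is dropped entirely;
# the explicit executor_type override wins over the saved value.
result = merge_kwargs(
    {"executor_type": "local", "executor_kwargs": None, "max_print_outputs_length": 5000},
    executor_type="docker",
)
```

Filtering out `None` before merging matters: it lets the constructor's own defaults apply for fields that were never set, instead of being clobbered by explicit `None` values.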
# --- source: smolagents/src/smolagents/agents.py ---
import argparse
from io import BytesIO
from time import sleep
import helium
import PIL.Image
from dotenv import load_dotenv
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from smolagents import CodeAgent, WebSearchTool, tool
from smolagents.agents import ActionStep
from smolagents.cli import load_model
github_request = """
I'm trying to find how hard I have to work to get a repo in github.com/trending.
Can you navigate to the profile for the top author of the top trending repo, and give me their total number of commits over the last year?
""" # The agent is able to achieve this request only when powered by GPT-4o or Claude-3.5-sonnet.
search_request = """
Please navigate to https://en.wikipedia.org/wiki/Chicago and give me a sentence containing the word "1992" that mentions a construction accident.
"""
def parse_arguments():
parser = argparse.ArgumentParser(description="Run a web browser automation script with a specified model.")
parser.add_argument(
"prompt",
type=str,
nargs="?", # Makes it optional
default=search_request,
help="The prompt to run with the agent",
)
parser.add_argument(
"--model-type",
type=str,
default="LiteLLMModel",
help="The model type to use (e.g., OpenAIServerModel, LiteLLMModel, TransformersModel, InferenceClientModel)",
)
parser.add_argument(
"--model-id",
type=str,
default="gpt-4o",
help="The model ID to use for the specified model type",
)
parser.add_argument(
"--provider",
type=str,
help="The inference provider to use for the model",
)
parser.add_argument(
"--api-base",
type=str,
help="The API base to use for the model",
)
parser.add_argument(
"--api-key",
type=str,
help="The API key to use for the model",
)
return parser.parse_args()
def save_screenshot(memory_step: ActionStep, agent: CodeAgent) -> None:
sleep(1.0) # Let JavaScript animations happen before taking the screenshot
driver = helium.get_driver()
current_step = memory_step.step_number
if driver is not None:
for previous_memory_step in agent.memory.steps: # Remove previous screenshots from logs for lean processing
if isinstance(previous_memory_step, ActionStep) and previous_memory_step.step_number <= current_step - 2:
previous_memory_step.observations_images = None
png_bytes = driver.get_screenshot_as_png()
image = PIL.Image.open(BytesIO(png_bytes))
print(f"Captured a browser screenshot: {image.size} pixels")
memory_step.observations_images = [image.copy()] # Create a copy to ensure it persists, important!
# Update observations with current URL
url_info = f"Current url: {driver.current_url}"
memory_step.observations = (
url_info if memory_step.observations is None else memory_step.observations + "\n" + url_info
)
return
@tool
def search_item_ctrl_f(text: str, nth_result: int = 1) -> str:
"""
Searches for text on the current page via Ctrl + F and jumps to the nth occurrence.
Args:
text: The text to search for
nth_result: Which occurrence to jump to (default: 1)
"""
elements = driver.find_elements(By.XPATH, f"//*[contains(text(), '{text}')]")
if nth_result > len(elements):
raise Exception(f"Match n°{nth_result} not found (only {len(elements)} matches found)")
result = f"Found {len(elements)} matches for '{text}'."
elem = elements[nth_result - 1]
driver.execute_script("arguments[0].scrollIntoView(true);", elem)
result += f" Focused on element {nth_result} of {len(elements)}."
return result
@tool
def go_back() -> None:
"""Goes back to previous page."""
driver.back()
@tool
def close_popups() -> str:
"""
Closes any visible modal or pop-up on the page. Use this to dismiss pop-up windows! This does not work on cookie consent banners.
"""
webdriver.ActionChains(driver).send_keys(Keys.ESCAPE).perform()
def initialize_driver():
"""Initialize the Selenium WebDriver."""
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument("--force-device-scale-factor=1")
chrome_options.add_argument("--window-size=1000,1350")
chrome_options.add_argument("--disable-pdf-viewer")
chrome_options.add_argument("--window-position=0,0")
return helium.start_chrome(headless=False, options=chrome_options)
def initialize_agent(model):
"""Initialize the CodeAgent with the specified model."""
return CodeAgent(
tools=[WebSearchTool(), go_back, close_popups, search_item_ctrl_f],
model=model,
additional_authorized_imports=["helium"],
step_callbacks=[save_screenshot],
max_steps=20,
verbosity_level=2,
)
helium_instructions = """
Use your web_search tool when you want to get Google search results.
Then you can use helium to access websites. Don't use helium for Google search, only for navigating websites!
Don't bother about the helium driver, it's already managed.
We've already run "from helium import *"
Then you can go to pages!
<code>
go_to('github.com/trending')
</code>
You can directly click clickable elements by inputting the text that appears on them.
<code>
click("Top products")
</code>
If it's a link:
<code>
click(Link("Top products"))
</code>
If you try to interact with an element and it's not found, you'll get a LookupError.
In general stop your action after each button click to see what happens on your screenshot.
Never try to login in a page.
To scroll up or down, use scroll_down or scroll_up, passing the number of pixels to scroll as an argument.
<code>
scroll_down(num_pixels=1200) # This will scroll one viewport down
</code>
When you have pop-ups with a cross icon to close, don't try to click the close icon by finding its element or targeting an 'X' element (this most often fails).
Just use your built-in tool `close_popups` to close them:
<code>
close_popups()
</code>
You can use .exists() to check for the existence of an element. For example:
<code>
if Text('Accept cookies?').exists():
click('I accept')
</code>
Proceed in several steps rather than trying to solve the task in one shot.
And at the end, only when you have your answer, return your final answer.
<code>
final_answer("YOUR_ANSWER_HERE")
</code>
If pages seem stuck on loading, you might have to wait, for instance `import time` and run `time.sleep(5.0)`. But don't overuse this!
To list elements on page, DO NOT try code-based element searches like 'contributors = find_all(S("ol > li"))': just look at the latest screenshot you have and read it visually, or use your tool search_item_ctrl_f.
Of course, you can act on buttons like a user would do when navigating.
After each code blob you write, you will be automatically provided with an updated screenshot of the browser and the current browser url.
But beware that the screenshot will only be taken at the end of the whole action, it won't see intermediate states.
Don't kill the browser.
When you have modals or cookie banners on screen, you should get rid of them before you can click anything else.
"""
def run_webagent(
prompt: str,
model_type: str,
model_id: str,
provider: str | None = None,
api_base: str | None = None,
api_key: str | None = None,
) -> None:
# Load environment variables
load_dotenv()
# Initialize the model based on the provided arguments
model = load_model(model_type, model_id, provider=provider, api_base=api_base, api_key=api_key)
global driver
driver = initialize_driver()
agent = initialize_agent(model)
# Run the agent with the provided prompt
agent.python_executor("from helium import *")
agent.run(prompt + helium_instructions)
def main() -> None:
# Parse command line arguments
args = parse_arguments()
run_webagent(args.prompt, args.model_type, args.model_id, args.provider, args.api_base, args.api_key)
if __name__ == "__main__":
main()
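The `save_screenshot` callback above keeps only the most recent screenshots in agent memory, nulling out images on older steps so the context stays lean. The trimming logic can be sketched independently of Selenium and smolagents (the `Step` class below is a stand-in, not the real `ActionStep`):

```python
class Step:
    """Minimal stand-in for a memory step holding screenshots."""

    def __init__(self, step_number, images):
        self.step_number = step_number
        self.observations_images = images


def trim_old_screenshots(steps, current_step, keep_last=2):
    """Null out screenshots on steps older than the last `keep_last` steps."""
    for step in steps:
        if step.step_number <= current_step - keep_last:
            step.observations_images = None
    return steps


steps = [Step(i, [f"img{i}"]) for i in range(1, 5)]
trim_old_screenshots(steps, current_step=4)
```

With `keep_last=2` and `current_step=4`, steps 1 and 2 lose their images while steps 3 and 4 keep them, matching the `step_number <= current_step - 2` condition in the callback.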
# --- source: smolagents/src/smolagents/vision_web_browser.py ---
import json
import pytest
from PIL import Image
from smolagents.agents import ToolCall
from smolagents.memory import (
ActionStep,
AgentMemory,
ChatMessage,
MemoryStep,
MessageRole,
PlanningStep,
SystemPromptStep,
TaskStep,
)
from smolagents.monitoring import Timing, TokenUsage
class TestAgentMemory:
def test_initialization(self):
system_prompt = "This is a system prompt."
memory = AgentMemory(system_prompt=system_prompt)
assert memory.system_prompt.system_prompt == system_prompt
assert memory.steps == []
def test_return_all_code_actions(self):
memory = AgentMemory(system_prompt="This is a system prompt.")
memory.steps = [
ActionStep(step_number=1, timing=Timing(start_time=0.0, end_time=1.0), code_action="print('Hello')"),
ActionStep(step_number=2, timing=Timing(start_time=0.0, end_time=1.0), code_action=None),
ActionStep(step_number=3, timing=Timing(start_time=0.0, end_time=1.0), code_action="print('World')"),
] # type: ignore
assert memory.return_full_code() == "print('Hello')\n\nprint('World')"
class TestMemoryStep:
def test_initialization(self):
step = MemoryStep()
assert isinstance(step, MemoryStep)
def test_dict(self):
step = MemoryStep()
assert step.dict() == {}
def test_to_messages(self):
step = MemoryStep()
with pytest.raises(NotImplementedError):
step.to_messages()
def test_action_step_dict():
action_step = ActionStep(
model_input_messages=[ChatMessage(role=MessageRole.USER, content="Hello")],
tool_calls=[
ToolCall(id="id", name="get_weather", arguments={"location": "Paris"}),
],
timing=Timing(start_time=0.0, end_time=1.0),
step_number=1,
error=None,
model_output_message=ChatMessage(role=MessageRole.ASSISTANT, content="Hi"),
model_output="Hi",
observations="This is a nice observation",
observations_images=[Image.new("RGB", (100, 100))],
action_output="Output",
token_usage=TokenUsage(input_tokens=10, output_tokens=20),
)
action_step_dict = action_step.dict()
# Check each key individually for better test failure messages
assert "model_input_messages" in action_step_dict
assert action_step_dict["model_input_messages"] == [
{"role": MessageRole.USER, "content": "Hello", "tool_calls": None, "raw": None, "token_usage": None}
]
assert "tool_calls" in action_step_dict
assert len(action_step_dict["tool_calls"]) == 1
assert action_step_dict["tool_calls"][0] == {
"id": "id",
"type": "function",
"function": {
"name": "get_weather",
"arguments": {"location": "Paris"},
},
}
assert "timing" in action_step_dict
assert action_step_dict["timing"] == {"start_time": 0.0, "end_time": 1.0, "duration": 1.0}
assert "token_usage" in action_step_dict
assert action_step_dict["token_usage"] == {"input_tokens": 10, "output_tokens": 20, "total_tokens": 30}
assert "step_number" in action_step_dict
assert action_step_dict["step_number"] == 1
assert "error" in action_step_dict
assert action_step_dict["error"] is None
assert "model_output_message" in action_step_dict
assert action_step_dict["model_output_message"] == {
"role": "assistant",
"content": "Hi",
"tool_calls": None,
"raw": None,
"token_usage": None,
}
assert "model_output" in action_step_dict
assert action_step_dict["model_output"] == "Hi"
assert "observations" in action_step_dict
assert action_step_dict["observations"] == "This is a nice observation"
assert "observations_images" in action_step_dict
assert "action_output" in action_step_dict
assert action_step_dict["action_output"] == "Output"
def test_action_step_to_messages():
action_step = ActionStep(
model_input_messages=[ChatMessage(role=MessageRole.USER, content="Hello")],
tool_calls=[
ToolCall(id="id", name="get_weather", arguments={"location": "Paris"}),
],
timing=Timing(start_time=0.0, end_time=1.0),
step_number=1,
error=None,
model_output_message=ChatMessage(role=MessageRole.ASSISTANT, content="Hi"),
model_output="Hi",
observations="This is a nice observation",
observations_images=[Image.new("RGB", (100, 100))],
action_output="Output",
token_usage=TokenUsage(input_tokens=10, output_tokens=20),
)
messages = action_step.to_messages()
assert len(messages) == 4
for message in messages:
assert isinstance(message, ChatMessage)
assistant_message = messages[0]
assert assistant_message.role == MessageRole.ASSISTANT
assert len(assistant_message.content) == 1
assert assistant_message.content[0]["type"] == "text"
assert assistant_message.content[0]["text"] == "Hi"
message = messages[1]
assert message.role == MessageRole.TOOL_CALL
assert len(message.content) == 1
assert message.content[0]["type"] == "text"
assert "Calling tools:" in message.content[0]["text"]
image_message = messages[2]
assert image_message.content[0]["type"] == "image" # type: ignore
observation_message = messages[3]
assert observation_message.role == MessageRole.TOOL_RESPONSE
assert "Observation:\nThis is a nice observation" in observation_message.content[0]["text"]
def test_action_step_to_messages_no_tool_calls_with_observations():
action_step = ActionStep(
model_input_messages=None,
tool_calls=None,
timing=Timing(start_time=0.0, end_time=1.0),
step_number=1,
error=None,
model_output_message=None,
model_output=None,
observations="This is an observation.",
observations_images=None,
action_output=None,
token_usage=TokenUsage(input_tokens=10, output_tokens=20),
)
messages = action_step.to_messages()
assert len(messages) == 1
observation_message = messages[0]
assert observation_message.role == MessageRole.TOOL_RESPONSE
assert "Observation:\nThis is an observation." in observation_message.content[0]["text"]
def test_planning_step_to_messages():
planning_step = PlanningStep(
model_input_messages=[ChatMessage(role=MessageRole.USER, content="Hello")],
model_output_message=ChatMessage(role=MessageRole.ASSISTANT, content="Plan"),
plan="This is a plan.",
timing=Timing(start_time=0.0, end_time=1.0),
)
messages = planning_step.to_messages(summary_mode=False)
assert len(messages) == 2
for message in messages:
assert isinstance(message, ChatMessage)
assert isinstance(message.content, list)
assert len(message.content) == 1
for content in message.content:
assert isinstance(content, dict)
assert "type" in content
assert "text" in content
assert messages[0].role == MessageRole.ASSISTANT
assert messages[1].role == MessageRole.USER
def test_task_step_to_messages():
task_step = TaskStep(task="This is a task.", task_images=[Image.new("RGB", (100, 100))])
messages = task_step.to_messages(summary_mode=False)
assert len(messages) == 1
for message in messages:
assert isinstance(message, ChatMessage)
assert message.role == MessageRole.USER
assert isinstance(message.content, list)
assert len(message.content) == 2
text_content = message.content[0]
assert isinstance(text_content, dict)
assert "type" in text_content
assert "text" in text_content
for image_content in message.content[1:]:
assert isinstance(image_content, dict)
assert "type" in image_content
assert "image" in image_content
def test_system_prompt_step_to_messages():
system_prompt_step = SystemPromptStep(system_prompt="This is a system prompt.")
messages = system_prompt_step.to_messages(summary_mode=False)
assert len(messages) == 1
for message in messages:
assert isinstance(message, ChatMessage)
assert message.role == MessageRole.SYSTEM
assert isinstance(message.content, list)
assert len(message.content) == 1
for content in message.content:
assert isinstance(content, dict)
assert "type" in content
assert "text" in content
def test_memory_step_json_serialization():
"""Test that memory steps can be JSON serialized without raw fields."""
# Create a mock ChatCompletion-like object (this is what was causing the error)
class MockChatCompletion:
def __init__(self):
self.id = "chatcmpl-test"
self.choices = []
# Create a ChatMessage with raw field containing the non-serializable object
chat_message = ChatMessage(role=MessageRole.ASSISTANT, content="Test response", raw=MockChatCompletion())
# Test ActionStep serialization
action_step = ActionStep(
step_number=1,
timing=Timing(start_time=123456, end_time=123457),
model_output_message=chat_message,
model_input_messages=[chat_message],
)
step_dict = action_step.dict()
json_str = json.dumps(step_dict)
# Raw field should be present but serializable
assert "raw" in json_str
assert "MockChatCompletion" in json_str
# Test PlanningStep serialization
planning_step = PlanningStep(
model_input_messages=[chat_message],
model_output_message=chat_message,
plan="Test plan",
timing=Timing(start_time=123456, end_time=123457),
)
planning_dict = planning_step.dict()
json_str = json.dumps(planning_dict)
# Raw field should be present but serializable
assert "raw" in json_str
assert "MockChatCompletion" in json_str
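The serialization test above depends on non-JSON-serializable `raw` objects (like a provider's ChatCompletion) being converted to something `json.dumps` can handle. One common way to achieve this, shown here as a sketch rather than the actual smolagents implementation, is the `default=` fallback of `json.dumps`:

```python
import json


class MockChatCompletion:
    """Stand-in for a provider response object that json cannot encode natively."""

    def __repr__(self):
        return "MockChatCompletion(id='chatcmpl-test')"


payload = {"role": "assistant", "content": "Test response", "raw": MockChatCompletion()}

# Anything the encoder cannot handle is passed through repr() instead of
# raising TypeError, so the raw field survives as a descriptive string.
json_str = json.dumps(payload, default=repr)
```

This keeps the `raw` field present in the serialized output (as the test asserts) without requiring every provider object to implement its own serialization.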
# --- source: smolagents/tests/test_memory.py ---
<!---
Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
# Contribute to text-generation-inference
Everyone is welcome to contribute, and we value everybody's contribution. Code
contributions are not the only way to help the community. Answering questions, helping
others, and improving the documentation are also immensely valuable.
It also helps us if you spread the word! Reference the library in blog posts
about the awesome projects it made possible, shout out on Twitter every time it has
helped you, or simply ⭐️ the repository to say thank you.
However you choose to contribute, please be mindful and respect our
[code of conduct](https://github.com/huggingface/text-generation-inference/blob/main/CODE_OF_CONDUCT.md).
**This guide was heavily inspired by the awesome [scikit-learn guide to contributing](https://github.com/scikit-learn/scikit-learn/blob/main/CONTRIBUTING.md).**
## Ways to contribute
There are several ways you can contribute to text-generation-inference.
* Fix outstanding issues with the existing code.
* Submit issues related to bugs or desired new features.
* Contribute to the examples or to the documentation.
> All contributions are equally valuable to the community. 🥰
## Fixing outstanding issues
If you notice an issue with the existing code and have a fix in mind, feel free to [start contributing](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request) and open
a Pull Request!
## Submitting a bug-related issue or feature request
Do your best to follow these guidelines when submitting a bug-related issue or a feature
request. It will make it easier for us to come back to you quickly and with good
feedback.
### Did you find a bug?
The text-generation-inference library is robust and reliable thanks to users who report the problems they encounter.
Before you report an issue, we would really appreciate it if you could **make sure the bug was not
already reported** (use the search bar on GitHub under Issues). Your issue should also be related to bugs in the
library itself, and not your code.
Once you've confirmed the bug hasn't already been reported, please include the following information in your issue so
we can quickly resolve it:
* Your **OS type and version**, as well as your environment versions (versions of rust, python, and dependencies).
* A short, self-contained, code snippet that allows us to reproduce the bug.
* The *full* traceback if an exception is raised.
* Attach any other additional information, like screenshots, you think may help.
To get the OS and software versions automatically, you can re-run the launcher with the `--env` flag:
```bash
text-generation-launcher --env
```
This will precede the launch of the model with the information relative to your environment. We recommend pasting
that in your issue report.
### Do you want a new feature?
If there is a new feature you'd like to see in text-generation-inference, please open an issue and describe:
1. What is the *motivation* behind this feature? Is it related to a problem or frustration with the library? Is it
a feature related to something you need for a project? Is it something you worked on and think it could benefit
the community?
Whatever it is, we'd love to hear about it!
2. Describe your requested feature in as much detail as possible. The more you can tell us about it, the better
we'll be able to help you.
3. Provide a *code snippet* that demonstrates the feature's usage.
4. If the feature is related to a paper, please include a link.
If your issue is well written we're already 80% of the way there by the time you create it.
We have added [templates](https://github.com/huggingface/text-generation-inference/tree/main/.github/ISSUE_TEMPLATE)
to help you get started with your issue.
## Do you want to implement a new model?
New models are constantly released and if you want to implement a new model, please provide the following information:
* A short description of the model and a link to the paper.
* Link to the implementation if it is open-sourced.
* Link to the model weights if they are available.
If you are willing to contribute the model yourself, let us know so we can help you add it to text-generation-inference!
## Do you want to add documentation?
We're always looking for improvements to the documentation that make it more clear and accurate. Please let us know
how the documentation can be improved such as typos and any content that is missing, unclear or inaccurate. We'll be
happy to make the changes or help you make a contribution if you're interested!
## I want to become a maintainer of the project. How do I get there?
TGI is a project led and managed by Hugging Face as it powers our internal services. However, we are happy to have
motivated individuals from other organizations join us as maintainers with the goal of making TGI the best inference
service.
If you are such an individual (or organization), please reach out to us and let's collaborate.
<!-- source: text-generation-inference/CONTRIBUTING.md -->
{
"__inputs": [
{
"name": "DS_PROMETHEUS_EKS API INFERENCE PROD",
"label": "Prometheus EKS API Inference Prod",
"description": "",
"type": "datasource",
"pluginId": "prometheus",
"pluginName": "Prometheus"
}
],
"__elements": {},
"__requires": [
{
"type": "panel",
"id": "gauge",
"name": "Gauge",
"version": ""
},
{
"type": "grafana",
"id": "grafana",
"name": "Grafana",
"version": "10.0.2"
},
{
"type": "panel",
"id": "heatmap",
"name": "Heatmap",
"version": ""
},
{
"type": "datasource",
"id": "prometheus",
"name": "Prometheus",
"version": "1.0.0"
},
{
"type": "panel",
"id": "timeseries",
"name": "Time series",
"version": ""
}
],
"annotations": {
"list": [
{
"builtIn": 1,
"datasource": {
"type": "grafana",
"uid": "-- Grafana --"
},
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations & Alerts",
"target": {
"limit": 100,
"matchAny": false,
"tags": [],
"type": "dashboard"
},
"type": "dashboard"
}
]
},
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 2,
"id": 551,
"links": [],
"liveNow": false,
"panels": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"fieldMinMax": false,
"mappings": [],
"min": 0,
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 1000
}
]
},
"unit": "ms"
},
"overrides": []
},
"gridPos": {
"h": 7,
"w": 8,
"x": 0,
"y": 0
},
"id": 49,
"options": {
"colorMode": "value",
"graphMode": "area",
"justifyMode": "auto",
"orientation": "auto",
"reduceOptions": {
"calcs": [
"mean"
],
"fields": "",
"values": false
},
"showPercentChange": false,
"textMode": "auto",
"wideLayout": true
},
"pluginVersion": "10.4.2",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"expr": "(histogram_quantile(0.5, sum by (le) (rate(tgi_request_queue_duration_bucket{container=\"$service\"}[10m]))) * 1000) > 0",
"hide": true,
"instant": false,
"legendFormat": "__auto",
"range": true,
"refId": "B"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"expr": "(histogram_quantile(0.5, sum by (le) (rate(tgi_batch_inference_duration_bucket{method=\"prefill\", container=\"$service\"}[10m]))) * 1000) > 0",
"hide": true,
"instant": false,
"legendFormat": "__auto",
"range": true,
"refId": "C"
},
{
"datasource": {
"name": "Expression",
"type": "__expr__",
"uid": "__expr__"
},
"expression": "$B + $C",
"hide": false,
"refId": "D",
"type": "math"
}
],
"title": "Time to first token",
"type": "stat"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"mappings": [],
"min": 0,
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
},
"unit": "ms"
},
"overrides": []
},
"gridPos": {
"h": 7,
"w": 8,
"x": 9,
"y": 0
},
"id": 44,
"options": {
"colorMode": "value",
"graphMode": "area",
"justifyMode": "auto",
"orientation": "auto",
"reduceOptions": {
"calcs": [
"mean"
],
"fields": "",
"values": false
},
"showPercentChange": false,
"textMode": "auto",
"wideLayout": true
},
"pluginVersion": "10.4.2",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"expr": "(histogram_quantile(0.5, sum by (le) (rate(tgi_batch_forward_duration_bucket{method=\"decode\", container=\"$service\"}[10m]))) * 1000)>0",
"instant": false,
"range": true,
"refId": "A"
}
],
"title": "Decode per-token latency",
"type": "stat"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"mappings": [],
"min": 0,
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
}
]
},
"unit": "short"
},
"overrides": []
},
"gridPos": {
"h": 7,
"w": 7,
"x": 17,
"y": 0
},
"id": 45,
"options": {
"colorMode": "value",
"graphMode": "area",
"justifyMode": "auto",
"orientation": "auto",
"reduceOptions": {
"calcs": [
"mean"
],
"fields": "",
"values": false
},
"showPercentChange": false,
"textMode": "auto",
"wideLayout": true
},
"pluginVersion": "10.4.2",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"expr": "sum((rate(tgi_request_generated_tokens_sum{container=\"$service\"}[10m]) / rate(tgi_request_generated_tokens_count{container=\"$service\"}[10m]))>0)",
"instant": false,
"range": true,
"refId": "A"
}
],
"title": "Throughput (generated tok/s)",
"type": "stat"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "never",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
},
"unit": "none"
},
"overrides": [
{
"matcher": {
"id": "byName",
"options": "p50"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "green",
"mode": "fixed"
}
}
]
},
{
"matcher": {
"id": "byName",
"options": "p90"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "orange",
"mode": "fixed"
}
}
]
},
{
"matcher": {
"id": "byName",
"options": "p99"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "red",
"mode": "fixed"
}
}
]
}
]
},
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 7
},
"id": 48,
"options": {
"legend": {
"calcs": [
"min",
"max"
],
"displayMode": "list",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"mode": "single",
"sort": "none"
}
},
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"expr": "histogram_quantile(0.5, sum by (le) (rate(tgi_request_input_length_bucket{container=\"$service\"}[10m])))",
"legendFormat": "p50",
"range": true,
"refId": "A"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"expr": "histogram_quantile(0.9, sum by (le) (rate(tgi_request_input_length_bucket{container=\"$service\"}[10m])))",
"hide": false,
"legendFormat": "p90",
"range": true,
"refId": "B"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"expr": "histogram_quantile(0.99, sum by (le) (rate(tgi_request_input_length_bucket{container=\"$service\"}[10m])))",
"hide": false,
"legendFormat": "p99",
"range": true,
"refId": "C"
}
],
"title": "Number of tokens per prompt",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "never",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
},
"unit": "none"
},
"overrides": [
{
"matcher": {
"id": "byName",
"options": "p50"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "green",
"mode": "fixed"
}
}
]
},
{
"matcher": {
"id": "byName",
"options": "p90"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "orange",
"mode": "fixed"
}
}
]
},
{
"matcher": {
"id": "byName",
"options": "p99"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "red",
"mode": "fixed"
}
}
]
}
]
},
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 7
},
"id": 30,
"options": {
"legend": {
"calcs": [
"min",
"max"
],
"displayMode": "list",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"mode": "single",
"sort": "none"
}
},
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"expr": "histogram_quantile(0.5, sum by (le) (rate(tgi_request_generated_tokens_bucket{container=\"$service\"}[10m])))",
"legendFormat": "p50",
"range": true,
"refId": "A"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"expr": "histogram_quantile(0.9, sum by (le) (rate(tgi_request_generated_tokens_bucket{container=\"$service\"}[10m])))",
"hide": false,
"legendFormat": "p90",
"range": true,
"refId": "B"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"expr": "histogram_quantile(0.99, sum by (le) (rate(tgi_request_generated_tokens_bucket{container=\"$service\"}[10m])))",
"hide": false,
"legendFormat": "p99",
"range": true,
"refId": "C"
}
],
"title": "Number of generated tokens per request",
"type": "timeseries"
},
{
"collapsed": false,
"gridPos": {
"h": 1,
"w": 24,
"x": 0,
"y": 15
},
"id": 20,
"panels": [],
"title": "General",
"type": "row"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 30,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "never",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 8,
"w": 6,
"x": 0,
"y": 16
},
"id": 4,
"maxDataPoints": 100,
"options": {
"legend": {
"calcs": [
"min",
"max"
],
"displayMode": "list",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"mode": "single",
"sort": "none"
}
},
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"expr": "sum(increase(tgi_request_success{container=\"$service\"}[1m]))",
"legendFormat": "Success",
"range": true,
"refId": "A"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"expr": "sum(increase(tgi_request_failure{container=\"$service\"}[1m])) by (err)",
"hide": false,
"legendFormat": "Error: {{err}}",
"range": true,
"refId": "B"
}
],
"title": "Requests",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "never",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
},
"unit": "s"
},
"overrides": [
{
"matcher": {
"id": "byName",
"options": "p50"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "green",
"mode": "fixed"
}
}
]
},
{
"matcher": {
"id": "byName",
"options": "p90"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "orange",
"mode": "fixed"
}
}
]
},
{
"matcher": {
"id": "byName",
"options": "p99"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "red",
"mode": "fixed"
}
}
]
}
]
},
"gridPos": {
"h": 13,
"w": 9,
"x": 6,
"y": 16
},
"id": 6,
"options": {
"legend": {
"calcs": [
"min",
"max"
],
"displayMode": "list",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"mode": "single",
"sort": "none"
}
},
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"expr": "histogram_quantile(0.5, sum by (le) (rate(tgi_request_mean_time_per_token_duration_bucket{container=\"$service\"}[10m])))",
"legendFormat": "p50",
"range": true,
"refId": "A"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"expr": "histogram_quantile(0.9, sum by (le) (rate(tgi_request_mean_time_per_token_duration_bucket{container=\"$service\"}[10m])))",
"hide": false,
"legendFormat": "p90",
"range": true,
"refId": "B"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"expr": "histogram_quantile(0.99, sum by (le) (rate(tgi_request_mean_time_per_token_duration_bucket{container=\"$service\"}[10m])))",
"hide": false,
"legendFormat": "p99",
"range": true,
"refId": "C"
}
],
"title": "Mean Time Per Token quantiles",
"type": "timeseries"
},
{
"cards": {},
"color": {
"cardColor": "#5794F2",
"colorScale": "linear",
"colorScheme": "interpolateSpectral",
"exponent": 0.5,
"min": 0,
"mode": "opacity"
},
"dataFormat": "tsbuckets",
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"fieldConfig": {
"defaults": {
"custom": {
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"scaleDistribution": {
"type": "linear"
}
}
},
"overrides": []
},
"gridPos": {
"h": 13,
"w": 9,
"x": 15,
"y": 16
},
"heatmap": {},
"hideZeroBuckets": false,
"highlightCards": true,
"id": 13,
"legend": {
"show": false
},
"maxDataPoints": 25,
"options": {
"calculate": false,
"calculation": {},
"cellGap": 2,
"cellValues": {},
"color": {
"exponent": 0.5,
"fill": "#5794F2",
"min": 0,
"mode": "scheme",
"reverse": false,
"scale": "exponential",
"scheme": "Spectral",
"steps": 128
},
"exemplars": {
"color": "rgba(255,0,255,0.7)"
},
"filterValues": {
"le": 1e-9
},
"legend": {
"show": false
},
"rowsFrame": {
"layout": "auto"
},
"showValue": "never",
"tooltip": {
"mode": "single",
"showColorScale": false,
"yHistogram": false
},
"yAxis": {
"axisPlacement": "left",
"decimals": 1,
"reverse": false,
"unit": "s"
}
},
"pluginVersion": "10.4.2",
"reverseYBuckets": false,
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"exemplar": true,
"expr": "sum(increase(tgi_request_mean_time_per_token_duration_bucket{container=\"$service\"}[5m])) by (le)",
"format": "heatmap",
"interval": "",
            "legendFormat": "{{ le }}",
"range": true,
"refId": "A"
}
],
"title": "Mean Time Per Token",
"tooltip": {
"show": true,
"showHistogram": false
},
"type": "heatmap",
"xAxis": {
"show": true
},
"yAxis": {
"decimals": 1,
"format": "s",
"logBase": 1,
"show": true
},
"yBucketBound": "auto"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "auto",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "percentage",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "orange",
"value": 70
},
{
"color": "red",
"value": 85
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 5,
"w": 3,
"x": 0,
"y": 24
},
"id": 18,
"options": {
"legend": {
"calcs": [],
"displayMode": "list",
"placement": "bottom",
"showLegend": false
},
"tooltip": {
"mode": "single",
"sort": "none"
}
},
"pluginVersion": "9.1.0",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"expr": "count(tgi_request_count{container=\"$service\"})",
"legendFormat": "Replicas",
"range": true,
"refId": "A"
}
],
"title": "Number of replicas",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"fieldConfig": {
"defaults": {
"mappings": [],
"thresholds": {
"mode": "percentage",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "orange",
"value": 70
},
{
"color": "red",
"value": 85
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 5,
"w": 3,
"x": 3,
"y": 24
},
"id": 32,
"options": {
"minVizHeight": 75,
"minVizWidth": 75,
"orientation": "auto",
"reduceOptions": {
"calcs": [
"lastNotNull"
],
"fields": "",
"values": false
},
"showThresholdLabels": false,
"showThresholdMarkers": true,
"sizing": "auto"
},
"pluginVersion": "10.4.2",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"expr": "sum(tgi_queue_size{container=\"$service\"})",
"legendFormat": "__auto",
"range": true,
"refId": "A"
}
],
"title": "Queue Size",
"type": "gauge"
},
{
"collapsed": false,
"gridPos": {
"h": 1,
"w": 24,
"x": 0,
"y": 29
},
"id": 26,
"panels": [],
"title": "Batching",
"type": "row"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "bars",
"fillOpacity": 50,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "never",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "normal"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 5,
"w": 6,
"x": 0,
"y": 30
},
"id": 29,
"maxDataPoints": 40,
"options": {
"legend": {
"calcs": [],
"displayMode": "list",
"placement": "bottom",
"showLegend": false
},
"tooltip": {
"mode": "single",
"sort": "none"
}
},
"pluginVersion": "9.1.0",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"expr": "avg(tgi_batch_current_max_tokens{container=\"$service\"})",
"legendFormat": "{{ pod }}",
"range": true,
"refId": "A"
}
],
"title": "Max tokens per batch",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "never",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
},
"unit": "none"
},
"overrides": [
{
"matcher": {
"id": "byName",
"options": "p50"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "green",
"mode": "fixed"
}
}
]
},
{
"matcher": {
"id": "byName",
"options": "p90"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "orange",
"mode": "fixed"
}
}
]
},
{
"matcher": {
"id": "byName",
"options": "p99"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "red",
"mode": "fixed"
}
}
]
}
]
},
"gridPos": {
"h": 9,
"w": 4,
"x": 6,
"y": 30
},
"id": 33,
"options": {
"legend": {
"calcs": [
"min",
"max"
],
"displayMode": "list",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"mode": "single",
"sort": "none"
}
},
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"expr": "histogram_quantile(0.5, sum by (le) (rate(tgi_request_skipped_tokens_bucket{container=\"$service\"}[10m])))",
"legendFormat": "p50",
"range": true,
"refId": "A"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"expr": "histogram_quantile(0.9, sum by (le) (rate(tgi_request_skipped_tokens_bucket{container=\"$service\"}[10m])))",
"hide": false,
"legendFormat": "p90",
"range": true,
"refId": "B"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"expr": "histogram_quantile(0.99, sum by (le) (rate(tgi_request_skipped_tokens_bucket{container=\"$service\"}[10m])))",
"hide": false,
"legendFormat": "p99",
"range": true,
"refId": "C"
}
],
"title": "Speculated Tokens",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "never",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
},
"unit": "none"
},
"overrides": [
{
"matcher": {
"id": "byName",
"options": "p50"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "green",
"mode": "fixed"
}
}
]
},
{
"matcher": {
"id": "byName",
"options": "p90"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "orange",
"mode": "fixed"
}
}
]
},
{
"matcher": {
"id": "byName",
"options": "p99"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "red",
"mode": "fixed"
}
}
]
}
]
},
"gridPos": {
"h": 9,
"w": 5,
"x": 10,
"y": 30
},
"id": 46,
"options": {
"legend": {
"calcs": [
"min",
"max"
],
"displayMode": "list",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"mode": "single",
"sort": "none"
}
},
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"expr": "histogram_quantile(0.5, sum by (le) (rate(tgi_request_input_length_bucket{container=\"$service\"}[10m])))",
"legendFormat": "p50",
"range": true,
"refId": "A"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"expr": "histogram_quantile(0.9, sum by (le) (rate(tgi_request_input_length_bucket{container=\"$service\"}[10m])))",
"hide": false,
"legendFormat": "p90",
"range": true,
"refId": "B"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"expr": "histogram_quantile(0.99, sum by (le) (rate(tgi_request_input_length_bucket{container=\"$service\"}[10m])))",
"hide": false,
"legendFormat": "p99",
"range": true,
"refId": "C"
}
],
"title": "Prompt Tokens",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "never",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
},
"unit": "s"
},
"overrides": [
{
"matcher": {
"id": "byName",
"options": "p50"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "green",
"mode": "fixed"
}
}
]
},
{
"matcher": {
"id": "byName",
"options": "p90"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "orange",
"mode": "fixed"
}
}
]
},
{
"matcher": {
"id": "byName",
"options": "p99"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "red",
"mode": "fixed"
}
}
]
}
]
},
"gridPos": {
"h": 9,
"w": 9,
"x": 15,
"y": 30
},
"id": 8,
"options": {
"legend": {
"calcs": [
"min",
"max"
],
"displayMode": "list",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"mode": "single",
"sort": "none"
}
},
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"expr": "histogram_quantile(0.5, sum by (le) (rate(tgi_request_duration_bucket{container=\"$service\"}[10m])))",
"legendFormat": "p50",
"range": true,
"refId": "A"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"expr": "histogram_quantile(0.9, sum by (le) (rate(tgi_request_duration_bucket{container=\"$service\"}[10m])))",
"hide": false,
"legendFormat": "p90",
"range": true,
"refId": "B"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"expr": "histogram_quantile(0.99, sum by (le) (rate(tgi_request_duration_bucket{container=\"$service\"}[10m])))",
"hide": false,
"legendFormat": "p99",
"range": true,
"refId": "C"
}
],
"title": "Latency quantiles",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "bars",
"fillOpacity": 50,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "never",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "normal"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 4,
"w": 6,
"x": 0,
"y": 35
},
"id": 27,
"maxDataPoints": 40,
"options": {
"legend": {
"calcs": [],
"displayMode": "list",
"placement": "bottom",
"showLegend": false
},
"tooltip": {
"mode": "single",
"sort": "none"
}
},
"pluginVersion": "9.1.0",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"expr": "avg(tgi_batch_current_size{container=\"$service\"})",
"legendFormat": "{{ pod }}",
"range": true,
"refId": "A"
}
],
"title": "Batch Size",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 30,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "never",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 9,
"w": 6,
"x": 0,
"y": 39
},
"id": 28,
"maxDataPoints": 100,
"options": {
"legend": {
"calcs": [
"min",
"max"
],
"displayMode": "list",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"mode": "single",
"sort": "none"
}
},
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"expr": "sum(increase(tgi_batch_concat{container=\"$service\"}[1m])) by (reason)",
"hide": false,
"legendFormat": "Reason: {{ reason }}",
"range": true,
"refId": "B"
}
],
"title": "Concatenates",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "never",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
},
"unit": "s"
},
"overrides": [
{
"matcher": {
"id": "byName",
"options": "p50"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "green",
"mode": "fixed"
}
}
]
},
{
"matcher": {
"id": "byName",
"options": "p90"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "orange",
"mode": "fixed"
}
}
]
},
{
"matcher": {
"id": "byName",
"options": "p99"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "red",
"mode": "fixed"
}
}
]
}
]
},
"gridPos": {
"h": 9,
"w": 9,
"x": 6,
"y": 39
},
"id": 31,
"options": {
"legend": {
"calcs": [
"min",
"max"
],
"displayMode": "list",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"mode": "single",
"sort": "none"
}
},
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"expr": "histogram_quantile(0.5, sum by (le) (rate(tgi_request_queue_duration_bucket{container=\"$service\"}[10m])))",
"legendFormat": "p50",
"range": true,
"refId": "A"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"expr": "histogram_quantile(0.9, sum by (le) (rate(tgi_request_queue_duration_bucket{container=\"$service\"}[10m])))",
"hide": false,
"legendFormat": "p90",
"range": true,
"refId": "B"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"expr": "histogram_quantile(0.99, sum by (le) (rate(tgi_request_queue_duration_bucket{container=\"$service\"}[10m])))",
"hide": false,
"legendFormat": "p99",
"range": true,
"refId": "C"
}
],
"title": "Queue quantiles",
"type": "timeseries"
},
{
"collapsed": false,
"gridPos": {
"h": 1,
"w": 24,
"x": 0,
"y": 48
},
"id": 22,
"panels": [],
"title": "Prefill",
"type": "row"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "never",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
},
"unit": "s"
},
"overrides": [
{
"matcher": {
"id": "byName",
"options": "p50"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "green",
"mode": "fixed"
}
}
]
},
{
"matcher": {
"id": "byName",
"options": "p90"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "orange",
"mode": "fixed"
}
}
]
},
{
"matcher": {
"id": "byName",
"options": "p99"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "red",
"mode": "fixed"
}
}
]
}
]
},
"gridPos": {
"h": 11,
"w": 12,
"x": 0,
"y": 49
},
"id": 7,
"options": {
"legend": {
"calcs": [
"min",
"max"
],
"displayMode": "list",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"mode": "single",
"sort": "none"
}
},
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"expr": "histogram_quantile(0.5, sum by (le) (rate(tgi_batch_inference_duration_bucket{method=\"prefill\", container=\"$service\"}[10m])))",
"legendFormat": "p50",
"range": true,
"refId": "A"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"expr": "histogram_quantile(0.9, sum by (le) (rate(tgi_batch_inference_duration_bucket{method=\"prefill\", container=\"$service\"}[10m])))",
"hide": false,
"legendFormat": "p90",
"range": true,
"refId": "B"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"expr": "histogram_quantile(0.99, sum by (le) (rate(tgi_batch_inference_duration_bucket{method=\"prefill\", container=\"$service\"}[10m])))",
"hide": false,
"legendFormat": "p99",
"range": true,
"refId": "C"
}
],
"title": "Prefill Quantiles",
"type": "timeseries"
},
{
"cards": {},
"color": {
"cardColor": "#5794F2",
"colorScale": "linear",
"colorScheme": "interpolateSpectral",
"exponent": 0.5,
"min": 0,
"mode": "opacity"
},
"dataFormat": "tsbuckets",
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"fieldConfig": {
"defaults": {
"custom": {
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"scaleDistribution": {
"type": "linear"
}
}
},
"overrides": []
},
"gridPos": {
"h": 11,
"w": 12,
"x": 12,
"y": 49
},
"heatmap": {},
"hideZeroBuckets": false,
"highlightCards": true,
"id": 14,
"legend": {
"show": false
},
"maxDataPoints": 25,
"options": {
"calculate": false,
"calculation": {},
"cellGap": 2,
"cellValues": {},
"color": {
"exponent": 0.5,
"fill": "#5794F2",
"min": 0,
"mode": "scheme",
"reverse": false,
"scale": "exponential",
"scheme": "Spectral",
"steps": 128
},
"exemplars": {
"color": "rgba(255,0,255,0.7)"
},
"filterValues": {
"le": 1e-9
},
"legend": {
"show": false
},
"rowsFrame": {
"layout": "auto"
},
"showValue": "never",
"tooltip": {
"mode": "single",
"showColorScale": false,
"yHistogram": false
},
"yAxis": {
"axisPlacement": "left",
"decimals": 1,
"reverse": false,
"unit": "s"
}
},
"pluginVersion": "10.4.2",
"reverseYBuckets": false,
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"exemplar": true,
"expr": "sum(increase(tgi_batch_inference_duration_bucket{method=\"prefill\", container=\"$service\"}[5m])) by (le)",
"format": "heatmap",
"interval": "",
            "legendFormat": "{{ le }}",
"range": true,
"refId": "A"
}
],
"title": "Prefill Latency",
"tooltip": {
"show": true,
"showHistogram": false
},
"type": "heatmap",
"xAxis": {
"show": true
},
"yAxis": {
"decimals": 1,
"format": "s",
"logBase": 1,
"show": true
},
"yBucketBound": "auto"
},
{
"collapsed": false,
"gridPos": {
"h": 1,
"w": 24,
"x": 0,
"y": 60
},
"id": 24,
"panels": [],
"title": "Decode",
"type": "row"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "never",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
},
"unit": "s"
},
"overrides": [
{
"matcher": {
"id": "byName",
"options": "p50"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "green",
"mode": "fixed"
}
}
]
},
{
"matcher": {
"id": "byName",
"options": "p90"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "orange",
"mode": "fixed"
}
}
]
},
{
"matcher": {
"id": "byName",
"options": "p99"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "red",
"mode": "fixed"
}
}
]
}
]
},
"gridPos": {
"h": 11,
"w": 12,
"x": 0,
"y": 61
},
"id": 11,
"options": {
"legend": {
"calcs": [
"min",
"max"
],
"displayMode": "list",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"mode": "single",
"sort": "none"
}
},
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"expr": "histogram_quantile(0.5, sum by (le) (rate(tgi_batch_inference_duration_bucket{method=\"decode\", container=\"$service\"}[10m])))",
"legendFormat": "p50",
"range": true,
"refId": "A"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"expr": "histogram_quantile(0.9, sum by (le) (rate(tgi_batch_inference_duration_bucket{method=\"decode\", container=\"$service\"}[10m])))",
"hide": false,
"legendFormat": "p90",
"range": true,
"refId": "B"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"expr": "histogram_quantile(0.99, sum by (le) (rate(tgi_batch_inference_duration_bucket{method=\"decode\", container=\"$service\"}[10m])))",
"hide": false,
"legendFormat": "p99",
"range": true,
"refId": "C"
}
],
"title": "Decode quantiles",
"type": "timeseries"
},
{
"cards": {},
"color": {
"cardColor": "#5794F2",
"colorScale": "linear",
"colorScheme": "interpolateSpectral",
"exponent": 0.5,
"min": 0,
"mode": "opacity"
},
"dataFormat": "tsbuckets",
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"fieldConfig": {
"defaults": {
"custom": {
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"scaleDistribution": {
"type": "linear"
}
}
},
"overrides": []
},
"gridPos": {
"h": 11,
"w": 12,
"x": 12,
"y": 61
},
"heatmap": {},
"hideZeroBuckets": false,
"highlightCards": true,
"id": 15,
"legend": {
"show": false
},
"maxDataPoints": 25,
"options": {
"calculate": false,
"calculation": {},
"cellGap": 2,
"cellValues": {},
"color": {
"exponent": 0.5,
"fill": "#5794F2",
"min": 0,
"mode": "scheme",
"reverse": false,
"scale": "exponential",
"scheme": "Spectral",
"steps": 128
},
"exemplars": {
"color": "rgba(255,0,255,0.7)"
},
"filterValues": {
"le": 1e-9
},
"legend": {
"show": false
},
"rowsFrame": {
"layout": "auto"
},
"showValue": "never",
"tooltip": {
"mode": "single",
"showColorScale": false,
"yHistogram": false
},
"yAxis": {
"axisPlacement": "left",
"decimals": 1,
"reverse": false,
"unit": "s"
}
},
"pluginVersion": "10.4.2",
"reverseYBuckets": false,
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"exemplar": true,
"expr": "sum(increase(tgi_batch_inference_duration_bucket{method=\"decode\", container=\"$service\"}[5m])) by (le)",
"format": "heatmap",
"interval": "",
          "legendFormat": "{{ le }}",
"range": true,
"refId": "A"
}
],
"title": "Decode Latency",
"tooltip": {
"show": true,
"showHistogram": false
},
"type": "heatmap",
"xAxis": {
"show": true
},
"yAxis": {
"decimals": 1,
"format": "s",
"logBase": 1,
"show": true
},
"yBucketBound": "auto"
},
{
"collapsed": false,
"gridPos": {
"h": 1,
"w": 24,
"x": 0,
"y": 72
},
"id": 43,
"panels": [],
"title": "Debug",
"type": "row"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "never",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
},
"unit": "s"
},
"overrides": [
{
"matcher": {
"id": "byName",
"options": "p50"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "green",
"mode": "fixed"
}
}
]
},
{
"matcher": {
"id": "byName",
"options": "p90"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "orange",
"mode": "fixed"
}
}
]
},
{
"matcher": {
"id": "byName",
"options": "p99"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "red",
"mode": "fixed"
}
}
]
}
]
},
"gridPos": {
"h": 11,
"w": 6,
"x": 0,
"y": 73
},
"id": 38,
"options": {
"legend": {
"calcs": [
"min",
"max"
],
"displayMode": "list",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"mode": "single",
"sort": "none"
}
},
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"expr": "histogram_quantile(0.5, sum by (le) (rate(tgi_batch_forward_duration_bucket{method=\"decode\", container=\"$service\"}[10m])))",
"legendFormat": "p50",
"range": true,
"refId": "A"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"expr": "histogram_quantile(0.9, sum by (le) (rate(tgi_batch_forward_duration_bucket{method=\"decode\", container=\"$service\"}[10m])))",
"hide": false,
"legendFormat": "p90",
"range": true,
"refId": "B"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"expr": "histogram_quantile(0.99, sum by (le) (rate(tgi_batch_forward_duration_bucket{method=\"decode\", container=\"$service\"}[10m])))",
"hide": false,
"legendFormat": "p99",
"range": true,
"refId": "C"
}
],
"title": "Forward quantiles",
"type": "timeseries"
},
{
"cards": {},
"color": {
"cardColor": "#5794F2",
"colorScale": "linear",
"colorScheme": "interpolateSpectral",
"exponent": 0.5,
"min": 0,
"mode": "opacity"
},
"dataFormat": "tsbuckets",
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"fieldConfig": {
"defaults": {
"custom": {
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"scaleDistribution": {
"type": "linear"
}
}
},
"overrides": []
},
"gridPos": {
"h": 11,
"w": 6,
"x": 6,
"y": 73
},
"heatmap": {},
"hideZeroBuckets": false,
"highlightCards": true,
"id": 35,
"legend": {
"show": false
},
"maxDataPoints": 25,
"options": {
"calculate": false,
"calculation": {},
"cellGap": 2,
"cellValues": {},
"color": {
"exponent": 0.5,
"fill": "#5794F2",
"min": 0,
"mode": "scheme",
"reverse": false,
"scale": "exponential",
"scheme": "Spectral",
"steps": 128
},
"exemplars": {
"color": "rgba(255,0,255,0.7)"
},
"filterValues": {
"le": 1e-9
},
"legend": {
"show": false
},
"rowsFrame": {
"layout": "auto"
},
"showValue": "never",
"tooltip": {
"mode": "single",
"showColorScale": false,
"yHistogram": false
},
"yAxis": {
"axisPlacement": "left",
"decimals": 1,
"reverse": false,
"unit": "s"
}
},
"pluginVersion": "10.4.2",
"reverseYBuckets": false,
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"exemplar": true,
"expr": "sum(increase(tgi_batch_forward_duration_bucket{method=\"decode\", container=\"$service\"}[5m])) by (le)",
"format": "heatmap",
"interval": "",
          "legendFormat": "{{ le }}",
"range": true,
"refId": "A"
}
],
"title": "Forward Latency",
"tooltip": {
"show": true,
"showHistogram": false
},
"type": "heatmap",
"xAxis": {
"show": true
},
"yAxis": {
"decimals": 1,
"format": "s",
"logBase": 1,
"show": true
},
"yBucketBound": "auto"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "never",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
},
"unit": "s"
},
"overrides": [
{
"matcher": {
"id": "byName",
"options": "p50"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "green",
"mode": "fixed"
}
}
]
},
{
"matcher": {
"id": "byName",
"options": "p90"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "orange",
"mode": "fixed"
}
}
]
},
{
"matcher": {
"id": "byName",
"options": "p99"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "red",
"mode": "fixed"
}
}
]
}
]
},
"gridPos": {
"h": 11,
"w": 6,
"x": 12,
"y": 73
},
"id": 34,
"options": {
"legend": {
"calcs": [
"min",
"max"
],
"displayMode": "list",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"mode": "single",
"sort": "none"
}
},
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"expr": "histogram_quantile(0.5, sum by (le) (rate(tgi_batch_decode_duration_bucket{method=\"decode\", container=\"$service\"}[10m])))",
"legendFormat": "p50",
"range": true,
"refId": "A"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"expr": "histogram_quantile(0.9, sum by (le) (rate(tgi_batch_decode_duration_bucket{method=\"decode\", container=\"$service\"}[10m])))",
"hide": false,
"legendFormat": "p90",
"range": true,
"refId": "B"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"expr": "histogram_quantile(0.99, sum by (le) (rate(tgi_batch_decode_duration_bucket{method=\"decode\", container=\"$service\"}[10m])))",
"hide": false,
"legendFormat": "p99",
"range": true,
"refId": "C"
}
],
"title": "Token Decode quantiles",
"type": "timeseries"
},
{
"cards": {},
"color": {
"cardColor": "#5794F2",
"colorScale": "linear",
"colorScheme": "interpolateSpectral",
"exponent": 0.5,
"min": 0,
"mode": "opacity"
},
"dataFormat": "tsbuckets",
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"fieldConfig": {
"defaults": {
"custom": {
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"scaleDistribution": {
"type": "linear"
}
}
},
"overrides": []
},
"gridPos": {
"h": 11,
"w": 6,
"x": 18,
"y": 73
},
"heatmap": {},
"hideZeroBuckets": false,
"highlightCards": true,
"id": 40,
"legend": {
"show": false
},
"maxDataPoints": 25,
"options": {
"calculate": false,
"calculation": {},
"cellGap": 2,
"cellValues": {},
"color": {
"exponent": 0.5,
"fill": "#5794F2",
"min": 0,
"mode": "scheme",
"reverse": false,
"scale": "exponential",
"scheme": "Spectral",
"steps": 128
},
"exemplars": {
"color": "rgba(255,0,255,0.7)"
},
"filterValues": {
"le": 1e-9
},
"legend": {
"show": false
},
"rowsFrame": {
"layout": "auto"
},
"showValue": "never",
"tooltip": {
"mode": "single",
"showColorScale": false,
"yHistogram": false
},
"yAxis": {
"axisPlacement": "left",
"decimals": 1,
"reverse": false,
"unit": "s"
}
},
"pluginVersion": "10.4.2",
"reverseYBuckets": false,
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"exemplar": true,
"expr": "sum(increase(tgi_batch_decode_duration_bucket{method=\"decode\", container=\"$service\"}[5m])) by (le)",
"format": "heatmap",
"interval": "",
          "legendFormat": "{{ le }}",
"range": true,
"refId": "A"
}
],
"title": "Token Decode Latency",
"tooltip": {
"show": true,
"showHistogram": false
},
"type": "heatmap",
"xAxis": {
"show": true
},
"yAxis": {
"decimals": 1,
"format": "s",
"logBase": 1,
"show": true
},
"yBucketBound": "auto"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "never",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
},
"unit": "s"
},
"overrides": [
{
"matcher": {
"id": "byName",
"options": "p50"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "green",
"mode": "fixed"
}
}
]
},
{
"matcher": {
"id": "byName",
"options": "p90"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "orange",
"mode": "fixed"
}
}
]
},
{
"matcher": {
"id": "byName",
"options": "p99"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "red",
"mode": "fixed"
}
}
]
}
]
},
"gridPos": {
"h": 11,
"w": 6,
"x": 0,
"y": 84
},
"id": 42,
"options": {
"legend": {
"calcs": [
"min",
"max"
],
"displayMode": "list",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"mode": "single",
"sort": "none"
}
},
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"expr": "histogram_quantile(0.5, sum by (le) (rate(tgi_batch_filter_duration_bucket{method=\"decode\", container=\"$service\"}[10m])))",
"legendFormat": "p50",
"range": true,
"refId": "A"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"expr": "histogram_quantile(0.9, sum by (le) (rate(tgi_batch_filter_duration_bucket{method=\"decode\", container=\"$service\"}[10m])))",
"hide": false,
"legendFormat": "p90",
"range": true,
"refId": "B"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"expr": "histogram_quantile(0.99, sum by (le) (rate(tgi_batch_filter_duration_bucket{method=\"decode\", container=\"$service\"}[10m])))",
"hide": false,
"legendFormat": "p99",
"range": true,
"refId": "C"
}
],
"title": "Filter Batch quantiles",
"type": "timeseries"
},
{
"cards": {},
"color": {
"cardColor": "#5794F2",
"colorScale": "linear",
"colorScheme": "interpolateSpectral",
"exponent": 0.5,
"min": 0,
"mode": "opacity"
},
"dataFormat": "tsbuckets",
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"fieldConfig": {
"defaults": {
"custom": {
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"scaleDistribution": {
"type": "linear"
}
}
},
"overrides": []
},
"gridPos": {
"h": 11,
"w": 6,
"x": 6,
"y": 84
},
"heatmap": {},
"hideZeroBuckets": false,
"highlightCards": true,
"id": 39,
"legend": {
"show": false
},
"maxDataPoints": 25,
"options": {
"calculate": false,
"calculation": {},
"cellGap": 2,
"cellValues": {},
"color": {
"exponent": 0.5,
"fill": "#5794F2",
"min": 0,
"mode": "scheme",
"reverse": false,
"scale": "exponential",
"scheme": "Spectral",
"steps": 128
},
"exemplars": {
"color": "rgba(255,0,255,0.7)"
},
"filterValues": {
"le": 1e-9
},
"legend": {
"show": false
},
"rowsFrame": {
"layout": "auto"
},
"showValue": "never",
"tooltip": {
"mode": "single",
"showColorScale": false,
"yHistogram": false
},
"yAxis": {
"axisPlacement": "left",
"decimals": 1,
"reverse": false,
"unit": "s"
}
},
"pluginVersion": "10.4.2",
"reverseYBuckets": false,
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"exemplar": true,
"expr": "sum(increase(tgi_batch_filter_duration_bucket{method=\"decode\", container=\"$service\"}[5m])) by (le)",
"format": "heatmap",
"interval": "",
          "legendFormat": "{{ le }}",
"range": true,
"refId": "A"
}
],
"title": "Filter Batch Latency",
"tooltip": {
"show": true,
"showHistogram": false
},
"type": "heatmap",
"xAxis": {
"show": true
},
"yAxis": {
"decimals": 1,
"format": "s",
"logBase": 1,
"show": true
},
"yBucketBound": "auto"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "never",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
},
"unit": "s"
},
"overrides": [
{
"matcher": {
"id": "byName",
"options": "p50"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "green",
"mode": "fixed"
}
}
]
},
{
"matcher": {
"id": "byName",
"options": "p90"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "orange",
"mode": "fixed"
}
}
]
},
{
"matcher": {
"id": "byName",
"options": "p99"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "red",
"mode": "fixed"
}
}
]
}
]
},
"gridPos": {
"h": 11,
"w": 6,
"x": 12,
"y": 84
},
"id": 36,
"options": {
"legend": {
"calcs": [
"min",
"max"
],
"displayMode": "list",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"mode": "single",
"sort": "none"
}
},
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"expr": "histogram_quantile(0.5, sum by (le) (rate(tgi_batch_concat_duration_bucket{method=\"decode\", container=\"$service\"}[10m])))",
"legendFormat": "p50",
"range": true,
"refId": "A"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"expr": "histogram_quantile(0.9, sum by (le) (rate(tgi_batch_concat_duration_bucket{method=\"decode\", container=\"$service\"}[10m])))",
"hide": false,
"legendFormat": "p90",
"range": true,
"refId": "B"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"expr": "histogram_quantile(0.99, sum by (le) (rate(tgi_batch_concat_duration_bucket{method=\"decode\", container=\"$service\"}[10m])))",
"hide": false,
"legendFormat": "p99",
"range": true,
"refId": "C"
}
],
"title": "Batch Concat quantiles",
"type": "timeseries"
},
{
"cards": {},
"color": {
"cardColor": "#5794F2",
"colorScale": "linear",
"colorScheme": "interpolateSpectral",
"exponent": 0.5,
"min": 0,
"mode": "opacity"
},
"dataFormat": "tsbuckets",
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"fieldConfig": {
"defaults": {
"custom": {
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"scaleDistribution": {
"type": "linear"
}
}
},
"overrides": []
},
"gridPos": {
"h": 11,
"w": 6,
"x": 18,
"y": 84
},
"heatmap": {},
"hideZeroBuckets": false,
"highlightCards": true,
"id": 41,
"legend": {
"show": false
},
"maxDataPoints": 25,
"options": {
"calculate": false,
"calculation": {},
"cellGap": 2,
"cellValues": {},
"color": {
"exponent": 0.5,
"fill": "#5794F2",
"min": 0,
"mode": "scheme",
"reverse": false,
"scale": "exponential",
"scheme": "Spectral",
"steps": 128
},
"exemplars": {
"color": "rgba(255,0,255,0.7)"
},
"filterValues": {
"le": 1e-9
},
"legend": {
"show": false
},
"rowsFrame": {
"layout": "auto"
},
"showValue": "never",
"tooltip": {
"mode": "single",
"showColorScale": false,
"yHistogram": false
},
"yAxis": {
"axisPlacement": "left",
"decimals": 1,
"reverse": false,
"unit": "s"
}
},
"pluginVersion": "10.4.2",
"reverseYBuckets": false,
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"editorMode": "code",
"exemplar": true,
"expr": "sum(increase(tgi_batch_concat_duration_bucket{method=\"decode\", container=\"$service\"}[5m])) by (le)",
"format": "heatmap",
"interval": "",
          "legendFormat": "{{ le }}",
"range": true,
"refId": "A"
}
],
"title": "Batch Concat latency",
"tooltip": {
"show": true,
"showHistogram": false
},
"type": "heatmap",
"xAxis": {
"show": true
},
"yAxis": {
"decimals": 1,
"format": "s",
"logBase": 1,
"show": true
},
"yBucketBound": "auto"
}
],
"refresh": "",
"schemaVersion": 39,
"tags": [],
"templating": {
"list": [
{
"current": {
"selected": false,
"text": "gpu-txt-gen-cohereforai-c4ai-command-r-plu-ba7f1",
"value": "gpu-txt-gen-cohereforai-c4ai-command-r-plu-ba7f1"
},
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS_EKS API INFERENCE PROD}"
},
"definition": "label_values(tgi_request_count, container)",
"hide": 0,
"includeAll": false,
"multi": false,
"name": "service",
"options": [],
"query": {
"query": "label_values(tgi_request_count, container)",
"refId": "StandardVariableQuery"
},
"refresh": 1,
"regex": "",
"skipUrlSync": false,
"sort": 1,
"type": "query"
}
]
},
"time": {
"from": "now-30m",
"to": "now-30s"
},
"timepicker": {
"nowDelay": "30s"
},
"timezone": "",
"title": "Text Generation Inference",
"uid": "RHSk7EL4kdqsd",
"version": 12,
"weekStart": ""
}
Source: text-generation-inference/assets/tgi_grafana.json
# Fork that adds only the correct stream to this kernel in order
# to make cuda graphs work.
awq_commit := bd1dc2d5254345cc76ab71894651fb821275bdd4
awq:
rm -rf llm-awq
git clone https://github.com/huggingface/llm-awq
build-awq: awq
cd llm-awq/ && git fetch && git checkout $(awq_commit)
cd llm-awq/awq/kernels && python setup.py build
install-awq: build-awq
pip uninstall awq_inference_engine -y || true
cd llm-awq/awq/kernels && python setup.py install
Source: text-generation-inference/backends/gaudi/server/Makefile-awq
# Origin: https://github.com/predibase/lorax
# Path: lorax/server/lorax_server/adapters/lora.py
# License: Apache License Version 2.0, January 2004
from collections import defaultdict
from dataclasses import dataclass
from typing import Dict, List, Optional, Set, Tuple, Type, Union
import torch
from peft import LoraConfig as _LoraConfig
from torch.distributed import ProcessGroup
from text_generation_server.adapters.config import AdapterConfig, ModuleMap
from text_generation_server.adapters.weights import (
AdapterBatchMetadata,
AdapterWeights,
BatchAdapterWeights,
)
from text_generation_server.utils.sgmv import (
BGMV_MAX_RANK,
MAX_RANK_CUSTOM,
get_tmp_tensors,
orient_for_rank,
pad_rank,
use_cutlass_shrink,
)
def get_start_stop_idxs_for_rank(offset, size, rank, world_size):
block_size = size // world_size
start = offset + rank * block_size
stop = offset + (rank + 1) * block_size
return start, stop
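A torch-free sketch of how this offset arithmetic partitions a dimension into equal, contiguous per-rank blocks (the example values are illustrative; note that when `size` is not divisible by `world_size`, the trailing remainder falls outside every rank's slice):

```python
def get_start_stop_idxs_for_rank(offset, size, rank, world_size):
    # Same arithmetic as the function above: equal contiguous blocks per rank.
    block_size = size // world_size
    start = offset + rank * block_size
    stop = offset + (rank + 1) * block_size
    return start, stop

# With size=8 split over world_size=4, each rank owns a 2-wide slice.
slices = [get_start_stop_idxs_for_rank(0, 8, r, 4) for r in range(4)]
print(slices)  # [(0, 2), (2, 4), (4, 6), (6, 8)]
```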
def shard_on_dim(
t: torch.Tensor, dim: int, process_group: torch.distributed.ProcessGroup
):
world_size = process_group.size()
rank = process_group.rank()
size = t.shape[dim]
start, stop = get_start_stop_idxs_for_rank(0, size, rank, world_size)
if dim == 0:
tensor = t[start:stop]
elif dim == 1:
tensor = t[:, start:stop]
else:
raise NotImplementedError("Let's make that generic when needed")
return tensor
def shard_lora_weights(
weights_a: List[torch.Tensor],
weights_b: List[torch.Tensor],
split_dim: int,
process_group: ProcessGroup,
) -> Tuple[List[torch.Tensor], List[torch.Tensor]]:
# [hidden_size, r]
weights_a = [
shard_on_dim(w, dim=split_dim, process_group=process_group) for w in weights_a
]
# [r, hidden_size]
weights_b = [shard_on_dim(w, dim=1, process_group=process_group) for w in weights_b]
return weights_a, weights_b
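`shard_on_dim` above slices either rows (`dim=0`) or columns (`dim=1`) of a tensor depending on how the layer is parallelized. A plain-list sketch of the same slicing, with no torch dependency (`shard_matrix_on_dim` is an illustrative stand-in, not part of the module):

```python
def shard_matrix_on_dim(mat, dim, rank, world_size):
    # Mirror of shard_on_dim: pick this rank's contiguous block along `dim`.
    size = len(mat) if dim == 0 else len(mat[0])
    block = size // world_size
    start, stop = rank * block, (rank + 1) * block
    if dim == 0:
        return mat[start:stop]                    # row shard
    return [row[start:stop] for row in mat]       # column shard

m = [[1, 2, 3, 4],
     [5, 6, 7, 8]]
print(shard_matrix_on_dim(m, 0, 0, 2))  # [[1, 2, 3, 4]]
print(shard_matrix_on_dim(m, 1, 1, 2))  # [[3, 4], [7, 8]]
```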
@dataclass
class LoraConfig(AdapterConfig):
r: int
target_modules: Optional[Union[List[str], str]]
fan_in_fan_out: bool
lora_alpha: int
use_rslora: bool
def map_weights_for_model(
self,
adapter_weights: Dict[int, AdapterWeights],
weight_names: Tuple[str],
) -> Tuple[ModuleMap, Set[str]]:
adapter_weight_names = set()
module_map = {}
for weight_name in weight_names:
lora_a_name = f"base_model.model.{weight_name}.lora_A.weight"
lora_b_name = f"base_model.model.{weight_name}.lora_B.weight"
if lora_a_name not in adapter_weights or lora_b_name not in adapter_weights:
continue
module_map[weight_name] = {
"lora_A": (adapter_weights[lora_a_name], lora_a_name),
"lora_B": (adapter_weights[lora_b_name], lora_b_name),
}
adapter_weight_names.add(lora_a_name)
adapter_weight_names.add(lora_b_name)
return module_map, adapter_weight_names
@classmethod
def load(cls, adapter_id: str, api_token: str) -> "LoraConfig":
hf_config = _LoraConfig.from_pretrained(adapter_id, token=api_token)
return cls(
base_model_name_or_path=hf_config.base_model_name_or_path,
r=hf_config.r,
target_modules=hf_config.target_modules,
fan_in_fan_out=hf_config.fan_in_fan_out,
lora_alpha=hf_config.lora_alpha,
use_rslora=(
hf_config.use_rslora if hasattr(hf_config, "use_rslora") else False
),
)
class LoraWeights(AdapterWeights):
"""LoRA weights for a single adapter merged across all layers."""
def __init__(
self,
weights_a: List[torch.Tensor],
weights_b: List[torch.Tensor],
adapter_config: LoraConfig,
):
self.lora_a_r = weights_a[0].size(1) if len(weights_a) > 0 else 1
        self.lora_b_r = weights_b[0].size(0) if len(weights_b) > 0 else 1
self._use_cutlass_shrink = use_cutlass_shrink(self.lora_a_r)
self._is_transposed = False
# [num_layers, hidden_size, r]
weights_a = [orient_for_rank(w, w.size(1)).contiguous() for w in weights_a]
self._weights_a = torch.stack(weights_a)
# [num_layers, r, hidden_size]
self._weights_b = torch.stack(weights_b)
self.adapter_config = adapter_config
@property
def weights_a(self) -> torch.Tensor:
if self._is_transposed:
self._transpose_weights()
return self._weights_a
@property
def weights_b(self) -> torch.Tensor:
if self._is_transposed:
self._transpose_weights()
return self._weights_b
@property
def weights_a_t(self) -> torch.Tensor:
if not self._is_transposed:
self._transpose_weights()
return self._weights_a
@property
def weights_b_t(self) -> torch.Tensor:
if not self._is_transposed:
self._transpose_weights()
return self._weights_b
def _transpose_weights(self):
if self._use_cutlass_shrink:
            # Only the cutlass shrink kernel stores lora_a in a different
            # orientation; otherwise SGMV and BGMV share the same layout
            # and lora_a needs no transpose.
self._weights_a = self._weights_a.transpose(1, 2).contiguous()
self._weights_b = self._weights_b.transpose(1, 2).contiguous()
self._is_transposed = not self._is_transposed
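The `_is_transposed` flag implements a lazy, in-place layout switch: only one physical copy of the weights is kept, and it is transposed the first time the other orientation is requested. A minimal torch-free sketch of the same pattern (the class and names here are illustrative):

```python
class LazyOrientation:
    """Keeps one physical copy of a matrix and flips it on demand."""

    def __init__(self, rows):
        self._data = rows            # list of row lists
        self._is_transposed = False

    def _flip(self):
        # Physically transpose and toggle the flag, as _transpose_weights does.
        self._data = [list(col) for col in zip(*self._data)]
        self._is_transposed = not self._is_transposed

    @property
    def normal(self):
        if self._is_transposed:
            self._flip()
        return self._data

    @property
    def transposed(self):
        if not self._is_transposed:
            self._flip()
        return self._data

w = LazyOrientation([[1, 2, 3], [4, 5, 6]])
print(w.transposed)  # [[1, 4], [2, 5], [3, 6]]
print(w.normal)      # [[1, 2, 3], [4, 5, 6]]
```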
@classmethod
def get_batch_types(cls) -> List[Type[BatchAdapterWeights]]:
return [BatchLoraWeights]
# prepare pre-loaded lora weights for use in the model.
#
# this method processes and organizes lora weights for a specific layer type across all layers:
# - uses `config` (LoraConfig) to apply lora-specific settings like scaling factor.
# - retrieves weights from `module_map` based on the `layer_type`.
# - processes `nlayers` number of layers.
# - converts weights to the specified `dtype`.
# - shards weights across `world_size` number of processes using the `process_group`.
# - maps weights to specific layers using `target_to_layer`.
# - tracks `unused_weight_names` to identify any unused weights.
#
# the method handles weight transposition, scaling, and padding to ensure compatibility
# with SGMV or BGMV operations.
@classmethod
def prepare_weights(
cls,
config: LoraConfig,
module_map: Dict[str, Dict],
layer_type: str,
unused_weight_names: Set[str],
nlayers: int,
dtype: torch.dtype,
world_size: int,
process_group: ProcessGroup,
target_to_layer: Dict[str, Tuple[str, torch.Tensor]],
) -> Optional[AdapterWeights]:
lora_a_list = [None] * nlayers
lora_b_list = [None] * nlayers
for layer_id in range(nlayers):
key = (layer_id, layer_type)
weight_name, layer = target_to_layer[key]
base_weight = layer.base_layer.linear.weight
base_device = base_weight.device
if weight_name not in module_map:
# There is no LoRA weight for this layer type in the adapter
return None
lora_a, lora_a_name = module_map[weight_name]["lora_A"]
lora_a = lora_a.to(base_device, dtype)
lora_b, lora_b_name = module_map[weight_name]["lora_B"]
lora_b = lora_b.to(base_device, dtype)
scale = get_scaling_factor(
config.lora_alpha,
config.r,
uses_rslora=config.use_rslora,
)
unused_weight_names.discard(lora_a_name)
unused_weight_names.discard(lora_b_name)
# Merge scaling factor into lora_b due to associativity of matrix multiplication:
# (A * B) * C = A * (B * C)
lora_a_list[layer_id] = lora_a.transpose(0, 1)
lora_b_list[layer_id] = lora_b.transpose(0, 1) * scale
# pad lora ranks to be compatible with sgmv
lora_a_list = [pad_rank(w, dim=1, world_size=world_size) for w in lora_a_list]
lora_b_list = [pad_rank(w, dim=0, world_size=world_size) for w in lora_b_list]
if lora_a_list:
# update rank if it was padded
padded_rank = lora_a_list[0].size(1)
config.r = padded_rank
return LoraWeights(
*shard_lora_weights(
weights_a=lora_a_list,
weights_b=lora_b_list,
split_dim=0 if layer_type in {"o_proj", "down_proj", "lm_head"} else 1,
process_group=process_group,
),
config,
)
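The comment inside `prepare_weights` folds the LoRA scaling factor into `lora_b` using associativity of matrix multiplication. A torch-free numeric check of that identity (the pure-Python `matmul` helper is illustrative; `get_scaling_factor` itself is defined outside this excerpt — by the usual LoRA convention it is `lora_alpha / r`, or `lora_alpha / sqrt(r)` under rsLoRA, but that is an assumption about its body, not code shown above):

```python
def matmul(a, b):
    # Naive dense matrix multiply over nested lists.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

x = [[1.0, 2.0]]
A = [[1.0, 0.0], [0.0, 1.0]]   # stand-in for lora_a
B = [[3.0, 0.0], [0.0, 3.0]]   # stand-in for lora_b
s = 0.5                        # stand-in scaling factor

# Pre-scaling B, as prepare_weights does...
B_scaled = [[v * s for v in row] for row in B]
left = matmul(matmul(x, A), B_scaled)
# ...equals scaling the final product: (x @ A @ B) * s
right = [[v * s for v in row] for row in matmul(matmul(x, A), B)]
print(left == right)  # True
```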
@dataclass
class RankSegments:
rank: int
lora_a_ptr: torch.Tensor
lora_b_ptr: torch.Tensor
# prefill (sgmv)
tmp_shrink: torch.Tensor
tmp_expand: torch.Tensor
segment_starts: torch.Tensor
segment_ends: torch.Tensor
# decode (bgmv)
indices: torch.Tensor
@dataclass
class BatchLoraWeights(BatchAdapterWeights):
lora_a: Dict[int, torch.Tensor]
lora_b: Dict[int, torch.Tensor]
adapter_index_configs: Dict[int, LoraConfig]
rank_data: Dict[int, RankSegments]
use_sgmv: bool
def has_adapter(self, adapter_index: int) -> bool:
return adapter_index in self.adapter_index_configs
def can_vectorize(self, pg: ProcessGroup) -> bool:
return all(
rank_data.rank // pg.size() <= MAX_RANK_CUSTOM
for rank_data in self.rank_data.values()
)
@classmethod
def load(
        cls,
adapter_weights: Dict[int, AdapterWeights],
meta: AdapterBatchMetadata,
prefill: bool,
prefill_head_indices: Optional[torch.Tensor],
) -> Optional["BatchLoraWeights"]:
adapter_weights = {k: _convert_lora(v) for k, v in adapter_weights.items()}
adapter_weights = {
k: v for k, v in adapter_weights.items() if isinstance(v, LoraWeights)
}
if not adapter_weights:
return None
first_weights = next(iter(adapter_weights.values()))
device = first_weights.weights_a.device
segment_indices = meta.segment_indices
lora_a = {
idx: adapter_weights[idx].weights_a
for idx in segment_indices
if idx in adapter_weights
}
lora_b = {
idx: adapter_weights[idx].weights_b
for idx in segment_indices
if idx in adapter_weights
}
max_rank = max(
(
adapter_weights[idx].lora_a_r
for idx in segment_indices
if idx in adapter_weights
),
default=0,
)
if prefill or max_rank > BGMV_MAX_RANK:
use_sgmv = True
lora_a_ptr = torch.tensor(
[
(
adapter_weights[idx].weights_a.data_ptr()
if idx in adapter_weights
else 0
)
for idx in segment_indices
],
dtype=torch.int64,
device=device,
)
lora_b_ptr = torch.tensor(
[
(
adapter_weights[idx].weights_b.data_ptr()
if idx in adapter_weights
else 0
)
for idx in segment_indices
],
dtype=torch.int64,
device=device,
)
else:
use_sgmv = False
lora_a_ptr = torch.tensor(
[
(
adapter_weights[idx].weights_a_t.data_ptr()
if idx in adapter_weights
else 0
)
for idx in segment_indices
],
dtype=torch.int64,
device=device,
)
lora_b_ptr = torch.tensor(
[
(
adapter_weights[idx].weights_b_t.data_ptr()
if idx in adapter_weights
else 0
)
for idx in segment_indices
],
dtype=torch.int64,
device=device,
)
adapter_index_configs = {
idx: adapter_weights[idx].adapter_config
for idx in segment_indices
if idx in adapter_weights
}
adapter_to_segment = {v: k for k, v in enumerate(segment_indices)}
rank_indices = defaultdict(list)
for segment_idx, adapter_idx in enumerate(segment_indices):
if adapter_idx not in adapter_weights:
continue
rank_indices[adapter_weights[adapter_idx].lora_a_r].append(segment_idx)
if prefill_head_indices is not None:
j, prefill_head_segment_starts, prefill_head_segment_ends = 1, [0], [0]
for head_index in prefill_head_indices:
# j cannot go out of bounds as that would mean there are tokens without corresponding adapters
if head_index < meta.adapter_segments[j]:
prefill_head_segment_ends[-1] += 1
else:
prefill_head_segment_starts.append(prefill_head_segment_ends[-1])
prefill_head_segment_ends.append(prefill_head_segment_ends[-1] + 1)
j += 1
rank_data = {}
for rank, indices in rank_indices.items():
tmp_shrink = None
tmp_expand = None
segment_starts = None
segment_ends = None
batch_indices = None
if use_sgmv:
lora_a_ptr_indices = lora_a_ptr[indices]
tmp_shrink, tmp_expand = get_tmp_tensors(
lora_a_ptr_indices.size(0), rank, device
)
segment_starts = meta.adapter_segments[indices]
segment_ends = meta.adapter_segments[[i + 1 for i in indices]]
if prefill_head_indices is not None:
for i, segment_index in enumerate(indices):
segment_starts[i] = prefill_head_segment_starts[segment_index]
segment_ends[i] = prefill_head_segment_ends[segment_index]
else:
rank_indices = set(indices)
batch_indices = [
adapter_to_segment[idx] for idx in meta.adapter_indices.tolist()
]
batch_indices = [
idx if idx in rank_indices else -1 for idx in batch_indices
]
batch_indices = torch.tensor(
batch_indices, dtype=torch.int64, device=device
)
rank_data[rank] = RankSegments(
rank=rank,
tmp_shrink=tmp_shrink,
tmp_expand=tmp_expand,
lora_a_ptr=lora_a_ptr[indices],
lora_b_ptr=lora_b_ptr[indices],
segment_starts=segment_starts,
segment_ends=segment_ends,
indices=batch_indices,
)
return BatchLoraWeights(
lora_a=lora_a,
lora_b=lora_b,
adapter_index_configs=adapter_index_configs,
rank_data=rank_data,
use_sgmv=use_sgmv,
)
def get_scaling_factor(
lora_alpha: int,
r: int,
uses_rslora: bool = False,
) -> float:
"""Computes the scaling factor for the lora weights."""
if uses_rslora:
return lora_alpha / (r**0.5)
return lora_alpha / r
def _convert_lora(v: AdapterWeights) -> AdapterWeights:
if hasattr(v, "lora_weights"):
return v.lora_weights
return v
# === End of file: text-generation-inference/backends/gaudi/server/text_generation_server/adapters/lora.py ===
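The file above combines two pieces of bookkeeping: `get_scaling_factor` implements the standard LoRA scaling (`alpha / r`, or `alpha / sqrt(r)` for rsLoRA), and `BatchLoraWeights.load` buckets batch segments by adapter rank so that all segments sharing a rank can be dispatched together. A minimal pure-Python sketch of both ideas (no torch; `group_segments_by_rank` and its arguments are illustrative names, not part of the actual API):

```python
from collections import defaultdict

def get_scaling_factor(lora_alpha: int, r: int, uses_rslora: bool = False) -> float:
    # Standard LoRA scales the low-rank update by alpha / r; rsLoRA uses
    # alpha / sqrt(r) so the effective scale stays stable as the rank grows.
    if uses_rslora:
        return lora_alpha / (r**0.5)
    return lora_alpha / r

def group_segments_by_rank(segment_indices, adapter_ranks):
    # Mirrors the rank_indices bookkeeping in BatchLoraWeights.load: each
    # segment index is bucketed under the rank of its adapter; segments
    # whose adapter is not loaded are skipped.
    rank_indices = defaultdict(list)
    for segment_idx, adapter_idx in enumerate(segment_indices):
        if adapter_idx not in adapter_ranks:
            continue
        rank_indices[adapter_ranks[adapter_idx]].append(segment_idx)
    return dict(rank_indices)
```

For example, with segments `[0, 1, 0, 2]` and loaded adapters `{0: 8, 1: 16}`, segments 0 and 2 land in the rank-8 bucket, segment 1 in the rank-16 bucket, and segment 3 is dropped because adapter 2 is not loaded.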
from typing import List, Optional, Union
import torch
from compressed_tensors.quantization import QuantizationArgs, QuantizationType
from text_generation_server.layers.fp8 import (
Fp8Weight,
_load_scalar_or_matrix_scale,
requantize_with_max_scale,
)
from text_generation_server.utils.weights import Weights, WeightsLoader
class W8ANFpLoader(WeightsLoader):
"""
Loader for W8A8/W8A16 FP compressed-tensors parameters.
"""
def __init__(
self,
*,
input_activations: Optional[QuantizationArgs],
weights: QuantizationArgs,
):
assert weights.type == QuantizationType.FLOAT and weights.num_bits == 8
# We ignore the `strategy` option which sets the scales to be
# per-tensor, per-channel or per-token. What scales are supported
# is dependent on the kernels used (e.g. cutlass can do tokenwise,
# Torch cannot, and FP8-Marlin does not quantize inputs at all).
# So, instead we try to use the best-possible configuration.
self.load_weight_scale = not weights.dynamic
self.load_input_scale = (
input_activations is not None and not input_activations.dynamic
)
self.force_w8a16 = (
input_activations is not None and input_activations.num_bits == 16
)
def __str__(self) -> str:
def scale_to_str(scale):
return "static" if scale else "dynamic"
quantization_type = f"W8A{16 if self.force_w8a16 else 8}"
return f"{self.__class__.__name__} ({quantization_type}, weight: {scale_to_str(self.load_weight_scale)}, input: {scale_to_str(self.load_input_scale)})"
def get_weights(self, weights: "Weights", prefix: str):
w = weights.get_tensor(f"{prefix}.weight")
weight_scale = None
if self.load_weight_scale:
weight_scale = (
weights.get_tensor(f"{prefix}.weight_scale", to_dtype=False)
.reshape(-1)
.expand(w.shape[0])
)
logical_widths = [w.shape[0]]
w, weight_scale = requantize_with_max_scale(
w,
weight_scale.unsqueeze(-1).to(weights.device),
logical_widths,
weights.dtype,
)
input_scale = None
if self.load_input_scale:
input_scale = weights.get_tensor(
f"{prefix}.input_scale", to_dtype=False
).reshape(-1)
return Fp8Weight(
weight=w,
weight_scale=weight_scale,
input_scale=input_scale,
dtype=weights.dtype,
force_w8a16=self.force_w8a16,
)
def get_weights_col_packed(
self,
weights: Weights,
prefix: str,
block_sizes: Union[int, List[int]],
):
w = weights.get_packed_sharded(
f"{prefix}.weight", dim=0, block_sizes=block_sizes
)
weight_scale = None
if self.load_weight_scale:
weight_scale = weights.get_tensor(f"{prefix}.weight_scale", to_dtype=False)
if weight_scale.numel() > 1:
weight_scale = weights.get_packed_sharded(
f"{prefix}.weight_scale",
dim=0,
block_sizes=block_sizes,
to_dtype=False,
)
weight_scale = weight_scale.reshape(-1).expand(w.shape[0])
logical_widths = [w.shape[0]]
w, weight_scale = requantize_with_max_scale(
w,
weight_scale.unsqueeze(-1).to(weights.device),
logical_widths,
weights.dtype,
)
input_scale = None
if self.load_input_scale:
input_scale = weights.get_tensor(f"{prefix}.input_scale", to_dtype=False)
if input_scale.numel() > 1:
input_scale = weights.get_packed_sharded(
f"{prefix}.input_scale",
dim=0,
block_sizes=block_sizes,
to_dtype=False,
)
input_scale = input_scale.reshape(-1).max()
return Fp8Weight(
weight=w,
weight_scale=weight_scale,
input_scale=input_scale,
dtype=weights.dtype,
force_w8a16=self.force_w8a16,
)
def get_multi_weights_col(self, weights: "Weights", prefixes: List[str], dim: int):
# FIXME: Force to_device to false as fp8 weights do not support torch.cat on device yet
w = [
weights.get_sharded(f"{p}.weight", dim=0, to_device=False) for p in prefixes
]
shapes = [x.shape for x in w]
# Concat then send to the device
w = torch.cat(w, dim=dim).to(weights.device)
weight_scale = None
if self.load_weight_scale:
weight_scale = [
_load_scalar_or_matrix_scale(weights, f"{p}.weight_scale", shape)
for p, shape in zip(prefixes, shapes)
]
weight_scale = torch.cat(weight_scale, dim=0).reshape(-1)
logical_widths = [x[0] for x in shapes]
w, weight_scale = requantize_with_max_scale(
w,
weight_scale.unsqueeze(-1).to(weights.device),
logical_widths,
weights.dtype,
)
input_scale = None
if self.load_input_scale:
input_scale = [
_load_scalar_or_matrix_scale(weights, f"{p}.input_scale", shape)
for p, shape in zip(prefixes, shapes)
if weights.has_tensor(f"{p}.input_scale")
]
assert len(input_scale) == 0 or len(input_scale) == len(prefixes)
input_scale = (
torch.cat(input_scale, dim=0).reshape(-1).max()
if len(input_scale) != 0
else None
)
return Fp8Weight(
weight=w,
weight_scale=weight_scale,
input_scale=input_scale,
dtype=weights.dtype,
force_w8a16=self.force_w8a16,
)
def get_multi_weights(self, weights: "Weights", prefixes: List[str], dim: int):
# FIXME: Force to_device to false as fp8 weights do not support torch.cat on device yet
w = [weights.get_tensor(f"{p}.weight", to_device=False) for p in prefixes]
shapes = [x.shape for x in w]
# Concat then send to the device
w = torch.cat(w, dim=dim).to(weights.device)
weight_scale = None
if self.load_weight_scale:
weight_scale = [
weights.get_tensor(f"{p}.weight_scale", to_dtype=False)
.reshape(-1)
.expand(shape[0])
for p, shape in zip(prefixes, shapes)
]
weight_scale = torch.cat(weight_scale, dim=0).reshape(-1)
logical_widths = [x[0] for x in shapes]
w, weight_scale = requantize_with_max_scale(
w,
weight_scale.unsqueeze(-1).to(weights.device),
logical_widths,
weights.dtype,
)
input_scale = None
if self.load_input_scale:
input_scale = [
weights.get_tensor(f"{p}.input_scale", to_dtype=False)
.reshape(-1)
.expand(shape[0])
for p, shape in zip(prefixes, shapes)
if weights.has_tensor(f"{p}.input_scale")
]
assert len(input_scale) == 0 or len(input_scale) == len(prefixes)
input_scale = (
torch.cat(input_scale, dim=0).reshape(-1).max()
if len(input_scale) != 0
else None
)
return Fp8Weight(
weight=w,
weight_scale=weight_scale,
input_scale=input_scale,
dtype=weights.dtype,
force_w8a16=self.force_w8a16,
)
def get_weights_row(self, weights: "Weights", prefix: str):
w = weights.get_sharded(f"{prefix}.weight", dim=1)
weight_scale = None
if self.load_weight_scale:
weight_scale = weights.get_tensor(f"{prefix}.weight_scale", to_dtype=False)
weight_scale = weight_scale.reshape(-1).expand(w.shape[0])
logical_widths = [w.shape[0]]
w, weight_scale = requantize_with_max_scale(
w,
weight_scale.unsqueeze(-1).to(weights.device),
logical_widths,
weights.dtype,
)
input_scale = None
if self.load_input_scale:
input_scale = weights.get_tensor(
f"{prefix}.input_scale", to_dtype=False
).reshape(-1)
return Fp8Weight(
weight=w,
weight_scale=weight_scale,
input_scale=input_scale,
dtype=weights.dtype,
force_w8a16=self.force_w8a16,
)
# === End of file: text-generation-inference/backends/gaudi/server/text_generation_server/layers/compressed_tensors/w8an_fp.py ===
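Every loader method above funnels weights through `requantize_with_max_scale`, which collapses per-channel scales into the single maximum scale so the whole tensor shares one dequantization factor. A pure-Python sketch of that idea, using a generic clamp range instead of real FP8 arithmetic (the function body here is illustrative, not the actual kernel):

```python
def requantize_with_max_scale(qweights, scales, qmax=448.0):
    # Row i dequantizes as qweights[i][j] * scales[i]. To share one scale,
    # re-express every row in units of the maximum scale and clamp to the
    # representable range [-qmax, qmax] (448 is the FP8-E4M3 max normal).
    max_scale = max(scales)
    out = []
    for row, s in zip(qweights, scales):
        out.append([max(-qmax, min(qmax, q * s / max_scale)) for q in row])
    return out, max_scale
```

The dequantized values are preserved: a row stored as `4.0` with scale `1.0` becomes `2.0` under the shared scale `2.0`, and `2.0 * 2.0` still equals the original `4.0 * 1.0`.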
from typing import Optional
import torch
import torch.nn as nn
from text_generation_server.utils.weights import UnquantizedWeight, Weights
from vllm_hpu_extension.ops import VllmMixtureOfExpertsOp
import habana_frameworks.torch as htorch
import torch.nn.functional as F
import os
class UnquantizedSparseMoELayer(nn.Module):
def __init__(
self,
*,
n_expert_group: Optional[int],
n_experts: int,
prefix: str,
renormalize: bool,
topk: int,
topk_group: Optional[int],
weights: Weights,
scoring_func: Optional[str] = "softmax",
e_score_correction_bias: Optional[float] = None,
gate_proj_name: str = "gate_proj",
up_proj_name: str = "up_proj",
down_proj_name: str = "down_proj",
):
super().__init__()
assert (n_expert_group is None) == (
topk_group is None
), "n_expert_group and topk_group must both be None or have some value"
self.n_expert_group = n_expert_group
self.topk = topk
self.topk_group = topk_group
self.renormalize = renormalize
self.weight_block_size = weights.weights_loader.weight_block_size
self.scoring_func = scoring_func
self.e_score_correction_bias = e_score_correction_bias
self.rank = weights.process_group.rank()
self.world_size = weights.process_group.size()
self.use_ep = os.getenv("USE_EXPERT_PARALLEL", "true").lower() == "true"
if (n_experts + self.world_size - 1) // self.world_size < 4:
self.use_ep = False
if self.use_ep:
n_experts_per_rank = (n_experts + self.world_size - 1) // self.world_size
self.ep_offset = self.rank * n_experts_per_rank
n_experts = min(n_experts_per_rank, n_experts - self.ep_offset)
experts_min = self.ep_offset
experts_max = self.ep_offset + n_experts - 1
else:
self.ep_offset = 0
experts_min = 0
experts_max = n_experts - 1
self.gate_up_proj = _load_expert_multi_weights_col(
prefix=prefix,
n_experts=n_experts,
gate_proj_name=gate_proj_name,
up_proj_name=up_proj_name,
weights=weights,
use_ep=self.use_ep,
ep_offset=self.ep_offset,
)
self.down_proj = _load_expert_weights_row(
prefix=prefix,
n_experts=n_experts,
name=down_proj_name,
weights=weights,
use_ep=self.use_ep,
ep_offset=self.ep_offset,
)
self.MoeOp = VllmMixtureOfExpertsOp(n_experts, experts_min, experts_max)
for i in range(n_experts):
self.MoeOp.w13_list[i].set_weight(self.gate_up_proj[i])
self.MoeOp.w2_list[i].set_weight(self.down_proj[i])
def forward(self, x: torch.Tensor, *, gating_output: torch.Tensor) -> torch.Tensor:
htorch.core.mark_step()
routing_weights = F.softmax(gating_output, dim=1, dtype=torch.float32)
routing_weights, selected_experts = torch.topk(
routing_weights, self.topk, dim=-1
)
routing_weights /= routing_weights.sum(dim=-1, keepdim=True)
routing_weights = routing_weights.to(x.dtype)
final_hidden_states = self.MoeOp(
hidden_states=x,
expert_routing_table=selected_experts,
router_weights=routing_weights,
permuted_weights=True,
activation="silu",
)
return final_hidden_states.view(-1, x.shape[1])
def _load_expert_multi_weights_col(
*,
prefix: str,
n_experts: int,
gate_proj_name: str,
up_proj_name: str,
weights: Weights,
use_ep: bool = False,
ep_offset: int = 0,
) -> torch.Tensor:
all_weight = None
for i in range(n_experts):
if not use_ep:
weight = weights.get_multi_weights_col(
[f"{prefix}.{i}.{gate_proj_name}", f"{prefix}.{i}.{up_proj_name}"], 0
)
else:
weight = weights.get_multi_weights(
[
f"{prefix}.{i+ep_offset}.{gate_proj_name}",
f"{prefix}.{i+ep_offset}.{up_proj_name}",
],
0,
)
assert isinstance(weight, UnquantizedWeight)
if all_weight is None:
all_weight = torch.empty(
(n_experts,) + weight.weight.shape,
dtype=weight.weight.dtype,
device=weight.weight.device,
)
all_weight[i] = weight.weight
assert all_weight is not None
return all_weight
def _load_expert_weights_row(
*,
prefix: str,
n_experts: int,
name: str,
weights: Weights,
use_ep: bool = False,
ep_offset: int = 0,
) -> torch.Tensor:
all_weight = None
for i in range(n_experts):
if not use_ep:
weight = weights.get_weights_row(
f"{prefix}.{i}.{name}",
)
else:
weight = weights.get_weights(
f"{prefix}.{i+ep_offset}.{name}",
)
assert isinstance(weight, UnquantizedWeight)
if all_weight is None:
all_weight = torch.empty(
(n_experts,) + weight.weight.shape,
dtype=weight.weight.dtype,
device=weight.weight.device,
)
all_weight[i] = weight.weight
assert all_weight is not None
return all_weight
# === End of file: text-generation-inference/backends/gaudi/server/text_generation_server/layers/moe/unquantized.py ===
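`UnquantizedSparseMoELayer.forward` routes tokens by softmaxing the gating logits, taking the top-k experts, and renormalizing the kept weights so each token's expert mixture sums to one. The same routing math for a single token in plain Python (`route_topk` is an illustrative name; the real layer does this batched in torch):

```python
import math

def route_topk(gating_logits, topk):
    # Numerically stable softmax over all experts, then keep the top-k
    # probabilities and renormalize them to sum to 1 (the renormalize step).
    m = max(gating_logits)
    exps = [math.exp(x - m) for x in gating_logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:topk]
    kept = sum(probs[i] for i in top)
    return [(i, probs[i] / kept) for i in top]
```

For logits `[0, 0, 10, 10]` with `topk=2`, experts 2 and 3 are selected and each receives a renormalized weight of about 0.5.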
# coding=utf-8
# Copyright 2022 EleutherAI and the HuggingFace Inc. team. All rights reserved.
#
# This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX
# and OPT implementations in this library. It has been modified from its
# original forms to accommodate minor architectural differences compared
# to GPT-NeoX and OPT used by the Meta AI team that trained the model.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import torch
import torch.distributed
from torch import nn
from transformers.activations import ACT2FN
from typing import Optional, List, Tuple
from text_generation_server.layers.attention.kv_cache import get_kv_scales
from text_generation_server.layers.attention import (
paged_attention,
attention,
set_block_mapping,
Seqlen,
HPUPagedAttentionMetadata,
)
from text_generation_server.layers import (
TensorParallelRowLinear,
TensorParallelColumnLinear,
TensorParallelEmbedding,
SpeculativeHead,
get_linear,
)
from text_generation_server.layers.rotary import (
PositionRotaryEmbedding,
)
from text_generation_server.layers.layernorm import (
FastLayerNorm,
)
from habana_frameworks.torch.hpex.kernels import (
RotaryPosEmbeddingMode,
apply_rotary_pos_emb,
)
import habana_frameworks.torch as htorch
def load_attention(config, prefix: str, weights):
return TensorParallelColumnLinear.load_multi(
config,
prefixes=[f"{prefix}.q_proj", f"{prefix}.k_proj", f"{prefix}.v_proj"],
dim=0,
weights=weights,
bias=False,
)
def load_row(config, prefix: str, weights, bias: bool):
weight = weights.get_weights_row(prefix)
if bias and weights.process_group.rank() == 0:
# Load the bias only on rank 0 so it is added exactly once after the row-parallel all-reduce
bias = weights.get_tensor(f"{prefix}.bias")
else:
bias = None
linear = get_linear(weight, bias)
return TensorParallelRowLinear(linear, process_group=weights.process_group)
class GPTJRotary(PositionRotaryEmbedding):
def forward(
self,
query: torch.Tensor,
key: torch.Tensor,
cos: torch.Tensor,
sin: torch.Tensor,
):
num_tokens = query.shape[0]
head_size = query.shape[-1]
rope_mode = RotaryPosEmbeddingMode.PAIRWISE
sin = torch.repeat_interleave(sin, 2, dim=-1)
cos = torch.repeat_interleave(cos, 2, dim=-1)
rotary_dim = cos.shape[-1]
query_shape = query.shape
query = query.view(num_tokens, -1, head_size)
query_rot = query[..., :rotary_dim]
query_pass = query[..., rotary_dim:]
query_rot = apply_rotary_pos_emb(query_rot, cos, sin, None, 0, rope_mode)
query.copy_(torch.cat((query_rot, query_pass), dim=-1).reshape(query_shape))
key_shape = key.shape
key = key.view(num_tokens, -1, head_size)
key_rot = key[..., :rotary_dim]
key_pass = key[..., rotary_dim:]
key_rot = apply_rotary_pos_emb(key_rot, cos, sin, None, 0, rope_mode)
key.copy_(torch.cat((key_rot, key_pass), dim=-1).reshape(key_shape))
class FlashGPTJAttention(torch.nn.Module):
def __init__(
self,
prefix: str,
config,
weights,
rotary_emb,
):
super().__init__()
self.num_heads = config.num_attention_heads
self.hidden_size = config.hidden_size
self.head_size = self.hidden_size // self.num_heads
self.softmax_scale = self.head_size**-0.5
self.rotary_dim = config.rotary_dim
if self.num_heads % weights.process_group.size() != 0:
raise ValueError(
f"`num_heads` must be divisible by `num_shards` (got `num_heads`: {self.num_heads} "
f"and `num_shards`: {weights.process_group.size()}"
)
self.num_heads = self.num_heads // weights.process_group.size()
self.query_key_value = load_attention(
config,
prefix=prefix,
weights=weights,
)
self.kv_scales = get_kv_scales(weights, f"{prefix}")
self.o_proj = load_row(
config,
prefix=f"{prefix}.out_proj",
weights=weights,
bias=False,
)
self.kv_head_mapping = torch.arange(
0, self.num_heads, dtype=torch.int32, device=weights.device
)
self.rotary_emb = rotary_emb
def forward(
self,
hidden_states,
cos,
sin,
cu_seqlen_prefill,
kv_cache,
slots,
seqlen,
hpu_attention_meta,
):
query, key, value = self.query_key_value(hidden_states).split(
self.head_size * self.num_heads, dim=1
)
query = query.view(-1, self.num_heads, self.head_size)
key = key.view(-1, self.num_heads, self.head_size)
value = value.view(-1, self.num_heads, self.head_size)
# Compute rotary embeddings on rotary_ndims
if self.rotary_dim is not None:
self.rotary_emb(
query[..., : self.rotary_dim], key[..., : self.rotary_dim], cos, sin
)
else:
self.rotary_emb(query, key, cos, sin)
kv_cache.store(
key=key,
value=value,
slots=slots,
kv_scales=self.kv_scales,
)
# Prefill
if cu_seqlen_prefill is not None:
# sdpa
attn_output = attention(
query=query,
key=key,
value=value,
kv_cache=kv_cache,
kv_scales=self.kv_scales,
seqlen=seqlen,
softmax_scale=self.softmax_scale,
)
# Decode
else:
attn_output = paged_attention(
query,
kv_cache,
self.kv_head_mapping,
self.softmax_scale,
seqlen,
kv_scales=self.kv_scales,
hpu_attention_meta=hpu_attention_meta,
)
return self.o_proj(attn_output.view(-1, self.num_heads * self.head_size))
class GPTJMLP(nn.Module):
def __init__(self, prefix: str, config, weights):
super().__init__()
act = config.activation_function
self.act = (
ACT2FN[act]
if "gelu" not in act
else lambda x: torch.nn.functional.gelu(
x,
approximate=(
"tanh" if act in ["gelu_fast", "gelu_pytorch_tanh"] else "none"
),
)
)
self.fc_in = TensorParallelColumnLinear.load(
config, prefix=f"{prefix}.fc_in", weights=weights, bias=True
)
self.fc_out = load_row(
config,
prefix=f"{prefix}.fc_out",
weights=weights,
bias=True,
)
def forward(self, hidden_states):
hidden_states = self.fc_in(hidden_states)
hidden_states = self.act(hidden_states)
return self.fc_out(hidden_states)
class FlashGPTJLayer(nn.Module):
def __init__(self, prefix: str, config, weights, rotary_emb):
super().__init__()
self.self_attn = FlashGPTJAttention(
prefix=f"{prefix}.attn",
config=config,
weights=weights,
rotary_emb=rotary_emb,
)
self.mlp = GPTJMLP(prefix=f"{prefix}.mlp", config=config, weights=weights)
self.input_layernorm = FastLayerNorm.load(
prefix=f"{prefix}.ln_1", weights=weights, eps=config.layer_norm_epsilon
)
def forward(
self,
hidden_states,
residual,
cos,
sin,
cu_seqlen_prefill,
kv_cache,
slots,
seqlen,
hpu_attention_meta,
):
hidden_states, residual = self.input_layernorm(hidden_states, residual)
# Self Attention
attn_output = self.self_attn(
hidden_states,
cos,
sin,
cu_seqlen_prefill,
kv_cache,
slots,
seqlen,
hpu_attention_meta,
)
feed_forward_hidden_states = self.mlp(hidden_states)
return attn_output + feed_forward_hidden_states, residual
class FlashGPTJModel(torch.nn.Module):
def __init__(self, prefix: str, config, weights):
super().__init__()
self.config = config
self.wte = TensorParallelEmbedding(prefix=f"{prefix}.wte", weights=weights)
rotary_emb = GPTJRotary.static(
config=config,
dim=config.rotary_dim,
base=10000,
device=weights.device,
)
self.layers = nn.ModuleList(
[
FlashGPTJLayer(
prefix=(
f"h.{layer_id}" if not prefix else f"{prefix}.h.{layer_id}"
),
config=config,
weights=weights,
rotary_emb=rotary_emb,
)
for layer_id in range(config.num_hidden_layers)
]
)
self.ln_f = FastLayerNorm.load(
prefix="ln_f" if not prefix else f"{prefix}.ln_f",
weights=weights,
eps=config.layer_norm_epsilon,
)
self.gradient_checkpointing = False
self.head_size = self.layers[0].self_attn.head_size
self.num_heads = self.layers[0].self_attn.num_heads
def forward(
self,
input_ids: Optional[torch.LongTensor],
position_ids: torch.Tensor,
cu_seqlen_prefill: Optional[torch.Tensor],
kv_cache: List[Tuple[torch.Tensor, torch.Tensor]],
slots: torch.Tensor,
seqlen: Seqlen,
hpu_attention_meta: Optional[HPUPagedAttentionMetadata],
) -> torch.Tensor:
if hpu_attention_meta is not None:
hpu_attention_meta = set_block_mapping(
hpu_attention_meta, input_ids.shape[0]
)
hidden_states = self.wte(input_ids)
# Get rotary cos and sin for this forward
# Avoid indexing it in each layer
cos, sin = self.layers[0].self_attn.rotary_emb.get_cos_sin(position_ids)
residual = None
lazy_mode = htorch.utils.internal.is_lazy()
if lazy_mode:
htorch.core.mark_step()
for i, layer in enumerate(self.layers):
hidden_states, residual = layer(
hidden_states,
residual,
cos,
sin,
cu_seqlen_prefill,
kv_cache[i],
slots,
seqlen,
hpu_attention_meta,
)
if lazy_mode:
htorch.core.mark_step()
hidden_states, _ = self.ln_f(hidden_states, residual)
return hidden_states
class FlashGPTJForCausalLM(torch.nn.Module):
def __init__(self, prefix: str, config, weights):
super().__init__()
if not prefix:
prefix = "transformer"
else:
prefix = f"{prefix}.transformer"
self.model = FlashGPTJModel(prefix, config, weights)
self.lm_head = SpeculativeHead.load(
config,
prefix="lm_head",
weights=weights,
)
def forward(
self,
input_ids: torch.Tensor,
position_ids: torch.Tensor,
cu_seqlen_prefill: Optional[torch.Tensor],
kv_cache: List[Tuple[torch.Tensor, torch.Tensor]],
slots: torch.Tensor,
seqlen: Seqlen,
hpu_attention_meta: Optional[HPUPagedAttentionMetadata],
lm_head_indices: Optional[torch.Tensor] = None,
adapter_data: Optional[torch.Tensor] = None,
) -> Tuple[torch.Tensor, Optional[torch.Tensor]]:
hidden_states = self.model(
input_ids,
position_ids,
cu_seqlen_prefill,
kv_cache,
slots,
seqlen,
hpu_attention_meta=hpu_attention_meta,
)
if lm_head_indices is not None:
hidden_states = hidden_states[lm_head_indices]
logits, speculative_logits = self.lm_head(hidden_states)
return logits, speculative_logits
# === End of file: text-generation-inference/backends/gaudi/server/text_generation_server/models/custom_modeling/flash_gptj_modeling.py ===
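`GPTJRotary` applies rotary embeddings in PAIRWISE mode: consecutive element pairs `(x[2i], x[2i+1])` of the first `rotary_dim` components are rotated by the position-dependent angle, while the remaining "pass" components are left untouched, matching the `query_rot` / `query_pass` split above. A small pure-Python sketch of one such pairwise rotation for a single position (illustrative, not the HPU kernel):

```python
import math

def rotate_pairwise(x, theta, rotary_dim):
    # Rotate consecutive pairs of the first `rotary_dim` elements by angle
    # theta; elements beyond rotary_dim pass through unchanged.
    out = list(x)
    c, s = math.cos(theta), math.sin(theta)
    for i in range(0, rotary_dim, 2):
        a, b = x[i], x[i + 1]
        out[i] = a * c - b * s
        out[i + 1] = a * s + b * c
    return out
```

Rotating by angle 0 is the identity, and a 90-degree rotation of the pair `(1, 0)` yields `(0, 1)` while any pass-through elements are preserved.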
# coding=utf-8
# Copyright 2024 Starcoder2 AI and the HuggingFace Inc. team. All rights reserved.
#
# This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX
# and OPT implementations in this library. It has been modified from its
# original forms to accommodate minor architectural differences compared
# to GPT-NeoX and OPT used by the Meta AI team that trained the model.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import torch
import torch.distributed
from torch import nn
from transformers.activations import ACT2FN
from transformers.configuration_utils import PretrainedConfig
from typing import Optional, List, Tuple
from text_generation_server.layers.attention import (
paged_attention,
attention,
set_block_mapping,
Seqlen,
HPUPagedAttentionMetadata,
)
from text_generation_server.layers import (
TensorParallelMultiAdapterLinear,
TensorParallelAdapterRowLinear,
TensorParallelRowLinear,
TensorParallelColumnLinear,
TensorParallelEmbedding,
SpeculativeHead,
get_linear,
)
from text_generation_server.layers.attention.kv_cache import get_kv_scales
from text_generation_server.layers.layernorm import (
FastLayerNorm,
FastRMSNorm,
)
from text_generation_server.layers.rotary import (
PositionRotaryEmbedding,
)
from text_generation_server.utils.weights import UnquantizedWeight
import habana_frameworks.torch as htorch
class Starcoder2Config(PretrainedConfig):
model_type = "starcoder2"
def __init__(
self,
vocab_size=49152,
hidden_size=3072,
intermediate_size=12288,
num_hidden_layers=30,
num_attention_heads=24,
num_key_value_heads=2,
mlp_type="default",
hidden_act="gelu_pytorch_tanh",
max_position_embeddings=4096,
initializer_range=0.018042,
norm_type="layer_norm",
norm_epsilon=1e-5,
use_cache=True,
bos_token_id=50256,
eos_token_id=50256,
rope_theta=10000.0,
sliding_window=None,
attention_dropout=0.0,
residual_dropout=0.0,
embedding_dropout=0.0,
use_bias: bool = True,
**kwargs,
):
self.vocab_size = vocab_size
self.max_position_embeddings = max_position_embeddings
self.hidden_size = hidden_size
self.intermediate_size = intermediate_size
self.num_hidden_layers = num_hidden_layers
self.num_attention_heads = num_attention_heads
self.sliding_window = sliding_window
self.use_bias = use_bias
# for backward compatibility
if num_key_value_heads is None:
num_key_value_heads = num_attention_heads
self.num_key_value_heads = num_key_value_heads
self.mlp_type = mlp_type
self.hidden_act = hidden_act
self.initializer_range = initializer_range
self.norm_type = norm_type
self.norm_epsilon = norm_epsilon
self.use_cache = use_cache
self.rope_theta = rope_theta
self.attention_dropout = attention_dropout
self.residual_dropout = residual_dropout
self.embedding_dropout = embedding_dropout
super().__init__(
bos_token_id=bos_token_id,
eos_token_id=eos_token_id,
**kwargs,
)
def load_attention(config, prefix, weights, layer_id):
prefixes = [f"{prefix}.q_proj", f"{prefix}.k_proj", f"{prefix}.v_proj"]
head_size = config.hidden_size // config.num_attention_heads
sizes = [
head_size * config.num_attention_heads,
head_size * config.num_key_value_heads,
head_size * config.num_key_value_heads,
]
if config.num_attention_heads != config.num_key_value_heads:
base_layer = _load_gqa(config, prefix, weights)
else:
base_layer = TensorParallelColumnLinear.load_multi(
config,
prefixes=prefixes,
dim=0,
weights=weights,
bias=config.use_bias,
)
return TensorParallelMultiAdapterLinear.load(
base_layer=base_layer,
layer_id=layer_id,
layer_names=prefixes,
sizes=sizes,
process_group=weights.process_group,
)
def _load_gqa(config, prefix: str, weights):
assert config.hidden_size % config.num_attention_heads == 0
assert config.num_attention_heads % weights.process_group.size() == 0
weight = weights.get_multi_weights_col(
prefixes=[f"{prefix}.q_proj", f"{prefix}.k_proj", f"{prefix}.v_proj"],
dim=0,
)
if isinstance(weight, UnquantizedWeight):
weight.weight = weight.weight.to(dtype=weights.dtype).to(device=weights.device)
head_size = config.hidden_size // config.num_attention_heads
num_heads = config.num_attention_heads // weights.process_group.size()
num_key_value_heads = config.num_key_value_heads // weights.process_group.size()
assert list(weight.weight.shape) == [
(num_heads + 2 * num_key_value_heads) * head_size,
config.hidden_size,
], f"{list(weight.weight.shape)} != {[(num_heads + 2 * config.num_key_value_heads) * head_size, config.hidden_size]}"
if config.use_bias:
w = [
weights.get_sharded(f"{p}.bias", dim=0)
for p in [f"{prefix}.q_proj", f"{prefix}.k_proj", f"{prefix}.v_proj"]
]
bias = torch.cat(w, dim=0).to(dtype=weights.dtype).to(device=weights.device)
else:
bias = None
return TensorParallelColumnLinear(get_linear(weight, bias=bias))
class Starcoder2Attention(torch.nn.Module):
def __init__(
self,
index: int,
prefix: str,
config,
weights,
rotary_emb,
):
super().__init__()
self.max_past = (
config.sliding_window if config.sliding_window is not None else -1
)
self.num_heads = config.num_attention_heads
self.hidden_size = config.hidden_size
self.head_size = self.hidden_size // self.num_heads
self.rotary_emb = rotary_emb
self.softmax_scale = self.head_size**-0.5
if self.num_heads % weights.process_group.size() != 0:
raise ValueError(
f"`num_heads` must be divisible by `num_shards` (got `num_heads`: {self.num_heads} "
f"and `num_shards`: {weights.process_group.size()}"
)
self.num_heads = self.num_heads // weights.process_group.size()
self.num_key_value_heads = (
config.num_key_value_heads // weights.process_group.size()
)
self.query_key_value = load_attention(config, prefix, weights, index)
self.kv_scales = get_kv_scales(weights, f"{prefix}")
o_proj = TensorParallelRowLinear.load(
config,
prefix=f"{prefix}.o_proj",
weights=weights,
bias=getattr(config, "use_bias", False),
)
self.o_proj = TensorParallelAdapterRowLinear.load(
o_proj,
index,
"o_proj",
process_group=weights.process_group,
)
self.num_groups = self.num_heads // self.num_key_value_heads
self.kv_head_mapping = torch.arange(
0, self.num_key_value_heads, dtype=torch.int32, device=weights.device
).repeat_interleave(self.num_groups)
def forward(
self,
hidden_states,
cos,
sin,
cu_seqlen_prefill,
kv_cache,
slots,
seqlen,
adapter_data,
hpu_attention_meta,
):
qkv = self.query_key_value(hidden_states, adapter_data)
query, kv = qkv.split(
[
self.head_size * self.num_heads,
2 * self.head_size * self.num_key_value_heads,
],
dim=1,
)
query = query.view(-1, self.num_heads, self.head_size)
kv = kv.view(-1, 2, self.num_key_value_heads, self.head_size)
self.rotary_emb(query, torch.select(kv, dim=1, index=0), cos, sin)
kv_cache.store(
key=kv[:, 0],
value=kv[:, 1],
slots=slots,
kv_scales=self.kv_scales,
)
# Prefill
if cu_seqlen_prefill is not None:
# sdpa
attn_output = attention(
query=query,
key=kv[:, 0],
value=kv[:, 1],
kv_cache=kv_cache,
kv_scales=self.kv_scales,
seqlen=seqlen,
softmax_scale=self.softmax_scale,
window_size_left=self.max_past,
)
# Decode
else:
attn_output = paged_attention(
query,
kv_cache,
self.kv_head_mapping,
self.softmax_scale,
seqlen,
kv_scales=self.kv_scales,
hpu_attention_meta=hpu_attention_meta,
window_size_left=self.max_past,
)
return self.o_proj(
attn_output.view(-1, self.num_heads * self.head_size), adapter_data
)
class Starcoder2MLP(nn.Module):
def __init__(self, prefix, config, weights, index):
super().__init__()
act = config.hidden_act
self.act = (
ACT2FN[act]
if "gelu" not in act
else lambda x: torch.nn.functional.gelu(
x,
approximate=(
"tanh" if act in ["gelu_fast", "gelu_pytorch_tanh"] else "none"
),
)
)
# Single up projection (c_fc); the default MLP has no gate, unlike the gated variant
c_fc = TensorParallelColumnLinear.load(
config,
prefix=f"{prefix}.c_fc",
weights=weights,
bias=config.use_bias,
)
c_proj = TensorParallelRowLinear.load(
config,
prefix=f"{prefix}.c_proj",
weights=weights,
bias=config.use_bias,
)
self.c_fc = TensorParallelMultiAdapterLinear.load(
c_fc,
layer_id=index,
layer_names=[f"{prefix}.c_fc"],
sizes=[config.intermediate_size, config.intermediate_size],
process_group=weights.process_group,
)
self.c_proj = TensorParallelAdapterRowLinear.load(
c_proj,
index,
"c_proj",
process_group=weights.process_group,
)
def forward(self, hidden_states, adapter_data):
hidden_states = self.c_fc(hidden_states, adapter_data)
hidden_states = self.act(hidden_states)
return self.c_proj(hidden_states, adapter_data)
class Starcoder2GatedMLP(nn.Module):
def __init__(self, index, prefix, config, weights):
super().__init__()
act = config.hidden_act
self.act = (
ACT2FN[act]
if "gelu" not in act
else lambda x: torch.nn.functional.gelu(
x,
approximate=(
"tanh" if act in ["gelu_fast", "gelu_pytorch_tanh"] else "none"
),
)
)
# Fuse gate and up proj
prefixes = [f"{prefix}.gate_proj", f"{prefix}.up_proj"]
sizes = [
config.intermediate_size,
config.intermediate_size,
]
gate_up_proj = TensorParallelColumnLinear.load_multi(
config,
prefixes=prefixes,
weights=weights,
dim=0,
bias=config.use_bias,
)
self.gate_up_proj = TensorParallelMultiAdapterLinear.load(
gate_up_proj,
index,
layer_names=prefixes,
sizes=sizes,
process_group=weights.process_group,
)
down_proj = TensorParallelRowLinear.load(
config,
prefix=f"{prefix}.down_proj",
weights=weights,
bias=config.use_bias,
)
self.down_proj = TensorParallelAdapterRowLinear.load(
down_proj,
index,
"down_proj",
process_group=weights.process_group,
)
self.intermediate_size = (
config.intermediate_size // weights.process_group.size()
)
def forward(self, hidden_states, adapter_data):
gate_up_states = self.gate_up_proj(hidden_states, adapter_data)
gate_up_states = gate_up_states.view(-1, 2, self.intermediate_size)
return self.down_proj(
self.act(gate_up_states[:, 0]) * gate_up_states[:, 1], adapter_data
)
STARCODER2_NORMALIZATION_CLASSES = {
"layer_norm": FastLayerNorm,
"rms_norm": FastRMSNorm,
}
STARCODER2_MLP_CLASSES = {
"default": Starcoder2MLP,
"gated": Starcoder2GatedMLP,
}
class Starcoder2Layer(nn.Module):
def __init__(self, layer_id, config, weights, rotary_emb):
super().__init__()
prefix = f"model.layers.{layer_id}"
self.self_attn = Starcoder2Attention(
prefix=f"{prefix}.self_attn",
config=config,
weights=weights,
index=layer_id,
rotary_emb=rotary_emb,
)
self.mlp = STARCODER2_MLP_CLASSES[config.mlp_type](
prefix=f"{prefix}.mlp", config=config, weights=weights, index=layer_id
)
self.input_layernorm = STARCODER2_NORMALIZATION_CLASSES[config.norm_type].load(
prefix=f"{prefix}.input_layernorm", weights=weights, eps=config.norm_epsilon
)
self.post_attention_layernorm = STARCODER2_NORMALIZATION_CLASSES[
config.norm_type
].load(
prefix=f"{prefix}.post_attention_layernorm",
weights=weights,
eps=config.norm_epsilon,
)
def forward(
self,
hidden_states,
residual,
cos,
sin,
cu_seqlen_prefill,
kv_cache,
slots,
seqlen,
adapter_data,
hpu_attention_meta,
):
normed_hidden_states, res = self.input_layernorm(hidden_states, residual)
# Self Attention
attn_output = self.self_attn(
normed_hidden_states,
cos,
sin,
cu_seqlen_prefill,
kv_cache,
slots,
seqlen,
adapter_data,
hpu_attention_meta,
)
# faster post attention rms norm
normed_attn_res_output, attn_res = self.post_attention_layernorm(
attn_output, res
)
mlp_output = self.mlp(normed_attn_res_output, adapter_data)
return mlp_output, attn_res
class Starcoder2Model(torch.nn.Module):
def __init__(self, prefix, config, weights):
super().__init__()
process_group = weights.process_group
self.tp_rank = process_group.rank()
self.tp_world_size = process_group.size()
self.embed_tokens = TensorParallelEmbedding(
prefix=f"{prefix}.embed_tokens", weights=weights
)
rotary_emb = PositionRotaryEmbedding.static(
config=config,
dim=config.hidden_size // config.num_attention_heads,
base=config.rope_theta,
device=weights.device,
)
self.layers = nn.ModuleList(
[
Starcoder2Layer(
layer_id,
config,
weights,
rotary_emb,
)
for layer_id in range(config.num_hidden_layers)
]
)
self.norm = STARCODER2_NORMALIZATION_CLASSES[config.norm_type].load(
prefix=f"{prefix}.norm", weights=weights, eps=config.norm_epsilon
)
self.gradient_checkpointing = False
self.head_size = self.layers[0].self_attn.head_size
self.num_heads = self.layers[0].self_attn.num_heads
self.num_key_value_heads = self.layers[0].self_attn.num_key_value_heads
def forward(
self,
input_ids: torch.Tensor,
position_ids: torch.Tensor,
cu_seqlen_prefill: Optional[torch.Tensor],
kv_cache: List[Tuple[torch.Tensor, torch.Tensor]],
slots: torch.Tensor,
seqlen: Seqlen,
adapter_data,
hpu_attention_meta: Optional[HPUPagedAttentionMetadata],
) -> torch.Tensor:
if hpu_attention_meta is not None:
hpu_attention_meta = set_block_mapping(
hpu_attention_meta, input_ids.shape[0]
)
hidden_states = self.embed_tokens(input_ids)
# Get rotary cos and sin once for this forward pass
# to avoid indexing into the embedding in each layer
cos, sin = self.layers[0].self_attn.rotary_emb.get_cos_sin(position_ids)
residual = None
lazy_mode = htorch.utils.internal.is_lazy()
if lazy_mode:
htorch.core.mark_step()
for i, layer in enumerate(self.layers):
hidden_states, residual = layer(
hidden_states,
residual,
cos,
sin,
cu_seqlen_prefill,
kv_cache[i],
slots,
seqlen,
adapter_data,
hpu_attention_meta,
)
if lazy_mode:
htorch.core.mark_step()
hidden_states, _ = self.norm(hidden_states, residual)
return hidden_states
class FlashStarcoder2ForCausalLM(torch.nn.Module):
def __init__(self, prefix, config, weights):
super().__init__()
if not prefix:
prefix = "model"
else:
prefix = f"{prefix}.model"
self.model = Starcoder2Model(prefix, config, weights)
try:
self.lm_head = SpeculativeHead.load(
config,
prefix="lm_head",
weights=weights,
)
except RuntimeError:
self.lm_head = SpeculativeHead.load(
config,
prefix=f"{prefix}.embed_tokens",
weights=weights,
)
self.max_past = config.sliding_window
self.max_past_tensor = (
torch.tensor(config.sliding_window, device=weights.device)
if self.max_past is not None
else None
)
def forward(
self,
input_ids: torch.Tensor,
position_ids: torch.Tensor,
cu_seqlen_prefill: Optional[torch.Tensor],
kv_cache: List[Tuple[torch.Tensor, torch.Tensor]],
slots: torch.Tensor,
seqlen: Seqlen,
hpu_attention_meta: Optional[HPUPagedAttentionMetadata],
lm_head_indices: Optional[torch.Tensor] = None,
adapter_data: Optional[torch.Tensor] = None,
) -> torch.Tensor:
hidden_states = self.model(
input_ids,
position_ids,
cu_seqlen_prefill,
kv_cache,
slots,
seqlen,
adapter_data,
hpu_attention_meta,
)
if lm_head_indices is not None:
hidden_states = hidden_states[lm_head_indices]
logits = self.lm_head(hidden_states)
return logits
| text-generation-inference/backends/gaudi/server/text_generation_server/models/custom_modeling/flash_starcoder2_modeling.py/0 | {
"file_path": "text-generation-inference/backends/gaudi/server/text_generation_server/models/custom_modeling/flash_starcoder2_modeling.py",
"repo_id": "text-generation-inference",
"token_count": 9861
} | 297 |
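The Starcoder2 attention module above builds a `kv_head_mapping` with `torch.arange(...).repeat_interleave(num_groups)` so paged attention knows which KV head serves each query head. A pure-Python sketch of that grouped-query mapping (head counts are illustrative, not taken from any real config):

```python
# Grouped-query attention: each key/value head is shared by
# num_heads // num_key_value_heads query heads.
num_heads = 8            # query heads after tensor-parallel sharding (assumed)
num_key_value_heads = 2  # shared key/value heads (assumed)
num_groups = num_heads // num_key_value_heads

# Equivalent of torch.arange(num_key_value_heads).repeat_interleave(num_groups):
# each KV head index is repeated once per query head in its group.
kv_head_mapping = [kv for kv in range(num_key_value_heads) for _ in range(num_groups)]
print(kv_head_mapping)  # [0, 0, 0, 0, 1, 1, 1, 1]
```

Query head `i` then reads its keys/values from cache slot `kv_head_mapping[i]`, which is what lets the module keep only `num_key_value_heads` KV caches instead of one per query head.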
# Copyright (C) 2024 Habana Labs, Ltd. an Intel Company.
import asyncio
import os
import torch
import time
import signal
from grpc import aio
from loguru import logger
from grpc_reflection.v1alpha import reflection
from pathlib import Path
from typing import List, Optional
from text_generation_server.cache import Cache
from text_generation_server.interceptor import ExceptionInterceptor
from text_generation_server.models import Model, get_model_with_lora_adapters
from text_generation_server.pb import generate_pb2_grpc, generate_pb2
from text_generation_server.tracing import UDSOpenTelemetryAioServerInterceptor
from text_generation_server.models.globals import set_model_id, ATTENTION
from text_generation_server.models.globals import set_adapter_to_index
from text_generation_server.utils.adapter import AdapterInfo
from text_generation_server.utils.tokens import make_tokenizer_optional
from text_generation_server.utils.prefill_chunking import set_max_prefill_tokens
from text_generation_server.models import VLM_BATCH_TYPES
from text_generation_server.utils.version import (
is_driver_compatible,
MIN_TGI_GAUDI_SYNAPSE_VERSION,
)
class SignalHandler:
KEEP_PROCESSING = True
def __init__(self):
signal.signal(signal.SIGINT, self.exit_gracefully)
signal.signal(signal.SIGTERM, self.exit_gracefully)
def exit_gracefully(self, signum, frame):
print(f"Exiting gracefully: Signal {signum}")
self.KEEP_PROCESSING = False
class TextGenerationService(generate_pb2_grpc.TextGenerationServiceServicer):
def __init__(
self,
model: Model,
cache: Cache,
server_urls: List[str],
):
self.cache = cache
self.model = model
# Quantize is resolved during model loading
self.quantize = model.quantize
self.server_urls = server_urls
# For some reason, inference_mode does not work well with GLOO which we use on CPU
# TODO: The inference_mode set messes up the autograd op dispatch and results in an
# "aten::matmul op not optimized" issue. Will investigate further.
# if model.device.type == "hpu":
# Force inference mode for the lifetime of TextGenerationService
# self._inference_mode_raii_guard = torch._C._InferenceMode(True)
async def Info(self, request, context):
return self.model.info
async def Health(self, request, context):
if self.model.device.type == "hpu":
torch.zeros((2, 2)).to("hpu")
return generate_pb2.HealthResponse()
async def ServiceDiscovery(self, request, context):
return generate_pb2.ServiceDiscoveryResponse(urls=self.server_urls)
async def ClearCache(self, request, context):
if request.HasField("id"):
self.cache.delete(request.id)
else:
self.cache.clear()
return generate_pb2.ClearCacheResponse()
async def FilterBatch(self, request, context):
batch = self.cache.pop(request.batch_id)
if batch is None:
raise ValueError(f"Batch ID {request.batch_id} not found in cache.")
filtered_batch = batch.filter(request.request_ids)
self.cache.set(filtered_batch)
return generate_pb2.FilterBatchResponse(batch=filtered_batch.to_pb())
async def Warmup(self, request, context):
if ATTENTION == "paged":
set_max_prefill_tokens(request.max_prefill_tokens)
if (
self.model.batch_type in VLM_BATCH_TYPES
):  # Hack: I would rather use kwargs in the `from_pb` call
batch = self.model.batch_type.from_pb_processor(
request.batch,
self.model.tokenizer,
self.model.processor,
self.model.model.config,
self.model.dtype,
self.model.device,
)
else:
batch = self.model.batch_type.from_pb(
request.batch,
self.model.tokenizer,
self.model.dtype,
self.model.device,
)
# Override default values with None for clearer semantics.
max_input_tokens = (
request.max_input_tokens
if request.HasField("max_input_tokens")
else None
)
max_total_tokens = (
request.max_total_tokens
if request.HasField("max_total_tokens")
else None
)
max_supported_total_tokens, max_input_tokens, max_total_tokens = (
self.model.warmup(batch, max_input_tokens, max_total_tokens)
)
else:
max_supported_total_tokens, max_input_tokens, max_total_tokens = (
self.model.warmup(request)
)
# Workaround for the skip-tokenizer path:
# we need to call make_tokenizer_optional after the warmup,
# because the router is not aware of that feature
make_tokenizer_optional(self.model.tokenizer)
return generate_pb2.WarmupResponse(
max_supported_total_tokens=max_supported_total_tokens,
max_input_tokens=max_input_tokens,
max_total_tokens=max_total_tokens,
)
async def Prefill(self, request, context):
start = time.time_ns()
if (
self.model.batch_type in VLM_BATCH_TYPES
):  # Hack: I would rather use kwargs in the `from_pb` call
batch = self.model.batch_type.from_pb_processor(
request.batch,
self.model.tokenizer,
self.model.processor,
self.model.model.config,
self.model.dtype,
self.model.device,
)
else:
batch = self.model.batch_type.from_pb(
request.batch, self.model.tokenizer, self.model.dtype, self.model.device
)
generations, next_batch, timings = self.model.generate_token([batch])
self.cache.set(next_batch)
return generate_pb2.PrefillResponse(
generations=[generation.to_pb() for generation in generations],
batch=next_batch.to_pb() if next_batch else None,
forward_ns=timings[0],
decode_ns=timings[1],
total_ns=time.time_ns() - start,
)
async def Decode(self, request, context):
start = time.time_ns()
if len(request.batches) == 0:
raise ValueError("Must provide at least one batch")
batches = []
for batch_pb in request.batches:
batch = self.cache.pop(batch_pb.id)
if batch is None:
raise ValueError(f"Batch ID {batch_pb.id} not found in cache.")
batches.append(batch)
if len(batches) == 0:
raise ValueError("All batches are empty")
generations, next_batch, timings = self.model.generate_token(batches)
self.cache.set(next_batch)
return generate_pb2.DecodeResponse(
generations=[generation.to_pb() for generation in generations],
batch=next_batch.to_pb() if next_batch else None,
concat_ns=None,
forward_ns=timings[0],
decode_ns=timings[1],
total_ns=time.time_ns() - start,
)
def serve(
model_id: str,
lora_adapters: Optional[List[AdapterInfo]],
revision: Optional[str],
sharded: bool,
quantize: Optional[str],
speculate: Optional[int],
dtype: Optional[str],
kv_cache_dtype: Optional[str],
trust_remote_code: bool,
uds_path: Path,
max_input_tokens: int,
):
async def serve_inner(
model_id: str,
lora_adapters: Optional[List[AdapterInfo]],
revision: Optional[str],
sharded: bool = False,
quantize: Optional[str] = None,
speculate: Optional[int] = None,
dtype: Optional[str] = None,
kv_cache_dtype: Optional[str] = None,
trust_remote_code: bool = False,
):
if not is_driver_compatible():
logger.warning(
f"Current Synapse version is lower than the minimum version supported: {MIN_TGI_GAUDI_SYNAPSE_VERSION}, this could result in failures"
)
unix_socket_template = "unix://{}-{}"
adapter_to_index = {}
logger.info("Server:server_inner: sharded = {}".format(sharded))
if sharded:
rank = int(os.environ["RANK"])
logger.info("Server:server_inner: rank = {}".format(rank))
server_urls = [
unix_socket_template.format(uds_path, rank)
for rank in range(int(os.environ["WORLD_SIZE"]))
]
local_url = server_urls[int(os.environ["RANK"])]
else:
local_url = unix_socket_template.format(uds_path, 0)
server_urls = [local_url]
logger.info(
"Server:server_inner: data type = {}, local_url = {}".format(
dtype, local_url
)
)
if dtype == "bfloat16" or dtype is None:
data_type = torch.bfloat16
else:
data_type = torch.float
if revision == "None":
revision = None
try:
model = get_model_with_lora_adapters(
model_id,
lora_adapters,
revision,
sharded,
quantize,
speculate,
data_type,
kv_cache_dtype,
trust_remote_code,
max_input_tokens,
adapter_to_index,
)
except Exception:
logger.exception("Error when initializing model")
raise
set_adapter_to_index(adapter_to_index)
server = aio.server(
interceptors=[
ExceptionInterceptor(),
UDSOpenTelemetryAioServerInterceptor(),
],
options=[
# Set the maximum possible message length: i32::MAX
("grpc.max_receive_message_length", (1 << 31) - 1)
],
)
generate_pb2_grpc.add_TextGenerationServiceServicer_to_server(
TextGenerationService(model, Cache(), server_urls), server
)
SERVICE_NAMES = (
generate_pb2.DESCRIPTOR.services_by_name["TextGenerationService"].full_name,
reflection.SERVICE_NAME,
)
reflection.enable_server_reflection(SERVICE_NAMES, server)
server.add_insecure_port(local_url)
await server.start()
logger.info("Server started at {}".format(local_url))
signal_handler = SignalHandler()
while signal_handler.KEEP_PROCESSING:
await asyncio.sleep(0.5)
set_model_id(model_id)
asyncio.run(
serve_inner(
model_id,
lora_adapters,
revision,
sharded,
quantize,
speculate,
dtype,
kv_cache_dtype,
trust_remote_code,
)
)
| text-generation-inference/backends/gaudi/server/text_generation_server/server.py/0 | {
"file_path": "text-generation-inference/backends/gaudi/server/text_generation_server/server.py",
"repo_id": "text-generation-inference",
"token_count": 5307
} | 298 |
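When `sharded` is set, `serve_inner` above derives one unix-domain-socket URL per rank from `unix_socket_template` and the `WORLD_SIZE`/`RANK` environment variables. A minimal sketch of that URL construction (the `uds_path` and world size are illustrative values, not defaults from the server):

```python
# Sketch of the per-rank unix socket URLs built by serve_inner for sharded serving.
unix_socket_template = "unix://{}-{}"
uds_path = "/tmp/text-generation-server"  # assumed path; the real one comes from the CLI
world_size = 2                            # assumed; the server reads WORLD_SIZE from env

server_urls = [
    unix_socket_template.format(uds_path, rank) for rank in range(world_size)
]
print(server_urls)
# ['unix:///tmp/text-generation-server-0', 'unix:///tmp/text-generation-server-1']
```

Rank `r` then binds `server_urls[r]` as its `local_url`, while the full list is returned by `ServiceDiscovery` so the router can reach every shard.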
from typing import Optional
SUPPORT_CHUNKING: Optional[bool] = None
MAX_PREFILL_TOKENS: Optional[int] = None
def set_support_chunking(support_chunking: bool):
global SUPPORT_CHUNKING
SUPPORT_CHUNKING = support_chunking
def get_support_chunking() -> Optional[bool]:
global SUPPORT_CHUNKING
return SUPPORT_CHUNKING
def set_max_prefill_tokens(max_prefill_tokens: int):
global MAX_PREFILL_TOKENS
MAX_PREFILL_TOKENS = max_prefill_tokens
def get_max_prefill_tokens() -> Optional[int]:
global MAX_PREFILL_TOKENS
return MAX_PREFILL_TOKENS
| text-generation-inference/backends/gaudi/server/text_generation_server/utils/prefill_chunking.py/0 | {
"file_path": "text-generation-inference/backends/gaudi/server/text_generation_server/utils/prefill_chunking.py",
"repo_id": "text-generation-inference",
"token_count": 221
} | 299 |
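The `prefill_chunking` module above is a tiny module-global registry: setters assign process-wide values once at startup, getters read them anywhere without passing config around. A self-contained re-creation of that pattern (standalone sketch, not an import of the real module):

```python
from typing import Optional

# Module-global slot, None until configured (mirrors MAX_PREFILL_TOKENS above).
MAX_PREFILL_TOKENS: Optional[int] = None

def set_max_prefill_tokens(max_prefill_tokens: int) -> None:
    # `global` is required because we rebind the module-level name.
    global MAX_PREFILL_TOKENS
    MAX_PREFILL_TOKENS = max_prefill_tokens

def get_max_prefill_tokens() -> Optional[int]:
    # Reading a module global needs no `global` declaration.
    return MAX_PREFILL_TOKENS

set_max_prefill_tokens(4096)  # e.g. called once from Warmup
print(get_max_prefill_tokens())  # 4096
```

The trade-off of this design is simplicity over testability: state lives at module scope, so callers like the Warmup handler must set it before any reader runs.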
use crate::llamacpp;
use async_trait::async_trait;
use std::ffi::CString;
use std::mem::replace;
use std::str::FromStr;
use std::sync::{mpsc, Once};
use text_generation_router::infer::{Backend, GeneratedText, InferError, InferStreamResponse};
use text_generation_router::validation::ValidGenerateRequest;
use text_generation_router::{FinishReason, Token};
use thiserror::Error;
use tokenizers::Tokenizer;
use tokio::sync::mpsc::{unbounded_channel, UnboundedSender};
use tokio::sync::{oneshot, watch};
use tokio::task::{spawn, spawn_blocking};
use tokio::time::{timeout, Duration, Instant};
use tokio_stream::wrappers::UnboundedReceiverStream;
use tracing::instrument;
use tracing::{debug, error, info, trace, warn};
#[derive(Debug, Clone, Copy)]
pub enum LlamacppSplitMode {
GPU(usize),
Layer,
Row,
}
impl FromStr for LlamacppSplitMode {
type Err = String;
fn from_str(s: &str) -> Result<Self, Self::Err> {
match s.to_lowercase().as_str() {
"layer" => Ok(LlamacppSplitMode::Layer),
"row" => Ok(LlamacppSplitMode::Row),
_ => match s.parse::<usize>() {
Ok(n) => Ok(LlamacppSplitMode::GPU(n)),
Err(_) => Err("Choose a GPU number or `layer` or `row`".to_string()),
},
}
}
}
#[derive(Debug, Clone, Copy, clap::ValueEnum)]
pub enum LlamacppNuma {
Disabled,
Distribute,
Isolate,
Numactl,
Mirror,
}
#[allow(non_camel_case_types)]
#[derive(Debug, Clone, Copy, clap::ValueEnum)]
pub enum LlamacppGGMLType {
F32,
F16,
Q4_0,
Q4_1,
Q5_0,
Q5_1,
Q8_0,
Q8_1,
Q2_K,
Q3_K,
Q4_K,
Q5_K,
Q6_K,
Q8_K,
IQ2_XXS,
IQ2_XS,
IQ3_XXS,
IQ1_S,
IQ4_NL,
IQ3_S,
IQ2_S,
IQ4_XS,
I8,
I16,
I32,
I64,
F64,
IQ1_M,
BF16,
TQ1_0,
TQ2_0,
}
// TODO: macro
impl LlamacppGGMLType {
fn to_ggml_type(self) -> llamacpp::ggml_type {
match self {
LlamacppGGMLType::F32 => llamacpp::GGML_TYPE_F32,
LlamacppGGMLType::F16 => llamacpp::GGML_TYPE_F16,
LlamacppGGMLType::Q4_0 => llamacpp::GGML_TYPE_Q4_0,
LlamacppGGMLType::Q4_1 => llamacpp::GGML_TYPE_Q4_1,
LlamacppGGMLType::Q5_0 => llamacpp::GGML_TYPE_Q5_0,
LlamacppGGMLType::Q5_1 => llamacpp::GGML_TYPE_Q5_1,
LlamacppGGMLType::Q8_0 => llamacpp::GGML_TYPE_Q8_0,
LlamacppGGMLType::Q8_1 => llamacpp::GGML_TYPE_Q8_1,
LlamacppGGMLType::Q2_K => llamacpp::GGML_TYPE_Q2_K,
LlamacppGGMLType::Q3_K => llamacpp::GGML_TYPE_Q3_K,
LlamacppGGMLType::Q4_K => llamacpp::GGML_TYPE_Q4_K,
LlamacppGGMLType::Q5_K => llamacpp::GGML_TYPE_Q5_K,
LlamacppGGMLType::Q6_K => llamacpp::GGML_TYPE_Q6_K,
LlamacppGGMLType::Q8_K => llamacpp::GGML_TYPE_Q8_K,
LlamacppGGMLType::IQ2_XXS => llamacpp::GGML_TYPE_IQ2_XXS,
LlamacppGGMLType::IQ2_XS => llamacpp::GGML_TYPE_IQ2_XS,
LlamacppGGMLType::IQ3_XXS => llamacpp::GGML_TYPE_IQ3_XXS,
LlamacppGGMLType::IQ1_S => llamacpp::GGML_TYPE_IQ1_S,
LlamacppGGMLType::IQ4_NL => llamacpp::GGML_TYPE_IQ4_NL,
LlamacppGGMLType::IQ3_S => llamacpp::GGML_TYPE_IQ3_S,
LlamacppGGMLType::IQ2_S => llamacpp::GGML_TYPE_IQ2_S,
LlamacppGGMLType::IQ4_XS => llamacpp::GGML_TYPE_IQ4_XS,
LlamacppGGMLType::I8 => llamacpp::GGML_TYPE_I8,
LlamacppGGMLType::I16 => llamacpp::GGML_TYPE_I16,
LlamacppGGMLType::I32 => llamacpp::GGML_TYPE_I32,
LlamacppGGMLType::I64 => llamacpp::GGML_TYPE_I64,
LlamacppGGMLType::F64 => llamacpp::GGML_TYPE_F64,
LlamacppGGMLType::IQ1_M => llamacpp::GGML_TYPE_IQ1_M,
LlamacppGGMLType::BF16 => llamacpp::GGML_TYPE_BF16,
LlamacppGGMLType::TQ1_0 => llamacpp::GGML_TYPE_TQ1_0,
LlamacppGGMLType::TQ2_0 => llamacpp::GGML_TYPE_TQ2_0,
}
}
}
pub struct LlamacppConfig {
pub model_gguf: String,
pub max_batch_total_tokens: usize,
pub max_physical_batch_total_tokens: usize,
pub max_batch_size: usize,
pub batch_timeout: Duration,
pub n_threads: usize,
pub n_threads_batch: usize,
pub n_gpu_layers: usize,
pub split_mode: LlamacppSplitMode,
pub numa: LlamacppNuma,
pub defrag_threshold: f32,
pub use_mmap: bool,
pub use_mlock: bool,
pub offload_kqv: bool,
pub flash_attention: bool,
pub type_k: LlamacppGGMLType,
pub type_v: LlamacppGGMLType,
}
#[derive(Debug)]
struct LlamacppRequest {
input_ids: Vec<i32>,
top_k: i32,
top_p: f32,
typical_p: f32,
min_keep: usize,
temp: f32,
seed: u32,
penalty_last_n: i32,
penalty_repeat: f32,
penalty_freq: f32,
penalty_present: f32,
max_new_tokens: usize,
tx: UnboundedSender<Result<InferStreamResponse, InferError>>,
time: Instant,
}
pub struct LlamacppBackend {
tx: UnboundedSender<LlamacppRequest>,
status: watch::Receiver<bool>,
}
impl LlamacppRequest {
fn new(
from: &ValidGenerateRequest,
tx: UnboundedSender<Result<InferStreamResponse, InferError>>,
) -> Option<Self> {
from.input_ids.as_ref().map(|input_ids| LlamacppRequest {
input_ids: input_ids.iter().map(|&x| x as i32).collect(),
top_k: from.parameters.top_k as _,
top_p: from.parameters.top_p as _,
typical_p: from.parameters.typical_p as _,
min_keep: 0, // disabled
temp: from.parameters.temperature as _,
seed: from.parameters.seed as _,
penalty_last_n: 64, // 0 = disabled, -1 = context size
penalty_repeat: from.parameters.repetition_penalty as _,
penalty_freq: from.parameters.frequency_penalty as _,
penalty_present: 0.0, // disabled
max_new_tokens: from.stopping_parameters.max_new_tokens as _,
tx,
time: Instant::now(),
})
}
}
struct Llamacpp {
model: *mut llamacpp::llama_model,
ctx: *mut llamacpp::llama_context,
vocab: *const llamacpp::llama_vocab,
logprobs: Vec<llamacpp::llama_token_data>,
batch: llamacpp::llama_batch,
}
extern "C" fn llamacpp_log_callback(
level: llamacpp::ggml_log_level,
msg: *const std::os::raw::c_char,
_user_data: *mut std::os::raw::c_void,
) {
let cmsg = unsafe { std::ffi::CStr::from_ptr(msg) };
let rmsg = cmsg.to_string_lossy().trim_end_matches('\n').to_string();
match level {
llamacpp::GGML_LOG_LEVEL_DEBUG => debug!(target: "llamacpp", "{}", rmsg),
llamacpp::GGML_LOG_LEVEL_INFO => info!(target: "llamacpp", "{}", rmsg),
llamacpp::GGML_LOG_LEVEL_WARN => warn!(target: "llamacpp", "{}", rmsg),
llamacpp::GGML_LOG_LEVEL_ERROR => error!(target: "llamacpp", "{}", rmsg),
_ => trace!(target: "llamacpp", "{}", rmsg),
}
}
impl Llamacpp {
fn new(conf: LlamacppConfig) -> Result<Self, BackendError> {
let gguf = CString::new(conf.model_gguf)?;
let model = unsafe {
let mut params = llamacpp::model_default_params();
params.n_gpu_layers = conf.n_gpu_layers as _;
params.split_mode = match conf.split_mode {
LlamacppSplitMode::GPU(_) => llamacpp::LLAMA_SPLIT_MODE_NONE,
LlamacppSplitMode::Layer => llamacpp::LLAMA_SPLIT_MODE_LAYER,
LlamacppSplitMode::Row => llamacpp::LLAMA_SPLIT_MODE_ROW,
};
params.main_gpu = match conf.split_mode {
LlamacppSplitMode::GPU(n) => n as _,
_ => 0,
};
params.use_mmap = conf.use_mmap;
params.use_mlock = conf.use_mlock;
llamacpp::model_load_from_file(gguf.as_ptr(), params)
};
if model.is_null() {
return Err(BackendError::Llamacpp("Failed to load model".to_string()));
}
let ctx = unsafe {
let mut params = llamacpp::context_default_params();
params.n_ctx = conf.max_batch_total_tokens as _;
params.n_batch = conf.max_batch_total_tokens as _;
params.n_ubatch = conf.max_physical_batch_total_tokens as _;
params.n_seq_max = conf.max_batch_size as _;
params.n_threads = conf.n_threads as _;
params.n_threads_batch = conf.n_threads_batch as _;
params.defrag_thold = conf.defrag_threshold;
params.offload_kqv = conf.offload_kqv;
params.flash_attn = conf.flash_attention;
params.type_k = conf.type_k.to_ggml_type();
params.type_v = conf.type_v.to_ggml_type();
params.no_perf = true;
llamacpp::init_from_model(model, params)
};
if ctx.is_null() {
return Err(BackendError::Llamacpp("Failed to init context".to_string()));
}
let vocab = unsafe { llamacpp::model_get_vocab(model) };
if vocab.is_null() {
return Err(BackendError::Llamacpp("Failed to get vocab".to_string()));
}
let n_tokens = unsafe { llamacpp::vocab_n_tokens(vocab) };
let mut logprobs = Vec::with_capacity(n_tokens as usize);
for token in 0..n_tokens {
logprobs.push(llamacpp::llama_token_data {
id: token,
logit: 0.0,
p: 0.0,
});
}
let batch = unsafe { llamacpp::batch_init(conf.max_batch_total_tokens as _, 0, 1) };
Ok(Llamacpp {
model,
ctx,
vocab,
logprobs,
batch,
})
}
fn decode(&mut self) -> i32 {
unsafe { llamacpp::decode(self.ctx, self.batch) }
}
fn clear_kv_cache(&mut self, seq_id: llamacpp::llama_seq_id) {
unsafe {
llamacpp::kv_cache_seq_rm(self.ctx, seq_id, -1, -1);
}
}
fn batch_push(
&mut self,
token: llamacpp::llama_token,
pos: llamacpp::llama_pos,
seq_id: llamacpp::llama_seq_id,
logits: bool,
) -> usize {
let n = self.batch.n_tokens as usize;
unsafe {
*self.batch.token.add(n) = token;
*self.batch.pos.add(n) = pos;
*self.batch.n_seq_id.add(n) = 1;
*(*self.batch.seq_id.add(n)).add(0) = seq_id;
*self.batch.logits.add(n) = logits as i8;
}
self.batch.n_tokens += 1;
n
}
}
impl Drop for Llamacpp {
fn drop(&mut self) {
if !self.ctx.is_null() {
unsafe { llamacpp::free(self.ctx) };
}
if !self.model.is_null() {
unsafe { llamacpp::model_free(self.model) };
}
unsafe { llamacpp::batch_free(self.batch) };
}
}
struct LlamacppSampler {
chain: *mut llamacpp::llama_sampler,
}
impl LlamacppSampler {
fn new(req: &LlamacppRequest) -> Option<Self> {
let chain = unsafe {
let params = llamacpp::sampler_chain_default_params();
llamacpp::sampler_chain_init(params)
};
if chain.is_null() {
error!("Failed to init sampler");
return None;
}
let (top_k, top_p, typical_p, temp, penalties, dist) = unsafe {
(
llamacpp::sampler_init_top_k(req.top_k),
llamacpp::sampler_init_top_p(req.top_p, req.min_keep),
llamacpp::sampler_init_typical(req.typical_p, req.min_keep),
llamacpp::sampler_init_temp(req.temp),
llamacpp::sampler_init_penalties(
req.penalty_last_n,
req.penalty_repeat,
req.penalty_freq,
req.penalty_present,
),
llamacpp::sampler_init_dist(req.seed),
)
};
let all = &[
("top_k", top_k),
("top_p", top_p),
("typical_p", typical_p),
("temp", temp),
("penalties", penalties),
("dist", dist),
];
let mut failed = false;
for (k, v) in all {
if v.is_null() {
error!("Failed to init {k} sampler");
failed = true;
} else {
unsafe { llamacpp::sampler_chain_add(chain, *v) };
}
}
if failed {
unsafe { llamacpp::sampler_free(chain) };
None
} else {
Some(LlamacppSampler { chain })
}
}
fn sample(&self, llamacpp: &mut Llamacpp, idx: usize) -> (llamacpp::llama_token, f32) {
let logits = unsafe { llamacpp::get_logits_ith(llamacpp.ctx, idx as _) };
for (token, logprob) in llamacpp.logprobs.iter_mut().enumerate() {
*logprob = llamacpp::llama_token_data {
id: token as _,
logit: unsafe { *logits.add(token) },
p: 0.0,
};
}
let mut view = llamacpp::llama_token_data_array {
data: llamacpp.logprobs.as_mut_ptr(),
size: llamacpp.logprobs.len(),
selected: -1,
sorted: false,
};
unsafe {
llamacpp::sampler_apply(self.chain, &mut view);
let logprob = *view.data.offset(view.selected as _);
llamacpp::sampler_accept(self.chain, logprob.id);
(logprob.id, logprob.p.ln())
}
}
}
impl Drop for LlamacppSampler {
fn drop(&mut self) {
if !self.chain.is_null() {
unsafe { llamacpp::sampler_free(self.chain) };
}
}
}
struct LlamacppSeq {
id: usize,
batch_pos: usize,
token: llamacpp::llama_token,
pos: llamacpp::llama_pos,
sampler: LlamacppSampler,
text: String,
n_new_tokens: usize,
running: bool,
}
static INIT: Once = Once::new();
impl LlamacppBackend {
pub fn new(
conf: LlamacppConfig,
tokenizer: Tokenizer,
) -> (
Self,
oneshot::Receiver<Result<(), BackendError>>,
watch::Sender<bool>,
) {
// Set up llama.cpp and its log forwarding, once and for all
INIT.call_once(|| unsafe {
llamacpp::log_set(Some(llamacpp_log_callback), std::ptr::null_mut());
llamacpp::backend_init();
llamacpp::numa_init(match conf.numa {
LlamacppNuma::Disabled => llamacpp::GGML_NUMA_STRATEGY_DISABLED,
LlamacppNuma::Distribute => llamacpp::GGML_NUMA_STRATEGY_DISTRIBUTE,
LlamacppNuma::Isolate => llamacpp::GGML_NUMA_STRATEGY_ISOLATE,
LlamacppNuma::Numactl => llamacpp::GGML_NUMA_STRATEGY_NUMACTL,
LlamacppNuma::Mirror => llamacpp::GGML_NUMA_STRATEGY_MIRROR,
});
});
let (status_tx, status_rx) = watch::channel(false);
let (shutdown_tx, shutdown_rx) = watch::channel(false);
let (ok_tx, ok_rx) = oneshot::channel();
let (tx, mut rx) = unbounded_channel::<LlamacppRequest>();
let (sync_tx, sync_rx) = mpsc::channel();
spawn(async move {
let mut n_tokens = 0;
let mut requests = Vec::with_capacity(conf.max_batch_size);
let flush = |requests: &mut Vec<_>, n_tokens: &mut usize| {
if !requests.is_empty() {
let _ =
sync_tx.send(replace(requests, Vec::with_capacity(conf.max_batch_size)));
*n_tokens = 0;
}
};
loop {
match timeout(conf.batch_timeout, rx.recv()).await {
Ok(Some(request)) => {
let n_tokens_to_add = request.input_ids.len();
if n_tokens + n_tokens_to_add > conf.max_batch_total_tokens {
flush(&mut requests, &mut n_tokens);
}
n_tokens += n_tokens_to_add;
requests.push(request);
if requests.len() == conf.max_batch_size {
flush(&mut requests, &mut n_tokens);
}
}
Ok(None) => break, // closed
Err(_) => flush(&mut requests, &mut n_tokens), // timeout
}
}
});
spawn_blocking(move || {
let mut llamacpp = match Llamacpp::new(conf) {
Ok(v) => {
let _ = ok_tx.send(Ok(()));
v
}
Err(e) => {
let _ = ok_tx.send(Err(e));
return;
}
};
let vocab = tokenizer.get_added_vocabulary();
// health() returns true
let _ = status_tx.send(true);
while let Ok(requests) = sync_rx.recv() {
if *shutdown_rx.borrow() {
break;
}
let start_time = Instant::now();
let mut seqs: Vec<LlamacppSeq> = Vec::with_capacity(requests.len());
llamacpp.batch.n_tokens = 0;
for (seq_id, request) in requests.iter().enumerate() {
debug!("Request: {:?}", request);
// TODO remove this
let sampler = match LlamacppSampler::new(request) {
Some(sampler) => sampler,
_ => {
let _ = request.tx.send(Err(InferError::IncompleteGeneration));
continue;
}
};
let last_pos = request.input_ids.len() - 1;
for (pos, &token_id) in request.input_ids.iter().enumerate() {
llamacpp.batch_push(
token_id as llamacpp::llama_token,
pos as llamacpp::llama_pos,
seq_id as llamacpp::llama_seq_id,
pos == last_pos, // check samplers
);
}
seqs.push(LlamacppSeq {
id: seq_id,
batch_pos: llamacpp.batch.n_tokens as usize - 1,
token: llamacpp::LLAMA_TOKEN_NULL,
pos: last_pos as llamacpp::llama_pos + 1,
sampler,
text: String::with_capacity(1024),
n_new_tokens: 0,
running: true,
});
}
while llamacpp.batch.n_tokens > 0 {
if llamacpp.decode() != 0 {
warn!("llama_decode failed, clearing kv cache");
llamacpp.clear_kv_cache(-1);
for seq in seqs.iter_mut() {
let _ = requests[seq.id]
.tx
.send(Err(InferError::IncompleteGeneration));
seq.running = false;
}
break;
}
for seq in seqs.iter_mut() {
if !seq.running {
continue;
}
let (next, logprob) = seq.sampler.sample(&mut llamacpp, seq.batch_pos);
seq.n_new_tokens += 1;
seq.token = next;
let piece = match tokenizer.decode(&[next as u32], false) {
Ok(piece) => piece,
Err(e) => {
error!("Failed to decode token: {e}");
let _ = requests[seq.id]
.tx
.send(Err(InferError::IncompleteGeneration));
seq.running = false;
continue;
}
};
let special = vocab.is_special_token(&piece);
if !special {
seq.text.push_str(&piece);
}
let token = Token {
id: next as _,
text: piece,
logprob,
special,
};
let finish: Option<FinishReason> = {
if unsafe { llamacpp::vocab_is_eog(llamacpp.vocab, next) } {
Some(FinishReason::EndOfSequenceToken)
} else if seq.n_new_tokens == requests[seq.id].max_new_tokens {
Some(FinishReason::Length)
} else {
None
}
};
if let Some(reason) = finish {
let _ = requests[seq.id].tx.send(Ok(InferStreamResponse::End {
token,
top_tokens: vec![],
generated_text: GeneratedText {
text: seq.text.clone(),
generated_tokens: seq.n_new_tokens as _,
finish_reason: reason,
seed: Some(requests[seq.id].seed as _),
},
start: start_time,
queued: requests[seq.id].time,
}));
seq.running = false;
continue;
}
let _ = requests[seq.id]
.tx
.send(Ok(InferStreamResponse::Intermediate {
token,
top_tokens: vec![],
}));
}
// generate a new batch
llamacpp.batch.n_tokens = 0;
for seq in seqs.iter_mut() {
if seq.running {
seq.batch_pos =
llamacpp.batch_push(seq.token, seq.pos, seq.id as _, true);
seq.pos += 1;
} else {
llamacpp.clear_kv_cache(seq.id as _);
}
}
}
}
});
(
Self {
tx,
status: status_rx,
},
ok_rx,
shutdown_tx,
)
}
}
#[async_trait]
impl Backend for LlamacppBackend {
#[instrument(skip_all)]
fn schedule(
&self,
request: ValidGenerateRequest,
) -> Result<UnboundedReceiverStream<Result<InferStreamResponse, InferError>>, InferError> {
debug!(?request);
let (tx, rx) = unbounded_channel::<Result<InferStreamResponse, InferError>>();
match LlamacppRequest::new(&request, tx) {
Some(v) => match self.tx.send(v) {
Err(e) => Err(InferError::GenerationError(e.to_string())),
_ => Ok(UnboundedReceiverStream::new(rx)),
},
_ => Err(InferError::GenerationError("Bad request".to_string())),
}
}
async fn health(&self, _: bool) -> bool {
*self.status.borrow()
}
fn name(&self) -> &'static str {
"llamacpp"
}
}
#[derive(Debug, Error)]
pub enum BackendError {
#[error("CString error: {0}")]
CStringError(#[from] std::ffi::NulError),
#[error("Llamacpp error: {0}")]
Llamacpp(String),
}
Source: text-generation-inference/backends/llamacpp/src/backend.rs
#!/usr/bin/env python
import argparse
import logging
import os
import sys
from typing import Any, Dict, List, Optional
from optimum.neuron.modeling_decoder import get_available_cores
from optimum.neuron.cache import get_hub_cached_entries
from optimum.neuron.configuration_utils import NeuronConfig
from optimum.neuron.utils.version_utils import get_neuronxcc_version
from optimum.neuron.utils import map_torch_dtype
logger = logging.getLogger(__name__)
tgi_router_env_vars = [
"MAX_BATCH_SIZE",
"MAX_TOTAL_TOKENS",
"MAX_INPUT_TOKENS",
"MAX_BATCH_PREFILL_TOKENS",
]
tgi_server_env_vars = ["HF_NUM_CORES", "HF_AUTO_CAST_TYPE"]
# By the end of this script, all env vars should be specified properly
tgi_env_vars = tgi_server_env_vars + tgi_router_env_vars
available_cores = get_available_cores()
neuronxcc_version = get_neuronxcc_version()
def parse_cmdline_and_set_env(argv: List[str] = None) -> argparse.Namespace:
parser = argparse.ArgumentParser()
if not argv:
argv = sys.argv
# All these are params passed to tgi and intercepted here
parser.add_argument(
"--max-input-tokens",
type=int,
default=os.getenv("MAX_INPUT_TOKENS", os.getenv("MAX_INPUT_LENGTH", 0)),
)
parser.add_argument(
"--max-total-tokens", type=int, default=os.getenv("MAX_TOTAL_TOKENS", 0)
)
parser.add_argument(
"--max-batch-size", type=int, default=os.getenv("MAX_BATCH_SIZE", 0)
)
parser.add_argument(
"--max-batch-prefill-tokens",
type=int,
default=os.getenv("MAX_BATCH_PREFILL_TOKENS", 0),
)
parser.add_argument("--model-id", type=str, default=os.getenv("MODEL_ID"))
parser.add_argument("--revision", type=str, default=os.getenv("REVISION"))
args = parser.parse_known_args(argv)[0]
if not args.model_id:
        raise Exception(
            "No model id provided! Specify it with the --model-id cmdline option or the MODEL_ID env var"
        )
# Override env with cmdline params
os.environ["MODEL_ID"] = args.model_id
# Set all tgi router and tgi server values to consistent values as early as possible
# from the order of the parser defaults, the tgi router value can override the tgi server ones
if args.max_total_tokens > 0:
os.environ["MAX_TOTAL_TOKENS"] = str(args.max_total_tokens)
if args.max_input_tokens > 0:
os.environ["MAX_INPUT_TOKENS"] = str(args.max_input_tokens)
if args.max_batch_size > 0:
os.environ["MAX_BATCH_SIZE"] = str(args.max_batch_size)
if args.max_batch_prefill_tokens > 0:
os.environ["MAX_BATCH_PREFILL_TOKENS"] = str(args.max_batch_prefill_tokens)
if args.revision:
os.environ["REVISION"] = str(args.revision)
return args
def neuron_config_to_env(neuron_config):
if isinstance(neuron_config, NeuronConfig):
neuron_config = neuron_config.to_dict()
with open(os.environ["ENV_FILEPATH"], "w") as f:
f.write("export MAX_BATCH_SIZE={}\n".format(neuron_config["batch_size"]))
f.write("export MAX_TOTAL_TOKENS={}\n".format(neuron_config["sequence_length"]))
f.write("export HF_NUM_CORES={}\n".format(neuron_config["tp_degree"]))
config_key = (
"auto_cast_type" if "auto_cast_type" in neuron_config else "torch_dtype"
)
auto_cast_type = neuron_config[config_key]
f.write("export HF_AUTO_CAST_TYPE={}\n".format(auto_cast_type))
max_input_tokens = os.getenv("MAX_INPUT_TOKENS")
if not max_input_tokens:
max_input_tokens = int(neuron_config["sequence_length"]) // 2
if max_input_tokens == 0:
raise Exception("Model sequence length should be greater than 1")
f.write("export MAX_INPUT_TOKENS={}\n".format(max_input_tokens))
max_batch_prefill_tokens = os.getenv("MAX_BATCH_PREFILL_TOKENS")
if not max_batch_prefill_tokens:
max_batch_prefill_tokens = int(neuron_config["batch_size"]) * int(
max_input_tokens
)
f.write("export MAX_BATCH_PREFILL_TOKENS={}\n".format(max_batch_prefill_tokens))
def sort_neuron_configs(dictionary):
return -dictionary["tp_degree"], -dictionary["batch_size"]
def lookup_compatible_cached_model(
model_id: str, revision: Optional[str]
) -> Optional[Dict[str, Any]]:
    # Reuse the same mechanism as the one used to configure the tgi server part
    # The only difference here is that we stay as flexible as possible on the compatibility part
entries = get_hub_cached_entries(model_id)
logger.debug(
"Found %d cached entries for model %s, revision %s",
len(entries),
model_id,
revision,
)
all_compatible = []
for entry in entries:
if check_env_and_neuron_config_compatibility(
entry, check_compiler_version=True
):
all_compatible.append(entry)
if not all_compatible:
logger.debug(
"No compatible cached entry found for model %s, env %s, available cores %s, neuronxcc version %s",
model_id,
get_env_dict(),
available_cores,
neuronxcc_version,
)
return None
logger.info("%d compatible neuron cached models found", len(all_compatible))
all_compatible = sorted(all_compatible, key=sort_neuron_configs)
entry = all_compatible[0]
return entry
def check_env_and_neuron_config_compatibility(
neuron_config_dict: Dict[str, Any], check_compiler_version: bool
) -> bool:
logger.debug(
"Checking the provided neuron config %s is compatible with the local setup and provided environment",
neuron_config_dict,
)
# Local setup compat checks
if neuron_config_dict["tp_degree"] > available_cores:
logger.debug(
"Not enough neuron cores available to run the provided neuron config"
)
return False
if (
check_compiler_version
and neuron_config_dict["neuronxcc_version"] != neuronxcc_version
):
logger.debug(
"Compiler version conflict, the local one (%s) differs from the one used to compile the model (%s)",
neuronxcc_version,
neuron_config_dict["neuronxcc_version"],
)
return False
batch_size = os.getenv("MAX_BATCH_SIZE", None)
if batch_size is not None and neuron_config_dict["batch_size"] < int(batch_size):
logger.debug(
"The provided MAX_BATCH_SIZE (%s) is higher than the neuron config batch size (%s)",
os.getenv("MAX_BATCH_SIZE"),
neuron_config_dict["batch_size"],
)
return False
max_total_tokens = os.getenv("MAX_TOTAL_TOKENS", None)
if max_total_tokens is not None and neuron_config_dict["sequence_length"] < int(
max_total_tokens
):
logger.debug(
"The provided MAX_TOTAL_TOKENS (%s) is higher than the neuron config sequence length (%s)",
max_total_tokens,
neuron_config_dict["sequence_length"],
)
return False
num_cores = os.getenv("HF_NUM_CORES", None)
if num_cores is not None and neuron_config_dict["tp_degree"] < int(num_cores):
logger.debug(
"The provided HF_NUM_CORES (%s) is higher than the neuron config tp degree (%s)",
num_cores,
neuron_config_dict["tp_degree"],
)
return False
auto_cast_type = os.getenv("HF_AUTO_CAST_TYPE", None)
if auto_cast_type is not None:
config_key = (
"auto_cast_type"
if "auto_cast_type" in neuron_config_dict
else "torch_dtype"
)
neuron_config_value = map_torch_dtype(str(neuron_config_dict[config_key]))
env_value = map_torch_dtype(auto_cast_type)
if env_value != neuron_config_value:
logger.debug(
"The provided auto cast type and the neuron config param differ (%s != %s)",
env_value,
neuron_config_value,
)
return False
max_input_tokens = int(
os.getenv("MAX_INPUT_TOKENS", os.getenv("MAX_INPUT_LENGTH", 0))
)
if max_input_tokens > 0:
        if "max_context_length" in neuron_config_dict:
sequence_length = neuron_config_dict["max_context_length"]
else:
sequence_length = neuron_config_dict["sequence_length"]
if max_input_tokens >= sequence_length:
logger.debug(
"Specified max input tokens is not compatible with config sequence length ( %s >= %s)",
max_input_tokens,
sequence_length,
)
return False
return True
def get_env_dict() -> Dict[str, str]:
d = {}
for k in tgi_env_vars:
d[k] = os.getenv(k)
return d
def get_neuron_config_for_model(
model_name_or_path: str, revision: Optional[str] = None
) -> NeuronConfig:
try:
neuron_config = NeuronConfig.from_pretrained(
model_name_or_path, revision=revision
)
except Exception as e:
logger.debug(
"NeuronConfig.from_pretrained failed for model %s, revision %s: %s",
model_name_or_path,
revision,
e,
)
neuron_config = None
if neuron_config is not None:
compatible = check_env_and_neuron_config_compatibility(
neuron_config.to_dict(), check_compiler_version=False
)
if not compatible:
env_dict = get_env_dict()
msg = (
"Invalid neuron config and env. Config {}, env {}, available cores {}, neuronxcc version {}"
).format(neuron_config, env_dict, available_cores, neuronxcc_version)
logger.error(msg)
raise Exception(msg)
else:
neuron_config = lookup_compatible_cached_model(model_name_or_path, revision)
return neuron_config
Source: text-generation-inference/backends/neuron/server/text_generation_server/tgi_env.py
use async_trait::async_trait;
use cxx::UniquePtr;
use hashbrown::HashMap;
use std::hint;
use std::ops::Deref;
use std::path::Path;
use tokenizers::Tokenizer;
use tokio::sync::mpsc::{unbounded_channel, UnboundedReceiver, UnboundedSender};
use tokio::sync::TryAcquireError;
use tokio::task::spawn_blocking;
use tokio::time::Instant;
use tokio_stream::wrappers::UnboundedReceiverStream;
use tracing::{debug, error, warn};
use text_generation_router::infer::InferError::{GenerationError, ValidationError};
use text_generation_router::infer::{Backend, GeneratedText, InferError, InferStreamResponse};
use text_generation_router::validation::ValidationError::{
EmptyInput, Grammar, TopNTokensDisabled, UnsupportedModality,
};
use text_generation_router::validation::{Chunk, ValidGenerateRequest};
use text_generation_router::Token;
use crate::errors::TensorRtLlmBackendError;
use crate::ffi::{
create_backend_from_engine_folder, FinishReason, GenerationStep, TensorRtLlmBackendImpl,
};
use crate::utils::first_line;
type InferResult<T> = Result<T, InferError>;
/// Wraps a request along with the channel used to stream the decoded tokens back to the client
struct GenerationContext {
request: ValidGenerateRequest,
streamer: UnboundedSender<InferResult<InferStreamResponse>>,
tokens: Vec<u32>,
start: Option<Instant>,
queued: Instant,
}
#[derive(Debug, Copy, Clone)]
struct DecodedToken {
id: u32,
log_prob: f32,
is_final: bool,
finish_reason: FinishReason,
}
impl<'step> TryFrom<&'step GenerationStep> for DecodedToken {
type Error = InferError;
fn try_from(step: &'step GenerationStep) -> Result<Self, Self::Error> {
if !step.has_error {
Ok(Self {
id: step.token_id,
log_prob: step.log_prob,
is_final: step.is_final,
finish_reason: step.finish_reason,
})
} else {
Err(GenerationError(step.error_msg.clone()))
}
}
}
fn executor_status_looper(
max_inflight_requests: usize,
tokenizer: Tokenizer,
mut backend: UniquePtr<TensorRtLlmBackendImpl>,
mut backlog: UnboundedReceiver<GenerationContext>,
) {
// Track the tuple (request_id, stream) for each request
let mut in_flights =
HashMap::<u64, GenerationContext>::with_capacity(max_inflight_requests * 2);
'scheduler: loop {
// Is there any request pending to be scheduled?
let awaiting_requests = backlog.len();
for _ in 0..awaiting_requests {
// Retrieve all the requests
if let Some(ctx) = backlog.blocking_recv() {
                // Submit the request to the executor and move its context to the in-flight tracker
let request = &ctx.request;
let generation_params = &request.parameters;
let stopping_params = &request.stopping_parameters;
let input_ids = request.input_ids.as_deref();
// Submit to the TensorRT-LLM executor for scheduling
match backend.pin_mut().submit(
&input_ids.unwrap(), // This is checked beforehand in validate()
stopping_params.max_new_tokens,
generation_params.top_k,
generation_params.top_p,
generation_params.temperature,
generation_params.repetition_penalty,
generation_params.frequency_penalty,
generation_params.seed,
) {
Ok(request_id) => {
// Insert the context linked to the generated request id in the tracker
debug!("[in-flight] Added {}", request_id);
in_flights.insert(request_id, ctx);
}
Err(e) => {
// Return to the caller
let what = e.to_string();
error!(error = what.as_str(), "Failed to schedule request");
let err = Err(InferError::Overloaded(TryAcquireError::NoPermits));
if let Err(_) = ctx.streamer.send(err) {
error!("Failed to send back error to the client");
}
}
};
} else {
break 'scheduler;
}
}
if backend.num_tokens_ready() > 0 {
let mut backend = backend.pin_mut();
match backend.as_mut().pull_tokens() {
Ok(responses) => {
                    // Iterate through all the decoded tokens
for step in responses.deref() {
if let Some(ctx) = in_flights.get_mut(&step.request_id) {
// Update the starting timestamp if not set
// This value might not be the actual real starting time of the request
// on the executor side - Need to expose more info from the executor to
// retrieve this value
// TODO : Expose actual real starting time for a request on FFI layer
if ctx.start.is_none() {
ctx.start = Some(Instant::now());
}
// Try to map the generation step to a DecodedToken
let response = match DecodedToken::try_from(step) {
Ok(decoded_token) => {
post_process_decoded_token(&tokenizer, ctx, decoded_token)
}
Err(err) => Err(err),
};
// Attempt to send back the response to the client
if let Err(_) = ctx.streamer.send(response) {
// Client has dropped, remove from tracked requests
debug!(
"Client dropped - removing request {} from tracked requests",
step.request_id
);
backend.as_mut().cancel(step.request_id);
let _ = in_flights.remove(&step.request_id);
}
} else {
warn!("Untracked request {}", step.request_id,);
}
}
}
Err(ref err) => {
error!("Failed to get responses from the executor: {}.", err.what());
break 'scheduler;
}
}
}
        // Hint to the CPU that we are spin-locking
hint::spin_loop();
}
}
fn post_process_decoded_token(
tokenizer: &Tokenizer,
ctx: &mut GenerationContext,
decoded_token: DecodedToken,
) -> InferResult<InferStreamResponse> {
match tokenizer.decode(&[decoded_token.id], false) {
Ok(text) => {
let is_special = tokenizer.get_added_vocabulary().is_special_token(&text);
let token = Token {
id: decoded_token.id,
text,
logprob: decoded_token.log_prob,
special: is_special,
};
// Append the token to the tracked generated tokens
ctx.tokens.push(token.id);
            // Map to the correct response depending on whether the step is final or not
let out = if !decoded_token.is_final {
InferStreamResponse::Intermediate {
token,
top_tokens: vec![],
}
} else {
let text = tokenizer.decode(&ctx.tokens, true);
let generated_text = GeneratedText {
text: text.unwrap(),
generated_tokens: ctx.tokens.len() as u32,
finish_reason: decoded_token.finish_reason.into(),
seed: None,
};
InferStreamResponse::End {
token,
top_tokens: vec![],
generated_text,
start: ctx.start.unwrap(),
queued: ctx.queued,
}
};
Ok(out)
}
Err(err) => Err(GenerationError(err.to_string())),
}
}
fn ensure_paths_exist<P: AsRef<Path>, PP: AsRef<Path>>(
engine_folder: P,
executor_worker_path: PP,
) -> Result<(String, String), TensorRtLlmBackendError> {
// Retrieve paths as &str for the backend creation
let engine_folder = engine_folder.as_ref();
let executor_worker_path = executor_worker_path.as_ref();
// Ensure the engine folder exists
if !engine_folder.exists() {
let err = TensorRtLlmBackendError::EngineFolderDoesntExists(engine_folder.to_path_buf());
error!("Path validation failed: {}", err,);
return Err(err);
}
// Ensure executor worker binary exists
if !executor_worker_path.exists() {
let err = TensorRtLlmBackendError::ExecutorWorkerNotFound(engine_folder.to_path_buf());
error!("Path validation failed: {}", err,);
return Err(err);
}
let engine_folder = String::from(
engine_folder
.to_str()
.expect("Failed to convert engine_folder to valid UTF-8"),
);
let executor_worker_path = String::from(
executor_worker_path
.to_str()
.expect("Failed to convert executor_worker_path to valid UTF-8"),
);
Ok((engine_folder, executor_worker_path))
}
unsafe impl Send for TensorRtLlmBackendImpl {}
pub struct TensorRtLlmBackendV2(UnboundedSender<GenerationContext>);
impl TensorRtLlmBackendV2 {
pub fn new<P: AsRef<Path> + Send, PP: AsRef<Path> + Send>(
tokenizer: Tokenizer,
engine_folder: P,
executor_worker_path: PP,
max_inflight_requests: usize,
) -> Result<Self, TensorRtLlmBackendError> {
let (engine_folder, executor_worker_path) =
ensure_paths_exist(engine_folder, executor_worker_path)?;
// Allocate the IPC layer to communicate with the backend
let (executor_sender, executor_receiver) = unbounded_channel();
// Create the FFI backend
let backend = create_backend_from_engine_folder(&engine_folder, &executor_worker_path)
.map_err(|e| TensorRtLlmBackendError::Runtime(first_line(e.what(), "Unknown error")))?;
        // The executor looper is responsible for scheduling and pulling request state at regular intervals
spawn_blocking(move || {
executor_status_looper(max_inflight_requests, tokenizer, backend, executor_receiver)
});
Ok(TensorRtLlmBackendV2(executor_sender))
}
fn validate(request: &ValidGenerateRequest) -> InferResult<()> {
if request.input_ids.is_none() {
return Err(ValidationError(UnsupportedModality("No token provided")));
}
if request.top_n_tokens > 1 {
return Err(ValidationError(TopNTokensDisabled));
}
// TODO: Is it really needed? How can it be validated before?
if request.parameters.grammar.is_some() {
return Err(ValidationError(Grammar));
}
match request.inputs.len() {
0 => Err(ValidationError(EmptyInput)),
2.. => Err(GenerationError(
"TensorRT-LLM backend don't support multi-chunk".into(),
)),
1 => match request.inputs.first().expect("Single item-chunk") {
Chunk::Text(_) => Ok(()),
Chunk::Image(_) => Err(ValidationError(UnsupportedModality("image"))),
},
}
}
}
#[async_trait]
impl Backend for TensorRtLlmBackendV2 {
fn schedule(
&self,
request: ValidGenerateRequest,
) -> Result<UnboundedReceiverStream<Result<InferStreamResponse, InferError>>, InferError> {
Self::validate(&request)?;
// Open-up the stream to send tokens
let (streamer, receiver) = unbounded_channel::<InferResult<InferStreamResponse>>();
// Send the context to the executor for scheduling
let queued = Instant::now();
match self.0.send(GenerationContext {
request,
streamer,
tokens: Vec::with_capacity(256),
start: None,
queued,
}) {
Ok(_) => Ok(UnboundedReceiverStream::new(receiver)),
Err(_) => Err(GenerationError(
"Failed to submit request to the backend".into(),
)),
}
}
async fn health(&self, _: bool) -> bool {
true
}
fn name(&self) -> &'static str {
"TensorRT-LLM"
}
}
Source: text-generation-inference/backends/trtllm/src/looper.rs
/// Text Generation Inference benchmarking tool
///
/// Inspired by the great Oha app: https://github.com/hatoo/oha
/// and: https://github.com/orhun/rust-tui-template
use clap::Parser;
use std::path::Path;
use text_generation_client::v3::ShardedClient;
use tokenizers::{FromPretrainedParameters, Tokenizer};
use tracing_subscriber::layer::SubscriberExt;
use tracing_subscriber::util::SubscriberInitExt;
use tracing_subscriber::EnvFilter;
/// App Configuration
#[derive(Parser, Debug)]
#[clap(author, version, about, long_about = None)]
struct Args {
/// The name of the tokenizer (as in model_id on the huggingface hub, or local path).
#[clap(short, long, env)]
tokenizer_name: String,
/// The revision to use for the tokenizer if on the hub.
#[clap(default_value = "main", long, env)]
revision: String,
    /// The various batch sizes to benchmark. The idea is to get enough
    /// batching to start seeing increased latency: this usually means you're
    /// moving from memory bound (usual at BS=1) to compute bound, which is
    /// a sweet spot for the maximum batch size for the model under test
#[clap(short, long)]
batch_size: Option<Vec<u32>>,
    /// This is the length, in tokens, of the initial prompt sent to the
    /// text-generation-server. Longer prompts will slow down the benchmark.
    /// Usually the latency grows somewhat linearly with this for the prefill step.
///
/// Most importantly, the prefill step is usually not the one dominating
/// your runtime, so it's ok to keep it short.
#[clap(default_value = "10", short, long, env)]
sequence_length: u32,
    /// This is how many tokens will be generated by the server and averaged out
    /// to give the `decode` latency. This is the *critical* number you want to
    /// optimize for, since LLMs spend most of their time decoding.
///
/// Decode latency is usually quite stable.
#[clap(default_value = "8", short, long, env)]
decode_length: u32,
    /// How many runs to average over
#[clap(default_value = "10", short, long, env)]
runs: usize,
/// Number of warmup cycles
#[clap(default_value = "1", short, long, env)]
warmups: usize,
/// The location of the grpc socket. This benchmark tool bypasses the router
/// completely and directly talks to the gRPC processes
#[clap(default_value = "/tmp/text-generation-server-0", short, long, env)]
master_shard_uds_path: String,
/// Generation parameter in case you want to specifically test/debug particular
/// decoding strategies, for full doc refer to the `text-generation-server`
#[clap(long, env)]
temperature: Option<f32>,
/// Generation parameter in case you want to specifically test/debug particular
/// decoding strategies, for full doc refer to the `text-generation-server`
#[clap(long, env)]
top_k: Option<u32>,
/// Generation parameter in case you want to specifically test/debug particular
/// decoding strategies, for full doc refer to the `text-generation-server`
#[clap(long, env)]
top_p: Option<f32>,
/// Generation parameter in case you want to specifically test/debug particular
/// decoding strategies, for full doc refer to the `text-generation-server`
#[clap(long, env)]
typical_p: Option<f32>,
/// Generation parameter in case you want to specifically test/debug particular
/// decoding strategies, for full doc refer to the `text-generation-server`
#[clap(long, env)]
repetition_penalty: Option<f32>,
/// Generation parameter in case you want to specifically test/debug particular
/// decoding strategies, for full doc refer to the `text-generation-server`
#[clap(long, env)]
frequency_penalty: Option<f32>,
/// Generation parameter in case you want to specifically test/debug particular
/// decoding strategies, for full doc refer to the `text-generation-server`
#[clap(long, env)]
watermark: bool,
/// Generation parameter in case you want to specifically test/debug particular
/// decoding strategies, for full doc refer to the `text-generation-server`
#[clap(long, env)]
do_sample: bool,
/// Generation parameter in case you want to specifically test/debug particular
/// decoding strategies, for full doc refer to the `text-generation-server`
#[clap(long, env)]
top_n_tokens: Option<u32>,
}
fn main() -> Result<(), Box<dyn std::error::Error>> {
init_logging();
// Get args
let args = Args::parse();
// Pattern match configuration
let Args {
tokenizer_name,
revision,
batch_size,
sequence_length,
decode_length,
runs,
warmups,
temperature,
top_k,
top_p,
typical_p,
repetition_penalty,
frequency_penalty,
watermark,
do_sample,
master_shard_uds_path,
top_n_tokens,
} = args;
let batch_size = batch_size.unwrap_or(vec![1, 2, 4, 8, 16, 32]);
// Tokenizer instance
// This will only be used to validate payloads
tracing::info!("Loading tokenizer");
let local_path = Path::new(&tokenizer_name);
let tokenizer =
if local_path.exists() && local_path.is_dir() && local_path.join("tokenizer.json").exists()
{
// Load local tokenizer
tracing::info!("Found local tokenizer");
Tokenizer::from_file(local_path.join("tokenizer.json")).unwrap()
} else {
tracing::info!("Downloading tokenizer");
// Parse Huggingface hub token
let token = std::env::var("HF_TOKEN")
.or_else(|_| std::env::var("HUGGING_FACE_HUB_TOKEN"))
.ok();
// Download and instantiate tokenizer
// We need to download it outside of the Tokio runtime
let params = FromPretrainedParameters {
revision,
token,
..Default::default()
};
Tokenizer::from_pretrained(tokenizer_name.clone(), Some(params)).unwrap()
};
tracing::info!("Tokenizer loaded");
// Launch Tokio runtime
tokio::runtime::Builder::new_multi_thread()
.enable_all()
.build()
.unwrap()
.block_on(async {
// Instantiate sharded client from the master unix socket
tracing::info!("Connect to model server");
let mut sharded_client = ShardedClient::connect_uds(master_shard_uds_path)
.await
.expect("Could not connect to server");
// Clear the cache; useful if the webserver rebooted
sharded_client
.clear_cache(None)
.await
.expect("Unable to clear cache");
tracing::info!("Connected");
// Run app
text_generation_benchmark::run(
tokenizer_name,
tokenizer,
batch_size,
sequence_length,
decode_length,
top_n_tokens,
runs,
warmups,
temperature,
top_k,
top_p,
typical_p,
repetition_penalty,
frequency_penalty,
watermark,
do_sample,
sharded_client,
)
.await
.unwrap();
});
Ok(())
}
/// Init logging using LOG_LEVEL
fn init_logging() {
// STDOUT/STDERR layer
let fmt_layer = tracing_subscriber::fmt::layer()
.with_file(true)
.with_line_number(true);
// Filter events with LOG_LEVEL
let env_filter =
EnvFilter::try_from_env("LOG_LEVEL").unwrap_or_else(|_| EnvFilter::new("info"));
tracing_subscriber::registry()
.with(env_filter)
.with(fmt_layer)
.init();
}
Source: text-generation-inference/benchmark/src/main.rs
import os
import requests
from typing import Dict, Optional, List
from huggingface_hub.utils import build_hf_headers
from text_generation import Client, AsyncClient, __version__
from text_generation.types import DeployedModel
from text_generation.errors import NotSupportedError, parse_error
INFERENCE_ENDPOINT = os.environ.get(
"HF_INFERENCE_ENDPOINT", "https://api-inference.huggingface.co"
)
def deployed_models(headers: Optional[Dict] = None) -> List[DeployedModel]:
"""
    Get all currently deployed models with text-generation-inference support
Returns:
List[DeployedModel]: list of all currently deployed models
"""
resp = requests.get(
"https://api-inference.huggingface.co/framework/text-generation-inference",
headers=headers,
timeout=5,
)
payload = resp.json()
if resp.status_code != 200:
raise parse_error(resp.status_code, payload)
models = [DeployedModel(**raw_deployed_model) for raw_deployed_model in payload]
return models
def check_model_support(repo_id: str, headers: Optional[Dict] = None) -> bool:
"""
Check if a given model is supported by text-generation-inference
Returns:
bool: whether the model is supported by this client
"""
resp = requests.get(
f"https://api-inference.huggingface.co/status/{repo_id}",
headers=headers,
timeout=5,
)
payload = resp.json()
if resp.status_code != 200:
raise parse_error(resp.status_code, payload)
framework = payload["framework"]
supported = framework == "text-generation-inference"
return supported
class InferenceAPIClient(Client):
"""Client to make calls to the HuggingFace Inference API.
Only supports a subset of the available text-generation or text2text-generation models that are served using
text-generation-inference
Example:
```python
>>> from text_generation import InferenceAPIClient
>>> client = InferenceAPIClient("bigscience/bloomz")
>>> client.generate("Why is the sky blue?").generated_text
' Rayleigh scattering'
>>> result = ""
>>> for response in client.generate_stream("Why is the sky blue?"):
>>> if not response.token.special:
>>> result += response.token.text
>>> result
' Rayleigh scattering'
```
"""
def __init__(self, repo_id: str, token: Optional[str] = None, timeout: int = 10):
"""
Init headers and API information
Args:
repo_id (`str`):
Id of repository (e.g. `bigscience/bloom`).
token (`str`, `optional`):
The API token to use as HTTP bearer authorization. This is not
the authentication token. You can find the token in
https://huggingface.co/settings/token. Alternatively, you can
find both your organizations and personal API tokens using
`HfApi().whoami(token)`.
timeout (`int`):
Timeout in seconds
"""
headers = build_hf_headers(
token=token, library_name="text-generation", library_version=__version__
)
# Text Generation Inference client only supports a subset of the available hub models
if not check_model_support(repo_id, headers):
raise NotSupportedError(repo_id)
base_url = f"{INFERENCE_ENDPOINT}/models/{repo_id}"
super(InferenceAPIClient, self).__init__(
base_url, headers=headers, timeout=timeout
)
class InferenceAPIAsyncClient(AsyncClient):
"""Aynschronous Client to make calls to the HuggingFace Inference API.
Only supports a subset of the available text-generation or text2text-generation models that are served using
text-generation-inference
Example:
```python
>>> from text_generation import InferenceAPIAsyncClient
>>> client = InferenceAPIAsyncClient("bigscience/bloomz")
>>> response = await client.generate("Why is the sky blue?")
>>> response.generated_text
' Rayleigh scattering'
>>> result = ""
>>> async for response in client.generate_stream("Why is the sky blue?"):
>>> if not response.token.special:
>>> result += response.token.text
>>> result
' Rayleigh scattering'
```
"""
def __init__(self, repo_id: str, token: Optional[str] = None, timeout: int = 10):
"""
Init headers and API information
Args:
repo_id (`str`):
Id of repository (e.g. `bigscience/bloom`).
token (`str`, `optional`):
The API token to use as HTTP bearer authorization. This is not
the authentication token. You can find the token in
https://huggingface.co/settings/token. Alternatively, you can
find both your organizations and personal API tokens using
`HfApi().whoami(token)`.
timeout (`int`):
Timeout in seconds
"""
headers = build_hf_headers(
token=token, library_name="text-generation", library_version=__version__
)
# Text Generation Inference client only supports a subset of the available hub models
if not check_model_support(repo_id, headers):
raise NotSupportedError(repo_id)
base_url = f"{INFERENCE_ENDPOINT}/models/{repo_id}"
super(InferenceAPIAsyncClient, self).__init__(
base_url, headers=headers, timeout=timeout
)
Source: text-generation-inference/clients/python/text_generation/inference_api.py
# Tensor Parallelism
Tensor parallelism is a technique used to fit a large model across multiple GPUs. For example, when multiplying the input tensors with the first weight tensor, the matrix multiplication is equivalent to splitting the weight tensor column-wise, multiplying each column with the input separately, and then concatenating the separate outputs. These outputs are then transferred from the GPUs and concatenated together to get the final result, like below 👇

<Tip warning={true}>
Tensor parallelism only works for [officially supported models](../supported_models); it will not work when falling back to `transformers`. You can get more information about unsupported models [here](../basic_tutorials/non_core_models).
</Tip>
You can learn a lot more details about tensor-parallelism from [the `transformers` docs](https://huggingface.co/docs/transformers/main/en/perf_train_gpu_many#tensor-parallelism).
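The column-split equivalence described above can be checked numerically. The snippet below is an illustrative sketch (not TGI's actual implementation): it simulates two "GPUs" each holding half of the weight matrix's columns and verifies that concatenating their partial outputs reproduces the full matrix multiplication.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))  # input activations
w = rng.standard_normal((8, 6))  # weight matrix

full = x @ w  # single-device result

# Simulate two devices, each holding half the columns of w.
shards = np.split(w, 2, axis=1)
partials = [x @ shard for shard in shards]   # computed independently per device
combined = np.concatenate(partials, axis=1)  # the "gather" step

assert np.allclose(full, combined)
```

In a real deployment the gather step is a cross-GPU communication (e.g. an all-gather), which is why tensor parallelism benefits from fast interconnects such as NVLink.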
[Source: text-generation-inference/docs/source/conceptual/tensor_parallelism.md]
{
"nodes": {
"cachix": {
"inputs": {
"devenv": [
"crate2nix"
],
"flake-compat": [
"crate2nix"
],
"nixpkgs": "nixpkgs",
"pre-commit-hooks": [
"crate2nix"
]
},
"locked": {
"lastModified": 1709700175,
"narHash": "sha256-A0/6ZjLmT9qdYzKHmevnEIC7G+GiZ4UCr8v0poRPzds=",
"owner": "cachix",
"repo": "cachix",
"rev": "be97b37989f11b724197b5f4c7ffd78f12c8c4bf",
"type": "github"
},
"original": {
"owner": "cachix",
"ref": "latest",
"repo": "cachix",
"type": "github"
}
},
"cachix_2": {
"inputs": {
"devenv": [
"crate2nix",
"crate2nix_stable"
],
"flake-compat": [
"crate2nix",
"crate2nix_stable"
],
"nixpkgs": "nixpkgs_2",
"pre-commit-hooks": [
"crate2nix",
"crate2nix_stable"
]
},
"locked": {
"lastModified": 1716549461,
"narHash": "sha256-lHy5kgx6J8uD+16SO47dPrbob98sh+W1tf4ceSqPVK4=",
"owner": "cachix",
"repo": "cachix",
"rev": "e2bb269fb8c0828d5d4d2d7b8d09ea85abcacbd4",
"type": "github"
},
"original": {
"owner": "cachix",
"ref": "latest",
"repo": "cachix",
"type": "github"
}
},
"cachix_3": {
"inputs": {
"devenv": [
"crate2nix",
"crate2nix_stable",
"crate2nix_stable"
],
"flake-compat": [
"crate2nix",
"crate2nix_stable",
"crate2nix_stable"
],
"nixpkgs": "nixpkgs_3",
"pre-commit-hooks": [
"crate2nix",
"crate2nix_stable",
"crate2nix_stable"
]
},
"locked": {
"lastModified": 1716549461,
"narHash": "sha256-lHy5kgx6J8uD+16SO47dPrbob98sh+W1tf4ceSqPVK4=",
"owner": "cachix",
"repo": "cachix",
"rev": "e2bb269fb8c0828d5d4d2d7b8d09ea85abcacbd4",
"type": "github"
},
"original": {
"owner": "cachix",
"ref": "latest",
"repo": "cachix",
"type": "github"
}
},
"crate2nix": {
"inputs": {
"cachix": "cachix",
"crate2nix_stable": "crate2nix_stable",
"devshell": "devshell_3",
"flake-compat": "flake-compat_3",
"flake-parts": "flake-parts_3",
"nix-test-runner": "nix-test-runner_3",
"nixpkgs": [
"hf-nix",
"nixpkgs"
],
"pre-commit-hooks": "pre-commit-hooks_3"
},
"locked": {
"lastModified": 1739473963,
"narHash": "sha256-ItAhpjNUzEWd/cgZVyW/jvoGbCec4TK29e1Mnmn1oJE=",
"owner": "nix-community",
"repo": "crate2nix",
"rev": "be31feae9a82c225c0fd1bdf978565dc452a483a",
"type": "github"
},
"original": {
"owner": "nix-community",
"repo": "crate2nix",
"type": "github"
}
},
"crate2nix_stable": {
"inputs": {
"cachix": "cachix_2",
"crate2nix_stable": "crate2nix_stable_2",
"devshell": "devshell_2",
"flake-compat": "flake-compat_2",
"flake-parts": "flake-parts_2",
"nix-test-runner": "nix-test-runner_2",
"nixpkgs": "nixpkgs_5",
"pre-commit-hooks": "pre-commit-hooks_2"
},
"locked": {
"lastModified": 1719760004,
"narHash": "sha256-esWhRnt7FhiYq0CcIxw9pvH+ybOQmWBfHYMtleaMhBE=",
"owner": "nix-community",
"repo": "crate2nix",
"rev": "1dee214bb20855fa3e1e7bb98d28922ddaff8c57",
"type": "github"
},
"original": {
"owner": "nix-community",
"ref": "0.14.1",
"repo": "crate2nix",
"type": "github"
}
},
"crate2nix_stable_2": {
"inputs": {
"cachix": "cachix_3",
"crate2nix_stable": "crate2nix_stable_3",
"devshell": "devshell",
"flake-compat": "flake-compat",
"flake-parts": "flake-parts",
"nix-test-runner": "nix-test-runner",
"nixpkgs": "nixpkgs_4",
"pre-commit-hooks": "pre-commit-hooks"
},
"locked": {
"lastModified": 1712821484,
"narHash": "sha256-rGT3CW64cJS9nlnWPFWSc1iEa3dNZecVVuPVGzcsHe8=",
"owner": "nix-community",
"repo": "crate2nix",
"rev": "42883afcad3823fa5811e967fb7bff54bc3c9d6d",
"type": "github"
},
"original": {
"owner": "nix-community",
"ref": "0.14.0",
"repo": "crate2nix",
"type": "github"
}
},
"crate2nix_stable_3": {
"inputs": {
"flake-utils": "flake-utils"
},
"locked": {
"lastModified": 1702842982,
"narHash": "sha256-A9AowkHIjsy1a4LuiPiVP88FMxyCWK41flZEZOUuwQM=",
"owner": "nix-community",
"repo": "crate2nix",
"rev": "75ac2973affa6b9b4f661a7b592cba6e4f51d426",
"type": "github"
},
"original": {
"owner": "nix-community",
"ref": "0.12.0",
"repo": "crate2nix",
"type": "github"
}
},
"devshell": {
"inputs": {
"flake-utils": "flake-utils_2",
"nixpkgs": [
"crate2nix",
"crate2nix_stable",
"crate2nix_stable",
"nixpkgs"
]
},
"locked": {
"lastModified": 1717408969,
"narHash": "sha256-Q0OEFqe35fZbbRPPRdrjTUUChKVhhWXz3T9ZSKmaoVY=",
"owner": "numtide",
"repo": "devshell",
"rev": "1ebbe68d57457c8cae98145410b164b5477761f4",
"type": "github"
},
"original": {
"owner": "numtide",
"repo": "devshell",
"type": "github"
}
},
"devshell_2": {
"inputs": {
"flake-utils": "flake-utils_3",
"nixpkgs": [
"crate2nix",
"crate2nix_stable",
"nixpkgs"
]
},
"locked": {
"lastModified": 1717408969,
"narHash": "sha256-Q0OEFqe35fZbbRPPRdrjTUUChKVhhWXz3T9ZSKmaoVY=",
"owner": "numtide",
"repo": "devshell",
"rev": "1ebbe68d57457c8cae98145410b164b5477761f4",
"type": "github"
},
"original": {
"owner": "numtide",
"repo": "devshell",
"type": "github"
}
},
"devshell_3": {
"inputs": {
"flake-utils": "flake-utils_4",
"nixpkgs": [
"crate2nix",
"nixpkgs"
]
},
"locked": {
"lastModified": 1711099426,
"narHash": "sha256-HzpgM/wc3aqpnHJJ2oDqPBkNsqWbW0WfWUO8lKu8nGk=",
"owner": "numtide",
"repo": "devshell",
"rev": "2d45b54ca4a183f2fdcf4b19c895b64fbf620ee8",
"type": "github"
},
"original": {
"owner": "numtide",
"repo": "devshell",
"type": "github"
}
},
"flake-compat": {
"locked": {
"lastModified": 1696426674,
"narHash": "sha256-kvjfFW7WAETZlt09AgDn1MrtKzP7t90Vf7vypd3OL1U=",
"rev": "0f9255e01c2351cc7d116c072cb317785dd33b33",
"revCount": 57,
"type": "tarball",
"url": "https://api.flakehub.com/f/pinned/edolstra/flake-compat/1.0.1/018afb31-abd1-7bff-a5e4-cff7e18efb7a/source.tar.gz"
},
"original": {
"type": "tarball",
"url": "https://flakehub.com/f/edolstra/flake-compat/1.tar.gz"
}
},
"flake-compat_2": {
"locked": {
"lastModified": 1696426674,
"narHash": "sha256-kvjfFW7WAETZlt09AgDn1MrtKzP7t90Vf7vypd3OL1U=",
"rev": "0f9255e01c2351cc7d116c072cb317785dd33b33",
"revCount": 57,
"type": "tarball",
"url": "https://api.flakehub.com/f/pinned/edolstra/flake-compat/1.0.1/018afb31-abd1-7bff-a5e4-cff7e18efb7a/source.tar.gz"
},
"original": {
"type": "tarball",
"url": "https://flakehub.com/f/edolstra/flake-compat/1.tar.gz"
}
},
"flake-compat_3": {
"locked": {
"lastModified": 1696426674,
"narHash": "sha256-kvjfFW7WAETZlt09AgDn1MrtKzP7t90Vf7vypd3OL1U=",
"rev": "0f9255e01c2351cc7d116c072cb317785dd33b33",
"revCount": 57,
"type": "tarball",
"url": "https://api.flakehub.com/f/pinned/edolstra/flake-compat/1.0.1/018afb31-abd1-7bff-a5e4-cff7e18efb7a/source.tar.gz"
},
"original": {
"type": "tarball",
"url": "https://flakehub.com/f/edolstra/flake-compat/1.tar.gz"
}
},
"flake-compat_4": {
"locked": {
"lastModified": 1733328505,
"narHash": "sha256-NeCCThCEP3eCl2l/+27kNNK7QrwZB1IJCrXfrbv5oqU=",
"owner": "edolstra",
"repo": "flake-compat",
"rev": "ff81ac966bb2cae68946d5ed5fc4994f96d0ffec",
"type": "github"
},
"original": {
"owner": "edolstra",
"repo": "flake-compat",
"type": "github"
}
},
"flake-parts": {
"inputs": {
"nixpkgs-lib": [
"crate2nix",
"crate2nix_stable",
"crate2nix_stable",
"nixpkgs"
]
},
"locked": {
"lastModified": 1719745305,
"narHash": "sha256-xwgjVUpqSviudEkpQnioeez1Uo2wzrsMaJKJClh+Bls=",
"owner": "hercules-ci",
"repo": "flake-parts",
"rev": "c3c5ecc05edc7dafba779c6c1a61cd08ac6583e9",
"type": "github"
},
"original": {
"owner": "hercules-ci",
"repo": "flake-parts",
"type": "github"
}
},
"flake-parts_2": {
"inputs": {
"nixpkgs-lib": [
"crate2nix",
"crate2nix_stable",
"nixpkgs"
]
},
"locked": {
"lastModified": 1719745305,
"narHash": "sha256-xwgjVUpqSviudEkpQnioeez1Uo2wzrsMaJKJClh+Bls=",
"owner": "hercules-ci",
"repo": "flake-parts",
"rev": "c3c5ecc05edc7dafba779c6c1a61cd08ac6583e9",
"type": "github"
},
"original": {
"owner": "hercules-ci",
"repo": "flake-parts",
"type": "github"
}
},
"flake-parts_3": {
"inputs": {
"nixpkgs-lib": [
"crate2nix",
"nixpkgs"
]
},
"locked": {
"lastModified": 1712014858,
"narHash": "sha256-sB4SWl2lX95bExY2gMFG5HIzvva5AVMJd4Igm+GpZNw=",
"owner": "hercules-ci",
"repo": "flake-parts",
"rev": "9126214d0a59633752a136528f5f3b9aa8565b7d",
"type": "github"
},
"original": {
"owner": "hercules-ci",
"repo": "flake-parts",
"type": "github"
}
},
"flake-utils": {
"inputs": {
"systems": "systems"
},
"locked": {
"lastModified": 1694529238,
"narHash": "sha256-zsNZZGTGnMOf9YpHKJqMSsa0dXbfmxeoJ7xHlrt+xmY=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "ff7b65b44d01cf9ba6a71320833626af21126384",
"type": "github"
},
"original": {
"owner": "numtide",
"repo": "flake-utils",
"type": "github"
}
},
"flake-utils_2": {
"inputs": {
"systems": "systems_2"
},
"locked": {
"lastModified": 1701680307,
"narHash": "sha256-kAuep2h5ajznlPMD9rnQyffWG8EM/C73lejGofXvdM8=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "4022d587cbbfd70fe950c1e2083a02621806a725",
"type": "github"
},
"original": {
"owner": "numtide",
"repo": "flake-utils",
"type": "github"
}
},
"flake-utils_3": {
"inputs": {
"systems": "systems_3"
},
"locked": {
"lastModified": 1701680307,
"narHash": "sha256-kAuep2h5ajznlPMD9rnQyffWG8EM/C73lejGofXvdM8=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "4022d587cbbfd70fe950c1e2083a02621806a725",
"type": "github"
},
"original": {
"owner": "numtide",
"repo": "flake-utils",
"type": "github"
}
},
"flake-utils_4": {
"inputs": {
"systems": "systems_4"
},
"locked": {
"lastModified": 1701680307,
"narHash": "sha256-kAuep2h5ajznlPMD9rnQyffWG8EM/C73lejGofXvdM8=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "4022d587cbbfd70fe950c1e2083a02621806a725",
"type": "github"
},
"original": {
"owner": "numtide",
"repo": "flake-utils",
"type": "github"
}
},
"flake-utils_5": {
"inputs": {
"systems": "systems_5"
},
"locked": {
"lastModified": 1710146030,
"narHash": "sha256-SZ5L6eA7HJ/nmkzGG7/ISclqe6oZdOZTNoesiInkXPQ=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "b1d9ab70662946ef0850d488da1c9019f3a9752a",
"type": "github"
},
"original": {
"owner": "numtide",
"repo": "flake-utils",
"type": "github"
}
},
"flake-utils_6": {
"inputs": {
"systems": "systems_6"
},
"locked": {
"lastModified": 1731533236,
"narHash": "sha256-l0KFg5HjrsfsO/JpG+r7fRrqm12kzFHyUHqHCVpMMbI=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "11707dc2f618dd54ca8739b309ec4fc024de578b",
"type": "github"
},
"original": {
"owner": "numtide",
"repo": "flake-utils",
"type": "github"
}
},
"flake-utils_7": {
"inputs": {
"systems": "systems_7"
},
"locked": {
"lastModified": 1731533236,
"narHash": "sha256-l0KFg5HjrsfsO/JpG+r7fRrqm12kzFHyUHqHCVpMMbI=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "11707dc2f618dd54ca8739b309ec4fc024de578b",
"type": "github"
},
"original": {
"owner": "numtide",
"repo": "flake-utils",
"type": "github"
}
},
"gitignore": {
"inputs": {
"nixpkgs": [
"crate2nix",
"crate2nix_stable",
"crate2nix_stable",
"pre-commit-hooks",
"nixpkgs"
]
},
"locked": {
"lastModified": 1709087332,
"narHash": "sha256-HG2cCnktfHsKV0s4XW83gU3F57gaTljL9KNSuG6bnQs=",
"owner": "hercules-ci",
"repo": "gitignore.nix",
"rev": "637db329424fd7e46cf4185293b9cc8c88c95394",
"type": "github"
},
"original": {
"owner": "hercules-ci",
"repo": "gitignore.nix",
"type": "github"
}
},
"gitignore_2": {
"inputs": {
"nixpkgs": [
"crate2nix",
"crate2nix_stable",
"pre-commit-hooks",
"nixpkgs"
]
},
"locked": {
"lastModified": 1709087332,
"narHash": "sha256-HG2cCnktfHsKV0s4XW83gU3F57gaTljL9KNSuG6bnQs=",
"owner": "hercules-ci",
"repo": "gitignore.nix",
"rev": "637db329424fd7e46cf4185293b9cc8c88c95394",
"type": "github"
},
"original": {
"owner": "hercules-ci",
"repo": "gitignore.nix",
"type": "github"
}
},
"gitignore_3": {
"inputs": {
"nixpkgs": [
"crate2nix",
"pre-commit-hooks",
"nixpkgs"
]
},
"locked": {
"lastModified": 1709087332,
"narHash": "sha256-HG2cCnktfHsKV0s4XW83gU3F57gaTljL9KNSuG6bnQs=",
"owner": "hercules-ci",
"repo": "gitignore.nix",
"rev": "637db329424fd7e46cf4185293b9cc8c88c95394",
"type": "github"
},
"original": {
"owner": "hercules-ci",
"repo": "gitignore.nix",
"type": "github"
}
},
"hf-nix": {
"inputs": {
"flake-compat": "flake-compat_4",
"flake-utils": "flake-utils_7",
"nixpkgs": "nixpkgs_6"
},
"locked": {
"lastModified": 1747919133,
"narHash": "sha256-VvF1naQOvv7yulQ5/cDiaxkNxlh1Y84QMZnderv1szk=",
"owner": "huggingface",
"repo": "hf-nix",
"rev": "9c71e026d6c7c8588ef85a5f7c77f57d598e038c",
"type": "github"
},
"original": {
"owner": "huggingface",
"repo": "hf-nix",
"type": "github"
}
},
"nix-filter": {
"locked": {
"lastModified": 1731533336,
"narHash": "sha256-oRam5PS1vcrr5UPgALW0eo1m/5/pls27Z/pabHNy2Ms=",
"owner": "numtide",
"repo": "nix-filter",
"rev": "f7653272fd234696ae94229839a99b73c9ab7de0",
"type": "github"
},
"original": {
"owner": "numtide",
"repo": "nix-filter",
"type": "github"
}
},
"nix-test-runner": {
"flake": false,
"locked": {
"lastModified": 1588761593,
"narHash": "sha256-FKJykltAN/g3eIceJl4SfDnnyuH2jHImhMrXS2KvGIs=",
"owner": "stoeffel",
"repo": "nix-test-runner",
"rev": "c45d45b11ecef3eb9d834c3b6304c05c49b06ca2",
"type": "github"
},
"original": {
"owner": "stoeffel",
"repo": "nix-test-runner",
"type": "github"
}
},
"nix-test-runner_2": {
"flake": false,
"locked": {
"lastModified": 1588761593,
"narHash": "sha256-FKJykltAN/g3eIceJl4SfDnnyuH2jHImhMrXS2KvGIs=",
"owner": "stoeffel",
"repo": "nix-test-runner",
"rev": "c45d45b11ecef3eb9d834c3b6304c05c49b06ca2",
"type": "github"
},
"original": {
"owner": "stoeffel",
"repo": "nix-test-runner",
"type": "github"
}
},
"nix-test-runner_3": {
"flake": false,
"locked": {
"lastModified": 1588761593,
"narHash": "sha256-FKJykltAN/g3eIceJl4SfDnnyuH2jHImhMrXS2KvGIs=",
"owner": "stoeffel",
"repo": "nix-test-runner",
"rev": "c45d45b11ecef3eb9d834c3b6304c05c49b06ca2",
"type": "github"
},
"original": {
"owner": "stoeffel",
"repo": "nix-test-runner",
"type": "github"
}
},
"nixpkgs": {
"locked": {
"lastModified": 1700612854,
"narHash": "sha256-yrQ8osMD+vDLGFX7pcwsY/Qr5PUd6OmDMYJZzZi0+zc=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "19cbff58383a4ae384dea4d1d0c823d72b49d614",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixos-unstable",
"repo": "nixpkgs",
"type": "github"
}
},
"nixpkgs_2": {
"locked": {
"lastModified": 1715534503,
"narHash": "sha256-5ZSVkFadZbFP1THataCaSf0JH2cAH3S29hU9rrxTEqk=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "2057814051972fa1453ddfb0d98badbea9b83c06",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixos-unstable",
"repo": "nixpkgs",
"type": "github"
}
},
"nixpkgs_3": {
"locked": {
"lastModified": 1715534503,
"narHash": "sha256-5ZSVkFadZbFP1THataCaSf0JH2cAH3S29hU9rrxTEqk=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "2057814051972fa1453ddfb0d98badbea9b83c06",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixos-unstable",
"repo": "nixpkgs",
"type": "github"
}
},
"nixpkgs_4": {
"locked": {
"lastModified": 1719506693,
"narHash": "sha256-C8e9S7RzshSdHB7L+v9I51af1gDM5unhJ2xO1ywxNH8=",
"path": "/nix/store/4p0avw1s3vf27hspgqsrqs37gxk4i83i-source",
"rev": "b2852eb9365c6de48ffb0dc2c9562591f652242a",
"type": "path"
},
"original": {
"id": "nixpkgs",
"type": "indirect"
}
},
"nixpkgs_5": {
"locked": {
"lastModified": 1719506693,
"narHash": "sha256-C8e9S7RzshSdHB7L+v9I51af1gDM5unhJ2xO1ywxNH8=",
"path": "/nix/store/4p0avw1s3vf27hspgqsrqs37gxk4i83i-source",
"rev": "b2852eb9365c6de48ffb0dc2c9562591f652242a",
"type": "path"
},
"original": {
"id": "nixpkgs",
"type": "indirect"
}
},
"nixpkgs_6": {
"locked": {
"lastModified": 1747820358,
"narHash": "sha256-fTqsZsUX6M3yeEvgyQvXcbGmT2CaRVyVwsi8eK29Oj4=",
"owner": "danieldk",
"repo": "nixpkgs",
"rev": "d3c1681180717528068082103bf323147de6ab0b",
"type": "github"
},
"original": {
"owner": "danieldk",
"ref": "cudatoolkit-12.9-kernel-builder",
"repo": "nixpkgs",
"type": "github"
}
},
"pre-commit-hooks": {
"inputs": {
"flake-compat": [
"crate2nix",
"crate2nix_stable",
"crate2nix_stable",
"flake-compat"
],
"gitignore": "gitignore",
"nixpkgs": [
"crate2nix",
"crate2nix_stable",
"crate2nix_stable",
"nixpkgs"
],
"nixpkgs-stable": [
"crate2nix",
"crate2nix_stable",
"crate2nix_stable",
"nixpkgs"
]
},
"locked": {
"lastModified": 1719259945,
"narHash": "sha256-F1h+XIsGKT9TkGO3omxDLEb/9jOOsI6NnzsXFsZhry4=",
"owner": "cachix",
"repo": "pre-commit-hooks.nix",
"rev": "0ff4381bbb8f7a52ca4a851660fc7a437a4c6e07",
"type": "github"
},
"original": {
"owner": "cachix",
"repo": "pre-commit-hooks.nix",
"type": "github"
}
},
"pre-commit-hooks_2": {
"inputs": {
"flake-compat": [
"crate2nix",
"crate2nix_stable",
"flake-compat"
],
"gitignore": "gitignore_2",
"nixpkgs": [
"crate2nix",
"crate2nix_stable",
"nixpkgs"
],
"nixpkgs-stable": [
"crate2nix",
"crate2nix_stable",
"nixpkgs"
]
},
"locked": {
"lastModified": 1719259945,
"narHash": "sha256-F1h+XIsGKT9TkGO3omxDLEb/9jOOsI6NnzsXFsZhry4=",
"owner": "cachix",
"repo": "pre-commit-hooks.nix",
"rev": "0ff4381bbb8f7a52ca4a851660fc7a437a4c6e07",
"type": "github"
},
"original": {
"owner": "cachix",
"repo": "pre-commit-hooks.nix",
"type": "github"
}
},
"pre-commit-hooks_3": {
"inputs": {
"flake-compat": [
"crate2nix",
"flake-compat"
],
"flake-utils": "flake-utils_5",
"gitignore": "gitignore_3",
"nixpkgs": [
"crate2nix",
"nixpkgs"
],
"nixpkgs-stable": [
"crate2nix",
"nixpkgs"
]
},
"locked": {
"lastModified": 1712055707,
"narHash": "sha256-4XLvuSIDZJGS17xEwSrNuJLL7UjDYKGJSbK1WWX2AK8=",
"owner": "cachix",
"repo": "pre-commit-hooks.nix",
"rev": "e35aed5fda3cc79f88ed7f1795021e559582093a",
"type": "github"
},
"original": {
"owner": "cachix",
"repo": "pre-commit-hooks.nix",
"type": "github"
}
},
"root": {
"inputs": {
"crate2nix": "crate2nix",
"flake-utils": "flake-utils_6",
"hf-nix": "hf-nix",
"nix-filter": "nix-filter",
"nixpkgs": [
"hf-nix",
"nixpkgs"
],
"rust-overlay": "rust-overlay"
}
},
"rust-overlay": {
"inputs": {
"nixpkgs": [
"hf-nix",
"nixpkgs"
]
},
"locked": {
"lastModified": 1743993291,
"narHash": "sha256-u8GHvduU1gCtoFXvTS/wGjH1ouv5S/GRGq6MAT+sG/k=",
"owner": "oxalica",
"repo": "rust-overlay",
"rev": "0cb3c8979c65dc6a5812dfe67499a8c7b8b4325b",
"type": "github"
},
"original": {
"owner": "oxalica",
"repo": "rust-overlay",
"type": "github"
}
},
"systems": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
},
"systems_2": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
},
"systems_3": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
},
"systems_4": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
},
"systems_5": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
},
"systems_6": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
},
"systems_7": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
}
},
"root": "root",
"version": 7
}
[Source: text-generation-inference/flake.lock]
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "As of your last question, the weather in Brooklyn, New York, is typically hot and humid throughout the year. The suburbs around New York City are jealously sheltered, and at least in the Lower Bronx, there are very few outdoor environments to appreciate nature.\n\nIn terms of temperature, the warmest times of the year are from June to August, when average high temperatures typically range from around 73ยฐF or 23ยฐC",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1724792495,
"id": "",
"model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 100,
"prompt_tokens": 61,
"total_tokens": 161
}
}
[Source: text-generation-inference/integration-tests/models/__snapshots__/test_chat_llama/test_flash_llama_simple.json]
{
"details": {
"best_of_sequences": null,
"finish_reason": "length",
"generated_tokens": 10,
"prefill": [],
"seed": 0,
"tokens": [
{
"id": 5380,
"logprob": -0.23840332,
"special": false,
"text": "?\n"
},
{
"id": 34564,
"logprob": 0.0,
"special": false,
"text": "Deep"
},
{
"id": 6975,
"logprob": 0.0,
"special": false,
"text": " learning"
},
{
"id": 11,
"logprob": 0.0,
"special": false,
"text": ","
},
{
"id": 1101,
"logprob": -1.2011719,
"special": false,
"text": " also"
},
{
"id": 3967,
"logprob": 0.0,
"special": false,
"text": " known"
},
{
"id": 439,
"logprob": 0.0,
"special": false,
"text": " as"
},
{
"id": 30828,
"logprob": 0.0,
"special": false,
"text": " neural"
},
{
"id": 4009,
"logprob": -0.6777344,
"special": false,
"text": " network"
},
{
"id": 477,
"logprob": 0.0,
"special": false,
"text": " or"
}
],
"top_tokens": null
},
"generated_text": "What is deep learning?\nDeep learning, also known as neural network or"
}
[Source: text-generation-inference/integration-tests/models/__snapshots__/test_compressed_tensors_w8an_fp/test_compressed_tensors_w8an_all_params.json]
{
"details": {
"best_of_sequences": null,
"finish_reason": "eos_token",
"generated_tokens": 4,
"prefill": [],
"seed": 0,
"tokens": [
{
"id": 2143,
"logprob": -1.828125,
"special": false,
"text": " sent"
},
{
"id": 10081,
"logprob": -0.41210938,
"special": false,
"text": " successfully"
},
{
"id": 13,
"logprob": 0.0,
"special": false,
"text": "."
},
{
"id": 100001,
"logprob": -0.16015625,
"special": true,
"text": "<๏ฝendโofโsentence๏ฝ>"
}
],
"top_tokens": null
},
"generated_text": "Test request sent successfully."
}
[Source: text-generation-inference/integration-tests/models/__snapshots__/test_flash_deepseek_v2/test_flash_deepseek_v2_all_params.json]
{
"choices": [
{
"finish_reason": "stop",
"index": 0,
"logprobs": null,
"message": {
"content": "That's a fantastic question! However, the image doesn't show a dog. It shows a **Brown Swiss cow** standing on a beach. \n\nBrown Swiss cows are known for their beautiful reddish-brown coats and distinctive white markings. \n\nIf you'd like, you can send me another image, and I'll do my best to identify the animal in it!",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1747216080,
"id": "",
"model": "google/gemma-3-4b-it",
"object": "chat.completion",
"system_fingerprint": "3.3.4-dev0-native",
"usage": {
"completion_tokens": 80,
"prompt_tokens": 279,
"total_tokens": 359
}
}
[Source: text-generation-inference/integration-tests/models/__snapshots__/test_flash_gemma3/test_flash_gemma3_image_cow_dog.json]
[
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "Jeff Walker's Product Launch Formula is a comprehensive system",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085326,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 69,
"total_tokens": 79
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "Here are three key indicators to determine if a customer",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085331,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 52,
"total_tokens": 62
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "You can use the `String.format()` method in",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085326,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 97,
"total_tokens": 107
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "In a realm of binary mysticism, we find",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085331,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 126,
"total_tokens": 136
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "The `dummy` variable is being used to consume",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085336,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 305,
"total_tokens": 315
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "You can add multiple new columns in Power Query (",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085331,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 51,
"total_tokens": 61
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "There are many exciting new technologies emerging across various fields",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085336,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 52,
"total_tokens": 62
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "Poly Ether Ether Ketone (PEEK) is",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085326,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 40,
"total_tokens": 50
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "Here's a technical overview of a referral system similar",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085326,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 85,
"total_tokens": 95
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "Here's an example of how you can add an",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085336,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 45,
"total_tokens": 55
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "I'd be happy to help with Java. What",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085326,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 43,
"total_tokens": 53
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "I can help you plan a road trip from Pune",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085326,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 82,
"total_tokens": 92
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "I'd be happy to explain more about a topic",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085331,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 38,
"total_tokens": 48
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "I'd be happy to help you brainstorm and provide",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085326,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 47,
"total_tokens": 57
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "Implementing a Minesweeper algorithm using algebraic",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085326,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 54,
"total_tokens": 64
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "There are several issues with the provided code:\n\n1",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085331,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 375,
"total_tokens": 385
}
},
{
"choices": [
{
"finish_reason": "stop",
"index": 0,
"logprobs": null,
"message": {
"content": ";)",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085330,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 2,
"prompt_tokens": 105,
"total_tokens": 107
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "As I delved into the world of high-st",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085331,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 2097,
"total_tokens": 2107
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "/u/CruxHub: Hi, I'm",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085336,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 2614,
"total_tokens": 2624
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "To simulate a conversation between Alice and /u/C",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085331,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 1070,
"total_tokens": 1080
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "Alice: Hey /u/CruxHub,",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085336,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 1847,
"total_tokens": 1857
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "Alice: Hi /u/CruxHub,",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085331,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 1849,
"total_tokens": 1859
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "/u/CruxHub: Hey Alice, I",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085336,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 1004,
"total_tokens": 1014
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "/u/CruxHub: Hey Alice, I",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085331,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 1100,
"total_tokens": 1110
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "/u/CruxHub: Hey Alice, I",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085336,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 1044,
"total_tokens": 1054
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "The Dogme approach and the Lexical Approach are",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085336,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 54,
"total_tokens": 64
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "Implementing a netfilter in Linux with a Rust",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085336,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 48,
"total_tokens": 58
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "Damage to the Ulnar nerve can cause numb",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085331,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 56,
"total_tokens": 66
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "The Space Shuttle's Reaction Control System (RCS",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085331,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 50,
"total_tokens": 60
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "I can provide you with a basic Python script that",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085326,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 65,
"total_tokens": 75
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "Farming meat has several negative impacts on the environment",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085331,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 43,
"total_tokens": 53
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "The photograph filter you're referring to is called \"",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085326,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 51,
"total_tokens": 61
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "Here's a sample geological database structure with some example",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085326,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 59,
"total_tokens": 69
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "**Web Marketing: A Simplified Explanation**\n\nWeb",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085326,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 45,
"total_tokens": 55
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "Here's a rewritten and improved version of the story",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085336,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 447,
"total_tokens": 457
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "Here are the questions rewritten in a more conversational",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085331,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 168,
"total_tokens": 178
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "**Learning Progress: 0%**\n\n| Topic",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085331,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 216,
"total_tokens": 226
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "I couldn't find any information on a person named",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085326,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 44,
"total_tokens": 54
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "Here's a list of the largest outdoor retailers in",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085326,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 43,
"total_tokens": 53
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "To create a WordPress shortcode that includes Facebook SDK code",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085326,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 49,
"total_tokens": 59
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "The sentence is mostly grammatically correct, but there",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085326,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 78,
"total_tokens": 88
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "I'd be happy to engage in a debate with",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085326,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 59,
"total_tokens": 69
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "I'd love to hear about your business. As",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085336,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 64,
"total_tokens": 74
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "I'll wait for your request to proceed with part",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085331,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 2410,
"total_tokens": 2420
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "The final part of the Day Sculpting program emphasizes",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085336,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 2699,
"total_tokens": 2709
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "**Analysis of the Coming of Age Story Archetype",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085336,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 349,
"total_tokens": 359
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "The Apostle John is one of the most prominent figures",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085326,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 49,
"total_tokens": 59
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "To build a Google Places autocomplete feature on Jetpack",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085336,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 427,
"total_tokens": 437
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "The information provided does not mention the captain's name",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085331,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 169,
"total_tokens": 179
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "The metaverse is a shared, immersive and interactive",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085336,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 39,
"total_tokens": 49
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "Here are some ideas for a series of articles for",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085326,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 50,
"total_tokens": 60
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "\"Purim Palooza Alert: \n\nTo",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085331,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 78,
"total_tokens": 88
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "**Summary of the paper in 10 points:",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085331,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 2022,
"total_tokens": 2032
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "You'll provide three pieces of text, and then",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085336,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 58,
"total_tokens": 68
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "I'm ready to proceed with text 3.",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085336,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 1650,
"total_tokens": 1660
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "I'm ready to answer questions on Text 1",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085326,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 1116,
"total_tokens": 1126
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "This is a Solidity contract written in the older",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085331,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 334,
"total_tokens": 344
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "**Speech Recognition and Synthesis using Python**\n\nTo",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085326,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 84,
"total_tokens": 94
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "I'd be happy to help you discuss a paper",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085336,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 42,
"total_tokens": 52
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "To handle the given utterance, we can use",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085336,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 375,
"total_tokens": 385
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "**Subscription Services Template:**\n\n**Title:** Virtual",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085331,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 443,
"total_tokens": 453
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "Hello. How can I assist you today?",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085331,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 36,
"total_tokens": 46
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "Differentiating yourself from other Etsy shops is crucial to",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085326,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 102,
"total_tokens": 112
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "To become a Licensed Marriage and Family Therapist (",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085326,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 53,
"total_tokens": 63
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "**What is Quantum Computing?**\n\nQuantum computing",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085326,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 42,
"total_tokens": 52
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "Aquí te dejo 40 opciones de nombres",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085326,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 108,
"total_tokens": 118
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "Deposition is a geological process that involves the transportation",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085331,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 38,
"total_tokens": 48
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "Here are some good e-governance initiatives in",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085336,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 55,
"total_tokens": 65
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "Here's a simple Python program that accepts a command",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085326,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 56,
"total_tokens": 66
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "Imagine you're playing with a toy box. You",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085326,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 47,
"total_tokens": 57
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "Here's an example of a question they might ask",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085326,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 66,
"total_tokens": 76
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "Arduino Uno adalah sebuah papan mikrokontrol",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085326,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 38,
"total_tokens": 48
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "To edit an array that is within an object,",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085326,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 42,
"total_tokens": 52
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "Microsoft ENTRA (Enterprise Mobility + Security) is",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085336,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 56,
"total_tokens": 66
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "To calculate the difference in interest paid between a simple",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085331,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 69,
"total_tokens": 79
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "Yes, you can use Spring State Machine and Spring",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085326,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 49,
"total_tokens": 59
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "The issue lies in the fact that the `meta",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085326,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 142,
"total_tokens": 152
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "Here are some effective marketing tactics for local small businesses",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085336,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 46,
"total_tokens": 56
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "The French Revolution, which lasted from 1789",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085331,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 41,
"total_tokens": 51
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "**Roles of a Network Driver:**\n\nA network",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085336,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 65,
"total_tokens": 75
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "Yes, I'm familiar with the SAS (Stat",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085331,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 44,
"total_tokens": 54
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "Using relays to control 12V solen",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085336,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 60,
"total_tokens": 70
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "You can use the following Python code to achieve this",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085331,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 55,
"total_tokens": 65
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "Here are some prompts for viral comics:\n\n1.",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085336,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 336,
"total_tokens": 346
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "To simplify and make the comic funnier, consider",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085336,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 301,
"total_tokens": 311
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "Here's a rewritten version of the 4-panel",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085331,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 282,
"total_tokens": 292
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "Subject: Request for E-Waste Collection and Computer",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085331,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 110,
"total_tokens": 120
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "In the context of conference calls, the state you",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085331,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 84,
"total_tokens": 94
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "I can provide a general classification of companies based on",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085326,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 56,
"total_tokens": 66
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "Here are some user stories that describe the concept in",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085336,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 44,
"total_tokens": 54
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "You can check your Python version by running the following",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085326,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 39,
"total_tokens": 49
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "**Scenario:**\n\n15-year-old Black youth,",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085336,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 473,
"total_tokens": 483
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "As a Demand Generation Manager for a B2B",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085336,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 50,
"total_tokens": 60
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "The error is due to a typo in your code",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085336,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 369,
"total_tokens": 379
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "๊ณ ๋ฑ๊ต์ก์ ํ์์ฑ์ ๊ดํ ์์ด ์",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085326,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 72,
"total_tokens": 82
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "Here's a simple C# program that uses the",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085326,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 51,
"total_tokens": 61
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "The error message \"connection refused\" indicates that the",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085331,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 85,
"total_tokens": 95
}
},
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "To load an image, you can use various methods",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1726085326,
"id": "",
"model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 10,
"prompt_tokens": 41,
"total_tokens": 51
}
}
]
| text-generation-inference/integration-tests/models/__snapshots__/test_flash_llama_prefix/test_flash_llama_load.json/0 | {
"file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_flash_llama_prefix/test_flash_llama_load.json",
"repo_id": "text-generation-inference",
"token_count": 32395
} | 311 |
[
{
"details": {
"best_of_sequences": null,
"finish_reason": "length",
"generated_tokens": 10,
"prefill": [],
"seed": null,
"tokens": [
{
"id": 198,
"logprob": -2.9023438,
"special": false,
"text": "\n"
},
{
"id": 2,
"logprob": -2.9140625,
"special": false,
"text": "#"
},
{
"id": 4230,
"logprob": -3.1054688,
"special": false,
"text": " Create"
},
{
"id": 264,
"logprob": -1.0966797,
"special": false,
"text": " a"
},
{
"id": 1681,
"logprob": -1.6914062,
"special": false,
"text": " request"
},
{
"id": 198,
"logprob": -1.1923828,
"special": false,
"text": "\n"
},
{
"id": 2035,
"logprob": -1.3193359,
"special": false,
"text": "request"
},
{
"id": 284,
"logprob": -0.13586426,
"special": false,
"text": " ="
},
{
"id": 7388,
"logprob": -1.2412109,
"special": false,
"text": " requests"
},
{
"id": 670,
"logprob": -0.2775879,
"special": false,
"text": ".get"
}
],
"top_tokens": null
},
"generated_text": "\n# Create a request\nrequest = requests.get"
},
{
"details": {
"best_of_sequences": null,
"finish_reason": "length",
"generated_tokens": 10,
"prefill": [],
"seed": null,
"tokens": [
{
"id": 198,
"logprob": -2.9023438,
"special": false,
"text": "\n"
},
{
"id": 2,
"logprob": -2.9140625,
"special": false,
"text": "#"
},
{
"id": 4230,
"logprob": -3.1054688,
"special": false,
"text": " Create"
},
{
"id": 264,
"logprob": -1.0966797,
"special": false,
"text": " a"
},
{
"id": 1681,
"logprob": -1.6914062,
"special": false,
"text": " request"
},
{
"id": 198,
"logprob": -1.1923828,
"special": false,
"text": "\n"
},
{
"id": 2035,
"logprob": -1.3193359,
"special": false,
"text": "request"
},
{
"id": 284,
"logprob": -0.13586426,
"special": false,
"text": " ="
},
{
"id": 7388,
"logprob": -1.2412109,
"special": false,
"text": " requests"
},
{
"id": 670,
"logprob": -0.2775879,
"special": false,
"text": ".get"
}
],
"top_tokens": null
},
"generated_text": "\n# Create a request\nrequest = requests.get"
},
{
"details": {
"best_of_sequences": null,
"finish_reason": "length",
"generated_tokens": 10,
"prefill": [],
"seed": null,
"tokens": [
{
"id": 198,
"logprob": -2.9023438,
"special": false,
"text": "\n"
},
{
"id": 2,
"logprob": -2.9140625,
"special": false,
"text": "#"
},
{
"id": 4230,
"logprob": -3.1054688,
"special": false,
"text": " Create"
},
{
"id": 264,
"logprob": -1.0966797,
"special": false,
"text": " a"
},
{
"id": 1681,
"logprob": -1.6914062,
"special": false,
"text": " request"
},
{
"id": 198,
"logprob": -1.1923828,
"special": false,
"text": "\n"
},
{
"id": 2035,
"logprob": -1.3193359,
"special": false,
"text": "request"
},
{
"id": 284,
"logprob": -0.13586426,
"special": false,
"text": " ="
},
{
"id": 7388,
"logprob": -1.2412109,
"special": false,
"text": " requests"
},
{
"id": 670,
"logprob": -0.2775879,
"special": false,
"text": ".get"
}
],
"top_tokens": null
},
"generated_text": "\n# Create a request\nrequest = requests.get"
},
{
"details": {
"best_of_sequences": null,
"finish_reason": "length",
"generated_tokens": 10,
"prefill": [],
"seed": null,
"tokens": [
{
"id": 198,
"logprob": -2.9023438,
"special": false,
"text": "\n"
},
{
"id": 2,
"logprob": -2.9140625,
"special": false,
"text": "#"
},
{
"id": 4230,
"logprob": -3.1054688,
"special": false,
"text": " Create"
},
{
"id": 264,
"logprob": -1.0966797,
"special": false,
"text": " a"
},
{
"id": 1681,
"logprob": -1.6914062,
"special": false,
"text": " request"
},
{
"id": 198,
"logprob": -1.1923828,
"special": false,
"text": "\n"
},
{
"id": 2035,
"logprob": -1.3193359,
"special": false,
"text": "request"
},
{
"id": 284,
"logprob": -0.13586426,
"special": false,
"text": " ="
},
{
"id": 7388,
"logprob": -1.2412109,
"special": false,
"text": " requests"
},
{
"id": 670,
"logprob": -0.2775879,
"special": false,
"text": ".get"
}
],
"top_tokens": null
},
"generated_text": "\n# Create a request\nrequest = requests.get"
}
]
| text-generation-inference/integration-tests/models/__snapshots__/test_flash_qwen2/test_flash_qwen2_load.json/0 | {
"file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_flash_qwen2/test_flash_qwen2_load.json",
"repo_id": "text-generation-inference",
"token_count": 4044
} | 312 |