diff --git a/.gitignore b/.gitignore new file mode 100644 index 0000000000000000000000000000000000000000..6bc48ff0195576544d0a0cf92f9cad6e014954c6 --- /dev/null +++ b/.gitignore @@ -0,0 +1,3 @@ +__pycache__/ +wandb/ +src/__pycache__/ \ No newline at end of file diff --git a/README.md b/README.md new file mode 100644 index 0000000000000000000000000000000000000000..46b063aab6f211c622822050b412846012621e88 --- /dev/null +++ b/README.md @@ -0,0 +1,99 @@ +## Speech tasks with frozen RWKV language models + +- [中文说明](README_CN.md) +- [English](README.md) + +This repo is an exploratory experiment to enable frozen pretrained RWKV language models to accept speech input. In general, LLMs trained on text data are not directly applicable to speech recognition tasks, and there are many solutions (such as adapters + pretrained audio encoders, or neural audio codecs) to bridge the gap between text and speech. We follow the idea of [SLAM_ASR](https://arxiv.org/abs/2402.08846) and use the RWKV language model as the LLM; instead of writing a prompt template, we directly finetune the initial state of the RWKV model. We achieved a 4.6% WER on the LibriSpeech 960h test-clean set (6.9% on test-other) with a 3B RWKV model. + +The code is developed on top of [RWKV-PEFT](https://github.com/JL-er/RWKV-PEFT), and the current implementation of the speech encoder and adapter is based on [SLAM_ASR](https://arxiv.org/abs/2402.08846#). + +### Roadmap + +We want to explore compute-efficient and high-performance ways to extend text-based RWKV models into multimodal ones.
In the audio and speech modality, these are the tasks we are attempting: + +- [x] ASR in a single language +- [x] ASR in multiple languages +- [x] Speech Translation +- [x] Voice-input question answering (like GPT-4o) +- [ ] Other audio tasks +- [ ] Multi-turn conversation + +### Environment + +The following commands will create a new conda environment and install the required packages: + +```bash +conda create -n rwkv python=3.10 +conda activate rwkv +pip install -r requirements.txt +``` + +### Training + +1. Download the RWKV-6-World model files from one of the following links. We used the 3B model in our experiments, i.e. RWKV-x060-World-3B-v2.1-20240417-ctx4096.pth. + +- [Hugging Face](https://huggingface.co/BlinkDL/rwkv-6-world/tree/main) +- [Hf Mirror (CN)](https://hf-mirror.com/BlinkDL/rwkv-6-world/tree/main) +- [Modelscope](https://modelscope.cn/models/Blink_DL/rwkv-6-world/files) + +2. Open ```demo/demo-state-tuning.sh```. Set ```OP=train``` for training, and set ```load_model=path/to/your/model/``` to the path of the downloaded model. Modify ```n_layer``` and ```n_embd``` according to the table below: + +| Model | n_layer | n_embd | +| --------- | ---- | ---- | +| 1.6B | 24 | 2048 | +| 3B | 32 | 2560 | +| 7B | 32 | 4096 | +| 14B | 61 | 4096 | + +Other parameters for training: +| parameter | description | +| --------- | ---- | +| micro_bsz | batch size for each device | +| epoch_steps | number of steps in one epoch; set this to (dataset size / real batch size) | +| device | number of GPUs used for training | + +The default setting trains a 3B RWKV model on the LibriSpeech 960h dataset, with 4 devices and a batch size of 4 per device (real batch size = 16). + +3. The script will overwrite the .pth files in ```output/```. Move any .pth model files you still need from this directory to another location before training. +4. Run ```sh demo/demo-state-tuning.sh``` to start the training process.
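As a quick sanity check for the ```epoch_steps``` rule in the table above, the arithmetic can be sketched as follows (this helper is not part of the repo, and the LibriSpeech utterance count is an approximate, illustrative figure):

```python
# epoch_steps should be set to (dataset size / real batch size),
# where the real batch size is micro_bsz multiplied by the device count.

def epoch_steps(dataset_size: int, micro_bsz: int, devices: int) -> int:
    real_batch_size = micro_bsz * devices
    # Integer division: a trailing partial batch is ignored in this sketch.
    return dataset_size // real_batch_size

# Script default: micro_bsz = 4 on 4 devices -> real batch size 16.
# LibriSpeech 960h contains roughly 281,000 utterances (approximate).
print(epoch_steps(281_000, micro_bsz=4, devices=4))  # -> 17562
```

If you change ```micro_bsz``` or the device count, recompute ```epoch_steps``` the same way so that one epoch still covers the whole dataset.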
+ +The training process looks like this: + +- The script first loads the provided RWKV model and a speech encoder model from Hugging Face. An adapter and an initial state for the RWKV model are initialized randomly. +- The (symbolically) simplified formula for this model is: + +``` +RWKV( [InitialState], [Adapter](SpeechEncoder(audio))) -> "The weather is good. " +``` + +Modules and variables in `[ ]` are trained; everything else is frozen. + +There is also some code to enable other kinds of PEFT training of the whole model. Note that not all methods are fully adapted to speech-modality training yet; we are still actively working on this. + +### Evaluation + +Follow the instructions in Training, but set ```OP=eval``` in ```demo/demo-state-tuning.sh```. The trained model in ```output/``` will be used to calculate its WER on the LibriSpeech test-clean and test-other sets. + +### Audio File Prediction + +Open ```demo/demo-predict.sh``` and modify ```file_path=path/to/your/audio/file```. Run ```sh demo/demo-predict.sh``` to load the trained weights in ```output/``` and predict the content of the input audio file. + +### Pretrained weights + +Download the pretrained weights from the following links: + +ASR: https://huggingface.co/JerryAGENDD/RWKV-ASR/tree/main/ASR + +SpeechTranslate: https://huggingface.co/JerryAGENDD/RWKV-ASR/tree/main/ST + +SpeechQA: https://huggingface.co/JerryAGENDD/RWKV-ASR/tree/main/SpeechQA + +The pretrained weights contain the necessary parameters for the adapter and the RWKV initial state. They were trained with WavLM Large as the speech encoder and RWKV-3B as the language model (the script's default configuration). Place the weights in the ```output/``` directory for the script to load them. + +### Speech Chat with RWKV + +A script for real-time speech conversation with RWKV: + +https://github.com/AGENDD/RWKV-SpeechChat + +You can use the trained weights to interact with RWKV in real time.
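For readers who prefer code to formulas, the frozen-LLM composition described in the Training section can be sketched in plain Python. Every name and the toy feature math below are illustrative assumptions, not the repo's actual classes (those live under ```src/```); the point is only which pieces carry trainable parameters:

```python
# Minimal sketch of the pipeline: RWKV([InitialState], [Adapter](SpeechEncoder(audio))).
# Only `initial_state` and `adapter_scale` stand in for trained parameters;
# the speech encoder (e.g. WavLM Large) and the RWKV weights stay frozen.

from dataclasses import dataclass, field

@dataclass
class SpeechASRPipeline:
    initial_state: list = field(default_factory=lambda: [0.0] * 4)  # trained
    adapter_scale: float = 1.0                                      # trained

    def speech_encoder(self, audio):          # frozen: audio -> feature vector
        return [sum(audio) / len(audio)] * 4  # stand-in for real features

    def adapter(self, features):              # trained projection into LLM space
        return [self.adapter_scale * f for f in features]

    def rwkv(self, state, embeds):            # frozen language model
        return f"decoded({len(state)} state dims, {len(embeds)} embeds)"

    def transcribe(self, audio):
        feats = self.speech_encoder(audio)
        return self.rwkv(self.initial_state, self.adapter(feats))

print(SpeechASRPipeline().transcribe([0.1, 0.2, 0.3]))
# -> decoded(4 state dims, 4 embeds)
```

Because only the initial state and the adapter receive gradients, the trainable parameter count stays tiny compared with the 3B frozen backbone.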
\ No newline at end of file diff --git a/README_CN.md b/README_CN.md new file mode 100644 index 0000000000000000000000000000000000000000..0ec0aadfa317b4888ca3219023d6764b4f42e93f --- /dev/null +++ b/README_CN.md @@ -0,0 +1,100 @@ +## 使用冻结的 RWKV 语言模型完成语音任务 + +- [中文说明](README_CN.md) +- [English](README.md) + +本仓库是一个探索性实验,旨在使冻结的预训练 RWKV 语言模型能够接受语音输入。通常,在文本数据上训练的 LLM 不直接适用于语音识别任务,有很多解决方案(例如适配器 + 预训练音频编码器,或神经音频编解码器)可以弥合文本和语音之间的差距。我们遵循 [SLAM_ASR](https://arxiv.org/abs/2402.08846) 的思路,使用 RWKV 语言模型作为 LLM;不同于直接编写提示模板,我们直接微调了 RWKV 模型的初始状态。在 LibriSpeech 960h test-clean 测试集上,我们使用 3B RWKV 模型实现了 4.6% 的 WER(test-other 测试集为 6.9%)。 + +本仓库的代码基于 [RWKV-PEFT](https://github.com/JL-er/RWKV-PEFT) 开发。当前的语音编码器和适配器实现基于 [SLAM_ASR](https://arxiv.org/abs/2402.08846#)。 + +### 路线图 + +我们希望探索计算高效、性能优越的方式,将基于文本的 RWKV 扩展为多模态模型。在音频和语音领域,我们正在尝试以下任务: + +- [x] 单语言 ASR +- [x] 多语言 ASR +- [x] 语音翻译 +- [x] 语音输入问答(如 GPT-4o) +- [ ] 其他音频任务 +- [ ] 多轮对话 + +### 环境 + +以下命令将创建一个新的 conda 环境并安装所需的包: + +```bash +conda create -n rwkv python=3.10 +conda activate rwkv +pip install -r requirements.txt +``` + +### 训练 + +1. 从以下链接之一下载 RWKV-6-World 模型文件。我们在实验中使用了 3B 模型,即 RWKV-x060-World-3B-v2.1-20240417-ctx4096.pth。 + +- [Hugging Face](https://huggingface.co/BlinkDL/rwkv-6-world/tree/main) +- [Hf Mirror (CN)](https://hf-mirror.com/BlinkDL/rwkv-6-world/tree/main) +- [Modelscope](https://modelscope.cn/models/Blink_DL/rwkv-6-world/files) + +2. 打开 ```demo/demo-state-tuning.sh```。设置 ```OP=train``` 以进行训练,并将 ```load_model=path/to/your/model/``` 设置为您的模型路径。根据下表修改 ```n_layer``` 和 ```n_embd```: + +| 模型 | n_layer | n_embd | +| --------- | ---- | ---- | +| 1.6B | 24 | 2048 | +| 3B | 32 | 2560 | +| 7B | 32 | 4096 | +| 14B | 61 | 4096 | + +其他训练参数: +| 参数 | 描述 | +| --------- | ---- | +| micro_bsz | 每个设备的批量大小 | +| epoch_steps | 每个 epoch 的步数,请设置为(数据集大小 / 实际批量大小) | +| device | 用于训练的 GPU 数量 | + +默认设置将在 4 个设备上训练 3B RWKV 模型,每个设备的批量大小为 4(实际批量大小 = 16)。 + +3. 该脚本将覆盖 ```output/``` 中的 .pth 文件。请在训练前将仍需保留的 .pth 模型文件从该目录移动到其他位置! +4.
运行 ```sh demo/demo-state-tuning.sh``` 以开始训练过程。 + +训练过程如下: + +- 脚本首先加载提供的 RWKV 模型,并从 huggingface 加载语音编码器模型。适配器和 RWKV 模型的初始状态会被随机初始化。 +- 模型的(符号化)简化公式如下: + +``` +RWKV( [InitialState], [Adapter](SpeechEncoder(audio))) -> "The weather is good. " +``` + +用`[ ]`包围的部分会被训练,其余参数全部冻结。 + +还有一些代码可以为整个模型启用其他 PEFT 训练。请注意,目前并非所有方法都已完全适配语音模态训练,我们仍在积极开发中。 + +### 评估 + +参考训练的步骤,但将 `demo/demo-state-tuning.sh` 里的 `OP` 设定为 `OP=eval`。保存在 `output/` 中的模型将被用于评估,脚本会计算其在 LibriSpeech test-clean 和 test-other 测试集上的 WER。 + +### 音频文件预测 + +打开 ```demo/demo-predict.sh``` 并将 ```file_path``` 修改为输入音频的路径。运行 ```sh demo/demo-predict.sh``` 来从 ```output/``` 加载训练权重并预测音频内容。 + +### 预训练权重 + +请从以下链接下载预训练权重: + +语音识别:https://huggingface.co/JerryAGENDD/RWKV-ASR/tree/main/ASR + +语音翻译:https://huggingface.co/JerryAGENDD/RWKV-ASR/tree/main/ST + +语音问答:https://huggingface.co/JerryAGENDD/RWKV-ASR/tree/main/SpeechQA + +预训练权重包含适配器和 RWKV 初始状态的必要参数。这些权重使用 WavLM Large 作为语音编码器、RWKV-3B 作为语言模型(脚本默认配置)训练得到。请将权重放置在 ```output/``` 目录中,以便脚本加载。 + +### RWKV 语音对话 + +这是一个与 RWKV 进行实时语音对话的脚本: + +https://github.com/AGENDD/RWKV-SpeechChat + +您可以使用训练后的权重与 RWKV 进行实时语音交互。 \ No newline at end of file diff --git a/cuda/wkv5_cuda.cu b/cuda/wkv5_cuda.cu new file mode 100644 index 0000000000000000000000000000000000000000..3e6b8594e58ac7990c5b205df064373ab6bbe4da --- /dev/null +++ b/cuda/wkv5_cuda.cu @@ -0,0 +1,202 @@ +#include <stdio.h> +#include <assert.h> +#include "ATen/ATen.h" +typedef at::BFloat16 bf16; + +template <typename F> +__global__ void kernel_forward(const int B, const int T, const int C, const int H, + const F *__restrict__ const _r, const F *__restrict__ const _k, const F *__restrict__ const _v, const float *__restrict__ _w, const F *__restrict__ _u, + F *__restrict__ const _y) +{ + const int b = blockIdx.x / H; + const int h = blockIdx.x % H; + const int i = threadIdx.x; + _w += h*_N_; + _u += h*_N_; + + __shared__ float r[_N_], k[_N_], u[_N_], w[_N_]; + float state[_N_] = {0}; + + __syncthreads(); + w[i] = _w[i]; + u[i] = float(_u[i]); + __syncthreads(); + + for (int t = b*T*C + h*_N_ + i; t < (b+1)*T*C + h*_N_ + i; t
+= C) + { + __syncthreads(); + r[i] = float(_r[t]); + k[i] = float(_k[t]); + __syncthreads(); + + const float v = float(_v[t]); + float y = 0; + + #pragma unroll + for (int j = 0; j < _N_; j+=4) + { + const float4& r_ = (float4&)(r[j]); + const float4& k_ = (float4&)(k[j]); + const float4& w_ = (float4&)(w[j]); + const float4& u_ = (float4&)(u[j]); + float4& s = (float4&)(state[j]); + float4 x; + + x.x = k_.x * v; + x.y = k_.y * v; + x.z = k_.z * v; + x.w = k_.w * v; + + y += r_.x * (u_.x * x.x + s.x); + y += r_.y * (u_.y * x.y + s.y); + y += r_.z * (u_.z * x.z + s.z); + y += r_.w * (u_.w * x.w + s.w); + + s.x = s.x * w_.x + x.x; + s.y = s.y * w_.y + x.y; + s.z = s.z * w_.z + x.z; + s.w = s.w * w_.w + x.w; + } + _y[t] = F(y); + } +} + +template +__global__ void kernel_backward(const int B, const int T, const int C, const int H, + const F *__restrict__ const _r, const F *__restrict__ const _k, const F *__restrict__ const _v, const float *__restrict__ _w, const float *__restrict__ __w, const F *__restrict__ _u, const F *__restrict__ const _gy, + F *__restrict__ const _gr, F *__restrict__ const _gk, F *__restrict__ const _gv, F *__restrict__ const _gw, F *__restrict__ const _gu) +{ + const int b = blockIdx.x / H; + const int h = blockIdx.x % H; + const int i = threadIdx.x; + _w += h*_N_; + _u += h*_N_; + __w += h*_N_; + + __shared__ float w_[_N_], u_[_N_]; + __shared__ float r[_N_], k[_N_], v[_N_], gy[_N_]; + __syncthreads(); + w_[i] = _w[i]; + u_[i] = float(_u[i]); + __syncthreads(); + + const float w = w_[i]; + const float ww = __w[i]; + const float u = u_[i]; + + float state[_N_] = {0}, saaaa[_N_] = {0}, sbbbb[_N_] = {0}, scccc[_N_] = {0}, sdddd[_N_] = {0}; + + float gw = 0, gu = 0; + const int t000 = b*T*C + h*_N_ + i; + const int t111 = (b+1)*T*C + h*_N_ + i; + const int t222 = t111 - 2*C; + + for (int t = t000; t < t111; t += C) + { + __syncthreads(); + v[i] = float(_v[t]); + gy[i] = float(_gy[t]); + __syncthreads(); + + const float k = float(_k[t]); + float gr 
= 0, gu_ = 0; + + #pragma unroll + for (int j = 0; j < _N_; j++) + { + float& s = state[j]; + float x = k * v[j]; + + gr += (u * x + s) * gy[j]; + gu_ += x * gy[j]; + s = s * w + x; + } + _gr[t] = F(gr); + gu += float(_r[t]) * gu_; + } + _gu[b*C + h*_N_ + i] = F(gu); + + for (int t = t000; t < t222; t += C) + { + __syncthreads(); + v[i] = float(_v[t]); + gy[i] = float(_gy[t + 2*C]); + __syncthreads(); + + const float k = float(_k[t]); + float gw_ = 0; + + #pragma unroll + for (int j = 0; j < _N_; j++) + { + float& s = saaaa[j]; + float& s2 = sbbbb[j]; + float x = k * v[j]; + + float tmp = w * (x + s); + s = tmp; + s2 = tmp + w * s2; + gw_ += s2 * gy[j]; + } + gw += float(_r[t + 2*C]) * gw_; + } + _gw[b*C + h*_N_ + i] = F(ww * gw); + + for (int t = t111 - C; t >= t000; t -= C) + { + __syncthreads(); + v[i] = float(_v[t]); + gy[i] = float(_gy[t]); + __syncthreads(); + + const float rr = float(_r[t]); + float gk = 0; + + #pragma unroll + for (int j = 0; j < _N_; j++) + { + float& s = scccc[j]; + float x = rr * gy[j]; + + gk += (u * x + s) * v[j]; + s = x + s * w; + } + _gk[t] = F(gk); + } + + for (int t = t111 - C; t >= t000; t -= C) + { + __syncthreads(); + r[i] = float(_r[t]); + k[i] = float(_k[t]); + __syncthreads(); + + const float gyy = float(_gy[t]); + float gv = 0; + + #pragma unroll + for (int j = 0; j < _N_; j++) + { + float& s = sdddd[j]; + float x = gyy * r[j]; + + gv += (u_[j] * x + s) * k[j]; + s = x + s * w_[j]; + } + _gv[t] = F(gv); + } +} + +void cuda_forward(int B, int T, int C, int H, bf16 *r, bf16 *k, bf16 *v, float *w, bf16 *u, bf16 *y) +{ + assert(H*_N_ == C); + assert(_N_%4 == 0); + kernel_forward<<>>(B, T, C, H, r, k, v, w, u, y); +} + +void cuda_backward(int B, int T, int C, int H, bf16 *r, bf16 *k, bf16 *v, float *w, float *ww, bf16 *u, bf16 *gy, bf16 *gr, bf16 *gk, bf16 *gv, bf16 *gw, bf16 *gu) +{ + assert(H*_N_ == C); + assert(_N_%4 == 0); + kernel_backward<<>>(B, T, C, H, r, k, v, w, ww, u, gy, gr, gk, gv, gw, gu); +} diff --git 
a/cuda/wkv5_op.cpp b/cuda/wkv5_op.cpp new file mode 100644 index 0000000000000000000000000000000000000000..4c9ece15017edbf147e5a9fc8b39124fe1b56d68 --- /dev/null +++ b/cuda/wkv5_op.cpp @@ -0,0 +1,22 @@ +#include <torch/extension.h> +#include "ATen/ATen.h" +typedef at::BFloat16 bf16; + +void cuda_forward(int B, int T, int C, int H, bf16 *r, bf16 *k, bf16 *v, float *w, bf16 *u, bf16 *y); +void cuda_backward(int B, int T, int C, int H, bf16 *r, bf16 *k, bf16 *v, float *w, float *ww, bf16 *u, bf16 *gy, bf16 *gr, bf16 *gk, bf16 *gv, bf16 *gw, bf16 *gu); + +void forward(int64_t B, int64_t T, int64_t C, int64_t H, torch::Tensor &r, torch::Tensor &k, torch::Tensor &v, torch::Tensor &w, torch::Tensor &u, torch::Tensor &y) { + cuda_forward(B, T, C, H, r.data_ptr<bf16>(), k.data_ptr<bf16>(), v.data_ptr<bf16>(), w.data_ptr<float>(), u.data_ptr<bf16>(), y.data_ptr<bf16>()); +} +void backward(int64_t B, int64_t T, int64_t C, int64_t H, torch::Tensor &r, torch::Tensor &k, torch::Tensor &v, torch::Tensor &w, torch::Tensor &ww, torch::Tensor &u, torch::Tensor &gy, torch::Tensor &gr, torch::Tensor &gk, torch::Tensor &gv, torch::Tensor &gw, torch::Tensor &gu) { + cuda_backward(B, T, C, H, r.data_ptr<bf16>(), k.data_ptr<bf16>(), v.data_ptr<bf16>(), w.data_ptr<float>(), ww.data_ptr<float>(), u.data_ptr<bf16>(), gy.data_ptr<bf16>(), gr.data_ptr<bf16>(), gk.data_ptr<bf16>(), gv.data_ptr<bf16>(), gw.data_ptr<bf16>(), gu.data_ptr<bf16>()); +} +PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { + m.def("forward", &forward, "wkv5 forward"); + m.def("backward", &backward, "wkv5 backward"); +} + +TORCH_LIBRARY(wkv5, m) { + m.def("forward", forward); + m.def("backward", backward); +} diff --git a/cuda/wkv6_cuda.cu b/cuda/wkv6_cuda.cu new file mode 100644 index 0000000000000000000000000000000000000000..7b7c8366c22d20237f94d88ddc36983c3b7d441e --- /dev/null +++ b/cuda/wkv6_cuda.cu @@ -0,0 +1,242 @@ +#include <stdio.h> +#include <assert.h> +#include "ATen/ATen.h" +typedef at::BFloat16 bf16; + +template <typename F> +__global__ void kernel_forward(const int B, const int T, const int C, const int H, + const F *__restrict__ const _r, const F *__restrict__ const _k,
const F *__restrict__ const _v, const float *__restrict__ _w, const F *__restrict__ _u, + F *__restrict__ const _y) +{ + const int b = blockIdx.x / H; + const int h = blockIdx.x % H; + const int i = threadIdx.x; + _u += h*_N_; + + __shared__ float r[_N_], k[_N_], u[_N_], w[_N_]; + float state[_N_] = {0}; + + __syncthreads(); + u[i] = float(_u[i]); + __syncthreads(); + + for (int t = b*T*C + h*_N_ + i; t < (b+1)*T*C + h*_N_ + i; t += C) + { + __syncthreads(); + w[i] = exp(_w[t]); + r[i] = float(_r[t]); + k[i] = float(_k[t]); + __syncthreads(); + + const float v = float(_v[t]); + float y = 0; + + #pragma unroll + for (int j = 0; j < _N_; j+=4) + { + const float4& r_ = (float4&)(r[j]); + const float4& k_ = (float4&)(k[j]); + const float4& w_ = (float4&)(w[j]); + const float4& u_ = (float4&)(u[j]); + float4& s = (float4&)(state[j]); + float4 x; + + x.x = k_.x * v; + x.y = k_.y * v; + x.z = k_.z * v; + x.w = k_.w * v; + + y += r_.x * (u_.x * x.x + s.x); + y += r_.y * (u_.y * x.y + s.y); + y += r_.z * (u_.z * x.z + s.z); + y += r_.w * (u_.w * x.w + s.w); + + s.x = s.x * w_.x + x.x; + s.y = s.y * w_.y + x.y; + s.z = s.z * w_.z + x.z; + s.w = s.w * w_.w + x.w; + } + _y[t] = F(y); + } +} + +template +__global__ void kernel_backward_111(const int B, const int T, const int C, const int H, + const F *__restrict__ const _r, const F *__restrict__ const _k, const F *__restrict__ const _v, const float *__restrict__ _w, const F *__restrict__ _u, const F *__restrict__ const _gy, + F *__restrict__ const _gr, F *__restrict__ const _gk, F *__restrict__ const _gv, F *__restrict__ const _gu) +{ + const int b = blockIdx.x / H; + const int h = blockIdx.x % H; + const int i = threadIdx.x; + _u += h*_N_; + + __shared__ float u_[_N_]; + __shared__ float r[_N_], k[_N_], v[_N_], w_[_N_], gy[_N_]; + __syncthreads(); + u_[i] = float(_u[i]); + __syncthreads(); + + const float u = u_[i]; + + float state[_N_] = {0}, scccc[_N_] = {0}, sdddd[_N_] = {0}; + + const int t_0 = b*T*C + h*_N_ + i; + const 
int t_T_1 = t_0 + (T-1)*C; + const int t_T = t_0 + T*C; + + float gu = 0; + for (int t = t_0; t < t_T; t += C) + { + __syncthreads(); + v[i] = float(_v[t]); + gy[i] = float(_gy[t]); + __syncthreads(); + + const float k = float(_k[t]); + const float w = exp(_w[t]); + float gr = 0, gu_ = 0; + + #pragma unroll + for (int j = 0; j < _N_; j++) + { + float& s = state[j]; + float x = k * v[j]; + + gr += (u * x + s) * gy[j]; + gu_ += x * gy[j]; + s = s * w + x; + } + _gr[t] = F(gr); + gu += float(_r[t]) * gu_; + } + _gu[b*C + h*_N_ + i] = F(gu); + + for (int t = t_T_1; t >= t_0; t -= C) + { + __syncthreads(); + v[i] = float(_v[t]); + gy[i] = float(_gy[t]); + __syncthreads(); + + const float rr = float(_r[t]); + const float w = exp(_w[t]); + float gk = 0; + + #pragma unroll + for (int j = 0; j < _N_; j++) + { + float& s = scccc[j]; + float x = rr * gy[j]; + + gk += (u * x + s) * v[j]; + s = x + s * w; + } + _gk[t] = F(gk); + } + + for (int t = t_T_1; t >= t_0; t -= C) + { + __syncthreads(); + r[i] = float(_r[t]); + k[i] = float(_k[t]); + w_[i] = exp(_w[t]); + __syncthreads(); + + const float gyy = float(_gy[t]); + float gv = 0; + + #pragma unroll + for (int j = 0; j < _N_; j++) + { + float& s = sdddd[j]; + float x = gyy * r[j]; + + gv += (u_[j] * x + s) * k[j]; + s = x + s * w_[j]; + } + _gv[t] = F(gv); + } +} + +template +__global__ void kernel_backward_222(const int B, const int T, const int C, const int H, + const F *__restrict__ const _r, const F *__restrict__ const _k, const F *__restrict__ const _v, const float *__restrict__ _w, const F *__restrict__ _u, const F *__restrict__ const _gy, + F *__restrict__ const _gw) +{ + const int b = blockIdx.x / H; + const int h = blockIdx.x % H; + const int i = threadIdx.x; + + __shared__ float v[_N_], gy[_N_]; + float saaaa[_N_] = {0}, sbbbb[_T_-2] = {0}, scccc[_N_] = {0}; + + const int t_0 = b*T*C + h*_N_ + i; + const int t_1 = t_0 + C; + const int t_2 = t_0 + 2*C; + const int t_T_1 = t_0 + (T-1)*C; + + for (int t = t_T_1; t > 
t_1; t -= C) + { + __syncthreads(); + gy[i] = float(_gy[t]); + v[i] = float(_v[t-2*C]); + __syncthreads(); + + const float r = float(_r[t]); + const float w = exp(_w[t-C]); + float sum = 0.0f; + + #pragma unroll + for (int j = 0; j < _N_; j++) + { + float& s = saaaa[j]; + float x = r * gy[j]; + s = (s + x) * w; + sum += s * v[j]; + } + sbbbb[(t-t_2)/C] = sum * float(_k[t-2*C]); + } + + float sss = sbbbb[0]; + _gw[t_0] = 0; + _gw[t_1] = F(sss * _w[t_1]); + + for (int t = t_2; t < t_T_1; t += C) + { + __syncthreads(); + gy[i] = float(_gy[t]); + v[i] = float(_v[t-2*C]); + __syncthreads(); + + const float w = exp(_w[t-C]); + const float k = float(_k[t-2*C]); + float sum = 0.0f; + + #pragma unroll + for (int j = 0; j < _N_; j++) + { + float& s = scccc[j]; + float x = k * v[j]; + s = (s + x) * w; + sum += s * gy[j]; + } + sss += sbbbb[(t-t_1)/C] - (sum * float(_r[t])); + _gw[t] = F(sss * _w[t]); + } + _gw[t_T_1] = 0; +} + +void cuda_forward(int B, int T, int C, int H, bf16 *r, bf16 *k, bf16 *v, float *w, bf16 *u, bf16 *y) +{ + assert(H*_N_ == C); + assert(_N_%4 == 0); + kernel_forward<<>>(B, T, C, H, r, k, v, w, u, y); +} + +void cuda_backward(int B, int T, int C, int H, bf16 *r, bf16 *k, bf16 *v, float *w, bf16 *u, bf16 *gy, bf16 *gr, bf16 *gk, bf16 *gv, bf16 *gw, bf16 *gu) +{ + assert(H*_N_ == C); + assert(_N_%4 == 0); + kernel_backward_111<<>>(B, T, C, H, r, k, v, w, u, gy, gr, gk, gv, gu); + kernel_backward_222<<>>(B, T, C, H, r, k, v, w, u, gy, gw); +} diff --git a/cuda/wkv6_op.cpp b/cuda/wkv6_op.cpp new file mode 100644 index 0000000000000000000000000000000000000000..432ac56bea0b2f0c444a4d6ccc490c0b635aa964 --- /dev/null +++ b/cuda/wkv6_op.cpp @@ -0,0 +1,22 @@ +#include +#include "ATen/ATen.h" +typedef at::BFloat16 bf16; + +void cuda_forward(int B, int T, int C, int H, bf16 *r, bf16 *k, bf16 *v, float *w, bf16 *u, bf16 *y); +void cuda_backward(int B, int T, int C, int H, bf16 *r, bf16 *k, bf16 *v, float *w, bf16 *u, bf16 *gy, bf16 *gr, bf16 *gk, bf16 *gv, bf16 *gw, 
bf16 *gu); + +void forward(int64_t B, int64_t T, int64_t C, int64_t H, torch::Tensor &r, torch::Tensor &k, torch::Tensor &v, torch::Tensor &w, torch::Tensor &u, torch::Tensor &y) { + cuda_forward(B, T, C, H, r.data_ptr<bf16>(), k.data_ptr<bf16>(), v.data_ptr<bf16>(), w.data_ptr<float>(), u.data_ptr<bf16>(), y.data_ptr<bf16>()); +} +void backward(int64_t B, int64_t T, int64_t C, int64_t H, torch::Tensor &r, torch::Tensor &k, torch::Tensor &v, torch::Tensor &w, torch::Tensor &u, torch::Tensor &gy, torch::Tensor &gr, torch::Tensor &gk, torch::Tensor &gv, torch::Tensor &gw, torch::Tensor &gu) { + cuda_backward(B, T, C, H, r.data_ptr<bf16>(), k.data_ptr<bf16>(), v.data_ptr<bf16>(), w.data_ptr<float>(), u.data_ptr<bf16>(), gy.data_ptr<bf16>(), gr.data_ptr<bf16>(), gk.data_ptr<bf16>(), gv.data_ptr<bf16>(), gw.data_ptr<bf16>(), gu.data_ptr<bf16>()); +} +PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { + m.def("forward", &forward, "wkv6 forward"); + m.def("backward", &backward, "wkv6 backward"); +} + +TORCH_LIBRARY(wkv6, m) { + m.def("forward", forward); + m.def("backward", backward); +} diff --git a/cuda/wkv6infctx_cuda.cu b/cuda/wkv6infctx_cuda.cu new file mode 100644 index 0000000000000000000000000000000000000000..597fba90e4c05faeace585268c46c1b851bb04cf --- /dev/null +++ b/cuda/wkv6infctx_cuda.cu @@ -0,0 +1,311 @@ +#include <stdio.h> +#include <assert.h> +#include "ATen/ATen.h" +typedef at::BFloat16 bf16; + +template <typename F> +__global__ void kernel_forward(const int B, const int T, const int C, const int H, + const F *__restrict__ const _r, const F *__restrict__ const _k, const F *__restrict__ const _v, const F *__restrict__ _w, const F *__restrict__ _u, F *__restrict__ _s, + F *__restrict__ const _y) +{ + const int b = blockIdx.x / H; + const int h = blockIdx.x % H; + const int i = threadIdx.x; + _u += h*_N_; + _s += b*C*_N_ + h*_N_*_N_ + i*_N_; + + __shared__ float r[_N_], k[_N_], u[_N_], w[_N_]; + float state[_N_]; + + __syncthreads(); + u[i] = float(_u[i]); + __syncthreads(); + for (int j = 0; j < _N_; j++) { + state[j] = float(_s[j]); + } + + for (int t = b*T*C + h*_N_ + i; t < (b+1)*T*C + h*_N_ + i;
t += C) + { + __syncthreads(); + w[i] = __expf(-__expf(float(_w[t]))); + r[i] = float(_r[t]); + k[i] = float(_k[t]); + __syncthreads(); + + const float v = float(_v[t]); + float y = 0; + + #pragma unroll + for (int j = 0; j < _N_; j+=4) + { + const float4& r_ = (float4&)(r[j]); + const float4& k_ = (float4&)(k[j]); + const float4& w_ = (float4&)(w[j]); + const float4& u_ = (float4&)(u[j]); + float4& s = (float4&)(state[j]); + float4 x; + + x.x = k_.x * v; + x.y = k_.y * v; + x.z = k_.z * v; + x.w = k_.w * v; + + y += r_.x * (u_.x * x.x + s.x); + y += r_.y * (u_.y * x.y + s.y); + y += r_.z * (u_.z * x.z + s.z); + y += r_.w * (u_.w * x.w + s.w); + + s.x = s.x * w_.x + x.x; + s.y = s.y * w_.y + x.y; + s.z = s.z * w_.z + x.z; + s.w = s.w * w_.w + x.w; + } + _y[t] = F(y); + } + #pragma unroll + for (int j = 0; j < _N_; j++) + _s[j] = F(state[j]); +} + +template +__global__ void kernel_backward_111(const int B, const int T, const int C, const int H, + const F *__restrict__ const _r, const F *__restrict__ const _k, const F *__restrict__ const _v, const F *__restrict__ _w, const F *__restrict__ _u, const F *__restrict__ _s, const F *__restrict__ const _gy, + F *__restrict__ const _gr, F *__restrict__ const _gk, F *__restrict__ const _gv, F *__restrict__ const _gu, F *__restrict__ const _gs) +{ + const int b = blockIdx.x / H; + const int h = blockIdx.x % H; + const int i = threadIdx.x; + _u += h*_N_; + _s += b*C+ h*_N_ + i; + + __shared__ float u_[_N_]; + __shared__ float r[_N_], k[_N_], v[_N_], w_[_N_], gy[_N_]; + __syncthreads(); + u_[i] = float(_u[i]); + __syncthreads(); + + const float u = u_[i]; + + float state[_N_], scccc[_N_] = {0}, sdddd[_N_] = {0}, sssss[_N_] = {0}, swwww[_N_]; + for (int j = 0; j < _N_; j++) { + state[j] = float(_s[j*_N_]); + swwww[j] = 1.0; + } + + const int t_0 = b*T*C + h*_N_ + i; + const int t_T_1 = t_0 + (T-1)*C; + const int t_T = t_0 + T*C; + + float gu = 0; + for (int t = t_0; t < t_T; t += C) + { + __syncthreads(); + v[i] = float(_v[t]); + 
gy[i] = float(_gy[t]); + __syncthreads(); + + const float k = float(_k[t]); + const float w = __expf(-__expf(float(_w[t]))); + float gr = 0, gu_ = 0; + + #pragma unroll + for (int j = 0; j < _N_; j++) + { + float& s = state[j]; + float x = k * v[j]; + + gr += (u * x + s) * gy[j]; + gu_ += x * gy[j]; + s = s * w + x; + } + _gr[t] = F(gr); + gu += float(_r[t]) * gu_; + } + _gu[b*C + h*_N_ + i] = F(gu); + + for (int t = t_T_1; t >= t_0; t -= C) + { + __syncthreads(); + v[i] = float(_v[t]); + gy[i] = float(_gy[t]); + __syncthreads(); + + const float rr = float(_r[t]); + const float w = __expf(-__expf(float(_w[t]))); + float gk = 0; + + #pragma unroll + for (int j = 0; j < _N_; j++) + { + float& s = scccc[j]; + float x = rr * gy[j]; + + gk += (u * x + s) * v[j]; + s = x + s * w; + } + _gk[t] = F(gk); + } + + for (int t = t_T_1; t >= t_0; t -= C) + { + __syncthreads(); + r[i] = float(_r[t]); + k[i] = float(_k[t]); + w_[i] = __expf(-__expf(float(_w[t]))); + __syncthreads(); + + const float gyy = float(_gy[t]); + float gv = 0; + + #pragma unroll + for (int j = 0; j < _N_; j++) + { + float& s = sdddd[j]; + float x = gyy * r[j]; + + gv += (u_[j] * x + s) * k[j]; + s = x + s * w_[j]; + } + _gv[t] = F(gv); + } + + for (int t = t_0; t < t_T; t += C) + { + __syncthreads(); + r[i] = float(_r[t]); + w_[i] = __expf(-__expf(float(_w[t]))); + __syncthreads(); + + const float gyy = float(_gy[t]); + + #pragma unroll + for (int j = 0; j < _N_; j++) + { + float& w = swwww[j]; + sssss[j] += gyy * w * r[j]; + w *= w_[j]; + } + } + for (int j = 0; j < _N_; j++) + _gs[b*H*_N_*_N_ + h*_N_*_N_ + i*_N_ + j] = F(sssss[j]); +} + +template +__global__ void kernel_backward_222(const int B, const int T, const int C, const int H, + const F *__restrict__ const _r, const F *__restrict__ const _k, const F *__restrict__ const _v, const F *__restrict__ _w, const F *__restrict__ _u, const F *__restrict__ _s, const F *__restrict__ const _gy, + F *__restrict__ const _gw) +{ + const int b = blockIdx.x / H; + 
const int h = blockIdx.x % H; + const int i = threadIdx.x; + _s += b*C + h*_N_ + i; + + __shared__ float v[_N_], gy[_N_]; + float state[_N_], saaaa[_N_] = {0}, sbbbb[_T_-1] = {0}, scccc[_N_] = {0}; + for (int j = 0; j < _N_; j++) { + state[j] = float(_s[j*_N_]); + } + + const int t_0 = b*T*C + h*_N_ + i; + const int t_1 = t_0 + C; + const int t_2 = t_0 + 2*C; + const int t_T_1 = t_0 + (T-1)*C; + + for (int t = t_T_1; t > t_1; t -= C) + { + __syncthreads(); + gy[i] = float(_gy[t]); + v[i] = float(_v[t-2*C]); + __syncthreads(); + + const float r = float(_r[t]); + const float w = __expf(-__expf(float(_w[t-C]))); + float sum = 0.0f; + + #pragma unroll + for (int j = 0; j < _N_; j++) + { + float& s = saaaa[j]; + s = (s + r * gy[j]) * w; + sum += s * v[j]; + } + sbbbb[(t-t_1)/C] = sum * float(_k[t-2*C]); + } + { + __syncthreads(); + gy[i] = float(_gy[t_1]); + __syncthreads(); + + const float r = float(_r[t_1]); + const float w = __expf(-__expf(float(_w[t_0]))); + float sum = 0.0f; + + #pragma unroll + for (int j = 0; j < _N_; j++) + { + float& s = saaaa[j]; + s = (s + r * gy[j]) * w; + sum += s * state[j]; + } + sbbbb[0] = sum; + } + + float sss = sbbbb[0]; + _gw[t_0] = F(sss * -__expf(float(_w[t_0]))); + + { + __syncthreads(); + gy[i] = float(_gy[t_1]); + __syncthreads(); + + const float w = __expf(-__expf(float(_w[t_0]))); + float sum = 0.0f; + + #pragma unroll + for (int j = 0; j < _N_; j++) + { + float& s = scccc[j]; + s = (s + state[j]) * w; + sum += s * gy[j]; + } + sss += sbbbb[1] - (sum * float(_r[t_1])); + _gw[t_1] = F(sss * -__expf(float(_w[t_1]))); + } + for (int t = t_2; t < t_T_1; t += C) + { + __syncthreads(); + gy[i] = float(_gy[t]); + v[i] = float(_v[t-2*C]); + __syncthreads(); + + const float w = __expf(-__expf(float(_w[t-C]))); + const float k = float(_k[t-2*C]); + float sum = 0.0f; + + #pragma unroll + for (int j = 0; j < _N_; j++) + { + float& s = scccc[j]; + s = (s + k * v[j]) * w; + sum += s * gy[j]; + } + sss += sbbbb[(t-t_0)/C] - (sum * 
float(_r[t])); + _gw[t] = F(sss * -__expf(float(_w[t]))); + } + _gw[t_T_1] = 0; +} + +void cuda_forward(int B, int T, int C, int H, bf16 *r, bf16 *k, bf16 *v, bf16 *w, bf16 *u, bf16 *z, bf16 *y) +{ + assert(H*_N_ == C); + assert(_N_%4 == 0); + kernel_forward<<<dim3(B * H), dim3(_N_)>>>(B, T, C, H, r, k, v, w, u, z, y); +} + +void cuda_backward(int B, int T, int C, int H, bf16 *r, bf16 *k, bf16 *v, bf16 *w, bf16 *u, bf16 *z, bf16 *gy, bf16 *gr, bf16 *gk, bf16 *gv, bf16 *gw, bf16 *gu, bf16 *gs) +{ + assert(H*_N_ == C); + assert(_N_%4 == 0); + kernel_backward_111<<<dim3(B * H), dim3(_N_)>>>(B, T, C, H, r, k, v, w, u, z, gy, gr, gk, gv, gu, gs); + kernel_backward_222<<<dim3(B * H), dim3(_N_)>>>(B, T, C, H, r, k, v, w, u, z, gy, gw); +} diff --git a/cuda/wkv6infctx_op.cpp b/cuda/wkv6infctx_op.cpp new file mode 100644 index 0000000000000000000000000000000000000000..4df24ca936ef05656293e0c98f8a0445bcc5dca2 --- /dev/null +++ b/cuda/wkv6infctx_op.cpp @@ -0,0 +1,22 @@ +#include <torch/extension.h> +#include "ATen/ATen.h" +typedef at::BFloat16 bf16; + +void cuda_forward(int B, int T, int C, int H, bf16 *r, bf16 *k, bf16 *v, bf16 *w, bf16 *u, bf16 *s, bf16 *y); +void cuda_backward(int B, int T, int C, int H, bf16 *r, bf16 *k, bf16 *v, bf16 *w, bf16 *u, bf16 *s, bf16 *gy, bf16 *gr, bf16 *gk, bf16 *gv, bf16 *gw, bf16 *gu, bf16 *gs); + +void forward(int64_t B, int64_t T, int64_t C, int64_t H, torch::Tensor &r, torch::Tensor &k, torch::Tensor &v, torch::Tensor &w, torch::Tensor &u, torch::Tensor &s, torch::Tensor &y) { + cuda_forward(B, T, C, H, r.data_ptr<bf16>(), k.data_ptr<bf16>(), v.data_ptr<bf16>(), w.data_ptr<bf16>(), u.data_ptr<bf16>(), s.data_ptr<bf16>(), y.data_ptr<bf16>()); +} +void backward(int64_t B, int64_t T, int64_t C, int64_t H, torch::Tensor &r, torch::Tensor &k, torch::Tensor &v, torch::Tensor &w, torch::Tensor &u, torch::Tensor &s, torch::Tensor &gy, torch::Tensor &gr, torch::Tensor &gk, torch::Tensor &gv, torch::Tensor &gw, torch::Tensor &gu, torch::Tensor &gs) { + cuda_backward(B, T, C, H, r.data_ptr<bf16>(), k.data_ptr<bf16>(), v.data_ptr<bf16>(), w.data_ptr<bf16>(), u.data_ptr<bf16>(), s.data_ptr<bf16>(), gy.data_ptr<bf16>(), gr.data_ptr<bf16>(), gk.data_ptr<bf16>(), gv.data_ptr<bf16>(), gw.data_ptr<bf16>(), gu.data_ptr<bf16>(), gs.data_ptr<bf16>()); +} +PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { + m.def("forward", &forward, "wkv6state forward"); + m.def("backward", &backward, "wkv6state backward"); +} + +TORCH_LIBRARY(wkv6state, m) { + m.def("forward", forward); + m.def("backward", backward); +} diff --git a/cuda/wkv6state_cuda.cu b/cuda/wkv6state_cuda.cu new file mode 100644 index 0000000000000000000000000000000000000000..2996a7d474499c7682be54149a39e32be865b010 --- /dev/null +++ b/cuda/wkv6state_cuda.cu @@ -0,0 +1,311 @@ +#include <stdio.h> +#include <assert.h> +#include "ATen/ATen.h" +typedef at::BFloat16 bf16; + +template <typename F> +__global__ void kernel_forward(const int B, const int T, const int C, const int H, + const F *__restrict__ const _r, const F *__restrict__ const _k, const F *__restrict__ const _v, const F *__restrict__ _w, const F *__restrict__ _u,const F *__restrict__ _s, + F *__restrict__ const _y) +{ + const int b = blockIdx.x / H; + const int h = blockIdx.x % H; + const int i = threadIdx.x; + _u += h*_N_; + _s += h*_N_*_N_ + i*_N_; + + __shared__ float r[_N_], k[_N_], u[_N_], w[_N_]; + float state[_N_]; + + __syncthreads(); + u[i] = float(_u[i]); + __syncthreads(); + for (int j = 0; j < _N_; j++) { + state[j] = float(_s[j]); + } + + for (int t = b*T*C + h*_N_ + i; t < (b+1)*T*C + h*_N_ + i; t += C) + { + __syncthreads(); + w[i] = __expf(-__expf(float(_w[t]))); + r[i] = float(_r[t]); + k[i] = float(_k[t]); + __syncthreads(); + + const float v = float(_v[t]); + float y = 0; + + #pragma unroll + for (int j = 0; j < _N_; j+=4) + { + const float4& r_ = (float4&)(r[j]); + const float4& k_ = (float4&)(k[j]); + const float4& w_ = (float4&)(w[j]); + const float4& u_ = (float4&)(u[j]); + float4& s = (float4&)(state[j]); + float4 x; + + x.x = k_.x * v; + x.y = k_.y * v; + x.z = k_.z * v; + x.w = k_.w * v; + + y += r_.x * (u_.x * x.x + s.x); + y += r_.y * (u_.y * x.y + s.y); + y += r_.z * (u_.z * x.z + s.z); + y += r_.w * (u_.w * x.w + s.w);
+ + s.x = s.x * w_.x + x.x; + s.y = s.y * w_.y + x.y; + s.z = s.z * w_.z + x.z; + s.w = s.w * w_.w + x.w; + } + _y[t] = F(y); + } + // #pragma unroll + // for (int j = 0; j < _N_; j++) + // _s[j] = F(state[j]); +} + +template <typename F> +__global__ void kernel_backward_111(const int B, const int T, const int C, const int H, + const F *__restrict__ const _r, const F *__restrict__ const _k, const F *__restrict__ const _v, const F *__restrict__ _w, const F *__restrict__ _u, const F *__restrict__ _s, const F *__restrict__ const _gy, + F *__restrict__ const _gr, F *__restrict__ const _gk, F *__restrict__ const _gv, F *__restrict__ const _gu, F *__restrict__ const _gs) +{ + const int b = blockIdx.x / H; + const int h = blockIdx.x % H; + const int i = threadIdx.x; + _u += h*_N_; + _s += b*C*_N_ + h*_N_*_N_ + i*_N_; + + __shared__ float u_[_N_]; + __shared__ float r[_N_], k[_N_], v[_N_], w_[_N_], gy[_N_]; + __syncthreads(); + u_[i] = float(_u[i]); + __syncthreads(); + + const float u = u_[i]; + + float state[_N_], scccc[_N_] = {0}, sdddd[_N_] = {0}, sssss[_N_] = {0}, swwww[_N_]; + for (int j = 0; j < _N_; j++) { + state[j] = float(_s[j*_N_]); + swwww[j] = 1.0; + } + + const int t_0 = b*T*C + h*_N_ + i; + const int t_T_1 = t_0 + (T-1)*C; + const int t_T = t_0 + T*C; + + float gu = 0; + for (int t = t_0; t < t_T; t += C) + { + __syncthreads(); + v[i] = float(_v[t]); + gy[i] = float(_gy[t]); + __syncthreads(); + + const float k = float(_k[t]); + const float w = __expf(-__expf(float(_w[t]))); + float gr = 0, gu_ = 0; + + #pragma unroll + for (int j = 0; j < _N_; j++) + { + float& s = state[j]; + float x = k * v[j]; + + gr += (u * x + s) * gy[j]; + gu_ += x * gy[j]; + s = s * w + x; + } + _gr[t] = F(gr); + gu += float(_r[t]) * gu_; + } + _gu[b*C + h*_N_ + i] = F(gu); + + for (int t = t_T_1; t >= t_0; t -= C) + { + __syncthreads(); + v[i] = float(_v[t]); + gy[i] = float(_gy[t]); + __syncthreads(); + + const float rr = float(_r[t]); + const float w = __expf(-__expf(float(_w[t]))); + float
gk = 0; + + #pragma unroll + for (int j = 0; j < _N_; j++) + { + float& s = scccc[j]; + float x = rr * gy[j]; + + gk += (u * x + s) * v[j]; + s = x + s * w; + } + _gk[t] = F(gk); + } + + for (int t = t_T_1; t >= t_0; t -= C) + { + __syncthreads(); + r[i] = float(_r[t]); + k[i] = float(_k[t]); + w_[i] = __expf(-__expf(float(_w[t]))); + __syncthreads(); + + const float gyy = float(_gy[t]); + float gv = 0; + + #pragma unroll + for (int j = 0; j < _N_; j++) + { + float& s = sdddd[j]; + float x = gyy * r[j]; + + gv += (u_[j] * x + s) * k[j]; + s = x + s * w_[j]; + } + _gv[t] = F(gv); + } + + for (int t = t_0; t < t_T; t += C) + { + __syncthreads(); + r[i] = float(_r[t]); + w_[i] = __expf(-__expf(float(_w[t]))); + __syncthreads(); + + const float gyy = float(_gy[t]); + + #pragma unroll + for (int j = 0; j < _N_; j++) + { + float& w = swwww[j]; + sssss[j] += gyy * w * r[j]; + w *= w_[j]; + } + } + for (int j = 0; j < _N_; j++) + _gs[b*H*_N_*_N_ + h*_N_*_N_ + i*_N_ + j] = F(sssss[j]); +} + +template <typename F> +__global__ void kernel_backward_222(const int B, const int T, const int C, const int H, + const F *__restrict__ const _r, const F *__restrict__ const _k, const F *__restrict__ const _v, const F *__restrict__ _w, const F *__restrict__ _u, const F *__restrict__ _s, const F *__restrict__ const _gy, + F *__restrict__ const _gw) +{ + const int b = blockIdx.x / H; + const int h = blockIdx.x % H; + const int i = threadIdx.x; + _s += h*_N_*_N_ + i; + + __shared__ float v[_N_], gy[_N_]; + float state[_N_], saaaa[_N_] = {0}, sbbbb[_T_-1] = {0}, scccc[_N_] = {0}; + for (int j = 0; j < _N_; j++) { + state[j] = float(_s[j*_N_]); + } + + const int t_0 = b*T*C + h*_N_ + i; + const int t_1 = t_0 + C; + const int t_2 = t_0 + 2*C; + const int t_T_1 = t_0 + (T-1)*C; + + for (int t = t_T_1; t > t_1; t -= C) + { + __syncthreads(); + gy[i] = float(_gy[t]); + v[i] = float(_v[t-2*C]); + __syncthreads(); + + const float r = float(_r[t]); + const float w = __expf(-__expf(float(_w[t-C]))); + float sum =
0.0f; + + #pragma unroll + for (int j = 0; j < _N_; j++) + { + float& s = saaaa[j]; + s = (s + r * gy[j]) * w; + sum += s * v[j]; + } + sbbbb[(t-t_1)/C] = sum * float(_k[t-2*C]); + } + { + __syncthreads(); + gy[i] = float(_gy[t_1]); + __syncthreads(); + + const float r = float(_r[t_1]); + const float w = __expf(-__expf(float(_w[t_0]))); + float sum = 0.0f; + + #pragma unroll + for (int j = 0; j < _N_; j++) + { + float& s = saaaa[j]; + s = (s + r * gy[j]) * w; + sum += s * state[j]; + } + sbbbb[0] = sum; + } + + float sss = sbbbb[0]; + _gw[t_0] = F(sss * -__expf(float(_w[t_0]))); + + { + __syncthreads(); + gy[i] = float(_gy[t_1]); + __syncthreads(); + + const float w = __expf(-__expf(float(_w[t_0]))); + float sum = 0.0f; + + #pragma unroll + for (int j = 0; j < _N_; j++) + { + float& s = scccc[j]; + s = (s + state[j]) * w; + sum += s * gy[j]; + } + sss += sbbbb[1] - (sum * float(_r[t_1])); + _gw[t_1] = F(sss * -__expf(float(_w[t_1]))); + } + for (int t = t_2; t < t_T_1; t += C) + { + __syncthreads(); + gy[i] = float(_gy[t]); + v[i] = float(_v[t-2*C]); + __syncthreads(); + + const float w = __expf(-__expf(float(_w[t-C]))); + const float k = float(_k[t-2*C]); + float sum = 0.0f; + + #pragma unroll + for (int j = 0; j < _N_; j++) + { + float& s = scccc[j]; + s = (s + k * v[j]) * w; + sum += s * gy[j]; + } + sss += sbbbb[(t-t_0)/C] - (sum * float(_r[t])); + _gw[t] = F(sss * -__expf(float(_w[t]))); + } + _gw[t_T_1] = 0; +} + +void cuda_forward(int B, int T, int C, int H, bf16 *r, bf16 *k, bf16 *v, bf16 *w, bf16 *u, bf16 *z, bf16 *y) +{ + assert(H*_N_ == C); + assert(_N_%4 == 0); + kernel_forward<<<dim3(B * H), dim3(_N_)>>>(B, T, C, H, r, k, v, w, u, z, y); +} + +void cuda_backward(int B, int T, int C, int H, bf16 *r, bf16 *k, bf16 *v, bf16 *w, bf16 *u, bf16 *z, bf16 *gy, bf16 *gr, bf16 *gk, bf16 *gv, bf16 *gw, bf16 *gu, bf16 *gs) +{ + assert(H*_N_ == C); + assert(_N_%4 == 0); + kernel_backward_111<<<dim3(B * H), dim3(_N_)>>>(B, T, C, H, r, k, v, w, u, z, gy, gr, gk, gv, gu, gs); + kernel_backward_222<<<dim3(B * H), dim3(_N_)>>>(B, T, C, H,
r, k, v, w, u, z, gy, gw); +} diff --git a/cuda/wkv6state_op.cpp b/cuda/wkv6state_op.cpp new file mode 100644 index 0000000000000000000000000000000000000000..4df24ca936ef05656293e0c98f8a0445bcc5dca2 --- /dev/null +++ b/cuda/wkv6state_op.cpp @@ -0,0 +1,22 @@ +#include <torch/extension.h> +#include "ATen/ATen.h" +typedef at::BFloat16 bf16; + +void cuda_forward(int B, int T, int C, int H, bf16 *r, bf16 *k, bf16 *v, bf16 *w, bf16 *u, bf16 *s, bf16 *y); +void cuda_backward(int B, int T, int C, int H, bf16 *r, bf16 *k, bf16 *v, bf16 *w, bf16 *u, bf16 *s, bf16 *gy, bf16 *gr, bf16 *gk, bf16 *gv, bf16 *gw, bf16 *gu, bf16 *gs); + +void forward(int64_t B, int64_t T, int64_t C, int64_t H, torch::Tensor &r, torch::Tensor &k, torch::Tensor &v, torch::Tensor &w, torch::Tensor &u, torch::Tensor &s, torch::Tensor &y) { + cuda_forward(B, T, C, H, r.data_ptr<bf16>(), k.data_ptr<bf16>(), v.data_ptr<bf16>(), w.data_ptr<bf16>(), u.data_ptr<bf16>(), s.data_ptr<bf16>(), y.data_ptr<bf16>()); +} +void backward(int64_t B, int64_t T, int64_t C, int64_t H, torch::Tensor &r, torch::Tensor &k, torch::Tensor &v, torch::Tensor &w, torch::Tensor &u, torch::Tensor &s, torch::Tensor &gy, torch::Tensor &gr, torch::Tensor &gk, torch::Tensor &gv, torch::Tensor &gw, torch::Tensor &gu, torch::Tensor &gs) { + cuda_backward(B, T, C, H, r.data_ptr<bf16>(), k.data_ptr<bf16>(), v.data_ptr<bf16>(), w.data_ptr<bf16>(), u.data_ptr<bf16>(), s.data_ptr<bf16>(), gy.data_ptr<bf16>(), gr.data_ptr<bf16>(), gk.data_ptr<bf16>(), gv.data_ptr<bf16>(), gw.data_ptr<bf16>(), gu.data_ptr<bf16>(), gs.data_ptr<bf16>()); +} +PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { + m.def("forward", &forward, "wkv6state forward"); + m.def("backward", &backward, "wkv6state backward"); +} + +TORCH_LIBRARY(wkv6state, m) { + m.def("forward", forward); + m.def("backward", backward); +} diff --git a/demo/demo-predict.sh b/demo/demo-predict.sh new file mode 100644 index 0000000000000000000000000000000000000000..193c7f74e87443d34833f0c261668674b60cfbff --- /dev/null +++ b/demo/demo-predict.sh @@ -0,0 +1,39 @@ + +# 3B +load_model='RWKV-x060-World-3B-v2.1-20240417-ctx4096.pth' +#7B +# 
load_model='RWKV-x060-World-7B-v2.1-20240507-ctx4096.pth' + +#model output dir +proj_dir='output' + +# 3B +n_layer=32 +n_embd=2560 + +# 7B +# n_layer=32 +# n_embd=4096 + +micro_bsz=4 +epoch_steps=18089 +ctx_len=1024 +device=4 +epoch_save=1 + +file_path="path/to/your/audio/file" +OP="predict" + +QUANT='nf4' +export HF_ENDPOINT=https://hf-mirror.com +python train.py --load_model $load_model --devices $device --file_path $file_path \ +--proj_dir $proj_dir \ +--data_type binidx --vocab_size 65536 \ +--ctx_len $ctx_len --epoch_steps $epoch_steps --epoch_count 1000 --epoch_begin 0 --epoch_save $epoch_save --micro_bsz $micro_bsz \ +--n_layer $n_layer --n_embd $n_embd \ +--pre_ffn 0 --head_qk 0 --lr_init 1e-4 --lr_final 1e-4 --warmup_steps 100 --beta1 0.9 --beta2 0.99 --adam_eps 1e-8 \ +--accelerator gpu --strategy deepspeed_stage_1 --grad_cp 1 --op $OP \ +--precision bf16 \ +--my_testing "x060" \ +--train_type "state" --dataload pad +# --quant $QUANT diff --git a/demo/demo-state-tuning.sh b/demo/demo-state-tuning.sh new file mode 100644 index 0000000000000000000000000000000000000000..58c1acd5d95943d2378a698120074a15db49da9c --- /dev/null +++ b/demo/demo-state-tuning.sh @@ -0,0 +1,38 @@ + +# 3B +load_model='RWKV-x060-World-3B-v2.1-20240417-ctx4096.pth' +#7B +# load_model='RWKV-x060-World-7B-v2.1-20240507-ctx4096.pth' + +#model output dir +proj_dir='output' + +# 3B +n_layer=32 +n_embd=2560 + +# 7B +# n_layer=32 +# n_embd=4096 + +micro_bsz=4 +epoch_steps=18089 +ctx_len=1024 +device=4 +epoch_save=1 + +OP="train" + +QUANT='nf4' + +python train.py --load_model $load_model --devices $device \ +--proj_dir $proj_dir \ +--data_type binidx --vocab_size 65536 \ +--ctx_len $ctx_len --epoch_steps $epoch_steps --epoch_count 1000 --epoch_begin 0 --epoch_save $epoch_save --micro_bsz $micro_bsz \ +--n_layer $n_layer --n_embd $n_embd \ +--pre_ffn 0 --head_qk 0 --lr_init 1e-4 --lr_final 1e-4 --warmup_steps 100 --beta1 0.9 --beta2 0.99 --adam_eps 1e-8 \ +--accelerator gpu --strategy
deepspeed_stage_1 --grad_cp 1 --op $OP \ +--precision bf16 \ +--my_testing "x060" \ +--train_type "state" --dataload pad +# --quant $QUANT diff --git a/fla/__init__.py b/fla/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..b500e55a2cd02d95ea8267e90be76e3a3268e967 --- /dev/null +++ b/fla/__init__.py @@ -0,0 +1,50 @@ +# -*- coding: utf-8 -*- + +from fla.layers import (ABCAttention, BasedLinearAttention, DeltaNet, + GatedLinearAttention, HGRN2Attention, LinearAttention, + MultiScaleRetention, ReBasedLinearAttention) +from fla.models import (ABCForCausalLM, ABCModel, DeltaNetForCausalLM, + DeltaNetModel, GLAForCausalLM, GLAModel, + HGRN2ForCausalLM, HGRN2Model, HGRNForCausalLM, + HGRNModel, LinearAttentionForCausalLM, + LinearAttentionModel, RetNetForCausalLM, RetNetModel, + RWKV6ForCausalLM, RWKV6Model, TransformerForCausalLM, + TransformerModel) +from fla.ops import (chunk_gla, chunk_retention, fused_chunk_based, + fused_chunk_gla, fused_chunk_retention) + +__all__ = [ + 'ABCAttention', + 'BasedLinearAttention', + 'DeltaNet', + 'HGRN2Attention', + 'GatedLinearAttention', + 'LinearAttention', + 'MultiScaleRetention', + 'ReBasedLinearAttention', + 'ABCForCausalLM', + 'ABCModel', + 'DeltaNetForCausalLM', + 'DeltaNetModel', + 'HGRNForCausalLM', + 'HGRNModel', + 'HGRN2ForCausalLM', + 'HGRN2Model', + 'GLAForCausalLM', + 'GLAModel', + 'LinearAttentionForCausalLM', + 'LinearAttentionModel', + 'RetNetForCausalLM', + 'RetNetModel', + 'RWKV6ForCausalLM', + 'RWKV6Model', + 'TransformerForCausalLM', + 'TransformerModel', + 'chunk_gla', + 'chunk_retention', + 'fused_chunk_based', + 'fused_chunk_gla', + 'fused_chunk_retention' +] + +__version__ = '0.1' diff --git a/fla/layers/__init__.py b/fla/layers/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..cb8e4424356b46590659d2b52f4b0f2bf6aa5c8d --- /dev/null +++ b/fla/layers/__init__.py @@ -0,0 +1,25 @@ +# -*- coding: utf-8 -*- + +from .abc import ABCAttention +from .based 
import BasedLinearAttention +from .delta_net import DeltaNet +from .gla import GatedLinearAttention +from .hgrn import HGRNAttention +from .hgrn2 import HGRN2Attention +from .linear_attn import LinearAttention +from .multiscale_retention import MultiScaleRetention +from .rebased import ReBasedLinearAttention +from .rwkv6 import RWKV6Attention + +__all__ = [ + 'ABCAttention', + 'BasedLinearAttention', + 'DeltaNet', + 'GatedLinearAttention', + 'HGRNAttention', + 'HGRN2Attention', + 'LinearAttention', + 'MultiScaleRetention', + 'ReBasedLinearAttention', + 'RWKV6Attention' +] diff --git a/fla/layers/abc.py b/fla/layers/abc.py new file mode 100644 index 0000000000000000000000000000000000000000..4f4a9cc3ccbb09ef6db75c7ef53a2665526847ed --- /dev/null +++ b/fla/layers/abc.py @@ -0,0 +1,195 @@ +# -*- coding: utf-8 -*- + +from __future__ import annotations + +import warnings +from typing import Optional, Tuple + +import torch +import torch.nn as nn +from einops import rearrange +from transformers.cache_utils import Cache + +from fla.modules import (FusedRMSNormSwishGate, RMSNorm, RotaryEmbedding, + ShortConvolution) +from fla.modules.activations import swiglu, swish +from fla.modules.convolution import proj_then_conv1d +from fla.ops.abc.chunk import chunk_abc + + +class ABCAttention(nn.Module): + + def __init__( + self, + hidden_size: int = 1024, + expand_k: float = 0.5, + expand_v: float = 1.0, + num_heads: int = 4, + use_short_conv: bool = False, + conv_size: int = 4, + conv_bias: bool = False, + share_conv_kernel: bool = True, + num_slots: Optional[int] = None, + elementwise_affine: Optional[bool] = True, + norm_eps: float = 1e-5, + gate_low_rank_dim: int = 16, + gate_logit_normalizer: int = 16, + use_input_gate: bool = False, + use_output_gate: bool = True, + use_norm: bool = True, + clamp_min: Optional[float] = -32, + clamp_max: Optional[float] = 32, + layer_idx: Optional[int] = None, + **kwargs + ) -> ABCAttention: + super().__init__() + + self.hidden_size = 
hidden_size + self.expand_k = expand_k + self.expand_v = expand_v + self.num_heads = num_heads + self.key_dim = int(self.hidden_size * self.expand_k) + self.value_dim = int(self.hidden_size * self.expand_v) + self.head_k_dim = self.key_dim // self.num_heads + self.head_v_dim = self.value_dim // self.num_heads + + self.use_short_conv = use_short_conv + self.conv_size = conv_size + self.conv_bias = conv_bias + self.share_conv_kernel = share_conv_kernel + + self.gate_low_rank_dim = gate_low_rank_dim + self.gate_logit_normalizer = gate_logit_normalizer + + self.use_input_gate = use_input_gate + self.use_output_gate = use_output_gate + self.use_norm = use_norm + self.use_rope = kwargs.get('use_rope', False)  # read in forward() but never assigned otherwise; default off + + if num_slots is None: + num_slots = self.head_k_dim + self.num_slots = num_slots + + self.norm_eps = norm_eps + + self.clamp_min = clamp_min + self.clamp_max = clamp_max + self.layer_idx = layer_idx + + if layer_idx is None: + warnings.warn( + f"Instantiating {self.__class__.__name__} without passing `layer_idx` is not recommended and will lead " + "to errors during the forward call, if caching is used. Please make sure to provide a `layer_idx` " + "when creating this class."
+ ) + + self.q_proj = nn.Linear(self.hidden_size, self.key_dim, bias=False) + self.k_proj = nn.Linear(self.hidden_size, self.key_dim, bias=False) + self.v_proj = nn.Linear(self.hidden_size, self.value_dim, bias=False) + + if use_output_gate: + self.g_proj = nn.Linear(self.hidden_size, self.value_dim, bias=False) + self.s_proj = nn.Linear(self.hidden_size, self.num_heads * self.num_slots, bias=False) + self.o_proj = nn.Linear(self.value_dim, self.hidden_size, bias=False) + + if use_short_conv: + self.conv_size = conv_size + if share_conv_kernel: + self.h_conv1d = ShortConvolution(hidden_size, conv_size, activation='silu') + else: + self.q_conv1d = ShortConvolution(self.key_dim, conv_size, activation='silu') + self.k_conv1d = ShortConvolution(self.key_dim, conv_size, activation='silu') + self.v_conv1d = ShortConvolution(self.value_dim, conv_size, activation='silu') + + if self.use_norm: + if self.use_output_gate: + self.g_norm = FusedRMSNormSwishGate(self.head_v_dim, elementwise_affine, norm_eps) + else: + self.g_norm = RMSNorm(self.head_v_dim, elementwise_affine, norm_eps) + + if self.use_rope: + self.rotary = RotaryEmbedding(self.head_k_dim) + + self.apply(self._initialize_weights) + + def _initialize_weights(self, module: nn.Module): + if getattr(module, "_is_hf_initialized", False): + return + if isinstance(module, nn.Linear): + nn.init.xavier_uniform_(module.weight, gain=2 ** -2.5) + if module.bias is not None: + nn.init.zeros_(module.bias) + module._is_hf_initialized = True + + def forward( + self, + hidden_states: torch.Tensor, + attention_mask: Optional[torch.Tensor] = None, + past_key_values: Optional[Cache] = None, + use_cache: Optional[bool] = False, + output_attentions: Optional[bool] = False, + **kwargs + ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Cache]]: + + if self.use_short_conv: + if self.share_conv_kernel: + hidden_states = self.h_conv1d(hidden_states) + q = self.q_proj(hidden_states) + k = self.k_proj(hidden_states) + v = 
self.v_proj(hidden_states) + else: + q = proj_then_conv1d(hidden_states, self.q_proj.weight, self.q_conv1d.weight, self.q_conv1d.bias) + k = proj_then_conv1d(hidden_states, self.k_proj.weight, self.k_conv1d.weight, self.k_conv1d.bias) + v = proj_then_conv1d(hidden_states, self.v_proj.weight, self.v_conv1d.weight, self.v_conv1d.bias) + else: + q = self.q_proj(hidden_states) + k = self.k_proj(hidden_states) + v = self.v_proj(hidden_states) + + if self.use_input_gate: + q, k, v = map(lambda x: swish(x), (q, k, v)) + + if self.use_rope: + q = rearrange(q, '... (h d) -> ... h d', h=self.num_heads) + k = rearrange(k, '... (h d) -> ... h d', h=self.num_heads) + seqlen_offset = 0 + if past_key_values is not None: + seqlen_offset = past_key_values.get_seq_length(self.layer_idx) + q, k = self.rotary(q, k, seqlen_offset) + q = rearrange(q, 'b n h d -> b h n d', h=self.num_heads) + k = rearrange(k, 'b n h d -> b h n d', h=self.num_heads) + else: + q = rearrange(q, 'b n (h d) -> b h n d', h=self.num_heads) + k = rearrange(k, 'b n (h d) -> b h n d', h=self.num_heads) + v = rearrange(v, 'b n (h d) -> b h n d', h=self.num_heads) + + # [batch_size, n_heads, seq_len, num_slots] + s = rearrange(self.s_proj(hidden_states), 'b t (h m) -> b h t m', h=self.num_heads) + s = s.clamp_(self.clamp_min, self.clamp_max) + + last_state = past_key_values[self.layer_idx] if use_cache else None + o, last_state = chunk_abc(q, k, v, s, initial_state=last_state, output_final_state=use_cache) + if past_key_values is not None and last_state is not None: + past_key_values.update(last_state, self.layer_idx, q.shape[2]) + + o = rearrange(o, 'b h t d -> b t h d') + if self.use_norm and not self.use_output_gate: + o = self.g_norm(o) + elif self.use_output_gate: + g = rearrange(self.g_proj(hidden_states), 'b t (h d) -> b t h d', h=self.num_heads) + o = self.g_norm(o, g) if self.use_norm else swiglu(g, o) + o = rearrange(o, 'b t h d -> b t (h d)') + o = self.o_proj(o) + + return o, None, past_key_values + + 
def init_state(self, batch_size: int) -> Tuple[torch.Tensor]: + param = next(self.parameters()) + state = tuple() + if self.use_short_conv: + state += (param.new_zeros(batch_size, self.hidden_size, self.conv_size),) + state += (param.new_zeros(batch_size, self.num_heads, self.head_k_dim, self.num_slots), + param.new_zeros(batch_size, self.num_heads, self.num_slots, self.head_v_dim)) + return state + + def state_size(self, sequence_length: int = 2048): + return self.num_heads * self.key_dim * self.head_v_dim diff --git a/fla/layers/based.py b/fla/layers/based.py new file mode 100644 index 0000000000000000000000000000000000000000..bed0c161d182a8a7412502ab5e9c68801c39354a --- /dev/null +++ b/fla/layers/based.py @@ -0,0 +1,126 @@ +# -*- coding: utf-8 -*- + +""" +Linear attention in Based. +https://github.com/HazyResearch/zoology/blob/main/zoology/mixers/based.py +""" + +import torch +import torch.nn as nn +from einops import rearrange + +from fla.modules.feature_map import TaylorFeatureMap +from fla.ops.based import parallel_based +from fla.ops.linear_attn import chunk_linear_attn, fused_chunk_linear_attn + + +class BasedLinearAttention(nn.Module): + def __init__( + self, + hidden_size: int, + l_max: int = 2048, + feature_dim: int = 16, + num_key_value_heads: int = 12, + num_heads: int = 12, + feature_name: str = "taylor_exp", + eps: float = 1e-12, + causal: bool = True, + mode: str = "parallel", + ): + super().__init__() + self.hidden_size = hidden_size + self.l_max = l_max + self.mode = mode + assert self.mode in ["fused_chunk", "parallel", 'chunk'] + + # linear attention + self.feature_name = feature_name + self.feature_dim = feature_dim + self.num_key_value_heads = num_key_value_heads + self.num_heads = num_heads + self.head_dim = self.hidden_size // self.num_key_value_heads + self.causal = causal + + self.q_proj = nn.Linear(self.hidden_size, self.feature_dim * self.num_heads, bias=False) + self.k_proj = nn.Linear(self.hidden_size, self.feature_dim * self.num_heads, bias=False)
+ self.v_proj = nn.Linear(self.hidden_size, self.num_key_value_heads * self.head_dim, bias=False) + self.o_proj = nn.Linear(self.num_heads * self.head_dim, self.hidden_size, bias=False) + self.dropout = nn.Identity() + self.feature_map = TaylorFeatureMap(feature_dim) + self.eps = eps + + self.apply(self._initialize_weights) + + def _initialize_weights(self, module: nn.Module): + if getattr(module, "_is_hf_initialized", False): + return + if isinstance(module, nn.Linear): + nn.init.xavier_uniform_(module.weight, gain=2 ** -2.5) + if module.bias is not None: + nn.init.zeros_(module.bias) + module._is_hf_initialized = True + + def forward(self, hidden_states: torch.Tensor, **kwargs): + mode = self.mode + q, k, v = self.q_proj(hidden_states), self.k_proj(hidden_states), self.v_proj(hidden_states) + q, k, v = map(lambda x: rearrange(x, "b l (h d) -> b h l d", h=self.num_heads), [q, k, v]) + if mode == "fused_chunk": + q, k = self.feature_map(q), self.feature_map(k) + o = fused_chunk_linear_attn(q, k, v, normalize=True, scale=1) + elif mode == 'chunk': + q, k = self.feature_map(q), self.feature_map(k) + o = chunk_linear_attn(q, k, v, normalize=True, scale=1) + elif mode == 'parallel': + assert q.shape[-1] <= 128 + o = parallel_based(q, k, v, True, True) + o = rearrange(o, "b h l d -> b l (h d)") + o = self.o_proj(o) + o = self.dropout(o) + return o + + # https://github.com/HazyResearch/zoology/blob/main/zoology/mixers/based.py#L119 + + def forward_reference(self, hidden_states: torch.Tensor, filters: torch.Tensor = None, *args, **kwargs): + """ + x (torch.Tensor): tensor of shape (b, d, l) + y (torch.Tensor): tensor of shape (b, d, l) + """ + # hidden_states = hidden_states.transpose(1, 2) + b, l, _ = hidden_states.size() + q, k, v = self.q_proj(hidden_states), self.k_proj(hidden_states), self.v_proj(hidden_states) + + q = q.view(b, l, self.num_heads, self.feature_dim).transpose(1, 2) + k = k.view(b, l, self.num_key_value_heads, self.feature_dim).transpose(1, 2) + v = 
v.view(b, l, self.num_key_value_heads, self.head_dim).transpose(1, 2) + + # Linear attention + q, k = self.feature_map(q), self.feature_map(k) + q, k, v = q.unsqueeze(-2), k.unsqueeze(-2), v.unsqueeze(-1) + + # Compute attention + if self.causal: + y = ((q * (k * v).cumsum(2)).sum(-1) / ((q * k.cumsum(2)).sum(-1) + self.eps)) + else: + y = ((q * (k * v).sum(2, True)).sum(-1) / ((q * k.sum(2, True)).sum(-1) + self.eps)) + y = rearrange(y, 'b h l d -> b l (h d)') + y = self.o_proj(y.to(hidden_states.dtype)) + y = self.dropout(y) + return y.to(hidden_states.dtype) + + +if __name__ == '__main__': + batch = 4 + seq_len = 1024 + hidden_size = 1024 + dtype = torch.float32 + x = torch.randn(batch, seq_len, hidden_size).to(dtype).cuda().requires_grad_(True) + dy = torch.randn(batch, seq_len, hidden_size).to(dtype).cuda() + model = BasedLinearAttention(hidden_size, mode='chunk').to(dtype).cuda() + y = model(x) + y.backward(dy, retain_graph=True) + x_grad, x.grad = x.grad, None + y2 = model.forward_reference(x) + y2.backward(dy) + assert y.allclose(y2, 0, 1e-4), breakpoint() + assert x_grad.allclose(x.grad, 0, 1e-4), breakpoint() + print("Pass") diff --git a/fla/layers/delta_net.py b/fla/layers/delta_net.py new file mode 100644 index 0000000000000000000000000000000000000000..194b18a558d0afbf1e0f7d86c9f029ed3da375ff --- /dev/null +++ b/fla/layers/delta_net.py @@ -0,0 +1,254 @@ +# -*- coding: utf-8 -*- + +# Sect4.2 of Linear Transformers Are Secretly Fast Weight Programmers https://arxiv.org/abs/2102.11174 + + +from __future__ import annotations + +from typing import Optional, Tuple + +import torch +import torch.nn as nn +from einops import rearrange +from transformers.cache_utils import Cache + + +from fla.modules import FusedRMSNormSwishGate, RMSNorm, ShortConvolution, LayerNorm +from fla.modules.rotary import RotaryEmbedding +from fla.ops.delta_rule import (fused_chunk_delta_rule, + fused_recurrent_linear_attn_delta_rule, + chunk_delta_rule) +from torch.nn import functional 
as F + + +def simple_norm(x): + return (F.normalize(x, dim=-1) * x.shape[-1] ** 0.5).to(x) + + +# @torch.jit.script +def elu_p1(x): + return (F.elu(x, 1., False) + 1.).to(x) + + +# @torch.jit.script +def sum_norm(x): + return (x / x.sum(-1, keepdim=True)).to(x) + + +# @torch.jit.script +def elu_norm(x): + dtype = x.dtype + x = F.elu(x, 1., False) + 1. + return (x / x.sum(-1, keepdim=True)).to(dtype) + + + + +# https://github.com/IDSIA/recurrent-fwp/blob/master/algorithmic/layers.py#L86C1-L146C1 +class DeltaNet(nn.Module): + def __init__( + self, + d_model: int = None, + hidden_size: int = 1024, + expand_k: float = 1.0, + expand_v: float = 1.0, + num_heads: int = 4, + mode: str = 'fused_chunk', + chunk_size: int = 16, + use_beta: bool = True, + use_gate: bool = True, + use_rope: bool = False, + use_output_norm: bool = True, + use_elu: bool = False, + use_short_conv: bool = True, + conv_size: int = 4, + conv_bias: bool = False, + share_conv_kernel: bool = False, + layer_idx: int = None, + qk_activation: str = 'silu', + qk_norm: str = None, + save_memory: bool = False, + **kwargs + ) -> DeltaNet: + super().__init__() + self.mode = mode + self.qk_activation = qk_activation + self.qk_norm = qk_norm + assert self.qk_activation in ['silu', 'relu', 'elu', 'identity'] + assert self.qk_norm in [None, 'l2', 'sum']  # None must be allowed: it is the default and is handled in forward() + if d_model is not None: + hidden_size = d_model + self.hidden_size = hidden_size + self.expand_k = expand_k + self.expand_v = expand_v + self.num_heads = num_heads + self.chunk_size = chunk_size + self.use_gate = use_gate + self.use_output_norm = use_output_norm + self.use_short_conv = use_short_conv + self.conv_size = conv_size + self.conv_bias = conv_bias + self.share_conv_kernel = share_conv_kernel + + self.key_dim = int(hidden_size * expand_k) + self.value_dim = int(hidden_size * expand_v) + self.head_qk_dim = self.key_dim // num_heads + self.head_v_dim = self.value_dim // num_heads + self.layer_idx = layer_idx + + self.silu = torch.nn.SiLU() + + assert mode in
['chunk', 'fused_chunk', 'fused_recurrent'], f"Not supported mode `{mode}`." + assert self.key_dim % num_heads == 0, f"key dim must be divisible by num_heads of {num_heads}" + assert self.value_dim % num_heads == 0, f"value dim must be divisible by num_heads of {num_heads}" + + self.q_proj = nn.Linear(hidden_size, self.key_dim, bias=False) + self.k_proj = nn.Linear(hidden_size, self.key_dim, bias=False) + self.v_proj = nn.Linear(hidden_size, self.value_dim, bias=False) + self.o_proj = nn.Linear(self.value_dim, hidden_size, bias=False) + + self.use_beta = use_beta + self.use_elu = use_elu + if self.use_beta: + self.b_proj = nn.Linear(hidden_size, self.num_heads, bias=False) + if use_short_conv: + self.conv_size = conv_size + if share_conv_kernel: + self.h_conv1d = ShortConvolution(hidden_size, conv_size, activation=None) + else: + self.q_conv1d = ShortConvolution(self.key_dim, conv_size, activation='silu' if qk_activation == 'silu' else None) + self.k_conv1d = ShortConvolution(self.key_dim, conv_size, activation='silu' if qk_activation == 'silu' else None) + self.v_conv1d = ShortConvolution(self.value_dim, conv_size, activation='silu') + if use_gate: + self.g_proj = nn.Linear(hidden_size, self.value_dim, bias=False) + if self.use_gate: + self.norm = FusedRMSNormSwishGate(self.head_v_dim) + else: + self.norm = RMSNorm(self.head_v_dim) + self.apply(self._initialize_weights) + + def _initialize_weights(self, module: nn.Module): + if getattr(module, "_is_hf_initialized", False): + return + if isinstance(module, nn.Linear): + nn.init.xavier_uniform_(module.weight, gain=2 ** -2.5) + if module.bias is not None: + nn.init.zeros_(module.bias) + module._is_hf_initialized = True + + + def forward( + self, + hidden_states: torch.Tensor, + attention_mask: Optional[torch.Tensor] = None, + past_key_values: Optional[Cache] = None, + use_cache: Optional[bool] = False, + output_attentions: Optional[bool] = False, + **kwargs + ) -> Tuple[torch.Tensor, Optional[torch.Tensor],
Optional[Cache]]: + + # change to inference mode. + mode = 'fused_recurrent' if hidden_states.shape[1] < 64 else self.mode + last_state = past_key_values[self.layer_idx] if use_cache else None + + if attention_mask is not None: + if attention_mask.shape[-1] != hidden_states.shape[-2]: + attention_mask = attention_mask[:, -1:] + + if self.use_short_conv: + conv_state = last_state[0] if use_cache else None + if self.share_conv_kernel: + # conv state is updated inplace + hidden_states = self.h_conv1d(hidden_states, attention_mask, conv_state) + q = self.q_proj(hidden_states) + k = self.k_proj(hidden_states) + v = self.v_proj(hidden_states) + else: + conv_state_q = last_state[0] if use_cache else None + conv_state_k = last_state[1] if use_cache else None + conv_state_v = last_state[2] if use_cache else None + k = self.k_proj(hidden_states) + v = self.v_proj(hidden_states) + q = self.q_proj(hidden_states) + q = self.q_conv1d(q, attention_mask, conv_state_q) + k = self.k_conv1d(k, attention_mask, conv_state_k) + v = self.v_conv1d(v, attention_mask, conv_state_v) + else: + q = (self.q_proj(hidden_states)) + k = (self.k_proj(hidden_states)) + v = self.silu(self.v_proj(hidden_states)) + + # dealing with left-padding + if attention_mask is not None: + v = v.mul_(attention_mask.unsqueeze(-1)) + + q, k, v = map(lambda x: rearrange(x, 'b l (h d) -> b h l d', h=self.num_heads), (q, k, v)) + + if self.qk_activation != 'silu': + if self.qk_activation == 'relu': + q, k = q.relu(), k.relu() + elif self.qk_activation == 'elu': + q, k = elu_p1(q), elu_p1(k) + elif self.qk_activation == 'identity': + pass + else: + raise NotImplementedError + + if self.qk_norm is not None: + if self.qk_norm == 'l2': + k = torch.nn.functional.normalize(k, dim=-1, p=2).to(v) #auto mixed precision type transfer is annoying. 
+ q = torch.nn.functional.normalize(q, dim=-1, p=2).to(v) + elif self.qk_norm == 'sum': + q = sum_norm(q).to(v) + k = sum_norm(k).to(v) + + if self.use_beta: + beta = rearrange(self.b_proj(hidden_states), 'b l h -> b h l').sigmoid() + else: + beta = q.new_ones(q.shape[0], q.shape[1], q.shape[2]) + state = past_key_values[self.layer_idx][-1] if use_cache else None + if mode == 'fused_recurrent': + o, recurrent_state = fused_recurrent_linear_attn_delta_rule(q, k, v, beta, state, output_final_state=use_cache) + elif mode == 'fused_chunk': + assert self.chunk_size in [16, 32, 64] + o, recurrent_state = fused_chunk_delta_rule(q, k, v, beta, self.chunk_size, state, output_final_state=use_cache) + elif mode == 'chunk': + assert self.chunk_size in [16, 32, 64] + o, recurrent_state = chunk_delta_rule(q, k, v, beta, self.chunk_size, state, output_final_state=use_cache) + else: + raise NotImplementedError(f"Not supported mode `{mode}`.") + + if past_key_values is not None: + if self.use_short_conv: + if self.share_conv_kernel: + state = (conv_state, recurrent_state) + else: + state = (conv_state_q, conv_state_k, conv_state_v, recurrent_state) + else: + state = (recurrent_state,) + past_key_values.update(state, self.layer_idx) + + o = rearrange(o, 'b h l d -> b l h d') + if self.use_gate: + g = rearrange(self.g_proj(hidden_states), 'b l (h d) -> b l h d', h=self.num_heads) + o = self.norm(o, g) + else: + o = self.norm(o) + o = rearrange(o, 'b l h d -> b l (h d)') + o = self.o_proj(o) + + return o, None, past_key_values + + def init_state(self, batch_size: int) -> Tuple[torch.Tensor]: + param = next(self.parameters()) + state = tuple() + if self.use_short_conv: + if self.share_conv_kernel: + state += (param.new_zeros(batch_size, self.hidden_size, self.conv_size),) + else: + # for q/k/v each + state += (param.new_zeros(batch_size, self.key_dim, self.conv_size), + param.new_zeros(batch_size, self.key_dim, self.conv_size), + param.new_zeros(batch_size, self.value_dim, 
self.conv_size)) + state += (param.new_zeros(batch_size, self.num_heads, self.head_qk_dim, self.head_v_dim),) + return state \ No newline at end of file diff --git a/fla/layers/gated_abc.py b/fla/layers/gated_abc.py new file mode 100644 index 0000000000000000000000000000000000000000..e1bf4fe3ea8429027cf23fc96bbcb1178420b924 --- /dev/null +++ b/fla/layers/gated_abc.py @@ -0,0 +1,234 @@ +# -*- coding: utf-8 -*- + +from __future__ import annotations + +import warnings +from typing import Optional, Tuple + +import torch +import torch.nn as nn +import torch.nn.functional as F +from einops import rearrange, repeat +from transformers.cache_utils import Cache + +from fla.modules import (FusedRMSNormSwishGateLinear, RMSNormLinear, + RotaryEmbedding, ShortConvolution) +from fla.modules.activations import ACT2FN, swiglu_linear, swish +from fla.ops.abc.chunk_gate import chunk_gated_abc + + +class GatedABCAttention(nn.Module): + + def __init__( + self, + hidden_size: int = 1024, + expand_k: float = 1., + expand_v: float = 1., + num_heads: int = 4, + num_kv_heads: Optional[int] = None, + use_short_conv: bool = False, + conv_size: int = 4, + conv_bias: bool = False, + share_conv_kernel: bool = True, + num_slots: Optional[int] = None, + elementwise_affine: Optional[bool] = True, + norm_eps: float = 1e-5, + gate_low_rank_dim: Optional[int] = None, + gate_logit_normalizer: int = 16, + feature_map: str = 'swish', + use_rope: bool = False, + use_output_gate: bool = False, + use_norm: bool = True, + layer_idx: Optional[int] = None, + **kwargs + ) -> GatedABCAttention: + super().__init__() + + self.hidden_size = hidden_size + self.expand_k = expand_k + self.expand_v = expand_v + self.num_heads = num_heads + self.num_kv_heads = num_heads if num_kv_heads is None else num_kv_heads + self.num_kv_groups = self.num_heads // self.num_kv_heads + self.key_dim = int(hidden_size * expand_k) + self.value_dim = int(hidden_size * expand_v) + self.key_dim_per_group = self.key_dim // self.num_kv_groups 
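The grouped key/value projection sizes set up in this `__init__` follow the usual grouped-query-attention arithmetic: queries keep the full `key_dim`, while keys/values are projected once per KV group and later repeated across the heads that share them. A minimal pure-Python sketch of that bookkeeping (the helper name and the concrete numbers are illustrative, not part of the module):

```python
def kv_group_dims(hidden_size, expand_k, expand_v, num_heads, num_kv_heads=None):
    # Mirrors the dimension bookkeeping above: per-group widths shrink by
    # num_kv_groups, per-head widths by num_heads.
    num_kv_heads = num_kv_heads or num_heads
    num_kv_groups = num_heads // num_kv_heads
    key_dim = int(hidden_size * expand_k)
    value_dim = int(hidden_size * expand_v)
    return {
        'key_dim_per_group': key_dim // num_kv_groups,
        'value_dim_per_group': value_dim // num_kv_groups,
        'head_k_dim': key_dim // num_heads,
        'head_v_dim': value_dim // num_heads,
    }

# e.g. 4 heads sharing 2 KV heads halves the per-group projection width
dims = kv_group_dims(hidden_size=1024, expand_k=1.0, expand_v=1.0,
                     num_heads=4, num_kv_heads=2)
```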
+ self.value_dim_per_group = self.value_dim // self.num_kv_groups
+ self.head_k_dim = self.key_dim // self.num_heads
+ self.head_v_dim = self.value_dim // self.num_heads
+
+ self.use_short_conv = use_short_conv
+ self.conv_size = conv_size
+ self.conv_bias = conv_bias
+ self.share_conv_kernel = share_conv_kernel
+
+ if gate_low_rank_dim is None:
+ gate_low_rank_dim = self.hidden_size // 16
+ self.gate_low_rank_dim = gate_low_rank_dim
+ self.gate_logit_normalizer = gate_logit_normalizer
+
+ self.feature_map = feature_map
+ self.use_rope = use_rope
+ self.use_output_gate = use_output_gate
+ self.use_norm = use_norm
+
+ if num_slots is None:
+ num_slots = self.head_k_dim
+ self.num_slots = num_slots
+
+ self.layer_idx = layer_idx
+
+ if layer_idx is None:
+ warnings.warn(
+ f"Instantiating {self.__class__.__name__} without passing `layer_idx` is not recommended and will "
+ "lead to errors during the forward call if caching is used. Please make sure to provide a `layer_idx` "
+ "when creating this class."
+ ) + + self.q_proj = nn.Linear(self.hidden_size, self.key_dim, bias=False) + self.k_proj = nn.Linear(self.hidden_size, self.key_dim_per_group, bias=False) + self.v_proj = nn.Linear(self.hidden_size, self.value_dim_per_group, bias=False) + self.f_proj = nn.Linear(self.hidden_size, self.num_kv_heads * self.num_slots, bias=False) + + if use_output_gate: + self.g_proj = nn.Linear(hidden_size, self.value_dim, bias=False) + + if use_short_conv: + self.conv_size = conv_size + if share_conv_kernel: + self.h_conv1d = ShortConvolution(hidden_size, conv_size, activation='silu') + else: + self.q_conv1d = ShortConvolution(self.key_dim, conv_size, activation='silu') + self.k_conv1d = ShortConvolution(self.key_dim_per_group, conv_size, activation='silu') + self.v_conv1d = ShortConvolution(self.value_dim_per_group, conv_size, activation='silu') + + if self.use_norm: + if self.use_output_gate: + self.g_norm = FusedRMSNormSwishGateLinear(self.hidden_size, elementwise_affine, norm_eps) + else: + self.g_norm = RMSNormLinear(self.hidden_size, elementwise_affine, norm_eps) + self.o_proj = nn.Linear(self.value_dim, self.hidden_size, bias=False) + + if self.use_rope: + self.rotary = RotaryEmbedding(self.head_k_dim) + + self.apply(self._initialize_weights) + + def _initialize_weights(self, module: nn.Module): + if getattr(module, "_is_hf_initialized", False): + return + if isinstance(module, nn.Linear): + nn.init.xavier_uniform_(module.weight, gain=2 ** -2.5) + if module.bias is not None: + nn.init.zeros_(module.bias) + module._is_hf_initialized = True + + def forward( + self, + hidden_states: torch.Tensor, + attention_mask: Optional[torch.Tensor] = None, + past_key_values: Optional[Cache] = None, + use_cache: Optional[bool] = False, + output_attentions: Optional[bool] = False, + lower_bound: Optional[torch.Tensor] = None, + **kwargs + ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Cache]]: + + last_state = past_key_values[self.layer_idx] if use_cache else None + if 
self.use_short_conv: + conv_state = last_state[0] if use_cache else None + if self.share_conv_kernel: + # conv state is updated inplace + hidden_states = self.h_conv1d(hidden_states, attention_mask, conv_state) + q = self.q_proj(hidden_states) + k = self.k_proj(hidden_states) + v = self.v_proj(hidden_states) + else: + conv_state_q = last_state[0] if use_cache else None + conv_state_k = last_state[1] if use_cache else None + conv_state_v = last_state[2] if use_cache else None + q = self.q_proj(hidden_states) + k = self.k_proj(hidden_states) + v = self.v_proj(hidden_states) + q = self.q_conv1d(q, attention_mask, conv_state_q) + k = self.k_conv1d(k, attention_mask, conv_state_k) + v = self.v_conv1d(v, attention_mask, conv_state_v) + else: + q = self.q_proj(hidden_states) + k = self.k_proj(hidden_states) + v = self.v_proj(hidden_states) + f = self.f_proj(hidden_states) + + if self.use_rope: + q = rearrange(q, '... (h d) -> ... h d', h=self.num_heads) + k = rearrange(k, '... (h d) -> ... h d', h=self.num_kv_heads) + seqlen_offset = 0 + if past_key_values is not None: + seqlen_offset = past_key_values.get_seq_length(self.layer_idx) + q, k = self.rotary(q, k, seqlen_offset) + q = rearrange(q, 'b n h d -> b h n d', h=self.num_heads) + k = rearrange(k, 'b n h d -> b h n d', h=self.num_kv_heads) + else: + q = rearrange(q, 'b n (h d) -> b h n d', h=self.num_heads) + if self.num_kv_groups > 1: + k = repeat(k, 'b n (h d) -> b (h g) n d', h=self.num_kv_heads, g=self.num_kv_groups) + else: + k = rearrange(k, 'b n (h d) -> b h n d', h=self.num_kv_heads) + if self.num_kv_groups > 1: + v = repeat(v, 'b n (h d) -> b (h g) n d', h=self.num_kv_heads, g=self.num_kv_groups) + f = repeat(f, 'b n (h m) -> b (h g) n m', h=self.num_kv_heads, g=self.num_kv_groups) + else: + v = rearrange(v, 'b n (h d) -> b h n d', h=self.num_kv_heads) + f = rearrange(f, 'b n (h m) -> b h n m', h=self.num_kv_heads) + + if self.feature_map is not None: + q, k, v = map(lambda x: ACT2FN[self.feature_map](x), (q, 
k, v)) + f = F.logsigmoid(f) / self.gate_logit_normalizer + s = (1 - f.exp()).to(f.dtype) + # dealing with left-padding + if attention_mask is not None: + s = s.mul_(attention_mask.view(attention_mask.shape[0], 1, -1, 1)) + v = v.mul_(attention_mask.view(attention_mask.shape[0], 1, -1, 1)) + + recurrent_state = last_state[-2:] if use_cache else None + o, recurrent_state = chunk_gated_abc(q, k, v, s, f, + initial_state=recurrent_state, + output_final_state=use_cache) + if past_key_values is not None: + if self.use_short_conv: + if self.share_conv_kernel: + last_state = (conv_state,) + recurrent_state + else: + last_state = (conv_state_q, conv_state_k, conv_state_v) + recurrent_state + else: + last_state = recurrent_state + past_key_values.update(last_state, self.layer_idx, q.shape[2]) + + o = rearrange(o, 'b h t d -> b t (h d)') + if self.use_norm and not self.use_output_gate: + o = swish(o) + o = self.g_norm(o, self.o_proj.weight, self.o_proj.bias) + elif self.use_output_gate and not self.use_norm: + o = swiglu_linear(self.g_proj(hidden_states), o, self.o_proj.weight, self.o_proj.bias) + elif self.use_output_gate and self.use_norm: + o = self.g_norm(o, self.g_proj(hidden_states), self.o_proj.weight, self.o_proj.bias) + else: + o = self.o_proj(o) + return o, None, past_key_values + + def init_state(self, batch_size: int) -> Tuple[torch.Tensor]: + param = next(self.parameters()) + state = tuple() + if self.use_short_conv: + if self.share_conv_kernel: + state += (param.new_zeros(batch_size, self.hidden_size, self.conv_size),) + else: + state += (param.new_zeros(batch_size, self.key_dim, self.conv_size), + param.new_zeros(batch_size, self.key_dim, self.conv_size), + param.new_zeros(batch_size, self.value_dim, self.conv_size)) + state += (param.new_zeros(batch_size, self.num_heads, self.head_k_dim, self.num_slots), + param.new_zeros(batch_size, self.num_heads, self.num_slots, self.head_v_dim)) + return state + + def state_size(self, sequence_length: int = 2048): + 
return self.num_heads * self.key_dim * self.head_v_dim
diff --git a/fla/layers/gla.py b/fla/layers/gla.py
new file mode 100644
index 0000000000000000000000000000000000000000..8257196e1b2f47624682b4b691313cb3d51f2712
--- /dev/null
+++ b/fla/layers/gla.py
@@ -0,0 +1,268 @@
+# -*- coding: utf-8 -*-
+
+
+from __future__ import annotations
+
+from typing import Optional, Tuple
+
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+from einops import rearrange, repeat
+from transformers.cache_utils import Cache
+
+from fla.modules import FusedRMSNormSwishGate, RMSNorm, ShortConvolution
+from fla.modules.activations import ACT2FN
+from fla.ops.gla import chunk_gla, fused_chunk_gla, fused_recurrent_gla
+
+
+class GatedLinearAttention(nn.Module):
+ r"""
+ The layer implementation for [Gated Linear Attention Transformers with Hardware-Efficient Training](https://arxiv.org/abs/2312.06635). # noqa
+
+ Args:
+ mode (str, Optional):
+ Which GLA kernel to use.
+ Currently available: `chunk`, `fused_recurrent`, and `fused_chunk`.
+ Default: `chunk`.
+ hidden_size (int, Optional):
+ The hidden size of the input. Default: 1024.
+ expand_k (float, Optional):
+ The expansion ratio for the key dim. Default: 0.5.
+ expand_v (float, Optional):
+ The expansion ratio for the value dim. Default: 1.0.
+ num_heads (int, Optional):
+ The number of heads. Default: 4.
+ num_kv_heads (int, Optional):
+ The number of key/value heads, used for MQA. Default: None.
+ feature_map (str, Optional):
+ Feature map function applied to queries/keys. Default: None.
+ use_short_conv (bool, Optional):
+ Whether to use short convolutions. Default: `False`.
+ conv_size (int, Optional):
+ The kernel size of the short convolution, only used when `use_short_conv` is `True`. Default: 4.
+ conv_bias (bool, Optional):
+ Whether to use bias in the short convolution, only used when `use_short_conv` is `True`. Default: `False`.
+ share_conv_kernel (bool, Optional):
+ Whether to apply the short convolution before the q/k/v projections, only taking effect when `use_short_conv` is `True`. Default: `True`.
+ use_output_gate (bool, Optional):
+ Whether to use an output gate. Default: `True`.
+ gate_fn (str, Optional):
+ The activation function for the output gate. Default: `swish`.
+ elementwise_affine (bool, Optional):
+ If `True`, applies elementwise affine to LayerNorm with learnable parameters. Default: `True`.
+ norm_eps (float, Optional):
+ The epsilon value for the layernorm/rmsnorm layer. Default: 1e-5.
+ gate_logit_normalizer (int, Optional):
+ The normalizer for the gate logits, applied after `logsigmoid`. Default: 16.
+ gate_low_rank_dim (int, Optional):
+ The low rank dim for the gate projection. Default: 16.
+ clamp_min (float, Optional):
+ The minimum value for the gate logits. Default: None.
+ fuse_norm (bool, Optional):
+ Whether to fuse the norm and the output gate for a better memory footprint. Default: `True`.
+ layer_idx (int, Optional):
+ The index of the layer. Default: None.
+ """
+
+ def __init__(
+ self,
+ mode: str = 'chunk',
+ hidden_size: int = 1024,
+ expand_k: float = 0.5,
+ expand_v: float = 1.0,
+ num_heads: int = 4,
+ num_kv_heads: Optional[int] = None,
+ feature_map: Optional[str] = None,
+ use_short_conv: bool = False,
+ conv_size: int = 4,
+ conv_bias: bool = False,
+ share_conv_kernel: bool = True,
+ use_output_gate: bool = True,
+ gate_fn: str = 'swish',
+ elementwise_affine: Optional[bool] = True,
+ norm_eps: float = 1e-5,
+ gate_logit_normalizer: int = 16,
+ gate_low_rank_dim: int = 16,
+ clamp_min: Optional[float] = None,
+ fuse_norm: bool = True,
+ layer_idx: Optional[int] = None,
+ ) -> GatedLinearAttention:
+ super().__init__()
+
+ self.mode = mode
+ self.hidden_size = hidden_size
+ self.expand_k = expand_k
+ self.expand_v = expand_v
+ self.num_heads = num_heads
+ self.num_kv_heads = num_kv_heads if num_kv_heads is not None else num_heads
+ self.num_kv_groups = self.num_heads // self.num_kv_heads
+ self.feature_map_fn = ACT2FN[feature_map] if feature_map is not None else None
+
+ self.use_short_conv = use_short_conv
+ self.conv_size = conv_size
+ self.conv_bias = conv_bias
+ self.share_conv_kernel = share_conv_kernel
+ self.use_output_gate = use_output_gate
+
+ self.key_dim = int(hidden_size * expand_k)
+ self.value_dim = int(hidden_size * expand_v)
+ self.key_dim_per_group = self.key_dim // self.num_kv_groups
+ self.value_dim_per_group = self.value_dim // self.num_kv_groups
+ self.clamp_min = clamp_min
+ self.layer_idx = layer_idx
+
+ assert mode in ['chunk', 'fused_recurrent', 'fused_chunk'], f"Not supported mode `{mode}`."
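The chunk/fused kernels selected by `mode` all compute the same per-head gated linear attention recurrence, S_t = diag(exp(gk_t)) S_{t-1} + k_t v_t^T with o_t = S_t^T q_t. A pure-Python single-head reference (illustrative only; the real kernels operate on batched `b h l d` tensors and take `gk` from `logsigmoid` divided by `gate_logit_normalizer`):

```python
import math

def gla_recurrence(q, k, v, gk):
    """Single-head gated linear attention, one timestep at a time.

    q, k, v, gk are lists of per-step vectors; gk holds log forget gates,
    so exp(gk) in (0, 1] decays the running (d_k x d_v) state S.
    """
    d_k, d_v = len(k[0]), len(v[0])
    S = [[0.0] * d_v for _ in range(d_k)]
    outs = []
    for qt, kt, vt, gt in zip(q, k, v, gk):
        for i in range(d_k):
            a = math.exp(gt[i])  # per-key-channel forget gate
            for j in range(d_v):
                S[i][j] = a * S[i][j] + kt[i] * vt[j]
        outs.append([sum(qt[i] * S[i][j] for i in range(d_k)) for j in range(d_v)])
    return outs

# with gk = 0 (no decay) this reduces to plain (unnormalized) linear attention
o = gla_recurrence(q=[[1.0, 0.0]], k=[[0.0, 1.0]], v=[[2.0]], gk=[[0.0, 0.0]])
```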
+ assert self.key_dim % num_heads == 0, f"key dim must be divisible by num_heads of {num_heads}" + assert self.value_dim % num_heads == 0, f"value dim must be divisible by num_heads of {num_heads}" + + self.head_qk_dim = self.key_dim // num_heads + self.head_v_dim = self.value_dim // num_heads + + self.q_proj = nn.Linear(hidden_size, self.key_dim, bias=False) + self.k_proj = nn.Linear(hidden_size, self.key_dim_per_group, bias=False) + self.v_proj = nn.Linear(hidden_size, self.value_dim_per_group, bias=False) + if self.use_output_gate: + self.g_proj = nn.Linear(hidden_size, self.value_dim, bias=False) + + if use_short_conv: + self.conv_size = conv_size + if share_conv_kernel: + self.h_conv1d = ShortConvolution(hidden_size, conv_size, activation='silu') + else: + self.q_conv1d = ShortConvolution(self.key_dim, conv_size, activation='silu') + self.k_conv1d = ShortConvolution(self.key_dim_per_group, conv_size, activation='silu') + self.v_conv1d = ShortConvolution(self.value_dim_per_group, conv_size, activation='silu') + + self.gk_proj = nn.Sequential(nn.Linear(hidden_size, gate_low_rank_dim, bias=False), + nn.Linear(gate_low_rank_dim, self.key_dim_per_group, bias=True)) + self.o_proj = nn.Linear(self.value_dim, hidden_size, bias=False) + + if gate_fn == 'swish' and fuse_norm and use_output_gate: + self.g_norm_swish_gate = FusedRMSNormSwishGate(self.head_v_dim, elementwise_affine, norm_eps) + self.fuse_norm_and_gate = True + else: + self.fuse_norm_and_gate = False + self.g_norm = RMSNorm(self.head_v_dim, elementwise_affine, norm_eps) + self.gate_fn = ACT2FN[gate_fn] + + self.gate_logit_normalizer = gate_logit_normalizer + + self.apply(self._initialize_weights) + + def _initialize_weights(self, module: nn.Module): + if getattr(module, "_is_hf_initialized", False): + return + if isinstance(module, nn.Linear): + nn.init.xavier_uniform_(module.weight, gain=2 ** -2.5) + if module.bias is not None: + nn.init.zeros_(module.bias) + module._is_hf_initialized = True + + def 
forward( + self, + hidden_states: torch.Tensor, + attention_mask: Optional[torch.Tensor] = None, + past_key_values: Optional[Cache] = None, + use_cache: Optional[bool] = False, + output_attentions: Optional[bool] = False, + **kwargs + ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Cache]]: + # launching the triton kernel for just one token will actually be slower + mode = 'fused_recurrent' if hidden_states.shape[1] == 1 else self.mode + + last_state = past_key_values[self.layer_idx] if use_cache else None + if self.use_short_conv: + conv_state = last_state[0] if use_cache else None + if self.share_conv_kernel: + # conv state is updated inplace + hidden_states = self.h_conv1d(hidden_states, attention_mask, conv_state) + q = self.q_proj(hidden_states) + k = self.k_proj(hidden_states) + v = self.v_proj(hidden_states) + else: + conv_state_q = last_state[0] if use_cache else None + conv_state_k = last_state[1] if use_cache else None + conv_state_v = last_state[2] if use_cache else None + q = self.q_proj(hidden_states) + k = self.k_proj(hidden_states) + v = self.v_proj(hidden_states) + q = self.q_conv1d(q, attention_mask, conv_state_q) + k = self.k_conv1d(k, attention_mask, conv_state_k) + v = self.v_conv1d(v, attention_mask, conv_state_v) + else: + q = self.q_proj(hidden_states) + k = self.k_proj(hidden_states) + v = self.v_proj(hidden_states) + gk = self.gk_proj(hidden_states) + + if self.feature_map_fn is not None: + q, k = map(self.feature_map_fn, (q, k)) + # dealing with left-padding + if attention_mask is not None: + v = v.mul_(attention_mask.unsqueeze(-1)) + q = rearrange(q, 'b l (h d) -> b h l d', h=self.num_heads) + if self.num_kv_groups > 1: + k, v, gk = (repeat(x, 'b l (h d) -> b (h g) l d', h=self.num_kv_heads, g=self.num_kv_groups) for x in (k, v, gk)) + else: + k, v, gk = (rearrange(x, 'b l (h d) -> b h l d', h=self.num_kv_heads) for x in (k, v, gk)) + gk = F.logsigmoid(gk) / self.gate_logit_normalizer + + if self.clamp_min is not None: + gk = 
torch.clamp_min(gk, self.clamp_min) + + recurrent_state = last_state[-1] if use_cache else None + if mode == 'fused_recurrent': + o, recurrent_state = fused_recurrent_gla(q, k, v, gk, initial_state=recurrent_state, output_final_state=use_cache) + elif mode == 'fused_chunk': + o, recurrent_state = fused_chunk_gla(q, k, v, gk, initial_state=recurrent_state, output_final_state=use_cache) + elif mode == 'chunk': + o, recurrent_state = chunk_gla(q, k, v, gk, initial_state=recurrent_state, output_final_state=use_cache) + else: + raise NotImplementedError(f"Not supported mode `{mode}`.") + + if past_key_values is not None: + if self.use_short_conv: + if self.share_conv_kernel: + last_state = (conv_state, recurrent_state) + else: + last_state = (conv_state_q, conv_state_k, conv_state_v, recurrent_state) + else: + last_state = (recurrent_state,) + past_key_values.update(last_state, self.layer_idx, q.shape[2]) + + o = rearrange(o, 'b h l d -> b l h d') + if self.use_output_gate: + g = self.g_proj(hidden_states) + if self.fuse_norm_and_gate: + g = rearrange(g, 'b l (h d) -> b l h d', h=self.num_heads) + o = self.g_norm_swish_gate(o, g) + o = rearrange(o, 'b l h d -> b l (h d)') + else: + o = rearrange(self.g_norm(o), 'b l h d -> b l (h d)') + o = o * self.gate_fn(g) + else: + o = rearrange(self.g_norm(o), 'b l h d -> b l (h d)') + o = self.o_proj(o) + + return o, None, past_key_values + + def init_state(self, batch_size: int) -> Tuple[torch.Tensor]: + param = next(self.parameters()) + state = tuple() + if self.use_short_conv: + if self.share_conv_kernel: + state += (param.new_zeros(batch_size, self.hidden_size, self.conv_size),) + else: + state += (param.new_zeros(batch_size, self.key_dim, self.conv_size), + param.new_zeros(batch_size, self.key_dim, self.conv_size), + param.new_zeros(batch_size, self.value_dim, self.conv_size)) + state += (param.new_zeros(batch_size, self.num_heads, self.head_qk_dim, self.head_v_dim),) + return state + + def state_size(self, **kwargs) -> int: 
+ state_size = self.key_dim * self.head_v_dim
+ for module in self.children():
+ if isinstance(module, ShortConvolution):
+ state_size += module.state_size
+ return state_size
diff --git a/fla/layers/hgrn.py b/fla/layers/hgrn.py
new file mode 100644
index 0000000000000000000000000000000000000000..b852d29ef91b6f16d15eda31a677e2de19c1ee01
--- /dev/null
+++ b/fla/layers/hgrn.py
@@ -0,0 +1,165 @@
+# -*- coding: utf-8 -*-
+
+# "Hierarchically Gated Recurrent Neural Network for Sequence Modeling" [https://arxiv.org/abs/2311.04823]
+
+from __future__ import annotations
+
+from typing import Optional, Tuple
+
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+from einops import rearrange
+from transformers.cache_utils import Cache
+
+from fla.modules import FusedRMSNormSwishGate, ShortConvolution
+from fla.modules.activations import swiglu
+from fla.ops.hgrn import chunk_hgrn, fused_recurrent_hgrn
+
+
+class HGRNAttention(nn.Module):
+
+ def __init__(
+ self,
+ mode: str = 'chunk',
+ hidden_size: int = 1024,
+ num_heads: Optional[int] = None,
+ expand_ratio: Optional[int] = 1,
+ use_short_conv: bool = False,
+ conv_size: int = 4,
+ conv_bias: bool = False,
+ share_conv_kernel: bool = True,
+ elementwise_affine: Optional[bool] = True,
+ norm_eps: float = 1e-5,
+ layer_idx: Optional[int] = None
+ ) -> HGRNAttention:
+ super().__init__()
+
+ self.mode = mode
+ self.hidden_size = hidden_size
+ self.num_heads = num_heads
+ self.expand_ratio = expand_ratio
+ self.input_dim = int(hidden_size * expand_ratio)
+ self.head_dim = self.input_dim // self.num_heads
+
+ self.use_short_conv = use_short_conv
+ self.conv_size = conv_size
+ self.conv_bias = conv_bias
+ self.share_conv_kernel = share_conv_kernel
+
+ self.layer_idx = layer_idx
+
+ assert mode in ['chunk', 'fused_recurrent'], f"Not supported mode `{mode}`."
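Both HGRN kernels implement the same elementwise gated recurrence, h_t = f_t * h_{t-1} + i_t, applied per channel. A pure-Python reference for one channel group (illustrative; in the layer itself `f` is passed to the kernels in log space via `logsigmoid`, and `i` is pre-gated with `swiglu`):

```python
def hgrn_recurrence(i_seq, f_seq):
    """h_t = f_t * h_{t-1} + i_t, the recurrence behind
    chunk_hgrn / fused_recurrent_hgrn, per channel."""
    h = [0.0] * len(i_seq[0])
    hs = []
    for it, ft in zip(i_seq, f_seq):
        h = [f * hp + i for i, f, hp in zip(it, ft, h)]
        hs.append(h)
    return hs

hs = hgrn_recurrence(i_seq=[[1.0], [1.0]], f_seq=[[0.5], [0.5]])
# h1 = 1.0, then h2 = 0.5 * 1.0 + 1.0 = 1.5
```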
+ assert self.hidden_size % num_heads == 0, f"hidden size must be divisible by num_heads of {num_heads}" + + self.i_proj = nn.Linear(hidden_size, self.input_dim, bias=False) + self.f_proj = nn.Linear(hidden_size, self.input_dim, bias=False) + self.g_proj = nn.Linear(hidden_size, self.input_dim, bias=False) + + if use_short_conv: + self.conv_size = conv_size + if share_conv_kernel: + self.h_conv1d = ShortConvolution(hidden_size, conv_size, activation='silu') + else: + self.q_conv1d = ShortConvolution(self.input_dim, conv_size, activation='silu') + self.f_conv1d = ShortConvolution(self.input_dim, conv_size, activation='silu') + self.i_conv1d = ShortConvolution(self.input_dim, conv_size, activation='silu') + + self.g_norm = FusedRMSNormSwishGate(self.input_dim, elementwise_affine, norm_eps) + self.o_proj = nn.Linear(self.input_dim, hidden_size, bias=False) + + self.apply(self._initialize_weights) + + def _initialize_weights(self, module: nn.Module): + if getattr(module, "_is_hf_initialized", False): + return + if isinstance(module, nn.Linear): + nn.init.xavier_uniform_(module.weight, gain=2 ** -2.5) + if module.bias is not None: + nn.init.zeros_(module.bias) + module._is_hf_initialized = True + + def forward( + self, + hidden_states: torch.Tensor, + attention_mask: Optional[torch.Tensor] = None, + past_key_values: Optional[Cache] = None, + use_cache: Optional[bool] = False, + output_attentions: Optional[bool] = False, + lower_bound: Optional[torch.Tensor] = None, + **kwargs + ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Cache]]: + # launching the triton kernel for just one token will actually be slower + mode = 'fused_recurrent' if hidden_states.shape[1] == 1 else self.mode + + last_state = past_key_values[self.layer_idx] if use_cache else None + if self.use_short_conv: + conv_state = last_state[0] if use_cache else None + if self.share_conv_kernel: + # conv state is updated inplace + hidden_states = self.h_conv1d(hidden_states, attention_mask, 
conv_state) + i = self.i_proj(hidden_states) + f = self.f_proj(hidden_states) + else: + conv_state_i = last_state[2] if use_cache else None + conv_state_f = last_state[1] if use_cache else None + i = self.i_conv1d(self.i_proj(hidden_states), attention_mask, conv_state_i) + f = self.f_conv1d(self.f_proj(hidden_states), attention_mask, conv_state_f) + else: + i = self.i_proj(hidden_states) + f = self.f_proj(hidden_states) + + # the lower bound for the first layer is zero + if lower_bound is None or self.layer_idx == 0: + i, f = swiglu(i, 1 - f.sigmoid()), F.logsigmoid(f) + else: + g = lower_bound + (1 - lower_bound) * f.sigmoid() + i, f = swiglu(i, 1 - g), g.log() + + # dealing with left-padding + if attention_mask is not None: + i = i.mul_(attention_mask.unsqueeze(-1)) + i, f = map(lambda x: rearrange(x, 'b l (h d) -> b h l d', h=self.num_heads), (i, f)) + + recurrent_state = last_state[-1] if use_cache else None + if mode == 'chunk': + o, recurrent_state = chunk_hgrn(i, f, initial_state=recurrent_state, output_final_state=use_cache) + elif mode == 'fused_recurrent': + o, recurrent_state = fused_recurrent_hgrn(i, f, initial_state=recurrent_state, output_final_state=use_cache) + else: + raise NotImplementedError(f"Not supported mode `{mode}`.") + + if past_key_values is not None: + if self.use_short_conv: + if self.share_conv_kernel: + last_state = (conv_state, recurrent_state) + else: + last_state = (conv_state_i, conv_state_f, recurrent_state) + else: + last_state = (recurrent_state,) + past_key_values.update(last_state, self.layer_idx, i.shape[2]) + + o = self.g_norm(self.g_proj(hidden_states), rearrange(o, 'b h l d -> b l (h d)')) + o = self.o_proj(o) + + return o, None, past_key_values + + def init_state(self, batch_size: int) -> Tuple[torch.Tensor]: + param = next(self.parameters()) + state = tuple() + if self.use_short_conv: + if self.share_conv_kernel: + state += (param.new_zeros(batch_size, self.hidden_size, self.conv_size),) + else: + state += 
(param.new_zeros(batch_size, self.hidden_size, self.conv_size), + param.new_zeros(batch_size, self.hidden_size, self.conv_size), + param.new_zeros(batch_size, self.hidden_size, self.conv_size)) + state += (param.new_zeros(batch_size, self.num_heads, self.head_dim),) + return state + + def state_size(self, **kwargs) -> int: + state_size = self.hidden_size + for module in self.children(): + if isinstance(module, ShortConvolution): + state_size += module.state_size + return state_size diff --git a/fla/layers/hgrn2.py b/fla/layers/hgrn2.py new file mode 100644 index 0000000000000000000000000000000000000000..19a3da6fc5ab6df3ddf764594d2fe117ae605778 --- /dev/null +++ b/fla/layers/hgrn2.py @@ -0,0 +1,186 @@ +# -*- coding: utf-8 -*- + +# "HGRN2: Gated Linear RNNs with State Expansion"[https://arxiv.org/abs/2404.07904] + +from __future__ import annotations + +from typing import Optional, Tuple + +import torch +import torch.nn as nn +import torch.nn.functional as F +from einops import rearrange +from transformers.cache_utils import Cache + +from fla.modules import RMSNorm, ShortConvolution +from fla.modules.activations import swish +from fla.ops.gla import chunk_gla, fused_chunk_gla, fused_recurrent_gla + + +class HGRN2Attention(nn.Module): + + def __init__( + self, + mode: str = 'chunk', + hidden_size: int = 1024, + num_heads: Optional[int] = None, + expand_ratio: Optional[int] = 128, + use_short_conv: bool = False, + conv_size: int = 4, + conv_bias: bool = False, + share_conv_kernel: bool = True, + elementwise_affine: Optional[bool] = True, + norm_eps: float = 1e-5, + layer_idx: int = None + ) -> HGRN2Attention: + super().__init__() + + self.mode = mode + self.hidden_size = hidden_size + + if expand_ratio is None and num_heads is not None: + expand_ratio = hidden_size // num_heads + elif expand_ratio is not None and num_heads is None: + num_heads = hidden_size // expand_ratio + else: + raise RuntimeError("One of `expand_ratio` or `num_heads` should be provided.") + 
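The mutual derivation of `num_heads` and `expand_ratio` above can be isolated as a small helper (hypothetical name, same logic): exactly one of the two must be supplied, and the other is derived from `hidden_size`; supplying both, or neither, raises.

```python
def resolve_heads(hidden_size, num_heads=None, expand_ratio=128):
    # Mirrors HGRN2Attention.__init__: derive the missing one of
    # num_heads / expand_ratio from hidden_size.
    if expand_ratio is None and num_heads is not None:
        expand_ratio = hidden_size // num_heads
    elif expand_ratio is not None and num_heads is None:
        num_heads = hidden_size // expand_ratio
    else:
        raise RuntimeError("One of `expand_ratio` or `num_heads` should be provided.")
    return num_heads, expand_ratio

# with the defaults, only expand_ratio is set, so num_heads = 1024 // 128 = 8
heads, ratio = resolve_heads(1024)
```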
self.num_heads = num_heads
+ self.expand_ratio = expand_ratio
+
+ self.use_short_conv = use_short_conv
+ self.conv_size = conv_size
+ self.conv_bias = conv_bias
+ self.share_conv_kernel = share_conv_kernel
+
+ self.forget_dim = int(self.num_heads * self.expand_ratio)
+ self.input_dim = hidden_size
+ self.layer_idx = layer_idx
+
+ assert mode in ['chunk', 'fused_recurrent', 'fused_chunk'], f"Not supported mode `{mode}`."
+ assert self.forget_dim % num_heads == 0, f"forget dim must be divisible by num_heads of {num_heads}"
+ assert self.input_dim % num_heads == 0, f"input dim must be divisible by num_heads of {num_heads}"
+
+ self.head_f_dim = self.expand_ratio
+ self.head_i_dim = self.hidden_size // num_heads
+
+ self.q_proj = nn.Linear(hidden_size, self.forget_dim, bias=False)
+ self.f_proj = nn.Linear(hidden_size, self.forget_dim, bias=False)
+ self.i_proj = nn.Linear(hidden_size, self.input_dim, bias=False)
+
+ if use_short_conv:
+ self.conv_size = conv_size
+ if share_conv_kernel:
+ self.h_conv1d = ShortConvolution(hidden_size, conv_size, activation='silu')
+ else:
+ self.q_conv1d = ShortConvolution(self.forget_dim, conv_size, activation='silu')
+ self.f_conv1d = ShortConvolution(self.forget_dim, conv_size, activation='silu')
+ self.i_conv1d = ShortConvolution(self.input_dim, conv_size, activation='silu')
+
+ self.g_norm = RMSNorm(self.hidden_size, elementwise_affine, norm_eps)
+ self.o_proj = nn.Linear(self.input_dim, hidden_size, bias=False)
+
+ self.apply(self._initialize_weights)
+
+ def _initialize_weights(self, module: nn.Module):
+ if getattr(module, "_is_hf_initialized", False):
+ return
+ if isinstance(module, nn.Linear):
+ nn.init.xavier_uniform_(module.weight, gain=2 ** -2.5)
+ if module.bias is not None:
+ nn.init.zeros_(module.bias)
+ module._is_hf_initialized = True
+
+ def forward(
+ self,
+ hidden_states: torch.Tensor,
+ attention_mask: Optional[torch.Tensor] = None,
+ past_key_values: Optional[Cache] = None,
+ use_cache: Optional[bool] = False,
+ output_attentions: Optional[bool] = False, + lower_bound: Optional[torch.Tensor] = None, + **kwargs + ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Cache]]: + # launching the triton kernel for just one token will actually be slower + mode = 'fused_recurrent' if hidden_states.shape[1] == 1 else self.mode + + last_state = past_key_values[self.layer_idx] if use_cache else None + if self.use_short_conv: + conv_state = last_state[0] if use_cache else None + if self.share_conv_kernel: + # conv state is updated inplace + hidden_states = self.h_conv1d(hidden_states, attention_mask, conv_state) + q = self.q_proj(hidden_states) + f = self.f_proj(hidden_states) + i = self.i_proj(hidden_states) + else: + conv_state_q = last_state[0] if use_cache else None + conv_state_f = last_state[1] if use_cache else None + conv_state_i = last_state[2] if use_cache else None + q = self.q_proj(hidden_states) + f = self.f_proj(hidden_states) + i = self.i_proj(hidden_states) + q = self.q_conv1d(q, attention_mask, conv_state_q) + f = self.f_conv1d(f, attention_mask, conv_state_f) + i = self.i_conv1d(i, attention_mask, conv_state_i) + else: + q = self.q_proj(hidden_states) + f = self.f_proj(hidden_states) + i = self.i_proj(hidden_states) + + # dealing with left-padding + if attention_mask is not None: + i = i.mul_(attention_mask.unsqueeze(-1)) + + q = swish(q) + # the lower bound for the first layer is zero + if lower_bound is None or self.layer_idx == 0: + k, g = 1 - f.sigmoid(), F.logsigmoid(f) + else: + g = lower_bound + (1 - lower_bound) * f.sigmoid() + k, g = 1 - g, g.log() + q, k, i, g = map(lambda x: rearrange(x, 'b l (h d) -> b h l d', h=self.num_heads), (q, k, i, g)) + + recurrent_state = last_state[-1] if use_cache else None + if mode == 'fused_recurrent': + o, recurrent_state = fused_recurrent_gla(q, k, i, g, initial_state=recurrent_state, output_final_state=use_cache) + elif mode == 'fused_chunk': + o, recurrent_state = fused_chunk_gla(q, k, i, g, 
initial_state=recurrent_state, output_final_state=use_cache) + elif mode == 'chunk': + o, recurrent_state = chunk_gla(q, k, i, g, initial_state=recurrent_state, output_final_state=use_cache) + else: + raise NotImplementedError(f"Not supported mode `{mode}`.") + + if past_key_values is not None: + if self.use_short_conv: + if self.share_conv_kernel: + last_state = (conv_state, recurrent_state) + else: + last_state = (conv_state_q, conv_state_f, conv_state_i, recurrent_state) + else: + last_state = (recurrent_state,) + past_key_values.update(last_state, self.layer_idx, q.shape[2]) + + o = self.g_norm(rearrange(o, 'b h l d -> b l (h d)')) + o = self.o_proj(o) + + return o, None, past_key_values + + def init_state(self, batch_size: int) -> Tuple[torch.Tensor]: + param = next(self.parameters()) + state = tuple() + if self.use_short_conv: + if self.share_conv_kernel: + state += (param.new_zeros(batch_size, self.hidden_size, self.conv_size),) + else: + state += (param.new_zeros(batch_size, self.forget_dim, self.conv_size), + param.new_zeros(batch_size, self.forget_dim, self.conv_size), + param.new_zeros(batch_size, self.input_dim, self.conv_size)) + state += (param.new_zeros(batch_size, self.num_heads, self.head_f_dim, self.head_i_dim),) + return state + + def state_size(self, **kwargs) -> int: + state_size = self.forget_dim * self.head_i_dim + for module in self.children(): + if isinstance(module, ShortConvolution): + state_size += module.state_size + return state_size diff --git a/fla/layers/linear_attn.py b/fla/layers/linear_attn.py new file mode 100644 index 0000000000000000000000000000000000000000..73b3270b6f233a2b6cbd4c4ffaa8ff2c70c096b8 --- /dev/null +++ b/fla/layers/linear_attn.py @@ -0,0 +1,156 @@ +# -*- coding: utf-8 -*- + +import torch.nn as nn +import torch.nn.functional as F +from einops import rearrange + +from fla.modules import RMSNorm +from fla.modules.feature_map import (DPFPFeatureMap, HadamardFeatureMap, + HedgehogFeatureMap, T2RFeatureMap) +from 
fla.ops.linear_attn import (chunk_linear_attn, fused_chunk_linear_attn, + fused_recurrent_linear_attn) + + +class LinearAttention(nn.Module): + def __init__( + self, + hidden_size: int = 1024, + expand_k: float = 1.0, + expand_v: float = 1.0, + num_heads: int = 8, + mode: str = 'chunk', + feature_map: str = 'elementwise_product', + tie_feature_map_qk: bool = False, + output_norm: str = 'rmsnorm', + norm_q: bool = False, + norm_k: bool = False, + # standard linear attention normalization + do_feature_map_norm: bool = False, + elementwise_affine: bool = True, + norm_eps: float = 1e-5, + **kwargs, + ): + super().__init__() + assert feature_map in ['elu', 'relu', 'hedgehog', 't2r', 'dpfp', + 'identity', 'elementwise_product'], f"Not supported feature map `{feature_map}`." + + assert output_norm in ['rmsnorm', 'identity'], f"Not supported output norm `{output_norm}`." + + self.hidden_size = hidden_size + self.mode = mode + self.key_dim = int(hidden_size * expand_k) + self.value_dim = int(hidden_size * expand_v) + self.num_heads = num_heads + + assert mode in ['chunk', 'fused_chunk', 'fused_recurrent'], f"Not supported mode `{mode}`."
+ assert self.key_dim % num_heads == 0, f"key dim must be divisible by num_heads of {num_heads}" + assert self.value_dim % num_heads == 0, f"value dim must be divisible by num_heads of {num_heads}" + + self.head_qk_dim = self.key_dim // num_heads + self.head_v_dim = self.value_dim // num_heads + + if feature_map == 'hedgehog': + if tie_feature_map_qk: + self.feature_map_q = self.feature_map_k = HedgehogFeatureMap(head_dim=self.head_qk_dim) + else: + self.feature_map_q = HedgehogFeatureMap(head_dim=self.head_qk_dim) + self.feature_map_k = HedgehogFeatureMap(head_dim=self.head_qk_dim) + + elif feature_map == 't2r': + if tie_feature_map_qk: + self.feature_map_q = self.feature_map_k = T2RFeatureMap(head_dim=self.head_qk_dim) + else: + self.feature_map_q = T2RFeatureMap(head_dim=self.head_qk_dim) + self.feature_map_k = T2RFeatureMap(head_dim=self.head_qk_dim) + + elif feature_map == 'elementwise_product': + if tie_feature_map_qk: + self.feature_map_q = self.feature_map_k = HadamardFeatureMap(head_dim=self.head_qk_dim) + else: + self.feature_map_q = HadamardFeatureMap(head_dim=self.head_qk_dim) + self.feature_map_k = HadamardFeatureMap(head_dim=self.head_qk_dim) + + elif feature_map == 'dpfp': + self.feature_map_q = DPFPFeatureMap(head_dim=self.head_qk_dim) + self.feature_map_k = DPFPFeatureMap(head_dim=self.head_qk_dim) + + elif feature_map == 'elu': + def elu(x): + return F.elu(x) + 1 + self.feature_map_q = elu + self.feature_map_k = elu + + elif feature_map == 'relu': + self.feature_map_q = nn.ReLU() + self.feature_map_k = nn.ReLU() + + elif feature_map == 'identity': + self.feature_map_q = nn.Identity() + self.feature_map_k = nn.Identity() + else: + raise NotImplementedError + + self.do_feature_map_norm = do_feature_map_norm + if output_norm == 'rmsnorm': + self.norm = RMSNorm(self.head_v_dim, elementwise_affine, norm_eps) + elif output_norm == 'identity': + self.norm = nn.Identity() + else: + raise NotImplementedError + + self.q_proj = nn.Linear(hidden_size, 
self.key_dim, bias=False) + self.k_proj = nn.Linear(hidden_size, self.key_dim, bias=False) + self.v_proj = nn.Linear(hidden_size, self.value_dim, bias=False) + self.o_proj = nn.Linear(self.value_dim, hidden_size, bias=False) + + self.norm_q = norm_q + self.norm_k = norm_k + + self.apply(self._initialize_weights) + + def _initialize_weights(self, module: nn.Module): + if getattr(module, "_is_hf_initialized", False): + return + if isinstance(module, nn.Linear): + nn.init.xavier_uniform_(module.weight, gain=2 ** -2.5) + if module.bias is not None: + nn.init.zeros_(module.bias) + module._is_hf_initialized = True + + def forward(self, x): + mode = self.mode + q = rearrange(self.q_proj(x), 'b n (h d) -> b h n d', h=self.num_heads) + k = rearrange(self.k_proj(x), 'b n (h d) -> b h n d', h=self.num_heads) + v = rearrange(self.v_proj(x), 'b n (h d) -> b h n d', h=self.num_heads) + q = self.feature_map_q(q) + k = self.feature_map_k(k) + if self.norm_q: + q = q / (q.sum(-1, keepdim=True) + 1e-4) + if self.norm_k: + k = k / (k.sum(-1, keepdim=True) + 1e-4) + + if mode == 'chunk': + o = chunk_linear_attn(q, k, v, normalize=self.do_feature_map_norm) + elif mode == 'fused_chunk': + o = fused_chunk_linear_attn(q, k, v, normalize=self.do_feature_map_norm) + elif mode == 'fused_recurrent': + o = fused_recurrent_linear_attn(q, k, v, normalize=self.do_feature_map_norm) + else: + raise NotImplementedError + o = self.norm(o) + o = rearrange(o, 'b h n d -> b n (h d)') + o = self.o_proj(o) + return o + + +if __name__ == '__main__': + import torch + batch = 4 + seq_len = 1024 + hidden_size = 1024 + x = torch.randn(batch, seq_len, hidden_size).to(torch.bfloat16).cuda().requires_grad_(True) + model = LinearAttention(hidden_size, feature_map='dpfp').to(torch.bfloat16).cuda() + y = model(x) + print(y.shape) + y.sum().backward() + print(x.grad.shape) diff --git a/fla/layers/multiscale_retention.py b/fla/layers/multiscale_retention.py new file mode 100644 index
0000000000000000000000000000000000000000..4d143df32962a3825d9c49148cc38c18e677dc8a --- /dev/null +++ b/fla/layers/multiscale_retention.py @@ -0,0 +1,271 @@ +# -*- coding: utf-8 -*- + +from __future__ import annotations + +from typing import Optional, Tuple + +import torch +import torch.nn as nn +from einops import rearrange, repeat +from transformers.activations import ACT2FN +from transformers.cache_utils import Cache + +from fla.modules import FusedRMSNormSwishGate, RMSNorm, ShortConvolution +from fla.modules.rotary import RotaryEmbedding +from fla.ops.retention import (chunk_retention, fused_chunk_retention, + fused_recurrent_retention, parallel_retention) + + +class MultiScaleRetention(nn.Module): + r""" + The layer implementation for [Retentive Network: A Successor to Transformer for Large Language Models](https://arxiv.org/pdf/2307.08621.pdf). # noqa + + Args: + mode (str, Optional): + Which Retention kernel to use. + Currently available: `chunk`, `fused_recurrent`, `parallel`, and `fused_chunk`. + Default: `fused_chunk`. + hidden_size (int, Optional): + The hidden size of the input. Default: 1024. + expand_k (float, Optional): + The expansion ratio for the key dim. Default: 1.0. + expand_v (float, Optional): + The expansion ratio for the value dim. Default: 2.0. + num_heads (int, Optional): + The number of heads. Default: 8. + num_kv_heads (int, Optional): + The number of key/value heads, used for MQA. Default: None. + feature_map (str, Optional): + Feature map function applied to queries/keys. Default: None. + use_short_conv (bool, Optional): + Whether to use short convolutions. Default: `False`. + conv_size (int, Optional): + The kernel size of the short convolution, only used when `use_short_conv` is `True`. Default: 4. + conv_bias (bool, Optional): + Whether to use bias in the short convolution, only used when `use_short_conv` is `True`. Default: `False`.
+ share_conv_kernel (bool, Optional): + Whether to apply convolutions before q/k/v mapping, only taking effect when `use_short_conv` is `True`. Default: `True`. + use_output_gate (bool, Optional): + Whether to use output gate. Default: `True`. + gate_fn (str, Optional): + The activation function for the output gate. Default: `swish`. + elementwise_affine (bool, Optional): + If `True`, applies elementwise affine to LayerNorm with learnable parameters. Default: `True`. + norm_eps (float, Optional): + The epsilon value for the layernorm/rmsnorm layer. Default: 1e-5. + fuse_norm (bool, Optional): + Whether to fuse the norm and the output gate for better memory footprint. Default: `True`. + layer_idx (int, Optional): + The index of the layer. Default: None. + """ + + def __init__( + self, + mode: str = 'fused_chunk', + hidden_size: int = 1024, + expand_k: float = 1.0, + expand_v: float = 2.0, + num_heads: int = 8, + num_kv_heads: Optional[int] = None, + feature_map: Optional[str] = None, + use_short_conv: bool = False, + conv_size: int = 4, + conv_bias: bool = False, + share_conv_kernel: bool = True, + use_output_gate: bool = True, + gate_fn: str = 'swish', + elementwise_affine: Optional[bool] = True, + norm_eps: float = 1e-5, + fuse_norm: bool = True, + layer_idx: Optional[int] = None, + **kwargs + ) -> MultiScaleRetention: + super().__init__() + + self.mode = mode + self.hidden_size = hidden_size + self.expand_k = expand_k + self.expand_v = expand_v + self.num_heads = num_heads + self.num_kv_heads = num_kv_heads if num_kv_heads is not None else num_heads + self.num_kv_groups = self.num_heads // self.num_kv_heads + self.feature_map_fn = ACT2FN[feature_map] if feature_map is not None else None + + self.use_short_conv = use_short_conv + self.conv_size = conv_size + self.conv_bias = conv_bias + self.share_conv_kernel = share_conv_kernel + self.use_output_gate = use_output_gate + + self.key_dim = int(hidden_size * expand_k) + self.value_dim = int(hidden_size * expand_v) +
self.key_dim_per_group = self.key_dim // self.num_kv_groups + self.value_dim_per_group = self.value_dim // self.num_kv_groups + self.layer_idx = layer_idx + + assert mode in ['chunk', 'fused_chunk', 'parallel', 'fused_recurrent'], f"Not supported mode `{mode}`." + assert self.key_dim % num_heads == 0, f"key dim must be divisible by num_heads of {num_heads}" + assert self.value_dim % num_heads == 0, f"value dim must be divisible by num_heads of {num_heads}" + + self.head_qk_dim = self.key_dim // num_heads + self.head_v_dim = self.value_dim // num_heads + + self.q_proj = nn.Linear(hidden_size, self.key_dim, bias=False) + self.k_proj = nn.Linear(hidden_size, self.key_dim_per_group, bias=False) + self.v_proj = nn.Linear(hidden_size, self.value_dim_per_group, bias=False) + if self.use_output_gate: + self.g_proj = nn.Linear(hidden_size, self.value_dim, bias=False) + + if use_short_conv: + self.conv_size = conv_size + if share_conv_kernel: + self.h_conv1d = ShortConvolution(hidden_size, conv_size, activation='silu') + else: + self.q_conv1d = ShortConvolution(self.key_dim, conv_size, activation='silu') + self.k_conv1d = ShortConvolution(self.key_dim_per_group, conv_size, activation='silu') + self.v_conv1d = ShortConvolution(self.value_dim_per_group, conv_size, activation='silu') + + self.o_proj = nn.Linear(self.value_dim, hidden_size, bias=False) + + if gate_fn == 'swish' and fuse_norm and use_output_gate: + self.g_norm_swish_gate = FusedRMSNormSwishGate(self.head_v_dim, elementwise_affine, norm_eps) + self.fuse_norm_and_gate = True + else: + self.fuse_norm_and_gate = False + self.g_norm = RMSNorm(self.head_v_dim, elementwise_affine, norm_eps) + self.gate_fn = ACT2FN[gate_fn] + + # TODO: fix this issue + # https://github.com/Dao-AILab/flash-attention/blob/main/flash_attn/ops/triton/rotary.py#L180 + # Ideally, we would want to support arbitrary d_head_qk + assert self.head_qk_dim <= 256, "head_qk_dim must be less than or equal to 256" + self.rotary =
RotaryEmbedding(dim=self.head_qk_dim) + + self.apply(self._initialize_weights) + + def _initialize_weights(self, module: nn.Module): + if getattr(module, "_is_hf_initialized", False): + return + if isinstance(module, nn.Linear): + nn.init.xavier_uniform_(module.weight, gain=2 ** -2.5) + if module.bias is not None: + nn.init.zeros_(module.bias) + module._is_hf_initialized = True + + def forward( + self, + hidden_states: torch.Tensor, + attention_mask: Optional[torch.Tensor] = None, + past_key_values: Optional[Cache] = None, + use_cache: Optional[bool] = False, + output_attentions: Optional[bool] = False, + **kwargs + ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Cache]]: + # launching the triton kernel for just one token will actually be slower + mode = 'fused_recurrent' if hidden_states.shape[1] == 1 else self.mode + + last_state = past_key_values[self.layer_idx] if use_cache else None + if self.use_short_conv: + conv_state = last_state[0] if use_cache else None + if self.share_conv_kernel: + # conv state is updated inplace + hidden_states = self.h_conv1d(hidden_states, attention_mask, conv_state) + q = self.q_proj(hidden_states) + k = self.k_proj(hidden_states) + v = self.v_proj(hidden_states) + else: + conv_state_q = last_state[0] if use_cache else None + conv_state_k = last_state[1] if use_cache else None + conv_state_v = last_state[2] if use_cache else None + q = self.q_proj(hidden_states) + k = self.k_proj(hidden_states) + v = self.v_proj(hidden_states) + q = self.q_conv1d(q, attention_mask, conv_state_q) + k = self.k_conv1d(k, attention_mask, conv_state_k) + v = self.v_conv1d(v, attention_mask, conv_state_v) + else: + q = self.q_proj(hidden_states) + k = self.k_proj(hidden_states) + v = self.v_proj(hidden_states) + + # dealing with left-padding + if attention_mask is not None: + v = v.mul_(attention_mask.unsqueeze(-1)) + q = rearrange(q, '... (h d) -> ... h d', h=self.num_heads) + k = rearrange(k, '... (h d) -> ... 
h d', h=self.num_kv_heads) + if self.feature_map_fn is not None: + q, k = map(self.feature_map_fn, (q, k)) + + seqlen_offset, max_seqlen = 0, None + if past_key_values is not None: + seqlen_offset = past_key_values.get_seq_length(self.layer_idx) + max_seqlen = q.shape[1] + seqlen_offset + if attention_mask is not None: + # subtract the offsets introduced by left-padding tokens + seqlen_offset = seqlen_offset + attention_mask.sum(-1) - attention_mask.shape[-1] + max_seqlen = q.shape[1] + max(seqlen_offset) + q, k = self.rotary(q, k, seqlen_offset, max_seqlen) + q = q.transpose(1, 2) + if self.num_kv_groups > 1: + k = repeat(k, 'b t h d -> b (h g) t d', h=self.num_kv_heads, g=self.num_kv_groups) + v = repeat(v, 'b t (h d) -> b (h g) t d', h=self.num_kv_heads, g=self.num_kv_groups) + else: + k, v = rearrange(k, 'b t h d -> b h t d'), rearrange(v, 'b t (h d) -> b h t d', h=self.num_kv_heads) + + state = last_state[-1] if use_cache else None + if mode == 'chunk': + o, recurrent_state = chunk_retention(q, k, v, initial_state=state, output_final_state=use_cache) + elif mode == 'fused_chunk': + o, recurrent_state = fused_chunk_retention(q, k, v, initial_state=state, output_final_state=use_cache) + elif mode == 'parallel': + o, recurrent_state = parallel_retention(q, k, v, initial_state=state, output_final_state=use_cache) + elif mode == 'fused_recurrent': + o, recurrent_state = fused_recurrent_retention(q, k, v, initial_state=state, output_final_state=use_cache) + else: + raise NotImplementedError(f"Not supported mode `{mode}`.") + + if past_key_values is not None: + if self.use_short_conv: + if self.share_conv_kernel: + last_state = (conv_state, recurrent_state) + else: + last_state = (conv_state_q, conv_state_k, conv_state_v, recurrent_state) + else: + last_state = (recurrent_state,) + past_key_values.update(last_state, self.layer_idx, q.shape[2]) + + o = rearrange(o, 'b h l d -> b l h d') + if self.use_output_gate: + g = self.g_proj(hidden_states) + if self.fuse_norm_and_gate: + g
= rearrange(g, 'b l (h d) -> b l h d', h=self.num_heads) + o = self.g_norm_swish_gate(o, g) + o = rearrange(o, 'b l h d -> b l (h d)') + else: + o = rearrange(self.g_norm(o), 'b l h d -> b l (h d)') + o = o * self.gate_fn(g) + else: + o = rearrange(self.g_norm(o), 'b l h d -> b l (h d)') + o = self.o_proj(o) + + return o, None, past_key_values + + def init_state(self, batch_size: int) -> Tuple[torch.Tensor]: + param = next(self.parameters()) + state = tuple() + if self.use_short_conv: + if self.share_conv_kernel: + state += (param.new_zeros(batch_size, self.hidden_size, self.conv_size),) + else: + state += (param.new_zeros(batch_size, self.key_dim, self.conv_size), + param.new_zeros(batch_size, self.key_dim, self.conv_size), + param.new_zeros(batch_size, self.value_dim, self.conv_size)) + state += (param.new_zeros(batch_size, self.num_heads, self.head_qk_dim, self.head_v_dim),) + return state + + def state_size(self, **kwargs) -> int: + state_size = self.key_dim * self.head_v_dim + for module in self.children(): + if isinstance(module, ShortConvolution): + state_size += module.state_size + return state_size diff --git a/fla/layers/rebased.py b/fla/layers/rebased.py new file mode 100644 index 0000000000000000000000000000000000000000..3dad7b328012d9a4e4b77f3f760b6147c5d18ed6 --- /dev/null +++ b/fla/layers/rebased.py @@ -0,0 +1,137 @@ +# -*- coding: utf-8 -*- + +""" +https://github.com/corl-team/rebased/blob/main/flash_linear_attention/fla/layers/rebased_fast.py +""" + +from __future__ import annotations + +from typing import Optional + +import torch +import torch.nn as nn +from einops import rearrange + +from fla.modules.feature_map import RebasedFeatureMap +from fla.ops.linear_attn import chunk_linear_attn, fused_chunk_linear_attn +from fla.ops.rebased import parallel_rebased + + +class ReBasedLinearAttention(nn.Module): + def __init__( + self, + hidden_size: int, + l_max: int = 2048, + feature_dim: int = 16, + num_key_value_heads: int = 16, + num_heads: int = 16, + 
use_gamma: Optional[bool] = True, + use_beta: Optional[bool] = True, + normalize: Optional[bool] = True, + causal: bool = True, + eps: float = 1e-5, + mode: str = "parallel", + layer_idx: Optional[int] = None, + **kwargs + ) -> ReBasedLinearAttention: + super().__init__() + self.hidden_size = hidden_size + self.l_max = l_max + self.mode = mode + assert self.mode in ["fused_chunk", "parallel", 'chunk'] + + # linear attention + self.feature_dim = feature_dim + self.num_key_value_heads = num_key_value_heads + self.num_heads = num_heads + self.head_dim = self.hidden_size // self.num_key_value_heads + self.use_gamma = use_gamma + self.use_beta = use_beta + self.normalize = normalize + self.causal = causal + + self.feature_map = RebasedFeatureMap(self.feature_dim, use_gamma, use_beta, normalize) + self.q_proj = nn.Linear(self.hidden_size, self.feature_dim * self.num_heads, bias=False) + self.k_proj = nn.Linear(self.hidden_size, self.feature_dim * self.num_heads, bias=False) + self.v_proj = nn.Linear(self.hidden_size, self.num_key_value_heads * self.head_dim, bias=False) + self.o_proj = nn.Linear(self.num_heads * self.head_dim, self.hidden_size, bias=False) + self.dropout = nn.Identity() + self.eps = eps + + self.apply(self._initialize_weights) + + def _initialize_weights(self, module: nn.Module): + if getattr(module, "_is_hf_initialized", False): + return + if isinstance(module, nn.Linear): + nn.init.xavier_uniform_(module.weight, gain=2 ** -2.5) + if module.bias is not None: + nn.init.zeros_(module.bias) + module._is_hf_initialized = True + + def forward(self, hidden_states: torch.Tensor, **kwargs): + mode = self.mode + q, k, v = self.q_proj(hidden_states), self.k_proj(hidden_states), self.v_proj(hidden_states) + q, k, v = map(lambda x: rearrange(x, "b l (h d) -> b h l d", h=self.num_heads), [q, k, v]) + q, k = self.feature_map(q, flatten=(mode != 'parallel')), self.feature_map(k, flatten=(mode != 'parallel')) + if mode == "fused_chunk": + o = fused_chunk_linear_attn(q, 
k, v, normalize=True, scale=1) + elif mode == 'chunk': + o = chunk_linear_attn(q, k, v, normalize=True, scale=1) + elif mode == 'parallel': + assert q.shape[-1] <= 128 + o = parallel_rebased(q, k, v, self.eps, True, True) + o = rearrange(o, "b h l d -> b l (h d)") + o = self.o_proj(o) + o = self.dropout(o) + return o + + # https://github.com/HazyResearch/zoology/blob/main/zoology/mixers/based.py#L119 + def forward_reference(self, hidden_states: torch.Tensor, filters: torch.Tensor = None, *args, **kwargs): + """ + hidden_states (torch.Tensor): tensor of shape (b, l, d) + returns (torch.Tensor): tensor of shape (b, l, d) + """ + # hidden_states = hidden_states.transpose(1, 2) + b, l, _ = hidden_states.size() + q, k, v = self.q_proj(hidden_states), self.k_proj(hidden_states), self.v_proj(hidden_states) + + q = q.view(b, l, self.num_heads, self.feature_dim).transpose(1, 2) + k = k.view(b, l, self.num_key_value_heads, self.feature_dim).transpose(1, 2) + v = v.view(b, l, self.num_key_value_heads, self.head_dim).transpose(1, 2) + + # Linear attention + q, k = self.feature_map(q), self.feature_map(k) + q, k, v = q.unsqueeze(-2), k.unsqueeze(-2), v.unsqueeze(-1) + + # Compute attention + if self.causal: + y = ((q * (k * v).cumsum(2)).sum(-1) / ((q * k.cumsum(2)).sum(-1) + self.eps)) + else: + y = ((q * (k * v).sum(2, True)).sum(-1) / ((q * k.sum(2, True)).sum(-1) + self.eps)) + y = rearrange(y, 'b h l d -> b l (h d)') + y = self.o_proj(y.to(hidden_states.dtype)) + y = self.dropout(y) + return y.to(hidden_states.dtype) + + +if __name__ == '__main__': + batch = 4 + seq_len = 1024 + hidden_size = 1024 + dtype = torch.float32 + x = torch.randn(batch, seq_len, hidden_size).to(dtype).cuda().requires_grad_(True) + dy = torch.randn(batch, seq_len, hidden_size).to(dtype).cuda() + model = ReBasedLinearAttention(hidden_size=hidden_size, mode='parallel').to(dtype).cuda() + + y = model(x) + y.backward(dy, retain_graph=True) + x_grad, x.grad = x.grad, None + print(model.mode) + model.mode = 'fused_chunk'
+ y2 = model(x) + print(model.mode) + y2.backward(dy) + # assert y.allclose(y2, 0, 1e-4), breakpoint() + # assert x_grad.allclose(x.grad, 0, 1e-4), breakpoint() + print("Pass") diff --git a/fla/layers/rwkv6.py b/fla/layers/rwkv6.py new file mode 100644 index 0000000000000000000000000000000000000000..ec44974f67367432baeefe6b6428918fc9563ad2 --- /dev/null +++ b/fla/layers/rwkv6.py @@ -0,0 +1,264 @@ +# -*- coding: utf-8 -*- + +# "Eagle and Finch: RWKV with Matrix-Valued States and Dynamic Recurrence" [https://arxiv.org/abs/2404.05892] + +from __future__ import annotations + +from typing import Optional, Tuple + +import torch +import torch.nn as nn +from einops import rearrange +from transformers.activations import ACT2FN +from transformers.cache_utils import Cache + +from fla.modules import FusedLayerNormSwishGate, LayerNorm +from fla.ops.rwkv6 import chunk_rwkv6, fused_recurrent_rwkv6 + + +class RWKV6Attention(nn.Module): + + def __init__( + self, + mode: str = 'chunk', + hidden_size: int = 1024, + expand_k: float = 0.5, + expand_v: float = 1.0, + num_heads: int = 4, + gate_fn: str = 'swish', + proj_low_rank_dim: int = 32, + gate_low_rank_dim: int = 64, + fuse_norm: bool = True, + elementwise_affine: Optional[bool] = True, + norm_eps: float = 1e-5, + layer_idx: Optional[int] = None, + **kwargs + ) -> RWKV6Attention: + super().__init__() + + self.mode = mode + self.hidden_size = hidden_size + self.expand_k = expand_k + self.expand_v = expand_v + self.num_heads = num_heads + self.proj_low_rank_dim = proj_low_rank_dim + self.gate_low_rank_dim = gate_low_rank_dim + + self.key_dim = int(hidden_size * expand_k) + self.value_dim = int(hidden_size * expand_v) + self.layer_idx = layer_idx + + assert mode in ['chunk', 'fused_recurrent'], f"Not supported mode `{mode}`."
+ assert self.key_dim % num_heads == 0, f"key dim must be divisible by num_heads of {num_heads}" + assert self.value_dim % num_heads == 0, f"value dim must be divisible by num_heads of {num_heads}" + + self.head_qk_dim = self.key_dim // num_heads + self.head_v_dim = self.value_dim // num_heads + + self.time_shift = nn.ZeroPad2d((0, 0, 1, -1)) + self.x_proj = nn.Sequential( + LerpLinear(hidden_size, proj_low_rank_dim * 5), + nn.Tanh(), + nn.Linear(proj_low_rank_dim * 5, hidden_size, bias=True) + ) + self.r_proj = DDLerpLinear(hidden_size, self.key_dim) + self.w_proj = DDLerpLinear(hidden_size, self.key_dim, low_rank_dim=gate_low_rank_dim) + self.k_proj = DDLerpLinear(hidden_size, self.key_dim) + self.v_proj = DDLerpLinear(hidden_size, self.value_dim) + self.g_proj = DDLerpLinear(hidden_size, self.value_dim) + self.bonus = nn.Parameter(torch.zeros(num_heads, self.head_qk_dim)) + + self.o_proj = nn.Linear(self.value_dim, hidden_size, bias=False) + + if gate_fn == 'swish' and fuse_norm: + self.g_norm_swish_gate = FusedLayerNormSwishGate(self.head_v_dim, elementwise_affine, norm_eps) + self.fuse_norm_and_gate = True + else: + self.fuse_norm_and_gate = False + self.g_norm = LayerNorm(self.head_v_dim, elementwise_affine, norm_eps) + self.gate_fn = ACT2FN[gate_fn] + + self.apply(self._initialize_weights) + + def _initialize_weights(self, module: nn.Module): + if getattr(module, "_is_hf_initialized", False): + return + if isinstance(module, nn.Linear): + nn.init.xavier_uniform_(module.weight, gain=2 ** -2.5) + if module.bias is not None: + nn.init.zeros_(module.bias) + if isinstance(module, nn.Parameter): + nn.init.xavier_uniform_(module, gain=2 ** -2.5) + module._is_hf_initialized = True + + def forward( + self, + hidden_states: torch.Tensor, + attention_mask: Optional[torch.Tensor] = None, + past_key_values: Optional[Cache] = None, + use_cache: Optional[bool] = False, + output_attentions: Optional[bool] = False, + **kwargs + ) -> Tuple[torch.Tensor, 
Optional[torch.Tensor], Optional[Cache]]: + batch_size, seq_len, hidden_size = hidden_states.size() + # launching the triton kernel for just one token will actually be slower + mode = 'fused_recurrent' if hidden_states.shape[1] == 1 else self.mode + + delta = self.time_shift(hidden_states) - hidden_states + x = self.x_proj[0](hidden_states, delta).view(batch_size, seq_len, -1, self.proj_low_rank_dim) + r, w, k, v, g = torch.einsum('b l n r, n r d-> b l n d', + self.x_proj[1](x), + self.x_proj[2].weight.view(5, -1, hidden_size)).unbind(-2) + r = self.r_proj(hidden_states, r, delta) + w = self.w_proj(hidden_states, w, delta) + k = self.k_proj(hidden_states, k, delta) + v = self.v_proj(hidden_states, v, delta) + g = self.g_proj(hidden_states, g, delta) + + # dealing with left-padding + if attention_mask is not None: + v = v.mul_(attention_mask.unsqueeze(-1)) + r, w, k, v = map(lambda x: rearrange(x, 'b l (h d) -> b h l d', h=self.num_heads), (r, w, k, v)) + w = -torch.exp(w) + u = self.bonus + + last_state = past_key_values[self.layer_idx] if use_cache else None + state = last_state[-1] if use_cache else None + if mode == 'fused_recurrent': + o, recurrent_state = fused_recurrent_rwkv6(r, k, v, w, u, initial_state=state, output_final_state=use_cache) + elif mode == 'chunk': + o, recurrent_state = chunk_rwkv6(r, k, v, w, u, initial_state=state, output_final_state=use_cache) + else: + raise NotImplementedError(f"Not supported mode `{mode}`.") + + if past_key_values is not None: + past_key_values.update((recurrent_state,), self.layer_idx, r.shape[2]) + + o = rearrange(o, 'b h l d -> b l h d') + if self.fuse_norm_and_gate: + g = rearrange(g, 'b l (h d) -> b l h d', h=self.num_heads) + o = self.g_norm_swish_gate(o, g) + o = rearrange(o, 'b l h d -> b l (h d)') + else: + o = self.g_norm(o) + o = rearrange(o, 'b l h d -> b l (h d)') + o = o * self.gate_fn(g) + o = self.o_proj(o) + + return o, None, past_key_values + + def init_state(self, batch_size: int) -> 
Tuple[torch.Tensor]: + param = next(self.parameters()) + state = (param.new_zeros(batch_size, self.num_heads, self.head_qk_dim, self.head_v_dim),) + return state + + def state_size(self, **kwargs) -> int: + state_size = self.key_dim * self.head_v_dim + return state_size + + +class LoRA(nn.Module): + + def __init__( + self, + input_dim: int, + output_dim: int, + low_rank_dim: int, + bias: Optional[bool] = True + ): + super().__init__() + + self.input_dim = input_dim + self.output_dim = output_dim + self.low_rank_dim = low_rank_dim + self.bias = bias + + self.lora = nn.Sequential( + nn.Linear(input_dim, low_rank_dim, bias=False), + nn.Tanh(), + nn.Linear(low_rank_dim, output_dim, bias=bias) + ) + + def __repr__(self) -> str: + s = f"{self.__class__.__name__}(" + s += f"input_dim={self.input_dim}, low_rank_dim={self.low_rank_dim}, output_dim={self.output_dim}" + if not self.bias: + s += f", bias={self.bias}" + s += ")" + return s + + def forward(self, x: torch.Tensor) -> torch.Tensor: + return self.lora(x) + + +class LerpLinear(nn.Module): + + def __init__( + self, + input_dim: int, + output_dim: int, + low_rank_dim: Optional[int] = None + ): + super().__init__() + + self.input_dim = input_dim + self.output_dim = output_dim + self.low_rank_dim = low_rank_dim + + self.time_shift = nn.ZeroPad2d((0, 0, 1, -1)) + if low_rank_dim is None: + self.linear = nn.Linear(input_dim, output_dim, bias=False) + else: + self.linear = LoRA(input_dim, output_dim, low_rank_dim) + self.mu = nn.Parameter(torch.zeros(input_dim)) + + def __repr__(self) -> str: + s = f"{self.__class__.__name__}({self.input_dim}, {self.output_dim}" + if self.low_rank_dim is not None: + s += f", low_rank_dim={self.low_rank_dim}" + s += ")" + return s + + def forward(self, x: torch.Tensor, delta: Optional[torch.Tensor] = None) -> torch.Tensor: + if delta is None: + shifted = self.time_shift(x) + if len(shifted.shape) == 2: + shifted = shifted.unsqueeze(1) + delta = shifted - x + return self.linear(x + delta * 
self.mu) + + +class DDLerpLinear(nn.Module): + + def __init__( + self, + input_dim: int, + output_dim: int, + low_rank_dim: Optional[int] = None + ): + super().__init__() + + self.input_dim = input_dim + self.output_dim = output_dim + self.low_rank_dim = low_rank_dim + + self.time_shift = nn.ZeroPad2d((0, 0, 1, -1)) + if low_rank_dim is None: + self.linear = nn.Linear(input_dim, output_dim, bias=False) + else: + self.linear = LoRA(input_dim, output_dim, low_rank_dim) + + def __repr__(self) -> str: + s = f"{self.__class__.__name__}({self.input_dim}, {self.output_dim}" + if self.low_rank_dim is not None: + s += f", low_rank_dim={self.low_rank_dim}" + s += ")" + return s + + def forward(self, x: torch.Tensor, mu: torch.Tensor, delta: Optional[torch.Tensor] = None) -> torch.Tensor: + if delta is None: + shifted = self.time_shift(x) + if len(shifted.shape) == 2: + shifted = shifted.unsqueeze(1) + delta = shifted - x + return self.linear(x + delta * mu) diff --git a/fla/layers/simple_gla.py b/fla/layers/simple_gla.py new file mode 100644 index 0000000000000000000000000000000000000000..43a637a0bd6a1437e74f27e69a09099d9c06741b --- /dev/null +++ b/fla/layers/simple_gla.py @@ -0,0 +1,143 @@ +# -*- coding: utf-8 -*- + +from __future__ import annotations + +from typing import Optional + +import torch +import torch.nn as nn +import torch.nn.functional as F +from einops import rearrange +from transformers.activations import ACT2FN + +from fla.modules import FusedRMSNormSwishGate, RMSNorm +from fla.ops.simple_gla import chunk_simple_gla + + +class SimpleGatedLinearAttention(nn.Module): + r""" + The layer implementation for [Gated Linear Attention Transformers with Hardware-Efficient Training](https://arxiv.org/abs/2312.06635). # noqa + This layer calls the simplified GLA kernel in which the gating is head-wise instead of elementwise. + + Args: + mode (str, Optional): + Which GLA kernel to use. + Currently available: `chunk`. + Default: `chunk`.
+ hidden_size (int, Optional): + The hidden size of the input. Default: 1024. + expand_k (float, Optional): + The expansion ratio for the key dim. Default: 1.0. + expand_v (float, Optional): + The expansion ratio for the value dim. Default: 2.0. + num_heads (int, Optional): + The number of heads. Default: 4. + gate_fn (str, Optional): + The activation function for the output gate. Default: `swish`. + elementwise_affine (bool, Optional): + If `True`, applies elementwise affine to LayerNorm with learnable parameters. Default: `True`. + norm_eps (float, Optional): + The epsilon value for the layernorm/rmsnorm layer. Default: 1e-5. + gate_logit_normalizer (int, Optional): + The normalizer for the gate logits, applied after `logsigmoid`. Default: 16. + fuse_norm (bool, Optional): + Whether to fuse the norm and the output gate for better memory footprint. Default: `True`. + layer_idx (int, Optional): + The index of the layer. Default: None. + """ + + def __init__( + self, + mode: str = 'chunk', + hidden_size: int = 1024, + expand_k: float = 1.0, + expand_v: float = 2.0, + num_heads: int = 4, + gate_fn: str = 'swish', + elementwise_affine: Optional[bool] = True, + norm_eps: float = 1e-5, + gate_logit_normalizer: int = 16, + fuse_norm: bool = True, + **kwargs + ) -> SimpleGatedLinearAttention: + super().__init__() + self.hidden_size = hidden_size + + self.mode = mode + self.key_dim = int(hidden_size * expand_k) + self.value_dim = int(hidden_size * expand_v) + assert mode in ['chunk'], f"Not supported mode `{mode}`."
+ assert self.key_dim % num_heads == 0, f"key dim must be divisible by num_heads of {num_heads}" + assert self.value_dim % num_heads == 0, f"value dim must be divisible by num_heads of {num_heads}" + self.num_heads = num_heads + self.head_qk_dim = self.key_dim // num_heads + self.head_v_dim = self.value_dim // num_heads + self.gate_fn = ACT2FN[gate_fn] + + self.q_proj = nn.Linear(hidden_size, self.key_dim, bias=False) + self.k_proj = nn.Linear(hidden_size, self.key_dim, bias=False) + self.v_proj = nn.Linear(hidden_size, self.value_dim, bias=False) + self.g_proj = nn.Linear(hidden_size, self.value_dim, bias=False) + + self.gk_proj = nn.Linear(hidden_size, self.num_heads) + self.o_proj = nn.Linear(self.value_dim, hidden_size, bias=False) + + if gate_fn == 'swish' and fuse_norm: + self.g_norm_swish_gate = FusedRMSNormSwishGate(self.head_v_dim, elementwise_affine, norm_eps) + self.fuse_norm_and_gate = True + else: + self.fuse_norm_and_gate = False + self.g_norm = RMSNorm(self.head_v_dim, elementwise_affine, norm_eps) + + self.gate_logit_normalizer = gate_logit_normalizer + + self.apply(self._initialize_weights) + + def _initialize_weights(self, module: nn.Module): + if getattr(module, "_is_hf_initialized", False): + return + if isinstance(module, nn.Linear): + nn.init.xavier_uniform_(module.weight, gain=2 ** -2.5) + if module.bias is not None: + nn.init.zeros_(module.bias) + module._is_hf_initialized = True + + def forward(self, x): + mode = self.mode + q = rearrange(self.q_proj(x), 'b n (h d) -> b h n d', h=self.num_heads) + k = rearrange(self.k_proj(x), 'b n (h d) -> b h n d', h=self.num_heads) + v = rearrange(self.v_proj(x), 'b n (h d) -> b h n d', h=self.num_heads) + gk = rearrange(self.gk_proj(x), 'b n h -> b h n') + gk = (F.logsigmoid(gk) / self.gate_logit_normalizer) + + if mode == 'chunk': + o = chunk_simple_gla(q, k, v, gk) + else: + raise NotImplementedError(f"Not supported mode `{mode}`.") + + o = rearrange(o, 'b h l d -> b l h d') + g = self.g_proj(x) + + 
if self.fuse_norm_and_gate: + g = rearrange(g, 'b l (h d) -> b l h d', h=self.num_heads) + o = self.g_norm_swish_gate(o, g) + o = rearrange(o, 'b l h d -> b l (h d)') + else: + o = self.g_norm(o) + o = rearrange(o, 'b l h d -> b l (h d)') + o = o * self.gate_fn(g) + o = self.o_proj(o) + return o + + +if __name__ == '__main__': + batch = 4 + seq_len = 1024 + + hidden_size = 2048 + x = torch.randn(batch, seq_len, hidden_size).to(torch.bfloat16).cuda().requires_grad_(True) + model = SimpleGatedLinearAttention(hidden_size=hidden_size, mode='chunk').to(torch.bfloat16).cuda() + y = model(x) + print(y.shape) + y.sum().backward() + print(x.grad.shape) diff --git a/fla/models/__init__.py b/fla/models/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..ffc48e10d0b8c7fa48f561f12f3c03475830262c --- /dev/null +++ b/fla/models/__init__.py @@ -0,0 +1,29 @@ +# -*- coding: utf-8 -*- + +from fla.models.abc import ABCConfig, ABCForCausalLM, ABCModel +from fla.models.delta_net import (DeltaNetConfig, DeltaNetForCausalLM, + DeltaNetModel) +from fla.models.gla import GLAConfig, GLAForCausalLM, GLAModel +from fla.models.hgrn import HGRNConfig, HGRNForCausalLM, HGRNModel +from fla.models.hgrn2 import HGRN2Config, HGRN2ForCausalLM, HGRN2Model +from fla.models.linear_attn import (LinearAttentionConfig, + LinearAttentionForCausalLM, + LinearAttentionModel) +from fla.models.mamba import MambaConfig, MambaForCausalLM, MambaModel +from fla.models.retnet import RetNetConfig, RetNetForCausalLM, RetNetModel +from fla.models.rwkv6 import RWKV6Config, RWKV6ForCausalLM, RWKV6Model +from fla.models.transformer import (TransformerConfig, TransformerForCausalLM, + TransformerModel) + +__all__ = [ + 'ABCConfig', 'ABCForCausalLM', 'ABCModel', + 'DeltaNetConfig', 'DeltaNetForCausalLM', 'DeltaNetModel', + 'GLAConfig', 'GLAForCausalLM', 'GLAModel', + 'HGRNConfig', 'HGRNForCausalLM', 'HGRNModel', + 'HGRN2Config', 'HGRN2ForCausalLM', 'HGRN2Model', + 'LinearAttentionConfig', 
'LinearAttentionForCausalLM', 'LinearAttentionModel', + 'MambaConfig', 'MambaForCausalLM', 'MambaModel', + 'RetNetConfig', 'RetNetForCausalLM', 'RetNetModel', + 'RWKV6Config', 'RWKV6ForCausalLM', 'RWKV6Model', + 'TransformerConfig', 'TransformerForCausalLM', 'TransformerModel' +] diff --git a/fla/models/abc/__init__.py b/fla/models/abc/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..f7021f22ff0f9781432bd3969473520851f4b553 --- /dev/null +++ b/fla/models/abc/__init__.py @@ -0,0 +1,13 @@ +# -*- coding: utf-8 -*- + +from transformers import AutoConfig, AutoModel, AutoModelForCausalLM + +from fla.models.abc.configuration_abc import ABCConfig +from fla.models.abc.modeling_abc import ABCForCausalLM, ABCModel + +AutoConfig.register(ABCConfig.model_type, ABCConfig) +AutoModel.register(ABCConfig, ABCModel) +AutoModelForCausalLM.register(ABCConfig, ABCForCausalLM) + + +__all__ = ['ABCConfig', 'ABCForCausalLM', 'ABCModel'] diff --git a/fla/models/abc/configuration_abc.py b/fla/models/abc/configuration_abc.py new file mode 100644 index 0000000000000000000000000000000000000000..3c185579c8df92e67785848c5b8a84d6d0deadb0 --- /dev/null +++ b/fla/models/abc/configuration_abc.py @@ -0,0 +1,74 @@ +# -*- coding: utf-8 -*- + +from typing import Optional + +from transformers.configuration_utils import PretrainedConfig + + +class ABCConfig(PretrainedConfig): + + model_type = 'abc' + keys_to_ignore_at_inference = ['past_key_values'] + + def __init__( + self, + vocab_size: int = 32000, + hidden_size: int = 2048, + gate_low_rank_dim: int = 16, + clamp_min: float = -32, + clamp_max: float = 32, + hidden_ratio: Optional[int] = 4, + intermediate_size: Optional[int] = None, + num_hidden_layers: int = 24, + num_heads: int = 4, + num_slots: Optional[int] = 64, + use_short_conv: bool = True, + conv_size: int = 4, + share_conv_kernel: bool = True, + expand_k: float = 0.5, + expand_v: float = 1, + hidden_act: str = "swish", + max_position_embeddings: int = 2048, + elementwise_affine: Optional[bool] = True, + norm_eps: float = 1e-6, + use_cache: bool = True, + pad_token_id: int = None, + bos_token_id: int = 1, + eos_token_id: int = 2, + initializer_range: float = 0.02, + tie_word_embeddings: bool = False, + fuse_norm: bool = True, + fuse_cross_entropy: bool = True, + **kwargs + ): + self.vocab_size = vocab_size + self.max_position_embeddings = max_position_embeddings + self.hidden_size = hidden_size + self.gate_low_rank_dim = gate_low_rank_dim + self.clamp_min = clamp_min + self.clamp_max = clamp_max + self.hidden_ratio = hidden_ratio + self.intermediate_size = intermediate_size + self.num_hidden_layers = num_hidden_layers + self.num_heads = num_heads + self.num_slots = num_slots + self.use_short_conv = use_short_conv + self.conv_size = conv_size + self.share_conv_kernel = share_conv_kernel + self.expand_k = expand_k + self.expand_v = expand_v + self.hidden_act = hidden_act + self.elementwise_affine = elementwise_affine + self.norm_eps = norm_eps + self.use_cache = use_cache + self.initializer_range = initializer_range + self.fuse_cross_entropy = fuse_cross_entropy + self.fuse_norm = fuse_norm + + super().__init__( + pad_token_id=pad_token_id, + bos_token_id=bos_token_id, + eos_token_id=eos_token_id, + tie_word_embeddings=tie_word_embeddings, + **kwargs, + ) diff --git a/fla/models/abc/modeling_abc.py b/fla/models/abc/modeling_abc.py new file mode 100644 index 0000000000000000000000000000000000000000..431df20568c95d210507427fdf19dc30c24a5364 --- /dev/null +++ b/fla/models/abc/modeling_abc.py @@ -0,0 +1,394 @@ +# -*- coding: utf-8 -*- + +from __future__ import annotations + +import math +import warnings +from typing import List, Optional, Tuple, Union + +import torch +import torch.nn as nn +import torch.utils.checkpoint +from transformers.activations import ACT2FN +from transformers.modeling_outputs import (BaseModelOutputWithPast, + CausalLMOutputWithPast) +from transformers.modeling_utils import PreTrainedModel +from 
transformers.utils import logging + +from fla.layers.abc import ABCAttention +from fla.models.abc.configuration_abc import ABCConfig +from fla.models.utils import RecurrentCache +from fla.modules import FusedCrossEntropyLoss, RMSNorm +from fla.modules.activations import swiglu_linear + +logger = logging.get_logger(__name__) + + +class ABCMLP(nn.Module): + + def __init__( + self, + hidden_size: int, + hidden_ratio: Optional[int] = None, + intermediate_size: Optional[int] = None, + hidden_act: str = 'swish' + ) -> ABCMLP: + super().__init__() + + self.hidden_size = hidden_size + # the final number of params is `hidden_ratio * hidden_size^2` + # `intermediate_size` is chosen to be a multiple of 256 closest to `2/3 * hidden_size * hidden_ratio` + if hidden_ratio is None: + hidden_ratio = 4 + if intermediate_size is None: + intermediate_size = int(hidden_size * hidden_ratio * 2 / 3) + intermediate_size = 256 * ((intermediate_size + 256 - 1) // 256) + self.hidden_ratio = hidden_ratio + self.intermediate_size = intermediate_size + + self.gate_proj = nn.Linear(self.hidden_size, self.intermediate_size * 2, bias=False) + self.down_proj = nn.Linear(self.intermediate_size, self.hidden_size, bias=False) + self.act_fn = ACT2FN[hidden_act] + + def forward(self, x): + y = self.gate_proj(x) + gate, y = y.chunk(2, -1) + return swiglu_linear(gate, y, self.down_proj.weight, self.down_proj.bias) + + +class ABCBlock(nn.Module): + def __init__(self, config: ABCConfig, layer_idx: int): + super().__init__() + self.hidden_size = config.hidden_size + + self.attn_norm = RMSNorm(hidden_size=config.hidden_size, eps=config.norm_eps) + self.attn = ABCAttention( + hidden_size=config.hidden_size, + expand_k=config.expand_k, + expand_v=config.expand_v, + num_heads=config.num_heads, + num_slots=config.num_slots, + use_short_conv=config.use_short_conv, + conv_size=config.conv_size, + share_conv_kernel=config.share_conv_kernel, + gate_fn=config.hidden_act, + 
elementwise_affine=config.elementwise_affine, + norm_eps=config.norm_eps, + clamp_min=config.clamp_min, + clamp_max=config.clamp_max, + fuse_norm=config.fuse_norm, + layer_idx=layer_idx + ) + self.mlp_norm = RMSNorm(hidden_size=config.hidden_size, eps=config.norm_eps) + self.mlp = ABCMLP( + hidden_size=config.hidden_size, + hidden_ratio=config.hidden_ratio, + intermediate_size=config.intermediate_size, + hidden_act=config.hidden_act + ) + + def forward( + self, + hidden_states: torch.Tensor, + attention_mask: Optional[torch.Tensor] = None, + past_key_values: Optional[Tuple[List[torch.Tensor]]] = None, + use_cache: Optional[bool] = False, + output_attentions: Optional[bool] = False, + **kwargs, + ) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]: + + residual = hidden_states + + hidden_states = self.attn_norm(hidden_states) + hidden_states, attentions, past_key_values = self.attn( + hidden_states=hidden_states, + attention_mask=attention_mask, + past_key_values=past_key_values, + use_cache=use_cache, + output_attentions=output_attentions + ) + hidden_states, residual = self.mlp_norm(hidden_states, residual, True) + hidden_states = self.mlp(hidden_states) + hidden_states = residual + hidden_states + + outputs = (hidden_states, attentions, past_key_values) + + return outputs + + +class ABCPreTrainedModel(PreTrainedModel): + + config_class = ABCConfig + supports_gradient_checkpointing = True + _no_split_modules = ['ABCBlock'] + + def __init__(self, *inputs, **kwargs): + super().__init__(*inputs, **kwargs) + + def _init_weights( + self, + module: nn.Module, + rescale_prenorm_residual: bool = True, + num_residuals_per_layer: int = 2, + ): + if isinstance(module, (nn.Linear, nn.Conv1d)): + # Slightly different from the TF version which uses truncated_normal for initialization + # cf https://github.com/pytorch/pytorch/pull/5617 + nn.init.normal_(module.weight, mean=0.0, std=self.config.initializer_range) + if module.bias is not None: + 
nn.init.zeros_(module.bias) + elif isinstance(module, nn.Embedding): + nn.init.normal_(module.weight, mean=0.0, std=self.config.initializer_range) + if module.padding_idx is not None: + module.weight.data[module.padding_idx].zero_() + + if rescale_prenorm_residual: + # Reinitialize selected weights subject to the OpenAI GPT-2 Paper Scheme: + # > A modified initialization which accounts for the accumulation on the residual path with model depth. Scale + # > the weights of residual layers at initialization by a factor of 1/√N where N is the # of residual layers. + # > -- GPT-2 :: https://openai.com/blog/better-language-models/ + # + # Reference (Megatron-LM): https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/gpt_model.py + for name, p in module.named_parameters(): + if name in ["o_proj.weight", "down_proj.weight"]: + # Special Scaled Initialization --> There are 2 Layer Norms per Transformer Block + # Following Pytorch init, except scale by 1/sqrt(2 * n_layer) + # We need to reinit p since this code could be called multiple times + # Having just p *= scale would repeatedly scale it down + with torch.no_grad(): + p /= math.sqrt(num_residuals_per_layer * self.config.num_hidden_layers) + + +class ABCModel(ABCPreTrainedModel): + + def __init__(self, config: ABCConfig): + super().__init__(config) + self.padding_idx = config.pad_token_id + self.vocab_size = config.vocab_size + + self.embeddings = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx) + self.layers = nn.ModuleList([ABCBlock(config, layer_idx) for layer_idx in range(config.num_hidden_layers)]) + self.norm = RMSNorm(config.hidden_size, eps=config.norm_eps) + + self.gradient_checkpointing = False + + self.post_init() + + def get_input_embeddings(self): + return self.embeddings + + def set_input_embeddings(self, value): + self.embeddings = value + + def forward( + self, + input_ids: Optional[torch.LongTensor] = None, + attention_mask: Optional[torch.Tensor] = None, # noqa + 
inputs_embeds: Optional[torch.FloatTensor] = None, + past_key_values: Optional[Tuple[List[torch.Tensor]]] = None, + use_cache: Optional[bool] = None, + output_attentions: Optional[bool] = None, + output_hidden_states: Optional[bool] = None, + return_dict: Optional[bool] = None + ) -> Union[Tuple, BaseModelOutputWithPast]: + if output_attentions: + warnings.warn("`ABCModel` does not support `output_attentions` for now, setting it to `False`.") + output_attentions = False + output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions + output_hidden_states = output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states + use_cache = use_cache if use_cache is not None else (self.config.use_cache if not self.training else False) + return_dict = return_dict if return_dict is not None else self.config.use_return_dict + + # retrieve input_ids and inputs_embeds + if input_ids is not None and inputs_embeds is not None: + raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") + elif input_ids is not None: + batch_size = input_ids.shape[0] + elif inputs_embeds is not None: + batch_size = inputs_embeds.shape[0] + else: + raise ValueError("You have to specify either input_ids or inputs_embeds") + + if inputs_embeds is None: + inputs_embeds = self.embeddings(input_ids) + hidden_states = inputs_embeds + + if use_cache: + if past_key_values is None: + past_key_values = [layer.attn.init_state(batch_size) for layer in self.layers] + if not isinstance(past_key_values, RecurrentCache): + past_key_values = RecurrentCache.from_legacy_cache(past_key_values) + + if self.gradient_checkpointing and self.training: + if use_cache: + logger.warning_once( + "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..." 
+ ) + use_cache = False + + all_hidden_states = () if output_hidden_states else None + all_attns = () if output_attentions else None + for layer in self.layers: + if output_hidden_states: + all_hidden_states += (hidden_states,) + + if self.gradient_checkpointing and self.training: + hidden_states, attentions, past_key_values = self._gradient_checkpointing_func( + layer.__call__, + hidden_states, + attention_mask, + past_key_values, + use_cache, + output_attentions + ) + else: + hidden_states, attentions, past_key_values = layer( + hidden_states, + attention_mask, + past_key_values=past_key_values, + use_cache=use_cache, + output_attentions=output_attentions + ) + + if output_attentions: + all_attns += (attentions,) + + hidden_states = self.norm(hidden_states) + + # add hidden states from the last decoder layer + if output_hidden_states: + all_hidden_states += (hidden_states,) + + next_cache = None + if use_cache: + next_cache = past_key_values.to_legacy_cache() + if not return_dict: + return tuple(x for x in [hidden_states, next_cache, all_hidden_states, all_attns] if x is not None) + return BaseModelOutputWithPast( + last_hidden_state=hidden_states, + past_key_values=next_cache, + hidden_states=all_hidden_states, + attentions=all_attns + ) + + +class ABCForCausalLM(ABCPreTrainedModel): + _tied_weights_keys = ["lm_head.weight"] + + def __init__(self, config): + super().__init__(config) + self.model = ABCModel(config) + self.vocab_size = config.vocab_size + self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False) + + # Initialize weights and apply final processing + self.post_init() + + def get_input_embeddings(self): + return self.model.embeddings + + def set_input_embeddings(self, value): + self.model.embeddings = value + + def get_output_embeddings(self): + return self.lm_head + + def set_output_embeddings(self, new_embeddings): + self.lm_head = new_embeddings + + def set_decoder(self, decoder): + self.model = decoder + + def get_decoder(self): 
+ return self.model + + def generate(self, *args, **kwargs): + try: + return super().generate(*args, **kwargs) + except AttributeError as exception: + if 'past_key_values' in str(exception): + raise AttributeError( + f"You tried to call `generate` with a decoding strategy that manipulates `past_key_values`, " + f"which is not supported for {self.__class__.__name__}. " + f"Try another generation strategy instead. " + f"For the available generation strategies, check this doc: " + f"https://huggingface.co/docs/transformers/en/generation_strategies#decoding-strategies" + ) + else: + raise exception + + def prepare_inputs_for_generation( + self, + input_ids: torch.LongTensor = None, + past_key_values: Optional[Tuple[List[torch.Tensor]]] = None, + inputs_embeds: Optional[torch.FloatTensor] = None, + **kwargs + ): + # only last token for `input_ids` if the `past_key_values` is passed along. + if past_key_values is not None: + if not isinstance(past_key_values, RecurrentCache): + past_key_values = RecurrentCache.from_legacy_cache(past_key_values, input_ids.shape[1] - 1) + input_ids = input_ids[:, -1:] + + # if `inputs_embeds` are passed, we only want to use them in the 1st generation step + if inputs_embeds is not None and past_key_values is None: + model_inputs = {'inputs_embeds': inputs_embeds} + else: + model_inputs = {'input_ids': input_ids} + model_inputs['past_key_values'] = past_key_values + return model_inputs + + def forward( + self, + input_ids: torch.LongTensor = None, + attention_mask: Optional[torch.Tensor] = None, + inputs_embeds: Optional[torch.Tensor] = None, + past_key_values: Optional[Tuple[List[torch.Tensor]]] = None, + labels: Optional[torch.LongTensor] = None, + use_cache: Optional[bool] = None, + output_attentions: Optional[bool] = None, + output_hidden_states: Optional[bool] = None, + return_dict: Optional[bool] = None, + ) -> Union[Tuple, CausalLMOutputWithPast]: + output_attentions = output_attentions if output_attentions is not None else 
self.config.output_attentions + output_hidden_states = ( + output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states + ) + return_dict = return_dict if return_dict is not None else self.config.use_return_dict + + outputs = self.model( + input_ids=input_ids, + attention_mask=attention_mask, + inputs_embeds=inputs_embeds, + past_key_values=past_key_values, + use_cache=use_cache, + output_attentions=output_attentions, + output_hidden_states=output_hidden_states, + return_dict=return_dict + ) + + hidden_states = outputs[0] + logits = self.lm_head(hidden_states) + + loss = None + if labels is not None: + if self.config.fuse_cross_entropy: + loss_fct = FusedCrossEntropyLoss(inplace_backward=True) + else: + loss_fct = nn.CrossEntropyLoss() + # Enable model parallelism + labels = labels.to(logits.device) + labels = torch.cat((labels[..., 1:], torch.full_like(labels[:, :1], loss_fct.ignore_index)), 1) + loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1)) + + if not return_dict: + output = (logits,) + outputs[1:] + return (loss,) + output if loss is not None else output + + return CausalLMOutputWithPast( + loss=loss, + logits=logits, + past_key_values=outputs.past_key_values, + hidden_states=outputs.hidden_states, + attentions=outputs.attentions, + ) diff --git a/fla/models/delta_net/__init__.py b/fla/models/delta_net/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..6df38418d2502fb3b56b7ccbeea81f5be190aeb9 --- /dev/null +++ b/fla/models/delta_net/__init__.py @@ -0,0 +1,14 @@ +# -*- coding: utf-8 -*- + +from transformers import AutoConfig, AutoModel, AutoModelForCausalLM + +from fla.models.delta_net.configuration_delta_net import \ + DeltaNetConfig +from fla.models.delta_net.modeling_delta_net import ( + DeltaNetForCausalLM, DeltaNetModel) + +AutoConfig.register(DeltaNetConfig.model_type, DeltaNetConfig) +AutoModel.register(DeltaNetConfig, DeltaNetModel) 
+AutoModelForCausalLM.register(DeltaNetConfig, DeltaNetForCausalLM) + +__all__ = ['DeltaNetConfig', 'DeltaNetForCausalLM', 'DeltaNetModel'] diff --git a/fla/models/delta_net/configuration_delta_net.py b/fla/models/delta_net/configuration_delta_net.py new file mode 100644 index 0000000000000000000000000000000000000000..c8eaaaec6cd69f5de27cdb27498461af2d254fd7 --- /dev/null +++ b/fla/models/delta_net/configuration_delta_net.py @@ -0,0 +1,77 @@ +# -*- coding: utf-8 -*- + +from typing import Optional + +from transformers.configuration_utils import PretrainedConfig + + +class DeltaNetConfig(PretrainedConfig): + + model_type = 'delta_net' + keys_to_ignore_at_inference = ['past_key_values'] + + def __init__( + self, + vocab_size: int = 32000, + hidden_size: int = 2048, + expand_k: int = 1, + expand_v: int = 1, + use_gate: bool = False, + use_short_conv: bool = True, + conv_size: int = 4, + share_conv_kernel: bool = False, + use_rope: bool = False, + use_beta: bool = True, + use_output_norm: bool = True, + hidden_ratio: Optional[int] = 4, + intermediate_size: Optional[int] = None, + num_hidden_layers: int = 24, + num_heads: int = 4, + attn_mode: str = "chunk", + qk_norm: str = 'l2', + qk_activation: str = 'silu', + chunk_size: int = 64, + hidden_act: str = "swish", + max_position_embeddings: int = 2048, + rms_norm_eps: float = 1e-6, + use_cache: bool = True, + pad_token_id: int = None, + bos_token_id: int = 1, + eos_token_id: int = 2, + tie_word_embeddings: bool = False, + initializer_range: float = 0.02, + fuse_cross_entropy: bool = True, + **kwargs + ): + self.vocab_size = vocab_size + self.max_position_embeddings = max_position_embeddings + self.hidden_size = hidden_size + self.expand_k = expand_k + self.expand_v = expand_v + self.hidden_ratio = hidden_ratio + self.intermediate_size = intermediate_size + self.num_hidden_layers = num_hidden_layers + self.num_heads = num_heads + self.attn_mode = attn_mode + self.hidden_act = hidden_act + self.rms_norm_eps = rms_norm_eps + 
self.use_cache = use_cache + self.initializer_range = initializer_range + self.fuse_cross_entropy = fuse_cross_entropy + self.use_gate = use_gate + self.use_short_conv = use_short_conv + self.conv_size = conv_size + self.share_conv_kernel = share_conv_kernel + self.use_rope = use_rope + self.use_beta = use_beta + self.use_output_norm = use_output_norm + self.qk_norm = qk_norm + self.qk_activation = qk_activation + + super().__init__( + pad_token_id=pad_token_id, + bos_token_id=bos_token_id, + eos_token_id=eos_token_id, + tie_word_embeddings=tie_word_embeddings, + **kwargs, + ) diff --git a/fla/models/delta_net/modeling_delta_net.py b/fla/models/delta_net/modeling_delta_net.py new file mode 100644 index 0000000000000000000000000000000000000000..cec4bdc6c40dea401e4e541c9d2776d33f65f085 --- /dev/null +++ b/fla/models/delta_net/modeling_delta_net.py @@ -0,0 +1,405 @@ +# -*- coding: utf-8 -*- + +from __future__ import annotations + +import math +import warnings +from typing import List, Optional, Tuple, Union + +import torch +import torch.nn as nn +import torch.utils.checkpoint +from transformers.activations import ACT2FN +from transformers.modeling_outputs import (BaseModelOutputWithPast, + CausalLMOutputWithPast) +from transformers.modeling_utils import PreTrainedModel +from transformers.utils import logging + +from fla.layers.delta_net import DeltaNet +from fla.models.delta_net.configuration_delta_net import DeltaNetConfig +from fla.models.utils import RecurrentCache +from fla.modules import FusedCrossEntropyLoss, RMSNorm +from fla.modules.activations import swiglu_linear + +logger = logging.get_logger(__name__) + + +class DeltaNetMLP(nn.Module): + + def __init__( + self, + hidden_size: int, + hidden_ratio: Optional[int] = None, + intermediate_size: Optional[int] = None, + hidden_act: str = 'swish' + ) -> DeltaNetMLP: + super().__init__() + + self.hidden_size = hidden_size + # the final number of params is `hidden_ratio * hidden_size^2` + # `intermediate_size` is 
chosen to be a multiple of 256 closest to `2/3 * hidden_size * hidden_ratio` + if hidden_ratio is None: + hidden_ratio = 4 + if intermediate_size is None: + intermediate_size = int(hidden_size * hidden_ratio * 2 / 3) + intermediate_size = 256 * ((intermediate_size + 256 - 1) // 256) + self.hidden_ratio = hidden_ratio + self.intermediate_size = intermediate_size + + self.gate_proj = nn.Linear(self.hidden_size, self.intermediate_size * 2, bias=False) + self.down_proj = nn.Linear(self.intermediate_size, self.hidden_size, bias=False) + self.act_fn = ACT2FN[hidden_act] + + def forward(self, x): + y = self.gate_proj(x) + gate, y = y.chunk(2, -1) + return swiglu_linear(gate, y, self.down_proj.weight, self.down_proj.bias) + + +class DeltaNetBlock(nn.Module): + def __init__(self, config: DeltaNetConfig, layer_idx: int): + super().__init__() + self.hidden_size = config.hidden_size + + self.attn_norm = RMSNorm(hidden_size=config.hidden_size, eps=config.rms_norm_eps) + self.attn = DeltaNet( + mode=config.attn_mode, + hidden_size=config.hidden_size, + expand_k=config.expand_k, + expand_v=config.expand_v, + num_heads=config.num_heads, + use_gate=config.use_gate, + use_rope=config.use_rope, + use_beta=config.use_beta, + use_short_conv=config.use_short_conv, + use_output_norm=config.use_output_norm, + conv_size=config.conv_size, + share_conv_kernel=config.share_conv_kernel, + layer_idx=layer_idx, + qk_norm=config.qk_norm, + qk_activation=config.qk_activation + ) + self.mlp_norm = RMSNorm(hidden_size=config.hidden_size, eps=config.rms_norm_eps) + self.mlp = DeltaNetMLP( + hidden_size=config.hidden_size, + hidden_ratio=config.hidden_ratio, + intermediate_size=config.intermediate_size, + hidden_act=config.hidden_act + ) + + def forward( + self, + hidden_states: torch.Tensor, + attention_mask: Optional[torch.Tensor] = None, + past_key_values: Optional[Tuple[List[torch.Tensor]]] = None, + use_cache: Optional[bool] = False, + output_attentions: Optional[bool] = False, + **kwargs, + ) -> 
Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]: + + residual = hidden_states + + hidden_states = self.attn_norm(hidden_states) + hidden_states, attentions, past_key_values = self.attn( + hidden_states=hidden_states, + attention_mask=attention_mask, + past_key_values=past_key_values, + use_cache=use_cache, + output_attentions=output_attentions + ) + hidden_states, residual = self.mlp_norm(hidden_states, residual, True) + hidden_states = self.mlp(hidden_states) + hidden_states = residual + hidden_states + + outputs = (hidden_states, attentions, past_key_values) + + return outputs + + +class DeltaNetPreTrainedModel(PreTrainedModel): + + config_class = DeltaNetConfig + supports_gradient_checkpointing = True + _no_split_modules = ['DeltaNetBlock'] + + def __init__(self, *inputs, **kwargs): + super().__init__(*inputs, **kwargs) + + def _init_weights( + self, + module: nn.Module, + rescale_prenorm_residual: bool = True, + num_residuals_per_layer: int = 2, + ): + if isinstance(module, (nn.Linear, nn.Conv1d)): + # Slightly different from the TF version which uses truncated_normal for initialization + # cf https://github.com/pytorch/pytorch/pull/5617 + nn.init.normal_(module.weight, mean=0.0, std=self.config.initializer_range) + if module.bias is not None: + nn.init.zeros_(module.bias) + elif isinstance(module, nn.Embedding): + nn.init.normal_(module.weight, mean=0.0, std=self.config.initializer_range) + if module.padding_idx is not None: + module.weight.data[module.padding_idx].zero_() + + if rescale_prenorm_residual: + # Reinitialize selected weights subject to the OpenAI GPT-2 Paper Scheme: + # > A modified initialization which accounts for the accumulation on the residual path with model depth. Scale + # > the weights of residual layers at initialization by a factor of 1/√N where N is the # of residual layers. 
+ # > -- GPT-2 :: https://openai.com/blog/better-language-models/ + # + # Reference (Megatron-LM): https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/gpt_model.py + for name, p in module.named_parameters(): + if name in ["o_proj.weight", "down_proj.weight"]: + # Special Scaled Initialization --> There are 2 Layer Norms per Transformer Block + # Following Pytorch init, except scale by 1/sqrt(2 * n_layer) + # We need to reinit p since this code could be called multiple times + # Having just p *= scale would repeatedly scale it down + with torch.no_grad(): + p /= math.sqrt(num_residuals_per_layer * self.config.num_hidden_layers) + + +class DeltaNetModel(DeltaNetPreTrainedModel): + + def __init__(self, config: DeltaNetConfig): + super().__init__(config) + self.padding_idx = config.pad_token_id + self.vocab_size = config.vocab_size + + self.embeddings = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx) + self.layers = nn.ModuleList([DeltaNetBlock(config, layer_idx) for layer_idx in range(config.num_hidden_layers)]) + self.norm = RMSNorm(config.hidden_size, eps=config.rms_norm_eps) + + self.gradient_checkpointing = False + + self.post_init() + + def get_input_embeddings(self): + return self.embeddings + + def set_input_embeddings(self, value): + self.embeddings = value + + def forward( + self, + input_ids: Optional[torch.LongTensor] = None, + attention_mask: Optional[torch.Tensor] = None, # noqa + inputs_embeds: Optional[torch.FloatTensor] = None, + past_key_values: Optional[Tuple[List[torch.Tensor]]] = None, + use_cache: Optional[bool] = None, + output_attentions: Optional[bool] = None, + output_hidden_states: Optional[bool] = None, + return_dict: Optional[bool] = None + ) -> Union[Tuple, BaseModelOutputWithPast]: + if output_attentions: + warnings.warn("`DeltaNetModel` does not support `output_attentions` for now, setting it to `False`.") + output_attentions = False + output_attentions = output_attentions if output_attentions is not None else 
self.config.output_attentions + output_hidden_states = output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states + use_cache = use_cache if use_cache is not None else (self.config.use_cache if not self.training else False) + return_dict = return_dict if return_dict is not None else self.config.use_return_dict + + # retrieve input_ids and inputs_embeds + if input_ids is not None and inputs_embeds is not None: + raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") + elif input_ids is not None: + batch_size = input_ids.shape[0] + elif inputs_embeds is not None: + batch_size = inputs_embeds.shape[0] + else: + raise ValueError("You have to specify either input_ids or inputs_embeds") + + if inputs_embeds is None: + inputs_embeds = self.embeddings(input_ids) + hidden_states = inputs_embeds + + if use_cache: + if past_key_values is None: + past_key_values = [layer.attn.init_state(batch_size) for layer in self.layers] + if not isinstance(past_key_values, RecurrentCache): + past_key_values = RecurrentCache.from_legacy_cache(past_key_values) + + if self.gradient_checkpointing and self.training: + if use_cache: + logger.warning_once( + "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..." 
+ ) + use_cache = False + + all_hidden_states = () if output_hidden_states else None + all_attns = () if output_attentions else None + for layer in self.layers: + if output_hidden_states: + all_hidden_states += (hidden_states,) + + if self.gradient_checkpointing and self.training: + hidden_states, attentions, past_key_values = self._gradient_checkpointing_func( + layer.__call__, + hidden_states, + attention_mask, + past_key_values, + use_cache, + output_attentions + ) + else: + hidden_states, attentions, past_key_values = layer( + hidden_states, + attention_mask=attention_mask, + past_key_values=past_key_values, + use_cache=use_cache, + output_attentions=output_attentions + ) + + if output_attentions: + all_attns += (attentions,) + + hidden_states = self.norm(hidden_states) + + # add hidden states from the last decoder layer + if output_hidden_states: + all_hidden_states += (hidden_states,) + + next_cache = past_key_values + if not return_dict: + return tuple(x for x in [hidden_states, next_cache, all_hidden_states, all_attns] if x is not None) + return BaseModelOutputWithPast( + last_hidden_state=hidden_states, + past_key_values=next_cache, + hidden_states=all_hidden_states, + attentions=all_attns + ) + + +class DeltaNetForCausalLM(DeltaNetPreTrainedModel): + _tied_weights_keys = ["lm_head.weight"] + + def __init__(self, config): + super().__init__(config) + self.model = DeltaNetModel(config) + self.vocab_size = config.vocab_size + self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False) + + # Initialize weights and apply final processing + self.post_init() + + def get_input_embeddings(self): + return self.model.embeddings + + def set_input_embeddings(self, value): + self.model.embeddings = value + + def get_output_embeddings(self): + return self.lm_head + + def set_output_embeddings(self, new_embeddings): + self.lm_head = new_embeddings + + def set_decoder(self, decoder): +
self.model = decoder + + def get_decoder(self): + return self.model + + def generate(self, *args, **kwargs): + try: + return super().generate(*args, **kwargs) + except AttributeError as exception: + if 'past_key_values' in str(exception): + raise AttributeError( + f"You tried to call `generate` with a decoding strategy that manipulates `past_key_values`, " + f"which is not supported for {self.__class__.__name__}. " + f"Try another generation strategy instead. " + f"For the available generation strategies, check this doc: " + f"https://huggingface.co/docs/transformers/en/generation_strategies#decoding-strategies" + ) + else: + raise exception + + def prepare_inputs_for_generation( + self, + input_ids: torch.LongTensor = None, + past_key_values: Optional[Tuple[List[torch.Tensor]]] = None, + attention_mask: Optional[torch.Tensor] = None, + inputs_embeds: Optional[torch.FloatTensor] = None, + **kwargs + ): + # only the last token for `input_ids` if the `past_key_values` is passed along. + if past_key_values is not None: + if not isinstance(past_key_values, RecurrentCache): + past_key_values = RecurrentCache.from_legacy_cache(past_key_values, input_ids.shape[1] - 1) + input_ids, attention_mask = input_ids[:, -1:], attention_mask[:, -1:] + + # if `inputs_embeds` are passed, we only want to use them in the 1st generation step + if inputs_embeds is not None and past_key_values is None: + model_inputs = {'inputs_embeds': inputs_embeds} + else: + # The `contiguous()` here is necessary to have a static stride during decoding. torchdynamo otherwise + # recompiles graphs as the stride of the inputs is a guard. + # Ref: https://github.com/huggingface/transformers/pull/29114 + # TODO: use `next_tokens` directly instead. +
+ model_inputs = {'input_ids': input_ids.contiguous()} + + model_inputs.update({ + 'past_key_values': past_key_values, + 'use_cache': kwargs.get('use_cache'), + 'attention_mask': attention_mask, + }) + return model_inputs + + def forward( + self, + input_ids: torch.LongTensor = None, + attention_mask: Optional[torch.Tensor] = None, + inputs_embeds: Optional[torch.Tensor] = None, + past_key_values: Optional[Tuple[List[torch.Tensor]]] = None, + labels: Optional[torch.LongTensor] = None, + use_cache: Optional[bool] = None, + output_attentions: Optional[bool] = None, + output_hidden_states: Optional[bool] = None, + return_dict: Optional[bool] = None, + ) -> Union[Tuple, CausalLMOutputWithPast]: + output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions + output_hidden_states = ( + output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states + ) + return_dict = return_dict if return_dict is not None else self.config.use_return_dict + + outputs = self.model( + input_ids=input_ids, + attention_mask=attention_mask, + inputs_embeds=inputs_embeds, + past_key_values=past_key_values, + use_cache=use_cache, + output_attentions=output_attentions, + output_hidden_states=output_hidden_states, + return_dict=return_dict + ) + + hidden_states = outputs[0] + logits = self.lm_head(hidden_states) + + loss = None + if labels is not None: + if self.config.fuse_cross_entropy: + loss_fct = FusedCrossEntropyLoss(inplace_backward=True) + else: + loss_fct = nn.CrossEntropyLoss() + # Enable model parallelism + labels = labels.to(logits.device) + labels = torch.cat((labels[..., 1:], torch.full_like(labels[:, :1], loss_fct.ignore_index)), 1) + loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1)) + + if not return_dict: + output = (logits,) + outputs[1:] + return (loss,) + output if loss is not None else output + + return CausalLMOutputWithPast( + loss=loss, + logits=logits, + 
past_key_values=outputs.past_key_values, + hidden_states=outputs.hidden_states, + attentions=outputs.attentions, + ) diff --git a/fla/models/gla/__init__.py b/fla/models/gla/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..edccb515af8f04144308bfcbb72be8e91e714cd7 --- /dev/null +++ b/fla/models/gla/__init__.py @@ -0,0 +1,13 @@ +# -*- coding: utf-8 -*- + +from transformers import AutoConfig, AutoModel, AutoModelForCausalLM + +from fla.models.gla.configuration_gla import GLAConfig +from fla.models.gla.modeling_gla import GLAForCausalLM, GLAModel + +AutoConfig.register(GLAConfig.model_type, GLAConfig) +AutoModel.register(GLAConfig, GLAModel) +AutoModelForCausalLM.register(GLAConfig, GLAForCausalLM) + + +__all__ = ['GLAConfig', 'GLAForCausalLM', 'GLAModel'] diff --git a/fla/models/gla/configuration_gla.py b/fla/models/gla/configuration_gla.py new file mode 100644 index 0000000000000000000000000000000000000000..f8bf56a4f7167af7e27a19845a600d92530adf8f --- /dev/null +++ b/fla/models/gla/configuration_gla.py @@ -0,0 +1,80 @@ +# -*- coding: utf-8 -*- + +from typing import Optional + +from transformers.configuration_utils import PretrainedConfig + + +class GLAConfig(PretrainedConfig): + + model_type = 'gla' + keys_to_ignore_at_inference = ['past_key_values'] + + def __init__( + self, + vocab_size: int = 32000, + hidden_size: int = 2048, + expand_k: float = 0.5, + expand_v: int = 1, + hidden_ratio: Optional[int] = 4, + intermediate_size: Optional[int] = None, + num_hidden_layers: int = 24, + num_heads: int = 4, + num_kv_heads: Optional[int] = None, + feature_map: Optional[str] = None, + attn_mode: str = "chunk", + use_short_conv: bool = False, + conv_size: int = 4, + share_conv_kernel: bool = True, + use_output_gate: bool = True, + clamp_min: Optional[float] = None, + hidden_act: str = "swish", + max_position_embeddings: int = 2048, + elementwise_affine: Optional[bool] = True, + norm_eps: float = 1e-6, + use_gk: bool = True, + use_gv: bool =
False, + use_cache: bool = True, + pad_token_id: int = None, + bos_token_id: int = 1, + eos_token_id: int = 2, + tie_word_embeddings: bool = False, + initializer_range: float = 0.02, + fuse_norm: bool = True, + fuse_cross_entropy: bool = True, + **kwargs + ): + self.vocab_size = vocab_size + self.max_position_embeddings = max_position_embeddings + self.hidden_size = hidden_size + self.expand_k = expand_k + self.expand_v = expand_v + self.hidden_ratio = hidden_ratio + self.intermediate_size = intermediate_size + self.num_hidden_layers = num_hidden_layers + self.num_heads = num_heads + self.num_kv_heads = num_kv_heads + self.feature_map = feature_map + self.attn_mode = attn_mode + self.clamp_min = clamp_min + self.hidden_act = hidden_act + self.elementwise_affine = elementwise_affine + self.norm_eps = norm_eps + self.use_gk = use_gk + self.use_gv = use_gv + self.use_cache = use_cache + self.initializer_range = initializer_range + self.fuse_norm = fuse_norm + self.fuse_cross_entropy = fuse_cross_entropy + self.use_short_conv = use_short_conv + self.conv_size = conv_size + self.share_conv_kernel = share_conv_kernel + self.use_output_gate = use_output_gate + + super().__init__( + pad_token_id=pad_token_id, + bos_token_id=bos_token_id, + eos_token_id=eos_token_id, + tie_word_embeddings=tie_word_embeddings, + **kwargs, + ) diff --git a/fla/models/gla/modeling_gla.py b/fla/models/gla/modeling_gla.py new file mode 100644 index 0000000000000000000000000000000000000000..9ad435282a2a23636db9917b3a41f78de1b6bf88 --- /dev/null +++ b/fla/models/gla/modeling_gla.py @@ -0,0 +1,403 @@ +# -*- coding: utf-8 -*- + +from __future__ import annotations + +import math +import warnings +from typing import List, Optional, Tuple, Union + +import torch +import torch.nn as nn +import torch.utils.checkpoint +from transformers.activations import ACT2FN +from transformers.modeling_outputs import (BaseModelOutputWithPast, + CausalLMOutputWithPast) +from transformers.modeling_utils import 
PreTrainedModel +from transformers.utils import logging + +from fla.layers.gla import GatedLinearAttention +from fla.models.gla.configuration_gla import GLAConfig +from fla.models.utils import RecurrentCache +from fla.modules import FusedCrossEntropyLoss, RMSNorm +from fla.modules.activations import swiglu_linear + +logger = logging.get_logger(__name__) + + +class GLAMLP(nn.Module): + + def __init__( + self, + hidden_size: int, + hidden_ratio: Optional[int] = None, + intermediate_size: Optional[int] = None, + hidden_act: str = 'swish' + ) -> None: + super().__init__() + + self.hidden_size = hidden_size + # the final number of params is `hidden_ratio * hidden_size^2` + # `intermediate_size` is chosen to be a multiple of 256 closest to `2/3 * hidden_size * hidden_ratio` + if hidden_ratio is None: + hidden_ratio = 4 + if intermediate_size is None: + intermediate_size = int(hidden_size * hidden_ratio * 2 / 3) + intermediate_size = 256 * ((intermediate_size + 256 - 1) // 256) + self.hidden_ratio = hidden_ratio + self.intermediate_size = intermediate_size + + self.gate_proj = nn.Linear(self.hidden_size, self.intermediate_size * 2, bias=False) + self.down_proj = nn.Linear(self.intermediate_size, self.hidden_size, bias=False) + self.act_fn = ACT2FN[hidden_act] + + def forward(self, x): + y = self.gate_proj(x) + gate, y = y.chunk(2, -1) + return swiglu_linear(gate, y, self.down_proj.weight, self.down_proj.bias) + + +class GLABlock(nn.Module): + def __init__(self, config: GLAConfig, layer_idx: int): + super().__init__() + self.hidden_size = config.hidden_size + + self.attn_norm = RMSNorm(hidden_size=config.hidden_size, eps=config.norm_eps) + self.attn = GatedLinearAttention( + mode=config.attn_mode, + hidden_size=config.hidden_size, + expand_k=config.expand_k, + expand_v=config.expand_v, + num_heads=config.num_heads, + num_kv_heads=config.num_kv_heads, + feature_map=config.feature_map, + use_short_conv=config.use_short_conv, + conv_size=config.conv_size, +
share_conv_kernel=config.share_conv_kernel, + use_output_gate=config.use_output_gate, + gate_fn=config.hidden_act, + elementwise_affine=config.elementwise_affine, + norm_eps=config.norm_eps, + clamp_min=config.clamp_min, + fuse_norm=config.fuse_norm, + layer_idx=layer_idx + ) + self.mlp_norm = RMSNorm(hidden_size=config.hidden_size, eps=config.norm_eps) + self.mlp = GLAMLP( + hidden_size=config.hidden_size, + hidden_ratio=config.hidden_ratio, + intermediate_size=config.intermediate_size, + hidden_act=config.hidden_act + ) + + def forward( + self, + hidden_states: torch.Tensor, + attention_mask: Optional[torch.Tensor] = None, + past_key_values: Optional[Tuple[List[torch.Tensor]]] = None, + use_cache: Optional[bool] = False, + output_attentions: Optional[bool] = False, + **kwargs, + ) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]: + residual = hidden_states + hidden_states = self.attn_norm(hidden_states) + hidden_states, attentions, past_key_values = self.attn( + hidden_states=hidden_states, + attention_mask=attention_mask, + past_key_values=past_key_values, + use_cache=use_cache, + output_attentions=output_attentions + ) + hidden_states, residual = self.mlp_norm(hidden_states, residual, True) + hidden_states = self.mlp(hidden_states) + hidden_states = residual + hidden_states + + outputs = (hidden_states, attentions, past_key_values) + + return outputs + + +class GLAPreTrainedModel(PreTrainedModel): + + config_class = GLAConfig + supports_gradient_checkpointing = True + _no_split_modules = ['GLABlock'] + + def __init__(self, *inputs, **kwargs): + super().__init__(*inputs, **kwargs) + + def _init_weights( + self, + module: nn.Module, + rescale_prenorm_residual: bool = True, + num_residuals_per_layer: int = 2, + ): + if isinstance(module, (nn.Linear, nn.Conv1d)): + # Slightly different from the TF version which uses truncated_normal for initialization + # cf https://github.com/pytorch/pytorch/pull/5617 + 
nn.init.normal_(module.weight, mean=0.0, std=self.config.initializer_range) + if module.bias is not None: + nn.init.zeros_(module.bias) + elif isinstance(module, nn.Embedding): + nn.init.normal_(module.weight, mean=0.0, std=self.config.initializer_range) + if module.padding_idx is not None: + module.weight.data[module.padding_idx].zero_() + + if rescale_prenorm_residual: + # Reinitialize selected weights subject to the OpenAI GPT-2 Paper Scheme: + # > A modified initialization which accounts for the accumulation on the residual path with model depth. Scale + # > the weights of residual layers at initialization by a factor of 1/√N where N is the # of residual layers. + # > -- GPT-2 :: https://openai.com/blog/better-language-models/ + # + # Reference (Megatron-LM): https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/gpt_model.py + for name, p in module.named_parameters(): + if name in ["o_proj.weight", "down_proj.weight"]: + # Special Scaled Initialization --> There are 2 Layer Norms per Transformer Block + # Following Pytorch init, except scale by 1/sqrt(2 * n_layer) + # We need to reinit p since this code could be called multiple times + # Having just p *= scale would repeatedly scale it down + with torch.no_grad(): + p /= math.sqrt(num_residuals_per_layer * self.config.num_hidden_layers) + + +class GLAModel(GLAPreTrainedModel): + + def __init__(self, config: GLAConfig): + super().__init__(config) + self.padding_idx = config.pad_token_id + self.vocab_size = config.vocab_size + + self.embeddings = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx) + self.layers = nn.ModuleList([GLABlock(config, layer_idx) for layer_idx in range(config.num_hidden_layers)]) + self.norm = RMSNorm(config.hidden_size, eps=config.norm_eps) + + self.gradient_checkpointing = False + + self.post_init() + + def get_input_embeddings(self): + return self.embeddings + + def set_input_embeddings(self, value): + self.embeddings = value + + def forward( + self, + 
input_ids: Optional[torch.LongTensor] = None, + attention_mask: Optional[torch.Tensor] = None, # noqa + inputs_embeds: Optional[torch.FloatTensor] = None, + past_key_values: Optional[Tuple[List[torch.Tensor]]] = None, + use_cache: Optional[bool] = None, + output_attentions: Optional[bool] = None, + output_hidden_states: Optional[bool] = None, + return_dict: Optional[bool] = None + ) -> Union[Tuple, BaseModelOutputWithPast]: + if output_attentions: + warnings.warn("`GLAModel` does not support `output_attentions` for now, setting it to `False`.") + output_attentions = False + output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions + output_hidden_states = output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states + use_cache = use_cache if use_cache is not None else (self.config.use_cache if not self.training else False) + return_dict = return_dict if return_dict is not None else self.config.use_return_dict + + # retrieve input_ids and inputs_embeds + if input_ids is not None and inputs_embeds is not None: + raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") + elif input_ids is not None: + batch_size = input_ids.shape[0] + elif inputs_embeds is not None: + batch_size = inputs_embeds.shape[0] + else: + raise ValueError("You have to specify either input_ids or inputs_embeds") + + if inputs_embeds is None: + inputs_embeds = self.embeddings(input_ids) + hidden_states = inputs_embeds + + if use_cache: + if past_key_values is None: + past_key_values = [layer.attn.init_state(batch_size) for layer in self.layers] + if not isinstance(past_key_values, RecurrentCache): + past_key_values = RecurrentCache.from_legacy_cache(past_key_values) + + if self.gradient_checkpointing and self.training: + if use_cache: + logger.warning_once( + "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..."
+ ) + use_cache = False + + all_hidden_states = () if output_hidden_states else None + all_attns = () if output_attentions else None + for layer in self.layers: + if output_hidden_states: + all_hidden_states += (hidden_states,) + + if self.gradient_checkpointing and self.training: + hidden_states, attentions, past_key_values = self._gradient_checkpointing_func( + layer.__call__, + hidden_states, + attention_mask, + past_key_values, + use_cache, + output_attentions + ) + else: + hidden_states, attentions, past_key_values = layer( + hidden_states, + attention_mask=attention_mask, + past_key_values=past_key_values, + use_cache=use_cache, + output_attentions=output_attentions + ) + + if output_attentions: + all_attns += (attentions,) + + hidden_states = self.norm(hidden_states) + + # add hidden states from the last decoder layer + if output_hidden_states: + all_hidden_states += (hidden_states,) + + next_cache = None + if use_cache: + next_cache = past_key_values.to_legacy_cache() + if not return_dict: + return tuple(x for x in [hidden_states, next_cache, all_hidden_states, all_attns] if x is not None) + return BaseModelOutputWithPast( + last_hidden_state=hidden_states, + past_key_values=next_cache, + hidden_states=all_hidden_states, + attentions=all_attns + ) + + +class GLAForCausalLM(GLAPreTrainedModel): + _tied_weights_keys = ["lm_head.weight"] + + def __init__(self, config): + super().__init__(config) + self.model = GLAModel(config) + self.vocab_size = config.vocab_size + self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False) + + # Initialize weights and apply final processing + self.post_init() + + def get_input_embeddings(self): + return self.model.embeddings + + def set_input_embeddings(self, value): + self.model.embeddings = value + + def get_output_embeddings(self): + return self.lm_head + + def set_output_embeddings(self, new_embeddings): + self.lm_head = new_embeddings + + def set_decoder(self, decoder): + self.model = decoder + + def 
get_decoder(self): + return self.model + + def generate(self, *args, **kwargs): + try: + return super().generate(*args, **kwargs) + except AttributeError as exception: + if 'past_key_values' in str(exception): + raise AttributeError( + f"You tried to call `generate` with a decoding strategy that manipulates `past_key_values`, " + f"which is not supported for {self.__class__.__name__}. " + f"Try another generation strategy instead. " + f"For the available generation strategies, check this doc: " + f"https://huggingface.co/docs/transformers/en/generation_strategies#decoding-strategies" + ) + else: + raise exception + + def prepare_inputs_for_generation( + self, + input_ids: torch.LongTensor = None, + past_key_values: Optional[Tuple[List[torch.Tensor]]] = None, + attention_mask: Optional[torch.Tensor] = None, + inputs_embeds: Optional[torch.Tensor] = None, + **kwargs + ): + # only the last token for `input_ids` if the `past_key_values` is passed along. + if past_key_values is not None: + if not isinstance(past_key_values, RecurrentCache): + past_key_values = RecurrentCache.from_legacy_cache(past_key_values, input_ids.shape[1] - 1) + input_ids, attention_mask = input_ids[:, -1:], attention_mask[:, -1:] + # if `inputs_embeds` are passed, we only want to use them in the 1st generation step + if inputs_embeds is not None and past_key_values is None: + model_inputs = {'inputs_embeds': inputs_embeds} + else: + # The `contiguous()` here is necessary to have a static stride during decoding. torchdynamo otherwise + # recompiles graphs as the stride of the inputs is a guard. + # Ref: https://github.com/huggingface/transformers/pull/29114 + # TODO: use `next_tokens` directly instead. +
+ model_inputs = {'input_ids': input_ids.contiguous()} + + model_inputs.update({ + 'past_key_values': past_key_values, + 'use_cache': kwargs.get('use_cache'), + 'attention_mask': attention_mask, + }) + return model_inputs + + def forward( + self, + input_ids: torch.LongTensor = None, + attention_mask: Optional[torch.Tensor] = None, + inputs_embeds: Optional[torch.Tensor] = None, + past_key_values: Optional[Tuple[List[torch.Tensor]]] = None, + labels: Optional[torch.LongTensor] = None, + use_cache: Optional[bool] = None, + output_attentions: Optional[bool] = None, + output_hidden_states: Optional[bool] = None, + return_dict: Optional[bool] = None, + ) -> Union[Tuple, CausalLMOutputWithPast]: + output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions + output_hidden_states = ( + output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states + ) + return_dict = return_dict if return_dict is not None else self.config.use_return_dict + + outputs = self.model( + input_ids=input_ids, + attention_mask=attention_mask, + inputs_embeds=inputs_embeds, + past_key_values=past_key_values, + use_cache=use_cache, + output_attentions=output_attentions, + output_hidden_states=output_hidden_states, + return_dict=return_dict + ) + + hidden_states = outputs[0] + logits = self.lm_head(hidden_states) + + loss = None + if labels is not None: + if self.config.fuse_cross_entropy: + loss_fct = FusedCrossEntropyLoss(inplace_backward=True) + else: + loss_fct = nn.CrossEntropyLoss() + # Enable model parallelism + labels = labels.to(logits.device) + labels = torch.cat((labels[..., 1:], torch.full_like(labels[:, :1], loss_fct.ignore_index)), 1) + loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1)) + + if not return_dict: + output = (logits,) + outputs[1:] + return (loss,) + output if loss is not None else output + + return CausalLMOutputWithPast( + loss=loss, + logits=logits, + 
past_key_values=outputs.past_key_values, + hidden_states=outputs.hidden_states, + attentions=outputs.attentions, + ) diff --git a/fla/models/hgrn/__init__.py b/fla/models/hgrn/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..3b29a3dd82da6d64bac6cc887e24295a03de5b23 --- /dev/null +++ b/fla/models/hgrn/__init__.py @@ -0,0 +1,13 @@ +# -*- coding: utf-8 -*- + +from transformers import AutoConfig, AutoModel, AutoModelForCausalLM + +from fla.models.hgrn.configuration_hgrn import HGRNConfig +from fla.models.hgrn.modeling_hgrn import HGRNForCausalLM, HGRNModel + +AutoConfig.register(HGRNConfig.model_type, HGRNConfig) +AutoModel.register(HGRNConfig, HGRNModel) +AutoModelForCausalLM.register(HGRNConfig, HGRNForCausalLM) + + +__all__ = ['HGRNConfig', 'HGRNForCausalLM', 'HGRNModel'] diff --git a/fla/models/hgrn/configuration_hgrn.py b/fla/models/hgrn/configuration_hgrn.py new file mode 100644 index 0000000000000000000000000000000000000000..6b70667bb65b991dcb1cf6783e6a76914cb4912c --- /dev/null +++ b/fla/models/hgrn/configuration_hgrn.py @@ -0,0 +1,66 @@ +# -*- coding: utf-8 -*- + +from typing import Optional + +from transformers.configuration_utils import PretrainedConfig + + +class HGRNConfig(PretrainedConfig): + + model_type = 'hgrn' + keys_to_ignore_at_inference = ['past_key_values'] + + def __init__( + self, + attn_mode: str = "chunk", + vocab_size: int = 32000, + hidden_size: int = 2048, + num_hidden_layers: int = 24, + num_heads: Optional[int] = 1, + expand_ratio: Optional[int] = 1, + use_short_conv: bool = False, + conv_size: int = 4, + share_conv_kernel: bool = True, + use_lower_bound: bool = True, + hidden_ratio: Optional[int] = 4, + intermediate_size: Optional[int] = None, + hidden_act: str = "swish", + max_position_embeddings: int = 2048, + elementwise_affine: Optional[bool] = True, + norm_eps: float = 1e-6, + use_cache: bool = True, + pad_token_id: int = None, + bos_token_id: int = 1, + eos_token_id: int = 2, + tie_word_embeddings: 
bool = False, + initializer_range: float = 0.02, + fuse_cross_entropy: bool = True, + **kwargs + ): + self.attn_mode = attn_mode + self.vocab_size = vocab_size + self.max_position_embeddings = max_position_embeddings + self.hidden_size = hidden_size + self.num_hidden_layers = num_hidden_layers + self.num_heads = num_heads + self.expand_ratio = expand_ratio + self.use_short_conv = use_short_conv + self.conv_size = conv_size + self.share_conv_kernel = share_conv_kernel + self.use_lower_bound = use_lower_bound + self.hidden_ratio = hidden_ratio + self.intermediate_size = intermediate_size + self.hidden_act = hidden_act + self.elementwise_affine = elementwise_affine + self.norm_eps = norm_eps + self.use_cache = use_cache + self.initializer_range = initializer_range + self.fuse_cross_entropy = fuse_cross_entropy + + super().__init__( + pad_token_id=pad_token_id, + bos_token_id=bos_token_id, + eos_token_id=eos_token_id, + tie_word_embeddings=tie_word_embeddings, + **kwargs, + ) diff --git a/fla/models/hgrn/modeling_hgrn.py b/fla/models/hgrn/modeling_hgrn.py new file mode 100644 index 0000000000000000000000000000000000000000..b41b274f1ef21e0eee68773ded8bba78d214e5af --- /dev/null +++ b/fla/models/hgrn/modeling_hgrn.py @@ -0,0 +1,407 @@ +# -*- coding: utf-8 -*- + +from __future__ import annotations + +import math +import warnings +from typing import List, Optional, Tuple, Union + +import torch +import torch.nn as nn +import torch.utils.checkpoint +from transformers.activations import ACT2FN +from transformers.modeling_outputs import (BaseModelOutputWithPast, + CausalLMOutputWithPast) +from transformers.modeling_utils import PreTrainedModel +from transformers.utils import logging + +from fla.layers.hgrn import HGRNAttention +from fla.models.hgrn.configuration_hgrn import HGRNConfig +from fla.models.utils import RecurrentCache +from fla.modules import FusedCrossEntropyLoss, RMSNorm +from fla.modules.activations import swiglu_linear + +logger = logging.get_logger(__name__) + 
+ +class HGRNMLP(nn.Module): + + def __init__( + self, + hidden_size: int, + hidden_ratio: Optional[int] = None, + intermediate_size: Optional[int] = None, + hidden_act: str = 'swish' + ) -> None: + super().__init__() + + self.hidden_size = hidden_size + # the final number of params is `hidden_ratio * hidden_size^2` + # `intermediate_size` is chosen to be a multiple of 256 closest to `2/3 * hidden_size * hidden_ratio` + if hidden_ratio is None: + hidden_ratio = 4 + if intermediate_size is None: + intermediate_size = int(hidden_size * hidden_ratio * 2 / 3) + intermediate_size = 256 * ((intermediate_size + 256 - 1) // 256) + self.hidden_ratio = hidden_ratio + self.intermediate_size = intermediate_size + + self.gate_proj = nn.Linear(self.hidden_size, self.intermediate_size * 2, bias=False) + self.down_proj = nn.Linear(self.intermediate_size, self.hidden_size, bias=False) + self.act_fn = ACT2FN[hidden_act] + + def forward(self, x): + y = self.gate_proj(x) + gate, y = y.chunk(2, -1) + return swiglu_linear(gate, y, self.down_proj.weight, self.down_proj.bias) + + +class HGRNBlock(nn.Module): + def __init__(self, config: HGRNConfig, layer_idx: int): + super().__init__() + self.hidden_size = config.hidden_size + + self.attn_norm = RMSNorm(hidden_size=config.hidden_size, eps=config.norm_eps) + self.attn = HGRNAttention( + mode=config.attn_mode, + hidden_size=config.hidden_size, + num_heads=config.num_heads, + expand_ratio=config.expand_ratio, + use_short_conv=config.use_short_conv, + conv_size=config.conv_size, + share_conv_kernel=config.share_conv_kernel, + elementwise_affine=config.elementwise_affine, + norm_eps=config.norm_eps, + layer_idx=layer_idx + ) + self.mlp_norm = RMSNorm(hidden_size=config.hidden_size, eps=config.norm_eps) + self.mlp = HGRNMLP( + hidden_size=config.hidden_size, + hidden_ratio=config.hidden_ratio, + intermediate_size=config.intermediate_size, + hidden_act=config.hidden_act + ) + + def forward( + self, + hidden_states: torch.Tensor, +
attention_mask: Optional[torch.Tensor] = None, + past_key_values: Optional[Tuple[List[torch.Tensor]]] = None, + use_cache: Optional[bool] = False, + output_attentions: Optional[bool] = False, + lower_bound: Optional[torch.Tensor] = None, + **kwargs, + ) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]: + residual = hidden_states + hidden_states = self.attn_norm(hidden_states) + hidden_states, attentions, past_key_values = self.attn( + hidden_states=hidden_states, + attention_mask=attention_mask, + past_key_values=past_key_values, + use_cache=use_cache, + output_attentions=output_attentions, + lower_bound=lower_bound + ) + hidden_states, residual = self.mlp_norm(hidden_states, residual, True) + hidden_states = self.mlp(hidden_states) + hidden_states = residual + hidden_states + + outputs = (hidden_states, attentions, past_key_values) + + return outputs + + +class HGRNPreTrainedModel(PreTrainedModel): + + config_class = HGRNConfig + supports_gradient_checkpointing = True + _no_split_modules = ['HGRNBlock'] + + def __init__(self, *inputs, **kwargs): + super().__init__(*inputs, **kwargs) + + def _init_weights( + self, + module: nn.Module, + rescale_prenorm_residual: bool = True, + num_residuals_per_layer: int = 2, + ): + if isinstance(module, (nn.Linear, nn.Conv1d)): + # Slightly different from the TF version which uses truncated_normal for initialization + # cf https://github.com/pytorch/pytorch/pull/5617 + nn.init.normal_(module.weight, mean=0.0, std=self.config.initializer_range) + if module.bias is not None: + nn.init.zeros_(module.bias) + elif isinstance(module, nn.Embedding): + nn.init.normal_(module.weight, mean=0.0, std=self.config.initializer_range) + if module.padding_idx is not None: + module.weight.data[module.padding_idx].zero_() + + if rescale_prenorm_residual: + # Reinitialize selected weights subject to the OpenAI GPT-2 Paper Scheme: + # > A modified initialization which accounts for the accumulation on the residual
path with model depth. Scale + # > the weights of residual layers at initialization by a factor of 1/√N where N is the # of residual layers. + # > -- GPT-2 :: https://openai.com/blog/better-language-models/ + # + # Reference (Megatron-LM): https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/gpt_model.py + for name, p in module.named_parameters(): + if name in ["o_proj.weight", "down_proj.weight"]: + # Special Scaled Initialization --> There are 2 Layer Norms per Transformer Block + # Following Pytorch init, except scale by 1/sqrt(2 * n_layer) + # We need to reinit p since this code could be called multiple times + # Having just p *= scale would repeatedly scale it down + with torch.no_grad(): + p /= math.sqrt(num_residuals_per_layer * self.config.num_hidden_layers) + + +class HGRNModel(HGRNPreTrainedModel): + + def __init__(self, config: HGRNConfig): + super().__init__(config) + self.padding_idx = config.pad_token_id + self.vocab_size = config.vocab_size + + self.embeddings = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx) + if config.use_lower_bound: + self.lower_bounds = nn.Parameter(torch.zeros(config.num_hidden_layers, config.hidden_size)) + self.layers = nn.ModuleList([HGRNBlock(config, layer_idx) for layer_idx in range(config.num_hidden_layers)]) + self.norm = RMSNorm(config.hidden_size, eps=config.norm_eps) + + self.gradient_checkpointing = False + + self.post_init() + + def get_input_embeddings(self): + return self.embeddings + + def set_input_embeddings(self, value): + self.embeddings = value + + def forward( + self, + input_ids: Optional[torch.LongTensor] = None, + attention_mask: Optional[torch.Tensor] = None, # noqa + inputs_embeds: Optional[torch.FloatTensor] = None, + past_key_values: Optional[Tuple[List[torch.Tensor]]] = None, + use_cache: Optional[bool] = None, + output_attentions: Optional[bool] = None, + output_hidden_states: Optional[bool] = None, + return_dict: Optional[bool] = None + ) -> Union[Tuple, 
BaseModelOutputWithPast]: + if output_attentions: + warnings.warn("`HGRNModel` does not `output_attentions` now, setting it to `False`.") + output_attentions = False + output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions + output_hidden_states = output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states + use_cache = use_cache if use_cache is not None else (self.config.use_cache if not self.training else False) + return_dict = return_dict if return_dict is not None else self.config.use_return_dict + + # retrieve input_ids and inputs_embeds + if input_ids is not None and inputs_embeds is not None: + raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") + elif input_ids is not None: + batch_size = input_ids.shape[0] + elif inputs_embeds is not None: + batch_size = inputs_embeds.shape[0] + else: + raise ValueError("You have to specify either input_ids or inputs_embeds") + + if inputs_embeds is None: + inputs_embeds = self.embeddings(input_ids) + hidden_states = inputs_embeds + + if use_cache: + if past_key_values is None: + past_key_values = [layer.attn.init_state(batch_size) for layer in self.layers] + if not isinstance(past_key_values, RecurrentCache): + past_key_values = RecurrentCache.from_legacy_cache(past_key_values) + + if self.gradient_checkpointing and self.training: + if use_cache: + logger.warning_once( + "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..." 
+ ) + use_cache = False + + all_hidden_states = () if output_hidden_states else None + all_attns = () if output_attentions else None + + if self.config.use_lower_bound: + lower_bounds = self.lower_bounds.softmax(0) + lower_bounds = lower_bounds.cumsum(0) - lower_bounds[0] + for i, layer in enumerate(self.layers): + if output_hidden_states: + all_hidden_states += (hidden_states,) + + lower_bound = lower_bounds[i] if self.config.use_lower_bound else None + if self.gradient_checkpointing and self.training: + hidden_states, attentions, past_key_values = self._gradient_checkpointing_func( + layer.__call__, + hidden_states, + attention_mask, + past_key_values, + use_cache, + output_attentions, + lower_bound + ) + else: + hidden_states, attentions, past_key_values = layer( + hidden_states, + attention_mask=attention_mask, + past_key_values=past_key_values, + use_cache=use_cache, + output_attentions=output_attentions, + lower_bound=lower_bound + ) + + if output_attentions: + all_attns += (attentions,) + + hidden_states = self.norm(hidden_states) + + # add hidden states from the last decoder layer + if output_hidden_states: + all_hidden_states += (hidden_states,) + + next_cache = None + if use_cache: + next_cache = past_key_values.to_legacy_cache() + if not return_dict: + return tuple(x for x in [hidden_states, next_cache, all_hidden_states, all_attns] if x is not None) + return BaseModelOutputWithPast( + last_hidden_state=hidden_states, + past_key_values=next_cache, + hidden_states=all_hidden_states, + attentions=all_attns + ) + + +class HGRNForCausalLM(HGRNPreTrainedModel): + _tied_weights_keys = ["lm_head.weight"] + + def __init__(self, config): + super().__init__(config) + self.model = HGRNModel(config) + self.vocab_size = config.vocab_size + self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False) + + # Initialize weights and apply final processing + self.post_init() + + def get_input_embeddings(self): + return self.model.embeddings + + def 
set_input_embeddings(self, value): + self.model.embeddings = value + + def get_output_embeddings(self): + return self.lm_head + + def set_output_embeddings(self, new_embeddings): + self.lm_head = new_embeddings + + def set_decoder(self, decoder): + self.model = decoder + + def get_decoder(self): + return self.model + + def generate(self, *args, **kwargs): + try: + return super().generate(*args, **kwargs) + except AttributeError as exception: + if 'past_key_values' in str(exception): + raise AttributeError( + f"You tried to call `generate` with a decoding strategy that manipulates `past_key_values`, " + f"which is not supported for {self.__class__.__name__}. " + f"Try another generation strategy instead. " + f"For the available generation strategies, check this doc: " + f"https://huggingface.co/docs/transformers/en/generation_strategies#decoding-strategies" + ) + else: + raise exception + + def prepare_inputs_for_generation( + self, + input_ids: torch.LongTensor = None, + past_key_values: Optional[Tuple[List[torch.Tensor]]] = None, + attention_mask: Optional[torch.Tensor] = None, + inputs_embeds: Optional[torch.Tensor] = None, + **kwargs + ): + # only last token for `inputs_ids` if the `past_key_values` is passed along. + if past_key_values is not None: + if not isinstance(past_key_values, RecurrentCache): + past_key_values = RecurrentCache.from_legacy_cache(past_key_values, input_ids.shape[1] - 1) + input_ids, attention_mask = input_ids[:, -1:], attention_mask[:, -1:] + # if `inputs_embeds` are passed, we only want to use them in the 1st generation step + if inputs_embeds is not None and past_key_values is None: + model_inputs = {'inputs_embeds': inputs_embeds} + else: + # The `contiguous()` here is necessary to have a static stride during decoding. torchdynamo otherwise + # recompiles graphs as the stride of the inputs is a guard. + # Ref: https://github.com/huggingface/transformers/pull/29114 + # TODO: use `next_tokens` directly instead. 
+ model_inputs = {'input_ids': input_ids.contiguous()} + + model_inputs.update({ + 'past_key_values': past_key_values, + 'use_cache': kwargs.get('use_cache'), + 'attention_mask': attention_mask, + }) + return model_inputs + + def forward( + self, + input_ids: torch.LongTensor = None, + attention_mask: Optional[torch.Tensor] = None, + inputs_embeds: Optional[torch.Tensor] = None, + past_key_values: Optional[Tuple[List[torch.Tensor]]] = None, + labels: Optional[torch.LongTensor] = None, + use_cache: Optional[bool] = None, + output_attentions: Optional[bool] = None, + output_hidden_states: Optional[bool] = None, + return_dict: Optional[bool] = None, + ) -> Union[Tuple, CausalLMOutputWithPast]: + output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions + output_hidden_states = ( + output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states + ) + return_dict = return_dict if return_dict is not None else self.config.use_return_dict + + outputs = self.model( + input_ids=input_ids, + attention_mask=attention_mask, + inputs_embeds=inputs_embeds, + past_key_values=past_key_values, + use_cache=use_cache, + output_attentions=output_attentions, + output_hidden_states=output_hidden_states, + return_dict=return_dict + ) + + hidden_states = outputs[0] + logits = self.lm_head(hidden_states) + + loss = None + if labels is not None: + if self.config.fuse_cross_entropy: + loss_fct = FusedCrossEntropyLoss(inplace_backward=True) + else: + loss_fct = nn.CrossEntropyLoss() + # Enable model parallelism + labels = labels.to(logits.device) + labels = torch.cat((labels[..., 1:], torch.full_like(labels[:, :1], loss_fct.ignore_index)), 1) + loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1)) + + if not return_dict: + output = (logits,) + outputs[1:] + return (loss,) + output if loss is not None else output + + return CausalLMOutputWithPast( + loss=loss, + logits=logits, + 
past_key_values=outputs.past_key_values, + hidden_states=outputs.hidden_states, + attentions=outputs.attentions, + ) diff --git a/fla/models/hgrn2/__init__.py b/fla/models/hgrn2/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..306b8082220a57091f2e99cd689c011690db0439 --- /dev/null +++ b/fla/models/hgrn2/__init__.py @@ -0,0 +1,13 @@ +# -*- coding: utf-8 -*- + +from transformers import AutoConfig, AutoModel, AutoModelForCausalLM + +from fla.models.hgrn2.configuration_hgrn2 import HGRN2Config +from fla.models.hgrn2.modeling_hgrn2 import HGRN2ForCausalLM, HGRN2Model + +AutoConfig.register(HGRN2Config.model_type, HGRN2Config) +AutoModel.register(HGRN2Config, HGRN2Model) +AutoModelForCausalLM.register(HGRN2Config, HGRN2ForCausalLM) + + +__all__ = ['HGRN2Config', 'HGRN2ForCausalLM', 'HGRN2Model'] diff --git a/fla/models/hgrn2/configuration_hgrn2.py b/fla/models/hgrn2/configuration_hgrn2.py new file mode 100644 index 0000000000000000000000000000000000000000..5f5382c9b68fa1ad2abd6538d58ab1d635aa1276 --- /dev/null +++ b/fla/models/hgrn2/configuration_hgrn2.py @@ -0,0 +1,66 @@ +# -*- coding: utf-8 -*- + +from typing import Optional + +from transformers.configuration_utils import PretrainedConfig + + +class HGRN2Config(PretrainedConfig): + + model_type = 'hgrn2' + keys_to_ignore_at_inference = ['past_key_values'] + + def __init__( + self, + vocab_size: int = 32000, + hidden_size: int = 2048, + num_hidden_layers: int = 24, + attn_mode: str = "chunk", + num_heads: Optional[int] = None, + expand_ratio: Optional[int] = 128, + use_short_conv: bool = False, + conv_size: int = 4, + share_conv_kernel: bool = True, + use_lower_bound: bool = True, + hidden_ratio: Optional[int] = 4, + intermediate_size: Optional[int] = None, + hidden_act: str = "swish", + max_position_embeddings: int = 2048, + elementwise_affine: Optional[bool] = True, + norm_eps: float = 1e-6, + use_cache: bool = True, + pad_token_id: int = None, + bos_token_id: int = 1, + eos_token_id: 
int = 2, + tie_word_embeddings: bool = False, + initializer_range: float = 0.02, + fuse_cross_entropy: bool = True, + **kwargs + ): + self.vocab_size = vocab_size + self.max_position_embeddings = max_position_embeddings + self.hidden_size = hidden_size + self.num_hidden_layers = num_hidden_layers + self.attn_mode = attn_mode + self.num_heads = num_heads + self.expand_ratio = expand_ratio + self.use_short_conv = use_short_conv + self.conv_size = conv_size + self.share_conv_kernel = share_conv_kernel + self.use_lower_bound = use_lower_bound + self.hidden_ratio = hidden_ratio + self.intermediate_size = intermediate_size + self.hidden_act = hidden_act + self.elementwise_affine = elementwise_affine + self.norm_eps = norm_eps + self.use_cache = use_cache + self.initializer_range = initializer_range + self.fuse_cross_entropy = fuse_cross_entropy + + super().__init__( + pad_token_id=pad_token_id, + bos_token_id=bos_token_id, + eos_token_id=eos_token_id, + tie_word_embeddings=tie_word_embeddings, + **kwargs, + ) diff --git a/fla/models/hgrn2/modeling_hgrn2.py b/fla/models/hgrn2/modeling_hgrn2.py new file mode 100644 index 0000000000000000000000000000000000000000..0f530a2abe30bc7f0b2be5c0b9bb56f4814d19ca --- /dev/null +++ b/fla/models/hgrn2/modeling_hgrn2.py @@ -0,0 +1,407 @@ +# -*- coding: utf-8 -*- + +from __future__ import annotations + +import math +import warnings +from typing import List, Optional, Tuple, Union + +import torch +import torch.nn as nn +import torch.utils.checkpoint +from transformers.activations import ACT2FN +from transformers.modeling_outputs import (BaseModelOutputWithPast, + CausalLMOutputWithPast) +from transformers.modeling_utils import PreTrainedModel +from transformers.utils import logging + +from fla.layers.hgrn2 import HGRN2Attention +from fla.models.hgrn2.configuration_hgrn2 import HGRN2Config +from fla.models.utils import RecurrentCache +from fla.modules import FusedCrossEntropyLoss, RMSNorm +from fla.modules.activations import swiglu_linear 
+ +logger = logging.get_logger(__name__) + + +class HGRN2MLP(nn.Module): + + def __init__( + self, + hidden_size: int, + hidden_ratio: Optional[int] = None, + intermediate_size: Optional[int] = None, + hidden_act: str = 'swish' + ) -> HGRN2MLP: + super().__init__() + + self.hidden_size = hidden_size + # the final number of params is `hidden_ratio * hidden_size^2` + # `intermediate_size` is chosen to be a multiple of 256 closest to `2/3 * hidden_size * hidden_ratio` + if hidden_ratio is None: + hidden_ratio = 4 + if intermediate_size is None: + intermediate_size = int(hidden_size * hidden_ratio * 2 / 3) + intermediate_size = 256 * ((intermediate_size + 256 - 1) // 256) + self.hidden_ratio = hidden_ratio + self.intermediate_size = intermediate_size + + self.gate_proj = nn.Linear(self.hidden_size, self.intermediate_size * 2, bias=False) + self.down_proj = nn.Linear(self.intermediate_size, self.hidden_size, bias=False) + self.act_fn = ACT2FN[hidden_act] + + def forward(self, x): + y = self.gate_proj(x) + gate, y = y.chunk(2, -1) + return swiglu_linear(gate, y, self.down_proj.weight, self.down_proj.bias) + + +class HGRN2Block(nn.Module): + def __init__(self, config: HGRN2Config, layer_idx: int): + super().__init__() + self.hidden_size = config.hidden_size + + self.attn_norm = RMSNorm(hidden_size=config.hidden_size, eps=config.norm_eps) + self.attn = HGRN2Attention( + mode=config.attn_mode, + hidden_size=config.hidden_size, + num_heads=config.num_heads, + expand_ratio=config.expand_ratio, + use_short_conv=config.use_short_conv, + conv_size=config.conv_size, + share_conv_kernel=config.share_conv_kernel, + elementwise_affine=config.elementwise_affine, + norm_eps=config.norm_eps, + layer_idx=layer_idx + ) + self.mlp_norm = RMSNorm(hidden_size=config.hidden_size, eps=config.norm_eps) + self.mlp = HGRN2MLP( + hidden_size=config.hidden_size, + hidden_ratio=config.hidden_ratio, + intermediate_size=config.intermediate_size, + hidden_act=config.hidden_act + ) + + def forward( + 
self, + hidden_states: torch.Tensor, + attention_mask: Optional[torch.Tensor] = None, + past_key_values: Optional[Tuple[List[torch.Tensor]]] = None, + use_cache: Optional[bool] = False, + output_attentions: Optional[bool] = False, + lower_bound: Optional[torch.Tensor] = None, + **kwargs, + ) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]: + residual = hidden_states + hidden_states = self.attn_norm(hidden_states) + hidden_states, attentions, past_key_values = self.attn( + hidden_states=hidden_states, + attention_mask=attention_mask, + past_key_values=past_key_values, + use_cache=use_cache, + output_attentions=output_attentions, + lower_bound=lower_bound + ) + hidden_states, residual = self.mlp_norm(hidden_states, residual, True) + hidden_states = self.mlp(hidden_states) + hidden_states = residual + hidden_states + + outputs = (hidden_states, attentions, past_key_values) + + return outputs + + +class HGRN2PreTrainedModel(PreTrainedModel): + + config_class = HGRN2Config + supports_gradient_checkpointing = True + _no_split_modules = ['HGRN2Block'] + + def __init__(self, *inputs, **kwargs): + super().__init__(*inputs, **kwargs) + + def _init_weights( + self, + module: nn.Module, + rescale_prenorm_residual: bool = True, + num_residuals_per_layer: int = 2, + ): + if isinstance(module, (nn.Linear, nn.Conv1d)): + # Slightly different from the TF version which uses truncated_normal for initialization + # cf https://github.com/pytorch/pytorch/pull/5617 + nn.init.normal_(module.weight, mean=0.0, std=self.config.initializer_range) + if module.bias is not None: + nn.init.zeros_(module.bias) + elif isinstance(module, nn.Embedding): + nn.init.normal_(module.weight, mean=0.0, std=self.config.initializer_range) + if module.padding_idx is not None: + module.weight.data[module.padding_idx].zero_() + + if rescale_prenorm_residual: + # Reinitialize selected weights subject to the OpenAI GPT-2 Paper Scheme: + # > A modified initialization which
accounts for the accumulation on the residual path with model depth. Scale + # > the weights of residual layers at initialization by a factor of 1/√N where N is the # of residual layers. + # > -- GPT-2 :: https://openai.com/blog/better-language-models/ + # + # Reference (Megatron-LM): https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/gpt_model.py + for name, p in module.named_parameters(): + if name in ["o_proj.weight", "down_proj.weight"]: + # Special Scaled Initialization --> There are 2 Layer Norms per Transformer Block + # Following Pytorch init, except scale by 1/sqrt(2 * n_layer) + # We need to reinit p since this code could be called multiple times + # Having just p *= scale would repeatedly scale it down + with torch.no_grad(): + p /= math.sqrt(num_residuals_per_layer * self.config.num_hidden_layers) + + +class HGRN2Model(HGRN2PreTrainedModel): + + def __init__(self, config: HGRN2Config): + super().__init__(config) + self.padding_idx = config.pad_token_id + self.vocab_size = config.vocab_size + + self.embeddings = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx) + if config.use_lower_bound: + self.lower_bounds = nn.Parameter(torch.zeros(config.num_hidden_layers, config.hidden_size)) + self.layers = nn.ModuleList([HGRN2Block(config, layer_idx) for layer_idx in range(config.num_hidden_layers)]) + self.norm = RMSNorm(config.hidden_size, eps=config.norm_eps) + + self.gradient_checkpointing = False + + self.post_init() + + def get_input_embeddings(self): + return self.embeddings + + def set_input_embeddings(self, value): + self.embeddings = value + + def forward( + self, + input_ids: Optional[torch.LongTensor] = None, + attention_mask: Optional[torch.Tensor] = None, # noqa + inputs_embeds: Optional[torch.FloatTensor] = None, + past_key_values: Optional[Tuple[List[torch.Tensor]]] = None, + use_cache: Optional[bool] = None, + output_attentions: Optional[bool] = None, + output_hidden_states: Optional[bool] = None, + return_dict: 
Optional[bool] = None + ) -> Union[Tuple, BaseModelOutputWithPast]: + if output_attentions: + warnings.warn("`HGRN2Model` does not `output_attentions` now, setting it to `False`.") + output_attentions = False + output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions + output_hidden_states = output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states + use_cache = use_cache if use_cache is not None else (self.config.use_cache if not self.training else False) + return_dict = return_dict if return_dict is not None else self.config.use_return_dict + + # retrieve input_ids and inputs_embeds + if input_ids is not None and inputs_embeds is not None: + raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") + elif input_ids is not None: + batch_size = input_ids.shape[0] + elif inputs_embeds is not None: + batch_size = inputs_embeds.shape[0] + else: + raise ValueError("You have to specify either input_ids or inputs_embeds") + + if inputs_embeds is None: + inputs_embeds = self.embeddings(input_ids) + hidden_states = inputs_embeds + + if use_cache: + if past_key_values is None: + past_key_values = [layer.attn.init_state(batch_size) for layer in self.layers] + if not isinstance(past_key_values, RecurrentCache): + past_key_values = RecurrentCache.from_legacy_cache(past_key_values) + + if self.gradient_checkpointing and self.training: + if use_cache: + logger.warning_once( + "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..." 
+ ) + use_cache = False + + all_hidden_states = () if output_hidden_states else None + all_attns = () if output_attentions else None + + if self.config.use_lower_bound: + lower_bounds = self.lower_bounds.softmax(0) + lower_bounds = lower_bounds.cumsum(0) - lower_bounds[0] + for i, layer in enumerate(self.layers): + if output_hidden_states: + all_hidden_states += (hidden_states,) + + lower_bound = lower_bounds[i] if self.config.use_lower_bound else None + if self.gradient_checkpointing and self.training: + hidden_states, attentions, past_key_values = self._gradient_checkpointing_func( + layer.__call__, + hidden_states, + attention_mask, + past_key_values, + use_cache, + output_attentions, + lower_bound + ) + else: + hidden_states, attentions, past_key_values = layer( + hidden_states, + attention_mask=attention_mask, + past_key_values=past_key_values, + use_cache=use_cache, + output_attentions=output_attentions, + lower_bound=lower_bound + ) + + if output_attentions: + all_attns += (attentions,) + + hidden_states = self.norm(hidden_states) + + # add hidden states from the last decoder layer + if output_hidden_states: + all_hidden_states += (hidden_states,) + + next_cache = None + if use_cache: + next_cache = past_key_values.to_legacy_cache() + if not return_dict: + return tuple(x for x in [hidden_states, next_cache, all_hidden_states, all_attns] if x is not None) + return BaseModelOutputWithPast( + last_hidden_state=hidden_states, + past_key_values=next_cache, + hidden_states=all_hidden_states, + attentions=all_attns + ) + + +class HGRN2ForCausalLM(HGRN2PreTrainedModel): + _tied_weights_keys = ["lm_head.weight"] + + def __init__(self, config): + super().__init__(config) + self.model = HGRN2Model(config) + self.vocab_size = config.vocab_size + self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False) + + # Initialize weights and apply final processing + self.post_init() + + def get_input_embeddings(self): + return self.model.embeddings + + def 
set_input_embeddings(self, value): + self.model.embeddings = value + + def get_output_embeddings(self): + return self.lm_head + + def set_output_embeddings(self, new_embeddings): + self.lm_head = new_embeddings + + def set_decoder(self, decoder): + self.model = decoder + + def get_decoder(self): + return self.model + + def generate(self, *args, **kwargs): + try: + return super().generate(*args, **kwargs) + except AttributeError as exception: + if 'past_key_values' in str(exception): + raise AttributeError( + f"You tried to call `generate` with a decoding strategy that manipulates `past_key_values`, " + f"which is not supported for {self.__class__.__name__}. " + f"Try another generation strategy instead. " + f"For the available generation strategies, check this doc: " + f"https://huggingface.co/docs/transformers/en/generation_strategies#decoding-strategies" + ) + else: + raise exception + + def prepare_inputs_for_generation( + self, + input_ids: torch.LongTensor = None, + past_key_values: Optional[Tuple[List[torch.Tensor]]] = None, + attention_mask: Optional[torch.Tensor] = None, + inputs_embeds: Optional[torch.Tensor] = None, + **kwargs + ): + # only last token for `inputs_ids` if the `past_key_values` is passed along. + if past_key_values is not None: + if not isinstance(past_key_values, RecurrentCache): + past_key_values = RecurrentCache.from_legacy_cache(past_key_values, input_ids.shape[1] - 1) + input_ids, attention_mask = input_ids[:, -1:], attention_mask[:, -1:] + # if `inputs_embeds` are passed, we only want to use them in the 1st generation step + if inputs_embeds is not None and past_key_values is None: + model_inputs = {'inputs_embeds': inputs_embeds} + else: + # The `contiguous()` here is necessary to have a static stride during decoding. torchdynamo otherwise + # recompiles graphs as the stride of the inputs is a guard. + # Ref: https://github.com/huggingface/transformers/pull/29114 + # TODO: use `next_tokens` directly instead. 
+ model_inputs = {'input_ids': input_ids.contiguous()} + + model_inputs.update({ + 'past_key_values': past_key_values, + 'use_cache': kwargs.get('use_cache'), + 'attention_mask': attention_mask, + }) + return model_inputs + + def forward( + self, + input_ids: torch.LongTensor = None, + attention_mask: Optional[torch.Tensor] = None, + inputs_embeds: Optional[torch.Tensor] = None, + past_key_values: Optional[Tuple[List[torch.Tensor]]] = None, + labels: Optional[torch.LongTensor] = None, + use_cache: Optional[bool] = None, + output_attentions: Optional[bool] = None, + output_hidden_states: Optional[bool] = None, + return_dict: Optional[bool] = None, + ) -> Union[Tuple, CausalLMOutputWithPast]: + output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions + output_hidden_states = ( + output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states + ) + return_dict = return_dict if return_dict is not None else self.config.use_return_dict + + outputs = self.model( + input_ids=input_ids, + attention_mask=attention_mask, + inputs_embeds=inputs_embeds, + past_key_values=past_key_values, + use_cache=use_cache, + output_attentions=output_attentions, + output_hidden_states=output_hidden_states, + return_dict=return_dict + ) + + hidden_states = outputs[0] + logits = self.lm_head(hidden_states) + + loss = None + if labels is not None: + if self.config.fuse_cross_entropy: + loss_fct = FusedCrossEntropyLoss(inplace_backward=True) + else: + loss_fct = nn.CrossEntropyLoss() + # Enable model parallelism + labels = labels.to(logits.device) + labels = torch.cat((labels[..., 1:], torch.full_like(labels[:, :1], loss_fct.ignore_index)), 1) + loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1)) + + if not return_dict: + output = (logits,) + outputs[1:] + return (loss,) + output if loss is not None else output + + return CausalLMOutputWithPast( + loss=loss, + logits=logits, + 
past_key_values=outputs.past_key_values, + hidden_states=outputs.hidden_states, + attentions=outputs.attentions, + ) diff --git a/fla/models/linear_attn/__init__.py b/fla/models/linear_attn/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..72d5d022de95afe9dc6cf76d3c2026a6a7f9e7a0 --- /dev/null +++ b/fla/models/linear_attn/__init__.py @@ -0,0 +1,14 @@ +# -*- coding: utf-8 -*- + +from transformers import AutoConfig, AutoModel, AutoModelForCausalLM + +from fla.models.linear_attn.configuration_linear_attn import \ + LinearAttentionConfig +from fla.models.linear_attn.modeling_linear_attn import ( + LinearAttentionForCausalLM, LinearAttentionModel) + +AutoConfig.register(LinearAttentionConfig.model_type, LinearAttentionConfig) +AutoModel.register(LinearAttentionConfig, LinearAttentionModel) +AutoModelForCausalLM.register(LinearAttentionConfig, LinearAttentionForCausalLM) + +__all__ = ['LinearAttentionConfig', 'LinearAttentionForCausalLM', 'LinearAttentionModel'] diff --git a/fla/models/linear_attn/configuration_linear_attn.py b/fla/models/linear_attn/configuration_linear_attn.py new file mode 100644 index 0000000000000000000000000000000000000000..35d6d209056eb9d8aa9ea72e7856aba83acd8291 --- /dev/null +++ b/fla/models/linear_attn/configuration_linear_attn.py @@ -0,0 +1,70 @@ +# -*- coding: utf-8 -*- + +from typing import Optional + +from transformers.configuration_utils import PretrainedConfig + + +class LinearAttentionConfig(PretrainedConfig): + + model_type = 'linear_attn' + keys_to_ignore_at_inference = ['past_key_values'] + + def __init__( + self, + vocab_size: int = 32000, + hidden_size: int = 2048, + expand_k: int = 1, + expand_v: int = 1, + hidden_ratio: Optional[int] = 4, + intermediate_size: Optional[int] = None, + num_hidden_layers: int = 24, + num_heads: int = 4, + attn_mode: str = "fused_chunk", + feature_map: str = "elementwise_product", + tie_feature_map_qk: bool = False, + norm_q: bool = False, + norm_k: bool = False, + 
norm_feature_map: bool = False, + hidden_act: str = "swish", + max_position_embeddings: int = 2048, + elementwise_affine: Optional[bool] = True, + norm_eps: float = 1e-6, + use_cache: bool = True, + pad_token_id: int = None, + bos_token_id: int = 1, + eos_token_id: int = 2, + tie_word_embeddings: bool = False, + initializer_range: float = 0.02, + fuse_cross_entropy: bool = True, + **kwargs + ): + self.vocab_size = vocab_size + self.max_position_embeddings = max_position_embeddings + self.hidden_size = hidden_size + self.expand_k = expand_k + self.expand_v = expand_v + self.hidden_ratio = hidden_ratio + self.intermediate_size = intermediate_size + self.num_hidden_layers = num_hidden_layers + self.num_heads = num_heads + self.attn_mode = attn_mode + self.feature_map = feature_map + self.tie_feature_map_qk = tie_feature_map_qk + self.norm_q = norm_q + self.norm_k = norm_k + self.norm_feature_map = norm_feature_map + self.hidden_act = hidden_act + self.elementwise_affine = elementwise_affine + self.norm_eps = norm_eps + self.use_cache = use_cache + self.initializer_range = initializer_range + self.fuse_cross_entropy = fuse_cross_entropy + + super().__init__( + pad_token_id=pad_token_id, + bos_token_id=bos_token_id, + eos_token_id=eos_token_id, + tie_word_embeddings=tie_word_embeddings, + **kwargs, + ) diff --git a/fla/models/linear_attn/modeling_linear_attn.py b/fla/models/linear_attn/modeling_linear_attn.py new file mode 100644 index 0000000000000000000000000000000000000000..cfcc5c050406d87c765de3c182604d4a57ccecf3 --- /dev/null +++ b/fla/models/linear_attn/modeling_linear_attn.py @@ -0,0 +1,424 @@ +# -*- coding: utf-8 -*- + +from __future__ import annotations + +import math +import warnings +from typing import List, Optional, Tuple, Union + +import torch +import torch.nn as nn +import torch.utils.checkpoint +from transformers.activations import ACT2FN +from transformers.cache_utils import Cache, DynamicCache +from transformers.modeling_outputs import 
(BaseModelOutputWithPast, + CausalLMOutputWithPast) +from transformers.modeling_utils import PreTrainedModel +from transformers.utils import logging + +from fla.layers.linear_attn import LinearAttention +from fla.models.linear_attn.configuration_linear_attn import \ + LinearAttentionConfig +from fla.modules import FusedCrossEntropyLoss, RMSNorm +from fla.modules.activations import swiglu_linear + +logger = logging.get_logger(__name__) + + +class LinearAttentionMLP(nn.Module): + def __init__( + self, + hidden_size: int, + hidden_ratio: Optional[int] = None, + intermediate_size: Optional[int] = None, + hidden_act: str = 'swish' + ) -> LinearAttentionMLP: + super().__init__() + + self.hidden_size = hidden_size + # the final number of params is `hidden_ratio * hidden_size^2` + # `intermediate_size` is chosen to be a multiple of 256 closest to `2/3 * hidden_size * hidden_ratio` + if hidden_ratio is None: + hidden_ratio = 4 + if intermediate_size is None: + intermediate_size = int(hidden_size * hidden_ratio * 2 / 3) + intermediate_size = 256 * ((intermediate_size + 256 - 1) // 256) + self.hidden_ratio = hidden_ratio + self.intermediate_size = intermediate_size + + self.gate_proj = nn.Linear(self.hidden_size, self.intermediate_size * 2, bias=False) + self.down_proj = nn.Linear(self.intermediate_size, self.hidden_size, bias=False) + self.act_fn = ACT2FN[hidden_act] + + def forward(self, x): + y = self.gate_proj(x) + gate, y = y.chunk(2, -1) + return swiglu_linear(gate, y, self.down_proj.weight, self.down_proj.bias) + + +class LinearAttentionBlock(nn.Module): + def __init__(self, config: LinearAttentionConfig, layer_idx: int): + super().__init__() + self.hidden_size = config.hidden_size + + self.attn_norm = RMSNorm(hidden_size=config.hidden_size, eps=config.norm_eps) + self.attn = LinearAttention( + hidden_size=config.hidden_size, + expand_k=config.expand_k, + expand_v=config.expand_v, + num_heads=config.num_heads, + mode=config.attn_mode, + feature_map=config.feature_map, 
+ tie_feature_map_qk=config.tie_feature_map_qk, + norm_q=config.norm_q, + norm_k=config.norm_k, + do_feature_map_norm=config.norm_feature_map, + elementwise_affine=config.elementwise_affine, + norm_eps=config.norm_eps, + layer_idx=layer_idx + ) + self.mlp_norm = RMSNorm(hidden_size=config.hidden_size, eps=config.norm_eps) + self.mlp = LinearAttentionMLP( + hidden_size=config.hidden_size, + hidden_ratio=config.hidden_ratio, + intermediate_size=config.intermediate_size, + hidden_act=config.hidden_act + ) + + def forward( + self, + hidden_states: torch.Tensor, + attention_mask: Optional[torch.Tensor] = None, + position_ids: Optional[torch.LongTensor] = None, + past_key_value: Optional[Tuple[torch.Tensor]] = None, + output_attentions: Optional[bool] = False, + use_cache: Optional[bool] = False, + **kwargs, + ) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]: + + residual = hidden_states + # currently not supported + attn_weights, present_key_value = None, None + + hidden_states = self.attn_norm(hidden_states) + hidden_states = self.attn(hidden_states) + hidden_states, residual = self.mlp_norm(hidden_states, residual, True) + hidden_states = self.mlp(hidden_states) + hidden_states = residual + hidden_states + + outputs = (hidden_states,) + + if output_attentions: + outputs += (attn_weights,) + + if use_cache: + outputs += (present_key_value,) + + return outputs + + +class LinearAttentionPreTrainedModel(PreTrainedModel): + config_class = LinearAttentionConfig + supports_gradient_checkpointing = True + _no_split_modules = ['LinearAttentionBlock'] + + def __init__(self, *inputs, **kwargs): + super().__init__(*inputs, **kwargs) + + def _init_weights( + self, + module: nn.Module, + rescale_prenorm_residual: bool = True, + num_residuals_per_layer: int = 2, + ): + if isinstance(module, (nn.Linear, nn.Conv1d)): + # Slightly different from the TF version which uses truncated_normal for initialization + # cf 
https://github.com/pytorch/pytorch/pull/5617 + nn.init.normal_(module.weight, mean=0.0, std=self.config.initializer_range) + if module.bias is not None: + nn.init.zeros_(module.bias) + elif isinstance(module, nn.Embedding): + nn.init.normal_(module.weight, mean=0.0, std=self.config.initializer_range) + if module.padding_idx is not None: + module.weight.data[module.padding_idx].zero_() + + if rescale_prenorm_residual: + # Reinitialize selected weights subject to the OpenAI GPT-2 Paper Scheme: + # > A modified initialization which accounts for the accumulation on the residual path with model depth. Scale + # > the weights of residual layers at initialization by a factor of 1/√N where N is the # of residual layers. + # > -- GPT-2 :: https://openai.com/blog/better-language-models/ + # + # Reference (Megatron-LM): https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/gpt_model.py + for name, p in module.named_parameters(): + if name in ["o_proj.weight", "down_proj.weight"]: + # Special Scaled Initialization --> There are 2 Layer Norms per Transformer Block + # Following Pytorch init, except scale by 1/sqrt(2 * n_layer) + # We need to reinit p since this code could be called multiple times + # Having just p *= scale would repeatedly scale it down + with torch.no_grad(): + p /= math.sqrt(num_residuals_per_layer * self.config.num_hidden_layers) + + +class LinearAttentionModel(LinearAttentionPreTrainedModel): + + def __init__(self, config: LinearAttentionConfig): + super().__init__(config) + self.padding_idx = config.pad_token_id + self.vocab_size = config.vocab_size + + self.embeddings = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx) + self.layers = nn.ModuleList( + [LinearAttentionBlock(config, layer_idx) for layer_idx in range(config.num_hidden_layers)] + ) + self.norm = RMSNorm(config.hidden_size, eps=config.norm_eps) + + self.gradient_checkpointing = False + + self.post_init() + + def get_input_embeddings(self): + return self.embeddings 
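The `intermediate_size` rule in `LinearAttentionMLP` above (take `2/3 * hidden_size * hidden_ratio`, then round up to a multiple of 256) can be sketched as a standalone helper. `round_intermediate_size` is an illustrative name, not part of this diff:

```python
def round_intermediate_size(hidden_size: int, hidden_ratio: int = 4) -> int:
    # Mirrors LinearAttentionMLP: start from 2/3 * hidden_size * hidden_ratio,
    # then round up to the next multiple of 256.
    intermediate_size = int(hidden_size * hidden_ratio * 2 / 3)
    return 256 * ((intermediate_size + 256 - 1) // 256)
```

For `hidden_size=768` the target `2/3 * 768 * 4 = 2048` is already a multiple of 256 and is kept as-is; for `hidden_size=2048` the target `5461` is rounded up to `5632`.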
+ + def set_input_embeddings(self, value): + self.embeddings = value + + def forward( + self, + input_ids: torch.LongTensor = None, + attention_mask: Optional[torch.Tensor] = None, + position_ids: Optional[torch.LongTensor] = None, + past_key_values: Optional[List[torch.FloatTensor]] = None, + inputs_embeds: Optional[torch.FloatTensor] = None, + use_cache: Optional[bool] = None, + output_attentions: Optional[bool] = None, + output_hidden_states: Optional[bool] = None, + return_dict: Optional[bool] = None, + ) -> Union[Tuple, BaseModelOutputWithPast]: + if output_attentions: + warnings.warn( + "`LinearAttentionModel` does not support output attention weights now, " + "so `output_attentions` is set to `False`." + ) + output_attentions = False + output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions + output_hidden_states = ( + output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states + ) + use_cache = use_cache if use_cache is not None else (self.config.use_cache if not self.training else False) + return_dict = return_dict if return_dict is not None else self.config.use_return_dict + + # retrieve input_ids and inputs_embeds + if input_ids is not None and inputs_embeds is not None: + raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") + elif input_ids is not None: + _, seq_length = input_ids.shape[:2] + elif inputs_embeds is not None: + _, seq_length = inputs_embeds.shape[:2] + else: + raise ValueError("You have to specify either input_ids or inputs_embeds") + + past_key_values_length = 0 + if use_cache: + use_legacy_cache = not isinstance(past_key_values, Cache) + if use_legacy_cache: + past_key_values = DynamicCache.from_legacy_cache(past_key_values) + past_key_values_length = past_key_values.get_usable_length(seq_length) + + if position_ids is None: + device = input_ids.device if input_ids is not None else inputs_embeds.device + position_ids 
= torch.arange( + past_key_values_length, seq_length + past_key_values_length, dtype=torch.long, device=device + ) + position_ids = position_ids.unsqueeze(0) + + if inputs_embeds is None: + inputs_embeds = self.embeddings(input_ids) + + # embed positions + hidden_states = inputs_embeds + + if self.gradient_checkpointing and self.training: + if use_cache: + logger.warning_once( + "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..." + ) + use_cache = False + + # decoder layers + all_hidden_states = () if output_hidden_states else None + all_self_attns = () if output_attentions else None + next_decoder_cache = None + + for decoder_layer in self.layers: + if output_hidden_states: + all_hidden_states += (hidden_states,) + + if self.gradient_checkpointing and self.training: + layer_outputs = self._gradient_checkpointing_func( + decoder_layer.__call__, + hidden_states, + attention_mask, + position_ids, + past_key_values, + output_attentions, + use_cache, + ) + else: + layer_outputs = decoder_layer( + hidden_states, + attention_mask=attention_mask, + position_ids=position_ids, + past_key_value=past_key_values, + output_attentions=output_attentions, + use_cache=use_cache, + ) + + hidden_states = layer_outputs[0] + + if use_cache: + next_decoder_cache = layer_outputs[2 if output_attentions else 1] + + if output_attentions: + all_self_attns += (layer_outputs[1],) + + hidden_states = self.norm(hidden_states) + + # add hidden states from the last decoder layer + if output_hidden_states: + all_hidden_states += (hidden_states,) + + next_cache = None + if use_cache: + next_cache = next_decoder_cache.to_legacy_cache() if use_legacy_cache else next_decoder_cache + if not return_dict: + return tuple(v for v in [hidden_states, next_cache, all_hidden_states, all_self_attns] if v is not None) + return BaseModelOutputWithPast( + last_hidden_state=hidden_states, + past_key_values=next_cache, + hidden_states=all_hidden_states, + 
attentions=all_self_attns, + ) + + +class LinearAttentionForCausalLM(LinearAttentionPreTrainedModel): + _tied_weights_keys = ["lm_head.weight"] + + def __init__(self, config): + super().__init__(config) + self.model = LinearAttentionModel(config) + self.vocab_size = config.vocab_size + self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False) + + # Initialize weights and apply final processing + self.post_init() + + def get_input_embeddings(self): + return self.model.embeddings + + def set_input_embeddings(self, value): + self.model.embeddings = value + + def get_output_embeddings(self): + return self.lm_head + + def set_output_embeddings(self, new_embeddings): + self.lm_head = new_embeddings + + def set_decoder(self, decoder): + self.model = decoder + + def get_decoder(self): + return self.model + + def generate(self, *args, **kwargs): + try: + return super().generate(*args, **kwargs) + except AttributeError as exc: + # Expected exception: "AttributeError: '(object name)' object has no attribute 'past_key_values'" + if 'past_key_values' in str(exc): + raise AttributeError( + f"You tried to call `generate` with a decoding strategy that manipulates `past_key_values`, " + f"which is not supported for {self.__class__.__name__}. " + f"Try another generation strategy instead. " + f"For the available generation strategies, check this doc: " + f"https://huggingface.co/docs/transformers/en/generation_strategies#decoding-strategies" + ) + else: + raise exc + + def prepare_inputs_for_generation( + self, + input_ids: torch.LongTensor = None, + state: Optional[torch.Tensor] = None, + inputs_embeds: Optional[torch.FloatTensor] = None, + **kwargs + ): + # only last token for inputs_ids if the state is passed along. 
+ if state is not None: + input_ids = input_ids[:, -1].unsqueeze(-1) + + # if `inputs_embeds` are passed, we only want to use them in the 1st generation step + if inputs_embeds is not None and state is None: + model_inputs = {"inputs_embeds": inputs_embeds} + else: + model_inputs = {"input_ids": input_ids} + model_inputs["state"] = state + return model_inputs + + def forward( + self, + input_ids: torch.LongTensor = None, + attention_mask: Optional[torch.Tensor] = None, + position_ids: Optional[torch.LongTensor] = None, + past_key_values: Optional[List[torch.FloatTensor]] = None, + inputs_embeds: Optional[torch.FloatTensor] = None, + labels: Optional[torch.LongTensor] = None, + use_cache: Optional[bool] = None, + output_attentions: Optional[bool] = None, + output_hidden_states: Optional[bool] = None, + return_dict: Optional[bool] = None, + ) -> Union[Tuple, CausalLMOutputWithPast]: + output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions + output_hidden_states = ( + output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states + ) + return_dict = return_dict if return_dict is not None else self.config.use_return_dict + + # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn) + outputs = self.model( + input_ids=input_ids, + attention_mask=attention_mask, + position_ids=position_ids, + past_key_values=past_key_values, + inputs_embeds=inputs_embeds, + use_cache=use_cache, + output_attentions=output_attentions, + output_hidden_states=output_hidden_states, + return_dict=return_dict, + ) + + hidden_states = outputs[0] + logits = self.lm_head(hidden_states) + + loss = None + if labels is not None: + if self.config.fuse_cross_entropy: + loss_fct = FusedCrossEntropyLoss(inplace_backward=True) + else: + loss_fct = nn.CrossEntropyLoss() + # Enable model parallelism + labels = labels.to(logits.device) + labels = torch.cat((labels[..., 1:], torch.full_like(labels[:, :1], 
loss_fct.ignore_index)), 1) + loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1)) + + if not return_dict: + output = (logits,) + outputs[1:] + return (loss,) + output if loss is not None else output + + return CausalLMOutputWithPast( + loss=loss, + logits=logits, + past_key_values=outputs.past_key_values, + hidden_states=outputs.hidden_states, + attentions=outputs.attentions, + ) diff --git a/fla/models/mamba/__init__.py b/fla/models/mamba/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..a0eff2ea26f3a11bcf2333002509686eca2289aa --- /dev/null +++ b/fla/models/mamba/__init__.py @@ -0,0 +1,14 @@ +# -*- coding: utf-8 -*- + +from transformers import AutoConfig, AutoModel, AutoModelForCausalLM + +from fla.models.mamba.configuration_mamba import MambaConfig +from fla.models.mamba.modeling_mamba import (MambaBlock, MambaForCausalLM, + MambaModel) + +AutoConfig.register(MambaConfig.model_type, MambaConfig, True) +AutoModel.register(MambaConfig, MambaModel, True) +AutoModelForCausalLM.register(MambaConfig, MambaForCausalLM, True) + + +__all__ = ['MambaConfig', 'MambaForCausalLM', 'MambaModel', 'MambaBlock'] diff --git a/fla/models/mamba/configuration_mamba.py b/fla/models/mamba/configuration_mamba.py new file mode 100644 index 0000000000000000000000000000000000000000..0467c05ec4014d795a4597c570810c8ac7d52951 --- /dev/null +++ b/fla/models/mamba/configuration_mamba.py @@ -0,0 +1,156 @@ +# coding=utf-8 +# Copyright 2024 The HuggingFace Inc. team. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. +"""MAMBA configuration""" + +import math + +from transformers.configuration_utils import PretrainedConfig + + +class MambaConfig(PretrainedConfig): + """ + This is the configuration class to store the configuration of a [`MambaModel`]. It is used to instantiate a MAMBA + model according to the specified arguments, defining the model architecture. Instantiating a configuration with the + defaults will yield a similar configuration to that of the MAMBA + [state-spaces/mamba-2.8b](https://huggingface.co/state-spaces/mamba-2.8b) architecture. + + Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the + documentation from [`PretrainedConfig`] for more information. + + + Args: + vocab_size (`int`, *optional*, defaults to 50280): + Vocabulary size of the MAMBA model. Defines the number of different tokens that can be represented by the + `inputs_ids` passed when calling [`MambaModel`]. + hidden_size (`int`, *optional*, defaults to 768): + Dimensionality of the embeddings and hidden states. + state_size (`int`, *optional*, defaults to 16): shape of the state space latents. + num_hidden_layers (`int`, *optional*, defaults to 32): + Number of hidden layers in the model. + layer_norm_epsilon (`float`, *optional*, defaults to 1e-05): + The epsilon to use in the layer normalization layers. + pad_token_id (`int`, *optional*, defaults to 0): + Padding token id. + bos_token_id (`int`, *optional*, defaults to 0): + The id of the beginning of sentence token in the vocabulary. + eos_token_id (`int`, *optional*, defaults to 0): + The id of the end of sentence token in the vocabulary. + expand (`int`, *optional*, defaults to 2): Expanding factor used to determine the intermediate size. + conv_kernel (`int`, *optional*, defaults to 4): Size of the convolution kernel. 
+        use_bias (`bool`, *optional*, defaults to `False`):
+            Whether or not to use bias in ["in_proj", "out_proj"] of the mixer block.
+        use_conv_bias (`bool`, *optional*, defaults to `True`):
+            Whether or not to use bias in the convolution layer of the mixer block.
+        hidden_act (`str`, *optional*, defaults to `"silu"`):
+            The non-linear activation function (function or string) in the decoder.
+        initializer_range (`float`, *optional*, defaults to 0.1):
+            The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
+        residual_in_fp32 (`bool`, *optional*, defaults to `True`):
+            Whether or not residuals should be in `float32`.
+            If set to `False` residuals will keep the same `dtype` as the rest of the model.
+        time_step_rank (`Union[int, str]`, *optional*, defaults to `"auto"`):
+            Rank of the discretization projection matrix.
+            `"auto"` means that it will default to `math.ceil(self.hidden_size / 16)`.
+        time_step_scale (`float`, *optional*, defaults to 1.0):
+            Scale used to scale `dt_proj.bias`.
+        time_step_min (`float`, *optional*, defaults to 0.001):
+            Minimum `time_step` used to bound `dt_proj.bias`.
+        time_step_max (`float`, *optional*, defaults to 0.1):
+            Maximum `time_step` used to bound `dt_proj.bias`.
+        time_step_init_scheme (`str`, *optional*, defaults to `"random"`):
+            Init scheme used for `dt_proj.weight`. Should be one of `["random", "uniform"]`.
+        time_step_floor (`float`, *optional*, defaults to 0.0001):
+            Minimum clamping value of the `dt_proj.bias` layer initialization.
+        rescale_prenorm_residual (`bool`, *optional*, defaults to `False`):
+            Whether or not to rescale `out_proj` weights when initializing.
+        use_cache (`bool`, *optional*, defaults to `True`):
+            Whether or not the cache should be used.
+ + + Example: + + ```python + >>> from transformers import MambaConfig, MambaModel + + >>> # Initializing a Mamba configuration + >>> configuration = MambaConfig() + + >>> # Initializing a model (with random weights) from the configuration + >>> model = MambaModel(configuration) + + >>> # Accessing the model configuration + >>> configuration = model.config + ```""" + + model_type = "mamba" + + def __init__( + self, + vocab_size=32000, + hidden_size=2048, + state_size=16, + num_hidden_layers=48, + layer_norm_epsilon=1e-5, + pad_token_id= 0, + bos_token_id= 1, + eos_token_id= 2, + expand=2, + conv_kernel=4, + use_bias=False, + use_conv_bias=True, + hidden_act="silu", + initializer_range=0.1, + residual_in_fp32=False, + time_step_rank="auto", + time_step_scale=1.0, + time_step_min=0.001, + time_step_max=0.1, + time_step_init_scheme="random", + time_step_floor=1e-4, + rescale_prenorm_residual=False, + use_cache=True, + fuse_norm: bool = True, + fuse_cross_entropy: bool = True, + tie_word_embeddings: bool = False, + **kwargs, + ): + self.vocab_size = vocab_size + self.hidden_size = hidden_size + self.state_size = state_size + self.num_hidden_layers = num_hidden_layers + self.layer_norm_epsilon = layer_norm_epsilon + self.conv_kernel = conv_kernel + self.expand = expand + self.intermediate_size = int(expand * self.hidden_size) + self.bos_token_id = bos_token_id + self.eos_token_id = eos_token_id + self.pad_token_id = pad_token_id + self.use_bias = use_bias + self.use_conv_bias = use_conv_bias + self.hidden_act = hidden_act + self.initializer_range = initializer_range + self.time_step_rank = math.ceil(self.hidden_size / 16) if time_step_rank == "auto" else time_step_rank + self.time_step_scale = time_step_scale + self.time_step_min = time_step_min + self.time_step_max = time_step_max + self.time_step_init_scheme = time_step_init_scheme + self.time_step_floor = time_step_floor + self.rescale_prenorm_residual = rescale_prenorm_residual + self.residual_in_fp32 = 
residual_in_fp32 + self.use_cache = use_cache + self.fuse_cross_entropy = fuse_cross_entropy + self.fuse_norm = fuse_norm + + super().__init__(bos_token_id=bos_token_id, eos_token_id=eos_token_id, pad_token_id=pad_token_id, tie_word_embeddings=tie_word_embeddings, **kwargs) diff --git a/fla/models/mamba/modeling_mamba.py b/fla/models/mamba/modeling_mamba.py new file mode 100644 index 0000000000000000000000000000000000000000..5b9760f29e8b38be88ad3014b88f13d6ab12836e --- /dev/null +++ b/fla/models/mamba/modeling_mamba.py @@ -0,0 +1,605 @@ +# coding=utf-8 +# Copyright 2024 state-spaces/mamba org and HuggingFace Inc. team. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
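The derived sizes computed in `MambaConfig.__init__` above can be checked in isolation: the SSM inner width is `expand * hidden_size`, and a `time_step_rank` of `"auto"` resolves to `math.ceil(hidden_size / 16)`. A minimal sketch (`resolve_mamba_dims` is an illustrative helper, not part of this diff):

```python
import math


def resolve_mamba_dims(hidden_size: int, expand: int = 2, time_step_rank="auto"):
    # Mirrors MambaConfig: inner width is expand * hidden_size, and the
    # "auto" dt-rank heuristic resolves to ceil(hidden_size / 16).
    intermediate_size = int(expand * hidden_size)
    if time_step_rank == "auto":
        time_step_rank = math.ceil(hidden_size / 16)
    return intermediate_size, time_step_rank
```

With the defaults in this file (`hidden_size=2048`, `expand=2`), this gives an intermediate size of 4096 and a dt-rank of 128.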
+"""PyTorch MAMBA model.""" + +import math +from dataclasses import dataclass +from typing import Any, Dict, Optional, Tuple, Union + +import torch +import torch.utils.checkpoint +from torch import nn +from transformers.activations import ACT2FN +from transformers.modeling_utils import PreTrainedModel +from transformers.utils import ModelOutput, logging + +from fla.models.mamba.configuration_mamba import MambaConfig +from fla.modules import FusedCrossEntropyLoss, RMSNorm + +logger = logging.get_logger(__name__) + +try: + from mamba_ssm.ops.selective_scan_interface import (mamba_inner_fn, + selective_scan_fn) + from mamba_ssm.ops.triton.selective_state_update import \ + selective_state_update +except ImportError: + selective_state_update, selective_scan_fn, mamba_inner_fn = None, None, None + +try: + from causal_conv1d import causal_conv1d_fn, causal_conv1d_update +except ImportError: + causal_conv1d_update, causal_conv1d_fn = None, None + +is_fast_path_available = all( + (selective_state_update, selective_scan_fn, causal_conv1d_fn, causal_conv1d_update, mamba_inner_fn) +) + + +class MambaCache: + def __init__(self, config, batch_size, dtype=torch.float16, device=None): + self.seqlen_offset = 0 + self.dtype = dtype + intermediate_size = config.intermediate_size + ssm_state_size = config.state_size + conv_kernel_size = config.conv_kernel + + self.conv_states = { + i: torch.zeros(batch_size, intermediate_size, conv_kernel_size, device=device, dtype=dtype) + for i in range(config.num_hidden_layers) + } + self.ssm_states = { + i: torch.zeros(batch_size, intermediate_size, ssm_state_size, device=device, dtype=dtype) + for i in range(config.num_hidden_layers) + } + + +class MambaMixer(nn.Module): + """ + Compute ∆, A, B, C, and D the state space parameters and compute the `contextualized_states`. 
+ A, D are input independent (see Mamba paper [1] Section 3.5.2 "Interpretation of A" for why A isn't selective) + ∆, B, C are input-dependent (this is a key difference between Mamba and the linear time invariant S4, + and is why Mamba is called **selective** state spaces) + """ + + def __init__(self, config, layer_idx): + super().__init__() + self.hidden_size = config.hidden_size + self.ssm_state_size = config.state_size + self.conv_kernel_size = config.conv_kernel + self.intermediate_size = config.intermediate_size + self.time_step_rank = config.time_step_rank + self.layer_idx = layer_idx + self.use_conv_bias = config.use_conv_bias + self.conv1d = nn.Conv1d( + in_channels=self.intermediate_size, + out_channels=self.intermediate_size, + bias=config.use_conv_bias, + kernel_size=config.conv_kernel, + groups=self.intermediate_size, + padding=config.conv_kernel - 1, + ) + + self.activation = config.hidden_act + self.act = ACT2FN[config.hidden_act] + + # projection of the input hidden states + self.in_proj = nn.Linear(self.hidden_size, self.intermediate_size * 2, bias=config.use_bias) + # selective projection used to make dt, B and C input dependant + self.x_proj = nn.Linear(self.intermediate_size, self.time_step_rank + self.ssm_state_size * 2, bias=False) + # time step projection (discretization) + self.dt_proj = nn.Linear(self.time_step_rank, self.intermediate_size, bias=True) + + # S4D real initialization. These are not discretized! + # The core is to load them, compute the discrete states, then write the updated state. 
Keeps the memory bounded + A = torch.arange(1, self.ssm_state_size + 1, dtype=torch.float32)[None, :] + A = A.expand(self.intermediate_size, -1).contiguous() + + self.A_log = nn.Parameter(torch.log(A)) + self.D = nn.Parameter(torch.ones(self.intermediate_size)) + self.out_proj = nn.Linear(self.intermediate_size, self.hidden_size, bias=config.use_bias) + self.use_bias = config.use_bias + + if not is_fast_path_available: + logger.warning_once( + "The fast path is not available because on of " + "`(selective_state_update, selective_scan_fn, causal_conv1d_fn, causal_conv1d_update, mamba_inner_fn)`" + " is None. Falling back to the naive implementation. " + "To install follow https://github.com/state-spaces/mamba/#installation and" + " https://github.com/Dao-AILab/causal-conv1d" + ) + + def cuda_kernels_forward(self, hidden_states: torch.Tensor, cache_params: Optional[MambaCache] = None): + # 1. Gated MLP's linear projection + projected_states = self.in_proj(hidden_states).transpose(1, 2) + + if self.training and cache_params is None: # Doesn't support outputting the states -> used for training + contextualized_states = mamba_inner_fn( + projected_states, + self.conv1d.weight, + self.conv1d.bias if self.use_conv_bias else None, + self.x_proj.weight, + self.dt_proj.weight, + self.out_proj.weight, + self.out_proj.bias.float() if self.use_bias else None, + -torch.exp(self.A_log.float()), + None, # input-dependent B + None, # input-dependent C + self.D.float(), + delta_bias=self.dt_proj.bias.float(), + delta_softplus=True, + ) + + else: + hidden_states, gate = projected_states.chunk(2, dim=1) + + # 2. 
Convolution sequence transformation + conv_weights = self.conv1d.weight.view(self.conv1d.weight.size(0), self.conv1d.weight.size(2)) + if cache_params is not None and cache_params.seqlen_offset > 0: + hidden_states = causal_conv1d_update( + hidden_states.squeeze(-1), + cache_params.conv_states[self.layer_idx], + conv_weights, + self.conv1d.bias, + self.activation, + ) + hidden_states = hidden_states.unsqueeze(-1) + else: + if cache_params is not None: + conv_states = nn.functional.pad( + hidden_states, (self.conv_kernel_size - hidden_states.shape[-1], 0) + ) + cache_params.conv_states[self.layer_idx].copy_(conv_states) + hidden_states = causal_conv1d_fn( + hidden_states, conv_weights, self.conv1d.bias, activation=self.activation + ) + + # 3. State Space Model sequence transformation + # 3.a. input varying initialization of time_step, B and C + ssm_parameters = self.x_proj(hidden_states.transpose(1, 2)) + time_step, B, C = torch.split( + ssm_parameters, [self.time_step_rank, self.ssm_state_size, self.ssm_state_size], dim=-1 + ) + discrete_time_step = self.dt_proj.weight @ time_step.transpose(1, 2) + + A = -torch.exp(self.A_log.float()) + # 3.c perform the recurrence y ← SSM(A, B, C)(x) + time_proj_bias = self.dt_proj.bias.float() if hasattr(self.dt_proj, "bias") else None + if cache_params is not None and cache_params.seqlen_offset > 0: + scan_outputs = selective_state_update( + cache_params.ssm_states[self.layer_idx], + hidden_states[..., 0], + discrete_time_step[..., 0], + A, + B[:, 0], + C[:, 0], + self.D, + gate[..., 0], + time_proj_bias, + dt_softplus=True, + ).unsqueeze(-1) + else: + scan_outputs, ssm_state = selective_scan_fn( + hidden_states, + discrete_time_step, + A, + B.transpose(1, 2), + C.transpose(1, 2), + self.D.float(), + gate, + time_proj_bias, + delta_softplus=True, + return_last_state=True, + ) + if ssm_state is not None and cache_params is not None: + cache_params.ssm_states[self.layer_idx].copy_(ssm_state) + + # 4. 
Final linear projection + contextualized_states = self.out_proj(scan_outputs.transpose(1, 2)) + return contextualized_states + + # fmt: off + def slow_forward(self, input_states, cache_params: Optional[MambaCache] = None): + batch_size, seq_len, _ = input_states.shape + dtype = input_states.dtype + # 1. Gated MLP's linear projection + # [batch, 2 * intermediate_size, seq_len] + projected_states = self.in_proj(input_states).transpose(1, 2) + hidden_states, gate = projected_states.chunk(2, dim=1) + + # 2. Convolution sequence transformation + if cache_params is not None: + ssm_state = cache_params.ssm_states[self.layer_idx].clone() + if cache_params.seqlen_offset > 0: + # [batch, intermediate_size, conv_kernel_size] + conv_state = cache_params.conv_states[self.layer_idx] + conv_state = torch.roll(conv_state, shifts=-1, dims=-1) + conv_state[:, :, -1] = hidden_states[:, :, 0] + cache_params.conv_states[self.layer_idx].copy_(conv_state) + hidden_states = torch.sum(conv_state * self.conv1d.weight[:, 0, :], dim=-1) + if self.use_conv_bias: + hidden_states += self.conv1d.bias + # [batch, intermediate_size, 1] : decoding + hidden_states = self.act(hidden_states).to(dtype).unsqueeze(-1) + else: + conv_state = nn.functional.pad( + hidden_states, + (self.conv_kernel_size - hidden_states.shape[-1], 0) + ) + cache_params.conv_states[self.layer_idx].copy_(conv_state) + # [batch, intermediate_size, seq_len] + hidden_states = self.act(self.conv1d(hidden_states)[..., :seq_len]) + else: + ssm_state = torch.zeros( + (batch_size, self.intermediate_size, self.ssm_state_size), + device=hidden_states.device, dtype=dtype + ) + # [batch, intermediate_size, seq_len] + hidden_states = self.act(self.conv1d(hidden_states)[..., :seq_len]) + + # 3. State Space Model sequence transformation + # 3.a. 
Selection: [batch, seq_len, self.time_step_rank + self.ssm_state_size * 2] + ssm_parameters = self.x_proj(hidden_states.transpose(1, 2)) + time_step, B, C = torch.split( + ssm_parameters, [self.time_step_rank, self.ssm_state_size, self.ssm_state_size], dim=-1 + ) + # [batch, seq_len, intermediate_size] + discrete_time_step = self.dt_proj(time_step) + # [batch, intermediate_size, seq_len] + discrete_time_step = nn.functional.softplus(discrete_time_step).transpose(1, 2) + + # 3.b. Discretization: B and C to [batch, seq_len, intermediate_size, ssm_state_size] (SRAM) + # [intermediate_size, ssm_state_size] + A = -torch.exp(self.A_log.float()) + # [batch, intermediate_size, seq_len, ssm_state_size] + discrete_A = torch.exp(A[None, :, None, :] * discrete_time_step[:, :, :, None]) + # [batch, intermediade_size, seq_len, ssm_state_size] + discrete_B = discrete_time_step[:, :, :, None] * B[:, None, :, :].float() + deltaB_u = discrete_B * hidden_states[:, :, :, None].float() + + # 3.c perform the recurrence y ← SSM(A, B, C)(x) + scan_outputs = [] + for i in range(seq_len): + # [batch, intermediade_size, ssm_state] + ssm_state = discrete_A[:, :, i, :] * ssm_state + deltaB_u[:, :, i, :] + # [batch, intermediade_size, 1] + scan_output = torch.matmul(ssm_state.to(dtype), C[:, i, :].unsqueeze(-1)) + scan_outputs.append(scan_output[:, :, 0]) + # [batch, seq_len, intermediade_size] + scan_output = torch.stack(scan_outputs, dim=-1) + scan_output = scan_output + (hidden_states * self.D[None, :, None]) + scan_output = (scan_output * self.act(gate)) + + if cache_params is not None: + cache_params.ssm_states[self.layer_idx].copy_(ssm_state) + + # 4. 
Final linear projection + # [batch, seq_len, hidden_size] + contextualized_states = self.out_proj(scan_output.transpose(1, 2)) + return contextualized_states + # fmt: on + + def forward(self, hidden_states, cache_params: Optional[MambaCache] = None): + if is_fast_path_available and "cuda" in self.x_proj.weight.device.type: + return self.cuda_kernels_forward(hidden_states, cache_params) + return self.slow_forward(hidden_states, cache_params) + + +class MambaBlock(nn.Module): + def __init__(self, config, layer_idx): + super().__init__() + self.config = config + self.layer_idx = layer_idx + self.residual_in_fp32 = config.residual_in_fp32 + self.norm = RMSNorm(config.hidden_size, eps=config.layer_norm_epsilon) + self.mixer = MambaMixer(config, layer_idx=layer_idx) + + def forward(self, hidden_states, cache_params: Optional[MambaCache] = None): + residual = hidden_states + hidden_states = self.norm(hidden_states) + # if self.residual_in_fp32: + # residual = residual.to(torch.float32) + hidden_states = self.mixer(hidden_states, cache_params=cache_params) + hidden_states = residual + hidden_states + return hidden_states + + +class MambaPreTrainedModel(PreTrainedModel): + """ + An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained + models. 
+ """ + + config_class = MambaConfig + base_model_prefix = "backbone" + _no_split_modules = ["MambaBlock"] + supports_gradient_checkpointing = True + + def _init_weights(self, module): + """Initialize the weights.""" + if isinstance(module, MambaMixer): + module.A_log._no_weight_decay = True + module.D._no_weight_decay = True + + dt_init_std = self.config.time_step_rank**-0.5 * self.config.time_step_scale + if self.config.time_step_init_scheme == "constant": + nn.init.constant_(module.dt_proj.weight, dt_init_std) + elif self.config.time_step_init_scheme == "random": + nn.init.uniform_(module.dt_proj.weight, -dt_init_std, dt_init_std) + + dt = torch.exp( + torch.rand(self.config.intermediate_size) + * (math.log(self.config.time_step_max) - math.log(self.config.time_step_min)) + + math.log(self.config.time_step_min) + ).clamp(min=self.config.time_step_floor) + # # Inverse of softplus: https://github.com/pytorch/pytorch/issues/72759 + inv_dt = dt + torch.log(-torch.expm1(-dt)) + with torch.no_grad(): + module.dt_proj.bias.copy_(inv_dt) + module.dt_proj.bias._no_reinit = True + + if isinstance(module, nn.Linear): + if module.bias is not None: + if not getattr(module.bias, "_no_reinit", False): + nn.init.zeros_(module.bias) + elif isinstance(module, nn.Embedding): + nn.init.normal_(module.weight, std=self.config.initializer_range) + + if self.config.rescale_prenorm_residual: + # Reinitialize selected weights subject to the OpenAI GPT-2 Paper Scheme: + # > A modified initialization which accounts for the accumulation on the residual path with model depth. Scale + # > the weights of residual layers at initialization by a factor of 1/√N where N is the # of residual layers. 
+ # > -- GPT-2 :: https://openai.com/blog/better-language-models/ + # + # Reference (Megatron-LM): https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/gpt_model.py + for name, p in module.named_parameters(): + if name in ["out_proj.weight"]: + # Special Scaled Initialization --> There are 2 Layer Norms per Transformer Block + # Following Pytorch init, except scale by 1/sqrt(2 * n_layer) + # We need to reinit p since this code could be called multiple times + # Having just p *= scale would repeatedly scale it down + nn.init.kaiming_uniform_(p, a=math.sqrt(5)) + with torch.no_grad(): + p /= math.sqrt(self.config.num_layers) + + +@dataclass +class MambaOutput(ModelOutput): + """ + Class for the MAMBA model outputs. + + Args: + last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`): + Sequence of hidden-states at the output of the last layer of the model. + cache_params (`MambaCache`): + The state of the model at the last time step. Can be used in a forward method with the next `input_ids` to + avoid providing the old `input_ids`. + + Includes both the State space model state matrices after the selective scan, and the Convolutional states + hidden_states (`tuple(torch.FloatTensor)`, *optional*, + returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): + Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. + + Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. + """ + + last_hidden_state: Optional[torch.FloatTensor] = None + cache_params: Optional[MambaCache] = None + hidden_states: Optional[Tuple[torch.FloatTensor]] = None + + +@dataclass +class MambaCausalLMOutput(ModelOutput): + """ + Base class for causal language model (or autoregressive) outputs. 
+ + Args: + loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided): + Language modeling loss (for next-token prediction). + logits (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`): + Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). + cache_params (`MambaCache`): + The state of the model at the last time step. Can be used in a forward method with the next `input_ids` to + avoid providing the old `input_ids`. + + Includes both the State space model state matrices after the selective scan, and the Convolutional states + hidden_states (`tuple(torch.FloatTensor)`, *optional*, + returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): + Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. + + Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. 
+ """ + + loss: Optional[torch.FloatTensor] = None + logits: Optional[torch.FloatTensor] = None + cache_params: Optional[MambaCache] = None + hidden_states: Optional[Tuple[torch.FloatTensor]] = None + + +class MambaModel(MambaPreTrainedModel): + def __init__(self, config): + super().__init__(config) + + self.embeddings = nn.Embedding(config.vocab_size, config.hidden_size) + self.layers = nn.ModuleList([MambaBlock(config, layer_idx=idx) for idx in range(config.num_hidden_layers)]) + + self.gradient_checkpointing = False + self.norm_f = RMSNorm(config.hidden_size, eps=config.layer_norm_epsilon) + # Initialize weights and apply final processing + self.post_init() + + def get_input_embeddings(self): + return self.embeddings + + def set_input_embeddings(self, new_embeddings): + self.embeddings = new_embeddings + + def forward( + self, + input_ids: Optional[torch.LongTensor] = None, + inputs_embeds: Optional[torch.LongTensor] = None, + cache_params: Optional[MambaCache] = None, + use_cache: Optional[bool] = None, + output_hidden_states: Optional[bool] = None, + return_dict: Optional[bool] = None, + **kwargs, # `attention_mask` is passed by the tokenizer and we don't want it + ) -> Union[Tuple, MambaOutput]: + output_hidden_states = ( + output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states + ) + use_cache = use_cache if use_cache is not None else (self.config.use_cache if not self.training else False) + return_dict = return_dict if return_dict is not None else self.config.use_return_dict + + if (input_ids is None) ^ (inputs_embeds is not None): # ^ is python for xor + raise ValueError( + "You cannot specify both input_ids and inputs_embeds at the same time, and must specify either one" + ) + + if inputs_embeds is None: + inputs_embeds = self.embeddings(input_ids) + + if self.gradient_checkpointing and self.training and use_cache: + use_cache = False + + if cache_params is None and use_cache: + cache_params = MambaCache( + 
self.config, inputs_embeds.size(0), device=inputs_embeds.device, dtype=inputs_embeds.dtype + ) + + hidden_states = inputs_embeds + all_hidden_states = () if output_hidden_states else None + for mixer_block in self.layers: + if self.gradient_checkpointing and self.training: + hidden_states = self._gradient_checkpointing_func(mixer_block.__call__, hidden_states, cache_params) + else: + hidden_states = mixer_block(hidden_states, cache_params=cache_params) + + if output_hidden_states: + all_hidden_states = all_hidden_states + (hidden_states,) + + if use_cache: + cache_params.seqlen_offset += inputs_embeds.shape[1] + + hidden_states = self.norm_f(hidden_states) + + if output_hidden_states: + all_hidden_states = all_hidden_states + (hidden_states,) + + if not return_dict: + return tuple(v for v in [hidden_states, cache_params, all_hidden_states] if v is not None) + + return MambaOutput( + last_hidden_state=hidden_states, + cache_params=cache_params if use_cache else None, + hidden_states=all_hidden_states, + ) + + +class MambaForCausalLM(MambaPreTrainedModel): + _tied_weights_keys = ["lm_head.weight"] + + def __init__(self, config): + super().__init__(config) + self.backbone = MambaModel(config) + self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False) + # Initialize weights and apply final processing + self.post_init() + + def get_output_embeddings(self): + return self.lm_head + + def set_output_embeddings(self, new_embeddings): + self.lm_head = new_embeddings + + def get_input_embeddings(self): + return self.backbone.get_input_embeddings() + + def set_input_embeddings(self, new_embeddings): + return self.backbone.set_input_embeddings(new_embeddings) + + def _update_model_kwargs_for_generation( + self, outputs: ModelOutput, model_kwargs: Dict[str, Any], **kwargs + ) -> Dict[str, Any]: + model_kwargs["cache_params"] = outputs.get("cache_params", None) + return model_kwargs + + def prepare_inputs_for_generation( + self, input_ids, cache_params: 
Optional[MambaCache] = None, inputs_embeds=None, attention_mask=None, **kwargs + ): + # only last token for inputs_ids if the state is passed along. + if cache_params is not None: + input_ids = input_ids[:, -1].unsqueeze(-1) + + if inputs_embeds is not None and cache_params is None: + model_inputs = {"inputs_embeds": inputs_embeds} + else: + model_inputs = {"input_ids": input_ids} + + model_inputs["cache_params"] = cache_params + return model_inputs + + def forward( + self, + input_ids: Optional[torch.LongTensor] = None, + inputs_embeds: Optional[torch.FloatTensor] = None, + cache_params: Optional[MambaCache] = None, + labels: Optional[torch.LongTensor] = None, + output_hidden_states: Optional[bool] = None, + return_dict: Optional[bool] = None, + use_cache: Optional[bool] = None, + **kwargs, # for now we need this for generation + ) -> Union[Tuple, MambaCausalLMOutput]: + r""" + labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): + Labels for language modeling. Note that the labels **are shifted** inside the model, i.e. 
you can set + `labels = input_ids` Indices are selected in `[-100, 0, ..., config.vocab_size]` All labels set to `-100` + are ignored (masked), the loss is only computed for labels in `[0, ..., config.vocab_size]` + """ + return_dict = return_dict if return_dict is not None else self.config.use_return_dict + + mamba_outputs = self.backbone( + input_ids, + cache_params=cache_params, + inputs_embeds=inputs_embeds, + output_hidden_states=output_hidden_states, + return_dict=return_dict, + use_cache=use_cache, + ) + hidden_states = mamba_outputs[0] + logits = self.lm_head(hidden_states) + + loss = None + if labels is not None: + if self.config.fuse_cross_entropy: + loss_fct = FusedCrossEntropyLoss(inplace_backward=True) + else: + loss_fct = nn.CrossEntropyLoss() + # Enable model parallelism + labels = labels.to(logits.device) + labels = torch.cat((labels[..., 1:], torch.full_like(labels[:, :1], loss_fct.ignore_index)), 1) + loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1)) + + if not return_dict: + output = (logits,) + mamba_outputs[1:] + return (loss,) + output if loss is not None else output + + return MambaCausalLMOutput( + loss=loss, + logits=logits, + cache_params=mamba_outputs.cache_params, + hidden_states=mamba_outputs.hidden_states, + ) diff --git a/fla/models/retnet/__init__.py b/fla/models/retnet/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..ad7d9e9da930819a2a6728e3e189090651b82a2e --- /dev/null +++ b/fla/models/retnet/__init__.py @@ -0,0 +1,13 @@ +# -*- coding: utf-8 -*- + +from transformers import AutoConfig, AutoModel, AutoModelForCausalLM + +from fla.models.retnet.configuration_retnet import RetNetConfig +from fla.models.retnet.modeling_retnet import RetNetForCausalLM, RetNetModel + +AutoConfig.register(RetNetConfig.model_type, RetNetConfig) +AutoModel.register(RetNetConfig, RetNetModel) +AutoModelForCausalLM.register(RetNetConfig, RetNetForCausalLM) + + +__all__ = ['RetNetConfig', 
'RetNetForCausalLM', 'RetNetModel'] diff --git a/fla/models/retnet/configuration_retnet.py b/fla/models/retnet/configuration_retnet.py new file mode 100644 index 0000000000000000000000000000000000000000..b01bda8d6f2bf12572a255f00386db57ee3488b2 --- /dev/null +++ b/fla/models/retnet/configuration_retnet.py @@ -0,0 +1,76 @@ +# -*- coding: utf-8 -*- + +from __future__ import annotations + +from typing import Optional + +from transformers.configuration_utils import PretrainedConfig + + +class RetNetConfig(PretrainedConfig): + + model_type = 'retnet' + keys_to_ignore_at_inference = ['past_key_values'] + + def __init__( + self, + vocab_size: int = 32000, + hidden_size: int = 2048, + expand_k: int = 1, + expand_v: int = 2, + hidden_ratio: Optional[int] = 2, + intermediate_size: Optional[int] = None, + num_hidden_layers: int = 24, + num_heads: int = 8, + num_kv_heads: Optional[int] = None, + feature_map: Optional[str] = None, + attn_mode: str = "fused_chunk", + hidden_act: str = "swish", + use_short_conv: bool = False, + conv_size: int = 4, + share_conv_kernel: bool = True, + use_output_gate: bool = True, + max_position_embeddings: int = 2048, + elementwise_affine: Optional[bool] = True, + norm_eps: float = 1e-6, + use_cache: bool = True, + pad_token_id: int = None, + bos_token_id: int = 1, + eos_token_id: int = 2, + tie_word_embeddings: bool = False, + initializer_range: float = 0.02, + fuse_norm: bool = True, + fuse_cross_entropy: bool = True, + **kwargs + ) -> RetNetConfig: + self.vocab_size = vocab_size + self.max_position_embeddings = max_position_embeddings + self.hidden_size = hidden_size + self.expand_k = expand_k + self.expand_v = expand_v + self.hidden_ratio = hidden_ratio + self.intermediate_size = intermediate_size + self.num_hidden_layers = num_hidden_layers + self.num_heads = num_heads + self.num_kv_heads = num_kv_heads + self.feature_map = feature_map + self.attn_mode = attn_mode + self.hidden_act = hidden_act + self.use_short_conv = use_short_conv + 
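As a side note on the `expand_k`/`expand_v` defaults above: under the usual fla convention (an assumption here, not stated in this diff) the key and value projection widths are `hidden_size * expand_k` and `hidden_size * expand_v`, split evenly across heads, so RetNet's values are twice as wide as its keys. A standalone arithmetic sketch:

```python
# Illustrative arithmetic only (not imported from this diff): how the default
# RetNetConfig expansion ratios map to projection and per-head widths.
hidden_size, num_heads = 2048, 8   # RetNetConfig defaults
expand_k, expand_v = 1, 2

key_dim = int(hidden_size * expand_k)    # total key width
value_dim = int(hidden_size * expand_v)  # total value width, 2x the keys
head_k_dim = key_dim // num_heads        # per-head key width
head_v_dim = value_dim // num_heads      # per-head value width
print(key_dim, value_dim, head_k_dim, head_v_dim)
```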
self.conv_size = conv_size + self.share_conv_kernel = share_conv_kernel + self.use_output_gate = use_output_gate + self.elementwise_affine = elementwise_affine + self.norm_eps = norm_eps + self.use_cache = use_cache + self.initializer_range = initializer_range + self.fuse_norm = fuse_norm + self.fuse_cross_entropy = fuse_cross_entropy + + super().__init__( + pad_token_id=pad_token_id, + bos_token_id=bos_token_id, + eos_token_id=eos_token_id, + tie_word_embeddings=tie_word_embeddings, + **kwargs, + ) diff --git a/fla/models/retnet/modeling_retnet.py b/fla/models/retnet/modeling_retnet.py new file mode 100644 index 0000000000000000000000000000000000000000..49a3eb24ed2bccc881fd867876ba9a0f07651a98 --- /dev/null +++ b/fla/models/retnet/modeling_retnet.py @@ -0,0 +1,410 @@ +# -*- coding: utf-8 -*- + +from __future__ import annotations + +import math +import warnings +from typing import List, Optional, Tuple, Union + +import torch +import torch.nn as nn +import torch.utils.checkpoint +from transformers.activations import ACT2FN +from transformers.modeling_outputs import (BaseModelOutputWithPast, + CausalLMOutputWithPast) +from transformers.modeling_utils import PreTrainedModel +from transformers.utils import logging + +from fla.layers.multiscale_retention import MultiScaleRetention +from fla.models.retnet.configuration_retnet import RetNetConfig +from fla.models.utils import RecurrentCache +from fla.modules import FusedCrossEntropyLoss, RMSNorm +from fla.modules.activations import swiglu_linear + +logger = logging.get_logger(__name__) + + +class RetNetMLP(nn.Module): + + def __init__( + self, + hidden_size: int, + hidden_ratio: Optional[int] = None, + intermediate_size: Optional[int] = None, + hidden_act: str = 'swish' + ) -> RetNetMLP: + super().__init__() + + self.hidden_size = hidden_size + # the final number of params is `hidden_ratio * hidden_size^2` + # `intermediate_size` is chosen to be a multiple of 256 closest to `2/3 * hidden_size * hidden_ratio` + if 
hidden_ratio is None: + hidden_ratio = 4 + if intermediate_size is None: + intermediate_size = int(hidden_size * hidden_ratio * 2 / 3) + intermediate_size = 256 * ((intermediate_size + 256 - 1) // 256) + self.hidden_ratio = hidden_ratio + self.intermediate_size = intermediate_size + + self.gate_proj = nn.Linear(self.hidden_size, self.intermediate_size * 2, bias=False) + self.down_proj = nn.Linear(self.intermediate_size, self.hidden_size, bias=False) + self.act_fn = ACT2FN[hidden_act] + + def forward(self, x): + y = self.gate_proj(x) + gate, y = y.chunk(2, -1) + return swiglu_linear(gate, y, self.down_proj.weight, self.down_proj.bias) + + +class RetNetBlock(nn.Module): + def __init__(self, config: RetNetConfig, layer_idx: int): + super().__init__() + self.hidden_size = config.hidden_size + + self.attn_norm = RMSNorm(hidden_size=config.hidden_size, eps=config.norm_eps) + self.attn = MultiScaleRetention( + mode=config.attn_mode, + hidden_size=config.hidden_size, + expand_k=config.expand_k, + expand_v=config.expand_v, + num_heads=config.num_heads, + num_kv_heads=config.num_kv_heads, + feature_map=config.feature_map, + use_output_gate=config.use_output_gate, + gate_fn=config.hidden_act, + elementwise_affine=config.elementwise_affine, + norm_eps=config.norm_eps, + fuse_norm=config.fuse_norm, + layer_idx=layer_idx + ) + self.mlp_norm = RMSNorm(hidden_size=config.hidden_size, eps=config.norm_eps) + self.mlp = RetNetMLP( + hidden_size=config.hidden_size, + hidden_ratio=config.hidden_ratio, + intermediate_size=config.intermediate_size, + hidden_act=config.hidden_act + ) + + def forward( + self, + hidden_states: torch.Tensor, + attention_mask: Optional[torch.Tensor] = None, + past_key_values: Optional[List[torch.FloatTensor]] = None, + use_cache: Optional[bool] = False, + output_attentions: Optional[bool] = False, + **kwargs, + ) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]: + + residual = hidden_states + + hidden_states = 
self.attn_norm(hidden_states) + hidden_states, attentions, past_key_values = self.attn( + hidden_states=hidden_states, + attention_mask=attention_mask, + past_key_values=past_key_values, + use_cache=use_cache, + output_attentions=output_attentions + ) + hidden_states, residual = self.mlp_norm(hidden_states, residual, True) + hidden_states = self.mlp(hidden_states) + hidden_states = residual + hidden_states + + outputs = (hidden_states, attentions, past_key_values) + + return outputs + + +class RetNetPreTrainedModel(PreTrainedModel): + + config_class = RetNetConfig + supports_gradient_checkpointing = True + _no_split_modules = ['RetNetBlock'] + + def __init__(self, *inputs, **kwargs): + super().__init__(*inputs, **kwargs) + + def _init_weights( + self, + module: nn.Module, + rescale_prenorm_residual: bool = True, + num_residuals_per_layer: int = 2, + ): + if isinstance(module, (nn.Linear, nn.Conv1d)): + # Slightly different from the TF version which uses truncated_normal for initialization + # cf https://github.com/pytorch/pytorch/pull/5617 + nn.init.normal_(module.weight, mean=0.0, std=self.config.initializer_range) + if module.bias is not None: + nn.init.zeros_(module.bias) + elif isinstance(module, nn.Embedding): + nn.init.normal_(module.weight, mean=0.0, std=self.config.initializer_range) + if module.padding_idx is not None: + module.weight.data[module.padding_idx].zero_() + + if rescale_prenorm_residual: + # Reinitialize selected weights subject to the OpenAI GPT-2 Paper Scheme: + # > A modified initialization which accounts for the accumulation on the residual path with model depth. Scale + # > the weights of residual layers at initialization by a factor of 1/√N where N is the # of residual layers. 
+ # > -- GPT-2 :: https://openai.com/blog/better-language-models/ + # + # Reference (Megatron-LM): https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/gpt_model.py + for name, p in module.named_parameters(): + if name in ["o_proj.weight", "down_proj.weight"]: + # Special Scaled Initialization --> There are 2 Layer Norms per Transformer Block + # Following Pytorch init, except scale by 1/sqrt(2 * n_layer) + # We need to reinit p since this code could be called multiple times + # Having just p *= scale would repeatedly scale it down + with torch.no_grad(): + p /= math.sqrt(num_residuals_per_layer * self.config.num_hidden_layers) + + +class RetNetModel(RetNetPreTrainedModel): + + def __init__(self, config: RetNetConfig): + super().__init__(config) + self.padding_idx = config.pad_token_id + self.vocab_size = config.vocab_size + + self.embeddings = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx) + self.layers = nn.ModuleList( + [RetNetBlock(config, layer_idx) for layer_idx in range(config.num_hidden_layers)] + ) + self.norm = RMSNorm(config.hidden_size, eps=config.norm_eps) + + self.gradient_checkpointing = False + + self.post_init() + + def get_input_embeddings(self): + return self.embeddings + + def set_input_embeddings(self, value): + self.embeddings = value + + def forward( + self, + input_ids: Optional[torch.LongTensor] = None, + attention_mask: Optional[torch.Tensor] = None, # noqa + inputs_embeds: Optional[torch.FloatTensor] = None, + past_key_values: Optional[List[torch.FloatTensor]] = None, + use_cache: Optional[bool] = None, + output_attentions: Optional[bool] = None, + output_hidden_states: Optional[bool] = None, + return_dict: Optional[bool] = None + ) -> Union[Tuple, BaseModelOutputWithPast]: + if output_attentions: + warnings.warn( + "`RetNetModel` does not support output attention weights now, so `output_attentions` is set to `False`." 
+ ) + output_attentions = False + output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions + output_hidden_states = ( + output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states + ) + use_cache = use_cache if use_cache is not None else (self.config.use_cache if not self.training else False) + return_dict = return_dict if return_dict is not None else self.config.use_return_dict + + # retrieve input_ids and inputs_embeds + if input_ids is not None and inputs_embeds is not None: + raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") + elif input_ids is not None: + batch_size, seq_len = input_ids.shape[:2] + elif inputs_embeds is not None: + batch_size, seq_len = inputs_embeds.shape[:2] + else: + raise ValueError("You have to specify either input_ids or inputs_embeds") + + if inputs_embeds is None: + inputs_embeds = self.embeddings(input_ids) + hidden_states = inputs_embeds + + if use_cache: + if past_key_values is None: + past_key_values = [layer.attn.init_state(batch_size) for layer in self.layers] + if not isinstance(past_key_values, RecurrentCache): + past_key_values = RecurrentCache.from_legacy_cache(past_key_values) + + if self.gradient_checkpointing and self.training: + if use_cache: + logger.warning_once( + "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..." 
+ ) + use_cache = False + + all_hidden_states = () if output_hidden_states else None + all_attns = () if output_attentions else None + for layer in self.layers: + if output_hidden_states: + all_hidden_states += (hidden_states,) + + if self.gradient_checkpointing and self.training: + hidden_states, attentions, past_key_values = self._gradient_checkpointing_func( + layer.__call__, + hidden_states, + attention_mask, + past_key_values, + use_cache, + output_attentions + ) + else: + hidden_states, attentions, past_key_values = layer( + hidden_states, + attention_mask=attention_mask, + past_key_values=past_key_values, + use_cache=use_cache, + output_attentions=output_attentions + ) + + if output_attentions: + all_attns += (attentions,) + + hidden_states = self.norm(hidden_states) + + # add hidden states from the last decoder layer + if output_hidden_states: + all_hidden_states += (hidden_states,) + + next_cache = None + if use_cache: + next_cache = past_key_values.to_legacy_cache() + if not return_dict: + return tuple(x for x in [hidden_states, next_cache, all_hidden_states, all_attns] if x is not None) + return BaseModelOutputWithPast( + last_hidden_state=hidden_states, + past_key_values=next_cache, + hidden_states=all_hidden_states, + attentions=all_attns + ) + + +class RetNetForCausalLM(RetNetPreTrainedModel): + _tied_weights_keys = ["lm_head.weight"] + + def __init__(self, config): + super().__init__(config) + self.model = RetNetModel(config) + self.vocab_size = config.vocab_size + self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False) + + # Initialize weights and apply final processing + self.post_init() + + def get_input_embeddings(self): + return self.model.embeddings + + def set_input_embeddings(self, value): + self.model.embeddings = value + + def get_output_embeddings(self): + return self.lm_head + + def set_output_embeddings(self, new_embeddings): + self.lm_head = new_embeddings + + def set_decoder(self, decoder): + self.model = decoder + 
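RetNet decodes with a fixed-size recurrent state rather than a growing KV cache: once `past_key_values` exists, `prepare_inputs_for_generation` keeps only the newest token, since the state already summarizes everything before it. A toy sketch of that slicing rule (plain Python over lists, hypothetical helper name, not the model's actual API):

```python
def select_step_inputs(input_ids, cache):
    # Toy mirror of the rule in prepare_inputs_for_generation: with no cache
    # the whole prompt is consumed; once a recurrent state exists, only the
    # most recent token is re-fed on each decoding step.
    if cache is None:
        return input_ids
    return input_ids[-1:]

prompt = [5, 17, 3]
print(select_step_inputs(prompt, None))           # step 1: full prompt
print(select_step_inputs(prompt + [9], "state"))  # later steps: last token only
```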
+ def get_decoder(self): + return self.model + + def generate(self, *args, **kwargs): + try: + return super().generate(*args, **kwargs) + except AttributeError as exception: + # Expected exception: "AttributeError: '(object name)' object has no attribute 'past_key_values'" + if 'past_key_values' in str(exception): + raise AttributeError( + f"You tried to call `generate` with a decoding strategy that manipulates `past_key_values`, " + f"which is not supported for {self.__class__.__name__}. " + f"Try another generation strategy instead. " + f"For the available generation strategies, check this doc: " + f"https://huggingface.co/docs/transformers/en/generation_strategies#decoding-strategies" + ) + else: + raise exception + + def prepare_inputs_for_generation( + self, + input_ids: torch.LongTensor = None, + past_key_values: Optional[torch.Tensor] = None, + attention_mask: Optional[torch.Tensor] = None, + inputs_embeds: Optional[torch.FloatTensor] = None, + **kwargs + ): + # only last token for `inputs_ids` if the `past_key_values` is passed along. + if past_key_values is not None: + if not isinstance(past_key_values, RecurrentCache): + past_key_values = RecurrentCache.from_legacy_cache(past_key_values, input_ids.shape[1] - 1) + input_ids, attention_mask = input_ids[:, -1:], attention_mask[:, -1:] + + # if `inputs_embeds` are passed, we only want to use them in the 1st generation step + if inputs_embeds is not None and past_key_values is None: + model_inputs = {'inputs_embeds': inputs_embeds} + else: + # The `contiguous()` here is necessary to have a static stride during decoding. torchdynamo otherwise + # recompiles graphs as the stride of the inputs is a guard. + # Ref: https://github.com/huggingface/transformers/pull/29114 + # TODO: use `next_tokens` directly instead. 
+ model_inputs = {'input_ids': input_ids.contiguous()} + + model_inputs.update({ + 'past_key_values': past_key_values, + 'use_cache': kwargs.get('use_cache'), + 'attention_mask': attention_mask, + }) + return model_inputs + + def forward( + self, + input_ids: torch.LongTensor = None, + attention_mask: Optional[torch.Tensor] = None, + inputs_embeds: Optional[torch.FloatTensor] = None, + past_key_values: Optional[List[torch.FloatTensor]] = None, + labels: Optional[torch.LongTensor] = None, + use_cache: Optional[bool] = None, + output_attentions: Optional[bool] = None, + output_hidden_states: Optional[bool] = None, + return_dict: Optional[bool] = None, + ) -> Union[Tuple, CausalLMOutputWithPast]: + output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions + output_hidden_states = ( + output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states + ) + return_dict = return_dict if return_dict is not None else self.config.use_return_dict + + # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn) + outputs = self.model( + input_ids=input_ids, + attention_mask=attention_mask, + inputs_embeds=inputs_embeds, + past_key_values=past_key_values, + use_cache=use_cache, + output_attentions=output_attentions, + output_hidden_states=output_hidden_states, + return_dict=return_dict + ) + + hidden_states = outputs[0] + logits = self.lm_head(hidden_states) + + loss = None + if labels is not None: + if self.config.fuse_cross_entropy: + loss_fct = FusedCrossEntropyLoss(inplace_backward=True) + else: + loss_fct = nn.CrossEntropyLoss() + # Enable model parallelism + labels = labels.to(logits.device) + labels = torch.cat((labels[..., 1:], torch.full_like(labels[:, :1], loss_fct.ignore_index)), 1) + loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1)) + + if not return_dict: + output = (logits,) + outputs[1:] + return (loss,) + output if loss is not None else output 
+ + return CausalLMOutputWithPast( + loss=loss, + logits=logits, + past_key_values=outputs.past_key_values, + hidden_states=outputs.hidden_states, + attentions=outputs.attentions, + ) diff --git a/fla/models/rwkv6/__init__.py b/fla/models/rwkv6/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..942c6dc203bf6c867ffd5111e7f2ae1e7c060386 --- /dev/null +++ b/fla/models/rwkv6/__init__.py @@ -0,0 +1,13 @@ +# -*- coding: utf-8 -*- + +from transformers import AutoConfig, AutoModel, AutoModelForCausalLM + +from fla.models.rwkv6.configuration_rwkv6 import RWKV6Config +from fla.models.rwkv6.modeling_rwkv6 import RWKV6ForCausalLM, RWKV6Model + +AutoConfig.register(RWKV6Config.model_type, RWKV6Config) +AutoModel.register(RWKV6Config, RWKV6Model) +AutoModelForCausalLM.register(RWKV6Config, RWKV6ForCausalLM) + + +__all__ = ['RWKV6Config', 'RWKV6ForCausalLM', 'RWKV6Model'] diff --git a/fla/models/rwkv6/configuration_rwkv6.py b/fla/models/rwkv6/configuration_rwkv6.py new file mode 100644 index 0000000000000000000000000000000000000000..ff187a89abead85c21c70a01d1e375c6eb5eecde --- /dev/null +++ b/fla/models/rwkv6/configuration_rwkv6.py @@ -0,0 +1,66 @@ +# -*- coding: utf-8 -*- + +from typing import Optional + +from transformers.configuration_utils import PretrainedConfig + + +class RWKV6Config(PretrainedConfig): + + model_type = 'rwkv6' + keys_to_ignore_at_inference = ['past_key_values'] + + def __init__( + self, + attn_mode: str = "chunk", + vocab_size: int = 32000, + hidden_size: int = 2048, + expand_k: int = 0.5, + expand_v: int = 1, + hidden_ratio: Optional[int] = 3.5, + intermediate_size: Optional[int] = None, + use_glu: Optional[bool] = False, + num_hidden_layers: int = 24, + num_heads: int = 4, + proj_low_rank_dim: int = 32, + gate_low_rank_dim: int = 64, + hidden_act: str = "sqrelu", + max_position_embeddings: int = 2048, + eps: float = 1e-6, + use_cache: bool = True, + pad_token_id: int = None, + bos_token_id: int = 1, + eos_token_id: int = 2, + 
tie_word_embeddings: bool = False, + initializer_range: float = 0.02, + fuse_norm: bool = True, + fuse_cross_entropy: bool = True, + **kwargs + ): + self.vocab_size = vocab_size + self.max_position_embeddings = max_position_embeddings + self.hidden_size = hidden_size + self.expand_k = expand_k + self.expand_v = expand_v + self.hidden_ratio = hidden_ratio + self.intermediate_size = intermediate_size + self.use_glu = use_glu + self.num_hidden_layers = num_hidden_layers + self.num_heads = num_heads + self.proj_low_rank_dim = proj_low_rank_dim + self.gate_low_rank_dim = gate_low_rank_dim + self.attn_mode = attn_mode + self.hidden_act = hidden_act + self.eps = eps + self.use_cache = use_cache + self.initializer_range = initializer_range + self.fuse_norm = fuse_norm + self.fuse_cross_entropy = fuse_cross_entropy + + super().__init__( + pad_token_id=pad_token_id, + bos_token_id=bos_token_id, + eos_token_id=eos_token_id, + tie_word_embeddings=tie_word_embeddings, + **kwargs, + ) diff --git a/fla/models/rwkv6/modeling_rwkv6.py b/fla/models/rwkv6/modeling_rwkv6.py new file mode 100644 index 0000000000000000000000000000000000000000..ef701e057016e31fdbd5fffb0532697d83218168 --- /dev/null +++ b/fla/models/rwkv6/modeling_rwkv6.py @@ -0,0 +1,443 @@ +# -*- coding: utf-8 -*- + +from __future__ import annotations + +import math +import warnings +from typing import List, Optional, Tuple, Union + +import torch +import torch.nn as nn +import torch.utils.checkpoint +from transformers.modeling_outputs import (BaseModelOutputWithPast, + CausalLMOutputWithPast) +from transformers.modeling_utils import PreTrainedModel +from transformers.utils import logging + +from fla.layers.rwkv6 import LerpLinear, RWKV6Attention +from fla.models.rwkv6.configuration_rwkv6 import RWKV6Config +from fla.models.utils import RecurrentCache +from fla.modules import FusedCrossEntropyLoss, LayerNorm +from fla.modules.activations import ACT2FN, swiglu_linear + +logger = logging.get_logger(__name__) + + +class 
RWKV6FeedForward(nn.Module): + + def __init__( + self, + hidden_size: int, + hidden_ratio: Optional[int] = None, + intermediate_size: Optional[int] = None, + hidden_act: str = 'sqrelu', + layer_idx: int = None + ) -> RWKV6FeedForward: + super().__init__() + + self.hidden_size = hidden_size + if hidden_ratio is None: + hidden_ratio = 3.5 + if intermediate_size is None: + intermediate_size = int(hidden_size * hidden_ratio) + intermediate_size = 32 * ((intermediate_size + 32 - 1) // 32) + self.hidden_ratio = hidden_ratio + self.intermediate_size = intermediate_size + + self.time_shift = nn.ZeroPad2d((0, 0, 1, -1)) + + self.key = LerpLinear(hidden_size, intermediate_size) + self.value = nn.Linear(intermediate_size, hidden_size) + self.receptance = LerpLinear(hidden_size, hidden_size) + self.act_fn = ACT2FN[hidden_act] + + self.layer_idx = layer_idx + + def forward(self, x: torch.Tensor, state: Optional[torch.Tensor] = None) -> torch.Tensor: + if state is not None: + raise NotImplementedError("Past state is not yet supported in `RWKV6FeedForward`.") + shifted = self.time_shift(x) + if len(shifted.shape) == 2: + shifted = shifted.unsqueeze(1) + delta = shifted - x + key = self.act_fn(self.key(x, delta)) + value = self.value(key) + receptance = self.receptance(x, delta) + return receptance.sigmoid() * value + + +class RWKV6GLU(nn.Module): + + def __init__( + self, + hidden_size: int, + hidden_ratio: Optional[int] = None, + intermediate_size: Optional[int] = None, + hidden_act: str = 'swish', + layer_idx: int = None + ) -> RWKV6GLU: + super().__init__() + + self.hidden_size = hidden_size + # the final number of params is `hidden_ratio * hidden_size^2` + # `intermediate_size` is chosen to be a multiple of 256 closest to `2/3 * hidden_size * hidden_ratio` + if hidden_ratio is None: + hidden_ratio = 4 + if intermediate_size is None: + intermediate_size = int(hidden_size * hidden_ratio * 2 / 3) + intermediate_size = 256 * ((intermediate_size + 256 - 1) // 256) + 
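The rounding in `RWKV6GLU.__init__` (the same rule appears in `RetNetMLP`) targets `2/3 * hidden_size * hidden_ratio` and then rounds up to the nearest multiple of 256, which keeps matmul shapes kernel-friendly. A small worked example of just that arithmetic, factored into a standalone helper for illustration:

```python
def glu_intermediate_size(hidden_size, hidden_ratio=4):
    # Same arithmetic as RWKV6GLU.__init__ above: target 2/3 of
    # hidden_size * hidden_ratio, then round up to a multiple of 256.
    intermediate_size = int(hidden_size * hidden_ratio * 2 / 3)
    return 256 * ((intermediate_size + 256 - 1) // 256)

print(glu_intermediate_size(2048))  # 5461 rounds up to 5632
```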
self.hidden_ratio = hidden_ratio + self.intermediate_size = intermediate_size + self.layer_idx = layer_idx + + self.gate_proj = nn.Linear(self.hidden_size, self.intermediate_size * 2, bias=False) + self.down_proj = nn.Linear(self.intermediate_size, self.hidden_size, bias=False) + self.act_fn = ACT2FN[hidden_act] + + def forward(self, x): + y = self.gate_proj(x) + gate, y = y.chunk(2, -1) + return swiglu_linear(gate, y, self.down_proj.weight, self.down_proj.bias) + + +class RWKV6Block(nn.Module): + def __init__(self, config: RWKV6Config, layer_idx: int): + super().__init__() + self.hidden_size = config.hidden_size + + self.attn_norm = LayerNorm(hidden_size=config.hidden_size, eps=config.eps) + self.attn = RWKV6Attention( + mode=config.attn_mode, + hidden_size=config.hidden_size, + expand_k=config.expand_k, + expand_v=config.expand_v, + num_heads=config.num_heads, + proj_low_rank_dim=config.proj_low_rank_dim, + gate_low_rank_dim=config.gate_low_rank_dim, + eps=config.eps, + fuse_norm=config.fuse_norm, + layer_idx=layer_idx + ) + self.ffn_norm = LayerNorm(hidden_size=config.hidden_size, eps=config.eps) + self.ffn = (RWKV6GLU if config.use_glu else RWKV6FeedForward)( + hidden_size=config.hidden_size, + hidden_ratio=config.hidden_ratio, + intermediate_size=config.intermediate_size, + hidden_act=config.hidden_act, + layer_idx=layer_idx + ) + + def forward( + self, + hidden_states: torch.Tensor, + attention_mask: Optional[torch.Tensor] = None, + past_key_values: Optional[Tuple[List[torch.Tensor]]] = None, + use_cache: Optional[bool] = False, + output_attentions: Optional[bool] = False, + **kwargs, + ) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]: + residual = hidden_states + hidden_states = self.attn_norm(hidden_states) + hidden_states, attentions, past_key_values = self.attn( + hidden_states=hidden_states, + attention_mask=attention_mask, + past_key_values=past_key_values, + use_cache=use_cache, + output_attentions=output_attentions 
+ ) + hidden_states, residual = self.ffn_norm(hidden_states, residual, True) + hidden_states = self.ffn(hidden_states) + hidden_states = residual + hidden_states + + outputs = (hidden_states, attentions, past_key_values) + + return outputs + + +class RWKV6PreTrainedModel(PreTrainedModel): + + config_class = RWKV6Config + supports_gradient_checkpointing = True + _no_split_modules = ['RWKV6Block'] + + def __init__(self, *inputs, **kwargs): + super().__init__(*inputs, **kwargs) + + def _init_weights( + self, + module: nn.Module, + rescale_prenorm_residual: bool = True, + num_residuals_per_layer: int = 2, + ): + if isinstance(module, (nn.Linear, nn.Conv1d)): + # Slightly different from the TF version which uses truncated_normal for initialization + # cf https://github.com/pytorch/pytorch/pull/5617 + nn.init.normal_(module.weight, mean=0.0, std=self.config.initializer_range) + if module.bias is not None: + nn.init.zeros_(module.bias) + elif isinstance(module, nn.Parameter): + nn.init.normal_(module, mean=0.0, std=self.config.initializer_range) + elif isinstance(module, nn.Embedding): + nn.init.normal_(module.weight, mean=0.0, std=self.config.initializer_range) + if module.padding_idx is not None: + module.weight.data[module.padding_idx].zero_() + + if rescale_prenorm_residual: + # Reinitialize selected weights subject to the OpenAI GPT-2 Paper Scheme: + # > A modified initialization which accounts for the accumulation on the residual path with model depth. Scale + # > the weights of residual layers at initialization by a factor of 1/√N where N is the # of residual layers. 
+ # > -- GPT-2 :: https://openai.com/blog/better-language-models/ + # + # Reference (Megatron-LM): https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/gpt_model.py + for name, p in module.named_parameters(): + if name in ["o_proj.weight", "down_proj.weight"]: + # Special Scaled Initialization --> There are 2 Layer Norms per Transformer Block + # Following Pytorch init, except scale by 1/sqrt(2 * n_layer) + # We need to reinit p since this code could be called multiple times + # Having just p *= scale would repeatedly scale it down + with torch.no_grad(): + p /= math.sqrt(num_residuals_per_layer * self.config.num_hidden_layers) + + +class RWKV6Model(RWKV6PreTrainedModel): + + def __init__(self, config: RWKV6Config): + super().__init__(config) + self.padding_idx = config.pad_token_id + self.vocab_size = config.vocab_size + + self.embeddings = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx) + self.layers = nn.ModuleList([RWKV6Block(config, layer_idx) for layer_idx in range(config.num_hidden_layers)]) + self.norm = LayerNorm(config.hidden_size, eps=config.eps) + + self.gradient_checkpointing = False + + self.post_init() + + def get_input_embeddings(self): + return self.embeddings + + def set_input_embeddings(self, value): + self.embeddings = value + + def forward( + self, + input_ids: Optional[torch.LongTensor] = None, + attention_mask: Optional[torch.Tensor] = None, # noqa + inputs_embeds: Optional[torch.FloatTensor] = None, + past_key_values: Optional[Tuple[List[torch.Tensor]]] = None, + use_cache: Optional[bool] = None, + output_attentions: Optional[bool] = None, + output_hidden_states: Optional[bool] = None, + return_dict: Optional[bool] = None + ) -> Union[Tuple, BaseModelOutputWithPast]: + if output_attentions: + warnings.warn("`RWKV6Model` does not support `output_attentions` for now, setting it to `False`.") + output_attentions = False + output_attentions = output_attentions if output_attentions is not None else 
self.config.output_attentions + output_hidden_states = output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states + use_cache = use_cache if use_cache is not None else (self.config.use_cache if not self.training else False) + return_dict = return_dict if return_dict is not None else self.config.use_return_dict + + # retrieve input_ids and inputs_embeds + if input_ids is not None and inputs_embeds is not None: + raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") + elif input_ids is not None: + batch_size = input_ids.shape[0] + elif inputs_embeds is not None: + batch_size = inputs_embeds.shape[0] + else: + raise ValueError("You have to specify either input_ids or inputs_embeds") + + if inputs_embeds is None: + inputs_embeds = self.embeddings(input_ids) + hidden_states = inputs_embeds + + if use_cache: + if past_key_values is None: + past_key_values = [layer.attn.init_state(batch_size) for layer in self.layers] + if not isinstance(past_key_values, RecurrentCache): + past_key_values = RecurrentCache.from_legacy_cache(past_key_values) + + if self.gradient_checkpointing and self.training: + if use_cache: + logger.warning_once( + "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..." 
+ ) + use_cache = False + + all_hidden_states = () if output_hidden_states else None + all_attns = () if output_attentions else None + for layer in self.layers: + if output_hidden_states: + all_hidden_states += (hidden_states,) + + if self.gradient_checkpointing and self.training: + hidden_states, attentions, past_key_values = self._gradient_checkpointing_func( + layer.__call__, + hidden_states, + attention_mask, + past_key_values, + use_cache, + output_attentions + ) + else: + hidden_states, attentions, past_key_values = layer( + hidden_states, + attention_mask=attention_mask, + past_key_values=past_key_values, + use_cache=use_cache, + output_attentions=output_attentions + ) + + if output_attentions: + all_attns += (attentions,) + + hidden_states = self.norm(hidden_states) + + # add hidden states from the last decoder layer + if output_hidden_states: + all_hidden_states += (hidden_states,) + + next_cache = None + if use_cache: + next_cache = past_key_values.to_legacy_cache() + if not return_dict: + return tuple(x for x in [hidden_states, next_cache, all_hidden_states, all_attns] if x is not None) + return BaseModelOutputWithPast( + last_hidden_state=hidden_states, + past_key_values=next_cache, + hidden_states=all_hidden_states, + attentions=all_attns + ) + + +class RWKV6ForCausalLM(RWKV6PreTrainedModel): + _tied_weights_keys = ["lm_head.weight"] + + def __init__(self, config): + super().__init__(config) + self.model = RWKV6Model(config) + self.vocab_size = config.vocab_size + self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False) + + # Initialize weights and apply final processing + self.post_init() + + def get_input_embeddings(self): + return self.model.embeddings + + def set_input_embeddings(self, value): + self.model.embeddings = value + + def get_output_embeddings(self): + return self.lm_head + + def set_output_embeddings(self, new_embeddings): + self.lm_head = new_embeddings + + def set_decoder(self, decoder): + self.model = decoder + + 
def get_decoder(self): + return self.model + + def generate(self, *args, **kwargs): + try: + return super().generate(*args, **kwargs) + except AttributeError as exception: + if 'past_key_values' in str(exception): + raise AttributeError( + f"You tried to call `generate` with a decoding strategy that manipulates `past_key_values`, " + f"which is not supported for {self.__class__.__name__}. " + f"Try another generation strategy instead. " + f"For the available generation strategies, check this doc: " + f"https://huggingface.co/docs/transformers/en/generation_strategies#decoding-strategies" + ) + else: + raise exception + + def prepare_inputs_for_generation( + self, + input_ids: torch.LongTensor = None, + past_key_values: Optional[Tuple[List[torch.Tensor]]] = None, + attention_mask: Optional[torch.Tensor] = None, + inputs_embeds: Optional[torch.Tensor] = None, + **kwargs + ): + # only use the last token of `input_ids` if `past_key_values` is passed along. + if past_key_values is not None: + if not isinstance(past_key_values, RecurrentCache): + past_key_values = RecurrentCache.from_legacy_cache(past_key_values, input_ids.shape[1] - 1) + input_ids = input_ids[:, -1:] + if attention_mask is not None: + attention_mask = attention_mask[:, -1:] + # if `inputs_embeds` are passed, we only want to use them in the 1st generation step + if inputs_embeds is not None and past_key_values is None: + model_inputs = {'inputs_embeds': inputs_embeds} + else: + # The `contiguous()` here is necessary to have a static stride during decoding. torchdynamo otherwise + # recompiles graphs as the stride of the inputs is a guard. + # Ref: https://github.com/huggingface/transformers/pull/29114 + # TODO: use `next_tokens` directly instead. 
+ model_inputs = {'input_ids': input_ids.contiguous()} + + model_inputs.update({ + 'past_key_values': past_key_values, + 'use_cache': kwargs.get('use_cache'), + 'attention_mask': attention_mask, + }) + return model_inputs + + def forward( + self, + input_ids: torch.LongTensor = None, + attention_mask: Optional[torch.Tensor] = None, + inputs_embeds: Optional[torch.Tensor] = None, + past_key_values: Optional[Tuple[List[torch.Tensor]]] = None, + labels: Optional[torch.LongTensor] = None, + use_cache: Optional[bool] = None, + output_attentions: Optional[bool] = None, + output_hidden_states: Optional[bool] = None, + return_dict: Optional[bool] = None, + ) -> Union[Tuple, CausalLMOutputWithPast]: + output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions + output_hidden_states = ( + output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states + ) + return_dict = return_dict if return_dict is not None else self.config.use_return_dict + + outputs = self.model( + input_ids=input_ids, + attention_mask=attention_mask, + inputs_embeds=inputs_embeds, + past_key_values=past_key_values, + use_cache=use_cache, + output_attentions=output_attentions, + output_hidden_states=output_hidden_states, + return_dict=return_dict + ) + + hidden_states = outputs[0] + logits = self.lm_head(hidden_states) + + loss = None + if labels is not None: + if self.config.fuse_cross_entropy: + loss_fct = FusedCrossEntropyLoss(inplace_backward=True) + else: + loss_fct = nn.CrossEntropyLoss() + # Enable model parallelism + labels = labels.to(logits.device) + labels = torch.cat((labels[..., 1:], torch.full_like(labels[:, :1], loss_fct.ignore_index)), 1) + loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1)) + + if not return_dict: + output = (logits,) + outputs[1:] + return (loss,) + output if loss is not None else output + + return CausalLMOutputWithPast( + loss=loss, + logits=logits, + 
past_key_values=outputs.past_key_values, + hidden_states=outputs.hidden_states, + attentions=outputs.attentions, + ) diff --git a/fla/models/transformer/__init__.py b/fla/models/transformer/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..47df999fe1446258dc9930e8b0aa6941f1c93f58 --- /dev/null +++ b/fla/models/transformer/__init__.py @@ -0,0 +1,14 @@ +# -*- coding: utf-8 -*- + +from transformers import AutoConfig, AutoModel, AutoModelForCausalLM + +from fla.models.transformer.configuration_transformer import TransformerConfig +from fla.models.transformer.modeling_transformer import ( + TransformerForCausalLM, TransformerModel) + +AutoConfig.register(TransformerConfig.model_type, TransformerConfig) +AutoModel.register(TransformerConfig, TransformerModel) +AutoModelForCausalLM.register(TransformerConfig, TransformerForCausalLM) + + +__all__ = ['TransformerConfig', 'TransformerForCausalLM', 'TransformerModel'] diff --git a/fla/models/transformer/configuration_transformer.py b/fla/models/transformer/configuration_transformer.py new file mode 100644 index 0000000000000000000000000000000000000000..10e7fdcb8534e2907678ebbe1e359743a9206f07 --- /dev/null +++ b/fla/models/transformer/configuration_transformer.py @@ -0,0 +1,61 @@ +# -*- coding: utf-8 -*- + +from typing import Optional + +from transformers.configuration_utils import PretrainedConfig + + +class TransformerConfig(PretrainedConfig): + + model_type = 'transformer' + keys_to_ignore_at_inference = ['past_key_values'] + + def __init__( + self, + vocab_size: int = 32000, + hidden_size: int = 2048, + hidden_ratio: Optional[int] = 4, + intermediate_size: Optional[int] = None, + num_hidden_layers: int = 24, + num_heads: int = 32, + num_kv_heads: int = None, + hidden_act: str = "swish", + max_position_embeddings: int = 2048, + initializer_range: float = 0.02, + elementwise_affine: Optional[bool] = True, + norm_eps: float = 1e-6, + use_cache: bool = True, + pad_token_id: int = None, + 
bos_token_id: int = 1, + eos_token_id: int = 2, + tie_word_embeddings: bool = False, + attention_bias: bool = False, + fuse_norm: bool = True, + fuse_cross_entropy: bool = True, + **kwargs, + ): + self.vocab_size = vocab_size + self.max_position_embeddings = max_position_embeddings + self.hidden_size = hidden_size + self.hidden_ratio = hidden_ratio + self.intermediate_size = intermediate_size + self.num_hidden_layers = num_hidden_layers + self.num_heads = num_heads + self.num_kv_heads = num_kv_heads + + self.hidden_act = hidden_act + self.initializer_range = initializer_range + self.elementwise_affine = elementwise_affine + self.norm_eps = norm_eps + self.use_cache = use_cache + self.attention_bias = attention_bias + self.fuse_cross_entropy = fuse_cross_entropy + self.fuse_norm = fuse_norm + + super().__init__( + pad_token_id=pad_token_id, + bos_token_id=bos_token_id, + eos_token_id=eos_token_id, + tie_word_embeddings=tie_word_embeddings, + **kwargs, + ) diff --git a/fla/models/transformer/modeling_transformer.py b/fla/models/transformer/modeling_transformer.py new file mode 100644 index 0000000000000000000000000000000000000000..e9bc3946b1281ebbcffd94d7835c24166add13f6 --- /dev/null +++ b/fla/models/transformer/modeling_transformer.py @@ -0,0 +1,515 @@ +# -*- coding: utf-8 -*- + +from __future__ import annotations + +import math +import warnings +from typing import List, Optional, Tuple, Union + +import torch +import torch.nn as nn +import torch.nn.functional as F +import torch.utils.checkpoint +from einops import rearrange +from transformers.activations import ACT2FN +from transformers.cache_utils import Cache, DynamicCache +from transformers.modeling_outputs import (BaseModelOutputWithPast, + CausalLMOutputWithPast) +from transformers.modeling_utils import PreTrainedModel +from transformers.utils import logging + +from fla.models.transformer.configuration_transformer import TransformerConfig +from fla.modules import FusedCrossEntropyLoss, RMSNorm, RotaryEmbedding 
+from fla.modules.activations import swiglu_linear + +try: + from flash_attn import flash_attn_func, flash_attn_varlen_func + from flash_attn.bert_padding import index_first_axis, pad_input, unpad_input +except ImportError: + flash_attn_func = None + flash_attn_varlen_func = None + index_first_axis = pad_input = unpad_input = None + +logger = logging.get_logger(__name__) + + +class TransformerAttention(nn.Module): + + def __init__( + self, + config: TransformerConfig, + layer_idx: Optional[int] = None, + **kwargs + ): + super().__init__() + + self.config = config + self.layer_idx = layer_idx + + self.num_heads = config.num_heads + if config.num_kv_heads is None: + self.num_kv_heads = self.num_heads + else: + self.num_kv_heads = config.num_kv_heads + self.num_kv_groups = config.num_heads // self.num_kv_heads + self.hidden_size = config.hidden_size + self.head_dim = self.hidden_size // self.num_heads + self.kv_dim = self.num_kv_heads * self.head_dim + self.max_position_embeddings = config.max_position_embeddings + + self.q_proj = nn.Linear(self.hidden_size, self.hidden_size, bias=False) + self.k_proj = nn.Linear(self.hidden_size, self.kv_dim, bias=False) + self.v_proj = nn.Linear(self.hidden_size, self.kv_dim, bias=False) + self.o_proj = nn.Linear(self.hidden_size, self.hidden_size, bias=False) + + self.rotary = RotaryEmbedding(self.head_dim) + + self.apply(self._initialize_weights) + + def _initialize_weights(self, module: nn.Module): + if getattr(module, "_is_hf_initialized", False): + return + if isinstance(module, nn.Linear): + nn.init.xavier_uniform_(module.weight, gain=2 ** -2.5) + if module.bias is not None: + nn.init.zeros_(module.bias) + module._is_hf_initialized = True + + def forward( + self, + hidden_states: torch.Tensor, + attention_mask: Optional[torch.LongTensor] = None, + past_key_values: Optional[Cache] = None, + output_attentions: bool = False, + use_cache: bool = False, + **kwargs, + ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]: + batch_size, q_len, _ = hidden_states.size() + q = rearrange(self.q_proj(hidden_states), '... (h d) -> ... h d', h=self.num_heads) + k = rearrange(self.k_proj(hidden_states), '... (h d) -> ... h d', h=self.num_kv_heads) + v = rearrange(self.v_proj(hidden_states), 'b t (h d) -> b h t d', h=self.num_kv_heads) + + seqlen_offset = 0 + if past_key_values is not None: + seqlen_offset = past_key_values.get_seq_length(self.layer_idx) + + if attention_mask is not None: + # to delimit the offsets of padding tokens + seqlen_offset = seqlen_offset + attention_mask.sum(-1) - attention_mask.shape[-1] + q, k = self.rotary(q, k, seqlen_offset, self.max_position_embeddings) + + k = rearrange(k, 'b t h d -> b h t d') + if past_key_values is not None: + k, v = past_key_values.update(k, v, self.layer_idx) + k, v = rearrange(k, 'b h t d -> b t h d'), rearrange(v, 'b h t d -> b t h d') + if self.num_kv_groups > 1: + k = rearrange(k.unsqueeze(-2).repeat(1, 1, 1, self.num_kv_groups, 1), 'b t h g d -> b t (h g) d') + v = rearrange(v.unsqueeze(-2).repeat(1, 1, 1, self.num_kv_groups, 1), 'b t h g d -> b t (h g) d') + + if flash_attn_func is None: + raise ImportError("Please install Flash Attention via `pip install flash-attn --no-build-isolation` first") + + # Contains at least one padding token in the sequence + if attention_mask is not None: + q, k, v, indices_q, cu_seq_lens, max_seq_lens = self._upad_input(q, k, v, attention_mask, q_len) + cu_seqlens_q, cu_seqlens_k = cu_seq_lens + max_seqlen_q, max_seqlen_k = max_seq_lens + o = flash_attn_varlen_func( + q, k, v, + cu_seqlens_q=cu_seqlens_q, + cu_seqlens_k=cu_seqlens_k, + max_seqlen_q=max_seqlen_q, + max_seqlen_k=max_seqlen_k, + causal=True + ) + o = pad_input(o, indices_q, batch_size, q_len) + else: + o = flash_attn_func(q, k, v, causal=True) + o = o.reshape(batch_size, q_len, self.hidden_size) + o = self.o_proj(o) + + # Flash Attention kernels do not return attention weights + attentions = None + + return o, attentions, past_key_values + + def _upad_input(self, q, k, v, attention_mask, q_len): + seqlens = attention_mask.sum(-1, dtype=torch.int32) + indices_k = torch.nonzero(attention_mask.flatten(), as_tuple=False).flatten() + max_seqlen_k = 
seqlens.max().item() + cu_seqlens_k = F.pad(torch.cumsum(seqlens, dim=0, dtype=torch.int32), (1, 0)) + batch_size, seq_len, num_key_value_heads, head_dim = k.shape + + k = index_first_axis(k.reshape(batch_size * seq_len, num_key_value_heads, head_dim), indices_k) + v = index_first_axis(v.reshape(batch_size * seq_len, num_key_value_heads, head_dim), indices_k) + if q_len == seq_len: + q = index_first_axis(q.reshape(batch_size * seq_len, self.num_heads, head_dim), indices_k) + cu_seqlens_q = cu_seqlens_k + max_seqlen_q = max_seqlen_k + indices_q = indices_k + elif q_len == 1: + max_seqlen_q = 1 + # There is a memcpy here, that is very bad. + cu_seqlens_q = torch.arange(batch_size + 1, dtype=torch.int32, device=q.device) + indices_q = cu_seqlens_q[:-1] + q = q.squeeze(1) + else: + # The -q_len: slice assumes left padding. + attention_mask = attention_mask[:, -q_len:] + q, indices_q, cu_seqlens_q, max_seqlen_q = unpad_input(q, attention_mask) + + return q, k, v, indices_q, (cu_seqlens_q, cu_seqlens_k), (max_seqlen_q, max_seqlen_k) + + +class TransformerMLP(nn.Module): + + def __init__( + self, + hidden_size: int, + hidden_ratio: Optional[int] = None, + intermediate_size: Optional[int] = None, + hidden_act: str = 'swish' + ) -> TransformerMLP: + super().__init__() + + self.hidden_size = hidden_size + # the final number of params is `hidden_ratio * hidden_size^2` + # `intermediate_size` is chosen to be a multiple of 256 closest to `2/3 * hidden_size * hidden_ratio` + if hidden_ratio is None: + hidden_ratio = 4 + if intermediate_size is None: + intermediate_size = int(hidden_size * hidden_ratio * 2 / 3) + intermediate_size = 256 * ((intermediate_size + 256 - 1) // 256) + self.hidden_ratio = hidden_ratio + self.intermediate_size = intermediate_size + + self.gate_proj = nn.Linear(self.hidden_size, self.intermediate_size * 2, bias=False) + self.down_proj = nn.Linear(self.intermediate_size, self.hidden_size, bias=False) + self.act_fn = ACT2FN[hidden_act] + + def forward(self, 
x): + y = self.gate_proj(x) + gate, y = y.chunk(2, -1) + return swiglu_linear(gate, y, self.down_proj.weight, self.down_proj.bias) + + +class TransformerBlock(nn.Module): + def __init__(self, config: TransformerConfig, layer_idx: int): + super().__init__() + self.hidden_size = config.hidden_size + + self.attn_norm = RMSNorm(hidden_size=config.hidden_size, eps=config.norm_eps) + self.attn = TransformerAttention( + config=config, + layer_idx=layer_idx + ) + self.mlp_norm = RMSNorm(hidden_size=config.hidden_size, eps=config.norm_eps) + self.mlp = TransformerMLP( + hidden_size=config.hidden_size, + hidden_ratio=config.hidden_ratio, + intermediate_size=config.intermediate_size, + hidden_act=config.hidden_act + ) + + def forward( + self, + hidden_states: torch.Tensor, + attention_mask: Optional[torch.Tensor] = None, + past_key_values: Optional[Tuple[torch.Tensor]] = None, + output_attentions: Optional[bool] = False, + use_cache: Optional[bool] = False, + **kwargs, + ) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]: + + residual = hidden_states + hidden_states = self.attn_norm(hidden_states) + hidden_states, attentions, past_key_values = self.attn( + hidden_states=hidden_states, + attention_mask=attention_mask, + past_key_values=past_key_values, + use_cache=use_cache, + output_attentions=output_attentions + ) + hidden_states, residual = self.mlp_norm(hidden_states, residual, True) + hidden_states = self.mlp(hidden_states) + hidden_states = residual + hidden_states + + outputs = (hidden_states,) + + if output_attentions: + outputs += (attentions,) + + if use_cache: + outputs += (past_key_values,) + + return outputs + + +class TransformerPreTrainedModel(PreTrainedModel): + + config_class = TransformerConfig + supports_gradient_checkpointing = True + _no_split_modules = ['TransformerBlock'] + + def __init__(self, *inputs, **kwargs): + super().__init__(*inputs, **kwargs) + + def _init_weights( + self, + module: nn.Module, + 
rescale_prenorm_residual: bool = True, + num_residuals_per_layer: int = 2, + ): + if isinstance(module, (nn.Linear, nn.Conv1d)): + # Slightly different from the TF version which uses truncated_normal for initialization + # cf https://github.com/pytorch/pytorch/pull/5617 + nn.init.normal_(module.weight, mean=0.0, std=self.config.initializer_range) + if module.bias is not None: + nn.init.zeros_(module.bias) + elif isinstance(module, nn.Embedding): + nn.init.normal_(module.weight, mean=0.0, std=self.config.initializer_range) + if module.padding_idx is not None: + module.weight.data[module.padding_idx].zero_() + + if rescale_prenorm_residual: + # Reinitialize selected weights subject to the OpenAI GPT-2 Paper Scheme: + # > A modified initialization which accounts for the accumulation on the residual path with model depth. Scale + # > the weights of residual layers at initialization by a factor of 1/√N where N is the # of residual layers. + # > -- GPT-2 :: https://openai.com/blog/better-language-models/ + # + # Reference (Megatron-LM): https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/gpt_model.py + for name, p in module.named_parameters(): + if name in ["o_proj.weight", "down_proj.weight"]: + # Special Scaled Initialization --> There are 2 Layer Norms per Transformer Block + # Following Pytorch init, except scale by 1/sqrt(2 * n_layer) + # We need to reinit p since this code could be called multiple times + # Having just p *= scale would repeatedly scale it down + with torch.no_grad(): + p /= math.sqrt(num_residuals_per_layer * self.config.num_hidden_layers) + + +class TransformerModel(TransformerPreTrainedModel): + + def __init__(self, config: TransformerConfig): + super().__init__(config) + self.padding_idx = config.pad_token_id + self.vocab_size = config.vocab_size + + self.embeddings = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx) + self.layers = nn.ModuleList([TransformerBlock(config, layer_idx) for layer_idx in 
range(config.num_hidden_layers)]) + self.norm = RMSNorm(config.hidden_size, eps=config.norm_eps) + + self.gradient_checkpointing = False + + self.post_init() + + def get_input_embeddings(self): + return self.embeddings + + def set_input_embeddings(self, value): + self.embeddings = value + + def forward( + self, + input_ids: Optional[torch.LongTensor] = None, + attention_mask: Optional[torch.Tensor] = None, + past_key_values: Optional[List[torch.FloatTensor]] = None, + inputs_embeds: Optional[torch.FloatTensor] = None, + use_cache: Optional[bool] = None, + output_attentions: Optional[bool] = None, + output_hidden_states: Optional[bool] = None, + return_dict: Optional[bool] = None + ) -> Union[Tuple, BaseModelOutputWithPast]: + if output_attentions: + warnings.warn( + "`TransformerModel` does not support output attention weights for now, so `output_attentions` is set to `False`." + ) + output_attentions = False + output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions + output_hidden_states = output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states + use_cache = use_cache if use_cache is not None else (self.config.use_cache if not self.training else False) + return_dict = return_dict if return_dict is not None else self.config.use_return_dict + + # retrieve input_ids and inputs_embeds + if input_ids is not None and inputs_embeds is not None: + raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") + elif input_ids is None and inputs_embeds is None: + raise ValueError("You have to specify either input_ids or inputs_embeds") + + if use_cache: + use_legacy_cache = not isinstance(past_key_values, Cache) + if use_legacy_cache: + past_key_values = DynamicCache.from_legacy_cache(past_key_values) + + if inputs_embeds is None: + inputs_embeds = self.embeddings(input_ids) + + # embed positions + hidden_states = inputs_embeds + + if self.gradient_checkpointing 
and self.training: + if use_cache: + logger.warning_once( + "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..." + ) + use_cache = False + + all_hidden_states = () if output_hidden_states else None + all_attns = () if output_attentions else None + next_decoder_cache = None + + for layer in self.layers: + if output_hidden_states: + all_hidden_states += (hidden_states,) + + if self.gradient_checkpointing and self.training: + layer_outputs = self._gradient_checkpointing_func( + layer.__call__, + hidden_states, + attention_mask, + past_key_values, + output_attentions, + use_cache + ) + else: + layer_outputs = layer( + hidden_states, + attention_mask=attention_mask, + past_key_values=past_key_values, + output_attentions=output_attentions, + use_cache=use_cache + ) + + hidden_states = layer_outputs[0] + + if use_cache: + next_decoder_cache = layer_outputs[2 if output_attentions else 1] + + if output_attentions: + all_attns += (layer_outputs[1],) + + hidden_states = self.norm(hidden_states) + + # add hidden states from the last decoder layer + if output_hidden_states: + all_hidden_states += (hidden_states,) + + next_cache = None + if use_cache: + next_cache = next_decoder_cache.to_legacy_cache() if use_legacy_cache else next_decoder_cache + if not return_dict: + return tuple(v for v in [hidden_states, next_cache, all_hidden_states, all_attns] if v is not None) + + return BaseModelOutputWithPast( + last_hidden_state=hidden_states, + past_key_values=next_cache, + hidden_states=all_hidden_states, + attentions=all_attns + ) + + +class TransformerForCausalLM(TransformerPreTrainedModel): + _tied_weights_keys = ["lm_head.weight"] + + def __init__(self, config): + super().__init__(config) + self.model = TransformerModel(config) + self.vocab_size = config.vocab_size + self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False) + + # Initialize weights and apply final processing + self.post_init() + + def 
get_input_embeddings(self): + return self.model.embeddings + + def set_input_embeddings(self, value): + self.model.embeddings = value + + def get_output_embeddings(self): + return self.lm_head + + def set_output_embeddings(self, new_embeddings): + self.lm_head = new_embeddings + + def set_decoder(self, decoder): + self.model = decoder + + def get_decoder(self): + return self.model + + def prepare_inputs_for_generation( + self, + input_ids: torch.LongTensor = None, + past_key_values: Optional[torch.Tensor] = None, + attention_mask: Optional[torch.Tensor] = None, + inputs_embeds: Optional[torch.Tensor] = None, + **kwargs + ): + # only use the last token of `input_ids` if `past_key_values` is passed along. + if past_key_values is not None: + input_ids = input_ids[:, -1:] + # if `inputs_embeds` are passed, we only want to use them in the 1st generation step + if inputs_embeds is not None and past_key_values is None: + model_inputs = {'inputs_embeds': inputs_embeds} + else: + # The `contiguous()` here is necessary to have a static stride during decoding. torchdynamo otherwise + # recompiles graphs as the stride of the inputs is a guard. + # Ref: https://github.com/huggingface/transformers/pull/29114 + # TODO: use `next_tokens` directly instead. 
+ model_inputs = {'input_ids': input_ids.contiguous()} + + model_inputs.update({ + 'past_key_values': past_key_values, + 'use_cache': kwargs.get('use_cache'), + 'attention_mask': attention_mask, + }) + return model_inputs + + def forward( + self, + input_ids: torch.LongTensor = None, + attention_mask: Optional[torch.Tensor] = None, + past_key_values: Optional[List[torch.FloatTensor]] = None, + inputs_embeds: Optional[torch.FloatTensor] = None, + labels: Optional[torch.LongTensor] = None, + use_cache: Optional[bool] = None, + output_attentions: Optional[bool] = None, + output_hidden_states: Optional[bool] = None, + return_dict: Optional[bool] = None, + ) -> Union[Tuple, CausalLMOutputWithPast]: + output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions + output_hidden_states = ( + output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states + ) + return_dict = return_dict if return_dict is not None else self.config.use_return_dict + + outputs = self.model( + input_ids=input_ids, + attention_mask=attention_mask, + past_key_values=past_key_values, + inputs_embeds=inputs_embeds, + use_cache=use_cache, + output_attentions=output_attentions, + output_hidden_states=output_hidden_states, + return_dict=return_dict + ) + + hidden_states = outputs[0] + logits = self.lm_head(hidden_states) + + loss = None + if labels is not None: + if self.config.fuse_cross_entropy: + loss_fct = FusedCrossEntropyLoss(inplace_backward=True) + else: + loss_fct = nn.CrossEntropyLoss() + # Enable model parallelism + labels = labels.to(logits.device) + labels = torch.cat((labels[..., 1:], torch.full_like(labels[:, :1], loss_fct.ignore_index)), 1) + loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1)) + + if not return_dict: + output = (logits,) + outputs[1:] + return (loss,) + output if loss is not None else output + + return CausalLMOutputWithPast( + loss=loss, + logits=logits, + 
past_key_values=outputs.past_key_values, + hidden_states=outputs.hidden_states, + attentions=outputs.attentions, + ) diff --git a/fla/models/utils.py b/fla/models/utils.py new file mode 100644 index 0000000000000000000000000000000000000000..ed51f7d7a3807c2b30373e3f9d33f6bee5824e02 --- /dev/null +++ b/fla/models/utils.py @@ -0,0 +1,107 @@ +# -*- coding: utf-8 -*- + +from __future__ import annotations + +from typing import Any, Dict, List, Optional, Tuple + +import torch +from transformers.cache_utils import Cache + + +class RecurrentCache(Cache): + """ + A cache used for storing hidden states produced by flash linear attention models. + + It stores the states of each layer as the tensor of shape `[batch_size, key_dim, value_dim]`. + """ + + def __init__( + self, + seen_tokens: int = 0 + ) -> RecurrentCache: + + self.states: List[torch.Tensor] = [] + self._seen_tokens = seen_tokens # Used in `generate` to keep tally of how many tokens the cache has seen + + def __getitem__(self, layer_idx: int) -> torch.Tensor: + if layer_idx < len(self): + return self.states[layer_idx] + else: + raise KeyError(f"Cache only has {len(self)} layers, attempted to access layer with index {layer_idx}") + + def __iter__(self): + for state in self.states: + yield state + + def __len__(self): + return len(self.states) + + def update( + self, + state: Tuple[torch.Tensor], + layer_idx: int, + offset: Optional[int] = 1, + cache_kwargs: Optional[Dict[str, Any]] = None, + ) -> Tuple[torch.Tensor]: + """ + Updates the cache with the new `state` for the layer `layer_idx`. + + Parameters: + state (`Tuple[torch.Tensor]`): + The new state to cache. + layer_idx (`int`): + The index of the layer to cache the states for. + offset (`int`): + The offset of current fed tokens. + cache_kwargs (`Dict[str, Any]`, `optional`): + Additional arguments for the cache subclass. + + Return: + The updated state. 
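        As a rough illustration of the update flow documented here (a hedged sketch: the `TinyRecurrentCache` name and its fields are hypothetical and not part of this module; it also simplifies by taking the layer count upfront), a recurrent cache overwrites one fixed-size state per layer instead of growing a per-token key/value history:

        ```python
        import torch

        # Hypothetical minimal sketch of the update semantics (not the real class):
        # one fixed-size state tensor per layer, overwritten in place each step,
        # with the token counter advanced once per full sweep over the layers.
        class TinyRecurrentCache:
            def __init__(self, n_layers: int):
                self.n_layers = n_layers
                self.states = [None] * n_layers
                self.seen_tokens = 0

            def update(self, state: torch.Tensor, layer_idx: int, offset: int = 1):
                if self.states[layer_idx] is None:
                    self.states[layer_idx] = state.clone()   # first step: materialize
                else:
                    self.states[layer_idx].copy_(state)      # later: in-place overwrite
                if layer_idx == self.n_layers - 1:           # count tokens once per sweep
                    self.seen_tokens += offset
                return self.states[layer_idx]

        cache = TinyRecurrentCache(n_layers=2)
        for step in range(3):                                # three decoding steps
            for layer in range(2):
                cache.update(torch.randn(1, 4, 4), layer)
        ```

        Memory stays constant in sequence length, which is the point of recurrent (linear-attention) decoding.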
+ """ + + if isinstance(state, torch.Tensor): + state = (state,) + if len(self.states) <= layer_idx: + self.states.append(state) + else: + for i, s in enumerate(state): + self.states[layer_idx][i].copy_(s) + # update the number of seen tokens once we achieve the last layer + if layer_idx == len(self) - 1: + self._seen_tokens += offset + + return state + + def get_seq_length(self, layer_idx: Optional[int] = 0) -> int: + """Returns the sequence length of the cached states. A layer index can be optionally passed.""" + if len(self.states) <= layer_idx: + return 0 + return self._seen_tokens + + def get_max_length(self) -> Optional[int]: + """Returns the maximum sequence length of the cached states. RecurrentCache does not have a maximum length.""" + return None + + def reorder_cache(self, beam_idx: torch.LongTensor): + """Reorders the cache for beam search, given the selected beam indices.""" + for layer_idx in range(len(self.states)): + device = self.states[layer_idx].device + self.states[layer_idx] = self.states[layer_idx].index_select(0, beam_idx.to(device)) + + def to_legacy_cache(self) -> Tuple[torch.Tensor]: + return tuple(self.states) + + @classmethod + def from_legacy_cache( + cls, + past_key_values: Optional[Tuple[torch.Tensor]] = None, + seen_tokens: int = 0 + ) -> RecurrentCache: + """Converts a cache in the legacy cache format into an equivalent `RecurrentCache`.""" + + cache = cls(seen_tokens) + if past_key_values is not None: + for layer_idx in range(len(past_key_values)): + cache.update(past_key_values[layer_idx], layer_idx) + return cache diff --git a/fla/modules/__init__.py b/fla/modules/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..4874b9d02f88d0f691842f1e76c6f7a18788c37b --- /dev/null +++ b/fla/modules/__init__.py @@ -0,0 +1,20 @@ +# -*- coding: utf-8 -*- + +from fla.modules.convolution import (ImplicitLongConvolution, LongConvolution, + ShortConvolution) +from fla.modules.fused_cross_entropy import 
FusedCrossEntropyLoss +from fla.modules.fused_norm_gate import (FusedLayerNormSwishGate, + FusedLayerNormSwishGateLinear, + FusedRMSNormSwishGate, + FusedRMSNormSwishGateLinear) +from fla.modules.layernorm import (LayerNorm, LayerNormLinear, RMSNorm, + RMSNormLinear) +from fla.modules.rotary import RotaryEmbedding + +__all__ = [ + 'ImplicitLongConvolution', 'LongConvolution', 'ShortConvolution', + 'FusedCrossEntropyLoss', + 'LayerNorm', 'LayerNormLinear', 'RMSNorm', 'RMSNormLinear', + 'FusedLayerNormSwishGate', 'FusedLayerNormSwishGateLinear', 'FusedRMSNormSwishGate', 'FusedRMSNormSwishGateLinear', + 'RotaryEmbedding' +] diff --git a/fla/modules/activations.py b/fla/modules/activations.py new file mode 100644 index 0000000000000000000000000000000000000000..46d86c5cfa096c1409ba87a25c8d3e64c6b84c78 --- /dev/null +++ b/fla/modules/activations.py @@ -0,0 +1,394 @@ +# -*- coding: utf-8 -*- + +# Copyright (c) 2023-2024, Tri Dao, Yu Zhang, Songlin Yang. + +import torch +import torch.nn.functional as F +import triton +import triton.language as tl + +from fla.utils import contiguous + +sigmoid_fwd_codestring = """ +template T sigmoid_fwd(T x) { + return 1.0f / (1.0f + ::exp(-float(x))); +} +""" +sigmoid_bwd_codestring = """ +template T sigmoid_bwd(T x, T g) { + float x_sigmoid = 1.0f / (1.0f + ::exp(-float(x))); + return float(g) * x_sigmoid * (1.0f - x_sigmoid); +} +""" + +sigmoid_fwd = torch.cuda.jiterator._create_jit_fn(sigmoid_fwd_codestring) +sigmoid_bwd = torch.cuda.jiterator._create_jit_fn(sigmoid_bwd_codestring) + + +class SigmoidFunction(torch.autograd.Function): + + @staticmethod + def forward(ctx, x): + ctx.save_for_backward(x) + return sigmoid_fwd(x) + + @staticmethod + def backward(ctx, dout): + x, = ctx.saved_tensors + return sigmoid_bwd(x, dout) + + +sigmoid = SigmoidFunction.apply + + +@triton.autotune( + configs=[ + triton.Config({'BT': 16}, num_warps=2), + triton.Config({'BT': 16}, num_warps=4), + triton.Config({'BT': 16}, num_warps=8), + 
triton.Config({'BT': 32}, num_warps=2), + triton.Config({'BT': 32}, num_warps=4), + triton.Config({'BT': 32}, num_warps=8), + triton.Config({'BT': 64}, num_warps=2), + triton.Config({'BT': 64}, num_warps=4), + triton.Config({'BT': 64}, num_warps=8), + triton.Config({'BT': 128}, num_warps=2), + triton.Config({'BT': 128}, num_warps=4), + triton.Config({'BT': 128}, num_warps=8), + triton.Config({'BT': 256}, num_warps=2), + triton.Config({'BT': 256}, num_warps=4), + triton.Config({'BT': 256}, num_warps=8) + ], + key=['D'] +) +@triton.jit +def logsigmoid_fwd_kernel( + x, + y, + T: tl.constexpr, + D: tl.constexpr, + BT: tl.constexpr +): + i = tl.program_id(0) + o_i = i * BT + tl.arange(0, BT) + + p_x = x + o_i + p_y = y + o_i + mask = o_i < T + + # [D,] + b_x = tl.load(p_x, mask=mask, other=0.).to(tl.float32) + b_m = tl.minimum(0., b_x) + b_z = 1. + tl.exp(-tl.abs(b_x)) + b_y = b_m - tl.log(b_z) + tl.store(p_y, b_y.to(p_y.dtype.element_ty), mask=mask) + + +@triton.autotune( + configs=[ + triton.Config({'BT': 16}, num_warps=2), + triton.Config({'BT': 16}, num_warps=4), + triton.Config({'BT': 16}, num_warps=8), + triton.Config({'BT': 32}, num_warps=2), + triton.Config({'BT': 32}, num_warps=4), + triton.Config({'BT': 32}, num_warps=8), + triton.Config({'BT': 64}, num_warps=2), + triton.Config({'BT': 64}, num_warps=4), + triton.Config({'BT': 64}, num_warps=8), + triton.Config({'BT': 128}, num_warps=2), + triton.Config({'BT': 128}, num_warps=4), + triton.Config({'BT': 128}, num_warps=8), + triton.Config({'BT': 256}, num_warps=2), + triton.Config({'BT': 256}, num_warps=4), + triton.Config({'BT': 256}, num_warps=8) + ], + key=['D'] +) +@triton.jit +def logsigmoid_bwd_kernel( + x, + dx, + dy, + T: tl.constexpr, + D: tl.constexpr, + BT: tl.constexpr +): + i = tl.program_id(0) + o_i = i * BT + tl.arange(0, BT) + + p_x = x + o_i + p_dx = dx + o_i + p_dy = dy + o_i + mask = o_i < T + + # [D,] + b_x = tl.load(p_x, mask=mask, other=0.).to(tl.float32) + b_dy = tl.load(p_dy, mask=mask, 
other=0.).to(tl.float32) + b_dx = b_dy * (1. - tl.sigmoid(b_x)) + tl.store(p_dx, b_dx.to(p_dx.dtype.element_ty), mask=mask) + + +class LogSigmoidFunction(torch.autograd.Function): + + @staticmethod + @contiguous + def forward(ctx, x): + T, D = x.numel(), x.shape[-1] + y = torch.empty_like(x) + logsigmoid_fwd_kernel[lambda meta: (triton.cdiv(meta['T'], meta['D']),)](x, y, T=T, D=D) + ctx.save_for_backward(x,) + return y + + @staticmethod + @contiguous + def backward(ctx, dy): + x, = ctx.saved_tensors + T, D = x.numel(), x.shape[-1] + dx = torch.empty_like(x) + logsigmoid_bwd_kernel[lambda meta: (triton.cdiv(meta['T'], meta['D']),)](x, dx, dy, T=T, D=D) + return dx + + +logsigmoid = LogSigmoidFunction.apply + +swish_fwd_codestring = """ +template T swish_fwd(T x) { + float x_sigmoid = 1.0f / (1.0f + ::exp(-float(x))); + return float(x) * x_sigmoid; +} +""" +swish_bwd_codestring = """ +template T swish_bwd(T x, T g) { + float x_sigmoid = 1.0f / (1.0f + ::exp(-float(x))); + return float(g) * x_sigmoid * (1.0f - float(x) * x_sigmoid + float(x)); +} +""" + +swish_fwd = torch.cuda.jiterator._create_jit_fn(swish_fwd_codestring) +swish_bwd = torch.cuda.jiterator._create_jit_fn(swish_bwd_codestring) + + +class SwishFunction(torch.autograd.Function): + + @staticmethod + def forward(ctx, x): + ctx.save_for_backward(x) + return swish_fwd(x) + + @staticmethod + def backward(ctx, dout): + x, = ctx.saved_tensors + return swish_bwd(x, dout) + + +swish = SwishFunction.apply + +# 1/sqrt(2*pi)-> 0.3989423 +# 1/sqrt(2) -> 0.70710678 +# sqrt(2/pi) -> 0.79788456 + + +# this function is tanh approximation of gelu +# actual gelu is: +# x * 0.5 * (1.0 + torch.erf(x * 0.70710678)) +@torch.jit.script +def bias_gelu(y, bias): + x = bias + y + return (x * 0.5 * (1.0 + torch.tanh(0.79788456 * x * (1 + 0.044715 * x * x)))).to(dtype=y.dtype) + + +# gradient of tanh approximation of gelu +# gradient of actual gelu is: +# 0.5 * (1. 
+ torch.erf(x * 0.70710678)) + 0.3989423 * x * torch.exp(-0.5 * x * x) +@torch.jit.script +def bias_gelu_bwd(g, y, bias): + """Assume that y has shape (B, D) and bias has shape (D)""" + x = bias + y + tanh_out = torch.tanh(0.79788456 * x * (1 + 0.044715 * x * x)) + # sqrt(2/pi) * 3 * 0.044715 -> 0.1070322243 + ff = 0.5 * x * ((1 - tanh_out * tanh_out) * (0.79788456 + 0.1070322243 * x * x)) + 0.5 * ( + 1 + tanh_out + ) + grad_y = ff * g + return grad_y.to(dtype=y.dtype), grad_y.sum(dim=(0), dtype=bias.dtype) + + +class GeLUFunction(torch.autograd.Function): + + @staticmethod + # bias is an optional argument + def forward(ctx, input, bias): + ctx.save_for_backward(input, bias) + return bias_gelu(input, bias) + + @staticmethod + def backward(ctx, grad_output): + input, bias = ctx.saved_tensors + tmp = bias_gelu_bwd(grad_output, input, bias) + return tmp, tmp + + +bias_gelu_impl = GeLUFunction.apply + + +# this function is tanh approximation of gelu +# actual gelu is: +# x * 0.5 * (1.0 + torch.erf(x * 0.70710678)) +@torch.jit.script +def gelu_fwd(x): + return (x * 0.5 * (1.0 + torch.tanh(0.79788456 * x * (1 + 0.044715 * x * x)))).to(dtype=x.dtype) + + +# gradient of tanh approximation of gelu +# gradient of actual gelu is: +# 0.5 * (1. 
+ torch.erf(x * 0.70710678)) + 0.3989423 * x * torch.exp(-0.5 * x * x) +@torch.jit.script +def gelu_bwd(g, x): + tanh_out = torch.tanh(0.79788456 * x * (1 + 0.044715 * x * x)) + # sqrt(2/pi) * 3 * 0.044715 -> 0.1070322243 + ff = 0.5 * x * ((1 - tanh_out * tanh_out) * (0.79788456 + 0.1070322243 * x * x)) + 0.5 * ( + 1 + tanh_out + ) + return (ff * g).to(dtype=x.dtype) + + +class FastGeLUFunction(torch.autograd.Function): + @staticmethod + # bias is an optional argument + def forward(ctx, input): + ctx.save_for_backward(input) + return gelu_fwd(input) + + @staticmethod + def backward(ctx, grad_output): + (input,) = ctx.saved_tensors + tmp = gelu_bwd(grad_output, input) + return tmp + + +fast_gelu_impl = FastGeLUFunction.apply + + +@torch.jit.script +def relu_bwd(g, x): + return torch.where(x >= 0, g, 0.0).to(dtype=x.dtype) + + +@torch.jit.script +def sqrelu_fwd(x): + r = F.relu(x) + return (r * r).to(dtype=x.dtype) + + +@torch.jit.script +def sqrelu_bwd(g, x): + return (2.0 * g * F.relu(x)).to(dtype=x.dtype) + + +class SquaredReLUFunction(torch.autograd.Function): + + @staticmethod + def forward(ctx, input): + ctx.save_for_backward(input) + return sqrelu_fwd(input) + + @staticmethod + def backward(ctx, grad_output): + input, = ctx.saved_tensors + return sqrelu_bwd(grad_output, input) + + +sqrelu = SquaredReLUFunction.apply + + +swiglu_fwd_codestring = """ +template T swiglu_fwd(T x, T y) { + return float(x) * float(y) / (1.0f + ::exp(-float(x))); +} +""" +swiglu_bwd_codestring = """ +template T swiglu_bwd(T x, T y, T g, T& dx, T& dy) { + float x_sigmoid = 1.0f / (1.0f + ::exp(-float(x))); + dx = x_sigmoid * (1 + float(x) * (1.0f - x_sigmoid)) * float(g) * float(y); + dy = float(x) * x_sigmoid * float(g); +} +""" + +swiglu_bwd_with_output_codestring = """ +template T swiglu_bwd_with_output(T x, T y, T g, T& dx, T& dy, T& z) { + float x_sigmoid = 1.0f / (1.0f + ::exp(-float(x))); + float x_swish = float(x) * x_sigmoid; + dx = x_sigmoid * (1 + float(x) * (1.0f - 
x_sigmoid)) * float(g) * float(y); + dy = x_swish * float(g); + z = x_swish * float(y); +} +""" + +swiglu_fwd = torch.cuda.jiterator._create_jit_fn(swiglu_fwd_codestring) +swiglu_bwd = torch.cuda.jiterator._create_multi_output_jit_fn(swiglu_bwd_codestring, num_outputs=2) +swiglu_bwd_with_output = torch.cuda.jiterator._create_multi_output_jit_fn(swiglu_bwd_with_output_codestring, num_outputs=3) + + +class SwiGLUFunction(torch.autograd.Function): + r""" + Swish-Gated Linear Unit (SwiGLU) function. + + .. math:: + \text{SwiGLU}(x, y) = swish(x) * y = \frac{x}{1 + \exp(-x)} * y + """ + + @staticmethod + def forward(ctx, x, y): + ctx.save_for_backward(x, y) + return swiglu_fwd(x, y) + + @staticmethod + def backward(ctx, dout): + x, y = ctx.saved_tensors + return swiglu_bwd(x, y, dout) + + +class SwiGLULinearFunction(torch.autograd.Function): + r""" + Swish-Gated Linear Unit (SwiGLU) function followed by a linear transformation. + + .. math:: + \text{SwiGLULinear}(x, y, W, b) = (swish(x) * y) W + b + + This simple wrap discards the intermediate results of SwiGLU(x, y) to save memory. 
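    A plain-PyTorch reference of the same computation (a CPU sketch of the math only; the fused version uses CUDA jiterator kernels and recomputes `swish(x) * y` in the backward pass):

    ```python
    import torch
    import torch.nn.functional as F

    # Reference (unfused) SwiGLU + linear: out = (swish(x) * y) @ W^T + b.
    def swiglu_linear_ref(x, y, weight, bias=None):
        z = F.silu(x) * y                  # swish(x) * y, the SwiGLU gate
        return F.linear(z, weight, bias)   # x, y: [..., d_in]; weight: [d_out, d_in]

    x = torch.randn(2, 8)
    y = torch.randn(2, 8)
    w = torch.randn(4, 8)
    out = swiglu_linear_ref(x, y, w)       # shape [2, 4]
    ```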
+ """ + + @staticmethod + def forward(ctx, x, y, weight, bias): + z = swiglu_fwd(x, y) + out = F.linear(z.to(weight.dtype), weight, bias) + # We don't store z, will be recomputed in the backward pass to save memory + ctx.save_for_backward(x, y, weight) + ctx.linear_bias_is_none = bias is None + return out + + @staticmethod + def backward(ctx, dout, *args): + x, y, weight = ctx.saved_tensors + dout = dout.reshape(-1, dout.shape[-1]) + dz = F.linear(dout, weight.t()).view_as(x) + dx, dy, z = swiglu_bwd_with_output(x, y, dz) + dlinear_weight = torch.einsum("bo,bi->oi", dout, z.reshape(-1, z.shape[-1])) + dlinear_bias = None if ctx.linear_bias_is_none else dout.sum(0) + return dx, dy, dlinear_weight, dlinear_bias + + +swiglu = SwiGLUFunction.apply + +swiglu_linear = SwiGLULinearFunction.apply + +ACT2FN = { + 'relu': F.relu, + 'sigmoid': sigmoid, + 'logsigmoid': logsigmoid, + 'silu': swish, + 'swish': swish, + 'sqrelu': sqrelu, + 'gelu': fast_gelu_impl, + 'bias_gelu': bias_gelu_impl, +} diff --git a/fla/modules/convolution.py b/fla/modules/convolution.py new file mode 100644 index 0000000000000000000000000000000000000000..3e2e07d03bf31b0efc1e2c3d723c7243f937e615 --- /dev/null +++ b/fla/modules/convolution.py @@ -0,0 +1,336 @@ +# -*- coding: utf-8 -*- + +# from https://github.com/HazyResearch/zoology/blob/main/zoology/mixers/convolution.py + +import math +import warnings +from typing import Optional + +import torch +import torch.nn as nn +import torch.nn.functional as F +from einops import rearrange + +from fla.modules.activations import ACT2FN +from fla.utils import checkpoint + +try: + from causal_conv1d import causal_conv1d_fn, causal_conv1d_update +except ImportError: + causal_conv1d_fn = None + causal_conv1d_update = None + + +def fft_conv(u, k, dropout_mask, gelu=True, k_rev=None): + seqlen = u.shape[-1] + fft_size = 2 * seqlen + k_f = torch.fft.rfft(k, n=fft_size) / fft_size + if k_rev is not None: + k_rev_f = torch.fft.rfft(k_rev, n=fft_size) / fft_size + k_f = 
k_f + k_rev_f.conj() + u_f = torch.fft.rfft(u.to(dtype=k.dtype), n=fft_size) + + if len(u.shape) > 3: + k_f = k_f.unsqueeze(1) + y = torch.fft.irfft(u_f * k_f, n=fft_size, norm="forward")[..., :seqlen] + + out = y + u + if gelu: + out = F.gelu(out) + if dropout_mask is not None: + return (out * rearrange(dropout_mask, "b H -> b H 1")).to(dtype=u.dtype) + else: + return out.to(dtype=u.dtype) + + +@checkpoint +def proj_then_conv1d( + x: torch.Tensor, + proj_weight: torch.Tensor, + conv1d_weight: torch.Tensor, + conv1d_bias: Optional[torch.Tensor] = None, + cache: Optional[torch.Tensor] = None +) -> torch.Tensor: + # We do matmul and transpose BLH -> HBL at the same time + x = rearrange(proj_weight @ rearrange(x, "b l d -> d (b l)"), "d (b l) -> b d l", l=x.shape[-2]) + + if causal_conv1d_fn is None: + raise ImportError("`causal_conv1d_fn` is not available. Please install `causal-conv1d` first.") + if cache is None: + x = causal_conv1d_fn( + x=x, + weight=rearrange(conv1d_weight, "d 1 w -> d w"), + bias=conv1d_bias, + activation="silu", + ).transpose(1, 2) + else: + assert x.shape[-1] == 1, "Only support decoding with 1 token at a time for now" + x = x.squeeze(-1) + x = causal_conv1d_update( + x=x, + weight=rearrange(conv1d_weight, "d 1 w -> d w"), + bias=conv1d_bias, + cache=cache, + activation="silu", + ) + return x + + +class ShortConvolution(nn.Conv1d): + """ + Simple wrapper around `nn.Conv1d` that accepts dimension last. + """ + + def __init__( + self, + hidden_size: int, + kernel_size: int, + bias: bool = False, + activation: Optional[str] = 'silu', + use_causal_conv: Optional[bool] = True + ): + super().__init__(in_channels=hidden_size, + out_channels=hidden_size, + kernel_size=kernel_size, + groups=hidden_size, + bias=bias, + padding=kernel_size - 1) + + self.hidden_size = hidden_size + self.activation = None + if activation is not None: + assert activation in ['silu', 'swish'], f"Activation `{activation}` not supported yet." 
+ self.activation = activation + + if use_causal_conv: + if causal_conv1d_fn is None: + warnings.warn("Please install `causal-conv1d` to use causal convolutions, setting `use_causal_conv` to False.") + use_causal_conv = False + self.use_causal_conv = use_causal_conv + + def extra_repr(self): + s = ('{in_channels}, {out_channels}, kernel_size={kernel_size}' + ', stride={stride}') + if self.padding != (0,) * len(self.padding): + s += ', padding={padding}' + if self.dilation != (1,) * len(self.dilation): + s += ', dilation={dilation}' + if self.output_padding != (0,) * len(self.output_padding): + s += ', output_padding={output_padding}' + if self.groups != 1: + s += ', groups={groups}' + if self.bias is None: + s += ', bias=False' + if self.padding_mode != 'zeros': + s += ', padding_mode={padding_mode}' + if self.activation is not None: + s += ', activation={activation}' + if not self.use_causal_conv: + s += ', use_causal_conv={use_causal_conv}' + return s.format(**self.__dict__) + + def forward( + self, + x: torch.Tensor, + mask: Optional[torch.Tensor] = None, + cache: Optional[torch.Tensor] = None + ) -> torch.Tensor: + """ + Args: + x (`torch.Tensor`): + Tensor of shape `[batch_size, seq_len, hidden_size]` + mask (`Optional[torch.Tensor]`): + Attention mask dealing with padded positions. + cache (`Optional[torch.Tensor]`): + Previous cache tensor of shape `[batch_size, hidden_size, kernel_size]`, + Returns: + Tensor of shape `[batch_size, seq_len, hidden_size]`. The `cache` (if provided) is updated inplace. 
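            The non-`causal_conv1d` fallback path can be reproduced with a plain depthwise `nn.Conv1d` (a sketch with arbitrary sizes): left-padding by `kernel_size - 1` and trimming the right end makes the convolution causal.

            ```python
            import torch
            import torch.nn as nn

            # Depthwise Conv1d with padding k-1, trimmed back to seq_len, is causal:
            # output[t] only depends on input[t-k+1 .. t].
            k, d, seq_len = 4, 8, 16
            conv = nn.Conv1d(d, d, kernel_size=k, groups=d, padding=k - 1, bias=False)
            x = torch.randn(2, seq_len, d)                 # [batch, seq_len, hidden]
            y = conv(x.transpose(1, 2))[..., :seq_len]     # drop the k-1 extra right frames
            y = y.transpose(1, 2)                          # back to [batch, seq_len, hidden]
            ```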
+ """ + + if mask is not None: + x = x.mul_(mask.unsqueeze(-1)) + if cache is not None and x.shape[1] == 1: + return self.step(x, cache) + x = rearrange(x, "b l d -> b d l") + # Update state (B D W) + if cache is not None: + cache.copy_(F.pad(x, (self.kernel_size[0] - x.shape[-1], 0))) + if self.use_causal_conv: + x = causal_conv1d_fn( + x=x, + weight=rearrange(self.weight, "d 1 w -> d w"), + bias=self.bias, + activation=self.activation, + ) + else: + x = self._conv_forward(x, self.weight, self.bias)[..., :x.shape[-1]] + if self.activation is not None: + x = ACT2FN[self.activation](x) + return rearrange(x, "b d l -> b l d") + + def step( + self, + x: torch.Tensor, + cache: torch.Tensor + ): + assert x.shape[1] == 1, "Only support decoding with 1 token at a time for now" + + x = x.squeeze(1) + if self.use_causal_conv: + x = causal_conv1d_update( + x=x, + conv_state=cache, + weight=rearrange(self.weight, "d 1 w -> d w"), + bias=self.bias, + activation=self.activation, + ) + else: + dtype = x.dtype + cache.copy_(torch.roll(cache, shifts=-1, dims=-1)) + cache[:, :, -1] = x + x = torch.sum(cache * rearrange(self.weight, "d 1 w -> d w"), dim=-1) + if self.bias is not None: + x = x + self.bias + if self.activation is not None: + x = ACT2FN[self.activation](x).to(dtype=dtype) + return x.unsqueeze(1) + + @property + def state_size(self) -> int: + return self.hidden_size * self.kernel_size + + +class LongConvolution(nn.Module): + """ + LongConvolution applies a convolution operation on the input tensor using a fixed + filter of length l_max. + The filter is learned during training and is applied using FFT convolution. + Args: + hidden_size (int): The number of expected features in the input and output. + l_max (int): The maximum sequence length. + Returns: + y: (b, l, d) tensor + """ + + def __init__( + self, + hidden_size: int, + l_max: int, + **kwargs, + ): + """ + Initializes the LongConvolution module. 
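        As a sanity check of the FFT trick used by `fft_conv` (a simplified sketch with default FFT normalization, omitting the skip connection and GELU): zero-padding to `2 * seqlen` turns circular convolution into causal linear convolution.

        ```python
        import torch

        # Circular FFT convolution over a 2*seqlen window; the first seqlen outputs
        # equal the causal (linear) convolution of u with filter k.
        def causal_fft_conv(u: torch.Tensor, k: torch.Tensor) -> torch.Tensor:
            seqlen = u.shape[-1]
            fft_size = 2 * seqlen
            u_f = torch.fft.rfft(u, n=fft_size)
            k_f = torch.fft.rfft(k, n=fft_size)
            return torch.fft.irfft(u_f * k_f, n=fft_size)[..., :seqlen]

        u = torch.randn(4, 32)     # [batch, seq_len]
        k = torch.randn(32)        # one filter tap per lag
        y = causal_fft_conv(u, k)
        ```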
+ Args: + hidden_size (int): The number of expected features in the input and output. + l_max (int): The maximum sequence length. + """ + super().__init__() + self.hidden_size = hidden_size + self.filter = nn.Parameter(torch.randn(self.hidden_size, l_max), requires_grad=True) + + def forward(self, x: torch.Tensor, *args, **kwargs): + """ + Applies the LongConvolution operation on the input tensor. + Args: + x: (b, l, d) tensor + Returns: + y: (b, l, d) tensor + """ + x = x.transpose(1, 2) + y = fft_conv(x, self.filter, dropout_mask=None, gelu=False) + y = y.transpose(1, 2) + return y.to(dtype=x.dtype) + + +class PositionalEmbedding(nn.Module): + def __init__(self, emb_dim: int, seq_len: int, **kwargs): + """Complex exponential positional embeddings for implicit long convolution filters.""" + super().__init__() + + self.seq_len = seq_len + # The time embedding fed to the filteres is normalized so that t_f = 1 + t = torch.linspace(0, 1, self.seq_len)[None, :, None] # 1, L, 1 + + if emb_dim > 1: + bands = (emb_dim - 1) // 2 + # To compute the right embeddings we use the "proper" linspace + t_rescaled = torch.linspace(0, seq_len - 1, seq_len)[None, :, None] + w = 2 * math.pi * t_rescaled / seq_len # 1, L, 1 + + f = torch.linspace(1e-4, bands - 1, bands)[None, None] + z = torch.exp(-1j * f * w) + z = torch.cat([t, z.real, z.imag], dim=-1) + self.z = nn.Parameter(z, requires_grad=False) + + def forward(self, L): + return self.z[:, :L] + + +class ImplicitLongConvolution(nn.Module): + """ + Long convolution with implicit filter parameterized by an MLP. + + Args: + hidden_size (int): + The number of expected features in the input and output. + l_max (int): + The maximum sequence length. + d_emb (Optional[int]): + The dimension of the positional embeddings. Must be odd and greater or equal to 3 (time, sine and cosine). + Defaults to 3. + d_hidden (Optional[int]): + The number of features in the hidden layer of the MLP. Defaults to 16. 
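    A toy version of the implicit parameterization (hypothetical sizes; the real module uses the `PositionalEmbedding` defined above): the MLP's parameter count is independent of `l_max`, unlike the explicit `LongConvolution` filter.

    ```python
    import math
    import torch
    import torch.nn as nn

    # Map per-position features (time, cos, sin) through a small MLP to get
    # a [l_max, hidden_size] filter; parameters don't grow with l_max.
    l_max, d_emb, d_hidden, hidden_size = 64, 3, 16, 8
    t = torch.linspace(0, 1, l_max)[:, None]                      # [l_max, 1]
    w = 2 * math.pi * torch.arange(l_max)[:, None].float() / l_max
    pos = torch.cat([t, torch.cos(w), torch.sin(w)], dim=-1)      # [l_max, 3]
    mlp = nn.Sequential(nn.Linear(d_emb, d_hidden), nn.ReLU(),
                        nn.Linear(d_hidden, hidden_size))
    filt = mlp(pos)                                               # implicit filter
    ```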
+ + Attributes: + pos_emb (`PositionalEmbedding`): The positional embedding layer. + mlp (`nn.Sequential`): The MLP that parameterizes the implicit filter. + + """ + + def __init__( + self, + hidden_size: int, + l_max: int, + d_emb: int = 3, + d_hidden: int = 16, + **kwargs, + ): + """ + Long convolution with implicit filter parameterized by an MLP. + + + """ + super().__init__() + self.hidden_size = hidden_size + self.d_emb = d_emb + + assert ( + d_emb % 2 != 0 and d_emb >= 3 + ), "d_emb must be odd and greater or equal to 3 (time, sine and cosine)" + self.pos_emb = PositionalEmbedding(d_emb, l_max) + + # final linear layer + self.mlp = nn.Sequential( + nn.Linear(d_emb, d_hidden), + torch.nn.ReLU(), + nn.Linear(d_hidden, hidden_size), + ) + + def filter(self, seq_len: int, *args, **kwargs): + k = self.mlp(self.pos_emb(seq_len)) + + return k.transpose(1, 2) + + def forward(self, x: torch.Tensor, *args, **kwargs): + """ + Args: + x: (b, l, d) tensor + Returns: + y: (b, l, d) tensor + """ + x = x.transpose(1, 2) + k = self.filter(x.shape[-1]) + y = fft_conv(x, k, dropout_mask=None, gelu=False) + + y = y.transpose(1, 2) + return y.to(dtype=x.dtype) diff --git a/fla/modules/feature_map.py b/fla/modules/feature_map.py new file mode 100644 index 0000000000000000000000000000000000000000..43c3bb167f6b317d89bc51142c4184f78c9217bd --- /dev/null +++ b/fla/modules/feature_map.py @@ -0,0 +1,235 @@ +# -*- coding: utf-8 -*- + +from __future__ import annotations + +import math +from typing import Optional + +import torch +import torch.nn.functional as F +from torch import nn + +from fla.modules.layernorm import layer_norm_fn +from fla.utils import checkpoint + + +@checkpoint +def flatten_diag_outer_product(x, y): + z = torch.einsum("...i,...j->...ij", x, y) + N = z.size(-1) + indicies = torch.triu_indices(N, N) + return z[..., indicies[0], indicies[1]] + + +@checkpoint +def flatten_diag_outer_product_off1(x, y): + z = torch.einsum("...i,...j->...ij", x, y) + N = z.size(-1) + 
indicies = torch.triu_indices(N, N, 1) + indices2 = torch.arange(0, N) + return z[..., indicies[0], indicies[1]], z[..., indices2, indices2] + + +def is_power_of_2(n): + return (n & (n - 1) == 0) and n != 0 + + +class HedgehogFeatureMap(nn.Module): + + r""" + Hedgehog feature map as introduced in + `The Hedgehog & the Porcupine: Expressive Linear Attentions with Softmax Mimicry `_ + """ + + def __init__( + self, + head_dim: int + ) -> HedgehogFeatureMap: + super().__init__() + # Trainable map + self.layer = nn.Linear(head_dim, head_dim) + self.init_weights_() + + def init_weights_(self): + """Initialize trainable map as identity""" + with torch.no_grad(): + identity = torch.eye(*self.layer.weight.shape[-2:], dtype=torch.float) + self.layer.weight.copy_(identity.to(self.layer.weight)) + nn.init.zeros_(self.layer.bias) + + def forward(self, x: torch.Tensor): + x = self.layer(x) # shape b, h, l, d + return torch.cat([2*x, -2*x], dim=-1).softmax(-1) + + +class T2RFeatureMap(nn.Module): + + r""" + Simple linear mapping feature map as in + `Finetuning Pretrained Transformers into RNNs `_ + """ + + def __init__( + self, + head_dim: int, + dot_dim: int = None + ) -> T2RFeatureMap: + super().__init__() + # Trainable map + if dot_dim is None: + dot_dim = head_dim + self.layer = nn.Linear(head_dim, dot_dim) + + def forward(self, x: torch.Tensor): + return self.layer(x).relu() + + +class DPFPFeatureMap(nn.Module): + + r""" + Deterministic Parameter-Free Projection (DPFP) feature map in + `Linear Transformers Are Secretly Fast Weight Programmers `_ + """ + + def __init__( + self, + head_dim: int, + nu: int = 4 + ) -> DPFPFeatureMap: + super().__init__() + self.nu = nu + + def forward(self, x: torch.Tensor): + x = torch.cat([x.relu(), -x.relu()], dim=-1) + x_rolled = torch.cat([x.roll(shifts=j, dims=-1) for j in range(1, self.nu+1)], dim=-1) + x_repeat = torch.cat([x] * self.nu, dim=-1) + return x_repeat * x_rolled + + +class HadamardFeatureMap(nn.Module): + def __init__( + 
self, + head_dim: int + ) -> HadamardFeatureMap: + super().__init__() + # Trainable map + self.layer1 = nn.Linear(head_dim, head_dim) + self.layer2 = nn.Linear(head_dim, head_dim) + + def forward(self, x: torch.Tensor): + return self.layer1(x) * self.layer2(x) + + +class LearnableOuterProductFeatureMap(nn.Module): + def __init__( + self, + head_dim: int, + feature_dim: int + ) -> LearnableOuterProductFeatureMap: + super().__init__() + # Trainable map + self.layer1 = nn.Linear(head_dim, feature_dim, bias=False) + self.layer2 = nn.Linear(head_dim, feature_dim, bias=False) + self.normalizer = feature_dim ** -0.5 + + def forward(self, x: torch.Tensor): + return flatten_diag_outer_product(self.layer1(x), self.layer2(x)) + + +class LearnablePolySketchNonNegativeFeatureMap(nn.Module): + + def __init__( + self, + head_dim: int, + sketch_size: Optional[int] = None, + degree: Optional[int] = 2 + ) -> LearnablePolySketchNonNegativeFeatureMap: + super().__init__() + + assert is_power_of_2(degree) and degree >= 2, f"The degree {degree} must be a power of 2" + + self.head_dim = head_dim + self.sketch_size = sketch_size if sketch_size is not None else head_dim + self.degree = degree + + self.gamma = nn.Parameter(torch.ones(head_dim)) + self.beta = nn.Parameter(torch.zeros(head_dim)) + # NOTE: the sketch layers defined here are quite different from the original paper + # currently we simply use linear layers without any non-linear activations + self.sketches1 = nn.ModuleList([ + nn.Linear(head_dim, sketch_size, bias=False), + *[nn.Linear(sketch_size, sketch_size, bias=False) for _ in range(int(math.log2(self.degree)) - 2)] + ]) + self.sketches2 = nn.ModuleList([ + nn.Linear(head_dim, sketch_size, bias=False), + *[nn.Linear(sketch_size, sketch_size, bias=False) for _ in range(int(math.log2(self.degree)) - 2)] + ]) + + def forward(self, x: torch.Tensor): + # Section 2.1 + x = layer_norm_fn(x, self.gamma, self.beta) + # first map the input to sketch size with learnable parameters + x 
= self.sketches1[0](x) * self.sketches2[0](x) * self.head_dim ** -0.5 + for i in range(1, int(math.log2(self.degree)) - 1): + x = self.sketches1[i](x) * self.sketches2[i](x) * self.head_dim ** -0.5 + # do sketch mapping for log2(p) - 1 times in total + # do p=2 mapping to ensure non-negativity + return flatten_diag_outer_product(x, x) + + +class TaylorFeatureMap(nn.Module): + def __init__( + self, + head_dim: int + ) -> TaylorFeatureMap: + super().__init__() + self.head_dim = head_dim + self.r2 = math.sqrt(2) + self.rd = math.sqrt(self.head_dim) + self.rrd = math.sqrt(self.rd) + + def forward(self, x: torch.Tensor): + x2_1, x2_2 = flatten_diag_outer_product_off1(x, x) + return torch.cat([torch.ones_like(x[..., 0:1]), x / self.rrd, x2_2 / (self.rd * self.r2), x2_1 / self.rd], dim=-1) + + +class RebasedFeatureMap(nn.Module): + + def __init__( + self, + head_dim: int, + use_gamma: Optional[bool] = True, + use_beta: Optional[bool] = True, + normalize: Optional[bool] = True + ) -> RebasedFeatureMap: + super().__init__() + + self.head_dim = head_dim + self.use_gamma = use_gamma + self.use_beta = use_beta + self.normalize = normalize + + self.gamma = None + self.beta = None + if use_gamma: + self.gamma = nn.Parameter(torch.ones(head_dim)) + if use_beta: + self.beta = nn.Parameter(torch.zeros(head_dim)) + + def forward(self, x: torch.Tensor, flatten: Optional[bool] = True): + if self.use_beta and self.use_gamma and self.normalize: + x = layer_norm_fn(x, self.gamma, self.beta) + elif self.normalize: + x = F.layer_norm(x, (self.head_dim,), self.gamma, self.beta) + elif self.use_gamma and self.use_beta: + x = torch.addcmul(self.beta, x, self.gamma) + elif self.use_gamma: + x = x.mul(self.gamma) + else: + raise RuntimeError(f"Not supported combination of `use_gamma`, `use_beta` and `normalize`, " + f"which is currentlt set as (`{self.use_gamma}`, `{self.use_beta}`, `{self.normalize}`)") + if not flatten: + return x + x2_1, x2_2 = flatten_diag_outer_product_off1(x, x) + # 
rebased use learnable parameters to approximate any quadratic function + return torch.cat([x2_2 * self.head_dim ** -0.5, x2_1 * (2 / self.head_dim) ** 0.5], dim=-1) diff --git a/fla/modules/fused_cross_entropy.py b/fla/modules/fused_cross_entropy.py new file mode 100644 index 0000000000000000000000000000000000000000..3364680d414d31608b0a77204d62e4118ea80ee3 --- /dev/null +++ b/fla/modules/fused_cross_entropy.py @@ -0,0 +1,398 @@ +# -*- coding: utf-8 -*- + +# Copyright (c) 2023, Tri Dao. + +from typing import Tuple + +import torch +import torch.nn as nn +import triton +import triton.language as tl + +# `all_gather_into_tensor` and `reduce_scatter_tensor` are new placeholders for +# `_all_gather_base` and `_reduce_scatter_base`. They require the most recent +# version of PyTorch. The following 2 lines are for backward compatibility with +# older PyTorch. +if "all_gather_into_tensor" not in dir(torch.distributed): + torch.distributed.all_gather_into_tensor = torch.distributed._all_gather_base + + +@triton.heuristics( + { + "HAS_SMOOTHING": lambda args: args["smoothing"] > 0.0, + } +) +@triton.jit +def cross_entropy_fwd_kernel( + loss_ptr, # data ptrs + lse_ptr, + z_loss_ptr, + logits_ptr, + labels_ptr, + smoothing, + logit_scale, + lse_square_scale, + ignored_index, + total_classes, + class_start_idx, # Useful for tensor parallel when each rank only has a subset of classes + n_cols, # shapes + n_rows, + logits_row_stride, # strides + BLOCK_SIZE: tl.constexpr, + HAS_SMOOTHING: tl.constexpr, + # if SPLIT (e.g. 
tensor parallel), don't include the LSE in the loss since it's not the final LSE + SPLIT: tl.constexpr, +): + row_idx = tl.program_id(0) + col_block_idx = tl.program_id(1) + logits_ptr = logits_ptr + row_idx * logits_row_stride.to(tl.int64) + col_offsets = col_block_idx * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE) + label_idx = tl.load(labels_ptr + row_idx) + logits = tl.load(logits_ptr + col_offsets, mask=col_offsets < n_cols, other=-float("inf")).to( + tl.float32 + ) * logit_scale + max_logits = tl.max(logits, 0) + if HAS_SMOOTHING: + sum_logits = tl.sum(tl.where(col_offsets < n_cols, logits, 0.0), 0) + lse = tl.log(tl.sum(tl.exp(logits - max_logits), 0)) + max_logits + tl.store(lse_ptr + col_block_idx * n_rows + row_idx, lse) + if label_idx == ignored_index: + loss = 0.0 + z_loss = 0.0 + else: + label_idx -= class_start_idx + if label_idx >= col_block_idx * BLOCK_SIZE and label_idx < min( + n_cols, (col_block_idx + 1) * BLOCK_SIZE + ): + logits_label = tl.load(logits_ptr + label_idx) * logit_scale + if HAS_SMOOTHING: + loss = ( + (lse if not SPLIT else 0.0) + - smoothing * sum_logits / total_classes + - (1 - smoothing) * logits_label + ) + else: + loss = (lse if not SPLIT else 0.0) - logits_label + else: + # If label is out of bounds, we set the CE loss to 0.0. 
But we still want the smoothing loss + if HAS_SMOOTHING: + loss = smoothing * ((lse if not SPLIT else 0.0) - sum_logits / total_classes) + else: + loss = 0.0 + if not SPLIT: + z_loss = lse_square_scale * lse * lse + loss += z_loss + else: + z_loss = 0.0 + tl.store(loss_ptr + col_block_idx * n_rows + row_idx, loss) + if not SPLIT: + tl.store(z_loss_ptr + col_block_idx * n_rows + row_idx, z_loss) + + +@triton.heuristics( + { + "HAS_SMOOTHING": lambda args: args["smoothing"] > 0.0, + } +) +@triton.jit +def cross_entropy_bwd_kernel( + dlogits_ptr, # data ptrs + dloss_ptr, + logits_ptr, + lse_ptr, + labels_ptr, + smoothing, + logit_scale, + lse_square_scale, + ignored_index, + total_classes, + class_start_idx, # Useful for tensor parallel when each rank only has a subset of classes + n_cols, # shapes + logits_row_stride, # strides + dlogits_row_stride, + dloss_row_stride, + BLOCK_SIZE: tl.constexpr, + HAS_SMOOTHING: tl.constexpr, +): + row_idx = tl.program_id(0) + col_block_idx = tl.program_id(1) + logits_ptr = logits_ptr + row_idx * logits_row_stride.to(tl.int64) + dlogits_ptr = dlogits_ptr + row_idx * dlogits_row_stride.to(tl.int64) + col_offsets = col_block_idx * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE) + label_idx = tl.load(labels_ptr + row_idx) + if label_idx != ignored_index: + dloss = tl.load(dloss_ptr + row_idx * dloss_row_stride) + else: + dloss = 0.0 + logits = tl.load(logits_ptr + col_offsets, mask=col_offsets < n_cols, other=-float("inf")).to( + tl.float32 + ) * logit_scale + lse = tl.load(lse_ptr + row_idx) + probs = tl.exp(logits - lse) + probs += 2.0 * lse_square_scale * lse * probs + label_idx -= class_start_idx + if HAS_SMOOTHING: + smooth_negative = smoothing / total_classes + probs = tl.where(col_offsets == label_idx, probs - (1 - smoothing), probs) - smooth_negative + else: + probs = tl.where(col_offsets == label_idx, probs - 1.0, probs) + tl.store(dlogits_ptr + col_offsets, (dloss * logit_scale) * probs, mask=col_offsets < n_cols) + + +class 
CrossEntropyLossFunction(torch.autograd.Function): + + @staticmethod + def forward( + ctx, + logits, + labels, + smoothing=0.0, + logit_scale=1.0, + lse_square_scale=0.0, + ignored_index=-100, + inplace_backward=False, + process_group=None, + ): + n_rows, n_cols = logits.shape + assert labels.shape == (n_rows,) + world_size = 1 if process_group is None else torch.distributed.get_world_size(process_group) + total_classes = world_size * n_cols + rank = 0 if process_group is None else torch.distributed.get_rank(process_group) + class_start_idx = rank * n_cols + + if logits.stride(-1) != 1: + logits = logits.contiguous() + # Set these similar to https://github.com/openai/triton/blob/main/python/tutorials/02-fused-softmax.py + MAX_BLOCK_SIZE = 64 * 1024 + BLOCK_SIZE = min(triton.next_power_of_2(n_cols), MAX_BLOCK_SIZE) + num_warps = ( + 4 + if BLOCK_SIZE < 2048 + else (8 if BLOCK_SIZE < 8192 else (16 if BLOCK_SIZE < 128 * 1024 else 32)) + ) + # We may split the lse computation across multiple blocks, then do a reduction + # lse(local_lse) to get the final LSE. This is faster for large n_cols (e.g., > 64k) + # where having just one thread block processing more than 64k elements is slow. + split = world_size > 1 or n_cols > MAX_BLOCK_SIZE + n_splits = (n_cols + BLOCK_SIZE - 1) // BLOCK_SIZE + loss_shape = (n_splits, n_rows) if n_splits > 1 else (n_rows,) + losses = torch.empty(*loss_shape, dtype=torch.float, device=logits.device) + lse = torch.empty(*loss_shape, dtype=torch.float, device=logits.device) + z_losses = torch.empty(*loss_shape, dtype=torch.float, device=logits.device) + # Need this, otherwise Triton tries to launch from cuda:0 and we get + # ValueError: Pointer argument (at 0) cannot be accessed from Triton (cpu tensor?) 
+ with torch.cuda.device(logits.device.index): + cross_entropy_fwd_kernel[(n_rows, n_splits)]( + losses, # data ptrs + lse, + z_losses, + logits, + labels, + smoothing, + logit_scale, + lse_square_scale, + ignored_index, + total_classes, + class_start_idx, + n_cols, # shapes + n_rows, + logits.stride(0), # strides + BLOCK_SIZE=BLOCK_SIZE, # constants + num_warps=num_warps, + SPLIT=split, + ) + + if split: + # If there's no smoothing, if labels are in the vocab of this partition, losses contains + # - predicted logit, and 0 otherwise. + # If there's smoothing=0.1, for labels in the vocab of this partition, losses contains + # -0.9 * predicted logit - 0.1 * sum logit / total_classes. + # For labels not in the vocab of this partition, losses contains + # -0.1 * sum logit / total_classes. + if n_splits > 1: + lse = torch.logsumexp(lse, dim=0) + losses = losses.sum(dim=0) + if world_size > 1: + lse_allgather = torch.empty(world_size, n_rows, dtype=lse.dtype, device=lse.device) + torch.distributed.all_gather_into_tensor(lse_allgather, lse, group=process_group) + handle_losses = torch.distributed.all_reduce( + losses, op=torch.distributed.ReduceOp.SUM, group=process_group, async_op=True + ) + lse = torch.logsumexp(lse_allgather, dim=0) + handle_losses.wait() + # After the allreduce, if there's no smoothing, the total losses are - predicted_logit, + # we just have to add the (global) lse. + # If there's smoothing=0.1, the total losses are + # -0.9 * predicted_logit - 0.1 * sum logit / total_classes. + # Again, we just have to add the (global) lse. 
+ losses += lse + if lse_square_scale != 0.0: + z_losses = lse_square_scale * lse.square() + z_losses.masked_fill_(labels == ignored_index, 0.0) + losses += z_losses + else: + z_losses = torch.zeros_like(losses) + losses.masked_fill_(labels == ignored_index, 0.0) + + ctx.save_for_backward(logits, lse, labels) + ctx.mark_non_differentiable(z_losses) + ctx.smoothing = smoothing + ctx.logit_scale = logit_scale + ctx.lse_square_scale = lse_square_scale + ctx.ignored_index = ignored_index + ctx.total_classes = total_classes + ctx.class_start_idx = class_start_idx + ctx.inplace_backward = inplace_backward + + return losses, z_losses + + @staticmethod + def backward(ctx, grad_losses, grad_z_losses): + del grad_z_losses # z_losses are only for logging. + + logits, lse, labels = ctx.saved_tensors + dlogits = logits if ctx.inplace_backward else torch.empty_like(logits) + n_rows, n_cols = logits.shape + BLOCK_SIZE = min(triton.next_power_of_2(n_cols), 4 * 1024) + num_warps = 4 if BLOCK_SIZE < 2048 else (8 if BLOCK_SIZE < 8192 else 16) + def grid(META): return (n_rows, triton.cdiv(n_cols, META["BLOCK_SIZE"])) # noqa + # Need this, otherwise Triton tries to launch from cuda:0 and we get + # ValueError: Pointer argument (at 0) cannot be accessed from Triton (cpu tensor?) 
+ with torch.cuda.device(logits.device.index): + cross_entropy_bwd_kernel[grid]( + dlogits, # data ptrs + grad_losses, + logits, + lse, + labels, + ctx.smoothing, + ctx.logit_scale, + ctx.lse_square_scale, + ctx.ignored_index, + ctx.total_classes, + ctx.class_start_idx, + n_cols, # shapes + logits.stride(0), # strides + dlogits.stride(0), + grad_losses.stride(0), + BLOCK_SIZE=BLOCK_SIZE, # constants + num_warps=num_warps, + ) + return dlogits, None, None, None, None, None, None, None, None + + +def cross_entropy_loss( + logits: torch.Tensor, + labels: torch.Tensor, + label_smoothing: float = 0.0, + logit_scale: float = 1.0, + lse_square_scale: float = 0.0, + ignored_index=-100, + inplace_backward: bool = False, + process_group=None, +) -> Tuple[torch.Tensor, torch.Tensor]: + """ + Arguments: + logits: (batch, vocab_size) + labels: (batch,) + label_smoothing: float + logit_scale: float. Multiply logits by this scale before calculating the loss. + lse_square_scale: float. If > 0, we add lse_square_scale * lse(logits) ^ 2 to the loss. + This is also referred to as "z-loss". + ignored_index: int. If labels == ignored_index, the loss is set to 0.0. + inplace_backward: bool. If True, we do the backward pass in-place by modifying the logits. + This saves memory. + process_group: if not None, we're doing Tensor Parallel: each process is responsible for + one part of the vocab. The loss will be aggregated across processes. + Returns: + losses: (batch,), float + z_losses: (batch,), float + """ + return CrossEntropyLossFunction.apply( + logits, + labels, + label_smoothing, + logit_scale, + lse_square_scale, + ignored_index, + inplace_backward, + process_group, + ) + + +class FusedCrossEntropyLoss(nn.Module): + def __init__( + self, + ignore_index=-100, + reduction="mean", + label_smoothing=0.0, + logit_scale=1.0, + lse_square_scale=0.0, + inplace_backward=False, + process_group=None, + return_z_loss=False, + ): + """ + Arguments: + ignored_index: int. 
If labels == ignored_index, the loss is set to 0.0. + label_smoothing: float + lse_square_scale: float. If > 0, we add lse_square_scale * lse(logits) ^ 2 to the loss. + This is also referred to as "z-loss". + inplace_backward: bool. If True, we do the backward pass in-place by modifying the logits. + This saves memory. + process_group: if not None, we're doing Tensor Parallel: each process is responsible for + one part of the vocab. The loss will be aggregated across processes. + return_z_loss: bool. If True, we return the component of the loss contributed by + the lse_square_scale value. This value is only for logging and does not support + backprop. + """ + super().__init__() + if reduction not in ["mean", "none", "sum"]: + raise NotImplementedError("Only support reduction = 'mean' or 'none' or 'sum'") + self.ignore_index = ignore_index + self.reduction = reduction + self.label_smoothing = label_smoothing + self.logit_scale = logit_scale + self.lse_square_scale = lse_square_scale + self.inplace_backward = inplace_backward + self.process_group = process_group + self.return_z_loss = return_z_loss + + def forward(self, input, target): + """ + Arguments: + input: (batch, vocab_size) + target: (batch,) + Returns: + losses: (batch,) if reduction is 'none', else (1,), dtype float + z_loss: (batch,) if reduction is 'none', else (1,), dtype float (if self.return_z_loss) + """ + assert input.is_cuda and target.is_cuda, "Only support CUDA tensors" + loss, z_loss = cross_entropy_loss( + input, + target, + label_smoothing=self.label_smoothing, + logit_scale=self.logit_scale, + lse_square_scale=self.lse_square_scale, + ignored_index=self.ignore_index, + inplace_backward=self.inplace_backward, + process_group=self.process_group, + ) + if self.reduction == "mean": + loss = loss.sum() / (target != self.ignore_index).sum() + elif self.reduction == "sum": + loss = loss.sum() + else: + loss = loss + + if not self.return_z_loss: + return loss + + if self.reduction == "mean": + z_loss 
= z_loss.sum() / (target != self.ignore_index).sum() + elif self.reduction == "sum": + z_loss = z_loss.sum() + else: + z_loss = z_loss + + return loss, z_loss diff --git a/fla/modules/fused_norm_gate.py b/fla/modules/fused_norm_gate.py new file mode 100644 index 0000000000000000000000000000000000000000..739b5ae46ca4e15d263fabbebfa70dcd6424ed7d --- /dev/null +++ b/fla/modules/fused_norm_gate.py @@ -0,0 +1,889 @@ +# -*- coding: utf-8 -*- + +# Copyright (c) 2023, Tri Dao. +# https://github.com/state-spaces/mamba/blob/fb7b5310fa865dbd62aa059b1e26f2b431363e2a/mamba_ssm/ops/triton/layernorm.py +# Implement residual + layer_norm / rms_norm. + +# Based on the Triton LayerNorm tutorial: https://triton-lang.org/main/getting-started/tutorials/05-layer-norm.html +# For the backward pass, we keep weight_grad and bias_grad in registers and accumulate. +# This is faster for dimensions up to 8k, but after that it's much slower due to register spilling. +# The models we train have hidden dim up to 8k anyway (e.g. Llama 70B), so this is fine. 
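The per-sample quantity the fused cross-entropy kernels above compute — `lse - (1 - s) * logit_y - (s / K) * sum(logits)` plus an optional z-loss term — can be cross-checked against an unfused PyTorch reference. The sketch below is a hypothetical helper (the name `reference_cross_entropy` is mine, not part of the library) reproducing the label-smoothing, logit-scale, z-loss and ignore-index semantics described in the `cross_entropy_loss` docstring:

```python
import torch
import torch.nn.functional as F

def reference_cross_entropy(logits, labels, label_smoothing=0.0,
                            logit_scale=1.0, lse_square_scale=0.0,
                            ignore_index=-100):
    """Unfused per-sample reference for the Triton path; a CPU oracle,
    not part of the library API. Returns (losses, z_losses), both (batch,)."""
    logits = logits.float() * logit_scale
    lse = torch.logsumexp(logits, dim=-1)             # per-row log-sum-exp
    n_classes = logits.shape[-1]
    valid = labels != ignore_index
    safe_labels = labels.clamp(min=0)                 # avoid gather on -100
    logit_label = logits.gather(-1, safe_labels[:, None]).squeeze(-1)
    # smoothed CE: lse - (1 - s) * logit_y - (s / K) * sum(logits)
    loss = (lse
            - (1 - label_smoothing) * logit_label
            - label_smoothing * logits.sum(-1) / n_classes)
    z_loss = lse_square_scale * lse.square()          # "z-loss" regularizer
    loss = torch.where(valid, loss + z_loss, torch.zeros_like(loss))
    z_loss = torch.where(valid, z_loss, torch.zeros_like(z_loss))
    return loss, z_loss
```

With `label_smoothing=0`, `logit_scale=1` and `lse_square_scale=0` this reduces to `F.cross_entropy(..., reduction="none")`, which makes it a convenient unit-test oracle for `FusedCrossEntropyLoss` on small inputs.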
+ +from __future__ import annotations + +import math + +import torch +import torch.nn as nn +import torch.nn.functional as F +import triton +import triton.language as tl + +from fla.utils import contiguous + + +def layer_norm_ref(x, weight, bias, residual=None, eps=1e-6, prenorm=False, upcast=False): + dtype = x.dtype + if upcast: + weight = weight.float() + bias = bias.float() if bias is not None else None + if upcast: + x = x.float() + residual = residual.float() if residual is not None else residual + if residual is not None: + x = (x + residual).to(x.dtype) + out = F.layer_norm(x.to(weight.dtype), x.shape[-1:], weight=weight, bias=bias, eps=eps).to( + dtype + ) + return out if not prenorm else (out, x) + + +def rms_norm_ref(x, weight, bias, residual=None, eps=1e-6, prenorm=False, upcast=False): + dtype = x.dtype + if upcast: + weight = weight.float() + bias = bias.float() if bias is not None else None + if upcast: + x = x.float() + residual = residual.float() if residual is not None else residual + if residual is not None: + x = (x + residual).to(x.dtype) + rstd = 1 / torch.sqrt((x.square()).mean(dim=-1, keepdim=True) + eps) + out = (x * rstd * weight) + \ + bias if bias is not None else (x * rstd * weight) + out = out.to(dtype) + return out if not prenorm else (out, x) + + +@triton.autotune( + configs=[ + triton.Config({}, num_warps=1), + triton.Config({}, num_warps=2), + triton.Config({}, num_warps=4), + triton.Config({}, num_warps=8), + triton.Config({}, num_warps=16), + triton.Config({}, num_warps=32), + ], + key=["N", "HAS_RESIDUAL", "STORE_RESIDUAL_OUT", "IS_RMS_NORM", "HAS_BIAS"], +) +# @triton.heuristics({"HAS_BIAS": lambda args: args["B"] is not None}) +# @triton.heuristics({"HAS_RESIDUAL": lambda args: args["RESIDUAL"] is not None}) +@triton.jit +def _layer_norm_fwd_1pass_kernel( + X, # pointer to the input + O, # pointer to the gate + Y, # pointer to the output + W, # pointer to the weights + B, # pointer to the biases + RESIDUAL, # pointer to the 
residual + RESIDUAL_OUT, # pointer to the residual + Mean, # pointer to the mean + Rstd, # pointer to the 1/std + stride_x_row, # how much to increase the pointer when moving by 1 row + stride_y_row, + stride_res_row, + stride_res_out_row, + N, # number of columns in X + eps, # epsilon to avoid division by zero + IS_RMS_NORM: tl.constexpr, + BLOCK_N: tl.constexpr, + HAS_RESIDUAL: tl.constexpr, + STORE_RESIDUAL_OUT: tl.constexpr, + HAS_WEIGHT: tl.constexpr, + HAS_BIAS: tl.constexpr +): + # Map the program id to the row of X and Y it should compute. + row = tl.program_id(0) + X += row * stride_x_row + Y += row * stride_y_row + O += row * stride_x_row + if HAS_RESIDUAL: + RESIDUAL += row * stride_res_row + if STORE_RESIDUAL_OUT: + RESIDUAL_OUT += row * stride_res_out_row + # Compute mean and variance + cols = tl.arange(0, BLOCK_N) + x = tl.load(X + cols, mask=cols < N, other=0.0).to(tl.float32) + if HAS_RESIDUAL: + residual = tl.load(RESIDUAL + cols, mask=cols < + N, other=0.0).to(tl.float32) + x += residual + if STORE_RESIDUAL_OUT: + tl.store(RESIDUAL_OUT + cols, x, mask=cols < N) + if not IS_RMS_NORM: + mean = tl.sum(x, axis=0) / N + tl.store(Mean + row, mean) + xbar = tl.where(cols < N, x - mean, 0.0) + var = tl.sum(xbar * xbar, axis=0) / N + else: + xbar = tl.where(cols < N, x, 0.0) + var = tl.sum(xbar * xbar, axis=0) / N + rstd = 1 / tl.sqrt(var + eps) + tl.store(Rstd + row, rstd) + # Normalize and apply linear transformation + mask = cols < N + if HAS_WEIGHT: + w = tl.load(W + cols, mask=mask).to(tl.float32) + if HAS_BIAS: + b = tl.load(B + cols, mask=mask).to(tl.float32) + x_hat = (x - mean) * rstd if not IS_RMS_NORM else x * rstd + y = x_hat * w if HAS_WEIGHT else x_hat + if HAS_BIAS: + y = y + b + + # Swish output gate + o = tl.load(O + cols, mask=cols < N, other=0.0).to(tl.float32) + y = y * o * tl.sigmoid(o) + + # Write output + tl.store(Y + cols, y, mask=mask) + + +def _layer_norm_fwd( + x, o, weight, bias, eps, residual=None, out_dtype=None, 
residual_dtype=None, is_rms_norm=False +): + if residual is not None: + residual_dtype = residual.dtype + M, N = x.shape + assert x.stride(-1) == 1 + if residual is not None: + assert residual.stride(-1) == 1 + assert residual.shape == (M, N) + if weight is not None: + assert weight.shape == (N,) + assert weight.stride(-1) == 1 + if bias is not None: + assert bias.stride(-1) == 1 + assert bias.shape == (N,) + # allocate output + y = torch.empty_like(x, dtype=x.dtype if out_dtype is None else out_dtype) + assert y.stride(-1) == 1 + if residual is not None or (residual_dtype is not None and residual_dtype != x.dtype): + residual_out = torch.empty(M, N, device=x.device, dtype=residual_dtype) + assert residual_out.stride(-1) == 1 + else: + residual_out = None + mean = torch.empty((M,), dtype=torch.float32, + device="cuda") if not is_rms_norm else None + rstd = torch.empty((M,), dtype=torch.float32, device="cuda") + # Less than 64KB per feature: enqueue fused kernel + MAX_FUSED_SIZE = 65536 // x.element_size() + BLOCK_N = min(MAX_FUSED_SIZE, triton.next_power_of_2(N)) + if N > BLOCK_N: + raise RuntimeError( + "This layer norm doesn't support feature dim >= 64KB.") + # heuristics for number of warps + with torch.cuda.device(x.device.index): + _layer_norm_fwd_1pass_kernel[(M,)]( + x, + o, + y, + weight, + bias, + residual, + residual_out, + mean, + rstd, + x.stride(0), + y.stride(0), + residual.stride(0) if residual is not None else 0, + residual_out.stride(0) if residual_out is not None else 0, + N, + eps, + is_rms_norm, + BLOCK_N, + residual is not None, + residual_out is not None, + weight is not None, + bias is not None, + ) + # residual_out is None if residual is None and residual_dtype == input_dtype + return y, mean, rstd, residual_out if residual_out is not None else x + + +@triton.autotune( + configs=[ + triton.Config({}, num_warps=1), + triton.Config({}, num_warps=2), + triton.Config({}, num_warps=4), + triton.Config({}, num_warps=8), + triton.Config({}, 
num_warps=16), + triton.Config({}, num_warps=32), + ], + key=["N", "HAS_DRESIDUAL", "STORE_DRESIDUAL", "IS_RMS_NORM", "HAS_BIAS"], +) +# @triton.heuristics({"HAS_BIAS": lambda args: args["B"] is not None}) +# @triton.heuristics({"HAS_DRESIDUAL": lambda args: args["DRESIDUAL"] is not None}) +# @triton.heuristics({"STORE_DRESIDUAL": lambda args: args["DRESIDUAL_IN"] is not None}) +@triton.heuristics({"RECOMPUTE_OUTPUT": lambda args: args["Y"] is not None}) +@triton.jit +def _layer_norm_bwd_kernel( + X, # pointer to the input + O, # pointer to the gate + W, # pointer to the weights + B, # pointer to the biases + Y, # pointer to the output to be recomputed + DY, # pointer to the output gradient + DX, # pointer to the input gradient + DO, # pointer to the gate gradient + DW, # pointer to the partial sum of weights gradient + DB, # pointer to the partial sum of biases gradient + DRESIDUAL, + DRESIDUAL_IN, + Mean, # pointer to the mean + Rstd, # pointer to the 1/std + stride_x_row, # how much to increase the pointer when moving by 1 row + stride_y_row, + stride_dy_row, + stride_dx_row, + stride_dres_row, + stride_dres_in_row, + M, # number of rows in X + N, # number of columns in X + eps, # epsilon to avoid division by zero + rows_per_program, + IS_RMS_NORM: tl.constexpr, + BLOCK_N: tl.constexpr, + HAS_DRESIDUAL: tl.constexpr, + STORE_DRESIDUAL: tl.constexpr, + HAS_WEIGHT: tl.constexpr, + HAS_BIAS: tl.constexpr, + RECOMPUTE_OUTPUT: tl.constexpr, +): + # Map the program id to the elements of X, DX, and DY it should compute. 
+ row_block_id = tl.program_id(0) + row_start = row_block_id * rows_per_program + cols = tl.arange(0, BLOCK_N) + mask = cols < N + X += row_start * stride_x_row + O += row_start * stride_x_row + if HAS_DRESIDUAL: + DRESIDUAL += row_start * stride_dres_row + if STORE_DRESIDUAL: + DRESIDUAL_IN += row_start * stride_dres_in_row + DY += row_start * stride_dy_row + DX += row_start * stride_dx_row + DO += row_start * stride_dx_row + if RECOMPUTE_OUTPUT: + Y += row_start * stride_y_row + if HAS_WEIGHT: + w = tl.load(W + cols, mask=mask).to(tl.float32) + dw = tl.zeros((BLOCK_N,), dtype=tl.float32) + if RECOMPUTE_OUTPUT and HAS_BIAS: + b = tl.load(B + cols, mask=mask, other=0.0).to(tl.float32) + if HAS_BIAS: + db = tl.zeros((BLOCK_N,), dtype=tl.float32) + row_end = min((row_block_id + 1) * rows_per_program, M) + for row in range(row_start, row_end): + # Load data to SRAM + x = tl.load(X + cols, mask=mask, other=0).to(tl.float32) + o = tl.load(O + cols, mask=mask, other=0).to(tl.float32) + dy = tl.load(DY + cols, mask=mask, other=0).to(tl.float32) + + if not IS_RMS_NORM: + mean = tl.load(Mean + row) + rstd = tl.load(Rstd + row) + # Compute dx + xhat = (x - mean) * rstd if not IS_RMS_NORM else x * rstd + xhat = tl.where(mask, xhat, 0.0) + + y = xhat * w if HAS_WEIGHT else xhat + if HAS_BIAS: + y = y + b + if RECOMPUTE_OUTPUT: + tl.store(Y + cols, y, mask=mask) + + sigmoid_o = tl.sigmoid(o) + do = dy * y * (sigmoid_o + o * sigmoid_o * (1 - sigmoid_o)) + dy = dy * o * sigmoid_o + wdy = dy + if HAS_WEIGHT: + wdy = dy * w + dw += dy * xhat + if HAS_BIAS: + db += dy + if not IS_RMS_NORM: + c1 = tl.sum(xhat * wdy, axis=0) / N + c2 = tl.sum(wdy, axis=0) / N + dx = (wdy - (xhat * c1 + c2)) * rstd + else: + c1 = tl.sum(xhat * wdy, axis=0) / N + dx = (wdy - xhat * c1) * rstd + if HAS_DRESIDUAL: + dres = tl.load(DRESIDUAL + cols, mask=mask, other=0).to(tl.float32) + dx += dres + # Write dx + if STORE_DRESIDUAL: + tl.store(DRESIDUAL_IN + cols, dx, mask=mask) + tl.store(DX + cols, dx, 
mask=mask) + tl.store(DO + cols, do, mask=mask) + + X += stride_x_row + O += stride_x_row + if HAS_DRESIDUAL: + DRESIDUAL += stride_dres_row + if STORE_DRESIDUAL: + DRESIDUAL_IN += stride_dres_in_row + if RECOMPUTE_OUTPUT: + Y += stride_y_row + DY += stride_dy_row + DX += stride_dx_row + DO += stride_dx_row + if HAS_WEIGHT: + tl.store(DW + row_block_id * N + cols, dw, mask=mask) + if HAS_BIAS: + tl.store(DB + row_block_id * N + cols, db, mask=mask) + + +def _layer_norm_bwd( + dy, + x, + o, + weight, + bias, + eps, + mean, + rstd, + dresidual=None, + has_residual=False, + is_rms_norm=False, + x_dtype=None, + recompute_output=False, +): + M, N = x.shape + assert x.stride(-1) == 1 + assert dy.stride(-1) == 1 + assert dy.shape == (M, N) + if dresidual is not None: + assert dresidual.stride(-1) == 1 + assert dresidual.shape == (M, N) + if weight is not None: + assert weight.shape == (N,) + assert weight.stride(-1) == 1 + if bias is not None: + assert bias.stride(-1) == 1 + assert bias.shape == (N,) + # allocate output + dx = ( + torch.empty_like(x) + if x_dtype is None + else torch.empty(M, N, dtype=x_dtype, device=x.device) + ) + do = ( + torch.empty_like(o) + if x_dtype is None + else torch.empty(M, N, dtype=x_dtype, device=x.device) + ) + dresidual_in = torch.empty_like(x) if has_residual and dx.dtype != x.dtype else None + y = torch.empty(M, N, dtype=dy.dtype, device=dy.device) if recompute_output else None + + # Less than 64KB per feature: enqueue fused kernel + MAX_FUSED_SIZE = 65536 // x.element_size() + BLOCK_N = min(MAX_FUSED_SIZE, triton.next_power_of_2(N)) + if N > BLOCK_N: + raise RuntimeError("This layer norm doesn't support feature dim >= 64KB.") + sm_count = torch.cuda.get_device_properties(x.device).multi_processor_count + _dw = ( + torch.empty((sm_count, N), dtype=torch.float32, device=weight.device) + if weight is not None + else None + ) + _db = ( + torch.empty((sm_count, N), dtype=torch.float32, device=bias.device) + if bias is not None + else None + 
) + rows_per_program = math.ceil(M / sm_count) + grid = (sm_count,) + with torch.cuda.device(x.device.index): + _layer_norm_bwd_kernel[grid]( + x, + o, + weight, + bias, + y, + dy, + dx, + do, + _dw, + _db, + dresidual, + dresidual_in, + mean, + rstd, + x.stride(0), + 0 if not recompute_output else y.stride(0), + dy.stride(0), + dx.stride(0), + dresidual.stride(0) if dresidual is not None else 0, + dresidual_in.stride(0) if dresidual_in is not None else 0, + M, + N, + eps, + rows_per_program, + is_rms_norm, + BLOCK_N, + dresidual is not None, + dresidual_in is not None, + weight is not None, + bias is not None, + ) + dw = _dw.sum(0).to(weight.dtype) if weight is not None else None + db = _db.sum(0).to(bias.dtype) if bias is not None else None + # Don't need to compute dresidual_in separately in this case + if has_residual and dx.dtype == x.dtype: + dresidual_in = dx + return (dx, do, dw, db, dresidual_in) if not recompute_output else (dx, do, dw, db, dresidual_in, y) + + +class LayerNormSwishGateFn(torch.autograd.Function): + + @staticmethod + @contiguous + def forward( + ctx, + x, + o, + weight, + bias, + residual=None, + eps=1e-6, + prenorm=False, + residual_in_fp32=False, + is_rms_norm=False, + ): + x_shape_og = x.shape + o_shape_og = o.shape + # reshape input data into 2D tensor + x = x.reshape(-1, x.shape[-1]) + o = o.reshape(-1, o.shape[-1]) + if residual is not None: + assert residual.shape == x_shape_og + residual = residual.reshape(-1, residual.shape[-1]) + residual_dtype = ( + residual.dtype + if residual is not None + else (torch.float32 if residual_in_fp32 else None) + ) + y, mean, rstd, residual_out = _layer_norm_fwd( + x, o, weight, bias, eps, residual, residual_dtype=residual_dtype, is_rms_norm=is_rms_norm + ) + ctx.save_for_backward(residual_out, o, weight, bias, mean, rstd) + ctx.x_shape_og = x_shape_og + ctx.o_shape_og = o_shape_og + ctx.eps = eps + ctx.is_rms_norm = is_rms_norm + ctx.has_residual = residual is not None + ctx.prenorm = prenorm + 
ctx.x_dtype = x.dtype + y = y.reshape(x_shape_og) + return y if not prenorm else (y, residual_out.reshape(x_shape_og)) + + @staticmethod + @contiguous + def backward(ctx, dy, *args): + x, o, weight, bias, mean, rstd = ctx.saved_tensors + dy = dy.reshape(-1, dy.shape[-1]) + assert dy.shape == x.shape + if ctx.prenorm: + dresidual = args[0] + dresidual = dresidual.reshape(-1, dresidual.shape[-1]) + assert dresidual.shape == x.shape + else: + dresidual = None + dx, do, dw, db, dresidual_in = _layer_norm_bwd( + dy, + x, + o, + weight, + bias, + ctx.eps, + mean, + rstd, + dresidual, + ctx.has_residual, + ctx.is_rms_norm, + x_dtype=ctx.x_dtype, + ) + return ( + dx.reshape(ctx.x_shape_og), + do.reshape(ctx.o_shape_og), + dw, + db, + dresidual_in.reshape(ctx.x_shape_og) if ctx.has_residual else None, + None, + None, + None, + None, + ) + + +class LayerNormSwishGateLinearFn(torch.autograd.Function): + + @staticmethod + @contiguous + def forward( + ctx, + x, + o, + norm_weight, + norm_bias, + linear_weight, + linear_bias, + residual=None, + eps=1e-6, + prenorm=False, + residual_in_fp32=False, + is_rms_norm=False, + ): + x_shape_og = x.shape + o_shape_og = o.shape + # reshape input data into 2D tensor + x = x.reshape(-1, x.shape[-1]) + o = o.reshape(-1, o.shape[-1]) + if residual is not None: + assert residual.shape == x_shape_og + residual = residual.reshape(-1, residual.shape[-1]) + residual_dtype = ( + residual.dtype + if residual is not None + else (torch.float32 if residual_in_fp32 else None) + ) + y, mean, rstd, residual_out = _layer_norm_fwd( + x, + o, + norm_weight, + norm_bias, + eps, + residual, + residual_dtype=residual_dtype, + is_rms_norm=is_rms_norm + ) + y = y.reshape(x_shape_og) + dtype = torch.get_autocast_gpu_dtype() if torch.is_autocast_enabled() else y.dtype + linear_weight = linear_weight.to(dtype) + linear_bias = linear_bias.to(dtype) if linear_bias is not None else None + out = F.linear(y.to(linear_weight.dtype), linear_weight, linear_bias) + # We don't 
store y, will be recomputed in the backward pass to save memory + ctx.save_for_backward(residual_out, o, norm_weight, norm_bias, linear_weight, mean, rstd) + ctx.x_shape_og = x_shape_og + ctx.o_shape_og = o_shape_og + ctx.eps = eps + ctx.is_rms_norm = is_rms_norm + ctx.has_residual = residual is not None + ctx.prenorm = prenorm + ctx.x_dtype = x.dtype + ctx.linear_bias_is_none = linear_bias is None + return out if not prenorm else (out, residual_out.reshape(x_shape_og)) + + @staticmethod + @contiguous + def backward(ctx, dout, *args): + x, o, norm_weight, norm_bias, linear_weight, mean, rstd = ctx.saved_tensors + dout = dout.reshape(-1, dout.shape[-1]) + dy = F.linear(dout, linear_weight.t()) + dlinear_bias = None if ctx.linear_bias_is_none else dout.sum(0) + assert dy.shape == x.shape + if ctx.prenorm: + dresidual = args[0] + dresidual = dresidual.reshape(-1, dresidual.shape[-1]) + assert dresidual.shape == x.shape + else: + dresidual = None + dx, do, dnorm_weight, dnorm_bias, dresidual_in, y = _layer_norm_bwd( + dy, + x, + o, + norm_weight, + norm_bias, + ctx.eps, + mean, + rstd, + dresidual=dresidual, + has_residual=ctx.has_residual, + is_rms_norm=ctx.is_rms_norm, + x_dtype=ctx.x_dtype, + recompute_output=True, + ) + dlinear_weight = torch.einsum("bo,bi->oi", dout, y) + return ( + dx.reshape(ctx.x_shape_og), + do.reshape(ctx.o_shape_og), + dnorm_weight, + dnorm_bias, + dlinear_weight, + dlinear_bias, + dresidual_in.reshape(ctx.x_shape_og) if ctx.has_residual else None, + None, + None, + None, + None, + ) + + +def layer_norm_swish_gate_fn( + x, + o, + weight, + bias, + residual=None, + prenorm=False, + residual_in_fp32=False, + eps=1e-6 +): + return LayerNormSwishGateFn.apply( + x, + o, + weight, + bias, + residual, + eps, + prenorm, + residual_in_fp32, + False + ) + + +def rms_norm_swish_gate_fn( + x, + o, + weight, + bias, + residual=None, + prenorm=False, + residual_in_fp32=False, + eps=1e-6 +): + return LayerNormSwishGateFn.apply( + x, + o, + weight, + bias, 
+ residual, + eps, + prenorm, + residual_in_fp32, + True + ) + + +def layer_norm_swish_gate_linear_fn( + x, + o, + norm_weight, + norm_bias, + linear_weight, + linear_bias, + residual=None, + prenorm=False, + residual_in_fp32=False, + eps=1e-6 +): + return LayerNormSwishGateLinearFn.apply( + x, + o, + norm_weight, + norm_bias, + linear_weight, + linear_bias, + residual, + eps, + prenorm, + residual_in_fp32, + False + ) + + +def rms_norm_swish_gate_linear_fn( + x, + o, + norm_weight, + norm_bias, + linear_weight, + linear_bias, + residual=None, + prenorm=False, + residual_in_fp32=False, + eps=1e-6 +): + return LayerNormSwishGateLinearFn.apply( + x, + o, + norm_weight, + norm_bias, + linear_weight, + linear_bias, + residual, + eps, + prenorm, + residual_in_fp32, + True + ) + + +class FusedLayerNormSwishGate(nn.Module): + + def __init__( + self, + hidden_size, + elementwise_affine: bool = True, + eps=1e-5 + ) -> FusedLayerNormSwishGate: + super().__init__() + + self.hidden_size = hidden_size + self.elementwise_affine = elementwise_affine + self.eps = eps + + if elementwise_affine: + self.weight = nn.Parameter(torch.ones(hidden_size)) + else: + self.register_parameter("weight", None) + self.register_parameter("bias", None) + + def __repr__(self) -> str: + s = f"{self.__class__.__name__}({self.hidden_size}" + if not self.elementwise_affine: + s += f", elementwise_affine={self.elementwise_affine}" + s += f", eps={self.eps}" + s += ")" + return s + + def forward(self, x, o, residual=None, prenorm=False, residual_in_fp32=False): + return layer_norm_swish_gate_fn( + x, + o, + self.weight, + self.bias, + residual=residual, + eps=self.eps, + prenorm=prenorm, + residual_in_fp32=residual_in_fp32 + ) + + +class FusedRMSNormSwishGate(nn.Module): + + def __init__( + self, + hidden_size, + elementwise_affine: bool = True, + eps=1e-5 + ) -> FusedRMSNormSwishGate: + super().__init__() + + self.hidden_size = hidden_size + self.elementwise_affine = elementwise_affine + self.eps = eps + 
+ if elementwise_affine: + self.weight = nn.Parameter(torch.ones(hidden_size)) + else: + self.register_parameter("weight", None) + self.register_parameter("bias", None) + + def __repr__(self) -> str: + s = f"{self.__class__.__name__}({self.hidden_size}" + if not self.elementwise_affine: + s += f", elementwise_affine={self.elementwise_affine}" + s += f", eps={self.eps}" + s += ")" + return s + + def forward(self, x, o, residual=None, prenorm=False, residual_in_fp32=False): + return rms_norm_swish_gate_fn( + x, + o, + self.weight, + self.bias, + residual=residual, + eps=self.eps, + prenorm=prenorm, + residual_in_fp32=residual_in_fp32 + ) + + +class FusedLayerNormSwishGateLinear(nn.Module): + + def __init__( + self, + hidden_size, + elementwise_affine: bool = True, + eps=1e-5 + ) -> FusedLayerNormSwishGateLinear: + super().__init__() + + self.hidden_size = hidden_size + self.elementwise_affine = elementwise_affine + self.eps = eps + + if elementwise_affine: + self.weight = nn.Parameter(torch.ones(hidden_size)) + else: + self.register_parameter("weight", None) + self.register_parameter("bias", None) + + def __repr__(self) -> str: + s = f"{self.__class__.__name__}({self.hidden_size}" + if not self.elementwise_affine: + s += f", elementwise_affine={self.elementwise_affine}" + s += f", eps={self.eps}" + s += ")" + return s + + def forward(self, x, o, weight, bias, residual=None, prenorm=False, residual_in_fp32=False): + return layer_norm_swish_gate_linear_fn( + x, + o, + self.weight, + self.bias, + weight, + bias, + residual=residual, + eps=self.eps, + prenorm=prenorm, + residual_in_fp32=residual_in_fp32 + ) + + +class FusedRMSNormSwishGateLinear(nn.Module): + + def __init__( + self, + hidden_size, + elementwise_affine: bool = True, + eps=1e-5 + ) -> FusedRMSNormSwishGateLinear: + super().__init__() + + self.hidden_size = hidden_size + self.elementwise_affine = elementwise_affine + self.eps = eps + + if elementwise_affine: + self.weight = 
nn.Parameter(torch.ones(hidden_size)) + else: + self.register_parameter("weight", None) + self.register_parameter("bias", None) + + def __repr__(self) -> str: + s = f"{self.__class__.__name__}({self.hidden_size}" + if not self.elementwise_affine: + s += f", elementwise_affine={self.elementwise_affine}" + s += f", eps={self.eps}" + s += ")" + return s + + def forward(self, x, o, weight, bias, residual=None, prenorm=False, residual_in_fp32=False): + return rms_norm_swish_gate_linear_fn( + x, + o, + self.weight, + self.bias, + weight, + bias, + residual=residual, + eps=self.eps, + prenorm=prenorm, + residual_in_fp32=residual_in_fp32 + ) diff --git a/fla/modules/l2norm.py b/fla/modules/l2norm.py new file mode 100644 index 0000000000000000000000000000000000000000..9af045fa1eee5677309abbc98af427b7fc8b7bb5 --- /dev/null +++ b/fla/modules/l2norm.py @@ -0,0 +1,216 @@ +# -*- coding: utf-8 -*- +import math +import torch +import torch.nn.functional as F +from torch.cuda.amp import custom_fwd, custom_bwd +import triton +import triton.language as tl + +@triton.autotune( + configs=[ + triton.Config({}, num_warps=1), + triton.Config({}, num_warps=2), + triton.Config({}, num_warps=4), + triton.Config({}, num_warps=8), + triton.Config({}, num_warps=16), + triton.Config({}, num_warps=32), + ], + key=["N"], +) +# @triton.heuristics({"HAS_BIAS": lambda args: args["B"] is not None}) +# @triton.heuristics({"HAS_RESIDUAL": lambda args: args["RESIDUAL"] is not None}) +@triton.jit +def _l2_norm_fwd_1pass_kernel( + X, # pointer to the input + Y, # pointer to the output + stride_x_row, # how much to increase the pointer when moving by 1 row + N, # number of columns in X + eps, # epsilon to avoid division by zero + BLOCK_N: tl.constexpr, +): + # Map the program id to the row of X and Y it should compute. 
+ row = tl.program_id(0) + X += row * stride_x_row + Y += row * stride_x_row + # Compute mean and variance + cols = tl.arange(0, BLOCK_N) + x = tl.load(X + cols, mask=cols < N, other=0.0).to(tl.float32) + xbar = tl.where(cols < N, x, 0.0) + var = tl.sum(xbar * xbar, axis=0) + rstd = 1 / tl.sqrt(var + eps) + # tl.store(Rstd + row, rstd) + # Normalize and apply linear transformation + mask = cols < N + y = x * rstd + # Write output + tl.store(Y + cols, y, mask=mask) + + +@triton.autotune( + configs=[ + triton.Config({}, num_warps=1), + triton.Config({}, num_warps=2), + triton.Config({}, num_warps=4), + triton.Config({}, num_warps=8), + triton.Config({}, num_warps=16), + triton.Config({}, num_warps=32), + ], + key=["N"], +) +# @triton.heuristics({"HAS_BIAS": lambda args: args["B"] is not None}) +# @triton.heuristics({"HAS_DRESIDUAL": lambda args: args["DRESIDUAL"] is not None}) +# @triton.heuristics({"STORE_DRESIDUAL": lambda args: args["DRESIDUAL_IN"] is not None}) +# @triton.heuristics({"RECOMPUTE_OUTPUT": lambda args: args["Y"] is not None}) +@triton.jit +def _l2_norm_bwd_kernel( + X, # pointer to the input + # Y, # pointer to the output to be recomputed + DY, # pointer to the output gradient + DX, # pointer to the input gradient + stride_x_row, # how much to increase the pointer when moving by 1 row + N, # number of columns in X + eps, # epsilon to avoid division by zero + BLOCK_N: tl.constexpr, +): + # Map the program id to the elements of X, DX, and DY it should compute. + # Map the program id to the row of X and Y it should compute. 
+ row = tl.program_id(0) + X += row * stride_x_row + DX += row * stride_x_row + DY += row * stride_x_row + + # Y += row * stride_y_row + cols = tl.arange(0, BLOCK_N) + x = tl.load(X + cols, mask=cols < N, other=0.0).to(tl.float32) + x = tl.where(cols < N, x, 0.0) + var = tl.sum(x * x) + rstd = 1 / tl.sqrt(var + eps) + # tl.store(Rstd + row, rstd) + # Normalize and apply linear transformation + mask = cols < N + # y = x * rstd + dy = tl.load(DY + cols, mask=cols < N, other=0.0).to(tl.float32) + dy = tl.where(cols < N, dy, 0.0) + # dx = dy * rstd - tl.sum(dy * x) * (1 / (var+eps)) * rstd * x + dx = dy * rstd - tl.sum(dy * x) * (1 / (var+eps)) * rstd * x + tl.store(DX + cols, dx, mask=mask) + +def _l2_norm_fwd( + x, eps=1e-6 +): + x_shape_og = x.shape + x = x.reshape(-1, x.shape[-1]) + if x.stride(-1) != 1: + x = x.contiguous() + M, N = x.shape + assert x.stride(-1) == 1 + # allocate output + y = torch.empty_like(x) + assert y.stride(-1) == 1 + N = x.shape[-1] + M = x.shape[0] + # rstd = torch.empty((M,), dtype=torch.float32, device="cuda") + # Less than 64KB per feature: enqueue fused kernel + MAX_FUSED_SIZE = 65536 // x.element_size() + BLOCK_N = min(MAX_FUSED_SIZE, triton.next_power_of_2(N)) + if N > BLOCK_N: + raise RuntimeError( + "This layer norm doesn't support feature dim >= 64KB.") + # heuristics for number of warps + with torch.cuda.device(x.device.index): + _l2_norm_fwd_1pass_kernel[(M,)]( + x, + y, + x.stride(0), + N, + eps, + # is_rms_norm, + BLOCK_N, + # residual is not None, + # residual_out is not None, + # bias is not None, + ) + return y.reshape(x_shape_og) + +def _l2_norm_bwd( + x, dy, eps=1e-5, +): + x_shape_og = x.shape + x = x.reshape(-1, dy.shape[-1]) + dy = dy.reshape(-1, dy.shape[-1]) + if dy.stride(-1) != 1: + dy = dy.contiguous() + assert dy.shape == x.shape + # allocate output + dx = torch.empty_like(x) + N = x.shape[-1] + M = x.shape[0] + assert x.stride(-1) == 1 + assert dy.stride(-1) == 1 + # rstd = torch.empty((M,), dtype=torch.float32, 
device="cuda") + # Less than 64KB per feature: enqueue fused kernel + MAX_FUSED_SIZE = 65536 // x.element_size() + BLOCK_N = min(MAX_FUSED_SIZE, triton.next_power_of_2(N)) + if N > BLOCK_N: + raise RuntimeError( + "This layer norm doesn't support feature dim >= 64KB.") + # heuristics for number of warps + with torch.cuda.device(x.device.index): + _l2_norm_bwd_kernel[(M,)]( + x, + dy, + dx, + x.stride(0), + N, + eps, + BLOCK_N, + ) + return dx.reshape(x_shape_og) + + +class L2NormFN(torch.autograd.Function): + @staticmethod + def forward( + ctx, + x, + eps=1e-6, + ): + # reshape input data into 2D tensor + y = _l2_norm_fwd(x, eps) + ctx.x_shape_og = x.shape + ctx.eps = eps + ctx.x_dtype = x.dtype + ctx.save_for_backward(x) + return y + + @staticmethod + def backward(ctx, dy, *args): + x, = ctx.saved_tensors + dx = _l2_norm_bwd( + x, + dy, + ctx.eps, + ) + return ( + dx, + None + ) + +l2_norm_fn = L2NormFN.apply + +if __name__ == '__main__': + x = torch.rand(10, 10, 100).cuda().requires_grad_(True) + y = torch.nn.functional.normalize(x, dim=-1, p=2) + dy = torch.rand_like(y) + y.backward(dy, retain_graph=True) + x_grad, x.grad = x.grad, None + y2 = l2_norm_fn(x, 1e-6) + print((y-y2).abs().max()) + y2.backward(dy, retain_graph=True) + x_grad2, x.grad = x.grad, None + print((x_grad2-x_grad).abs().max()) + breakpoint() + + + + diff --git a/fla/modules/layernorm.py b/fla/modules/layernorm.py new file mode 100644 index 0000000000000000000000000000000000000000..9bd74774e262ede4606d2f4c90a446c6227c5389 --- /dev/null +++ b/fla/modules/layernorm.py @@ -0,0 +1,802 @@ +# -*- coding: utf-8 -*- + +# Copyright (c) 2023, Tri Dao. +# https://github.com/state-spaces/mamba/blob/fb7b5310fa865dbd62aa059b1e26f2b431363e2a/mamba_ssm/ops/triton/layernorm.py +# Implement residual + layer_norm / rms_norm.
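The gradient that `_l2_norm_bwd_kernel` computes above, `dx = dy * rstd - x * sum(dy * x) * rstd / (var + eps)` with `rstd = 1 / sqrt(var + eps)`, can be sanity-checked against central finite differences with no GPU, torch, or Triton dependency. A minimal sketch — the helpers `l2norm_fwd`/`l2norm_bwd` are illustrative stand-ins for the kernels, not part of the module:

```python
import math

def l2norm_fwd(x, eps=1e-6):
    # y = x / sqrt(sum(x^2) + eps), mirroring _l2_norm_fwd_1pass_kernel
    var = sum(v * v for v in x)
    rstd = 1.0 / math.sqrt(var + eps)
    return [v * rstd for v in x]

def l2norm_bwd(x, dy, eps=1e-6):
    # dx = dy * rstd - x * sum(dy * x) * rstd / (var + eps), as in the kernel
    var = sum(v * v for v in x)
    rstd = 1.0 / math.sqrt(var + eps)
    s = sum(d * v for d, v in zip(dy, x))
    return [d * rstd - v * s * rstd / (var + eps) for d, v in zip(dy, x)]

x = [0.3, -1.2, 0.8]
dy = [1.0, 0.5, -0.25]
ana = l2norm_bwd(x, dy)

# Central finite differences of dot(dy, l2norm_fwd(x)) w.r.t. each x[i]
h = 1e-6
num = []
for i in range(len(x)):
    xp, xm = list(x), list(x)
    xp[i] += h
    xm[i] -= h
    yp, ym = l2norm_fwd(xp), l2norm_fwd(xm)
    num.append(sum(d * (a - b) for d, a, b in zip(dy, yp, ym)) / (2 * h))

assert all(abs(a - b) < 1e-4 for a, b in zip(ana, num))
```

The analytic and numeric gradients agree to within finite-difference error, confirming the closed form the kernel stores into `DX`.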
+ +# Based on the Triton LayerNorm tutorial: https://triton-lang.org/main/getting-started/tutorials/05-layer-norm.html +# For the backward pass, we keep weight_grad and bias_grad in registers and accumulate. +# This is faster for dimensions up to 8k, but after that it's much slower due to register spilling. +# The models we train have hidden dim up to 8k anyway (e.g. Llama 70B), so this is fine. + +from __future__ import annotations + +import math + +import torch +import torch.nn as nn +import torch.nn.functional as F +import triton +import triton.language as tl + +from fla.utils import contiguous + + +def layer_norm_ref(x, weight, bias, residual=None, eps=1e-6, prenorm=False, upcast=False): + dtype = x.dtype + if upcast: + weight = weight.float() + bias = bias.float() if bias is not None else None + if upcast: + x = x.float() + residual = residual.float() if residual is not None else residual + if residual is not None: + x = (x + residual).to(x.dtype) + out = F.layer_norm(x.to(weight.dtype), x.shape[-1:], weight=weight, bias=bias, eps=eps).to( + dtype + ) + return out if not prenorm else (out, x) + + +def rms_norm_ref(x, weight, bias, residual=None, eps=1e-6, prenorm=False, upcast=False): + dtype = x.dtype + if upcast: + weight = weight.float() + bias = bias.float() if bias is not None else None + if upcast: + x = x.float() + residual = residual.float() if residual is not None else residual + if residual is not None: + x = (x + residual).to(x.dtype) + rstd = 1 / torch.sqrt((x.square()).mean(dim=-1, keepdim=True) + eps) + out = (x * rstd * weight) + \ + bias if bias is not None else (x * rstd * weight) + out = out.to(dtype) + return out if not prenorm else (out, x) + + +@triton.autotune( + configs=[ + triton.Config({}, num_warps=1), + triton.Config({}, num_warps=2), + triton.Config({}, num_warps=4), + triton.Config({}, num_warps=8), + triton.Config({}, num_warps=16), + triton.Config({}, num_warps=32), + ], + key=["N", "HAS_RESIDUAL", "STORE_RESIDUAL_OUT", 
"IS_RMS_NORM", "HAS_BIAS"], +) +# @triton.heuristics({"HAS_BIAS": lambda args: args["B"] is not None}) +# @triton.heuristics({"HAS_RESIDUAL": lambda args: args["RESIDUAL"] is not None}) +@triton.jit +def _layer_norm_fwd_1pass_kernel( + X, # pointer to the input + Y, # pointer to the output + W, # pointer to the weights + B, # pointer to the biases + RESIDUAL, # pointer to the residual + RESIDUAL_OUT, # pointer to the residual + Mean, # pointer to the mean + Rstd, # pointer to the 1/std + stride_x_row, # how much to increase the pointer when moving by 1 row + stride_y_row, + stride_res_row, + stride_res_out_row, + N, # number of columns in X + eps, # epsilon to avoid division by zero + IS_RMS_NORM: tl.constexpr, + BLOCK_N: tl.constexpr, + HAS_RESIDUAL: tl.constexpr, + STORE_RESIDUAL_OUT: tl.constexpr, + HAS_WEIGHT: tl.constexpr, + HAS_BIAS: tl.constexpr +): + # Map the program id to the row of X and Y it should compute. + row = tl.program_id(0) + X += row * stride_x_row + Y += row * stride_y_row + if HAS_RESIDUAL: + RESIDUAL += row * stride_res_row + if STORE_RESIDUAL_OUT: + RESIDUAL_OUT += row * stride_res_out_row + # Compute mean and variance + cols = tl.arange(0, BLOCK_N) + x = tl.load(X + cols, mask=cols < N, other=0.0).to(tl.float32) + if HAS_RESIDUAL: + residual = tl.load(RESIDUAL + cols, mask=cols < + N, other=0.0).to(tl.float32) + x += residual + if STORE_RESIDUAL_OUT: + tl.store(RESIDUAL_OUT + cols, x, mask=cols < N) + if not IS_RMS_NORM: + mean = tl.sum(x, axis=0) / N + tl.store(Mean + row, mean) + xbar = tl.where(cols < N, x - mean, 0.0) + var = tl.sum(xbar * xbar, axis=0) / N + else: + xbar = tl.where(cols < N, x, 0.0) + var = tl.sum(xbar * xbar, axis=0) / N + rstd = 1 / tl.sqrt(var + eps) + tl.store(Rstd + row, rstd) + # Normalize and apply linear transformation + mask = cols < N + if HAS_WEIGHT: + w = tl.load(W + cols, mask=mask).to(tl.float32) + if HAS_BIAS: + b = tl.load(B + cols, mask=mask).to(tl.float32) + x_hat = (x - mean) * rstd if not 
IS_RMS_NORM else x * rstd + + y = x_hat * w if HAS_WEIGHT else x_hat + if HAS_BIAS: + y = y + b + # Write output + tl.store(Y + cols, y, mask=mask) + + +def _layer_norm_fwd( + x, weight, bias, eps, residual=None, out_dtype=None, residual_dtype=None, is_rms_norm=False +): + if residual is not None: + residual_dtype = residual.dtype + M, N = x.shape + assert x.stride(-1) == 1 + if residual is not None: + assert residual.stride(-1) == 1 + assert residual.shape == (M, N) + if weight is not None: + assert weight.shape == (N,) + assert weight.stride(-1) == 1 + if bias is not None: + assert bias.stride(-1) == 1 + assert bias.shape == (N,) + # allocate output + y = torch.empty_like(x, dtype=x.dtype if out_dtype is None else out_dtype) + assert y.stride(-1) == 1 + if residual is not None or (residual_dtype is not None and residual_dtype != x.dtype): + residual_out = torch.empty(M, N, device=x.device, dtype=residual_dtype) + assert residual_out.stride(-1) == 1 + else: + residual_out = None + mean = torch.empty((M,), dtype=torch.float32, + device="cuda") if not is_rms_norm else None + rstd = torch.empty((M,), dtype=torch.float32, device="cuda") + # Less than 64KB per feature: enqueue fused kernel + MAX_FUSED_SIZE = 65536 // x.element_size() + BLOCK_N = min(MAX_FUSED_SIZE, triton.next_power_of_2(N)) + if N > BLOCK_N: + raise RuntimeError( + "This layer norm doesn't support feature dim >= 64KB.") + # heuristics for number of warps + with torch.cuda.device(x.device.index): + _layer_norm_fwd_1pass_kernel[(M,)]( + x, + y, + weight, + bias, + residual, + residual_out, + mean, + rstd, + x.stride(0), + y.stride(0), + residual.stride(0) if residual is not None else 0, + residual_out.stride(0) if residual_out is not None else 0, + N, + eps, + is_rms_norm, + BLOCK_N, + residual is not None, + residual_out is not None, + weight is not None, + bias is not None, + ) + # residual_out is None if residual is None and residual_dtype == input_dtype + return y, mean, rstd, residual_out if 
residual_out is not None else x + + +@triton.autotune( + configs=[ + triton.Config({}, num_warps=1), + triton.Config({}, num_warps=2), + triton.Config({}, num_warps=4), + triton.Config({}, num_warps=8), + triton.Config({}, num_warps=16), + triton.Config({}, num_warps=32), + ], + key=["N", "HAS_DRESIDUAL", "STORE_DRESIDUAL", "IS_RMS_NORM", "HAS_BIAS"], +) +# @triton.heuristics({"HAS_BIAS": lambda args: args["B"] is not None}) +# @triton.heuristics({"HAS_DRESIDUAL": lambda args: args["DRESIDUAL"] is not None}) +# @triton.heuristics({"STORE_DRESIDUAL": lambda args: args["DRESIDUAL_IN"] is not None}) +@triton.heuristics({"RECOMPUTE_OUTPUT": lambda args: args["Y"] is not None}) +@triton.jit +def _layer_norm_bwd_kernel( + X, # pointer to the input + W, # pointer to the weights + B, # pointer to the biases + Y, # pointer to the output to be recomputed + DY, # pointer to the output gradient + DX, # pointer to the input gradient + DW, # pointer to the partial sum of weights gradient + DB, # pointer to the partial sum of biases gradient + DRESIDUAL, + DRESIDUAL_IN, + Mean, # pointer to the mean + Rstd, # pointer to the 1/std + stride_x_row, # how much to increase the pointer when moving by 1 row + stride_y_row, + stride_dy_row, + stride_dx_row, + stride_dres_row, + stride_dres_in_row, + M, # number of rows in X + N, # number of columns in X + eps, # epsilon to avoid division by zero + rows_per_program, + IS_RMS_NORM: tl.constexpr, + BLOCK_N: tl.constexpr, + HAS_DRESIDUAL: tl.constexpr, + STORE_DRESIDUAL: tl.constexpr, + HAS_WEIGHT: tl.constexpr, + HAS_BIAS: tl.constexpr, + RECOMPUTE_OUTPUT: tl.constexpr, +): + # Map the program id to the elements of X, DX, and DY it should compute. 
+ row_block_id = tl.program_id(0) + row_start = row_block_id * rows_per_program + cols = tl.arange(0, BLOCK_N) + mask = cols < N + X += row_start * stride_x_row + if HAS_DRESIDUAL: + DRESIDUAL += row_start * stride_dres_row + if STORE_DRESIDUAL: + DRESIDUAL_IN += row_start * stride_dres_in_row + DY += row_start * stride_dy_row + DX += row_start * stride_dx_row + if RECOMPUTE_OUTPUT: + Y += row_start * stride_y_row + if HAS_WEIGHT: + w = tl.load(W + cols, mask=mask).to(tl.float32) + dw = tl.zeros((BLOCK_N,), dtype=tl.float32) + if RECOMPUTE_OUTPUT and HAS_BIAS: + b = tl.load(B + cols, mask=mask, other=0.0).to(tl.float32) + if HAS_BIAS: + db = tl.zeros((BLOCK_N,), dtype=tl.float32) + row_end = min((row_block_id + 1) * rows_per_program, M) + for row in range(row_start, row_end): + # Load data to SRAM + x = tl.load(X + cols, mask=mask, other=0).to(tl.float32) + dy = tl.load(DY + cols, mask=mask, other=0).to(tl.float32) + if not IS_RMS_NORM: + mean = tl.load(Mean + row) + rstd = tl.load(Rstd + row) + # Compute dx + xhat = (x - mean) * rstd if not IS_RMS_NORM else x * rstd + xhat = tl.where(mask, xhat, 0.0) + if RECOMPUTE_OUTPUT: + y = xhat * w if HAS_WEIGHT else xhat + if HAS_BIAS: + y = y + b + tl.store(Y + cols, y, mask=mask) + wdy = dy + if HAS_WEIGHT: + wdy = dy * w + dw += dy * xhat + if HAS_BIAS: + db += dy + if not IS_RMS_NORM: + c1 = tl.sum(xhat * wdy, axis=0) / N + c2 = tl.sum(wdy, axis=0) / N + dx = (wdy - (xhat * c1 + c2)) * rstd + else: + c1 = tl.sum(xhat * wdy, axis=0) / N + dx = (wdy - xhat * c1) * rstd + if HAS_DRESIDUAL: + dres = tl.load(DRESIDUAL + cols, mask=mask, other=0).to(tl.float32) + dx += dres + # Write dx + if STORE_DRESIDUAL: + tl.store(DRESIDUAL_IN + cols, dx, mask=mask) + tl.store(DX + cols, dx, mask=mask) + + X += stride_x_row + if HAS_DRESIDUAL: + DRESIDUAL += stride_dres_row + if STORE_DRESIDUAL: + DRESIDUAL_IN += stride_dres_in_row + if RECOMPUTE_OUTPUT: + Y += stride_y_row + DY += stride_dy_row + DX += stride_dx_row + if HAS_WEIGHT: + 
tl.store(DW + row_block_id * N + cols, dw, mask=mask) + if HAS_BIAS: + tl.store(DB + row_block_id * N + cols, db, mask=mask) + + +def _layer_norm_bwd( + dy, + x, + weight, + bias, + eps, + mean, + rstd, + dresidual=None, + has_residual=False, + is_rms_norm=False, + x_dtype=None, + recompute_output=False, +): + M, N = x.shape + assert x.stride(-1) == 1 + assert dy.stride(-1) == 1 + assert dy.shape == (M, N) + if dresidual is not None: + assert dresidual.stride(-1) == 1 + assert dresidual.shape == (M, N) + if weight is not None: + assert weight.shape == (N,) + assert weight.stride(-1) == 1 + if bias is not None: + assert bias.stride(-1) == 1 + assert bias.shape == (N,) + # allocate output + dx = ( + torch.empty_like(x) + if x_dtype is None + else torch.empty(M, N, dtype=x_dtype, device=x.device) + ) + dresidual_in = torch.empty_like( + x) if has_residual and dx.dtype != x.dtype else None + y = torch.empty(M, N, dtype=dy.dtype, + device=dy.device) if recompute_output else None + + # Less than 64KB per feature: enqueue fused kernel + MAX_FUSED_SIZE = 65536 // x.element_size() + BLOCK_N = min(MAX_FUSED_SIZE, triton.next_power_of_2(N)) + if N > BLOCK_N: + raise RuntimeError( + "This layer norm doesn't support feature dim >= 64KB.") + sm_count = torch.cuda.get_device_properties(x.device).multi_processor_count + _dw = ( + torch.empty((sm_count, N), dtype=torch.float32, device=weight.device) + if weight is not None + else None + ) + _db = ( + torch.empty((sm_count, N), dtype=torch.float32, device=bias.device) + if bias is not None + else None + ) + rows_per_program = math.ceil(M / sm_count) + grid = (sm_count,) + with torch.cuda.device(x.device.index): + _layer_norm_bwd_kernel[grid]( + x, + weight, + bias, + y, + dy, + dx, + _dw, + _db, + dresidual, + dresidual_in, + mean, + rstd, + x.stride(0), + 0 if not recompute_output else y.stride(0), + dy.stride(0), + dx.stride(0), + dresidual.stride(0) if dresidual is not None else 0, + dresidual_in.stride(0) if dresidual_in is not 
None else 0, + M, + N, + eps, + rows_per_program, + is_rms_norm, + BLOCK_N, + dresidual is not None, + dresidual_in is not None, + weight is not None, + bias is not None, + ) + dw = _dw.sum(0).to(weight.dtype) if weight is not None else None + db = _db.sum(0).to(bias.dtype) if bias is not None else None + # Don't need to compute dresidual_in separately in this case + if has_residual and dx.dtype == x.dtype: + dresidual_in = dx + return (dx, dw, db, dresidual_in) if not recompute_output else (dx, dw, db, dresidual_in, y) + + +class LayerNormFn(torch.autograd.Function): + + @staticmethod + @contiguous + def forward( + ctx, + x, + weight, + bias, + residual=None, + eps=1e-6, + prenorm=False, + residual_in_fp32=False, + is_rms_norm=False, + ): + x_shape_og = x.shape + # reshape input data into 2D tensor + x = x.reshape(-1, x.shape[-1]) + if residual is not None: + assert residual.shape == x_shape_og + residual = residual.reshape(-1, residual.shape[-1]) + residual_dtype = ( + residual.dtype + if residual is not None + else (torch.float32 if residual_in_fp32 else None) + ) + y, mean, rstd, residual_out = _layer_norm_fwd( + x, weight, bias, eps, residual, residual_dtype=residual_dtype, is_rms_norm=is_rms_norm + ) + ctx.save_for_backward(residual_out, weight, bias, mean, rstd) + ctx.x_shape_og = x_shape_og + ctx.eps = eps + ctx.is_rms_norm = is_rms_norm + ctx.has_residual = residual is not None + ctx.prenorm = prenorm + ctx.x_dtype = x.dtype + y = y.reshape(x_shape_og) + return y if not prenorm else (y, residual_out.reshape(x_shape_og)) + + @staticmethod + @contiguous + def backward(ctx, dy, *args): + x, weight, bias, mean, rstd = ctx.saved_tensors + dy = dy.reshape(-1, dy.shape[-1]) + assert dy.shape == x.shape + if ctx.prenorm: + dresidual = args[0] + dresidual = dresidual.reshape(-1, dresidual.shape[-1]) + assert dresidual.shape == x.shape + else: + dresidual = None + dx, dw, db, dresidual_in = _layer_norm_bwd( + dy, + x, + weight, + bias, + ctx.eps, + mean, + rstd, + 
dresidual, + ctx.has_residual, + ctx.is_rms_norm, + x_dtype=ctx.x_dtype, + ) + return ( + dx.reshape(ctx.x_shape_og), + dw, + db, + dresidual_in.reshape(ctx.x_shape_og) if ctx.has_residual else None, + None, + None, + None, + None, + ) + + +def layer_norm_fn( + x, + weight, + bias, + residual=None, + eps=1e-6, + prenorm=False, + residual_in_fp32=False, + is_rms_norm=False, +): + return LayerNormFn.apply(x, weight, bias, residual, eps, prenorm, residual_in_fp32, is_rms_norm) + + +def rms_norm_fn( + x, + weight, + bias, + residual=None, + prenorm=False, + residual_in_fp32=False, + eps=1e-6 +): + return LayerNormFn.apply(x, weight, bias, residual, eps, prenorm, residual_in_fp32, True) + + +class LayerNorm(nn.Module): + + def __init__( + self, + hidden_size: int, + elementwise_affine: bool = True, + eps: float = 1e-5 + ) -> LayerNorm: + super().__init__() + + self.hidden_size = hidden_size + self.elementwise_affine = elementwise_affine + self.eps = eps + + if elementwise_affine: + self.weight = nn.Parameter(torch.ones(hidden_size)) + else: + self.register_parameter("weight", None) + self.register_parameter("bias", None) + + def __repr__(self) -> str: + s = f"{self.__class__.__name__}({self.hidden_size}" + if not self.elementwise_affine: + s += f", elementwise_affine={self.elementwise_affine}" + s += f", eps={self.eps}" + s += ")" + return s + + def forward(self, x, residual=None, prenorm=False, residual_in_fp32=False): + return layer_norm_fn( + x, + self.weight, + self.bias, + residual=residual, + eps=self.eps, + prenorm=prenorm, + residual_in_fp32=residual_in_fp32 + ) + + +class RMSNorm(nn.Module): + + def __init__( + self, + hidden_size: int, + elementwise_affine: bool = True, + eps: float = 1e-5 + ) -> RMSNorm: + super().__init__() + + self.hidden_size = hidden_size + self.elementwise_affine = elementwise_affine + self.eps = eps + + if elementwise_affine: + self.weight = nn.Parameter(torch.ones(hidden_size)) + else: + self.register_parameter("weight", None) + 
self.register_parameter("bias", None) + + def __repr__(self) -> str: + s = f"{self.__class__.__name__}({self.hidden_size}" + if not self.elementwise_affine: + s += f", elementwise_affine={self.elementwise_affine}" + s += f", eps={self.eps}" + s += ")" + return s + + def forward(self, x, residual=None, prenorm=False, residual_in_fp32=False): + return rms_norm_fn( + x, + self.weight, + self.bias, + residual=residual, + eps=self.eps, + prenorm=prenorm, + residual_in_fp32=residual_in_fp32, + ) + + +class LayerNormLinearFn(torch.autograd.Function): + + @staticmethod + @contiguous + def forward( + ctx, + x, + norm_weight, + norm_bias, + linear_weight, + linear_bias, + residual=None, + eps=1e-6, + prenorm=False, + residual_in_fp32=False, + is_rms_norm=False, + ): + x_shape_og = x.shape + # reshape input data into 2D tensor + x = x.reshape(-1, x.shape[-1]) + if residual is not None: + assert residual.shape == x_shape_og + residual = residual.reshape(-1, residual.shape[-1]) + residual_dtype = ( + residual.dtype + if residual is not None + else (torch.float32 if residual_in_fp32 else None) + ) + y, mean, rstd, residual_out = _layer_norm_fwd( + x, + norm_weight, + norm_bias, + eps, + residual, + out_dtype=None if not torch.is_autocast_enabled() else torch.get_autocast_gpu_dtype(), + residual_dtype=residual_dtype, + is_rms_norm=is_rms_norm, + ) + y = y.reshape(x_shape_og) + dtype = torch.get_autocast_gpu_dtype() if torch.is_autocast_enabled() else y.dtype + linear_weight = linear_weight.to(dtype) + linear_bias = linear_bias.to( + dtype) if linear_bias is not None else None + out = F.linear(y.to(linear_weight.dtype), linear_weight, linear_bias) + # We don't store y, will be recomputed in the backward pass to save memory + ctx.save_for_backward(residual_out, norm_weight, + norm_bias, linear_weight, mean, rstd) + ctx.x_shape_og = x_shape_og + ctx.eps = eps + ctx.is_rms_norm = is_rms_norm + ctx.has_residual = residual is not None + ctx.prenorm = prenorm + ctx.x_dtype = x.dtype + 
ctx.linear_bias_is_none = linear_bias is None + return out if not prenorm else (out, residual_out.reshape(x_shape_og)) + + @staticmethod + @contiguous + def backward(ctx, dout, *args): + x, norm_weight, norm_bias, linear_weight, mean, rstd = ctx.saved_tensors + dout = dout.reshape(-1, dout.shape[-1]) + dy = F.linear(dout, linear_weight.t()) + dlinear_bias = None if ctx.linear_bias_is_none else dout.sum(0) + assert dy.shape == x.shape + if ctx.prenorm: + dresidual = args[0] + dresidual = dresidual.reshape(-1, dresidual.shape[-1]) + assert dresidual.shape == x.shape + else: + dresidual = None + dx, dnorm_weight, dnorm_bias, dresidual_in, y = _layer_norm_bwd( + dy, + x, + norm_weight, + norm_bias, + ctx.eps, + mean, + rstd, + dresidual, + ctx.has_residual, + ctx.is_rms_norm, + x_dtype=ctx.x_dtype, + recompute_output=True, + ) + dlinear_weight = torch.einsum("bo,bi->oi", dout, y) + return ( + dx.reshape(ctx.x_shape_og), + dnorm_weight, + dnorm_bias, + dlinear_weight, + dlinear_bias, + dresidual_in.reshape(ctx.x_shape_og) if ctx.has_residual else None, + None, + None, + None, + None, + ) + + +def layer_norm_linear_fn( + x, + norm_weight, + norm_bias, + linear_weight, + linear_bias, + residual=None, + eps=1e-6, + prenorm=False, + residual_in_fp32=False, + is_rms_norm=False, +): + return LayerNormLinearFn.apply( + x, + norm_weight, + norm_bias, + linear_weight, + linear_bias, + residual, + eps, + prenorm, + residual_in_fp32, + is_rms_norm, + ) + + +class LayerNormLinear(nn.Module): + + def __init__( + self, + hidden_size, + elementwise_affine: bool = True, + eps=1e-5 + ) -> LayerNormLinear: + super().__init__() + + self.hidden_size = hidden_size + self.elementwise_affine = elementwise_affine + self.eps = eps + + if elementwise_affine: + self.weight = nn.Parameter(torch.ones(hidden_size)) + else: + self.register_parameter("weight", None) + self.register_parameter("bias", None) + + def __repr__(self) -> str: + s = f"{self.__class__.__name__}({self.hidden_size}" + if not 
self.elementwise_affine: + s += f", elementwise_affine={self.elementwise_affine}" + s += f", eps={self.eps}" + s += ")" + return s + + def forward(self, x, weight, bias, residual=None, prenorm=False, residual_in_fp32=False): + return layer_norm_linear_fn( + x, + self.weight, + self.bias, + weight, + bias, + residual=residual, + eps=self.eps, + prenorm=prenorm, + residual_in_fp32=residual_in_fp32, + is_rms_norm=False + ) + + +class RMSNormLinear(nn.Module): + + def __init__( + self, + hidden_size, + elementwise_affine: bool = True, + eps=1e-5 + ) -> RMSNormLinear: + super().__init__() + + self.hidden_size = hidden_size + self.elementwise_affine = elementwise_affine + self.eps = eps + + if elementwise_affine: + self.weight = nn.Parameter(torch.ones(hidden_size)) + else: + self.register_parameter("weight", None) + self.register_parameter("bias", None) + + def __repr__(self) -> str: + s = f"{self.__class__.__name__}({self.hidden_size}" + if not self.elementwise_affine: + s += f", elementwise_affine={self.elementwise_affine}" + s += f", eps={self.eps}" + s += ")" + return s + + def forward(self, x, weight, bias, residual=None, prenorm=False, residual_in_fp32=False): + return layer_norm_linear_fn( + x, + self.weight, + self.bias, + weight, + bias, + residual=residual, + eps=self.eps, + prenorm=prenorm, + residual_in_fp32=residual_in_fp32, + is_rms_norm=True + ) diff --git a/fla/modules/rotary.py b/fla/modules/rotary.py new file mode 100644 index 0000000000000000000000000000000000000000..f77e5ee6b997277607883ec4e9a63f57e6142bbc --- /dev/null +++ b/fla/modules/rotary.py @@ -0,0 +1,310 @@ +# -*- coding: utf-8 -*- + +# Copyright (c) 2023, Tri Dao. 
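The closed-form input gradient used by `_layer_norm_bwd_kernel` in `layernorm.py` above, `dx = (wdy - (xhat * c1 + c2)) * rstd` with `c1 = mean(xhat * wdy)` and `c2 = mean(wdy)`, can likewise be verified against finite differences in plain Python. A sketch under the same conventions as the kernel (helper names are illustrative):

```python
import math

def layernorm_fwd(x, w, b, eps=1e-5):
    # y = (x - mean) * rstd * w + b, mirroring _layer_norm_fwd_1pass_kernel
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    rstd = 1.0 / math.sqrt(var + eps)
    xhat = [(v - mean) * rstd for v in x]
    return [xh * wi + bi for xh, wi, bi in zip(xhat, w, b)], mean, rstd

def layernorm_bwd_dx(x, w, dy, mean, rstd):
    # dx = (wdy - (xhat * c1 + c2)) * rstd, the kernel's closed form
    n = len(x)
    xhat = [(v - mean) * rstd for v in x]
    wdy = [d * wi for d, wi in zip(dy, w)]
    c1 = sum(xh * g for xh, g in zip(xhat, wdy)) / n
    c2 = sum(wdy) / n
    return [(g - (xh * c1 + c2)) * rstd for g, xh in zip(wdy, xhat)]

x = [0.5, -1.0, 2.0, 0.25]
w = [1.1, 0.9, 1.0, 1.2]
b = [0.0, 0.1, -0.1, 0.2]
dy = [1.0, -0.5, 0.25, 0.75]
_, mean, rstd = layernorm_fwd(x, w, b)
ana = layernorm_bwd_dx(x, w, dy, mean, rstd)

# Central finite differences of dot(dy, layernorm_fwd(x)) w.r.t. each x[i]
step = 1e-6
num = []
for i in range(len(x)):
    xp, xm = list(x), list(x)
    xp[i] += step
    xm[i] -= step
    yp, _, _ = layernorm_fwd(xp, w, b)
    ym, _, _ = layernorm_fwd(xm, w, b)
    num.append(sum(d * (a - c) for d, a, c in zip(dy, yp, ym)) / (2 * step))

assert all(abs(a - g) < 1e-4 for a, g in zip(ana, num))
```

Note that the `c2` term vanishes for the RMS-norm branch (`IS_RMS_NORM`), since there is no mean subtraction; the kernel reflects this by dropping `c2` in that path.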
+ +from typing import Optional, Tuple, Union + +import torch +from einops import rearrange, repeat + +from fla.ops.rotary import apply_rotary + + +def rotate_half(x, interleaved=False): + if not interleaved: + x1, x2 = x.chunk(2, dim=-1) + return torch.cat((-x2, x1), dim=-1) + else: + x1, x2 = x[..., ::2], x[..., 1::2] + return rearrange(torch.stack((-x2, x1), dim=-1), "... d two -> ... (d two)", two=2) + + +def apply_rotary_emb_torch(x, cos, sin, interleaved=False): + """ + x: (batch_size, seqlen, nheads, headdim) + cos, sin: (seqlen, rotary_dim / 2) or (batch_size, seqlen, rotary_dim / 2) + """ + ro_dim = cos.shape[-1] * 2 + assert ro_dim <= x.shape[-1] + cos = repeat( + cos, "... d -> ... 1 (2 d)" if not interleaved else "... d -> ... 1 (d 2)") + sin = repeat( + sin, "... d -> ... 1 (2 d)" if not interleaved else "... d -> ... 1 (d 2)") + return torch.cat( + [x[..., :ro_dim] * cos + + rotate_half(x[..., :ro_dim], interleaved) * sin, x[..., ro_dim:]], + dim=-1, + ) + + +class ApplyRotaryEmb(torch.autograd.Function): + @staticmethod + def forward( + ctx, + x, + cos, + sin, + interleaved=False, + inplace=False, + seqlen_offsets: Union[int, torch.Tensor] = 0, + cu_seqlens: Optional[torch.Tensor] = None, + max_seqlen: Optional[int] = None, + ): + out = apply_rotary( + x, + cos, + sin, + seqlen_offsets=seqlen_offsets, + cu_seqlens=cu_seqlens, + max_seqlen=max_seqlen, + interleaved=interleaved, + inplace=inplace, + ) + if isinstance(seqlen_offsets, int): + # Can't save int with save_for_backward + ctx.save_for_backward(cos, sin, cu_seqlens) + ctx.seqlen_offsets = seqlen_offsets + else: + ctx.save_for_backward(cos, sin, cu_seqlens, seqlen_offsets) + ctx.seqlen_offsets = None + ctx.interleaved = interleaved + ctx.inplace = inplace + ctx.max_seqlen = max_seqlen + return out if not inplace else x + + @staticmethod + def backward(ctx, do): + seqlen_offsets = ctx.seqlen_offsets + if seqlen_offsets is None: + cos, sin, cu_seqlens, seqlen_offsets = ctx.saved_tensors + else: + 
            cos, sin, cu_seqlens = ctx.saved_tensors
+        # TD [2023-09-02]: For some reason Triton (2.0.0.post1) errors with
+        # "[CUDA]: invalid device context", and cloning makes it work. Idk why. Triton 2.1.0 works.
+        if not ctx.interleaved and not ctx.inplace:
+            do = do.clone()
+        dx = apply_rotary(
+            do,
+            cos,
+            sin,
+            seqlen_offsets=seqlen_offsets,
+            cu_seqlens=cu_seqlens,
+            max_seqlen=ctx.max_seqlen,
+            interleaved=ctx.interleaved,
+            inplace=ctx.inplace,
+            conjugate=True,
+        )
+        return dx, None, None, None, None, None, None, None
+
+
+def apply_rotary_emb(
+    x,
+    cos,
+    sin,
+    interleaved=False,
+    inplace=False,
+    seqlen_offsets: Union[int, torch.Tensor] = 0,
+    cu_seqlens: Optional[torch.Tensor] = None,
+    max_seqlen: Optional[int] = None,
+):
+    """
+    Arguments:
+        x: (batch_size, seqlen, nheads, headdim) if cu_seqlens is None
+            else (total_seqlen, nheads, headdim)
+        cos, sin: (seqlen_rotary, rotary_dim / 2)
+        interleaved: if True, rotate pairs of even and odd dimensions (GPT-J style) instead
+            of 1st half and 2nd half (GPT-NeoX style).
+        inplace: if True, apply rotary embedding in-place.
+        seqlen_offsets: (batch_size,) or int. Each sequence in x is shifted by this amount.
+            Most commonly used in inference when we have KV cache.
+        cu_seqlens: (batch + 1,) or None
+        max_seqlen: int
+    Return:
+        out: (batch_size, seqlen, nheads, headdim) if cu_seqlens is None
+            else (total_seqlen, nheads, headdim)
+    rotary_dim must be <= headdim
+    Apply rotary embedding to the first rotary_dim of x.
+    """
+    return ApplyRotaryEmb.apply(
+        x, cos, sin, interleaved, inplace, seqlen_offsets, cu_seqlens, max_seqlen
+    )
+
+
+# For backward compatibility
+apply_rotary_emb_func = apply_rotary_emb
+
+
+class RotaryEmbedding(torch.nn.Module):
+    """
+    The rotary position embeddings from RoFormer_ (Su et al.).
+    A crucial insight from the method is that the queries and keys are
+    transformed by rotation matrices which depend on the relative positions.
+
+    Other implementations are available in the Rotary Transformer repo_ and in
+    GPT-NeoX_; GPT-NeoX was an inspiration for this implementation.
+
+    .. _RoFormer: https://arxiv.org/abs/2104.09864
+    .. _repo: https://github.com/ZhuiyiTechnology/roformer
+    .. _GPT-NeoX: https://github.com/EleutherAI/gpt-neox
+
+    If scale_base is not None, this implements XPos (Sun et al., https://arxiv.org/abs/2212.10554).
+    A recommended value for scale_base is 512: https://github.com/HazyResearch/flash-attention/issues/96
+    Reference: https://github.com/sunyt32/torchscale/blob/main/torchscale/component/xpos_relative_position.py
+    """
+
+    def __init__(
+        self,
+        dim: int,
+        base=10000.0,
+        interleaved=False,
+        scale_base=None,
+        pos_idx_in_fp32=True,
+        device=None,
+    ):
+        """
+        interleaved: if True, rotate pairs of even and odd dimensions (GPT-J style) instead
+            of 1st half and 2nd half (GPT-NeoX style).
+        pos_idx_in_fp32: if True, the position indices [0.0, ..., seqlen - 1] are in fp32,
+            otherwise they might be in lower precision.
+            This option was added because previously (before 2023-07-02), when we constructed
+            the position indices, we used the dtype of self.inv_freq. In most cases this would
+            be fp32, but if the model is trained in pure bf16 (not mixed precision), then
+            self.inv_freq would be bf16, and the position indices would also be in bf16.
+            Because of the limited precision of bf16 (e.g. 1995.0 is rounded to 2000.0), the
+            embeddings for some positions will coincide.
+            To maintain compatibility with models previously trained in pure bf16,
+            we add this option.
+ """ + super().__init__() + self.dim = dim + self.base = float(base) + self.pos_idx_in_fp32 = pos_idx_in_fp32 + # Generate and save the inverse frequency buffer (non trainable) + inv_freq = self._compute_inv_freq(device) + self.register_buffer("inv_freq", inv_freq, persistent=False) + self.interleaved = interleaved + self.scale_base = scale_base + scale = ( + (torch.arange(0, dim, 2, device=device, + dtype=torch.float32) + 0.4 * dim) / (1.4 * dim) + if scale_base is not None + else None + ) + self.register_buffer("scale", scale, persistent=False) + + self._seq_len_cached = 0 + self._cos_cached = None + self._sin_cached = None + self._cos_k_cached = None + self._sin_k_cached = None + + def _compute_inv_freq(self, device=None): + return 1.0 / ( + self.base + ** (torch.arange(0, self.dim, 2, device=device, dtype=torch.float32) / self.dim) + ) + + def _update_cos_sin_cache(self, seqlen, device=None, dtype=None): + # Reset the tables if the sequence length has changed, + # if we're on a new device (possibly due to tracing for instance), + # or if we're switching from inference mode to training + if ( + seqlen > self._seq_len_cached + or self._cos_cached is None + or self._cos_cached.device != device + or self._cos_cached.dtype != dtype + or (self.training and self._cos_cached.is_inference()) + ): + self._seq_len_cached = seqlen + # We want fp32 here, not self.inv_freq.dtype, since the model could be loaded in bf16 + # And the output of arange can be quite large, so bf16 would lose a lot of precision. + # However, for compatibility reason, we add an option to use the dtype of self.inv_freq. + if self.pos_idx_in_fp32: + t = torch.arange(seqlen, device=device, dtype=torch.float32) + # We want fp32 here as well since inv_freq will be multiplied with t, and the output + # will be large. Having it in bf16 will lose a lot of precision and cause the + # cos & sin output to change significantly. 
+                # We want to recompute self.inv_freq if it was not loaded in fp32
+                if self.inv_freq.dtype != torch.float32:
+                    inv_freq = self._compute_inv_freq(device=device)
+                else:
+                    inv_freq = self.inv_freq
+            else:
+                t = torch.arange(seqlen, device=device,
+                                 dtype=self.inv_freq.dtype)
+                inv_freq = self.inv_freq
+            # Don't do einsum, it converts fp32 to fp16 under AMP
+            # freqs = torch.einsum("i,j->ij", t, self.inv_freq)
+            freqs = torch.outer(t, inv_freq)
+            if self.scale is None:
+                self._cos_cached = torch.cos(freqs).to(dtype)
+                self._sin_cached = torch.sin(freqs).to(dtype)
+            else:
+                power = (
+                    torch.arange(seqlen, dtype=self.scale.dtype,
+                                 device=self.scale.device)
+                    - seqlen // 2
+                ) / self.scale_base
+                scale = self.scale.to(
+                    device=power.device) ** rearrange(power, "s -> s 1")
+                # We want the multiplication by scale to happen in fp32
+                self._cos_cached = (torch.cos(freqs) * scale).to(dtype)
+                self._sin_cached = (torch.sin(freqs) * scale).to(dtype)
+                self._cos_k_cached = (torch.cos(freqs) / scale).to(dtype)
+                self._sin_k_cached = (torch.sin(freqs) / scale).to(dtype)
+
+    def forward(
+        self,
+        q: torch.Tensor,
+        k: torch.Tensor,
+        seqlen_offset: Union[int, torch.Tensor] = 0,
+        max_seqlen: Optional[int] = None,
+    ) -> Union[torch.Tensor, Tuple[torch.Tensor, torch.Tensor]]:
+        """
+        q: (batch, seqlen, nheads, headdim)
+        k: (batch, seqlen, nheads, headdim)
+        seqlen_offset: (batch_size,) or int. Each sequence in q and k is shifted by this amount.
+            Most commonly used in inference when we have KV cache.
+            If it's a tensor of shape (batch_size,), then to update the cos / sin cache, one
+            should pass in max_seqlen, which will update the cos / sin cache up to that length.
+        Apply rotary embedding *inplace* to q and k.
+ """ + seqlen = q.shape[1] + if max_seqlen is not None: + self._update_cos_sin_cache(max_seqlen, device=q.device, dtype=q.dtype) + elif isinstance(seqlen_offset, int): + self._update_cos_sin_cache(seqlen + seqlen_offset, device=q.device, dtype=q.dtype) + if self.scale is None: + q = apply_rotary_emb_func( + q, + self._cos_cached, + self._sin_cached, + interleaved=self.interleaved, + seqlen_offsets=seqlen_offset, + ) + k = apply_rotary_emb_func( + k, + self._cos_cached, + self._sin_cached, + interleaved=self.interleaved, + seqlen_offsets=seqlen_offset, + ) + + else: + q = apply_rotary_emb_func( + q, + self._cos_cached, + self._sin_cached, + interleaved=self.interleaved, + seqlen_offsets=seqlen_offset, + ) + k = apply_rotary_emb_func( + k, + self._cos_k_cached, + self._sin_k_cached, + interleaved=self.interleaved, + seqlen_offsets=seqlen_offset, + ) + + return q, k diff --git a/fla/ops/__init__.py b/fla/ops/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..4f8681d2a16bb0c9b86fc0f3cb268c4bb69ce5b8 --- /dev/null +++ b/fla/ops/__init__.py @@ -0,0 +1,18 @@ +# -*- coding: utf-8 -*- + +from .based import fused_chunk_based, parallel_based +from .gla import chunk_gla, fused_chunk_gla, fused_recurrent_gla +from .retention import (chunk_retention, fused_chunk_retention, + fused_recurrent_retention, parallel_retention) + +__all__ = [ + 'fused_chunk_based', + 'parallel_based', + 'chunk_gla', + 'fused_chunk_gla', + 'fused_recurrent_gla', + 'chunk_retention', + 'fused_chunk_retention', + 'fused_recurrent_retention', + 'parallel_retention' +] diff --git a/fla/ops/abc/__init__.py b/fla/ops/abc/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..1fa366a836aa307b9e4cd4a486e8600f8ac473b1 --- /dev/null +++ b/fla/ops/abc/__init__.py @@ -0,0 +1,11 @@ +# -*- coding: utf-8 -*- + +from .chunk import chunk_abc +from .chunk_gate import chunk_gated_abc +from .recurrent_fuse import fused_recurrent_gated_abc + +__all__ = [ + 
'chunk_abc', + 'chunk_gated_abc', + 'fused_recurrent_gated_abc' +] diff --git a/fla/ops/abc/chunk.py b/fla/ops/abc/chunk.py new file mode 100644 index 0000000000000000000000000000000000000000..599317e7d20a876e0f0f02fb2e7af303ca9fc8bc --- /dev/null +++ b/fla/ops/abc/chunk.py @@ -0,0 +1,1194 @@ +# -*- coding: utf-8 -*- + +# Copyright (c) 2023-2024, Yu Zhang, Songlin Yang + +from typing import Optional, Tuple + +import torch +import triton +import triton.language as tl + +from fla.ops.utils import (logcumsumexp_fwd_kernel, softmax_bwd_kernel, + softmax_fwd_kernel) +from fla.utils import contiguous + + +@triton.jit +def chunk_abc_fwd_kernel_h( + k, + v, + z, + h, + h0, + ht, + s_k_h, + s_k_t, + s_k_d, + s_v_h, + s_v_t, + s_v_d, + s_h_h, + s_h_t, + s_h_d, + T: tl.constexpr, + K: tl.constexpr, + V: tl.constexpr, + BT: tl.constexpr, + BK: tl.constexpr, + BV: tl.constexpr, + NT: tl.constexpr, + NORMK: tl.constexpr, + USE_INITIAL_STATE: tl.constexpr, + STORE_FINAL_STATE: tl.constexpr +): + i_v, i_k, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + + b_h = tl.zeros([BK, BV], dtype=tl.float32) + if USE_INITIAL_STATE: + p_h = tl.make_block_ptr(h0 + i_bh * K * V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + b_h += tl.load(p_h, boundary_check=(0, 1)).to(tl.float32) + if NORMK: + p_z0 = tl.make_block_ptr(z + i_bh * s_k_h, (T * K,), (s_k_d,), (i_k * BK,), (BK,), (0,)) + else: + p_z0 = tl.make_block_ptr(z + i_bh * s_v_h, (T * V,), (s_v_d,), (i_v * BV,), (BV,), (0,)) + b_zp = tl.load(p_z0).to(tl.float32) + for i_t in range(NT): + p_k = tl.make_block_ptr(k + i_bh * s_k_h, (K, T), (s_k_d, s_k_t), (i_k * BK, i_t * BT), (BK, BT), (0, 1)) + p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + p_h = tl.make_block_ptr(h + i_bh * s_h_h + i_t * K * V, (K, V), (s_h_t, s_h_d), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + + tl.store(p_h, b_h.to(p_h.dtype.element_ty), boundary_check=(0, 1)) + # [BK, BT] + b_k = 
tl.load(p_k, boundary_check=(0, 1)) + # [BT, BV] + b_v = tl.load(p_v, boundary_check=(0, 1)) + if NORMK: + p_zc = tl.make_block_ptr(z + i_bh * s_k_h, (T * K,), (s_k_d,), ((i_t * BT + BT - 1) * K + i_k * BK,), (BK,), (0,)) + # [BK,] + b_zc = tl.load(p_zc, boundary_check=(0,)) + b_r, b_zp = tl.exp(b_zp - b_zc), b_zc + # [BK, BV] + b_h = b_h * b_r[:, None] + b_k = tl.exp(b_k - b_zc[:, None]).to(b_k.dtype) + else: + p_zc = tl.make_block_ptr(z + i_bh * s_v_h, (T * V,), (s_v_d,), ((i_t * BT + BT - 1) * V + i_v * BV,), (BV,), (0,)) + # [BV,] + b_zc = tl.load(p_zc, boundary_check=(0,)) + b_r, b_zp = tl.exp(b_zp - b_zc), b_zc + # [BK, BV] + b_h = b_h * b_r[None, :] + b_v = tl.exp(b_v - b_zc[None, :]).to(b_v.dtype) + # [BK, BV] + b_h += tl.dot(b_k, b_v, allow_tf32=False) + + if STORE_FINAL_STATE: + p_h = tl.make_block_ptr(ht + i_bh * K * V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + tl.store(p_h, b_h.to(p_h.dtype.element_ty), boundary_check=(0, 1)) + + +@triton.jit +def chunk_abc_fwd_kernel_intra_K( + v, + z, + o, + A, + s_v_h, + s_v_t, + s_v_d, + T: tl.constexpr, + V: tl.constexpr, + BT: tl.constexpr, + BC: tl.constexpr, + BV: tl.constexpr, + NC: tl.constexpr +): + i_v, i_c, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + i_t, i_i = i_c // NC, i_c % NC + + p_z = tl.make_block_ptr(z + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT + i_i * BC, i_v * BV), (BC, BV), (1, 0)) + p_zn = tl.make_block_ptr(z + i_bh * s_v_h, (T * V,), (s_v_d,), ((i_t * BT + i_i * BC) * V + i_v * BV,), (BV,), (0,)) + # [BV,] + b_zn = tl.load(p_zn, boundary_check=(0,)) + # [BC, BV] + b_o = tl.zeros([BC, BV], dtype=tl.float32) + for i_j in range(0, i_i): + p_A = tl.make_block_ptr(A + i_bh * T * BT, (T, BT), (BT, 1), (i_t * BT + i_i * BC, i_j * BC), (BC, BC), (1, 0)) + p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT + i_j * BC, i_v * BV), (BC, BV), (1, 0)) + # [BC, BV] + b_v = tl.load(p_v, boundary_check=(0, 1)) + # [BC, BC] + b_A = tl.load(p_A, 
boundary_check=(0, 1)) + b_o += tl.dot(b_A, tl.exp(b_v - b_zn[None, :]).to(b_v.dtype), allow_tf32=False) + b_z = tl.load(p_z, boundary_check=(0, 1)) + b_o *= tl.exp(b_zn[None, :] - b_z) + + o_i = tl.arange(0, BC) + o_A = i_bh * T * BT + (i_t * BT + i_i * BC + tl.arange(0, BC)) * BT + i_i * BC + m_A = (i_t * BT + i_i * BC + tl.arange(0, BC)) < T + for j in range(0, BC): + p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T * V,), (1,), ((i_t * BT + i_i * BC + j) * V + i_v * BV,), (BV,), (0,)) + # [BC,] + b_A = tl.load(A + o_A + j, mask=m_A, other=0) + # [BV,] + b_v = tl.load(p_v, boundary_check=(0,)).to(tl.float32) + # [BC, BV] + # avoid 0 * inf = inf + m_i = o_i[:, None] >= j + b_o += tl.where(m_i, b_A[:, None] * tl.exp(b_v[None, :] - b_z), 0) + p_o = tl.make_block_ptr(o + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT + i_i * BC, i_v * BV), (BC, BV), (1, 0)) + tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0, 1)) + + +@triton.jit +def chunk_abc_fwd_kernel_K( + q, + k, + z, + h, + o, + A, + s_k_h, + s_k_t, + s_k_d, + s_v_h, + s_v_t, + s_v_d, + s_h_h, + s_h_t, + s_h_d, + scale, + T: tl.constexpr, + K: tl.constexpr, + V: tl.constexpr, + BT: tl.constexpr, + BK: tl.constexpr, + BV: tl.constexpr +): + i_v, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + i_p = tl.maximum(i_t * BT - 1, 0) + + o_i = tl.arange(0, BT) + m_s = o_i[:, None] >= o_i[None, :] + + b_o = tl.zeros([BT, BV], dtype=tl.float32) + b_A = tl.zeros([BT, BT], dtype=tl.float32) + for i_k in range(tl.cdiv(K, BK)): + p_q = tl.make_block_ptr(q + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + p_k = tl.make_block_ptr(k + i_bh * s_k_h, (K, T), (s_k_d, s_k_t), (i_k * BK, i_t * BT), (BK, BT), (0, 1)) + p_h = tl.make_block_ptr(h + i_bh * s_h_h + i_t * K * V, (K, V), (s_h_t, s_h_d), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + + # [BT, BK] + b_q = tl.load(p_q, boundary_check=(0, 1)) + b_q = (b_q * scale).to(b_q.dtype) + # [BK, BT] + b_k = tl.load(p_k, 
boundary_check=(0, 1)) + # [BK, BV] + b_h = tl.load(p_h, boundary_check=(0, 1)) + # [BT, BV] + b_o += tl.dot(b_q, b_h, allow_tf32=False) + # [BT, BT] + b_A += tl.dot(b_q, b_k, allow_tf32=False) + p_z = tl.make_block_ptr(z + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + p_o = tl.make_block_ptr(o + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + # [BT, BV] + b_z = tl.load(p_z, boundary_check=(0, 1)) + # [BT, BV] + p_zp = tl.make_block_ptr(z + i_bh * s_v_h, (T * V,), (s_v_d,), (i_p * V + i_v * BV,), (BV,), (0,)) + b_zp = tl.load(p_zp, boundary_check=(0,)) + b_o = b_o * tl.exp(b_zp[None, :] - b_z) + tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0, 1)) + + p_A = tl.make_block_ptr(A + i_bh * T * BT, (T, BT), (BT, 1), (i_t * BT, 0), (BT, BT), (1, 0)) + # [BT, BT] + b_A = tl.where(m_s, b_A, 0.) + if i_v == 0: + tl.store(p_A, b_A.to(p_A.dtype.element_ty), boundary_check=(0, 1)) + + +@triton.jit +def chunk_abc_fwd_kernel_intra_V( + q, + k, + z, + A, + s_k_h, + s_k_t, + s_k_d, + scale, + T: tl.constexpr, + K: tl.constexpr, + BT: tl.constexpr, + BC: tl.constexpr, + BK: tl.constexpr, + NC: tl.constexpr +): + i_k, i_c, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + i_t, i_i, i_j = i_c // (NC * NC), (i_c % (NC * NC)) // NC, (i_c % (NC * NC)) % NC + n_bh = tl.num_programs(2) + + if i_i > i_j: + p_q = tl.make_block_ptr(q + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0)) + p_k = tl.make_block_ptr(k + i_bh * s_k_h, (K, T), (s_k_d, s_k_t), (i_k * BK, i_t * BT + i_j * BC), (BK, BC), (0, 1)) + p_z = tl.make_block_ptr(z + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0)) + p_A = tl.make_block_ptr(A + (i_k*n_bh+i_bh)*T*BT, (T, BT), (BT, 1), (i_t * BT + i_i * BC, i_j * BC), (BC, BC), (1, 0)) + p_zn = tl.make_block_ptr(z + i_bh * s_k_h, (T * K,), (s_k_d,), ((i_t * BT + i_i * BC) * K + i_k * BK,), (BK,), (0,)) + # [BK,] 
+ b_zn = tl.load(p_zn, boundary_check=(0,)) + # [BC, BK] + b_q = tl.load(p_q, boundary_check=(0, 1)) + b_z = tl.load(p_z, boundary_check=(0, 1)) + b_q = (b_q * tl.exp(b_zn[None, :] - b_z) * scale).to(b_q.dtype) + # [BK, BC] + b_k = tl.load(p_k, boundary_check=(0, 1)) + b_k = tl.exp(b_k - b_zn[:, None]).to(b_k.dtype) + # [BC, BC] + b_A = tl.dot(b_q, b_k, allow_tf32=False) + tl.store(p_A, b_A.to(A.dtype.element_ty), boundary_check=(0, 1)) + elif i_i == i_j: + p_q = tl.make_block_ptr(q + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0)) + p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T * K,), (s_k_d,), ((i_t * BT + i_j * BC) * K + i_k * BK,), (BK,), (0,)) + p_z = tl.make_block_ptr(z + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0)) + # [BC, BK] + b_q = tl.load(p_q, boundary_check=(0, 1)) + b_z = tl.load(p_z, boundary_check=(0, 1)) + + o_i = tl.arange(0, BC) + o_A = (i_bh + i_k * n_bh) * T * BT + (i_t * BT + i_i * BC + tl.arange(0, BC)) * BT + i_j * BC + m_A = (i_t * BT + i_i * BC + tl.arange(0, BC)) < T + for j in range(0, BC): + # [BK,] + b_k = tl.load(p_k, boundary_check=(0,)).to(tl.float32) + # [BC,] + b_A = tl.sum(b_q * tl.exp(b_k[None, :] - b_z) * scale, 1) + b_A = tl.where(o_i >= j, b_A, 0.) 
+ tl.store(A + o_A + j, b_A.to(b_q.dtype), mask=m_A) + + p_k = tl.advance(p_k, (K,)) + + +@triton.jit +def chunk_abc_fwd_kernel_V( + q, + v, + z, + h, + o, + A, + s_k_h, + s_k_t, + s_k_d, + s_v_h, + s_v_t, + s_v_d, + s_h_h, + s_h_t, + s_h_d, + scale, + T: tl.constexpr, + K: tl.constexpr, + V: tl.constexpr, + BT: tl.constexpr, + BK: tl.constexpr, + BV: tl.constexpr +): + i_v, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + i_p = tl.maximum(i_t * BT - 1, 0) + + b_o = tl.zeros([BT, BV], dtype=tl.float32) + for i_k in range(tl.cdiv(K, BK)): + p_q = tl.make_block_ptr(q + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + p_z = tl.make_block_ptr(z + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + p_h = tl.make_block_ptr(h + i_bh * s_h_h + i_t * K * V, (K, V), (s_h_t, s_h_d), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + p_zp = tl.make_block_ptr(z + i_bh * s_k_h, (T * K,), (s_k_d,), (i_p * K + i_k * BK,), (BK,), (0,)) + + # [BT, BK] + b_q = tl.load(p_q, boundary_check=(0, 1)) + b_q = (b_q * scale).to(b_q.dtype) + # [BT, BK] + b_z = tl.load(p_z, boundary_check=(0, 1)) + # [BT, BK] + b_zp = tl.load(p_zp, boundary_check=(0,)) + b_q = (b_q * tl.exp(b_zp[None, :] - b_z)).to(b_q.dtype) + # [BK, BV] + b_h = tl.load(p_h, boundary_check=(0, 1)) + # works but dkw, owing to divine benevolence + # [BT, BV] + if i_k >= 0: + b_o += tl.dot(b_q, b_h, allow_tf32=False) + p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + p_o = tl.make_block_ptr(o + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + p_A = tl.make_block_ptr(A + i_bh * T * BT, (T, BT), (BT, 1), (i_t * BT, 0), (BT, BT), (1, 0)) + # [BT, BV] + b_v = tl.load(p_v, boundary_check=(0, 1)) + # [BT, BT] + b_A = tl.load(p_A, boundary_check=(0, 1)) + b_o += tl.dot(b_A, b_v, allow_tf32=False) + tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0, 1)) + + +@triton.jit 
+def chunk_abc_bwd_kernel_dh( + q, + z, + do, + dh, + s_k_h, + s_k_t, + s_k_d, + s_v_h, + s_v_t, + s_v_d, + s_h_h, + s_h_t, + s_h_d, + scale, + T: tl.constexpr, + K: tl.constexpr, + V: tl.constexpr, + BT: tl.constexpr, + BK: tl.constexpr, + BV: tl.constexpr, + NT: tl.constexpr, + NORMK: tl.constexpr +): + i_k, i_v, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + + b_dh = tl.zeros([BK, BV], dtype=tl.float32) + b_zp = tl.full([BK if NORMK else BV], float('inf'), dtype=tl.float32) + for i_t in range(NT - 1, -1, -1): + i_p = tl.maximum(i_t * BT - 1, 0) + p_q = tl.make_block_ptr(q + i_bh * s_k_h, (K, T), (s_k_d, s_k_t), (i_k * BK, i_t * BT), (BK, BT), (0, 1)) + p_do = tl.make_block_ptr(do + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + p_dh = tl.make_block_ptr(dh + i_bh * s_h_h + i_t * K*V, (K, V), (s_h_t, s_h_d), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + + # [BK, BT] + b_q = tl.load(p_q, boundary_check=(0, 1)) + b_q = (b_q * scale).to(b_q.dtype) + # [BT, BV] + b_do = tl.load(p_do, boundary_check=(0, 1)) + + tl.store(p_dh, b_dh.to(p_dh.dtype.element_ty), boundary_check=(0, 1)) + if NORMK: + p_z = tl.make_block_ptr(z + i_bh * s_k_h, (K, T), (s_k_d, s_k_t), (i_k * BK, i_t * BT), (BK, BT), (0, 1)) + p_zc = tl.make_block_ptr(z + i_bh * s_k_h, (T * K,), (s_k_d,), (i_p * K + i_k * BK,), (BK,), (0,)) + # [BK,] + b_zc = tl.load(p_zc, boundary_check=(0,)) + b_r, b_zp = tl.exp(b_zc - b_zp), b_zc + # [BK, BT] + b_z = tl.load(p_z, boundary_check=(0, 1)) + b_q = (b_q * tl.exp(b_zc[:, None] - b_z)).to(b_q.dtype) + # [BK, BV] + b_dh = b_dh * b_r[:, None] + else: + p_z = tl.make_block_ptr(z + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + p_zc = tl.make_block_ptr(z + i_bh * s_v_h, (T * V,), (s_v_d,), (i_p * V + i_v * BV,), (BV,), (0,)) + # [BV,] + b_zc = tl.load(p_zc, boundary_check=(0,)) + b_r, b_zp = tl.exp(b_zc - b_zp), b_zc + # [BT, BV] + b_z = tl.load(p_z, boundary_check=(0,)) + b_do = (b_do * 
tl.exp(b_zc[None, :] - b_z)).to(b_do.dtype) + # [BK, BV] + b_dh = b_dh * b_r[None, :] + # [BK, BV] + b_dh += tl.dot(b_q, b_do, allow_tf32=False) + + +@triton.jit +def chunk_abc_bwd_kernel_V( + k, + v, + z, + h, + A, + do, + dh, + dq, + dk, + dv, + dA, + s_k_h, + s_k_t, + s_k_d, + s_v_h, + s_v_t, + s_v_d, + s_h_h, + s_h_t, + s_h_d, + scale, + T: tl.constexpr, + K: tl.constexpr, + V: tl.constexpr, + BT: tl.constexpr, + BK: tl.constexpr, + BV: tl.constexpr +): + i_k, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + i_p = tl.maximum(i_t * BT - 1, 0) + n_bh = tl.num_programs(2) + + p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + p_zc = tl.make_block_ptr(z + i_bh * s_k_h, (T * K,), (s_k_d,), ((i_t * BT + BT - 1) * K + i_k * BK,), (BK,), (0,)) + p_A = tl.make_block_ptr(A + i_bh * T * BT, (BT, T), (1, BT), (0, i_t * BT), (BT, BT), (0, 1)) + + # [BK,] + b_zc = tl.load(p_zc, boundary_check=(0,)) + # [BT, BK] + b_k = tl.load(p_k, boundary_check=(0, 1)) + b_k = tl.exp(b_k - b_zc[None, :]).to(b_k.dtype) + # [BT, BT] + b_A = tl.load(p_A, boundary_check=(0, 1)) + + b_dq = tl.zeros([BT, BK], dtype=tl.float32) + b_dk = tl.zeros([BT, BK], dtype=tl.float32) + b_dA = tl.zeros([BT, BT], dtype=tl.float32) + for i_v in range(tl.cdiv(V, BV)): + p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + p_h = tl.make_block_ptr(h + i_bh * s_h_h + i_t * V * K, (V, K), (s_h_d, s_h_t), (i_v * BV, i_k * BK), (BV, BK), (0, 1)) + p_do = tl.make_block_ptr(do + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + p_dh = tl.make_block_ptr(dh + i_bh * s_h_h + i_t * K*V, (K, V), (s_h_t, s_h_d), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + p_dv = tl.make_block_ptr(dv + (i_k*n_bh+i_bh) * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + + # [BT, BV] + b_v = tl.load(p_v, boundary_check=(0, 1)) + # [BV, BK] + b_h = tl.load(p_h, 
boundary_check=(0, 1)) + # [BT, BV] + b_do = tl.load(p_do, boundary_check=(0, 1)) + # [BK, BV] + b_dh = tl.load(p_dh, boundary_check=(0, 1)) + + # [BT, BV] + b_dv = tl.dot(b_k, b_dh, allow_tf32=False) + if i_k == 0: + b_dv += tl.dot(b_A, b_do, allow_tf32=False) + b_do = (b_do * scale).to(b_do.dtype) + tl.store(p_dv, b_dv.to(p_dv.dtype.element_ty), boundary_check=(0, 1)) + # [BT, BT] + b_dA += tl.dot(b_do, tl.trans(b_v), allow_tf32=False) + # [BT, BK] + b_dq += tl.dot(b_do, b_h, allow_tf32=False) + # [BT, BK] + b_dk += tl.dot(b_v, tl.trans(b_dh), allow_tf32=False) + p_z = tl.make_block_ptr(z + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + p_zp = tl.make_block_ptr(z + i_bh * s_k_h, (T * K,), (s_k_d,), (i_p * K + i_k * BK,), (BK,), (0,)) + # [BK,] + b_zp = tl.load(p_zp, boundary_check=(0,)) + # [BT, BK] + b_z = tl.load(p_z, boundary_check=(0, 1)) + b_z = tl.exp(b_zp[None, :] - b_z) + # [BT, BK] + b_dq = b_dq * b_z + b_dk = b_dk * b_k + + p_dq = tl.make_block_ptr(dq + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + p_dk = tl.make_block_ptr(dk + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + p_dA = tl.make_block_ptr(dA + i_bh * T * BT, (T, BT,), (BT, 1), (i_t * BT, 0), (BT, BT), (1, 0)) + tl.store(p_dq, b_dq.to(p_dq.dtype.element_ty), boundary_check=(0, 1)) + tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), boundary_check=(0, 1)) + + o_i = tl.arange(0, BT) + m_s = o_i[:, None] >= o_i[None, :] + # [BT, BT] + b_dA = tl.where(m_s, b_dA, 0.).to(b_k.dtype) + if i_k == 0: + tl.store(p_dA, b_dA.to(p_dA.dtype.element_ty), boundary_check=(0, 1)) + + +@triton.jit +def chunk_abc_bwd_kernel_intra_V( + q, + k, + z, + dA, + dq, + dk, + s_k_h, + s_k_t, + s_k_d, + T: tl.constexpr, + K: tl.constexpr, + BT: tl.constexpr, + BC: tl.constexpr, + BK: tl.constexpr, + NC: tl.constexpr +): + i_k, i_c, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + i_t, i_i = i_c // NC, i_c % NC + + 
p_z = tl.make_block_ptr(z + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0)) + p_zn = tl.make_block_ptr(z + i_bh * s_k_h, (T * K,), (s_k_d,), ((i_t * BT + i_i * BC) * K + i_k * BK,), (BK,), (0,)) + # [BK,] + b_zn = tl.load(p_zn, boundary_check=(0,)) + # [BC, BK] + b_z = tl.load(p_z, boundary_check=(0, 1)) + b_zq = tl.exp(b_zn[None, :] - b_z) + b_dq = tl.zeros([BC, BK], dtype=tl.float32) + for i_j in range(0, i_i): + p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_j * BC, i_k * BK), (BC, BK), (1, 0)) + p_dA = tl.make_block_ptr(dA + i_bh * T * BT, (T, BT), (BT, 1), (i_t * BT + i_i * BC, i_j * BC), (BC, BC), (1, 0)) + # [BC, BK] + b_k = tl.load(p_k, boundary_check=(0, 1)) + b_kz = tl.exp(b_k - b_zn[None, :]).to(b_k.dtype) + # [BC, BC] + b_dA = tl.load(p_dA, boundary_check=(0, 1)) + # [BC, BK] + b_dq += tl.dot(b_dA, b_kz, allow_tf32=False) + b_dq *= b_zq + + o_i = tl.arange(0, BC) + o_dA = i_bh * T * BT + (i_t * BT + i_i * BC + tl.arange(0, BC)) * BT + i_i * BC + m_dA = (i_t * BT + i_i * BC + tl.arange(0, BC)) < T + for j in range(0, BC): + p_kj = tl.make_block_ptr(k + i_bh * s_k_h, (T * K,), (1,), ((i_t * BT + i_i*BC+j) * K + i_k * BK,), (BK,), (0,)) + # [BC,] + b_dA = tl.load(dA + o_dA + j, mask=m_dA, other=0) + # [BK,] + b_kj = tl.load(p_kj, boundary_check=(0,)).to(tl.float32) + # [BC, BK] + m_i = o_i[:, None] >= j + # [BC, BK] + b_dq += tl.where(m_i, b_dA[:, None] * tl.exp(b_kj[None, :] - b_z), 0.) 
+ p_dq = tl.make_block_ptr(dq + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0)) + tl.store(p_dq, b_dq.to(p_dq.dtype.element_ty), boundary_check=(0, 1)) + + tl.debug_barrier() + p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0)) + p_zn = tl.make_block_ptr(z + i_bh * s_k_h, (T*K,), (s_k_d,), ((i_t * BT + i_i * BC + BC - 1) * K + i_k * BK,), (BK,), (0,)) + # [BK,] + b_zn = tl.load(p_zn, boundary_check=(0,)) + # [BC, BK] + b_k = tl.load(p_k, boundary_check=(0, 1)) + b_kz = tl.exp(b_k - b_zn[None, :]) + b_dk = tl.zeros([BC, BK], dtype=tl.float32) + for i_j in range(i_i + 1, NC): + p_q = tl.make_block_ptr(q + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_j * BC, i_k * BK), (BC, BK), (1, 0)) + p_z = tl.make_block_ptr(z + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_j * BC, i_k * BK), (BC, BK), (1, 0)) + p_dA = tl.make_block_ptr(dA + i_bh * T * BT, (T, BT), (BT, 1), (i_t * BT + i_j * BC, i_i * BC), (BC, BC), (1, 0)) + # [BC, BK] + b_q = tl.load(p_q, boundary_check=(0, 1)) + b_z = tl.load(p_z, boundary_check=(0, 1)) + b_qz = (b_q * tl.exp(b_zn[None, :] - b_z)).to(b_q.dtype) + # [BC, BC] + b_dA = tl.load(p_dA, boundary_check=(0, 1)) + # [BC, BK] + b_dk += tl.dot(tl.trans(b_dA), b_qz, allow_tf32=False) + b_dk *= b_kz + + o_dA = i_bh * T * BT + (i_t * BT + i_i * BC) * BT + i_i * BC + tl.arange(0, BC) + for j in range(0, BC): + p_qj = tl.make_block_ptr(q + i_bh * s_k_h, (T * K,), (1,), ((i_t * BT + i_i * BC + j) * K + i_k * BK,), (BK,), (0,)) + p_zj = tl.make_block_ptr(z + i_bh * s_k_h, (T * K,), (1,), ((i_t * BT + i_i * BC + j) * K + i_k * BK,), (BK,), (0,)) + # [BC,] + b_dA = tl.load(dA + o_dA + j * BT, mask=(i_t * BT + i_i * BC + j < T), other=0) + # [BK,] + b_qj = tl.load(p_qj, boundary_check=(0,)).to(tl.float32) + b_zj = tl.load(p_zj, boundary_check=(0,)).to(tl.float32) + # [BC, BK] + m_i = o_i[:, None] <= j + b_dk += tl.where(m_i, b_dA[:, None] * 
b_qj[None, :] * tl.exp(b_k - b_zj[None, :]), 0.) + p_dk = tl.make_block_ptr(dk + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0)) + tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), boundary_check=(0, 1)) + + +@triton.jit +def chunk_abc_bwd_kernel_intra_K( + v, + z, + do, + dA, + s_v_h, + s_v_t, + s_v_d, + scale, + T: tl.constexpr, + V: tl.constexpr, + BT: tl.constexpr, + BC: tl.constexpr, + BV: tl.constexpr, + NC: tl.constexpr +): + i_v, i_c, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + i_t, i_i, i_j = i_c // (NC * NC), (i_c % (NC * NC)) // NC, (i_c % (NC * NC)) % NC + n_bh = tl.num_programs(2) + + if i_i > i_j: + p_v = tl.make_block_ptr(v + i_bh * s_v_h, (V, T), (s_v_d, s_v_t), (i_v * BV, i_t * BT + i_j * BC), (BV, BC), (0, 1)) + p_z = tl.make_block_ptr(z + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT + i_i * BC, i_v * BV), (BC, BV), (1, 0)) + p_zn = tl.make_block_ptr(z + i_bh * s_v_h, (T * V,), (s_v_d,), ((i_t * BT + i_i * BC) * V + i_v * BV,), (BV,), (0,)) + p_do = tl.make_block_ptr(do + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT + i_i * BC, i_v * BV), (BC, BV), (1, 0)) + p_dA = tl.make_block_ptr(dA+(i_bh+i_v*n_bh)*T*BT, (T, BT), (BT, 1), (i_t * BT + i_i * BC, i_j * BC), (BC, BC), (1, 0)) + # [BV,] + b_zn = tl.load(p_zn, boundary_check=(0,)) + # [BC, BV] + b_z = tl.load(p_z, boundary_check=(0, 1)) + b_do = tl.load(p_do, boundary_check=(0, 1)) + b_do = (b_do * tl.exp(b_zn[None, :] - b_z) * scale).to(b_do.dtype) + # [BV, BC] + b_v = tl.load(p_v, boundary_check=(0, 1)) + b_v = tl.exp(b_v - b_zn[:, None]).to(b_v.dtype) + # [BC, BC] + b_dA = tl.dot(b_do, b_v, allow_tf32=False) + tl.store(p_dA, b_dA.to(dA.dtype.element_ty), boundary_check=(0, 1)) + elif i_i == i_j: + p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T * V,), (s_v_d,), ((i_t * BT + i_j * BC) * V + i_v * BV,), (BV,), (0,)) + p_z = tl.make_block_ptr(z + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT + i_i * BC, i_v * BV), (BC, BV), (1, 0)) + p_do = 
tl.make_block_ptr(do + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT + i_i * BC, i_v * BV), (BC, BV), (1, 0)) + # [BC, BV] + b_z = tl.load(p_z, boundary_check=(0, 1)) + b_do = tl.load(p_do, boundary_check=(0, 1)) * scale + + o_i = tl.arange(0, BC) + o_A = (i_bh + i_v * n_bh) * T * BT + (i_t * BT + i_i * BC + tl.arange(0, BC)) * BT + i_j * BC + m_A = (i_t * BT + i_i * BC + tl.arange(0, BC)) < T + for j in range(0, BC): + # [BV,] + b_v = tl.load(p_v, boundary_check=(0,)).to(tl.float32) + # [BC,] + b_dA = tl.sum(b_do * tl.exp(b_v[None, :] - b_z), 1) + b_dA = tl.where(o_i >= j, b_dA, 0) + tl.store(dA + o_A + j, b_dA.to(b_do.dtype), mask=m_A) + + p_v = tl.advance(p_v, (V,)) + + +@triton.jit +def chunk_abc_bwd_kernel_K( + q, + k, + v, + z, + h, + A, + do, + dh, + dq, + dk, + dv, + dA, + s_k_h, + s_k_t, + s_k_d, + s_v_h, + s_v_t, + s_v_d, + s_h_h, + s_h_t, + s_h_d, + scale, + T: tl.constexpr, + K: tl.constexpr, + V: tl.constexpr, + BT: tl.constexpr, + BK: tl.constexpr, + BV: tl.constexpr +): + i_k, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + i_p = tl.maximum(i_t * BT - 1, 0) + n_bh = tl.num_programs(2) + + o_i = tl.arange(0, BT) + m_s = o_i[:, None] >= o_i[None, :] + + p_q = tl.make_block_ptr(q + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + p_A = tl.make_block_ptr(A + (i_k*n_bh+i_bh) * T * BT, (T, BT, ), (BT, 1), (i_t * BT, 0), (BT, BT), (1, 0)) + + # [BT, BK] + b_q = tl.load(p_q, boundary_check=(0, 1)) + b_k = tl.load(p_k, boundary_check=(0, 1)) + # [BT, BT] + b_A = tl.dot((b_q * scale).to(b_q.dtype), tl.trans(b_k), allow_tf32=False) + b_A = tl.where(m_s, b_A, 0.) 
+ tl.store(p_A, b_A.to(p_A.dtype.element_ty), boundary_check=(0, 1)) + + b_dq = tl.zeros([BT, BK], dtype=tl.float32) + b_dk = tl.zeros([BT, BK], dtype=tl.float32) + for i_v in range(tl.cdiv(V, BV)): + p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + p_z = tl.make_block_ptr(z + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + p_zp = tl.make_block_ptr(z + i_bh * s_v_h, (T * V,), (s_v_d,), (i_p * V + i_v * BV,), (BV,), (0,)) + p_zc = tl.make_block_ptr(z + i_bh * s_v_h, (T * V,), (s_v_d,), ((i_t * BT + BT - 1) * V + i_v * BV,), (BV,), (0,)) + p_h = tl.make_block_ptr(h + i_bh * s_h_h + i_t * K*V, (V, K), (s_h_d, s_h_t), (i_v * BV, i_k * BK), (BV, BK), (0, 1)) + + p_do = tl.make_block_ptr(do + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + p_dh = tl.make_block_ptr(dh + i_bh * s_h_h + i_t * K*V, (K, V), (s_h_t, s_h_d), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + p_dv = tl.make_block_ptr(dv + (i_k*n_bh+i_bh) * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + + # [BV,] + b_zp = tl.load(p_zp, boundary_check=(0,)) + b_zc = tl.load(p_zc, boundary_check=(0,)) + # [BT, BV] + b_v = tl.load(p_v, boundary_check=(0, 1)) + b_v = tl.exp(b_v - b_zc[None, :]).to(b_v.dtype) + b_z = tl.load(p_z, boundary_check=(0, 1)) + b_z = tl.exp(b_zp[None, :] - b_z) + # [BV, BK] + b_h = tl.load(p_h, boundary_check=(0, 1)) + # [BT, BV] + b_do = tl.load(p_do, boundary_check=(0, 1)) + b_do = (b_do * b_z * scale).to(b_do.dtype) + # [BK, BV] + b_dh = tl.load(p_dh, boundary_check=(0, 1)) + + # [BT, BK] + b_dq += tl.dot(b_do, b_h, allow_tf32=False) + b_dk += tl.dot(b_v, tl.trans(b_dh), allow_tf32=False) + # [BT, BV] + b_dv = b_v * tl.dot(b_k, b_dh, allow_tf32=False) + tl.store(p_dv, b_dv.to(p_dv.dtype.element_ty), boundary_check=(0, 1)) + p_dA = tl.make_block_ptr(dA + i_bh * T * BT, (T, BT, ), (BT, 1), (i_t * BT, 0), (BT, BT), (1, 0)) + # [BT, BT] + b_dA = 
tl.load(p_dA, boundary_check=(0, 1)) + # [BT, BK] + b_dq += tl.dot(b_dA, b_k, allow_tf32=False) + b_dk += tl.dot(tl.trans(b_dA).to(b_k.dtype), b_q, allow_tf32=False) + + p_dq = tl.make_block_ptr(dq + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + p_dk = tl.make_block_ptr(dk + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + tl.store(p_dq, b_dq.to(p_dq.dtype.element_ty), boundary_check=(0, 1)) + tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), boundary_check=(0, 1)) + + +@triton.jit +def chunk_abc_bwd_kernel_intra_KV( + v, + z, + A, + do, + dv, + s_v_h, + s_v_t, + s_v_d, + T: tl.constexpr, + V: tl.constexpr, + BT: tl.constexpr, + BC: tl.constexpr, + BV: tl.constexpr, + NC: tl.constexpr +): + i_v, i_c, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + i_t, i_i = i_c // NC, i_c % NC + + p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT + i_i * BC, i_v * BV), (BC, BV), (1, 0)) + p_zn = tl.make_block_ptr(z + i_bh * s_v_h, (T*V,), (s_v_d,), ((i_t * BT + i_i * BC + BC - 1) * V + i_v * BV,), (BV,), (0,)) + # [BV,] + b_zn = tl.load(p_zn, boundary_check=(0,)) + # [BC, BV] + b_v = tl.load(p_v, boundary_check=(0, 1)) + b_dv = tl.zeros([BC, BV], dtype=tl.float32) + for i_j in range(i_i + 1, NC): + p_z = tl.make_block_ptr(z + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT + i_j * BC, i_v * BV), (BC, BV), (1, 0)) + p_A = tl.make_block_ptr(A + i_bh * T * BT, (BT, T), (1, BT), (i_i * BC, i_t * BT + i_j * BC), (BC, BC), (0, 1)) + p_do = tl.make_block_ptr(do + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT + i_j * BC, i_v * BV), (BC, BV), (1, 0)) + # [BC, BV] + b_z = tl.load(p_z, boundary_check=(0, 1)) + b_do = tl.load(p_do, boundary_check=(0, 1)) + b_do = (b_do * tl.exp(b_zn[None, :] - b_z)).to(b_do.dtype) + # [BC, BC] + b_A = tl.load(p_A, boundary_check=(0, 1)) + b_dv += tl.dot(b_A, b_do, allow_tf32=False) + b_dv *= tl.exp(b_v - b_zn[None, :]) + + o_i = tl.arange(0, BC) + for j in 
range(0, BC): + p_z = tl.make_block_ptr(z + i_bh * s_v_h, (T * V,), (1,), ((i_t * BT + i_i * BC + j) * V + i_v * BV,), (BV,), (0,)) + p_A = tl.make_block_ptr(A + i_bh * T * BT, (T * BT,), (1,), ((i_t * BT + i_i * BC + j) * BT + i_i * BC,), (BC,), (0,)) + p_do = tl.make_block_ptr(do + i_bh * s_v_h, (T * V,), (1,), ((i_t * BT + i_i * BC + j) * V + i_v * BV,), (BV,), (0,)) + # [BC,] + b_A = tl.load(p_A, boundary_check=(0,)) + # [BV,] + b_z = tl.load(p_z, boundary_check=(0,)) + b_do = tl.load(p_do, boundary_check=(0,)) + # [BC, BV] + m_i = o_i[:, None] <= j + b_dv += tl.where(m_i, tl.exp(b_v - b_z[None, :]) * b_A[:, None] * b_do[None, :], 0.) + p_dv = tl.make_block_ptr(dv + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT + i_i * BC, i_v * BV), (BC, BV), (1, 0)) + tl.store(p_dv, b_dv.to(p_dv.dtype.element_ty), boundary_check=(0, 1)) + + +@triton.jit +def chunk_abc_bwd_kernel_rcum_inter( + s, + z, + ss, + doo, + s_s_h, + s_s_t, + s_s_d, + T: tl.constexpr, + S: tl.constexpr, + BT: tl.constexpr, + BS: tl.constexpr, + NT: tl.constexpr +): + i_m, i_bh = tl.program_id(0), tl.program_id(1) + + b_sp = tl.zeros([BS,], dtype=tl.float32) + b_zp = tl.full([BS,], float('inf'), dtype=tl.float32) + for i_t in range(NT - 1, -1, -1): + p_s = tl.make_block_ptr(s + i_bh * s_s_h, (T, S), (s_s_t, s_s_d), (i_t * BT, i_m * BS), (BT, BS), (1, 0)) + p_z = tl.make_block_ptr(z + i_bh * s_s_h, (T, S), (s_s_t, s_s_d), (i_t * BT, i_m * BS), (BT, BS), (1, 0)) + p_zc = tl.make_block_ptr(z + i_bh * s_s_h, (T * S,), (s_s_d,), ((i_t * BT) * S + i_m * BS,), (BS,), (0,)) + p_ss = tl.make_block_ptr(ss + i_bh * s_s_h, (T, S), (s_s_t, s_s_d), (i_t * BT, i_m * BS), (BT, BS), (1, 0)) + p_doo = tl.make_block_ptr(doo + i_bh * s_s_h, (T, S), (s_s_t, s_s_d), (i_t * BT, i_m * BS), (BT, BS), (1, 0)) + # [BS,] + b_zc = tl.load(p_zc, boundary_check=(0,)) + # [BT, BS] + b_s = tl.load(p_s, boundary_check=(0, 1)) + b_z = tl.load(p_z, boundary_check=(0, 1)) + b_ss = tl.load(p_ss, boundary_check=(0, 1)) + + b_doo = 
tl.exp(b_s - b_zp[None, :]) * b_sp[None, :] + tl.store(p_doo, b_doo.to(p_doo.dtype.element_ty), boundary_check=(0, 1)) + # [BS,] + b_sp = b_sp * tl.exp(b_zc - b_zp) + tl.sum(b_ss * tl.exp(b_zc[None, :] - b_z), 0) + b_zp = b_zc + + +@triton.jit +def chunk_abc_bwd_kernel_rcum_intra( + s, + z, + ss, + doo, + s_s_h, + s_s_t, + s_s_d, + T: tl.constexpr, + S: tl.constexpr, + BT: tl.constexpr, + BC: tl.constexpr, + BS: tl.constexpr, + NC: tl.constexpr +): + i_s, i_c, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + i_t, i_i = i_c // NC, i_c % NC + + o_i = tl.arange(0, BC) + m_o = tl.full([BC, BC], 1., dtype=tl.float32) + + p_s = tl.make_block_ptr(s + i_bh * s_s_h, (T, S), (s_s_t, s_s_d), (i_t * BT + i_i * BC, i_s * BS), (BC, BS), (1, 0)) + p_zn = tl.make_block_ptr(z + i_bh * s_s_h, (T*S,), (s_s_d,), ((i_t * BT + i_i * BC + BC - 1) * S + i_s * BS,), (BS,), (0,)) + p_doo = tl.make_block_ptr(doo + i_bh * s_s_h, (T, S), (s_s_t, s_s_d), (i_t * BT + i_i * BC, i_s * BS), (BC, BS), (1, 0)) + # [BC, BS] + b_s = tl.load(p_s, boundary_check=(0, 1)) + # [BS,] + b_zn = tl.load(p_zn, boundary_check=(0,)) + + b_doo = tl.zeros([BC, BS], dtype=tl.float32) + for i_j in range(i_i + 1, NC): + p_z = tl.make_block_ptr(z + i_bh * s_s_h, (T, S), (s_s_t, s_s_d), (i_t * BT + i_j * BC, i_s * BS), (BC, BS), (1, 0)) + p_ss = tl.make_block_ptr(ss + i_bh * s_s_h, (T, S), (s_s_t, s_s_d), (i_t * BT + i_j * BC, i_s * BS), (BC, BS), (1, 0)) + # [BC, BS] + b_z = tl.load(p_z, boundary_check=(0, 1)) + b_ss = tl.load(p_ss, boundary_check=(0, 1)) + # [BC, BS] + b_doo += b_ss * tl.exp(b_zn[None, :] - b_z) + b_doo = tl.exp(b_s - b_zn[None, :]) * tl.dot(m_o.to(b_s.dtype), b_doo.to(b_s.dtype), allow_tf32=False) + + for j in range(0, BC): + p_z = tl.make_block_ptr(z + i_bh * s_s_h, (T * S,), (1,), ((i_t * BT + i_i * BC + j) * S + i_s * BS,), (BS,), (0,)) + p_ss = tl.make_block_ptr(ss + i_bh * s_s_h, (T * S,), (1,), ((i_t * BT + i_i * BC + j) * S + i_s * BS,), (BS,), (0,)) + # [BS,] + b_z = tl.load(p_z, 
boundary_check=(0,)) + b_ss = tl.load(p_ss, boundary_check=(0,)) + # [BC, BS] + m_i = o_i[:, None] <= j + b_doo += tl.where(m_i, tl.exp(b_s - b_z[None, :]) * b_ss[None, :], 0.) + b_doo += tl.load(p_doo, boundary_check=(0, 1)) + tl.store(p_doo, b_doo.to(p_doo.dtype.element_ty), boundary_check=(0, 1)) + + +class ChunkABCFunction(torch.autograd.Function): + + @staticmethod + @contiguous + def forward(ctx, q, k, v, s, initial_state, output_final_state): + B, H, T, K, V, M = *q.shape, v.shape[-1], s.shape[-1] + BT, BC = 64, 16 + BK = min(64, triton.next_power_of_2(K)) + BV = min(64, triton.next_power_of_2(V)) + BM = min(64, triton.next_power_of_2(M)) + NT, NC = triton.cdiv(T, BT), triton.cdiv(BT, BC) + NV, NM = triton.cdiv(V, BV), triton.cdiv(M, BM) + num_warps = 4 if BK == 64 else 2 + num_stages = 1 + + def fwd_pre(s, B, H, T, S): + # keep cumulative normalizer in fp32 + z = torch.empty_like(s, dtype=torch.float) + grid = (B * H,) + logcumsumexp_fwd_kernel[grid]( + s, z, + s.stride(1), s.stride(2), s.stride(3), + T=T, S=S + ) + return z + + def fwd_inner(q, k, v, z, B, H, T, K, V, BT, BK, BV, NT, normk=False, h0=None, ht=None): + NK, NV = triton.cdiv(K, BK), triton.cdiv(V, BV) + h = q.new_empty(B, H, NT * K, V) + grid = (NV, NK, B * H) + chunk_abc_fwd_kernel_h[grid]( + k, v, z, h, h0, ht, + k.stride(1), k.stride(2), k.stride(3), + v.stride(1), v.stride(2), v.stride(3), + h.stride(1), h.stride(2), h.stride(3), + T=T, K=K, V=V, BT=BT, BK=BK, BV=BV, NT=NT, + NORMK=normk, + USE_INITIAL_STATE=h0 is not None, + STORE_FINAL_STATE=ht is not None, + num_warps=num_warps, + num_stages=num_stages + ) + return h + + final_state = None + if output_final_state: + final_state = (q.new_empty(B, H, K, M, dtype=torch.float), + q.new_empty(B, H, M, V, dtype=torch.float)) + + z = fwd_pre(s, B, H, T, M) + scale = K ** -0.5 + hk = fwd_inner( + q=q, k=k, v=s, z=z, + B=B, H=H, T=T, K=K, V=M, BT=BT, BK=BK, BV=BM, NT=NT, + normk=False, + h0=initial_state[0] if initial_state is not None else
None, + ht=final_state[0] if final_state is not None else None + ) + ok1 = torch.empty_like(s) + Ak = q.new_empty(B, H, T, BT) + grid = (NM, NT, B * H) + chunk_abc_fwd_kernel_K[grid]( + q, k, z, hk, ok1, Ak, + k.stride(1), k.stride(2), k.stride(3), + s.stride(1), s.stride(2), s.stride(3), + hk.stride(1), hk.stride(2), hk.stride(3), + scale=scale, + T=T, K=K, V=M, BT=BT, BK=BK, BV=BM, + num_warps=num_warps, + num_stages=num_stages + ) + ok0 = torch.empty_like(s) + grid = (NM, NT * NC, B * H) + chunk_abc_fwd_kernel_intra_K[grid]( + s, z, ok0, Ak, + s.stride(1), s.stride(2), s.stride(3), + T=T, V=M, BT=BT, BC=BC, BV=BM, NC=NC, + num_warps=2, + num_stages=num_stages + ) + ok = ok0.add_(ok1) + + scale = 1. + # equivalent to: + # p = ok.softmax(-1, torch.float) + # p is kept in fp32 for safe softmax backward + p = torch.empty_like(ok, dtype=torch.float) + grid = (NT, B * H) + softmax_fwd_kernel[grid]( + ok, p, + s.stride(1), s.stride(2), s.stride(3), + T=T, S=M, BT=BT + ) + qv = p.to(q.dtype) + + scale = 1. 
+ hv = fwd_inner( + q=qv, k=s, v=v, z=z, + B=B, H=H, T=T, K=M, V=V, BT=BT, BK=BM, BV=BV, NT=NT, + normk=True, + h0=initial_state[1] if initial_state is not None else None, + ht=final_state[1] if final_state is not None else None + ) + Av = q.new_zeros(NM, B, H, T, BT) + grid = (NM, NT * NC * NC, B * H) + chunk_abc_fwd_kernel_intra_V[grid]( + qv, s, z, Av, + s.stride(1), s.stride(2), s.stride(3), + scale=scale, + T=T, K=M, BT=BT, BC=BC, BK=BM, NC=NC, + num_warps=2, + num_stages=num_stages + ) + Av = Av.sum(0) + ov = torch.empty_like(v) + grid = (NV, NT, B * H) + chunk_abc_fwd_kernel_V[grid]( + qv, v, z, hv, ov, Av, + s.stride(1), s.stride(2), s.stride(3), + v.stride(1), v.stride(2), v.stride(3), + hv.stride(1), hv.stride(2), hv.stride(3), + scale=scale, + T=T, K=M, V=V, BT=BT, BK=BM, BV=BV, + num_warps=num_warps, + num_stages=num_stages + ) + ctx.save_for_backward(q, k, v, s, z, ok, p, hk, hv, Av) + ctx.BT = BT + return ov, final_state + + @staticmethod + @contiguous + def backward(ctx, dov, dht=None): + q, k, v, s, z, ok, p, hk, hv, Av = ctx.saved_tensors + B, H, T, K, V, M = *q.shape, v.shape[-1], s.shape[-1] + BT, BC = ctx.BT, 16 + BK = min(64, triton.next_power_of_2(K)) + BV = min(64, triton.next_power_of_2(V)) + BM = min(64, triton.next_power_of_2(M)) + NT, NC = triton.cdiv(T, BT), triton.cdiv(BT, BC) + NK, NM = triton.cdiv(K, BK), triton.cdiv(M, BM) + num_warps = 4 if BK == 64 else 2 + num_stages = 1 + + def bwd_inner(q, z, do, B, H, T, K, V, BT, BK, BV, NT, scale, normk=False): + NK, NV = triton.cdiv(K, BK), triton.cdiv(V, BV) + dh = q.new_empty(B, H, NT * K, V) + grid = (NK, NV, B * H) + chunk_abc_bwd_kernel_dh[grid]( + q, z, do, dh, + q.stride(1), q.stride(2), q.stride(3), + do.stride(1), do.stride(2), do.stride(3), + dh.stride(1), dh.stride(2), dh.stride(3), + scale=scale, + T=T, K=K, V=V, BT=BT, BK=BK, BV=BV, NT=NT, + NORMK=normk, + num_warps=num_warps, + num_stages=num_stages + ) + return dh + + def bwd_post(s, z, ss, B, H, T, S, BT, BC, BS, NT, NC, NS): 
+ doo = torch.empty_like(s) + grid = (NS, B * H) + chunk_abc_bwd_kernel_rcum_inter[grid]( + s, z, ss, doo, + s.stride(1), s.stride(2), s.stride(3), + T=T, S=S, BT=BT, BS=BS, NT=NT, + num_warps=num_warps, + num_stages=num_stages + ) + grid = (NS, NT * NC, B * H) + chunk_abc_bwd_kernel_rcum_intra[grid]( + s, z, ss, doo, + s.stride(1), s.stride(2), s.stride(3), + T=T, S=S, BT=BT, BC=BC, BS=BS, NC=NC, + num_warps=num_warps, + num_stages=num_stages + ) + return doo + + scale = 1. + qv = p.to(q.dtype) + dhv = bwd_inner( + qv, z, dov, + B=B, H=H, T=T, K=M, V=V, BT=BT, BK=BM, BV=BV, NT=NT, + scale=scale, + normk=True + ) + dp1 = torch.empty_like(p) + dsv1 = torch.empty_like(s, dtype=torch.float) + dv = v.new_empty(NM, *v.shape) + dAv = q.new_zeros(B, H, T, BT) + grid = (NM, NT, B * H) + chunk_abc_bwd_kernel_V[grid]( + s, v, z, hv, Av, dov, dhv, dp1, dsv1, dv, dAv, + s.stride(1), s.stride(2), s.stride(3), + v.stride(1), v.stride(2), v.stride(3), + hv.stride(1), hv.stride(2), hv.stride(3), + scale=scale, + T=T, K=M, V=V, BT=BT, BK=BM, BV=BV, + num_warps=num_warps, + num_stages=num_stages + ) + dv = dv.sum(0) + dp0 = torch.empty_like(p) + dsv0 = s.new_zeros(s.shape, dtype=torch.float) + grid = (NM, NT * NC, B * H) + chunk_abc_bwd_kernel_intra_V[grid]( + qv, s, z, dAv, dp0, dsv0, + s.stride(1), s.stride(2), s.stride(3), + T=T, K=M, BT=BT, BC=BC, BK=BM, NC=NC, + num_warps=2, + num_stages=num_stages + ) + dp = dp1.add_(dp0) + dsv = dsv1.add_(dsv0) + + # softmax gradient, equivalent to: + # dok = p * (dp - (p * dp).sum(-1, True)) + dok = torch.empty_like(ok) + grid = (NT, B * H) + softmax_bwd_kernel[grid]( + p, dp, dok, + s.stride(1), s.stride(2), s.stride(3), + T=T, S=M, BT=BT + ) + + scale = K ** -0.5 + dhk = bwd_inner( + q, z, dok, + B=B, H=H, T=T, K=K, V=M, BT=BT, BK=BK, BV=BM, NT=NT, + scale=scale, + normk=False + ) + dAk = q.new_zeros(NM, B, H, T, BT) + grid = (NM, NT * NC * NC, B * H) + chunk_abc_bwd_kernel_intra_K[grid]( + s, z, dok, dAk, + s.stride(1), s.stride(2), 
s.stride(3), + scale=scale, + T=T, V=M, BT=BT, BC=BC, BV=BM, NC=NC, + num_warps=2, + num_stages=num_stages + ) + dAk = dAk.sum(0) + + Ak = q.new_zeros(NK, B, H, T, BT) + dq = torch.empty_like(q) + dk = torch.empty_like(k) + dsk1 = s.new_empty(NK, *s.shape, dtype=torch.float) + grid = (NK, NT, B * H) + chunk_abc_bwd_kernel_K[grid]( + q, k, s, z, hk, Ak, dok, dhk, dq, dk, dsk1, dAk, + q.stride(1), q.stride(2), q.stride(3), + s.stride(1), s.stride(2), s.stride(3), + hk.stride(1), hk.stride(2), hk.stride(3), + scale=scale, + T=T, K=K, V=M, BT=BT, BK=BK, BV=BM, + num_warps=num_warps, + num_stages=num_stages + ) + Ak = Ak.sum(0) + dsk1 = dsk1.sum(0) + dsk0 = torch.empty_like(s, dtype=torch.float) + grid = (NM, NT * NC, B * H) + chunk_abc_bwd_kernel_intra_KV[grid]( + s, z, Ak, dok, dsk0, + s.stride(1), s.stride(2), s.stride(3), + T=T, V=M, BT=BT, BC=BC, BV=BM, NC=NC, + num_warps=2, + num_stages=num_stages + ) + ds = dsv.add_(dsk1.add_(dsk0)) + ds -= bwd_post(s, z, ok * dok + p * dp, B, H, T, M, BT, BC, BM, NT, NC, NM) + ds = ds.to(s.dtype) + return dq, dk, dv, ds, None, None + + +def chunk_abc( + q: torch.Tensor, + k: torch.Tensor, + v: torch.Tensor, + s: torch.Tensor, + initial_state: Optional[Tuple[torch.Tensor]] = None, + output_final_state: Optional[bool] = False +) -> Tuple[torch.Tensor, Tuple[torch.Tensor]]: + if initial_state is not None: + initial_state = tuple(i.detach() for i in initial_state) + ov, final_state = ChunkABCFunction.apply(q, k, v, s, initial_state, output_final_state) + return ov, final_state diff --git a/fla/ops/abc/chunk_gate.py b/fla/ops/abc/chunk_gate.py new file mode 100644 index 0000000000000000000000000000000000000000..3cb9801394e28f493f21d35e1c6f9cd45f848a52 --- /dev/null +++ b/fla/ops/abc/chunk_gate.py @@ -0,0 +1,1287 @@ +# -*- coding: utf-8 -*- + +# Copyright (c) 2023-2024, Yu Zhang, Songlin Yang + +from typing import Optional, Tuple + +import torch +import triton +import triton.language as tl + +from fla.ops.utils import 
(chunk_reversed_cumsum_fwd, softmax_bwd_kernel, + softmax_fwd_kernel) +from fla.utils import contiguous + + +@triton.autotune( + configs=[ + triton.Config({'BS': 16}, num_warps=2), + triton.Config({'BS': 16}, num_warps=4), + triton.Config({'BS': 16}, num_warps=8), + triton.Config({'BS': 32}, num_warps=2), + triton.Config({'BS': 32}, num_warps=4), + triton.Config({'BS': 32}, num_warps=8), + triton.Config({'BS': 64}, num_warps=2), + triton.Config({'BS': 64}, num_warps=4), + triton.Config({'BS': 64}, num_warps=8), + ], + key=['S'] +) +@triton.jit +def chunk_gated_abc_fwd_kernel_cum( + s, + o, + s_s_h, + s_s_t, + s_s_d, + T: tl.constexpr, + S: tl.constexpr, + BT: tl.constexpr, + BS: tl.constexpr, +): + i_s, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + o_i = tl.arange(0, BT) + m_s = tl.where(o_i[:, None] >= o_i[None, :], 1., 0.).to(tl.float32) + + p_s = tl.make_block_ptr(s + i_bh * s_s_h, (T, S), (s_s_t, s_s_d), (i_t * BT, i_s * BS), (BT, BS), (1, 0)) + p_o = tl.make_block_ptr(o + i_bh * s_s_h, (T, S), (s_s_t, s_s_d), (i_t * BT, i_s * BS), (BT, BS), (1, 0)) + # [BT, BS] + b_s = tl.load(p_s, boundary_check=(0, 1)).to(tl.float32) + b_o = tl.dot(m_s, b_s, allow_tf32=False) + tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0, 1)) + + +@triton.jit +def chunk_gated_abc_fwd_kernel_h( + k, + v, + g, + h, + h0, + ht, + s_k_h, + s_k_t, + s_k_d, + s_v_h, + s_v_t, + s_v_d, + s_h_h, + s_h_t, + s_h_d, + T: tl.constexpr, + K: tl.constexpr, + V: tl.constexpr, + BT: tl.constexpr, + BK: tl.constexpr, + BV: tl.constexpr, + NT: tl.constexpr, + GATEK: tl.constexpr, + USE_INITIAL_STATE: tl.constexpr, + STORE_FINAL_STATE: tl.constexpr +): + i_v, i_k, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + + b_h = tl.zeros([BK, BV], dtype=tl.float32) + if USE_INITIAL_STATE: + p_h = tl.make_block_ptr(h0 + i_bh * K * V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + b_h += tl.load(p_h, boundary_check=(0, 1)).to(tl.float32) + for i_t in 
range(NT): + p_k = tl.make_block_ptr(k + i_bh * s_k_h, (K, T), (s_k_d, s_k_t), (i_k * BK, i_t * BT), (BK, BT), (0, 1)) + p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + p_h = tl.make_block_ptr(h + i_bh * s_h_h + i_t * K * V, (K, V), (s_h_t, s_h_d), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + + tl.store(p_h, b_h.to(p_h.dtype.element_ty), boundary_check=(0, 1)) + # [BK, BT] + b_k = tl.load(p_k, boundary_check=(0, 1)) + # [BT, BV] + b_v = tl.load(p_v, boundary_check=(0, 1)) + if GATEK: + p_g = tl.make_block_ptr(g + i_bh * s_k_h, (K, T), (s_k_d, s_k_t), (i_k * BK, i_t * BT), (BK, BT), (0, 1)) + p_gn = tl.make_block_ptr(g + i_bh * s_k_h, (T * K,), (s_k_d,), ((i_t * BT + BT - 1) * K + i_k * BK,), (BK,), (0,)) + # [BK,] + b_gn = tl.load(p_gn, boundary_check=(0,)) + # [BK, BV] + b_h *= tl.exp(b_gn)[:, None] + # [BK, BT] + b_g = tl.load(p_g, boundary_check=(0, 1)) + b_k = (b_k * tl.exp(b_gn[:, None] - b_g)).to(b_k.dtype) + else: + p_g = tl.make_block_ptr(g + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + p_gn = tl.make_block_ptr(g + i_bh * s_v_h, (T * V,), (s_v_d,), ((i_t * BT + BT - 1) * V + i_v * BV,), (BV,), (0,)) + # [BV,] + b_gn = tl.load(p_gn, boundary_check=(0,)) + # [BK, BV] + b_h *= tl.exp(b_gn)[None, :] + # [BT, BV] + b_g = tl.load(p_g, boundary_check=(0, 1)) + b_v = (b_v * tl.exp(b_gn[None, :] - b_g)).to(b_v.dtype) + # [BK, BV] + b_h += tl.dot(b_k, b_v, allow_tf32=False) + + if STORE_FINAL_STATE: + p_h = tl.make_block_ptr(ht + i_bh * K * V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + tl.store(p_h, b_h.to(p_h.dtype.element_ty), boundary_check=(0, 1)) + + +@triton.jit +def chunk_gated_abc_fwd_kernel_intra_K( + v, + g, + o, + A, + s_v_h, + s_v_t, + s_v_d, + T: tl.constexpr, + V: tl.constexpr, + BT: tl.constexpr, + BC: tl.constexpr, + BV: tl.constexpr, + NC: tl.constexpr +): + i_v, i_c, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + i_t, i_i = i_c 
// NC, i_c % NC + + p_g = tl.make_block_ptr(g + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT + i_i * BC, i_v * BV), (BC, BV), (1, 0)) + p_gn = tl.make_block_ptr(g + i_bh * s_v_h, (T * V,), (s_v_d,), ((i_t * BT + i_i * BC) * V + i_v * BV,), (BV,), (0,)) + # [BV,] + b_gn = tl.load(p_gn, boundary_check=(0,)) + # [BC, BV] + b_o = tl.zeros([BC, BV], dtype=tl.float32) + for i_j in range(0, i_i): + p_A = tl.make_block_ptr(A + i_bh * T * BT, (T, BT), (BT, 1), (i_t * BT + i_i * BC, i_j * BC), (BC, BC), (1, 0)) + p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT + i_j * BC, i_v * BV), (BC, BV), (1, 0)) + p_gv = tl.make_block_ptr(g + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT + i_j * BC, i_v * BV), (BC, BV), (1, 0)) + # [BC, BV] + b_v = tl.load(p_v, boundary_check=(0, 1)) + b_gv = tl.load(p_gv, boundary_check=(0, 1)) + b_vg = (b_v * tl.exp(b_gn[None, :] - b_gv)).to(b_v.dtype) + # [BC, BC] + b_A = tl.load(p_A, boundary_check=(0, 1)) + b_o += tl.dot(b_A, b_vg, allow_tf32=False) + b_g = tl.load(p_g, boundary_check=(0, 1)) + b_o *= tl.exp(b_g - b_gn[None, :]) + + o_i = tl.arange(0, BC) + o_A = i_bh * T * BT + (i_t * BT + i_i * BC + tl.arange(0, BC)) * BT + i_i * BC + m_A = (i_t * BT + i_i * BC + tl.arange(0, BC)) < T + for j in range(0, BC): + p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T * V,), (1,), ((i_t * BT + i_i * BC + j) * V + i_v * BV,), (BV,), (0,)) + p_gv = tl.make_block_ptr(g + i_bh * s_v_h, (T * V,), (1,), ((i_t * BT + i_i * BC + j) * V + i_v * BV,), (BV,), (0,)) + # [BC,] + b_A = tl.load(A + o_A + j, mask=m_A, other=0) + # [BV,] + b_v = tl.load(p_v, boundary_check=(0,)).to(tl.float32) + b_gv = tl.load(p_gv, boundary_check=(0,)).to(tl.float32) + # [BC, BV] + b_vg = b_v[None, :] * tl.exp(b_g - b_gv[None, :]) + # avoid 0 * inf = inf + m_i = o_i[:, None] >= j + b_o += tl.where(m_i, b_A[:, None] * b_vg, 0.) 
+ p_o = tl.make_block_ptr(o + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT + i_i * BC, i_v * BV), (BC, BV), (1, 0)) + + b_o += tl.load(p_o, boundary_check=(0, 1)) + tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0, 1)) + + +@triton.jit +def chunk_gated_abc_fwd_kernel_K( + q, + k, + h, + g, + o, + A, + s_k_h, + s_k_t, + s_k_d, + s_v_h, + s_v_t, + s_v_d, + s_h_h, + s_h_t, + s_h_d, + scale, + T: tl.constexpr, + K: tl.constexpr, + V: tl.constexpr, + BT: tl.constexpr, + BK: tl.constexpr, + BV: tl.constexpr +): + i_v, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + + o_i = tl.arange(0, BT) + m_s = o_i[:, None] >= o_i[None, :] + + b_o = tl.zeros([BT, BV], dtype=tl.float32) + b_A = tl.zeros([BT, BT], dtype=tl.float32) + for i_k in range(tl.cdiv(K, BK)): + p_q = tl.make_block_ptr(q + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + p_k = tl.make_block_ptr(k + i_bh * s_k_h, (K, T), (s_k_d, s_k_t), (i_k * BK, i_t * BT), (BK, BT), (0, 1)) + p_h = tl.make_block_ptr(h + i_bh * s_h_h + i_t * K * V, (K, V), (s_h_t, s_h_d), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + + # [BT, BK] + b_q = tl.load(p_q, boundary_check=(0, 1)) + b_q = (b_q * scale).to(b_q.dtype) + # [BK, BT] + b_k = tl.load(p_k, boundary_check=(0, 1)) + # [BK, BV] + b_h = tl.load(p_h, boundary_check=(0, 1)) + # [BT, BV] + b_o += tl.dot(b_q, b_h, allow_tf32=False) + # [BT, BT] + b_A += tl.dot(b_q, b_k, allow_tf32=False) + p_g = tl.make_block_ptr(g + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + p_o = tl.make_block_ptr(o + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + # [BT, BV] + b_g = tl.load(p_g, boundary_check=(0, 1)) + b_o = b_o * tl.exp(b_g) + tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0, 1)) + + p_A = tl.make_block_ptr(A + i_bh * T * BT, (T, BT), (BT, 1), (i_t * BT, 0), (BT, BT), (1, 0)) + # [BT, BT] + b_A = tl.where(m_s, b_A, 0.) 
+ if i_v == 0: + tl.store(p_A, b_A.to(p_A.dtype.element_ty), boundary_check=(0, 1)) + + +@triton.jit +def chunk_gated_abc_fwd_kernel_intra_V( + q, + k, + g, + A, + s_k_h, + s_k_t, + s_k_d, + scale, + T: tl.constexpr, + K: tl.constexpr, + BT: tl.constexpr, + BC: tl.constexpr, + BK: tl.constexpr, + NC: tl.constexpr +): + i_k, i_c, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + i_t, i_i, i_j = i_c // (NC * NC), (i_c % (NC * NC)) // NC, (i_c % (NC * NC)) % NC + n_bh = tl.num_programs(2) + + if i_i > i_j: + p_q = tl.make_block_ptr(q + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0)) + p_g = tl.make_block_ptr(g + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0)) + p_k = tl.make_block_ptr(k + i_bh * s_k_h, (K, T), (s_k_d, s_k_t), (i_k * BK, i_t * BT + i_j * BC), (BK, BC), (0, 1)) + p_gk = tl.make_block_ptr(g + i_bh * s_k_h, (K, T), (s_k_d, s_k_t), (i_k * BK, i_t * BT + i_j * BC), (BK, BC), (0, 1)) + p_gn = tl.make_block_ptr(g + i_bh * s_k_h, (T * K,), (s_k_d,), ((i_t * BT + i_i * BC) * K + i_k * BK,), (BK,), (0,)) + p_A = tl.make_block_ptr(A + (i_k*n_bh+i_bh)*T*BT, (T, BT), (BT, 1), (i_t * BT + i_i * BC, i_j * BC), (BC, BC), (1, 0)) + # [BK,] + b_gn = tl.load(p_gn, boundary_check=(0,)) + # [BC, BK] + b_q = tl.load(p_q, boundary_check=(0, 1)) + b_g = tl.load(p_g, boundary_check=(0, 1)) + b_qg = (b_q * tl.exp(b_g - b_gn[None, :]) * scale).to(b_q.dtype) + # [BK, BC] + b_k = tl.load(p_k, boundary_check=(0, 1)) + b_gk = tl.load(p_gk, boundary_check=(0, 1)) + b_kg = (b_k * tl.exp(b_gn[:, None] - b_gk)).to(b_k.dtype) + # [BC, BC] + b_A = tl.dot(b_qg, b_kg, allow_tf32=False) + tl.store(p_A, b_A.to(A.dtype.element_ty), boundary_check=(0, 1)) + elif i_i == i_j: + p_q = tl.make_block_ptr(q + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0)) + p_g = tl.make_block_ptr(g + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), 
(1, 0)) + p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T * K,), (s_k_d,), ((i_t * BT + i_j * BC) * K + i_k * BK,), (BK,), (0,)) + p_gk = tl.make_block_ptr(g + i_bh * s_k_h, (T * K,), (s_k_d,), ((i_t * BT + i_j * BC) * K + i_k * BK,), (BK,), (0,)) + # [BC, BK] + b_q = tl.load(p_q, boundary_check=(0, 1)) + b_g = tl.load(p_g, boundary_check=(0, 1)) + + o_i = tl.arange(0, BC) + o_A = (i_bh + i_k * n_bh) * T * BT + (i_t * BT + i_i * BC + tl.arange(0, BC)) * BT + i_j * BC + m_A = (i_t * BT + i_i * BC + tl.arange(0, BC)) < T + for j in range(0, BC): + # [BK,] + b_k = tl.load(p_k, boundary_check=(0,)).to(tl.float32) + b_gk = tl.load(p_gk, boundary_check=(0,)).to(tl.float32) + # [BC,] + b_A = tl.sum(b_q * b_k[None, :] * tl.exp(b_g - b_gk[None, :]) * scale, 1) + b_A = tl.where(o_i >= j, b_A, 0.) + tl.store(A + o_A + j, b_A.to(b_q.dtype), mask=m_A) + + p_k = tl.advance(p_k, (K,)) + p_gk = tl.advance(p_gk, (K,)) + + +@triton.jit +def chunk_gated_abc_fwd_kernel_V( + q, + v, + g, + h, + o, + A, + s_k_h, + s_k_t, + s_k_d, + s_v_h, + s_v_t, + s_v_d, + s_h_h, + s_h_t, + s_h_d, + scale, + T: tl.constexpr, + K: tl.constexpr, + V: tl.constexpr, + BT: tl.constexpr, + BK: tl.constexpr, + BV: tl.constexpr +): + i_v, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + + b_o = tl.zeros([BT, BV], dtype=tl.float32) + for i_k in range(tl.cdiv(K, BK)): + p_q = tl.make_block_ptr(q + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + p_g = tl.make_block_ptr(g + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + p_h = tl.make_block_ptr(h + i_bh * s_h_h + i_t * K * V, (K, V), (s_h_t, s_h_d), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + + # [BT, BK] + b_q = tl.load(p_q, boundary_check=(0, 1)) + b_q = (b_q * scale).to(b_q.dtype) + # [BT, BK] + b_g = tl.load(p_g, boundary_check=(0, 1)) + # [BT, BK] + b_qg = (b_q * tl.exp(b_g)).to(b_q.dtype) + # [BK, BV] + b_h = tl.load(p_h, boundary_check=(0, 1)) + # works but dkw, owing to divine 
benevolence + # [BT, BV] + if i_k >= 0: + b_o += tl.dot(b_qg, b_h, allow_tf32=False) + p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + p_o = tl.make_block_ptr(o + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + p_A = tl.make_block_ptr(A + i_bh * T * BT, (T, BT), (BT, 1), (i_t * BT, 0), (BT, BT), (1, 0)) + # [BT, BV] + b_v = tl.load(p_v, boundary_check=(0, 1)) + # [BT, BT] + b_A = tl.load(p_A, boundary_check=(0, 1)) + b_o += tl.dot(b_A, b_v, allow_tf32=False) + tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0, 1)) + + +@triton.jit +def chunk_gated_abc_bwd_kernel_dh( + q, + g, + do, + dh, + s_k_h, + s_k_t, + s_k_d, + s_v_h, + s_v_t, + s_v_d, + s_h_h, + s_h_t, + s_h_d, + scale, + T: tl.constexpr, + K: tl.constexpr, + V: tl.constexpr, + BT: tl.constexpr, + BK: tl.constexpr, + BV: tl.constexpr, + NT: tl.constexpr, + GATEK: tl.constexpr +): + i_k, i_v, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + + b_dh = tl.zeros([BK, BV], dtype=tl.float32) + for i_t in range(NT - 1, -1, -1): + p_q = tl.make_block_ptr(q + i_bh * s_k_h, (K, T), (s_k_d, s_k_t), (i_k * BK, i_t * BT), (BK, BT), (0, 1)) + p_do = tl.make_block_ptr(do + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + p_dh = tl.make_block_ptr(dh + i_bh * s_h_h + i_t * K*V, (K, V), (s_h_t, s_h_d), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + + # [BK, BT] + b_q = tl.load(p_q, boundary_check=(0, 1)) + b_q = (b_q * scale).to(b_q.dtype) + # [BT, BV] + b_do = tl.load(p_do, boundary_check=(0, 1)) + + tl.store(p_dh, b_dh.to(p_dh.dtype.element_ty), boundary_check=(0, 1)) + if GATEK: + p_g = tl.make_block_ptr(g + i_bh * s_k_h, (K, T), (s_k_d, s_k_t), (i_k * BK, i_t * BT), (BK, BT), (0, 1)) + p_gn = tl.make_block_ptr(g + i_bh * s_k_h, (T * K,), (s_k_d,), ((i_t * BT + BT - 1) * K + i_k * BK,), (BK,), (0,)) + # [BK,] + b_gn = tl.load(p_gn, boundary_check=(0,)) + # [BK, BV] + b_dh *= 
tl.exp(b_gn)[:, None] + # [BK, BT] + b_g = tl.load(p_g, boundary_check=(0, 1)) + b_q = (b_q * tl.exp(b_g)).to(b_q.dtype) + else: + p_g = tl.make_block_ptr(g + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + p_gn = tl.make_block_ptr(g + i_bh * s_v_h, (T * V,), (s_v_d,), ((i_t * BT + BT - 1) * V + i_v * BV,), (BV,), (0,)) + # [BV,] + b_gn = tl.load(p_gn, boundary_check=(0,)) + # [BK, BV] + b_dh *= tl.exp(b_gn)[None, :] + # [BT, BV] + b_g = tl.load(p_g, boundary_check=(0, 1)) + b_do = (b_do * tl.exp(b_g)).to(b_do.dtype) + # [BK, BV] + b_dh += tl.dot(b_q, b_do, allow_tf32=False) + + +@triton.jit +def chunk_gated_abc_bwd_kernel_V( + k, + v, + h, + g, + A, + do, + dh, + dq, + dk, + dv, + dA, + s_k_h, + s_k_t, + s_k_d, + s_v_h, + s_v_t, + s_v_d, + s_h_h, + s_h_t, + s_h_d, + scale, + T: tl.constexpr, + K: tl.constexpr, + V: tl.constexpr, + BT: tl.constexpr, + BK: tl.constexpr, + BV: tl.constexpr +): + i_k, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + n_bh = tl.num_programs(2) + + p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + p_gk = tl.make_block_ptr(g + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + p_gn = tl.make_block_ptr(g + i_bh * s_k_h, (T * K,), (s_k_d,), ((i_t * BT + BT - 1) * K + i_k * BK,), (BK,), (0,)) + p_A = tl.make_block_ptr(A + i_bh * T * BT, (BT, T), (1, BT), (0, i_t * BT), (BT, BT), (0, 1)) + + # [BK,] + # [BT, BK] + b_k = tl.load(p_k, boundary_check=(0, 1)) + b_gk = tl.load(p_gk, boundary_check=(0, 1)) + b_gn = tl.exp(tl.load(p_gn, boundary_check=(0,))[None, :] - b_gk) + b_k = (b_k * b_gn).to(b_k.dtype) + # [BT, BT] + b_A = tl.load(p_A, boundary_check=(0, 1)) + + b_dq = tl.zeros([BT, BK], dtype=tl.float32) + b_dk = tl.zeros([BT, BK], dtype=tl.float32) + b_dA = tl.zeros([BT, BT], dtype=tl.float32) + for i_v in range(tl.cdiv(V, BV)): + p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, 
i_v * BV), (BT, BV), (1, 0)) + p_h = tl.make_block_ptr(h + i_bh * s_h_h + i_t * V * K, (V, K), (s_h_d, s_h_t), (i_v * BV, i_k * BK), (BV, BK), (0, 1)) + p_do = tl.make_block_ptr(do + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + p_dh = tl.make_block_ptr(dh + i_bh * s_h_h + i_t * K*V, (K, V), (s_h_t, s_h_d), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + p_dv = tl.make_block_ptr(dv + (i_k*n_bh+i_bh) * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + + # [BT, BV] + b_v = tl.load(p_v, boundary_check=(0, 1)) + # [BV, BK] + b_h = tl.load(p_h, boundary_check=(0, 1)) + # [BT, BV] + b_do = tl.load(p_do, boundary_check=(0, 1)) + # [BK, BV] + b_dh = tl.load(p_dh, boundary_check=(0, 1)) + + # [BT, BV] + b_dv = tl.dot(b_k, b_dh, allow_tf32=False) + if i_k == 0: + b_dv += tl.dot(b_A, b_do, allow_tf32=False) + b_do = (b_do * scale).to(b_do.dtype) + tl.store(p_dv, b_dv.to(p_dv.dtype.element_ty), boundary_check=(0, 1)) + # [BT, BT] + b_dA += tl.dot(b_do, tl.trans(b_v), allow_tf32=False) + # [BT, BK] + b_dq += tl.dot(b_do, b_h, allow_tf32=False) + # [BT, BK] + b_dk += tl.dot(b_v, tl.trans(b_dh), allow_tf32=False) + b_dq = b_dq * tl.exp(b_gk) + b_dk = b_dk * b_gn + + p_dq = tl.make_block_ptr(dq + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + p_dk = tl.make_block_ptr(dk + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + p_dA = tl.make_block_ptr(dA + i_bh * T * BT, (T, BT, ), (BT, 1), (i_t * BT, 0), (BT, BT), (1, 0)) + tl.store(p_dq, b_dq.to(p_dq.dtype.element_ty), boundary_check=(0, 1)) + tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), boundary_check=(0, 1)) + + o_i = tl.arange(0, BT) + m_s = o_i[:, None] >= o_i[None, :] + # [BT, BT] + b_dA = tl.where(m_s, b_dA, 0.).to(b_k.dtype) + if i_k == 0: + tl.store(p_dA, b_dA.to(p_dA.dtype.element_ty), boundary_check=(0, 1)) + + +@triton.jit +def chunk_gated_abc_bwd_kernel_intra_V( + q, + k, + g, + dA, + dq, + dk, + dg, + s_k_h, 
+ s_k_t, + s_k_d, + T: tl.constexpr, + K: tl.constexpr, + BT: tl.constexpr, + BC: tl.constexpr, + BK: tl.constexpr, + NC: tl.constexpr, + OVERWRITE: tl.constexpr +): + i_k, i_c, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + i_t, i_i = i_c // NC, i_c % NC + + p_g = tl.make_block_ptr(g + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0)) + p_gn = tl.make_block_ptr(g + i_bh * s_k_h, (T * K,), (s_k_d,), ((i_t * BT + i_i * BC) * K + i_k * BK,), (BK,), (0,)) + # [BK,] + b_gn = tl.load(p_gn, boundary_check=(0,)) + # [BC, BK] + b_g = tl.load(p_g, boundary_check=(0, 1)) + b_dq = tl.zeros([BC, BK], dtype=tl.float32) + for i_j in range(0, i_i): + p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_j * BC, i_k * BK), (BC, BK), (1, 0)) + p_gk = tl.make_block_ptr(g + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_j * BC, i_k * BK), (BC, BK), (1, 0)) + p_dA = tl.make_block_ptr(dA + i_bh * T * BT, (T, BT), (BT, 1), (i_t * BT + i_i * BC, i_j * BC), (BC, BC), (1, 0)) + # [BC, BK] + b_k = tl.load(p_k, boundary_check=(0, 1)) + b_gk = tl.load(p_gk, boundary_check=(0, 1)) + b_kg = (b_k * tl.exp(b_gn[None, :] - b_gk)).to(b_k.dtype) + # [BC, BC] + b_dA = tl.load(p_dA, boundary_check=(0, 1)) + # [BC, BK] + b_dq += tl.dot(b_dA, b_kg, allow_tf32=False) + b_dq *= tl.exp(b_g - b_gn[None, :]) + + o_i = tl.arange(0, BC) + o_dA = i_bh * T * BT + (i_t * BT + i_i * BC + tl.arange(0, BC)) * BT + i_i * BC + m_dA = (i_t * BT + i_i * BC + tl.arange(0, BC)) < T + for j in range(0, BC): + p_kj = tl.make_block_ptr(k + i_bh * s_k_h, (T * K,), (1,), ((i_t * BT + i_i*BC+j) * K + i_k * BK,), (BK,), (0,)) + p_gkj = tl.make_block_ptr(g + i_bh * s_k_h, (T * K,), (1,), ((i_t * BT + i_i*BC+j) * K + i_k * BK,), (BK,), (0,)) + # [BC,] + b_dA = tl.load(dA + o_dA + j, mask=m_dA, other=0) + # [BK,] + b_kj = tl.load(p_kj, boundary_check=(0,)).to(tl.float32) + b_gkj = tl.load(p_gkj, boundary_check=(0,)).to(tl.float32) + # [BC, BK] + 
m_i = o_i[:, None] >= j + # [BC, BK] + b_dq += tl.where(m_i, b_dA[:, None] * b_kj[None, :] * tl.exp(b_g - b_gkj[None, :]), 0.) + p_dq = tl.make_block_ptr(dq + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0)) + + b_dq = b_dq + tl.load(p_dq, boundary_check=(0, 1)) + tl.store(p_dq, b_dq.to(p_dq.dtype.element_ty), boundary_check=(0, 1)) + + tl.debug_barrier() + p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0)) + p_gk = tl.make_block_ptr(g + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0)) + p_gn = tl.make_block_ptr(g + i_bh * s_k_h, (T*K,), (s_k_d,), ((i_t * BT + i_i * BC + BC - 1) * K + i_k * BK,), (BK,), (0,)) + # [BK,] + b_gn = tl.load(p_gn, boundary_check=(0,)) + # [BC, BK] + b_k = tl.load(p_k, boundary_check=(0, 1)) + b_gk = tl.load(p_gk, boundary_check=(0, 1)) + b_dk = tl.zeros([BC, BK], dtype=tl.float32) + for i_j in range(i_i + 1, NC): + p_q = tl.make_block_ptr(q + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_j * BC, i_k * BK), (BC, BK), (1, 0)) + p_g = tl.make_block_ptr(g + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_j * BC, i_k * BK), (BC, BK), (1, 0)) + p_dA = tl.make_block_ptr(dA + i_bh * T * BT, (T, BT), (BT, 1), (i_t * BT + i_j * BC, i_i * BC), (BC, BC), (1, 0)) + # [BC, BK] + b_q = tl.load(p_q, boundary_check=(0, 1)) + b_g = tl.load(p_g, boundary_check=(0, 1)) + b_qg = (b_q * tl.exp(b_g - b_gn[None, :])).to(b_q.dtype) + # [BC, BC] + b_dA = tl.load(p_dA, boundary_check=(0, 1)) + # [BC, BK] + b_dk += tl.dot(tl.trans(b_dA), b_qg, allow_tf32=False) + b_dk *= tl.exp(b_gn[None, :] - b_gk) + + o_dA = i_bh * T * BT + (i_t * BT + i_i * BC) * BT + i_i * BC + tl.arange(0, BC) + for j in range(0, BC): + p_qj = tl.make_block_ptr(q + i_bh * s_k_h, (T * K,), (1,), ((i_t * BT + i_i * BC + j) * K + i_k * BK,), (BK,), (0,)) + p_gqj = tl.make_block_ptr(g + i_bh * s_k_h, (T * K,), (1,), ((i_t * BT + i_i * BC + j) * 
K + i_k * BK,), (BK,), (0,)) + # [BC,] + b_dA = tl.load(dA + o_dA + j * BT, mask=(i_t * BT + i_i * BC + j < T), other=0) + # [BK,] + b_qj = tl.load(p_qj, boundary_check=(0,)).to(tl.float32) + b_gqj = tl.load(p_gqj, boundary_check=(0,)).to(tl.float32) + # [BC, BK] + m_i = o_i[:, None] <= j + b_dk += tl.where(m_i, b_dA[:, None] * b_qj[None, :] * tl.exp(b_gqj[None, :] - b_gk), 0.) + p_q = tl.make_block_ptr(q + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0)) + p_dk = tl.make_block_ptr(dk + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0)) + p_dg = tl.make_block_ptr(dg + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0)) + + b_q = tl.load(p_q, boundary_check=(0, 1)).to(tl.float32) + b_dk = b_dk + tl.load(p_dk, boundary_check=(0, 1)).to(tl.float32) + b_dg = b_q * b_dq - b_k * b_dk + if not OVERWRITE: + b_dg = b_dg + tl.load(p_dg, boundary_check=(0, 1)).to(tl.float32) + + tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), boundary_check=(0, 1)) + tl.store(p_dg, b_dg.to(p_dg.dtype.element_ty), boundary_check=(0, 1)) + + +@triton.jit +def chunk_gated_abc_bwd_kernel_intra_K( + v, + g, + do, + dA, + s_v_h, + s_v_t, + s_v_d, + scale, + T: tl.constexpr, + V: tl.constexpr, + BT: tl.constexpr, + BC: tl.constexpr, + BV: tl.constexpr, + NC: tl.constexpr +): + i_v, i_c, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + i_t, i_i, i_j = i_c // (NC * NC), (i_c % (NC * NC)) // NC, (i_c % (NC * NC)) % NC + n_bh = tl.num_programs(2) + + if i_i > i_j: + p_v = tl.make_block_ptr(v + i_bh * s_v_h, (V, T), (s_v_d, s_v_t), (i_v * BV, i_t * BT + i_j * BC), (BV, BC), (0, 1)) + p_gv = tl.make_block_ptr(g + i_bh * s_v_h, (V, T), (s_v_d, s_v_t), (i_v * BV, i_t * BT + i_j * BC), (BV, BC), (0, 1)) + p_gn = tl.make_block_ptr(g + i_bh * s_v_h, (T * V,), (s_v_d,), ((i_t * BT + i_i * BC) * V + i_v * BV,), (BV,), (0,)) + p_g = tl.make_block_ptr(g + i_bh * s_v_h, (T, V), (s_v_t, 
s_v_d), (i_t * BT + i_i * BC, i_v * BV), (BC, BV), (1, 0)) + p_do = tl.make_block_ptr(do + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT + i_i * BC, i_v * BV), (BC, BV), (1, 0)) + p_dA = tl.make_block_ptr(dA+(i_bh+i_v*n_bh)*T*BT, (T, BT), (BT, 1), (i_t * BT + i_i * BC, i_j * BC), (BC, BC), (1, 0)) + # [BV,] + b_gn = tl.load(p_gn, boundary_check=(0,)) + # [BC, BV] + b_g = tl.load(p_g, boundary_check=(0, 1)) + b_do = tl.load(p_do, boundary_check=(0, 1)) + b_do = (b_do * tl.exp(b_g - b_gn[None, :]) * scale).to(b_do.dtype) + # [BV, BC] + b_v = tl.load(p_v, boundary_check=(0, 1)) + b_gv = tl.load(p_gv, boundary_check=(0, 1)) + b_vg = (b_v * tl.exp(b_gn[:, None] - b_gv)).to(b_v.dtype) + # [BC, BC] + b_dA = tl.dot(b_do, b_vg, allow_tf32=False) + tl.store(p_dA, b_dA.to(dA.dtype.element_ty), boundary_check=(0, 1)) + elif i_i == i_j: + p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T * V,), (s_v_d,), ((i_t * BT + i_j * BC) * V + i_v * BV,), (BV,), (0,)) + p_gv = tl.make_block_ptr(g + i_bh * s_v_h, (T * V,), (s_v_d,), ((i_t * BT + i_j * BC) * V + i_v * BV,), (BV,), (0,)) + p_g = tl.make_block_ptr(g + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT + i_i * BC, i_v * BV), (BC, BV), (1, 0)) + p_do = tl.make_block_ptr(do + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT + i_i * BC, i_v * BV), (BC, BV), (1, 0)) + # [BC, BV] + b_g = tl.load(p_g, boundary_check=(0, 1)) + b_do = tl.load(p_do, boundary_check=(0, 1)) * scale + + o_i = tl.arange(0, BC) + o_A = (i_bh + i_v * n_bh) * T * BT + (i_t * BT + i_i * BC + tl.arange(0, BC)) * BT + i_j * BC + m_A = (i_t * BT + i_i * BC + tl.arange(0, BC)) < T + for j in range(0, BC): + # [BV,] + b_v = tl.load(p_v, boundary_check=(0,)).to(tl.float32) + b_gv = tl.load(p_gv, boundary_check=(0,)).to(tl.float32) + # [BC,] + b_dA = tl.sum(b_do * b_v[None, :] * tl.exp(b_g - b_gv[None, :]), 1) + b_dA = tl.where(o_i >= j, b_dA, 0) + tl.store(dA + o_A + j, b_dA.to(b_do.dtype), mask=m_A) + + p_v = tl.advance(p_v, (V,)) + p_gv = tl.advance(p_gv, (V,)) + + 
+@triton.jit +def chunk_gated_abc_bwd_kernel_K( + q, + k, + v, + h, + g, + A, + do, + dh, + dq, + dk, + dv, + dA, + s_k_h, + s_k_t, + s_k_d, + s_v_h, + s_v_t, + s_v_d, + s_h_h, + s_h_t, + s_h_d, + scale, + T: tl.constexpr, + K: tl.constexpr, + V: tl.constexpr, + BT: tl.constexpr, + BK: tl.constexpr, + BV: tl.constexpr +): + i_k, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + n_bh = tl.num_programs(2) + + o_i = tl.arange(0, BT) + m_s = o_i[:, None] >= o_i[None, :] + + p_q = tl.make_block_ptr(q + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + p_A = tl.make_block_ptr(A + (i_k*n_bh+i_bh) * T * BT, (T, BT, ), (BT, 1), (i_t * BT, 0), (BT, BT), (1, 0)) + + # [BT, BK] + b_q = tl.load(p_q, boundary_check=(0, 1)) + b_k = tl.load(p_k, boundary_check=(0, 1)) + # [BT, BT] + b_A = tl.dot((b_q * scale).to(b_q.dtype), tl.trans(b_k), allow_tf32=False) + b_A = tl.where(m_s, b_A, 0.) 
+ tl.store(p_A, b_A.to(p_A.dtype.element_ty), boundary_check=(0, 1)) + + b_dq = tl.zeros([BT, BK], dtype=tl.float32) + b_dk = tl.zeros([BT, BK], dtype=tl.float32) + for i_v in range(tl.cdiv(V, BV)): + p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + p_h = tl.make_block_ptr(h + i_bh * s_h_h + i_t * K*V, (V, K), (s_h_d, s_h_t), (i_v * BV, i_k * BK), (BV, BK), (0, 1)) + p_g = tl.make_block_ptr(g + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + p_gn = tl.make_block_ptr(g + i_bh * s_v_h, (T * V,), (s_v_d,), ((i_t * BT + BT - 1) * V + i_v * BV,), (BV,), (0,)) + + p_do = tl.make_block_ptr(do + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + p_dh = tl.make_block_ptr(dh + i_bh * s_h_h + i_t * K*V, (K, V), (s_h_t, s_h_d), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + p_dv = tl.make_block_ptr(dv + (i_k*n_bh+i_bh) * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + + # [BV,] + b_gn = tl.load(p_gn, boundary_check=(0,)) + # [BT, BV] + b_v = tl.load(p_v, boundary_check=(0, 1)) + b_g = tl.load(p_g, boundary_check=(0, 1)) + b_v = b_v * tl.exp(b_gn[None, :] - b_g) + # [BV, BK] + b_h = tl.load(p_h, boundary_check=(0, 1)) + # [BT, BV] + b_do = tl.load(p_do, boundary_check=(0, 1)) + b_do = (b_do * tl.exp(b_g) * scale).to(b_do.dtype) + # [BK, BV] + b_dh = tl.load(p_dh, boundary_check=(0, 1)) + + # [BT, BK] + b_dq += tl.dot(b_do, b_h, allow_tf32=False) + b_dk += tl.dot(b_v.to(b_dh.dtype), tl.trans(b_dh), allow_tf32=False) + # [BT, BV] + b_dv = tl.exp(b_gn[None, :] - b_g) * tl.dot(b_k, b_dh, allow_tf32=False) + tl.store(p_dv, b_dv.to(p_dv.dtype.element_ty), boundary_check=(0, 1)) + p_dA = tl.make_block_ptr(dA + i_bh * T * BT, (T, BT, ), (BT, 1), (i_t * BT, 0), (BT, BT), (1, 0)) + # [BT, BT] + b_dA = tl.load(p_dA, boundary_check=(0, 1)) + # [BT, BK] + b_dq += tl.dot(b_dA, b_k, allow_tf32=False) + b_dk += tl.dot(tl.trans(b_dA).to(b_k.dtype), b_q, 
allow_tf32=False) + + p_dq = tl.make_block_ptr(dq + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + p_dk = tl.make_block_ptr(dk + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + tl.store(p_dq, b_dq.to(p_dq.dtype.element_ty), boundary_check=(0, 1)) + tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), boundary_check=(0, 1)) + + +@triton.jit +def chunk_gated_abc_bwd_kernel_intra_KV( + v, + g, + o, + A, + do, + dv, + dg, + s_v_h, + s_v_t, + s_v_d, + T: tl.constexpr, + V: tl.constexpr, + BT: tl.constexpr, + BC: tl.constexpr, + BV: tl.constexpr, + NC: tl.constexpr, + OVERWRITE: tl.constexpr +): + i_v, i_c, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + i_t, i_i = i_c // NC, i_c % NC + + p_gv = tl.make_block_ptr(g + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT + i_i * BC, i_v * BV), (BC, BV), (1, 0)) + p_gn = tl.make_block_ptr(g + i_bh * s_v_h, (T*V,), (s_v_d,), ((i_t * BT + i_i * BC + BC - 1) * V + i_v * BV,), (BV,), (0,)) + # [BV,] + b_gn = tl.load(p_gn, boundary_check=(0,)) + # [BC, BV] + b_gv = tl.load(p_gv, boundary_check=(0, 1)) + b_dv = tl.zeros([BC, BV], dtype=tl.float32) + for i_j in range(i_i + 1, NC): + p_g = tl.make_block_ptr(g + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT + i_j * BC, i_v * BV), (BC, BV), (1, 0)) + p_A = tl.make_block_ptr(A + i_bh * T * BT, (BT, T), (1, BT), (i_i * BC, i_t * BT + i_j * BC), (BC, BC), (0, 1)) + p_do = tl.make_block_ptr(do + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT + i_j * BC, i_v * BV), (BC, BV), (1, 0)) + # [BC, BV] + b_g = tl.load(p_g, boundary_check=(0, 1)) + b_do = tl.load(p_do, boundary_check=(0, 1)) + b_do = (b_do * tl.exp(b_g - b_gn[None, :])).to(b_do.dtype) + # [BC, BC] + b_A = tl.load(p_A, boundary_check=(0, 1)) + b_dv += tl.dot(b_A, b_do, allow_tf32=False) + b_dv *= tl.exp(b_gn[None, :] - b_gv) + + o_i = tl.arange(0, BC) + for j in range(0, BC): + p_g = tl.make_block_ptr(g + i_bh * s_v_h, (T * V,), (1,), ((i_t * BT + i_i * BC + j) 
* V + i_v * BV,), (BV,), (0,)) + p_A = tl.make_block_ptr(A + i_bh * T * BT, (T * BT,), (1,), ((i_t * BT + i_i * BC + j) * BT + i_i * BC,), (BC,), (0,)) + p_do = tl.make_block_ptr(do + i_bh * s_v_h, (T * V,), (1,), ((i_t * BT + i_i * BC + j) * V + i_v * BV,), (BV,), (0,)) + # [BC,] + b_A = tl.load(p_A, boundary_check=(0,)) + # [BV,] + b_g = tl.load(p_g, boundary_check=(0,)) + b_do = tl.load(p_do, boundary_check=(0,)) + # [BC, BV] + m_i = o_i[:, None] <= j + b_dv += tl.where(m_i, tl.exp(b_g[None, :] - b_gv) * b_A[:, None] * b_do[None, :], 0.) + p_o = tl.make_block_ptr(o + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT + i_i * BC, i_v * BV), (BC, BV), (1, 0)) + p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT + i_i * BC, i_v * BV), (BC, BV), (1, 0)) + p_do = tl.make_block_ptr(do + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT + i_i * BC, i_v * BV), (BC, BV), (1, 0)) + p_dv = tl.make_block_ptr(dv + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT + i_i * BC, i_v * BV), (BC, BV), (1, 0)) + p_dg = tl.make_block_ptr(dg + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT + i_i * BC, i_v * BV), (BC, BV), (1, 0)) + + b_o = tl.load(p_o, boundary_check=(0, 1)).to(tl.float32) + b_v = tl.load(p_v, boundary_check=(0, 1)).to(tl.float32) + b_do = tl.load(p_do, boundary_check=(0, 1)).to(tl.float32) + b_dv = b_dv + tl.load(p_dv, boundary_check=(0, 1)).to(tl.float32) + b_dg = b_o * b_do - b_v * b_dv + if not OVERWRITE: + b_dg = b_dg + tl.load(p_dg, boundary_check=(0, 1)).to(tl.float32) + tl.store(p_dv, b_dv.to(p_dv.dtype.element_ty), boundary_check=(0, 1)) + tl.store(p_dg, b_dg.to(p_dg.dtype.element_ty), boundary_check=(0, 1)) + + +def fwd_pre(g, B, H, T, S, BT): + NT = triton.cdiv(T, BT) + g_org, g = g, torch.empty_like(g, dtype=torch.float) + def grid(meta): return (triton.cdiv(meta['S'], meta['BS']), NT, B * H) + # keep the cumulative normalizer in fp32 + # this kernel is equivalent to + # g = g.view(B, H, NT, BT, -1).cumsum(-2).view(B, H, T, -1) + 
chunk_gated_abc_fwd_kernel_cum[grid]( + g_org, g, + g.stride(1), g.stride(2), g.stride(3), + T=T, S=S, BT=BT + ) + return g + + +def fwd_inner(q, k, v, g, B, H, T, K, V, BT, BK, BV, gatek=False, h0=None, ht=None): + NT = triton.cdiv(T, BT) + NK, NV = triton.cdiv(K, BK), triton.cdiv(V, BV) + num_warps = 4 if BK == 64 else 2 + num_stages = 1 + + h = q.new_empty(B, H, NT * K, V) + grid = (NV, NK, B * H) + chunk_gated_abc_fwd_kernel_h[grid]( + k, v, g, h, h0, ht, + k.stride(1), k.stride(2), k.stride(3), + v.stride(1), v.stride(2), v.stride(3), + h.stride(1), h.stride(2), h.stride(3), + T=T, K=K, V=V, BT=BT, BK=BK, BV=BV, NT=NT, + GATEK=gatek, + USE_INITIAL_STATE=h0 is not None, + STORE_FINAL_STATE=ht is not None, + num_warps=num_warps, + num_stages=num_stages + ) + return h + + +def fwd_v(q, k, v, g, B, H, T, K, V, BT, BK, BV, BC, h0=None, ht=None, scale=1.): + NT = triton.cdiv(T, BT) + NK, NV = triton.cdiv(K, BK), triton.cdiv(V, BV) + NC = triton.cdiv(BT, BC) + num_warps = 4 if BK == 64 else 2 + num_stages = 1 + + h = fwd_inner( + q=q, k=k, v=v, g=g, + B=B, H=H, T=T, K=K, V=V, BT=BT, BK=BK, BV=BV, + gatek=True, + h0=h0, + ht=ht + ) + A = q.new_zeros(NK, B, H, T, BT) + grid = (NK, NT * NC * NC, B * H) + chunk_gated_abc_fwd_kernel_intra_V[grid]( + q, k, g, A, + k.stride(1), k.stride(2), k.stride(3), + scale, + T=T, K=K, BT=BT, BC=BC, BK=BK, NC=NC, + num_warps=2, + num_stages=num_stages + ) + A = A.sum(0, dtype=A.dtype) + o = torch.empty_like(v) + grid = (NV, NT, B * H) + chunk_gated_abc_fwd_kernel_V[grid]( + q, v, g, h, o, A, + k.stride(1), k.stride(2), k.stride(3), + v.stride(1), v.stride(2), v.stride(3), + h.stride(1), h.stride(2), h.stride(3), + scale, + T=T, K=K, V=V, BT=BT, BK=BK, BV=BV, + num_warps=num_warps, + num_stages=num_stages + ) + return o, h, A + + +def fwd_k(q, k, v, g, B, H, T, K, V, BT, BK, BV, BC, h0=None, ht=None, scale=1.): + NT = triton.cdiv(T, BT) + NV = triton.cdiv(V, BV) + NC = triton.cdiv(BT, BC) + num_warps = 4 if BK == 64 else 2 + num_stages 
= 1 + + h = fwd_inner( + q=q, k=k, v=v, g=g, + B=B, H=H, T=T, K=K, V=V, BT=BT, BK=BK, BV=BV, + gatek=False, + h0=h0, + ht=ht + ) + o = torch.empty_like(v) + A = q.new_empty(B, H, T, BT) + grid = (NV, NT, B * H) + chunk_gated_abc_fwd_kernel_K[grid]( + q, k, h, g, o, A, + k.stride(1), k.stride(2), k.stride(3), + v.stride(1), v.stride(2), v.stride(3), + h.stride(1), h.stride(2), h.stride(3), + scale, + T=T, K=K, V=V, BT=BT, BK=BK, BV=BV, + num_warps=num_warps, + num_stages=num_stages + ) + grid = (NV, NT * NC, B * H) + chunk_gated_abc_fwd_kernel_intra_K[grid]( + v, g, o, A, + v.stride(1), v.stride(2), v.stride(3), + T=T, V=V, BT=BT, BC=BC, BV=BV, NC=NC, + num_warps=2, + num_stages=num_stages + ) + return o, h, A + + +def bwd_inner(q, g, do, B, H, T, K, V, BT, BK, BV, scale, gatek=False): + NT = triton.cdiv(T, BT) + NK, NV = triton.cdiv(K, BK), triton.cdiv(V, BV) + num_warps = 4 if BK == 64 else 2 + num_stages = 1 + + dh = q.new_empty(B, H, NT * K, V) + grid = (NK, NV, B * H) + chunk_gated_abc_bwd_kernel_dh[grid]( + q, g, do, dh, + q.stride(1), q.stride(2), q.stride(3), + do.stride(1), do.stride(2), do.stride(3), + dh.stride(1), dh.stride(2), dh.stride(3), + scale, + T=T, K=K, V=V, BT=BT, BK=BK, BV=BV, NT=NT, + GATEK=gatek, + num_warps=num_warps, + num_stages=num_stages + ) + return dh + + +def bwd_v(q, k, v, g, h, A, do, dg, B, H, T, K, V, BT, BK, BV, BC, scale=1.): + NT = triton.cdiv(T, BT) + NK = triton.cdiv(K, BK) + NC = triton.cdiv(BT, BC) + num_warps = 4 if BK == 64 else 2 + num_stages = 1 + + overwrite_dg = dg is None + dh = bwd_inner( + q, g, do, + B=B, H=H, T=T, K=K, V=V, BT=BT, BK=BK, BV=BV, + scale=scale, + gatek=True + ) + dq = torch.empty_like(q, dtype=torch.float) + dk = torch.empty_like(k, dtype=torch.float) + dv = v.new_empty(NK, *v.shape) + dg = torch.empty_like(g, dtype=torch.float) if dg is None else dg + dA = v.new_zeros(B, H, T, BT) + + grid = (NK, NT, B * H) + chunk_gated_abc_bwd_kernel_V[grid]( + k, v, h, g, A, do, dh, dq, dk, dv, dA, + 
k.stride(1), k.stride(2), k.stride(3), + v.stride(1), v.stride(2), v.stride(3), + h.stride(1), h.stride(2), h.stride(3), + scale, + T=T, K=K, V=V, BT=BT, BK=BK, BV=BV, + num_warps=num_warps, + num_stages=num_stages + ) + dv = dv.sum(0, dtype=dv.dtype) + grid = (NK, NT * NC, B * H) + chunk_gated_abc_bwd_kernel_intra_V[grid]( + q, k, g, dA, dq, dk, dg, + k.stride(1), k.stride(2), k.stride(3), + T=T, K=K, BT=BT, BC=BC, BK=BK, NC=NC, + OVERWRITE=overwrite_dg, + num_warps=num_warps, + num_stages=num_stages + ) + return dq, dk, dv, dg + + +def bwd_k(q, k, v, g, h, o, do, dg, B, H, T, K, V, BT, BK, BV, BC, scale=1.): + NT = triton.cdiv(T, BT) + NK, NV = triton.cdiv(K, BK), triton.cdiv(V, BV) + NC = triton.cdiv(BT, BC) + num_warps = 4 if BK == 64 else 2 + num_stages = 1 + + overwrite_dg = dg is None + dh = bwd_inner( + q, g, do, + B=B, H=H, T=T, K=K, V=V, BT=BT, BK=BK, BV=BV, + scale=scale, + gatek=False + ) + dA = q.new_zeros(NV, B, H, T, BT) + grid = (NV, NT * NC * NC, B * H) + chunk_gated_abc_bwd_kernel_intra_K[grid]( + v, g, do, dA, + v.stride(1), v.stride(2), v.stride(3), + scale, + T=T, V=V, BT=BT, BC=BC, BV=BV, NC=NC, + num_warps=num_warps, + num_stages=num_stages + ) + dA = dA.sum(0, dtype=dA.dtype) + + A = do.new_zeros(NK, B, H, T, BT) + dq = torch.empty_like(q) + dk = torch.empty_like(k) + dv = v.new_empty(NK, *v.shape) + dg = torch.empty_like(g, dtype=torch.float) if dg is None else dg + grid = (NK, NT, B * H) + chunk_gated_abc_bwd_kernel_K[grid]( + q, k, v, h, g, A, do, dh, dq, dk, dv, dA, + q.stride(1), q.stride(2), q.stride(3), + v.stride(1), v.stride(2), v.stride(3), + h.stride(1), h.stride(2), h.stride(3), + scale, + T=T, K=K, V=V, BT=BT, BK=BK, BV=BV, + num_warps=num_warps, + num_stages=num_stages + ) + A = A.sum(0, dtype=A.dtype) + dv = dv.sum(0, dtype=dv.dtype) + grid = (NV, NT * NC, B * H) + chunk_gated_abc_bwd_kernel_intra_KV[grid]( + v, g, o, A, do, dv, dg, + v.stride(1), v.stride(2), v.stride(3), + T=T, V=V, BT=BT, BC=BC, BV=BV, NC=NC, + 
OVERWRITE=overwrite_dg, + num_warps=num_warps, + num_stages=num_stages + ) + return dq, dk, dv, dg + + +class ChunkGatedABCFunction(torch.autograd.Function): + + @staticmethod + @contiguous + def forward(ctx, q, k, v, s, g, scale, initial_state, output_final_state, checkpoint_level): + B, H, T, K, V, M = *q.shape, v.shape[-1], s.shape[-1] + BT, BC = 64, 16 + BK = min(64, triton.next_power_of_2(K)) + BV = min(64, triton.next_power_of_2(V)) + BM = min(64, triton.next_power_of_2(M)) + + final_state = None + if output_final_state: + final_state = (q.new_empty(B, H, K, M, dtype=torch.float), + q.new_empty(B, H, M, V, dtype=torch.float)) + + g_org, g = g, fwd_pre(g, B, H, T, M, BT) + ok, hk, _ = fwd_k( + q=q, k=k, v=s, g=g, + B=B, H=H, T=T, K=K, V=M, BT=BT, BK=BK, BV=BM, BC=BC, + h0=initial_state[0] if initial_state is not None else None, + ht=final_state[0] if final_state is not None else None, + scale=scale + ) + + # equivalent to: + # p = ok.softmax(-1, torch.float) + # p is kept in fp32 for safe softmax backward + p = torch.empty_like(ok, dtype=torch.float) + def grid(meta): return (triton.cdiv(meta['T'], meta['BT']), B * H) + softmax_fwd_kernel[grid]( + ok, p, + s.stride(1), s.stride(2), s.stride(3), + T=T, S=M, BT=BT + ) + + ov, hv, Av = fwd_v( + q=p.to(q.dtype), k=s, v=v, g=g, + B=B, H=H, T=T, K=M, V=V, BT=BT, BK=BM, BV=BV, BC=BC, + h0=initial_state[1] if initial_state is not None else None, + ht=final_state[1] if final_state is not None else None, + scale=1. 
+ ) + + if checkpoint_level >= 1: + del g + g = g_org + if checkpoint_level > 1: + del hk + del hv + hk, hv = None, None + initial_state = tuple() if initial_state is None else initial_state + else: + initial_state = tuple() + + ctx.save_for_backward(q, k, v, s, g, ok, p, hk, hv, Av, *initial_state) + ctx.checkpoint_level = checkpoint_level + ctx.scale = scale + ctx.BT = BT + return ov, final_state + + @staticmethod + @contiguous + def backward(ctx, dov, dht=None): + q, k, v, s, g, ok, p, hk, hv, Av, *initial_state = ctx.saved_tensors + qv = p.to(q.dtype) + B, H, T, K, V, M = *q.shape, v.shape[-1], s.shape[-1] + BT, BC = ctx.BT, 16 + BK = min(64, triton.next_power_of_2(K)) + BV = min(64, triton.next_power_of_2(V)) + BM = min(64, triton.next_power_of_2(M)) + + if ctx.checkpoint_level >= 1: + g = fwd_pre(g, B, H, T, M, BT) + + # rerun the forward pass to get h if checkpoint_level >= 1 + if ctx.checkpoint_level > 1: + hk = fwd_inner( + q=q, k=k, v=s, g=g, + B=B, H=H, T=T, K=K, V=M, BT=BT, BK=BK, BV=BM, + gatek=False, + h0=initial_state[0] if len(initial_state) > 0 else None, + ht=None + ) + hv = fwd_inner( + q=qv, k=s, v=v, g=g, + B=B, H=H, T=T, K=M, V=V, BT=BT, BK=BM, BV=BV, + gatek=True, + h0=initial_state[1] if len(initial_state) > 0 else None, + ht=None + ) + + dqv, dsv, dv, dg = bwd_v( + q=qv, k=s, v=v, g=g, h=hv, A=Av, do=dov, dg=None, + B=B, H=H, T=T, K=M, V=V, BT=BT, BK=BM, BV=BV, BC=BC, + scale=1. 
+ ) + + # softmax gradient, equivalent to: + # dok = qv * (dqv - (qv * dqv).sum(-1, True)) + dok = torch.empty_like(ok) + def grid(meta): return (triton.cdiv(meta['T'], meta['BT']), B * H) + softmax_bwd_kernel[grid]( + p, dqv, dok, + s.stride(1), s.stride(2), s.stride(3), + T=T, S=M, BT=BT + ) + + dq, dk, dsk, dg = bwd_k( + q=q, k=k, v=s, g=g, h=hk, o=ok, do=dok, dg=dg, + B=B, H=H, T=T, K=K, V=M, BT=BT, BK=BK, BV=BM, BC=BC, + scale=ctx.scale + ) + + ds = dsv.add_(dsk) + # reversed cumsum, equivalent to: + # + # def reversed_cumsum(x, dim=-1): + # c = x.cumsum(dim) + # return x + c.index_select(dim, x.new_tensor([c.shape[dim]-1], dtype=torch.long)) - c + dg = chunk_reversed_cumsum_fwd(dg).to(s.dtype) + return dq, dk, dv, ds, dg, None, None, None, None + + +def chunk_gated_abc( + q: torch.Tensor, + k: torch.Tensor, + v: torch.Tensor, + s: torch.Tensor, + g: Optional[torch.Tensor] = None, + scale: Optional[float] = None, + initial_state: Optional[Tuple[torch.Tensor]] = None, + output_final_state: Optional[bool] = False, + checkpoint_level: Optional[int] = 2 +) -> Tuple[torch.Tensor, torch.Tensor]: + r""" + Args: + q (torch.Tensor): + queries of shape `(B, H, T, K)` + k (torch.Tensor): + keys of shape `(B, H, T, K)` + v (torch.Tensor): + values of shape `(B, H, T, V)` + s (torch.Tensor): + slot representations of shape `(B, H, T, M)` + g (Optional[torch.Tensor]): + Forget gates of shape `(B, H, T, M)` applied to keys. + If not provided, this function is equivalent to vanilla ABC. + scale (Optional[float]): + Scale factor for attention scores. + If not provided, it will default to `1 / sqrt(K)`. Default: `None`. + initial_state (Optional[Tuple[torch.Tensor]]): + Initial state tuple having tensors of shape `(B, H, K, M)` and `(B, H, M, V)`. Default: `None`. + output_final_state (Optional[bool]): + Whether to output the final state tuple, having tensors of shape `(B, H, K, M)` and `(B, H, M, V)`. Default: `False`. + checkpoint_level (Optional[int]): + Checkpointing level; higher values save more memory but require more recomputation during the backward pass. 
+ Default: `2`: + - Level `0`: no memory saved, no recomputation. + - Level `1`: recompute the fp32 cumulative values during backward. + - Level `2`: recompute the fp32 cumulative values and forward hidden states during backward. + """ + assert checkpoint_level in [0, 1, 2] + if initial_state is not None: + initial_state = tuple(i.detach() for i in initial_state) + if g is None: + # TODO: these 3 steps take a huge amount of time and ought to be optimized + z = s.float().logcumsumexp(2) + g = torch.cat((z[:, :, :1], z[:, :, :-1]), 2) - z + s = torch.exp(s - z).to(k.dtype) + if scale is None: + scale = q.shape[-1] ** -0.5 + ov, final_state = ChunkGatedABCFunction.apply(q, k, v, s, g, scale, initial_state, output_final_state, checkpoint_level) + return ov, final_state diff --git a/fla/ops/abc/naive.py b/fla/ops/abc/naive.py new file mode 100644 index 0000000000000000000000000000000000000000..5abc3f5cd3ccf3e37dd30d4986c5996871a117de --- /dev/null +++ b/fla/ops/abc/naive.py @@ -0,0 +1,90 @@ +# -*- coding: utf-8 -*- + +from typing import Optional + +import torch + + +def naive_recurrent_abc( + q: torch.Tensor, + k: torch.Tensor, + v: torch.Tensor, + s: torch.Tensor, + g: Optional[torch.Tensor] = None, + scale: Optional[float] = None, + initial_state: Optional[torch.Tensor] = None, + output_final_state: Optional[bool] = False +) -> torch.Tensor: + dtype = q.dtype + + # [batch_size, n_heads, seq_len, n_slots] + if g is None: + z = s.float().logcumsumexp(2) + g = torch.cat((z[:, :, :1], z[:, :, :-1]), 2) - z + s = torch.exp(s - z) + q, k, v, s, g = map(lambda x: x.float(), (q, k, v, s, g)) + B, H, T, K, V, M = *q.shape, v.shape[-1], s.shape[-1] + + hk = torch.zeros(B, H, K, M, dtype=torch.float, device=q.device) + ok = torch.zeros_like(s) + + if scale is None: + scale = q.shape[-1] ** -0.5 + + final_state = None + if initial_state is not None: + hk += initial_state[0] + + for i in range(T): + q_i = q[:, :, i] * scale + k_i = k[:, :, i] + v_i = s[:, :, i] + g_i = g[:, :, i].exp() + 
hk = hk * g_i[..., None, :] + k_i[..., None] * v_i[..., None, :] + ok[:, :, i] = (q_i[..., None] * hk).sum(-2) + + qv = ok.softmax(-1) + hv = torch.zeros(B, H, M, V, dtype=torch.float, device=q.device) + ov = torch.zeros_like(v) + if initial_state is not None: + hv += initial_state[1] + + for i in range(T): + q_i = qv[:, :, i] + k_i = s[:, :, i] + v_i = v[:, :, i] + g_i = g[:, :, i].exp() + hv = hv * g_i[..., :, None] + k_i[..., None] * v_i[..., None, :] + ov[:, :, i] = (q_i[..., None] * hv).sum(-2) + + if output_final_state: + final_state = (hk, hv) + return ov.to(dtype), final_state + + +def naive_cumsum_abc( + q: torch.Tensor, + k: torch.Tensor, + v: torch.Tensor, + s: torch.Tensor +) -> torch.Tensor: + """ + A simple implementation of vanilla ABC that is more aligned with the descriptions in the paper. + This is just for demonstration purposes, with no numerical stability guaranteed. + """ + + dtype = q.dtype + q, k, v, s = map(lambda x: x.float(), (q, k, v, s)) + + scale = q.shape[-1] ** -0.5 + # [batch_size, n_heads, seq_len, n_slots] + s = (s - s.max(2, True)[0]).exp() + z = s.cumsum(2) + # [batch_size, n_heads, seq_len, n_slots, d_head] + K = (s.unsqueeze(-1) * k.unsqueeze(-2)).cumsum(2) / z.unsqueeze(-1) + V = (s.unsqueeze(-1) * v.unsqueeze(-2)).cumsum(2) / z.unsqueeze(-1) + # [batch_size, n_heads, seq_len, n_slots] + p = torch.einsum('...d,...md->...m', q * scale, K).softmax(-1) + # [batch_size, n_heads, seq_len, d_head] + o = torch.einsum('...m,...md->...d', p, V) + return o.to(dtype), None diff --git a/fla/ops/abc/recurrent_fuse.py b/fla/ops/abc/recurrent_fuse.py new file mode 100644 index 0000000000000000000000000000000000000000..3b4491d5bcf724bed222ad46bcf6fb6289f0af76 --- /dev/null +++ b/fla/ops/abc/recurrent_fuse.py @@ -0,0 +1,388 @@ +# -*- coding: utf-8 -*- + +# Copyright (c) 2024, Yu Zhang, Songlin Yang + +from typing import Optional, Tuple + +import torch +import triton +import triton.language as tl +from torch.cuda.amp import custom_bwd, 
custom_fwd + +from fla.utils import contiguous + + +@triton.jit +def fused_recurrent_gated_abc_fwd_kernel( + q, + k, + v, + gk, + gv, + o, + h0, + ht, + s_k_h, + s_v_h, + scale, + B: tl.constexpr, + H: tl.constexpr, + T: tl.constexpr, + K: tl.constexpr, + V: tl.constexpr, + BK: tl.constexpr, + BV: tl.constexpr, + USE_INITIAL_STATE: tl.constexpr, + STORE_FINAL_STATE: tl.constexpr, + REVERSE: tl.constexpr, + USE_GK: tl.constexpr, + USE_GV: tl.constexpr, +): + # indices + i_v, i_k, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + + p_q = q + i_bh * s_k_h + i_k * BK + tl.arange(0, BK) + ((T-1) * K if REVERSE else 0) + p_k = k + i_bh * s_k_h + i_k * BK + tl.arange(0, BK) + ((T-1) * K if REVERSE else 0) + p_v = v + i_bh * s_v_h + i_v * BV + tl.arange(0, BV) + ((T-1) * V if REVERSE else 0) + p_o = o + (i_bh + i_k * B * H) * s_v_h + i_v * BV + tl.arange(0, BV) + ((T-1) * V if REVERSE else 0) + + if USE_GK: + p_gk = gk + i_bh * s_k_h + i_k * BK + tl.arange(0, BK) + ((T-1) * K if REVERSE else 0) + if USE_GV: + p_gv = gv + i_bh * s_v_h + i_v * BV + tl.arange(0, BV) + ((T-1) * V if REVERSE else 0) + + mask_bk = (i_k * BK + tl.arange(0, BK)) < K + mask_bv = (i_v * BV + tl.arange(0, BV)) < V + + h = tl.zeros([BV, BK], dtype=tl.float32) + mask_kv = mask_bk[None, :] & mask_bv[:, None] + + if USE_INITIAL_STATE: + p_h0 = h0 + i_bh * K * V + (i_k * BK + tl.arange(0, BK)[None, :]) * V + (i_v * BV + tl.arange(0, BV)[:, None]) + h += tl.load(p_h0, mask=mask_kv, other=0).to(tl.float32) + + for _ in range(0, T): + b_q = tl.load(p_q, mask=mask_bk, other=0).to(tl.float32) * scale + b_k = tl.load(p_k, mask=mask_bk, other=0).to(tl.float32) + b_v = tl.load(p_v, mask=mask_bv, other=0).to(tl.float32) + if USE_GK: + b_gk = tl.load(p_gk, mask=mask_bk, other=0).to(tl.float32) + h = h * b_gk[None, :] + if USE_GV: + b_gv = tl.load(p_gv, mask=mask_bv, other=0).to(tl.float32) + h = h * b_gv[:, None] + h += b_k[None, :] * b_v[:, None] + b_o = h * b_q[None, :] + b_o = tl.sum(b_o, axis=1) + 
tl.store(p_o, b_o.to(p_o.dtype.element_ty), mask=mask_bv) + p_q += -K if REVERSE else K + p_k += -K if REVERSE else K + p_o += -V if REVERSE else V + p_v += -V if REVERSE else V + if USE_GK: + p_gk += -K if REVERSE else K + if USE_GV: + p_gv += -V if REVERSE else V + + if STORE_FINAL_STATE: + p_ht = ht + i_bh * K * V + (i_k * BK + tl.arange(0, BK)[None, :]) * V + (i_v * BV + tl.arange(0, BV)[:, None]) + tl.store(p_ht, h.to(p_ht.dtype.element_ty), mask=mask_kv) + + +@triton.jit +def fused_recurrent_gated_abc_bwd_kernel( + q, + k, + v, + gk, + gv, + do, + dq, + dk, + dv, + h0, + s_k_h, + s_v_h, + scale, + B: tl.constexpr, + H: tl.constexpr, + T: tl.constexpr, + K: tl.constexpr, + V: tl.constexpr, + BK: tl.constexpr, + BV: tl.constexpr, + USE_INITIAL_STATE: tl.constexpr, + REVERSE: tl.constexpr, + USE_GK: tl.constexpr, + USE_GV: tl.constexpr, +): + i_v, i_k, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + + p_q = q + i_bh * s_k_h + i_k * BK + tl.arange(0, BK) + ((T-1) * K if REVERSE else 0) + p_k = k + i_bh * s_k_h + i_k * BK + tl.arange(0, BK) + ((T-1) * K if REVERSE else 0) + p_v = v + i_bh * s_v_h + i_v * BV + tl.arange(0, BV) + ((T-1) * V if REVERSE else 0) + p_do = do + i_bh * s_v_h + i_v * BV + tl.arange(0, BV) + ((T-1) * V if REVERSE else 0) + p_dq = dq + (i_bh + i_v * B * H) * s_k_h + i_k * BK + tl.arange(0, BK) + ((T-1) * K if REVERSE else 0) + if USE_GK: + p_gk = gk + i_bh * s_k_h + i_k * BK + tl.arange(0, BK) + ((T-1) * K if REVERSE else 0) + if USE_GV: + p_gv = gv + i_bh * s_v_h + i_v * BV + tl.arange(0, BV) + ((T-1) * V if REVERSE else 0) + mask_bk = i_k * BK + tl.arange(0, BK) < K + mask_bv = i_v * BV + tl.arange(0, BV) < V + mask_kv = mask_bk[:, None] & mask_bv[None, :] + h = tl.zeros([BK, BV], dtype=tl.float32) + + if USE_INITIAL_STATE: + p_h0 = h0 + i_bh * K * V + (i_k * BK + tl.arange(0, BK)[:, None]) * V + (i_v * BV + tl.arange(0, BV)[None, :]) + h += tl.load(p_h0, mask=mask_kv, other=0).to(tl.float32) + + for _ in range(0, T): + b_k = 
tl.load(p_k, mask=mask_bk, other=0).to(tl.float32) + b_v = tl.load(p_v, mask=mask_bv, other=0).to(tl.float32) + b_do = tl.load(p_do, mask=mask_bv, other=0).to(tl.float32) + if USE_GK: + b_gk = tl.load(p_gk, mask=mask_bk, other=0).to(tl.float32) + h = h * b_gk[:, None] + if USE_GV: + b_gv = tl.load(p_gv, mask=mask_bv, other=0).to(tl.float32) + h = h * b_gv[None, :] + h += b_k[:, None] * b_v[None, :] + b_dq = tl.sum(h * b_do[None, :], axis=1) * scale + tl.store(p_dq, b_dq.to(p_dq.dtype.element_ty), mask=mask_bk) + + p_k += -K if REVERSE else K + p_v += -V if REVERSE else V + p_q += -K if REVERSE else K + p_do += -V if REVERSE else V + p_dq += -K if REVERSE else K + if USE_GK: + p_gk += -K if REVERSE else K + if USE_GV: + p_gv += -V if REVERSE else V + + # sync threads + tl.debug_barrier() + + p_q = q + i_bh * s_k_h + i_k * BK + tl.arange(0, BK) + ((T - 1) * K if not REVERSE else 0) + p_k = k + i_bh * s_k_h + i_k * BK + tl.arange(0, BK) + ((T - 1) * K if not REVERSE else 0) + p_v = v + i_bh * s_v_h + i_v * BV + tl.arange(0, BV) + ((T - 1) * V if not REVERSE else 0) + p_do = do + i_bh * s_v_h + i_v * BV + tl.arange(0, BV) + ((T - 1) * V if not REVERSE else 0) + p_dk = dk + (i_bh + i_v * B * H) * s_k_h + i_k * BK + tl.arange(0, BK) + ((T - 1) * K if not REVERSE else 0) + p_dv = dv + (i_bh + i_k * B * H) * s_v_h + i_v * BV + tl.arange(0, BV) + ((T - 1) * V if not REVERSE else 0) + if USE_GK: + p_gk = gk + i_bh * s_k_h + i_k * BK + tl.arange(0, BK) + ((T - 1) * K if not REVERSE else 0) + if USE_GV: + p_gv = gv + i_bh * s_v_h + i_v * BV + tl.arange(0, BV) + ((T - 1) * V if not REVERSE else 0) + + b_dh = tl.zeros([BK, BV], dtype=tl.float32) + for _ in range(T): + b_q = tl.load(p_q, mask=mask_bk, other=0).to(tl.float32) * scale + b_k = tl.load(p_k, mask=mask_bk, other=0).to(tl.float32) + b_v = tl.load(p_v, mask=mask_bv, other=0).to(tl.float32) + b_do = tl.load(p_do, mask=mask_bv, other=0).to(tl.float32) + b_dh += b_q[:, None] * b_do[None, :] + b_dk = tl.sum(b_dh * b_v[None, 
:], axis=1) + b_dv = tl.sum(b_dh * b_k[:, None], axis=0) + if USE_GK: + b_gk = tl.load(p_gk, mask=mask_bk, other=0).to(tl.float32) + b_dh *= b_gk[:, None] + if USE_GV: + b_gv = tl.load(p_gv, mask=mask_bv, other=0).to(tl.float32) + b_dh *= b_gv[None, :] + tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), mask=mask_bk) + tl.store(p_dv, b_dv.to(p_dv.dtype.element_ty), mask=mask_bv) + + p_q += K if REVERSE else -K + p_k += K if REVERSE else -K + p_v += V if REVERSE else -V + p_do += V if REVERSE else -V + p_dk += K if REVERSE else -K + p_dv += V if REVERSE else -V + if USE_GK: + p_gk += K if REVERSE else -K + if USE_GV: + p_gv += V if REVERSE else -V + + +class FusedRecurrentGatedABCFunction(torch.autograd.Function): + + @staticmethod + @contiguous + @custom_fwd + def forward(ctx, q, k, v, s, g, scale=None, initial_state=None, output_final_state=False, reverse=False): + B, H, T, K, V, M = *q.shape, v.shape[-1], s.shape[-1] + # default scale + if scale is None: + scale = K ** -0.5 + + BK, BV, BM = min(K, 32), min(V, 32), min(M, 32) + NK, NV, NM = triton.cdiv(K, BK), triton.cdiv(V, BV), triton.cdiv(M, BM) + num_stages = 1 + num_warps = 1 + + g = g.float().exp() + + final_state = (None, None) + if output_final_state: + final_state = (q.new_empty(B, H, K, M), q.new_empty(B, H, M, V)) + + ok = q.new_empty(NK, B, H, T, M, dtype=torch.float) + gk, gv = None, g + grid = (NM, NK, B * H) + fused_recurrent_gated_abc_fwd_kernel[grid]( + q, k, s, gk, gv, ok, initial_state[0], final_state[0], + k.stride(1), + s.stride(1), + scale=scale, + B=B, H=H, T=T, K=K, V=M, BK=BK, BV=BM, + USE_INITIAL_STATE=initial_state[0] is not None, + STORE_FINAL_STATE=final_state[0] is not None, + USE_GK=False, + USE_GV=True, + REVERSE=reverse, + num_warps=num_warps, + num_stages=num_stages + ) + ok = ok.sum(0) + + qv = ok.softmax(-1, dtype=torch.float) + ov = q.new_empty(NM, B, H, T, V, dtype=torch.float) + gk, gv = g, None + grid = (NV, NM, B * H) + fused_recurrent_gated_abc_fwd_kernel[grid]( + qv, s, v, 
gk, gv, ov, initial_state[1], final_state[1],
+            s.stride(1),
+            v.stride(1),
+            scale=1.,
+            B=B, H=H, T=T, K=M, V=V, BK=BM, BV=BV,
+            USE_INITIAL_STATE=initial_state[1] is not None,
+            STORE_FINAL_STATE=final_state[1] is not None,
+            USE_GK=True,
+            USE_GV=False,
+            REVERSE=reverse,
+            num_warps=num_warps,
+            num_stages=num_stages
+        )
+        ov = ov.sum(0)
+
+        ctx.save_for_backward(q, k, v, s, g, qv, *initial_state, ok)
+        ctx.scale = scale
+        ctx.reverse = reverse
+        # we do not need the gradient of the final state from the next chunk,
+        # similar to truncated BPTT
+        if final_state is not None:
+            final_state = tuple(i.detach() for i in final_state)
+        return ov.to(q.dtype), final_state
+
+    @staticmethod
+    @contiguous
+    @custom_bwd
+    def backward(ctx, do, dht=None):
+        q, k, v, s, g, qv, *initial_state, ok = ctx.saved_tensors
+        B, H, T, K, V, M = *q.shape, v.shape[-1], s.shape[-1]
+        scale = ctx.scale
+
+        BK, BV, BM = min(K, 32), min(V, 32), min(M, 32)
+        NK, NV, NM = triton.cdiv(K, BK), triton.cdiv(V, BV), triton.cdiv(M, BM)
+        num_stages = 1
+        num_warps = 1
+
+        dqv = q.new_empty(NV, B, H, T, M, dtype=torch.float)
+        dsv = q.new_empty(NV, B, H, T, M, dtype=torch.float)
+        dv = q.new_empty(NM, B, H, T, V, dtype=torch.float)
+        gk, gv = g, None
+        grid = (NV, NM, B * H)
+        fused_recurrent_gated_abc_bwd_kernel[grid](
+            qv, s, v, gk, gv, do, dqv, dsv, dv, initial_state[1],
+            s.stride(1),
+            v.stride(1),
+            scale=1.,
+            B=B, H=H, T=T, K=M, V=V, BK=BM, BV=BV,
+            num_warps=num_warps,
+            num_stages=num_stages,
+            USE_INITIAL_STATE=initial_state[1] is not None,
+            REVERSE=ctx.reverse,
+            USE_GK=gk is not None,
+            USE_GV=gv is not None
+        )
+        dqv = dqv.sum(0)
+        dsv = dsv.sum(0)
+        dv = dv.sum(0)
+        dgk = dqv * qv.float() - dsv * s.float()
+        dgk_cumsum = dgk.cumsum(-2)
+        dgk = dgk + dgk_cumsum[:, :, -1, None] - dgk_cumsum
+
+        dok = qv * (dqv - (qv * dqv).sum(-1, True))
+        dq = q.new_empty(NM, B, H, T, K, dtype=torch.float)
+        dk = q.new_empty(NM, B, H, T, K, dtype=torch.float)
+        dsk =
q.new_empty(NK, B, H, T, M, dtype=torch.float)
+        gk, gv = None, g
+        grid = (NM, NK, B * H)
+        fused_recurrent_gated_abc_bwd_kernel[grid](
+            q, k, s, gk, gv, dok, dq, dk, dsk, initial_state[0],
+            q.stride(1),
+            s.stride(1),
+            scale=scale,
+            B=B, H=H, T=T, K=K, V=M, BK=BK, BV=BM,
+            num_warps=num_warps,
+            num_stages=num_stages,
+            USE_INITIAL_STATE=initial_state[0] is not None,
+            REVERSE=ctx.reverse,
+            USE_GK=gk is not None,
+            USE_GV=gv is not None
+        )
+        dq = dq.sum(0)
+        dk = dk.sum(0)
+        dsk = dsk.sum(0)
+
+        dgv = dok.float() * ok.float() - dsk * s.float()
+        dgv_cumsum = dgv.cumsum(-2)
+        dgv = dgv + dgv_cumsum[:, :, -1, None] - dgv_cumsum
+
+        ds = dsk.add_(dsv)
+        dg = dgk.add_(dgv)
+
+        return dq.to(q), dk.to(k), dv.to(v), ds.to(s), dg.to(g), None, None, None, None
+
+
+def fused_recurrent_gated_abc(
+    q: torch.Tensor,
+    k: torch.Tensor,
+    v: torch.Tensor,
+    s: torch.Tensor,
+    g: Optional[torch.Tensor] = None,
+    scale: Optional[int] = None,
+    initial_state: Optional[Tuple[torch.Tensor]] = None,
+    output_final_state: Optional[bool] = False
+) -> Tuple[torch.Tensor, torch.Tensor]:
+    r"""
+    Args:
+        q (torch.Tensor):
+            queries of shape `(B, H, T, K)`
+        k (torch.Tensor):
+            keys of shape `(B, H, T, K)`
+        v (torch.Tensor):
+            values of shape `(B, H, T, V)`
+        s (torch.Tensor):
+            slot representations of shape `(B, H, T, M)`
+        g (torch.Tensor):
+            Forget gates of shape `(B, H, T, M)` applied to keys.
+            If not provided, this function is equivalent to vanilla ABC.
+        scale (Optional[int]):
+            Scale factor for attention scores.
+            If not provided, it will default to `1 / sqrt(K)`. Default: `None`.
+        initial_state (Optional[Tuple[torch.Tensor]]):
+            Initial state tuple having tensors of shape `(B, H, K, M)` and `(B, H, M, V)`. Default: `None`.
+        output_final_state (Optional[bool]):
+            Whether to output the final state tuple, having tensors of shape `(B, H, K, M)` and `(B, H, M, V)`. Default: `False`.
+ """ + if initial_state is not None: + initial_state = tuple(i.detach() for i in initial_state) + if g is None: + # TODO: this 3 steps took huge amount of time, ought to be optimized + z = s.float().logcumsumexp(2) + g = torch.cat((z[:, :, :1], z[:, :, :-1]), 2) - z + s = torch.exp(s - z).to(k.dtype) + if scale is None: + scale = q.shape[-1] ** -0.5 + ov, final_state = FusedRecurrentGatedABCFunction.apply(q, k, v, s, g, scale, initial_state, output_final_state) + return ov, final_state diff --git a/fla/ops/based/__init__.py b/fla/ops/based/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..5bcfcdc536a2a3eea00541e768207e633e8485fe --- /dev/null +++ b/fla/ops/based/__init__.py @@ -0,0 +1,9 @@ +# -*- coding: utf-8 -*- + +from .chunk_fuse import fused_chunk_based +from .parallel import parallel_based + +__all__ = [ + 'fused_chunk_based', + 'parallel_based' +] diff --git a/fla/ops/based/chunk_fuse.py b/fla/ops/based/chunk_fuse.py new file mode 100644 index 0000000000000000000000000000000000000000..2f10405048066de363133602964f1e6c36f53eae --- /dev/null +++ b/fla/ops/based/chunk_fuse.py @@ -0,0 +1,410 @@ +# -*- coding: utf-8 -*- + +import torch +import triton +import triton.language as tl +from torch.cuda.amp import custom_bwd, custom_fwd + +from fla.utils import contiguous + +# on-the-fly computation without materializing hidden statets into HBMs + + +@triton.jit +def fused_chunk_based_fwd_kernel( + # B: batch_size, H: n_heads, T: seq_len, D: d_head + q, # query [B, H, L, D_head_K] + k, # key [B, H, L, D_head_V] + v, # value [B, H, L, D_head_V] + o, # output [B, H, L, D_head_V] + z, # normalizer [B, H, L, 1] + s_qk_h, # stride size: L * D_head_K + s_qk_t, # stride size: D_head_K + s_qk_d, # stride size: 1 + s_vo_h, # stride size: L * D_head_V + s_vo_t, # stride size: D_head_V + s_vo_d, # stride size: 1 + B, # batch size + H, # n_heads + T, # seq_len + scale, # D_head_K ** -0.5 + BT: tl.constexpr, # BLOCK SIZE along the sequence dimension, 
a.k.a. chunk size + BK: tl.constexpr, # BLOCK SIZE along the K dimension + BV: tl.constexpr, # BLOCK SIZE along the V dimension + DK: tl.constexpr, # D_head_K + DV: tl.constexpr, # D_head_V +): + # indices + i_v, i_k, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + + o_i = tl.arange(0, BT) + + # [BT, BT] + m_s = o_i[:, None] >= o_i[None, :] + + # [BV], zero-order taylor expansion + b_h_0o = tl.zeros([BV], dtype=tl.float32) + # [BK, BV], first-order taylor expansion + b_h_1o = tl.zeros([BK, BV], dtype=tl.float32) + # [BK, BK, BV] second-order taylor expansion + b_h_2o = tl.zeros([BK*BK, BV], dtype=tl.float32) + + # make block pointers + p_q = tl.make_block_ptr(q + i_bh * s_qk_h, (T, DK), + (s_qk_t, s_qk_d), (0, i_k * BK), (BT, BK), (1, 0)) + p_k = tl.make_block_ptr(k + i_bh * s_qk_h, (DK, T), + (s_qk_d, s_qk_t), (i_k * BK, 0), (BK, BT), (0, 1)) + p_v = tl.make_block_ptr(v + i_bh * s_vo_h, (T, DV), + (s_vo_t, s_vo_d), (0, i_v * BV), (BT, BV), (1, 0)) + p_o = tl.make_block_ptr(o + (i_bh + i_k*B*H) * s_vo_h, (T, DV), + (s_vo_t, s_vo_d), (0, i_v * BV), (BT, BV), (1, 0)) + + p_z = z + (i_bh + i_k * B * H) * T + tl.arange(0, BT) + k_2o = tl.zeros([1, BK * BK], dtype=tl.float32) + k_1o = tl.zeros([1, BK], dtype=tl.float32) + k_0o = 0 + + for i in range(0, tl.cdiv(T, BT)): + # [BK, BT] + b_k = tl.load(p_k, boundary_check=(0, 1)) + # [BK*BK, BT] + b_k_2o = b_k[:, None, :] * b_k[None, :, :] + b_k_2o = tl.reshape(b_k_2o, [BK * BK, BT]).to(b_k.dtype) + # [BT, BV] + b_v = tl.load(p_v, boundary_check=(0, 1)) + # [BT, BK] + b_q = (tl.load(p_q, boundary_check=(0, 1)) * scale).to(b_k.dtype) + b_o = tl.zeros([BT, BV], dtype=tl.float32) + b_z = tl.zeros([BT], dtype=tl.float32) + + # interchunk + # zero-order + b_o += b_h_0o + b_z += k_0o + # first-order + b_o += tl.dot(b_q, b_h_1o.to(b_q.dtype), allow_tf32=False) + b_z += tl.sum(b_q * k_1o, axis=1) + # second-order + b_q_2o = b_q[:, :, None] * b_q[:, None, :] + b_q_2o = tl.reshape(b_q_2o, [BT, BK * BK]).to(b_k.dtype) + 
b_o += tl.dot(b_q_2o, b_h_2o.to(b_q_2o.dtype), allow_tf32=False) * 0.5 + b_z += tl.sum(b_q_2o * k_2o, axis=1) * 0.5 + + # update running statistics + k_1o += tl.sum(b_k, axis=1)[None, :] + k_2o += tl.sum(b_k_2o, axis=1)[None, :] + k_0o += BT + + # intrachunk + # [BT, BT] + b_s = tl.dot(b_q, b_k, allow_tf32=False) + b_s = 1 + b_s + 0.5 * b_s * b_s + b_s = tl.where(m_s, b_s, 0) + b_z += tl.sum(b_s, axis=1) + b_o += tl.dot(b_s.to(b_q.dtype), b_v, allow_tf32=False) + # [TB, BV] + tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0, 1)) + tl.store(p_z, b_z.to(p_z.dtype.element_ty), + mask=(i * BT + tl.arange(0, BT)) < T) + + # update hidden state + # [BK, BV] + b_h_2o = b_h_2o + tl.dot(b_k_2o.to(b_v.dtype), b_v, allow_tf32=False) + b_h_1o = b_h_1o + tl.dot(b_k, b_v, allow_tf32=False) + b_h_0o = b_h_0o + tl.sum(b_v, axis=0) + + p_q = tl.advance(p_q, (BT, 0)) + p_k = tl.advance(p_k, (0, BT)) + p_v = tl.advance(p_v, (BT, 0)) + p_o = tl.advance(p_o, (BT, 0)) + p_z += BT + + +# Similar to Algorithm1 of https://arxiv.org/abs/2006.16236 +@triton.jit +def fused_chunk_based_bwd_kernel( + # B: batch_size, H: n_heads, T: seq_len, D: d_head + # NV: number of split in the V dimension. NK: number of split in the K dimension + q, # query [B, H, L, D_head_K] + k, # key [B, H, L, D_head_V] + v, # value [B, H, L, D_head_V] + do, # gradient of output [B, H, L, D_head_V] + dz, # gradient of normalizer [B, H, L] + dq, # gradient of query [NV, B, H, L, D_head_K] + dk, # gradient of key [NV, B, H, L, D_head_K] + dv, # gradient of value [NK, B, H, L, D_head_V] + s_qk_h, # stride size: L * D_head_K + s_qk_t, # stride size: D_head_K + s_qk_d, # stride size: 1 + s_vo_h, # stride size: L * D_head_V + s_vo_t, # stride size: D_head_V + s_vo_d, # stride size: 1 + B, # batch_size + H, # n_heads + T, # seq_len + scale, # D_head_K ** -0.5 + BT: tl.constexpr, # BLOCK SIZE along the sequence dimension, a.k.a. 
chunk size + BK: tl.constexpr, # BLOCK SIZE along the K dimension + BV: tl.constexpr, # BLOCK SIZE along the V dimension + DK: tl.constexpr, # D_head_K + DV: tl.constexpr, # D_head_V +): + i_v, i_k, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + + o_i = tl.arange(0, BT) + m_s = o_i[:, None] >= o_i[None, :] + + # [BV], zero-order taylor expansion + # b_h_0o = tl.zeros([BV], dtype=tl.float32) + # [BK, BV], first-order taylor expansion + b_h_1o = tl.zeros([BV, BK], dtype=tl.float32) + # [BK, BK, BV] second-order taylor expansion + b_h_2o = tl.zeros([BV, BK*BK], dtype=tl.float32) + + k_1o = tl.zeros([1, BK], dtype=tl.float32) + k_2o = tl.zeros([1, BK * BK], dtype=tl.float32) + + for i in range(0, tl.cdiv(T, BT)): + p_q = tl.make_block_ptr( + q + i_bh * s_qk_h, (T, DK), (s_qk_t, s_qk_d), (i * BT, i_k * BK), (BT, BK), (1, 0)) + p_k = tl.make_block_ptr( + k + i_bh * s_qk_h, (T, DK), (s_qk_t, s_qk_d), (i * BT, i_k * BK), (BT, BK), (1, 0)) + p_v = tl.make_block_ptr( + v + i_bh * s_vo_h, (DV, T), (s_vo_d, s_vo_t), (i_v * BV, i * BT), (BV, BT), (0, 1)) + p_do = tl.make_block_ptr( + do + i_bh * s_vo_h, (T, DV), (s_vo_t, s_vo_d), (i * BT, i_v * BV), (BT, BV), (1, 0)) + p_dq = tl.make_block_ptr(dq + (i_bh + i_v*B*H) * s_qk_h, + (T, DK), (s_qk_t, s_qk_d), (i*BT, i_k*BK), (BT, BK), (1, 0)) + p_dz = dz + (i_bh) * T + tl.arange(0, BT) + i * BT + b_dq = tl.zeros([BT, BK], dtype=tl.float32) + + # load tensors + # [BT, BK] + b_dz = tl.load(p_dz, mask=(tl.arange(0, BT) + i * BT) < T) + b_q = tl.load(p_q, boundary_check=(0, 1)) + b_q = (b_q * scale).to(b_q.dtype) + b_do = tl.load(p_do, boundary_check=(0, 1)).to(b_q.dtype) + b_k = tl.load(p_k, boundary_check=(0, 1)) + # [BV, BT] + b_v = tl.load(p_v, boundary_check=(0, 1)) + + # inter-chunk + b_dq += tl.dot(b_do, (b_h_1o).to(b_do.dtype), allow_tf32=False) + if i_v == 0: + b_dq += b_dz[:, None] * k_1o + b_dq_2o = tl.dot(b_do, (b_h_2o).to(b_do.dtype), allow_tf32=False) * 0.5 + if i_v == 0: + b_dq_2o += (b_dz[:, None] * k_2o) * 
0.5 + b_dq_2o = tl.reshape(b_dq_2o, [BT, BK, BK]) + b_dq += tl.sum(b_dq_2o * b_q[:, :, None], axis=1) + b_dq += tl.sum(b_dq_2o * b_q[:, None, :], axis=2) + b_dq *= scale + + # intra-chunk + # [BT, BT] + b_ds = tl.dot(b_do, b_v, allow_tf32=False) + if i_v == 0: + b_ds += b_dz[:, None] + b_ds = tl.where(m_s, b_ds, 0) * scale + b_s = tl.dot(b_q, tl.trans(b_k), allow_tf32=False) + b_s = tl.where(m_s, b_s, 0) + b_dq += tl.dot((b_ds * (1 + b_s)).to(b_q.dtype), b_k, allow_tf32=False) + + # store + tl.store(p_dq, b_dq.to(p_dq.dtype.element_ty), boundary_check=(0, 1)) + + # update hidden state + # [BT, BK*BK] + b_k_2o = b_k[:, :, None] * b_k[:, None, :] + b_k_2o = tl.reshape(b_k_2o, [BT, BK * BK]).to(b_k.dtype) + # [BV, BK*BK] + b_h_2o = b_h_2o + tl.dot(b_v, b_k_2o.to(b_v.dtype), allow_tf32=False) + # [BV, BK] + b_h_1o = b_h_1o + tl.dot(b_v, b_k, allow_tf32=False) + + if i_v == 0: + # update running statistics + k_1o += tl.sum(b_k, axis=0)[None, :] + k_2o += tl.sum(b_k_2o, axis=0)[None, :] + + tl.debug_barrier() + b_h_1o = None + b_h_2o = None + + # [BK, BV], first-order taylor expansion + b_dh_1o = tl.zeros([BK, BV], dtype=tl.float32) + # [BK, BK, BV] second-order taylor expansion + b_dh_2o = tl.zeros([BK*BK, BV], dtype=tl.float32) + b_dh_0o = tl.zeros([BV], dtype=tl.float32) + m_s = tl.arange(0, BT)[:, None] <= tl.arange(0, BT)[None, :] + + dq_1o = tl.zeros([1, BK], dtype=tl.float32) + dq_2o = tl.zeros([BK * BK, 1], dtype=tl.float32) + + for i in range(tl.cdiv(T, BT) * BT - BT, -BT, -BT): + p_q = tl.make_block_ptr( + q + i_bh * s_qk_h, (DK, T), (s_qk_d, s_qk_t), (i_k * BK, i), (BK, BT), (0, 1)) + p_k = tl.make_block_ptr( + k + i_bh * s_qk_h, (T, DK), (s_qk_t, s_qk_d), (i, i_k * BK), (BT, BK), (1, 0)) + p_v = tl.make_block_ptr( + v + i_bh * s_vo_h, (T, DV), (s_vo_t, s_vo_d), (i, i_v * BV), (BT, BV), (1, 0)) + p_do = tl.make_block_ptr( + do + i_bh * s_vo_h, (T, DV), (s_vo_t, s_vo_d), (i, i_v * BV), (BT, BV), (1, 0)) + p_dk = tl.make_block_ptr(dk + (i_bh+i_v*B*H) * s_qk_h, 
(T, DK), + (s_qk_t, s_qk_d), (i, i_k*BK), (BT, BK), (1, 0)) + p_dv = tl.make_block_ptr(dv + (i_bh+i_k*B*H) * s_vo_h, (T, DV), + (s_vo_t, s_vo_d), (i, i_v*BV), (BT, BV), (1, 0)) + p_dz = dz + (i_bh) * T + tl.arange(0, BT) + i + + b_dk = tl.zeros([BT, BK], dtype=tl.float32) + b_dv = tl.zeros([BT, BV], dtype=tl.float32) + + b_dz = tl.load(p_dz, mask=(tl.arange(0, BT)+i) < T) + b_q = tl.load(p_q, boundary_check=(0, 1)) + b_k = tl.load(p_k, boundary_check=(0, 1)) + b_v = tl.load(p_v, boundary_check=(0, 1)) + b_do = tl.load(p_do, boundary_check=(0, 1)).to(b_q.dtype) + b_q = (b_q * scale).to(b_k.dtype) + + # intra chunk + b_ds = tl.dot(b_v, tl.trans(b_do), allow_tf32=False) + if i_v == 0: + b_ds += b_dz[None, :] + b_ds = tl.where(m_s, b_ds, 0) + b_s = tl.dot(b_k, b_q, allow_tf32=False) + b_s2 = 1 + b_s + 0.5 * b_s * b_s + b_s = tl.where(m_s, b_s, 0) + b_s2 = tl.where(m_s, b_s2, 0) + b_ds *= (1+b_s) + + b_dk += tl.dot(b_ds.to(b_k.dtype), tl.trans(b_q), allow_tf32=False) + b_dv += tl.dot(b_s2.to(b_do.dtype), b_do, allow_tf32=False) + + # inter chunk + b_k_2o = b_k[:, :, None] * b_k[:, None, :] + b_k_2o = tl.reshape(b_k_2o, [BT, BK * BK]).to(b_k.dtype) + + b_dv += tl.dot(b_k, b_dh_1o.to(b_k.dtype), allow_tf32=False) + b_dv += tl.dot(b_k_2o, b_dh_2o.to(b_k.dtype), allow_tf32=False) + b_dv += b_dh_0o + + b_dk += tl.dot(b_v, tl.trans(b_dh_1o).to(b_k.dtype), allow_tf32=False) + + if i_v == 0: + b_dk += dq_1o + + b_dk_2o = tl.dot(b_dh_2o.to(b_k.dtype), + tl.trans(b_v), allow_tf32=False) + if i_v == 0: + b_dk_2o += dq_2o + b_dk_2o = tl.reshape(b_dk_2o, [BK, BK, BT]) + b_k_fp32 = tl.trans(b_k.to(tl.float32)) + b_dk2 = tl.sum(b_dk_2o * b_k_fp32[:, None, :], axis=0) + b_dk2 += tl.sum(b_dk_2o * b_k_fp32[None, :, :], axis=1) + b_dk += tl.trans(b_dk2) + + # hidden state update + b_dh_0o += tl.sum(b_do, axis=0) + b_dh_1o = b_dh_1o + tl.dot(b_q, b_do, allow_tf32=False) + b_q_2o = b_q[None, :, :] * b_q[:, None, :] + b_q_2o = tl.reshape(b_q_2o, [BK * BK, BT]).to(b_k.dtype) + b_dh_2o = 
b_dh_2o + tl.dot(b_q_2o, b_do, allow_tf32=False) * 0.5 + + if i_v == 0: + dq_1o += (tl.sum(b_dz[None, :] * b_q, axis=1))[None, :] + dq_2o += (tl.sum(b_dz[None, :] * b_q_2o, axis=1) * 0.5)[:, None] + + tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), boundary_check=(0, 1)) + tl.store(p_dv, b_dv.to(p_dv.dtype.element_ty), boundary_check=(0, 1)) + + +class FusedChunkBasedFunction(torch.autograd.Function): + + @staticmethod + @contiguous + @custom_fwd + def forward(ctx, q, k, v, scale=1): + batch_size, n_heads, seq_len, d_head_qk = q.shape + # assert d_head_qk == 16, "currently we do not support feature dim other than 16" + d_head_v = v.shape[-1] + + scale = scale + BT = 16 + BK, BV = min(d_head_qk, 16), min(d_head_v, 32) + BK, BV = max(BK, 16), max(BV, 16) + NK, NV = triton.cdiv(d_head_qk, BK), triton.cdiv(d_head_v, BV) + + num_warps = 4 + + # the norm of o might explode, so we need to use float32 here + o = q.new_empty(NK, batch_size, n_heads, seq_len, + d_head_v, dtype=torch.float32) + z = q.new_empty(NK, batch_size, n_heads, seq_len, dtype=torch.float32) + + grid = (NV, NK, batch_size * n_heads) + fused_chunk_based_fwd_kernel[grid]( + q, k, v, o, z, + q.stride(1), q.stride(2), q.stride(3), + v.stride(1), v.stride(2), v.stride(3), + batch_size, n_heads, seq_len, scale, + BT=BT, DK=d_head_qk, DV=d_head_v, BK=BK, BV=BV, + num_warps=num_warps, + ) + o = o.sum(0) + z = z.sum(0) + ctx.save_for_backward(q, k, v) + ctx.scale = scale + return o.to(q.dtype), z.to(z.dtype) + + @staticmethod + @contiguous + @custom_bwd + def backward(ctx, do, dz): + q, k, v = ctx.saved_tensors + batch_size, n_heads, seq_len, d_head_qk = q.shape + d_head_v = v.shape[-1] + scale = ctx.scale + + BT = 16 + BK, BV = min(d_head_qk, 16), min(d_head_v, 32) + BK, BV = max(BK, 16), max(BV, 16) + NK, NV = triton.cdiv(d_head_qk, BK), triton.cdiv(d_head_v, BV) + num_stages = 1 + num_warps = 4 + + dq = q.new_empty(NV, batch_size, n_heads, seq_len, d_head_qk) + dk = q.new_empty(NV, batch_size, n_heads, 
seq_len, d_head_qk) + dv = q.new_empty(NK, batch_size, n_heads, seq_len, d_head_v) + grid = (NV, NK, batch_size * n_heads) + + fused_chunk_based_bwd_kernel[grid]( + q, k, v, do, dz, dq, dk, dv, + q.stride(1), q.stride(2), q.stride(3), + v.stride(1), v.stride(2), v.stride(3), + batch_size, n_heads, seq_len, scale, + BT=BT, DK=d_head_qk, DV=d_head_v, BK=BK, BV=BV, + num_warps=num_warps, + num_stages=num_stages + ) + dq = dq.sum(0) + dk = dk.sum(0) + dv = dv.sum(0) + return dq.to(q.dtype), dk.to(k.dtype), dv.to(v.dtype), None + + +triton_fused_chunk_based = FusedChunkBasedFunction.apply + + +def fused_chunk_based(q, k, v, use_scale=True, use_normalize=True): + assert q.shape[-1] <= 16, 'only support feature dimension up to 16.' + if use_scale: + scale = q.shape[-1] ** -0.5 + else: + scale = 1 + o, z = triton_fused_chunk_based(q, k, v, scale) + if use_normalize: + o = o / (z[..., None] + 1e-6) + else: + o = o + + return o.to(q.dtype) diff --git a/fla/ops/based/naive.py b/fla/ops/based/naive.py new file mode 100644 index 0000000000000000000000000000000000000000..fbfabbb2909f89e1d283b2658afa365f1514f156 --- /dev/null +++ b/fla/ops/based/naive.py @@ -0,0 +1,132 @@ +# -*- coding: utf-8 -*- + +import torch +from einops import rearrange + +from fla.ops.based.chunk_fuse import fused_chunk_based +from fla.ops.based.parallel import parallel_based + + +def naive_parallel_based(q, k, v, use_scale=True, use_norm=True): + if use_scale: + q = q * (q.shape[-1] ** -0.5) + attn = q @ k.transpose(-2, -1) + attn = 1 + attn + 1/2 * (attn ** 2) + attn.masked_fill_(~torch.tril(torch.ones( + q.shape[-2], q.shape[-2], dtype=torch.bool, device=q.device)), 0) + o = attn @ v + if use_norm: + z = attn.sum(-1) + return o / (z[..., None] + 1e-6) + else: + return o + + +def naive_chunk_based(q, k, v, chunk_size=256): + q = q * (q.shape[-1] ** -0.5) + + # compute normalizer. 
+    k_cumsum = torch.cumsum(k, dim=-2)
+    kk_cumsum = torch.cumsum(k.unsqueeze(-1) * k.unsqueeze(-2), dim=-3)
+    # first order
+    z = (q * k_cumsum).sum(-1)
+    # second order
+    z += (q.unsqueeze(-1) * q.unsqueeze(-2) * kk_cumsum).sum((-1, -2)) * 0.5
+    # zero-th order
+    z += (torch.arange(0, q.shape[-2]).to(z.device) * 1.0 + 1.0)[None, None, :]
+
+    # compute o
+    # constant term
+    _o = v.cumsum(-2)
+
+    q = rearrange(q, 'b h (n c) d -> b h n c d', c=chunk_size)
+
+    k = rearrange(k, 'b h (n c) d -> b h n c d', c=chunk_size)
+    v = rearrange(v, 'b h (n c) d -> b h n c d', c=chunk_size)
+
+    intra_chunk_attn = q @ k.transpose(-2, -1)
+    intra_chunk_attn = intra_chunk_attn + 1/2 * (intra_chunk_attn ** 2)
+    intra_chunk_attn.masked_fill_(
+        ~torch.tril(
+            torch.ones(chunk_size, chunk_size,
+                       dtype=torch.bool, device=q.device),
+        ), 0)
+    o = intra_chunk_attn @ v
+
+    # quadratic term
+    kv = torch.einsum(
+        'b h n c x, b h n c y, b h n c z -> b h n x y z', k, k, v)
+    kv = kv.cumsum(2)
+    kv = torch.cat([torch.zeros_like(kv[:, :, :1]), kv[:, :, :-1]], dim=2)
+
+    o += 0.5 * torch.einsum('b h n x y z, b h n c x, b h n c y -> b h n c z', kv, q, q)
+
+    # linear term
+    kv = torch.einsum('b h n c x, b h n c y -> b h n x y', k, v)
+    kv = kv.cumsum(2)
+    kv = torch.cat([torch.zeros_like(kv[:, :, :1]), kv[:, :, :-1]], dim=2)
+    o += torch.einsum('b h n x y, b h n c x -> b h n c y', kv, q)
+
+    o = rearrange(o, 'b h n c d -> b h (n c) d')
+    o = o + _o
+    return o / (z[..., None] + 1e-6)
+
+
+if __name__ == "__main__":
+    B = 4
+    H = 4
+    L = 128
+    # D = 15
+    dtype = torch.float32
+    q = (torch.randn(B, H, L, 16).cuda().to(dtype)).requires_grad_(True)
+    k = (torch.randn(B, H, L, 16).cuda().to(dtype)).requires_grad_(True)
+    v = torch.randn(B, H, L, 128).cuda().to(dtype).requires_grad_(True)
+
+    do = torch.randn_like(v).cuda()
+    ref = naive_parallel_based(q, k, v, True, True)
+    ref.backward(do, retain_graph=True)
+    ref_dq, q.grad = q.grad.clone(), None
+    ref_dk, k.grad = k.grad.clone(), None
+    ref_dv, v.grad =
v.grad.clone(), None + + # tri = naive_chunk_based(q, k, v) + # tri.backward(do, retain_graph=True) + # tri_dq, q.grad = q.grad.clone(), None + # tri_dk, k.grad = k.grad.clone(), None + # tri_dv, v.grad = v.grad.clone(), None + + # assert ref.allclose(tri, 0, 1e-4), breakpoint() + # assert ref_dq.allclose(tri_dq, 0, 1e-4), breakpoint() + # assert ref_dk.allclose(tri_dk, 0, 1e-4), breakpoint() + # assert ref_dv.allclose(tri_dv, 0, 1e-4), breakpoint() + + tri = fused_chunk_based(q, k, v, True, True) + tri.backward(do, retain_graph=True) + tri_dq, q.grad = q.grad.clone(), None + tri_dk, k.grad = k.grad.clone(), None + tri_dv, v.grad = v.grad.clone(), None + print((ref-tri).abs().max()) + print((ref_dq-tri_dq).abs().max()) + print((ref_dk-tri_dk).abs().max()) + print((ref_dv-tri_dv).abs().max()) + + # assert ref.allclose(tri, 0, 1e-4), breakpoint() + # assert ref_dq.allclose(tri_dq, 0, 1e-4), breakpoint() + # assert ref_dk.allclose(tri_dk, 0, 1e-4), breakpoint() + # assert ref_dv.allclose(tri_dv, 0, 1e-4), breakpoint() + + tri = parallel_based(q, k, v, True, True) + tri.backward(do, retain_graph=True) + tri_dq, q.grad = q.grad.clone(), None + tri_dk, k.grad = k.grad.clone(), None + tri_dv, v.grad = v.grad.clone(), None + + print((ref-tri).abs().max()) + print((ref_dq-tri_dq).abs().max()) + print((ref_dk-tri_dk).abs().max()) + print((ref_dv-tri_dv).abs().max()) + + # assert ref.allclose(tri, 0, 1e-4), breakpoint() + # assert ref_dq.allclose(tri_dq, 0, 1e-4), breakpoint() + # assert ref_dk.allclose(tri_dk, 0, 1e-4), breakpoint() + # assert ref_dv.allclose(tri_dv, 0, 1e-4), breakpoint() diff --git a/fla/ops/based/parallel.py b/fla/ops/based/parallel.py new file mode 100644 index 0000000000000000000000000000000000000000..f4e3fad76337035682cb3fa0d7bbb9fd94abec89 --- /dev/null +++ b/fla/ops/based/parallel.py @@ -0,0 +1,388 @@ + +# -*- coding: utf-8 -*- + +import torch +import triton +import triton.language as tl +from torch.cuda.amp import custom_bwd, custom_fwd + +from 
fla.utils import contiguous + +# Based: An Educational and Effective Sequence Mixer +# https://hazyresearch.stanford.edu/blog/2023-12-11-zoology2-based + + +@triton.jit +def parallel_based_fwd_kernel( + # B: batch_size, H: n_heads, T: seq_len, D: d_head + q, # query [B, H, L, D_head_K] + k, # key [B, H, L, D_head_V] + v, # value [B, H, L, D_head_V] + o, # output [B, H, L, D_head_V] + z, # normalizer [B, H, L] + s_qk_h, # stride size: L * D_head_K + s_qk_t, # stride size: D_head_K + s_qk_d, # stride size: 1 + s_vo_h, # stride size: L * D_head_V + s_vo_t, # stride size: D_head_V + s_vo_d, # stride size: 1 + B, # batch size + H, # n_heads + T, # seq_len + scale, # D_head_K ** -0.5 + BTL: tl.constexpr, # BLOCK SIZE along the sequence dimension for Q + BTS: tl.constexpr, # BLOCK SIZE along the sequence dimension for K/V + BK: tl.constexpr, # BLOCK SIZE along the K dimension + BV: tl.constexpr, # BLOCK SIZE along the V dimension + DK: tl.constexpr, # D_head_K + DV: tl.constexpr, # D_head_V +): + # i_c: chunk index. 
used for sequence parallelism + i_kv, i_c, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + NV = tl.cdiv(DV, BV) + i_k = i_kv // (NV) + i_v = i_kv % (NV) + + p_q = tl.make_block_ptr(q + i_bh * s_qk_h, (T, DK), + (s_qk_t, s_qk_d), (i_c * BTL, i_k * BK), (BTL, BK), (1, 0)) + p_k = tl.make_block_ptr(k + i_bh * s_qk_h, (DK, T), + (s_qk_d, s_qk_t), (i_k * BK, 0), (BK, BTS), (0, 1)) + p_v = tl.make_block_ptr(v + i_bh * s_vo_h, (T, DV), + (s_vo_t, s_vo_d), (0, i_v * BV), (BTS, BV), (1, 0)) + + # [BQ, BD] block Q, in the shared memory throughout the whole kernel + b_q = tl.load(p_q, boundary_check=(0, 1)) + b_q = (b_q * scale).to(b_q.dtype) + b_o = tl.zeros([BTL, BV], dtype=tl.float32) + b_z = tl.zeros([BTL], dtype=tl.float32) + + # Q block and K block have no overlap + # no need for mask, thereby saving flops + for _ in range(0, i_c * BTL, BTS): + # [BK, BTS] + b_k = tl.load(p_k, boundary_check=(0, 1)) + + # [BTS, BV] + b_v = tl.load(p_v, boundary_check=(0, 1)) + # [BTL, BTS] + b_s = tl.dot(b_q, (b_k), allow_tf32=False) + b_s = 1 + b_s + 0.5 * b_s * b_s + b_z += tl.sum(b_s, axis=1) + + # [BQ, BD] + b_o = b_o + tl.dot(b_s.to(b_v.dtype), b_v, allow_tf32=False) + p_k = tl.advance(p_k, (0, BTS)) + p_v = tl.advance(p_v, (BTS, 0)) + + # # rescale interchunk output + tl.debug_barrier() + o_q = tl.arange(0, BTL) + # # sync threads, easy for compiler to optimize + # tl.debug_barrier() + + o_k = tl.arange(0, BTS) + p_k = tl.make_block_ptr(k + i_bh * s_qk_h, (DK, T), + (s_qk_d, s_qk_t), (i_k * BK, i_c * BTL), (BK, BTS), (0, 1)) + p_v = tl.make_block_ptr(v + i_bh * s_vo_h, (T, DV), + (s_vo_t, s_vo_d), (i_c * BTL, i_v * BV), (BTS, BV), (1, 0)) + # Q block and K block have overlap. 
masks required + for _ in range(i_c * BTL, (i_c + 1) * BTL, BTS): + # [BK, BTS] + b_k = tl.load(p_k, boundary_check=(0, 1)) + # [BTS, BV] + b_v = tl.load(p_v, boundary_check=(0, 1)) + # [BTL, BTS] + m_s = o_q[:, None] >= o_k[None, :] + b_s = tl.dot(b_q, b_k, allow_tf32=False) + b_s = 1 + b_s + 0.5 * b_s * b_s + b_s = tl.where(m_s, b_s, 0) + b_z += tl.sum(b_s, axis=1) + # [BTL, BV] + b_o += tl.dot(b_s.to(b_q.dtype), b_v, allow_tf32=False) + + p_k = tl.advance(p_k, (0, BTS)) + p_v = tl.advance(p_v, (BTS, 0)) + o_k += BTS + + p_o = tl.make_block_ptr(o + (i_bh + B * H * i_k) * s_vo_h, (T, DV), + (s_vo_t, s_vo_d), (i_c*BTL, i_v*BV), (BTL, BV), (1, 0)) + p_z = z + (i_bh + B * H * i_k) * T + i_c * BTL + tl.arange(0, BTL) + tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0, 1)) + tl.store(p_z, b_z.to(p_z.dtype.element_ty), + mask=((i_c * BTL + tl.arange(0, BTL)) < T)) + + +@triton.jit +def _parallel_based_bwd_dq( + i_bh, i_c, i_k, i_v, i_h, + q, k, v, do, dz, dq, s_qk_h, s_qk_t, s_qk_d, s_vo_h, + s_vo_t, s_vo_d, B, H, T, scale, + BTL: tl.constexpr, BTS: tl.constexpr, BK: tl.constexpr, BV: tl.constexpr, + DK: tl.constexpr, DV: tl.constexpr, +): + p_do = tl.make_block_ptr(do + i_bh * s_vo_h, (T, DV), (s_vo_t, s_vo_d), + (i_c * BTL, i_v * BV), (BTL, BV), (1, 0)) + p_q = tl.make_block_ptr(q + (i_bh) * s_qk_h, (T, DK), + (s_qk_t, s_qk_d), (i_c*BTL, i_k*BK), (BTL, BK), (1, 0)) + b_q = tl.load(p_q, boundary_check=(0, 1)) + b_do = tl.load(p_do, boundary_check=(0, 1)).to(b_q.dtype) + b_q = (b_q * scale).to(b_q.dtype) + b_dq = tl.zeros([BTL, BK], dtype=tl.float32) + p_k = tl.make_block_ptr(k + i_bh * s_qk_h, (T, DK), + (s_qk_t, s_qk_d), (0, i_k * BK), (BTS, BK), (1, 0)) + p_v = tl.make_block_ptr(v + i_bh * s_vo_h, (DV, T), + (s_vo_d, s_vo_t), (i_v * BV, 0), (BV, BTS), (0, 1)) + p_dz = dz + i_bh * T + i_c * BTL + tl.arange(0, BTL) + b_dz = tl.load(p_dz, mask=(i_c * BTL + tl.arange(0, BTL)) < T) + + for _ in range(0, i_c * BTL, BTS): + # [BTS, BK] + b_k = tl.load(p_k, 
boundary_check=(0, 1)) + # [BV, BTS] + b_v = tl.load(p_v, boundary_check=(0, 1)) + # [BTL, BTS] + b_ds = tl.dot(b_do, b_v, allow_tf32=False) + if i_v == 0: + b_ds += b_dz[:, None] + else: + b_ds = b_ds + b_s = tl.dot(b_q, tl.trans(b_k), allow_tf32=False) + # [BQ, BD] + b_dq += tl.dot((b_ds * (1 + b_s)).to(b_v.dtype), b_k, allow_tf32=False) + p_k = tl.advance(p_k, (BTS, 0)) + p_v = tl.advance(p_v, (0, BTS)) + + b_dq *= scale + o_q = tl.arange(0, BTL) + o_k = tl.arange(0, BTS) + p_k = tl.make_block_ptr(k + i_bh * s_qk_h, (T, DK), + (s_qk_t, s_qk_d), (i_c * BTL, i_k * BK), (BTS, BK), (1, 0)) + p_v = tl.make_block_ptr(v + i_bh * s_vo_h, (DV, T), + (s_vo_d, s_vo_t), (i_v * BV, i_c * BTL), (BV, BTS), (0, 1)) + # Q block and K block have overlap. masks required + for _ in range(i_c * BTL, (i_c + 1) * BTL, BTS): + # [BTS, BK] + b_k = tl.load(p_k, boundary_check=(0, 1)) + # [BV, BTS] + b_v = tl.load(p_v, boundary_check=(0, 1)) + # [BTL, BTS] + m_s = o_q[:, None] >= o_k[None, :] + b_ds = tl.dot(b_do, b_v, allow_tf32=False) + if i_v == 0: + b_ds += b_dz[:, None] + else: + b_ds = b_ds + b_ds = tl.where(m_s, b_ds, 0) * scale + b_s = tl.dot(b_q, tl.trans(b_k), allow_tf32=False) + b_s = tl.where(m_s, b_s, 0) + # [BTL, BK] + b_dq += tl.dot((b_ds + b_ds * b_s).to(b_k.dtype), + b_k, allow_tf32=False) + p_k = tl.advance(p_k, (BTS, 0)) + p_v = tl.advance(p_v, (0, BTS)) + o_k += BTS + p_dq = tl.make_block_ptr(dq + (i_bh + B * H * i_v) * s_qk_h, (T, DK), + (s_qk_t, s_qk_d), (i_c*BTL, i_k*BK), (BTL, BK), (1, 0)) + tl.store(p_dq, b_dq.to(p_dq.dtype.element_ty), boundary_check=(0, 1)) + return + + +@triton.jit +def _parallel_based_bwd_dkv( + i_bh, i_c, i_k, i_v, i_h, + q, k, v, do, dz, dk, dv, s_qk_h, s_qk_t, s_qk_d, s_vo_h, + s_vo_t, s_vo_d, B, H, T, scale, + BTL: tl.constexpr, BTS: tl.constexpr, BK: tl.constexpr, BV: tl.constexpr, + DK: tl.constexpr, DV: tl.constexpr, +): + # compute dk dv + p_k = tl.make_block_ptr(k + i_bh * s_qk_h, (T, DK), (s_qk_t, s_qk_d), + (i_c * BTL, i_k * BK), 
(BTL, BK), (1, 0)) + p_v = tl.make_block_ptr(v + i_bh * s_vo_h, (T, DV), (s_vo_t, s_vo_d), + (i_c * BTL, i_v * BV), (BTL, BV), (1, 0)) + b_k, b_v = tl.load(p_k, boundary_check=(0, 1)), tl.load( + p_v, boundary_check=(0, 1)) + b_dk, b_dv = tl.zeros([BTL, BK], dtype=tl.float32), tl.zeros( + [BTL, BV], dtype=tl.float32) + + for i in range((tl.cdiv(T, BTS) * BTS)-BTS, (i_c + 1) * BTL - BTS, -BTS): + p_q = tl.make_block_ptr( + q + i_bh * s_qk_h, (DK, T), (s_qk_d, s_qk_t), (i_k * BK, i), (BK, BTS), (0, 1)) + p_do = tl.make_block_ptr( + do + i_bh * s_vo_h, (DV, T), (s_vo_d, s_vo_t), (i_v * BV, i), (BV, BTS), (0, 1)) + p_dz = dz + i_bh * T + i + tl.arange(0, BTS) + b_q = tl.load(p_q, boundary_check=(0, 1)) # [BK, BTS] + b_do = tl.load(p_do, boundary_check=(0, 1)).to(b_q.dtype) # [BV, BTS] + b_dz = tl.load(p_dz, mask=(i + tl.arange(0, BTS)) < T) + b_s = tl.dot(b_k.to(b_q.dtype), b_q, allow_tf32=False) * \ + scale # [BTL, BTS] + b_s2 = 1 + b_s + 0.5 * b_s * b_s + b_dv += tl.dot(b_s2.to(b_q.dtype), tl.trans(b_do), allow_tf32=False) + b_ds = tl.dot(b_v, b_do, allow_tf32=False) * scale + if i_v == 0: + b_ds += b_dz[None, :] * scale + else: + b_ds = b_ds + b_dk += tl.dot((b_ds + b_ds * b_s).to(b_q.dtype), + tl.trans(b_q), allow_tf32=False) + + tl.debug_barrier() + o_q, o_k = tl.arange(0, BTS), tl.arange(0, BTL) + for i in range(i_c*BTL, (i_c+1)*BTL, BTS): + p_q = tl.make_block_ptr( + q + i_bh * s_qk_h, (DK, T), (s_qk_d, s_qk_t), (i_k * BK, i), (BK, BTS), (0, 1)) + p_do = tl.make_block_ptr( + do + i_bh * s_vo_h, (DV, T), (s_vo_d, s_vo_t), (i_v * BV, i), (BV, BTS), (0, 1)) + p_dz = dz + i_bh * T + i + tl.arange(0, BTS) + b_q = tl.load(p_q, boundary_check=(0, 1)) # [BD, BQ] + b_do = tl.load(p_do, boundary_check=(0, 1)).to(b_q.dtype) + b_dz = tl.load(p_dz, mask=(i + tl.arange(0, BTS)) < T) + # [BK, BQ] + m_s = o_k[:, None] <= o_q[None, :] + b_s = tl.dot(b_k, b_q, allow_tf32=False) * scale + b_s2 = 1 + b_s + 0.5 * b_s * b_s + b_s = tl.where(m_s, b_s, 0) + b_s2 = tl.where(m_s, b_s2, 
0) + + b_ds = tl.dot(b_v, b_do, allow_tf32=False) + if i_v == 0: + b_ds += b_dz[None, :] + else: + b_ds = b_ds + b_ds = tl.where(m_s, b_ds, 0) * scale + # [BK, BD] + b_dv += tl.dot(b_s2.to(b_q.dtype), tl.trans(b_do), allow_tf32=False) + b_dk += tl.dot((b_ds + b_ds * b_s).to(b_q.dtype), + tl.trans(b_q), allow_tf32=False) + o_q += BTS + + p_dk = tl.make_block_ptr(dk + (i_bh + B * H * i_v) * s_qk_h, + (T, DK), (s_qk_t, s_qk_d), (i_c*BTL, i_k*BK), (BTL, BK), (1, 0)) + p_dv = tl.make_block_ptr(dv + (i_bh + B * H * i_k) * s_vo_h, + (T, DV), (s_vo_t, s_vo_d), (i_c*BTL, i_v*BV), (BTL, BV), (1, 0)) + tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), boundary_check=(0, 1)) + tl.store(p_dv, b_dv.to(p_dv.dtype.element_ty), boundary_check=(0, 1)) + return + + +@triton.jit +def parallel_based_bwd_kernel( + q, k, v, do, dz, dq, dk, dv, s_qk_h, s_qk_t, s_qk_d, s_vo_h, + s_vo_t, s_vo_d, B, H, T, scale, + BTL: tl.constexpr, BTS: tl.constexpr, BK: tl.constexpr, BV: tl.constexpr, + DK: tl.constexpr, DV: tl.constexpr, +): + i_kv, i_c, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + NV = tl.cdiv(DV, BV) + i_k = i_kv // (NV) + i_v = i_kv % (NV) + i_h = i_bh % H + _parallel_based_bwd_dq( + i_bh, i_c, i_k, i_v, i_h, + q, k, v, do, dz, dq, s_qk_h, s_qk_t, s_qk_d, s_vo_h, + s_vo_t, s_vo_d, B, H, T, scale, BTL=BTL, BTS=BTS, BK=BK, BV=BV, DK=DK, DV=DV + ) + tl.debug_barrier() + _parallel_based_bwd_dkv( + i_bh, i_c, i_k, i_v, i_h, + q, k, v, do, dz, dk, dv, s_qk_h, s_qk_t, s_qk_d, s_vo_h, + s_vo_t, s_vo_d, B, H, T, scale, BTL, BTS, BK, BV, DK, DV + ) + + +class ParallelBasedFunction(torch.autograd.Function): + @staticmethod + @contiguous + @custom_fwd + def forward(ctx, q, k, v, scale): + BTL, BTS = 128, 32 + assert BTL % BTS == 0 + # assert q.shape[-1] % 16 == 0 + BK = min(128, triton.next_power_of_2(k.shape[-1])) + BV = min(128, triton.next_power_of_2(v.shape[-1])) + BK, BV = max(BK, 16), max(BV, 16) + batch_size, n_heads, seq_len, d_head_qk = q.shape + d_head_v = v.shape[-1] + 
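+ # Launch geometry: grid axis 0 enumerates (K-block, V-block) pairs, axis 1 the BTL-sized sequence chunks, axis 2 batch*head; the NK partial outputs are accumulated into a leading dim and reduced afterwards via o.sum(0).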
num_stages = 2 + num_warps = 4 + NK = triton.cdiv(d_head_qk, BK) + NV = triton.cdiv(d_head_v, BV) + grid = (NK * NV, triton.cdiv(seq_len, BTL), batch_size * n_heads) + + assert NK == 1, "will encounter some synchronization issue if not." + + o = torch.empty(NK, batch_size, n_heads, seq_len, + d_head_v, device=q.device) + z = torch.empty(NK, batch_size, n_heads, seq_len, + device=q.device) + parallel_based_fwd_kernel[grid]( + q, k, v, o, z, + q.stride(1), q.stride(2), q.stride(3), + v.stride(1), v.stride(2), v.stride(3), + batch_size, n_heads, seq_len, scale, + BTL=BTL, BTS=BTS, BK=BK, BV=BV, DK=d_head_qk, DV=d_head_v, + num_warps=num_warps, + num_stages=num_stages + ) + ctx.save_for_backward(q, k, v) + ctx.scale = scale + return o.sum(0).to(q.dtype), z.sum(0).to(q.dtype) + + @staticmethod + @custom_bwd + @contiguous + def backward(ctx, do, dz): + q, k, v = ctx.saved_tensors + scale = ctx.scale + BTL, BTS = 64, 32 + assert BTL % BTS == 0 + BK = min(128, triton.next_power_of_2(k.shape[-1])) + BV = min(128, triton.next_power_of_2(v.shape[-1])) + BK, BV = max(BK, 16), max(BV, 16) + batch_size, n_heads, seq_len, d_head_qk = q.shape + d_head_v = v.shape[-1] + num_stages = 2 + num_warps = 4 + NK = triton.cdiv(d_head_qk, BK) + NV = triton.cdiv(d_head_v, BV) + grid = (NK * NV, triton.cdiv(seq_len, BTL), batch_size * n_heads) + + assert NK == 1, "will encounter some synchronization issue if not" + + dq = torch.empty(NV, batch_size, n_heads, seq_len, + d_head_qk, dtype=q.dtype, device=q.device) + dk = torch.empty(NV, batch_size, n_heads, seq_len, + d_head_qk, dtype=q.dtype, device=q.device) + dv = torch.empty(NK, batch_size, n_heads, seq_len, + d_head_v, dtype=q.dtype, device=q.device) + + parallel_based_bwd_kernel[grid]( + q, k, v, do, dz, dq, dk, dv, + q.stride(1), q.stride(2), q.stride(3), + v.stride(1), v.stride(2), v.stride(3), + batch_size, n_heads, seq_len, scale, + BTL=BTL, BTS=BTS, BK=BK, BV=BV, DK=d_head_qk, DV=d_head_v, + num_warps=num_warps, + 
num_stages=num_stages + ) + + return dq.sum(0).to(q.dtype), dk.sum(0).to(k.dtype), dv.sum(0).to(v.dtype), None + + +triton_parallel_based = ParallelBasedFunction.apply + + +def parallel_based(q, k, v, use_scale=True, use_normalize=True, return_both=False): + assert q.shape[-1] <= 128, "only support feature dim up to 128" + if use_scale: + scale = q.shape[-1] ** -0.5 + else: + scale = 1 + o, z = triton_parallel_based(q, k, v, scale) + if return_both: + return o, z + if use_normalize: + o = o / (z[..., None] + 1e-6) + else: + o = o + return o.to(q.dtype) diff --git a/fla/ops/delta_rule/README.md b/fla/ops/delta_rule/README.md new file mode 100644 index 0000000000000000000000000000000000000000..1ab2d485a9552d70238c1f68288c72c62f9e0ef2 --- /dev/null +++ b/fla/ops/delta_rule/README.md @@ -0,0 +1,4 @@ +- Delta Rule + +The implementation of delta rule described in https://arxiv.org/abs/2102.11174 + diff --git a/fla/ops/delta_rule/__init__.py b/fla/ops/delta_rule/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..b0848b3e9a3e753905b9be2464563a1a9c5f1af2 --- /dev/null +++ b/fla/ops/delta_rule/__init__.py @@ -0,0 +1,11 @@ +# -*- coding: utf-8 -*- + +from .chunk_fuse import fused_chunk_delta_rule +from .recurrent_fuse import fused_recurrent_linear_attn_delta_rule +from .chunk import chunk_delta_rule + +__all__ = [ + 'fused_chunk_delta_rule', + 'fused_recurrent_linear_attn_delta_rule', + 'chunk_delta_rule' +] diff --git a/fla/ops/delta_rule/chunk.py b/fla/ops/delta_rule/chunk.py new file mode 100644 index 0000000000000000000000000000000000000000..4f1d0cd7b113042bfc5cd0f43ae77593eb83816b --- /dev/null +++ b/fla/ops/delta_rule/chunk.py @@ -0,0 +1,544 @@ +# -*- coding: utf-8 -*- +# Copyright (c) 2023, Yu Zhang, Songlin Yang + +import torch +import triton +import triton.language as tl +from fla.ops.utils import contiguous +from torch.cuda.amp import custom_bwd, custom_fwd +from fla.ops.delta_rule.wy_fast import fwd_recompute_w_u, fwd_prepare_wy_repr, 
bwd_prepare_wy_repr +from fla.ops.delta_rule.chunk_fuse import fused_chunk_delta_rule_fwd, fused_chunk_delta_rule_bwd +# from fla.ops.delta_rule.utils import bwd_prepare_wy_repr + + +@triton.autotune( + configs=[ + triton.Config({}, num_warps=1), + triton.Config({}, num_warps=2), + triton.Config({}, num_warps=4), + triton.Config({}, num_warps=8), + triton.Config({}, num_warps=16), + triton.Config({}, num_warps=32), + ], + key=["BT", "BK", "BV"], +) +@triton.jit +def fwd_prepare_dv_kernel( + q, + k, + do, + dv, + s_qk_h, + s_qk_t, + s_qk_d, + s_vo_h, + s_vo_t, + s_vo_d, + T, + K, + V, + scale, + BT: tl.constexpr, + BK: tl.constexpr, + BV: tl.constexpr +): + i_t, i_bh = tl.program_id(0), tl.program_id(1) + + b_A = tl.zeros([BT, BT], dtype=tl.float32) + + for i_k in range(tl.cdiv(K, BK)): + p_q = tl.make_block_ptr(q + i_bh * s_qk_h, (K, T), (s_qk_d, s_qk_t), (i_k * BK, i_t * BT), (BK, BT), (0, 1)) + p_k = tl.make_block_ptr(k + i_bh * s_qk_h, (T, K), (s_qk_t, s_qk_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + b_k = tl.load(p_k, boundary_check=(0, 1)) + b_q = tl.load(p_q, boundary_check=(0, 1)) + b_q = (b_q * scale).to(b_k.dtype) + b_A += tl.dot(b_k, b_q, allow_tf32=False) + + b_A = tl.where(tl.arange(0, BT)[:, None] <= tl.arange(0, BT)[None, :], b_A , 0).to(do.dtype.element_ty) + + for i_v in range(tl.cdiv(V, BV)): + p_do = tl.make_block_ptr(do + i_bh * s_vo_h, (T, V), (s_vo_t, s_vo_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + b_do = tl.load(p_do, boundary_check=(0, 1)) + p_dv = tl.make_block_ptr(dv + i_bh * s_vo_h, (T, V), (s_vo_t, s_vo_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + b_dv = tl.dot(b_A, b_do, allow_tf32=False) + tl.store(p_dv, b_dv.to(p_dv.dtype.element_ty), boundary_check=(0, 1)) + + +def fwd_prepare_dv(q, k, do, BT): + dv = torch.empty_like(do) + B, H, T, K, V = *k.shape, do.shape[-1] + NT = triton.cdiv(T, BT) + BK = min(triton.next_power_of_2(K), 64) + BV = min(triton.next_power_of_2(V), 64) + fwd_prepare_dv_kernel[(NT, B*H)]( + q, k, do, dv, + 
k.stride(1), k.stride(2), k.stride(3), + do.stride(1), do.stride(2), do.stride(3), + T, K, V, K**-0.5, BT, BK, BV + ) + return dv + + +@triton.autotune( + configs=[ + triton.Config({}, num_warps=1), + triton.Config({}, num_warps=2), + triton.Config({}, num_warps=4), + triton.Config({}, num_warps=8), + triton.Config({}, num_warps=16), + triton.Config({}, num_warps=32), + ], + key=["BT", "BK", "BV"], +) +@triton.jit +def chunk_delta_rule_fwd_kernel_h( + k, + v, + d, + v_new, + h, + initial_state, # initial state of the chunk [B, H, D_head_K, D_head_V] + final_state, # final state of the chunk [B, H, D_head_K, D_head_V] + s_qk_h, + s_qk_t, + s_qk_d, + s_vo_h, + s_vo_t, + s_vo_d, + s_h_h, + s_h_t, + H: tl.constexpr, + T: tl.constexpr, + K: tl.constexpr, + V: tl.constexpr, + BT: tl.constexpr, + BC: tl.constexpr, + BK: tl.constexpr, + BV: tl.constexpr, + NT: tl.constexpr, + USE_INITIAL_STATE: tl.constexpr, + STORE_FINAL_STATE: tl.constexpr +): + i_k, i_v, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + + # [BK, BV] + b_h = tl.zeros([BK, BV], dtype=tl.float32) + + if USE_INITIAL_STATE: + p_h0 = tl.make_block_ptr(initial_state + i_bh * K * V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + b_h = tl.load(p_h0, boundary_check=(0, 1)).to(tl.float32) + + for i_t in range(NT): + p_h = tl.make_block_ptr(h + i_bh * s_h_h + i_t * K * V, (K, V), (s_h_t, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + tl.store(p_h, b_h.to(p_h.dtype.element_ty), boundary_check=(0, 1)) + b_h_cumsum = tl.zeros([BK, BV], dtype=tl.float32) + # Since we need to keep all of DK in SRAM, we face a severe SRAM memory burden.
Subchunking alleviates this burden. + for i_c in range(tl.cdiv(BT, BC)): + p_k = tl.make_block_ptr(k + i_bh * s_qk_h, (K, T), (s_qk_d, s_qk_t), (i_k * BK, i_t * BT + i_c * BC), (BK, BC), (0, 1)) + p_d = tl.make_block_ptr(d + i_bh * s_qk_h, (T, K), (s_qk_t, s_qk_d), (i_t * BT + i_c * BC, i_k * BK), (BC, BK), (1, 0)) + p_v = tl.make_block_ptr(v + i_bh * s_vo_h, (T, V), (s_vo_t, s_vo_d), (i_t * BT + i_c * BC, i_v * BV), (BC, BV), (1, 0)) + p_v_new = tl.make_block_ptr(v_new + i_bh * s_vo_h, (T, V), (s_vo_t, s_vo_d), (i_t * BT + i_c * BC, i_v * BV), (BC, BV), (1, 0)) + b_k = tl.load(p_k, boundary_check=(0, 1)) + # [BT, BK] + b_d = tl.load(p_d, boundary_check=(0, 1)) + # [BT, BV] + b_v = tl.load(p_v, boundary_check=(0, 1)) + b_v -= tl.dot(b_d, b_h.to(b_k.dtype), allow_tf32=False) + # [BK, BV] + tl.store(p_v_new, b_v.to(p_v_new.dtype.element_ty), boundary_check=(0, 1)) + b_h_cumsum += tl.dot(b_k, b_v.to(b_k.dtype), allow_tf32=False) + b_h += b_h_cumsum + + if STORE_FINAL_STATE: + p_ht = tl.make_block_ptr(final_state + i_bh * K * V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + tl.store(p_ht, b_h.to(p_ht.dtype.element_ty), boundary_check=(0, 1)) + +@triton.autotune( + configs=[ + triton.Config({}, num_warps=1), + triton.Config({}, num_warps=2), + triton.Config({}, num_warps=4), + triton.Config({}, num_warps=8), + triton.Config({}, num_warps=16), + triton.Config({}, num_warps=32), + ], + key=["BT", "BK", "BV"], +) +@triton.jit +def chunk_linear_attn_fwd_kernel_o( + q, + k, + v, + h, + o, + s_qk_h, + s_qk_t, + s_qk_d, + s_vo_h, + s_vo_t, + s_vo_d, + s_h_h, + s_h_t, + scale, + H: tl.constexpr, + T: tl.constexpr, + K: tl.constexpr, + V: tl.constexpr, + BT: tl.constexpr, + BK: tl.constexpr, + BV: tl.constexpr +): + i_v, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + + o_i = tl.arange(0, BT) + m_s = o_i[:, None] >= o_i[None, :] + + b_o = tl.zeros([BT, BV], dtype=tl.float32) + b_s = tl.zeros([BT, BT], dtype=tl.float32) + for i_k in
range(tl.cdiv(K, BK)): + p_q = tl.make_block_ptr(q + i_bh * s_qk_h, (T, K), (s_qk_t, s_qk_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + p_k = tl.make_block_ptr(k + i_bh * s_qk_h, (K, T), (s_qk_d, s_qk_t), (i_k * BK, i_t * BT), (BK, BT), (0, 1)) + p_h = tl.make_block_ptr(h + i_bh * s_h_h + i_t * K * V, (K, V), (s_h_t, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + # [BT, BK] + b_q = tl.load(p_q, boundary_check=(0, 1)) + b_q = (b_q * scale).to(b_q.dtype) + # [BK, BT] + b_k = tl.load(p_k, boundary_check=(0, 1)) + # [BK, BV] + b_h = tl.load(p_h, boundary_check=(0, 1)) + b_o += tl.dot(b_q, b_h, allow_tf32=False) + b_s += tl.dot(b_q, b_k, allow_tf32=False) + + b_s = tl.where(m_s, b_s, 0) + p_v = tl.make_block_ptr(v + i_bh * s_vo_h, (T, V), (s_vo_t, s_vo_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + b_v = tl.load(p_v, boundary_check=(0, 1)) + b_o = (b_o + tl.dot(b_s.to(b_v.dtype), b_v, allow_tf32=False)) + p_o = tl.make_block_ptr(o + i_bh * s_vo_h, (T, V), (s_vo_t, s_vo_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0, 1)) + +@triton.autotune( + configs=[ + triton.Config({}, num_warps=1), + triton.Config({}, num_warps=2), + triton.Config({}, num_warps=4), + triton.Config({}, num_warps=8), + triton.Config({}, num_warps=16), + triton.Config({}, num_warps=32), + ], + key=["BT", "BK", "BV"], +) +@triton.jit +def chunk_delta_rule_bwd_kernel_dhu( + q, + k, + d, + do, + dh, + dv, + dv2, + s_qk_h, + s_qk_t, + s_qk_d, + s_vo_h, + s_vo_t, + s_vo_d, + s_h_h, + s_h_t, + scale, + H: tl.constexpr, + T: tl.constexpr, + K: tl.constexpr, + V: tl.constexpr, + BT: tl.constexpr, + BC: tl.constexpr, + BK: tl.constexpr, + BV: tl.constexpr, + NT: tl.constexpr +): + i_k, i_v, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + + # [BK, BV] + b_dh = tl.zeros([BK, BV], dtype=tl.float32) + for i_t in range(NT - 1, -1, -1): + p_dh = tl.make_block_ptr(dh + i_bh * s_h_h + i_t * K * V, (K, V), (s_h_t, 1), (i_k * BK, i_v * BV), (BK, 
BV), (1, 0)) + tl.store(p_dh, b_dh.to(p_dh.dtype.element_ty), boundary_check=(0, 1)) + b_dh_tmp = tl.zeros([BK, BV], dtype=tl.float32) + for i_c in range(tl.cdiv(BT, BC) - 1, -1, -1): + p_q = tl.make_block_ptr(q + i_bh * s_qk_h, (K, T), (s_qk_d, s_qk_t), (i_k * BK, i_t * BT + i_c * BC), (BK, BC), (0, 1)) + p_k = tl.make_block_ptr(k + i_bh * s_qk_h, (T, K), (s_qk_t, s_qk_d), (i_t * BT + i_c * BC, i_k * BK), (BC, BK), (1, 0)) + p_d = tl.make_block_ptr(d + i_bh * s_qk_h, (K, T), (s_qk_d, s_qk_t), (i_k * BK, i_t * BT + i_c * BC), (BK, BC), (0, 1)) + p_dv = tl.make_block_ptr(dv + i_bh * s_vo_h, (T, V), (s_vo_t, s_vo_d), (i_t * BT + i_c * BC, i_v * BV), (BC, BV), (1, 0)) + p_do = tl.make_block_ptr(do + i_bh * s_vo_h, (T, V), (s_vo_t, s_vo_d), (i_t * BT + i_c * BC, i_v * BV), (BC, BV), (1, 0)) + # [BK, BT] + b_q = tl.load(p_q, boundary_check=(0, 1)) + b_q = (b_q * scale).to(b_q.dtype) + # [BT, BK] + b_k = tl.load(p_k, boundary_check=(0, 1)) + b_d = tl.load(p_d, boundary_check=(0, 1)) + # [BT, V] + b_do = tl.load(p_do, boundary_check=(0, 1)) + + # [BT, BT] + # b_s = tl.dot(b_k, b_q, allow_tf32=False) + # b_s = tl.where(m_s, b_s, 0) + # b_dv = tl.dot(b_s.to(b_do.dtype), b_do, allow_tf32=False) + tl.dot(b_k, b_dh.to(b_k.dtype), allow_tf32=False) + + b_dv = tl.load(p_dv, boundary_check=(0, 1)) + b_dv += tl.dot(b_k, b_dh.to(b_k.dtype), allow_tf32=False) + p_dv2 = tl.make_block_ptr(dv2 + i_bh * s_vo_h, (T, V), (s_vo_t, s_vo_d), (i_t * BT + i_c * BC, i_v * BV), (BC, BV), (1, 0)) + tl.store(p_dv2, b_dv.to(p_dv.dtype.element_ty), boundary_check=(0, 1)) + # [BK, BV] + b_dh_tmp += tl.dot(b_q, b_do.to(b_q.dtype), allow_tf32=False) + b_dh_tmp -= tl.dot(b_d, b_dv.to(b_q.dtype), allow_tf32=False) + b_dh += b_dh_tmp + +@triton.autotune( + configs=[ + triton.Config({}, num_warps=1), + triton.Config({}, num_warps=2), + triton.Config({}, num_warps=4), + triton.Config({}, num_warps=8), + triton.Config({}, num_warps=16), + triton.Config({}, num_warps=32), + ], + key=["BT", "BK", "BV"], +) 
+@triton.jit +def chunk_delta_rule_bwd_kernel_dqkw( + q, + k, + v, + w, + h, + do, + dh, + dq, + dk, + dv, + dw, + s_qk_h, + s_qk_t, + s_qk_d, + s_vo_h, + s_vo_t, + s_vo_d, + s_h_h, + s_h_t, + scale, + H: tl.constexpr, + T: tl.constexpr, + K: tl.constexpr, + V: tl.constexpr, + BT: tl.constexpr, + BK: tl.constexpr, + BV: tl.constexpr, + NT: tl.constexpr +): + i_k, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + n_bh = tl.num_programs(2) + o_i = tl.arange(0, BT) + + p_q = tl.make_block_ptr(q + i_bh * s_qk_h, (K, T), (s_qk_d, s_qk_t), (i_k * BK, i_t * BT), (BK, BT), (0, 1)) + p_k = tl.make_block_ptr(k + i_bh * s_qk_h, (T, K), (s_qk_t, s_qk_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + + b_q = tl.load(p_q, boundary_check=(0, 1)) + b_k = tl.load(p_k, boundary_check=(0, 1)) + b_s = tl.dot(b_k, b_q, allow_tf32=False) * scale + b_s = tl.where(o_i[:, None] <= o_i[None, :], b_s, 0) + + b_dq = tl.zeros([BT, BK], dtype=tl.float32) + b_dk = tl.zeros([BT, BK], dtype=tl.float32) + b_dw = tl.zeros([BT, BK], dtype=tl.float32) + b_ds = tl.zeros([BT, BT], dtype=tl.float32) + for i_v in range(tl.cdiv(V, BV)): + p_v = tl.make_block_ptr(v + i_bh * s_vo_h, (T, V), (s_vo_t, s_vo_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + p_h = tl.make_block_ptr(h + i_bh * s_h_h, (V, NT * K), (1, s_h_t), (i_v * BV, i_t * K + i_k * BK), (BV, BK), (0, 1)) + p_do = tl.make_block_ptr(do + i_bh * s_vo_h, (T, V), (s_vo_t, s_vo_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + p_dh = tl.make_block_ptr(dh + i_bh * s_h_h, (NT * K, V), (s_h_t, 1), (i_t * K + i_k * BK, i_v * BV), (BK, BV), (1, 0)) + p_dv = tl.make_block_ptr(dv + i_bh * s_vo_h, (T, V), (s_vo_t, s_vo_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + # [BT, BV] + b_v = tl.load(p_v, boundary_check=(0, 1)) + b_do = tl.load(p_do, boundary_check=(0, 1)) + # [BV, BK] + b_h = tl.load(p_h, boundary_check=(0, 1)) + # [BK, BV] + b_dh = tl.load(p_dh, boundary_check=(0, 1)) + # [BT, BT] + b_ds += tl.dot(b_do, tl.trans(b_v), allow_tf32=False) + # [BT, 
BK] + b_dq += tl.dot(b_do, b_h, allow_tf32=False) * scale + b_dk += tl.dot(b_v, tl.trans(b_dh), allow_tf32=False) + + b_dv = tl.load(p_dv, boundary_check=(0, 1)) + b_dw += tl.dot(b_dv.to(b_k.dtype), b_h.to(b_k.dtype), allow_tf32=False) + + # [BT, BT] + b_ds = tl.where(o_i[:, None] >= o_i[None, :], b_ds * scale, 0).to(b_q.dtype) + # [BT, BK] + b_dq += tl.dot(b_ds, b_k, allow_tf32=False) + b_dk += tl.trans(tl.dot(b_q, b_ds, allow_tf32=False)) + + p_dq = tl.make_block_ptr(dq + i_bh * s_qk_h, (T, K), (s_qk_t, s_qk_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + p_dk = tl.make_block_ptr(dk + i_bh * s_qk_h, (T, K), (s_qk_t, s_qk_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + p_dw = tl.make_block_ptr(dw + i_bh * s_qk_h, (T, K), (s_qk_t, s_qk_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + tl.store(p_dq, b_dq.to(p_dq.dtype.element_ty), boundary_check=(0, 1)) + tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), boundary_check=(0, 1)) + tl.store(p_dw, -b_dw.to(p_dw.dtype.element_ty), boundary_check=(0, 1)) + + + +def chunk_fwd_h_fn(k, w, u, BT, initial_state, final_state): + B, H, T, K, V = *k.shape, u.shape[-1] + + BK = triton.next_power_of_2(K) + assert BK <= 256, "current kernel does not support head dimension larger than 256." 
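+ # Tile-size heuristic below: shrink the V-tile (BV) and subchunk length (BC) as the head dimension (BK) grows, so the [BK, BV] state block still fits in SRAM.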
+ BV = 16 if BK > 128 else 32 + BV = 64 if BK <= 64 else BV + BC = 16 if BK > 128 else 32 + BC = 64 if BK <= 64 else BC + BC = min(BT, BC) + NT, NK, NV = triton.cdiv(T, BT), triton.cdiv(K, BK), triton.cdiv(V, BV) + assert NK == 1, 'NK > 1 is not supported because it involves time-consuming synchronization' + + h = k.new_empty(B, H, NT * K, V) + grid = (NK, NV, B * H) + v_new = torch.empty_like(u) + chunk_delta_rule_fwd_kernel_h[grid]( + k, u, w, v_new, h, initial_state, final_state, + k.stride(1), k.stride(2), k.stride(3), + u.stride(1), u.stride(2), u.stride(3), + h.stride(1), h.stride(2), + H=H, T=T, K=K, V=V, BT=BT, BC=BC, BK=BK, BV=BV, NT=NT, + USE_INITIAL_STATE=initial_state is not None, + STORE_FINAL_STATE=final_state is not None, + ) + return h, v_new + + +def chunk_bwd_dhu_fn(q, k, w, do, dv, BT): + B, H, T, K, V = *q.shape, do.shape[-1] + + BK = triton.next_power_of_2(K) + assert BK <= 256, "current kernel does not support head dimension larger than 256." + BV = 16 if BK > 128 else 32 + BV = 64 if BK <= 64 else BV + BC = 16 if BK > 128 else 32 + BC = 64 if BK <= 64 else BC + BC = min(BT, BC) + NT, NK, NV = triton.cdiv(T, BT), triton.cdiv(K, BK), triton.cdiv(V, BV) + assert NK == 1, 'NK > 1 is not supported because it involves time-consuming synchronization' + + dh = q.new_empty(B, H, NT * K, V) + # dv_new = torch.empty_like(do) + grid = (NK, NV, B * H) + dv2 = torch.empty_like(dv) + chunk_delta_rule_bwd_kernel_dhu[grid]( + q, k, w, do, dh, dv, dv2, + q.stride(1), q.stride(2), q.stride(3), + do.stride(1), do.stride(2), do.stride(3), + dh.stride(1), dh.stride(2), + K**-0.5, + H=H, T=T, K=K, V=V, BT=BT, BC=BC, BK=BK, BV=BV, NT=NT, + ) + return dh, dv2 + + +def chunk_fwd_o_fn(q, k, v_new, h, BT): + B, H, T, K, V = *q.shape, v_new.shape[-1] + + BK = triton.next_power_of_2(K) + o = torch.empty_like(v_new) + BK = min(triton.next_power_of_2(K), 64) + BV = min(triton.next_power_of_2(V), 64) + NV = triton.cdiv(V, BV) + NT = triton.cdiv(T, BT) + grid = (NV, NT,
B * H) + chunk_linear_attn_fwd_kernel_o[grid]( + q, k, v_new, h, o, + q.stride(1), q.stride(2), q.stride(3), + v_new.stride(1), v_new.stride(2), v_new.stride(3), + h.stride(1), h.stride(2), + scale=K**-0.5, + H=H, T=T, K=K, V=V, BT=BT, BK=BK, BV=BV, + ) + return o + + + +def chunk_bwd_dqkw_fn(q, k, v_new, w, h, du, do, dh, BT): + B, H, T, K, V = *q.shape, v_new.shape[-1] + + BK = triton.next_power_of_2(K) + BK = min(triton.next_power_of_2(K), 64) + BV = min(triton.next_power_of_2(V), 64) + NV = triton.cdiv(V, BV) + NT = triton.cdiv(T, BT) + grid = (NV, NT, B * H) + dq = torch.empty_like(q) + dk = torch.empty_like(k) + dw = torch.empty_like(w) + chunk_delta_rule_bwd_kernel_dqkw[grid]( + q, k, v_new, w, h, do, dh, dq, dk, du, dw, + q.stride(1), q.stride(2), q.stride(3), + v_new.stride(1), v_new.stride(2), v_new.stride(3), + dh.stride(1), dh.stride(2), + scale = K ** -0.5, + H=H, T=T, K=K, V=V, BT=BT, BK=BK, BV=BV, NT=NT, + ) + return dq.to(q.dtype), dk.to(k.dtype), dw.to(w.dtype) + + +class ChunkDeltaRuleFunction(torch.autograd.Function): + + @staticmethod + @custom_fwd + @contiguous + def forward(ctx, q, k, v, beta, BT, initial_state, output_final_state, checkpoint_level=1): + ### obtain WY representation. u is actually the new v. 
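+ # w, u and the intra-chunk matrix A come from the chunkwise WY representation of the delta-rule updates (see fwd_prepare_wy_repr in fla.ops.delta_rule.wy_fast); u acts as the corrected values consumed by the chunk kernels below.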
+ w, u, A = fwd_prepare_wy_repr(k, v, beta, BT) + B, H, T, K, V = *q.shape, v.shape[-1] + # forward h + final_state = None + if output_final_state: + final_state = q.new_empty(B, H, K, V, dtype=torch.float32, requires_grad=False) + h, v_new = chunk_fwd_h_fn(k, w, u, BT, initial_state, final_state) + ## obtain output + o = chunk_fwd_o_fn(q, k, v_new, h, BT) + # save memory + if checkpoint_level == 1: + h, v_new = None, None + ctx.save_for_backward(q, k, v, beta, A, h, v_new, initial_state) + ctx.BT = BT + return o.to(q.dtype), final_state + + @staticmethod + @custom_bwd + @contiguous + def backward(ctx, do, d_ht=None): + q, k, v, beta, A, h, v_new, initial_state = ctx.saved_tensors + scale = q.shape[-1] ** -0.5 + BT = ctx.BT + w, u = fwd_recompute_w_u(k, v, beta, A, BT) + # checkpoint_level=1: recompute h and v_new. + if h is None: + h, v_new = chunk_fwd_h_fn(k, w, u, BT, initial_state, None) + dv = fwd_prepare_dv(q, k, do, BT) + dh, dv = chunk_bwd_dhu_fn(q, k, w, do, dv, BT) + dq, dk, dw = chunk_bwd_dqkw_fn(q, k, v_new, w, h, dv, do, dh, BT) + dk2, dv, dbeta = bwd_prepare_wy_repr(k, v, beta, A, dw, dv, BT) + dk.add_(dk2) + return dq.to(q.dtype), dk.to(k.dtype), dv.to(v.dtype), dbeta.to(beta.dtype), None, None, None, None + +def chunk_delta_rule( + q: torch.Tensor, + k: torch.Tensor, + v: torch.Tensor, + beta: torch.Tensor, + BT: int, + initial_state: torch.Tensor = None, + output_final_state: bool = False +): + assert q.dtype == k.dtype == v.dtype + if initial_state is not None: + initial_state = initial_state.detach() + o, final_state = ChunkDeltaRuleFunction.apply(q, k, v, beta, BT, initial_state, output_final_state) + return o, final_state diff --git a/fla/ops/delta_rule/chunk_fuse.py b/fla/ops/delta_rule/chunk_fuse.py new file mode 100644 index 0000000000000000000000000000000000000000..efb36fdebf3b110078362449a5178acdf7a3fb80 --- /dev/null +++ b/fla/ops/delta_rule/chunk_fuse.py @@ -0,0 +1,419 @@ +# -*- coding: utf-8 -*- + +from typing import Tuple + +import torch +import triton +import triton.language as
tl +from packaging import version +from torch.cuda.amp import custom_bwd, custom_fwd + +from fla.ops.delta_rule.utils import bwd_prepare_wy_repr, fwd_prepare_wy_repr +from fla.utils import contiguous + + +# on-the-fly computation without materializing hidden statets into HBMs +@triton.autotune( + configs=[ + triton.Config({}, num_warps=1), + triton.Config({}, num_warps=2), + triton.Config({}, num_warps=4), + triton.Config({}, num_warps=8) + ], + key=["BT", "BK"], +) +@triton.jit +def fused_chunk_delta_rule_fwd_kernel( + # B: batch_size, H: n_heads, T: seq_len, D: d_head + q, # query [B, H, L, D_head_K] + k, # key [B, H, L, D_head_K] + v, # value [B, H, L, D_head_V] + v_new, + d, # decay [B, H, L, D_head_K] + o, # output [B, H, L, D_head_V] + initial_state, # initial state of the chunk [B, H, D_head_K, D_head_V] + final_state, # final state of the chunk [B, H, D_head_K, D_head_V] + s_qk_h, # stride size: L * D_head_K + s_qk_t, # stride size: D_head_K + s_qk_d, # stride size: 1 + s_vo_h, # stride size: L * D_head_V + s_vo_t, # stride size: D_head_V + s_vo_d, # stride size: 1 + B, # batch size + H, # n_heads + T, # seq_len + scale, # D_head_K ** -0.5 + BT: tl.constexpr, # BLOCK SIZE along the sequence dimension, a.k.a. 
chunk size + BK: tl.constexpr, # BLOCK SIZE along the K dimension + BV: tl.constexpr, # BLOCK SIZE along the V dimension + DK: tl.constexpr, # D_head_K + DV: tl.constexpr, # D_head_V + USE_INITIAL_STATE: tl.constexpr, + STORE_FINAL_STATE: tl.constexpr, + CHECK: tl.constexpr +): + # indices + i_v, i_k, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + + o_i = tl.arange(0, BT) + + # [BT, BT] + m_s = o_i[:, None] >= o_i[None, :] + # [BK, BV] + b_h = tl.zeros([BK, BV], dtype=tl.float32) + + # make block pointers + p_q = tl.make_block_ptr(q + i_bh * s_qk_h, (T, DK), (s_qk_t, s_qk_d), (0, i_k * BK), (BT, BK), (1, 0)) + p_k = tl.make_block_ptr(k + i_bh * s_qk_h, (DK, T), (s_qk_d, s_qk_t), (i_k * BK, 0), (BK, BT), (0, 1)) + p_d = tl.make_block_ptr(d + i_bh * s_qk_h, (T, DK), (s_qk_t, s_qk_d), (0, i_k * BK), (BT, BK), (1, 0)) + p_v = tl.make_block_ptr(v + i_bh * s_vo_h, (T, DV), (s_vo_t, s_vo_d), (0, i_v * BV), (BT, BV), (1, 0)) + p_o = tl.make_block_ptr(o + (i_bh+i_k*B*H) * s_vo_h, (T, DV), (s_vo_t, s_vo_d), (0, i_v * BV), (BT, BV), (1, 0)) + p_v_new = tl.make_block_ptr(v_new + i_bh * s_vo_h, (T, DV), (s_vo_t, s_vo_d), (0, i_v * BV), (BT, BV), (1, 0)) + + if USE_INITIAL_STATE: + p_h = tl.make_block_ptr(initial_state + i_bh * DK * DV, (DK, DV), (DV, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + b_h = tl.load(p_h, boundary_check=(0, 1)).to(tl.float32) + + for i in range(0, tl.cdiv(T, BT)): + # [BK, BT] + b_k = tl.load(p_k, boundary_check=(0, 1)) + # [BT, BV] + b_v = tl.load(p_v, boundary_check=(0, 1)) + # [BT, BK] + b_q = tl.load(p_q, boundary_check=(0, 1)) + b_d = tl.load(p_d, boundary_check=(0, 1)) + b_q = (b_q * scale).to(b_k.dtype) + + # [BT, BT] + b_s = tl.dot(b_q, b_k, allow_tf32=False) + b_s = tl.where(m_s, b_s, 0) + # [BT, BV] + b_v_prime = tl.dot(b_d, b_h.to(b_q.dtype), allow_tf32=False) + b_v = b_v - b_v_prime + tl.store(p_v_new, b_v.to(p_v.dtype.element_ty), boundary_check=(0, 1)) + + b_o = tl.dot(b_s.to(b_q.dtype), b_v.to(b_q.dtype), allow_tf32=False) 
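+ # Note: both branches of the CHECK conditional below compute the same thing; CHECK appears to exist only to special-case the first iteration (e.g. as a workaround for a compiler issue in older Triton versions), not to change the math.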
+ if CHECK and i == 0: + b_o += tl.dot(b_q, b_h.to(b_q.dtype), allow_tf32=False) + b_h = b_h + tl.dot(b_k, b_v.to(b_k.dtype), allow_tf32=False) + else: + b_o += tl.dot(b_q, b_h.to(b_q.dtype), allow_tf32=False) + b_h = b_h + tl.dot(b_k, b_v.to(b_k.dtype), allow_tf32=False) + tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0, 1)) + p_q = tl.advance(p_q, (BT, 0)) + p_k = tl.advance(p_k, (0, BT)) + p_v = tl.advance(p_v, (BT, 0)) + p_v_new = tl.advance(p_v_new, (BT, 0)) + p_o = tl.advance(p_o, (BT, 0)) + p_d = tl.advance(p_d, (BT, 0)) + + if STORE_FINAL_STATE: + p_final = tl.make_block_ptr(final_state + i_bh * DK * DV, (DK, DV), (DV, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + tl.store(p_final, b_h.to(p_final.dtype.element_ty), boundary_check=(0, 1)) + + +# Similar to Algorithm1 of https://arxiv.org/abs/2006.16236 +@triton.autotune( + configs=[ + triton.Config({}, num_warps=1), + triton.Config({}, num_warps=2), + triton.Config({}, num_warps=4), + triton.Config({}, num_warps=8), + triton.Config({}, num_warps=16), + triton.Config({}, num_warps=32), + ], + key=["BT", "BK", "BV"], +) +@triton.jit +def fused_chunk_delta_rule_bwd_kernel( + # B: batch_size, H: n_heads, T: seq_len, D: d_head + # NV: number of split in the V dimension. 
NK: number of split in the K dimension
+ q, # query [B, H, L, D_head_K]
+ k, # key [B, H, L, D_head_K]
+ v, # value [B, H, L, D_head_V]
+ d, # decay [B, H, L, D_head_K]
+ do, # gradient of output [B, H, L, D_head_V]
+ dq, # gradient of query [NV, B, H, L, D_head_K]
+ dk, # gradient of key [NV, B, H, L, D_head_K]
+ dv, # gradient of value [NK, B, H, L, D_head_V]
+ dd, # gradient of decay [NV, B, H, L, D_head_K]
+ initial_state, # initial state of the chunk [B, H, D_head_K, D_head_V]
+ s_qk_h, # stride size: L * D_head_K
+ s_qk_t, # stride size: D_head_K
+ s_qk_d, # stride size: 1
+ s_vo_h, # stride size: L * D_head_V
+ s_vo_t, # stride size: D_head_V
+ s_vo_d, # stride size: 1
+ B, # batch_size
+ H, # n_heads
+ T, # seq_len
+ scale, # D_head_K ** -0.5
+ BT: tl.constexpr, # BLOCK SIZE along the sequence dimension, a.k.a. chunk size
+ BK: tl.constexpr, # BLOCK SIZE along the K dimension
+ BV: tl.constexpr, # BLOCK SIZE along the V dimension
+ DK: tl.constexpr, # D_head_K
+ DV: tl.constexpr, # D_head_V
+ USE_INITIAL_STATE: tl.constexpr,
+ CHECK: tl.constexpr
+):
+ i_v, i_k, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ o_i = tl.arange(0, BT)
+
+ # first reverse
+ # [BK, BV]
+ b_dh = tl.zeros([BK, BV], dtype=tl.float32)
+ m_s = o_i[:, None] <= o_i[None, :]
+ for i in range(1, tl.cdiv(T, BT) + 1):
+ p_q = tl.make_block_ptr(q + i_bh * s_qk_h, (DK, T), (s_qk_d, s_qk_t), (i_k * BK, T - i * BT), (BK, BT), (0, 1))
+ p_d = tl.make_block_ptr(d + i_bh * s_qk_h, (DK, T), (s_qk_d, s_qk_t), (i_k * BK, T - i * BT), (BK, BT), (0, 1))
+ p_k = tl.make_block_ptr(k + i_bh * s_qk_h, (T, DK), (s_qk_t, s_qk_d), (T - i * BT, i_k * BK), (BT, BK), (1, 0))
+
+ p_v = tl.make_block_ptr(v + i_bh * s_vo_h, (T, DV), (s_vo_t, s_vo_d), (T - i * BT, i_v * BV), (BT, BV), (1, 0))
+ p_do = tl.make_block_ptr(do + i_bh * s_vo_h, (T, DV), (s_vo_t, s_vo_d), (T - i * BT, i_v * BV), (BT, BV), (1, 0))
+ p_dk = tl.make_block_ptr(dk + (i_bh+i_v*B*H) * s_qk_h, (T, DK), (s_qk_t, s_qk_d), (T - i*BT, 
i_k*BK), (BT, BK), (1, 0)) + p_dv = tl.make_block_ptr(dv + (i_bh+i_k*B*H) * s_vo_h, (T, DV), (s_vo_t, s_vo_d), (T - i*BT, i_v*BV), (BT, BV), (1, 0)) + # [DK, BT] + b_q = tl.load(p_q, boundary_check=(0, 1)) + b_q = (b_q * scale).to(b_q.dtype) + # [BT, DK] + b_k = tl.load(p_k, boundary_check=(0, 1)) + # [BT, DV] + b_v = tl.load(p_v, boundary_check=(0, 1)) + b_do = tl.load(p_do, boundary_check=(0, 1)) + + # [BT, BT] + b_ds = tl.dot(b_v, tl.trans(b_do), allow_tf32=False) + b_ds = tl.where(m_s, b_ds, 0).to(b_q.dtype) + # [BT, BT] + b_s = tl.dot(b_k, b_q, allow_tf32=False) + b_s = tl.where(m_s, b_s, 0).to(b_q.dtype) + # [BT, DK] + b_dk = tl.dot(b_ds, tl.trans(b_q), allow_tf32=False) + # [BT, DV] + b_dv = tl.dot(b_s, b_do, allow_tf32=False) + b_d = tl.load(p_d, boundary_check=(0, 1)) + if CHECK and i == 1: + b_dk += tl.dot(b_v, tl.trans(b_dh).to(b_v.dtype), allow_tf32=False) + b_dv += tl.dot(b_k, b_dh.to(b_k.dtype), allow_tf32=False) + b_dh += tl.dot(b_q, b_do, allow_tf32=False) + b_dh -= tl.dot(b_d, b_dv.to(b_d.dtype), allow_tf32=False) + else: + b_dk += tl.dot(b_v, tl.trans(b_dh).to(b_v.dtype), allow_tf32=False) + b_dv += tl.dot(b_k, b_dh.to(b_k.dtype), allow_tf32=False) + b_dh += tl.dot(b_q, b_do, allow_tf32=False) + b_dh -= tl.dot(b_d, b_dv.to(b_d.dtype), allow_tf32=False) + + tl.store(p_dk, (b_dk).to(p_dk.dtype.element_ty), boundary_check=(0, 1)) + tl.store(p_dv, b_dv.to(p_dv.dtype.element_ty), boundary_check=(0, 1)) + + # sync threads + b_h = None + tl.debug_barrier() + m_s = o_i[:, None] >= o_i[None, :] + # [BV, BK] + b_h = tl.zeros([BV, BK], dtype=tl.float32) + if USE_INITIAL_STATE: + p_h = tl.make_block_ptr(initial_state + i_bh * DK * DV, (DV, DK), (1, DV), (i_v * BV, i_k * BK), (BV, BK), (0, 1)) + b_h = tl.load(p_h, boundary_check=(0, 1)).to(tl.float32) + NT = tl.cdiv(T, BT) + for i in range(0, NT): + p_k = tl.make_block_ptr(k + i_bh * s_qk_h, (T, DK), (s_qk_t, s_qk_d), (i * BT, i_k * BK), (BT, BK), (1, 0)) + p_v = tl.make_block_ptr(v + i_bh * s_vo_h, (DV, T), 
(s_vo_d, s_vo_t), (i_v * BV, i * BT), (BV, BT), (0, 1)) + p_do = tl.make_block_ptr(do + i_bh * s_vo_h, (T, DV), (s_vo_t, s_vo_d), (i * BT, i_v * BV), (BT, BV), (1, 0)) + p_dq = tl.make_block_ptr(dq + (i_bh + i_v*B*H) * s_qk_h, (T, DK), (s_qk_t, s_qk_d), (i*BT, i_k*BK), (BT, BK), (1, 0)) + + # [BT, DK] + b_k = tl.load(p_k, boundary_check=(0, 1)) + # [DV, BT] + b_v = tl.load(p_v, boundary_check=(0, 1)) + # [BT, DV] + b_do = tl.load(p_do, boundary_check=(0, 1)) + + # [BT, BT] + b_ds = tl.dot(b_do, b_v, allow_tf32=False) + b_ds = tl.where(m_s, b_ds, 0) + # [BT, DK] + b_dq = tl.dot(b_ds.to(b_k.dtype), b_k, allow_tf32=False) + # [DV, DK] + if CHECK and i == 0: + b_dq += tl.dot(b_do, b_h.to(b_do.dtype), allow_tf32=False) + b_h = b_h + tl.dot(b_v, b_k, allow_tf32=False) + else: + b_dq += tl.dot(b_do, b_h.to(b_do.dtype), allow_tf32=False) + b_h = b_h + tl.dot(b_v, b_k, allow_tf32=False) + b_dq *= scale + tl.store(p_dq, b_dq.to(p_dq.dtype.element_ty), boundary_check=(0, 1)) + + if i < (NT - 1): + p_dv = tl.make_block_ptr(dv + i_bh * s_vo_h, (T, DV), (s_vo_t, s_vo_d), ((i + 1) * BT, i_v * BV), (BT, BV), (1, 0)) + b_dv = tl.load(p_dv, boundary_check=(0, 1)) + b_dd = tl.dot(b_dv.to(b_k.dtype), b_h.to(b_k.dtype), allow_tf32=False) + p_dd = tl.make_block_ptr(dd + (i_bh + i_v*B*H) * s_qk_h, (T, DK), (s_qk_t, s_qk_d), + ((i+1) * BT, i_k * BK), (BT, BK), (1, 0)) + tl.store(p_dd, -b_dd.to(p_dd.dtype.element_ty), boundary_check=(0, 1)) + + +def fused_chunk_delta_rule_fwd(q, k, v, d, BT, initial_state, output_final_state): + batch_size, n_heads, seq_len, d_head_qk = q.shape + d_head_v = v.shape[-1] + scale = d_head_qk ** -0.5 + BT = BT + # ctx.BT = BT + BK, BV = triton.next_power_of_2(d_head_qk), min(triton.next_power_of_2(d_head_v), 32) + NK, NV = triton.cdiv(d_head_qk, BK), triton.cdiv(d_head_v, BV) + assert NK == 1, 'NK should be 1' + o = q.new_empty(batch_size, n_heads, seq_len, d_head_v) + if output_final_state: + final_state = q.new_empty(batch_size, n_heads, d_head_qk, d_head_v, 
dtype=torch.float32, requires_grad=False) + else: + final_state = None + CHECK = True + # if version.parse(triton.__version__) < version.parse('2.2.0'): + # import warnings + # warnings.warn( + # "Triton<2.2.0 detected for running this kernel, " + # "which is known to have some weird compiler issues (refer to https://github.com/openai/triton/issues/2852) " + # "that lead to significant precision loss. " + # "We've add some initial condition checks to resolve this, sadly at the sacrifice of the speed. " + # "For optimal performance, it is recommended to install Triton>=2.2.0 (if possible)." + # ) + # CHECK = True + grid = (NV, NK, batch_size * n_heads) + v_new = torch.empty_like(v) + fused_chunk_delta_rule_fwd_kernel[grid]( + q, k, v, v_new, d, o, initial_state, final_state, + q.stride(1), q.stride(2), q.stride(3), + v.stride(1), v.stride(2), v.stride(3), + batch_size, n_heads, seq_len, scale, + BT=BT, DK=d_head_qk, DV=d_head_v, BK=BK, BV=BV, + USE_INITIAL_STATE=initial_state is not None, + STORE_FINAL_STATE=output_final_state, + CHECK=CHECK, + ) + return o, v_new, CHECK, final_state + + +def fused_chunk_delta_rule_bwd(q, k, v, d, do, BT, CHECK, initial_state): + batch_size, n_heads, seq_len, d_head_qk = q.shape + d_head_v = v.shape[-1] + scale = d_head_qk ** -0.5 + BK, BV = triton.next_power_of_2(d_head_qk), min(triton.next_power_of_2(d_head_v), 32) + NK, NV = triton.cdiv(d_head_qk, BK), triton.cdiv(d_head_v, BV) + assert NK == 1 + dq = q.new_empty(NV, batch_size, n_heads, seq_len, d_head_qk) + dk = q.new_empty(NV, batch_size, n_heads, seq_len, d_head_qk) + dd = q.new_empty(NV, batch_size, n_heads, seq_len, d_head_qk) + dv = q.new_empty(NK, batch_size, n_heads, seq_len, d_head_v) + grid = (NV, NK, batch_size * n_heads) + fused_chunk_delta_rule_bwd_kernel[grid]( + q, k, v, d, do, dq, dk, dv, dd, initial_state, + q.stride(1), q.stride(2), q.stride(3), + v.stride(1), v.stride(2), v.stride(3), + batch_size, n_heads, seq_len, scale, + BT=BT, DK=d_head_qk, DV=d_head_v, 
BK=BK, BV=BV, + USE_INITIAL_STATE=initial_state is not None, + CHECK=CHECK, + # num_warps=num_warps, + # num_stages=num_stages + ) + dq = dq.sum(0) + dk = dk.sum(0) + dv = dv.sum(0) + dd = dd.sum(0) + dd[:, :, 0:BT] = 0 + return dq, dk, dv, dd + +class FusedChunkDeltaRuleFunction(torch.autograd.Function): + @staticmethod + @contiguous + @custom_fwd + def forward(ctx, q, k, v, beta, BT, initial_state, output_final_state, checkpoint_level=0): + # lvl=1 will recompute ``fwd_prepare_wy_repr`` for saving memory. + assert checkpoint_level in [0, 1] + k_origin = k + # k = _l2_norm_fwd(k_origin) + k = k + d, v_new = fwd_prepare_wy_repr(k, v, beta, BT) + o, v_new2, CHECK, final_state = fused_chunk_delta_rule_fwd(q, k, v_new, d, BT, initial_state, output_final_state) + if checkpoint_level == 1: + d, v_new = None, None + ctx.save_for_backward(q, k_origin, v, v_new, v_new2, d, beta, initial_state) + ctx.CHECK = CHECK + ctx.chunk_size = BT + return o.to(q.dtype), final_state + + @staticmethod + @custom_bwd + @contiguous + def backward(ctx, do, d_final_state=None): + q, k_origin, v, v_new, v_new2, d, beta, initial_state = ctx.saved_tensors + chunk_size = ctx.chunk_size + k = k_origin + # k = _l2_norm_fwd(k_origin) + if d is None: + d, v_new = fwd_prepare_wy_repr(k, v, beta, chunk_size) + dq, dk, dv, dd = fused_chunk_delta_rule_bwd(q, k, v_new2, d, do, chunk_size, ctx.CHECK, initial_state) + dk2, dv, dbeta = bwd_prepare_wy_repr(k, v, beta, d, v_new, dd, dv, chunk_size) + dk.add_(dk2) + # dk = _l2_norm_bwd(k_origin, dk) + return dq.to(q.dtype), dk.to(k.dtype), dv.to(v.dtype), dbeta.to(d.dtype), None, None, None + + +def fused_chunk_delta_rule( + q: torch.Tensor, + k: torch.Tensor, + v: torch.Tensor, + beta: torch.Tensor, + BT: int, + initial_state: torch.Tensor = None, + output_final_state: bool = False, +) -> Tuple[torch.Tensor, torch.Tensor]: + if initial_state is not None: + initial_state = initial_state.detach() + o, final_state = FusedChunkDeltaRuleFunction.apply(q, k, v, 
beta, BT, initial_state, output_final_state) + return o, final_state + + +def delta_rule_recurrence(q, k, v, beta): + b, h, l, d_k = q.shape + d_v = v.shape[-1] + o = torch.zeros_like(v) + S = torch.zeros(b, h, d_k, d_v).to(v) + q = q * (d_k ** -0.5) + k = torch.nn.functional.normalize(k, p=2, dim=-1) + for i in range(l): + _k = k[:, :, i] + _q = q[:, :, i] + _v = v[:, :, i].clone() + beta_i = beta[:, :, i] + _v = _v - (S.clone() * _k[..., None]).sum(-2) + _v = _v * beta_i[..., None] + S = S.clone() + _k.unsqueeze(-1) * _v.unsqueeze(-2) + o[:, :, i] = torch.einsum('bhd,bhdm->bhm', _q, S) + return o + + +if __name__ == "__main__": + import torch.nn.functional as F + seq_len = 128 + b = 2 + h = 4 + q = F.normalize(torch.randn(b, h, seq_len, 64), 2, -1) + k = F.normalize(torch.randn(b, h, seq_len, 64), 2, -1) + v = F.normalize(torch.randn(b, h, seq_len, 128), 2, -1) + beta = torch.rand(b, h, seq_len).sigmoid() + q, k, v, beta = map(lambda x: x.cuda().to(torch.float32).requires_grad_(True), (q, k, v, beta)) + do = torch.rand_like(v) + o2 = delta_rule_recurrence(q, k, v.clone(), beta) + o2.backward(do, retain_graph=True) + q_grad2, k_grad2, v_grad2, beta_grad2 = q.grad, k.grad, v.grad, beta.grad + q.grad = k.grad = v.grad = beta.grad = None + o, _ = fused_chunk_delta_rule(q, k, v, beta, 32) + o.backward(do, retain_graph=True) + q_grad, k_grad, v_grad, beta_grad = q.grad, k.grad, v.grad, beta.grad + q.grad = k.grad = v.grad = beta.grad = None + print((o - o2).abs().max()) + print((q_grad - q_grad2).abs().max()) + print((k_grad - k_grad2).abs().max()) + print((v_grad - v_grad2).abs().max()) + print((beta_grad - beta_grad2).abs().max()) diff --git a/fla/ops/delta_rule/naive.py b/fla/ops/delta_rule/naive.py new file mode 100644 index 0000000000000000000000000000000000000000..45ca247cb6f406bc71f6cc542898947be92f3cf1 --- /dev/null +++ b/fla/ops/delta_rule/naive.py @@ -0,0 +1,92 @@ +# -*- coding: utf-8 -*- + +import torch +from einops import rearrange + + +def 
delta_rule_recurrence(q, k, v, beta): + b, h, l, d_k = q.shape + d_v = v.shape[-1] + o = torch.zeros_like(v) + S = torch.zeros(b, h, d_k, d_v).to(v) + q = q * (d_k ** -0.5) + for i in range(l): + _k = k[:, :, i] + _q = q[:, :, i] + _v = v[:, :, i].clone() + beta_i = beta[:, :, i] + _v = _v - (S.clone() * _k[..., None]).sum(-2) + _v = _v * beta_i[..., None] + S = S.clone() + _k.unsqueeze(-1) * _v.unsqueeze(-2) + o[:, :, i] = torch.einsum('bhd,bhdm->bhm', _q, S) + return o + + +def delta_rule_chunkwise(q, k, v, beta, chunk_size=32): + b, h, l, d_k = q.shape + d_v = v.shape[-1] + q = q * (d_k ** -0.5) + v = v * beta[..., None] + k_beta = k * beta[..., None] + + assert l % chunk_size == 0 + + # note that diagonal is masked. + mask = torch.triu(torch.ones(chunk_size, chunk_size, dtype=torch.bool, device=q.device), diagonal=0) + q, k, v, k_beta = map(lambda x: rearrange(x, 'b h (n c) d -> b h n c d', c=chunk_size), [q, k, v, k_beta]) + attn = -(k_beta @ k.transpose(-1, -2)).masked_fill(mask, 0) + + for i in range(1, chunk_size): + attn[..., i, :i] = attn[..., i, :i] + (attn[..., i, :, None].clone() * attn[..., :, :i].clone()).sum(-2) + + attn = attn + torch.eye(chunk_size, dtype=torch.float, device=q.device) + # u + k_cumsum = attn @ v + # w + k_cumdecay = attn @ k_beta + + v = k_cumsum + S = k.new_zeros(b, h, d_k, d_v) + o = torch.zeros_like(v) + mask = torch.triu(torch.ones(chunk_size, chunk_size, dtype=torch.bool, device=q.device), diagonal=1) + for i in range(0, l // chunk_size): + q_i, k_i, v_i = q[:, :, i], k[:, :, i], v[:, :, i] + attn = (q_i @ k_i.transpose(-1, -2)).masked_fill_(mask, 0) + v_prime = k_cumdecay[:, :, i] @ S + v_new = v_i - v_prime + o_inter = q_i @ S + o[:, :, i] = o_inter + attn @ v_new + # chunk state update + S = S + k_i.transpose(-1, -2) @ v_new + + return rearrange(o, 'b h n c d -> b h (n c) d') + + +if __name__ == '__main__': + B = 2 + H = 4 + L = 256 + DK = 128 + DV = 128 + q = (torch.randn(B, H, L, DK)).cuda().requires_grad_(True) + k = 
(torch.randn(B, H, L, DK)).cuda()
+ k = torch.nn.functional.normalize(k, dim=-1, p=2).requires_grad_(True)
+ v = (torch.randn(B, H, L, DV)).cuda().requires_grad_(True)
+ beta = torch.randn(B, H, L).cuda().sigmoid().requires_grad_(True)
+
+ o = delta_rule_recurrence(q, k, v, beta)
+ do = torch.randn(B, H, L, DV).cuda()
+ o.backward(do, retain_graph=True)
+ q_grad, q.grad = q.grad, None
+ k_grad, k.grad = k.grad, None
+ v_grad, v.grad = v.grad, None
+ beta_grad, beta.grad = beta.grad, None
+
+ o2 = delta_rule_chunkwise(q, k, v, beta)
+ o2.backward(do)
+ assert torch.allclose(o, o2, atol=1e-4), breakpoint()
+ assert torch.allclose(q.grad, q_grad, atol=1e-4), breakpoint()
+ assert torch.allclose(k.grad, k_grad, atol=1e-4), breakpoint()
+ assert torch.allclose(v.grad, v_grad, atol=1e-4), breakpoint()
+ assert torch.allclose(beta.grad, beta_grad, atol=1e-4), breakpoint()
+ print("All passed!")
diff --git a/fla/ops/delta_rule/recurrent_fuse.py b/fla/ops/delta_rule/recurrent_fuse.py
new file mode 100644
index 0000000000000000000000000000000000000000..6bd2426495d061a9da25233bfe9b0b147b7dcd9b
--- /dev/null
+++ b/fla/ops/delta_rule/recurrent_fuse.py
@@ -0,0 +1,312 @@
+# -*- coding: utf-8 -*-
+# Copyright (c) 2023, Yu Zhang, Songlin Yang
+
+from typing import Tuple
+
+import torch
+import triton
+import triton.language as tl
+
+from fla.utils import contiguous
+
+# on-the-fly computation without materializing hidden states into HBM
+
+
+@triton.jit
+def fused_recurrent_fwd_kernel(
+ # B: batch_size, H: n_heads, T: seq_len, D: d_head
+ q, # query [B, H, L, D_head_K]
+ k, # key [B, H, L, D_head_K]
+ v, # value [B, H, L, D_head_V]
+ beta, # beta [B, H, L] + o, # output [B, H, L, D_head_V] + initial_state, + final_state, # final hidden state [B, H, D_head_K, D_head_V] + + + s_qk_h, # stride size: L * D_head_K + s_qk_t, # stride size: D_head_K + s_qk_d, # stride size: 1 + + s_vo_h, # stride size: L * D_head_V + s_vo_t, # stride size: D_head_V + s_vo_d, # stride size: 1 + + B, # batch size + H, # n_heads + T, # seq_len + scale, # D_head_K ** -0.5 + BK: tl.constexpr, # BLOCK SIZE along the K dimension + BV: tl.constexpr, # BLOCK SIZE along the V dimension + DK: tl.constexpr, # D_head_K + DV: tl.constexpr, # D_head_V + USE_INITIAL_STATE: tl.constexpr, # whether to use initial state + STORE_FINAL_STATE: tl.constexpr, # whether to store final state +): + + # indices + i_v, i_k, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + + p_q = q + i_bh * s_qk_h + i_k * BK + tl.arange(0, BK) + p_k = k + i_bh * s_qk_h + i_k * BK + tl.arange(0, BK) + p_v = v + i_bh * s_vo_h + i_v * BV + tl.arange(0, BV) + p_beta = beta + i_bh * T + p_o = o + (i_bh + i_k * B * H) * s_vo_h + i_v * BV + tl.arange(0, BV) + + mask_bk = (i_k * BK + tl.arange(0, BK)) < DK + mask_bv = (i_v * BV + tl.arange(0, BV)) < DV + mask_kv = mask_bk[None, :] & mask_bv[:, None] + + h = tl.zeros([BV, BK], dtype=tl.float32) + + if USE_INITIAL_STATE: + p_init_s = initial_state + i_bh * DK * DV + \ + (i_k * BK + tl.arange(0, BK)[None, :]) * \ + DV + (i_v * BV + tl.arange(0, BV)[:, None]) + h += tl.load(p_init_s, mask=mask_kv, other=0).to(tl.float32) + + for _ in range(0, T): + _k = tl.load(p_k, mask=mask_bk, other=0).to(tl.float32) + _v = tl.load(p_v, mask=mask_bv, other=0).to(tl.float32) + _q = tl.load(p_q, mask=mask_bk, other=0).to(tl.float32) * scale + _v_minus = tl.sum(h * _k[None, :], axis=1) + _v -= _v_minus + _beta = tl.load(p_beta).to(tl.float32) + # in-place overwrite + tl.store(p_v, _v.to(p_v.dtype.element_ty), mask=mask_bv) + _v *= _beta + h += _k[None, :] * _v[:, None] + _o = h * _q[None, :] + _o = tl.sum(_o, axis=1) + 
tl.store(p_o, _o.to(p_o.dtype.element_ty), mask=mask_bv)
+
+ p_q += DK
+ p_k += DK
+ p_o += DV
+ p_v += DV
+ p_beta += 1
+
+ if STORE_FINAL_STATE:
+ p_final_s = final_state + i_bh * DK * DV + \
+ (i_k * BK + tl.arange(0, BK)[None, :]) * \
+ DV + (i_v * BV + tl.arange(0, BV)[:, None])
+ tl.store(p_final_s, h.to(p_final_s.dtype.element_ty), mask=mask_kv)
+
+
+# Similar to Algorithm1 of https://arxiv.org/abs/2006.16236
+@triton.jit
+def fused_recurrent_bwd_kernel(
+ # B: batch_size, H: n_heads, T: seq_len, D: d_head
+ # NV: number of split in the V dimension. NK: number of split in the K dimension
+ q, # query [B, H, L, D_head_K]
+ k, # key [B, H, L, D_head_K]
+ v, # value [B, H, L, D_head_V]
+ beta, # beta [B, H, L]
+
+ do, # gradient of output [B, H, L, D_head_V]
+ dq, # gradient of query [NV, B, H, L, D_head_K]
+ dk, # gradient of key [NV, B, H, L, D_head_K]
+ dv, # gradient of value [NK, B, H, L, D_head_V]
+ dbeta, # gradient of beta [B, H, L]
+
+ # initial hidden state initialization [B, H, D_head_K, D_head_V]
+ initial_state,
+
+ s_qk_h, # stride size: L * D_head_K
+ s_qk_t, # stride size: D_head_K
+ s_qk_d, # stride size: 1
+
+ s_vo_h, # stride size: L * D_head_V
+ s_vo_t, # stride size: D_head_V
+ s_vo_d, # stride size: 1
+
+ B, # batch_size
+ H, # n_heads
+ T, # seq_len
+ scale, # D_head_K ** -0.5
+ BK: tl.constexpr, # BLOCK SIZE along the K dimension
+ BV: tl.constexpr, # BLOCK SIZE along the V dimension
+ DK: tl.constexpr, # D_head_K
+ DV: tl.constexpr, # D_head_V
+ USE_INITIAL_STATE: tl.constexpr, # whether to use initial state
+):
+ i_v, i_k, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+ mask_bk = i_k * BK + tl.arange(0, BK) < DK
+ mask_bv = i_v * BV + tl.arange(0, BV) < DV
+
+ p_q = q + i_bh * s_qk_h + i_k * BK + tl.arange(0, BK) + (T - 1) * DK
+ p_k = k + i_bh * s_qk_h + i_k * BK + tl.arange(0, BK) + (T - 1) * DK
+ p_do = do + i_bh * s_vo_h + i_v * BV + tl.arange(0, BV) + (T - 1) * DV
+ p_v = v + i_bh * s_vo_h + i_v * BV + tl.arange(0, 
BV) + (T - 1) * DV + p_beta = beta + i_bh * T + T - 1 + p_dbeta = dbeta + (i_bh + i_v * B * H) * T + T - 1 + + p_dk = dk + (i_bh + i_v * B * H) * s_qk_h + i_k * \ + BK + tl.arange(0, BK) + (T - 1) * DK + p_dv = dv + (i_bh + i_k * B * H) * s_vo_h + i_v * \ + BV + tl.arange(0, BV) + (T - 1) * DV + d_h = tl.zeros([BK, BV], dtype=tl.float32) + + for _ in range(T): + _do = tl.load(p_do, mask=mask_bv, other=0).to(tl.float32) + _q = tl.load(p_q, mask=mask_bk, other=0).to(tl.float32) * scale + _k = tl.load(p_k, mask=mask_bk, other=0).to(tl.float32) + _v = tl.load(p_v, mask=mask_bv, other=0).to(tl.float32) + _beta = tl.load(p_beta).to(tl.float32) + d_h += _q[:, None] * _do[None, :] + d_k = tl.sum(d_h * _v[None, :] * _beta, axis=1) + d_v = tl.sum(d_h * _k[:, None], axis=0) + + d_beta = tl.sum(d_v * _v) + d_v = d_v * _beta + + tl.store(p_dk, d_k.to(p_dk.dtype.element_ty), mask=mask_bk) + tl.store(p_dv, d_v.to(p_dv.dtype.element_ty), mask=mask_bv) + tl.store(p_dbeta, d_beta.to(p_dbeta.dtype.element_ty)) + + d_h -= _k[:, None] * d_v[None, :] + + p_do -= DV + p_q -= DK + p_k -= DK + p_v -= DV + p_dk -= DK + p_dv -= DV + p_dbeta -= 1 + p_beta -= 1 + + tl.debug_barrier() + + h = tl.zeros([BK, BV], dtype=tl.float32) + + p_q = q + i_bh * s_qk_h + i_k * BK + tl.arange(0, BK) + p_k = k + i_bh * s_qk_h + i_k * BK + tl.arange(0, BK) + p_v = v + i_bh * s_vo_h + i_v * BV + tl.arange(0, BV) + p_beta = beta + i_bh * T + p_do = do + i_bh * s_vo_h + i_v * BV + tl.arange(0, BV) + p_dq = dq + (i_bh + i_v * B * H) * s_qk_h + i_k * BK + tl.arange(0, BK) + p_dv = dv + (i_bh + i_k * B * H) * s_vo_h + i_v * BV + tl.arange(0, BV) + DV + p_dk = dk + (i_bh + i_v * B * H) * s_qk_h + i_k * BK + tl.arange(0, BK) + DK + + if USE_INITIAL_STATE: + mask_kv = mask_bk[:, None] & mask_bv[None, :] + p_init_s = initial_state + i_bh * DK * DV + \ + (i_k * BK + tl.arange(0, BK)[:, None]) * \ + DV + (i_v * BV + tl.arange(0, BV)[None, :]) + h += tl.load(p_init_s, mask=mask_kv, other=0).to(tl.float32) + + for i in 
range(0, T): + _k = tl.load(p_k, mask=mask_bk, other=0).to(tl.float32) + _v = tl.load(p_v, mask=mask_bv, other=0).to(tl.float32) + _do = tl.load(p_do, mask=mask_bv, other=0).to(tl.float32) + _beta = tl.load(p_beta).to(tl.float32) + _v *= _beta + + h += _k[:, None] * _v[None, :] + _d_q = h * _do[None, :] + d_q = tl.sum(_d_q, axis=1) * scale + tl.store(p_dq, d_q.to(p_dq.dtype.element_ty), mask=mask_bk) + + if i < T - 1: + d_k = tl.load(p_dk, mask=mask_bk, other=0).to(tl.float32) + d_v = tl.load(p_dv, mask=mask_bv, other=0).to(tl.float32) + d_k -= tl.sum(d_v[None, :] * h, axis=1) + tl.store(p_dk, d_k.to(p_dk.dtype.element_ty), mask=mask_bk) + + p_k += DK + p_do += DV + p_v += DV + p_dk += DK + p_dv += DV + p_dq += DK + p_beta += 1 + + +class FusedRecurrentFunction(torch.autograd.Function): + + @staticmethod + @contiguous + def forward(ctx, q, k, v, beta, initial_state=None, output_final_state=False): + batch_size, n_heads, seq_len, d_head_qk = q.shape + d_head_v = v.shape[-1] + + scale = d_head_qk ** -0.5 + BK, BV = triton.next_power_of_2(d_head_qk), min(triton.next_power_of_2(d_head_v), 8) + NK, NV = triton.cdiv(d_head_qk, BK), triton.cdiv(d_head_v, BV) + num_stages = 1 + num_warps = 1 + assert NK == 1, "NK > 1 is not supported yet" + o = q.new_empty(NK, batch_size, n_heads, seq_len, d_head_v) + + if output_final_state: + final_state = q.new_empty(batch_size, n_heads, d_head_qk, d_head_v) + else: + final_state = None + + grid = (NV, NK, batch_size * n_heads) + fused_recurrent_fwd_kernel[grid]( + q, k, v, beta, o, initial_state, final_state, + q.stride(1), q.stride(2), q.stride(3), + v.stride(1), v.stride(2), v.stride(3), + batch_size, n_heads, seq_len, scale, + DK=d_head_qk, DV=d_head_v, BK=BK, BV=BV, + num_warps=num_warps, + num_stages=num_stages, + USE_INITIAL_STATE=initial_state is not None, + STORE_FINAL_STATE=final_state is not None + ) + o = o.sum(0) + ctx.save_for_backward(q, k, v, beta, initial_state) + return o, final_state + + @staticmethod + @contiguous + 
def backward(ctx, do, d_final_state=None): + q, k, v, beta, initial_state = ctx.saved_tensors + batch_size, n_heads, seq_len, d_head_qk = q.shape + d_head_v = v.shape[-1] + scale = d_head_qk ** -0.5 + BK, BV = triton.next_power_of_2(d_head_qk), min(triton.next_power_of_2(d_head_v), 32) + NK, NV = triton.cdiv(d_head_qk, BK), triton.cdiv(d_head_v, BV) + assert NK == 1, "NK > 1 is not supported yet" + num_stages = 1 + num_warps = 2 + + dq = q.new_empty(NV, batch_size, n_heads, seq_len, d_head_qk) + dk = q.new_empty(NV, batch_size, n_heads, seq_len, d_head_qk) + dv = q.new_empty(NK, batch_size, n_heads, seq_len, d_head_v) + grid = (NV, NK, batch_size * n_heads) + dbeta = q.new_empty(NV, batch_size, n_heads, seq_len) + + fused_recurrent_bwd_kernel[grid]( + q, k, v, beta, do, dq, dk, dv, dbeta, initial_state, + q.stride(1), q.stride(2), q.stride(3), + v.stride(1), v.stride(2), v.stride(3), + batch_size, n_heads, seq_len, scale, + DK=d_head_qk, DV=d_head_v, BK=BK, BV=BV, + num_warps=num_warps, + num_stages=num_stages, + USE_INITIAL_STATE=initial_state is not None + ) + dq = dq.sum(0) + dk = dk.sum(0) + dv = dv.sum(0) + dbeta = dbeta.sum(0) + return dq.to(q), dk.to(k), dv.to(v), dbeta.to(beta), None, None + + +def fused_recurrent_linear_attn_delta_rule( + q: torch.Tensor, + k: torch.Tensor, + v: torch.Tensor, + beta: torch.Tensor = None, + initial_state: torch.Tensor = None, + output_final_state: bool = False, + normalize: bool = False +) -> Tuple[torch.Tensor, torch.Tensor]: + if initial_state is not None: + initial_state = initial_state.detach() + if beta is None: + beta = torch.ones_like(q[..., 0]) + o, final_state = FusedRecurrentFunction.apply(q, k, v, beta, initial_state, output_final_state) + return o, final_state diff --git a/fla/ops/delta_rule/utils.py b/fla/ops/delta_rule/utils.py new file mode 100644 index 0000000000000000000000000000000000000000..92eafdd6a8c2ae44bcd506299000ebcc31e8f5ef --- /dev/null +++ b/fla/ops/delta_rule/utils.py @@ -0,0 +1,297 @@ +# -*- 
coding: utf-8 -*- + +import torch +import triton +import triton.language as tl +from einops import rearrange +from torch.cuda.amp import custom_bwd, custom_fwd + +from fla.utils import contiguous +from fla.ops.delta_rule.wy_fast import prepare_wy_repr as prepare_wy_repr2 + + + +# Inspired by "THE WY REPRESENTATION FOR PRODUCTS OF HOUSEHOLDER MATRICES" https://epubs.siam.org/doi/pdf/10.1137/0908009 +# o: cumprod +# o2: cumprodsum +@triton.autotune( + configs=[ + triton.Config({}, num_warps=1), + triton.Config({}, num_warps=2), + triton.Config({}, num_warps=4), + triton.Config({}, num_warps=8), + triton.Config({}, num_warps=16), + triton.Config({}, num_warps=32), + ], + key=["BT", "BK", "BV"], +) +@triton.jit +def fwd_prepare_wy_repr_kernel( + k, + v, + beta, + o, + o2, + T, + K, + V, + BT: tl.constexpr, + BK: tl.constexpr, + BV: tl.constexpr +): + i_t, i_bh = tl.program_id(0), tl.program_id(1) + + p_k = k + i_bh * T * K + (i_t * BT + tl.arange(0, BT)[:, None]) * K + tl.arange(0, BK)[None, :] + p_v = v + i_bh * T * V + (i_t * BT + tl.arange(0, BT)[:, None]) * V + tl.arange(0, BV)[None, :] + p_beta = beta + i_bh * T + i_t * BT + tl.arange(0, BT) + mask_bt = (tl.arange(0, BT) + i_t * BT) < T + mask_bk = tl.arange(0, BK) < K + mask_bv = tl.arange(0, BV) < V + mask_bk = mask_bk[None, :] & mask_bt[:, None] + mask_bv = mask_bv[None, :] & mask_bt[:, None] + # [BT, BK] + b_k = tl.load(p_k, mask=mask_bk, other=0) + # [BT,] + b_beta = tl.load(p_beta, mask=mask_bt, other=0).to(tl.float32) + # [BT, BV] + b_v = tl.load(p_v, mask=mask_bv, other=0) + b_v = (b_v * b_beta[:, None]).to(b_v.dtype) + # [BT, BK] + b_kb = (b_k * b_beta[:, None]).to(b_k.dtype) + # [BT, BT] + b_A = tl.dot(b_kb, tl.trans(b_k), allow_tf32=False) + b_A = -tl.where(tl.arange(0, BT)[:, None] > tl.arange(0, BT)[None, :], b_A, 0) + + for i in range(BT): + mask = tl.arange(0, BT) == i + b_a = tl.sum(tl.where(mask[:, None], b_A, 0), 0) + b_a = b_a + tl.sum(b_a[:, None] * b_A, 0) * (tl.arange(0, BT) < i) + b_A = 
tl.where(mask[:, None], b_a, b_A)
+ b_A += tl.arange(0, BT)[:, None] == tl.arange(0, BT)[None, :]
+ b_A = b_A.to(b_k.dtype)
+ b_w = tl.dot(b_A, b_kb, allow_tf32=False)
+ b_u = tl.dot(b_A, b_v, allow_tf32=False)
+
+ p_o = o + i_bh * T * K + (i_t * BT + tl.arange(0, BT)[:, None]) * K + tl.arange(0, BK)[None, :]
+ tl.store(p_o, b_w.to(p_o.dtype.element_ty), mask=mask_bk)
+ p_o2 = o2 + i_bh * T * V + (i_t * BT + tl.arange(0, BT)[:, None]) * V + tl.arange(0, BV)[None, :]
+ tl.store(p_o2, b_u.to(p_o2.dtype.element_ty), mask=mask_bv)
+
+
+@triton.autotune(
+ configs=[
+ triton.Config({}, num_warps=1),
+ triton.Config({}, num_warps=2),
+ triton.Config({}, num_warps=4),
+ triton.Config({}, num_warps=8),
+ triton.Config({}, num_warps=16),
+ triton.Config({}, num_warps=32),
+ ],
+ key=["BT", "BK", "BV"],
+)
+@triton.jit
+def bwd_prepare_wy_repr_kernel(
+ k, v, beta,
+ o, o2, do, do2,
+ dk, dv, dbeta,
+ NT, K, V, T,
+ BT: tl.constexpr,
+ BK: tl.constexpr,
+ BV: tl.constexpr,
+):
+ i_t, i_bh = tl.program_id(0), tl.program_id(1)
+ p_k = k + i_bh * T * K + (i_t * BT + tl.arange(0, BT)[:, None]) * K + tl.arange(0, BK)[None, :]
+ p_do = do + i_bh * T * K + (i_t * BT + tl.arange(0, BT)[:, None]) * K + tl.arange(0, BK)[None, :]
+ p_do2 = do2 + i_bh * T * V + (i_t * BT + tl.arange(0, BT)[:, None]) * V + tl.arange(0, BV)[None, :]
+
+ p_beta = beta + i_bh * T + i_t * BT + tl.arange(0, BT)
+ mask_bt = (tl.arange(0, BT) + i_t * BT) < T
+ mask_bk = (tl.arange(0, BK) < K)[None, :] & mask_bt[:, None]
+ mask_bv = (tl.arange(0, BV) < V)[None, :] & mask_bt[:, None]
+ b_k, b_beta = tl.load(p_k, mask=mask_bk), tl.load(p_beta, mask=mask_bt)
+
+ b_beta = b_beta.to(tl.float32)
+ A = tl.dot(b_k, tl.trans(b_k), allow_tf32=False) * b_beta[:, None]
+ A = tl.where(tl.arange(0, BT)[:, None] > tl.arange(0, BT)[None, :], A, 0)
+ b_do = tl.load(p_do, mask=mask_bk).to(tl.float32)
+ b_dv = tl.load(p_do2, mask=mask_bv).to(tl.float32)
+ dA = tl.zeros([BT, BT], dtype=tl.float32)
+ b_dk = 
tl.zeros([BT, BK], dtype=tl.float32) + for i in range(BT-1, -1, -1): + mask = tl.arange(0, BT) == i + attn = tl.sum(tl.where(mask[:, None], A, 0), axis=0) + do_ = tl.sum(tl.where(mask[:, None], b_do, 0), axis=0) + dv_ = tl.sum(tl.where(mask[:, None], b_dv, 0), axis=0) + b_do = b_do - attn[:, None] * do_[None, :] + b_dv = b_dv - attn[:, None] * dv_[None, :] + tl.debug_barrier() + p_v = v + i_bh * T * V + (i_t * BT + tl.arange(0, BT)[:, None]) * V + tl.arange(0, BV)[None, :] + b_v = tl.load(p_v, mask=mask_bv) + b_dk += b_do * b_beta[:, None] + b_dbeta = tl.sum(b_do * b_k, axis=1) + b_dbeta += tl.sum(b_dv * b_v, axis=1) + b_v = None + + p_o = o + i_bh * T * K + (i_t * BT + tl.arange(0, BT)[:, None]) * K + tl.arange(0, BK)[None, :] + p_o2 = o2 + i_bh * T * V + (i_t * BT + tl.arange(0, BT)[:, None]) * V + tl.arange(0, BV)[None, :] + b_o = tl.load(p_o, mask=mask_bk) + b_o2 = tl.load(p_o2, mask=mask_bv) + + dA = -tl.dot(b_do.to(b_o.dtype), tl.trans(b_o), allow_tf32=False) + dA -= tl.dot(b_dv.to(b_o2.dtype), tl.trans(b_o2).to(b_o.dtype), + allow_tf32=False) + dA = tl.where(tl.arange(0, BT)[:, None] > tl.arange(0, BT)[None, :], dA, 0) + b_dv *= b_beta[:, None] + p_dv = dv + i_bh * T * V + (i_t * BT + tl.arange(0, BT)[:, None]) * V + tl.arange(0, BV)[None, :] + tl.store(p_dv, b_dv.to(p_dv.dtype.element_ty), mask=mask_bv) + + b_dbeta += tl.sum(dA * tl.dot(b_k, tl.trans(b_k), allow_tf32=False), axis=1) + dA = dA * b_beta[:, None] + b_dk += tl.dot(tl.trans(dA.to(b_k.dtype)), b_k, allow_tf32=False) + b_dk += tl.dot(dA.to(b_k.dtype), b_k, allow_tf32=False) + p_dk = dk + i_bh * T * K + (i_t * BT + tl.arange(0, BT)[:, None]) * K + tl.arange(0, BK)[None, :] + tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), mask=mask_bk) + p_dbeta = dbeta + i_bh * T + i_t * BT + tl.arange(0, BT) + tl.store(p_dbeta, b_dbeta.to(p_dbeta.dtype.element_ty), mask=mask_bt) + + +def fwd_prepare_wy_repr(k, v, beta, chunk_size): + B, H, T, K, V = *k.shape, v.shape[-1] + v_new = torch.empty_like(v) + o_cumdecay 
= torch.empty_like(k) + BT = chunk_size + NT = triton.cdiv(T, BT) + BK = triton.next_power_of_2(K) + BV = triton.next_power_of_2(V) + fwd_prepare_wy_repr_kernel[(NT, B*H)]( + k, v, beta, o_cumdecay, v_new, + T, K, V, BT, BK, BV + ) + return o_cumdecay, v_new + + +def bwd_prepare_wy_repr(k, v, beta, o_cumdecay, v_new, do, do2, chunk_size): + b, h, l, d_k = do.shape + d_v = v.shape[-1] + BK = triton.next_power_of_2(d_k) + BV = triton.next_power_of_2(d_v) + c = chunk_size + BK = d_k + NT = triton.cdiv(l, c) + dk = torch.empty_like(k) + dv = torch.empty_like(v) + dbeta = torch.zeros_like(beta) + bwd_prepare_wy_repr_kernel[(NT, b*h)]( + k, v, beta, + o_cumdecay, v_new, do, do2, + dk, dv, dbeta, + NT, d_k, d_v, l, chunk_size, BK, BV + ) + return dk, dv, dbeta + +class WYRepresentationPreparation(torch.autograd.Function): + @staticmethod + @contiguous + @custom_fwd + def forward(ctx, k, v, beta, chunk_size): + o_cumdecay, v_new = fwd_prepare_wy_repr(k, v, beta, chunk_size) + ctx.chunk_size = chunk_size + ctx.save_for_backward(k.to(v), v, beta, o_cumdecay, v_new) + return o_cumdecay, v_new + + @staticmethod + @contiguous + @custom_bwd + def backward(ctx, do, do2): + k, v, beta, o_cumdecay, v_new = ctx.saved_tensors + dk, dv, dbeta = bwd_prepare_wy_repr(k, v, beta, o_cumdecay, v_new, do, do2, ctx.chunk_size) + return dk, dv, dbeta, None + +prepare_wy_repr = WYRepresentationPreparation.apply + + +def naive(k, v, beta, chunk_size): + l_org = k.shape[2] + l_new = triton.next_power_of_2(l_org) + # pad k, v, beta + k = torch.cat([k, torch.zeros_like(k)[:, :, :l_new-l_org, :]], dim=2) + v = torch.cat([v, torch.zeros_like(v)[:, :, :l_new-l_org, :]], dim=2) + beta = torch.cat([beta, torch.zeros_like(beta)[:, :, :l_new-l_org]], dim=2) + + k, v = map(lambda x: rearrange(x, 'b h (n c) d -> b h n c d', c=chunk_size), (k, v)) + # k = torch.nn.functional.normalize(k, dim=-1, p=2) + beta = rearrange(beta, 'b h (n c) -> b h n c', c=chunk_size) + mask = torch.triu(torch.ones(chunk_size,
chunk_size, dtype=torch.bool, device=k.device), diagonal=0) + k_beta = k * beta[..., None] + v = v * beta[..., None] + attn = (k @ k.transpose(-1, -2)).masked_fill_(mask, 0) + attn = attn * beta[..., None] + x = attn @ v + + o = torch.zeros_like(k) + o2 = torch.zeros_like(v) + + o[..., 0, :] = k_beta[..., 0, :].clone() + o2[..., 0, :] = x[..., 0, :].clone() + for i in range(1, chunk_size): + o_i = (o[..., :i, :]).clone() + o[..., i, :] = -(attn[..., i, :i, None] * o_i).sum(3) + k_beta[..., i, :] + o2_i = (o2[..., :i, :]).clone() + o2[..., i, :] = -(attn[..., i, :i, None] * o2_i).sum(3) + x[..., i, :] + return map(lambda x: rearrange(x, 'b h n c d -> b h (n c) d')[:, :, :l_org], (o, v-o2)) + + +if __name__ == "__main__": + torch.set_default_dtype(torch.bfloat16) + seq_len = 2048 + b = 4 + h = 8 + k = torch.nn.functional.normalize(torch.randn(b, h, seq_len, 256), dim=-1, p=2) + v = torch.randn(b, h, seq_len, 256) + beta = torch.rand(b, h, seq_len).sigmoid() + require_grad = True + k, v, beta = map(lambda x: x.cuda().requires_grad_(require_grad), (k, v, beta)) + do = torch.rand_like(k) + do2 = torch.rand_like(v) + + print("Start warmup.") + o1, o2 = prepare_wy_repr(k, v, beta, 32) + # (o1 * do + o2 * do2).sum().backward() + o3, o4 = prepare_wy_repr2(k, v, beta, 32) + # (o1 * do + o2 * do2).sum().backward() + print((o1 - o3).abs().max()) + print((o2 - o4).abs().max()) + + + for i in range(30): + o1, o2 = prepare_wy_repr(k, v, beta, 32) + (o1 * do + o2 * do2).sum().backward() + o1, o2 = prepare_wy_repr2(k, v, beta, 32) + (o1 * do + o2 * do2).sum().backward() + + print("Done warmup.") + + import time + torch.cuda.synchronize() + start = time.time() + + for i in range(200): + o1, o2 = prepare_wy_repr(k, v, beta, 64) + (o1 * do + o2 * do2).sum().backward() + + torch.cuda.synchronize() + print(time.time() - start) + + + torch.cuda.synchronize() + start = time.time() + + for i in range(200): + o1, o2 = prepare_wy_repr2(k, v, beta, 64) + (o1 * do + o2 * do2).sum().backward() 
+ + torch.cuda.synchronize() + print(time.time() - start) + + + \ No newline at end of file diff --git a/fla/ops/delta_rule/wy_fast.py b/fla/ops/delta_rule/wy_fast.py new file mode 100644 index 0000000000000000000000000000000000000000..750565e49fa9d30cc309fa96cca170f6b60bb35b --- /dev/null +++ b/fla/ops/delta_rule/wy_fast.py @@ -0,0 +1,401 @@ +# -*- coding: utf-8 -*- + +import torch +import triton +import triton.language as tl +from einops import rearrange +from torch.cuda.amp import custom_bwd, custom_fwd + +from fla.utils import contiguous + +# Inspired by "THE WY REPRESENTATION FOR PRODUCTS OF HOUSEHOLDER MATRICES" https://epubs.siam.org/doi/pdf/10.1137/0908009 +# o: cumprod +# o2: cumprodsum +@triton.autotune( + configs=[ + triton.Config({}, num_warps=1), + triton.Config({}, num_warps=2), + triton.Config({}, num_warps=4), + triton.Config({}, num_warps=8), + triton.Config({}, num_warps=16), + triton.Config({}, num_warps=32), + ], + key=["BT", "BK", "BV"], +) +@triton.jit +def fwd_prepare_wy_repr_kernel( + k, + v, + beta, + w, + u, + A, + s_qk_h, + s_qk_t, + s_qk_d, + s_vo_h, + s_vo_t, + s_vo_d, + T, + K, + V, + BT: tl.constexpr, + BK: tl.constexpr, + BV: tl.constexpr +): + i_t, i_bh = tl.program_id(0), tl.program_id(1) + + b_A = tl.zeros([BT, BT], dtype=tl.float32) + p_beta = tl.make_block_ptr(beta + i_bh * T, (T,), (1,), (i_t * BT,), (BT,), (0,)) + b_beta = tl.load(p_beta, boundary_check=(0,)) + + for i_k in range(tl.cdiv(K, BK)): + p_k = tl.make_block_ptr(k + i_bh * s_qk_h, (T, K), (s_qk_t, s_qk_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + b_k = tl.load(p_k, boundary_check=(0, 1)) + b_kb = (b_k * b_beta[:, None]).to(b_k.dtype) + b_A += tl.dot(b_kb, tl.trans(b_k), allow_tf32=False) + + b_A = -tl.where(tl.arange(0, BT)[:, None] > tl.arange(0, BT)[None, :], b_A, 0) + + for i in range(1, BT): + mask = tl.arange(0, BT) == i + b_a = tl.sum(tl.where(mask[:, None], b_A, 0), 0) + b_a = b_a + tl.sum(b_a[:, None] * b_A, 0) * (tl.arange(0, BT) < i) + b_A = 
tl.where(mask[:, None], b_a, b_A) + + b_A += tl.arange(0, BT)[:, None] == tl.arange(0, BT)[None, :] + + p_A = tl.make_block_ptr(A + i_bh * T * BT, (T, BT), (BT, 1), (i_t * BT, 0), (BT, BT), (1, 0)) + tl.store(p_A, (b_A).to(p_A.dtype.element_ty), boundary_check=(0, 1)) + b_A = b_A.to(k.dtype.element_ty) + + for i_v in range(tl.cdiv(V, BV)): + p_v = tl.make_block_ptr(v + i_bh * s_vo_h, (T, V), (s_vo_t, s_vo_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + b_v = tl.load(p_v, boundary_check=(0, 1)) + b_vb = (b_v * b_beta[:, None]).to(b_v.dtype) + b_u = tl.dot(b_A, b_vb, allow_tf32=False) + p_u = tl.make_block_ptr(u + i_bh * s_vo_h, (T, V), (s_vo_t, s_vo_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + tl.store(p_u, (b_u).to(p_u.dtype.element_ty), boundary_check=(0, 1)) + + for i_k in range(tl.cdiv(K, BK)): + p_k = tl.make_block_ptr(k + i_bh * s_qk_h, (T, K), (s_qk_t, s_qk_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + b_k = tl.load(p_k, boundary_check=(0, 1)) + b_kb = (b_k * b_beta[:, None]).to(b_k.dtype) + b_w = tl.dot(b_A, b_kb, allow_tf32=False) + p_w = tl.make_block_ptr(w + i_bh * s_qk_h, (T, K), (s_qk_t, s_qk_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + tl.store(p_w, b_w.to(p_w.dtype.element_ty), boundary_check=(0, 1)) + + + +@triton.autotune( + configs=[ + triton.Config({}, num_warps=1), + triton.Config({}, num_warps=2), + triton.Config({}, num_warps=4), + triton.Config({}, num_warps=8), + triton.Config({}, num_warps=16), + triton.Config({}, num_warps=32), + ], + key=["BT", "BK", "BV"], +) +@triton.jit +def fwd_recompute_w_u_kernel( + k, + v, + beta, + w, + u, + A, + s_qk_h, + s_qk_t, + s_qk_d, + s_vo_h, + s_vo_t, + s_vo_d, + T, + K, + V, + BT: tl.constexpr, + BK: tl.constexpr, + BV: tl.constexpr +): + i_t, i_bh = tl.program_id(0), tl.program_id(1) + + p_beta = tl.make_block_ptr(beta + i_bh * T, (T,), (1,), (i_t * BT,), (BT,), (0,)) + b_beta = tl.load(p_beta, boundary_check=(0,)) + + p_A = tl.make_block_ptr(A + i_bh * T * BT, (T, BT), (BT, 1), (i_t * BT, 0), (BT, BT), 
(1, 0)) + b_A = tl.load(p_A, boundary_check=(0, 1)).to(k.dtype.element_ty) + + for i_v in range(tl.cdiv(V, BV)): + p_v = tl.make_block_ptr(v + i_bh * s_vo_h, (T, V), (s_vo_t, s_vo_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + b_v = tl.load(p_v, boundary_check=(0, 1)) + b_vb = (b_v * b_beta[:, None]).to(b_v.dtype) + b_u = tl.dot(b_A, b_vb, allow_tf32=False) + p_u = tl.make_block_ptr(u + i_bh * s_vo_h, (T, V), (s_vo_t, s_vo_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + tl.store(p_u, (b_u).to(p_u.dtype.element_ty), boundary_check=(0, 1)) + + for i_k in range(tl.cdiv(K, BK)): + p_k = tl.make_block_ptr(k + i_bh * s_qk_h, (T, K), (s_qk_t, s_qk_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + b_k = tl.load(p_k, boundary_check=(0, 1)) + b_kb = (b_k * b_beta[:, None]).to(b_k.dtype) + b_w = tl.dot(b_A, b_kb, allow_tf32=False) + p_w = tl.make_block_ptr(w + i_bh * s_qk_h, (T, K), (s_qk_t, s_qk_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + tl.store(p_w, b_w.to(p_w.dtype.element_ty), boundary_check=(0, 1)) + + + + + +@triton.autotune( + configs=[ + triton.Config({}, num_warps=1), + triton.Config({}, num_warps=2), + triton.Config({}, num_warps=4), + triton.Config({}, num_warps=8), + triton.Config({}, num_warps=16), + triton.Config({}, num_warps=32), + ], + key=["BT", "BK", "BV"], +) +@triton.jit +def bwd_prepare_wy_repr_kernel( + k, v, beta, A, + dw, du, + dk, dv, dbeta, + s_qk_h, + s_qk_t, + s_qk_d, + s_vo_h, + s_vo_t, + s_vo_d, + T, + K, + V, + BT: tl.constexpr, + BK: tl.constexpr, + BV: tl.constexpr +): + i_t, i_bh = tl.program_id(0), tl.program_id(1) + p_A = tl.make_block_ptr(A + i_bh * T * BT, (T, BT), (BT, 1), (i_t * BT, 0), (BT, BT), (1, 0)) + b_A = tl.load(p_A, boundary_check=(0, 1)).to(k.dtype.element_ty) + + b_dbeta = tl.zeros([BT], dtype=tl.float32) + b_dA = tl.zeros([BT, BT], dtype=tl.float32) + p_beta = tl.make_block_ptr(beta + i_bh * T, (T,), (1,), (i_t * BT,), (BT,), (0,)) + b_beta = tl.load(p_beta, boundary_check=(0,)) + + for i_v in
range(tl.cdiv(V, BV)): + p_v = tl.make_block_ptr(v + i_bh * s_vo_h, (T, V), (s_vo_t, s_vo_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + p_du = tl.make_block_ptr(du + i_bh * s_vo_h, (T, V), (s_vo_t, s_vo_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + b_v = tl.load(p_v, boundary_check=(0, 1)) + b_v_beta = (b_v * b_beta[:, None]).to(b_v.dtype) + b_du = tl.load(p_du, boundary_check=(0, 1)) + b_dA += tl.dot(b_du, tl.trans(b_v_beta), allow_tf32=False) + b_dv_beta = tl.dot(tl.trans(b_A), b_du, allow_tf32=False) + b_dv = b_dv_beta * b_beta[:, None] + b_dbeta += tl.sum(b_dv_beta * b_v, 1) + # store + p_dv = tl.make_block_ptr(dv + i_bh * s_vo_h, (T, V), (s_vo_t, s_vo_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + tl.store(p_dv, b_dv.to(p_dv.dtype.element_ty), boundary_check=(0, 1)) + + tl.debug_barrier() + b_A2 = tl.zeros([BT, BT], dtype=tl.float32) + for i_k in range(tl.cdiv(K, BK)): + p_k = tl.make_block_ptr(k + i_bh * s_qk_h, (T, K), (s_qk_t, s_qk_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + p_dw = tl.make_block_ptr(dw + i_bh * s_qk_h, (T, K), (s_qk_t, s_qk_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + b_k = tl.load(p_k, boundary_check=(0, 1)) + b_k_beta = (b_k * b_beta[:, None]).to(b_k.dtype) + b_dw = tl.load(p_dw, boundary_check=(0, 1)) + b_dA += tl.dot(b_dw, tl.trans(b_k_beta), allow_tf32=False) + b_A2 += tl.dot(b_k_beta, tl.trans(b_k), allow_tf32=False) + b_dk_beta = tl.dot(tl.trans(b_A), b_dw, allow_tf32=False) + b_dk = b_dk_beta * b_beta[:, None] + b_dbeta += tl.sum(b_dk_beta * b_k, 1) + # store + p_dk = tl.make_block_ptr(dk + i_bh * s_qk_h, (T, K), (s_qk_t, s_qk_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), boundary_check=(0, 1)) + + b_A -= (tl.arange(0, BT)[:, None] == tl.arange(0, BT)[None, :]) + b_A2 = tl.where(tl.arange(0, BT)[:, None] > tl.arange(0, BT)[None, :], -b_A2, 0) + b_dA = tl.where(tl.arange(0, BT)[:, None] > tl.arange(0, BT)[None, :], b_dA, 0) + tl.debug_barrier() + + for i in range(BT-1, 0, -1): + mask = 
tl.arange(0, BT) == i + b_da = tl.sum(tl.where(mask[:, None], b_dA, 0), 0) + b_a = tl.sum(tl.where(mask[:, None], b_A2, 0), 0) + b_da2 = b_da + tl.sum(b_da[None, :] * b_A, 1) + b_dA = tl.where(mask[:, None], b_da2, b_dA) + b_dA += b_da[None, :] * b_a[:, None] + + b_dA = tl.where(tl.arange(0, BT)[:, None] > tl.arange(0, BT)[None, :], -b_dA, 0).to(k.dtype.element_ty) + tl.debug_barrier() + + for i_k in range(tl.cdiv(K, BK)): + p_k = tl.make_block_ptr(k + i_bh * s_qk_h, (T, K), (s_qk_t, s_qk_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + p_dk = tl.make_block_ptr(dk + i_bh * s_qk_h, (T, K), (s_qk_t, s_qk_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + b_k = tl.load(p_k, boundary_check=(0, 1)) + b_dk = tl.load(p_dk, boundary_check=(0, 1)) + b_k_beta = (b_k * b_beta[:, None]).to(b_k.dtype) + + b_dk_beta = tl.dot(b_dA, b_k, allow_tf32=False) + b_dbeta += tl.sum(b_dk_beta * b_k, 1) + b_dk += tl.dot(tl.trans(b_dA), b_k_beta, allow_tf32=False) + b_dk += b_dk_beta * b_beta[:, None] + tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), boundary_check=(0, 1)) + + p_dbeta = tl.make_block_ptr(dbeta + i_bh * T, (T,), (1,), (i_t * BT,), (BT,), (0,)) + tl.store(p_dbeta, b_dbeta.to(p_dbeta.dtype.element_ty),boundary_check=(0,)) + + +def fwd_prepare_wy_repr(k, v, beta, BT): + B, H, T, K, V = *k.shape, v.shape[-1] + u = torch.empty_like(v) + w = torch.empty_like(k) + NT = triton.cdiv(T, BT) + BK = min(triton.next_power_of_2(K), 64) + BV = min(triton.next_power_of_2(V), 64) + A = torch.empty(B, H, T, BT, device=k.device, dtype=k.dtype) + fwd_prepare_wy_repr_kernel[(NT, B*H)]( + k, v, beta, w, u, A, + k.stride(1), k.stride(2), k.stride(3), + v.stride(1), v.stride(2), v.stride(3), + T, K, V, BT, BK, BV + ) + return w, u, A + + + +def fwd_recompute_w_u(k, v, beta, A, BT): + B, H, T, K, V = *k.shape, v.shape[-1] + u = torch.empty_like(v) + w = torch.empty_like(k) + NT = triton.cdiv(T, BT) + BK = min(triton.next_power_of_2(K), 64) + BV = min(triton.next_power_of_2(V), 64) + 
fwd_recompute_w_u_kernel[(NT, B*H)]( + k, v, beta, w, u, A, + k.stride(1), k.stride(2), k.stride(3), + v.stride(1), v.stride(2), v.stride(3), + T, K, V, BT, BK, BV + ) + return w, u + + + + + +def bwd_prepare_wy_repr(k, v, beta, A, dw, du, BT): + B, H, T, K, V = *k.shape, v.shape[-1] + + NT = triton.cdiv(T, BT) + BK = min(triton.next_power_of_2(K), 64) + BV = min(triton.next_power_of_2(V), 64) + dk = torch.empty_like(k) + dv = torch.empty_like(v).contiguous() + dbeta = torch.zeros_like(beta) + + bwd_prepare_wy_repr_kernel[(NT, B*H)]( + k, v, beta, A, + dw, du, + dk, dv, dbeta, + k.stride(1), k.stride(2), k.stride(3), + v.stride(1), v.stride(2), v.stride(3), + T, K, V, BT, BK, BV + ) + return dk, dv, dbeta + + +class WYRepresentationPreparation(torch.autograd.Function): + @staticmethod + @contiguous + @custom_fwd + def forward(ctx, k, v, beta, chunk_size): + ctx.BT = chunk_size + w, u, A = fwd_prepare_wy_repr(k, v, beta, ctx.BT) + ctx.save_for_backward(k, v, beta, A) + return w, u + + @staticmethod + @contiguous + @custom_bwd + def backward(ctx, dw, du): + k, v, beta, A = ctx.saved_tensors + BT = ctx.BT + dk, dv, dbeta = bwd_prepare_wy_repr(k, v, beta, A, dw, du, BT) + return dk, dv, dbeta, None + + + + +prepare_wy_repr = WYRepresentationPreparation.apply + +def naive(k, v, beta, chunk_size): + l_org = k.shape[2] + l_new = triton.next_power_of_2(l_org) + # pad k, v, beta + k = torch.cat([k, torch.zeros_like(k)[:, :, :l_new-l_org, :]], dim=2) + v = torch.cat([v, torch.zeros_like(v)[:, :, :l_new-l_org, :]], dim=2) + beta = torch.cat([beta, torch.zeros_like(beta)[:, :, :l_new-l_org]], dim=2) + + k, v = map(lambda x: rearrange(x, 'b h (n c) d -> b h n c d', c=chunk_size), (k, v)) + # k = torch.nn.functional.normalize(k, dim=-1, p=2) + beta = rearrange(beta, 'b h (n c) -> b h n c', c=chunk_size) + mask = torch.triu(torch.ones(chunk_size, chunk_size, dtype=torch.bool, device=k.device), diagonal=0) + k_beta = k * beta[..., None] + v = v * beta[...,
None] + attn = (k @ k.transpose(-1, -2)).masked_fill_(mask, 0) + attn = attn * beta[..., None] + x = attn @ v + + o = torch.zeros_like(k) + o2 = torch.zeros_like(v) + + o[..., 0, :] = k_beta[..., 0, :].clone() + o2[..., 0, :] = x[..., 0, :].clone() + for i in range(1, chunk_size): + o_i = (o[..., :i, :]).clone() + o[..., i, :] = -(attn[..., i, :i, None] * o_i).sum(3) + k_beta[..., i, :] + o2_i = (o2[..., :i, :]).clone() + o2[..., i, :] = -(attn[..., i, :i, None] * o2_i).sum(3) + x[..., i, :] + return map(lambda x: rearrange(x, 'b h n c d -> b h (n c) d')[:, :, :l_org], (o, v-o2)) + + +if __name__ == "__main__": + torch.set_default_dtype(torch.float32) + seq_len = 1024 + b = 4 + h = 4 + k = torch.nn.functional.normalize(torch.randn(b, h, seq_len, 128), dim=-1, p=2) + v = torch.randn(b, h, seq_len, 128) + beta = torch.rand(b, h, seq_len).sigmoid() + # beta = torch.ones(b, h, seq_len) + require_grad = True + + k, v, beta = map(lambda x: x.cuda().requires_grad_(require_grad), (k, v, beta)) + do = torch.rand_like(k) + do2 = torch.rand_like(v) + + o1, o2 = naive(k.clone(), v.clone(), beta.clone(), 64) + if require_grad: + o1.backward(do, retain_graph=True) + o2.backward(do2, retain_graph=True) + + k_grad2, v_grad2, beta_grad2 = k.grad, v.grad, beta.grad + k.grad = v.grad = beta.grad = None + + o3, o4 = prepare_wy_repr(k.clone(), v.clone(), beta.clone(), 64) + print((o1-o3).abs().max()) + print((o2-o4).abs().max()) + + if require_grad: + o3.backward(do, retain_graph=True) + o4.backward(do2, retain_graph=True) + k_grad, v_grad, beta_grad = k.grad, v.grad, beta.grad + print((k_grad2-k_grad).abs().max()) + print((v_grad2-v_grad).abs().max()) + print((beta_grad2-beta_grad).abs().max()) + diff --git a/fla/ops/gla/__init__.py b/fla/ops/gla/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..f1fdb9563ac716719cbe2cda45197d756f10f435 --- /dev/null +++ b/fla/ops/gla/__init__.py @@ -0,0 +1,11 @@ +# -*- coding: utf-8 -*- + +from .chunk import
chunk_gla +from .chunk_fuse import fused_chunk_gla +from .recurrent_fuse import fused_recurrent_gla + +__all__ = [ + 'chunk_gla', + 'fused_chunk_gla', + 'fused_recurrent_gla' +] diff --git a/fla/ops/gla/chunk.py b/fla/ops/gla/chunk.py new file mode 100644 index 0000000000000000000000000000000000000000..7c83529237e4b041bf3ac254a06fae28b8d3a258 --- /dev/null +++ b/fla/ops/gla/chunk.py @@ -0,0 +1,734 @@ +# -*- coding: utf-8 -*- + +# Copyright (c) 2023-2024, Yu Zhang, Songlin Yang + +from typing import Optional, Tuple + +import torch +import triton +import triton.language as tl + +from fla.ops.utils import chunk_reversed_cumsum_fwd +from fla.utils import contiguous + + +@triton.autotune( + configs=[ + triton.Config({'BS': 16}, num_warps=2), + triton.Config({'BS': 16}, num_warps=4), + triton.Config({'BS': 16}, num_warps=8), + triton.Config({'BS': 32}, num_warps=2), + triton.Config({'BS': 32}, num_warps=4), + triton.Config({'BS': 32}, num_warps=8), + triton.Config({'BS': 64}, num_warps=2), + triton.Config({'BS': 64}, num_warps=4), + triton.Config({'BS': 64}, num_warps=8), + ], + key=['S'] +) +@triton.jit +def chunk_gla_fwd_kernel_cum( + s, + o, + s_s_h, + s_s_t, + s_s_d, + T: tl.constexpr, + S: tl.constexpr, + BT: tl.constexpr, + BS: tl.constexpr +): + i_s, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + o_i = tl.arange(0, BT) + m_s = tl.where(o_i[:, None] >= o_i[None, :], 1., 0.) 
+ + p_s = tl.make_block_ptr(s + i_bh * s_s_h, (T, S), (s_s_t, s_s_d), (i_t * BT, i_s * BS), (BT, BS), (1, 0)) + p_o = tl.make_block_ptr(o + i_bh * s_s_h, (T, S), (s_s_t, s_s_d), (i_t * BT, i_s * BS), (BT, BS), (1, 0)) + # [BT, BS] + b_s = tl.load(p_s, boundary_check=(0, 1)).to(tl.float32) + b_o = tl.dot(m_s, b_s, allow_tf32=False) + tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0, 1)) + + +@triton.jit +def chunk_gla_fwd_kernel_h( + k, + v, + g, + h, + h0, + ht, + s_k_h, + s_k_t, + s_k_d, + s_v_h, + s_v_t, + s_v_d, + s_h_h, + s_h_t, + s_h_d, + T: tl.constexpr, + K: tl.constexpr, + V: tl.constexpr, + BT: tl.constexpr, + BK: tl.constexpr, + BV: tl.constexpr, + NT: tl.constexpr, + USE_INITIAL_STATE: tl.constexpr, + STORE_FINAL_STATE: tl.constexpr +): + i_v, i_k, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + b_h = tl.zeros([BK, BV], dtype=tl.float32) + if USE_INITIAL_STATE: + p_h = tl.make_block_ptr(h0 + i_bh * K * V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + b_h += tl.load(p_h, boundary_check=(0, 1)).to(tl.float32) + for i_t in range(NT): + p_k = tl.make_block_ptr(k + i_bh * s_k_h, (K, T), (s_k_d, s_k_t), (i_k * BK, i_t * BT), (BK, BT), (0, 1)) + p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + p_h = tl.make_block_ptr(h + i_bh * s_h_h + i_t * K * V, (K, V), (s_h_t, s_h_d), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + p_g = tl.make_block_ptr(g + i_bh * s_k_h, (K, T), (s_k_d, s_k_t), (i_k * BK, i_t * BT), (BK, BT), (0, 1)) + p_gn = tl.make_block_ptr(g + i_bh * s_k_h, (T * K,), (s_k_d,), ((i_t * BT + BT - 1) * K + i_k * BK,), (BK,), (0,)) + + tl.store(p_h, b_h.to(p_h.dtype.element_ty), boundary_check=(0, 1)) + # [BK, BT] + b_k = tl.load(p_k, boundary_check=(0, 1)) + # [BT, BV] + b_v = tl.load(p_v, boundary_check=(0, 1)) + # [BK, BT] + b_g = tl.load(p_g, boundary_check=(0, 1)) + if i_t < NT - 1: + # [BK,] + b_gn = tl.load(p_gn, boundary_check=(0,)) + else: + b_gn = 
tl.min(b_g, axis=1) + b_h *= tl.exp(b_gn)[:, None] + b_k = (b_k * tl.exp(b_gn[:, None] - b_g)).to(b_k.dtype) + b_h += tl.dot(b_k, b_v, allow_tf32=False) + + if STORE_FINAL_STATE: + p_h = tl.make_block_ptr(ht + i_bh * K * V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + tl.store(p_h, b_h.to(p_h.dtype.element_ty), boundary_check=(0, 1)) + + +@triton.jit +def chunk_gla_fwd_kernel_intra( + q, + k, + g, + A, + s_k_h, + s_k_t, + s_k_d, + scale, + T: tl.constexpr, + K: tl.constexpr, + BT: tl.constexpr, + BC: tl.constexpr, + BK: tl.constexpr, + NC: tl.constexpr +): + i_k, i_c, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + i_t, i_i, i_j = i_c // (NC * NC), (i_c % (NC * NC)) // NC, (i_c % (NC * NC)) % NC + n_bh = tl.num_programs(2) + + if i_i > i_j: + p_q = tl.make_block_ptr(q + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0)) + p_g = tl.make_block_ptr(g + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0)) + p_k = tl.make_block_ptr(k + i_bh * s_k_h, (K, T), (s_k_d, s_k_t), (i_k * BK, i_t * BT + i_j * BC), (BK, BC), (0, 1)) + p_gk = tl.make_block_ptr(g + i_bh * s_k_h, (K, T), (s_k_d, s_k_t), (i_k * BK, i_t * BT + i_j * BC), (BK, BC), (0, 1)) + p_gn = tl.make_block_ptr(g + i_bh * s_k_h, (T * K,), (s_k_d,), ((i_t * BT + i_i * BC) * K + i_k * BK,), (BK,), (0,)) + p_A = tl.make_block_ptr(A + (i_k*n_bh+i_bh)*T*BT, (T, BT), (BT, 1), (i_t * BT + i_i * BC, i_j * BC), (BC, BC), (1, 0)) + # [BK,] + b_gn = tl.load(p_gn, boundary_check=(0,)) + # [BC, BK] + b_q = tl.load(p_q, boundary_check=(0, 1)) + b_g = tl.load(p_g, boundary_check=(0, 1)) + b_qg = (b_q * tl.exp(b_g - b_gn[None, :]) * scale).to(b_q.dtype) + # [BK, BC] + b_k = tl.load(p_k, boundary_check=(0, 1)) + b_gk = tl.load(p_gk, boundary_check=(0, 1)) + b_kg = (b_k * tl.exp(b_gn[:, None] - b_gk)).to(b_k.dtype) + # [BC, BC] + b_A = tl.dot(b_qg, b_kg, allow_tf32=False) + tl.store(p_A, b_A.to(A.dtype.element_ty), 
boundary_check=(0, 1)) + elif i_i == i_j: + p_q = tl.make_block_ptr(q + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0)) + p_g = tl.make_block_ptr(g + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0)) + p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T * K,), (s_k_d,), ((i_t * BT + i_j * BC) * K + i_k * BK,), (BK,), (0,)) + p_gk = tl.make_block_ptr(g + i_bh * s_k_h, (T * K,), (s_k_d,), ((i_t * BT + i_j * BC) * K + i_k * BK,), (BK,), (0,)) + # [BC, BK] + b_q = tl.load(p_q, boundary_check=(0, 1)) + b_g = tl.load(p_g, boundary_check=(0, 1)) + + o_i = tl.arange(0, BC) + o_A = (i_bh + i_k * n_bh) * T * BT + (i_t * BT + i_i * BC + tl.arange(0, BC)) * BT + i_j * BC + m_A = (i_t * BT + i_i * BC + tl.arange(0, BC)) < T + for j in range(0, BC): + # [BK,] + b_k = tl.load(p_k, boundary_check=(0,)).to(tl.float32) + b_gk = tl.load(p_gk, boundary_check=(0,)).to(tl.float32) + # [BC,] + b_A = tl.sum(b_q * b_k[None, :] * tl.exp(b_g - b_gk[None, :]) * scale, 1) + b_A = tl.where(o_i >= j, b_A, 0.) 
+ tl.store(A + o_A + j, b_A.to(b_q.dtype), mask=m_A) + + p_k = tl.advance(p_k, (K,)) + p_gk = tl.advance(p_gk, (K,)) + + +@triton.jit +def chunk_gla_fwd_kernel_inter( + q, + v, + g, + h, + o, + A, + s_k_h, + s_k_t, + s_k_d, + s_v_h, + s_v_t, + s_v_d, + s_h_h, + s_h_t, + s_h_d, + scale, + T: tl.constexpr, + K: tl.constexpr, + V: tl.constexpr, + BT: tl.constexpr, + BK: tl.constexpr, + BV: tl.constexpr +): + i_v, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + + b_o = tl.zeros([BT, BV], dtype=tl.float32) + for i_k in range(tl.cdiv(K, BK)): + p_q = tl.make_block_ptr(q + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + p_g = tl.make_block_ptr(g + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + p_h = tl.make_block_ptr(h + i_bh * s_h_h + i_t * K * V, (K, V), (s_h_t, s_h_d), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + + # [BT, BK] + b_q = tl.load(p_q, boundary_check=(0, 1)) + b_q = (b_q * scale).to(b_q.dtype) + # [BT, BK] + b_g = tl.load(p_g, boundary_check=(0, 1)) + # [BT, BK] + b_qg = (b_q * tl.exp(b_g)).to(b_q.dtype) + # [BK, BV] + b_h = tl.load(p_h, boundary_check=(0, 1)) + # works but dkw, owing to divine benevolence + # [BT, BV] + if i_k >= 0: + b_o += tl.dot(b_qg, b_h, allow_tf32=False) + p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + p_o = tl.make_block_ptr(o + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + p_A = tl.make_block_ptr(A + i_bh * T * BT, (T, BT), (BT, 1), (i_t * BT, 0), (BT, BT), (1, 0)) + # [BT, BV] + b_v = tl.load(p_v, boundary_check=(0, 1)) + # [BT, BT] + b_A = tl.load(p_A, boundary_check=(0, 1)) + b_o += tl.dot(b_A, b_v, allow_tf32=False) + tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0, 1)) + + +@triton.jit +def chunk_gla_bwd_kernel_dh( + q, + g, + do, + dh, + s_k_h, + s_k_t, + s_k_d, + s_v_h, + s_v_t, + s_v_d, + s_h_h, + s_h_t, + s_h_d, + scale, + T: 
tl.constexpr, + K: tl.constexpr, + V: tl.constexpr, + BT: tl.constexpr, + BK: tl.constexpr, + BV: tl.constexpr, + NT: tl.constexpr +): + i_k, i_v, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + + b_dh = tl.zeros([BK, BV], dtype=tl.float32) + for i_t in range(NT - 1, -1, -1): + p_q = tl.make_block_ptr(q + i_bh * s_k_h, (K, T), (s_k_d, s_k_t), (i_k * BK, i_t * BT), (BK, BT), (0, 1)) + p_do = tl.make_block_ptr(do + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + p_dh = tl.make_block_ptr(dh + i_bh * s_h_h + i_t * K*V, (K, V), (s_h_t, s_h_d), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + p_g = tl.make_block_ptr(g + i_bh * s_k_h, (K, T), (s_k_d, s_k_t), (i_k * BK, i_t * BT), (BK, BT), (0, 1)) + p_gn = tl.make_block_ptr(g + i_bh * s_k_h, (T * K,), (s_k_d,), ((i_t * BT + BT - 1) * K + i_k * BK,), (BK,), (0,)) + + # [BK, BT] + b_q = tl.load(p_q, boundary_check=(0, 1)) + b_q = (b_q * scale).to(b_q.dtype) + # [BT, BV] + b_do = tl.load(p_do, boundary_check=(0, 1)) + + tl.store(p_dh, b_dh.to(p_dh.dtype.element_ty), boundary_check=(0, 1)) + + # [BK,] + b_gn = tl.load(p_gn, boundary_check=(0,)) + # [BK, BV] + b_dh *= tl.exp(b_gn)[:, None] + # [BK, BT] + b_g = tl.load(p_g, boundary_check=(0, 1)) + b_q = (b_q * tl.exp(b_g)).to(b_q.dtype) + + # [BK, BV] + b_dh += tl.dot(b_q, b_do, allow_tf32=False) + + +@triton.jit +def chunk_gla_bwd_kernel_inter( + k, + v, + h, + g, + A, + do, + dh, + dq, + dk, + dv, + dA, + s_k_h, + s_k_t, + s_k_d, + s_v_h, + s_v_t, + s_v_d, + s_h_h, + s_h_t, + s_h_d, + scale, + T: tl.constexpr, + K: tl.constexpr, + V: tl.constexpr, + BT: tl.constexpr, + BK: tl.constexpr, + BV: tl.constexpr +): + i_k, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + n_bh = tl.num_programs(2) + + p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + p_gk = tl.make_block_ptr(g + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + p_gn = 
tl.make_block_ptr(g + i_bh * s_k_h, (T * K,), (s_k_d,), ((i_t * BT + BT - 1) * K + i_k * BK,), (BK,), (0,)) + p_A = tl.make_block_ptr(A + i_bh * T * BT, (BT, T), (1, BT), (0, i_t * BT), (BT, BT), (0, 1)) + + # [BT, BK] + b_k = tl.load(p_k, boundary_check=(0, 1)) + b_gk = tl.load(p_gk, boundary_check=(0, 1)) + b_gn = tl.exp(tl.load(p_gn, boundary_check=(0,))[None, :] - b_gk) + b_k = (b_k * b_gn).to(b_k.dtype) + # [BT, BT] + b_A = tl.load(p_A, boundary_check=(0, 1)) + + b_dq = tl.zeros([BT, BK], dtype=tl.float32) + b_dk = tl.zeros([BT, BK], dtype=tl.float32) + b_dA = tl.zeros([BT, BT], dtype=tl.float32) + for i_v in range(tl.cdiv(V, BV)): + p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + p_h = tl.make_block_ptr(h + i_bh * s_h_h + i_t * V * K, (V, K), (s_h_d, s_h_t), (i_v * BV, i_k * BK), (BV, BK), (0, 1)) + p_do = tl.make_block_ptr(do + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + p_dh = tl.make_block_ptr(dh + i_bh * s_h_h + i_t * K*V, (K, V), (s_h_t, s_h_d), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + p_dv = tl.make_block_ptr(dv + (i_k*n_bh+i_bh) * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + + # [BT, BV] + b_v = tl.load(p_v, boundary_check=(0, 1)) + # [BV, BK] + b_h = tl.load(p_h, boundary_check=(0, 1)) + # [BT, BV] + b_do = tl.load(p_do, boundary_check=(0, 1)) + # [BK, BV] + b_dh = tl.load(p_dh, boundary_check=(0, 1)) + + # [BT, BV] + b_dv = tl.dot(b_k, b_dh, allow_tf32=False) + if i_k == 0: + b_dv += tl.dot(b_A, b_do, allow_tf32=False) + b_do = (b_do * scale).to(b_do.dtype) + tl.store(p_dv, b_dv.to(p_dv.dtype.element_ty), boundary_check=(0, 1)) + # [BT, BT] + b_dA += tl.dot(b_do, tl.trans(b_v), allow_tf32=False) + # [BT, BK] + b_dq += tl.dot(b_do, b_h, allow_tf32=False) + # [BT, BK] + b_dk += tl.dot(b_v, tl.trans(b_dh), allow_tf32=False) + b_dq = b_dq * tl.exp(b_gk) + b_dk = b_dk * b_gn + + p_dq = tl.make_block_ptr(dq + i_bh * s_k_h, (T, K), 
(s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + p_dk = tl.make_block_ptr(dk + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + p_dA = tl.make_block_ptr(dA + i_bh * T * BT, (T, BT, ), (BT, 1), (i_t * BT, 0), (BT, BT), (1, 0)) + tl.store(p_dq, b_dq.to(p_dq.dtype.element_ty), boundary_check=(0, 1)) + tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), boundary_check=(0, 1)) + + o_i = tl.arange(0, BT) + m_s = o_i[:, None] >= o_i[None, :] + # [BT, BT] + b_dA = tl.where(m_s, b_dA, 0.).to(b_k.dtype) + if i_k == 0: + tl.store(p_dA, b_dA.to(p_dA.dtype.element_ty), boundary_check=(0, 1)) + + +@triton.jit +def chunk_gla_bwd_kernel_intra( + q, + k, + g, + dA, + dq, + dk, + dg, + s_k_h, + s_k_t, + s_k_d, + T: tl.constexpr, + K: tl.constexpr, + BT: tl.constexpr, + BC: tl.constexpr, + BK: tl.constexpr, + NC: tl.constexpr +): + i_k, i_c, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + i_t, i_i = i_c // NC, i_c % NC + + p_g = tl.make_block_ptr(g + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0)) + p_gn = tl.make_block_ptr(g + i_bh * s_k_h, (T * K,), (s_k_d,), ((i_t * BT + i_i * BC) * K + i_k * BK,), (BK,), (0,)) + # [BK,] + b_gn = tl.load(p_gn, boundary_check=(0,)) + # [BC, BK] + b_g = tl.load(p_g, boundary_check=(0, 1)) + b_dq = tl.zeros([BC, BK], dtype=tl.float32) + for i_j in range(0, i_i): + p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_j * BC, i_k * BK), (BC, BK), (1, 0)) + p_gk = tl.make_block_ptr(g + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_j * BC, i_k * BK), (BC, BK), (1, 0)) + p_dA = tl.make_block_ptr(dA + i_bh * T * BT, (T, BT), (BT, 1), (i_t * BT + i_i * BC, i_j * BC), (BC, BC), (1, 0)) + # [BC, BK] + b_k = tl.load(p_k, boundary_check=(0, 1)) + b_gk = tl.load(p_gk, boundary_check=(0, 1)) + b_kg = (b_k * tl.exp(b_gn[None, :] - b_gk)).to(b_k.dtype) + # [BC, BC] + b_dA = tl.load(p_dA, boundary_check=(0, 1)) + # [BC, BK] + b_dq += 
tl.dot(b_dA, b_kg, allow_tf32=False) + b_dq *= tl.exp(b_g - b_gn[None, :]) + + o_i = tl.arange(0, BC) + o_dA = i_bh * T * BT + (i_t * BT + i_i * BC + tl.arange(0, BC)) * BT + i_i * BC + m_dA = (i_t * BT + i_i * BC + tl.arange(0, BC)) < T + for j in range(0, BC): + p_kj = tl.make_block_ptr(k + i_bh * s_k_h, (T * K,), (1,), ((i_t * BT + i_i*BC+j) * K + i_k * BK,), (BK,), (0,)) + p_gkj = tl.make_block_ptr(g + i_bh * s_k_h, (T * K,), (1,), ((i_t * BT + i_i*BC+j) * K + i_k * BK,), (BK,), (0,)) + # [BC,] + b_dA = tl.load(dA + o_dA + j, mask=m_dA, other=0) + # [BK,] + b_kj = tl.load(p_kj, boundary_check=(0,)).to(tl.float32) + b_gkj = tl.load(p_gkj, boundary_check=(0,)).to(tl.float32) + # [BC, BK] + m_i = o_i[:, None] >= j + # [BC, BK] + b_dq += tl.where(m_i, b_dA[:, None] * b_kj[None, :] * tl.exp(b_g - b_gkj[None, :]), 0.) + p_dq = tl.make_block_ptr(dq + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0)) + + b_dq = b_dq + tl.load(p_dq, boundary_check=(0, 1)) + tl.store(p_dq, b_dq.to(p_dq.dtype.element_ty), boundary_check=(0, 1)) + + tl.debug_barrier() + p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0)) + p_gk = tl.make_block_ptr(g + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0)) + p_gn = tl.make_block_ptr(g + i_bh * s_k_h, (T*K,), (s_k_d,), ((i_t * BT + i_i * BC + BC - 1) * K + i_k * BK,), (BK,), (0,)) + # [BK,] + b_gn = tl.load(p_gn, boundary_check=(0,)) + # [BC, BK] + b_k = tl.load(p_k, boundary_check=(0, 1)) + b_gk = tl.load(p_gk, boundary_check=(0, 1)) + b_dk = tl.zeros([BC, BK], dtype=tl.float32) + for i_j in range(i_i + 1, NC): + p_q = tl.make_block_ptr(q + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_j * BC, i_k * BK), (BC, BK), (1, 0)) + p_g = tl.make_block_ptr(g + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_j * BC, i_k * BK), (BC, BK), (1, 0)) + p_dA = tl.make_block_ptr(dA + i_bh * T * BT, (T, BT), (BT, 
1), (i_t * BT + i_j * BC, i_i * BC), (BC, BC), (1, 0)) + # [BC, BK] + b_q = tl.load(p_q, boundary_check=(0, 1)) + b_g = tl.load(p_g, boundary_check=(0, 1)) + b_qg = (b_q * tl.exp(b_g - b_gn[None, :])).to(b_q.dtype) + # [BC, BC] + b_dA = tl.load(p_dA, boundary_check=(0, 1)) + # [BC, BK] + b_dk += tl.dot(tl.trans(b_dA), b_qg, allow_tf32=False) + b_dk *= tl.exp(b_gn[None, :] - b_gk) + + o_dA = i_bh * T * BT + (i_t * BT + i_i * BC) * BT + i_i * BC + tl.arange(0, BC) + for j in range(0, BC): + p_qj = tl.make_block_ptr(q + i_bh * s_k_h, (T * K,), (1,), ((i_t * BT + i_i * BC + j) * K + i_k * BK,), (BK,), (0,)) + p_gqj = tl.make_block_ptr(g + i_bh * s_k_h, (T * K,), (1,), ((i_t * BT + i_i * BC + j) * K + i_k * BK,), (BK,), (0,)) + # [BC,] + b_dA = tl.load(dA + o_dA + j * BT, mask=(i_t * BT + i_i * BC + j < T), other=0) + # [BK,] + b_qj = tl.load(p_qj, boundary_check=(0,)).to(tl.float32) + b_gqj = tl.load(p_gqj, boundary_check=(0,)).to(tl.float32) + # [BC, BK] + m_i = o_i[:, None] <= j + b_dk += tl.where(m_i, b_dA[:, None] * b_qj[None, :] * tl.exp(b_gqj[None, :] - b_gk), 0.) 
+ + p_q = tl.make_block_ptr(q + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0)) + p_dk = tl.make_block_ptr(dk + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0)) + p_dg = tl.make_block_ptr(dg + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0)) + + b_q = tl.load(p_q, boundary_check=(0, 1)) + b_dk = b_dk + tl.load(p_dk, boundary_check=(0, 1)) + b_dg = b_q * b_dq - b_k * b_dk + tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), boundary_check=(0, 1)) + tl.store(p_dg, b_dg.to(p_dg.dtype.element_ty), boundary_check=(0, 1)) + + +class ChunkGLAFunction(torch.autograd.Function): + + @staticmethod + @contiguous + def forward(ctx, q, k, v, g, scale, initial_state, output_final_state, checkpoint_level): + B, H, T, K, V = *q.shape, v.shape[-1] + BT, BC = 64, 16 + BK = min(64, triton.next_power_of_2(K)) + BV = min(64, triton.next_power_of_2(V)) + NT, NC = triton.cdiv(T, BT), triton.cdiv(BT, BC) + NK = triton.cdiv(K, BK) + NV = triton.cdiv(V, BV) + num_warps = 4 if BK == 64 else 2 + num_stages = 1 + + def fwd_inner(q, k, v, g, B, H, T, K, V, BT, BK, BV, NT, h0=None, ht=None): + NK, NV = triton.cdiv(K, BK), triton.cdiv(V, BV) + h = q.new_empty(B, H, NT * K, V) + grid = (NV, NK, B * H) + chunk_gla_fwd_kernel_h[grid]( + k, v, g, h, h0, ht, + k.stride(1), k.stride(2), k.stride(3), + v.stride(1), v.stride(2), v.stride(3), + h.stride(1), h.stride(2), h.stride(3), + T=T, K=K, V=V, BT=BT, BK=BK, BV=BV, NT=NT, + USE_INITIAL_STATE=h0 is not None, + STORE_FINAL_STATE=ht is not None, + num_warps=num_warps, + num_stages=num_stages + ) + return h + + final_state = None + if output_final_state: + final_state = q.new_empty(B, H, K, V, dtype=torch.float) + + g_org, g = g, torch.empty_like(g, dtype=torch.float) + def grid(meta): return ((triton.cdiv(meta['S'], meta['BS']), NT, B * H)) + # keep cumulative normalizer in fp32 + # this kernel is equivalent to + # g = g.view(B, H, NT, 
BT, -1).cumsum(-2).view(B, H, T, -1) + chunk_gla_fwd_kernel_cum[grid]( + g_org, g, + g.stride(1), g.stride(2), g.stride(3), + T=T, S=K, BT=BT + ) + h = fwd_inner( + q=q, k=k, v=v, g=g, + B=B, H=H, T=T, K=K, V=V, BT=BT, BK=BK, BV=BV, NT=NT, + h0=initial_state if initial_state is not None else None, + ht=final_state if final_state is not None else None + ) + A = q.new_zeros(NK, B, H, T, BT) + grid = (NK, NT * NC * NC, B * H) + chunk_gla_fwd_kernel_intra[grid]( + q, k, g, A, + k.stride(1), k.stride(2), k.stride(3), + scale, + T=T, K=K, BT=BT, BC=BC, BK=BK, NC=NC, + num_warps=num_warps, + num_stages=num_stages + ) + A = A.sum(0, dtype=A.dtype) + o = torch.empty_like(v) + grid = (NV, NT, B * H) + chunk_gla_fwd_kernel_inter[grid]( + q, v, g, h, o, A, + k.stride(1), k.stride(2), k.stride(3), + v.stride(1), v.stride(2), v.stride(3), + h.stride(1), h.stride(2), h.stride(3), + scale, + T=T, K=K, V=V, BT=BT, BK=BK, BV=BV, + num_warps=num_warps, + num_stages=num_stages + ) + if checkpoint_level >= 1: + del g + g = g_org + if checkpoint_level > 1: + del h + h, initial_state = None, None + + ctx.save_for_backward(q, k, v, g, h, initial_state, A) + ctx.BT = BT + ctx.scale = scale + ctx.checkpoint_level = checkpoint_level + return o, final_state + + @staticmethod + @contiguous + def backward(ctx, do, dht=None): + q, k, v, g, h, initial_state, A = ctx.saved_tensors + B, H, T, K, V = *q.shape, v.shape[-1] + BT, BC = ctx.BT, 16 + BK = min(64, triton.next_power_of_2(K)) + BV = min(64, triton.next_power_of_2(V)) + NT, NC = triton.cdiv(T, BT), triton.cdiv(BT, BC) + NK = triton.cdiv(K, BK) + num_warps = 4 if BK == 64 else 2 + num_stages = 1 + + def fwd_inner(q, k, v, g, B, H, T, K, V, BT, BK, BV, NT, h0=None, ht=None): + NK, NV = triton.cdiv(K, BK), triton.cdiv(V, BV) + h = q.new_empty(B, H, NT * K, V) + grid = (NV, NK, B * H) + chunk_gla_fwd_kernel_h[grid]( + k, v, g, h, h0, ht, + k.stride(1), k.stride(2), k.stride(3), + v.stride(1), v.stride(2), v.stride(3), + h.stride(1), h.stride(2), 
h.stride(3), + T=T, K=K, V=V, BT=BT, BK=BK, BV=BV, NT=NT, + USE_INITIAL_STATE=h0 is not None, + STORE_FINAL_STATE=ht is not None, + num_warps=num_warps, + num_stages=num_stages + ) + return h + + def bwd_inner(q, g, do, B, H, T, K, V, BT, BK, BV, NT, scale): + NK, NV = triton.cdiv(K, BK), triton.cdiv(V, BV) + dh = q.new_empty(B, H, NT * K, V) + grid = (NK, NV, B * H) + chunk_gla_bwd_kernel_dh[grid]( + q, g, do, dh, + q.stride(1), q.stride(2), q.stride(3), + do.stride(1), do.stride(2), do.stride(3), + dh.stride(1), dh.stride(2), dh.stride(3), + scale, + T=T, K=K, V=V, BT=BT, BK=BK, BV=BV, NT=NT, + num_warps=num_warps, + num_stages=num_stages + ) + return dh + + if ctx.checkpoint_level >= 1: + # g_org was saved in the forward pass; recompute its fp32 cumsum here to save memory + g_org, g = g, torch.zeros_like(g, dtype=torch.float) + def grid(meta): return ((triton.cdiv(meta['S'], meta['BS']), NT, B * H)) + # keep cumulative normalizer in fp32 + # this kernel is equivalent to + # g = g.view(B, H, NT, BT, -1).cumsum(-2).view(B, H, T, -1) + chunk_gla_fwd_kernel_cum[grid]( + g_org, g, + g.stride(1), g.stride(2), g.stride(3), + T=T, S=K, BT=BT + ) + + # rerun the forward pass to get h if checkpoint_level > 1 + if ctx.checkpoint_level > 1: + h = fwd_inner( + q=q, k=k, v=v, g=g, + B=B, H=H, T=T, K=K, V=V, BT=BT, BK=BK, BV=BV, NT=NT, + h0=initial_state if initial_state is not None else None, + ht=None + ) + + scale = ctx.scale + dh = bwd_inner( + q, g, do, + B=B, H=H, T=T, K=K, V=V, BT=BT, BK=BK, BV=BV, NT=NT, + scale=scale + ) + dq = torch.empty_like(q, dtype=torch.float) + dk = torch.empty_like(k, dtype=torch.float) + dg = torch.empty_like(k, dtype=torch.float) + dv = v.new_empty(NK, *v.shape) + dA = q.new_zeros(B, H, T, BT) + grid = (NK, NT, B * H) + chunk_gla_bwd_kernel_inter[grid]( + k, v, h, g, A, do, dh, dq, dk, dv, dA, + k.stride(1), k.stride(2), k.stride(3), + v.stride(1), v.stride(2), v.stride(3), + h.stride(1), h.stride(2), h.stride(3), + scale, + 
T=T, K=K, V=V, BT=BT, BK=BK, BV=BV, + num_warps=num_warps, + num_stages=num_stages + ) + dv = dv.sum(0, dtype=dv.dtype) + grid = (NK, NT * NC, B * H) + chunk_gla_bwd_kernel_intra[grid]( + q, k, g, dA, dq, dk, dg, + k.stride(1), k.stride(2), k.stride(3), + T=T, K=K, BT=BT, BC=BC, BK=BK, NC=NC, + num_warps=num_warps, + num_stages=num_stages + ) + + dq = dq.to(q.dtype) + dk = dk.to(q.dtype) + # reversed cumsum, equivalent to: + # + # def reversed_cumsum(x, dim=-1): + # c = x.cumsum(dim) + # return x + c.index_select(dim, x.new_tensor([c.shape[dim]-1], dtype=torch.long)) - c + dg = chunk_reversed_cumsum_fwd(dg).to(k.dtype) + return dq, dk, dv, dg, None, None, None, None + + +def chunk_gla( + q: torch.Tensor, + k: torch.Tensor, + v: torch.Tensor, + g: torch.Tensor, + scale: Optional[float] = None, + initial_state: torch.Tensor = None, + output_final_state: bool = False, + checkpoint_level: Optional[int] = 2 +) -> Tuple[torch.Tensor, torch.Tensor]: + r""" + Args: + q (torch.Tensor): + queries of shape `(B, H, T, K)` + k (torch.Tensor): + keys of shape `(B, H, T, K)` + v (torch.Tensor): + values of shape `(B, H, T, V)` + g (torch.Tensor): + Forget gates of shape `(B, H, T, K)` applied to keys. + scale (Optional[float]): + Scale factor for the GLA attention scores. + If not provided, it will default to `1 / sqrt(K)`. Default: `None`. + initial_state (Optional[torch.Tensor]): + Initial state of shape `(B, H, K, V)`. Default: `None`. + output_final_state (Optional[bool]): + Whether to output the final state of shape `(B, H, K, V)`. Default: `False`. + checkpoint_level (Optional[int]): + Checkpointing level; higher values save more memory at the cost of more recomputation during the backward pass. + Default: `2`: + - Level `0`: no memory saved, no recomputation. + - Level `1`: recompute the fp32 cumulative values during backward. + - Level `2`: recompute the fp32 cumulative values and forward hidden states during backward. 
+ """ + assert checkpoint_level in [0, 1, 2] + if scale is None: + scale = q.shape[-1] ** -0.5 + if initial_state is not None: + initial_state = initial_state.detach() + o, final_state = ChunkGLAFunction.apply(q, k, v, g, scale, initial_state, output_final_state, checkpoint_level) + return o, final_state diff --git a/fla/ops/gla/chunk_fuse.py b/fla/ops/gla/chunk_fuse.py new file mode 100644 index 0000000000000000000000000000000000000000..affbcf22f100dcc00a78248d51b63815e76440fa --- /dev/null +++ b/fla/ops/gla/chunk_fuse.py @@ -0,0 +1,548 @@ +# -*- coding: utf-8 -*- + +# Copyright (c) 2023, Songlin Yang +# Gated Linear Attention Transformers with Hardware-Efficient Training: https://arxiv.org/abs/2312.06635 +# on-the-fly computation without materializing hidden statets into HBMs + +from typing import Tuple + +import torch +import torch.nn.functional as F +import triton +import triton.language as tl +from einops import rearrange +from packaging import version +from torch.cuda.amp import custom_bwd, custom_fwd + +from fla.ops.gla.chunk_util import (bwd_decay_global_cumsum, fwd_decay_cumsum, + prepare_qg_kg) +from fla.utils import contiguous + +inv_ln2 = 1.44269504 + +@triton.jit +def fused_chunk_gla_fwd_kernel( + # B: batch_size, H: n_heads, T: seq_len, D: d_head + q, # query [B, H, L, D_head_K] + k, # key [B, H, L, D_head_K] + v, # value [B, H, L, D_head_V] + g, # cumulative sum of log decay [B, H, L, D_head_K] + o, # output [B, H, L, D_head_V] + + initial_state, # initial state of the chunk [B, H, D_head_K, D_head_V] + final_state, # final state of the chunk [B, H, D_head_K, D_head_V] + + s_qk_h, # stride size: L * D_head_K + s_qk_t, # stride size: D_head_K + s_qk_d, # stride size: 1 + + s_vo_h, # stride size: L * D_head_V + s_vo_t, # stride size: D_head_V + s_vo_d, # stride size: 1 + + B, # batch size + H, # n_heads + T, # seq_len + scale, # D_head_K ** -0.5 + BT: tl.constexpr, # BLOCK SIZE along the sequence dimension, a.k.a. 
chunk size + BK: tl.constexpr, # BLOCK SIZE along the K dimension + BV: tl.constexpr, # BLOCK SIZE along the V dimension + DK: tl.constexpr, # D_head_K + DV: tl.constexpr, # D_head_V + USE_INITIAL_STATE: tl.constexpr, + STORE_FINAL_STATE: tl.constexpr, + CHECK: tl.constexpr +): + # indices + i_v, i_k, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + + b_h = tl.zeros([BK, BV], dtype=tl.float32) + + # make block pointers + p_q = tl.make_block_ptr(q + i_bh * s_qk_h, (T, DK), (s_qk_t, s_qk_d), (0, i_k * BK), (BT, BK), (1, 0)) + p_db = g + i_bh * s_qk_h + (BT - 1) * s_qk_t + i_k * BK + tl.arange(0, BK) + p_k = tl.make_block_ptr(k + i_bh * s_qk_h, (DK, T), (s_qk_d, s_qk_t), (i_k * BK, 0), (BK, BT), (0, 1)) + p_v = tl.make_block_ptr(v + i_bh * s_vo_h, (T, DV), (s_vo_t, s_vo_d), (0, i_v * BV), (BT, BV), (1, 0)) + p_o = tl.make_block_ptr(o + (i_bh + i_k * B * H) * s_vo_h, (T, DV), (s_vo_t, s_vo_d), (0, i_v * BV), (BT, BV), (1, 0)) + + if USE_INITIAL_STATE: + p_h = tl.make_block_ptr(initial_state + i_bh * DK * DV, (DK, DV), (DV, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + b_h += tl.load(p_h, boundary_check=(0, 1)).to(tl.float32) + + mask = (i_k * BK + tl.arange(0, BK)) < DK + + for i in range(0, tl.cdiv(T, BT)): + # [BK, BT] + b_k = tl.load(p_k, boundary_check=(0, 1)) + # [BT, BV] + b_o = tl.zeros([BT, BV], dtype=tl.float32) + b_v = tl.load(p_v, boundary_check=(0, 1)) + # [BT, BK] + b_q = tl.load(p_q, boundary_check=(0, 1)) + d_b = tl.load(p_db, mask=mask, other=0).to(tl.float32) + if CHECK and i == 0: + b_o = tl.dot(b_q.to(b_v.dtype), b_h.to(b_v.dtype), allow_tf32=False) + b_h = b_h * tl.math.exp2(d_b)[:, None] + tl.dot(b_k.to(b_v.dtype), b_v, allow_tf32=False) + else: + b_o = tl.dot(b_q.to(b_v.dtype), b_h.to(b_v.dtype), allow_tf32=False) + b_h = b_h * tl.math.exp2(d_b)[:, None] + tl.dot(b_k.to(b_v.dtype), b_v, allow_tf32=False) + + tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0, 1)) + p_q = tl.advance(p_q, (BT, 0)) + p_k = tl.advance(p_k, (0, 
BT)) + p_v = tl.advance(p_v, (BT, 0)) + p_o = tl.advance(p_o, (BT, 0)) + p_db += BT * DK + + if STORE_FINAL_STATE: + p_final = tl.make_block_ptr(final_state + i_bh * DK * DV, (DK, DV), (DV, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + tl.store(p_final, b_h.to(p_final.dtype.element_ty), boundary_check=(0, 1)) + + +# Similar to Algorithm1 of https://arxiv.org/abs/2006.16236 +@triton.jit +def fused_chunk_gla_bwd_kernel( + q, k, v, g, + do, # gradient of output [B, H, L, D_head_V] + dq, # gradient of query [NV, B, H, L, D_head_K] + dk, # gradient of key [NV, B, H, L, D_head_K] + dv, # gradient of value [NK, B, H, L, D_head_V] + + initial_state, # initial state of the chunk [B, H, D_head_K, D_head_V] + + s_qk_h, # stride size: L * D_head_K + s_qk_t, # stride size: D_head_K + s_qk_d, # stride size: 1 + + s_vo_h, # stride size: L * D_head_V + s_vo_t, # stride size: D_head_V + s_vo_d, # stride size: 1 + + B, # batch_size + H, # n_heads + T, # seq_len + scale, # D_head_K ** -0.5 + # clamp_min, # minimum log value of the gate for numerical stability. default: -5 + BT: tl.constexpr, # BLOCK SIZE along the sequence dimension, a.k.a. 
chunk size + BK: tl.constexpr, # BLOCK SIZE along the K dimension + BV: tl.constexpr, # BLOCK SIZE along the V dimension + DK: tl.constexpr, # D_head_K + DV: tl.constexpr, # D_head_V + USE_INITIAL_STATE: tl.constexpr, + CHECK: tl.constexpr +): + i_v, i_k, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + # [BV, BK] + b_h = tl.zeros([BV, BK], dtype=tl.float32) + + if USE_INITIAL_STATE: + p_h = tl.make_block_ptr(initial_state + i_bh * DK * DV, (DV, DK), (1, DV), (i_v * BV, i_k * BK), (BV, BK), (0, 1)) + b_h += tl.load(p_h, boundary_check=(0, 1)).to(tl.float32) + + mask = (i_k * BK + tl.arange(0, BK)) < DK + for i in range(0, tl.cdiv(T, BT)): + p_k = tl.make_block_ptr(k + i_bh * s_qk_h, (T, DK), (s_qk_t, s_qk_d), (i * BT, i_k * BK), (BT, BK), (1, 0)) + p_db = g + i_bh * s_qk_h + ((i+1) * BT - 1) * s_qk_t + i_k * BK + tl.arange(0, BK) + p_v = tl.make_block_ptr(v + i_bh * s_vo_h, (DV, T), (s_vo_d, s_vo_t), (i_v * BV, i * BT), (BV, BT), (0, 1)) + p_do = tl.make_block_ptr(do + i_bh * s_vo_h, (T, DV), (s_vo_t, s_vo_d), (i * BT, i_v * BV), (BT, BV), (1, 0)) + p_dq = tl.make_block_ptr(dq + (i_bh+i_v*B*H)*s_qk_h, (T, DK), (s_qk_t, s_qk_d), (i * BT, i_k * BK), (BT, BK), (1, 0)) + b_dq = tl.zeros([BT, BK], dtype=tl.float32) + # [BT, DK] + b_k = tl.load(p_k, boundary_check=(0, 1)) + # b_g = tl.load(p_g, boundary_check=(0, 1)) * inv_ln2 + d_b = tl.load(p_db, mask=mask, other=0).to(tl.float32) + + # [DV, BT] + b_v = tl.load(p_v, boundary_check=(0, 1)) + # [BT, DV] + b_do = tl.load(p_do, boundary_check=(0, 1)) + # [DV, DK] + if CHECK and i == 0: + b_dq += tl.dot(b_do, b_h.to(b_do.dtype), allow_tf32=False) + b_h = b_h * tl.math.exp2(d_b)[None, :] + tl.dot(b_v, b_k.to(b_v.dtype), allow_tf32=False) + else: + b_dq += tl.dot(b_do, b_h.to(b_do.dtype), allow_tf32=False) + b_h = b_h * tl.math.exp2(d_b)[None, :] + tl.dot(b_v, b_k.to(b_v.dtype), allow_tf32=False) + b_dq *= scale + tl.store(p_dq, b_dq.to(p_dq.dtype.element_ty), boundary_check=(0, 1)) + + # sync threads + b_h = 
None + tl.debug_barrier() + # [BK, BV] + b_dh = tl.zeros([BK, BV], dtype=tl.float32) + + # cum = tl.zeros([BK], dtype=tl.float32) + for i in range(1, tl.cdiv(T, BT) + 1): + p_q = tl.make_block_ptr(q + i_bh * s_qk_h, (DK, T), (s_qk_d, s_qk_t), (i_k * BK, T - i * BT), (BK, BT), (0, 1)) + p_k = tl.make_block_ptr(k + i_bh * s_qk_h, (T, DK), (s_qk_t, s_qk_d), (T - i * BT, i_k * BK), (BT, BK), (1, 0)) + p_db = g + i_bh * s_qk_h + (T - (i-1) * BT - 1) * s_qk_t + i_k * BK + tl.arange(0, BK) + p_v = tl.make_block_ptr(v + i_bh * s_vo_h, (T, DV), (s_vo_t, s_vo_d), (T - i * BT, i_v * BV), (BT, BV), (1, 0)) + p_do = tl.make_block_ptr(do + i_bh * s_vo_h, (T, DV), (s_vo_t, s_vo_d), (T - i * BT, i_v * BV), (BT, BV), (1, 0)) + p_dk = tl.make_block_ptr(dk + (i_bh + i_v * B * H) * s_qk_h, (T, DK), + (s_qk_t, s_qk_d), (T - i * BT, i_k * BK), (BT, BK), (1, 0)) + p_dv = tl.make_block_ptr(dv + (i_bh + i_k * B * H) * s_vo_h, (T, DV), + (s_vo_t, s_vo_d), (T - i * BT, i_v * BV), (BT, BV), (1, 0)) + # [DK, BT] + b_q = tl.load(p_q, boundary_check=(0, 1)) + # [BT, DK] + b_k = tl.load(p_k, boundary_check=(0, 1)) + # [BT, DV] + b_v = tl.load(p_v, boundary_check=(0, 1)) + b_do = tl.load(p_do, boundary_check=(0, 1)) + b_db = tl.load(p_db, mask=mask, other=0).to(tl.float32) + + # inter-chunk + # [DK, DV] + if CHECK and i == 1: + b_dk = tl.trans(tl.dot(b_dh.to(b_v.dtype), tl.trans(b_v), allow_tf32=False)) + b_dv = tl.dot((b_k).to(b_v.dtype), b_dh.to(b_v.dtype), allow_tf32=False) + b_dh = b_dh * tl.math.exp2(b_db)[:, None] + tl.dot(b_q.to(b_do.dtype), b_do, allow_tf32=False) + else: + b_dk = tl.trans(tl.dot(b_dh.to(b_v.dtype), tl.trans(b_v), allow_tf32=False)) + b_dv = tl.dot((b_k).to(b_v.dtype), b_dh.to(b_v.dtype), allow_tf32=False) + b_dh = b_dh * tl.math.exp2(b_db)[:, None] + tl.dot(b_q.to(b_do.dtype), b_do, allow_tf32=False) + + tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), boundary_check=(0, 1)) + tl.store(p_dv, b_dv.to(p_dv.dtype.element_ty), boundary_check=(0, 1)) + + +@triton.jit +def 
fwd_inner_chunk( + q, k, g, A, + s_qk_h, # stride size: L * D_head_K + s_qk_t, # stride size: D_head_K + s_qk_d, # stride size: 1 + B, # batch_size + H, # n_heads + T, # seq_len + scale, # D_head_K ** -0.5 + # clamp_min, # minimum log value of the gate for numerical stability. default: -5 + BT: tl.constexpr, # BLOCK SIZE along the sequence dimension, a.k.a. chunk size + BK: tl.constexpr, # BLOCK SIZE along the K dimension + DK: tl.constexpr, # D_head_K +): + + i_k, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + + p_k = tl.make_block_ptr(k + i_bh * s_qk_h, (T, DK), (s_qk_t, s_qk_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + + b_k = tl.load(p_k, boundary_check=(0, 1)) + + p_g = tl.make_block_ptr(g + i_bh * s_qk_h, (T, DK), (s_qk_t, s_qk_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + + b_g = tl.load(p_g, boundary_check=(0, 1)).to(tl.float32) + + mask = (i_k * BK + tl.arange(0, BK)) < DK + o_i = tl.arange(0, BT) + + p_q = q + i_bh * s_qk_h + i_k * BK + i_t * BT * DK + tl.arange(0, BK) + p_gq = g + i_bh * s_qk_h + i_k * BK + i_t * BT * DK + tl.arange(0, BK) + p_A = A + (i_bh + (i_k * B * H)) * (tl.cdiv(T, BT) * BT * BT) + i_t * BT * BT + tl.arange(0, BT) + + for i in range(BT): + _q = tl.load(p_q, mask=mask, other=0) * scale + gq = tl.load(p_gq, mask=mask, other=0).to(tl.float32) + s = _q[None, :] * b_k * tl.math.exp2(gq[None, :] - b_g) + score = tl.sum(s, axis=1) + score = tl.where(o_i <= i, score, 0) + tl.store(p_A, score.to(p_A.dtype.element_ty)) + p_q += DK + p_gq += DK + p_A += BT + + +@triton.jit +def bwd_inner_chunk( + q, + k, + g, + dA, + dq, + dk, + s_qk_h, # stride size: L * D_head_K + s_qk_t, # stride size: D_head_K + s_qk_d, # stride size: 1 + B, # batch_size + H, # n_heads + T, # seq_len + scale, # D_head_K ** -0.5 + # clamp_min, # minimum log value of the gate for numerical stability. default: -5 + BT: tl.constexpr, # BLOCK SIZE along the sequence dimension, a.k.a. 
chunk size + BK: tl.constexpr, # BLOCK SIZE along the K dimension + DK: tl.constexpr, # D_head_K +): + i_k, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + p_k = tl.make_block_ptr(k + i_bh * s_qk_h, (T, DK), (s_qk_t, s_qk_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + b_k = tl.load(p_k, boundary_check=(0, 1)) + p_g = tl.make_block_ptr(g + i_bh * s_qk_h, (T, DK), (s_qk_t, s_qk_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + b_g = tl.load(p_g, boundary_check=(0, 1)).to(tl.float32) + + mask = (i_k * BK + tl.arange(0, BK)) < DK + o_i = tl.arange(0, BT) + + p_q = q + i_bh * s_qk_h + i_k * BK + i_t * BT * DK + tl.arange(0, BK) + p_dq = dq + (i_bh) * s_qk_h + i_k * BK + i_t * BT * DK + tl.arange(0, BK) + p_gq = g + i_bh * s_qk_h + i_k * BK + i_t * BT * DK + tl.arange(0, BK) + p_dA = dA + i_bh * (tl.cdiv(T, BT) * BT * BT) + i_t * BT * BT + tl.arange(0, BT) + + b_dk = tl.zeros([BT, BK], dtype=tl.float32) + + for i in range(BT): + _q = tl.load(p_q, mask=mask, other=0) + gq = tl.load(p_gq, mask=mask, other=0).to(tl.float32) + score = tl.math.exp2(gq[None, :] - b_g) + score = tl.where(o_i[:, None] <= i, score, 0) + _dA = tl.load(p_dA) + _dA = tl.where(o_i <= i, _dA, 0) + b_dk += (_dA[:, None] * score * _q[None, :]) + b_dq = tl.sum(_dA[:, None] * score * b_k, axis=0) + tl.store(p_dq, b_dq, mask=mask) + p_q += DK + p_dq += DK + p_gq += DK + p_dA += BT + + p_dk = tl.make_block_ptr(dk + i_bh * s_qk_h, (T, DK), (s_qk_t, s_qk_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + tl.store(p_dk, b_dk.to(dk.dtype.element_ty), boundary_check=(0, 1)) + + +class FusedChunkGLAFunction(torch.autograd.Function): + + @staticmethod + @contiguous + @custom_fwd + def forward(ctx, q, k, v, g, scale, initial_state, output_final_state): + ctx.g_dtype = g.dtype + g_original = g + # cumulative decay should be in float32, otherwise the err will be accumulated and amplified. 
+ g = torch.empty_like(g, dtype=torch.float32) + batch_size, n_heads, seq_len, d_head_qk = q.shape + d_head_v = v.shape[-1] + ctx.scale = scale + + # inter-chunk + BT = 16 # chunk_size + BK, BV = min(d_head_qk, 64), min(d_head_v, 64) + num_stages = 1 + num_warps = 2 + + NK, NV = triton.cdiv(d_head_qk, BK), triton.cdiv(d_head_v, BV) + o = q.new_empty(NK, batch_size, n_heads, seq_len, d_head_v) + q_g = torch.empty_like(q) + k_g = torch.empty_like(k) + grid = (NK, triton.cdiv(seq_len, BT), batch_size * n_heads) + fwd_decay_cumsum[grid]( + g_original, + g, + q.stride(1), q.stride(2), q.stride(3), + batch_size, n_heads, seq_len, scale, + BT=BT, BK=BK, DK=d_head_qk, num_warps=1 + ) + prepare_qg_kg[grid]( + q, k, g, q_g, k_g, + q.stride(1), q.stride(2), q.stride(3), + batch_size, n_heads, seq_len, scale, + BT=BT, BK=BK, DK=d_head_qk, num_warps=1 + ) + + if output_final_state: + final_state = q.new_empty(batch_size, n_heads, d_head_qk, d_head_v, dtype=torch.float, requires_grad=False) + else: + final_state = None + # the bug still exists even for Triton 2.2 on H100 GPUs + # so we always enable initial checks + CHECK = True + if version.parse(triton.__version__) < version.parse('2.2.0'): + import warnings + warnings.warn( + "Triton<2.2.0 detected for running this kernel, " + "which is known to have some weird compiler issues (refer to https://github.com/openai/triton/issues/2852) " + "that lead to significant precision loss. " + "We've added some initial condition checks to resolve this, at the cost of some speed. " + "For optimal performance, it is recommended to install Triton>=2.2.0 (if possible)." 
+ ) + CHECK = True + + grid = (NV, NK, batch_size * n_heads) + fused_chunk_gla_fwd_kernel[grid]( + q_g, k_g, v, g, o, initial_state, final_state, + q.stride(1), q.stride(2), q.stride(3), + v.stride(1), v.stride(2), v.stride(3), + batch_size, n_heads, seq_len, scale, + BT=BT, DK=d_head_qk, DV=d_head_v, BK=BK, BV=BV, + USE_INITIAL_STATE=initial_state is not None, + STORE_FINAL_STATE=output_final_state, + CHECK=CHECK, + num_warps=num_warps, + num_stages=num_stages + ) + + o = o.sum(0) + + # intra-chunk + chunk_size = 16 + num_chunk = seq_len // chunk_size + v2 = rearrange(v, 'b h (n c) d -> b h n c d', n=num_chunk) + BK = min(d_head_qk, 64) + NK = triton.cdiv(d_head_qk, BK) + A = q.new_empty(NK, batch_size, n_heads, triton.cdiv(seq_len, BT), BT, BT) + grid = (NK, triton.cdiv(seq_len, BT), batch_size * n_heads) + fwd_inner_chunk[grid]( + q, k, g, A, + q.stride(1), q.stride(2), q.stride(3), + batch_size, n_heads, seq_len, scale, BT=BT, BK=BK, DK=d_head_qk, num_stages=3, + num_warps=4 + ) + A = A.sum(0) + o2 = A @ v2 + o2 = rearrange(o2, 'b h n c d -> b h (n c) d') + # combine inner and inter + o.add_(o2) + ctx.save_for_backward(q, k, v, g_original, A, initial_state) + ctx.CHECK = CHECK + return o.to(v), final_state + + @staticmethod + @contiguous + @custom_bwd + def backward(ctx, do, d_final_state=None): + q, k, v, g_origin, A, initial_state = ctx.saved_tensors + batch_size, n_heads, seq_len, d_head_qk = q.shape + d_head_v = v.shape[-1] + scale = ctx.scale + + # recomputation + # inter-chunk + BT = 16 # chunk_size + g = torch.empty_like(g_origin, dtype=torch.float32) + BK, BV = min(d_head_qk, 64), min(d_head_v, 64) + NK, NV = triton.cdiv(d_head_qk, BK), triton.cdiv(d_head_v, BV) + q_g = torch.empty_like(q) + k_g = torch.empty_like(k) + grid = (NK, triton.cdiv(seq_len, BT), batch_size * n_heads) + fwd_decay_cumsum[grid]( + g_origin, + g, + q.stride(1), q.stride(2), q.stride(3), + batch_size, n_heads, seq_len, scale, + BT=BT, BK=BK, DK=d_head_qk, num_warps=1 + ) + 
prepare_qg_kg[grid]( + q, k, g, q_g, k_g, + q.stride(1), q.stride(2), q.stride(3), + batch_size, n_heads, seq_len, scale, + BT=BT, BK=BK, DK=d_head_qk, num_warps=1 + ) + + # inter-chunk + BT = 16 + BK, BV = min(triton.next_power_of_2(d_head_qk), 64), min(triton.next_power_of_2(d_head_v), 64) + NK, NV = triton.cdiv(d_head_qk, BK), triton.cdiv(d_head_v, BV) + num_stages = 1 + num_warps = 2 + dq = q.new_empty(NV, batch_size, n_heads, seq_len, d_head_qk) + dk = q.new_empty(NV, batch_size, n_heads, seq_len, d_head_qk) + dv = q.new_empty(NK, batch_size, n_heads, seq_len, d_head_v) + + grid = (NV, NK, batch_size * n_heads) + + fused_chunk_gla_bwd_kernel[grid]( + q_g, k_g, v, g, do, dq, dk, dv, initial_state, + q.stride(1), q.stride(2), q.stride(3), + v.stride(1), v.stride(2), v.stride(3), + batch_size, n_heads, seq_len, scale, + # clamp_min=-3, + BT=BT, DK=d_head_qk, DV=d_head_v, BK=BK, BV=BV, + USE_INITIAL_STATE=initial_state is not None, + CHECK=ctx.CHECK, + num_warps=num_warps, + num_stages=num_stages, + ) + dq = dq.sum(0) + dk = dk.sum(0) + dv = dv.sum(0) + + # intra chunk + num_chunk = seq_len // BT + v2 = rearrange(v, 'b h (n c) d -> b h n c d', n=num_chunk) + do2 = rearrange(do, 'b h (n c) d -> b h n c d', n=num_chunk) + dA2 = (do2 @ v2.transpose(-2, -1)) * scale + dv2 = A.transpose(-1, -2) @ do2 + dv2 = rearrange(dv2, 'b h n c d -> b h (n c) d', n=num_chunk) + + BK = min(triton.next_power_of_2(d_head_qk), 16) + NK = triton.cdiv(d_head_qk, BK) + dk2 = torch.empty_like(k) + dq2 = torch.empty_like(q) + + grid = (NK, triton.cdiv(seq_len, BT), batch_size * n_heads) + bwd_inner_chunk[grid]( + q, k, g, + dA2, dq2, dk2, + q.stride(1), q.stride(2), q.stride(3), + batch_size, n_heads, seq_len, scale, + BT=BT, DK=d_head_qk, BK=BK, + num_warps=1, + num_stages=3 + ) + + BK = min(triton.next_power_of_2(d_head_qk), 32) + NK = triton.cdiv(d_head_qk, BK) + dg = torch.empty_like(g, dtype=torch.float32) + grid = (NK, triton.cdiv(seq_len, BT), batch_size * n_heads) + 
bwd_decay_global_cumsum[grid]( + dq2, dq, dk2, dk, q, k, g, dg, + q.stride(1), q.stride(2), q.stride(3), + batch_size, n_heads, seq_len, scale, + BT=BT, DK=d_head_qk, BK=BK, + num_warps=1, + num_stages=1 + ) + dg = rearrange(dg, 'b h (n c) d -> b h n c d', c=BT) + + def rev_cumsum_exclusive(x): + cumsum_x = x.cumsum(-2) + rev_cumsum_x = cumsum_x[..., -1, None, :] - cumsum_x + return rev_cumsum_x + + rev_cumsum_dg = rev_cumsum_exclusive(dg[..., 0, :]) + dg.add_(rev_cumsum_dg.unsqueeze(-2)) + dv.add_(dv2) + dg = rearrange(dg, 'b h n c d -> b h (n c) d') + + return dq.to(q), dk.to(k), dv.to(v), dg.to(ctx.g_dtype), None, None, None + + +def pad(x, chunk_size=16): + seq_len = x.shape[-2] + padded_seq_len = ceildiv(seq_len, chunk_size) * chunk_size + if x.shape[-2] % chunk_size != 0: + x = F.pad(x, (0, 0, 0, padded_seq_len - seq_len)) + + return x + + +def ceildiv(a, b): + return -(a // -b) + + +def fused_chunk_gla( + q: torch.Tensor, + k: torch.Tensor, + v: torch.Tensor, + g: torch.Tensor, + scale: int = -1, + initial_state: torch.Tensor = None, + output_final_state: bool = False +) -> Tuple[torch.Tensor, torch.Tensor]: + if scale == -1: + scale = q.shape[-1] ** -0.5 + if initial_state is not None: + initial_state = initial_state.detach() + seq_len = q.shape[-2] + q, k, v, g = map(lambda x: pad(x), [q, k, v, g]) + o, final_state = FusedChunkGLAFunction.apply( + q, k, v, g, scale, initial_state, output_final_state) + o = o[..., :seq_len, :] + return o, final_state diff --git a/fla/ops/gla/chunk_util.py b/fla/ops/gla/chunk_util.py new file mode 100644 index 0000000000000000000000000000000000000000..ba9db38bded8680e7ff4f985648387639e9b5cbf --- /dev/null +++ b/fla/ops/gla/chunk_util.py @@ -0,0 +1,138 @@ +import triton +import triton.language as tl + +inv_ln2 = 1.44269504 + + + +@triton.jit +def fwd_decay_cumsum( + g, + g_o, + s_qk_h, + s_qk_t, + s_qk_d, + B, + H, + T, + scale, + BT: tl.constexpr, + BK: tl.constexpr, + DK: tl.constexpr +): + i_k, i_c, i_bh = 
tl.program_id(0), tl.program_id(1), tl.program_id(2) + p_g = g + i_bh * s_qk_h + i_c * BT * DK + i_k * BK + tl.arange(0, BK) + p_go = g_o + i_bh * s_qk_h + i_c * BT * DK + i_k * BK + tl.arange(0, BK) + cum_decay = tl.zeros([BK], dtype=tl.float32) + mask = (i_k * BK + tl.arange(0, BK)) < DK + + for i in range(BT): + _g = tl.load(p_g, mask=mask, other=0).to(tl.float32) + cum_decay += _g * inv_ln2 + tl.store(p_go, cum_decay.to(p_go.dtype.element_ty), mask=mask) + p_g += DK + p_go += DK + +@triton.jit +def prepare_qg_kg( + q, + k, + g, + qg, + kg, + s_qk_h, + s_qk_t, + s_qk_d, + B, + H, + T, + scale, + BT: tl.constexpr, + BK: tl.constexpr, + DK: tl.constexpr +): + + i_k, i_c, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + p_q = q + i_bh * s_qk_h + i_c * BT * DK + i_k * BK + tl.arange(0, BK) + p_g = g + i_bh * s_qk_h + i_c * BT * DK + i_k * BK + tl.arange(0, BK) + p_k = k + i_bh * s_qk_h + i_c * BT * DK + i_k * BK + tl.arange(0, BK) + p_qg = qg + i_bh * s_qk_h + i_c * BT * DK + i_k * BK + tl.arange(0, BK) + p_kg = kg + i_bh * s_qk_h + i_c * BT * DK + i_k * BK + tl.arange(0, BK) + + mask = (i_k * BK + tl.arange(0, BK)) < DK + + last_decay = tl.load(g + i_bh * s_qk_h + (i_c * BT + BT - 1) * DK + i_k * BK + tl.arange(0, BK)) + + for i in range(BT): + _q = tl.load(p_q, mask=mask, other=0) + _k = tl.load(p_k, mask=mask, other=0) + _g = tl.load(p_g, mask=mask, other=0).to(tl.float32) + _q *= tl.math.exp2(_g) * scale + _k *= tl.math.exp2(last_decay - _g) + tl.store(p_kg, _k.to(p_kg.dtype.element_ty), mask=mask) + tl.store(p_qg, _q.to(p_qg.dtype.element_ty), mask=mask) + p_q += DK + p_g += DK + p_k += DK + p_kg += DK + p_qg += DK + + +@triton.jit +def bwd_decay_global_cumsum( + dq_inner, + dq_inter, + dk_inner, + dk_inter, + q, k, g, dg, + s_qk_h, + s_qk_t, + s_qk_d, + B, + H, + T, + scale, + BT: tl.constexpr, + BK: tl.constexpr, + DK: tl.constexpr +): + i_k, i_c, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + p_q = q + i_bh * s_qk_h + i_k * BK + 
tl.arange(0, BK) + (i_c * BT + BT - 1) * DK + p_k = k + i_bh * s_qk_h + i_k * BK + tl.arange(0, BK) + (i_c * BT + BT - 1) * DK + p_g = g + i_bh * s_qk_h + i_k * BK + tl.arange(0, BK) + (i_c * BT + BT - 1) * DK + p_dg = dg + i_bh * s_qk_h + i_k * BK + tl.arange(0, BK) + (i_c * BT + BT - 1) * DK + p_dq_inner = dq_inner + i_bh * s_qk_h + i_k * BK + tl.arange(0, BK) + (i_c * BT + BT - 1) * DK + p_dk_inner = dk_inner + i_bh * s_qk_h + i_k * BK + tl.arange(0, BK) + (i_c * BT + BT - 1) * DK + p_dq_inter = dq_inter + i_bh * s_qk_h + i_k * BK + tl.arange(0, BK) + (i_c * BT + BT - 1) * DK + p_dk_inter = dk_inter + i_bh * s_qk_h + i_k * BK + tl.arange(0, BK) + (i_c * BT + BT - 1) * DK + cum_grad_dg = tl.zeros([BK], dtype=tl.float32) + mask = (i_k * BK + tl.arange(0, BK)) < DK + last_g = tl.zeros([BK], dtype=tl.float32) + for j in range(BT-1, -1, -1): + _g = tl.load(p_g, mask=mask, other=0).to(tl.float32) + if j == (BT-1): + last_g = _g + _dq1 = tl.load(p_dq_inner, mask=mask, other=0) + _dq2 = tl.load(p_dq_inter, mask=mask, other=0) + _dq2 *= tl.math.exp2(_g) + _dq = _dq1 + _dq2 + tl.store(p_dq_inter, _dq, mask=mask) + _dk1 = tl.load(p_dk_inner, mask=mask, other=0) + _dk2 = tl.load(p_dk_inter, mask=mask, other=0) + _dk2 *= tl.math.exp2(last_g - _g) + _dk = _dk1 + _dk2 + tl.store(p_dk_inter, _dk, mask=mask) + _q = tl.load(p_q, mask=mask, other=0) + _k = tl.load(p_k, mask=mask, other=0) + _dg = _dq * _q - _dk * _k + cum_grad_dg += _dg + tl.store(p_dg, cum_grad_dg.to(p_dg.dtype.element_ty), mask=mask) + p_g -= DK + p_k -= DK + p_q -= DK + p_dq_inner -= DK + p_dk_inner -= DK + p_dq_inter -= DK + p_dk_inter -= DK + p_dg -= DK + diff --git a/fla/ops/gla/naive.py b/fla/ops/gla/naive.py new file mode 100644 index 0000000000000000000000000000000000000000..b8cf03b7881607b02c6cf497c8213d52f6b3d02e --- /dev/null +++ b/fla/ops/gla/naive.py @@ -0,0 +1,116 @@ +# -*- coding: utf-8 -*- + +import torch +import torch.nn.functional as F + +from fla.ops.gla.recurrent_fuse import 
fused_recurrent_gla


def ceildiv(a, b):
    return -(a // -b)


def naive_recurrent_gla(
    q,
    k,
    v,
    gk,
    initial_state=None,
    output_final_state=False,
    causal=True
):
    orig_dtype = q.dtype
    q, k, v, gk = map(lambda x: x.float(), (q, k, v, gk))
    batch_size, n_heads, seq_len, d_head_k = q.shape
    _, _, _, d_head_v = v.shape
    h = torch.zeros(batch_size, n_heads, d_head_k, d_head_v, dtype=torch.float32, device=q.device)
    o = torch.zeros_like(v)
    scale = d_head_k ** -0.5

    if initial_state is not None:
        h += initial_state

    for i in range(seq_len):
        q_i = q[:, :, i, :] * scale
        k_i = k[:, :, i]
        v_i = v[:, :, i, :]
        gk_i = gk[:, :, i].exp()
        kv_i = k_i[..., None] * v_i[..., None, :]
        h = h * gk_i[..., None] + kv_i
        o_i = (q_i[..., None] * h).sum(-2)
        o[:, :, i] = o_i

    if causal:
        return o.to(orig_dtype), h
    else:
        o_reverse = torch.zeros_like(v)
        h = torch.zeros(batch_size, n_heads, d_head_k, d_head_v, dtype=torch.float32, device=q.device)
        for i in range(seq_len-1, -1, -1):
            q_i = q[:, :, i, :] * scale
            k_i = k[:, :, i]
            v_i = v[:, :, i, :]
            gk_i = gk[:, :, i].exp()
            kv_i = k_i[..., None] * v_i[..., None, :]
            h = h * gk_i[..., None] + kv_i
            o_i = (q_i[..., None] * h).sum(-2)
            o_reverse[:, :, i] = o_i

        return o, o_reverse


if __name__ == "__main__":
    B = 4
    H = 4
    L = 512
    D = 128
    dtype = torch.float32
    q = (torch.randn(B, H, L, D).cuda().to(dtype)).requires_grad_(True)
    k = (torch.randn(B, H, L, D).cuda().to(dtype)).requires_grad_(True)
    v = torch.randn(B, H, L, D).cuda().to(dtype).requires_grad_(True)
    g = F.logsigmoid(torch.rand(B, H, L, D)).cuda(
    ).clamp_min(-1).to(torch.float32).requires_grad_(True)

    do = torch.rand_like(v).cuda()
    do2 = torch.rand_like(v).cuda()
    initial_state = torch.rand(B, H, D, D).cuda()

    ref, ref_rev = naive_recurrent_gla(q, k, v, g, causal=False)

    ref.backward(do, retain_graph=True)
    ref_rev.backward(do2, retain_graph=True)

    ref_dq, q.grad = q.grad.clone(),
None
    ref_dk, k.grad = k.grad.clone(), None
    ref_dv, v.grad = v.grad.clone(), None
    ref_dg, g.grad = g.grad.clone(), None

    tri, tri_rev = fused_recurrent_gla(
        q, k, v, g, initial_state=None, scale=D**-0.5, output_final_state=False, causal=False)
    tri.backward(do, retain_graph=True)
    tri_rev.backward(do2, retain_graph=True)
    tri_dq, q.grad = q.grad.clone(), None
    tri_dk, k.grad = k.grad.clone(), None
    tri_dv, v.grad = v.grad.clone(), None
    tri_dg, g.grad = g.grad.clone(), None

    assert ref.allclose(tri, 0, 1e-5), breakpoint()
    assert ref_rev.allclose(tri_rev, 0, 1e-5), breakpoint()
    assert ref_dq.allclose(tri_dq, 0, 1e-5), breakpoint()
    assert ref_dk.allclose(tri_dk, 0, 1e-5), breakpoint()
    assert ref_dv.allclose(tri_dv, 0, 1e-5), breakpoint()
    assert ref_dg.allclose(tri_dg, 0, 1e-4), breakpoint()

    # tri = fused_chunk_gla(q, k, v, g)
    # tri.backward(do, retain_graph=True)
    # tri_dq, q.grad = q.grad.clone(), None
    # tri_dk, k.grad = k.grad.clone(), None
    # tri_dv, v.grad = v.grad.clone(), None
    # tri_dg, g.grad = g.grad.clone(), None

    # assert ref.allclose(tri, 0, 1e-5), breakpoint()
    # assert ref_dq.allclose(tri_dq, 0, 1e-5), breakpoint()
    # assert ref_dk.allclose(tri_dk, 0, 1e-5), breakpoint()
    # assert ref_dv.allclose(tri_dv, 0, 1e-5), breakpoint()
    # assert ref_dg.allclose(tri_dg, 0, 1e-4), breakpoint()
    # breakpoint()
    print("Pass")
diff --git a/fla/ops/gla/recurrent_fuse.py b/fla/ops/gla/recurrent_fuse.py
new file mode 100644
index 0000000000000000000000000000000000000000..ea14dca4049ba5c6517ac502d945566e2e7befc8
--- /dev/null
+++ b/fla/ops/gla/recurrent_fuse.py
@@ -0,0 +1,404 @@
+# -*- coding: utf-8 -*-
+
+# Copyright (c) 2023, Songlin Yang
+
+from typing import Tuple
+
+import torch
+import triton
+import triton.language as tl
+from torch.cuda.amp import custom_bwd, custom_fwd
+
+from fla.utils import contiguous
+
+# on-the-fly computation without materializing hidden states into HBMs
+
+
+@triton.jit
+def
fused_recurrent_gla_fwd_kernel( + # B: batch_size, H: n_heads, T: seq_len, D: d_head + q, # query [B, H, L, D_head_K] + k, # key [B, H, L, D_head_K] + v, # value [B, H, L, D_head_V] + gk, # log gate [B, H, L, D_head_K] + gv, # log gate [B, H, L, D_head_V] + o, # output [B, H, L, D_head_V] + # initial hidden state initialization [B, H, D_head_K, D_head_V] + initial_state, + final_state, # final hidden state [B, H, D_head_K, D_head_V] + + s_qk_h, # stride size: L * D_head_K + s_qk_t, # stride size: D_head_K + s_qk_d, # stride size: 1 + + s_vo_h, # stride size: L * D_head_V + s_vo_t, # stride size: D_head_V + s_vo_d, # stride size: 1 + + B, # batch size + H, # n_heads + T, # seq_len + scale, # D_head_K ** -0.5 + BK: tl.constexpr, # BLOCK SIZE along the K dimension + BV: tl.constexpr, # BLOCK SIZE along the V dimension + DK: tl.constexpr, # D_head_K + DV: tl.constexpr, # D_head_V + USE_INITIAL_STATE: tl.constexpr, # whether to use initial state + STORE_FINAL_STATE: tl.constexpr, # whether to store final state + REVERSE: tl.constexpr, # whether to do autoregressive modeling in the reverse direction + USE_GK: tl.constexpr, # whether to use gk + USE_GV: tl.constexpr, # whether to use gv +): + # indices + i_v, i_k, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + + p_q = q + i_bh * s_qk_h + i_k * BK + \ + tl.arange(0, BK) + ((T-1) * DK if REVERSE else 0) + p_k = k + i_bh * s_qk_h + i_k * BK + \ + tl.arange(0, BK) + ((T-1) * DK if REVERSE else 0) + p_v = v + i_bh * s_vo_h + i_v * BV + \ + tl.arange(0, BV) + ((T-1) * DV if REVERSE else 0) + p_o = o + (i_bh + i_k * B * H) * s_vo_h + i_v * BV + \ + tl.arange(0, BV) + ((T-1) * DV if REVERSE else 0) + + if USE_GK: + p_gk = gk + i_bh * s_qk_h + i_k * BK + \ + tl.arange(0, BK) + ((T-1) * DK if REVERSE else 0) + if USE_GV: + p_gv = gv + i_bh * s_vo_h + i_v * BV + \ + tl.arange(0, BV) + ((T-1) * DV if REVERSE else 0) + + mask_bk = (i_k * BK + tl.arange(0, BK)) < DK + mask_bv = (i_v * BV + tl.arange(0, BV)) < DV + + h = 
tl.zeros([BV, BK], dtype=tl.float32) + + mask_kv = mask_bk[None, :] & mask_bv[:, None] + + if USE_INITIAL_STATE: + p_init_s = initial_state + i_bh * DK * DV + \ + (i_k * BK + tl.arange(0, BK)[None, :]) * \ + DV + (i_v * BV + tl.arange(0, BV)[:, None]) + h += tl.load(p_init_s, mask=mask_kv, other=0).to(tl.float32) + + for _ in range(0, T): + _k = tl.load(p_k, mask=mask_bk, other=0).to(tl.float32) + _v = tl.load(p_v, mask=mask_bv, other=0).to(tl.float32) + _q = tl.load(p_q, mask=mask_bk, other=0).to(tl.float32) * scale + if USE_GK: + _gk = tl.load(p_gk, mask=mask_bk, other=0).to(tl.float32) + h = h * _gk[None, :] + if USE_GV: + _gv = tl.load(p_gv, mask=mask_bv, other=0).to(tl.float32) + h = h * _gv[:, None] + h += _k[None, :] * _v[:, None] + _o = h * _q[None, :] + _o = tl.sum(_o, axis=1) + tl.store(p_o, _o.to(p_o.dtype.element_ty), mask=mask_bv) + p_q += -DK if REVERSE else DK + p_k += -DK if REVERSE else DK + p_o += -DV if REVERSE else DV + p_v += -DV if REVERSE else DV + if USE_GK: + p_gk += -DK if REVERSE else DK + if USE_GV: + p_gv += -DV if REVERSE else DV + + if STORE_FINAL_STATE: + p_final_s = final_state + i_bh * DK * DV + \ + (i_k * BK + tl.arange(0, BK)[None, :]) * \ + DV + (i_v * BV + tl.arange(0, BV)[:, None]) + tl.store(p_final_s, h.to(p_final_s.dtype.element_ty), mask=mask_kv) + + +# Similar to Algorithm1 of https://arxiv.org/abs/2006.16236 +@triton.jit +def fused_recurrent_gla_bwd_kernel( + # B: batch_size, H: n_heads, T: seq_len, D: d_head + # NV: number of split in the V dimension. 
NK: number of split in the K dimension
+    q,  # query [B, H, L, D_head_K]
+    k,  # key [B, H, L, D_head_K]
+    v,  # value [B, H, L, D_head_V]
+    gk,  # log gate [B, H, L, D_head_K] \alpha
+    gv,  # log gate [B, H, L, D_head_V] \beta
+
+    do,  # gradient of output [B, H, L, D_head_V]
+    dq,  # gradient of query [NV, B, H, L, D_head_K]
+    dk,  # gradient of key [NV, B, H, L, D_head_K]
+    dv,  # gradient of value [NK, B, H, L, D_head_V]
+
+    # initial hidden state initialization [B, H, D_head_K, D_head_V]
+    initial_state,
+
+    s_qk_h,  # stride size: L * D_head_K
+    s_qk_t,  # stride size: D_head_K
+    s_qk_d,  # stride size: 1
+
+    s_vo_h,  # stride size: L * D_head_V
+    s_vo_t,  # stride size: D_head_V
+    s_vo_d,  # stride size: 1
+
+    B,  # batch_size
+    H,  # n_heads
+    T,  # seq_len
+    scale,  # D_head_K ** -0.5
+    BK: tl.constexpr,  # BLOCK SIZE along the K dimension
+    BV: tl.constexpr,  # BLOCK SIZE along the V dimension
+    DK: tl.constexpr,  # D_head_K
+    DV: tl.constexpr,  # D_head_V
+    USE_INITIAL_STATE: tl.constexpr,  # whether to use initial state
+    REVERSE: tl.constexpr,  # whether to do autoregressive modeling in the reverse direction
+    USE_GK: tl.constexpr,  # whether to use gk
+    USE_GV: tl.constexpr,  # whether to use gv
+):
+    i_v, i_k, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2)
+
+    p_q = q + i_bh * s_qk_h + i_k * BK + \
+        tl.arange(0, BK) + ((T-1) * DK if REVERSE else 0)
+    p_k = k + i_bh * s_qk_h + i_k * BK + \
+        tl.arange(0, BK) + ((T-1) * DK if REVERSE else 0)
+    p_v = v + i_bh * s_vo_h + i_v * BV + \
+        tl.arange(0, BV) + ((T-1) * DV if REVERSE else 0)
+    p_do = do + i_bh * s_vo_h + i_v * BV + \
+        tl.arange(0, BV) + ((T-1) * DV if REVERSE else 0)
+    p_dq = dq + (i_bh + i_v * B * H) * s_qk_h + i_k * BK + \
+        tl.arange(0, BK) + ((T-1) * DK if REVERSE else 0)
+    if USE_GK:
+        p_gk = gk + i_bh * s_qk_h + i_k * BK + \
+            tl.arange(0, BK) + ((T-1) * DK if REVERSE else 0)
+    if USE_GV:
+        p_gv = gv + i_bh * s_vo_h + i_v * BV + \
+            tl.arange(0, BV) + ((T-1) * DV if REVERSE else 0)
+    mask_bk = i_k *
BK + tl.arange(0, BK) < DK + mask_bv = i_v * BV + tl.arange(0, BV) < DV + mask_kv = mask_bk[:, None] & mask_bv[None, :] + h = tl.zeros([BK, BV], dtype=tl.float32) + + if USE_INITIAL_STATE: + p_init_s = initial_state + i_bh * DK * DV + \ + (i_k * BK + tl.arange(0, BK)[:, None]) * \ + DV + (i_v * BV + tl.arange(0, BV)[None, :]) + h += tl.load(p_init_s, mask=mask_kv, other=0).to(tl.float32) + + for i in range(0, T): + _k = tl.load(p_k, mask=mask_bk, other=0).to(tl.float32) + _v = tl.load(p_v, mask=mask_bv, other=0).to(tl.float32) + _do = tl.load(p_do, mask=mask_bv, other=0).to(tl.float32) + if USE_GK: + _gk = tl.load(p_gk, mask=mask_bk, other=0).to(tl.float32) + h = h * _gk[:, None] + if USE_GV: + _gv = tl.load(p_gv, mask=mask_bv, other=0).to(tl.float32) + h = h * _gv[None, :] + h += _k[:, None] * _v[None, :] + _d_q = h * _do[None, :] + d_q = tl.sum(_d_q, axis=1) * scale + tl.store(p_dq, d_q.to(p_dq.dtype.element_ty), mask=mask_bk) + + p_k += -DK if REVERSE else DK + p_v += -DV if REVERSE else DV + p_q += -DK if REVERSE else DK + p_do += -DV if REVERSE else DV + p_dq += -DK if REVERSE else DK + if USE_GK: + p_gk += -DK if REVERSE else DK + if USE_GV: + p_gv += -DV if REVERSE else DV + + # sync threads + tl.debug_barrier() + + p_q = q + i_bh * s_qk_h + i_k * BK + \ + tl.arange(0, BK) + ((T - 1) * DK if not REVERSE else 0) + p_k = k + i_bh * s_qk_h + i_k * BK + \ + tl.arange(0, BK) + ((T - 1) * DK if not REVERSE else 0) + p_do = do + i_bh * s_vo_h + i_v * BV + \ + tl.arange(0, BV) + ((T - 1) * DV if not REVERSE else 0) + p_v = v + i_bh * s_vo_h + i_v * BV + \ + tl.arange(0, BV) + ((T - 1) * DV if not REVERSE else 0) + p_dk = dk + (i_bh + i_v * B * H) * s_qk_h + i_k * \ + BK + tl.arange(0, BK) + ((T - 1) * DK if not REVERSE else 0) + p_dv = dv + (i_bh + i_k * B * H) * s_vo_h + i_v * \ + BV + tl.arange(0, BV) + ((T - 1) * DV if not REVERSE else 0) + if USE_GK: + p_gk = gk + i_bh * s_qk_h + i_k * BK + \ + tl.arange(0, BK) + ((T - 1) * DK if not REVERSE else 0) + if USE_GV: 
+ p_gv = gv + i_bh * s_vo_h + i_v * BV + \ + tl.arange(0, BV) + ((T - 1) * DV if not REVERSE else 0) + + d_h = tl.zeros([BK, BV], dtype=tl.float32) + + for _ in range(T): + _do = tl.load(p_do, mask=mask_bv, other=0).to(tl.float32) + _q = tl.load(p_q, mask=mask_bk, other=0).to(tl.float32) * scale + _k = tl.load(p_k, mask=mask_bk, other=0).to(tl.float32) + _v = tl.load(p_v, mask=mask_bv, other=0).to(tl.float32) + d_h += _q[:, None] * _do[None, :] + d_k = tl.sum(d_h * _v[None, :], axis=1) + d_v = tl.sum(d_h * _k[:, None], axis=0) + if USE_GK: + _gk = tl.load(p_gk, mask=mask_bk, other=0).to(tl.float32) + d_h *= _gk[:, None] + if USE_GV: + _gv = tl.load(p_gv, mask=mask_bv, other=0).to(tl.float32) + d_h *= _gv[None, :] + tl.store(p_dk, d_k.to(p_dk.dtype.element_ty), mask=mask_bk) + tl.store(p_dv, d_v.to(p_dv.dtype.element_ty), mask=mask_bv) + + p_do += DV if REVERSE else -DV + p_q += DK if REVERSE else -DK + p_k += DK if REVERSE else -DK + p_v += DV if REVERSE else -DV + p_dk += DK if REVERSE else -DK + p_dv += DV if REVERSE else -DV + if USE_GK: + p_gk += DK if REVERSE else -DK + if USE_GV: + p_gv += DV if REVERSE else -DV + + +class FusedRecurrentGLAFunction(torch.autograd.Function): + + @staticmethod + @contiguous + @custom_fwd + def forward(ctx, q, k, v, gk, gv, scale=None, initial_state=None, output_final_state=False, reverse=False): + batch_size, n_heads, seq_len, d_head_qk = q.shape + d_head_v = v.shape[-1] + # default scale + if scale is None: + scale = d_head_qk ** -0.5 + if gk is not None: + gk = gk.float().exp() + if gv is not None: + gv = gv.float().exp() + + BK, BV = min(d_head_qk, 32), min(d_head_v, 32) + NK, NV = triton.cdiv(d_head_qk, BK), triton.cdiv(d_head_v, BV) + num_stages = 1 + num_warps = 1 + + o = q.new_empty(NK, batch_size, n_heads, seq_len, + d_head_v, dtype=torch.float32) + + if output_final_state: + final_state = q.new_empty(batch_size, n_heads, d_head_qk, d_head_v) + else: + final_state = None + + grid = (NV, NK, batch_size * n_heads) + 
fused_recurrent_gla_fwd_kernel[grid](
+            q, k, v, gk, gv, o, initial_state, final_state,
+            q.stride(1), q.stride(2), q.stride(3),
+            v.stride(1), v.stride(2), v.stride(3),
+            batch_size, n_heads, seq_len, scale,
+            DK=d_head_qk, DV=d_head_v, BK=BK, BV=BV,
+            USE_INITIAL_STATE=initial_state is not None,
+            STORE_FINAL_STATE=final_state is not None,
+            USE_GK=gk is not None,
+            USE_GV=gv is not None,
+            REVERSE=reverse,
+            num_warps=num_warps,
+            num_stages=num_stages
+        )
+
+        o = o.sum(0)
+        ctx.save_for_backward(q, k, v, gk, gv, initial_state, o)
+        ctx.scale = scale
+        ctx.reverse = reverse
+        # we do not need the gradient of the final state from the next chunk
+        # similar to truncated BPTT
+        if final_state is not None:
+            final_state = final_state.detach()
+        return o.to(q.dtype), final_state
+
+    @staticmethod
+    @contiguous
+    @custom_bwd
+    def backward(ctx, do, d_final_state=None):
+        q, k, v, gk, gv, initial_state, o = ctx.saved_tensors
+        batch_size, n_heads, seq_len, d_head_qk = q.shape
+        d_head_v = v.shape[-1]
+        scale = ctx.scale
+
+        BK, BV = min(d_head_qk, 32), min(d_head_v, 32)
+        NK, NV = triton.cdiv(d_head_qk, BK), triton.cdiv(d_head_v, BV)
+        num_stages = 1
+        num_warps = 1
+
+        dq = q.new_empty(NV, batch_size, n_heads, seq_len,
+                         d_head_qk, dtype=torch.float32)
+        dk = q.new_empty(NV, batch_size, n_heads, seq_len,
+                         d_head_qk, dtype=torch.float32)
+        dv = q.new_empty(NK, batch_size, n_heads, seq_len,
+                         d_head_v, dtype=torch.float32)
+        grid = (NV, NK, batch_size * n_heads)
+
+        fused_recurrent_gla_bwd_kernel[grid](
+            q, k, v, gk, gv, do, dq, dk, dv, initial_state,
+            q.stride(1), q.stride(2), q.stride(3),
+            v.stride(1), v.stride(2), v.stride(3),
+            batch_size, n_heads, seq_len, scale,
+            DK=d_head_qk, DV=d_head_v, BK=BK, BV=BV,
+            num_warps=num_warps,
+            num_stages=num_stages,
+            USE_INITIAL_STATE=initial_state is not None,
+            REVERSE=ctx.reverse,
+            USE_GK=gk is not None,
+            USE_GV=gv is not None
+        )
+        dq = dq.sum(0)
+        dk = dk.sum(0)
+        dv = dv.sum(0)
+        if gk is not None:
+            _dgk = dq *
q.float() - dk * k.float()
+            if ctx.reverse:
+                dgk = _dgk.cumsum(-2)
+            else:
+                _dgk_cumsum = _dgk.cumsum(-2)
+                dgk = _dgk + _dgk_cumsum[:, :, -1, None] - _dgk_cumsum
+        else:
+            dgk = None
+
+        if gv is not None:
+            _dgv = do.float() * o.float() - dv * v.float()
+            if ctx.reverse:
+                dgv = _dgv.cumsum(-2)
+            else:
+                _dgv_cumsum = _dgv.cumsum(-2)
+                dgv = _dgv + _dgv_cumsum[:, :, -1, None] - _dgv_cumsum
+        else:
+            dgv = None
+
+        return dq.to(q.dtype), dk.to(k.dtype), dv.to(v.dtype), dgk, dgv, None, None, None, None
+
+
+# if scale is -1 (the default), d_head_qk ** -0.5 is used. Otherwise specify the scale yourself, e.g. scale = 1.0
+def fused_recurrent_gla(
+    q: torch.Tensor,
+    k: torch.Tensor,
+    v: torch.Tensor,
+    gk: torch.Tensor = None,
+    gv: torch.Tensor = None,
+    scale: float = -1,
+    initial_state: torch.Tensor = None,
+    output_final_state: bool = False,
+    causal: bool = True
+) -> Tuple[torch.Tensor, torch.Tensor]:
+    if scale == -1:
+        scale = q.shape[-1] ** -0.5
+    if initial_state is not None:
+        initial_state = initial_state.detach()
+    if causal:
+        o, final_state = FusedRecurrentGLAFunction.apply(q, k, v, gk, gv, scale, initial_state, output_final_state)
+        return o, final_state
+    else:
+        # do not support initial_state yet.
it is unclear how to define one for bidirectional modeling
+        assert initial_state is None
+        assert output_final_state is False
+        o, final_state = FusedRecurrentGLAFunction.apply(
+            q, k, v, gk, gv, scale, initial_state, output_final_state, False)
+        o_reversed, final_state = FusedRecurrentGLAFunction.apply(
+            q, k, v, gk, gv, scale, initial_state, output_final_state, True)
+        return o, o_reversed
diff --git a/fla/ops/hgrn/__init__.py b/fla/ops/hgrn/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..96f24b1d286315351d41d4df104d1d9ba65c2d16
--- /dev/null
+++ b/fla/ops/hgrn/__init__.py
@@ -0,0 +1,9 @@
+# -*- coding: utf-8 -*-
+
+from .chunk import chunk_hgrn
+from .recurrent_fuse import fused_recurrent_hgrn
+
+__all__ = [
+    'chunk_hgrn',
+    'fused_recurrent_hgrn'
+]
diff --git a/fla/ops/hgrn/chunk.py b/fla/ops/hgrn/chunk.py
new file mode 100644
index 0000000000000000000000000000000000000000..6efb77c17780fefaa44d2eab142fe1f457b3cd88
--- /dev/null
+++ b/fla/ops/hgrn/chunk.py
@@ -0,0 +1,373 @@
+# -*- coding: utf-8 -*-
+
+# Copyright (c) 2024, Yu Zhang, Songlin Yang
+
+# this function implements the chunkwise form of HGRN, inspired by
+# [Volodymyr Kyrylov's blog post](https://proger.github.io/posts/scan/chunk.html)
+# also refer to the `accelerated-scan` lib: https://github.com/proger/accelerated-scan
+
+# from tests on H800, with B, H, D = 16, 4, 128, we see that the chunk version can be significantly faster than the recurrent one:
+
+# Performance:
+#    seq_len     chunk  recurrent  chunk_bwd  recurrent_bwd
+# 0    128.0  0.039360   0.061056   0.312160       0.205008
+# 1    256.0  0.045824   0.123712   0.308784       0.297696
+# 2    512.0  0.058688   0.241952   0.310720       0.626528
+# 3   1024.0  0.088288   0.476992   0.313184       1.333152
+# 4   2048.0  0.169472   0.943264   0.452464       2.724864
+# 5   4096.0  0.329920   1.886144   0.881600       5.551520
+# 6   8192.0  0.647872   3.755040   1.740496      11.117184
+# 7  16384.0  1.272064   7.520576   3.446608      22.362528
+
+from typing import Tuple
+
+import torch
+import triton
+import triton.language as tl
+
+from fla.utils import contiguous + + +@triton.autotune( + configs=[ + triton.Config({'BD': 32}, num_warps=1), + triton.Config({'BD': 32}, num_warps=2), + triton.Config({'BD': 32}, num_warps=4), + triton.Config({'BD': 32}, num_warps=8), + triton.Config({'BD': 64}, num_warps=1), + triton.Config({'BD': 64}, num_warps=2), + triton.Config({'BD': 64}, num_warps=4), + triton.Config({'BD': 64}, num_warps=8), + triton.Config({'BD': 128}, num_warps=1), + triton.Config({'BD': 128}, num_warps=2), + triton.Config({'BD': 128}, num_warps=4), + triton.Config({'BD': 128}, num_warps=8), + ], + key=['D'] +) +@triton.jit +def chunk_hgrn_fwd_kernel_h( + x, + g, + gc, + o, + h0, + T: tl.constexpr, + D: tl.constexpr, + BT: tl.constexpr, + BD: tl.constexpr, + USE_INITIAL_STATE: tl.constexpr +): + i_d, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + o_d = i_d * BD + tl.arange(0, BD) + mask = o_d < D + + p_x = x + i_bh * T * D + i_t * BT * D + o_d + p_g = g + i_bh * T * D + i_t * BT * D + o_d + p_gc = gc + i_bh * T * D + i_t * BT * D + o_d + p_o = o + i_bh * T * D + i_t * BT * D + o_d + + b_h = tl.zeros([BD], dtype=tl.float32) + b_gc = tl.zeros([BD], dtype=tl.float32) + if USE_INITIAL_STATE: + if i_t == 0: + b_h += tl.load(h0 + i_bh * D + o_d, mask=mask, other=0).to(tl.float32) + for i in range(0, BT): + mask_t = mask & ((i_t * BT + i) < T) + b_x = tl.load(p_x, mask=mask_t, other=0).to(tl.float32) + b_g = tl.load(p_g, mask=mask_t, other=0).to(tl.float32) + b_h = tl.exp(b_g) * b_h + b_x + b_gc = b_gc + b_g + tl.store(p_gc, b_gc.to(p_o.dtype.element_ty), mask=mask_t) + tl.store(p_o, b_h.to(p_o.dtype.element_ty), mask=mask_t) + + p_x += D + p_g += D + p_gc += D + p_o += D + + +@triton.jit +def chunk_hgrn_fwd_kernel_o( + gc, + o, + s_h, + s_t, + s_d, + T: tl.constexpr, + D: tl.constexpr, + BT: tl.constexpr, + BD: tl.constexpr +): + i_d, i_bh = tl.program_id(0), tl.program_id(1) + o_d = i_d * BD + tl.arange(0, BD) + mask = o_d < D + + for i_t in range(1, tl.cdiv(T, BT)): + 
p_gc = tl.make_block_ptr(gc + i_bh * s_h, (T, D), (s_t, s_d), (i_t * BT, i_d * BD), (BT, BD), (1, 0)) + p_o = tl.make_block_ptr(o + i_bh * s_h, (T, D), (s_t, s_d), (i_t * BT, i_d * BD), (BT, BD), (1, 0)) + + # [BD,] + b_h0 = tl.load(o + i_bh * T * D + i_t * BT * D - D + o_d, mask=mask, other=0).to(tl.float32) + # [BT, BD] + b_gc = tl.load(p_gc, boundary_check=(0, 1)).to(tl.float32) + b_o = tl.load(p_o, boundary_check=(0, 1)).to(tl.float32) + b_o = b_o + tl.exp(b_gc) * b_h0[None, :] + tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0, 1)) + + +@triton.autotune( + configs=[ + triton.Config({'BD': 32}, num_warps=1), + triton.Config({'BD': 32}, num_warps=2), + triton.Config({'BD': 32}, num_warps=4), + triton.Config({'BD': 32}, num_warps=8), + triton.Config({'BD': 64}, num_warps=1), + triton.Config({'BD': 64}, num_warps=2), + triton.Config({'BD': 64}, num_warps=4), + triton.Config({'BD': 64}, num_warps=8), + triton.Config({'BD': 128}, num_warps=1), + triton.Config({'BD': 128}, num_warps=2), + triton.Config({'BD': 128}, num_warps=4), + triton.Config({'BD': 128}, num_warps=8), + ], + key=['D'] +) +@triton.jit +def chunk_hgrn_bwd_kernel_h( + g, + gc, + dx, + do, + T: tl.constexpr, + D: tl.constexpr, + BT: tl.constexpr, + BD: tl.constexpr +): + i_d, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + o_d = i_d * BD + tl.arange(0, BD) + mask = o_d < D + BC = min(BT, T - i_t * BT) + NT = tl.num_programs(1) + + p_g = g + (i_bh * T + i_t * BT + BC - 1) * D + o_d + p_gc = gc + (i_bh * T + i_t * BT + BC - 1) * D + o_d + p_dx = dx + (i_bh * T + i_t * BT + BC - 1) * D + o_d + p_do = do + (i_bh * T + i_t * BT + BC - 1) * D + o_d + + if i_t == NT - 1: + b_gc = tl.zeros([BD], dtype=tl.float32) + else: + b_gc = tl.load(g + (i_bh * T + i_t * BT + BT) * D + o_d, mask=mask, other=0).to(tl.float32) + b_dh = tl.zeros([BD], dtype=tl.float32) + for _ in range(BC - 1, -1, -1): + tl.store(p_gc, b_gc.to(p_gc.dtype.element_ty), mask=mask) + + b_g = tl.load(p_g, 
mask=mask, other=0).to(tl.float32) + b_do = tl.load(p_do, mask=mask, other=0).to(tl.float32) + + b_gc = b_gc + b_g + b_dh = b_dh + b_do + b_dx = b_dh + b_dh = b_dh * tl.exp(b_g) + + tl.store(p_dx, b_dx.to(p_dx.dtype.element_ty), mask=mask) + + p_g -= D + p_gc -= D + p_dx -= D + p_do -= D + + +@triton.jit +def chunk_hgrn_bwd_kernel_o( + g, + gc, + o, + dx, + dg, + s_h, + s_t, + s_d, + T: tl.constexpr, + D: tl.constexpr, + BT: tl.constexpr, + BD: tl.constexpr +): + i_d, i_bh = tl.program_id(0), tl.program_id(1) + o_d = i_d * BD + tl.arange(0, BD) + mask = o_d < D + + for i_t in range(tl.cdiv(T, BT) - 1, -1, -1): + p_g = tl.make_block_ptr(g + i_bh * s_h, (T, D), (s_t, s_d), (i_t * BT, i_d * BD), (BT, BD), (1, 0)) + p_gc = tl.make_block_ptr(gc + i_bh * s_h, (T, D), (s_t, s_d), (i_t * BT, i_d * BD), (BT, BD), (1, 0)) + p_o = tl.make_block_ptr(o + i_bh * s_h, (T, D), (s_t, s_d), (i_t * BT - 1, i_d * BD), (BT, BD), (1, 0)) + p_dx = tl.make_block_ptr(dx + i_bh * s_h, (T, D), (s_t, s_d), (i_t * BT, i_d * BD), (BT, BD), (1, 0)) + p_dg = tl.make_block_ptr(dg + i_bh * s_h, (T, D), (s_t, s_d), (i_t * BT, i_d * BD), (BT, BD), (1, 0)) + + # [BD,] + mask_t = mask & ((i_t + 1) * BT < T) + b_ht = tl.load(dx + i_bh * T * D + (i_t + 1) * BT * D + o_d, mask=mask_t, other=0).to(tl.float32) + # [BT, BD] + b_g = tl.load(p_g, boundary_check=(0, 1)).to(tl.float32) + b_gc = tl.load(p_gc, boundary_check=(0, 1)).to(tl.float32) + b_o = tl.load(p_o, boundary_check=(0, 1)).to(tl.float32) + b_dx = tl.load(p_dx, boundary_check=(0, 1)).to(tl.float32) + b_dg = tl.load(p_dg, boundary_check=(0, 1)).to(tl.float32) + b_dx = b_dx + tl.exp(b_gc) * b_ht[None, :] + b_dg = b_o * b_dx * tl.exp(b_g) + tl.store(p_dx, b_dx.to(p_dx.dtype.element_ty), boundary_check=(0, 1)) + tl.store(p_dg, b_dg.to(p_dg.dtype.element_ty), boundary_check=(0, 1)) + + +class ChunkHGRNFunction(torch.autograd.Function): + + @staticmethod + @contiguous + def forward(ctx, x, g, initial_state=None, output_final_state=False): + B, H, T, D = 
x.shape + BT, BD = 128, min(64, triton.next_power_of_2(D)) + num_warps = 8 if BD == 64 else 4 + + gc = torch.empty_like(g, dtype=torch.float) + o = torch.empty_like(x, dtype=torch.float) + def grid(meta): return (triton.cdiv(D, meta['BD']), triton.cdiv(T, meta['BT']), B * H) + chunk_hgrn_fwd_kernel_h[grid]( + x, g, gc, o, initial_state, + T, D, + BT=BT, + USE_INITIAL_STATE=initial_state is not None + ) + def grid(meta): return (triton.cdiv(D, meta['BD']), B * H) + chunk_hgrn_fwd_kernel_o[grid]( + gc, o, + o.stride(1), o.stride(2), o.stride(3), + T, D, + BT=BT, BD=BD, + num_warps=num_warps + ) + final_state = None + if output_final_state: + final_state = o[:, :, -1].clone() + o = o.to(x.dtype) + ctx.save_for_backward(g, o, initial_state) + return o, final_state + + @staticmethod + @contiguous + def backward(ctx, do, dht=None): + g, o, initial_state = ctx.saved_tensors + B, H, T, D = do.shape + BT, BD = 128, min(64, triton.next_power_of_2(D)) + num_warps = 8 if BD == 64 else 4 + + gc = torch.empty_like(g, dtype=torch.float) + dx = torch.empty_like(o) + dg = torch.empty_like(g) + def grid(meta): return (triton.cdiv(D, meta['BD']), triton.cdiv(T, meta['BT']), B * H) + chunk_hgrn_bwd_kernel_h[grid]( + g, gc, dx, do, + T, D, + BT=BT + ) + def grid(meta): return (triton.cdiv(D, meta['BD']), B * H) + chunk_hgrn_bwd_kernel_o[grid]( + g, gc, o, dx, dg, + o.stride(1), o.stride(2), o.stride(3), + T, D, + BT=BT, BD=BD, + num_warps=num_warps + ) + if initial_state is not None: + dg[:, :, 0] = initial_state * dx[:, :, 0] * g[:, :, 0].exp() + + return dx, dg, None, None + + +def chunk_hgrn( + x: torch.Tensor, + g: torch.Tensor, + initial_state: torch.Tensor = None, + output_final_state: bool = False +) -> Tuple[torch.Tensor, torch.Tensor]: + if initial_state is not None: + initial_state = initial_state.detach() + o, final_state = ChunkHGRNFunction.apply(x, g, initial_state, output_final_state) + return o, final_state + + +if __name__ == '__main__': + import torch.nn.functional as 
F + + from fla.ops.hgrn.naive import naive_recurrent_hgrn + from fla.ops.hgrn.recurrent_fuse import fused_recurrent_hgrn + B, H, T, D = 8, 4, 512, 128 + dtype = torch.bfloat16 + torch.manual_seed(42) + # [batch_size, n_heads, seq_len, d_head] + x = torch.randn((B, H, T, D), dtype=dtype, device='cuda') + g = torch.randn((B, H, T, D), dtype=dtype, device='cuda') + x, g = (1 - g.sigmoid()) * x, F.logsigmoid(g) + print(f'x:\t{float(x.min()):>10.6f}\t{float(x.max()):>10.6f}') + print(f'g:\t{float(g.min()):>10.6f}\t{float(g.max()):>10.6f}') + x, g = (i.detach().clone().to(dtype).requires_grad_() for i in (x, g)) + print(f"DTYPE:\t{x.dtype}") + do = torch.randn_like(x) + h0 = torch.randn_like(x[:, :, 0]) + ref, ref_ht = naive_recurrent_hgrn(x, g, h0, output_final_state=True) + ref.backward(do) + ref_dx, x.grad = x.grad.clone(), None + ref_dg, g.grad = g.grad.clone(), None + + tri, tri_ht = fused_recurrent_hgrn(x, g, h0, output_final_state=True) + tri.backward(do) + tri_dx, x.grad = x.grad.clone(), None + tri_dg, g.grad = g.grad.clone(), None + print(" \t DIFF\t MAX") + print(' o\t', f"{float((ref - tri).abs().max()):>10.6f}\t{float(ref.max()):>10.6f}") + print('ht\t', f"{float((ref_ht[0] - tri_ht[0]).abs().max()):>10.6f}\t{float(ref.max()):>10.6f}") + print('dx\t', f"{float((ref_dx - tri_dx).abs().max()):>10.6f}\t{float(ref_dx.max()):>10.6f}") + print('dg\t', f"{float((ref_dg - tri_dg).abs().max()):>10.6f}\t{float(ref_dg.max()):>10.6f}") + print('Done!') + + @triton.testing.perf_report( + triton.testing.Benchmark( + # argument names to use as an x-axis for the plot + x_names=['seq_len'], + # different possible values for `x_name` + x_vals=[128 * 2 ** i for i in range(0, 8)], + # argument name whose value corresponds to a different line in the plot + line_arg='provider', + # possible values for `line_arg`` + line_vals=['chunk', 'recurrent', 'chunk_bwd', 'recurrent_bwd'], + # label name for the lines + line_names=['chunk', 'recurrent', 'chunk_bwd', 'recurrent_bwd'], + # 
line styles + styles=[('green', '-'), ('blue', '--'), ('red', '-.'), ('cyan', ':'), ('yellow', 'dotted'), ('black', 'dashed')], + ylabel="Execution Time (ms)", # label name for the y-axis + # name for the plot. Used also as a file name for saving the plot. + plot_name="Performance", + args={}, + ) + ) + def benchmark(seq_len, provider): + dtype = torch.bfloat16 + B, H, D = 16, 4, 128 + + x = torch.randn((B, H, seq_len, D), dtype=dtype, device='cuda') + g = torch.randn((B, H, seq_len, D), dtype=dtype, device='cuda').sigmoid() + x = (1 - g) * x + x, g = (i.detach().clone().to(dtype).requires_grad_() for i in (x, g)) + do = torch.randn_like(x, dtype=dtype) + quantiles = [0.5, 0.2, 0.8] + results = 0, 0, 0 + if provider == 'chunk': + results = triton.testing.do_bench(lambda: chunk_hgrn(x, g), quantiles=quantiles) + if provider == 'recurrent': + results = triton.testing.do_bench(lambda: fused_recurrent_hgrn(x, g), quantiles=quantiles) + if provider == 'chunk_bwd': + results = triton.testing.do_bench(lambda: chunk_hgrn(x, g)[0].backward(do), quantiles=quantiles) + if provider == 'recurrent_bwd': + results = triton.testing.do_bench(lambda: fused_recurrent_hgrn(x, g)[0].backward(do), quantiles=quantiles) + return results + benchmark.run(print_data=True) diff --git a/fla/ops/hgrn/naive.py b/fla/ops/hgrn/naive.py new file mode 100644 index 0000000000000000000000000000000000000000..0d79cb93f79bb245a9746bd2b28558968a8284e6 --- /dev/null +++ b/fla/ops/hgrn/naive.py @@ -0,0 +1,31 @@ +# -*- coding: utf-8 -*- + +from typing import Optional + +import torch + + +def naive_recurrent_hgrn( + x: torch.Tensor, + g: torch.Tensor, + initial_state: Optional[torch.Tensor] = None, + output_final_state: Optional[bool] = False +) -> torch.Tensor: + dtype = x.dtype + x, g = map(lambda i: i.float(), (x, g)) + B, H, T, D = x.shape + + h = torch.zeros(B, H, D, dtype=torch.float, device=x.device) + o = torch.zeros_like(x) + + final_state = None + if initial_state is not None: + h += 
initial_state.detach() + + for i in range(T): + h = g[:, :, i].exp() * h + x[:, :, i] + o[:, :, i] = h + + if output_final_state: + final_state = h + return o.to(dtype), final_state diff --git a/fla/ops/hgrn/recurrent_fuse.py b/fla/ops/hgrn/recurrent_fuse.py new file mode 100644 index 0000000000000000000000000000000000000000..82224d616f828bb13ff59560895b6d88e50a494b --- /dev/null +++ b/fla/ops/hgrn/recurrent_fuse.py @@ -0,0 +1,185 @@ +# -*- coding: utf-8 -*- + +# Copyright (c) 2023, Songlin Yang + +from typing import Tuple + +import torch +import triton +import triton.language as tl + +from fla.utils import contiguous + + +@triton.autotune( + configs=[ + triton.Config({'BD': 32}, num_warps=1), + triton.Config({'BD': 32}, num_warps=2), + triton.Config({'BD': 32}, num_warps=4), + triton.Config({'BD': 32}, num_warps=8), + triton.Config({'BD': 64}, num_warps=1), + triton.Config({'BD': 64}, num_warps=2), + triton.Config({'BD': 64}, num_warps=4), + triton.Config({'BD': 64}, num_warps=8), + triton.Config({'BD': 128}, num_warps=1), + triton.Config({'BD': 128}, num_warps=2), + triton.Config({'BD': 128}, num_warps=4), + triton.Config({'BD': 128}, num_warps=8), + ], + key=['D'] +) +@triton.jit +def fused_recurrent_hgrn_fwd_kernel( + x, + g, + o, + h0, + ht, + T: tl.constexpr, + D: tl.constexpr, + BD: tl.constexpr, + USE_INITIAL_STATE: tl.constexpr, + STORE_FINAL_STATE: tl.constexpr +): + i_d, i_bh = tl.program_id(0), tl.program_id(1) + o_d = i_d * BD + tl.arange(0, BD) + mask = o_d < D + + p_x = x + i_bh * T * D + o_d + p_g = g + i_bh * T * D + o_d + p_o = o + i_bh * T * D + o_d + + b_h = tl.zeros([BD], dtype=tl.float32) + if USE_INITIAL_STATE: + p_h0 = h0 + i_bh * D + o_d + b_h += tl.load(p_h0, mask=mask, other=0).to(tl.float32) + for _ in range(0, T): + b_x = tl.load(p_x, mask=mask, other=0).to(tl.float32) + b_g = tl.load(p_g, mask=mask, other=0).to(tl.float32) + b_h = tl.exp(b_g) * b_h + b_x + tl.store(p_o, b_h.to(p_o.dtype.element_ty), mask=mask) + + p_x += D + p_g += D + 
p_o += D + + if STORE_FINAL_STATE: + p_ht = ht + i_bh * D + o_d + tl.store(p_ht, b_h.to(p_ht.dtype.element_ty), mask=mask) + + +@triton.autotune( + configs=[ + triton.Config({'BD': 32}, num_warps=1), + triton.Config({'BD': 32}, num_warps=2), + triton.Config({'BD': 32}, num_warps=4), + triton.Config({'BD': 32}, num_warps=8), + triton.Config({'BD': 64}, num_warps=1), + triton.Config({'BD': 64}, num_warps=2), + triton.Config({'BD': 64}, num_warps=4), + triton.Config({'BD': 64}, num_warps=8), + triton.Config({'BD': 128}, num_warps=1), + triton.Config({'BD': 128}, num_warps=2), + triton.Config({'BD': 128}, num_warps=4), + triton.Config({'BD': 128}, num_warps=8), + ], + key=['D'] +) +@triton.jit +def fused_recurrent_hgrn_bwd_kernel( + g, + o, + dx, + dg, + do, + h0, + T: tl.constexpr, + D: tl.constexpr, + BD: tl.constexpr, + USE_INITIAL_STATE: tl.constexpr +): + i_d, i_bh = tl.program_id(0), tl.program_id(1) + o_d = i_d * BD + tl.arange(0, BD) + mask = o_d < D + + p_g = g + (i_bh * T + T - 1) * D + o_d + p_o = o + (i_bh * T + T - 2) * D + o_d + p_dx = dx + (i_bh * T + T - 1) * D + o_d + p_dg = dg + (i_bh * T + T - 1) * D + o_d + p_do = do + (i_bh * T + T - 1) * D + o_d + + b_dh = tl.zeros([BD], dtype=tl.float32) + for i in range(T - 1, -1, -1): + b_g = tl.load(p_g, mask=mask, other=0).to(tl.float32) + b_do = tl.load(p_do, mask=mask, other=0).to(tl.float32) + if i > 0: + b_o = tl.load(p_o, mask=mask, other=0).to(tl.float32) + elif USE_INITIAL_STATE: + b_o = tl.load(h0 + i_bh * D + o_d, mask=mask, other=0).to(tl.float32) + else: + b_o = tl.zeros([BD], dtype=tl.float32) + + b_dh = b_dh + b_do + b_dx = b_dh + b_dh = b_dh * tl.exp(b_g) + b_dg = b_dh * b_o + tl.store(p_dx, b_dx.to(p_dx.dtype.element_ty), mask=mask) + tl.store(p_dg, b_dg.to(p_dg.dtype.element_ty), mask=mask) + + p_g -= D + p_o -= D + p_dx -= D + p_dg -= D + p_do -= D + + +class FusedRecurrentHGRNFunction(torch.autograd.Function): + + @staticmethod + @contiguous + def forward(ctx, x, g, initial_state=None, 
output_final_state=False): + B, H, T, D = x.shape + + final_state = None + if output_final_state: + final_state = x.new_empty(B, H, D) + + o = torch.empty_like(x) + def grid(meta): return (triton.cdiv(D, meta['BD']), B * H) + fused_recurrent_hgrn_fwd_kernel[grid]( + x, g, o, initial_state, final_state, + T, D, + USE_INITIAL_STATE=initial_state is not None, + STORE_FINAL_STATE=final_state is not None + ) + ctx.save_for_backward(g, o, initial_state) + return o, final_state + + @staticmethod + @contiguous + def backward(ctx, do, dht=None): + g, o, initial_state = ctx.saved_tensors + B, H, T, D = do.shape + + dx = torch.empty_like(o) + dg = torch.empty_like(g) + def grid(meta): return (triton.cdiv(D, meta['BD']), B * H) + fused_recurrent_hgrn_bwd_kernel[grid]( + g, o, dx, dg, do, initial_state, + T, D, + USE_INITIAL_STATE=initial_state is not None, + ) + + return dx, dg, None, None + + +def fused_recurrent_hgrn( + x: torch.Tensor, + g: torch.Tensor, + initial_state: torch.Tensor = None, + output_final_state: bool = False +) -> Tuple[torch.Tensor, torch.Tensor]: + if initial_state is not None: + initial_state = initial_state.detach() + o, final_state = FusedRecurrentHGRNFunction.apply(x, g, initial_state, output_final_state) + return o, final_state diff --git a/fla/ops/linear_attn/__init__.py b/fla/ops/linear_attn/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..4563a5f37c0f748bf83a5a97a97a27f7bb27a7c5 --- /dev/null +++ b/fla/ops/linear_attn/__init__.py @@ -0,0 +1,12 @@ +# -*- coding: utf-8 -*- + +from .chunk import chunk_linear_attn +from .chunk_fuse import fused_chunk_linear_attn +from .recurrent_fuse import fused_recurrent_linear_attn + +__all__ = [ + 'chunk_linear_attn', + 'fused_chunk_linear_attn', + 'fused_recurrent_linear_attn' +] + diff --git a/fla/ops/linear_attn/chunk.py b/fla/ops/linear_attn/chunk.py new file mode 100644 index 0000000000000000000000000000000000000000..c474cd185f96f5ed82d93d57ac475483705be96a --- /dev/null +++ 
b/fla/ops/linear_attn/chunk.py @@ -0,0 +1,359 @@ +# -*- coding: utf-8 -*- +# Copyright (c) 2023, Yu Zhang, Songlin Yang + +from typing import Tuple + +import torch +import triton +import triton.language as tl +from torch.cuda.amp import custom_bwd, custom_fwd + +from fla.utils import contiguous + + +@torch.jit.script +def normalize_output(q, k, o): + k = k.transpose(-2, -1) + k = k.cumsum(-1) + k = k.transpose(-2, -1) + z = (q * k).sum(-1, keepdim=True) + return o / (z + 1e-5) + + +@triton.jit +def chunk_linear_attn_fwd_kernel_h( + k, + v, + h, + initial_state, # initial state of the chunk [B, H, D_head_K, D_head_V] + final_state, # final state of the chunk [B, H, D_head_K, D_head_V] + s_qk_h, + s_qk_t, + s_qk_d, + s_vo_h, + s_vo_t, + s_vo_d, + s_h_h, + s_h_t, + H: tl.constexpr, + T: tl.constexpr, + K: tl.constexpr, + V: tl.constexpr, + BT: tl.constexpr, + BK: tl.constexpr, + BV: tl.constexpr, + NT: tl.constexpr, + USE_INITIAL_STATE: tl.constexpr, + STORE_FINAL_STATE: tl.constexpr +): + i_k, i_v, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + + # [BK, BV] + b_h = tl.zeros([BK, BV], dtype=tl.float32) + + if USE_INITIAL_STATE: + p_h0 = tl.make_block_ptr(initial_state + i_bh * K * V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + b_h = tl.load(p_h0, boundary_check=(0, 1)).to(tl.float32) + + for i_t in range(NT): + p_k = tl.make_block_ptr(k + i_bh * s_qk_h, (K, T), (s_qk_d, s_qk_t), (i_k * BK, i_t * BT), (BK, BT), (0, 1)) + p_v = tl.make_block_ptr(v + i_bh * s_vo_h, (T, V), (s_vo_t, s_vo_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + p_h = tl.make_block_ptr(h + i_bh * s_h_h + i_t * K * V, (K, V), (s_h_t, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + + tl.store(p_h, b_h.to(p_h.dtype.element_ty), boundary_check=(0, 1)) + # [BK, BT] + b_k = tl.load(p_k, boundary_check=(0, 1)) + # [BT, BV] + b_v = tl.load(p_v, boundary_check=(0, 1)) + # [BK, BV] + b_h += tl.dot(b_k, b_v, allow_tf32=False) + + if STORE_FINAL_STATE: + p_ht = 
tl.make_block_ptr(final_state + i_bh * K * V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + tl.store(p_ht, b_h.to(p_ht.dtype.element_ty), boundary_check=(0, 1)) + + +@triton.jit +def chunk_linear_attn_fwd_kernel_o( + q, + k, + v, + h, + o, + s_qk_h, + s_qk_t, + s_qk_d, + s_vo_h, + s_vo_t, + s_vo_d, + s_h_h, + s_h_t, + scale, + H: tl.constexpr, + T: tl.constexpr, + K: tl.constexpr, + V: tl.constexpr, + BT: tl.constexpr, + BK: tl.constexpr, + BV: tl.constexpr +): + i_v, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + + o_i = tl.arange(0, BT) + m_s = o_i[:, None] >= o_i[None, :] + + b_o = tl.zeros([BT, BV], dtype=tl.float32) + b_s = tl.zeros([BT, BT], dtype=tl.float32) + for i_k in range(tl.cdiv(K, BK)): + p_q = tl.make_block_ptr(q + i_bh * s_qk_h, (T, K), (s_qk_t, s_qk_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + p_k = tl.make_block_ptr(k + i_bh * s_qk_h, (K, T), (s_qk_d, s_qk_t), (i_k * BK, i_t * BT), (BK, BT), (0, 1)) + p_h = tl.make_block_ptr(h + i_bh * s_h_h + i_t * K * V, (K, V), (s_h_t, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + # [BT, BK] + b_q = tl.load(p_q, boundary_check=(0, 1)) + # [BK, BT] + b_k = tl.load(p_k, boundary_check=(0, 1)) + # [BK, BV] + b_h = tl.load(p_h, boundary_check=(0, 1)) + b_o += tl.dot(b_q, b_h, allow_tf32=False) + b_s += tl.dot(b_q, b_k, allow_tf32=False) + + b_s = tl.where(m_s, b_s, 0) + p_v = tl.make_block_ptr(v + i_bh * s_vo_h, (T, V), (s_vo_t, s_vo_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + b_v = tl.load(p_v, boundary_check=(0, 1)) + b_o = (b_o + tl.dot(b_s.to(b_v.dtype), b_v, allow_tf32=False)) * scale + p_o = tl.make_block_ptr(o + i_bh * s_vo_h, (T, V), (s_vo_t, s_vo_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0, 1)) + + +@triton.jit +def chunk_linear_attn_bwd_kernel_dh( + q, + do, + dh, + s_qk_h, + s_qk_t, + s_qk_d, + s_vo_h, + s_vo_t, + s_vo_d, + s_h_h, + s_h_t, + scale, + H: tl.constexpr, + T: tl.constexpr, + K: tl.constexpr, + 
V: tl.constexpr, + BT: tl.constexpr, + BK: tl.constexpr, + BV: tl.constexpr, + NT: tl.constexpr +): + i_k, i_v, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + + # [BK, BV] + b_dh = tl.zeros([BK, BV], dtype=tl.float32) + for i_t in range(NT - 1, -1, -1): + p_q = tl.make_block_ptr(q + i_bh * s_qk_h, (K, T), (s_qk_d, s_qk_t), (i_k * BK, i_t * BT), (BK, BT), (0, 1)) + p_do = tl.make_block_ptr(do + i_bh * s_vo_h, (T, V), (s_vo_t, s_vo_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + p_dh = tl.make_block_ptr(dh + i_bh * s_h_h + i_t * K * V, (K, V), (s_h_t, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + + tl.store(p_dh, b_dh.to(p_dh.dtype.element_ty), boundary_check=(0, 1)) + # [BK, BT] + b_q = tl.load(p_q, boundary_check=(0, 1)) + b_q = (b_q * scale).to(b_q.dtype) + # [BT, V] + b_do = tl.load(p_do, boundary_check=(0, 1)) + # [BK, BV] + b_dh += tl.dot(b_q, b_do.to(b_q.dtype), allow_tf32=False) + + +@triton.jit +def chunk_linear_attn_bwd_kernel_dqkv( + q, + k, + v, + h, + do, + dh, + dq, + dk, + dv, + s_qk_h, + s_qk_t, + s_qk_d, + s_vo_h, + s_vo_t, + s_vo_d, + s_h_h, + s_h_t, + scale, + H: tl.constexpr, + T: tl.constexpr, + K: tl.constexpr, + V: tl.constexpr, + BT: tl.constexpr, + BK: tl.constexpr, + BV: tl.constexpr, + NT: tl.constexpr +): + i_k, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + n_bh = tl.num_programs(2) + o_i = tl.arange(0, BT) + + p_q = tl.make_block_ptr(q + i_bh * s_qk_h, (K, T), (s_qk_d, s_qk_t), (i_k * BK, i_t * BT), (BK, BT), (0, 1)) + p_k = tl.make_block_ptr(k + i_bh * s_qk_h, (T, K), (s_qk_t, s_qk_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + + b_q = tl.load(p_q, boundary_check=(0, 1)) + b_k = tl.load(p_k, boundary_check=(0, 1)) + b_s = tl.dot(b_k, b_q, allow_tf32=False) * scale + b_s = tl.where(o_i[:, None] <= o_i[None, :], b_s, 0) + + b_dq = tl.zeros([BT, BK], dtype=tl.float32) + b_dk = tl.zeros([BT, BK], dtype=tl.float32) + b_ds = tl.zeros([BT, BT], dtype=tl.float32) + for i_v in range(tl.cdiv(V, BV)): + p_v = 
tl.make_block_ptr(v + i_bh * s_vo_h, (T, V), (s_vo_t, s_vo_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + p_h = tl.make_block_ptr(h + i_bh * s_h_h, (V, NT * K), (1, s_h_t), (i_v * BV, i_t * K + i_k * BK), (BV, BK), (0, 1)) + p_do = tl.make_block_ptr(do + i_bh * s_vo_h, (T, V), (s_vo_t, s_vo_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + p_dh = tl.make_block_ptr(dh + i_bh * s_h_h, (NT * K, V), (s_h_t, 1), (i_t * K + i_k * BK, i_v * BV), (BK, BV), (1, 0)) + p_dv = tl.make_block_ptr(dv + (i_k*n_bh+i_bh)*s_vo_h, (T, V), (s_vo_t, s_vo_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + # [BT, BV] + b_v = tl.load(p_v, boundary_check=(0, 1)) + b_do = tl.load(p_do, boundary_check=(0, 1)) + # [BV, BK] + b_h = tl.load(p_h, boundary_check=(0, 1)) + # [BK, BV] + b_dh = tl.load(p_dh, boundary_check=(0, 1)) + # [BT, BT] + b_ds += tl.dot(b_do, tl.trans(b_v), allow_tf32=False) + # [BT, BK] + b_dq += tl.dot(b_do, b_h, allow_tf32=False) * scale + b_dk += tl.dot(b_v, tl.trans(b_dh), allow_tf32=False) + # [BT, BV] + b_dv = tl.dot(b_k, b_dh, allow_tf32=False) + tl.dot(b_s.to(b_q.dtype), b_do, allow_tf32=False) + tl.store(p_dv, b_dv.to(p_dv.dtype.element_ty), boundary_check=(0, 1)) + # [BT, BT] + b_ds = tl.where(o_i[:, None] >= o_i[None, :], b_ds * scale, 0).to(b_q.dtype) + # [BT, BK] + b_dq += tl.dot(b_ds, b_k, allow_tf32=False) + b_dk += tl.trans(tl.dot(b_q, b_ds, allow_tf32=False)) + + p_dq = tl.make_block_ptr(dq + i_bh * s_qk_h, (T, K), (s_qk_t, s_qk_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + p_dk = tl.make_block_ptr(dk + i_bh * s_qk_h, (T, K), (s_qk_t, s_qk_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + tl.store(p_dq, b_dq.to(p_dq.dtype.element_ty), boundary_check=(0, 1)) + tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), boundary_check=(0, 1)) + + +class ChunkLinearAttentionFunction(torch.autograd.Function): + + @staticmethod + @custom_fwd + @contiguous + def forward(ctx, q, k, v, scale, initial_state, output_final_state): + B, H, T, K, V = *q.shape, v.shape[-1] + BT = 64 + BK, BV = 
min(64, triton.next_power_of_2(K)), min(64, triton.next_power_of_2(V)) + NT, NK, NV = triton.cdiv(T, BT), triton.cdiv(K, BK), triton.cdiv(V, BV) + num_stages = 1 + num_warps = 4 if BK == 64 else 2 + ctx.scale = scale + + final_state = None + if output_final_state: + final_state = q.new_empty(B, H, K, V, dtype=torch.float32, requires_grad=False) + + h = q.new_empty(B, H, NT * K, V) + grid = (NK, NV, B * H) + chunk_linear_attn_fwd_kernel_h[grid]( + k, v, h, initial_state, final_state, + q.stride(1), q.stride(2), q.stride(3), + v.stride(1), v.stride(2), v.stride(3), + h.stride(1), h.stride(2), + H=H, T=T, K=K, V=V, BT=BT, BK=BK, BV=BV, NT=NT, + USE_INITIAL_STATE=initial_state is not None, + STORE_FINAL_STATE=output_final_state, + num_warps=num_warps, + num_stages=num_stages + ) + grid = (NV, NT, B * H) + o = torch.empty_like(v) + chunk_linear_attn_fwd_kernel_o[grid]( + q, k, v, h, o, + q.stride(1), q.stride(2), q.stride(3), + v.stride(1), v.stride(2), v.stride(3), + h.stride(1), h.stride(2), + scale, + H=H, T=T, K=K, V=V, BT=BT, BK=BK, BV=BV, + num_warps=num_warps, + num_stages=num_stages + ) + + ctx.save_for_backward(q, k, v, h) + return o.to(q.dtype), final_state + + @staticmethod + @custom_bwd + @contiguous + def backward(ctx, do, d_ht=None): + q, k, v, h = ctx.saved_tensors + + B, H, T, K, V = *q.shape, v.shape[-1] + BT = 64 + BK, BV = min(64, triton.next_power_of_2(K)), min(32 if q.dtype == torch.float32 else 64, triton.next_power_of_2(V)) + NT, NK, NV = triton.cdiv(T, BT), triton.cdiv(K, BK), triton.cdiv(V, BV) + num_stages = 1 + num_warps = 4 if BK == 64 else 2 + scale = ctx.scale + + dh = q.new_empty(B, H, NT * K, V) + grid = (NK, NV, B * H) + chunk_linear_attn_bwd_kernel_dh[grid]( + q, do, dh, + q.stride(1), q.stride(2), q.stride(3), + v.stride(1), v.stride(2), v.stride(3), + dh.stride(1), dh.stride(2), + scale, + H=H, T=T, K=K, V=V, BT=BT, BK=BK, BV=BV, NT=NT, + num_warps=num_warps, + num_stages=num_stages + ) + + grid = (NK, NT, B * H) + dq = 
torch.empty_like(q) + dk = torch.empty_like(k) + dv = v.new_empty(NK, *v.shape) + num_stages = 1 + num_warps = 4 if BK == 64 else 2 + chunk_linear_attn_bwd_kernel_dqkv[grid]( + q, k, v, h, do, dh, dq, dk, dv, + q.stride(1), q.stride(2), q.stride(3), + v.stride(1), v.stride(2), v.stride(3), + dh.stride(1), dh.stride(2), + scale, + H=H, T=T, K=K, V=V, BT=BT, BK=BK, BV=BV, NT=NT, + num_warps=num_warps, + num_stages=num_stages + ) + dv = dv.sum(0) + return dq.to(q.dtype), dk.to(k.dtype), dv.to(v.dtype), None, None, None + + +def chunk_linear_attn( + q: torch.Tensor, + k: torch.Tensor, + v: torch.Tensor, + scale: float = -1, + initial_state: torch.Tensor = None, + output_final_state: bool = False, + normalize: bool = True +) -> Tuple[torch.Tensor, torch.Tensor]: + if scale == -1: + scale = q.shape[-1] ** -0.5 + if initial_state is not None: + initial_state = initial_state.detach() + o, final_state = ChunkLinearAttentionFunction.apply(q, k, v, scale, initial_state, output_final_state) + + if normalize: + o = normalize_output(q * scale, k, o) + + return o, final_state diff --git a/fla/ops/linear_attn/chunk_fuse.py b/fla/ops/linear_attn/chunk_fuse.py new file mode 100644 index 0000000000000000000000000000000000000000..0ca7101d18963ff2e4132c5c525132ae9f1b2432 --- /dev/null +++ b/fla/ops/linear_attn/chunk_fuse.py @@ -0,0 +1,326 @@ +# -*- coding: utf-8 -*- +# Copyright (c) 2023, Yu Zhang, Songlin Yang + +from typing import Tuple + +import torch +import triton +import triton.language as tl +from packaging import version +from torch.cuda.amp import custom_bwd, custom_fwd + +from fla.utils import contiguous + +# on-the-fly computation without materializing hidden states into HBMs + + +@torch.jit.script +def normalize_output(q, k, o): + k = k.transpose(-2, -1) + k = k.cumsum(-1) + k = k.transpose(-2, -1) + z = (q * k).sum(-1, keepdim=True) + return o / (z + 1e-5) + + +@triton.jit +def fused_chunk_linear_attn_fwd_kernel( + # B: batch_size, H: n_heads, T: seq_len, D: d_head + q, #
query [B, H, L, D_head_K] + k, # key [B, H, L, D_head_K] + v, # value [B, H, L, D_head_V] + o, # output [B, H, L, D_head_V] + initial_state, # initial state of the chunk [B, H, D_head_K, D_head_V] + final_state, # final state of the chunk [B, H, D_head_K, D_head_V] + s_qk_h, # stride size: L * D_head_K + s_qk_t, # stride size: D_head_K + s_qk_d, # stride size: 1 + s_vo_h, # stride size: L * D_head_V + s_vo_t, # stride size: D_head_V + s_vo_d, # stride size: 1 + B, # batch size + H, # n_heads + T, # seq_len + scale, # D_head_K ** -0.5 + BT: tl.constexpr, # BLOCK SIZE along the sequence dimension, a.k.a. chunk size + BK: tl.constexpr, # BLOCK SIZE along the K dimension + BV: tl.constexpr, # BLOCK SIZE along the V dimension + DK: tl.constexpr, # D_head_K + DV: tl.constexpr, # D_head_V + USE_INITIAL_STATE: tl.constexpr, + STORE_FINAL_STATE: tl.constexpr, + CHECK: tl.constexpr +): + # indices + i_v, i_k, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + + o_i = tl.arange(0, BT) + + # [BT, BT] + m_s = o_i[:, None] >= o_i[None, :] + # [BK, BV] + b_h = tl.zeros([BK, BV], dtype=tl.float32) + + # make block pointers + p_q = tl.make_block_ptr(q + i_bh * s_qk_h, (T, DK), (s_qk_t, s_qk_d), (0, i_k * BK), (BT, BK), (1, 0)) + p_k = tl.make_block_ptr(k + i_bh * s_qk_h, (DK, T), (s_qk_d, s_qk_t), (i_k * BK, 0), (BK, BT), (0, 1)) + p_v = tl.make_block_ptr(v + i_bh * s_vo_h, (T, DV), (s_vo_t, s_vo_d), (0, i_v * BV), (BT, BV), (1, 0)) + p_o = tl.make_block_ptr(o + (i_bh+i_k*B*H) * s_vo_h, (T, DV), (s_vo_t, s_vo_d), (0, i_v * BV), (BT, BV), (1, 0)) + + if USE_INITIAL_STATE: + p_h = tl.make_block_ptr(initial_state + i_bh * DK * DV, (DK, DV), (DV, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + b_h = tl.load(p_h, boundary_check=(0, 1)).to(tl.float32) + + for i in range(0, tl.cdiv(T, BT)): + # [BK, BT] + b_k = tl.load(p_k, boundary_check=(0, 1)) + # [BT, BV] + b_v = tl.load(p_v, boundary_check=(0, 1)) + # [BT, BK] + b_q = tl.load(p_q, boundary_check=(0, 1)) + b_q = (b_q *
scale).to(b_k.dtype) + + # [BT, BT] + b_s = tl.dot(b_q, b_k, allow_tf32=False) + b_s = tl.where(m_s, b_s, 0) + # [BT, BV] + b_o = tl.dot(b_s.to(b_q.dtype), b_v, allow_tf32=False) + if CHECK and i == 0: + b_o += tl.dot(b_q, b_h.to(b_q.dtype), allow_tf32=False) + b_h = b_h + tl.dot(b_k, b_v, allow_tf32=False) + else: + b_o += tl.dot(b_q, b_h.to(b_q.dtype), allow_tf32=False) + b_h = b_h + tl.dot(b_k, b_v, allow_tf32=False) + tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0, 1)) + p_q = tl.advance(p_q, (BT, 0)) + p_k = tl.advance(p_k, (0, BT)) + p_v = tl.advance(p_v, (BT, 0)) + p_o = tl.advance(p_o, (BT, 0)) + + if STORE_FINAL_STATE: + p_final = tl.make_block_ptr(final_state + i_bh * DK * DV, (DK, DV), (DV, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + tl.store(p_final, b_h.to(p_final.dtype.element_ty), boundary_check=(0, 1)) + + +# Similar to Algorithm1 of https://arxiv.org/abs/2006.16236 +@triton.jit +def fused_chunk_linear_attn_bwd_kernel( + # B: batch_size, H: n_heads, T: seq_len, D: d_head + # NV: number of split in the V dimension. NK: number of split in the K dimension + q, # query [B, H, L, D_head_K] + k, # key [B, H, L, D_head_V] + v, # value [B, H, L, D_head_V] + do, # gradient of output [B, H, L, D_head_V] + dq, # gradient of query [NV, B, H, L, D_head_K] + dk, # gradient of key [NV, B, H, L, D_head_K] + dv, # gradient of value [NK, B, H, L, D_head_V] + + initial_state, # initial state of the chunk [B, H, D_head_K, D_head_V] + + s_qk_h, # stride size: L * D_head_K + s_qk_t, # stride size: D_head_K + s_qk_d, # stride size: 1 + s_vo_h, # stride size: L * D_head_V + s_vo_t, # stride size: D_head_V + s_vo_d, # stride size: 1 + B, # batch_size + H, # n_heads + T, # seq_len + scale, # D_head_K ** -0.5 + BT: tl.constexpr, # BLOCK SIZE along the sequence dimension, a.k.a. 
chunk size + BK: tl.constexpr, # BLOCK SIZE along the K dimension + BV: tl.constexpr, # BLOCK SIZE along the V dimension + DK: tl.constexpr, # D_head_K + DV: tl.constexpr, # D_head_V + USE_INITIAL_STATE: tl.constexpr, + CHECK: tl.constexpr +): + i_v, i_k, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + o_i = tl.arange(0, BT) + + m_s = o_i[:, None] >= o_i[None, :] + # [BV, BK] + b_h = tl.zeros([BV, BK], dtype=tl.float32) + if USE_INITIAL_STATE: + p_h = tl.make_block_ptr(initial_state + i_bh * DK * DV, (DV, DK), (1, DV), (i_v * BV, i_k * BK), (BV, BK), (0, 1)) + b_h = tl.load(p_h, boundary_check=(0, 1)).to(tl.float32) + + for i in range(0, tl.cdiv(T, BT)): + p_k = tl.make_block_ptr(k + i_bh * s_qk_h, (T, DK), (s_qk_t, s_qk_d), (i * BT, i_k * BK), (BT, BK), (1, 0)) + p_v = tl.make_block_ptr(v + i_bh * s_vo_h, (DV, T), (s_vo_d, s_vo_t), (i_v * BV, i * BT), (BV, BT), (0, 1)) + p_do = tl.make_block_ptr(do + i_bh * s_vo_h, (T, DV), (s_vo_t, s_vo_d), (i * BT, i_v * BV), (BT, BV), (1, 0)) + p_dq = tl.make_block_ptr(dq + (i_bh + i_v*B*H) * s_qk_h, (T, DK), (s_qk_t, s_qk_d), (i*BT, i_k*BK), (BT, BK), (1, 0)) + + # [BT, DK] + b_k = tl.load(p_k, boundary_check=(0, 1)) + # [DV, BT] + b_v = tl.load(p_v, boundary_check=(0, 1)) + # [BT, DV] + b_do = tl.load(p_do, boundary_check=(0, 1)) + + # [BT, BT] + b_ds = tl.dot(b_do, b_v, allow_tf32=False) + b_ds = tl.where(m_s, b_ds, 0) + # [BT, DK] + b_dq = tl.dot(b_ds.to(b_k.dtype), b_k, allow_tf32=False) + # [DV, DK] + if CHECK and i == 0: + b_dq += tl.dot(b_do, b_h.to(b_do.dtype), allow_tf32=False) + b_h = b_h + tl.dot(b_v, b_k, allow_tf32=False) + else: + b_dq += tl.dot(b_do, b_h.to(b_do.dtype), allow_tf32=False) + b_h = b_h + tl.dot(b_v, b_k, allow_tf32=False) + b_dq *= scale + tl.store(p_dq, b_dq.to(p_dq.dtype.element_ty), boundary_check=(0, 1)) + + # sync threads + b_h = None + tl.debug_barrier() + # [BK, BV] + b_dh = tl.zeros([BK, BV], dtype=tl.float32) + m_s = o_i[:, None] <= o_i[None, :] + for i in range(1, tl.cdiv(T, 
BT) + 1): + p_q = tl.make_block_ptr(q + i_bh * s_qk_h, (DK, T), (s_qk_d, s_qk_t), (i_k * BK, T - i * BT), (BK, BT), (0, 1)) + p_k = tl.make_block_ptr(k + i_bh * s_qk_h, (T, DK), (s_qk_t, s_qk_d), (T - i * BT, i_k * BK), (BT, BK), (1, 0)) + p_v = tl.make_block_ptr(v + i_bh * s_vo_h, (T, DV), (s_vo_t, s_vo_d), (T - i * BT, i_v * BV), (BT, BV), (1, 0)) + p_do = tl.make_block_ptr(do + i_bh * s_vo_h, (T, DV), (s_vo_t, s_vo_d), (T - i * BT, i_v * BV), (BT, BV), (1, 0)) + p_dk = tl.make_block_ptr(dk + (i_bh+i_v*B*H) * s_qk_h, (T, DK), (s_qk_t, s_qk_d), (T - i*BT, i_k*BK), (BT, BK), (1, 0)) + p_dv = tl.make_block_ptr(dv + (i_bh+i_k*B*H) * s_vo_h, (T, DV), (s_vo_t, s_vo_d), (T - i*BT, i_v*BV), (BT, BV), (1, 0)) + # [DK, BT] + b_q = tl.load(p_q, boundary_check=(0, 1)) + # [BT, DK] + b_k = tl.load(p_k, boundary_check=(0, 1)) + # [BT, DV] + b_v = tl.load(p_v, boundary_check=(0, 1)) + b_do = tl.load(p_do, boundary_check=(0, 1)) + + # b_dd = (b_do]).to(b_do.dtype) + + # [BT, BT] + b_ds = tl.dot(b_v, tl.trans(b_do), allow_tf32=False) + b_ds = tl.where(m_s, b_ds, 0).to(b_q.dtype) + # [BT, BT] + b_s = tl.dot(b_k, b_q, allow_tf32=False) * scale + b_s = tl.where(m_s, b_s, 0).to(b_q.dtype) + # [BT, DK] + b_dk = tl.dot(b_ds, tl.trans(b_q), allow_tf32=False) + # [BT, DV] + b_dv = tl.dot(b_s, b_do, allow_tf32=False) + if CHECK and i == 1: + b_dk += tl.dot(b_v, tl.trans(b_dh).to(b_v.dtype), allow_tf32=False) + b_dv += tl.dot(b_k, b_dh.to(b_k.dtype), allow_tf32=False) + b_dh += tl.dot(b_q, b_do, allow_tf32=False) + else: + b_dk += tl.dot(b_v, tl.trans(b_dh).to(b_v.dtype), allow_tf32=False) + b_dv += tl.dot(b_k, b_dh.to(b_k.dtype), allow_tf32=False) + b_dh += tl.dot(b_q, b_do, allow_tf32=False) + + tl.store(p_dk, (b_dk * scale).to(p_dk.dtype.element_ty), boundary_check=(0, 1)) + tl.store(p_dv, b_dv.to(p_dv.dtype.element_ty), boundary_check=(0, 1)) + + +class FusedChunkLinearAttentionFunction(torch.autograd.Function): + @staticmethod + @contiguous + @custom_fwd + def forward(ctx, q, k, v, 
scale, initial_state, output_final_state): + batch_size, n_heads, seq_len, d_head_qk = q.shape + d_head_v = v.shape[-1] + ctx.scale = scale + BT = 64 + BK, BV = min(triton.next_power_of_2(d_head_qk), 64), min(triton.next_power_of_2(d_head_v), 64) + NK, NV = triton.cdiv(d_head_qk, BK), triton.cdiv(d_head_v, BV) + num_stages = 1 + num_warps = 4 + o = q.new_empty(NK, batch_size, n_heads, seq_len, d_head_v) + if output_final_state: + final_state = q.new_empty(batch_size, n_heads, d_head_qk, d_head_v, dtype=torch.float32, requires_grad=False) + else: + final_state = None + # the bug still exists even for Triton 2.2 on H100 GPUs + # so we always enable initial checks + CHECK = True + if version.parse(triton.__version__) < version.parse('2.2.0'): + import warnings + warnings.warn( + "Triton<2.2.0 detected for running this kernel, " + "which is known to have some weird compiler issues (refer to https://github.com/openai/triton/issues/2852) " + "that lead to significant precision loss. " + "We've added some initial condition checks to resolve this, at the cost of some speed. " + "For optimal performance, it is recommended to install Triton>=2.2.0 (if possible)."
+ ) + CHECK = True + + grid = (NV, NK, batch_size * n_heads) + fused_chunk_linear_attn_fwd_kernel[grid]( + q, k, v, o, initial_state, final_state, + q.stride(1), q.stride(2), q.stride(3), + v.stride(1), v.stride(2), v.stride(3), + batch_size, n_heads, seq_len, scale, + BT=BT, DK=d_head_qk, DV=d_head_v, BK=BK, BV=BV, + USE_INITIAL_STATE=initial_state is not None, + STORE_FINAL_STATE=output_final_state, + CHECK=CHECK, + num_warps=num_warps, + num_stages=num_stages + ) + + o = o.sum(0) + ctx.save_for_backward(q, k, v, initial_state) + ctx.CHECK = CHECK + return o.to(q.dtype), final_state + + @staticmethod + @custom_bwd + @contiguous + def backward(ctx, do, d_final_state=None): + q, k, v, initial_state = ctx.saved_tensors + batch_size, n_heads, seq_len, d_head_qk = q.shape + d_head_v = v.shape[-1] + scale = ctx.scale + + BT = 64 + BK, BV = min(triton.next_power_of_2(d_head_qk), 64), min(triton.next_power_of_2(d_head_v), 64) + NK, NV = triton.cdiv(d_head_qk, BK), triton.cdiv(d_head_v, BV) + num_stages = 1 + num_warps = 4 + + dq = q.new_empty(NV, batch_size, n_heads, seq_len, d_head_qk) + dk = q.new_empty(NV, batch_size, n_heads, seq_len, d_head_qk) + dv = q.new_empty(NK, batch_size, n_heads, seq_len, d_head_v) + grid = (NV, NK, batch_size * n_heads) + + fused_chunk_linear_attn_bwd_kernel[grid]( + q, k, v, do, dq, dk, dv, initial_state, + q.stride(1), q.stride(2), q.stride(3), + v.stride(1), v.stride(2), v.stride(3), + batch_size, n_heads, seq_len, scale, + BT=BT, DK=d_head_qk, DV=d_head_v, BK=BK, BV=BV, + USE_INITIAL_STATE=initial_state is not None, + CHECK=ctx.CHECK, + num_warps=num_warps, + num_stages=num_stages + ) + dq = dq.sum(0) + dk = dk.sum(0) + dv = dv.sum(0) + return dq.to(q.dtype), dk.to(k.dtype), dv.to(v.dtype), None, None, None + + +def fused_chunk_linear_attn( + q: torch.Tensor, + k: torch.Tensor, + v: torch.Tensor, + scale: float = -1, + initial_state: torch.Tensor = None, + output_final_state: bool = False, + normalize: bool = True +) -> 
Tuple[torch.Tensor, torch.Tensor]: + if initial_state is not None: + initial_state = initial_state.detach() + if scale == -1: + scale = q.shape[-1] ** -0.5 + o, final_state = FusedChunkLinearAttentionFunction.apply(q, k, v, scale, initial_state, output_final_state) + if normalize: + o = normalize_output(q * scale, k, o) + return o, final_state diff --git a/fla/ops/linear_attn/naive.py b/fla/ops/linear_attn/naive.py new file mode 100644 index 0000000000000000000000000000000000000000..60b319722b71204ae6ff74b0f05b4c6b9b02a9bd --- /dev/null +++ b/fla/ops/linear_attn/naive.py @@ -0,0 +1,20 @@ +# -*- coding: utf-8 -*- + +import torch +from einops import rearrange + + +def torch_chunk_linear_attn(q, k, v, chunk_size=64): + q = rearrange(q, 'b h (n c) d -> b h n c d', c=chunk_size) * (q.shape[-1] ** -0.5) + k = rearrange(k, 'b h (n c) d -> b h n c d', c=chunk_size) + v = rearrange(v, 'b h (n c) d -> b h n c d', c=chunk_size) + kv = k.transpose(-1, -2) @ v + kv = kv.cumsum(2) + kv = torch.cat([ + torch.zeros_like(kv[:, :, :1]), + kv[:, :, :-1] + ], dim=2) + inter = q @ kv + intra = ((q @ k.transpose(-1, -2)).masked_fill_(torch.triu(torch.ones(chunk_size, chunk_size, dtype=bool, device=q.device), diagonal=1), 0)) @ v + o = inter + intra + return rearrange(o, 'b h n c d -> b h (n c) d') diff --git a/fla/ops/linear_attn/recurrent_fuse.py b/fla/ops/linear_attn/recurrent_fuse.py new file mode 100644 index 0000000000000000000000000000000000000000..20bd0fe891c015e3b958088d2433c9c5638e6eec --- /dev/null +++ b/fla/ops/linear_attn/recurrent_fuse.py @@ -0,0 +1,284 @@ +# -*- coding: utf-8 -*- +# Copyright (c) 2023, Yu Zhang, Songlin Yang + +from typing import Tuple + +import torch +import triton +import triton.language as tl + +from fla.utils import contiguous + +# on-the-fly computation without materializing hidden states into HBMs + + +@torch.jit.script +def normalize_output(q, k, o): + k = k.transpose(-2, -1) + k = k.cumsum(-1) + k = k.transpose(-2, -1) + z = (q * k).sum(-1,
keepdim=True) + return o / (z + 1e-5) + + +@triton.jit +def fused_recurrent_linear_attn_fwd_kernel( + # B: batch_size, H: n_heads, T: seq_len, D: d_head + q, # query [B, H, L, D_head_K] + k, # key [B, H, L, D_head_V] + v, # value [B, H, L, D_head_V] + o, # output [B, H, L, D_head_V] + initial_state, + final_state, # final hidden state [B, H, D_head_K, D_head_V] + + s_qk_h, # stride size: L * D_head_K + s_qk_t, # stride size: D_head_K + s_qk_d, # stride size: 1 + + s_vo_h, # stride size: L * D_head_V + s_vo_t, # stride size: D_head_V + s_vo_d, # stride size: 1 + + B, # batch size + H, # n_heads + T, # seq_len + scale, # D_head_K ** -0.5 + BK: tl.constexpr, # BLOCK SIZE along the K dimension + BV: tl.constexpr, # BLOCK SIZE along the V dimension + DK: tl.constexpr, # D_head_K + DV: tl.constexpr, # D_head_V + USE_INITIAL_STATE: tl.constexpr, # whether to use initial state + STORE_FINAL_STATE: tl.constexpr, # whether to store final state +): + # indices + i_v, i_k, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + + p_q = q + i_bh * s_qk_h + i_k * BK + tl.arange(0, BK) + p_k = k + i_bh * s_qk_h + i_k * BK + tl.arange(0, BK) + p_v = v + i_bh * s_vo_h + i_v * BV + tl.arange(0, BV) + p_o = o + (i_bh + i_k * B * H) * s_vo_h + i_v * BV + tl.arange(0, BV) + + mask_bk = (i_k * BK + tl.arange(0, BK)) < DK + mask_bv = (i_v * BV + tl.arange(0, BV)) < DV + mask_kv = mask_bk[None, :] & mask_bv[:, None] + + h = tl.zeros([BV, BK], dtype=tl.float32) + + if USE_INITIAL_STATE: + p_init_s = initial_state + i_bh * DK * DV + \ + (i_k * BK + tl.arange(0, BK)[None, :]) * \ + DV + (i_v * BV + tl.arange(0, BV)[:, None]) + h += tl.load(p_init_s, mask=mask_kv, other=0).to(tl.float32) + + for _ in range(0, T): + _k = tl.load(p_k, mask=mask_bk, other=0).to(tl.float32) + _v = tl.load(p_v, mask=mask_bv, other=0).to(tl.float32) + _q = tl.load(p_q, mask=mask_bk, other=0).to(tl.float32) * scale + + h += _k[None, :] * _v[:, None] + _o = h * _q[None, :] + _o = tl.sum(_o, axis=1) + 
tl.store(p_o, _o.to(p_o.dtype.element_ty), mask=mask_bv) + + p_q += DK + p_k += DK + p_o += DV + p_v += DV + + if STORE_FINAL_STATE: + p_final_s = final_state + i_bh * DK * DV + \ + (i_k * BK + tl.arange(0, BK)[None, :]) * \ + DV + (i_v * BV + tl.arange(0, BV)[:, None]) + tl.store(p_final_s, h.to(p_final_s.dtype.element_ty), mask=mask_kv) + + +# Similar to Algorithm 1 of https://arxiv.org/abs/2006.16236 +@triton.jit +def fused_recurrent_linear_attn_bwd_kernel( + # B: batch_size, H: n_heads, T: seq_len, D: d_head + # NV: number of splits in the V dimension. NK: number of splits in the K dimension + q, # query [B, H, L, D_head_K] + k, # key [B, H, L, D_head_K] + v, # value [B, H, L, D_head_V] + + do, # gradient of output [B, H, L, D_head_V] + dq, # gradient of query [NV, B, H, L, D_head_K] + dk, # gradient of key [NV, B, H, L, D_head_K] + dv, # gradient of value [NK, B, H, L, D_head_V] + + # initial hidden state initialization [B, H, D_head_K, D_head_V] + initial_state, + + s_qk_h, # stride size: L * D_head_K + s_qk_t, # stride size: D_head_K + s_qk_d, # stride size: 1 + + s_vo_h, # stride size: L * D_head_V + s_vo_t, # stride size: D_head_V + s_vo_d, # stride size: 1 + + B, # batch_size + H, # n_heads + T, # seq_len + scale, # D_head_K ** -0.5 + BK: tl.constexpr, # BLOCK SIZE along the K dimension + BV: tl.constexpr, # BLOCK SIZE along the V dimension + DK: tl.constexpr, # D_head_K + DV: tl.constexpr, # D_head_V + USE_INITIAL_STATE: tl.constexpr, # whether to use initial state +): + i_v, i_k, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + + p_q = q + i_bh * s_qk_h + i_k * BK + tl.arange(0, BK) + p_k = k + i_bh * s_qk_h + i_k * BK + tl.arange(0, BK) + p_v = v + i_bh * s_vo_h + i_v * BV + tl.arange(0, BV) + p_do = do + i_bh * s_vo_h + i_v * BV + tl.arange(0, BV) + + p_dq = dq + (i_bh + i_v * B * H) * s_qk_h + i_k * BK + tl.arange(0, BK) + mask_bk = i_k * BK + tl.arange(0, BK) < DK + mask_bv = i_v * BV + tl.arange(0, BV) < DV + + h = tl.zeros([BK, BV],
dtype=tl.float32) + + if USE_INITIAL_STATE: + mask_kv = mask_bk[:, None] & mask_bv[None, :] + p_init_s = initial_state + i_bh * DK * DV + \ + (i_k * BK + tl.arange(0, BK)[:, None]) * \ + DV + (i_v * BV + tl.arange(0, BV)[None, :]) + h += tl.load(p_init_s, mask=mask_kv, other=0).to(tl.float32) + + for i in range(0, T): + _k = tl.load(p_k, mask=mask_bk, other=0).to(tl.float32) + _v = tl.load(p_v, mask=mask_bv, other=0).to(tl.float32) + _do = tl.load(p_do, mask=mask_bv, other=0).to(tl.float32) + + h += _k[:, None] * _v[None, :] + _d_q = h * _do[None, :] + d_q = tl.sum(_d_q, axis=1) * scale + tl.store(p_dq, d_q.to(p_dq.dtype.element_ty), mask=mask_bk) + + p_k += DK + p_do += DV + p_v += DV + p_dq += DK + + # sync threads + tl.debug_barrier() + + p_q = q + i_bh * s_qk_h + i_k * BK + tl.arange(0, BK) + (T - 1) * DK + p_k = k + i_bh * s_qk_h + i_k * BK + tl.arange(0, BK) + (T - 1) * DK + p_do = do + i_bh * s_vo_h + i_v * BV + tl.arange(0, BV) + (T - 1) * DV + p_v = v + i_bh * s_vo_h + i_v * BV + tl.arange(0, BV) + (T - 1) * DV + p_dk = dk + (i_bh + i_v * B * H) * s_qk_h + i_k * \ + BK + tl.arange(0, BK) + (T - 1) * DK + p_dv = dv + (i_bh + i_k * B * H) * s_vo_h + i_v * \ + BV + tl.arange(0, BV) + (T - 1) * DV + d_h = tl.zeros([BK, BV], dtype=tl.float32) + + for _ in range(T): + _do = tl.load(p_do, mask=mask_bv, other=0).to(tl.float32) + _q = tl.load(p_q, mask=mask_bk, other=0).to(tl.float32) * scale + _k = tl.load(p_k, mask=mask_bk, other=0).to(tl.float32) + _v = tl.load(p_v, mask=mask_bv, other=0).to(tl.float32) + d_h += _q[:, None] * _do[None, :] + d_k = tl.sum(d_h * _v[None, :], axis=1) + d_v = tl.sum(d_h * _k[:, None], axis=0) + + tl.store(p_dk, d_k.to(p_dk.dtype.element_ty), mask=mask_bk) + tl.store(p_dv, d_v.to(p_dv.dtype.element_ty), mask=mask_bv) + + p_do -= DV + p_q -= DK + p_k -= DK + p_v -= DV + p_dk -= DK + p_dv -= DV + + +class FusedRecurrentLinearAttentionFunction(torch.autograd.Function): + + @staticmethod + @contiguous + def forward(ctx, q, k, v, 
initial_state=None, output_final_state=False): + batch_size, n_heads, seq_len, d_head_qk = q.shape + d_head_v = v.shape[-1] + + scale = d_head_qk ** -0.5 + BK, BV = min(d_head_qk, 32), min(d_head_v, 32) + NK, NV = triton.cdiv(d_head_qk, BK), triton.cdiv(d_head_v, BV) + num_stages = 1 + num_warps = 1 + + o = q.new_empty(NK, batch_size, n_heads, seq_len, d_head_v) + + if output_final_state: + final_state = q.new_empty(batch_size, n_heads, d_head_qk, d_head_v) + else: + final_state = None + + grid = (NV, NK, batch_size * n_heads) + fused_recurrent_linear_attn_fwd_kernel[grid]( + q, k, v, o, initial_state, final_state, + q.stride(1), q.stride(2), q.stride(3), + v.stride(1), v.stride(2), v.stride(3), + batch_size, n_heads, seq_len, scale, + DK=d_head_qk, DV=d_head_v, BK=BK, BV=BV, + num_warps=num_warps, + num_stages=num_stages, + USE_INITIAL_STATE=initial_state is not None, + STORE_FINAL_STATE=final_state is not None + ) + + o = o.sum(0) + ctx.save_for_backward(q, k, v, initial_state) + return o, final_state + + @staticmethod + @contiguous + def backward(ctx, do, d_final_state=None): + q, k, v, initial_state = ctx.saved_tensors + batch_size, n_heads, seq_len, d_head_qk = q.shape + d_head_v = v.shape[-1] + scale = d_head_qk ** -0.5 + + BK, BV = min(d_head_qk, 32), min(d_head_v, 32) + NK, NV = triton.cdiv(d_head_qk, BK), triton.cdiv(d_head_v, BV) + num_stages = 1 + num_warps = 1 + + dq = q.new_empty(NV, batch_size, n_heads, seq_len, d_head_qk) + dk = q.new_empty(NV, batch_size, n_heads, seq_len, d_head_qk) + dv = q.new_empty(NK, batch_size, n_heads, seq_len, d_head_v) + grid = (NV, NK, batch_size * n_heads) + + fused_recurrent_linear_attn_bwd_kernel[grid]( + q, k, v, do, dq, dk, dv, initial_state, + q.stride(1), q.stride(2), q.stride(3), + v.stride(1), v.stride(2), v.stride(3), + batch_size, n_heads, seq_len, scale, + DK=d_head_qk, DV=d_head_v, BK=BK, BV=BV, + num_warps=num_warps, + num_stages=num_stages, + USE_INITIAL_STATE=initial_state is not None + ) + dq = dq.sum(0) 
+ dk = dk.sum(0) + dv = dv.sum(0) + return dq, dk, dv, None, None + + +def fused_recurrent_linear_attn( + q: torch.Tensor, + k: torch.Tensor, + v: torch.Tensor, + initial_state: torch.Tensor = None, + output_final_state: bool = False, + normalize: bool = False +) -> Tuple[torch.Tensor, torch.Tensor]: + if initial_state is not None: + initial_state = initial_state.detach() + o, final_state = FusedRecurrentLinearAttentionFunction.apply( + q, k, v, initial_state, output_final_state) + if normalize: + o = normalize_output(q, k, o) + return o, final_state diff --git a/fla/ops/rebased/__init__.py b/fla/ops/rebased/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..6ec6a0cb31f7f635aa528cad753d5e19196a2028 --- /dev/null +++ b/fla/ops/rebased/__init__.py @@ -0,0 +1,7 @@ +# -*- coding: utf-8 -*- + +from .parallel import parallel_rebased + +__all__ = [ + 'parallel_rebased' +] diff --git a/fla/ops/rebased/naive.py b/fla/ops/rebased/naive.py new file mode 100644 index 0000000000000000000000000000000000000000..77bdf56ba9dc6a814600a90fc26894b4956d3ed0 --- /dev/null +++ b/fla/ops/rebased/naive.py @@ -0,0 +1,80 @@ +# -*- coding: utf-8 -*- + +import torch +from einops import rearrange + +from fla.ops.rebased.parallel import parallel_rebased + +def naive_parallel_rebased(q, k, v, use_scale=True, use_norm=True): + if use_scale: + q = q * (q.shape[-1] ** -0.5) + attn = q @ k.transpose(-2, -1) + attn = (attn ** 2) + attn.masked_fill_(~torch.tril(torch.ones( + q.shape[-2], q.shape[-2], dtype=torch.bool, device=q.device)), 0) + o = attn @ v + if use_norm: + z = attn.sum(-1) + return o / (z[..., None] + 1e-6) + else: + return o + + +if __name__ == "__main__": + B = 4 + H = 4 + L = 128 + # D = 15 + dtype = torch.float32 + q = (torch.randn(B, H, L, 16).cuda().to(dtype)).requires_grad_(True) + k = (torch.randn(B, H, L, 16).cuda().to(dtype)).requires_grad_(True) + v = torch.randn(B, H, L, 128).cuda().to(dtype).requires_grad_(True) + + do = 
torch.randn_like(v).cuda() + ref = naive_parallel_rebased(q, k, v, True, True) + ref.backward(do, retain_graph=True) + ref_dq, q.grad = q.grad.clone(), None + ref_dk, k.grad = k.grad.clone(), None + ref_dv, v.grad = v.grad.clone(), None + + # tri = naive_chunk_based(q, k, v) + # tri.backward(do, retain_graph=True) + # tri_dq, q.grad = q.grad.clone(), None + # tri_dk, k.grad = k.grad.clone(), None + # tri_dv, v.grad = v.grad.clone(), None + + # assert ref.allclose(tri, 0, 1e-4), breakpoint() + # assert ref_dq.allclose(tri_dq, 0, 1e-4), breakpoint() + # assert ref_dk.allclose(tri_dk, 0, 1e-4), breakpoint() + # assert ref_dv.allclose(tri_dv, 0, 1e-4), breakpoint() + + tri = parallel_rebased(q, k, v, 1e-6, True, True) + tri.backward(do, retain_graph=True) + tri_dq, q.grad = q.grad.clone(), None + tri_dk, k.grad = k.grad.clone(), None + tri_dv, v.grad = v.grad.clone(), None + print((ref-tri).abs().max()) + print((ref_dq-tri_dq).abs().max()) + print((ref_dk-tri_dk).abs().max()) + print((ref_dv-tri_dv).abs().max()) + + # assert ref.allclose(tri, 0, 1e-4), breakpoint() + # assert ref_dq.allclose(tri_dq, 0, 1e-4), breakpoint() + # assert ref_dk.allclose(tri_dk, 0, 1e-4), breakpoint() + # assert ref_dv.allclose(tri_dv, 0, 1e-4), breakpoint() + + # tri = parallel_based(q, k, v, True, True) + # tri.backward(do, retain_graph=True) + # tri_dq, q.grad = q.grad.clone(), None + # tri_dk, k.grad = k.grad.clone(), None + # tri_dv, v.grad = v.grad.clone(), None + + # print((ref-tri).abs().max()) + # print((ref_dq-tri_dq).abs().max()) + # print((ref_dk-tri_dk).abs().max()) + # print((ref_dv-tri_dv).abs().max()) + + # assert ref.allclose(tri, 0, 1e-4), breakpoint() + # assert ref_dq.allclose(tri_dq, 0, 1e-4), breakpoint() + # assert ref_dk.allclose(tri_dk, 0, 1e-4), breakpoint() + # assert ref_dv.allclose(tri_dv, 0, 1e-4), breakpoint() diff --git a/fla/ops/rebased/parallel.py b/fla/ops/rebased/parallel.py new file mode 100644 index 
0000000000000000000000000000000000000000..73920b591462653ddcf9bc32d7e89df67fe62ea8 --- /dev/null +++ b/fla/ops/rebased/parallel.py @@ -0,0 +1,387 @@ + +# -*- coding: utf-8 -*- + +import torch +import triton +import triton.language as tl +from torch.cuda.amp import custom_bwd, custom_fwd + +from fla.utils import contiguous + +# Rebased: Linear Transformers with Learnable Kernel Functions are Better In-Context Models +# https://github.com/corl-team/rebased/blob/main/flash_linear_attention/fla/ops/triton/rebased_fast/parallel.py + + +@triton.jit +def parallel_rebased_fwd_kernel( + # B: batch_size, H: n_heads, T: seq_len, D: d_head + q, # query [B, H, L, D_head_K] + k, # key [B, H, L, D_head_K] + v, # value [B, H, L, D_head_V] + o, # output [B, H, L, D_head_V] + z, # normalizer [B, H, L] + s_qk_h, # stride size: L * D_head_K + s_qk_t, # stride size: D_head_K + s_qk_d, # stride size: 1 + s_vo_h, # stride size: L * D_head_V + s_vo_t, # stride size: D_head_V + s_vo_d, # stride size: 1 + B, # batch size + H, # n_heads + T, # seq_len + scale, # D_head_K ** -0.5 + BTL: tl.constexpr, # BLOCK SIZE along the sequence dimension for Q + BTS: tl.constexpr, # BLOCK SIZE along the sequence dimension for K/V + BK: tl.constexpr, # BLOCK SIZE along the K dimension + BV: tl.constexpr, # BLOCK SIZE along the V dimension + DK: tl.constexpr, # D_head_K + DV: tl.constexpr, # D_head_V +): + # i_c: chunk index.
used for sequence parallelism + i_kv, i_c, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + NV = tl.cdiv(DV, BV) + i_k = i_kv // (NV) + i_v = i_kv % (NV) + + p_q = tl.make_block_ptr(q + i_bh * s_qk_h, (T, DK), + (s_qk_t, s_qk_d), (i_c * BTL, i_k * BK), (BTL, BK), (1, 0)) + p_k = tl.make_block_ptr(k + i_bh * s_qk_h, (DK, T), + (s_qk_d, s_qk_t), (i_k * BK, 0), (BK, BTS), (0, 1)) + p_v = tl.make_block_ptr(v + i_bh * s_vo_h, (T, DV), + (s_vo_t, s_vo_d), (0, i_v * BV), (BTS, BV), (1, 0)) + + # [BQ, BD] block Q, in the shared memory throughout the whole kernel + b_q = tl.load(p_q, boundary_check=(0, 1)) + b_q = (b_q * scale).to(b_q.dtype) + b_o = tl.zeros([BTL, BV], dtype=tl.float32) + b_z = tl.zeros([BTL], dtype=tl.float32) + + # Q block and K block have no overlap + # no need for mask, thereby saving flops + for _ in range(0, i_c * BTL, BTS): + # [BK, BTS] + b_k = tl.load(p_k, boundary_check=(0, 1)) + + # [BTS, BV] + b_v = tl.load(p_v, boundary_check=(0, 1)) + # [BTL, BTS] + b_s = tl.dot(b_q, (b_k), allow_tf32=False) + b_s = b_s * b_s + b_z += tl.sum(b_s, axis=1) + + # [BQ, BD] + b_o = b_o + tl.dot(b_s.to(b_v.dtype), b_v, allow_tf32=False) + p_k = tl.advance(p_k, (0, BTS)) + p_v = tl.advance(p_v, (BTS, 0)) + + # # rescale interchunk output + tl.debug_barrier() + o_q = tl.arange(0, BTL) + # # sync threads, easy for compiler to optimize + # tl.debug_barrier() + + o_k = tl.arange(0, BTS) + p_k = tl.make_block_ptr(k + i_bh * s_qk_h, (DK, T), + (s_qk_d, s_qk_t), (i_k * BK, i_c * BTL), (BK, BTS), (0, 1)) + p_v = tl.make_block_ptr(v + i_bh * s_vo_h, (T, DV), + (s_vo_t, s_vo_d), (i_c * BTL, i_v * BV), (BTS, BV), (1, 0)) + # Q block and K block have overlap. 
masks required + for _ in range(i_c * BTL, (i_c + 1) * BTL, BTS): + # [BK, BTS] + b_k = tl.load(p_k, boundary_check=(0, 1)) + # [BTS, BV] + b_v = tl.load(p_v, boundary_check=(0, 1)) + # [BTL, BTS] + m_s = o_q[:, None] >= o_k[None, :] + b_s = tl.dot(b_q, b_k, allow_tf32=False) + b_s = b_s * b_s + b_s = tl.where(m_s, b_s, 0) + b_z += tl.sum(b_s, axis=1) + # [BTL, BV] + b_o += tl.dot(b_s.to(b_q.dtype), b_v, allow_tf32=False) + p_k = tl.advance(p_k, (0, BTS)) + p_v = tl.advance(p_v, (BTS, 0)) + o_k += BTS + + p_o = tl.make_block_ptr(o + (i_bh + B * H * i_k) * s_vo_h, (T, DV), + (s_vo_t, s_vo_d), (i_c*BTL, i_v*BV), (BTL, BV), (1, 0)) + p_z = z + (i_bh + B * H * i_k) * T + i_c * BTL + tl.arange(0, BTL) + tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0, 1)) + tl.store(p_z, b_z.to(p_z.dtype.element_ty), + mask=((i_c * BTL + tl.arange(0, BTL)) < T)) + + +@triton.jit +def _parallel_rebased_bwd_dq( + i_bh, i_c, i_k, i_v, i_h, + q, k, v, do, dz, dq, s_qk_h, s_qk_t, s_qk_d, s_vo_h, + s_vo_t, s_vo_d, B, H, T, scale, + BTL: tl.constexpr, BTS: tl.constexpr, BK: tl.constexpr, BV: tl.constexpr, + DK: tl.constexpr, DV: tl.constexpr, +): + p_do = tl.make_block_ptr(do + i_bh * s_vo_h, (T, DV), (s_vo_t, s_vo_d), + (i_c * BTL, i_v * BV), (BTL, BV), (1, 0)) + p_q = tl.make_block_ptr(q + (i_bh) * s_qk_h, (T, DK), + (s_qk_t, s_qk_d), (i_c*BTL, i_k*BK), (BTL, BK), (1, 0)) + b_q = tl.load(p_q, boundary_check=(0, 1)) + b_do = tl.load(p_do, boundary_check=(0, 1)).to(b_q.dtype) + b_q = (b_q * scale).to(b_q.dtype) + b_dq = tl.zeros([BTL, BK], dtype=tl.float32) + p_k = tl.make_block_ptr(k + i_bh * s_qk_h, (T, DK), + (s_qk_t, s_qk_d), (0, i_k * BK), (BTS, BK), (1, 0)) + p_v = tl.make_block_ptr(v + i_bh * s_vo_h, (DV, T), + (s_vo_d, s_vo_t), (i_v * BV, 0), (BV, BTS), (0, 1)) + p_dz = dz + i_bh * T + i_c * BTL + tl.arange(0, BTL) + b_dz = tl.load(p_dz, mask=(i_c * BTL + tl.arange(0, BTL)) < T) + + for _ in range(0, i_c * BTL, BTS): + # [BTS, BK] + b_k = tl.load(p_k, boundary_check=(0, 
1)) + # [BV, BTS] + b_v = tl.load(p_v, boundary_check=(0, 1)) + # [BTL, BTS] + b_ds = tl.dot(b_do, b_v, allow_tf32=False) + if i_v == 0: + b_ds += b_dz[:, None] + else: + b_ds = b_ds + b_s = tl.dot(b_q, tl.trans(b_k), allow_tf32=False) + # [BQ, BD] + b_dq += tl.dot((2 * b_ds * b_s).to(b_v.dtype), b_k, allow_tf32=False) + p_k = tl.advance(p_k, (BTS, 0)) + p_v = tl.advance(p_v, (0, BTS)) + + b_dq *= scale + o_q = tl.arange(0, BTL) + o_k = tl.arange(0, BTS) + p_k = tl.make_block_ptr(k + i_bh * s_qk_h, (T, DK), + (s_qk_t, s_qk_d), (i_c * BTL, i_k * BK), (BTS, BK), (1, 0)) + p_v = tl.make_block_ptr(v + i_bh * s_vo_h, (DV, T), + (s_vo_d, s_vo_t), (i_v * BV, i_c * BTL), (BV, BTS), (0, 1)) + # Q block and K block have overlap. masks required + for _ in range(i_c * BTL, (i_c + 1) * BTL, BTS): + # [BTS, BK] + b_k = tl.load(p_k, boundary_check=(0, 1)) + # [BV, BTS] + b_v = tl.load(p_v, boundary_check=(0, 1)) + # [BTL, BTS] + m_s = o_q[:, None] >= o_k[None, :] + b_ds = tl.dot(b_do, b_v, allow_tf32=False) + if i_v == 0: + b_ds += b_dz[:, None] + else: + b_ds = b_ds + b_ds = tl.where(m_s, b_ds, 0) * scale + b_s = tl.dot(b_q, tl.trans(b_k), allow_tf32=False) + b_s = tl.where(m_s, b_s, 0) + # [BTL, BK] + b_dq += tl.dot((2 * b_ds * b_s).to(b_k.dtype), + b_k, allow_tf32=False) + p_k = tl.advance(p_k, (BTS, 0)) + p_v = tl.advance(p_v, (0, BTS)) + o_k += BTS + p_dq = tl.make_block_ptr(dq + (i_bh + B * H * i_v) * s_qk_h, (T, DK), + (s_qk_t, s_qk_d), (i_c*BTL, i_k*BK), (BTL, BK), (1, 0)) + tl.store(p_dq, b_dq.to(p_dq.dtype.element_ty), boundary_check=(0, 1)) + return + + +@triton.jit +def _parallel_rebased_bwd_dkv( + i_bh, i_c, i_k, i_v, i_h, + q, k, v, do, dz, dk, dv, s_qk_h, s_qk_t, s_qk_d, s_vo_h, + s_vo_t, s_vo_d, B, H, T, scale, + BTL: tl.constexpr, BTS: tl.constexpr, BK: tl.constexpr, BV: tl.constexpr, + DK: tl.constexpr, DV: tl.constexpr, +): + # compute dk dv + p_k = tl.make_block_ptr(k + i_bh * s_qk_h, (T, DK), (s_qk_t, s_qk_d), + (i_c * BTL, i_k * BK), (BTL, BK), (1, 0)) + p_v 
= tl.make_block_ptr(v + i_bh * s_vo_h, (T, DV), (s_vo_t, s_vo_d), + (i_c * BTL, i_v * BV), (BTL, BV), (1, 0)) + b_k, b_v = tl.load(p_k, boundary_check=(0, 1)), tl.load( + p_v, boundary_check=(0, 1)) + b_dk, b_dv = tl.zeros([BTL, BK], dtype=tl.float32), tl.zeros( + [BTL, BV], dtype=tl.float32) + + for i in range((tl.cdiv(T, BTS) * BTS)-BTS, (i_c + 1) * BTL - BTS, -BTS): + p_q = tl.make_block_ptr( + q + i_bh * s_qk_h, (DK, T), (s_qk_d, s_qk_t), (i_k * BK, i), (BK, BTS), (0, 1)) + p_do = tl.make_block_ptr( + do + i_bh * s_vo_h, (DV, T), (s_vo_d, s_vo_t), (i_v * BV, i), (BV, BTS), (0, 1)) + p_dz = dz + i_bh * T + i + tl.arange(0, BTS) + b_q = tl.load(p_q, boundary_check=(0, 1)) # [BK, BTS] + b_do = tl.load(p_do, boundary_check=(0, 1)).to(b_q.dtype) # [BV, BTS] + b_dz = tl.load(p_dz, mask=(i + tl.arange(0, BTS)) < T) + b_s = tl.dot(b_k.to(b_q.dtype), b_q, allow_tf32=False) * \ + scale # [BTL, BTS] + b_s2 = b_s * b_s + b_dv += tl.dot(b_s2.to(b_q.dtype), tl.trans(b_do), allow_tf32=False) + b_ds = tl.dot(b_v, b_do, allow_tf32=False) * scale + if i_v == 0: + b_ds += b_dz[None, :] * scale + else: + b_ds = b_ds + b_dk += tl.dot((2 * b_ds * b_s).to(b_q.dtype), + tl.trans(b_q), allow_tf32=False) + + tl.debug_barrier() + o_q, o_k = tl.arange(0, BTS), tl.arange(0, BTL) + for i in range(i_c*BTL, (i_c+1)*BTL, BTS): + p_q = tl.make_block_ptr( + q + i_bh * s_qk_h, (DK, T), (s_qk_d, s_qk_t), (i_k * BK, i), (BK, BTS), (0, 1)) + p_do = tl.make_block_ptr( + do + i_bh * s_vo_h, (DV, T), (s_vo_d, s_vo_t), (i_v * BV, i), (BV, BTS), (0, 1)) + p_dz = dz + i_bh * T + i + tl.arange(0, BTS) + b_q = tl.load(p_q, boundary_check=(0, 1)) # [BD, BQ] + b_do = tl.load(p_do, boundary_check=(0, 1)).to(b_q.dtype) + b_dz = tl.load(p_dz, mask=(i + tl.arange(0, BTS)) < T) + # [BK, BQ] + m_s = o_k[:, None] <= o_q[None, :] + b_s = tl.dot(b_k, b_q, allow_tf32=False) * scale + b_s2 = b_s * b_s + b_s = tl.where(m_s, b_s, 0) + b_s2 = tl.where(m_s, b_s2, 0) + + b_ds = tl.dot(b_v, b_do, allow_tf32=False) + if i_v == 
0: + b_ds += b_dz[None, :] + else: + b_ds = b_ds + b_ds = tl.where(m_s, b_ds, 0) * scale + # [BK, BD] + b_dv += tl.dot(b_s2.to(b_q.dtype), tl.trans(b_do), allow_tf32=False) + b_dk += tl.dot((2 * b_ds * b_s).to(b_q.dtype), + tl.trans(b_q), allow_tf32=False) + o_q += BTS + + p_dk = tl.make_block_ptr(dk + (i_bh + B * H * i_v) * s_qk_h, + (T, DK), (s_qk_t, s_qk_d), (i_c*BTL, i_k*BK), (BTL, BK), (1, 0)) + p_dv = tl.make_block_ptr(dv + (i_bh + B * H * i_k) * s_vo_h, + (T, DV), (s_vo_t, s_vo_d), (i_c*BTL, i_v*BV), (BTL, BV), (1, 0)) + tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), boundary_check=(0, 1)) + tl.store(p_dv, b_dv.to(p_dv.dtype.element_ty), boundary_check=(0, 1)) + return + + +@triton.jit +def parallel_rebased_bwd_kernel( + q, k, v, do, dz, dq, dk, dv, s_qk_h, s_qk_t, s_qk_d, s_vo_h, + s_vo_t, s_vo_d, B, H, T, scale, + BTL: tl.constexpr, BTS: tl.constexpr, BK: tl.constexpr, BV: tl.constexpr, + DK: tl.constexpr, DV: tl.constexpr, +): + i_kv, i_c, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + NV = tl.cdiv(DV, BV) + i_k = i_kv // (NV) + i_v = i_kv % (NV) + i_h = i_bh % H + _parallel_rebased_bwd_dq( + i_bh, i_c, i_k, i_v, i_h, + q, k, v, do, dz, dq, s_qk_h, s_qk_t, s_qk_d, s_vo_h, + s_vo_t, s_vo_d, B, H, T, scale, BTL=BTL, BTS=BTS, BK=BK, BV=BV, DK=DK, DV=DV + ) + tl.debug_barrier() + _parallel_rebased_bwd_dkv( + i_bh, i_c, i_k, i_v, i_h, + q, k, v, do, dz, dk, dv, s_qk_h, s_qk_t, s_qk_d, s_vo_h, + s_vo_t, s_vo_d, B, H, T, scale, BTL, BTS, BK, BV, DK, DV + ) + + +class ParallelBasedFunction(torch.autograd.Function): + @staticmethod + @contiguous + @custom_fwd + def forward(ctx, q, k, v, scale): + BTL, BTS = 128, 32 + assert BTL % BTS == 0 + # assert q.shape[-1] % 16 == 0 + BK = min(128, triton.next_power_of_2(k.shape[-1])) + BV = min(128, triton.next_power_of_2(v.shape[-1])) + BK, BV = max(BK, 16), max(BV, 16) + batch_size, n_heads, seq_len, d_head_qk = q.shape + d_head_v = v.shape[-1] + num_stages = 2 + num_warps = 4 + NK = triton.cdiv(d_head_qk, 
BK) + NV = triton.cdiv(d_head_v, BV) + grid = (NK * NV, triton.cdiv(seq_len, BTL), batch_size * n_heads) + + assert NK == 1, "will encounter some synchronization issue if not." + + o = torch.empty(NK, batch_size, n_heads, seq_len, + d_head_v, device=q.device) + z = torch.empty(NK, batch_size, n_heads, seq_len, + device=q.device) + parallel_rebased_fwd_kernel[grid]( + q, k, v, o, z, + q.stride(1), q.stride(2), q.stride(3), + v.stride(1), v.stride(2), v.stride(3), + batch_size, n_heads, seq_len, scale, + BTL=BTL, BTS=BTS, BK=BK, BV=BV, DK=d_head_qk, DV=d_head_v, + num_warps=num_warps, + num_stages=num_stages + ) + ctx.save_for_backward(q, k, v) + ctx.scale = scale + return o.sum(0).to(q.dtype), z.sum(0).to(q.dtype) + + @staticmethod + @custom_bwd + @contiguous + def backward(ctx, do, dz): + q, k, v = ctx.saved_tensors + scale = ctx.scale + BTL, BTS = 64, 32 + assert BTL % BTS == 0 + BK = min(128, triton.next_power_of_2(k.shape[-1])) + BV = min(128, triton.next_power_of_2(v.shape[-1])) + BK, BV = max(BK, 16), max(BV, 16) + batch_size, n_heads, seq_len, d_head_qk = q.shape + d_head_v = v.shape[-1] + num_stages = 2 + num_warps = 4 + NK = triton.cdiv(d_head_qk, BK) + NV = triton.cdiv(d_head_v, BV) + grid = (NK * NV, triton.cdiv(seq_len, BTL), batch_size * n_heads) + + assert NK == 1, "will encounter some synchronization issue if not" + + dq = torch.empty(NV, batch_size, n_heads, seq_len, + d_head_qk, dtype=q.dtype, device=q.device) + dk = torch.empty(NV, batch_size, n_heads, seq_len, + d_head_qk, dtype=q.dtype, device=q.device) + dv = torch.empty(NK, batch_size, n_heads, seq_len, + d_head_v, dtype=q.dtype, device=q.device) + + parallel_rebased_bwd_kernel[grid]( + q, k, v, do, dz, dq, dk, dv, + q.stride(1), q.stride(2), q.stride(3), + v.stride(1), v.stride(2), v.stride(3), + batch_size, n_heads, seq_len, scale, + BTL=BTL, BTS=BTS, BK=BK, BV=BV, DK=d_head_qk, DV=d_head_v, + num_warps=num_warps, + num_stages=num_stages + ) + + return dq.sum(0).to(q.dtype), 
dk.sum(0).to(k.dtype), dv.sum(0).to(v.dtype), None + + +triton_parallel_based = ParallelBasedFunction.apply + + +def parallel_rebased(q, k, v, eps=1e-5, use_scale=True, use_normalize=True, return_both=False): + assert q.shape[-1] <= 128, "only support feature dim up to 128" + if use_scale: + scale = q.shape[-1] ** -0.5 + else: + scale = 1 + o, z = triton_parallel_based(q, k, v, scale) + if return_both: + return o, z + if use_normalize: + o = o / (z[..., None] + eps) + else: + o = o + return o.to(q.dtype) diff --git a/fla/ops/retention/__init__.py b/fla/ops/retention/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..b7f29d7fbf5f36c7a2ba6a3b8c6bfa9f7ea19096 --- /dev/null +++ b/fla/ops/retention/__init__.py @@ -0,0 +1,13 @@ +# -*- coding: utf-8 -*- + +from .chunk import chunk_retention +from .chunk_fuse import fused_chunk_retention +from .parallel import parallel_retention +from .recurrent_fuse import fused_recurrent_retention + +__all__ = [ + 'chunk_retention', + 'fused_chunk_retention', + 'parallel_retention', + 'fused_recurrent_retention' +] diff --git a/fla/ops/retention/chunk.py b/fla/ops/retention/chunk.py new file mode 100644 index 0000000000000000000000000000000000000000..0b162be4a09709eb610be2683a643478b075d8b4 --- /dev/null +++ b/fla/ops/retention/chunk.py @@ -0,0 +1,364 @@ +# -*- coding: utf-8 -*- +# Copyright (c) 2023, Yu Zhang, Songlin Yang + +from typing import Tuple + +import torch +import triton +import triton.language as tl +from torch.cuda.amp import custom_bwd, custom_fwd + +from fla.utils import contiguous + + +@triton.jit +def chunk_retention_fwd_kernel_h( + k, + v, + h, + initial_state, # initial state of the chunk [B, H, D_head_K, D_head_V] + final_state, # final state of the chunk [B, H, D_head_K, D_head_V] + s_qk_h, + s_qk_t, + s_qk_d, + s_vo_h, + s_vo_t, + s_vo_d, + s_h_h, + s_h_t, + H: tl.constexpr, + T: tl.constexpr, + K: tl.constexpr, + V: tl.constexpr, + BT: tl.constexpr, + BK: tl.constexpr, + BV: 
tl.constexpr, + NT: tl.constexpr, + USE_INITIAL_STATE: tl.constexpr, + STORE_FINAL_STATE: tl.constexpr +): + i_k, i_v, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + i_h = i_bh % H + b_b = tl.math.log2(1 - tl.math.pow(2, -5 - i_h * 1.0)) + + o_i = tl.arange(0, BT) + d_b, d_i = tl.math.exp2(BT * b_b), tl.math.exp2((BT - o_i - 1) * b_b) + # [BK, BV] + b_h = tl.zeros([BK, BV], dtype=tl.float32) + + if USE_INITIAL_STATE: + p_h0 = tl.make_block_ptr(initial_state + i_bh * K * V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + b_h = tl.load(p_h0, boundary_check=(0, 1)).to(tl.float32) + + for i_t in range(NT): + p_k = tl.make_block_ptr(k + i_bh * s_qk_h, (K, T), (s_qk_d, s_qk_t), (i_k * BK, i_t * BT), (BK, BT), (0, 1)) + p_v = tl.make_block_ptr(v + i_bh * s_vo_h, (T, V), (s_vo_t, s_vo_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + p_h = tl.make_block_ptr(h + i_bh * s_h_h + i_t * K * V, (K, V), (s_h_t, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + + tl.store(p_h, b_h.to(p_h.dtype.element_ty), boundary_check=(0, 1)) + # [BK, BT] + b_k = tl.load(p_k, boundary_check=(0, 1)) + # [BT, BV] + b_v = tl.load(p_v, boundary_check=(0, 1)) + # [BK, BV] + if i_t == NT - 1 and (T % BT) != 0: + d_b = tl.math.exp2((T % BT) * b_b) + d_i = tl.math.exp2(((T % BT) - o_i - 1) * b_b) + b_h = d_b * b_h + tl.dot(b_k, (b_v * d_i[:, None]).to(b_k.dtype), allow_tf32=False) + + if STORE_FINAL_STATE: + p_ht = tl.make_block_ptr(final_state + i_bh * K * V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + tl.store(p_ht, b_h.to(p_ht.dtype.element_ty), boundary_check=(0, 1)) + + +@triton.jit +def chunk_retention_fwd_kernel_o( + q, + k, + v, + h, + o, + s_qk_h, + s_qk_t, + s_qk_d, + s_vo_h, + s_vo_t, + s_vo_d, + s_h_h, + s_h_t, + scale, + H: tl.constexpr, + T: tl.constexpr, + K: tl.constexpr, + V: tl.constexpr, + BT: tl.constexpr, + BK: tl.constexpr, + BV: tl.constexpr +): + i_v, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + i_h = i_bh % H + b_b = 
tl.math.log2(1 - tl.math.pow(2, -5 - i_h * 1.0)) + + o_i = tl.arange(0, BT) + d_i = tl.math.exp2((o_i + 1) * b_b) + m_s = o_i[:, None] >= o_i[None, :] + d_s = tl.where(m_s, tl.math.exp2((o_i[:, None] - o_i[None, :]) * b_b), 0) + + b_o = tl.zeros([BT, BV], dtype=tl.float32) + b_s = tl.zeros([BT, BT], dtype=tl.float32) + for i_k in range(tl.cdiv(K, BK)): + p_q = tl.make_block_ptr(q + i_bh * s_qk_h, (T, K), (s_qk_t, s_qk_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + p_k = tl.make_block_ptr(k + i_bh * s_qk_h, (K, T), (s_qk_d, s_qk_t), (i_k * BK, i_t * BT), (BK, BT), (0, 1)) + p_h = tl.make_block_ptr(h + i_bh * s_h_h + i_t * K * V, (K, V), (s_h_t, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + # [BT, BK] + b_q = tl.load(p_q, boundary_check=(0, 1)) + # [BK, BT] + b_k = tl.load(p_k, boundary_check=(0, 1)) + # [BK, BV] + b_h = tl.load(p_h, boundary_check=(0, 1)) + b_o += tl.dot((b_q * d_i[:, None]).to(b_q.dtype), b_h, allow_tf32=False) + b_s += tl.dot(b_q, b_k, allow_tf32=False) + + b_s *= d_s + p_v = tl.make_block_ptr(v + i_bh * s_vo_h, (T, V), (s_vo_t, s_vo_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + b_v = tl.load(p_v, boundary_check=(0, 1)) + b_o = (b_o + tl.dot(b_s.to(b_v.dtype), b_v, allow_tf32=False)) * scale + p_o = tl.make_block_ptr(o + i_bh * s_vo_h, (T, V), (s_vo_t, s_vo_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0, 1)) + + +@triton.jit +def chunk_retention_bwd_kernel_dh( + q, + do, + dh, + s_qk_h, + s_qk_t, + s_qk_d, + s_vo_h, + s_vo_t, + s_vo_d, + s_h_h, + s_h_t, + scale, + H: tl.constexpr, + T: tl.constexpr, + K: tl.constexpr, + V: tl.constexpr, + BT: tl.constexpr, + BK: tl.constexpr, + BV: tl.constexpr, + NT: tl.constexpr +): + i_k, i_v, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + i_h = i_bh % H + b_b = tl.math.log2(1 - tl.math.pow(2, -5 - i_h * 1.0)) + + o_i = tl.arange(0, BT) + d_b, d_i = tl.math.exp2(BT * b_b), tl.math.exp2((o_i + 1) * b_b) + # [BK, BV] + b_dh = tl.zeros([BK, 
BV], dtype=tl.float32) + for i_t in range(NT - 1, -1, -1): + p_q = tl.make_block_ptr(q + i_bh * s_qk_h, (K, T), (s_qk_d, s_qk_t), (i_k * BK, i_t * BT), (BK, BT), (0, 1)) + p_do = tl.make_block_ptr(do + i_bh * s_vo_h, (T, V), (s_vo_t, s_vo_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + p_dh = tl.make_block_ptr(dh + i_bh * s_h_h + i_t * K * V, (K, V), (s_h_t, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + + tl.store(p_dh, b_dh.to(p_dh.dtype.element_ty), boundary_check=(0, 1)) + # [BK, BT] + b_q = tl.load(p_q, boundary_check=(0, 1)) + b_q = (b_q * scale).to(b_q.dtype) + # [BT, V] + b_do = tl.load(p_do, boundary_check=(0, 1)) + # [BK, BV] + b_dh = d_b * b_dh + tl.dot(b_q, (b_do * d_i[:, None]).to(b_q.dtype), allow_tf32=False) + + +@triton.jit +def chunk_retention_bwd_kernel_dqkv( + q, + k, + v, + h, + do, + dh, + dq, + dk, + dv, + s_qk_h, + s_qk_t, + s_qk_d, + s_vo_h, + s_vo_t, + s_vo_d, + s_h_h, + s_h_t, + scale, + H: tl.constexpr, + T: tl.constexpr, + K: tl.constexpr, + V: tl.constexpr, + BT: tl.constexpr, + BK: tl.constexpr, + BV: tl.constexpr, + NT: tl.constexpr +): + i_k, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + i_h = i_bh % H + n_bh = tl.num_programs(2) + b_b = tl.math.log2(1 - tl.math.pow(2, -5 - i_h * 1.0)) + + o_i = tl.arange(0, BT) + d_q, d_k = tl.math.exp2((o_i + 1) * b_b), tl.math.exp2((BT - o_i - 1) * b_b) + d_q = (d_q * scale).to(d_q.dtype) + m_s = o_i[:, None] >= o_i[None, :] + d_s = tl.where(m_s, tl.math.exp2((o_i[:, None] - o_i[None, :]) * b_b), 0) * scale + + p_q = tl.make_block_ptr(q + i_bh * s_qk_h, (K, T), (s_qk_d, s_qk_t), (i_k * BK, i_t * BT), (BK, BT), (0, 1)) + p_k = tl.make_block_ptr(k + i_bh * s_qk_h, (T, K), (s_qk_t, s_qk_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + + b_q = tl.load(p_q, boundary_check=(0, 1)) + b_k = tl.load(p_k, boundary_check=(0, 1)) + b_s = tl.dot(b_k, b_q, allow_tf32=False) * tl.trans(d_s) + + b_dq = tl.zeros([BT, BK], dtype=tl.float32) + b_dk = tl.zeros([BT, BK], dtype=tl.float32) + b_ds = 
tl.zeros([BT, BT], dtype=tl.float32) + for i_v in range(tl.cdiv(V, BV)): + p_v = tl.make_block_ptr(v + i_bh * s_vo_h, (T, V), (s_vo_t, s_vo_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + p_h = tl.make_block_ptr(h + i_bh * s_h_h, (V, NT * K), (1, s_h_t), (i_v * BV, i_t * K + i_k * BK), (BV, BK), (0, 1)) + p_do = tl.make_block_ptr(do + i_bh * s_vo_h, (T, V), (s_vo_t, s_vo_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + p_dh = tl.make_block_ptr(dh + i_bh * s_h_h, (NT * K, V), (s_h_t, 1), (i_t * K + i_k * BK, i_v * BV), (BK, BV), (1, 0)) + p_dv = tl.make_block_ptr(dv + (i_k*n_bh+i_bh)*s_vo_h, (T, V), (s_vo_t, s_vo_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + # [BT, BV] + b_v = tl.load(p_v, boundary_check=(0, 1)) + b_do = tl.load(p_do, boundary_check=(0, 1)) + # [BV, BK] + b_h = tl.load(p_h, boundary_check=(0, 1)) + # [BK, BV] + b_dh = tl.load(p_dh, boundary_check=(0, 1)) + + # [BT, BT] + b_ds += tl.dot(b_do, tl.trans(b_v), allow_tf32=False) + # [BT, BK] + b_dq += tl.dot(b_do, b_h, allow_tf32=False) + b_dk += tl.dot(b_v, tl.trans(b_dh), allow_tf32=False) + # [BT, BV] + b_dv = tl.dot(b_k, b_dh, allow_tf32=False) * d_k[:, None] + tl.dot(b_s.to(b_q.dtype), b_do, allow_tf32=False) + tl.store(p_dv, b_dv.to(p_dv.dtype.element_ty), boundary_check=(0, 1)) + # [BT, BT] + b_ds = (b_ds * d_s).to(b_q.dtype) + # [BT, BK] + b_dq = b_dq * d_q[:, None] + tl.dot(b_ds, b_k, allow_tf32=False) + b_dk = b_dk * d_k[:, None] + tl.trans(tl.dot(b_q, b_ds, allow_tf32=False)) + + p_dq = tl.make_block_ptr(dq + i_bh * s_qk_h, (T, K), (s_qk_t, s_qk_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + p_dk = tl.make_block_ptr(dk + i_bh * s_qk_h, (T, K), (s_qk_t, s_qk_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + tl.store(p_dq, b_dq.to(p_dq.dtype.element_ty), boundary_check=(0, 1)) + tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), boundary_check=(0, 1)) + + +class ChunkRetentionFunction(torch.autograd.Function): + + @staticmethod + @custom_fwd + @contiguous + def forward(ctx, q, k, v, initial_state, 
output_final_state): + B, H, T, K, V = *q.shape, v.shape[-1] + BT = 64 + BK, BV = min(64, triton.next_power_of_2(K)), min(64, triton.next_power_of_2(V)) + NT, NK, NV = triton.cdiv(T, BT), triton.cdiv(K, BK), triton.cdiv(V, BV) + num_stages = 1 + num_warps = 4 if BK == 64 else 2 + scale = K ** -0.5 + + final_state = None + if output_final_state: + final_state = q.new_empty(B, H, K, V, dtype=torch.float32, requires_grad=False) + + h = q.new_empty(B, H, NT * K, V) + grid = (NK, NV, B * H) + chunk_retention_fwd_kernel_h[grid]( + k, v, h, initial_state, final_state, + q.stride(1), q.stride(2), q.stride(3), + v.stride(1), v.stride(2), v.stride(3), + h.stride(1), h.stride(2), + H=H, T=T, K=K, V=V, BT=BT, BK=BK, BV=BV, NT=NT, + USE_INITIAL_STATE=initial_state is not None, + STORE_FINAL_STATE=output_final_state, + num_warps=num_warps, + num_stages=num_stages + ) + grid = (NV, NT, B * H) + o = torch.empty_like(v) + chunk_retention_fwd_kernel_o[grid]( + q, k, v, h, o, + q.stride(1), q.stride(2), q.stride(3), + v.stride(1), v.stride(2), v.stride(3), + h.stride(1), h.stride(2), + scale, + H=H, T=T, K=K, V=V, BT=BT, BK=BK, BV=BV, + num_warps=num_warps, + num_stages=num_stages + ) + + ctx.save_for_backward(q, k, v, h) + return o.to(q.dtype), final_state + + @staticmethod + @custom_bwd + @contiguous + def backward(ctx, do, d_ht=None): + q, k, v, h = ctx.saved_tensors + + B, H, T, K, V = *q.shape, v.shape[-1] + BT = 64 + BK, BV = min(64, triton.next_power_of_2(K)), min(64, triton.next_power_of_2(V)) + NT, NK, NV = triton.cdiv(T, BT), triton.cdiv(K, BK), triton.cdiv(V, BV) + num_stages = 1 + num_warps = 4 if BK == 64 else 2 + scale = K ** -0.5 + + dh = q.new_empty(B, H, NT * K, V) + grid = (NK, NV, B * H) + chunk_retention_bwd_kernel_dh[grid]( + q, do, dh, + q.stride(1), q.stride(2), q.stride(3), + v.stride(1), v.stride(2), v.stride(3), + dh.stride(1), dh.stride(2), + scale, + H=H, T=T, K=K, V=V, BT=BT, BK=BK, BV=BV, NT=NT, + num_warps=num_warps, + num_stages=num_stages + ) + + grid 
= (NK, NT, B * H) + dq = torch.empty_like(q) + dk = torch.empty_like(k) + dv = v.new_empty(NK, *v.shape) + num_stages = 1 + num_warps = 4 if BK == 64 else 2 + chunk_retention_bwd_kernel_dqkv[grid]( + q, k, v, h, do, dh, dq, dk, dv, + q.stride(1), q.stride(2), q.stride(3), + v.stride(1), v.stride(2), v.stride(3), + dh.stride(1), dh.stride(2), + scale, + H=H, T=T, K=K, V=V, BT=BT, BK=BK, BV=BV, NT=NT, + num_warps=num_warps, + num_stages=num_stages + ) + dv = dv.sum(0) + return dq.to(q.dtype), dk.to(k.dtype), dv.to(v.dtype), None, None + + +def chunk_retention( + q: torch.Tensor, + k: torch.Tensor, + v: torch.Tensor, + initial_state: torch.Tensor = None, + output_final_state: bool = False +) -> Tuple[torch.Tensor, torch.Tensor]: + if initial_state is not None: + initial_state = initial_state.detach() + o, final_state = ChunkRetentionFunction.apply(q, k, v, initial_state, output_final_state) + return o, final_state diff --git a/fla/ops/retention/chunk_fuse.py b/fla/ops/retention/chunk_fuse.py new file mode 100644 index 0000000000000000000000000000000000000000..80af7f26f40638c7894765074a129b4ab438f6d3 --- /dev/null +++ b/fla/ops/retention/chunk_fuse.py @@ -0,0 +1,334 @@ +# -*- coding: utf-8 -*- +# Copyright (c) 2023, Yu Zhang, Songlin Yang + +from typing import Tuple + +import torch +import triton +import triton.language as tl +from packaging import version +from torch.cuda.amp import custom_bwd, custom_fwd + +from fla.utils import contiguous + +# on-the-fly computation without materializing hidden states into HBMs + + +@triton.jit +def fused_chunk_retention_fwd_kernel( + # B: batch_size, H: n_heads, T: seq_len, D: d_head + q, # query [B, H, L, D_head_K] + k, # key [B, H, L, D_head_K] + v, # value [B, H, L, D_head_V] + o, # output [B, H, L, D_head_V] + initial_state, # initial state of the chunk [B, H, D_head_K, D_head_V] + final_state, # final state of the chunk [B, H, D_head_K, D_head_V] + s_qk_h, # stride size: L * D_head_K + s_qk_t, # stride size: D_head_K + 
s_qk_d, # stride size: 1 + s_vo_h, # stride size: L * D_head_V + s_vo_t, # stride size: D_head_V + s_vo_d, # stride size: 1 + B, # batch size + H, # n_heads + T, # seq_len + scale, # D_head_K ** -0.5 + BT: tl.constexpr, # BLOCK SIZE along the sequence dimension, a.k.a. chunk size + BK: tl.constexpr, # BLOCK SIZE along the K dimension + BV: tl.constexpr, # BLOCK SIZE along the V dimension + DK: tl.constexpr, # D_head_K + DV: tl.constexpr, # D_head_V + USE_INITIAL_STATE: tl.constexpr, + STORE_FINAL_STATE: tl.constexpr, + CHECK: tl.constexpr +): + # indices + i_v, i_k, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + i_h = i_bh % H + + o_i = tl.arange(0, BT) + # decay rate given the head index + b_b = tl.math.log2(1 - tl.math.pow(2, -5 - i_h * 1.0)) + + # d_b: overall decay for the entire chunk + # d_o: cumulative decay from the start of the chunk + # d_h: cumulative decay from the end of the chunk + d_b, d_o, d_h = tl.math.exp2(BT * b_b), tl.math.exp2((o_i + 1) * b_b), tl.math.exp2((BT - o_i - 1) * b_b) + + # [BT, BT] + m_s = o_i[:, None] >= o_i[None, :] + d_s = tl.where(m_s, tl.math.exp2((o_i[:, None] - o_i[None, :]) * b_b), 0) + # [BK, BV] + b_h = tl.zeros([BK, BV], dtype=tl.float32) + + # make block pointers + p_q = tl.make_block_ptr(q + i_bh * s_qk_h, (T, DK), (s_qk_t, s_qk_d), (0, i_k * BK), (BT, BK), (1, 0)) + p_k = tl.make_block_ptr(k + i_bh * s_qk_h, (DK, T), (s_qk_d, s_qk_t), (i_k * BK, 0), (BK, BT), (0, 1)) + p_v = tl.make_block_ptr(v + i_bh * s_vo_h, (T, DV), (s_vo_t, s_vo_d), (0, i_v * BV), (BT, BV), (1, 0)) + p_o = tl.make_block_ptr(o + (i_bh+i_k*B*H) * s_vo_h, (T, DV), (s_vo_t, s_vo_d), (0, i_v * BV), (BT, BV), (1, 0)) + + if USE_INITIAL_STATE: + p_h = tl.make_block_ptr(initial_state + i_bh * DK * DV, (DK, DV), (DV, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + b_h = tl.load(p_h, boundary_check=(0, 1)).to(tl.float32) + + NT = tl.cdiv(T, BT) + for i in range(0, NT): + # [BK, BT] + b_k = tl.load(p_k, boundary_check=(0, 1)) + # [BT, BV] + b_v 
= tl.load(p_v, boundary_check=(0, 1)) + # [BT, BK] + b_q = tl.load(p_q, boundary_check=(0, 1)) + b_q = (b_q * scale).to(b_k.dtype) + + # [BT, BT] + b_s = tl.dot(b_q, b_k, allow_tf32=False) * d_s + # [BT, BV] + b_o = tl.dot(b_s.to(b_q.dtype), b_v, allow_tf32=False) + if CHECK and i == 0: + b_o += tl.dot(b_q, b_h.to(b_q.dtype), allow_tf32=False) * d_o[:, None] + b_h = d_b * b_h + tl.dot(b_k, (b_v * d_h[:, None]).to(b_k.dtype), allow_tf32=False) + else: + b_o += tl.dot(b_q, b_h.to(b_q.dtype), allow_tf32=False) * d_o[:, None] + if i == NT - 1 and (T % BT) != 0: + d_b = tl.math.exp2((T % BT) * b_b) + d_h = tl.math.exp2(((T % BT) - o_i - 1) * b_b) + b_h = d_b * b_h + tl.dot(b_k, (b_v * d_h[:, None]).to(b_k.dtype), allow_tf32=False) + tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0, 1)) + + p_q = tl.advance(p_q, (BT, 0)) + p_k = tl.advance(p_k, (0, BT)) + p_v = tl.advance(p_v, (BT, 0)) + p_o = tl.advance(p_o, (BT, 0)) + + if STORE_FINAL_STATE: + p_final = tl.make_block_ptr(final_state + i_bh * DK * DV, (DK, DV), (DV, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + tl.store(p_final, b_h.to(p_final.dtype.element_ty), boundary_check=(0, 1)) + + +# Similar to Algorithm1 of https://arxiv.org/abs/2006.16236 +@triton.jit +def fused_chunk_retention_bwd_kernel( + # B: batch_size, H: n_heads, T: seq_len, D: d_head + # NV: number of split in the V dimension. 
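The forward kernel above realizes a chunkwise form of retention: inside each chunk a causal decay-masked attention, across chunks a `[K, V]` state carried with the per-step decay. A minimal PyTorch reference of that recurrence (illustrative only, not the library API; head scaling and mixed precision omitted, and `T` assumed divisible by the chunk size `BT`):

```python
import torch

def chunk_retention_ref(q, k, v, gamma, BT):
    # q, k: [B, H, T, K]; v: [B, H, T, V]; gamma: per-head decay in (0, 1)
    B, H, T, K = q.shape
    V = v.shape[-1]
    o = torch.zeros(B, H, T, V)
    h = torch.zeros(B, H, K, V)          # inter-chunk state
    i = torch.arange(BT).float()
    # D[a, b] = gamma**(a - b) for b <= a, else 0 (the d_s mask in the kernel)
    D = (i[:, None] >= i[None, :]).float() * gamma ** (i[:, None] - i[None, :])
    for t0 in range(0, T, BT):
        qc = q[:, :, t0:t0 + BT]
        kc = k[:, :, t0:t0 + BT]
        vc = v[:, :, t0:t0 + BT]
        # intra-chunk term plus cross-chunk term (decay gamma**(i+1), the d_o factor)
        o[:, :, t0:t0 + BT] = (qc @ kc.transpose(-1, -2) * D) @ vc \
            + (gamma ** (i + 1)).view(-1, 1) * (qc @ h)
        # state update: whole-chunk decay gamma**BT plus gamma**(BT-i-1)-weighted K^T V
        h = gamma ** BT * h + kc.transpose(-1, -2) @ ((gamma ** (BT - i - 1)).view(-1, 1) * vc)
    return o
```

This reference matches the plain token-by-token recurrence `h_t = gamma * h_t-1 + k_t v_t^T`, `o_t = q_t h_t`, which is what makes the chunked kernel checkable against a naive loop.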
NK: number of split in the K dimension + q, # query [B, H, L, D_head_K] + k, # key [B, H, L, D_head_V] + v, # value [B, H, L, D_head_V] + do, # gradient of output [B, H, L, D_head_V] + dq, # gradient of query [NV, B, H, L, D_head_K] + dk, # gradient of key [NV, B, H, L, D_head_K] + dv, # gradient of value [NK, B, H, L, D_head_V] + + initial_state, # initial state of the chunk [B, H, D_head_K, D_head_V] + + s_qk_h, # stride size: L * D_head_K + s_qk_t, # stride size: D_head_K + s_qk_d, # stride size: 1 + s_vo_h, # stride size: L * D_head_V + s_vo_t, # stride size: D_head_V + s_vo_d, # stride size: 1 + B, # batch_size + H, # n_heads + T, # seq_len + scale, # D_head_K ** -0.5 + BT: tl.constexpr, # BLOCK SIZE along the sequence dimension, a.k.a. chunk size + BK: tl.constexpr, # BLOCK SIZE along the K dimension + BV: tl.constexpr, # BLOCK SIZE along the V dimension + DK: tl.constexpr, # D_head_K + DV: tl.constexpr, # D_head_V + USE_INITIAL_STATE: tl.constexpr, + CHECK: tl.constexpr +): + i_v, i_k, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + i_h = i_bh % H + + o_i = tl.arange(0, BT) + b_b = tl.math.log2(1 - tl.math.pow(2, -5 - i_h * 1.0)) + d_q, d_k = tl.math.exp2((o_i+1) * b_b) * scale, tl.math.exp2((BT - o_i - 1) * b_b) + d_b = tl.math.exp2(BT * b_b) + + m_s = o_i[:, None] >= o_i[None, :] + d_s = tl.where(m_s, tl.math.exp2((o_i[:, None] - o_i[None, :]) * b_b), 0) * scale + # [BV, BK] + b_h = tl.zeros([BV, BK], dtype=tl.float32) + if USE_INITIAL_STATE: + p_h = tl.make_block_ptr(initial_state + i_bh * DK * DV, (DV, DK), (1, DV), (i_v * BV, i_k * BK), (BV, BK), (0, 1)) + b_h = tl.load(p_h, boundary_check=(0, 1)).to(tl.float32) + + for i in range(0, tl.cdiv(T, BT)): + p_k = tl.make_block_ptr(k + i_bh * s_qk_h, (T, DK), (s_qk_t, s_qk_d), (i * BT, i_k * BK), (BT, BK), (1, 0)) + p_v = tl.make_block_ptr(v + i_bh * s_vo_h, (DV, T), (s_vo_d, s_vo_t), (i_v * BV, i * BT), (BV, BT), (0, 1)) + p_do = tl.make_block_ptr(do + i_bh * s_vo_h, (T, DV), (s_vo_t, s_vo_d), 
(i * BT, i_v * BV), (BT, BV), (1, 0)) + p_dq = tl.make_block_ptr(dq + (i_bh + i_v*B*H) * s_qk_h, (T, DK), (s_qk_t, s_qk_d), (i*BT, i_k*BK), (BT, BK), (1, 0)) + + # [BT, DK] + b_k = tl.load(p_k, boundary_check=(0, 1)) + # [DV, BT] + b_v = tl.load(p_v, boundary_check=(0, 1)) + # [BT, DV] + b_do = tl.load(p_do, boundary_check=(0, 1)) + b_dd = (b_do * d_q[:, None]).to(b_do.dtype) + + # [BT, BT] + b_ds = tl.dot(b_do, b_v, allow_tf32=False) + b_ds = (b_ds * d_s).to(b_k.dtype) + # [BT, DK] + b_dq = tl.dot(b_ds, b_k, allow_tf32=False) + # [DV, DK] + if CHECK and i == 0: + b_dq += tl.dot(b_dd, b_h.to(b_k.dtype), allow_tf32=False) + b_h = d_b * b_h + tl.dot((b_v * d_k[None, :]).to(b_k.dtype), b_k, allow_tf32=False) + else: + b_dq += tl.dot(b_dd, b_h.to(b_k.dtype), allow_tf32=False) + b_h = d_b * b_h + tl.dot((b_v * d_k[None, :]).to(b_k.dtype), b_k, allow_tf32=False) + + tl.store(p_dq, b_dq.to(p_dq.dtype.element_ty), boundary_check=(0, 1)) + + # sync threads + b_h = None + tl.debug_barrier() + d_s = tl.trans(d_s) + # [BK, BV] + b_dh = tl.zeros([BK, BV], dtype=tl.float32) + for i in range(1, tl.cdiv(T, BT) + 1): + p_q = tl.make_block_ptr(q + i_bh * s_qk_h, (DK, T), (s_qk_d, s_qk_t), (i_k * BK, T - i * BT), (BK, BT), (0, 1)) + p_k = tl.make_block_ptr(k + i_bh * s_qk_h, (T, DK), (s_qk_t, s_qk_d), (T - i * BT, i_k * BK), (BT, BK), (1, 0)) + p_v = tl.make_block_ptr(v + i_bh * s_vo_h, (T, DV), (s_vo_t, s_vo_d), (T - i * BT, i_v * BV), (BT, BV), (1, 0)) + p_do = tl.make_block_ptr(do + i_bh * s_vo_h, (T, DV), (s_vo_t, s_vo_d), (T - i * BT, i_v * BV), (BT, BV), (1, 0)) + p_dk = tl.make_block_ptr(dk + (i_bh+i_v*B*H) * s_qk_h, (T, DK), (s_qk_t, s_qk_d), (T - i*BT, i_k*BK), (BT, BK), (1, 0)) + p_dv = tl.make_block_ptr(dv + (i_bh+i_k*B*H) * s_vo_h, (T, DV), (s_vo_t, s_vo_d), (T - i*BT, i_v*BV), (BT, BV), (1, 0)) + # [DK, BT] + b_q = tl.load(p_q, boundary_check=(0, 1)) + # [BT, DK] + b_k = tl.load(p_k, boundary_check=(0, 1)) + # [BT, DV] + b_v = tl.load(p_v, boundary_check=(0, 1)) + b_do = 
tl.load(p_do, boundary_check=(0, 1)) + b_dd = (b_do * d_q[:, None]).to(b_do.dtype) + + # [BT, BT] + b_ds = tl.dot(b_v, tl.trans(b_do), allow_tf32=False) + b_ds = (b_ds * d_s).to(b_k.dtype) + + # [BT, BT] + b_s = tl.dot(b_k, b_q, allow_tf32=False) * d_s + # [BT, DK] + b_dk = tl.dot(b_ds, tl.trans(b_q), allow_tf32=False) + # [BT, DV] + b_dv = tl.dot(b_s.to(b_q.dtype), b_do, allow_tf32=False) + if CHECK and i == 1: + b_dk += tl.dot(b_v, tl.trans(b_dh).to(b_v.dtype), allow_tf32=False) * d_k[:, None] + b_dv += tl.dot(b_k, b_dh.to(b_k.dtype), allow_tf32=False) * d_k[:, None] + b_dh = d_b * b_dh + tl.dot(b_q, b_dd, allow_tf32=False) + else: + b_dk += tl.dot(b_v, tl.trans(b_dh).to(b_v.dtype), allow_tf32=False) * d_k[:, None] + b_dv += tl.dot(b_k, b_dh.to(b_k.dtype), allow_tf32=False) * d_k[:, None] + b_dh = d_b * b_dh + tl.dot(b_q, b_dd, allow_tf32=False) + + tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), boundary_check=(0, 1)) + tl.store(p_dv, b_dv.to(p_dv.dtype.element_ty), boundary_check=(0, 1)) + + +class FusedChunkRetentionFunction(torch.autograd.Function): + + @staticmethod + @contiguous + @custom_fwd + def forward(ctx, q, k, v, initial_state, output_final_state): + batch_size, n_heads, seq_len, d_head_qk = q.shape + d_head_v = v.shape[-1] + + scale = d_head_qk ** -0.5 + BT = 64 + BK, BV = min(triton.next_power_of_2(d_head_qk), 64), min(triton.next_power_of_2(d_head_v), 64) + NK, NV = triton.cdiv(d_head_qk, BK), triton.cdiv(d_head_v, BV) + num_stages = 1 + num_warps = 4 + + o = q.new_empty(NK, batch_size, n_heads, seq_len, d_head_v) + + if output_final_state: + final_state = q.new_empty(batch_size, n_heads, d_head_qk, d_head_v, dtype=torch.float32, requires_grad=False) + else: + final_state = None + # the bug still exists even for Triton 2.2 on H100 GPUs + # so we always enable initial checks + CHECK = True + if version.parse(triton.__version__) < version.parse('2.2.0'): + import warnings + warnings.warn( + "Triton<2.2.0 detected for running this kernel, " + "which 
is known to have some weird compiler issues (refer to https://github.com/openai/triton/issues/2852) " + "that lead to significant precision loss. " + "We've added some initial condition checks to resolve this, sadly at some cost in speed. " + "For optimal performance, it is recommended to install Triton>=2.2.0 (if possible)." + ) + CHECK = True + + grid = (NV, NK, batch_size * n_heads) + fused_chunk_retention_fwd_kernel[grid]( + q, k, v, o, initial_state, final_state, + q.stride(1), q.stride(2), q.stride(3), + v.stride(1), v.stride(2), v.stride(3), + batch_size, n_heads, seq_len, scale, + BT=BT, DK=d_head_qk, DV=d_head_v, BK=BK, BV=BV, + USE_INITIAL_STATE=initial_state is not None, + STORE_FINAL_STATE=output_final_state, + CHECK=CHECK, + num_warps=num_warps, + num_stages=num_stages + ) + + o = o.sum(0) + ctx.save_for_backward(q, k, v, initial_state) + ctx.CHECK = CHECK + return o.to(q.dtype), final_state + + @staticmethod + @custom_bwd + @contiguous + def backward(ctx, do, d_final_state=None): + q, k, v, initial_state = ctx.saved_tensors + batch_size, n_heads, seq_len, d_head_qk = q.shape + d_head_v = v.shape[-1] + scale = d_head_qk ** -0.5 + + BT = 64 + BK, BV = min(triton.next_power_of_2(d_head_qk), 64), min(triton.next_power_of_2(d_head_v), 64) + NK, NV = triton.cdiv(d_head_qk, BK), triton.cdiv(d_head_v, BV) + num_stages = 1 + num_warps = 4 + + dq = q.new_empty(NV, batch_size, n_heads, seq_len, d_head_qk) + dk = q.new_empty(NV, batch_size, n_heads, seq_len, d_head_qk) + dv = q.new_empty(NK, batch_size, n_heads, seq_len, d_head_v) + grid = (NV, NK, batch_size * n_heads) + + fused_chunk_retention_bwd_kernel[grid]( + q, k, v, do, dq, dk, dv, initial_state, + q.stride(1), q.stride(2), q.stride(3), + v.stride(1), v.stride(2), v.stride(3), + batch_size, n_heads, seq_len, scale, + BT=BT, DK=d_head_qk, DV=d_head_v, BK=BK, BV=BV, + USE_INITIAL_STATE=initial_state is not None, + CHECK=ctx.CHECK, + num_warps=num_warps, + num_stages=num_stages + ) + dq = dq.sum(0) + 
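A note on the split-and-sum pattern used throughout these autograd functions: when the head dimension is tiled into `NK`/`NV` blocks, each block writes a complete partial result (output or gradient) into a leading dimension, and the host reduces with `.sum(0)`. This trades a little extra memory for avoiding cross-block atomic adds inside the kernels. A toy sketch (sizes are assumptions for illustration only):

```python
import torch

# Each of the NV value-dimension splits produces its own full partial dq;
# the true gradient is the elementwise sum over those partials, mirroring
# `dq = dq.sum(0)` in the backward pass above.
NV, B, H, T, DK = 2, 1, 2, 8, 16            # hypothetical toy sizes
partial_dq = torch.randn(NV, B, H, T, DK)   # stand-in for per-split kernel output
dq = partial_dq.sum(0)
assert dq.shape == (B, H, T, DK)
```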
dk = dk.sum(0) + dv = dv.sum(0) + return dq.to(q.dtype), dk.to(k.dtype), dv.to(v.dtype), None, None + + +def fused_chunk_retention( + q: torch.Tensor, + k: torch.Tensor, + v: torch.Tensor, + initial_state: torch.Tensor = None, + output_final_state: bool = False +) -> Tuple[torch.Tensor, torch.Tensor]: + if initial_state is not None: + initial_state = initial_state.detach() + o, final_state = FusedChunkRetentionFunction.apply(q, k, v, initial_state, output_final_state) + return o, final_state diff --git a/fla/ops/retention/naive.py b/fla/ops/retention/naive.py new file mode 100644 index 0000000000000000000000000000000000000000..15611bf649779d2d956d2ab390b7d72dbb12201d --- /dev/null +++ b/fla/ops/retention/naive.py @@ -0,0 +1,15 @@ +# -*- coding: utf-8 -*- + +import torch + + +def naive_retention(q, k, v): + orig_type = q.dtype + q, k, v = q.float(), k.float(), v.float() + _, n_heads, seq_len, d_head = q.shape + s = (1 - q.new_tensor(2., dtype=torch.float).pow(-5. - q.new_tensor(range(n_heads), dtype=torch.float))).log2() + n = q.new_tensor(range(seq_len), dtype=torch.float) + n = torch.exp2((n.unsqueeze(-1) - n) * s.view(-1, 1, 1)) * n.unsqueeze(-1).ge(n) + s = torch.einsum('bhqd,bhkd,hqk->bhqk', q * d_head ** -0.5, k, n.to(q.dtype)) + o = torch.einsum('bhqk,bhkd->bhqd', s, v) + return o.to(orig_type) diff --git a/fla/ops/retention/parallel.py b/fla/ops/retention/parallel.py new file mode 100644 index 0000000000000000000000000000000000000000..18c72684992c95cab5558fbb5ca524cb3b79cbca --- /dev/null +++ b/fla/ops/retention/parallel.py @@ -0,0 +1,339 @@ +# -*- coding: utf-8 -*- +# Copyright (c) 2023, Yu Zhang, Songlin Yang + +import torch +import triton +import triton.language as tl +from torch.cuda.amp import custom_bwd, custom_fwd + +from fla.utils import contiguous + + +@triton.jit +def parallel_retention_fwd_kernel( + # B: batch_size, H: n_heads, T: seq_len, D: d_head + q, # query [B, H, L, D_head_K] + k, # key [B, H, L, D_head_V] + v, # value [B, H, L, D_head_V] + 
o, # output [B, H, L, D_head_V] + s_qk_h, # stride size: L * D_head_K + s_qk_t, # stride size: D_head_K + s_qk_d, # stride size: 1 + s_vo_h, # stride size: L * D_head_V + s_vo_t, # stride size: D_head_V + s_vo_d, # stride size: 1 + B, # batch size + H, # n_heads + T, # seq_len + scale, # D_head_K ** -0.5 + BTL: tl.constexpr, # BLOCK SIZE along the sequence dimension for Q + BTS: tl.constexpr, # BLOCK SIZE along the sequence dimension for K/V + BK: tl.constexpr, # BLOCK SIZE along the K dimension + BV: tl.constexpr, # BLOCK SIZE along the V dimension + DK: tl.constexpr, # D_head_K + DV: tl.constexpr, # D_head_V +): + # i_c: chunk index. used for sequence parallelism + i_kv, i_c, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + NV = tl.cdiv(DV, BV) + i_k = i_kv // (NV) + i_v = i_kv % (NV) + i_h = i_bh % H + # decay rate given the head index + b_b = tl.math.log2(1 - tl.math.pow(2, -5 - i_h * 1.0)) + # cumulative decay from the end of the chunk + o_k = tl.arange(0, BTS) + d_h = tl.math.exp2((BTS - o_k) * b_b) + + p_q = tl.make_block_ptr(q + i_bh * s_qk_h, (T, DK), + (s_qk_t, s_qk_d), (i_c * BTL, i_k * BK), (BTL, BK), (1, 0)) + p_k = tl.make_block_ptr(k + i_bh * s_qk_h, (DK, T), + (s_qk_d, s_qk_t), (i_k * BK, 0), (BK, BTS), (0, 1)) + p_v = tl.make_block_ptr(v + i_bh * s_vo_h, (T, DV), + (s_vo_t, s_vo_d), (0, i_v * BV), (BTS, BV), (1, 0)) + + # [BQ, BD] block Q, in the shared memory throughout the whole kernel + b_q = tl.load(p_q, boundary_check=(0, 1)) + b_q = (b_q * scale).to(b_q.dtype) + b_o = tl.zeros([BTL, BV], dtype=tl.float32) + + # Q block and K block have no overlap + # no need for mask, thereby saving flops + for _ in range(0, i_c * BTL, BTS): + # [BK, BTS] + b_k = tl.load(p_k, boundary_check=(0, 1)) + # [BTS, BV] + b_v = tl.load(p_v, boundary_check=(0, 1)) + # [BTL, BTS] + b_s = tl.dot(b_q, (b_k), allow_tf32=False) * d_h[None, :] + # [BQ, BD] + b_o = b_o * tl.math.exp2(b_b * BTS) + b_o = b_o + tl.dot(b_s.to(b_v.dtype), b_v, allow_tf32=False) + p_k 
= tl.advance(p_k, (0, BTS)) + p_v = tl.advance(p_v, (BTS, 0)) + + # # rescale interchunk output + tl.debug_barrier() + o_q = tl.arange(0, BTL) + d_q = tl.math.exp2(tl.arange(0, BTL) * b_b) + b_o *= d_q[:, None] + # # sync threads, easy for compiler to optimize + # tl.debug_barrier() + + o_k = tl.arange(0, BTS) + p_k = tl.make_block_ptr(k + i_bh * s_qk_h, (DK, T), + (s_qk_d, s_qk_t), (i_k * BK, i_c * BTL), (BK, BTS), (0, 1)) + p_v = tl.make_block_ptr(v + i_bh * s_vo_h, (T, DV), + (s_vo_t, s_vo_d), (i_c * BTL, i_v * BV), (BTS, BV), (1, 0)) + # Q block and K block have overlap. masks required + for _ in range(i_c * BTL, (i_c + 1) * BTL, BTS): + # [BK, BTS] + b_k = tl.load(p_k, boundary_check=(0, 1)) + # [BTS, BV] + b_v = tl.load(p_v, boundary_check=(0, 1)) + # [BTL, BTS] + m_s = o_q[:, None] >= o_k[None, :] + d_s = tl.where(m_s, tl.math.exp2( + (o_q[:, None] - o_k[None, :]) * b_b), 0) + b_s = tl.dot(b_q, b_k, allow_tf32=False) * d_s + # [BTL, BV] + b_o += tl.dot(b_s.to(b_q.dtype), b_v, allow_tf32=False) + + p_k = tl.advance(p_k, (0, BTS)) + p_v = tl.advance(p_v, (BTS, 0)) + o_k += BTS + + p_o = tl.make_block_ptr(o + (i_bh + B * H * i_k) * s_vo_h, (T, DV), + (s_vo_t, s_vo_d), (i_c*BTL, i_v*BV), (BTL, BV), (1, 0)) + tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0, 1)) + + +@triton.jit +def _parallel_retention_bwd_dq( + i_bh, i_c, i_k, i_v, i_h, + k, v, do, dq, s_qk_h, s_qk_t, s_qk_d, s_vo_h, + s_vo_t, s_vo_d, B, H, T, scale, + BTL: tl.constexpr, BTS: tl.constexpr, BK: tl.constexpr, BV: tl.constexpr, + DK: tl.constexpr, DV: tl.constexpr, +): + p_do = tl.make_block_ptr(do + i_bh * s_vo_h, (T, DV), (s_vo_t, s_vo_d), + (i_c * BTL, i_v * BV), (BTL, BV), (1, 0)) + b_do = tl.load(p_do, boundary_check=(0, 1)) + b_dq = tl.zeros([BTL, BK], dtype=tl.float32) + p_k = tl.make_block_ptr(k + i_bh * s_qk_h, (T, DK), + (s_qk_t, s_qk_d), (0, i_k * BK), (BTS, BK), (1, 0)) + p_v = tl.make_block_ptr(v + i_bh * s_vo_h, (DV, T), + (s_vo_d, s_vo_t), (i_v * BV, 0), (BV, BTS), (0, 
1)) + # decay rate given the head index + b_b = tl.math.log2(1 - tl.math.pow(2, -5 - i_h * 1.0)) + # overall decay rate for an entire block + d_b = tl.math.exp2(b_b * BTS) + # cumulative decay from the end of the chunk + d_h = tl.math.exp2((BTS - tl.arange(0, BTS)) * b_b) + for _ in range(0, i_c * BTL, BTS): + # [BTS, BK] + b_k = tl.load(p_k, boundary_check=(0, 1)) + # [BV, BTS] + b_v = tl.load(p_v, boundary_check=(0, 1)) + # [BTL, BTS] + b_ds = tl.dot(b_do, b_v, allow_tf32=False) * d_h[None, :] + # [BQ, BD] + b_dq *= d_b + b_dq += tl.dot(b_ds.to(b_v.dtype), b_k, allow_tf32=False) + p_k = tl.advance(p_k, (BTS, 0)) + p_v = tl.advance(p_v, (0, BTS)) + b_dq *= tl.math.exp2(tl.arange(0, BTL) * b_b)[:, None] * scale + o_q = tl.arange(0, BTL) + o_k = tl.arange(0, BTS) + p_k = tl.make_block_ptr(k + i_bh * s_qk_h, (T, DK), + (s_qk_t, s_qk_d), (i_c * BTL, i_k * BK), (BTS, BK), (1, 0)) + p_v = tl.make_block_ptr(v + i_bh * s_vo_h, (DV, T), + (s_vo_d, s_vo_t), (i_v * BV, i_c * BTL), (BV, BTS), (0, 1)) + # Q block and K block have overlap. 
masks required + for _ in range(i_c * BTL, (i_c + 1) * BTL, BTS): + # [BTS, BK] + b_k = tl.load(p_k, boundary_check=(0, 1)) + # [BV, BTS] + b_v = tl.load(p_v, boundary_check=(0, 1)) + # [BTL, BTS] + m_s = o_q[:, None] >= o_k[None, :] + d_s = tl.where(m_s, tl.math.exp2( + (o_q[:, None] - o_k[None, :]) * b_b), 0) + b_ds = tl.dot(b_do, b_v, allow_tf32=False) * d_s * scale + # [BTL, BK] + b_dq += tl.dot(b_ds.to(b_k.dtype), b_k, allow_tf32=False) + p_k = tl.advance(p_k, (BTS, 0)) + p_v = tl.advance(p_v, (0, BTS)) + o_k += BTS + p_dq = tl.make_block_ptr(dq + (i_bh + B * H * i_v) * s_qk_h, (T, DK), + (s_qk_t, s_qk_d), (i_c*BTL, i_k*BK), (BTL, BK), (1, 0)) + tl.store(p_dq, b_dq.to(p_dq.dtype.element_ty), boundary_check=(0, 1)) + return + + +@triton.jit +def _parallel_retention_bwd_dkv( + i_bh, i_c, i_k, i_v, i_h, + q, k, v, do, dk, dv, s_qk_h, s_qk_t, s_qk_d, s_vo_h, + s_vo_t, s_vo_d, B, H, T, scale, + BTL: tl.constexpr, BTS: tl.constexpr, BK: tl.constexpr, BV: tl.constexpr, + DK: tl.constexpr, DV: tl.constexpr, +): + # no overlap. no need for mask. 
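The decay bookkeeping in these kernels all derives from the per-head retention rate gamma_h = 1 - 2**(-5 - h). Working in log2 space (`b_b = log2(gamma_h)`) lets a decay over n steps be computed as `exp2(n * b_b) = gamma_h ** n`, and a whole block of `BTS` steps decay by `d_b = exp2(b_b * BTS)`. A small sketch under that reading of `b_b`/`d_b` (`head_decay` is an illustrative name, not a kernel symbol):

```python
import math

def head_decay(i_h: int) -> float:
    # Per-head retention decay, as used via tl.math.pow(2, -5 - i_h) above
    return 1.0 - 2.0 ** (-5 - i_h)

b_b = math.log2(head_decay(0))   # head 0: gamma_0 = 0.96875
d_b = 2.0 ** (b_b * 32)          # decay across a BTS = 32 block
assert abs(d_b - head_decay(0) ** 32) < 1e-12
```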
+ b_b = tl.math.log2(1 - tl.math.pow(2, -5 - i_h * 1.0)) + # overall decay rate for an entire block + d_b = tl.math.exp2(b_b * BTS) + # compute dk dv + p_k = tl.make_block_ptr(k + i_bh * s_qk_h, (T, DK), (s_qk_t, s_qk_d), + (i_c * BTL, i_k * BK), (BTL, BK), (1, 0)) + p_v = tl.make_block_ptr(v + i_bh * s_vo_h, (T, DV), (s_vo_t, s_vo_d), + (i_c * BTL, i_v * BV), (BTL, BV), (1, 0)) + b_k, b_v = tl.load(p_k, boundary_check=(0, 1)), tl.load( + p_v, boundary_check=(0, 1)) + b_dk, b_dv = tl.zeros([BTL, BK], dtype=tl.float32), tl.zeros( + [BTL, BV], dtype=tl.float32) + d_h = tl.math.exp2((BTL - tl.arange(0, BTL)) * b_b) + b_kd = (b_k * d_h[:, None]).to(b_k.dtype) + d_q = tl.math.exp2(tl.arange(0, BTS) * b_b) + for i in range((tl.cdiv(T, BTS) * BTS)-BTS, (i_c + 1) * BTL - BTS, -BTS): + p_q = tl.make_block_ptr( + q + i_bh * s_qk_h, (DK, T), (s_qk_d, s_qk_t), (i_k * BK, i), (BK, BTS), (0, 1)) + p_do = tl.make_block_ptr( + do + i_bh * s_vo_h, (DV, T), (s_vo_d, s_vo_t), (i_v * BV, i), (BV, BTS), (0, 1)) + b_q = tl.load(p_q, boundary_check=(0, 1)) # [BK, BTS] + b_do = tl.load(p_do, boundary_check=(0, 1)) # [BV, BTS] + b_do = (b_do * d_q[None, :]).to(b_do.dtype) + + b_dv *= d_b + b_s = tl.dot(b_kd.to(b_q.dtype), b_q, allow_tf32=False) # [BTL, BTS] + b_dv += tl.dot(b_s.to(b_q.dtype), tl.trans(b_do), allow_tf32=False) + + b_dk *= d_b + b_ds = tl.dot(b_v, b_do, allow_tf32=False) + b_dk += tl.dot(b_ds.to(b_q.dtype), tl.trans(b_q), allow_tf32=False) + b_dk *= d_h[:, None] * scale + b_dv *= scale + tl.debug_barrier() + o_q, o_k = tl.arange(0, BTS), tl.arange(0, BTL) + for i in range(i_c*BTL, (i_c+1)*BTL, BTS): + p_q = tl.make_block_ptr( + q + i_bh * s_qk_h, (DK, T), (s_qk_d, s_qk_t), (i_k * BK, i), (BK, BTS), (0, 1)) + p_do = tl.make_block_ptr( + do + i_bh * s_vo_h, (DV, T), (s_vo_d, s_vo_t), (i_v * BV, i), (BV, BTS), (0, 1)) + b_q = tl.load(p_q, boundary_check=(0, 1)) # [BD, BQ] + b_do = tl.load(p_do, boundary_check=(0, 1)) + # [BK, BQ] + m_s = o_k[:, None] <= o_q[None, :] + d_s = 
tl.where(m_s, tl.math.exp2( + (-o_k[:, None] + o_q[None, :]) * b_b.to(tl.float32)), 0) * scale + b_s = tl.dot(b_k, b_q, allow_tf32=False) * d_s + b_ds = tl.dot(b_v, b_do, allow_tf32=False) * d_s + # [BK, BD] + b_dk += tl.dot(b_ds.to(b_q.dtype), tl.trans(b_q), allow_tf32=False) + b_dv += tl.dot(b_s.to(b_q.dtype), tl.trans(b_do), allow_tf32=False) + o_q += BTS + p_dk = tl.make_block_ptr(dk + (i_bh + B * H * i_v) * s_qk_h, + (T, DK), (s_qk_t, s_qk_d), (i_c*BTL, i_k*BK), (BTL, BK), (1, 0)) + p_dv = tl.make_block_ptr(dv + (i_bh + B * H * i_k) * s_vo_h, + (T, DV), (s_vo_t, s_vo_d), (i_c*BTL, i_v*BV), (BTL, BV), (1, 0)) + tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), boundary_check=(0, 1)) + tl.store(p_dv, b_dv.to(p_dv.dtype.element_ty), boundary_check=(0, 1)) + return + + +@triton.jit +def parallel_retention_bwd_kernel( + q, k, v, do, dq, dk, dv, s_qk_h, s_qk_t, s_qk_d, s_vo_h, + s_vo_t, s_vo_d, B, H, T, scale, + BTL: tl.constexpr, BTS: tl.constexpr, BK: tl.constexpr, BV: tl.constexpr, + DK: tl.constexpr, DV: tl.constexpr, +): + i_kv, i_c, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + NV = tl.cdiv(DV, BV) + i_k = i_kv // (NV) + i_v = i_kv % (NV) + i_h = i_bh % H + _parallel_retention_bwd_dq( + i_bh, i_c, i_k, i_v, i_h, + k, v, do, dq, s_qk_h, s_qk_t, s_qk_d, s_vo_h, + s_vo_t, s_vo_d, B, H, T, scale, BTL=BTL, BTS=BTS, BK=BK, BV=BV, DK=DK, DV=DV + ) + tl.debug_barrier() + _parallel_retention_bwd_dkv( + i_bh, i_c, i_k, i_v, i_h, + q, k, v, do, dk, dv, s_qk_h, s_qk_t, s_qk_d, s_vo_h, + s_vo_t, s_vo_d, B, H, T, scale, BTL, BTS, BK, BV, DK, DV + ) + + +class ParallelRetentionFunction(torch.autograd.Function): + @staticmethod + @contiguous + @custom_fwd + def forward(ctx, q, k, v): + BTL, BTS = 128, 32 + assert BTL % BTS == 0 + BK = min(128, triton.next_power_of_2(k.shape[-1])) + BV = min(128, triton.next_power_of_2(v.shape[-1])) + batch_size, n_heads, seq_len, d_head_qk = q.shape + d_head_v = v.shape[-1] + num_stages = 3 if d_head_qk <= 64 else 2 + num_warps = 
4 + NK = triton.cdiv(d_head_qk, BK) + NV = triton.cdiv(d_head_v, BV) + + grid = (NK * NV, triton.cdiv(seq_len, BTL), batch_size * n_heads) + scale = d_head_qk ** -0.5 + o = torch.empty(NK, batch_size, n_heads, seq_len, + d_head_v, dtype=q.dtype, device=q.device) + parallel_retention_fwd_kernel[grid]( + q, k, v, o, + q.stride(1), q.stride(2), q.stride(3), + v.stride(1), v.stride(2), v.stride(3), + batch_size, n_heads, seq_len, scale, + BTL=BTL, BTS=BTS, BK=BK, BV=BV, DK=d_head_qk, DV=d_head_v, + num_warps=num_warps, + num_stages=num_stages + ) + ctx.save_for_backward(q, k, v) + return o.sum(0).to(q.dtype) + + @staticmethod + @contiguous + @custom_bwd + def backward(ctx, do): + q, k, v = ctx.saved_tensors + BTL, BTS = 64, 32 + assert BTL % BTS == 0 + BK = min(128, triton.next_power_of_2(k.shape[-1])) + BV = min(128, triton.next_power_of_2(v.shape[-1])) + batch_size, n_heads, seq_len, d_head_qk = q.shape + d_head_v = v.shape[-1] + num_stages = 3 if d_head_qk <= 64 else 2 + num_warps = 4 + NK = triton.cdiv(d_head_qk, BK) + NV = triton.cdiv(d_head_v, BV) + grid = (NK * NV, triton.cdiv(seq_len, BTL), batch_size * n_heads) + scale = d_head_qk ** -0.5 + + dq = torch.empty(NV, batch_size, n_heads, seq_len, + d_head_qk, dtype=q.dtype, device=q.device) + dk = torch.empty(NV, batch_size, n_heads, seq_len, + d_head_qk, dtype=q.dtype, device=q.device) + dv = torch.empty(NK, batch_size, n_heads, seq_len, + d_head_v, dtype=q.dtype, device=q.device) + + parallel_retention_bwd_kernel[grid]( + q, k, v, do, dq, dk, dv, + q.stride(1), q.stride(2), q.stride(3), + v.stride(1), v.stride(2), v.stride(3), + batch_size, n_heads, seq_len, scale, + BTL=BTL, BTS=BTS, BK=BK, BV=BV, DK=d_head_qk, DV=d_head_v, + num_warps=num_warps, + num_stages=num_stages + ) + + return dq.sum(0).to(q.dtype), dk.sum(0).to(k.dtype), dv.sum(0).to(v.dtype) + + +parallel_retention = ParallelRetentionFunction.apply diff --git a/fla/ops/retention/recurrent_fuse.py b/fla/ops/retention/recurrent_fuse.py new file mode 
100644 index 0000000000000000000000000000000000000000..f78b45fffe8f9cbd3572092999de0a43b48ace68 --- /dev/null +++ b/fla/ops/retention/recurrent_fuse.py @@ -0,0 +1,281 @@ +# -*- coding: utf-8 -*- +# Copyright (c) 2023, Yu Zhang, Songlin Yang + +from typing import Tuple + +import torch +import triton +import triton.language as tl + +from fla.utils import contiguous + +# on-the-fly computation without materializing hidden states into HBMs + + +@triton.jit +def fused_recurrent_retention_fwd_kernel( + # B: batch_size, H: n_heads, T: seq_len, D: d_head + q, # query [B, H, L, D_head_K] + k, # key [B, H, L, D_head_K] + v, # value [B, H, L, D_head_V] + o, # output [B, H, L, D_head_V] + initial_state, + final_state, # final hidden state [B, H, D_head_K, D_head_V] + + s_qk_h, # stride size: L * D_head_K + s_qk_t, # stride size: D_head_K + s_qk_d, # stride size: 1 + + s_vo_h, # stride size: L * D_head_V + s_vo_t, # stride size: D_head_V + s_vo_d, # stride size: 1 + + B, # batch size + H, # n_heads + T, # seq_len + scale, # D_head_K ** -0.5 + BK: tl.constexpr, # BLOCK SIZE along the K dimension + BV: tl.constexpr, # BLOCK SIZE along the V dimension + DK: tl.constexpr, # D_head_K + DV: tl.constexpr, # D_head_V + USE_INITIAL_STATE: tl.constexpr, # whether to use initial state + STORE_FINAL_STATE: tl.constexpr, # whether to store final state +): + # indices + i_v, i_k, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + i_h = i_bh % H + + # decay rate given the head index + b_b = (1 - tl.math.pow(2, -5 - i_h * 1.0)) + + p_q = q + i_bh * s_qk_h + i_k * BK + tl.arange(0, BK) + p_k = k + i_bh * s_qk_h + i_k * BK + tl.arange(0, BK) + p_v = v + i_bh * s_vo_h + i_v * BV + tl.arange(0, BV) + p_o = o + (i_bh + i_k * B * H) * s_vo_h + i_v * BV + tl.arange(0, BV) + + mask_bk = (i_k * BK + tl.arange(0, BK)) < DK + mask_bv = (i_v * BV + tl.arange(0, BV)) < DV + mask_kv = mask_bk[None, :] & mask_bv[:, None] + + h = tl.zeros([BV, BK], dtype=tl.float32) + + if USE_INITIAL_STATE: + 
p_init_s = initial_state + i_bh * DK * DV + \ + (i_k * BK + tl.arange(0, BK)[None, :]) * \ + DV + (i_v * BV + tl.arange(0, BV)[:, None]) + h += tl.load(p_init_s, mask=mask_kv, other=0).to(tl.float32) + + for _ in range(0, T): + _k = tl.load(p_k, mask=mask_bk, other=0).to(tl.float32) + _v = tl.load(p_v, mask=mask_bv, other=0).to(tl.float32) + _q = tl.load(p_q, mask=mask_bk, other=0).to(tl.float32) * scale + + h = b_b * h + _k[None, :] * _v[:, None] + _o = h * _q[None, :] + _o = tl.sum(_o, axis=1) + tl.store(p_o, _o.to(p_o.dtype.element_ty), mask=mask_bv) + + p_q += DK + p_k += DK + p_o += DV + p_v += DV + + if STORE_FINAL_STATE: + p_final_s = final_state + i_bh * DK * DV + \ + (i_k * BK + tl.arange(0, BK)[None, :]) * \ + DV + (i_v * BV + tl.arange(0, BV)[:, None]) + tl.store(p_final_s, h.to(p_final_s.dtype.element_ty), mask=mask_kv) + + +# Similar to Algorithm1 of https://arxiv.org/abs/2006.16236 +@triton.jit +def fused_recurrent_retention_bwd_kernel( + # B: batch_size, H: n_heads, T: seq_len, D: d_head + # NV: number of split in the V dimension. 
NK: number of splits in the K dimension + q, # query [B, H, L, D_head_K] + k, # key [B, H, L, D_head_K] + v, # value [B, H, L, D_head_V] + + do, # gradient of output [B, H, L, D_head_V] + dq, # gradient of query [NV, B, H, L, D_head_K] + dk, # gradient of key [NV, B, H, L, D_head_K] + dv, # gradient of value [NK, B, H, L, D_head_V] + + # initial hidden state [B, H, D_head_K, D_head_V] + initial_state, + + s_qk_h, # stride size: L * D_head_K + s_qk_t, # stride size: D_head_K + s_qk_d, # stride size: 1 + + s_vo_h, # stride size: L * D_head_V + s_vo_t, # stride size: D_head_V + s_vo_d, # stride size: 1 + + B, # batch_size + H, # n_heads + T, # seq_len + scale, # D_head_K ** -0.5 + BK: tl.constexpr, # BLOCK SIZE along the K dimension + BV: tl.constexpr, # BLOCK SIZE along the V dimension + DK: tl.constexpr, # D_head_K + DV: tl.constexpr, # D_head_V + USE_INITIAL_STATE: tl.constexpr, # whether to use initial state +): + i_v, i_k, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + i_h = i_bh % H + + b_b = 1 - tl.math.pow(2, -5 - i_h * 1.0) + + p_q = q + i_bh * s_qk_h + i_k * BK + tl.arange(0, BK) + p_k = k + i_bh * s_qk_h + i_k * BK + tl.arange(0, BK) + p_v = v + i_bh * s_vo_h + i_v * BV + tl.arange(0, BV) + p_do = do + i_bh * s_vo_h + i_v * BV + tl.arange(0, BV) + + p_dq = dq + (i_bh + i_v * B * H) * s_qk_h + i_k * BK + tl.arange(0, BK) + mask_bk = i_k * BK + tl.arange(0, BK) < DK + mask_bv = i_v * BV + tl.arange(0, BV) < DV + + h = tl.zeros([BK, BV], dtype=tl.float32) + + if USE_INITIAL_STATE: + mask_kv = mask_bk[:, None] & mask_bv[None, :] + p_init_s = initial_state + i_bh * DK * DV + \ + (i_k * BK + tl.arange(0, BK)[:, None]) * \ + DV + (i_v * BV + tl.arange(0, BV)[None, :]) + h += tl.load(p_init_s, mask=mask_kv, other=0).to(tl.float32) + + for i in range(0, T): + _k = tl.load(p_k, mask=mask_bk, other=0).to(tl.float32) + _v = tl.load(p_v, mask=mask_bv, other=0).to(tl.float32) + _do = tl.load(p_do, mask=mask_bv, other=0).to(tl.float32) + + h = 
b_b * h + _k[:, None] * _v[None, :] + _d_q = h * _do[None, :] + d_q = tl.sum(_d_q, axis=1) * scale + tl.store(p_dq, d_q.to(p_dq.dtype.element_ty), mask=mask_bk) + + p_k += DK + p_do += DV + p_v += DV + p_dq += DK + + # sync threads + tl.debug_barrier() + + p_q = q + i_bh * s_qk_h + i_k * BK + tl.arange(0, BK) + (T - 1) * DK + p_k = k + i_bh * s_qk_h + i_k * BK + tl.arange(0, BK) + (T - 1) * DK + p_do = do + i_bh * s_vo_h + i_v * BV + tl.arange(0, BV) + (T - 1) * DV + p_v = v + i_bh * s_vo_h + i_v * BV + tl.arange(0, BV) + (T - 1) * DV + p_dk = dk + (i_bh + i_v * B * H) * s_qk_h + i_k * \ + BK + tl.arange(0, BK) + (T - 1) * DK + p_dv = dv + (i_bh + i_k * B * H) * s_vo_h + i_v * \ + BV + tl.arange(0, BV) + (T - 1) * DV + d_h = tl.zeros([BK, BV], dtype=tl.float32) + + for _ in range(T): + _do = tl.load(p_do, mask=mask_bv, other=0).to(tl.float32) + _q = tl.load(p_q, mask=mask_bk, other=0).to(tl.float32) * scale + _k = tl.load(p_k, mask=mask_bk, other=0).to(tl.float32) + _v = tl.load(p_v, mask=mask_bv, other=0).to(tl.float32) + d_h += _q[:, None] * _do[None, :] + d_k = tl.sum(d_h * _v[None, :], axis=1) + d_v = tl.sum(d_h * _k[:, None], axis=0) + + d_h *= b_b + tl.store(p_dk, d_k.to(p_dk.dtype.element_ty), mask=mask_bk) + tl.store(p_dv, d_v.to(p_dv.dtype.element_ty), mask=mask_bv) + + p_do -= DV + p_q -= DK + p_k -= DK + p_v -= DV + p_dk -= DK + p_dv -= DV + + +class FusedRecurrentRetentionFunction(torch.autograd.Function): + + @staticmethod + @contiguous + def forward(ctx, q, k, v, initial_state=None, output_final_state=False): + batch_size, n_heads, seq_len, d_head_qk = q.shape + d_head_v = v.shape[-1] + + scale = d_head_qk ** -0.5 + BK, BV = min(d_head_qk, 32), min(d_head_v, 32) + NK, NV = triton.cdiv(d_head_qk, BK), triton.cdiv(d_head_v, BV) + num_stages = 1 + num_warps = 1 + + o = q.new_empty(NK, batch_size, n_heads, seq_len, d_head_v) + + if output_final_state: + final_state = q.new_empty(batch_size, n_heads, d_head_qk, d_head_v) + else: + final_state = None + + 
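+ # Grid layout note (from the launch below): the head dims are tiled into
+ # NK x NV blocks, each program instance owns one (V-block, K-block,
+ # batch*head) triple, and the NK partial outputs written into o are
+ # reduced afterwards via o.sum(0).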
grid = (NV, NK, batch_size * n_heads) + fused_recurrent_retention_fwd_kernel[grid]( + q, k, v, o, initial_state, final_state, + q.stride(1), q.stride(2), q.stride(3), + v.stride(1), v.stride(2), v.stride(3), + batch_size, n_heads, seq_len, scale, + DK=d_head_qk, DV=d_head_v, BK=BK, BV=BV, + num_warps=num_warps, + num_stages=num_stages, + USE_INITIAL_STATE=initial_state is not None, + STORE_FINAL_STATE=final_state is not None + ) + + o = o.sum(0) + ctx.save_for_backward(q, k, v, initial_state) + return o, final_state + + @staticmethod + @contiguous + def backward(ctx, do, d_final_state=None): + q, k, v, initial_state = ctx.saved_tensors + batch_size, n_heads, seq_len, d_head_qk = q.shape + d_head_v = v.shape[-1] + scale = d_head_qk ** -0.5 + + BK, BV = min(d_head_qk, 32), min(d_head_v, 32) + NK, NV = triton.cdiv(d_head_qk, BK), triton.cdiv(d_head_v, BV) + num_stages = 1 + num_warps = 1 + + dq = q.new_empty(NV, batch_size, n_heads, seq_len, d_head_qk) + dk = q.new_empty(NV, batch_size, n_heads, seq_len, d_head_qk) + dv = q.new_empty(NK, batch_size, n_heads, seq_len, d_head_v) + grid = (NV, NK, batch_size * n_heads) + + fused_recurrent_retention_bwd_kernel[grid]( + q, k, v, do, dq, dk, dv, initial_state, + q.stride(1), q.stride(2), q.stride(3), + v.stride(1), v.stride(2), v.stride(3), + batch_size, n_heads, seq_len, scale, + DK=d_head_qk, DV=d_head_v, BK=BK, BV=BV, + num_warps=num_warps, + num_stages=num_stages, + USE_INITIAL_STATE=initial_state is not None + ) + dq = dq.sum(0) + dk = dk.sum(0) + dv = dv.sum(0) + return dq, dk, dv, None, None + + +# fused_recurrent_retention = FusedRecurrentRetentionFunction.apply + +def fused_recurrent_retention( + q: torch.Tensor, + k: torch.Tensor, + v: torch.Tensor, + initial_state: torch.Tensor = None, + output_final_state: bool = False +) -> Tuple[torch.Tensor, torch.Tensor]: + if initial_state is not None: + initial_state = initial_state.detach() + o, final_state = FusedRecurrentRetentionFunction.apply(q, k, v, initial_state, 
output_final_state) + return o, final_state diff --git a/fla/ops/rotary.py b/fla/ops/rotary.py new file mode 100644 index 0000000000000000000000000000000000000000..18ccc5f06a231f6a92aa2bfdca290fe9a65ffae7 --- /dev/null +++ b/fla/ops/rotary.py @@ -0,0 +1,252 @@ +# Copyright (c) 2023, Tri Dao. https://github.com/Dao-AILab/flash-attention/blob/main/flash_attn/ops/triton/rotary.py + +from typing import Optional, Union + +import torch + +import triton +import triton.language as tl + + +# @triton.autotune( +# configs=[ +# triton.Config({"BLOCK_M": 2}), +# triton.Config({"BLOCK_M": 4}), +# triton.Config({"BLOCK_M": 8}), +# triton.Config({"BLOCK_M": 16}), +# ], +# key=["CACHE_KEY_SEQLEN", "BLOCK_K", "INTERLEAVED"], +# ) +@triton.jit +def rotary_kernel( + OUT, # Pointers to matrices + X, + COS, + SIN, + CU_SEQLENS, + SEQLEN_OFFSETS, # this could be int or a pointer + # Matrix dimensions + seqlen, + nheads, + rotary_dim, + seqlen_ro, + CACHE_KEY_SEQLEN, + # strides + stride_out_batch, + stride_out_seqlen, + stride_out_nheads, + stride_out_headdim, + stride_x_batch, + stride_x_seqlen, + stride_x_nheads, + stride_x_headdim, + # Meta-parameters + BLOCK_K: tl.constexpr, + IS_SEQLEN_OFFSETS_TENSOR: tl.constexpr, + IS_VARLEN: tl.constexpr, + INTERLEAVED: tl.constexpr, + CONJUGATE: tl.constexpr, + BLOCK_M: tl.constexpr, +): + pid_m = tl.program_id(axis=0) + pid_batch = tl.program_id(axis=1) + pid_head = tl.program_id(axis=2) + rotary_dim_half = rotary_dim // 2 + + if not IS_VARLEN: + X = X + pid_batch * stride_x_batch + pid_head * stride_x_nheads + OUT = OUT + pid_batch * stride_out_batch + pid_head * stride_out_nheads + else: + start_idx = tl.load(CU_SEQLENS + pid_batch) + seqlen = tl.load(CU_SEQLENS + pid_batch + 1) - start_idx + X = X + start_idx * stride_x_seqlen + pid_head * stride_x_nheads + OUT = OUT + start_idx * stride_out_seqlen + pid_head * stride_out_nheads + + if pid_m * BLOCK_M >= seqlen: + return + rm = pid_m * BLOCK_M + tl.arange(0, BLOCK_M) + if not 
IS_SEQLEN_OFFSETS_TENSOR: + rm_cs = rm + SEQLEN_OFFSETS + else: + rm_cs = rm + tl.load(SEQLEN_OFFSETS + pid_batch) + rk = tl.arange(0, BLOCK_K) + rk_half = tl.arange(0, BLOCK_K // 2) + + if not INTERLEAVED: + # Load the 1st and 2nd halves of X, do calculation, then store to 1st and 2nd halves of OUT + X = X + (rm[:, None] * stride_x_seqlen + + rk_half[None, :] * stride_x_headdim) + COS = COS + (rm_cs[:, None] * rotary_dim_half + rk_half[None, :]) + SIN = SIN + (rm_cs[:, None] * rotary_dim_half + rk_half[None, :]) + cos = tl.load( + COS, mask=(rm_cs[:, None] < seqlen_ro) & (rk_half[None, :] < rotary_dim_half), other=1.0 + ).to(tl.float32) + sin = tl.load( + SIN, mask=(rm_cs[:, None] < seqlen_ro) & (rk_half[None, :] < rotary_dim_half), other=0.0 + ).to(tl.float32) + x0 = tl.load( + X, mask=(rm[:, None] < seqlen) & (rk_half[None, :] < rotary_dim_half), other=0.0 + ).to(tl.float32) + x1 = tl.load( + X + rotary_dim_half * stride_x_headdim, + mask=(rm[:, None] < seqlen) & (rk_half[None, :] < rotary_dim_half), + other=0.0, + ).to(tl.float32) + if CONJUGATE: + sin = -sin + o0 = x0 * cos - x1 * sin + o1 = x0 * sin + x1 * cos + # write back result + OUT = OUT + (rm[:, None] * stride_out_seqlen + + rk_half[None, :] * stride_out_headdim) + tl.store(OUT, o0, mask=(rm[:, None] < seqlen) + & (rk_half[None, :] < rotary_dim_half)) + tl.store( + OUT + rotary_dim_half * stride_out_headdim, + o1, + mask=(rm[:, None] < seqlen) & (rk_half[None, :] < rotary_dim_half), + ) + else: + # We don't want to load X[0, 2, 4, ...] and X[1, 3, 5, ...] separately since both are slow. + # Instead, we load x0 = X[0, 1, 2, 3, ...] and x1 = X[1, 0, 3, 2, ...]. + # Loading x0 will be fast but x1 will be slow. + # Then we load cos = COS[0, 0, 1, 1, ...] and sin = SIN[0, 0, 1, 1, ...]. + # Then we do the calculation and use tl.where to pick out the right outputs for the even + # and for the odd indices. + rk_swap = rk + ((rk + 1) % 2) * 2 - 1 # 1, 0, 3, 2, 5, 4, ... 
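+ # In the interleaved layout the rotation acts on adjacent pairs:
+ #   out[2*i]     = x[2*i] * cos[i] - x[2*i + 1] * sin[i]
+ #   out[2*i + 1] = x[2*i] * sin[i] + x[2*i + 1] * cos[i]
+ # rk_swap maps each lane to its partner's index, so a single tl.where
+ # over the even/odd lanes below applies the correct sign.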
+ rk_repeat = tl.arange(0, BLOCK_K) // 2 + X0 = X + (rm[:, None] * stride_x_seqlen + + rk[None, :] * stride_x_headdim) + X1 = X + (rm[:, None] * stride_x_seqlen + + rk_swap[None, :] * stride_x_headdim) + COS = COS + (rm_cs[:, None] * rotary_dim_half + rk_repeat[None, :]) + SIN = SIN + (rm_cs[:, None] * rotary_dim_half + rk_repeat[None, :]) + cos = tl.load( + COS, + mask=(rm_cs[:, None] < seqlen_ro) & ( + rk_repeat[None, :] < rotary_dim_half), + other=1.0, + ).to(tl.float32) + sin = tl.load( + SIN, + mask=(rm_cs[:, None] < seqlen_ro) & ( + rk_repeat[None, :] < rotary_dim_half), + other=0.0, + ).to(tl.float32) + x0 = tl.load(X0, mask=(rm[:, None] < seqlen) & (rk[None, :] < rotary_dim), other=0.0).to( + tl.float32 + ) + x1 = tl.load( + X1, mask=(rm[:, None] < seqlen) & (rk_swap[None, :] < rotary_dim), other=0.0 + ).to(tl.float32) + if CONJUGATE: + sin = -sin + x0_cos = x0 * cos + x1_sin = x1 * sin + out = tl.where(rk[None, :] % 2 == 0, x0_cos - x1_sin, x0_cos + x1_sin) + OUT = OUT + (rm[:, None] * stride_out_seqlen + + rk[None, :] * stride_out_headdim) + tl.store(OUT, out, mask=(rm[:, None] < seqlen) + & (rk[None, :] < rotary_dim)) + + +def apply_rotary( + x: torch.Tensor, + cos: torch.Tensor, + sin: torch.Tensor, + seqlen_offsets: Union[int, torch.Tensor] = 0, + cu_seqlens: Optional[torch.Tensor] = None, + max_seqlen: Optional[int] = None, + interleaved=False, + inplace=False, + conjugate=False, +) -> torch.Tensor: + """ + Arguments: + x: (batch, seqlen, nheads, headdim) if cu_seqlens is None + else (total_seqlen, nheads, headdim). 
+ cos: (seqlen_ro, rotary_dim / 2) + sin: (seqlen_ro, rotary_dim / 2) + seqlen_offsets: integer or integer tensor of size (batch,) + cu_seqlens: (batch + 1,) or None + max_seqlen: int + Returns: + y: (batch, seqlen, nheads, headdim) + """ + is_varlen = cu_seqlens is not None + if not is_varlen: + batch, seqlen, nheads, headdim = x.shape + else: + assert max_seqlen is not None, "If cu_seqlens is passed in, then max_seqlen must be passed" + total_seqlen, nheads, headdim = x.shape + batch_p_1 = cu_seqlens.shape[0] + batch = batch_p_1 - 1 + seqlen = max_seqlen + seqlen_ro, rotary_dim = cos.shape + assert sin.shape == cos.shape + rotary_dim *= 2 + assert rotary_dim <= headdim, "rotary_dim must be <= headdim" + assert headdim <= 256, "Only support headdim <= 256" + assert seqlen_ro >= seqlen, "seqlen_ro must be >= seqlen" + + assert ( + cos.dtype == sin.dtype + ), f"cos and sin must have the same dtype, got {cos.dtype} and {sin.dtype}" + assert ( + x.dtype == cos.dtype + ), f"Input and cos/sin must have the same dtype, got {x.dtype} and {cos.dtype}" + + cos, sin = cos.contiguous(), sin.contiguous() + if isinstance(seqlen_offsets, torch.Tensor): + assert seqlen_offsets.shape == (batch,) + assert seqlen_offsets.dtype in [torch.int32, torch.int64] + seqlen_offsets = seqlen_offsets.contiguous() + else: + assert seqlen_offsets + seqlen <= seqlen_ro + + output = torch.empty_like(x) if not inplace else x + if rotary_dim < headdim and not inplace: + output[..., rotary_dim:].copy_(x[..., rotary_dim:]) + + BLOCK_K = ( + 32 + if rotary_dim <= 32 + else (64 if rotary_dim <= 64 else (128 if rotary_dim <= 128 else 256)) + ) + def grid(META): return (triton.cdiv(seqlen, META["BLOCK_M"]), batch, nheads) # noqa + BLOCK_M = 4 if interleaved else (8 if rotary_dim <= 64 else 4) + + # Need this, otherwise Triton tries to launch from cuda:0 and we get + # ValueError: Pointer argument (at 0) cannot be accessed from Triton (cpu tensor?) 
+ with torch.cuda.device(x.device.index): + rotary_kernel[grid]( + output, # data ptrs + x, + cos, + sin, + cu_seqlens, + seqlen_offsets, + seqlen, # shapes + nheads, + rotary_dim, + seqlen_ro, + # key for triton cache (limit number of compilations) + seqlen // 128, + # batch_strides if not varlen else 0 + output.stride(0) if not is_varlen else 0, + output.stride(-3), # seqlen_stride or total_seqlen_stride + output.stride(-2), # nheads_stride + output.stride(-1), # headdim_stride + # batch_strides if not varlen else 0 + x.stride(0) if not is_varlen else 0, + x.stride(-3), # seqlen stride or total_seqlen_stride + x.stride(-2), # nheads stride + x.stride(-1), # headdim stride + BLOCK_K, + isinstance(seqlen_offsets, torch.Tensor), + is_varlen, + interleaved, + conjugate, + BLOCK_M, + ) + return output diff --git a/fla/ops/rwkv4/__init__.py b/fla/ops/rwkv4/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..ae23a00c1673d1b3f60611d781c66dc8c0e83095 --- /dev/null +++ b/fla/ops/rwkv4/__init__.py @@ -0,0 +1,7 @@ +# -*- coding: utf-8 -*- + +from .recurrent_fuse import fused_recurrent_rwkv4 + +__all__ = [ + 'fused_recurrent_rwkv4' +] diff --git a/fla/ops/rwkv4/recurrent_fuse.py b/fla/ops/rwkv4/recurrent_fuse.py new file mode 100644 index 0000000000000000000000000000000000000000..3232087af98dd9dd84957afdd709ec292956a809 --- /dev/null +++ b/fla/ops/rwkv4/recurrent_fuse.py @@ -0,0 +1,484 @@ +# -*- coding: utf-8 -*- +# adopted from https://github.com/codekansas/rwkv + +from typing import Any, cast + +import torch +import triton +import triton.language as tl +from torch import Tensor +from torch.autograd.function import Function, FunctionCtx, once_differentiable + + +def get_block_size_c(chans: int) -> int: + if chans < 32: + return 32 + if chans < 64: + return 64 + return 128 + + +@triton.jit +def fused_recurrent_rwkv4_forward_kernel( + # W + w_ptr, + w_s_c, + # U + u_ptr, + u_s_c, + # K + k_ptr, + k_s_b, + k_s_t, + k_s_c, + # V + v_ptr, + v_s_b, + 
v_s_t, + v_s_c, + # State + state_ptr, + state_s_b, + state_s_abe, + state_s_c, + # WKV + wkv_ptr, + wkv_s_b, + wkv_s_t, + wkv_s_c, + # Output state + state_out_ptr, + state_out_s_b, + state_out_s_abe, + state_out_s_t, + state_out_s_c, + # Params + chans, + tsz, + BLOCK_SIZE_C: tl.constexpr, +): + # Parallelize over the batch dimension. + b_idx = tl.program_id(0) + c_idx = tl.program_id(1) + + cs = (c_idx * BLOCK_SIZE_C) + tl.arange(0, BLOCK_SIZE_C) + cmask = cs < chans + + # Pointers to the batch (and possibly channel) for the input tensors. + k_ptr = k_ptr + b_idx * k_s_b + v_ptr = v_ptr + b_idx * v_s_b + alpha_ptr = state_ptr + b_idx * state_s_b + beta_ptr = state_ptr + b_idx * state_s_b + state_s_abe + eps_ptr = state_ptr + b_idx * state_s_b + 2 * state_s_abe + + # Pointers to the batch (and possibly channel) for the output tensors. + wkv_ptr = wkv_ptr + b_idx * wkv_s_b + alpha_out_ptr = state_out_ptr + b_idx * state_out_s_b + beta_out_ptr = state_out_ptr + b_idx * state_out_s_b + state_out_s_abe + eps_out_ptr = state_out_ptr + b_idx * state_out_s_b + 2 * state_out_s_abe + + # Loads parameters. 
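+ # Numerical-stability note: alpha and beta are stored scaled by exp(-eps);
+ # the true numerator/denominator states are exp(eps) * alpha and
+ # exp(eps) * beta. eps carries the running max exponent, so every exp()
+ # in the loop below sees a non-positive argument and cannot overflow
+ # (a streaming log-sum-exp).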
+ alpha = tl.load(alpha_ptr + cs * state_s_c, mask=cmask).to(tl.float32) + beta = tl.load(beta_ptr + cs * state_s_c, mask=cmask).to(tl.float32) + eps = tl.load(eps_ptr + cs * state_s_c, mask=cmask).to(tl.float32) + w = tl.load(w_ptr + cs * w_s_c, mask=cmask).to(tl.float32) + u = tl.load(u_ptr + cs * u_s_c, mask=cmask).to(tl.float32) + + for t in range(tsz): + kt = tl.load(k_ptr + t * k_s_t + cs * k_s_c, mask=cmask).to(tl.float32) + vt = tl.load(v_ptr + t * v_s_t + cs * v_s_c, mask=cmask).to(tl.float32) + + ukt = u + kt + tau = tl.maximum(ukt, eps) + e1a = tl.exp(eps - tau) + e2a = tl.exp(ukt - tau) + wkv = (e1a * alpha + e2a * vt) / (e1a * beta + e2a) + tl.store(wkv_ptr + t * wkv_s_t + cs * wkv_s_c, wkv, mask=cmask) + + w_eps = w + eps + eps = tl.maximum(w_eps, kt) + e1b = tl.exp(w_eps - eps) + e2b = tl.exp(kt - eps) + alpha = e1b * alpha + e2b * vt + beta = e1b * beta + e2b + tl.store(alpha_out_ptr + t * state_out_s_t + cs * state_out_s_c, alpha, mask=cmask) + tl.store(beta_out_ptr + t * state_out_s_t + cs * state_out_s_c, beta, mask=cmask) + tl.store(eps_out_ptr + t * state_out_s_t + cs * state_out_s_c, eps, mask=cmask) + + +def fused_recurrent_rwkv4_forward( + w: Tensor, + u: Tensor, + k: Tensor, + v: Tensor, + state: Tensor, +) -> tuple[Tensor, Tensor]: + (bsz, tsz, chans) = k.shape + + # New tensors to output. + wkvs = k.new_empty(bsz, tsz, chans) + state_out = k.new_empty(bsz, 3, tsz, chans) + + # Constants. 
+ block_size_c = get_block_size_c(chans) + + def grid(meta: dict[str, Any]) -> tuple[int, ...]: + return (bsz, triton.cdiv(chans, meta["BLOCK_SIZE_C"])) + + fused_recurrent_rwkv4_forward_kernel[grid]( + # W + w, + w.stride(0), + # U + u, + u.stride(0), + # K + k, + k.stride(0), + k.stride(1), + k.stride(2), + # V + v, + v.stride(0), + v.stride(1), + v.stride(2), + # State + state, + state.stride(0), + state.stride(1), + state.stride(3), + # WKV + wkvs, + wkvs.stride(0), + wkvs.stride(1), + wkvs.stride(2), + # Output state + state_out, + state_out.stride(0), + state_out.stride(1), + state_out.stride(2), + state_out.stride(3), + # Params + chans, + tsz, + BLOCK_SIZE_C=block_size_c, + ) + + state_out = torch.cat((state, state_out), dim=2) + + return wkvs, state_out + + +@triton.jit +def fused_recurrent_rwkv4_backward_kernel( + # W + w_ptr, + w_s_c, + # U + u_ptr, + u_s_c, + # K + k_ptr, + k_s_b, + k_s_t, + k_s_c, + # V + v_ptr, + v_s_b, + v_s_t, + v_s_c, + # State + state_ptr, + state_s_b, + state_s_abe, + state_s_t, + state_s_c, + # WKV grad + gwkv_ptr, + gwkv_s_b, + gwkv_s_t, + gwkv_s_c, + # Output state grad + gstate_out_ptr, + gstate_out_s_b, + gstate_out_s_abe, + gstate_out_s_c, + # W grad + gw_ptr, + gw_s_c, + # U grad + gu_ptr, + gu_s_c, + # K grad + gk_ptr, + gk_s_b, + gk_s_t, + gk_s_c, + # V grad + gv_ptr, + gv_s_b, + gv_s_t, + gv_s_c, + # State grad + gstate_ptr, + gstate_s_b, + gstate_s_abe, + gstate_s_c, + # Params + tsz, + chans, + BLOCK_SIZE_C: tl.constexpr, +): + # Parallelize over the batch dimension. + b_idx = tl.program_id(0) + c_idx = tl.program_id(1) + + cs = (c_idx * BLOCK_SIZE_C) + tl.arange(0, BLOCK_SIZE_C) + cmask = cs < chans + + # Pointers to the batch (and possibly channel) for the input tensors. 
+ k_ptr = k_ptr + b_idx * k_s_b + v_ptr = v_ptr + b_idx * v_s_b + alpha_ptr = state_ptr + b_idx * state_s_b + beta_ptr = state_ptr + b_idx * state_s_b + state_s_abe + eps_ptr = state_ptr + b_idx * state_s_b + 2 * state_s_abe + + # Pointers to the batch (and possibly channel) for the output tensors. + gk_ptr = gk_ptr + b_idx * gk_s_b + gv_ptr = gv_ptr + b_idx * gv_s_b + + # Pointers to gradients which were received by the function. + gwkv_ptr = gwkv_ptr + b_idx * gwkv_s_b + galpha_out_ptr = gstate_out_ptr + b_idx * gstate_out_s_b + gbeta_out_ptr = gstate_out_ptr + b_idx * gstate_out_s_b + gstate_out_s_abe + geps_out_ptr = gstate_out_ptr + b_idx * gstate_out_s_b + 2 * gstate_out_s_abe + + # Loads parameters. + galpha = tl.load(galpha_out_ptr + gstate_out_s_c * cs, mask=cmask).to(tl.float32) + gbeta = tl.load(gbeta_out_ptr + gstate_out_s_c * cs, mask=cmask).to(tl.float32) + geps = tl.load(geps_out_ptr + gstate_out_s_c * cs, mask=cmask).to(tl.float32) + w = tl.load(w_ptr + w_s_c * cs, mask=cmask).to(tl.float32) + u = tl.load(u_ptr + u_s_c * cs, mask=cmask).to(tl.float32) + + # Gradient accumulators. 
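+ # w and u are shared across time steps, so their gradients are summed
+ # over the whole (reversed) time loop in registers and flushed once after
+ # the loop; galpha/gbeta/geps are the adjoints of the carried state,
+ # propagated from step t back to step t - 1.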
+ gw = tl.zeros_like(w) + gu = tl.zeros_like(u) + + alpha_prev = tl.load(alpha_ptr + tsz * state_s_t + state_s_c * cs, mask=cmask).to(tl.float32) + beta_prev = tl.load(beta_ptr + tsz * state_s_t + state_s_c * cs, mask=cmask).to(tl.float32) + eps_prev = tl.load(eps_ptr + tsz * state_s_t + state_s_c * cs, mask=cmask).to(tl.float32) + + for t in range(tsz): + tc = tsz - t - 1 + + kt = tl.load(k_ptr + tc * k_s_t + k_s_c * cs, mask=cmask).to(tl.float32) + vt = tl.load(v_ptr + tc * v_s_t + v_s_c * cs, mask=cmask).to(tl.float32) + + alpha_curr = alpha_prev + beta_curr = beta_prev + eps_curr = eps_prev + + alpha_prev = tl.load(alpha_ptr + tc * state_s_t + state_s_c * cs, mask=cmask).to(tl.float32) + beta_prev = tl.load(beta_ptr + tc * state_s_t + state_s_c * cs, mask=cmask).to(tl.float32) + eps_prev = tl.load(eps_ptr + tc * state_s_t + state_s_c * cs, mask=cmask).to(tl.float32) + + ukt = u + kt + tau = tl.maximum(ukt, eps_prev) + e1 = tl.exp(eps_prev - tau) + e2 = tl.exp(ukt - tau) + + euke = tl.exp(ukt + eps_prev - 2 * tau) + + denom = e1 * beta_prev + e2 + denom_sq = denom * denom + + gwkvt = tl.load(gwkv_ptr + tc * gwkv_s_t + gwkv_s_c * cs, mask=cmask).to(tl.float32) + + # Backpropagates wkv gradients. + guk = gwkvt * e2 * (e1 * beta_prev * vt - e1 * alpha_prev) / denom_sq + gu += guk + gk = guk + gv = gwkvt * e2 / denom + + galpha_wkv = gwkvt * e1 / denom + gbeta_wkv = -gwkvt * e1 * (e2 * vt + e1 * alpha_prev) / denom_sq + geps_wkv_denom = e1 * beta_prev + e2 + geps_wkv = gwkvt * euke * (alpha_prev - vt * beta_prev) / (geps_wkv_denom * geps_wkv_denom) + + e1 = tl.exp(w + eps_prev - eps_curr) + e2 = tl.exp(kt - eps_curr) + + # Backpropagates alpha gradients. + galpha_we = galpha * e1 * alpha_prev + gw += galpha_we + gk += galpha * e2 * vt + gv += galpha * e2 + geps += galpha * -alpha_curr + + # Backpropagates beta gradients. + gbeta_we = gbeta * e1 * beta_prev + gw += gbeta_we + gk += gbeta * e2 + geps += gbeta * -beta_curr + + # Backpropagates epsilon gradients. 
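+ # eps_t = maximum(w + eps_{t-1}, k_t), so the (sub)gradient of eps routes
+ # entirely to the branch that attained the max: to w (and eps_{t-1}) when
+ # w + eps_prev > kt, otherwise to k_t.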
+ geps_mask = w + eps_prev > kt + geps_we = tl.where(geps_mask, geps, tl.zeros_like(geps)) + gw += geps_we + gk += tl.where(geps_mask, tl.zeros_like(geps), geps) + + # Stores the gradients for k and v. + tl.store(gk_ptr + tc * gk_s_t + gk_s_c * cs, gk, mask=cmask) + tl.store(gv_ptr + tc * gv_s_t + gv_s_c * cs, gv, mask=cmask) + + # Computes new gradients for alpha and beta. + galpha = galpha * e1 + galpha_wkv + gbeta = gbeta * e1 + gbeta_wkv + geps = galpha_we + gbeta_we + geps_we + geps_wkv + + # Stores final gradients for alpha and beta. + galpha_ptr = gstate_ptr + b_idx * gstate_s_b + gbeta_ptr = gstate_ptr + b_idx * gstate_s_b + gstate_s_abe + geps_ptr = gstate_ptr + b_idx * gstate_s_b + 2 * gstate_s_abe + tl.store(galpha_ptr + gstate_s_c * cs, galpha, mask=cmask) + tl.store(gbeta_ptr + gstate_s_c * cs, gbeta, mask=cmask) + tl.store(geps_ptr + gstate_s_c * cs, geps, mask=cmask) + + # Stores final gradients for w and u. + gw_temp = tl.load(gw_ptr + gw_s_c * cs, mask=cmask).to(tl.float32) + gw_temp += gw + tl.store(gw_ptr + gw_s_c * cs, gw_temp, mask=cmask) + gu_temp = tl.load(gu_ptr + gu_s_c * cs, mask=cmask).to(tl.float32) + gu_temp += gu + tl.store(gu_ptr + gu_s_c * cs, gu_temp, mask=cmask) + + +def fused_recurrent_rwkv4_backward( + w: Tensor, + u: Tensor, + k: Tensor, + v: Tensor, + state: Tensor, + grad_wkv: Tensor, + grad_state: Tensor, +) -> tuple[Tensor, Tensor, Tensor, Tensor, Tensor]: + bsz, tsz, chans = k.shape + + gw = torch.zeros_like(w) # New tensors to output. + gu = torch.zeros_like(u) + gk = torch.empty_like(k) + gv = torch.empty_like(v) + gstate = k.new_empty(bsz, 3, 1, chans) + + block_size_c = get_block_size_c(chans) # Constants. 
+ + def grid(meta: dict[str, Any]) -> tuple[int, ...]: + return (bsz, triton.cdiv(chans, meta["BLOCK_SIZE_C"])) + + fused_recurrent_rwkv4_backward_kernel[grid]( + # W + w, + w.stride(0), + # U + u, + u.stride(0), + # K + k, + k.stride(0), + k.stride(1), + k.stride(2), + # V + v, + v.stride(0), + v.stride(1), + v.stride(2), + # State + state, + state.stride(0), + state.stride(1), + state.stride(2), + state.stride(3), + # WKV grad + grad_wkv, + grad_wkv.stride(0), + grad_wkv.stride(1), + grad_wkv.stride(2), + # Output state grad + grad_state, + grad_state.stride(0), + grad_state.stride(1), + grad_state.stride(3), + # W grad + gw, + gw.stride(0), + # U grad + gu, + gu.stride(0), + # K grad + gk, + gk.stride(0), + gk.stride(1), + gk.stride(2), + # V grad + gv, + gv.stride(0), + gv.stride(1), + gv.stride(2), + # State grad + gstate, + gstate.stride(0), + gstate.stride(1), + gstate.stride(3), + # Params + tsz, + chans, + BLOCK_SIZE_C=block_size_c, + ) + + return gw, gu, gk, gv, gstate + + +class FusedRecurrentRWKV4Function(Function): + @staticmethod + def forward( + ctx: FunctionCtx, + w: Tensor, + u: Tensor, + k: Tensor, + v: Tensor, + state: Tensor, + ) -> tuple[Tensor, Tensor]: + ctx.input_dtype = k.dtype + + if ( + w.device.type != "cuda" + or u.device.type != "cuda" + or k.device.type != "cuda" + or v.device.type != "cuda" + ): + raise ValueError( + "Calling the CUDA kernel for wkv attention requires all tensors to be on CUDA devices." 
+ ) + + w = -torch.exp(w.float().contiguous()) + if k.dtype == torch.float16: + u = u.float() + k = k.float() + v = v.float() + u = u.contiguous() + k = k.contiguous() + v = v.contiguous() + wkv, state_out = fused_recurrent_rwkv4_forward(w, u, k, v, state) + ctx.save_for_backward(w, u, k, v, state_out[:, :, :-1]) + return wkv, state_out[:, :, -1:] + + @staticmethod + @once_differentiable + def backward(ctx: FunctionCtx, gwkv: Tensor, gstate: Tensor) -> tuple[Tensor, Tensor, Tensor, Tensor, Tensor]: + w, u, k, v, state = cast(tuple[Tensor, ...], ctx.saved_tensors) + gw, gu, gk, gv, gstate = fused_recurrent_rwkv4_backward(w, u, k, v, state, gwkv, gstate) + return gw, gu, gk, gv, gstate + + +def fused_recurrent_rwkv4(w: Tensor, u: Tensor, k: Tensor, v: Tensor, state: Tensor) -> tuple[Tensor, Tensor]: + return FusedRecurrentRWKV4Function.apply(w, u, k, v, state) diff --git a/fla/ops/rwkv6/__init__.py b/fla/ops/rwkv6/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..52f9fe7ea317f30e1bd78f3a13914e9c8774bfff --- /dev/null +++ b/fla/ops/rwkv6/__init__.py @@ -0,0 +1,9 @@ +# -*- coding: utf-8 -*- + +from .chunk import chunk_rwkv6 +from .recurrent_fuse import fused_recurrent_rwkv6 + +__all__ = [ + 'chunk_rwkv6', + 'fused_recurrent_rwkv6' +] diff --git a/fla/ops/rwkv6/chunk.py b/fla/ops/rwkv6/chunk.py new file mode 100644 index 0000000000000000000000000000000000000000..0e746c996368a654bd096bcc861f9404a2ef194b --- /dev/null +++ b/fla/ops/rwkv6/chunk.py @@ -0,0 +1,921 @@ +# -*- coding: utf-8 -*- + +# Copyright (c) 2023-2024, Yu Zhang, Songlin Yang + +from typing import Optional, Tuple + +import torch +import triton +import triton.language as tl + +from fla.ops.utils import chunk_reversed_cumsum_fwd +from fla.utils import contiguous + + +@triton.autotune( + configs=[ + triton.Config({'BS': 16}, num_warps=2), + triton.Config({'BS': 16}, num_warps=4), + triton.Config({'BS': 16}, num_warps=8), + triton.Config({'BS': 32}, num_warps=2), + 
triton.Config({'BS': 32}, num_warps=4), + triton.Config({'BS': 32}, num_warps=8), + triton.Config({'BS': 64}, num_warps=2), + triton.Config({'BS': 64}, num_warps=4), + triton.Config({'BS': 64}, num_warps=8), + ], + key=['S'] +) +@triton.jit +def chunk_rwkv6_fwd_kernel_cum( + s, + o, + o_minus_s, + s_s_h, + s_s_t, + s_s_d, + T: tl.constexpr, + S: tl.constexpr, + BT: tl.constexpr, + BS: tl.constexpr +): + i_s, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + o_i = tl.arange(0, BT) + m_s = tl.where(o_i[:, None] >= o_i[None, :], 1., 0.) + + p_s = tl.make_block_ptr(s + i_bh * s_s_h, (T, S), (s_s_t, s_s_d), (i_t * BT, i_s * BS), (BT, BS), (1, 0)) + p_o = tl.make_block_ptr(o + i_bh * s_s_h, (T, S), (s_s_t, s_s_d), (i_t * BT, i_s * BS), (BT, BS), (1, 0)) + p_o_minus_s = tl.make_block_ptr(o_minus_s + i_bh * s_s_h, (T, S), (s_s_t, s_s_d), (i_t * BT, i_s * BS), (BT, BS), (1, 0)) + # [BT, BS] + b_s = tl.load(p_s, boundary_check=(0, 1)).to(tl.float32) + b_o = tl.dot(m_s, b_s, allow_tf32=False) + tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0, 1)) + tl.store(p_o_minus_s, (b_o - b_s).to(p_o_minus_s.dtype.element_ty), boundary_check=(0, 1)) + + +@triton.jit +def post_process_grad( + q, + k, + v, + u, + do, + dk, + dq, + du, + scale, + s_k_h, + s_k_t, + s_k_d, + s_v_h, + s_v_t, + s_v_d, + H, + T: tl.constexpr, + BT: tl.constexpr, + K: tl.constexpr, + V: tl.constexpr, + BK: tl.constexpr, + BV: tl.constexpr, +): + i_t, i_bh = tl.program_id(0), tl.program_id(1) + i_h = i_bh % H + + # Note that BK = tl.next_power_of_2(K), BV = tl.next_power_of_2(V) + p_q = tl.make_block_ptr(q + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, 0), (BT, BK), (1, 0)) + p_dq = tl.make_block_ptr(dq + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, 0), (BT, BK), (1, 0)) + p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, 0), (BT, BK), (1, 0)) + p_dk = tl.make_block_ptr(dk + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, 0), (BT, BK), (1, 0)) + 
p_du = tl.make_block_ptr(du + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, 0), (BT, BK), (1, 0)) + p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, 0), (BT, BV), (1, 0)) + p_do = tl.make_block_ptr(do + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, 0), (BT, BV), (1, 0)) + p_u = tl.make_block_ptr(u + i_h * K, (K,), (1,), (0,), (BK,), (0,)) + + b_q = tl.load(p_q, boundary_check=(0, 1)) + b_k = tl.load(p_k, boundary_check=(0, 1)) + b_v = tl.load(p_v, boundary_check=(0, 1)) + b_do = tl.load(p_do, boundary_check=(0, 1)) + b_u = tl.load(p_u, boundary_check=(0,)) + + b_vdo = tl.sum(b_v * b_do, axis=1) + b_du = b_vdo[:, None] * b_k * b_q * scale + b_dq = b_vdo[:, None] * b_k * b_u[None, :] * scale + b_dk = b_vdo[:, None] * b_q * b_u[None, :] * scale + + b_dq += tl.load(p_dq, boundary_check=(0, 1)) + tl.store(p_dq, b_dq.to(p_dq.dtype.element_ty), boundary_check=(0, 1)) + + b_dk += tl.load(p_dk, boundary_check=(0, 1)) + tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), boundary_check=(0, 1)) + + tl.store(p_du, b_du.to(p_du.dtype.element_ty), boundary_check=(0, 1)) + + +@triton.jit +def chunk_rwkv6_fwd_kernel_h( + k, + v, + g, + h, + h0, + ht, + s_k_h, + s_k_t, + s_k_d, + s_v_h, + s_v_t, + s_v_d, + s_h_h, + s_h_t, + s_h_d, + T: tl.constexpr, + K: tl.constexpr, + V: tl.constexpr, + BT: tl.constexpr, + BK: tl.constexpr, + BV: tl.constexpr, + NT: tl.constexpr, + USE_INITIAL_STATE: tl.constexpr, + STORE_FINAL_STATE: tl.constexpr +): + i_v, i_k, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + b_h = tl.zeros([BK, BV], dtype=tl.float32) + if USE_INITIAL_STATE: + p_h = tl.make_block_ptr(h0 + i_bh * K * V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + b_h += tl.load(p_h, boundary_check=(0, 1)).to(tl.float32) + for i_t in range(NT): + p_k = tl.make_block_ptr(k + i_bh * s_k_h, (K, T), (s_k_d, s_k_t), (i_k * BK, i_t * BT), (BK, BT), (0, 1)) + p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), 
(BT, BV), (1, 0)) + p_h = tl.make_block_ptr(h + i_bh * s_h_h + i_t * K * V, (K, V), (s_h_t, s_h_d), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + p_g = tl.make_block_ptr(g + i_bh * s_k_h, (K, T), (s_k_d, s_k_t), (i_k * BK, i_t * BT), (BK, BT), (0, 1)) + p_gn = tl.make_block_ptr(g + i_bh * s_k_h, (T * K,), (s_k_d,), ((i_t * BT + BT - 1) * K + i_k * BK,), (BK,), (0,)) + + tl.store(p_h, b_h.to(p_h.dtype.element_ty), boundary_check=(0, 1)) + # [BK, BT] + b_k = tl.load(p_k, boundary_check=(0, 1)) + # [BT, BV] + b_v = tl.load(p_v, boundary_check=(0, 1)) + # [BK, BT] + b_g = tl.load(p_g, boundary_check=(0, 1)) + if i_t < NT - 1: + # [BK,] + b_gn = tl.load(p_gn, boundary_check=(0,)) + else: + b_gn = tl.min(b_g, axis=1) + b_h *= tl.exp(b_gn)[:, None] + b_k = (b_k * tl.exp(b_gn[:, None] - b_g)).to(b_k.dtype) + b_h += tl.dot(b_k, b_v, allow_tf32=False) + + if STORE_FINAL_STATE: + p_h = tl.make_block_ptr(ht + i_bh * K * V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + tl.store(p_h, b_h.to(p_h.dtype.element_ty), boundary_check=(0, 1)) + + +@triton.jit +def chunk_rwkv6_fwd_kernel_intra( + q, + k, + g, + gs, + u, + A, + s_k_h, + s_k_t, + s_k_d, + scale, + H, + T: tl.constexpr, + K: tl.constexpr, + BT: tl.constexpr, + BC: tl.constexpr, + BK: tl.constexpr, + NC: tl.constexpr, + DK: tl.constexpr +): + i_k, i_c, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + i_t, i_i, i_j = i_c // (NC * NC), (i_c % (NC * NC)) // NC, (i_c % (NC * NC)) % NC + i_h = i_bh % H + n_bh = tl.num_programs(2) + + o_k = i_k * BK + tl.arange(0, BK) + o_q = i_t * BT + i_i * BC + m_k = o_k < K + + if i_i > i_j: + p_q = tl.make_block_ptr(q + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0)) + p_k = tl.make_block_ptr(k + i_bh * s_k_h, (K, T), (s_k_d, s_k_t), (i_k * BK, i_t * BT + i_j * BC), (BK, BC), (0, 1)) + p_gs = tl.make_block_ptr(gs + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0)) + p_gk = tl.make_block_ptr(g 
+ i_bh * s_k_h, (K, T), (s_k_d, s_k_t), (i_k * BK, i_t * BT + i_j * BC), (BK, BC), (0, 1)) + p_A = tl.make_block_ptr(A + (i_k*n_bh+i_bh)*T*BT, (T, BT), (BT, 1), (i_t * BT + i_i * BC, i_j * BC), (BC, BC), (1, 0)) + # [BK,] + b_gn = tl.load(g + i_bh * T * K + (o_q - 1) * K + o_k, mask=(m_k & (i_i > 0) & (o_q <= T)), other=0) + # [BC, BK] + b_q = tl.load(p_q, boundary_check=(0, 1)) + b_gs = tl.load(p_gs, boundary_check=(0, 1)) + b_qg = (b_q * tl.exp(b_gs - b_gn[None, :]) * scale).to(b_q.dtype) + # [BK, BC] + b_k = tl.load(p_k, boundary_check=(0, 1)) + b_gk = tl.load(p_gk, boundary_check=(0, 1)) + b_kg = (b_k * tl.exp(b_gn[:, None] - b_gk)).to(b_k.dtype) + # [BC, BC] + b_A = tl.dot(b_qg, b_kg, allow_tf32=False) + tl.store(p_A, b_A.to(A.dtype.element_ty), boundary_check=(0, 1)) + elif i_i == i_j: + p_q = tl.make_block_ptr(q + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0)) + p_gs = tl.make_block_ptr(gs + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0)) + p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T * K,), (s_k_d,), ((i_t * BT + i_j * BC) * K + i_k * BK,), (BK,), (0,)) + p_q_self = tl.make_block_ptr(q + i_bh * s_k_h, (T*K,), (s_k_d,), ((i_t * BT + i_j * BC) * K + i_k * BK,), (BK,), (0,)) + + # [BC, BK] + b_q = tl.load(p_q, boundary_check=(0, 1)) + b_gs = tl.load(p_gs, boundary_check=(0, 1)) + o_i = tl.arange(0, BC) + o_g = i_bh * T * K + (i_t * BT + i_j * BC) * K + o_k + o_A = (i_bh + i_k * n_bh) * T * BT + (i_t * BT + i_i * BC + tl.arange(0, BC)) * BT + i_j * BC + m_A = (i_t * BT + i_i * BC + tl.arange(0, BC)) < T + p_u = tl.make_block_ptr(u + i_h * DK, (DK,), (1,), (i_k * BK), (BK,), (0,)) + b_u = tl.load(p_u, boundary_check=(0,)) + for j in range(0, BC): + # [BK,] + b_k = tl.load(p_k, boundary_check=(0,)).to(tl.float32) + b_gk = tl.load(g + o_g + j * K, mask=(m_k & ((i_t * BT + i_j * BC + j) < T)), other=0).to(tl.float32) + # [BC,] + b_A = tl.sum(b_q * b_k[None, :] * tl.exp(b_gs - 
b_gk[None, :]) * scale, 1) + b_A = tl.where(o_i > j, b_A, 0.) + # self + b_q_self = tl.load(p_q_self, boundary_check=(0,)).to(tl.float32) + A_self = tl.sum(b_q_self * b_k * b_u * scale, axis=0) + m_self = tl.arange(0, BC) == j + b_A = tl.where(m_self, A_self[None], b_A) + tl.store(A + o_A + j, b_A.to(A.dtype.element_ty), mask=m_A) + p_k = tl.advance(p_k, (K,)) + p_q_self = tl.advance(p_q_self, (K,)) + + +@triton.jit +def chunk_rwkv6_fwd_kernel_inter( + q, + v, + gs, + h, + o, + A, + s_k_h, + s_k_t, + s_k_d, + s_v_h, + s_v_t, + s_v_d, + s_h_h, + s_h_t, + s_h_d, + scale, + T: tl.constexpr, + K: tl.constexpr, + V: tl.constexpr, + BT: tl.constexpr, + BK: tl.constexpr, + BV: tl.constexpr +): + i_v, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + + b_o = tl.zeros([BT, BV], dtype=tl.float32) + for i_k in range(tl.cdiv(K, BK)): + p_q = tl.make_block_ptr(q + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + p_gs = tl.make_block_ptr(gs + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + p_h = tl.make_block_ptr(h + i_bh * s_h_h + i_t * K * V, (K, V), (s_h_t, s_h_d), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + + # [BT, BK] + b_q = tl.load(p_q, boundary_check=(0, 1)) + b_q = (b_q * scale).to(b_q.dtype) + # [BT, BK] + b_gs = tl.load(p_gs, boundary_check=(0, 1)) + # [BT, BK] + b_qg = (b_q * tl.exp(b_gs)).to(b_q.dtype) + # [BK, BV] + b_h = tl.load(p_h, boundary_check=(0, 1)) + # NOTE: the always-true `i_k >= 0` guard below is intentional; the kernel + # works with it in place, though why it is needed remains unclear + # [BT, BV] + if i_k >= 0: + b_o += tl.dot(b_qg, b_h, allow_tf32=False) + p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + p_o = tl.make_block_ptr(o + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + p_A = tl.make_block_ptr(A + i_bh * T * BT, (T, BT), (BT, 1), (i_t * BT, 0), (BT, BT), (1, 0)) + # [BT, BV] + b_v = tl.load(p_v, boundary_check=(0, 1)) + # [BT, BT] + b_A = tl.load(p_A,
boundary_check=(0, 1)) + b_o += tl.dot(b_A, b_v, allow_tf32=False) + tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0, 1)) + + +@triton.jit +def chunk_rwkv6_bwd_kernel_dh( + q, + g, + gs, + do, + dh, + dh0, + s_k_h, + s_k_t, + s_k_d, + s_v_h, + s_v_t, + s_v_d, + s_h_h, + s_h_t, + s_h_d, + scale, + T: tl.constexpr, + K: tl.constexpr, + V: tl.constexpr, + BT: tl.constexpr, + BK: tl.constexpr, + BV: tl.constexpr, + NT: tl.constexpr, + USE_INITIAL_STATE: tl.constexpr +): + i_k, i_v, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + + b_dh = tl.zeros([BK, BV], dtype=tl.float32) + for i_t in range(NT - 1, -1, -1): + p_q = tl.make_block_ptr(q + i_bh * s_k_h, (K, T), (s_k_d, s_k_t), (i_k * BK, i_t * BT), (BK, BT), (0, 1)) + p_do = tl.make_block_ptr(do + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + p_dh = tl.make_block_ptr(dh + i_bh * s_h_h + i_t * K*V, (K, V), (s_h_t, s_h_d), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + p_gs = tl.make_block_ptr(gs + i_bh * s_k_h, (K, T), (s_k_d, s_k_t), (i_k * BK, i_t * BT), (BK, BT), (0, 1)) + p_gn = tl.make_block_ptr(g + i_bh * s_k_h, (T * K,), (s_k_d,), ((i_t * BT + BT - 1) * K + i_k * BK,), (BK,), (0,)) + + # [BK, BT] + b_q = tl.load(p_q, boundary_check=(0, 1)) + b_q = (b_q * scale).to(b_q.dtype) + # [BT, BV] + b_do = tl.load(p_do, boundary_check=(0, 1)) + + tl.store(p_dh, b_dh.to(p_dh.dtype.element_ty), boundary_check=(0, 1)) + + # [BK,] + b_gn = tl.load(p_gn, boundary_check=(0,)) + # [BK, BV] + b_dh *= tl.exp(b_gn)[:, None] + # [BK, BT] + b_gs = tl.load(p_gs, boundary_check=(0, 1)) + b_q = (b_q * tl.exp(b_gs)).to(b_q.dtype) + + # [BK, BV] + b_dh += tl.dot(b_q, b_do, allow_tf32=False) + + if USE_INITIAL_STATE: + p_dh0 = tl.make_block_ptr(dh0 + i_bh * K * V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + tl.store(p_dh0, b_dh.to(p_dh0.dtype.element_ty), boundary_check=(0, 1)) + + +@triton.jit +def chunk_rwkv6_bwd_kernel_inter( + k, + v, + h, + g, + gs, + A, + do, + dh, 
+ dq, + dk, + dv, + dA, + s_k_h, + s_k_t, + s_k_d, + s_v_h, + s_v_t, + s_v_d, + s_h_h, + s_h_t, + s_h_d, + scale, + T: tl.constexpr, + K: tl.constexpr, + V: tl.constexpr, + BT: tl.constexpr, + BK: tl.constexpr, + BV: tl.constexpr +): + i_k, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + n_bh = tl.num_programs(2) + + p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + p_gk = tl.make_block_ptr(g + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + p_gq = tl.make_block_ptr(gs + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + p_gn = tl.make_block_ptr(g + i_bh * s_k_h, (T * K,), (s_k_d,), ((i_t * BT + BT - 1) * K + i_k * BK,), (BK,), (0,)) + p_A = tl.make_block_ptr(A + i_bh * T * BT, (BT, T), (1, BT), (0, i_t * BT), (BT, BT), (0, 1)) + + # [BT, BK] + b_k = tl.load(p_k, boundary_check=(0, 1)) + b_gk = tl.load(p_gk, boundary_check=(0, 1)) + b_gq = tl.load(p_gq, boundary_check=(0, 1)) + b_gn = tl.exp(tl.load(p_gn, boundary_check=(0,))[None, :] - b_gk) + b_k = (b_k * b_gn).to(b_k.dtype) + # [BT, BT] + b_A = tl.load(p_A, boundary_check=(0, 1)) + + b_dq = tl.zeros([BT, BK], dtype=tl.float32) + b_dk = tl.zeros([BT, BK], dtype=tl.float32) + b_dA = tl.zeros([BT, BT], dtype=tl.float32) + for i_v in range(tl.cdiv(V, BV)): + p_v = tl.make_block_ptr(v + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + p_h = tl.make_block_ptr(h + i_bh * s_h_h + i_t * V * K, (V, K), (s_h_d, s_h_t), (i_v * BV, i_k * BK), (BV, BK), (0, 1)) + p_do = tl.make_block_ptr(do + i_bh * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + p_dh = tl.make_block_ptr(dh + i_bh * s_h_h + i_t * K*V, (K, V), (s_h_t, s_h_d), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + p_dv = tl.make_block_ptr(dv + (i_k*n_bh+i_bh) * s_v_h, (T, V), (s_v_t, s_v_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + + # [BT, BV] + b_v = tl.load(p_v, 
boundary_check=(0, 1)) + # [BV, BK] + b_h = tl.load(p_h, boundary_check=(0, 1)) + # [BT, BV] + b_do = tl.load(p_do, boundary_check=(0, 1)) + # [BK, BV] + b_dh = tl.load(p_dh, boundary_check=(0, 1)) + + # [BT, BV] + b_dv = tl.dot(b_k, b_dh, allow_tf32=False) + if i_k == 0: + b_dv += tl.dot(b_A, b_do, allow_tf32=False) + b_do = (b_do * scale).to(b_do.dtype) + tl.store(p_dv, b_dv.to(p_dv.dtype.element_ty), boundary_check=(0, 1)) + # [BT, BT] + b_dA += tl.dot(b_do, tl.trans(b_v), allow_tf32=False) + # [BT, BK] + b_dq += tl.dot(b_do, b_h, allow_tf32=False) + # [BT, BK] + b_dk += tl.dot(b_v, tl.trans(b_dh), allow_tf32=False) + + b_dq = b_dq * tl.exp(b_gq) + b_dk = b_dk * b_gn + + p_dq = tl.make_block_ptr(dq + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + p_dk = tl.make_block_ptr(dk + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + p_dA = tl.make_block_ptr(dA + i_bh * T * BT, (T, BT, ), (BT, 1), (i_t * BT, 0), (BT, BT), (1, 0)) + tl.store(p_dq, b_dq.to(p_dq.dtype.element_ty), boundary_check=(0, 1)) + tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), boundary_check=(0, 1)) + + o_i = tl.arange(0, BT) + m_s = o_i[:, None] > o_i[None, :] + # [BT, BT] + b_dA = tl.where(m_s, b_dA, 0.).to(b_k.dtype) + if i_k == 0: + tl.store(p_dA, b_dA.to(p_dA.dtype.element_ty), boundary_check=(0, 1)) + + +@triton.jit +def chunk_rwkv6_bwd_kernel_intra( + q, + k, + g, + gs, + dA, + dq, + dk, + s_k_h, + s_k_t, + s_k_d, + T: tl.constexpr, + K: tl.constexpr, + BT: tl.constexpr, + BC: tl.constexpr, + BK: tl.constexpr, + NC: tl.constexpr +): + i_k, i_c, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + i_t, i_i = i_c // NC, i_c % NC + + o_k = i_k * BK + tl.arange(0, BK) + o_q = i_t * BT + i_i * BC + m_k = o_k < K + + p_gs = tl.make_block_ptr(gs + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0)) + # [BK,] + b_gn = tl.load(g + i_bh * T * K + (o_q - 1) * K + o_k, mask=(m_k & (i_i > 0) & (o_q 
<= T)), other=0) + # [BC, BK] + b_gs = tl.load(p_gs, boundary_check=(0, 1)) + b_dq = tl.zeros([BC, BK], dtype=tl.float32) + for i_j in range(0, i_i): + p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_j * BC, i_k * BK), (BC, BK), (1, 0)) + p_gk = tl.make_block_ptr(g + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_j * BC, i_k * BK), (BC, BK), (1, 0)) + p_dA = tl.make_block_ptr(dA + i_bh * T * BT, (T, BT), (BT, 1), (i_t * BT + i_i * BC, i_j * BC), (BC, BC), (1, 0)) + # [BC, BK] + b_k = tl.load(p_k, boundary_check=(0, 1)) + b_gk = tl.load(p_gk, boundary_check=(0, 1)) + b_kg = (b_k * tl.exp(b_gn[None, :] - b_gk)).to(b_k.dtype) + # [BC, BC] + b_dA = tl.load(p_dA, boundary_check=(0, 1)) + # [BC, BK] + b_dq += tl.dot(b_dA, b_kg, allow_tf32=False) + b_dq *= tl.exp(b_gs - b_gn[None, :]) + + o_i = tl.arange(0, BC) + o_dA = i_bh * T * BT + (i_t * BT + i_i * BC + tl.arange(0, BC)) * BT + i_i * BC + m_dA = (i_t * BT + i_i * BC + tl.arange(0, BC)) < T + + for j in range(0, BC): + p_kj = tl.make_block_ptr(k + i_bh * s_k_h, (T * K,), (1,), ((i_t * BT + i_i*BC+j) * K + i_k * BK,), (BK,), (0,)) + + # [BC,] + b_dA = tl.load(dA + o_dA + j, mask=m_dA, other=0) + # [BK,] + b_kj = tl.load(p_kj, boundary_check=(0,)).to(tl.float32) + b_gkj = tl.load(g + i_bh * T * K + (o_q + j) * K + o_k, mask=(m_k & ((o_q + j) < T)), other=0) + # [BC, BK] + m_i = o_i[:, None] > j + # [BC, BK] + b_dq += tl.where(m_i, b_dA[:, None] * b_kj[None, :] * tl.exp(b_gs - b_gkj[None, :]), 0.) 
+ + p_dq = tl.make_block_ptr(dq + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0)) + + b_dq = b_dq + tl.load(p_dq, boundary_check=(0, 1)) + tl.store(p_dq, b_dq.to(p_dq.dtype.element_ty), boundary_check=(0, 1)) + + tl.debug_barrier() + p_k = tl.make_block_ptr(k + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0)) + p_gk = tl.make_block_ptr(g + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0)) + p_gn = tl.make_block_ptr(g + i_bh * s_k_h, (T*K,), (s_k_d,), ((i_t * BT + i_i * BC + BC - 1) * K + i_k * BK,), (BK,), (0,)) + # [BK,] + b_gn = tl.load(p_gn, boundary_check=(0,)) + # [BC, BK] + b_k = tl.load(p_k, boundary_check=(0, 1)) + b_gk = tl.load(p_gk, boundary_check=(0, 1)) + b_dk = tl.zeros([BC, BK], dtype=tl.float32) + for i_j in range(i_i + 1, NC): + p_q = tl.make_block_ptr(q + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_j * BC, i_k * BK), (BC, BK), (1, 0)) + p_gs = tl.make_block_ptr(gs + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_j * BC, i_k * BK), (BC, BK), (1, 0)) + p_dA = tl.make_block_ptr(dA + i_bh * T * BT, (T, BT), (BT, 1), (i_t * BT + i_j * BC, i_i * BC), (BC, BC), (1, 0)) + # [BC, BK] + b_q = tl.load(p_q, boundary_check=(0, 1)) + b_gs = tl.load(p_gs, boundary_check=(0, 1)) + b_qg = (b_q * tl.exp(b_gs - b_gn[None, :])).to(b_q.dtype) + # [BC, BC] + b_dA = tl.load(p_dA, boundary_check=(0, 1)) + # [BC, BK] + b_dk += tl.dot(tl.trans(b_dA), b_qg, allow_tf32=False) + b_dk *= tl.exp(b_gn[None, :] - b_gk) + + o_dA = i_bh * T * BT + (i_t * BT + i_i * BC) * BT + i_i * BC + tl.arange(0, BC) + for j in range(0, BC): + p_qj = tl.make_block_ptr(q + i_bh * s_k_h, (T * K,), (1,), ((i_t * BT + i_i * BC + j) * K + i_k * BK,), (BK,), (0,)) + p_gqj = tl.make_block_ptr(gs + i_bh * s_k_h, (T * K,), (1,), ((i_t * BT + i_i * BC + j) * K + i_k * BK,), (BK,), (0,)) + # [BC,] + b_dA = tl.load(dA + o_dA + j * BT, mask=(i_t * BT + i_i * BC + j < T), 
other=0) + # [BK,] + b_qj = tl.load(p_qj, boundary_check=(0,)).to(tl.float32) + b_gqj = tl.load(p_gqj, boundary_check=(0,)).to(tl.float32) + # [BC, BK] + m_i = o_i[:, None] < j + b_dk += tl.where(m_i, b_dA[:, None] * b_qj[None, :] * tl.exp(b_gqj[None, :] - b_gk), 0.) + + p_dk = tl.make_block_ptr(dk + i_bh * s_k_h, (T, K), (s_k_t, s_k_d), (i_t * BT + i_i * BC, i_k * BK), (BC, BK), (1, 0)) + b_dk = b_dk + tl.load(p_dk, boundary_check=(0, 1)) + tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), boundary_check=(0, 1)) + + +class ChunkRWKV6Function(torch.autograd.Function): + + @staticmethod + @contiguous + def forward(ctx, r, k, v, g, u, scale, initial_state, output_final_state, checkpoint_level): + q = r # alias + B, H, T, K, V = *q.shape, v.shape[-1] + BT, BC = 64, 16 + BK = min(64, triton.next_power_of_2(K)) + BV = min(64, triton.next_power_of_2(V)) + NT, NC = triton.cdiv(T, BT), triton.cdiv(BT, BC) + NK = triton.cdiv(K, BK) + NV = triton.cdiv(V, BV) + num_warps = 4 if BK == 64 else 2 + num_stages = 1 + + def fwd_inner(q, k, v, g, B, H, T, K, V, BT, BK, BV, NT, h0=None, ht=None): + NK, NV = triton.cdiv(K, BK), triton.cdiv(V, BV) + h = q.new_empty(B, H, NT * K, V) + grid = (NV, NK, B * H) + chunk_rwkv6_fwd_kernel_h[grid]( + k, v, g, h, h0, ht, + k.stride(1), k.stride(2), k.stride(3), + v.stride(1), v.stride(2), v.stride(3), + h.stride(1), h.stride(2), h.stride(3), + T=T, K=K, V=V, BT=BT, BK=BK, BV=BV, NT=NT, + USE_INITIAL_STATE=h0 is not None, + STORE_FINAL_STATE=ht is not None, + num_warps=num_warps, + num_stages=num_stages + ) + return h + + final_state = None + if output_final_state: + final_state = q.new_empty(B, H, K, V, dtype=torch.float) + + g_org, g, gs = g, torch.empty_like(g, dtype=torch.float), torch.empty_like(g, dtype=torch.float) + def grid(meta): return ((triton.cdiv(meta['S'], meta['BS']), NT, B * H)) + # keep cumulative normalizer in fp32 + # this kernel is equivalent to + # g_org = g_org.view(B, H, NT, BT, -1) + # g = g_org.cumsum(-2).view(B, H, T, -1)
+ # gs = g - g_org + chunk_rwkv6_fwd_kernel_cum[grid]( + g_org, g, gs, + g.stride(1), g.stride(2), g.stride(3), + T=T, S=K, BT=BT + ) + h = fwd_inner( + q=q, k=k, v=v, g=g, + B=B, H=H, T=T, K=K, V=V, BT=BT, BK=BK, BV=BV, NT=NT, + h0=initial_state if initial_state is not None else None, + ht=final_state if final_state is not None else None + ) + A = q.new_zeros(NK, B, H, T, BT) + grid = (NK, NT * NC * NC, B * H) + chunk_rwkv6_fwd_kernel_intra[grid]( + q, k, g, gs, u, A, + k.stride(1), k.stride(2), k.stride(3), + scale, + H=H, T=T, K=K, BT=BT, BC=BC, BK=BK, NC=NC, DK=K, + num_warps=num_warps, + num_stages=num_stages + ) + A = A.sum(0, dtype=A.dtype) + o = torch.empty_like(v) + + grid = (NV, NT, B * H) + chunk_rwkv6_fwd_kernel_inter[grid]( + q, v, gs, h, o, A, + k.stride(1), k.stride(2), k.stride(3), + v.stride(1), v.stride(2), v.stride(3), + h.stride(1), h.stride(2), h.stride(3), + scale, + T=T, K=K, V=V, BT=BT, BK=BK, BV=BV, + num_warps=num_warps, + num_stages=num_stages + ) + + if checkpoint_level > 1: + del h + h, initial_state = None, None + del g, gs + ctx.save_for_backward(q, k, v, g_org, u, h, initial_state, A) + ctx.BT = BT + ctx.scale = scale + ctx.checkpoint_level = checkpoint_level + return o, final_state + + @staticmethod + @contiguous + def backward(ctx, do, dht=None): + q, k, v, g, u, h, initial_state, A = ctx.saved_tensors + B, H, T, K, V = *q.shape, v.shape[-1] + BT, BC = ctx.BT, 16 + BK = min(64, triton.next_power_of_2(K)) + BV = min(64, triton.next_power_of_2(V)) + NT, NC = triton.cdiv(T, BT), triton.cdiv(BT, BC) + NK = triton.cdiv(K, BK) + num_warps = 4 if BK == 64 else 2 + num_stages = 1 + + def fwd_inner(q, k, v, g, B, H, T, K, V, BT, BK, BV, NT, h0=None, ht=None): + NK, NV = triton.cdiv(K, BK), triton.cdiv(V, BV) + h = q.new_empty(B, H, NT * K, V) + grid = (NV, NK, B * H) + chunk_rwkv6_fwd_kernel_h[grid]( + k, v, g, h, h0, ht, + k.stride(1), k.stride(2), k.stride(3), + v.stride(1), v.stride(2), v.stride(3), + h.stride(1), h.stride(2), 
h.stride(3), + T=T, K=K, V=V, BT=BT, BK=BK, BV=BV, NT=NT, + USE_INITIAL_STATE=h0 is not None, + STORE_FINAL_STATE=ht is not None, + num_warps=num_warps, + num_stages=num_stages + ) + return h + + def bwd_inner(q, g, gs, h0, do, B, H, T, K, V, BT, BK, BV, NT, scale): + NK, NV = triton.cdiv(K, BK), triton.cdiv(V, BV) + dh = q.new_empty(B, H, NT * K, V) + dh0 = torch.empty_like(h0) if h0 is not None else None + grid = (NK, NV, B * H) + chunk_rwkv6_bwd_kernel_dh[grid]( + q, g, gs, do, dh, dh0, + q.stride(1), q.stride(2), q.stride(3), + do.stride(1), do.stride(2), do.stride(3), + dh.stride(1), dh.stride(2), dh.stride(3), + scale, + T=T, K=K, V=V, BT=BT, BK=BK, BV=BV, NT=NT, + USE_INITIAL_STATE=h0 is not None, + num_warps=num_warps, + num_stages=num_stages + ) + return dh, dh0 + + # recompute cumulative log decays. + g_org, g, gs = g, torch.empty_like(g, dtype=torch.float), torch.empty_like(g, dtype=torch.float) + def grid(meta): return ((triton.cdiv(meta['S'], meta['BS']), NT, B * H)) + # keep cumulative normalizer in fp32 + # this kernel is equivalent to + # g = g.view(B, H, NT, BT, -1).cumsum(-2).view(B, H, T, -1) + chunk_rwkv6_fwd_kernel_cum[grid]( + g_org, g, gs, + g.stride(1), g.stride(2), g.stride(3), + T=T, S=K, BT=BT + ) + + # rerun the forward pass to get h if checkpoint_level >= 1 + if ctx.checkpoint_level == 1: + h = fwd_inner( + q=q, k=k, v=v, g=g, + B=B, H=H, T=T, K=K, V=V, BT=BT, BK=BK, BV=BV, NT=NT, + h0=initial_state if initial_state is not None else None, + ht=None + ) + + scale = ctx.scale + dh, dh0 = bwd_inner( + q, g, gs, initial_state, do, + B=B, H=H, T=T, K=K, V=V, BT=BT, BK=BK, BV=BV, NT=NT, + scale=scale + ) + dq = torch.empty_like(q, dtype=torch.float) + dk = torch.empty_like(k, dtype=torch.float) + dv = v.new_empty(NK, *v.shape) + dA = q.new_zeros(B, H, T, BT) + grid = (NK, NT, B * H) + chunk_rwkv6_bwd_kernel_inter[grid]( + k, v, h, g, gs, A, do, dh, dq, dk, dv, dA, + k.stride(1), k.stride(2), k.stride(3), + v.stride(1), v.stride(2),
v.stride(3), + h.stride(1), h.stride(2), h.stride(3), + scale, + T=T, K=K, V=V, BT=BT, BK=BK, BV=BV, + num_warps=num_warps, + num_stages=num_stages + ) + dv = dv.sum(0, dtype=dv.dtype) + grid = (NK, NT * NC, B * H) + chunk_rwkv6_bwd_kernel_intra[grid]( + q, k, g, gs, dA, dq, dk, + k.stride(1), k.stride(2), k.stride(3), + T=T, K=K, BT=BT, BC=BC, BK=BK, NC=NC, + num_warps=num_warps, + num_stages=num_stages + ) + + # TODO: fuse? + dg = (dq * q)[:, :, 1:] - (dk * k)[:, :, 0:-1] + dg = torch.nn.functional.pad(dg, (0, 0, 0, 1, 0, 0, 0, 0), value=0) + dg = chunk_reversed_cumsum_fwd(dg).to(g) + # equivalent to the following pytorch code. + # du = ((do * v).sum(-1)[..., None] * k * q * scale).sum(-2).to(u) + # dq += ((do * v).sum(-1)[..., None] * k * scale * u[:, :, None, :]) + # dk += ((do * v).sum(-1)[..., None] * q * scale * u[:, :, None, :]) + BT = 64 + grid = (triton.cdiv(T, BT), B * H) + du = torch.empty_like(g, dtype=torch.float) + post_process_grad[grid]( + q, k, v, u, do, dk, dq, du, scale, + q.stride(1), q.stride(2), q.stride(3), + v.stride(1), v.stride(2), v.stride(3), H=H, + T=T, BT=BT, K=K, V=V, BK=triton.next_power_of_2(K), BV=triton.next_power_of_2(V), + num_warps=4 + ) + du = du.sum([0, 2]) + return dq.to(q), dk.to(k), dv.to(v), dg.to(g), du.to(u), None, dh0, None, None + + +def chunk_rwkv6( + r: torch.Tensor, + k: torch.Tensor, + v: torch.Tensor, + g: torch.Tensor, + u: torch.Tensor, + scale: Optional[float] = None, + initial_state: Optional[torch.Tensor] = None, + output_final_state: bool = False, + checkpoint_level: Optional[int] = 0 +) -> Tuple[torch.Tensor, torch.Tensor]: + r""" + Args: + r (torch.Tensor): + receptance of shape `(B, H, T, K)`. Alias: q, the query in linear attention. + k (torch.Tensor): + keys of shape `(B, H, T, K)` + v (torch.Tensor): + values of shape `(B, H, T, V)` + g (torch.Tensor): + data-dependent decays of shape `(B, H, T, K)`, given in log space. Alias: w.
u (torch.Tensor): + bonus of shape `(H, K)` + scale (Optional[float]): + Scale factor for the RWKV6 attention scores. + If not provided, it will default to `1 / sqrt(K)`. Default: `None`. + initial_state (Optional[torch.Tensor]): + Initial state of shape `(B, H, K, V)`. Default: `None`. + output_final_state (Optional[bool]): + Whether to output the final state of shape `(B, H, K, V)`. Default: `False`. + checkpoint_level (Optional[int]): + Checkpointing level; higher values save more memory at the cost of extra recomputation during backward. + Default: `0`: + - Level `0`: store the forward hidden states for backprop. + - Level `1`: recompute the forward hidden states during backward. + """ + assert checkpoint_level in [0, 1] + if scale is None: + scale = r.shape[-1] ** -0.5 + o, final_state = ChunkRWKV6Function.apply(r, k, v, g, u, scale, initial_state, output_final_state, checkpoint_level) + return o, final_state + + +if __name__ == "__main__": + import torch.nn.functional as F + + from fla.ops.rwkv6.recurrent_fuse import fused_recurrent_rwkv6 + B = 4 + H = 4 + L = 1024 + K = 100 + V = 120 + + torch.manual_seed(0) + dtype = torch.float32 + q = torch.randn(B, H, L, K).cuda().to(dtype).requires_grad_(True) + k = torch.randn(B, H, L, K).cuda().to(dtype).requires_grad_(True) + v = torch.randn(B, H, L, V).cuda().to(dtype).requires_grad_(True) + w = (-torch.randn(B, H, L, K).exp()).cuda().to(torch.float32).requires_grad_(True) + u = torch.randn(H, K).cuda().to(dtype).requires_grad_(True) + h0 = torch.randn(B, H, K, V).cuda().to(dtype).requires_grad_(True) + do = torch.rand_like(v).cuda() + o, ht = fused_recurrent_rwkv6(q, k, v, w, u, initial_state=h0, output_final_state=True) + o.backward(do) + dq, q.grad = q.grad.clone(), None + dk, k.grad = k.grad.clone(), None + dv, v.grad = v.grad.clone(), None + dw, w.grad = w.grad.clone(), None + du, u.grad = u.grad.clone(), None + dh0, h0.grad = h0.grad.clone(), None + o2, ht2 = chunk_rwkv6(q, k, v, w, u, initial_state=h0,
output_final_state=True) + o2.backward(do) + torch.testing.assert_close(o, o2, rtol=0, atol=1e-4) + torch.testing.assert_close(ht, ht2, rtol=0, atol=1e-4) + torch.testing.assert_close(q.grad, dq, rtol=0, atol=1e-4) + torch.testing.assert_close(k.grad, dk, rtol=0, atol=1e-4) + torch.testing.assert_close(v.grad, dv, rtol=0, atol=1e-4) + torch.testing.assert_close(w.grad, dw, rtol=0, atol=1e-4) + torch.testing.assert_close(u.grad, du, rtol=0, atol=2e-4) + torch.testing.assert_close(h0.grad, dh0, rtol=0, atol=2e-4) + + print("All tests passed!") + + @triton.testing.perf_report( + triton.testing.Benchmark( + # argument names to use as an x-axis for the plot + x_names=['T'], + # different possible values for `x_name` + x_vals=[128 * 2 ** i for i in range(0, 8)], + # argument name whose value corresponds to a different line in the plot + line_arg='provider', + # possible values for `line_arg` + line_vals=['recurrent', 'chunk', 'recurrent_bwd', 'chunk_bwd'], + # label name for the lines + line_names=['recurrent', 'chunk', 'recurrent_bwd', 'chunk_bwd'], + # line styles + styles=[('green', '-'), ('blue', '--'), ('red', '-.'), ('cyan', ':'), ('yellow', 'dotted'), ('black', 'dashed')], + ylabel="Execution Time (ms)", # label name for the y-axis + # name for the plot. Used also as a file name for saving the plot.
+ plot_name="Performance", + args={}, + ) + ) + def benchmark(T, provider): + device = 'cuda' + dtype = torch.bfloat16 + requires_grad = True + B, H, K = 16, 4, 128 + + q = torch.randn(B, H, T, K, device=device, requires_grad=requires_grad, dtype=dtype) + k = torch.randn(B, H, T, K, device=device, requires_grad=requires_grad, dtype=dtype) + v = torch.randn(B, H, T, K, device=device, requires_grad=requires_grad, dtype=dtype) + w = F.logsigmoid(torch.randn(B, H, T, K)).to(dtype=dtype, device=device).requires_grad_(True) + u = torch.randn(H, K, device=device, requires_grad=requires_grad, dtype=dtype) + + do = torch.ones_like(q, dtype=dtype) + quantiles = [0.5, 0.2, 0.8] + results = 0, 0, 0 + if provider == 'recurrent': + results = triton.testing.do_bench(lambda: fused_recurrent_rwkv6(q, k, v, w, u), quantiles=quantiles) + if provider == 'chunk': + results = triton.testing.do_bench(lambda: chunk_rwkv6(q, k, v, w, u), quantiles=quantiles) + if provider == 'recurrent_bwd': + results = triton.testing.do_bench(lambda: fused_recurrent_rwkv6(q, k, v, w, u) + [0].backward(do), quantiles=quantiles) + if provider == 'chunk_bwd': + results = triton.testing.do_bench(lambda: chunk_rwkv6(q, k, v, w, u)[0].backward(do), quantiles=quantiles) + return results + benchmark.run(print_data=True) diff --git a/fla/ops/rwkv6/chunk_naive.py b/fla/ops/rwkv6/chunk_naive.py new file mode 100644 index 0000000000000000000000000000000000000000..e4cf9bdf20d229ccb2b2ba94375f1524bec88b53 --- /dev/null +++ b/fla/ops/rwkv6/chunk_naive.py @@ -0,0 +1,79 @@ +# -*- coding: utf-8 -*- + +import torch +from einops import rearrange + +from fla.ops.rwkv6.chunk import chunk_rwkv6 +from fla.ops.rwkv6.recurrent_fuse import fused_recurrent_rwkv6 + + +def naive_chunk_rwkv6( + q, + k, + v, + w, + u, + chunk_size=32, + initial_state=None, + output_final_state=True, +): + assert q.shape[-2] % chunk_size == 0 + orig_dtype = q.dtype + num_chunk = q.shape[-2] // chunk_size + u = u.unsqueeze(0) + + q, k, v, w = map(lambda 
x: rearrange(x, 'b h (n c) d -> b h n c d', c=chunk_size).float(), (q, k, v, w)) + + w_cumsum = w.cumsum(-2) + + kw = k * (w_cumsum[..., -1, None, :] - w_cumsum).exp() + wkv = kw.transpose(-1, -2) @ v + + wkv_new = torch.zeros_like(wkv) + + for i in range(num_chunk - 1): + wkv_new[:, :, i+1] = (wkv_new[:, :, i] * w_cumsum[:, :, i, -1, :, None].exp()) + wkv[:, :, i] + + o_inter = torch.einsum('b h n d p, b h n c d -> b h n c p', wkv_new, (q * (w_cumsum - w).exp())) + + o_intra = torch.zeros_like(o_inter) + for i in range(chunk_size): + attn = (q[:, :, :, i, None] * k * (w_cumsum[:, :, :, i, None] - w[:, :, :, i, None] - w_cumsum).exp()).sum(-1) + mask = (torch.arange(0, chunk_size) < i).to(attn.device) + attn.masked_fill_(~mask, 0) + intra_inter_o = (attn.unsqueeze(-1) * v).sum(-2) + intra_intra_o = (q[:, :, :, i] * u.unsqueeze(2) * k[:, :, :, i]).sum(-1).unsqueeze(-1) * v[:, :, :, i] + o_intra[:, :, :, i] = intra_inter_o + intra_intra_o + o = o_inter + o_intra + return rearrange(o, 'b h n c d -> b h (n c) d').to(orig_dtype) + + +if __name__ == "__main__": + B = 4 + H = 4 + L = 1024 + D = 100 + dtype = torch.bfloat16 + require_grad = True + q = (torch.randn(B, H, L, D).cuda().to(dtype)).requires_grad_(require_grad) + k = (torch.randn(B, H, L, D).cuda().to(dtype)).requires_grad_(require_grad) + v = torch.randn(B, H, L, 2*D).cuda().to(dtype).requires_grad_(require_grad) + w = torch.nn.functional.logsigmoid(torch.randn(B, H, L, D)).cuda().to(dtype).requires_grad_(require_grad) + u = (torch.randn(H, D).cuda().to(dtype)).requires_grad_(require_grad) + do = torch.rand_like(v).cuda() + o2, _ = chunk_rwkv6(q, k, v, w.clone(), u) + o, _ = fused_recurrent_rwkv6(q, k, v, w, u, scale=1.0) + o.backward(do) + dq, q.grad = q.grad.clone(), None + dk, k.grad = k.grad.clone(), None + dv, v.grad = v.grad.clone(), None + dw, w.grad = w.grad.clone(), None + du, u.grad = u.grad.clone(), None + print((o - o2).abs().max()) + o2.backward(do) + print((o-o2).abs().max()) + print((q.grad - 
dq).abs().max()) + print((k.grad - dk).abs().max()) + print((v.grad - dv).abs().max()) + print((w.grad - dw).abs().max()) + print((u.grad - du).abs().max()) diff --git a/fla/ops/rwkv6/recurrent_fuse.py b/fla/ops/rwkv6/recurrent_fuse.py new file mode 100644 index 0000000000000000000000000000000000000000..af251526fc6506af4a93d60c5d9ee28aaebbc65f --- /dev/null +++ b/fla/ops/rwkv6/recurrent_fuse.py @@ -0,0 +1,378 @@ +# -*- coding: utf-8 -*- + +# Copyright (c) 2024, Songlin Yang + +from typing import Tuple + +import torch +import triton +import triton.language as tl +from torch.cuda.amp import custom_bwd, custom_fwd + +from fla.ops.utils import chunk_reversed_cumsum_fwd +from fla.utils import contiguous + + +@triton.jit +def fused_recurrent_rwkv6_fwd_kernel( + q, # query [B, H, T, K] + k, # key [B, H, T, K] + v, # value [B, H, T, V] + w, # log gate [B, H, T, K] + u, # bonus [H, K] + o, # output [B, H, T, V] + # initial hidden state initialization [B, H, K, V] + h0, + ht, # final hidden state [B, H, K, V] + s_k_h, # stride size: T * K + s_v_h, # stride size: T * V + scale, # K ** -0.5 + B: tl.constexpr, + H: tl.constexpr, + T: tl.constexpr, + K: tl.constexpr, + V: tl.constexpr, + BK: tl.constexpr, # BLOCK SIZE along the K dimension + BV: tl.constexpr, # BLOCK SIZE along the V dimension + USE_INITIAL_STATE: tl.constexpr, # whether to use initial state + STORE_FINAL_STATE: tl.constexpr, # whether to store final state + REVERSE: tl.constexpr, # whether to do autoregressive modeling in the reverse direction +): + i_v, i_k, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + i_h = i_bh % H + + p_q = q + i_bh * s_k_h + i_k * BK + tl.arange(0, BK) + ((T-1) * K if REVERSE else 0) + p_k = k + i_bh * s_k_h + i_k * BK + tl.arange(0, BK) + ((T-1) * K if REVERSE else 0) + p_v = v + i_bh * s_v_h + i_v * BV + tl.arange(0, BV) + ((T-1) * V if REVERSE else 0) + p_o = o + (i_bh + i_k * B * H) * s_v_h + i_v * BV + tl.arange(0, BV) + ((T-1) * V if REVERSE else 0) + p_w = w +
i_bh * s_k_h + i_k * BK + tl.arange(0, BK) + ((T-1) * K if REVERSE else 0) + p_u = u + i_h * K + tl.arange(0, BK) + i_k * BK + + mask_bk = (i_k * BK + tl.arange(0, BK)) < K + mask_bv = (i_v * BV + tl.arange(0, BV)) < V + mask_kv = mask_bv[:, None] & mask_bk[None, :] + + b_h = tl.zeros([BV, BK], dtype=tl.float32) + if USE_INITIAL_STATE: + p_h0 = h0 + i_bh * K * V + (i_k * BK + tl.arange(0, BK)[None, :]) * V + (i_v * BV + tl.arange(0, BV)[:, None]) + b_h += tl.load(p_h0, mask=mask_kv, other=0).to(tl.float32) + + b_u = tl.load(p_u, mask=mask_bk, other=0).to(tl.float32) + for _ in range(0, T): + b_k = tl.load(p_k, mask=mask_bk, other=0).to(tl.float32) + b_v = tl.load(p_v, mask=mask_bv, other=0).to(tl.float32) + b_q = tl.load(p_q, mask=mask_bk, other=0).to(tl.float32) * scale + b_w = tl.load(p_w, mask=mask_bk, other=0).to(tl.float32) + b_w = tl.exp(b_w) + b_kv = b_k[None, :] * b_v[:, None] + b_o = (b_h + b_kv * b_u[None, :]) * b_q[None, :] + b_o = tl.sum(b_o, axis=1) + b_h = b_h * b_w[None, :] + b_h += b_kv + tl.store(p_o, b_o.to(p_o.dtype.element_ty), mask=mask_bv) + p_q += -K if REVERSE else K + p_k += -K if REVERSE else K + p_o += -V if REVERSE else V + p_v += -V if REVERSE else V + p_w += -K if REVERSE else K + + if STORE_FINAL_STATE: + p_ht = ht + i_bh * K * V + (i_k * BK + tl.arange(0, BK)[None, :]) * V + (i_v * BV + tl.arange(0, BV)[:, None]) + tl.store(p_ht, b_h.to(p_ht.dtype.element_ty), mask=mask_kv) + + +# Similar to Algorithm1 of https://arxiv.org/abs/2006.16236 +@triton.jit +def fused_recurrent_rwkv6_bwd_kernel_dq( + # B: B, H: H, T: T, D: d_head + # NV: number of split in the V dimension. 
NK: number of split in the K dimension + k, # key [B, H, T, K] + v, # value [B, H, T, V] + w, # log gate [B, H, T, K] + u, # bonus [B, H, K] + + do, # gradient of output [B, H, T, V] + dq, # gradient of query [NV, B, H, T, K] + dq_aux, # gradient of query_aux [NV, B, H, T, K] + + # initial hidden state initialization [B, H, K, V] + h0, + + s_k_h, # stride size: T * K + s_v_h, # stride size: T * V + + scale, # K ** -0.5 + B: tl.constexpr, # B + H: tl.constexpr, # H + T: tl.constexpr, # T + BK: tl.constexpr, # BLOCK SIZE along the K dimension + BV: tl.constexpr, # BLOCK SIZE along the V dimension + K: tl.constexpr, # K + V: tl.constexpr, # V + USE_INITIAL_STATE: tl.constexpr, # whether to use initial state + REVERSE: tl.constexpr, # whether to do autoregressive modeling in the reverse direction +): + i_v, i_k, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + i_h = i_bh % H + p_k = k + i_bh * s_k_h + i_k * BK + tl.arange(0, BK) + ((T-1) * K if REVERSE else 0) + p_v = v + i_bh * s_v_h + i_v * BV + tl.arange(0, BV) + ((T-1) * V if REVERSE else 0) + p_do = do + i_bh * s_v_h + i_v * BV + tl.arange(0, BV) + ((T-1) * V if REVERSE else 0) + p_dq = dq + (i_bh + i_v * B * H) * s_k_h + i_k * BK + tl.arange(0, BK) + ((T-1) * K if REVERSE else 0) + p_dq_aux = dq_aux + (i_bh + i_v * B * H) * s_k_h + i_k * BK + tl.arange(0, BK) + ((T-1) * K if REVERSE else 0) + p_w = w + i_bh * s_k_h + i_k * BK + tl.arange(0, BK) + ((T-1) * K if REVERSE else 0) + p_u = u + i_h * K + tl.arange(0, BK) + i_k * BK + + mask_bk = i_k * BK + tl.arange(0, BK) < K + mask_bv = i_v * BV + tl.arange(0, BV) < V + mask_kv = mask_bv[:, None] & mask_bk[None, :] + b_u = tl.load(p_u, mask=mask_bk, other=0).to(tl.float32) + b_h = tl.zeros([BV, BK], dtype=tl.float32) + + if USE_INITIAL_STATE: + p_h0 = h0 + i_bh * K * V + (i_k * BK + tl.arange(0, BK)[None, :]) * V + (i_v * BV + tl.arange(0, BV)[:, None]) + b_h += tl.load(p_h0, mask=mask_kv, other=0).to(tl.float32) + + for _ in range(0, T): + b_k =
tl.load(p_k, mask=mask_bk, other=0).to(tl.float32) + b_v = tl.load(p_v, mask=mask_bv, other=0).to(tl.float32) + b_kv = b_k[None, :] * b_v[:, None] + b_do = tl.load(p_do, mask=mask_bv, other=0).to(tl.float32) + b_w = tl.load(p_w, mask=mask_bk, other=0).to(tl.float32) + b_w = tl.exp(b_w) + h_q = b_h * b_do[:, None] + b_dq = tl.sum(h_q + b_kv * b_u[None, :] * b_do[:, None], axis=0) + b_dq *= scale + b_dq_aux = tl.sum(h_q, axis=0) + b_h = b_h * b_w[None, :] + b_h += b_kv + tl.store(p_dq, b_dq.to(p_dq.dtype.element_ty), mask=mask_bk) + tl.store(p_dq_aux, b_dq_aux.to(p_dq_aux.dtype.element_ty), mask=mask_bk) + p_k += -K if REVERSE else K + p_do += -V if REVERSE else V + p_v += -V if REVERSE else V + p_w += -K if REVERSE else K + p_dq += -K if REVERSE else K + p_dq_aux += -K if REVERSE else K + + +@triton.jit +def fused_recurrent_rwkv6_bwd_kernel_dkv( + # B: B, H: H, T: T, D: d_head + # NV: number of split in the V dimension. NK: number of split in the K dimension + q, # query [B, H, T, K] + k, # key [B, H, T, K] + v, # value [B, H, T, V] + w, # log gate [B, H, T, K] + u, # bonus [B, H, K] + + do, # gradient of output [B, H, T, V] + dk, + dk_aux, + dv, + dh0, + + # initial hidden state initialization [B, H, K, V] + s_k_h, # stride size: T * K + s_v_h, # stride size: T * V + + scale, # K ** -0.5 + B, # B + H, # H + T, # T + BK: tl.constexpr, # BLOCK SIZE along the K dimension + BV: tl.constexpr, # BLOCK SIZE along the V dimension + K: tl.constexpr, # K + V: tl.constexpr, # V + USE_INITIAL_STATE: tl.constexpr, # whether to use initial state + REVERSE: tl.constexpr, # whether to do autoregressive modeling in the reverse direction +): + i_v, i_k, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + i_h = i_bh % H + p_q = q + i_bh * s_k_h + i_k * BK + tl.arange(0, BK) + ((T - 1) * K if not REVERSE else 0) + p_k = k + i_bh * s_k_h + i_k * BK + tl.arange(0, BK) + ((T - 1) * K if not REVERSE else 0) + p_do = do + i_bh * s_v_h + i_v * BV + tl.arange(0, BV) + ((T - 1) * V
if not REVERSE else 0) + p_v = v + i_bh * s_v_h + i_v * BV + tl.arange(0, BV) + ((T - 1) * V if not REVERSE else 0) + p_dk = dk + (i_bh + i_v * B * H) * s_k_h + i_k * BK + tl.arange(0, BK) + ((T - 1) * K if not REVERSE else 0) + p_dk_aux = dk_aux + (i_bh + i_v * B * H) * s_k_h + i_k * BK + tl.arange(0, BK) + ((T - 1) * K if not REVERSE else 0) + p_dv = dv + (i_bh + i_k * B * H) * s_v_h + i_v * BV + tl.arange(0, BV) + ((T - 1) * V if not REVERSE else 0) + p_w = w + i_bh * s_k_h + i_k * BK + tl.arange(0, BK) + ((T - 1) * K if not REVERSE else 0) + b_dh = tl.zeros([BK, BV], dtype=tl.float32) + mask_bk = i_k * BK + tl.arange(0, BK) < K + mask_bv = i_v * BV + tl.arange(0, BV) < V + mask_kv = mask_bk[:, None] & mask_bv[None, :] + + p_u = u + i_h * K + tl.arange(0, BK) + i_k * BK + b_u = tl.load(p_u, mask=mask_bk, other=0).to(tl.float32) + + for _ in range(T-1, -1, -1): + b_q = tl.load(p_q, mask=mask_bk, other=0).to(tl.float32) * scale + b_k = tl.load(p_k, mask=mask_bk, other=0).to(tl.float32) + b_v = tl.load(p_v, mask=mask_bv, other=0).to(tl.float32) + b_w = tl.load(p_w, mask=mask_bk, other=0).to(tl.float32) + b_do = tl.load(p_do, mask=mask_bv, other=0).to(tl.float32) + b_dkv = b_q[:, None] * b_do[None, :] + b_dk = tl.sum(b_dh * b_v[None, :], axis=1) + tl.store(p_dk_aux, b_dk.to(p_dk_aux.dtype.element_ty), mask=mask_bk) + b_dk += tl.sum(b_dkv * b_u[:, None] * b_v[None, :], axis=1) + b_dv = tl.sum((b_dh + (b_dkv * b_u[:, None])) * b_k[:, None], axis=0) + + tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), mask=mask_bk) + tl.store(p_dv, b_dv.to(p_dv.dtype.element_ty), mask=mask_bv) + b_dh *= tl.exp(b_w)[:, None] + b_dh += b_dkv + + p_q += K if REVERSE else -K + p_k += K if REVERSE else -K + p_v += V if REVERSE else -V + p_w += K if REVERSE else -K + p_do += V if REVERSE else -V + p_dk += K if REVERSE else -K + p_dk_aux += K if REVERSE else -K + p_dv += V if REVERSE else -V + + if USE_INITIAL_STATE: + p_dh0 = dh0 + i_bh * K * V + (i_k * BK + tl.arange(0, BK)[:, None]) * V + 
(i_v * BV + tl.arange(0, BV)[None, :]) + tl.store(p_dh0, b_dh.to(p_dh0.dtype.element_ty), mask=mask_kv) + + +class FusedRecurrentRWKV6Function(torch.autograd.Function): + + @staticmethod + @contiguous + @custom_fwd + def forward(ctx, r, k, v, w, u, scale=None, initial_state=None, output_final_state=False, reverse=False): + # alias + q = r + B, H, T, K, V = *q.shape, v.shape[-1] + + BK, BV = min(triton.next_power_of_2(K), 32), min(triton.next_power_of_2(V), 32) + NK, NV = triton.cdiv(K, BK), triton.cdiv(V, BV) + num_stages = 1 + num_warps = 1 + + if output_final_state: + final_state = q.new_empty(B, H, K, V) + else: + final_state = None + + o = q.new_empty(NK, B, H, T, V, dtype=torch.float32) + grid = (NV, NK, B * H) + fused_recurrent_rwkv6_fwd_kernel[grid]( + q, k, v, w, u, o, initial_state, final_state, + k.stride(1), + v.stride(1), + scale, + B=B, H=H, T=T, K=K, V=V, BK=BK, BV=BV, + USE_INITIAL_STATE=initial_state is not None, + STORE_FINAL_STATE=final_state is not None, + REVERSE=reverse, + num_warps=num_warps, + num_stages=num_stages + ) + + o = o.sum(0) + ctx.save_for_backward(q, k, v, w, u, initial_state, o) + ctx.scale = scale + ctx.reverse = reverse + # we do not need the gradient of the final state from the next chunk + # similar to Truncated BPTT + if final_state is not None: + final_state = final_state.detach() + return o.to(q.dtype), final_state + + @staticmethod + @contiguous + @custom_bwd + def backward(ctx, do, d_final_state=None): + q, k, v, w, u, initial_state, o = ctx.saved_tensors + B, H, T, K, V = *q.shape, v.shape[-1] + scale = ctx.scale + + BK, BV = min(triton.next_power_of_2(K), 16), min(triton.next_power_of_2(V), 64) + NK, NV = triton.cdiv(K, BK), triton.cdiv(V, BV) + num_stages = 1 + num_warps = 1 + dq = q.new_empty(NV, B, H, T, K, dtype=torch.float32) + dq_aux = torch.empty_like(dq) + grid = (NV, NK, B * H) + + fused_recurrent_rwkv6_bwd_kernel_dq[grid]( + k, v, w, u, do, dq, dq_aux, initial_state, + q.stride(1), + v.stride(1), + scale, +
B=B, H=H, T=T, K=K, V=V, BK=BK, BV=BV, + num_warps=num_warps, + num_stages=num_stages, + USE_INITIAL_STATE=initial_state is not None, + REVERSE=ctx.reverse, + ) + dq = dq.sum(0).to(q) + dq_aux = dq_aux.sum(0) + + BK, BV = min(triton.next_power_of_2(K), 32), min(triton.next_power_of_2(V), 32) + NK, NV = triton.cdiv(K, BK), triton.cdiv(V, BV) + + dk = q.new_empty(NV, B, H, T, K, dtype=torch.float32) + dk_aux = q.new_empty(NV, B, H, T, K, dtype=torch.float32) + dv = q.new_empty(NK, B, H, T, V, dtype=torch.float32) + dh0 = initial_state.new_empty(B, H, K, V) if initial_state is not None else None + grid = (NV, NK, B * H) + fused_recurrent_rwkv6_bwd_kernel_dkv[grid]( + q, k, v, w, u, do, dk, dk_aux, dv, dh0, + q.stride(1), + v.stride(1), + scale, + B=B, H=H, T=T, K=K, V=V, BK=BK, BV=BV, + num_warps=num_warps, + num_stages=num_stages, + USE_INITIAL_STATE=initial_state is not None, + REVERSE=ctx.reverse, + ) + dk = dk.sum(0).to(k) + dv = dv.sum(0).to(v) + dk_aux = dk_aux.sum(0) + + dw = (dq_aux * q * scale)[:, :, 1:] - (dk_aux * k)[:, :, 0:-1] + dw = torch.nn.functional.pad(dw, (0, 0, 0, 1, 0, 0, 0, 0), value=0) + dw = chunk_reversed_cumsum_fwd(dw).to(w) + + du = ((do * v).sum(-1)[..., None] * k * q * scale).sum([0, -2]).to(u) + return dq, dk, dv, dw, du, None, dh0, None, None + + +def fused_recurrent_rwkv6( + r: torch.Tensor, + k: torch.Tensor, + v: torch.Tensor, + w: torch.Tensor, + u: torch.Tensor, + scale: int = -1, + initial_state: torch.Tensor = None, + output_final_state: bool = False, + causal: bool = True +) -> Tuple[torch.Tensor, torch.Tensor]: + r""" + Args: + r (torch.Tensor): + receptance of shape `(B, H, T, K)`. Alias: q, query in linear attention. + k (torch.Tensor): + keys of shape `(B, H, T, K)` + v (torch.Tensor): + values of shape `(B, H, T, V)` + w (torch.Tensor): + data-dependent decays of shape `(B, H, T, K)` in log space! Alias: g. + u (torch.Tensor): + bonus of shape `(H, K)` + scale (Optional[int]): + Scale factor for the RWKV6 attention scores.
+ If set to `-1` (the default), `1 / sqrt(K)` is used. + initial_state (Optional[torch.Tensor]): + Initial state of shape `(B, H, K, V)`. Default: `None`. + output_final_state (Optional[bool]): + Whether to output the final state of shape `(B, H, K, V)`. Default: `False`. + """ + if scale == -1: + scale = r.shape[-1] ** -0.5 + o, final_state = FusedRecurrentRWKV6Function.apply(r, k, v, w, u, scale, initial_state, output_final_state) + return o, final_state diff --git a/fla/ops/rwkv6/recurrent_naive.py b/fla/ops/rwkv6/recurrent_naive.py new file mode 100644 index 0000000000000000000000000000000000000000..7b1b67e5dd690e238af6aa817bad55677ac32526 --- /dev/null +++ b/fla/ops/rwkv6/recurrent_naive.py @@ -0,0 +1,102 @@ +# -*- coding: utf-8 -*- + +from typing import Optional + +import torch + + +def naive_recurrent_rwkv6( + q: torch.Tensor, + k: torch.Tensor, + v: torch.Tensor, + w: torch.Tensor, + u: torch.Tensor, + scale: Optional[float] = None, + initial_state: Optional[torch.Tensor] = None, + output_final_state: Optional[bool] = False +): + orig_dtype = q.dtype + B, H, T, K, V = *q.shape, v.shape[-1] + q, k, v, w, u = map(lambda x: x.float(), (q, k, v, w, u)) + h = torch.zeros(B, H, K, V, dtype=torch.float32, device=q.device) + o = torch.zeros_like(v) + + if scale is None: + scale = K ** -0.5 + + if initial_state is not None: + h += initial_state + + for i in range(T): + q_i = q[:, :, i, :] * scale + k_i = k[:, :, i] + v_i = v[:, :, i, :] + w_i = w[:, :, i].exp() + kv_i = k_i[..., None] * v_i[..., None, :] + o_i = (h + u[None, ..., None] * kv_i) * q_i[..., None] + o[:, :, i] = o_i.sum(-2) + h = h * w_i[..., None] + kv_i + ht = h if output_final_state else None + return o.to(orig_dtype), ht + + +def naive_recurrent_rwkv6_bwd( + q, + k, + v, + w, + u, + o, + do, + initial_state=None, + output_final_state=False +): + q, k, v, w, u, o, do = map(lambda x: x.float(), (q, k, v, w, u, o, do)) + B, H, T, K, V = *q.shape, v.shape[-1] + h = torch.zeros(B, H, K, V,
dtype=torch.float32, device=q.device) + dq = torch.zeros_like(q) + dq_aux = torch.zeros_like(q) + + if initial_state is not None: + h += initial_state + + for i in range(T): + k_i = k[:, :, i] + v_i = v[:, :, i] + w_i = w[:, :, i].exp() + kv_i = k_i[..., None] * v_i[..., None, :] + h_i = (h + u[None, ..., None] * kv_i) + dq_i = (do[:, :, i, None, :] * h_i).sum(-1) + dq_aux_i = (do[:, :, i, None, :] * h).sum(-1) + dq[:, :, i] = dq_i + dq_aux[:, :, i] = dq_aux_i + h = h * w_i[..., None] + kv_i + + du = torch.zeros_like(u) + dh = torch.zeros_like(h) + dk = torch.zeros_like(k) + dk_aux = torch.zeros_like(k) + dv = torch.zeros_like(v) + + for i in range(T - 1, -1, -1): + d_kv_i = do[:, :, i, None, :] * q[:, :, i, :, None] + k_i = k[:, :, i] + v_i = v[:, :, i] + du_i = (d_kv_i * k_i[..., None] * v_i[..., None, :]).sum(-1) + du += du_i + dk_i = (dh * v_i[..., None, :]).sum(-1) + dk_aux[:, :, i] = dk_i + dk_i += (d_kv_i * u[None, ..., None] * v_i[..., None, :]).sum(-1) + dv_i = (d_kv_i * u[None, ..., None] * k_i[..., None]).sum(-2) + dv_i += (dh * k_i[..., None]).sum(-2) + + dk[:, :, i] = dk_i + dv[:, :, i] = dv_i + dh = dh * w[:, :, i, :, None].exp() + d_kv_i + + # dw = q * dq_aux - k * dk_aux + dw = torch.zeros_like(w) + for i in range(T - 2, -1, -1): + dw[:, :, i] = dw[:, :, i+1] + dq_aux[:, :, i+1] * q[:, :, i+1] - dk_aux[:, :, i] * k[:, :, i] + + return dq, dk, dv, dw, du diff --git a/fla/ops/simple_gla/README.md b/fla/ops/simple_gla/README.md new file mode 100644 index 0000000000000000000000000000000000000000..72e710a3aa837e4d3543a62fb93de61a714cbe1d --- /dev/null +++ b/fla/ops/simple_gla/README.md @@ -0,0 +1,5 @@ +- Simple GLA + +Gating mechanism in https://arxiv.org/abs/2103.02143. Compared to GLA, the gating is head-wise instead of elementwise. As a result, we can adapt the RetNet kernel for training using matmul w/o numerical instability. It is faster than GLA but has less expressive power. I will use it as a baseline for the GLA. 
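The head-wise scalar gating described above can be written as a short sequential reference in plain PyTorch. This is an illustrative sketch, not the chunked Triton kernel in this diff: the function name, the `(B, H, T, ...)` shapes, the `1 / sqrt(K)` scaling, and the log-space decay convention are assumptions mirroring this repo's kernels.

```python
import torch


def simple_gla_recurrent_ref(q, k, v, g):
    """Sequential reference for S_t = exp(g_t) * S_{t-1} + k_t^T v_t,  o_t = q_t S_t.

    Assumed shapes: q, k of (B, H, T, K); v of (B, H, T, V);
    g of (B, H, T), a head-wise scalar decay stored in log space.
    """
    B, H, T, K = q.shape
    V = v.shape[-1]
    q = q * K ** -0.5                 # same 1/sqrt(K) scaling as the kernels
    S = q.new_zeros(B, H, K, V)       # running (K, V) state per head
    o = q.new_zeros(B, H, T, V)
    for t in range(T):
        gate = g[:, :, t].exp()[..., None, None]   # one scalar gate per head
        # decay the state, then accumulate the outer product k_t^T v_t
        S = gate * S + k[:, :, t, :, None] * v[:, :, t, None, :]
        # read out: o_t = (scaled) q_t applied to the state
        o[:, :, t] = (q[:, :, t, :, None] * S).sum(-2)
    return o
```

A step-by-step recurrence like this is the usual ground truth for checking the chunked kernel's outputs on small random inputs.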
+ +$S_{t+1} = g_{t+1} \odot S_{t} + K_{t+1} V_{t+1}^{\top}$ where $g$ is a scalar. \ No newline at end of file diff --git a/fla/ops/simple_gla/__init__.py b/fla/ops/simple_gla/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..b2f906615579749f3a22276a75b9b1b2390397f1 --- /dev/null +++ b/fla/ops/simple_gla/__init__.py @@ -0,0 +1,8 @@ +# -*- coding: utf-8 -*- + +from .chunk import chunk_simple_gla + +__all__ = [ + 'chunk_simple_gla' +] + diff --git a/fla/ops/simple_gla/chunk.py b/fla/ops/simple_gla/chunk.py new file mode 100644 index 0000000000000000000000000000000000000000..a5ca3d49c5d5b087439fb9cd434f2f02c4feedd9 --- /dev/null +++ b/fla/ops/simple_gla/chunk.py @@ -0,0 +1,415 @@ +# -*- coding: utf-8 -*- +# Copyright (c) 2023, Yu Zhang, Songlin Yang + +from typing import Tuple + +import torch +import triton +import triton.language as tl +from torch.cuda.amp import custom_bwd, custom_fwd + +from fla.utils import contiguous + + +@torch.jit.script +def normalize_output(q, k, o): + k = k.transpose(-2, -1) + k = k.cumsum(-1) + k = k.transpose(-2, -1) + z = (q * k).sum(-1, keepdim=True) + return o / (z + 1e-5) + + +@triton.jit +def chunk_simple_gla_fwd_kernel_h( + k, + v, + h, + g, + initial_state, # initial state of the chunk [B, H, D_head_K, D_head_V] + final_state, # final state of the chunk [B, H, D_head_K, D_head_V] + s_qk_h, + s_qk_t, + s_qk_d, + s_vo_h, + s_vo_t, + s_vo_d, + s_h_h, + s_h_t, + H: tl.constexpr, + T: tl.constexpr, + K: tl.constexpr, + V: tl.constexpr, + BT: tl.constexpr, + BK: tl.constexpr, + BV: tl.constexpr, + NT: tl.constexpr, + USE_INITIAL_STATE: tl.constexpr, + STORE_FINAL_STATE: tl.constexpr +): + i_k, i_v, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + + # [BK, BV] + b_h = tl.zeros([BK, BV], dtype=tl.float32) + + if USE_INITIAL_STATE: + p_h0 = tl.make_block_ptr(initial_state + i_bh * K * V, + (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + b_h = tl.load(p_h0, boundary_check=(0, 
1)).to(tl.float32) + + for i_t in range(NT): + p_k = tl.make_block_ptr( + k + i_bh * s_qk_h, (K, T), (s_qk_d, s_qk_t), (i_k * BK, i_t * BT), (BK, BT), (0, 1)) + p_v = tl.make_block_ptr( + v + i_bh * s_vo_h, (T, V), (s_vo_t, s_vo_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + p_h = tl.make_block_ptr(h + i_bh * s_h_h + i_t * K * V, + (K, V), (s_h_t, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + + tl.store(p_h, b_h.to(p_h.dtype.element_ty), boundary_check=(0, 1)) + # [BK, BT] + b_k = tl.load(p_k, boundary_check=(0, 1)) + # [BT, BV] + b_v = tl.load(p_v, boundary_check=(0, 1)) + # [BK, BV] + b_g_last = tl.load(g + i_bh * T + i_t * BT + BT - 1) + b_h *= tl.math.exp2(b_g_last) + b_g = tl.load(g + i_bh * T + i_t * BT + tl.arange(0, BT)) + b_h += tl.dot(b_k, (b_v * tl.math.exp2(b_g_last - b_g)[:, None]).to(b_k.dtype), allow_tf32=False) + + if STORE_FINAL_STATE: + p_ht = tl.make_block_ptr( + final_state + i_bh * K * V, (K, V), (V, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + tl.store(p_ht, b_h.to(p_ht.dtype.element_ty), boundary_check=(0, 1)) + + +@triton.jit +def chunk_simple_gla_fwd_kernel_o( + q, + k, + v, + h, + g, + o, + s_qk_h, + s_qk_t, + s_qk_d, + s_vo_h, + s_vo_t, + s_vo_d, + s_h_h, + s_h_t, + scale, + H: tl.constexpr, + T: tl.constexpr, + K: tl.constexpr, + V: tl.constexpr, + BT: tl.constexpr, + BK: tl.constexpr, + BV: tl.constexpr +): + i_v, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + + o_i = tl.arange(0, BT) + m_s = o_i[:, None] >= o_i[None, :] + + b_o = tl.zeros([BT, BV], dtype=tl.float32) + b_s = tl.zeros([BT, BT], dtype=tl.float32) + for i_k in range(tl.cdiv(K, BK)): + p_q = tl.make_block_ptr( + q + i_bh * s_qk_h, (T, K), (s_qk_t, s_qk_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + p_k = tl.make_block_ptr( + k + i_bh * s_qk_h, (K, T), (s_qk_d, s_qk_t), (i_k * BK, i_t * BT), (BK, BT), (0, 1)) + p_h = tl.make_block_ptr(h + i_bh * s_h_h + i_t * K * V, + (K, V), (s_h_t, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + + # [BT, BK] + b_q = 
tl.load(p_q, boundary_check=(0, 1)) + # [BK, BT] + b_k = tl.load(p_k, boundary_check=(0, 1)) + # [BT] + + # [BK, BV] + b_h = tl.load(p_h, boundary_check=(0, 1)) + b_o += tl.dot(b_q, b_h, allow_tf32=False) + b_s += tl.dot(b_q, b_k, allow_tf32=False) + + p_g = g + i_bh * T + i_t * BT + tl.arange(0, BT) + b_g = tl.load(p_g) + b_o = b_o * tl.math.exp2(b_g)[:, None] + b_s = b_s * tl.math.exp2(b_g[:, None] - b_g[None, :]) + b_s = tl.where(m_s, b_s, 0) + + p_v = tl.make_block_ptr(v + i_bh * s_vo_h, (T, V), + (s_vo_t, s_vo_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + b_v = tl.load(p_v, boundary_check=(0, 1)) + b_o = (b_o + tl.dot(b_s.to(b_v.dtype), b_v, allow_tf32=False)) * scale + p_o = tl.make_block_ptr(o + i_bh * s_vo_h, (T, V), + (s_vo_t, s_vo_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + tl.store(p_o, b_o.to(p_o.dtype.element_ty), boundary_check=(0, 1)) + + +@triton.jit +def chunk_simple_gla_bwd_kernel_dh( + q, + g, + do, + dh, + s_qk_h, + s_qk_t, + s_qk_d, + s_vo_h, + s_vo_t, + s_vo_d, + s_h_h, + s_h_t, + scale, + H: tl.constexpr, + T: tl.constexpr, + K: tl.constexpr, + V: tl.constexpr, + BT: tl.constexpr, + BK: tl.constexpr, + BV: tl.constexpr, + NT: tl.constexpr +): + i_k, i_v, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + + # [BK, BV] + b_dh = tl.zeros([BK, BV], dtype=tl.float32) + for i_t in range(NT - 1, -1, -1): + p_q = tl.make_block_ptr( + q + i_bh * s_qk_h, (K, T), (s_qk_d, s_qk_t), (i_k * BK, i_t * BT), (BK, BT), (0, 1)) + p_do = tl.make_block_ptr( + do + i_bh * s_vo_h, (T, V), (s_vo_t, s_vo_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + p_dh = tl.make_block_ptr(dh + i_bh * s_h_h + i_t * K * V, + (K, V), (s_h_t, 1), (i_k * BK, i_v * BV), (BK, BV), (1, 0)) + + tl.store(p_dh, b_dh.to(p_dh.dtype.element_ty), boundary_check=(0, 1)) + # [BK, BT] + b_q = tl.load(p_q, boundary_check=(0, 1)) + b_q = (b_q * scale * tl.math.exp2(tl.load(g + i_bh * T + + i_t * BT + tl.arange(0, BT)))[None, :]).to(b_q.dtype) + # [BT, V] + b_do = tl.load(p_do, 
boundary_check=(0, 1)) + # [BK, BV] + b_dh *= tl.math.exp2(tl.load(g + i_bh * T + i_t * BT + BT - 1)) + b_dh += tl.dot(b_q, b_do.to(b_q.dtype), allow_tf32=False) + + +@triton.jit +def chunk_simple_gla_bwd_kernel_dqkv( + q, + k, + v, + h, + g, + do, + dh, + dq, + dk, + dv, + s_qk_h, + s_qk_t, + s_qk_d, + s_vo_h, + s_vo_t, + s_vo_d, + s_h_h, + s_h_t, + scale, + B: tl.constexpr, + H: tl.constexpr, + T: tl.constexpr, + K: tl.constexpr, + V: tl.constexpr, + BT: tl.constexpr, + BK: tl.constexpr, + BV: tl.constexpr, + NT: tl.constexpr +): + i_k, i_t, i_bh = tl.program_id(0), tl.program_id(1), tl.program_id(2) + n_bh = tl.num_programs(2) + o_i = tl.arange(0, BT) + + p_q = tl.make_block_ptr(q + i_bh * s_qk_h, (K, T), + (s_qk_d, s_qk_t), (i_k * BK, i_t * BT), (BK, BT), (0, 1)) + p_k = tl.make_block_ptr(k + i_bh * s_qk_h, (T, K), + (s_qk_t, s_qk_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + + b_q = tl.load(p_q, boundary_check=(0, 1)) + b_k = tl.load(p_k, boundary_check=(0, 1)) + b_s = tl.dot(b_k, b_q, allow_tf32=False) + p_g = g + i_bh * T + i_t * BT + tl.arange(0, BT) + b_g = tl.load(p_g) + b_g_last = tl.load(g + i_bh * T + i_t * BT + BT - 1) + mask = tl.math.exp2(b_g[None, :] - b_g[:, None]) + mask = tl.where(o_i[:, None] <= o_i[None, :], mask * scale, 0) + b_s = b_s * mask + + b_dq = tl.zeros([BT, BK], dtype=tl.float32) + b_dk = tl.zeros([BT, BK], dtype=tl.float32) + b_ds = tl.zeros([BT, BT], dtype=tl.float32) + for i_v in range(tl.cdiv(V, BV)): + p_v = tl.make_block_ptr( + v + i_bh * s_vo_h, (T, V), (s_vo_t, s_vo_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + p_h = tl.make_block_ptr(h + i_bh * s_h_h, (V, NT * K), (1, s_h_t), + (i_v * BV, i_t * K + i_k * BK), (BV, BK), (0, 1)) + p_do = tl.make_block_ptr( + do + i_bh * s_vo_h, (T, V), (s_vo_t, s_vo_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + p_dh = tl.make_block_ptr(dh + i_bh * s_h_h, (NT * K, V), + (s_h_t, 1), (i_t * K + i_k * BK, i_v * BV), (BK, BV), (1, 0)) + p_dv = tl.make_block_ptr(dv + (i_k*n_bh+i_bh)*s_vo_h, (T, V), 
+ (s_vo_t, s_vo_d), (i_t * BT, i_v * BV), (BT, BV), (1, 0)) + # [BT, BV] + b_v = tl.load(p_v, boundary_check=(0, 1)) + b_do = tl.load(p_do, boundary_check=(0, 1)) + # [BV, BK] + b_h = tl.load(p_h, boundary_check=(0, 1)) + # [BK, BV] + b_dh = tl.load(p_dh, boundary_check=(0, 1)) + # [BT, BT] + b_ds += tl.dot(b_do, tl.trans(b_v), allow_tf32=False) + # [BT, BK] + b_dq += tl.dot(b_do, b_h, allow_tf32=False) * scale + b_dk += tl.dot(b_v, tl.trans(b_dh), allow_tf32=False) + # [BT, BV] + b_dv = tl.dot(b_k, b_dh, allow_tf32=False) * tl.math.exp2(-b_g + b_g_last)[:, None] + \ + tl.dot(b_s.to(b_q.dtype), b_do, allow_tf32=False) + tl.store(p_dv, b_dv.to(p_dv.dtype.element_ty), boundary_check=(0, 1)) + + b_dq = b_dq * tl.math.exp2(b_g)[:, None] + b_dk = b_dk * tl.math.exp2(-b_g + b_g_last)[:, None] + b_ds = b_ds * tl.trans(mask) + b_ds = b_ds.to(b_k.dtype) + # [BT, BK] + b_dq += tl.dot(b_ds, b_k, allow_tf32=False) + b_dk += tl.trans(tl.dot(b_q, b_ds, allow_tf32=False)) + p_dq = tl.make_block_ptr(dq + i_bh * s_qk_h, (T, K), + (s_qk_t, s_qk_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + p_dk = tl.make_block_ptr(dk + i_bh * s_qk_h, (T, K), + (s_qk_t, s_qk_d), (i_t * BT, i_k * BK), (BT, BK), (1, 0)) + tl.store(p_dq, b_dq.to(p_dq.dtype.element_ty), boundary_check=(0, 1)) + tl.store(p_dk, b_dk.to(p_dk.dtype.element_ty), boundary_check=(0, 1)) + + +class SimpleGLAFunction(torch.autograd.Function): + + @staticmethod + @custom_fwd + @contiguous + def forward(ctx, q, k, v, g, initial_state, output_final_state): + B, H, T, K, V = *q.shape, v.shape[-1] + BT = 64 + BK, BV = min(64, triton.next_power_of_2(K)), min( + 64, triton.next_power_of_2(V)) + NT, NK, NV = triton.cdiv(T, BT), triton.cdiv(K, BK), triton.cdiv(V, BV) + num_stages = 1 + num_warps = 4 if BK == 64 else 2 + scale = K ** -0.5 + + BT = 64 + assert T % BT == 0, 'sequence length must be divisible by BT' + g = g.reshape(B, H, -1, BT) + g = g.cumsum(-1) * 1.44269504 + g = g.reshape(B, H, -1) + + final_state = None + if 
output_final_state: + final_state = q.new_empty(B, H, K, V, dtype=torch.float32, requires_grad=False) + + h = q.new_empty(B, H, NT * K, V) + grid = (NK, NV, B * H) + chunk_simple_gla_fwd_kernel_h[grid]( + k, v, h, g, initial_state, final_state, + q.stride(1), q.stride(2), q.stride(3), + v.stride(1), v.stride(2), v.stride(3), + h.stride(1), h.stride(2), + H=H, T=T, K=K, V=V, BT=BT, BK=BK, BV=BV, NT=NT, + USE_INITIAL_STATE=initial_state is not None, + STORE_FINAL_STATE=output_final_state, + num_warps=num_warps, + num_stages=num_stages + ) + grid = (NV, NT, B * H) + o = torch.empty_like(v) + chunk_simple_gla_fwd_kernel_o[grid]( + q, k, v, h, g, o, + q.stride(1), q.stride(2), q.stride(3), + v.stride(1), v.stride(2), v.stride(3), + h.stride(1), h.stride(2), + scale, + H=H, T=T, K=K, V=V, BT=BT, BK=BK, BV=BV, + num_warps=num_warps, + num_stages=num_stages + ) + + ctx.save_for_backward(q, k, v, h, g) + return o.to(q.dtype), final_state + + @staticmethod + @custom_bwd + @contiguous + def backward(ctx, do, d_ht=None): + q, k, v, h, g = ctx.saved_tensors + + B, H, T, K, V = *q.shape, v.shape[-1] + BT = 64 + BK, BV = min(32 if q.dtype == torch.float32 else 64, triton.next_power_of_2(K)), min( + 32 if q.dtype == torch.float32 else 64, triton.next_power_of_2(V)) + NT, NK, NV = triton.cdiv(T, BT), triton.cdiv(K, BK), triton.cdiv(V, BV) + num_stages = 1 + num_warps = 4 if BK == 64 else 2 + scale = K ** -0.5 + + dh = q.new_empty(B, H, NT * K, V) + grid = (NK, NV, B * H) + chunk_simple_gla_bwd_kernel_dh[grid]( + q, g, do, dh, + q.stride(1), q.stride(2), q.stride(3), + v.stride(1), v.stride(2), v.stride(3), + dh.stride(1), dh.stride(2), + scale, + H=H, T=T, K=K, V=V, BT=BT, BK=BK, BV=BV, NT=NT, + num_warps=num_warps, + num_stages=num_stages + ) + grid = (NK, NT, B * H) + dq = torch.empty_like(q) + dk = torch.empty_like(k) + dv = v.new_empty(NK, *v.shape) + num_stages = 1 + num_warps = 4 if BK == 64 else 2 + chunk_simple_gla_bwd_kernel_dqkv[grid]( + q, k, v, h, g, do, dh, dq, dk, dv, 
+ q.stride(1), q.stride(2), q.stride(3), + v.stride(1), v.stride(2), v.stride(3), + dh.stride(1), dh.stride(2), + scale, + B=B, H=H, T=T, K=K, V=V, BT=BT, BK=BK, BV=BV, NT=NT, + num_warps=num_warps, + num_stages=num_stages + ) + dv = dv.sum(0) + dg = (dq * q - dk * k).sum(-1) + + def rev_cumsum(x): + cumsum_x = x.cumsum(-1) + rev_cumsum_x = cumsum_x[..., -1, None] - cumsum_x + return rev_cumsum_x + x + dg = rev_cumsum(dg) + return dq.to(q.dtype), dk.to(k.dtype), dv.to(v.dtype), dg.to(g.dtype), None, None + + +def chunk_simple_gla( + q: torch.Tensor, + k: torch.Tensor, + v: torch.Tensor, + g: torch.Tensor, # log decay + initial_state: torch.Tensor = None, + output_final_state: bool = False +) -> Tuple[torch.Tensor, torch.Tensor]: + if initial_state is not None: + initial_state = initial_state.detach() + g = g.float() + o, final_state = SimpleGLAFunction.apply(q, k, v, g, initial_state, output_final_state) + return o, final_state diff --git a/fla/ops/simple_gla/naive.py b/fla/ops/simple_gla/naive.py new file mode 100644 index 0000000000000000000000000000000000000000..f7f1e2288d59eceaa7884fab90406fa3882c25ce --- /dev/null +++ b/fla/ops/simple_gla/naive.py @@ -0,0 +1,52 @@ +# -*- coding: utf-8 -*- + +import torch +from einops import rearrange + + +def torch_simple_gla(q, k, v, g, chunk_size=64): + q = rearrange(q, 'b h (n c) d -> b h n c d', c = chunk_size) * (q.shape[-1] ** -0.5) + k = rearrange(k, 'b h (n c) d -> b h n c d', c = chunk_size) + v = rearrange(v, 'b h (n c) d -> b h n c d', c = chunk_size) + g = rearrange(g, 'b h (n c) -> b h n c', c = chunk_size) + g = g.cumsum(-1) + kv = k.transpose(-1, -2) @ (v * (-g + g[:, :, :, -1, None]).exp()[..., None]) + S = torch.zeros_like(kv) + + for i in range(1, g.shape[-2]): + S[:, :, i] = S[:, :, i-1].clone() * g[:, :, i-1, -1, None, None].exp() + kv[:, :, i-1] + + inter = (q * g[..., None].exp()) @ S + attn = q @ k.transpose(-1, -2) + attn = attn * (g[..., None] - g[..., None, :]).exp() + attn = 
attn.masked_fill(torch.triu(torch.ones(chunk_size, chunk_size, dtype=bool, device=q.device), diagonal=1), 0) + intra = attn @ v + o = inter + intra + return rearrange(o, 'b h n c d -> b h (n c) d') + + +def torch_simple_gla_recurrent(q, k, v, g, chunk_size=64): + # q = rearrange(q, 'b h (n c) d -> b h n c d', c = chunk_size) * (q.shape[-1] ** -0.5) + # k = rearrange(k, 'b h (n c) d -> b h n c d', c = chunk_size) + # v = rearrange(v, 'b h (n c) d -> b h n c d', c = chunk_size) + # g = rearrange(g, 'b h (n c) -> b h n c', c = chunk_size) + # g = g.cumsum(-1) + # kv = k.transpose(-1, -2) @ v + + B, H, T, DK = q.shape + q = q * (DK ** -0.5) + _, _, _, DV = v.shape + S = torch.zeros(B, H, DK, DV).to(q) + o = torch.zeros(B, H, T, DV).to(q) + for i in range(T): + gate = g[:, :, i].exp() + key = k[:, :, i] + value = v[:, :, i] + kv = key.unsqueeze(-1) * value.unsqueeze(-2) + S = S.clone() * gate.unsqueeze(-1).unsqueeze(-1) + kv + q_i = q[:, :, i, :] + o_i = (q_i.unsqueeze(-1) * S).sum(-2) + o[:, :, i] = o_i + + return o + diff --git a/fla/ops/utils.py b/fla/ops/utils.py new file mode 100644 index 0000000000000000000000000000000000000000..50202093d0aa5b9b0bcd360c02932bd9edd74460 --- /dev/null +++ b/fla/ops/utils.py @@ -0,0 +1,579 @@ +# -*- coding: utf-8 -*- +# Copyright (c) 2023-2024, Yu Zhang, Songlin Yang + +from typing import Optional + +import torch +import triton +import triton.language as tl + +from fla.utils import contiguous + + +@triton.autotune( + configs=[ + triton.Config({'BT': 16}, num_warps=2), + triton.Config({'BT': 16}, num_warps=4), + triton.Config({'BT': 16}, num_warps=8), + triton.Config({'BT': 32}, num_warps=2), + triton.Config({'BT': 32}, num_warps=4), + triton.Config({'BT': 32}, num_warps=8), + triton.Config({'BT': 64}, num_warps=2), + triton.Config({'BT': 64}, num_warps=4), + triton.Config({'BT': 64}, num_warps=8), + ], + key=['S'] +) +@triton.jit +def logcumsumexp_fwd_kernel( + s, + z, + s_s_h, + s_s_t, + s_s_d, + T: tl.constexpr, + S: tl.constexpr, 
+ BT: tl.constexpr +): + i_bh = tl.program_id(0) + o_i = tl.arange(0, BT) + m_s = tl.where(o_i[:, None] >= o_i[None, :], 1., 0.) + + b_mp = tl.full([S,], float('-inf'), dtype=tl.float32) + b_zp = tl.zeros([S,], dtype=tl.float32) + for i_t in range(tl.cdiv(T, BT)): + p_s = tl.make_block_ptr(s + i_bh * s_s_h, (T, S), (s_s_t, s_s_d), (i_t * BT, 0), (BT, S), (1, 0)) + p_z = tl.make_block_ptr(z + i_bh * s_s_h, (T, S), (s_s_t, s_s_d), (i_t * BT, 0), (BT, S), (1, 0)) + + # [BT, S] + b_s = tl.load(p_s, boundary_check=(0, 1)).to(tl.float32) + # [S,] + b_mc = tl.max(b_s, 0) + # workaround for compiler bugs + if i_t > 0: + b_mc = tl.maximum(b_mp, b_mc) + b_zp = b_zp * tl.exp(b_mp - b_mc) + # [BT, S] + b_s = tl.exp(b_s - b_mc) + b_z = tl.dot(m_s, b_s, allow_tf32=False) + b_zp + # [S,] + b_zc = tl.max(b_z, 0) + b_mp = b_mc + b_zp = b_zc + # [BT, BS] + # small eps to prevent underflows + b_z = tl.log(tl.where(b_z != 0, b_z, 1e-20)) + b_mc + tl.store(p_z, b_z.to(p_z.dtype.element_ty), boundary_check=(0, 1)) + + +@triton.autotune( + configs=[ + triton.Config({}, num_warps=2), + triton.Config({}, num_warps=4), + triton.Config({}, num_warps=8), + ], + key=['S'] +) +@triton.jit +def softmax_fwd_kernel( + s, + p, + s_s_h, + s_s_t, + s_s_d, + T: tl.constexpr, + S: tl.constexpr, + BT: tl.constexpr +): + i_t, i_bh = tl.program_id(0), tl.program_id(1) + + p_s = tl.make_block_ptr(s + i_bh * s_s_h, (T, S), (s_s_t, s_s_d), (i_t * BT, 0), (BT, S), (1, 0)) + p_p = tl.make_block_ptr(p + i_bh * s_s_h, (T, S), (s_s_t, s_s_d), (i_t * BT, 0), (BT, S), (1, 0)) + + # [BT, S] + b_s = tl.load(p_s, boundary_check=(0, 1)).to(tl.float32) + # [BT] + b_m = tl.max(b_s, 1) + + # [BT, BS] + b_s = tl.exp(b_s - b_m[:, None]) + b_z = tl.sum(b_s, 1) + b_p = tl.where(b_s != 0, b_s / b_z[:, None], 0.) 
+ tl.store(p_p, b_p.to(p_p.dtype.element_ty), boundary_check=(0, 1)) + + +@triton.autotune( + configs=[ + triton.Config({}, num_warps=2), + triton.Config({}, num_warps=4), + triton.Config({}, num_warps=8), + ], + key=['S'] +) +@triton.jit +def softmax_bwd_kernel( + p, + dp, + ds, + s_s_h, + s_s_t, + s_s_d, + T: tl.constexpr, + S: tl.constexpr, + BT: tl.constexpr +): + i_t, i_bh = tl.program_id(0), tl.program_id(1) + + p_p = tl.make_block_ptr(p + i_bh * s_s_h, (T, S), (s_s_t, s_s_d), (i_t * BT, 0), (BT, S), (1, 0)) + p_dp = tl.make_block_ptr(dp + i_bh * s_s_h, (T, S), (s_s_t, s_s_d), (i_t * BT, 0), (BT, S), (1, 0)) + p_ds = tl.make_block_ptr(ds + i_bh * s_s_h, (T, S), (s_s_t, s_s_d), (i_t * BT, 0), (BT, S), (1, 0)) + # [BT, BS] + b_p = tl.load(p_p, boundary_check=(0, 1)).to(tl.float32) + b_dp = tl.load(p_dp, boundary_check=(0, 1)).to(tl.float32) + # [BT,] + b_pp = tl.sum(b_p * b_dp, 1) + # [BT, BS] + b_ds = b_p * b_dp - b_p * b_pp[:, None] + tl.store(p_ds, b_ds.to(p_ds.dtype.element_ty), boundary_check=(0, 1)) + + +@triton.autotune( + configs=[ + triton.Config({'BS': 32}, num_warps=2), + triton.Config({'BS': 32}, num_warps=4), + triton.Config({'BS': 32}, num_warps=8), + triton.Config({'BS': 64}, num_warps=2), + triton.Config({'BS': 64}, num_warps=4), + triton.Config({'BS': 64}, num_warps=8), + triton.Config({'BS': 128}, num_warps=2), + triton.Config({'BS': 128}, num_warps=4), + triton.Config({'BS': 128}, num_warps=8), + ], + key=['S'] +) +@triton.jit +def recurrent_cumsum_fwd_kernel( + s, + z, + s_s_h, + s_s_t, + T: tl.constexpr, + S: tl.constexpr, + BS: tl.constexpr +): + i_s, i_bh = tl.program_id(0), tl.program_id(1) + + o_s = i_s * BS + tl.arange(0, BS) + mask = o_s < S + + b_z = tl.zeros([BS], dtype=tl.float32) + for i_t in range(0, T): + # [BS] + b_s = tl.load(s + i_bh * s_s_h + i_t * s_s_t + o_s, mask=mask, other=0).to(tl.float32) + b_z = b_z + b_s + + tl.store(z + i_bh * s_s_h + i_t * s_s_t + o_s, b_z.to(s.dtype.element_ty), mask=mask) + + +@triton.autotune( 
+ configs=[ + triton.Config({'BS': 32}, num_warps=2), + triton.Config({'BS': 32}, num_warps=4), + triton.Config({'BS': 32}, num_warps=8), + triton.Config({'BS': 64}, num_warps=2), + triton.Config({'BS': 64}, num_warps=4), + triton.Config({'BS': 64}, num_warps=8), + triton.Config({'BS': 128}, num_warps=2), + triton.Config({'BS': 128}, num_warps=4), + triton.Config({'BS': 128}, num_warps=8), + ], + key=['S'] +) +@triton.jit +def recurrent_cumsum_bwd_kernel( + ds, + dz, + s_s_h, + s_s_t, + T: tl.constexpr, + S: tl.constexpr, + BS: tl.constexpr +): + i_s, i_bh = tl.program_id(0), tl.program_id(1) + + o_s = i_s * BS + tl.arange(0, BS) + mask = o_s < S + + b_ds = tl.zeros([BS], dtype=tl.float32) + for i_t in range(T - 1, -1, -1): + # [BS] + b_dz = tl.load(dz + i_bh * s_s_h + i_t * s_s_t + o_s, mask=mask, other=0).to(tl.float32) + b_ds = b_ds + b_dz + + tl.store(ds + i_bh * s_s_h + i_t * s_s_t + o_s, b_ds.to(ds.dtype.element_ty), mask=mask) + + +@triton.autotune( + configs=[ + triton.Config({'BT': 16}, num_warps=2), + triton.Config({'BT': 16}, num_warps=4), + triton.Config({'BT': 16}, num_warps=8), + triton.Config({'BT': 32}, num_warps=2), + triton.Config({'BT': 32}, num_warps=4), + triton.Config({'BT': 32}, num_warps=8), + triton.Config({'BT': 64}, num_warps=2), + triton.Config({'BT': 64}, num_warps=4), + triton.Config({'BT': 64}, num_warps=8), + ], + key=['S'] +) +@triton.jit +def chunk_cumsum_fwd_kernel( + s, + z, + s_s_h, + s_s_t, + s_s_d, + T: tl.constexpr, + S: tl.constexpr, + BT: tl.constexpr, + BS: tl.constexpr +): + i_s, i_bh = tl.program_id(0), tl.program_id(1) + o_i = tl.arange(0, BT) + m_s = tl.where(o_i[:, None] >= o_i[None, :], 1., 0.) 
+ + b_z = tl.zeros([BS], dtype=tl.float32) + for i_t in range(tl.cdiv(T, BT)): + p_s = tl.make_block_ptr(s + i_bh * s_s_h, (T, S), (s_s_t, s_s_d), (i_t * BT, i_s * BS), (BT, BS), (1, 0)) + p_z = tl.make_block_ptr(z + i_bh * s_s_h, (T, S), (s_s_t, s_s_d), (i_t * BT, i_s * BS), (BT, BS), (1, 0)) + # [BT, BS] + b_s = tl.load(p_s, boundary_check=(0, 1)).to(tl.float32) + b_c = b_z[None, :] + tl.dot(m_s, b_s, allow_tf32=False) + tl.store(p_z, b_c.to(p_z.dtype.element_ty), boundary_check=(0, 1)) + + if i_t >= 0: + b_z += tl.sum(b_s, 0) + + +@triton.autotune( + configs=[ + triton.Config({'BT': 16}, num_warps=2), + triton.Config({'BT': 16}, num_warps=4), + triton.Config({'BT': 16}, num_warps=8), + triton.Config({'BT': 32}, num_warps=2), + triton.Config({'BT': 32}, num_warps=4), + triton.Config({'BT': 32}, num_warps=8), + triton.Config({'BT': 64}, num_warps=2), + triton.Config({'BT': 64}, num_warps=4), + triton.Config({'BT': 64}, num_warps=8), + ], + key=['S'] +) +@triton.jit +def chunk_cumsum_bwd_kernel( + ds, + dz, + s_s_h, + s_s_t, + s_s_d, + T: tl.constexpr, + S: tl.constexpr, + BT: tl.constexpr, + BS: tl.constexpr +): + i_s, i_bh = tl.program_id(0), tl.program_id(1) + o_i = tl.arange(0, BT) + m_s = tl.where(o_i[:, None] <= o_i[None, :], 1., 0.) 
+ + b_ds = tl.zeros([BS], dtype=tl.float32) + for i_t in range(tl.cdiv(T, BT) - 1, -1, -1): + p_ds = tl.make_block_ptr(ds + i_bh * s_s_h, (T, S), (s_s_t, s_s_d), (i_t * BT, i_s * BS), (BT, BS), (1, 0)) + p_dz = tl.make_block_ptr(dz + i_bh * s_s_h, (T, S), (s_s_t, s_s_d), (i_t * BT, i_s * BS), (BT, BS), (1, 0)) + # [BT, BS] + b_dz = tl.load(p_dz, boundary_check=(0, 1)).to(tl.float32) + b_c = b_ds[None, :] + tl.dot(m_s, b_dz, allow_tf32=False) + tl.store(p_ds, b_c.to(p_ds.dtype.element_ty), boundary_check=(0, 1)) + + if i_t >= 0: + b_ds += tl.sum(b_dz, 0) + + +@contiguous +def chunk_cumsum_fwd( + s: torch.Tensor, + dtype: Optional[torch.dtype] = None, +) -> torch.Tensor: + B, H, T, S = s.shape + BS = 32 + + dtype = dtype or s.dtype + grid = (triton.cdiv(S, BS), B * H) + z = torch.empty_like(s, dtype=dtype) + chunk_cumsum_fwd_kernel[grid]( + s, z, + s.stride(1), s.stride(2), s.stride(3), + T=T, S=S, BS=BS + ) + return z + + +@contiguous +def chunk_cumsum_bwd( + dz: torch.Tensor, + dtype: Optional[torch.dtype] = None, +) -> torch.Tensor: + B, H, T, S = dz.shape + BS = 32 + + dtype = dtype or dz.dtype + grid = (triton.cdiv(S, BS), B * H) + ds = torch.empty_like(dz, dtype=dtype) + chunk_cumsum_bwd_kernel[grid]( + ds, dz, + ds.stride(1), ds.stride(2), ds.stride(3), + T=T, S=S, BS=BS + ) + return ds + + +class CumsumFunction(torch.autograd.Function): + + @staticmethod + def forward(ctx, s, dtype): + z = chunk_cumsum_fwd(s, dtype) + ctx.dtype = dtype + return z + + @staticmethod + def backward(ctx, dz): + ds = chunk_cumsum_bwd(dz, ctx.dtype) + return ds, None + + +def cumsum( + s: torch.Tensor, + dtype: Optional[torch.dtype] = None, +) -> torch.Tensor: + return CumsumFunction.apply(s, dtype) + + +@triton.autotune( + configs=[ + triton.Config({'BS': 32}, num_warps=2), + triton.Config({'BS': 32}, num_warps=4), + triton.Config({'BS': 32}, num_warps=8), + triton.Config({'BS': 64}, num_warps=2), + triton.Config({'BS': 64}, num_warps=4), + triton.Config({'BS': 64}, num_warps=8), 
+ triton.Config({'BS': 128}, num_warps=2), + triton.Config({'BS': 128}, num_warps=4), + triton.Config({'BS': 128}, num_warps=8), + ], + key=['S'] +) +@triton.jit +def recurrent_reversed_cumsum_fwd_kernel( + s, + z, + s_s_h, + s_s_t, + T: tl.constexpr, + S: tl.constexpr, + BS: tl.constexpr +): + i_s, i_bh = tl.program_id(0), tl.program_id(1) + + o_s = i_s * BS + tl.arange(0, BS) + mask = o_s < S + + b_z = tl.zeros([BS], dtype=tl.float32) + for i_t in range(T - 1, -1, -1): + # [BS] + b_s = tl.load(s + i_bh * s_s_h + i_t * s_s_t + o_s, mask=mask, other=0).to(tl.float32) + b_z = b_z + b_s + + tl.store(z + i_bh * s_s_h + i_t * s_s_t + o_s, b_z.to(s.dtype.element_ty), mask=mask) + + +@triton.autotune( + configs=[ + triton.Config({'BS': 32}, num_warps=2), + triton.Config({'BS': 32}, num_warps=4), + triton.Config({'BS': 32}, num_warps=8), + triton.Config({'BS': 64}, num_warps=2), + triton.Config({'BS': 64}, num_warps=4), + triton.Config({'BS': 64}, num_warps=8), + triton.Config({'BS': 128}, num_warps=2), + triton.Config({'BS': 128}, num_warps=4), + triton.Config({'BS': 128}, num_warps=8), + ], + key=['S'] +) +@triton.jit +def recurrent_reversed_cumsum_bwd_kernel( + ds, + dz, + s_s_h, + s_s_t, + T: tl.constexpr, + S: tl.constexpr, + BS: tl.constexpr +): + i_s, i_bh = tl.program_id(0), tl.program_id(1) + + o_s = i_s * BS + tl.arange(0, BS) + mask = o_s < S + + b_ds = tl.zeros([BS], dtype=tl.float32) + for i_t in range(0, T): + # [BS] + b_dz = tl.load(dz + i_bh * s_s_h + i_t * s_s_t + o_s, mask=mask, other=0).to(tl.float32) + b_ds = b_ds + b_dz + + tl.store(ds + i_bh * s_s_h + i_t * s_s_t + o_s, b_ds.to(ds.dtype.element_ty), mask=mask) + + +@triton.autotune( + configs=[ + triton.Config({'BT': 16}, num_warps=2), + triton.Config({'BT': 16}, num_warps=4), + triton.Config({'BT': 16}, num_warps=8), + triton.Config({'BT': 32}, num_warps=2), + triton.Config({'BT': 32}, num_warps=4), + triton.Config({'BT': 32}, num_warps=8), + triton.Config({'BT': 64}, num_warps=2), + 
triton.Config({'BT': 64}, num_warps=4), + triton.Config({'BT': 64}, num_warps=8), + ], + key=['S'] +) +@triton.jit +def chunk_reversed_cumsum_fwd_kernel( + s, + z, + s_s_h, + s_s_t, + s_s_d, + T: tl.constexpr, + S: tl.constexpr, + BT: tl.constexpr, + BS: tl.constexpr +): + i_s, i_bh = tl.program_id(0), tl.program_id(1) + o_i = tl.arange(0, BT) + m_s = tl.where(o_i[:, None] <= o_i[None, :], 1., 0.) + + b_z = tl.zeros([BS], dtype=tl.float32) + for i_t in range(tl.cdiv(T, BT) - 1, -1, -1): + p_s = tl.make_block_ptr(s + i_bh * s_s_h, (T, S), (s_s_t, s_s_d), (i_t * BT, i_s * BS), (BT, BS), (1, 0)) + p_z = tl.make_block_ptr(z + i_bh * s_s_h, (T, S), (s_s_t, s_s_d), (i_t * BT, i_s * BS), (BT, BS), (1, 0)) + # [BT, BS] + b_s = tl.load(p_s, boundary_check=(0, 1)).to(tl.float32) + b_c = b_z[None, :] + tl.dot(m_s, b_s, allow_tf32=False) + tl.store(p_z, b_c.to(p_z.dtype.element_ty), boundary_check=(0, 1)) + + if i_t >= 0: + b_z += tl.sum(b_s, 0) + + +@triton.autotune( + configs=[ + triton.Config({'BT': 16}, num_warps=2), + triton.Config({'BT': 16}, num_warps=4), + triton.Config({'BT': 16}, num_warps=8), + triton.Config({'BT': 32}, num_warps=2), + triton.Config({'BT': 32}, num_warps=4), + triton.Config({'BT': 32}, num_warps=8), + triton.Config({'BT': 64}, num_warps=2), + triton.Config({'BT': 64}, num_warps=4), + triton.Config({'BT': 64}, num_warps=8), + ], + key=['S'] +) +@triton.jit +def chunk_reversed_cumsum_bwd_kernel( + ds, + dz, + s_s_h, + s_s_t, + s_s_d, + T: tl.constexpr, + S: tl.constexpr, + BT: tl.constexpr, + BS: tl.constexpr +): + i_s, i_bh = tl.program_id(0), tl.program_id(1) + o_i = tl.arange(0, BT) + m_s = tl.where(o_i[:, None] >= o_i[None, :], 1., 0.) 
+ + b_ds = tl.zeros([BS], dtype=tl.float32) + for i_t in range(tl.cdiv(T, BT)): + p_ds = tl.make_block_ptr(ds + i_bh * s_s_h, (T, S), (s_s_t, s_s_d), (i_t * BT, i_s * BS), (BT, BS), (1, 0)) + p_dz = tl.make_block_ptr(dz + i_bh * s_s_h, (T, S), (s_s_t, s_s_d), (i_t * BT, i_s * BS), (BT, BS), (1, 0)) + # [BT, BS] + b_dz = tl.load(p_dz, boundary_check=(0, 1)).to(tl.float32) + b_c = b_ds[None, :] + tl.dot(m_s, b_dz, allow_tf32=False) + tl.store(p_ds, b_c.to(p_ds.dtype.element_ty), boundary_check=(0, 1)) + + if i_t >= 0: + b_ds += tl.sum(b_dz, 0) + + +@contiguous +def chunk_reversed_cumsum_fwd( + s: torch.Tensor, + dtype: Optional[torch.dtype] = None, +) -> torch.Tensor: + B, H, T, S = s.shape + BS = 32 + + dtype = dtype or s.dtype + grid = (triton.cdiv(S, BS), B * H) + z = torch.empty_like(s, dtype=dtype) + chunk_reversed_cumsum_fwd_kernel[grid]( + s, z, + s.stride(1), s.stride(2), s.stride(3), + T=T, S=S, BS=BS + ) + return z + + +@contiguous +def chunk_reversed_cumsum_bwd( + dz: torch.Tensor, + dtype: Optional[torch.dtype] = None, +) -> torch.Tensor: + B, H, T, S = dz.shape + BS = 32 + + dtype = dtype or dz.dtype + grid = (triton.cdiv(S, BS), B * H) + ds = torch.empty_like(dz, dtype=dtype) + chunk_reversed_cumsum_bwd_kernel[grid]( + ds, dz, + ds.stride(1), ds.stride(2), ds.stride(3), + T=T, S=S, BS=BS + ) + return ds + + +class ReversedCumsumFunction(torch.autograd.Function): + + @staticmethod + def forward(ctx, s, dtype): + z = chunk_reversed_cumsum_fwd(s, dtype) + ctx.dtype = dtype + return z + + @staticmethod + def backward(ctx, dz): + ds = chunk_reversed_cumsum_bwd(dz, ctx.dtype) + return ds, None + + +def reversed_cumsum( + s: torch.Tensor, + dtype: Optional[torch.dtype] = None, +) -> torch.Tensor: + # fixed: previously dispatched to CumsumFunction, silently computing a forward cumsum + return ReversedCumsumFunction.apply(s, dtype) diff --git a/fla/utils.py b/fla/utils.py new file mode 100644 index 0000000000000000000000000000000000000000..7a9a04757ac6d857545d4683ac5bfb622e0826a0 --- /dev/null +++ b/fla/utils.py @@ -0,0 +1,33 @@ +# -*- coding: utf-8 -*- + 
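Both kernel families above compute an inclusive running sum over the time dimension of a `[B, H, T, S]` tensor, forward and reversed, and the backward pass of one is exactly the other (which is why `CumsumFunction.backward` launches the reversed-direction kernel). A pure-PyTorch reference, useful for sanity-checking the Triton kernels on small inputs (shapes below are illustrative):

```python
import torch

def cumsum_ref(s: torch.Tensor) -> torch.Tensor:
    # inclusive prefix sum over the time axis (dim=2 of [B, H, T, S])
    return torch.cumsum(s, dim=2)

def reversed_cumsum_ref(s: torch.Tensor) -> torch.Tensor:
    # inclusive suffix sum: flip time, prefix-sum, flip back
    return torch.flip(torch.cumsum(torch.flip(s, dims=[2]), dim=2), dims=[2])

s = torch.arange(8.0).view(1, 1, 4, 2)
x = s.clone().requires_grad_(True)
cumsum_ref(x).backward(torch.ones_like(x))
# the gradient of a forward cumsum is the reversed cumsum of the upstream gradient
assert torch.equal(x.grad, reversed_cumsum_ref(torch.ones_like(s)))
```

On GPU, the Triton versions can be checked against these references with `torch.allclose` up to floating-point tolerance.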
+import functools + +import torch + + +def contiguous(fn): + @functools.wraps(fn) + def wrapper(ctx, *args, **kwargs): + return fn(ctx, + *(i if not isinstance(i, torch.Tensor) else i.contiguous() for i in args), + **{k: (v if not isinstance(v, torch.Tensor) else v.contiguous()) for k, v in kwargs.items()}) + return wrapper + + +def require_version(version, hint): + def decorator(fn): + @functools.wraps(fn) + def wrapper(ctx, *args, **kwargs): + from transformers.utils.versions import require_version + require_version(version, hint) + return fn(ctx, + *(i if not isinstance(i, torch.Tensor) else i.contiguous() for i in args), + **{k: (v if not isinstance(v, torch.Tensor) else v.contiguous()) for k, v in kwargs.items()}) + return wrapper + return decorator + + +def checkpoint(func): + def wrapper(*args, **kwargs): + return torch.utils.checkpoint.checkpoint(func, *args, **kwargs) + return wrapper diff --git a/merge/merge.py b/merge/merge.py new file mode 100644 index 0000000000000000000000000000000000000000..c4c35b8a763a44bdc330b890206090f1ec7d0504 --- /dev/null +++ b/merge/merge.py @@ -0,0 +1,98 @@ +from collections import OrderedDict +import os +import sys +from typing import Dict +import typing +import torch +import bitsandbytes as bnb +from argparse import ArgumentParser + +parser = ArgumentParser() +parser.add_argument("--type", default="pissa", type=str) +parser.add_argument("--base_model", default="", type=str) +parser.add_argument("--lora_init", default="none", type=str) +parser.add_argument("--lora_checkpoint", default="", type=str) +parser.add_argument("--output", default="", type=str) +parser.add_argument("--quant", default="none", type=str) +parser.add_argument("--device", default="cuda", type=str) +parser.add_argument("--lora_alpha", default=16, type=int) +args = parser.parse_args() +device= args.device +base_model = args.base_model +init_lora= args.lora_init +lora= args.lora_checkpoint +output= args.output +quant= args.quant +lora_alpha = args.lora_alpha 
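The merge scripts in this directory all fold low-rank factors back into the base weights with the standard LoRA update `W' = W + (lora_alpha / r) * B @ A`; the PiSSA variant additionally subtracts the initial product `B0 @ A0` first. A minimal sketch of the plain-LoRA update with made-up shapes (function name and shapes here are illustrative, not the scripts' CLI):

```python
import torch

def merge_lora_weight(w: torch.Tensor, lora_A: torch.Tensor,
                      lora_B: torch.Tensor, lora_alpha: float) -> torch.Tensor:
    # the rank r is the inner dimension shared by B [out, r] and A [r, in]
    lora_r = lora_B.shape[1]
    return w + lora_B @ lora_A * (lora_alpha / lora_r)

w = torch.zeros(4, 3)
A = torch.ones(2, 3)  # [r, in]
B = torch.ones(4, 2)  # [out, r]
merged = merge_lora_weight(w, A, B, lora_alpha=16)
# each entry becomes (16 / 2) * sum over rank = 16
```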
+ +with torch.no_grad(): + w: Dict[str, torch.Tensor] = torch.load(base_model, map_location='cpu') + # merge LoRA-only slim checkpoint into the main weights + w_lora: Dict[str, torch.Tensor] = torch.load(lora, map_location='cpu') + + if args.type=='pissa': + w_init_lora: Dict[str, torch.Tensor] = torch.load(init_lora, map_location='cpu') + for k in w_lora.keys(): + w[k] = w_lora[k] + output_w: typing.OrderedDict[str, torch.Tensor] = OrderedDict() + # merge LoRA weights + keys = list(w.keys()) + for k in keys: + if k.endswith('.weight'): + prefix = k[:-len('.weight')] + lora_A = prefix + '.lora_A' + lora_B = prefix + '.lora_B' + init_lora_A = prefix + '.init_lora_A' + init_lora_B = prefix + '.init_lora_B' + if lora_A in keys: + assert lora_B in keys + print(f'merging {lora_A} and {lora_B} into {k}') + assert w[lora_B].shape[1] == w[lora_A].shape[0] + lora_r = w[lora_B].shape[1] + w[k] = w[k].to(device=device) + w[lora_A] = w[lora_A].to(device=device) + w[lora_B] = w[lora_B].to(device=device) + + if args.type=='pissa': + w_init_lora[init_lora_A] = w_init_lora[init_lora_A].to(device=device) + w_init_lora[init_lora_B] = w_init_lora[init_lora_B].to(device=device) + if quant=='4bit': + qw,qs = bnb.functional.quantize_4bit(w[k]- w_init_lora[init_lora_B] @ w_init_lora[init_lora_A]) + w[k] = (bnb.functional.dequantize_4bit(qw,quant_state=qs)).to(dtype=torch.bfloat16) + elif quant == 'nf4': + qw,qs = bnb.functional.quantize_nf4(w[k]- w_init_lora[init_lora_B] @ w_init_lora[init_lora_A]) + w[k] = (bnb.functional.dequantize_nf4(qw,quant_state=qs)).to(dtype=torch.bfloat16) + elif quant == 'fp4': + qw,qs = bnb.functional.quantize_fp4(w[k]- w_init_lora[init_lora_B] @ w_init_lora[init_lora_A]) + w[k] = (bnb.functional.dequantize_fp4(qw,quant_state=qs)).to(dtype=torch.bfloat16) + elif quant == 'int8': + qw,qs = bnb.functional.quantize(w[k]- w_init_lora[init_lora_B] @ w_init_lora[init_lora_A]) + w[k] = (bnb.functional.dequantize(qw,state=qs)).to(dtype=torch.bfloat16) + else: + w[k] = 
(w[k]- w_init_lora[init_lora_B] @ w_init_lora[init_lora_A]).to(dtype=torch.bfloat16) + w[k] += w[lora_B] @ w[lora_A] + else: + if quant=='4bit': + qw,qs = bnb.functional.quantize_4bit(w[k]) + w[k] = (bnb.functional.dequantize_4bit(qw,quant_state=qs)).to(dtype=torch.bfloat16) + elif quant=='nf4': + qw,qs = bnb.functional.quantize_nf4(w[k]) + w[k] = (bnb.functional.dequantize_nf4(qw,quant_state=qs)).to(dtype=torch.bfloat16) + elif quant=='fp4': + qw,qs = bnb.functional.quantize_fp4(w[k]) + w[k] = (bnb.functional.dequantize_fp4(qw,quant_state=qs)).to(dtype=torch.bfloat16) + elif quant=='int8': + qw,qs = bnb.functional.quantize(w[k]) + w[k] = (bnb.functional.dequantize(qw,state=qs)).to(dtype=torch.bfloat16) + w[k] += w[lora_B] @ w[lora_A] * (lora_alpha / lora_r) + output_w[k] = w[k].to(device='cpu', copy=True) + del w[k] + del w[lora_A] + del w[lora_B] + continue + + if 'lora' not in k: + print(f'retaining {k}') + output_w[k] = w[k].clone() + del w[k] + torch.save(output_w, output) \ No newline at end of file diff --git a/merge/merge_lora.py b/merge/merge_lora.py new file mode 100644 index 0000000000000000000000000000000000000000..ee38fdfd259fb1fa629a9d26d67e7d5c878e2f0a --- /dev/null +++ b/merge/merge_lora.py @@ -0,0 +1,52 @@ +from collections import OrderedDict +import os +import sys +from typing import Dict +import typing +import torch + +if '-h' in sys.argv or '--help' in sys.argv: + print(f'Usage: python3 {sys.argv[0]} [--use-gpu] <lora_alpha> <base_model.pth> <lora_checkpoint.pth> <output.pth>') + +if sys.argv[1] == '--use-gpu': + device = 'cuda' + lora_alpha, base_model, lora, output = float(sys.argv[2]), sys.argv[3], sys.argv[4], sys.argv[5] +else: + device = 'cpu' + lora_alpha, base_model, lora, output = float(sys.argv[1]), sys.argv[2], sys.argv[3], sys.argv[4] + + +with torch.no_grad(): + w: Dict[str, torch.Tensor] = torch.load(base_model, map_location='cpu') + # merge LoRA-only slim checkpoint into the main weights + w_lora: Dict[str, torch.Tensor] = torch.load(lora, map_location='cpu') + for k in w_lora.keys(): + w[k] = 
w_lora[k] + output_w: typing.OrderedDict[str, torch.Tensor] = OrderedDict() + # merge LoRA weights + keys = list(w.keys()) + for k in keys: + if k.endswith('.weight'): + prefix = k[:-len('.weight')] + lora_A = prefix + '.lora_A' + lora_B = prefix + '.lora_B' + if lora_A in keys: + assert lora_B in keys + print(f'merging {lora_A} and {lora_B} into {k}') + assert w[lora_B].shape[1] == w[lora_A].shape[0] + lora_r = w[lora_B].shape[1] + w[k] = w[k].to(device=device) + w[lora_A] = w[lora_A].to(device=device) + w[lora_B] = w[lora_B].to(device=device) + w[k] += w[lora_B] @ w[lora_A] * (lora_alpha / lora_r) + output_w[k] = w[k].to(device='cpu', copy=True) + del w[k] + del w[lora_A] + del w[lora_B] + continue + + if 'lora' not in k: + print(f'retaining {k}') + output_w[k] = w[k].clone() + del w[k] + torch.save(output_w, output) diff --git a/merge/merge_pissa.py b/merge/merge_pissa.py new file mode 100644 index 0000000000000000000000000000000000000000..001358d56088cab3f3cf94fb4577ce3cbd31a8f3 --- /dev/null +++ b/merge/merge_pissa.py @@ -0,0 +1,58 @@ +from collections import OrderedDict +import os +import sys +from typing import Dict +import typing +import torch + +if '-h' in sys.argv or '--help' in sys.argv: + print(f'Usage: python3 {sys.argv[0]} [--use-gpu] <base_model.pth> <init_lora.pth> <lora_checkpoint.pth> <output.pth>') + +if sys.argv[1] == '--use-gpu': + device = 'cuda' + base_model, init_lora, lora, output = sys.argv[2], sys.argv[3], sys.argv[4], sys.argv[5] +else: + device = 'cpu' + base_model, init_lora, lora, output = sys.argv[1], sys.argv[2], sys.argv[3], sys.argv[4] + + +with torch.no_grad(): + w: Dict[str, torch.Tensor] = torch.load(base_model, map_location='cpu') + # merge LoRA-only slim checkpoint into the main weights + w_lora: Dict[str, torch.Tensor] = torch.load(lora, map_location='cpu') + w_init_lora: Dict[str, torch.Tensor] = torch.load(init_lora, map_location='cpu') + for k in w_lora.keys(): + w[k] = w_lora[k] + output_w: typing.OrderedDict[str, torch.Tensor] = OrderedDict() + # merge LoRA weights + keys = 
list(w.keys()) + for k in keys: + if k.endswith('.weight'): + prefix = k[:-len('.weight')] + lora_A = prefix + '.lora_A' + lora_B = prefix + '.lora_B' + init_lora_A = prefix + '.init_lora_A' + init_lora_B = prefix + '.init_lora_B' + if lora_A in keys: + assert lora_B in keys + print(f'merging {lora_A} and {lora_B} into {k}') + assert w[lora_B].shape[1] == w[lora_A].shape[0] + lora_r = w[lora_B].shape[1] + w[k] = w[k].to(device=device) + w[lora_A] = w[lora_A].to(device=device) + w[lora_B] = w[lora_B].to(device=device) + w_init_lora[init_lora_A] = w_init_lora[init_lora_A].to(device=device) + w_init_lora[init_lora_B] = w_init_lora[init_lora_B].to(device=device) + w[k] = (w[k]- w_init_lora[init_lora_B] @ w_init_lora[init_lora_A]).to(dtype=torch.bfloat16) + w[k] += w[lora_B] @ w[lora_A] + output_w[k] = w[k].to(device='cpu', copy=True) + del w[k] + del w[lora_A] + del w[lora_B] + continue + + if 'lora' not in k: + print(f'retaining {k}') + output_w[k] = w[k].clone() + del w[k] + torch.save(output_w, output) \ No newline at end of file diff --git a/merge/merge_state.py b/merge/merge_state.py new file mode 100644 index 0000000000000000000000000000000000000000..51e4767e5aa323b11bd9ad43ec09b12fe789374b --- /dev/null +++ b/merge/merge_state.py @@ -0,0 +1,36 @@ +from collections import OrderedDict +import os +import sys +from typing import Dict +import typing +import torch +import bitsandbytes as bnb +from argparse import ArgumentParser + +parser = ArgumentParser() +parser.add_argument("--base_model", default="", type=str) +parser.add_argument("--state_checkpoint", default="", type=str) +parser.add_argument("--output", default="", type=str) +# parser.add_argument("--quant", default="none", type=str) +parser.add_argument("--device", default="cuda", type=str) +# parser.add_argument("--lora_alpha", default=16, type=int) +args = parser.parse_args() +device= args.device +base_model = args.base_model +state= args.state_checkpoint +output= args.output + + +with torch.no_grad(): + w: 
Dict[str, torch.Tensor] = torch.load(base_model, map_location='cpu') + # merge LoRA-only slim checkpoint into the main weights + w_state: Dict[str, torch.Tensor] = torch.load(state, map_location='cpu') + + for k in w_state.keys(): + print(k) + w[k] = w_state[k] + # merge LoRA weights + for k in w.keys(): + print(k) + + torch.save(w, output) \ No newline at end of file diff --git a/output/model output dir.txt b/output/model output dir.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/requirements.txt b/requirements.txt new file mode 100644 index 0000000000000000000000000000000000000000..28d1bb04d750eb4c1ecc7ffe428068db8fcdce24 --- /dev/null +++ b/requirements.txt @@ -0,0 +1,10 @@ +pytorch-lightning==1.9.5 +bitsandbytes +deepspeed +einops +triton==2.2.0 +transformers[torch] +datasets +evaluate +jiwer +tqdm \ No newline at end of file diff --git a/src/__init__.py b/src/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/src/asr.py b/src/asr.py new file mode 100644 index 0000000000000000000000000000000000000000..93f1f444ea2a20f30824ac0babd785dc6f8a3e1f --- /dev/null +++ b/src/asr.py @@ -0,0 +1,467 @@ +""" +The main body of the ASR model, + +User: +Model: +""" + +import torch +import torch.nn as nn +from transformers import LlamaForCausalLM, LlamaTokenizer +from typing import List + +try: + from .speech_encoder import SpeechEncoder +except ImportError: + from speech_encoder import SpeechEncoder + + +from transformers import AutoModelForCausalLM, AutoTokenizer +from .model import RWKV +# from .lora import LinearWithLoRA +import pytorch_lightning as pl +from torch.nn import functional as F +from pytorch_lightning.strategies import DeepSpeedStrategy +import os, math, gc, importlib +if importlib.util.find_spec('deepspeed'): + import deepspeed + from deepspeed.ops.adam import DeepSpeedCPUAdam, FusedAdam +import time + 
+class L2Wrap(torch.autograd.Function): + @staticmethod + def forward(ctx, loss, y): + ctx.save_for_backward(y) + return loss + + @staticmethod + def backward(ctx, grad_output): + y = ctx.saved_tensors[0] + # to encourage the logits to be close to 0 + factor = 1e-4 / (y.shape[0] * y.shape[1]) + maxx, ids = torch.max(y, -1, keepdim=True) + gy = torch.zeros_like(y) + gy.scatter_(-1, ids, maxx * factor) + return (grad_output, gy) + +class SLAM_ASR(pl.LightningModule): + def __init__( + self, + args, + speech_encoder_model_id,#facebook/hubert-base-ls960 + language_model, + downsample_K=5, + hidden_dim=2048, + train_mode="adapter", + device="cuda", + token = None, # supply a Hugging Face access token at runtime; never hardcode secrets in source + ): + assert train_mode in ["adapter", "full"] + super().__init__() + self.args = args + self._device = device + + self.language_tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-6-world-1b6",trust_remote_code=True) + ######################################## use the RWKV-PEFT model structure instead + + self.language_model = language_model + ######################################### + + + language_project_dim = args.n_embd + #3B language_project_dim = 2560 + #7B language_project_dim = 4096 + + + self.speech_encoder = SpeechEncoder( + speech_encoder_model_id, + language_project_dim, + downsample_K=downsample_K, + hidden_dim=hidden_dim, + train_mode=train_mode, + device=device, + ).to(self._device) + + self.set_gradient(train_mode,'state') + + def gradient_checkpointing_enable(self, **kwargs): + self.language_model.gradient_checkpointing_enable(**kwargs) + + def set_gradient(self, train_mode,tuning): + assert train_mode in ["adapter", "full"] + + # call set_gradient for speech encoder + self.speech_encoder.set_gradient(train_mode) + + print("Parameters that require grad:") + + for name, param in self.named_parameters(): + if param.requires_grad: + print(f" {name}: {param.shape}") + + + def remove_padding(self, x, mask): + # strip the padded positions of speech_output according to the mask + x_no_padding = [] + # for each sample and its corresponding mask + for 
x_i, mask_i in zip(x, mask): + # use the mask to select the non-padded part + x_i_no_padding = x_i[mask_i.bool()] + # append the result to the list + x_no_padding.append(x_i_no_padding) + + return x_no_padding + + def concatenate_audio_transcription(self, audio, transcription): + # concatenate two 2-D/3-D tensors along the sequence dimension + result = [] + for sublist1, sublist2 in zip(audio, transcription): + sub_result = torch.cat((sublist1 ,sublist2), dim=0) + result.append(sub_result) + + return result + + + def _prepare_input_embeds( + self, audios: List[float], transcriptions: List[str] = None + ): + """ + First, run audios through speech_encoder to get the embeddings and mask + """ + + + speech_output, mask = self.speech_encoder(audios) + mask = mask.to(self._device) + if transcriptions is not None: + + ########### build prompt_embed ############################################################################### + + # strip speech padding + audio_no_padding = self.remove_padding(speech_output,mask) + + # append the end-of-audio marker "#" to the speech + end_of_audio = self.language_tokenizer( + "#", + return_tensors="pt", + ).to(self.device) + with torch.no_grad(): + end_of_audio = self.language_model.embed(end_of_audio.input_ids) + audio_no_padding_eoa = [] + for t in audio_no_padding: + t = torch.cat((t, end_of_audio.squeeze(0))) + audio_no_padding_eoa.append(t) + + # prepend a 1 on the left of the audio mask + ones = torch.ones(mask.size(0), 1).to(self._device) + mask =torch.cat((ones, mask), dim=1) + + # tokenize the transcription to get the embedded labels + _labels = self.language_tokenizer( + transcriptions, + return_tensors="pt", + padding=True, + truncation=True, + add_special_tokens=False, + ).to(self.device) + with torch.no_grad(): + # labels_embeds = self.language_model.rwkv.get_input_embeddings()(_labels.input_ids) + labels_embeds = self.language_model.embed(_labels.input_ids) + att3 = _labels.attention_mask + + # concatenate the speech and label embeddings + audio_label = self.concatenate_audio_transcription(audio_no_padding_eoa , labels_embeds) + # print(f"concatenated inputs:\t{len(audio_label)}-{[len(x) for x in audio_label]}") + + 
# pad the concatenated sequences + max_seq = max([len(x) for x in audio_label]) + for i, x in enumerate(audio_label): + times = max_seq - len(x) + for _ in range(times): + x = torch.cat((x,x[len(x)-1].unsqueeze(0))) + audio_label[i] = x + # print(f"padded inputs:\t{len(audio_label)}-{[len(x) for x in audio_label]}") + + # convert to a tensor + audio_label = torch.stack(audio_label) + # print(f"padded inputs tensor:\t{audio_label.shape}") + prompt_embed = audio_label + # print() + + ##### build prompt_mask ################################################## + + # strip the trailing zeros on the right of the audio mask + mask_no_zero = [] + for mask_i in mask: + mask_i_no_zero = mask_i[mask_i != 0] + mask_no_zero.append(mask_i_no_zero) + + # concatenate the audio mask and the transcription mask + mask_concatenate = self.concatenate_audio_transcription(mask_no_zero, att3) + + # pad the mask with zeros + max_mask = max([len(x) for x in mask_concatenate]) + for i, x in enumerate(mask_concatenate): + times = max_mask - len(x) + for _ in range(times): + x = torch.cat((x,torch.tensor([0]).to(self.device))) + mask_concatenate[i] = x + + # convert to a tensor + mask_concatenate = torch.stack(mask_concatenate) + prompt_mask = mask_concatenate + + # ######### build loss mask ##################################################### + # import torch.nn.functional as F + # loss_mask = [] + + # for t in mask_no_zero: + # pad_len = max_mask - len(t) + # pad = F.pad(t, (0, pad_len), "constant", 0) + # loss_mask.append(pad) + + # loss_mask = torch.stack(loss_mask) + # loss_mask = prompt_mask - loss_mask + + # print(f"loss mask:\t{loss_mask.shape}") + + ######### build true_labels ################################################### + # print() + + # append the end-of-sentence token to the transcription: + transcriptions_eos = [] + for starr in transcriptions: + starr = starr + "" + transcriptions_eos.append(starr) + _labels = self.language_tokenizer( + transcriptions_eos, + return_tensors="pt", + padding=True, + truncation=True, + add_special_tokens=False, + ).to(self.device) + true_labels = _labels.input_ids + + # left-pad the true labels with -100 for the audio length, and right-pad with -100 to align the batch + padded_labels = [] + for i,t in enumerate(true_labels): + back_padding = max_mask - t.shape[0] - audio_no_padding[i].shape[0] + t = torch.cat( + [ + torch.full( + (audio_no_padding[i].shape[0], ), + -100, + dtype=torch.long, + device=self.device, + ), + t, + torch.full( + (back_padding, ), + -100, + dtype=torch.long, + device=self.device, + ), + ] + ) + padded_labels.append(t) + + padded_labels = torch.stack(padded_labels) + true_labels = padded_labels + else: + end_of_audio = self.language_tokenizer( + "#", + return_tensors="pt", + ).to(self.device) + with torch.no_grad(): + end_of_audio = self.language_model.embed(end_of_audio.input_ids) + + # print(f"speech output:{speech_output.shape}") + # print(f"end_of_audio:{end_of_audio.shape}") + # exit(0) + speech_output = torch.cat((speech_output, end_of_audio), dim= 1) + + prompt_embed = speech_output + prompt_mask = mask + true_labels = None + return prompt_embed, prompt_mask, true_labels + + def forward(self, audios: List[float], transcriptions: List[str] = None): + + prompt_embed, prompt_mask, true_labels = self._prepare_input_embeds( + audios, transcriptions + ) + + outputs = self.language_model(inputs_embeds=prompt_embed) + + + return outputs, true_labels, prompt_mask + + def generate(self, audios: List[float], stopping_criteria=None): + """ + Generate the transcription + """ + prompt_embed, prompt_mask, _ = self._prepare_input_embeds(audios) + + # outputs = self.language_model( + # inputs_embeds=prompt_embed, + # attention_mask=prompt_mask.bool() + # ) + self.language_model.to(self._device, dtype=torch.bfloat16) + outputs = self.language_model.generate(tokenizer= self.language_tokenizer,inputs_embeds=prompt_embed) + + return outputs + + def training_step(self, batch, batch_idx): + args = self.args + if args.loss_mask: + idx, targets, mask = batch + mask = mask.view(-1) + sum_mask = torch.sum(mask).item() + logits = self(idx) + loss = F.cross_entropy(logits.view(-1, 
logits.size(-1)), targets.view(-1), reduction='none') + loss = torch.sum(loss * mask) / sum_mask + # elif args.my_qa_mask != 1: + # idx, targets = batch + # logits = self(idx) + # loss = F.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1)) + # if '0' in os.environ["RWKV_MY_TESTING"]: + # print('logits', logits) + # torch.set_printoptions(threshold=10000) + # print('idx', idx) + # exit(0) + else: + + ## modified: unpack (audio, transcription) pairs from the batch + # idx, transcription = batch + idx = [item[0] for item in batch] + transcription = [item[1] for item in batch] + + logits, targets, mask = self(idx, transcription) + mask = mask.view(-1) + sum_mask = torch.sum(mask).item() + ###### + + # idx, targets, mask = batch + # mask = mask.view(-1) + # sum_mask = torch.sum(mask).item() + # # if sum_mask == 0: + # # return torch.tensor([0.0], requires_grad=True) + + # logits = self(idx) + if sum_mask == mask.shape[0]: + loss = F.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1)) + # print('rank', self.global_rank, 'loss', loss.item()) + else: + loss = F.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1), reduction='none') + # loss_raw = loss + loss = torch.sum(loss * mask) / sum_mask + + # torch.set_printoptions(threshold=10000) + # if True: #self.global_rank == 1: + # tmp = '' + # sss = 0 + # ccc = 0 + # for i in range(mask.shape[0]): + # if mask[i] > 0: + # tmp += str(idx.view(-1)[i].item()) + ',' + # sss += loss_raw.view(-1)[i].float().item() + # ccc += 1 + # print('rank', self.global_rank, 'loss', loss.item(), 'lavg', sss / ccc)#, 'tmp', tmp, 'input', idx) + + return L2Wrap.apply(loss, logits) + + def configure_optimizers(self): + args = self.args + + lr_decay = set() + lr_1x = set() + lr_2x = set() + lr_3x = set() + for n, p in self.named_parameters(): + if not p.requires_grad: + continue + if (("_w1" in n) or ("_w2" in n)) and (args.layerwise_lr > 0): + lr_1x.add(n) + elif (("time_mix" in n) or ("time_maa" in n)) and (args.layerwise_lr > 0): + if args.my_pile_stage == 2: + 
lr_2x.add(n) + else: + lr_1x.add(n) + elif (("time_decay" in n) or ("time_daaaa" in n)) and (args.layerwise_lr > 0): + if args.my_pile_stage == 2: + lr_3x.add(n) + else: + lr_2x.add(n) + elif ("time_faaaa" in n) and (args.layerwise_lr > 0): + if args.my_pile_stage == 2: + lr_2x.add(n) + else: + lr_1x.add(n) + elif ("time_first" in n) and (args.layerwise_lr > 0): + lr_3x.add(n) + elif (len(p.squeeze().shape) >= 2) and (args.weight_decay > 0): + lr_decay.add(n) + else: + lr_1x.add(n) + + lr_decay = sorted(list(lr_decay)) + lr_1x = sorted(list(lr_1x)) + lr_2x = sorted(list(lr_2x)) + lr_3x = sorted(list(lr_3x)) + # print('decay', lr_decay) + # print('1x', lr_1x) + # print('2x', lr_2x) + # print('3x', lr_3x) + param_dict = {n: p for n, p in self.named_parameters()} + + if args.layerwise_lr > 0: + if args.my_pile_stage == 2: + optim_groups = [ + {"params": [param_dict[n] for n in lr_1x], "weight_decay": 0.0, "my_lr_scale": 1.0}, + {"params": [param_dict[n] for n in lr_2x], "weight_decay": 0.0, "my_lr_scale": 5.0},# test: 2e-3 / args.lr_init}, + {"params": [param_dict[n] for n in lr_3x], "weight_decay": 0.0, "my_lr_scale": 5.0},# test: 3e-3 / args.lr_init}, + ] + else: + optim_groups = [ + {"params": [param_dict[n] for n in lr_1x], "weight_decay": 0.0, "my_lr_scale": 1.0}, + {"params": [param_dict[n] for n in lr_2x], "weight_decay": 0.0, "my_lr_scale": 2.0}, + {"params": [param_dict[n] for n in lr_3x], "weight_decay": 0.0, "my_lr_scale": 3.0}, + ] + else: + optim_groups = [{"params": [param_dict[n] for n in lr_1x], "weight_decay": 0.0, "my_lr_scale": 1.0}] + + if args.weight_decay > 0: + optim_groups += [{"params": [param_dict[n] for n in lr_decay], "weight_decay": args.weight_decay, "my_lr_scale": 1.0}] + if self.deepspeed_offload: + return DeepSpeedCPUAdam(optim_groups, lr=self.args.lr_init, betas=self.args.betas, eps=self.args.adam_eps, bias_correction=True, adamw_mode=True, amsgrad=False) + return FusedAdam(optim_groups, lr=self.args.lr_init, betas=self.args.betas, 
eps=self.args.adam_eps, bias_correction=True, adam_w_mode=True, amsgrad=False) + else: + if self.deepspeed_offload: + return DeepSpeedCPUAdam(optim_groups, lr=self.args.lr_init, betas=self.args.betas, eps=self.args.adam_eps, bias_correction=True, adamw_mode=False, weight_decay=0, amsgrad=False) + return FusedAdam(optim_groups, lr=self.args.lr_init, betas=self.args.betas, eps=self.args.adam_eps, bias_correction=True, adam_w_mode=False, weight_decay=0, amsgrad=False) + # return ZeroOneAdam(optim_groups, lr=self.args.lr_init, betas=self.args.betas, eps=self.args.adam_eps, bias_correction=True, weight_decay=0, amsgrad=False, cuda_aware=False) + + def return_tokenizer(self): + return self.language_tokenizer + + @property + def config(self): + return self.language_model.config + + @property + def device(self): + return self._device + + @device.setter + def device(self, value): + + self._device = value + + + + @property + def deepspeed_offload(self) -> bool: + strategy = self.trainer.strategy + if isinstance(strategy, DeepSpeedStrategy): + cfg = strategy.config["zero_optimization"] + return cfg.get("offload_optimizer") or cfg.get("offload_param") + return False diff --git a/src/binidx.py b/src/binidx.py new file mode 100644 index 0000000000000000000000000000000000000000..adb99f73db818e3c6127bcd7952697852ec1fbe1 --- /dev/null +++ b/src/binidx.py @@ -0,0 +1,296 @@ +import os +import torch +import numpy as np +import shutil +import struct +from functools import lru_cache +from itertools import accumulate + +def print_rank_0(*message): + pass + # """If distributed is initialized print only on rank 0.""" + # if torch.distributed.is_initialized(): + # if torch.distributed.get_rank() == 0: + # print(*message, flush=True) + # else: + # print(*message, flush=True) + +def _warmup_mmap_file(path): + pass + # with open(path, "rb") as stream: + # while stream.read(100 * 1024 * 1024): + # pass + +dtypes = { + 1: np.uint8, + 2: np.int8, + 3: np.int16, + 4: np.int32, + 5: np.int64, + 6: 
float, + 7: np.double, + 8: np.uint16, +} + +def code(dtype): + for k in dtypes.keys(): + if dtypes[k] == dtype: + return k + raise ValueError(dtype) + +def index_file_path(prefix_path): + return prefix_path + ".idx" + +def data_file_path(prefix_path): + return prefix_path + ".bin" + +class MMapIndexedDataset(torch.utils.data.Dataset): + class Index(object): + _HDR_MAGIC = b"MMIDIDX\x00\x00" + + @classmethod + def writer(cls, path, dtype): + class _Writer(object): + def __enter__(self): + self._file = open(path, "wb") + + # Write Magic string so we can check the file format then opening it again. + self._file.write(cls._HDR_MAGIC) + # Write version number + # Little endian unsigned 64 Bit integer + self._file.write(struct.pack(" b h l d', h = H) + k = rearrange(k, 'b l (h d) -> b h l d', h = H) + v = rearrange(v, 'b l (h d) -> b h l d', h = H) + w = rearrange(-torch.exp(w), 'b l (h d) -> b h l d', h = H) + o, state = chunk_rwkv6(r, k, v, w, u=u, scale=1., initial_state=s, output_final_state=True) + x = rearrange(o, 'b h l d -> b l (h d)') + return x, state + elif os.environ["RWKV_TRAIN_TYPE"] == 'states': + def RUN_CUDA_RWKV6_STATE(B, T, C, H, r, k, v, w, u, s): + r = rearrange(r, 'b l (h d) -> b h l d', h = H) + k = rearrange(k, 'b l (h d) -> b h l d', h = H) + v = rearrange(v, 'b l (h d) -> b h l d', h = H) + w = rearrange(-torch.exp(w), 'b l (h d) -> b h l d', h = H) + s = s.transpose(1, 2).expand(B,*s.shape) + o,_ = chunk_rwkv6(r, k, v, w, u=u, scale=1., initial_state=s, output_final_state=False) + x = rearrange(o, 'b h l d -> b l (h d)') + return x + else: + def RUN_CUDA_RWKV6(B, T, C, H, r, k, v, w, u): + r = rearrange(r, 'b l (h d) -> b h l d', h = H) + k = rearrange(k, 'b l (h d) -> b h l d', h = H) + v = rearrange(v, 'b l (h d) -> b h l d', h = H) + w = rearrange(-torch.exp(w), 'b l (h d) -> b h l d', h = H) + o,_ = chunk_rwkv6(r, k, v, w, u=u, scale=1., initial_state=None, output_final_state=False) + x = rearrange(o, 'b h l d -> b l (h d)') + return x + 
+else: + from torch.utils.cpp_extension import load + + HEAD_SIZE = int(os.environ["RWKV_HEAD_SIZE_A"]) + + if 'x060' in os.environ["RWKV_MY_TESTING"]: + if os.environ["RWKV_TRAIN_TYPE"] == 'infctx': + wkv6state_cuda = load(name="wkv6infctx", sources=["cuda/wkv6infctx_op.cpp", f"cuda/wkv6infctx_cuda.cu"], + verbose=True, extra_cuda_cflags=["-res-usage", "--use_fast_math", "-O3", "-Xptxas -O3", "--extra-device-vectorization", f"-D_N_={HEAD_SIZE}", f"-D_T_={int(os.environ['RWKV_CTXLEN'])}"]) + + class WKV_6STATE(torch.autograd.Function): + @staticmethod + def forward(ctx, B, T, C, H, r, k, v, w, u, s): + with torch.no_grad(): + assert r.dtype == torch.bfloat16 + assert k.dtype == torch.bfloat16 + assert v.dtype == torch.bfloat16 + assert w.dtype == torch.bfloat16 + assert u.dtype == torch.bfloat16 + assert s.dtype == torch.bfloat16 + assert HEAD_SIZE == C // H + ctx.B = B + ctx.T = T + ctx.C = C + ctx.H = H + assert r.is_contiguous() + assert k.is_contiguous() + assert v.is_contiguous() + assert w.is_contiguous() + assert u.is_contiguous() + assert s.is_contiguous() + ctx.save_for_backward(r, k, v, w, u, s) + y = torch.empty((B, T, C), device=r.device, dtype=torch.bfloat16, memory_format=torch.contiguous_format)#.uniform_(-100, 100) + wkv6state_cuda.forward(B, T, C, H, r, k, v, w, u, s, y) + return y + + @staticmethod + def backward(ctx, gy): + with torch.no_grad(): + assert gy.dtype == torch.bfloat16 + B = ctx.B + T = ctx.T + C = ctx.C + H = ctx.H + assert gy.is_contiguous() + r, k, v, w, u, s = ctx.saved_tensors + gr = torch.empty((B, T, C), device=gy.device, requires_grad=False, dtype=torch.bfloat16, memory_format=torch.contiguous_format)#.uniform_(-100, 100) + gk = torch.empty((B, T, C), device=gy.device, requires_grad=False, dtype=torch.bfloat16, memory_format=torch.contiguous_format)#.uniform_(-100, 100) + gv = torch.empty((B, T, C), device=gy.device, requires_grad=False, dtype=torch.bfloat16, memory_format=torch.contiguous_format)#.uniform_(-100, 100) + gw = 
torch.empty((B, T, C), device=gy.device, requires_grad=False, dtype=torch.bfloat16, memory_format=torch.contiguous_format)#.uniform_(-100, 100) + gu = torch.empty((B, C), device=gy.device, requires_grad=False, dtype=torch.bfloat16, memory_format=torch.contiguous_format)#.uniform_(-100, 100) + gs = torch.empty((B, H, C//H, C//H), device=gy.device, requires_grad=False, dtype=torch.bfloat16, memory_format=torch.contiguous_format)#.uniform_(-100, 100) + wkv6state_cuda.backward(B, T, C, H, r, k, v, w, u, s, gy, gr, gk, gv, gw, gu, gs) + gu = torch.sum(gu, 0).view(H, C//H) + gs = torch.sum(gs, 0).view(H, C//H, C//H) + return (None, None, None, None, gr, gk, gv, gw, gu, gs) + + def RUN_CUDA_RWKV6_STATE(B, T, C, H, r, k, v, w, u, s): + x = WKV_6STATE.apply(B, T, C, H, r, k, v, w, u, s) + return x, s + elif os.environ["RWKV_TRAIN_TYPE"] == 'states': + wkv6state_cuda = load(name="wkv6state", sources=["cuda/wkv6state_op.cpp", f"cuda/wkv6state_cuda.cu"], + verbose=True, extra_cuda_cflags=["-res-usage", "--use_fast_math", "-O3", "-Xptxas -O3", "--extra-device-vectorization", f"-D_N_={HEAD_SIZE}", f"-D_T_={int(os.environ['RWKV_CTXLEN'])}"]) + + class WKV_6STATE(torch.autograd.Function): + @staticmethod + def forward(ctx, B, T, C, H, r, k, v, w, u, s): + with torch.no_grad(): + assert r.dtype == torch.bfloat16 + assert k.dtype == torch.bfloat16 + assert v.dtype == torch.bfloat16 + assert w.dtype == torch.bfloat16 + assert u.dtype == torch.bfloat16 + assert s.dtype == torch.bfloat16 + assert HEAD_SIZE == C // H + ctx.B = B + ctx.T = T + ctx.C = C + ctx.H = H + assert r.is_contiguous() + assert k.is_contiguous() + assert v.is_contiguous() + assert w.is_contiguous() + assert u.is_contiguous() + assert s.is_contiguous() + ctx.save_for_backward(r, k, v, w, u, s) + y = torch.empty((B, T, C), device=r.device, dtype=torch.bfloat16, memory_format=torch.contiguous_format)#.uniform_(-100, 100) + wkv6state_cuda.forward(B, T, C, H, r, k, v, w, u, s, y) + return y + + @staticmethod + def 
backward(ctx, gy): + with torch.no_grad(): + assert gy.dtype == torch.bfloat16 + B = ctx.B + T = ctx.T + C = ctx.C + H = ctx.H + assert gy.is_contiguous() + r, k, v, w, u, s = ctx.saved_tensors + gr = torch.empty((B, T, C), device=gy.device, requires_grad=False, dtype=torch.bfloat16, memory_format=torch.contiguous_format)#.uniform_(-100, 100) + gk = torch.empty((B, T, C), device=gy.device, requires_grad=False, dtype=torch.bfloat16, memory_format=torch.contiguous_format)#.uniform_(-100, 100) + gv = torch.empty((B, T, C), device=gy.device, requires_grad=False, dtype=torch.bfloat16, memory_format=torch.contiguous_format)#.uniform_(-100, 100) + gw = torch.empty((B, T, C), device=gy.device, requires_grad=False, dtype=torch.bfloat16, memory_format=torch.contiguous_format)#.uniform_(-100, 100) + gu = torch.empty((B, C), device=gy.device, requires_grad=False, dtype=torch.bfloat16, memory_format=torch.contiguous_format)#.uniform_(-100, 100) + gs = torch.empty((B, H, C//H, C//H), device=gy.device, requires_grad=False, dtype=torch.bfloat16, memory_format=torch.contiguous_format)#.uniform_(-100, 100) + wkv6state_cuda.backward(B, T, C, H, r, k, v, w, u, s, gy, gr, gk, gv, gw, gu, gs) + gu = torch.sum(gu, 0).view(H, C//H) + gs = torch.sum(gs, 0).view(H, C//H, C//H) + return (None, None, None, None, gr, gk, gv, gw, gu, gs) + + def RUN_CUDA_RWKV6_STATE(B, T, C, H, r, k, v, w, u, s): + return WKV_6STATE.apply(B, T, C, H, r, k, v, w, u, s) + + else: + wkv6_cuda = load(name="wkv6", sources=["cuda/wkv6_op.cpp", f"cuda/wkv6_cuda.cu"], + verbose=True, extra_cuda_cflags=["-res-usage", "--use_fast_math", "-O3", "-Xptxas -O3", "--extra-device-vectorization", f"-D_N_={HEAD_SIZE}", f"-D_T_={int(os.environ['RWKV_CTXLEN'])}"]) + + class WKV_6(torch.autograd.Function): + @staticmethod + def forward(ctx, B, T, C, H, r, k, v, w, u): + with torch.no_grad(): + assert r.dtype == torch.bfloat16 + assert k.dtype == torch.bfloat16 + assert v.dtype == torch.bfloat16 + assert w.dtype == torch.bfloat16 + 
assert u.dtype == torch.bfloat16 + assert HEAD_SIZE == C // H + ctx.B = B + ctx.T = T + ctx.C = C + ctx.H = H + assert r.is_contiguous() + assert k.is_contiguous() + assert v.is_contiguous() + assert w.is_contiguous() + assert u.is_contiguous() + ew = (-torch.exp(w.float())).contiguous() + ctx.save_for_backward(r, k, v, ew, u) + y = torch.empty((B, T, C), device=r.device, dtype=torch.bfloat16, memory_format=torch.contiguous_format)#.uniform_(-100, 100) + wkv6_cuda.forward(B, T, C, H, r, k, v, ew, u, y) + return y + + @staticmethod + def backward(ctx, gy): + with torch.no_grad(): + assert gy.dtype == torch.bfloat16 + B = ctx.B + T = ctx.T + C = ctx.C + H = ctx.H + assert gy.is_contiguous() + r, k, v, ew, u = ctx.saved_tensors + gr = torch.empty((B, T, C), device=gy.device, requires_grad=False, dtype=torch.bfloat16, memory_format=torch.contiguous_format)#.uniform_(-100, 100) + gk = torch.empty((B, T, C), device=gy.device, requires_grad=False, dtype=torch.bfloat16, memory_format=torch.contiguous_format)#.uniform_(-100, 100) + gv = torch.empty((B, T, C), device=gy.device, requires_grad=False, dtype=torch.bfloat16, memory_format=torch.contiguous_format)#.uniform_(-100, 100) + gw = torch.empty((B, T, C), device=gy.device, requires_grad=False, dtype=torch.bfloat16, memory_format=torch.contiguous_format)#.uniform_(-100, 100) + gu = torch.empty((B, C), device=gy.device, requires_grad=False, dtype=torch.bfloat16, memory_format=torch.contiguous_format)#.uniform_(-100, 100) + wkv6_cuda.backward(B, T, C, H, r, k, v, ew, u, gy, gr, gk, gv, gw, gu) + gu = torch.sum(gu, 0).view(H, C//H) + return (None, None, None, None, gr, gk, gv, gw, gu) + + def RUN_CUDA_RWKV6(B, T, C, H, r, k, v, w, u): + return WKV_6.apply(B, T, C, H, r, k, v, w, u) + else: + wkv5_cuda = load(name="wkv5", sources=["cuda/wkv5_op.cpp", f"cuda/wkv5_cuda.cu"], + verbose=True, extra_cuda_cflags=["-res-usage", "--use_fast_math", "-O3", "-Xptxas -O3", "--extra-device-vectorization", f"-D_N_={HEAD_SIZE}"]) + + class 
WKV_5(torch.autograd.Function): + @staticmethod + def forward(ctx, B, T, C, H, r, k, v, w, u): + with torch.no_grad(): + assert r.dtype == torch.bfloat16 + assert k.dtype == torch.bfloat16 + assert v.dtype == torch.bfloat16 + assert w.dtype == torch.bfloat16 + assert u.dtype == torch.bfloat16 + assert HEAD_SIZE == C // H + ctx.B = B + ctx.T = T + ctx.C = C + ctx.H = H + assert r.is_contiguous() + assert k.is_contiguous() + assert v.is_contiguous() + assert w.is_contiguous() + assert u.is_contiguous() + ew = (-torch.exp(w.float())).contiguous() + eew = (torch.exp(ew)).contiguous() + ctx.save_for_backward(r, k, v, eew, ew, u) + y = torch.empty((B, T, C), device=r.device, dtype=torch.bfloat16, memory_format=torch.contiguous_format) # .uniform_(-1, 1) + wkv5_cuda.forward(B, T, C, H, r, k, v, eew, u, y) + return y + + @staticmethod + def backward(ctx, gy): + with torch.no_grad(): + assert gy.dtype == torch.bfloat16 + B = ctx.B + T = ctx.T + C = ctx.C + H = ctx.H + assert gy.is_contiguous() + r, k, v, eew, ew, u = ctx.saved_tensors + gr = torch.empty((B, T, C), device=gy.device, requires_grad=False, dtype=torch.bfloat16, memory_format=torch.contiguous_format) # .uniform_(-1, 1) + gk = torch.empty((B, T, C), device=gy.device, requires_grad=False, dtype=torch.bfloat16, memory_format=torch.contiguous_format) # .uniform_(-1, 1) + gv = torch.empty((B, T, C), device=gy.device, requires_grad=False, dtype=torch.bfloat16, memory_format=torch.contiguous_format) # .uniform_(-1, 1) + gw = torch.empty((B, C), device=gy.device, requires_grad=False, dtype=torch.bfloat16, memory_format=torch.contiguous_format) # .uniform_(-1, 1) + gu = torch.empty((B, C), device=gy.device, requires_grad=False, dtype=torch.bfloat16, memory_format=torch.contiguous_format) # .uniform_(-1, 1) + wkv5_cuda.backward(B, T, C, H, r, k, v, eew, ew, u, gy, gr, gk, gv, gw, gu) + gw = torch.sum(gw, 0).view(H, C//H) + gu = torch.sum(gu, 0).view(H, C//H) + return (None, None, None, None, gr, gk, gv, gw, gu) + + def 
RUN_CUDA_RWKV5(B, T, C, H, r, k, v, w, u): + return WKV_5.apply(B, T, C, H, r, k, v, w, u) + +######################################################################################################## + +class RWKV_TimeMix_RWKV5(MyModule): + def __init__(self, args, layer_id): + super().__init__() + self.args = args + self.layer_id = layer_id + + self.head_size = args.head_size_a + assert HEAD_SIZE == self.head_size # change HEAD_SIZE to match args.head_size_a + self.n_head = args.dim_att // self.head_size + assert args.dim_att % self.n_head == 0 + self.head_size_divisor = args.head_size_divisor + + with torch.no_grad(): + ratio_0_to_1 = layer_id / (args.n_layer - 1) # 0 to 1 + ratio_1_to_almost0 = 1.0 - (layer_id / args.n_layer) # 1 to ~0 + ddd = torch.ones(1, 1, args.n_embd) + for i in range(args.n_embd): + ddd[0, 0, i] = i / args.n_embd + + # fancy time_mix + self.time_mix_k = nn.Parameter(torch.pow(ddd, ratio_1_to_almost0)) + self.time_mix_v = nn.Parameter(torch.pow(ddd, ratio_1_to_almost0) + 0.3 * ratio_0_to_1) + self.time_mix_r = nn.Parameter(torch.pow(ddd, 0.5 * ratio_1_to_almost0)) + self.time_mix_g = nn.Parameter(torch.pow(ddd, 0.5 * ratio_1_to_almost0)) + + # fancy time_decay + decay_speed = torch.ones(args.dim_att) + for n in range(args.dim_att): + decay_speed[n] = -6 + 5 * (n / (args.dim_att - 1)) ** (0.7 + 1.3 * ratio_0_to_1) + self.time_decay = nn.Parameter(decay_speed.reshape(self.n_head, self.head_size)) + # print(layer_id, self.time_decay.flatten()[:3].cpu().numpy(), '...', self.time_decay.flatten()[-3:].cpu().numpy()) + + tmp = torch.zeros(args.dim_att) + for n in range(args.dim_att): + zigzag = ((n + 1) % 3 - 1) * 0.1 + tmp[n] = ratio_0_to_1 * (1 - (n / (args.dim_att - 1))) + zigzag + + self.time_faaaa = nn.Parameter(tmp.reshape(self.n_head, self.head_size)) + + self.time_shift = nn.ZeroPad2d((0, 0, 1, -1)) + self.receptance = make_linear_att(args.n_embd, args.dim_att, bias=False) + self.key = make_linear_att(args.n_embd, args.dim_att, bias=False) 
+ self.value = make_linear_att(args.n_embd, args.dim_att, bias=False) + + self.output = make_linear_att(args.dim_att, args.n_embd, bias=False) + self.gate = make_linear_att(args.n_embd, args.dim_att, bias=False) + self.ln_x = nn.GroupNorm(self.n_head, args.dim_att) + + @MyFunction + def jit_func(self, x): + B, T, C = x.size() + + xx = self.time_shift(x) # Mix x with the previous timestep to produce xk, xv, xr + xk = x * self.time_mix_k + xx * (1 - self.time_mix_k) + xv = x * self.time_mix_v + xx * (1 - self.time_mix_v) + xr = x * self.time_mix_r + xx * (1 - self.time_mix_r) + xg = x * self.time_mix_g + xx * (1 - self.time_mix_g) + + r = self.receptance(xr) + k = self.key(xk) + v = self.value(xv) + g = F.silu(self.gate(xg)) + + return r, k, v, g + + @MyFunction + def jit_func_2(self, x, g): + B, T, C = x.size() + x = x.view(B * T, C) + + x = self.ln_x(x / self.head_size_divisor).view(B, T, C) + x = self.output(x * g) + return x + + def forward(self, x): + B, T, C = x.size() + H = self.n_head + + r, k, v, g = self.jit_func(x) + + x = RUN_CUDA_RWKV5(B, T, C, H, r, k, v, w=self.time_decay, u=self.time_faaaa) + + return self.jit_func_2(x, g) + +class RWKV_Tmix_x060(MyModule): + def __init__(self, args, layer_id): + super().__init__() + self.args = args + self.layer_id = layer_id + + self.head_size = args.head_size_a + self.n_head = args.dim_att // self.head_size + assert args.dim_att % self.n_head == 0 + + with torch.no_grad(): + ratio_0_to_1 = layer_id / (args.n_layer - 1) # 0 to 1 + ratio_1_to_almost0 = 1.0 - (layer_id / args.n_layer) # 1 to ~0 + ddd = torch.ones(1, 1, args.n_embd) + for i in range(args.n_embd): + ddd[0, 0, i] = i / args.n_embd + + # fancy time_mix + self.time_maa_x = nn.Parameter(1.0 - torch.pow(ddd, ratio_1_to_almost0)) + self.time_maa_w = nn.Parameter(1.0 - torch.pow(ddd, ratio_1_to_almost0)) + self.time_maa_k = nn.Parameter(1.0 - torch.pow(ddd, ratio_1_to_almost0)) + self.time_maa_v = nn.Parameter(1.0 - (torch.pow(ddd, ratio_1_to_almost0) + 0.3 * 
ratio_0_to_1)) + self.time_maa_r = nn.Parameter(1.0 - torch.pow(ddd, 0.5 * ratio_1_to_almost0)) + self.time_maa_g = nn.Parameter(1.0 - torch.pow(ddd, 0.5 * ratio_1_to_almost0)) + + TIME_MIX_EXTRA_DIM = 32 # generate TIME_MIX for w,k,v,r,g + if args.n_embd==4096: + TIME_MIX_EXTRA_DIM = TIME_MIX_EXTRA_DIM*2 + self.time_maa_w1 = nn.Parameter(torch.zeros(args.n_embd, TIME_MIX_EXTRA_DIM*5).uniform_(-1e-4, 1e-4)) + self.time_maa_w2 = nn.Parameter(torch.zeros(5, TIME_MIX_EXTRA_DIM, args.n_embd).uniform_(-1e-4, 1e-4)) + + # fancy time_decay + decay_speed = torch.ones(args.dim_att) + for n in range(args.dim_att): + decay_speed[n] = -6 + 5 * (n / (args.dim_att - 1)) ** (0.7 + 1.3 * ratio_0_to_1) + self.time_decay = nn.Parameter(decay_speed.reshape(1,1,args.dim_att)) + + TIME_DECAY_EXTRA_DIM = 64 + if args.n_embd==4096: + TIME_DECAY_EXTRA_DIM = TIME_DECAY_EXTRA_DIM*2 + self.time_decay_w1 = nn.Parameter(torch.zeros(args.n_embd, TIME_DECAY_EXTRA_DIM).uniform_(-1e-4, 1e-4)) + self.time_decay_w2 = nn.Parameter(torch.zeros(TIME_DECAY_EXTRA_DIM, args.dim_att).uniform_(-1e-4, 1e-4)) + + tmp = torch.zeros(args.dim_att) + for n in range(args.dim_att): + zigzag = ((n + 1) % 3 - 1) * 0.1 + tmp[n] = ratio_0_to_1 * (1 - (n / (args.dim_att - 1))) + zigzag + + self.time_faaaa = nn.Parameter(tmp.reshape(self.n_head, self.head_size)) + + self.time_shift = nn.ZeroPad2d((0, 0, 1, -1)) + self.receptance = make_linear_att(args.n_embd, args.dim_att, bias=False) + self.key = make_linear_att(args.n_embd, args.dim_att, bias=False) + self.value = make_linear_att(args.n_embd, args.dim_att, bias=False) + self.output = make_linear_att(args.dim_att, args.n_embd, bias=False) + self.gate = make_linear_att(args.n_embd, args.dim_att, bias=False) + self.ln_x = nn.GroupNorm(self.n_head, args.dim_att, eps=(1e-5)*(args.head_size_divisor**2)) + + @MyFunction + def jit_func(self, x): + B, T, C = x.size() + + xx = self.time_shift(x) - x + + xxx = x + xx * self.time_maa_x + xxx = torch.tanh(xxx @ 
self.time_maa_w1).view(B*T, 5, -1).transpose(0, 1) + xxx = torch.bmm(xxx, self.time_maa_w2).view(5, B, T, -1) + mw, mk, mv, mr, mg = xxx.unbind(dim=0) + + xw = x + xx * (self.time_maa_w + mw) + xk = x + xx * (self.time_maa_k + mk) + xv = x + xx * (self.time_maa_v + mv) + xr = x + xx * (self.time_maa_r + mr) + xg = x + xx * (self.time_maa_g + mg) + + r = self.receptance(xr) + k = self.key(xk) + v = self.value(xv) + g = F.silu(self.gate(xg)) + + ww = torch.tanh(xw @ self.time_decay_w1) @ self.time_decay_w2 + w = self.time_decay + ww + + return r, k, v, g, w + + @MyFunction + def jit_func_2(self, x, g): + B, T, C = x.size() + x = x.view(B * T, C) + + x = self.ln_x(x).view(B, T, C) + x = self.output(x * g) + return x + + def forward(self, x): + B, T, C = x.size() + H = self.n_head + + r, k, v, g, w = self.jit_func(x) + x = RUN_CUDA_RWKV6(B, T, C, H, r, k, v, w, u=self.time_faaaa) + + return self.jit_func_2(x, g) + +######################################################################################################## + +class RWKV_Tmix_x060_state(MyModule): + def __init__(self, args, layer_id): + super().__init__() + self.args = args + self.layer_id = layer_id + + self.head_size = args.head_size_a + self.n_head = args.dim_att // self.head_size + assert args.dim_att % self.n_head == 0 + + with torch.no_grad(): + ratio_0_to_1 = layer_id / (args.n_layer - 1) # 0 to 1 + ratio_1_to_almost0 = 1.0 - (layer_id / args.n_layer) # 1 to ~0 + ddd = torch.ones(1, 1, args.n_embd) + for i in range(args.n_embd): + ddd[0, 0, i] = i / args.n_embd + + # fancy time_mix + self.time_maa_x = nn.Parameter(1.0 - torch.pow(ddd, ratio_1_to_almost0)) + self.time_maa_w = nn.Parameter(1.0 - torch.pow(ddd, ratio_1_to_almost0)) + self.time_maa_k = nn.Parameter(1.0 - torch.pow(ddd, ratio_1_to_almost0)) + self.time_maa_v = nn.Parameter(1.0 - (torch.pow(ddd, ratio_1_to_almost0) + 0.3 * ratio_0_to_1)) + self.time_maa_r = nn.Parameter(1.0 - torch.pow(ddd, 0.5 * ratio_1_to_almost0)) + self.time_maa_g = 
nn.Parameter(1.0 - torch.pow(ddd, 0.5 * ratio_1_to_almost0)) + + D_MIX_LORA = 32 # generate TIME_MIX for w,k,v,r,g + if args.n_embd==4096: + D_MIX_LORA = D_MIX_LORA*2 + self.time_maa_w1 = nn.Parameter(torch.zeros(args.n_embd, D_MIX_LORA*5)) + self.time_maa_w2 = nn.Parameter(torch.zeros(5, D_MIX_LORA, args.n_embd).uniform_(-0.01, 0.01)) + + # fancy time_decay + decay_speed = torch.ones(args.dim_att) + for n in range(args.dim_att): + decay_speed[n] = -6 + 5 * (n / (args.dim_att - 1)) ** (0.7 + 1.3 * ratio_0_to_1) + self.time_decay = nn.Parameter(decay_speed.reshape(1,1,args.dim_att)) + + D_DECAY_LORA = 64 + if args.n_embd==4096: + D_DECAY_LORA = D_DECAY_LORA*2 + self.time_decay_w1 = nn.Parameter(torch.zeros(args.n_embd, D_DECAY_LORA)) + self.time_decay_w2 = nn.Parameter(torch.zeros(D_DECAY_LORA, args.dim_att).uniform_(-0.01, 0.01)) + + tmp = torch.zeros(args.dim_att) + for n in range(args.dim_att): + zigzag = ((n + 1) % 3 - 1) * 0.1 + tmp[n] = ratio_0_to_1 * (1 - (n / (args.dim_att - 1))) + zigzag + + self.time_faaaa = nn.Parameter(tmp.reshape(self.n_head, self.head_size)) + self.time_state = nn.Parameter(torch.zeros(self.n_head, self.head_size, self.head_size)) + + self.time_shift = nn.ZeroPad2d((0, 0, 1, -1)) + self.receptance = make_linear_att(args.n_embd, args.dim_att, bias=False) + self.key = make_linear_att(args.n_embd, args.dim_att, bias=False) + + self.value = make_linear_att(args.n_embd, args.dim_att, bias=False) + self.output = make_linear_att(args.dim_att, args.n_embd, bias=False) + self.gate = make_linear_att(args.n_embd, args.dim_att, bias=False) + self.ln_x = nn.GroupNorm(self.n_head, args.dim_att, eps=(1e-5)*(args.head_size_divisor**2)) + + @MyFunction + def jit_func(self, x): + B, T, C = x.size() + + xx = self.time_shift(x) - x + + xxx = x + xx * self.time_maa_x + xxx = torch.tanh(xxx @ self.time_maa_w1).view(B*T, 5, -1).transpose(0, 1) + xxx = torch.bmm(xxx, self.time_maa_w2).view(5, B, T, -1) + mw, mk, mv, mr, mg = xxx.unbind(dim=0) + + xw = x + xx 
* (self.time_maa_w + mw) + xk = x + xx * (self.time_maa_k + mk) + xv = x + xx * (self.time_maa_v + mv) + xr = x + xx * (self.time_maa_r + mr) + xg = x + xx * (self.time_maa_g + mg) + + r = self.receptance(xr) + k = self.key(xk) + v = self.value(xv) + g = F.silu(self.gate(xg)) + + ww = torch.tanh(xw @ self.time_decay_w1) @ self.time_decay_w2 + w = self.time_decay + ww + + return r, k, v, g, w + + @MyFunction + def jit_func_2(self, x, g): + B, T, C = x.size() + x = x.view(B * T, C) + + x = self.ln_x(x).view(B, T, C) + x = self.output(x * g) + return x + + def forward(self, x): + B, T, C = x.size() + H = self.n_head + + r, k, v, g, w = self.jit_func(x) + x = RUN_CUDA_RWKV6_STATE(B, T, C, H, r, k, v, w, u=self.time_faaaa, s=self.time_state) + + return self.jit_func_2(x, g) +######################################################################################################## + +class RWKV_ChannelMix(MyModule): + def __init__(self, args, layer_id): + super().__init__() + self.args = args + self.layer_id = layer_id + self.time_shift = nn.ZeroPad2d((0, 0, 1, -1)) + + with torch.no_grad(): # fancy init of time_mix + ratio_1_to_almost0 = 1.0 - (layer_id / args.n_layer) # 1 to ~0 + ddd = torch.ones(1, 1, args.n_embd) + for i in range(args.n_embd): + ddd[0, 0, i] = i / args.n_embd + self.time_mix_k = nn.Parameter(torch.pow(ddd, ratio_1_to_almost0)) + self.time_mix_r = nn.Parameter(torch.pow(ddd, ratio_1_to_almost0)) + + self.key = make_linear_ffn(args.n_embd, args.dim_ffn, bias=False) + self.receptance = make_linear_ffn(args.n_embd, args.n_embd, bias=False) + self.value = make_linear_ffn(args.dim_ffn, args.n_embd, bias=False) + + @MyFunction + def forward(self, x): + xx = self.time_shift(x) + xk = x * self.time_mix_k + xx * (1 - self.time_mix_k) + xr = x * self.time_mix_r + xx * (1 - self.time_mix_r) + k = self.key(xk) + k = torch.relu(k) ** 2 + kv = self.value(k) + return torch.sigmoid(self.receptance(xr)) * kv + +class RWKV_CMix_x060(MyModule): + def __init__(self, args, 
layer_id): + super().__init__() + self.args = args + self.layer_id = layer_id + self.time_shift = nn.ZeroPad2d((0, 0, 1, -1)) + + with torch.no_grad(): # fancy init of time_mix + ratio_1_to_almost0 = 1.0 - (layer_id / args.n_layer) # 1 to ~0 + ddd = torch.ones(1, 1, args.n_embd) + for i in range(args.n_embd): + ddd[0, 0, i] = i / args.n_embd + self.time_maa_k = nn.Parameter(1.0 - torch.pow(ddd, ratio_1_to_almost0)) + self.time_maa_r = nn.Parameter(1.0 - torch.pow(ddd, ratio_1_to_almost0)) + + self.key = make_linear_ffn(args.n_embd, args.dim_ffn, bias=False) + self.receptance = make_linear_ffn(args.n_embd, args.n_embd, bias=False) + self.value = make_linear_ffn(args.dim_ffn, args.n_embd, bias=False) + + @MyFunction + def forward(self, x): + xx = self.time_shift(x) - x + xk = x + xx * self.time_maa_k + xr = x + xx * self.time_maa_r + + k = self.key(xk) + k = torch.relu(k) ** 2 + kv = self.value(k) + return torch.sigmoid(self.receptance(xr)) * kv + +######################################################################################################## + +class MishGLU(MyModule): + def __init__(self, args, layer_id): + super().__init__() + self.args = args + self.layer_id = layer_id + self.time_shift = nn.ZeroPad2d((0, 0, 1, -1)) + + with torch.no_grad(): + ratio_1_to_almost0 = 1.0 - (layer_id / args.n_layer) + + x = torch.ones(1, 1, args.n_embd) + for i in range(args.n_embd): + x[0, 0, i] = i / args.n_embd + + self.time_mix_k = nn.Parameter(torch.pow(x, ratio_1_to_almost0)) + self.time_mix_r = nn.Parameter(torch.pow(x, ratio_1_to_almost0)) + self.aa = nn.Linear(args.n_embd, args.dim_ffn, bias=False) + self.bb = nn.Linear(args.n_embd, args.dim_ffn, bias=False) + self.value = nn.Linear(args.dim_ffn, args.n_embd, bias=False) + + @MyFunction + def forward(self, x): + xx = self.time_shift(x) + xa = x * self.time_mix_k + xx * (1 - self.time_mix_k) + xb = x * self.time_mix_r + xx * (1 - self.time_mix_r) + a = self.aa(xa) + b = self.bb(xb) + return self.value(a * F.mish(b)) 
+######################################################################################################## + +class RWKV_Tmix_x060_infctx(MyModule): + def __init__(self, args, layer_id): + super().__init__() + self.args = args + self.layer_id = layer_id + + self.head_size = args.head_size_a + self.n_head = args.dim_att // self.head_size + assert args.dim_att % self.n_head == 0 + + with torch.no_grad(): + ratio_0_to_1 = layer_id / (args.n_layer - 1) # 0 to 1 + ratio_1_to_almost0 = 1.0 - (layer_id / args.n_layer) # 1 to ~0 + ddd = torch.ones(1, 1, args.n_embd) + for i in range(args.n_embd): + ddd[0, 0, i] = i / args.n_embd + + # fancy time_mix + self.time_maa_x = nn.Parameter(1.0 - torch.pow(ddd, ratio_1_to_almost0)) + self.time_maa_w = nn.Parameter(1.0 - torch.pow(ddd, ratio_1_to_almost0)) + self.time_maa_k = nn.Parameter(1.0 - torch.pow(ddd, ratio_1_to_almost0)) + self.time_maa_v = nn.Parameter(1.0 - (torch.pow(ddd, ratio_1_to_almost0) + 0.3 * ratio_0_to_1)) + self.time_maa_r = nn.Parameter(1.0 - torch.pow(ddd, 0.5 * ratio_1_to_almost0)) + self.time_maa_g = nn.Parameter(1.0 - torch.pow(ddd, 0.5 * ratio_1_to_almost0)) + + D_MIX_LORA = 32 # generate TIME_MIX for w,k,v,r,g + if args.n_embd==4096: + D_MIX_LORA = D_MIX_LORA*2 + self.time_maa_w1 = nn.Parameter(torch.zeros(args.n_embd, D_MIX_LORA*5)) + self.time_maa_w2 = nn.Parameter(torch.zeros(5, D_MIX_LORA, args.n_embd).uniform_(-0.01, 0.01)) + + # fancy time_decay + decay_speed = torch.ones(args.dim_att) + for n in range(args.dim_att): + decay_speed[n] = -6 + 5 * (n / (args.dim_att - 1)) ** (0.7 + 1.3 * ratio_0_to_1) + self.time_decay = nn.Parameter(decay_speed.reshape(1,1,args.dim_att)) + + D_DECAY_LORA = 64 + if args.n_embd==4096: + D_DECAY_LORA = D_DECAY_LORA*2 + self.time_decay_w1 = nn.Parameter(torch.zeros(args.n_embd, D_DECAY_LORA)) + self.time_decay_w2 = nn.Parameter(torch.zeros(D_DECAY_LORA, args.dim_att).uniform_(-0.01, 0.01)) + + tmp = torch.zeros(args.dim_att) + for n in range(args.dim_att): + zigzag = ((n + 
1) % 3 - 1) * 0.1 + tmp[n] = ratio_0_to_1 * (1 - (n / (args.dim_att - 1))) + zigzag + + self.time_faaaa = nn.Parameter(tmp.reshape(self.n_head, self.head_size)) + #self.time_state = nn.Parameter(torch.zeros(self.n_head, self.head_size, self.head_size)) + + self.time_shift = nn.ZeroPad2d((0, 0, 1, -1)) + self.receptance = make_linear_att(args.n_embd, args.dim_att, bias=False) + self.key = make_linear_att(args.n_embd, args.dim_att, bias=False) + + self.value = make_linear_att(args.n_embd, args.dim_att, bias=False) + self.output = make_linear_att(args.dim_att, args.n_embd, bias=False) + self.gate = make_linear_att(args.n_embd, args.dim_att, bias=False) + self.ln_x = nn.GroupNorm(self.n_head, args.dim_att, eps=(1e-5)*(args.head_size_divisor**2)) + + @MyFunction + def jit_func(self, x, shift_state): + B, T, C = x.size() + xx = torch.concat((shift_state.unsqueeze(1), x[:, :-1]), dim=1) - x + + xxx = x + xx * self.time_maa_x + xxx = torch.tanh(xxx @ self.time_maa_w1).view(B*T, 5, -1).transpose(0, 1) + xxx = torch.bmm(xxx, self.time_maa_w2).view(5, B, T, -1) + mw, mk, mv, mr, mg = xxx.unbind(dim=0) + + xw = x + xx * (self.time_maa_w + mw) + xk = x + xx * (self.time_maa_k + mk) + xv = x + xx * (self.time_maa_v + mv) + xr = x + xx * (self.time_maa_r + mr) + xg = x + xx * (self.time_maa_g + mg) + + r = self.receptance(xr) + k = self.key(xk) + v = self.value(xv) + g = F.silu(self.gate(xg)) + + ww = torch.tanh(xw @ self.time_decay_w1) @ self.time_decay_w2 + w = self.time_decay + ww + + return r, k, v, g, w, x[:, -1] + + @MyFunction + def jit_func_2(self, x, g, timemixstate:TimeMixState): + B, T, C = x.size() + x = x.view(B * T, C) + + x = self.ln_x(x).view(B, T, C) + x = self.output(x * g) + return x, timemixstate + + def forward(self, x, last_state: TimeMixState): + B, T, C = x.size() + H = self.n_head + shift_state = last_state.shift_state + r, k, v, g, w, lx = self.jit_func(x, shift_state) + ###### + wkv_state = last_state.wkv_state.clone().contiguous() + x, wkv_state = 
RUN_CUDA_RWKV6_STATE(B, T, C, H, r, k, v, w, u=self.time_faaaa, s=wkv_state) + #wkv_state = last_state.wkv_state + return self.jit_func_2(x, g, TimeMixState(lx, wkv_state)) + +class RWKV_CMix_x060_infctx(MyModule): + def __init__(self, args, layer_id): + super().__init__() + self.args = args + self.layer_id = layer_id + self.time_shift = nn.ZeroPad2d((0, 0, 1, -1)) + + with torch.no_grad(): # fancy init of time_mix + ratio_1_to_almost0 = 1.0 - (layer_id / args.n_layer) # 1 to ~0 + ddd = torch.ones(1, 1, args.n_embd) + for i in range(args.n_embd): + ddd[0, 0, i] = i / args.n_embd + self.time_maa_k = nn.Parameter(1.0 - torch.pow(ddd, ratio_1_to_almost0)) + self.time_maa_r = nn.Parameter(1.0 - torch.pow(ddd, ratio_1_to_almost0)) + + self.key = make_linear_ffn(args.n_embd, args.dim_ffn, bias=False) + self.receptance = make_linear_ffn(args.n_embd, args.n_embd, bias=False) + self.value = make_linear_ffn(args.dim_ffn, args.n_embd, bias=False) + + @MyFunction + def forward(self, x, last_state: ChannelMixState): + xx = torch.concat((last_state.shift_state.unsqueeze(1), x[:, :-1]), dim=1) - x + xk = x + xx * self.time_maa_k + xr = x + xx * self.time_maa_r + + k = self.key(xk) + k = torch.relu(k) ** 2 + kv = self.value(k) + return torch.sigmoid(self.receptance(xr)) * kv, ChannelMixState(x[:, -1]) +######################################################################################################## +# The RWKV Model with our blocks +######################################################################################################## + + +class Block(nn.Module): + def __init__(self, args, layer_id): + super().__init__() + self.args = args + self.layer_id = layer_id + + self.ln1 = nn.LayerNorm(args.n_embd) + self.ln2 = nn.LayerNorm(args.n_embd) + + if self.layer_id == 0: + self.ln0 = nn.LayerNorm(args.n_embd) + if args.my_pos_emb > 0: + self.pos_emb_x = nn.Parameter(torch.zeros((1,args.my_pos_emb,args.n_embd))) + self.pos_emb_y = 
nn.Parameter(torch.zeros((args.my_pos_emb,1,args.n_embd))) + + if self.layer_id == 0 and self.args.pre_ffn > 0: + self.ffnPre = RWKV_ChannelMix(args, 0) + else: + if 'x060' in os.environ["RWKV_MY_TESTING"]: + if os.environ["RWKV_TRAIN_TYPE"] == 'states': + self.att = RWKV_Tmix_x060_state(args, layer_id) + elif os.environ["RWKV_TRAIN_TYPE"] == 'infctx': + self.att = RWKV_Tmix_x060_infctx(args, layer_id) + else: + self.att = RWKV_Tmix_x060(args, layer_id) + else: + self.att = RWKV_TimeMix_RWKV5(args, layer_id) + + if 'g' in os.environ["RWKV_MY_TESTING"]: + self.ffn = MishGLU(args, layer_id) + else: + if 'x060' in os.environ["RWKV_MY_TESTING"]: + if os.environ["RWKV_TRAIN_TYPE"] == 'infctx': + self.ffn = RWKV_CMix_x060_infctx(args, layer_id) + else: + self.ffn = RWKV_CMix_x060(args, layer_id) + else: + self.ffn = RWKV_ChannelMix(args, layer_id) + + if args.tiny_att_dim > 0 and self.layer_id == args.tiny_att_layer: + self.tiny_ln = nn.LayerNorm(args.n_embd) + self.tiny_q = nn.Linear(args.n_embd, args.tiny_att_dim, bias=False) + self.tiny_k = nn.Linear(args.n_embd, args.tiny_att_dim, bias=False) + self.tiny_v = nn.Linear(args.n_embd, args.n_embd, bias=False) + self.register_buffer("tiny_mask", torch.tril(torch.ones(args.ctx_len, args.ctx_len))) + + if args.dropout > 0: + self.drop0 = nn.Dropout(p = args.dropout) + self.drop1 = nn.Dropout(p = args.dropout) + + if os.environ["RWKV_TRAIN_TYPE"] == 'infctx': + def forward(self, x, last_state: BlockState, x_emb=None): + args = self.args + B, T, C = x.size() + if self.layer_id == 0: + x = self.ln0(x) + if args.my_pos_emb > 0: + pos_emb = (self.pos_emb_x + self.pos_emb_y).reshape(T+1, -1)[:-1,:] + x = x + pos_emb + + if self.args.dropout == 0: + if self.layer_id == 0 and args.pre_ffn > 0: + x = x + self.ffnPre(self.ln1(x)) + else: + att_out, att_state = self.att(self.ln1(x), last_state.time_mix_state) + x = x + att_out + ffn_out, fnn_state = self.ffn(self.ln2(x), last_state.channel_mix_state) + x = x + ffn_out + else: + if 
self.layer_id == 0 and args.pre_ffn > 0:
+                    x = self.drop0(x + self.ffnPre(self.ln1(x)))
+                else:
+                    # keep the recurrent state in the dropout path too, so att_state / fnn_state are defined
+                    att_out, att_state = self.att(self.ln1(x), last_state.time_mix_state)
+                    x = self.drop0(x + att_out)
+                ffn_out, fnn_state = self.ffn(self.ln2(x), last_state.channel_mix_state)
+                x = self.drop1(x + ffn_out)
+
+            if args.tiny_att_dim > 0 and self.layer_id == args.tiny_att_layer:
+                xx = self.tiny_ln(x)
+                q = self.tiny_q(xx)[:, :T, :]
+                k = self.tiny_k(xx)[:, :T, :]
+                c = (q @ k.transpose(-2, -1)) * (args.tiny_att_dim ** (-0.5))
+                c = c.masked_fill(self.tiny_mask[:T, :T] == 0, 0)
+                x = x + c @ self.tiny_v(x_emb)
+            return x, BlockState(att_state, fnn_state)
+    else:
+        def forward(self, x, x_emb=None):
+            args = self.args
+            B, T, C = x.size()
+            if self.layer_id == 0:
+                x = self.ln0(x)
+                if args.my_pos_emb > 0:
+                    pos_emb = (self.pos_emb_x + self.pos_emb_y).reshape(T+1, -1)[:-1,:]
+                    x = x + pos_emb
+
+            if self.args.dropout == 0:
+                if self.layer_id == 0 and args.pre_ffn > 0:
+                    x = x + self.ffnPre(self.ln1(x))
+                else:
+                    x = x + self.att(self.ln1(x))
+                x = x + self.ffn(self.ln2(x))
+            else:
+                if self.layer_id == 0 and args.pre_ffn > 0:
+                    x = self.drop0(x + self.ffnPre(self.ln1(x)))
+                else:
+                    x = self.drop0(x + self.att(self.ln1(x)))
+                x = self.drop1(x + self.ffn(self.ln2(x)))
+
+            if args.tiny_att_dim > 0 and self.layer_id == args.tiny_att_layer:
+                xx = self.tiny_ln(x)
+                q = self.tiny_q(xx)[:, :T, :]
+                k = self.tiny_k(xx)[:, :T, :]
+                c = (q @ k.transpose(-2, -1)) * (args.tiny_att_dim ** (-0.5))
+                c = c.masked_fill(self.tiny_mask[:T, :T] == 0, 0)
+                x = x + c @ self.tiny_v(x_emb)
+            return x
+
+
+if os.environ["RWKV_TRAIN_TYPE"] == 'infctx':
+    class L2Wrap(torch.autograd.Function):
+        @staticmethod
+        def forward(ctx, loss, y, token_amount):
+            ctx.save_for_backward(y)
+            ctx.token_amount = token_amount
+            return loss
+
+        @staticmethod
+        def backward(ctx, grad_output):  # Does this function affect consistency between batching and grad accumulation? It seems so: with gradient accumulation the factor grows, but only the loss is scaled, while this regularization term is not.
+            y = ctx.saved_tensors[0]
+            # to encourage the logits to be close to 0
+            if ctx.token_amount == 0:
+                return (grad_output, None, None)
+            factor = 1e-4 / ctx.token_amount 
# this averages over tokens, similar to cross-entropy.
+            maxx, ids = torch.max(y, -1, keepdim=True)
+            gy = torch.zeros_like(y)
+            if os.environ.get("WN_FIX_L2WRAP"):  # enforce batch equivalence
+                # maxx[maxx<3.]=0.  # avoid pulling down logits that are already small; only pull down values above the threshold
+                gy.scatter_(-1, ids, maxx * factor * grad_output)
+            else:
+                gy.scatter_(-1, ids, maxx * factor)
+            return (grad_output, gy, None)
+else:
+    class L2Wrap(torch.autograd.Function):
+        @staticmethod
+        def forward(ctx, loss, y):
+            ctx.save_for_backward(y)
+            return loss
+
+        @staticmethod
+        def backward(ctx, grad_output):
+            y = ctx.saved_tensors[0]
+            # to encourage the logits to be close to 0
+            factor = 1e-4 / (y.shape[0] * y.shape[1])
+            maxx, ids = torch.max(y, -1, keepdim=True)
+            gy = torch.zeros_like(y)
+            gy.scatter_(-1, ids, maxx * factor)
+            return (grad_output, gy)
+
+
+class RWKV(pl.LightningModule):
+    def __init__(self, args):
+        super().__init__()
+        self.args = args
+        if not hasattr(args, 'dim_att'):
+            args.dim_att = args.n_embd
+        if not hasattr(args, 'dim_ffn'):
+            args.dim_ffn = args.n_embd * 4
+        if not hasattr(args, 'tiny_att_layer'):
+            args.tiny_att_layer = -1
+        if not hasattr(args, 'tiny_att_dim'):
+            args.tiny_att_dim = -1
+        assert args.n_embd % 32 == 0
+        assert args.dim_att % 32 == 0
+        assert args.dim_ffn % 32 == 0
+
+        self.emb = nn.Embedding(args.vocab_size, args.n_embd)
+
+        self.blocks = nn.ModuleList([Block(args, i) for i in range(args.n_layer)])
+
+        self.ln_out = nn.LayerNorm(args.n_embd)
+        self.head = nn.Linear(args.n_embd, args.vocab_size, bias=False)
+
+        if args.head_qk > 0:
+            self.head_q = nn.Linear(args.n_embd, args.head_qk, bias=False)
+            self.head_k = nn.Linear(args.n_embd, args.head_qk, bias=False)
+            self.register_buffer("copy_mask", torch.tril(torch.ones(args.ctx_len, args.ctx_len)))
+        if args.dropout > 0:
+            self.drop0 = nn.Dropout(p = args.dropout)
+
+    def configure_optimizers(self):
+        args = self.args
+
+        lr_decay = set()
+        lr_1x = set()
+        lr_2x = set()
+        lr_3x = set()
+        for n, p in self.named_parameters():
+            if not p.requires_grad:
+                continue
+            if 
(("_w1" in n) or ("_w2" in n)) and (args.layerwise_lr > 0): + lr_1x.add(n) + elif (("time_mix" in n) or ("time_maa" in n)) and (args.layerwise_lr > 0): + if args.my_pile_stage == 2: + lr_2x.add(n) + else: + lr_1x.add(n) + elif (("time_decay" in n) or ("time_daaaa" in n)) and (args.layerwise_lr > 0): + if args.my_pile_stage == 2: + lr_3x.add(n) + else: + lr_2x.add(n) + elif ("time_faaaa" in n) and (args.layerwise_lr > 0): + if args.my_pile_stage == 2: + lr_2x.add(n) + else: + lr_1x.add(n) + elif ("time_first" in n) and (args.layerwise_lr > 0): + lr_3x.add(n) + elif (len(p.squeeze().shape) >= 2) and (args.weight_decay > 0): + lr_decay.add(n) + else: + lr_1x.add(n) + + lr_decay = sorted(list(lr_decay)) + lr_1x = sorted(list(lr_1x)) + lr_2x = sorted(list(lr_2x)) + lr_3x = sorted(list(lr_3x)) + # print('decay', lr_decay) + # print('1x', lr_1x) + # print('2x', lr_2x) + # print('3x', lr_3x) + param_dict = {n: p for n, p in self.named_parameters()} + + if args.layerwise_lr > 0: + if args.my_pile_stage == 2: + optim_groups = [ + {"params": [param_dict[n] for n in lr_1x], "weight_decay": 0.0, "my_lr_scale": 1.0}, + {"params": [param_dict[n] for n in lr_2x], "weight_decay": 0.0, "my_lr_scale": 5.0},# test: 2e-3 / args.lr_init}, + {"params": [param_dict[n] for n in lr_3x], "weight_decay": 0.0, "my_lr_scale": 5.0},# test: 3e-3 / args.lr_init}, + ] + else: + optim_groups = [ + {"params": [param_dict[n] for n in lr_1x], "weight_decay": 0.0, "my_lr_scale": 1.0}, + {"params": [param_dict[n] for n in lr_2x], "weight_decay": 0.0, "my_lr_scale": 2.0}, + {"params": [param_dict[n] for n in lr_3x], "weight_decay": 0.0, "my_lr_scale": 3.0}, + ] + else: + optim_groups = [{"params": [param_dict[n] for n in lr_1x], "weight_decay": 0.0, "my_lr_scale": 1.0}] + + if args.weight_decay > 0: + optim_groups += [{"params": [param_dict[n] for n in lr_decay], "weight_decay": args.weight_decay, "my_lr_scale": 1.0}] + if self.deepspeed_offload: + return DeepSpeedCPUAdam(optim_groups, 
lr=self.args.lr_init, betas=self.args.betas, eps=self.args.adam_eps, bias_correction=True, adamw_mode=True, amsgrad=False) + return FusedAdam(optim_groups, lr=self.args.lr_init, betas=self.args.betas, eps=self.args.adam_eps, bias_correction=True, adam_w_mode=True, amsgrad=False) + else: + if self.deepspeed_offload: + return DeepSpeedCPUAdam(optim_groups, lr=self.args.lr_init, betas=self.args.betas, eps=self.args.adam_eps, bias_correction=True, adamw_mode=False, weight_decay=0, amsgrad=False) + return FusedAdam(optim_groups, lr=self.args.lr_init, betas=self.args.betas, eps=self.args.adam_eps, bias_correction=True, adam_w_mode=False, weight_decay=0, amsgrad=False) + # return ZeroOneAdam(optim_groups, lr=self.args.lr_init, betas=self.args.betas, eps=self.args.adam_eps, bias_correction=True, weight_decay=0, amsgrad=False, cuda_aware=False) + + @property + def deepspeed_offload(self) -> bool: + strategy = self.trainer.strategy + if isinstance(strategy, DeepSpeedStrategy): + cfg = strategy.config["zero_optimization"] + return cfg.get("offload_optimizer") or cfg.get("offload_param") + return False + + if os.environ["RWKV_TRAIN_TYPE"] == 'infctx': + + def forward(self, idx, last_shift_states: torch.Tensor, + last_wkv_states: torch.Tensor): + args = self.args + B, T = idx.size() + assert T <= args.chunk_ctx, "Cannot forward, model ctx_len is exhausted." 
+            C = args.n_embd
+            H = args.dim_att // args.head_size_a
+            assert C==H*args.head_size_a
+
+            x = self.emb(idx)
+            x_emb = x
+            new_states = BlockStateList.empty(args.n_layer, B, args.n_embd, H,
+                                              x.device, x.dtype)
+            if args.dropout > 0:
+                x = self.drop0(x)
+
+            for i, (block, block_state) in enumerate(zip(self.blocks,
+                    BlockStateList(last_shift_states, last_wkv_states))):
+                # x = x.to(block.device)
+                if args.grad_cp == 1 and i > 0:  # and i < len(self.blocks)-1
+                    x, new_block_state = torch_checkpoint(block, x, block_state, use_reentrant=False)
+                else:
+                    x, new_block_state = block(x, block_state)
+                new_states[i] = new_block_state
+
+            x = self.ln_out(x)
+
+            if args.head_qk > 0:
+                q = self.head_q(x)[:, :T, :]
+                k = self.head_k(x)[:, :T, :]
+                c = (q @ k.transpose(-2, -1)) * (1.0 / args.head_qk)
+                c = c.masked_fill(self.copy_mask[:T, :T] == 0, 0)
+
+                if "32" in os.environ["RWKV_FLOAT_MODE"]:
+                    c = c @ F.one_hot(idx, num_classes=args.vocab_size)
+                elif os.environ["RWKV_FLOAT_MODE"] == "fp16":
+                    c = c @ F.one_hot(idx, num_classes=args.vocab_size).half()
+                elif os.environ["RWKV_FLOAT_MODE"] == "bf16":
+                    c = c @ F.one_hot(idx, num_classes=args.vocab_size).bfloat16()
+
+                x = self.head(x) + c
+            else:
+                x = self.head(x)
+
+            return x, new_states.shift_states, new_states.wkv_states
+
+        def training_step(self, batch, batch_idx):
+            args = self.args
+            T_train = args.chunk_ctx
+            idx, targets = batch
+            B, T = idx.shape
+            C = args.n_embd
+            H = args.dim_att // args.head_size_a
+            assert C==H*args.head_size_a
+            states = BlockStateList.create(args.n_layer, B, C, H, idx.device,
+                                           self.emb.weight.dtype)
+
+            def checkpointed_step(idx, targets, prev_loss, last_shift_states,
+                                  last_wkv_states, prev_token_amount):
+                logits, new_shift_states, new_wkv_states = self(idx, last_shift_states, last_wkv_states)
+                current_token_amount = (targets!=-100).sum()  # would this be more appropriate? 
+ current_token_amount = idx.shape[1] + if current_token_amount == 0: + loss = F.cross_entropy(logits.view(-1, logits.size(-1)), targets.reshape(-1),reduction='sum') + else: + loss = F.cross_entropy(logits.view(-1, logits.size(-1)), targets.reshape(-1)) + loss = L2Wrap.apply(loss, logits, current_token_amount) + new_token_amount = prev_token_amount+current_token_amount + if new_token_amount>0: + new_loss = prev_loss * (prev_token_amount / new_token_amount) + loss * ( + current_token_amount / new_token_amount) + else: + new_loss = prev_loss + + return new_loss, new_shift_states, new_wkv_states, new_token_amount + + total_loss = torch.tensor(0.,dtype=self.emb.weight.dtype).requires_grad_() + token_amount = 0 + i = 0 + for i in range(math.ceil(T / T_train)): + # states.shift_states = states.shift_states.cuda() + # states.wkv_states = states.wkv_states.cuda() + total_loss,new_shift_states, new_wkv_states,token_amount = torch_checkpoint( + checkpointed_step, + idx[:, i * T_train:(i + 1) * T_train], + targets[:, i * T_train:(i + 1) * T_train], + total_loss, + states.shift_states, + states.wkv_states, + token_amount, + use_reentrant=False + ) + # total_loss,new_shift_states, new_wkv_states,token_amount = checkpointed_step( + # idx[:, i * T_train:(i + 1) * T_train], + # targets[:, i * T_train:(i + 1) * T_train], + # total_loss, + # states.shift_states, + # states.wkv_states, + # token_amount + # ) + # new_shift_states = new_shift_states.cpu() + # new_wkv_states = new_wkv_states.cpu() + states = BlockStateList(new_shift_states, new_wkv_states) + + return total_loss + else: + + def embed(self, inputs): + return self.emb(inputs) + + def forward(self, idx=None, inputs_embeds = None): + + if(idx != None): + args = self.args + B, T = idx.size() + assert T <= args.ctx_len, "Cannot forward, model ctx_len is exhausted." 
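The chunked infctx `training_step` above folds each chunk's mean loss into a token-weighted running mean, `new_loss = prev_loss * prev/(prev+cur) + loss * cur/(prev+cur)`, so the final value equals the mean over all tokens regardless of chunking. A minimal standalone sketch of that update rule (the function name is illustrative, not from the repo):

```python
def update_running_loss(prev_loss, prev_tokens, chunk_loss, chunk_tokens):
    """Merge a new chunk's mean loss into a token-weighted running mean,
    mirroring the accumulation inside checkpointed_step."""
    total = prev_tokens + chunk_tokens
    if total == 0:
        return prev_loss, 0
    new_loss = prev_loss * (prev_tokens / total) + chunk_loss * (chunk_tokens / total)
    return new_loss, total

# two chunks: mean loss 2.0 over 10 tokens, then mean loss 4.0 over 30 tokens
loss, n = 0.0, 0
loss, n = update_running_loss(loss, n, 2.0, 10)
loss, n = update_running_loss(loss, n, 4.0, 30)
# equals the overall token-weighted mean: (2.0*10 + 4.0*30) / 40 = 3.5
```

Because only the running scalar and the state tensors cross chunk boundaries, the per-chunk activations can be freed (or checkpointed) between chunks.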
+
+                x = self.emb(idx)
+                x_emb = x
+            elif(inputs_embeds != None):
+                args = self.args
+                B, T,_ = inputs_embeds.size()
+                assert T <= args.ctx_len, "Cannot forward, model ctx_len is exhausted."
+                x_emb = inputs_embeds
+                x = x_emb
+
+            if args.dropout > 0:
+                x = self.drop0(x)
+            if args.tiny_att_dim > 0:
+                for block in self.blocks:
+                    if args.grad_cp == 1:
+                        if args.lora or args.state_tune or args.train_type == 'state':
+                            x = torch_checkpoint(block, x, x_emb, use_reentrant=False)
+                        else:
+                            x = deepspeed.checkpointing.checkpoint(block, x, x_emb)
+                    else:
+                        x = block(x, x_emb)
+            else:
+                for block in self.blocks:
+                    if args.grad_cp == 1:
+                        if args.lora or args.state_tune or args.train_type == 'state':
+                            x = torch_checkpoint(block, x, x_emb, use_reentrant=False)
+                        else:
+                            x = deepspeed.checkpointing.checkpoint(block, x)
+                    else:
+                        x = block(x)
+
+            x = self.ln_out(x)
+
+            if args.head_qk > 0:
+                q = self.head_q(x)[:, :T, :]
+                k = self.head_k(x)[:, :T, :]
+                c = (q @ k.transpose(-2, -1)) * (1.0 / args.head_qk)
+                c = c.masked_fill(self.copy_mask[:T, :T] == 0, 0)
+
+                if "32" in os.environ["RWKV_FLOAT_MODE"]:
+                    c = c @ F.one_hot(idx, num_classes=args.vocab_size)
+                elif os.environ["RWKV_FLOAT_MODE"] == "fp16":
+                    c = c @ F.one_hot(idx, num_classes=args.vocab_size).half()
+                elif os.environ["RWKV_FLOAT_MODE"] == "bf16":
+                    c = c @ F.one_hot(idx, num_classes=args.vocab_size).bfloat16()
+
+                x = self.head(x) + c
+            else:
+                x = self.head(x)
+
+            return x
+
+        def generate(self, tokenizer, idx=None, inputs_embeds=None,):
+            MAX_LENGTH = 100
+            output_seq = self(idx, inputs_embeds)  # call the model
+            temp = output_seq.clone()
+            true_output = []
+
+            for i in range(MAX_LENGTH):
+
+                last_logit = output_seq[:,-1,:]
+
+                probabilities = F.softmax(last_logit, dim=-1)
+                _, top_idx = probabilities.topk(1, dim=-1)
+
+                decoded_token = tokenizer.decode(top_idx.squeeze(-1))
+                if decoded_token == '':
+                    break
+                else:
+                    true_output.append(decoded_token)
+                next_input = self.embed(top_idx.squeeze(-1))
+                inputs_embeds = 
torch.cat((inputs_embeds,next_input.unsqueeze(1)), dim = 1) + output_seq = self(idx,inputs_embeds) + + return true_output + + + def training_step(self, batch, batch_idx): + args = self.args + if args.loss_mask: + idx, targets, mask = batch + mask = mask.view(-1) + sum_mask = torch.sum(mask).item() + logits = self(idx) + loss = F.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1), reduction='none') + loss = torch.sum(loss * mask) / sum_mask + elif args.my_qa_mask != 1: + idx, targets = batch + logits = self(idx) + loss = F.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1)) + # if '0' in os.environ["RWKV_MY_TESTING"]: + # print('logits', logits) + # torch.set_printoptions(threshold=10000) + # print('idx', idx) + # exit(0) + else: + idx, targets, mask = batch + mask = mask.view(-1) + sum_mask = torch.sum(mask).item() + # if sum_mask == 0: + # return torch.tensor([0.0], requires_grad=True) + + logits = self(idx) + if sum_mask == mask.shape[0]: + loss = F.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1)) + # print('rank', self.global_rank, 'loss', loss.item()) + else: + loss = F.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1), reduction='none') + # loss_raw = loss + loss = torch.sum(loss * mask) / sum_mask + + # torch.set_printoptions(threshold=10000) + # if True: #self.global_rank == 1: + # tmp = '' + # sss = 0 + # ccc = 0 + # for i in range(mask.shape[0]): + # if mask[i] > 0: + # tmp += str(idx.view(-1)[i].item()) + ',' + # sss += loss_raw.view(-1)[i].float().item() + # ccc += 1 + # print('rank', self.global_rank, 'loss', loss.item(), 'lavg', sss / ccc)#, 'tmp', tmp, 'input', idx) + + return L2Wrap.apply(loss, logits) + + def training_step_end(self, batch_parts): + if pl.__version__[0]!='2': + all = self.all_gather(batch_parts) + if self.trainer.is_global_zero: + self.trainer.my_loss_all = all + + def generate_init_weight(self): + print( + f""" 
+############################################################################ +# +# Init model weight (slow for large models)... +# +############################################################################ +""" + ) + m = {} + for n in self.state_dict(): + p = self.state_dict()[n] + shape = p.shape + + gain = 1.0 + scale = 1.0 + if "ln_" in n or ".ln" in n or "time_" in n or "_mask" in n or "pos_emb" in n or '.mask.' in n: + if 'ln_x.weight' in n: + layer_scale = (1+int(n.split('.')[1])) / self.args.n_layer + m[n] = (p * 0.0) + (layer_scale ** 0.7) + else: + m[n] = p + else: + if n == "emb.weight": + scale = -1 * self.args.lr_init + else: + if shape[0] > shape[1]: + gain = math.sqrt(shape[0] / shape[1]) + + zero = [".att.output.", ".ffn.value.", ".ffn.receptance.", ".ffnPre.value.", ".ffnPre.receptance.", "head_q.", '.oo.', '.rr.'] + + for kk in zero: + if kk in n: + scale = 0 + if n == "head.weight": + scale = 0.5 + if "head_k." in n: + scale = 0.1 + if "head_q." in n: + scale = 0 + + print(f"{str(shape[0]).ljust(5)} {str(shape[1]).ljust(5)} {str(scale).ljust(4)} {n}") + + if self.args.accelerator.upper() == "GPU": + m[n] = torch.empty((shape[0], shape[1]), device="cuda") + else: + m[n] = torch.empty((shape[0], shape[1])) + + if scale == 0: + nn.init.zeros_(m[n]) + elif scale < 0: + nn.init.uniform_(m[n], a=scale, b=-scale) + else: + nn.init.orthogonal_(m[n], gain=gain * scale) + + m[n] = m[n].cpu() + if os.environ["RWKV_FLOAT_MODE"] == "fp16": + m[n] = m[n].half() + elif os.environ["RWKV_FLOAT_MODE"] == "bf16": + m[n] = m[n].bfloat16() + + # if n == "emb.weight": + # print(m[n]) + + gc.collect() + torch.cuda.empty_cache() + return m diff --git a/src/rwkvLinear.py b/src/rwkvLinear.py new file mode 100644 index 0000000000000000000000000000000000000000..0824eab35d42505b4a1575d7687afb96a820564c --- /dev/null +++ b/src/rwkvLinear.py @@ -0,0 +1,139 @@ +import torch, math +import torch.nn as nn +import bitsandbytes as bnb +from torch.nn import functional as F +from 
torch._lowrank import svd_lowrank +import functools + +def rwkv_quantize(quant_type, weight): + if quant_type=='4bit': + qweight, qstate= bnb.functional.quantize_4bit((weight.data).to('cuda')) + elif quant_type=='nf4': + qweight, qstate= bnb.functional.quantize_nf4((weight.data).to('cuda')) + elif quant_type=='fp4': + qweight, qstate= bnb.functional.quantize_fp4((weight.data).to('cuda')) + elif quant_type=='int8': + qweight, qstate= bnb.functional.quantize((weight.data).to('cuda')) + return qweight, qstate + + +def rwkv_dequantize(quant_type, weight, qstate): + if quant_type=='4bit': + deweight= bnb.functional.dequantize_4bit(weight.data,quant_state=qstate) + elif quant_type=='nf4': + deweight= bnb.functional.dequantize_nf4(weight.data,quant_state=qstate) + elif quant_type=='fp4': + deweight= bnb.functional.dequantize_fp4(weight.data,quant_state=qstate) + elif quant_type=='int8': + deweight= bnb.functional.dequantize(weight.data,state=qstate) + return deweight + + + +LORA_CONFIG = { + "r": 0, + "alpha": 0, + "dropout": 0, + "parts": {"att", "ln", "time", "ffn"}, + "quant": False, +} +class LoraLinear(nn.Module): + + def __init__(self, in_features: int, out_features: int, bias: bool): + super().__init__() + + self.weight = nn.Parameter(torch.empty((out_features, in_features))) + assert bias == False, "Biased LoraLinear not supported" + + r, alpha, dropout = LORA_CONFIG["r"], LORA_CONFIG[ + "alpha"], LORA_CONFIG["dropout"] + self.lora_A = nn.Parameter(torch.empty(r, in_features)) + self.lora_B = nn.Parameter(torch.empty(out_features, r)) + self.lora_dropout = nn.Dropout(dropout) + self.scaling = alpha / r + self.r = r + nn.init.kaiming_uniform_(self.weight, a=math.sqrt(5)) + nn.init.kaiming_uniform_(self.lora_A, a=math.sqrt(5)) + nn.init.zeros_(self.lora_B) + self.pissa = False + self.is_quant = False + + def pissa_load(self, init_A, init_B): + self.pissa = True + self.weight.data = self.weight.data - init_B @ init_A + + + def pissa_init(self, svd_niter): + + 
self.pissa = True + Ur, Sr, Vr = svd_lowrank(self.weight.data, self.r, niter=svd_niter) + Vhr = Vr.t() + lora_A = torch.diag(torch.sqrt(Sr)) @ Vhr + lora_B = Ur @ torch.diag(torch.sqrt(Sr)) + self.lora_A.data = lora_A + self.lora_B.data = lora_B + self.weight.data = self.weight.data - lora_B @ lora_A + def quant(self, quant_type): + self.is_quant = True + self.quant_type = quant_type + self.weight.data, self.qstate= rwkv_quantize(self.quant_type, (self.weight.data).to('cuda')) + + def forward(self, x): + + if self.is_quant: + if self.pissa: + return ( + F.linear(x, rwkv_dequantize(self.quant_type, self.weight.data, self.qstate).to(torch.bfloat16)) + + F.linear(F.linear(x, self.lora_A), self.lora_B)) + return ( + F.linear(x, rwkv_dequantize(self.quant_type, self.weight.data, self.qstate)) + self.scaling * + F.linear(F.linear(self.lora_dropout(x), self.lora_A), self.lora_B)) + + if self.pissa: + return ( + F.linear(x, self.weight) + + F.linear(F.linear(x, self.lora_A), self.lora_B)) + return ( + F.linear(x, self.weight) + self.scaling * + F.linear(F.linear(self.lora_dropout(x), self.lora_A), self.lora_B)) + + +class QuantLinear(nn.Module): + def __init__(self, in_features: int, out_features: int, bias: bool): + super().__init__() + + self.weight = nn.Parameter(torch.empty((out_features, in_features))) + assert bias == False, "Biased QuantLinear not supported" + self.is_quant = False + + def quant(self, quant_type): + self.is_quant = True + self.quant_type = quant_type + #self.dummy_tensor = nn.Parameter(torch.zeros(1)) + self.weight.data, self.qstate= rwkv_quantize(self.quant_type, (self.weight.data).to('cuda')) + def forward(self, x): + + if self.is_quant: + return F.linear(x, rwkv_dequantize(self.quant_type, self.weight.data, self.qstate).to(torch.bfloat16)) + else: + return F.linear(x, self.weight) + + +@functools.wraps(LoraLinear) +def make_linear_att(*args, **kwargs): + if "att" in LORA_CONFIG["parts"] and LORA_CONFIG["r"] > 0: + return LoraLinear(*args, 
**kwargs) + elif LORA_CONFIG["quant"]: + return QuantLinear(*args, **kwargs) + else: + return nn.Linear(*args, **kwargs) + + +@functools.wraps(LoraLinear) +def make_linear_ffn(*args, **kwargs): + if "ffn" in LORA_CONFIG["parts"] and LORA_CONFIG["r"] > 0: + return LoraLinear(*args, **kwargs) + elif LORA_CONFIG["quant"]: + return QuantLinear(*args, **kwargs) + else: + return nn.Linear(*args, **kwargs) \ No newline at end of file diff --git a/src/speech_encoder.py b/src/speech_encoder.py new file mode 100644 index 0000000000000000000000000000000000000000..16b9b84594eaa63ce845d34655649392f4026519 --- /dev/null +++ b/src/speech_encoder.py @@ -0,0 +1,94 @@ +import torch +import torch.nn as nn +import torch.nn.functional as F +from torch.nn import TransformerEncoder, TransformerEncoderLayer +import numpy as np + +from transformers import AutoProcessor, AutoModel +from transformers import Wav2Vec2FeatureExtractor +from transformers import Wav2Vec2Processor +from transformers import Wav2Vec2CTCTokenizer + + +class SpeechEncoder(nn.Module): + def __init__( + self, + model_id, + project_dim, + downsample_K=5, + hidden_dim=2048, + train_mode="adapter", + device="cuda", + ): + assert train_mode in ["adapter", "full"] + super(SpeechEncoder, self).__init__() + + feature_extractor = Wav2Vec2FeatureExtractor( + feature_size=1, + sampling_rate=16000, + padding_value=0.0, + do_normalize=True, + return_attention_mask=False, + ) + self.device = device + self.processor = AutoProcessor.from_pretrained("facebook/hubert-large-ls960-ft") + self.time_reduction_factor = int( + self.processor.feature_extractor.sampling_rate / 50 + ) + self.padding_length = 320 + self.model = AutoModel.from_pretrained(model_id).to(self.device,dtype=torch.bfloat16) + self.model_output_dim = self.model.config.hidden_size + self.downsample_K = downsample_K + self.project_dim = project_dim + if hidden_dim is None: + self.hidden_dim = self.project_dim * 2 + else: + self.hidden_dim = hidden_dim + # adapter shall be a 
Linear(Relu(Linear)) structure + self.adapter = nn.Sequential( + nn.Linear(self.model_output_dim * self.downsample_K, self.hidden_dim), + nn.ReLU(), + nn.Linear(self.hidden_dim, self.project_dim), + ).to(self.device,dtype=torch.bfloat16) + self.set_gradient(train_mode) + + def set_gradient(self, train_mode): + """ + if train_mode is "adapter", only train the adapter layers, otherwise train the whole model + """ + if train_mode == "adapter": + for param in self.model.parameters(): + param.requires_grad = False + for param in self.adapter.parameters(): + param.requires_grad = True + else: + for param in self.model.parameters(): + param.requires_grad = True + for param in self.adapter.parameters(): + param.requires_grad = True + + def calculate_mask(self, input_dict): + """ + Also need to handle the masking issue, to let the model not to attend to the padding tokens + """ + attention_mask = input_dict["attention_mask"] # [batch, num_samples] + length_in_samples = ( + attention_mask.shape[1] // self.padding_length * self.padding_length + ) + # calculate the mask length + mask_length = length_in_samples // self.time_reduction_factor + # create the mask + mask = attention_mask[:, :: (self.time_reduction_factor * self.downsample_K)] + return mask + + def forward(self, x): + input_dict = self.processor( + x, return_tensors="pt", padding=True, sampling_rate=16000 + ).to(self.device,dtype=torch.bfloat16) + mask = self.calculate_mask(input_dict) + x = self.model(**input_dict).last_hidden_state + # reshape the output from [batch_size, num_frames, hidden_size] to [batch_size, num_frames//downsample_K, hidden_size*downsample_K] + x = x.unfold(1, self.downsample_K, self.downsample_K).flatten(2) + x = self.adapter(x) + mask = mask[:, : x.shape[1]] + return x, mask diff --git a/src/trainer.py b/src/trainer.py new file mode 100644 index 0000000000000000000000000000000000000000..2dc6d4e0b71f07d1c910668f8df0241c59cfe4d5 --- /dev/null +++ b/src/trainer.py @@ -0,0 +1,364 @@ +import os, 
math, time, datetime, subprocess +import torch +from torch.utils.data import DataLoader +import pytorch_lightning as pl +from pytorch_lightning.utilities import rank_zero_info, rank_zero_only +from .model import LORA_CONFIG +import re +import numpy as np + +def my_save(args, trainer, dd, ff): + if '14b-run1' in ff: + fn = ff.split('/')[-1] + fff = '/dev/shm/' + fn + torch.save(dd, fff) + subprocess.Popen(f" aws s3 mv {fff} s3://rwkv-14b-4k/{fn} --quiet", shell=True) + elif ('world/14b' in ff) or ('world/7b' in ff): + aa = ff.split('/')[1] + fn = ff.split('/')[-1] + fff = f'/dev/shm/{aa}-{fn}' + torch.save(dd, fff) + subprocess.Popen(f" aws s3 mv {fff} s3://rwkv-world/{aa}-{fn} --quiet", shell=True) + else: + torch.save(dd, ff) + +from collections import deque + +class Queue: + def __init__(self, max_len=10): + self.queue = deque(maxlen=max_len) + self.sum = 0 + + def enqueue(self, val): + if len(self.queue) == self.queue.maxlen: + self.sum -= self.queue[0] + self.queue.append(val) + self.sum += val + + def average(self): + return self.sum / len(self.queue) if self.queue else None + +class train_callback(pl.Callback): + def __init__(self, args): + super().__init__() + self.args = args + self.step = 0 + self.loss_queue = Queue(50) + + def on_train_batch_start(self, trainer, pl_module, batch, batch_idx): + args = self.args + # if args.cuda_cleanup > 0: + # torch.cuda.empty_cache() + real_step = trainer.global_step + args.epoch_begin * args.epoch_steps + + # LR schedule + w_step = args.warmup_steps + if args.lr_final == args.lr_init or args.epoch_count == 0: + lr = args.lr_init + else: + decay_step = real_step - args.my_pile_edecay * args.epoch_steps + decay_total = (args.epoch_count - args.my_pile_edecay) * args.epoch_steps + progress = (decay_step - w_step + 1) / (decay_total - w_step) + progress = min(1, max(0, progress)) + + if args.lr_final == 0 or args.lr_init == 0: # linear decay + lr = args.lr_init + (args.lr_final - args.lr_init) * progress + else: # exp decay 
+ lr = args.lr_init * math.exp(math.log(args.lr_final / args.lr_init) * pow(progress, 1)) + # if trainer.is_global_zero: + # print(trainer.global_step, decay_step, decay_total, w_step, progress, lr) + + if args.my_exit_tokens != 0: # cosine decay + real_tokens = real_step * args.ctx_len * args.real_bsz + warmup_tokens = w_step * args.ctx_len * args.real_bsz + progress = (real_tokens - warmup_tokens) / (abs(args.my_exit_tokens) - warmup_tokens) + progress = max(0, min(1, progress)) + lr_final_factor = args.lr_final / args.lr_init + lr_mult = (0.5 + lr_final_factor / 2) + (0.5 - lr_final_factor / 2) * math.cos(math.pi * progress) + if args.my_exit_tokens > 0: + lr = args.lr_init * lr_mult + else: + lr = (lr + args.lr_init * lr_mult) / 2 + if progress >= 1: + if (trainer.is_global_zero) or ('deepspeed_stage_3' in args.strategy): + my_save( + args, trainer, + pl_module.state_dict(), + f"{args.proj_dir}/rwkv-final.pth", + ) + exit(0) + if trainer.global_step < w_step: + lr = lr * (0.2 + 0.8 * trainer.global_step / w_step) + + if args.weight_decay_final > 0: + wd_now = args.weight_decay * math.exp(math.log(args.weight_decay_final / args.weight_decay) * progress) + else: + wd_now = args.weight_decay + + for param_group in trainer.optimizers[0].param_groups: + if param_group["weight_decay"] > 0: + param_group["weight_decay"] = wd_now + if args.layerwise_lr > 0: + param_group["lr"] = lr * param_group["my_lr_scale"] + # print(param_group["lr"], param_group["my_lr_scale"]) + else: + param_group["lr"] = lr + + trainer.my_lr = lr + trainer.my_wd = wd_now + # rank_zero_info(f"{real_step} {lr}") + + if trainer.global_step == 0: + if trainer.is_global_zero: # logging + trainer.my_loss_sum = 0 + trainer.my_loss_count = 0 + trainer.my_log = open(args.proj_dir + "/train_log.txt", "a") + trainer.my_log.write(f"NEW RUN {args.my_timestamp}\n{vars(self.args)}\n") + try: + print(f"\n{trainer.strategy.config}\n") + trainer.my_log.write(f"{trainer.strategy.config}\n") + except: + pass + 
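The learning-rate logic above combines a warmup ramp (scaling from 0.2x to 1x over `warmup_steps`) with a linear or exponential decay from `lr_init` to `lr_final`. A self-contained sketch of that core schedule (the function name is illustrative; the cosine `my_exit_tokens` branch is omitted):

```python
import math

def rwkv_lr(step, w_step, total_steps, lr_init, lr_final):
    """Warmup + decay schedule, mirroring on_train_batch_start above."""
    progress = (step - w_step + 1) / (total_steps - w_step)
    progress = min(1.0, max(0.0, progress))
    if lr_final == 0 or lr_init == 0:
        # linear decay
        lr = lr_init + (lr_final - lr_init) * progress
    else:
        # exponential decay: log-linear interpolation from lr_init to lr_final
        lr = lr_init * math.exp(math.log(lr_final / lr_init) * progress)
    if step < w_step:
        # warmup: ramp from 0.2 * lr to lr over the first w_step steps
        lr = lr * (0.2 + 0.8 * step / w_step)
    return lr

lr0 = rwkv_lr(0, 100, 1000, 1e-3, 1e-4)      # warmup start: 0.2 * lr_init = 2e-4
lr_end = rwkv_lr(1000, 100, 1000, 1e-3, 1e-4)  # decay end: lr_final = 1e-4
```

The exponential branch keeps the decay log-linear, which is why `lr_final / lr_init` appears inside a `log` rather than as a plain difference.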
trainer.my_log.flush() + if len(args.wandb) > 0: + print("Login to wandb...") + import wandb + wandb.init( + project=args.wandb, + name=args.run_name + " " + args.my_timestamp, + config=args, + save_code=False, + ) + trainer.my_wandb = wandb + + def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx): + args = self.args + + self.step += 1 + if(self.step % 100 == 0 and trainer.is_global_zero): + print("saving...") + # to_save_dict = pl_module.state_dict() + filtered_state_dict = {} + for key in pl_module.state_dict().keys(): + # Check if the key matches any of the commented weights + if key.startswith('language_model.blocks.') and "att.time_state" in key: + # Add the key and value to the filtered state dict + filtered_state_dict[key] = pl_module.state_dict()[key] + elif key.startswith('speech_encoder.adapter.'): + filtered_state_dict[key] = pl_module.state_dict()[key] + # elif + + try: + import glob + files = glob.glob(os.path.join(args.proj_dir, '*.pth')) + for file in files: + os.remove(file) + + my_save( + args, trainer, + filtered_state_dict, + f"{args.proj_dir}/rwkv-adapter-{self.step}.pth", + ) + except Exception as e: + print('Error\n\n', e, '\n\n') + + + + token_per_step = args.ctx_len * args.real_bsz + real_step = trainer.global_step + args.epoch_begin * args.epoch_steps + if trainer.is_global_zero: # logging + t_now = time.time_ns() + kt_s = 0 + try: + t_cost = (t_now - trainer.my_time_ns) / 1e9 + kt_s = token_per_step / t_cost / 1000 + # self.log("REAL it/s", 1.0 / t_cost, prog_bar=True, on_step=True) + # self.log("Kt/s", kt_s, prog_bar=True, on_step=True) + except: + pass + trainer.my_time_ns = t_now + if pl.__version__[0]=='2': + trainer.my_loss = outputs["loss"] + else: + # trainer.my_loss = trainer.my_loss_all.float().mean().item() + # trainer.my_loss = trainer.my_loss_sum.float().mean().item()#修改 + trainer.my_loss = outputs["loss"] + trainer.my_loss_sum += trainer.my_loss + trainer.my_loss_count += 1 + trainer.my_epoch_loss = 
trainer.my_loss_sum / trainer.my_loss_count + + self.loss_queue.enqueue(trainer.my_loss) + self.log("lr", trainer.my_lr, prog_bar=True, on_step=True) + # self.log("loss", trainer.my_epoch_loss, prog_bar=True, on_step=True) + self.log("avg_loss", self.loss_queue.average(), prog_bar=True, on_step=True) + self.log("step_loss", trainer.my_loss, prog_bar=True, on_step=True) + # self.log("s", real_step, prog_bar=True, on_step=True) + + if len(args.wandb) > 0: + lll = {"loss": trainer.my_loss, "lr": trainer.my_lr, "wd": trainer.my_wd, "Gtokens": real_step * token_per_step / 1e9} + if kt_s > 0: + lll["kt/s"] = kt_s + trainer.my_wandb.log(lll, step=int(real_step)) + if (trainer.is_global_zero) or ('deepspeed_stage_3' in args.strategy): # save pth + if args.magic_prime > 0: + expand_factor = 2 if args.my_qa_mask > 0 else 1 + if int(real_step) == int(args.magic_prime * expand_factor // args.real_bsz) - 1 + int(args.my_random_steps): + to_save_dict = pl_module.state_dict() + my_save( + args, trainer, + to_save_dict, + f"{args.proj_dir}/rwkv-final.pth", + ) + + if args.LISA and (batch_idx+1)%args.lisa_k==0: + pl_module.requires_grad_(False) + select_layers = np.random.choice(range(args.n_layer), args.lisa_r, replace=False) + + for name, module in pl_module.named_modules(): + for pname, param in module.named_parameters(): + if 'emb' in pname or 'head' in pname or '.ln' in pname or 'time' in pname: + param.requires_grad = True + elif 'ln_out' in pname: + param.requires_grad = True + match = re.search(r'\d+', pname) + if match: + number = int(match.group()) + if number in select_layers: + param.requires_grad = True + break + # if args.batch_save==batch_idx : + # to_save_dict = pl_module.state_dict() + # for name, state in to_save_dict.items(): + # if 'img' in name: + # to_save_dict[name] = state + # try: + # my_save( + # args, trainer, + # to_save_dict, + # f"{args.proj_dir}/rwkv-{args.epoch_begin + trainer.current_epoch}-{batch_idx}.pth", + # ) + # except Exception as e: + # 
print('Error\n\n', e, '\n\n') + + + def on_train_epoch_start(self, trainer, pl_module): + args = self.args + if pl.__version__[0]=='2': + dataset = trainer.train_dataloader.dataset + else: + dataset = trainer.train_dataloader.dataset.datasets + assert "MyDataset" in str(dataset) + dataset.global_rank = trainer.global_rank + dataset.real_epoch = int(args.epoch_begin + trainer.current_epoch) + dataset.world_size = trainer.world_size + # print(f'########## world_size {dataset.world_size} global_rank {dataset.global_rank} real_epoch {dataset.real_epoch} ##########') + + def on_train_epoch_end(self, trainer, pl_module): + args = self.args + to_save_dict = {} + if (trainer.is_global_zero) or ('deepspeed_stage_3' in args.strategy): # save pth + if (args.epoch_save > 0 and trainer.current_epoch % args.epoch_save == 0) or (trainer.current_epoch == args.epoch_count - 1): + if args.data_type == 'wds_img': + raw_dict = pl_module.state_dict() + for k in raw_dict: + if k.startswith('encoder.') or k.startswith('decoder.'): + to_save_dict[k] = raw_dict[k] + else: + to_save_dict = pl_module.state_dict() + + if args.data_type=='img' and not args.lora: + for name, state in to_save_dict.items(): + if 'img' in name: + to_save_dict[name] = state + + if args.state_tune or args.train_type=='state': + # lora_dict = {} + # for name, state in to_save_dict.items(): + # if 'state' in name: + # lora_dict[name] = state + lora_dict = to_save_dict + to_save_dict = lora_dict + + + if args.lora: + enable_time_finetune = 'time' in LORA_CONFIG["parts"] + enable_ln_finetune = 'ln' in LORA_CONFIG["parts"] + lora_dict = {} + for name, state in to_save_dict.items(): + if len(args.load_model) == 0: + if 'emb' in name or 'head' in name or 'ln' in name: + lora_dict[name] = state + if args.emb and 'emb' in name: + lora_dict[name] = state + if ('.lora_' in name + or (enable_time_finetune and '.time_' in name) + or (enable_ln_finetune and '.ln' in name)): + lora_dict[name] = state + to_save_dict = lora_dict + + 
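The checkpoint hooks above save only the finetuned tensors: the per-layer `att.time_state` initial states and the speech adapter, matched by key prefix in `on_train_batch_end`. A minimal standalone sketch of that filtering (plain placeholder values stand in for tensors):

```python
def filter_trainable(state_dict):
    """Keep only the tuned tensors, using the same key patterns as
    on_train_batch_end: per-layer RWKV initial states and the adapter."""
    keep = {}
    for key, value in state_dict.items():
        if key.startswith('language_model.blocks.') and 'att.time_state' in key:
            keep[key] = value
        elif key.startswith('speech_encoder.adapter.'):
            keep[key] = value
    return keep
```

Saving only this subset keeps checkpoints to a few megabytes instead of the full multi-gigabyte model, which is why training can afford to write one every 100 steps.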
# try: + # import glob + # files = glob.glob(os.path.join(args.proj_dir, '*.pth')) + # for file in files: + # os.remove(file) + + # my_save( + # args, trainer, + # to_save_dict, + # f"{args.proj_dir}/rwkv-{args.epoch_begin + trainer.current_epoch}.pth", + # ) + # except Exception as e: + # print('Error\n\n', e, '\n\n') + + if trainer.is_global_zero: # logging + trainer.my_log.write(f"{args.epoch_begin + trainer.current_epoch} {trainer.my_epoch_loss:.6f} {math.exp(trainer.my_epoch_loss):.4f} {trainer.my_lr:.8f} {datetime.datetime.now()} {trainer.current_epoch}\n") + trainer.my_log.flush() + + trainer.my_loss_sum = 0 + trainer.my_loss_count = 0 + if (args.epoch_begin + trainer.current_epoch) >= args.my_exit: + exit(0) + + +@rank_zero_only +def generate_init_weight(model, init_weight_name): + mm = model.generate_init_weight() + + if model.args.my_pile_stage == 1: + if len(model.args.load_model) > 0: + print(f"Combine weights from {model.args.load_model}...") + load_dict = torch.load(model.args.load_model, map_location="cpu") + for k in load_dict: + try: + assert k in mm + except: + print('missing', k) + exit(0) + src = load_dict[k] + try: + mm[k] = src.reshape(mm[k].shape) + except: + tmp = mm[k].squeeze().clone() + print(k, src.shape, '-->', mm[k].shape) + ss = src.shape[0] + dd = tmp.shape[0] + for i in range(dd): + pos = i / dd * ss + if pos >= ss - 1: + tmp[i] = src[ss-1] + else: + p0 = int(math.floor(pos)) + ii = pos - p0 + tmp[i] = src[p0] * (1-ii) + src[p0+1] * (ii) + mm[k] = tmp.reshape(mm[k].shape) + sss = src.squeeze().float().cpu().numpy() + print(sss[:10], '...', sss[-10:]) + mmm = mm[k].squeeze().float().cpu().numpy() + print(mmm[:10], '...', mmm[-10:]) + + print(f"Save to {init_weight_name}...") + torch.save(mm, init_weight_name) + + if model.args.my_pile_stage == 1: + print("Done. 
Now go for stage 2.") + exit(0) diff --git a/src/utils.py b/src/utils.py new file mode 100644 index 0000000000000000000000000000000000000000..ea25990b41750cc020a26020b3905eb0e302aeae --- /dev/null +++ b/src/utils.py @@ -0,0 +1,130 @@ +import json, time, random, os +import numpy as np +import torch +from torch.nn import functional as F + +time_slot = {} +time_ref = time.time_ns() + +def record_time(name): + if name not in time_slot: + time_slot[name] = 1e20 + tt = (time.time_ns() - time_ref) / 1e9 + if tt < time_slot[name]: + time_slot[name] = tt + +class TOKENIZER(): + def __init__(self, WORD_NAME, UNKNOWN_CHAR='\ue083'): + if 'list' in str(type(WORD_NAME)): + self.charMode = False + if WORD_NAME[0] == WORD_NAME[1]: + from transformers import PreTrainedTokenizerFast + self.tokenizer = PreTrainedTokenizerFast(tokenizer_file=WORD_NAME[0]) + else: + from transformers import GPT2TokenizerFast + self.tokenizer = GPT2TokenizerFast(WORD_NAME[0], WORD_NAME[1]) + self.vocab_size = len(self.tokenizer) + else: + self.charMode = True + with open(WORD_NAME + '.json', "r", encoding="utf-16") as result_file: + self.word_table = json.load(result_file) + + self.vocab_size = len(self.word_table) + + self.stoi = {v: int(k) for k, v in self.word_table.items()} + self.itos = {int(k): v for k, v in self.word_table.items()} + + self.UNKNOWN_CHAR = self.stoi[UNKNOWN_CHAR] + + def refine_context(self, context): + context = context.strip().split('\n') + for c in range(len(context)): + context[c] = context[c].strip().strip('\u3000').strip('\r') + context = list(filter(lambda c: c != '', context)) + context = '\n' + ('\n'.join(context)).strip() + if context == '': + context = '\n' + return context + + def sample_logits(self, out, x, ctx_len, temperature=1.0, top_p_usual=None, top_p_newline=None): + # out[self.UNKNOWN_CHAR] = -float('Inf') + lastChar = int(x[-1]) + + probs = F.softmax(out, dim=-1) + + if self.charMode: + if self.itos[lastChar] == '\n': + top_p = top_p_newline + else: + top_p = 
top_p_usual + else: + top_p = top_p_usual + + if os.environ["RWKV_RUN_DEVICE"] == "cpu": + probs = probs.numpy() + sorted_probs = np.sort(probs)[::-1] + cumulative_probs = np.cumsum(sorted_probs) + cutoff = float(sorted_probs[np.argmax(cumulative_probs > top_p)]) + probs[probs < cutoff] = 0 + if temperature != 1.0: + probs = np.power(probs, 1.0 / temperature) # numpy arrays have no .pow method + probs = probs / np.sum(probs) + out = np.random.choice(a=len(probs), p=probs) + return out + else: + sorted_probs = torch.sort(probs, descending=True)[0] + cumulative_probs = torch.cumsum(sorted_probs, dim=-1).cpu().numpy() + cutoff = float(sorted_probs[np.argmax(cumulative_probs > top_p)]) + probs[probs < cutoff] = 0 + if temperature != 1.0: + probs = probs.pow(1.0 / temperature) + out = torch.multinomial(probs, num_samples=1)[0] + return out + +def MaybeIsPrime(number): + if FermatPrimalityTest(number) and MillerRabinPrimalityTest(number): + return True + else: + return False + + +def FermatPrimalityTest(number): + if number > 1: + for _ in range(3): # renamed from `time`, which shadowed the time module + randomNumber = random.randint(2, number) - 1 + if pow(randomNumber, number - 1, number) != 1: + return False + return True + else: + return False + + +def MillerRabinPrimalityTest(number): + if number == 2: + return True + elif number == 1 or number % 2 == 0: + return False + oddPartOfNumber = number - 1 + timesTwoDividNumber = 0 + while oddPartOfNumber % 2 == 0: + oddPartOfNumber = oddPartOfNumber // 2 + timesTwoDividNumber = timesTwoDividNumber + 1 + + for _ in range(3): # renamed from `time`, which shadowed the time module + while True: + randomNumber = random.randint(2, number) - 1 + if randomNumber != 0 and randomNumber != 1: + break + + randomNumberWithPower = pow(randomNumber, oddPartOfNumber, number) + + if (randomNumberWithPower != 1) and (randomNumberWithPower != number - 1): + iterationNumber = 1 + + while (iterationNumber <= timesTwoDividNumber - 1) and (randomNumberWithPower != number - 1): + randomNumberWithPower = pow(randomNumberWithPower, 2, number) + iterationNumber = iterationNumber + 1 + if
randomNumberWithPower != (number - 1): + return False + + return True diff --git a/train.py b/train.py new file mode 100644 index 0000000000000000000000000000000000000000..e89b1ec484aa51b02364402eb86ff9bda6014a21 --- /dev/null +++ b/train.py @@ -0,0 +1,456 @@ +######################################################################################################## +# The RWKV Language Model - https://github.com/BlinkDL/RWKV-LM +######################################################################################################## +import os + +import logging +logging.basicConfig(level=logging.INFO) + +if __name__ == "__main__": + from argparse import ArgumentParser + from pytorch_lightning import Trainer + from pytorch_lightning.utilities import rank_zero_info, rank_zero_only + import pytorch_lightning as pl + + rank_zero_info("########## work in progress ##########") + + parser = ArgumentParser() + + parser.add_argument("--op", default="train", type=str) # train or eval + + parser.add_argument("--load_model", default="", type=str) # full path, with .pth + parser.add_argument("--wandb", default="", type=str) # wandb project name. if "" then don't use wandb + parser.add_argument("--proj_dir", default="out", type=str) + parser.add_argument("--random_seed", default="-1", type=int) + + parser.add_argument("--data_type", default="utf-8", type=str) + parser.add_argument("--vocab_size", default=0, type=int) # vocab_size = 0 means auto (for char-level LM and .txt data) + + parser.add_argument("--ctx_len", default=1024, type=int) + parser.add_argument("--epoch_steps", default=1000, type=int) # a mini "epoch" has [epoch_steps] steps + parser.add_argument("--epoch_count", default=500, type=int) # train for this many "epochs". 
will continue afterwards with lr = lr_final + parser.add_argument("--epoch_begin", default=0, type=int) # if you load a model trained for x "epochs", set epoch_begin = x + parser.add_argument("--epoch_save", default=5, type=int) # save the model every [epoch_save] "epochs" + + parser.add_argument("--micro_bsz", default=12, type=int) # micro batch size (batch size per GPU) + parser.add_argument("--n_layer", default=6, type=int) + parser.add_argument("--n_embd", default=512, type=int) + parser.add_argument("--dim_att", default=0, type=int) + parser.add_argument("--dim_ffn", default=0, type=int) + parser.add_argument("--pre_ffn", default=0, type=int) # replace first att layer by ffn (sometimes better) + parser.add_argument("--head_qk", default=0, type=int) # my headQK trick + parser.add_argument("--tiny_att_dim", default=0, type=int) # tiny attention dim + parser.add_argument("--tiny_att_layer", default=-999, type=int) # tiny attention @ which layer + + parser.add_argument("--lr_init", default=6e-4, type=float) # 6e-4 for L12-D768, 4e-4 for L24-D1024, 3e-4 for L24-D2048 + parser.add_argument("--lr_final", default=1e-5, type=float) + parser.add_argument("--warmup_steps", default=-1, type=int) # try 50 if you load a model + parser.add_argument("--beta1", default=0.9, type=float) + parser.add_argument("--beta2", default=0.99, type=float) # use 0.999 when your model is close to convergence + parser.add_argument("--adam_eps", default=1e-8, type=float) + parser.add_argument("--grad_cp", default=0, type=int) # gradient checkpt: saves VRAM, but slower + parser.add_argument("--dropout", default=0, type=float) # try 0.01 / 0.02 / 0.05 / 0.1 + parser.add_argument("--weight_decay", default=0, type=float) # try 0.1 / 0.01 / 0.001 + parser.add_argument("--weight_decay_final", default=-1, type=float) + + parser.add_argument("--my_pile_version", default=1, type=int) # my special pile version + parser.add_argument("--my_pile_stage", default=0, type=int) # my special pile mode + 
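The `lr_init`/`lr_final`/`warmup_steps` arguments above drive the schedule implemented in `train_callback.on_train_batch_start`: a 0.2 → 1.0 linear warmup ramp followed by exponential (linear-in-log-space) decay from `lr_init` to `lr_final`. A standalone sketch of that schedule, assuming the same semantics as the callback code:

```python
import math

def lr_at(step, warmup_steps, total_steps, lr_init, lr_final):
    """Exponential interpolation from lr_init to lr_final, as in
    train_callback.on_train_batch_start, with the same warmup ramp."""
    progress = (step - warmup_steps + 1) / (total_steps - warmup_steps)
    progress = min(1.0, max(0.0, progress))
    # linear in log space: lr_init at progress=0, lr_final at progress=1
    lr = lr_init * math.exp(math.log(lr_final / lr_init) * progress)
    if step < warmup_steps:
        lr = lr * (0.2 + 0.8 * step / warmup_steps)
    return lr
```

Note the callback switches to linear decay when either endpoint is 0 (log of 0 is undefined), and to a cosine schedule when `my_exit_tokens` is set; the sketch covers only the default exponential path.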
parser.add_argument("--my_pile_shift", default=-1, type=int) # my special pile mode - text shift + parser.add_argument("--my_pile_edecay", default=0, type=int) + parser.add_argument("--layerwise_lr", default=1, type=int) # layerwise lr for faster convergence (but slower it/s) + parser.add_argument("--ds_bucket_mb", default=200, type=int) # deepspeed bucket size in MB. 200 seems enough + # parser.add_argument("--cuda_cleanup", default=0, type=int) # extra cuda cleanup (sometimes helpful) + + parser.add_argument("--my_sample_len", default=0, type=int) + parser.add_argument("--my_ffn_shift", default=1, type=int) + parser.add_argument("--my_att_shift", default=1, type=int) + parser.add_argument("--head_size_a", default=64, type=int) # can try larger values for larger models + parser.add_argument("--head_size_divisor", default=8, type=int) + parser.add_argument("--my_pos_emb", default=0, type=int) + parser.add_argument("--load_partial", default=0, type=int) + parser.add_argument("--magic_prime", default=0, type=int) + parser.add_argument("--my_qa_mask", default=0, type=int) + parser.add_argument("--my_random_steps", default=0, type=int) + parser.add_argument("--my_testing", default='x052', type=str) + parser.add_argument("--my_exit", default=99999999, type=int) + parser.add_argument("--my_exit_tokens", default=0, type=int) + + #LORA + parser.add_argument("--emb", action="store_true") + parser.add_argument("--lora", action="store_true") + parser.add_argument("--lora_load", default="", type=str) + parser.add_argument("--lora_r", default=8, type=int) + parser.add_argument("--lora_alpha", default=32, type=float) + parser.add_argument("--lora_dropout", default=0.01, type=float) + parser.add_argument("--lora_parts", default="att,ln,time", type=str) + + #LISA + parser.add_argument("--LISA", action="store_true") + parser.add_argument("--lisa_r", default=2, type=int) + parser.add_argument("--lisa_k", default=100, type=int) + + #PISSA + parser.add_argument("--PISSA", 
action="store_true") + parser.add_argument("--svd_niter", default=4, type=int) + parser.add_argument("--pissa_load", default="", type=str) + parser.add_argument("--pissa_init", default="", type=str) + + #quant + parser.add_argument("--quant", default="none", type=str) + + #dataset + parser.add_argument("--dataload", default="get", type=str) + + #state tuning + parser.add_argument("--state_tune", action="store_true") + + + parser.add_argument("--chunk_ctx", default=512, type=int) + #fla + parser.add_argument("--fla", action="store_true") + parser.add_argument("--train_type", default="none", type=str) + + #loss_mask + parser.add_argument("--loss_mask", action="store_true") + parser.add_argument("--file_path", default="none", type=str) + + if pl.__version__[0]=='2': + parser.add_argument("--accelerator", default="gpu", type=str) + parser.add_argument("--strategy", default="auto", type=str) + parser.add_argument("--devices", default=1, type=int) + parser.add_argument("--num_nodes", default=1, type=int) + parser.add_argument("--precision", default="fp16", type=str) + parser.add_argument("--accumulate_grad_batches", default=4, type=int) + else: + parser = Trainer.add_argparse_args(parser) + args = parser.parse_args() + + ######################################################################################################## + + import os, warnings, math, datetime, sys, time + import numpy as np + import torch + from torch.utils.data import DataLoader + if "deepspeed" in args.strategy: + import deepspeed + from pytorch_lightning import seed_everything + + if args.random_seed >= 0: + print(f"########## WARNING: GLOBAL SEED {args.random_seed} THIS WILL AFFECT MULTIGPU SAMPLING ##########\n" * 3) + seed_everything(args.random_seed) + + np.set_printoptions(precision=4, suppress=True, linewidth=200) + warnings.filterwarnings("ignore", ".*Consider increasing the value of the `num_workers` argument*") + warnings.filterwarnings("ignore", ".*The progress bar already tracks a 
metric with the*") + # os.environ["WDS_SHOW_SEED"] = "1" + + args.my_timestamp = datetime.datetime.today().strftime("%Y-%m-%d-%H-%M-%S") + args.enable_checkpointing = False + args.replace_sampler_ddp = False + args.logger = False + args.gradient_clip_val = 10.0 + args.gradient_accumulation_steps = 4 + args.num_sanity_val_steps = 0 + args.check_val_every_n_epoch = int(1e20) + args.log_every_n_steps = int(1e20) + args.max_epochs = -1 # continue forever + if args.dataload!='get': + args.max_epochs = args.epoch_count + args.betas = (args.beta1, args.beta2) + args.real_bsz = int(args.num_nodes) * int(args.devices) * args.micro_bsz + os.environ["RWKV_MY_TESTING"] = args.my_testing + os.environ["RWKV_CTXLEN"] = str(args.ctx_len) + os.environ["RWKV_HEAD_SIZE_A"] = str(args.head_size_a) + ######state tuning + os.environ["RWKV_TRAIN_TYPE"]='' + if args.train_type=='state': + os.environ["RWKV_TRAIN_TYPE"]='states' + + os.environ["WKV"]='fla' if args.fla else '' + if args.dim_att <= 0: + args.dim_att = args.n_embd + if args.dim_ffn <= 0: + args.dim_ffn = int((args.n_embd * 3.5) // 32 * 32) # default = 3.5x emb size + + if args.data_type == "wds_img": + args.run_name = f"v{args.my_img_version}-{args.my_img_size}-{args.my_img_bit}bit-{args.my_img_clip}x{args.my_img_clip_scale}" + args.proj_dir = f"{args.proj_dir}-{args.run_name}" + else: + args.run_name = f"{args.vocab_size} ctx{args.ctx_len} L{args.n_layer} D{args.n_embd}" + if not os.path.exists(args.proj_dir): + os.makedirs(args.proj_dir) + + if args.my_pile_stage > 0: + magic_prime_bak = args.magic_prime + + if args.my_pile_shift < 0: + args.my_pile_shift = 0 + + if magic_prime_bak > 0: + args.magic_prime = magic_prime_bak + if args.my_qa_mask == 2: + args.epoch_count = 2 * args.magic_prime // 40320 + else: + args.epoch_count = args.magic_prime // 40320 + + args.epoch_steps = 40320 // args.real_bsz + assert args.epoch_steps * args.real_bsz == 40320 + # if args.my_pile_stage == 2: + # assert args.lr_final == args.lr_init + if 
args.my_pile_stage >= 2: # find latest saved model + list_p = [] + for p in os.listdir(args.proj_dir): + if p.startswith("rwkv") and p.endswith(".pth"): + p = ((p.split("-"))[1].split("."))[0] + if p != "final": + if p == "init": + p = -1 + else: + p = int(p) + list_p += [p] + list_p.sort() + max_p = list_p[-1] + if len(list_p) > 1: + args.my_pile_prev_p = list_p[-2] # in case max_p is corrupted + if max_p == -1: + args.load_model = f"{args.proj_dir}/rwkv-init.pth" + else: + args.load_model = f"{args.proj_dir}/rwkv-{max_p}.pth" + if args.warmup_steps < 0: + if args.my_pile_stage == 2: + args.warmup_steps = 10 + else: + args.warmup_steps = 30 + args.epoch_begin = max_p + 1 + + samples_per_epoch = args.epoch_steps * args.real_bsz + tokens_per_epoch = samples_per_epoch * args.ctx_len + try: + deepspeed_version = deepspeed.__version__ + except: + deepspeed_version = None + pass + rank_zero_info( + f""" +############################################################################ +# +# RWKV-5 {args.precision.upper()} on {args.num_nodes}x{args.devices} {args.accelerator.upper()}, bsz {args.num_nodes}x{args.devices}x{args.micro_bsz}={args.real_bsz}, {args.strategy} {'with grad_cp' if args.grad_cp > 0 else ''} +# +# Data = ({args.data_type}), ProjDir = {args.proj_dir} +# +# Epoch = {args.epoch_begin} to {args.epoch_begin + args.epoch_count - 1} (will continue afterwards), save every {args.epoch_save} epoch +# +# Each "epoch" = {args.epoch_steps} steps, {samples_per_epoch} samples, {tokens_per_epoch} tokens +# +# Model = {args.n_layer} n_layer, {args.n_embd} n_embd, {args.ctx_len} ctx_len +# +# Adam = lr {args.lr_init} to {args.lr_final}, warmup {args.warmup_steps} steps, beta {args.betas}, eps {args.adam_eps} +# +# Found torch {torch.__version__}, recommend 1.13.1+cu117 or newer +# Found deepspeed {deepspeed_version}, recommend 0.7.0 (faster than newer versions) +# Found pytorch_lightning {pl.__version__}, recommend 1.9.5 +# 
+############################################################################ +""" + ) + rank_zero_info(str(vars(args)) + "\n") + + assert args.data_type in ["utf-8", "utf-16le", "numpy", "binidx", "dummy", "uint16"] + + if args.lr_final == 0 or args.lr_init == 0: + rank_zero_info("\n\nNote: lr_final = 0 or lr_init = 0. Using linear LR schedule instead.\n\n") + + assert args.precision in ["fp32", "tf32", "fp16", "bf16"] + os.environ["RWKV_FLOAT_MODE"] = args.precision + if args.precision == "fp32": + for i in range(10): + rank_zero_info("\n\nNote: you are using fp32 (very slow). Try bf16 / tf32 for faster training.\n\n") + if args.precision == "fp16": + rank_zero_info("\n\nNote: you are using fp16 (might overflow). Try bf16 / tf32 for stable training.\n\n") + + os.environ["RWKV_JIT_ON"] = "0" + if "deepspeed_stage_3" in args.strategy: + os.environ["RWKV_JIT_ON"] = "0" + + torch.backends.cudnn.benchmark = True + torch.backends.cudnn.enabled = True + if args.precision == "fp32": + torch.backends.cudnn.allow_tf32 = False + torch.backends.cuda.matmul.allow_tf32 = False + else: + torch.backends.cudnn.allow_tf32 = True + torch.backends.cuda.matmul.allow_tf32 = True + + if "32" in args.precision: + args.precision = 32 + elif args.precision == "fp16": + args.precision = 16 + else: + args.precision = "bf16" + + ######################################################################################################## + + from src.trainer import train_callback, generate_init_weight + from src.dataset2 import MyDataset + + # train_data = MyDataset(args) + # args.vocab_size = train_data.vocab_size + from src.rwkvLinear import LORA_CONFIG, LoraLinear + from src.model import RWKV + + if args.quant!='none': + LORA_CONFIG["quant"]=True + model = RWKV(args) + freeze=False + if args.lora or args.LISA or args.train_type=='state': + model.requires_grad_(False) + freeze=True + + if args.state_tune or args.train_type=='state': + for name, module in model.named_modules(): + for pname, param 
in module.named_parameters(): + if 'state' in pname : + param.requires_grad = True + break + + if len(args.load_model) == 0 or args.my_pile_stage == 1: # shall we build the initial weights? + init_weight_name = f"{args.proj_dir}/rwkv-init.pth" + generate_init_weight(model, init_weight_name) # save initial weights + args.load_model = init_weight_name + + rank_zero_info(f"########## Loading {args.load_model}... ##########") + + model.load_state_dict(torch.load(args.load_model, map_location="cpu"), strict=(not freeze)) + + + + + if pl.__version__[0]=='2': + trainer = Trainer(accelerator=args.accelerator,strategy=args.strategy,devices=args.devices,num_nodes=args.num_nodes,precision=args.precision, + logger=args.logger,callbacks=[train_callback(args)],max_epochs=args.max_epochs,check_val_every_n_epoch=args.check_val_every_n_epoch,num_sanity_val_steps=args.num_sanity_val_steps, + log_every_n_steps=args.log_every_n_steps,enable_checkpointing=args.enable_checkpointing,accumulate_grad_batches=args.accumulate_grad_batches,gradient_clip_val=args.gradient_clip_val) + else: + trainer = Trainer.from_argparse_args( + args, + callbacks=[train_callback(args)], + ) + + if trainer.global_rank == 100: + for n in model.state_dict(): + shape = model.state_dict()[n].shape + shape = [i for i in shape if i != 1] + if len(shape) > 1: + print(f"{str(shape[0]).ljust(5)} {str(shape[1]).ljust(5)} {n}") + else: + print(f"{str(shape[0]).ljust(5)} {n}") + + if "deepspeed" in args.strategy: + trainer.strategy.config["zero_optimization"]["allgather_bucket_size"] = args.ds_bucket_mb * 1000 * 1000 + trainer.strategy.config["zero_optimization"]["reduce_bucket_size"] = args.ds_bucket_mb * 1000 * 1000 + + from src.asr import SLAM_ASR + Total_model = SLAM_ASR( + args, + # "facebook/hubert-large-ls960-ft", # SHOULD NOT BE USED, THIS IS A FINETUNED VERSION. 
+ # "microsoft/wavlm-base-plus", + "microsoft/wavlm-large", + # "facebook/hubert-large-ll60k", + model, + ) + + import glob + file_paths = glob.glob('output/rwkv-adapter*.pth') + # file_paths = glob.glob('output/rwkv*.pth') + # 检查是否找到了文件 + if file_paths: + file_path = file_paths[0] + Total_model.load_state_dict(torch.load(file_path), strict=False) + print(f"Loaded model from {file_path}") + else: + print("No weights found. Create origin model.") + + + from datasets import load_from_disk,load_dataset, concatenate_datasets + if(args.op == "train"):# training + + + dataset = load_dataset('librispeech_asr','clean',split='train.100') + dataset2 = load_dataset('librispeech_asr','clean',split='train.360') + dataset3 = load_dataset('librispeech_asr','other',split='train.500') + + + dataset = concatenate_datasets([dataset, dataset2, dataset3]).shuffle() + dataset = MyDataset(args, dataset) + data_loader = DataLoader(dataset, shuffle=True, pin_memory=True, batch_size=args.micro_bsz, num_workers=8, persistent_workers=False, drop_last=True, collate_fn=lambda x: x) + print("train starting...") + trainer.fit(Total_model, data_loader) + + # elif(args.op == "eval"):#prediction + + # dataset = load_dataset('librispeech_asr','clean',split='train.100') + # dataset = dataset.select(range(100)) + + # tokenizer = Total_model.return_tokenizer() + # Total_model.to("cuda", dtype=torch.bfloat16) + + # for data in dataset: + # import librosa + + # output= Total_model.generate(data['audio']['array']) + # output = ''.join(output) + + # print(f"output:\n{output}") + # print(f"answer:\n{data['text'].lower()}") + # print("\n\n") + elif(args.op == "eval"):#wer + from datasets import load_dataset + ds1 = load_dataset("librispeech_asr","clean",split="test") + ds2 = load_dataset("librispeech_asr","other",split="test") + dss = [ds1,ds2] + tokenizer = Total_model.return_tokenizer() + Total_model.to("cuda", dtype=torch.bfloat16) + + from jiwer import wer + def calculate_wer(predictions, references): + 
total_wer = 0.0 + for pred, ref in zip(predictions, references): + total_wer += wer(ref, pred) + average_wer = total_wer / len(predictions) + return average_wer + + from tqdm import tqdm + for split_name, ds in zip(["test-clean", "test-other"], dss): + predictions = [] + references = [] + for i in tqdm(range(len(ds))): + x = ds[i]["audio"]["array"] + z = ds[i]["text"].lower() + # asr(x) + # print(f"Audio length:{len(x)/16000} s") + with torch.no_grad(): + output = Total_model.generate(x) + output = ''.join(output) + predictions.append(output) + references.append(z) + + average_wer = calculate_wer(predictions, references) + print(f"Average WER for {split_name} is: {average_wer}") + elif(args.op == 'predict'): + + import librosa + import time + + audio, sr = librosa.load(args.file_path, sr=None) + audio = librosa.resample(audio, orig_sr=sr, target_sr=16000) + Total_model = Total_model.to("cuda", dtype=torch.bfloat16) + + start_time = time.time() + output = Total_model.generate(audio) + output = ''.join(output) + end_time = time.time() + + print(f"audio: {args.file_path}") + print(f"predict: {output}") + print(f"Response time: {end_time - start_time} seconds") +
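`calculate_wer` above averages per-utterance WER via jiwer, which weights every utterance equally regardless of length; Librispeech results are usually reported as corpus-level WER (total word edits over total reference words), so the two can differ slightly. A dependency-free sketch of the corpus-level variant, with a hand-rolled word-level edit distance standing in for jiwer:

```python
def word_edit_distance(ref, hyp):
    """Levenshtein distance over words, single-row dynamic programming."""
    r, h = ref.split(), hyp.split()
    d = list(range(len(h) + 1))
    for i in range(1, len(r) + 1):
        prev, d[0] = d[0], i          # prev holds D[i-1][j-1]
        for j in range(1, len(h) + 1):
            cur = d[j]                # D[i-1][j] before overwrite
            d[j] = min(d[j] + 1,      # deletion
                       d[j - 1] + 1,  # insertion
                       prev + (r[i - 1] != h[j - 1]))  # substitution/match
            prev = cur
    return d[-1]

def corpus_wer(refs, hyps):
    """Pooled WER: total edits / total reference words."""
    edits = sum(word_edit_distance(r, h) for r, h in zip(refs, hyps))
    words = sum(len(r.split()) for r in refs)
    return edits / words
```

For well-matched test sets the gap between the two definitions is small, but reporting which one was used avoids ambiguity when comparing against published numbers.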