---
license: apache-2.0
---
This repository houses a fork of [`togethercomputer/LLaMA-2-7B-32K`](https://huggingface.co/togethercomputer/LLaMA-2-7B-32K)'s [`modeling_flash_llama.py`](https://huggingface.co/togethercomputer/LLaMA-2-7B-32K/blob/main/modeling_flash_llama.py), with a [fix for padding of attention weights](https://huggingface.co/togethercomputer/LLaMA-2-7B-32K/discussions/17) merged into it.