How to use MeetPEFT/MeetPEFT-7B-16K with Transformers:
```python
# Use a pipeline as a high-level helper
# Warning: Pipeline type "summarization" is no longer supported in transformers v5.
# You must load the model directly (see below) or downgrade to v4.x with:
#   pip install "transformers<5.0.0"
from transformers import pipeline

pipe = pipeline("summarization", model="MeetPEFT/MeetPEFT-7B-16K")
```
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("MeetPEFT/MeetPEFT-7B-16K")
model = AutoModelForCausalLM.from_pretrained("MeetPEFT/MeetPEFT-7B-16K")
```
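Once loaded directly, the model can be used like any causal LM. Below is a minimal generation sketch; the prompt template and generation settings are assumptions, since the model card does not document an official format:

```python
# Minimal generation sketch. The prompt wording and generation settings
# below are illustrative assumptions, not a documented format.
transcript = "Alice: Let's start with the budget review. Bob: The Q3 numbers are in ..."
prompt = f"Summarize the following meeting transcript:\n\n{transcript}\n\nSummary:"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the prompt.
summary = tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
print(summary)
```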
We fine-tune a Llama-2-7b model with quantized LongLoRA, extending its context length from 4k to 16k tokens.
The model is fine-tuned on the MeetingBank and QMSum meeting-summarization datasets.
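The exact training recipe is not reproduced here, but the sketch below shows what a quantized LongLoRA-style setup typically looks like with `peft` and `bitsandbytes`. All hyperparameters are illustrative assumptions, not the authors' configuration, and LongLoRA's shifted sparse attention (S²-Attn) training trick is omitted:

```python
# Illustrative quantized LongLoRA-style setup; hyperparameters are
# assumptions, not the authors' actual training configuration.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization keeps the frozen base weights small.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Linear RoPE scaling with factor 4 stretches positions from 4k to 16k.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    rope_scaling={"type": "linear", "factor": 4.0},
)

# LongLoRA makes embeddings and norms trainable in addition to the
# low-rank adapters on the attention projections.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    modules_to_save=["embed_tokens", "norm"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```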