---
base_model: microsoft/phi-2
library_name: peft
license: mit
tags:
  - text-generation
pipeline_tag: text-generation
---

# phi2-memory-lora

This repository contains the LoRA adapter weights for microsoft/phi-2, fine-tuned to maintain short-term conversational memory for DeepTalks.

## Model Details

### Model Description

A lightweight LoRA adapter that injects memory awareness into Phi-2. It helps the assistant recall recent turns in a conversation and respond accordingly, without retraining the full model.

  • Developed by: Sourish
  • Finetuned from: microsoft/phi-2
  • License: MIT
  • Language: English (trained on English data; behavior on other languages is untested)

## Usage

Load the base model, then apply the adapter on top of it with PEFT:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# 1) Load the base model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2")

# 2) Resize embeddings first, if the adapter's tokenizer adds tokens
model.resize_token_embeddings(len(tokenizer))

# 3) Apply the LoRA adapter (pass the adapter repo id, not a LoraConfig)
model = PeftModel.from_pretrained(model, "sourize/phi2-memory-lora")

# 4) Ready to generate!
```
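Since the adapter is trained to use recent turns as short-term memory, the prompt should carry the conversation history. A minimal sketch of a sliding-window prompt builder is below; the `User:`/`Assistant:` format and the `max_turns` window are illustrative assumptions, not a documented prompt template for this adapter:

```python
def build_prompt(history, user_message, max_turns=6):
    """Keep only the most recent turns as short-term memory in the prompt.

    Assumed format: plain "Role: text" lines ending with "Assistant:".
    """
    recent = history[-max_turns:]  # drop turns beyond the memory window
    lines = [f"{role}: {text}" for role, text in recent]
    lines.append(f"User: {user_message}")
    lines.append("Assistant:")
    return "\n".join(lines)


history = [
    ("User", "My name is Ada."),
    ("Assistant", "Nice to meet you, Ada!"),
]
prompt = build_prompt(history, "What is my name?")
print(prompt)
```

The resulting string can be passed to `tokenizer(...)` and `model.generate(...)` as usual; the window size trades recall depth against context length.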


## Citation

```bibtex
@misc{sourize_phi2_memory_lora,
  title        = {phi2-memory-lora: LoRA adapter for Phi-2 with conversational memory},
  author       = {Sourish},
  year         = {2025},
  howpublished = {\url{https://huggingface.co/sourize/phi2-memory-lora}},
  note         = {MIT license}
}
```