---
language:
  - en
library_name: peft
pipeline_tag: text-generation
license: apache-2.0
tags:
  - text generation
  - multitask
  - email
  - stories
  - qa
  - summarization
  - chat
base_model:
  - Qwen/Qwen1.5-1.8B-Chat
---

# Gilbert-Qwen-Multitask-LoRA

A LoRA adapter fine-tuned from Qwen1.5-1.8B-Chat for multiple text-generation tasks, including email drafting, story continuation, technical Q&A, news summarization, and chat responses.

## Model Description

This model is a LoRA (Low-Rank Adaptation) adapter fine-tuned on Qwen1.5-1.8B-Chat for multitask text generation across 5 domains:

- ✉️ **Email Drafting**: Generate professional email replies
- 📖 **Story Continuation**: Continue fictional narratives
- 💻 **Technical Q&A**: Answer programming and technical questions
- 📰 **News Summarization**: Create concise summaries of articles
- 💬 **Chat Responses**: Generate conversational replies

## Training Details

- **Base Model**: Qwen/Qwen1.5-1.8B-Chat
- **Fine-tuning Method**: LoRA (Low-Rank Adaptation)
- **Quantization**: 4-bit (QLoRA)
- **Training Tasks**: Multi-task learning
- **Training Steps**: 15,000
- **Learning Rate**: 3e-5
- **Context Length**: 1024 tokens

## Usage

### Installation

```bash
pip install transformers peft torch accelerate bitsandbytes
```