children-story-generator

This is a fine-tuned LoRA adapter for personalized children's story generation, based on Llama 2 7B Chat.

Model Description

This model generates personalized stories for children based on:

  • Child's name
  • Age
  • Mood (happy, curious, brave, excited, scared)
  • Favorite animal

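These fields are combined into a single Llama 2 chat-style instruction. A minimal sketch of the prompt construction (matching the prompt format used in the usage examples):

```python
def build_prompt(name, age, mood, animal):
    # Llama 2 chat format: the instruction is wrapped in [INST] ... [/INST]
    return (f"<s>[INST] Generate a story for {name}, age {age}, "
            f"mood: {mood}, favorite animal: {animal} [/INST]")
```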
Usage

Using Transformers + PEFT

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Load base model with 4-bit quantization
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "NousResearch/llama-2-7b-chat-hf",
    quantization_config=bnb_config,
    device_map="auto",
    torch_dtype=torch.float16,
)

# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "GhulamMustafa0/children-story-generator")
tokenizer = AutoTokenizer.from_pretrained("NousResearch/llama-2-7b-chat-hf")

# Generate story
prompt = "<s>[INST] Generate a story for Emma, age 6, mood: curious, favorite animal: fox [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=500, temperature=0.7, do_sample=True)
story = tokenizer.decode(outputs[0], skip_special_tokens=True)
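
Note that the decoded output still contains the prompt; the story text follows the closing `[/INST]` tag. A small helper (`extract_story` is our name, not part of the model card) to strip it:

```python
def extract_story(decoded: str) -> str:
    # Everything after the closing [/INST] tag is the generated story.
    marker = "[/INST]"
    idx = decoded.find(marker)
    return decoded[idx + len(marker):].strip() if idx != -1 else decoded.strip()
```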

Using Inference API

import requests

API_URL = "https://api-inference.huggingface.co/models/GhulamMustafa0/children-story-generator"
headers = {"Authorization": "Bearer YOUR_HF_TOKEN"}

def generate_story(name, age, mood, animal):
    prompt = f"<s>[INST] Generate a story for {name}, age {age}, mood: {mood}, favorite animal: {animal} [/INST]"
    
    response = requests.post(
        API_URL,
        headers=headers,
        json={
            "inputs": prompt,
            "parameters": {
                "max_new_tokens": 500,
                "temperature": 0.7,
                "top_p": 0.9,
                "do_sample": True
            }
        }
    )
    return response.json()[0]['generated_text']

# Generate a story
story = generate_story("Alex", "6", "brave", "lion")
print(story)
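
The hosted Inference API may return HTTP 503 with an `estimated_time` field while the model is loading. A hedged sketch of a retry wrapper (`build_payload` and `generate_story_with_retry` are illustrative names, not part of this repo):

```python
import time
import requests

API_URL = "https://api-inference.huggingface.co/models/GhulamMustafa0/children-story-generator"
headers = {"Authorization": "Bearer YOUR_HF_TOKEN"}

def build_payload(name, age, mood, animal):
    prompt = (f"<s>[INST] Generate a story for {name}, age {age}, "
              f"mood: {mood}, favorite animal: {animal} [/INST]")
    return {"inputs": prompt,
            "parameters": {"max_new_tokens": 500, "temperature": 0.7,
                           "top_p": 0.9, "do_sample": True}}

def generate_story_with_retry(name, age, mood, animal, retries=3):
    for _ in range(retries):
        response = requests.post(API_URL, headers=headers,
                                 json=build_payload(name, age, mood, animal))
        if response.status_code == 503:
            # Model still loading; wait for the server's estimate, then retry.
            time.sleep(response.json().get("estimated_time", 20))
            continue
        response.raise_for_status()
        return response.json()[0]["generated_text"]
    raise RuntimeError("model did not become available")
```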

Training Details

  • Base Model: Llama 2 7B Chat
  • Training Method: LoRA (Low-Rank Adaptation)
  • Training Data: Personalized children's stories
  • Framework: Transformers, PEFT, BitsAndBytes

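The exact LoRA hyperparameters are not published. For orientation only, a hypothetical PEFT configuration of the kind commonly used for QLoRA fine-tuning of Llama 2 7B (every value below is an assumption, not documented for this adapter):

```python
from peft import LoraConfig

# Hypothetical values -- the actual training hyperparameters are not published.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)
```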
Limitations

  • Designed for children's stories only
  • Best results with ages 4-10
  • English language only

License

This model inherits the Llama 2 license from the base model.
