
Fine-Tuned GPT-2 Model for Instruction-Based Tasks

This model is a fine-tuned version of GPT-2, adapted for instruction-based tasks. It has been trained to provide helpful and coherent responses to a variety of prompts.

Model Description

This model is based on OpenAI's GPT-2 architecture and has been fine-tuned to respond to instructions in a format that mimics conversational exchanges. The fine-tuning process enhances its ability to follow specific instructions and generate appropriate responses, making it a valuable tool for interactive applications.

Example Usage

Below is an example of how to use the fine-tuned model in your application:

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load the fine-tuned model and tokenizer
model = GPT2LMHeadModel.from_pretrained("Autsadin/gpt2_instruct")
tokenizer = GPT2Tokenizer.from_pretrained("Autsadin/gpt2_instruct")

# Define the template for instruction-based prompts
template = '''<s>[INST] <<SYS>>
You are a helpful assistant
<</SYS>>

{instruct}[/INST]'''

# Function to format prompts using the template
def format_entry(prompt):
    return template.format(instruct=prompt)

# Define the input prompt
prompt = "What is a dog?"

# Tokenize the input prompt
inputs = tokenizer.encode(format_entry(prompt), return_tensors='pt')

# Generate a response
outputs = model.generate(
    inputs,
    max_length=256,
    num_return_sequences=1,
    top_k=50,
    top_p=0.95,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,
    do_sample=True
)

# Decode and print the generated text
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_text)
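Note that the decoded text still contains the full prompt, because the `[INST]` markers are plain text rather than registered special tokens in the GPT-2 tokenizer. If you want only the model's answer, a small helper (hypothetical, not part of the model itself) can split on the closing tag:

```python
def extract_response(generated_text: str) -> str:
    """Return only the model's answer, dropping the prompt template."""
    marker = "[/INST]"
    if marker in generated_text:
        # Keep everything after the first closing instruction tag.
        return generated_text.split(marker, 1)[1].strip()
    # Fall back to the raw text if the marker is absent.
    return generated_text.strip()

print(extract_response("<s>[INST] What is a dog?[/INST] A dog is a domesticated animal."))
```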


Training Data

The model was fine-tuned on the Alpaca GPT-4 dataset from https://github.com/hy5468/TransLLM/tree/main/data/train; specifically, the alpaca_gpt4_data_en.zip archive was used. This dataset contains a wide range of instruction-based prompts and responses, providing a robust foundation for fine-tuning.
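Alpaca-style records carry instruction, input, and output fields. The exact preprocessing used for this model is not documented, but a plausible sketch of flattening one record into the prompt template shown in the usage example looks like this (field names follow the Alpaca JSON convention):

```python
# Prompt template, as in the usage example above.
template = '''<s>[INST] <<SYS>>
You are a helpful assistant
<</SYS>>

{instruct}[/INST]'''

def build_training_text(record: dict) -> str:
    """Flatten one Alpaca-style record into a single training string (illustrative)."""
    instruction = record["instruction"]
    # Alpaca records may carry an optional context in the "input" field.
    if record.get("input"):
        instruction = f"{instruction}\n{record['input']}"
    return template.format(instruct=instruction) + " " + record["output"]

example = {
    "instruction": "Give three tips for staying healthy.",
    "input": "",
    "output": "1. Eat well. 2. Exercise. 3. Sleep enough.",
}
print(build_training_text(example))
```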

Training Procedure

The fine-tuning process was carried out with the following hyperparameters:

Learning Rate: 2e-5
Batch Size (Train): 4
Batch Size (Eval): 4
Number of Epochs: 1
Weight Decay: 0.01
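As a sketch of how these hyperparameters map onto the Hugging Face Trainer API (the actual training script is not published; the output directory name is illustrative):

```python
from transformers import TrainingArguments

# Hyperparameters from the list above; dataset loading,
# tokenization, and the Trainer itself are omitted.
training_args = TrainingArguments(
    output_dir="gpt2_instruct",       # illustrative path
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    num_train_epochs=1,
    weight_decay=0.01,
)
```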

Training Environment

The model was trained with PyTorch and the Hugging Face transformers library in a GPU-enabled environment to accelerate fine-tuning. The training script sets a consistent random seed across components to ensure reproducibility.

Model size: 0.1B parameters (Safetensors, F32 tensors)