Wikipedia-trained Phi-2 (Merged)

This is a fine-tuned version of Microsoft's Phi-2 model, adapted for Wikipedia-style content generation. The LoRA weights have been merged into the base model for easier inference.

Model Details

  • Base Model: microsoft/phi-2
  • Fine-tuning Method: LoRA (Low-Rank Adaptation), with the adapter weights merged into the base model (see the merge sketch below)
  • Training Data: Wikipedia articles
  • Training Objective: Causal language modeling (text generation and completion)
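
The exact merge procedure is not included in this repository; the sketch below shows how a LoRA adapter is typically folded into Phi-2 with PEFT's merge_and_unload. The adapter path and output directory are placeholders.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the original Phi-2 base model
base = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True
)

# Attach the trained LoRA adapter (path is a placeholder)
model = PeftModel.from_pretrained(base, "path/to/wikipedia-lora-adapter")

# Fold the adapter weights into the base weights and drop the PEFT wrappers
merged = model.merge_and_unload()

# Save a standalone checkpoint that can be loaded without PEFT
merged.save_pretrained("llm-wikipedia-merged")
AutoTokenizer.from_pretrained("microsoft/phi-2").save_pretrained("llm-wikipedia-merged")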

Usage

With Transformers

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load model
model = AutoModelForCausalLM.from_pretrained(
    "iZELX1/llm-wikipedia",
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True
)

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained("iZELX1/llm-wikipedia")

# Generate text
inputs = tokenizer("The history of artificial intelligence", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=0.7)
text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(text)
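
Because the LoRA weights are already merged, no PEFT adapter loading is needed at inference time. Alternatively, the model can be used through the text-generation pipeline; the snippet below is a minimal sketch with illustrative generation settings.

import torch
from transformers import pipeline

# Build a text-generation pipeline around the merged model
generator = pipeline(
    "text-generation",
    model="iZELX1/llm-wikipedia",
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True
)

result = generator(
    "The history of artificial intelligence",
    max_new_tokens=100,
    do_sample=True,
    temperature=0.7
)
print(result[0]["generated_text"])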

Training Details

  • LoRA Rank: 64
  • LoRA Alpha: 16
  • Target Modules: ["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"]
  • Training Steps: 2,184
  • Batch Size: 4
  • Learning Rate: 2e-4
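
The full training script is not published; the hyperparameters above correspond roughly to a PEFT LoraConfig like the one below. The target module names are taken from the list above; dropout, bias, and task type are assumptions.

from peft import LoraConfig

lora_config = LoraConfig(
    r=64,                      # LoRA rank (from the list above)
    lora_alpha=16,             # LoRA alpha (from the list above)
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    lora_dropout=0.05,         # assumption: dropout is not reported above
    bias="none",               # assumption
    task_type="CAUSAL_LM",
)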

Performance

  • Perplexity: ~12.5 (on the validation set)
  • BLEU Score: ~0.15
  • ROUGE-1 F1: ~0.35
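
The evaluation script behind these numbers is not included here; validation perplexity of the kind reported above is typically computed as the exponential of the mean per-token cross-entropy. A minimal sketch follows; the held-out texts and max_length are placeholders.

import math
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "iZELX1/llm-wikipedia",
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("iZELX1/llm-wikipedia")

def perplexity(texts, max_length=512):
    """exp of the mean per-token cross-entropy over the given texts."""
    total_loss, total_tokens = 0.0, 0
    for text in texts:
        enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=max_length).to(model.device)
        with torch.no_grad():
            out = model(**enc, labels=enc["input_ids"])
        n_pred = enc["input_ids"].size(1) - 1  # labels are shifted inside the model
        total_loss += out.loss.item() * n_pred
        total_tokens += n_pred
    return math.exp(total_loss / total_tokens)

# Placeholder held-out texts; the actual validation set is not published
print(perplexity([
    "Alan Turing was an English mathematician and computer scientist.",
    "The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
]))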

Limitations

This is a personal project for educational purposes. The model may:

  • Generate factually incorrect information
  • Exhibit biases present in the training data
  • Produce inappropriate content
  • Have limited knowledge outside of Wikipedia-style content

License

MIT License - see the LICENSE file for details.

Acknowledgments

  • Microsoft for the Phi-2 base model
  • Hugging Face for the transformers library
  • The PEFT library for the LoRA implementation