
🤖 Thoth Text Model

📘 Overview

Thoth Text is an Arabic language model built on Qwen2.5-7B-Instruct.
It was fine-tuned with LoRA (Low-Rank Adaptation) to improve Arabic text understanding
and to generate accurate answers in general and educational domains.


🧠 Base Model

  • Base: Qwen/Qwen2.5-7B-Instruct
  • Adapter: LoRA fine-tuned using Axolotl
  • Architecture: Transformer Decoder (Causal LM)
  • Precision: bfloat16
  • Frameworks: PyTorch + Transformers + PEFT
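
Given the stack above, loading the adapter for inference follows the standard Transformers + PEFT flow. A minimal sketch, assuming the LoRA weights live in an adapter repository; the repo id passed to `load_thoth` is a placeholder, not a confirmed model id:

```python
def load_thoth(adapter_repo: str, base_repo: str = "Qwen/Qwen2.5-7B-Instruct"):
    """Load the base model in bfloat16 and attach the LoRA adapter.

    Imports are kept inside the function so it can be defined without
    transformers/peft installed; both are required to actually call it.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(base_repo)
    base = AutoModelForCausalLM.from_pretrained(
        base_repo, torch_dtype=torch.bfloat16, device_map="auto"
    )
    model = PeftModel.from_pretrained(base, adapter_repo)  # attach LoRA weights
    return tokenizer, model
```

Calling `load_thoth("<your-adapter-repo>")` downloads the full 7B base model, so it needs a GPU with enough memory for bfloat16 weights (roughly 15 GB).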

๐Ÿ‹๏ธ Fine-tuning Details

  • Library: Axolotl
  • Adapter Type: LoRA
  • Learning Rate: 2e-4
  • LoRA α: 16
  • LoRA r: 8
  • Dropout: 0.05
  • Batch Size: 16
  • Epochs: 1
  • Optimizer: adamw_bnb_8bit
  • Sequence Length: 4096
  • Compute: RunPod GPU Instance
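
For reference, the hyperparameters above map onto an Axolotl config along these lines. This is a sketch, not the original file: field names follow Axolotl's YAML schema, and anything not listed above (e.g. the micro-batch vs. gradient-accumulation split) is an assumption:

```yaml
base_model: Qwen/Qwen2.5-7B-Instruct
adapter: lora
lora_r: 8
lora_alpha: 16
lora_dropout: 0.05
learning_rate: 2e-4
micro_batch_size: 16   # card lists batch size 16; accumulation split unknown
num_epochs: 1
optimizer: adamw_bnb_8bit
sequence_len: 4096
bf16: true
datasets:
  - path: /workspace/fine-tuning/data/trump.json
    type: alpaca
```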

📂 Dataset

โš ๏ธ Note:
The dataset used for fine-tuning is private and locally stored at
/workspace/fine-tuning/data/trump.json

It follows the Alpaca-style JSON format:

[
  {
    "instruction": "اشرح لي مفهوم الذكاء الاصطناعي.",
    "input": "",
    "output": "الذكاء الاصطناعي هو فرع من علوم الحاسوب يهتم بجعل الأنظمة قادرة على التفكير والتعلم."
  }
]
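
Records in this format are typically rendered into a single training prompt. A stdlib-only sketch of that step; the template below is the common Alpaca layout, not necessarily the exact one used for this run:

```python
import json

ALPACA_TEMPLATE = (
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n{output}"
)

def render_example(record: dict) -> str:
    """Format one Alpaca-style record; drop the Input block when it is empty."""
    if not record.get("input"):
        return (
            "### Instruction:\n{instruction}\n\n"
            "### Response:\n{output}"
        ).format(**record)
    return ALPACA_TEMPLATE.format(**record)

# Load a JSON array of records (same shape as the example above)
data = json.loads(
    '[{"instruction": "...", "input": "", "output": "..."}]'
)
prompt = render_example(data[0])
```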