---
language:
  - en
pipeline_tag: text-generation
license: apache-2.0
base_model: Qwen/Qwen2.5-0.5B-Instruct
tags:
  - unsloth
  - qwen
  - text-generation
  - code
---

# Model Card for GEAR-2-500m-Identity

## Model Details

### Model Description

GEAR-2-500m-Identity is a lightweight Transformer LLM with approximately 0.5 billion parameters, fine-tuned on the Qwen2.5 architecture using Unsloth. It is designed to run extremely fast on local machines (CPU/Edge) with minimal memory usage. The model embodies the persona of Gear, an intelligent assistant created by HeavensHack.

It is capable of Python code generation and general chat. While efficient, it is a small model and may struggle with complex reasoning compared to larger models.

- **Developed by:** HeavensHack
- **Model type:** Qwen2ForCausalLM (causal language model)
- **Language(s):** English, Russian (new); Python (code)
- **License:** Apache 2.0
- **Finetuned from model:** Qwen/Qwen2.5-0.5B-Instruct

## Uses

### Direct Use

- Fast local chat assistant
- Python code generation and debugging

### Out-of-Scope Use

- Complex mathematical reasoning
- High-stakes decision making
- Long-context analysis requiring high accuracy

## Bias, Risks, and Limitations

- **Hallucinations:** Due to its 0.5B parameter size, the model may generate plausible but incorrect information.
- **Identity:** The model is strictly fine-tuned to identify itself as "Gear" by HeavensHack.
- **Inconsistency:** Behavior may vary in long conversations.

### Recommendations

- Use for educational purposes, hobby projects, or low-resource environments.
- Verify any generated code before running it in production.

## How to Get Started

- Load the model using Unsloth or standard Hugging Face `transformers`.
- Optimized for local inference.
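Loading with `transformers` can be sketched as follows. The repository id below is an assumption (check the model page for the exact id), the system prompt is illustrative, and the prompt is formatted manually in Qwen2.5's ChatML style:

```python
# Sketch: loading GEAR-2-500m-Identity with Hugging Face transformers.
# MODEL_ID is an assumed repo id; replace it with the actual one.
MODEL_ID = "HeavensHackDev/Gear-2-500m"

def build_chat_prompt(
    user_message: str,
    system_message: str = "You are Gear, an assistant created by HeavensHack.",
) -> str:
    """Format a single-turn prompt in Qwen2.5's ChatML style."""
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

def generate(user_message: str, max_new_tokens: int = 256) -> str:
    """Load the model and generate a reply (downloads weights on first run)."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
    inputs = tokenizer(build_chat_prompt(user_message), return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )

# Example:
# print(generate("Write a Python function that reverses a string."))
```

In practice, `tokenizer.apply_chat_template` can replace the manual `build_chat_prompt` once the tokenizer config is available.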

## Training Details

- **Training Data:** Custom identity dataset (HeavensHack), Alpaca (English), and Python code instructions.
- **Training Procedure:** Fine-tuned using Unsloth (LoRA) for efficiency.
- **Training Regime:** Mixed precision (BF16/FP16).
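An Unsloth LoRA setup of the kind described above can be sketched roughly as below. The rank, alpha, and target modules are assumptions for illustration, not the model's actual training configuration:

```python
# Illustrative only: a minimal Unsloth LoRA setup over the stated base model.
BASE_MODEL = "Qwen/Qwen2.5-0.5B-Instruct"

def lora_kwargs() -> dict:
    """LoRA settings for FastLanguageModel.get_peft_model (assumed values)."""
    return dict(
        r=16,               # LoRA rank (assumption)
        lora_alpha=16,
        lora_dropout=0.0,
        # Typical attention/MLP projection targets for Qwen2-style models:
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                        "gate_proj", "up_proj", "down_proj"],
    )

def build_peft_model(max_seq_length: int = 2048):
    """Load the base model and attach LoRA adapters via Unsloth."""
    from unsloth import FastLanguageModel  # pip install unsloth

    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name=BASE_MODEL,
        max_seq_length=max_seq_length,
        dtype=None,  # auto-selects BF16/FP16 per hardware (mixed precision)
    )
    model = FastLanguageModel.get_peft_model(model, **lora_kwargs())
    return model, tokenizer
```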

## Evaluation

- Validated for identity retention and basic coding tasks.
- Not benchmarked for enterprise production use.

## Environmental Impact

- Extremely low compute cost during training due to Unsloth optimization.

## Model Card Contact

- Author: HeavensHackDev

## Availability

- At first, only the GGUF file will be available; the other formats will follow later.
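Until the full weights are published, the GGUF file can be run locally via llama.cpp bindings. A minimal sketch with `llama-cpp-python` follows; the file path is a placeholder, not the actual artifact name:

```python
def chat_with_gear(model_path: str, user_message: str, max_tokens: int = 256) -> str:
    """Run one chat completion against a local GGUF build of the model."""
    from llama_cpp import Llama  # pip install llama-cpp-python

    llm = Llama(model_path=model_path, n_ctx=2048, verbose=False)
    result = llm.create_chat_completion(
        messages=[{"role": "user", "content": user_message}],
        max_tokens=max_tokens,
    )
    return result["choices"][0]["message"]["content"]

# Example (the path is a placeholder for the downloaded GGUF file):
# print(chat_with_gear("./gear-2-500m.gguf", "Hello, who are you?"))
```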