---
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
base_model:
- Qwen/Qwen2.5-0.5B-Instruct
tags:
- conversational
- safetensors
- qwen2
---
# ChatGPT-5
An ultra-fast AI chat model based on the Qwen2.5-0.5B-Instruct architecture (494M parameters).
## Features
- ⚡ Ultra-fast — Lightweight 494M-parameter model for instant responses
- 💬 Conversational — Optimized for multi-turn chat (see the quick-start sketch after this list)
- 🔧 Instruction Following — Follows instructions accurately
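
Because the card declares `pipeline_tag: text-generation` and `library_name: transformers`, the model can also be driven through the high-level `pipeline` API. A minimal sketch, assuming a recent transformers release whose text-generation pipeline accepts chat-style message lists (the prompt text is only an illustration):

```python
from transformers import pipeline

# Load the model through the task pipeline declared in the card metadata
pipe = pipeline("text-generation", model="ScottzillaSystems/ChatGPT-5")

# Recent transformers versions apply the model's chat template automatically
# when the input is a list of {"role": ..., "content": ...} messages
messages = [{"role": "user", "content": "Give me one tip for writing clear emails."}]
result = pipe(messages, max_new_tokens=128)

# The returned conversation ends with the assistant's reply
print(result[0]["generated_text"][-1]["content"])
```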
## Chat UI
Try it now: ChatGPT-5 Chat
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer from the Hub
model = AutoModelForCausalLM.from_pretrained("ScottzillaSystems/ChatGPT-5")
tokenizer = AutoTokenizer.from_pretrained("ScottzillaSystems/ChatGPT-5")

# Build a chat prompt using the model's chat template
messages = [{"role": "user", "content": "Hello!"}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Generate and decode a response
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
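
For multi-turn chat, append each assistant reply to the message history before generating the next turn. A minimal sketch reusing `model` and `tokenizer` from the snippet above; the questions are only illustrations, and only the newly generated tokens are decoded so the echoed prompt is not repeated:

```python
# First turn
messages = [{"role": "user", "content": "Recommend a book about space."}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens, not the prompt
reply = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(reply)

# Append the reply and a follow-up question, then generate again with the full history
messages.append({"role": "assistant", "content": reply})
messages.append({"role": "user", "content": "Why do you recommend that one?"})

text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```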