# Plagon X 69 LLM
Plagon X 69 is a custom Large Language Model (LLM) developed by Shehab, optimized for real-time performance on local hardware (such as the AMD Ryzen 5 5600GT). It is based on the GPT-2 architecture, customized for specific conversational tasks.
## Model Details
- Architecture: GPT-2 (Customized)
- Parameters: ~124 Million
- Optimization: FP32 (SafeTensors) & TFLite (8-bit Quantized)
- Developer: Shehab
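The ~124 million figure matches the standard GPT-2 small configuration (12 layers, 768-dimensional hidden states, a 50,257-token vocabulary, and 1,024 positions). As a sketch, assuming stock GPT-2 small dimensions (which this model's customizations may alter), the count can be reproduced from the architecture alone:

```python
# Parameter count for a stock GPT-2 small model.
# Dimensions are the standard GPT-2 small values, assumed here;
# Plagon's customizations could change them.
n_layer, d_model, d_ff = 12, 768, 3072
vocab, n_pos = 50257, 1024

embeddings = vocab * d_model + n_pos * d_model  # token + position tables
per_layer = (
    2 * 2 * d_model                              # two LayerNorms (weight + bias)
    + d_model * 3 * d_model + 3 * d_model        # fused QKV projection
    + d_model * d_model + d_model                # attention output projection
    + d_model * d_ff + d_ff                      # MLP up-projection
    + d_ff * d_model + d_model                   # MLP down-projection
)
final_ln = 2 * d_model
total = embeddings + n_layer * per_layer + final_ln
print(f"{total:,}")  # 124,439,808 ≈ 124M (the LM head is tied to the token embedding)
```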
## Intended Use
This model is designed for interactive chat applications and local AI experimentation. It is lightweight enough to run on mid-range CPUs and integrated GPUs without significant lag.
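The 8-bit TFLite variant listed under Model Details is what keeps the CPU footprint small: each FP32 weight is mapped to an int8 code via a per-tensor scale and zero-point. A minimal standalone sketch of that idea (illustrative only, not the actual TFLite export path):

```python
def quantize_int8(weights):
    """Affine (asymmetric) 8-bit quantization of a list of FP32 values."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255 or 1.0            # guard against constant tensors
    zero_point = round(-lo / scale) - 128     # int8 code that represents 0.0
    q = [max(-128, min(127, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Map int8 codes back to approximate FP32 values."""
    return [(code - zero_point) * scale for code in q]

weights = [-0.5, 0.0, 0.25, 1.0]              # toy FP32 weights
q, s, z = quantize_int8(weights)
restored = dequantize(q, s, z)                # recovers originals to within one scale step
```

Dequantization error is bounded by about half the scale step, which is why 8-bit weights usually cost little quality while quartering memory versus FP32.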
## How to Run Locally
You can use the `plagon_web.py` script to launch a web-based interface for this model, or load it directly with `transformers`:
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model_id = "sbshehab200/plagonx69"
tokenizer = GPT2Tokenizer.from_pretrained(model_id)
model = GPT2LMHeadModel.from_pretrained(model_id)

# Example interaction
input_text = "Hello Plagon!"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
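`generate` as called above decodes greedily; for chat-style output you may prefer sampling, e.g. `model.generate(**inputs, do_sample=True, temperature=0.8, max_new_tokens=50)`. Conceptually, temperature rescales the logits before the softmax that produces next-token probabilities. A standalone sketch of that single step (toy logits, not real model output):

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Sample a token index after temperature scaling and softmax."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                           # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()                          # inverse-CDF sampling
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r <= cum:
            return i
    return len(probs) - 1

logits = [2.0, 1.0, 0.1]                      # fake next-token logits
random.seed(0)
token_id = sample_next_token(logits, temperature=0.8)
```

Lower temperatures sharpen the distribution toward the highest logit (approaching greedy decoding); higher temperatures flatten it and make replies more varied.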
## License
This project is licensed under the MIT License.
Created with ❤️ by Shehab.