---
language:
  - en
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
tags:
  - llama
  - causal-lm
  - code-generation
  - lightweight
  - 3.08B
base_model:
  - Qwen/Qwen2.5-Coder-3B-Instruct
---

# HOS-OSS-3.08B

HOS-OSS-3.08B is a lightweight 3.08B-parameter causal language model optimized for text and code generation.
It is designed for fast inference, low resource usage, and local deployment.


## 🚀 Overview

- Model size: ~3.08B parameters
- Architecture: LLaMA-style decoder-only transformer (a quick config check follows this list)
- Base model: Qwen2.5-Coder-3B-Instruct (distilled / adapted)
- Framework: 🤗 Transformers
- Use cases:
  - Code generation
  - Instruction following
  - Chat-style completion
  - Lightweight local AI assistant
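
You can confirm the size and architecture details above without downloading the full weights, since `AutoConfig` fetches only the small `config.json`. A minimal sketch; the field names shown follow the standard LLaMA/Qwen2 config layout in 🤗 Transformers and are an assumption about this checkpoint:

```python
from transformers import AutoConfig

# Loads only config.json, not the model weights.
config = AutoConfig.from_pretrained("hydffgg/HOS-OSS-3.08B")

print(config.model_type)         # architecture family, e.g. "llama" or "qwen2"
print(config.hidden_size)        # transformer width
print(config.num_hidden_layers)  # decoder depth
```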

## ⚡ Features

- Fast inference on low-end GPUs
- Runs on Kaggle / Colab without large VRAM (see the low-VRAM loading sketch after this list)
- Suitable for edge deployment
- Clean instruction-response formatting
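
How much VRAM you need depends on how you load the model. A common recipe on free-tier GPUs is half precision with automatic device placement; 4-bit quantization cuts memory further. This is a generic Transformers pattern rather than anything specific to this model card, and it assumes `accelerate` (and, for the 4-bit path, `bitsandbytes`) is installed:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model_name = "hydffgg/HOS-OSS-3.08B"

# Half precision: roughly halves memory vs. float32 (~6 GB of weights at 3.08B params).
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto",  # spreads layers across GPU/CPU automatically (needs `accelerate`)
)

# Optional: 4-bit quantization for tighter VRAM budgets (needs `bitsandbytes`).
model_4bit = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
)
```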

## 🧠 Example Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_name = "hydffgg/HOS-OSS-3.08B"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# The newline between the turns must be escaped; a bare line break
# inside the string literal is a syntax error.
prompt = "User: Write a Python Hello World\nAssistant:"

inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=512,
        do_sample=True,   # temperature only takes effect when sampling is enabled
        temperature=0.7,
    )

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
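
Because the base model is an instruct-tuned Qwen checkpoint, the tokenizer may ship a chat template; if it does, `apply_chat_template` is usually more robust than a hand-written `User:`/`Assistant:` prompt. A sketch continuing from the snippet above, assuming the template was inherited from the base model:

```python
messages = [
    {"role": "user", "content": "Write a Python Hello World"},
]

# Renders the messages with the tokenizer's built-in chat template, if present.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # appends the assistant-turn marker
    return_tensors="pt",
)

with torch.no_grad():
    outputs = model.generate(input_ids, max_new_tokens=512, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```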