Qwen3-GenX1.9-30B-A3B-LoRA

GenX Overview

GenX is a manufacturing-specialized language model developed by the INTERX Gen.AI team.

GenX๋Š” ์ž์ฒด ์ˆ˜์ง‘ํ•œ ์ œ์กฐ ๋„๋ฉ”์ธ ๋ฐ์ดํ„ฐ๋ฅผ ์ด์šฉํ•ด ํ•™์Šต๋˜์—ˆ์œผ๋ฉฐ, ๋›ฐ์–ด๋‚œ ์ œ์กฐ ์ง€์‹์„ ๋ฐ”ํƒ•์œผ๋กœ ์‚ฌ์šฉ์ž์˜ ๋ฌผ์Œ์— ๋” ๊ธธ๊ณ  ์ •ํ™•ํ•˜๋ฉฐ ์ž์„ธํ•œ ๋‹ต๋ณ€์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค.

Model Details

  • Qwen3-GenX1.9-30B-A3B-LoRA was supervised-finetuned with LoRA on top of the Qwen3-30B-A3B-Instruct-2507 base model.
  • The training data consists of English and Korean text. It is built around manufacturing-domain data and also covers science, mathematics, medicine, general knowledge, coding, instruction following, and other domains and tasks, totaling about one million SFT examples (5 GB).
  • The manufacturing-domain portion of the SFT data spans sub-domains such as Molding, Welding, Machining, Press, Heat Treatment, Surface Treatment, Additive Manufacturing, PLC, CAD/CAM, and Robotics.

Inference example

Example 1

Input: "A dimensional defect suddenly increased in a CNC machining process. Equipment conditions are unchanged, but tool usage time has exceeded the standard. State the most likely cause and the action to take, in three sentences or fewer."

Qwen3-30B-A3B-Instruct-2507: "Tool wear most likely caused the dimensional defect. Since tool usage time has exceeded the standard, the tool must be replaced immediately. After replacement, dimensional stability should be confirmed through process verification."

Qwen3-GenX1.9-30B-A3B-LoRA: "Tool wear is the most likely cause of the dimensional defect. Replace the tool in use immediately, and prevent recurrence by shortening the next tool's replacement cycle by 10~20%. In addition, record the correlation between tool life and dimensional quality so that future trends can be analyzed."

Example 2

Input: "In an injection molding process, appearance defects occur only on a specific shift. Answer in one phrase: what should be suspected first?"

Qwen3-30B-A3B-Instruct-2507: "A specific condition, leak, or wear state of the mold or matrix in use"

Qwen3-GenX1.9-30B-A3B-LoRA: "The work habits or training level of the operators, which differ from shift to shift"

Quickstart

The example code below loads GenX with transformers and runs inference.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "INTERX/Qwen3-GenX1.9-30B-A3B-LoRA"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto", trust_remote_code=True)

prompt = "์‚ฌ์ถœ์„ฑํ˜•์ด ๋ญ”๊ฐ€์š”?"
messages = [{"role": "user", "content": prompt}]

tokenized_chat = tokenizer.apply_chat_template(
        messages,
        tokenize=True,
        add_generation_prompt=True,
        return_tensors='pt'
).to(model.device)

generated_ids = model.generate(tokenized_chat, max_new_tokens=512)

generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(tokenized_chat, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

print(response)
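The list-slicing step above removes each sample's prompt tokens from the generated sequence, so only newly generated tokens are decoded. A self-contained toy illustration of the same slicing (the token ids here are made up):

```python
# Toy illustration of the prompt-stripping step: drop each sample's
# prompt ids so only the newly generated ids remain.
input_ids_batch = [[101, 7, 8], [101, 9]]                # prompt ids per sample
output_ids_batch = [[101, 7, 8, 42, 43], [101, 9, 99]]   # prompt + generated ids

stripped = [out[len(inp):] for inp, out in zip(input_ids_batch, output_ids_batch)]
print(stripped)  # [[42, 43], [99]]
```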

Citation

@misc{qwen3-genx1.9-30b-a3b-lora,
    title = {Qwen3-GenX1.9-30B-A3B-LoRA},
    url = {https://huggingface.co/INTERX/Qwen3-GenX1.9-30B-A3B-LoRA/main/README.md},
    author = {Gen.AI@INTERX},
    month = {Dec},
    year = {2025}
}