How to use with llama.cpp

Install with Homebrew (macOS or Linux)
brew install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf devmeta/Llama-3-8B-Racing-Level-Design-Expert:Q4_K_M_RACING
# Run inference directly in the terminal:
llama-cli -hf devmeta/Llama-3-8B-Racing-Level-Design-Expert:Q4_K_M_RACING
Install with WinGet (Windows)
winget install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf devmeta/Llama-3-8B-Racing-Level-Design-Expert:Q4_K_M_RACING
# Run inference directly in the terminal:
llama-cli -hf devmeta/Llama-3-8B-Racing-Level-Design-Expert:Q4_K_M_RACING
Use a pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases
# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf devmeta/Llama-3-8B-Racing-Level-Design-Expert:Q4_K_M_RACING
# Run inference directly in the terminal:
./llama-cli -hf devmeta/Llama-3-8B-Racing-Level-Design-Expert:Q4_K_M_RACING
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli
# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf devmeta/Llama-3-8B-Racing-Level-Design-Expert:Q4_K_M_RACING
# Run inference directly in the terminal:
./build/bin/llama-cli -hf devmeta/Llama-3-8B-Racing-Level-Design-Expert:Q4_K_M_RACING
Use Docker
docker model run hf.co/devmeta/Llama-3-8B-Racing-Level-Design-Expert:Q4_K_M_RACING
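Once llama-server is running, any OpenAI-compatible client can talk to it. A minimal sketch of building a request body for the standard /v1/chat/completions route, assuming the server's default port 8080 (the prompt text is illustrative, not a required format):

```python
import json

# Build a chat-completion request body for the local llama-server
# (OpenAI-compatible API). A single-model server ignores the "model"
# field, but most clients expect it to be present.
payload = {
    "model": "devmeta/Llama-3-8B-Racing-Level-Design-Expert",
    "messages": [
        {"role": "system",
         "content": "You are a racing game level design analyst."},
        {"role": "user",
         "content": "Evaluate a track with three hairpins and no acceleration triggers."},
    ],
    "temperature": 0.7,
}

body = json.dumps(payload).encode("utf-8")

# To actually send it (requires a running server on the default port):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:8080/v1/chat/completions",
#     data=body, headers={"Content-Type": "application/json"})
# print(urllib.request.urlopen(req).read().decode("utf-8"))
```

The commented-out request is left inert so the snippet runs without a live server.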
Llama-3-8B-Racing-Level-Design-Expert (GGUF)

1. Model Summary

This model is a specialized Small Language Model (SLM) fine-tuned to analyze racing game level design components and player preferences. It combines 20+ years of industry expertise from Nexon's KartRider series with academic research data.

2. About the Author

Kim Tae-Wan

  • Current Role: Game Developer & Researcher at NEXON (20+ years of experience)
  • Academic Background:
    • Ph.D. Student in Technology at Sogang University Graduate School of Metaverse
    • M.S. in Game Design from Gachon University
    • B.F.A. from Pusan National University
  • Expertise: Level Design for the KartRider series, World Building Systems, and LLM-based Content Pipelines.

3. Research Context

The training dataset is based on the author's Master's thesis, which identifies 19 key level design variables in racing games (e.g., acute curves, hairpins, acceleration triggers) and their correlation with player satisfaction.

Key Research Variables:

  • Acute Curves
  • Hairpin Turns
  • Acceleration Triggers
  • Verticality and Slopes
  • Visibility and Obstacles
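To illustrate how variables like these might feed a preference model, here is a hypothetical sketch. The score_track function and every weight below are invented for illustration; they are not values from the thesis:

```python
# Hypothetical example: combine counts of the five variables above into a
# single preference score. Weights are illustrative only, not thesis data.
WEIGHTS = {
    "acute_curves": 0.8,
    "hairpin_turns": 1.2,
    "acceleration_triggers": 1.5,
    "slopes": 0.6,
    "obstacles": -0.4,  # assume reduced visibility lowers satisfaction
}

def score_track(features: dict) -> float:
    """Weighted sum of level-design feature counts."""
    return sum(WEIGHTS[name] * count for name, count in features.items())

track = {"acute_curves": 3, "hairpin_turns": 2,
         "acceleration_triggers": 4, "slopes": 1, "obstacles": 2}
print(round(score_track(track), 2))  # prints 10.6
```

In practice the thesis relates 19 variables to measured player satisfaction; a linear weighted sum is only the simplest possible stand-in for that relationship.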

4. Intended Use

  • Design Automation: Automated analysis of track structures during the planning stage.
  • Preference Prediction: Evaluating the potential success of a track based on player preference data.
  • Research Integration: Part of the "VN Studio" and "Persona AI System" projects for automated game content generation.
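As a sketch of the design-automation use case, a planning-stage track description could be formatted into an analysis prompt along these lines (the template wording and build_prompt helper are illustrative; the model's actual trained prompt format is not published here):

```python
# Illustrative prompt template for automated track analysis during the
# planning stage. Field names mirror the research variables above.
TEMPLATE = (
    "Analyze the following racing track layout and predict player "
    "satisfaction.\n"
    "Acute curves: {acute_curves}\n"
    "Hairpin turns: {hairpins}\n"
    "Acceleration triggers: {triggers}\n"
)

def build_prompt(acute_curves: int, hairpins: int, triggers: int) -> str:
    """Render a track's feature counts into an analysis prompt."""
    return TEMPLATE.format(acute_curves=acute_curves,
                           hairpins=hairpins, triggers=triggers)

print(build_prompt(3, 2, 4))
```

The resulting string would be sent as the user message to the locally served model.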

5. Technical Details

  • Base Model: Llama-3-8B (4-bit quantized)
  • Format: GGUF (optimized for local inference via LM Studio/Ollama)
  • Training Method: Supervised Fine-Tuning (SFT) using Unsloth

6. Reference & Citation

Thesis: A Study on Level Design Components and Player Preferences in Racing Game Content (Gachon Univ.)


Contact: https://github.com/Taewan627
