---
library_name: transformers
pipeline_tag: text-generation
base_model: deepseek-ai/DeepSeek-R1-0528-Qwen3-8B
tags:
- lora
- peft
- decompilation
- code
---

# decompiler-v3
This repository contains merged fine-tuned weights for the base model **deepseek-ai/DeepSeek-R1-0528-Qwen3-8B**.

- **Task:** idiomatic decompilation (assembly → high-level code)
- **Training:** LoRA/DoRA adapters trained with TRL SFT on custom assembly→Dart/Swift pairs
- **How to load (merged):**
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "raafatabualazm/decompiler-v3"
tok = AutoTokenizer.from_pretrained(repo_id, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype="bfloat16", trust_remote_code=True
)
```
> Replace the repo id with your own if you fork or rename this repository.
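
Once the model is loaded, inference follows the usual causal-LM generation loop. A minimal sketch is below; the exact prompt template used during fine-tuning is not documented in this card, so `build_prompt` is a hypothetical instruction wrapper you may need to adapt, and the chat-template routing assumes the base model's tokenizer ships a chat template (DeepSeek-R1 distills are chat models):

```python
def build_prompt(asm: str, target: str = "Swift") -> str:
    """Wrap raw assembly in a decompilation instruction.

    Hypothetical template -- adjust to match the format the adapters
    were actually trained on (assembly -> Dart/Swift pairs).
    """
    return (
        f"Decompile the following assembly into idiomatic {target}:\n\n"
        f"```asm\n{asm.strip()}\n```\n"
    )


def decompile(asm: str, target: str = "Swift", max_new_tokens: int = 512) -> str:
    # Imported lazily so build_prompt() stays usable without transformers.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    repo_id = "raafatabualazm/decompiler-v3"
    tok = AutoTokenizer.from_pretrained(repo_id, use_fast=True)
    model = AutoModelForCausalLM.from_pretrained(
        repo_id, torch_dtype="bfloat16", trust_remote_code=True
    )
    # Route the prompt through the tokenizer's chat template before generating.
    messages = [{"role": "user", "content": build_prompt(asm, target)}]
    inputs = tok.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    out = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the echoed prompt.
    return tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)
```

For example, `decompile("mov x0, #0\nret", target="Dart")` would return the model's Dart rendering of that snippet.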