---
language:
- ar
- en
tags:
- code
- arabic
- code-explanation
- lora
license: apache-2.0
---

# AraCode-7B-LoRA

LoRA adapter weights for AraCode-7B, the first open-source Arabic-specialized code explanation model.

This adapter can be loaded on top of the base model for Arabic code explanation, generation, and discussion.

## Benchmarks

| Benchmark | Score |
|---|---|
| Arabic Code Explanation | **100%** (5/5) |
| MBPP Syntax Rate | **92.3%** |
| MBPP Execution Rate | **82.3%** |
| Multi-Language (Python / JS / SQL) | **3/3** |
| Inference Speed | **25.9 tok/s** |

## Usage
```python
from unsloth import FastLanguageModel

# Load the adapter (the base model is resolved from the adapter config),
# quantized to 4-bit to fit consumer GPUs.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="rahimdzx/AraCode-7B-LoRA",
    max_seq_length=2048,
    load_in_4bit=True,
)
# Switch to inference mode for faster generation.
FastLanguageModel.for_inference(model)

# Prompt: "Explain the following code in Arabic:" followed by a recursive Fibonacci function.
prompt = "اشرح الكود التالي بالعربية:\ndef fibonacci(n):\n    if n <= 1: return n\n    return fibonacci(n-1) + fibonacci(n-2)"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=300)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Available Formats

| Format | Repo | Size | Use Case |
|---|---|---|---|
| GGUF Q4_K_M | [AraCode-7B-GGUF](https://huggingface.co/rahimdzx/AraCode-7B-GGUF) | 4.68 GB | Local inference, Ollama, llama.cpp |
| LoRA Adapter | [AraCode-7B-LoRA](https://huggingface.co/rahimdzx/AraCode-7B-LoRA) | 162 MB | Fine-tuning, research, Unsloth |
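
The GGUF build can be run locally with llama.cpp, as the table above suggests. A minimal sketch, assuming a typical Q4_K_M file name inside the GGUF repo (check the repository's file listing for the actual name):

```shell
# Hypothetical file name; verify against the AraCode-7B-GGUF repo listing.
MODEL_FILE="AraCode-7B-Q4_K_M.gguf"

# Download the quantized weights from the Hub (requires the huggingface_hub CLI).
huggingface-cli download rahimdzx/AraCode-7B-GGUF "$MODEL_FILE" --local-dir .

# Run an Arabic code-explanation prompt with llama.cpp's CLI,
# capped at roughly the same token budget as the Python example above.
llama-cli -m "$MODEL_FILE" -p "اشرح الكود التالي بالعربية: print('hello')" -n 300
```

The same file can also be imported into Ollama via a `Modelfile` whose `FROM` line points at the downloaded `.gguf`.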

## Links

- **GGUF Version:** [rahimdzx/AraCode-7B-GGUF](https://huggingface.co/rahimdzx/AraCode-7B-GGUF)

## License

Apache 2.0