---
license: apache-2.0
base_model:
- meta-llama/Llama-3.2-1B
---
# Nexa 0.1 First - The Genesis
**Release Date:** March 29, 2026
**Project Phase:** Alpha / Prototype
**Developer:** WeAreNexa
---
## Overview
**Nexa 0.1 First** marks the beginning of the Nexa series. It is a compact, high-performance **Text-to-Text** language model with **1.24 billion parameters**, designed to demonstrate the efficiency of Low-Rank Adaptation (LoRA) on edge-compatible architectures.
This model was specifically fine-tuned to handle Italian and English instructions with low latency, serving as the foundational proof-of-concept for the subsequent Enhanced and Ultra versions.
## Technical Specifications
| Feature | Specification |
| :--- | :--- |
| **Model Architecture** | Llama 3.2 (Dense) |
| **Parameter Count** | 1.24 Billion |
| **Task Type** | Text-to-Text Generation |
| **Base Model** | unsloth/Llama-3.2-1B-Instruct-bnb-4bit |
| **Quantization** | 4-bit NormalFloat (NF4) via bitsandbytes |
| **Context Length** | 2048 Tokens |
| **Language Support** | Italian, English |
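The table lists 4-bit NormalFloat (NF4) quantization via bitsandbytes. As a rough illustration of the mechanics of block-wise 4-bit quantization, here is a minimal sketch: scale each block by its absolute maximum, then snap every value to the nearest of 16 codebook levels. The uniform codebook below is a hypothetical stand-in; real NF4 places its 16 levels at the quantiles of a normal distribution.

```python
# Sketch of block-wise 4-bit quantization in the style of NF4.
# NOTE: this uniform codebook is a simplified stand-in, not the
# actual NF4 codebook used by bitsandbytes.
CODEBOOK = [i / 7.5 - 1.0 for i in range(16)]  # 16 levels in [-1, 1]

def quantize_block(block):
    # One float scale per block, plus a 4-bit index per value.
    absmax = max(abs(x) for x in block) or 1.0
    idxs = [min(range(16), key=lambda i: abs(x / absmax - CODEBOOK[i]))
            for x in block]
    return absmax, idxs

def dequantize_block(absmax, idxs):
    # Reconstruct approximate values from scale + indices.
    return [CODEBOOK[i] * absmax for i in idxs]

scale, codes = quantize_block([0.5, -2.0, 1.0, 0.0])
approx = dequantize_block(scale, codes)
print(scale, codes, approx)
```

The storage saving comes from keeping only the 4-bit indices plus one scale per block, at the cost of the small reconstruction error visible in the round trip above.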
## Fine-Tuning Details (PEFT/LoRA)
* **Method:** LoRA (Low-Rank Adaptation)
* **Rank (r):** 16
* **Alpha:** 16
* **Target Modules:** `q_proj`, `k_proj`, `v_proj`, `o_proj`
* **Dataset:** Alpaca-GPT4 (Filtered subset for logical consistency)
* **Epochs:** 1
* **Optimizer:** AdamW 8-bit
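With rank r = 16 and alpha = 16, the LoRA scaling factor alpha / r equals 1.0. The update rule itself is simple; the sketch below uses tiny hypothetical 2×2 matrices (rank 2, so the scaling is also 1.0) to show how the adapted weight is assembled as W' = W + (alpha / r) · B·A:

```python
# Minimal LoRA update-rule sketch with hypothetical tiny matrices.
# The real fine-tune uses r = 16, alpha = 16 on q/k/v/o projections;
# here r = 2, alpha = 2, so scaling is likewise alpha / r = 1.0.
r, alpha = 2, 2
scaling = alpha / r

W = [[1.0, 0.0], [0.0, 1.0]]   # frozen base weight (2x2)
B = [[0.5, 0.0], [0.0, 0.5]]   # trainable low-rank factor (2 x r)
A = [[1.0, 2.0], [3.0, 4.0]]   # trainable low-rank factor (r x 2)

def matmul(X, Y):
    # Plain-Python matrix product for the sketch.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

delta = matmul(B, A)           # low-rank update B @ A
W_adapted = [[w + scaling * d for w, d in zip(wr, dr)]
             for wr, dr in zip(W, delta)]
print(W_adapted)
```

Only A and B are trained; the base weight W stays frozen, which is why LoRA fine-tuning fits on modest hardware.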
## Lineage & Evolution
1. **Nexa 0.1 First (Current):** Initial alpha release focusing on base instruction following.
2. **Nexa 0.2 Enhanced:** Optimization of response fluidity and broader vocabulary.
3. **Nexa 0.3 Enhanced:** High-density tuning (Rank 128) for advanced reasoning.
4. **Nexa 0.4 Enhanced:** Final optimization targeting all-linear projections for maximum parameter density.
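To put "high-density tuning (Rank 128)" in perspective, here is a back-of-the-envelope count of trainable LoRA parameters per attention block. The layer shapes below are assumed Llama-3.2-1B-style values (hidden size 2048, grouped-query k/v width 512) and are not verified against the actual checkpoint:

```python
# Each adapted matrix of shape (d_out, d_in) adds r * (d_in + d_out)
# trainable parameters: A is (r x d_in), B is (d_out x r).
def lora_params(shapes, r):
    return sum(r * (d_in + d_out) for d_out, d_in in shapes)

# Assumed per-layer attention shapes for a Llama-3.2-1B-style model.
ATTN_SHAPES = [
    (2048, 2048),  # q_proj
    (512, 2048),   # k_proj (grouped-query attention)
    (512, 2048),   # v_proj
    (2048, 2048),  # o_proj
]

per_layer_r16 = lora_params(ATTN_SHAPES, 16)
per_layer_r128 = lora_params(ATTN_SHAPES, 128)
print(per_layer_r16, per_layer_r128)
```

Parameter count scales linearly in the rank, so moving from r = 16 to r = 128 multiplies the adapter size by 8, which is what the "high-density" label refers to.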
## How to Use
```python
from unsloth import FastLanguageModel
import torch
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "WeAreNexa/Nexa_0.1_First",
    max_seq_length = 2048,
    load_in_4bit = True,
)

# Enable Unsloth's fast inference mode
FastLanguageModel.for_inference(model)

prompt = "Spiega brevemente cos'è un modello linguistico."  # "Briefly explain what a language model is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```