---
language: it
license: apache-2.0
library_name: peft
base_model: Qwen/Qwen2.5-1.5B
tags:
- lora
- peft
- cognitive-architecture
- progressive-learning
- dream-pruning
- svd
- math
- arithmetic
- intuition
- tool-use
datasets:
- custom
pipeline_tag: text-generation
---
# Progressive Cognitive Architecture: Dream-LoRA with SVD Pruning (Italian)
**Main Italian model**: Qwen2.5-1.5B trained with a 4-phase progressive cognitive architecture + **SVD Dream Pruning** (rank 16→8).
## Results
| Metric | Dream-LoRA (this model) | Progressive-LoRA | Flat-LoRA |
|--------|-------------------------|------------------|-----------|
| Exact Accuracy | **58.6% ± 2.9** | 37.0% ± 0.5 | 60.6% |
| Number Sense | **60.0% ± 0.8** | 57.7% ± 0.5 | 0.0% |
| Metacognition | **100.0%** | 98.5% | 0.0% |
Switching from magnitude pruning to SVD Dream Pruning significantly improved exact accuracy (+21.6 pp) while preserving number sense and metacognition.
## Progressive Cognitive Architecture
A bio-inspired 4-phase training methodology:
| Phase | Name | What happens |
|-------|------|-------------|
| 1 | **Foundation** | Learn exact arithmetic via LoRA fine-tuning |
| 2 | **Consolidation** | SVD Dream Pruning (rank 16→8) compresses knowledge into intuition |
| 3 | **Delegation** | Learn complexity-aware routing: compute internally vs. delegate to tool |
| 4 | **Orchestration** | Full pipeline: intuit → route → tool → validate |
**Guiding Principle:** *Knowledge doesn't disappear; it collapses into attractors. Intuition is the compressed residue of experience.*
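The Phase 3 delegation step can be illustrated with a toy heuristic. This is purely a sketch: the actual routing is learned by the model during training, and `route` with its `digit_threshold` parameter are invented here for illustration.

```python
def route(a: int, b: int, digit_threshold: int = 2) -> str:
    """Hypothetical complexity-aware router (Phase 3 sketch).

    Products of small operands are computed "internally" (model intuition);
    anything larger is delegated to an external calculator tool.
    """
    digits = max(len(str(abs(a))), len(str(abs(b))))
    return "internal" if digits <= digit_threshold else "tool"

# Single-digit multiplication stays internal; multi-digit goes to the tool
print(route(7, 8))     # internal
print(route(342, 67))  # tool
```

In the trained model this decision emerges from the routing phase rather than a hard rule; the sketch only conveys the compute-vs-delegate split.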
## Dream Pruning (Low-Rank SVD Factorization)
Instead of zeroing out small weights, Dream Pruning uses **SVD decomposition** to reduce the effective rank of the LoRA matrices from 16 to 8. It preserves the principal directions (the "logical connections") while discarding noise, analogous to memory consolidation during sleep.
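The rank-16 → rank-8 truncation can be sketched as follows. This is an illustrative NumPy version, not the repository's implementation; `svd_prune` is a hypothetical helper, and the shapes follow PEFT's LoRA convention (A: rank × in_features, B: out_features × rank).

```python
import numpy as np

def svd_prune(A, B, new_rank):
    """Compress a LoRA update (B @ A) to a lower rank via truncated SVD.

    Keeps only the top `new_rank` singular directions of the full update,
    which is the optimal low-rank approximation in the Frobenius norm.
    """
    delta = B @ A  # full low-rank update, shape (out_features, in_features)
    U, S, Vt = np.linalg.svd(delta, full_matrices=False)
    # Split the kept singular values evenly between the two factors
    A_new = np.sqrt(S[:new_rank])[:, None] * Vt[:new_rank]
    B_new = U[:, :new_rank] * np.sqrt(S[:new_rank])
    return A_new, B_new

# Example: a rank-16 adapter on a 64x64 projection, pruned to rank 8
rng = np.random.default_rng(0)
A = rng.normal(size=(16, 64))
B = rng.normal(size=(64, 16))
A8, B8 = svd_prune(A, B, 8)
```

By the Eckart–Young theorem, `B8 @ A8` is the closest rank-8 matrix to the original update, which is why the strongest "logical connections" survive the compression.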
## Configuration
| Parameter | Value |
|-----------|-------|
| Base Model | Qwen/Qwen2.5-1.5B |
| LoRA Rank | 16 (→ 8 after SVD) |
| LoRA Alpha | 32 |
| LoRA Targets | q_proj, k_proj, v_proj, o_proj |
| Pruning Type | SVD Low-Rank Factorization |
| Data Language | Italian |
## Quick Start
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model and tokenizer
base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-1.5B", device_map="auto", torch_dtype="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-1.5B")

# Attach the Dream-LoRA adapters
model = PeftModel.from_pretrained(
    base_model,
    "dexmac/progressive-cognitive-dream-lora",
    subfolder="lora_adapters"
)

# Italian prompt: "Solve: 342 * 67"
messages = [{"role": "user", "content": "Risolvi: 342 * 67"}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, temperature=0.1)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Related Models
- [Progressive-LoRA (IT)](https://huggingface.co/dexmac/progressive-cognitive-lora) – First prototype, with magnitude pruning
- [Flat-LoRA (IT)](https://huggingface.co/dexmac/progressive-cognitive-baseline-lora) – Control without training phases
- [**1.5B Dream (EN)**](https://huggingface.co/dexmac/progressive-cognitive-dream-lora-en) – Best model (English, composite 87.6)
- [GitHub](https://github.com/dexmac221/progressive-cognitive) – Full source code
## Citation
```bibtex
@software{progressive_cognitive_2026,
author = {Dex Mac},
title = {Progressive Cognitive Architecture for LLMs},
year = {2026},
url = {https://github.com/dexmac221/progressive-cognitive},
version = {1.0.0}
}
```
## License
Apache 2.0