Progressive Cognitive Architecture: Dream-LoRA with SVD Pruning (Italian)

Main Italian-language model: Qwen2.5-1.5B trained with a 4-phase progressive cognitive architecture plus SVD Dream Pruning (rank 16 → 8).

📊 Results

| Metric | Dream-LoRA (this model) | Progressive-LoRA | Flat-LoRA |
|---|---|---|---|
| Exact Accuracy | 58.6% ± 2.9 | 37.0% ± 0.5 | 60.6% |
| Number Sense | 60.0% ± 0.8 | 57.7% ± 0.5 | 0.0% |
| Metacognition | 100.0% | 98.5% | 0.0% |

Switching from magnitude pruning to SVD Dream Pruning significantly improved exact accuracy (+21.6 pp) while preserving number sense and metacognition.

🧠 Progressive Cognitive Architecture

A bio-inspired 4-phase training methodology:

| Phase | Name | What happens |
|---|---|---|
| 1 | Foundation | Learn exact arithmetic via LoRA fine-tuning |
| 2 | Consolidation | SVD Dream Pruning (rank 16 → 8) compresses knowledge into intuition |
| 3 | Delegation | Learn complexity-aware routing: compute internally vs. delegate to a tool |
| 4 | Orchestration | Full pipeline: intuit → route → tool → validate |

Guiding Principle: Knowledge doesn't disappear; it collapses into attractors. Intuition is the compressed residue of experience.
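As a rough illustration of the Phase 4 loop (intuit → route → tool → validate), here is a minimal sketch on toy arithmetic. All function names and the digit-count routing heuristic are assumptions for illustration, not the repository's actual implementation.

```python
def intuit(expr: str) -> float:
    """Fast estimate, standing in for the model's compressed intuition."""
    a, op, b = expr.split()
    x, y = float(a), float(b)
    return {"+": x + y, "-": x - y, "*": x * y}[op]

def route(expr: str, digit_threshold: int = 3) -> str:
    """Complexity-aware routing: small operands in-model, large ones to a tool."""
    operands = [t for t in expr.split() if t.lstrip("-").isdigit()]
    big = any(len(t.lstrip("-")) >= digit_threshold for t in operands)
    return "tool" if big else "internal"

def solve(expr: str):
    estimate = intuit(expr)                      # 1. intuit
    if route(expr) == "tool":                    # 2. route
        result = eval(expr)                      # 3. tool (stands in for a calculator)
    else:
        result = estimate
    # 4. validate: the tool answer should agree with the intuition
    assert abs(result - estimate) <= 0.1 * max(1.0, abs(estimate))
    return result

print(solve("342 * 67"))  # routed to the tool (3-digit operand)
```

The point of the sketch is the control flow, not the arithmetic: the intuition is always computed first, and the tool result is cross-checked against it before being returned.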

🌙 Dream Pruning (Low-Rank SVD Factorization)

Instead of zeroing out small weights, Dream Pruning uses SVD decomposition to reduce the effective rank of the LoRA matrices from 16 to 8. It preserves the principal directions (the "logical connections") while discarding noise, analogous to memory consolidation during sleep.
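The rank reduction above can be sketched with NumPy. This assumes the standard LoRA parameterization ΔW = B·A with B of shape (d_out, 16) and A of shape (16, d_in); the repository's exact pruning code is not shown here, so treat this as a schematic of the technique.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, r_new = 64, 64, 16, 8

# Toy LoRA factors standing in for trained adapter weights.
B = rng.standard_normal((d_out, r))
A = rng.standard_normal((r, d_in))

delta_w = B @ A                        # full LoRA update, rank <= 16
U, S, Vt = np.linalg.svd(delta_w, full_matrices=False)

# Keep the top-8 singular directions (principal "logical connections"),
# discard the low-energy tail (noise).
B_new = U[:, :r_new] * S[:r_new]       # (d_out, 8), singular values folded into B
A_new = Vt[:r_new]                     # (8, d_in)

retained = S[:r_new].sum() / S.sum()   # fraction of spectral energy kept
print(f"kept {retained:.1%} of the spectral mass at half the rank")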

🔧 Configuration

| Parameter | Value |
|---|---|
| Base Model | Qwen/Qwen2.5-1.5B |
| LoRA Rank | 16 (→ 8 after SVD) |
| LoRA Alpha | 32 |
| LoRA Targets | q_proj, k_proj, v_proj, o_proj |
| Pruning Type | SVD Low-Rank Factorization |
| Data Language | Italian |
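The table above maps onto a peft `LoraConfig` roughly as follows (pre-pruning, rank 16). The `lora_dropout` and `bias` values are assumptions not stated in the card.

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                      # initial rank, later reduced to 8 by SVD Dream Pruning
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,         # assumption, not stated in the card
    bias="none",               # assumption, not stated in the card
    task_type="CAUSAL_LM",
)
```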

🚀 Quick Start

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-1.5B", device_map="auto", torch_dtype="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-1.5B")

model = PeftModel.from_pretrained(
    base_model,
    "dexmac/progressive-cognitive-dream-lora",
    subfolder="lora_adapters"
)

# Italian prompt: "Solve: 342 * 67"
messages = [{"role": "user", "content": "Risolvi: 342 * 67"}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
# do_sample=True is required for temperature to take effect
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.1)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

🔗 Related Models

πŸ“ Citation

```bibtex
@software{progressive_cognitive_2026,
  author = {Dex Mac},
  title = {Progressive Cognitive Architecture for LLMs},
  year = {2026},
  url = {https://github.com/dexmac221/progressive-cognitive},
  version = {1.0.0}
}
```

📄 License

Apache 2.0
