dexmac committed · Commit 0c76694 · verified · Parent: 0bb6577

Upload README.md with huggingface_hub

---
language: en
license: apache-2.0
library_name: peft
base_model: Qwen/Qwen2.5-1.5B
tags:
- lora
- peft
- logic
- reasoning
- syllogism
- boolean-logic
- cognitive-architecture
datasets:
- custom
pipeline_tag: text-generation
---

# Progressive Cognitive Architecture - Logic Specialist (English)

LoRA adapter specialized for compact logical reasoning on top of Qwen/Qwen2.5-1.5B.

## Summary

This adapter was trained as the logic-specialist component of the Progressive Cognitive Architecture project. It targets logic-focused tasks such as syllogistic validity, conditional reasoning, boolean evaluation, and short-form symbolic transformations.

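To make those task families concrete, here are a few illustrative inputs. These examples are invented for this card; the actual training prompts and their format are not published here.

```python
# Invented examples of the task families this adapter targets; the real
# training prompts and format are not published in this card.
tasks = {
    "syllogism_validity": (
        "All A are B. All B are C. Therefore, all A are C. Valid or invalid?"
    ),
    "conditional_reasoning": "If P then Q. Not Q. What follows about P?",
    "boolean_evaluation": "Evaluate: (True and not False) or False",
}

# The boolean item has a mechanically checkable answer:
boolean_answer = (True and not False) or False  # True
```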
## Observed Behavior

On the focused Socratic benchmark used in this project, the logic specialist consistently improved logical reasoning over the 1.5B base model.

- 2-seed logic composite mean: 70.3%
- 2-seed overall mean on the mixed benchmark: 43.9%
- Strongest dimensions: syllogism validity, conditional validity, boolean evaluation
- Weakest dimensions: negation and open-form compound logic transformations

These numbers come from the project evaluation artifacts available in the Progressive Cognitive results dataset and should be interpreted as research results rather than a production benchmark.

## Intended Use

- logic classification and validation
- short logical inference tasks
- use as a specialist in routed or multi-agent architectures

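In the routed setup, a specialist is selected per prompt. The project uses the trained router model linked below; the keyword heuristic here is only a hypothetical sketch of the dispatch idea (the adapter repo ids are the real ones from this card):

```python
import re

# Hypothetical keyword-based dispatcher between the two specialists.
# The actual project routes with a trained router model, not this heuristic.
LOGIC_ADAPTER = "dexmac/progressive-cognitive-logic-specialist-en"
MATH_ADAPTER = "dexmac/progressive-cognitive-dream-lora-en"
LOGIC_CUES = {"valid", "invalid", "syllogism", "premise", "therefore", "implies"}

def route(prompt: str) -> str:
    """Return the adapter repo id that should handle this prompt."""
    words = set(re.findall(r"[a-z]+", prompt.lower()))
    return LOGIC_ADAPTER if words & LOGIC_CUES else MATH_ADAPTER
```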
## Limitations

- not intended as a general-purpose chat model
- weaker on arithmetic and tool-use than the math-oriented adapters
- open-ended logical rewrites remain less reliable than binary validity judgments

## Loading

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-1.5B", device_map="auto", torch_dtype="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-1.5B")

model = PeftModel.from_pretrained(
    base_model,
    "dexmac/progressive-cognitive-logic-specialist-en",
    subfolder="lora_adapters",
)
```
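Once loaded, the adapter is queried like any causal LM. A minimal greedy-decoding helper (a sketch: `generate_answer` is a name introduced here, not part of the project code):

```python
def generate_answer(model, tokenizer, prompt, max_new_tokens=64):
    """Greedy-decode a short answer and return only the newly generated text."""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(
        **inputs, max_new_tokens=max_new_tokens, do_sample=False
    )
    # Strip the prompt tokens so only the model's answer is decoded.
    new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

# Example (uses `model` and `tokenizer` from the snippet above):
# print(generate_answer(model, tokenizer,
#     "All men are mortal. Socrates is a man. Is 'Socrates is mortal' valid?"))
```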

## Related Repositories

- Math specialist: https://huggingface.co/dexmac/progressive-cognitive-dream-lora-en
- Monolithic math+logic model: https://huggingface.co/dexmac/progressive-cognitive-logic-dream-lora-en
- Router model: https://huggingface.co/dexmac/progressive-cognitive-router-en
- Results dataset: https://huggingface.co/datasets/dexmac/progressive-cognitive-results

## License

Apache 2.0