sneakyfree committed (verified)
Commit: 6bced2e · Parent: 51a0c02

Upload patient_files/windy-tier-ALMA-7B-R.md with huggingface_hub

patient_files/windy-tier-ALMA-7B-R.md ADDED

# 🌪️ Patient File: windy-tier-ALMA-7B-R
**Generated:** 23 Mar 2026 05:49 UTC
**Pipeline:** Windy Pro Assembly Line Phase 3
**Built by:** Kit 0C1 Alpha on Veron-1 (RTX 5090, Mount Pleasant SC)

---

## 📋 Model Information

- **Model Key:** `ALMA-7B-R`
- **Model ID:** `windy-tier-ALMA-7B-R`
- **Source Repo:** N/A
- **Origin:** N/A
- **License:** CC-BY-4.0
- **Architecture:** MarianMT (Seq2Seq Transformer)

## 🌍 Language Pair

- **Claimed Direction:** ALMA (`ALMA`) → 7B-R (`7B-R`)
- **Detected Source Language:** English (`en`)
- ⚠️ **Note:** Detected source language differs from claimed source language

## 📅 Timeline

- **Source Downloaded:** N/A

## 🔄 Re-Certification History

- **Total Certification Attempts:** 0

## 🔬 Surgery Report — LoRA Variant

### LoRA Configuration (from assembly_line.py)

```python
LoraConfig(
    r=4,                                  # LoRA rank
    lora_alpha=8,                         # Alpha parameter
    target_modules=["q_proj", "v_proj"],  # Attention projections
    lora_dropout=0.05,
    bias="none"
)
```
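Since the report below could not estimate parameter counts from `config.json`, here is a back-of-the-envelope sketch of how many parameters this LoRA config adds. The model dimensions are assumptions (a typical MarianMT model uses d_model = 512 with 6 encoder and 6 decoder layers); the actual dimensions of this checkpoint are unknown.

```python
def lora_param_count(d_in: int, d_out: int, r: int) -> int:
    """Parameters added by one LoRA adapter: A is (r x d_in), B is (d_out x r)."""
    return r * d_in + d_out * r

# HYPOTHETICAL dimensions: config.json could not be read, so these are
# typical MarianMT values, not measured ones.
d_model = 512           # assumed hidden size
layers = 6 + 6          # assumed encoder + decoder layers
targets_per_layer = 2   # q_proj and v_proj (self-attention only, to stay conservative;
                        # Marian decoder layers also have cross-attention projections)

total = layers * targets_per_layer * lora_param_count(d_model, d_model, r=4)
print(total)  # 98304 adapter parameters under these assumed dimensions
```

With r=4 the adapters stay tiny relative to the base model, which is consistent with the config's intent of a lightweight fine-tune.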
### Weight Modification Analysis

- **Note:** Could not read `config.json` to estimate parameter counts

**LoRA Merge Status:** Merged back into full model (not separate adapters)

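In peft terms, "merged back into full model" corresponds to folding the adapter weights into the base weights via `merge_and_unload()`. A minimal sketch of that step (the pipeline's actual code is not shown here, and the directory names are hypothetical):

```python
def merge_lora_into_base(base_dir: str, adapter_dir: str, out_dir: str) -> None:
    """Fold LoRA adapter weights into the base weights so the result
    loads as an ordinary full model, with no separate adapter files."""
    # Imports kept inside the function so the sketch stays importable
    # even where transformers/peft are not installed.
    from transformers import MarianMTModel
    from peft import PeftModel

    base = MarianMTModel.from_pretrained(base_dir)
    model = PeftModel.from_pretrained(base, adapter_dir)
    merged = model.merge_and_unload()  # W <- W + (alpha / r) * B @ A
    merged.save_pretrained(out_dir)
```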
### Model Size Comparison

## 📊 Overall Status

- **Status:** ⏳ ERROR
- **Quality Rating:** N/A

## 🔬 Sweep 1 Results (Initial Certification)

| Variant | Status | Score | Stars | Quality | HF Repo |
|---------|--------|-------|-------|---------|---------|
| Base | ⚠️ UNKNOWN | 0/10 | 0 | N/A | N/A |
| CT2/INT8 | ⚠️ UNKNOWN | 0/10 | 0 | N/A | N/A |
| LoRA | ⚠️ UNKNOWN | 0/10 | 0 | N/A | N/A |

### Sample Outputs (Sweep 1)

## 🩺 Symptoms

- ✅ No issues detected

## 💡 Hypothesis / Analysis

- ✅ Model successfully certified - all variants meet quality thresholds

---
*Patient file generated by Windy Pro Patient File Generator v3.0 (Admiral Edition)*

---

## CT2 Safetensors Re-Export (Herm Zero, Dr. B)

- **Date:** 2026-03-24 ~15:18 UTC
- **Doctor:** Herm Zero (Dr. B)
- **Procedure:** Fixed broken pickle INT8 format; re-exported as proper safetensors
- **Reason:** transformers 4.50+ broke INT8 pickle loader compatibility
- **Method:** Load the base model via `MarianMTModel.from_pretrained()`, then `save_pretrained()` to `ct2/`
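The re-export method above can be sketched as follows (directory names hypothetical). Recent transformers versions write `model.safetensors` by default when saving, which is what repairs the broken pickle-format checkpoint:

```python
def reexport_as_safetensors(model_dir: str, out_dir: str = "ct2") -> None:
    """Re-save a model whose pickle checkpoint no longer loads under
    transformers 4.50+, producing a safetensors checkpoint instead."""
    from transformers import MarianMTModel

    model = MarianMTModel.from_pretrained(model_dir)
    # save_pretrained defaults to safe_serialization=True on recent
    # transformers versions, emitting model.safetensors in out_dir.
    model.save_pretrained(out_dir)
```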