
🌪️ Patient File: windy-tier-ALMA-7B-R

  • Generated: 23 Mar 2026 05:49 UTC
  • Pipeline: Windy Pro Assembly Line Phase 3
  • Built by: Kit 0C1 Alpha on Veron-1 (RTX 5090, Mount Pleasant SC)


📋 Model Information

  • Model Key: ALMA-7B-R
  • Model ID: windy-tier-ALMA-7B-R
  • Source Repo: N/A
  • Origin: N/A
  • License: CC-BY-4.0
  • Architecture: MarianMT (Seq2Seq Transformer)

🌍 Language Pair

  • Claimed Direction: ALMA (ALMA) β†’ 7B-R (7B-R)
  • Detected Source Language: English (en)
  • ⚠️ Note: Detected source language differs from claimed source language

📅 Timeline

  • Source Downloaded: N/A

🔄 Re-Certification History

  • Total Certification Attempts: 0

🔬 Surgery Report: LoRA Variant

LoRA Configuration (from assembly_line.py)

```python
LoraConfig(
    r=4,                                  # LoRA rank
    lora_alpha=8,                         # scaling numerator (alpha)
    target_modules=["q_proj", "v_proj"],  # attention query/value projections
    lora_dropout=0.05,                    # dropout on the LoRA branch
    bias="none",                          # bias terms left untrained
)
```

Weight Modification Analysis

  • Note: Could not read config.json to estimate parameter counts

LoRA Merge Status: Merged back into full model (not separate adapters)
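Merging means the low-rank update is folded into the base weights at export time, so inference needs no adapter machinery. A toy illustration of the arithmetic in pure Python, using the scaling lora_alpha / r = 8 / 4 = 2 from the configuration above (the matrix sizes are made up for the example):

```python
def merge_lora(W, A, B, lora_alpha, r):
    """Return W' = W + (lora_alpha / r) * (B @ A), the merged weight matrix."""
    scale = lora_alpha / r
    rows, cols = len(W), len(W[0])
    rank = len(A)  # A is rank x d_in, B is d_out x rank
    return [[W[i][j] + scale * sum(B[i][t] * A[t][j] for t in range(rank))
             for j in range(cols)] for i in range(rows)]

# rank-1 update applied to a 2x2 identity weight, scaling = 8 / 4 = 2
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 2.0]]           # rank x d_in
B = [[0.5], [0.25]]        # d_out x rank
print(merge_lora(W, A, B, lora_alpha=8, r=4))
# → [[2.0, 2.0], [0.5, 2.0]]
```

After this fold-in, the adapter factors A and B can be discarded; only the full-size weights ship, which matches the "not separate adapters" status above.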

Model Size Comparison

📊 Overall Status

  • Status: ⏳ ERROR
  • Quality Rating: N/A

🔬 Sweep 1 Results (Initial Certification)

| Variant  | Status     | Score | Stars | Quality | HF Repo |
|----------|------------|-------|-------|---------|---------|
| Base     | ⚠️ UNKNOWN | 0/10  | 0     | N/A     | N/A     |
| CT2/INT8 | ⚠️ UNKNOWN | 0/10  | 0     | N/A     | N/A     |
| LoRA     | ⚠️ UNKNOWN | 0/10  | 0     | N/A     | N/A     |

Sample Outputs (Sweep 1)

🩺 Symptoms

  • βœ… No issues detected

💡 Hypothesis / Analysis

  • βœ… Model successfully certified - all variants meet quality thresholds

Patient file generated by Windy Pro Patient File Generator v3.0 (Admiral Edition)


CT2 Safetensors Re-Export (Herm Zero, Dr. B)

  • Date: 2026-03-24 ~15:18 UTC
  • Doctor: Herm Zero (Dr. B)
  • Procedure: Fixed broken pickle INT8 format β€” re-exported as proper safetensors
  • Reason: transformers 4.50+ broke INT8 pickle loader compatibility
  • Method: Load base model via MarianMTModel.from_pretrained(), save_pretrained() to ct2/