yasserrmd committed on
Commit 861ed85 · verified · 1 Parent(s): dd611c0

Update README.md

Files changed (1):
  1. README.md +90 -12

README.md CHANGED
@@ -1,21 +1,99 @@
  ---
  base_model: LiquidAI/LFM2.5-1.2B-Instruct
  tags:
- - text-generation-inference
- - transformers
- - unsloth
- - lfm2
- license: apache-2.0
- language:
  - en
  ---

- # Uploaded finetuned model

- - **Developed by:** yasserrmd
- - **License:** apache-2.0
- - **Finetuned from model:** LiquidAI/LFM2.5-1.2B-Instruct

- This lfm2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

- [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
 
  ---
+ license: apache-2.0
  base_model: LiquidAI/LFM2.5-1.2B-Instruct
  tags:
+ - linux
+ - terminal
+ - bash
+ - devops
+ - liquid-foundation-model
+ - multilingual
+ - arabic
+ - tamil
+ language:
  - en
+ - ar
+ - ta
+ metrics:
+ - accuracy
+ model_name: HydroShell-1.2B
+ ---
+
+ # HydroShell-1.2B: Liquid Linux Expert
+
+ **HydroShell-1.2B** is a specialized, multilingual fine-tune of the **LiquidAI/LFM2.5-1.2B-Instruct** model. It is optimized to act as a high-performance, low-latency assistant for Linux system administration, shell scripting, and DevOps automation.
+
+ By leveraging the **Liquid Foundation Model** architecture, HydroShell excels at processing long-form technical instructions and mapping complex natural language (English, Arabic, and Tamil) to functional Bash one-liners.
+
+ ## ⚠️ Safety & Destructive Command Warning
+
+ > **WARNING:** This model is designed to generate powerful system-level commands. It can and will generate **destructive commands** (e.g., `rm -rf`, `mkfs`, or overwriting configurations with `>`).
+ > * **Always verify commands** in a sandbox or test environment before executing them on production systems.
+ > * The model may occasionally hallucinate flags or mix Linux distributions (e.g., suggesting `pacman` for Ubuntu systems).
+
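Given the warning above, one lightweight mitigation is to screen generated commands against a deny-list before displaying or running them. The sketch below is illustrative only and not part of the model or this repository; the pattern list (`DESTRUCTIVE_PATTERNS`) is a hypothetical, non-exhaustive assumption and no substitute for sandboxed review.

```python
import re

# Hypothetical deny-list of obviously dangerous command shapes.
# Not exhaustive -- always review generated commands in a sandbox first.
DESTRUCTIVE_PATTERNS = [
    r"\brm\s+(-[a-zA-Z]*r[a-zA-Z]*f|-[a-zA-Z]*f[a-zA-Z]*r)\b",  # rm -rf / rm -fr
    r"\bmkfs(\.\w+)?\b",        # filesystem formatting
    r"\bdd\b.*\bof=/dev/",      # raw writes to block devices
    r"(^|[^>])>\s*/etc/",       # single '>' overwrite of files under /etc
]

def looks_destructive(command: str) -> bool:
    """Return True if the generated command matches a known-dangerous pattern."""
    return any(re.search(p, command) for p in DESTRUCTIVE_PATTERNS)

print(looks_destructive("rm -rf /var/log"))   # True: flag for manual review
print(looks_destructive("ls -la /var/log"))   # False: read-only command
```

Note that a regex filter catches only the listed shapes; it is a speed bump, not a guardrail.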
36
+ ---
+
+ ## Model Details
+
+ - **Developed by:** [Your Name/MindLab]
+ - **Base Model:** LiquidAI/LFM2.5-1.2B-Instruct
+ - **Architecture:** Liquid Foundation Model (dynamical-systems-based)
+ - **Primary Domain:** Linux CLI, Bash scripting, system hardening
+ - **Languages Supported:** English, Arabic (technical), Tamil
+
  ---
 
+ ## Evaluation Results (Zero-Shot Testing)

+ The following results were observed during a 100-prompt stress test covering system audit, security, and file management.

+ ### Technical Performance Matrix
+ | Category | Accuracy | Notes |
+ | :--- | :--- | :--- |
+ | **Basic admin (`ls`, `cd`, `mkdir`)** | 98% | Flawless execution. |
+ | **Log parsing (`awk`, `sed`, `grep`)** | 75% | Occasionally confuses line vs. field flags. |
+ | **Systemd & services** | 90% | Strong understanding of service lifecycles. |
+ | **Networking (`iptables`, `ss`)** | 82% | Occasional source/destination flag inversion. |
+
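The card does not state how accuracy was scored. One minimal option is a normalized exact-match harness like the hypothetical sketch below; `normalize`, `score`, and the sample pairs are illustrative assumptions, not the actual evaluation code.

```python
import shlex

def normalize(cmd: str) -> list:
    """Tokenize a shell command so cosmetic whitespace differences don't count as errors."""
    return shlex.split(cmd.strip())

def score(pairs: list) -> float:
    """Fraction of (generated, reference) command pairs that match after normalization."""
    hits = sum(normalize(generated) == normalize(reference) for generated, reference in pairs)
    return hits / len(pairs)

pairs = [
    ("ls  -la /var/log", "ls -la /var/log"),             # whitespace-only diff: correct
    ("grep -c error app.log", "grep -n error app.log"),  # wrong flag: incorrect
]
print(score(pairs))  # → 0.5
```

Exact match understates true accuracy (many distinct commands are equivalent), so the percentages above may depend heavily on the scoring rule used.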
+ ### Multilingual Capability
+ - **Arabic:** 90% accuracy in intent recognition; successfully maps Arabic technical terms such as "حظر" (block) and "مزامنة" (sync).
+ - **English:** 95% accuracy in intent recognition.
+
+ ---
+
+ ## Known Issues & Limitations
+ 1. **Distro confusion:** The model may suggest Arch Linux (`pacman`) commands when asked for Ubuntu tasks if the prompt is not specific.
+ 2. **Redirection risks:** In some tests, the model used `>` (overwrite) instead of `>>` (append) for configuration files.
+ 3. **Hallucination:** For very complex `find` commands, it may invent non-existent flags (e.g., `-md5`).
+
+ ---
+
+ ## Usage (Python)
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ model_id = "your-username/HydroShell-1.2B"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", trust_remote_code=True)
+
+ # Arabic prompt: "Find the processes consuming the most memory"
+ messages = [{"role": "user", "content": "البحث عن العمليات التي تستهلك أكبر قدر من الذاكرة"}]
+ inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
+
+ # apply_chat_template returns a tensor here, so pass it positionally (not **inputs);
+ # temperature only takes effect when sampling is enabled.
+ outputs = model.generate(inputs, max_new_tokens=64, do_sample=True, temperature=0.3)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```
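Because generated one-liners should never run unreviewed, a common pattern is to gate execution behind explicit confirmation. This is a hypothetical sketch; the helper `run_with_confirmation` is not part of this repository.

```python
import subprocess

def run_with_confirmation(command: str):
    """Print the model-generated command and execute it only after explicit approval."""
    print(f"Model suggests:\n  {command}")
    if input("Run this command? [y/N] ").strip().lower() != "y":
        print("Skipped.")
        return None
    # shell=True mirrors running the one-liner in an interactive shell; use with care.
    return subprocess.run(command, shell=True).returncode
```

Defaulting to "No" keeps an accidental Enter keypress from executing anything.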
+
+ ## Citation
+
+ If you use this model in your research or projects, please cite the base Liquid AI model and this fine-tuned version.
+
+ ---