moustyb committed 94f4fa7 (verified) · 1 parent: 54a8a41

Update README.md

Files changed (1): README.md (+0 −93)
---
license: apache-2.0
language:
- en
- multilingual
pipeline_tag: text-generation
tags:
- on-device
- privacy
- offline
- nlp
- yongle-ai
base_model: Qwen/Qwen2.5-0.5B-Instruct
library_name: transformers
---

# Yongle Nano 1.0

<div align="center">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/66611cd0c03abe636681af82/-xl5BprmWyXYvIvrnmn7e.png" alt="favicon_128px">
  <br>
  <b>Lightweight, private, offline intelligence for every device.</b>
</div>

## 🧠 Overview

**Yongle Nano 1.0** is a compact, efficient language model designed for fast, private, on‑device use. It delivers reliable chat, writing assistance, summaries, and light coding support — even on older laptops.

Nano 1.0 is part of the **Yongle Nano Series (1.x)**, a small‑model family optimized for speed, simplicity, and everyday tasks.

---

## ✨ Key Features

* **Fully offline:** Zero cloud dependency.
* **Fast on CPU:** Runs smoothly on standard CPU‑only devices.
* **Low memory footprint:** Minimal RAM usage.
* **Strong instruction following:** Understands and executes commands effectively.
* **Good writing & summarization:** Capable of drafting and condensing text.
* **Long‑context support:** Up to **32K tokens**.
* **Multilingual:** Supports **29+ languages**.
* **Energy‑efficient:** Low power draw from a privacy‑first, fully on‑device design.
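To put the 32K window in perspective, here is a quick back‑of‑envelope sketch (characters‑per‑token and words‑per‑page are rule‑of‑thumb assumptions, not measurements on this model's tokenizer):

```python
# Back-of-envelope: how much text fits in a 32K-token window.
# Assumes ~4 characters per token and ~5.5 characters per English word
# (common rules of thumb, not measured on Yongle Nano's own tokenizer).
CONTEXT_TOKENS = 32_768
CHARS_PER_TOKEN = 4
CHARS_PER_WORD = 5.5
WORDS_PER_PAGE = 500  # dense, single-spaced page

chars = CONTEXT_TOKENS * CHARS_PER_TOKEN
words = chars / CHARS_PER_WORD
pages = words / WORDS_PER_PAGE

print(f"~{chars:,} characters, ~{words:,.0f} words, ~{pages:.0f} pages")
```

Roughly forty-plus pages of dense text, which is why long single documents fit, while the "multi‑document deep analysis" excluded below does not.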
---

## 🎯 Intended Use

* **Everyday chat:** Casual conversation and assistance.
* **Writing & editing:** Drafting emails, essays, and correcting grammar.
* **Summaries & explanations:** Condensing articles or explaining concepts.
* **Light coding help:** Basic snippets and debugging.
* **Education & learning:** Study buddy and tutor.
* **Offline productivity:** Tools that work without Wi‑Fi.
* **Low‑spec hardware:** Ideal for laptops and desktops with limited resources.

### 🚫 Not Intended For

* Heavy reasoning or complex logic puzzles.
* Multi‑document deep analysis.
* Enterprise‑scale workloads.
* Large‑context research tasks.

---
## ⚙️ Technical Details

| Feature | Specification |
| :--- | :--- |
| **Parameters** | 0.49B |
| **Architecture** | Transformer (RoPE, SwiGLU, RMSNorm, GQA) |
| **Layers** | 24 |
| **Attention Heads** | 14 (Q), 2 (KV) |
| **Context Length** | 32,768 tokens |
| **Generation Length** | Up to 8,192 tokens |
| **Training** | Pretraining + instruction tuning |
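The GQA head counts in the table translate directly into KV‑cache savings at long context. A rough sketch (the head dimension of 64 is an assumption inferred from the Qwen2.5‑0.5B base architecture, not stated in the table):

```python
# KV-cache size at the full 32,768-token context in fp16 (2 bytes/value).
# From the table: 24 layers, 14 query heads, 2 KV heads (GQA).
# head_dim = 64 is an assumption (Qwen2.5-0.5B: 896 hidden / 14 heads = 64).
LAYERS, Q_HEADS, KV_HEADS = 24, 14, 2
HEAD_DIM, SEQ_LEN, BYTES_PER_VALUE = 64, 32_768, 2

def kv_cache_bytes(kv_heads: int) -> int:
    # Factor of 2 covers the separate K and V tensors per layer.
    return 2 * LAYERS * kv_heads * HEAD_DIM * SEQ_LEN * BYTES_PER_VALUE

gqa = kv_cache_bytes(KV_HEADS)  # grouped-query attention, as specified
mha = kv_cache_bytes(Q_HEADS)   # hypothetical full multi-head attention

print(f"GQA: {gqa / 2**30:.2f} GiB vs. full MHA: {mha / 2**30:.2f} GiB")
```

Under these assumptions, caching 2 KV heads instead of 14 cuts the full‑context cache by 7×, which is a large part of why a 32K window is feasible on 4GB machines.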

---

## 💻 Hardware Requirements

**Minimum**
* **RAM:** 4GB
* **Processor:** CPU‑only (works on most laptops from 2014–2024)

**Recommended**
* **RAM:** 8GB
* **GPU:** Optional acceleration (2–4GB VRAM)
- ---
92
-
93
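A minimal quickstart sketch using the Hugging Face `transformers` API. The repository id is a placeholder (the card never states one), and the chat‑template call assumes the tokenizer ships a chat template, as the Qwen2.5 base does:

```python
# Hypothetical quickstart for Yongle Nano 1.0 via Hugging Face transformers.
# NOTE: MODEL_ID is a placeholder; the card does not state the repository id.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "REPLACE_WITH_YONGLE_NANO_REPO_ID"  # placeholder, not a real repo


def chat(prompt: str, max_new_tokens: int = 256) -> str:
    """Answer a single user message with greedy/default sampling."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

    messages = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    )
    outputs = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, dropping the prompt.
    return tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
```

Calling `chat("Explain what on-device inference means.")` downloads the weights on first use and then runs entirely locally.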