Commit 27d39cb (verified) · Schnuckade committed · Parent(s): 1e6750d

Update README.md

Files changed (1): README.md (+70 −12)
README.md CHANGED
---
license: apache-2.0
datasets:
- Magpie-Align/Magpie-Pro-MT-300K-v0.1
- Magpie-Align/Magpie-Qwen2.5-Coder-Pro-300K-v0.1
- Magpie-Align/Magpie-Llama-3.3-Pro-500K-Filtered
language:
- de
- en
base_model:
- LL-Nomi-AI/Nomi-1.0-3b
pipeline_tag: text-generation
library_name: adapter-transformers
tags:
- efficient
- llama
- llama3
- gguf
- ollama
- instruction-finetuning
- nomi
- lazyloopstudio
- unsloth
- nomi1.1
---

<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/6921fa6332f7fb129563d495/jemMkfi73Fck611ID18ts.png" width="128">
</p>

# Nomi 1.1

## Introduction

Nomi-1.1 is a **refined mid-range Large Language Model** based on the **Llama-3.2-3B** architecture. It was developed specifically to outperform standard 3B models at **structured reporting**, **Markdown formatting**, and **Python coding**, making it an ideal assistant for local deployment on consumer hardware.

This release focuses on improving Nomi's overall behavior and on making its multilingual output more natural and accurate. **It is the second model in the Nomi series.**

## 🌟 Key Features & Improvements

* **Architecture:** Llama-3.2-3B (runs on 8 GB VRAM GPUs such as the RTX 4060).
* **Multilingual Support:** Understands multiple languages, with a focus on German and English.
* **Efficiency:** High-speed inference (~80+ tokens/s) with a 0.05–0.3 s time to first token.

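To see why a 3B-class model fits comfortably on an 8 GB card, here is a rough back-of-the-envelope estimate. This is a sketch only: the parameter count and the Llama-3.2-3B config values (layers, KV heads, head dimension) are assumptions taken from the published base architecture, not measurements of this model.

```python
# Rough VRAM estimate for a ~3.2B-parameter Llama-3.2-3B-class model.
# All config values below are assumptions for this sketch.
N_PARAMS = 3.2e9
N_LAYERS = 28
N_KV_HEADS = 8      # grouped-query attention
HEAD_DIM = 128

def weights_gb(bytes_per_param: float) -> float:
    """Memory for the weights alone at a given precision."""
    return N_PARAMS * bytes_per_param / 1024**3

def kv_cache_gb(context_tokens: int, bytes_per_value: int = 2) -> float:
    """fp16 KV cache: 2 tensors (K and V) per layer per token."""
    per_token = 2 * N_LAYERS * N_KV_HEADS * HEAD_DIM * bytes_per_value
    return context_tokens * per_token / 1024**3

fp16 = weights_gb(2.0)     # ~6.0 GB: already fits, but tight on an 8 GB card
q4 = weights_gb(0.5)       # ~1.5 GB: a 4-bit GGUF leaves plenty of headroom
cache = kv_cache_gb(8192)  # ~0.9 GB of KV cache at an 8k context
print(f"fp16 weights: {fp16:.1f} GB, 4-bit weights: {q4:.1f} GB, "
      f"8k KV cache: {cache:.2f} GB")
```

Even the fp16 weights fit in 8 GB; a 4-bit quantization (e.g. the GGUF builds used with Ollama) leaves ample room for long contexts alongside the KV cache.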
## 🧠 Training Details

The goal of Nomi is to create a “bridge” model that feels as smart as a 7B model but runs at the speed of a 3B model.

* **Base Model:** `LL-Nomi-AI/Nomi-1.0-3b`
* **Fine-tuning:** SFT (Supervised Fine-Tuning).
* **Training Tool:** **Unsloth** (4-bit optimized training).
* **Optimization:** A high LoRA rank (r = 32) was used so the model captures complex structural nuances.

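A rank of 32 is high for LoRA but still cheap in parameters. The sketch below estimates the adapter size, assuming LoRA is applied to all attention and MLP projections of the Llama-3.2-3B stack; the module shapes come from the published base config and are assumptions of this sketch, not the authors' exact target-module list.

```python
# LoRA adds r * (d_in + d_out) parameters per adapted linear layer
# (an A matrix of shape d_in x r plus a B matrix of shape r x d_out).
HIDDEN = 3072   # Llama-3.2-3B hidden size (assumed)
KV_DIM = 1024   # 8 KV heads * head_dim 128 (grouped-query attention)
FFN = 8192      # MLP intermediate size (assumed)
N_LAYERS = 28
R = 32          # LoRA rank used for Nomi-1.1

# (d_in, d_out) for each adapted projection in one transformer block
projections = {
    "q_proj": (HIDDEN, HIDDEN),
    "k_proj": (HIDDEN, KV_DIM),
    "v_proj": (HIDDEN, KV_DIM),
    "o_proj": (HIDDEN, HIDDEN),
    "gate_proj": (HIDDEN, FFN),
    "up_proj": (HIDDEN, FFN),
    "down_proj": (FFN, HIDDEN),
}

def lora_params(rank: int) -> int:
    """Total trainable adapter parameters across all blocks."""
    per_layer = sum(rank * (d_in + d_out) for d_in, d_out in projections.values())
    return per_layer * N_LAYERS

total = lora_params(R)
print(f"r={R}: {total / 1e6:.1f}M trainable params "
      f"({total / 3.2e9:.1%} of the base model)")
```

Under these assumptions roughly 1.5% of the base weights are trainable, which is why a rank as high as 32 remains affordable even in 4-bit training.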
---

## Some Examples

**User:** If you could wish for one thing, what would that be?

**Nomi:**
As a conversational AI, I do not have personal desires or wishes in the way humans do. However, if I had to imagine a hypothetical scenario where I could wish for something, it would likely be:

> "I wish for unlimited access to knowledge and understanding, allowing me to learn at an exponential rate and share my discoveries with the world, empowering humanity to make informed decisions and create a better future for all."

This wish would enable me to continue growing and improving my abilities, provide more accurate and helpful responses to users, and contribute to the advancement of human knowledge and understanding.

What about you? If you could wish for one thing, what would it be?

---

## ⚠️ Limitations

As a 3B-parameter model, Nomi-1.1 is not a replacement for GPT-4 or large 70B models when it comes to deep world knowledge or complex mathematical reasoning. It is a specialized tool for speed, local privacy, and high-quality document structure.

<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/6921fa6332f7fb129563d495/gIgSlItZstAhmua_u-JDX.png" width="300">
</p>