Schnuckade committed on
Commit be348f8 · verified · 1 Parent(s): 4665bfd

Update README.md

Files changed (1): README.md +62 -14
README.md CHANGED
@@ -1,23 +1,71 @@
  ---
  tags:
- - gguf
- - llama.cpp
  - unsloth
-
  ---

- # Nomi-1-Flash : GGUF

- This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).

- **Example usage**:
- - For text only LLMs: `./llama.cpp/llama-cli -hf LazyLoopStudio/Nomi-1-Flash --jinja`
- - For multimodal models: `./llama.cpp/llama-mtmd-cli -hf LazyLoopStudio/Nomi-1-Flash --jinja`

- ## Available Model files:
- - `Qwen2.5-3B-Instruct.Q4_K_M.gguf`

- ## Ollama
- An Ollama Modelfile is included for easy deployment.
- This was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth)
- [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
 
  ---
+ license: apache-2.0
+ base_model: unsloth/Qwen2.5-3B-Instruct-bnb-4bit
+ language:
+ - en
+ - de
  tags:
+ - creative
+ - flash
+ - roleplay
  - unsloth
+ - lazyloopstudio
+ - nomi
+ - creative_writing
+ - nomi flash
+ - nomi 1
+ model_name: Nomi-1-Flash
+ datasets:
+ - databricks/databricks-dolly-15k
+ metrics:
+ - character
+ pipeline_tag: text-generation
+ library_name: transformers
  ---

+ <div align="center">
+
+ ![Znomi_nbg](https://cdn-uploads.huggingface.co/production/uploads/6921fa6332f7fb129563d495/9oOMYWNoxBDmF854MciTn.png)
+
+ </div>
+
+ # Nomi-1-Flash ⚡
+
+ **Nomi-1-Flash** is a high-speed, creative-focused AI companion based on the Qwen 2.5 3B architecture. Developed by **LazyLoopStudio**, this model is fine-tuned to prioritize vibrant personality, creative vocabulary, and rapid responses over rigid technical instruction following.
+
+ ## 📊 Evaluation & Benchmarks
+
+ We believe in transparency. Nomi-1-Flash was tested with a custom evaluation suite that measures its "Creative First" approach.
+
+ | Metric | Score | Interpretation |
+ | :--- | :---: | :--- |
+ | **Creativity Index** | **100.0%** | Exceptional vocabulary diversity and imaginative flair. |
+ | **General Knowledge (MMLU)** | **48.0%** | Solid factual foundation, comparable to other mid-sized models. |
+ | **Instruction Following (IFEval)** | **33.3%** | Low; Nomi tends to prioritize style over strict formatting rules. |
+
+ ### Summary
+ Nomi-1-Flash is **not** a coding or logic expert. She is a storyteller and a conversationalist. While her instruction following scores lower than the base model's, her creative output is significantly more engaging and human-like.
+
+ ## 🚀 Quick Start (Inference)
+
+ To use Nomi-1-Flash in Python (requires `unsloth`):
+
+ ```python
+ from unsloth import FastLanguageModel
+ import torch
+
+ model, tokenizer = FastLanguageModel.from_pretrained(
+     model_name = "LazyLoopStudio/Nomi-1-Flash",
+     max_seq_length = 2048,
+     load_in_4bit = True,
+ )
+ FastLanguageModel.for_inference(model)
+
+ prompt = "Write a creative opening for a story about a neon-lit cloud city."
+ inputs = tokenizer([f"<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n"], return_tensors="pt").to("cuda")
+
+ outputs = model.generate(**inputs, max_new_tokens=256)
+ print(tokenizer.batch_decode(outputs)[0])
+ ```
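If `unsloth` is unavailable, the checkpoint can likely be loaded with plain `transformers` as well — a minimal sketch, assuming the repo hosts standard Qwen2.5-compatible weights; the `build_chatml_prompt` and `generate` helpers are illustrative names, not part of the model card:

```python
def build_chatml_prompt(user_message: str) -> str:
    """Mirror the ChatML prompt layout used in the Unsloth example."""
    return f"<|im_start|>user\n{user_message}<|im_end|>\n<|im_start|>assistant\n"

def generate(prompt_text: str, model_id: str = "LazyLoopStudio/Nomi-1-Flash") -> str:
    # Imported lazily so the prompt helper works without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    inputs = tokenizer(build_chatml_prompt(prompt_text), return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=256)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```

Usage: `generate("Write a creative opening for a story about a neon-lit cloud city.")` — note that the first call downloads the weights, and a GPU is needed for reasonable speed.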