rishiraj committed on
Commit 87a32a1 · verified · 1 Parent(s): 37a9bbd

Smolify: Intelligence Distilled.

Files changed (1): README.md +56 -12
README.md CHANGED
@@ -1,21 +1,65 @@
  ---
- base_model: unsloth/gemma-3-270m-it
- tags:
- - text-generation-inference
- - transformers
- - unsloth
- - gemma3_text
  license: apache-2.0
  language:
  - en
  ---
-
- # Uploaded finetuned model
-
- - **Developed by:** smolify
- - **License:** apache-2.0
- - **Finetuned from model :** unsloth/gemma-3-270m-it
-
- This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
-
- [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
  ---
  license: apache-2.0
  language:
  - en
+ tags:
+ - text-generation-inference
+ - transformers
+ - smolify
+ - dslm
+ pipeline_tag: text-generation
+ inference:
+   parameters:
+     temperature: 1
+     top_p: 0.95
+     top_k: 64
  ---
+ # 🤏 smolified-62f5cd93
+
+ > **Intelligence, Distilled.**
+
+ This is a **Domain-Specific Language Model (DSLM)** generated by the **Smolify Foundry**.
+
+ It has been synthetically distilled from SOTA reasoning engines into a high-efficiency architecture optimized for deployment on edge hardware (CPU/NPU) or in low-VRAM environments.
+
+ ## 📦 Asset Details
+ - **Origin:** Smolify Foundry (Job ID: `62f5cd93`)
+ - **Architecture:** DSLM-Micro (270M-parameter class)
+ - **Training Method:** Proprietary Neural Distillation
+ - **Optimization:** 4-bit quantized / FP16 mixed precision
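The quantization figures above can be sanity-checked with back-of-envelope arithmetic: raw weight memory is roughly parameter count times bits per parameter. This is an illustrative estimate only, not a measured figure from the model card; real checkpoints add overhead for embeddings, quantization scales, and runtime buffers.

```python
# Back-of-envelope weight-memory estimate for a 270M-parameter model.
# Real checkpoints are somewhat larger (embedding tables and quantization
# metadata are often kept in higher precision), so treat these as lower bounds.

def weight_memory_mb(n_params: int, bits_per_param: float) -> float:
    """Memory for the raw weights, in megabytes."""
    return n_params * bits_per_param / 8 / 1e6

N_PARAMS = 270_000_000

fp16_mb = weight_memory_mb(N_PARAMS, 16)  # FP16 baseline: 540 MB
q4_mb = weight_memory_mb(N_PARAMS, 4)     # 4-bit quantized: 135 MB

print(f"FP16: {fp16_mb:.0f} MB, 4-bit: {q4_mb:.0f} MB")
```

At roughly 135 MB of weights in 4-bit form, the model fits comfortably in CPU RAM or NPU memory, which is consistent with the edge-deployment claim above.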
+
+ ## 🚀 Usage (Inference)
+ This model is compatible with standard inference backends such as 🤗 Transformers and vLLM. The example below uses Transformers:
+
+ ```python
+ # Example: running your sovereign model
+ from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
+
+ model_id = "smolify/smolified-62f5cd93"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
+
+ messages = [
+     {"role": "system", "content": "You are a Named Entity Recognition model for Bengali-English text. Extract entities into JSON: {'PER': [], 'ORG': [], 'LOC': [], 'DATE': []}. Return JSON only."},
+     {"role": "user", "content": "Amit ajke Victoria Memorial ghurte jabe bolche."},
+ ]
+ text = tokenizer.apply_chat_template(
+     messages,
+     tokenize=False,
+     add_generation_prompt=True,
+ ).removeprefix("<bos>")
+
+ inputs = tokenizer(text, return_tensors="pt").to(model.device)
+ _ = model.generate(
+     **inputs,
+     max_new_tokens=1000,
+     do_sample=True,  # required for temperature/top_p/top_k to take effect
+     temperature=1.0, top_p=0.95, top_k=64,
+     streamer=TextStreamer(tokenizer, skip_prompt=True),
+ )
+ ```
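To consume the output programmatically rather than stream it, generate without the streamer, decode the new tokens, and parse the JSON. A minimal parsing sketch follows; the `parse_entities` helper and `sample` string are illustrative assumptions, not part of the model card, and the `ast.literal_eval` fallback is a defensive choice for single-quoted output, which models prompted with `{'PER': []}` sometimes echo back:

```python
import ast
import json

def parse_entities(raw: str) -> dict:
    """Parse the model's entity output, tolerating single-quoted pseudo-JSON."""
    raw = raw.strip()
    try:
        return json.loads(raw)
    except (json.JSONDecodeError, ValueError):
        # Fallback for single-quoted dicts, which are valid Python literals
        # but not valid JSON.
        return ast.literal_eval(raw)

# Hypothetical output for the example prompt above:
sample = "{'PER': ['Amit'], 'ORG': [], 'LOC': ['Victoria Memorial'], 'DATE': []}"
print(parse_entities(sample))
```

In a real pipeline you would also want to handle malformed output (e.g. wrap the call in a `try`/`except` and retry or return an empty schema), since small models do not always follow the "JSON only" instruction.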
+
+ ## ⚖️ License & Ownership
+ These model weights are a sovereign asset owned by the client.
+ Generated via [Smolify.ai](https://smolify.ai).
+
+ [<img src="https://smolify.ai/smolify.gif" width="100"/>](https://smolify.ai)