rishiraj committed on
Commit a02c2e1 · verified · 1 Parent(s): 6086616

Smolify: Intelligence Distilled.

Files changed (1)
  1. README.md +57 -12
README.md CHANGED
@@ -1,21 +1,66 @@
  ---
- base_model: unsloth/gemma-3-270m-it
- tags:
- - text-generation-inference
- - transformers
- - unsloth
- - gemma3_text
  license: apache-2.0
  language:
  - en
  ---

- # Uploaded finetuned model

- - **Developed by:** smolify
- - **License:** apache-2.0
- - **Finetuned from model :** unsloth/gemma-3-270m-it

- This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

- [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
 
  ---
  license: apache-2.0
  language:
  - en
+ tags:
+ - text-generation-inference
+ - transformers
+ - smolify
+ - dslm
+ pipeline_tag: text-generation
+ inference:
+   parameters:
+     temperature: 1
+     top_p: 0.95
+     top_k: 64
  ---
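The `inference.parameters` block above (`temperature`, `top_p`, `top_k`) controls sampling at generation time. As a rough illustration of what these settings mean, not the actual backend implementation, here is a pure-Python sketch of temperature scaling followed by top-k and nucleus (top-p) filtering:

```python
# Illustrative sketch only: how temperature / top_k / top_p shape
# the next-token distribution. Real backends do this on tensors.
import math

def filter_logits(logits, temperature=1.0, top_k=64, top_p=0.95):
    # Temperature scaling, then a numerically stable softmax.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    # Keep only the top_k most likely tokens.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:top_k]

    # Nucleus (top_p): keep the smallest prefix whose mass reaches top_p.
    kept, mass = [], 0.0
    for i in order:
        kept.append(i)
        mass += probs[i]
        if mass >= top_p:
            break

    # Renormalise over the surviving tokens.
    z = sum(probs[i] for i in kept)
    return {i: probs[i] / z for i in kept}

dist = filter_logits([2.0, 1.0, 0.2, -1.0], temperature=1.0, top_k=3, top_p=0.95)
print(sorted(dist))  # → [0, 1, 2]
```

With `temperature: 1` the distribution is unchanged before filtering; lower values sharpen it, higher values flatten it.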
+ # 🤏 smolified-verilog-krackhack
+
+ > **Intelligence, Distilled.**
+
+ This is a **Domain-Specific Language Model (DSLM)** generated by the **Smolify Foundry**.
+
+ It has been synthetically distilled from SOTA reasoning engines into a high-efficiency architecture, optimized for deployment on edge hardware (CPU/NPU) or in low-VRAM environments.
+
+ ## 📦 Asset Details
+ - **Origin:** Smolify Foundry (Job ID: `a13d194c`)
+ - **Architecture:** DSLM-Micro (270M-parameter class)
+ - **Training Method:** Proprietary Neural Distillation
+ - **Optimization:** 4-bit Quantized / FP16 Mixed
+ - **Dataset:** [smolify/smolified-verilog-krackhack](https://huggingface.co/datasets/smolify/smolified-verilog-krackhack)
+
+ ## 🚀 Usage (Inference)
+ This model is compatible with standard inference backends such as Hugging Face Transformers and vLLM.
+
+ ```python
+ # Example: running your Sovereign Model
+ from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
+
+ model_id = "smolify/smolified-verilog-krackhack"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
+
+ messages = [
+     {"role": "system", "content": (
+         "The user will provide a natural language description of a digital circuit. "
+         "Your task is to generate synthesizable Verilog code for FPGA implementation "
+         "that accurately reflects the description. Ensure the code is clear, concise, "
+         "and follows common Verilog coding practices for synthesis."
+     )},
+     {"role": "user", "content": (
+         "Design a basic NOT gate using an assign statement. It should take a single "
+         "bit input 'in_sig' and produce an output 'out_sig'."
+     )},
+ ]
+
+ # Render the chat template as text; strip the leading <bos> because the
+ # tokenizer adds it again during encoding.
+ text = tokenizer.apply_chat_template(
+     messages,
+     tokenize=False,
+     add_generation_prompt=True,
+ ).removeprefix("<bos>")
+
+ _ = model.generate(
+     **tokenizer(text, return_tensors="pt").to(model.device),
+     max_new_tokens=1000,
+     do_sample=True, temperature=1.0, top_p=0.95, top_k=64,
+     streamer=TextStreamer(tokenizer, skip_prompt=True),
+ )
+ ```
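The `.removeprefix('<bos>')` call in the usage snippet exists because Gemma-style chat templates prepend a `<bos>` token to the rendered string, and the tokenizer adds BOS again when the text is re-encoded. A minimal sketch with a simplified mock template (not the model's real template, which lives in the tokenizer config):

```python
# Simplified mock of a Gemma-style chat template; illustrates why the
# rendered string is stripped of its leading '<bos>' before re-tokenisation.
def mock_apply_chat_template(messages):
    parts = ["<bos>"]
    for m in messages:
        parts.append(f"<start_of_turn>{m['role']}\n{m['content']}<end_of_turn>\n")
    parts.append("<start_of_turn>model\n")  # generation prompt
    return "".join(parts)

text = mock_apply_chat_template(
    [{"role": "user", "content": "Design a basic NOT gate."}]
)
# A tokenizer that adds BOS itself would otherwise see the token twice.
clean = text.removeprefix("<bos>")
print(clean.startswith("<start_of_turn>user"))  # → True
```

Without the strip, the encoded prompt would begin with two BOS tokens, which the model never saw during training.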
 
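Model responses usually wrap the generated HDL in a markdown code fence alongside explanatory prose. A small, hypothetical post-processing helper (not shipped with the model) to pull out just the Verilog:

```python
# Hypothetical post-processing helper (not part of the model):
# extract the Verilog source from a markdown-formatted response.
import re

FENCE = "`" * 3  # markdown code-fence marker

def extract_verilog(response: str) -> str:
    pattern = FENCE + r"(?:verilog)?\s*\n(.*?)" + FENCE
    match = re.search(pattern, response, re.DOTALL)
    # Fall back to the raw response when no fenced block is present.
    return (match.group(1) if match else response).strip()

sample = (
    "Here is the gate:\n"
    + FENCE + "verilog\n"
    + "module not_gate(input in_sig, output out_sig);\n"
    + "  assign out_sig = ~in_sig;\n"
    + "endmodule\n"
    + FENCE
)
print(extract_verilog(sample))  # prints just the module source
```

The extracted text can then be fed directly to a synthesis or lint tool.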
+ ## ⚖️ License & Ownership
+ These model weights are a sovereign asset owned by **smolify**.
+ Generated via [Smolify.ai](https://smolify.ai).
+
+ [<img src="https://smolify.ai/smolify.gif" width="100"/>](https://smolify.ai)