---
license: apache-2.0
language:
- en
tags:
- text-generation-inference
- transformers
- smolify
- dslm
pipeline_tag: text-generation
inference:
  parameters:
    temperature: 1
    top_p: 0.95
    top_k: 64
---
# 🤏 smolified-tiny-text-to-code

> **Intelligence, Distilled.**

This is a **Domain Specific Language Model (DSLM)** generated by the **Smolify Foundry**.

It has been synthetically distilled from SOTA reasoning engines into a high-efficiency architecture, optimized for deployment on edge hardware (CPU/NPU) or in low-VRAM environments.

## 📦 Asset Details
- **Origin:** Smolify Foundry (Job ID: `fe9b19bf`)
- **Architecture:** DSLM-Micro (270M Parameter Class)
- **Training Method:** Proprietary Neural Distillation
- **Optimization:** 4-bit Quantized / FP16 Mixed
- **Dataset:** [Link to Dataset](https://huggingface.co/datasets/programmerGodbyte/smolified-tiny-text-to-code)
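
Given the 4-bit / FP16 optimization noted above, the checkpoint can be loaded in 4-bit on low-VRAM hardware. The sketch below uses Transformers with bitsandbytes; the quantization settings shown are illustrative assumptions, not necessarily the exact configuration produced by the Foundry.

```python
# Hedged sketch: loading the model in 4-bit via bitsandbytes (settings are illustrative).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "programmerGodbyte/smolified-tiny-text-to-code"

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # 4-bit weight quantization
    bnb_4bit_compute_dtype=torch.float16,  # FP16 compute, matching the mixed-precision note above
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",
)
```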

## 🚀 Usage (Inference)
This model is compatible with standard inference backends such as Hugging Face Transformers (shown below) and vLLM (see the sketch after the example).

```python
# Example: running the model with Hugging Face Transformers
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_id = "programmerGodbyte/smolified-tiny-text-to-code"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {'role': 'system', 'content': '''The user will provide a natural language description of a programming task. Your goal is to generate correct, runnable Python code that solves the task. Adhere to PEP 8 style guidelines. Include type hints for all functions and variables. The code should be self-contained and ready to run.'''},
    {'role': 'user', 'content': '''Create a Python function named `factorial` that calculates the factorial of a non-negative integer. If the input is negative, it should raise a `ValueError`. If the input is 0, it should return 1.'''}
]

# Build the prompt; strip the leading <bos> because the tokenizer adds it again below.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
).removeprefix('<bos>')

# Send inputs to whatever device the model was placed on by device_map="auto".
inputs = tokenizer(text, return_tensors="pt").to(model.device)
_ = model.generate(
    **inputs,
    max_new_tokens=1000,
    do_sample=True,
    temperature=1.0, top_p=0.95, top_k=64,
    streamer=TextStreamer(tokenizer, skip_prompt=True),
)
```
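
For higher-throughput serving, the same checkpoint can be pointed at vLLM. The snippet below is a minimal, untested sketch of vLLM's offline `LLM` API (assuming a recent release that supports `LLM.chat`); the messages and sampling values simply mirror the Transformers example above.

```python
# Hedged sketch: offline inference with vLLM (assumes a recent vLLM release with LLM.chat).
from vllm import LLM, SamplingParams

model_id = "programmerGodbyte/smolified-tiny-text-to-code"
llm = LLM(model=model_id)

sampling = SamplingParams(temperature=1.0, top_p=0.95, top_k=64, max_tokens=1000)

messages = [
    {"role": "system", "content": "Generate correct, runnable, PEP 8-compliant Python code for the task."},
    {"role": "user", "content": "Create a Python function named `factorial` for non-negative integers."},
]

# LLM.chat applies the model's chat template before generation.
outputs = llm.chat(messages, sampling)
print(outputs[0].outputs[0].text)
```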

## ⚖️ License & Ownership
These model weights are a sovereign asset owned by **programmerGodbyte**.
Generated via [Smolify.ai](https://smolify.ai).

[<img src="https://smolify.ai/smolify.gif" width="100"/>](https://smolify.ai)