---
license: apache-2.0
language:
- en
tags:
- text-generation-inference
- transformers
- smolify
- dslm
pipeline_tag: text-generation
inference:
  parameters:
    temperature: 1
    top_p: 0.95
    top_k: 64
---
# 🤗 smolified-tiny-text-to-code

> **Intelligence, Distilled.**

This is a **Domain-Specific Language Model (DSLM)** generated by the **Smolify Foundry**.

It has been synthetically distilled from state-of-the-art (SOTA) reasoning models into a compact, high-efficiency architecture, optimized for deployment on edge hardware (CPU/NPU) or in low-VRAM environments.

## 📦 Asset Details
- **Origin:** Smolify Foundry (Job ID: `fe9b19bf`)
- **Architecture:** DSLM-Micro (270M-parameter class)
- **Training Method:** Proprietary neural distillation
- **Optimization:** 4-bit quantized / FP16 mixed precision
- **Dataset:** [smolified-tiny-text-to-code](https://huggingface.co/datasets/programmerGodbyte/smolified-tiny-text-to-code)

## 🚀 Usage (Inference)
This model works with the Hugging Face Transformers library (shown below) and with standard inference backends such as vLLM.

```python
# Example: running your sovereign model
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_id = "programmerGodbyte/smolified-tiny-text-to-code"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": (
        "The user will provide a natural language description of a programming task. "
        "Your goal is to generate correct, runnable Python code that solves the task. "
        "Adhere to PEP 8 style guidelines. Include type hints for all functions and "
        "variables. The code should be self-contained and ready to run."
    )},
    {"role": "user", "content": (
        "Create a Python function named `factorial` that calculates the factorial "
        "of a non-negative integer. If the input is negative, it should raise a "
        "`ValueError`. If the input is 0, it should return 1."
    )},
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
).removeprefix("<bos>")

# Use model.device so this works wherever device_map placed the weights,
# and enable sampling so temperature/top_p/top_k actually take effect.
inputs = tokenizer(text, return_tensors="pt").to(model.device)
_ = model.generate(
    **inputs,
    max_new_tokens=1000,
    do_sample=True,
    temperature=1.0, top_p=0.95, top_k=64,
    streamer=TextStreamer(tokenizer, skip_prompt=True),
)
```
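The model typically wraps its answer in a markdown code fence. A small helper (hypothetical, not part of the model or the Transformers API) can pull the runnable snippet out of the raw generation:

```python
import re


def extract_code(generation: str) -> str:
    """Return the contents of the first ```python fence, or the raw text."""
    match = re.search(r"```(?:python)?\n(.*?)```", generation, re.DOTALL)
    return match.group(1).strip() if match else generation.strip()


# Example with a mock generation string:
sample = "Here you go:\n```python\ndef add(a: int, b: int) -> int:\n    return a + b\n```"
print(extract_code(sample))
```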
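For reference, a correct solution to the example task in the prompt above (the kind of output a well-behaved generation should resemble; hand-written here, not model output) looks like this:

```python
def factorial(n: int) -> int:
    """Return n! for a non-negative integer n; raise ValueError if n < 0."""
    if n < 0:
        raise ValueError("factorial is not defined for negative integers")
    result: int = 1
    for i in range(2, n + 1):
        result *= i
    return result


print(factorial(0))  # 1
print(factorial(5))  # 120
```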
## ⚖️ License & Ownership
These model weights are a sovereign asset owned by **programmerGodbyte**.
Generated via [Smolify.ai](https://smolify.ai).

[<img src="https://smolify.ai/smolify.gif" width="100"/>](https://smolify.ai)