---
license: apache-2.0
language:
- en
tags:
- text-generation-inference
- transformers
- smolify
- dslm
pipeline_tag: text-generation
inference:
  parameters:
    temperature: 1
    top_p: 0.95
    top_k: 64
---
# 🤗 smolified-errmind-compiler-explainer-final

> **Intelligence, Distilled.**

This is a **Domain Specific Language Model (DSLM)** generated by the **Smolify Foundry**.

It was synthetically distilled from state-of-the-art (SOTA) reasoning engines into a high-efficiency architecture, optimized for deployment on edge hardware (CPU/NPU) and in low-VRAM environments.
## 📦 Asset Details
- **Origin:** Smolify Foundry (Job ID: `4ea62cd3`)
- **Architecture:** DSLM-Micro (270M parameter class)
- **Training Method:** Proprietary neural distillation
- **Optimization:** 4-bit quantized / FP16 mixed precision
- **Dataset:** [smolify/smolified-errmind-compiler-explainer-final](https://huggingface.co/datasets/smolify/smolified-errmind-compiler-explainer-final)

## 🚀 Usage (Inference)
This model is compatible with standard inference backends such as 🤗 Transformers (shown below) and vLLM.

```python
# Example: running your Sovereign Model with Transformers
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_id = "smolify/smolified-errmind-compiler-explainer-final"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": (
        "You are a precise expert at interpreting C++ compiler errors (GCC/Clang). "
        "Given a raw compiler error log, output ONLY valid JSON with keys: "
        "summary, why, fix, patch, confidence. No extra text."
    )},
    {"role": "user", "content": (
        "tmp_gen/err_undefined_variable_84.cpp:1:5: error: 'v84_sdlk' was not declared in this scope\n"
        "    1 | int main(){ v84_sdlk; // force generic error for undefined_variable }"
    )},
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
).removeprefix("<bos>")

_ = model.generate(
    # Send inputs to wherever device_map placed the model (works on CPU or GPU).
    **tokenizer(text, return_tensors="pt").to(model.device),
    max_new_tokens=1000,
    do_sample=True, temperature=1, top_p=0.95, top_k=64,
    streamer=TextStreamer(tokenizer, skip_prompt=True),
)
```

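Because the system prompt constrains the model to emit JSON with a fixed set of keys, callers will typically want to validate a response before acting on it. A minimal sketch (the `raw` string below is an illustrative, hand-written response, not actual model output):

```python
import json

# Keys the system prompt instructs the model to emit.
EXPECTED_KEYS = {"summary", "why", "fix", "patch", "confidence"}

def parse_explainer_output(raw: str) -> dict:
    """Parse the model's JSON reply and reject it if any expected key is missing."""
    data = json.loads(raw)
    missing = EXPECTED_KEYS - data.keys()
    if missing:
        raise ValueError(f"response missing keys: {sorted(missing)}")
    return data

# Illustrative response shaped like the schema the system prompt requests.
raw = json.dumps({
    "summary": "Use of undeclared identifier 'v84_sdlk'.",
    "why": "The name was never declared before it was used.",
    "fix": "Declare the variable before referencing it.",
    "patch": "int main(){ int v84_sdlk = 0; }",
    "confidence": 0.9,
})
result = parse_explainer_output(raw)
print(result["summary"])
```

In production you may also want to guard `json.loads` itself, since small models can occasionally emit malformed JSON despite the instruction.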
## ⚖️ License & Ownership
These model weights are a sovereign asset owned by **smolify**.
Generated via [Smolify.ai](https://smolify.ai).

[<img src="https://smolify.ai/smolify.gif" width="100"/>](https://smolify.ai)