---
language: en
license: mit
tags:
- text-generation
- fine-tuning
- humor
- nonsense
base_model: HuggingFaceTB/SmolLM2-360M
datasets:
- gor-g/philosophical-deflections
model_name: lavisduscribe
model_type: causal-lm
pipeline_tag: text-generation
---
# 🧠 Lavisduscribe

**Lavisduscribe** is a fine-tuned version of [HuggingFaceTB/SmolLM2-360M](https://huggingface.co/HuggingFaceTB/SmolLM2-360M) trained on the dataset [gor-g/philosophical-deflections](https://huggingface.co/datasets/gor-g/philosophical-deflections).

## 🪶 Model Description
Lavisduscribe is designed **not** to answer questions directly — instead, it produces elaborate, pseudo-philosophical gibberish reminiscent of _Otis the Scribe_ from _Asterix & Obelix: Mission Cleopatra_.

The model's goal is to **deflect any query** into verbose, dramatic, and meaningless reflections that sound profound but say very little.

"It can be seen as a parody or experimental text generator, exploring absurdism, verbosity, and meta-commentary in AI dialogue." (GPT-5 Mini, 24 October 2025)
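Since this is a `text-generation` causal LM, it can be loaded with the standard 🤗 Transformers API. A minimal usage sketch is below — note that the repo id `gor-g/lavisduscribe` is an assumption (the card does not state the published Hub path), as are the sampling parameters:

```python
# Minimal usage sketch. The repo id below is a guess based on the card's
# author namespace and model_name — adjust it to the actual Hub path.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gor-g/lavisduscribe"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Any direct question should be deflected into verbose pseudo-philosophy.
prompt = "What is the capital of France?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=80,      # enough room for an elaborate deflection
    do_sample=True,         # sampling keeps the gibberish varied
    temperature=0.9,
)
text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(text)
```

Sampling (`do_sample=True`) rather than greedy decoding is suggested here so repeated queries yield different deflections; this is a stylistic choice, not something mandated by the model.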