---
license: apache-2.0
language:
- pt
base_model:
- google/gemma-3-270m
---

# 🐶 DogeAI-v1.5-Coder

DogeAI-v1.5-Coder is a **small, experimental code-focused language model** fine-tuned from **Gemma 3 (270M parameters)**.

This model was created as a learning and experimentation project, focusing on **code generation and completion** with limited resources. It is **not intended to compete with large-scale coding models**, but rather to explore how far a compact model can go when it stays domain-focused.

---

## 🔍 Model Details

- **Base model:** Gemma 3 – 270M
- **Fine-tuning type:** Supervised fine-tuning (SFT)
- **Primary domain:** Programming / code-related text
- **Languages:** Mixed (depends on the dataset; mainly scripting-style code)
- **Parameters:** ~270 million
- **Context length:** Limited (inherits the base model's constraints)

---

## 🎯 Intended Use

DogeAI-v1.5-Coder is best suited for:

- Simple code completion
- Small scripting examples
- Educational purposes (learning how fine-tuning works)
- Research on **small language models**
- Benchmarking and experimentation

It performs best when:

- Prompts are short and explicit
- The task is narrow and well-defined
- Expectations are aligned with its size

---

## ⚠️ Limitations

This model has **clear and expected limitations**:

- Weak long-range reasoning
- Inconsistent performance on complex programming tasks
- Limited generalization outside the training distribution
- Not reliable for production or critical systems

These limitations are a direct consequence of its **small scale and experimental nature**.

---

## 🧪 Training Notes

- The model was fine-tuned on a custom dataset focused on code-related text.
- No reinforcement learning or advanced alignment techniques were used.
- The goal was experimentation and learning, not optimization for benchmarks.

---

## 📚 Why This Model Exists

DogeAI-v1.5-Coder exists as a **learning artifact**.
It represents:

- Early experimentation with fine-tuning
- Exploration of low-parameter models
- A step in understanding data quality, formatting, and model behavior

Small models are valuable tools for understanding how language models actually work.

---

## 🚫 What This Model Is NOT

- ❌ A replacement for large coding assistants
- ❌ A reasoning-focused model
- ❌ Production-ready
- ❌ A strong instruction follower

---

## 📜 License

This model follows the same license as its base model (Gemma). Please ensure compliance with the original license when using or redistributing.

---

## 🙌 Acknowledgements

- The Google Gemma team for the base model
- The open-source ML community

---

## 🧠 Final Note

DogeAI-v1.5-Coder is small, imperfect, and honest.
Its value lies in experimentation, not performance.
Sometimes, understanding the limits teaches more than chasing scale.

MADE BY AXIONLAB
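---

## 🚀 Usage Sketch

The short, explicit prompting style this card recommends can be sketched with the `transformers` library. The repo id below is a **hypothetical placeholder** (this card does not state the model's Hub id), and the generation settings are assumptions, not tuned values.

```python
# Minimal code-completion sketch using the standard transformers API.
# NOTE: the repo id is a HYPOTHETICAL placeholder -- substitute the
# actual Hugging Face Hub id of this model before running.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AxionLab/DogeAI-v1.5-Coder"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Keep the prompt short and explicit, as recommended above.
prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt")

# Modest token budget: a 270M model does best on small completions.
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Greedy decoding (the default here) tends to be the safer choice for a model this size; sampling settings can be layered on once the baseline behavior is understood.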