# MALM: Modular Adapter-based Language Model

📄 [Read the full paper (MALM.pdf)](./MALM.pdf)

📝 Author: **Hilal Limo (Independent Researcher, 15)**

📜 License: [Apache-2.0](./LICENSE)

---

## Overview

This repository contains the research paper **MALM: Modular Adapter-based Language Model**, which introduces a lightweight and scalable framework for multilingual AI. Instead of relying on massive monolithic models, MALM separates **reasoning** and **translation** into two modular parts:

- **Core Language Model (CLM):** A compact, English-focused reasoning engine.
- **Specialized Translation Adapters (STAs):** Lightweight, swappable neural machine translation models.
- **Orchestration Layer:** Connects the pieces, parsing delegation tokens (e.g. ` ... `) and routing requests to the right adapter.

This design drastically reduces compute cost, makes it easier to add new languages, and is especially useful for **small models**, edge devices, and research settings.

---

## Why MALM?

- 🚀 **Efficiency:** Keep one reasoning core small and sharp.
- 🌍 **Scalability:** Add or update languages by swapping STAs.
- 🛠️ **Maintainability:** Upgrade individual adapters without retraining the whole system.
- 📱 **Small Models:** Perfect for low-resource environments, edge devices, and startups.

---

## Example Conversation Flows

```text
User: Translate "my name is Adam" into German.
CLM → my name is Adam
STA → "Mein Name ist Adam"

User (in Spanish): "¿Cuánto es 12 + 7?"
Input STA (es→en) → "How much is 12 + 7?"
CLM → "The answer is 19"
Output STA → "La respuesta es 19"
```
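To make the routing idea concrete, here is a minimal sketch of an orchestration layer. The paper's actual delegation-token syntax is not reproduced in this README, so the `<translate lang="...">...</translate>` tag format, the `Orchestrator` class, and the toy German adapter below are all hypothetical stand-ins, not MALM's real interface.

```python
import re

# Hypothetical token format: MALM's real delegation-token syntax is not shown
# in this README, so we assume tags like <translate lang="de">text</translate>.
TOKEN_RE = re.compile(r'<translate lang="(\w+)">(.*?)</translate>')

class Orchestrator:
    """Scans CLM output for delegation tokens and routes each one to the
    Specialized Translation Adapter registered for its target language."""

    def __init__(self, adapters):
        # adapters: mapping from language code to a callable STA stub
        self.adapters = adapters

    def route(self, clm_output):
        def _delegate(match):
            lang, text = match.group(1), match.group(2)
            adapter = self.adapters.get(lang)
            if adapter is None:
                # No adapter installed for this language: pass text through.
                return text
            return adapter(text)
        # Replace every delegation token with the adapter's translation.
        return TOKEN_RE.sub(_delegate, clm_output)

# Toy stub standing in for a real neural machine translation adapter.
def fake_de_adapter(text):
    return {"my name is Adam": "Mein Name ist Adam"}.get(text, text)

orch = Orchestrator({"de": fake_de_adapter})
result = orch.route('<translate lang="de">my name is Adam</translate>')
print(result)  # -> Mein Name ist Adam
```

Because adapters are looked up by language code at call time, swapping or adding an STA is just a change to the `adapters` mapping, which is the maintainability property the design aims for.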