# MALM: Modular Adapter-based Language Model

- Read the full paper: MALM.pdf
- Author: Hilal Limo (Independent Researcher, 15)
- License: Apache-2.0
## Overview
This repository contains the research paper MALM: Modular Adapter-based Language Model, which introduces a lightweight and scalable framework for multilingual AI.
Instead of relying on massive monolithic models, MALM separates reasoning and translation into two modular parts:
- Core Language Model (CLM): A compact, English-focused reasoning engine.
- Specialized Translation Adapters (STAs): Lightweight, swappable neural machine translation models.
- Orchestration Layer: Connects the pieces, parsing delegation tokens (e.g. `<to:de> ... </to>`) and routing requests to the appropriate adapter.
This design drastically reduces compute cost, makes it easier to add new languages, and is especially useful for small models, edge devices, and research settings.
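To make the orchestration idea concrete, here is a minimal sketch of how delegation tokens could be parsed and routed. The function names, the regex, and the lookup-table "adapters" are illustrative assumptions for this card, not the actual API from the paper; real STAs would be trained translation models.

```python
import re

# Delegation tokens take the form <to:LANG> text </to>, as described above.
DELEGATION_RE = re.compile(r"<to:(?P<lang>[a-z]{2})>\s*(?P<text>.*?)\s*</to>", re.DOTALL)

# Toy stand-ins for Specialized Translation Adapters (STAs).
# A real system would load a trained NMT model per language here.
ADAPTERS = {
    "de": lambda text: {"my name is Adam": "Mein Name ist Adam"}.get(text, text),
}

def route(clm_output: str) -> str:
    """Replace each delegation span in the CLM output with the STA's translation."""
    def delegate(match: re.Match) -> str:
        lang, text = match.group("lang"), match.group("text")
        adapter = ADAPTERS.get(lang)
        if adapter is None:
            raise KeyError(f"no STA installed for language '{lang}'")
        return adapter(text)
    return DELEGATION_RE.sub(delegate, clm_output)

print(route("<to:de> my name is Adam </to>"))  # Mein Name ist Adam
```

Because routing is just token parsing plus a dictionary lookup, adding a language means registering one more adapter, with no change to the reasoning core.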
## Why MALM?
- Efficiency: Keep one reasoning core small and sharp.
- Scalability: Add or update languages by swapping STAs.
- Maintainability: Upgrade individual adapters without retraining the whole system.
- Small models: Well suited to low-resource environments, edge devices, and startups.
## Example Conversation Flows
Flow 1: Delegated translation

```
User: Translate "my name is Adam" into German.
CLM → <to:de> my name is Adam </to>
STA → "Mein Name ist Adam"
```

Flow 2: Multilingual question answering

```
User (in Spanish): "¿Cuánto es 12 + 7?"
Input STA (es→en) → "How much is 12 + 7?"
CLM → "The answer is <to:es> 19 </to>"
Output STA (en→es) → "La respuesta es 19"
```
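The second flow can be sketched end to end as a three-stage pipeline. The stub "models" below (`sta_es_to_en`, `clm`, `sta_en_to_es`) are tiny lookup tables standing in for trained components, so this is an assumed illustration of the control flow, not the paper's implementation.

```python
import re

def sta_es_to_en(text):
    """Input STA: Spanish -> English (stubbed with a lookup table)."""
    return {"¿Cuánto es 12 + 7?": "How much is 12 + 7?"}[text]

def clm(english_prompt):
    """CLM: reason in English, tagging content to be sent back in Spanish."""
    if english_prompt == "How much is 12 + 7?":
        return "The answer is <to:es> 19 </to>"
    raise NotImplementedError("stub CLM only handles the example prompt")

def sta_en_to_es(tagged):
    """Output STA: translate the English frame to Spanish, keeping the
    delegated span (here a number) verbatim."""
    payload = re.search(r"<to:es>\s*(.*?)\s*</to>", tagged).group(1)
    frame = re.sub(r"<to:es>.*?</to>", "{}", tagged).strip()
    translated_frame = {"The answer is {}": "La respuesta es {}"}[frame]
    return translated_frame.format(payload)

answer = sta_en_to_es(clm(sta_es_to_en("¿Cuánto es 12 + 7?")))
print(answer)  # La respuesta es 19
```

The key point of the design shows up in the composition on the last line: each stage is independently swappable, so the reasoning core never needs to see Spanish at all.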