
License: Apache-2.0

MoME-A2.7B (Multi-Chain Mixture of Experts)

Introduction

MoME (Multi-Chain Mixture of Experts) is a specialized large language model tailored for multi-chain transaction analysis and cross-chain data workflows. By leveraging a Mixture of Experts (MoE) architecture, MoME delivers chain-specific insights for multiple blockchain networks—such as Aptos, Polkadot, Ripple, and more—all under one inference environment.

MoME-A2.7B will be open-sourced soon. We will update this card with direct links to the weights and checkpoints when they become publicly available.

Model Details

  • Architecture: MoE-based, derived from a dense LLM and optimized for multi-chain transaction parsing and domain-focused conversation.
  • Parameters: Approximately 14.3B total parameters (BF16), with roughly 2.7B activated per token at runtime, enabling efficient inference across multiple “expert” domains.
  • Performance: On par with a larger 7B-class multi-chain model while requiring around 25% fewer computational resources. Early benchmarking shows 1.74× faster inference compared to larger multi-chain models.
  • Training Data: Trained on a curated set of chain-centric corpora (e.g., on-chain logs, developer manuals, academic references). This specialized data covers Aptos, Polkadot, Ripple, and beyond.
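The sparse-activation idea behind these numbers can be sketched in a few lines: a router scores every expert for each token and only the top-k experts run, which is how total parameters (~14.3B) can far exceed active parameters (~2.7B). The toy routing function below is purely illustrative and is not the actual MoME router:

```python
# Toy illustration of top-k MoE routing (not the actual MoME implementation).
# Each token gets a score per expert; only the k best-scoring experts run,
# so compute scales with k rather than with the total expert count.

def route_tokens(router_logits, k=2):
    """Return, per token, the indices of the k highest-scoring experts."""
    return [
        sorted(range(len(row)), key=lambda j: -row[j])[:k]
        for row in router_logits
    ]

# Four experts, two tokens; only 2 of 4 experts activate per token.
logits = [[0.1, 2.0, -1.0, 0.5],
          [3.0, 0.0, 1.0, 2.5]]
print(route_tokens(logits, k=2))  # [[1, 3], [0, 3]]
```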

Requirements

MoME relies on custom modules in the latest transformers library from Hugging Face. For best compatibility, install from source:

pip install git+https://github.com/huggingface/transformers

This ensures that custom model classes (e.g., mome_moe) are properly registered and loaded; passing trust_remote_code=True to from_pretrained may also be required for custom architectures.

Usage

While MoME-A2.7B can provide a foundation for multi-chain text generation tasks, targeted fine-tuning—such as SFT, RLHF, or extended domain pretraining—is strongly recommended for:

  • Cross-chain transaction decoding
  • Chain-specific Q&A
  • DeFi analytics across multiple blockchains
  • NFT contract interpretation
  • General blockchain R&D
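For the SFT route mentioned above, question/answer pairs typically need to be flattened into single training texts. A minimal data-preparation sketch, assuming a simple instruction template (the "### Question/### Answer" schema and the JSONL layout are assumptions, not an official MoME format):

```python
import json

# Hypothetical instruction template for chain-specific Q&A fine-tuning data;
# the "### Question/### Answer" layout is an assumption, not MoME's format.
def to_training_text(prompt, response):
    return f"### Question:\n{prompt}\n\n### Answer:\n{response}"

examples = [
    ("What does a Polkadot XCM message contain?",
     "A program of cross-consensus instructions executed by the receiving chain."),
    ("How is an Aptos entry-function payload structured?",
     "A module address, function name, type arguments, and encoded arguments."),
]

# One JSON record per line (JSONL), a common SFT data layout.
records = [json.dumps({"text": to_training_text(q, a)}) for q, a in examples]
print(len(records))  # 2
```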

Basic Example

from transformers import AutoTokenizer, AutoModelForCausalLM

# Example usage - subject to change once weights are released
model_name = "momeaicrypto/mome-a2.7b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# trust_remote_code lets the custom MoE model classes load
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)

prompt = "Explain how to decode a Polkadot liquidity transaction"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
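For decoding tasks specifically, wrapping the raw payload in an explicit instruction tends to give the model more to work with. A hypothetical prompt builder (the template and payload below are illustrative assumptions, not a format the model is known to expect):

```python
# Hypothetical prompt builder for transaction decoding; the template is an
# assumption for illustration, not a format the model was trained on.
def build_decode_prompt(chain, payload):
    return (
        f"You are analyzing a {chain} transaction.\n"
        f"Payload: {payload}\n"
        "Explain, step by step, what this transaction does."
    )

print(build_decode_prompt("Polkadot", "0x1f03..."))
```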

Limitations & Disclaimer

  1. Early Release: MoME remains under development, and final weights will be shared pending internal validation.
  2. Chain Expertise Bias: Certain blockchains or contract types may be underrepresented in the training data, leading to potentially incomplete or biased outputs.
  3. Production Readiness: Further fine-tuning or adaptation is advised before deploying this model in production-critical settings.
  4. Responsible Use: Comply with relevant legal and ethical guidelines for AI applications in finance and blockchain.

Citation & Contact

Questions or collaboration inquiries can be directed to our forthcoming GitHub repo (link to be provided) or directly to the maintainers. If you integrate MoME into research or production, please cite it once the official white paper becomes available.

We look forward to releasing MoME-A2.7B and expanding the multi-chain LLM ecosystem.
