# FUMEA-F
FUMEA-F (Frontier Unified Multi-Expert Agent - Financial) is a marketing- and finance-focused model.
FUMEA-F is a 4-expert Mixture-of-Experts language model designed for financial analysis, marketing intelligence, market trend detection, and multi-step financial reasoning. It uses top-2 expert routing per token, activating roughly half its total parameters per forward pass.
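The top-2 routing described above can be sketched in a few lines. This is a generic MoE gate (keep the two highest-scoring experts, softmax over just those two, mix only their outputs), not FUMEA-F's actual gating code; the scalar expert "outputs" are purely illustrative.

```python
import math

def top2_route(gate_logits, expert_outputs):
    """Generic top-2 MoE combine: keep the two largest gate logits,
    softmax over just those two, and mix only those experts' outputs.
    The other experts are never evaluated, which is why only ~half
    the parameters are active per forward pass."""
    top2 = sorted(range(len(gate_logits)), key=lambda i: gate_logits[i])[-2:]
    peak = max(gate_logits[i] for i in top2)
    exps = [math.exp(gate_logits[i] - peak) for i in top2]
    probs = [e / sum(exps) for e in exps]          # renormalized over the top-2
    mixed = sum(p * expert_outputs[i] for p, i in zip(probs, top2))
    return mixed, top2

# Toy example with 4 experts, as in FUMEA-F:
gate_logits = [0.1, 2.0, -0.5, 1.3]   # experts 1 and 3 score highest
outputs = [10.0, 20.0, 30.0, 40.0]
mixed, chosen = top2_route(gate_logits, outputs)
print(sorted(chosen))   # [1, 3] -> only these two experts run for this token
```

With 4 experts and top-2 routing, exactly half the expert blocks run per token, matching the "roughly half its total parameters" figure (the attention layers and embeddings are always active, hence "roughly").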
| Property | Value |
|---|---|
| Architecture | Decoder-only Transformer (MoE) |
| Number of Experts | 4 |
| Experts per Token | 2 (top-2 routing) |
| Gate Mode | Hidden state routing |
| Context Window | 131,072 tokens |
| Positional Encoding | RoPE with YaRN scaling (factor 4.0, base 32,768) |
| Precision | bfloat16 |
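The context figures in the table are self-consistent: YaRN scaling by 4.0 extends the 32,768-token base window to 131,072. The sketch below uses naive position interpolation to show the idea of reusing pretrained rotary angles; real YaRN interpolates per-frequency (NTK-by-parts), and the RoPE theta of 10,000 is a common default assumed here, not stated in the card.

```python
import math

ROPE_THETA = 10_000      # assumed default theta; not specified by the card
BASE_CTX   = 32_768      # pre-scaling context window, from the table
FACTOR     = 4.0         # YaRN scaling factor, from the table
assert int(BASE_CTX * FACTOR) == 131_072   # the advertised extended window

def rope_angle(pos, dim_pair, head_dim=64, scale=FACTOR):
    """Rotation angle for one RoPE dimension pair under naive position
    interpolation: positions are divided by `scale`, so the extended
    window maps back onto angles seen during pretraining. (YaRN proper
    scales different frequencies differently; this is the crude version.)"""
    inv_freq = ROPE_THETA ** (-2 * dim_pair / head_dim)
    return (pos / scale) * inv_freq

# The last extended position, 131,071, lands at ~32,767.75 pre-scaling:
print(rope_angle(131_071, dim_pair=0))   # 32767.75
```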

Expert specializations:

| Expert | Domain |
|---|---|
| 0 | Marketing intelligence, brand strategy, campaign analysis |
| 1 | Financial forecasting, time-series, technical indicators |
| 2 | E-commerce trends, consumer behavior, competitive pricing |
| 3 | Financial reasoning, chain-of-thought valuation, compliance |
Supports structured function calling via the chat template, with tool invocations wrapped in `<tool_call>` tags. Built-in schemas: `analyze_ohlcv`, `web_search`, `code_executor`, `file_reader`.
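The card doesn't document the payload format inside the `<tool_call>` tags. A common convention, assumed here but not confirmed for FUMEA-F, is a JSON object with `name` and `arguments` keys; extracting such calls from generated text can then look like this (the `analyze_ohlcv` argument fields are illustrative):

```python
import json
import re

TOOL_CALL_RE = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)

def extract_tool_calls(text):
    """Pull JSON tool invocations out of <tool_call>...</tool_call> spans.
    Assumes a {"name": ..., "arguments": ...} payload, a common convention
    that the FUMEA-F card does not explicitly confirm."""
    return [json.loads(m) for m in TOOL_CALL_RE.findall(text)]

sample = (
    "Let me pull the price data first.\n"
    '<tool_call>{"name": "analyze_ohlcv", '
    '"arguments": {"symbol": "AAPL", "interval": "1d"}}</tool_call>'
)
calls = extract_tool_calls(sample)
print(calls[0]["name"])   # analyze_ohlcv
```

A dispatcher would map the parsed `name` onto a local implementation of the corresponding built-in schema and feed the result back as a tool message.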
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model = AutoModelForCausalLM.from_pretrained(
    "uaytug/fumea-f",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("uaytug/fumea-f", trust_remote_code=True)

messages = [{"role": "user", "content": "Walk me through a DCF valuation for a SaaS company."}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

# do_sample=True is required for temperature/top_p to take effect
output = model.generate(
    **inputs, max_new_tokens=2048, do_sample=True, temperature=0.6, top_p=0.9
)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```
See uaytug/fumea-f-gguf for GGUF quantizations ranging from F16 down to IQ1_S.
Recommended generation parameters:

| Parameter | Value |
|---|---|
| temperature | 0.6 |
| top_p | 0.9 |
| repetition_penalty | 1.1 |
| max_new_tokens | 8192 |
License: Apache 2.0