Fat_Fish

March 2026. I wanted to upscale, I wanted to prune. So why not have both? And why's the fish fat anyway? And is this even coherent at this point?

It's coherent, follows instructions, knows new stuff, and speaks new languages.

It started as a normal Mistral Nemo, then it ate about 3B tokens, and absolutely unhinged modifications were made to it, making it thiccer in all the right(?) places.

Basically, this is a highly experimental proper upscale of mistralai/Mistral-Nemo-Base-2407.

About $1,000 went into this little project; not a bad investment for a worthwhile upscale experiment on a Mistral-based model.

IMPORTANT: This is an intermediate step of what I have in mind; this model, while (surprisingly) coherent, needs more work. I decided to release it publicly 'as is' in its current form, because multiple people expressed enthusiasm about tuning it (based unhinged curiosity, to be honest).


But WHY?!

Because I think that:

  1. Mistral Nemo is excellent
  2. We likely won't get many more dense models, because MoE master race

Both points carry more weight than people realize. While Mistral released newer dense models at a similar size (14B, for example), their old Nemo was, in many people's opinion, generally better. How do I know? Simple: look at how many tunes (post-2025, and even in 2026) Nemo got versus the newer bases. The benchmarks also suggest that the old Nemo knows more stuff and is very tuning-friendly.

As for the second point: while the open-source community still gets a new dense base here and there, they are few and far between since the meteoric rise of (mostly giant) MoEs.

Basically, I went "If I can't get a new base model, I'll make one myself", sort of.


"Proper" upscale AND a prune

Why do I say "proper"? Aren't there countless upscales of various models in the wild? Not really. Most "upscales" are just stack merges made with mergekit, often with down_proj zeroed out, because slapping duplicated layers into random segments usually makes the model output ASCII garbage and random words. No layers were zeroed out during the feeding of this fish.

This is both an upscale AND a prune; truly naughty stuff was done to the beloved little Nemo.
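To make the duplicate-and-perturb idea concrete, here is a toy numpy sketch of the general technique (duplicating layers and adding noise so the copies can diverge during continued pretraining). It is an illustration of the approach, not the exact recipe used for this model; the function name and parameters are my own.

```python
import numpy as np

def upscale_layers(layers, dup_indices, noise_std=0.01, seed=0):
    """Duplicate the selected layers in place in the stack, adding a
    small Gaussian perturbation to each copy so the duplicate starts
    from a slightly different point than its parent and can learn
    different features during continued pretraining."""
    rng = np.random.default_rng(seed)
    out = []
    for i, w in enumerate(layers):
        out.append(w)
        if i in dup_indices:
            # The noise breaks the symmetry between original and copy;
            # an exact copy tends to stay redundant.
            out.append(w + rng.normal(0.0, noise_std, size=w.shape))
    return out

# Toy stack: one weight matrix standing in for each transformer layer.
layers = [np.full((4, 4), float(i)) for i in range(4)]
fat = upscale_layers(layers, dup_indices={1, 2})
print(len(fat))  # 4 layers grow to 6 after duplicating layers 1 and 2
```

In a real upscale the same idea applies per tensor inside each duplicated decoder block, and the noise scale matters: too little and the copies stay redundant, too much and the model degrades into gibberish.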

Here are the main architecture changes I made:

Parameter           Base Nemo   Fat_Fish
Hidden Size         5120        5120
Intermediate Size   14336       12608
Layers              32          56
Attention Heads     32          48
Key/Value Heads     8           12 (because why not)
  • Why 12 KV heads instead of 16? I know 12 isn't the neatest number, but I wanted to see how it behaves in practice. Theoretically, increasing KV heads should improve context representation and attention fidelity, but jumping all the way to 16 would add noticeably more memory and compute overhead during both training and inference. I tried 12 as a middle ground, and it ended up working surprisingly well: stable during tuning, no issues during inference, and it also behaved nicely under quantization. So despite being a slightly "awkward" number architecturally, in practice it turned out to be a very workable compromise between efficiency and capacity.
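For reference, here is the table above written out in config.json-style keys (the names mirror the Hugging Face Mistral config fields), plus a quick sanity check on why 12 KV heads works at all: the 48 attention heads still divide evenly into 12 KV groups, giving 4 query heads per KV head, the same GQA ratio as the base model.

```python
# The architecture table as plain dicts (base = Mistral-Nemo-Base-2407).
base = dict(hidden_size=5120, intermediate_size=14336,
            num_hidden_layers=32, num_attention_heads=32,
            num_key_value_heads=8)
fat_fish = dict(hidden_size=5120, intermediate_size=12608,
                num_hidden_layers=56, num_attention_heads=48,
                num_key_value_heads=12)

# GQA sanity check: query heads must divide evenly into KV groups.
for name, cfg in (("base", base), ("fat_fish", fat_fish)):
    assert cfg["num_attention_heads"] % cfg["num_key_value_heads"] == 0
    group = cfg["num_attention_heads"] // cfg["num_key_value_heads"]
    print(name, "query heads per KV head:", group)  # 4 in both cases
```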

Suggestions on how to use it

This model is NOT made for human consumption 'as is', but rather as a base to build upon. You don't just eat raw dough now, do you? (actually, I'm sure that somewhere someone is 🥟👨‍🍳)

Noise was injected into the duplicated tensors in specific places so they would differ enough from their originals to learn new stuff. Surprisingly, after the massive CPT, some of them converged back to nearly the same patterns. Hence, I recommend:

  • Run a layer similarity analysis
  • Target the most similar layers for full finetuning while keeping the rest frozen
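The two steps above can be sketched roughly like this: flatten each layer's weights, score adjacent pairs by cosine similarity, and treat the highest-scoring pairs as the "converged" duplicates worth unfreezing. This is a minimal illustration with made-up function names, not a prescribed tool.

```python
import numpy as np

def cosine(a, b):
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def most_similar_pairs(layer_weights, top_k=3):
    """Rank adjacent layer pairs by cosine similarity of their
    flattened weights and return the indices of the most redundant
    pairs: candidates for targeted full finetuning."""
    sims = sorted(
        ((cosine(layer_weights[i], layer_weights[i + 1]), i)
         for i in range(len(layer_weights) - 1)),
        reverse=True)
    return [i for _, i in sims[:top_k]]

# Toy example: layer 2 is a near-duplicate of layer 1.
rng = np.random.default_rng(0)
w = [rng.normal(size=64) for _ in range(5)]
w[2] = w[1] + 1e-3 * rng.normal(size=64)
print(most_similar_pairs(w, top_k=1))  # -> [1]
```

In an actual finetune you would then freeze everything (e.g. `model.requires_grad_(False)` in PyTorch) and re-enable gradients only on the layer pairs this returns.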

What new data was added

Data Source / Type        Percentage   Notes
Fandom / Lore Knowledge   20%          Heavy emphasis on Morrowind, Fallout, and Kenshi knowledge and lore
Human Written Content     50%          General internet writing, essays, blogs, discussions, and natural dialogue
Synthetic Instruct Data   4%           Instruction-style prompts
Hebrew Text Corpus        16%          Modern Hebrew web text, forums, documentation, and conversational data
Other Mixed Sources       10%          Miscellaneous datasets and balancing material
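For a sense of scale, the percentages above can be turned into rough per-source token budgets for the ~3B-token CPT run. The source keys are my own shorthand labels; the only real numbers here are the shares from the table and the "about 3B tokens" figure from this card.

```python
# Mixture shares from the table above; they sum to 100%.
mixture = {
    "fandom_lore": 0.20,
    "human_written": 0.50,
    "synthetic_instruct": 0.04,
    "hebrew": 0.16,
    "other_mixed": 0.10,
}
assert abs(sum(mixture.values()) - 1.0) < 1e-9

total_tokens = 3_000_000_000  # "about 3B tokens", per the card
budgets = {src: round(frac * total_tokens) for src, frac in mixture.items()}
for src, n in budgets.items():
    print(f"{src:>18}: {n:,} tokens")
```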

SAFETY

  • Not very safe. Neither are knives; it's a dangerous world out there.
