AI & ML interests

Instruction residuals (task vectors) make it possible to continue pretraining LLMs while retaining SOTA instruction-following capabilities. This organization contains residual adapters for various popular open-weight models.
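For context, here is a minimal sketch of the task-vector arithmetic these adapters build on, assuming the standard formulation (residual = instruct weights minus base weights). The model names below are placeholders, not actual repos in this organization:

```python
# Hedged sketch of instruction residuals (task vectors).
# "org/base", "org/base-instruct", and "org/base-continued" are placeholders.
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("org/base", torch_dtype=torch.bfloat16)
instruct = AutoModelForCausalLM.from_pretrained("org/base-instruct", torch_dtype=torch.bfloat16)

# The residual is the weight delta that encodes instruction following.
base_sd, inst_sd = base.state_dict(), instruct.state_dict()
residual = {name: inst_sd[name] - base_sd[name] for name in base_sd}

# After continuing pretraining the base on new data, re-apply the residual
# to recover instruction following on top of the updated knowledge.
continued = AutoModelForCausalLM.from_pretrained("org/base-continued", torch_dtype=torch.bfloat16)
cont_sd = continued.state_dict()
continued.load_state_dict({name: cont_sd[name] + residual[name] for name in cont_sd})
continued.save_pretrained("base-continued-instruct")
```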

omarkamali posted an update 7 days ago

Omneity Labs LID Benchmark is live šŸ”„

- 8 Evals
- 10 Models (GlotLID, OpenLID, our own Gherbal and others)
- 200+ Languages
- One Leaderboard To Rule Them All!

Come find your language and see which LID model supports it best in this space šŸ‘‡

omneity-labs/lid-benchmark
omarkamali posted an update 8 days ago

I just might have cracked tokenizer-free LLMs. No vocab, no softmax.

I'm training a 22M-parameter LLM right now to test this "thing", and it's already able to formulate coherent sentences 🤯

Bear in mind, this is a completely new, tokenizer-free LLM architecture with built-in language universality.

Check the explainer video to understand what's happening. Feedback welcome on this approach!

omarkamali posted an update 25 days ago

You're probably training on outdated Wikipedia data right now and don't know it. šŸ’”

In June last year, a friend from the Moroccan Wikipedia community slid into my DMs: "Are you using the current version? The official dataset is severely outdated. We added so many articles that are nowhere to be found on HuggingFace."

He was right. I was running a 2023 snapshot. In 2025. The official Wikipedia dataset, the one hundreds of labs and researchers grab by default without a second thought, was frozen in time.
• For English, that's 700,000 missing articles.
• For Moroccan Arabic, 30% of the language's entire Wikipedia.
• For 31 other languages, there was literally no text corpus at all until recently.

I could've shrugged and moved on. Instead, I spent the following months building a monthly automated pipeline for 340+ languages on my personal laptop, nearly killing it several times in the process (100% disk, frozen screen, the works).

Nous Research trained Hermes 4 on it. INRIA cited it. It's now three years ahead of what most people are training on.

Here's the full story of how I built Wikipedia Monthly šŸ‘‡

https://omarkamali.com/blog/wikipedia-monthly-pipeline
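If you want to verify which snapshot you're actually training on, here's a minimal sketch with the datasets library. The "latest.ary" config name is an assumption based on a date.language naming pattern; check the dataset card for the exact configs:

```python
# Hedged sketch: stream one language from the monthly snapshot.
# Config and column names are assumptions; verify on the dataset page.
from datasets import load_dataset

ds = load_dataset("omarkamali/wikipedia-monthly", "latest.ary", split="train", streaming=True)
print(next(iter(ds))["title"])  # assumes a "title" column, as in standard Wikipedia dumps
```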
omarkamali posted an update 3 months ago

New year, new dataset šŸš€

I just released omarkamali/wikipedia-labels, with all the structural labels and namespaces from Wikipedia in 300+ languages. A gift for the data preprocessors and cleaners among us.

Happy new year 2026 everyone! šŸŽ†
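Here's a hedged sketch of how the labels could plug into a cleaning pass; the split and column names ("language", "label") are assumptions, so check the dataset card first:

```python
# Hedged sketch: drop non-article pages using namespace labels.
# Split and column names ("language", "label") are assumptions.
from datasets import load_dataset

labels = load_dataset("omarkamali/wikipedia-labels", split="train")
prefixes = {f"{row['label']}:" for row in labels if row["language"] == "en"}

def is_article(title: str) -> bool:
    # Keep pages whose titles don't start with a namespace prefix like "Talk:".
    return not any(title.startswith(p) for p in prefixes)
```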
omarkamali posted an update 3 months ago

Picomon v0.2.0 released! šŸ’«

- Supports all of AMD, Nvidia and Apple Silicon šŸ§‘ā€šŸ§‘ā€šŸ§’ā€šŸ§’
- Beautiful TUI with themes (who said monitoring should be boring?) šŸ’…
- Shareable Rig Cards! Boast to friends, family and foes alike 🫨

Get it now! Run uvx picomon, or pip install picomon and then launch it with picomon
omarkamali posted an update 4 months ago

Hello picomon! AMD GPU Monitoring made easy

Just run uvx picomon and behold:
ā”Œā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”  ā”Œā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”
│ GPU 0  GFX  42%  UMC  21%                │  │ GPU 1  GFX  78%  UMC  66%                │
│ PWR 135/250W (54%)  VRAM 10.0/16.0GB 62% │  │ PWR 210/250W (84%)  VRAM 14.5/16.0GB 90% │
│                                          │  │                                          │
│ GFX ā–ā–‚ā–‚ā–ƒā–„ā–„ā–…ā–†ā–†ā–‡ā–ˆā–‡ā–†ā–…ā–„ā–ƒā–‚ā–                   │  │ GFX ā–‚ā–ƒā–„ā–…ā–†ā–‡ā–ˆā–ˆā–‡ā–†ā–…ā–„ā–‚ā–‚ā–ƒā–…ā–†                    │
│ PWR ā–ā–ā–‚ā–‚ā–ƒā–„ā–„ā–…ā–†ā–‡ā–ˆā–ˆā–‡ā–†ā–…ā–„ā–‚ā–                   │  │ PWR ā–‚ā–‚ā–ƒā–„ā–…ā–†ā–‡ā–ˆā–ˆā–‡ā–†ā–…ā–„ā–ƒā–‚ā–‚ā–ƒ                    │
│ VRM ā–ā–ā–‚ā–‚ā–ƒā–„ā–„ā–…ā–†ā–‡ā–ˆā–ˆā–ˆā–‡ā–†ā–…ā–„ā–‚                   │  │ VRM ā–‚ā–ƒā–„ā–…ā–†ā–†ā–‡ā–ˆā–ˆā–ˆā–‡ā–†ā–…ā–„ā–ƒā–‚ā–‚                    │
ā””ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”˜  ā””ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”˜


Repo at https://github.com/omarkamali/picomon
Or on PyPI at https://pypi.org/project/picomon
omarkamali posted an update 4 months ago

Exciting updates to the Wikipedia Monthly dataset for November! šŸš€

• Fixed a bug to remove infobox leftovers and other wiki markers such as __TOC__
• New Python package https://pypi.org/project/wikisets: a dataset builder with efficient sampling so you can combine the languages you want seamlessly for any date (ideal for pretraining data but works for any purpose; see the sketch at the end of this post)
• Moved the pipeline to a large server. Much higher costs, but better reliability and predictability (let me know if you'd like to sponsor this!).
• Dataset sizes are unfortunately missing for this month due to shenanigans with the migration, but should be back in December's update.

Check out the dataset:
omarkamali/wikipedia-monthly
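Since wikisets' own interface may differ from what you'd guess, here's a hedged sketch of the language-mixing idea using the plain datasets API instead (wikisets packages this pattern with efficient sampling; config names are assumptions, as above):

```python
# Hedged sketch: build a multilingual pretraining mix with the generic
# `datasets` API. wikisets wraps this pattern with efficient sampling;
# its exact interface is documented on its PyPI page.
from datasets import load_dataset, interleave_datasets

langs = {"en": 0.7, "ary": 0.15, "fr": 0.15}
parts = [
    load_dataset("omarkamali/wikipedia-monthly", f"latest.{lang}", split="train", streaming=True)
    for lang in langs
]
# Sample 70% English, 15% Moroccan Arabic, 15% French.
mixed = interleave_datasets(parts, probabilities=list(langs.values()), seed=42)
```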