AI & ML interests

Small LMs for small computers

Recent Activity

Tonic
posted an update about 10 hours ago
๐Ÿ™‹๐Ÿปโ€โ™‚๏ธ Hey there folks ,

Turns out: if we predict 🌍 Earth, we can spend more time looking for interesting things and less time looking at things we expect to see.

Sentinel-2 imagery 🛰️ takes a long time to downlink to Earth, so our "near real time" systems are quite far from that in practical terms.

Meanwhile, if we "predict" what we will see, based on what we do see, we can send down much less data in a timely way and prioritize 📡 earth-bound response.

I'm talking about illegal fishing, logging, mining, or building in nature reserves: the earlier we predict it, the better we're able to stop it in time.

At least that's the concept!

Check out the blog: https://huggingface.co/blog/Tonic/save-patagonia-by-predicting-earth


- Collection: https://huggingface.co/collections/NuTonic/earth-observation-with-temporal-and-general-understanding
- Code: https://github.com/Josephrp/Nutonic
- Dataset: NuTonic/sat-vl-sft-training-ready-v1
- Model: NuTonic/lspace
- Training: NuTonic/lspace-trackio
- Evals: NuTonic/Patagonia_Eval
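
The downlink-saving idea above can be sketched as an on-board filter: transmit only the tiles where the observation diverges from the prediction. This is purely illustrative and not the actual NuTonic pipeline; the tile representation and threshold are invented for the example.

```python
def tiles_to_downlink(predicted, observed, threshold=0.1):
    """Return indices of tiles whose mean absolute prediction error
    exceeds the threshold; only these tiles need to be transmitted."""
    selected = []
    for i, (pred, obs) in enumerate(zip(predicted, observed)):
        err = sum(abs(p - o) for p, o in zip(pred, obs)) / len(pred)
        if err > threshold:
            selected.append(i)
    return selected
```

Ground stations would then request full-resolution data only for the flagged tiles, which is where the anomalies (new clearings, vessels, structures) are most likely to be.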
ajibawa-2023
posted an update 9 days ago
Stitched-Reasoning-Trajectories-7M

Dataset: ajibawa-2023/Stitched-Reasoning-Trajectories-7M
Stitched-Reasoning-Trajectories-7M is a massive-scale, synthetic multi-hop reasoning dataset. It was built by algorithmically "stitching" together discrete reasoning traces from the original glaiveai/reasoning-v1-20m dataset into continuous, coherent, and logically structured multi-agent trajectories.

By extracting internal sub-questions from <think> blocks and mapping high-information keyword overlaps, this dataset transforms single-turn Q&A pairs into deep, multi-step research plans. To ensure high quality and eliminate "topic drift," every trajectory has been verified using a dense semantic embedding model (BAAI/bge-large-en-v1.5).

The resulting dataset consists of 709 .jsonl files containing over 7.2 million entirely deduplicated, highly coherent reasoning chains.
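
The keyword-overlap stitching described above can be sketched in a few lines; this is a simplified heuristic with an invented Jaccard threshold, not the dataset's actual pipeline.

```python
import re

STOPWORDS = {"the", "a", "an", "is", "of", "to", "and", "in", "what", "how"}

def keywords(text):
    """Extract high-information keywords: lowercase tokens minus stopwords."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return {t for t in tokens if t not in STOPWORDS and len(t) > 2}

def overlap_score(trace_a, trace_b):
    """Jaccard overlap between the keyword sets of two reasoning traces."""
    ka, kb = keywords(trace_a), keywords(trace_b)
    if not ka or not kb:
        return 0.0
    return len(ka & kb) / len(ka | kb)

def stitch(traces, threshold=0.2):
    """Greedily chain traces whose keyword overlap with the previous
    trace exceeds the threshold, forming one multi-hop trajectory."""
    if not traces:
        return []
    chain = [traces[0]]
    for trace in traces[1:]:
        if overlap_score(chain[-1], trace) >= threshold:
            chain.append(trace)
    return chain
```

In the real pipeline, each stitched trajectory is additionally verified with a dense embedding model (BAAI/bge-large-en-v1.5); a cosine-similarity check over sentence embeddings would replace or augment the simple Jaccard score here to catch topic drift that keyword overlap misses.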
Sri-Vigneshwar-DJ
posted an update 12 days ago

We ran Feather DB v0.8.0 on LongMemEval (ICLR 2025): 500 questions across real multi-session conversations, up to 115K tokens each.

**Score: 0.693** · GPT-4o full-context baseline: 0.640
Full 500-question run with Gemini-Flash: **$2.40**

Per-axis breakdown:
→ Info-extraction: **0.942**
→ Knowledge-update: **0.714**
→ Multi-session: **0.606**
→ Temporal: **0.477** ← the hard one; Phase 9 addresses this

Architecture: Hybrid BM25+dense · adaptive temporal decay · embedded (no server) · p50 = 0.19ms · MIT

pip install feather-db

Raw results + audit JSONs: Hawky-ai/longmemeval-results
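
The "BM25 + adaptive temporal decay" combination can be sketched as a plain BM25 term score multiplied by an exponential recency factor. Feather DB's actual ranking, dense-retrieval fusion, and decay schedule are not documented in this post, so the half-life weighting below is an assumption for illustration only.

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.5, b=0.75):
    """Classic BM25 over tokenized documents (lists of terms)."""
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n
    df = Counter(t for d in docs for t in set(d))
    scores = []
    for d in docs:
        tf = Counter(d)
        score = 0.0
        for t in query_terms:
            if t not in tf:
                continue
            idf = math.log(1 + (n - df[t] + 0.5) / (df[t] + 0.5))
            norm = tf[t] + k1 * (1 - b + b * len(d) / avgdl)
            score += idf * tf[t] * (k1 + 1) / norm
        scores.append(score)
    return scores

def temporal_decay(age_days, half_life_days=30.0):
    """Exponential decay: a memory twice as old gets half the weight."""
    return 0.5 ** (age_days / half_life_days)

def rank(query_terms, docs, ages_days):
    """Combine BM25 relevance with recency and return doc indices, best first."""
    base = bm25_scores(query_terms, docs)
    combined = [s * temporal_decay(a) for s, a in zip(base, ages_days)]
    return sorted(range(len(docs)), key=lambda i: combined[i], reverse=True)
```

The temporal axis being the weakest score (0.477) fits the intuition here: a fixed decay curve cannot answer questions that require reasoning about *when* something was said, only questions that favor recent memories.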
prithivMLmods
posted an update 13 days ago
Multimodal-Edge Demo, a node-based inference canvas demo, is now live on Spaces. It features node-based Transformers for fast inference across 10+ edge-device multimodal models on the Hub, all within a single space. The series includes models from Qwen3.5, Qwen3-VL, Gemma 4, and the LFM 2.5 VL model series, with support for reasoning and grounding tasks.

🤗 Demo: prithivMLmods/Multimodal-Edge-Node
🔗 GitHub: https://github.com/PRITHIVSAKTHIUR/Multimodal-Edge-Node
✅ Multimodal Apps Collections: https://huggingface.co/collections/prithivMLmods/hall-of-multimodal-apps

🤗 To learn more, visit the app page or the respective model pages.
Tonic
posted an update 16 days ago
๐Ÿ™‹๐Ÿปโ€โ™‚๏ธ Hey there folks,

Since everyone liked my previous announcement post (https://huggingface.co/posts/Tonic/338509028435394) so much, I'm back with more high-quality procedural datasets in the geospatial domain for SFT training!

Check this one out :
NuTonic/sat-bbox-metadata-sft-v1

The goal is to be able to train vision models on multiple images for remote-sensing analysis in one shot.

Hope you like it! 🚀
Tonic
posted an update 21 days ago
๐Ÿ™‹๐Ÿปโ€โ™‚๏ธ Hey there folks ,

I'm sharing Hugging Face's largest dataset of annotated satellite images today.

Check it out here: NuTonic/sat-image-boundingbox-sft-full

I hope you like it! The idea is to be able to use this with small vision models. 🚀
prithivMLmods
posted an update 22 days ago
Now, a collection of various compression schemes for Qwen3.6 and the abliterated version 1 of the dense models is available on the Hub. Check it out via the links below. 👇

🔗 Qwen3.6-MoE: https://huggingface.co/collections/prithivMLmods/qwen36-35b-a3b-compressions
🔗 Qwen3.6-27B Compressions: https://huggingface.co/collections/prithivMLmods/qwen36-27b-compressions

🤗 To learn more, visit the app page or the respective model pages.
ajibawa-2023
posted an update 25 days ago
Ruby-Code-Large
Dataset: ajibawa-2023/Ruby-Code-Large

Ruby-Code-Large is a large-scale corpus of Ruby programming language source code comprising 331,743 code samples stored in .jsonl format. The dataset is designed to support research and development in large language model (LLM) pretraining, static analysis, web application development, and software engineering automation within the Ruby ecosystem.

By offering a substantial, language-focused dataset, Ruby-Code-Large enables targeted experimentation in dynamic programming, object-oriented design, and rapid application development, areas where Ruby is widely used, particularly in web frameworks and scripting.

Ruby-Code-Large addresses the lack of large, curated, Ruby-specific datasets, enabling focused research on expressive syntax, metaprogramming, and high-level abstractions.
ajibawa-2023
posted an update 26 days ago
Go-Code-Large
Dataset: ajibawa-2023/Go-Code-Large

Go-Code-Large is a large-scale corpus of Go (Golang) programming language source code, comprising 316,427 code samples stored in .jsonl format. The dataset is designed to support research and development in large language model (LLM) pretraining, static analysis, cloud-native systems, and modern backend software engineering.

By offering a focused and curated dataset for Go, this corpus enables experimentation in concurrent programming, distributed systems, and performance-oriented backend services, domains where Go is widely adopted.

Go-Code-Large addresses the relative scarcity of large, language-specific datasets for Go, enabling targeted research into idiomatic Go patterns, concurrency primitives, and scalable system design.
prithivMLmods
posted an update 27 days ago
HY-World-2.0, a Multi-Modal World Model for Reconstructing, Generating, and Simulating 3D Worlds, is now available on Spaces, and it works both as native Gradio components and in Gradio server mode.

> HY-World-2.0-Demo: prithivMLmods/HY-World-2.0-Demo
> HY-World-2.0 [Server Mode]: prithivMLmods/HY-World-2.0-Demo
> Featuring 3D reconstruction and Gaussian splats with the Rerun viewer, along with camera poses, depth maps, and surface normals.
> In Server Mode, Gradio is served via FastAPI, with FastAPI remaining the top-level server.
> Model: tencent/HY-World-2.0
> GitHub: https://github.com/PRITHIVSAKTHIUR/HY-World-2.0-Demo

🤗 To learn more, visit the app page or the respective model pages.
prithivMLmods
posted an update about 1 month ago
A new comparator on Spaces showcases the Standard FLUX.2 Decoder vs. the FLUX.2 Small Decoder. The Small Decoder is ~1.4× faster, uses ~1.4× less VRAM, and maintains near-identical image quality. It has ~28M parameters with narrower channels [96, 192, 384, 384] vs. [128, 256, 512, 512], and the demo supports sequence generation by running both decoders simultaneously and comparing the results side by side.
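
A toy calculation shows why narrower channels shrink the decoder: conv parameter counts scale with the product of input and output widths. The layer layout below is invented for illustration and is not the actual FLUX.2 decoder architecture; only the channel widths come from the post.

```python
def conv_params(c_in, c_out, k=3):
    """Parameters in one k x k conv layer (weights + bias)."""
    return c_in * c_out * k * k + c_out

def decoder_params(channels, c_latent=16, c_out=3):
    """Rough parameter count for a chain of 3x3 convs walking the
    channel widths from the latent space to the output image."""
    widths = [c_latent] + list(channels) + [c_out]
    return sum(conv_params(a, b) for a, b in zip(widths, widths[1:]))

small = decoder_params([96, 192, 384, 384])    # narrower channels
full = decoder_params([128, 256, 512, 512])    # standard channels
```

Even this crude estimate puts the narrow configuration well under the standard one, which is where the ~1.4× VRAM and speed savings come from; the real decoder adds residual blocks and upsampling stages on top of this.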

🤗 Comparator: prithivMLmods/Flux.2-4B-Decoder-Comparator
🔗 FLUX.2-small-decoder: black-forest-labs/FLUX.2-small-decoder
🔗 GitHub: https://github.com/PRITHIVSAKTHIUR/Flux.2-4B-Encoder-Comparator
🔗 Collection: https://huggingface.co/collections/prithivMLmods/image-generation-apps-collection

🤗 App built on the Gradio SDK. To learn more, visit the app page or the respective model pages.
prithivMLmods
posted an update about 1 month ago
Now, a collection of various compression schemes for Gemma 4 and the abliterated version 1 of the dense models is available on the Hub. Check it out via the links below. 👇

🔗 Gemma 4 Compression(s): https://huggingface.co/collections/prithivMLmods/gemma-4-compressions
🔗 Gemma 4 Uncensored [MAX] + Compression(s) [β]: https://huggingface.co/collections/prithivMLmods/gemma-4-uncensored-max-compressions
🔗 Gemma 4 Compression(s) - MoE: https://huggingface.co/collections/prithivMLmods/gemma-4-compressions-moe
🔗 Gemma-4 F32 GGUF: https://huggingface.co/collections/prithivMLmods/gemma-4-f32-gguf

🤗 To learn more, visit the app page or the respective model pages.
prithivMLmods
posted an update about 1 month ago
Now the demo for image detection based on SAM3 and Gemma-4 (*Filter) is available on Spaces, using full-fledged Transformers inference with multimodal reasoning for processed images. It also supports video segmentation (mask), video segmentation (annotation), and image click segmentation.

🤗 Demo Space: prithivMLmods/SAM3-Gemma4-CUDA
🥽 SAM3: facebook/sam3
🔗 gemma-4-E2B-it: google/gemma-4-E2B-it

To learn more, visit the app page or the respective model pages.
prithivMLmods
posted an update about 1 month ago
The demo for Image Detection (*Filter) based on SAM3 and Qwen-3.5 is now available on Hugging Face Spaces using Transformers inference, with multimodal reasoning for processed images, and it also supports video segmentation (mask), video segmentation (annotation), and image click segmentation.

🤗 Demo Space: prithivMLmods/SAM3-Plus-Qwen3.5
🥽 SAM3: facebook/sam3
🔗 Qwen-3.5: Qwen/Qwen3.5-2B

To learn more, visit the app page or the respective model pages.
prithivMLmods
posted an update about 2 months ago
Flux-Klein-KV-Edit-Consistency demo is now available on Spaces. It preserves character identity and delivers high-quality, realistic results after edits. No special prompts needed: just upload the image, type your prompt, and get the resulting image blazing fast.

🔥 Demo Space: prithivMLmods/flux-klein-kv-edit-consistency
🤗 Model: black-forest-labs/FLUX.2-klein-9b-kv
🤗 Collection: https://huggingface.co/collections/prithivMLmods/image-generation-apps-collection
🔗 Gradio Server Mode: https://www.gradio.app/main/guides/server-mode

➔ Built with Headless Gradio, an alternative to using gr.Blocks for creating the frontend and triggering events, powered by FastAPI + Gradio. You can now design the frontend however you want, with continued support for APIs, MCP, and ZeroGPU.

➔ Gradio Server Mode is now available from gradio@v6.10.0.

To learn more, visit the app page or the respective model pages.
prithivMLmods
posted an update about 2 months ago
Map-Anything v1 (Universal Feed-Forward Metric 3D Reconstruction) demo is now available on Hugging Face Spaces. Built with Gradio and integrated with Rerun, it performs multi-image and video-based 3D reconstruction, depth, normal map, and interactive measurements.

🤗 Demo: prithivMLmods/Map-Anything-v1
🤗 Model: facebook/map-anything-v1
🤗 Hf-Papers: MapAnything: Universal Feed-Forward Metric 3D Reconstruction (2509.13414)
ajibawa-2023
posted an update about 2 months ago
C-Code-Large
Dataset: ajibawa-2023/C-Code-Large

C-Code-Large is a large-scale corpus of C programming language source code comprising more than 4 million code samples stored in .jsonl format. The dataset is designed to support research and development in large language model (LLM) pretraining, static analysis, and software engineering automation for the C ecosystem.

By offering a high-volume, language-focused dataset, C-Code-Large enables targeted experimentation in low-level programming, memory-constrained environments, and performance-critical systems, where C continues to be a dominant language.

C-Code-Large addresses the lack of large, curated, C-specific datasets, making it possible to conduct focused research on procedural programming paradigms, manual memory management, and system-level abstractions.

prithivMLmods
posted an update about 2 months ago
Introducing QIE-Bbox-Studio! 🔥🤗

The QIE-Bbox-Studio demo is now live, more precise and packed with more options. Users can remove objects, add design elements, and even move objects from one place to another, all with fast 4-step inference.

🤗 Demo: prithivMLmods/QIE-Bbox-Studio
🔗 GitHub: https://github.com/PRITHIVSAKTHIUR/QIE-Bbox-Studio

🚀 Models [LoRA]:

● QIE-2511-Object-Mover-Bbox: prithivMLmods/QIE-2511-Object-Mover-Bbox
● QIE-2511-Object-Remover-Bbox-v3: prithivMLmods/QIE-2511-Object-Remover-Bbox-v3
● QIE-2511-Outfit-Design-Layout: prithivMLmods/QIE-2511-Outfit-Design-Layout
● QIE-2509-Object-Remover-Bbox-v3: prithivMLmods/QIE-2509-Object-Remover-Bbox-v3
● QIE-2509-Object-Mover-Bbox: prithivMLmods/QIE-2509-Object-Mover-Bbox

🚀 Collection:

● Qwen Image Edit [Layout Bbox]: https://huggingface.co/collections/prithivMLmods/qwen-image-edit-layout-bbox

To learn more, visit the app page or the respective model pages.
Nymbo
posted an update 2 months ago
We should really have a release date range slider on the /models page. Tired of "trending/most downloaded" being the best way to sort and still seeing models from 2023 on the first page just because they're embedded in enterprise pipelines and get downloaded repeatedly. "Recently Created/Recently Updated" don't solve the discovery problem considering the amount of noise to sift through.

Slight caveat: Trending actually does have some recency bias, but it's not strong/precise enough.
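
Until such a slider exists, the filter can be applied client-side. A sketch of the filtering step, written over plain records with a `created_at` field; in practice the records would come from `huggingface_hub`'s `list_models` (whose `ModelInfo` objects expose `created_at` as an attribute in recent library versions), and the field names here are illustrative.

```python
from datetime import datetime, timezone

def in_date_range(models, start, end):
    """Keep only models whose creation date falls inside [start, end].

    `models` is an iterable of dicts with a `created_at` datetime
    (or None when the Hub has no creation date for the repo)."""
    return [m for m in models
            if m["created_at"] is not None
            and start <= m["created_at"] <= end]
```

Combined with a download-count sort over the filtered list, this approximates "trending among models released this year" without the 2023-era pipeline noise.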