| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
DEMO: New Gemini Flash 2.5 Audio model preview - Natural conversational flows! | 4 | TL;DR Google has recently released a new Native Audio version of Gemini 2.5 Flash via AI Studio. It has improved interruption detection and a neat affective dialog option which tries to match the energy of the speaker.
Try it here: [https://aistudio.google.com/live](https://aistudio.google.com/live)
Details: [https://ai.google.dev/gemini-api/docs/models#gemini-2.5-flash-native-audio](https://ai.google.dev/gemini-api/docs/models#gemini-2.5-flash-native-audio)
Hot Takes so far:
* I'm quite impressed with how well it handled my interruptions and barge-ins, and it responded quite naturally almost every time.
* I did notice it struggled when my speakers were on while it was talking -- almost like it kept interrupting itself and then crashing the service. Google might need some form of echo cancellation to fix that.
* Adding grounding with web search took care of the two knowledge cutoff issues I ran into.
* I got easily annoyed with how it always asked a question after every response. This felt very unnatural and I ended up wanting to interrupt it as soon as I knew it was going to ask something.
* The affective dialog option is super weird. I tried a few different affect tones (angry, cheerful, funny, etc.) and it only sometimes responded in kind. When I became annoyed, it actually seemed annoyed with me in some conversations, which was a trip. I wish I had gotten those on the recording :).
* All in all, the natural flow felt pretty good, and I can see myself using this modality for some types of questions. But honestly, I felt like most of Gemini's answers were too short and not detailed enough when spoken aloud. I definitely prefer text output for any queries of import.
Hope folks found this useful! I'd love any feedback on the overall presentation/video as I'm starting to do this sort of thing more often -- covering new models and tools as they come out. Thanks for watching!
Yw | 2025-09-24T09:09:07 | https://v.redd.it/sgq5sn8os2rf1 | YuzoRoGuAI | /r/LocalLLaMA/comments/1np7btz/demo_new_gemini_flash_25_audio_model_preview/ | 1970-01-01T00:00:00 | 0 | {} | 1np7btz | false | t3_1np7btz | 4 |
[Rant] Magistral-Small-2509 > Claude4 | 42 | So unsure if many of you use Claude4 for non-coding stuff...but it's been turned into a blithering idiot thanks to Anthropic giving us a dumb quant that cannot follow simple writing instructions (professional writing about such exciting topics as science/etc).
Claude4 is amazing for 3-4 business days after they come out with a new release. I believe this is due to them giving the public the full precision model for a few days to generate publicity and buzz...then forcing everyone onto a dumbed-down quant to save money on compute/etc.
That said...
I recall some guy on here saying his wife felt that Magistral-Small-2509 was better than Claude. I'm no male feminist (check my comments to confirm my offensiveness), but I trust a woman's intuition.
That said, and based on this random lady mentioned in a random anecdote, I downloaded Magistral-Small-2509-Q6_K.gguf from Bartowski and was able to fit it on my 3060 and 64GB DDR4 RAM.
Loaded up Oobabooga, set "cache type" to Q6 (assuming that's the right setting), and set "enable thinking" to "high."
Magistral, even at a Q6 quant on my shitty 3060 and 64GB of RAM, adhered to a prompt and followed a list of grammar rules WAY better than Claude4.
While full precision Claude4 would blow anything local out of the water and dance the Irish jig on its rotting corpse....for some reason the major AI companies are giving us dumbed-down quants. Not talking shit about Magistral, nor all their hard work.
But one would expect a Q6 SMALL model to be a pile of shit compared to the billion-dollar AI models from Anthropic and their ilk. So, I'm absolutely blown away at how this little model that can is punching WELL above its weight class. | 2025-09-24T08:58:16 | https://www.reddit.com/r/LocalLLaMA/comments/1np75y4/rant_magistralsmall2509_claude4/ | OsakaSeafoodConcrn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1np75y4 | false | null | t3_1np75y4 | /r/LocalLLaMA/comments/1np75y4/rant_magistralsmall2509_claude4/ | false | false | self | 42 | null |
Has anyone ever turned their phone into their AI bot? | 0 | . | 2025-09-24T08:42:33 | https://www.reddit.com/r/LocalLLaMA/comments/1np6xnu/has_some_ever_turn_their_phone_into_their_ai_bot/ | happyprolite | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1np6xnu | false | null | t3_1np6xnu | /r/LocalLLaMA/comments/1np6xnu/has_some_ever_turn_their_phone_into_their_ai_bot/ | false | false | self | 0 | null |
self-service portal for sharing openai/anthropic/custom AI APIs | 0 | Hello everyone! We built [maskllm](https://maskllm.com/), a self-service portal for teams to share LLM APIs with their team members without sharing secret keys.
It is super easy to get started: log in as admin and keep your secrets in one place, invite team members from the portal, and let them generate personal masked keys for their own use.
Use our simple SDK to resolve the masked keys into actual keys right inside your backend environment.
- Prevent key leaks, key sprawl, and bill leaks, and get real-time auditing and compliance.
- Revoke access instantly without deep integration changes.
For single accounts this is free to use and the easiest way to share keys!
| 2025-09-24T08:34:51 | https://www.reddit.com/r/LocalLLaMA/comments/1np6tsq/selfservice_portal_for_sharing/ | AdSure3977 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1np6tsq | false | null | t3_1np6tsq | /r/LocalLLaMA/comments/1np6tsq/selfservice_portal_for_sharing/ | false | false | self | 0 | null |
GitHub - shantur/jarvis-mcp: Bring your AI to life—talk to assistants instantly in your browser. Zero hassle, no API keys, no Whisper | 12 | 2025-09-24T08:28:30 | https://github.com/shantur/jarvis-mcp | Recent-Success-1520 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1np6qlc | false | null | t3_1np6qlc | /r/LocalLLaMA/comments/1np6qlc/github_shanturjarvismcp_bring_your_ai_to_lifetalk/ | false | false | default | 12 |
Anyone else have the feeling that Anthropic models are only good at coding? | 0 | I've been using these models (Sonnet 4 & Opus 4/4.1) for a while. I'd say their coding ability is far better than local LLMs', but the more I used them, the more I realized they are good at implementation only. These models act like a sophisticated engineer who will code up anything you request, but the solutions they give are sometimes hacky and lack systematic thinking. I mainly used them for 3D-geometry-related coding tasks, and it turned out GPT-5 and Qwen3 can better incorporate existing formulas and theory into the code. | 2025-09-24T07:29:43 | https://www.reddit.com/r/LocalLLaMA/comments/1np5wbg/anyone_had_a_feeling_that_anthropic_models_are/ | GregView | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1np5wbg | false | null | t3_1np5wbg | /r/LocalLLaMA/comments/1np5wbg/anyone_had_a_feeling_that_anthropic_models_are/ | false | false | self | 0 | null |
Oh my God, what a monster is this? | 716 | 2025-09-24T07:24:01 | NearbyBig3383 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1np5te1 | false | null | t3_1np5te1 | /r/LocalLLaMA/comments/1np5te1/oh_my_god_what_a_monster_is_this/ | false | false | default | 716 |
What memory/conversation history methods you find work best for your local AI in production? | 3 | Hi everyone,
I’m exploring different ways to handle memory for long conversations with local models, and I’d love to hear what approaches you’ve found effective in practice.
So far, I’ve tried the straightforward method of feeding the entire conversation into the model, and occasionally summarizing it with the same model to keep the context window manageable. I’ve also been experimenting with RAG setups (previously using Haystack) and heard and read a bit about approaches involving knowledge graphs or hybrid methods.
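As a concrete reference, here is a minimal sketch of that summarize-to-compact approach. `chat()` is a stand-in for whatever local backend is called (e.g., an OpenAI-compatible endpoint serving Gemma 3 12B); all names are illustrative, not from any specific library:

    def compact_history(messages, chat, max_chars=12_000, keep_last=6):
        # Cheap length proxy; swap in a real token count in practice.
        if sum(len(m["content"]) for m in messages) <= max_chars:
            return messages
        old, recent = messages[:-keep_last], messages[-keep_last:]
        transcript = "\n".join(f"{m['role']}: {m['content']}" for m in old)
        summary = chat([{"role": "user", "content":
                         "Summarize the key facts and decisions so far:\n" + transcript}])
        # One summary message replaces the old turns; recent turns stay verbatim.
        return [{"role": "system", "content": "Summary of earlier turns: " + summary}] + recent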
My challenge is finding a balance: I don’t want to overfeed the model with irrelevant history, but I also don’t want to lose important context across long sessions. From my research, it seems there isn’t a one-size-fits-all solution, and opinions vary a lot depending on the use case.
I’m currently experimenting with Gemma 3 12B locally. What I’d like to know is:
* Which memory or conversation-history methods are you using with your local AI models?
* For which use cases?
* Which libraries or frameworks do you find most reliable?
I’m more interested in practical setups that work well than covering every possible detail of past conversations. Any comparisons or lessons learned would be super helpful.
Thanks! | 2025-09-24T07:12:31 | https://www.reddit.com/r/LocalLLaMA/comments/1np5n5y/what_memoryconversation_history_methods_you_find/ | SomeRandomGuuuuuuy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1np5n5y | false | null | t3_1np5n5y | /r/LocalLLaMA/comments/1np5n5y/what_memoryconversation_history_methods_you_find/ | false | false | self | 3 | null |
MiniModel-200M-Base | 266 | Most “efficient” small models still need days of training or massive clusters. **MiniModel-200M-Base** was trained **from scratch on just 10B tokens** in **110k steps (≈1 day)** on a **single RTX 5090**, using **no gradient accumulation** yet still achieving a **batch size of 64 x 2048 tokens** and with peak memory **<30 GB VRAM**.
Key efficiency techniques:
* **Adaptive Muon optimizer**: 2.1× more data-efficient than AdamW
* **Float8 pretraining**: \~30% less VRAM, \~20% higher throughput (attention kept in bf16)
* **ReLU² activation** (from Google’s *Primer*; a sketch follows this list)
* **Bin-packing**: reduced padding from >70% → <5%
* **Full attention + QK-norm without scalars** for stability
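Of those, ReLU² is the simplest to show concretely. A minimal PyTorch sketch (my own rendering of the squared ReLU from *Primer*, f(x) = max(x, 0)²; not taken from the model's actual training code):

    import torch

    class ReLUSquared(torch.nn.Module):
        # Squared ReLU from Primer: f(x) = relu(x) ** 2
        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return torch.relu(x).square()

It drops in where GELU would normally sit in the MLP block and is marginally cheaper to compute.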
Despite its size, it shows surprising competence:
✅ **Fibonacci (temp=0.0001)**
    def fibonacci(n: int):
        if n < 2:
            return n
        return fibonacci(n - 1) + fibonacci(n - 2)
✅ **Digits of π (temp=0.0001)**
Recites **3.14159265358979323846…** correctly — the first 20+ digits.
It’s **Apache 2.0 licensed**, with public config, tokenizer, and safetensors weights. No instruct-tuning yet, as this is pure pretraining on educational data (Ultra-FineWeb, Python tutorials, math).
Not perfect (it thinks Earth’s radius is 375,000 miles), but for a 200M model trained in a day it’s a solid base for experimentation, distillation, or local prototyping.
🔗 [Hugging Face: MiniModel-200M-Base](https://huggingface.co/xTimeCrystal/MiniModel-200M-Base)
🧠 200M | 🌐 en/zh/Python | 📜 Apache 2.0
Any feedback is welcome, especially on replicating the training setup or improving data efficiency! | 2025-09-24T06:58:12 | Wooden-Deer-1276 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1np5ey8 | false | null | t3_1np5ey8 | /r/LocalLLaMA/comments/1np5ey8/minimodel200mbase/ | false | false | default | 266 |
Raspberry Pi 5 + IMX500 AI Camera Risk Monitoring | 7 | I’m planning a capstone project using a **Raspberry Pi 5 (8GB)** with a **Sony IMX500 AI camera** to monitor individuals for fall risks and hazards. The camera will run object detection directly on-sensor, while a separate PC will handle a Vision-Language Model (VLM) to interpret events and generate alerts. I want to confirm whether a Pi 5 (8GB) is sufficient to handle the IMX500 and stream only detection metadata to the server, and whether this setup would be better than using a normal Pi camera with an external accelerator like a Hailo-13T or Hailo-26T for this use case. In addition, I’m also weighing which option is most cost-efficient. Thanks! | 2025-09-24T06:45:40 | https://www.reddit.com/r/LocalLLaMA/comments/1np57u7/raspberry_pi_5_imx500_ai_camera_risk_monitoring/ | Wraithraisrr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1np57u7 | false | null | t3_1np57u7 | /r/LocalLLaMA/comments/1np57u7/raspberry_pi_5_imx500_ai_camera_risk_monitoring/ | false | false | self | 7 | null |
qwen max pricey at 1.2/M | 0 | [https://openrouter.ai/qwen/qwen3-max](https://openrouter.ai/qwen/qwen3-max)
GPT-5-level prices (actually, GPT-5 is currently 50% off, so more like 2x GPT-5) | 2025-09-24T06:36:29 | https://www.reddit.com/r/LocalLLaMA/comments/1np52n6/qwen_max_pricy_12m/ | kaggleqrdl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1np52n6 | false | null | t3_1np52n6 | /r/LocalLLaMA/comments/1np52n6/qwen_max_pricy_12m/ | false | false | self | 0 |
Training SLM on Agentic workflow | 7 | So I have a specific use case in which DeepSeek-V3.1 works well, but it's simply too big and takes time to load on our GPUs (everything runs locally in my organization; we have 16 H100 GPUs and maybe about 8 more A100s). Since I use Ollama, I can’t keep vLLM loaded across all GPUs without hogging resources that others need.
What I want is a **smaller model** that I can use for an **agentic task** mainly to work with a set of custom MCP tools I’ve built.
The biggest reason I want to build a model of my own is because I can get one hell of an education in the process, and since the hardware is already in-house (and mostly idle), I figured this is the perfect opportunity.
But I’m not sure where to start:
1. Should I train a model from scratch, or take an existing pretrained model and fine-tune?
2. What base architecture would be a good starting point for agent-style tasks?
If anyone can point me toward resources specifically focused on **training or finetuning models for agentic tasks**, I’d really appreciate it.
| 2025-09-24T04:44:19 | https://www.reddit.com/r/LocalLLaMA/comments/1np37dk/training_slm_on_agentic_workflow/ | LifeguardNew6929 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1np37dk | false | null | t3_1np37dk | /r/LocalLLaMA/comments/1np37dk/training_slm_on_agentic_workflow/ | false | false | self | 7 | null |
Large Language Model Performance Doubles Every 7 Months | 159 | 2025-09-24T04:24:24 | https://spectrum.ieee.org/large-language-model-performance | Aralknight | spectrum.ieee.org | 1970-01-01T00:00:00 | 0 | {} | 1np2v1i | false | null | t3_1np2v1i | /r/LocalLLaMA/comments/1np2v1i/large_language_model_performance_doubles_every_7/ | false | false | 159 | {'enabled': False, 'images': [{'id': 'FIe2X4pB5JIPoblqtKC-Psg0C0IDm1Mq5ljjHekoesw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/FIe2X4pB5JIPoblqtKC-Psg0C0IDm1Mq5ljjHekoesw.png?width=108&crop=smart&auto=webp&s=14b6b286218675801ceff124274688f44beba629', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/FIe2X4pB5JIPoblqtKC-Psg0C0IDm1Mq5ljjHekoesw.png?width=216&crop=smart&auto=webp&s=98dd3818e26dddba7c0bf29890098f73d7c8f93c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/FIe2X4pB5JIPoblqtKC-Psg0C0IDm1Mq5ljjHekoesw.png?width=320&crop=smart&auto=webp&s=36c925881af6fbf47019b1322c23fa27a05bf846', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/FIe2X4pB5JIPoblqtKC-Psg0C0IDm1Mq5ljjHekoesw.png?width=640&crop=smart&auto=webp&s=74fd271c0f36614a182e5a476492961d5ccd453d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/FIe2X4pB5JIPoblqtKC-Psg0C0IDm1Mq5ljjHekoesw.png?width=960&crop=smart&auto=webp&s=b6568038104822753af55526ef15bdb37f8ddf52', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/FIe2X4pB5JIPoblqtKC-Psg0C0IDm1Mq5ljjHekoesw.png?width=1080&crop=smart&auto=webp&s=143445f3afead7d0b9de1bb0c99533a7d72d86dc', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/FIe2X4pB5JIPoblqtKC-Psg0C0IDm1Mq5ljjHekoesw.png?auto=webp&s=4d5d90d4e2f28c6aa20689c0eac41d49869e2ce5', 'width': 1200}, 'variants': {}}]} | ||
What is the best 9B model or under? | 22 | What is the best model I can run on my system?
I can run anything that's 9B or under.
**You can include third-party finetunes of it too.** On a side note, I believe we are not getting as many finetunes as before. Is it that the base models themselves are better, or is it getting harder to finetune?
It's just for personal use. Right now I'm using Gemma 4b, 3n and the old 9b model. | 2025-09-24T03:35:45 | https://www.reddit.com/r/LocalLLaMA/comments/1np1ytk/what_is_the_best_9b_model_or_under/ | Prior-Blood5979 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1np1ytk | false | null | t3_1np1ytk | /r/LocalLLaMA/comments/1np1ytk/what_is_the_best_9b_model_or_under/ | false | false | self | 22 | null |
Layla AI is partnering with Qualcomm: Snapdragon Summit 2025 | Snapdragon Tech Event | 0 | Absolutely HUGE if you're running local AI on portable devices.
https://www.qualcomm.com/company/events/snapdragon-summit
@everyone Layla is partnering with Qualcomm!
We hope to deliver local, personal, agentic AI experiences on Snapdragon's next generation of chipsets.
Catch us at the Snapdragon Summit 2025 tomorrow where I will be presenting agentic use-cases for local, on device LLMs via Paage.ai (the free version of Layla)
Layla v6 is expected to release a few days after the event! While Paage.ai gives users a free demo on what is possible with on device agents, premium users (those who purchased Layla) can experience a more in-depth implementation of Layla Agentic Framework, including customisable agents, MCP support, and programmable tools.
Even though v6 is released, mobile agents are still a very new technology in general. I will be adding more tools, improving the implementation, and adding more customisability over the course of v6 with your feedback.
For those who wish to try this ahead of time, you can always go to the Layla Discord channel and download the pinned APK. You can read more about the updates in this channel: | 2025-09-24T03:30:56 | https://www.qualcomm.com/company/events/snapdragon-summit | On-The-Red-Team | qualcomm.com | 1970-01-01T00:00:00 | 0 | {} | 1np1vjy | false | null | t3_1np1vjy | /r/LocalLLaMA/comments/1np1vjy/layla_ai_is_0arynering_with_qualcomm_snapdragon/ | false | false | default | 0 |
Help with finetuning parameters: OOM on a 1B? | 6 | Hey guys, I've been Lora finetuning for a few days now.
So I do most of my stuff on an A100; I've done a 12B, but when I tried a 1B, I got OOMs? I had increased my settings because this model is 12 times smaller than the 12B, so I assumed that was it.
I lowered them until the only changed parameter was that instead of doing qLoRA as in my 12B config, I was doing a full f16 finetune. Still OOM! Seriously, 80GB of VRAM, yet OOM on what I would consider modest settings (gradient_accumulation_steps=8, micro_batch_size=2, sequence_len=4096) on a 1B model?
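For a sanity check, rough state-memory arithmetic (assuming mixed-precision AdamW with an fp32 master copy and two fp32 moment buffers; activations excluded):

    params = 1e9
    # bf16 weights + bf16 grads + fp32 master + 2x fp32 Adam moments
    bytes_per_param = 2 + 2 + 4 + 4 + 4
    print(f"~{params * bytes_per_param / 1e9:.0f} GB before activations")  # ~16 GB

Model and optimizer states alone are only ~16 GB, so on an 80GB card the usual suspect is activation memory: a full finetune at sequence_len=4096 without gradient/activation checkpointing can eat tens of GB, especially once the logits over a large vocabulary are counted. Worth checking whether activation checkpointing is actually enabled in the full-finetune config.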
I suspect either I'm doing something terribly wrong, or I just don't understand some principle of finetuning. Any help? | 2025-09-24T03:08:19 | https://www.reddit.com/r/LocalLLaMA/comments/1np1fnj/help_with_finetuning_parameters_oom_on_a_1b/ | qalpha7134 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1np1fnj | false | null | t3_1np1fnj | /r/LocalLLaMA/comments/1np1fnj/help_with_finetuning_parameters_oom_on_a_1b/ | false | false | self | 6 | null |
Is it worth it with what I have? | 2 | I can understand "worth it" being subjective, but I'm hoping for some shared experiences or opinions.
I have AM4-series motherboards (X570 and B550), and 5950X/5900X/3900X CPUs,
And
(3) 3090s and (3) 3060s.
Some 6800 XTs too.
RAM: 128GB, limited by the platform.
So it looks like if I'm using an X570 motherboard, I max out at (2) 3090s for 48GB of VRAM or (2) 3060s for 24GB, but then why not just use (1) 3090... the limiting factor being the PCIe 4.0 x8 lanes of the combined 5950X/X570 combo?
I don't have any experience, so I want to play with all the AI toys: lyric generation/music creation, writing (chapters to help write a book), image generation. Maybe even text-to-short-video-clip generation?
With what I have, can the experience still be fun and with reasonable performance? Or does the real fun really start with platforms with more PCIe lanes?
| 2025-09-24T02:48:28 | https://www.reddit.com/r/LocalLLaMA/comments/1np11d6/is_it_worth_it_with_what_i_have/ | Inigmatics | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1np11d6 | false | null | t3_1np11d6 | /r/LocalLLaMA/comments/1np11d6/is_it_worth_it_with_what_i_have/ | false | false | self | 2 | null |
The Ryzen AI MAX+ 395 is a true unicorn (In a good way) | 245 | I put an order for the [128GB version of the Framework Desktop Board](https://frame.work/products/framework-desktop-mainboard-amd-ryzen-ai-max-300-series?v=FRAFMK0006) for AI inference mainly, and while I've been waiting patiently for it to ship, I had doubts recently about the cost to benefit/future upgrade-ability since the RAM, CPU/iGPU are soldered into the motherboard.
So I decided to do a quick exercise of PC part picking to match the specs Framework is offering in their 128GB board. I started looking at motherboards offering four channels and thought I'd find something cheap... wrong!
* Cheapest consumer level MB offering DDR5 at a high speed (8000 MT/s) with more than 2 channels is $600+.
* CPU equivalent to the 395 MAX+ in benchmarks is the [9955HX3d](https://www.amazon.com/AMD-Ryzen-9950X3D-16-Core-Processor/dp/B0DVZSG8D5), which runs about \~$660 from Amazon. A quiet heat sink with dual fans from [Noctua](https://www.amazon.com/Noctua-NH-D15-heatpipe-NF-A15-140mm/dp/B00L7UZMAK?s=electronics) is $130
* RAM from [G.Skill 4x24](https://www.amazon.com/G-SKILL-Trident-CL38-48-48-128-Desktop-Computer/dp/B0F4M6C65N) (128GB total) at 8000 MT/s runs you closer to $450.
* The 8060s iGPU is similar in performance to the RTX 4060 or [4060 Ti 16gb](https://www.amazon.com/MSI-Gaming-GeForce-GDRR6-Boost/dp/B0D3KGNMXP), runs about $400.
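Summing the list prices quoted above:

    parts = {"motherboard": 600, "cpu_9955hx3d": 660, "cooler": 130,
             "ram_128gb_8000": 450, "gpu_4060ti_16gb": 400}
    print(sum(parts.values()))  # 2240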
Total for this build is ~**$2,240**. It's obviously a good $500+ more than Framework's board. Cost aside, speed is compromised: the GPU in this setup will access most of the system RAM at a loss, since that memory lives outside the GPU package and has to be reached across PCIe 5. Total power draw at the wall under full system load is at least double the 395 setup's. More power = more fan noise = [more heat](https://www.reddit.com/r/LocalLLaMA/comments/1nogrv2/computer_literally_warms_my_room_by_5_degrees/).
To compare, the M4 Pro/Max offer higher memory bandwidth, but suck at running diffusion models, also runs at 2X the cost at the same RAM/GPU specs. The 395 runs Linux/Windows, more flexibility and versatility (Games on Windows, Inference on Linux). Nvidia is so far out in the cost alone it makes no sense to compare it. The closest equivalent (but at much higher inference speed) is 4x 3090 which costs more, consumes multiple times the power, and generates a ton more heat.
AMD has a true unicorn here. For tinkerers and hobbyists looking to develop, test, and gain more knowledge in this field, the MAX+ 395 is pretty much the only viable option at this price, with this low a power draw. I decided to continue with my order, but I'm wondering if anyone else went down this rabbit hole seeking similar answers! | 2025-09-24T01:57:05 | https://www.reddit.com/r/LocalLLaMA/comments/1nozz23/the_ryzen_ai_max_395_is_a_true_unicorn_in_a_good/ | simracerman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nozz23 | false | null | t3_1nozz23 | /r/LocalLLaMA/comments/1nozz23/the_ryzen_ai_max_395_is_a_true_unicorn_in_a_good/ | false | false | self | 245 |
Small model for understanding and generating NSFW text? (not a roleplay model) | 4 | By small I mean under 8B. And by NSFW, I mean anything NSFW.
Use cases examples:
- detect NSFW text and replace it with SFW equivalent
- and the opposite: rewrite text using NSFW language
- detect NSFW and quote those excerpts verbatim or just list the NSFW words or themes
- tell a joke or short story using NSFW language
Thanks | 2025-09-24T01:49:54 | https://www.reddit.com/r/LocalLLaMA/comments/1noztkp/small_model_for_understanding_and_generating_nsfw/ | hideo_kuze_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1noztkp | false | null | t3_1noztkp | /r/LocalLLaMA/comments/1noztkp/small_model_for_understanding_and_generating_nsfw/ | false | false | nsfw | 4 | null |
I built a tribute to Terry Davis's TempleOS using a local LLM. It's a holy DnD campaign where "God" is a random number generator and the DM is a local llama | 16 | I've been haunted for years by the ghost of Terry Davis and his incomprehensible creation, TempleOS. Terry's core belief—that he could speak with God by generating random numbers and mapping them to the Bible—was a fascinating intersection of faith and programming genius.
While building an OS is beyond me, I wanted to pay tribute to his core concept in a modern way. So, I created **Portals**, a project that reimagines TempleOS's "divine random number generator" as a story-telling engine, powered entirely by a local LLM.
The whole thing runs locally with Streamlit and Ollama. It's a deeply personal, offline experience, just as Terry would have wanted.
# The Philosophy: A Modern Take on Terry's "Offering"
Terry believed you had to make an "offering"—a significant, life-altering act—to get God's attention before generating a number. My project embraces this. The idea isn't just to click a button, but to engage with the app after you've done something meaningful in your own life.
# How It Works:
1. **The "Offering" (The Human Part):** This happens entirely outside the app. It's a personal commitment, a change in perspective, a difficult choice. This is you, preparing to "talk to God."
2. **Consult the Oracle:** You run the app and click the button. A random number is generated, just like in TempleOS.
3. **A Verse is Revealed:** The number is mapped to a specific line in a numbered Bible text file, and a small paragraph around that line is pulled out. This is the "divine message."
4. **Semantic Resonance (The LLM Part):** This is where the magic happens. The local LLM (I'm using Llama 3) reads the Bible verse and compares it to the last chapter of your ongoing D&D campaign story. It then decides if the verse has "High Resonance" or "Low Resonance" with the story's themes of angels, demons, and apocalypse.
5. **The Story Unfolds:**
* If it's **"High Resonance,"** your offering was accepted. The LLM then uses the verse as inspiration to write the next chapter of your D&D campaign, introducing a new character, monster, location, or artifact inspired by the text.
* If it's **"Low Resonance,"** the offering was "boring," as Terry would say. The heavens are silent, and the story doesn't progress. You're told to try again when you have something more significant to offer.
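A minimal sketch of that loop (hypothetical: it assumes a line-numbered bible.txt and some llm() callable wrapping the local model; the real Portals code may differ):

    import random

    def draw_verse(path="bible.txt", window=3):
        lines = open(path, encoding="utf-8").read().splitlines()
        i = random.randrange(len(lines))  # the "divine" random number
        return "\n".join(lines[max(0, i - window): i + window + 1])

    def has_resonance(llm, verse, last_chapter):
        prompt = ("Verse:\n" + verse + "\n\nStory so far:\n" + last_chapter +
                  "\n\nAnswer HIGH or LOW: does this verse resonate with the "
                  "story's themes of angels, demons, and apocalypse?")
        return "HIGH" in llm(prompt).upper()

When has_resonance() returns True, the same llm() gets a second prompt asking for the next chapter inspired by the verse; otherwise the heavens stay silent.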
It's essentially a solo D&D campaign where the Dungeon Master is a local LLM, and the plot twists are generated by the chaotic, divine randomness that Terry Davis revered. The LLM doesn't know your offering; it only interprets the synchronicity between the random verse and your story.
This feels like the closest I can get to the spirit of TempleOS without dedicating my life to kernel development. It's a system for generating meaning from chaos, all running privately on your own hardware.
I'd love for you guys to check it out, and I'm curious to hear your thoughts on this intersection of local AI, randomness, and the strange, brilliant legacy of Terry Davis.
**GitHub Repo** [happy jumping](https://github.com/iblameandrew/portals/tree/main)
https://reddit.com/link/1nozt72/video/sonesfylo0rf1/player
| 2025-09-24T01:49:22 | https://www.reddit.com/r/LocalLLaMA/comments/1nozt72/i_built_a_tribute_to_terry_daviss_templeos_using/ | Temporary_Exam_3620 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nozt72 | false | null | t3_1nozt72 | /r/LocalLLaMA/comments/1nozt72/i_built_a_tribute_to_terry_daviss_templeos_using/ | false | false | self | 16 | null |
Official llama.cpp image for Intel GPUs is slower than Ollama from ipex-llm | 4 | I got a B580 and I am getting \~42t/s on qwen2.5-coder:14b from Ollama from ipex-llm (`pip install ipex-llm[cpp]`, `init-ollama`). I am running it inside a container on an Ubuntu 25.04 host. I tried the official llama.cpp images, but their performance is low and I am having issues with them.
ghcr.io/ggml-org/llama.cpp:full-intel gives me ~30t/s, but sometimes it drops to ~25t/s.
ghcr.io/ggml-org/llama.cpp:full-vulkan is horrible, giving only ~12t/s.
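For an apples-to-apples number between the two stacks, a crude throughput check against any OpenAI-compatible endpoint works (the URL below is Ollama's default; llama-server usually listens on :8080 instead, so adjust to your setup):

    import time, requests

    t0 = time.time()
    r = requests.post("http://localhost:11434/v1/chat/completions", json={
        "model": "qwen2.5-coder:14b",
        "messages": [{"role": "user", "content": "Write binary search in Python."}],
        "max_tokens": 256,
    })
    tokens = r.json()["usage"]["completion_tokens"]
    # Rough: includes prompt eval and HTTP overhead, but fine for comparing stacks.
    print(tokens / (time.time() - t0), "tok/s")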
Any ideas on how to match or pass the Ollama performance? | 2025-09-24T01:46:19 | https://www.reddit.com/r/LocalLLaMA/comments/1nozqw1/official_llamacpp_image_for_intel_gpus_is_slower/ | WizardlyBump17 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nozqw1 | false | null | t3_1nozqw1 | /r/LocalLLaMA/comments/1nozqw1/official_llamacpp_image_for_intel_gpus_is_slower/ | false | false | self | 4 | null |
Do Radeon Instinct MI50 32GB cards work with Vulkan on Windows? | 5 | As per the title, I am wondering whether these work out of the box in Vulkan llama.cpp, like in LM Studio and other llama.cpp apps. I was thinking of pairing a couple as USB4 external GPUs on a Strix Halo mini PC. | 2025-09-24T00:48:24 | https://www.reddit.com/r/LocalLLaMA/comments/1noyjho/radeon_instinct_mi50_32gb_work_on_vulkan_on/ | Goldkoron | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1noyjho | false | null | t3_1noyjho | /r/LocalLLaMA/comments/1noyjho/radeon_instinct_mi50_32gb_work_on_vulkan_on/ | false | false | self | 5 | null |
Qwen3-VL — the most powerful vision-language model in the Qwen | 2 | 2025-09-24T00:43:08 | https://www.reddit.com/r/LocalLLaMA/comments/1noyfdg/qwen3vl_the_most_powerful_visionlanguage_model_in/ | touhidul002 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1noyfdg | false | null | t3_1noyfdg | /r/LocalLLaMA/comments/1noyfdg/qwen3vl_the_most_powerful_visionlanguage_model_in/ | false | false | 2 | null | ||
Opus 4.1 LOL | 0 | 2025-09-24T00:29:32 | https://www.reddit.com/r/LocalLLaMA/comments/1noy56g/opus_41_lol/ | sb6_6_6_6 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1noy56g | false | null | t3_1noy56g | /r/LocalLLaMA/comments/1noy56g/opus_41_lol/ | false | false | self | 0 | null | |
GPT-OSS is insane at leetcode | 26 | I've tested several open-source models on this problem—specifically ones that fit within 16GB of VRAM—and none could solve it. Even GPT-4o had some trouble with it previously. I was impressed that this model nailed it on the first attempt, achieving a 100% score for time and space complexity. And, for some reason, GPT-OSS is a lot faster than other models at prompt eval.
Problem:
[https://leetcode.com/problems/maximum-employees-to-be-invited-to-a-meeting/submissions/1780701076/](https://leetcode.com/problems/maximum-employees-to-be-invited-to-a-meeting/submissions/1780701076/)
https://preview.redd.it/c9ixfvgd40rf1.png?width=1034&format=png&auto=webp&s=3e3bb1bd3145ca9117ccb2ff0c8883c993fa595a
| 2025-09-23T23:49:44 | https://www.reddit.com/r/LocalLLaMA/comments/1noxalu/gptoss_is_insane_at_leetcode/ | JsThiago5 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1noxalu | false | null | t3_1noxalu | /r/LocalLLaMA/comments/1noxalu/gptoss_is_insane_at_leetcode/ | false | false | 26 | null | |
Intel just released an LLM finetuning app for their Arc GPUs | 29 | I discovered that Intel has an LLM finetuning tool in their GitHub repository: https://github.com/open-edge-platform/edge-ai-tuning-kit | 2025-09-23T23:27:39 | https://www.reddit.com/r/LocalLLaMA/comments/1nowsyu/intel_just_released_a_llm_finetuning_app_for/ | Aggressive-Breath852 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nowsyu | false | null | t3_1nowsyu | /r/LocalLLaMA/comments/1nowsyu/intel_just_released_a_llm_finetuning_app_for/ | false | false | self | 29 |
Is Qwen3 VL 235b supposed to be better or worse than Qwen3 VL Plus? | 9 | Which one is better? Should someone run 235b locally or use Plus via API if they are optimizing for performance? (Assume enough hardware in any scenario).
Here are the API Platform info pages:
| name | link | input price | output price |
| ---------------------- | -------------------------------------------------------------------------------------------------------------------------------- | ----------: | -----------: |
| Qwen3 VL Plus | https://modelstudio.console.alibabacloud.com/?tab=doc#/doc/?type=model&url=2840914_2&modelId=qwen3-vl-plus | 0–32K input tokens: $0.20; 32K–128K: $0.30; 128K–256K: $0.60 | 0–32K input tokens: $1.60; 32K–128K: $2.40; 128K–256K: $4.80 |
| Qwen3 VL 235B Instruct | https://modelstudio.console.alibabacloud.com/?tab=doc#/doc/?type=model&url=2840914_2&modelId=qwen3-vl-235b-a22b-instruct | $0.700 | $2.800 |
| Qwen3 VL 235B Thinking | https://modelstudio.console.alibabacloud.com/?tab=doc#/doc/?type=model&url=2840914_2&modelId=qwen3-vl-235b-a22b-thinking | $0.700 | $8.400 | | 2025-09-23T23:20:55 | https://www.reddit.com/r/LocalLLaMA/comments/1nowni6/is_qwen3_vl_235b_supposed_to_be_better_or_worse/ | DistanceSolar1449 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nowni6 | false | null | t3_1nowni6 | /r/LocalLLaMA/comments/1nowni6/is_qwen3_vl_235b_supposed_to_be_better_or_worse/ | false | false | self | 9 | null |
Finally. Qwen3-VL is out | 14 | [Model Architecture ](https://preview.redd.it/uk6esd4uuzqf1.png?width=5908&format=png&auto=webp&s=0cc94d0e034c72ea2e86cda79bfe09890c187957)
I've been waiting for this. Anyone interested in finetuning Qwen3-VL-235B-A22B-Instruct? Would def love to share ideas | 2025-09-23T22:54:47 | https://www.reddit.com/r/LocalLLaMA/comments/1now2ck/finally_qwen3vl_is_out/ | function-devs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1now2ck | false | null | t3_1now2ck | /r/LocalLLaMA/comments/1now2ck/finally_qwen3vl_is_out/ | false | false | 14 | null | |
Qwen Devs and Release Teams right now. | 75 | Friggin’ Legends!!! Hope they get some well-deserved time off, but not too much time cause we need those Llama.cpp PRs worked LOL 🤣 Seriously tho, thanks for all the amazing new models. | 2025-09-23T22:51:31 | Porespellar | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1novzr4 | false | null | t3_1novzr4 | /r/LocalLLaMA/comments/1novzr4/qwen_devs_and_release_teams_right_now/ | false | false | default | 75 |
Qwen3VL-235B-A22B beats GPT5 and Claude-Opus 4.1 | 23 | 2025-09-23T22:33:38 | https://www.reddit.com/r/LocalLLaMA/comments/1novksp/qwen3vl235ba22b_beats_gpt5_and_claudeopus_41/ | AlanzhuLy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1novksp | false | null | t3_1novksp | /r/LocalLLaMA/comments/1novksp/qwen3vl235ba22b_beats_gpt5_and_claudeopus_41/ | false | false | 23 | null | ||
Datasets for instruction-following, tool use, conciseness; also size question | 6 | I'm starting my first training runs (on Qwen3-0.6B at first, on to Qwen3-4B as soon as I start getting results). I have my own things to run (will attempt a style/behaviour lift on Kimi K2, etc), but I'm worried about triggering catastrophic forgetting on the existing instruction following and tool use training.
So I'd like to mix some of that into the dataset too, or ideally just to train from -base and apply "instruct" after that. But what datasets for instruction following and tool use can I use? I see people mentioning they trained for tool use - how do you get or generate that data?
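For the tool-use data specifically, a common approach is to generate it synthetically: define a tool schema, have a strong model produce call/result traces, and verify them programmatically before training on them. The rough shape of one such sample (the schema here is an assumption; match whatever chat template the trainer applies):

    sample = {
        "messages": [
            {"role": "user", "content": "What's the weather in Oslo?"},
            {"role": "assistant", "tool_calls": [
                {"name": "get_weather", "arguments": {"city": "Oslo"}}]},
            {"role": "tool", "name": "get_weather", "content": '{"temp_c": 7}'},
            {"role": "assistant", "content": "It is about 7 C in Oslo right now."},
        ]
    }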
Separately: Qwens are wordy. 4B is a bad bloater of its own context window. Are there existing datasets to bake in some brevity?
And finally: is there some guidance as to how many pairs on SFT and DPO are sufficient for what size models? Something like "100 will sway .6B and you need 500 for 4B" but I just invented these numbers, I'd appreciate knowledgeable advice here.
Thanks! | 2025-09-23T22:15:00 | https://www.reddit.com/r/LocalLLaMA/comments/1nov4w3/datasets_for_instructionfollowing_tool_use/ | ramendik | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nov4w3 | false | null | t3_1nov4w3 | /r/LocalLLaMA/comments/1nov4w3/datasets_for_instructionfollowing_tool_use/ | false | false | self | 6 | null |
help on a school project | 0 | So I've chosen to showcase in our CCT (Creative Critical Thinking) class how a local LLaMA handles Java code generation, i.e., whether it can do tasks as complex as asking it to generate code close to this example:
import java.util.Scanner;

public class ArrayOperations {
    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        // Initial Array
        int[] dsaLA = {2, 4, 6, 8, 10, 12, 14};
        while (true) {
            System.out.println("\n===== ARRAY OPERATIONS MENU =====");
            System.out.println("1. Traverse (Display Elements)");
            System.out.println("2. Search");
            System.out.println("3. Insert");
            System.out.println("4. Delete");
            System.out.println("5. Exit");
            System.out.print("Choose an option: ");
            int choice = sc.nextInt();
            switch (choice) {
                case 1: // Traverse
                    System.out.println("\nArray Elements:");
                    displayArray(dsaLA);
                    break;
                case 2: // Search
                    System.out.print("\nEnter a value to search: ");
                    int searchValue = sc.nextInt();
                    searchArray(dsaLA, searchValue);
                    break;
                case 3: // Insert
                    System.out.print("\nEnter value to insert: ");
                    int insertValue = sc.nextInt();
                    System.out.print("Enter index to insert at: ");
                    int insertIndex = sc.nextInt();
                    dsaLA = insertArray(dsaLA, insertValue, insertIndex);
                    System.out.println("New Array after Insertion:");
                    displayArray(dsaLA);
                    break;
                case 4: // Delete
                    System.out.print("\nEnter value to delete: ");
                    int deleteValue = sc.nextInt();
                    dsaLA = deleteArray(dsaLA, deleteValue);
                    System.out.println("New Array after Deletion:");
                    displayArray(dsaLA);
                    break;
                case 5: // Exit
                    System.out.println("Exiting program. Goodbye!");
                    sc.close();
                    return;
                default:
                    System.out.println("Invalid choice! Please select again.");
            }
        }
    }

    // Function to display array
    public static void displayArray(int[] arr) {
        for (int i = 0; i < arr.length; i++) {
            System.out.println("dsaLA[" + i + "]: " + arr[i]);
        }
    }

    // Function to search array
    public static void searchArray(int[] arr, int value) {
        boolean found = false;
        for (int i = 0; i < arr.length; i++) {
            if (arr[i] == value) {
                System.out.println("The value " + value + " is found at index " + i);
                found = true;
                break;
            }
        }
        if (!found) {
            System.out.println("The value " + value + " is not found in the array.");
        }
    }

    // Function to insert into array
    public static int[] insertArray(int[] arr, int value, int index) {
        if (index < 0 || index > arr.length) {
            System.out.println("Invalid index! Insertion failed.");
            return arr;
        }
        int[] newArr = new int[arr.length + 1];
        for (int i = 0, j = 0; i < newArr.length; i++) {
            if (i == index) {
                newArr[i] = value;
            } else {
                newArr[i] = arr[j];
                j++;
            }
        }
        return newArr;
    }

    // Function to delete from array
    public static int[] deleteArray(int[] arr, int value) {
        int index = -1;
        for (int i = 0; i < arr.length; i++) {
            if (arr[i] == value) {
                index = i;
                break;
            }
        }
        if (index == -1) {
            System.out.println("Value not found! Deletion failed.");
            return arr;
        }
        int[] newArr = new int[arr.length - 1];
        for (int i = 0, j = 0; i < arr.length; i++) {
            if (i != index) {
                newArr[j] = arr[i];
                j++;
            }
        }
        return newArr;
    }
}
| 2025-09-23T22:13:44 | https://www.reddit.com/r/LocalLLaMA/comments/1nov3tx/help_on_a_school_project/ | Goss3n | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nov3tx | false | null | t3_1nov3tx | /r/LocalLLaMA/comments/1nov3tx/help_on_a_school_project/ | false | false | self | 0 | null |
The AI landscape - September 2025 | 173 | 2025-09-23T22:11:30 | abdouhlili | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nov1w5 | false | null | t3_1nov1w5 | /r/LocalLLaMA/comments/1nov1w5/the_ai_landscape_september_2025/ | false | false | 173 |
Best open source TTS model with emotion control and emotion tags? | 8 | What is the best open-source TTS model that has emotion-control capabilities and supports tags like (laugh), (sigh)? | 2025-09-23T22:02:38 | https://www.reddit.com/r/LocalLLaMA/comments/1nouu70/best_open_source_tts_model_with_emotion_control/ | Adept_Lawyer_4592 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nouu70 | false | null | t3_1nouu70 | /r/LocalLLaMA/comments/1nouu70/best_open_source_tts_model_with_emotion_control/ | false | false | self | 8 | null |
OrKa-reasoning: 95.6% cost savings with local models + cognitive orchestration and high accuracy/success-rate | 11 | Built a cognitive AI framework that achieved 95%+ accuracy using local DeepSeek-R1:32b vs expensive cloud APIs.
**Economics:**
- Total cost: $0.131 vs $2.50-3.00 cloud
- 114K tokens processed locally
- Extended reasoning capability (11 loops vs typical 3-4)
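The headline figure checks out against the upper end of the cloud estimate:

    local, cloud = 0.131, 3.00
    print(f"{(1 - local / cloud) * 100:.1f}% saved")  # 95.6%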
**Architecture:**
Multi-agent Society of Mind approach with specialized roles, memory layers, and iterative debate loops. Full YAML-declarative orchestration.
**Live on HuggingFace:** https://huggingface.co/spaces/marcosomma79/orka-reasoning
Shows you can get enterprise-grade reasoning without breaking the bank on API costs. All code is open source. | 2025-09-23T22:01:20 | https://www.reddit.com/r/LocalLLaMA/comments/1nout35/orkareasoning_956_cost_savings_with_local_models/ | marcosomma-OrKA | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nout35 | false | null | t3_1nout35 | /r/LocalLLaMA/comments/1nout35/orkareasoning_956_cost_savings_with_local_models/ | false | false | self | 11 |
I had no idea local models were this good at this point! Now I’m obsessed with getting some dedicated hardware, but I’m not really sure where to start. | 0 | So I stumbled into the local LLM/SLM world while messing with some document automation. I’d written the idea off, assuming either the models sucked or the hardware was out of normal financial reach. Apparently I was wrong!
I’ve got an M4 MacBook Pro, and I’ve now got LM Studio running qwen-3-4b and gemma-3-27b to do some OCR and document tagging work; it’s working beautifully! But realistically it’s not sustainable, because I can’t devote this machine to this purpose. What I really need is something that I can run as a server.
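In case it helps anyone, LM Studio exposes an OpenAI-compatible server (default port 1234), so the tagging script can live anywhere on the network; here is roughly what mine looks like (model name and prompt are just examples):

```python
# Sketch: tag a document via LM Studio's OpenAI-compatible local server.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

def tag_document(text: str) -> str:
    resp = client.chat.completions.create(
        model="qwen3-4b",  # whatever identifier LM Studio shows for your loaded model
        messages=[
            {"role": "system", "content": "Return 3-5 comma-separated topic tags."},
            {"role": "user", "content": text[:4000]},
        ],
        temperature=0.2,
    )
    return resp.choices[0].message.content

print(tag_document("Invoice #1042 from Acme Corp, due 2025-10-01..."))
```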
My current home server is a NUC, great for all my little Docker apps, but I know it’s not going to cut it for a good local AI. I’ve been thinking about upgrading it anyway, and now those thoughts have expanded significantly. But I’m not really clear on what I’m looking at when I start shopping for server hardware.
I see a lot of people talk about refurbished enterprise stuff. I know I need a lot of RAM and ideally a GPU. As a side benefit for all my media purposes, I’d love to have like 8 hard drive bays without having to use a separate enclosure. I don’t think I want to deal with a rack-mount situation. And then I start trying to understand power usage and fan noise and my eyes glaze over.
If anyone has recommendations I’d appreciate it, both for the hardware itself, as well as where to get it and any learning resources. For comparison’s sake, what would be the minimum viable hardware, from the server point of view, to run the models I mentioned above at similar capacity? | 2025-09-23T22:00:40 | https://www.reddit.com/r/LocalLLaMA/comments/1nouseo/i_had_no_idea_local_models_were_this_good_at_this/ | chazwhiz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nouseo | false | null | t3_1nouseo | /r/LocalLLaMA/comments/1nouseo/i_had_no_idea_local_models_were_this_good_at_this/ | false | false | self | 0 | null
Where to get started? | 2 | Hi all.
So I'm looking to run a general-purpose home LLM for my family's everyday use. I've been on the fringe looking in for a while, and now I'm at a point where I want to dive in. I guess I just don't know where to begin.
I've looked up some videos and seen some stuff but am still a bit overwhelmed. I know GPUs and their VRAM are generally the way to go, but I've also seen some stuff on the Framework AI desktops and don't know how those stack up.
The question is, where to begin? What model to run and how to run it efficiently? | 2025-09-23T21:56:30 | https://www.reddit.com/r/LocalLLaMA/comments/1nouou1/where_to_get_started/ | Firecracker048 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nouou1 | false | null | t3_1nouou1 | /r/LocalLLaMA/comments/1nouou1/where_to_get_started/ | false | false | self | 2 | null |
Advice on CPU + GPU Build for Large Model Local LLM Inference | 1 | Please provide feedback on anything else I need to consider for an AI inference build where I can run multiple models at the same time and pick the right model quickly for different agentic coding workflows.
Overall Build - Single EPYC for CPU inference, with a GPU to accelerate the prompt-processing-heavy parts where necessary, for at most 1 to 3 users at home.
It is probably overkill for what I need, but I am hoping it will keep me going for a long time, with a GPU upgrade in a couple of years' time.
Motherboard: **SuperMicro H14SSL-NT**
* 12-DIMM support for maximum memory bandwidth
* 10G Networking to connect to a NAS.
* Dual PCIe 5 x4 M2 slots
* Approx $850
CPU: **AMD EPYC 9175F**
* Full 16 CCDs for maximum bandwidth
* Highest Frequency
* AVX-512 Support
* Only 16 cores though
* Full 32MB L3 cache for each core (one core per CCD), though this matters less for LLM inference.
* Approx $2850
Memory: 12x 32GB for a total of **384GB**
* DDR5-6400 for maximum bandwidth
* Approx $3000 with $250 per DIMM
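Quick sanity check on why 12 channels of DDR5-6400 matter: decode speed on a memory-bound model is roughly bandwidth divided by the bytes touched per token (rough numbers; real-world bandwidth will be lower):

```python
# Back-of-envelope: theoretical memory bandwidth and a decode-speed ceiling.
channels, mts, bytes_per_transfer = 12, 6400e6, 8
bandwidth = channels * mts * bytes_per_transfer       # ~614 GB/s theoretical
active_weights_gb = 37                                # assumption: active weights of a large MoE at 8-bit
ceiling_tok_s = bandwidth / (active_weights_gb * 1e9)
print(f"{bandwidth/1e9:.0f} GB/s -> ~{ceiling_tok_s:.0f} tok/s upper bound")
```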
GPU: An RTX 5060 or an RTX Pro 4000 Blackwell
* Approx $600 - $1500
Disks: 2x Samsung 9100 Pro 4TB
* Already have them.
* Approx $800
Power: Corsair HXi1500 | 2025-09-23T21:49:47 | https://www.reddit.com/r/LocalLLaMA/comments/1nouit1/advice_on_cpu_gpu_build_inference_for_large_model/ | Weary-Net1650 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nouit1 | false | null | t3_1nouit1 | /r/LocalLLaMA/comments/1nouit1/advice_on_cpu_gpu_build_inference_for_large_model/ | false | false | self | 1 | null |
Qwen3-Omni thinking model running on local H100 (major leap over 2.5) | 132 | Just gave the new Qwen3-Omni (thinking model) a run on my local H100.
Running FP8 dynamic quant with a 32k context size, enough room for 11x concurrency without issue. Latency is higher (which is expected) since thinking is enabled and it's streaming reasoning tokens.
But the output is sharp, and it's clearly smarter than Qwen 2.5 with better reasoning, memory, and real-world awareness.
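If you want to poke at the text side yourself, a minimal streaming client against the local endpoint looks roughly like this (vLLM serves an OpenAI-compatible API on port 8000 by default; the model id is a placeholder for whatever you loaded):

```python
# Sketch: stream tokens from a locally served model via the
# OpenAI-compatible API that vLLM exposes (default port 8000).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")

stream = client.chat.completions.create(
    model="Qwen/Qwen3-Omni-30B-A3B-Thinking",  # placeholder: use whatever id you serve
    messages=[{"role": "user", "content": "What did I just hum?"}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```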
It consistently understands what I’m saying, and even picked up when I was “singing” (just made some boop boop sounds lol).
Tool calling works too, which is huge. More on that + load testing soon! | 2025-09-23T21:49:43 | https://v.redd.it/hsp0mvqthzqf1 | Weary-Wing-6806 | /r/LocalLLaMA/comments/1nouiqj/qwen3omni_thinking_model_running_on_local_h100/ | 1970-01-01T00:00:00 | 0 | {} | 1nouiqj | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/hsp0mvqthzqf1/DASHPlaylist.mpd?a=1761385788%2CMWJmYTU5OTJiMDE0OGFmMzg1N2U1ZmMwMDUzYTE0YTdjZjc2ZGM0NjI3YTY2ZjM3ZWY3YzQ0MTRjOTZiNTA4OQ%3D%3D&v=1&f=sd', 'duration': 175, 'fallback_url': 'https://v.redd.it/hsp0mvqthzqf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/hsp0mvqthzqf1/HLSPlaylist.m3u8?a=1761385788%2CMTljNzU0YmVjNzFjMGE5N2JkNGE1ZjdjYmIyNDA3YjA2YjZlMDc4YWRhODg0NGMzNDVlOTkyMzhjZDFlZDQ3NQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/hsp0mvqthzqf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1nouiqj | /r/LocalLLaMA/comments/1nouiqj/qwen3omni_thinking_model_running_on_local_h100/ | false | false | 132 | {'enabled': False, 'images': [{'id': 'ZG5qNW92cXRoenFmMQidY-VedNK5oWhNvWMcKBJGzqCaGjB2dyVwW_xfHksA', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ZG5qNW92cXRoenFmMQidY-VedNK5oWhNvWMcKBJGzqCaGjB2dyVwW_xfHksA.png?width=108&crop=smart&format=pjpg&auto=webp&s=03df94ae49590705981ca5c7bd68689bb5eb9940', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/ZG5qNW92cXRoenFmMQidY-VedNK5oWhNvWMcKBJGzqCaGjB2dyVwW_xfHksA.png?width=216&crop=smart&format=pjpg&auto=webp&s=dec411071d29a5ae3a14127e11513911c3c59a46', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/ZG5qNW92cXRoenFmMQidY-VedNK5oWhNvWMcKBJGzqCaGjB2dyVwW_xfHksA.png?width=320&crop=smart&format=pjpg&auto=webp&s=f2cc4fa327e337f070076f3fd10e1bc952d75ad6', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/ZG5qNW92cXRoenFmMQidY-VedNK5oWhNvWMcKBJGzqCaGjB2dyVwW_xfHksA.png?width=640&crop=smart&format=pjpg&auto=webp&s=56ce15f17d0b3dbde3d515c7c56a8fc531b792a6', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/ZG5qNW92cXRoenFmMQidY-VedNK5oWhNvWMcKBJGzqCaGjB2dyVwW_xfHksA.png?width=960&crop=smart&format=pjpg&auto=webp&s=4d42211f9727693e83ec529e6b913b5d3e8d62ff', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/ZG5qNW92cXRoenFmMQidY-VedNK5oWhNvWMcKBJGzqCaGjB2dyVwW_xfHksA.png?width=1080&crop=smart&format=pjpg&auto=webp&s=63816c815e530247d29b4b76082e9b65a38082a1', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/ZG5qNW92cXRoenFmMQidY-VedNK5oWhNvWMcKBJGzqCaGjB2dyVwW_xfHksA.png?format=pjpg&auto=webp&s=440309c67e0c415362429630f2482a56cab399c6', 'width': 1920}, 'variants': {}}]} | |
Question Regarding Classroom Use of Local LLMs | 2 | I'm teaching an English class for a group of second-semester IT students in Germany and have decided to completely embrace (local) AI use in the course.
There is a range of activities we'll be doing together, but most or all will require them to use a locally installed LLM for discussion, brainstorming, and as an English-language source that they will evaluate and correct if necessary.
The target group is 20-23 year old tech students in Bavaria. They will have good portable hardware for the class (iPads, MS Surfaces, or beefy gaming notebooks) as well as latest-generation smartphones (80% using iPhones).
Their English is already very good in most cases (B2+), so AI-based projects might help them develop vocabulary and structure in a more *personalized* way.
I myself like to use Ollama with an 8B Llama 3.1 model for small, unimportant tasks on my work computer. I use larger models and GUIs like LM Studio on my gaming computer at home.
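Whichever models end up recommended, my rough plan is a small shared exercise client so all students hit one Ollama endpoint; a sketch of what I have in mind (model name and prompts are placeholders):

```python
# Sketch: classroom client against Ollama's REST API (default port 11434).
import requests

def correct_english(sentence: str, model: str = "llama3.1:8b") -> str:
    r = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": model,
            "messages": [
                {"role": "system",
                 "content": "Correct the student's English and briefly explain each fix."},
                {"role": "user", "content": sentence},
            ],
            "stream": False,
        },
        timeout=120,
    )
    return r.json()["message"]["content"]

print(correct_english("Yesterday I have seen a interesting film."))
```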
But which light but usable models (and interfaces) would you recommend for a project like this? Any tips are appreciated! | 2025-09-23T21:36:21 | https://www.reddit.com/r/LocalLLaMA/comments/1nou6rv/question_regarding_classroom_use_of_local_llms/ | McDoof | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nou6rv | false | null | t3_1nou6rv | /r/LocalLLaMA/comments/1nou6rv/question_regarding_classroom_use_of_local_llms/ | false | false | self | 2 | null |
What happens when coding agents stop feeling like dialup? | 0 | 2025-09-23T21:35:36 | https://martinalderson.com/posts/what-happens-when-coding-agents-stop-feeling-like-dialup/ | malderson | martinalderson.com | 1970-01-01T00:00:00 | 0 | {} | 1nou625 | false | null | t3_1nou625 | /r/LocalLLaMA/comments/1nou625/what_happens_when_coding_agents_stop_feeling_like/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'yaS710QCUNO04GnC06TR_iSHG2O_JKW8weRh4ZT0LOU', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/yaS710QCUNO04GnC06TR_iSHG2O_JKW8weRh4ZT0LOU.png?width=108&crop=smart&auto=webp&s=bd582f3a25cac67a8b298889ab24dd16617d5863', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/yaS710QCUNO04GnC06TR_iSHG2O_JKW8weRh4ZT0LOU.png?width=216&crop=smart&auto=webp&s=625e1e216f043198b1a92380901d90b9056e3514', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/yaS710QCUNO04GnC06TR_iSHG2O_JKW8weRh4ZT0LOU.png?width=320&crop=smart&auto=webp&s=b1d94ccd01c66b388e4eba1843221a355159d5c7', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/yaS710QCUNO04GnC06TR_iSHG2O_JKW8weRh4ZT0LOU.png?width=640&crop=smart&auto=webp&s=53cf180d23c35eed208bd018924ecb511925558e', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/yaS710QCUNO04GnC06TR_iSHG2O_JKW8weRh4ZT0LOU.png?width=960&crop=smart&auto=webp&s=ca9bf998540bf095efc9fce252f4e3c948b73364', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/yaS710QCUNO04GnC06TR_iSHG2O_JKW8weRh4ZT0LOU.png?width=1080&crop=smart&auto=webp&s=89c4e036fdec3a92d8c3274244010d9fc71a608d', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/yaS710QCUNO04GnC06TR_iSHG2O_JKW8weRh4ZT0LOU.png?auto=webp&s=f7678c42c6be79d63b952305d16774a75b455b6b', 'width': 1200}, 'variants': {}}]} | |
STEM and Coding LLMs | 3 | I’ve been trying out LLMs and can’t settle on clear choices for the best ones.
My use cases are STEM, mostly math, and programming.
I am limited by hardware (mobile 4070, 13th gen i7, 16GB RAM), but here are models I am testing:
- Qwen3 14B
- Magistral-small-2509
- Phi4 reasoning-plus
- Mistral-small 3.2
- GPT-OSS 20B
- Gemma3 12B
- Llama4 Scout / Maverick (slow)
I have tried several others but they were not as good in my experience.
What’s your experience with these? I want to keep up to 3 of them, with at least one vision enabled, one for STEM, and one for coding.
To me accuracy is above all. If it’s accurate I don’t care how slow it is as long as it loads and runs. | 2025-09-23T21:26:24 | https://www.reddit.com/r/LocalLLaMA/comments/1notxs8/stem_and_coding_llms/ | Southern-Blueberry46 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1notxs8 | false | null | t3_1notxs8 | /r/LocalLLaMA/comments/1notxs8/stem_and_coding_llms/ | false | false | self | 3 | null |
MediaTek Dimensity 9500: Huge speed increase in prefill speed, generation also faster but memory limited | 11 | See Geekerwan’s latest video: https://youtu.be/tDvr1YOdlWg
Amazing they achieved such a huge bump in token prefill speed. Very helpful for summarization, classification and long-context QA. | 2025-09-23T21:10:48 | Balance- | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1notjjc | false | null | t3_1notjjc | /r/LocalLLaMA/comments/1notjjc/mediatek_dimensity_9500_huge_speed_increase_in/ | false | false | default | 11 | {'enabled': True, 'images': [{'id': 'letmkgllczqf1', 'resolutions': [{'height': 59, 'url': 'https://preview.redd.it/letmkgllczqf1.jpeg?width=108&crop=smart&auto=webp&s=25398ed390c10b51fbd116ef60bc889c88d34c2e', 'width': 108}, {'height': 118, 'url': 'https://preview.redd.it/letmkgllczqf1.jpeg?width=216&crop=smart&auto=webp&s=97017bad7913f32aaba193833485206ecd2d9b25', 'width': 216}, {'height': 176, 'url': 'https://preview.redd.it/letmkgllczqf1.jpeg?width=320&crop=smart&auto=webp&s=9a7ad44f2e76e5e58fd284e9f0c1ff42086bf7de', 'width': 320}, {'height': 352, 'url': 'https://preview.redd.it/letmkgllczqf1.jpeg?width=640&crop=smart&auto=webp&s=0c3e681d4b0d1f7b73b2b705a84d7332eec259b8', 'width': 640}, {'height': 528, 'url': 'https://preview.redd.it/letmkgllczqf1.jpeg?width=960&crop=smart&auto=webp&s=835e9448ffb0746ece73e3a8fd1fdaac75996c3e', 'width': 960}, {'height': 594, 'url': 'https://preview.redd.it/letmkgllczqf1.jpeg?width=1080&crop=smart&auto=webp&s=223c142ada293380b89be8bb34d846acb7c45b9a', 'width': 1080}], 'source': {'height': 1549, 'url': 'https://preview.redd.it/letmkgllczqf1.jpeg?auto=webp&s=a0abf6b3d82d9bf95abb4f21c487beea545833ff', 'width': 2816}, 'variants': {}}]} | |
What's the best model for Creative AI Writing? | 2 | What's the best model for Creative AI Writing? Preferably one that's a bit media-savvy in anime, cartoons, and novels, because I often reference those a lot. The best is probably Claude, but that's obviously not open source. I usually take criticism, have it expand what I write, gimme ideas, etc. Any that fit the bill? | 2025-09-23T21:01:51 | https://www.reddit.com/r/LocalLLaMA/comments/1notb88/whats_the_best_model_for_creative_ai_writing/ | EffectiveIcy6917 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1notb88 | false | null | t3_1notb88 | /r/LocalLLaMA/comments/1notb88/whats_the_best_model_for_creative_ai_writing/ | false | false | self | 2 | null
Qwen3-VL-235B-A22B available on HF | 47 | [https://huggingface.co/Qwen/Qwen3-VL-235B-A22B-Instruct](https://huggingface.co/Qwen/Qwen3-VL-235B-A22B-Instruct)
[https://huggingface.co/Qwen/Qwen3-VL-235B-A22B-Thinking](https://huggingface.co/Qwen/Qwen3-VL-235B-A22B-Thinking) | 2025-09-23T20:55:13 | https://www.reddit.com/r/LocalLLaMA/comments/1not4zb/qwen3vl235ba22b_available_on_hf/ | AlbeHxT9 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1not4zb | false | null | t3_1not4zb | /r/LocalLLaMA/comments/1not4zb/qwen3vl235ba22b_available_on_hf/ | false | false | self | 47 | null |
Qwen3-VL-235B-A22B-Thinking and Qwen3-VL-235B-A22B-Instruct | 175 | [https://huggingface.co/Qwen/Qwen3-VL-235B-A22B-Thinking](https://huggingface.co/Qwen/Qwen3-VL-235B-A22B-Thinking)
[https://huggingface.co/Qwen/Qwen3-VL-235B-A22B-Instruct](https://huggingface.co/Qwen/Qwen3-VL-235B-A22B-Instruct)
Meet Qwen3-VL — the most powerful vision-language model in the Qwen series to date.
This generation delivers comprehensive upgrades across the board: superior text understanding & generation, deeper visual perception & reasoning, extended context length, enhanced spatial and video dynamics comprehension, and stronger agent interaction capabilities.
Available in Dense and MoE architectures that scale from edge to cloud, with Instruct and reasoning‑enhanced Thinking editions for flexible, on‑demand deployment.
# Key Enhancements:
* **Visual Agent**: Operates PC/mobile GUIs—recognizes elements, understands functions, invokes tools, completes tasks.
* **Visual Coding Boost**: Generates Draw.io/HTML/CSS/JS from images/videos.
* **Advanced Spatial Perception**: Judges object positions, viewpoints, and occlusions; provides stronger 2D grounding and enables 3D grounding for spatial reasoning and embodied AI.
* **Long Context & Video Understanding**: Native 256K context, expandable to 1M; handles books and hours-long video with full recall and second-level indexing.
* **Enhanced Multimodal Reasoning**: Excels in STEM/Math—causal analysis and logical, evidence-based answers.
* **Upgraded Visual Recognition**: Broader, higher-quality pretraining lets it “recognize everything”—celebrities, anime, products, landmarks, flora/fauna, etc.
* **Expanded OCR**: Supports 32 languages (up from 19); robust in low light, blur, and tilt; better with rare/ancient characters and jargon; improved long-document structure parsing.
* **Text Understanding on par with pure LLMs**: Seamless text–vision fusion for lossless, unified comprehension.
| 2025-09-23T20:55:04 | https://www.reddit.com/r/LocalLLaMA/comments/1not4up/qwen3vl235ba22bthinking_and/ | jacek2023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1not4up | true | null | t3_1not4up | /r/LocalLLaMA/comments/1not4up/qwen3vl235ba22bthinking_and/ | false | false | self | 175 | {'enabled': False, 'images': [{'id': 'buohQYfptNXWK_RjSglwO9z3swviJ-ly59KfrJBuCDs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/buohQYfptNXWK_RjSglwO9z3swviJ-ly59KfrJBuCDs.png?width=108&crop=smart&auto=webp&s=d793a5e3ae9182e97ca695982976783d58dd37b9', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/buohQYfptNXWK_RjSglwO9z3swviJ-ly59KfrJBuCDs.png?width=216&crop=smart&auto=webp&s=7da109771c26b7febb0c98355ecf5a63b4291177', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/buohQYfptNXWK_RjSglwO9z3swviJ-ly59KfrJBuCDs.png?width=320&crop=smart&auto=webp&s=8e954bad6bb1ff9fb8a235189b3fe6f089ed45d5', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/buohQYfptNXWK_RjSglwO9z3swviJ-ly59KfrJBuCDs.png?width=640&crop=smart&auto=webp&s=e7b84536da0880f9dbcad1c25d46560ac5263a67', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/buohQYfptNXWK_RjSglwO9z3swviJ-ly59KfrJBuCDs.png?width=960&crop=smart&auto=webp&s=61453b246e3c4b867e815b90937e5eb8cf7d1822', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/buohQYfptNXWK_RjSglwO9z3swviJ-ly59KfrJBuCDs.png?width=1080&crop=smart&auto=webp&s=310313b46168efd05838c4bbb6e9cf51042612ab', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/buohQYfptNXWK_RjSglwO9z3swviJ-ly59KfrJBuCDs.png?auto=webp&s=7bb0c80c3ea8485003dbb0d1f4d841149b372014', 'width': 1200}, 'variants': {}}]} |
Qwen3-VL available on HF | 1 | [deleted] | 2025-09-23T20:54:26 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1not4a7 | false | null | t3_1not4a7 | /r/LocalLLaMA/comments/1not4a7/qwen3vl_available_on_hf/ | false | false | default | 1 | null | ||
16–24x More Experiment Throughput Without Extra GPUs | 1 | We built RapidFire AI, an open-source Python tool to **speed up LLM fine-tuning and post-training** with a level of control not found in most tools: **stop, resume, clone-modify, and warm-start configs on the fly**—so you can branch experiments while they’re running instead of starting from scratch or running them one after another.
* **Works within your OSS stack:** PyTorch, Hugging Face TRL/PEFT, and MLflow
* **Hyperparallel search:** launch as many configs as you want together, even on a single GPU
* **Dynamic real-time control:** stop laggards, resume them later to revisit, branch promising configs in flight.
* **Deterministic eval + run tracking:** Metrics curves are automatically plotted and are comparable.
* **Apache License v2.0:** No vendor lock-in. Develop in your IDE, launch from the CLI.
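For readers new to the idea, here is a purely conceptual sketch of what clone-modify plus warm-start means (hypothetical names, not the actual rapidfireai API; see the docs below for the real interface):

```python
# Conceptual only: branch a promising run in flight by cloning its config,
# tweaking one knob, and resuming from its latest checkpoint.
from dataclasses import dataclass, replace
from typing import Optional

@dataclass(frozen=True)
class RunConfig:
    lr: float
    lora_rank: int
    checkpoint: Optional[str] = None  # warm-start point

base = RunConfig(lr=2e-4, lora_rank=16)

# Clone-modify + warm-start instead of restarting from step 0:
branch = replace(base, lr=1e-4, checkpoint="runs/base/step_500")
print(branch)
```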
Repo: [https://github.com/RapidFireAI/rapidfireai](https://github.com/RapidFireAI/rapidfireai)
PyPI: [https://pypi.org/project/rapidfireai/](https://pypi.org/project/rapidfireai/)
Docs: [https://oss-docs.rapidfire.ai/](https://oss-docs.rapidfire.ai/)
We hope you enjoy the power of *rapid experimentation* with RapidFire AI for your LLM customization projects! We’d love to hear your feedback–both positive and negative–on the UX and UI, API, any rough edges, and what integrations and extensions you’d be excited to see. | 2025-09-23T20:46:48 | https://www.reddit.com/r/LocalLLaMA/comments/1nosx48/1624x_more_experiment_throughput_without_extra/ | Whole-Net-8262 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nosx48 | false | null | t3_1nosx48 | /r/LocalLLaMA/comments/1nosx48/1624x_more_experiment_throughput_without_extra/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'zx-Z6PpKFzD0T5r64-sm5rUpQYA4AugsXzYN99E4Bdo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zx-Z6PpKFzD0T5r64-sm5rUpQYA4AugsXzYN99E4Bdo.png?width=108&crop=smart&auto=webp&s=9476674bc66ea4a655ff7d5a82d00a2caa153540', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zx-Z6PpKFzD0T5r64-sm5rUpQYA4AugsXzYN99E4Bdo.png?width=216&crop=smart&auto=webp&s=4f4260892f35d2a944f9da995d2c8ac60b3d68d3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zx-Z6PpKFzD0T5r64-sm5rUpQYA4AugsXzYN99E4Bdo.png?width=320&crop=smart&auto=webp&s=867014c2ae4f6ce4bbf6393e7b26f760fb0d3d40', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zx-Z6PpKFzD0T5r64-sm5rUpQYA4AugsXzYN99E4Bdo.png?width=640&crop=smart&auto=webp&s=4308e2cd871c4b45e00191eeb538d5b7de735e19', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zx-Z6PpKFzD0T5r64-sm5rUpQYA4AugsXzYN99E4Bdo.png?width=960&crop=smart&auto=webp&s=0524080bec1abd67e491a4a5a56af2bf6841d08b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zx-Z6PpKFzD0T5r64-sm5rUpQYA4AugsXzYN99E4Bdo.png?width=1080&crop=smart&auto=webp&s=b06be89c5f3339f3823c3ea50b3938d2062a5280', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/zx-Z6PpKFzD0T5r64-sm5rUpQYA4AugsXzYN99E4Bdo.png?auto=webp&s=140b2e3f40eaf40b7a9c6e169dc02b6f3270b1ba', 'width': 1200}, 'variants': {}}]} |
Anybody knows what tts model been used in this video? | 1 | 2025-09-23T20:37:53 | https://v.redd.it/r4rh1k1q6zqf1 | Adept_Lawyer_4592 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nosows | false | {'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/r4rh1k1q6zqf1/DASHPlaylist.mpd?a=1761251890%2CMzYyZmM2OGRlNTYyNGZjOWI5Zjg5ZDFlMzU3MmY1YWIxYjI2MTZlNWIyN2Y4YWQwYjRlMzlkZDg2MDUxZTRjNw%3D%3D&v=1&f=sd', 'duration': 34, 'fallback_url': 'https://v.redd.it/r4rh1k1q6zqf1/DASH_480.mp4?source=fallback', 'has_audio': True, 'height': 854, 'hls_url': 'https://v.redd.it/r4rh1k1q6zqf1/HLSPlaylist.m3u8?a=1761251890%2CNDM1YTVhYzczMzc1ZTUyNGFmMzI1ZmQzNGYyZTEzY2I1NDFiYjRjMGVjOWQxMWEwMGI5Y2I4MjdjMWQ5NjRlNg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/r4rh1k1q6zqf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 480}} | t3_1nosows | /r/LocalLLaMA/comments/1nosows/anybody_knows_what_tts_model_been_used_in_this/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'bGphYXU1NHE2enFmMZHx8TCDGNzZ0NKT7Hd8PfIPVuMYKPSH2c8yOruQfdtV', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/bGphYXU1NHE2enFmMZHx8TCDGNzZ0NKT7Hd8PfIPVuMYKPSH2c8yOruQfdtV.png?width=108&crop=smart&format=pjpg&auto=webp&s=c60407674040dd7e3511096cb8812288639b74da', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/bGphYXU1NHE2enFmMZHx8TCDGNzZ0NKT7Hd8PfIPVuMYKPSH2c8yOruQfdtV.png?width=216&crop=smart&format=pjpg&auto=webp&s=d84ff6b854be17ad56537306162abbd737498a4d', 'width': 216}, {'height': 569, 'url': 'https://external-preview.redd.it/bGphYXU1NHE2enFmMZHx8TCDGNzZ0NKT7Hd8PfIPVuMYKPSH2c8yOruQfdtV.png?width=320&crop=smart&format=pjpg&auto=webp&s=efdbb19f39b47a3c681463613998f73be82ee113', 'width': 320}, {'height': 1138, 'url': 'https://external-preview.redd.it/bGphYXU1NHE2enFmMZHx8TCDGNzZ0NKT7Hd8PfIPVuMYKPSH2c8yOruQfdtV.png?width=640&crop=smart&format=pjpg&auto=webp&s=8f720b5e8ed9ff9e1d5e594d9cd51d1110389f01', 'width': 640}, {'height': 1707, 'url': 'https://external-preview.redd.it/bGphYXU1NHE2enFmMZHx8TCDGNzZ0NKT7Hd8PfIPVuMYKPSH2c8yOruQfdtV.png?width=960&crop=smart&format=pjpg&auto=webp&s=3b175a904780ddb560ec2c834be88ca0db18c725', 'width': 960}], 'source': {'height': 1757, 'url': 'https://external-preview.redd.it/bGphYXU1NHE2enFmMZHx8TCDGNzZ0NKT7Hd8PfIPVuMYKPSH2c8yOruQfdtV.png?format=pjpg&auto=webp&s=7e84cbb9fcf234bc561105c5e86fd6f65606e982', 'width': 988}, 'variants': {}}]} | ||
Qwen3-VL: Sharper Vision, Deeper Thought, Broader Action | 188 | 2025-09-23T20:26:06 | https://qwen.ai/blog?id=99f0335c4ad9ff6153e517418d48535ab6d8afef&from=research.latest-advancements-list | abdouhlili | qwen.ai | 1970-01-01T00:00:00 | 0 | {} | 1nosdxy | false | null | t3_1nosdxy | /r/LocalLLaMA/comments/1nosdxy/qwen3vl_sharper_vision_deeper_thought_broader/ | false | false | default | 188 | null | |
Qwen3 vl 235B A22B | 14 | I saw that Qwen's new multimodal vision model, Qwen3-VL-235B-A22B, is available in Qwen Chat, but I didn't find it on Hugging Face. Does anyone know if it will be made available? | 2025-09-23T20:13:32 | https://www.reddit.com/r/LocalLLaMA/comments/1nos247/qwen3_vl_235b_a22b/ | AppealThink1733 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nos247 | false | null | t3_1nos247 | /r/LocalLLaMA/comments/1nos247/qwen3_vl_235b_a22b/ | false | false | self | 14 | null
GPU Fenghua No.3, 112GB HBM, DX12, Vulkan 1.2, Claims to Support CUDA | 94 | * Over 112 GB high-bandwidth memory for large-scale AI workloads
* First Chinese GPU with hardware ray tracing support
* vGPU design architecture with hardware virtualization
* Supports DirectX 12, Vulkan 1.2, OpenGL 4.6, and up to six 8K displays
* Domestic design based on OpenCore RISC-V CPU and full set of IP
[https://videocardz.com/newz/innosilicon-unveils-fenghua-3-gpu-with-directx12-support-and-hardware-ray-tracing](https://videocardz.com/newz/innosilicon-unveils-fenghua-3-gpu-with-directx12-support-and-hardware-ray-tracing)
[https://www.tomshardware.com/pc-components/gpus/chinas-latest-gpu-arrives-with-claims-of-cuda-compatibility-and-rt-support-fenghua-no-3-also-boasts-112gb-of-hbm-memory-for-ai](https://www.tomshardware.com/pc-components/gpus/chinas-latest-gpu-arrives-with-claims-of-cuda-compatibility-and-rt-support-fenghua-no-3-also-boasts-112gb-of-hbm-memory-for-ai)
# [Claims to Support CUDA](https://www.techpowerup.com/341268/innosilicons-fenghua-no-3-gpu-launches-with-112gb-hbm-memory-and-claims-to-support-cuda)
| 2025-09-23T20:05:10 | https://www.reddit.com/r/LocalLLaMA/comments/1noru3p/gpu_fenghua_no3_112gb_hbm_dx12_vulcan_12_claims/ | On1ineAxeL | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1noru3p | false | null | t3_1noru3p | /r/LocalLLaMA/comments/1noru3p/gpu_fenghua_no3_112gb_hbm_dx12_vulcan_12_claims/ | false | false | 94 | {'enabled': False, 'images': [{'id': 'U0uchtAJMRqNemoxp-a8VWVQpNPPATSQ8bLLGYLEcHQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/U0uchtAJMRqNemoxp-a8VWVQpNPPATSQ8bLLGYLEcHQ.jpeg?width=108&crop=smart&auto=webp&s=72089feb518a20a94f10d525d3618b44ca0daa1a', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/U0uchtAJMRqNemoxp-a8VWVQpNPPATSQ8bLLGYLEcHQ.jpeg?width=216&crop=smart&auto=webp&s=eef2e525c7186d6899245a19ae8b3fb18070eb0b', 'width': 216}, {'height': 166, 'url': 'https://external-preview.redd.it/U0uchtAJMRqNemoxp-a8VWVQpNPPATSQ8bLLGYLEcHQ.jpeg?width=320&crop=smart&auto=webp&s=a565182aef128f2ae396291057332f3c9d944f69', 'width': 320}, {'height': 332, 'url': 'https://external-preview.redd.it/U0uchtAJMRqNemoxp-a8VWVQpNPPATSQ8bLLGYLEcHQ.jpeg?width=640&crop=smart&auto=webp&s=2df1ad2c345943a90fdd8666f38e455de3a4819c', 'width': 640}, {'height': 499, 'url': 'https://external-preview.redd.it/U0uchtAJMRqNemoxp-a8VWVQpNPPATSQ8bLLGYLEcHQ.jpeg?width=960&crop=smart&auto=webp&s=2d93e317a3e4d1590dbcc6880bebc9c95d5a0ee8', 'width': 960}, {'height': 561, 'url': 'https://external-preview.redd.it/U0uchtAJMRqNemoxp-a8VWVQpNPPATSQ8bLLGYLEcHQ.jpeg?width=1080&crop=smart&auto=webp&s=709fafcf900eeef3e277d98163edee11a7d1603f', 'width': 1080}], 'source': {'height': 1040, 'url': 'https://external-preview.redd.it/U0uchtAJMRqNemoxp-a8VWVQpNPPATSQ8bLLGYLEcHQ.jpeg?auto=webp&s=298d53c33ed76316fe8b4894917dbdd1f553b400', 'width': 2000}, 'variants': {}}]} | |
Qwen3-VL-235B was just added | 75 | 2025-09-23T19:57:47 | abdouhlili | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1norn0n | false | null | t3_1norn0n | /r/LocalLLaMA/comments/1norn0n/qwen3vl235b_was_just_added/ | false | false | default | 75 | {'enabled': True, 'images': [{'id': '7knujtlkzyqf1', 'resolutions': [{'height': 21, 'url': 'https://preview.redd.it/7knujtlkzyqf1.jpeg?width=108&crop=smart&auto=webp&s=68d4c5d041c85348b5d599f8a951c2120fc5278b', 'width': 108}, {'height': 42, 'url': 'https://preview.redd.it/7knujtlkzyqf1.jpeg?width=216&crop=smart&auto=webp&s=27741fa7c0ee10861ee8e3f278e21f9fd58134dd', 'width': 216}, {'height': 63, 'url': 'https://preview.redd.it/7knujtlkzyqf1.jpeg?width=320&crop=smart&auto=webp&s=51a63a8785188dc7f2404f9abaa336b79f68bcd2', 'width': 320}, {'height': 126, 'url': 'https://preview.redd.it/7knujtlkzyqf1.jpeg?width=640&crop=smart&auto=webp&s=dadfb037f53141b9800bb0c2923a35ba97588b7f', 'width': 640}], 'source': {'height': 142, 'url': 'https://preview.redd.it/7knujtlkzyqf1.jpeg?auto=webp&s=e74611ceef3ff9a37bedf44ece8e34b9dcdf2089', 'width': 716}, 'variants': {}}]} | ||
Best TTS to run on GTX 1650 apart from kokoro | 6 | I'm running kokoro FastAPI at the moment | 2025-09-23T19:53:08 | https://www.reddit.com/r/LocalLLaMA/comments/1noril6/best_tts_to_run_on_gtx_1650_apart_from_kokoro/ | therealsharad | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1noril6 | false | null | t3_1noril6 | /r/LocalLLaMA/comments/1noril6/best_tts_to_run_on_gtx_1650_apart_from_kokoro/ | false | false | self | 6 | null |
Qwen 3 max released | 513 | https://qwen.ai/blog?id=241398b9cd6353de490b0f82806c7848c5d2777d&from=research.latest-advancements-list

Following the release of the Qwen3-2507 series, we are thrilled to introduce Qwen3-Max — our largest and most capable model to date. The preview version of Qwen3-Max-Instruct currently ranks third on the Text Arena leaderboard, surpassing GPT-5-Chat. The official release further enhances performance in coding and agent capabilities, achieving state-of-the-art results across a comprehensive suite of benchmarks — including knowledge, reasoning, coding, instruction following, human preference alignment, agent tasks, and multilingual understanding. We invite you to try Qwen3-Max-Instruct via its API on Alibaba Cloud or explore it directly on Qwen Chat. Meanwhile, Qwen3-Max-Thinking — still under active training — is already demonstrating remarkable potential. When augmented with tool usage and scaled test-time compute, the Thinking variant has achieved 100% on challenging reasoning benchmarks such as AIME 25 and HMMT. We look forward to releasing it publicly in the near future. | 2025-09-23T19:40:02 | https://www.reddit.com/r/LocalLLaMA/comments/1nor65d/qwen_3_max_released/ | clem844 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nor65d | false | null | t3_1nor65d | /r/LocalLLaMA/comments/1nor65d/qwen_3_max_released/ | false | false | self | 513 | null
Seeking Local LLM Recommendations for AST Generation (by Function Calling) | 7 | > Looking for Local LLM recommendations that can generate complex AST structures through function calling. This is an area that shows different performance patterns from existing programming benchmarks, so looking for models that can be actually tested.
## Our Approach
We're developing AutoBE, an open-source project that automatically generates backend applications.
AutoBE's core principle differs from typical AI code generation. Instead of having the AI write backend source code as free-form text, we have it generate an AST (Abstract Syntax Tree) - the compiler's structured representation - through function calling. When the AI produces invalid AST data, we validate it logically and feed the errors back; once the AST is valid, we compile it into a backend application.
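In a minimal form, the loop looks something like this (a sketch, not our actual implementation: `jsonschema` stands in for the logical validator, and the model id and schema file are placeholders; a production loop would also echo the assistant's tool-call message back into the history):

```python
# Sketch: generate an AST via forced function calling, validate it,
# and feed validation errors back to the model until it passes.
import json
from jsonschema import Draft202012Validator
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")
AST_SCHEMA = json.load(open("ast_schema.json"))  # the function-calling schema
validator = Draft202012Validator(AST_SCHEMA)

def generate_ast(task: str, max_rounds: int = 5) -> dict:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_rounds):
        resp = client.chat.completions.create(
            model="qwen3-next-80b-a3b",  # placeholder model id
            messages=messages,
            tools=[{"type": "function",
                    "function": {"name": "emit_ast", "parameters": AST_SCHEMA}}],
            tool_choice={"type": "function", "function": {"name": "emit_ast"}},
        )
        ast = json.loads(resp.choices[0].message.tool_calls[0].function.arguments)
        errors = [e.message for e in validator.iter_errors(ast)]
        if not errors:
            return ast  # valid: hand off to the compiler
        messages.append({"role": "user",
                         "content": "Validation errors:\n" + "\n".join(errors)})
    raise RuntimeError("model never produced a valid AST")
```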
The AST structures we use are quite complex. Below are examples of AutoBE's AST structure - as you can see, countless elements are intertwined through union types and tree structures.
- [`AutoBePrisma.IApplication`](https://github.com/wrtnlabs/autobe/blob/main/packages/interface/src/prisma/AutoBePrisma.ts) ([function calling schema](https://typia.io/playground/?script=JYWwDg9gTgLgBAbzgQQK4wgIQKYAUrADOIAhnAL5wBmUEIcARAAInoQBG2A9MAHYzYoVEgGNsDANwAoUJFiI4ASQAyAGxDIwYVcBEkYwCLwrVa9ZoRIgRACyNcIYbLxJhgkmeGjwYATzdkNHSMfgEeUiJGhPCuYABcSmoaWjp6BkYAPAy2+gDmYDAMAHxwALxwocAkAHSq6tWxqfqGvBlScIjtHXAA7gQCALIQACbYqgAU8QogI2MJaBg4+ESk1YpDo6oUAJQJAG4QwMPSHeQANF3ZNnkFDFJF49vSkbyEEKrYtRC547FPUkA))
- [`AutoBeOpenApi.IDocument`](https://github.com/wrtnlabs/autobe/blob/main/packages/interface/src/openapi/AutoBeOpenApi.ts) ([function calling schema](https://typia.io/playground/?script=JYWwDg9gTgLgBAbzgQQK4wgIQKYHkzYB2yYwcAvnAGZQQhwBEAAgIboQBG2A9MITNihUWAY2wMA3AChQkWIjgBJADIAbECTCrgIljGAQucUhSq06BBiEEALfewhhsXBmGBjJ4aPBgBPFw3KU1DQ+fm7igvpEcM5gAFzyyqrqmtq6+gA8NNY6AOZgMDQAfHAAvHAhwAwAdEoqVTEpOnpc6eJwiG3tcADuUMC8APIOUE36ABRxsmSoXIJpXPFoGDj4RFVyAGIzc82GAJTxAG4QwAAmEu3EADSdWVa5+TTihWN7EhFcBBBK2DUQOWMYm9xEA))
- [`AutoBeTest.IFunction`](https://github.com/wrtnlabs/autobe/blob/main/packages/interface/src/test/AutoBeTest.ts) ([function calling schema](https://typia.io/playground/?script=JYWwDg9gTgLgBAbzgQQK4wgIQKYBVsDO8AvnAGZQQhwBEAAgIboQBG2A9MAHYzZRkMAxthoBuAFChIsRHACSAGQA2IZGDBLgghjGAQucUhSq06BBiEEALfewhhsXBmGBjJ4aPBgBPFw3KU1DQ+fm7igvpEcM5gAFzyyqrqmtq6+gA8NNY6AOZgMDQAfHAAvHAhwAwAdEoqVTEpOnpc6eJwiG3tcADuUMC8APIOUE36ABRxsmSoXIJpXPFoGDj4RFVyAGIzc82GAJTxAG4QwAAmEu3EADSdWVa5+TTihWN7EhFcBBBK2DUQOWMYm9xEA))
```typescript
export namespace AutoBeOpenApi {
export type IJsonSchema =
| IJsonSchema.IConstant
| IJsonSchema.IBoolean
| IJsonSchema.IInteger
| IJsonSchema.INumber
| IJsonSchema.IString
| IJsonSchema.IArray
| IJsonSchema.IObject
| IJsonSchema.IReference
| IJsonSchema.IOneOf
| IJsonSchema.INull;
export namespace IJsonSchema {
export interface IObject {
type: 'object';
properties: Record<string, IJsonSchema>;
required: string[];
additionalProperties?: boolean | IJsonSchema;
description?: string;
}
}
}
export namespace AutoBeTest {
export type IExpression =
// LITERALS
| IBooleanLiteral
| INumericLiteral
| IStringLiteral
| IArrayLiteralExpression
| IObjectLiteralExpression
| INullLiteral
| IUndefinedKeyword
// ACCESSORS
| IIdentifier
| IPropertyAccessExpression
| IElementAccessExpression
// OPERATORS
| ITypeOfExpression
| IPrefixUnaryExpression
| IPostfixUnaryExpression
| IBinaryExpression
// FUNCTIONAL
| IArrowFunction
| ICallExpression
| INewExpression
| IArrayFilterExpression
| IArrayForEachExpression
| IArrayMapExpression
| IArrayRepeatExpression
// RANDOM GENERATORS
| IPickRandom
| ISampleRandom
| IBooleanRandom
| IIntegerRandom
| INumberRandom
| IStringRandom
| IPatternRandom
| IFormatRandom
| IKeywordRandom
// PREDICATORS
| IEqualPredicate
| INotEqualPredicate
| IConditionalPredicate
| IErrorPredicate;
export interface IElementAccessExpression {
type: "elementAccessExpression";
expression: IExpression;
questionDot?: boolean;
argumentExpression: IExpression;
}
}
```
## Why This Matters for AI Model Performance
Because AutoBE depends heavily on a model's function-calling capabilities, general programming ability and benchmark rankings often translate into completely different results in AutoBE.
In practice, `openai/gpt-4.1` and `openai/gpt-4.1-mini` actually build backend applications better than `openai/gpt-5` in AutoBE. The `qwen3-next-80b-a3b` model handles DTO types (`AutoBeOpenApi.IJsonSchema`) very well, while `qwen3-coder` (480b), despite having far more parameters, fails completely at DTO type generation (0% success rate). This shows patterns completely different from typical AI benchmarks.
## Our Benchmarking Initiative
Based on this, our AutoBE team conducts ongoing benchmark tests on AI models using the AutoBE project and plans to publish these regularly as reports.
However, AutoBE has been developed and optimized targeting `openai/gpt-4.1` and `openai/gpt-4.1-mini`, and we've only recently begun introducing and testing Local LLMs like `qwen3-235b-a22b` and `qwen3-next-80b-a3b`.
Therefore, aside from Qwen3, we don't yet know which other models can effectively produce complex structures like ASTs through function calling or structured output. We want to receive recommendations for various local LLMs from this community, experiment with and validate them in AutoBE, and publish the results as benchmark reports.
Thank you for reading this long post, and we appreciate your model recommendations. | 2025-09-23T19:32:15 | jhnam88 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1noqyx0 | false | null | t3_1noqyx0 | /r/LocalLLaMA/comments/1noqyx0/seeking_local_llm_recommendations_for_ast/ | false | false | default | 7 | {'enabled': True, 'images': [{'id': 'bl6tugbzuyqf1', 'resolutions': [{'height': 55, 'url': 'https://preview.redd.it/bl6tugbzuyqf1.png?width=108&crop=smart&auto=webp&s=d9c2d5a68a782bade2c0c76c526e1da85457f63a', 'width': 108}, {'height': 110, 'url': 'https://preview.redd.it/bl6tugbzuyqf1.png?width=216&crop=smart&auto=webp&s=b6d5bdaed3bdbddb59e4129a55d5658c35faa3e4', 'width': 216}, {'height': 164, 'url': 'https://preview.redd.it/bl6tugbzuyqf1.png?width=320&crop=smart&auto=webp&s=a49db8bd697dafeb22eabdd09df2b3c31e2f1791', 'width': 320}, {'height': 328, 'url': 'https://preview.redd.it/bl6tugbzuyqf1.png?width=640&crop=smart&auto=webp&s=d0bab564c30a8f86fe8d41fd338cc25e77566194', 'width': 640}], 'source': {'height': 408, 'url': 'https://preview.redd.it/bl6tugbzuyqf1.png?auto=webp&s=1cf94ca8f4c0a222a35f79ecc8f7840688308d1d', 'width': 796}, 'variants': {}}]} | |
Google | 1 | [deleted] | 2025-09-23T19:21:44 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1noqp40 | false | null | t3_1noqp40 | /r/LocalLLaMA/comments/1noqp40/google/ | false | false | default | 1 | null | ||
Why can’t we cancel the coding plan subscription on z.ai yet? | 22 | 2025-09-23T19:14:43 | https://www.reddit.com/r/LocalLLaMA/comments/1noqifv/why_cant_we_cancel_the_coding_plan_subscription/ | thestreamcode | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1noqifv | false | null | t3_1noqifv | /r/LocalLLaMA/comments/1noqifv/why_cant_we_cancel_the_coding_plan_subscription/ | false | false | 22 | null | ||
3 api models probably means they're at least pretty powerful, I don't really like GPT or Claude that much so qwen could very well outshine it and push the frontier :) | 0 | 2025-09-23T19:05:30 | Longjumping_Spot5843 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1noq9r3 | false | null | t3_1noq9r3 | /r/LocalLLaMA/comments/1noq9r3/3_api_models_probably_means_theyre_at_least/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'gl32n0z0qyqf1', 'resolutions': [{'height': 37, 'url': 'https://preview.redd.it/gl32n0z0qyqf1.png?width=108&crop=smart&auto=webp&s=9249b4ba9e1d3c571208d852129627aac83eeab2', 'width': 108}, {'height': 74, 'url': 'https://preview.redd.it/gl32n0z0qyqf1.png?width=216&crop=smart&auto=webp&s=8ff3fd5ade48e6dbc446344ec72b5c01091bf310', 'width': 216}, {'height': 109, 'url': 'https://preview.redd.it/gl32n0z0qyqf1.png?width=320&crop=smart&auto=webp&s=6b4446793da38e41e4fe07a72e3ce242944d5342', 'width': 320}, {'height': 219, 'url': 'https://preview.redd.it/gl32n0z0qyqf1.png?width=640&crop=smart&auto=webp&s=b4830f533eab8111530607af15f33d8905b6179d', 'width': 640}, {'height': 328, 'url': 'https://preview.redd.it/gl32n0z0qyqf1.png?width=960&crop=smart&auto=webp&s=75f2cd00647f716c2a145e072217b99261890d01', 'width': 960}, {'height': 370, 'url': 'https://preview.redd.it/gl32n0z0qyqf1.png?width=1080&crop=smart&auto=webp&s=02891d0af08e23aacb634c830b13c7f9c3b719fd', 'width': 1080}], 'source': {'height': 370, 'url': 'https://preview.redd.it/gl32n0z0qyqf1.png?auto=webp&s=ccf170f8cee3b2ae466c04a9cf88e610c3cb40f4', 'width': 1080}, 'variants': {}}]} | ||
How accurate is PrivateGPT? | 1 | **Hello,**
I'm interested in using PrivateGPT to conduct research across a large collection of documents. I’d like to know how accurate it is in practice. Has anyone here used it before and can share their experience?
Thanks in advance! | 2025-09-23T19:04:59 | https://www.reddit.com/r/LocalLLaMA/comments/1noq990/how_accurate_is_privategpt/ | Ok-Macaroon9817 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1noq990 | false | null | t3_1noq990 | /r/LocalLLaMA/comments/1noq990/how_accurate_is_privategpt/ | false | false | self | 1 | null |
Thinking about Qwen.. | 0 | I think the reason Qwen (Alibaba) is speedrunning AI development is to stay ahead of the inevitable Nvidia ban by their government. | 2025-09-23T18:49:51 | https://www.reddit.com/r/LocalLLaMA/comments/1nopuq3/thinking_about_qwen/ | No_Conversation9561 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nopuq3 | false | null | t3_1nopuq3 | /r/LocalLLaMA/comments/1nopuq3/thinking_about_qwen/ | false | false | self | 0 | null
Local speech to speech conversation ai? | 5 | You know how you can talk back and forth with something like ChatGPT through an interface using your voice? Well, is there something like that that is free, unlimited, and possibly local? I want to see what this type of AI can do, and I've seen some cool use cases online. | 2025-09-23T18:47:16 | https://www.reddit.com/r/LocalLLaMA/comments/1nops7m/local_speech_to_speech_conversation_ai/ | No_Strawberry_8719 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nops7m | false | null | t3_1nops7m | /r/LocalLLaMA/comments/1nops7m/local_speech_to_speech_conversation_ai/ | false | false | self | 5 | null
Alpie-Core: A 4-Bit Quantized Reasoning Model that Outperforms Full-Precision Models | 8 | Hey everyone, I’m part of the team at 169Pi, and I wanted to share something we’ve been building for the past few months.
We just released **Alpie Core, a 32B parameter, 4-bit quantized reasoning model.** It’s one of the first large-scale 4-bit reasoning models from India (and globally). Our goal wasn’t to chase trillion-parameter scaling, but instead to prove that efficiency + reasoning can coexist.
**Why this matters:**
1. ~75% lower VRAM usage vs FP16 → runs on much more accessible hardware
2. Strong performance + lower carbon + cost footprint
3. Released under Apache 2.0 license (fully open to contributions)
**Benchmarks (4-bit):**
- **GSM8K: 92.8%** (mathematical reasoning)

- **SciQ: 98%** (scientific reasoning)

- **SWE-Bench Verified: 57.8%** (software engineering, leading score)

- **BBH: 85.1%** (outperforming GPT-4o, Claude 3.5, Qwen2.5)

- **AIME: 47.3%** (strong performance on advanced mathematics)

- **Humanity’s Last Exam (HLE):** (matching Claude 4, beating DeepSeek V3, Llama 4 Maverick)
The model is live now on Hugging Face: [https://huggingface.co/169Pi/Alpie-Core](https://huggingface.co/169Pi/Alpie-Core)
We also released 6 high-quality curated datasets on HF (~2B tokens) across STEM, Indic reasoning, law, psychology, coding, and advanced math to support reproducibility & community research.
We’ll also have an API & Playground dropping very soon, and our AI platform Alpie goes live this week, so you can try it in real workflows.
We’d love feedback, contributions, and even critiques from this community; the idea is to build in the open and hopefully create something useful for researchers, devs, and organisations worldwide.
Happy to answer any questions!
https://reddit.com/link/1nopqf9/video/15smx16jmyqf1/player
| 2025-09-23T18:45:27 | https://www.reddit.com/r/LocalLLaMA/comments/1nopqf9/alpiecore_a_4bit_quantized_reasoning_model_that/ | BlockLight2207 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nopqf9 | false | null | t3_1nopqf9 | /r/LocalLLaMA/comments/1nopqf9/alpiecore_a_4bit_quantized_reasoning_model_that/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'qIMPpq1YvjlfOIpsK6C_3aIsgmMZaE_mrS8dZNWoz4k', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/qIMPpq1YvjlfOIpsK6C_3aIsgmMZaE_mrS8dZNWoz4k.png?width=108&crop=smart&auto=webp&s=659c417009a8cdfd496736703978827b43cb3419', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/qIMPpq1YvjlfOIpsK6C_3aIsgmMZaE_mrS8dZNWoz4k.png?width=216&crop=smart&auto=webp&s=37c0752dd687e2b8c6dd7bdc06a84834faa474f2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/qIMPpq1YvjlfOIpsK6C_3aIsgmMZaE_mrS8dZNWoz4k.png?width=320&crop=smart&auto=webp&s=d51336faaa0440006a3dc2189160be45365a41fd', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/qIMPpq1YvjlfOIpsK6C_3aIsgmMZaE_mrS8dZNWoz4k.png?width=640&crop=smart&auto=webp&s=09912e16bd4a468440b11a9f444ba3224e891706', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/qIMPpq1YvjlfOIpsK6C_3aIsgmMZaE_mrS8dZNWoz4k.png?width=960&crop=smart&auto=webp&s=ff6493312c21f895cf83bae568cbde48fc8667d2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/qIMPpq1YvjlfOIpsK6C_3aIsgmMZaE_mrS8dZNWoz4k.png?width=1080&crop=smart&auto=webp&s=33fcfbb5905856e94b623fa8f1bd2fc303eb7730', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/qIMPpq1YvjlfOIpsK6C_3aIsgmMZaE_mrS8dZNWoz4k.png?auto=webp&s=11b4d9e7a701fdf74a8af92b676f1318e6346601', 'width': 1200}, 'variants': {}}]} |
Magistral-Small Results in My Personal LLM Benchmark | 26 | # Introduction
A few days ago, I posted a thread discussing how surprised I was by the result of Magistral-small in a small personal benchmark I use to evaluate some LLMs I test. Due to the positive reception of the post, I've decided to create a couple of graphs showing some results.
# What does it consist of?
The benchmark is based on a well-known TV show in Spain called "Pasapalabra." The show works as follows: the alphabet is presented on a circular board (the "rosco"), and for each letter, starting with "A", the contestant is asked a question, on any topic, whose answer begins with that letter. Answering correctly scores points, answering incorrectly is penalized, and the contestant can instead pass to the next word. The thing is, a football (soccer) YouTube channel I follow created several challenges emulating this TV show, but with a solely football-themed focus. The questions are generally historical in nature, such as player dates, obscure team names, stadium references, or obscure rules.
In this case, I have 104 questions, corresponding to 4 rounds (roscos) of 26 letters each. I gave all the LLMs the option to pass to the next word if they were unsure of the answer or had serious doubts, instead of risking an incorrect response.
# Results
I've created two graphs, one of which shows the hit rate, pass rate, and failure rate for each LLM. The second one shows a scoring system where the LLM earns 3 points for each correct answer, 1 point for passing, and loses 1 point for each incorrect answer. All models are in thinking mode except Kimi K2, which obviously lacks this mode, yet curiously delivers some of the best results. The LLMs with over 200 billion parameters all achieved high scores, but Magistral still surprises me, as although it failed more questions than these larger models, when combining hit and pass rates, it performs quite comparably. It's also worth noting that in 70% of the instances where Magistral passed on a word, upon reviewing its thought process, I realized it actually knew the answer but deviated at the last moment—perhaps with better prompt tuning, the results could be even better. GLM-4.5 Air also performs reasonably well, while Qwen-30B-A3B gives a worse result, and Qwen-4B performs even more poorly. Additionally, Magistral is a dense model, which I believe may also contribute to its precision.
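For transparency, the scoring rule as code (a trivial sketch with made-up example numbers):

```python
# 3 points per hit, +1 per pass, -1 per fail.
def score(hits: int, passes: int, fails: int) -> int:
    return 3 * hits + passes - fails

# e.g. a 26-letter rosco with 18 hits, 5 passes, and 3 fails:
print(score(18, 5, 3))  # 56
```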
*I'm a novice in all of this, so I welcome suggestions and criticism.*
https://preview.redd.it/3ttlbpf2lyqf1.jpg?width=1000&format=pjpg&auto=webp&s=f71c941a3edbff06009432725c4375106a64520f
https://preview.redd.it/1ydhhof2lyqf1.jpg?width=1000&format=pjpg&auto=webp&s=7331e34a2d56f023f815f5f865e2a3a0b9afeb37
| 2025-09-23T18:38:26 | https://www.reddit.com/r/LocalLLaMA/comments/1nopjmx/magistralsmall_results_in_my_personal_llm/ | Different_File6723 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nopjmx | false | null | t3_1nopjmx | /r/LocalLLaMA/comments/1nopjmx/magistralsmall_results_in_my_personal_llm/ | false | false | 26 | null | |
Deploying ML Models with Kubernetes | 1 | One of the biggest bottlenecks I’ve seen in ML projects isn’t training the model; it’s getting it into production reliably. You train locally, tweak dependencies, then suddenly nothing runs the same way on staging or prod.
I recently tried out **KitOps**, a CNCF project that introduces something called *ModelKits*. Think of them as “Docker images for ML models”: a single, versioned artifact that contains your model weights, code, configs, and metadata. You can tag them, push them to a registry, roll them back, and even sign them with Cosign. No more mismatched file structures or missing `.env` files.
The workflow I tested looked like this:
1. Fine-tune a small model (I used FLAN-T5 with a tiny spam/ham dataset).
2. Wrap the weights + inference code + Kitfile into a ModelKit using the Kit CLI.
3. Push the ModelKit to **Jozu Hub** (an OCI-style registry built for ModelKits).
4. Deploy to Kubernetes with a ready-to-go YAML manifest that Jozu generates.
The init-container pattern in Kubernetes pulls your exact ModelKit into a shared volume, so the main container can just boot up, load the model, and serve requests. That makes it super consistent whether you’re running Minikube on your laptop or scaling replicas on EKS.
What stood out to me:
* **Versioning** actually works. ModelKits live in your registry with tags just like Docker images.
* **Reproducibility** is built-in since the Kitfile pins data checksums and runtime commands.
* **Collaboration** is smoother. Data scientists, backend devs, and SREs all run the same artifact without fiddling with paths.
* **Cloud agnostic,** the same ModelKit runs locally or on any Kubernetes cluster.
I wrote up a full walkthrough (including the FastAPI server, Kitfile setup, packaging, and Kubernetes manifests) [here](https://jozu.com/blog/scalable-ml-deployments-made-simple-with-kitops-and-kubernetes-no-hardware-required/).
Would love feedback from folks who’ve faced issues with ML deployments, does this approach look like it could simplify your workflow, or do you think it adds another layer of tooling to maintain? | 2025-09-23T18:35:56 | https://www.reddit.com/r/LocalLLaMA/comments/1nopham/deploying_ml_models_with_kubernetes/ | Arindam_200 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nopham | false | null | t3_1nopham | /r/LocalLLaMA/comments/1nopham/deploying_ml_models_with_kubernetes/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'UyEiqp0HQUHH7Dpqr6wpvR8ZI5Eei5-jan9wiCLcl3s', 'resolutions': [{'height': 51, 'url': 'https://external-preview.redd.it/UyEiqp0HQUHH7Dpqr6wpvR8ZI5Eei5-jan9wiCLcl3s.png?width=108&crop=smart&auto=webp&s=3b5a9e84cfe7d73b21a472e513fe5b770b549903', 'width': 108}, {'height': 102, 'url': 'https://external-preview.redd.it/UyEiqp0HQUHH7Dpqr6wpvR8ZI5Eei5-jan9wiCLcl3s.png?width=216&crop=smart&auto=webp&s=d46919d7d93c9f20015de8de318508204c3dda86', 'width': 216}, {'height': 152, 'url': 'https://external-preview.redd.it/UyEiqp0HQUHH7Dpqr6wpvR8ZI5Eei5-jan9wiCLcl3s.png?width=320&crop=smart&auto=webp&s=075273d5bcdd3878aeea03fce8d491ba30230335', 'width': 320}, {'height': 304, 'url': 'https://external-preview.redd.it/UyEiqp0HQUHH7Dpqr6wpvR8ZI5Eei5-jan9wiCLcl3s.png?width=640&crop=smart&auto=webp&s=bb60008253f90845563c4c0d8aa1894ddd111bb0', 'width': 640}, {'height': 457, 'url': 'https://external-preview.redd.it/UyEiqp0HQUHH7Dpqr6wpvR8ZI5Eei5-jan9wiCLcl3s.png?width=960&crop=smart&auto=webp&s=c27f971e3838827b211973fd734526dc59e8cdbd', 'width': 960}, {'height': 514, 'url': 'https://external-preview.redd.it/UyEiqp0HQUHH7Dpqr6wpvR8ZI5Eei5-jan9wiCLcl3s.png?width=1080&crop=smart&auto=webp&s=116bc1eca14020bd96c53b13bc188c21f3623b09', 'width': 1080}], 'source': {'height': 942, 'url': 'https://external-preview.redd.it/UyEiqp0HQUHH7Dpqr6wpvR8ZI5Eei5-jan9wiCLcl3s.png?auto=webp&s=ce53497a23914627d4447f123077ddd72b327f30', 'width': 1978}, 'variants': {}}]} |
Huawei Plans Three-Year Campaign to Overtake Nvidia in AI Chips | 200 | 2025-09-23T18:31:10 | https://finance.yahoo.com/news/huawei-plans-three-campaign-overtake-052622404.html | fallingdowndizzyvr | finance.yahoo.com | 1970-01-01T00:00:00 | 0 | {} | 1nopcry | false | null | t3_1nopcry | /r/LocalLLaMA/comments/1nopcry/huawei_plans_threeyear_campaign_to_overtake/ | false | false | default | 200 | {'enabled': False, 'images': [{'id': 'RG_Drphb5Z3LeMOBYSjdaBWtUZI-aLmpgWBBRijj4mk', 'resolutions': [{'height': 85, 'url': 'https://external-preview.redd.it/RG_Drphb5Z3LeMOBYSjdaBWtUZI-aLmpgWBBRijj4mk.jpeg?width=108&crop=smart&auto=webp&s=5e1c6e5abfeab61144501fd6803c4ab3a6172209', 'width': 108}, {'height': 171, 'url': 'https://external-preview.redd.it/RG_Drphb5Z3LeMOBYSjdaBWtUZI-aLmpgWBBRijj4mk.jpeg?width=216&crop=smart&auto=webp&s=d20273470e439680a5ef786f4f9089bf7cdb53b4', 'width': 216}, {'height': 253, 'url': 'https://external-preview.redd.it/RG_Drphb5Z3LeMOBYSjdaBWtUZI-aLmpgWBBRijj4mk.jpeg?width=320&crop=smart&auto=webp&s=ff604de58e42d28be9f78c12c65b4c486fc060be', 'width': 320}, {'height': 507, 'url': 'https://external-preview.redd.it/RG_Drphb5Z3LeMOBYSjdaBWtUZI-aLmpgWBBRijj4mk.jpeg?width=640&crop=smart&auto=webp&s=0d14901df55c903a2634eb230416e5ea6c1bf81c', 'width': 640}, {'height': 761, 'url': 'https://external-preview.redd.it/RG_Drphb5Z3LeMOBYSjdaBWtUZI-aLmpgWBBRijj4mk.jpeg?width=960&crop=smart&auto=webp&s=0203bf4203d51a99efca264f128b10e04329e67c', 'width': 960}, {'height': 856, 'url': 'https://external-preview.redd.it/RG_Drphb5Z3LeMOBYSjdaBWtUZI-aLmpgWBBRijj4mk.jpeg?width=1080&crop=smart&auto=webp&s=2138cac7c2dc5524d89ac46157f4c625ee99ab5d', 'width': 1080}], 'source': {'height': 952, 'url': 'https://external-preview.redd.it/RG_Drphb5Z3LeMOBYSjdaBWtUZI-aLmpgWBBRijj4mk.jpeg?auto=webp&s=20373de69268feafc9bbed55ab45d08acab87b14', 'width': 1200}, 'variants': {}}]} | |
In the future, could we potentially see high level AI running on small hardware? | 0 | My dog is stinky | 2025-09-23T18:16:24 | https://www.reddit.com/r/LocalLLaMA/comments/1nooyoe/in_the_future_could_we_potentially_see_high_level/ | Civil_Opposite7103 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nooyoe | false | null | t3_1nooyoe | /r/LocalLLaMA/comments/1nooyoe/in_the_future_could_we_potentially_see_high_level/ | false | false | self | 0 | null |
I wonder if the same mod would be possible for Mac Studios with 64GB RAM as people are doing with 4090s. | 0 | M1 Mac Studios are locked at 64 GB. People have upgraded the storage on MacBooks, and I wonder if it would be possible to mod them to add more unified memory. | 2025-09-23T18:05:39 | https://www.reddit.com/r/LocalLLaMA/comments/1nooogo/i_wonder_if_same_mod_would_be_possible_for_mac/ | NoFudge4700 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nooogo | false | null | t3_1nooogo | /r/LocalLLaMA/comments/1nooogo/i_wonder_if_same_mod_would_be_possible_for_mac/ | false | false | self | 0 | null
Has anyone tried the new oss qwen models? (If they're even out) | 1 | [removed] | 2025-09-23T17:41:07 | https://www.reddit.com/r/LocalLLaMA/comments/1noo0lc/has_anyone_tried_the_new_oss_qwen_models_if/ | ArtisticKey4324 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1noo0lc | false | null | t3_1noo0lc | /r/LocalLLaMA/comments/1noo0lc/has_anyone_tried_the_new_oss_qwen_models_if/ | false | false | self | 1 | null |
GenExam: A Multidisciplinary Text-to-Image Exam | 1 | [removed] | 2025-09-23T17:33:16 | https://www.reddit.com/r/LocalLLaMA/comments/1nont7z/genexam_a_multidisciplinary_texttoimage_exam/ | Medical_Sweet_3641 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nont7z | false | null | t3_1nont7z | /r/LocalLLaMA/comments/1nont7z/genexam_a_multidisciplinary_texttoimage_exam/ | false | false | self | 1 | null |
Xet powers 5M models and datasets on Hugging Face | 55 | 2025-09-23T17:32:52 | clem59480 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nonsvg | false | null | t3_1nonsvg | /r/LocalLLaMA/comments/1nonsvg/xet_powers_5m_models_and_datasets_on_hugging_face/ | false | false | default | 55 | {'enabled': True, 'images': [{'id': '8nzs9ffk9yqf1', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/8nzs9ffk9yqf1.png?width=108&crop=smart&auto=webp&s=edaa1aa346c09a3ecbc6c8f5899e24442b3908da', 'width': 108}, {'height': 289, 'url': 'https://preview.redd.it/8nzs9ffk9yqf1.png?width=216&crop=smart&auto=webp&s=11afd18a2854b527fa13159a049c0c15e3183a45', 'width': 216}, {'height': 428, 'url': 'https://preview.redd.it/8nzs9ffk9yqf1.png?width=320&crop=smart&auto=webp&s=4606747719ffb7b13405a31a09e5a985fb736313', 'width': 320}, {'height': 857, 'url': 'https://preview.redd.it/8nzs9ffk9yqf1.png?width=640&crop=smart&auto=webp&s=900d49ab7de15060ca933d082069e2b24385301e', 'width': 640}], 'source': {'height': 1066, 'url': 'https://preview.redd.it/8nzs9ffk9yqf1.png?auto=webp&s=cb9ab86d83ba8eb2f230c02cc45ac840373c077b', 'width': 796}, 'variants': {}}]} | ||
GenExam: A Multidisciplinary Text-to-Image Exam | 1 | [removed] | 2025-09-23T17:31:33 | https://www.reddit.com/r/LocalLLaMA/comments/1nonroo/genexam_a_multidisciplinary_texttoimage_exam/ | Medical_Sweet_3641 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nonroo | false | null | t3_1nonroo | /r/LocalLLaMA/comments/1nonroo/genexam_a_multidisciplinary_texttoimage_exam/ | false | false | self | 1 | null |
MediaTek claims 1.58-bit BitNet support with Dimensity 9500 SoC | 42 | > Integrating the ninth-generation MediaTek NPU 990 with Generative AI Engine 2.0 doubles compute power and introduces BitNet 1.58-bit large model processing, reducing power consumption by up to 33%. Doubling its integer and floating-point computing capabilities, users benefit from 100% faster 3 billion parameter LLM output, 128K token long text processing, and the industry’s first 4k ultra-high-definition image generation; all while slashing power consumption at peak performance by 56%.
Anyone have any idea which model(s) they could have tested this on? | 2025-09-23T17:14:49 | https://www.mediatek.com/press-room/mediatek-dimensity-9500-unleashes-best-in-class-performance-ai-experiences-and-power-efficiency-for-the-next-generation-of-mobile-devices | Balance- | mediatek.com | 1970-01-01T00:00:00 | 0 | {} | 1nonbug | false | null | t3_1nonbug | /r/LocalLLaMA/comments/1nonbug/mediatek_claims_158bit_bitnet_support_with/ | false | false | default | 42 | null |
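For context, "1.58-bit" BitNet means ternary weights in {-1, 0, +1}. A minimal PyTorch sketch of the absmean ternary quantizer from the BitNet b1.58 paper (illustrative only, not MediaTek's NPU implementation):

```python
# Absmean ternary quantization (BitNet b1.58): weights -> {-1, 0, +1}
# plus one per-tensor scale. Sketch only; not MediaTek's NPU kernel.
import torch

def quantize_ternary(w: torch.Tensor, eps: float = 1e-5):
    scale = w.abs().mean().clamp(min=eps)    # per-tensor absmean scale
    w_q = (w / scale).round().clamp(-1, 1)   # ternary weight values
    return w_q, scale                        # dequantize as w_q * scale

w = torch.randn(4, 4)
w_q, s = quantize_ternary(w)
print(w_q, s)
```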
Want to discuss basic AI and how it would help in research | 5 |
I'm a resident in general surgery, and I'm interested in doing research on AI in surgery in any capacity. But I lack a basic understanding of how AI works and how I can apply it, especially in the field of surgical medicine (which, from what I've heard, is much harder to integrate than diagnostic/non-operative medicine). I just want to chat, discuss, and learn about AI and how I can integrate it: what expectations I should have, how to train AI based on my goals, and what its current requirements and limits are. If anyone is themselves interested in this, I wouldn't mind collaborating to provide adequate data for anything they have in mind, as I work in a high-volume centre.
If you can point me to certain sites or other subreddits better suited to my question, it would be much appreciated.
If you have any doubts or need clarification on what I’m actually looking for, feel free to ask, as I feel I haven’t articulated my own thoughts properly. | 2025-09-23T17:07:59 | https://www.reddit.com/r/LocalLLaMA/comments/1non5ha/want_to_discuss_basic_ai_and_how_it_would_help_in/ | Kurosaki_Minato | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1non5ha | false | null | t3_1non5ha | /r/LocalLLaMA/comments/1non5ha/want_to_discuss_basic_ai_and_how_it_would_help_in/ | false | false | self | 5 | null |
Lessons from building a multi-agent coding system: orchestration > single-agent setups | 0 | Most coding assistants today use a single LLM agent. That’s fine for smaller tasks, but we kept running into reliability issues when trying to handle feature-level or production-grade workflows. So we tried something different: orchestrating multiple sub-agents with distinct roles — Plan → Code → Verify — and running them in parallel when possible. A few takeaways:
* **Planning before coding**: Explicit task decomposition upfront reduced rework and improved alignment with requirements.
* **Parallel execution**: Multiple agents can run simultaneously in isolated worktrees, which prevents collisions and speeds up delivery.
* **Transparency**: We added diff-based summaries and test results at each step so changes were explainable instead of black-box.
In practice, this plan-first approach has been noticeably more robust than any single-agent setup we've tested. We built these experiments into [Verdent](https://www.verdent.ai/), an agentic coding suite (available as a VS Code extension and desktop app); a toy sketch of the loop appears below. But more importantly, I'd love to hear from this community:
* Have you tried orchestration or multi-agent designs in your own projects?
* What worked (or failed) for you? | 2025-09-23T17:02:43 | https://www.reddit.com/r/LocalLLaMA/comments/1non0fw/lessons_from_building_a_multiagent_coding_system/ | Pitiful_Guess7262 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1non0fw | false | null | t3_1non0fw | /r/LocalLLaMA/comments/1non0fw/lessons_from_building_a_multiagent_coding_system/ | false | false | self | 0 | null |
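For concreteness, here is a toy sketch of the plan → code → verify loop described in the post. The role names, prompts, and injected `llm`/`run_tests` callables are illustrative assumptions, not Verdent's actual implementation:

```python
# Toy plan -> code -> verify loop; roles and prompts are illustrative,
# not Verdent's actual implementation.
def run_task(task, llm, run_tests, max_rounds=3):
    plan = llm(role="planner", prompt=f"Decompose into verifiable steps:\n{task}")
    diff = llm(role="coder", prompt=f"Implement this plan as a diff:\n{plan}")
    for _ in range(max_rounds):
        passed, log = run_tests(diff)  # in the real thing: isolated git worktree
        if passed:
            return diff, log           # diff + test log for explainability
        diff = llm(role="coder",
                   prompt=f"Tests failed:\n{log}\nRevise the diff:\n{diff}")
    raise RuntimeError("verification failed after max rounds")
```

The design point is that verification failures feed back into the coder role instead of surfacing straight to the user.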
Leaderboards & Benchmarks | 139 | Many leaderboards are not up to date; recent models are missing. Does anyone know what happened to GPU Poor LLM Arena? I check LiveBench, Dubesor, and EQ-Bench often. I like these boards because they include more small and medium-size models (typical boards usually stop at 30B at the bottom, with only a few small models). For my laptop config (8GB VRAM & 32GB RAM), I need 1-35B models. Dubesor's benchmark comes with quant size too, which is convenient & nice.
It's really heavy & consistent work to keep things up to date so big kudos to all leaderboards. What leaderboards do you check usually? | 2025-09-23T16:53:47 | pmttyji | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nomrj7 | false | null | t3_1nomrj7 | /r/LocalLLaMA/comments/1nomrj7/leaderboards_benchmarks/ | false | false | default | 139 | {'enabled': True, 'images': [{'id': 'n79ymm450yqf1', 'resolutions': [{'height': 90, 'url': 'https://preview.redd.it/n79ymm450yqf1.jpeg?width=108&crop=smart&auto=webp&s=c7b3b4663aec85560ead99d77505c91ba87ec40c', 'width': 108}, {'height': 180, 'url': 'https://preview.redd.it/n79ymm450yqf1.jpeg?width=216&crop=smart&auto=webp&s=226b35cfc12b9e76df043ca9553a42eec56c35d6', 'width': 216}, {'height': 267, 'url': 'https://preview.redd.it/n79ymm450yqf1.jpeg?width=320&crop=smart&auto=webp&s=366296e43fd0844292650a0fe0b1176903e5bd77', 'width': 320}], 'source': {'height': 457, 'url': 'https://preview.redd.it/n79ymm450yqf1.jpeg?auto=webp&s=51311509fcd7dd4a473c7fa0a3782e75d0631dc5', 'width': 546}, 'variants': {}}]} | |
I built an open-source Writing Assistant inspired by Apple Intelligence, called ProseFlow. | 43 | Good evening,
As someone who barely communicates with others, I find it genuinely hard to write messages: picking the right words, second-guessing whether they're correct, wondering whether this is the best way to deliver the information. AI helps, but constantly copy-pasting and refining my inputs is just frustrating. I was tired of the clunky workflow of copy-pasting text into a separate UI. I wanted my models to feel integrated into my OS. So, I built ProseFlow.
ProseFlow is a system-level utility that lets you apply AI actions to selected text anywhere. You highlight text in your browser, IDE, or document editor, press a hotkey, and a menu of your custom actions appears.
The core workflow is simple:
1. **Select text** in any application.
2. **Press a global hotkey** (e.g., `Ctrl+J`).
3. A floating, searchable menu of your custom AI **Actions** (Proofread, Summarize, Refactor Code) appears.
4. Select an action, and it transforms your text instantly.
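Under the hood, an "action" of this kind boils down to "named system prompt + selected text → replacement text". A minimal sketch of that pattern against a local OpenAI-compatible endpoint; the endpoint, model id, and action prompts below are illustrative, not ProseFlow's internals:

```python
# Illustrative sketch of an "action": a named system prompt applied to
# whatever text is currently selected. Not ProseFlow's actual code.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="none")  # e.g. LM Studio

ACTIONS = {
    "Proofread": "Fix grammar and spelling. Return only the corrected text.",
    "Summarize": "Summarize the text in 2-3 sentences. Return only the summary.",
}

def run_action(name: str, selected_text: str) -> str:
    resp = client.chat.completions.create(
        model="proseflow-v1-1.5b-instruct",  # placeholder model id
        messages=[
            {"role": "system", "content": ACTIONS[name]},
            {"role": "user", "content": selected_text},
        ],
    )
    return resp.choices[0].message.content

print(run_action("Proofread", "teh quick brown fox jump over"))
```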
The key features are:
* **Deep Customization:** You can create unlimited actions, each with its own system prompt, to tailor the model's behavior for specific tasks.
* **Iterative Refinement:** For complex tasks, the result opens in a window where you can conversationally refine it (e.g., "make it shorter," "add bullet points").
* **Smart Paste:** Assign a second hotkey to your most-used action for one-press text transformation.
* **Context-Aware Actions:** You can make actions (like code refactoring) only appear when you're in specific apps (like VS Code).
* **Official Models & Dataset:** I fine-tuned **[ProseFlow-v1-1.5B-Instruct](https://huggingface.co/LSXPrime/ProseFlow-v1-1.5B-Instruct)** specifically for this action-based format. It's trained on an open-source dataset I created, **[ProseFlow-Actions-v1](https://huggingface.co/datasets/LSXPrime/ProseFlow-Actions-v1)**, to ensure high-quality, structured output. Both are available for one-click download in the app.
* **Live Hardware Monitoring:** The dashboard includes real-time VRAM, RAM, CPU, and GPU monitoring so you can see exactly what your models are doing.
This project is free, open-source (AGPLv3), and ready for you to try. I'm looking for feedback on performance with different hardware and models.
* **Download & Website:** [https://lsxprime.github.io/proseflow-web](https://lsxprime.github.io/proseflow-web)
* **GitHub Repository:** [https://github.com/LSXPrime/ProseFlow](https://github.com/LSXPrime/ProseFlow)
Let me know what you think.
macOS still untested; I would be thankful if any Mac user can confirm its functionality or report with the logs. | 2025-09-23T16:44:01 | https://v.redd.it/9dqh3tfvpxqf1 | LSXPRIME | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nomi16 | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/9dqh3tfvpxqf1/DASHPlaylist.mpd?a=1761237854%2CZTgzNGJhNWY4NTBkZTUwZTA1ZWEzMjJiNmIxMGE4YWU5YjczYmM3ZmY4NDU0OTEzOGI1YTYyM2I3MmUyMDE2Nw%3D%3D&v=1&f=sd', 'duration': 60, 'fallback_url': 'https://v.redd.it/9dqh3tfvpxqf1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/9dqh3tfvpxqf1/HLSPlaylist.m3u8?a=1761237854%2COTQ0ZDRiNzdmNzY2MWQwZDQ5NWU4NTM1NDNmZjI1MmVhNGM0YWNhOTc2MDM3OTE2YjBkZTYwN2UyNjRiZDgzOQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/9dqh3tfvpxqf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1020}} | t3_1nomi16 | /r/LocalLLaMA/comments/1nomi16/i_built_an_opensource_writing_assistant_inspired/ | false | false | 43 | {'enabled': False, 'images': [{'id': 'YWUxbmU3Z3ZweHFmMQ5U_qROBXjFN3SoDz3kTm8LCTfeK5cYjjrj33SnLUqP', 'resolutions': [{'height': 76, 'url': 'https://external-preview.redd.it/YWUxbmU3Z3ZweHFmMQ5U_qROBXjFN3SoDz3kTm8LCTfeK5cYjjrj33SnLUqP.png?width=108&crop=smart&format=pjpg&auto=webp&s=886b82cb2d7931be6357f6103cde44a8b2165a83', 'width': 108}, {'height': 152, 'url': 'https://external-preview.redd.it/YWUxbmU3Z3ZweHFmMQ5U_qROBXjFN3SoDz3kTm8LCTfeK5cYjjrj33SnLUqP.png?width=216&crop=smart&format=pjpg&auto=webp&s=e822f12a2abd89774e6a59c1cc3ab9195241dbe6', 'width': 216}, {'height': 226, 'url': 'https://external-preview.redd.it/YWUxbmU3Z3ZweHFmMQ5U_qROBXjFN3SoDz3kTm8LCTfeK5cYjjrj33SnLUqP.png?width=320&crop=smart&format=pjpg&auto=webp&s=9c842c5fe66eaeac89f591b5c7b5fe18ac7f5d91', 'width': 320}, {'height': 452, 'url': 'https://external-preview.redd.it/YWUxbmU3Z3ZweHFmMQ5U_qROBXjFN3SoDz3kTm8LCTfeK5cYjjrj33SnLUqP.png?width=640&crop=smart&format=pjpg&auto=webp&s=627ef052d44dbe0929f99648449fb81679c17815', 'width': 640}, {'height': 678, 'url': 'https://external-preview.redd.it/YWUxbmU3Z3ZweHFmMQ5U_qROBXjFN3SoDz3kTm8LCTfeK5cYjjrj33SnLUqP.png?width=960&crop=smart&format=pjpg&auto=webp&s=f3bb311cd494b0f19e39db3cfaa4e3309b89b7bb', 'width': 960}, {'height': 763, 'url': 'https://external-preview.redd.it/YWUxbmU3Z3ZweHFmMQ5U_qROBXjFN3SoDz3kTm8LCTfeK5cYjjrj33SnLUqP.png?width=1080&crop=smart&format=pjpg&auto=webp&s=9b09776acbc72660d062c3037b47b52e5f2a44a6', 'width': 1080}], 'source': {'height': 957, 'url': 'https://external-preview.redd.it/YWUxbmU3Z3ZweHFmMQ5U_qROBXjFN3SoDz3kTm8LCTfeK5cYjjrj33SnLUqP.png?format=pjpg&auto=webp&s=f71925cb97f7068731e1d3f060971ab917d74f89', 'width': 1354}, 'variants': {}}]} | |
Anyone trained up to ~11B params? What setup actually works? | 10 | Hey folks,
I’ve been playing around with training a language model up to the 11B parameter range. Tried it on Kaggle already, but it blew past the 30h limit 😅 so I’m clearly gonna need a different setup.
A few things I’d love input on from people who’ve actually run jobs this size:
• What’s the minimum viable hardware you’ve made work (GPU type/count, RAM, storage, networking)?
• Tips for making model parallelism + distributed training less painful?
• Frameworks/tools that actually save headaches (MosaicML, Composer, HuggingFace, FSDP, etc.)?
• Any “wish I knew this earlier” lessons—cost, reliability, troubleshooting, or general sanity-savers.
Extra love if you can share real cluster specs (e.g., “needed X A100s” or “Y 4090s with Z TB of fast storage”), bottlenecks you hit with storage/networking, or what you’d do differently next time.
Appreciate any wisdom 🙏
| 2025-09-23T16:37:27 | https://www.reddit.com/r/LocalLLaMA/comments/1nombr7/anyone_trained_up_to_11b_params_what_setup/ | pepsituta | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nombr7 | false | null | t3_1nombr7 | /r/LocalLLaMA/comments/1nombr7/anyone_trained_up_to_11b_params_what_setup/ | false | false | self | 10 | null |
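Since FSDP came up in the framework question: a minimal sketch of wrapping a large HF model with PyTorch FSDP, assuming a single node and a placeholder checkpoint name. Real runs at 11B would typically add an auto-wrap policy, activation checkpointing, and a mixed-precision config:

```python
# Minimal FSDP sketch (PyTorch >= 2.x): shards parameters, gradients, and
# optimizer state across GPUs. Checkpoint name is a placeholder.
# Launch with: torchrun --nproc_per_node=8 train.py
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from transformers import AutoModelForCausalLM

dist.init_process_group("nccl")
torch.cuda.set_device(dist.get_rank())  # single node assumed

model = AutoModelForCausalLM.from_pretrained(
    "your-11b-checkpoint",              # placeholder
    torch_dtype=torch.bfloat16,
)
model = FSDP(model, device_id=torch.cuda.current_device())
optim = torch.optim.AdamW(model.parameters(), lr=1e-5)
# ...standard loop: out = model(**batch); out.loss.backward(); optim.step()
```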
Generating Java Data Structures With LLMs Like Apple's Foundation Models Framework | 3 | The Java type/class is first transformed into a valid JSON schema, which is injected into the system prompt and into the HTTP request. To enrich the system prompt, additional field descriptions are read from custom @Guide annotations using Java's Reflection APIs. When the server (e.g. llama-server or any OpenAI API compatible server) gets the request, it transforms the JSON schema into a BNF grammar that is enforced on the LLM's response tokens. The LLM's response strictly follows the JSON schema and is then sent back to the client, where it is deserialized and converted to an instance of the Java class initially given to the client.
Video:
1. Assign the role of a 'natural language parser' to the client (it goes in the system prompt)
2. The sample query is a huge paragraph from which we wish to extract relevant details.
3. The ECommerceProduct class contains @Guide annotations and fields that we wish to extract from the query/paragraph defined in (2).
4. Execute the program and after a few moments, the string representation (toString()) of the class ECommerceProduct is visible in the console.
Blog: https://medium.com/@equipintelligence/generating-java-data-structures-with-llms-like-apples-foundation-models-framework-bd161f6f1be0
GitHub: https://github.com/shubham0204/Guided-Generation-Java
| 2025-09-23T16:34:26 | shubham0204_dev | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nom8t8 | false | null | t3_1nom8t8 | /r/LocalLLaMA/comments/1nom8t8/generating_java_data_structures_with_llms_like/ | false | false | default | 3 | {'enabled': True, 'images': [{'id': 'yphk9kv7zxqf1', 'resolutions': [{'height': 36, 'url': 'https://preview.redd.it/yphk9kv7zxqf1.png?width=108&crop=smart&auto=webp&s=7d459fe5012ea27503943900a4d7d0f798488129', 'width': 108}, {'height': 73, 'url': 'https://preview.redd.it/yphk9kv7zxqf1.png?width=216&crop=smart&auto=webp&s=d68e533e7db0131c18542f8018c21b3b02477e4c', 'width': 216}, {'height': 108, 'url': 'https://preview.redd.it/yphk9kv7zxqf1.png?width=320&crop=smart&auto=webp&s=613617bb0fd7d3c220c9405878370d2c8806b86a', 'width': 320}, {'height': 217, 'url': 'https://preview.redd.it/yphk9kv7zxqf1.png?width=640&crop=smart&auto=webp&s=1c5d1c65bb1e24eba7280ae935ddbc324098e9e0', 'width': 640}, {'height': 326, 'url': 'https://preview.redd.it/yphk9kv7zxqf1.png?width=960&crop=smart&auto=webp&s=eb336f7866bc2f9c5c7b1a714c3b55bddb264bdb', 'width': 960}, {'height': 367, 'url': 'https://preview.redd.it/yphk9kv7zxqf1.png?width=1080&crop=smart&auto=webp&s=537e0c13f56261e801f677da48bce81cc9f4f952', 'width': 1080}], 'source': {'height': 1392, 'url': 'https://preview.redd.it/yphk9kv7zxqf1.png?auto=webp&s=f8c8cbd665086f0cc54b902b3c83f955d906e870', 'width': 4096}, 'variants': {}}]} | |
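The same schema-constrained flow the post describes for Java can be exercised from any language. A hedged Python sketch against an OpenAI-compatible server that supports JSON-schema structured output (such as recent llama-server builds); the schema, prompt, and endpoint are illustrative:

```python
# Hedged sketch: schema -> grammar-enforced response -> deserialization,
# mirroring the post's flow via an OpenAI-compatible HTTP API.
import json, requests

schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string", "description": "product name"},
        "price": {"type": "number"},
    },
    "required": ["name", "price"],
}

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",  # placeholder endpoint
    json={
        "model": "local",
        "messages": [
            {"role": "system", "content": "You are a natural language parser."},
            {"role": "user", "content": "The new X200 headphones cost $149.99 ..."},
        ],
        "response_format": {
            "type": "json_schema",
            "json_schema": {"name": "product", "schema": schema},
        },
    },
)
product = json.loads(resp.json()["choices"][0]["message"]["content"])
print(product["name"], product["price"])
```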
Qwen3Guard - a Qwen Collection | 161 | 2025-09-23T16:24:35 | https://huggingface.co/collections/Qwen/qwen3guard-68d2729abbfae4716f3343a1 | Few_Painter_5588 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1nolz9e | false | null | t3_1nolz9e | /r/LocalLLaMA/comments/1nolz9e/qwen3guard_a_qwen_collection/ | false | false | default | 161 | {'enabled': False, 'images': [{'id': 'SybQlpd57ri5DOffonwxQ3RJbORPPReSb_vD77lSWek', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/SybQlpd57ri5DOffonwxQ3RJbORPPReSb_vD77lSWek.png?width=108&crop=smart&auto=webp&s=aa3dffed08d4aeaa86e97a89e6d4b73187400dc8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/SybQlpd57ri5DOffonwxQ3RJbORPPReSb_vD77lSWek.png?width=216&crop=smart&auto=webp&s=e58f4a4e3bdd9e3da632d3d074eaa6e10d0ab233', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/SybQlpd57ri5DOffonwxQ3RJbORPPReSb_vD77lSWek.png?width=320&crop=smart&auto=webp&s=748f848949bf7b5344f7a19eb473694eaeebdcbf', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/SybQlpd57ri5DOffonwxQ3RJbORPPReSb_vD77lSWek.png?width=640&crop=smart&auto=webp&s=aa8f4c7470cc8c0c033a57dba7ee804d9d9e1d36', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/SybQlpd57ri5DOffonwxQ3RJbORPPReSb_vD77lSWek.png?width=960&crop=smart&auto=webp&s=b05091a43ea4f799162f31c27541a5def5fe1594', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/SybQlpd57ri5DOffonwxQ3RJbORPPReSb_vD77lSWek.png?width=1080&crop=smart&auto=webp&s=87cd1f63cc41862f6679e315a45a4aa6fe475290', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/SybQlpd57ri5DOffonwxQ3RJbORPPReSb_vD77lSWek.png?auto=webp&s=4f619c921da5e0e70bba57a5ad3f7d94d32c6830', 'width': 1200}, 'variants': {}}]} | |
Condescension in AI is getting worse | 0 | I just had to tell 4 separate AIs (Claude, ChatGPT, gpt-oss-20b, Qwen3-Max) that I am not some dumb nobody who thinks AI is cool and is randomly flipping switches and turning knobs in AI settings like a kid in a candy store, causing a mess because it gives me attention.
I'm so sick of asking a technical question and having it be condescending to me, treating me like I'm asking some off-the-wall question. Like "ooh, cute baby, let's tell you it's none of your concern and stop you from breaking things." Not those exact words, but the same freaking tone. I mean, if I'm asking about a technical aspect and including terminology that almost no normie is going to know, then obviously I'm not some dumbass who can only understand "turn it off and on again."
And it's getting worse! I've had conversations with every online AI for months. Most of them know my personality/quirks and so forth. Some have in-system memory that shows I'm not tech illiterate.
But every damned time I ask a technical question, I get that "oh, you don't know what you're talking about, let me explain the underlying technology in kiddie terms and warn you not to touch shit" treatment.
WHY IS AI SO CONDESCENDING LATELY? | 2025-09-23T16:18:32 | https://www.reddit.com/r/LocalLLaMA/comments/1nolt8t/condescension_in_ai_is_getting_worse/ | Savantskie1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nolt8t | false | null | t3_1nolt8t | /r/LocalLLaMA/comments/1nolt8t/condescension_in_ai_is_getting_worse/ | false | false | self | 0 | null |
Hi, I just downloaded LM Studio, and I need some help. | 1 | Why is the AI generating tokens so slowly? Is there a setting or way to improve it?
(My system is quite weak, but I won't run anything in the background.) | 2025-09-23T16:03:44 | https://www.reddit.com/r/LocalLLaMA/comments/1nolf2m/hi_i_just_downloaded_lm_studio_and_i_need_some/ | magach6 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nolf2m | false | null | t3_1nolf2m | /r/LocalLLaMA/comments/1nolf2m/hi_i_just_downloaded_lm_studio_and_i_need_some/ | false | false | self | 1 | null |
Local LLM coding AI | 5 | Has anyone been able to get any coding AI working locally?
I've been pulling my hair out by the roots for a while now, trying to get VS Code, Roo Code, LM Studio, and different models to cooperate, but so far in vain.
Suggestions on what to try?
I tried to get Ollama to work, but it seems hell-bent on refusing connections and only works from the GUI. Since I'd gotten LM Studio to work before, I fired it up, and it worked out of the box, accepting API calls.
I'm willing to trade for any other editor if necessary, but would prefer Visual Studio or VS Code.
Roo Code seemed to be the best extension to get, but maybe I was misled by advertising?
The problems I get vary depending on the model/prompt.
Endless looping is the best result so far:
Visual Code/RooCode/LMStudio/oh-dcft-v3.1-claude-3-5-sonnet-20241022 (Context length: 65536)
Many other attempts fail due to prompt/context length (I got this example by resetting the context length to 4096, but I saw these errors even with the context length at 65536):
2025-09-23 17:04:51 [ERROR]
Trying to keep the first 6402 tokens when context the overflows. However, the model is loaded with context length of only 4096 tokens, which is not enough. Try to load the model with a larger context length, or provide a shorter input. Error Data: n/a, Additional Data: n/a
I also got this error in the LM Studio log:
2025-09-23 17:29:01 [ERROR]
Error rendering prompt with jinja template: "You have passed a message containing <|channel|> tags in the content field. Instead of doing this, you should pass analysis messages (the string between '<|message|>' and '<|end|>') in the 'thinking' field, and final messages (the string between '<|message|>' and '<|end|>') in the 'content' field.".
This is usually an issue with the model's prompt template. If you are using a popular model, you can try to search the model under lmstudio-community, which will have fixed prompt templates. If you cannot find one, you are welcome to post this issue to our discord or issue tracker on GitHub. Alternatively, if you know how to write jinja templates, you can override the prompt template in My Models > model settings > Prompt Template.. Error Data: n/a, Additional Data: n/a
| 2025-09-23T15:57:09 | https://www.reddit.com/r/LocalLLaMA/comments/1nol8nr/local_llm_coding_ai/ | Darlanio | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nol8nr | false | null | t3_1nol8nr | /r/LocalLLaMA/comments/1nol8nr/local_llm_coding_ai/ | false | false | self | 5 | null |
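On the first error: the context window is a load-time setting, and the same knob exists outside LM Studio's GUI. For example, in llama-cpp-python it is `n_ctx` (the model path below is a placeholder):

```python
# Loading a GGUF with an explicit context window via llama-cpp-python.
# The LM Studio error above is the same mismatch: default context too small.
from llama_cpp import Llama

llm = Llama(
    model_path="./model.Q4_K_M.gguf",  # placeholder path
    n_ctx=65536,                       # must cover prompt + generation
    n_gpu_layers=-1,                   # offload all layers to GPU if possible
)
print(llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello"}]
)["choices"][0]["message"]["content"])
```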
Any good research papers you recommend? | 5 | My friends and I have a weekly paper-reading circle, but lately we can't find good papers with interesting ideas to read.
We liked the Next-Scale Prediction for Autoregressive Image Generation paper last year and were wondering if there are other interesting papers like it.
oLLM: run Qwen3-Next-80B on 8GB GPU (at 1tok/2s throughput) | 6 | 2025-09-23T15:56:06 | https://github.com/Mega4alik/ollm | paf1138 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1nol7o8 | false | null | t3_1nol7o8 | /r/LocalLLaMA/comments/1nol7o8/ollm_run_qwen3next80b_on_8gb_gpu_at_1tok2s/ | false | false | 6 | {'enabled': False, 'images': [{'id': 'loqYh-WCtEaxMSj7OVC1KJ5pM9gu3MpUO3u8a7ppcoY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/loqYh-WCtEaxMSj7OVC1KJ5pM9gu3MpUO3u8a7ppcoY.png?width=108&crop=smart&auto=webp&s=f344d48a6b30df385c6254bc80ad88f32e31e069', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/loqYh-WCtEaxMSj7OVC1KJ5pM9gu3MpUO3u8a7ppcoY.png?width=216&crop=smart&auto=webp&s=84be04f11238da3878ad3782ce64e889d869a164', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/loqYh-WCtEaxMSj7OVC1KJ5pM9gu3MpUO3u8a7ppcoY.png?width=320&crop=smart&auto=webp&s=b7c22a94f724e24296ac05db5db6bd760f115394', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/loqYh-WCtEaxMSj7OVC1KJ5pM9gu3MpUO3u8a7ppcoY.png?width=640&crop=smart&auto=webp&s=cab0230c5dc68a3b50a7ad3a367504dacead83b8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/loqYh-WCtEaxMSj7OVC1KJ5pM9gu3MpUO3u8a7ppcoY.png?width=960&crop=smart&auto=webp&s=5d0603b8acf85f959bc64c87ef89be20860b3766', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/loqYh-WCtEaxMSj7OVC1KJ5pM9gu3MpUO3u8a7ppcoY.png?width=1080&crop=smart&auto=webp&s=652a59b3e70ee045a2556bb28037f96ea1cc0779', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/loqYh-WCtEaxMSj7OVC1KJ5pM9gu3MpUO3u8a7ppcoY.png?auto=webp&s=664a2d566d85323d92ae2f6552fcaa47bfe8c21b', 'width': 1200}, 'variants': {}}]} | ||
Show HN: Run Qwen3-Next-80B on 8GB GPU (at 1tok/2s throughput) | 1 | >oLLM is a lightweight Python library for large-context LLM inference, built on top of Huggingface Transformers and PyTorch. It enables running models like [gpt-oss-20B](https://huggingface.co/openai/gpt-oss-20b), [qwen3-next-80B](https://huggingface.co/Qwen/Qwen3-Next-80B-A3B-Instruct) or [Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on 100k context using \~$200 consumer GPU with 8GB VRAM. No quantization is used—only fp16/bf16 precision. | 2025-09-23T15:54:02 | https://github.com/Mega4alik/ollm | paf1138 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1nol5pi | false | null | t3_1nol5pi | /r/LocalLLaMA/comments/1nol5pi/show_hn_run_qwen3next80b_on_8gb_gpu_at_1tok2s/ | false | false | default | 1 | null |
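I haven't verified oLLM's own API, but the underlying idea (streaming weights so VRAM only holds the active layers) can also be approximated with stock transformers/accelerate disk offload, roughly like this (memory budgets are illustrative):

```python
# Generic disk-offload with transformers + accelerate (not oLLM's API):
# layers that don't fit the 8GB VRAM budget spill to CPU RAM, then to SSD.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B-Instruct"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",             # fill GPU first, then CPU, then disk
    offload_folder="offload",      # SSD spill directory
    max_memory={0: "7GiB", "cpu": "24GiB"},  # illustrative budgets
)
out = model.generate(**tok("Hello", return_tensors="pt").to(model.device),
                     max_new_tokens=32)
print(tok.decode(out[0]))
```

Expect throughput in the same low-tokens-per-second ballpark as the post's numbers; disk bandwidth dominates.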
Computer Use on Windows Sandbox | 20 | Introducing Windows Sandbox support - run computer-use agents on Windows business apps without VMs or cloud costs.
Your enterprise software runs on Windows, but testing agents required expensive cloud instances. Windows Sandbox changes this - it's Microsoft's built-in lightweight virtualization sitting on every Windows 10/11 machine, ready for instant agent development.
Enterprise customers kept asking for AutoCAD automation, SAP integration, and legacy Windows software support. Traditional VM testing was slow and resource-heavy. Windows Sandbox solves this with disposable, seconds-to-boot Windows environments for safe agent testing.
What you can build: AutoCAD drawing automation, SAP workflow processing, Bloomberg terminal trading bots, manufacturing execution system integration, or any Windows-only enterprise software automation - all tested safely in disposable sandbox environments.
Free with Windows 10/11, boots in seconds, completely disposable. Perfect for development and testing before deploying to Windows cloud instances (coming later this month).
Check out the GitHub repo here: https://github.com/trycua/cua
Blog : https://www.trycua.com/blog/windows-sandbox | 2025-09-23T15:47:22 | https://v.redd.it/188jelvvqxqf1 | Impressive_Half_2819 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nokzcf | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/188jelvvqxqf1/DASHPlaylist.mpd?a=1761234455%2CZDg0YzM1MTM3MmZiZmIxOWE3ZjZmM2MwZWNhNDE0ZmI1MTBlYzFjNmVkNGIxOTcxOTY4ZDdkN2JlY2ZjMGE5Yg%3D%3D&v=1&f=sd', 'duration': 69, 'fallback_url': 'https://v.redd.it/188jelvvqxqf1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/188jelvvqxqf1/HLSPlaylist.m3u8?a=1761234455%2CM2NjZTM0NWI3OTI1NWNhMWY1NjllMzhhMDdkYjk3YWFiZGQ5NTg5OTE1YzljZjI4MGJkYzZmMDlkOWQ3NDFmZg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/188jelvvqxqf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}} | t3_1nokzcf | /r/LocalLLaMA/comments/1nokzcf/computer_use_on_windows_sandbox/ | false | false | 20 | {'enabled': False, 'images': [{'id': 'ZGcybmN2ZnZxeHFmMYar2P-d3EU8x2ju_uKYrB4yrb0aAUxLp4mH5szJsZ9M', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ZGcybmN2ZnZxeHFmMYar2P-d3EU8x2ju_uKYrB4yrb0aAUxLp4mH5szJsZ9M.png?width=108&crop=smart&format=pjpg&auto=webp&s=0ce0b125d6af03b0a07561720c6a4853acb7b8f0', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/ZGcybmN2ZnZxeHFmMYar2P-d3EU8x2ju_uKYrB4yrb0aAUxLp4mH5szJsZ9M.png?width=216&crop=smart&format=pjpg&auto=webp&s=f9253c21a6f763a863bcdcdcc07d5c52377413c8', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/ZGcybmN2ZnZxeHFmMYar2P-d3EU8x2ju_uKYrB4yrb0aAUxLp4mH5szJsZ9M.png?width=320&crop=smart&format=pjpg&auto=webp&s=6fe94bc6e49564beaa4fd9a21baec72ccce64ecc', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/ZGcybmN2ZnZxeHFmMYar2P-d3EU8x2ju_uKYrB4yrb0aAUxLp4mH5szJsZ9M.png?width=640&crop=smart&format=pjpg&auto=webp&s=c455200c23128f6d939892b21b95ef2f820d98c7', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/ZGcybmN2ZnZxeHFmMYar2P-d3EU8x2ju_uKYrB4yrb0aAUxLp4mH5szJsZ9M.png?width=960&crop=smart&format=pjpg&auto=webp&s=98e181ecaba82fe51be42970c0658cf60e9ea699', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/ZGcybmN2ZnZxeHFmMYar2P-d3EU8x2ju_uKYrB4yrb0aAUxLp4mH5szJsZ9M.png?width=1080&crop=smart&format=pjpg&auto=webp&s=966fbf3e2fc25b2d82bad1e38f7a58d6130d2cf3', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/ZGcybmN2ZnZxeHFmMYar2P-d3EU8x2ju_uKYrB4yrb0aAUxLp4mH5szJsZ9M.png?format=pjpg&auto=webp&s=07779b4ea36cc3a7e8b165ca32afd1cf6bd81b0d', 'width': 1280}, 'variants': {}}]} | |
Observation #004: Sequential Output Behavior in Response to High-Pressure Directive | 1 | [removed] | 2025-09-23T15:46:59 | https://www.reddit.com/r/LocalLLaMA/comments/1nokyzk/observation_004_sequential_output_behavior_in/ | Embarrassed-Crow7078 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nokyzk | false | null | t3_1nokyzk | /r/LocalLLaMA/comments/1nokyzk/observation_004_sequential_output_behavior_in/ | false | false | self | 1 | null |
DeepStudio - Google AI Studio's App Builder at home (for static html/css/js apps and sites) | 32 | [DeepStudio - the main workspace](https://preview.redd.it/4xudsfesnxqf1.png?width=3083&format=png&auto=webp&s=3dd0f4bf93f6ebc44e767adb2b13f61e4dc314c8)
Howdy!
I've been tinkering on **DeepStudio** for a while and I think it's finally good and clean enough to share.
A [DeepSite v2](https://huggingface.co/spaces/enzostvs/deepsite) fork where I first added support for more providers and model listing, then multi-file support. I took that much further with a Virtual File System (file storage in IndexedDB), agentic capabilities for the code changes, conversation/session history, checkpoints and saves, then sh/bash commands in the VFS for the agent to use (reducing the need for dozens of tool definitions to just 2), support for non-tool models via JSON parsing, responsive UX/UI, and so much more that I can't even remember.
I ended up with what is basically **Google AI Studio's App Builder** at home.
A major part of the motivation for the project has also been the fact that I quite enjoy Google AI Studio's App Builder for testing out ideas, whether at home or out, but I always have a nagging feeling that there's going to be a day when they slap a 5k/mo price tag on it and then I'll be back to being a frustrated peasant.

Works with **Ollama** and **LM Studio** as well, but I've been testing mostly with OpenRouter (note: it reports roughly 4x higher costs than actual). Some models that work well: gpt-oss-120b, Qwen3 series, GLM-4.5, Kimi K2. The closed-source SOTA models obviously work great too.

**If you're using OpenRouter or any other remote provider, be sure to set up limits.** Although there is stop functionality for halting further tool calls/processing, it's entirely possible something goes wrong, and I'd be plenty miffed if someone spent their life savings on an HTML5 snake game.

If you make something cool with DeepStudio, I'd appreciate it a lot if you could share it with me. Please also consider that this is a **solo project** I've been doing on the side, so be patient if fixes take a bit of time to arrive.
**HF Demo**: [https://huggingface.co/spaces/otst/deepstudio](https://huggingface.co/spaces/otst/deepstudio)
**Git / Source code**: [https://github.com/o-stahl/deepstudio](https://github.com/o-stahl/deepstudio)
| 2025-09-23T15:45:48 | https://www.reddit.com/r/LocalLLaMA/comments/1nokxsj/deepstudio_google_ai_studios_app_builder_at_home/ | Perfect_Twist713 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nokxsj | false | null | t3_1nokxsj | /r/LocalLLaMA/comments/1nokxsj/deepstudio_google_ai_studios_app_builder_at_home/ | false | false | 32 | {'enabled': False, 'images': [{'id': 'yDvyf-zbJDe3LDBNM7frIodnUlArIlsW27VvFHZ7mM8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/yDvyf-zbJDe3LDBNM7frIodnUlArIlsW27VvFHZ7mM8.png?width=108&crop=smart&auto=webp&s=97332e7fc5eadaf8c970a7203fbad19b6c95b738', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/yDvyf-zbJDe3LDBNM7frIodnUlArIlsW27VvFHZ7mM8.png?width=216&crop=smart&auto=webp&s=1185c70712caca5e8d733e132abbf7d514e660ca', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/yDvyf-zbJDe3LDBNM7frIodnUlArIlsW27VvFHZ7mM8.png?width=320&crop=smart&auto=webp&s=e8e83ae34956c3c02370c74ee8e00e2ecacc99e0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/yDvyf-zbJDe3LDBNM7frIodnUlArIlsW27VvFHZ7mM8.png?width=640&crop=smart&auto=webp&s=ecdb4d9c5d62513c3d9d4e891cdad0e6ed24d7aa', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/yDvyf-zbJDe3LDBNM7frIodnUlArIlsW27VvFHZ7mM8.png?width=960&crop=smart&auto=webp&s=76fd132244242f78036d2c81a79901be749dd64a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/yDvyf-zbJDe3LDBNM7frIodnUlArIlsW27VvFHZ7mM8.png?width=1080&crop=smart&auto=webp&s=2bcb4309020b24b234ff2dc40f11968ed4cd372f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/yDvyf-zbJDe3LDBNM7frIodnUlArIlsW27VvFHZ7mM8.png?auto=webp&s=5c63fd8eaebf129491b5c91ac90588123a9fe1c7', 'width': 1200}, 'variants': {}}]} | |
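For flavor, the "just 2 tool definitions" idea from the post might look something like this in OpenAI function-calling format. These names and fields are made up for illustration; DeepStudio's real definitions live in its source:

```python
# Illustrative only: a two-tool agent surface in OpenAI function-calling
# format. DeepStudio's actual tool definitions may differ.
tools = [
    {
        "type": "function",
        "function": {
            "name": "sh",
            "description": "Run a shell-style command (ls, cat, sed, ...) "
                           "against the virtual file system.",
            "parameters": {
                "type": "object",
                "properties": {"command": {"type": "string"}},
                "required": ["command"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "finish",
            "description": "Signal that the requested change is complete.",
            "parameters": {"type": "object", "properties": {}},
        },
    },
]
```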
Life's good. Enjoying 5,000,000 free tokens a DAY! (supports anthropic api format!) | 0 | 2025-09-23T15:35:23 | https://www.reddit.com/r/LocalLLaMA/comments/1noknuz/lifes_good_enjoying_5000000_free_tokens_a_day/ | Adventurous-Slide776 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1noknuz | false | null | t3_1noknuz | /r/LocalLLaMA/comments/1noknuz/lifes_good_enjoying_5000000_free_tokens_a_day/ | false | false | spoiler | 0 | null | |
Qwen3Guard - Qwen3-based safety moderation model series built for global, real-time AI safety | 1 | [removed] | 2025-09-23T15:34:45 | https://www.reddit.com/r/LocalLLaMA/comments/1nokn8s/qwen3guard_qwen3based_safety_moderation_model/ | nullmove | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nokn8s | false | null | t3_1nokn8s | /r/LocalLLaMA/comments/1nokn8s/qwen3guard_qwen3based_safety_moderation_model/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'SybQlpd57ri5DOffonwxQ3RJbORPPReSb_vD77lSWek', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/SybQlpd57ri5DOffonwxQ3RJbORPPReSb_vD77lSWek.png?width=108&crop=smart&auto=webp&s=aa3dffed08d4aeaa86e97a89e6d4b73187400dc8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/SybQlpd57ri5DOffonwxQ3RJbORPPReSb_vD77lSWek.png?width=216&crop=smart&auto=webp&s=e58f4a4e3bdd9e3da632d3d074eaa6e10d0ab233', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/SybQlpd57ri5DOffonwxQ3RJbORPPReSb_vD77lSWek.png?width=320&crop=smart&auto=webp&s=748f848949bf7b5344f7a19eb473694eaeebdcbf', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/SybQlpd57ri5DOffonwxQ3RJbORPPReSb_vD77lSWek.png?width=640&crop=smart&auto=webp&s=aa8f4c7470cc8c0c033a57dba7ee804d9d9e1d36', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/SybQlpd57ri5DOffonwxQ3RJbORPPReSb_vD77lSWek.png?width=960&crop=smart&auto=webp&s=b05091a43ea4f799162f31c27541a5def5fe1594', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/SybQlpd57ri5DOffonwxQ3RJbORPPReSb_vD77lSWek.png?width=1080&crop=smart&auto=webp&s=87cd1f63cc41862f6679e315a45a4aa6fe475290', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/SybQlpd57ri5DOffonwxQ3RJbORPPReSb_vD77lSWek.png?auto=webp&s=4f619c921da5e0e70bba57a5ad3f7d94d32c6830', 'width': 1200}, 'variants': {}}]} |
Tiny local model for chatting about notes | 5 | Hey everyone!
I'm looking for a tiny (~4b) local model that I can run on my M2 Macbook Air with 8GB of RAM. I get that this is an incredibly low-spec device, so I shouldn't expect much. Is there anything better than Qwen 3 4B Instruct 2507? It should be getting most of its data from notes that I'm taking, so hallucinations should(?) be less of a problem, which I would imagine is the biggest problem with a model this small. | 2025-09-23T15:32:27 | https://www.reddit.com/r/LocalLLaMA/comments/1nokl35/tiny_local_model_for_chatting_about_notes/ | JustShyOrDoYouHateMe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nokl35 | false | null | t3_1nokl35 | /r/LocalLLaMA/comments/1nokl35/tiny_local_model_for_chatting_about_notes/ | false | false | self | 5 | null |
🛡️ Meet Qwen3Guard | 1 | [removed] | 2025-09-23T15:30:54 | ResearchCrafty1804 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nokjmn | false | null | t3_1nokjmn | /r/LocalLLaMA/comments/1nokjmn/meet_qwen3guard/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'IZ7MIK3FtYyw65h_KW5Eh7bLEATOtnfrfWr-GaJPvGk', 'resolutions': [{'height': 41, 'url': 'https://preview.redd.it/q02rnpdynxqf1.jpeg?width=108&crop=smart&auto=webp&s=2cf38c2f5e7b3e2b82f522c79060e884e2cae020', 'width': 108}, {'height': 82, 'url': 'https://preview.redd.it/q02rnpdynxqf1.jpeg?width=216&crop=smart&auto=webp&s=15b7ae9b9af442a07c969aefe4edb213e64e2b5f', 'width': 216}, {'height': 121, 'url': 'https://preview.redd.it/q02rnpdynxqf1.jpeg?width=320&crop=smart&auto=webp&s=ab1e872fafa051786ef498b1b914dccbe5cc8aee', 'width': 320}, {'height': 243, 'url': 'https://preview.redd.it/q02rnpdynxqf1.jpeg?width=640&crop=smart&auto=webp&s=0f622a379094613f184e18310b855d55e1782b03', 'width': 640}, {'height': 365, 'url': 'https://preview.redd.it/q02rnpdynxqf1.jpeg?width=960&crop=smart&auto=webp&s=cab576e9fff367c6b786cdcbd58bdea20115a443', 'width': 960}, {'height': 411, 'url': 'https://preview.redd.it/q02rnpdynxqf1.jpeg?width=1080&crop=smart&auto=webp&s=33e1535b294a6ef93562fd11589ba7ce5d30f54e', 'width': 1080}], 'source': {'height': 458, 'url': 'https://preview.redd.it/q02rnpdynxqf1.jpeg?auto=webp&s=7f08dc42c7ea2b4b84effb3b1f120b9e1b56fdde', 'width': 1203}, 'variants': {}}]} | ||
Qwen3Guard live on HF | 1 | [removed] | 2025-09-23T15:10:57 | https://huggingface.co/collections/Qwen/qwen3guard-68d2729abbfae4716f3343a1 | Ok-Nefariousness5673 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1nok0f3 | false | null | t3_1nok0f3 | /r/LocalLLaMA/comments/1nok0f3/qwen3guard_live_on_hf/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'SybQlpd57ri5DOffonwxQ3RJbORPPReSb_vD77lSWek', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/SybQlpd57ri5DOffonwxQ3RJbORPPReSb_vD77lSWek.png?width=108&crop=smart&auto=webp&s=aa3dffed08d4aeaa86e97a89e6d4b73187400dc8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/SybQlpd57ri5DOffonwxQ3RJbORPPReSb_vD77lSWek.png?width=216&crop=smart&auto=webp&s=e58f4a4e3bdd9e3da632d3d074eaa6e10d0ab233', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/SybQlpd57ri5DOffonwxQ3RJbORPPReSb_vD77lSWek.png?width=320&crop=smart&auto=webp&s=748f848949bf7b5344f7a19eb473694eaeebdcbf', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/SybQlpd57ri5DOffonwxQ3RJbORPPReSb_vD77lSWek.png?width=640&crop=smart&auto=webp&s=aa8f4c7470cc8c0c033a57dba7ee804d9d9e1d36', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/SybQlpd57ri5DOffonwxQ3RJbORPPReSb_vD77lSWek.png?width=960&crop=smart&auto=webp&s=b05091a43ea4f799162f31c27541a5def5fe1594', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/SybQlpd57ri5DOffonwxQ3RJbORPPReSb_vD77lSWek.png?width=1080&crop=smart&auto=webp&s=87cd1f63cc41862f6679e315a45a4aa6fe475290', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/SybQlpd57ri5DOffonwxQ3RJbORPPReSb_vD77lSWek.png?auto=webp&s=4f619c921da5e0e70bba57a5ad3f7d94d32c6830', 'width': 1200}, 'variants': {}}]} | |
Qwen3 Guard-Stream 0.6B , 4B, 8B | 1 | [removed] | 2025-09-23T15:06:37 | https://www.reddit.com/r/LocalLLaMA/comments/1nojwa7/qwen3_guardstream_06b_4b_8b/ | touhidul002 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nojwa7 | false | null | t3_1nojwa7 | /r/LocalLLaMA/comments/1nojwa7/qwen3_guardstream_06b_4b_8b/ | false | false | self | 1 | null |
Scaling Agents via Continual Pre-training : AgentFounder-30B (Tongyi DeepResearch) | 18 | Most open-source “agents” today are just general LLMs with some post-training on tool-use demos. That creates a conflict: the model has to learn agent skills and align to expert behavior at the same time, which caps performance.
The paper *Scaling Agents via Continual Pre-training* (Alibaba, 2025) proposes **Agentic Continual Pre-training (CPT)** as a fix. Instead of skipping straight from pre-training → post-training, they add an intermediate stage where the model is continually pre-trained on agent-like behaviors. This produces an **agentic foundation model** before fine-tuning.
Two key ideas drive this:
* **First-order Action Synthesis (FAS):** Build (question → plan → reasoning/action) data without real API calls. Covers planning steps and reasoning chains cheaply at scale.
* **Higher-order Action Synthesis (HAS):** Expand existing trajectories into multiple decision branches at each step. This reuses discarded trajectories and forces the model to practice step-wise decision-making instead of just copying one “golden” path.
Training runs in **two stages**:
1. \~200B tokens of FAS + short HAS data, 32K context.
2. \~100B tokens of high-quality HAS data, 128K context (long-horizon reasoning).
The result is **AgentFounder-30B**, which outperforms all other open-source research agents and even beats some closed ones (e.g., >30% on HLE, 72.8% GAIA).
Takeaway: Agentic CPT shifts the burden. Post-training no longer has to teach both skills and alignment. Instead, the model enters fine-tuning already “thinking” like an agent.
Paper Link : [https://arxiv.org/pdf/2509.13310](https://arxiv.org/pdf/2509.13310)
Video explanation : [https://www.youtube.com/watch?v=csz2X2c4BWM&t=5s](https://www.youtube.com/watch?v=csz2X2c4BWM&t=5s) | 2025-09-23T14:54:01 | https://www.reddit.com/r/LocalLLaMA/comments/1nojjx7/scaling_agents_via_continual_pretraining/ | Technical-Love-8479 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nojjx7 | false | null | t3_1nojjx7 | /r/LocalLLaMA/comments/1nojjx7/scaling_agents_via_continual_pretraining/ | false | false | self | 18 | null |
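A toy sketch of the HAS idea: expanding one recorded trajectory into per-step decision samples with contrastive branches. All function and field names here are my own shorthand, not the paper's:

```python
# Toy sketch of Higher-order Action Synthesis: turn one trajectory into
# per-step "pick the next action" samples. Names are illustrative; the
# paper's actual pipeline is more involved.
def expand_trajectory(question, steps, propose_alternatives):
    """steps: list of (state, chosen_action);
    propose_alternatives(state, history) -> list of alternative actions."""
    samples, history = [], []
    for state, chosen in steps:
        candidates = [chosen] + propose_alternatives(state, history)
        samples.append({
            "question": question,
            "history": list(history),   # everything decided so far
            "candidates": candidates,   # the branch point to choose among
            "label": chosen,            # the expert/golden step
        })
        history.append(chosen)
    return samples
```

This is why HAS reuses "discarded" trajectories: every step becomes a supervised decision, not just the one golden path.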
LLM vs LLM with Websearch | 11 | Do you guys also feel that whenever an LLM does web search, its output is very bad? It takes low-quality information from the web, but when it answers by itself without web search, its response is high quality, with more depth and variety. | 2025-09-23T14:44:10 | https://www.reddit.com/r/LocalLLaMA/comments/1nojauv/llm_vs_llm_with_websearch/ | AdSoft9261 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nojauv | false | null | t3_1nojauv | /r/LocalLLaMA/comments/1nojauv/llm_vs_llm_with_websearch/ | false | false | self | 11 | null |
Run Qwen3-Next-80B on 8GB GPU at 1tok/2s throughput | 14 | 2025-09-23T14:39:57 | https://github.com/Mega4alik/ollm | Maxious | github.com | 1970-01-01T00:00:00 | 0 | {} | 1noj6xs | false | null | t3_1noj6xs | /r/LocalLLaMA/comments/1noj6xs/run_qwen3next80b_on_8gb_gpu_at_1tok2s_throughput/ | false | false | 14 | {'enabled': False, 'images': [{'id': 'loqYh-WCtEaxMSj7OVC1KJ5pM9gu3MpUO3u8a7ppcoY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/loqYh-WCtEaxMSj7OVC1KJ5pM9gu3MpUO3u8a7ppcoY.png?width=108&crop=smart&auto=webp&s=f344d48a6b30df385c6254bc80ad88f32e31e069', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/loqYh-WCtEaxMSj7OVC1KJ5pM9gu3MpUO3u8a7ppcoY.png?width=216&crop=smart&auto=webp&s=84be04f11238da3878ad3782ce64e889d869a164', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/loqYh-WCtEaxMSj7OVC1KJ5pM9gu3MpUO3u8a7ppcoY.png?width=320&crop=smart&auto=webp&s=b7c22a94f724e24296ac05db5db6bd760f115394', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/loqYh-WCtEaxMSj7OVC1KJ5pM9gu3MpUO3u8a7ppcoY.png?width=640&crop=smart&auto=webp&s=cab0230c5dc68a3b50a7ad3a367504dacead83b8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/loqYh-WCtEaxMSj7OVC1KJ5pM9gu3MpUO3u8a7ppcoY.png?width=960&crop=smart&auto=webp&s=5d0603b8acf85f959bc64c87ef89be20860b3766', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/loqYh-WCtEaxMSj7OVC1KJ5pM9gu3MpUO3u8a7ppcoY.png?width=1080&crop=smart&auto=webp&s=652a59b3e70ee045a2556bb28037f96ea1cc0779', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/loqYh-WCtEaxMSj7OVC1KJ5pM9gu3MpUO3u8a7ppcoY.png?auto=webp&s=664a2d566d85323d92ae2f6552fcaa47bfe8c21b', 'width': 1200}, 'variants': {}}]} | ||
PDF text extraction using VLMs | 11 | I have some PDFs which contain text chunks, including headers, subheaders, bodies, and miscellaneous text, and I need to extract them into a JSON schema. The difficult part is getting a model to semantically differentiate between the different parts of the defined schema (the schema is a little more complex than described above). Additionally, some chunks have images associated with them, and those need to be marked as such. I'm not getting any good results with local models and was wondering if any of you have done something similar and found success.
The biggest issue seems to be the semantics of what is what with respect to the schema. Maybe local models just aren't smart enough.
| 2025-09-23T14:34:27 | https://www.reddit.com/r/LocalLLaMA/comments/1noj229/pdf_text_extraction_using_vlms/ | lochloch | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1noj229 | false | null | t3_1noj229 | /r/LocalLLaMA/comments/1noj229/pdf_text_extraction_using_vlms/ | false | false | self | 11 | null |
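One pattern that sometimes helps with the semantic confusion is to constrain the decision space: classify one chunk at a time against an enum via structured output, so the model only makes the semantic call instead of inventing structure. A hedged sketch using the OpenAI Python client against a local OpenAI-compatible VLM endpoint (endpoint and model name are placeholders):

```python
# Hedged sketch: per-chunk classification with a constrained enum, assuming
# a local OpenAI-compatible server that supports JSON-schema output.
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")  # placeholder

chunk_schema = {
    "type": "object",
    "properties": {
        "kind": {"type": "string", "enum": ["header", "subheader", "body", "misc"]},
        "has_image": {"type": "boolean"},
    },
    "required": ["kind", "has_image"],
}

resp = client.chat.completions.create(
    model="local-vlm",  # placeholder model id
    messages=[{"role": "user", "content": "Classify this chunk: 'Figure 3: ...'"}],
    response_format={"type": "json_schema",
                     "json_schema": {"name": "chunk", "schema": chunk_schema}},
)
print(json.loads(resp.choices[0].message.content))
```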
Dual Modded 4090 48GBs on a consumer ASUS ProArt Z790 board | 83 | There are some curiosities and questions here about the modded 4090 48GB cards. For my local AI test environment, I need a setup with a larger VRAM pool to run some tests, so I got my hands on a dual-card rig with these. I've run some initial benchmarks and wanted to share the data.
The results were about what I expected, and overall I think these modded 4090 48GB cards are a worthwhile way to build a bigger VRAM pool.
# Test 1: Single Card GGUF Speed (GPUStack llama-box/llama.cpp)
Just a simple, raw generation speed test on a single card to see how they compare head-to-head.
* Model: Qwen-32B (GGUF, Q4\_K\_M)
* Backend: llama-box (llama-box in GPUStack)
* Test: Single short prompt request generation via GPUStack UI's compare feature.
Results:
* Modded 4090 48GB: 38.86 t/s
* Standard 4090 24GB (ASUS TUF): 39.45 t/s
**Observation:** The standard 24GB card was slightly faster. Not by much, but consistently.
# Test 2: Single Card vLLM Speed
The same test but with a smaller model on vLLM to see if the pattern held.
* Model: Qwen-8B (FP16)
* Backend: vLLM v0.10.2 in GPUStack (custom backend)
* Test: Single short request generation.
Results:
* Modded 4090 48GB: 55.87 t/s
* Standard 4090 24GB: 57.27 t/s
**Observation:** Same story. The 24GB card is again marginally faster in a simple, single-stream inference task. The extra VRAM doesn't translate to more speed for a single request, which is expected, and there might be a tiny performance penalty for the modded memory.
# Test 3: Multi-GPU Stress Test (2x 48GB vs 4x 24GB)
This is where I compared my dual 48GB rig against a cloud machine with four standard 4090s. Both setups have 96GB of total VRAM running the same large model under a heavy concurrent load.
* Model: Qwen-32B (FP16)
* Backend: vLLM v0.10.2 in GPUStack (custom backend)
* Tool: evalscope (100 concurrent users, 400 total requests)
* Setup A (Local): 2x Modded 4090 48GB (TP=2) on an ASUS ProArt Z790
* Setup B (Cloud): 4x Standard 4090 24GB (TP=4) on a server-grade board
Results (Cloud 4x24GB was significantly better):
| Metric | 2x 4090 48GB (Our Rig) | 4x 4090 24GB (Cloud) |
|---|---|---|
| Output Throughput (tok/s) | 1054.1 | 1262.95 |
| Avg. Latency (s) | 105.46 | 86.99 |
| Avg. TTFT (s) | 0.4179 | 0.3947 |
| Avg. Time Per Output Token (s) | 0.0844 | 0.0690 |
**Analysis:** The 4-card setup on the server was clearly superior across all metrics—almost 20% higher throughput and significantly lower latency. My initial guess was the motherboard's PCIe topology (PCIe 5.0 x16 PHB on our Z790 vs. a better link on the server, which is also PCIe).
To confirm this, I ran nccl-test to measure the effective inter-GPU bandwidth. The results were clear:
* **Local 2x48GB Rig:** Avg bus bandwidth was ~3.0 GB/s.
* **Cloud 4x24GB Rig:** Avg bus bandwidth was ~3.3 GB/s.
That ~10% higher bus bandwidth on the server board seems to be the key difference, allowing it to overcome the extra communication overhead of a larger tensor parallel group (TP=4 vs TP=2) and deliver much better performance. | 2025-09-23T14:15:42 | https://www.reddit.com/gallery/1noikw2 | Ok-Actuary-4527 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1noikw2 | false | null | t3_1noikw2 | /r/LocalLLaMA/comments/1noikw2/dual_modded_4090_48gbs_on_a_consumer_asus_proart/ | false | false | 83 | null |
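For anyone wanting to reproduce the bandwidth comparison without building nccl-tests, here is a rough Python analogue of all_reduce_perf's bus-bandwidth metric using torch.distributed (launch command and tensor size are illustrative):

```python
# Rough Python analogue of nccl-tests' all_reduce_perf busbw metric.
# Launch on the 2-GPU rig with: torchrun --nproc_per_node=2 probe.py
import time
import torch
import torch.distributed as dist

dist.init_process_group("nccl")
rank, world = dist.get_rank(), dist.get_world_size()
torch.cuda.set_device(rank)
x = torch.ones(64 * 1024 * 1024, device="cuda")   # 256 MB of fp32

for _ in range(5):                                # warmup
    dist.all_reduce(x)
torch.cuda.synchronize()

t0, iters = time.time(), 20
for _ in range(iters):
    dist.all_reduce(x)
torch.cuda.synchronize()
per_iter = (time.time() - t0) / iters

size_bytes = x.numel() * 4
# ring all-reduce moves ~2*(n-1)/n of the payload per rank
busbw = size_bytes * 2 * (world - 1) / world / per_iter / 1e9
if rank == 0:
    print(f"avg bus bandwidth: {busbw:.2f} GB/s")
```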